Sampler precision and format?

Hi all,
I am attempting to make a particle system very similar to
http://nullprogram.com/blog/2014/06/29/

The idea is to store each particle's data as one pixel inside a texture. Then a bunch of vertices are passed to a vertex shader, each carrying an ID used to look up into the particle data texture. The vertex shader reads the particle's data and outputs the right position. I am trying to replicate this, but I've hit a snag, and I think it has something to do with the precision of a render target.
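The id-to-texel mapping at the heart of this is just row-major indexing. A quick sketch in Python, purely to illustrate the math (TEXTURE_SIZE of 128 is an assumption, matching the shader further down):

```python
TEXTURE_SIZE = 128  # assumed; matches the 128 used in the vertex shader below

def id_to_texel(particle_id):
    # each particle occupies one pixel; ids fill the texture row by row
    return (particle_id % TEXTURE_SIZE, particle_id // TEXTURE_SIZE)

def texel_to_id(x, y):
    # inverse mapping, useful as a sanity check
    return y * TEXTURE_SIZE + x

print(id_to_texel(129))  # second pixel of the second row: (1, 1)
```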

So, I have a render target for storing the particle data. I declare it like this (notice the SurfaceFormat allots 16 bits per channel):

_particleBuffer = new RenderTarget2D(Device, TEXTURE_SIZE, TEXTURE_SIZE, false, SurfaceFormat.Rgba64, DepthFormat.None);
            

Then, when I add a particle, I encode the position of the new particle into my _particleBuffer. I also have some code that behaves pretty much like SpriteBatch, but it's my own code and allows some extra data to be passed in the VertexDeclaration.
The Add function looks like this:

        public void Add(Vector2 position, Vector2 velocity)
        {
            // check capacity before claiming a slot, so a failed Add
            // doesn't leave ParticleCount pointing past the buffer
            if (ParticleCount >= MaxParticleCount)
            {
                throw new Exception("Out of particle memory");
            }
            var next = ParticleCount;
            ParticleCount += 1;

            var x = next % TEXTURE_SIZE;
            var y = next / TEXTURE_SIZE;

            _particleBuffer.SetData(0, new Rectangle(x, y, 1, 1), new ushort[] {
                Encode2(position.X), // the red value is encoded X
                Encode2(position.Y), // the green value is encoded Y
                0, // the blue value will eventually be velocity X, but for now, is just 0
                65535 // the alpha will eventually be velocity Y, but for now, is effectively 1.0
            }, 0, 1);

            // _segment is my SpriteBatch-alike. It draws a pixel
            // at position zero, with scale 10, zero offset, zero rotation,
            // a color of white, and assigns the X value of 'extra' to this particle's id.
            _segment.Box(_pixelSource, Vector2.Zero, Vector2.One * 10, Vector2.Zero, 0, Color.White, new Vector4(next, 0, 0, 0));
        }

        // input can be in [-max/2, max/2]
        private ushort Encode2(float input)
        {
            var max = 65536; // 2 ^ 16
            if (Math.Abs(input) > max / 2)
                throw new InvalidOperationException("Particle out of bounds");

            // scale to [-1, 1] range
            var scaled = (input * 2) / max;

            // scale to [0, 1] range
            var transformed = (scaled + 1) / 2;

            // multiply by 65535, not 65536: transformed can reach exactly 1.0,
            // and 1.0 * 65536 would wrap to 0 when cast to ushort
            return (ushort) (transformed * 65535);
        }

Then, to render the particles on the screen, I pass the projection and the _particleBuffer into a custom Effect, which reads below. It's set up for an MRT system, so it draws to a Diffuse output target and a Normal output target. The part I'm interested in is the vertex shader…

sampler DataSampler : register(s0) {
	Texture = (DataMap);
	magfilter = POINT;
	minfilter = POINT;
	mipfilter = POINT;
	AddressU = clamp;
	AddressV = clamp;
};

struct VS_INPUT {
	float4 Position : SV_POSITION0;
	float4 Color : COLOR0;
	float2 UV : TEXCOORD0;
	float4 Extra : POSITION1;
};
struct PS_INPUT {
	float4 Position : SV_POSITION;
	float4 Extra : POSITION1;
};
struct PS_OUTPUT {
	float4 Diffuse : SV_TARGET0;
	float4 Normal : SV_TARGET1;
};

uniform float4x4 Projection;

//------------------------ VERTEX SHADER ---------------------------------------
PS_INPUT VertexShaderFunction(VS_INPUT input)
{
	PS_INPUT output;

        // input.Extra.x is the particle id number.
        // the width and height of the datamap should be 128 (for now)
        // get the x and y coordinate of the particle in the dataMap (which is _particleBuffer)
	float x = input.Extra.x % 128;
	float y = floor(input.Extra.x / 128);

        // read the particle's data
	float4 pos = tex2Dlod(DataSampler, float4(x, y, 0, 0));

        // this is commented out because it definitely doesn't work, but it should...
	//// scale from [0 to 1] range to [-1 to 1]
	//pos = (pos * 2) - 1;
	//// scale from [-1 to 1] range to [-max/2, max/2];
	//pos = pos * 65536.0 / 2.0f;

        // this is just a test. adjust the position so that Y is always zero, and X scales up the [0,1] ratio to the [0,100] scale.
	pos.x *= 100;
	pos.y = 0;

	pos.x += input.Position.x;
	pos.y += input.Position.y;
	pos.z = input.Position.z;   // we know nothing about z, so take the original value
	pos.w = input.Position.w;  // we know nothing about w, so take the original value

	output.Position = mul(pos, Projection);
	output.Extra = float4(0,0,0,0);
	return output;
}

//------------------------ PIXEL SHADER ----------------------------------------
PS_OUTPUT PixelShaderFunction(PS_INPUT input)
{
	PS_OUTPUT output;

	output.Diffuse = float4(1, input.Extra.x, input.Extra.y, 1);
	output.Normal = float4(.5, .5, .5, 0);

	return output;
}

//-------------------------- TECHNIQUES ----------------------------------------
technique Tech
{
	pass OnlyPass
	{

		VertexShader = compile vs_4_0 VertexShaderFunction();
		PixelShader = compile ps_4_0 PixelShaderFunction();
	}
}
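One thing worth flagging in the shader above: tex2Dlod takes normalized texture coordinates in [0, 1], not pixel indices, so feeding it raw x/y pixel coordinates (with clamp addressing) will pin almost every lookup to the edge texel. The usual pixel-to-UV conversion, sketched in Python to show the math (128 is the assumed texture size), targets the texel center:

```python
TEXTURE_SIZE = 128  # assumed width/height of the data texture

def pixel_to_uv(pixel):
    # +0.5 aims at the texel center, so point sampling hits the intended pixel
    return (pixel + 0.5) / TEXTURE_SIZE

print(pixel_to_uv(0))    # 0.00390625
print(pixel_to_uv(127))  # 0.99609375
```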

So the issue here is that the result of the tex2Dlod function seems to be failing me. I am rendering 600 particles, with X positions ranging from 0 to 600. The Y values are all zero. I would expect to see a line of little boxes across the top of my screen, but instead I see a dot at zero and a dot about 100 pixels in. It seems like the resolution is lost. Even though I am using a render target with 16 bits per channel, I suspect the tex2Dlod function doesn't know about the format.

At long last, my question. Can I specify what the format of my sampler is? How does the system know that the sampler should be treated as 16 bits per channel instead of 8?

Let me know if I’ve left out important details.
Thanks for the thought.

Your output position in the vertex shader must be between -1 and 1 (clip space). You should check for that; I think you go a bit over.

Apart from that, you might want to use Texture2D.Load instead of SampleLevel (or tex2Dlod, which is the deprecated way of writing the same thing).

Load works directly on pixels and its inputs have to be ints; no sampler is needed. I think that is beneficial in your case.

So notice: Load takes an int3 as input, so you would use int3(32, 32, 0) to get pixel (32, 32) from the image.
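Picking up that suggestion, the int3 for Load comes straight from the particle id, with no normalization; the third component is the mip level. A small Python sketch of the index math (128 is the assumed texture width):

```python
TEXTURE_SIZE = 128  # assumed width of the particle data texture

def id_to_load_coords(particle_id):
    # (x, y, mip) -- Load's int3; mip 0 is the full-resolution surface
    return (particle_id % TEXTURE_SIZE, particle_id // TEXTURE_SIZE, 0)

# pixel (32, 32) of a 128-wide texture holds particle 32 * 128 + 32 = 4128
print(id_to_load_coords(4128))  # (32, 32, 0)
```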

Thanks for the help.
So, I think the Projection matrix is helping to put everything back into the [-1, 1] range. Until the matrix multiply, though, everything is in 'real' coordinates? I'm not sure what to call it, but I hope you know what I mean.
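That intuition can be checked with a minimal orthographic mapping, sketched in Python (an 800x600 viewport is an assumption): before the multiply, positions are in pixels; the projection squeezes them into clip-space NDC.

```python
def ortho_ndc(x, y, width=800, height=600):
    # what an orthographic off-center projection effectively does:
    # pixel coordinates -> normalized device coordinates in [-1, 1],
    # with y flipped because screen y grows downward
    return (x / width * 2 - 1, 1 - y / height * 2)

print(ortho_ndc(0, 0))      # top-left:  (-1.0, 1.0)
print(ortho_ndc(800, 600))  # bottom-right: (1.0, -1.0)
print(ortho_ndc(400, 300))  # center: (0.0, 0.0)
```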

I tried switching over to use .Load. It's challenging my understanding of how to pass textures to an Effect in MonoGame. I have this in my shader:

// near top of file
uniform Texture2D ParticleData;
// ... random code....
// inside vertex shader
	float x = input.Extra.x % 128;
	float y = floor(input.Extra.x / 128);
	int xi = floor(x);
	int yi = floor(y);
	//float4 pos = tex2Dlod(DataSampler, float4(x, y, 0, 0));
	float4 loadedPos = ParticleData.Load(int3(xi, yi, 0));

Then, to actually set ParticleData, I run this from C#:

// shader is my instance of Effect
shader.Parameters["ParticleData"].SetValue(_particleBuffer);

I'm getting zeros from the .Load function. I know from the docs that zeros are returned if I exceed the bounds, but I'm 99% sure that my xi and yi are in bounds.
Is the way I'm passing the texture correct, as far as anyone can tell?
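For what it's worth, a quick sanity check of the index math for 600 particles in a 128-wide texture (Python, just to verify the in-bounds claim):

```python
TEXTURE_SIZE = 128
PARTICLE_COUNT = 600

# the same x/y split the shader computes, for every live particle id
coords = [(i % TEXTURE_SIZE, i // TEXTURE_SIZE) for i in range(PARTICLE_COUNT)]
max_x = max(x for x, y in coords)
max_y = max(y for x, y in coords)
print(max_x, max_y)  # 127 4 -- well inside a 128x128 texture
```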

It's possible that's the problem. Use SurfaceFormat.Vector4.