Packing Values in Texture2D

Hmm … I basically want to use the GPU to do some processing of arbitrary data, ideally packing all the needed data into texture data. I would like something like R32_UINT so I can pack the uint myself: I need 9 bits for flags and the remaining 23 bits for a float.

Any idea how I could accomplish that? There doesn't seem to be a proper surface format … there is no R32_UINT, and I am not sure whether an R32_Float would behave the same or whether I would get casting issues.

HLSL has conversion functions in Shader Model 4 and up, so you should be able to use a float surface and treat it as if it were an int surface:
asuint, asfloat
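For example, something like this (a minimal sketch; the texture name and the low-9/high-23 bit layout are just assumptions): reading via Load avoids filtering, which would otherwise blend the raw bit patterns. One caveat is that some hardware may flush denormals or canonicalize NaNs on float surfaces, so fully arbitrary bit patterns are safer in a true uint format.

```hlsl
// Minimal sketch (SM4+): treat an R32_Float texel as raw bits.
Texture2D<float> Data;

float4 PS(float4 pos : SV_Position) : SV_Target
{
    uint bits  = asuint(Data.Load(int3(pos.xy, 0))); // reinterpret, no conversion
    uint flags = bits & 0x1FF;             // low 9 bits: the flags
    float f    = asfloat(bits & ~0x1FFu);  // top 23 bits: a truncated IEEE float
    return float4(f, flags & 1, 0, 1);
}
```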

Thanks for your reply, that's good to know. For now I have switched to RG32, accepting 16-bit precision for the float, and may use the other channel for packing different values.

There is actually a PackedVector for that … and it seems those pack into a uint anyway.

My main concern was, assuming I stuck with the 23-bit float: what would I work with in the shader when sampling the texture? A float1/2/3/4? How would HLSL do the channel assignment in that case?

That's one reason I went with RG32 for now: there it's very clear that the float component will be in the red channel and everything else in the green channel.

I just wondered how one would handle an unusual packed vector, like a 23-bit float … how would the shader know that the red channel is 23 bits wide? That brought me to single-channel SurfaceFormats (unpacking myself in the shader), and the only one available is R32_Float. But that's not a PackedVector (uint), it's a float, so I wondered whether I could still feed a uint into the texture without confusing HLSL.
In reality it's just a 32-bit number, but I don't know whether the shader does anything special for float vs. uint SurfaceFormats.
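As far as I understand it, the shader never sees the bit layout anyway, only the sampled channels, roughly like this (a sketch with made-up names; I believe missing channels default to 0 for g/b and 1 for a):

```hlsl
// Sketch: a single-channel surface still samples as float4.
// The 32-bit value lands in .r; the format's internal bit layout is
// invisible to HLSL, so a custom 23-bit float must be unpacked by hand.
Texture2D Tex;
SamplerState PointSampler; // point filtering, so the bits survive intact

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    float4 v = Tex.Sample(PointSampler, uv); // R32_Float => v = (data, 0, 0, 1)
    return float4(v.r, 0, 0, 1);
}
```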

There's a MonoGame fork by cpt-max where you can use a compute shader: https://github.com/cpt-max/MonoGame/tree/compute_shader, with some samples here: https://github.com/cpt-max/MonoGame-Shader-Samples.

It will most likely make it into MonoGame at some point in the future via this PR.

That would be very cool, but for my current needs I am fine with the pixel shader, and it works fine with the float component.

But: it turned out RG32 isn't actually handled as UINT; it's really R16G16_UNORM … which means the components are treated as normalized floats, making it basically impossible to simply treat a component as a bitfield. I'm still searching for a way to get a uint surface format, but it looks like there simply is no non-float SurfaceFormat in MonoGame?
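In principle the normalization could be undone, since UNORM16 comes back as value / 65535.0 and a float32 has enough precision to round-trip that, something like this sketch (point sampling assumed, names made up), but it's exactly the kind of fragile detour I'd rather avoid:

```hlsl
// Sketch: recover the stored 16-bit integers from a R16G16_UNORM sample
// by undoing the value/65535.0 normalization. Requires point filtering,
// otherwise blended texels destroy the bit patterns.
Texture2D Data;
SamplerState PointSampler;

uint2 LoadBits(float2 uv)
{
    float2 s = Data.SampleLevel(PointSampler, uv, 0).rg;
    return (uint2)round(s * 65535.0);
}
```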

OK, so since there are only UNORM types in MonoGame, I had to work on the sources. I basically just added a new value (R16G16_UINT) to the SurfaceFormat enum and adapted some code to report the correct size.

Voilà, data is now coming into the pixel shader in proper UINT format and I can read single bits. I use the red channel for a 16-bit float now (but could easily do the initially wanted 23-bit float by adding some R32_UINT format) and have the other 16 bits as a bitfield.
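On the shader side it now looks roughly like this (a sketch; SM5 on DirectX, texture name made up): with a UINT format, Load returns the raw integers untouched, f16tof32 decodes the red channel, and plain masking reads the green bitfield.

```hlsl
// Sketch (SM5): read the custom R16G16_UINT surface without any conversion.
Texture2D<uint2> Data;

float4 PS(float4 pos : SV_Position) : SV_Target
{
    uint2 texel = Data.Load(int3(pos.xy, 0));
    float value = f16tof32(texel.r);   // red channel holds a 16-bit float
    uint  flags = texel.g;             // green channel is a 16-bit bitfield
    bool  bit0  = (flags & 0x1) != 0;  // reading a single bit
    return float4(value, bit0 ? 1.0 : 0.0, 0, 1);
}
```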

Of course the R16G16_UINT won't display on screen with SpriteBatch, so I may write another shader to plot it for debugging purposes.
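Something simple like this sketch should do, just normalizing the raw integers into displayable colors:

```hlsl
// Sketch: map the raw R16G16_UINT channels to visible colors for debugging.
Texture2D<uint2> Data;

float4 DebugPS(float4 pos : SV_Position) : SV_Target
{
    uint2 t = Data.Load(int3(pos.xy, 0));
    return float4(t.r / 65535.0, t.g / 65535.0, 0, 1);
}
```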

It should basically be OpenGL-compatible, but I didn't try it. I'm not too happy, though, about building directly from the Git sources.
