I have a custom processor that uses the 2D Normal Map sample from XNA to create normal map bitmaps for 2D shadows from greyscale heightmaps.
In the migrated MonoGame project, the output normal bitmaps are completely different from the ones output by XNA.
Via logging to the output (launching the pipeline tool with the debugger attached didn't work), I have narrowed the problem down to these lines:
TextureContent depthMapTexture = context.BuildAndLoadAsset<TextureContent, TextureContent>(
    depthMapTextureReference, null);

/* compute normals here, put them back in the bitmap, read from the bitmap
   and output the result; everything seems OK up to this point
   (...) */

PixelBitmapContent<Vector4> bitmap = (PixelBitmapContent<Vector4>)depthMapTexture.Faces[0][0];

// convert the bitmap; this seems to change the values in the bitmap completely:
depthMapTexture.ConvertBitmapType(typeof(PixelBitmapContent<NormalizedByte4>));
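For reference, the logging was along these lines (a sketch, not the exact code; the pixel coordinates and message text are placeholders):

// Dump one pixel before and after the conversion via the pipeline logger.
Vector4 before = bitmap.GetPixel(0, 0);
context.Logger.LogMessage("before convert: {0}", before);

depthMapTexture.ConvertBitmapType(typeof(PixelBitmapContent<NormalizedByte4>));

var converted = (PixelBitmapContent<NormalizedByte4>)depthMapTexture.Faces[0][0];
context.Logger.LogMessage("after convert: 0x{0:X8}", converted.GetPixel(0, 0).PackedValue);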
So we've got the pixel conversion correct, but something goes wrong after that? I don't see any relation between the 7FCA720D and 12E700FF values you are seeing with XNA. This is an area with zero documentation on what happens, so it's a lot of guesswork.
So either GetData or the code that outputs the bitmap at the end of Process shifts the byte components one byte to the right.
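To make that concrete (just illustrating the hypothesis, not code from either framework):

// Hypothetical shift: rotating the packed value right by one byte
// would turn 0x7FCA720D into 0x0D7FCA72.
uint packed = 0x7FCA720D;
uint shifted = (packed >> 8) | (packed << 24); // 0x0D7FCA72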
I don't know why XNA's GetData returns a different result from Paint.NET either… XNA gives alpha = 127, while Paint.NET shows alpha = 255 for all the filled pixels.
As a test, I tried reading the bitmap .xnb file produced by the XNA processor into MonoGame.
I got different results, both when loading as Color and as NormalizedByte4:
For Color, it appears the R and B components are swapped in XNA compared to MonoGame.
For NormalizedByte4, the component order has changed compared to Color, and two bytes are swapped.
It seems there are differences in both writing and reading the texture data. When loading the .xnb file compiled by XNA I get one set of differences. When compiling a new texture in MonoGame and loading that, I get another.
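For reference, the comparison was along these lines (a sketch; "DepthMap" is a placeholder asset name and only the first pixel is dumped):

// Load the XNA-built XNB in MonoGame and dump the first pixel both as
// Color and as NormalizedByte4 (both are 4 bytes per pixel, so GetData
// accepts either).
Texture2D tex = Content.Load<Texture2D>("DepthMap");

var asColor = new Color[tex.Width * tex.Height];
tex.GetData(asColor);
System.Diagnostics.Debug.WriteLine(string.Format(
    "Color: R={0:X2} G={1:X2} B={2:X2} A={3:X2}",
    asColor[0].R, asColor[0].G, asColor[0].B, asColor[0].A));

var asBytes = new NormalizedByte4[tex.Width * tex.Height];
tex.GetData(asBytes);
System.Diagnostics.Debug.WriteLine(string.Format(
    "NormalizedByte4: 0x{0:X8}", asBytes[0].PackedValue));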
It was done mainly for compatibility reasons, for the OpenGL platforms if I remember correctly. Texture2DReader is the place to look for that.
Besides the component reordering, there is another problem.
In XNA, all the texture component values are doubled when they are output, whether via Texture2D.SaveAsPng() or from the shader.
In MonoGame, they are not.
Is premultiplied alpha involved, or some other conversion?
The only time a color value is multiplied by alpha is in the texture processor at content build time. With the values you list above, I'm not sure how it would handle doubling 202 (0xCA) when that channel holds a maximum of 255.
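If you want to rule out the premultiply step, the nested build can pass processor parameters explicitly; something like this (a sketch, assuming the built-in TextureProcessor is what builds the depth map):

// Disable premultiplied alpha for the nested texture build.
var processorParameters = new OpaqueDataDictionary();
processorParameters.Add("PremultiplyAlpha", false);
TextureContent depthMapTexture = context.BuildAndLoadAsset<TextureContent, TextureContent>(
    depthMapTextureReference, "TextureProcessor", processorParameters, null);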
Thinking about it, we are taking the -1 … 1 range of the NormalizedByte and mapping that to the signed byte range of -128 … 127. This is why the W value of 1 becomes an A value of 127 (0x7F). Because PNG expects unsigned bytes, XNA may be remapping the NormalizedByte range of -128 … 127 to 0 … 255 (by adding 128) for output to the PNG.
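In code, the suspected remap would look something like this (a sketch of the hypothesis, not actual XNA source):

// NormalizedByte4 stores each [-1, 1] component as a signed byte.
static byte ToSignedStorage(float v)
{
    // -1 .. 1  ->  -127 .. 127, kept in two's complement
    return (byte)(sbyte)Math.Round(MathHelper.Clamp(v, -1f, 1f) * 127f);
}

// Suspected remap when writing the PNG: shift the signed range up.
static byte ToPngByte(byte stored)
{
    // -128 .. 127  ->  0 .. 255
    return (byte)((sbyte)stored + 128);
}

// Example: W = 1.0 is stored as 0x7F (127); 127 + 128 = 255 (0xFF),
// which matches the alpha Paint.NET reports for the filled pixels.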
This is a hypothesis that I can’t test right now. Are you able to supply the source image, the XNA-compiled XNB and an output PNG of what you expect from XNA? This will give us something to target as a known good value.
Whatever it is, the exact same transformation happens when using SaveAsPng and EffectParameter.SetValue with a Texture2D that has the NormalizedByte4 surface format.