NormalizedByte4 Texture2D gives different results from XNA

I have a custom processor, based on the 2D Normal Map sample from XNA, that creates normal map bitmaps for 2D shadows from greyscale heightmaps.
In the migrated MonoGame project, the output normal map bitmaps are completely different from the ones produced by XNA.

Via logging to the output (launching the pipeline tool with the debugger attached didn’t work), I’ve narrowed the problem down to these lines:

TextureContent depthMapTexture =
    context.BuildAndLoadAsset<TextureContent, TextureContent>(depthMapTextureReference, null);

/* Compute the normals here, write them back into the bitmap, then read from the
   bitmap and log the result; everything looks fine up to this point.
(...)
*/

PixelBitmapContent<Vector4> bitmap =
    (PixelBitmapContent<Vector4>)depthMapTexture.Faces[0][0];

// Converting the bitmap type; this seems to change the values in the bitmap completely:
depthMapTexture.ConvertBitmapType(typeof(PixelBitmapContent<NormalizedByte4>));

Maybe MG gets the format wrong? I don’t know a lot about this. @KonajuGames is probably the go-to person for stuff like this.

I added this debugging code to read the value of the converted pixel at (43, 10):

PixelBitmapContent<NormalizedByte4> convertedBitmap =
    (PixelBitmapContent<NormalizedByte4>)depthMapTexture.Faces[0][0];
NormalizedByte4 pixelValue = convertedBitmap.GetPixel(43, 10);

context.Logger.LogImportantMessage("Converted pixel: {0}", pixelValue);

The Vector4 read from the bitmap before converting was {X:0.1058512 Y:0.8997355 Z:-0.4234049 W:1}.

In MonoGame, the pixel value after converting: 7FCA720D
In XNA, the pixel value after converting: 7FCA720D

However, when opening the output bitmap in Paint.NET, I get these values:

12E700FF (XNA) and
73CA7F09 (MonoGame)

So we’ve got the pixel conversion correct, but something goes wrong after that? I don’t see any relation between 7FCA720D and the 12E700FF that you see with XNA. This is an area with zero documentation on what happens, so it’s a lot of guesswork.

Yes, conversion seems to be correct…
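
For what it’s worth, packing that Vector4 by hand gives the same hex value, which backs that up. A minimal sketch using the runtime NormalizedByte4 struct (the processor goes through ConvertBitmapType instead, but the packing rule should be the same):

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics.PackedVector;

class PackCheck
{
    static void Main()
    {
        // Pixel (43, 10) as read from the Vector4 bitmap before conversion.
        var v = new Vector4(0.1058512f, 0.8997355f, -0.4234049f, 1f);

        // NormalizedByte4 scales each component from the -1 … 1 range to a signed byte:
        //  0.1058512 * 127 ≈  13  -> 0x0D
        //  0.8997355 * 127 ≈ 114  -> 0x72
        // -0.4234049 * 127 ≈ -54  -> 0xCA (as an unsigned byte)
        //  1.0       * 127 =  127 -> 0x7F
        var packed = new NormalizedByte4(v);

        // Expected output: 7FCA720D, the value logged by both XNA and MonoGame.
        Console.WriteLine(packed.PackedValue.ToString("X8"));
    }
}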

This is the last code in the processor:

PixelBitmapContent<NormalizedByte4> packedSpritesNormalMaps = null;
BitmapContent packedSprites = null;

SpritePacker.PackSprites(sourceSprites, spriteSheet.SpriteNames, sourceSpritesDepthMaps,
                         spriteSheet.SpriteRectangles, context,
                         out packedSprites, out packedSpritesNormalMaps); //, false);

spriteSheet.NormalTexture.Mipmaps.Add(packedSpritesNormalMaps);

Here are the comparison results so far:

MonoGame:
Converted pixel, output from the processor: 7FCA720D
Read with GetData<Color> after Load: {R:115 G:202 B:127 A:9} = 73CA7F09
Paint.NET: 73CA7F09

XNA:
Converted pixel, output from the processor: 7FCA720D
Read with GetData<Color> after Load: {R:9 G:115 B:202 A:127} = 0973CA7F
Paint.NET: 12E700FF

So either GetData or the code that outputs the bitmap at the end of Process shifts the byte components one byte to the right.
I don’t know why the XNA GetData returns a different result from Paint.NET either… XNA gives alpha = 127, while Paint.NET shows alpha = 255 for all the filled pixels.
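
A quick way to see why the same four bytes print differently depending on how they are read is to look at the raw little-endian layout of the packed value (just an illustration, not output from the build):

using System;

class ByteLayout
{
    static void Main()
    {
        // The packed NormalizedByte4 value logged by the processor.
        uint packed = 0x7FCA720D;

        // On a little-endian machine this prints 0D-72-CA-7F: the low byte (X)
        // sits first in memory and the high byte (W, the alpha) sits last.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(packed)));

        // Color.ToString prints channel by channel (R, G, B, A), so the same four
        // bytes read back as a Color appear in a different order than the packed hex string.
    }
}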

The color values read by Paint.NET look like roughly double the values read by XNA via GetData:

Paint.NET: 12E700FF {R:18 G:231 B:0 A:255}
XNA GetData: 0973CA7F {R:9 G:115 B:202 A:127}

Perhaps premultiplied alpha plays a role in these differences.

As a test, I tried reading the bitmap .xnb file produced by the XNA processor into MonoGame.
I got different results, both when loading as Color and as NormalizedByte4:

Read XNA-compiled xnb in MonoGame:

Texture2D.GetData<Color>: { R:202 G:115 B:9 A:127 } 
Texture2D.GetData<NormalizedByte4>: 7F0973CA

Read XNA-compiled xnb in XNA:

Texture2D.GetData<Color>: {R:9 G:115 B:202 A:127} (0973CA7F)
Texture2D.GetData<NormalizedByte4>: 7FCA7309

For Color, it appears the R and B components are swapped in XNA compared to MonoGame.
For NormalizedByte4, the byte order has changed compared to Color, and two of the bytes are swapped.

It seems there are differences in both writing and reading the texture data. When loading the .xnb file compiled by XNA I get one set of differences. When compiling a new texture in MonoGame and loading that, I get another.

There is some byte-swapping (RGBA->BGRA) done for some formats when loading XNB files in Texture2DReader. That would be the area to look at.

Did the byte order change between XNA and MonoGame?

It was done for compatibility reasons, mainly for the OpenGL platforms if I remember correctly. Texture2DReader will be the place to look for that.

Right, I can see in Texture2DReader where the R and B bytes are switched for the NormalizedByte4 surface format.

case SurfaceFormat.NormalizedByte4:
    {
        int bytesPerPixel = surfaceFormat.GetSize();
        int pitch = levelWidth * bytesPerPixel;
        for (int y = 0; y < levelHeight; y++)
        {
            for (int x = 0; x < levelWidth; x++)
            {
                int color = BitConverter.ToInt32(levelData, y * pitch + x * bytesPerPixel);
                levelData[y * pitch + x * 4]     = (byte)(((color >> 16) & 0xff)); // R := W
                levelData[y * pitch + x * 4 + 1] = (byte)(((color >> 8) & 0xff));  // G := V
                levelData[y * pitch + x * 4 + 2] = (byte)(((color) & 0xff));       // B := U
                levelData[y * pitch + x * 4 + 3] = (byte)(((color >> 24) & 0xff)); // A := Q
            }
        }
    }
    break;

I guess I can compensate for that in my normal map processor.
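
Something along these lines, applied to the Vector4 bitmap before ConvertBitmapType, could cancel out the reader’s swizzle. A rough, untested sketch; PreSwapXAndZ is just a hypothetical helper for my processor, not MonoGame code:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;

static class NormalMapWorkaround
{
    // Swap X and Z on every pixel so that the X/Z byte swap applied by
    // MonoGame's Texture2DReader at load time restores the intended order.
    public static void PreSwapXAndZ(PixelBitmapContent<Vector4> bitmap)
    {
        for (int y = 0; y < bitmap.Height; y++)
        {
            for (int x = 0; x < bitmap.Width; x++)
            {
                Vector4 p = bitmap.GetPixel(x, y);
                bitmap.SetPixel(x, y, new Vector4(p.Z, p.Y, p.X, p.W));
            }
        }
    }
}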

But the alpha value also seemed to be affected, at least when I compiled the texture in MonoGame.

Perhaps they don’t need to be swapped at all. So is that code necessary?

As a test, remove that code and see how it goes.

The texture is passed directly to the shader. So then I would have to make changes there I guess… something to think about.

I can’t remove the code because I am not building MonoGame from source.

Besides the component reordering, there is another problem.
In XNA, all the texture component values are doubled when they are output, either via Texture2D.SaveAsPng() or from the shader.
In MonoGame, they are not.

Is premultiplied alpha involved, or some other conversion?

The only time a color value is multiplied by alpha is in the texture processor at content build time. With the value you list above, I’m not sure how it should handle doubling 202 (0xCA) when that channel holds 255 maximum.

Thinking about it, we are taking the -1 … 1 range of the NormalizedByte and mapping that to the signed byte range of -128 … 127. This is why the W value of 1 becomes an A value of 127 (0x7F). Because PNG expects unsigned bytes, XNA may be remapping the NormalizedByte range of -128 … 127 to 0 … 255 (by adding 128) for output to the PNG.
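
If that is what is happening, the remap is just +128 per byte, something like this (a sketch of the hypothesis only, not XNA’s actual SaveAsPng code):

using System;

class SignedToUnsignedRemap
{
    static void Main()
    {
        // One NormalizedByte4 pixel, with its bytes read as signed values in -128 … 127.
        sbyte[] signedPixel = { 13, 114, -54, 127 }; // 0x0D, 0x72, 0xCA, 0x7F

        // Hypothesis: shift the signed range up by 128 to get unsigned 0 … 255
        // bytes for the PNG, so 127 (0x7F) would become 255 (0xFF).
        foreach (sbyte s in signedPixel)
        {
            byte u = (byte)(s + 128);
            Console.WriteLine("{0,4} -> {1,3} (0x{1:X2})", s, u);
        }
    }
}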

This is a hypothesis that I can’t test right now. Are you able to supply the source image, the XNA-compiled XNB and an output PNG of what you expect from XNA? This will give us something to target as a known good value.

Project sent. :gift:

Whatever it is, the exact same transformation happens when using SaveAsPng and EffectParameter.SetValue with a Texture2D that has the NormalizedByte4 surface format.
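
For reference, the runtime check looks roughly like this (a reconstructed sketch, not the actual project code; "normalmap" is a placeholder asset name):

// Inside a Game subclass; needs the Microsoft.Xna.Framework, .Graphics and
// .Graphics.PackedVector namespaces.
protected override void LoadContent()
{
    Texture2D normalMap = Content.Load<Texture2D>("normalmap");

    // Log the first pixel both as Color and as the packed NormalizedByte4 value.
    var asColor = new Color[normalMap.Width * normalMap.Height];
    normalMap.GetData(asColor);
    Console.WriteLine(asColor[0]);

    var asPacked = new NormalizedByte4[normalMap.Width * normalMap.Height];
    normalMap.GetData(asPacked);
    Console.WriteLine(asPacked[0]);

    // Dump the texture to a PNG to compare against Paint.NET.
    using (var stream = System.IO.File.Create("normalmap_dump.png"))
        normalMap.SaveAsPng(stream, normalMap.Width, normalMap.Height);

    // The same texture is what gets passed to the shader:
    // effect.Parameters["NormalMap"].SetValue(normalMap);
}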