NormalizedByte4 Texture2D gives different results from XNA

Yes, conversion seems to be correct…

This is the last code in the processor:

PixelBitmapContent<NormalizedByte4> packedSpritesNormalMaps = null;
BitmapContent packedSprites = null;

SpritePacker.PackSprites(sourceSprites, spriteSheet.SpriteNames, sourceSpritesDepthMaps,
                         spriteSheet.SpriteRectangles, context,
                         out packedSprites, out packedSpritesNormalMaps);

spriteSheet.NormalTexture.Mipmaps.Add(packedSpritesNormalMaps);

Here are the comparison results so far:

MonoGame:

Converted pixel, output from processor: 7FCA720D

Read with GetData after Load: {R:115 G:202 B:127 A:9}
73CA7F09

Paint.NET: 73CA7F09

XNA:

Converted pixel, output from processor: 7FCA720D

Read with GetData after Load: {R:9 G:115 B:202 A:127}
0973CA7F

Paint.NET: 12E700FF

So either GetData or the code that outputs the bitmap at the end of Process shifts the byte components one byte to the right.
I don’t know why the XNA GetData returns a different result from Paint.NET either… XNA gives alpha = 127, while Paint.NET shows alpha = 255 for all the filled pixels.

The color values read by Paint.NET are roughly double those read by XNA via GetData:

Paint.NET: 12E700FF {R:18 G:231 B:00 A:255}
XNA GetData: 0973CA7F {R:9 G:115 B:202 A:127}

Perhaps premultiplied alpha plays a role in these differences.

As a test, I tried reading the bitmap .xnb file produced by the XNA processor into MonoGame.
I got different results, both when loading as Color and as NormalizedByte4:

Read XNA-compiled xnb in MonoGame:

Texture2D.GetData<Color>: { R:202 G:115 B:9 A:127 } 
Texture2D.GetData<NormalizedByte4>: 7F0973CA

Read XNA-compiled xnb in XNA:

Texture2D.GetData<Color>:  {R:9 G:115 B:202 A:127}  ( 0973CA7F)	
Texture2D.GetData<NormalizedByte4>: 7FCA7309

For Color, the R and B components appear to be swapped in XNA compared to MonoGame.
For NormalizedByte4, the byte order has changed compared to the Color reading, and two bytes are switched around.
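Side note on reading those hex strings: GetData<Color> prints the components individually, while a NormalizedByte4 is typically shown via its PackedValue, a little-endian uint with X in the low byte. Assuming that packing, the NormalizedByte4 hex in each run is the same memory bytes as the Color reading, just displayed in reverse order. A quick check:

```python
def packed_value(rgba):
    # Assumed NormalizedByte4.PackedValue layout: X in the low byte (little-endian)
    r, g, b, a = rgba
    return r | g << 8 | b << 16 | a << 24

# XNA reading of the XNA-built xnb: Color components vs NormalizedByte4 hex
print(f"{packed_value((0x09, 0x73, 0xCA, 0x7F)):08X}")  # 7FCA7309
# MonoGame reading of the same xnb (R and B swapped by Texture2DReader)
print(f"{packed_value((0xCA, 0x73, 0x09, 0x7F)):08X}")  # 7F0973CA
```

If that packing assumption holds, the apparent extra reordering between the Color and NormalizedByte4 readings is just a display artifact; the only real difference between XNA and MonoGame here is the R/B swap.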

It seems there are differences in both writing and reading the texture data. When loading the .xnb file compiled by XNA I get one set of differences. When compiling a new texture in MonoGame and loading that, I get another.

There is some byte-swapping (RGBA->BGRA) done for some formats when loading XNB files in Texture2DReader. That would be the area to look at.

Did the byte order change between XNA and MonoGame?

It was done for compatibility reasons, mainly for the OpenGL platforms if I remember correctly. Texture2DReader will be the place to look for that.

Right, I can see in Texture2DReader where the R and B bytes are switched for the NormalizedByte4 surface format.

case SurfaceFormat.NormalizedByte4:
    {
        int bytesPerPixel = surfaceFormat.GetSize();
        int pitch = levelWidth * bytesPerPixel;
        for (int y = 0; y < levelHeight; y++)
        {
            for (int x = 0; x < levelWidth; x++)
            {
                int color = BitConverter.ToInt32(levelData, y * pitch + x * bytesPerPixel);
                levelData[y * pitch + x * 4] = (byte)((color >> 16) & 0xff); // R := W
                levelData[y * pitch + x * 4 + 1] = (byte)((color >> 8) & 0xff); // G := V
                levelData[y * pitch + x * 4 + 2] = (byte)(color & 0xff); // B := U
                levelData[y * pitch + x * 4 + 3] = (byte)((color >> 24) & 0xff); // A := Q
            }
        }
    }
    break;

I guess I can compensate for that in my normal map processor.
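The reader loop above swaps bytes 0 and 2 of each four-byte texel, and that swap is its own inverse, so compensating in the processor should amount to applying the same swap before the data is written out. A small Python model of the round trip (not MonoGame code, just the byte shuffle):

```python
def reader_swizzle(texel):
    # Models the Texture2DReader loop above: swap bytes 0 and 2 of each texel
    b0, b1, b2, b3 = texel
    return [b2, b1, b0, b3]

def preswap(texel):
    # Processor-side compensation: apply the same swap before writing the xnb,
    # so the reader's swizzle restores the intended order (the swap is self-inverse)
    b0, b1, b2, b3 = texel
    return [b2, b1, b0, b3]

texel = [0x09, 0x73, 0xCA, 0x7F]
print(reader_swizzle(preswap(texel)) == texel)  # True
```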

But the alpha value also seemed to be affected, at least when I compiled the texture in MonoGame.

Perhaps they don’t need to be swapped at all. So is that code necessary?

As a test, remove that code and see how it goes.

The texture is passed directly to the shader. So then I would have to make changes there I guess… something to think about.

I can’t remove the code because I am not building MonoGame from source.

Besides the component reordering, there is another problem.
In XNA, all the texture component values come out doubled, whether they are output via Texture2D.SaveAsPng() or from the shader.
In MonoGame, they are not.

Is premultiplied alpha involved, or some other conversion?

The only time a color value is multiplied by alpha is in the texture processor at content build time. With the values you list above, I’m not sure how doubling 202 (0xCA) would work when that channel holds a maximum of 255.

Thinking about it, we are taking the -1 … 1 range of the NormalizedByte and mapping that to the signed byte range of -128 … 127. This is why the W value of 1 becomes an A value of 127 (0x7F). Because PNG expects unsigned bytes, XNA may be remapping the NormalizedByte range of -128 … 127 to 0 … 255 (by adding 128) for output to the PNG.

This is a hypothesis that I can’t test right now. Are you able to supply the source image, the XNA-compiled XNB and an output PNG of what you expect from XNA? This will give us something to target as a known good value.

Project sent. :gift:

Whatever it is, the exact same transformation happens when using SaveAsPng and EffectParameter.SetValue with a Texture2D that has the NormalizedByte4 surface format.

As an experiment, I tried using a Vector4 format instead of the NormalizedByte4 format which is causing me all this trouble.
However, I got an exception in SpritePacker Copy:

System.InvalidOperationException: Could not copy between Vector4 and Vector4

PixelBitmapContent<Vector4>.Copy(source, new Rectangle(0, 0, w, h),
                                 output, new Rectangle(x + 1, y + 1, w, h));

This is the framework code that throws the exception:

/// <summary>
/// Copies one bitmap into another.
/// The destination bitmap can be in any format and size. If the destination is larger or smaller, the source bitmap is scaled accordingly.
/// </summary>
/// <param name="sourceBitmap">BitmapContent being copied.</param>
/// <param name="sourceRegion">Region of sourceBitmap.</param>
/// <param name="destinationBitmap">BitmapContent being overwritten.</param>
/// <param name="destinationRegion">Region of bitmap to be overwritten.</param>
public static void Copy(BitmapContent sourceBitmap, Rectangle sourceRegion, BitmapContent destinationBitmap, Rectangle destinationRegion)
{
    ValidateCopyArguments(sourceBitmap, sourceRegion, destinationBitmap, destinationRegion);

    SurfaceFormat sourceFormat;
    if (!sourceBitmap.TryGetFormat(out sourceFormat))
        throw new InvalidOperationException("Could not retrieve surface format of source bitmap");
    SurfaceFormat destinationFormat;
    if (!destinationBitmap.TryGetFormat(out destinationFormat))
        throw new InvalidOperationException("Could not retrieve surface format of destination bitmap");

    // If the formats are the same and the regions are the full bounds of the bitmaps and they are the same size, do a simpler copy
    if (sourceFormat == destinationFormat && sourceRegion == destinationRegion
        && sourceRegion == new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height)
        && destinationRegion == new Rectangle(0, 0, destinationBitmap.Width, destinationBitmap.Height))
    {
        destinationBitmap.SetPixelData(sourceBitmap.GetPixelData());
        return;
    }

    // The basic process is
    // 1. Copy from source bitmap region to a new PixelBitmapContent<Vector4> using sourceBitmap.TryCopyTo()
    // 2. If source and destination regions are a different size, resize Vector4 version
    // 3. Copy from Vector4 to destination region using destinationBitmap.TryCopyFrom()

    // Copy from the source to the intermediate Vector4 format
    var intermediate = new PixelBitmapContent<Vector4>(sourceRegion.Width, sourceRegion.Height);
    var intermediateRegion = new Rectangle(0, 0, intermediate.Width, intermediate.Height);
    if (sourceBitmap.TryCopyTo(intermediate, sourceRegion, intermediateRegion))
    {
        // Resize the intermediate if required
        if (intermediate.Width != destinationRegion.Width || intermediate.Height != destinationRegion.Height)
            intermediate = intermediate.Resize(destinationRegion.Width, destinationRegion.Height) as PixelBitmapContent<Vector4>;
        // Copy from the intermediate to the destination
        if (destinationBitmap.TryCopyFrom(intermediate, new Rectangle(0, 0, intermediate.Width, intermediate.Height), destinationRegion))
            return;
    }

    // If we got here, one of the above steps didn't work
    throw new InvalidOperationException("Could not copy between " + sourceFormat + " and " + destinationFormat);
}
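One thing the code above makes clear: Rectangle equality compares X and Y as well as size, so copying a full sprite into an offset region of the sheet can never take the SetPixelData fast path. It must go through TryCopyTo/TryCopyFrom, and the exception means one of those returned false for Vector4. A toy model of the fast-path test (rectangle tuples are (x, y, w, h); the sizes are made up):

```python
# Rough model (an assumption, not MonoGame code) of the fast-path test
# in BitmapContent.Copy above.
def takes_fast_path(src_rect, dst_rect, src_size, dst_size, same_format=True):
    full_src = (0, 0) + src_size
    full_dst = (0, 0) + dst_size
    # Rectangle equality includes position, so any offset destination fails
    return same_format and src_rect == dst_rect == full_src and dst_rect == full_dst

# The SpritePacker call copies a full sprite into an offset sheet region:
print(takes_fast_path((0, 0, 32, 32), (11, 11, 32, 32), (32, 32), (256, 256)))  # False
```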

XNA does not throw this exception, so I will create a separate post for it.

I managed to get the shader to work. I built the texture in XNA in the Vector4 format instead of the NormalizedByte4 format, and everything now renders correctly.

Now my only problem is that MonoGame has an issue that prevents the SpritePacker Copy from working with Vector4 bitmaps. I created a separate post for that:

For now, my workaround will be to compile the spritesheet in my old XNA project and transfer the .xnb file to MonoGame.

I’ll have a go at these issues this weekend.

With the Vector4 issue, are you copying a region to another region, or a full bitmap to a region on the destination?

It’s from the Sprite Sheet sample in XNA; I believe it’s copying regions.