Why is the image size so different between PNG and XNB?

I have a PNG image of 34.4 ko that becomes an XNB of 16386 ko.
In addition, I have other PNG files of various sizes in the same folder, and they all become XNB files of 16386 ko.

What the hell is going on?

PNG is a compressed format.

You can choose the output format in the Pipeline tool; if you opt for “Color” it’s uncompressed bitmaps, AFAIK.
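That choice is just the TextureFormat processor parameter, stored as plain text in the .mgcb file, so you can also edit it there. Roughly like this (the asset name is a placeholder):

#begin sprites/yourSprite.png
/importer:TextureImporter
/processor:TextureProcessor
/processorParam:TextureFormat=Color
/build:sprites/yourSprite.png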

That is silly, isn’t it?

Ahem… “Please MonoGame, can you build my textures bigger than the originals to annoy the player with heavy files?”
Thx, xoxo, Devil.

So what’s the point?

The point is filesize vs. speed. When you load a texture, regardless of filetype (.jpg/.png/etc), it is decompressed to a bitmap with an alpha channel in memory (iirc). The content pipeline does this for you so that files are loaded as quickly as possible at runtime. If filesize is a big concern for you, you can still load from a file using Texture2D.FromStream(). The texture will still be converted, but this will happen at runtime vs. compile time with the content pipeline. Code will look something like this:

// Open the raw PNG file that was copied to the output folder.
using (FileStream fileStream = new FileStream("Content/sprites/yourSprite.png", FileMode.Open))
{
    // Decode the PNG into a full-color texture at runtime.
    Texture2D sprite = Texture2D.FromStream(graphicsDevice, fileStream);
}

This will allow you to keep your filesizes small, but in a big project with lots and lots of textures, it might affect performance.

PNG is an efficient file storage format. It is not an efficient in-memory representation of an image. The 16MB file you see is the full color BGRA version of your image. For a PNG to be 34KB, I’m guessing there is a lot of flat colour or empty space in that image. The zlib compression used in PNG can look at the entire image to find patterns it can use to compress the file as much as possible, and flat color or empty areas are a very simple pattern to compress down to a tiny file size.

GPUs cannot use zlib natively because it does not fit with how they work. They need to access a small part of the texture and have all the information they need in that small part. Full color works because no other information is necessary.

Texture compression such as DXT works by splitting the image into 4x4 blocks. For each block it determines a low color and a high color, stores these in 16-bit format, and then each pixel in the block is given one of four values: 0 for the low color, 1 for a color one third of the way between low and high, 2 for the color two thirds of the way between low and high, and 3 for the high color. Using this, the texture is compressed by a fixed ratio, usually 8:1 without an alpha channel (DXT1) or 4:1 with one (DXT5). This is a rough approximation of how the block compressors work; there are differences between compression schemes, but they all work on blocks.
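As a rough sanity check on those block numbers (plain C# arithmetic, not MonoGame API):

// One 4x4 block as uncompressed 32-bit BGRA: 16 pixels * 4 bytes each.
int uncompressedBlock = 16 * 4;                   // 64 bytes
// The same block in DXT1: two 16-bit endpoint colors + 2 bits per pixel.
int dxt1Block = (2 * 2) + (16 * 2) / 8;           // 8 bytes
Console.WriteLine(uncompressedBlock / dxt1Block); // prints 8, i.e. 8:1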

We default to full color textures because they are supported everywhere. Desktop platforms support DXT, mobile platforms support a range of texture compression schemes (DXT, ETC1, PVRTC, ATITC, etc.), and so on. There is no single texture compression scheme that works everywhere.

You could keep your PNG and load it using Texture2D.FromStream(). This will keep your file size down, but it will be decompressed to a full color BGRA format that will occupy 16MB of memory, at least.

Edit: The reason your PNGs are different sizes is that they contain different patterns, and the zlib compressed size is largely determined by how many and what kinds of patterns it can find. Different images present different patterns and therefore compress differently. GPU texture resources, on the other hand, always have the same size for given texture dimensions and format. A full color 2048x2048 texture will occupy 16MB of memory (+33% with mipmaps). The same texture compressed with DXT1 will occupy 2MB, because DXT1 gives a fixed 8:1 compression over 32-bit color.
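Putting numbers on that (again just arithmetic, assuming the 2048x2048 texture from the original post):

int width = 2048, height = 2048;
long fullColor = (long)width * height * 4;    // 32-bit BGRA: 16 MB
long dxt1      = (long)width * height / 2;    // DXT1 at 4 bits per pixel: 2 MB
long withMips  = fullColor + fullColor / 3;   // a full mip chain adds roughly 33%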

Thanks for the explanation. I’m guessing that also explains why old games packed images into textures as tightly as possible.

The images I was talking about contain a large amount of useless transparency. I will crop them and get rid of that heavy empty space.

I’m curious if it really is faster, given that 16386 / 34.4 ≈ 476, so the file is nearly 480× bigger. That means way more I/O from slow storage, but no decompression to do. It would be interesting to see numbers for the two.

Agreed with @Chuck_Esterbrook

Can someone measure that?

Speeds will depend greatly on CPU speed and HDD transfer speed, but it will likely be faster to load the PNG directly through Texture2D.FromStream() on a spinning-platter HDD. On an SSD the results would be closer, but probably still slightly in favour of the PNG. This is, however, just part of the efficiency equation. The rest comes from:

  • memory use will be higher (a PNG loaded through FromStream is always expanded to the full 16MB in memory).
  • GPUs have a texture cache similar to the cache that CPUs have. Very large textures will be more likely to cause cache misses depending on usage, slowing down rendering as the cache is refreshed.
  • FromStream() does not normally generate mipmaps. This means that if the texture is used on triangles that only occupy small parts of the screen, it will still be accessing the full texture and causing cache misses as listed above.
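If you want actual numbers, here is a rough way to time both paths yourself from inside LoadContent (the asset names are placeholders, and the first run will also pay for disk cache warm-up):

var sw = System.Diagnostics.Stopwatch.StartNew();
Texture2D fromXnb = Content.Load<Texture2D>("sprites/yourSprite");
sw.Stop();
Console.WriteLine($"XNB (Color): {sw.ElapsedMilliseconds} ms");

sw.Restart();
Texture2D fromPng;
using (var stream = System.IO.File.OpenRead("Content/sprites/yourSprite.png"))
    fromPng = Texture2D.FromStream(GraphicsDevice, stream);
sw.Stop();
Console.WriteLine($"PNG via FromStream: {sw.ElapsedMilliseconds} ms");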

OK, so, I don’t know how to implement the FromStream() approach in the system I’ve built, so I decided to use the MonoGame Pipeline instead.

To check that, I did a test on a PNG file of 2.4 mo that, when built with TextureFormat Color, produces an XNB file of 27 mo.
First, I resized the file to power-of-two dimensions so I could use DXT compression. The PNG became 2.5 mo and the XNB became 65 mo with DXT compression.
So does that mean compression makes files bigger?
What is going on here?

Because of mipmap generation? Or because no compression is set in the Pipeline tool?

Mipmap generation is set to False. Should I set it to True?
Also, I’ve seen that the file is bigger because I increased the width and height to get power-of-two dimensions.
So DXT doesn’t make the file bigger, but it doesn’t make it lighter either.

Don’t; if set to True, the XNB will get even bigger, as mipmap generation adds smaller copies of the texture to it.

(I’ve edited my previous post)

OK, I am really curious to know in which cases it is worthwhile to turn a small image into a heavy one. Really weird.

That’s a really good question :slight_smile:
DXT works on 4x4 pixel blocks, if I remember well ^^ I don’t know how it handles sizes that are not a power of 2; maybe it pads up to the closest power-of-two size?
Do you have a lot of noise/sparse pixels in your image?

Settings are:
Processor : Texture - Monogame
ColorKeyEnabled : True
GenerateMipmaps : False
MakeSquare : False
PremultiplyAlpha : True
ResizeToPowerOfTwo : False
TextureFormat : DxtCompressed

It is applied to a PNG file.

It is a spritesheet and I have a lot of transparency in the file; maybe that could be the problem?
But no noise.

I really hope I will find a solution, because all the images I use take only 32 mo but generate a build of almost 1 Go. It is awful.

What culture changes KB to Ko / MB to Mo? Just curious… :confounded:

Mo = Mega Octet (megabyte). French?

I will check my project, as it is about 500 MB built, but “only” 350 MB of sources + content.

@MrValentine Yes, French. Sorry for the wrong translation.
