Fellow devs, please help with a strange Texture2D memory usage issue

Hi Folks, after a couple of days knocking my head against the wall, I decided to ask for your help.

The issue I am having is strange memory usage: whenever I create a Texture2D, the process memory spikes to 7 GB and stays at 6 GB. Check this out:

The problem happens both when using Content.Load and when trying Texture2D.FromFile. For example, the following test code always triggers the memory usage problem:
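
(A sketch of the kind of loop described, not the exact screenshot; the tiles dictionary name is illustrative.)

```csharp
// Inside LoadContent(); requires using System.IO and Microsoft.Xna.Framework.Graphics.
foreach (var entry in map_descriptor.Tile)
{
    var coordinate = entry.Key;   // map coordinate
    var imagePath = entry.Value;  // path to the tile image

    // Through the content pipeline (asset name = file name without extension)...
    var texture = Content.Load<Texture2D>(Path.GetFileNameWithoutExtension(imagePath));

    // ...or straight from the file; both showed the same memory spike:
    // var texture = Texture2D.FromFile(GraphicsDevice, imagePath);

    tiles[coordinate] = texture;  // illustrative storage dictionary
}
```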


*For code readability: map_descriptor.Tile is a Dictionary that maps coordinates to a path (image file), which is why I use Path.GetFileNameWithoutExtension when calling the Content Manager.

Please note I was able to use a few images (for my UI) without any problem. So, to be more specific, the issue happens when loading the images I uploaded to this Google Drive folder. I am creating those images by rendering an SVG with the Magick.NET library, so maybe there is a specific format/encoding that results in such a bug?
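
For context, the rendering step looks roughly like this (a sketch only; file names and settings are placeholders, the real code may differ):

```csharp
// Rendering an SVG tile to a bitmap with Magick.NET (requires: using ImageMagick).
using (var svg = new MagickImage("tile_0001.svg"))  // placeholder file name
{
    svg.Format = MagickFormat.Png;                   // or MagickFormat.Bmp / Jpeg
    svg.Write("tile_0001.png");
}
```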

A few extra details:

  • I already experimented with rendering those tile images in different formats, such as jpg and png, but the problem persisted.
  • When adding the tile images to the Content Manager, the resulting XNB files are 1,025 KB in size, which made me believe the images were fine. But the issue still occurs.
  • My map currently has 650 tiles, so I was expecting the process memory to be around 700 MB after I loaded the map… not almost 7 GB O.o
  • As maps are created by an editor and support modding, I was planning not to use the Content Manager for map-related assets.

And if you are still reading my long post, thanks a lot!!!

Are you by any chance loading the same texture over and over again? Something, something: loading a model, applying texture 1, loading a model, applying texture 2, loading a model, applying texture 1…

If so, this could be your culprit… avoid texture swapping when loading… load all models of a particular texture at the same time…

If not… it would definitely be loading related, or otherwise the BMP size… all textures are loaded into memory as BMPs or something anyway…

Try exporting those textures as BMP images and see how big they are…

Also, something that comes to mind, … umm… mipmapping?

Code On!

EDIT

Also, Welcome to the Community!

Happy Coding!

Are you loading a brand new Texture for each Coordinate? I would advise against that. Load one instance of each tile you need and reuse the Texture2D object for other Coordinates if you need another Coordinate to be assigned that same graphic.
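
Something like this, for example (a minimal sketch; the cache and method names are made up):

```csharp
// Keep one Texture2D per image and hand the same instance to every
// coordinate that uses that graphic.
var textureCache = new Dictionary<string, Texture2D>();

Texture2D GetTileTexture(string assetName)
{
    if (!textureCache.TryGetValue(assetName, out var texture))
    {
        texture = Content.Load<Texture2D>(assetName);
        textureCache[assetName] = texture;
    }
    return texture;
}
```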

Thanks for the fast reply @MrValentine and @Jesuszilla!
And in case you celebrate it, a Happy Thanksgiving! :turkey:

As for the textures, they are unique and each represents a single tile (region) in the map, so they are not shared or re-used, and each is rendered only in a single place (with a SpriteBatch at its corresponding coordinates).

But deep-diving into your reusability comment @Jesuszilla: I am currently detecting water-only tiles and ignoring those, as GraphicsDevice.Clear gets the job done :slight_smile:

Also, the sample code I shared is in the LoadContent() method, so it is running a single time:

What surprises me is that the full 7 GB gets allocated in a single call. Consider the image above: if I place a breakpoint at line 59, all is good. As soon as I step to line 60, the process spikes and now has 7 GB allocated!

As for the encoding/image type, I exported all tiles as BMPs (available in this GDrive folder) and the size (as shown in the image below) is the expected 1 MB. But unfortunately, using BMPs did not solve the problem.
[image: the exported BMP tile files at the expected ~1 MB size]

Finally, as for the mipmaps, I could not find any customization available in the FromStream or FromFile Texture2D helper methods. But, taking a look at the internal MonoGame source code (and I hope I am not misreading it), it seems the mipmap level count is set to 1.
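
A quick way to double-check at runtime (a sketch; tilePath is a placeholder) is Texture2D.LevelCount, which reports how many mip levels a texture has:

```csharp
using (var stream = File.OpenRead(tilePath))
{
    var texture = Texture2D.FromStream(GraphicsDevice, stream);
    Console.WriteLine($"Mip levels: {texture.LevelCount}");  // expected: 1
}
```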

I know I am getting paranoid, but tomorrow I will try to create a clean project and load just a single one of those images for validation O.o

Let us know how it goes, but also, what are the texture's dimensions?

One thing to keep in mind, too: don't expect any compressed formats (.JPG, .PNG, .BMP) to be representative of what they will take up in actual memory. Raw, decompressed image data (which is what textures store) is much larger.
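
To put numbers on that, something like this (illustrative; the path is a placeholder) compares the size on disk with the raw 32-bit RGBA footprint of the same dimensions:

```csharp
var fileInfo = new FileInfo("Content/tile_0001.png");  // placeholder path
using (var stream = File.OpenRead(fileInfo.FullName))
{
    var texture = Texture2D.FromStream(GraphicsDevice, stream);
    long rawBytes = (long)texture.Width * texture.Height * 4;  // assuming 32-bit RGBA
    Console.WriteLine($"On disk: {fileInfo.Length:N0} bytes, raw: {rawBytes:N0} bytes");
}
```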


“This VRAM compressed texture takes a lot of space when imported” · Issue #24501 · godotengine/godot (github.com) got me thinking: what format are you using?

Also, can you test this on a release build?

Hi again folks, and thanks for the assistance so far @MrValentine and @Jesuszilla!

So, I created a new project using the VS Extension, exactly as instructed by the official Getting Started docs. I used the Content Manager to import a map tile texture (in the BMP format, same as the ones previously shared in this thread) and the process memory increased by ~3.62 MB. I also loaded the exact same texture with the FromFile method and the memory increased by ~6.51 MB. For reference, the code snippet follows:
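
(A rough reconstruction of the comparison, since the actual snippet was a screenshot; asset and file names are illustrative.)

```csharp
protected override void LoadContent()
{
    // 1) Through the content pipeline: process memory grew by ~3.62 MB.
    var fromPipeline = Content.Load<Texture2D>("tile_0001");

    // 2) Straight from the BMP on disk: process memory grew by ~6.51 MB.
    var fromFile = Texture2D.FromFile(GraphicsDevice, "Content/tile_0001.bmp");
}
```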

Now, right off the bat we can see the 6 GB issue was a problem with my original VS project. I am not sure how, but it seems I created an epic bogus state in that project. But let's table that for now, as I have no idea how it was even possible and I will need to explore more :confused:

Nevertheless, considering just the new project, it is clear there are caveats with using the FromFile method. So, surprised by this outcome, I coded my own texture loading procedure to make sure no mipmaps were created and no alpha channel was used (as I have no transparency). And I finally got the memory increase down to just ~2 MB. This is my code:
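
(A sketch of the kind of loader described, since the original code was a screenshot; it assumes System.Drawing for decoding the BMP and the usual 5-6-5 bit layout for SurfaceFormat.Bgr565.)

```csharp
// Requires: using System.Drawing; using Microsoft.Xna.Framework.Graphics;
Texture2D LoadTileBgr565(GraphicsDevice device, string path)
{
    using (var bmp = new Bitmap(path))
    {
        var pixels = new ushort[bmp.Width * bmp.Height];
        for (int y = 0; y < bmp.Height; y++)
        {
            for (int x = 0; x < bmp.Width; x++)
            {
                var c = bmp.GetPixel(x, y);  // slow but fine for a one-time load
                // Pack 8-bit channels into 5 (R), 6 (G), 5 (B) bits.
                pixels[y * bmp.Width + x] =
                    (ushort)(((c.R >> 3) << 11) | ((c.G >> 2) << 5) | (c.B >> 3));
            }
        }

        // mipMap: false => a single mip level; Bgr565 => 2 bytes per pixel.
        var texture = new Texture2D(device, bmp.Width, bmp.Height,
                                    false, SurfaceFormat.Bgr565);
        texture.SetData(pixels);
        return texture;
    }
}
```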

I may be losing my mind :rofl:, but I was expecting a 512 x 512 texture in a 16-bit format to take 4,194,304 bits. Divided by 8 (for bytes) and then by 1024 (for KB), that gives 512 KB, meaning a texture representing a tile of my map should consume just 0.5 MB.

So, now that I have “manually” created the texture with the exact desired byte count (524,288), I am wondering where the space discrepancy comes from.

I haven’t dived into the source code of DirectX or MonoGame, but I believe everything gets converted to 32-bit RGBA in memory regardless of the format. The SurfaceFormat is just telling the framework what format the image you’re giving it is. If I did the math correctly, that places it closer to ~1 MB per texture. There’s also more data associated with a Texture2D than just the image data itself, so that might all amount to 1 MB on its own.


Just something for future readers:

gpu - How to calculate how much video memory or memory cost by a texture - Stack Overflow


No, textures are kept in memory according to their surface format; that's incredibly important for optimization (as well as for flexibility: both single-channel and high-bit-depth surfaces are used all across modern rendering pipelines). Texture size is bit depth (as set by the surface format) * resolution. The only exceptions are DXT-compressed textures, which are kept in memory in their compressed state.
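
As a rough rule of thumb, per mip level (a sketch; uncompressed formats are width × height × bytes per pixel, DXT formats use 4×4 blocks):

```csharp
static long EstimateBytes(int width, int height, SurfaceFormat format)
{
    int blocksWide = Math.Max(1, (width + 3) / 4);
    int blocksHigh = Math.Max(1, (height + 3) / 4);

    switch (format)
    {
        case SurfaceFormat.Bgr565:
        case SurfaceFormat.Bgra5551:
        case SurfaceFormat.Bgra4444:
            return (long)width * height * 2;           // 16 bits per pixel
        case SurfaceFormat.Color:
            return (long)width * height * 4;           // 32 bits per pixel (RGBA8)
        case SurfaceFormat.Dxt1:
            return (long)blocksWide * blocksHigh * 8;  // 8 bytes per 4x4 block
        case SurfaceFormat.Dxt3:
        case SurfaceFormat.Dxt5:
            return (long)blocksWide * blocksHigh * 16; // 16 bytes per 4x4 block
        default:
            return (long)width * height * 4;           // fallback assumption
    }
}

// e.g. EstimateBytes(512, 512, SurfaceFormat.Bgr565) == 524,288   (~0.5 MB)
//      EstimateBytes(512, 512, SurfaceFormat.Color)  == 1,048,576 (~1 MB)
//      EstimateBytes(512, 512, SurfaceFormat.Dxt1)   == 131,072   (~0.13 MB)
```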


Once again, thank you @Jesuszilla and @MrValentine for the help!
And before I close this thread, I wanted to congratulate everyone in the community on this great engine! :clap:

So, as for optimizing the memory usage, indeed building the Texture2D with Bgr565 can help, but the convenience of calling FromFile has value for those just starting to code with the engine (such as myself).

Also, as for the overhead in memory, I noticed the issue disappeared as I allocated more textures. For example, consider the code below (which iterates over a folder and loads 499 textures). The first Texture2D.FromFile call increases the process memory by ~4 MB, but after every call has completed, the process memory is only ~25 MB higher.
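
(A rough reconstruction of that loop, since the original was a screenshot; the folder path and filter are illustrative.)

```csharp
var tileTextures = new List<Texture2D>();
foreach (var file in Directory.EnumerateFiles("Content/Tiles", "*.bmp"))
{
    tileTextures.Add(Texture2D.FromFile(GraphicsDevice, file));
}
// The first load grew the process by ~4 MB; all 499 together added only ~25 MB.
```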

And as for that 7 GB memory issue, it was entirely in my code. I misconfigured the capacity of a dictionary with an epic value, and its initialization was causing the memory spike (yes… it was a legendary value).
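
(For future readers, a hypothetical illustration of that kind of mistake; the numbers and types below are made up. A Dictionary preallocates its internal arrays when a capacity is passed to the constructor, so an oversized capacity allocates the memory up front, before a single texture is loaded.)

```csharp
// Oops: preallocates internal arrays for 50 million entries right away.
var oops = new Dictionary<(int X, int Y), Texture2D>(50_000_000);

// What a 650-tile map actually needs.
var tiles = new Dictionary<(int X, int Y), Texture2D>(650);
```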

Thanks again,
And happy coding!


Thanks for sharing it @Ravendarke! I will play with the DXT format to check the benefits of having textures compressed in memory :slight_smile:

Remember that it is lossy compression, and it definitely isn't pixel-art friendly, for example.