I have a strange issue with memory when I switch from the x86 target to x64.
Most of the time the x86 build uses 80 to 100 MB of RAM. If I switch to x64, the memory usage jumps to between 900 and 1200 MB. I know 64-bit uses more RAM than 32-bit, but 10x seems insane…
I have used the VS 2022 memory profiler to understand what’s happening, but things get even stranger:
The graph shows up to 1200 MB of RAM
The stack indicates something like 33 MB
The compressed content is something like 75 MB on disk and the original content is something like 230 MB on disk (mostly PNGs)
Can you please give me some advice to understand what’s happening? It seems the next MonoGame release will be x64-only, and I’m afraid my small game will require gigabytes of RAM to run properly…
I tested this on my own game (published with win-x64 and win-x86 targets), and the difference was, like, 50 vs 80 MB. Maybe it’s something specific to your game/machine. And even if so, an extra gig doesn’t really matter nowadays.
That is the reality, my boy. Of course it would be nicer if that wasn’t the case. But I am personally way more concerned about the game itself and its performance than how much RAM it consumes. You can pretty much assume you’ve got infinite RAM unless you’re making Minecraft or something REALLY heavy.
It’s a small game; I can only imagine a normal game using 10x more RAM. There is clearly something wrong with my code, my hardware, my image format (maybe I should try power-of-two sizes), or something else.
I’ll continue my tests with a fresh project and dummy content, on a different PC.
One possibility is that the garbage collector on .NET 6/x64 is more flexible in managing system memory.
Do a GC.Collect() on each Update, and also after each Content.Load<>(). Check whether that affects the allocated memory. (Do not keep those changes in production.) There’s a sketch after these suggestions.
Run additional third-party applications that reserve a lot of memory, and watch whether your app returns memory to the system.
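A minimal sketch of that experiment (debug-only; LoadAndCollect is a made-up wrapper name, everything else is standard .NET/MonoGame API):

```csharp
// Inside your Game subclass. Debug-only experiment, never ship this.
protected override void Update(GameTime gameTime)
{
    GC.Collect(); // shrinks the managed heap down to live objects every frame
    base.Update(gameTime);
}

// Wrap Content.Load<T>() so every load is followed by a full collection
// and a log of the managed heap size.
private T LoadAndCollect<T>(string assetName)
{
    T asset = Content.Load<T>(assetName);
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    Console.WriteLine($"{assetName}: managed heap = {GC.GetTotalMemory(true):N0} B");
    return asset;
}
```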
What type of graphics card do you have?
Does it have dedicated VRAM, or is it integrated with shared system memory?
Normally, if those 900 MB are mostly textures & 3D models, they will get loaded onto the GPU.
On the CPU you will only see the intermediate buffers/pools used to load those assets.
“assume you have infinite ram”
folks that are reading this, don’t listen to martenfur. if you haven’t figured it out by now, he’s kind of a troll. well, he’s not really a troll, it’s just that his mind has been so corrupted he can no longer form useful replies. so, when you discover one of his ‘contributions’ just remember that he probably has no idea what he’s talking about and it’s safe to ignore him.
marten, i hope you don’t get conscripted to shoot rusty AKs with steel ammo.
Basically, my assets are all loaded while the logo splash screen is displayed.
To reduce the memory usage of my loaded assets, I have implemented async loading for each screen, covering only the assets actually used. As a result, the async loading takes way longer for a smaller number of assets, and the memory savings are not significant.
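For context, the per-screen loader is roughly this shape (a simplified sketch; ScreenLoader and the asset-name list are placeholders for my real screen code):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public static class ScreenLoader
{
    // Load only the textures a given screen actually uses, off the main
    // thread, so the previous screen can keep drawing a spinner.
    public static Task<Dictionary<string, Texture2D>> LoadAsync(
        ContentManager content, IEnumerable<string> assetNames)
    {
        return Task.Run(() =>
        {
            var loaded = new Dictionary<string, Texture2D>();
            foreach (string name in assetNames)
                loaded[name] = content.Load<Texture2D>(name);
            return loaded;
        });
    }
}
```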
I’m also trying to figure out how image size affects loading time and memory usage, to know whether it’s better to have fewer but bigger images, or more but smaller ones. I compare PrivateMemorySize64 against the expected uncompressed image size, W * H * 64 / 8, expressed in bytes.
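The numbers below were gathered with something like this (a minimal sketch: sample Process.PrivateMemorySize64 around a single Content.Load; "some_image" is a placeholder):

```csharp
using System;
using System.Diagnostics;
using Microsoft.Xna.Framework.Graphics;

// Inside LoadContent(), one image at a time:
var proc = Process.GetCurrentProcess();
long before = proc.PrivateMemorySize64;

var sw = Stopwatch.StartNew();
var tex = Content.Load<Texture2D>("some_image");
sw.Stop();

proc.Refresh(); // Process values are cached; refresh before re-reading
long after = proc.PrivateMemorySize64;

// Expected uncompressed size at 64 bits per pixel: W * H * 64 / 8 bytes.
long expected = (long)tex.Width * tex.Height * 8;

Console.WriteLine($"memory : {after - before} B expected : {expected} B");
Console.WriteLine($"time : {sw.Elapsed}");
```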
Size          Memory        Expected     Time
96 x 96       1613824 B     73728 B      00:00:00.3132904 (first load, not significant)
1281 x 250    5656576 B     983808 B     00:00:00.0086977 (incredibly fast)
2030 x 1080   32468992 B    1559040 B    00:00:00.1155399
2047 x 3103   106393600 B   1572096 B    00:00:00.2742769
Loading time depends heavily on disk speed and doesn’t seem correlated with image dimensions.
I suspect a “buffer” effect… Memory usage seems crazy because each image uses more than twice the expected size for a 64-bit-per-pixel image.
I’m now going to test multiple content managers, to keep the most-used UI elements loaded and flush only the level elements.
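Roughly what I mean (a sketch with placeholder asset paths). ContentManager.Unload() disposes everything that particular manager loaded, which is what makes the split work:

```csharp
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Two managers sharing the same Content root: one for long-lived UI
// assets, one for the current level.
var uiContent    = new ContentManager(Services, "Content");
var levelContent = new ContentManager(Services, "Content");

var uiAtlas  = uiContent.Load<Texture2D>("ui/atlas");
var levelMap = levelContent.Load<Texture2D>("levels/level1/map");

// When the level ends: frees every level asset, the UI stays resident.
levelContent.Unload();
```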
As a desperate move, I sacrifice 10 mana points to try to summon @mrhelmut for help.
Graphical assets (textures, shaders…) are not loaded into .NET memory (besides a shared scratch buffer used while loading, which should have a very minimal footprint) but into GPU vRAM. Loaded graphical assets are therefore not visible to the .NET profiler and not accounted for in System.GC.GetTotalMemory().
The Task Manager shows the working set, which is how much physical RAM a process uses. This is the measure you seem to be looking for. Note that this excludes GPU vRAM usage but includes the .NET runtime itself (not just your program), so it’s normal that it’s bigger. Also: when running in debug mode attached to VS, this will be largely bloated by virtual hosts from the VS debugging environment. If you want a proper measure, run your game in release mode detached from VS and look at the Task Manager. That is your RAM usage.
To know how much GPU vRAM is used… well, you can’t really know that per process. The Windows Task Manager provides some insights, like the total GPU memory used (in the Performance tab), and that’s pretty much all you can know (GPUs are “blind” and don’t know which process allocates memory). So if you need to know what your process uses, you’ll have to compute the difference yourself when starting/quitting your app.
PrivateMemorySize64 returns the total physical memory used (including the .NET runtime itself, but excluding GPU vRAM) plus the paging memory reserved for memory operations (the system always allocates more space to manage memory movements, and this amount is affected by your program’s behavior; this is normal, and in most cases the system will free physical RAM and use HDD swap if RAM usage becomes critical). The Visual Studio diagnostic tools return this plus the debugging memory overhead. These measures are likely irrelevant here.
If you want to know what your .NET program really uses, the correct measure is System.GC.GetTotalMemory(). This returns the amount of managed memory that your program currently uses (including garbage). It excludes the .NET runtime itself, any native memory (e.g. SDL, OpenAL…), and of course GPU vRAM. This is the single most important measure when developing .NET apps.
The .NET runtime reserves more memory than your program uses so that it can handle your program’s highs and lows more smoothly (and manage the garbage/repacking of the memory). The amount of reserved memory can be huge if your program allocates a lot of temporary objects.
I don’t know if the x64 .NET runtime does anything differently from the x86 one, but it may very well have a different reservation strategy to handle 64-bit alignment/repacking. If your RAM usage (in the Task Manager) is stable, I wouldn’t worry much about it.
TL;DR: the memory your .NET program uses is given by System.GC.GetTotalMemory(), and the physical RAM used (which includes the .NET runtime itself along with your program) is given by the Task Manager in release mode outside of VS. The other measures include stuff you shouldn’t worry much about.
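If it helps, here’s a small helper that logs both numbers (a sketch; LogMemory is a made-up name):

```csharp
using System;
using System.Diagnostics;

static void LogMemory(string label)
{
    // Managed heap: what your .NET code itself holds (including garbage).
    long managed = GC.GetTotalMemory(forceFullCollection: false);

    // Working set: physical RAM used by the whole process, runtime included.
    using var proc = Process.GetCurrentProcess();
    long workingSet = proc.WorkingSet64;

    Console.WriteLine($"{label}: managed = {managed:N0} B, working set = {workingSet:N0} B");
}
```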
I don’t understand what happens with GetTotalMemory, but running a release build outside of VS gives me much more understandable results. I also learned that the smaller the image is, the more it costs “per pixel” in loading time.
By the way, it looks like some image dimensions cost more than expected, and the same goes for pictures bigger than 2048 x 2048 px. Do you know whether power-of-two or square pictures affect this?
So here’s what I’m gonna try:
2048 x 2048 max images.
No more orphan small images.
Maybe square & power-of-two dimensions.
Again, textures are not loaded into .NET/CPU RAM and are not visible/counted anywhere. What you see is only the scratch buffer used to transfer them from RAM to the GPU (there’s no direct line from disk to GPU; they have to be loaded into RAM first), not the texture itself (which resides in GPU vRAM and can’t be queried per process).
MonoGame doesn’t use one scratch buffer per texture, but a buffer pool, to avoid garbage when loading content. This comes with a memory footprint which is usually as large as the largest asset you have loaded at any point in time (e.g. if you loaded 1 GB worth of textures and the largest texture was 20 MB, then the buffer overhead will be 20 MB).
The default buffer pool size is 1 MB, so if you only load 128 KB textures, the overhead will still be 1 MB.