Memory usage targeting x86 vs x64

Hi!

I have a strange issue with memory when I switch from the x86 target to x64.
Most of the time the x86 build uses 80 to 100 MB of RAM. If I switch to x64, the memory usage jumps to between 900 and 1200 MB. I know 64-bit builds use more RAM than 32-bit ones, but 10x seems insane…

I have used the VS 2022 memory profiler to understand what’s happening, but things get stranger:

  • The graph shows up to 1200 MB of RAM
  • The stack indicates something like 33 MB

The compressed content is something like 75 MB on disk, and the original content is something like 230 MB (mostly PNGs).

Can you please give me some advice to understand what’s happening? It seems the next MonoGame release will be x64 only, and I’m afraid my small game will require gigabytes of RAM to run properly…

Thanks

Yeah, but did you compile in Release mode?

Hi!

Yes. I tried many combinations.

I tested this on my own game (published with win-x64 and win-x86 targets), and the difference was, like, 50 vs 80 MB. Maybe it’s something specific to your game/machine. And even if so, an extra gig doesn’t really matter nowadays.

It matters for me :slight_smile:

I tried with a fresh project. Just loading assets.
x86

  • Empty project → 20.7 MB
  • Full asset (956 MB on disk) → 51.4 MB

x64

  • Empty project → 24.2 MB
  • Full asset (956 MB on disk) → 1068.8 MB

It’s like there was a memory-saving option that doesn’t work with 64 bits.
Maybe my hardware is not able to build an x64 project properly…

That is asinine

That is the reality, my boy. Of course it would be nicer if that wasn’t the case. But I am personally way more concerned about the game itself and its performance than how much RAM it consumes. You can pretty much assume you’ve got infinite RAM unless you’re making Minecraft or something REALLY heavy.

It’s a small game; I can only imagine a normal game using 10x more RAM. There is clearly something wrong with my code, my hardware, my image format (maybe I should try power-of-two sizes), or something else.

I’ll continue my tests with a fresh project and dummy content on a different PC.

Your results do not mean everything takes 10x as much RAM, only that there is an additional overhead of… something.

One possibility is that the garbage collector on .NET 6/x64 is more flexible in managing system memory.

Do a GC.Collect() on each update and also after each Content.Load<>(), and check whether that affects the allocated memory; see the sketch below. (Do not keep those changes in production.)
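
Something like this, as a temporary diagnostic (a minimal sketch; the asset name is illustrative):

```csharp
using System;
using Microsoft.Xna.Framework.Graphics;

// Inside your Game class — temporary diagnostic only, do not ship this.
protected override void LoadContent()
{
    var texture = Content.Load<Texture2D>("mySprite"); // "mySprite" is illustrative

    GC.Collect();                  // force a full collection
    GC.WaitForPendingFinalizers(); // let finalizers release native resources
    GC.Collect();                  // collect anything the finalizers freed
    Console.WriteLine($"Managed heap after load: {GC.GetTotalMemory(true) / 1024} KB");
}
```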

Run additional third-party applications that reserve a lot of memory, and watch whether your app returns the memory to the system.

What type of graphics card do you have?
Does it have dedicated VRAM, or is it integrated with shared system memory?
Normally, if those 900 MB are mostly textures & 3D models, they will get loaded onto the GPU.
On the CPU side you will see only the intermediate buffers/pools that are used to load those assets.

1 Like

Hi!

GC.Collect() doesn’t change anything.

I use a GeForce GTX 850M, but I have to force Windows to run the game on it. The default GPU is an Intel HD Graphics 4600.

If I force the GeForce, the global memory usage (shown in VS 2022) drops from 1.1 GB to 960 MB.
Windows shows only 139 MB; I suppose the memory “goes” into the NVIDIA dedicated memory…

Power-of-two image resolutions don’t change anything.

Today I tried on another laptop without a dedicated GPU; the results are much closer between the 32-bit and 64-bit builds.

Tomorrow I’ll try with a GT 1030 on another machine.

Just keeping track of similar issues.

It looks like my game has always been bloatware, but the x86 build wasn’t showing it…

“assume you have infinite ram”
folks that are reading this, don’t listen to martenfur. if you haven’t figured it out by now, he’s kind of a troll. well, he’s not really a troll, it’s just that his mind has been so corrupted he can no longer form useful replies. so, when you discover one of his ‘contributions’ just remember that he probably has no idea what he’s talking about and it’s safe to ignore him.

marten, i hope you don’t get conscripted to shoot rusty AKs with steel ammo.

4 Likes

Did you fix your collisions finally?

Hi people.

Jokes aside, I’m still working on my issue.
I have upgraded MonoGame to the latest version and switched from .NET Core to .NET 6.0.
I tried a lot of combinations:

  • Compressed assets vs uncompressed (seems to no longer apply with 3.8.1; asset disk size remains the same)
  • GPU with dedicated RAM vs GPU with shared RAM
  • Target platform in the project properties vs target platform in the solution’s configurations
  • 3 different machines
  • Forcing the NVIDIA GPU instead of the onboard Intel GPU

The result observed in Visual Studio is still the same:

  • x86: RAM usage decreases a few seconds after image loading
  • x64: RAM usage reflects the uncompressed size of my images and never decreases

Garbage collector calls don’t change anything.

Now I’m going to load my assets “just in time”…

2 Likes

Here’s some (not good) news.

Basically my assets are fully loaded while displaying my logo splash screen.

To reduce the memory usage of my loaded assets, I have implemented async loading for each screen, only for the assets actually used, roughly as sketched below. As a result, the async loading time is way longer for a smaller number of assets, and the memory saving is not significant.
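
What I mean by per-screen async loading (a simplified sketch; the asset name is made up, and it assumes Content.Load<> is safe to call off the main thread on the target platform):

```csharp
using System.Threading.Tasks;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Simplified sketch: each screen loads only its own assets, off the main thread.
// If Content.Load<> is not thread-safe on your platform, spread the loads
// across several Update frames on the main thread instead.
class GameplayScreen
{
    private readonly ContentManager _content;
    private Texture2D _hero; // "hero" is a made-up asset name

    public GameplayScreen(ContentManager content) => _content = content;

    public Task LoadAssetsAsync()
    {
        return Task.Run(() =>
        {
            _hero = _content.Load<Texture2D>("hero");
            // ... load the rest of this screen's assets here
        });
    }
}
```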

I’m also trying to figure out how image size affects loading time and memory usage, to know whether it’s better to have fewer but bigger images, or more but smaller ones. I’m comparing PrivateMemorySize64 against the expected uncompressed image size (W * H * 64 / 8), expressed in bytes.
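
The measurement looks roughly like this (a sketch; the asset name is made up):

```csharp
using System;
using System.Diagnostics;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Rough sketch of the per-image measurement:
// PrivateMemorySize64 delta around the load, plus a Stopwatch for timing.
static void MeasureLoad(ContentManager content, string assetName)
{
    long before = Process.GetCurrentProcess().PrivateMemorySize64;
    var sw = Stopwatch.StartNew();
    var tex = content.Load<Texture2D>(assetName); // e.g. "sheet_96x96" (made-up name)
    sw.Stop();
    long after = Process.GetCurrentProcess().PrivateMemorySize64;
    Console.WriteLine($"memory: {after - before} B, time: {sw.Elapsed}");
}
```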

  • 96 × 96 → memory: 1613824 B, expected: 73728 B, time: 00:00:00.3132904 (first load, not significant)
  • 1281 × 250 → memory: 5656576 B, expected: 983808 B, time: 00:00:00.0086977 (incredibly fast)
  • 2030 × 1080 → memory: 32468992 B, expected: 1559040 B, time: 00:00:00.1155399
  • 2047 × 3103 → memory: 106393600 B, expected: 1572096 B, time: 00:00:00.2742769
Loading time depends heavily on disk speed and doesn’t appear related to image dimensions.
I suspect a “buffer” effect… Memory usage seems crazy because each image uses more than twice the expected size for a 64-bit-per-pixel image.

I’m now going to test multiple content managers to keep the most-used UI elements loaded and flush only the level elements, along the lines of the sketch below.
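
The idea (a sketch, inside the Game class):

```csharp
using Microsoft.Xna.Framework.Content;

// Sketch: two managers sharing the same content root.
// _uiContent lives for the whole game; _levelContent gets flushed per level.
ContentManager _uiContent;
ContentManager _levelContent;

protected override void Initialize()
{
    _uiContent = new ContentManager(Services, "Content");
    _levelContent = new ContentManager(Services, "Content");
    base.Initialize();
}

void UnloadLevel()
{
    _levelContent.Unload(); // disposes every asset this manager loaded
}
```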

As a desperate move, I sacrifice 10 mana points to try to summon @mrhelmut for help.

A few things to consider:

  • Graphical assets (textures, shaders…) are not loaded into .NET memory (besides a shared scratch buffer used while loading, which should have a very minimal footprint) but into GPU vRAM. Loaded graphical assets are therefore not visible to the .NET profiler and not accounted for in System.GC.GetTotalMemory().

  • The Task Manager shows the working memory, which is how much physical RAM is used by a process. This is the measure you seem to be looking for. Note that this excludes GPU vRAM usage but includes the .NET runtime itself (not just your program), so it’s normal that it’s bigger. Also: when running in debug mode attached to VS, this will be largely bloated by virtual hosts from the VS debugging environment. If you wish to know the proper measure, run your game in release mode detached from VS and look at the Task Manager. This is your RAM usage.

  • To know how much GPU vRAM is used… well, you can’t really know that per process. The Windows Task Manager provides some insights like the total GPU memory used (in the performance tab), and that’s pretty much all you can know (GPUs are “blind” and don’t know which process allocates memory). So if you need to know what your process uses, you’ll have to compute the difference yourself when starting/quitting your app.

  • PrivateMemorySize64 returns the total physical memory used (including the .NET runtime itself, but excluding GPU vRAM) plus the paging memory reserved for memory operations (the system always allocates more space to manage memory movements, and this amount is affected by your program’s behavior; this is normal, and in most cases the system will free physical RAM and use HDD swap if RAM usage starts being critical). The Visual Studio diagnostic tools return this, plus the debugging memory overhead. These measures are likely irrelevant here.

  • If you want to know what your .NET program really uses, the correct measure is System.GC.GetTotalMemory(). This returns the amount of managed memory that your program currently uses (including garbage). It excludes the .NET runtime itself, any native memory (e.g. SDL, OpenAL…), and of course GPU vRAM. This is the single most important measure when developing .NET apps.

  • The .NET runtime reserves more memory than what your program uses so that it can handle your program’s highs and lows more smoothly (and handle the garbage collection/repacking of memory). The amount of reserved memory can be huge if your program allocates a lot of temporary objects.

  • I don’t know if the x64 .NET runtime does anything differently from the x86 one, but it is quite possible that it has a different reservation strategy to handle 64-bit alignment/repacking. If your RAM usage (in the Task Manager) is stable, I wouldn’t worry much about that.

TL;DR: the memory that your .NET program uses is given by System.GC.GetTotalMemory() and the physical RAM used (which includes the .NET runtime itself along with your program) is given by the Task Manager in release mode outside of VS. The other measures include stuff that you shouldn’t worry much about.
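
For reference, here’s one way to log both measures side by side (a sketch; Process.WorkingSet64 roughly matches what the Task Manager shows as working memory):

```csharp
using System;
using System.Diagnostics;

// Sketch: log managed memory (what your .NET code uses) next to the
// process working set (physical RAM, .NET runtime included, GPU vRAM excluded).
static void PrintMemory(string label)
{
    long managed = GC.GetTotalMemory(forceFullCollection: true); // managed heap, after a full collection
    long workingSet = Process.GetCurrentProcess().WorkingSet64;  // ~ Task Manager working memory
    Console.WriteLine($"{label}: managed {managed / 1024} KB, working set {workingSet / (1024 * 1024)} MB");
}
```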

3 Likes

Hi MrHelmut!

Thank you very much for those explanations! It seems the most successful thing I’ve done today was summoning you. Once again I owe you a beer + 10 mana points.

I tried GC.GetTotalMemory with a bunch of spritesheets with very different sizes.

  • 96 × 96 → GetTotalMemory: 1060 KB, expected 32-bit RGBA: 36 KB, Task Manager: +1.5 MB, time: 124 ms, px/ms: 74
  • 1281 × 250 → GetTotalMemory: 227 KB, expected 32-bit RGBA: 1250 KB, Task Manager: +1.3 MB, time: 60 ms, px/ms: 5337
  • 2030 × 1080 → GetTotalMemory: 7313 KB, expected 32-bit RGBA: 8564 KB, Task Manager: +7.0 MB, time: 106 ms, px/ms: 20683
  • 2047 × 3103 → GetTotalMemory: 16244 KB, expected 32-bit RGBA: 24811 KB, Task Manager: +49.2 MB, time: 235 ms, px/ms: 27029
  • 1404 × 247 → GetTotalMemory: 4 KB, expected 32-bit RGBA: 1354 KB, Task Manager: +7.5 MB, time: 77 ms, px/ms: 4503
  • 36 × 36 → GetTotalMemory: 4 KB, expected 32-bit RGBA: 5 KB, Task Manager: +0.1 MB, time: 59 ms, px/ms: 21
  • 1019 × 1709 → GetTotalMemory: 4 KB, expected 32-bit RGBA: 6802 KB, Task Manager: +7.6 MB, time: 90 ms, px/ms: 19349
  • 1630 × 937 → GetTotalMemory: 4 KB, expected 32-bit RGBA: 5966 KB, Task Manager: +5.8 MB, time: 84 ms, px/ms: 18182

I don’t understand what happens with GetTotalMemory, but running the release build outside of VS gives me more understandable results. I also learned that the smaller the image is, the more it costs “per pixel” in loading time.

By the way, it looks like some image dimensions cost more than expected, and the same goes for pictures bigger than 2048x2048 px. Do you know if power-of-two or square pictures affect this?

So here’s what I’m gonna try:

  • 2048x2048 max image size
  • No more orphan small images
  • Maybe square & power-of-two dimensions

Again, textures are not loaded into .NET/CPU RAM and are not visible/counted anywhere. What you see is only the scratch buffer used to transfer them from RAM to the GPU (because there’s no direct line from disk to GPU, they have to be loaded to RAM first), not the texture itself (which resides in GPU vRAM and can’t be queried per process).

MonoGame doesn’t use one scratch buffer per texture, but a buffer pool, to avoid garbage when loading content. This comes with a memory footprint which is usually as large as the largest asset you loaded at any point in time (e.g. if you loaded 1 GB worth of textures and the largest texture was 20 MB, then the buffer overhead will be 20 MB).
The default buffer pool size is 1 MB, so if you load only 128 KB textures, the overhead will still be 1 MB.
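
Conceptually something like this (a hypothetical sketch of the idea, not MonoGame’s actual code):

```csharp
// Hypothetical sketch: a grow-only scratch buffer, reused across loads.
// Its footprint ends up at max(1 MB default, largest asset ever requested),
// which is why the overhead tracks your single largest texture.
class ScratchBufferPool
{
    private byte[] _buffer = new byte[1024 * 1024]; // 1 MB default size

    public byte[] Rent(int size)
    {
        if (_buffer.Length < size)
            _buffer = new byte[size]; // grow to the largest request seen so far
        return _buffer;
    }
}
```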

2 Likes