I’m currently working on a gravitation simulator. I made adjustable options for simulation speed (speed up or slow down time) and simulation precision (movements of bodies are calculated in smaller steps), but when I crank them up too high it starts behaving weirdly. I get about 2 fps, but that is not the real problem: it acts as if the Update method returned before it reached the end of the code. I measured how often the Update method is called and got more than 200 ms between calls. Also, it’s not even using the whole processor; usage sits around 50%. Are there any graphical or other settings I should change? Thanks for the help.
I took a quick look at your code, and it looks like you’re using a fixed time step (the default setting).
With a fixed time step, the game will attempt to update consistently at a rate that matches the target elapsed time (by default, 60 fps). If you don’t exceed CPU limits, the loop runs like this:

Update
Draw
Update
Draw
…
However, what happens if your Update method takes longer than the allotted time? The game will attempt to play catch-up by skipping Draw calls:

Update // took too long
Update
Update
Draw
So you end up with dropped frames.
If the game gets too far behind, I’m not sure exactly what happens, but I don’t think it’s good.
For the default 60 fps target elapsed time, your update and draw methods combined shouldn’t take more than 16.67 ms or else it can’t meet the target. 200 ms is obviously way past that and won’t give the game enough time to catch up, no matter how many draw calls it drops. I’m not sure what exactly would cause the Update loop to not reach the end. Maybe the MonoGame runtime detects that it’s falling too far behind somehow and simply bails.
You might want to try disabling fixed time step. With fixed time step disabled, no matter how long an Update call takes, it will always be followed by a Draw call. But then you’ll need to account for elapsed game time due to the variable amount of time between frames.
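To illustrate accounting for elapsed time, here is a minimal sketch in plain C# (no MonoGame types; the `Step` method and the tuple layout are placeholders I made up). With fixed time step disabled, you would feed in the real elapsed seconds each frame, e.g. from `gameTime.ElapsedGameTime.TotalSeconds`:

```csharp
using System;

static class VariableStep
{
    // With a variable time step, the elapsed time between Update calls varies,
    // so all movement must be scaled by dt (in seconds) rather than assuming
    // a fixed 1/60 s step.
    public static (double pos, double vel) Step(double pos, double vel,
                                                double accel, double dt)
    {
        vel += accel * dt;   // integrate acceleration over the real elapsed time
        pos += vel * dt;     // integrate velocity the same way
        return (pos, vel);
    }
}
```

The same simulation then advances at the same rate regardless of how long each frame takes.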
Pretty much all processors are multi-core and multi-threaded at this point, so you’re not going to get 100% usage of your CPU unless you write multi-threaded code, and if you can at all avoid doing that, you should, because it dramatically increases the complexity of coding and debugging.
I struggled with this for months and found an answer. I assume you are doing Newtonian gravity, the N-body problem with planets and orbits. So: two threads. Run your gravity controllers and physics stuff in a separate background thread, as fast as you can, or measure the time each step took and use a sleep to keep an even frame rate if frames aren’t uniform (for gravity they will be; for collisions they won’t). It is an O(n²) N-body problem, or worse. Then, every time the physics thread finishes a frame, clone all the positions of the bodies and everything else you need to draw, and stick it in a buffer or queue. This is the producer/consumer pattern. Only draw the clones, so the physics engine can update the originals at the same time as Draw draws the clones, without worrying about timing. When you get a Draw call, draw the complete set of the latest clones, which contain just the visual properties, and let the physics engine work on the new frame while you draw the last good one. You won’t need any semaphores or wait handles; they simply don’t work reliably on all platforms, and they rely on the timer resolution too.
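A minimal lock-free sketch of that snapshot idea (the `SnapshotSim` class and its members are hypothetical names): the physics thread publishes a clone of the positions, and the draw thread only ever reads the latest complete clone:

```csharp
using System;
using System.Threading;

class SnapshotSim
{
    // The physics thread mutates these; the draw thread never touches them.
    readonly double[] positions = { 0.0, 0.0 };

    // Latest published copy; the reference is swapped atomically,
    // so no semaphores or wait handles are needed.
    double[] latest = Array.Empty<double>();

    public void PhysicsStep(double dt)
    {
        for (int i = 0; i < positions.Length; i++)
            positions[i] += 1.0 * dt;                  // placeholder physics

        // Clone just the visual state and publish it for the draw thread.
        Volatile.Write(ref latest, (double[])positions.Clone());
    }

    // Called from the draw thread: always renders the last complete frame.
    public double[] LatestSnapshot() => Volatile.Read(ref latest);
}
```

The draw side can run at any rate; it simply renders whatever complete snapshot was published most recently.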
Never mind the Update method in MonoGame; I don’t think I call anything in it. It’s for a single-threaded model. Just use Draw and sync to the screen. You won’t draw every physics frame, but you’ll draw as fast as your screen can refresh, and that’s fine because your physics will be a tighter simulation.
I run my physics at 80 fps (time step 1/80), not the default 1/60. At higher steps it can get errors that break float precision; for some reason 80 is my sweet spot. Turn on CCD. I’ve always used a physics thread in the background. It can be one thread, but Aether allows more, though it uses locks and those depend on timers. For rays I run them in parallel. On Windows, use SetTimerResolution to set 1 ms; it’s 16 ms by default, which is useless since that’s the length of an entire frame. To prove this, measure Sleep(1) and you will get results that vary by 16 ms. On Android you can’t do this. Anyway, I hope this isn’t too much, but my issue took me two months to sort out. If you need timers, use the Stopwatch.
If you want to go deep into relativistic gravity, you need Eulerian methods, grids, and fluid dynamics stuff; it can get very deep (see “Space-time concept – interactive simulations – eduMedia”), because then you can have clumps and galaxies and not have to cross-reference every single thing in the universe.
@Damian_Eaglestein thank you very much. I had never thought about multithreading (because I have never used it before). While it did not solve my problem, it really helped with performance, and thanks to it my code is also cleaner.
Now I’m pretty sure it is not a MonoGame error, as I previously thought, but something else, probably to do with float precision as you mentioned. Maybe if I wrote my own Vector2 that uses decimals instead of floats it would solve it, I don’t know. I will try it later and come back with results.
I looked at your code and saw that you added the thread, but I see a few fundamental issues… 1. Newton’s gravity is F = G × M1 × M2 / (dist × dist); I think you are only using G × M1.
2. You are using lots of int factors. N/2 is not the same as N/2f sometimes; just to be sure, add an f, or keep things double until your final cast to float if you can’t afford doubles for everything. You should use real units (meters per second, kilograms) and time steps like 1/60f; your code looks too complex. Stick to real-world standard units and you’ll get stable orbits. Also, I think the physics engines have a radial gravity controller, or you could put your code in one. They will set up your bodies and standard update loops, and they will do collision so your planets can bounce off each other, break, or merge.
3. I see you do a Normalize, but you could just calculate the force and get the acceleration from a = F/M for each planet; you don’t need an explicit normalize.
4. You might also get issues with long distances from (0,0). Once you get past a few thousand meters (big worlds), you have to use local spaces. Instead of using (0,0) as the origin, transform both planets so that one of them sits at (0,0) in world space, do the gravity acceleration calc, then transform both planets back to universal world space, and then add up all the accumulated accelerations.
N-body solutions should be on GitHub somewhere, and you can port them to C#.
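For reference, a small sketch of the acceleration calculation under those formulas (the `Gravity.Accel` name is made up). Because a = F/m1, only the other body’s mass appears, and dividing the offset vector by r³ combines the 1/r² magnitude with the direction, so no separate Normalize call is required:

```csharp
using System;

static class Gravity
{
    const double G = 6.674e-11;   // gravitational constant, m^3 / (kg s^2)

    // Acceleration of body 1 toward body 2. The mass of body 1 cancels
    // (a = F/m1), so only m2 appears. Dividing the offset by r^3 folds
    // the unit direction vector into the 1/r^2 force law in one step.
    public static (double ax, double ay) Accel(
        double x1, double y1, double x2, double y2, double m2)
    {
        double dx = x2 - x1, dy = y2 - y1;
        double r = Math.Sqrt(dx * dx + dy * dy);
        double k = G * m2 / (r * r * r);
        return (k * dx, k * dy);
    }
}
```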
If your issue is precision, and if you aren’t using physics engines like Velcro or Bepu, or a float-based Vector2, definitely look at System.Numerics Vector2 and Vector3. You could use doubles alongside it, and it will use SIMD to do X, Y, and Z at the same time without any extra coding on your part. I did a little experiment normalizing random System.Numerics vectors vs. the non-SIMD ones, and it was even more than 2x faster for no effort. The issue is they haven’t done the optimization for double, and they admit it’s physically not possible with current hardware. And it’s not going to be much faster anyway because of the size of the points, especially with lots of them. So unless you have high enough performance and can afford doubles, you can go to floats, but what’s easier is “local spaces”. Check whether your coordinates are too far from the origin; then transform into the space of one body for the gravity between it and the nearest one (or whatever), do the calculation for gravity, then transform back to your universal space. I have to do this all the time with floats. Really though, your math steps don’t have to be so tiny for gravity to be stable; I think it’s your coordinate systems and units. For big-world space games it is done with large ints or doubles or local spaces; it’s not a simple decision. None of the standard engines can deal with it, but they can do collisions and give you a standard way to do your loops and units. I recommend Velcro, or maybe Bepu, but that is a very advanced one, and even it struggles: High precision poses · Issue #13 · bepu/bepuphysics2 · GitHub
And if you look at this issue, Microsoft hasn’t got it generally solved either: Add support System.Numerics.Vectors types with double precision · Issue #24168 · dotnet/runtime · GitHub
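As a concrete taste of the System.Numerics.Vector2 mentioned above (the `SimdDemo` wrapper is a hypothetical name), the SIMD-backed type is a drop-in replacement for the usual scalar vector math:

```csharp
using System;
using System.Numerics;   // the SIMD-accelerated Vector2 lives here

static class SimdDemo
{
    // System.Numerics.Vector2 handles X and Y together in hardware where
    // possible; the calling code looks identical to a scalar implementation.
    public static Vector2 Direction(Vector2 from, Vector2 to)
        => Vector2.Normalize(to - from);
}
```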
It may be relevant in this topic just to mention that apparently there can be serious performance degradation when floats get into unusual “denormalized” ranges, e.g. very small values. In these cases the usual trick is to detect the issue and set the value to a predefined good one, e.g. for the velocity here:
// Denormalized floats cause significant performance issues
if (Math.Abs(vel.X) + Math.Abs(vel.Y) < 0.00000000001f)
    vel = Vector2.Zero;
Hello. I rewrote every number as decimal to see whether it would help at all. I wrote my own Vector2 and used DecimalMath. On its own it did not help and cut my performance in half. However, when I increased the fps of the simulation to 200 Hz, it seemed to fix my problem. I don’t want to close this question yet, because I feel the performance is important too and I want to improve it. As it is now, I have basically 1/8 of the previous performance (I used 50 Hz before). This would become a problem if I wanted to add more bodies and when speeding up time.
I thought I would try replacing decimals where they are not needed and thus improve performance, but I was experimenting with different sets of bodies, and even if I still use just 2 of them, performance can vary greatly based on their movement. Therefore I will try these “local spaces” as @Damian_Eaglestein proposed.
Also I am certain I use the right formula, because I calculate their acceleration, not force and therefore I don’t need the mass of the body I want to move. In regards to real units, I have planned to include them down the line but in the end, they are not that important. My reason for doing this is not to plan next Moon mission but to prove to myself (and my future employer) that I am capable of something like this. This is also the reason why I don’t want to use any existing engines.
I see… so the mass term is dropped. For space games I’ve not heard decimal mentioned (it’s meant for small ranges with high precision); for big spaces it’s either longs, floats (with local coordinate frame spaces), or doubles, which is still being debated. Numerics has Vector3 (float); a double version would be slower. And decimal is twice as big as even a double, at 128 bits, so that’s a lot of data to move around. Still, it depends on how your demo is going to be judged: whether you want a solar system of stable orbits (that stay stable for virtual years), a galaxy, or a clump of particles forming a planet.
When people suggest using “long int”, which is a whole number (as if the world were voxels or pixels), when you really want a real number, a point in a space-time continuum, it’s usually for deterministic physics across different machines in a multiplayer sim, because different PCs do floating-point math with subtle differences, so players would see different universes as time goes on. They do the physics with the long int but convert it to a real number by dividing by the scale of the universe’s total size or whatever. So anyway, I suggest ditching decimal, maybe trying double, and/or reducing the gravitational constant so the accelerations are smaller. Now that you have two threads, you can do 2000 frames of physics for every frame you actually show; 60 Hz is fine. You can also max out the CPU with C# Parallel.For loops; that will max all your cores (you probably have 4–8). I bet you never even hit 27% on a 4-core with one physics and one draw thread.
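The long-int trick above can be sketched in a few lines (the `Scale` constant and class name here are arbitrary illustrative choices): positions are stored as scaled integers so every machine computes bit-identical results, and they are divided back out only when a real number is needed for rendering:

```csharp
using System;

static class FixedPoint
{
    // Hypothetical resolution: 1e6 ticks per meter. Integer math on the
    // scaled values is bit-identical on every machine, unlike float math.
    const long Scale = 1_000_000;

    public static long ToFixed(double meters) => (long)Math.Round(meters * Scale);

    // Convert back to a real number only at the edges (e.g. for drawing).
    public static double ToMeters(long fixedPos) => (double)fixedPos / Scale;
}
```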
For the local space: for a 2-body subsystem, put one body at (0,0) (or at the system’s current center of mass), and make methods to transform between world and local space, say WCStoLocal and its inverse LocalToWCS. I have to do this even for a simple 1000-meter space with angle calculations; once coordinates get away from (0,0), the standard methods with float vectors don’t work.
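A sketch of that WCStoLocal / LocalToWCS round trip (the method and tuple names here are invented): subtract the reference body’s position, do the math near the origin where float precision is densest, then add the offset back:

```csharp
using System;

static class LocalSpace
{
    // Run a position update in the local frame of a reference body, then map
    // the result back to world space. Doing the math near (0,0) avoids the
    // float precision loss that appears far from the origin.
    public static (float x, float y) StepInLocalFrame(
        (float x, float y) world, (float x, float y) origin,
        Func<(float x, float y), (float x, float y)> step)
    {
        var local = (x: world.x - origin.x, y: world.y - origin.y); // world -> local
        var moved = step(local);                                    // math at small magnitudes
        return (moved.x + origin.x, moved.y + origin.y);            // local -> world
    }
}
```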
You might also want to look at different numerical integration schemes to impress them. I think we are both using forward Euler, which is the most basic; it’s a stair-step approximation of a curve: vel += accel * dt; pos += vel * dt. But there’s Runge-Kutta and all sorts of others. Error accumulates every single step, and if planets do a “flyby” you might see something weird. Usually it’s fine as long as you lose some energy (I have to damp by 0.9999999 on my system of masses to make a rope wave, and then every step is stable; the system will cool and stop sooner, but at least it won’t start gaining energy from accumulated errors every step and explode). With more steps you think you are gaining accuracy, but you are also giving the numerical integration more chances to accumulate error. The reason to look at existing physics engines is just to see their basic rigid-body and integration methods for each step; you can learn from that and make sure your code isn’t doing something odd. There’s also Leapfrog integration.
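The two update orderings are easy to compare side by side. Below is a sketch (made-up names): forward Euler as described above, next to semi-implicit Euler, the usual first step toward leapfrog, which updates velocity first and is symplectic, so its energy error stays bounded for orbital motion instead of growing:

```csharp
using System;

static class Integrators
{
    // Forward (explicit) Euler: position uses the *old* velocity.
    public static (double x, double v) EulerStep(double x, double v, double a, double dt)
        => (x + v * dt, v + a * dt);

    // Semi-implicit Euler: update velocity first, then position with the
    // *new* velocity. Same cost per step, far better energy behavior.
    public static (double x, double v) SemiImplicitStep(double x, double v, double a, double dt)
    {
        v += a * dt;
        return (x + v * dt, v);
    }
}
```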
I looked at your code again; it’s pretty sound for forward Euler and will probably work fine.
By the way, there are quite a few places in your code where you can use Parallel.For, for example this inner loop (maybe not the outer one, though):
foreach (Body body2 in bodies)
    if (body1 != body2) Accelerate(body1, body2);
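A sketch of how that pairwise loop could be parallelized (`NBody.Accelerations` is a made-up name; `Body` is replaced by a plain tuple for a self-contained example). Note this version parallelizes the outer index, so each iteration writes only its own output slot and no locking is needed:

```csharp
using System;
using System.Threading.Tasks;

static class NBody
{
    // Accumulate each body's acceleration from all other bodies in parallel.
    // Reads only touch the shared input array; writes go to accel[i], which
    // belongs exclusively to iteration i, so the loop is race-free.
    public static (double ax, double ay)[] Accelerations(
        (double x, double y, double m)[] bodies, double G)
    {
        var accel = new (double ax, double ay)[bodies.Length];
        Parallel.For(0, bodies.Length, i =>
        {
            double ax = 0, ay = 0;
            for (int j = 0; j < bodies.Length; j++)
            {
                if (i == j) continue;
                double dx = bodies[j].x - bodies[i].x;
                double dy = bodies[j].y - bodies[i].y;
                double r = Math.Sqrt(dx * dx + dy * dy);
                double k = G * bodies[j].m / (r * r * r);
                ax += k * dx;
                ay += k * dy;
            }
            accel[i] = (ax, ay);
        });
        return accel;
    }
}
```

Parallelizing the inner loop instead would make several threads write one accumulator, which then needs locks or per-thread partial sums.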
Also, if you want to get really crazy, try Vector&lt;T&gt; and SIMD, but I haven’t yet; it’s too hard for me. Bepu does this stuff.
I think I got as far as possible in my case, so I’m gonna sum up and close this up.
The title is wrong. The reason for my pain was not the CPU or MonoGame, but floats. Floats suck. They are fast, sure, but for a small universe they’re too imprecise. The thing is that when I turn up the fps, the simulation should get more precise, but in fact it doesn’t; it even has the opposite effect. Decimals don’t do this; they only get more precise, so that was the only way. I guess with doubles the effect would be less noticeable, but it would still be there.
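The float limit is easy to demonstrate deterministically: above 2^24 (16,777,216) a float cannot represent every integer, so a small increment to a large coordinate can vanish entirely, which is exactly why cranking up the step rate (smaller per-step increments) can make things worse rather than better. A tiny sketch (the class name is made up):

```csharp
using System;

static class PrecisionDemo
{
    // Above 2^24, adjacent floats are 2 apart, so adding 1 to a position of
    // 16,777,216 rounds straight back to 16,777,216: the motion is lost.
    // A double has a 53-bit mantissa, so the same limit only appears at 2^53.
    public static float AddOne(float position) => position + 1f;
}
```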
I tried using local spaces and that did nothing for me, I guess the bodies were not far enough from the center.
I was experimenting with different sets of bodies and even if I still use just 2 of them, performance can vary greatly based on their movement.
I was wrong about that; performance was affected by the bodies’ size, not their position. That is caused just by bad coding in another part of my sim.
What is really helpful, though, is multithreading. Separating the program into two main threads was a really good idea. With that, I am able to get higher simulation fps, and when the fps gets low, the program is still responsive. Using Parallel.ForEach helps when calculating multiple bodies (for two it actually slowed things down, but that doesn’t matter); with 5 bodies I got a 20% fps boost.
As for the fancier integration methods, well… I will leave those for another time; they are too hard for me to code right now, even though Leapfrog integration suits my needs well.
I want to really, really thank all of you, especially @Damian_Eaglestein. Thanks to you, not only was I able to do what I wanted, but I also learned a lot of new things. See you around!
Yes, floats suck, and errors can add up, especially with Euler. Small time steps give them more chances to add mutations to the physics; instead of conserving momentum the system gains energy, which leads to positive feedback, nonlinear effects take over, and things explode. I don’t mind sharing, because I’ve been doing numerical calculus for games for like 25 years and I’m still not bored with making little puppets and universes and fluids.
Even for a simple rope I damp each iteration by 0.999999 here and there to keep my wave thing stable. One thing to add: if you have 4 cores, you should see almost exactly 4x improvement (in a release build) on a big set, say 40 or 400 bodies at least, since the thread pool is already done for you, as long as the order of operations doesn’t matter and such. Anything less means the set is too small, there is an applicability issue, or the timer resolution is bad (SetTimerResolution(1), or use huge repeated loops to time stuff). Also use .NET 6 and put your core code in a .NET Standard 2.0 DLL; I got a 40% boost for free and run my physics at up to 2000 fps. Even if I draw only 60 fps, I can test faster. At normal time my step is 1/80; too big or too small and it breaks down. Also, really look at Numerics, SIMD, and structs; Bepu’s performance is all about cache consciousness, and he got incredible pipelining. There’s so much to explore in this space, and a compute shader is how you would really want to do this; that is being added to MG 8.2 or something. You could make a galaxy with that.
I did an experiment with Vector2 (float) and Numerics, and it was more than 2x faster. I expected 2x, since X and Y run at the same time. I just added up a list of random Vector2s, normalized them, and spat out the output so it doesn’t get optimized away. It can solve iterative differential equations for you as well, and has templated types in the new .NET 6 preview. Also, Microsoft is making it easier to use a library with operators and plug in any type: Preview Features in .NET 6 - Generic Math - .NET Blog. Anyway, it turned out C# was a good choice here; most science code is still Python or C++ or Fortran (or F#), and I keep telling them to try C#.
Here is a quick tip related to integration: explicit methods tend to gain energy*. Forward Runge-Kutta is more accurate for a given step size, but not more stable; in fact, it can be worse for highly nonlinear systems.
For games, where stability is crucial across a wide range of inputs and performance needs to be consistent, I would recommend an unconditionally stable method like the Trapezoidal Rule (2nd order implicit Euler) or Backward Euler.
Learning the implicit techniques will save you time overall, especially if you want to ship a game that doesn’t explode.
I mostly use floats and occasionally float16s. Provided that you have enough precision to model the level of detail that you need, floats shouldn’t cause new stability issues.
* There are workarounds like artificial damping, but how much? Calculating how much damping you need to ensure stability is not trivial, and maintaining that code is not fun.
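The stability difference is easy to see on the classic damping test equation dv/dt = −k·v (a sketch; the class name is made up): the explicit Euler step multiplies by (1 − k·dt) and diverges once k·dt > 2, while the backward Euler step divides by (1 + k·dt) and shrinks |v| for any positive step size:

```csharp
using System;

static class ImplicitDemo
{
    // Test equation dv/dt = -k*v (pure damping).
    // Explicit Euler: v_{n+1} = v_n * (1 - k*dt); blows up when k*dt > 2.
    public static double ExplicitStep(double v, double k, double dt) => v * (1 - k * dt);

    // Backward (implicit) Euler: solve v_{n+1} = v_n - k*dt*v_{n+1},
    // giving v_{n+1} = v_n / (1 + k*dt); stable for any positive dt.
    public static double BackwardStep(double v, double k, double dt) => v / (1 + k * dt);
}
```

For this linear equation the implicit solve is a one-line division; for a full physics system it becomes a (possibly nonlinear) system solve per step, which is the price of unconditional stability.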
I’m learning semi-Lagrangian advection now. This is unconditionally stable too, and it has its own issues, but I’m making a game, so it’s good enough for me. I was struggling for a long time, but now I’m convinced a universe sim ends up being a chimera CFD thing: it needs at least one tensor field model, and there is no grand unified theory. Some Eulerian parts and some Lagrangian bits, woven together, is the way to get it to run at 2000 fps.
But when you are doing N-body physics on sparse spaces, for performance you might want to use nested spaces, preferably each with its own inner spatial index. The ancient ODE engine had a way to collide two spaces; even modern engines don’t, and they should. The basic idea is that one space could have a tree and another a hash or a grid, to quickly find each pair of bodies. Each space has an AABB or some sort of bounds. For the bodies in the intersection of two outer subspaces, collide only those together, by converting the coordinates of one set into the other’s space. In your case you would have a sort of escape field-strength cutoff.
You might want to see this discusson on Bepu: Celestial Body or micro world simulation · Discussion #223 · bepu/bepuphysics2 · GitHub
But this is still Newtonian, and if you want to get beyond it you have to get into fluid dynamics and Eulerian methods (grid-based fields). The universe is a big fluid on a warped space, not a set of rigid bodies, and Newton’s laws aren’t going to get you warped space. Even weathermen dealing with the curved surface of the Earth use more advanced methods than what we are doing.