AFAIK, with XNA/MonoGame there is a redrawing of the exact same picture (the game itself) every single frame, even when nothing at all changes in the displayed world.
I would like to know why XNA was designed this way: does it come from DirectX itself? Or from GPU behaviour ("the way it's meant to be played", according to NVIDIA)?
I remember a time when every single bit was spared unless strictly needed, a time when no frameworks existed at all and everything had to be done over and over from scratch in assembly, so I wonder whether there is a way to send the data to the GPU once, keep the picture on screen, and use the spared resources to do something else, or nothing at all.
I think some people (at Microsoft maybe, for XNA, or even DirectX) miss a point: they don't have to, or shouldn't have to, do the same thing as the old CRT monitors (i.e. move the electron beam and redraw the screen many times a second).
This may be considered quite naive, especially in an era of wasting resources for mere profit, but it bugs me.
Alternatively, if you are willing to do some manual optimization, you can keep a global needsUpdate flag: everywhere you modify data that would require a redraw, set needsUpdate to true, and when the draw is complete set it back to false. Then at the beginning of your Draw method, before the clear, do a check: if (!needsUpdate) return;. I did this in a project I was working on for a friend recently. Works GREAT!
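A minimal sketch of that dirty-flag idea in an XNA/MonoGame `Game` subclass (the `needsUpdate` field and the `WorldChangedThisFrame` helper are hypothetical placeholders for whatever your game actually modifies):

```csharp
// Hypothetical sketch of the dirty-flag pattern described above.
private bool needsUpdate = true; // start true so the first frame draws

protected override void Update(GameTime gameTime)
{
    // Any change that affects what is on screen sets the flag.
    if (WorldChangedThisFrame()) // hypothetical helper
        needsUpdate = true;
    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    if (!needsUpdate)
        return; // nothing changed: skip the clear and all draw calls

    GraphicsDevice.Clear(Color.CornflowerBlue);
    // ... draw the scene here ...
    needsUpdate = false;
    base.Draw(gameTime);
}
```

Note that XNA 4.0 / MonoGame also expose Game.SuppressDraw(), which tells the framework to skip the next Draw call entirely; calling it from Update when nothing changed achieves a similar effect without an early return.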
Want to draw the same thing every frame? Remove this line from your Draw() method body:
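In the default XNA/MonoGame project template, the line in question is the back-buffer clear, which looks like this:

```csharp
GraphicsDevice.Clear(Color.CornflowerBlue);
```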
This will prevent the clearing of the back buffer — the section of memory on your GPU that is sent to the screen every single frame — and thus will preserve the result of previous draw calls.
This question is silly and I had fun writing this response while drinking heavily. Please take my response with a grain of salt.
Why does XNA work this way?
XNA was done this way because most modern games are done this way. You asked if it comes from DirectX itself, but actually it goes beyond that, to the hardware itself. Your GPU is always putting a new image on the screen at a regular interval determined by your monitor's refresh rate. The information is sent 60 or more times a second regardless of what you do, because hardware manufacturers designed it that way. By assuming this use case, hardware manufacturers have made life easier for people who want to make real-time applications. In a sense, optimizing that process is the entire purpose of modern GPU hardware.
The time when every single bit needed to be spared has long passed for the average developer. There are people on the cutting edge of development who still worry about those bits, but they are working for Nvidia or ATI, trying to build a better device driver to optimize their hardware for a game with a 500 million dollar budget built by a small city of developers. I think if you were to consider how insanely powerful modern computer hardware is, you might feel a little pedantic worrying about the cost of redrawing a scene every frame. I mean, worrying about redrawing the same scene elements each frame is Nintendo-level stuff… my phone is powerful enough to run a real-time simulation of an entire Super Nintendo chipset (an emulator) in the background while streaming 1080p video and browsing reddit in another tab.
At some point hardware got so fast someone eventually said “why not make it just redraw everything every frame by default” and that was so long ago that many people who program games today can’t recall a time when it worked any other way.
Also, Microsoft isn't naive for making XNA this way; they made XNA as an abstraction tool for people who value precious developer effort more than bountiful processor time. It is designed to waste a little processor time to make life easier for the programmer, because that's usually a damn good trade-off. If you prefer to do things the hard way, and it bothers you to have things done for you, I would suggest that probably neither XNA nor MonoGame is right for you.
I sympathize with InfiniteProductions: redrawing everything frame by frame consumes clock cycles, CPU time, and battery life in mobile applications. I think a nice compromise can be found in composing layers on a larger canvas. What we call a sprite is a set of images, or sometimes just one big image. Though most tutorials go straight to the graphics device, an initial complex set of drawings can be rendered to one sprite layer once and then reused from that same "sprite" over and over if nothing changes. We used to implement double buffering — drawing to two or more frames and alternating which finished scene was presented — to avoid flickering updates. Now that CPUs and GPUs are fast enough, we start and stop the whole sprite batch to the screen with the knowledge that our hardware can handle it.
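One way to sketch that layering idea in XNA/MonoGame is to render the expensive static content once into a RenderTarget2D and then cheaply blit the cached texture each frame; the `DrawComplexBackground` helper and `layerDirty` flag below are illustrative, not part of any real API:

```csharp
// Hypothetical sketch: cache a complex static layer in a render target.
RenderTarget2D staticLayer;
bool layerDirty = true;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    staticLayer = new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
}

protected override void Draw(GameTime gameTime)
{
    if (layerDirty)
    {
        // The expensive drawing happens only when the layer changes.
        GraphicsDevice.SetRenderTarget(staticLayer);
        GraphicsDevice.Clear(Color.Transparent);
        spriteBatch.Begin();
        DrawComplexBackground(spriteBatch); // hypothetical helper
        spriteBatch.End();
        GraphicsDevice.SetRenderTarget(null); // back to the back buffer
        layerDirty = false;
    }

    // Every frame: just copy the cached layer to the screen.
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(staticLayer, Vector2.Zero, Color.White);
    spriteBatch.End();
    base.Draw(gameTime);
}
```

The per-frame cost then drops to a single textured quad, while dynamic elements can still be drawn on top of the cached layer each frame.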