Hello fellow devs,
I’m evaluating the use of RenderTarget.GetData to measure the light exposure of the player’s sprite in my 2D DesktopGL game.
The concept is simple:
- Lights are sprites rendered into a RenderTarget, then blended with the scene.
- I have a dedicated, very small RenderTarget (32x32) where the lights affecting the player are rendered.
- Every 500 ms, I render the player’s lights into the small RT, then request the RT data.
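For reference, the readback step looks roughly like this (a minimal sketch; `lightRT` is the 32x32 player-light RenderTarget2D, and I’m assuming light intensity ends up in the color channels):

```csharp
// Reusable buffer: 32x32 pixels, allocated once to avoid per-call GC pressure.
Color[] buffer = new Color[32 * 32];

// Every 500 ms, after the player's lights have been drawn into lightRT:
lightRT.GetData(buffer);   // synchronous GPU -> CPU readback (the slow call)

// Average the red channel as a crude exposure metric.
float exposure = 0f;
for (int i = 0; i < buffer.Length; i++)
    exposure += buffer[i].R / 255f;
exposure /= buffer.Length;
```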
Now here’s my problem.
I’m testing this project on several machines with some puzzling results.
On two Linux machines:
- one is a powerful machine with an AMD CPU and an Nvidia GPU
- the other is a noticeably weaker laptop with AMD graphics
Result: 8 ms to get the RT data.
On two Windows machines:
- both have the same graphics card (a GTX 960) and a decent Intel CPU
- one runs Windows 11, the other Windows 10
Result: 49 ms on one, 33 ms on the other, to get the RT data.
Worth knowing:
- I also ran tests on consoles, where the result was under 10 ms, IIRC.
- Changing the RT size does not lead to significant performance changes.
- I tested getting the RT data during Update and during Draw; the results are the same.
As you can see, the differences are significant, and the time needed to read the data back is incompatible with a smooth playing experience on the two Windows machines.
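For reference, the numbers above are measured roughly like this (a sketch; `lightRT` and `buffer` are the player-light RT and its readback array):

```csharp
var sw = System.Diagnostics.Stopwatch.StartNew();
lightRT.GetData(buffer);   // the single call being timed
sw.Stop();
Console.WriteLine($"GetData took {sw.Elapsed.TotalMilliseconds:F1} ms");
```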
I honestly don’t understand how such differences are possible.
Now, my questions:
- Is there a “state of the art” approach to reading data back from the GPU?
- A specific timing in the game loop?
- A specific thread?
- Some wizardly memory alignment to set?
- Should the Windows machines be using DirectX rather than DesktopGL?
I’d like to be sure I’ve tried everything and done my best before implementing a different solution to my needs.
Thanks for any answers.