I am seeing some weird behaviour with FPS drops in my game (this is on Windows, DirectX).
I render my scene to a RenderTarget2D, and then render this texture, along with my UI, to the back buffer. I use a stopwatch to measure the time it takes to render my scene to the render target, and the same stopwatch to monitor my FPS (the number of times the Draw function is called per second).
Sometimes my FPS drops, even though the time it takes to draw my scene to the render target remains constant (as does the time it takes to render the texture + the UI to the back buffer).
Is there an explanation as to why the Game object decides to call my Draw function less often? Looking at the performance profiler, I see that the game spends a lot of its time in the "Present" function, but I don't see how this helps me.
The numbers next to draw_ and update_ are the times it takes to render my scene (which is very simple), in milliseconds.
My Draw function is very simple; is there somewhere else I should be starting/stopping my timers?
/// <summary>
/// This is called when the game should draw itself.
/// </summary>
/// <param name="gameTime">Provides a snapshot of timing values.</param>
protected override void Draw(GameTime gameTime)
{
    Console.Timer.StartMeasure(TimerMeasure.draw_total);
    Console.Timer.StartMeasure(TimerMeasure.draw_main);
    viewports[mainViewport].Render(GraphicsDevice, spriteBatch, contentHolder, PhysicsScene);
    Console.Timer.StopMeasure(TimerMeasure.draw_main);
    base.Draw(gameTime);
    Console.Timer.StopMeasure(TimerMeasure.draw_total);
    Console.Timer.SetMark(TimerMeasure.fps_draw);
}
EDIT: Just as a quick sanity check, I added a dirty Thread.Sleep(10) in the middle of my UI drawing code, to see if maybe there was something wrong with my timers, and there's not. (The time needed to draw the UI according to the stopwatch went up by 10 ms.)
Could you measure the timing between the end of Draw() and the call to BeginDraw()?
And between the end of Update() and the call to BeginDraw()? (Overriding BeginDraw, of course.)
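The measurement suggested above can be sketched like this, assuming the game class follows the standard XNA/MonoGame Game template (the class name MyGame and the console logging are placeholders, not the original poster's code):

```csharp
using System.Diagnostics;
using Microsoft.Xna.Framework;

public partial class MyGame : Game
{
    private readonly Stopwatch _gapTimer = new Stopwatch();

    protected override void Draw(GameTime gameTime)
    {
        base.Draw(gameTime);
        // Start timing as the last thing Draw() does...
        _gapTimer.Restart();
    }

    protected override bool BeginDraw()
    {
        // ...and read the elapsed time when the framework calls BeginDraw()
        // for the next frame. This is the Draw -> BeginDraw gap.
        if (_gapTimer.IsRunning)
        {
            System.Console.WriteLine(
                $"Draw -> BeginDraw gap: {_gapTimer.Elapsed.TotalMilliseconds:F2} ms");
            _gapTimer.Reset();
        }
        return base.BeginDraw();
    }
}
```

The same pattern (restart at the end of Update(), read in BeginDraw()) covers the second measurement.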
Sure. When not lagging (60 fps), the time between the end of Draw and the call to BeginDraw() is 16 ms.
When my FPS drops to around 50, the time between the end of Draw and the call to BeginDraw() is 19-20 ms.
The time between the end of Update() and the call to BeginDraw() is negligible in both cases (0.2 ms).
I get around 111 fps when everything is fine, and it drops to about 40 fps when the drop happens.
The FPS drop happens depending on where I position my camera, or on what part of the scene my shadow map is looking at.
What I don't get, though, is that the lag does not occur where it should (i.e. when I draw my complex scene to the render target, or when I draw my render target + UI to the back buffer), but between the end of my Draw call and the beginning of the next BeginDraw().
Is there anything that could cause the Present() method to be so slow?
Is it a constant drop when you are in the "critical" zone? Maybe you have a model that is "overtessellated", or badly shaped for culling? (Double-sided? etc.)
That's normal. Draw calls are recorded in a queue and executed on another thread. Present() will block the calling thread until everything is finished and the result is copied to the display. To verify this, call .GetData() on your render target (after you set the device's render target back to null). The driver will then block the current thread until the GPU is finished with the render target.
I guess there's something wrong with your shadow map that's slowing down the GPU. Take a second look at your code and shaders.
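The GetData() trick can be sketched as follows (a fragment, not a full Draw method; sceneTarget stands in for whatever RenderTarget2D the scene is drawn to):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// After rendering the scene to 'sceneTarget', unbind it first:
GraphicsDevice.SetRenderTarget(null);

// Reading even a single texel back forces the driver to flush its command
// queue and wait for the GPU to finish with the render target. A stopwatch
// stopped after this line therefore measures real GPU time, not just the
// cost of queuing the draw calls.
Color[] pixel = new Color[1];
sceneTarget.GetData(0, new Rectangle(0, 0, 1, 1), pixel, 0, 1);
```

Note that the readback itself has a cost (a GPU-to-CPU transfer and a full pipeline stall), so this is a diagnostic tool, not something to leave in a shipping build.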
That's interesting; it means that my current timer approach is completely useless (I use it to monitor and budget how much time to allocate to my shadow map, the actual rendering of the scene, FXAA…).
If all these things are jumbled together behind the scenes and executed in sequence, my timers are useless… Is there a way to block the current thread after each part of my draw step? (Without calling GetData() at each step, which must have a cost of its own, since we are transferring a texture from the GPU to the CPU.)
I guess I'll start a new thread for my shadow map problem; I just went along with the first shader that "worked", so there is probably a lot of room for improvement there.
I use it to monitor and budget what time to allocate to my shadowmap, actual rendering of the scene, fxaa
This may have misled me. How do you allocate time to the shadow map, for example? I don't really understand how you make your draw calls fit within a given time.
Do you use timers, and when one reaches its limit, you call the shadow map's Draw() method? Or are you saying you manage to make your algorithm's execution time fit, say, 20 ms for the shadow map?
I don't allocate time at runtime at all. I budget, for example, the size of my shadow map based on how long it takes to render. The timers are not used at runtime; I just use them as information to tweak my rendering process (fewer samples in my shadow map shader, for example).
This stops my draw thread until my scene is fully rendered to the render target, but the Thread.Sleep(1) is still ugly; I'd rather have a native blocking function I could call… I don't suppose calling Present() multiple times in the middle of my draw function is recommended?
This stops my draw thread until my scene is fully rendered to the render target, but the Thread.Sleep(1) is still ugly, I'd rather have a native blocking function I could call…
That's not supported. MSDN suggests something like "while (!occlusionQuery.IsComplete);", which is fine on a multicore CPU. OcclusionQuery is not optimal; there are other types of queries, like D3D11_QUERY_EVENT, D3D11_QUERY_TIMESTAMP, and D3D11_QUERY_PIPELINE_STATISTICS, that are more relevant. There is also ID3D11Counter, which would be ideal. None of the above are supported by MG.
If you want, you can request a feature support for performance counters.
Meanwhile you can try something else.
Disable FixedStep and VSync and measure the time it takes to draw a full frame (that would be the value of gameTime.ElapsedGameTime, really…).
Then test how the total time changes if you disable certain parts of your rendering (or update). Or change one variable (e.g. render target size/format, LOD, shader) and plot how it affects the total frame time.
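A minimal sketch of that setup (the class name MyGame and the title-bar logging are illustrative, not from the thread):

```csharp
using Microsoft.Xna.Framework;

public partial class MyGame : Game
{
    public MyGame()
    {
        var graphics = new GraphicsDeviceManager(this);

        // Run Update/Draw as fast as possible instead of locking to 60 Hz...
        IsFixedTimeStep = false;
        // ...and don't make Present() wait for the monitor's vertical retrace.
        graphics.SynchronizeWithVerticalRetrace = false;
    }

    protected override void Draw(GameTime gameTime)
    {
        // With both caps off, ElapsedGameTime is the true time per frame,
        // including the GPU work that Present() otherwise hides.
        double frameMs = gameTime.ElapsedGameTime.TotalMilliseconds;
        Window.Title = $"frame: {frameMs:F2} ms";
        base.Draw(gameTime);
    }
}
```

With the frame rate uncapped, toggling one rendering feature at a time (shadow map resolution, FXAA, etc.) shows up directly as a change in this number.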
I am using the while (!occlusionQuery.IsComplete); solution for now. It's not ideal, but it will do for now to help me troubleshoot my shadow map performance issue, and I can always strip it out of my release build.
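For reference, the busy-wait workaround discussed above looks roughly like this (a fragment meant to sit inside Draw(), right after the draw calls being timed; whether an empty Begin()/End() pair is enough to fence the preceding work is an assumption based on how the query is described in this thread):

```csharp
using Microsoft.Xna.Framework.Graphics;

// Issue a query right after the draw calls you want to time. The GPU
// processes its command queue in order, so the query cannot complete
// before everything submitted ahead of it has finished.
var query = new OcclusionQuery(GraphicsDevice);
query.Begin();
query.End();

// Busy-wait: IsComplete stays false until the GPU reaches the query.
// This burns a CPU core, which is why it is debug-only.
while (!query.IsComplete) { }

// Only now does a stopwatch reading reflect actual GPU work.
```

In a real build you would create the OcclusionQuery once and reuse it each frame rather than allocating a new one per measurement.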