FPS drops, but Draw() function duration remains constant

Hi,

I am seeing some weird behaviour with FPS drops in my game (this is on Windows, DirectX).

I render my scene to a RenderTarget2D, and then render this texture, along with my UI, to the back buffer. I am using a stopwatch to monitor the time it takes to render my scene to the render target, and the same stopwatch to monitor my FPS (the number of times the Draw function is called per second).
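
For context, my FPS measurement boils down to something like this (a minimal sketch; the real code lives in my Console.Timer class):

    // Requires: using System.Diagnostics;
    private readonly Stopwatch fpsWatch = Stopwatch.StartNew();
    private int drawCalls;

    protected override void Draw(GameTime gameTime)
    {
        drawCalls++;

        // Once per second, report how many times Draw was called.
        if (fpsWatch.Elapsed.TotalSeconds >= 1.0)
        {
            Debug.WriteLine($"FPS: {drawCalls / fpsWatch.Elapsed.TotalSeconds:0.0}");
            drawCalls = 0;
            fpsWatch.Restart();
        }

        base.Draw(gameTime);
    }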

Sometimes my FPS drops, even though the time it takes to draw my scene to the render target remains constant (the time it takes to render the texture + the UI to the back buffer remains constant as well).

Is there an explanation as to why the Game object decides to call my Draw function less often? Looking at the performance profiler, I see that the game spends a lot of its time in the "Present" function, but I don't see how this helps me.

Thank you to anyone who knows what is going on.

Maybe it's not the Draw() function but the Update() one?

Plus, you will have some small FPS drops because of garbage collection every few seconds (depending on how much garbage you create, of course).
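
A quick way to check for that is to poll the GC counters each frame (a minimal sketch; lastGen0/1/2 are fields I made up for the example):

    // A collection count that jumps exactly when the FPS drops
    // would point at the garbage collector.
    int gen0 = GC.CollectionCount(0);
    int gen1 = GC.CollectionCount(1);
    int gen2 = GC.CollectionCount(2);

    if (gen0 != lastGen0 || gen1 != lastGen1 || gen2 != lastGen2)
        System.Diagnostics.Debug.WriteLine($"GC: {gen0} / {gen1} / {gen2}");

    lastGen0 = gen0; lastGen1 = gen1; lastGen2 = gen2;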

Thank you kosmonautgames,

It's not garbage creation (I am watching the GC collections, and none happen when the FPS drop occurs)

It's not the Update method either (I am logging the FPS for my Draw and Update methods separately)

In the following picture, my FPS drops to around 50, even though all my draw and update work takes well under a millisecond:

The numbers next to the draw_ and update_ labels are the times it takes to render my scene (which is very simple), in milliseconds.

My Draw function is very simple; is there somewhere else I should be starting/stopping my timers?

    /// <summary>
    /// This is called when the game should draw itself.
    /// </summary>
    /// <param name="gameTime">Provides a snapshot of timing values.</param>
    protected override void Draw(GameTime gameTime)
    {
        Console.Timer.StartMeasure(TimerMeasure.draw_total);

        Console.Timer.StartMeasure(TimerMeasure.draw_main);
        viewports[mainViewport].Render(GraphicsDevice, spriteBatch, contentHolder, PhysicsScene);
        Console.Timer.StopMeasure(TimerMeasure.draw_main);

        base.Draw(gameTime);

        Console.Timer.StopMeasure(TimerMeasure.draw_total);
        Console.Timer.SetMark(TimerMeasure.fps_draw);
    }

EDIT: Just a quick sanity check: I added a dirty Thread.Sleep(10) in the middle of my UI drawing code, just to see if maybe there was something wrong with my timers, and there isn't. (The time needed to draw the UI according to the stopwatch went up by 10ms.)

Could you measure the timing between the end of Draw() and the call to BeginDraw()?
And between the end of Update() and the call to BeginDraw()? (Overriding BeginDraw, of course.)
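
Something like this, for example (a rough sketch; drawGap is just a placeholder name):

    // Requires: using System.Diagnostics;
    private readonly Stopwatch drawGap = new Stopwatch();

    protected override bool BeginDraw()
    {
        if (drawGap.IsRunning)
        {
            // Time since the previous Draw() returned; this span contains
            // EndDraw()/Present() and the following Update().
            Debug.WriteLine($"Draw -> BeginDraw: {drawGap.Elapsed.TotalMilliseconds:0.00} ms");
            drawGap.Reset();
        }
        return base.BeginDraw();
    }

    protected override void Draw(GameTime gameTime)
    {
        // ... existing drawing code ...
        base.Draw(gameTime);
        drawGap.Restart();   // start timing as Draw returns
    }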

Sure. When not lagging (60 FPS), the time between the end of Draw and the call to BeginDraw() is 16ms.
When my FPS drops to around 50, the time between the end of Draw and the call to BeginDraw() is 19-20ms.

The time between the end of Update() and the call to BeginDraw() is negligible in both cases (0.2ms)

Let's test something: download and run this tool, then close all other programs and run your game. https://vvvv.org/contribution/windows-system-timer-tool

What is the Current Timer? It should be something close to 16ms.

Set the value to the minimum (0.5 or 1.0ms); does it make any difference?
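
For reference, the tool just asks Windows for a finer system timer resolution; the same request can be made from code via the winmm API, roughly like this (a sketch; the class and method names are mine):

    using System.Runtime.InteropServices;

    static class TimerResolution
    {
        [DllImport("winmm.dll")]
        private static extern uint timeBeginPeriod(uint periodMs);

        [DllImport("winmm.dll")]
        private static extern uint timeEndPeriod(uint periodMs);

        public static void Acquire() { timeBeginPeriod(1); } // request ~1ms ticks
        public static void Release() { timeEndPeriod(1); }   // must pair with Acquire
    }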

The current timer was 16ms, but changing it to 1ms does not change the behaviour of my application:
http://imgur.com/a/SAT3N

Can you turn off IsFixedTimeStep & graphics.SynchronizeWithVerticalRetrace to see how many frames you are getting?
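
i.e. something like this in your Game constructor (assuming graphics is your GraphicsDeviceManager field):

    // The settings are picked up when the graphics device is created.
    IsFixedTimeStep = false;                          // stop throttling Update/Draw to 60 Hz
    graphics.SynchronizeWithVerticalRetrace = false;  // stop Present() waiting for vsync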

I get around 111 FPS when everything is fine, and it drops to about 40 FPS when the drop happens.

The FPS drop happens depending on where I position my camera, or on what part of the scene my shadow map is looking at.

What I don't get, though, is that the lag does not occur where it should (i.e. when I draw my complex scene to the render target, or when I draw my render target + UI to the back buffer), but between the end of my Draw call and the beginning of the next BeginDraw().

Is there anything that could cause the Present() method to be so slow?

Is it a constant drop when you are in the "critical" zone? Maybe you have an "overtessellated" model, or one badly shaped for culling? (Double-sided? etc.)

That's normal. Draw calls are recorded in a queue and executed on another thread. Present() will block the calling thread until everything is finished and the result is copied to the display. To verify this, call .GetData() on your render target (after you set the device target back to null). The driver will then block the current thread until the GPU finishes with the render target.
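
Something along these lines (a sketch, for profiling only; renderTarget stands in for your RenderTarget2D):

    // Force a CPU/GPU sync: GetData cannot return until the GPU has
    // finished writing the render target. The readback allocates and
    // stalls, so strip it from release builds.
    GraphicsDevice.SetRenderTarget(null);
    Color[] pixels = new Color[renderTarget.Width * renderTarget.Height];
    renderTarget.GetData(pixels);   // blocks until the GPU is done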

I guess there's something wrong with your shadow map that's slowing down the GPU. Take a second look at your code and shaders.

That's interesting; it means that my current timer approach is completely useless (I use it to monitor and budget the time allocated to my shadow map, the actual rendering of the scene, FXAA…)

If all these things are jumbled together behind the scenes and executed in sequence, my timers are useless… Is there a way to block the current thread after each part of my draw step? (Without calling GetData() at each step, which must have a cost of its own, since we are transferring a texture from the GPU to the CPU.)

I guess I'll start a new thread for my shadow map problem; I just went along with the first shader that "worked", so there is probably a lot of room for improvement there.

Is there a way to block the current thread after each part of my draw step?

Maybe OcclusionQuery will do the trick. I don't really know.

I use it to monitor and budget the time allocated to my shadow map, the actual rendering of the scene, FXAA

This may have misled me. How do you allocate time to the shadow map, for example? I don't really understand how you make your draw calls fit a given time.
Do you use timers, and when one reaches its limit, you call the shadow map's draw() method? Or are you saying you manage to make your algorithm's execution time fit, say, 20ms for the shadow map?

I don't allocate time at runtime at all; I budget, for example, the size of my shadow map based on how long it takes to render it. The timers are not used at runtime; I just use them as information to tweak my rendering process (fewer samples in my shadow map shader, for example).

So OcclusionQuery sort of works; here is what I used:

    Console.Timer.StartMeasure(TimerMeasure.draw_main);

    occlusionQuery.Begin();
    viewports[mainViewport].Render(GraphicsDevice, spriteBatch, contentHolder, PhysicsScene);
    occlusionQuery.End();

    // Spin until the GPU has actually finished rendering the scene.
    while (!occlusionQuery.IsComplete)
        Thread.Sleep(1);

    Console.Timer.StopMeasure(TimerMeasure.draw_main);

This stops my draw thread until my scene is fully rendered to the render target, but the Thread.Sleep(1) is still ugly; I'd rather have a native blocking function I could call… I don't suppose calling Present() multiple times in the middle of my Draw function is recommended?

Yeah, especially since Sleep(1) actually waits for the system timer interval (15/16ms by default), so your program cannot run faster than 60 FPS any more (given the rest takes almost no time).

Unless you have Google Chrome running, that is (it raises the system timer resolution on its own).

This stops my draw thread until my scene is fully rendered to the render target, but the Thread.Sleep(1) is still ugly; I'd rather have a native blocking function I could call…

That's not supported. MSDN suggests something like 'while (!occlusionQuery.IsComplete);', which is fine on a multicore CPU. OcclusionQuery is not optimal for this; there are other types of queries, like D3D11_QUERY_EVENT, D3D11_QUERY_TIMESTAMP, and D3D11_QUERY_PIPELINE_STATISTICS, that are more relevant. There is also ID3D11Counter, which would be ideal. None of the above are supported by MG.
If you want, you can request performance counters as a feature.

Meanwhile, you can try something else.
Disable fixed timestep and vsync and measure the time it takes to draw a full frame (that would really just be the value of gameTime.ElapsedGameTime…).
Then test how the total time changes if you disable certain parts of your rendering (or update), or change one variable (e.g. render target size/format, LOD, shader) and plot how this affects the total frame time.
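
For example (a sketch; with fixed step and vsync off, gameTime.ElapsedGameTime is the real wall-clock frame time):

    protected override void Update(GameTime gameTime)
    {
        // With IsFixedTimeStep = false and vsync off, this is the
        // actual duration of the previous frame.
        double frameMs = gameTime.ElapsedGameTime.TotalMilliseconds;
        System.Diagnostics.Debug.WriteLine($"frame: {frameMs:0.00} ms");

        base.Update(gameTime);
    }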

You can also use a GPU profiler like PIX.

I am using the while (!occlusionQuery.IsComplete); solution for now. It's not ideal, but it will do to help me troubleshoot my shadow map performance issue, and I can always strip it out of my release build.

Thank you!
