Windows Control Rendering from XNA to MonoGame 3.8

I have a project that I have been working on for a long time using XNA 4.0. Obviously, MS abandoned XNA a very long time ago, so I thought I’d move on to MonoGame. The port itself was quite painless, but the rendered result has been a garbled mess.

The program is a WinForms application written in VB.NET. When the program starts up, a full-screen form is created and a series of icons are drawn around the side. The middle of the form has a panel on it, and a series of other controls drawn onto that panel. The panel and all the controls on it use XNA to draw themselves, but they are derived from Windows.Forms.Control.

I almost certainly got this code from a tutorial about 10 years back, and I’m not sure I could find it again. Still, the code didn’t change except for one line, and that line should be meaningless. The key point is that everything happens in response to the Paint event of the control, and that code is here:

Protected Overrides Sub OnPaint(e As System.Windows.Forms.PaintEventArgs)
    If Me.Visible Then
        Dim beginDrawError As String = BeginDraw()
        If String.IsNullOrEmpty(beginDrawError) Then
            Draw()
            EndDraw()
        Else
            PaintUsingSystemTools(e.Graphics, beginDrawError)
        End If
    End If
End Sub

BeginDraw is very simple: it checks that the GraphicsDevice is in good shape, and if it is, it sets the Viewport to the size of the client area of the control being drawn.
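Roughly, BeginDraw amounts to this sketch (the member names, like mGDeviceService, are my own and match the snippet later in this post, not necessarily the tutorial's exact code):

```vb
' Sketch only. Checks that the shared GraphicsDevice is usable, then
' points its viewport at this control's client area.
Protected Function BeginDraw() As String
    If mGDeviceService Is Nothing OrElse mGDeviceService.GraphicsDevice Is Nothing Then
        Return "The graphics device is unavailable."
    End If

    ' Restrict drawing to this control's client rectangle.
    Dim vp As New Viewport()
    vp.X = 0
    vp.Y = 0
    vp.Width = Me.ClientSize.Width
    vp.Height = Me.ClientSize.Height
    mGDeviceService.GraphicsDevice.Viewport = vp

    Return Nothing ' an empty error string means drawing can proceed
End Function
```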

EndDraw is just a call to GraphicsDevice.Present. XNA 4.0 had three overloads of this method, while MonoGame has only one, with no arguments. That is the only difference between my old XNA code and the MonoGame code. I had this line:

mGDeviceService.GraphicsDevice.Present(srcRect, Nothing, Me.Handle)

where srcRect is just the same viewport rectangle set in BeginDraw. It seems like this overload should be unnecessary, because the GraphicsDevice already has the rectangle and the handle.

What is happening, though, is that various textures are being drawn onto the screen in various places, with most of the screen being white. Because each control has a series of regions within it that show tooltips, I can move the mouse around the screen and see that the controls are all being sized and positioned correctly; it is just that the drawing is…well, a mess that I can’t quite explain. The drawing is mostly absent, and what is there seems to be nearly randomly selected from the textures I have loaded.

So, the way control drawing works is that the panel gets invalidated, which causes all the child controls to invalidate. The Paint event is raised first for the panel, then for the child controls. I can see that this is happening, because I can watch the sequence of Paint events being raised.

As for the Draw methods: none of those changed when I moved from XNA to MonoGame, so I suspect that this has something to do with the setup code, and not with the code in the Draw routines, which position and compose sprites using SpriteBatch objects.

I fear that I have posted an incomplete question, but I don’t have enough experience with this to be certain as to what is relevant and what is not.

A couple of additional points:

  1. All the controls aside from the panel are dynamically loaded plugins.

  2. The panel, the MonoGame base classes, and the control plugins all target .NET Framework 4.7.2. The main program that loads all these things is currently on Framework 4.0.

Hope to get this sorted.

There is a project, MonoGame.Forms by @BlizzCrafter, which might be interesting for you.

I studied the code in that project, and found that it was VERY similar to what I already had. There was one key difference, which is the use of the SwapChainRenderTarget. In the code I had working, there was nothing like that.

From the comments in the project you linked to, this seems like something that I would need to have, but I didn’t. It was an easy thing to add, and it mostly got the project working again. Everything renders as it did before. The one difference is that it is SLOW. Before, I could zoom in and out and the whole screen would redraw so quickly that it appeared instant. Now, while the refresh is sub-second, which is tolerable, it isn’t instant anymore. You can see the various controls draw onto the screen. Sure, it’s done inside of a second, and the visual effect isn’t entirely without a certain beauty, but you SEE it, where before it was too fast to see.
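For anyone else hitting this, the change was roughly the following sketch (the member names are mine; see the MonoGame.Forms source for the real thing):

```vb
' Created once, when the control gets its handle (and recreated on resize).
mSwapChain = New SwapChainRenderTarget(mGDeviceService.GraphicsDevice,
                                       Me.Handle,
                                       Me.ClientSize.Width,
                                       Me.ClientSize.Height)

' Then, on every paint:
mGDeviceService.GraphicsDevice.SetRenderTarget(mSwapChain)
Draw()               ' the existing SpriteBatch composition, unchanged
mSwapChain.Present() ' replaces the old GraphicsDevice.Present(srcRect, Nothing, Me.Handle)
```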

Now, the only change I have from before to after is the addition of the SwapChainRenderTarget, which I added more to try it out than anything else. I’m certainly not clear on what it is really doing for me, though it seems kind of obvious, just from the name. However, that leaves me with a couple questions, and I’d take answers to any or all:

  1. What changed between the old XNA 4.0 and MonoGame 3.8 such that I have to use the SwapChainRenderTarget where I didn’t before?

  2. What is the SwapChainRenderTarget and what does it do? Any discussion of that object would help.

  3. Is there anything I can do to speed up the rendering to get back to my instantaneous speed? One thing I should add about this last one was that previously, I had tried my old code with and without Vsync (I may have the wrong term, as that’s mostly from the CRT days). The display was quite smooth either way, but with it synced there was a slight, fraction of a second, pause when zooming in or out, so I left it off.

  1. DX9 (XNA) had a GraphicsDevice.Present() overload that could render to a specific child hWnd/Rect. In DX11 (MG) this overload no longer exists.

  2. SwapChainRenderTarget.Present() is a replacement for device.Present(hWnd). It attaches a SwapChain to a window, and on the front end it pretends to be a RenderTarget so that you can set it as a render target. There is a class with a similar name and behavior in UWP XAML.

  3. I have multiple windows with no apparent slowdown, but to be fair I did change the code a bit. On that note, I am currently trying to move away from SwapChainRenderTarget and replace it with an ImageSource (Bitmap or D3DImage) to solve the airspace problem.


Well, that certainly answered the key question, as there was a change between DX9 and DX11.

When you say multiple windows, how many? In my case, there is a panel with very little drawn on it, but that panel then contains other controls that act a bit like pictureboxes in that they show a texture onto which a series of other layers are placed. The current test case has 62 controls, which isn’t an unreasonable number for this problem, though I know of cases that would require as many as 200. At some point, a bit of slowdown would be inevitable. I had to go to XNA after I had squeezed every cycle I could out of a GDI+ implementation and had a few edge cases that were unacceptable.

I’m not sure what problem you are speaking of for going to an ImageSource, but perhaps my old GDI+ solution would become relevant once again. The way I had the program mostly working was to build up a cache of images at each zoom level. I had only five zoom levels, rather than infinite, so I pre-composed the images for each control at each zoom level, and cached them. Going from one zoom level to the next required no more than taking the right image from the cache. The user might alter up to two controls at a time (moving items out of one control and into another), in which case the caches of those would be invalidated. When I zoomed, only those with invalid caches would be re-composed.

All that went away with XNA, because composing and drawing appeared instantaneous with no tearing (so it could all be done within one screen refresh). This simplified the code, but perhaps it’s time to consider bringing that back. Very few of these controls change all that much, except when zooming.

I don’t feel I have a good mental model of how the various pieces fit together, largely because I wrote the composition code almost 10 years ago. I see that I create a couple RenderTarget2D objects, then use the SpriteBatch.Draw to draw on each, then merge them together and add a bit more. Would I gain much by making up a series of 5 RenderTarget2D objects, composing them for each zoom level, and using that as the cache? That would reduce the composition for most controls, as all the composition that built up the two layers of render targets would be avoided most of the time, but would it gain much of anything? Would caching the render targets be a reasonable approach? Should something else be cached that would provide a greater benefit?
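To make the question concrete, the cache I have in mind would look something like this sketch (ComposeLayers stands in for the existing two-layer composition; it and the other names are hypothetical):

```vb
' Hypothetical per-zoom-level cache: one composed RenderTarget2D per zoom
' level, rebuilt only when that level has been invalidated.
Private mZoomCache As New Dictionary(Of Integer, RenderTarget2D)
Private mDirty As New HashSet(Of Integer)

Private Function GetComposed(zoomLevel As Integer) As RenderTarget2D
    Dim rt As RenderTarget2D = Nothing
    If Not mZoomCache.TryGetValue(zoomLevel, rt) Then
        rt = New RenderTarget2D(mGDeviceService.GraphicsDevice,
                                Me.ClientSize.Width, Me.ClientSize.Height)
        mZoomCache(zoomLevel) = rt
        mDirty.Add(zoomLevel)
    End If

    If mDirty.Contains(zoomLevel) Then
        mGDeviceService.GraphicsDevice.SetRenderTarget(rt)
        ComposeLayers(zoomLevel) ' the existing SpriteBatch composition
        mGDeviceService.GraphicsDevice.SetRenderTarget(Nothing)
        mDirty.Remove(zoomLevel)
    End If

    Return rt
End Function
```

The per-paint work would then collapse to a single SpriteBatch.Draw of the cached target for the current zoom level.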

One issue with XNA is that it’s so very fast that it’s hard to figure out where the time is being spent.

62? Wow! I was thinking like 5 or 10. I suppose you don’t update them at 60 fps, or they'd start dropping frames when they run slow.
The problem I have is with hosting the control in WPF: the DirectX window doesn’t let you see XAML controls on top of it. If your app is WinForms then you wouldn’t have that issue.
I didn’t see much difference between XNA and MG, but then again I never tried to draw 60 controls.
What version of MG are you using? Did you enable anti-aliasing? Maybe you are creating new render states or render targets frequently; that’s a common issue we have seen from porting XNA code to MG.

This has been a move from XNA 4.0 to MG 3.8. It has been mostly painless, and even the refresh of 60+ controls is pretty smooth. It’s tolerable, and sometimes unnoticeable, but I’m looking to do better.

The project is WinForms and is based on dynamically loaded plugins. The controls themselves are plugins, and they display some textures also coming from other plugins. The VAST majority of the project is plain WinForms. Only the main form requires the use of the GPU to get good performance, largely due to the number of controls on there. As you might have guessed, these aren’t your typical controls. The project is a fish hatchery manager, and the controls are rearing units of different types and shapes. The user designs their hatchery using whatever rearing units they need (of many different shapes). The user interacts with the rearing units and the fish in them by dragging and dropping icons onto the fish or rearing unit, dragging fish between rearing units, and interacting with icons appearing in the rearing units. Only on one of the two main forms is it necessary to show all the rearing units, so that’s the only place the large number of rearing units matters.

I’m not doing anti-aliasing, and don’t think it would matter all that much. I’m not much of an artist, so sticking with rectangles is just fine with me, for the most part.

I do believe I am creating new render targets quite frequently. If that includes creating RenderTarget2D objects, I am currently creating two of them for every composition, which would mean creating 120+ per zoom with my current design. If that’s a bad thing, I can create them once and cache them instead. I’m currently setting up a test environment for timing different approaches, and that would be one of the first things to try.

Not sure about render states. I don’t believe I have encountered that term. What does it entail?

Hi @ShaggyTheHiker Welcome to the Forums!

I amended your title, I hope it is clearer and more on point, if not, let me know.

Happy Coding!

I meant BlendState, SamplerState, etc.

Well, I don’t remember what it was, but it certainly seems clear now, so the change probably didn’t make it any worse.


Not dealing with any of those. I do use some stencils for some controls, but that’s it.

So, what about RenderTargets? They’d be easy to cache and recycle with my current design, if that would make a difference.

For editors or tools I wouldn’t care that much unless it’s too frequently noticeable. You can’t know unless you profile it.

Working on that. Turns out, there are a few other changes that will become necessary, as well, but they’re all pretty minor, so far.

Caching render targets had no statistically significant impact on timing. It may have had a slight negative impact, which would mean that the cost of creating a render target is less than the cost of looking one up in a dictionary.

That doesn’t surprise me, as this will be the case as long as a RenderTarget doesn’t have some costly setup involved in its creation, which is rarely the case and usually a sign of bad design.

I then commented out ALL the draw calls aside from the first one, which just drew a background…and that didn’t impact the timing, either. I confirmed that the code was being run, so I was timing what I thought I was timing…sort of. It’s just that the composition step didn’t add any noticeable time at all, which means that caching isn’t going to gain me much of anything.

What I did find interesting was that this was my test loop:

For x = 0 To 10
    mTestControl.Refresh()
Next

This just forces the control to repaint over and over: one control, and the cost was up around 170ms to run this loop. If I used Invalidate rather than Refresh, it ran only once, but that’s largely because the paint events wouldn’t fire until the loop had completed, so there would effectively be only one paint event, no matter how many iterations the loop had.
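For the record, the timing itself was nothing fancy, something like:

```vb
' Refresh() is synchronous (the control paints before the call returns),
' which is why it can be timed in a loop while Invalidate() cannot.
Dim sw As Stopwatch = Stopwatch.StartNew()
For x = 0 To 10
    mTestControl.Refresh()
Next
sw.Stop()
Debug.WriteLine("Elapsed: " & sw.ElapsedMilliseconds & "ms")
```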

However, 170ms is too slow. That’s only about as good as GDI+ was doing, and there is clearly something wrong with the test, because it would mean that refreshing 60 controls requires a bit over a second, while I’m seeing a refresh time down around one or two tenths of a second at worst, and sometimes even better than that. So Refresh itself must be adding significant overhead.

After further testing, I found that I was wrong in saying that the cached render targets aren’t all that beneficial. By further isolating that part of the code, I was able to find that the cached approach is about 4x faster, all else being equal. That boost was being utterly swamped by something else in the original test, so it didn’t appear.


After a bit more testing, I’ve narrowed things down considerably. I was seeing 60+ controls refresh in sub-second time in the main app, but in a test app that used the exact same code (referenced the same underlying dlls) except for the Draw routine, I was seeing one control appear to take 17ms to refresh once.

I was able to isolate the draw routine, which was the only thing different between the test and main apps, and found that it took very little time (perhaps 0.01ms) for some fairly extensive composition.

I then added a second control so that I could refresh both in a loop, as shown earlier. The time for refreshing two controls was not different from the time to refresh one control (the 0.01ms for the composition was too small to show up in this test). So, the 17ms cost is not per control, it is paid as long as I refresh anything, and the only per control cost appears to be the trivial 0.01ms for the composition. So, if my test is doing this:
For x = 0 To 10
    mTestControl.Refresh()
    mTestControl2.Refresh()
Next
I pay a penalty of roughly 17ms per iteration of the loop, regardless of the number of controls being refreshed in the loop (as long as they are XNA controls).

The common code that the main app and the test app share is virtually identical to that used in Monogame.Forms for the GraphicsObject and GraphicsService (though it’s all VB rather than C#), and both come from a far older example from 10 years back, or so. Therefore, if anybody wants to see the code I’m using, just look at that project.

One more test, just for grins: I swapped out refreshing my custom controls for refreshing a standard WinForms button with no custom graphics at all. Interestingly, that had a cost of 0.5ms, so perhaps 0.5ms of the 17ms is being paid for ANY refresh; though refreshing two buttons cost 1.0ms, so that 0.5ms appears to be a per-control cost for WinForms controls, whereas the 17ms seems to be a fixed cost for XNA controls.

Can anyone suggest where the 17ms penalty comes from? It’s not the composition of the control (I timed that, plus it would double when the number of controls doubled, which it does not), it’s the fact of refreshing in some way, and I can’t think of anything that would happen as long as something got refreshed, and would not be per control.

17ms sounds like a v-sync lock when you call Present() (one frame at 60Hz is about 16.7ms).


Can you expound on that topic a bit? Are you suggesting that it is waiting until the next v-sync before drawing? That certainly would explain it.

Look at the PresentInterval.Immediate in the constructor of https://docs.monogame.net/api/Microsoft.Xna.Framework.Graphics.SwapChainRenderTarget.html
This might work.
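If I have the overload right, it would be along these lines (a sketch; check the linked page for the exact parameter order and types):

```vb
' Sketch: the extended SwapChainRenderTarget constructor exposes a
' PresentInterval. Immediate returns from Present() without waiting
' for the next vertical blank.
mSwapChain = New SwapChainRenderTarget(graphicsDevice,
                                       windowHandle,
                                       width,
                                       height,
                                       False,                       ' mipMap
                                       SurfaceFormat.Color,
                                       DepthFormat.Depth24,
                                       0,                           ' preferredMultiSampleCount
                                       RenderTargetUsage.DiscardContents,
                                       PresentInterval.Immediate)
```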

I think this is the answer. I haven’t been able to get back to this, but you jogged my memory. Back when I first wrote the code, which was in 2010 or 2011, I tried altering the very property you mentioned. Immediate should have caused tearing, but didn’t, while the other setting I used caused a barely perceptible, and totally acceptable, hesitation while zooming. Having no reason not to go with Immediate, I did.

There aren’t many changes between MS XNA 4.0 and MonoGame, but there are a few (DX9 to DX11, anyway). I believe what I am seeing amounts to tearing in a control-based display (as opposed to a typical game loop).

I’ll have to decide whether I care. The visual effect isn’t entirely unpleasing.
