I was just trying to add a glow effect to my UI and was wondering why the background always turns black.
After some graphics debugging and a lot of trial and error I found out that changing the RenderTarget(s) of the GraphicsDevice clears the screen. (It appears black, but it’s actually transparent, as my graphics debugger showed me.)
Removing the second line causes no change and my scene is drawn as it should be.
I also tried out what happens when using a RenderTarget object from the beginning (e.g. for a screenshot) and found that it really doesn’t matter whether the originRenderTargets are null or not.
Is this a bug? Or is it just necessary to do it in another way?
Currently my project draws the UI between setting the RenderTarget to glowRenderTarget and setting it back to the origin (which used to be null).
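Roughly the pattern I’m describing, as a minimal sketch (glowRenderTarget and the draw calls here are placeholders, not the actual code from my project):

GraphicsDevice.SetRenderTarget(glowRenderTarget);   // back buffer contents are discarded here
spriteBatch.Begin();
// ... draw the UI into glowRenderTarget ...
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);                // switching back clears again -> black/transparent screen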
System Information:
Windows 10 Pro
MonoGame 3.6.0.1625
Graphics Card:
When setting a RenderTarget (or the back buffer) as the target for the GraphicsDevice to render to, it will by default clear that target. You should order your draw calls to accommodate this. Alternatively, you can set the RenderTargetUsage in the presentation parameters to not clear the back buffer when it is set as the target, but this may not work on all platforms and can slightly slow down switching the render target.
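For reference, a minimal sketch of setting that up in the Game constructor (assuming a standard GraphicsDeviceManager; again, this may not be supported on every platform):

_graphics = new GraphicsDeviceManager(this);
_graphics.PreparingDeviceSettings += (sender, e) =>
{
    // Keep the back buffer contents when it is re-set as the render target.
    e.GraphicsDeviceInformation.PresentationParameters.RenderTargetUsage = RenderTargetUsage.PreserveContents;
};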
Yep, same appearance on both.
Nope, no iGPU installed; that’s not even possible with that MoBo.
@Jjagg
Yeah, I already had the idea of preparing the post-processing RenderTarget stuff before base.Draw in the Game class, so I just need to draw the render target with the last necessary effect.
I just wanted to make sure this isn’t a bug.
Changing the order won’t help me, since my scene has postprocessing as well.
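Something like this is what I mean (just a sketch; _glowTarget, DrawUi, ApplyBlur and DrawScene are placeholder names, not my actual code):

protected override void Draw(GameTime gameTime)
{
    // Prepare the glow target first, while switching targets is still harmless...
    GraphicsDevice.SetRenderTarget(_glowTarget);
    GraphicsDevice.Clear(Color.Transparent);
    DrawUi();                 // placeholder: draw the UI into the glow target
    ApplyBlur(_glowTarget);   // placeholder: post-process pass on the glow target

    // ...then switch to the back buffer once and draw everything in order,
    // ending with the finished render target and its final effect.
    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(Color.Black);
    DrawScene();              // placeholder: scene with its own post-processing
    _spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
    _spriteBatch.Draw(_glowTarget, Vector2.Zero, Color.White);
    _spriteBatch.End();

    base.Draw(gameTime);
}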
This behaviour can be prevented if you create the render target with its RenderTargetUsage set to PreserveContents. This is a parameter when creating a render target.
Example:
_linearDepthTarget = new RenderTarget2D(_graphics, width, height, false, SurfaceFormat.Vector4, DepthFormat.Depth24, 0, RenderTargetUsage.PreserveContents);
Could you please shed some light on how this really works on today’s GPUs?
I vaguely remember (from what I heard years ago, so don’t mind me if I’m wrong) that in XNA the contents were always discarded because of Xbox 360 limitations. So, if you wanted to preserve contents, the system first had to make a copy of the RenderTarget, switch render targets and then copy the contents back over the RT.
But I suppose that nowadays (at least on PS4, X1, Switch and PC, but maybe not on mobile) this is no longer a problem in terms of speed penalty. Is this right?
I’m trying to improve the impostor rendering of some objects in the distance, and it’d be great if I could use one big render target, assign a small 128x128 region to each impostor, and render all of them without having to switch textures every time I render one.
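Something like this is what I have in mind (just a sketch; the 1024x1024 atlas size, impostorCount and DrawImpostor are made-up names for illustration):

var atlas = new RenderTarget2D(GraphicsDevice, 1024, 1024, false,
    SurfaceFormat.Color, DepthFormat.Depth24, 0, RenderTargetUsage.PreserveContents);

GraphicsDevice.SetRenderTarget(atlas);
GraphicsDevice.Clear(Color.Transparent);

for (int i = 0; i < impostorCount; i++)
{
    // Restrict rendering to this impostor's 128x128 cell of the atlas.
    GraphicsDevice.Viewport = new Viewport((i % 8) * 128, (i / 8) * 128, 128, 128);
    DrawImpostor(i); // made-up helper: renders one distant object into the current viewport
}

GraphicsDevice.SetRenderTarget(null);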