RenderTarget2D and Backbuffer render depth differently? CullMode being ignored!

My question has changed a bit since I started, so I'm deleting my old topic and adding this new one.

I've been trying to render a scene, and it looks different in the backbuffer (without a RenderTarget2D, or with RenderTarget2D = null) than it does on the RenderTarget2D. I'm currently targeting DesktopGL; not sure if I should switch to DX.

My goal is to get rid of the gaps that appear between triangle-list primitives that share vertices. From what I can tell, the way to do this is to render my scene to a RenderTarget2D and then draw that out with SpriteBatch, using multisampling to anti-alias it. If there is a better way to remove the jagged edges, I'd be happy to hear it. Is it better to just run everything through a shader (and if so, are there any basic/simple ones that handle AA, such as FXAA)?

Everything looked good in the 3D scene rendered normally (without an external RenderTarget), aside from the spacing between the triangles (also of note, the "quads" that two adjacent triangles form render perfectly, with no visible space between them). When I render to the RenderTarget, the depth buffer appears to go all wacky.

Are there hidden parameters or something I need to enable/include on the RenderTarget that I may be ignoring otherwise?

Here’s a bit of code:

   RenderTarget2D SceneBuffer = new RenderTarget2D( GraphicsDevice,
        GraphicsDevice.PresentationParameters.BackBufferWidth,
        GraphicsDevice.PresentationParameters.BackBufferHeight,
        false,                                                      // no mipmaps
        GraphicsDevice.PresentationParameters.BackBufferFormat,     // same surface format as the backbuffer
        GraphicsDevice.PresentationParameters.DepthStencilFormat,   // same depth/stencil format as the backbuffer
        GraphicsDevice.PresentationParameters.MultiSampleCount,     // same multisample count
        RenderTargetUsage.DiscardContents );

The code above should copy all of the rendering parameters from the normal backbuffer setup.
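As an aside, my understanding is that the short RenderTarget2D constructor would not work here at all, since I believe it creates a target with DepthFormat.None, i.e. no depth buffer:

    // Just for comparison (not code I'm using): as far as I know, this
    // overload gives a render target with no depth buffer at all.
    RenderTarget2D NoDepthTarget = new RenderTarget2D(
        GraphicsDevice,
        GraphicsDevice.PresentationParameters.BackBufferWidth,
        GraphicsDevice.PresentationParameters.BackBufferHeight );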

I noticed that each time I set the RenderTarget2D of the GraphicsDevice, I lost my RasterizerState, so I know to reset that each time:

            GraphicsDevice.SetRenderTarget( SceneBuffer );

            GraphicsDevice.RasterizerState = new RasterizerState()
            {
                CullMode = CullMode.None,
                MultiSampleAntiAlias = true
            };

            foreach ( EffectPass Pass in TileMeshEffect.CurrentTechnique.Passes )
            {
                Pass.Apply();

                GraphicsDevice.SetVertexBuffer( TerrainMesh );
                GraphicsDevice.DrawPrimitives( PrimitiveType.TriangleList, 0, TerrainMesh.VertexCount / 3 );
            }

            GraphicsDevice.SetRenderTarget( null );

            SceneBatch.Begin();
            SceneBatch.Draw( SceneBuffer, GraphicsDevice.Viewport.Bounds, Color.White );
            SceneBatch.End();

With my normal rendering (if I don't SetRenderTarget to SceneBuffer), everything is fine; the tops of the tiles show correctly (the green tiles). If I render via the RenderTarget, things only look right between 0 and PI/2 of rotation. X and Y are the left/right and forward/back coordinates, while Z is my depth, up and down, such that when you're looking from above (isometric view), you're looking down the Z axis.

The TerrainMesh is built in world coordinates, so I don't really need a World matrix; I just use the default (which I believe is Identity). Only the View matrix is really adjusted (it works in those world coordinates), and the Projection is a standard orthographic matrix from Matrix.CreateOrthographic(16, 9, 1, 2000).
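Roughly speaking, the matrix setup looks like the sketch below (the camera values here are just placeholders, not my real ones):

    // Rough sketch of the matrix setup; CameraPosition/CameraTarget/CameraUp
    // are placeholder values, not the real ones from my camera code.
    Vector3 CameraPosition = new Vector3( 0f, 0f, 1000f );    // above the terrain, looking down the Z axis
    Vector3 CameraTarget   = Vector3.Zero;
    Vector3 CameraUp       = Vector3.UnitY;                   // any axis not parallel to the view direction

    Matrix WorldMatrix      = Matrix.Identity;                // mesh is already in world coordinates
    Matrix ViewMatrix       = Matrix.CreateLookAt( CameraPosition, CameraTarget, CameraUp );
    Matrix ProjectionMatrix = Matrix.CreateOrthographic( 16f, 9f, 1f, 2000f );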

The following image shows the scene being rotated:

Between 0 and PI/2, the green tiles show correctly (as they do in the normal render without the RenderTarget), but at the other angles the bottoms/sides (each tile is a cube/voxel) start bleeding through. This leads me to think the depth buffer details aren't being handled correctly.

Any thoughts?

EDIT:

As I've worked on this and tried to diagnose further, it looks as though CullMode.None is being ignored. The other two modes (CullCounterClockwiseFace and CullClockwiseFace) cull correctly, as they're supposed to, but CullMode.None is doing something different/unusual.

…and I finally got it. This strikes me as VERY unusual, but:

I have to change:

            SceneBatch.Begin();

to

            SceneBatch.Begin( SpriteSortMode.Immediate, GraphicsDevice.BlendState, null, GraphicsDevice.DepthStencilState, RasterizerState.CullNone );

Simply calling:

            GraphicsDevice.RasterizerState = RasterizerState.CullNone;

after changing the RenderTarget back to null does not do the trick. Not sure if there's something I'm missing, but hopefully this helps someone else stumbling along! The code now looks like the following:

        GraphicsDevice.SetRenderTarget( SceneBuffer );

        GraphicsDevice.RasterizerState = new RasterizerState()
        {
            CullMode = CullMode.None,
            MultiSampleAntiAlias = true
        };

        foreach ( EffectPass Pass in TileMeshEffect.CurrentTechnique.Passes )
        {
            Pass.Apply();

            GraphicsDevice.SetVertexBuffer( TerrainMesh );
            GraphicsDevice.DrawPrimitives( PrimitiveType.TriangleList, 0, TerrainMesh.VertexCount / 3 );
        }

        GraphicsDevice.SetRenderTarget( null );

        SceneBatch.Begin( SpriteSortMode.Immediate, GraphicsDevice.BlendState, null, GraphicsDevice.DepthStencilState, RasterizerState.CullNone );
        SceneBatch.Draw( SceneBuffer, GraphicsDevice.Viewport.Bounds, Color.White );
        SceneBatch.End();

What you're missing is that SpriteBatch modifies the GraphicsDevice states. This happens in its Setup function, which is called from SpriteBatch.Begin in Immediate mode and from SpriteBatch.End for the other sort modes.

This means that when you call SpriteBatch.Begin without arguments, it will use the default graphics states (listed below, with a sketch of how to restore your own states afterwards):

  • BlendState => AlphaBlend
  • SamplerState => LinearClamp
  • DepthStencilState => None
  • RasterizerState => CullCounterClockwise
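In practice that means putting the device back into a known state before you draw 3D geometry again, something along these lines (a minimal sketch; use whatever states your scene actually needs):

    // Minimal sketch: after SpriteBatch has run, restore the states the
    // 3D pass relies on before drawing again.
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;   // depth test/write back on
    GraphicsDevice.BlendState        = BlendState.Opaque;
    GraphicsDevice.RasterizerState   = RasterizerState.CullNone;    // or whichever cull mode you need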

I found that out, thank you.

I had also assumed that when I rendered to a RenderTarget2D, all the pixels were already rendered just as they would be to a backbuffer, and that the SpriteBatch was simply drawing that finished image, so this behavior was interesting to learn about.

Further, it's interesting that SpriteBatch doesn't take its settings from the GraphicsDevice on a plain SpriteBatch.Begin() call; it seems odd that there isn't an overload for that, but I suppose it also makes sense!
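Something like this little extension method (purely hypothetical, not part of MonoGame) is roughly what I had in mind:

    // Hypothetical helper, not a built-in overload: begins the batch using
    // whatever states are currently set on the GraphicsDevice.
    public static class SpriteBatchExtensions
    {
        public static void BeginWithDeviceState( this SpriteBatch Batch, GraphicsDevice Device )
        {
            Batch.Begin(
                SpriteSortMode.Immediate,
                Device.BlendState,
                Device.SamplerStates[0],
                Device.DepthStencilState,
                Device.RasterizerState );
        }
    }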

Thanks for the input!

This is a very old thread, but I'm afraid I'm now experiencing the same thing. I've tried the solution mentioned regarding SpriteBatch.Begin with no change. Basically, when I render directly to the backbuffer, it works exactly as expected; when rendering to a RenderTarget2D, the depth is ignored when rendering the model. The only change is the RenderTarget. No other code changes have occurred.

With RenderTarget as null:

With RenderTarget set (notice it's rendering the bottom through the top, and the particles even render on top of the model):

I've tried explicitly setting the SpriteBatch.Begin params as above in this thread, with one exception… I'm using Deferred rather than Immediate (the call is sketched after the snippet below). I've also tried setting the rasterizer state prior to rendering each model:

Globals.graphicsDevice.RasterizerState = new RasterizerState() { CullMode = CullMode.CullCounterClockwiseFace, MultiSampleAntiAlias = true };
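For reference, the Begin call I'm using now looks roughly like this (spriteBatch is just my SpriteBatch instance):

    // Roughly what I'm calling now: same explicit states as earlier in the
    // thread, but with Deferred sorting instead of Immediate.
    spriteBatch.Begin( SpriteSortMode.Deferred, Globals.graphicsDevice.BlendState, null,
        Globals.graphicsDevice.DepthStencilState, RasterizerState.CullNone );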

Nothing seems to change the results. Am I missing something else from this thread (I’m sure I could be)? Is the Deferred setting part of my issue?

Did some further testing with rasterizer states and nothing helps. Also tested SpriteBatch Immediate vs. Deferred with no luck.