I managed to save the depth texture between render targets. Is it possible to sample it?

Over here I managed to extract an image of the scene being rendered, then restore the depth buffer to continue drawing.

It’s a neat little hacky alternative for getting a refraction map without drawing the entire scene out to a depth texture. But what if I want the depth texture? I’ve decided to give it a try, but I’m probably going to need help because I have zero idea what I’m doing.

The starting point: I’ve added some code to PlatformApplyRenderTargets in GraphicsDevice.OpenGL.cs. The method now takes a parameter, “oldGLDepthBuffer”, and uses it in place of the new RenderTarget’s GLDepthBuffer if it’s not -1.

        private IRenderTarget PlatformApplyRenderTargets(int oldGLDepthBuffer = -1) //NEW!
        {
            var glFramebuffer = 0;
            if (!this.glFramebuffers.TryGetValue(this._currentRenderTargetBindings, out glFramebuffer))
            {
                this.framebufferHelper.GenFramebuffer(out glFramebuffer);
                this.framebufferHelper.BindFramebuffer(glFramebuffer);
                var renderTargetBinding = this._currentRenderTargetBindings[0];
                var renderTarget = renderTargetBinding.RenderTarget as IRenderTarget;
                if(oldGLDepthBuffer == -1) //NEW!
                    this.framebufferHelper.FramebufferRenderbuffer((int)FramebufferAttachment.DepthAttachment, renderTarget.GLDepthBuffer, 0);
                else
                    this.framebufferHelper.FramebufferRenderbuffer((int)FramebufferAttachment.DepthAttachment, oldGLDepthBuffer, 0);
                this.framebufferHelper.FramebufferRenderbuffer((int)FramebufferAttachment.StencilAttachment, renderTarget.GLStencilBuffer, 0);
                for (var i = 0; i < this._currentRenderTargetCount; ++i)
                {
                    renderTargetBinding = this._currentRenderTargetBindings[i];
                    renderTarget = renderTargetBinding.RenderTarget as IRenderTarget;
                    var attachement = (int)(FramebufferAttachment.ColorAttachment0 + i);
                    if (renderTarget.GLColorBuffer != renderTarget.GLTexture)
                        this.framebufferHelper.FramebufferRenderbuffer(attachement, renderTarget.GLColorBuffer, 0);
                    else
                        this.framebufferHelper.FramebufferTexture2D(attachement, (int)renderTarget.GetFramebufferTarget(renderTargetBinding), renderTarget.GLTexture, 0, renderTarget.MultiSampleCount);
                }
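                // ... (rest of the method continues unchanged)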

What stands out to me is the loop after we set the Depth and Stencil buffers. That loop is where we attach any additional RenderTargets that were fed in via the SetRenderTargets() method (used for Multiple Render Targets). Most notably, it uses the same method (FramebufferRenderbuffer) that we used to attach the depth buffer.

All good so far! Now in theory, if I feed in a “DepthOutputRenderTarget” as the second render target, and in this loop attach oldGLDepthBuffer to its slot instead of renderTarget.GLColorBuffer, that RenderTarget should also be handed the depth buffer!

Which I did. And got a completely black screen for my sins.

Okaaaayyy… not a great start. All the UI bits and bobs that were rendered after this are fine: they’re simply drawn on top of the black. So it looks like I’ve just screwed up the scene render somehow.

So, what’s the issue here? First and most obvious: converting DepthStencilFormat to SurfaceFormat. I don’t know to what extent this is an issue, but I assume at the very least they will need to be the same size in bytes.

So, I’ve downgraded the primary render target’s depth stencil format to Depth16 and set the depthOutput render target’s SurfaceFormat to HalfSingle. Both are 16-bit, so they should be compatible.

Still getting a black screen.

Alright, I have two hypotheses about what I’m seeing. Either the black screen is a misguided attempt to render the depthOutput target onto the screen, or I’ve completely screwed up in some manner and am trying to force OpenGL to do complete nonsense. Can’t rule either of those out at this stage.

This is the relevant section of code, where I’m FramebufferRenderbuffer-ing the Depth, Stencil and DepthOutput.

this.framebufferHelper.FramebufferRenderbuffer((int)FramebufferAttachment.DepthAttachment, oldGLDepthBuffer, 0);
this.framebufferHelper.FramebufferRenderbuffer((int)FramebufferAttachment.StencilAttachment, renderTarget.GLStencilBuffer, 0);
var depthOutputRenderTarget = this._currentRenderTargetBindings[1].RenderTarget as IRenderTarget;
this.framebufferHelper.FramebufferRenderbuffer((int)FramebufferAttachment.ColorAttachment0 + 1, oldGLDepthBuffer, 0); // inject the old depth buffer into depthOutputRenderTarget's colour slot

Which FramebufferAttachment I use doesn’t seem to make a difference.

And this is where it comes out at the higher level:

sceneRenderTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
                            format, pp.DepthStencilFormat, pp.MultiSampleCount,
                            RenderTargetUsage.PreserveContents);
depthOutputRenderTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
                            SurfaceFormat.HalfSingle, DepthFormat.None, pp.MultiSampleCount,
                            RenderTargetUsage.PreserveContents);
GraphicsDevice.SetRenderTargets(sceneRenderTarget, depthOutputRenderTarget);

As of right now, I am not drawing depthOutputRenderTarget to the screen, and sceneRenderTarget is coming out black.

Sooooo… any ideas what to try from here?

Yup, depth buffers can be sampled; however, for me this was the point where I decided to write my own framework instead. It’s all entangled pretty deeply, and if you plan to start poking into this it might be worth considering whether this is really the way to go.

tbh the easiest way would be to just attach a second rendertarget and write the depth information into it during the regular rendering (you just return 2 float4s, one for each rendertarget) - it’s all done in one go, and additionally you can save the depth information in a format more appropriate for what you want to use it for (keyword: linear depth for better precision)
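roughly like this, just as a sketch (all the names here are made up - you’d fold this into each of your own effects):

sampler DiffuseSampler : register(s0);
float FarPlane; // camera far plane distance

struct PSInput
{
    float2 TexCoord  : TEXCOORD0;
    float  ViewDepth : TEXCOORD1; // view-space z, passed down from the vertex shader
};

struct PSOutput
{
    float4 Color : COLOR0; // goes to the first rendertarget (the scene)
    float4 Depth : COLOR1; // goes to the second rendertarget (the depth output)
};

PSOutput MainPS(PSInput input)
{
    PSOutput output;
    output.Color = tex2D(DiffuseSampler, input.TexCoord); // whatever the shader computed before
    // linear 0..1 depth - only .r survives in a HalfSingle target
    output.Depth = float4(input.ViewDepth / FarPlane, 0, 0, 1);
    return output;
}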

as an example, in your case you could use the depth values to find the shore (water plane pixel depth very close to geometry depth = shore). That’s how I render the white shore things in Exipelago - it basically also works for things floating on water etc.
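in shader terms that shore test is roughly this (again just a sketch, with made-up names):

sampler DepthSampler : register(s1); // the depth output target from above, bound as a texture
float  FarPlane;
float  ShoreWidth;  // thickness of the foam band, in linear 0..1 depth
float4 WaterColor;

struct WaterPSInput
{
    float4 ScreenPos : TEXCOORD0; // projected position, passed through by the vertex shader
    float  ViewDepth : TEXCOORD1;
};

float4 WaterPS(WaterPSInput input) : COLOR0
{
    // project into screen UVs to sample the depth target
    float2 uv = input.ScreenPos.xy / input.ScreenPos.w * float2(0.5, -0.5) + 0.5;
    float sceneDepth = tex2D(DepthSampler, uv).r;      // geometry behind the water
    float waterDepth = input.ViewDepth / FarPlane;     // this water pixel, same encoding
    float foam = saturate(1.0 - (sceneDepth - waterDepth) / ShoreWidth);
    return lerp(WaterColor, float4(1, 1, 1, 1), foam); // white where the depths nearly meet
}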

another use case is SSAO or screen space depth blur

Fill rate won’t thank you. Is it the easiest? Sure. Is it the proper solution? Nope.

Reiti.net’s suggestion is the ‘standard’ approach to getting the depth buffer in MonoGame, and while it’s a fine solution when you know from the start that you’re going to need it, once you’re well into a project and have dozens of Effects and BasicEffects flying in every which direction, it’s a bit of a pain re-coding every single one of them to add a depth output.

That’s why I’m after a dirtier solution. The depth buffer is right there, and I’ve already been able to subvert MonoGame into re-using it between targets. It’s gotta be possible to get it out.

If you want to sample the DepthBuffer you have to put it into a texture. It’s write-only on the GPU side afaik (in the rasterizer) … most likely for performance reasons. There may be something to instruct the GPU to copy the depthbuffer directly somewhere else, but I am not aware of it (and it would cost the same as doing it in the PS anyway, as it would have to be a copy)

Not sure how it affects fillrate when it’s done in the same pass as the regular PS - it’s just one more write op, and that is highly parallelized nowadays. Also, I think sampling the (or a) depthbuffer is done by basically every engine, because that information is needed for many effects.

That said - the way the zbuffer is stored is also not the best for use in PostProcessing, because of its non-linear nature (low precision further away, higher precision closer to the camera - rarely beneficial for PostProcessing effects, due to flickering)
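if you do sample the hardware zbuffer you usually linearize it first - the standard reconstruction, assuming a D3D-style perspective projection with depth stored in 0..1:

float Near; // camera near plane
float Far;  // camera far plane

// view-space distance from a hardware depth value d
float LinearizeDepth(float d)
{
    return (Near * Far) / (Far - d * (Far - Near));
}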

But yes - having to adapt all the shaders can be a lot of work … I am using a deferred shader, where putting depth into a texture was the default from the beginning, so every shader had it from the start.

You cannot read the depth value in the PS because the pixel hasn’t been written yet … (basically the same reason why you cannot sample from the current rendertarget - writing there is not a serial operation)

That’s… that’s very wrong. The depth buffer is a relatively standard texture; all it needs (at least on the DX side - I didn’t check MG’s OpenGL settings, but it’s either the same or requires no surface format change at all) is that format type change and a Shader Resource View created over it. A copy is absolutely not required, and that proposition is ridiculous.

How does it affect fillrate? You literally suggested writing into an additional float4 surface, of all things. That’s an operation GPUs still struggle with, and it’s the reason why even a game like Crysis packs its G-buffer as tightly as it can. It also saves read time. So your thinking is wrong. And yes, you can’t read from and write into it at the same time (we have different resources for that), however water and particles are commonly rendered with depth bound read-only. What’s more important is that early Z rejection against the depth buffer is faster than a later discard, and what you CAN do is have the depth buffer bound read-only for depth testing while also having it bound as a shader resource view - gaining the benefit of early Z while also being able to use that information for soft particles, depth absorption and more.
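For illustration, the classic soft-particle fade looks something like this (just a sketch with assumed names; LinearizeDepth as in the post above, repeated here so the snippet stands alone):

sampler ParticleSampler : register(s0);
sampler DepthSampler    : register(s1); // the depth buffer, bound as a shader resource view
float Near;
float Far;
float FadeDistance; // view-space distance over which the particle fades out

float LinearizeDepth(float d)
{
    return (Near * Far) / (Far - d * (Far - Near));
}

struct ParticlePSInput
{
    float2 TexCoord  : TEXCOORD0;
    float4 ScreenPos : TEXCOORD1;
    float  ViewDepth : TEXCOORD2;
};

float4 ParticlePS(ParticlePSInput input) : COLOR0
{
    float2 uv = input.ScreenPos.xy / input.ScreenPos.w * float2(0.5, -0.5) + 0.5;
    float sceneZ = LinearizeDepth(tex2D(DepthSampler, uv).r); // opaque geometry behind the particle
    // fade out as the particle approaches the geometry, instead of a hard intersection
    float fade = saturate((sceneZ - input.ViewDepth) / FadeDistance);
    float4 color = tex2D(ParticleSampler, input.TexCoord);
    color.a *= fade;
    return color;
}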

I haven’t managed to make any further progress on this: I can’t seem to fix the black screen result. No idea what’s going on.

I’ll probably leave it be for the moment. Getting the depth buffer right now would be great for a bunch of immediate visual improvements, but in the long run I am eventually going to need to implement an outline shader to highlight certain game objects, which will necessitate all that work I’m trying to put off by finding a dirty solution. Wanting depth information for all those effects will encourage me to put in the work sooner.