[SOLVED] Using a DepthBuffer Written to a RenderTarget2D

I’ve been working on figuring out this problem for a couple of weeks and I’m probably approaching it wrong. At any rate, I’ve written a shader (below) that draws everything in shades of grey according to its Y-position (which is equivalent to depth for the purposes of my camera).

/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
    float4x4 xViewProjection;
};

Texture2D colorTexture;
sampler s0 = sampler_state 
{ 
    texture = <colorTexture>; 
    magfilter = LINEAR; 
    minfilter = LINEAR;
    mipfilter = LINEAR; 
    //AddressU = mirror;
    //AddressV = mirror;
};
sampler colorSampler = sampler_state
{
    Texture = <colorTexture>;
};


//////////////
// TYPEDEFS //
//////////////
struct VertexToPixel
{
    float4 Position     : POSITION;     // reserved for Pixel Shader internals
    float4 worldPos     : TEXCOORDS0;   // world position of the texture
    float2 texPos       : TEXCOORDS1;   // actual texture coords
    float2 screenPos    : TEXCOORDS1;   // screen position of the pixel?
};


////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
VertexToPixel DepthVertexShader(float4 inpos : POSITION, float2 inTexCoords : TEXCOORD0)
{
    VertexToPixel output;
    
    // Change the position vector to be 4 units for proper matrix calculations.
    inpos.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.Position = mul(inpos, xViewProjection);
 
    // Store the position value in a second input value for depth value calculations.
    output.texPos = inTexCoords;
    output.worldPos = inpos;
    output.screenPos = output.Position;

    return output;
}

////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 DepthPixelShader(VertexToPixel input) : COLOR0 // : SV_TARGET
{
    // Gray the texture to its y-value (approx) and alpha test
    float4 depthColor = tex2D(colorSampler, input.texPos);
    depthColor.a = round(depthColor.a);
    float yval = (1 + input.worldPos.y) * 0.1 * depthColor.a;
    depthColor.r = yval;
    depthColor.g = yval;
    depthColor.b = yval;

    return depthColor;
}

technique Simplest
{
    pass Pass0
    {   
        VertexShader = compile vs_4_0_level_9_1 DepthVertexShader();
        PixelShader = compile ps_4_0_level_9_1 DepthPixelShader();
    }
}

I am now attempting to read this texture from another shader, in order to use it as a depth buffer so that lighting effects only affect the things with a lower Y-value.

I have been reading everything I can in my off time about render targets, depth buffers, and the inability to share a depth buffer between render targets in XNA/MonoGame. This is my attempt to get around that.

My question is this: I’ve attempted, in a second (similar) shader, to draw this texture unchanged over a second set of polygons. I’ve tried using world, screen, and texture positions, to no avail. For reference, I’ve included the second (failing) shader below.

sampler s0;
Texture2D colorTexture;
sampler currSampler = sampler_state
{
    Texture = <colorTexture>;
};

float4x4 xViewProjection;

struct VertexToPixel
{
    float4 Position : POSITION; // reserved for Pixel Shader internals
    float4 worldPos : TEXCOORDS0; // world position of the texture
    float2 texPos : TEXCOORDS1; // actual texture coords
    float2 screenPos : TEXCOORDS1; // screen position of the pixel?
};

VertexToPixel SimplestVertexShader(float4 inpos : POSITION, float2 inTexCoords : TEXCOORD0)
{
    VertexToPixel output;
    
    // Change the position vector to be 4 units for proper matrix calculations.
    inpos.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.Position = mul(inpos, xViewProjection);
 
    // Store the position value in a second input value for depth value calculations.
    output.texPos = inTexCoords;
    output.worldPos = inpos;
    output.screenPos = output.Position.xy;

    return output;
}

float4 DrawDepthBufferAgain(VertexToPixel input) : COLOR0
{
    float4 depthColor = tex2D(currSampler, input.texPos);
    return depthColor;
}

technique Simplest
{
    pass Pass0
    {
        VertexShader = compile vs_4_0_level_9_1 SimplestVertexShader();
        PixelShader = compile ps_4_0_level_9_1 DrawDepthBufferAgain();
    }
}

This second shader has the issue that it draws the render target “magnified” and “twisted” (by pi/2), due to re-applying the ViewProjection matrix.

Any help for my conundrum (including how to just reuse the depth buffer effectively) would be greatly appreciated. Thank you for your time.

I’m assuming you checked that the gray depth render target looks as expected if drawn as a regular texture (i.e. via SpriteBatch)?

Yes, if I draw it using that first shader and render it directly using SpriteBatch, I get something like:

I realize I could just ensure that all shaders that require this also use the GPU depth buffer and whatnot; the main issue I ran into is that when it came time for lighting effects (while relying on the GPU’s depth buffer), I would either light some areas twice or fail to light areas because of non-visible parts of the thing “above” them.

By using this “fake” depth buffer I can ensure the alpha test doesn’t mess with the “depth”, meaning that textures with large amounts of zero-alpha will still light properly.


I think, mathematically, my issue is that I don’t understand how to sample a texture properly when it isn’t the one passed in via the standard GraphicsDevice.DrawUserPrimitives()-style calls. My best guess is that I have to figure out where in screen space I would be drawing, work out where that falls within the “depth” texture as a ratio, and use that as the 0-1 texture position. But I don’t know how to compute that ratio; or at least, I only know how to do it when calling DrawUserPrimitives directly on the quad/texture I want to draw to.

Shader code looks ok…
I wonder if you could encode the coordinates into the colors as well as the alpha-adjusted depth (in the first shader) and apply lighting distance/angle calculations… so, two render targets: one for the regular scene and one to help with lighting calculations, then mix the pixels in a shader that adjusts colors based on distance to the light (and maybe fall-off angle if it’s a spotlight). I’m just brainstorming.

I think that’s the same issue. If you encode the coordinates in G and B (let’s say) and use R as your Y-value, you still have to know where to sample in order to get those coordinates… I think?

Or did you mean color it normally, but reserve a section of each color for height info (such as using 6 bits for each color and the alpha, jamming the height info into the remaining 8 bits, and unscrambling it with a final shader pass after all the height-related work is done)?
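Just to make that bit-juggling concrete, I’m imagining something like this (a rough, untested sketch; PackColorAndHeight / UnpackHeight are made-up helpers, and it assumes an 8-bit-per-channel render target):

// Quantize each color channel to 6 bits (0..63) and hide an 8-bit height
// value in the freed-up low 2 bits of R, G, B, and A.
float4 PackColorAndHeight(float4 color, float height01)
{
    float4 c6 = floor(saturate(color) * 63.0);      // color as 0..63 per channel
    float  h  = floor(saturate(height01) * 255.0);  // height as 0..255

    float4 h2;                                      // four 2-bit slices of the height
    h2.r = floor(h / 64.0);
    h2.g = floor(fmod(h, 64.0) / 16.0);
    h2.b = floor(fmod(h, 16.0) / 4.0);
    h2.a = fmod(h, 4.0);

    // Top 6 bits of each channel are color, bottom 2 bits are height.
    return (c6 * 4.0 + h2) / 255.0;
}

float UnpackHeight(float4 packedColor)
{
    float4 p  = floor(packedColor * 255.0 + 0.5);
    float4 h2 = fmod(p, 4.0);
    return (h2.r * 64.0 + h2.g * 16.0 + h2.b * 4.0 + h2.a) / 255.0;
}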


I’ve considered making a quad that perfectly matches the screen (either projected onto the XZ plane or perpendicular to the camera), but that also means passing in a quad and painting the texture onto it.

By this I mean, I have a function that casts the Mouse-Position to the XZ plane. If I were to do that with the 4 corners of the screen and use that as a quad for this texture, I believe it would render it exactly the same as it was displayed.

But… maybe if I do that projected quad, I can get the ratio of where I am in the texture (from 0-1) based on the ratio of the world-position coordinates? Seems really expensive, haha, but I guess it may be worth a shot this evening.
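If that worked, the sampling side would just be a ratio, something like this (a hypothetical sketch; worldMin and worldMax are made-up parameters for the quad’s extents, and it only holds if the depth target effectively covers an axis-aligned rectangle in world space):

// Made-up extents of the projected quad on the XZ plane.
float2 worldMin;
float2 worldMax;

float2 WorldToDepthUV(float4 worldPos)
{
    // Ratio of the world position within the quad, remapped to 0-1 texture coords.
    return (worldPos.xz - worldMin) / (worldMax - worldMin);
}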

Thanks for your input so far. I feel like I’m missing something really fundamental about using multiple Effects and DepthBuffers.

If I understand your problem correctly, you need to do the following with the shader that displays the depth texture. First, make screenPos a float4:

float4 screenPos : TEXCOORD2; 

Then output all four components in the vertex shader:

output.screenPos = output.Position;

And in the pixel shader:

float2 uv = (input.screenPos.xy / input.screenPos.w + 1) / 2;
uv.y = 1-uv.y;
float4 depthColor = tex2D(currSampler, uv);
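Putting those pieces together, the whole second shader would look roughly like this (an untested sketch using the same names as your version):

Texture2D colorTexture;
sampler currSampler = sampler_state
{
    Texture = <colorTexture>;
};

float4x4 xViewProjection;

struct VertexToPixel
{
    float4 Position  : POSITION;
    float4 worldPos  : TEXCOORD0;
    float2 texPos    : TEXCOORD1;
    float4 screenPos : TEXCOORD2;   // full clip-space position, w included
};

VertexToPixel SimplestVertexShader(float4 inpos : POSITION, float2 inTexCoords : TEXCOORD0)
{
    VertexToPixel output;

    inpos.w = 1.0f;
    output.Position = mul(inpos, xViewProjection);
    output.texPos = inTexCoords;
    output.worldPos = inpos;

    // Pass the whole clip-space position along so the pixel shader can divide by w.
    output.screenPos = output.Position;

    return output;
}

float4 DrawDepthBufferAgain(VertexToPixel input) : COLOR0
{
    // Perspective divide, then remap NDC [-1,1] to UV [0,1] and flip Y.
    float2 uv = (input.screenPos.xy / input.screenPos.w + 1) / 2;
    uv.y = 1 - uv.y;

    return tex2D(currSampler, uv);
}

technique Simplest
{
    pass Pass0
    {
        VertexShader = compile vs_4_0_level_9_1 SimplestVertexShader();
        PixelShader = compile ps_4_0_level_9_1 DrawDepthBufferAgain();
    }
}

The effect is that every pixel looks up the depth texture at its own screen location, so whatever you draw the second time samples the first render target exactly where it lands on screen.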

You are a God among men, Markus!

I have no idea what your code does yet, but I guess I better study the heck out of it.
Thank you so much, my beer tastes 10 times better now.

EDIT: I should note that I did test it and it works.