Passing Depth Buffer to the Graphics Device

Hey,

I’ve been reading around the forum a bit, trying to figure this out. My scene depth is all broken. Here
I have a depth buffer, it’s a render target, and I render to it. Woohoo!
Unfortunately, I appear to be rendering solid black, no greyscale, no alpha. Sigh.
Also, I don’t know if the renderer is taking it into account at all. Double Sigh.

The first part is just a normalisation issue. I can deal with that.
The second part, though, I’m not sure about. How can I pass the depth buffer into the graphics device so that it is used to decide which pixels to render? Or should I just do this manually in my shaders?
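For reference, the normalisation I mean is just remapping the raw distance into the 0–1 range so it shows up as greyscale instead of solid black. A rough sketch, with placeholder near/far values (use whatever your camera actually uses):

```csharp
using System;

static class DepthUtil
{
    // Placeholder values; substitute your camera's actual planes.
    const float NearPlane = 1f;
    const float FarPlane = 100f;

    // Map a raw world-space distance into [0, 1] so it is visible as greyscale.
    public static float NormalizeDepth(float distance)
    {
        float t = (distance - NearPlane) / (FarPlane - NearPlane);
        return Math.Clamp(t, 0f, 1f); // clamp so out-of-range values don't wrap
    }
}
```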

I am just going to copy-paste some code out of a project here as an example, to convey the idea.

Consider this pseudo code.

// Make a render target.
RenderTarget2D rt2d;

// Make a global or static DepthStencilState.
// Note: don't just make it and then set it in Draw. *** It should be premade like so. ***
// Re-making it every frame would cause a ton of garbage collections.
DepthStencilState ds_depth_on_less_than = new DepthStencilState() { DepthBufferEnable = true, DepthBufferFunction = CompareFunction.Less };

// set it up in load.
rt2d = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
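One thing to watch: that short constructor overload creates the render target without its own depth buffer. If you want hardware depth testing while drawing into the target, there is an overload that takes a preferred depth format, something like:

```csharp
// Overload that attaches a depth buffer to the render target itself,
// so depth testing works while this target is bound.
rt2d = new RenderTarget2D(GraphicsDevice,
    GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height,
    false,                      // no mipmaps
    SurfaceFormat.Color,
    DepthFormat.Depth24Stencil8);
```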

From here we need to set up to draw to that render target buffer.

protected override void Draw(GameTime gameTime)
{

// Tell the graphics device to draw to the render target instead of the back buffer.
// (Set the target first; clearing before this would clear the back buffer instead.)
GraphicsDevice.SetRenderTarget(rt2d);

// Clear the render target.
GraphicsDevice.Clear(Color.Black);

// Turn on the depth buffer.
GraphicsDevice.DepthStencilState = ds_depth_on_less_than;

// Set your effect. It is easier to pass a world position to the shader to create a depth value;
// otherwise you have to do the divide by w and compensate for other annoying things.
effect.CurrentTechnique = effect.Techniques["RenderShadowDepth"];
// This is how i do it.
effect.Parameters["WorldLightPosition"].SetValue(cameraPosition);

// Draw an object or whatnot to your scene with your effect.
// Could be a model or a quad, whatever.

myModel.Draw();

// You have a render target with the depth drawn to it at this point.


// We want to see it, though.
// We are done drawing depth on that render target buffer.
// Tell the graphics device to start drawing to the back buffer again.
GraphicsDevice.SetRenderTarget(null);

// use the render target as a texture.
Texture2D texture = (Texture2D)rt2d;

// Draw the render target as a texture to a quad or rectangle with SpriteBatch.Draw.
// You might have to do this in a shader, as sometimes the values can be really small and need to be magnified.

spriteBatch.Draw(... texture, rectangle, ... etc.

I think you can use BasicEffect to get the depth, I can’t remember.
Anyway, you will probably want to use your own effect.

A depth shader is pretty simple.
Here is the depth part of a cube shadow shader i made a while back.
I pass a position variable to subtract from pixel positions. I do that by passing the pixel positions from the vertex shader to the pixel shader unaltered, using a texture coordinate, as shown below with Position3D. It can be done in the vertex shader as well, but it’s a habit to pass it, so…

//_______________________________________________________________
// technique
// Render shadow depth
//_______________________________________________________________/
struct VsInputCalcSceneDepth
{
	float4 Position : POSITION0;
};
struct VsOutputCalcSceneDepth
{
	float4 Position     : SV_Position;
	float4 Position3D    : TEXCOORD0;
};
// Shader.
VsOutputCalcSceneDepth CreateDepthMapVertexShader(VsInputCalcSceneDepth input)
{
	VsOutputCalcSceneDepth output;
	output.Position3D = mul(input.Position, World);
	float4x4 vp = mul(View, Projection);
	output.Position = mul(output.Position3D, vp);
	return output;
}
float4 CreateDepthMapPixelShader(VsOutputCalcSceneDepth input) : COLOR
{
	return length(WorldLightPosition - input.Position3D);
}

technique RenderShadowDepth
{
	pass Pass0
	{
		VertexShader = compile VS_SHADERMODEL CreateDepthMapVertexShader();
		PixelShader = compile PS_SHADERMODEL CreateDepthMapPixelShader();
	}
}
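On the C# side, the matrices and light position that shader reads would be set per frame, roughly like this (parameter names assumed to match the .fx file above; `world`, `view`, `projection`, and `lightPosition` are whatever your project already has):

```csharp
// Sketch: feed the shader its matrices and light position each frame.
// The string names must match the declarations in the .fx file.
effect.Parameters["World"].SetValue(world);
effect.Parameters["View"].SetValue(view);
effect.Parameters["Projection"].SetValue(projection);
effect.Parameters["WorldLightPosition"].SetValue(lightPosition);
```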

Great stuff, Thanks!

I have my depth buffer, and I can draw it to screen, but I’m working in 3D, so spritebatch isn’t gonna work for me.
I’m pretty sure that I shouldn’t write a shader to chuck out pixels that are behind the depth in the depth buffer, but I could. This should be a hardware thing, though, no?

I only meant test it so you can see the depths visually, to make sure it’s working properly.

Sorry, this code is out of my cube shader, so i record depths in all directions; i guess i was sort of thinking in those terms when altering the example.
You would probably clip negative depths in your case.
You still might want to see the depths you are drawing in color.

This was an old bug pic of bad depth values to convey the thought; it helps a lot for debugging.


For example, this is how i visualize my depth cube: i just turn the depth into a color or black and white.
I pass the depth rendertarget as the regular texture to this shader, then draw a quad somewhere on my screen to see it.

//_______________________________________________________________
// technique 
// DepthVisualization
//_______________________________________________________________
// shaders
struct VsInDepthVisualization
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 TexureCoordinateA : TEXCOORD0;
};
struct VsOutDepthVisualization
{
    float4 Position : SV_Position;
    float3 Dir3D    : TEXCOORD1;
    float2 TexureCoordinateA : TEXCOORD0;
};

VsOutDepthVisualization VsDepthVisualization(VsInDepthVisualization input)
{
    VsOutDepthVisualization output;
    float4 worldPos = float4(WorldLightPosition + input.Position.xyz, 1);
    float4x4 vp = mul(View, Projection);
    output.Position = mul(worldPos, vp);
    output.Dir3D = input.Position;
    output.TexureCoordinateA = input.TexureCoordinateA;
    return output;
}

// regular
float4 PsDepthVisualization(VsOutDepthVisualization input) : COLOR
{
    float4 texelcolor = tex2D(TextureSamplerA, input.TexureCoordinateA);
    float shadowDepth = DecodeFloatRGB(texCUBE(TextureDepthSampler, input.Dir3D).xyz);

    float3 c = EncodeFloatRGB(shadowDepth);
    float4 shadowVisualColoring = shadowDepth * 0.01 * float4(c.z,c.y,c.x , 1.0f);
    return  saturate(shadowVisualColoring);
}

technique DepthVisualization
{
    pass
    {
        VertexShader = compile VS_SHADERMODEL VsDepthVisualization();
        PixelShader = compile PS_SHADERMODEL PsDepthVisualization();
    }
}
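EncodeFloatRGB / DecodeFloatRGB above are my own helpers; they just pack a 0–1 depth into three 8-bit channels so it survives a color texture. A plain C# sketch of one common variant of the same idea (not necessarily the exact shader code):

```csharp
static class DepthPack
{
    // Pack a depth in [0, 1) into three channel values in [0, 1],
    // each carrying successively finer 1/255 slices of the number.
    public static (float r, float g, float b) EncodeFloatRGB(float depth)
    {
        float r = depth;
        float g = depth * 255f % 1f;
        float b = depth * 255f * 255f % 1f;
        // Remove the part each finer channel carries, to avoid double counting.
        r -= g / 255f;
        g -= b / 255f;
        return (r, g, b);
    }

    // Reassemble the depth from the three channels.
    public static float DecodeFloatRGB(float r, float g, float b)
    {
        return r + g / 255f + b / (255f * 255f);
    }
}
```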

Got it. The trick was that the main buffer has a built-in depth buffer, so we no longer need to make a separate one for depth sorting.

mMainBuffer = new RenderTarget2D(mDevice, size.X, size.Y, false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);

This guy fixed a lot!
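For anyone finding this later: with the depth format specified on the render target, enabling the default depth state is all that’s needed; a sketch of the draw setup (names from my project):

```csharp
// Bind the target that owns a depth buffer, clear color AND depth,
// then let the hardware do the depth test.
mDevice.SetRenderTarget(mMainBuffer);
mDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1f, 0);
mDevice.DepthStencilState = DepthStencilState.Default;
// ... draw the scene; nearer pixels now win automatically.
```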