I am trying to wrap my head around an aspect of @kosmonautgames’ and Catalin Zima’s deferred engines / GBuffer implementations.
The render targets are defined as (simplified):
colorRT = new RenderTarget2D(
    GraphicsDevice,
    width,
    height,
    SurfaceFormat.Color);   // byte4

normalRT = new RenderTarget2D(
    GraphicsDevice,
    width,
    height,
    SurfaceFormat.Color);   // byte4

depthRT = new RenderTarget2D(
    GraphicsDevice,
    width,
    height,
    SurfaceFormat.Single);  // float
In the shader, the pixel shader input is defined as:
struct Render_IN
{
    float4 Position : SV_POSITION;
    float4 Color    : COLOR0;
    float3 Normal   : TEXCOORD0;
    float2 Depth    : DEPTH;
    // Stuff that does not matter here [...]
};
The pixel shader output is:
struct PixelShaderOutput
{
    float4 Color  : COLOR0;
    float4 Normal : COLOR1;
    float4 Depth  : COLOR2;
};
And then there is the most irritating part. In the pixel shader function, depth is handed over like this:
PixelShaderOutput WriteBuffers(Render_IN input)
{
    float4 finalValue = input.Color;

    // Deferred MRT
    PixelShaderOutput Out;
    Out.Color = finalValue;
    // Stuff that does not matter here [...]
    Out.Depth = input.Depth.x; // HERE: Conversion from float to float4?
    return Out;
}
See the comment on the Out.Depth assignment near the end of the function.
How does this work? I understand that the float4 for the color is automatically converted to the packed byte4 color format by the GPU. Is there some implicit conversion from float to float4? And how does the float4 from the output struct map to the SurfaceFormat.Single render target?
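As far as I can tell, HLSL implicitly promotes a scalar to a vector by replicating it into every component, so that assignment behaves like float4(d, d, d, d). A minimal sketch of the idea (illustrative only, not the engine's code):

// Illustrative pixel shader: assigning/returning a scalar where a float4 is
// expected replicates ("splats") the value into all four components.
float4 SplatExample(float depth : TEXCOORD0) : SV_TARGET2
{
    // A SurfaceFormat.Single (32-bit float) render target then keeps only the
    // red channel of that float4; the other three components are discarded.
    return depth;
}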
Is it possible to write the depth value as a float into a second render target? If I use the SV_Depth semantic, the render target stays clear - I guess because the value goes directly into the depth logic instead.
Yeah, the overlapping surfaces are showing; could there be some alpha blending going on?
Yes, by using MRT, just like you are doing.
SV_Depth is for outputting to the depth buffer, not to a render target.
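Roughly speaking, the distinction looks like this (a sketch assuming an SM4-style profile; the struct name is made up):

// SV_DEPTH overrides the value written to the hardware depth/stencil buffer;
// it never ends up in any of the bound color render targets.
struct PSOutDepthOverride
{
    float4 Color : SV_TARGET0;  // goes to the first bound render target
    float  Depth : SV_DEPTH;    // goes to the depth buffer only
};

// To get depth into depthRT, it has to be written through a render target
// semantic (SV_TARGET1, SV_TARGET2, ...) instead.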
You are using SV_TARGET for the first RT and COLOR1 for the second RT. It probably won’t make a difference, but I would try SV_TARGET0 and SV_TARGET1.
I also noticed that the depth output is defined as a float2
float2 Depth: TEXCOORD0;
which actually makes a lot of sense, but you are only using .x in the pixel shader. You didn’t post your vertex shader - are you only using .x there too?
The reason it makes sense is that when you output depth as a single float from the vertex shader, it won’t interpolate correctly in the pixel shader. That’s why it’s better to pass both z and w of your transformed vertex position to the pixel shader and do the w-divide per pixel, along the lines of the sketch below.
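Something like this, as a rough sketch (names such as WorldViewProjection, VS_OUT, DepthVS and DepthPS are placeholders, not the engine’s actual code):

float4x4 WorldViewProjection;

struct VS_OUT
{
    float4 Position : SV_POSITION;
    float2 Depth    : TEXCOORD1;  // x = clip-space z, y = clip-space w
};

VS_OUT DepthVS(float4 position : POSITION0)
{
    VS_OUT output;
    output.Position = mul(position, WorldViewProjection);
    output.Depth    = output.Position.zw;  // pass both z and w, not z/w
    return output;
}

float4 DepthPS(VS_OUT input) : SV_TARGET
{
    // Do the w-divide per pixel; the scalar result is splatted into the
    // float4 output, and a SurfaceFormat.Single target keeps the red channel.
    return input.Depth.x / input.Depth.y;
}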
I think I did not make myself clear. I meant: is there an output semantic for just one float? Right now my output struct is
struct PSOutCD
{
    float4 Color : SV_TARGET;
    // We cannot use SV_Depth here, since it's swallowed and not written out.
    float4 Depth : COLOR1;
};
using a float4 for depth, although I only need a single float.
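For what it’s worth, my understanding is that a scalar render target output is allowed under Shader Model 4+ profiles, so a sketch like this might work (untested; older DX9-level profiles may still want a float4 here):

struct PSOutCD
{
    float4 Color : SV_TARGET0;
    // Scalar output: the value lands in the red channel of the second bound
    // render target, which is all a SurfaceFormat.Single target stores anyway.
    float  Depth : SV_TARGET1;
};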
Thanks. I’ll try that.
That’s because HLSL does not define float2 for the TEXCOORD semantic. But then it does not define float2 at all, and we are not using HLSL to begin with but MonoShader (or so). So I guess I’ll be using float then.