[SOLVED] How to combine Deferred Rendering with Forward Rendering for transparent effects?

Hello,

I am trying to create a deferred renderer using MonoGame. It is still simple, but the deferred rendering works. However, I do not understand how to combine the deferred rendering with forward rendering. How can I do this?

Sorry for my English; I am French.

Thank you.

This is my current method code:

	public void Compute(in GameTime gameTime, in EventHandler<DrawingEventArgs> drawing, in EventHandler<LightingEventArgs> lighting)
	{
		// G-Buffer

		graphicsDevice.DepthStencilState = DepthStencilState.Default;
		graphicsDevice.SetRenderTargets(depthMap, albedoMap, normalMap, worldPositionMap);
		clearGBuffer.CurrentTechnique.Passes[0].Apply();
		drawing?.Invoke(this,  new DrawingEventArgs(in gameTime));
		depthMap = graphicsDevice.GetRenderTargets()[0].RenderTarget as RenderTarget2D;
		albedoMap = graphicsDevice.GetRenderTargets()[1].RenderTarget as RenderTarget2D;
		normalMap = graphicsDevice.GetRenderTargets()[2].RenderTarget as RenderTarget2D;
		worldPositionMap = graphicsDevice.GetRenderTargets()[3].RenderTarget as RenderTarget2D;

		// Light and Shadow

		graphicsDevice.SetRenderTargets(lightMap, shadowMap);
		graphicsDevice.Clear(Color.Transparent);
		graphicsDevice.BlendState = BlendState.Additive;
		lighting?.Invoke(this, new LightingEventArgs(in depthMap, in albedoMap, in normalMap, in worldPositionMap, in vertexData, in indexData));
		lightMap = graphicsDevice.GetRenderTargets()[0].RenderTarget as RenderTarget2D;
		shadowMap = graphicsDevice.GetRenderTargets()[1].RenderTarget as RenderTarget2D;
		graphicsDevice.BlendState = BlendState.Opaque;

		// Renderer

		graphicsDevice.SetRenderTargets(renderedImageMap);
		unpack.Parameters["LightMap"].SetValue(lightMap);
		unpack.Parameters["ShadowMap"].SetValue(shadowMap);
		unpack.CurrentTechnique.Passes[0].Apply();
		graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertexData, 0, 4, indexData, 0, 2);
		renderedImageMap = graphicsDevice.GetRenderTargets()[0].RenderTarget as RenderTarget2D;
		graphicsDevice.SetRenderTargets(null);

		// Output

		get.Parameters["DRColor"].SetValue(renderedImageMap);
		get.CurrentTechnique.Passes[0].Apply();
		graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertexData, 0, 4, indexData, 0, 2);
	}

I’m not sure I did it in a conventional way, but I basically have two key pieces in my G-Buffer:

  1. My albedo alpha channel is left free for use in alpha blending. This helps blend transparent parts of decals, for example, with the terrain that was already drawn.
  2. After blending, I write a constant emissive value to all alpha channels by using BlendFactor as the AlphaSourceBlend. This is used for transparent effects that need to be unlit.

I’m assuming the question is mainly about #2, because I think that’s the more difficult part. Note that both of these steps occur before lighting and composition. The reason it’s treated as a forward pass in the end is that during composition I essentially output the following:

`(1 - Emissive) * (Albedo * Diffuse + Specular) + Emissive * Albedo`

Thus my fully emissive pixels are unaffected by light, my non-emissive pixels are fully affected by light, and there is a whole gradient in between.

I believe the more common approach is to do a full deferred render with lighting and composition, and then just draw the effects on your output render target afterwards.

You just continue rendering against your main output.

The catch is that you need an intact depth-buffer, as you’re going to draw with depth-test on but depth-write off. Not sure how much of a headache that is with Monogame.
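In MonoGame terms, that forward pass might look like the following minimal sketch (composedTarget and transparentObjects are placeholder names, not code from this thread; also note that a render target's contents, including its depth buffer, are discarded on a target switch unless it was created with RenderTargetUsage.PreserveContents):

// Forward pass over the already-composed deferred output.
// DepthStencilState.DepthRead = depth-test on, depth-write off.
graphicsDevice.SetRenderTarget(composedTarget);
graphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
graphicsDevice.BlendState = BlendState.NonPremultiplied;
foreach (var obj in transparentObjects)
    obj.Draw(gameTime);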

When I continue to render on the main output with the following code, my object is always drawn in front of the deferred objects.

How can I write the depth values computed in the G-Buffer pass into the depth buffer?

I tried with tex2Dlod in the vertex shader, but the function doesn’t work.

	public void Compute(in GameTime gameTime, in EventHandler<DrawingEventArgs> drawing, in EventHandler<LightingEventArgs> lighting, in EventHandler<LightingEventArgs> transparent)
	{
		// G-Buffer

		graphicsDevice.DepthStencilState = DepthStencilState.Default;

		...

		// Light and Shadow

		graphicsDevice.DepthStencilState = DepthStencilState.None;

		...

		// Renderer

		...

		// Output
		
		...

		// Forward

		graphicsDevice.DepthStencilState = DepthStencilState.Default;

		transparent?.Invoke(this, new LightingEventArgs(depthMap, albedoMap, normalMap, worldPositionMap, vertexData, indexData));
	}

You can reconstruct the depth beforehand into the depth buffer.

You can google that, but what it boils down to is that you take the depth texture you created in the g-buffer and fill the actual depth buffer with that again.

For a linear depth buffer this would, for example, work like this; for your case, just reverse the way you stored it in your g-buffer:

//Depth Reconstruction from linear depth buffer, TheKosmonaut 2016

////////////////////////////////////////////////////////////////////////////////////////////////////////////
//  VARIABLES
////////////////////////////////////////////////////////////////////////////////////////////////////////////

float4x4 Projection;

float3 FrustumCorners[4]; //In Viewspace!

float FarClip;

Texture2D DepthMap;

SamplerState texSampler
{
    Texture = (DepthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    MipFilter = POINT;
};
 
////////////////////////////////////////////////////////////////////////////////////////////////////////////
//  STRUCTS
////////////////////////////////////////////////////////////////////////////////////////////////////////////

struct VertexShaderInput
{
    float2 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

////////////////////////////////////////////////////////////////////////////////////////////////////////////
//  FUNCTIONS
////////////////////////////////////////////////////////////////////////////////////////////////////////////

    ////////////////////////////////////////////////////////////////////////////////////////////////////////////
    //  VERTEX SHADER
    ////////////////////////////////////////////////////////////////////////////////////////////////////////////

VertexShaderOutput VertexShaderFunction(VertexShaderInput input, uint id:SV_VERTEXID)
{
    VertexShaderOutput output;
    output.Position = float4(input.Position, 0, 1);
    output.TexCoord.x = (float)(id / 2) * 2.0;
    output.TexCoord.y = 1.0 - (float)(id % 2) * 2.0;

    return output;
}

    ////////////////////////////////////////////////////////////////////////////////////////////////////////////
    //  PIXEL SHADER
    ////////////////////////////////////////////////////////////////////////////////////////////////////////////

        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        //  HELPER FUNCTIONS
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////

float TransformDepth(float depth, matrix trafoMatrix)
{
    return (depth*trafoMatrix._33 + trafoMatrix._43) / (depth * trafoMatrix._34 + trafoMatrix._44);
}

        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        //  Main function
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////

float PixelShaderFunction(VertexShaderOutput input) : DEPTH
{
    float2 texCoord = float2(input.TexCoord);

    float linearDepth = DepthMap.Sample(texSampler, texCoord).r * -FarClip;

    return TransformDepth(linearDepth, Projection);
}

    ////////////////////////////////////////////////////////////////////////////////////////////////////////////
    //  TECHNIQUES
    ////////////////////////////////////////////////////////////////////////////////////////////////////////////

technique RestoreDepth
{
    pass Pass1
    {
        VertexShader = compile vs_5_0 VertexShaderFunction();
        PixelShader = compile ps_5_0 PixelShaderFunction();
    }
}
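Driving this restore pass from the C# side could look like the following hedged sketch (restoreDepthEffect, projection, and farClip are assumed names; the quad data reuses the names from earlier posts). The important detail is a DepthStencilState whose comparison always passes, so every pixel's depth gets overwritten:

var restoreDepthState = new DepthStencilState
{
    DepthBufferEnable = true,
    DepthBufferWriteEnable = true,
    DepthBufferFunction = CompareFunction.Always // ignore whatever depth is there
};

graphicsDevice.SetRenderTarget(renderedImageMap); // target whose depth buffer we refill
graphicsDevice.DepthStencilState = restoreDepthState;
// Optionally disable color writes with a BlendState using ColorWriteChannels.None.
restoreDepthEffect.Parameters["DepthMap"].SetValue(depthMap);
restoreDepthEffect.Parameters["Projection"].SetValue(projection);
restoreDepthEffect.Parameters["FarClip"].SetValue(farClip);
restoreDepthEffect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertexData, 0, 4, indexData, 0, 2);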

It works with the solution from @kosmonautgames!! I discovered your project yesterday, but using vs_5_0 and ps_5_0 an error was triggered at runtime. Today I corrected the error by setting GraphicsProfile to HiDef, and I simplified the vertex shader.

But the solution seems to work only with DirectX. How can I do it with OpenGL?

Maybe the solution from @jnoyola is an answer, but I don’t really understand how to implement it.

My solution has the advantage of not needing an extra pass to fill the depth buffer with depth from the g-buffer, but I’m not sure if there are any other pros or cons.

I can explain it better tonight or tomorrow when I’m back at my computer.


For the OpenGL issue with your other approach, did you include the usual shader model definition?

#if OPENGL
    #define SV_POSITION POSITION
    #define VS_SHADERMODEL vs_3_0
    #define PS_SHADERMODEL ps_3_0
#else
    #define VS_SHADERMODEL vs_4_0_level_9_1
    #define PS_SHADERMODEL ps_4_0_level_9_1
#endif

[...]

technique TechniqueName
{
    pass P0
    {
        VertexShader = compile VS_SHADERMODEL MainVS();
        PixelShader = compile PS_SHADERMODEL MainPS();
    }
};

As for my approach, which part is difficult to understand? I’ll try to explain the whole thing again simply.

The G-Buffer will have Emissive stored in alpha. If you’re using glColorMaski (which I’m not, because it’s only supported in GLES 3.2), then you can pick a specific render target, e.g. the Albedo map, to store Emissive. Otherwise, you have to overwrite all G-Buffer alphas with Emissive, which is fine in my case because my Normal map stores 24-bit normals and my Depth map stores 18-bit depth plus 8-bit specular.

The key here is that BlendFactor lets you write a value to the render target that is specified on your GraphicsDevice in C# rather than output by your shader. The advantage is that the shader can still write RGBA to perform alpha blending, but A is then overwritten with a more useful value (like Emissive), since you don’t actually need to know the alpha value later.

So a lot of my transparent effects will use this BlendState. Note the use of BlendFactor.

public static BlendState TransparentEmissiveBlendState { get; } = new BlendState()
{
    ColorSourceBlend = Blend.SourceAlpha,
    ColorDestinationBlend = Blend.InverseSourceAlpha,
    AlphaSourceBlend = Blend.BlendFactor,
    AlphaDestinationBlend = Blend.InverseSourceAlpha
};
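For that blend state to write a meaningful value, the alpha of GraphicsDevice.BlendFactor must be set before drawing; a minimal sketch (the 0.5f is just an illustrative emissive value):

GraphicsDevice.BlendState = TransparentEmissiveBlendState;
GraphicsDevice.BlendFactor = new Color(1f, 1f, 1f, 0.5f); // alpha channel = emissive written to the G-Buffer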

The draw loop then looks similar to yours with the added pieces in between (a compact sketch of the whole sequence follows the list):

  1. Draw opaque models with DepthStencilState.Default
  2. Draw decals (transparent effects that don’t affect depth) with DepthStencilState.DepthRead
  3. Sort and draw transparent effects with DepthStencilState.DepthRead
  4. Draw light and shadow
  5. Combine Albedo, Light, and Emissiveness with the following
float3 DiffuseColor = (1 - Emissive) * float3(Albedo.rgb * (Lighting.rgb + AmbientColor) + Specular);
float3 EmissiveColor = Emissive * Albedo.rgb;
float3 output = DiffuseColor + EmissiveColor;
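A compact C# sketch of that whole sequence under the same assumptions (the Draw*/Compose helpers and target names are placeholders, not actual code from this thread):

GraphicsDevice.SetRenderTargets(depthMap, albedoMap, normalMap); // G-Buffer
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
DrawOpaque();                              // 1. opaque models
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
GraphicsDevice.BlendState = TransparentEmissiveBlendState;
DrawDecals();                              // 2. decals, depth-read only
DrawSortedTransparents();                  // 3. sorted transparent effects
GraphicsDevice.SetRenderTargets(lightMap); // 4. light and shadow
GraphicsDevice.BlendState = BlendState.Additive;
DrawLights();
GraphicsDevice.SetRenderTarget(null);      // 5. composition with the shader above
GraphicsDevice.BlendState = BlendState.Opaque;
Compose();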

The standard approach requires the Depth output semantic, which is only available from Shader Model 5, and MonoGame with MojoShader limits OpenGL to Shader Model 3.

I am sorry, but my implementation of this doesn’t work when drawing the transparent elements: without this part I get a correct image, but once I add this code the image becomes black because of the custom BlendState. Can you explain my misunderstanding, and in particular the BlendFactor? I can’t find information about it on the Internet.

// Draw Transparent
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
GraphicsDevice.SetRenderTargets(albedoMap);
GraphicsDevice.BlendState = new BlendState()
{
    ColorSourceBlend = Blend.SourceAlpha,
    ColorDestinationBlend = Blend.InverseSourceAlpha,
    AlphaSourceBlend = Blend.BlendFactor,
    AlphaDestinationBlend = Blend.InverseSourceAlpha
};
foreach (DrawableComponent drawableComponent in Scenes[SceneIndex].Drawables)
    drawableComponent.RenderTransparent(gameTime);
GraphicsDevice.BlendState = BlendState.Opaque;

My current draw loop:

protected override void Draw(GameTime gameTime)
{
  // Define buffer
  GraphicsDevice.SetRenderTargets(depthMap, albedoMap, normalMap);
  // Clear
  ...
  // Draw opaque
  GraphicsDevice.DepthStencilState = DepthStencilState.Default;
  ...
  // Draw Transparent
  ...
  // Draw Light
  ...
  // Release buffer
  ...
  // Compose
  GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
  unpackBuffer.Parameters["AlbedoMap"].SetValue(albedoMap);
  unpackBuffer.Parameters["LightMap"].SetValue(lightMap);
  unpackBuffer.CurrentTechnique.Passes[0].Apply();
  DrawQuad();
  base.Draw(gameTime);
}

It’s hard to tell based on this code alone. What was your G-Buffer format? Because this code will overwrite all alpha channels to be whatever you set the BlendFactor to (explained more below).

Did you also change your final output shader to use that alpha value to control whether each pixel is lit or unlit as I outlined in step 5 above? Because without that change, your emissive pixels will be treated as normal materials affected by light, which is less common for transparent effects.

Note that use of BlendFactor is not integral to making deferred rendering work with transparent effects. I simply use BlendFactor so that I can have transparent colors that are independent of my objects’ emissivity.

BlendState

Just to make sure you have a solid understanding, I’ll review blending in general first. This assumes there’s only a single render target, but multiple render targets work exactly the same way.

Normally a pixel shader outputs four channels RGBA, called source. This is then blended with the RGBA data that was already in the render target, called destination. Blending is the process that combines source with destination, and then the blended output is written to the render target. BlendState allows you to specify blend factors and operations for the blending process. You can specify a set of parameters for Color blending, which applies to each of the RGB channels, and a separate set of parameters for Alpha blending, which applies to the A channel.

The blending equation works like this:

output.rgb = (source.rgb * ColorSourceBlend) ColorBlendFunction (dest.rgb * ColorDestinationBlend)
output.a   = (source.a   * AlphaSourceBlend) AlphaBlendFunction (dest.a   * AlphaDestinationBlend)

The BlendFunctions are binary operations, commonly addition (+).
The Source/DestinationBlend factors can be pixel-dependent values, such as SourceAlpha or InverseSourceAlpha, constants like One or Zero, or a custom constant BlendFactor.

First let’s look at a simple example of BlendState.NonPremultiplied, and remember that the BlendFunctions default to addition.

{
	ColorSourceBlend = Blend.SourceAlpha,
	ColorDestinationBlend = Blend.InverseSourceAlpha,
	AlphaSourceBlend = Blend.SourceAlpha,
	AlphaDestinationBlend = Blend.InverseSourceAlpha
}

The output for each channel is calculated like this:

output.rgb  =  (source.rgb * source.a) + (dest.rgb * (1 - source.a))
output.a    =  (source.a   * source.a) + (dest.a   * (1 - source.a))
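As a concrete worked example (the values are assumed for illustration): drawing source = (1, 0, 0, 0.5) onto dest = (0, 0, 1, 1) gives

output.rgb = (1, 0, 0) * 0.5 + (0, 0, 1) * (1 - 0.5) = (0.5, 0, 0.5)
output.a   = 0.5 * 0.5       + 1 * (1 - 0.5)         = 0.75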

BlendFactor

Now let’s consider a more complicated BlendState using BlendFactor. You can specify the blend parameter BlendFactor, and then set its RGBA constants on your GraphicsDevice like this (you set all four channels even if you only use BlendFactor for Alpha):

GraphicsDevice.BlendFactor = new Color(0, 0, 0, 0.5f);

And here is the BlendState:

{
	ColorSourceBlend = Blend.SourceAlpha,
	ColorDestinationBlend = Blend.InverseSourceAlpha,
	AlphaSourceBlend = Blend.BlendFactor,
	AlphaDestinationBlend = Blend.InverseSourceAlpha
}

The output for each channel is then calculated like this:

output.rgb  =  (source.rgb * source.a)      + (dest.rgb * (1 - source.a))
output.a    =  (source.a   * BlendFactor.a) + (dest.a   * (1 - source.a))

The advantage here is that you can use the shader’s alpha value to blend color, but then modify the alpha value before writing it to the render target.


Thank you for this information. So BlendFactor is just a variable that we modify to customize the BlendState more finely? But in this example BlendFactor is explicitly defined, whereas it isn’t defined in your deferred rendering methods. Why?

Concerning the buffers, my depth format is Depth24Stencil8 and my surface format is HalfVector4.

depthMap = new RenderTarget2D(GraphicsDevice, width, height, false, SurfaceFormat.HalfVector4, DepthFormat.Depth24Stencil8);
albedoMap = new RenderTarget2D(GraphicsDevice, width, height, false, SurfaceFormat.HalfVector4, DepthFormat.Depth24Stencil8);
normalMap = new RenderTarget2D(GraphicsDevice, width, height, false, SurfaceFormat.HalfVector4, DepthFormat.Depth24Stencil8);
lightMap = new RenderTarget2D(GraphicsDevice, width, height, false, SurfaceFormat.HalfVector4, DepthFormat.Depth24Stencil8);
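As a side note (this is an assumption about what you may need, not something shown above): MonoGame discards a render target’s contents, including its depth buffer, when you switch targets, unless you use the constructor overload that takes a RenderTargetUsage:

depthMap = new RenderTarget2D(GraphicsDevice, width, height, false,
    SurfaceFormat.HalfVector4, DepthFormat.Depth24Stencil8,
    0, RenderTargetUsage.PreserveContents); // keep contents across SetRenderTargets calls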

Concerning the Unpack shader that composes the final image, this is the code of the vertex shader and pixel shader using the emissive control:

VSO VS(in VSI i)
{
    VSO o;
    o.Position = float4(i.Vertex, 1);
    o.UV = i.UV;
    return o;
}

float4 PS(VSO i) : COLOR
{
    float4 lightData = tex2D(LightMap, i.UV);
    i.UV.y *= -1;
    float4 albedoData = tex2D(AlbedoMap, i.UV);

    float3 DiffuseColor = (1 - albedoData.a) * float3(albedoData.rgb * lightData.rgb);
    float3 EmissiveColor = albedoData.a * albedoData.rgb;
    return float4(DiffuseColor + EmissiveColor, 1);
}

Each transparent object is drawn with its basic shader, which returns only a color (here the color is float4(.5, 0, 0, .5)):

VSO VS(VSI i)
{
    VSO o;
    o.Position = mul(float4(i.Vertex, 1), WorldViewProjection);
    return o;
}

PSO PS(VSO i)
{
    PSO o;
    o.Albedo = Color;
    return o;
}

A rendered image without the transparent pass.

A rendered image with this specific BlendState:

GraphicsDevice.BlendState = new BlendState()
{
    AlphaSourceBlend = Blend.InverseSourceAlpha,
    AlphaDestinationBlend = Blend.SourceAlpha
};

And another with the inverse:

GraphicsDevice.BlendState = new BlendState()
{
    AlphaSourceBlend = Blend.SourceAlpha,
    AlphaDestinationBlend = Blend.InverseSourceAlpha
};

The problem is that none of the pixels outside the Bunny are drawn.

Thank you very much for your help. I have found a solution: my problem was in how the color was passed to the shader; instead of a Vector4 with an alpha channel, a Vector3 without an alpha channel was being passed. But I have one last question. The default value of BlendFactor is white (float4(1, 1, 1, 1)) and it isn’t modified, so using BlendFactor as the AlphaSourceBlend just multiplies by one without any problem? Otherwise, why do you use the BlendFactor in this case?

My result with this BlendState:

GraphicsDevice.BlendState = new BlendState()
{
    ColorSourceBlend = Blend.SourceAlpha,
    ColorDestinationBlend = Blend.InverseSourceAlpha,
    AlphaSourceBlend = Blend.One,
    AlphaDestinationBlend = Blend.InverseSourceAlpha
};

Apologies, I should have been more explicit about what I set the BlendFactor to. I briefly mentioned in the beginning that I use it to write an emissive value, so it really depends what you’re drawing. Here are a few examples from my project:

  • Terrain (non-emissive): 0
  • Characters (non-emissive): 0
  • Fire (fully emissive): 1
  • Magical sword (somewhat emissive): 0.5

So I just set the BlendFactor before each object or type of object that I draw.
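For example, a minimal sketch of that per-object pattern (the Draw* helpers are placeholders):

GraphicsDevice.BlendState = TransparentEmissiveBlendState;
GraphicsDevice.BlendFactor = new Color(1f, 1f, 1f, 1f);   // fire: fully emissive
DrawFire();
GraphicsDevice.BlendFactor = new Color(1f, 1f, 1f, 0.5f); // magical sword: somewhat emissive
DrawSword();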

I understand better now. Thank you.