Lighting in the Deferred Engine


As a shader noob, I’m still trying to understand the mystery behind @kosmonautgames 's Deferred Engine (by the way, thank you very much for sharing this code :slight_smile: )

I think I understand the GBuffer now, but the lighting part remains quite nebulous …

private void DrawLights(List<PointLight> pointLights, List<DirectionalLight> dirLights, Vector3 cameraOrigin)
{
    DrawPointLights(pointLights, cameraOrigin);
    DrawDirectionalLights(dirLights, cameraOrigin);
}

If I understand correctly, this method is used to draw into 2 render targets (diffuse and specular, both contained in renderTargetLightBinding):

  • multiple point lights (and their shadows)
  • multiple directional lights (but only one of them can be shadowed for the moment)

And for that, the shader(s) use:

  • the albedo, normal and depth render targets
  • one shadow map per light (which was calculated before)
  • the direction / power / radius, etc … for each light

Am I right so far?

So my question is: how (and where) is all this information combined?

It looks like these two render targets (diffuse and specular) are the only ones used for this entire process, but how can this be?

For the point lights, the shader “DeferredPointLight.fx” (if it is the correct one) does not seem to loop over a limited number of lights, unlike what I saw in this thread: Struggling to find point light resources
I am sure that it is related to the spheres drawn in the DrawPointLight method, but I don’t understand how it works.

For the directional lights, I guess “DeferredDirectionalLight.fx” is used, but once again, it seems to work with only one directional light. Wouldn’t calling this shader several times actually erase the result of the previous call? Because the result does not seem to be stored in any render targets other than the diffuse/specular ones …

And at the end of the DrawLights method, how are the point light and directional light results combined into the diffuse/specular render targets?

Without being able to look into the sources at the moment… that should be right.

I’ll try to explain a bit how lighting in a deferred rendering context works.

It would be possible to draw a fullscreen quad for each light regardless of the light type, because lighting is done in screen space. For each light you want to execute the light pixel shader on (at least) all pixels that are influenced by that light, so for point lights you can use a sphere instead.

The result of the lighting is stored in an accumulation buffer. This just means that lights are drawn additively with alpha blending into one or more render targets; in this case, the diffuse and specular render targets. Finally, in a composing shader, you use all the information from the G-buffer and from the lighting buffer to calculate the final color.
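To make the accumulation idea concrete, here is a tiny framework-agnostic Python sketch (all names are made up for illustration; the real engine does this on the GPU with additive blending): each light adds its per-pixel contribution into an accumulation buffer, and a final compose step modulates it by the G-buffer albedo.

```python
# Minimal sketch of a deferred light accumulation buffer.
# Each "pixel" is a single grayscale float for simplicity.

def accumulate_lights(diffuse_accum, light_contributions):
    """Additively blend each light's per-pixel contribution (src * 1 + dst * 1)."""
    for contribution in light_contributions:
        for i, value in enumerate(contribution):
            diffuse_accum[i] += value
    return diffuse_accum

def compose(albedo, diffuse_accum):
    """Final compose pass: modulate the accumulated light by the G-buffer albedo."""
    return [a * d for a, d in zip(albedo, diffuse_accum)]

# Two lights, each touching part of a 4-pixel screen.
light_a = [0.5, 0.5, 0.0, 0.0]    # covers the left half
light_b = [0.0, 0.25, 0.25, 0.0]  # overlaps light_a on pixel 1

accum = accumulate_lights([0.0] * 4, [light_a, light_b])
final = compose([1.0, 1.0, 0.5, 0.5], accum)
print(accum)  # [0.5, 0.75, 0.25, 0.0]
print(final)  # [0.5, 0.75, 0.125, 0.0]
```

Note how the overlap on pixel 1 simply sums; no light erases a previous light's result.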

Hey there!

I draw every light one by one, it’s basically like rendering a normal scene full of geometry.

All point lights are sphere geometry and, just like in a normal scene, drawing several different geometric objects does not overwrite the ones already drawn; each new one is just added on top of what is already rendered.

How are they combined?

Well it’s basically additive blending, so everything is just added on top of each other.

Specifically, I set up this blend state:

_lightBlendState = new BlendState
{
    AlphaSourceBlend = Blend.One,
    ColorSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,
    AlphaDestinationBlend = Blend.One
};
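As a sanity check of what this blend state does: with Blend.One for both the source and destination factors, the fixed-function blend equation result = src * srcFactor + dst * dstFactor reduces to a plain per-channel sum. A tiny Python sketch of that arithmetic (real hardware additionally clamps to the render target's representable range):

```python
# GPU blend equation: result = src * srcFactor + dst * dstFactor.
# With Blend.One / Blend.One both factors are 1, so lights simply sum.

def blend(src, dst, src_factor=1.0, dst_factor=1.0):
    return src * src_factor + dst * dst_factor

dst = 0.0
for light in (0.2, 0.3, 0.1):  # three lights hitting the same pixel
    dst = blend(light, dst)
print(round(dst, 6))  # 0.6
```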


For each light:

  • I draw a sphere with the radius of the light
  • This sphere is not shaded/rendered like normal geometry; instead, in the pixel shader I use the other render targets and the information I have (albedo, depth etc.) to calculate lighting for all the pixels covered by the sphere.
  • I draw the final sphere to my diffuse/specular render targets. I do not erase the content that is there; instead, I add on top of it.

So basically this is a shader effect that calculates, for each pixel, how much it is affected by my light. I draw the sphere geometry so I don’t have to check all the pixels, but only those that are inside the sphere. I know all the other ones are out of range for the light.
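A minimal Python sketch of that per-pixel idea, assuming a simple linear falloff (the engine's actual attenuation function may differ, and all names here are illustrative):

```python
import math

# Sketch: a point light only affects pixels whose reconstructed world position
# lies inside the light's sphere; outside the radius the contribution is zero.
# Drawing sphere geometry means the out-of-range pixels are never even shaded.

def point_light_contribution(pixel_pos, light_pos, radius, intensity):
    dx = pixel_pos[0] - light_pos[0]
    dy = pixel_pos[1] - light_pos[1]
    dz = pixel_pos[2] - light_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= radius:
        return 0.0
    # Simple linear falloff toward the sphere's surface (one of many choices).
    return intensity * (1.0 - dist / radius)

print(point_light_contribution((0, 0, 0), (0, 0, 0), 5.0, 1.0))  # 1.0
print(point_light_contribution((3, 0, 0), (0, 0, 0), 5.0, 1.0))  # ~0.4
print(point_light_contribution((6, 0, 0), (0, 0, 0), 5.0, 1.0))  # 0.0
```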

I do not draw the spheres to the depth buffer; otherwise, lights behind other lights would be covered.

Your post seems relatively vague; if you want some more details I can try to explain. Quoting this part:

  “For the point lights, the shader DeferredPointLight.fx seems not to loop over a limited number of lights, unlike what I saw in that thread. I am sure that it is related to the spheres drawn in the DrawPointLight method, but I don’t understand how it works.”

This is a different technique entirely. This covers forward lighting, but the technique you are describing and I am using is deferred lighting, which works differently.

Aaaah! It’s the _lightBlendState that I missed! That’s the key!! Thank you, I understand better now :wink:

And it’s the same process for the directional lights, except you have to render a quad that covers the whole screen instead of spheres, but the blending stuff is the same, right?

I have other questions about directional lights, if you don’t mind:

  • Is there a particular reason for not allowing multiple shadowed directional lights in your engine? Is it technically more difficult to achieve than multiple point light shadows?

  • You seem to use only one shadow map for the directional light, and not techniques like Cascaded Shadow Maps. Does that mean that shadows in your engine are limited to an area (the sponza scene), or do they somehow follow the camera view so that a shadow could be cast anywhere?

Technically it doesn’t matter how many shadow-casting directional lights I have, and originally there was no limit. However, I changed it to one (technically up to three, but I was too lazy to implement that yet).

I have added the option to blur the shadow in screen space, but this is a very expensive operation. So I combine the shadow-mapped output and the SSAO (screen space ambient occlusion) maps into one and blur that one.
This way my shadow blurring comes for free, since I had to blur the SSAO anyway.

The downside to this is that I can only have 3 shadows blurred for free, since a rendertarget can have 4 channels of color (and one is used for SSAO).
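Here is a rough, framework-agnostic Python sketch of that packing idea; the channel assignment and the 3-tap blur are illustrative only, not the engine's actual code:

```python
# Sketch: pack the SSAO value and up to three shadow masks into the four
# channels of one render target, so a single blur pass covers all of them.

def pack(ssao, shadows):
    assert len(shadows) <= 3, "one RGBA target only has room for 3 shadow channels"
    # Unused shadow channels default to 1.0 ("fully lit").
    return (ssao, *shadows, *([1.0] * (3 - len(shadows))))

def blur1d(values, kernel=(0.25, 0.5, 0.25)):
    """Tiny 3-tap blur with clamped borders, applied to one channel."""
    out = []
    for i in range(len(values)):
        left = values[max(i - 1, 0)]
        right = values[min(i + 1, len(values) - 1)]
        out.append(kernel[0] * left + kernel[1] * values[i] + kernel[2] * right)
    return out

pixels = [pack(0.8, [1.0, 0.0]), pack(0.6, [1.0, 1.0]), pack(0.4, [0.0, 1.0])]
# Blur every channel of the packed target once; SSAO and shadows share the cost.
channels = list(zip(*pixels))
blurred = [blur1d(list(c)) for c in channels]
```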

Right now I only do it for one shadow, but I could extend that to 3.


If you are not using the shadow blur, you can disable the warning and have multiple shadowed lights.

line 599 in renderer.cs:
if (dirLightShadowed > 1)
    throw new NotImplementedException("Only one shadowed DirectionalLight is supported right now");

That can be commented out; then it works as expected.

Yes, right now it’s limited to the sponza scene and it is fixed. You can see the boundaries pretty clearly if you go into the MainLogic.cs Initialize() and disable all AddEntity(…) calls (or at least AddEntity(sponza…)), and then uncomment this line
AddEntity(_assets.Plane, new Vector3(0, 0, 0), 0, 0, 0, 200);
(which has only a simple plane).

Cascaded shadow maps are not trivial to implement, but pretty much needed for bigger environments. I don’t particularly care for that in my test scene, though.

Sticking the directional light to the camera would work for always having shadows around, but those shadows would flicker when the camera moves.

Thank you very much for your explanations =D

Ok, I finally managed to integrate some parts of your light system into my game. Many thanks for that =D

I am now studying the SSAO part and I have once again a few questions for you @kosmonautgames (hoping I don’t bother you too much ^^). I would almost say that some comments in your code would be useful, but I’m in no position to give lessons about commenting, considering I don’t comment much either; on top of that, your work already seems awesome to me ^^

Anyway! About the SSAO blurring part, and especially the DrawBilateralBlur method. If I understand correctly, the method works like this:
(I will refer to the render targets like this : [RenderTargetName])

Before: render SSAO into [SSAO] (or _renderTargetScreenSpaceEffect to be exact)
1) Copy [SSAO] into [PrepareBlur] (with SpriteBatch.Draw)
2) Process the vertical blur from [PrepareBlur] into [BlurX]
3) Process the horizontal blur from [BlurX] into [BlurYFinal]
After: use [BlurYFinal] as the SSAOMap parameter in the final compose shader.
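If I understand correctly, the data flow of those steps would be something like this Python sketch, with each list standing in for a render target (the real shader is a bilateral blur that also weights by depth; this only shows the flow between targets):

```python
# Sketch of the DrawBilateralBlur flow with separate render targets,
# modelled as plain lists. blur_pass stands in for one separable blur pass.

def blur_pass(source):
    """Simple 3-tap box blur with clamped borders, writing a fresh target."""
    out = []
    for i in range(len(source)):
        left = source[max(i - 1, 0)]
        right = source[min(i + 1, len(source) - 1)]
        out.append((left + source[i] + right) / 3.0)
    return out

ssao = [1.0, 0.0, 1.0, 1.0]        # [SSAO]
prepare_blur = list(ssao)          # step 1: copy into [PrepareBlur]
blur_x = blur_pass(prepare_blur)   # step 2: vertical pass into [BlurX]
blur_y_final = blur_pass(blur_x)   # step 3: horizontal pass into [BlurYFinal]
# blur_y_final is what the compose shader samples as SSAOMap.
```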

I have two questions about this:

  • Is step 1) really necessary, or could we just render the vertical blur using the SSAO render target directly instead of a copy?
  • Is it possible to use only one render target for the whole process, i.e. using the SSAO render target as both input and output of the blurring shader, like this (?):

// + same for BlurHorizontal

Yes, that’s how it’s usually done.

SSAO -> blur horizontally -> temp map -> blur vertically -> original SSAO map

The reason I do it differently is what I described in an earlier post: I blur other things too in the same process, not only SSAO but also shadows.
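That two-target ping-pong can be sketched in Python, with lists standing in for the two render targets (illustrative only):

```python
# Sketch of the ping-pong: blur horizontally into a temp map, then blur
# vertically back into the original SSAO map, so no third target is needed.

def blur_pass(source, target):
    """One blur pass: reads from `source`, overwrites `target` in place."""
    for i in range(len(source)):
        left = source[max(i - 1, 0)]
        right = source[min(i + 1, len(source) - 1)]
        target[i] = (left + source[i] + right) / 3.0

ssao = [1.0, 0.0, 0.0, 1.0]   # original SSAO map
temp = [0.0] * len(ssao)      # scratch render target

blur_pass(ssao, temp)   # horizontal pass: ssao -> temp
blur_pass(temp, ssao)   # vertical pass:  temp -> ssao (original map reused)
```

Note that on GPUs you generally cannot read a texture while it is bound as the current render target (the result is undefined), which is why even the ping-pong version needs the temp map rather than a single target for both input and output of one pass.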