Deferred rendering - Depth and lBuffer messed up

Sorry kosmonautgames,

this whole rendering stuff messes my brain up completely …

PSOut MainPS(VSOut input)
{
	PSOut output;

	float4 normalData = tex2D(normalSampler,input.TexCoord);
	float3 normal = 2.0f * normalData.xyz - 1.0f;

	float specularPower = normalData.a * 255;
	float specularIntensity = tex2D(colorSampler, input.TexCoord).a;

	//read depth
	float depthVal = tex2D(depthSampler, input.TexCoord).r;

	//compute screen-space position
	float4 position;
	position.x = input.TexCoord.x * 2.0f - 1.0f;
	position.y = -(input.TexCoord.y * 2.0f - 1.0f);
	position.z = depthVal;
	position.w = 1.0f;

	//transform to world space
	position = mul(position, InvertViewProjection);
	position /= position.w;
	
	float3 lightVector = -(lightDirection);

	float NdL = max(0,dot(normal,lightVector));
	float3 diffuseLight = NdL * Color.rgb;

	float3 reflectionVector = normalize(reflect(-lightVector, normal));
	float3 directionToCamera = normalize(cameraPosition - position.xyz);
	float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

	//output the light
	// float occ = tex2D(occlusionSampler, input.TexCoord);
	output.Color = float4(diffuseLight.rgb, specularLight); // *occ;

	return output;
}

… still no change at all. Also, why is everything working fine in plain XNA and messed up in mono?!

MonoGame is a complete reimplementation of the XNA API (not affiliated with Mono btw), so it’s not 100% accurate. If you have a project that’s working on XNA and not on MonoGame open an issue on GitHub https://github.com/MonoGame/MonoGame/issues

Sigh … I don’t think that something THIS obvious is broken in mono … render color/normal/depth and the influence per light into a 2D render target … dude! This is obvious stuff! …

I think I should upload the whole project solution asap … this is weird as f**k … :frowning:

Find my current work-in-progress engine here … http://www.garbe-clan.de/content/trunk.zip

Again, I think I’m making some obvious errors here during the deferred rendering stage …

Anyone who figures out what is wrong here … I owe you a beer or two :wink:

For real … ten bucks over paypal granted … :-p

your LightDirection is 0,0,0 and stays that way, it doesn’t get passed correctly to the shader

So in Classes/Shader/Deferred/light/Directional

I added this line at line 28:
((NICEShaderDeferredLightDirectional) _shader).LightDirection = _direction;

and now it works, but you also have to pass colorMap, normalMap etc. to the shader; right now that is commented out.

I’ll upload it soon

When I get a bug that starts taking too long to fix,
I start by double-checking the names of everything first so things are very clear.
Then I start simplifying:
break down big lines of code into smaller lines, break down calculations into variable parts.

But yeah, this looks like it’s way, way over-complicating things.
The more I looked at it, the more it seemed like sort of a mess.

This is really suspicious to me.
Why would anyone want to do this sort of thing?

float3 normal = 2.0f * normalData.xyz - 1.0f;

then here

float3 lightVector = -(lightDirection);

This is the inverseLightDirection now, and that’s a bad name; names like this are bug magnets.

In the next line there is

float NdL = max(0, dot(normal, lightVector));

NdL (as in non-directional light)? Without asking what good a non-directional light would be:
what happens when the light is in a negative direction here, with that max 0? Dot products can validly return a negative cosine.

I would think you would want to flip it to be positive, not zero it out, if it’s meant to be non-directional. I don’t get it.

OK, then there is that weird normal that has been butchered to no longer be unit length.

Who knows what that’s going to output. It could evaluate to -1, in which case it will be zero, and then the diffuse light will end up 0.
With this butchered normal data (2 * n - 1), an element at .5 or less, which is about 35 degrees or something, is going to end up zero when you multiply n by the inverse light.

In the next line,

float3 reflectionVector = normalize(reflect(-lightVector, normal));

is

float3 reflectionVector = normalize(reflect(lightDirection, normal));

pow = the x parameter raised to the power of the y parameter.
So this says: start with the reflection vector, 0 to 1 (because it has been normalized), raised to the power of the camera vector - great, it’s no longer unit length again. Clamp that to the 0 to 1 range, multiply by the alpha of the texture, and that is the specular light float?
float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

OK, I’m going to stop right here.

To find the magnitude of a specular light for use in increasing the color of a texel:

You only need to dot the vertex normal with the negated light direction itself.
Provided both are in fact normalized, you get a cosine that is -1 to 1.

-1 means the light is behind the triangle and shouldn’t be lighting it up
(think of where the sun is at night),
so you clamp it to zero with saturate.

+1 means the light is shining on it directly from above, or straight onto a flat surface at no angle
(think high noon).

0 means it is directly to the side of the surface (think of early sunrise on an ocean).

This is multiplied against the color RGB, not A.

Lights don’t normally change the transparency of things, which is what alpha represents.
Say, glass: no matter how much light is on it, it will still be transparent, even when it’s reflecting blinding white light.

You could flip the negative to positive if you had to by just doing if (result < 0) result = -result; though I don’t see the point in doing that for the NdL value here, unless there is some sort of shadow thing going on.

That’s if you want the light to have a more linear feel, less like a specular spotlight and a bit more spread out: you multiply the result by itself before you multiply the color to intensify it, and you get a smoother falloff that evaluates to .5 at 45 degrees of incidence to the surface instead of .707.
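Roughly like this in shader terms (just a sketch of that idea, not code from the project; lightColor is a placeholder name):

float NdL = saturate(dot(normal, -lightDirection)); // cosine clamped to 0..1
NdL *= NdL;                                         // 0.707 at 45 degrees becomes 0.5
float3 diffuse = NdL * lightColor.rgb;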

You should go through each line, reword everything, and break it down into even smaller pieces. Make sure you can see and know exactly what each thing is intended to do.

So yeah, it was that one line in the C# code, as said above.

here is a fixed solution

(I changed to 2 suns with different colors just to see if it works)

I think the way I did it is maybe not the way you would do it, but just check line 28 as said before, and you’ll easily see where it went wrong :slight_smile:

I’ve changed the color to azure so it’s easy to tell the difference from the depth buffer (which is red).

You simply never updated the LightDirection field in the Directional shader, so it stayed at 0,0,0, and that multiplied by the color was always zero.

Actually, updating the lightDirection doesn’t have to be done every time Render() is called, since _shader is always a unique object. It would suffice to change the LightDirection inside the _shader when SetDirection(Vector3 direction) is called from Directional.cs.
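Roughly like this, sketched from memory (class and member names as far as I recall them from your project, so double-check):

public void SetDirection(Vector3 direction)
{
    _direction = Vector3.Normalize(direction);
    // push the direction into the shader once here, instead of on every Render() call
    ((NICEShaderDeferredLightDirectional)_shader).LightDirection = _direction;
}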

As I said, it’s a bit messy, but you are probably facepalming anyway about forgetting to actually pass the light direction.

BTW: I love your code. Super clean and the shader looks great too!

Facepalming is not enough here … currently I am smashing my head on my desk. I can’t believe I missed this one. THANKS !

Yeah the code is still a big mess. With the basic stuff working again I need to do a big cleanup week. Code conventions … comments … consolidation and performance improvements … stuff like that :wink:

@willmotil the shader is messed up mainly because of the trouble I had and testing things out - it is kind of shitty coded though

the shader is messed up mainly because of the trouble I had and testing things out - it is kind of shitty coded though

I disagree, the shader is absolutely not shitty coded, it’s 100% standard; it looks like straight tutorial code which you would get from Riemers or Catalin ZZ.

I think Will is just not familiar with deferred shading; none of his points hold merit, the shader code is correct.

I noticed I uploaded a line in the shader which is wrong though, at line 122 of directional.fx.
instead of
float NdL = dot(normal,lightVector);

it should be
float NdL = saturate(dot(normal,lightVector));

I never said it’s junk, and I’m not trying to rip on it; I’m far from the best at shader syntax myself.
I’m just saying the shader appears to have some misleading naming in its variables,
which tends to make debugging difficult.

If you want to see a ton of junk you should see my projects folder its a total mess.

Btw, I can make it more clear for you why some stuff is like that.

The normal is stored in a texture, but that texture can only hold numbers from 0 to 1. The normal vectors, however, live in [-1,1] space, so they have to be transformed into [0,1] space and back.
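In code it’s just this pair of transforms (a sketch; the output struct name is a placeholder, the read side matches the shader above):

// write to the g-buffer: [-1,1] -> [0,1]
output.Normal.xyz = normal * 0.5f + 0.5f;

// read back in the light pass: [0,1] -> [-1,1]
float3 normal = 2.0f * tex2D(normalSampler, input.TexCoord).xyz - 1.0f;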

First of all, inverseLightDirection would be the worst name possible, since the inverse of y = x would be y = 1/x = pow(x, -1). This is simply the negated light direction, but it depends on the perspective.

Second of all, it must be like that, since we want the vector from the position towards the light to compare with our normal, because the normal points outward. So we have to use the negative light direction. The same goes for reflections etc.: the vector to compare against is always the negative of where the light is going. Same with the camera.

NdL is the default naming for this; it means N dot L, because we are doing a normal dot light calculation, as you can see in dot(normal, lightVector).

However, this has to be clamped, since the backside of an object simply receives no light (0) instead of negative light. So a max operation is appropriate here.
Optionally one can use saturate(…), which clamps between 0 and 1.
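Written out, that’s essentially what the shader above already does:

float3 lightVector = -lightDirection;            // vector towards the light
float NdL = saturate(dot(normal, lightVector));  // backsides clamp to 0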

First of all, a normalized vector doesn’t mean 0 to 1. It means the vector is divided by its length, but its components can be negative, too: (0, 0, -1) is normalized, and (0.707, 0, -0.707) is normalized, too.

Second of all, it’s not the alpha of the texture.

No, it’s color.a we are manipulating here, which is the specularity value stored for this pixel; that was written during the G-buffer setup.

The basic specularity formula is the stock good old Phong lighting model, which simply has a specular input.
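For reference, the specular part of that model is essentially what the shader at the top of the thread already does:

float3 reflectionVector = normalize(reflect(lightDirection, normal));  // light bounced off the surface
float3 directionToCamera = normalize(cameraPosition - position.xyz);   // surface to camera
float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);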


Hey guys!

Just to be sure … I didn’t take anything the wrong way :wink:
Seeing code written some years ago and thinking it is just a bunch of crap … we developers evolve :slight_smile:
And still, I think my code is kind of crappy - refactoring is needed here asap!

Thanks to kosmonautgames for clarifying the phong/deferred rendering stuff going on here!

Anyways … I’ve got another related oddity.

The LBuffer contains a mysterious spot in the alpha channel (specular intensity). I seem to remember this is a “normal” issue, with the deferred directional light reflecting at the far clipping plane of the scene.

Furthermore, some strange stuff happens when clearing the LBuffer.

render() calls _clearBuffers(), which in turn does

_graphicsDevice.SetRenderTargets(_lBuffer.LightMap);
_graphicsDevice.Clear(Color.Transparent);
_graphicsDevice.SetRenderTargets(null);

If I do not call

_graphicsDevice.Clear(Color.Transparent);

whilst rendering the light buffer, the light’s alpha channel is set to 1, resulting in a fully white final image. The LBuffer is not touched while rendering the GBuffer parts. Why is this? It’s not an unsolvable bug for me, but I want to understand why it happens.

Thanks again for your help and kind words guys :slight_smile:

Oh… and the sources have been updated: http://www.garbe-clan.de/content/trunk.zip

Uhm, when you initialize the render targets you have many parameters like width, height etc.

The last parameter is either RenderTargetUsage.DiscardContents or RenderTargetUsage.PreserveContents:
rt = new RenderTarget2D(…, RenderTargetUsage.DiscardContents)

Default is DiscardContents. That means that once you stop using a render target (SetRenderTarget to something else), its contents get cleared to some default value. I think the default is black - ergo (0, 0, 0, 1).
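For example (just a sketch; width, height and the formats are placeholders, use whatever the LBuffer actually needs):

var lightMap = new RenderTarget2D(
    graphicsDevice,
    width, height,
    false,                               // no mipmaps
    SurfaceFormat.Color,
    DepthFormat.None,
    0,                                   // no multisampling
    RenderTargetUsage.PreserveContents); // or DiscardContents, which is the default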

RenderTargetUsage.PreserveContents makes it even more weird …

Each frame the content is blended over and over and over again …
It seems as if the lbuffer is not really cleared by _graphicsDevice.Clear(Color.Transparent) - which is odd.

To clarify …

  1. Clear all render targets. LBuffer is cleared using _graphicsDevice.Clear(Color.TransparentBlack);
  2. Render geometry
  3. Render lights
  4. Combine to final image

If I do NOT do a _graphicsDevice.Clear(Color.TransparentBlack); again in step #3, the output’s alpha is for some reason pinned at #FF.

The version you uploaded, trunk2, has no real output; everything is white.

EDIT: I just randomly subtracted 1 from the final output and it looks like this now.

Hey!

Have a look at Renderer\Deferred.php

private void _renderLights()
{
    _graphicsDevice.SetRenderTargets(_lBuffer.LightMap);

    // Prepare render states
    //_graphicsDevice.Clear(Color.Transparent);       <=== This one

If you comment this line back in, everything works as expected. You could also remove the specular influence in the combine.fx shader:

float3 lightTerm = (texColor * diffuseLight) + specularLight;

As said, this “issue” is just confusing to me as I would just clear the rendertarget in the _renderLights() method to “fix” it. What really could be an issue is the white spot in the light/final image as mentioned above.

You can clear it with a new Vector4(0, 0, 0, 0) or any other color, and specify any alpha value.

Hey Alkher,

as written, this does not change the issue. As long as I do not resolve the render target between clearing it and rendering the light, everything is fine. Resolving the target is somehow killing the alpha channel of the render target …

Anyways … this is not a main issue, I just want to understand WHY this happens … more critical: is the “spot” in the lbuffer alpha mentioned above normal?

How do you write colors in your shader? Maybe the graphicsDevice.Clear is overridden in the shader with some value.