PBR Point light issue (BRDF)

Hi everyone,
I have a small problem that I can’t seem to wrap my head around. I’ve built a Frankenstein’s-monster shader from example code in Kosmonautgames’ very intense shaders, with the indirect help of MonoGame community members. I broke them down into smaller shaders so I can understand them a little, removing complex stuff like temporal AA etc.

[Problem] Using MonoGame 3.6, rendered with graphics.GraphicsProfile = GraphicsProfile.HiDef; at 4K.
My shader seems to be working just fine for the most part; this is what I see on launch.

At this point (ignoring that the PBR engine has no IBL at the moment) we can clearly see that there is a point light about midway past the body, and the lighting looks correct.

This is what happens if my camera moves inside the light radius, and it gets worse the closer I get to it; I moved much closer with the camera for the screenshot here.

As you can see, it’s lighting the background even though there is no geometry or anything there to light.
It does fade from the center to the edge exactly as a point light should.

Note: If you were wondering about the images at the top, they are, in order (left to right): the Albedo buffer, Diffuse buffer, and Spec buffer; the rest are just post-process effects like grayscale, Gaussian blur, etc.

Here is a cut of the PixelShaderFunction. My current guess is that it’s either something to do with depth or a clamping issue, but that’s just me guessing.

float3 specularColor = float3(1, 1, 1);
float4 Specular = float4(0, 0, 0, 0);
float4 Diffuse = float4(0, 0, 0, 0);

float F0 = tex2D(specularTransmittanceAOSampler, input.TexCoord).r;
float AO = tex2D(specularTransmittanceAOSampler, input.TexCoord).a;
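// (the specularTransmittanceAO target packs material terms per channel;
// only the red channel, used as F0 below, and the alpha channel (AO) are read here)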


float3 PointLightDirection = position.xyz - PointLightPosition;
float DistanceSq = dot(PointLightDirection, PointLightDirection); // HLSL has no lengthSquared intrinsic; dot(v, v) is the squared length
float radius = PointLightRadius;


//----------------------------


if (DistanceSq < abs(radius * radius)) // only shade this pixel if it lies inside the light's sphere of influence
{
	Diffuse = AO;

	if (F0 == 0) {
		// no F0 in the G-buffer: fall back to a typical dielectric reflectance (~0.04), blended up for metals
		F0 = lerp(0.04f, albedo.g * 0.25 + 0.75, metallic);
	}

	if (tex2D(specularTransmittanceAOSampler, input.TexCoord).r > 0) {
		specularColor = albedo.rgb * lightColor;
	}
	else {
		specularColor = lightColor * (1 - F0);
	}


	float Distance = sqrt(DistanceSq);

	//normalize
	PointLightDirection /= Distance;

	float du = Distance / (1 - DistanceSq / (radius * radius - 1));

	float denom = du / abs(radius) + 1;

	//The attenuation is the falloff of the light depending on distance basically
	float attenuation = 1 / (denom * denom);
	

	float3 N = normalize(normal);
	float3 L = normalize(PointLightDirection);
	float3 V = normalize(position.xyz - cameraPosition); // camera-to-pixel; the BRDF calls below use -V (pixel-to-camera)

	float3 H = normalize(-L + V);
	float NdL = clamp(dot(N, -L), 1e-5, 1.0);
	
	Diffuse *= float4(DiffuseOrenNayar(NdL, N, -L, -V, lightIntensity, lightColor, 1 - roughness) * attenuation, 0);
	Specular = float4(SpecularCookTorrance(NdL, N, -L, -V, lightIntensity, specularColor, F0, roughness)* attenuation, 0);
}


output.Color0 = Diffuse * (1 - F0);
output.Color1 = Specular;
return output;
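For reference, the falloff those attenuation lines compute is (just restating the code above):

du = Distance / (1 - DistanceSq / (radius^2 - 1))
attenuation = 1 / (du / |radius| + 1)^2

so attenuation is 1 at the light’s center (du = 0), and du blows up as Distance approaches sqrt(radius^2 - 1), i.e. just inside the radius, driving the attenuation to zero at the edge instead of trailing off forever like plain inverse-square.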

Any ideas?
Thanks.

Do you render the lights as spheres and then flip front-face/back-face culling once you are inside the radius?

No; I’m not sure how I never noticed the sphere render. My code was still rendering a quad.
I’ve started looking into it; I’m not sure yet why we render a sphere. Thanks for the answer though.

[Edit]

Found that my directional light was having the same issue, but only when the light points in a single positive direction, e.g. Vector3(0,0,1) or Vector3(0,1,0). This line of code from Kosmonautgames’ DeferredEngine example fixed that:

if (normalData.x + normalData.y <= 0.001f) //Out of range
{
	output.Color0 = float4(0, 0, 0, 0);
	output.Color1 = float4(0, 0, 0, 0);
	return output;
}
else
{
	// Lighting code goes here
}
return output;

I then went and wrapped my point light code in that same if/else statement, and the problem went away for them as well (presumably because the check skips pixels whose G-buffer normals were never written, i.e. the empty background). Still don’t understand why the examples render to the sphere; performance? Will play around with it a bit to see if there are some problems with it.

Still don’t understand why the examples render to the sphere; performance?

Yes, so you only run the shader where the screen is covered by the sphere. Otherwise you’d run every light over the entire G-buffer. It matters a lot for lights that are farther away, and it’s the only way you get the huge numbers of point lights that deferred shading is known for.

Thanks for the answer I understand now.

So why render to a sphere and not, say, a hemisphere rotated toward the camera?
Would this not also help with the face-culling problem? Sorry if these are dumb questions, but I’m still trying to understand why we do things instead of just doing them “because”.
I may be misunderstanding why we would swap between CullClockwise and CullCounterClockwise depending on whether we are inside or outside the render sphere?

If you render to a sphere, you only calculate the pixels that are in range of the light, so you don’t waste performance computing lighting for pixels the light can’t reach.
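Roughly, the draw loop looks like this. This is a minimal MonoGame-style sketch, not the exact code from any engine; sphereModel (a unit-radius sphere mesh), lightEffect, lights, and the PointLight type with Position/Radius are all placeholder names:

// Sketch only: draw each point light as a sphere volume, flipping the cull
// mode when the camera sits inside the light radius.
foreach (PointLight light in lights)
{
    // Scale the unit sphere up to the light radius and move it to the light.
    Matrix world = Matrix.CreateScale(light.Radius)
                 * Matrix.CreateTranslation(light.Position);

    // If the camera is inside the volume, the sphere's front faces are behind
    // the camera, so draw the back faces instead by flipping the cull mode.
    // (A small margin like Radius * 1.1f helps avoid popping right at the edge.)
    bool cameraInside = Vector3.DistanceSquared(cameraPosition, light.Position)
                        < light.Radius * light.Radius;
    graphicsDevice.RasterizerState = cameraInside
        ? RasterizerState.CullClockwise         // cull front faces, keep back faces
        : RasterizerState.CullCounterClockwise; // default: keep front faces

    lightEffect.Parameters["World"].SetValue(world);
    lightEffect.Parameters["View"].SetValue(view);
    lightEffect.Parameters["Projection"].SetValue(projection);
    lightEffect.Parameters["PointLightPosition"].SetValue(light.Position);
    lightEffect.Parameters["PointLightRadius"].SetValue(light.Radius);

    foreach (EffectPass pass in lightEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        // Draw the sphere mesh with the light shader applied.
        foreach (ModelMesh mesh in sphereModel.Meshes)
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                graphicsDevice.SetVertexBuffer(part.VertexBuffer);
                graphicsDevice.Indices = part.IndexBuffer;
                graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    part.VertexOffset, part.StartIndex, part.PrimitiveCount);
            }
    }
}

The cull-mode flip is the whole trick: with the camera inside the volume, only the back faces are in front of the camera, so those are what must keep the shader running.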

Thank you everyone again. After getting IBL implemented, I came back to my point light to convert the screen-quad rendering I had working at the time to properly render to the sphere, as suggested. It got me over a 3x reduction in cost per point light, currently costing me about 0.073 ms per light at 4K resolution. The function of the culling flip makes sense now.