SSAO problems again

Tried to add SSAO to my deferred renderer. It should have been trivial, but nothing I do works.

At first I thought it was depth buffer precision issues, so I swapped from a logarithmic depth buffer to a linear one. I verified this worked by writing the depth buffer out to the log file, and it's perfect.

I can see that the depth varies from pixel to pixel, not by a lot, but by enough that SSAO should work.

So I ripped the shader apart and went down to a really trivial version that just samples 16 pixels around the target pixel and counts the samples that are in front of the target pixel in depth.

Doesn’t work.

I get nothing

The shader has been ripped apart and is now as simple as it can be. It will not generate good-looking SSAO, but it should produce SOMETHING:

///////////////////////////////////////////////////////////////////////////////////////
// Pixel shaders
///////////////////////////////////////////////////////////////////////////////////////
float4 MainPS(VertexShaderOutput input) : COLOR
{
    //float3 random = normalize(tex2D(RandomTextureSampler, input.UV * 4.0).rgb);
    float depth = tex2D(depthSampler, input.UV).r;

    //float3 position = float3(input.UV, depth);
    //float3 normal = normal_from_depth(input.UV);

    //float radius_depth = radius / depth;
    //float occlusion = 0.0;
    float diff = 16.0;

    [unroll]
    for (int i = 0; i < 16; i++)
    {
        float2 sp = sample_sphere[i].xy * 8 * halfPixel;
        float occ_depth = tex2D(depthSampler, input.UV + sp).r;

        //float3 ray = radius_depth * reflect(sample_sphere[i], random);
        //float3 hemi_ray = position + sign(dot(ray, normal)) * ray;
        //
        //float occ_depth = tex2D(depthSampler, saturate(hemi_ray.xy)).r;
        //float difference = (depth - occ_depth);

        if (depth > occ_depth)
            diff -= 1.0;

        //occlusion += step(falloff, difference) * (1.0 - smoothstep(falloff, area, difference));
    }

    //float ao = 1.0 - total_strength * occlusion * (1.0 / samples);
    //float res = saturate(ao + base);

    float res = saturate(diff / 16.0);
    return float4(res, res, res, 1.0);
    //return float4(debug_depth(depth), 1.0);
}

I cannot get the content builder to generate debug shaders (it just crashes), so I cannot debug the shader itself.

I have checked the depth buffer is correct many times. I even did a pass where I took a dump of the depth buffer and ran the SSAO in C# instead of the shader, which produced the expected awful results.

I can tell you the difference between adjacent pixels in the depth buffer is in the centimetre range (0.01 to 0.09), but that should be enough for the shader to work.

Has anyone got any ideas how I can continue?

It’s doing my head in.

I don’t see a problem with the code you posted. It should give you shades of gray. My guess is that the problem is somewhere outside of this code.

Since the output is all white, it looks like the if statement in the loop never triggers. Maybe all the samples come from the same pixel: could sp be too small to make it to a neighbouring pixel?

As a first test I would hardcode sp to (0.1, 0.1). Now you should see some pixels that are totally black and some that are totally white. If you get black and white, sp is probably too small. If the output is still all white, then my next guess would be that something is wrong with the texture setup: either the depth map doesn’t make it into the shader, or the output somehow doesn’t make it into the white texture you show.
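Inside the loop, that test would look something like this (a sketch only, reusing the variable names from the shader posted above):

```hlsl
// Temporary diagnostic: ignore sample_sphere entirely and force a large,
// fixed UV offset so every sample definitely lands on a different pixel.
float2 sp = float2(0.1, 0.1);
float occ_depth = tex2D(depthSampler, input.UV + sp).r;

if (depth > occ_depth)
    diff -= 1.0;
```

If this version produces black-and-white output, the comparison logic and texture bindings are fine and the problem is in how sp is computed.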

Early on I did a test that just wrote the depth of the sample point into the result.

Got the expected result.

So the depth texture is fine. (I have even written it to the log file as text and looked at it; it’s perfect.)

sample_sphere contains 8 offsets of magnitude 2 (from (-2,-2) to (2,2), skipping (0,0)) and 8 offsets of magnitude 4 (from (-4,-4) to (4,4)).

When multiplied by halfPixel, the samples should end up 1 and 2 pixels away. I added the * 8 as another test.

It’s just madness.

Okay this is interesting

I added some debugging: if the pixel offset is <= half a pixel, I set the red channel of the result.

float diff = 16.0;
float error = 0;

[unroll]
for (int i = 0; i < 16; i++)
{
    float2 sp = sample_sphere[i].xy * 8 * halfPixel;
    float occ_depth = tex2D(depthSampler, input.UV + sp).r;

    if ((sp.x <= halfPixel.x) && (sp.y <= halfPixel.y))
        error++;
And every pixel is an error.

So it looks like sample_sphere[i] IS ALWAYS (0, 0).

Yes, that’s what it is

I changed the code to …

float depth = tex2D(depthSampler, input.UV).r;

float res = doTest(float2(-2, -2), input.UV, depth);
res += doTest(float2(-2, 0), input.UV, depth);
res += doTest(float2(-2, 2), input.UV, depth);
res += doTest(float2(0, 2), input.UV, depth);
res += doTest(float2(0, -2), input.UV, depth);
res += doTest(float2(2, -2), input.UV, depth);
res += doTest(float2(2, 0), input.UV, depth);
res += doTest(float2(2, 2), input.UV, depth);

res += doTest(float2(-4, -4), input.UV, depth);
res += doTest(float2(-4, 0), input.UV, depth);
res += doTest(float2(-4, 4), input.UV, depth);
res += doTest(float2(0, 4), input.UV, depth);
res += doTest(float2(0, -4), input.UV, depth);
res += doTest(float2(4, -4), input.UV, depth);
res += doTest(float2(4, 0), input.UV, depth);
res += doTest(float2(4, 4), input.UV, depth);

return float4(res, res, res, 1);

And it works.
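doTest itself is not shown above. Based on the loop it replaced, it is presumably something along these lines (the body here is my reconstruction; only the call sites appear in the post):

```hlsl
// Presumed shape of doTest, reconstructed from the original loop.
// offset is in pixels; halfPixel converts it to UV space.
float doTest(float2 offset, float2 uv, float depth)
{
    float occ_depth = tex2D(depthSampler, uv + offset * halfPixel).r;

    // Contribute 1/16 for every sample that is NOT in front of this pixel,
    // matching the diff / 16.0 normalisation in the original loop.
    return (depth > occ_depth) ? 0.0 : 1.0 / 16.0;
}
```

The key difference is that the offsets are now literals passed as arguments, so the broken array lookup never happens.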

I think we have to call this a bug in the shader compiler.

Has anyone ever defined an array in a shader and had it work?

it’s a known bug

IIRC you basically have to prefix the array with static.
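For anyone hitting the same thing, the declaration presumably needs to change from a plain const array to a static const one. A sketch, with the values reconstructed from the description earlier in the thread (the real array may be float3):

```hlsl
// Without 'static', some HLSL toolchains treat a global const array as an
// unset uniform, so every element reads back as zero at runtime.
static const float2 sample_sphere[16] =
{
    float2(-2, -2), float2(-2, 0), float2(-2, 2), float2(0,  2),
    float2(0, -2),  float2(2, -2), float2(2, 0),  float2(2,  2),
    float2(-4, -4), float2(-4, 0), float2(-4, 4), float2(0,  4),
    float2(0, -4),  float2(4, -4), float2(4, 0),  float2(4,  4)
};
```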

Thanks, that would have been good to know a week or so ago :smile: