HLSL: How to set and get higher precision from a depth map?

I've got a bias in there already; it has no effect.

OK, let's see what your image looks like.

I'm using a very large frustum, and the background is a globe, which makes my shadows pixelated. It looks like this:

Here's the opposite sort of image when I reverse the depth test, which makes no sense.

        // reversed the greater-than > check to a less-than < check
        // in depth map bounds, slightly off color
        // pixels are behind the depth shadow
        inLightOrShadow = float4(.89f, .89f, .99f, 1.0f);

        if ((realLightDepth - .1f) < depthStoredInShadowMap)
        {
            // this is when the pixels should be in front of the shadow
            inLightOrShadow = float4(0.20f, 0.10f, 0.10f, 1.0f);
        }

Here are the positions of everything:

        public void SetupWorldSpaceCameraObjects()
        {
            //float degrees = 22.5f;
            float degrees = 0f;
            spinMatrixY = Matrix.CreateRotationY(TORADIANS * degrees);

            obj_CompassArrow.Position = new Vector3(0, 0, -12);
            obj_CompassArrow.Forward = Vector3.Forward;

            obj_Camera.Position = new Vector3(0, 1f, 8f);
            obj_Camera.Forward = spinMatrixY.Forward;

            obj_Light.Position = new Vector3(-3, -3, +25f);
            //obj_Light.Forward = spinMatrixY.Forward;
            obj_Light.Forward = (obj_Camera.Forward * 1000 + obj_Camera.Position) - obj_Light.Position;

            obj_sky.Scale(new Vector3(95f, 95f, 95f));
            obj_sky.Position = Vector3.Zero;//obj_Camera.Position;
            obj_sky.Forward = Vector3.Forward;

            obj_sphere01.Scale(3f, 3f, 3f);
            //obj_sphere01.Scale(-3f, -3f, 3f);
            obj_sphere01.Position = new Vector3(-2.5f, -3f, -26f);
            obj_sphere01.Forward = Vector3.Forward;

            obj_sphere02.Scale(3f);
            obj_sphere02.Position = new Vector3(+2.5f, 3f, -20f);
            obj_sphere02.Forward = Vector3.Forward;

            obj_sphere03.Scale(3f, 3f, 3f);
            obj_sphere03.Position = new Vector3(0, 0f, -15f);
            obj_sphere03.Forward = Vector3.Forward;

            obj_sphere04.Scale(3f, 3f, 3f);
            obj_sphere04.Position = new Vector3(0f, -2f, -10f);
            obj_sphere04.Forward = Vector3.Forward;

            camTransformer.CreateCameraViewSpace(obj_Camera);
        }

What if you try sampling 4 adjacent pixels from your depth and take the minimum? That helped me get around a lot of shadow artifacts.
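Something like this, just as a sketch (shadowSampler, texelSize and uv are placeholder names here; texelSize would be 1.0 / your shadow map resolution):

    // take the minimum of a 2x2 block of depth-map texels around the projected coordinate
    float d0 = tex2D(shadowSampler, uv).r;
    float d1 = tex2D(shadowSampler, uv + float2(texelSize, 0)).r;
    float d2 = tex2D(shadowSampler, uv + float2(0, texelSize)).r;
    float d3 = tex2D(shadowSampler, uv + float2(texelSize, texelSize)).r;
    float shadowDepth = min(min(d0, d1), min(d2, d3));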

It's not artifacts so much as it's just not working right.

The objects themselves are being shadowed; I'm pretty sure both calculations are borked somehow.

    // the real light distance
    float realLightDepth = input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w;
    
    // shadows depth map value
    float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(TextureSamplerB, ProjectedTexCoords));

When I draw these out as intensity they are both wrong: the depth map shows black for the far-away background, and all the spheres are very white up close.

When I draw out the real light depth, the background is extremely white and the objects are greyish white.

When I do this when creating the depth map, forgoing the /w, I get crazy fractal-looking results:

    //Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;
    float depth = PSIn.Position2D.z;// / PSIn.Position2D.w;
    Output.Color = EncodeFloatRGBA(depth);

With the background in, it's really crazy.
It's almost as if there is an additive bleed-through going on as well.

With the divide by w back in and no background I get this.

With the background, it's fully white.

PixelShaderLightingShadowOutput PixelShaderLightingShadow(VertexShaderLightingShadowOutput input)
{
    PixelShaderLightingShadowOutput output;
    float4 result = tex2D(TextureSamplerA, input.TexureCoordinateA);

    // positional projection on depth map
    float2 ProjectedTexCoords;
    ProjectedTexCoords[0] = input.Pos2DAsSeenByLight.x / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
    ProjectedTexCoords[1] = -input.Pos2DAsSeenByLight.y / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
    
    // the real light distance
    float realLightDepth = input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w;
    
    // shadows depth map value
    float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(TextureSamplerB, ProjectedTexCoords));
    //float depthStoredInShadowMap = tex2D(TextureSamplerB, ProjectedTexCoords).r;

    // for testing 
    float4 inLightOrShadow = float4(.99f, .99f, .99f, 1.0f);

    // if in bounds of the light
    if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
    {
        // in bounds slightly off color
        inLightOrShadow = float4(.89f, .89f, .99f, 1.0f);

        if ((realLightDepth + .01f) > depthStoredInShadowMap)
        {
            // shadow
            inLightOrShadow = float4(0.20f, 0.10f, 0.10f, 1.0f);
        }
    }

    //test
    //result = float4(realLightDepth, realLightDepth, realLightDepth, 1.0f);
    result = float4(depthStoredInShadowMap, depthStoredInShadowMap, depthStoredInShadowMap, 1.0f);

    // finalize
    //result *= inLightOrShadow;
    result.a = 1.0f;
    output.Color = result;
    return output;
}

It is normal that you get wrong results when not dividing by w; there are different spaces involved at each step of the calculations, the commonly named "screen space" being one of them.

The complete explanation (OpenGL, but it's the same thing):
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
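For reference, the divide in question, written in terms of the variable names used in the shader above (just a sketch of the clip-space to NDC step, not complete code):

    // clip-space position written by the vertex shader
    float4 clipPos = input.Pos2DAsSeenByLight;
    // perspective divide: clip space -> normalized device coordinates
    // (x,y in [-1,1], z in [0,1] for Direct3D-style projections)
    float3 ndc = clipPos.xyz / clipPos.w;
    // remap x,y to texture space, as ProjectedTexCoords does above
    float2 uv = float2(ndc.x, -ndc.y) * 0.5f + 0.5f;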

About precision, here is an interesting article for you:


OK, these encode/decode functions are no good.

float4 g = EncodeFloatRGBA(realLightDepth);
realLightDepth = DecodeFloatRGBA(g);

I ran a depth-to-screen test, directly inserting this encode and decode into the output of a working colored visual test, and it basically showed this function doesn't work.
It killed the visual; it shrunk the depth down to nothing.

I tried bit shifting and masking (& and |) but it just errors.

How can I store higher-precision depth in a render target and retrieve it? What's the right way to do this? Does anyone have a working example that properly stores and retrieves depth with more than 8 bits of precision?

Those functions definitely worked for me before. Here are the ones I'm using now, which store a float in 24 bits of a color, and they are working well in my game with deferred rendering:

float3 EncodeFloatRGB(float f)
{
	float3 color;
	f *= 256;
	color.x = floor(f);
	f = (f - color.x) * 256;
	color.y = floor(f);
	color.z = f - color.y;
	color.xy *= 0.00390625; // *= 1.0/256
	return color;
}

float DecodeFloatRGB(float3 color)
{
	const float3 byte_to_float = float3(1.0, 1.0 / 256, 1.0 / (256 * 256));
	return dot(color, byte_to_float);
}

They basically do the same thing but they’re optimized a bit more. If these still don’t work for you then I would guess there’s something else wrong. Can you give a sample of a more basic shader where you see issues?

EDIT: You mentioned it looked like there was some additive bleed-through. Do you have multiple passes writing to the depth target? Is it possible there’s a mistake in your blend states?

How's it going for you, willmotil?
Have you solved it yet?

Regards, Morgan

I haven't solved it yet. I'll try that one in a bit; I'm on my phone right now. The bleed-through was just because I didn't set the blend state to Opaque. I'll post a pic later. Basically I did a color precision test over a large range of values by scaling a cube by 1, 1, 1000 and then rotating it,
drawing its depth directly in a color scheme.
Then, by intercepting the values and encoding/decoding those depths in between, I could see the resulting failure of the functions. I'll post it in a couple hours when I can get back on.

Weird results with z/w as well; I'll show it in a minute.
I'm going to try that other function and see if it works.

So you can see how I tested the encode/decode, I've added pictures.
The following pictures all depict depth by color; the texture is white and grey so as not to affect the visual representation.

The below is the z depth alone of the calculated light across a spectrum. It's not from the depth buffer texture, which is just returning low-precision junk. The code is at the bottom.

Blue = closest.
Green = middle.
Red = farther.

Note: the visible objects range from 10 to 500 in z, the scene to about 550 in z (near 700 at the corners), and the far plane is set to 750.

This is with the encode/decode inserted before visualizing the same calculated depths.

Here is the result of the calculated light using z divided by w (z/w).
Not what I was expecting. I altered the visual range because otherwise it's just super dark, but also to show it's doing something weird.

Here is the shader code; I trimmed it way down.

    //_______________________________________________________________
    // >>>> LightingShadowPixelShader <<<
    //_______________________________________________________________
    struct VertexShaderLightingShadowInput
    {
        float4 Position : POSITION0;
        //float3 Normal : NORMAL0;
        //float4 Color : COLOR0;
        float2 TexureCoordinateA : TEXCOORD0;
    };

    struct VertexShaderLightingShadowOutput
    {
        float4 Position : SV_Position;
        //float3 Normal : NORMAL0;
        //float4 Color : COLOR0;
        float2 TexureCoordinateA : TEXCOORD0;
        float4 Pos2DAsSeenByLight    : TEXCOORD1;
        //float4 Position3D            : TEXCOORD2;
    };

    struct PixelShaderLightingShadowOutput
    {
        float4 Color : COLOR0;
    };

    VertexShaderLightingShadowOutput VertexShaderLightingShadow(VertexShaderLightingShadowInput input)
    {
        VertexShaderLightingShadowOutput output;
        output.Position = mul(input.Position, gworldviewprojection);
        output.Pos2DAsSeenByLight = mul(input.Position, lightsPovWorldViewProjection);
        output.TexureCoordinateA = input.TexureCoordinateA;
        //output.Position3D = mul(input.Position, gworld);
        //output.Color = input.Color;
        //output.Normal = input.Normal;
        //output.Normal = normalize(mul(input.Normal, (float3x3)gworld));
        return output;
    }

    PixelShaderLightingShadowOutput PixelShaderLightingShadow(VertexShaderLightingShadowOutput input)
    {
        PixelShaderLightingShadowOutput output;
        // current texel, gray scale it a little
        float4 result = tex2D(TextureSamplerA, input.TexureCoordinateA) * 0.5f + 0.25f;
        // positional projection on depth map
        float2 ProjectedTexCoords;
        ProjectedTexCoords[0] = input.Pos2DAsSeenByLight.x / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
        ProjectedTexCoords[1] = -input.Pos2DAsSeenByLight.y / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
        
        //float calculatedLightDepth = abs(input.Pos2DAsSeenByLight.z);
        float calculatedLightDepth = input.Pos2DAsSeenByLight.z;

        // !! no good decode !!  //float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(TextureSamplerB, ProjectedTexCoords));
        float depthStoredInShadowMap = tex2D(TextureSamplerB, ProjectedTexCoords).r;

        // Visualize one or the other.
        float tdepth = calculatedLightDepth;
        //float tdepth = depthStoredInShadowMap;

        // +++++++++++++++++++++++++++++++++++++
        // Test....Encoding Decoding algorithm.
        // +++++++++++++++++++++++++++++++++++++

        // uncomment these lines to test
        //float4 temp = EncodeFloatRGBA(tdepth);
        //tdepth = DecodeFloatRGBA(temp);

        // +++++++++++++++++++++++++++++++++++++
        // Test End.
        // +++++++++++++++++++++++++++++++++++++

        // when not using / by w
        float db = saturate(tdepth / 10.0f);
        float dg = saturate(tdepth / 75.0f);
        float dr = saturate(tdepth / 500.0f);
        float dz = saturate(tdepth / 2000.0f);
        db = saturate(db - dg + dz * 0.5f);
        dg = saturate(dg - dr);

        // visual combine
        result *= float4(dr, dg, db, 1.0f);
        // finalize
        output.Color = result;
        output.Color.a = 1.0f;
        return output;
    }

If you want the shadow map generation code, it's at the top; same thing as before.

Edit…

Well, it's looking like the clip plane needs to be used for the direct light calculation. I'm actually not even worrying about the shadow map encoding yet; I've totally put that to the side for the moment.

I'm reading everywhere that it is supposed to be z/w to get the actual depth.
However, that is not what I'm seeing whatsoever.

z/f/w

z*w/f

z/w ???

I think you need to normalize your depths by dividing by FarClip. These functions are designed to encode values in the range [0, 1).
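For example, in terms of the shaders posted above (just a sketch; FarClip is an assumed constant that both passes would need):

    // depth-map pass: normalize before encoding
    float depth = PSIn.Position2D.z / FarClip;   // roughly [0, 1) for anything inside the far plane
    Output.Color = EncodeFloatRGBA(depth);

    // shadow pass: compare in the same normalized space
    float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(TextureSamplerB, ProjectedTexCoords));
    float realLightDepth = input.Pos2DAsSeenByLight.z / FarClip;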

Hmm, my first reaction to what I see is the same as jnoyola's: something is way out of proportion. I'll have to dig in deeper.

Well, the simple z/w in my deferred engine gives all red. I have to get very close (around the znear, in fact) to see it varying, due to the precision of this calculation.
Sometimes the human eye cannot see the differences.
Do you also divide by w when you compute the lighting?
Try 1 - (z/w), which should give you a better view, or divide by the far plane, to see what's going on.
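Something like this for the debug view, for instance (just a sketch; FarClip is an assumed constant):

    // flip the range so the small near-range differences become visible
    float vis = 1.0f - (input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w);
    // or scale the raw z by the far plane instead
    //float vis = input.Pos2DAsSeenByLight.z / FarClip;
    output.Color = float4(vis, vis, vis, 1.0f);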


This is what I ended up doing; this aligned the depth values to the range of my far plane.

depth = depth.z / farPlane * depth.w;

Well, I worked on this for hours again today and got it partially working.

It's still got some problems: artifacts, and where to aim the shadow map's view camera. Screw it, I got it working for now. That second encoding function worked as well.

Glad you got something working. I’m not sure if the artifacts you have are due to loss of precision with your depth, but if so then you can extend that second function to encode with 32 bits instead of 24.

How would I extend that to take a 4th component? I can see how to decode it but not how to encode it. Also, what surface format is best to use for the render target?

Each subsequent channel chops off what was already encoded and takes the next 256th chunk, with the exception of z (or now w), which just takes the remainder, whether it fits or not. So you should just be able to insert another layer in the middle. I think it would be like this; I worked out the math but didn't actually try running it. Don't forget to change the decoder too, and let me know if it works.

float4 EncodeFloatRGB(float f)
{
	float4 color;
	f *= 256;
	color.x = floor(f);
	f = (f - color.x) * 256;
	color.y = floor(f);
	f = (f - color.y) * 256;
	color.z = floor(f);
	color.w = f - color.z;
	color.xyz *= 0.00390625; // *= 1.0/256
	return color;
}
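The matching decoder would presumably just extend the weight vector the same way (again untested; the name here is just illustrative, replacing the 24-bit DecodeFloatRGB above):

float DecodeFloatRGBA(float4 color)
{
	// weights mirror the encoder: x holds the most significant byte
	const float4 byte_to_float = float4(1.0, 1.0 / 256, 1.0 / (256 * 256), 1.0 / (256 * 256 * 256));
	return dot(color, byte_to_float);
}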

I use SurfaceFormat.Color for all of mine because it’s consistent and one of the few that works reliably across all platforms.

Do MonoGame’s rendertargets not work for R16 or R32 or something? I don’t quite get the need for encoding (other than remapping like /w).

Well this was my original post where I decided to use encoding because there aren’t any supported 32-bit precision surface formats for iOS.
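On platforms that do support a single-channel float format (e.g. SurfaceFormat.Single), the encode/decode step can be skipped entirely; a sketch of what that would look like in the shaders above:

    // depth-map pass: write the normalized depth straight into the red channel
    Output.Color = float4(depth, 0.0f, 0.0f, 1.0f);

    // shadow pass: read it back directly, no decoding needed
    float depthStoredInShadowMap = tex2D(TextureSamplerB, ProjectedTexCoords).r;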