HLSL: How to set and get higher precision from a depth map?

Weird results with z/w as well; I'll show them in a minute.
I'm going to try that other function and see if it works.

So you can see how I tested the encode/decode, I added pictures.
The following pictures all depict depth by color; the texture is white and grey so as not to affect the visual representation.

Below is the z depth alone of the calculated light, across a spectrum. It's not from the depth buffer texture, which is just returning low-precision junk. The code is at the bottom.

Blue = closest.
Green = middle.
Red = farthest.

Note: the visible objects range from 10 to 500 in z, the scene extends to about 550 in z (near 700 at the corners), and the far plane is set to 750.

This is the encode/decode inserted before visualizing the same calculated depths.

Here is the result of the calculated light using z divided by w (z/w).
Not what I was expecting. I altered the visual range because otherwise it's just super dark, but also to show it's doing something weird.

Here is the shader code; I trimmed it way down.

    //_______________________________________________________________
    // >>>> LightingShadowPixelShader <<<<
    //_______________________________________________________________
    struct VertexShaderLightingShadowInput
    {
        float4 Position : POSITION0;
        //float3 Normal : NORMAL0;
        //float4 Color : COLOR0;
        float2 TexureCoordinateA : TEXCOORD0;
    };

    struct VertexShaderLightingShadowOutput
    {
        float4 Position : SV_Position;
        //float3 Normal : NORMAL0;
        //float4 Color : COLOR0;
        float2 TexureCoordinateA : TEXCOORD0;
        float4 Pos2DAsSeenByLight    : TEXCOORD1;
        //float4 Position3D            : TEXCOORD2;
    };

    struct PixelShaderLightingShadowOutput
    {
        float4 Color : COLOR0;
    };

    VertexShaderLightingShadowOutput VertexShaderLightingShadow(VertexShaderLightingShadowInput input)
    {
        VertexShaderLightingShadowOutput output;
        output.Position = mul(input.Position, gworldviewprojection);
        output.Pos2DAsSeenByLight = mul(input.Position, lightsPovWorldViewProjection);
        output.TexureCoordinateA = input.TexureCoordinateA;
        //output.Position3D = mul(input.Position, gworld);
        //output.Color = input.Color;
        //output.Normal = input.Normal;
        //output.Normal = normalize(mul(input.Normal, (float3x3)gworld));
        return output;
    }

    PixelShaderLightingShadowOutput PixelShaderLightingShadow(VertexShaderLightingShadowOutput input)
    {
        PixelShaderLightingShadowOutput output;
        // current texel, gray scale it a little
        float4 result = tex2D(TextureSamplerA, input.TexureCoordinateA) * 0.5f + 0.25f;
        // positional projection on depth map
        float2 ProjectedTexCoords;
        ProjectedTexCoords[0] = input.Pos2DAsSeenByLight.x / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
        ProjectedTexCoords[1] = -input.Pos2DAsSeenByLight.y / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
        
        //float calculatedLightDepth = abs(input.Pos2DAsSeenByLight.z);
        float calculatedLightDepth = input.Pos2DAsSeenByLight.z;

        // !! no good decode !!  //float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(TextureSamplerB, ProjectedTexCoords));
        float depthStoredInShadowMap = tex2D(TextureSamplerB, ProjectedTexCoords).r; // take the red channel explicitly

        // Visualize one or the other.
        float tdepth = calculatedLightDepth;
        //float tdepth = depthStoredInShadowMap;

        // +++++++++++++++++++++++++++++++++++++
        // Test....Encoding Decoding algorithm.
        // +++++++++++++++++++++++++++++++++++++

        // uncomment these lines to test
        //float4 temp = EncodeFloatRGBA(tdepth);
        //tdepth = DecodeFloatRGBA(temp);

        // +++++++++++++++++++++++++++++++++++++
        // Test End.
        // +++++++++++++++++++++++++++++++++++++

        // when not dividing by w: band the depth into blue (near), green (mid), red (far)
        float db = saturate(tdepth / 10.0f);
        float dg = saturate(tdepth / 75.0f);
        float dr = saturate(tdepth / 500.0f);
        float dz = saturate(tdepth / 2000.0f);
        db = saturate(db - dg + dz * 0.5f);
        dg = saturate(dg - dr);

        // visual combine
        result *= float4(dr, dg, db, 1.0f);
        // finalize
        output.Color = result;
        output.Color.a = 1.0f;
        return output;
    }

If you want the shadow map generation code, it's at the top; same thing as before.

Edit…

Well, it's looking like the clip plane needs to be used for the direct light calculation. I'm actually not even worrying about the shadow map encoding yet; I've put that to one side for the moment.

I'm reading everywhere that it's supposed to be z/w to get the actual depth.
However, that is not what I'm seeing whatsoever.

z/f/w

z*w/f

z/w ???
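
If I write out what I think each of those means (just a sketch, assuming a standard perspective projection where w ends up holding the light-space view depth; farPlane is a constant I would pass in):

    // after the light's projection, w is (proportional to) the view depth
    float4 p = mul(input.Position, lightsPovWorldViewProjection);

    // hardware-style depth: non-linear, crowds toward 1.0 at the far plane
    float hardwareDepth = p.z / p.w;

    // linear depth: a straight 0..1 ramp out to the far plane
    float linearDepth = p.w / farPlane;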

I think you need to normalize your depths by dividing by FarClip. These functions are designed to encode values in the range [0, 1).
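
Roughly like this (a sketch; FarClip is whatever far-plane value you already pass in as a shader constant):

    // normalize into [0, 1) before packing; exactly 1.0 won't survive
    // the encode, so clamp just under it
    float depthNorm = min(calculatedLightDepth / FarClip, 0.9999f);
    float4 packed = EncodeFloatRGBA(depthNorm);

    // ...and scale back up after decoding
    float depth = DecodeFloatRGBA(packed) * FarClip;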

Hmm, my first reaction to what I see is the same as jnoyola's: something is way out of proportion. I'll have to dig in deeper.

Well, the simple z/w in my deferred engine gives all red. I have to get very close (around the znear, in fact) to see it vary, due to the precision of this calculation. Sometimes the human eye cannot see the differences.
Do you also divide the lighting by w when you compute lighting?
Try with 1-(z/w), which should give you a better view, or divide by the far plane to see what's going on.
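
In the pixel shader, that would be something like this (untested, using your variable names):

    // flip the non-linear depth so variation near the camera shows up,
    // instead of everything saturating at the far end
    float d = 1.0f - (input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w);
    output.Color = float4(d, d, d, 1.0f);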


This is what I ended up doing; it aligned the depth values to the range of my far plane.

    depth = depth.z / farPlane * depth.w;
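
In the pixel shader above, that replaces taking z on its own (same names as before; farPlane is a constant I pass in):

    // remap the light-space depth so it lands in the range of my far plane
    float calculatedLightDepth =
        input.Pos2DAsSeenByLight.z / farPlane * input.Pos2DAsSeenByLight.w;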

Well, I worked on this for hours again today and got it partially working.

It still has some problems: artifacts, and where to aim the shadow map's view camera. Screw it, I've got it working for now. That second encoding function worked as well.

Glad you got something working. I’m not sure if the artifacts you have are due to loss of precision with your depth, but if so then you can extend that second function to encode with 32 bits instead of 24.

How would I extend that to take a fourth vector component? I can see how to decode it, but not how to encode it. Also, what surface format is best to use on the render target?

Each subsequent channel chops off what was already encoded and takes the next 8-bit chunk, with the exception of z (or now w), which just takes the remainder, whether it fits or not. So you should just be able to insert another layer in the middle. I think it would be like this; I worked out the math, but didn't actually try running it. Don't forget to change the decoder too, and let me know if it works.

    float4 EncodeFloatRGBA(float f)
    {
        float4 color;
        f *= 256;
        color.x = floor(f);        // top 8 bits
        f = (f - color.x) * 256;
        color.y = floor(f);        // next 8 bits
        f = (f - color.y) * 256;
        color.z = floor(f);        // next 8 bits
        color.w = f - color.z;     // leftover fraction, whether it fits or not
        color.xyz *= 0.00390625;   // *= 1.0/256
        return color;
    }
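
And the matching decoder just reverses that scaling (again worked out on paper, not actually run):

    float DecodeFloatRGBA(float4 color)
    {
        // x carries the top 8 bits, y the next 8, z the next 8,
        // and w the leftover fraction below those
        return color.x
             + color.y / 256.0
             + color.z / 65536.0
             + color.w / 16777216.0;
    }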

I use SurfaceFormat.Color for all of mine because it’s consistent and one of the few that works reliably across all platforms.

Do MonoGame’s render targets not work with R16 or R32 or something? I don’t quite get the need for encoding (other than remapping, like /w).

Well, this was my original post; I decided to use encoding because there aren’t any supported 32-bit-precision surface formats on iOS.