# HLSL: How to set and get higher precision from a depth map?

So I'm trying to get a shadow map working, but I'm having some problems which I think are related to precision.

Should I be bit-shifting depth into the RGB components of a render target, and does bit shifting even work in HLSL? Or is there a good example of doing this with a depth texture, etc.?

I recently had issues with the precision of my depth render target (not the depth buffer itself), but that was related to lighting, not shadows. In general I believe you should have 24 bits or more to avoid error. What platform are you compiling your hlsl for? Different platforms have different data types available, but you can also encode a 32-bit float as a 32-bit color (rgba) or a 24-bit float as a 24-bit color (rgb), with an extra 8 bits you can use for something else.

Here’s my post about encoding a higher precision depth into a color render target if that’s what you need: [Solved] iOS 32-bit SurfaceFormat

Does that function work? If so, that might just solve this problem.

Yes, it works. As the comment below notes, though, I'm not sure whether you get more accuracy with numerical constants that are powers of 255 or of 256, although I imagine the difference will be unnoticeable in most cases. Currently all the numbers in that function are 255, 255^2, or 255^3. You can also change the float4 to a float3 if you only need 24 bits of accuracy – I couldn't tell the difference in my game – and then use the alpha channel for anything else.

Well, it must be something else in this shader.

I'm using Riemer's shader, and it works with his car model example, but that has a fixed camera and small near and far planes.

When I strip that down and plug it into mine, with a moving camera using the same projection matrix for both, it doesn't seem to work right.

Can you use hex masks and bit shifting on integers in HLSL, something like this?

int value = (int)(depth * 65000);
byte byteR = (byte)(value & 0xFF);
byte byteG = (byte)((value >> 8) & 0xFF);
byte byteB = (byte)((value >> 16) & 0xFF);
byte byteA = (byte)((value >> 24) & 0xFF);
// store those in the color
// then yank them back out into an int
int n = byteR + (byteG << 8) + (byteB << 16) + (byteA << 24);
// and convert that back to a float
float depth2 = n / 65000.0f;

I'm not sure whether that works in HLSL. It's basically the same as the functions I found, though. Considering I found several examples using division and multiplication, and none using shifting and masking, I would imagine the former is the safer (and possibly faster) choice even if the latter is possible.

Maybe if you share the shader code we can spot something else.

But what does the problem look like? I'm wondering because the only problem I've had implementing it is that I had to add a bias of 0.001f or 0.002f (because of floating-point error) to make shadows cast correctly when the objects are far away and close to the ground, in a big frustum.

Here’s how the error manifested on my iPhone. You can see the clear inaccuracies in the depth render target 2nd from the right, and similar, but slightly harder to see, lines in the lighting and the main composite image.

I had another error show up where the depth, and hence the lighting produced from reconstruction, looked as if it were all relative to the camera rather than fixed in world space. I believe that was caused by incorrect encoding/decoding.

Well, the result is the opposite of what I would expect.
I'll post a pic in a second.

This makes a shadow depth map, simple enough:

//_____________________________
//_____________________________
//_____________________________
struct SMapVertexInput
{
    float4 Position : POSITION0;
    float2 TexCoords : TEXCOORD0;
};

struct SMapVertexToPixel
{
    float4 Position : SV_Position;
    float4 Position2D : TEXCOORD0;
};

struct SMapPixelToFrame
{
    float4 Color : COLOR0;
};

float4 EncodeFloatRGBA(float v)
{
    float4 kEncodeMul = float4(1.0, 255.0, 65025.0, 16581375.0); // 255, 255^2, 255^3 = 16581375
    float kEncodeBit = 1.0 / 255.0;
    float4 enc = kEncodeMul * v;
    enc = frac(enc);
    enc -= enc.yzww * kEncodeBit;
    return enc;
}

// signature restored (the function header was missing from the post)
SMapVertexToPixel ShadowMapVertexShader(SMapVertexInput input)
{
    SMapVertexToPixel Output; // = (SMapVertexToPixel)0;

    Output.Position = mul(input.Position, lightsPovWorldViewProjection);
    Output.Position2D = Output.Position;

    return Output;
}

// signature restored (the function header was missing from the post)
SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn)
{
    SMapPixelToFrame Output; // = (SMapPixelToFrame)0;

    //Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;

    Output.Color = EncodeFloatRGBA(PSIn.Position2D.z / PSIn.Position2D.w);

    return Output;
}

// technique header restored with a placeholder name; the pass body
// (compile statements) was missing from the post
technique ShadowMap
{
    pass Pass0
    {
    }
}

Second part:

//_______________________________________________________________
//_______________________________________________________________
// struct name restored with a placeholder (it was missing from the post)
struct VSInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float4 Color : COLOR0;
    float2 TexureCoordinateA : TEXCOORD0;
};

// struct name restored with a placeholder
struct VSOutput
{
    float4 Position : SV_Position;
    float3 Normal : NORMAL0;
    float4 Color : COLOR0;
    float2 TexureCoordinateA : TEXCOORD0;
    float4 Pos2DAsSeenByLight : TEXCOORD1;
    float4 Position3D : TEXCOORD2;
};

// struct name restored with a placeholder
struct PSOutput
{
    float4 Color : COLOR0;
};

// signature and output declaration restored with placeholder names
VSOutput SceneVertexShader(VSInput input)
{
    VSOutput output;

    output.Position = mul(input.Position, gworldviewprojection);
    output.Pos2DAsSeenByLight = mul(input.Position, lightsPovWorldViewProjection);
    output.Position3D = mul(input.Position, gworld);
    output.Color = input.Color;
    output.TexureCoordinateA = input.TexureCoordinateA;
    output.Normal = input.Normal;
    //output.Normal = normalize(mul(input.Normal, (float3x3)gworld));
    return output;
}

// signature and output declaration restored with placeholder names
PSOutput ScenePixelShader(VSOutput input)
{
    PSOutput output;

    float4 result = tex2D(TextureSamplerA, input.TexureCoordinateA);

    // positional projection on the depth map
    float2 ProjectedTexCoords;
    ProjectedTexCoords[0] = input.Pos2DAsSeenByLight.x / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
    ProjectedTexCoords[1] = -input.Pos2DAsSeenByLight.y / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;

    // the real light distance
    float realLightDepth = input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w;

    // restored: sample and decode the stored depth (this line was missing
    // from the post; the sampler name is a placeholder)
    float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(ShadowMapSampler, ProjectedTexCoords));

    // for testing
    float4 inLightOrShadow = float4(.99f, .99f, .99f, 1.0f);

    // if in bounds of the depth map
    if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
    {
        // in bounds, slightly off color
        inLightOrShadow = float4(.89f, .89f, .99f, 1.0f);

        if ((realLightDepth - .1f) > depthStoredInShadowMap)
        {
            inLightOrShadow = float4(0.20f, 0.10f, 0.10f, 1.0f);
        }
    }

    // finalize
    result.a = 1.0f;
    output.Color = result;
    return output;
}

// technique header restored with a placeholder name; the pass body
// was missing from the post
technique SceneTechnique
{
    pass Pass0
    {
    }
}

I dunno, I make my render target like this:

renderTargetDepth = new RenderTarget2D(GraphicsDevice, pp.BackBufferWidth, pp.BackBufferHeight, false, SurfaceFormat.Single, DepthFormat.Depth24);

My depth buffer is turned on:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.SetRenderTarget(renderTargetDepth);
    //SetToDrawToRenderTarget(renderTarget1);
    GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
    GraphicsDevice.DepthStencilState = ds_depthtest_lessthanequals;

    GraphicsDevice.RasterizerState = selected_RS_State2;

    // ... shadow-casting geometry appears to have been drawn here (omitted from the post) ...

    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

    GraphicsDevice.RasterizerState = rs_solid_ccw;
    GraphicsDevice.DepthStencilState = ds_depthtest_lessthanequals;

    GraphicsDevice.RasterizerState = rs_solid_cw;

    // ... scene geometry appears to have been drawn here (omitted from the post) ...

    GraphicsDevice.RasterizerState = RasterizerState.CullNone;
    GraphicsDevice.BlendState = BlendState.AlphaBlend;
    DrawOrientArrows(obj_CompassArrow);
    DrawOrientArrows(obj_Light);

    GraphicsDevice.BlendState = BlendState.Opaque;

    //GraphicsDevice.RasterizerState = rs_solid_nocull;

    DrawText(gameTime);
    base.Draw(gameTime);
}

My far plane is set at around 500, though.

I’ve added a bias parameter and changed this line…

if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap)

to

if ((realDistance - 1.0f / 100.0f) + xBias <= depthStoredInShadowMap)

And in the application I pass in between 0.001f and 0.006f.

Working like a charm

Regards, Morgan

I've got a bias in there already; it has no effect.

OK, I'll see what your image looks like.

I'm using a very large frustum and the background is a globe, which makes my shadows pixelated, but it looks like this:

Here's the opposite sort of image when I reverse the depth test,
which makes no sense.

// reversed the greater-than check > to a less-than check <
// in depth-map bounds, slightly off color
// pixels are behind the stored depth
inLightOrShadow = float4(.89f, .89f, .99f, 1.0f);

if ((realLightDepth - .1f) < depthStoredInShadowMap)
{
    // this is when the pixels should be in front of the shadow
    inLightOrShadow = float4(0.20f, 0.10f, 0.10f, 1.0f);
}

Here are the positions of everything.

public void SetupWorldSpaceCameraObjects()
{
    //float degrees = 22.5f;
    float degrees = 0f;

    obj_CompassArrow.Position = new Vector3(0, 0, -12);
    obj_CompassArrow.Forward = Vector3.Forward;

    obj_Camera.Position = new Vector3(0, 1f, 8f);
    obj_Camera.Forward = spinMatrixY.Forward;

    obj_Light.Position = new Vector3(-3, -3, +25f);
    //obj_Light.Forward = spinMatrixY.Forward;
    obj_Light.Forward = (obj_Camera.Forward * 1000 + obj_Camera.Position) - obj_Light.Position;

    obj_sky.Scale(new Vector3(95f, 95f, 95f));
    obj_sky.Position = Vector3.Zero; //obj_Camera.Position;
    obj_sky.Forward = Vector3.Forward;

    obj_sphere01.Scale(3f, 3f, 3f);
    //obj_sphere01.Scale(-3f, -3f, 3f);
    obj_sphere01.Position = new Vector3(-2.5f, -3f, -26f);
    obj_sphere01.Forward = Vector3.Forward;

    obj_sphere02.Scale(3f);
    obj_sphere02.Position = new Vector3(+2.5f, 3f, -20f);
    obj_sphere02.Forward = Vector3.Forward;

    obj_sphere03.Scale(3f, 3f, 3f);
    obj_sphere03.Position = new Vector3(0, 0f, -15f);
    obj_sphere03.Forward = Vector3.Forward;

    obj_sphere04.Scale(3f, 3f, 3f);
    obj_sphere04.Position = new Vector3(0f, -2f, -10f);
    obj_sphere04.Forward = Vector3.Forward;

    camTransformer.CreateCameraViewSpace(obj_Camera);
}

What if you try sampling the 4 adjacent pixels from your depth map and taking the minimum? That helped me get around a lot of shadow artifacts.

It's not artifacts so much as it's just not working right.

The objects themselves are being shadowed; I'm pretty sure both calculations are borked somehow.

// the real light distance
float realLightDepth = input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w;

When I draw these out as intensity, they are both wrong: the depth map shows black for the far-away background, and all the spheres are very white up close.

When I draw out the real light depth, the background is extremely white and the objects are greyish white.

When I create the depth map forgoing the /w like this, I get crazy fractal-looking results:

//Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;
float depth = PSIn.Position2D.z;// / PSIn.Position2D.w;
Output.Color = EncodeFloatRGBA(depth);

With the background in, it's really crazy.
It's almost as if there is an additive bleed-through going on as well.

With the / by w back in and no background, I get this.

With the background, it's fully white.

// signature and output declaration restored with placeholder names
PSOutput ScenePixelShader(VSOutput input)
{
    PSOutput output;

    float4 result = tex2D(TextureSamplerA, input.TexureCoordinateA);

    // positional projection on the depth map
    float2 ProjectedTexCoords;
    ProjectedTexCoords[0] = input.Pos2DAsSeenByLight.x / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
    ProjectedTexCoords[1] = -input.Pos2DAsSeenByLight.y / input.Pos2DAsSeenByLight.w / 2.0f + 0.5f;

    // the real light distance
    float realLightDepth = input.Pos2DAsSeenByLight.z / input.Pos2DAsSeenByLight.w;

    // restored: sample and decode the stored depth (this line was missing
    // from the post; the sampler name is a placeholder)
    float depthStoredInShadowMap = DecodeFloatRGBA(tex2D(ShadowMapSampler, ProjectedTexCoords));

    // for testing
    float4 inLightOrShadow = float4(.99f, .99f, .99f, 1.0f);

    // if in bounds of the light
    if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
    {
        // in bounds, slightly off color
        inLightOrShadow = float4(.89f, .89f, .99f, 1.0f);

        if ((realLightDepth + .01f) > depthStoredInShadowMap)
        {
            inLightOrShadow = float4(0.20f, 0.10f, 0.10f, 1.0f);
        }
    }

    //test
    //result = float4(realLightDepth, realLightDepth, realLightDepth, 1.0f);

    // finalize
    result.a = 1.0f;
    output.Color = result;
    return output;
}

It is normal to get wrong results when you don't divide by w; several different spaces are involved at each step of the calculation, and the commonly named "screen space" only exists after that divide.

The complete explanation (it's OpenGL, but the same principles apply):
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

About precision, here is an interesting article for you:


OK, these encode/decode functions are no good.

float4 g = EncodeFloatRGBA(realLightDepth);
realLightDepth = DecodeFloatRGBA(g);

I ran a depth-to-screen test, inserting the encode and decode directly into the output of a working colored visual test; basically, it showed the function doesn't work.
It killed the visual and shrank the depth down to nothing.

I tried bit shifting and masking with & and |, but it just errors.

How can I store higher-precision depth in a render target and get it back out? What's the right way to do this? Does anyone have a working example that stores and retrieves depth at better than 8-bit precision properly?

Those functions definitely worked for me before. Here are the ones I'm using now, which store a float in 24 bits of a color; they are working well in my game with deferred rendering:

float3 EncodeFloatRGB(float f)
{
    float3 color;
    f *= 256;
    color.x = floor(f);
    f = (f - color.x) * 256;
    color.y = floor(f);
    color.z = f - color.y;
    color.xy *= 0.00390625; // *= 1.0/256
    return color;
}

float DecodeFloatRGB(float3 color)
{
    const float3 byte_to_float = float3(1.0, 1.0 / 256, 1.0 / (256 * 256));
    return dot(color, byte_to_float);
}

They basically do the same thing, but they're optimized a bit more. If these still don't work for you, then I would guess there's something else wrong. Can you give a sample of a more basic shader where you see issues?

EDIT: You mentioned it looked like there was some additive bleed-through. Do you have multiple passes writing to the depth target? Is it possible there's a mistake in your blend states?

How's it going for you, willmotil?
Have you solved it yet?

Regards, Morgan

I haven't solved it yet. I'll try that one in a bit; I'm on my phone right now. The bleed-through was just because I didn't set the blend state to opaque. I'll post a pic later. Basically, I did a color precision test over a large range of values by scaling a cube by 1,1,1000 and then rotating it,
drawing its depth directly in a color scheme.
Then, intercepting the values by encoding and decoding those depths in between, I could see the resulting failure of the functions. I'll post it in a couple hours when I can get back on.