[Solved] iOS 32-bit SurfaceFormat

I’ve run into an issue with my deferred rendering pipeline on mobile. It looks like MonoGame doesn’t support any 32-bit precision surface formats on iOS, so I’m forced to use a 16-bit render target for depth, which causes noticeable error in my lighting. I see the same error on my PC when I switch from Single to HalfSingle in DesktopGL, and similar results are documented here. I’ve read in a few places that you can’t directly sample from the depth buffer (where the standard 24 bits would produce less error) in XNA, so I assume the same is still true in MonoGame, and I’m therefore writing depth to a render target manually.
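
For context, this is roughly what my manual depth write looks like (a sketch with my own parameter and struct names, not anything from the MG API):

float4x4 WorldViewProjection;
float4x4 WorldView;
float FarClip;

struct DepthVSOutput
{
	float4 Position  : POSITION0;
	float  ViewDepth : TEXCOORD0;
};

DepthVSOutput DepthVS(float4 position : POSITION0)
{
	DepthVSOutput output;
	output.Position = mul(position, WorldViewProjection);
	// Pass view-space z through for the pixel shader to write out
	output.ViewDepth = mul(position, WorldView).z;
	return output;
}

float4 DepthPS(DepthVSOutput input) : COLOR0
{
	// With SurfaceFormat.Single this keeps full 32-bit precision; with
	// HalfSingle it is quantized to 16 bits, which is the error I see
	return float4(input.ViewDepth / FarClip, 0, 0, 0);
}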

I can’t find a definitive list from Apple, but based on examples I’ve seen, I believe 32-bit precision pixel formats should be available on iOS.

So my questions and ideas are:

  1. Are there really no supported 32-bit precision formats on iOS, and if not, what would it take to enable them?
  2. Since there are formats available that are 32 bits in size but not in precision, is it doable to hack my depth apart and piece it back together?
  3. Are there any other ways around this? Surely there are iOS games where shaders reconstruct position from depth (see the sketch after this list for the kind of reconstruction I mean).
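
For reference, this is the reconstruction technique question 3 refers to (a sketch under my own conventions: depth is stored as view-space z divided by FarClip, and frustumRay is interpolated from the vertex shader):

float3 ReconstructViewPosition(float normalizedDepth, float3 frustumRay)
{
	// frustumRay points from the camera to the far plane through this
	// pixel, so scaling it by depth / FarClip lands on the surface;
	// a 16-bit depth quantizes this and makes lit positions "step"
	return frustumRay * normalizedDepth;
}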

2. To answer my own question: after some digging I found that this is a fairly common problem, and there are well-known shader functions for packing a Single into a Color and unpacking it again:

float4 EncodeFloatRGBA(float v)
{
	// Scale v by successive powers of 255, one per channel
	float4 kEncodeMul = float4(1.0, 255.0, 65025.0, 16581375.0);
	float kEncodeBit = 1.0 / 255.0;
	float4 enc = kEncodeMul * v;
	// Keep only the fractional part in each channel, then remove the
	// overlap that the next channel already stores at higher precision
	enc = frac(enc);
	enc -= enc.yzww * kEncodeBit;
	return enc;
}

float DecodeFloatRGBA(float4 enc)
{
	// Reverse the scaling: weight each channel by 1 / 255^n and sum
	float4 kDecodeDot = float4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0);
	return dot(enc, kDecodeDot);
}
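
For anyone else trying this, here is roughly how I wire these up (sampler and parameter names are mine; this reuses the DepthVSOutput struct from my sketch above). The G-buffer pass now packs the normalized depth into an ordinary Color (RGBA8) target instead of writing a raw Single:

float4 DepthPS(DepthVSOutput input) : COLOR0
{
	// The encode relies on frac(), so the input must stay in [0, 1);
	// a value of exactly 1.0 would wrap around to 0
	return EncodeFloatRGBA(input.ViewDepth / FarClip);
}

sampler DepthSampler : register(s0);

float SampleDepth(float2 uv)
{
	// The lighting pass unpacks depth before reconstructing position
	return DecodeFloatRGBA(tex2D(DepthSampler, uv));
}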

EDIT: I found other samples that use powers of 256 instead of 255. I don’t know why this one uses 255, but I imagine 256 is better, since with 256 each channel holds one exact base-256 digit of the value. I also like that you can just truncate float4 to float3 to hold only 24 bits of precision rather than 32; a sketch of what I think that variant looks like is below.
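
Here is my reading of those samples as a three-channel, 24-bit variant (function names are mine, and I haven’t verified this on iOS):

float3 EncodeFloatRGB(float v)
{
	// Each channel ends up holding one base-256 digit of v; v must be
	// in [0, 1) since frac() wraps a value of exactly 1.0 to 0
	float3 enc = frac(v * float3(1.0, 256.0, 65536.0));
	// Remove the overlap already stored in the next channel
	enc.xy -= enc.yz * (1.0 / 256.0);
	return enc;
}

float DecodeFloatRGB(float3 enc)
{
	// Weight each channel by 1 / 256^n and sum
	return dot(enc, float3(1.0, 1.0 / 256.0, 1.0 / 65536.0));
}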