Matrix parameter not recognised in pixel shader?

I pored over this for an hour or two last night, and I’m stumped.

Short story is, I was trying to convert a depth texture (grey scale, linear, 0-1) into the original screen space z and w coordinates. I did something like this (I’ve since deleted it and moved on, so this is just from memory):

Texture2D Depth;
sampler DepthSampler = sampler_state { Texture = <Depth>; };
float MaxDepth;
float4x4 Projection;

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    float sample = tex2D(DepthSampler, input.TexCoord).z; // positive z, 0-1
    float z0 = -z * MaxDepth; // original view space z coordinate, negative, up to -MaxDepth
    float z = z0 * Projection._33 + Projection._43;
    float w = z0 * Projection._34 + Projection._44;
    output.Depth = z / w;
    return output;
}

Now, looking this over, I notice that there’s a little bit of a logic error involving calculating the “z0” variable, but that’s not the point here. The point is, my z and w values come out to be 0. Logic error aside, that shouldn’t be the case. It’s as if the Projection variable contains all zeroes.

To further investigate, I tried replacing all of that logic with simply:

output.Depth = saturate((abs(Projection._33) + abs(Projection._34) + abs(Projection._43) + abs(Projection._44)) * 100);

…just to try to get something, anything, to show up, but the results were still 0.

The same thing happened when I tried using

mul(float4(0, 0, z0, 1), Projection)

…so it’s not just the _33 syntax (I’ve used that syntax in vertex shaders elsewhere, so that part is fine).

I also checked the constant buffers in the GraphicsDevice object, and, sure enough, I did see the 64-byte matrix data where it should be.

Finally, I gave up and passed those four necessary components in as a float4 vector, like so:

float4 Projection; // x = _33, y = _34, z = _43, w = _44

float z = z0 * Projection.x + Projection.z;
float w = z0 * Projection.y + Projection.w;
output.Depth = z / w;
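On the C# side, that float4 would presumably be filled from the matrix along these lines (XNA/MonoGame’s Matrix exposes its elements as M33, M34, and so on; the effect and parameter names here just mirror the snippet above):

```csharp
// Pack the four needed matrix elements into a Vector4,
// matching the shader's layout: x = _33, y = _34, z = _43, w = _44.
Vector4 packed = new Vector4(projMat.M33, projMat.M34, projMat.M43, projMat.M44);
effect.Parameters["Projection"].SetValue(packed);
```

Since a float4 is never partially optimised away the way a matrix column can be, this sidesteps the problem entirely, at the cost of reconstructing the maths by hand in the shader.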

And that worked just fine. So, it seems to me that there’s something wrong with transferring the matrix parameters/constants to the pixel shader, or I’ve overlooked some inherent limitation of HLSL’s pixel shaders’ capabilities in the first place.

How are you setting the Projection value from C#? That would be my best guess as to where the problem lies, as I’m setting my Projection as a matrix in pretty much all my shaders and not seeing this particular issue.

Shader snippet:

matrix World;
matrix View;
matrix Projection;

VSOutput VSQuad(VSInput input)
{
	VSOutput output;

	float4 worldPosition = mul(input.position, World);
	float4 viewPosition = mul(worldPosition, View);
	output.position = mul(viewPosition, Projection);

	// Lots of unrelated code

	return output;
}

And in code when setting:

UnitRenderEffect.Parameters["Projection"].SetValue( projMat );

Where projMat is a Matrix object and UnitRenderEffect is an Effect object.

I did a quick sanity check and changed my Projection variable in the shader from matrix to float4x4, but it still worked, so I don’t think the issue is there.

Sorry for my tardy response; as mentioned, I’ve since moved on with an alternate solution.
In my C# code, yes, I’m setting it as you demonstrated there.
The difference in our code, however, is that you’re using the Projection parameter in the vertex shader, which works just fine for me as well, whereas I was trying to use it in the pixel shader, and that for some reason was a problem. I don’t remember for sure, but I might not have been using it in the vertex shader at all, in which case I wonder whether it was somehow getting optimised out.

Just thought I’d add that this issue reared its head for me again just now. As far as I can tell, if a matrix parameter is used exclusively in a pixel shader, it does not work. In my previous scenario, as originally posted above, the elements evaluated to zero. In the case just now, EffectPass.Apply() threw an exception because it didn’t have a properly allocated buffer to copy the data into, or so it would seem.

Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
at System.Buffer.BlockCopy(Array src, Int32 srcOffset, Array dst, Int32 dstOffset, Int32 count)
at Microsoft.Xna.Framework.Graphics.ConstantBuffer.SetData(Int32 offset, Int32 rows, Int32 columns, Object data)

Is it possible you’re experiencing the same issue I saw here?

The shader compiler can aggressively optimize away entire columns of your matrix, but MonoGame isn’t aware of that when setting parameters from code.

That could very well be part of it. Is that consistent with the scenario described in my original post, wherein some values are simply zero?

I think the effect is more that your parameter values get mixed up, shifted into the wrong slots. And when you’re applying transformations, the wrong value in the wrong spot could easily cause everything to become 0.

Here’s an example. If your parameters are declared like this:

float a;
float4x4 M;
float b;

and you don’t use M.z, then everything will get shifted up to look like this:

a => a
M.x => M.x
M.y => M.y
M.w => M.z
b => M.w

Without making any big changes, if you find there’s a column that you’re not using, you could try including it in an insignificant way, like ... + M.z * 0.00001. Then see if it works like you expect.
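Sketched against the float4x4 declaration from the original post, that dummy use might look something like this (the helper function and the choice of _11 as the otherwise-unused element are purely for illustration):

```hlsl
float4x4 Projection;

// Reconstruct clip-space depth from a view-space z (z0), while keeping
// an otherwise-unused matrix element "live" so the compiler can't strip
// its column from the constant buffer layout.
float ReconstructDepth(float z0)
{
    float z = z0 * Projection._33 + Projection._43;
    float w = z0 * Projection._34 + Projection._44;
    z += Projection._11 * 0.00001; // insignificant dummy use
    return z / w;
}
```

If the zeros go away with the dummy use in place, that’s strong evidence the column was being optimised out.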

Yeah, I’ve actually done that a couple times before. For this specific case, as well as the original post, I ended up just passing in the individual necessary matrix elements and reconstructing it in the shader.

Hmm ok, this might be something a little different then. But my first instinct is still that this sounds like an aggressive compiler.

Could you also share the GLSL that it loads? Comparing those might help.

In both cases - the original above and this newest one - I’m using only specific rows/columns of the matrix, so, yes, that could very well be it.

Now, as this is a known issue, how do we fix it? Can the compiler be configured to be less aggressive? Can MonoGame recognise, or be told about, the dropped elements? Personally, I would suggest a method or property on the effect to configure which elements to transfer, so it only degrades performance when necessary.
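To illustrate what I mean, something along these lines (purely hypothetical; no such property exists in MonoGame today):

```csharp
// Hypothetical API sketch, not real MonoGame code: ask the runtime to
// upload the full matrix even if the shader compiler reported some
// columns as unused, trading a few bytes of upload for correctness.
var param = UnitRenderEffect.Parameters["Projection"];
param.PreserveFullLayout = true; // invented property
param.SetValue(projMat);
```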