[SOLVED] Calculate View matrix in shader

I am trying to create my view matrix in the shader. I cribbed some of the code from Matrix.CreateLookAt and tried modifying it to work in the shader, but I am not getting the correct results. The model isn’t being shown.

I am pretty crappy with matrix math, so it’s probably something simple which is eluding me. I tried all variations of setting the matrix using column and row formats, but nothing seems to work.

Here’s the code I have. Any pointers will be greatly appreciated! Note: position is the vertex position passed from the vertex shader to the CalcView function.

float4x4 CalcView(float3 position)
{
	float4x4 result;
	
	float3 cameraPosition = EyePosition;
	float3 cameraTarget = position;
	
	float3 zAxis = normalize(cameraPosition - cameraTarget);
	float3 xAxis = normalize(cross(float3(0, 1, 0), zAxis));
	float3 yAxis = cross(zAxis, xAxis);
			
	result[0][0] = xAxis.x;
	result[0][1] = yAxis.x;
	result[0][2] = zAxis.x;
	result[0][3] = 0.0;
	result[1][0] = xAxis.y;
	result[1][1] = yAxis.y;
	result[1][2] = zAxis.y;
	result[1][3] = 0.0;
	result[2][0] = xAxis.z;
	result[2][1] = yAxis.z;
	result[2][2] = zAxis.z;
	result[2][3] = 0.0;
	result[3][0] = 0.0 - dot(xAxis, cameraPosition);
	result[3][1] = 0.0 - dot(yAxis, cameraPosition);
	result[3][2] = 0.0 - dot(zAxis, cameraPosition);
	result[3][3] = 1.0;

	result = transpose(result);

	return result;
}
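For comparison, the row-major construction that Matrix.CreateLookAt performs can be sketched in plain Python (illustrative names, row-vector convention) to check the math outside the shader:

```python
import math

def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    l = math.sqrt(dot(v, v))
    return [c / l for c in v]

def create_look_at(eye, target, up):
    z = normalize(sub(eye, target))  # right-handed: camera looks down -z
    x = normalize(cross(up, z))
    y = cross(z, x)
    # Row-major, row-vector convention (v * M), as XNA stores it.
    return [[x[0], y[0], z[0], 0.0],
            [x[1], y[1], z[1], 0.0],
            [x[2], y[2], z[2], 0.0],
            [-dot(x, eye), -dot(y, eye), -dot(z, eye), 1.0]]

def transform(v, m):  # row vector times matrix
    v4 = list(v) + [1.0]
    return [sum(v4[r] * m[r][c] for r in range(4)) for c in range(4)]

eye, target, up = [0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]
view = create_look_at(eye, target, up)
print(transform(eye, view))     # [0.0, 0.0, 0.0, 1.0] -- eye maps to origin
print(transform(target, view))  # [0.0, 0.0, -5.0, 1.0] -- target on -z axis
```

A correct view matrix must map the camera position to the origin and the target onto the -z axis; if a shader port doesn’t pass that check, the indexing or the transpose is wrong.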

position is the vertex position passed from the vertex shader to the CalcView function.

float3 cameraTarget = position;

This is probably what is messing you up. Do you have a little dot in the very center of your render?

Your function should take the position of your camera in world space, the camera’s forward (instead of float3 cameraTarget = position;), and the camera’s up. Also, the eye vector is calculated by this portion at the bottom:

result[3][0] = 0.0 - dot(xAxis, cameraPosition);
result[3][1] = 0.0 - dot(yAxis, cameraPosition);
result[3][2] = 0.0 - dot(zAxis, cameraPosition);
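Put differently, that portion folds the camera translation into the matrix: those three entries are exactly what you get by composing "translate by -eye" with the rotation whose columns are the camera axes. A quick pure-Python check of that identity (illustrative values, row-vector convention):

```python
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def dot(a, b): return sum(p * q for p, q in zip(a, b))

# An orthonormal camera basis (camera rotated 90 degrees about world up).
x_axis, y_axis, z_axis = [0.0, 0.0, -1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]
eye = [3.0, 4.0, 5.0]

translate = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
             [-eye[0], -eye[1], -eye[2], 1]]
rotate = [[x_axis[0], y_axis[0], z_axis[0], 0],
          [x_axis[1], y_axis[1], z_axis[1], 0],
          [x_axis[2], y_axis[2], z_axis[2], 0],
          [0, 0, 0, 1]]

view = matmul(translate, rotate)  # translate first, then rotate (v * M order)
# Bottom row of the composed matrix is exactly the fused form from the shader:
print(view[3][:3])                                                    # [5.0, -4.0, -3.0]
print([-dot(x_axis, eye), -dot(y_axis, eye), -dot(z_axis, eye)])      # [5.0, -4.0, -3.0]
```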

Also this line here is a terrible idea.
float3 xAxis = normalize(cross(float3(0, 1, 0), zAxis));

This locks your camera into a fixed up, and if your forward (or, in this weird case, your per-pixel forward) is near that gimbal point, the cross product collapses to zero, the normalize produces NaNs, and it will put a black hole in your render.
Of course that should never happen, because your forward should never be a pixel position in the first place.
E.g. basically your camera is panning all over the place within a single frame, once per pixel.
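A tiny sketch of the degenerate case (pure Python; where Python would raise on the divide, HLSL float math silently yields NaN/inf instead):

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

up = [0.0, 1.0, 0.0]
forward = [0.0, 1.0, 0.0]  # camera looking straight along the hard-coded up

right = cross(up, forward)
print(right)   # [0.0, 0.0, 0.0] -- nothing left to normalize
length = math.sqrt(sum(c * c for c in right))
print(length)  # 0.0 -> normalize() divides by zero, NaN on the GPU
```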

Let me just relabel some of these really quick to illustrate.
Consider a full triangle’s positions; imagine a triangle that is just plain huge.
Then consider, within one frame, all the positions that triangle interpolates being run through your code.

float3 hardCodedStaticUp = float3(0, 1, 0);

float3 Forward = normalize(cameraPosition - cameraTarget); // in motion where it should be static
float3 Right = normalize(cross(hardCodedStaticUp, Forward)); // a forward vector of (0, 1, 0) will cause gimbal lock.
float3 Up = cross(Forward, Right); // Up is now in motion also
// The whole matrix is probably no longer orthonormal either now.

result[0][2] = Forward.x;
result[1][2] = Forward.y;
result[2][2] = Forward.z;

// this forward vector depends on the pixel position, not the camera's look-at vector.

If you were to actually pass the vertex position to a function like this, even as a matrix built per pixel, it would be a huge hit on the GPU to boot. Normally the shader caches the 4x4 and reuses the precalculated result to do a vector multiply against the matrix. The concatenated view and projection matrices can also get premultiplied and cached.
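The caching point can be illustrated outside the shader with toy matrices (pure Python, illustrative values): concatenate view and projection once per frame, reuse the single matrix for every vertex, and the result matches multiplying each vertex through both matrices separately.

```python
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def transform(v, m):   # row-vector convention: v * M, w = 1
    v4 = list(v) + [1.0]
    return [sum(v4[r] * m[r][c] for r in range(4)) for c in range(4)]

def transform4(v4, m):  # same, but keeps the incoming w
    return [sum(v4[r] * m[r][c] for r in range(4)) for c in range(4)]

view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -5, 1]]
proj = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # toy values

view_proj = matmul(view, proj)  # concatenate ONCE per frame...
verts = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0], [-1.0, 0.5, 2.0]]

# ...then every vertex is a single vector * matrix multiply:
fast = [transform(v, view_proj) for v in verts]

# Same result as pushing each vertex through both matrices in turn:
slow = [transform4(transform(v, view), proj) for v in verts]
print(fast == slow)  # True
```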

You could work directly in clip space if you really wanted, but there is a reason it is done the way it is. What you are doing with this is basically dancing on the very edge of it, and by "it" I mean endless troubles of a similar sort.

@willmotil Thanks for the pointers. I managed to get this working.

FYI… I had hardcoded the camera UP vector whilst testing it out, but am now passing it through from my camera.

What I did was extract the translation from the World matrix and passed that to the CalcView function instead of using the vertex position (which was incorrect on my part). I also cleaned up some of the matrix indexing stuff.

Lessons Learnt: XNA / MonoGame use row-major format, whilst HLSL defaults to column-major. Even if you transpose and use a row-major format in the shader, you still have to access it using [col][row] indexing. That’s pretty confusing stuff…
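That row/column confusion boils down to one identity: a row vector times M (the XNA mul(v, M) convention) equals transpose(M) times a column vector (the mul(M, v) convention). A quick pure-Python check (illustrative values):

```python
def row_vec_times(v, m):      # XNA-style: v * M, v is a row vector
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def mat_times_col_vec(m, v):  # mul(M, v) style: M * v, v is a column vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transpose(m):
    return [[m[c][r] for c in range(4)] for r in range(4)]

m = [[1, 2, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [3, 4, 5, 1]]
v = [1.0, 2.0, 3.0, 1.0]

# Transposing the matrix swaps between the two conventions:
print(row_vec_times(v, m) == mat_times_col_vec(transpose(m), v))  # True
```

So whether a shader needs the transpose depends entirely on which side of the mul() the vector sits, on top of how the matrix is packed into the constant buffer.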

One other weirdness I came across: the exact same code I use to calculate the view on the CPU doesn’t quite produce the same view as when it is done on the GPU. There is a slight offset difference on the screen. I tried to figure out why, but couldn’t, so I decided to just live with it and offset my models accordingly.