Best way to do pixel calculations

Sorry the title isn’t very helpful, I’m not sure how to summarise this question well.

I’ve been working for a while on drawing some 2D objects so that they’re correctly ordered with respect to the camera perspective. After tearing my hair out over it for a couple of weeks, I’ve realised the least complicated approach is to assign each pixel of a texture a value based on some other properties of the object the texture belongs to; then, for every pixel at the same screen location, the one with the highest value is the one that gets rendered.

What would be the most efficient way to do this, assuming there is one? Would using shaders/buffers or something along those lines be a good idea? Honestly I feel like I should be getting this easily, but at this point my brain is pretty fried.

Depth buffer.

Thank you, that seems to be along the lines of what I’m looking for. Is there a way to define a custom depth function for each pixel?

A custom function for each pixel would be overkill!
There are comparison functions, but you set one for the whole screen; you can’t change it per pixel. Otherwise you need to use more than one render target and set the appropriate function on each. ( https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.comparefunction.aspx#CompareFunction.LessEqual )
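
In MonoGame/XNA you set that one comparison function on the device through a DepthStencilState, roughly like this (untested sketch; graphicsDevice is whatever GraphicsDevice instance you already have):

// Rough sketch: one depth comparison function for everything drawn while this state is active.
var depthState = new DepthStencilState
{
    DepthBufferEnable = true,
    DepthBufferWriteEnable = true,
    DepthBufferFunction = CompareFunction.LessEqual
};
graphicsDevice.DepthStencilState = depthState;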

Sorry, I feel like I’m explaining this quite poorly.

What I mean is, I’m trying to assign a depth for each pixel of a texture based on some parameters of the object that has the texture. I figure the best way to do this is to create a height map for the texture and then pass this along with the texture to something which can assign a depth to each pixel properly.

If I understand correctly, you want to compute the depth of each pixel and store it in a render target?
This is a basic step in deferred rendering. Just use a vertex shader to compute the position of each pixel (pos = input.Position * WorldViewProj), and in the pixel shader return the value of pos.z / pos.w (if you don’t use linear depth).
The render target must use SurfaceFormat.Single to provide enough precision (there’s a rough sketch of creating it below the shader snippets).

in the VShader:

float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
// pass z and w along so the pixel shader can compute the depth
output.Depth = output.Position.zw;

in the PShader:

// non-linear depth: z divided by w
float depth = input.Depth.x / input.Depth.y;
return depth;
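
And roughly how you could create that Single-format render target on the C# side (untested sketch; graphicsDevice is your existing GraphicsDevice):

// Rough sketch: a single-channel float render target to hold the depth values.
var depthTarget = new RenderTarget2D(
    graphicsDevice,
    graphicsDevice.PresentationParameters.BackBufferWidth,
    graphicsDevice.PresentationParameters.BackBufferHeight,
    false,                 // no mipmaps
    SurfaceFormat.Single,  // 32-bit float, enough precision for depth
    DepthFormat.Depth24);

graphicsDevice.SetRenderTarget(depthTarget);  // render the depth pass into it
// ... draw ...
graphicsDevice.SetRenderTarget(null);         // back to the back buffer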

In case you want to load a texture and assign it as depth on the GPU…
You can do this by passing the texture to a pixel shader, which reads its value and writes it out as depth, like this:

float mypixelshaderfunc(vsoutput psinput) : DEPTH
{
    // DepthSampler is a sampler bound to the texture you passed in;
    // sample it and output its value as this pixel's depth.
    float currentdepth = tex2D(DepthSampler, psinput.texcoords).r;
    return currentdepth;
}

Your texture needs to be saved in a format that allows storing enough precision, though.
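
On the C# side, binding that texture and drawing through the effect with SpriteBatch would look roughly like this (untested sketch; “DepthMap”, depthEffect, heightTexture and spriteTexture are just placeholder names — the parameter name has to match whatever you declare in the .fx file):

// Rough sketch: feed the height map to the effect, then draw the sprite with that effect.
depthEffect.Parameters["DepthMap"].SetValue(heightTexture);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                  null, null, null, depthEffect);
spriteBatch.Draw(spriteTexture, Vector2.Zero, Color.White);
spriteBatch.End();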

http://www.catalinzima.com/xna/samples/other-samples/restoring-the-depth-buffer/ can help you to find more ideas. :wink:

I think the second thing is closer to what I want to do. Sorry, shaders and such really aren’t my strong point.

My problem is calculating the depth for each pixel in the pixel shader: it depends on some extra parameters not contained in the texture, and as far as I can tell there doesn’t seem to be an easy way to pass these to the shader, especially not when using SpriteBatch.

Feeling rather stumped at this point :disappointed:

You’ll need to use multiple render targets or learn shaders to do this.

See http://jagaco.com/2016/12/12/custom-depthbuffer/ which shows the technique you want in action.
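
The rough pattern for those per-object parameters (untested sketch; the “ObjectDepth” parameter and the obj fields are made-up names that you’d declare yourself in the .fx file and your own object class) is to set them on the effect and draw each object in its own batch:

// Rough sketch: pass a per-object value to the shader before drawing that object.
foreach (var obj in objects)
{
    depthEffect.Parameters["ObjectDepth"].SetValue(obj.Depth);

    // One Begin/End per object so the parameter change is picked up; not the
    // fastest approach, but the simplest one to get working.
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                      null, null, null, depthEffect);
    spriteBatch.Draw(obj.Texture, obj.Position, Color.White);
    spriteBatch.End();
}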
