AlphaClipping effect render result differs between WindowsDX and DesktopGL (OpenGL)

I render two quads with a texture, and when I use the same code in a DesktopGL project the shader does not seem to work as intended.

I wonder if it has something to do with the shader version?

Any suggestions about what might be the reason for this?

The code of the shader was inspired by an XNA book.



    public override void Draw(RenderContext renderContext)
    {
        /*var samplerState = new SamplerState();
        samplerState.AddressU = U;
        samplerState.AddressV = V;
        renderContext.GraphicsDevice.SamplerStates[0] = samplerState;*/

        // Set the index buffer on the graphics card
        renderContext.GraphicsDevice.Indices = IndexBuffer;
        renderContext.GraphicsDevice.BlendState = BlendState.AlphaBlend;

        if (EnsureOcclusion)
            renderContext.GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;

        // ... (draw calls elided in this snippet) ...

        // Reset render states
        renderContext.GraphicsDevice.BlendState = BlendState.Opaque;
        renderContext.GraphicsDevice.DepthStencilState = DepthStencilState.Default;

        // Un-set the index buffer
        renderContext.GraphicsDevice.Indices = null;
    }

Hmm, what is going on here?

I’d be paranoid and first ensure the alpha data is correct in the DesktopGL project version.
At first glance it looks like:
clip((color.a - AlphaTestValue) * (AlphaTestGreater ? 1 : -1));
for whatever reason, isn’t actually clipping out the see-through pixels, and they are over-writing the tree behind them on the back-buffer.
clip should be supported in all shader versions so that should be ok
I notice that EnsureOcclusion is set to true, so it should be drawing both opaque and transparent pixels - and this makes me think that neither DirectX nor OpenGL is clipping any pixels, because both the low-alpha and high-alpha versions are being drawn.
If that’s the case, then maybe it’s a matter of which one wins the depth test to determine which pixel gets over-written. Since this isn’t SpriteBatch or something like that, the depth value might matter, and it could differ depending on how the camera is set up (OpenGL and DX clip spaces differ a bit).
What happens if you set EnsureOcclusion to false?
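For context, a clip call like that usually lives in the pixel shader roughly like this (a sketch only; the texture/sampler names are assumptions, while the parameter names are taken from the line above):

```hlsl
float AlphaTestValue;
bool AlphaTestGreater;

texture BasicTexture;
sampler BasicTextureSampler = sampler_state { texture = <BasicTexture>; };

float4 PixelShaderFunction(float2 uv : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(BasicTextureSampler, uv);

    // clip() discards the fragment when its argument is negative:
    // with AlphaTestGreater = true, pixels whose alpha is below
    // AlphaTestValue are discarded; with false, the test is inverted.
    clip((color.a - AlphaTestValue) * (AlphaTestGreater ? 1 : -1));

    return color;
}
```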

If I set EnsureOcclusion to false, then I just draw the mesh once with the DepthStencilState set to DepthRead. I also call `Effect.Parameters["AlphaTest"]?.SetValue(false)`.

If I set it to true, I render the mesh twice: first with DepthStencilState.Default and then with DepthStencilState.DepthRead, and I also set some parameters in the shader.
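That two-pass scheme can be sketched like this (the `DrawMesh` helper and the exact parameter values are assumptions based on the description above, not the book’s actual code):

```csharp
var device = renderContext.GraphicsDevice;

// Pass 1: draw only the solid (high-alpha) pixels with depth writes on,
// so they occlude correctly regardless of draw order.
Effect.Parameters["AlphaTest"]?.SetValue(true);
Effect.Parameters["AlphaTestGreater"]?.SetValue(true);
device.DepthStencilState = DepthStencilState.Default;
DrawMesh(renderContext);

// Pass 2: draw the remaining transparent pixels, reading depth but not
// writing it, so they blend over what pass 1 put in the depth buffer.
Effect.Parameters["AlphaTestGreater"]?.SetValue(false);
device.DepthStencilState = DepthStencilState.DepthRead;
DrawMesh(renderContext);
```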

I got this from an XNA book. Here are the pages that explain the shader, and also some screenshots with different settings. When EnsureOcclusion is turned off, the render result depends on the order in which I add the quads to the scene. But why the result differs in DesktopGL compared to WindowsDX is still a mystery to me, because I use the same code 1:1.

Here is a shared solution, including a WindowsDX and a DesktopGL project, so you can test it yourself.

Default values in the shader don’t work with OpenGL. You have to set them from C#.
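In practice that means pushing every value the shader would otherwise default to from C# before drawing, for example (parameter names taken from this thread; the concrete values are assumptions):

```csharp
// MojoShader (DesktopGL) ignores HLSL default initializers such as
// "float AlphaTestValue = 0.5;", so set each parameter explicitly.
Effect.Parameters["AlphaTest"]?.SetValue(true);
Effect.Parameters["AlphaTestValue"]?.SetValue(0.5f);
Effect.Parameters["AlphaTestGreater"]?.SetValue(true);
```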


LEGEND. Thank you!!

Very good to know.

Can’t that be set in the shader too, or instead?

Only with DX can default values be set directly in the shader; MojoShader doesn’t use them.

Ahh, cool, good to know. So it’s probably better to set things like this via the device rather than in the shader.

Shame, would have been nice to just do this sort of stuff in one place.