I’d be paranoid and first ensure the alpha data is correct in the DesktopGL project version.
At first glance it looks like:
clip((color.a - AlphaTestValue) * (AlphaTestGreater ? 1 : -1));
is, for whatever reason, not actually clipping out the see-through pixels, so they overwrite the tree behind them in the back buffer.
clip() should be supported in all shader models, so that part should be fine.
I notice that EnsureOcclusion is set to true, so it should be drawing both opaque and transparent pixels. That makes me think neither DirectX nor OpenGL is clipping any pixels, because both the low-alpha and high-alpha passes are being drawn.
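To make the two-pass idea concrete, here is a minimal sketch of what I'd expect that pixel shader to look like. The sampler and function names are my assumptions; only the clip() line and the AlphaTestValue/AlphaTestGreater parameters come from the snippet above:

```hlsl
// Hypothetical pixel shader sketch -- names other than the clip() line are assumed.
sampler TextureSampler : register(s0);

float AlphaTestValue;   // alpha threshold
bool  AlphaTestGreater; // true on the opaque pass, false on the translucent pass

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(TextureSampler, texCoord);

    // Pass 1 (AlphaTestGreater = true):  keep pixels with a >  AlphaTestValue,
    //   clip the rest (argument goes negative for low-alpha pixels).
    // Pass 2 (AlphaTestGreater = false): keep pixels with a <  AlphaTestValue,
    //   clip the rest (sign is flipped by the -1 factor).
    clip((color.a - AlphaTestValue) * (AlphaTestGreater ? 1 : -1));

    return color;
}
```

If clip() never fires on one backend, both passes write every pixel and the depth test alone decides what survives.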
If that’s the case, then maybe it’s a matter of which pass wins the depth test to determine which pixel gets overwritten. Since this isn’t SpriteBatch or something like that, the depth value matters and may differ depending on how the camera is set up (DirectX and OpenGL clip spaces differ slightly: DirectX uses a 0..1 depth range while OpenGL traditionally uses -1..1).
What happens if you set EnsureOcclusion to false?
If I set EnsureOcclusion to false, then I just draw the mesh once with the DepthStencilState set to DepthRead. I also call Effect.Parameters["AlphaTest"]?.SetValue(false).
If I set it to true, I render the mesh twice: first with DepthStencilState.Default, then with DepthStencilState.DepthRead, and I also set some parameters in the shader.
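In code, my setup looks roughly like this. It's a sketch, not the exact source: DrawMesh and the device/effect variable names are placeholders, and only the DepthStencilState values and the "AlphaTest" parameter call appear in my actual code:

```csharp
// Sketch of the EnsureOcclusion logic; DrawMesh() and variable names are placeholders.
if (ensureOcclusion)
{
    // Pass 1: write opaque pixels with full depth testing and depth writes.
    device.DepthStencilState = DepthStencilState.Default;
    effect.Parameters["AlphaTest"]?.SetValue(true);
    DrawMesh();

    // Pass 2: draw the translucent pixels with depth read only, so they
    // cannot overwrite opaque geometry that is closer to the camera.
    device.DepthStencilState = DepthStencilState.DepthRead;
    DrawMesh();
}
else
{
    // Single pass, no alpha test: result depends on draw order.
    device.DepthStencilState = DepthStencilState.DepthRead;
    effect.Parameters["AlphaTest"]?.SetValue(false);
    DrawMesh();
}
```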
I got this from an XNA book. Here are the pages that explain the shader, along with some screenshots of different settings. When EnsureOcclusion is turned off, the render result depends on the order I add the meshes to the scene. But why the result differs between DesktopGL and WindowsDX is still a mystery to me, because I use exactly the same code in both projects.