Hello to the MonoGame community!
I have a few questions about how to implement a depth buffer in MonoGame.
I am making a 2D game and draw my sprites with DrawUserIndexedPrimitives, since the vertices of my sprites can change and I want to be able to draw more complex shapes.
I want to use a depth buffer so I can batch sprites sorted by texture/shader (to reduce GPU state changes) while still preserving their depth order.
The only problem is, I don’t know how to implement something like that.
I’ve read through a lot of posts to get my head around this, but to no avail. I am fairly new to shaders and still can’t get it to work.
From the tidbits I’ve gathered, I know that MonoGame has some sort of depth buffer already built in, but I am not sure how to use it with DrawUserIndexedPrimitives.
Or should I try to get my own implementation up and running using a shader or something similar? Keep in mind that some of my sprites have fully transparent pixels.
Or is the kind of setup I am trying to build even a good idea in the first place?
I am openly asking what I should do to finally get a depth buffer working, and how this all works.
Maybe someone can point me in the right direction and give me some resources I can learn this from?
I am sorry for any facts I may have gotten wrong in this post, and thank you in advance for any help!
It’s unclear how much knowledge you currently have … a depth buffer (z-buffer) is GPU functionality you basically just need to “turn on” (it’s on by default, but not for SpriteBatch). Since you are using DrawUserIndexedPrimitives, you should already have a z-buffer on the default render target (the screen), as long as you didn’t turn it off.
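In MonoGame, “turning it on” mostly comes down to requesting a depth format and enabling depth testing on the device before drawing. A minimal sketch, assuming the standard Game1 template names (graphics, myEffect, vertices, indices are placeholders for your own members):

```csharp
// In the constructor: ask for a depth buffer (this is usually the default anyway).
graphics.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;

// In Draw(): clear depth along with color, and enable depth test + depth write.
GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer,
                     Color.CornflowerBlue, 1f, 0);
GraphicsDevice.DepthStencilState = DepthStencilState.Default;

foreach (EffectPass pass in myEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserIndexedPrimitives(
        PrimitiveType.TriangleList,
        vertices, 0, vertices.Length,
        indices, 0, indices.Length / 3); // primitive count = indices / 3
}
```

Note that drawing to a custom RenderTarget2D is a separate case: there you must also create the target with a DepthFormat, or it has no depth buffer at all.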
If the z-check fails, the pixel shader for that particular pixel won’t even be called. As you may imagine, this does not work for transparent pixels: they still write to the z-buffer even when the pixel shader outputs them as fully transparent.
The z-buffer’s primary use is to make the “painter’s algorithm” work without pre-sorting the data you send to the GPU and without special handling of overlapping geometry.
To avoid writing transparent pixels in the pixel shader, you basically call something like
clip(input.Color.a - 0.5f); // discards the pixel entirely (no color or depth write) when alpha < 0.5
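If you’d rather not write a custom shader yet, MonoGame’s stock AlphaTestEffect does the same cutoff for you. A sketch (the texture name is a placeholder, and 128 is just one possible threshold matching the 0.5 above):

```csharp
// AlphaTestEffect rejects pixels below a reference alpha, like clip() in HLSL.
var alphaTest = new AlphaTestEffect(GraphicsDevice)
{
    AlphaFunction = CompareFunction.GreaterEqual, // keep pixels with alpha >= reference
    ReferenceAlpha = 128,                         // ~0.5 cutoff on a 0-255 scale
    VertexColorEnabled = true,
    Texture = mySpriteTexture,                    // hypothetical sprite texture
};
// Also set alphaTest.World/View/Projection to match your camera before drawing.
```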
Alpha blending won’t work in that case. If you want blending, you need to pre-sort your draw calls back-to-front yourself (the GPU will not do this automatically) and turn off z-buffer writes. That’s why transparent/translucent geometry is normally rendered in its own pass, after everything else has already been rendered with the z-buffer active.
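Put together, a frame commonly ends up structured roughly like this (the two draw helpers are hypothetical, standing in for your own sprite lists):

```csharp
// Pass 1: opaque / alpha-tested sprites in any order, depth test + write on.
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.Opaque;
DrawOpaqueSprites(); // hypothetical helper

// Pass 2: translucent sprites sorted back-to-front, depth test on but writes off,
// so they are occluded by opaque geometry without occluding each other.
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
GraphicsDevice.BlendState = BlendState.AlphaBlend;
DrawTranslucentSpritesBackToFront(); // hypothetical helper
```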
(I guess you used SpriteBatch before, which afaik does not utilize the z-buffer but can handle that sorting for you before sending the actual draw calls, so alpha blending still works.)
And by the way: the only time you need a “custom shape” sprite is when you want to deform the vertex data; otherwise, drawing a simple quad is the more performant approach.
Thank you for all the tips! You summarized things in a way I haven’t found before.
Your saying that the depth buffer is active by default got me curious, and after a bit of digging I found out that I had been using wrong matrix values/calculations for my camera’s view and projection.
These seemed to work fine for simple 2D rendering but were unusable with the z-buffer.
Needless to say I finally fixed my problem! Thanks for your help.
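For anyone who finds this later: the key point is that the projection’s near/far planes have to actually contain the Z values you give your vertices, or everything gets clipped or collapses onto the same depth. A sketch of a 2D camera setup that plays nicely with the depth buffer (the plane values and cameraPosition are just assumptions, not my exact code):

```csharp
// Orthographic projection for 2D with a usable depth range.
// Right-handed convention: the camera looks down -Z, so with these planes a
// vertex at z = 0 is nearest and z = -100 is farthest.
Matrix projection = Matrix.CreateOrthographicOffCenter(
    0, GraphicsDevice.Viewport.Width,
    GraphicsDevice.Viewport.Height, 0,   // Y grows downward, like SpriteBatch
    0f, 100f);                           // near/far planes bracketing sprite Z

Matrix view = Matrix.CreateTranslation(-cameraPosition.X, -cameraPosition.Y, 0f);
```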