Does the order of the vertices/indices in a Vertex/IndexBuffer determine the draw order?

In a 2D tile-based platformer I’m drawing all of my tiles using a vertex and index buffer with DrawIndexedPrimitives. I’m trying to make as few draw calls as possible, but some of the tiles are on a different layer than others and thus have to be drawn on top of, or beneath, other layers. It would be great if I only had to make a draw call for each different texture, rather than one for each different texture on each different layer.

Does the order of the vertices/indices in a Vertex/IndexBuffer determine the draw order?

I can’t say with complete certainty, but I believe the index buffer determines the draw order if you use one; otherwise, it’s the vertex buffer. Of course, all of the triangles are drawn with a single call, but I do think they go in order of definition.

The index buffer determines what vertices are drawn, so it also determines the draw order. I recommend you combine your textures in a texture atlas so you can render every layer in a single draw call.
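
For example, here’s a minimal sketch (the buffer and effect names are placeholders, and the exact DrawIndexedPrimitives overload differs between XNA and newer MonoGame versions): with two overlapping quads in one buffer, the quad whose indices come later is rasterized later and ends up on top when no depth buffer is used.

// Assumes _vertexBuffer/_indexBuffer already contain two overlapping quads:
// quad A at indices 0-5, quad B at indices 6-11 (12 indices, 4 triangles total).
GraphicsDevice.SetVertexBuffer(_vertexBuffer);
GraphicsDevice.Indices = _indexBuffer;

foreach (EffectPass pass in _effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // One call draws both quads; quad B is drawn after quad A simply because
    // its indices appear later in the index buffer, so it overdraws quad A.
    GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 4);
}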

I already combine textures into texture atlases, like most games do. But there are still going to be multiple atlases, which means I will have to use a depth buffer as you said. Is there any overhead to using a depth buffer?

There’s overhead, but I doubt you will notice it. And I think it’s faster than any other solution for draw order (unless you already draw everything in order without texture switches, but that’s not very likely :stuck_out_tongue: )

Do you think it would be viable to render the tiles that intersect a rectangle somewhat larger than the camera rectangle, building a dynamic vertex and index buffer from those tiles and rebuilding it with SetData whenever the camera rectangle leaves the larger one? Or would it be more performant to split my tile world into chunks and add/remove a chunk’s tiles from the buffer using SetData’s startIndex and elementCount when the chunk enters/leaves the camera rectangle, so I wouldn’t have to overwrite the entire buffer?

Since your game is planar, you could pre-load the neighbouring chunks of the map as the player moves around, similar to what Minecraft does if you’ve ever played it. The player then sees a seamless map, and you still get pretty decent performance, since the vertex/index buffers are only rebuilt when the player crosses a chunk boundary instead of every frame.
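
Something along these lines, as a rough sketch (camera, ChunkPixelWidth/ChunkPixelHeight, _lastChunk and RebuildBuffersAround are hypothetical names):

// Rebuild the buffers only when the camera moves into a different chunk.
Point currentChunk = new Point(
    (int)Math.Floor(camera.Position.X / ChunkPixelWidth),
    (int)Math.Floor(camera.Position.Y / ChunkPixelHeight));

if (currentChunk != _lastChunk)
{
    RebuildBuffersAround(currentChunk); // refill the dynamic vertex/index buffers
    _lastChunk = currentChunk;
}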

I never intended to rebuild the buffers every frame, sorry if that was unclear. Is it faster to use SetData when providing a startIndex and elementCount argument (only part of the buffer is replaced instead of the entire buffer)?

Not sure. You should take a look at the MonoGame source code and at the documentation for the underlying rendering API it uses, such as DirectX or OpenGL.

Yes, it’s faster because MG only updates part of the buffer for the different backends.
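
For example, a partial update of a dynamic vertex buffer could look roughly like this (a sketch; _dynamicVertexBuffer, _chunkVertices, ChunkVertexCount and chunkIndex are hypothetical, and the vertex type is assumed to be VertexPositionTexture):

// Overwrite only the vertices belonging to one chunk inside a larger dynamic buffer.
int stride = VertexPositionTexture.VertexDeclaration.VertexStride;
int offsetInBytes = chunkIndex * ChunkVertexCount * stride;

_dynamicVertexBuffer.SetData(
    offsetInBytes,               // byte offset into the buffer where writing starts
    _chunkVertices,              // source array holding this chunk's vertices
    0,                           // startIndex into the source array
    ChunkVertexCount,            // elementCount: only this many vertices are copied
    stride,
    SetDataOptions.NoOverwrite); // promise not to touch data the GPU may still be reading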

I’m using a depth buffer, but tiles that are beneath others aren’t drawn. Is there a property in DepthStencilState to change this? Right now I’m just using
GraphicsDevice.DepthStencilState = DepthStencilState.Default;

The issue is that the depth buffer stores the depth of every pixel that gets rendered, including transparent pixels. To get proper layering with sprites and the depth buffer, you’ll need to clip transparent pixels in a shader. Note that you can’t have partial transparency with this solution.

Oh, okay. I’m not that good with shaders; I’ve only tried creating a pixel shader that manipulated colors. Could you please explain what you mean by clipping transparent pixels?

@CSharpCoder Read this

I recommend you use AlphaTestEffect like @LithiumToast mentioned; it does the following for you if you set it up right. But I’ll briefly explain it anyway :slight_smile:

In HLSL you have a function clip(value) that discards the current pixel if the passed value is less than 0. When a pixel gets clipped its depth isn’t written to the depth buffer either. Since you want to clip transparent pixels, you’ll want to do something like

float4 color = tex2D(TextureSampler, texCoord); // TextureSampler: your sprite texture sampler
if (color.a == 0)  // fully transparent pixel
    clip(-1);      // discard it, so it never writes to the depth buffer
return color;

Or with some tolerance for clipping low alpha pixels:

float4 color = tex2D(TextureSampler, texCoord);
clip(color.a < 0.1f ? -1 : 1); // a negative value discards pixels with alpha below 0.1
return color;
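
And if you’d rather avoid writing a shader, an AlphaTestEffect setup for your DrawIndexedPrimitives path could look roughly like this (a sketch; tileAtlas, the viewport size, the buffers and primitiveCount are placeholders, and the DrawIndexedPrimitives overload may differ between XNA and MonoGame versions):

// AlphaTestEffect discards pixels whose alpha fails the comparison,
// so they never write to the depth buffer.
var alphaTest = new AlphaTestEffect(GraphicsDevice)
{
    AlphaFunction = CompareFunction.Greater, // keep pixels with alpha above the reference
    ReferenceAlpha = 25,                     // example threshold, roughly 0.1 on a 0-255 scale
    Texture = tileAtlas,                     // your texture atlas
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = Matrix.CreateOrthographicOffCenter(
        0, viewportWidth, viewportHeight, 0, 0, 1)
};

GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SetVertexBuffer(vertexBuffer);
GraphicsDevice.Indices = indexBuffer;

foreach (EffectPass pass in alphaTest.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, primitiveCount);
}

Note that AlphaTestEffect expects vertices with texture coordinates, e.g. VertexPositionTexture or VertexPositionColorTexture.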

Though I think a better solution would actually be to just render your layers in the right order (referred to as the painter’s algorithm in the comment @LithiumToast linked above) :stuck_out_tongue:

Thanks a lot guys! I think I’ll try AlphaTestEffect. I have multiple texture atlases, and each layer can have tiles from different atlases, which means multiple draw calls per layer, so I can’t just render the layers in a certain order. That’s why I’m using a depth buffer in the first place.


Just a hint: the depth buffer does not help if you have transparent or semi-transparent layers.