Minimal VertexBuffer-Drawing example

Right now I’m drawing my tilemap using SpriteBatch.
But I’ve heard drawing with VertexBuffer and DrawIndexedPrimitives is a lot faster. Is that true and if so, can someone show me a minimal example of how to use it?
I’ve looked at MonoGame.Extended’s Tiled implementation, but there’s a lot of stuff in there and I’m not sure what’s necessary for a minimal setup and what’s Tiled-specific.

I’ve heard drawing with VertexBuffer and DrawIndexedPrimitives is a lot faster.

It can be, depending on what you are doing and how you are implementing it.
Instancing is an order of magnitude faster, but it’s really used for whole models, meshes, or particles.
That is only barely hinted at near the bottom of the example I’ll post.

can someone show me a minimal example of how to use it?

Below is a minimal example of using DrawIndexedPrimitives or DrawPrimitives in a similar context, along with a few other simple, related concepts that go with it.

Take Note…

This is a minimal example, and as such it is not tweaked for any particular purpose other than to demonstrate ideas; as written it is actually even less efficient, speed-wise, than it needs to be. It is put together this way purely for clarity, with a lot of related parts in one place, so you can see how it relates to your current SpriteBatch calls. However, you would not want to use it in such a sloppy manner in your own project.

Once you can see how it’s working, you would want to streamline your own version so it is efficient and fast for what you are doing specifically.
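Here is a rough sketch of what that kind of minimal setup can look like: a single textured quad pushed through a VertexBuffer/IndexBuffer pair and drawn with BasicEffect and DrawIndexedPrimitives, using a pixel-space projection so it behaves like a SpriteBatch draw. The texture name and sizes are placeholders, and the per-frame work is deliberately left all in one place so it lines up with the caveats below.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MinimalQuadGame : Game
{
    GraphicsDeviceManager graphics;
    Texture2D tileTexture;          // hypothetical texture, e.g. one tile image
    BasicEffect effect;
    VertexBuffer vertexBuffer;
    IndexBuffer indexBuffer;

    public MinimalQuadGame()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        tileTexture = Content.Load<Texture2D>("tile");   // hypothetical asset name
        effect = new BasicEffect(GraphicsDevice);

        // The buffer objects are created once; only their contents are
        // (wastefully) refilled every frame below, to keep the demo in one place.
        vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColorTexture), 4, BufferUsage.WriteOnly);
        indexBuffer = new IndexBuffer(GraphicsDevice, IndexElementSize.SixteenBits, 6, BufferUsage.WriteOnly);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // Unbind before writing, then refill (again, only for demo purposes).
        GraphicsDevice.SetVertexBuffer(null);
        GraphicsDevice.Indices = null;

        // One 64x64 quad at the top-left corner of the screen, in pixel coordinates.
        // In real code you would build all your quads once, not per frame.
        var vertices = new[]
        {
            new VertexPositionColorTexture(new Vector3( 0,  0, 0), Color.White, new Vector2(0, 0)), // top-left
            new VertexPositionColorTexture(new Vector3(64,  0, 0), Color.White, new Vector2(1, 0)), // top-right
            new VertexPositionColorTexture(new Vector3(64, 64, 0), Color.White, new Vector2(1, 1)), // bottom-right
            new VertexPositionColorTexture(new Vector3( 0, 64, 0), Color.White, new Vector2(0, 1)), // bottom-left
        };
        var indices = new short[] { 0, 1, 2, 0, 2, 3 };  // two triangles per quad
        vertexBuffer.SetData(vertices);
        indexBuffer.SetData(indices);

        // States and effect parameters, likewise set every frame here only for clarity.
        GraphicsDevice.RasterizerState = RasterizerState.CullNone;   // ignore winding for the demo
        GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = DepthStencilState.None;

        effect.TextureEnabled = true;
        effect.VertexColorEnabled = true;
        effect.Texture = tileTexture;
        effect.World = Matrix.Identity;
        effect.View = Matrix.Identity;
        // Pixel-space projection so the positions above are screen pixels, like SpriteBatch.
        effect.Projection = Matrix.CreateOrthographicOffCenter(
            0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);

        GraphicsDevice.SetVertexBuffer(vertexBuffer);
        GraphicsDevice.Indices = indexBuffer;

        foreach (var pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // Draw 2 triangles (1 quad) from the bound vertex/index buffers.
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 2);
        }

        base.Draw(gameTime);
    }
}
```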

There is a ton of stuff done in the example above that is inefficient, because I am just trying to clump related ideas together. For instance, the vertices and indices should not be recreated every frame if avoidable; the same goes for the render states, which should be created once, saved, and only updated when necessary, and likewise for much of what is set on the BasicEffect or a custom effect. So, as you can see, there is a lot in the example that is not efficient at all, but it is done that way so people can see how the concepts fit together.


Thank you man, that looks promising. I’ll have a look and try it :)

Hope it helps

The thing with SpriteBatch is that you rebuild the buffer and send all the vertices to the GPU every frame. However if you have only a single texture (i.e. you use an atlas), the draw calls can be batched (so it’s still very fast on the GPU). That’s great for dynamic scenes, but if you have static elements you can put them in a vertex buffer and the vertex data is only sent to the GPU once instead of every frame.

Generally for tilemaps it’s easier and fast enough (when using a texture atlas) to use SpriteBatch because you scroll around the map and only render the visible tiles, so it’s all very dynamic.
Using a DynamicVertexBuffer and only updating the tiles that went out of or came into the viewport might be more efficient but it’s most likely not worth the effort for the minor performance gain.
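If you did want to go that route, a rough sketch of the idea might look something like the class below. The buffer sizes, field names, and the helper that decides which tiles are visible are all hypothetical, and the BasicEffect and shared quad index buffer are assumed to be set up as in the example further up.

```csharp
using Microsoft.Xna.Framework.Graphics;

// Sketch only: re-uploads vertex data only when the visible tile set changes,
// then draws from the GPU-resident buffer every frame.
class ScrollingTileMapRenderer
{
    GraphicsDevice device;
    BasicEffect effect;                         // configured as in the earlier example
    IndexBuffer quadIndexBuffer;                // built once: 6 indices per quad, covering maxVisibleTiles quads
    DynamicVertexBuffer tileVertexBuffer;
    VertexPositionColorTexture[] cpuVertices;   // CPU-side staging array
    int visibleTileCount;

    public void CreateBuffers(int maxVisibleTiles)
    {
        cpuVertices = new VertexPositionColorTexture[maxVisibleTiles * 4];
        tileVertexBuffer = new DynamicVertexBuffer(device,
            typeof(VertexPositionColorTexture), maxVisibleTiles * 4, BufferUsage.WriteOnly);
    }

    // Call this only when the camera crosses a tile boundary, not every frame.
    public void OnVisibleTilesChanged(int newVisibleTileCount)
    {
        visibleTileCount = newVisibleTileCount;
        FillVisibleTileQuads(cpuVertices);      // hypothetical: writes 4 vertices per visible tile
        device.SetVertexBuffer(null);           // make sure it isn't still bound from the previous frame
        tileVertexBuffer.SetData(cpuVertices, 0, visibleTileCount * 4, SetDataOptions.Discard);
    }

    // Every frame: just bind and draw, nothing is re-sent to the GPU.
    public void DrawTiles()
    {
        device.SetVertexBuffer(tileVertexBuffer);
        device.Indices = quadIndexBuffer;
        foreach (var pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, visibleTileCount * 2);
        }
    }

    void FillVisibleTileQuads(VertexPositionColorTexture[] target) { /* build the quads here */ }
}
```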


So basically, if I have a constantly moving camera, then it’s practically the same as using SpriteBatch?

Why do people recommend VertexBuffers for particle systems then? I mean, all particles do is move. There’s not really anything static.


As someone who needs to develop a particle engine for my game engine, I’m curious as to the gains behind using a VertexBuffer as well.

If I had to guess, it’s because often, vertex transformations are done on particles. For example, a dot can be transformed to look like a line through vertex transformations. I assume you can do the same to transform a circular effect into an arbitrary polygon. You can’t achieve that easily, if at all, with a SpriteBatch.
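As a rough sketch of the kind of vertex-level control that means (everything here is hypothetical, just to illustrate placing the four corners of a quad yourself): a point particle can be stretched into a streak along its velocity by computing the corner positions directly, which SpriteBatch’s rectangle-based Draw calls don’t expose.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class ParticleQuads
{
    // Build a quad for one particle, stretched along its velocity so a round dot
    // texture reads as a streak. Purely illustrative; names and factors are made up.
    public static void BuildStreakQuad(Vector2 position, Vector2 velocity, float width,
                                       float stretch, Color color,
                                       VertexPositionColorTexture[] verts, int offset)
    {
        // Direction of motion, and a perpendicular used for the quad's width.
        Vector2 dir = velocity == Vector2.Zero ? Vector2.UnitX : Vector2.Normalize(velocity);
        Vector2 side = new Vector2(-dir.Y, dir.X) * (width * 0.5f);
        Vector2 tail = position - dir * (velocity.Length() * stretch);   // trail behind the particle

        verts[offset + 0] = new VertexPositionColorTexture(new Vector3(tail + side, 0), color, new Vector2(0, 0));
        verts[offset + 1] = new VertexPositionColorTexture(new Vector3(position + side, 0), color, new Vector2(1, 0));
        verts[offset + 2] = new VertexPositionColorTexture(new Vector3(position - side, 0), color, new Vector2(1, 1));
        verts[offset + 3] = new VertexPositionColorTexture(new Vector3(tail - side, 0), color, new Vector2(0, 1));
    }
}
```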

I suppose they’re used, for instance, to do something such as this. You have the following texture:

And you can use vertices to transform it into something like the primary hit spark you see here (the one in the back with the three large points):

Am I correct? Because if so, this may be exactly what I’m looking for to implement in my engine.


Yes, SpriteBatch always creates vertices with position, UV and color. If you need other data you can’t use it. Though you could create your own version with a custom vertex type if you want the same functionality.
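A sketch of what such a custom vertex type could look like (the struct name and the extra channel are made up; you would also need your own effect whose vertex shader reads it):

```csharp
using System.Runtime.InteropServices;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical custom vertex: position + color + UV plus one extra Vector4 channel
// (e.g. velocity and age packed together) that SpriteBatch's fixed layout can't carry.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct VertexPositionColorTextureExtra : IVertexType
{
    public Vector3 Position;            // offset 0,  12 bytes
    public Color Color;                 // offset 12,  4 bytes
    public Vector2 TextureCoordinate;   // offset 16,  8 bytes
    public Vector4 Extra;               // offset 24, 16 bytes

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Color,   VertexElementUsage.Color, 0),
        new VertexElement(16, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(24, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1));

    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}
```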


For something like a tilemap, SpriteBatch is probably preferable in most cases.
As jjag said, SpriteBatch is great for tilemaps if you are using spritesheets.

There could be cases, though, where your particular tile map is doing an extraordinary amount of layered draws, or is using extremely small tiles, or just doesn’t change enough to make SpriteBatch worthwhile.

Or, very commonly, you may be drawing hexagonal tiles, where the capabilities and benefits SpriteBatch gives might be insufficient or inherently negated.
Say you plan to regularly zoom out to where there may be tens of thousands of tiles or more on screen; in that case you might actually opt for instancing.
SpriteBatch is best when there are fewer textures to switch between, as that allows for bigger batches, and spritesheets are a good choice any time you are using SpriteBatch, especially for tilemapping.

Particle systems address a different idea and need.

In the case of a particle system you want to bombard the display with a typically extraordinary number of similar images. However, there are some drawbacks to instancing. For example, tilemapped areas that adjoin can be hard to make look right, as lining them up requires knowing how to deal with floating point error logically. Also, if you flood the instances with too much data to send, you start to lose the speed you gained from instancing.

For example, let’s say you have a scene and you want to put realistic rain drops in it that can react to objects within that scene. Or you have a space scene and you want glowing dust particles constantly colliding with your glass viewport. In this case what you most likely want to do is use a VertexBuffer and instancing.

When you do instancing, you not only set a vertex buffer and index buffer as usual, but also bind a third buffer to the graphics device: an instance buffer. The vertex and index buffers are bound once and just sit on the GPU. The instance buffer you update as needed and send to the card, and it tells the card where to draw each instance. Because far less data is transferred between RAM and the GPU this way, the card can operate extremely fast.
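Roughly, the binding and draw-call side of that might look like the sketch below. The buffer and effect names are placeholders, the per-instance layout is just the position-plus-color one described further down, and it assumes the instance buffer is a DynamicVertexBuffer and the effect is a custom HLSL effect written for instancing (BasicEffect cannot do it).

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Per-instance data: just a position and a color, as described below.
public struct ParticleInstance : IVertexType
{
    public Vector3 InstancePosition;
    public Vector4 InstanceColor;

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        // TextureCoordinate usage indices 1 and 2, so they don't clash with the quad's own elements.
        new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.TextureCoordinate, 1),
        new VertexElement(12, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2));

    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}

class InstancedParticleRenderer
{
    GraphicsDevice device;
    VertexBuffer quadVertexBuffer;        // one quad, uploaded once at load time
    IndexBuffer quadIndexBuffer;          // its 6 indices, uploaded once at load time
    DynamicVertexBuffer instanceBuffer;   // one ParticleInstance per live particle
    Effect instancingEffect;              // custom effect whose vertex shader reads the instance fields

    public void Draw(ParticleInstance[] instances, int liveCount)
    {
        // Refresh only the per-instance data; the quad geometry never changes.
        device.SetVertexBuffer(null);     // make sure nothing is still bound before writing
        instanceBuffer.SetData(instances, 0, liveCount, SetDataOptions.Discard);

        // Bind the quad and the instance buffer together; an instance frequency of 1
        // on the second binding means "advance one element per instance drawn".
        device.SetVertexBuffers(
            new VertexBufferBinding(quadVertexBuffer, 0, 0),
            new VertexBufferBinding(instanceBuffer, 0, 1));
        device.Indices = quadIndexBuffer;

        foreach (var pass in instancingEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // 2 triangles per quad, repeated liveCount times on the GPU.
            device.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0, 2, liveCount);
        }
    }
}
```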

For example, in the below image I’m drawing 500,000 quads per frame at over 100 fps; about a million or more are actually in the scene, but roughly half are clipped. The particles are drawn in 3D under a perspective projection matrix. These could just as well be models, though. Particles can be thought of as tons of instances of the same thing with just minor differences, and thus fit into the idea of instancing naturally.

http://i936.photobucket.com/albums/ad207/xlightwavex/Gl_instanced_1mllion_zpsotm4qj63.gif

Thanks for the explanation :)

But basically instancing should always be just as fast as SpriteBatch, right? Just with the added benefit of better performance in some cases. Or are there cases where instancing will definitely be worse than using SpriteBatch?

Well, they are two different creatures.
SpriteBatch creates vertices from the rectangles you pass and forms batches; a batch is a bunch of draws placed into a large vertex and index buffer, which it passes in one shot at the end of Draw, typically per texture, and that speeds things up. SpriteBatch also abstracts away a lot of difficult problems. MonoGame’s version of SpriteBatch is far faster than XNA’s, and the old SpriteBatch was good enough for the Xbox.

Instancing is the idea of not passing vertices or indices every frame, but instead setting a vertex and index buffer onto the card that won’t be changing much, if ever.

Can instancing be slower? …

In practice, no; instancing is much faster.
It does mean doing even more stuff yourself that you might not want to, though, a lot more.

It can be slower, however, if your instances are loaded with data to send over, which is the opposite of the idea of instancing. For example, in the above, all that is sent to the card per particle is a position and a color: a Vector3 and a Vector4 for every sprite drawn.
SpriteBatch would be sending 4 vertex positions (Vector3s), 4 colors (Vector4s), and 4 UVs (Vector2s), plus 6 indexing shorts or ints. Could I equal that with instancing? Sure, I could send over a full orientation matrix and a bunch of extra data per instance; then again, I would have a bit of extra room and I could just send 2 normals and build the orientation on the shader. That is, provided I actually update everything I’m instancing each frame and am not just redrawing it from a different camera position.
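For a rough sense of scale, taking the sizes as described: that instance layout is 12 + 16 = 28 bytes per particle, while sending the four corner vertices and six 16-bit indices instead is 4 × (12 + 16 + 8) + 6 × 2 = 156 bytes per quad, so more than five times as much data per sprite.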

That said, with instancing I can’t really change textures; if I want to even use a different sprite rectangle within the same texture, I’d have to come up with some scheme in the pixel shader to do so. You also have to track your instances and perform the CPU-side calculations for them yourself.