New to 3D - instancing models for drawing


Gah, it’s so hard for me to write a coherent introduction…

Recently, while working on my game, I’ve started to transition to 3D in order to learn it (well, that and to make the game more aesthetically pleasing, given my atrocious drawing skills), and I got stuck on a few… basic problems regarding drawing and performance, and I kind of need help learning how to progress.

What I have for now is this:

The way I did it initially (while learning through experiments, a.k.a. the “what does this button do” approach) is to simply draw the model at the initial position, then move it and draw it at the new position. As expected, due to the numerous mesh.Draw() calls, the performance hit is obvious, and it becomes even worse if I try tinkering with directional lights to get some sense of depth (as you can see on the screenshot, hills are indistinguishable from plains, and I had to forcefully darken the plains to make them different enough). This will surely get even worse when I redo some of the 2D tiles (such as forest tiles), so I’d like to do things right while there isn’t much graphical content yet.

From what I’ve managed to gather around on the internet from various sources (such as this one), what I should probably do is instance each model and, with the help of a shader, batch all the positions where that model should be drawn. However, I don’t understand the process behind it; going back to the linked example, I didn’t quite get the parts where the code creates a VertexDeclaration and then sets the VertexBuffer, nor the logic behind the drawing code.

I would be most grateful if I could get some help or directions, so I can do it on my own in the future as well; I know you’re all busy people, so there’s no need for specific code or anything long.

Also, a slightly related question for which a simple yes/no will suffice: for shadows, normal maps, etc., do I need to write shader effects?

Thanks for reading!


MG allows you to set multiple vertex buffers at once. Each vertex buffer can hold a subset of the vertex channels your shader expects. The trick with instancing is that when you draw a number of models, the pointers into the different buffers don’t have to advance by the same number of elements. A special case is when a pointer doesn’t move at all after drawing an element. So you can have one buffer that holds the actual geometry - the shared data - and have its pointer not advance after drawing, so the same elements are used every time. Then you have another buffer with instance-specific data (e.g. position in the map) and have its pointer move by one element each time. When running the shader, the same positions and normals (from the first buffer) will be used for every instance, but with a different offset (from the second buffer). Because you only have to send the geometry once, you save massively on data that gets sent from RAM to VRAM.
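
In MonoGame code, that setup looks roughly like this (a minimal sketch; the buffer names are made up for illustration). The third argument of VertexBufferBinding is the instance frequency: 0 means the pointer advances per vertex (shared geometry), 1 means it advances once per instance:

    // Bind the shared geometry in slot 0 and the per-instance data in slot 1.
    GraphicsDevice.SetVertexBuffers(
        new VertexBufferBinding(geometryBuffer, 0, 0),   // frequency 0: reused for every instance
        new VertexBufferBinding(instanceBuffer, 0, 1));  // frequency 1: one element per instance
    GraphicsDevice.Indices = indexBuffer;

    // Draw the same geometry instanceCount times in a single call.
    GraphicsDevice.DrawInstancedPrimitives(
        PrimitiveType.TriangleList, 0, 0, primitiveCount, instanceCount);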

Make sure the vertex elements of your buffers combined match the inputs of your vertex shader.
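
For example, if the per-instance data is a full world matrix passed as four float4 BLENDWEIGHT channels (as in the XNA instancing sample), a matching declaration could look like this sketch:

    // One 4x4 matrix per instance = four Vector4 elements, 16 bytes apart,
    // mapped to BLENDWEIGHT0..3 in the vertex shader.
    static readonly VertexDeclaration InstanceVertexDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0),
        new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 1),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 2),
        new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 3));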

I think I can clarify this better with a drawing, but I’m short on time. If this is not clear yet, or you’d like an example, feel free to ask. I’ll have some spare time later this week :)

You can do simple lighting with normals by using BasicEffect, but if you want shadow mapping you’ll need custom shaders. There’s an XNA sample for that and several forum posts related to the subject.
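
For the simple lighting case, something like this sketch is usually enough (assuming the model’s parts use BasicEffect):

    foreach (var mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();        // three directional lights + ambient
            effect.PreferPerPixelLighting = true;  // smoother shading where supported
        }
        mesh.Draw();
    }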

Ah, so basically, the goal is to have the vertex buffer containing the geometry advance only after all “copies” of that model are drawn, while only the position data is sent per instance? And what I was doing with repeated mesh.Draw() calls was resending the geometry of the same model with each position update?

I’d be thankful for the example, but take your time. I also have to clean up my laptop - the i5 inside it is reaching 90+ °C, and I’d really hate to have it shut down on its own due to overheating in the middle of work. I’ll also try to write the code myself in the meantime.

I’ll then research it when I get to that point ^^

In any case, thank you very much, it’s becoming clearer.


Well, might as well post here.

I’ve started to learn more about shaders, and in an attempt to implement model instancing, I wrote the drawing code with the help of the XNA sample.

The problem is, all I get is a black void, and I’m not sure what I did wrong or where. I’ve checked all the code multiple times and tried tinkering with it; nothing helped. So… I need some guidance. Again.

Here’s the drawing code (I kept it as simple as possible so that I get something on screen first; I’ll refactor it later to be more efficient):

    private void Draw3D(GameTime gameTime)
    {
        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;

        var box = CalculateBoundingBox(_ground);
        var height = Math.Abs(box.Max.Y - box.Min.Y);
        var width = Math.Abs(box.Max.X - box.Min.X);
        Tile tile;

        var tileTypes = new Dictionary<string, List<Matrix>>();

        // Collect the transforms of all instances of each tile type
        for (int i = 0; i < Map.Instance.Grid.Count; i++)
        {
            for (int j = 0; j < Map.Instance.Grid[i].Count; j++)
            {
                tile = Map.Instance.Grid[i][j];

                var tileName = tile.Height.ToString();

                if (tile.Height >= LandHeight.Plains)
                    tileName = tile.Biome.ToString() + tileName /*+ (tile.Resource == Resource.None ? "" : tile.Resource.ToString()) + (tile.Enhancement == Enhancement.None ? "" : tile.Enhancement.ToString())*/;

                if (!tileTypes.ContainsKey(tileName))
                    tileTypes.Add(tileName, new List<Matrix>());

                var tileTransform = Matrix.CreateScale(0.01f) * Matrix.CreateRotationY(MathHelper.PiOver2) * Matrix.CreateTranslation(new Vector3(i * height * 0.75f, 0, j * width + (i % 2) * (width / 2)));
                tileTypes[tileName].Add(tileTransform);
            }
        }

        // Draw map
        foreach (var key in tileTypes.Keys)
        {
            var instanceVertexBuffer = new DynamicVertexBuffer(GraphicsDevice, InstanceVertexDeclaration, tileTypes[key].Count, BufferUsage.WriteOnly);
            instanceVertexBuffer.SetData(tileTypes[key].ToArray(), 0, tileTypes[key].Count, SetDataOptions.Discard);

            var model = key.Contains("Water") ? _ground : _tileModels[key];
            var modelBones = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(modelBones);

            foreach (var mesh in model.Meshes)
            {
                foreach (var meshPart in mesh.MeshParts)
                {
                    // Geometry in slot 0 (shared), instance transforms in slot 1 (one per instance)
                    GraphicsDevice.SetVertexBuffers(
                        new VertexBufferBinding(meshPart.VertexBuffer, meshPart.VertexOffset, 0),
                        new VertexBufferBinding(instanceVertexBuffer, 0, 1));

                    GraphicsDevice.Indices = meshPart.IndexBuffer;

                    Effect effect = _instancingShader;
                    effect.CurrentTechnique = effect.Techniques["Instancing"];

                    effect.Parameters["World"].SetValue(modelBones[mesh.ParentBone.Index]);
                    effect.Parameters["View"].SetValue(Matrix.CreateLookAt(_cameraPosition, _cameraTarget, Vector3.Up));
                    // Projection values below are placeholders
                    effect.Parameters["Projection"].SetValue(Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 1f, 1000f));

                    foreach (var pass in effect.CurrentTechnique.Passes)
                    {
                        pass.Apply();

                        GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, meshPart.StartIndex, meshPart.PrimitiveCount, tileTypes[key].Count);
                    }
                }
            }
        }
    }

Here’s the shader code:

    #if OPENGL
        #define VS_SHADERMODEL vs_3_0
        #define PS_SHADERMODEL ps_3_0
    #else
        #define VS_SHADERMODEL vs_4_0_level_9_1
        #define PS_SHADERMODEL ps_4_0_level_9_1
    #endif

    // Camera settings.
    float4x4 World;
    float4x4 View;
    float4x4 Projection;

    // This sample uses a simple Lambert lighting model.
    float3 LightDirection;
    float3 DiffuseLight;
    float3 AmbientLight;
    float4 Color;

    texture Texture;

    sampler CustomSampler = sampler_state
    {
        Texture = (Texture);
    };

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
        float3 Normal : NORMAL0;
        float2 TextureCoordinate : TEXCOORD0;
    };

    struct VertexShaderOutput
    {
        float4 Position : SV_POSITION;
        float4 Color : COLOR0;
        float2 TextureCoordinate : TEXCOORD0;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input, float4x4 instanceTransform : BLENDWEIGHT)
    {
        VertexShaderOutput output;

        // Apply the world and camera matrices to compute the output position.
        float4x4 instancePosition = mul(World, transpose(instanceTransform));
        float4 worldPosition = mul(input.Position, instancePosition);
        float4 viewPosition = mul(worldPosition, View);
        output.Position = mul(viewPosition, Projection);

        // Compute lighting, using a simple Lambert model.
        //float3 worldNormal = mul(input.Normal, instanceTransform);
        //float diffuseAmount = max(-dot(worldNormal, LightDirection), 0);
        //float3 lightingResult = saturate(diffuseAmount * DiffuseLight + AmbientLight);

        output.Color = Color;

        // Copy across the input texture coordinate.
        output.TextureCoordinate = input.TextureCoordinate;

        return output;
    }

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        return tex2D(CustomSampler, input.TextureCoordinate) * input.Color;
    }

    // Hardware instancing technique.
    technique Instancing
    {
        pass Pass1
        {
            VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
            PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
        }
    }

I have a general idea of what both the code and the shaders are doing (there’s always a possibility that I’m misunderstanding something), so it looks to me like it should work, which is why I’m confused.

EDIT: Solved it. The issue was that my effect wasn’t using the texture contained in the mesh parts, because I was using my own effect instead of the BasicEffect already present. Copying the texture over to my effect did the trick.
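
For anyone hitting the same wall, the copy step could look roughly like this (a sketch; it assumes the content pipeline attached a BasicEffect to each mesh part, which holds the loaded texture):

    // Grab the texture the content pipeline stored in the part's BasicEffect
    // and hand it over to the custom instancing effect.
    var basicEffect = meshPart.Effect as BasicEffect;
    if (basicEffect != null)
        _instancingShader.Parameters["Texture"].SetValue(basicEffect.Texture);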