Handling Custom Mesh Deformation

Hello MonoGame Community

I’ve recently started working on a new project using MonoGame and would like some advice on how to handle the rendering of a custom deformed 3D mesh. Here’s what I’m doing.

Way back when I first started playing with 3D, about 15 years ago, I was using Dark Basic Professional, which let me directly set the position offset and rotation of every bone in a mesh at runtime. With that, I developed my own custom way of animating that did not use the traditional keyframe method.

In addition to implementing something like that I also want to be able to handle the actual mesh deformation with my own code, and this creates a problem.

While I have played with RB Whitaker’s XNA tutorials and am happy with basic 3D in MonoGame, there seems to be nothing covering custom mesh deformation.

I have already considered a couple of options for how to handle this but I’m not happy with either of them:

Method 1 - Recreate the vertex buffer every frame to update the mesh data. I seriously doubt this would be efficient, since I imagine a standard vertex buffer was not designed for dynamic data. It would probably work, but it would be slow.
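To make Method 1 concrete, here’s roughly what recreating the buffer each frame would look like (just a sketch - `graphics`, `device` and `vertices` are placeholders for whatever you already have):

```csharp
// Each frame: allocate a brand-new buffer, upload the deformed mesh, draw, discard.
var vb = new VertexBuffer(graphics, typeof(VertexPositionNormalTexture),
                          vertices.Length, BufferUsage.WriteOnly);
vb.SetData(vertices);        // full re-upload of every deformed vertex
device.SetVertexBuffer(vb);
// ...draw calls here...
vb.Dispose();                // thrown away again next frame
```

The allocation and disposal every frame is exactly the overhead that makes this approach feel wrong.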

Method 2 - Pass the updated mesh data into the shader via an array of parameters. This would probably be even worse than Method 1 performance-wise, as well as being incredibly wasteful on the graphics pipeline: the mesh data would be passed not only via the vertex buffer but through the shader parameters as well. On top of that, I’d need to ensure the base vertex data carries enough information for the shader to map the deformed data to the right vertices.

Of those two I prefer Method 1, but I’m sure there must be a better way of handling this.

Also, I don’t want to handle the actual mesh deformation inside the shader, since I may need to render the deformed meshes multiple times; the articles I have read suggest handling the deformation on the CPU in those cases.

Although I could certainly try these methods out and see how well they perform, I would still be left wondering if there is a better way I could use right from the outset.

Any suggestions, advice or links to tutorials would be greatly appreciated.


PS: I apologise if my use of terminology is wrong; I am mostly self-taught in programming.

I’m wondering what kind of deformation you want to use. If it is a weighted skinned mesh, I think rotations, position distortions, etc. can be done with a skinned-mesh shader of sorts, where each vertex also carries info about bone-weight influence. With Method 2, the vertices would already use a format containing weight influences (via a vertex declaration set up before creating the vertex buffer), so they would already hold the data needed for the deformation (aside from updating the bone movement data).
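For reference, a custom vertex format carrying bone weights and indices might look something like this (a sketch - the struct name and field layout are illustrative, not an existing MonoGame type):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Graphics.PackedVector;

// Each vertex stores up to four bone influences alongside position/normal/UV.
public struct VertexPositionNormalTextureBlend : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;
    public Vector2 TextureCoordinate;
    public Vector4 BlendWeights;   // how strongly each of the four bones pulls
    public Byte4   BlendIndices;   // which four bones affect this vertex

    // Byte offsets: Vector3 = 12 bytes, Vector2 = 8, Vector4 = 16, Byte4 = 4.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0),
        new VertexElement(48, VertexElementFormat.Byte4,   VertexElementUsage.BlendIndices, 0));

    VertexDeclaration IVertexType.VertexDeclaration => VertexDeclaration;
}
```

You’d pass `typeof(VertexPositionNormalTextureBlend)` when creating the vertex buffer, and the shader (or your CPU code) reads the weights/indices per vertex.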
If you’d rather do the deformation in software, perhaps you could deform a lower-detail version on the CPU, copy it into the vertex buffer each frame, and use a tessellation / geometry shader to boost the smoothness of the result.
I remember once I wanted precise detail of animation distortions (hair, etc.) exactly as made in my original 3D software, and I wanted various walking and running speeds to blend (ie: idle-walk, walk-run) – so I baked out minimal frame intervals (to avoid all the bone math) and interpolated between them (ie: blend vertex[n] in walk at 20% with vertex[n] in run at 80%) to achieve animation blending, copying the result to the vertex buffer. It actually looked surprisingly good.
I used only one vertex buffer and only set data for the relevant spans. For short, fast, non-blended animations I could just draw sub-sections of the vertex buffer, which already held all the pre-transformed vertices.
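That baked-frame blending can be sketched like this (assuming `walkFrame` and `runFrame` are pre-baked `VertexPositionTexture[]` arrays of equal length, and `blended` is a scratch array - all names are mine):

```csharp
// Blend two baked poses 20% walk / 80% run, then upload the result.
float runWeight = 0.8f;
for (int n = 0; n < vertex_count; n++)
{
    // Lerp positions between the two baked poses; UVs don't move.
    blended[n].Position = Vector3.Lerp(walkFrame[n].Position,
                                       runFrame[n].Position, runWeight);
    blended[n].TextureCoordinate = walkFrame[n].TextureCoordinate;
}
// Upload just this span into the one shared dynamic buffer.
vertexBuffer.SetData(blended, 0, vertex_count, SetDataOptions.Discard);
```

Varying `runWeight` over a few frames gives you the smooth walk-to-run transition.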

Just some ideas. ;d

Thanks for the suggestions, but as I mentioned originally I’d prefer not to do the deformation in the shader at present.

Although, you mention using the tessellation / geometry shaders - I thought MonoGame doesn’t support those…yet.

I have discovered (more through blind luck than Google actually giving me what I’m trying to search for) references to a ‘DynamicVertexBuffer’ class which looks like it might be what I’m after. I’m off to investigate.

[Yeah, I don’t think I would use geometry shaders with this anyway - it should be possible independently of MonoGame, though it could be tricky cross-platform]

Yeah, something like this probably:

vertexBuffer = new DynamicVertexBuffer(graphics, typeof(VertexPositionColorTexture), vertex_count, BufferUsage.WriteOnly); //will only write to it

indexBuffer = new IndexBuffer(graphics, typeof(short), index_count, BufferUsage.WriteOnly);

[Where the indices are preset and won’t change - and make sure the device states are set up for what you want in advance, like the culling mode]
(and now, assuming that in Update you’ve refreshed vertices[vertex_count])
vertexBuffer.SetData(vertices, 0, vertex_count, SetDataOptions.Discard); // Discard avoids stalling on a buffer the GPU is still reading
device.SetVertexBuffer(vertexBuffer); // if not already set
device.Indices = indexBuffer;
…set your textures and shader constants/uniforms [if changed]…or some basic shader assigned ahead of time
triangle_count = vertex_count/2
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, triangle_count); // draws from the buffers set above

(Note here I was using VertexPositionColorTexture, but you may want to just use VertexPositionTexture for vertices.)

Aside from correcting the part where you say:

triangle_count = vertex_count/2

Should be - triangle_count = vertex_count/3

Yes, that definitely seems like the way to go.

Thanks for your help.


Oops - oh yeah - maybe it should even be index_count/3, since vertices are shared.