Morphing models.

Hi there.

I’ve found a few threads here and there saying this isn’t usually done: you wouldn’t change the position of vertices in a model, as you’d be introducing a lot of traffic to the GPU. But if I need to dynamically change the shape (morph) of the model, I can’t see any alternative.

What I can’t find is an example of how you’d do it.

With a pre-existing model I’m guessing I’d have to extract the verts and face data to an array and work with that, which sounds a bit hairy, but I’m willing to give it a go!

Anyone have anything they could share?

Many thanks.

Typically, this sort of thing is not done through code, but in whatever 3D program you are using to construct your model. They are usually called something like “Shape Keys.” Using them extensively for animations is generally frowned upon, both because of the real-time cost and how much they tend to bloat the size of a model. More often than not, these are used for smaller, subtler animations, and are often blended together with bone-based animations. I often use them to make facial animations for lip-syncing, as bones can only do so much for such detail-oriented work.

Once a shape key is created, it can usually be adjusted from its default state (0.0) to its 100% state (1.0). Some programs will even allow you to use negative values to invert how far it is being pushed. You can then manipulate these shape-key values from code at run time to produce some impressive animations.
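Under the hood, a shape key is essentially a set of per-vertex offsets from the base mesh, and the weight just scales those offsets, which is also why negative values push the verts the other way. A rough sketch of that blend in HLSL-style terms (the names here are illustrative, not any particular engine’s API):

float keyWeight1 = 0.0;   // 0.0 = off, 1.0 = fully applied, negative values invert
float keyWeight2 = 0.0;

// Each delta is (shape-key position - base position) for the vertex in question.
float4 BlendShapeKeys(float4 basePosition, float4 keyDelta1, float4 keyDelta2)
{
    return basePosition + keyWeight1 * keyDelta1 + keyWeight2 * keyDelta2;
}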

If all you want to do is “morph” a model, there are other animation tricks that can be used, depending on the actual results desired. For instance, you could use a shape-key transition for the animation, but then switch it out with a post-morphed version of the model once the transition is complete. This way you could have three separate models: one regular, one for the transition animation, and one for the completed transformation. The shape keys would only need to exist in the transition model, allowing you to save performance for the before and after states.

Many thanks Will, interesting stuff, I’ll start looking at these keys. :o)

Depending on what you want to achieve, there are lots of possibilities.

If you only want a small number of morph targets, you can put the vertex position for each morph into the vertex shader input:

struct VSInput
{
    float4 Vertex1 : TEXCOORD0;   // base pose position
    float4 Vertex2 : TEXCOORD1;   // morph target position
    // ... further morph targets as needed
};

float blendweight1 = 0;           // blend weight, set by the application

Then, in the vertex shader, blend between them:

float4 vertexposition = lerp(IN.Vertex1, IN.Vertex2, blendweight1);
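Putting those fragments together, a minimal complete vertex shader for a single morph target could look something like this (the entry point, output struct and matrix constant are just illustrative names filled in for completeness):

float4x4 WorldViewProjection;   // assumed to be set by the application
float blendweight1 = 0;         // 0 = base pose, 1 = morph target

struct VSInput
{
    float4 Vertex1 : TEXCOORD0;   // base pose position
    float4 Vertex2 : TEXCOORD1;   // morph target position
};

struct VSOutput
{
    float4 Position : POSITION;
};

VSOutput MorphVS(VSInput IN)
{
    VSOutput OUT;
    float4 vertexposition = lerp(IN.Vertex1, IN.Vertex2, blendweight1);
    OUT.Position = mul(vertexposition, WorldViewProjection);
    return OUT;
}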

Or you can go down the skeletal animation route, which is very well known and documented all over the place.

Or you can use calculated positions like I did here http://stainlessbeer.weebly.com/gpu-based.html
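As a generic illustration of the calculated-positions idea (this is just a sketch, not the code from that page): the displacement is computed entirely in the vertex shader from a time constant supplied by the application, so no per-vertex morph data ever has to be re-uploaded.

float time = 0;   // assumed to be updated by the application each frame

// Illustrative only: ripple the mesh with a time-driven sine wave along the normal.
float4 CalculatedPosition(float4 basePosition, float3 normal)
{
    float wave = sin(basePosition.x * 4.0 + time) * 0.1;   // amplitude of 0.1 units
    return basePosition + float4(normal * wave, 0.0);
}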

If you can give us a better idea of what you want to do we can help out a lot more.