Normals and Tangents after Vertices rotate / scale / translate

I have a model where I have the normals and tangents pre-calculated and working as expected.

I have a scenario where I need to deform part of the model and translate / rotate / scale some of the vertices. I don’t want to have to go through the entire normal and tangent calculation again.

I tried just rotating the normals and tangents based on the rotation of the vertices (by decomposing the local transform matrix for those vertices), however this doesn't give the same results, i.e. the lighting is off for the deformed mesh. If I run through the entire normal and tangent calculations again after the vertices have been transformed, then it all looks good.

Is there an optimal way I can update the normals and tangents for the deformed mesh without having to re-do all the calculations?

Thanks

Rotations are no problem for on-the-fly mesh deformation. However, arbitrary scaling presents the problem you describe and will require recalculation on the fly.

The way this is normally handled is at the shader level, where you use your own shader that essentially has two sets of vertex data for the same model: call the undeformed data A and the fully deformed data B. You then perform an interpolation from A to B in the shader. This works because the normals and tangents for both extremes of the deformation are precalculated.

Either way, that requires either a more complex shader on the GPU or the aforementioned recalculation on the CPU, which is much slower.
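
Not anyone's actual shader, but a minimal CPU-side sketch of that A-to-B blend in numpy (array names are made up for illustration); in a real implementation the same lerp would run per-vertex in the vertex shader:

```python
import numpy as np

def blend_morph(pos_a, nrm_a, pos_b, nrm_b, t):
    """Linearly interpolate positions and normals between an
    undeformed state A and a fully deformed state B (0 <= t <= 1).
    Normals must be renormalized after the lerp, since a linear
    blend of two unit vectors is generally not unit length."""
    pos = (1.0 - t) * pos_a + t * pos_b
    nrm = (1.0 - t) * nrm_a + t * nrm_b
    nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
    return pos, nrm

# Toy data: two vertices, each with a position and a unit normal.
pos_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
nrm_a = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
pos_b = np.array([[0.0, 0.5, 0.0], [1.0, 0.0, 0.5]])
nrm_b = np.array([[0.0, 0.8, 0.6], [0.6, 0.8, 0.0]])

pos, nrm = blend_morph(pos_a, nrm_a, pos_b, nrm_b, 0.5)
```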

If you can store the animated frames, you can lerp between them.

Have a look at this; I wrote it a hell of a long time ago.

These guys are right that scaling can cause issues for normal vectors, but if you’re deforming a mesh with a transformation matrix, you can also transform the normals analytically without needing keypoints to interpolate.

If you rotate a mesh, then it makes sense to rotate your normals with the same matrix. However, if you do non-uniform scaling (i.e. scale by different values in the x, y, and z directions) then your normal vectors will not match the new shape of the surface. Instead of transforming by the same matrix, you need to transform the normals by the inverse transpose of the matrix. This article does a pretty good job of explaining how it works with pictures, if you have a mathematical background. In particular, image b shows that if you stretch a surface and then stretch the normal in the same direction, it looks like it stretched the wrong way. If this does apply to your scenario, beware that there may be perf implications of computing lots of matrix inverses.
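
A small numpy sketch of the inverse-transpose rule, using a made-up non-uniform scale just to show the math: transforming the normal by the model matrix itself breaks perpendicularity, while the inverse transpose preserves it.

```python
import numpy as np

# Non-uniform scale: stretch x by 2, leave y and z alone.
M = np.diag([2.0, 1.0, 1.0])

# A surface tangent and its normal on a 45-degree slope.
tangent = np.array([1.0, 1.0, 0.0])
normal  = np.array([-1.0, 1.0, 0.0])
print(np.dot(tangent, normal))          # 0.0 -- perpendicular

t2 = M @ tangent                        # tangents transform by M
n_wrong = M @ normal                    # transforming N by M is wrong
n_right = np.linalg.inv(M).T @ normal   # inverse transpose is right

print(np.dot(t2, n_wrong))              # -3.0 -- no longer perpendicular
print(np.dot(t2, n_right))              # 0.0 -- still perpendicular
```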

The key thing to remember is that a world matrix contains three things.

Translation, Rotation, Scaling

When it comes to normals, translation and scaling are bad, rotation is good.

By using the inverse transpose of the 3x3 part you drop translation entirely, and any scaling is inverted so that (after renormalizing) the normals stay perpendicular to the surface; pure rotation passes through unchanged, since the inverse transpose of a rotation matrix is the rotation itself.

Thanks all for the replies. I am just rotating the normals and tangents (without scaling or translating); however, the lighting doesn't match the case where I just recalculate the normals and tangents.

I’ll play around with it a bit more to see what might be going on.

Thanks.

I have a scenario where I need to deform part of the model and translate / rotate / scale some of the vertices.

If you mean you have a model of, say, a person with an arm that moves but that arm doesn't deform (e.g. the hand doesn't grow a sixth finger or anything), then that's just a rigged, animated mesh; you don't have to recalculate normals or tangents for that. If the whole model scales uniformly, the same applies.

If, however, the arm grows lumps, or the hand has an animation that grows an extra finger, that is a deformation. These typically require a completely different set of animations and matrices, and then you need to either recalculate the vertices, or have extra mesh data for the full extent of the deformation so you can interpolate without it being computationally super expensive, or keep a separate set of bind pose matrices for this in addition to the animation matrices and bind poses.

If your problem is the first case and you are having trouble with lighting, then it is likely that…

You are applying the matrix as-is to the normals and tangents, when instead you should apply only the 3x3 non-translational part of the matrix to the normal. That part contains only the rotation and the (proportionally equal across the x, y, and z axes) scaling. Afterwards you should re-normalize the normal and tangent, all of this done in the shader; sometimes it's enough to just do the normalization.

Normals and tangents are purely directional vectors of unit length, which is all that is required for lighting calculations. Performing the same scaling and translation on them that is applied to the positional part of the vertex will make them non-unit-length and mess up the lighting calculation; hence you extract the 3x3 portion of the transformation matrix (equivalently, zero out the translation) and use it on the normal and tangent separately in your shader.
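
To make that concrete, here is a rough CPU-side sketch of the same fix in numpy (the world matrix is invented for illustration; in a shader you would do the equivalent with a 3x3 cast and normalize()):

```python
import numpy as np

def make_world(translation, angle, scale):
    """Build a toy 4x4 world matrix: uniform scale, rotation about Z,
    then translation (column-vector convention)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([
        [c * scale, -s * scale, 0.0,   translation[0]],
        [s * scale,  c * scale, 0.0,   translation[1]],
        [0.0,        0.0,       scale, translation[2]],
        [0.0,        0.0,       0.0,   1.0],
    ])

M = make_world([5.0, 0.0, 0.0], np.pi / 4, 2.0)
normal = np.array([0.0, 1.0, 0.0])

# Wrong: treating the normal like a position (w = 1) picks up the
# translation and the scale, so the result is neither a pure
# direction nor unit length -- useless for lighting.
bad = (M @ np.append(normal, 1.0))[:3]

# Right: use only the 3x3 rotation/scale block, then renormalize
# to undo the uniform scale.
n = M[:3, :3] @ normal
n /= np.linalg.norm(n)

print(bad)                     # translated and scaled: garbage for lighting
print(n, np.linalg.norm(n))    # unit-length rotated normal
```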

A bind pose matrix undoes the scaling, translation, and rotation of the previous nodes. Even so, non-uniform scaling can still throw off lighting calculations in the above scenario, so ideally you re-normalize the matrix itself. However, that's more expensive, a bit harder to explain, and not needed in 99% of cases (other than the normal bind-pose model calculation), since morphed meshes are typically driven by vertex translation rather than scaling, and re-normalizing the resulting vector is simpler.
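
For what it's worth, "re-normalizing the matrix" here usually means orthonormalizing its 3x3 block, e.g. with Gram-Schmidt. A rough sketch of one way to do that (assuming numpy on the CPU and a row-basis convention; not anyone's actual code from this thread):

```python
import numpy as np

def orthonormalize(m3):
    """Gram-Schmidt on the rows of a 3x3 matrix: strips scale and
    shear row by row, leaving a pure rotation."""
    x = m3[0] / np.linalg.norm(m3[0])
    y = m3[1] - np.dot(m3[1], x) * x   # remove the x component
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                 # orthonormal to x and y by construction
    return np.array([x, y, z])

# A rotation contaminated with non-uniform scale.
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
contaminated = rot @ np.diag([3.0, 0.5, 2.0])

R = orthonormalize(contaminated)
print(np.allclose(R @ R.T, np.eye(3)))   # True: orthonormal again
```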

Here are some of my shaders from when I was just working with the aforementioned animated meshes. Because I was using bones and inverse bind pose matrices, I don't actually use the 3x3, though really I should.

This project also has an example of drawing the normals so they can be visualized when you hit F6.
It's far from done, but maybe it helps to look at how the normals are handled; tangents would be the same, I just didn't get around to adding them yet.