Animated Mesh Collision Detection Shape Generation
Generating bounding shapes for animated meshes adds complexity because the bounds of the mesh may change from frame to frame. The method I describe below works well for animations processed on either the CPU or the GPU. (The GPU is the preferred option, as the overhead of animating hundreds, if not thousands, of meshes on the CPU would cripple performance.) This description deals with MS3D models, but the same process holds for any bone/joint-based animation. The secret to building a collision shape for an animated mesh is to think of the mesh as a group of meshes: each bone and its respective vertices form one part of the overall mesh.
When an MS3D model is loaded it is in what is known as the T-pose, the model's neutral position. An example of a model of a person in the T-pose is shown below.
If you are using the MS3D mesh as a static model, this is as far as you have to process: you have all vertex positions, texture coordinates, normals and triangles. However, this post is about animation, so we had better go ahead and load the bone information. Each vertex is attached to one bone, and when that bone moves, so does the vertex. (It is important to note here that triangles can be made up of vertices belonging to different bones. This allows areas of the mesh to stretch and shrink during animation.)
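As a rough sketch, the per-vertex data you end up with after loading looks something like the following. This is a simplified, hypothetical layout of my own, not the actual MS3D file structures:

```cpp
#include <cstdint>

// Hypothetical, simplified view of the data loaded from an MS3D file:
// every vertex carries the index of the single bone it is attached to.
struct Vertex {
    float pos[3];   // T-pose position
    float u, v;     // texture coordinates
    int   boneId;   // index into the joint array (-1 = unattached)
};

struct Triangle {
    int v[3];       // vertex indices -- the three vertices may belong
                    // to *different* bones, which is what lets the
                    // mesh stretch and shrink between joints
};
```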
Each joint has rotation and translation information for each frame of the animation. In order to use the bones for animation, the T-pose mesh has to be collapsed so that each bone sits at the origin. This is achieved by applying an inverse translation and rotation to each vertex of a bone using the bone's absolute transformation matrix. (The absolute and relative transformation matrices are generated while loading the MS3D file, using the bone's translation and rotation vectors plus its parent's transformation matrices. I won't delve too much into loading MS3D files here, as there are plenty of resources online covering the subject, but if you have any questions please feel free to email me.)
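The collapse step can be sketched as below. This is a minimal version assuming the bone matrices are rigid transforms (rotation plus translation only), which makes the inverse cheap to compute; the names are mine, not from the MS3D loader:

```cpp
#include <array>

using Mat4 = std::array<float, 16>; // row-major 4x4
struct Vec3 { float x, y, z; };

// Invert a rigid transform (rotation + translation only):
// the inverse rotation is the transpose, and the inverse
// translation is -R^T * t.
Mat4 invertRigid(const Mat4& m) {
    Mat4 r{}; // zero-initialized
    // transpose the 3x3 rotation block
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i * 4 + j] = m[j * 4 + i];
    // new translation = -R^T * t  (t lives in m[3], m[7], m[11])
    for (int i = 0; i < 3; ++i)
        r[i * 4 + 3] = -(r[i * 4 + 0] * m[3] +
                         r[i * 4 + 1] * m[7] +
                         r[i * 4 + 2] * m[11]);
    r[15] = 1.0f;
    return r;
}

Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return { m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
             m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
             m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11] };
}

// Collapse: move a T-pose vertex into its bone's local space by
// applying the inverse of the bone's absolute transformation matrix.
Vec3 collapseVertex(const Vec3& tpose, const Mat4& boneAbsolute) {
    return transformPoint(invertRigid(boneAbsolute), tpose);
}
```

A vertex that sits exactly at its bone's position in the T-pose ends up at the origin after collapsing, which is exactly the behaviour described above.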
When the mesh has been collapsed so that each bone is at the origin, it looks something like this:
At this stage we can build a bounding shape for each bone. For ease of description and high performance I have decided to use bounding boxes. (If you want more precision, convex hulls or triangle meshes could be used.) A bounding box is simply the smallest box that will enclose the shape. Below is an image of the same collapsed mesh alongside its bounding boxes. You can see how the collapsed mesh would fit inside the collection of bounding boxes.
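Building the per-bone boxes from the collapsed vertices can be sketched as follows; the function and field names are illustrative:

```cpp
#include <vector>
#include <algorithm>
#include <limits>

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box: the smallest box enclosing a point set.
struct AABB { Vec3 min, max; };

// Build one box per bone from the collapsed mesh. boneOf[i] holds the
// bone index of collapsed vertex i.
std::vector<AABB> buildBoneBoxes(const std::vector<Vec3>& collapsed,
                                 const std::vector<int>& boneOf,
                                 int boneCount) {
    const float inf = std::numeric_limits<float>::max();
    std::vector<AABB> boxes(
        boneCount, AABB{ Vec3{inf, inf, inf}, Vec3{-inf, -inf, -inf} });
    for (std::size_t i = 0; i < collapsed.size(); ++i) {
        AABB& b = boxes[boneOf[i]];
        const Vec3& p = collapsed[i];
        b.min.x = std::min(b.min.x, p.x);  b.max.x = std::max(b.max.x, p.x);
        b.min.y = std::min(b.min.y, p.y);  b.max.y = std::max(b.max.y, p.y);
        b.min.z = std::min(b.min.z, p.z);  b.max.z = std::max(b.max.z, p.z);
    }
    return boxes;
}
```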
When animating a mesh (on the CPU or GPU), each vertex of a given bone has to be transformed using the bone's transformation matrix for the current frame. These matrices can also be used to transform the collision shapes: if an arm moves and rotates to a new position, its collision shape should be transformed in the same way. What at first appears to be a complicated problem is actually very simple. You already have most of the information you need from loading the animated mesh; the only extra information required is a collision shape per bone, which is trivial to compute once the mesh has been collapsed.
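As an example of moving the boxes with their bones, here is a sketch that transforms a box's eight corners by the bone's per-frame matrix and re-fits an axis-aligned box around the result. (A physics engine would instead keep the box oriented and just update its world transform, as described below; re-fitting is the dependency-free way to illustrate the idea.)

```cpp
#include <array>
#include <algorithm>

using Mat4 = std::array<float, 16>; // row-major bone matrix for the frame
struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return { m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
             m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
             m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11] };
}

// Move a bone's box with its bone: transform all 8 corners by the
// bone's matrix for the current frame, then re-fit an axis-aligned
// box around them.
AABB transformAABB(const Mat4& boneFrame, const AABB& box) {
    AABB out{};
    for (int i = 0; i < 8; ++i) {
        Vec3 c = { (i & 1) ? box.max.x : box.min.x,
                   (i & 2) ? box.max.y : box.min.y,
                   (i & 4) ? box.max.z : box.min.z };
        c = transformPoint(boneFrame, c);
        if (i == 0) { out.min = out.max = c; }
        else {
            out.min.x = std::min(out.min.x, c.x); out.max.x = std::max(out.max.x, c.x);
            out.min.y = std::min(out.min.y, c.y); out.max.y = std::max(out.max.y, c.y);
            out.min.z = std::min(out.min.z, c.z); out.max.z = std::max(out.max.z, c.z);
        }
    }
    return out;
}
```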
To actually do collisions you have to use the bounding information plus the transformation matrix for each bone for the current frame. I use Bullet as my collision library. For each collision shape I create a btRigidBody with a mass of 0 so that it is not affected by the physics engine. When collision detection is required, I update each btRigidBody with the transformation matrix of its parent bone for the current frame.
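The setup and the per-frame update look roughly like the following. This is a fragment rather than a complete program, and names like `halfX` and `boneMatrix` are mine; check the Bullet documentation for your version before copying anything:

```cpp
#include <btBulletDynamicsCommon.h>

// One body per bone. A mass of 0 makes the body static in Bullet, so
// the simulation never moves it -- we drive its transform ourselves.
btBoxShape* shape = new btBoxShape(btVector3(halfX, halfY, halfZ));
btRigidBody::btRigidBodyConstructionInfo info(
    0.0f,                // mass 0 = not affected by the physics engine
    nullptr,             // no motion state; we set the transform directly
    shape,
    btVector3(0, 0, 0)); // zero local inertia for a static body
btRigidBody* body = new btRigidBody(info);
world->addRigidBody(body);

// Per frame, when collision detection is required: push the parent
// bone's matrix for the current frame into the body. boneMatrix is
// the bone's column-major 4x4 matrix.
btTransform t;
t.setFromOpenGLMatrix(boneMatrix);
body->setWorldTransform(t);
```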
There are some issues with the above process. It's not really visible in the first three examples in the video, but in the fourth, the female zombie, you can clearly see that there are areas of her body (look at her arms) that are not covered by a collision shape. This is because she is very low-polygon, and a lot of the space between bones is made up of polygons with vertices from different bones. My first thought was that if the bounding shapes were built from the polygons attached to a bone instead of its vertices this problem would be fixed, but that didn't really work: it does shore up the gaps in the collision shapes, but it overcompensates and creates shapes which are much too big. If meshes were modified so that there was at least one vertex at each extreme of every virtual bone, all the bounding information would be generated correctly, and this should not impact performance. Each of the meshes you can see in this post has approximately 30 bones. If we assume an additional 6 vertices per bone (one each for the top, bottom, left, right, back and front), that would be 180 extra vertices per model.
Speaking with a friend who works in the games industry, he told me that they have had similar problems before, and that they currently build, or at least correct, the bounding information manually: artists size the bounding shapes and export the data, which is then loaded alongside the animation. As a solo developer I do not have the time or skills to set up bounding shapes manually for multiple meshes, all of which have 1,000 or more frames. For now I will stick with my first solution and hard-code the few fixes that are required. (If you look at the video you will see that the dog, the cop and the freak with the eye are well covered using the dynamic method; only the last mesh has a few issues.)