I have been trying to build an animation system for my OpenGL project for a long time. One of the reasons is that I have only a limited amount of free time since starting my full-time job. Another problem is that, if I intend to keep it a clean project rather than just a school project, building an animation system is a rabbit hole: 30 lines of code got me another 100 lines of work, and it only led me deeper and deeper. Well, it is a perfect opportunity for me to explain the story here. The amount of material on skeletal animation that I found online, especially good blogs, is less than a dozen. Gladly, I would like to point out that there is a good YouTube video series you can follow; it is in Java, and the author provides the source code for reference. Khronos has a shader example, and there are a few others.
Well, before we can draw anything in a graphics program, we need to thank the artists for the rigging part. Connecting the dots between bones and meshes in Blender is far from an easy task; from my personal experience, I gave up at the first step, leaving my ambition of replacing all the artists with my programs dead in the water. So machine learning programs will take years yet to graduate from art school; while they are doing that, we still need to make friends with artists and draw our meshes with OpenGL.
Alright, back to the topic. What exactly do we need for skeletal animation? Let's take a look at the diagram first.
I hope this diagram is not too misleading. Structure, transformations and skinning are the three parts we need to take care of. Well, as you know from the history books, every character has a set of bones structured in a tree, and each bone has transformations that affect itself and its children, which in turn affect the associated meshes, and… Before we can start drawing anything, hundreds of lines of code just for the logic need to be done. It really goes against common practice. So, in order to draw in OpenGL, we need to feed the shader program the minimum amount of data it needs: apart from the normal, vertex and texture coordinates, two extra layouts of bones and weights need to be given to the shader. If you don't want to do any transform, you don't exactly need the real bone weights and transforms.
On the CPU side, the work lies in associating the bones and meshes. Depending on the asset library you use, the data is structured in different ways. For instance, assimp requires the user to read the bone list from a mesh, where you can read the bone weights; the bone hierarchy is stored as aiNode objects, where you read the cascaded transformation matrices.
On the GPU side, our vertex shader program looks like this:
//[Vertex Shader]
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;
//we can also make a matrix4x2
layout (location = 3) in vec2 bw0;
layout (location = 4) in vec2 bw1;
layout (location = 5) in vec2 bw2;
layout (location = 6) in vec2 bw3;
out vec2 TexCoords;
out vec3 fragPos;
out vec3 Normal;
const int maxNbone = 100; //it has to be constant
uniform mat4 MVP;
uniform mat4 model;
uniform mat4 boneMats[maxNbone];
void main()
{
    vec4 v = vec4(position, 1.0);
    //w = 0.0 so translations in the bone matrices do not affect the normal
    vec4 n = vec4(normal, 0.0);
    vec4 newVertex;
    vec4 newNormal;
    //updating vertex
    newVertex = (boneMats[int(bw0.x)] * v) * bw0.y +
                (boneMats[int(bw1.x)] * v) * bw1.y +
                (boneMats[int(bw2.x)] * v) * bw2.y +
                (boneMats[int(bw3.x)] * v) * bw3.y;
    //updating normal
    newNormal = (boneMats[int(bw0.x)] * n) * bw0.y +
                (boneMats[int(bw1.x)] * n) * bw1.y +
                (boneMats[int(bw2.x)] * n) * bw2.y +
                (boneMats[int(bw3.x)] * n) * bw3.y;
    gl_Position = MVP * newVertex;
    Normal = vec3(newNormal);
    fragPos = vec3(model * newVertex);
    TexCoords = texCoords;
}
Straightforward as it is, since we don't have the cascaded transforms here: we just set all the boneMats to the identity matrix. It looks the same as a rigid-object shader program; we will come back with the bone transforms next time.
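We can convince ourselves of that claim by replicating the shader's skinning sum on the CPU (a self-contained sketch with my own small Mat4/Vec4 helpers, not any library's): with every bone matrix set to identity and the four weights summing to 1, the skinned vertex comes out equal to the input, so the mesh renders exactly as a rigid object.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

// Standard column-vector multiply, mirroring boneMats[i] * v in GLSL.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row][col] * v[col];
    return r;
}

// CPU replica of the shader's weighted sum over 4 bone influences.
Vec4 skin(const Vec4& v, const Mat4 mats[], const int idx[4], const float w[4]) {
    Vec4 out{};
    for (int i = 0; i < 4; ++i) {
        Vec4 t = mul(mats[idx[i]], v);
        for (int k = 0; k < 4; ++k) out[k] += t[k] * w[i];
    }
    return out;
}
```

Since identity matrices leave each term as v itself, the sum is just v scaled by the total weight; this is also why the weights must sum to 1.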