Facial Animation Pipeline
So, with a defined plan for what the project required from our art and animation team, we had to evaluate what tools were available to us. We quickly concluded that facial capture and algorithmic lip sync were going to play a large part in the facial animation pipeline. I’ve worked with motion-capture data before, mostly cleaning and fixing it, but I had never worked with facial performance capture (FPC).
We had a couple of options available to us: paying for the services of an external FPC provider, or buying proprietary FPC software and hardware and handling the capture ourselves. Both options had their pros and cons. Working with experienced providers was a very attractive prospect, as the capture process would be quick and painless, but since our game and dialogue script were still being developed, we didn’t have a definite date for recording our actors. We would have had to organise several different recording days, gathering performers and technical crew each time and hoping that schedules synchronised properly, all without eating into our development budget.
FPC software is generally a fairly expensive solution, and it meant a certain amount of up-skilling on our part, learning to operate a new software package efficiently in a short amount of time. Even so, being in full control of the recording process struck us as the logical choice.
After a period of researching and testing a couple of different FPC solutions, we decided on the “Faceware Indie Creation Suite” from Faceware Technologies Inc. (www.facewaretech.com). Faceware is a markerless FPC solution, meaning no sticking or drawing dots onto the actors’ faces. You simply record the performance using the provided HD GoPro camera and run the “Analyzer” software to track the motion of the facial features. Then, using the “Retargeter” plug-in for Maya, the animator applies the tracking data to the facial animation rig and works on top of the retargeted data to finish crafting the performance.
We’ve discovered that the beauty of working with Faceware is that we don’t necessarily need to be on-site with our actors. So long as the actor has a camera capable of recording HD at 60 FPS, it doesn’t matter where they are located; they can simply record and send us the video for tracking. And if there are any problems with the pre-recorded footage, I can film my own short sequence for integration into the shot.
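As an aside, when footage arrives from a remote actor, a quick automated check of its resolution and frame rate can save a round trip. The sketch below is a hypothetical helper of our own, not part of Faceware; it assumes the `ffprobe` tool from FFmpeg is installed and queries the first video stream:

```python
import subprocess
from fractions import Fraction

# Minimum spec assumed for tracking footage: HD at 60 FPS.
MIN_WIDTH, MIN_HEIGHT, MIN_FPS = 1280, 720, 60.0

def parse_stream_info(csv_line):
    """Parse ffprobe CSV output like '1920,1080,60/1' into (width, height, fps)."""
    width, height, rate = csv_line.strip().split(",")
    return int(width), int(height), float(Fraction(rate))

def probe_footage(path):
    """Ask ffprobe for the width, height and frame rate of the first video stream."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=width,height,r_frame_rate",
        "-of", "csv=p=0", path,
    ])
    return parse_stream_info(out.decode())

def meets_requirements(width, height, fps):
    """True if the footage is at least HD and at least 60 FPS."""
    return width >= MIN_WIDTH and height >= MIN_HEIGHT and fps >= MIN_FPS
```

Something like `meets_requirements(*probe_footage("take_01.mp4"))` could then gate whether a clip is worth sending into the tracking pass at all.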
While all this testing was being carried out, our Character Artist, Angharad Green (@angreenbean), was hard at work in Maya, concepting, modelling and texturing our hero character, Gwen. Once the model was complete and approved, I created an animation rig for the body. Facial rigging, however, is unfortunately not within my area of technical expertise, so we brought in a freelance character rigger, David Cowles-Brookes (www.flex-animation.com), to create a high-end, joint-driven facial rig and custom user interface.
I’ve been animating with Maya for over a decade now, and in that time I’ve collected a set of favoured tools, scripts and plug-ins that help make my workflow more efficient. One of my favourites for facial animation is “Studio Library” (www.studiolibrary.com), a free tool for managing poses and animations. It lets you save and build an easily accessible library of poses with a helpful graphical UI of thumbnails, for access at any time. Using it, I create libraries of mouth shapes, stock emotions, eyebrow expressions, blinks, hand poses and so on. You can even blend between poses to create new ones, which is very helpful when animating an emotional performance. I highly recommend it to Maya animators.
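Conceptually, a pose library like this stores named snapshots of rig-control attribute values, and blending two poses is just a linear interpolation between the shared values. The toy sketch below illustrates that idea in plain Python; it is not Studio Library’s actual format or API, and the control names are made up:

```python
class PoseLibrary:
    """Toy pose library: each named pose is a dict of {control_attribute: value}."""

    def __init__(self):
        self.poses = {}

    def save_pose(self, name, attrs):
        # Store a snapshot of the current control values under a name.
        self.poses[name] = dict(attrs)

    def blend(self, name_a, name_b, weight):
        """Linearly interpolate between two poses; weight 0.0 -> a, 1.0 -> b."""
        a, b = self.poses[name_a], self.poses[name_b]
        return {k: a[k] * (1.0 - weight) + b[k] * weight for k in a if k in b}

# Hypothetical facial controls, just for illustration.
lib = PoseLibrary()
lib.save_pose("neutral", {"jaw.rotateX": 0.0, "browL.translateY": 0.0})
lib.save_pose("surprise", {"jaw.rotateX": 12.0, "browL.translateY": 1.5})
half_surprised = lib.blend("neutral", "surprise", 0.5)
```

In a real Maya workflow the resulting values would be applied back to the rig controls; the point here is only how a library of saved poses can yield new in-between expressions for free.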
So, as we find ourselves on the cusp of full production on TaleSinger, we know we have a challenge on our hands. We have a lot of characters to bring realistically to life, but thanks to our research we’re confident that we can achieve the goals we’ve set out. Keep a look out on Twitter for further updates on all aspects of our game development (@QuantumSoupLtd). Thanks for reading.