Creating facial animation of characters via MoCap data

Kei Hirose, Tomoyuki Higuchi

Research output: Contribution to journal › Article › peer-review


We consider the problem of generating 3D facial animation of characters. An efficient procedure is realized by using motion capture (MoCap) data, obtained by tracking facial markers on an actor or actress. In some cases of artistic animation, the MoCap actor or actress and the 3D character show different facial expressions. For example, from the original facial MoCap data of speaking, a user may wish to create character facial animation of speaking with a smirk. In this paper, we propose a new easy-to-use system for making character facial animation via MoCap data. Our system is based on interpolation: once the character facial expressions of the starting and ending frames are given, the intermediate frames are generated automatically using information from the MoCap data. The interpolation procedure consists of three stages. First, the time axis of the animation is divided into several intervals by the fused lasso signal approximator. Second, kernel k-means clustering is used to obtain control points. Finally, the interpolation is carried out using the control points. The user can easily create a wide variety of 3D character facial expressions by changing the control points.
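The second stage of the pipeline groups frames into control points with kernel k-means. As a rough illustration of that idea only (not the authors' implementation), the following sketch runs a Lloyd-style kernel k-means on a precomputed RBF kernel matrix; the `rbf_kernel` helper, the `gamma` value, and the farthest-point seeding are assumptions made for this example.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix for the rows of X (assumed kernel)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_kmeans(K, n_clusters, n_iter=100):
    """Lloyd-style kernel k-means on a precomputed kernel matrix K.

    Each point i is assigned by its squared feature-space distance to the
    mean of cluster c:
        K[i,i] - (2/|c|) * sum_{j in c} K[i,j]
               + (1/|c|^2) * sum_{j,l in c} K[j,l]
    """
    n = K.shape[0]
    diag = np.diag(K)
    # Deterministic farthest-point initialization: one seed per cluster.
    seeds = [0]
    for _ in range(1, n_clusters):
        d = np.min([diag - 2.0 * K[:, s] + K[s, s] for s in seeds], axis=0)
        seeds.append(int(np.argmax(d)))
    labels = np.argmin(
        [diag - 2.0 * K[:, s] + K[s, s] for s in seeds], axis=0
    )
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            m = int(mask.sum())
            if m == 0:
                continue  # empty cluster: leave its distances at infinity
            dist[:, c] = (
                diag
                - 2.0 * K[:, mask].sum(axis=1) / m
                + K[np.ix_(mask, mask)].sum() / m ** 2
            )
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # assignments converged
        labels = new_labels
    return labels

# Demo: two well-separated point clouds standing in for frame features.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, size=(20, 2)),
    rng.normal(5.0, 0.3, size=(20, 2)),
])
labels = kernel_kmeans(rbf_kernel(X, gamma=0.5), n_clusters=2)
```

In the paper's setting the clustered items would be MoCap frames rather than synthetic 2D points, and the cluster representatives would serve as the control points driving the interpolation.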

Original language: English
Pages (from-to): 2583-2597
Number of pages: 15
Journal: Journal of Applied Statistics
Issue number: 12
Publication status: Published - Dec 2012
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

