Expression Recognition Driven Virtual Human Animation

  • Junghyun Cho
  • Yu-Jin Hong
  • Sang C. Ahn
  • Ig-Jae Kim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8530)

Abstract

Since character facial expressions are high dimensional, they are difficult to control intuitively through a simple interface. Existing control and animation methods mainly rely on 3D motion capture systems to achieve high-quality animation. However, such systems are not only cumbersome but also quite expensive. In this paper, we therefore present a new control method for 3D facial animation based on expression recognition. We use a single off-the-shelf webcam as a control interface, which can easily be combined with the blendshape technique for 3D animation. We measure the user's emotional state with a robust facial feature tracker and a facial expression classifier, and then transfer the measured facial expression probabilities to the domain of the blendshape basis. Our experiments demonstrate that our method can serve as an efficient interface for virtual human animation.
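The transfer step described in the abstract — mapping classifier probabilities over basic expressions onto blendshape weights — can be sketched as a convex combination of per-expression weight presets, followed by the standard delta blendshape model. The expression set, preset weight values, and function names below are illustrative assumptions, not the authors' actual parameterization:

```python
import numpy as np

# Hypothetical per-expression blendshape weight presets (rows: expressions,
# columns: blendshape basis weights). The values are illustrative only.
EXPRESSIONS = ["neutral", "happy", "sad", "surprise"]
PRESET_WEIGHTS = np.array([
    [0.0, 0.0, 0.0],   # neutral
    [1.0, 0.2, 0.0],   # happy:    mouth-corner-up, slight cheek-raise
    [0.0, 0.0, 0.9],   # sad:      brow-lower
    [0.3, 1.0, 0.1],   # surprise: jaw-open, brow-raise
])

def probs_to_blendshape_weights(probs):
    """Map classifier expression probabilities to blendshape weights
    via a convex combination of the per-expression presets."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()      # ensure a valid probability vector
    return probs @ PRESET_WEIGHTS    # linear transfer to the blendshape domain

def blend_vertices(neutral, deltas, weights):
    """Standard delta blendshape model: v = v0 + sum_k w_k * d_k,
    where deltas has shape (num_blendshapes, num_vertices, 3)."""
    return neutral + np.tensordot(weights, deltas, axes=1)
```

In a full pipeline, the per-frame probabilities would come from the paper's tracker and classifier, and the resulting weight vector would drive the character mesh each frame; temporal smoothing (e.g. Kalman filtering, per reference 5) would typically be applied to avoid jitter.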

Keywords

3D facial animation · control interface · blendshape · facial feature tracking · expression recognition



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Junghyun Cho (1)
  • Yu-Jin Hong (1, 2)
  • Sang C. Ahn (1)
  • Ig-Jae Kim (1, 2)

  1. Imaging Media Research Center, Korea Institute of Science and Technology, Korea
  2. Dept. of HCI & Robotics, Korea University of Science and Technology, Korea
