Automatic User-Specific Avatar Parametrisation and Emotion Mapping

  • Stephanie Behrens
  • Ayoub Al-Hamadi
  • Robert Niese
  • Eicke Redweik
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8192)

Abstract

In this paper, an approach for automatic user-specific 3D model generation and expression classification is proposed. User performance-driven avatar animation has recently come into the focus of research due to the growing number of low-cost acquisition devices with integrated depth-map computation. A key challenge is user-specific emotion classification without a complex manual initialisation step: correct classification and identification of emotion intensity are only possible when the expression-specific facial feature displacements, which differ from user to user, are known. Here, facial feature tracking on predefined 3D model expression animations is presented as a solution for automatic emotion classification and intensity calculation. Because of the symmetrical structure of human faces, partial occlusions of a presented emotion do not hamper expression identification with this approach. Thus, a markerless, automatic and easy-to-use performance-driven avatar animation approach is presented.
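The two ideas in the abstract, intensity as the ratio of an observed feature displacement to the user-specific maximum displacement obtained from the predefined avatar expression animation, and filling occluded features via the face's left/right symmetry, can be sketched as follows. This is a minimal illustration, not the authors' implementation; all function names, the normalised face frame with mirror axis x = 0, and the averaging over landmarks are assumptions.

```python
import numpy as np

def expression_intensity(neutral, observed, max_displacement):
    """Estimate expression intensity in [0, 1] as the mean ratio of the
    observed landmark displacement to the user-specific maximum
    displacement (hypothetically learned by tracking the predefined
    avatar expression animation for this user)."""
    disp = np.linalg.norm(observed - neutral, axis=1)
    max_disp = np.linalg.norm(max_displacement - neutral, axis=1)
    ratios = disp / np.maximum(max_disp, 1e-9)  # guard against zero motion
    return float(np.clip(ratios.mean(), 0.0, 1.0))

def mirror_occluded(features, visible, pairs):
    """Fill occluded 2D landmarks using facial symmetry: an occluded
    point takes its visible mirror partner's position reflected about
    the vertical mid-axis (x = 0 in an assumed normalised face frame)."""
    out = features.copy()
    for i, j in pairs:  # (left_idx, right_idx) symmetric landmark pairs
        if not visible[i] and visible[j]:
            out[i] = features[j] * np.array([-1.0, 1.0])
        elif not visible[j] and visible[i]:
            out[j] = features[i] * np.array([-1.0, 1.0])
    return out
```

A landmark observed halfway along its user-specific maximum trajectory thus yields an intensity of 0.5, and an occluded mouth-corner can be recovered from its visible counterpart before classification.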

Keywords

Avatar animation · Face normalisation · Automatic facial feature extraction · Facial expression analysis · Blendshape animation

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Stephanie Behrens (1)
  • Ayoub Al-Hamadi (1)
  • Robert Niese (1)
  • Eicke Redweik (1)
  1. Institute for Information Technology and Communications, Otto von Guericke University, Magdeburg, Germany