Facial Movement Based Recognition

  • Alexander Davies
  • Carl Henrik Ek
  • Colin Dalton
  • Neill Campbell
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6930)

Abstract

Modelling and understanding the facial dynamics of individuals is crucial to achieving more realistic facial animation. We address the recognition of individuals by modelling the facial motions of several subjects. Modelling facial motion presents numerous challenges, including the accurate and robust tracking of facial movement, the high dimensionality of the data and the non-linear spatio-temporal structure of the motion. We present a novel framework which addresses these problems through the use of video-specific Active Appearance Models (AAMs) and Gaussian Process Latent Variable Models (GP-LVMs). Our experiments demonstrate, both qualitatively and quantitatively, the framework's ability to differentiate individuals by temporally modelling appearance-invariant facial motion, supporting the proposition that a facial activity model may assist in motion retargeting, motion synthesis and experimental psychology.
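To make the pipeline described above concrete, the following is a minimal, hypothetical sketch in Python: per-frame AAM parameter vectors are embedded in a shared low-dimensional space, and a test sequence is assigned the identity of the nearest training trajectory. To keep the example self-contained, PCA stands in for the non-linear GP-LVM and dynamic time warping stands in for the paper's temporal model; all function and variable names are illustrative and not taken from the paper.

```python
# Hypothetical sketch of the recognition pipeline: embed per-frame AAM
# parameters in a low-dimensional space, then compare latent trajectories.
# PCA is a linear stand-in for the GP-LVM used in the paper.
import numpy as np
from sklearn.decomposition import PCA

def embed_sequences(train_seqs, test_seq, latent_dim=2):
    """Fit a shared embedding on all training frames and project every sequence."""
    pca = PCA(n_components=latent_dim)
    pca.fit(np.vstack(train_seqs))               # stacked frames x AAM-parameter matrix
    return [pca.transform(s) for s in train_seqs], pca.transform(test_seq)

def dtw_distance(a, b):
    """Dynamic time warping between two latent trajectories (frames x dims)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognise(train_seqs, train_ids, test_seq):
    """Return the identity of the training trajectory closest to the test sequence."""
    latents, test_latent = embed_sequences(train_seqs, test_seq)
    dists = [dtw_distance(z, test_latent) for z in latents]
    return train_ids[int(np.argmin(dists))]

# Toy usage: three synthetic "AAM parameter" sequences from two subjects.
rng = np.random.default_rng(0)
train = [rng.normal(s, 1.0, size=(40, 20)) for s in (0.0, 0.0, 3.0)]
ids = ["subject_A", "subject_A", "subject_B"]
test = rng.normal(3.0, 1.0, size=(35, 20))
print(recognise(train, ids, test))               # expected: subject_B
```

In the actual framework the latent space would be learned with a GP-LVM rather than PCA, and the temporal comparison would use the paper's motion model rather than raw trajectory alignment.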

Keywords

Facial Expression · Hidden Markov Model · Facial Expression Recognition · Latent Variable Model · Facial Motion
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Alexander Davies (1)
  • Carl Henrik Ek (2)
  • Colin Dalton (1)
  • Neill Campbell (1)
  1. University of Bristol, UK
  2. Royal Institute of Technology, Sweden
