Facial Expression Recognition

A chapter in the Handbook of Face Recognition

Abstract

This chapter introduces recent advances in facial expression analysis and recognition. The first part discusses the general structure of automatic facial expression analysis (AFEA) systems. The second part describes the problem space for facial expression analysis. This space includes multiple dimensions: level of description, individual differences in subjects, transitions among expressions, intensity of facial expression, deliberate versus spontaneous expression, head orientation and scene complexity, image acquisition and resolution, reliability of ground truth, databases, and the relation to other facial or nonfacial behaviors. We note that most work to date has been confined to a relatively restricted region of this space. The last part of the chapter describes more specific approaches and the techniques used in recent advances, including techniques for face acquisition, facial data extraction and representation, facial expression recognition, and multimodal expression analysis. The chapter concludes with a discussion assessing the current status, future possibilities, and open questions in automatic facial expression analysis.
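To make the pipeline structure concrete, here is a minimal sketch of a three-stage AFEA system as described above: face acquisition, facial data extraction and representation, and facial expression recognition. The Haar-cascade detector, intensity-histogram feature, and generic classifier interface are illustrative assumptions only, not the specific techniques surveyed in this chapter (which include Gabor wavelets, optical flow, geometric features, and active appearance models, among others).

```python
# Minimal AFEA pipeline sketch: acquisition -> representation -> recognition.
# The detector, feature, and classifier here are illustrative placeholders.
import cv2


def acquire_face(frame_bgr):
    """Stage 1: detect the face in a frame and return a normalized gray patch."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return cv2.resize(gray[y:y + h, x:x + w], (96, 96))


def extract_features(face_patch):
    """Stage 2: represent the face patch as a feature vector; a normalized
    intensity histogram stands in for Gabor, LBP, or geometric features."""
    hist = cv2.calcHist([face_patch], [0], None, [64], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-8)


def recognize_expression(features, classifier):
    """Stage 3: map the feature vector to an expression label using any trained
    classifier that exposes a scikit-learn-style predict()."""
    return classifier.predict(features.reshape(1, -1))[0]
```

In a full system the classifier would be trained on labeled expression data (posed or spontaneous), and the representation stage would be replaced by the appearance- or geometry-based features discussed later in the chapter.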

Acknowledgements

We sincerely thank Zhen Wen and Hatice Gunes for providing pictures and for their permission to use them in this chapter.

Author information

Correspondence to Yingli Tian.

Copyright information

© 2011 Springer-Verlag London Limited

Cite this chapter

Tian, Y., Kanade, T., Cohn, J.F. (2011). Facial Expression Recognition. In: Li, S., Jain, A. (eds) Handbook of Face Recognition. Springer, London. https://doi.org/10.1007/978-0-85729-932-1_19

  • DOI: https://doi.org/10.1007/978-0-85729-932-1_19

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-931-4

  • Online ISBN: 978-0-85729-932-1
