Understanding of the Biological Process of Nonverbal Communication: Facial Emotion and Expression Recognition

  • Alberto C. Cruz
  • B. Bhanu
  • N. S. Thakoor
Chapter
Part of the Computational Biology book series (COBO, volume 22)

Abstract

Facial emotion and expression recognition is the study of facial expressions to infer a person's emotional state. A camera captures images or video of a person's face, and algorithms automatically detect facial expressions and infer the underlying emotional state, without the help of a human operator. Interest in this field has grown over the past decade, and a system that accomplishes these tasks in unconstrained settings is now a realizable goal. In this chapter, we discuss the process by which a human expresses an emotion; how it is perceived by the human visual system at a low level; how a human predicts emotion; and publicly available datasets currently used by researchers in the field.
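The pipeline described above (capture a frame, detect the face, extract features, infer an emotion) can be sketched in a few stages. This is a minimal illustrative sketch, not the chapter's actual method: the cropping stand-in for a face detector, the coarse intensity histogram standing in for appearance descriptors, and the nearest-centroid classifier are all simplified placeholders.

```python
def detect_face(frame):
    """Crop a central region as a stand-in for a real face detector
    (e.g. a boosted cascade of simple features)."""
    h, w = len(frame), len(frame[0])
    return [row[w // 4: 3 * w // 4] for row in frame[h // 4: 3 * h // 4]]

def extract_features(face):
    """Summarize the face crop as a 4-bin intensity histogram,
    a crude stand-in for appearance descriptors such as LBP or Gabor."""
    hist = [0] * 4
    for row in face:
        for px in row:
            hist[min(px // 64, 3)] += 1
    total = sum(hist) or 1
    return [count / total for count in hist]  # normalized feature vector

def classify(features, centroids):
    """Predict the label of the nearest emotion centroid in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Toy usage: an 8x8 "frame" of brightness values and two invented centroids.
frame = [[200 if 2 <= r <= 5 and 2 <= c <= 5 else 20 for c in range(8)]
         for r in range(8)]
centroids = {"happy": [0.0, 0.0, 0.0, 1.0], "neutral": [1.0, 0.0, 0.0, 0.0]}
face = detect_face(frame)
print(classify(extract_features(face), centroids))  # -> happy
```

In practice each stage is far more involved (robust detection, alignment, temporal modeling, learned classifiers), but the staged structure is the same.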

Keywords

Feature Vector · Facial Expression · Independent Component Analysis · Human Visual System

Notes

Acknowledgment

This work was supported in part by the National Science Foundation Integrative Graduate Education and Research Traineeship (IGERT) in Video Bioinformatics (DGE-0903667). Alberto Cruz is an IGERT Fellow.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Center for Research in Intelligent Systems, University of California, Riverside, Riverside, USA