Human emotion and cognition recognition from body language of the head using soft computing techniques

  • Yisu Zhao
  • Xin Wang
  • Miriam Goubran
  • Thomas Whalen
  • Emil M. Petriu
Original Research

Abstract

To make a computer interface more usable, enjoyable, and effective, it should be able to recognize the emotions of its human counterpart. This paper explores new ways to infer the user’s emotions and cognitions from the combination of facial expression (happy, angry, or sad), eye gaze (direct or averted), and head movement (direction and frequency). All of the extracted information is taken as input data, and soft computing techniques are applied to infer emotional and cognitive states. The fuzzy rules were defined based on the opinions of an expert in psychology, a pilot group, and annotators. Although the creation of the fuzzy rules is specific to a given culture, the idea of integrating the different modalities of the body language of the head is generic enough to be used by any particular target user group from any culture. Experimental results show that this method can successfully recognize 10 different emotions and cognitions.
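
As a rough illustration of the fusion step the abstract describes, the sketch below runs Mamdani-style max–min fuzzy inference over the three channels of head body language: a facial-expression label, gaze directness, and head-movement frequency. The membership functions, input scales, rule base, and output states are placeholders invented for this example; they are not the expert-elicited rules or the 10 emotion and cognition categories reported in the paper.

    # Mamdani-style max-min fuzzy inference, sketched in plain Python.
    # All fuzzy sets, rules, and output states below are hypothetical
    # placeholders, not the rule base elicited in the paper.

    def trimf(x, a, b, c):
        """Triangular membership function with feet at a, c and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Illustrative fuzzy sets over inputs normalized to [0, 1].
    gaze_direct   = lambda x: trimf(x, 0.4, 1.0, 1.6)    # 1.0 = fully direct
    gaze_averted  = lambda x: trimf(x, -0.6, 0.0, 0.6)   # 0.0 = fully averted
    head_still    = lambda x: trimf(x, -0.6, 0.0, 0.6)   # low movement frequency
    head_frequent = lambda x: trimf(x, 0.4, 1.0, 1.6)    # high movement frequency

    # Toy rule base: (facial expression, gaze set, head set) -> inferred state.
    RULES = [
        ('angry', gaze_direct,  head_still,    'anger directed at the user'),
        ('angry', gaze_averted, head_frequent, 'agitation'),
        ('sad',   gaze_averted, head_still,    'dejection'),
        ('happy', gaze_direct,  head_frequent, 'engaged agreement'),
    ]

    def infer_state(face_label, gaze, head_freq):
        """Fire all matching rules (min = fuzzy AND, max = aggregation)."""
        scores = {}
        for face, gaze_mf, head_mf, state in RULES:
            if face != face_label:
                continue
            strength = min(gaze_mf(gaze), head_mf(head_freq))
            scores[state] = max(scores.get(state, 0.0), strength)
        return max(scores, key=scores.get) if scores else 'no rule fired'

    print(infer_state('angry', gaze=0.9, head_freq=0.2))  # anger directed at the user

In a full Mamdani system the fired rules would be aggregated and defuzzified into a crisp output; here the highest-scoring state stands in for that step to keep the sketch short.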

Keywords

Emotion recognition · Cognition recognition · Body language of the head · Fuzzy inference systems · Human–computer intelligent interaction

Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  • Yisu Zhao (1)
  • Xin Wang (1)
  • Miriam Goubran (1)
  • Thomas Whalen (1)
  • Emil M. Petriu (1)

  1. University of Ottawa, Ottawa, Canada
