International Journal of Social Robotics, Volume 4, Issue 3, pp 249–262

Facial Communicative Signals

Valence Recognition in Task-Oriented Human-Robot Interaction
  • Christian Lang
  • Sven Wachsmuth
  • Marc Hanheide
  • Heiko Wersing


This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we argue for scenario-specific investigations due to the complex nature of these signals, and present an object-teaching scenario in which subjects teach the names of objects to a robot, which should subsequently name these objects correctly. The robot's verbal answers are intended to elicit facial communicative signals from its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback, as well as the performance of two automatic recognition approaches. The first is a static approach that yields baseline results, whereas the second considers temporal dynamics and achieves classification rates comparable to human performance.
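The abstract contrasts a static baseline with an approach that exploits temporal dynamics. The paper's actual pipeline (active-appearance-model features and its specific classifiers) is not reproduced here; purely as an illustration of the static-versus-temporal distinction, a minimal sketch using nearest-neighbour classification, with dynamic time warping for the temporal variant, might look as follows. All function names and the toy feature sequences are hypothetical, not taken from the paper:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences of
    shape [T, D]; classic dynamic-programming recursion."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_static(train_seqs, train_labels, seq):
    """Static baseline: discard temporal order by averaging features over
    time, then assign the label of the nearest time-averaged example."""
    dists = [np.linalg.norm(s.mean(axis=0) - seq.mean(axis=0))
             for s in train_seqs]
    return train_labels[int(np.argmin(dists))]

def classify_temporal(train_seqs, train_labels, seq):
    """Temporal variant: 1-nearest-neighbour under the DTW distance,
    which compares the whole trajectories rather than their means."""
    dists = [dtw_distance(s, seq) for s in train_seqs]
    return train_labels[int(np.argmin(dists))]
```

The sketch makes the distinction concrete: two classes whose feature trajectories have identical time averages but opposite dynamics (e.g. a rising versus a falling intensity ramp) are indistinguishable to the static classifier yet easily separated by the temporal one.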


Keywords: Facial communicative signals · Valence recognition · Head gestures · Eye gaze · Facial expressions · Object teaching · Active appearance models



Christian Lang gratefully acknowledges the financial support from Honda Research Institute Europe for the project “Facial Expressions in Communication”. The authors thank the anonymous reviewers for their helpful comments on an earlier draft of this paper.



Copyright information

© Springer Science & Business Media BV 2012

Authors and Affiliations

  • Christian Lang (1)
  • Sven Wachsmuth (2)
  • Marc Hanheide (3)
  • Heiko Wersing (4)
  1. Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
  2. Applied Informatics, Bielefeld University, Bielefeld, Germany
  3. School of Computer Science, University of Lincoln, Lincoln, UK
  4. Honda Research Institute Europe, Offenbach, Germany
