Abstract
This paper investigates how contradictory emotional content affects people's ability to identify the emotion expressed on avatar faces as compared to human faces. Participants viewed emotional faces (human or avatar) paired with emotional texts; the face and text could display either the same or different emotions. Participants were asked to identify the emotion both on the face and in the text. Although they identified the emotion on human faces more accurately than on avatar faces, this difference was largely attributable to the neutral avatar face. Participants were no better at identifying a facial expression when the emotional information from the two sources matched than when it conflicted, regardless of whether the expression appeared on a human face or an avatar face. Finally, participants were more sensitive to context when identifying the emotion in the accompanying text.
Copyright information
© 2009 IFIP International Federation for Information Processing
Noël, S., Dumoulin, S., Lindgaard, G. (2009). Interpreting Human and Avatar Facial Expressions. In: Gross, T., et al. Human-Computer Interaction – INTERACT 2009. INTERACT 2009. Lecture Notes in Computer Science, vol 5726. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03655-2_11
DOI: https://doi.org/10.1007/978-3-642-03655-2_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-03654-5
Online ISBN: 978-3-642-03655-2