
Glances, glares, and glowering: how should a virtual human express emotion through gaze?

Autonomous Agents and Multi-Agent Systems

Abstract

Gaze is an extremely powerful expressive signal, used for many purposes from expressing emotion to regulating human interaction. Gaze has been exploited to strong effect in hand-animated characters, greatly enhancing the believability of the character’s simulated life. However, virtual humans animated in real time have been less successful at using expressive gaze, in part because we lack a model of expressive gaze for virtual humans. A gaze shift towards any specific target can be performed in many different ways, in many different expressive manners, each of which can imply a different emotional or cognitive internal state. However, there is currently no mapping that describes which internal states a user will attribute to a virtual character performing a gaze shift in a particular manner. In this paper, we begin to address this by presenting the results of an empirical study that explores the mapping between gaze and an observer’s attribution of emotional state. The purpose of this mapping is to allow an interactive virtual human to generate believable gaze shifts to which a user will attribute a desired emotional state. We generated a set of animations by composing low-level gaze attributes culled from the nonverbal behavior literature; subjects then judged the animations displaying these attributes. While the results do not provide a complete mapping between gaze and emotion, they do provide a basis for a generative model of expressive gaze.
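To make the idea of a gaze-to-emotion mapping concrete, the sketch below composes a gaze shift from a few low-level attributes and looks up the emotional state an observer might attribute to it. This is a minimal illustration, not the authors’ system: the attribute names (head_pitch, velocity, eyes_lead_head), the ATTRIBUTION table entries, and the attributed_emotion helper are hypothetical placeholders standing in for the empirically derived mapping described in the paper.

```python
# Illustrative sketch only: composing low-level gaze attributes into a
# gaze shift and consulting a mapping from those attributes to the
# emotion an observer is likely to attribute. All names and table
# entries below are invented placeholders, not the paper's data.
from dataclasses import dataclass

@dataclass(frozen=True)
class GazeShift:
    """A gaze shift described by low-level expressive attributes."""
    head_pitch: str       # e.g. "raised", "level", "bowed"
    velocity: str         # e.g. "fast", "neutral", "slow"
    eyes_lead_head: bool  # whether the eyes begin moving before the head

# Hypothetical attribution table. In the paper, such a mapping is
# derived empirically from subject judgments of animated gaze shifts.
ATTRIBUTION = {
    GazeShift("bowed", "slow", True): "sadness",
    GazeShift("raised", "fast", False): "dominance/anger",
}

def attributed_emotion(shift: GazeShift) -> str:
    """Return the emotion an observer would likely attribute, if known."""
    return ATTRIBUTION.get(shift, "no reliable attribution")

if __name__ == "__main__":
    print(attributed_emotion(GazeShift("bowed", "slow", True)))
```

A generative model would invert this lookup: given a desired emotional attribution, select the combination of low-level attributes with which to perform the next gaze shift.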



Author information

Corresponding author

Correspondence to Brent Lance.


About this article

Cite this article

Lance, B., Marsella, S. Glances, glares, and glowering: how should a virtual human express emotion through gaze? Auton Agent Multi-Agent Syst 20, 50–69 (2010). https://doi.org/10.1007/s10458-009-9097-6
