Visual Attention and Eye Gaze During Multiparty Conversations with Distractions

  • Erdan Gu
  • Norman I. Badler
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4133)


Our objective is to develop a computational model that predicts visual attention behavior for an embodied conversational agent. During interpersonal interaction, gaze provides feedback signals and directs conversation flow. At the same time, in a dynamic environment, gaze also directs attention to peripheral movements. An embodied conversational agent should therefore not only employ social gaze for interpersonal interaction but also possess human attention attributes, so that its eyes and facial expression convey appropriate distraction and engagement behaviors.
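The abstract describes the model only in prose. As a rough illustration of the stated idea of combining top-down conversational gaze with bottom-up distraction by peripheral movement, the following Python sketch shows one way such a selection could be wired up; the names, threshold, and engagement gating are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class GazeTarget:
    name: str
    priority: float        # top-down, conversation-driven weight (0..1)
    saliency: float = 0.0  # bottom-up, e.g. peripheral motion energy (0..1)

def choose_gaze(conversation_target: GazeTarget,
                peripheral_events: list[GazeTarget],
                engagement: float,
                distraction_threshold: float = 0.5) -> GazeTarget:
    """Pick the next gaze target (hypothetical sketch).

    The agent normally attends to its conversational partner (top-down),
    but a sufficiently salient peripheral event can capture attention
    (bottom-up), gated by how engaged the agent currently is.
    """
    if peripheral_events:
        distractor = max(peripheral_events, key=lambda t: t.saliency)
        # Higher engagement raises the bar a distractor must clear.
        if distractor.saliency * (1.0 - engagement) > distraction_threshold:
            return distractor
    return conversation_target

# Example: a highly engaged listener ignores a mild peripheral movement,
# while a weakly engaged one is distracted by it.
speaker = GazeTarget("speaker", priority=0.9)
door = GazeTarget("door opening", priority=0.1, saliency=0.6)
print(choose_gaze(speaker, [door], engagement=0.8).name)  # speaker
print(choose_gaze(speaker, [door], engagement=0.1).name)  # door opening
```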


Keywords: Visual Attention · Smooth Pursuit · Mental Workload · Inattentional Blindness · Conversational Agent





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Erdan Gu (1)
  • Norman I. Badler (1)
  1. Department of Computer and Information Science, University of Pennsylvania, Philadelphia
