Modelling Gaze Behavior for Conversational Agents

  • Catherine Pelachaud
  • Massimo Bilvi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2792)

Abstract

In this paper we propose an eye gaze model for an embodied conversational agent that embeds information on communicative functions as well as statistical information on gaze patterns. The latter has been derived from analytic studies of an annotated video corpus of conversational dyads. We aim at generating different gaze behaviors to simulate various personalized gaze habits of an embodied conversational agent.
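The combination described above — communicative functions driving gaze, modulated by corpus-derived statistics — could be sketched as a simple probabilistic selector. The sketch below is purely illustrative: the communicative-function labels and all probability values are hypothetical assumptions, not figures from the paper's annotated corpus.

```python
import random

# Hypothetical probability table: for each communicative function, the
# probability that the agent's gaze is directed AT the interlocutor.
# Values are illustrative placeholders, not corpus statistics.
GAZE_AT_PROB = {
    "emphasis": 0.8,      # emphasizing a point often co-occurs with direct gaze
    "turn-giving": 0.9,   # speakers tend to look at listeners when yielding the turn
    "turn-keeping": 0.3,  # gaze aversion helps a speaker hold the floor
    "neutral": 0.5,
}

def choose_gaze(function: str, rng: random.Random) -> str:
    """Sample a gaze direction ('at' or 'away') for one communicative act."""
    p = GAZE_AT_PROB.get(function, GAZE_AT_PROB["neutral"])
    return "at" if rng.random() < p else "away"

def gaze_sequence(functions, seed=0):
    """Generate a gaze pattern for a sequence of communicative functions.

    A fixed seed makes one agent's 'gaze habit' reproducible; varying the
    table per agent would yield the personalized habits the abstract mentions.
    """
    rng = random.Random(seed)
    return [choose_gaze(f, rng) for f in functions]
```

For example, `gaze_sequence(["emphasis", "turn-keeping", "turn-giving"])` yields a three-element list of `"at"`/`"away"` decisions; swapping in a different probability table (or seed) per agent produces a different, but internally consistent, gaze habit.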



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Catherine Pelachaud 1
  • Massimo Bilvi 2
  1. IUT of Montreuil, University of Paris 8, LINC – Paragraphe
  2. Department of Computer and Systems Science, University of Rome “La Sapienza”
