
Computational Model of Believable Conversational Agents

  • Chapter
Communication in Multiagent Systems

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2650)

Abstract

In this chapter we present the issues and problems involved in the creation of Embodied Conversational Agents (ECAs). These agents may have a humanoid appearance and may be embedded in a user interface with the capacity to interact with the user; that is, they are able to perceive and understand what the user is saying, and also to respond to the user both verbally and nonverbally. ECAs are expected to interact with users as in human-human conversation. They should smile, raise their brows, nod, and even gesticulate, not in a random manner but in co-occurrence with their speech. Results from research on human-human communication are applied to human-ECA and ECA-ECA communication. The creation of such agents requires several steps, ranging from the creation of the geometry of the body and facial models, to the modeling of their mind, emotion, and personality, to the computation of the facial expressions, body gestures, and gaze that accompany their speech. In this chapter we present our work toward the computation of nonverbal behaviors accompanying speech.
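
To make the notion of co-occurrence concrete, the sketch below shows, in Python, one simple way nonverbal signals could be scheduled against the speech segments that carry a given communicative function, rather than being emitted at random. It is only an illustration under assumed names: the data structures (SpeechSegment, BehaviorEvent) and the function-to-signal table are hypothetical simplifications, not the model described in this chapter.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical structures: a speech segment tagged with the communicative
# function it carries, and a nonverbal signal scheduled to co-occur with it.

@dataclass
class SpeechSegment:
    text: str
    start: float        # seconds from utterance onset
    end: float
    function: str       # e.g. "emphasis", "affirmation", "turn_giving"

@dataclass
class BehaviorEvent:
    signal: str         # e.g. "raise_brows", "head_nod", "gaze_at_user"
    start: float
    end: float

# Illustrative mapping only; in a real ECA the choice of signal depends on
# the agent's meaning, emotion, personality, and conversational context.
FUNCTION_TO_SIGNAL = {
    "emphasis": "raise_brows",
    "affirmation": "head_nod",
    "turn_giving": "gaze_at_user",
}

def schedule_behaviors(segments: List[SpeechSegment]) -> List[BehaviorEvent]:
    """Align each nonverbal signal with the speech segment that carries the
    corresponding communicative function (co-occurrence, not random timing)."""
    events: List[BehaviorEvent] = []
    for seg in segments:
        signal: Optional[str] = FUNCTION_TO_SIGNAL.get(seg.function)
        if signal is not None:
            events.append(BehaviorEvent(signal, seg.start, seg.end))
    return events

if __name__ == "__main__":
    utterance = [
        SpeechSegment("I really", 0.0, 0.6, "emphasis"),
        SpeechSegment("agree with you.", 0.6, 1.4, "affirmation"),
        SpeechSegment("What do you think?", 1.4, 2.5, "turn_giving"),
    ]
    for ev in schedule_behaviors(utterance):
        print(f"{ev.signal}: {ev.start:.1f}s - {ev.end:.1f}s")

Running the example prints one behavior per segment, each spanning exactly the interval of the words it accompanies; the chapter's actual model derives such behaviors from a far richer representation of the agent's communicative intentions.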




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Pelachaud, C., Bilvi, M. (2003). Computational Model of Believable Conversational Agents. In: Huget, M.-P. (ed.) Communication in Multiagent Systems. Lecture Notes in Computer Science (LNAI), vol. 2650. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-44972-0_17

  • DOI: https://doi.org/10.1007/978-3-540-44972-0_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40385-2

  • Online ISBN: 978-3-540-44972-0
