Abstract
In this chapter we present the issues and problems involved in creating Embodied Conversational Agents (ECAs). These agents may have a humanoid appearance and may be embedded in a user interface with the capacity to interact with the user: they are able to perceive and understand what the user is saying, and to respond to the user both verbally and nonverbally. ECAs are expected to interact with users as in human-human conversation. They should smile, raise their brows, nod, and even gesticulate, not at random but in co-occurrence with their speech. Results from research on human-human communication are applied to human-ECA and ECA-ECA communication. Creating such agents requires several steps, ranging from building the geometry of the body and facial models, to modeling their mind, emotion, and personality, to computing the facial expressions, body gestures, and gaze that accompany their speech. In this chapter we present our work toward the computation of nonverbal behaviors accompanying speech.
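The requirement that behaviors co-occur with speech rather than fire at random can be sketched as a mapping from communicative functions carried by words to the nonverbal signals that express them. The sketch below is purely illustrative: the function names, the tag set, and the signal inventory are assumptions for exposition, not the system described in the chapter.

```python
# Illustrative sketch only: a toy mapping from communicative functions
# tagged on words to co-occurring nonverbal signals. The tag set and
# signal names are hypothetical, not the authors' actual model.
FUNCTION_TO_SIGNALS = {
    "emphasis":    ["raise_eyebrows", "head_nod"],
    "affirmation": ["head_nod", "smile"],
    "deixis":      ["gaze_at_referent", "pointing_gesture"],
}

def plan_nonverbal(tagged_utterance):
    """Attach nonverbal signals to the words that carry each function,
    so that behaviors are synchronized with speech, not random."""
    plan = []
    for word, functions in tagged_utterance:
        signals = [s for f in functions
                     for s in FUNCTION_TO_SIGNALS.get(f, [])]
        plan.append((word, signals))
    return plan

# Example: "this is really great", with deixis on "this",
# emphasis on "really", and affirmation on "great".
utterance = [("this", ["deixis"]), ("is", []),
             ("really", ["emphasis"]), ("great", ["affirmation"])]
for word, signals in plan_nonverbal(utterance):
    print(word, signals)
```

In a full system the right-hand side would be timed animation channels (brows, head, gaze, arms) rather than labels, but the principle is the same: nonverbal signals are anchored to the speech units that motivate them.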
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this chapter
Pelachaud, C., Bilvi, M. (2003). Computational Model of Believable Conversational Agents. In: Huget, MP. (eds) Communication in Multiagent Systems. Lecture Notes in Computer Science(), vol 2650. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-44972-0_17
Print ISBN: 978-3-540-40385-2
Online ISBN: 978-3-540-44972-0