Interacting with Embodied Conversational Agents

Abstract

The objective of developing more human-centred, personalized, and at the same time more engaging speech-based interactive systems leads directly to the metaphor of an embodied conversational agent (ECA) that employs gestures, facial expressions, and speech to communicate with the human user. During the last decade, research groups as well as a number of commercial software developers have begun to deploy embodied conversational characters in the user interface, especially in application areas where a close emulation of multimodal human–human communication is needed. This trend is motivated by several supporting arguments. First, virtual characters allow for communication styles common in human–human dialogue and can thus relieve users unaccustomed to technology of the burden of learning and familiarizing themselves with less natural interaction techniques. Second, personifying the interface can contribute to a feeling of trust in the system by removing anonymity from the interaction. Furthermore, well-designed characters show great potential for making interaction with a computer system more enjoyable.

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Multimedia Concepts and Applications, University of Augsburg, Augsburg, Germany
  2. LINC, IUT de Montreuil, Rue de la Nouvelle France, France
