
Towards Believable Behavior Generation for Embodied Conversational Agents

  • Andrea Corradini
  • Morgan Fredriksson
  • Manish Mehta
  • Jurgen Königsmann
  • Niels Ole Bernsen
  • Lasse Johannesson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3038)

Abstract

This paper reports on the generation of coordinated multimodal output for the NICE (Natural Interactive Communication for Edutainment) system [1]. In its first prototype, the system allows for fun and experientially rich interaction between human users, primarily 10 to 18 years old, and the 3D-embodied fairy tale author H.C. Andersen in his study. User input consists of domain-oriented spoken conversation combined with 2D gestures entered via a mouse-compatible device. The animated character can move about and interact with his environment, and can communicate with the user through spoken conversation and non-verbal gesture, body posture, facial expression, and gaze. The described approach aims to make the virtual agent’s appearance, voice, actions, and communicative behavior convey the impression of a character with human-like behavior, emotions, relevant domain knowledge, and a distinct personality. We propose an approach to multimodal output generation that exploits a richly parameterized semantic instruction from the conversation manager, splitting it into synchronized text instructions for the text-to-speech synthesizer and behavioral instructions for the animated character. Building on the implemented version of this approach, we are creating a behavior sub-system that combines the multimodal output instructions with parameters representing the character’s current emotional state, producing animations that express that state through speech and non-verbal behavior.
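
The output-splitting step described above lends itself to a short illustration. The Python sketch below is a hypothetical rendering of the idea, not the NICE system’s actual interface: the inline [action=...] tag syntax, the BehaviorCue type, and the split_instruction function are all assumed names invented here to show how a single semantic instruction might be divided into plain text for the speech synthesizer and behavior cues for the animation engine, keyed to word positions so that speech and gesture stay synchronized.

```python
# Hypothetical sketch: splitting one tagged semantic instruction into
# TTS text and word-indexed behavior cues (all names are illustrative,
# not the NICE system's actual API).
import re
from dataclasses import dataclass


@dataclass
class BehaviorCue:
    word_index: int  # word position the cue is synchronized with
    action: str      # e.g. "gesture:sweep_arm" or "face:smile"


def split_instruction(semantic_output: str) -> tuple[str, list[BehaviorCue]]:
    """Separate ordinary words (sent to the text-to-speech synthesizer)
    from [action=...] tags (sent to the animation engine), recording the
    word index at which each action should fire."""
    words: list[str] = []
    cues: list[BehaviorCue] = []
    # A token is either an [action=...] tag or a whitespace-delimited word.
    for token in re.findall(r"\[action=[^\]]+\]|\S+", semantic_output):
        if token.startswith("[action="):
            cues.append(BehaviorCue(word_index=len(words), action=token[8:-1]))
        else:
            words.append(token)
    return " ".join(words), cues


if __name__ == "__main__":
    instruction = ("Welcome to my study! [action=gesture:sweep_arm] "
                   "Please look around as much as you like.")
    text, cues = split_instruction(instruction)
    print(text)  # plain sentence for the TTS engine
    print(cues)  # [BehaviorCue(word_index=4, action='gesture:sweep_arm')]
```

Anchoring cues to word indices is only one simple synchronization scheme; the system described in the paper operates on a richer parameterized semantic instruction.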

Keywords

Facial Expression · Conversational Agent · Embodied Conversational Agent · Animated Character · Current Emotional State
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1.
  2. Reeves, B., Nass, C.: The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. Cambridge University Press, Cambridge (1996)
  3. Cassell, J., Sullivan, J., Prevost, S., Churchill, E. (eds.): Embodied Conversational Agents. MIT Press, Cambridge (2000)
  4. Bernsen, N.O., Dybkjær, H., Dybkjær, L.: Designing Interactive Speech Systems: From First Ideas to User Testing. Springer, London (1998)
  5. Beskow, J., Edlund, J., Nordstrand, M.: A model for generalised multi-modal conversation system output applied to an animated talking head. In: Minker, W., et al. (eds.) Spoken Multimodal Human-Computer Conversation in Mobile Environments. Kluwer Academic, Dordrecht (2004)
  6. Argyle, M.: Bodily Communication, vol. 2. Methuen & Co., London (1986)
  7. Knapp, M.L.: Non-verbal Communication in Human Interaction, 2nd edn. Holt, Rinehart and Winston, New York (1978)
  8. Loyall, A.B.: Believable Agents: Building Interactive Personalities. PhD thesis, Technical Report CMU-CS-97-123, Carnegie Mellon University (1997)
  9. Picard, R.: Affective Computing. MIT Press, Cambridge (1997)
  10. Fiske, S.T., Taylor, S.E.: Social Cognition. McGraw-Hill, New York (1991)
  11. Nass, C., Isbister, K., Lee, E.-J.: Truth is beauty: Researching embodied conversational agents. In: Cassell, J., et al. (eds.) Embodied Conversational Agents, pp. 374–402. MIT Press, Cambridge (2000)
  12. Bernsen, N.O., Charfuelán, M., Corradini, A., et al.: First Prototype of Conversational H.C. Andersen. In: Proc. of the ACM International Working Conference on Advanced Visual Interfaces (2004)
  13. Ekman, P., Friesen, W.V.: Nonverbal leakage and clues to deception. Psychiatry 32, 88–95 (1969)
  14. Koda, T., Maes, P.: Agents with faces: The effects of personification of agents. In: Proceedings of Human-Computer Interaction, London, UK, pp. 239–245 (1996)
  15.
  16. Massaro, D.W., Cohen, M.: Speech perception in perceivers with hearing loss: Synergy of multiple modalities. Journal of Speech, Language, and Hearing Research 42, 21–41 (1999)
  17. McGurk, H., MacDonald, J.: Hearing lips and seeing voices. Nature 264, 746–748 (1976)
  18. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjálmsson, H., Yan, H.: Embodiment in conversational interfaces: Rea. In: Proc. of CHI 1999, pp. 520–527 (1999)
  19. Massaro, D.W., Bosseler, A., Light, J.: Development and Evaluation of a Computer-Animated Tutor for Language and Vocabulary Learning. In: 15th International Congress of Phonetic Sciences, Barcelona, Spain (2003)
  20. Pelachaud, C., Carofiglio, V., De Carolis, B., de Rosis, F., Poggi, I.: Embodied Contextual Agent in Information Delivering Application. In: First International Joint Conference on Autonomous Agents & Multi-Agent Systems, Bologna, Italy (2002)
  21.
  22.

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Andrea Corradini (1)
  • Morgan Fredriksson (2)
  • Manish Mehta (1)
  • Jurgen Königsmann (2)
  • Niels Ole Bernsen (1)
  • Lasse Johannesson (2)
  1. Natural Interactive Systems Laboratory, University of Southern Denmark, Odense M, Denmark
  2. Liquid Media AB, Stockholm, Sweden
