Nonverbal Behavior Generator for Embodied Conversational Agents

  • Jina Lee
  • Stacy Marsella
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4133)


Believable nonverbal behaviors for embodied conversational agents (ECA) can create a more immersive experience for users and improve the effectiveness of communication. This paper describes a nonverbal behavior generator that analyzes the syntactic and semantic structure of the surface text as well as the affective state of the ECA and annotates the surface text with appropriate nonverbal behaviors. A number of video clips of people conversing were analyzed to extract the nonverbal behavior generation rules. The system works in real-time and is user-extensible so that users can easily modify or extend the current behavior generation rules.
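The paper describes a generator that scans the surface text for syntactic and semantic cues and annotates it with nonverbal behaviors via hand-extracted rules. A minimal sketch of that general idea is below; the rule names, trigger words, and markup tags are illustrative assumptions, not the authors' actual rule set or annotation format.

```python
# Hypothetical sketch of a rule-based nonverbal behavior annotator in the
# spirit of the paper: lexical cues in the surface text trigger behavior
# annotations. All rules and tag names here are invented for illustration.
import re

# Each rule: (rule name, word pattern, behavior tag). The paper derives its
# rules from analyzing video clips of conversations; these are stand-ins.
RULES = [
    ("affirmation", r"\b(yes|yeah|ok|sure)\b", "head_nod"),
    ("negation", r"\b(no|not|never|nothing)\b", "head_shake"),
    ("intensification", r"\b(very|really|quite)\b", "brow_raise"),
]

def annotate(text: str) -> str:
    """Wrap each word that triggers a rule in its behavior tag."""
    for _name, pattern, behavior in RULES:
        # \g<0> re-inserts the matched word between the opening/closing tags.
        text = re.sub(pattern, rf"<{behavior}>\g<0></{behavior}>",
                      text, flags=re.IGNORECASE)
    return text

print(annotate("No, I am really not sure."))
```

Because the rule table is plain data, adding or modifying a rule is a one-line change, which loosely mirrors the user-extensibility the paper claims for its behavior generation rules.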


Keywords: Surface Text, Video Clip, Nonverbal Behavior, Communicative Intent, Natural Language Generation
(These keywords were added by machine and not by the authors.)




  1. Knapp, M., Hall, J.: Nonverbal Communication in Human Interaction, 4th edn. Harcourt Brace College Publishers (1997)
  2. Fabri, M., Moore, D., Hobbs, D.: Expressive agents: Non-verbal communication in collaborative virtual environments. In: Proceedings of Autonomous Agents and Multi-Agent Systems, Bologna, Italy (2002)
  3. Swartout, W., Hill, R., Gratch, J., Johnson, W., Kyriakakis, C., Labore, K., Lindheim, R., Marsella, S., Miraglia, D., Moore, B., Morie, J., Rickel, J., Thiebaux, M., Tuch, L., Whitney, R.: Toward the holodeck: Integrating graphics, sound, character and story. In: Proceedings of the 5th International Conference on Autonomous Agents, Montreal, Canada (2001)
  4. Durlach, N., Slater, M.: Presence in shared virtual environments and virtual togetherness. In: BT Workshop on Presence in Shared Virtual Environments, Ipswich, UK (1998)
  5. Cassell, J., Vilhjálmsson, H., Chang, K., Bickmore, T., Campbell, L., Yan, H.: Requirements for an architecture for embodied conversational characters. In: Magnenat-Thalmann, N., Thalmann, D. (eds.) Computer Animation and Simulation 1999, pp. 109–120. Springer, Vienna (1999)
  6. Becheiraz, P., Thalmann, D.: A behavioral animation system for autonomous actors personified by emotions. In: Proceedings of the 1st Workshop on Embodied Conversational Characters (WECC), Lake Tahoe, CA, pp. 57–65 (1998)
  7. Striegnitz, K., Tepper, P., Lovett, A., Cassell, J.: Knowledge representation for generating locating gestures in route directions. In: Proceedings of the Workshop on Spatial Language and Dialogue, Delmenhorst, Germany (2005)
  8. Cassell, J., Vilhjálmsson, H., Bickmore, T.: BEAT: The behavior expression animation toolkit. In: Proceedings of ACM SIGGRAPH, pp. 477–486. ACM Press / ACM SIGGRAPH, New York (2001)
  9. Vilhjálmsson, H., Marsella, S.: Social performance framework. In: Workshop on Modular Construction of Human-Like Intelligence at the AAAI 20th National Conference on Artificial Intelligence, Pittsburgh, PA (2005)
  10. Ekman, P.: About brows: emotional and conversational signals. In: von Cranach, M., Foppa, K., Lepenies, W., Ploog, D. (eds.) Human Ethology, pp. 169–248. Cambridge University Press, Cambridge (1979)
  11. Hadar, U., Steiner, T., Grant, E., Clifford Rose, F.: Kinematics of head movement accompanying speech during conversation. Human Movement Science 2, 35–46 (1983)
  12. Heylen, D.: Challenges ahead. In: Proceedings of the AISB Symposium on Social Virtual Agents (in press)
  13. Kendon, A.: Some uses of head shake. Gesture 2, 147–182 (2003)
  14. McClave, E.: Linguistic functions of head movements in the context of speech. Journal of Pragmatics 32, 855–878 (2000)
  15. The HUMAINE Consortium: The HUMAINE portal (2006) (Retrieved April 7, 2006)
  16. The HUMAINE Consortium: Multimodal data in action and interaction: a library of recordings and labelling schemes (2004) (Retrieved April 14, 2006)
  17. Weizenbaum, J.: ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery 9, 36–45 (1966)
  18. n.a.: Behavior markup language (BML) specification (2006) (Retrieved June 6, 2006)
  19. Kopp, S., Krenn, B., Marsella, S., Marshall, A., Pelachaud, C., Pirker, H., Thorisson, K., Vilhjálmsson, H.: Towards a common framework for multimodal generation in embodied conversational agents: a behavior markup language. In: International Conference on Intelligent Virtual Agents, Marina del Rey, CA (submitted, 2006)
  20. DeCarolis, B., Pelachaud, C., Poggi, I., Steedman, M.: APML, a mark-up language for believable behavior generation. In: Prendinger, H., Ishizuka, M. (eds.) Life-like Characters. Tools, Affective Functions and Applications, pp. 65–85. Springer, Heidelberg (2004)
  21. Kopp, S., Wachsmuth, I.: Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds 15(1), 39–52 (2004)
  22. Charniak, E.: A maximum-entropy-inspired parser. In: Proceedings of the North American Chapter of the Association for Computational Linguistics (2000)
  23. Kallmann, M., Marsella, S.C.: Hierarchical motion controllers for real-time autonomous virtual humans. In: Panayiotopoulos, T., Gratch, J., Aylett, R.S., Ballin, D., Olivier, P., Rist, T. (eds.) IVA 2005. LNCS (LNAI), vol. 3661, pp. 253–265. Springer, Heidelberg (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jina Lee (1)
  • Stacy Marsella (1)
  1. Information Sciences Institute, University of Southern California, Marina del Rey
