Creativity Meets Automation: Combining Nonverbal Action Authoring with Rules and Machine Learning

  • Michael Kipp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4133)


Providing virtual characters with natural gestures is a complex task. Even if the range of gestures is limited, deciding when to play which gesture can be considered both an engineering and an artistic task. We want to strike a balance by presenting a system in which gesture selection and timing can be human-authored in a script, leaving full artistic freedom to the author. However, to make authoring faster we offer a rule system that generates gestures on the basis of human-authored rules. To push automation further, we show how machine learning can be used to suggest additional rules based on previously annotated scripts. Our system thus offers different degrees of automation to the author, allowing creativity and automation to join forces.
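
As a minimal, purely illustrative sketch (not the paper's actual implementation), the Python snippet below shows how the two automated layers described above might look: author-written, keyword-triggered gesture rules applied to a script line, plus a naive rule-suggestion step that mines word–gesture co-occurrences from annotated scripts. All names, rules, and thresholds here (GestureRule, apply_rules, suggest_rules, the example triggers) are hypothetical.

```python
import re
from collections import Counter
from dataclasses import dataclass


@dataclass
class GestureRule:
    trigger: str   # regular expression matched against the scripted utterance (assumption)
    gesture: str   # name of the gesture animation to insert


# Hypothetical, author-written rules.
RULES = [
    GestureRule(trigger=r"\byou\b", gesture="point_at_listener"),
    GestureRule(trigger=r"\b(big|huge)\b", gesture="wide_open_hands"),
]


def apply_rules(utterance, rules=RULES):
    """Return (gesture, trigger word, character position) tuples for one script line."""
    hits = []
    for rule in rules:
        for m in re.finditer(rule.trigger, utterance, flags=re.IGNORECASE):
            hits.append((rule.gesture, m.group(), m.start()))
    return hits


def suggest_rules(annotated_lines, min_count=2):
    """Propose a keyword rule whenever a word co-occurs with the same gesture
    at least min_count times in previously annotated script lines."""
    counts = Counter()
    for words, gesture in annotated_lines:          # e.g. (["you", "rock"], "point_at_listener")
        for word in words:
            counts[(word.lower(), gesture)] += 1
    return [GestureRule(trigger=rf"\b{word}\b", gesture=gesture)
            for (word, gesture), count in counts.items() if count >= min_count]


if __name__ == "__main__":
    print(apply_rules("You will see a huge improvement."))
    # [('point_at_listener', 'You', 0), ('wide_open_hands', 'huge', 15)]
```

In the actual system, both the rules and the learning component presumably operate on richer features than single keywords (e.g., timing and dialogue context); the sketch only mirrors the general idea of rules that can be hand-authored and also proposed automatically.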


Keywords: Facial Expression, Multiagent System, Virtual Character, Conversational Agent, Iconic Gesture





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Michael Kipp
    1. DFKI, Saarbrücken, Germany
