Implementing Expressive Gesture Synthesis for Embodied Conversational Agents

  • Björn Hartmann
  • Maurizio Mancini
  • Catherine Pelachaud
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3881)

Abstract

We aim to create an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we described the gesture selection process. In this paper, we present a computational model of gesture quality: once a gesture has been selected for execution, how can it be modified to convey a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of the psychology literature. We give a detailed description of the implementation of these dimensions in our animation system, including our gesture modeling language. We also demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies that evaluate the appropriateness of our implementation for each dimension of expressivity, as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
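
As a concrete illustration of how expressivity dimensions can act on an already-selected gesture, the minimal Python sketch below rescales a gesture's wrist trajectory and timing according to two such dimensions, spatial extent and temporal extent (wrist position and temporal extent both appear among the keywords below). This is a sketch under stated assumptions, not the paper's implementation: the Keyframe structure, the apply_expressivity function, and the 0.5 gain are all invented here for illustration.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Keyframe:
        time: float                             # seconds from gesture start
        wrist_pos: Tuple[float, float, float]   # wrist position, shoulder-centered

    def apply_expressivity(keyframes: List[Keyframe],
                           spatial_extent: float = 0.0,
                           temporal_extent: float = 0.0) -> List[Keyframe]:
        """Scale a gesture's spatial amplitude and duration.

        Both parameters run from -1 (contracted / hurried) to +1
        (expanded / sustained); 0 leaves the gesture unchanged.
        """
        space_scale = 1.0 + 0.5 * spatial_extent   # 0.5 gain is illustrative only
        time_scale = 1.0 + 0.5 * temporal_extent
        return [
            Keyframe(
                time=kf.time * time_scale,
                wrist_pos=tuple(c * space_scale for c in kf.wrist_pos),
            )
            for kf in keyframes
        ]

    # Example: expand and slow down a simple two-keyframe beat gesture.
    beat = [Keyframe(0.0, (0.1, 0.2, 0.3)), Keyframe(0.4, (0.3, 0.5, 0.3))]
    print(apply_expressivity(beat, spatial_extent=1.0, temporal_extent=0.5))

A full implementation would act on the richer gesture representation the paper describes (movement phases, hand shape, and so on); the sketch isolates only the core idea of post-selection modification that changes a gesture's manner while preserving its form.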

Keywords

Nonverbal Behavior · Computer Animation · Temporal Extent · Hand Shape · Wrist Position

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Björn Hartmann (1)
  • Maurizio Mancini (2)
  • Catherine Pelachaud (2)
  1. Computer Science Department, Stanford University, Stanford, USA
  2. LINC-LIA, University of Paris-8, Montreuil, France