Animating a Conversational Agent with User Expressivity

  • M. K. Rajagopal
  • P. Horain
  • C. Pelachaud
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

Our objective is to animate an embodied conversational agent (ECA) with communicative gestures rendered with the expressivity of the real human user it represents. We describe an approach to estimating a subset of the expressivity parameters defined in the literature, namely spatial extent and temporal extent, from captured motion trajectories. We first validate this estimation against synthesized motion and then show results on real human motion. The estimated expressivity is then sent to the animation engine of an ECA, which thus becomes a personalized autonomous representative of that user.
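The abstract states the estimation only at a high level. As a minimal illustrative sketch (not the authors' actual estimator), the two parameters can be read in the spirit of Hartmann et al. [1]: spatial extent as the amplitude of the gesture's sweep through space, and temporal extent as the speed of the stroke, each normalized against hypothetical neutral-gesture reference values (ref_amplitude and ref_peak_speed below are assumed constants, not values from the paper).

    import numpy as np

    def expressivity_from_trajectory(points, timestamps,
                                     ref_amplitude=0.6, ref_peak_speed=1.5):
        points = np.asarray(points, dtype=float)   # (N, 3) wrist positions in metres
        t = np.asarray(timestamps, dtype=float)    # (N,) capture times in seconds

        # Spatial extent: how widely the gesture sweeps through space,
        # taken here as the diagonal of the trajectory's bounding box,
        # mapped into roughly [-1, 1] around the assumed neutral amplitude.
        amplitude = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
        spatial_extent = np.clip(amplitude / ref_amplitude - 1.0, -1.0, 1.0)

        # Temporal extent: how quickly the stroke is executed, taken here
        # as the peak instantaneous speed along the trajectory, normalized
        # the same way against the assumed neutral peak speed.
        velocities = np.diff(points, axis=0) / np.diff(t)[:, None]
        peak_speed = np.linalg.norm(velocities, axis=1).max()
        temporal_extent = np.clip(peak_speed / ref_peak_speed - 1.0, -1.0, 1.0)

        return spatial_extent, temporal_extent

In practice the reference values would have to be calibrated per capture setup, for example on trajectories produced by a monocular motion-capture pipeline such as that of [2].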

References

  1. Hartmann, B., Mancini, M., Pelachaud, C.: Implementing Expressive Gesture Synthesis for Embodied Conversational Agents. In: Gibet, S., Courty, N., Kamp, J.-F. (eds.) GW 2005. LNCS (LNAI), vol. 3881, pp. 188–199. Springer, Heidelberg (2006)
  2. Gómez Jáuregui, D., Horain, P., Rajagopal, M., Karri, S.: Real-Time Particle Filtering with Heuristics for 3D Motion Capture by Monocular Vision. In: Multimedia Signal Processing, Saint-Malo, France, pp. 139–144 (2010)
  3. Craig, J.: Forward Kinematics. In: Introduction to Robotics: Mechanics and Control, 3rd edn. Prentice-Hall, Englewood Cliffs (1986)
Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • M. K. Rajagopal (1)
  • P. Horain (1)
  • C. Pelachaud (2)
  1. Institut Telecom, Telecom SudParis, Évry Cedex, France
  2. CNRS, Telecom ParisTech, Paris, France