Animating a Conversational Agent with User Expressivity
Our objective is to animate an embodied conversational agent (ECA) with communicative gestures rendered with the expressivity of the real human user it represents. We describe an approach to estimating a subset of the expressivity parameters defined in the literature (namely spatial and temporal extent) from captured motion trajectories. We first validate this estimation on synthesized motion and then show results on real human motion. The estimated expressivity is then sent to the animation engine of an ECA, which becomes a personalized autonomous representative of that user.
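The two expressivity parameters named above can be approximated from a captured trajectory in a straightforward way. The sketch below is an illustration, not the paper's actual estimator: it assumes the input is a sampled 3D wrist trajectory for a single gesture, approximates spatial extent by the diagonal of the trajectory's bounding box and temporal extent by the mean stroke speed, and the function name and signature are hypothetical.

```python
import numpy as np

def estimate_expressivity(trajectory, fps=30.0):
    """Rough estimate of spatial and temporal extent from a wrist trajectory.

    trajectory: (N, 3) array of 3D wrist positions over one gesture.
    fps: capture frame rate in frames per second.
    Returns (spatial_extent, temporal_extent) as raw scalars; mapping them
    onto the normalized expressivity scale used by an ECA animation engine
    would require calibration against reference gestures (an assumption here).
    """
    traj = np.asarray(trajectory, dtype=float)
    # Spatial extent: size of the region swept by the gesture,
    # approximated by the diagonal of the axis-aligned bounding box.
    extents = traj.max(axis=0) - traj.min(axis=0)
    spatial_extent = float(np.linalg.norm(extents))
    # Temporal extent: mean speed of the stroke (path length per second).
    step_lengths = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    duration = (len(traj) - 1) / fps
    temporal_extent = float(step_lengths.sum() / duration) if duration > 0 else 0.0
    return spatial_extent, temporal_extent
```

For example, a one-second straight-line sweep of one unit would yield a spatial extent of 1.0 and a temporal extent of 1.0 unit per second.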