Gesture Synthesis in a Real-World ECA
We address the issue of spontaneous gesture synthesis for embodied conversational agents (ECAs), that is, the generation of appropriate gestures and their coordination with spoken utterances. After characterizing the application constraints, we establish the principal requirements of the gesture generation framework. We then demonstrate how these requirements can be met by formulating gesture generation as a real-time search through gesture space (more precisely, the combined space of gesture and facial expression) under the constraints arising from the graphical model of the character and the linguistic properties of the utterance.
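To make the search formulation concrete, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical gesture lexicon in which each entry carries semantic features and joint requirements, treats the character's graphical model as a hard constraint, and scores candidates by overlap with the utterance's linguistic features.

```python
# Illustrative sketch of gesture selection as constrained search.
# The Gesture/Character structures and feature names are assumptions
# made for illustration, not the paper's actual data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Gesture:
    name: str
    semantic_features: set   # concepts the gesture can depict
    required_joints: set     # joints the character model must expose

@dataclass
class Character:
    available_joints: set    # what the graphical model can animate

def select_gesture(utterance_features: set,
                   character: Character,
                   lexicon: list) -> Optional[Gesture]:
    """Search the gesture space for the candidate that best matches the
    utterance's linguistic features while satisfying the character's
    graphical constraints."""
    best, best_score = None, 0
    for g in lexicon:
        # Hard constraint: the character model must support the gesture.
        if not g.required_joints <= character.available_joints:
            continue
        # Soft constraint: overlap with the utterance's semantic features.
        score = len(g.semantic_features & utterance_features)
        if score > best_score:
            best, best_score = g, score
    return best

# Toy usage: pick a gesture for an utterance about a rightward direction.
lexicon = [
    Gesture("point_right", {"direction", "right"}, {"right_arm"}),
    Gesture("open_palms", {"offer", "presentation"}, {"left_arm", "right_arm"}),
]
agent = Character(available_joints={"right_arm", "left_arm", "head"})
print(select_gesture({"direction", "right", "object"}, agent, lexicon))
```

A real-time system would additionally rank candidates under timing constraints so the gesture stroke aligns with the co-expressive word, which a scoring function like the one above can accommodate as a further soft constraint.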