Implementing Expressive Gesture Synthesis for Embodied Conversational Agents
We aim to create an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we described the gesture selection process. In this paper, we present a computational model of gesture quality: once a gesture has been chosen for execution, how can it be modified to carry a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of the psychology literature. We provide a detailed description of how these dimensions are implemented in our animation system, including our gesture modeling language, and demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies that evaluate the appropriateness of our implementation for each dimension of expressivity, as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
Keywords: Nonverbal Behavior, Computer Animation, Temporal Extent, Hand Shape, Wrist Position
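The abstract's central operation, reshaping an already-selected gesture along a small set of expressivity dimensions without altering its meaning, can be illustrated with a minimal sketch. Everything below is hypothetical: the `Keyframe` and `Expressivity` types, the dimension names `spatial_extent` and `temporal_extent` (the latter echoing the keywords above), and the uniform scaling are stand-ins for the paper's actual gesture modeling language and animation pipeline, not a reproduction of it.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Keyframe:
    time: float   # seconds from gesture start
    wrist: Vec3   # wrist position relative to a rest pose

@dataclass
class Expressivity:
    # Hypothetical dimensions; the paper derives its set from psychology literature.
    spatial_extent: float = 1.0   # >1 widens the gesture, <1 shrinks it
    temporal_extent: float = 1.0  # >1 slows execution, <1 speeds it up

def apply_expressivity(keys: List[Keyframe], e: Expressivity) -> List[Keyframe]:
    """Rescale a gesture's keyframes along two expressivity dimensions
    while leaving the trajectory's overall shape, and thus the gesture's
    semantics, intact."""
    out = []
    for k in keys:
        wrist = tuple(c * e.spatial_extent for c in k.wrist)
        out.append(Keyframe(time=k.time * e.temporal_extent, wrist=wrist))
    return out

if __name__ == "__main__":
    # A toy beat gesture: raise the hand and return to rest.
    beat = [Keyframe(0.0, (0.0, 0.0, 0.0)),
            Keyframe(0.4, (0.1, 0.3, 0.2)),
            Keyframe(0.8, (0.0, 0.0, 0.0))]
    wide_slow = Expressivity(spatial_extent=1.5, temporal_extent=1.25)
    for k in apply_expressivity(beat, wide_slow):
        print(k)
```

Scaling wrist positions about the rest pose widens or narrows the stroke, while scaling keyframe times stretches or compresses its tempo; because both transforms are uniform, the gesture's form, and hence its meaning, is preserved, which is the property the abstract asks for.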