MIRAGE 2007: Computer Vision/Computer Graphics Collaboration Techniques, pp. 318–329
Generation of Expression Space for Realtime Facial Expression Control of 3D Avatar
Abstract
This paper describes an expression-space generation technique that enables animators to control the expressions of 3-dimensional avatars in real time by selecting a series of expressions from a facial expression space. In this system, approximately 2,400 facial expression frames are used to generate the facial expression space. The state of an expression is represented by a distance matrix that records the distances between facial feature points, and the set of these distance matrices is defined as the facial expression space. This space is not one in which a straight line connects one expression to another as an expression changes; instead, the route from one expression to another is approximated from the captured facial expression data. First, two expressions are assumed to be adjacent when the distance between the distance matrices representing their states falls below a certain threshold. When two arbitrary expression states can be connected by a chain of such adjacent expressions, a route is assumed to exist between them, and the shortest path between two expressions is taken as the path along which one expression moves to the other. Dynamic programming is used to find this shortest path. Because the facial expression space, the set of these distance matrices, is high-dimensional, multidimensional scaling is used to visualize it in 2-dimensional space, and animators control facial expressions in real time by navigating this 2-dimensional visualization. The paper concludes with an experimental evaluation of the system.
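The pipeline the abstract describes, distance matrices as expression states, a neighborhood graph thresholded at some ε, all-pairs shortest paths computed by dynamic programming, and multidimensional scaling down to 2D, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Euclidean frame distance, the threshold value, and all function names are hypothetical.

```python
import numpy as np

def expression_distance(frames):
    """Pairwise Euclidean distances between expression frames.
    Each frame is assumed to be a flattened vector of feature-point distances."""
    diff = frames[:, None, :] - frames[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def build_graph(D, eps):
    """Neighborhood graph: two expressions are adjacent when their
    distance falls below the threshold eps; non-edges are infinite."""
    G = np.where(D < eps, D, np.inf)
    np.fill_diagonal(G, 0.0)
    return G

def floyd_warshall(G):
    """All-pairs shortest paths by dynamic programming (Floyd, 1962):
    relax every pair through each intermediate vertex k in turn."""
    S = G.copy()
    for k in range(len(S)):
        S = np.minimum(S, S[:, k:k + 1] + S[k:k + 1, :])
    return S

def classical_mds(S, dim=2):
    """Classical MDS: embed the graph-distance matrix in `dim` dimensions
    via the double-centered Gram matrix's top eigenvectors."""
    n = len(S)
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (S ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]      # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

For example, frames sampled along a curved path of expressions are connected only to their immediate neighbors, so the shortest-path distance between the endpoints traces the chain and exceeds their direct distance, which is the behavior the abstract relies on when it treats expression change as movement along a route rather than a straight line.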
Keywords
Shortest Path, Facial Expression, Distance Matrix, Facial Animation, Motion Capture Data