Live facial expression generation based on mixed reality
Virtual reality technology provides a new methodology for visualization with realistic sensation, and has attracted special interest in the human-interface and visual-communication communities. The key issue is how to represent and reconstruct humans naturally and realistically. Accordingly, the study of facial expression has recently received growing attention and been intensively investigated. In this paper, we propose a hybrid approach to live facial expression generation based on mixed reality. We first propose a novel approach to mixed reality, which we call Augmented Virtuality: it enhances and augments the reality of the complex and delicate live motions of an object in virtual space by projecting portions of live video images, which observe the deformation and motion of the object, onto the surface of its static CG model. We also propose a new method of adapting color properties for smooth merging of real and virtual spaces, and a new method of extracting the region effective for merging, based on optical-flow analysis of both range and color images in which shape and texture changes are observed in the virtual space. We apply this technique to the real-time generation of realistic eye expression. We then propose the homotopy sweep method for surface deformation using 3D control vectors, and apply this technique to the animation of mouth/lip expression. Our approach has the advantages of describing the geometric shape and deformation of the circular muscle simply, and of reconstructing realistic deformation efficiently. Experimental results demonstrate the effectiveness of the proposed hybrid approach in representing and visualizing live facial expression in real time.
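The extraction of the "effective region" from flow analysis of range and color images can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the per-pixel normal-flow estimate, the function names, and the threshold are all illustrative assumptions; the idea shown is simply that pixels with significant motion in either the color or the range image are marked as the region onto which live video should be projected.

```python
import numpy as np

def flow_magnitude(prev, curr, eps=1e-6):
    """Crude per-pixel motion magnitude from the brightness-constancy
    relation: |I_t| / |grad I| (a rough normal-flow estimate)."""
    Ix = np.gradient(curr, axis=1)          # horizontal intensity gradient
    Iy = np.gradient(curr, axis=0)          # vertical intensity gradient
    It = curr - prev                        # temporal difference
    grad = np.sqrt(Ix**2 + Iy**2) + eps     # avoid division by zero
    return np.abs(It) / grad

def effective_region(prev_color, curr_color, prev_range, curr_range,
                     thresh=0.5):
    """Mark pixels where either the color or the range (depth) image
    shows significant motion -- a stand-in for the merging region."""
    m_color = flow_magnitude(prev_color, curr_color)
    m_range = flow_magnitude(prev_range, curr_range)
    return (m_color > thresh) | (m_range > thresh)
```

Combining the two modalities is the point: texture change alone (color flow) can miss depth-only deformation, and vice versa, so the union of both masks is taken.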
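The homotopy sweep idea can likewise be sketched in a few lines. The sketch below assumes the simplest case, a straight-line (linear) homotopy blending two cross-section curves (e.g., a closed versus an open lip contour) while the section is translated along a 3D path; the paper's control-vector formulation is richer, and all names here are illustrative.

```python
import numpy as np

def homotopy_sweep(curve0, curve1, trajectory, n_steps=10):
    """Sweep a deforming cross-section along a 3D trajectory.

    curve0, curve1 : (N, 3) arrays, the start/end cross-section shapes
                     (e.g., closed vs. open lip contours).
    trajectory     : callable t -> (3,) offset of the section at t in [0, 1].

    Returns an (n_steps, N, 3) array: at each step the section is the
    linear homotopy H(t) = (1 - t)*curve0 + t*curve1, placed on the path.
    """
    frames = []
    for t in np.linspace(0.0, 1.0, n_steps):
        section = (1.0 - t) * curve0 + t * curve1  # straight-line homotopy
        frames.append(section + trajectory(t))     # position along the sweep
    return np.stack(frames)
```

Because the deformation is described by the homotopy between just two boundary curves plus a path, a circular muscle such as the lips can be animated without re-modeling every intermediate shape.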