
Live facial expression generation based on mixed reality

  • Session F3A: Face and Hand Posture Recognition
  • Conference paper
Computer Vision — ACCV'98 (ACCV 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1351)


Abstract

Virtual reality technology provides a new methodology for visualization with realistic sensation and has attracted special interest from the human interface and visual communication communities. The key issue is how to represent and reconstruct humans naturally and realistically. Accordingly, the study of facial expression has received growing attention and has been intensively investigated. In this paper, we propose a hybrid approach to live facial expression generation based on mixed reality. We first propose a novel approach to mixed reality, which we call Augmented Virtuality: it enhances and augments the realism of the complex and delicate live motions of an object in the virtual space by projecting portions of live video images, which observe the deformation and motion of the object, onto the surface of its static CG model. We also propose a new method of adapting color properties for smooth merging of real and virtual spaces, and a new method of extracting the region effective for merging, based on optical flow analysis of both range and color images in which shape and texture changes are observed in the virtual space. We apply this technique to real-time generation of realistic eye expressions. We then propose the homotopy sweep method for surface deformation using 3D control vectors, and apply it to the animation of mouth/lip expressions. Our approach has the advantages of describing the geometric shape and deformation of the circular muscle simply, and of reconstructing realistic deformation efficiently. Experimental results demonstrate the effectiveness of the proposed hybrid approach in representing and visualizing live facial expressions in real time.
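The color-adaptation step is only named in the abstract, not specified. As a rough illustration of the general idea (an assumption, not the authors' method), the live video patch's color statistics could be matched to those of the rendered CG surface before it is projected onto the model. The sketch below uses a simple mean/standard-deviation transfer in the Lab color space with OpenCV; the function name, the color space, and the statistics-matching scheme are all illustrative choices.

```python
# Minimal sketch (illustrative, not the paper's method): match the per-channel
# mean and standard deviation of the live video patch to those of the rendered
# CG surface in Lab space, so the projected region blends with the virtual model.
import cv2
import numpy as np

def adapt_colors(live_patch_bgr, cg_patch_bgr):
    """Return live_patch_bgr with its color statistics matched to cg_patch_bgr."""
    live = cv2.cvtColor(live_patch_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    cg = cv2.cvtColor(cg_patch_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    for c in range(3):  # L, a, b channels
        live_mean, live_std = live[..., c].mean(), live[..., c].std() + 1e-6
        cg_mean, cg_std = cg[..., c].mean(), cg[..., c].std()
        # Shift and scale the live channel to the CG channel's statistics.
        live[..., c] = (live[..., c] - live_mean) * (cg_std / live_std) + cg_mean

    return cv2.cvtColor(np.clip(live, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```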

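The region effective for merging is extracted by optical flow analysis of both range and color images. The sketch below covers only the color-image part and is not the authors' implementation: pixels whose dense optical flow magnitude exceeds a threshold are marked as the moving region to be textured from live video, while the rest keeps the static CG texture. The Farneback flow parameters, the threshold, and the morphological clean-up are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): flag pixels with significant apparent
# motion between two color frames as the region where live video should be
# projected onto the CG model; static pixels keep the model's own texture.
import cv2
import numpy as np

def moving_region_mask(prev_bgr, next_bgr, mag_thresh=1.0):
    """Return a binary mask of pixels with noticeable optical flow."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

    # Dense Farneback optical flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # Threshold the flow magnitude, then remove speckle noise.
    mask = (magnitude > mag_thresh).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```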
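The homotopy sweep method deforms the mouth/lip surface with 3D control vectors. The paper's integrated formulation is not reproduced here; the sketch below only illustrates the underlying homotopy idea, under the assumption of a linear homotopy H(u, t) = (1 - t) C0(u) + t C1(u) between a neutral lip contour C0 and a deformed contour C1 obtained by adding the control vectors, with t swept from 0 to 1 to produce animation frames.

```python
# Minimal sketch (an assumption about the general idea, not the paper's exact
# homotopy sweep): linearly interpolate between a neutral 3D lip contour and
# the contour displaced by 3D control vectors, sweeping the homotopy parameter.
import numpy as np

def homotopy_sweep(neutral_contour, control_vectors, steps=10):
    """Generate intermediate 3D contours between neutral and deformed shapes.

    neutral_contour : (N, 3) array of points sampled along the lip contour.
    control_vectors : (N, 3) array of 3D displacement (control) vectors.
    Returns a list of (N, 3) arrays, one per animation frame.
    """
    c0 = np.asarray(neutral_contour, dtype=float)
    c1 = c0 + np.asarray(control_vectors, dtype=float)  # deformed contour
    return [(1.0 - t) * c0 + t * c1                      # H(u, t)
            for t in np.linspace(0.0, 1.0, steps)]
```

Each intermediate contour would then drive the lip region of the face model for one frame of the mouth animation.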



Editor information

Roland Chin, Ting-Chuen Pong


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tanaka, H.T., Ishizawa, A., Adachi, H. (1997). Live facial expression generation based on mixed reality. In: Chin, R., Pong, TC. (eds) Computer Vision — ACCV'98. ACCV 1998. Lecture Notes in Computer Science, vol 1351. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63930-6_183


  • DOI: https://doi.org/10.1007/3-540-63930-6_183


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63930-5

  • Online ISBN: 978-3-540-69669-8

  • eBook Packages: Springer Book Archive
