
Surface detail capturing for realistic facial animation

Journal of Computer Science and Technology

Abstract

This paper proposes a facial animation system that simultaneously captures both the geometric information and the illumination changes of surface details, called expression details, from video clips; the captured data can then be applied to a wide range of 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. Because a ratio image is obtained by dividing the colors of an expressive face by those of a neutral face, pixels with a ratio value smaller than one indicate where a wrinkle or crease appears. The gradients of the ratio values are therefore treated as changes in the face surface, and the original surface normals can be adjusted accordingly. Based on this idea, we convert the ratio images into a sequence of normal maps and apply them when rendering the animated 3D model. With this expression detail mapping, the resulting facial animations are more lifelike and more expressive.
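To make the idea concrete, the following Python sketch computes a ratio image and converts its gradients into a per-pixel normal map. This is an illustration of the general technique described in the abstract, not the authors' exact pipeline; the grayscale reduction, the epsilon guard, and the strength parameter are assumptions made for this example.

import numpy as np

def ratio_image(expressive, neutral, eps=1e-3):
    # Per-pixel ratio of an expressive face to the aligned neutral face.
    # Inputs are float arrays in [0, 1] with the same shape, (H, W) or (H, W, 3).
    # Ratio values below one mark darkened pixels, i.e., wrinkles or creases.
    return expressive / np.maximum(neutral, eps)

def ratio_to_normal_map(ratio, strength=1.0):
    # Treat the image-space gradient of the ratio value at each pixel as a
    # change in surface slope, and tilt the unperturbed normal (0, 0, 1)
    # accordingly. Returns an (H, W, 3) array of unit normals.
    gray = ratio.mean(axis=-1) if ratio.ndim == 3 else ratio
    gy, gx = np.gradient(gray)                 # gradients along rows (y) and columns (x)
    n = np.dstack([-strength * gx,             # tilt the normal against the slope
                   -strength * gy,
                   np.ones_like(gray)])
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

A sequence of such normal maps, one per video frame, can then be fed to any renderer that supports normal mapping to add the captured expression details to an animated 3D face model.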



Author information


Corresponding author

Correspondence to Pei-Hsuan Tu.

Additional information

Pei-Hsuan Tu is currently a software engineer at Cyber-Link Corporation. She received her B.S. degree in computer science from National Chung-Cheng University in 2001 and her M.S. degree in computer science from National Taiwan University in 2003. Her research interests include computer graphics and image processing. She is a member of IEEE and the IEEE Computer Society.

I-Chen Lin received the B.S. and Ph.D. degrees in computer science from National Taiwan University. His research interests include computer graphics, computer animation, and motion tracking. He is a member of ACM SIGGRAPH, IEEE, and the IEEE Computer Society.

Jeng-Sheng Yeh received a B.S. degree in computer science from National Taiwan University and is currently a Ph.D. candidate at National Taiwan University. His research interests include computer graphics, user interfaces, and 3D protein retrieval. He is a member of ACM SIGGRAPH.

Rung-Huei Liang is a postdoctoral researcher at the Communication and Multimedia Laboratory at National Taiwan University. His research interests include facial/gesture recognition and virtual reality applications. He received the B.S. and Ph.D. degrees in computer science from National Taiwan University.

Ming Ouhyoung is a professor in the Department of Computer Science and Information Engineering at National Taiwan University. His research interests include computer graphics, virtual reality, and multimedia systems. He received the B.S. and M.S. degrees in electrical engineering from National Taiwan University and a Ph.D. degree in computer science from the University of North Carolina at Chapel Hill. He is a member of IEEE and ACM.


About this article

Cite this article

Tu, PH., Lin, IC., Yeh, JS. et al. Surface detail capturing for realistic facial animation. J. Comput. Sci. & Technol. 19, 618–625 (2004). https://doi.org/10.1007/BF02945587
