
Estimating coloured 3D face models from single images: An example based approach

  • Thomas Vetter
  • Volker Blanz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1407)

Abstract

In this paper we present a method to derive the 3D shape and surface texture of a human face from a single image. The method draws on a general flexible 3D face model that is “learned” from examples of individual 3D face data (Cyberware scans). In an analysis-by-synthesis loop, the flexible model is matched to the novel face image.
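
As a rough illustration of the two ideas in this paragraph, the following Python/NumPy sketch (our own, with hypothetical names such as FlexibleFaceModel and render, not the authors' implementation) represents new faces as linear combinations of example shape and texture vectors and fits them to a target image by a naive random search in place of the paper's analysis-by-synthesis optimization:

```python
import numpy as np

# Minimal sketch of a flexible face model built from m example heads that are
# already in dense correspondence.  Shapes and textures are stacked as rows;
# new faces are linear combinations of the examples.
class FlexibleFaceModel:
    def __init__(self, example_shapes, example_textures):
        # example_shapes:   (m, 3n) -- x, y, z of n corresponding vertices per scan
        # example_textures: (m, 3n) -- R, G, B per vertex, same correspondence
        self.S = np.asarray(example_shapes, dtype=float)
        self.T = np.asarray(example_textures, dtype=float)

    def synthesize(self, alpha, beta):
        """Shape and texture of a new face from coefficient vectors alpha, beta."""
        return alpha @ self.S, beta @ self.T


def analysis_by_synthesis(model, target_image, render, n_iter=200, step=0.01,
                          rng=np.random.default_rng(0)):
    """Crude fitting loop: perturb the coefficients at random and keep changes
    that reduce the pixel-wise difference to the target image.  `render` is a
    hypothetical function mapping (shape, texture) to an image."""
    m = model.S.shape[0]
    alpha = np.full(m, 1.0 / m)          # start from the average face
    beta = np.full(m, 1.0 / m)
    best = np.sum((render(*model.synthesize(alpha, beta)) - target_image) ** 2)
    for _ in range(n_iter):
        da = step * rng.standard_normal(m)
        db = step * rng.standard_normal(m)
        err = np.sum((render(*model.synthesize(alpha + da, beta + db))
                      - target_image) ** 2)
        if err < best:
            alpha, beta, best = alpha + da, beta + db, err
    return alpha, beta
```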

From the coloured 3D model obtained by this procedure, we can generate new images of the face across changes in viewpoint and illumination. Moreover, nonrigid transformations represented within the flexible model, such as changes in facial expression, can be applied.
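
A minimal sketch of how such a nonrigid transformation could be applied once all faces are in dense correspondence (the function name and inputs below are hypothetical, not taken from the paper): the difference between an example head recorded with and without an expression defines an offset that can be added to any reconstructed face.

```python
import numpy as np

def add_expression(shape_neutral_new, shape_neutral_example,
                   shape_smiling_example, strength=1.0):
    """Transfer an expression to a new face as a vector offset between two
    corresponding example scans (illustrative placeholder names)."""
    expression_offset = shape_smiling_example - shape_neutral_example
    return shape_neutral_new + strength * expression_offset
```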

The key problem in building a flexible face model is the computation of dense correspondence between all given 3D example faces. A new correspondence algorithm is described that generalizes common optic flow algorithms to 3D face data.
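
To make the generalization concrete: assuming a head scan is parameterised over cylindrical coordinates (height, angle), it can be treated as a multichannel image whose channels are radius plus surface colour, and standard optic flow machinery then applies channel-wise. The single-scale, Lucas-Kanade-style sketch below follows that assumption and is only an illustration; the algorithm in the paper is hierarchical (coarse-to-fine).

```python
import numpy as np

def dense_flow_multichannel(I0, I1, window=5):
    """Single-scale local flow between two multichannel 'images' of shape
    (H, W, C).  For 3D face scans over (height, angle) the channels would be
    radius, R, G, B, so the same least-squares step that matches grey values
    in ordinary optic flow here matches surface geometry and colour."""
    H, W, C = I0.shape
    Iy, Ix = np.gradient(I0, axis=(0, 1))   # spatial derivatives per channel
    It = I1 - I0                            # inter-scan difference
    r = window // 2
    flow = np.zeros((H, W, 2))
    for y in range(r, H - r):
        for x in range(r, W - r):
            gx = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            gy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            gt = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([gx, gy], axis=1)
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:   # skip ill-conditioned windows
                flow[y, x] = np.linalg.solve(ATA, -A.T @ gt)
    return flow
```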

Keywords

Optical Flow, Face Image, Flexible Model, Face Model, Shape Vector


Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Thomas Vetter (1)
  • Volker Blanz (1)
  1. Max-Planck-Institut für biologische Kybernetik, Tübingen, Germany
