3D Rigid Facial Motion Estimation from Disparity Maps

  • N. Pérez de la Blanca
  • J. M. Fuertes
  • M. Lucena
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2905)

Abstract

This paper proposes an approach for estimating 3D rigid facial motion from a stereo image sequence. The approach uses a disparity space as the main space in which to represent all the 3D information. A robust algorithm based on RANSAC is used to estimate the rigid motions through the image sequence. The disparity map is shown to be a feature that is robust against local motions of the surface, and it is therefore a very good alternative to the traditional use of sets of interest points.
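As a rough illustration of the general technique only (not the authors' exact algorithm, which works directly in disparity space rather than on triangulated points), a RANSAC rigid-motion estimator over 3D point correspondences can be sketched as follows. The helper `rigid_transform` and the thresholds are illustrative assumptions:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) mapping points P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_rigid(P, Q, iters=300, thresh=0.05, seed=None):
    """RANSAC: fit rigid motions to random 3-point samples, keep the
    hypothesis with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = err < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(P[best], Q[best])
```

Three non-collinear point correspondences suffice to determine a rigid motion, which is why each RANSAC sample draws exactly three pairs; degenerate (collinear) samples simply produce hypotheses with few inliers and are discarded.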

Keywords

Motion Estimation, Interest Point, Rigid Motion, Stereo Image, Epipolar Line


Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • N. Pérez de la Blanca (1)
  • J. M. Fuertes (2)
  • M. Lucena (2)
  1. Department of Computer Science and Artificial Intelligence, ETSII, University of Granada, Granada, Spain
  2. Departamento de Informática, Escuela Politécnica Superior, Universidad de Jaén, Jaén, Spain
