Nonlinear Dynamic Shape and Appearance Models for Facial Motion Tracking

  • Chan-Su Lee
  • Ahmed Elgammal
  • Dimitris Metaxas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4872)

Abstract

We present a framework for tracking large facial deformations using a nonlinear dynamic shape and appearance model combined with local motion estimation. Local facial deformation estimation based on a single fixed template fails to track large facial deformations because of significant appearance variation. A nonlinear generative model with a low-dimensional manifold representation instead provides adaptive facial appearance templates that depend on the current facial motion state and the expression type. The proposed model serves as a generative model for Bayesian tracking of facial motion using particle filtering, with simultaneous estimation of the expression type. We estimate the geometric transformation and the global deformation from the generative model; the appearance templates produced by the global model then drive local deformation estimation based on thin-plate spline parameters.
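The local deformation step is parameterized by a thin-plate spline (TPS) warp between landmark configurations. As a minimal sketch of that machinery only, and not the authors' implementation, the Python snippet below fits a 2-D TPS mapping between a source and a target landmark set in the standard Bookstein formulation and applies it to new points; the function and variable names (fit_tps, warp_tps, reg) are illustrative assumptions.

```python
import numpy as np

def fit_tps(src, dst, reg=0.0):
    """Fit a 2-D thin-plate spline mapping src landmarks (N,2) onto dst (N,2).

    Returns the non-affine weights w (N,2) and affine coefficients a (3,2) of
    the Bookstein form  f(p) = a0 + A p + sum_i w_i U(|p - src_i|),
    with radial basis U(r) = r^2 log(r^2). `reg` adds optional regularization.
    """
    n = src.shape[0]
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.where(d2 > 0, d2 * np.log(d2 + 1e-12), 0.0) + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])            # affine basis [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    sol = np.linalg.solve(L, rhs)                    # bending + affine part
    return sol[:n], sol[n:]

def warp_tps(points, src, w, a):
    """Apply the fitted TPS mapping to an arbitrary set of 2-D points (M,2)."""
    d2 = np.sum((points[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, d2 * np.log(d2 + 1e-12), 0.0)
    return a[0] + points @ a[1:] + U @ w

# Toy usage: perturb the corners of a unit square and warp an interior point.
if __name__ == "__main__":
    src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    dst = src + np.array([[0., 0.], [0.1, 0.], [0.15, 0.1], [0., 0.05]])
    w, a = fit_tps(src, dst)
    print(warp_tps(np.array([[0.5, 0.5]]), src, w, a))
```

In a tracking context such as the one described above, the source landmarks would come from the adapted appearance template and the target landmarks from the current image estimate, so the TPS coefficients summarize the local (non-rigid) deformation on top of the global transformation.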

Keywords

Nonlinear shape and appearance models · Active appearance model · Facial motion tracking · Adaptive template · Thin-plate spline · Local facial motion · Facial expression recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Chan-Su Lee 1
  • Ahmed Elgammal 1
  • Dimitris Metaxas 1
  1. Rutgers University, Piscataway, NJ, USA
