
A Non-rigid Face Tracking Method for Wide Rotation Using Synthetic Data

  • Ngoc-Trung Tran
  • Fakhreddine Ababsa
  • Maurice Charbit
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9493)

Abstract

This paper proposes a new method for wide-rotation non-rigid face tracking, which remains a challenging problem for the computer vision community. Our method consists of a training phase and a tracking phase. In training, we use a large off-line synthetic database to overcome the problem of data collection; the local appearance models are then trained with linear Support Vector Machines (SVMs). In tracking, we propose a two-step approach: (i) the first step uses baseline matching to obtain a good initialization, and matching the current frame against a set of adaptive keyframes allows the tracker to recover from failures; (ii) the second step estimates the model parameters with a heuristic method driven by pose-wise SVMs. This combination makes our approach robust to wide rotations, up to \(90^{\circ }\) about the vertical axis. In addition, baseline matching keeps the method robust even in the presence of fast movements. Compared to state-of-the-art methods, our method offers a good compromise between rigid and non-rigid parameter accuracy. This study gives a promising perspective because of the good results in terms of pose estimation (average error below \(4^{\circ }\) on the BUFT dataset) and landmark tracking precision (5.8 pixels of error, compared with 6.8 pixels for a state-of-the-art method, on the Talking Face video). These results highlight the potential of using synthetic data to track non-rigid faces under unconstrained poses.
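To make the training and scoring steps above concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how pose-wise linear SVMs over local appearance patches could be trained and queried. The function names, the pose-binning scheme, and the patch features are placeholder assumptions, and scikit-learn's LinearSVC (a LIBLINEAR wrapper) stands in for whichever linear SVM implementation was actually used.

```python
# Hypothetical sketch of pose-wise linear SVMs for local appearance scoring.
# Patch extraction, pose binning, and feature choice are placeholders for
# what the paper describes; only the SVM training/scoring pattern is shown.
import numpy as np
from sklearn.svm import LinearSVC


def train_pose_wise_svms(patches_by_pose, labels_by_pose):
    """Train one linear SVM per discrete head-pose bin.

    patches_by_pose: dict mapping a pose bin (e.g. yaw in degrees) to an
                     (n_samples, n_features) array of local patch features.
    labels_by_pose:  dict mapping the same pose bins to +1/-1 labels
                     (landmark vs. non-landmark patches).
    """
    svms = {}
    for pose_bin, X in patches_by_pose.items():
        y = labels_by_pose[pose_bin]
        clf = LinearSVC(C=1.0)  # liblinear-backed linear SVM
        clf.fit(X, y)
        svms[pose_bin] = clf
    return svms


def score_candidates(svms, pose_bin, candidate_patches):
    """Score candidate patches with the SVM of the nearest pose bin.

    The highest-scoring candidate locations would then drive the landmark
    and model-parameter update in the tracking step.
    """
    nearest = min(svms, key=lambda p: abs(p - pose_bin))
    return svms[nearest].decision_function(candidate_patches)
```

In this sketch each head-pose bin gets its own classifier, and at tracking time candidate patches are scored with the classifier of the nearest bin, mirroring the pose-wise selection described in the abstract.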

Keywords

3D face tracking · Out-of-plane tracking · Rigid tracking · Non-rigid tracking · Face matching · Synthesized face


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Ngoc-Trung Tran (1, 2)
  • Fakhreddine Ababsa (2)
  • Maurice Charbit (1)
  1. LTCI-CNRS, Telecom ParisTECH, Paris, France
  2. IBISC, University of Evry, Evry, France
