Pose Aided Deep Convolutional Neural Networks for Face Alignment

  • Shuying Liu
  • Jiani Hu
  • Weihong Deng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9967)

Abstract

Recently, deep convolutional neural networks have been widely used and have achieved state-of-the-art performance in face-related tasks such as face verification, face detection, and face alignment. However, face alignment remains a challenging problem due to large pose variation and the lack of data. Although researchers have designed various network architectures to handle this problem, pose information has rarely been used explicitly. In this paper, we propose Pose Aided Convolutional Neural Networks (PACN), which use different networks for faces with different poses. We first train a CNN for pose classification together with a base CNN; different networks are then fine-tuned from the base CNN for faces of each pose. Since there may not be many images for each pose, we propose a data augmentation strategy that augments the data without affecting the pose. Experimental results show that the proposed PACN achieves results better than or comparable to the state-of-the-art methods.
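For concreteness, the routing idea summarized above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the toy network definition, the three pose bins, and the 68-landmark output are assumptions made only for the example. A pose-classification CNN first predicts a coarse pose, and the face crop is then passed to the landmark-regression CNN that was fine-tuned from a shared base network for that pose.

```python
# Hypothetical sketch of the PACN routing described in the abstract.
import torch
import torch.nn as nn

N_POSES = 3        # assumed pose bins, e.g. left profile / frontal / right profile
N_LANDMARKS = 68   # assumed landmark count (e.g. the 300-W annotation scheme)

class ToyNet(nn.Module):
    """Toy stand-in for both the pose classifier and the base landmark CNN."""
    def __init__(self, out_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.head(self.features(x))

# One pose classifier plus one landmark regressor per pose bin; each regressor
# starts from the same base weights before pose-specific fine-tuning.
pose_classifier = ToyNet(out_dim=N_POSES)
base_net = ToyNet(out_dim=2 * N_LANDMARKS)
pose_nets = [ToyNet(out_dim=2 * N_LANDMARKS) for _ in range(N_POSES)]
for net in pose_nets:
    net.load_state_dict(base_net.state_dict())  # initialize from the base CNN

def pacn_predict(face):
    """Route a face crop (1 x 3 x H x W tensor) to the network for its predicted pose."""
    with torch.no_grad():
        pose = pose_classifier(face).argmax(dim=1).item()
        landmarks = pose_nets[pose](face)        # (1, 2 * N_LANDMARKS) x,y coordinates
    return pose, landmarks.view(-1, 2)

pose, pts = pacn_predict(torch.randn(1, 3, 64, 64))
print(pose, pts.shape)
```

A pose-preserving augmentation in this setting would, for example, jitter translation, scale, and color but avoid horizontal mirroring, since mirroring maps a left profile to a right profile and would change the pose label.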

Keywords

Deep Convolutional Neural Network · Pose aided · Data augmentation

Notes

Acknowledgments

This work was partially supported by the NSFC (National Natural Science Foundation of China) under Grant Nos. 61375031, 61573068, 61471048, and 61273217, and by the Fundamental Research Funds for the Central Universities under Grant No. 2014ZD03-01. This work was also supported by the Beijing Nova Program, the CCF-Tencent Open Research Fund, and the Program for New Century Excellent Talents in University.

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China