Robust Face Recognition with Deeply Normalized Depth Images

  • Ziqing Feng
  • Qijun Zhao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10996)

Abstract

Depth information has been proven useful for face recognition. However, existing depth-image-based face recognition methods still suffer from noisy depth values and varying poses and expressions. In this paper, we propose a novel method that normalizes facial depth images to frontal pose and neutral expression and extracts robust features from the normalized depth images. The method is implemented via two deep convolutional neural networks (DCNNs): a normalization network (\(Net_{N}\)) and a feature extraction network (\(Net_{F}\)). Given a facial depth image, \(Net_{N}\) first converts it to an HHA image, from which the 3D face is reconstructed via a DCNN. \(Net_{N}\) then generates a pose-and-expression-normalized (PEN) depth image from the reconstructed 3D face. The PEN depth image is finally passed to \(Net_{F}\), which extracts a robust feature representation via another DCNN for face recognition. Our preliminary evaluation demonstrates the superiority of the proposed method in recognizing faces of arbitrary poses and expressions from depth images.
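The first step of the pipeline above encodes the raw depth map as an HHA image (horizontal disparity, height, and angle of the surface normal with the vertical) before feeding it to \(Net_{N}\). The following is a simplified sketch of such an encoding using NumPy; it is an illustrative assumption, not the authors' implementation (a faithful HHA encoder back-projects pixels to 3D with the camera intrinsics and estimates the gravity direction):

```python
import numpy as np

def depth_to_hha_like(depth, fx=1.0, fy=1.0):
    """Encode a depth map as a 3-channel HHA-style image:
    disparity, a height proxy, and the normal-to-vertical angle.
    Simplified sketch only; fx/fy are assumed focal lengths."""
    depth = depth.astype(np.float64)
    # Channel 1: horizontal disparity (inverse depth), guarding against zeros.
    disparity = np.where(depth > 0, 1.0 / np.maximum(depth, 1e-6), 0.0)
    # Channel 2: height proxy -- row coordinate back-projected with depth.
    # (A real implementation measures height above the estimated ground plane.)
    rows = np.arange(depth.shape[0])[:, None] * np.ones_like(depth)
    height = rows * depth / fy
    # Channel 3: angle between an estimated surface normal and the vertical axis.
    dzdy, dzdx = np.gradient(depth)
    normals = np.stack([-dzdx * fx, -dzdy * fy, np.ones_like(depth)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    angle = np.degrees(np.arccos(np.clip(normals[..., 1], -1.0, 1.0)))
    # Normalize each channel to [0, 255] so the result can serve as a CNN input.
    hha = np.stack([disparity, height, angle], axis=-1)
    mins = hha.reshape(-1, 3).min(axis=0)
    maxs = hha.reshape(-1, 3).max(axis=0)
    hha = (hha - mins) / np.maximum(maxs - mins, 1e-6) * 255.0
    return hha.astype(np.uint8)
```

The resulting three-channel image can then be consumed by a standard DCNN in the same way as an RGB input.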

Keywords

Depth images · Face recognition · Pose and expression normalization

Acknowledgements

This work is supported by the National Key Research and Development Program of China (2017YFB0802300) and the National Natural Science Foundation of China (61773270).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, China