
DFQA: Deep Face Image Quality Assessment

  • Fei Yang (Email author)
  • Xiaohu Shao
  • Lijun Zhang
  • Pingling Deng
  • Xiangdong Zhou
  • Yu Shi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

Dependable features for downstream evaluation can be extracted from a high-quality face image, whereas a low-quality one yields unreliable features. Unlike quality assessment algorithms for general images, face image quality assessment must consider practical factors that directly affect the accuracy of face recognition, face verification, and related tasks. In this paper, we present a two-stream convolutional neural network (CNN) named Deep Face Quality Assessment (DFQA), designed specifically for face image quality assessment. DFQA predicts the quality score of an input face image quickly and accurately. Specifically, we design a two-stream network to increase prediction diversity and improve evaluation accuracy. Compared with other CNN architectures and quality assessment methods for similar tasks, our model is smaller in size and faster in speed. In addition, we build a new dataset containing 3000 face images manually annotated with objective quality scores. Experiments show that face recognition performance is improved by introducing our face image quality assessment algorithm.
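The abstract describes DFQA as a two-stream CNN that regresses a single quality score from a face image, with the two streams providing prediction diversity. The Python (PyTorch) sketch below is a rough illustration of that idea, not the authors' published architecture: the layer widths, the 112x112 input size, the averaging of the two streams, and the sigmoid mapping to a [0, 1] score are all assumptions made for the example.

    # Illustrative sketch only: layer sizes, backbone design, and score fusion
    # are assumptions, not the published DFQA configuration.
    import torch
    import torch.nn as nn

    class StreamBackbone(nn.Module):
        """A small convolutional feature extractor; one copy per stream."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(128, 1)

        def forward(self, x):
            f = self.features(x).flatten(1)
            return self.head(f)            # one scalar quality estimate per image

    class TwoStreamQualityNet(nn.Module):
        """Two independent streams whose predictions are fused into one score."""
        def __init__(self):
            super().__init__()
            self.stream_a = StreamBackbone()
            self.stream_b = StreamBackbone()

        def forward(self, x):
            score = 0.5 * (self.stream_a(x) + self.stream_b(x))
            return torch.sigmoid(score)    # map to a [0, 1] quality score

    # Usage: regress toward human-annotated quality labels with an L1 or L2 loss.
    model = TwoStreamQualityNet()
    faces = torch.randn(4, 3, 112, 112)   # batch of aligned face crops (assumed size)
    print(model(faces).shape)              # torch.Size([4, 1])

In a setup like this, the predicted score can be used to filter or weight face images before recognition, which is how the paper reports improving recognition performance.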

Keywords

Face image · Quality assessment · Deep learning · Face recognition


Acknowledgments

This work was supported by the National Key Research and Development Program of China (2018YFC0808300), the CAS Light of West China Program (2017), and the National Natural Science Foundation of China (6180021609, 6180070559, 61602433).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Fei Yang (1, 2), Email author
  • Xiaohu Shao (1, 2)
  • Lijun Zhang (1, 2)
  • Pingling Deng (1, 2)
  • Xiangdong Zhou (1, 2)
  • Yu Shi (1, 2)
  1. Chongqing Institute of Green and Intelligent Technology, CAS, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China
