Blind Face Restoration via Deep Multi-scale Component Dictionaries

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12354)

Abstract

Recent reference-based face restoration methods have received considerable attention due to their strong capability in recovering high-frequency details from real low-quality images. However, most of these methods require a high-quality reference image of the same identity, making them applicable only in limited scenarios. To address this issue, this paper proposes a deep face dictionary network (termed DFDNet) to guide the restoration of degraded observations. First, we use K-means to build deep dictionaries for perceptually significant face components (i.e., left/right eyes, nose, and mouth) from high-quality images. Next, given a degraded input, we match and select the most similar component features from their corresponding dictionaries and transfer the high-quality details to the input via the proposed dictionary feature transfer (DFT) block. In particular, component AdaIN is leveraged to eliminate the style gap between the input and dictionary features (e.g., illumination), and a confidence score is proposed to adaptively fuse the dictionary feature into the input. Finally, multi-scale dictionaries are adopted in a progressive manner to enable coarse-to-fine restoration. Experiments show that our proposed method achieves plausible performance in both quantitative and qualitative evaluation and, more importantly, generates realistic and promising results on real degraded images without requiring a reference image of the same identity. The source code and models are available at https://github.com/csxmli2016/DFDNet.
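The per-component pipeline described above (match against the dictionary, style-align with component AdaIN, then fuse with a confidence weight) can be sketched in a few lines of numpy. This is a simplified illustration, not the paper's implementation: the matching metric (cosine similarity) and the fixed scalar confidence are assumptions standing in for the learned DFT block.

```python
import numpy as np

def adain(dict_feat, input_feat, eps=1e-5):
    """Align the channel-wise mean/std of a dictionary feature with those of
    the degraded input feature (component AdaIN), removing style differences
    such as illumination. Features are arrays of shape (C, H, W)."""
    mu_d = dict_feat.mean(axis=(1, 2), keepdims=True)
    std_d = dict_feat.std(axis=(1, 2), keepdims=True)
    mu_i = input_feat.mean(axis=(1, 2), keepdims=True)
    std_i = input_feat.std(axis=(1, 2), keepdims=True)
    return (dict_feat - mu_d) / (std_d + eps) * std_i + mu_i

def restore_component(input_feat, dictionary, conf=0.5):
    """Sketch of one dictionary feature transfer step for a single component.

    dictionary: list of candidate component features, each shaped like
    input_feat. `conf` is a fixed scalar standing in for the spatial
    confidence map that DFDNet predicts."""
    # 1) Match: pick the dictionary entry most similar to the input
    #    (cosine similarity here; the exact metric is an assumption).
    flat_in = input_feat.ravel()
    sims = [
        np.dot(flat_in, d.ravel())
        / (np.linalg.norm(flat_in) * np.linalg.norm(d.ravel()) + 1e-8)
        for d in dictionary
    ]
    best = dictionary[int(np.argmax(sims))]
    # 2) Style-align the selected feature to the input via component AdaIN.
    aligned = adain(best, input_feat)
    # 3) Fuse: blend the aligned high-quality feature into the input,
    #    weighted by the confidence score.
    return input_feat + conf * (aligned - input_feat)
```

In the paper this step runs at multiple feature scales in a progressive, coarse-to-fine manner; the sketch above shows a single scale and a single component.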

Keywords

Face hallucination · Deep face dictionary · Guided image restoration · Convolutional neural networks

Notes

Acknowledgements

This work is partially supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 61671182 and U19A2073, and by the Hong Kong RGC RIF grant (R5001-18).

Supplementary material

Supplementary material 1: 504446_1_En_23_MOESM1_ESM.pdf (31.8 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Faculty of Computing, Harbin Institute of Technology, Harbin, China
  2. Department of Computer Science, The University of Hong Kong, Hong Kong, China
  3. School of Computer Science and Engineering, Nanyang Technological University, Singapore
  4. DAMO Academy, Alibaba Group, Hangzhou, China
  5. Peng Cheng Lab, Shenzhen, China
  6. Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
