
Unpaired Learning of Deep Image Denoising

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12349)

Abstract

We investigate the task of learning blind image denoising networks from an unpaired set of clean and noisy images. This setting is practical and valuable, since unpaired noisy and clean images are feasible to collect in most real-world applications. We further assume that the noise can be signal-dependent but is spatially uncorrelated. To facilitate unpaired learning of a denoising network, this paper presents a two-stage scheme that incorporates self-supervised learning and knowledge distillation. For self-supervised learning, we suggest a dilated blind-spot network (D-BSN) that learns denoising solely from real noisy images. Owing to the spatial independence of the noise, we adopt a network of stacked \(1\times 1\) convolution layers to estimate the noise level map for each image. The D-BSN and the image-specific noise model (\(\text{CNN}_{\text{est}}\)) are jointly trained by maximizing a constrained log-likelihood. Given the D-BSN output and the estimated noise level map, improved denoising performance can be further obtained via Bayes' rule. For knowledge distillation, we first apply the learned noise models to clean images to synthesize a paired set of training images, and use the real noisy images together with their first-stage denoising results to form another paired set. The ultimate denoising model is then distilled by training an existing denoising network on these two paired sets. Experiments show that our unpaired learning method performs favorably on both synthetic noisy images and real-world noisy photographs in terms of quantitative and qualitative evaluation. Code is available at https://github.com/XHWXD/DBSN.
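
Two pieces of the pipeline described above are concrete enough to sketch. The snippet below is only an illustration of those ideas, not the authors' released implementation (see the GitHub link for that): a noise level estimator built solely from \(1\times 1\) convolutions, so each pixel's predicted variance depends on that pixel alone, and the standard Gaussian posterior-mean formula for fusing a blind-spot prediction with the noisy observation. The module name, width, depth, and the toy driver at the bottom are illustrative assumptions; the real D-BSN outputs would replace the stand-in prior.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseLevelEstimator(nn.Module):
    # Image-specific noise model built only from 1x1 convolutions. Because
    # every layer has a 1x1 receptive field, the predicted noise variance at
    # each pixel depends only on that pixel's noisy value, matching the
    # assumption of signal-dependent but spatially uncorrelated noise.
    def __init__(self, in_channels=3, width=16, depth=5):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(depth - 1):
            layers += [nn.Conv2d(c, width, kernel_size=1), nn.ReLU(inplace=True)]
            c = width
        layers.append(nn.Conv2d(c, in_channels, kernel_size=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Softplus keeps the per-pixel noise variance strictly positive.
        return F.softplus(self.body(noisy)) + 1e-6

def bayes_refine(mu, prior_var, noisy, noise_var):
    # Gaussian posterior mean: combine the blind-spot prediction (prior mean
    # `mu` with variance `prior_var`, computed without looking at the centre
    # pixel) with the observed noisy pixel and its estimated noise variance.
    return (noise_var * mu + prior_var * noisy) / (noise_var + prior_var)

if __name__ == "__main__":
    y = torch.rand(1, 3, 64, 64)               # a noisy image patch
    sigma2 = NoiseLevelEstimator()(y)           # per-pixel noise variance map
    mu = y.clone()                              # stand-in for the D-BSN output
    prior_var = torch.full_like(y, 0.01)        # stand-in prior variance
    x_hat = bayes_refine(mu, prior_var, y, sigma2)
    print(x_hat.shape)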

Keywords

Image denoising · Unpaired learning · Convolutional networks · Self-supervised learning

Notes

Acknowledgement

This work is partially supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 61671182 and U19A2073.

Supplementary material

504439_1_En_21_MOESM1_ESM.pdf (32.2 MB)
Supplementary material 1 (PDF 32,962 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Harbin Institute of Technology, Harbin, China
  2. University of Tianjin, Tianjin, China
  3. Peng Cheng Lab, Shenzhen, China
