
Self-supervised image blind deblurring using deep generator prior

Published in Optoelectronics Letters

Abstract

Deep generative prior (DGP) has recently been proposed for image restoration and manipulation, achieving compelling results in recovering missing semantics. In this paper, we develop a general solution for single-image blind deblurring that uses DGP as the image prior. To this end, two aspects of the problem are investigated. The first is modeling the degradation of the latent image, which corresponds to blur-kernel estimation in conventional deblurring methods. For this purpose, a Reblur2Deblur network is proposed and trained on large-scale datasets so that it can simulate the degradation of latent sharp images. The second is encouraging deblurring results that are faithful to the content of the latent images while matching the appearance of the blurry observations. Since generative adversarial network (GAN)-based methods often produce mismatched reconstructions, a deblurring framework with a relaxation strategy is introduced to tackle this problem: the pre-trained GAN and the pre-trained ReblurNet are fine-tuned on the fly in a self-supervised manner. Finally, we demonstrate empirically that the proposed model performs favorably against state-of-the-art methods.
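To make the fine-tuning-on-the-fly idea concrete, the sketch below shows one plausible form of the self-supervised loop the abstract describes: a pre-trained generator supplies the deep generative prior, a ReblurNet simulates the degradation of the latent sharp image, and the latent code together with both networks is optimized so that the reblurred output matches the blurry observation. This is not the authors' implementation; the Generator and ReblurNet classes, the loss, and all hyperparameters are illustrative placeholders.

```python
# Minimal sketch (not the authors' code) of self-supervised DGP-based deblurring.
# Assumptions: toy Generator/ReblurNet architectures, L1 reconstruction loss,
# smaller learning rates for network fine-tuning as the "relaxation" strategy.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Placeholder for a pre-trained GAN generator (latent code -> sharp image)."""
    def __init__(self, latent_dim=128, img_size=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 3 * img_size * img_size), nn.Tanh())
        self.img_size = img_size

    def forward(self, z):
        return self.net(z).view(-1, 3, self.img_size, self.img_size)


class ReblurNet(nn.Module):
    """Placeholder for the pre-trained network that simulates blur degradation."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=5, padding=2)

    def forward(self, sharp):
        return self.conv(sharp)


def self_supervised_deblur(blurry, generator, reblur_net, latent_dim=128, steps=500, lr=1e-3):
    """Optimize the latent code and fine-tune G and R so that R(G(z)) reproduces
    the blurry observation; G(z) is then returned as the deblurred estimate."""
    z = torch.randn(blurry.size(0), latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([
        {"params": [z], "lr": lr},
        {"params": generator.parameters(), "lr": lr * 0.1},   # relaxed fine-tuning of the GAN
        {"params": reblur_net.parameters(), "lr": lr * 0.1},  # relaxed fine-tuning of ReblurNet
    ])
    for _ in range(steps):
        optimizer.zero_grad()
        sharp = generator(z)                  # latent sharp image from the generative prior
        reblurred = reblur_net(sharp)         # simulated degradation of the sharp estimate
        loss = F.l1_loss(reblurred, blurry)   # self-supervised: match the blurry observation
        loss.backward()
        optimizer.step()
    return generator(z).detach()


if __name__ == "__main__":
    blurry = torch.rand(1, 3, 64, 64)         # stand-in blurry observation
    restored = self_supervised_deblur(blurry, Generator(), ReblurNet())
    print(restored.shape)                     # torch.Size([1, 3, 64, 64])
```

In practice the generator and ReblurNet would be pre-trained models rather than the toy modules above; the key point of the sketch is that only the blurry image supervises the optimization, which is what makes the procedure self-supervised.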



Author information

Corresponding author

Correspondence to Lei Chen.

Additional information

This work was supported by the National Natural Science Foundation of China (No. 61273251).

Statements and Declarations

The authors declare that there are no conflicts of interest related to this article.


About this article


Cite this article

Li, Y., Wang, S. & Chen, L. Self-supervised image blind deblurring using deep generator prior. Optoelectron. Lett. 18, 187–192 (2022). https://doi.org/10.1007/s11801-022-1111-0
