
RTR-DPD: reweighted Tikhonov regularization for blind deblurring via dual principles of discriminativeness


Abstract

Blind deblurring has developed rapidly since the variational Bayes method of Fergus et al. about 15 years ago. From the statistical viewpoint, it is now generally acknowledged that unnatural, heavy-tailed image priors should be advocated for blind deblurring. This paper argues, however, that an image prior driven by the dual principles of discriminativeness is more critical and essential to blind deblurring, and accordingly formulates a new type of reweighted Tikhonov image regularization. Compared with previous approaches, the proposed model is not only more concise mathematically but also more intuitive to understand. Experiments within a plug-and-play numerical framework on images of the natural, man-made, low-illumination, text, and people classes validate the effectiveness and robustness of the proposed model. More importantly, its practicality on realistic, challenging deblurring problems is also demonstrated. Finally, it is believed that this paper can help break a long-standing prejudice about maximum-a-posteriori-based blind deblurring, namely that various modeling tricks must be devised to ensure top performance. Instead, this paper shows that the problem can be formulated in a stricter modeling fashion, much as the non-blind deblurring task is.
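For illustration only, the sketch below shows what a reweighted Tikhonov latent-image update inside an alternating blind-deblurring loop could look like. It is a minimal sketch under stated assumptions: the abstract does not specify the RTR-DPD weighting rule or solver, so the function names (reweighted_tikhonov_latent_update, update_weights), the single FFT-domain solve with averaged weights, and the IRLS-style reweighting are hypothetical placeholders, not the authors' actual formulation.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def psf2otf(psf, shape):
    """Zero-pad the PSF to the image size, circularly shift its center to the
    origin, and take the 2-D FFT."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    otf = np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return fft2(otf)

def reweighted_tikhonov_latent_update(blurred, kernel, weights_x, weights_y, lam=2e-3):
    """One latent-image update with a reweighted Tikhonov (quadratic) gradient
    penalty. With fixed diagonal weights W the subproblem
        min_u ||k * u - b||^2 + lam * (||W_x^(1/2) D_x u||^2 + ||W_y^(1/2) D_y u||^2)
    is a linear system; here it is approximated by a single FFT-domain solve
    using the mean of the weights (an illustrative simplification, since
    spatially varying weights are not diagonalized by the FFT)."""
    K = psf2otf(kernel, blurred.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), blurred.shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), blurred.shape)
    wbar = 0.5 * (weights_x.mean() + weights_y.mean())
    num = np.conj(K) * fft2(blurred)
    den = np.abs(K) ** 2 + lam * wbar * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(ifft2(num / den))

def update_weights(u, eps=1e-3):
    """IRLS-style reweighting used here purely as a placeholder: pixels with
    small gradients (texture) get large weights and are smoothed away, while
    salient edges get small weights and are preserved."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return 1.0 / (gx ** 2 + eps), 1.0 / (gy ** 2 + eps)

# Example usage with a current kernel estimate:
# u = blurred.copy()
# for _ in range(5):
#     wx, wy = update_weights(u)
#     u = reweighted_tikhonov_latent_update(blurred, kernel, wx, wy)
```

In a full pipeline this image update would alternate with a blur-kernel estimation step, and, in the spirit of the plug-and-play framework mentioned in the abstract, the quadratic smoothing step could be replaced by or combined with an off-the-shelf denoiser.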


References

  • Almeida, M., & Almeida, L. (2010). Blind and semi-blind deblurring of natural images. IEEE TIP, 19(1), 36–52.


  • Babacan, S. D., Molina, R., Do, M. N., & Katsaggelos, A. K. (2012). "Bayesian blind deconvolution with general sparse image priors," ECCV, Part VI, LNCS, vol. 7577, pp. 341–355.

  • Babacan, S. D., Molina, R., & Katsaggelos, A. K. (2009). Variational Bayesian blind deconvolution using a total variation prior. IEEE TIP, 18(1), 12–26.


  • Bai, Y., Cheung, G., Liu, X., & Gao, W. (2019). Graph-based blind image deblurring from a single photograph. IEEE TIP, 28(3), 1404–1418.


  • Brifman, A., Romano, Y., & Elad, M. (2016). "Turning a denoiser into a super-resolver using plug-and-play priors," ICIP.

  • Cai, J. F., Ji, H., Liu, C., & Shen, Z. (2012). Framelet-based blind motion deblurring from a single image. IEEE Transactions on Image Processing, 21(2), 562–572.


  • Cai, J., Zuo, W., & Zhang, L. (2020). Dark and bright channel prior embedded network for dynamic scene deblurring. IEEE Transactions on Image Processing, 29, 6885–6897.


  • Chakrabarti, A. (2016). "A neural approach to blind motion deblurring," ECCV, pp. 221–235.

  • Chan, S. H., Wang, X., & Elgendy, O. A. (2017). Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE TCI, 3(1), 84–98.


  • Chan, T. F., & Wong, C. K. (1998). Total variation blind deconvolution. IEEE TIP, 7(3), 370–375.


  • Cho, S., & Lee, S. (2009). Fast motion deblurring. ACM Transaction on Graphics, 28(5), 1–8.


  • Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE TIP, 16(8), 2080–2095.


  • Egiazarian, K., & Katkovnik, V. (2015). "Single image super-resolution via BM3D sparse coding," EUSIPCO, pp. 2849–2853.

  • Fergus, R., Singh, B., Hertzmann, A., Roweis, S. T., & Freeman, W. T. (2006). Removing camera shake from a single photograph. ACM Transactions on Graphics, 25(3), 787–794.


  • Goodfellow, I., Pouget-Abadie, J., Mirza, M. (2014). Generative adversarial networks, NIPS, pp. 2672–2680.

  • Harmeling, S., Hirsch, M., & Schölkopf, B. (2010). "Space-variant single-image blind deconvolution for removing camera shake," NIPS, pp. 829–837.

  • Hirsch, M., Schuler, C. J., Harmeling, S., Schölkopf, B. (2011). “Fast removal of non-uniform camera-shake,” ICCV.

  • Hu, Z., Cho, S., Wang, J., & Yang, M.-H. (2014). "Deblurring low-light images with light streaks," CVPR, pp. 3382–3389.

  • Köhler, R., Hirsch, M., Mohler, B., Schölkopf, B., & Harmeling, S. (2012). "Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database," ECCV.

  • Kotera, J., Sroubek, F., & Milanfar, P. (2013). "Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors," CAIP, Part II, LNCS, vol. 8048, pp. 59–66.

  • Kotera, J., Šmídl, V., & Šroubek, F. (2017). Blind deconvolution with model discrepancies. IEEE TIP, 26(5), 2533–2544.


  • Krishnan, D., Tay, T., & Fergus, R. (2011). "Blind deconvolution using a normalized sparsity measure," CVPR, pp. 233–240.

  • Krishnan, D., & Fergus, R. (2009). Fast image deconvolution using hyper-Laplacian priors. NIPS, 22, 1033–1041.


  • Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., Matas, J. (2018). “Deblurgan: Blind motion deblurring using conditional adversarial networks,” CVPR, pp. 8183–8192.

  • Kupyn, O., Martyniuk, T., Wu, J., Wang, Z. (2019). “DeblurGAN-v2: Deblurring (Orders-of-Magnitude) faster and better,” ICCV.

  • Lai, W. S., Ding, J.-J., Lin, Y.-Y., & Chuang, Y.-Y. (2015). "Blur kernel estimation using normalized color-line prior," CVPR.

  • Lai, W. S., Huang, J. B., Hu, Z., Ahuja, N., Yang, M. H., (2016). “A comparative study for single image blind deblurring,” CVPR, pp. 1701–1709.

  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.


  • Levin, A., Weiss, Y., Durand, F., Freeman, W. T. (2011a). “Efficient Marginal Likelihood Optimization in Blind Deconvolution,” CVPR, pp. 2657–2664.

  • Levin, A., Weiss, Y., Durand, F., & Freeman, W. T. (2011b). Understanding blind deconvolution algorithms. IEEE Trans. PAMI, 33(12), 2354–2367.


  • Li, L., Pan, J., Lai, W.-S., et al. (2018). Learning a discriminative prior for blind image deblurring, CVPR.

  • Li, Y., Tofighi, M., Geng, J., Monga, V., & Eldar, Y. C. (2020). Efficient and interpretable deep blind image deblurring via algorithm unrolling. IEEE Transactions on Computational Imaging, 6, 666–681.


  • Liu, J., Yan, M., & Zeng, T. (2021). Surface-aware blind image deblurring. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3), 1041–1055.


  • Michaeli, T., & Irani, M. (2014). Blind deblurring using internal patch recurrence. ECCV, pp. 783–798.

  • Molina, R., Mateos, J., & Katsaggelos, A. K. (2006). Blind deconvolution using a variational approach to parameter, image, and blur estimation. IEEE TIP, 15(12), 3715–3727.


  • Money, J. H., & Kang, S. H. (2008). Total variation minimizing blind deconvolution with shock filter reference. Image and Vision Computing, 26(2), 302–314.


  • Nah, S., Kim, T. H., Lee, K. M. (2017). “Deep multi-scale convolutional neural network for dynamic scene deblurring,” CVPR.

  • Nimisha, T. M., Singh, A. K., Rajagopalan, A. N. (2017). “Blur-invariant deep learning for blind-deblurring,” ICCV.

  • Pan, J., Hu, Z., Su, Z., Yang, M.-H. (2014a). “Deblurring text images via L0-regularized intensity and gradient prior,” CVPR.

  • Pan, J., Hu, Z., Su, Z., & Yang, M.-H. (2014b). “Deblurring face images with exemplars,” ECCV, pp. 47–62.

  • Pan, J., Sun, D., Pfister, H., & Yang, M.-H. (2018). Deblurring images via dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 2315–2328.

  • Pan, J., Lin, Z., Su, Z., Yang, M.-H. (2016). “Robust kernel estimation with outliers handling for image deblurring,” CVPR.

  • Pan, J., & Su, Z. (2013). Fast L0-regularized kernel estimation for robust motion deblurring. IEEE Signal Processing Letters, 20(9), 1107–1114.


  • Perrone, D., & Favaro, P. (2016a). A clearer picture of total variation blind deconvolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(6), 1041–1055.


  • Perrone, D., & Favaro, P. (2016b). A logarithmic image prior for blind deconvolution. IJCV, 117(2), 159–172.


  • Ren, W., Cao, X., Pan, J., Guo, X., Zuo, W., & Yang, M.-H. (2016). Image deblurring via enhanced low-rank prior. IEEE Transactions Image Processing, 25(7), 3426–3437.


  • Romano, Y., Elad, M., & Milanfar, P. (2017). The little engine that could: Regularization by denoising (RED). SIAM Journal on Imaging Sciences, 10, 1804–1844.


  • Rond, A., Giryes, R., & Elad, M. (2016). Poisson inverse problems by the plug-and-play scheme. Journal of Visual Communication and Image Representation, 41, 96–108.


  • Roth, S., & Black, M. J. (2009). Fields of Experts. International Journal of Computer Vision, 82(2), 205–229.


  • Schuler, C., Hirsch, M., Harmeling, S., & Schölkopf, B. (2016). Learning to deblur. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7), 1439–1451.


  • Shan, Q., Jia, J., & Agarwala, A. (2008). High-quality motion deblurring from a single image. SIGGRAPH, pp. 1–10.


  • Shao, W. Z., Lin, Y. Z., Bao, B. K., Wang, L. Q., Ge, Q., & Li, H. (2018). Blind deblurring using discriminative image smoothing. Chinese Conference on Pattern Recognition and Computer Vision, pp. 490–500.

  • Shao, W., Deng, H., Ge, Q., Li, H., & Wei, Z. (2016). Regularized motion blur-kernel estimation with adaptive sparse image prior learning. Pattern Recognition, 51, 402–424.


  • Shao, W. Z., Ge, Q., Deng, H. S., Wei, Z. H., & Li, H. B. (2015b). Motion deblurring using non-stationary image modeling. Journal of Mathematical Imaging and Vision, 52(2), 234–248.


  • Shao, W., Li, H., & Elad, M. (2015a). Bi-L0-L2-norm regularization for blind motion deblurring. Journal Visual Communication and Image Representation, 33, 42–59.


  • Shearer, P., Gilbert, A. C., Hero III, A. O. (2013). “Correcting camera shake by incremental sparse approximation,” ICIP.

  • Sreehari, S., Venkatakrishnan, S., Wohlberg, B., Drummy, L. F., Simmons, J. P., & Bouman, C. A. (2015). “Plug-and-play priors for bright field electron tomography and sparse interpolation,” arXiv preprint arXiv:1512.07331.

  • Sun, L., Cho, S., Wang, J., & Hays, J. (2013). “Edge-based blur kernel estimation using patch priors,” ICCP.

  • Sun, J., Cao, W., Xu, Z., Ponce, J. (2015). “Learning a convolutional neural network for non-uniform motion blur removal,” CVPR, pp. 769–777.

  • Tao, X., Gao, H., & Wang, Y. (2018). “Scale-recurrent network for deep image deblurring,” CVPR, pp. 8174–8182.

  • Venkatakrishnan, S. V., Bouman, C. A., & Wohlberg, B. (2013). "Plug-and-play priors for model based reconstruction," IEEE Global Conference on Signal and Information Processing, pp. 945–948.

  • Wang, R., & Tao, D. (2014). Recent progress in image deblurring. arXiv:1409.6838.

  • Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE TIP, 13(4), 600–612.


  • Wen, F., Ying, R., Liu, P., et al. (2019). “Blind image deblurring using patch-wise minimal pixels regularization,” arXiv:1906.06642.

  • Whyte, O., Sivic, J., Zisserman, A. (2011). “Deblurring Shaken and Partially Saturated Images,” IEEE Color and Photometry in Computer Vision Workshop, in conjunction with ICCV.

  • Whyte, O., Sivic, J., & Zisserman, A. (2014). Deblurring shaken and partially saturated images. IJCV, 110(2), 185–201.

  • Whyte, O., Sivic, J., Zisserman, A., & Ponce, J. (2012). Non-uniform deblurring for shaken images. IJCV, 98(2), 168–186.


  • Wieschollek, P., Schölkopf, B., Lensch, H. P. A., et al. (2016). "End-to-end learning for image burst deblurring," ACCV.

  • Wipf, D., & Zhang, H. (2014). Revisiting Bayesian blind deconvolution. Journal of Machine Learning Research, 15, 3595–3634.


  • Wu, J., & Di, X. (2020). Integrating neural networks into the blind deblurring framework to compete with the end-to-end learning-based methods. IEEE Transactions on Image Processing, 29, 6841–6851.


  • Xu, L., Zheng, S., Jia, J. (2013). “Unnatural L0 sparse representation for natural image deblurring,” CVPR, pp. 1107–1114.

  • Xu, L., & Jia, J. (2010). Two-phase kernel estimation for robust motion deblurring. ECCV, Part I, LNCS, vol. 6311, pp. 157–170.


  • Xu, L., Yan, Q., Xia, Y., & Jia, J. (2012). Structure extraction from texture via relative total variation. ACM Transactions on Graphics, 31(6), 10.


  • Yan, Y., Ren, W., Guo, Y., Wang, R., & Cao, X. (2017). “Image deblurring via extreme channels prior,” CVPR.

  • You, Y.-L., & Kaveh, M. (1996). A regularization approach to joint blur identification and image restoration. IEEE Transactions on Image Processing, 5(3), 416–428.


  • Zhang, H., Wipf, D., & Zhang, Y. (2013). Multi-image blind deblurring using a coupled adaptive sparse prior. CVPR.

  • Zhang, J., Pan, J., Ren, J., Song, Y., Bao, L., Lau, R., Yang, M.-H. (2018). “Dynamic scene deblurring using spatially variant recurrent neural networks,” CVPR.

  • Zhong, L., Cho, S., Metaxas, D., Paris, S., & Wang, J. (2013). “Handling noise in single image deblurring using directional filters,” CVPR, pp. 612–619.

  • Zoran, D., & Weiss, Y. (2011). “From learning models of natural image patches to whole image restoration,” ICCV.

  • Zuo, W., Ren, D., Gu, S., Lin, L., & Zhang, L. (2016). Learning iteration-wise generalized shrinkage-thresholding operators for blind deconvolution. IEEE TIP, 25(4), 1751–1764.



Acknowledgements

The work was supported in part by the Natural Science Foundation of China (61771250, 61972213, 11901299) and in part by the Fundamental Research Funds for the Central Universities (30918014108).

Author information


Corresponding author

Correspondence to Wen-Ze Shao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shao, WZ., Deng, HS., Ge, Q. et al. RTR-DPD: reweighted Tikhonov regularization for blind deblurring via dual principles of discriminativeness. Multidim Syst Sign Process 34, 291–320 (2023). https://doi.org/10.1007/s11045-022-00863-7

