Deep Richardson–Lucy Deconvolution for Low-Light Image Deblurring

International Journal of Computer Vision

Abstract

Images taken under low-light conditions often contain blur and saturated pixels at the same time. Deblurring images with saturated pixels is quite challenging: because of the limited dynamic range, saturated pixels are clipped in the imaging process and thus cannot be described by the linear blur model. Previous methods use manually designed smooth functions to approximate the clipping procedure, and their deblurring processes often require empirically defined parameters, which may not be optimal for different images. In this paper, we develop a data-driven approach that models the saturated pixels by a learned latent map. Based on the new model, the non-blind deblurring task can be formulated as a maximum a posteriori (MAP) problem, which is solved effectively by iteratively computing the latent map and the latent image. Specifically, the latent map is predicted by a map estimation network, and the latent image is estimated by a Richardson–Lucy (RL)-based updating scheme. To obtain high-quality deblurred images without amplified artifacts, we develop a prior estimation network whose output is integrated into the RL scheme. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art algorithms, both quantitatively and qualitatively, on synthetic and real-world images.

Notes

  1. The dynamic range of all images in this paper is [0, 1]. The threshold and N are randomly sampled from [0.75, 0.95] and [1.5, 5], respectively (see the sketch after these notes for one possible use of these quantities).

  2. Ignoring noise, the blur model in NBDN (Chen et al., 2021) can be formulated as \(\tilde{M}\circ B=\tilde{M}\circ (I\otimes K)\) s.t. \(\tilde{M}\in \{0,1\}\), where pixels that violate the linear blur model are assigned small weights so that they are excluded from the deblurring process. In our model, the blur model is \(B=M\circ (I\otimes K)\) s.t. \(M\in \left[ 0,1\right] \), and all pixels are considered during deblurring (see the sketch below).

  3. http://cg.postech.ac.kr/research/realblur/.
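
To make Notes 1 and 2 concrete, the following is a minimal NumPy sketch. It samples the threshold and N as in Note 1 and contrasts the two blur formulations of Note 2. How N and the threshold enter the clipping, the gain interpretation of N, and the illustrative latent map are our assumptions for illustration only; they are not the paper's exact synthesis pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
I = rng.random((32, 32))                 # sharp image, dynamic range [0, 1]
K = np.ones((5, 5)) / 25.0               # normalized blur kernel (sums to 1)
conv = fftconvolve(I, K, mode='same')    # linear blur I (x) K

# Note 1: sampling ranges used when synthesizing saturated data.
threshold = rng.uniform(0.75, 0.95)
N = rng.uniform(1.5, 5.0)
boosted = N * conv                       # assumption: N acts as an intensity gain

# NBDN (Chen et al., 2021): a binary mask that drops clipped pixels from the
# data term, i.e. M~ o B = M~ o (I (x) K) with M~ in {0, 1}.
M_binary = (boosted <= threshold).astype(float)

# This paper: a continuous latent map M in [0, 1]; multiplying the linear
# prediction by M reproduces the clipped observation, so every pixel stays
# in the model: B = M o (I (x) K).
M_latent = np.minimum(threshold / (boosted + 1e-8), 1.0)
B = M_latent * boosted                   # equals min(boosted, threshold)
```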

References

  • Alvarez, L., & Mazorra, L. (1994). Signal and image restoration using shock filters and anisotropic diffusion. SIAM Journal on Numerical Analysis, 31(2), 590–605.

  • Cannell, M. B., McMorland, A., & Soeller, C. (2006). Image enhancement by deconvolution. Handbook of biological confocal microscopy (pp. 488–500).

  • Chakrabarti, A. (2016). A neural approach to blind motion deblurring. In ECCV.

  • Chen, L., Fang, F., Lei, S., Li, F., & Zhang, G. (2020). Enhanced sparse model for blind deblurring. In ECCV.

  • Chen, L., Fang, F., Wang, T., & Zhang, G. (2019). Blind image deblurring with local maximum gradient prior. In CVPR.

  • Chen, L., Fang, F., Zhang, J., Liu, J., & Zhang, G. (2020). OID: Outlier identifying and discarding in blind image deblurring. In ECCV.

  • Chen, L., Zhang, J., Lin, S., Fang, F., & Ren, J.S. (2021). Blind deblurring for saturated images. In CVPR.

  • Chen, L., Zhang, J., Pan, J., Lin, S., Fang, F., & Ren, J. S. (2021). Learning a non-blind deblurring network for night blurry images. In CVPR.

  • Chen, C., & Mangasarian, O. L. (1996). A class of smoothing functions for nonlinear and mixed complementarity problems. Computational Optimization and Applications, 5, 97–138.

  • Cho, S., Wang, J., & Lee, S. (2011). Handling outliers in non-blind image deconvolution. In ICCV.

  • Dey, N., Blanc-Féraud, L., Zimmer, C., Kam, Z., Olivo-Marin, J. C., & Zerubia, J. (2004). A deconvolution method for confocal microscopy with total variation regularization. In 2nd IEEE International symposium on biomedical imaging: Nano to macro (IEEE Cat No. 04EX821) (pp. 1223–1226). IEEE.

  • Dong, J., Pan, J., Su, Z., & Yang, M. H. (2017). Blind image deblurring with outlier handling. In ICCV.

  • Dong, J., Pan, J., Sun, D., Su, Z., & Yang, M. H. (2018). Learning data terms for non-blind deblurring. In ECCV.

  • Dong, J., Roth, S., & Schiele, B. (2020). Deep wiener deconvolution: Wiener meets deep learning for image deblurring. In NeurIPS.

  • Eboli, T., Sun, J., & Ponce, J. (2020). End-to-end interpretable learning of non-blind image deblurring. In ECCV.

  • Faramarzi, E., Rajan, D., & Christensen, M. P. (2013). Unified blind method for multi-image super-resolution and single/multi-image blur deconvolution. IEEE TIP, 22(6), 2101–2114.

  • Gong, D., Tan, M., Zhang, Y., van den Hengel, A., & Shi, Q. (2017). Self-paced kernel estimation for robust blind image deblurring. In ICCV.

  • Gong, D., Zhang, Z., Shi, Q., van den Hengel, A., Shen, C., & Zhang, Y. (2020). Learning an optimizer for image deconvolution. TNNLS.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV.

  • Hu, Z., Cho, S., Wang, J., & Yang, M. H. (2014). Deblurring low-light images with light streaks. In CVPR.

  • Katkovnik, V., Egiazarian, K., & Astola, J. (2005). A spatially adaptive nonparametric regression image deblurring. IEEE TIP, 14(10), 1469–1478.

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

  • Krishnan, D., & Fergus, R. (2009). Fast image deconvolution using hyper-Laplacian priors. In NIPS.

  • Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., & Matas, J. (2018). DeblurGAN: Blind motion deblurring using conditional adversarial networks. In CVPR.

  • Kupyn, O., Martyniuk, T., Wu, J., & Wang, Z. (2019). DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In ICCV.

  • Laghrib, A., Ezzaki, M., El Rhabi, M., Hakim, A., Monasse, P., & Raghay, S. (2018). Simultaneous deconvolution and denoising using a second order variational approach applied to image super resolution. CVIU, 168, 50–63.

  • Levin, A., Fergus, R., Durand, F., & Freeman, W. T. (2007). Image and depth from a conventional camera with a coded aperture. ACM TOG.

  • Levin, A., Weiss, Y., Durand, F., & Freeman, W. T. (2009). Understanding and evaluating blind deconvolution algorithms. In CVPR.

  • Lucy, L. B. (1974). An iterative technique for the rectification of observed distributions. The Astronomical Journal, 79, 745.

  • Nah, S., Kim, T. H., & Lee, K. M. (2017). Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR.

  • Osher, S., & Rudin, L. I. (1990). Feature-oriented image enhancement using shock filters. SIAM Journal on Numerical Analysis, 27(4), 919–940.

  • Pan, J., Lin, Z., Su, Z., & Yang, M. H. (2016). Robust kernel estimation with outliers handling for image deblurring. In CVPR.

  • Pan, J., Sun, D., Pfister, H., & Yang, M. H. (2016). Blind image deblurring using dark channel prior. In CVPR.

  • Park, D., Kang, D. U., Kim, J., & Chun, S. Y. (2020). Multi-temporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training. In ECCV.

  • Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L. (2019). PyTorch: An imperative style, high-performance deep learning library. In NeurIPS.

  • Perrone, D., & Favaro, P. (2014). Total variation blind deconvolution: The devil is in the details. In CVPR.

  • Ren, W., Pan, J., Cao, X., & Yang, M. H. (2017). Video deblurring via semantic segmentation and pixel-wise non-linear kernel. In ICCV.

  • Ren, W., Zhang, J., Ma, L., Pan, J., Cao, X., Zuo, W., Liu, W., & Yang, M. H. (2018). Deep non-blind deconvolution via generalized low-rank approximation. In NIPS.

  • Ren, D., Zuo, W., Zhang, D., Zhang, L., & Yang, M. H. (2019). Simultaneous fidelity and regularization learning for image restoration. TPAMI.

  • Richardson, W. H. (1972). Bayesian-based iterative method of image restoration. JOSA, 62(1), 55–59.

  • Rim, J., Lee, H., Won, J., & Cho, S. (2020). Real-world blur dataset for learning and benchmarking deblurring algorithms. In ECCV.

  • Roth, S., & Black, M. J. (2005). Fields of experts: A framework for learning image priors. In CVPR.

  • Schmidt, U., Rother, C., Nowozin, S., Jancsary, J., & Roth, S. (2013). Discriminative non-blind deblurring. In CVPR.

  • Schmidt, U., Schelten, K., & Roth, S. (2011). Bayesian deblurring with integrated noise estimation. In CVPR.

  • Schuler, C. J., Christopher Burger, H., Harmeling, S., & Scholkopf, B. (2013). A machine learning approach for non-blind image deconvolution. In CVPR.

  • Son, H., & Lee, S. (2017). Fast non-blind deconvolution via regularized residual networks with long/short skip-connections. In ICCP.

  • Tai, Y. W., Tan, P., & Brown, M. S. (2010). Richardson-Lucy deblurring for scenes under a projective motion path. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8), 1603–1618.

  • Tao, X., Gao, H., Shen, X., Wang, J., & Jia, J. (2018). Scale-recurrent network for deep image deblurring. In CVPR.

  • Tsai, F. J., Peng, Y. T., Lin, Y. Y., Tsai, C. C., & Lin, C. W. (2022). Stripformer: Strip transformer for fast image deblurring. In ECCV.

  • Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., & Li, H. (2022). Uformer: A general U-shaped transformer for image restoration. In CVPR.

  • Whyte, O., Sivic, J., & Zisserman, A. (2014). Deblurring shaken and partially saturated images. IJCV.

  • Wiener, N. (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications (Vol. 113). Cambridge: MIT Press.

  • Xiao, L., Wang, J., Heidrich, W., & Hirsch, M. (2016). Learning high-order filters for efficient blind deconvolution of document photographs. In ECCV.

  • Xu, L., Ren, J. S., Liu, C., & Jia, J. (2014). Deep convolutional neural network for image deconvolution. In NIPS.

  • Xu, L., Zheng, S., & Jia, J. (2013). Unnatural \(l_0\) sparse representation for natural image deblurring. In CVPR.

  • Yuan, L., Sun, J., Quan, L., & Shum, H. Y. (2008). Progressive inter-scale and intra-scale non-blind image deconvolution. ACM TOG, 27(3), 1–10.

  • Zhang, H., Dai, Y., Li, H., & Koniusz, P. (2019). Deep stacked hierarchical multi-patch network for image deblurring. In CVPR.

  • Zhang, K., Liang, J., Van Gool, L., & Timofte, R. (2021). Designing a practical degradation model for deep blind image super-resolution. In ICCV.

  • Zhang, J., Pan, J., Lai, W. S., Lau, R. W. H., & Yang, M. H. (2017a). Learning fully convolutional networks for iterative non-blind deconvolution. In CVPR.

  • Zhang, K., Zuo, W., Gu, S., & Zhang, L. (2017b). Learning deep CNN denoiser prior for image restoration. In CVPR.

Acknowledgements

F. Fang was supported by the National Key R&D Program of China (2022ZD0161800), the NSFC-RGC (61961160734), and the Shanghai Rising-Star Program (21QA1402500). J. Pan was supported by the National Natural Science Foundation of China (No. U22B2049).

Author information

Correspondence to Jiawei Zhang or Faming Fang.

Additional information

Communicated by Shaodi You.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

We present the detailed derivations for solving Eq. (13) in this Appendix. Reformulating Eq. (13) into a vectorized form, we obtain:

$$\begin{aligned} \min _{\textbf {I}}\ {\textbf {M}}^\text {T}{\textbf {K}}{\textbf {I}} - {\textbf {B}}^\text {T}\log (diag({\textbf {M}}){\textbf {K}}{\textbf {I}}) + \lambda \overline{{\textbf {1}}}^\text {T} P({\textbf {I}})\overline{{\textbf {1}}}, \end{aligned}$$
(17)

where \({\textbf {M}}\), \({\textbf {B}}\), and \({\textbf {I}}\) denote the vectorized forms of M, B, and I; \({\textbf {K}}\) is the Toeplitz matrix of K w.r.t. I; and \(\overline{{\textbf {1}}}\) denotes a vector whose elements are all ones. Denoting the second term of Eq. (17) by \(\textbf{A}\), its derivative w.r.t. \({\textbf {I}}\) is:

$$\begin{aligned} \frac{\partial \textbf{A}}{\partial {\textbf {I}}}&= \frac{\partial \, diag({\textbf {M}}){\textbf {K}}{\textbf {I}}}{\partial {\textbf {I}}}\,\frac{\partial \log (diag({\textbf {M}}){\textbf {K}}{\textbf {I}})}{\partial \, diag({\textbf {M}}){\textbf {K}}{\textbf {I}}}\,\frac{\partial {\textbf {B}}^\text {T}\log (diag({\textbf {M}}){\textbf {K}}{\textbf {I}})}{\partial \log (diag({\textbf {M}}){\textbf {K}}{\textbf {I}})}\\&= ({\textbf {K}}^\text {T}diag({\textbf {M}}))\, diag\left( \frac{1}{diag({\textbf {M}}){\textbf {K}}{\textbf {I}}}\right) {\textbf {B}}\\&= {\textbf {K}}^\text {T}diag\left( \frac{1}{diag({\textbf {M}}){\textbf {K}}{\textbf {I}}}\right) (diag({\textbf {M}}){\textbf {B}}), \end{aligned}$$
(18)

where the division is element-wise. We can then solve Eq. (17) by setting its derivative to zero:

$$\begin{aligned} {\textbf {K}}^\text {T}{\textbf {M}} - {\textbf {K}}^\text {T}diag\left( \frac{1}{diag({\textbf {M}}){\textbf {K}}{\textbf {I}}}\right) (diag({\textbf {M}}){\textbf {B}}) + \lambda P'({\textbf {I}}) = 0. \end{aligned}$$
(19)

Rewriting the above equation in its matrix form, we have:

$$\begin{aligned} M\otimes \widetilde{K} - \frac{M\circ B}{M\circ (I\otimes K)}\otimes \widetilde{K} + \lambda P'_I(I) = 0, \end{aligned}$$
(20)

where \(\widetilde{K}\) is the adjoint of K, obtained by flipping K upside down and left-to-right, and \(P'_I(I)\) is the first-order derivative of \(P_I(I)\) w.r.t. I. Recall that the kernel sums to one, i.e., \(\overline{{\textbf {1}}}^\text {T} \widetilde{{\textbf {K}}} = 1\), where \(\widetilde{{\textbf {K}}}\) is the vectorized form of \(\widetilde{K}\). Thus, we further have:

$$\begin{aligned} M\otimes \widetilde{K} - \frac{M\circ B}{M\circ (I\otimes K)}\otimes \widetilde{K} + \lambda P'_I(I) + {\textbf {1}} - {\textbf {1}}\otimes \widetilde{K} = 0, \end{aligned}$$
(21)
$$\begin{aligned} \left( \frac{B}{I\otimes K} - M + {\textbf {1}}\right) \otimes \widetilde{K} = {\textbf {1}}+\lambda P'_I(I), \end{aligned}$$
(22)
$$\begin{aligned} I\circ \left( \left( \frac{B}{I\otimes K} - M + {\textbf {1}}\right) \otimes \widetilde{K}\right) = I\circ ({\textbf {1}}+\lambda P'_I(I)), \end{aligned}$$
(23)

where \({\textbf {1}}\) is an all-ones image.
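
As a quick numerical check of the step from the vectorized form to the matrix form (Eqs. (19) and (20)), the following sketch verifies that convolution with the flipped kernel \(\widetilde{K}\) realizes the adjoint \({\textbf {K}}^\text {T}\), i.e. \(\langle I\otimes K, J\rangle = \langle I, J\otimes \widetilde{K}\rangle \). This is a minimal sketch assuming an odd-sized kernel and zero-padded 'same' convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
I = rng.random((16, 16))
J = rng.random((16, 16))
K = rng.random((5, 5))
K /= K.sum()                                   # kernel sums to one
K_tilde = K[::-1, ::-1]                        # flip upside down and left-to-right

lhs = np.sum(fftconvolve(I, K, mode='same') * J)        # <I (x) K, J>
rhs = np.sum(I * fftconvolve(J, K_tilde, mode='same'))  # <I, J (x) K~>
assert np.isclose(lhs, rhs)                    # flipping realizes K^T
```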

To solve Eq. (23), we adopt a fixed-point iteration: at a fixed point, \(I^{t+1}/I^t\) equals \({\textbf {1}}\), so Eq. (23) can be rewritten as:

$$\begin{aligned} \frac{I^{t+1}}{I^t} = \frac{\left( \frac{B}{I^t\otimes K} - M + {\textbf {1}}\right) \otimes \widetilde{K}}{{\textbf {1}}+\lambda P'_I(I^t)}. \end{aligned}$$
(24)

Multiplying both sides by \(I^t\), we finally obtain Eq. (14) of the main manuscript:

$$\begin{aligned} I^{t+1} = \frac{I^t \circ \left( \left( \frac{B}{I^t\otimes K} - M + {\textbf {1}}\right) \otimes \widetilde{K}\right) }{{\textbf {1}}+\lambda P'_I(I^t)}. \end{aligned}$$
(25)
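
For reference, here is a minimal NumPy sketch of one update of Eq. (25). In the proposed method, M comes from the map estimation network and \(P'_I\) from the prior estimation network; both are passed in as placeholders here, and the small eps guarding the divisions is our addition for numerical stability.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_update(I_t, B, K, M, P_prime, lam=0.01, eps=1e-8):
    """One RL-style multiplicative update following Eq. (25).

    I_t: current latent image estimate; B: blurry observation;
    K: blur kernel (sums to 1); M: latent map in [0, 1];
    P_prime: callable returning the prior gradient P'_I at I_t;
    lam: prior weight (illustrative default, not the paper's setting).
    """
    K_tilde = K[::-1, ::-1]                     # flipped kernel (adjoint)
    conv = fftconvolve(I_t, K, mode='same')     # I^t (x) K
    ratio = B / (conv + eps) - M + 1.0          # B / (I^t (x) K) - M + 1
    numer = I_t * fftconvolve(ratio, K_tilde, mode='same')
    return numer / (1.0 + lam * P_prime(I_t) + eps)

# Example usage with a zero prior-gradient stub:
# I_est = B.copy()
# for _ in range(30):
#     I_est = rl_update(I_est, B, K, M, lambda x: np.zeros_like(x))
```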

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chen, L., Zhang, J., Li, Z. et al. Deep Richardson–Lucy Deconvolution for Low-Light Image Deblurring. Int J Comput Vis 132, 428–445 (2024). https://doi.org/10.1007/s11263-023-01877-9
