Abstract
The recovery of images from observations that are degraded by a linear operator and further corrupted by Poisson noise is an important task in modern imaging applications such as astronomy and biomedicine. Gradient-based regularizers involving the popular total variation semi-norm have become standard for Poisson image restoration owing to their edge-preserving ability. Various efficient algorithms have been developed for solving the corresponding minimization problems with non-smooth regularization terms. In this paper, motivated by the alternating direction minimization algorithm and by the fast convergence rate of Newton's method, we propose inexact alternating direction methods that exploit proximal Hessian information of the objective function, in a manner reminiscent of Newton descent methods. We also establish the global convergence of the proposed algorithms under certain conditions. Finally, numerical experiments on Poisson image deblurring illustrate that the proposed algorithms outperform current state-of-the-art algorithms.
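To make the problem setting concrete, the following is a minimal, self-contained sketch of a proximal-linearized ADMM iteration for TV-regularized Poisson deblurring in one dimension. This is not the paper's IADMNDA algorithm; the operators, parameter values, and function names are illustrative assumptions chosen only to show the generic structure (linearized Kullback-Leibler data step, TV shrinkage step, dual update).

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of t*|.|
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def poisson_tv_deblur(K, f, alpha=0.05, delta=10.0, mu=1.0, iters=300, eps=1e-8):
    """Sketch: min_u sum(Ku - f log Ku) + alpha*||Du||_1 via proximal-linearized ADMM.

    K     : blur matrix (n x n), f : observed counts,
    alpha : TV weight, delta : proximal weight, mu : penalty parameter.
    """
    n = K.shape[1]
    # Forward-difference operator (discrete gradient with Neumann-style boundary)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    u = np.full(n, max(f.mean(), eps))
    d = D @ u
    b = np.zeros(n - 1)
    A = delta * np.eye(n) + mu * D.T @ D       # fixed normal-equation matrix
    for _ in range(iters):
        Ku = np.maximum(K @ u, eps)
        grad = K.T @ (1.0 - f / Ku)            # gradient of the KL data term
        rhs = delta * u - grad + mu * D.T @ (d - b)
        u = np.maximum(np.linalg.solve(A, rhs), eps)   # keep intensities positive
        d = soft_threshold(D @ u + b, alpha / mu)      # TV shrinkage step
        b = b + D @ u - d                              # dual (Bregman) update
    return u

def objective(K, f, u, alpha, eps=1e-8):
    # Kullback-Leibler data fidelity plus anisotropic TV penalty
    Ku = np.maximum(K @ u, eps)
    return np.sum(Ku - f * np.log(Ku)) + alpha * np.sum(np.abs(np.diff(u)))
```

On a small piecewise-constant test signal blurred by a circulant kernel, the iteration reduces the composite objective while keeping the estimate positive; replacing the scalar weight `delta` by a Hessian-informed weight matrix is the kind of refinement the paper pursues.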
References
Sarder, P., Nehorai, A.: Deconvolution method for 3-D fluorescence microscopy images. IEEE Signal Process. Mag. 23, 32–45 (2006)
Müller, J., Brune, C., Sawatzky, A., Kösters, T., Schäfers, K., Burger, M.: Reconstruction of short time PET scans using Bregman iterations. In: IEEE Nuclear Science Symposium and Medical Imaging Conference, pp. 2383–2385. (2011)
Bertero, M., Boccacci, P., Desiderà, G., Vicidomini, G.: Image deblurring with Poisson data: from cells to galaxies. Inverse Probl. 25, 123006 (2009)
Csiszár, I.: Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Stat. 19, 2032–2066 (1991)
Chambolle, A., Lions, P.-L.: Image recovery via total variation minimization and related problems. Numer. Math. 76, 167–188 (1997)
Candès, E.J., Donoho, D.L.: New tight frames of curvelets and optimal representations of objects with piecewise-\(C^{2}\) singularities. Commun. Pure Appl. Math. 57, 219–266 (2004)
Daubechies, I., Han, B., Ron, A., Shen, Z.: Framelets: MRA-based constructions of wavelet frames. Appl. Comput. Harmon. Anal. 14(1), 1–46 (2003)
Chen, D.Q., Cheng, L.Z.: Deconvolving Poissonian images by a novel hybrid variational model. J. Visual Commun. Image Rep. 22(7), 643–652 (2011)
Steidl, G., Weickert, J., Brox, T., Mrazek, P., Welk, M.: On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs. SIAM J. Numer. Anal. 42(2), 686–713 (2004)
Cai, J.F., Dong, B., Osher, S., Shen, Z.W.: Image restoration: total variation, wavelet frames, and beyond. J. Am. Math. Soc. 25(4), 1033–1089 (2012)
Sawatzky, A., Brune, C., Wubbeling, F., Kosters, T., Schafers, K., Burger, M.: Accurate EM–TV algorithm in PET with low SNR. In: IEEE Nuclear Science Symposium Conference Record, pp. 5133–5137. (2008)
Bonettini, S., Zanella, R., Zanni, L.: A scaled gradient projection method for constrained image deblurring. Inverse Probl. 25, 015002 (2009)
Bonettini, S., Ruggiero, V.: An alternating extragradient method for total variation based image restoration from Poisson data. Inverse Probl. 27, 095001 (2011)
Setzer, S., Steidl, G., Teuber, T.: Deblurring Poissonian images by split Bregman techniques. J. Visual Commun. Image Rep. 21(3), 193–199 (2010)
Afonso, M., Bioucas-Dias, J., Figueiredo, M.: Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(9), 2345–2356 (2010)
Chen, D.Q., Cheng, L.Z.: Spatially adapted regularization parameter selection based on the local discrepancy function for Poissonian image deblurring. Inverse Probl. 28, 015004 (2012)
Dey, N., Blanc-Féraud, L., Zimmer, C., Roux, P., Kam, Z., Olivo-Marin, J.-C., Zerubia, J.: Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution. Microsc. Res. Tech. 69(4), 260–266 (2006)
Woo, H., Yun, S.: Proximal linearized alternating direction method for multiplicative denoising. SIAM J. Sci. Comput. 35(2), 336–358 (2013)
Chen, D.Q., Zhou, Y.: Multiplicative denoising based on linearized alternating direction method using discrepancy function constraint. J. Sci. Comput. 60(3), 483–504 (2014)
Cai, J.F., Osher, S., Shen, Z.: Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8(2), 337–369 (2009)
Jeong, T., Woo, H., Yun, S.: Frame-based Poisson image restoration using a proximal linearized alternating direction method. Inverse Probl. 29(7), 075007 (2013)
Chen, D.Q.: Regularized generalized inverse accelerating linearized alternating minimization algorithm for frame-based Poissonian image deblurring. SIAM J. Imag. Sci. 7(2), 716–739 (2014)
Zhu, M., Chan, T.: An efficient primal-dual hybrid gradient algorithm for total variation image restoration. Technical Report 08-34, CAM UCLA. (2008)
Esser, E., Zhang, X., Chan, T.F.: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imag. Sci. 3(4), 1015–1046 (2010)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Landi, G., Piccolomini, E.L.: NPTool: a Matlab software for nonnegative image restoration with Newton projection methods. Numer. Algorithms 62(3), 487–504 (2013)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. SIAM Multiscale Model. Simul. 4(4), 1168–1200 (2005)
Barzilai, J., Borwein, J.: Two-point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)
Avriel, M.: Nonlinear Programming: Analysis and Methods. Dover Publications Inc., Mineola (2003)
Bertsekas, D.P.: Convex Optimization Theory. Athena Scientific, Belmont (2009)
Additional information
This work was supported in part by the National Natural Science Foundation of China under Grants 61401473 and 61271014.
Appendix: Proof of Theorem 1
The proof of Theorem 1 is similar to those in [19, 20]; only a few changes are needed owing to the introduction of the Newton descent algorithm. The full proof is omitted for reasons of space; we sketch just enough to make the changes clear.
Proof
Denote \(L_{k}=(\omega ^{k})^{-1}\left( \delta ^{k} K^{T}K + \alpha \nabla ^{T}\nabla \right) \). According to the iterative formula with respect to u, we know that \(u^{k+1}\) is the solution of the minimization problem
Therefore, the sequence \(\{u^{k}, d^{k}, p^{k}\}\) generated by the IADMNDA algorithm satisfies
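The displayed subproblem is omitted in this excerpt. For orientation only, a proximal-linearized \(u\)-subproblem of this type, written with the weight matrix \(L_{k}\) defined above, generically takes the form (a hedged reconstruction of the standard structure, not a verbatim reproduction of the paper's equation; the augmented Lagrangian coupling terms in \(d^{k}\) and \(p^{k}\) are left schematic):

\[
u^{k+1} = \arg\min_{u}\; \big\langle \nabla D_{f}(u^{k}),\, u-u^{k}\big\rangle \;+\; \tfrac{1}{2}\,\big\Vert u-u^{k}\big\Vert^{2}_{L_{k}} \;+\; \text{(coupling terms in } d^{k}, p^{k}\text{)}.
\]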
Because \(\left( u^{*}, d^{*}, p^{*}\right) \) is a solution of (16), it is also a KKT point satisfying (17).
Similarly to [19, 20], denote the errors by \(u^{k}_{e} = u^{k}-u^{*}\), \(d^{k}_{e} = d^{k}-d^{*}\), and \(p^{k}_{e} = p^{k}-p^{*}\). Subtracting (17) from (20), taking the inner products with \(u^{k+1}_{e}\), \(d^{k+1}_{e}\) and \(p^{k}_{e}\), and dropping the non-negative terms, we obtain that
where \(\zeta ^{k}\in [u^{*}, u^{k}]\), with \([u^{*}, u^{k}]\) denoting the line segment between \(u^{*}\) and \(u^{k}\). Since \(\delta _{\min }\ge \overline{\gamma _{D}}\), the last term on the left-hand side of inequality (21) is easily seen to be non-negative.
According to the definition of \(\omega ^{k}\) in Algorithm 1, we know that \(\omega ^{k}\) is monotone non-increasing, and hence, there exists \(\omega ^{*}\) such that \(\lim \limits _{k\rightarrow +\infty }\omega ^{k}=\omega ^{*}\).
Denote \(\small {\tilde{L}_{k}=(\omega ^{*})^{-1}\left( \delta ^{k} K^{T}K + \alpha \nabla ^{T}\nabla \right) }\). By the boundedness of \(\delta ^{k}\) and \(\lim \limits _{k\rightarrow +\infty }\omega ^{k}=\omega ^{*}\) we have that
Since \(\delta ^{k+1}-\delta ^{k}\le \underline{\gamma _{D}}\), we know that \(\delta ^{k}K^{T}K + \nabla ^{2}D_{f}(\zeta ^{k})\succeq \delta ^{k+1}K^{T}K\) according to the condition (18). Therefore, by the definition of \(\tilde{L}_{k}\) we further have
According to (22), we can easily conclude that there exists some \(k_{0}\) such that
Next, summing (21) from some \(k_{0}\) to \(+\infty \), using (23), and dropping the remaining non-negative terms, we obtain that
Due to \(\Vert u^{k}-u^{*}\Vert ^{2}_{\frac{1}{2}\nabla ^{2}D_{f}(\zeta ^{k})}\ge \frac{1}{2}\underline{\gamma _{D}} \Vert Ku^{k}-Ku^{*}\Vert ^{2}_{2}\), we get
Since \(\Vert K^{T}K\Vert _{2}>0\), we further have \(\lim \limits _{k\rightarrow +\infty }\Vert u^{k}-u^{*}\Vert _{2}=0\), which implies that \(\{u^{k}\}\) converges to a solution of the minimization problem (16). \(\square \)
Chen, DQ. Inexact alternating direction method based on Newton descent algorithm with application to Poisson image deblurring. SIViP 11, 89–96 (2017). https://doi.org/10.1007/s11760-016-0973-7