
Choice of the parameters in a primal-dual algorithm for Bregman iterated variational regularization

  • Original Paper, published in Numerical Algorithms

Abstract

The focus of this work is the solution of a non-smooth constrained minimization problem by a primal-dual splitting algorithm involving proximity operators. The problem is penalized by the Bregman divergence associated with the non-smooth total variation (TV) functional. We analyze two aspects: first, the convergence of the regularized solution of the minimization problem to the minimum norm solution; second, the convergence of the iteratively regularized minimizer produced by a primal-dual algorithm to the minimum norm solution. For both aspects, we assume a variational source condition (VSC). This work emphasizes the impact of the choice of the parameters on the stabilization of a primal-dual algorithm. Rates of convergence are obtained in terms of a concave, positive definite index function. The algorithm is applied to a simple two-dimensional image processing problem, and error analysis profiles are provided for varying sizes of the forward operator and noise levels in the measurement.




Acknowledgements

The author is indebted to Ignace Loris for fruitful discussions throughout the development of this work. Furthermore, the author is grateful to Maria A. Gonzalez-Huici and David Mateos-Nunez for their encouragement and support in finalizing the work.

Author information


Corresponding author

Correspondence to Erdem Altuntac.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A major part of this work was done within the framework of an ARC grant at the Université Libre de Bruxelles during the author's postdoctoral research period, 2017–2019.

Appendices

Appendix A: VSC as upper bound for the Bregman distance

The total error estimate can also be stabilized by means of the following assumption, which has been derived in the literature discussed in Section 4.2:

$$ E({u}_{\alpha}^{\delta} , u^{\dagger}) \leq D_{\mathcal{J}}({u}_{\alpha}^{\delta} , u^{\dagger}). $$
(1.1)

Therefore, to stabilize E, we seek a stable upper bound for the Bregman distance in (1.1).
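Since TV itself is non-smooth, a common device for numerical illustration is a smoothed surrogate \(J_\beta\). The following minimal sketch (hypothetical helper names; the smoothing parameter β and the finite-difference scheme are assumptions for illustration, not taken from the paper) computes the Bregman distance \(D_{J_\beta}(u, u^\dagger) = J_\beta(u) - J_\beta(u^\dagger) - \langle \nabla J_\beta(u^\dagger), u - u^\dagger\rangle\):

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary (last row/column left at zero)
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad: <grad u, p> = -<u, div p>
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def smooth_tv(u, beta=1e-3):
    # smoothed TV surrogate J_beta(u) = sum sqrt(|grad u|^2 + beta^2)
    gx, gy = grad(u)
    return np.sum(np.sqrt(gx**2 + gy**2 + beta**2))

def tv_subgradient(u, beta=1e-3):
    # gradient of J_beta: -div(grad u / sqrt(|grad u|^2 + beta^2))
    gx, gy = grad(u)
    n = np.sqrt(gx**2 + gy**2 + beta**2)
    return -div(gx / n, gy / n)

def bregman_distance(u, u_dag, beta=1e-3):
    # D_J(u, u_dag) = J(u) - J(u_dag) - <q_dag, u - u_dag>
    q = tv_subgradient(u_dag, beta)
    return smooth_tv(u, beta) - smooth_tv(u_dag, beta) - np.vdot(q, u - u_dag)
```

Convexity of \(J_\beta\) guarantees that the computed distance is nonnegative, which also serves as a check that the discrete divergence is the exact negative adjoint of the discrete gradient.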

Assumption A.1

[Variational Source Condition] Let \(T : \mathcal {X} \rightarrow \mathcal {Y}\) be a linear, injective forward operator and let v∈range(T). There exist a constant σ ∈ (0, 1] and a concave, monotonically increasing index function \({\Psi } : [0 , \infty ) \rightarrow [0 , \infty )\) with Ψ(0) = 0 such that, for \(q^{\dagger } \in \partial \mathcal {J}(u^{\dagger })\), the minimum norm solution u†∈BV(Ω) satisfies

$$ \sigma D_{\mathcal{J}}(u,u^{\dagger}) \leq \mathcal{J}(u) - \mathcal{J}(u^{\dagger}) + {\Psi}\left( \Vert T u - T u^{\dagger}\Vert \right) \text{, for all } u \in \mathcal{X} . $$
(1.2)

Recall that the quantitative stability analysis in the continuous mathematical setting aims to find an upper bound for the total error functional E in (4.4). According to (1.1), finding a stable upper bound for the Bregman distance between the regularized minimizer \({u}_{\alpha }^{\delta }\) and the minimum norm solution u† yields one of the two convergence results of this section. With the established choice of the regularization parameter and the \(\mathcal {J}\)-difference estimate asserted in Lemma 6.2, the last ingredient of the Bregman distance bound following Assumption A.1 is formulated below.

Lemma A.2

Let \(\alpha (\delta ,v^{\delta })\in \overline {S}\cap \underline {S}\) be the regularization parameter for the regularized solution \({u}_{\alpha }^{\delta }\) of problem (4.2). If the minimum norm solution u† satisfies Assumption A.1, then

$$ - \langle D^{\ast}w^{\dagger},{u}_{\alpha}^{\delta} - u^{\dagger} \rangle = \mathcal{O}({\Psi}(\delta)), $$

holds.

Proof

It follows from VSC (4.5) that

$$ \begin{array}{@{}rcl@{}} &&\frac{\sigma}{2}\left( \mathcal{J}({u}_{\alpha}^{\delta}) - \mathcal{J}(u^{\dagger}) - \langle D^{\ast}w^{\dagger},{u}_{\alpha}^{\delta} - u^{\dagger}\rangle\right)\\ &\leq& \mathcal{J}({u}_{\alpha}^{\delta}) - \mathcal{J}(u^{\dagger}) + {\Psi}(\Vert T {u}_{\alpha}^{\delta} - T u^{\dagger}\Vert). \end{array} $$
(1.3)

Rearranging the terms,

$$ \begin{array}{@{}rcl@{}} -\langle D^{\ast}w^{\dagger},{u}_{\alpha}^{\delta}-u^{\dagger}\rangle & \leq & \frac{2}{\sigma}\left( 1 - \frac{\sigma}{2}\right) \left( \mathcal{J}({u}_{\alpha}^{\delta}) - \mathcal{J}(u^{\dagger}) \right) + {\Psi}(\Vert T {u}_{\alpha}^{\delta} - T u^{\dagger}\Vert) \\ & \overset{(6.4)}{\leq} & \frac{2}{\sigma}\left( 1 - \frac{\sigma}{2}\right) \left( \frac{1}{\underline{\tau}-1}\right){\Psi}(\delta) + {\Psi}(\Vert T {u}_{\alpha}^{\delta} - T u^{\dagger}\Vert) \\ & \overset{(4.15)}{\leq} & \frac{2}{\sigma}\left( 1 - \frac{\sigma}{2}\right) \left( \frac{1}{\underline{\tau}-1}\right){\Psi}(\delta) + {\Psi}((\overline{\tau} + 1)\delta) \\ & \overset{(4.7)}{\leq} & \frac{2}{\sigma}\left( 1 - \frac{\sigma}{2}\right) \left( \frac{1}{\underline{\tau}-1}\right){\Psi}(\delta) + (\overline{\tau} + 1){\Psi}(\delta) \\ & = &\left( \frac{2}{\sigma} -1\right) \left( \frac{1}{\underline{\tau}-1}\right){\Psi}(\delta) + (\overline{\tau} + 1){\Psi}(\delta). \end{array} $$
(1.4)
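The step marked (4.7) in the estimate above uses that a concave index function with Ψ(0) = 0 satisfies Ψ(cδ) ≤ cΨ(δ) for all c ≥ 1. A quick numerical sanity check for the hypothetical choice Ψ(t) = t^κ with κ ∈ (0, 1) (an illustrative assumption, not a choice made in the paper):

```python
import numpy as np

def psi(t, kappa=0.5):
    # hypothetical concave index function: monotone, psi(0) = 0
    return t ** kappa

def check_scaling(c, deltas, kappa=0.5):
    # property used in step (4.7): psi(c * delta) <= c * psi(delta) for c >= 1,
    # which holds for any concave psi with psi(0) = 0
    return bool(np.all(psi(c * deltas, kappa) <= c * psi(deltas, kappa) + 1e-12))
```

For Ψ(t) = t^κ the property reduces to c^κ ≤ c for c ≥ 1, which is immediate since κ < 1.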

Theorem A.3

Let the minimum norm solution u†∈BV(Ω) satisfy the VSC given by Assumption A.1. Then, under the assumptions of Lemmas 6.1, 6.2 and A.2, the following estimate holds:

$$ D_{\mathcal{J}}({u}_{\alpha}^{\delta} , u^{\dagger}) = \mathcal{O}({\Psi}(\delta)). $$

Proof

The proof follows by combining the previously established estimates on each component of the Bregman distance:

$$ \begin{array}{@{}rcl@{}} D_{\mathcal{J}}({u}_{\alpha}^{\delta} , u^{\dagger}) & = & \mathcal{J}({u}_{\alpha}^{\delta}) - \mathcal{J}(u^{\dagger})- \langle D^{\ast}w^{\dagger},{u}_{\alpha}^{\delta} - u^{\dagger} \rangle \\ &\overset{(1.4)}{\leq}& \mathcal{J}({u}_{\alpha}^{\delta}) - \mathcal{J}(u^{\dagger}) + \left( \frac{2}{\sigma} - 1\right) \left( \frac{1}{\underline{\tau} - 1}\right){\Psi}(\delta) + (\overline{\tau} + 1){\Psi}(\delta) \\ &\overset{(6.4)}{\leq}& \left( \frac{1}{\underline{\tau} - 1} \right){\Psi}(\delta)+ \left( \frac{2}{\sigma} - 1 \right) \left( \frac{1}{\underline{\tau} - 1} \right){\Psi}(\delta) + (\overline{\tau} + 1){\Psi}(\delta). \end{array} $$
(1.5)

Appendix B: Further numerical results

In this section, we present further numerical results to emphasize the condition on the step length μ in Theorem 7.2. Although the formulated condition allows one to choose the step length as \(\mu = \frac {2}{\Vert T \Vert ^{2}},\) we observed divergence with this choice of μ (see Fig. 7).
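A safe practical choice is therefore a step length strictly below this bound. The sketch below (a hypothetical stand-in operator and power-iteration routine, not the paper's implementation) estimates ‖T‖ by power iteration on T*T and sets μ with a safety factor:

```python
import numpy as np

def operator_norm(T, n_iter=100, seed=0):
    # estimate ||T|| = largest singular value via power iteration on T^T T
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[1])
    for _ in range(n_iter):
        x = T.T @ (T @ x)
        x /= np.linalg.norm(x)
    # Rayleigh-type estimate: T^T T x ~ ||T||^2 x for the converged iterate
    return np.sqrt(np.linalg.norm(T.T @ (T @ x)) / np.linalg.norm(x))

def step_length(T, safety=0.9):
    # mu = safety * 2 / ||T||^2 with safety < 1, since mu = 2 / ||T||^2
    # itself was observed to diverge (Fig. 7)
    return safety * 2.0 / operator_norm(T) ** 2
```

The safety factor keeps μ strictly inside the stability region even when the norm estimate carries a small error.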

Fig. 7: Error analysis profiles and data visualization. Although the noise level is sufficiently small and the system is full rank, the step length \(\mu = \frac {2}{\Vert T\Vert ^{2}}\) leads to insufficient reconstruction and instability.


About this article


Cite this article

Altuntac, E. Choice of the parameters in a primal-dual algorithm for Bregman iterated variational regularization. Numer Algor 86, 729–759 (2021). https://doi.org/10.1007/s11075-020-00909-6
