Iterative Regularization via Dual Diagonal Descent

  • Guillaume Garrigos
  • Lorenzo Rosasco
  • Silvia Villa

Abstract

In the context of linear inverse problems, we propose and study a general iterative regularization method that accommodates large classes of data-fit terms and regularizers. The proposed algorithm is based on a primal-dual diagonal descent method. Our analysis establishes convergence as well as stability results. Theoretical findings are complemented with numerical experiments showing state-of-the-art performance.
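The abstract describes iterative regularization, in which the number of iterations plays the role of the regularization parameter (early stopping). The sketch below is not the paper's dual diagonal descent algorithm; it is a minimal NumPy illustration, under assumed synthetic data, of the early-stopping principle on the classical least-squares data-fit term, using a Landweber (gradient) iteration on an ill-conditioned problem. The operator, noise level, and iteration budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned linear operator K and a smooth ground-truth signal x_true.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, num=n)                   # rapidly decaying singular values
K = U @ np.diag(s) @ V.T
x_true = V[:, :5] @ rng.standard_normal(5)      # lives on the leading singular directions
y = K @ x_true + 1e-3 * rng.standard_normal(n)  # noisy measurements

# Landweber iteration: gradient descent on 0.5 * ||K x - y||^2.
# The iteration counter acts as the regularization parameter: the error first
# decreases, then increases as the iterates start fitting the noise (semiconvergence).
step = 1.0 / np.linalg.norm(K, 2) ** 2          # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
best_err, best_iter = np.inf, 0
for t in range(1, 2001):
    x -= step * K.T @ (K @ x - y)
    err = np.linalg.norm(x - x_true)            # oracle error, shown for illustration only
    if err < best_err:
        best_err, best_iter = err, t

print(f"best reconstruction error {best_err:.3e} reached at iteration {best_iter}")
print(f"error after 2000 iterations: {np.linalg.norm(x - x_true):.3e}")
```

In practice the optimal stopping time is unknown and is chosen from the noise level or by a heuristic rule; the paper's contribution is a primal-dual diagonal scheme for which such early stopping is shown to be a convergent and stable regularization method for general data-fit terms and regularizers.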

Keywords

Splitting methods · Dual problem · Diagonal methods · Iterative regularization · Early stopping

Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. Laboratory for Computational and Statistical Learning, Istituto Italiano di Tecnologia and Massachusetts Institute of Technology, Cambridge, USA
  2. DIBRIS, Università Degli Studi di Genova, Genova, Italy
  3. Dipartimento di Matematica, Politecnico di Milano, Milan, Italy