
Mathematical Programming, Volume 174, Issue 1–2, pp 391–432

Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators

  • Hedy Attouch
  • Juan Peypouquet
Full Length Paper, Series B

Abstract

We study the behavior of the trajectories of a second-order differential equation with vanishing damping, governed by the Yosida regularization of a maximally monotone operator with a time-varying regularization index, along with a new Regularized Inertial Proximal Algorithm obtained by a convenient finite-difference discretization. These systems are the counterparts, in the context of general maximally monotone operators, of accelerated forward–backward algorithms. A proper tuning of the parameters allows us to prove the weak convergence of the trajectories to zeros of the operator, and to estimate the rate at which the speed and acceleration vanish. We also study the effect of perturbations and computational errors, and identify conditions under which the convergence properties are preserved. In addition, we analyze a growth condition under which strong convergence can be guaranteed. A simple example shows that the assumptions on the Yosida approximation parameter are critical, and illustrates the behavior of these systems compared with some of their close relatives.
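The kind of iteration described in the abstract can be sketched numerically. The following Python fragment is a minimal, illustrative sketch, not the paper's exact scheme: the test operator (a skew-symmetric rotation on the plane, a standard example of a maximally monotone operator that is not a subdifferential), the step size `s`, the damping parameter `alpha`, the slack `eps`, and the quadratically growing Yosida index `lam_k` are all choices made here for illustration; the paper's results hinge on precisely how the Yosida parameter is tuned relative to the step size and the damping.

```python
import numpy as np

# Illustrative maximally monotone operator on R^2: the skew-symmetric
# rotation A(x) = [[0, 1], [-1, 0]] @ x, whose unique zero is the origin.
# (Monotone, but not the subdifferential of any convex function.)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
I2 = np.eye(2)

def yosida(x, lam):
    """Yosida approximation A_lam = (I - J_lam)/lam, where the resolvent
    J_lam = (I + lam*A)^{-1} is computed by a linear solve."""
    J = np.linalg.solve(I2 + lam * A, x)
    return (x - J) / lam

alpha = 3.1   # vanishing-damping parameter: damping coefficient alpha/k
s = 0.01      # step size (illustrative choice)
eps = 0.1     # slack in the regularization schedule (illustrative choice)

x_prev = np.array([1.0, 1.0])
x = x_prev.copy()
for k in range(2, 5001):
    y = x + (1.0 - alpha / k) * (x - x_prev)    # inertial extrapolation
    lam_k = (1.0 + eps) * s * (k / alpha) ** 2  # growing Yosida index
    x_prev, x = x, y - s * yosida(y, lam_k)     # regularized proximal step

residual = np.linalg.norm(x)  # distance to the unique zero (the origin)
```

With this schedule the effective step `s * A_lam` shrinks while the inertial term keeps the iterates moving, and the residual decays toward zero; shrinking `lam_k` (e.g., keeping it constant) visibly degrades or destroys convergence, which is the kind of criticality the abstract alludes to.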

Keywords

Asymptotic stabilization · Large step proximal method · Damped inertial dynamics · Lyapunov analysis · Maximally monotone operators · Time-dependent viscosity · Vanishing viscosity · Yosida regularization

Mathematics Subject Classification

37N40 · 46N10 · 49M30 · 65K05 · 65K10 · 90B50 · 90C25

Acknowledgements

The authors thank P. Redont for his careful and constructive reading of the paper.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

Authors and Affiliations

  1. Institut Montpelliérain Alexander Grothendieck, UMR 5149 CNRS, Université Montpellier, Montpellier Cedex 5, France
  2. Departamento de Matemática, Universidad Técnica Federico Santa María, Valparaíso, Chile
  3. Departamento de Ingeniería Matemática & Centro de Modelamiento Matemático (CNRS UMI 2807), FCFM, Universidad de Chile, Santiago, Chile