
Mathematical Programming, Volume 168, Issue 1–2, pp 123–175

Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity

  • Hedy Attouch
  • Zaki Chbani
  • Juan Peypouquet (corresponding author)
  • Patrick Redont
Full Length Paper, Series B

Abstract

In a Hilbert space setting \({{\mathcal {H}}}\), we study the fast convergence properties as \(t \rightarrow + \infty \) of the trajectories of the second-order differential equation
$$\begin{aligned} \ddot{x}(t) + \frac{\alpha }{t} \dot{x}(t) + \nabla \Phi (x(t)) = g(t), \end{aligned}$$
where \(\nabla \Phi \) is the gradient of a convex continuously differentiable function \(\Phi : {{\mathcal {H}}} \rightarrow {{\mathbb {R}}}\), \(\alpha \) is a positive parameter, and \(g: [t_0, + \infty [ \rightarrow {{\mathcal {H}}}\) is a small perturbation term. In this inertial system, the viscous damping coefficient \(\frac{\alpha }{t}\) vanishes asymptotically, but not too rapidly. For \(\alpha \ge 3\), and \(\int _{t_0}^{+\infty } t \Vert g(t)\Vert dt < + \infty \), just assuming that \({{\mathrm{argmin\,}}}\Phi \ne \emptyset \), we show that any trajectory of the above system satisfies the fast convergence property
$$\begin{aligned} \Phi (x(t))- \min _{{{\mathcal {H}}}}\Phi \le \frac{C}{t^2}. \end{aligned}$$
Moreover, for \(\alpha > 3\), any trajectory converges weakly to a minimizer of \(\Phi \). The strong convergence is established in various practical situations. These results complement the \({{\mathcal {O}}}(t^{-2})\) rate of convergence for the values obtained by Su, Boyd and Candès in the unperturbed case \(g=0\). Time discretization of this system, and some of its variants, provides new fast converging algorithms, expanding the field of rapid methods for structured convex minimization introduced by Nesterov, and further developed by Beck and Teboulle with FISTA. This study also complements recent advances due to Chambolle and Dossal.
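The abstract's central claim can be checked numerically in a toy case. The following is a minimal sketch (not from the paper): it integrates the inertial system for the illustrative choice \(\Phi (x) = \frac{1}{2}x^2\) on \({{\mathbb {R}}}\), with \(\alpha = 3\) and \(g = 0\), using a semi-implicit Euler scheme, and observes that \(t^2(\Phi (x(t)) - \min \Phi )\) remains bounded. The step size, time horizon, and initial data are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): integrate
#   x''(t) + (alpha/t) x'(t) + grad Phi(x(t)) = 0
# for the toy objective Phi(x) = 0.5 * x**2 (so min Phi = 0), alpha = 3,
# and check that t^2 * (Phi(x(t)) - min Phi) stays bounded.

def grad_phi(x):
    return x  # gradient of Phi(x) = 0.5 * x**2

alpha = 3.0              # damping parameter; the critical value in the paper
t, x, v = 1.0, 1.0, 0.0  # start time t0, initial position, initial velocity
h = 1e-3                 # step size (assumed small enough for this toy case)

while t < 100.0:
    a = -(alpha / t) * v - grad_phi(x)  # acceleration given by the ODE
    v += h * a                          # explicit update of the velocity
    x += h * v                          # semi-implicit update of the position
    t += h

gap = 0.5 * x * x   # Phi(x(t)) - min Phi at the final time
print(t * t * gap)  # stays bounded, consistent with the O(1/t^2) rate
```

For this quadratic objective the trajectory oscillates with decaying amplitude, so the function-value gap decays even faster than \(1/t^2\); the bounded product above is the weaker property the theorem guarantees for every convex \(\Phi \) with minimizers.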

Keywords

Convex optimization · Fast convergent methods · Dynamical systems · Gradient flows · Inertial dynamics · Vanishing viscosity · Nesterov method

Mathematics Subject Classification

34D05 · 49M25 · 65K05 · 65K10 · 90C25 · 90C30

References

  1. Abbas, B., Attouch, H., Svaiter, B.F.: Newton-like dynamics and forward–backward methods for structured monotone inclusions in Hilbert spaces. J. Optim. Theory Appl. 161(2), 331–360 (2014)
  2. Alvarez, F.: On the minimizing property of a second-order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38(4), 1102–1119 (2000)
  3. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9(1–2), 3–11 (2001)
  4. Alvarez, F., Attouch, H., Bolte, J., Redont, P.: A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics. J. Math. Pures Appl. 81(8), 747–779 (2002)
  5. Alvarez, F., Peypouquet, J.: Asymptotic almost-equivalence of Lipschitz evolution systems in Banach spaces. Nonlinear Anal. 73(9), 3018–3033 (2010)
  6. Alvarez, F., Peypouquet, J.: A unified approach to the asymptotic almost-equivalence of evolution systems without Lipschitz conditions. Nonlinear Anal. 74(11), 3440–3444 (2011)
  7. Attouch, H., Buttazzo, G., Michaille, G.: Variational Analysis in Sobolev and BV Spaces. Applications to PDEs and Optimization, 2nd edn. MOS-SIAM Series on Optimization. SIAM, Philadelphia (2014)
  8. Attouch, H., Cabot, A., Redont, P.: The dynamics of elastic shocks via epigraphical regularization of a differential inclusion. Adv. Math. Sci. Appl. 12(1), 273–306 (2002)
  9. Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward–backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014)
  10. Aujol, J.-F., Dossal, C.: Stability of over-relaxations for the forward–backward algorithm, application to FISTA. SIAM J. Optim. (2015). hal-01163432
  11. Baillon, J.-B.: Un exemple concernant le comportement asymptotique de la solution du problème \(\frac{du}{dt} + \partial \phi (u) \ni 0\). J. Funct. Anal. 28, 369–376 (1978)
  12. Baillon, J.-B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137–150 (1977)
  13. Bauschke, H., Combettes, P.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer (2011)
  14. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
  15. Brézis, H.: Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution. Lecture Notes 5, North-Holland (1972)
  16. Bruck, R.E.: Asymptotic convergence of nonlinear contraction semigroups in Hilbert spaces. J. Funct. Anal. 18, 15–26 (1975)
  17. Cabot, A., Engler, H., Gadat, S.: On the long time behavior of second order differential equations with asymptotically small dissipation. Trans. Am. Math. Soc. 361, 5983–6017 (2009)
  18. Cabot, A., Engler, H., Gadat, S.: Second order differential equations with asymptotically small dissipation and piecewise flat potentials. Electron. J. Differ. Equ. 17, 33–38 (2009)
  19. Chambolle, A., Dossal, Ch.: On the convergence of the iterates of FISTA. HAL preprint hal-01060130, https://hal.inria.fr/hal-01060130v3 (2014)
  20. Ghisi, M., Gobbino, M., Haraux, A.: The remarkable effectiveness of time-dependent damping terms for second order evolution equations (2015). arXiv:1506.06915v1 [math.AP]
  21. Güler, O.: New proximal point algorithms for convex minimization. SIAM J. Optim. 2(4), 649–664 (1992)
  22. Kim, D., Fessler, J.A.: Optimized first-order methods for smooth convex minimization (2015). arXiv:1406.5468v2 [math.OC]
  23. Knopp, K.: Theory and Application of Infinite Series. Blackie & Son, Glasgow (1951)
  24. Lorenz, D.A., Pock, T.: An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 1–15 (2015)
  25. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155(2), 447–454 (2003)
  26. Nesterov, Y.: A method of solving a convex programming problem with convergence rate O(1/k^2). Sov. Math. Dokl. 27, 372–376 (1983)
  27. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course, Volume 87 of Applied Optimization. Kluwer Academic Publishers, Boston (2004)
  28. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)
  29. Nesterov, Y.: Gradient methods for minimizing composite objective function. CORE Discussion Papers (2007)
  30. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)
  31. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1, 123–231 (2013)
  32. Peypouquet, J.: Convex Optimization in Normed Spaces: Theory, Methods and Examples. Springer, Berlin (2015)
  33. Peypouquet, J., Sorin, S.: Evolution equations for maximal monotone operators: asymptotic analysis in continuous and discrete time. J. Convex Anal. 17(3–4), 1113–1163 (2010)
  34. Schmidt, M., Le Roux, N., Bach, F.: Convergence rates of inexact proximal-gradient methods for convex optimization. In: NIPS'11—25th Annual Conference on Neural Information Processing Systems, Grenada, Spain (2011). HAL inria-00618152v3
  35. Su, W., Boyd, S., Candès, E.J.: A differential equation for modeling Nesterov's accelerated gradient method: theory and insights. Neural Inf. Process. Syst. 27, 2510–2518 (2014)
  36. Villa, S., Salzo, S., Baldassarre, L., Verri, A.: Accelerated and inexact forward–backward algorithms. SIAM J. Optim. 23(3), 1607–1633 (2013)

Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2016

Authors and Affiliations

  • Hedy Attouch (1)
  • Zaki Chbani (2)
  • Juan Peypouquet (3, corresponding author)
  • Patrick Redont (1)

  1. Institut Montpelliérain Alexander Grothendieck, UMR 5149 CNRS, Université Montpellier 2, Montpellier Cedex 5, France
  2. Laboratoire IBN AL-BANNA de Mathématiques et Applications (LIBMA), Faculty of Sciences Semlalia, Mathematics, Cadi Ayyad University, Marrakech, Morocco
  3. Universidad Técnica Federico Santa María, Valparaíso, Chile
