Mathematical Programming, Volume 146, Issue 1–2, pp 37–75

First-order methods of smooth convex optimization with inexact oracle

  • Olivier Devolder
  • François Glineur
  • Yurii Nesterov
Full Length Paper · Series A

Abstract

We introduce the notion of inexact first-order oracle and analyze the behavior of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau–Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast gradient methods, and study in particular their dependence on the accuracy of the oracle and the desired accuracy of the objective function. We observe that the superiority of fast gradient methods over the classical ones is no longer absolute when an inexact oracle is used. We prove that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation. Finally, we show that the notion of inexact oracle allows the application of first-order methods of smooth convex optimization to solve non-smooth or weakly smooth convex problems.
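The contrast drawn in the abstract can be illustrated with a minimal sketch of a plain gradient method driven by an inexact first-order oracle. This is not the paper's analysis; the quadratic objective, unit Lipschitz constant, and fixed oracle bias `delta` are illustrative assumptions chosen so the run is deterministic:

```python
def gradient_method_inexact(grad, L, x0, n_iters, noise):
    """Plain gradient method x_{k+1} = x_k - (1/L) * g~(x_k),
    where g~(x) = grad(x) + noise(x) models an inexact oracle."""
    x = list(x0)
    for _ in range(n_iters):
        # Query the inexact oracle: true gradient plus a perturbation.
        g = [gi + ni for gi, ni in zip(grad(x), noise(x))]
        # Standard step with step size 1/L.
        x = [xi - gi / L for xi, gi in zip(x, g)]
    return x

# Toy example: f(x) = 0.5 * ||x||^2, so grad f(x) = x and L = 1.
# The oracle is biased by a fixed vector of magnitude delta.
delta = 1e-3
x = gradient_method_inexact(
    grad=lambda x: x,
    L=1.0,
    x0=[1.0, -2.0],
    n_iters=200,
    noise=lambda x: [delta] * len(x),
)
f_val = 0.5 * sum(xi * xi for xi in x)  # residual objective value
```

In this toy run the iterates settle in an O(delta)-neighborhood of the minimizer instead of drifting away, consistent with the abstract's claim that simple gradient schemes do not accumulate oracle error, whereas fast gradient methods necessarily do.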

Keywords

Smooth convex optimization · First-order methods · Inexact oracle · Gradient methods · Fast gradient methods · Complexity bounds

Mathematics Subject Classification (2000)

90C06 · 90C25 · 90C60


Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2013

Authors and Affiliations

  • Olivier Devolder (1)
  • François Glineur (1)
  • Yurii Nesterov (1)

  1. Université catholique de Louvain, ICTEAM Institute/CORE, Louvain-la-Neuve, Belgium
