Reachability of Optimal Convergence Rate Estimates for High-Order Numerical Convex Optimization Methods

  • MATHEMATICS
  • Published in: Doklady Mathematics 99, 91–94 (2019)

Abstract

The Monteiro–Svaiter accelerated hybrid proximal extragradient method (2013), in which a single Newton step is used at every iteration to approximately solve an auxiliary problem, is considered. The Monteiro–Svaiter method is optimal (with respect to the number of gradient and Hessian evaluations of the optimized function) for sufficiently smooth convex optimization problems in the class of methods using only the gradient and Hessian of the optimized function. An optimal tensor method involving higher derivatives is proposed by replacing the Newton step with a step of the tensor method recently proposed by Yu.E. Nesterov (2018) and by using a special generalization of the step-size selection condition in the outer accelerated proximal extragradient scheme. The resulting tensor method with derivatives up to the third order inclusive is found to be fairly practical, since the complexity of its iteration is comparable with that of a Newton iteration. Thus, a constructive solution is obtained for Nesterov's problem (2018) of closing the gap between the tight lower bounds and the overstated upper bounds on the convergence rate of existing tensor methods of order \(p \geqslant 3\).
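For orientation, the bounds being closed can be stated explicitly; the following summary is drawn from the cited works of Nesterov and of Arjevani, Shamir, and Shiff, with constants and the logarithmic factor introduced by the step-size search suppressed. For the minimization of a convex function with Lipschitz continuous \(p\)-th derivative, Nesterov's tensor method guarantees

\[
f(x_k) - f(x_*) = O\left(k^{-(p+1)}\right),
\]

while the lower bound for methods of order \(p\) is

\[
f(x_k) - f(x_*) = \Omega\left(k^{-(3p+1)/2}\right);
\]

the accelerated scheme described above attains the \(O\left(k^{-(3p+1)/2}\right)\) rate and thereby matches the lower bound.

The outer loop of the construction can be sketched schematically as follows. This is a minimal Python sketch of the general Monteiro–Svaiter accelerated hybrid proximal extragradient template, not the authors' exact pseudocode; the helpers `inner_step` and `choose_lambda` are hypothetical placeholders, and in the actual method the step size \(\lambda_{k+1}\) is found jointly with the inner step by a search enforcing a coupling condition between \(\lambda_{k+1}\) and \(\|y_{k+1} - \tilde{x}_k\|^{p-1}\).

import numpy as np

def a_hpe(grad, inner_step, choose_lambda, x0, n_iters):
    # Schematic accelerated hybrid proximal extragradient loop.
    #   grad(x)                  -- gradient of the convex objective f
    #   inner_step(x_tilde, lam) -- approximate minimizer of
    #                               f(y) + ||y - x_tilde||^2 / (2 * lam),
    #                               e.g. one Newton step (p = 2) or one
    #                               step of a p-th order tensor method
    #   choose_lambda(y, z, A)   -- hypothetical step-size rule; in the
    #                               method itself lam is found jointly
    #                               with the inner step by a search
    y = np.asarray(x0, dtype=float)
    z = y.copy()
    A = 0.0
    for _ in range(n_iters):
        lam = choose_lambda(y, z, A)
        # a_{k+1} solves the quadratic a^2 = lam * (A + a).
        a = (lam + np.sqrt(lam**2 + 4.0 * lam * A)) / 2.0
        A_next = A + a
        x_tilde = (A / A_next) * y + (a / A_next) * z
        y = inner_step(x_tilde, lam)   # one high-order step on the subproblem
        z = z - a * grad(y)            # extragradient (dual) update
        A = A_next
    return y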

REFERENCES

  1. Y. Arjevani, O. Shamir, and R. Shiff, Math. Program. https://arxiv.org/abs/1705.07260.

  2. Yu. Nesterov, Preprint No. 2018005 (Univ. Catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2018).

  3. A. V. Gasnikov and D. A. Kovalev, Komp’yut. Issled. Model. 10 (3), 305–314 (2018).

  4. A. S. Nemirovskii and D. B. Yudin, Complexity of Problems and Efficiency of Optimization Methods (Nauka, Moscow, 1979) [in Russian].

  5. R. Monteiro and B. Svaiter, SIAM J. Optim. 23 (2), 1092–1125 (2013).

  6. Yu. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course (Springer Science and Business Media, 2013).

  7. Yu. E. Nesterov, Introduction to Convex Optimization (MTsNMO, Moscow, 2010) [in Russian].

  8. H. Lin, J. Mairal, and Z. Harchaoui, J. Machine Learning Res. 18 (212), 1–54 (2018).

  9. Yu. Nesterov and B. Polyak, Math. Program. 108 (1), 177–205 (2006).

  10. A. V. Gasnikov, Modern Numerical Optimization Methods: Universal Gradient Descent Method (Mosk. Fiz.-Tekh. Inst., Moscow, 2018) [in Russian].

  11. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed. (MIT Press, Cambridge, Mass., 2001).

Author information

Corresponding author

Correspondence to A. V. Gasnikov.

Additional information

Translated by I. Ruzanova

About this article

Cite this article

Gasnikov, A.V., Gorbunov, E.A., Kovalev, D.A. et al. Reachability of Optimal Convergence Rate Estimates for High-Order Numerical Convex Optimization Methods. Dokl. Math. 99, 91–94 (2019). https://doi.org/10.1134/S1064562419010289
