
Nonlinear Optimization

  • Yurii Nesterov
Chapter
Part of the Springer Optimization and Its Applications book series (SOIA, volume 137)

Abstract

In this chapter, we introduce the main notation and concepts used in Continuous Optimization. The first theoretical results are related to the Complexity Analysis of problems of Global Optimization. For these problems, we start with a very pessimistic lower performance guarantee: for any method there exists an optimization problem in \(\mathbb{R}^n\) which requires at least \(O\left(\frac{1}{\epsilon^n}\right)\) computations of function values in order to approximate its global solution up to accuracy \(\epsilon\). Therefore, in the next section we pass to Local Optimization and consider two main methods, the Gradient Method and the Newton Method. For both of them, we establish local rates of convergence. In the last section, we present some standard methods of General Nonlinear Optimization: conjugate gradient methods, quasi-Newton methods, the theory of Lagrangian relaxation, barrier methods, and penalty function methods. For some of them, we prove global convergence results.
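As a point of reference, the lower bound above is essentially tight: for Lipschitz continuous functions on the unit box, exhaustive search over a uniform grid with roughly \(O\left(\frac{1}{\epsilon^n}\right)\) points already delivers an \(\epsilon\)-approximate global solution, so the exponential dependence on the dimension \(n\) cannot be avoided. The two local schemes mentioned above take the following classical forms (a sketch in standard notation, not quoted from the chapter; \(h_k > 0\) denotes a step size, and the Hessian \(\nabla^2 f(x_k)\) is assumed nonsingular):

\[ x_{k+1} = x_k - h_k \nabla f(x_k) \qquad \text{(Gradient Method)}, \]

\[ x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k) \qquad \text{(Newton Method)}. \]

Under the standard smoothness and nondegeneracy assumptions, the first scheme attains a linear local rate of convergence near a nondegenerate local minimum, while the second attains a quadratic one.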

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Yurii Nesterov
  1. CORE/INMA, Catholic University of Louvain, Louvain-la-Neuve, Belgium
