Lectures on Convex Optimization, pp. 3-58

# Nonlinear Optimization

## Abstract

In this chapter, we introduce the main notation and concepts used in Continuous Optimization. The first theoretical results concern the Complexity Analysis of problems of Global Optimization. For these problems, we start with a very pessimistic lower complexity bound. It implies that for any method there exists an optimization problem in \(\mathbb {R}^n\) which requires at least \(O\left ({1 \over \epsilon ^n}\right )\) computations of function values in order to approximate its global solution up to accuracy \(\epsilon \). Therefore, in the next section we pass to local optimization and consider two main methods, the Gradient Method and the Newton Method. For both of them, we establish local rates of convergence. In the last section, we present some standard methods of General Nonlinear Optimization: conjugate gradient methods, quasi-Newton methods, the theory of Lagrangian relaxation, barrier methods, and penalty function methods. For some of them, we prove global convergence results.
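The contrast between the two local methods mentioned above can be seen on a small example. The sketch below (not taken from the book; the test function \(f(x) = x - \ln x\), with minimizer \(x^* = 1\), is chosen here only for illustration) runs a fixed-step gradient method and Newton's method from the same starting point: the gradient method exhibits a linear (geometric) local rate, while Newton's method converges quadratically.

```python
# Illustrative sketch: local behavior of the gradient method vs. Newton's method
# on f(x) = x - ln(x), x > 0, whose unique minimizer is x* = 1.
# (Function and step-size choices are assumptions made for this example.)

def grad(x):
    # f'(x) = 1 - 1/x
    return 1.0 - 1.0 / x

def hess(x):
    # f''(x) = 1/x^2 > 0, so f is strictly convex on x > 0
    return 1.0 / x ** 2

def gradient_method(x, step=0.1, iters=50):
    # x_{k+1} = x_k - h f'(x_k): linear (geometric) local rate
    for _ in range(iters):
        x -= step * grad(x)
    return x

def newton_method(x, iters=5):
    # x_{k+1} = x_k - f'(x_k)/f''(x_k): quadratic local rate
    for _ in range(iters):
        x -= grad(x) / hess(x)
    return x

x0 = 0.5
print(abs(gradient_method(x0) - 1.0))  # still a noticeable error after 50 steps
print(abs(newton_method(x0) - 1.0))    # tiny error after only 5 steps
```

For this function the Newton iterate satisfies \(1 - x_{k+1} = (1 - x_k)^2\), so the error is squared at every step, which is exactly the quadratic local rate established in the chapter; the gradient method, by contrast, shrinks the error only by a constant factor per step.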
