Nonlinear Optimization, pp. 253–309

# Constrained Optimization

## Abstract

This chapter is devoted to numerical methods for solving the problem *P* below, together with their theoretical foundations, in the case where the constraint set

$$\begin{aligned} \begin{array}{lll} P: & \mathrm{Min} & f(x) \\ & \text{s.t.} & h_{j}(x)=0,\ j=1,\ldots,m, \\ & & g_{i}(x)\le 0,\ i=1,\ldots,p, \end{array} \end{aligned}$$

*C* is the whole space \(\mathbb {R}^{n}\). First, in Section 6.1, the so-called penalty and barrier methods are presented. These methods are based on the idea of approximating a constrained optimization problem by a sequence of unconstrained ones, each of which can be solved by any of the methods studied in Chapter 5. Both types of methods are driven by a parameter that determines the weight assigned in each iteration to constraint satisfaction relative to minimization of the objective function. In Subsection 6.1.4, a logarithmic barrier approach to linear programming is described as an illustration of the barrier methodology. The subsequent Sections 6.2–6.4, of a more theoretical flavor, focus on the formulation of first- and second-order necessary and sufficient optimality conditions for each of the three possible problem types: those with equality constraints, those with inequality constraints, and those with both. The Lagrange, Karush–Kuhn–Tucker, and Fritz John conditions are derived, respectively, through a careful study of the so-called constraint qualifications.
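For concreteness, the first-order Karush–Kuhn–Tucker conditions for *P* in their standard form (the chapter's own statement may differ in notation) require, at a candidate minimizer \(x^{*}\), multipliers \(\lambda_{j}\in\mathbb{R}\) and \(\mu_{i}\ge 0\) such that

$$\begin{aligned} \nabla f(x^{*})+\sum_{j=1}^{m}\lambda_{j}\,\nabla h_{j}(x^{*})+\sum_{i=1}^{p}\mu_{i}\,\nabla g_{i}(x^{*})&=0, \\ h_{j}(x^{*})=0,\quad g_{i}(x^{*})\le 0,\quad \mu_{i}\,g_{i}(x^{*})&=0, \end{aligned}$$

where the last (complementarity) condition forces \(\mu_{i}=0\) for every inactive inequality constraint.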
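The penalty idea sketched above can be illustrated on a tiny example. The problem, step sizes, and penalty schedule below are my own illustrative choices, not taken from the chapter: we minimize \(f(x)=(x-2)^2\) subject to \(g(x)=x-1\le 0\) by minimizing the quadratic-penalty function \(Q(x;\mu)=f(x)+\mu\,\max(0,g(x))^2\) for an increasing sequence of weights \(\mu\), warm-starting each solve from the previous one.

```python
# Toy quadratic penalty method (hypothetical example, not from the text):
#   minimize f(x) = (x - 2)^2   subject to   g(x) = x - 1 <= 0.
# The penalized objective Q(x; mu) = f(x) + mu * max(0, g(x))^2 is
# unconstrained, so plain gradient descent suffices to minimize it.

def grad_Q(x, mu):
    # dQ/dx = 2(x - 2) + 2 * mu * max(0, x - 1)
    return 2.0 * (x - 2.0) + 2.0 * mu * max(0.0, x - 1.0)

def minimize_Q(x, mu, steps=200):
    # Step size 1/L, with L = 2 + 2*mu the worst-case curvature of Q.
    lr = 1.0 / (2.0 + 2.0 * mu)
    for _ in range(steps):
        x -= lr * grad_Q(x, mu)
    return x

x = 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):
    # Warm start: each solve begins at the previous (inexact) minimizer.
    x = minimize_Q(x, mu)

# The exact minimizer of Q is (2 + mu)/(1 + mu), which tends to the
# constrained solution x* = 1 as mu grows.
print(round(x, 3))  # → 1.001
```

As the abstract notes, the weight \(\mu\) trades constraint satisfaction against minimization of \(f\): each penalized iterate is slightly infeasible, and feasibility is only approached in the limit \(\mu\to\infty\).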

© Springer Nature Switzerland AG 2019