Abstract
Typical global convergence results for sequential quadratic programming (SQP) algorithms with linesearch rely on a priori assumptions about the generated sequences, such as boundedness of the primal sequence, of the dual sequence, and/or of the sequence of values of the penalty function used in the linesearch procedure. Different convergence statements use different combinations of assumptions, but all assume boundedness of at least one of the sequences mentioned above. In this context, boundedness assumptions are particularly undesirable because, even for non-pathological and well-behaved problems, the associated penalty functions (whose descent is used to produce primal iterates) may be unbounded below for every value of the penalty parameter. Consequently, boundedness assumptions on the iterates are not easily justified. By introducing a very simple and computationally cheap safeguard in the linesearch procedure, we prove boundedness of the primal sequence when the feasible set is nonempty, convex, and bounded. If, in addition, the Slater condition holds, we obtain a complete global convergence result without any a priori assumptions on the iterative sequences. The safeguard consists of not accepting a further increase of the constraint violation at iterates that are infeasible beyond a chosen threshold, which can always be ensured by the proposed modified SQP linesearch criterion.
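The safeguard described above can be sketched as an extra acceptance test inside an otherwise standard backtracking linesearch on a nonsmooth (l1) penalty function. The sketch below is illustrative only and does not reproduce the paper's exact algorithm: the penalty function, the simplified sufficient-decrease test (using `||d||^2` as a stand-in for the penalty function's directional derivative), and all parameter names (`penalty`, `threshold`, `sigma`, `beta`) are assumptions chosen for the example.

```python
import numpy as np

def safeguarded_linesearch(f, c, x, d, penalty=10.0, threshold=1e-2,
                           sigma=1e-4, beta=0.5, max_backtracks=30):
    """Backtracking linesearch on the l1 penalty function
    phi(y) = f(y) + penalty * sum(max(c(y), 0)) for constraints c(y) <= 0.

    Safeguard (illustrative version of the idea in the abstract): if the
    current iterate is infeasible beyond `threshold`, reject any trial
    step that would further increase the constraint violation.
    """
    viol = lambda y: float(np.sum(np.maximum(c(y), 0.0)))  # l1 violation
    phi = lambda y: f(y) + penalty * viol(y)               # penalty function
    phi0, viol0 = phi(x), viol(x)
    t = 1.0
    for _ in range(max_backtracks):
        x_trial = x + t * d
        # Simplified sufficient-decrease test (proxy for an Armijo-type
        # condition on the penalty function).
        sufficient_decrease = phi(x_trial) <= phi0 - sigma * t * np.dot(d, d)
        # Safeguard: at iterates infeasible beyond the threshold, do not
        # accept a further increase of the constraint violation.
        safeguard_ok = (viol0 <= threshold) or (viol(x_trial) <= viol0)
        if sufficient_decrease and safeguard_ok:
            return x_trial, t
        t *= beta
    return x, 0.0  # linesearch failed; keep the current iterate
```

For example, minimizing f(x) = x^2 subject to x <= 1 from the infeasible point x = 2 with direction d = -1, the full step t = 1 both decreases the penalty function and removes the constraint violation, so it passes the safeguard and is accepted.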
The author is supported in part by CNPq Grants 301508/2005-4, 490200/2005-2, 550317/2005-8, by PRONEX–Optimization, and by FAPERJ Grant E-26/151.942/2004.
Solodov, M.V. Global convergence of an SQP method without boundedness assumptions on any of the iterative sequences. Math. Program. 118, 1–12 (2009). https://doi.org/10.1007/s10107-007-0180-y
Keywords
- Sequential quadratic programming
- Global convergence
- Nonsmooth penalty function
- Linesearch
- Slater condition