Abstract
Recursive quadratic programming is a family of techniques, developed by Bartholomew-Biggs and other authors, for solving nonlinear programming problems. The first-order optimality conditions for a local minimizer of the augmented Lagrangian are transformed into a nonlinear system in which both primal and dual variables appear explicitly. The inner iteration of the algorithm is a Newton-like procedure that updates the primal variables and the Lagrange multipliers simultaneously. In this way, as observed by Gould, the implementation of the Newton method becomes stable, despite the possibility of large penalization parameters. In this paper, the inner iteration is analyzed from a different point of view: the size of the convergence region and the speed of convergence of the inner process are considered, and it is shown that, in a suitable sense, both are independent of the penalization parameter when an adequate version of the Newton method is used. In other words, classical Newton-like iterations are improved, not only with respect to the stability of the linear algebra involved, but also with regard to the overall convergence of the nonlinear process. Numerical experiments suggest that the practical efficiency of the methods is indeed related to these theoretical results.
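The inner iteration described above can be illustrated with a small sketch. The code below is not the authors' algorithm, but a minimal, hedged version of the general idea: for an equality-constrained problem min f(x) s.t. c(x) = 0, stationarity of the augmented Lagrangian is rewritten, via the shifted multiplier lam = lam_bar + rho*c(x), as a square system F(x, lam) = 0 in both primal and dual variables, and Newton's method is applied to that system. Note that the (2,2) block of the Jacobian is -I/rho, so the linear systems remain well conditioned as rho grows (approaching the SQP/KKT matrix), which is the stability point attributed to Gould. All function names and the toy problem are illustrative.

```python
import numpy as np

def newton_inner(grad_f, hess_L, c, J, x, lam, lam_bar, rho,
                 tol=1e-10, maxit=50):
    """Newton-like inner iteration updating x and lam simultaneously.

    Solves F(x, lam) = 0 with
      F1 = grad_f(x) + J(x)^T lam            (stationarity)
      F2 = c(x) - (lam - lam_bar) / rho      (shifted-multiplier relation)
    """
    n = x.size
    for _ in range(maxit):
        r1 = grad_f(x) + J(x).T @ lam
        r2 = c(x) - (lam - lam_bar) / rho
        if np.linalg.norm(np.concatenate([r1, r2])) < tol:
            break
        # Jacobian of (F1, F2); the -I/rho block stays bounded for large rho.
        K = np.block([[hess_L(x, lam), J(x).T],
                      [J(x), -np.eye(lam.size) / rho]])
        step = np.linalg.solve(K, -np.concatenate([r1, r2]))
        x = x + step[:n]
        lam = lam + step[n:]
    return x, lam

# Toy problem: min 0.5*||x||^2  s.t.  x1 + x2 - 1 = 0,
# whose solution is x = (0.5, 0.5) with multiplier lam = -0.5.
grad_f = lambda x: x
hess_L = lambda x, lam: np.eye(2)            # constraint is linear
c = lambda x: np.array([x[0] + x[1] - 1.0])
J = lambda x: np.array([[1.0, 1.0]])

x, lam = newton_inner(grad_f, hess_L, c, J,
                      x=np.zeros(2), lam=np.zeros(1),
                      lam_bar=np.zeros(1), rho=1e8)
print(x, lam)
```

Even with the very large penalization parameter rho = 1e8, the Newton system above is solved without difficulty, in contrast with a pure primal Newton step on the augmented Lagrangian, whose Hessian would contain a term of order rho.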
References
Bartholomew-Biggs, M. C., Recursive Quadratic Programming Methods Based on the Augmented Lagrangian, Mathematical Programming Study, Vol. 31, pp. 21–41, 1987.
Bartholomew-Biggs, M. C., and Hernández, M. F. G., Some Improvements of the Subroutine OPALQP for Dealing with Large Problems, Journal of Economic Dynamics and Control, Vol. 18, pp. 185–203, 1994.
Bartholomew-Biggs, M. C., and Hernández, M. F. G., Using the KKT Matrix in an Augmented Lagrangian SQP Method for Sparse Constrained Optimization, Journal of Optimization Theory and Applications, Vol. 85, pp. 201–220, 1995.
Gill, P. E., Murray, W., and Wright, M. H., Practical Optimization, Academic Press, London, England, 1981.
Gould, N. I. M., On the Accurate Determination of Search Directions for Simple Differentiable Penalty Functions, IMA Journal of Numerical Analysis, Vol. 6, pp. 357–372, 1986.
McCormick, G. P., Nonlinear Programming, John Wiley and Sons, New York, New York, 1983.
Fletcher, R., Practical Methods of Optimization, John Wiley and Sons, New York, New York, 1987.
Dennis, J. E., and Schnabel, R. B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1983.
Ortega, J. M., and Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, New York, 1970.
Conn, A. R., Gould, N. I. M., and Toint, Ph. L., A Note on Using Alternative Second-Order Models for the Subproblems Arising in Barrier Function Methods for Minimization, Numerische Mathematik, Vol. 68, pp. 17–33, 1994.
Dussault, J. P., Numerical Stability and Efficiency of Penalty Algorithms, SIAM Journal on Numerical Analysis, Vol. 32, pp. 296–317, 1995.
Gould, N. I. M., On the Convergence of the Sequential Penalty Function Method for Constrained Minimization, SIAM Journal on Numerical Analysis, Vol. 26, pp. 107–128, 1989.
McCormick, G. P., The Superlinear Convergence of a Nonlinear Primal-Dual Algorithm, Technical Report OR-T-550/91, Department of Operations Research, George Washington University, 1991.
Gonzaga, C. C., Path Following Methods for Linear Programming, SIAM Review, Vol. 34, pp. 167–224, 1992.
Forster, W., Homotopy Methods, Handbook of Global Optimization, Edited by R. Horst and P. M. Pardalos, Kluwer Academic Publishers, Dordrecht, Holland, 1995.
Rheinboldt, W. C., Numerical Analysis of Parametrized Nonlinear Equations, John Wiley and Sons, New York, New York, 1986.
Watson, L. T., Billups, S. C., and Morgan, A. P., Algorithm 652—HOMPACK: A Suite of Codes for Globally Convergent Homotopy Algorithms, ACM Transactions on Mathematical Software, Vol. 13, pp. 281–310, 1987.
Lootsma, F. A., A Survey of Methods for Solving Constrained Minimization Problems via Unconstrained Minimization, Numerical Methods for Nonlinear Optimization, Edited by F. A. Lootsma, Academic Press, New York, New York, pp. 313–347, 1972.
Griewank, A., Direct Calculation of Newton Steps without Accumulating Jacobians, Large-Scale Numerical Optimization, Edited by T. F. Coleman and Y. Li, SIAM, Philadelphia, Pennsylvania, pp. 115–137, 1990.
Dennis, J. E., and Walker, H. F., Convergence Theorems for Least-Change Secant Update Methods, SIAM Journal on Numerical Analysis, Vol. 18, pp. 949–987, 1981.
Martínez, J. M., Local Convergence Theory of Inexact Newton Methods Based on Structured Least Change Updates, Mathematics of Computation, Vol. 55, pp. 143–167, 1990.
Martínez, J. M., Fixed-Point Quasi-Newton Methods, SIAM Journal on Numerical Analysis, Vol. 29, pp. 1413–1434, 1992.
Tapia, R. A., Diagonalized Multiplier Methods and Quasi-Newton Methods for Constrained Optimization, Journal of Optimization Theory and Applications, Vol. 22, pp. 135–194, 1977.
Wright, M. H., Why a Pure Primal Newton Barrier Step May Be Infeasible, SIAM Journal on Optimization, Vol. 5, pp. 1–13, 1995.
Cite this article
Martínez, J.M., Santos, L.T. New Theoretical Results on Recursive Quadratic Programming Algorithms. Journal of Optimization Theory and Applications 97, 435–454 (1998). https://doi.org/10.1023/A:1022686919295