Abstract
This paper presents a family of projected descent direction algorithms with inexact line search for solving large-scale minimization problems subject to simple bounds on the decision variables. The global convergence of algorithms in this family is ensured by conditions on the descent directions and the line search. Whenever a sequence constructed by an algorithm in this family enters a sufficiently small neighborhood of a local minimizer x̂ satisfying standard second-order sufficiency conditions, it gets trapped and converges to this local minimizer. Furthermore, in this case, the active constraint set at x̂ is identified in a finite number of iterations. This fact is used to ensure that the rate of convergence to a local minimizer satisfying standard second-order sufficiency conditions depends only on the behavior of the algorithm in the unconstrained subspace. As a particular example, we present projected versions of the modified Polak–Ribière conjugate gradient method and the limited-memory BFGS quasi-Newton method that retain the convergence properties associated with those algorithms applied to unconstrained problems.
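The general scheme described in the abstract — a descent step projected back onto the box of simple bounds, with an inexact (Armijo-type) line search, and finite identification of the active set — can be illustrated with a minimal sketch. The code below is a generic projected gradient method, not the authors' exact algorithm; the function names (`project`, `projected_gradient`) and parameter defaults are illustrative assumptions.

```python
import numpy as np

def project(x, lo, hi):
    """Project x onto the box [lo, hi] componentwise."""
    return np.clip(x, lo, hi)

def projected_gradient(f, grad, x0, lo, hi,
                       alpha0=1.0, beta=0.5, sigma=1e-4,
                       tol=1e-8, max_iter=500):
    """Projected gradient descent with Armijo backtracking along the
    projection arc (a sketch of the general class, not the paper's method)."""
    x = project(x0, lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        # Stationarity measure for box constraints: ||x - P(x - g)||
        if np.linalg.norm(x - project(x - g, lo, hi)) < tol:
            break
        alpha, fx = alpha0, f(x)
        while True:
            x_new = project(x - alpha * g, lo, hi)
            # Armijo-type sufficient decrease along the projection arc
            if f(x_new) <= fx + sigma * g.dot(x_new - x):
                break
            alpha *= beta
        x = x_new
    return x

# Example: minimize ||x - c||^2 subject to 0 <= x <= 1,
# with c partly outside the box, so some bounds are active at the solution.
c = np.array([1.5, -0.5, 0.3])
f = lambda x: np.sum((x - c) ** 2)
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient(f, grad, np.zeros(3), 0.0, 1.0)
# x_star → [1.0, 0.0, 0.3]: the first two bounds are active, the third is free
```

Note how the iterate lands exactly on the active bounds after finitely many steps; once the active set is fixed, the method behaves like an unconstrained method on the remaining free variables, which is the mechanism behind the rate-of-convergence result stated in the abstract.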
Cite this article
Schwartz, A., Polak, E. Family of Projected Descent Methods for Optimization Problems with Simple Bounds. Journal of Optimization Theory and Applications 92, 1–31 (1997). https://doi.org/10.1023/A:1022690711754