Abstract
New variable metric algorithms are presented with three distinguishing features:
(1) They make no line searches and allow quite arbitrary step directions while maintaining quadratic termination and positive definite updates for the matrix H, whose inverse is the Hessian matrix of second derivatives for a quadratic approximation to the objective function.
(2) The updates from H to H⁺ are optimally conditioned in the sense that they minimize the ratio of the largest to smallest eigenvalue of H⁻¹H⁺.
(3) Instead of working with the matrix H directly, these algorithms represent it as JJᵀ, and only store and update the Jacobian matrix J.
A theoretical basis is laid for this family of algorithms, and an example is given along with encouraging numerical results obtained with several standard test functions.
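The two ideas in points (2) and (3) can be illustrated concretely. The sketch below (using NumPy) is not Davidon's actual update formula; the rank-one change to J is an arbitrary placeholder. It shows why the factored form H = JJᵀ keeps H positive semidefinite under any update to J, and how the conditioning measure from point (2), the eigenvalue ratio of H⁻¹H⁺, can be computed.

```python
import numpy as np

# Hypothetical sketch of the factored representation H = J J^T described in
# the abstract: only J is stored, and any update to J automatically keeps
# H = J J^T symmetric positive semidefinite.  The rank-one change to J below
# is a placeholder, NOT Davidon's optimally conditioned update.

rng = np.random.default_rng(0)
n = 4
J = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # current factor
H = J @ J.T                                         # inverse-Hessian estimate

g = rng.standard_normal(n)   # gradient at the current iterate
step = -H @ g                # quasi-Newton step direction (no line search)

# Updating J by a rank-one term induces an update of H that is symmetric
# positive semidefinite by construction.
u, v = rng.standard_normal(n), rng.standard_normal(n)
J_plus = J + 0.1 * np.outer(u, v)
H_plus = J_plus @ J_plus.T

# The conditioning measure from the abstract: the ratio of the largest to
# smallest eigenvalue of H^{-1} H_plus.  Since H is positive definite, these
# eigenvalues equal those of the symmetric matrix L^{-1} H_plus L^{-T},
# where L is the Cholesky factor of H.
L = np.linalg.cholesky(H)
M = np.linalg.solve(L, np.linalg.solve(L, H_plus).T)  # L^{-1} H_plus L^{-T}
eigs = np.linalg.eigvalsh(M)
ratio = eigs.max() / eigs.min()
print(ratio)
```

An update that made `ratio` as close to 1 as possible would leave the metric nearly unchanged in every direction; minimizing this ratio is what the abstract means by "optimally conditioned."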
Part of this work was supported by a grant from the United States Public Health Service.
Cite this article
Davidon, W.C. Optimally conditioned optimization algorithms without line searches. Mathematical Programming 9, 1–30 (1975). https://doi.org/10.1007/BF01681328