Abstract
It is known that the conjugate-gradient algorithm is at least as good as the steepest-descent algorithm for minimizing quadratic functions. It is shown here that the conjugate-gradient algorithm is actually superior to the steepest-descent algorithm in that, in the generic case, at each iteration it yields a lower cost than does the steepest-descent algorithm, when both start at the same point.
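The abstract's claim can be illustrated numerically. The sketch below (not from the paper; all names and the test matrix are illustrative choices) minimizes a quadratic f(x) = ½xᵀAx − bᵀx with a symmetric positive-definite A, running steepest descent with exact line search and conjugate gradient from the same starting point, and records the cost after each iteration. In the generic case, the conjugate-gradient costs are no greater than the steepest-descent costs at every iteration, matching the result stated above.

```python
import numpy as np

# Illustrative comparison (assumed setup, not the paper's example):
# minimize f(x) = 0.5 * x @ A @ x - b @ x over R^n, A symmetric positive definite.
# Both methods use exact line searches and start from the same point x0.

def f(A, b, x):
    return 0.5 * x @ A @ x - b @ x

def steepest_descent(A, b, x0, iters):
    x = x0.copy()
    costs = [f(A, b, x)]
    for _ in range(iters):
        g = A @ x - b                    # gradient of the quadratic
        alpha = (g @ g) / (g @ A @ g)    # exact line-search step length
        x = x - alpha * g
        costs.append(f(A, b, x))
    return costs

def conjugate_gradient(A, b, x0, iters):
    x = x0.copy()
    r = b - A @ x                        # residual = negative gradient
    p = r.copy()                         # first direction equals steepest descent
    costs = [f(A, b, x)]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)       # exact step along the conjugate direction
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r) # standard conjugacy update
        p = r_new + beta * p
        r = r_new
        costs.append(f(A, b, x))
    return costs

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)              # a generic SPD matrix
b = rng.standard_normal(6)
x0 = rng.standard_normal(6)

sd = steepest_descent(A, b, x0, 5)
cg = conjugate_gradient(A, b, x0, 5)
```

Because the first conjugate-gradient direction is the negative gradient, the two methods produce the same cost after one iteration; thereafter, conjugate gradient minimizes the cost over an expanding subspace that contains the steepest-descent iterates, so its cost at each iteration is at most that of steepest descent, and generically strictly lower.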
Additional information
Communicated by D. Q. Mayne
Thanks are due to Professor R. W. Sargent, Imperial College, London, England, for suggestions concerning presentation.
Cite this article
Allwright, J.C. Conjugate gradient versus steepest descent. J Optim Theory Appl 20, 129–134 (1976). https://doi.org/10.1007/BF00933351