Abstract
In the preceding pages we considered two methods for finding a minimum point of a real-valued function f of n real variables, namely, Newton's method and the gradient method. The gradient method is easy to apply, but its convergence is often very slow. Newton's method, on the other hand, normally converges rapidly but involves considerable computation at each step. Recall that one step of Newton's method involves computing the gradient f′(x) and the Hessian f″(x) of f and solving a linear system of equations, usually by inverting the Hessian f″(x). It is a crucial fact that a Newton step can instead be accomplished by a sequence of n linear minimizations in n suitably chosen directions, called conjugate directions. This fact is the central theme in the design of an important class of minimization algorithms, called conjugate direction algorithms.
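To make the idea concrete, here is a minimal numerical sketch (not from the text) for the quadratic case f(x) = ½xᵀAx − bᵀx with A symmetric positive definite, where the Newton point is A⁻¹b. The function name conjugate_direction_minimize and the Gram–Schmidt conjugation used to build the directions are illustrative assumptions; any set of n mutually A-conjugate directions would serve. After n exact line minimizations along such directions, the iterate reaches the Newton point from any starting point.

```python
import numpy as np

def conjugate_direction_minimize(A, b, x0):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)
    by exact line searches along n mutually A-conjugate directions.
    The directions are built by Gram-Schmidt A-conjugation of the
    standard basis; this choice is illustrative, not canonical."""
    n = len(b)
    directions = []
    for e in np.eye(n):                          # start from a basis vector
        d = e.copy()
        for p in directions:                     # subtract A-projections onto
            d -= (p @ A @ e) / (p @ A @ p) * p   # the earlier directions
        directions.append(d)                     # now p_j^T A d = 0 for j < i

    x = x0.astype(float)
    for p in directions:
        r = b - A @ x                            # negative gradient at x
        alpha = (p @ r) / (p @ A @ p)            # exact line-search step size
        x = x + alpha * p
    return x                                     # equals A^{-1} b after n steps

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_direction_minimize(A, b, np.zeros(2))
print(x, np.linalg.solve(A, b))                  # the two agree
```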
Copyright information
© 1980 Springer-Verlag New York Inc.
Cite this chapter
Hestenes, M.R. (1980). Conjugate Direction Methods. In: Conjugate Direction Methods in Optimization. Applications of Mathematics, vol 12. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-6048-6_2
Print ISBN: 978-1-4612-6050-9
Online ISBN: 978-1-4612-6048-6