Journal of Optimization Theory and Applications, Volume 71, Issue 2, pp 399–405

Global convergence result for conjugate gradient methods

Authors

  • Y. F. Hu
    • Department of Mathematical Sciences, Loughborough University of Technology
  • C. Storey
    • Department of Mathematical Sciences, Loughborough University of Technology
Technical Note

DOI: 10.1007/BF00939927

Cite this article as:
Hu, Y.F. & Storey, C. J Optim Theory Appl (1991) 71: 399. doi:10.1007/BF00939927

Abstract

Conjugate gradient optimization algorithms depend on the search directions,
$$s^{(1)} = -g^{(1)}, \qquad s^{(k+1)} = -g^{(k+1)} + \beta^{(k)} s^{(k)}, \quad k \geqslant 1,$$
with different methods arising from different choices for the scalar β(k). In this note, conditions are given on β(k) to ensure global convergence of the resulting algorithms.
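The recursion above can be sketched in code. The following is a minimal illustration, not the paper's method: it applies the search-direction update to a convex quadratic objective, where an exact line search is available in closed form, and uses the classical Fletcher–Reeves and Polak–Ribière formulas as example choices of β(k) (the paper's conditions on β(k) are more general). Function and variable names are illustrative.

```python
import numpy as np

def cg_minimize_quadratic(A, b, x0, beta_rule="FR", tol=1e-10, max_iter=50):
    """Minimize f(x) = 0.5 x'Ax - b'x (A symmetric positive definite)
    using the conjugate gradient search-direction recursion
        s(1)   = -g(1),
        s(k+1) = -g(k+1) + beta(k) s(k),  k >= 1.
    Illustrative sketch only; the step length is the exact minimizer
    along s, which is valid because f is quadratic."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                        # gradient of the quadratic
    s = -g                               # s^(1) = -g^(1)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ s) / (s @ A @ s)   # exact line search for quadratic f
        x = x + alpha * s
        g_new = A @ x - b
        if beta_rule == "FR":            # Fletcher-Reeves beta^(k)
            beta = (g_new @ g_new) / (g @ g)
        else:                            # Polak-Ribiere beta^(k)
            beta = g_new @ (g_new - g) / (g @ g)
        s = -g_new + beta * s            # s^(k+1) = -g^(k+1) + beta^(k) s^(k)
        g = g_new
    return x
```

On a strictly convex quadratic with exact line searches, both choices of β(k) reduce to linear conjugate gradients and terminate in at most n iterations; the global-convergence question addressed in the note arises for general nonlinear objectives and inexact line searches.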

Key Words

Conjugate gradient algorithms, global convergence

Copyright information

© Plenum Publishing Corporation 1991