Computational Optimization and Applications, Volume 40, Issue 2, pp 191–216

Multi-step nonlinear conjugate gradient methods for unconstrained minimization


DOI: 10.1007/s10589-007-9087-z

Cite this article as:
Ford, J.A., Narushima, Y. & Yabe, H. Comput Optim Appl (2008) 40: 191. doi:10.1007/s10589-007-9087-z

Abstract

Conjugate gradient methods are appealing for large-scale nonlinear optimization problems because they avoid the storage of matrices. Seeking faster convergence for these methods, Dai and Liao (Appl. Math. Optim. 43:87–101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and Yabe and Takano (Comput. Optim. Appl. 28:203–225, 2004) later proposed another conjugate gradient method based on a modified secant condition. In this paper, we make use of a multi-step secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357–370, 1993; J. Comput. Appl. Math. 50:305–323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent under certain assumptions. Numerical results are reported.
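For orientation, the following is a minimal sketch of the two conditions the abstract refers to, written in standard notation that is assumed here rather than taken from this page: g_k denotes the gradient at the iterate x_k, s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k, d_k is the search direction, and B_{k+1} is a Hessian approximation. The weight psi_k in the two-step condition is a hypothetical placeholder for the interpolation-dependent scalar used in the multi-step framework; it is not a quantity defined on this page.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hedged sketch, not reproduced from the paper itself: the standard
% Dai--Liao direction and parameter, derived from the secant condition.
% Here g_k is the gradient at x_k, s_k = x_{k+1} - x_k,
% y_k = g_{k+1} - g_k, and t >= 0 is a fixed scalar.
\[
  d_{k+1} = -g_{k+1} + \beta_k d_k,
  \qquad
  \beta_k^{\mathrm{DL}} = \frac{g_{k+1}^{\top}(y_k - t\,s_k)}{d_k^{\top} y_k},
  \qquad t \ge 0.
\]
% A two-step secant condition in the spirit of Ford--Moghrabi: the
% single pair (s_k, y_k) is replaced by a combination of the two most
% recent pairs. The scalar \psi_k is an assumed placeholder for the
% interpolation-dependent weight of the multi-step framework.
\[
  B_{k+1} s_k = y_k
  \quad\longrightarrow\quad
  B_{k+1}\bigl(s_k - \psi_k s_{k-1}\bigr) = y_k - \psi_k y_{k-1}.
\]
\end{document}
```

Substituting the multi-step pair for (s_k, y_k) in the Dai–Liao construction is, in rough outline, how conjugate gradient parameters of the kind studied in this paper arise; the precise choices are given in the article itself.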

Keywords

Unconstrained optimization · Conjugate gradient method · Line search · Global convergence · Multi-step secant condition

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • John A. Ford (1)
  • Yasushi Narushima (2)
  • Hiroshi Yabe (2)
  1. Department of Computer Science, University of Essex, Colchester, UK
  2. Department of Mathematical Information Science, Tokyo University of Science, Tokyo, Japan