Computational Optimization and Applications

Volume 28, Issue 2, pp 203–225

Global Convergence Properties of Nonlinear Conjugate Gradient Methods with Modified Secant Condition

  • Hiroshi Yabe
  • Masahiro Takano

Abstract

Conjugate gradient methods are appealing for large-scale nonlinear optimization problems. Recently, in order to accelerate the convergence of these methods, Dai and Liao (2001) used the secant condition of quasi-Newton methods. In this paper, we make use of the modified secant condition given by Zhang et al. (1999) and Zhang and Xu (2001) and, following Dai and Liao (2001), propose a new conjugate gradient method. A new feature of this method is that it exploits both the available gradient and function value information, achieving a higher-order accuracy in approximating the second-order curvature of the objective function. The method is shown to be globally convergent under some assumptions. Numerical results are reported.
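
To make the construction concrete, the following is a minimal Python sketch of a Dai–Liao-type conjugate gradient iteration in which the usual secant vector y_k is replaced by the modified secant vector z_k of Zhang et al. (1999). This is not the authors' code: the choice u_k = s_k in z_k, the value of the Dai–Liao parameter t, the SciPy line search, and all safeguards and names are our own illustrative assumptions.

    import numpy as np
    from scipy.optimize import line_search  # SciPy's Wolfe-condition line search

    def modified_secant_cg(f, grad, x0, t=0.1, tol=1e-6, max_iter=500):
        """Sketch of a Dai-Liao-type CG method using the modified secant
        vector z_k of Zhang et al. (1999); parameter values are illustrative."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                    # initial steepest-descent direction
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:                     # line search failed: take a tiny step
                alpha = 1e-4
            x_new = x + alpha * d
            g_new = grad(x_new)
            s = x_new - x                         # s_k = x_{k+1} - x_k
            y = g_new - g                         # y_k = g_{k+1} - g_k
            # Modified secant vector z_k = y_k + (theta_k / s_k^T u_k) u_k with
            # u_k = s_k.  theta_k blends function values with gradients, which is
            # what yields the higher-order approximation of the curvature.
            theta = 6.0 * (f(x) - f(x_new)) + 3.0 * (g + g_new) @ s
            z = y + (theta / (s @ s)) * s
            # Dai-Liao-type beta with y_k replaced by z_k:
            #   beta_k = g_{k+1}^T (z_k - t s_k) / (d_k^T z_k)
            denom = d @ z
            beta = (g_new @ (z - t * s)) / denom if abs(denom) > 1e-12 else 0.0
            d = -g_new + beta * d                 # beta = 0 restarts with -g_{k+1}
            x, g = x_new, g_new
        return x

    # Toy usage: minimize 0.5 * ||A x - b||^2 from the origin.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    f = lambda v: 0.5 * np.sum((A @ v - b) ** 2)
    grad = lambda v: A.T @ (A @ v - b)
    print(modified_secant_cg(f, grad, np.zeros(2)))

The SciPy routine above enforces the strong Wolfe conditions, in line with the kind of line search conditions under which the global convergence analysis is carried out.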

Keywords: unconstrained optimization, conjugate gradient method, line search, global convergence, modified secant condition


References

  1. M. Al-Baali, “Descent property and global convergence of the Fletcher-Reeves method with inexact line search,” IMA Journal of Numerical Analysis, vol. 5, pp. 121–124, 1985.
  2. Y.H. Dai and L.Z. Liao, “New conjugacy conditions and related nonlinear conjugate gradient methods,” Applied Mathematics and Optimization, vol. 43, pp. 87–101, 2001.
  3. Y.H. Dai, J.Y. Han, G.H. Liu, D.F. Sun, H.X. Yin, and Y. Yuan, “Convergence properties of nonlinear conjugate gradient methods,” SIAM Journal on Optimization, vol. 10, pp. 345–358, 1999.
  4. J.C. Gilbert and J. Nocedal, “Global convergence properties of conjugate gradient methods for optimization,” SIAM Journal on Optimization, vol. 2, pp. 21–42, 1992.
  5. J.J. Moré, B.S. Garbow, and K.E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, pp. 17–41, 1981.
  6. J. Nocedal and S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer-Verlag: New York, 1999.
  7. M.J.D. Powell, “Nonconvex minimization calculations and the conjugate gradient method,” in Lecture Notes in Mathematics, no. 1066, Springer-Verlag: Berlin, 1984, pp. 122–141.
  8. J.Z. Zhang, N.Y. Deng, and L.H. Chen, “New quasi-Newton equation and related methods for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 102, pp. 147–167, 1999.
  9. J.Z. Zhang and C.X. Xu, “Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations,” Journal of Computational and Applied Mathematics, vol. 137, pp. 269–278, 2001.
  10. G. Zoutendijk, “Nonlinear programming, computational methods,” in Integer and Nonlinear Programming, J. Abadie (Ed.), North-Holland: Amsterdam, 1970, pp. 37–86.

Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • Hiroshi Yabe (1)
  • Masahiro Takano (2)

  1. Department of Mathematical Information Science, Tokyo University of Science, Shinjuku-ku, Tokyo, Japan
  2. National Statistics Center, Shinjuku-ku, Tokyo, Japan
