
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization

  • Published in: Annals of Operations Research

Abstract

Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the size of the scalar β_k relative to the one in the new method lies in some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are reported for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
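As a hedged illustration of the kind of hybrid scheme the abstract describes, the sketch below combines the Hestenes–Stiefel scalar β_HS with the Dai–Yuan scalar β_DY via the widely cited truncation β_k = max(0, min(β_HS, β_DY)). This is not the paper's exact algorithm: for simplicity it minimizes a convex quadratic with an exact line search, whereas the paper assumes the weak Wolfe conditions.

```python
# Sketch (not the authors' exact method): nonlinear CG with the hybrid
# scalar beta_k = max(0, min(beta_HS, beta_DY)), applied to the convex
# quadratic f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def hybrid_cg(A, b, x0, tol=1e-10, max_iter=200):
    """Minimize 0.5 x^T A x - b^T x with a hybrid HS/DY CG iteration."""
    x = list(x0)
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient A x - b
    d = [-gi for gi in g]                              # steepest-descent start
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 < tol:
            break
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)                # exact step for a quadratic
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        y = [gn - gi for gn, gi in zip(g_new, g)]      # y_{k-1} = g_k - g_{k-1}
        dy = dot(d, y)
        beta_hs = dot(g_new, y) / dy                   # Hestenes-Stiefel scalar
        beta_dy = dot(g_new, g_new) / dy               # Dai-Yuan scalar
        beta = max(0.0, min(beta_hs, beta_dy))         # hybrid truncation
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = hybrid_cg(A, b, [0.0, 0.0])   # converges to the solution of A x = b
```

The truncation at zero keeps β_k nonnegative, and taking the minimum of the two scalars caps β_HS by β_DY; on a quadratic with exact line searches the two scalars coincide, so the hybrid reduces to ordinary CG there.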


References

  1. M. Al-Baali, Descent property and global convergence of the Fletcher–Reeves method with inexact line search, IMA J. Numer. Anal. 5 (1985) 121–124.

  2. Y.H. Dai, New properties of a nonlinear conjugate gradient method, Numer. Math. 89 (2001) 83–98.

  3. Y.H. Dai and Y. Yuan, Convergence properties of the Fletcher–Reeves method, IMA J. Numer. Anal. 16(2) (1996) 155–164.

  4. Y.H. Dai and Y. Yuan, Convergence properties of the conjugate descent method, Advances in Mathematics 6 (1996) 552–562.

  5. Y.H. Dai and Y. Yuan, Some properties of a new conjugate gradient method, in: Advances in Nonlinear Programming, ed. Y. Yuan (Kluwer Academic, Boston, 1998) pp. 251–262.

  6. Y.H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optimization 10(1) (1999) 177–182.

  7. R. Fletcher, Practical Methods of Optimization, Vol. 1, Unconstrained Optimization (Wiley, New York, 1987).

  8. R. Fletcher and C. Reeves, Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154.

  9. J.C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optimization 2(1) (1992) 21–42.

  10. M.R. Hestenes and E.L. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards 49 (1952) 409–436.

  11. Y.F. Hu and C. Storey, Global convergence result for conjugate gradient methods, JOTA 71(2) (1991) 399–405.

  12. G.H. Liu, J.Y. Han and H.X. Yin, Global convergence of the Fletcher–Reeves algorithm with an inexact line search, Appl. Math. J. Chinese Univ. Ser. B 10 (1995) 75–82.

  13. J.J. Moré, B.S. Garbow and K.E. Hillstrom, Testing unconstrained optimization software, ACM Transactions on Mathematical Software 7 (1981) 17–41.

  14. E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Française Informat. Recherche Opérationnelle, 3e année 16 (1969) 35–43.

  15. B.T. Polyak, The conjugate gradient method in extremal problems, Comput. Math. Math. Phys. 9 (1969) 94–112.

  16. M.J.D. Powell, Restart procedures for the conjugate gradient method, Math. Programming 12 (1977) 241–254.

  17. M.J.D. Powell, Nonconvex minimization calculations and the conjugate gradient method, in: Lecture Notes in Mathematics 1066 (Springer, Berlin, 1984) pp. 122–141.

  18. D. Touati-Ahmed and C. Storey, Efficient hybrid conjugate gradient techniques, JOTA 64 (1990) 379–397.

  19. P. Wolfe, Convergence conditions for ascent methods, SIAM Review 11 (1969) 226–235.

  20. P. Wolfe, Convergence conditions for ascent methods. II: Some corrections, SIAM Review 13 (1971) 185–188.

  21. G. Zoutendijk, Nonlinear programming, computational methods, in: Integer and Nonlinear Programming, ed. J. Abadie (North-Holland, Amsterdam, 1970) pp. 37–86.

Cite this article

Dai, Y., Yuan, Y. An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization. Annals of Operations Research 103, 33–47 (2001). https://doi.org/10.1023/A:1012930416777
