Optimization Letters, Volume 6, Issue 5, pp 941–955

Conjugate gradient methods using value of objective function for unconstrained optimization

  • Hideaki Iiduka
  • Yasushi Narushima
Original Paper

Abstract

Conjugate gradient methods have been widely used to solve large-scale unconstrained optimization problems. In conventional methods, the search direction is defined using only the gradient of the objective function. This paper proposes two nonlinear conjugate gradient methods whose search directions also take the value of the objective function into account. We prove that both methods converge globally, and we compare them numerically with conventional methods. The results show that, with a slight modification to the direction, one of our methods performs as well as the best conventional method, which employs the Hestenes–Stiefel formula.

Keywords

Unconstrained optimization problem · Conjugate gradient method · Wolfe conditions · Global convergence
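
To illustrate the conventional scheme against which the proposed methods are measured, the following is a minimal sketch of a nonlinear conjugate gradient iteration that builds its search direction from gradients alone, using the Hestenes–Stiefel formula and a line search satisfying the Wolfe conditions. This is not the authors' code: the function names and the SciPy-based line search are our own illustrative choices.

    import numpy as np
    from scipy.optimize import line_search

    def hs_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
        """Nonlinear CG with the Hestenes-Stiefel beta (illustrative sketch).

        Direction update: d_0 = -g_0 and
        d_{k+1} = -g_{k+1} + beta_k d_k, where
        beta_k = g_{k+1}^T y_k / (d_k^T y_k) and y_k = g_{k+1} - g_k.
        """
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                  # start from steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol:        # gradient small enough: stop
                break
            # Step size chosen to satisfy the (strong) Wolfe conditions.
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:                   # search failed: restart along -g
                d = -g
                alpha = line_search(f, grad, x, d, gfk=g)[0]
                if alpha is None:
                    break
            x_new = x + alpha * d
            g_new = grad(x_new)
            y = g_new - g                       # y_k = g_{k+1} - g_k
            beta = (g_new @ y) / (d @ y)        # Hestenes-Stiefel formula
            d = -g_new + beta * d               # next search direction
            x, g = x_new, g_new
        return x

    # Example: minimize the Rosenbrock function from a standard start point.
    from scipy.optimize import rosen, rosen_der
    print(hs_conjugate_gradient(rosen, rosen_der, np.array([-1.2, 1.0])))

Under the Wolfe curvature condition, d_k^T y_k > 0 holds whenever d_k is a descent direction, so the Hestenes–Stiefel denominator is safe. The paper's proposed directions additionally use objective-function values, which this gradient-only sketch does not attempt to reproduce.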

References

  1. Bongartz I., Conn A.R., Gould N.I.M., Toint P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
  2. Cragg E.E., Levy A.V.: Study on a supermemory gradient method for the minimization of functions. J. Optim. Theory Appl. 4, 191–205 (1969)
  3. Dai Y.H., Liao L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)
  4. Dai Y.H., Yuan Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)
  5. Dai Y.H., Yuan Y.: An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 103, 33–47 (2001)
  6. Dolan E.D., Moré J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
  7. Fletcher R., Reeves C.M.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)
  8. Ford J.A., Narushima Y., Yabe H.: Multi-step nonlinear conjugate gradient methods for unconstrained minimization. Comput. Optim. Appl. 40, 191–216 (2008)
  9. Gilbert J.C., Nocedal J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)
  10. Gould N.I.M., Orban D., Toint P.L.: CUTEr web site. http://cuter.rl.ac.uk/cuter-www/index.html
  11. Hager W.W., Zhang H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
  12. Hager W.W., Zhang H.: A survey of nonlinear conjugate gradient methods. Pacific J. Optim. 2, 35–58 (2006)
  13. Hager W.W., Zhang H.: Algorithm 851: CG_DESCENT: a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32, 113–137 (2006)
  14. Hager W.W., Zhang H.: CG_DESCENT Version 1.4 User's Guide. University of Florida, November 2005. http://www.math.ufl.edu/~hager/
  15. Hestenes M.R., Stiefel E.: Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 49, 409–436 (1952)
  16. Huang H.X., Liang Z.A., Pardalos P.M.: Flow search approach and new bounds for the m-step linear conjugate gradient algorithm. J. Optim. Theory Appl. 120, 53–71 (2004)
  17. Iiduka H.: Hybrid conjugate gradient method for a convex optimization problem over the fixed-point set of a nonexpansive mapping. J. Optim. Theory Appl. 140, 463–475 (2009)
  18. Iiduka H., Uchida M.: Fixed point optimization algorithms for network bandwidth allocation problems with compoundable constraints. IEEE Communications Letters (to appear)
  19. Iiduka H., Yamada I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)
  20. Liu G., Nocedal J., Waltz R.: CG+ web site (1998). http://users.eecs.northwestern.edu/~nocedal/CG+.html
  21. Miele A., Cantrell J.W.: Study on a memory gradient method for the minimization of functions. J. Optim. Theory Appl. 3, 459–470 (1969)
  22. Narushima Y., Yabe H.: Global convergence of a memory gradient method for unconstrained optimization. Comput. Optim. Appl. 35, 325–346 (2006)
  23. Narushima Y., Yabe H., Ford J.A.: A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 21, 212–230 (2011)
  24. Nazareth J.L.: A conjugate direction algorithm for unconstrained minimization without line searches. J. Optim. Theory Appl. 23, 373–387 (1977)
  25. Nocedal J., Wright S.J.: Numerical Optimization, 2nd edn. Springer Series in Operations Research. Springer, New York (2006)
  26. Pardalos P.M., Resende M.G.C.: Handbook of Applied Optimization. Oxford University Press, Oxford (2002)
  27. Polak E., Ribière G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Française Informat. Recherche Opérationnelle 3e année 16, 35–43 (1969)
  28. Yabe H., Sakaiwa N.: A new nonlinear conjugate gradient method for unconstrained optimization. J. Oper. Res. Soc. Japan 48, 284–296 (2005)
  29. Yabe H., Takano M.: Global convergence properties of nonlinear conjugate gradient methods with modified secant condition. Comput. Optim. Appl. 28, 203–225 (2004)
  30. Yuan G.: Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems. Optim. Lett. 3, 11–21 (2009)
  31. Zhang L., Zhou W., Li D.H.: A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26, 629–640 (2006)
  32. Zhang L., Zhou W., Li D.H.: Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. 104, 561–572 (2006)
  33. Zhang L., Zhou W., Li D.H.: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 22, 697–711 (2007)
  34. Zhou W., Zhang L.: A nonlinear conjugate gradient method based on the MBFGS secant condition. Optim. Methods Softw. 21, 707–714 (2006)
  35. Zoutendijk G.: Nonlinear programming. In: Abadie, J. (ed.) Integer and Nonlinear Programming, pp. 37–86. North-Holland, Amsterdam (1970)

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. Network Design Research Center, Kyushu Institute of Technology, Tokyo, Japan
  2. Department of Communication and Information Science, Fukushima National College of Technology, Fukushima, Japan
