
An improved Dai–Kou conjugate gradient algorithm for unconstrained optimization

  • Zexian Liu
  • Hongwei Liu
  • Yu-Hong Dai

Abstract

It has gradually become accepted that the loss of orthogonality among the gradients in a conjugate gradient algorithm can slow down its convergence. The Dai–Kou conjugate gradient algorithm (SIAM J Optim 23(1):296–320, 2013), known as CGOPT, has attracted considerable attention due to its numerical efficiency. In this paper, we present an improved Dai–Kou conjugate gradient algorithm for unconstrained optimization that consists of only two kinds of iterations. In the improved algorithm, we develop a new quasi-Newton method that improves the orthogonality by solving a subproblem in a subspace, and we design a modified strategy for choosing the initial stepsize to improve the numerical performance. The global convergence of the improved algorithm is established without the strict assumptions required in the convergence analyses of other limited memory conjugate gradient methods. Numerical results on the CUTEr library suggest that the improved algorithm (CGOPT (2.0)) yields a substantial improvement over the original Dai–Kou algorithm (CGOPT (1.0)) and is slightly superior to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) developed by Hager and Zhang (SIAM J Optim 23(4):2150–2168, 2013).
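
To make the structure of such an iteration concrete, here is a minimal Python sketch of a Dai–Kou-type conjugate gradient loop. It is not the authors' CGOPT (2.0) implementation: the improved Wolfe line search, the subspace quasi-Newton iterations, and the paper's initial-stepsize strategy are replaced by simple placeholders (a backtracking Armijo search, a crude stepsize forecast, and a restart safeguard), and the beta formula follows the Dai–Kou family from the 2013 paper under a standard self-scaling choice.

```python
import numpy as np

def dai_kou_cg(f, grad, x0, tol=1e-6, max_iter=10000):
    """Minimize f from x0 with a Dai-Kou-type CG iteration (illustrative sketch)."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g                                       # first direction: steepest descent
    alpha = 1.0 / max(np.linalg.norm(g), 1.0)    # crude initial stepsize (placeholder)
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:
            break
        # Backtracking Armijo search: stands in for the improved Wolfe line search.
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            beta = 0.0                           # curvature too small: restart
        else:
            dy = d.dot(y)
            # Dai-Kou beta: self-scaling memoryless BFGS direction with the
            # choice tau_k = s'y / ||s||^2 from the 2013 paper.
            beta = (g_new.dot(y) - (y.dot(y) / sy) * g_new.dot(s)) / dy
            # Simplified safeguard against overly negative beta, in the
            # spirit of CGOPT's truncation (not the exact rule).
            beta = max(beta, 0.1 * g.dot(d) / d.dot(d))
        d = -g_new + beta * d
        x, g = x_new, g_new
        alpha = 2.0 * alpha                      # naive forecast of next trial stepsize
    return x
```

On a smooth test function, e.g. `dai_kou_cg(lambda x: x.dot(x), lambda x: 2 * x, np.ones(100))`, the sketch converges quickly; the actual CGOPT (2.0) adds the subspace quasi-Newton iterations precisely to restore the orthogonality that such a bare loop gradually loses.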

Keywords

Conjugate gradient algorithm · Limited memory · Quasi-Newton method · Preconditioned conjugate gradient algorithm · Global convergence

Mathematics Subject Classification

90C06 · 90C26 · 65Y20

Acknowledgements

We would like to thank the anonymous referees for their useful comments. We also would like to thank Professors W. W. Hager and H. C. Zhang for their C code of CG_DESCENT (6.8). The third author's work was partly supported by the Chinese NSF grants (Nos. 11631013 and 11971372) and the Key Project of Chinese National Programs for Fundamental Research and Development (No. 2015CB856002). The first author's work was supported by the National Natural Science Foundation of China (No. 11901561) and the Natural Science Foundation of Guangxi (No. 2018GXNSFBA281180).

References

  1. Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7(2), 149–154 (1964)
  2. Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49(6), 409–436 (1952)
  3. Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inform. Rech. Opér. 3, 35–43 (1969)
  4. Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)
  5. Shanno, D.F.: On the convergence of a new conjugate gradient algorithm. SIAM J. Numer. Anal. 15(6), 1247–1257 (1978)
  6. Perry, J.M.: A class of conjugate gradient algorithms with a two-step variable-metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Sciences, Northwestern University, Evanston (1977)
  7. Dai, Y.H., Yuan, Y.X.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)
  8. Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)
  9. Hager, W.W., Zhang, H.C.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16(1), 170–192 (2005)
  10. Zhang, L., Zhou, W.J., Li, D.H.: Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. 104, 561–572 (2006)
  11. Liu, H.W., Liu, Z.X.: An efficient Barzilai–Borwein conjugate gradient method for unconstrained optimization. J. Optim. Theory Appl. 180(3), 879–906 (2019)
  12. Dai, Y.H., Han, J.Y., Liu, G.H., et al.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10(2), 345–358 (1999)
  13. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43(1), 87–101 (2001)
  14. Hager, W.W., Zhang, H.C.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2(1), 35–58 (2006)
  15. Dai, Y.H., Yuan, Y.X.: Nonlinear Conjugate Gradient Methods. Shanghai Scientific and Technical Publishers, Shanghai (2000)
  16. Hager, W.W., Zhang, H.C.: The limited memory conjugate gradient method. SIAM J. Optim. 23(4), 2150–2168 (2013)
  17. Schmidt, E.: Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten. Rend. Circ. Mat. Palermo 25(1), 53–77 (1908)
  18. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(1–3), 503–528 (1989)
  19. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)
  20. Hager, W.W., Zhang, H.C.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)
  21. Biglari, F., Hassan, M.A., Leong, W.J.: New quasi-Newton methods via higher order tensor models. J. Comput. Appl. Math. 235(8), 2412–2422 (2011)
  22. Wei, Z.X., Li, G.Y., Qi, L.Q.: New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 175(2), 1156–1188 (2006)
  23. Li, D.H., Fukushima, M.: On the global convergence of the BFGS method for nonconvex unconstrained optimization problems. SIAM J. Optim. 11(4), 1054–1064 (2001)
  24. Yuan, Y.X.: A modified BFGS algorithm for unconstrained optimization. IMA J. Numer. Anal. 11(3), 325–332 (1991)
  25. Dai, Y.H., Yuan, J.Y., Yuan, Y.X.: Modified two-point stepsize gradient methods for unconstrained optimization problems. Comput. Optim. Appl. 22(1), 103–109 (2002)
  26. Liu, Z.X., Liu, H.W.: An efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. Numer. Algorithms 78(1), 21–39 (2018)
  27. Liu, Z.X., Liu, H.W.: An efficient gradient method with approximately optimal stepsize based on tensor model for unconstrained optimization. J. Optim. Theory Appl. 181(2), 608–633 (2019)
  28. Tarzanagh, D.A., Reza Peyghami, M.: A new regularized limited memory BFGS-type method based on modified secant conditions for unconstrained optimization problems. J. Glob. Optim. 63, 709–728 (2015)
  29. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)
  30. Andrei, N.: Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization. Bull. Malays. Math. Sci. Soc. 34(2), 319–330 (2011)
  31. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)
  32. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Mathematics and Statistics, Xidian University, Xi’an, China
  2. LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
