An improved Dai–Kou conjugate gradient algorithm for unconstrained optimization
It has gradually been accepted that the loss of orthogonality of the gradients in a conjugate gradient algorithm may decelerate the convergence rate to some extent. The Dai–Kou conjugate gradient algorithm (SIAM J Optim 23(1):296–320, 2013), called CGOPT, has attracted many researchers' attention due to its numerical efficiency. In this paper, we present an improved Dai–Kou conjugate gradient algorithm for unconstrained optimization, which consists of only two kinds of iterations. In the improved algorithm, we develop a new quasi-Newton method that improves the orthogonality by solving a subproblem in a subspace, and we design a modified strategy for choosing the initial stepsize to improve the numerical performance. The global convergence of the improved Dai–Kou conjugate gradient algorithm is established without the strict assumptions used in the convergence analyses of other limited memory conjugate gradient methods. Numerical results suggest that the improved algorithm (CGOPT (2.0)) yields a substantial improvement over the original Dai–Kou CG algorithm (CGOPT (1.0)) and is slightly superior to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) developed by Hager and Zhang (SIAM J Optim 23(4):2150–2168, 2013) on the CUTEr library.
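To make the setting concrete, the following is a minimal sketch of a Dai–Kou-type nonlinear conjugate gradient iteration. The beta formula is the Dai–Kou choice from the 2013 paper (a Hager–Zhang-style update with coefficient 1 instead of 2); the Armijo backtracking line search and the steepest-descent restart safeguard are simplified stand-ins for CGOPT's improved Wolfe line search and truncation rules, not the paper's actual implementation.

```python
import numpy as np

def dai_kou_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Sketch of a Dai-Kou-type nonlinear CG method.

    Simplifications vs. CGOPT: Armijo backtracking replaces the improved
    Wolfe line search, and a plain restart replaces the truncation of beta.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:
            break
        # Armijo backtracking line search (placeholder for the improved
        # Wolfe line search used in CGOPT).
        alpha, c1 = 1.0, 1e-4
        fx, slope = f(x), g @ d
        while f(x + alpha * d) > fx + c1 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                       # gradient difference y_k
        dty = d @ y
        if abs(dty) > 1e-16:
            # Dai-Kou beta: (g+^T y)/(d^T y) - ||y||^2 (g+^T d)/(d^T y)^2
            beta = (g_new @ y) / dty - (y @ y) * (g_new @ d) / dty**2
        else:
            beta = 0.0
        d = -g_new + beta * d
        if g_new @ d >= 0:                  # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic this sketch reduces (up to the inexact line search) to a linear CG-like recurrence; the paper's contribution concerns what happens when the generated gradients lose this conjugacy/orthogonality on general nonlinear problems.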
Keywords: Conjugate gradient algorithm · Limited memory · Quasi-Newton method · Preconditioned conjugate gradient algorithm · Global convergence
Mathematics Subject Classification: 90C06 · 90C26 · 65Y20
We would like to thank the anonymous referees for their useful comments. We also thank Professors W. W. Hager and H. C. Zhang for their C code of CG_DESCENT (6.8). The third author's work was partly supported by the Chinese NSF grants (Nos. 11631013 and 11971372) and the Key Project of Chinese National Programs for Fundamental Research and Development (No. 2015CB856002). The first author's work was supported by the National Natural Science Foundation of China (No. 11901561) and the Natural Science Foundation of Guangxi (No. 2018GXNSFBA281180).