4OR, Volume 15, Issue 1, pp 85–92

A class of adaptive Dai–Liao conjugate gradient methods based on the scaled memoryless BFGS update

Research Paper

Abstract

By minimizing the Frobenius-norm distance between the search direction matrix of the Dai–Liao method and the scaled memoryless BFGS update, and using Powell’s nonnegative restriction of the conjugate gradient parameter, a one-parameter class of nonlinear conjugate gradient methods is proposed. A brief global convergence analysis is then carried out, both with and without a convexity assumption on the objective function. Preliminary numerical results are reported; they demonstrate that a proper choice of the parameter of the proposed class of conjugate gradient methods may lead to promising numerical performance.
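
For readers who want a concrete picture of the search direction being tuned, the sketch below implements the classical Dai–Liao direction d_{k+1} = -g_{k+1} + beta_k d_k with Powell’s nonnegative restriction applied to its Hestenes–Stiefel part, i.e. beta_k = max(g_{k+1}^T y_k / d_k^T y_k, 0) - t (g_{k+1}^T s_k / d_k^T y_k), following Dai and Liao (2001) and Powell (1986). This is a minimal illustration, not the method of the paper: the adaptive scaling t_k = ||y_k|| / ||s_k|| used in the driver is only one choice discussed in the Dai–Liao literature, not the one-parameter class derived here, and the function names and quadratic test problem are hypothetical.

import numpy as np

def dai_liao_plus_direction(g_new, s, y, d, t):
    """Dai-Liao direction with Powell's nonnegative restriction:
        d_new = -g_new + beta * d,
        beta  = max(g_new.y / d.y, 0) - t * (g_new.s / d.y),
    where s = x_new - x, y = g_new - g, d is the previous direction,
    and t >= 0 is the Dai-Liao parameter."""
    dy = float(d @ y)
    beta = max(float(g_new @ y) / dy, 0.0) - t * float(g_new @ s) / dy
    return -g_new + beta * d

def demo(n=50, iters=200, tol=1e-8, seed=0):
    # Strictly convex quadratic f(x) = 0.5 x'Ax - b'x, so the exact
    # minimizing step along d has the closed form below.
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)          # symmetric positive definite
    b = rng.standard_normal(n)

    x = np.zeros(n)
    g = A @ x - b                        # gradient of the quadratic
    d = -g                               # steepest-descent start
    for k in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -float(g @ d) / float(d @ (A @ d))  # exact line search
        x_new = x + alpha * d
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        t = np.linalg.norm(y) / np.linalg.norm(s)   # illustrative scaling only
        d = dai_liao_plus_direction(g_new, s, y, d, t)
        x, g = x_new, g_new
    print(f"iterations: {k}, final gradient norm: {np.linalg.norm(g):.2e}")

if __name__ == "__main__":
    demo()

Note that with exact line searches g_{k+1}^T d_k = 0, so d_k^T y_k = -g_k^T d_k > 0 whenever d_k is a descent direction, and the division defining beta is safe; for general objectives a Wolfe-type line search is the usual way to guarantee d_k^T y_k > 0.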

Keywords

Unconstrained optimization · Conjugate gradient method · Scaled memoryless BFGS update · Frobenius norm · Global convergence

Mathematics Subject Classification

90C53 · 49M37 · 65K05

References

  1. Andrei N (2010) Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Eur J Oper Res 204(3):410–420
  2. Andrei N (2011) Open problems in conjugate gradient algorithms for unconstrained optimization. Bull Malays Math Sci Soc 34(2):319–330
  3. Babaie-Kafaki S, Ghanbari R (2014) The Dai–Liao nonlinear conjugate gradient method with optimal parameter choices. Eur J Oper Res 234(3):625–630
  4. Babaie-Kafaki S, Ghanbari R (2015) A hybridization of the Hestenes–Stiefel and Dai–Yuan conjugate gradient methods based on a least-squares approach. Optim Methods Softw 30(4):673–681
  5. Babaie-Kafaki S, Ghanbari R (2015) Two optimal Dai–Liao conjugate gradient methods. Optimization 64(11):2277–2287
  6. Dai YH, Liao LZ (2001) New conjugacy conditions and related nonlinear conjugate gradient methods. Appl Math Optim 43(1):87–101
  7. Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91(2, Ser. A):201–213
  8. Gould NIM, Orban D, Toint PhL (2003) CUTEr: a constrained and unconstrained testing environment, revisited. ACM Trans Math Softw 29(4):373–394
  9. Hager WW, Zhang H (2006) Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans Math Softw 32(1):113–137
  10. Hager WW, Zhang H (2006) A survey of nonlinear conjugate gradient methods. Pac J Optim 2(1):35–58
  11. Oren SS, Luenberger DG (1974) Self-scaling variable metric (SSVM) algorithms. I. Criteria and sufficient conditions for scaling a class of algorithms. Manag Sci 20(5):845–862
  12. Powell MJD (1986) Convergence properties of algorithms for nonlinear optimization. SIAM Rev 28(4):487–500
  13. Sun W, Yuan YX (2006) Optimization theory and methods: nonlinear programming. Springer, New York

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Department of Mathematics, Faculty of Mathematics, Statistics and Computer Science, Semnan University, Semnan, Iran
  2. Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran
