
Two-step conjugate gradient method for unconstrained optimization

Computational and Applied Mathematics

Abstract

Using Taylor's series, we propose a modified secant relation that yields a more accurate approximation of the second-order curvature of the objective function. Combining this relation with the approach introduced by Dai and Liao, we then present a conjugate gradient algorithm for solving unconstrained optimization problems. The proposed method makes use of both gradient and function values, and exploits information from the two most recent steps, whereas the usual secant relation uses only the latest step. Under appropriate conditions, the method is shown to be globally convergent without any convexity assumption on the objective function. Comparative results, in the sense of the Dolan–Moré performance profiles, demonstrate the computational efficiency of the proposed method.
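The abstract names standard ingredients without reproducing the paper's formulas: a Dai–Liao-type search direction, a secant relation modified with function values, and Dolan–Moré performance profiles. As a minimal sketch only, the Python code below pairs the Dai–Liao (2001) direction with a function-value-based modified secant relation in the style of Wei, Li and Qi (2006), both cited in the references; the paper's own two-step relation, which also draws on the step before the latest one, is not reproduced here, and all names are illustrative.

```python
import numpy as np

def modified_secant_y(f_prev, f_curr, g_prev, g_curr, s):
    """Function-value-based correction to y_k = g_{k+1} - g_k, in the style of
    Wei-Li-Qi (2006):
        y*_k    = y_k + (theta_k / s_k^T s_k) s_k,
        theta_k = 2 (f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k.
    Illustrative only: the paper's own two-step relation additionally uses
    information from the step preceding s_k."""
    y = g_curr - g_prev
    theta = 2.0 * (f_prev - f_curr) + (g_prev + g_curr).dot(s)
    return y + (theta / s.dot(s)) * s

def dai_liao_direction(g_curr, d_prev, s, y_mod, t=0.1):
    """Dai-Liao (2001) conjugate gradient direction
        d_{k+1} = -g_{k+1} + beta_k d_k,
        beta_k  = g_{k+1}^T (y - t s_k) / (d_k^T y),
    with the modified y_mod used in place of the plain gradient difference."""
    beta = g_curr.dot(y_mod - t * s) / d_prev.dot(y_mod)
    return -g_curr + beta * d_prev
```

The Dolan–Moré profile used for the comparisons is a fixed, published construction: for each solver, plot the fraction of test problems it solves within a factor tau of the best solver's cost. A compact sketch, with a hypothetical cost matrix T:

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s] is the cost (e.g. iterations or CPU time) of solver s on
    problem p, with np.inf marking a failure.  Returns rho[s, i], the
    fraction of problems on which solver s is within a factor taus[i]
    of the best solver (Dolan and More, 2002)."""
    ratios = T / T.min(axis=1, keepdims=True)  # r_{p,s} = t_{p,s} / min_s t_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

Plotting each row of the returned array against taus gives the profile curves referred to in the abstract.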


References

  • Al-Baali M (1985) Descent property and global convergence of the Fletcher–Reeves method with inexact line search. IMA J. Numer. Anal. 5:121–124

  • Babaie-Kafaki S, Ghanbari R (2014) The Dai–Liao nonlinear conjugate gradient method with optimal parameter choices. Eur. J. Oper. Res. 234:625–630

  • Babaie-Kafaki S, Ghanbari R, Mahdavi-Amiri N (2010) Two new conjugate gradient methods based on modified secant equations. J. Comput. Appl. Math. 234:1374–1386

  • Dai YH, Han JY, Liu GH, Sun DF, Yin HX, Yuan YX (1999) Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10:348–358

  • Dai YH, Liao LZ (2001) New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43:87–101

  • Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math. Program. 91:201–213

  • Fletcher R, Reeves CM (1964) Function minimization by conjugate gradients. Comput. J. 7:149–154

  • Ford JA, Moghrabi IA (1993) Alternative parameter choices for multi-step quasi-Newton methods. Optim. Methods Softw. 2:357–370

  • Ford JA, Moghrabi IA (1994) Multi-step quasi-Newton methods for optimization. J. Comput. Appl. Math. 50:305–323

  • Ford JA, Moghrabi IA (1997) Alternating multi-step quasi-Newton methods for unconstrained optimization. J. Comput. Appl. Math. 82:105–116

  • Ford JA, Narushima Y, Yabe H (2008) Multi-step nonlinear conjugate gradient methods for unconstrained minimization. Comput. Optim. Appl. 40:191–216

  • Gould NIM, Orban D, Toint PhL (2015) CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60:545–557

  • Hestenes MR, Stiefel EL (1952) Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 49:409–436

  • Li G, Tang C, Wei Z (2007) New conjugacy condition and related new conjugate gradient methods for unconstrained optimization. J. Comput. Appl. Math. 202:523–539

  • Moghrabi IA (2019) A new preconditioned conjugate gradient method for optimization. IAENG Int. J. Appl. Math. 49(1):1–8

  • Moré JJ, Thuente DJ (1994) Line search algorithms with guaranteed sufficient decrease. ACM Trans. Math. Softw. 20:286–307

  • Nocedal J, Wright SJ (2006) Numerical optimization, 2nd edn. Springer, New York

  • Polak E, Ribière G (1969) Note sur la convergence de méthodes de directions conjuguées. Rev. Française Informat. Recherche Opérationnelle 3(16):35–43

  • Polyak BT (1969) The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9:94–112

  • Powell MJD (1977) Restart procedures for the conjugate gradient method. Math. Program. 12:241–254

  • Powell MJD (1984) Nonconvex minimization calculations and the conjugate gradient method. In: Numerical analysis (Dundee, 1983), Lecture Notes in Mathematics, vol 1066. Springer, Berlin, pp 122–141

  • Wei Z, Li G, Qi L (2006) New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 175:1156–1188

  • Yabe H, Takano M (2004) Global convergence properties of nonlinear conjugate gradient methods with modified secant condition. Comput. Optim. Appl. 28:203–225

  • Yuan G, Wei Z (2010) Convergence analysis of a modified BFGS method on convex minimizations. Comput. Optim. Appl. 47:237–255

  • Zhang JZ, Xu CX (2001) Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations. J. Comput. Appl. Math. 137:269–278

  • Zoutendijk G (1970) Nonlinear programming, computational methods. In: Abadie J (ed) Integer and nonlinear programming. North-Holland, Amsterdam, pp 37–86


Author information

Corresponding author

Correspondence to N. Bidabadi.

Additional information

Communicated by Andreas Fischer.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Dehghani, R., Bidabadi, N. Two-step conjugate gradient method for unconstrained optimization. Comp. Appl. Math. 39, 241 (2020). https://doi.org/10.1007/s40314-020-01297-2

