Two–parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length

  • Original Paper
  • Journal: Numerical Algorithms

Abstract

A class of two-parameter scaled memoryless BFGS methods is developed for solving unconstrained optimization problems. The scaling parameters are then determined so as to improve the condition number of the corresponding memoryless BFGS update. It is shown that, for uniformly convex objective functions, the search directions of the method satisfy the sufficient descent condition, which leads to global convergence. To achieve convergence for general functions, a revised version of the method is developed based on the Li–Fukushima modified secant equation. To enhance the performance of the methods, a nonmonotone scheme for computing the initial value of the step length in the line search procedure is suggested. Numerical experiments on a set of unconstrained optimization test problems from the CUTEr collection show the efficiency of the proposed algorithms in the sense of the Dolan–Moré performance profile.
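To make the abstract concrete, the sketch below illustrates the main ingredients in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: the two-parameter scaled memoryless BFGS matrix is taken in the generic form H = theta*I - theta*(s y^T + y s^T)/(s^T y) + (gamma + theta*(y^T y)/(s^T y)) * (s s^T)/(s^T y); the Li–Fukushima modification is written with a free constant C; and the nonmonotone line search uses a Grippo–Lampariello–Lucidi-style test in the spirit of reference [18]. The function names are illustrative, and the paper's condition-number-based choices of theta and gamma, as well as its specific nonmonotone formula for the initial step length, are given in the article itself and are not reproduced here.

    import numpy as np

    def smbfgs_direction(g, s, y, theta, gamma):
        # Search direction d = -H g for the two-parameter scaled memoryless
        # BFGS matrix
        #   H = theta*I - theta*(s y^T + y s^T)/(s^T y)
        #       + (gamma + theta*(y^T y)/(s^T y)) * (s s^T)/(s^T y),
        # evaluated matrix-free with dot products only. The choice
        # (theta, gamma) = (1, 1) recovers the classical memoryless
        # BFGS direction.
        sy = s @ y
        if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            return -g  # curvature condition failed: fall back to steepest descent
        gs, gy = g @ s, g @ y
        a = theta * gy / sy - (gamma + theta * (y @ y) / sy) * gs / sy
        b = theta * gs / sy
        return -theta * g + a * s + b * y

    def li_fukushima_y(g, s, y, C=1.0):
        # Li-Fukushima modified difference z = y + h*s with
        # h = C*||g|| + max(0, -s^T y / ||s||^2), which guarantees
        # s^T z >= C*||g||*||s||^2 > 0 even for nonconvex objectives
        # (cf. reference [20]); C > 0 is a free constant.
        h = C * np.linalg.norm(g) + max(0.0, -(s @ y) / (s @ s))
        return y + h * s

    def nonmonotone_backtracking(f, x, d, g, f_hist, alpha0=1.0,
                                 rho=0.5, delta=1e-4, max_backtracks=50):
        # Armijo backtracking measured against the maximum of the recent
        # objective values in f_hist (a nonmonotone test in the spirit of
        # reference [18]), started from the trial step alpha0.
        f_ref = max(f_hist)          # nonmonotone reference value
        gd = g @ d                   # directional derivative; < 0 for descent
        alpha = alpha0
        for _ in range(max_backtracks):
            if f(x + alpha * d) <= f_ref + delta * alpha * gd:
                break
            alpha *= rho
        return alpha

A driver would keep f_hist as the last few (say, ten) objective values, pass z = li_fukushima_y(g, s, y) in place of y when the objective is not known to be uniformly convex, and seed alpha0 with information gathered at previous iterations; the particular nonmonotone rule for alpha0 proposed in the paper is one of its contributions and should be taken from the article.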

References

  1. Andrei, N.: A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Appl. Math. Lett. 20(6), 645–650 (2007)

  2. Andrei, N.: Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Optim. Methods Softw. 22(4), 561–571 (2007)

  3. Andrei, N.: Another hybrid conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 47(2), 143–156 (2008)

  4. Andrei, N.: Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Eur. J. Oper. Res. 204(3), 410–420 (2010)

  5. Andrei, N.: An adaptive scaled BFGS method for unconstrained optimization. Numer. Algorithms 77(2), 413–432 (2018)

  6. Andrei, N.: A double parameter scaled BFGS method for unconstrained optimization. J. Comput. Appl. Math. 332, 26–44 (2018)

  7. Babaie–Kafaki, S.: A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization. 4OR 11(4), 361–374 (2013)

  8. Babaie–Kafaki, S.: Two modified scaled nonlinear conjugate gradient methods. J. Comput. Appl. Math. 261(5), 172–182 (2014)

  9. Babaie–Kafaki, S.: On optimality of the parameters of self–scaling memoryless quasi–Newton updating formulae. J. Optim. Theory Appl. 167(1), 91–101 (2015)

  10. Babaie–Kafaki, S.: A modified scaling parameter for the memoryless BFGS updating formula. Numer. Algorithms 72(2), 425–433 (2016)

  11. Babaie–Kafaki, S., Fatemi, M., Mahdavi–Amiri, N.: Two effective hybrid conjugate gradient algorithms based on modified BFGS updates. Numer. Algorithms 58(3), 315–331 (2011)

  12. Babaie–Kafaki, S., Ghanbari, R.: A modified scaled conjugate gradient method with global convergence for nonconvex functions. Bull. Belg. Math. Soc. Simon Stevin 21(3), 465–477 (2014)

  13. Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43(2), 117–128 (2001)

  14. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2, Ser. A), 201–213 (2002)

  15. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2(1), 21–42 (1992)

  16. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)

  17. Gould, N.I.M., Scott, J.: A note on performance profiles for benchmarking software. ACM Trans. Math. Softw. 43(2), Art. 15, 5 (2016)

  18. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23(4), 707–716 (1986)

  19. Hager, W.W., Zhang, H.: Algorithm 851: CG_Descent, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)

  20. Li, D.H., Fukushima, M.: A modified BFGS method and its global convergence in nonconvex minimization. J. Comput. Appl. Math. 129(1–2), 15–35 (2001)

  21. Liao, A.: Modifying the BFGS method. Oper. Res. Lett. 20(4), 171–177 (1997)

  22. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (2006)

  23. Oren, S.S., Spedicato, E.: Optimal conditioning of self–scaling variable metric algorithms. Math. Program. 10(1), 70–90 (1976)

  24. Ou, Y.: A note on the global convergence theorem of accelerated adaptive Perry conjugate gradient methods. J. Comput. Appl. Math. 332, 101–106 (2018)

  25. Sugiki, K., Narushima, Y., Yabe, H.: Globally convergent three–term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization. J. Optim. Theory Appl. 153(3), 733–757 (2012)

  26. Sun, W., Yuan, Y.X.: Optimization Theory and Methods: Nonlinear Programming. Springer, New York (2006)

  27. Watkins, D.S.: Fundamentals of Matrix Computations. Wiley, New York (2002)

  28. Zhou, W., Zhang, L.: A nonlinear conjugate gradient method based on the MBFGS secant condition. Optim. Methods Softw. 21(5), 707–714 (2006)

Acknowledgements

This research was supported in part by grant 96013024 from the Iran National Science Foundation (INSF) and in part by the Research Council of Semnan University. The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the quality of this work. They are also grateful to Professor Michael Navon for providing the line search code.

Author information

Corresponding author

Correspondence to Saman Babaie–Kafaki.


About this article

Cite this article

Babaie–Kafaki, S., Aminifard, Z. Two–parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length. Numer Algor 82, 1345–1357 (2019). https://doi.org/10.1007/s11075-019-00658-1
