
Diagonal Approximation of the Hessian by Finite Differences for Unconstrained Optimization

Journal of Optimization Theory and Applications

Abstract

A new quasi-Newton method with a diagonal updating matrix is suggested, in which the diagonal elements are determined by forward or by central finite differences. The search direction is a direction of sufficient descent, and the algorithm is equipped with an acceleration scheme. The convergence of the algorithm is linear. The preliminary computational experiments use a set of 75 unconstrained optimization test problems classified into five groups according to the structure of their Hessian: diagonal, block-diagonal, banded (tri- or penta-diagonal), sparse and dense. Measured by CPU time, intensive numerical experiments show that, for problems whose Hessian has a diagonal, block-diagonal or banded structure, the algorithm with diagonal approximation of the Hessian by finite differences is the top performer against the well-established steepest descent and Broyden–Fletcher–Goldfarb–Shanno algorithms. On the other hand, as a by-product of this numerical study, we show that the Broyden–Fletcher–Goldfarb–Shanno algorithm is the fastest on problems with a sparse Hessian, followed by problems with a dense Hessian.
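
The algorithm is only outlined in the abstract, so the following is a minimal Python sketch of the core idea rather than the paper's implementation: estimate the diagonal of the Hessian by central finite differences and use it as a diagonal quasi-Newton matrix, safeguarding the entries so the resulting direction is a descent direction. The step size h, the safeguard eps and all function names here are assumptions of this sketch; the forward-difference variant, the line search and the acceleration scheme mentioned in the abstract are omitted.

```python
import numpy as np

def hessian_diagonal_central(f, x, h=1e-4):
    """Central-difference estimate of the Hessian diagonal of f at x:
    B_ii ~ (f(x + h e_i) - 2 f(x) + f(x - h e_i)) / h**2."""
    fx = f(x)
    diag = np.empty(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        diag[i] = (f(x + e) - 2.0 * fx + f(x - e)) / h**2
    return diag

def quasi_newton_direction(grad, diag, eps=1e-8):
    """Direction d = -B^{-1} g for the diagonal matrix B = diag(diag).
    Entries that are not safely positive are replaced by eps, which
    guarantees g^T d < 0 whenever g != 0, i.e. a descent direction."""
    safe = np.where(diag > eps, diag, eps)
    return -grad / safe

# Example: on the separable quadratic f(x) = sum_i c_i * x_i**2 the
# central-difference estimate recovers 2*c_i up to rounding error, so
# the resulting direction coincides with the Newton step.
c = np.array([1.0, 3.0, 10.0])
f = lambda x: np.sum(c * x**2)
x = np.array([1.0, -2.0, 0.5])
d = quasi_newton_direction(2.0 * c * x, hessian_diagonal_central(f, x))
```

Note that for a separable quadratic the central difference is exact up to rounding, which is why diagonal, block-diagonal and banded Hessians are the favourable cases highlighted in the abstract.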

Author information

Correspondence to Neculai Andrei.

Additional information

Communicated by Evgeni Alekseevich Nurminski.

Dr. Neculai Andrei is a full member of the Academy of Romanian Scientists.

About this article

Cite this article

Andrei, N. Diagonal Approximation of the Hessian by Finite Differences for Unconstrained Optimization. J Optim Theory Appl 185, 859–879 (2020). https://doi.org/10.1007/s10957-020-01676-z
