Part of the book series: International Series in Operations Research & Management Science ((ISOR,volume 228))

Abstract

In this chapter we take another approach toward the development of methods lying somewhere intermediate between steepest descent and Newton's method. Again working under the assumption that evaluation and use of the Hessian matrix is impractical or costly, the idea underlying quasi-Newton methods is to use an approximation to the inverse Hessian in place of the true inverse that is required in Newton's method. The form of the approximation varies among different methods, ranging from the simplest, where it remains fixed throughout the iterative process, to the more advanced, where improved approximations are built up on the basis of information gathered during the descent process.
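
To make the idea concrete, the following is a minimal sketch in Python with NumPy, not code from the chapter; the names quasi_newton and backtracking are illustrative. It follows the general scheme x_{k+1} = x_k - alpha_k S_k g_k, starting from the identity as the initial inverse-Hessian approximation and improving S_k with a BFGS-style rank-two update built from the observed step and gradient change (the fixed-approximation variant mentioned above would simply skip the update):

import numpy as np

def backtracking(f, grad, x, d, alpha=1.0, beta=0.5, c=1e-4):
    # Armijo backtracking line search for the step size alpha_k.
    fx, slope = f(x), c * (grad(x) @ d)
    while f(x + alpha * d) > fx + alpha * slope:
        alpha *= beta
    return alpha

def quasi_newton(f, grad, x0, tol=1e-8, max_iter=100):
    # Iterate x_{k+1} = x_k - alpha_k * S_k @ g_k, where S_k approximates
    # the inverse Hessian.  S_0 = I; S_k is improved by a BFGS-style
    # rank-two update from the step p and the gradient change q.
    x = np.asarray(x0, dtype=float)
    n = x.size
    S = np.eye(n)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -S @ g                          # deflected-gradient direction
        alpha = backtracking(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        p, q = x_new - x, g_new - g
        if p @ q > 1e-12:                   # curvature check keeps S positive definite
            rho = 1.0 / (p @ q)
            V = np.eye(n) - rho * np.outer(p, q)
            S = V @ S @ V.T + rho * np.outer(p, p)
        x, g = x_new, g_new
    return x

# Example: a convex quadratic f(x) = 0.5 x'Qx - b'x, minimized where Qx = b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(quasi_newton(lambda x: 0.5 * x @ Q @ x - b @ x,
                   lambda x: Q @ x - b,
                   np.zeros(2)))

On a convex quadratic such as this example, the iterates converge to the minimizer, the solution of Qx = b.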


Notes

  1. The algorithm (10.1) is sometimes referred to as the method of deflected gradients, since the direction vector can be thought of as being determined by deflecting the gradient through multiplication by S_k.
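
For reference, the iteration the note calls (10.1) has the general form below; this is reconstructed from the note and the abstract rather than quoted from the chapter (alpha_k is the step length chosen by a line search and g_k = \nabla f(x_k) is the gradient):

\[
x_{k+1} = x_k - \alpha_k S_k g_k,
\]

so the direction d_k = -S_k g_k is the gradient deflected by S_k: taking S_k = I gives steepest descent, while taking S_k equal to the inverse Hessian gives Newton's method.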



Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Luenberger, D.G., Ye, Y. (2016). Quasi-Newton Methods. In: Linear and Nonlinear Programming. International Series in Operations Research & Management Science, vol 228. Springer, Cham. https://doi.org/10.1007/978-3-319-18842-3_10
