Mathematical Programming, Volume 50, Issue 1–3, pp 177–195

Convergence of quasi-Newton matrices generated by the symmetric rank one update

  • A. R. Conn
  • N. I. M. Gould
  • Ph. L. Toint

Abstract

Quasi-Newton algorithms for unconstrained nonlinear minimization generate a sequence of matrices that can be considered as approximations of the second derivatives of the objective function. This paper gives conditions under which these approximations can be proved to converge globally to the true Hessian matrix, in the case where the Symmetric Rank One update formula is used. The rate of convergence is also examined and shown to improve with the rate of convergence of the underlying iterates. The theory is confirmed by numerical experiments, which also show that the convergence of the Hessian approximations is substantially slower for other known quasi-Newton formulae.
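The Symmetric Rank One update analyzed in the paper is the standard formula B⁺ = B + (y − Bs)(y − Bs)ᵀ / ((y − Bs)ᵀ s), where s is the step and y the corresponding gradient difference. A minimal NumPy sketch is given below; the skip tolerance guarding against a tiny denominator is a common safeguard from the quasi-Newton literature, not a detail taken from this paper:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """One Symmetric Rank One update of the Hessian approximation B.

    s : step taken, y : gradient difference over that step.
    The update is skipped when the denominator (y - B s)^T s is too
    small relative to ||y - B s|| ||s|| (a standard safeguard).
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # skip: update would be numerically unstable
    return B + np.outer(r, r) / denom

# On a quadratic with Hessian A, the exact gradient difference is y = A s,
# and SR1 recovers A after n linearly independent steps (here n = 2).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, A @ s)
# B now equals A exactly
```

The quadratic example illustrates the finite-termination property of SR1 that underlies the paper's convergence result: unlike BFGS or DFP, the SR1 update can reproduce the true Hessian exactly in finitely many steps when the denominators stay safely away from zero.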

Key words

Quasi-Newton updates · convergence theory



Copyright information

© The Mathematical Programming Society, Inc. 1991

Authors and Affiliations

  • A. R. Conn, IBM T. J. Watson Research Center, Yorktown Heights, USA
  • N. I. M. Gould, Rutherford Appleton Laboratory, Chilton, UK
  • Ph. L. Toint, Department of Mathematics, Facultés Universitaires ND de la Paix, Namur, Belgium
