
A superlinearly convergent algorithm for minimization without evaluating derivatives


Abstract

An algorithm for unconstrained minimization of a function of n variables that does not require the evaluation of partial derivatives is presented. It is a second-order extension of the method of local variations and does not require any exact one-variable minimizations. The method retains the local-variations property that accumulation points are stationary for a continuously differentiable function. Furthermore, because this extension makes the algorithm an approximate Newton method, its convergence is superlinear for a twice continuously differentiable strongly convex function.
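
To make the abstract's idea concrete, the following is a minimal generic sketch of an approximate Newton iteration driven entirely by function values: the gradient and Hessian are replaced by standard finite-difference estimates, and the step size is chosen by an Armijo-type sufficient-decrease test rather than an exact one-variable minimization. This is only an illustration under those assumptions, not Mifflin's algorithm; every name and tolerance below is hypothetical.

import numpy as np

def fd_gradient(f, x, h=1e-6):
    # Central-difference estimate of the gradient of f at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def fd_hessian(f, x, h=1e-4):
    # Symmetric second-difference estimate of the Hessian of f at x.
    n = x.size
    H = np.zeros((n, n))
    fx = f(x)
    for i in range(n):
        ei = np.zeros(n)
        ei[i] = h
        for j in range(i, n):
            ej = np.zeros(n)
            ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + fx) / (h * h)
            H[j, i] = H[i, j]
    return H

def approx_newton(f, x0, tol=1e-6, max_iter=200):
    # Approximate Newton method using only function evaluations.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = fd_gradient(f, x)
        if np.linalg.norm(g) < tol:
            break
        try:
            d = -np.linalg.solve(fd_hessian(f, x), g)  # Newton direction
        except np.linalg.LinAlgError:
            d = -g                 # singular Hessian estimate: fall back
        if g @ d >= 0.0:           # guard: keep a descent direction
            d = -g
        fx, t = f(x), 1.0
        # Armijo backtracking: a sufficient-decrease test, no exact line search.
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

# Usage: minimize the Rosenbrock function from the standard starting point.
rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
print(approx_newton(rosen, [-1.2, 1.0]))  # converges to about (1, 1)

Because the Armijo test only halves a trial step until sufficient decrease is achieved, no exact line search is ever performed, mirroring the property claimed in the abstract; the finite-difference formulas here are the textbook ones, whereas the paper extends the method of local variations to obtain its second-order information.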




Additional information

Research sponsored by National Science Foundation Grant GK-32710 and by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under Grant No. AFOSR-74-2695.


Cite this article

Mifflin, R. A superlinearly convergent algorithm for minimization without evaluating derivatives. Mathematical Programming 9, 100–117 (1975). https://doi.org/10.1007/BF01681333

