
On search directions for minimization algorithms

  • Published in: Mathematical Programming

Abstract

Some examples are given of differentiable functions of three variables with the property that, if they are minimized by the algorithm that searches along the coordinate directions in sequence, the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
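The algorithm in question, cyclic coordinate search with an exact line search, can be sketched as follows. This is a minimal illustration on an assumed strictly convex quadratic test problem (my own choice of function, not one of the paper's three-variable counterexamples); on such a function the sweeps do converge, which is exactly the behaviour the paper shows cannot be guaranteed for general differentiable functions.

```python
# Cyclic coordinate descent with exact line search, sketched on the
# convex quadratic f(x) = 0.5 x^T A x - b^T x (A symmetric positive
# definite).  Each sweep minimizes f along the coordinate directions
# e_1, e_2, e_3 in sequence.

def coordinate_descent(A, b, x, sweeps=50):
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            # Exact one-dimensional minimizer of f(x + t*e_i):
            #   t = (b_i - (A x)_i) / A_ii
            Ax_i = sum(A[i][j] * x[j] for j in range(n))
            x[i] += (b[i] - Ax_i) / A[i][i]
    return x

# An assumed test problem; strict convexity makes the sweeps converge.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = coordinate_descent(A, b, [0.0, 0.0, 0.0])
```

For this quadratic each coordinate step is the Gauss–Seidel update, so convergence follows from positive definiteness of A; Powell's examples show that for merely differentiable functions the same sweeps can settle into a closed loop on which the gradient stays bounded away from zero.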




Cite this article

Powell, M.J.D. On search directions for minimization algorithms. Mathematical Programming 4, 193–201 (1973). https://doi.org/10.1007/BF01584660
