Mathematical Programming, Volume 4, Issue 1, pp 193–201

On search directions for minimization algorithms

  • M. J. D. Powell


Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
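The algorithm discussed — an exact line search along each coordinate direction in turn, cycled indefinitely — can be sketched as follows. This is a minimal illustrative implementation, not Powell's example functions: the quadratic test function, the `golden_section` helper, and all parameter names below are assumptions for the sketch. On a strictly convex quadratic the cycle of coordinate searches converges; the point of the paper is that for certain differentiable functions of three variables it need not, with the iterates approaching a closed loop on which the gradient stays bounded away from zero.

```python
import math

def golden_section(phi, a, b, tol=1e-10):
    """Golden-section search for a minimizer of phi on the bracket [a, b]."""
    inv_gr = (math.sqrt(5) - 1) / 2
    c, d = b - inv_gr * (b - a), a + inv_gr * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - inv_gr * (b - a)
        else:
            a, c = c, d
            d = a + inv_gr * (b - a)
    return (a + b) / 2

def cyclic_coordinate_descent(f, x0, cycles=50, step=1.0):
    """Minimize f by exact line searches along the coordinate directions
    in sequence, repeated for a fixed number of cycles.

    Each one-dimensional search is restricted to the bracket
    [-step, step] and solved by golden-section search.
    """
    x = list(x0)
    n = len(x)
    for _ in range(cycles):
        for i in range(n):
            # Search along the i-th coordinate direction from the current x.
            phi = lambda t: f([x[j] + (t if j == i else 0.0) for j in range(n)])
            x[i] += golden_section(phi, -step, step)
    return x

# Illustrative smooth, strictly convex quadratic in three variables
# (chosen for the sketch; on such a function the cyclic searches converge).
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + 0.1 * x[0] * x[1] + x[2]**2
x = cyclic_coordinate_descent(f, [3.0, 3.0, 3.0])
```

At a limit of this iteration every partial derivative vanishes, which is why convergence proofs are tempting; Powell's three-variable examples show that the limit set can instead be a closed loop of points at which only one partial derivative at a time is zero.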







Copyright information

© The Mathematical Programming Society 1973

Authors and Affiliations

  • M. J. D. Powell
    1. Atomic Energy Research Establishment, Harwell, Great Britain
