Abstract
We give examples of differentiable functions of three variables with the property that, when they are minimized by the algorithm that searches along the coordinate directions in sequence, the search path tends to a closed loop on which the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
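To make the algorithm under discussion concrete, the following is a minimal sketch of the cyclic coordinate-descent scheme the abstract refers to: at each step one coordinate is held out and the objective is minimized exactly along that coordinate direction, cycling through the directions in sequence. The test function `f` below is a hypothetical convex quadratic chosen only for illustration; it is not one of the counterexample functions constructed in the paper, on which this scheme would fail to converge.

```python
def line_min(g, lo=-10.0, hi=10.0, iters=100):
    # Golden-section search for a one-dimensional minimizer of g on [lo, hi].
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    for _ in range(iters):
        if g(c) < g(d):
            b, d = d, c          # minimum lies in [a, d]; old c becomes new d
            c = b - phi * (b - a)
        else:
            a, c = c, d          # minimum lies in [c, b]; old d becomes new c
            d = a + phi * (b - a)
    return (a + b) / 2

def cyclic_coordinate_descent(f, x, sweeps=50):
    # Minimize f along the coordinate directions e_1, ..., e_n in sequence,
    # repeating the cycle for a fixed number of sweeps.
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            def g(t, i=i):
                y = x[:]
                y[i] = t
                return f(y)
            x[i] = line_min(g)
    return x

# Illustrative convex quadratic in three variables (an assumption, not from
# the paper); its unique minimizer is (16/7, -18/7, 3).
def f(x):
    return (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2 + (x[2] - 3) ** 2 + x[0] * x[1]

xmin = cyclic_coordinate_descent(f, [0.0, 0.0, 0.0])
```

On a strictly convex quadratic such as this, the cyclic scheme converges; the point of the paper's examples is precisely that smoothness alone does not guarantee this behavior in general.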
Cite this article
Powell, M.J.D. On search directions for minimization algorithms. Mathematical Programming 4, 193–201 (1973). https://doi.org/10.1007/BF01584660