On search directions for minimization algorithms
Examples are given of differentiable functions of three variables with the property that, when they are minimized by the algorithm that searches along the coordinate directions in sequence, the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
Keywords: Objective Function; Mathematical Method; Closed Loop; Convergence Theorem; Search Direction
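The algorithm under discussion minimizes a function by performing an exact line search along each coordinate direction in turn, repeating the cycle indefinitely. A minimal sketch of this cyclic coordinate-descent scheme is below; the function names (`golden_section`, `cyclic_coordinate_descent`) and the test function are illustrative assumptions, and the paper's specific three-variable counterexamples, on which the iterates cycle rather than converge, are not reproduced here.

```python
import math

def golden_section(phi, a=-10.0, b=10.0, tol=1e-8):
    # Minimize a one-dimensional function phi on [a, b] by golden-section search.
    invphi = (math.sqrt(5) - 1) / 2
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def cyclic_coordinate_descent(f, x0, sweeps=50):
    # Search along the coordinate directions e_1, ..., e_n in sequence,
    # minimizing f exactly along each line, and repeat for several sweeps.
    x = list(x0)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            # Exact line search in the i-th coordinate direction.
            x[i] = golden_section(lambda t: f(x[:i] + [t] + x[i + 1:]))
    return x

# On a strictly convex quadratic the method converges; the point of the
# paper's examples is that for some smooth non-convex f it need not.
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2 + 3 * x[2] ** 2 + x[0] * x[1]
x = cyclic_coordinate_descent(f, [0.0, 0.0, 0.0])
```

For the quadratic above the iterates approach the unique minimizer (16/7, -18/7, 0); replacing `f` with one of the paper's counterexamples would instead send the search path toward a closed loop with a nonzero gradient.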