Reflections on nondifferentiable optimization, part 2, convergence
This paper is concerned with the minimization of nondifferentiable functions. Three main results are obtained: (i) convergence, for convex functions, of the ball-gradient algorithm introduced by Dixon; (ii) convergence to a stationary point of the generalized gradient algorithm as implemented by Shor and Ermol'ev; and (iii) convergence to a local minimum of an algorithm introduced by Goldstein.
Key Words. Nondifferentiable optimization, subgradients, ball gradients.
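For orientation, the generalized gradient (subgradient) iteration studied by Shor, Ermol'ev, and Polyak in the works cited below is usually written in a form like the following; the notation here is a standard textbook sketch, not the paper's own, and the step-size conditions shown are the classical divergent-series rule rather than the particular implementation analyzed in the paper:
$$
x_{k+1} = x_k - s_k \, \frac{g_k}{\lVert g_k \rVert}, \qquad g_k \in \partial f(x_k), \qquad s_k > 0, \quad s_k \to 0, \quad \sum_{k=0}^{\infty} s_k = \infty,
$$
where $\partial f(x_k)$ denotes the subdifferential of the convex function $f$ at $x_k$. Under these step-size conditions, and the usual boundedness assumptions, the iterates approach the minimizing set of $f$; convergence statements of this type, for the ball-gradient, generalized gradient, and Goldstein algorithms, are the subject of the paper.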
- 1. Shor, N. Z., Generalized Gradient Methods of Minimization of Nonsmoothed Functions and Their Use in Problems of Mathematical Programming (in Russian), Mathematical Methods, Vol. 12, pp. 337–356, 1976.
- 2. Wolfe, P., A Method of Conjugate Subgradients for Minimizing Nondifferentiable Functions, Nondifferentiable Optimization, Edited by M. L. Balinski and P. Wolfe, North-Holland Publishing Company, Amsterdam, Holland, 1975.
- 3. Ermol'ev, J. M., Methods of Solving Nonlinear Extremal Problems, Kibernetika, Vol. 4, pp. 1–4, 1966.
- 4. Goldstein, A. A., Optimization of Lipschitz Continuous Functions, Mathematical Programming, Vol. 13, pp. 14–22, 1977.
- 5. Dixon, L. C. W., Reflections on Nondifferentiable Optimization, Part 1, Ball Gradient, Journal of Optimization Theory and Applications, Vol. 32, No. 2, 1980.
- 6. Polyak, B. T., A General Method of Solving Extremal Problems, Doklady Akademii Nauk SSSR, Vol. 174, pp. 593–597, 1967.