Mathematical Programming

Volume 13, Issue 1, pp 329–347

On convergence rates of subgradient optimization methods

  • J. L. Goffin

DOI: 10.1007/BF01584346

Cite this article as:
Goffin, J.L. Mathematical Programming (1977) 13: 329. doi:10.1007/BF01584346

Abstract

Rates of convergence of subgradient optimization are studied. If the step size is chosen to be a geometric progression with ratio ρ, the convergence, if it occurs, is geometric with rate ρ. For convergence to occur, it is necessary that the initial step size be large enough, and that the ratio ρ be greater than a sustainable rate z(μ), which depends upon a condition number μ, defined for both differentiable and nondifferentiable functions. The sustainable rate z(μ) is closely related to the rate of convergence of the steepest ascent method for differentiable functions: in fact, it is identical if the function is not too well conditioned.
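The abstract refers to the subgradient method with step sizes forming a geometric progression with ratio ρ. Below is a minimal sketch of that step-size rule, assuming an illustrative nonsmooth objective (the l1 norm), a hypothetical starting point, and example values for the initial step size s0 and the ratio rho; the paper analyzes the convergence rate of this scheme, not any particular implementation.

```python
import numpy as np

# Sketch of subgradient optimization with geometrically decreasing step sizes.
# The objective f, the starting point, and the constants s0 and rho are
# illustrative assumptions, not taken from the paper.

def f(x):
    return np.abs(x).sum()          # example nonsmooth convex function (l1 norm)

def subgradient(x):
    return np.sign(x)               # a valid subgradient of the l1 norm

x = np.array([4.0, -2.5, 1.0])      # arbitrary starting point
s0, rho = 1.0, 0.9                  # initial step size and geometric ratio rho

for k in range(100):
    g = subgradient(x)
    if np.linalg.norm(g) == 0:      # 0 is a subgradient only at a minimizer
        break
    step = s0 * rho**k              # geometric step size: s_k = s0 * rho**k
    x = x - step * g / np.linalg.norm(g)

print("approximate minimizer:", x)
```

As the abstract notes, whether iterates of this kind converge at all depends on s0 being large enough and rho exceeding the sustainable rate z(μ); the values above are purely illustrative.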

Key words

Nondifferentiable optimization · Rates of convergence · Nonsmooth optimization

Copyright information

© The Mathematical Programming Society 1977

Authors and Affiliations

  • J. L. Goffin
  1. McGill University, Montreal, Canada