Mathematical Programming, Volume 120, Issue 1, pp 213–220

Two “well-known” properties of subgradient optimization

Authors

    • Kurt M. Anstreicher, Department of Management Sciences, University of Iowa
    • Laurence A. Wolsey, Center for Operations Research and Econometrics, Université Catholique de Louvain
FULL LENGTH PAPER

DOI: 10.1007/s10107-007-0148-y

Cite this article as:
Anstreicher, K.M., Wolsey, L.A.: Two “well-known” properties of subgradient optimization. Math. Program. 120(1), 213–220 (2009). doi:10.1007/s10107-007-0148-y

Abstract

The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
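
The precise statements and proofs are in the paper itself; purely as a rough illustration of the two ingredients named in the abstract, the sketch below applies subgradient ascent with the divergent series steplength rule λ_k = λ_0/(k+1) to the Lagrangian dual of a toy linear program min cᵀx subject to Ax = b, 0 ≤ x ≤ 1, and forms primal estimates as steplength-weighted averages of the inner Lagrangian minimizers. The toy problem, the function name lagrangian_subgradient, and the specific averaging weights are illustrative assumptions, not the paper’s exact construction.

```python
import numpy as np

def lagrangian_subgradient(c, A, b, iters=5000, lam0=1.0):
    """Illustrative sketch (not the paper's exact scheme): maximize the
    Lagrangian dual of  min c'x  s.t.  Ax = b, 0 <= x <= 1  using the
    divergent series steplength rule lam_k = lam0/(k+1), while building
    primal estimates as steplength-weighted averages of the minimizers."""
    m, n = A.shape
    u = np.zeros(m)          # dual multipliers (unrestricted: equality constraints)
    x_bar = np.zeros(n)      # running weighted-average primal estimate
    weight = 0.0             # accumulated steplength weight
    best_L = -np.inf         # best dual value seen so far
    for k in range(iters):
        red_cost = c - A.T @ u               # reduced costs of the inner problem
        x = (red_cost < 0).astype(float)     # componentwise minimizer over [0,1]^n
        L = u @ b + red_cost @ x             # dual function value L(u)
        best_L = max(best_L, L)
        g = b - A @ x                        # a subgradient of L at u
        lam = lam0 / (k + 1)                 # divergent series: sum = inf, lam -> 0
        u = u + lam * g                      # subgradient ascent step
        # primal estimate: convex combination weighted by the steplengths
        x_bar = (weight * x_bar + lam * x) / (weight + lam)
        weight += lam
    return u, x_bar, best_L

# Example usage on a random feasible instance (assumed data, for illustration only).
rng = np.random.default_rng(0)
A = rng.random((3, 8))
b = A @ (rng.random(8) > 0.5).astype(float)  # b chosen so the LP is feasible
c = rng.random(8)
u, x_bar, L = lagrangian_subgradient(c, A, b)
```

With steplengths of this form, the dual iterates converge and the weighted averages x_bar tend toward primal feasibility and optimality under the conditions analyzed in the paper; the averaging weights used here are one common variant of such a recovery scheme.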

Keywords

Subgradient optimization · Divergent series · Lagrangian relaxation · Primal recovery

Mathematics Subject Classification (2000)

90C05 · 90C06 · 90C25

Copyright information

© Springer-Verlag 2007