Mathematical Programming, Volume 120, Issue 1, pp. 213–220


Two “well-known” properties of subgradient optimization

  • Kurt M. Anstreicher, Department of Management Sciences, University of Iowa
  • Laurence A. Wolsey, Center for Operations Research and Econometrics, Université Catholique de Louvain

The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
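
To make the two properties concrete, here is a minimal Python sketch (not taken from the paper; the LP data are invented for illustration). It applies subgradient ascent to the Lagrangian dual of a toy LP min{cᵀx : Ax = b, 0 ≤ x ≤ u}, uses the divergent series steplengths α_k = 1/(k+1) (so Σα_k = ∞ and α_k → 0), and forms a primal estimate as the steplength-weighted average of the Lagrangian subproblem solutions, one standard recovery scheme of the kind the note analyzes.

```python
import numpy as np

# Toy LP (data invented for illustration):
#   min  c^T x   s.t.  A x = b,  0 <= x <= u.
# Dualizing A x = b gives  L(lam) = min_{0<=x<=u} c^T x + lam^T (b - A x),
# whose inner minimizer is componentwise: x_j = u_j if (c - A^T lam)_j < 0, else 0.
rng = np.random.default_rng(0)
m, n = 3, 8
A = rng.uniform(0.0, 1.0, (m, n))
u = np.ones(n)
x_feas = rng.uniform(0.2, 0.8, n)     # guarantees the LP is feasible
b = A @ x_feas
c = rng.uniform(-1.0, 1.0, n)

lam = np.zeros(m)                     # dual multipliers (free, equality constraints)
alpha_sum = 0.0
x_bar = np.zeros(n)                   # steplength-weighted average of primal solutions

for k in range(20000):
    reduced = c - A.T @ lam           # reduced costs of the inner (Lagrangian) problem
    x = np.where(reduced < 0, u, 0.0) # Lagrangian minimizer over the box [0, u]
    g = b - A @ x                     # a subgradient of the concave dual function L at lam
    alpha = 1.0 / (k + 1)             # divergent series rule: sum alpha_k = inf, alpha_k -> 0
    lam += alpha * g                  # subgradient ascent step on the dual
    # primal estimate: convex combination of Lagrangian solutions, weighted by steplengths
    x_bar = (alpha_sum * x_bar + alpha * x) / (alpha_sum + alpha)
    alpha_sum += alpha

x_last = np.where(c - A.T @ lam < 0, u, 0.0)
dual_val = c @ x_last + lam @ (b - A @ x_last)
print(f"dual bound L(lam)        : {dual_val:.4f}")
print(f"primal estimate c^T x_bar: {c @ x_bar:.4f}")
print(f"infeasibility |A x_bar-b|: {np.linalg.norm(A @ x_bar - b):.2e}")
```

Under the divergent series rule the dual iterates converge and the averaged primal estimates approach feasibility; this is the interplay between the two properties described above, though the exact averaging scheme analyzed in the paper may differ from this sketch.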


Keywords

Subgradient optimization · Divergent series · Lagrangian relaxation · Primal recovery

Mathematics Subject Classification (2000)

90C05 · 90C06 · 90C25