Two “well-known” properties of subgradient optimization
The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
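To make the divergent series steplength rule concrete, here is a minimal, hypothetical sketch (not code from the paper): a subgradient method on a simple nondifferentiable convex function, using steps $\alpha_k = 1/(k+1)$ so that $\sum_k \alpha_k = \infty$ while $\alpha_k \to 0$. The objective and starting point are illustrative choices only; the paper's second topic, averaging iterates for primal recovery, is not shown here.

```python
# Hypothetical example: subgradient method with the divergent series
# steplength rule alpha_k = 1/(k+1) on f(x) = |x - 3| + |x + 1|,
# whose minimizers are the interval [-1, 3] with optimal value 4.

def f(x):
    return abs(x - 3) + abs(x + 1)

def subgrad(x):
    # One valid subgradient of f; any sign choice at the kinks works.
    return (1 if x > 3 else -1) + (1 if x > -1 else -1)

x = 10.0                       # arbitrary starting point
best = f(x)                    # best objective value seen so far
for k in range(10000):
    alpha = 1.0 / (k + 1)      # divergent series steplength
    x = x - alpha * subgrad(x)
    best = min(best, f(x))

print(best)                    # approaches the optimal value 4
```

Because the steps sum to infinity, the iterates cannot stall short of the optimal set, and because they tend to zero, the iterates settle into it; this is the convergence behavior the note's first property makes precise.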
Keywords: Subgradient optimization · Divergent series · Lagrangian relaxation · Primal recovery

Mathematics Subject Classification (2000): 90C05 · 90C06 · 90C25