Mathematical Programming

Volume 120, Issue 1, pp 213–220

Two “well-known” properties of subgradient optimization

FULL LENGTH PAPER

Abstract

The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
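
The abstract's two themes admit a compact computational illustration. The sketch below, in Python, applies the divergent series steplength rule (steplengths s_k with s_k → 0 and Σ s_k = ∞) to maximize the Lagrangian dual of a box-constrained linear program, and builds a primal estimate as a steplength-weighted ergodic average of the inner minimizers. This is a minimal sketch only: the box-constrained problem form, the harmonic steplengths, and the function name subgradient_dual are assumptions chosen for the example, not constructions taken from the paper.

```python
import numpy as np

def subgradient_dual(c, A, b, lower, upper, iters=5000):
    """Maximize the Lagrangian dual of  min c@x  s.t.  A@x = b,  lower <= x <= upper,
    with divergent-series steplengths, and recover a primal estimate as a
    steplength-weighted ergodic average (illustrative sketch only)."""
    m, n = A.shape
    u = np.zeros(m)        # dual multipliers
    x_bar = np.zeros(n)    # ergodic primal average
    total = 0.0            # accumulated steplength weight
    for k in range(iters):
        # The inner problem  min (c - A^T u) @ x  over the box separates by
        # coordinate: take the lower bound where the reduced cost is >= 0.
        red = c - A.T @ u
        x = np.where(red >= 0.0, lower, upper)
        g = b - A @ x                    # subgradient of the dual at u
        s = 1.0 / (k + 1.0)              # s_k -> 0 and sum of s_k diverges
        u = u + s * g                    # ascent step on the concave dual
        x_bar = (total * x_bar + s * x) / (total + s)
        total += s
    return u, x_bar

# Tiny illustration:  min x1 + 2*x2  s.t.  x1 + x2 = 1,  0 <= x <= 1,
# whose optimal solution is x = (1, 0) with value 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
u, x_bar = subgradient_dual(c, A, b, np.zeros(2), np.ones(2))
print("dual multipliers:", u)      # a maximizer of the dual
print("primal estimate:", x_bar)   # approaches (1, 0)
```

As the abstract indicates, the individual inner minimizers x need not approach a primal solution; it is the ergodic average x_bar that does, and convergence of the dual iterates u is what underwrites the correctness of this recovery.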

Keywords

Subgradient optimization · Divergent series · Lagrangian relaxation · Primal recovery

Mathematics Subject Classification (2000)

90C05 · 90C06 · 90C25



Copyright information

© Springer-Verlag 2007

Authors and Affiliations

  1. Department of Management Sciences, University of Iowa, Iowa City, USA
  2. Center for Operations Research and Econometrics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
