
Convergence Rate of Incremental Subgradient Algorithms

Chapter

Part of the book series: Applied Optimization (APOP, volume 54)

Abstract

We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method.
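To make the iteration concrete, the following is a minimal Python sketch of the cyclic incremental subgradient method described above, under illustrative assumptions: the objective is f(x) = f_1(x) + ... + f_m(x), each component supplies a subgradient oracle, and a constant stepsize is used. The function names and the toy example are our own illustration, not the authors' implementation.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, stepsize=0.01, epochs=100):
    """Cyclic incremental subgradient sketch for f(x) = sum_i f_i(x).

    subgrads : list of callables; subgrads[i](x) returns a subgradient of f_i at x
    x0       : starting point (NumPy array)
    stepsize : constant stepsize (a diminishing rule could be used instead)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        # One pass over the components: after processing each component,
        # the variables are adjusted immediately (the "intermediate adjustment").
        for g in subgrads:
            x = x - stepsize * g(x)
    return x

# Toy example: minimize sum_i |x - a_i|, whose minimizer is the median of the a_i.
a_vals = [1.0, 2.0, 5.0]
subgrads = [lambda x, a=a: np.sign(x - a) for a in a_vals]
print(incremental_subgradient(subgrads, x0=np.array([0.0])))
```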

In this paper, we present convergence results and estimates of the convergence rate of a number of variants of incremental subgradient methods, including some that use randomization. The convergence rate estimates are consistent with our computational results and suggest that the randomized variants perform substantially better than their deterministic counterparts.
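For illustration, a randomized variant differs only in how the next component is selected: instead of cycling through the components in a fixed order, an index is drawn uniformly at random at each step. Below is a minimal sketch under the same illustrative assumptions as the code above; it is not the authors' implementation.

```python
import numpy as np

def randomized_incremental_subgradient(subgrads, x0, stepsize=0.01,
                                        iterations=300, seed=0):
    """Randomized incremental subgradient sketch: at each iteration a single
    component is chosen uniformly at random and a step is taken along its subgradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for _ in range(iterations):
        i = rng.integers(m)               # uniformly random component index
        x = x - stepsize * subgrads[i](x)
    return x
```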





Copyright information

© 2001 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Nedić, A., Bertsekas, D. (2001). Convergence Rate of Incremental Subgradient Algorithms. In: Uryasev, S., Pardalos, P.M. (eds) Stochastic Optimization: Algorithms and Applications. Applied Optimization, vol 54. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-6594-6_11

  • DOI: https://doi.org/10.1007/978-1-4757-6594-6_11

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4419-4855-7

  • Online ISBN: 978-1-4757-6594-6
