
On convergence rates of subgradient optimization methods

  • Published in: Mathematical Programming

Abstract

Rates of convergence of subgradient optimization are studied. If the step size is chosen to be a geometric progression with ratio ρ, the convergence, if it occurs, is geometric with rate ρ. For convergence to occur, it is necessary that the initial step size be large enough and that the ratio ρ be greater than a sustainable rate z(μ), which depends upon a condition number μ defined for both differentiable and nondifferentiable functions. The sustainable rate z(μ) is closely related to the rate of convergence of the steepest ascent method for differentiable functions: in fact, the two are identical if the function is not too well conditioned.
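To make the step-size rule concrete, the following is a minimal sketch (in Python, not taken from the paper) of subgradient optimization with a geometric step-size progression s_k = s_0 ρ^k, stated here for minimization. The normalized-direction update and the l1 test problem are illustrative assumptions; the names subgradient_method, f, and subgrad are hypothetical.

    import numpy as np

    def subgradient_method(f, subgrad, x0, s0=1.0, rho=0.9, iters=200):
        # Subgradient optimization with the geometric step-size progression
        # s_k = s0 * rho**k, the rule whose convergence rate is studied here.
        # The normalized-direction update below is one common variant.
        x = np.asarray(x0, dtype=float)
        best_x, best_f = x.copy(), f(x)
        for k in range(iters):
            g = subgrad(x)
            norm = np.linalg.norm(g)
            if norm == 0.0:                    # zero subgradient: x is optimal
                break
            x = x - (s0 * rho**k) * g / norm   # step of length s0 * rho**k
            fx = f(x)
            if fx < best_f:                    # steps need not decrease f,
                best_x, best_f = x.copy(), fx  # so keep the best iterate seen
        return best_x, best_f

    # Illustrative nondifferentiable test problem: f(x) = ||x - c||_1
    c = np.array([1.0, -2.0, 3.0])
    f = lambda x: float(np.sum(np.abs(x - c)))
    subgrad = lambda x: np.sign(x - c)         # a valid subgradient of f at x
    print(subgradient_method(f, subgrad, np.zeros(3), s0=4.0, rho=0.8))

In this sketch s_0 and ρ are free parameters; as the abstract notes, convergence of such a scheme requires the initial step size to be large enough and the ratio ρ to exceed the sustainable rate z(μ).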



Additional information

This research was supported in part by the D.G.E.S. (Quebec) and the N.R.C. of Canada under grants A8970 and A4152.


About this article

Cite this article

Goffin, J.L. On convergence rates of subgradient optimization methods. Mathematical Programming 13, 329–347 (1977). https://doi.org/10.1007/BF01584346

