
Random Minibatch Subgradient Algorithms for Convex Problems with Functional Constraints

Applied Mathematics & Optimization

Abstract

In this paper we consider non-smooth convex optimization problems with a (possibly infinite) intersection of functional constraints. In contrast to the classical approach, where the constraints are usually represented as an intersection of simple sets that are easy to project onto, here we assume that each constraint set is given as the level set of a convex, but not necessarily differentiable, function. For this setting we propose subgradient iterative algorithms with random minibatch feasibility updates. At each iteration, our algorithms take a subgradient step aimed only at minimizing the objective function, followed by a subgradient step that reduces the feasibility violation of the observed minibatch of constraints. The feasibility updates are performed using either parallel or sequential random observations of several constraint components. We analyze the convergence behavior of the proposed algorithms when the objective function is strongly convex with bounded subgradients and the functional constraints are endowed with a bounded first-order black-box oracle. For a diminishing stepsize, we prove sublinear convergence rates for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected suboptimality of the function values along the weighted averages. These convergence rates are known to be optimal for subgradient methods on this class of problems. Moreover, the rates depend explicitly on the minibatch size and show when minibatching helps a subgradient scheme with random feasibility updates.
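To make the two-step structure concrete, here is a minimal Python sketch of the parallel-minibatch variant as described in the abstract: an objective subgradient step with the diminishing stepsize 1/(μk) typical for strongly convex problems, followed by an averaged Polyak-type subgradient step on the sampled constraint violations. The interface (`obj_subgrad`, `g`, `g_subgrad`) and the choices of uniform iterate averaging and Polyak feasibility stepsize are illustrative assumptions, not the authors' exact scheme, whose stepsizes and weighted averaging differ in detail.

```python
import numpy as np

def minibatch_subgradient(obj_subgrad, g, g_subgrad, m, x0, mu=1.0, N=10, K=5000, seed=0):
    """Sketch of a random minibatch subgradient method with parallel
    feasibility updates (hypothetical interface, not the authors' code).

    obj_subgrad(x)   -> a subgradient of the objective f at x
    g(i, x)          -> value of the i-th constraint function g_i(x)
    g_subgrad(i, x)  -> a subgradient of g_i at x
    m                -> number of functional constraints g_i(x) <= 0
    mu               -> strong convexity parameter of f
    N                -> minibatch size
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    x_avg = np.zeros_like(x)
    for k in range(1, K + 1):
        # Step 1: subgradient step on the objective, diminishing stepsize.
        alpha = 1.0 / (mu * k)
        y = x - alpha * obj_subgrad(x)
        # Step 2: parallel feasibility update on a random minibatch,
        # averaging Polyak-type steps toward each sampled level set.
        batch = rng.integers(0, m, size=N)
        step = np.zeros_like(y)
        for i in batch:
            viol = max(g(i, y), 0.0)         # violation of constraint i at y
            if viol > 0.0:
                d = g_subgrad(i, y)          # assumed nonzero when g_i(y) > 0
                step += (viol / (d @ d)) * d
        x = y - step / N
        # Running (uniform) average of the iterates; the paper analyzes
        # weighted averages instead.
        x_avg += (x - x_avg) / k
    return x_avg
```

A toy usage with synthetic data, minimizing the strongly convex non-smooth function f(x) = ||x - c||_1 + (μ/2)||x||² subject to m linear constraints aᵢᵀx ≤ bᵢ:

```python
d, m, mu = 5, 200, 1.0
rng = np.random.default_rng(1)
A = rng.normal(size=(m, d))
b = rng.uniform(1.0, 2.0, size=m)
c = rng.normal(size=d)
sol = minibatch_subgradient(
    obj_subgrad=lambda x: np.sign(x - c) + mu * x,  # a subgradient of f
    g=lambda i, x: A[i] @ x - b[i],                 # g_i(x)
    g_subgrad=lambda i, x: A[i],                    # subgradient of g_i
    m=m, x0=np.zeros(d), mu=mu)
```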



Acknowledgements

This research was supported by the National Science Foundation under CAREER Grant CMMI 07-42538 and by the Executive Agency for Higher Education, Research and Innovation Funding (UEFISCDI), Romania, PNIII-P4-PCE-2016-0731, project ScaleFreeNet, No. 39/2017.

Author information

Authors: Angelia Nedić and Ion Necoara. Correspondence to Ion Necoara.



About this article


Cite this article

Nedić, A., Necoara, I. Random Minibatch Subgradient Algorithms for Convex Problems with Functional Constraints. Appl Math Optim 80, 801–833 (2019). https://doi.org/10.1007/s00245-019-09609-7
