
Algorithms for stochastic optimization with function or expectation constraints

Published in: Computational Optimization and Applications

Abstract

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a function or expectation constraint on either the decision variables or the problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with a constraint on the decision variables. We show that this algorithm exhibits the optimal \({\mathcal{O}}(1/\epsilon^2)\) rate of convergence, in terms of both optimality gap and constraint violation, when the objective and constraint functions are generally convex, where \(\epsilon\) denotes the target accuracy in optimality gap and infeasibility. Moreover, this rate of convergence improves to \({\mathcal{O}}(1/\epsilon)\) when the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation in which the constraint is defined over the problem parameters, and show that it exhibits an optimal rate of convergence similar to that of CSA. It is worth noting that CSA and CSPA are primal methods that require neither iterations in the dual space nor estimates of the size of the dual variables. To the best of our knowledge, this is the first time that such optimal SA methods for solving function or expectation constrained stochastic optimization have been presented in the literature.
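To make the cooperative idea concrete, the following is a minimal sketch of a CSA-style primal scheme, not the paper's exact algorithm: at each iteration, if a noisy estimate of the constraint value exceeds a shrinking tolerance, the iterate moves along a stochastic subgradient of the constraint; otherwise it moves along a stochastic subgradient of the objective, and only those "objective" iterates are averaged. The step sizes, tolerances, toy objective, and box feasible set below are all illustrative assumptions.

```python
import numpy as np

# Sketch of a cooperative-SA-style scheme (a simplification, not the paper's
# exact CSA): minimize E[F(x, xi)] over a box, subject to g(x) <= 0, using
# only noisy subgradients and noisy constraint evaluations. Note that no dual
# variables are maintained -- the method is purely primal.

rng = np.random.default_rng(0)

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the (assumed) box feasible set
    return np.clip(x, lo, hi)

def csa_sketch(x0, obj_subgrad, con_value, con_subgrad, T, gamma0=0.5, eta0=0.5):
    x = x0.copy()
    avg, count = np.zeros_like(x0), 0
    for t in range(1, T + 1):
        gamma = gamma0 / np.sqrt(t)   # O(1/sqrt(t)) step size, as in SA methods
        eta = eta0 / np.sqrt(t)       # shrinking tolerance on constraint violation
        if con_value(x) > eta:
            d = con_subgrad(x)        # constraint violated: reduce g(x)
        else:
            d = obj_subgrad(x)        # near-feasible: reduce the objective
            avg += x                  # average only the "objective" iterates
            count += 1
        x = project_box(x - gamma * d)
    return avg / max(count, 1)

# Toy instance (hypothetical): minimize E||x - c + noise||^2 s.t. sum(x) <= 1.
# The constrained optimum is x = (0.5, 0.5).
c = np.array([0.8, 0.8])
obj_sg = lambda x: 2 * (x - c) + 0.1 * rng.standard_normal(2)   # noisy subgradient
con_val = lambda x: np.sum(x) - 1.0 + 0.05 * rng.standard_normal()
con_sg = lambda x: np.ones(2)

x_bar = csa_sketch(np.zeros(2), obj_sg, con_val, con_sg, T=5000)
print(x_bar, np.sum(x_bar) - 1.0)
```

On this toy instance the averaged output lands near the constrained optimum while keeping the averaged constraint violation small, illustrating the two guarantees (optimality gap and infeasibility) the abstract refers to.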



Author information

Correspondence to Guanghui Lan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Part of these results was first presented at the Annual INFORMS Meeting in October 2015, https://informs.emeetingsonline.com/emeetings/formbuilder/clustersessiondtl.asp?csnno=24236&mmnno=272&ppnno=91687, and summarized in a previous version entitled “Algorithms for stochastic optimization with expectation constraints” in 2016.

Guanghui Lan has been supported by NSF CMMI 1637474.


Cite this article

Lan, G., Zhou, Z. Algorithms for stochastic optimization with function or expectation constraints. Comput Optim Appl 76, 461–498 (2020). https://doi.org/10.1007/s10589-020-00179-x

