Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints

Part of the book series: Lecture Notes in Mathematics (LNM, volume 2227)

Abstract

We consider the problem of minimizing a convex function over a simple set subject to a convex non-smooth inequality constraint, and we describe first-order methods for solving such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our main goals is to propose, for each of the listed settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule. We also construct a Mirror Descent method for problems whose objective function is not Lipschitz continuous, e.g., is quadratic. Besides that, we address the question of recovering the dual solution of the considered problem.
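To make the switching subgradient scheme concrete, the sketch below is a minimal Python illustration, not the chapter's exact method. It runs entropic Mirror Descent on the probability simplex: when the constraint is nearly satisfied it takes a "productive" step along a subgradient of the objective, otherwise a "non-productive" step along a subgradient of the constraint, with the adaptive stepsize h = eps/||grad||_inf^2 in place of stepsizes based on Lipschitz constants. The function names (`f_grad`, `g`, `g_grad`) and the fixed iteration budget standing in for the chapter's adaptive stopping rule are assumptions made for illustration.

```python
import numpy as np

def mirror_step(x, grad, h):
    # Entropic mirror-descent step on the probability simplex: the
    # Bregman projection for the negative-entropy prox-function has
    # this closed-form multiplicative-weights update.
    y = x * np.exp(-h * grad)
    return y / y.sum()

def md_switching(f_grad, g, g_grad, x0, eps, max_iter=10000):
    # Minimize f over the simplex subject to g(x) <= 0 with the
    # switching subgradient scheme: if g(x) <= eps, take a
    # "productive" step along a subgradient of f; otherwise a
    # "non-productive" step along a subgradient of g.  The adaptive
    # stepsize h = eps / ||grad||_inf^2 requires no Lipschitz
    # constants (the dual norm to ||.||_1 is ||.||_inf).
    x = x0.copy()
    productive = []  # (iterate, stepsize) pairs of productive steps
    for _ in range(max_iter):
        if g(x) <= eps:
            grad, is_productive = f_grad(x), True
        else:
            grad, is_productive = g_grad(x), False
        h = eps / np.linalg.norm(grad, np.inf) ** 2
        if is_productive:
            productive.append((x.copy(), h))
        x = mirror_step(x, grad, h)
    if not productive:
        raise RuntimeError("no productive steps; increase max_iter or eps")
    # Output the stepsize-weighted average of the productive iterates.
    points, weights = zip(*productive)
    return np.average(points, axis=0, weights=weights)

# Toy usage: a linear objective with one non-smooth constraint.
c = np.array([3.0, 2.0, 1.0])
f_grad = lambda x: c
g = lambda x: abs(x[0] - x[1]) - 0.1          # require g(x) <= 0
g_grad = lambda x: np.array([np.sign(x[0] - x[1]),
                             -np.sign(x[0] - x[1]), 0.0])
x_approx = md_switching(f_grad, g, g_grad, np.ones(3) / 3, eps=1e-2)
```

For the guarantees established in the chapter, the fixed iteration budget would be replaced by the adaptive stopping rule, and the prox setup chosen to match the geometry of the feasible set.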

Acknowledgements

The authors are very grateful to Anatoli Juditsky, Arkadi Nemirovski and Yurii Nesterov for fruitful discussions. The research by P. Dvurechensky and A. Gasnikov presented in Section 4 was conducted at IITP RAS and supported by the Russian Science Foundation grant (project 14-50-00150). The research by F. Stonyakin presented in Subsection 3.3 was partially supported by a grant of the President of the Russian Federation for young candidates of science (project no. MK-176.2017.1).

Corresponding author

Correspondence to Pavel Dvurechensky.

Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Bayandina, A., Dvurechensky, P., Gasnikov, A., Stonyakin, F., Titov, A. (2018). Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints. In: Giselsson, P., Rantzer, A. (eds) Large-Scale and Distributed Optimization. Lecture Notes in Mathematics, vol 2227. Springer, Cham. https://doi.org/10.1007/978-3-319-97478-1_8
