Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints

  • Anastasia Bayandina
  • Pavel Dvurechensky
  • Alexander Gasnikov
  • Fedor Stonyakin
  • Alexander Titov
Chapter
Part of the Lecture Notes in Mathematics book series (LNM, volume 2227)

Abstract

We consider the problem of minimizing a convex function over a simple set subject to a convex non-smooth inequality constraint, and describe first-order methods for solving such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our goals is to propose, for each of these settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule. We also construct a Mirror Descent method for problems whose objective function is not Lipschitz continuous, e.g., a quadratic function. Besides that, we address the question of recovering the dual solution for the considered problem.
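
To make the abstract's summary concrete: the problem in question has the form min_{x in Q} f(x) subject to g(x) <= 0, and the switching subgradient scheme alternates between "productive" steps on f (taken when the constraint is nearly satisfied) and "non-productive" steps on g (taken to restore feasibility). The following is a minimal Python sketch of this idea in the Euclidean prox setup, where the mirror step reduces to a projected subgradient step. The function names, the parameter theta0 (a bound on the distance from the starting point to a solution), and the exact form of the adaptive stepsizes and stopping rule are illustrative assumptions, not the chapter's algorithm verbatim.

```python
import numpy as np

def switching_mirror_descent(g, subgrad_f, subgrad_g, project, x0, eps, theta0):
    """Hypothetical sketch of an adaptive switching-subgradient Mirror Descent
    (Euclidean prox, so the mirror step is a projected subgradient step).

    g          -- constraint function; the feasible set is {x in Q : g(x) <= 0}
    subgrad_f  -- oracle returning a subgradient of the objective at x
    subgrad_g  -- oracle returning a subgradient of the constraint at x
    project    -- Euclidean projection onto the simple set Q
    eps        -- target accuracy
    theta0     -- assumed bound on the distance from x0 to a solution
    """
    x = np.asarray(x0, dtype=float)
    productive = []        # points where the constraint was eps-satisfied
    accumulated = 0.0      # accumulates eps**2 / ||h||**2 for the stopping rule
    while accumulated < 2.0 * theta0 ** 2:   # adaptive stopping rule (assumed form)
        if g(x) <= eps:                      # "productive" step: descend on f
            h = np.asarray(subgrad_f(x), dtype=float)
            productive.append(x)
        else:                                # "non-productive" step: fix feasibility
            h = np.asarray(subgrad_g(x), dtype=float)
        norm_sq = float(np.dot(h, h))        # assumes a nonzero subgradient
        x = project(x - (eps / norm_sq) * h) # stepsize eps/||h||**2: no Lipschitz
        accumulated += eps ** 2 / norm_sq    # constant is needed anywhere
    # theory guarantees productive steps occur before the rule fires
    return np.mean(productive, axis=0)
```

A toy invocation, minimizing a linear objective over the unit ball with one linear constraint (all choices here are made up for illustration):

```python
subgrad_f = lambda x: np.array([1.0, 1.0])           # f(x) = x[0] + x[1]
g = lambda x: x[0] - 0.2                             # constraint g(x) <= 0
subgrad_g = lambda x: np.array([1.0, 0.0])
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
x_hat = switching_mirror_descent(g, subgrad_f, subgrad_g, project,
                                 np.zeros(2), eps=1e-2, theta0=1.0)
```

The appeal of stepsizes of the form eps/||h||**2 is that they use only the observed subgradient norms, which is what makes the scheme adaptive: no Lipschitz constant of f or g has to be known in advance.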

Keywords

Constrained non-smooth convex optimization · Stochastic adaptive mirror descent · Primal-dual methods · Restarts

AMS Subject Classifications

90C25 · 90C30 · 90C06 · 65Y20

Acknowledgements

The authors are very grateful to Anatoli Juditsky, Arkadi Nemirovski and Yurii Nesterov for fruitful discussions. The research by P. Dvurechensky and A. Gasnikov presented in Section 4 was conducted at IITP RAS and supported by the Russian Science Foundation grant (project 14-50-00150). The research by F. Stonyakin presented in Subsection 3.3 was partially supported by the grant of the President of the Russian Federation for young candidates of sciences, project no. MK-176.2017.1.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Anastasia Bayandina 1, 2
  • Pavel Dvurechensky 3, 4 (corresponding author)
  • Alexander Gasnikov 2, 4
  • Fedor Stonyakin 5, 2
  • Alexander Titov 1

  1. Moscow Institute of Physics and Technology, Dolgoprudny, Russia
  2. Skolkovo Institute of Science and Technology, Skolkovo Innovation Center, Moscow, Russia
  3. Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany
  4. Institute for Information Transmission Problems RAS, Moscow, Russia
  5. V.I. Vernadsky Crimean Federal University, Simferopol, Russia