Abstract
We consider the problem of minimizing a convex function over a simple set subject to a convex non-smooth inequality constraint, and describe first-order methods for solving such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our goals is to propose, for each of these settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule. We also construct a Mirror Descent method for problems whose objective function is not Lipschitz continuous, e.g., a quadratic function. Besides that, we address the question of recovering the dual solution of the considered problem.
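The switching subgradient scheme mentioned above alternates between steps along a subgradient of the objective (taken when the current point is nearly feasible, a "productive" step) and steps along a subgradient of the constraint (otherwise). Below is a minimal sketch in the Euclidean setup, where Mirror Descent reduces to projected subgradient descent, with the adaptive stepsize choice h = eps / ||d||^2; the function names and the toy problem are illustrative assumptions, not the chapter's notation.

```python
import numpy as np

def switching_mirror_descent(f, grad_f, g, grad_g, project, x0, eps, max_iter=10000):
    """Minimize f(x) s.t. g(x) <= 0 over a simple set given by `project`.

    Euclidean prox-setup, so each Mirror Descent step is a projected
    subgradient step with the adaptive stepsize eps / ||d||^2.
    """
    x = np.asarray(x0, dtype=float)
    productive = []                      # iterates where the constraint nearly holds
    for _ in range(max_iter):
        if g(x) <= eps:
            d = grad_f(x)                # "productive" step: subgradient of the objective
            productive.append(x.copy())
        else:
            d = grad_g(x)                # "non-productive" step: subgradient of the constraint
        h = eps / (d @ d + 1e-16)        # adaptive stepsize
        x = project(x - h * d)
    return np.mean(productive, axis=0)   # average of the productive iterates

# Toy example: minimize x_1 subject to x_1^2 + x_2^2 <= 1,
# over the "simple set" given by the Euclidean ball of radius 2.
f = lambda x: x[0]
grad_f = lambda x: np.array([1.0, 0.0])
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
grad_g = lambda x: 2.0 * x
project = lambda y: y if np.linalg.norm(y) <= 2 else 2 * y / np.linalg.norm(y)

x_star = switching_mirror_descent(f, grad_f, g, grad_g, project,
                                  x0=np.zeros(2), eps=0.05)
```

In this example the exact solution is x = (-1, 0), and the averaged productive iterate approaches it with constraint violation controlled by eps; averaging only the productive iterates is what allows the scheme to certify near-feasibility of the output.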
Acknowledgements
The authors are very grateful to Anatoli Juditsky, Arkadi Nemirovski and Yurii Nesterov for fruitful discussions. The research by P. Dvurechensky and A. Gasnikov presented in Section 4 was conducted at IITP RAS and supported by the Russian Science Foundation grant (project 14-50-00150). The research by F. Stonyakin presented in Subsection 3.3 was partially supported by the grant of the President of the Russian Federation for young candidates of sciences, project no. MK-176.2017.1.
Copyright information
© 2018 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Bayandina, A., Dvurechensky, P., Gasnikov, A., Stonyakin, F., Titov, A. (2018). Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints. In: Giselsson, P., Rantzer, A. (eds) Large-Scale and Distributed Optimization. Lecture Notes in Mathematics, vol 2227. Springer, Cham. https://doi.org/10.1007/978-3-319-97478-1_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-97477-4
Online ISBN: 978-3-319-97478-1
eBook Packages: Mathematics and Statistics (R0)