
On the efficiency of a randomized mirror descent algorithm in online optimization problems

  • A. V. Gasnikov
  • Yu. E. Nesterov
  • V. G. Spokoiny
Article

Abstract

A randomized online version of the mirror descent method is proposed. It differs from existing versions in how the randomization is performed: randomization is carried out at the stage of projecting the subgradient of the function being optimized onto the unit simplex, rather than at the stage of computing the subgradient, as is common practice. As a result, one obtains a componentwise subgradient descent with a randomly chosen component, which admits an online interpretation. This observation, for example, makes it possible to give a unified interpretation of results on the weighting of expert decisions and to propose the most efficient method for finding an equilibrium in a two-person zero-sum matrix game with a sparse matrix.
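To convey the flavor of the scheme described above, the following is a minimal illustrative sketch, not the authors' exact algorithm. It runs entropic mirror descent (exponential weighting) on the unit simplex for a two-person zero-sum matrix game, and the randomization enters at the update stage: each player samples a single pure strategy from the opponent's current mixed strategy, so every step uses only one column (or row) of the payoff matrix, i.e., a componentwise stochastic subgradient. The function name, step-size constant, and iterate averaging are illustrative assumptions; Python with NumPy is used for concreteness.

import numpy as np

def randomized_mirror_descent_matrix_game(A, num_steps=10000, seed=0):
    # Illustrative sketch (not the authors' exact method): exponential-weights
    # (entropic mirror descent) updates for the zero-sum game min_x max_y x^T A y,
    # with randomization via sampling one pure strategy of the opponent per step.
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x = np.full(n, 1.0 / n)          # row player's mixed strategy (point of the unit simplex)
    y = np.full(m, 1.0 / m)          # column player's mixed strategy
    x_avg = np.zeros(n)
    y_avg = np.zeros(m)
    step = np.sqrt(np.log(max(n, m)) / num_steps)   # illustrative constant step size

    for _ in range(num_steps):
        # Randomization: draw one pure strategy from each current mixed strategy.
        j = rng.choice(m, p=y)       # column sampled for the row player
        i = rng.choice(n, p=x)       # row sampled for the column player

        # Componentwise stochastic subgradients: a single column / row of A.
        x *= np.exp(-step * A[:, j]) # row player minimizes x^T A e_j
        y *= np.exp(step * A[i, :])  # column player maximizes e_i^T A y

        # Entropic "projection" back onto the unit simplex is just renormalization.
        x /= x.sum()
        y /= y.sum()

        x_avg += x
        y_avg += y

    # Averaged iterates approximate an equilibrium pair of mixed strategies.
    return x_avg / num_steps, y_avg / num_steps

if __name__ == "__main__":
    # Rock-paper-scissors payoff matrix; the equilibrium is the uniform strategy.
    A = np.array([[0.0, 1.0, -1.0],
                  [-1.0, 0.0, 1.0],
                  [1.0, -1.0, 0.0]])
    x, y = randomized_mirror_descent_matrix_game(A)
    print("approximate equilibrium strategies:", x, y)

Because each iteration touches only one column and one row of A, the per-step cost is O(n + m) rather than O(nm), which is the reason such randomized schemes are attractive for games with large sparse matrices.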

Keywords

mirror descent method, dual averaging method, online optimization, exponential weighting, multi-armed bandits, weighting of experts, stochastic optimization, randomization



Copyright information

© Pleiades Publishing, Ltd. 2015

Authors and Affiliations

  • A. V. Gasnikov (1, 2, 3)
  • Yu. E. Nesterov (4)
  • V. G. Spokoiny (1, 2, 3, 5)

  1. Moscow Institute of Physics and Technology, Dolgoprudnyi, Moscow oblast, Russia
  2. Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow, Russia
  3. National Research University Higher School of Economics, Moscow, Russia
  4. Center of Operations Research and Econometrics (CORE, UCL), Louvain-la-Neuve, Belgium
  5. Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany
