
Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex

  • Stochastic Systems, Queueing Systems
  • Published in: Automation and Remote Control

Abstract

In this paper we propose a modification of the mirror descent method for nonsmooth stochastic convex optimization problems on the unit simplex. The problems considered differ from the classical setting in that only realizations of the function values are available (a gradient-free, or zero-order, oracle), rather than subgradients. Our purpose is to derive the convergence rate of the proposed method and to determine the level of noise that does not significantly affect this rate.
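For illustration only, the following is a minimal sketch of the kind of scheme the abstract refers to: entropic mirror descent on the unit simplex driven by a two-point, function-values-only gradient estimate. This is not the authors' exact algorithm; the random-direction scheme, the smoothing parameter tau, the step size eta, and the toy objective are assumptions chosen merely to make the sketch runnable.

```python
import numpy as np

def two_point_grad_estimate(f, x, tau, rng):
    """Randomized two-point gradient estimate using only function values.
    The unit-sphere direction and the n/(2*tau) scaling are illustrative choices."""
    n = x.size
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)
    return n * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

def entropic_mirror_descent(f, n, steps, eta, tau, seed=0):
    """Mirror descent on the unit simplex with the entropy prox-function:
    a multiplicative-weights step followed by renormalization, with averaging."""
    rng = np.random.default_rng(seed)
    x = np.full(n, 1.0 / n)           # start at the barycenter of the simplex
    x_avg = np.zeros(n)
    for _ in range(steps):
        g = two_point_grad_estimate(f, x, tau, rng)
        w = x * np.exp(-eta * g)      # entropic (KL) prox step
        x = w / w.sum()               # renormalize: iterate stays on the simplex
        x_avg += x
    return x_avg / steps              # averaged iterate

if __name__ == "__main__":
    # Toy nonsmooth convex objective on the simplex (illustrative only).
    c = np.linspace(0.0, 1.0, 10)
    f = lambda x: np.abs(x - c).sum()
    x_hat = entropic_mirror_descent(f, n=10, steps=5000, eta=0.05, tau=1e-3)
    print("approximate minimizer:", np.round(x_hat, 3))
```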

Author information

Corresponding author

Correspondence to A. V. Gasnikov.

Additional information

Original Russian Text © A.V. Gasnikov, A.A. Lagunovskaya, I.N. Usmanova, F.A. Fedorenko, 2016, published in Avtomatika i Telemekhanika, 2016, No. 10, pp. 57–77.

This paper was recommended for publication by P.S. Shcherbakov, a member of the Editorial Board.


Cite this article

Gasnikov, A.V., Lagunovskaya, A.A., Usmanova, I.N. et al. Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex. Autom Remote Control 77, 2018–2034 (2016). https://doi.org/10.1134/S0005117916110114
