Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem

  • Conference paper
  • First Online:
Mathematical Optimization Theory and Operations Research (MOTOR 2020)

Abstract

In this paper, we generalize the approach of Gasnikov et al. 2017, which allows solving (stochastic) convex optimization problems with an inexact gradient-free oracle, to the convex-concave saddle-point problem. The proposed approach works at least as well as the best existing approaches. But for a special set-up (simplex-type constraints and closeness of the Lipschitz constants in the 1 and 2 norms), our approach reduces the required number of oracle calls (function calculations). Our method uses a stochastic approximation of the gradient via finite differences. In this case, the function must be specified not only on the optimization set itself, but also in a certain neighbourhood of it. In the second part of the paper, we analyze the case when such an assumption cannot be made: we propose a general approach on how to modify the method to solve this problem, and we also apply this approach to particular cases of some classical sets.
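To illustrate the finite-difference gradient approximation mentioned in the abstract, below is a minimal sketch of a randomized two-point estimator in Python. The names (two_point_gradient_estimate, tau) are hypothetical; the paper's actual estimator, smoothing distribution and norms may differ, so this is a generic sketch rather than the authors' method.

    import numpy as np

    def two_point_gradient_estimate(f, z, tau, rng=None):
        # Randomized two-point finite-difference estimate of the gradient of f at z.
        # f   : callable returning a (possibly noisy) scalar value of the objective
        # z   : current point, e.g. the concatenation of the primal and dual variables
        # tau : finite-difference (smoothing) parameter
        rng = np.random.default_rng() if rng is None else rng
        n = z.shape[0]
        e = rng.standard_normal(n)
        e /= np.linalg.norm(e)  # random direction, uniform on the Euclidean unit sphere
        # The factor n compensates for averaging over random directions.
        return n * (f(z + tau * e) - f(z - tau * e)) / (2.0 * tau) * e

In a saddle-point (descent-ascent) scheme, the block of this estimate corresponding to the dual variable would typically be taken with the opposite sign, and the resulting vector fed to a mirror-descent-type step.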

The research of A. Beznosikov was partially supported by RFBR, project number 19-31-51001. The research of A. Gasnikov was partially supported by RFBR, project number 18-29-03071 mk and was partially supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) no 075-00337-20-03.

Notes

  1. In more detail, this can be done analogously in the deterministic set-up. In the stochastic set-up, the estimates of this paper need to be improved by replacing the Bregman diameters of the considered convex sets \(\varOmega \) with the Bregman divergence between the starting point and the solution. This requires more careful calculations (as in [11]) and is not included in this paper. Note that all the constants characterizing smoothness, stochasticity and strong convexity in the estimates of this paper can be determined on the intersection of the considered convex sets and Bregman balls around the solution with radii equal (up to logarithmic factors) to the Bregman divergence between the starting point and the solution.
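For reference, the footnote relies on the Bregman divergence and Bregman balls, which are not defined on this page. A standard definition, generated by a prox-function \(d\) (the argument order is a convention choice and may differ from the one used in the paper), is

\[
  V(x, y) = d(x) - d(y) - \langle \nabla d(y),\, x - y \rangle ,
  \qquad
  B_R(z^*) = \{\, z : V(z, z^*) \le R \,\},
\]

so the footnote suggests taking \(R\) of the order of \(V(z_0, z^*)\) (up to logarithmic factors), where \(z_0\) is the starting point and \(z^*\) the solution.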

References

  1. Alkousa, M., Dvinskikh, D., Stonyakin, F., Gasnikov, A., Kovalev, D.: Accelerated methods for composite non-bilinear saddle point problem. arXiv preprint arXiv:1906.03620 (2019)

  2. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Society for Industrial and Applied Mathematics, Philadelphia (2019)

  3. Beznosikov, A., Gorbunov, E., Gasnikov, A.: Derivative-free method for composite optimization with applications to decentralized distributed optimization. arXiv preprint arXiv:1911.10645 (2019)

  4. Beznosikov, A., Sadiev, A., Gasnikov, A.: Gradient-free methods for saddle-point problem. arXiv preprint arXiv:2005.05913 (2020)

  5. Duchi, J.C., Jordan, M.I., Wainwright, M.J., Wibisono, A.: Optimal rates for zero-order convex optimization: the power of two function evaluations (2013)

  6. Dvurechensky, P., Gorbunov, E., Gasnikov, A.: An accelerated directional derivative method for smooth stochastic convex optimization. arXiv preprint arXiv:1804.02394 (2018)

  7. Gasnikov, A.: Universal gradient descent. arXiv preprint arXiv:1711.00394 (2017)

  8. Gasnikov, A.V., Krymova, E.A., Lagunovskaya, A.A., Usmanova, I.N., Fedorenko, F.A.: Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case. Autom. Remote Control 78(2), 224–234 (2017). https://doi.org/10.1134/S0005117917020035

  9. Gasnikov, A.V., Lagunovskaya, A.A., Usmanova, I.N., Fedorenko, F.A.: Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex. Autom. Remote Control 77(11), 2018–2034 (2016). https://doi.org/10.1134/S0005117916110114

  10. Goodfellow, I.: NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160 (2016)

  11. Gorbunov, E., Dvurechensky, P., Gasnikov, A.: An accelerated method for derivative-free smooth stochastic convex optimization. arXiv preprint arXiv:1802.09022 (2018)

  12. Ivanova, A., et al.: Oracle complexity separation in convex optimization. arXiv preprint arXiv:2002.02706 (2020)

  13. Langley, P.: Crafting papers on machine learning. In: Langley, P. (ed.) Proceedings of the 17th International Conference on Machine Learning. (ICML 2000), Stanford, CA, pp. 1207–1216. Morgan Kaufmann (2000)

  14. Lin, T., Jin, C., Jordan, M., et al.: Near-optimal algorithms for minimax optimization. arXiv preprint arXiv:2002.02417 (2020)

  15. Nesterov, Y., Spokoiny, V.G.: Random gradient-free minimization of convex functions. Found. Comput. Math. 17(2), 527–566 (2017)

  16. Shamir, O.: An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. J. Mach. Learn. Res. 18(52), 1–11 (2017)

  17. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (2018)

  18. Vorontsova, E.A., Gasnikov, A.V., Gorbunov, E.A., Dvurechenskii, P.E.: Accelerated gradient-free optimization methods with a non-Euclidean proximal operator. Autom. Remote Control 80(8), 1487–1501 (2019)

Author information

Correspondence to Aleksandr Beznosikov, Abdurakhmon Sadiev or Alexander Gasnikov.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Beznosikov, A., Sadiev, A., Gasnikov, A. (2020). Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem. In: Kochetov, Y., Bykadorov, I., Gruzdeva, T. (eds) Mathematical Optimization Theory and Operations Research. MOTOR 2020. Communications in Computer and Information Science, vol 1275. Springer, Cham. https://doi.org/10.1007/978-3-030-58657-7_11

  • DOI: https://doi.org/10.1007/978-3-030-58657-7_11

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58656-0

  • Online ISBN: 978-3-030-58657-7

  • eBook Packages: Computer Science, Computer Science (R0)
