Abstract
In this paper, we generalize the approach of Gasnikov et al. (2017), which allows solving (stochastic) convex optimization problems with an inexact gradient-free oracle, to convex-concave saddle-point problems. The proposed approach works at least as well as the best existing approaches. But for a special setup (simplex-type constraints and closeness of the Lipschitz constants in the 1- and 2-norms) our approach reduces the required number of oracle calls (function calculations) by a factor of $n/\log n$. Our method uses a stochastic approximation of the gradient via finite differences. In this case, the function must be specified not only on the optimization set itself but also in a certain neighbourhood of it. In the second part of the paper, we analyze the case when such an assumption cannot be made: we propose a general approach for modifying the method to handle this problem, and we apply this approach to particular cases of some classical sets.
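To make the oracle concrete, below is a minimal sketch of the kind of two-point finite-difference gradient estimator the abstract refers to, applied to a toy strongly-convex-strongly-concave saddle-point problem with plain gradient descent-ascent. It is not the paper's exact method (which is a mirror-descent-type scheme with carefully chosen step sizes and smoothing parameter); the function names, constants, and the toy objective are illustrative assumptions.

```python
import numpy as np

def two_point_grad_estimate(f, z, tau, rng):
    """Randomized two-point gradient estimate of f at z.

    Draws a direction e uniformly from the unit sphere and returns
        (d / (2 * tau)) * (f(z + tau * e) - f(z - tau * e)) * e,
    an estimate of the gradient of a smoothed version of f.  Note that
    f is evaluated at z +/- tau * e, i.e. in a tau-neighbourhood of z.
    """
    d = z.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)  # uniform direction on the unit sphere
    return (d / (2.0 * tau)) * (f(z + tau * e) - f(z - tau * e)) * e

# Toy saddle-point problem f(x, y) = 0.5||x||^2 + x^T A y - 0.5||y||^2
# with x = z[:3], y = z[3:], whose saddle point is at zero.  Solved by
# plain stochastic gradient descent-ascent (a simplified stand-in for
# the paper's mirror-descent scheme).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
f = lambda z: 0.5 * z[:3] @ z[:3] + z[:3] @ A @ z[3:] - 0.5 * z[3:] @ z[3:]

z = np.ones(6)
tau, step = 1e-4, 1e-2  # smoothing radius and step size (illustrative values)
for _ in range(2000):
    g = two_point_grad_estimate(f, z, tau, rng)
    z[:3] -= step * g[:3]  # gradient descent in x
    z[3:] += step * g[3:]  # gradient ascent in y
print("approximate saddle point:", z)
```

Note how the probe points $z \pm \tau e$ can leave the feasible set; this is exactly why the function must be defined in a neighbourhood of it. The paper's second part treats the case where this assumption fails, e.g. by running the method on a suitably shrunken set so that the probe points remain feasible.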
Keywords
- Zeroth-order optimization
- Saddle-point problem
- Stochastic optimization
The research of A. Beznosikov was partially supported by RFBR, project number 19-31-51001. The research of A. Gasnikov was partially supported by RFBR, project number 18-29-03071 mk, and by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye), no. 075-00337-20-03.
Notes
- 1. In more detail: this can be done analogously for the deterministic setup. As for the stochastic setup, one needs to improve the estimates of this paper by replacing the Bregman diameters of the considered convex sets \(\varOmega \) with the Bregman divergence between the starting point and the solution. This requires more accurate calculations (as in [11]) and is not included in this paper. Note that all the constants characterizing smoothness, stochasticity, and strong convexity in all the estimates of this paper can be determined on the intersection of the considered convex sets with Bregman balls around the solution whose radii equal (up to logarithmic factors) the Bregman divergence between the starting point and the solution.
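For reference, the Bregman divergence mentioned in the footnote is the standard one generated by a prox-function \(d\); the symbol \(V\) below is generic notation, not necessarily the paper's own:

```latex
% Bregman divergence generated by a prox-function d:
V(x, y) = d(x) - d(y) - \langle \nabla d(y),\, x - y \rangle,
% so a "Bregman ball" of radius R around the solution z^* is
% \{\, x : V(x, z^*) \le R \,\}.
```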
References
1. Alkousa, M., Dvinskikh, D., Stonyakin, F., Gasnikov, A., Kovalev, D.: Accelerated methods for composite non-bilinear saddle point problem. arXiv preprint arXiv:1906.03620 (2019)
2. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Society for Industrial and Applied Mathematics, Philadelphia (2019)
3. Beznosikov, A., Gorbunov, E., Gasnikov, A.: Derivative-free method for composite optimization with applications to decentralized distributed optimization. arXiv preprint arXiv:1911.10645 (2019)
4. Beznosikov, A., Sadiev, A., Gasnikov, A.: Gradient-free methods for saddle-point problem. arXiv preprint arXiv:2005.05913 (2020)
5. Duchi, J.C., Jordan, M.I., Wainwright, M.J., Wibisono, A.: Optimal rates for zero-order convex optimization: the power of two function evaluations (2013)
6. Dvurechensky, P., Gorbunov, E., Gasnikov, A.: An accelerated directional derivative method for smooth stochastic convex optimization. arXiv preprint arXiv:1804.02394 (2018)
7. Gasnikov, A.: Universal gradient descent. arXiv preprint arXiv:1711.00394 (2017)
8. Gasnikov, A.V., Krymova, E.A., Lagunovskaya, A.A., Usmanova, I.N., Fedorenko, F.A.: Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case. Autom. Remote Control 78(2), 224–234 (2017). https://doi.org/10.1134/S0005117917020035
9. Gasnikov, A.V., Lagunovskaya, A.A., Usmanova, I.N., Fedorenko, F.A.: Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex. Autom. Remote Control 77(11), 2018–2034 (2016). https://doi.org/10.1134/S0005117916110114
10. Goodfellow, I.: NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160 (2016)
11. Gorbunov, E., Dvurechensky, P., Gasnikov, A.: An accelerated method for derivative-free smooth stochastic convex optimization. arXiv preprint arXiv:1802.09022 (2018)
12. Ivanova, A., et al.: Oracle complexity separation in convex optimization. arXiv preprint arXiv:2002.02706 (2020)
13. Langley, P.: Crafting papers on machine learning. In: Langley, P. (ed.) Proceedings of the 17th International Conference on Machine Learning (ICML 2000), Stanford, CA, pp. 1207–1216. Morgan Kaufmann (2000)
14. Lin, T., Jin, C., Jordan, M., et al.: Near-optimal algorithms for minimax optimization. arXiv preprint arXiv:2002.02417 (2020)
15. Nesterov, Y., Spokoiny, V.G.: Random gradient-free minimization of convex functions. Found. Comput. Math. 17(2), 527–566 (2017)
16. Shamir, O.: An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. J. Mach. Learn. Res. 18(52), 1–11 (2017)
17. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (2018)
18. Vorontsova, E.A., Gasnikov, A.V., Gorbunov, E.A., Dvurechenskii, P.E.: Accelerated gradient-free optimization methods with a non-Euclidean proximal operator. Autom. Remote Control 80(8), 1487–1501 (2019)
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Beznosikov, A., Sadiev, A., Gasnikov, A. (2020). Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem. In: Kochetov, Y., Bykadorov, I., Gruzdeva, T. (eds) Mathematical Optimization Theory and Operations Research. MOTOR 2020. Communications in Computer and Information Science, vol 1275. Springer, Cham. https://doi.org/10.1007/978-3-030-58657-7_11
DOI: https://doi.org/10.1007/978-3-030-58657-7_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58656-0
Online ISBN: 978-3-030-58657-7