Abstract
Decentralized optimization methods have attracted significant attention in the optimization community due to their scalability, the increasing popularity of parallel algorithms, and their many applications. In this work, we study saddle point problems of sum type, where the summands are held by separate computational entities connected by a network. The network topology may change over time, which models real-world network malfunctions. We derive lower complexity bounds for algorithms in this setup and develop near-optimal methods that match these bounds.
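For concreteness, the setup can be sketched as follows (the precise assumptions, such as smoothness and strong convexity-strong concavity of the summands and the mixing model for the changing topology, are those stated in the paper itself): an $M$-node network solves

\[
\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; f(x, y) := \frac{1}{M} \sum_{m=1}^{M} f_m(x, y),
\]

where node $m$ holds only its local summand $f_m$ and, at each communication round $t$, can exchange information only with its current neighbors in the graph $\mathcal{G}(t)$, which may differ from round to round.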
Keywords
- Saddle-point problem
- Distributed optimization
- Decentralized optimization
- Time-varying network
- Lower and upper bounds
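To give a feel for the method template behind these keywords, here is a minimal Python sketch of one round of gossip-based extragradient over a time-varying network: nodes first average their iterates with their current neighbors, then take a local extragradient step. This is an illustrative sketch of the general scheme, not the algorithm analyzed in the paper; the interface (`local_grads`, the mixing matrix `W_t`) is an assumption made for the example.

```python
import numpy as np

def gossip_extragradient_round(X, Y, local_grads, W_t, step):
    """One illustrative round for min_x max_y (1/M) * sum_m f_m(x, y).

    X, Y        : (M, d) arrays; row m is node m's local iterate.
    local_grads : list of M callables; local_grads[m](x, y) returns
                  (grad_x f_m, grad_y f_m) at the point (x, y).
    W_t         : (M, M) doubly stochastic mixing matrix supported on
                  the edges of the *current* communication graph.
    step        : step size.
    """
    # Communication step: every node averages with its current neighbors.
    X, Y = W_t @ X, W_t @ Y

    # Extrapolation: a gradient step from the averaged point
    # (descent in x, ascent in y).
    G = [g(x, y) for g, x, y in zip(local_grads, X, Y)]
    X_half = X - step * np.stack([gx for gx, _ in G])
    Y_half = Y + step * np.stack([gy for _, gy in G])

    # Update: re-evaluate gradients at the extrapolated point and step
    # from the averaged point again (the extragradient rule).
    G = [g(x, y) for g, x, y in zip(local_grads, X_half, Y_half)]
    X_new = X - step * np.stack([gx for gx, _ in G])
    Y_new = Y + step * np.stack([gy for _, gy in G])
    return X_new, Y_new
```

For a toy bilinear game with $f_m(x, y) = x^\top A_m y$, `local_grads[m]` would return `(A_m @ y, A_m.T @ x)`. The matrix `W_t` changes from round to round with the topology, and how fast repeated averaging with these matrices contracts disagreement between nodes is what drives the network-dependent terms in lower and upper bounds of this type.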
The research of A. Beznosikov, A. Rogozin and A. Gasnikov was supported by the Russian Science Foundation (project No. 21-71-30005).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Beznosikov, A., Rogozin, A., Kovalev, D., Gasnikov, A. (2021). Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks. In: Olenev, N.N., Evtushenko, Y.G., Jaćimović, M., Khachay, M., Malkova, V. (eds) Optimization and Applications. OPTIMA 2021. Lecture Notes in Computer Science, vol 13078. Springer, Cham. https://doi.org/10.1007/978-3-030-91059-4_18
DOI: https://doi.org/10.1007/978-3-030-91059-4_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91058-7
Online ISBN: 978-3-030-91059-4
eBook Packages: Computer Science, Computer Science (R0)