An Acceleration of Decentralized SGD Under General Assumptions with Low Stochastic Noise

  • Conference paper
  • In: Mathematical Optimization Theory and Operations Research: Recent Trends (MOTOR 2021)

Abstract

Distributed optimization methods are actively studied by the optimization community. Driven by applications in distributed machine learning, modern research directions include stochastic objectives, reduced communication frequency, and time-varying communication network topologies. Recently, an analysis unifying several centralized and decentralized approaches to stochastic distributed optimization was developed in Koloskova et al. (2020). In this work, we employ the Catalyst framework to accelerate the rates of Koloskova et al. (2020) in the low-stochastic-noise regime.
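
To make the construction in the abstract concrete, the following is a minimal Python sketch, under illustrative assumptions, of how a Catalyst outer loop [13] can wrap a decentralized SGD inner solver of the kind analyzed in [4]: the inner solver approximately minimizes the regularized proximal subproblem by alternating gossip averaging with local stochastic gradient steps, and the outer loop applies Nesterov-style extrapolation to the subproblem centers. The quadratic local objectives, ring gossip matrix, and all parameter values below are hypothetical and are not taken from the paper.

```python
# Minimal illustrative sketch (not the authors' implementation): a Catalyst-style
# outer loop [13] wrapped around a plain decentralized SGD inner solver in the
# spirit of [4]. The quadratic local objectives, ring gossip matrix, and all
# parameter values are assumptions made for this demo only.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 5

# Synthetic strongly convex local objectives f_i(x) = 0.5 x^T A_i x - b_i^T x.
A = [rng.standard_normal((dim, dim)) for _ in range(n_nodes)]
A = [0.1 * Ai @ Ai.T + 0.5 * np.eye(dim) for Ai in A]   # eigenvalues >= 0.5
b = [rng.standard_normal(dim) for _ in range(n_nodes)]

# Fixed doubly stochastic gossip matrix for a ring topology.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def local_stoch_grad(i, x, noise=0.01):
    """Stochastic gradient of f_i at x; Gaussian noise models data sampling."""
    return A[i] @ x - b[i] + noise * rng.standard_normal(dim)

def decentralized_sgd(y_center, kappa, steps=200, lr=0.05):
    """Inner solver: decentralized SGD on the regularized Catalyst subproblem
    min_x sum_i [ f_i(x) + (kappa/2) ||x - y_center||^2 ]."""
    X = np.tile(y_center, (n_nodes, 1))          # one local copy per node
    for _ in range(steps):
        X = W @ X                                # gossip (communication) step
        G = np.stack([local_stoch_grad(i, X[i]) + kappa * (X[i] - y_center)
                      for i in range(n_nodes)])
        X -= lr * G                              # local stochastic gradient step
    return X.mean(axis=0)                        # consensus approximation

# Catalyst outer loop: momentum on the centers of the proximal subproblems.
mu, kappa = 0.5, 5.0                             # assumed strong convexity / smoothing
q = mu / (mu + kappa)
beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))       # constant Nesterov-style momentum

x_prev = np.zeros(dim)
y = x_prev.copy()
for k in range(10):
    x_new = decentralized_sgd(y, kappa)          # approximate proximal point
    y = x_new + beta * (x_new - x_prev)          # extrapolation step
    x_prev = x_new

print("final averaged iterate:", x_prev)
```

Intuitively, when the stochastic noise is low, the inner solver reaches the subproblem accuracy required by the Catalyst analysis cheaply, which is the regime in which the outer acceleration pays off.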

The work of E. Trimbach and A. Rogozin was supported by the Andrei M. Raigorodskii Scholarship in Optimization. The research of A. Rogozin was supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) No-075-00337-20-03, project No. 0714-2020-0005. This work started during the Summer School at the Sirius Institute.

References

  1. Assran, M., Loizou, N., Ballas, N., Rabbat, M.: Stochastic gradient push for distributed deep learning. In: International Conference on Machine Learning, pp. 344–353. PMLR (2019)

  2. Dvinskikh, D., Gasnikov, A.: Decentralized and parallelized primal and dual accelerated methods for stochastic convex programming problems. arXiv preprint arXiv:1904.09015 (2019)

  3. Koloskova, A., Lin, T., Stich, S.U., Jaggi, M.: Decentralized deep learning with arbitrary communication compression. In: International Conference on Learning Representations (ICLR) (2020)

  4. Koloskova, A., Loizou, N., Boreiri, S., Jaggi, M., Stich, S.U.: A unified theory of decentralized SGD with changing topology and local updates. In: International Conference on Machine Learning (2020)

  5. Konečný, J., McMahan, H.B., Ramage, D., Richtárik, P.: Federated optimization: distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527 (2016)

  6. Kovalev, D., Salim, A., Richtárik, P.: Optimal and practical algorithms for smooth and strongly convex decentralized optimization. arXiv preprint arXiv:2006.11773 (2020)

  7. Kulunchakov, A., Mairal, J.: A generic acceleration framework for stochastic composite optimization. In: Advances in Neural Information Processing Systems (NeurIPS 2019), vol. 32 (2019)

  8. Li, H., Fang, C., Yin, W., Lin, Z.: A sharp convergence rate analysis for distributed accelerated gradient methods. arXiv preprint arXiv:1810.01053 (2018)

  9. Li, H., Lin, Z.: Revisiting EXTRA for smooth distributed optimization. arXiv preprint arXiv:2002.10110 (2020)

  10. Li, Z., Shi, W., Yan, M.: A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates. IEEE Trans. Signal Process. 67(17), 4494–4506 (2019)

  11. Lian, X., Zhang, C., Zhang, H., Hsieh, C.J., Zhang, W., Liu, J.: Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. arXiv preprint arXiv:1705.09056 (2017)

  12. Lin, H., Mairal, J., Harchaoui, Z.: A universal catalyst for first-order optimization. arXiv preprint arXiv:1506.02186 (2015)

  13. Lin, H., Mairal, J., Harchaoui, Z.: Catalyst acceleration for first-order convex optimization: from theory to practice. J. Mach. Learn. Res. 18(1), 7854–7907 (2018)

  14. McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, pp. 1273–1282. PMLR (2017)

  15. McMahan, H.B., Moore, E., Ramage, D., y Arcas, B.A.: Federated learning of deep networks using model averaging. arXiv preprint arXiv:1602.05629 (2016)

  16. Nedić, A., Olshevsky, A., Shi, W.: Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim. 27(4), 2597–2633 (2017)

  17. Nedic, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)

  18. Pu, S., Shi, W., Xu, J., Nedich, A.: A push-pull gradient method for distributed optimization in networks. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 3385–3390 (2018)

  19. Qu, G., Li, N.: Accelerated distributed Nesterov gradient descent. In: 2016 54th Annual Allerton Conference on Communication, Control, and Computing (2016)

  20. Scaman, K., Bach, F., Bubeck, S., Lee, Y.T., Massoulié, L.: Optimal algorithms for smooth and strongly convex distributed optimization in networks. In: International Conference on Machine Learning, pp. 3027–3036 (2017)

  21. Shi, W., Ling, Q., Wu, G., Yin, W.: EXTRA: an exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25(2), 944–966 (2015)

  22. Stich, S.U.: Local SGD converges fast and communicates little. arXiv preprint arXiv:1805.09767 (2018)

  23. Tang, H., Lian, X., Yan, M., Zhang, C., Liu, J.: D2: decentralized training over decentralized data. In: International Conference on Machine Learning, pp. 4848–4856. PMLR (2018)

  24. Xu, J., Tian, Y., Sun, Y., Scutari, G.: Distributed algorithms for composite optimization: unified framework and convergence analysis. arXiv preprint (2020)

  25. Ye, H., Luo, L., Zhou, Z., Zhang, T.: Multi-consensus decentralized accelerated gradient descent. arXiv preprint arXiv:2005.00797 (2020)

  26. Yuan, K., Ling, Q., Yin, W.: On the convergence of decentralized gradient descent. SIAM J. Optim. 26(3), 1835–1854 (2016)

  27. Zinkevich, M., Weimer, M., Smola, A.J., Li, L.: Parallelized stochastic gradient descent. In: NIPS, vol. 4, p. 4. Citeseer (2010)

Author information

Correspondence to Ekaterina Trimbach.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Trimbach, E., Rogozin, A. (2021). An Acceleration of Decentralized SGD Under General Assumptions with Low Stochastic Noise. In: Strekalovsky, A., Kochetov, Y., Gruzdeva, T., Orlov, A. (eds) Mathematical Optimization Theory and Operations Research: Recent Trends. MOTOR 2021. Communications in Computer and Information Science, vol 1476. Springer, Cham. https://doi.org/10.1007/978-3-030-86433-0_8

  • DOI: https://doi.org/10.1007/978-3-030-86433-0_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86432-3

  • Online ISBN: 978-3-030-86433-0

  • eBook Packages: Computer Science, Computer Science (R0)
