Communication-efficient Federated Learning via Quantized Clipped SGD

  • Conference paper
  • First Online:
Wireless Algorithms, Systems, and Applications (WASA 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12937)

Abstract

Communication has been considered a major bottleneck of Federated Learning (FL) in mobile edge networks, since participating workers iteratively transmit gradients to and receive models from the server. Compression techniques such as quantization, which reduce communication overhead, and hyperparameter optimization techniques such as Clipped Stochastic Gradient Descent (Clipped SGD), which accelerate convergence, are two orthogonal approaches to improving the performance of FL. However, their combination has received little study. To fill this gap, we propose Quantized Clipped SGD (QCSGD) to achieve communication-efficient FL. The major challenge of the combination is that gradient quantization fundamentally affects the step-size adjustment policy of Clipped SGD, leaving convergence without a guarantee. We therefore establish the convergence rate of QCSGD through a thorough theoretical analysis and show that QCSGD converges at a rate comparable to that of SGD without compression. Extensive experiments on various machine learning models and datasets show that QCSGD outperforms state-of-the-art methods.
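For concreteness, the sketch below illustrates one plausible single-worker update that combines QSGD-style stochastic quantization with a clipped step size, in the spirit of the method described in the abstract. The function names, the number of quantization levels, and the clipping threshold are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def stochastic_quantize(g, num_levels=256):
    """QSGD-style unbiased stochastic quantization (illustrative sketch).

    Each coordinate of g is mapped to one of num_levels uniformly spaced
    magnitudes in [0, ||g||_2], with randomized rounding so that the
    quantizer is unbiased in expectation.
    """
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return g
    scaled = np.abs(g) / norm * num_levels      # magnitudes scaled to [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                    # round up with this probability
    levels = lower + (np.random.rand(*g.shape) < prob_up)
    return np.sign(g) * norm * levels / num_levels

def clipped_step(w, g_quantized, eta=0.1, gamma=1.0):
    """Clipped SGD step: shrink the step size when the (quantized) gradient is large."""
    g_norm = np.linalg.norm(g_quantized)
    step_size = min(eta, gamma / (g_norm + 1e-12))
    return w - step_size * g_quantized

# Toy usage: a worker quantizes its stochastic gradient before the clipped update
# is applied (in the full FL setting, the quantized gradient would be sent to the server).
w = np.zeros(10)
g = np.random.randn(10)                         # stand-in for a stochastic gradient
w = clipped_step(w, stochastic_quantize(g))
```

The key interaction the paper analyzes is visible here: the step size in `clipped_step` depends on the norm of the *quantized* gradient, so quantization noise directly perturbs the clipping behavior.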


Acknowledgements

This work was supported in part by the National Key R&D Program of China (Grant No. 2018YFB1004704) and the Fundamental Research Funds for the Central Universities (Grant Nos. B200202176 and B210202079).

Author information

Corresponding author

Correspondence to Zhihao Qu.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Jia, N., Qu, Z., Ye, B. (2021). Communication-efficient Federated Learning via Quantized Clipped SGD. In: Liu, Z., Wu, F., Das, S.K. (eds) Wireless Algorithms, Systems, and Applications. WASA 2021. Lecture Notes in Computer Science, vol 12937. Springer, Cham. https://doi.org/10.1007/978-3-030-85928-2_44

  • DOI: https://doi.org/10.1007/978-3-030-85928-2_44

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85927-5

  • Online ISBN: 978-3-030-85928-2

  • eBook Packages: Computer Science, Computer Science (R0)
