
Local Training and Scalability of Federated Learning Systems

Chapter in: Federated Learning

Abstract

In this chapter, we delve deeper into the systems aspects of Federated Learning, focusing on its two main components: the participating devices (parties) and the aggregator. First, we discuss the party side, examining the factors that affect local training, such as computational resources, memory, and network bandwidth. We outline the challenges that each of these factors poses and introduce state-of-the-art papers that address them. We then turn to building large-scale Federated Learning aggregation systems, surveying the aggregation schemes in the current literature that aim to reduce scalability bottlenecks. For each scheme, we discuss its advantages and disadvantages, suggest the scenarios to which it is best suited, and list state-of-the-art works that employ it.
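To make the setting concrete, below is a minimal sketch of the baseline interaction between the two components the chapter examines: parties train locally, and the aggregator combines their models by data-size-weighted averaging (FedAvg-style). This is an illustration under stated assumptions, not the chapter's implementation; the function name aggregate and the use of NumPy arrays for flattened model weights are hypothetical choices for the example.

import numpy as np

def aggregate(party_updates, party_sizes):
    """FedAvg-style aggregation (illustrative): weight each party's
    flattened model vector by its share of the total training data."""
    total = sum(party_sizes)
    # sum() starts at 0, which broadcasts correctly with NumPy arrays.
    return sum(u * (n / total) for u, n in zip(party_updates, party_sizes))

# Hypothetical round with three parties holding unequal amounts of data.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 50, 50]                     # local dataset sizes double as weights
global_model = aggregate(updates, sizes)  # -> array([2.5, 3.5])

The aggregation schemes surveyed in the chapter (e.g., hierarchical, tiered, or asynchronous aggregation) replace this single synchronous weighted average with structures that scale better as the number of parties grows.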


Author information

Correspondence to Syed Zawad.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Zawad, S., Yan, F., Anwar, A. (2022). Local Training and Scalability of Federated Learning Systems. In: Ludwig, H., Baracaldo, N. (eds) Federated Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-96896-0_10

  • DOI: https://doi.org/10.1007/978-3-030-96896-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96895-3

  • Online ISBN: 978-3-030-96896-0

  • eBook Packages: Computer Science, Computer Science (R0)
