Abstract
Federated learning (FL) enables training models on decentralized data while preserving data privacy, which unlocks large and diverse datasets for many practical applications. Ongoing development of aggregation algorithms, distribution architectures and software implementations aims to enable federated setups employing thousands of distributed devices, selected from millions. Since access to such computing infrastructure is a major barrier to experimenting with new approaches, we argue that efficient simulation of FL is necessary and propose the PaSSiFLora library for simulating FL clients in a cluster environment. In PaSSiFLora, the training algorithm is executed on real data, but each cluster node can simulate multiple FL clients. Because uniform random selection of clients results in poor simulation performance due to load imbalance, we propose uniform random selection of MultiClients instead. Each MultiClient runs on a single cluster node and, in each training iteration, is responsible for simulating several clients selected from a set of local clients. Our experimental results on the FEMNIST dataset show that PaSSiFLora is capable of simulating 1536 clients and scales well to 48 cluster nodes, reducing the average iteration time from 330.61 s on a single node to 13.57 s. The MultiClient architecture improves average performance by up to 75% without causing significant differences in model accuracy during training. Additionally, the correctness of the training is verified against existing FL frameworks: LEAF and TensorFlow Federated (TFF).
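The following sketch illustrates the MultiClient selection scheme described above; it is not PaSSiFLora's actual API, and all names (`MultiClient`, `pick_clients`, `select_round_participants`) are hypothetical. It contrasts the key idea: instead of uniformly sampling individual clients across the cluster, the simulator uniformly samples MultiClients, each of which simulates a fixed number of its node-local clients per round, so the per-node load stays balanced.

```python
import random

# Hypothetical sketch of the MultiClient selection scheme; class and
# function names are illustrative, not PaSSiFLora's real interface.
class MultiClient:
    def __init__(self, node_id, local_clients):
        self.node_id = node_id              # the single cluster node hosting this MultiClient
        self.local_clients = local_clients  # FL clients whose data reside on this node

    def pick_clients(self, k):
        # In each training iteration, simulate k clients drawn from the local set.
        return random.sample(self.local_clients, k)

def select_round_participants(multiclients, m, k):
    """Uniformly sample m MultiClients, each contributing k local clients.

    Compared with uniformly sampling m*k individual clients (which can pile
    many simulated clients onto one node), this bounds the load at exactly
    k simulated clients per selected node.
    """
    chosen = random.sample(multiclients, m)
    return {mc.node_id: mc.pick_clients(k) for mc in chosen}

# Example: 48 nodes x 32 local clients = 1536 simulated clients in total.
multiclients = [
    MultiClient(node, [f"client_{node}_{i}" for i in range(32)])
    for node in range(48)
]
participants = select_round_participants(multiclients, m=8, k=4)
```

Under these assumptions, a round touches exactly m nodes with k simulated clients each, which is what removes the load imbalance of plain uniform client sampling.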
References
Bonawitz, K., et al.: Towards federated learning at scale: system design. arXiv:1902.01046 [cs, stat] (February 2019). http://arxiv.org/abs/1902.01046
Caldas, S., et al.: LEAF: a benchmark for federated settings. arXiv:1812.01097 [cs, stat] (December 2019). http://arxiv.org/abs/1812.01097
Czarnul, P., et al.: MERPSYS: an environment for simulation of parallel application execution on large scale HPC systems. Simul. Model. Pract. Theory 77(C), 124–140 (2017). https://doi.org/10.1016/j.simpat.2017.05.009
Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265–284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14
Hartmann, F., Suh, S., Komarzewski, A., Smith, T.D., Segall, I.: Federated learning for ranking browser history suggestions. arXiv:1911.11807 [cs, stat] (November 2019)
Hestness, J., et al.: Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409 (2017)
Jayaraman, B., Wang, L., Evans, D., Gu, Q.: Distributed learning without distress: privacy-preserving empirical risk minimization. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS 2018, Montréal, Canada, pp. 6346–6357. Curran Associates Inc., Red Hook (2018)
Konecný, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: strategies for improving communication efficiency. arXiv:1610.05492 [cs] (2016). http://arxiv.org/abs/1610.05492
Li, L., Fan, Y., Tse, M., Lin, K.Y.: A review of applications in federated learning. Comput. Ind. Eng. 149, 106854 (2020). https://doi.org/10.1016/j.cie.2020.106854. http://www.sciencedirect.com/science/article/pii/S0360835220305532
McMahan, H.B., Moore, E., Ramage, D., Hampson, S.: Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629 (2016)
Mugunthan, V., Peraire-Bueno, A., Kagal, L.: PrivacyFL: a simulator for privacy-preserving and secure federated learning. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 3085–3092. Association for Computing Machinery, New York (October 2020). https://doi.org/10.1145/3340531.3412771
Sheller, M.J., et al.: Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 10(1), 12598 (2020). https://doi.org/10.1038/s41598-020-69250-1. https://www.nature.com/articles/s41598-020-69250-1
Yang, T., et al.: Applied federated learning: improving Google keyboard query suggestions. arXiv:1812.02903 [cs, stat] (December 2018)