Abstract
In a wireless Federated Learning (FL) system, clients train local models over local datasets on IoT devices. The derived local models are uploaded to the FL server, which aggregates them into a global model and broadcasts it back to the clients for further training. Because clients are heterogeneous, client selection plays an important role in determining the overall training time. Traditionally, the maximum number of clients is selected, as long as each can derive and upload its local model before the deadline of each global iteration. However, selecting more clients not only increases the clients' energy consumption but may also be unnecessary: having fewer clients in early global iterations and more clients in later iterations has been shown to improve model accuracy. To address this issue, this paper proposes a client selection scheme that dynamically adjusts and optimizes the trade-off between maximizing the number of selected clients and minimizing the total communication cost between the clients and the server. By comparing the data diversity of clients, the scheme selects the clients most suitable for global convergence. A Diversity Scaling node selection framework (FedDS) is implemented that dynamically adjusts each node's selection weight according to its degree of non-i.i.d. data diversity. Results show that the proposed FedDS speeds up FL convergence compared to FedAvg with random node selection.
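The abstract's core idea (weighting each client's selection probability by a measure of its non-i.i.d. data diversity) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the normalized label-entropy score, the `select_clients` sampler, and all names here are hypothetical stand-ins for whatever diversity measure and scaling FedDS actually uses.

```python
import math
import random

def diversity_score(label_counts):
    """Normalized entropy of a client's label distribution, used here as a
    stand-in proxy for non-i.i.d. data diversity (hypothetical choice; the
    paper's exact diversity measure is not reproduced)."""
    total = sum(label_counts)
    probs = [c / total for c in label_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(label_counts))
    return entropy / max_entropy if max_entropy > 0 else 0.0

def select_clients(clients, k, seed=0):
    """Sample k client ids without replacement, with probability scaled by
    each client's diversity score (plus a small floor so every client
    remains selectable)."""
    rng = random.Random(seed)
    pool = {cid: diversity_score(counts) + 1e-6
            for cid, counts in clients.items()}
    chosen = []
    for _ in range(min(k, len(pool))):
        ids = list(pool)
        weights = [pool[i] for i in ids]
        pick = rng.choices(ids, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # without replacement
    return chosen

# Example: client "a" holds a single class (low diversity),
# "b" holds a balanced mix (high diversity).
clients = {"a": [100, 0, 0], "b": [30, 40, 30], "c": [10, 80, 10]}
print(select_clients(clients, 2))
```

Under this sketch, clients with more balanced (more diverse) local label distributions are drawn more often, which matches the abstract's claim that diversity-aware selection favors clients most useful for global convergence.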
Copyright information
© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Ren, Y., Sajjanhar, A., Gao, S., Loke, S. (2023). Client Selection Based on Diversity Scaling for Federated Learning on Non-IID Data. In: Wang, W., Wu, J. (eds) Broadband Communications, Networks, and Systems. BROADNETS 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 511. Springer, Cham. https://doi.org/10.1007/978-3-031-40467-2_8
Print ISBN: 978-3-031-40466-5
Online ISBN: 978-3-031-40467-2