
Gradient sparsification for efficient wireless federated learning with differential privacy

  • Research Paper
  • Published in: Science China Information Sciences

Abstract

Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing their raw data. However, FL suffers from the leakage of private information through uploaded models. Moreover, as the model size grows, training latency increases owing to limited transmission bandwidth, and model performance degrades under differential privacy (DP) protection. In this paper, we propose a gradient-sparsification-empowered FL framework with DP over wireless channels that improves training efficiency without sacrificing convergence performance. Specifically, we first design a random sparsification algorithm that retains only a fraction of the gradient elements in each client's local model, thereby mitigating the performance degradation induced by DP and reducing the number of parameters transmitted over wireless channels. We then analyze the convergence bound of the proposed algorithm by modeling a non-convex FL problem. Next, we formulate a time-sequential stochastic optimization problem that minimizes the derived convergence bound under constraints on transmit power, average transmission delay, and each client's DP requirement. Using the Lyapunov drift-plus-penalty framework, we develop an analytical solution to this optimization problem. Extensive experiments on three real-life datasets demonstrate the effectiveness of the proposed algorithm. We show that our algorithms fully exploit the interplay between communication and computation to outperform the baselines, i.e., random scheduling, round-robin, and delay-minimization algorithms.
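The core local step described above — randomly retaining a fraction of each client's gradient elements before applying DP noise — can be illustrated with a minimal sketch. This is not the paper's exact algorithm (the function name, clipping rule, and noise calibration here are illustrative assumptions); it only shows the generic pattern of random sparsification combined with a clipped Gaussian mechanism:

```python
import numpy as np

def sparsify_and_privatize(grad, keep_frac=0.1, clip=1.0, sigma=1.0, rng=None):
    """Illustrative sketch: randomly keep a fraction of gradient elements,
    clip the sparse gradient to bound sensitivity, then add Gaussian noise
    on the retained coordinates only."""
    if rng is None:
        rng = np.random.default_rng()
    # Random sparsification mask: each element kept with probability keep_frac
    mask = rng.random(grad.shape) < keep_frac
    sparse = grad * mask
    # Clip the sparse gradient so the Gaussian mechanism's sensitivity is bounded
    norm = np.linalg.norm(sparse)
    clipped = sparse * min(1.0, clip / (norm + 1e-12))
    # Add noise only where elements were retained, so sparsity is preserved
    noise = rng.normal(0.0, sigma * clip, grad.shape)
    return clipped + mask * noise
```

Because noise is added only on the retained coordinates, the uploaded vector stays sparse, which is what reduces both the number of transmitted parameters and the total injected noise relative to perturbing the full gradient.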



Acknowledgements

This work was supported in part by National Natural Science Foundation of China (Grant Nos. 62071296, 62002170, 62071234, U22A2002), National Key Research and Development Program of China (Grant No. 2020YFB1807700), Fundamental Research Funds for the Central Universities (Grant No. 30921013104), Key Technologies R&D Program of Jiangsu (Prospective and Key Technologies for Industry) (Grant Nos. BE2023022, BE2023022-2), Future Network Grant of Provincial Education Board in Jiangsu, Major Science and Technology Plan of Hainan Province (Grant No. ZDKJ2021022), Scientific Research Fund Project of Hainan University (Grant No. KYQD(ZR)-21008), Youth Foundation Project of Zhejiang Lab (Grant No. K2023PD0AA01), Collaborative Innovation Center of Information Technology, Hainan University (Grant No. XTCX2022XXC07), and Sciences and Technology Commission of Shanghai Municipality (Grant Nos. 22JC1404000, 20JC1416502, PKX2021-D02).

Author information

Corresponding author: Jun Li.

Additional information

Supporting information Appendixes A–D. The supporting information is available online at info.scichina.com and link.springer.com. The supporting materials are published as submitted, without typesetting or editing. The responsibility for scientific accuracy and content remains entirely with the authors.



About this article


Cite this article

Wei, K., Li, J., Ma, C. et al. Gradient sparsification for efficient wireless federated learning with differential privacy. Sci. China Inf. Sci. 67, 142303 (2024). https://doi.org/10.1007/s11432-023-3918-9
