Abstract
Federated Learning (FL) is a promising paradigm in which users collaboratively learn a model by repeatedly exchanging updates while the raw data remains distributed across the users. FL over multiple access channels (FL-MAC) has recently attracted considerable attention. Despite its many advantages, FL-MAC may still leak private information to a third party during the training process. To prevent such leakage, we propose to incorporate Rényi differential privacy (RDP) into FL-MAC. At the same time, to maximize the convergence rate of the users under transmission-rate and privacy constraints, the users perform quantized stochastic gradient descent (QSGD). We evaluate our scheme on MNIST, and the results demonstrate that it improves model accuracy with only a small loss of communication efficiency.
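To make the scheme concrete, the sketch below shows one round of the two mechanisms the abstract names, per-user Gaussian noise for RDP and QSGD-style stochastic quantization, in plain NumPy. It is an illustrative toy rather than the paper's implementation: the clipping bound, noise multiplier sigma, quantization level count s, and user count K are all assumed values chosen for the example.

```python
import numpy as np

def qsgd_quantize(v, s, rng):
    """QSGD-style stochastic quantization with s levels per coordinate.

    Encodes v via (||v||_2, signs, integer levels); the random rounding
    makes the quantizer unbiased: E[Q(v)] = v.
    """
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * s            # each entry lies in [0, s]
    lower = np.floor(scaled)                 # nearest quantization level below
    round_up = rng.random(v.shape) < (scaled - lower)
    return np.sign(v) * norm * (lower + round_up) / s

def privatize(grad, clip, sigma, rng):
    """Clip the gradient to L2-norm `clip`, then add Gaussian noise.

    With noise std sigma*clip, the Gaussian mechanism satisfies
    (alpha, alpha / (2 * sigma**2))-RDP for every order alpha > 1.
    """
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

# One simulated round: K users privatize and quantize local gradients,
# and the server averages the received updates.
rng = np.random.default_rng(0)
K, d, s = 10, 100, 16
local_grads = [rng.normal(size=d) for _ in range(K)]
updates = [qsgd_quantize(privatize(g, clip=1.0, sigma=1.2, rng=rng), s, rng)
           for g in local_grads]
print("aggregate norm:", np.linalg.norm(np.mean(updates, axis=0)))
```

In the FL-MAC setting the quantized updates would additionally be entropy-coded to respect the channel's rate constraint; the sketch stops at unbiased aggregation.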
Acknowledgements
This work is supported by the National Natural Science Foundation of China (NSFC) under grant numbers 11761073 and 61902082, the National Key R&D Program of China under grant 2018YFB2100400, and the Hangzhou Innovation and Entrepreneurship Leader Team Funding Program under grant 201920110039.
Ethics declarations
Conflict of Interests
All authors declare no conflict of interest.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: Proof of Theorem 1
Proof
Since \(\bar l\) is λ-strongly convex, for all \(\omega , \omega ^{\prime } \in {R^{d}}\) and any subgradient \(g\) of \(\bar l\) at \(\omega\) we have
$$\bar l(\omega^{\prime}) \geq \bar l(\omega) + g^{T}(\omega^{\prime} - \omega) + \frac{\lambda}{2}\left\| \omega^{\prime} - \omega \right\|^{2}.$$
Denote by \(\omega^{*}\) the minimizer of \(\bar l\). Taking \(\omega = \omega_{b}\) and \(\omega^{\prime} = \omega^{*}\) and rearranging gives
$$g_{b}^{T}(\omega_{b} - \omega^{*}) \geq \bar l(\omega_{b}) - \bar l(\omega^{*}) + \frac{\lambda}{2}\left\| \omega_{b} - \omega^{*} \right\|^{2},$$
where \(g_{b}\) is a subgradient of \(\bar l\) at \(\omega_{b}\).
Meanwhile, \(\bar l\) is μ-smooth about the optimum point \(\omega^{*}\), i.e., for any \(\omega_{b}\) we have
$$\bar l(\omega_{b}) - \bar l(\omega^{*}) \leq \frac{\mu}{2}\left\|\omega_{b} - \omega^{*}\right\|^{2}.$$
Taking expectations on both sides then gives
$$E\left[\bar l(\omega_{b})\right] - \bar l(\omega^{*}) \leq \frac{\mu}{2}\, E\left[\left\|\omega_{b} - \omega^{*}\right\|^{2}\right].$$
The expected mean-square error \(E\left[\left\| \omega _{b} - \omega ^{*} \right \|^{2}\right]\) measures how closely \(\omega_{b}\) approaches its optimum \(\omega^{*}\). Using the update rule \(\omega _{b+1} = \omega _{b} -\eta _{b} \hat {g}_{b}\), we expand
$$\left\|\omega_{b+1} - \omega^{*}\right\|^{2} = \left\|\omega_{b} - \omega^{*}\right\|^{2} - 2\eta_{b}\, \hat{g}_{b}^{T}\left(\omega_{b} - \omega^{*}\right) + \eta_{b}^{2}\left\|\hat{g}_{b}\right\|^{2}. \qquad (23)$$
For the second term, we have
where \(z=\sum \limits _{k=1}^{K} z_{k}\) and \(z \sim N(0, \sigma ^{2} I_{d})\).
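A useful consequence of this aggregation, assuming the per-user noises \(z_{k}\) are independent and identically distributed, is that each user only needs to inject a \(1/K\) fraction of the total variance, since variances of independent Gaussians add:
$$z_{k} \sim N\!\left(0, \tfrac{\sigma^{2}}{K} I_{d}\right),\; k = 1, \dots, K \quad\Longrightarrow\quad z = \sum_{k=1}^{K} z_{k} \sim N\left(0, \sigma^{2} I_{d}\right).$$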
The third term of (23) can be obtained by
The expectation of the above formula is
Then we have
Hence,
By the unbiasedness of the stochastic gradient and its bounded second moment, i.e., \(E\left [ \left \| \hat g_{b}\right \|_{2}^{2} \right ] \leq K^{2}\), together with the learning rate \(\eta _{b} = 1/\lambda\), we have
By (23), we have
Using this bound, we can obtain a bound on the convergence rate via μ-smoothness as follows:
□
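Remark. For orientation, the classical closing step for strongly convex SGD yields a bound of the following shape. This is a minimal sketch under two assumptions that go beyond the text above: a decaying step size \(\eta_{b} = 1/(\lambda b)\), and the quantization and privacy noise contributions being ignored; it is not the exact statement of Theorem 1.
$$E\left[\left\|\omega_{b} - \omega^{*}\right\|^{2}\right] \leq \frac{4K^{2}}{\lambda^{2} b}, \qquad E\left[\bar l(\omega_{b})\right] - \bar l(\omega^{*}) \leq \frac{\mu}{2}\, E\left[\left\|\omega_{b} - \omega^{*}\right\|^{2}\right] \leq \frac{2\mu K^{2}}{\lambda^{2} b}.$$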
About this article
Cite this article
Wu, S., Yu, M., Ahmed, M.A.M. et al. FL-MAC-RDP: Federated Learning over Multiple Access Channels with Rényi Differential Privacy. Int J Theor Phys 60, 2668–2682 (2021). https://doi.org/10.1007/s10773-021-04867-0