
FL-MAC-RDP: Federated Learning over Multiple Access Channels with Rényi Differential Privacy


Abstract

Federated Learning (FL) is a promising paradigm in which local users collaboratively learn models by repeatedly sharing information while the data remains distributed across these users. FL over multiple access channels (FL-MAC) has recently attracted considerable attention. Despite its many advantages, FL-MAC may still leak private information to a third party during the training process. To avoid such leakage, we propose to add Rényi differential privacy (RDP) to FL-MAC. At the same time, to maximize the convergence rate of the users under the constraints of transmission rate and privacy, the users perform quantized stochastic gradient descent (QSGD). We also evaluate our scheme on MNIST, and the results demonstrate that it improves model accuracy with only a small loss of communication efficiency.
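To make the transmission scheme concrete, here is a minimal sketch (under our own simplifying assumptions; it is not the paper's exact protocol) of what one user could send per round: the local gradient is norm-clipped, quantized with an unbiased QSGD quantizer, and perturbed with Gaussian noise whose scale sigma would be calibrated through the RDP accountant. The function names and parameters (clip, sigma, s) are illustrative.

```python
import numpy as np

def qsgd_quantize(g, s=16, rng=None):
    """Unbiased QSGD quantizer: rounds each coordinate of g to one of
    s uniform levels of |g_i| / ||g||, using randomized rounding."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return g
    level = np.abs(g) / norm * s                   # real-valued level in [0, s]
    low = np.floor(level)
    q = low + (rng.random(g.shape) < level - low)  # E[q] = level, so E[output] = g
    return np.sign(g) * norm * q / s

def private_client_update(g, clip=1.0, sigma=0.5, s=16, rng=None):
    """One user's per-round transmission: clip, QSGD-quantize, then add
    Gaussian noise (sigma would be set by the RDP privacy analysis)."""
    rng = rng or np.random.default_rng()
    g = g / max(1.0, np.linalg.norm(g) / clip)     # L2 norm clipping
    return qsgd_quantize(g, s=s, rng=rng) + rng.normal(0.0, sigma, size=g.shape)

# Example: one user quantizes and privatizes a random gradient.
print(private_client_update(np.random.default_rng(0).normal(size=8)))
```

The randomized rounding keeps the quantizer unbiased, which is the property the convergence analysis in Appendix A relies on.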




Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11761073 and 61902082, the National Key R&D Program of China under Grant No. 2018YFB2100400, and the Hangzhou Innovation and Entrepreneurship Leader Team Funding Program under Grant No. 201920110039.

Author information


Corresponding author

Correspondence to Yuanhong Tao.

Ethics declarations

Conflict of Interest

All authors declare no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Proof of Theorem 1

Proof

Since \(\bar l\) is λ-strongly convex, for all \(\omega , \omega ^{\prime } \in {R^{d}}\) and any subgradient g of \(\bar l\) at ω we have

$$ \bar l(\omega^{\prime}) - \bar l(\omega) \geq \langle g, \omega^{\prime} - \omega \rangle + \frac{\lambda}{2} \left\| \omega^{\prime} - \omega \right\|^{2} . $$
(18)

Let \(\omega^{*}\) denote the minimizer of \(\bar l\). Applying (18) with \(\omega = \omega_{b}\), \(\omega^{\prime} = \omega^{*}\), and \(g = {g_{b}^{k}} \in \partial \bar l(\omega_{b})\) gives

$$ \begin{array}{@{}rcl@{}} \langle {g_{b}^{k}}, \omega_{b} - \omega^{*} \rangle & = &-\langle {g_{b}^{k}}, \omega^{*} - \omega_{b} \rangle \\ &\geq &\bar l(\omega_{b}) - \bar l(\omega^{*}) +\frac{\lambda}{2} \left\| \omega_{b} - \omega^{*} \right\|^{2} \\ &\geq& 0 . \end{array} $$
(19)

Moreover, applying (18) at the minimizer \(\omega^{*}\) with subgradient \(g^{*}\) (which can be taken to be 0 at the optimum), we obtain

$$ \begin{array}{@{}rcl@{}} \bar l(\omega_{b}) - \bar l(\omega^{*}) & \geq& \langle g^{*}, \omega_{b}-\omega^{*} \rangle +\frac{\lambda}{2} \left\| \omega_{b} - \omega^{*} \right\|^{2} \\ & \geq& \frac{\lambda}{2} \left\| \omega_{b} - \omega^{*} \right\|^{2} . \end{array} $$
(20)
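As a quick numerical sanity check on (19) and (20) (our own verification, not part of the original proof), the sketch below evaluates both inequalities for a λ-strongly convex quadratic, where the gradient and the minimizer \(\omega^{*} = 0\) are available in closed form:

```python
import numpy as np

# Check (19)-(20) for l(w) = 0.5 * w^T A w with A >= lam * I:
# l is lam-strongly convex, its gradient is A w, and w* = 0.
rng = np.random.default_rng(0)
d, lam = 5, 0.7
M = rng.normal(size=(d, d))
A = M @ M.T + lam * np.eye(d)              # all eigenvalues >= lam

l = lambda w: 0.5 * w @ A @ w
w_star = np.zeros(d)

for _ in range(1000):
    w_b = rng.normal(size=d)
    gap = l(w_b) - l(w_star)
    quad = 0.5 * lam * np.sum((w_b - w_star) ** 2)
    # (19): <g_b, w_b - w*> >= l(w_b) - l(w*) + (lam/2)||w_b - w*||^2 >= 0
    assert (A @ w_b) @ (w_b - w_star) >= gap + quad >= 0
    # (20): with g* = 0 at the minimizer, the gap dominates (lam/2)||.||^2
    assert gap >= quad
print("inequalities (19) and (20) hold on all samples")
```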

Meanwhile, \(\bar l\) is a μ-smooth function around the optimum point \(\omega^{*}\), i.e., for any \(\omega_{b}\) we have

$$ \bar l(\omega_{b}) - \bar l(\omega^{*}) \leq \frac{\mu}{2} \left\| \omega_{b} - \omega^{*} \right\|^{2} , $$
(21)

Then

$$ E\left[ \bar l(\omega_{b}) - \bar l(\omega^{*}) \right] \leq E\left[ \frac{\mu}{2} \left\| \omega_{b} - \omega^{*} \right\|^{2}\right] , $$
(22)

where bounding the mean squared error \(E\left [\left \| \omega _{b} - \omega ^{*} \right \|^{2}\right ] \) quantifies how closely \(\omega_{b}\) converges to its optimum \(\omega^{*}\). Using the update rule \(\omega _{b+1} = \omega _{b} -\eta _{b} \hat {g}_{b}\) with \(\hat g_{b} = {\sum }_{k=1}^{K} \hat {g_{b}^{k}}\), we expand

$$ \begin{array}{@{}rcl@{}} \left\| \omega_{b+1} - \omega^{*} \right\|^{2} &=& \left\| \omega_{b} - \omega^{*} - \eta_{b} \sum\limits_{k=1}^{K} \hat {g_{b}^{k}} \right\|^{2} \\ & = &\left\| \omega_{b} - \omega^{*} \right\|^{2} + {\eta_{b}^{2}} \left\| \sum\limits_{k=1}^{K} \hat {g_{b}^{k}} \right\|^{2} - 2\eta_{b} \left\langle \omega_{b} - \omega^{*}, \sum\limits_{k=1}^{K} \hat {g_{b}^{k}} \right\rangle . \end{array} $$
(23)

For the second term, we have

$$ \begin{array}{@{}rcl@{}} {\eta_{b}^{2}} \left\| \sum\limits_{k=1}^{K} \hat {g_{b}^{k}} \right\|^{2} & = &{\eta_{b}^{2}} \left\| \sum\limits_{k=1}^{K} (\bar {g_{b}^{k}} + z_{k})\right\|^{2} \\ & =& {\eta_{b}^{2}} \left\| \sum\limits_{k=1}^{K} \bar {g_{b}^{k}} + z\right\|^{2}, \end{array} $$
(24)

where \(z=\sum \limits _{k=1}^{K} z_{k}\), and \(z \thicksim N(0, \sigma ^{2} I_{d})\). Thus,

$$ \begin{array}{@{}rcl@{}} E\left[ {\eta_{b}^{2}} \left\| \sum\limits_{k=1}^{K} (\bar {g_{b}^{k}} + z_{k})\right\|^{2} \right] &=& E\left[ {\eta_{b}^{2}} \left\| \sum\limits_{k=1}^{K} \bar {g_{b}^{k}} \right\|^{2} \right] + 2{\eta_{b}^{2}} E\left[ \left\langle \sum\limits_{k=1}^{K} \bar {g_{b}^{k}}, z \right\rangle \right] + E\left[ {\eta_{b}^{2}} \left\|z\right\|^{2} \right]\\ & \leq& {\eta_{b}^{2}} +d{\eta_{b}^{2}}\sigma^{2} = {\eta_{b}^{2}}(1+d\sigma^{2}), \end{array} $$
(25)
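since z has zero mean and \(E\left[\|z\|^{2}\right] = d\sigma^{2}\). A small Monte Carlo check of these two facts (our own illustration, with arbitrary values of d and σ; we sample the aggregate \(z \sim N(0, \sigma^{2} I_{d})\) directly rather than as a sum over users):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma, trials = 50, 0.3, 100_000
g_bar = rng.normal(size=d)              # stands in for the summed gradients

# The text takes z = sum_k z_k with aggregate law N(0, sigma^2 I_d);
# here we sample that aggregate directly.
z = rng.normal(0.0, sigma, size=(trials, d))

print(np.mean(z @ g_bar))               # ~ 0: the cross term vanishes in expectation
print(np.mean(np.sum(z**2, axis=1)))    # ~ d * sigma^2 = 4.5
```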

The third term of (23) can be rewritten as

$$ -2\eta_{b} \langle \omega_{b} - \omega^{*}, \sum\limits_{k=1}^{K} \hat {g_{b}^{k}} \rangle = -2\eta_{b} \sum\limits_{k=1}^{K} \langle \omega_{b} - \omega^{*}, \hat {g_{b}^{k}} \rangle . $$
(26)

Taking the expectation of each summand and using that \(z_{k}\) has zero mean, we get

$$ \begin{array}{@{}rcl@{}} && E\left[\langle \omega_{b} - \omega^{*}, \hat {g_{b}^{k}} \rangle \right]\\ & =&E\left[\langle \omega_{b} - \omega^{*}, \bar {g_{b}^{k}} \rangle \right] + E\left[ \langle \omega_{b} - \omega^{*}, z_{k} \rangle\right] \\ & =&E \left[ \frac{1}{K} \langle \omega_{b} - \omega^{*}, \bar {g_{b}^{k}} \rangle \right] . \end{array} $$
(27)

Then we have

$$ \begin{array}{@{}rcl@{}} && -2\eta_{b} E \left[\langle \omega_{b} - \omega^{*}, \sum\limits_{k=1}^{K} \hat {g_{b}^{k}} \rangle \right] \\ & =& -2\eta_{b} \sum\limits_{k=1}^{K} E\left[\frac{1}{K} \langle \omega_{b} - \omega^{*}, \bar {g_{b}^{k}} \rangle \right] \\ & \leq &-2\eta_{b} \frac{1}{K} E\left[ \bar l(\omega_{b}) - \bar l(\omega^{*}) + \frac{\lambda}{2} \left\| \omega_{b} - \omega^{*}\right\|^{2}\right] \\ & \leq &-2\eta_{b} \frac{\lambda}{K} E\left[ \left\| \omega_{b} - \omega^{*} \right\|^{2} \right] . \end{array} $$
(28)

Hence,

$$ \begin{array}{@{}rcl@{}} && E\left[\left\| \omega_{b+1} - \omega^{*} \right\|^{2} \right] \\ & \leq& E\left[\left\| \omega_{b} - \omega^{*} \right\|^{2} \right] + {\eta_{b}^{2}}(1+d\sigma^{2}) - 2 \eta_{b} \frac{\lambda}{K} E\left[ \left\| \omega_{b} - \omega^{*} \right\|^{2}\right] \\ & =& \left( 1-\frac{2\eta_{b}\lambda}{K}\right) E\left[ \left\| \omega_{b} - \omega^{*} \right\|^{2} \right] + {\eta_{b}^{2}}(1 + d \sigma^{2}) . \end{array} $$
(29)

By the unbiasedness of the stochastic gradients and the bounded second moment \(E\left [ \left \| \hat g_{b}\right \|_{2}^{2} \right ] \leq K^{2}\), and choosing the learning rate \(\eta_{b} = K/(\lambda b)\), we have

$$ \begin{array}{@{}rcl@{}} && E\left[\left\| \omega_{b+1} - \omega^{*} \right\|^{2} \right] \\ & \leq& \left( 1-\frac{2}{b}\right)E\left[ \left\| \omega_{b} - \omega^{*} \right\|^{2} \right] + \frac{K^{2}(1+ d\sigma^{2})}{\lambda^{2} b^{2}}. \end{array} $$
(30)

Unrolling the recursion (30) by induction over b, we obtain

$$ E\left[\left\| \omega_{b} - \omega^{*} \right\|^{2} \right] \leq \max\{ 2, 1 + d \sigma^{2}\} \frac{2K^{2}}{\lambda^{2} b}. $$
(31)
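To confirm that iterating the recursion (30) indeed stays below the O(1/b) bound in (31), the following sketch (our own check, with arbitrary constants K, λ, d, σ) runs the recursion numerically:

```python
import numpy as np

K, lam, d, sigma = 5, 2.0, 20, 0.4
C = K**2 * (1 + d * sigma**2) / lam**2          # additive constant in (30)
bound = lambda b: max(2.0, 1 + d * sigma**2) * 2 * K**2 / (lam**2 * b)

a = bound(1)                                    # start at the claimed bound for b = 1
for b in range(1, 10_000):
    a = max(0.0, 1 - 2 / b) * a + C / b**2      # one step of recursion (30)
    assert a <= bound(b + 1)
print("recursion (30) stays below the max{2, 1+d*sigma^2} * 2K^2/(lam^2 b) bound")
```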

Using this bound, we obtain an upper bound on the expected optimality gap from the μ-smoothness of \(\bar l\):

$$ \begin{array}{@{}rcl@{}} E\left[ \bar l(\omega_{b} )- \bar l(\omega^{*})\right] & \leq& \frac{\mu}{2} E\left[ \left\| \omega_{b} - \omega^{*} \right\|^{2}\right] \\ & \leq& \max\{2, 1+d\sigma^{2}\} \frac{\mu K^{2}}{\lambda^{2} b} . \end{array} $$
(32)


About this article


Cite this article

Wu, S., Yu, M., Ahmed, M.A.M. et al. FL-MAC-RDP: Federated Learning over Multiple Access Channels with Rényi Differential Privacy. Int J Theor Phys 60, 2668–2682 (2021). https://doi.org/10.1007/s10773-021-04867-0

