Federated learning to comply with data protection regulations

  • Brief Report
  • Published:
CSI Transactions on ICT

Abstract

AI systems are adept at using large quantities of data, sometimes sensitive personal data, and can adversely affect individuals’ privacy. Data privacy concerns significantly shape the course of next-generation AI. Users are reluctant to entrust their data to others and need privacy-preserving intelligent systems. In addition, several regulations mandate that organizations handle users’ data in ways that do not compromise privacy and that give users control over their data. Federated learning (FL) has emerged as a privacy-preserving technology for data-intensive machine learning by training models on-site or on-device. However, several concerns about federated learning remain, due to: (i) the dynamic, distributed, heterogeneous, and collaborative nature of client devices; (ii) membership inference and model inversion attacks that affect the overall privacy and security of FL systems; (iii) the need for strict compliance with data privacy and protection laws; (iv) vulnerabilities at local client devices leading to data leakage; and (v) the diversity and ubiquity of smart devices collecting real-time multimodal data, which hinders standardization of security and privacy management frameworks. In this paper, we discuss (a) how federated learning can help preserve privacy, (b) the need to improve security and privacy in federated learning systems, and (c) privacy regulations and their application to federated learning in various business domains; we also (d) propose a federated recommender system and demonstrate performance that matches the centralized setting.
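The on-device training scheme described in the abstract is commonly realized with federated averaging (FedAvg): each client trains locally on data that never leaves the device, and a server aggregates only the resulting model weights. The sketch below is a minimal illustration on a toy linear-regression task; the function names, hyperparameters, and synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's on-device step: plain gradient descent on MSE.
    Only the updated weights leave the device, never the raw data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data, rounds=30):
    """Server loop: each round, every client trains locally and the
    server averages the returned weights, weighted by local data size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# Synthetic demo: three clients hold disjoint shards of noiseless data
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = federated_averaging(np.zeros(2), clients)  # converges toward true_w
```

On this convex toy problem the federated model recovers the same solution a centralized fit would, mirroring the paper's claim that federated performance can match the central setting; real deployments add secure aggregation or differential privacy on top of this loop.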




Author information

Correspondence to Srinivasa Rao Chalamala.


Cite this article

Chalamala, S.R., Kummari, N.K., Singh, A.K. et al. Federated learning to comply with data protection regulations. CSIT 10, 47–60 (2022). https://doi.org/10.1007/s40012-022-00351-0
