Mitigation of a poisoning attack in federated learning by using historical distance detection

Abstract

Federated learning enables joint model training while each party's data remains stored locally, thereby protecting the data privacy of all participants in the cooperative training. However, federated learning also faces availability and integrity threats, as malicious parties may pose as benign ones to interfere with the global model. In this paper, we consider a federated learning scenario with one central server and multiple clients, in which malicious clients launch poisoning attacks. We explore the statistical relationship of the Euclidean distances between models, comparing benign-to-benign pairs with malicious-to-benign pairs. Based on these findings, and inspired by evolutionary clustering, we design a defense method that the central server uses to screen for possibly malicious clients and mitigate their attacks. Our mitigation scheme draws on the detection results of both the current round and previous rounds. Moreover, we extend the scheme so that it also applies to a privacy-threat scenario. Finally, we demonstrate the effectiveness of our scheme through experiments in several different scenarios.
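To make the historical-distance idea concrete, the sketch below shows one plausible way a central server could combine the current round's pairwise Euclidean distances between client updates with an exponentially smoothed history of those scores before filtering clients. It is a minimal illustration under stated assumptions, not the exact algorithm evaluated in the paper: the class name HistoricalDistanceDetector, the mean-distance scoring rule, and the parameters history_weight and keep_fraction are all hypothetical choices made for exposition.

```python
import numpy as np


def flatten(update):
    """Concatenate one client's parameter arrays into a single vector."""
    return np.concatenate([np.asarray(p).ravel() for p in update])


def pairwise_distances(updates):
    """Euclidean distance matrix between flattened client updates."""
    vecs = np.stack([flatten(u) for u in updates])
    diff = vecs[:, None, :] - vecs[None, :, :]
    return np.linalg.norm(diff, axis=-1)


class HistoricalDistanceDetector:
    """Illustrative server-side filter (hypothetical): blends each client's
    current-round distance score with an exponentially smoothed history of
    past scores, in the spirit of evolutionary clustering, and keeps the
    clients whose smoothed score stays closest to the benign majority."""

    def __init__(self, num_clients, history_weight=0.5):
        self.scores = np.zeros(num_clients)  # smoothed per-client score
        self.alpha = history_weight          # weight given to past rounds

    def update_scores(self, updates):
        dists = pairwise_distances(updates)
        # Current-round score: mean Euclidean distance to the other clients.
        current = dists.sum(axis=1) / (len(updates) - 1)
        # Temporal smoothing: history and the current round both contribute.
        self.scores = self.alpha * self.scores + (1 - self.alpha) * current
        return self.scores

    def select_benign(self, updates, keep_fraction=0.8):
        scores = self.update_scores(updates)
        k = max(1, int(keep_fraction * len(updates)))
        # Smaller smoothed distance -> more consistent with the majority.
        return np.argsort(scores)[:k]
```

In such a setup, each round the server would call select_benign on the list of client updates (one list of parameter arrays per client) and aggregate only the returned indices; the history carried inside the detector is what would let a client that behaved suspiciously in earlier rounds remain flagged even if its current update looks unremarkable.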


Notes

  1. This paper is an extended version of a previous paper that was selected for fast-track publication by the Cyber Security in Networking Conference (CSNet).


Funding

This work is supported by the Fundamental Research Funds for the Central Universities (UESTC) (Grant No. ZYGX2020ZB025) and the Sichuan Science and Technology Program, China (Grant No. 2021YFG0157).

Author information

Corresponding author

Correspondence to Xuyang Ding.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shi, Z., Ding, X., Li, F. et al. Mitigation of a poisoning attack in federated learning by using historical distance detection. Ann. Telecommun. 78, 135–147 (2023). https://doi.org/10.1007/s12243-022-00929-4
