
AAIA: an efficient aggregation scheme against inverting attack for federated learning

  • Regular Contribution
International Journal of Information Security

Abstract

Federated learning has emerged as an attractive paradigm for addressing the data privacy problem: clients train a deep neural network on their local datasets and share only their gradients, so there is no need to upload local data to a central server. However, recent studies show that adversaries can reconstruct training images at high resolution from these gradients, and such a breach of data privacy is possible even in trained deep networks. To protect data privacy, a secure aggregation scheme against inverting attacks is proposed for federated learning. Gradients are encrypted before sharing, so an adversary cannot launch gradient-based attacks. To improve the efficiency of aggregation, a new way of building shared keys is proposed: a client builds shared keys with 2a other clients rather than with every client in the system. In addition, existing gradient inversion attacks are tested, and a new gradient inversion attack is proposed that enables an adversary to reconstruct training data from gradients. Simulation results show that the proposed scheme prevents an honest-but-curious parameter server from reconstructing the training data.
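The efficiency idea highlighted above, building shared keys with only 2a other clients instead of all clients, follows the general pattern of pairwise additive masking, where each pair of key-sharing clients adds opposite masks that cancel when the server sums the submissions. The following is a minimal sketch of that general pattern, not the AAIA protocol itself; the ring neighbor topology, the SHA-256 stand-in for a real key agreement, and the 32-bit quantization modulus are all assumptions made for illustration.

    # Minimal sketch (NOT the paper's AAIA protocol) of pairwise additive
    # masking: each client shares a key with only 2a ring neighbors (a on
    # each side), and the masks cancel when the server sums the submissions.
    import hashlib
    import numpy as np

    N, A, DIM = 8, 2, 4      # clients, neighbors per side ("a"), gradient length
    MOD = 2 ** 32            # gradients assumed quantized to 32-bit integers

    def shared_mask(i, j, dim):
        # Stand-in for a real key agreement: both clients derive the same
        # pseudorandom mask from a (hypothetical) shared key for pair (i, j).
        seed = int.from_bytes(
            hashlib.sha256(f"key-{min(i, j)}-{max(i, j)}".encode()).digest()[:4],
            "big")
        return np.random.default_rng(seed).integers(0, MOD, dim, dtype=np.uint64)

    def mask_gradient(i, grad):
        # Client i adds the pair's mask when it holds the smaller index and
        # subtracts it (mod MOD) otherwise, so each pair's masks cancel.
        out = grad.astype(np.uint64) % MOD
        for off in range(1, A + 1):
            for j in ((i + off) % N, (i - off) % N):
                m = shared_mask(i, j, DIM)
                out = (out + m) % MOD if i < j else (out + (MOD - m)) % MOD
        return out

    grads = [np.full(DIM, i + 1, dtype=np.uint64) for i in range(N)]  # toy data
    aggregate = sum(mask_gradient(i, g) for i, g in enumerate(grads)) % MOD
    assert np.array_equal(aggregate, sum(grads) % MOD)  # masks cancelled exactly

Because each pair's masks sum to zero modulo MOD, the server learns only the aggregate gradient, while any single masked submission looks uniformly random on its own.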
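The gradient inversion attacks mentioned in the abstract belong to the optimization-based family of Zhu et al.'s deep leakage from gradients: the attacker treats the victim's shared gradient as a target and optimizes dummy data until its own gradient matches. The PyTorch sketch below illustrates that family only and is not the specific attack proposed in the paper; the toy linear model, the soft-label trick (which needs a PyTorch version whose CrossEntropyLoss accepts probability targets), and the LBFGS settings are assumptions for the example.

    # Minimal sketch of a DLG-style gradient inversion attack: optimize dummy
    # data so its gradients match the gradients the victim shared in plaintext.
    import torch

    torch.manual_seed(0)
    model = torch.nn.Sequential(
        torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    loss_fn = torch.nn.CrossEntropyLoss()

    # The victim's private sample and the gradient it would share unencrypted.
    x_true, y_true = torch.randn(1, 16), torch.tensor([1])
    true_grads = torch.autograd.grad(
        loss_fn(model(x_true), y_true), model.parameters())

    # Attacker's dummy input and soft label, fitted to the shared gradient.
    x_fake = torch.randn(1, 16, requires_grad=True)
    y_fake = torch.randn(1, 2, requires_grad=True)
    opt = torch.optim.LBFGS([x_fake, y_fake])

    def closure():
        opt.zero_grad()
        fake_grads = torch.autograd.grad(
            loss_fn(model(x_fake), y_fake.softmax(-1)),
            model.parameters(), create_graph=True)
        dist = sum(((f - t) ** 2).sum() for f, t in zip(fake_grads, true_grads))
        dist.backward()
        return dist

    for _ in range(30):
        opt.step(closure)
    print("reconstruction error:", (x_fake - x_true).norm().item())

Encrypting or masking the gradients before sharing, as in the scheme above, denies the attacker the true_grads target this optimization needs.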


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

The work presented in this paper was performed as part of the R&D Program of the Beijing Municipal Education Commission (KM202210005028) and the Major Research Plan of the National Natural Science Foundation of China (92167102).

Author information

Corresponding author

Correspondence to Yuwen Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, Z., Yang, S., Huang, Y. et al. AAIA: an efficient aggregation scheme against inverting attack for federated learning. Int. J. Inf. Secur. 22, 919–930 (2023). https://doi.org/10.1007/s10207-023-00670-6

