Privacy-Preserving Federated Compressed Learning Against Data Reconstruction Attacks Based on Secure Data

  • Conference paper
  • First Online:
Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1969))


Abstract

Federated learning is a new distributed, privacy-preserving learning framework in which multiple users collaboratively train models without sharing their data. However, recent studies show that the shared gradient information can still leak private data. Several defense strategies, including encrypting and perturbing gradient information, have been proposed, but they either incur high complexity or remain vulnerable to attacks. To counter these challenges, we propose training on secure compressive measurements via compressed learning, thereby protecting local data privacy with only slight performance degradation. A feasible way to boost performance in compressed learning is to jointly optimize the sampling matrix and the inference network during the training phase, but this again exposes the system to data reconstruction attacks. We therefore further incorporate a traditional lightweight encryption scheme to protect data privacy. Experiments on the MNIST and FMNIST datasets show that our schemes achieve a satisfactory balance between privacy protection and model performance.
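The core idea of compressed learning described above can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's exact pipeline: a client flattens a 28×28 MNIST image x and computes y = Φx with a random Gaussian sampling matrix Φ at measurement rate m/n, so that only the low-dimensional measurement y (rather than x itself) is used for training. The variable names and the choice of a Gaussian Φ are hypothetical conveniences.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 28 * 28          # ambient dimension (one flattened MNIST image)
rate = 0.25          # measurement rate m/n
m = int(rate * n)    # number of compressive measurements (196)

# Random Gaussian sampling matrix, scaled as is common in compressed sensing.
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

x = rng.random(n)    # stand-in for a normalized image
y = Phi @ x          # compressive measurement: only this leaves the client

print(y.shape)       # (196,)
```

In the jointly optimized variant the paper mentions, Φ would be a trainable parameter updated alongside the inference network instead of a fixed random matrix, which is why an additional lightweight encryption step is needed to keep the learned Φ from enabling reconstruction.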



Acknowledgements

The work was supported by the National Key R &D Program of China under Grant 2020YFB1805400, the National Natural Science Foundation of China under Grant 62072063 and the Project Supported by Graduate Student Research and Innovation Foundation of Chongqing, China under Grant CYB22063.

Author information

Corresponding author

Correspondence to Di Xiao.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Xiao, D., Li, J., Li, M. (2024). Privacy-Preserving Federated Compressed Learning Against Data Reconstruction Attacks Based on Secure Data. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1969. Springer, Singapore. https://doi.org/10.1007/978-981-99-8184-7_25

  • DOI: https://doi.org/10.1007/978-981-99-8184-7_25
  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8183-0

  • Online ISBN: 978-981-99-8184-7

  • eBook Packages: Computer Science (R0)
