Protecting Data Privacy in Federated Learning Combining Differential Privacy and Weak Encryption

  • Conference paper
  • First Online:
Science of Cyber Security (SciSec 2021)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 13005)

Included in the following conference series: Science of Cyber Security (SciSec)

Abstract

As a typical decentralized approach, federated learning prevents privacy leakage of crowdsourced data across a variety of training tasks. Instead of transmitting raw data, federated learning updates the server's model parameters by aggregating sub-models trained on the clients. However, these parameters may be leaked during transmission and used by attackers to reconstruct client data. Existing techniques for protecting the parameters do not sufficiently conceal the information they carry. In this paper, we propose a novel and efficient privacy protection method that perturbs the private information contained in the parameters and keeps them in ciphertext form during transmission. For the perturbation step, differential privacy is applied to the real parameters, minimizing the private information they contain. To further camouflage the parameters, weak encryption keeps them in ciphertext form as they travel from the client to the server. As a result, neither the server nor a man-in-the-middle attacker can directly obtain the real parameter values. Experiments show that our method effectively resists attacks from both malicious clients and a malicious server.
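
To make the two-stage pipeline concrete, here is a minimal illustrative sketch, not the authors' implementation: the client clips its local update and adds Gaussian noise (one common differential privacy mechanism), then masks the serialized parameters with a keyed pseudo-random byte stream as a stand-in for the paper's weak encryption before transmission. The function names, the noise scale, and the XOR-style masking are assumptions made purely for illustration.

    import numpy as np

    def dp_perturb(update, clip_norm=1.0, noise_scale=0.1, rng=None):
        # Clip the update to a bounded L2 norm, then add Gaussian noise
        # (placeholder values, not the paper's settings).
        rng = rng or np.random.default_rng()
        clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
        return clipped + rng.normal(0.0, noise_scale, size=clipped.shape)

    def weak_encrypt(update, key):
        # Mask the serialized parameters with a keyed pseudo-random byte stream,
        # so the values are unreadable in transit without the key.
        raw = np.asarray(update, dtype=np.float64).tobytes()
        stream = np.random.default_rng(key).integers(0, 256, len(raw), dtype=np.uint8)
        return (np.frombuffer(raw, dtype=np.uint8) ^ stream).tobytes()

    def weak_decrypt(blob, key, shape):
        # Reverse the mask; only the perturbed (never the raw) update is recovered.
        stream = np.random.default_rng(key).integers(0, 256, len(blob), dtype=np.uint8)
        raw = (np.frombuffer(blob, dtype=np.uint8) ^ stream).tobytes()
        return np.frombuffer(raw, dtype=np.float64).reshape(shape)

    # Client side: perturb the local sub-model update, then send it masked.
    local_update = np.random.randn(10)   # stand-in for a client's model parameters
    key = 2021                           # hypothetical secret shared with the decrypting party
    ciphertext = weak_encrypt(dp_perturb(local_update), key)

    # Receiving side: decryption yields only the differentially private update.
    recovered = weak_decrypt(ciphertext, key, local_update.shape)

The sketch only mirrors the order of operations described in the abstract (perturb first, then transmit in ciphertext form); the paper's actual noise mechanism, encryption scheme, and key distribution may differ.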

Author information

Corresponding author

Correspondence to Min Li.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, C., Ma, C., Li, M., Gao, N., Zhang, Y., Shen, Z. (2021). Protecting Data Privacy in Federated Learning Combining Differential Privacy and Weak Encryption. In: Lu, W., Sun, K., Yung, M., Liu, F. (eds) Science of Cyber Security. SciSec 2021. Lecture Notes in Computer Science (LNSC), vol 13005. Springer, Cham. https://doi.org/10.1007/978-3-030-89137-4_7

  • DOI: https://doi.org/10.1007/978-3-030-89137-4_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89136-7

  • Online ISBN: 978-3-030-89137-4

  • eBook Packages: Computer Science, Computer Science (R0)
