Abstract
Federated learning (FL) aims to derive a "better" global model without direct access to individuals' training data. It is traditionally done by aggregating individual gradients perturbed with differentially private (DP) noise. We study an FL variant as a new point in the privacy-performance space: cryptographic aggregation is performed over local models instead of gradients, and each contributor then locally trains their model using a DP version of Adam on the "feedback" (e.g., fake samples from a generative adversarial network, GAN) derived from the securely aggregated global model. Intuitively, this achieves the best of both worlds: more "expressive" models, rather than just gradients, are processed in the encrypted domain without DP's shortcomings, while heavyweight cryptography is minimized (used only in the first step instead of throughout the entire process). Practically, we showcase this new FL variant over GANs and meta-learning, for securing new data and new tasks.
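The local DP-Adam training mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name, hyperparameters, and the simple per-example clipping with Gaussian noise (in the style of DP-SGD) are assumptions for exposition.

```python
import numpy as np

def dp_adam_step(params, per_example_grads, m, v, t,
                 clip_norm=1.0, noise_mult=1.1,
                 lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One DP-Adam step: clip each example's gradient, average,
    add Gaussian noise calibrated to the clipping norm, then apply
    the standard Adam moment updates to the privatized gradient."""
    # Per-example clipping bounds each example's contribution (sensitivity).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    # Gaussian noise scaled by the noise multiplier and clipped sensitivity.
    sigma = noise_mult * clip_norm / len(per_example_grads)
    g_noisy = g_bar + np.random.normal(0.0, sigma, size=g_bar.shape)
    # Standard Adam update (Kingma & Ba) on the noisy gradient.
    m = beta1 * m + (1 - beta1) * g_noisy
    v = beta2 * v + (1 - beta2) * g_noisy ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

In the scheme described above, such a step would consume gradients computed on the "feedback" samples derived from the securely aggregated global model rather than on raw private data.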
S. S. M. Chow—Supported in part by the General Research Funds (CUHK 14210621 and 14209918), University Grants Committee, Hong Kong. Wei Song is supported by the Fundamental Research Funds for the Central Universities (N2316010).
Notes
1. It can be used to evaluate ciphertext multiplications.
2. Although it has been wrapped up in some privacy libraries, the research literature lacks a self-contained description.
3. It is just an ingredient of GAN and not for any privacy purposes.
4. We omit this straightforward proof due to the page limit.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zheng, Y. et al. (2023). Cryptography-Inspired Federated Learning for Generative Adversarial Networks and Meta Learning. In: Yang, X., et al. Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science(), vol 14177. Springer, Cham. https://doi.org/10.1007/978-3-031-46664-9_27
Print ISBN: 978-3-031-46663-2
Online ISBN: 978-3-031-46664-9