Abstract
In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which ML models are trained on a sequence of new tasks while retaining knowledge of previous ones. We first introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the training process of CL. Building on this notion, we develop a new DP-preserving algorithm for CL with a data sampling strategy that quantifies the privacy risk of training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal privacy guarantees for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining promising model utility.
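The two ingredients the abstract combines, DP-SGD-style gradient sanitization (Abadi et al.) and the A-GEM projection against an episodic-memory gradient (Chaudhry et al.), can be sketched as a single update step. This is a minimal illustrative sketch, not the paper's algorithm: the function name, clipping bound, and noise multiplier are assumptions chosen for the example.

```python
import numpy as np

def dp_agem_step(per_example_grads, ref_grad,
                 clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-sanitized gradient step with an A-GEM memory constraint.

    per_example_grads: (batch, dim) per-example gradients on the current task.
    ref_grad: (dim,) reference gradient computed on the episodic memory.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Clip each per-example gradient to bound per-record sensitivity
    # (the DP-SGD recipe of Abadi et al., 2016).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Aggregate and add Gaussian noise calibrated to the clipping bound;
    # the moments accountant would track the cumulative privacy loss.
    batch, dim = per_example_grads.shape
    noisy = (clipped.sum(axis=0)
             + rng.normal(0.0, noise_multiplier * clip_norm, size=dim)) / batch

    # A-GEM constraint: if the sanitized gradient conflicts with the
    # memory gradient, project it so the memory loss does not increase.
    dot = noisy @ ref_grad
    if dot < 0:
        noisy = noisy - (dot / (ref_grad @ ref_grad)) * ref_grad
    return noisy
```

Note that in this sketch the projection is applied after noise injection, so it post-processes an already-DP quantity and does not consume extra privacy budget; how the paper itself orders these operations is specified in the full text.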
P. Desai and P. Lai—These two authors contributed equally.
References
Abadi, M., et al.: Deep learning with differential privacy. In: ACM SIGSAC, pp. 308–318 (2016)
Carlini, N., et al.: Extracting training data from large language models. In: USENIX Security Symposium (2021)
Chaudhry, A., Ranzato, M., Rohrbach, M., Elhoseiny, M.: Efficient lifelong learning with A-GEM. In: ICLR (2019)
Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Theory of Cryptography Conference, pp. 265–284 (2006)
Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4), 211–407 (2014)
Farquhar, S., Gal, Y.: Differentially private continual learning. In: Privacy in Machine Learning and AI workshop at ICML (2018)
Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: ACM SIGSAC (2015)
Goodfellow, I., Mirza, M., Xiao, D., Courville, A., Bengio, Y.: An empirical investigation of catastrophic forgetting in gradient-based neural networks. In: ICLR (2014)
Kartal, H., Liu, X., Li, X.: Differential privacy for the vast majority. ACM Trans. Manag. Inf. Syst. (TMIS) 10(2), 1–15 (2019)
Lopez-Paz, D., Ranzato, M.: Gradient episodic memory for continual learning. In: Neural Information Processing Systems (NeurIPS) (2017)
Ostapenko, O., Puscas, M., Klein, T., Jahnichen, P., Nabi, M.: Learning to remember: a synaptic plasticity driven framework for continual learning. In: CVPR (2019)
Phan, H., Thai, M.T., Hu, H., Jin, R., Sun, T., Dou, D.: Scalable differential privacy with certified robustness in adversarial learning. In: ICML (2020)
Phan, N., Thai, M., Devu, M., Jin, R.: Differentially private lifelong learning. In: Privacy in Machine Learning (NeurIPS 2019 Workshop) (2019)
Phan, N., Jin, R., Thai, M.T., Hu, H., Dou, D.: Preserving differential privacy in adversarial learning with provable robustness. CoRR abs/1903.09822 (2019). http://arxiv.org/abs/1903.09822
Phan, N., et al.: Heterogeneous Gaussian mechanism: preserving differential privacy in deep learning with provable robustness. In: IJCAI, pp. 4753–4759 (2019)
Phan, N., Wu, X., Hu, H., Dou, D.: Adaptive Laplace mechanism: differential privacy preservation in deep learning. In: ICDM, pp. 385–394 (2017)
Schwarz, J., et al.: Progress & compress: a scalable framework for continual learning. In: ICML, pp. 4528–4537 (2018)
Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: IEEE Symposium on Security and Privacy (SP), pp. 3–18 (2017)
Zenke, F., Poole, B., Ganguli, S.: Continual learning through synaptic intelligence. In: International Conference on Machine Learning, pp. 3987–3995 (2017)
Acknowledgment
The authors gratefully acknowledge the support from the National Science Foundation grants NSF CNS-1935928/1935923, CNS-1850094, IIS-2041096/2041065.
© 2021 Springer Nature Switzerland AG
Cite this paper
Desai, P., Lai, P., Phan, N., Thai, M.T. (2021). Continual Learning with Differential Privacy. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1517. Springer, Cham. https://doi.org/10.1007/978-3-030-92310-5_39
Print ISBN: 978-3-030-92309-9
Online ISBN: 978-3-030-92310-5