
Continual Learning with Differential Privacy

  • Conference paper

Neural Information Processing (ICONIP 2021)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1517)

Abstract

In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which machine learning models are trained to learn a sequence of new tasks while retaining knowledge of previous tasks. We first introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the training process of CL. Building on this notion, we develop a new DP-preserving algorithm for CL with a data sampling strategy that quantifies the privacy risk of training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal privacy guarantees for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining promising model utility.
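For orientation, the analysis sketched in the abstract builds on the standard (ε, δ)-differential-privacy framework. As a reminder, these are the textbook definitions from the DP literature (Dwork et al.), not the paper's new continual-adjacency notion: a randomized mechanism M is (ε, δ)-DP if, for all adjacent databases D, D′ and all output sets S,

```latex
% (\varepsilon, \delta)-differential privacy:
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta

% Gaussian mechanism: for \varepsilon \in (0, 1), adding Gaussian noise
% \mathcal{N}(0, \sigma^2) to a query f with L2-sensitivity \Delta_2 f suffices when
\sigma \;\ge\; \frac{\Delta_2 f \,\sqrt{2 \ln(1.25/\delta)}}{\varepsilon}
```

The paper's contribution, per the abstract, is to define adjacency across the whole task sequence so that this kind of guarantee can be carried through the CL training process.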

P. Desai and P. Lai contributed equally to this work.


Notes

  1. https://github.com/PhungLai728/DP-CL.
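The repository above contains the authors' implementation. As an illustrative sketch only (not the authors' code), the core idea of combining DP-SGD-style clipped, noised gradients with the A-GEM projection of Chaudhry et al. might look like the following; the function names and the `clip_norm`/`noise_mult` parameters are hypothetical choices for this sketch:

```python
import numpy as np

def clip(g, c):
    """Clip a per-example gradient to L2 norm at most c (the standard DP-SGD step)."""
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return g
    return g * min(1.0, c / norm)

def dp_agem_step(per_example_grads, g_ref, clip_norm=1.0, noise_mult=1.1, seed=None):
    """One noisy update direction combining DP-SGD with the A-GEM projection.

    per_example_grads: per-example gradients on the current task's batch.
    g_ref: reference gradient computed on the episodic memory of past tasks.
    Clip each gradient, average, add calibrated Gaussian noise, then project
    away any component that conflicts with g_ref (the A-GEM constraint).
    """
    rng = np.random.default_rng(seed)
    n = len(per_example_grads)
    g = sum(clip(gi, clip_norm) for gi in per_example_grads) / n
    g = g + rng.normal(0.0, noise_mult * clip_norm / n, size=g.shape)
    dot = float(g @ g_ref)
    if dot < 0.0:  # update would increase loss on past tasks: project it out
        g = g - (dot / float(g_ref @ g_ref)) * g_ref
    return g
```

After the projection, the returned direction has non-negative inner product with `g_ref`, so it does not (to first order) increase the memory loss; the cumulative privacy cost of repeated noisy steps would then be tracked with a moments accountant, which this sketch omits.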

References

  1. Abadi, M., et al.: Deep learning with differential privacy. In: ACM SIGSAC, pp. 308–318 (2016)

  2. Carlini, N., et al.: Extracting training data from large language models. In: USENIX Security Symposium (2021)

  3. Chaudhry, A., Ranzato, M., Rohrbach, M., Elhoseiny, M.: Efficient lifelong learning with A-GEM. In: ICLR (2019)

  4. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Theory of Cryptography Conference, pp. 265–284 (2006)

  5. Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4), 211–407 (2014)

  6. Farquhar, S., Gal, Y.: Differentially private continual learning. In: Privacy in Machine Learning and AI Workshop at ICML (2018)

  7. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: ACM SIGSAC (2015)

  8. Goodfellow, I., Mirza, M., Xiao, D., Courville, A., Bengio, Y.: An empirical investigation of catastrophic forgetting in gradient-based neural networks. In: ICLR (2014)

  9. Kartal, H., Liu, X., Li, X.: Differential privacy for the vast majority. ACM Trans. Manag. Inf. Syst. 10(2), 1–15 (2019)

  10. Lopez-Paz, D., Ranzato, M.: Gradient episodic memory for continual learning. In: NeurIPS (2017)

  11. Ostapenko, O., Puscas, M., Klein, T., Jahnichen, P., Nabi, M.: Learning to remember: a synaptic plasticity driven framework for continual learning. In: CVPR (2019)

  12. Phan, H., Thai, M.T., Hu, H., Jin, R., Sun, T., Dou, D.: Scalable differential privacy with certified robustness in adversarial learning. In: ICML (2020)

  13. Phan, N., Thai, M., Devu, M., Jin, R.: Differentially private lifelong learning. In: Privacy in Machine Learning (NeurIPS 2019 Workshop) (2019)

  14. Phan, N., Jin, R., Thai, M.T., Hu, H., Dou, D.: Preserving differential privacy in adversarial learning with provable robustness. CoRR abs/1903.09822 (2019). http://arxiv.org/abs/1903.09822

  15. Phan, N., et al.: Heterogeneous Gaussian mechanism: preserving differential privacy in deep learning with provable robustness. In: IJCAI, pp. 4753–4759 (2019)

  16. Phan, N., Wu, X., Hu, H., Dou, D.: Adaptive Laplace mechanism: differential privacy preservation in deep learning. In: ICDM, pp. 385–394 (2017)

  17. Schwarz, J., et al.: Progress & compress: a scalable framework for continual learning. In: ICML, pp. 4528–4537 (2018)

  18. Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: IEEE S&P, pp. 3–18 (2017)

  19. Zenke, F., Poole, B., Ganguli, S.: Continual learning through synaptic intelligence. In: ICML, pp. 3987–3995 (2017)


Acknowledgment

The authors gratefully acknowledge the support from the National Science Foundation grants NSF CNS-1935928/1935923, CNS-1850094, IIS-2041096/2041065.

Author information

Corresponding author

Correspondence to NhatHai Phan.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Desai, P., Lai, P., Phan, N., Thai, M.T. (2021). Continual Learning with Differential Privacy. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1517. Springer, Cham. https://doi.org/10.1007/978-3-030-92310-5_39


  • DOI: https://doi.org/10.1007/978-3-030-92310-5_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92309-9

  • Online ISBN: 978-3-030-92310-5
