
Scalable Adversarial Online Continual Learning

  • Conference paper
Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022)


Adversarial continual learning (ACL) is effective for continual learning problems because its feature-alignment process generates task-invariant features with low susceptibility to catastrophic forgetting. Nevertheless, ACL imposes considerable complexity because it relies on task-specific networks and discriminators, and its iterative training process does not suit online (one-epoch) continual learning problems. This paper proposes a scalable adversarial continual learning (SCALE) method that puts forward a parameter generator transforming common features into task-specific features, and a single discriminator in the adversarial game to induce common features. The training process is carried out in a meta-learning fashion using a new combination of three loss functions. SCALE outperforms prominent baselines by noticeable margins in both accuracy and execution time.
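The abstract's central idea, a parameter generator that turns shared (task-invariant) features into task-specific ones, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the FiLM-style scale-and-shift form, the function names, and the dimensions below are all illustrative assumptions (the reference list does cite FiLM-style conditioning, but the exact architecture is specified only in the full paper).

```python
import numpy as np

rng = np.random.default_rng(0)

feat_dim, emb_dim = 8, 4  # illustrative sizes, not from the paper

# Hypothetical generator weights mapping a task embedding to
# per-feature scale (gamma) and shift (beta) parameters.
W_gamma = rng.standard_normal((emb_dim, feat_dim))
W_beta = rng.standard_normal((emb_dim, feat_dim))

def parameter_generator(task_emb):
    """Map a learned task embedding to FiLM-style scale/shift parameters."""
    return task_emb @ W_gamma, task_emb @ W_beta

def task_specific_features(common, task_emb):
    """Transform shared, task-invariant features into task-specific ones."""
    gamma, beta = parameter_generator(task_emb)
    return gamma * common + beta

common = rng.standard_normal(feat_dim)   # output of the shared backbone
task_emb = rng.standard_normal(emb_dim)  # embedding of the current task

z = task_specific_features(common, task_emb)
print(z.shape)  # (8,)
```

The appeal of this design, as the abstract suggests, is scalability: each new task adds only a small embedding rather than a full task-specific network, while a single discriminator (not shown here) is trained adversarially to keep the shared features task-invariant.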

T. Dam, M. Pratama and MD. M. Ferdaus—Equal Contribution.



References

  1. Aljundi, R., Babiloni, F., Elhoseiny, M., Rohrbach, M., Tuytelaars, T.: Memory aware synapses: learning what (not) to forget. In: ECCV (2018)
  2. Aljundi, R., et al.: Online continual learning with maximally interfered retrieval. In: NeurIPS (2019)
  3. Ashfahani, A., Pratama, M.: Unsupervised continual learning in streaming environments. IEEE Trans. Neural Netw. Learn. Syst. (2022)
  4. Buzzega, P., Boschini, M., Porrello, A., Abati, D., Calderara, S.: Dark experience for general continual learning: a strong, simple baseline. In: NeurIPS (2020)
  5. Cha, S., Hsu, H., Calmon, F., Moon, T.: CPR: classifier-projection regularization for continual learning. In: ICLR (2020)
  6. Chaudhry, A., Gordo, A., Dokania, P., Torr, P.H.S., Lopez-Paz, D.: Using hindsight to anchor past knowledge in continual learning. In: AAAI (2021)
  7. Chaudhry, A., Ranzato, M., Rohrbach, M., Elhoseiny, M.: Efficient lifelong learning with A-GEM. In: ICLR (2018)
  8. Chaudhry, A., et al.: On tiny episodic memories in continual learning. ArXiv (2019)
  9. Chen, Z., Liu, B.: Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning (2016)
  10. Ebrahimi, S., Meier, F., Calandra, R., Darrell, T., Rohrbach, M.: Adversarial continual learning. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12356, pp. 386–402. Springer, Cham (2020)
  11. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML (2017)
  12. Ganin, Y., et al.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(59), 1–35 (2016)
  13. Goodfellow, I., et al.: Generative adversarial nets. In: NIPS (2014)
  14. Kirkpatrick, J., et al.: Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114, 3521–3526 (2017)
  15. Li, X., Zhou, Y., Wu, T., Socher, R., Xiong, C.: Learn to grow: a continual structure learning framework for overcoming catastrophic forgetting. In: ICML (2019)
  16. Lopez-Paz, D., Ranzato, M.: Gradient episodic memory for continual learning. In: NIPS (2017)
  17. Mao, F., Weng, W., Pratama, M., Yapp, E.: Continual learning via inter-task synaptic mapping. Knowl. Based Syst. 222, 106947 (2021)
  18. Mariani, G., Scheidegger, F., Istrate, R., Bekas, C., Malossi, A.C.I.: BAGAN: data augmentation with balancing GAN. ArXiv abs/1803.09655 (2018)
  19. Paik, I., Oh, S., Kwak, T., Kim, I.: Overcoming catastrophic forgetting by neuron-level plasticity control. In: AAAI (2019)
  20. Parisi, G.I., Kemker, R., Part, J.L., Kanan, C., Wermter, S.: Continual lifelong learning with neural networks: a review. Neural Netw. 113, 54–71 (2019)
  21. Perez, E., Strub, F., de Vries, H., Dumoulin, V., Courville, A.C.: FiLM: visual reasoning with a general conditioning layer. In: AAAI (2018)
  22. Pham, Q.H., Liu, C., Sahoo, D., Hoi, S.C.H.: Contextual transformation networks for online continual learning. In: ICLR (2021)
  23. Pratama, M., Ashfahani, A., Lughofer, E.: Unsupervised continual learning via self-adaptive deep clustering approach. ArXiv abs/2106.14563 (2021)
  24. Rebuffi, S.A., Kolesnikov, A., Sperl, G., Lampert, C.H.: iCaRL: incremental classifier and representation learning. In: CVPR, pp. 5533–5542 (2017)
  25. Riemer, M., et al.: Learning to learn without forgetting by maximizing transfer and minimizing interference. In: ICLR (2019)
  26. Rusu, A.A., et al.: Progressive neural networks. ArXiv abs/1606.04671 (2016)
  27. Schmidhuber, J.: Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-...-hook. Diploma thesis, Technische Universität München, Germany (1987)
  28. Schwarz, J., et al.: Progress & compress: a scalable framework for continual learning. In: ICML (2018)
  29. van de Ven, G.M., Tolias, A.: Three scenarios for continual learning. ArXiv abs/1904.07734 (2019)
  30. Xu, J., Ma, J., Gao, X., Zhu, Z.: Adaptive progressive continual learning. IEEE Trans. Pattern Anal. Mach. Intell. 44, 6715–6728 (2021)
  31. Yoon, J., Yang, E., Lee, J., Hwang, S.J.: Lifelong learning with dynamically expandable networks. In: ICLR (2017)
  32. Zenke, F., Poole, B., Ganguli, S.: Continual learning through synaptic intelligence. Proc. Mach. Learn. Res. 70, 3987–3995 (2017)



Acknowledgements

M. Pratama acknowledges the UniSA start-up grant. T. Dam acknowledges the UIPA scholarship from the UNSW PhD program.

Author information



Corresponding author

Correspondence to Mahardhika Pratama .


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 137 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Dam, T., Pratama, M., Ferdaus, M.M., Anavatti, S., Abbas, H. (2023). Scalable Adversarial Online Continual Learning. In: Amini, MR., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol 13715. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26408-5

  • Online ISBN: 978-3-031-26409-2

  • eBook Packages: Computer Science; Computer Science (R0)
