
Unsupervised Continual Learning via Self-adaptive Deep Clustering Approach

Conference paper in Continual Semi-Supervised Learning (CSSL 2021)

Abstract

Unsupervised continual learning remains relatively uncharted territory in the existing literature because the vast majority of existing works require unlimited access to ground truth, incurring expensive labelling costs. Another issue lies in task boundaries and task IDs, which must be known for model updates or predictions, hindering feasibility for real-time deployment. This paper proposes KIERA (Knowledge Retention in Self-Adaptive Deep Continual Learner). KIERA is built on a flexible deep clustering approach with an elastic network structure, allowing it to cope with changing environments in a timely manner. A centroid-based experience replay is put forward to overcome the catastrophic forgetting problem. KIERA does not exploit any labelled samples for model updates and features a task-agnostic merit. Its advantage has been numerically validated on popular continual learning problems, where it shows highly competitive performance compared to state-of-the-art approaches. Our implementation is available at https://researchdata.ntu.edu.sg/dataset.xhtml?persistentId=doi:10.21979/N9/P9DFJH.
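The two mechanisms the abstract names, an elastic (self-growing) cluster structure and centroid-based experience replay, can be sketched roughly as follows. This is an illustrative Python sketch only, not KIERA's actual implementation: the class names (`Centroid`, `CentroidMemory`), the distance-threshold rule for spawning clusters, and the online-k-means update are all assumptions made for exposition.

```python
import numpy as np

class Centroid:
    """A cluster prototype with a running support count (illustrative only)."""
    def __init__(self, vec):
        self.vec = np.asarray(vec, dtype=float)
        self.count = 1

    def update(self, x, lr=0.1):
        # Move the centroid toward the new sample (online k-means step).
        self.vec += lr * (x - self.vec)
        self.count += 1


class CentroidMemory:
    """Elastic set of centroids: samples far from every existing centroid
    spawn a new cluster, so the structure grows with the data stream."""
    def __init__(self, threshold=1.5):
        self.centroids = []
        self.threshold = threshold  # assumed distance threshold for growth

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centroids:
            self.centroids.append(Centroid(x))
            return
        dists = [np.linalg.norm(x - c.vec) for c in self.centroids]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:
            self.centroids.append(Centroid(x))   # grow a new cluster
        else:
            self.centroids[i].update(x)          # refine the nearest one

    def replay_batch(self):
        # Centroids act as a compact rehearsal set: mixing them into
        # later updates is the replay step that counters forgetting.
        return np.stack([c.vec for c in self.centroids])
```

No labels or task IDs appear anywhere in the loop, which is the task-agnostic, unsupervised flavour the paper emphasises; the centroids double as a bounded memory in place of storing raw past samples.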

M. Pratama and A. Ashfahani share equal contributions. This work was carried out when M. Pratama was with SCSE, NTU, Singapore.



Author information

Correspondence to Mahardhika Pratama.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pratama, M., Ashfahani, A., Lughofer, E. (2022). Unsupervised Continual Learning via Self-adaptive Deep Clustering Approach. In: Cuzzolin, F., Cannons, K., Lomonaco, V. (eds) Continual Semi-Supervised Learning. CSSL 2021. Lecture Notes in Computer Science, vol 13418. Springer, Cham. https://doi.org/10.1007/978-3-031-17587-9_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-17587-9_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17586-2

  • Online ISBN: 978-3-031-17587-9

  • eBook Packages: Computer Science, Computer Science (R0)
