Abstract
Unsupervised continual learning remains relatively uncharted territory in the literature, because the vast majority of existing works call for unlimited access to ground truth, incurring expensive labelling costs. Another issue lies in task boundaries and task IDs, which must be known for a model's updates or predictions, hindering feasibility for real-time deployment. This paper proposes KIERA (Knowledge Retention in Self-Adaptive Deep Continual Learner). KIERA is built on a flexible deep clustering approach with an elastic network structure that copes with changing environments in a timely manner. Centroid-based experience replay is put forward to overcome the catastrophic forgetting problem. KIERA does not exploit any labelled samples for model updates and is fully task-agnostic. Its advantages have been numerically validated on popular continual learning problems, where it shows highly competitive performance compared to state-of-the-art approaches. Our implementation is available at https://researchdata.ntu.edu.sg/dataset.xhtml?persistentId=doi:10.21979/N9/P9DFJH.
M. Pratama and A. Ashfahani contributed equally. This work was carried out when M. Pratama was with SCSE, NTU, Singapore.
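The centroid-based experience replay named in the abstract can be illustrated with a minimal sketch. This is not the authors' actual algorithm, only the general idea under assumed details: running-mean centroids of latent features are maintained incrementally, the centroid set grows elastically when a sample is far from every existing centroid, and the centroids are replayed in place of raw past samples during model updates. All names here (CentroidReplayBuffer, add_sample, replay_batch) and the fixed distance threshold are hypothetical.

import numpy as np

class CentroidReplayBuffer:
    # Stores one running-mean centroid per discovered cluster instead of
    # raw past samples; replaying centroids mitigates forgetting cheaply.

    def __init__(self, threshold=1.0):
        self.threshold = threshold   # hypothetical growth criterion
        self.centroids = []          # list of (centroid_vector, count)

    def add_sample(self, z):
        # Assign latent feature z to its nearest centroid if close enough;
        # otherwise spawn a new centroid (elastic structure growth).
        if self.centroids:
            dists = [np.linalg.norm(z - c) for c, _ in self.centroids]
            j = int(np.argmin(dists))
            if dists[j] < self.threshold:
                c, n = self.centroids[j]
                # Incremental mean update of the winning centroid.
                self.centroids[j] = (c + (z - c) / (n + 1), n + 1)
                return
        self.centroids.append((np.array(z, dtype=float), 1))

    def replay_batch(self):
        # Return all centroids as a pseudo-batch of past knowledge to mix
        # into the current update; empty array before any sample arrives.
        if not self.centroids:
            return np.empty((0,))
        return np.stack([c for c, _ in self.centroids])

In use, latent features of streaming samples are pushed through add_sample so the centroid set grows on the fly, and replay_batch supplies a compact memory of earlier clusters for each subsequent update, without retaining any labels or raw data.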
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Pratama, M., Ashfahani, A., Lughofer, E. (2022). Unsupervised Continual Learning via Self-adaptive Deep Clustering Approach. In: Cuzzolin, F., Cannons, K., Lomonaco, V. (eds.) Continual Semi-Supervised Learning. CSSL 2021. Lecture Notes in Computer Science, vol. 13418. Springer, Cham. https://doi.org/10.1007/978-3-031-17587-9_4
DOI: https://doi.org/10.1007/978-3-031-17587-9_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-17586-2
Online ISBN: 978-3-031-17587-9