Abstract
The regularization/prior approach has emerged as one of the major directions in continual learning for helping a neural network avoid forgetting previously learned knowledge. This approach measures the importance of the weights for previous tasks and then constrains them while training on the current task, without retraining on past data or extending the network architecture. However, regularization/prior-based methods face a problem: the weights can move sharply to a region of parameter space that yields good performance on the current task but poor performance on previous ones. In this paper, we present a novel solution to this problem. Rather than using only global variables as in the original methods, we add auxiliary local variables for each task that act as adjusting factors, adapting the global variables to that task. As a result, the global variables can be kept in a region that is good for all tasks, which reduces forgetting. In particular, by imposing a variational distribution on the auxiliary local variables, which are employed as multiplicative noise on the inputs of layers, we obtain theoretical properties that are missing in existing methods: uncorrelated likelihoods, correlated pre-activations, and data-dependent regularization. These properties bring several benefits: (1) uncorrelated likelihoods between different data instances reduce the high variance of stochastic gradient variational Bayes; (2) correlated pre-activations increase the representation ability for each task; and (3) data-dependent regularization helps keep the global variables in a good region for all tasks. Our extensive experiments show that adding the local variables improves the performance of regularization/prior-based methods by significant margins on several datasets. In particular, it brings several standard baselines close to SOTA results.
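Since the paper body is not reproduced here, the snippet below is only a minimal sketch of the idea described in the abstract, assuming a PyTorch setting: a shared ("global") linear layer whose input is rescaled by task-specific ("local") multiplicative Gaussian noise with a learned variational distribution. The class name `LocalScaledLinear`, the `task_id` argument, and the N(1, 1) prior on the scales are illustrative assumptions rather than the authors' exact formulation; the regularizer/prior on the global weights (e.g., EWC or VCL) is omitted.

```python
# Sketch: per-task local multiplicative noise on layer inputs, shared global weights.
import torch
import torch.nn as nn

class LocalScaledLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, num_tasks: int):
        super().__init__()
        # Global variables: shared across all tasks.
        self.linear = nn.Linear(in_features, out_features)
        # Local variables: one (mu, log sigma^2) pair per task and input unit.
        self.mu = nn.Parameter(torch.ones(num_tasks, in_features))
        self.log_var = nn.Parameter(torch.full((num_tasks, in_features), -6.0))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        mu, log_var = self.mu[task_id], self.log_var[task_id]
        if self.training:
            # Reparameterized sample of the multiplicative noise z_t.
            eps = torch.randn_like(x)
            z = mu + torch.exp(0.5 * log_var) * eps
        else:
            z = mu  # use the mean scale at test time
        return self.linear(x * z)

    def kl(self, task_id: int) -> torch.Tensor:
        # KL(q(z_t) || N(1, 1)): an illustrative prior keeping scales near 1.
        mu, log_var = self.mu[task_id], self.log_var[task_id]
        return 0.5 * (torch.exp(log_var) + (mu - 1.0) ** 2 - 1.0 - log_var).sum()

# Usage: the per-task loss would be the task likelihood term plus this KL term,
# plus whatever prior/regularizer is placed on the global weights.
layer = LocalScaledLinear(784, 256, num_tasks=5)
x = torch.randn(32, 784)
h = layer(x, task_id=0)
loss_extra = layer.kl(task_id=0)
```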
L. N. Van, N. L. Hai and H. Pham—Equal contribution.
Acknowledgement
This work was funded by Gia Lam Urban Development and Investment Company Limited, Vingroup, and supported by Vingroup Innovation Foundation (VINIF) under project code VINIF.2019.DA18.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Van, L.N., Hai, N.L., Pham, H., Than, K. (2022). Auxiliary Local Variables for Improving Regularization/Prior Approach in Continual Learning. In: Gama, J., Li, T., Yu, Y., Chen, E., Zheng, Y., Teng, F. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2022. Lecture Notes in Computer Science, vol. 13280. Springer, Cham. https://doi.org/10.1007/978-3-031-05933-9_2
Print ISBN: 978-3-031-05932-2
Online ISBN: 978-3-031-05933-9