Lifelong Learning Starting from Zero

  • Claes Strannegård
  • Herman Carlström
  • Niklas Engsner
  • Fredrik Mäkeläinen
  • Filip Slottner Seholm
  • Morteza Haghir Chehreghani
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11654)

Abstract

We present a deep neural-network model for lifelong learning inspired by several forms of neuroplasticity. The neural network develops continuously in response to signals from the environment. In the beginning, the network is a blank slate with no nodes at all. It develops according to four rules: (i) expansion, which adds new nodes to memorize new input combinations; (ii) generalization, which adds new nodes that generalize from existing ones; (iii) forgetting, which removes nodes that are of relatively little use; and (iv) backpropagation, which fine-tunes the network parameters. We analyze the model in terms of accuracy, energy efficiency, and versatility, and compare it to other network models, finding that it performs better in several cases.
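
The four developmental rules can be illustrated with a minimal sketch in Python. This is not the authors' implementation: the node representation (prototype vectors with usage counters), the class name GrowingNetwork, and all thresholds are illustrative assumptions, and a single gradient step on the winning node stands in for full backpropagation.

    # Hypothetical sketch of the four rules from the abstract:
    # (i) expansion, (ii) generalization, (iii) forgetting,
    # (iv) backpropagation. All names and thresholds are assumptions.
    import numpy as np

    class GrowingNetwork:
        def __init__(self, match_threshold=0.5, merge_threshold=0.2, lr=0.01):
            self.prototypes = []   # node centers; the network starts empty
            self.usage = []        # recent firing frequency per node
            self.match_threshold = match_threshold
            self.merge_threshold = merge_threshold
            self.lr = lr

        def _nearest(self, x):
            # Return index and distance of the closest existing node.
            if not self.prototypes:
                return None, np.inf
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            i = int(np.argmin(dists))
            return i, dists[i]

        def observe(self, x):
            x = np.asarray(x, dtype=float)
            i, dist = self._nearest(x)
            if i is None or dist > self.match_threshold:
                # (i) expansion: memorize a genuinely new input combination
                self.prototypes.append(x.copy())
                self.usage.append(1.0)
            else:
                # (iv) stand-in for backpropagation: a gradient step on the
                # squared error nudges the winning node toward the input
                self.prototypes[i] += self.lr * (x - self.prototypes[i])
                self.usage[i] += 1.0

        def generalize(self):
            # (ii) generalization: if two nodes are very close, add a node
            # (their midpoint) that abstracts over both
            n = len(self.prototypes)
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(self.prototypes[i] - self.prototypes[j]) < self.merge_threshold:
                        self.prototypes.append((self.prototypes[i] + self.prototypes[j]) / 2)
                        self.usage.append((self.usage[i] + self.usage[j]) / 2)
                        return

        def forget(self, min_usage=0.5, decay=0.9):
            # (iii) forgetting: decay usage and prune nodes of little use
            self.usage = [u * decay for u in self.usage]
            keep = [k for k, u in enumerate(self.usage) if u >= min_usage]
            self.prototypes = [self.prototypes[k] for k in keep]
            self.usage = [self.usage[k] for k in keep]

    if __name__ == "__main__":
        net = GrowingNetwork()
        for x in np.random.rand(100, 4):   # a stream of 4-dimensional inputs
            net.observe(x)
        net.generalize()
        net.forget()
        print(len(net.prototypes), "nodes after one growth/pruning cycle")

Starting from an empty network, expansion fires whenever no existing node matches the input closely enough, generalization abstracts over near-duplicate nodes, and forgetting prunes nodes whose recent usage has decayed below a threshold, mirroring the interplay of growth and pruning described above.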

Keywords

Lifelong learning · Deep learning · Dynamic architectures

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Claes Strannegård (1, 2)
  • Herman Carlström (1)
  • Niklas Engsner (2)
  • Fredrik Mäkeläinen (2)
  • Filip Slottner Seholm (1)
  • Morteza Haghir Chehreghani (1)
  1. Department of Computer Science and Engineering, Chalmers University of Technology, Gothenburg, Sweden
  2. Dynamic Topologies Sweden AB, Gothenburg, Sweden
