
Incremental Learning of Spatio-temporal Patterns with Model Selection

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4668)

Abstract

This paper proposes a biologically inspired incremental learning method for spatio-temporal patterns based on our recently reported "Incremental learning through sleep (ILS)" method. The method alternately repeats two learning phases: awake and sleep. During the awake phase, the system learns new spatio-temporal patterns by rote, whereas in the sleep phase it rehearses the recorded new memories interleaved with old memories. The rehearsal process is essential for reconstructing the internal representation of the neural network so that it not only memorizes the new patterns while retaining old memories but also prunes redundant hidden units. Through this strategy, the neural network achieves high generalization ability.

The most attractive property of the method is its ability to incrementally learn non-independently distributed samples without catastrophic forgetting, despite using only a small amount of resources. We applied the method to an experiment on robot control signals, which vary with the context of the current situation.
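The awake/sleep alternation described above can be sketched in code. This is a hypothetical minimal illustration, not the authors' actual RBF-network implementation: a scalar linear model stands in for the neural network, and rehearsal is approximated by replaying stored old samples interleaved with the newly recorded ones (the class name `ILSSketch` and all details are assumptions for illustration only).

```python
import random

class ILSSketch:
    """Toy sketch of the awake/sleep alternation (not the paper's method).

    A scalar linear model y = w*x + b stands in for the neural network;
    "rehearsal" is approximated by replaying stored old samples
    interleaved with the newly recorded ones.
    """

    def __init__(self):
        self.w, self.b = 0.0, 0.0
        self.old_memory = []   # previously consolidated (x, y) samples
        self.new_buffer = []   # samples recorded by rote in the awake phase

    def awake(self, samples):
        """Awake phase: record new samples by rote, without training."""
        self.new_buffer.extend(samples)

    def sleep(self, epochs=200, lr=0.05):
        """Sleep phase: rehearse new samples interleaved with old memories."""
        rehearsal = self.new_buffer + self.old_memory
        for _ in range(epochs):
            random.shuffle(rehearsal)          # interleave old and new
            for x, y in rehearsal:
                err = (self.w * x + self.b) - y
                self.w -= lr * err * x         # SGD step on squared error
                self.b -= lr * err
        # consolidate: new memories join the old ones
        self.old_memory.extend(self.new_buffer)
        self.new_buffer = []

    def predict(self, x):
        return self.w * x + self.b
```

For example, after an awake/sleep session on samples of y = 2x and a second session on further samples from the same function, predictions for the first session's inputs remain accurate, because the sleep phase rehearses old memories alongside the new buffer rather than training on the new data alone.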





Editor information

Joaquim Marques de Sá, Luís A. Alexandre, Włodzisław Duch, Danilo Mandic


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yamauchi, K., Sato, M. (2007). Incremental Learning of Spatio-temporal Patterns with Model Selection. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds) Artificial Neural Networks – ICANN 2007. ICANN 2007. Lecture Notes in Computer Science, vol 4668. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74690-4_16


  • DOI: https://doi.org/10.1007/978-3-540-74690-4_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74689-8

  • Online ISBN: 978-3-540-74690-4

  • eBook Packages: Computer Science (R0)
