Stability and Topology in Reservoir Computing

  • Larry Manevitz
  • Hananel Hazan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6438)

Abstract

Recently, Jaeger and others have put forth the paradigm of "reservoir computing" as a way of computing with highly recurrent neural networks. The reservoir is a collection of neurons randomly connected to each other with fixed weights; only a readout layer is trained. Amongst other things, the model has been shown to be effective in temporal pattern recognition, and it has been held up as a model appropriate for explaining how certain aspects of the brain work (particularly in its guise as a "liquid state machine", due to Maass et al.). In this work we show that although the model is known to have generalization properties, and is thus robust to errors in its input, it is NOT resistant to errors in the model itself: small malfunctions or distortions of the reservoir render previous training ineffective. The model as currently presented therefore cannot be considered an appropriate biological model, and this fragility also suggests limitations on its applicability in the pattern recognition sphere. However, we show that, with the enforcement of topological constraints on the reservoir, in particular that of small-world topology, the model is indeed fault tolerant. This implies that "natural" computational systems must have specific topologies, and that uniform random connectivity is not appropriate.
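The setup described above can be sketched in code: a reservoir of tanh neurons with fixed random recurrent weights, where the connectivity is constrained to a Watts–Strogatz-style small-world topology (a ring lattice with random rewiring) rather than uniform random connectivity. This is an illustrative sketch only, not the authors' implementation; the parameter choices (`n`, `k`, rewiring probability `p`, spectral radius) and the leak-free state update are assumptions.

```python
import numpy as np

def small_world_reservoir(n=200, k=4, p=0.1, spectral_radius=0.9, seed=0):
    """Build a small-world reservoir weight matrix (illustrative sketch).

    Starts from a ring lattice (each neuron connected to its k nearest
    neighbours) and rewires each edge with probability p, following the
    Watts-Strogatz construction.
    """
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    # Ring lattice: connect each neuron to its k nearest neighbours.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            for t in ((i + j) % n, (i - j) % n):
                W[i, t] = rng.normal()
    # Rewire each edge with probability p to a random empty slot.
    rows, cols = np.nonzero(W)
    for r, c in zip(rows, cols):
        if rng.random() < p:
            new_c = int(rng.integers(n))
            if new_c != r and W[r, new_c] == 0.0:
                W[r, new_c], W[r, c] = W[r, c], 0.0
    # Scale to the desired spectral radius (echo state property heuristic).
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
    return W

def run_reservoir(W, inputs, w_in):
    """Drive the fixed reservoir with a scalar input sequence."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)  # leak-free tanh update
        states.append(x.copy())
    return np.array(states)
```

In the reservoir computing paradigm, `W` and `w_in` stay fixed after construction; only a linear readout over the collected `states` is trained, which is why damage to `W` itself (the fault model studied in the paper) cannot be compensated by the readout.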

Keywords

Reservoir computing · Small-world topology · Robustness · Machine learning

References

  1. Albert, R., Barabási, A.-L.: Topology of evolving networks: local events and universality. Physical Review Letters 85, 5234–5237 (2000)
  2. Barabási, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286, 509–512 (1999)
  3. Bianconi, G., Barabási, A.-L.: Competition and multiscaling in evolving networks. Europhysics Letters 54, 436 (2001)
  4. Chklovskii, L.R.: Structural properties of the Caenorhabditis elegans neuronal network (July 2009)
  5. Bassett, D.S., Bullmore, E.: Small-world brain networks. Neuroscientist 12(6), 512–523 (2006)
  6. Fernando, C., Sojakka, S.: Pattern recognition in a bucket. In: Banzhaf, W., Ziegler, J., Christaller, T., Dittrich, P., Kim, J.T. (eds.) ECAL 2003. LNCS (LNAI), vol. 2801, pp. 588–597. Springer, Heidelberg (2003)
  7. Gütig, R., Sompolinsky, H.: The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience 9(3), 420–428 (2006)
  8. Izhikevich, E.M.: Simple model of spiking neurons. IEEE Transactions on Neural Networks 14(6), 1569–1572 (2003)
  9. Jaeger, H.: The "echo state" approach to analysing and training recurrent neural networks. German National Research Institute for Computer Science (2001)
  10. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Computer Science Review 3(3), 127–149 (2009)
  11. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation 14(11), 2531–2560 (2002)
  12. Maass, W.: Paradigms for computing with spiking neurons. Springer, Heidelberg (1999)
  13. Maass, W., Natschläger, T., Markram, H.: Computational models for generic cortical microcircuits. In: Feng, J. (ed.) Computational Neuroscience: A Comprehensive Approach, pp. 575–605. Chapman & Hall/CRC, Boca Raton (2004)
  14. Manevitz, L.M., Marom, S.: Modeling the process of rate selection in neuronal activity. Journal of Theoretical Biology 216(3), 337–343 (2002)
  15. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–127 (1943)
  16. Widrow, B., Hoff, M.: Adaptive switching circuits. IRE WESCON Convention Record 4, 96–104 (1960)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Larry Manevitz (1)
  • Hananel Hazan (1)

  1. Department of Computer Science, University of Haifa, Mount Carmel, Israel