
Abstract

Prediction occurs in many biological nervous systems, e.g. in the cortex [7]. We introduce a method of adapting the recurrent layer dynamics of an echo-state network (ESN) without attempting to train the weights directly. Initially, a network is generated that fulfils the echo state / liquid state condition. A second network is then trained to predict the next internal state of the system. In simulation, the prediction of this module is then mixed with the actual activation of the internal state neurons, producing dynamics that are partially driven by the network model rather than by the input data. The mixture is determined by a parameter α. The target function to be produced by the network was sin³(0.24t), given an input function sin(0.24t). White noise was added to the input signal at 15% of the amplitude of the signal. Preliminary results indicate that self-prediction may improve the performance of an ESN performing signal mappings in the presence of additive noise.
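
To make the procedure concrete, the following Python sketch illustrates the idea of mixing the driven reservoir state with a self-predicted next state via the parameter α. All specifics here are illustrative assumptions rather than the authors' implementation: the reservoir size N_RES, the linear least-squares self-predictor standing in for the second network, and the ridge-regression readout are chosen only to make the sketch runnable.

    import numpy as np

    rng = np.random.default_rng(0)
    N_RES = 100            # reservoir size (assumed; not given in the abstract)
    ALPHA = 0.5            # mixing parameter alpha between driven and predicted state
    SPECTRAL_RADIUS = 0.9  # scaled below 1 so the echo state condition holds

    # Fixed reservoir and input weights (not trained, as in a standard ESN)
    W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
    W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-1.0, 1.0, N_RES)

    # Task from the abstract: map sin(0.24 t) with 15% additive white noise to sin^3(0.24 t)
    t = np.arange(3000)
    u_clean = np.sin(0.24 * t)
    u = u_clean + 0.15 * np.std(u_clean) * rng.standard_normal(len(t))
    y_target = np.sin(0.24 * t) ** 3

    def run_reservoir(inputs, W_pred=None, alpha=0.0):
        """Drive the reservoir; optionally mix in the self-predicted next state."""
        x = np.zeros(N_RES)
        states = []
        for u_t in inputs:
            x_driven = np.tanh(W @ x + W_in * u_t)
            if W_pred is not None:
                x_pred = np.tanh(W_pred @ x)                  # self-prediction of next state
                x = (1.0 - alpha) * x_driven + alpha * x_pred
            else:
                x = x_driven
            states.append(x)
        return np.array(states)

    # 1) Run the driven network and fit a linear predictor of the next internal state
    X = run_reservoir(u)
    pre_next = np.arctanh(np.clip(X[1:], -0.999, 0.999))      # pre-activations of next states
    W_pred = np.linalg.lstsq(X[:-1], pre_next, rcond=None)[0].T

    # 2) Re-run with mixed dynamics and train a ridge-regression readout
    X_mix = run_reservoir(u, W_pred, ALPHA)
    ridge = 1e-6
    W_out = np.linalg.solve(X_mix.T @ X_mix + ridge * np.eye(N_RES), X_mix.T @ y_target)
    print("training MSE:", np.mean((X_mix @ W_out - y_target) ** 2))

Sweeping α from 0 (purely input-driven) toward 1 shifts the dynamics from data-driven to model-driven, which is the comparison the preliminary results refer to.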

Keywords

Hidden Layer · Input Signal · Output Layer · Recurrent Neural Network · Recursive Least Squares

References

  1. Jaeger, H.: The 'echo state' approach to analysing and training recurrent neural networks. GMD Report 148, GMD German National Research Institute for Computer Science (2001), http://www.gmd.de/People/Herbert.Jaeger/Publications.html
  2. Jaeger, H.: Short term memory in echo state networks. GMD Report 152, GMD German National Research Institute for Computer Science (2001), http://www.gmd.de/People/Herbert.Jaeger/Publications.html
  3. Jaeger, H.: Adaptive nonlinear system identification with echo state networks. In: Proc. of NIPS 2002, AA14 (2003)
  4. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A new framework for neural computation based on perturbations. NeuroCOLT Technical Report NC-TR-01-113 (2001)
  5. Farhang-Boroujeny, B.: Adaptive Filters. Wiley, Chichester (1999)
  6. Werbos, P.: Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10), 1550–1560 (1990)
  7. Huettel, S., Mack, P., McCarthy, G.: Perceiving patterns in random series: dynamic processing of sequence in the prefrontal cortex. Nature Neuroscience 5(5), 485–490 (2002)
  8. Gers, F.A., Schmidhuber, J., Cummins, F.: Learning to forget: continual prediction with LSTM. Neural Computation 12(10), 2451–2471 (2000)
  9. Elman, J.: Finding Structure in Time. Cognitive Science 14(2), 179–211 (1990)
  10. Bengio, Y.: Neural Networks for Speech and Sequence Recognition. International Thomson Publishing Inc. (1996)
  11. Kadous, W.: Temporal Classification: Extending the Classification Paradigm to Multivariate Time Series. Dissertation, School of Computer Science and Engineering, University of New South Wales (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Norbert M. Mayer
  • Matthew Browne
  1. GMD Japan Research Laboratory, Collaboration Centre, Kitakyushu, Japan
