
State Prediction: A Constructive Method to Program Recurrent Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6791)

Abstract

We introduce a novel technique to program desired state sequences into recurrent neural networks in one shot. The basic methodology and its scalability to large and input-driven networks are demonstrated by shaping attractor landscapes, shaping transient dynamics, and programming limit cycles. The approach unifies the programming of transient and attractor dynamics in a generic framework.
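The abstract does not spell out the construction, but a "one-shot" programming of a state sequence can be sketched as a pseudoinverse-based least-squares solve of the state-prediction condition along the target trajectory. The sketch below is an illustrative assumption, not the authors' exact algorithm: all names (`W`, `X`, the tanh update) are hypothetical choices for a minimal demonstration of programming a limit cycle.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact method): program a desired
# periodic state sequence into an RNN in one shot by solving the
# state-prediction condition tanh(W @ x_t) = x_{t+1} for the recurrent
# weight matrix W via linear least squares.

rng = np.random.default_rng(0)
N, T = 20, 15                        # network size, cycle length (T <= N)

# Desired limit cycle: T random states, closed into a loop (x_T = x_0)
X = rng.uniform(-0.8, 0.8, size=(N, T))
X = np.hstack([X, X[:, :1]])         # append x_0 so the sequence is periodic

# One-shot construction: W @ x_t should equal atanh(x_{t+1}), so
# W = atanh(X_next) @ pinv(X_prev). With T <= N and generic states,
# X_prev has full column rank and the fit is exact.
X_prev, X_next = X[:, :-1], X[:, 1:]
W = np.arctanh(X_next) @ np.linalg.pinv(X_prev)

# Verify the programmed transitions: one network step from each desired
# state should reproduce the next desired state.
one_step = np.tanh(W @ X_prev)
err = np.max(np.abs(one_step - X_next))
print(f"max one-step programming error: {err:.2e}")
```

Because the cycle is closed (the last column repeats the first), iterating the network from any programmed state retraces the whole sequence, i.e., the construction embeds a limit cycle; stability of that cycle under perturbations is a separate question that such a one-shot solve alone does not guarantee.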





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Reinhart, R.F., Steil, J.J. (2011). State Prediction: A Constructive Method to Program Recurrent Neural Networks. In: Honkela, T., Duch, W., Girolami, M., Kaski, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2011. ICANN 2011. Lecture Notes in Computer Science, vol 6791. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21735-7_20


  • DOI: https://doi.org/10.1007/978-3-642-21735-7_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21734-0

  • Online ISBN: 978-3-642-21735-7

  • eBook Packages: Computer Science, Computer Science (R0)
