
A Practical Guide to Applying Echo State Networks

Chapter
Neural Networks: Tricks of the Trade

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7700)

Abstract

Reservoir computing has emerged in the last decade as an alternative to gradient-descent methods for training recurrent neural networks. The Echo State Network (ESN) is one of the key reservoir computing “flavors”. While practical, conceptually simple, and easy to implement, ESNs require some experience and insight to achieve the hailed good performance on many tasks. Here we present practical techniques and recommendations for successfully applying ESNs, as well as some more advanced, application-specific modifications.
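To make the approach concrete, the sketch below implements the basic leaky-integrator ESN with a ridge-regression (Tikhonov) readout in NumPy. All sizes, weight scalings, hyperparameter values, and the toy one-step-ahead prediction task are illustrative assumptions for this preview, not values prescribed by the chapter.

    import numpy as np

    # Minimal leaky-integrator ESN with a ridge-regression readout:
    #   x(n) = (1 - a) x(n-1) + a tanh(Win [1; u(n)] + W x(n-1))
    #   y(n) = Wout [1; u(n); x(n)]
    # Sizes, scalings, and the toy task below are assumptions for illustration.

    rng = np.random.default_rng(42)
    n_in, n_res = 1, 100
    a = 0.3        # leaking rate (assumed)
    rho = 0.9      # spectral radius to rescale W to (assumed)
    beta = 1e-8    # ridge regularization strength (assumed)

    Win = rng.uniform(-0.5, 0.5, (n_res, 1 + n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # set spectral radius of W

    def run_reservoir(U):
        """Drive the reservoir with inputs U (T x n_in); collect states (T x n_res)."""
        x = np.zeros(n_res)
        X = np.empty((len(U), n_res))
        for t, u in enumerate(U):
            x_tilde = np.tanh(Win @ np.concatenate(([1.0], u)) + W @ x)
            x = (1 - a) * x + a * x_tilde
            X[t] = x
        return X

    # Toy task (assumed): one-step-ahead prediction of a sine wave.
    T, washout = 2000, 100
    U = np.sin(np.arange(T) / 4.0).reshape(-1, 1)
    Ytarget = np.sin((np.arange(T) + 1) / 4.0).reshape(-1, 1)

    X = run_reservoir(U)
    Xext = np.hstack([np.ones((T, 1)), U, X])[washout:]   # [1; u(n); x(n)]
    Y = Ytarget[washout:]

    # Ridge (Tikhonov) regression: Wout = Y^T X (X^T X + beta I)^(-1)
    Wout = Y.T @ Xext @ np.linalg.inv(Xext.T @ Xext + beta * np.eye(Xext.shape[1]))
    print("train MSE:", np.mean((Xext @ Wout.T - Y) ** 2))

Two details worth noting: an initial washout of reservoir states is discarded before fitting, and the readout is solved with ridge regression rather than a plain pseudoinverse. These, along with the leaking rate, spectral radius, and input scaling, are the typical task-dependent knobs one would tune.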





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Lukoševičius, M. (2012). A Practical Guide to Applying Echo State Networks. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science, vol. 7700. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35289-8_36


  • DOI: https://doi.org/10.1007/978-3-642-35289-8_36

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35288-1

  • Online ISBN: 978-3-642-35289-8

  • eBook Packages: Computer Science, Computer Science (R0)
