A Practical Guide to Applying Echo State Networks

  • Mantas Lukoševičius
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7700)

Abstract

Reservoir computing has emerged over the last decade as an alternative to gradient-descent methods for training recurrent neural networks. The Echo State Network (ESN) is one of the key reservoir computing “flavors”. While practical, conceptually simple, and easy to implement, ESNs require some experience and insight to achieve the good performance they are hailed for in many tasks. Here we present practical techniques and recommendations for successfully applying ESNs, as well as some more advanced application-specific modifications.
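For readers who want a concrete starting point, the sketch below illustrates the standard leaky-integrator ESN update with a ridge-regression readout, the model family around which the chapter's recommendations (input scaling, spectral radius, leaking rate, regularization) revolve. It is a minimal illustration and not the chapter's reference implementation: the reservoir size, leaking rate, spectral radius, ridge parameter, uniform weight initialization, and the helper names run_reservoir and train_readout are all illustrative assumptions.

```python
import numpy as np

# A minimal leaky-integrator ESN with a ridge-regression readout.
# All sizes and hyper-parameter values below are illustrative assumptions,
# not values recommended by the chapter.
rng = np.random.default_rng(42)
N, K, L = 200, 1, 1                 # reservoir, input, output dimensions
alpha, rho, beta = 0.3, 0.9, 1e-8   # leaking rate, spectral radius, ridge

W_in = rng.uniform(-0.5, 0.5, size=(N, 1 + K))   # input weights (with bias)
W = rng.uniform(-0.5, 0.5, size=(N, N))          # recurrent weights
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius

def run_reservoir(u):
    """Drive the reservoir with inputs u of shape (T, K); collect states."""
    T = len(u)
    X = np.zeros((1 + K + N, T))   # columns are [1; u(n); x(n)]
    x = np.zeros(N)
    for n in range(T):
        pre = np.tanh(W_in @ np.concatenate(([1.0], u[n])) + W @ x)
        x = (1.0 - alpha) * x + alpha * pre      # leaky-integrator update
        X[:, n] = np.concatenate(([1.0], u[n], x))
    return X

def train_readout(u, y_target, washout=50):
    """Ridge regression: W_out = Y X^T (X X^T + beta*I)^(-1)."""
    X = run_reservoir(u)[:, washout:]            # discard initial transient
    Y = y_target[washout:].T                     # shape (L, T - washout)
    return Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(1 + K + N))
```

Given a trained W_out, the output at each step is y(n) = W_out [1; u(n); x(n)]; in practice one would also replace the explicit matrix inverse with np.linalg.solve for numerical stability.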

Keywords

Spectral Radius, Output Feedback, Least Mean Squares, Recurrent Neural Network, Ridge Regression

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Mantas Lukoševičius
  1. Jacobs University Bremen, Bremen, Germany
