
Re-visiting Reservoir Computing Architectures Optimized by Evolutionary Algorithms


Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 648)

Abstract

For many years, Evolutionary Algorithms (EAs) have been applied to improve Neural Network (NN) architectures. They have been used to solve different problems, such as training the networks (adjusting the weights), designing the network topology, optimizing global parameters, and selecting features. Here, we provide a brief systematic survey of applications of EAs in the specific domain of recurrent NNs known as Reservoir Computing (RC). In the early 2000s, the RC paradigm emerged as a convenient way to employ recurrent NNs without dealing with the inconveniences of their training algorithms. An RC model uses a nonlinear dynamical system with a fixed recurrent neural network, named the reservoir, and restricts the learning process to adjusting a linear parametric function (the readout). However, an RC model has several hyper-parameters, so EAs are helpful tools for finding optimal RC architectures. We provide an overview of results in the area, discuss recent advances, and present our vision of new trends and still-open questions.
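To make the setting concrete, the sketch below (not taken from the paper; all names, parameter values, and the toy task are illustrative assumptions) implements a minimal Echo State Network, the canonical RC model: a fixed random reservoir driven by the input, a ridge-regression readout as the only trained component, and a small evolutionary loop that mutates and selects two common hyper-parameters, the spectral radius and the input scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n, spectral_radius, density=0.1):
    # Sparse random recurrent weights, rescaled to a target spectral radius.
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def esn_fit_predict(u_train, y_train, u_test, n=100,
                    spectral_radius=0.9, input_scaling=0.5, ridge=1e-6):
    # Reservoir and input weights stay fixed; only the readout is trained.
    W = make_reservoir(n, spectral_radius)
    w_in = input_scaling * rng.standard_normal(n)

    def run(u):
        x, states = np.zeros(n), []
        for u_t in u:
            x = np.tanh(W @ x + w_in * u_t)  # reservoir state update
            states.append(x.copy())
        return np.array(states)

    X = run(u_train)
    # Linear readout fitted by ridge regression: the only learned parameters.
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ y_train)
    return run(u_test) @ w_out

# Toy one-step-ahead prediction task.
u = np.sin(0.2 * np.arange(1200))
y = np.roll(u, -1)
u_tr, y_tr = u[:800], y[:800]
u_te, y_te = u[800:-1], y[800:-1]

def fitness(p):
    # Mean squared test error of an ESN built with hyper-parameters p.
    pred = esn_fit_predict(u_tr, y_tr, u_te,
                           spectral_radius=p[0], input_scaling=p[1])
    return np.mean((pred - y_te) ** 2)

# Small (mu + lambda)-style evolutionary search over the two hyper-parameters.
pop = [(rng.uniform(0.1, 1.4), rng.uniform(0.1, 2.0)) for _ in range(9)]
for gen in range(5):
    parents = sorted(pop, key=fitness)[:3]                   # selection
    children = [(abs(p[0] + 0.1 * rng.standard_normal()),    # Gaussian mutation
                 abs(p[1] + 0.2 * rng.standard_normal()))
                for p in parents for _ in range(2)]
    pop = parents + children

best = min(pop, key=fitness)
print("best (spectral radius, input scaling):", best)
```

The surveyed works evolve many more design choices (topology, sparsity, leak rates, even full weight matrices), but the structure above, an outer evolutionary search wrapped around a cheap linear readout training, is the pattern they share.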



Acknowledgements

This work was supported by GACR-Czech Science Foundation project No. 21-33574K “Lifelong Machine Learning on Data Streams”.

Author information

Corresponding author

Correspondence to Sebastián Basterrech.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Basterrech, S., Sharma, T.K. (2023). Re-visiting Reservoir Computing Architectures Optimized by Evolutionary Algorithms. In: Abraham, A., Hanne, T., Gandhi, N., Manghirmalani Mishra, P., Bajaj, A., Siarry, P. (eds.) Proceedings of the 14th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2022). SoCPaR 2022. Lecture Notes in Networks and Systems, vol 648. Springer, Cham. https://doi.org/10.1007/978-3-031-27524-1_81
