
Information processing in echo state networks at the edge of chaos

Original Paper, published in Theory in Biosciences


Abstract

We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between the stable and the unstable dynamical regime, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretical framework and are, for the first time, able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized, input-driven ways of improving performance in recurrent neural networks. Moreover, the networks we study share important features with biological systems, such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which has been shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on why biological systems are tuned into this specific regime.
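To make the setting concrete, the recurrent networks studied here are echo state networks: a fixed random reservoir whose dynamical regime is controlled by the scale of the recurrent weights. The following minimal sketch drives a tanh reservoir whose weight matrix is rescaled to a spectral radius just below 1, i.e., near the edge of chaos. All parameters (reservoir size, input scale, spectral radius 0.95) are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # reservoir size (illustrative choice)

# Random recurrent weights, rescaled so the spectral radius is just below 1,
# placing the reservoir near the edge of chaos.
W = rng.standard_normal((N, N))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.1, 0.1, size=N)  # input weights (illustrative scale)

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence u; return all states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

states = run_reservoir(rng.uniform(-1, 1, 200))  # shape (200, N)
```

In a full echo state network, a linear readout would then be trained on these states; only the readout is learned, the reservoir stays fixed.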


Figures 1–5

Notes
  1. This initial separation has to be chosen carefully: it should be as small as possible, yet still large enough that its influence remains measurable at the limited numerical precision of a computer. We found 10⁻¹² to be a robust value in our simulations, which is also the value recommended by Sprott (2004) for the precision used in this study.
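The two-trajectory estimate this footnote refers to can be sketched as follows: perturb a copy of the state by 10⁻¹², advance both trajectories, and renormalise the separation back to 10⁻¹² after every step while accumulating the log expansion rates. Only the separation of 10⁻¹² comes from the text; the network, weights, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
W = rng.standard_normal((N, N))
W *= 1.1 / max(abs(np.linalg.eigvals(W)))  # illustrative, mildly unstable regime

def step(x):
    return np.tanh(W @ x)

d0 = 1e-12  # initial separation, as recommended in the footnote
x = rng.uniform(-1, 1, N)
for _ in range(100):  # wash out transients first
    x = step(x)

pert = rng.standard_normal(N)
y = x + d0 * pert / np.linalg.norm(pert)  # perturbed copy at distance d0

log_sum, T = 0.0, 1000
for _ in range(T):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    log_sum += np.log(d / d0)       # accumulate the per-step expansion rate
    y = x + (y - x) * (d0 / d)      # renormalise separation back to d0
lyap = log_sum / T  # estimate of the largest Lyapunov exponent (per step)
```

A positive estimate indicates the chaotic regime, a negative one the stable regime; the edge of chaos corresponds to the exponent crossing zero.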

  2. The TE can be formulated using the l previous states of the source. However, where only the previous state is a causal information contributor (as is the case for ESNs), it is sensible to set l = 1 to measure direct transfer only at step n.
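For discrete time series, transfer entropy with source history l = 1 (and destination history k = 1) can be estimated with a simple plug-in estimator: TE = Σ p(d_{n+1}, d_n, s_n) log₂ [ p(d_{n+1} | d_n, s_n) / p(d_{n+1} | d_n) ]. The sketch below and its toy copy task are illustrative assumptions, not the estimator used in the paper.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, dest):
    """Plug-in estimate of TE(source -> dest) with history lengths k = l = 1,
    in bits, for discrete-valued sequences of equal length."""
    triples = list(zip(dest[1:], dest[:-1], source[:-1]))  # (d_{n+1}, d_n, s_n)
    n = len(triples)
    c_xyz = Counter(triples)                       # counts of (d_{n+1}, d_n, s_n)
    c_yz = Counter((d, s) for _, d, s in triples)  # counts of (d_n, s_n)
    c_xy = Counter((d1, d) for d1, d, _ in triples)  # counts of (d_{n+1}, d_n)
    c_y = Counter(d for _, d, _ in triples)        # counts of d_n
    te = 0.0
    for (d1, d, s), c in c_xyz.items():
        p_cond_full = c / c_yz[(d, s)]             # p(d_{n+1} | d_n, s_n)
        p_cond_dest = c_xy[(d1, d)] / c_y[d]       # p(d_{n+1} | d_n)
        te += (c / n) * np.log2(p_cond_full / p_cond_dest)
    return te

# Toy check: dest copies source with one step of lag, so TE should be ~1 bit.
rng = np.random.default_rng(2)
src = rng.integers(0, 2, 10000)
dst = np.empty_like(src)
dst[0] = 0
dst[1:] = src[:-1]
te = transfer_entropy(src.tolist(), dst.tolist())  # ≈ 1 bit
```

For a random binary source copied with unit lag, the previous source state fully determines the next destination state, so the estimate approaches the source entropy of 1 bit.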


References

  • Ay N, Bertschinger N, Der R, Güttler F, Olbrich E (2008) Predictive information and explorative behavior of autonomous robots. Eur Phys J B 63:329–339

  • Beggs JM (2008) The criticality hypothesis: how local cortical networks might optimize information processing. Phil Trans R Soc A 366(1864):329–343

  • Beggs JM, Plenz D (2003) Neuronal avalanches in neocortical circuits. J Neurosci 23(35):11167–11177

  • Bell AJ, Sejnowski TJ (1995) An information-maximization approach to blind separation and blind deconvolution. Neural Comput 7(6):1129–1159

  • Bertschinger N, Natschläger T (2004) Real-time computation at the edge of chaos in recurrent neural networks. Neural Comput 16(7):1413–1436

  • Boedecker J, Obst O, Mayer NM, Asada M (2009) Initialization and self-organized optimization of recurrent neural network connectivity. HFSP J 3(5):340–349

  • Borst A, Theunissen FE (1999) Information theory and neural coding. Nat Neurosci 2:947–957

  • Büsing L, Schrauwen B, Legenstein R (2010) Connectivity, dynamics, and memory in reservoir computing with binary and analog neurons. Neural Comput 22(5):1272–1311

  • Chialvo DR (2004) Critical brain networks. Physica A 340(4):756–765

  • Cover TM, Thomas JA (2006) Elements of information theory, 2nd edn. Wiley, New York, NY

  • Derrida B, Pomeau Y (1986) Random networks of automata: a simple annealed approximation. Europhys Lett 1(2):45–49

  • Jaeger H (2001a) The "echo state" approach to analysing and training recurrent neural networks. Tech Rep 148, GMD, German National Research Institute for Computer Science

  • Jaeger H (2001b) Short term memory in echo state networks. Tech Rep 152, GMD, German National Research Institute for Computer Science

  • Jaeger H, Haas H (2004) Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304(5667):78–80

  • Klyubin AS, Polani D, Nehaniv CL (2004) Tracking information flow through the environment: simple cases of stigmergy. In: Pollack J, Bedau M, Husbands P, Ikegami T, Watson RA (eds) Proceedings of the 9th international conference on the simulation and synthesis of living systems. MIT Press, Cambridge, MA, pp 563–568

  • Klyubin AS, Polani D, Nehaniv CL (2005) All else being equal be empowered. In: Capcarrère MS, Freitas AA, Bentley PJ, Johnson CG, Timmis J (eds) Proceedings of the 8th European conference on artificial life. Lecture Notes in Artificial Intelligence, vol 3630. Springer, Heidelberg, pp 744–753

  • Langton CG (1990) Computation at the edge of chaos: phase transitions and emergent computation. Physica D 42(1–3):12–37

  • Lazar A, Pipa G, Triesch J (2009) SORN: a self-organizing recurrent neural network. Front Comput Neurosci 3(23). doi:10.3389/neuro.10.023.2009

  • Legenstein R, Maass W (2007a) Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks 20(3):323–334

  • Legenstein R, Maass W (2007b) What makes a dynamical system computationally powerful? In: Haykin S, Principe JC, Sejnowski T, McWhirter J (eds) New directions in statistical signal processing: from systems to brains. MIT Press, Cambridge, MA, pp 127–154

  • Levina A, Herrmann JM, Geisel T (2007) Dynamical synapses causing self-organized criticality in neural networks. Nat Phys 3(12):857–860

  • Lizier JT, Prokopenko M, Zomaya AY (2007) Detecting non-trivial computation in complex dynamics. In: Almeida e Costa F, Rocha LM, Costa E, Harvey I, Coutinho A (eds) Proceedings of the 9th European conference on artificial life (ECAL 2007), Lisbon, Portugal. Lecture Notes in Artificial Intelligence, vol 4648. Springer, Berlin, Heidelberg, pp 895–904

  • Lizier JT, Prokopenko M, Zomaya AY (2008a) A framework for the local information dynamics of distributed computation in complex systems. Accessed 1 Nov 2010

  • Lizier JT, Prokopenko M, Zomaya AY (2008b) The information dynamics of phase transitions in random boolean networks. In: Bullock S, Noble J, Watson R, Bedau MA (eds) Proceedings of the 11th international conference on the simulation and synthesis of living systems (ALife XI), Winchester, UK. MIT Press, Cambridge, MA, pp 374–381

  • Lizier JT, Prokopenko M, Zomaya AY (2008c) Local information transfer as a spatiotemporal filter for complex systems. Phys Rev E 77(2):026110

  • Lizier JT, Prokopenko M, Zomaya AY (2010) Coherent information structure in complex computation. Theory Biosci (to appear)

  • Lukosevicius M, Jaeger H (2009) Reservoir computing approaches to recurrent neural network training. Comput Sci Rev 3(3):127–149

  • Lungarella M, Sporns O (2006) Mapping information flow in sensorimotor networks. PLoS Comput Biol 2(10):e144

  • Maass W, Natschläger T, Markram H (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput 14(11):2531–2560

  • Mitchell M, Hraber PT, Crutchfield JP (1993) Revisiting the edge of chaos: evolving cellular automata to perform computations. Complex Syst 7:89–130

  • Obst O, Boedecker J, Asada M (2010) Improving recurrent neural network performance using transfer entropy. In: Wong KW, Mendis BSU, Bouzerdoum A (eds) Neural information processing: models and applications. Lecture Notes in Computer Science, vol 6444. Springer, Heidelberg, pp 193–200

  • Olsson LA, Nehaniv CL, Polani D (2006) From unknown sensors and actuators to actions grounded in sensorimotor perceptions. Connect Sci 18(2):121–144

  • Prokopenko M, Gerasimov V, Tanev I (2006) Evolving spatiotemporal coordination in a modular robotic system. In: Nolfi S, Baldassarre G, Calabretta R, Hallam JCT, Marocco D, Meyer JA, Miglino O, Parisi D (eds) From animals to animats 9: 9th international conference on simulation of adaptive behavior (SAB 2006). Lecture Notes in Computer Science, vol 4095. Springer, Heidelberg, pp 558–569

  • Schreiber T (2000) Measuring information transfer. Phys Rev Lett 85(2):461–464

  • Shannon CE, Weaver W (1949) The mathematical theory of communication. University of Illinois Press, Urbana, IL

  • Sporns O, Lungarella M (2006) Evolving coordinated behavior by maximizing information structure. In: Rocha LM, Yaeger LS, Bedau MA, Floreano D, Goldstone RL, Vespignani A (eds) Proceedings of the 10th international conference on the simulation and synthesis of living systems. MIT Press, Cambridge, MA, pp 323–329

  • Sprott JC (2003) Chaos and time-series analysis. Oxford University Press, Oxford. Accessed 1 Nov 2010

  • Sprott JC (2004) Numerical calculation of largest Lyapunov exponent.

  • Strong S, Koberle R, van Steveninck R, Bialek W (1998) Entropy and information in neural spike trains. Phys Rev Lett 80:197–200

  • Tang A, Jackson D (2008) A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J Neurosci 28:505–518

  • Tang A, Honey C, Hobbs J, Sher A, Litke A, Sporns O, Beggs J (2008) Information flow in local cortical networks is not democratic. BMC Neurosci 9(Suppl 1):O3. doi:10.1186/1471-2202-9-S1-O3

  • Triesch J (2005) A gradient rule for the plasticity of a neuron's intrinsic excitability. In: Duch W, Kacprzyk J, Oja E, Zadrozny S (eds) Proceedings of the international conference on artificial neural networks (ICANN 2005). Lecture Notes in Computer Science. Springer, Heidelberg, pp 65–70

  • Zhou D, Sun Y, Rangan AV, Cai D (2010) Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type. J Comput Neurosci 28:229–245



Acknowledgements

We thank the Advanced Scientific Computing group of Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) for access to the high-performance computing resources used for simulation and analysis. Joschka Boedecker acknowledges travel support from the CSIRO Complex Systems Science network. Michael Mayer thanks the National Science Council of Taiwan for its support (grant number 98-2218-E-194-003-MY2). In addition, we thank the anonymous reviewers for their helpful comments on the manuscript.

Author information

Authors and Affiliations


Corresponding author

Correspondence to Joschka Boedecker.

Electronic supplementary material

Below is the link to the electronic supplementary material.

PDF (211 KB)


About this article

Cite this article

Boedecker, J., Obst, O., Lizier, J.T. et al. Information processing in echo state networks at the edge of chaos. Theory Biosci. 131, 205–213 (2012).
