Theory in Biosciences, Volume 131, Issue 3, pp 205–213

Information processing in echo state networks at the edge of chaos

  • Joschka Boedecker
  • Oliver Obst
  • Joseph T. Lizier
  • N. Michael Mayer
  • Minoru Asada
Original Paper

Abstract

We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretic framework and are, for the first time, able to quantify the computation between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized, input-driven ways of improving performance in recurrent neural networks. Moreover, the networks we study share important features with biological systems, such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which has also been shown to operate close to the edge of chaos. Consequently, the behavior of model systems such as those studied here is likely to shed light on why biological systems are tuned into this specific regime.
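As an illustration of the quantities discussed above, the sketch below (a minimal example of ours, not the authors' code) builds a random echo state network reservoir, rescales its recurrent weight matrix to a spectral radius just below 1 (a common proxy for operating near the edge of chaos), and computes simple plug-in (histogram-based) estimates of active information storage and transfer entropy on discretized unit activations. All function names and parameter values here are illustrative assumptions; the study itself uses more careful estimators on its specific network model.

```python
# A minimal, hedged sketch of the two ingredients discussed above (all
# function names and parameters are illustrative, not the authors' setup):
# (1) an echo state network reservoir tuned toward the edge of chaos by
# rescaling its spectral radius, and (2) plug-in (binned) estimates of
# active information storage and transfer entropy between reservoir units.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=100, density=0.1, spectral_radius=0.95):
    """Random sparse weight matrix rescaled to a target spectral radius."""
    W = rng.normal(size=(n, n)) * (rng.random((n, n)) < density)
    return W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))

def run_esn(W, u, input_scale=0.1):
    """Drive tanh reservoir units with a scalar input stream u."""
    w_in = rng.normal(scale=input_scale, size=W.shape[0])
    x = np.zeros(W.shape[0])
    X = np.empty((len(u), W.shape[0]))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        X[t] = x
    return X

def entropy(symbols):
    """Shannon entropy (in bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def discretize(x, bins=4):
    """Map a continuous series onto `bins` roughly equiprobable symbols."""
    return np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))

def active_info_storage(x, k=2, bins=4):
    """Plug-in AIS: mutual information between a unit's length-k past
    and its next value, I(x_past ; x_next)."""
    s = discretize(x, bins)
    past = [tuple(s[t - k:t]) for t in range(k, len(s))]
    nxt = list(s[k:])
    return entropy(past) + entropy(nxt) - entropy(list(zip(past, nxt)))

def transfer_entropy(src, dst, k=2, bins=4):
    """Plug-in TE (Schreiber 2000): I(dst_next ; src_prev | dst_past)."""
    d, s = discretize(dst, bins), discretize(src, bins)
    past = [tuple(d[t - k:t]) for t in range(k, len(d))]
    nxt, src_prev = list(d[k:]), list(s[k - 1:-1])
    return (entropy(list(zip(nxt, past))) + entropy(list(zip(src_prev, past)))
            - entropy(past) - entropy(list(zip(nxt, src_prev, past))))

X = run_esn(make_reservoir(spectral_radius=0.95), rng.standard_normal(5000))
print("AIS(unit 0)    :", active_info_storage(X[:, 0]))
print("TE(unit 1 -> 0):", transfer_entropy(X[:, 1], X[:, 0]))
```

Sweeping `spectral_radius` across 1.0 and plotting both estimates against it would reproduce, in spirit, the kind of edge-of-chaos analysis the abstract describes.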

Keywords

Recurrent neural networks · Reservoir computing · Information transfer · Active information storage · Phase transition

Supplementary material

12064_2011_146_MOESM1_ESM.pdf (PDF, 211 KB)

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Joschka Boedecker (1, 2)
  • Oliver Obst (3, 4)
  • Joseph T. Lizier (3, 4)
  • N. Michael Mayer (5)
  • Minoru Asada (1, 2)

  1. Department of Adaptive Machine Systems, Osaka University, Suita, Japan
  2. JST ERATO Asada Synergistic Intelligence Project, Suita, Japan
  3. CSIRO ICT Centre, Adaptive Systems Team, Epping, Australia
  4. School of Information Technologies, The University of Sydney, Sydney, Australia
  5. Department of Electrical Engineering, National Chung Cheng University, Chia-Yi, Taiwan, ROC
