
Guided Self-Organization of Input-Driven Recurrent Neural Networks

  • Oliver Obst
  • Joschka Boedecker
Part of the Emergence, Complexity and Computation book series (ECC, volume 9)

Abstract

To understand the world around us, our brains solve a variety of tasks. One of the crucial functions of a brain is to predict what will happen next or in the near future. This ability helps us to anticipate upcoming events and plan our reactions to them in advance. To make these predictions, past information needs to be stored, transformed, or otherwise used. How exactly the brain achieves this information processing is far from clear and under heavy investigation. To guide this extraordinary research effort, neuroscientists increasingly look for theoretical frameworks that could help explain the data recorded from the brain and make the enormous task more manageable. This is evident, for instance, in the funding of the billion-dollar "Human Brain Project" of the European Union, amongst others. Mathematical techniques from graph and information theory, control theory, dynamical and complex systems (Sporns 2011), statistical mechanics (Rolls and Deco 2010), as well as machine learning and computer vision (Seung 2012; Hawkins and Blakeslee 2004), have provided new insights into brain structure and possible function, and continue to generate new hypotheses for future research.

Keywords

Mean Square Error · Recurrent Neural Network · Average Mutual Information · Dynamical System Theory · Transfer Entropy
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Ashby, W.R.: An Introduction to Cybernetics. Chapman & Hall, London (1956)
  2. Baddeley, R., Abbott, L.F., Booth, M.C.A., Sengpiel, F., Freeman, T., Wakeman, E.A., Rolls, E.T.: Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proc. R. Soc. Lond. B 264, 1775–1783 (1997)
  3. Bell, A.J., Sejnowski, T.J.: An information-maximization approach to blind separation and blind deconvolution. Neural Computation 7(6), 1129–1159 (1995)
  4. Bengio, Y., Boulanger-Lewandowski, N., Pascanu, R.: Advances in optimizing recurrent networks. arXiv preprint 1212.0901, arXiv.org (2012)
  5. Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5(2), 157–166 (1994)
  6. Bertschinger, N., Natschläger, T.: Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation 16(7), 1413–1436 (2004)
  7. Boedecker, J., Obst, O., Lizier, J.T., Mayer, N.M., Asada, M.: Information processing in echo state networks at the edge of chaos. Theory in Biosciences 131(3), 1–9 (2011)
  8. Boedecker, J., Obst, O., Mayer, N.M., Asada, M.: Initialization and self-organized optimization of recurrent neural network connectivity. HFSP Journal 3(5), 340–349 (2009)
  9. Crutchfield, J.P., Machta, J.: Introduction to focus issue on "Randomness, Structure, and Causality: Measures of complexity from theory to applications". Chaos 21(3), 037101 (2011)
  10. Dambre, J., Verstraeten, D., Schrauwen, B., Massar, S.: Information processing capacity of dynamical systems. Scientific Reports 2, 514 (2012)
  11. Douglas, R., Markram, H., Martin, K.: Neocortex. In: Shepherd, G. (ed.) The Synaptic Organization of the Brain, pp. 499–558. Oxford University Press (2004)
  12. Doya, K.: Bifurcations in the learning of recurrent neural networks. In: IEEE International Symposium on Circuits and Systems, pp. 2777–2780. IEEE (1992)
  13. Ganguli, S., Huh, D., Sompolinsky, H.: Memory traces in dynamical systems. Proceedings of the National Academy of Sciences 105(48), 18970–18975 (2008)
  14. Grassberger, P.: Toward a quantitative theory of self-generated complexity. International Journal of Theoretical Physics 25(9), 907–938 (1986)
  15. Grassberger, P.: Randomness, information, and complexity. Technical Report 1208.3459, arXiv.org (2012)
  16. Hawkins, J., Blakeslee, S.: On Intelligence. Times Books (2004)
  17. Hochreiter, S.: The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6(2), 107–116 (1998)
  18. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)
  19. Jaeger, H.: Short term memory in echo state networks. Technical Report 152, GMD – German National Research Institute for Computer Science (2001)
  20. Jaeger, H., Haas, H.: Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304(5667), 78–80 (2004)
  21. Kohonen, T.: Self-Organizing Maps, 3rd, extended edn. Springer (2001)
  22. Kolmogorov, A.N.: Three approaches to the quantitative definition of information. Problemy Peredachi Informatsii 1(1), 3–11 (1965)
  23. Lazar, A., Pipa, G., Triesch, J.: SORN: a self-organizing recurrent neural network. Frontiers in Computational Neuroscience 3, 23 (2009)
  24. Legenstein, R., Maass, W.: What makes a dynamical system computationally powerful? In: Haykin, S., Principe, J.C., Sejnowski, T., McWhirter, J. (eds.) New Directions in Statistical Signal Processing: From Systems to Brains, pp. 127–154. MIT Press (2007)
  25. Linsker, R.: Towards an organizing principle for a layered perceptual network. In: Anderson, D.Z. (ed.) NIPS, pp. 485–494. American Institute of Physics (1987)
  26. Lizier, J.T., Prokopenko, M., Zomaya, A.Y.: Detecting non-trivial computation in complex dynamics. In: Almeida e Costa, F., Rocha, L.M., Costa, E., Harvey, I., Coutinho, A. (eds.) ECAL 2007. LNCS (LNAI), vol. 4648, pp. 895–904. Springer, Heidelberg (2007)
  27. Lizier, J.T., Prokopenko, M., Zomaya, A.Y.: Local measures of information storage in complex distributed computation. Information Sciences 208, 39–54 (2012)
  28. Lukoševičius, M.: A practical guide to applying echo state networks. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade, 2nd edn. LNCS, vol. 7700, pp. 659–686. Springer, Heidelberg (2012a)
  29. Lukoševičius, M.: Self-organized reservoirs and their hierarchies. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds.) ICANN 2012, Part I. LNCS, vol. 7552, pp. 587–595. Springer, Heidelberg (2012b)
  30. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Computer Science Review 3(3), 127–149 (2009)
  31. Maass, W., Joshi, P., Sontag, E.D.: Computational aspects of feedback in neural circuits. PLoS Computational Biology 3(1), e165 (2007)
  32. Manjunath, G., Tino, P., Jaeger, H.: Theory of input driven dynamical systems. In: dice.ucl.ac.be, pp. 25–27 (April 2012)
  33. Martens, J., Sutskever, I.: Learning recurrent neural networks with Hessian-free optimization. In: Proceedings of the 28th International Conference on Machine Learning, vol. 46, p. 68. Omnipress, Madison, WI (2011)
  34. Martinetz, T., Schulten, K.: A "neural-gas" network learns topologies. Artificial Neural Networks 1, 397–402 (1991)
  35. Mitchell, M., Hraber, P.T., Crutchfield, J.P.: Revisiting the edge of chaos: evolving cellular automata to perform computations. Complex Systems 7, 89–130 (1993)
  36. Obst, O., Boedecker, J., Asada, M.: Improving recurrent neural network performance using transfer entropy. Neural Information Processing Models and Applications 6444, 193–200 (2010)
  37. Obst, O., Boedecker, J., Schmidt, B., Asada, M.: On active information storage in input-driven systems. Preprint 1303.5526v1, arXiv.org (2013)
  38. Ozturk, M.C., Xu, D., Príncipe, J.C.: Analysis and design of echo state networks. Neural Computation 19(1), 111–138 (2007)
  39. Prokopenko, M., Lizier, J.T., Obst, O., Wang, X.R.: Relating Fisher information to order parameters. Physical Review E 84(4), 041116 (2011)
  40. Riedmiller, M., Braun, H.: A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In: IEEE International Conference on Neural Networks, vol. 1, pp. 586–591 (1993)
  41. Rissanen, J.: Modeling by shortest data description. Automatica 14(5), 465–471 (1978)
  42. Rolls, E.T., Deco, G.: The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function. Oxford University Press (2010)
  43. Rumelhart, D., Hinton, G., Williams, R.: Learning representations by back-propagating errors. Nature 323(6088), 533–536 (1986)
  44. Schmidhuber, J., Wierstra, D., Gagliolo, M., Gomez, F.: Training recurrent networks by Evolino. Neural Computation 19(3), 757–779 (2007)
  45. Schrauwen, B., Wardermann, M., Verstraeten, D., Steil, J.J., Stroobandt, D.: Improving reservoirs using intrinsic plasticity. Neurocomputing 71(7-9), 1159–1171 (2008)
  46. Schreiber, T.: Measuring information transfer. Physical Review Letters 85(2), 461–464 (2000)
  47. Seung, H.S.: Connectome: How the Brain's Wiring Makes Us Who We Are. Houghton Mifflin Harcourt, New York (2012)
  48. Sporns, O.: Networks of the Brain. The MIT Press (2011)
  49. Sussillo, D., Barak, O.: Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation 25(3), 626–649 (2013)
  50. Tino, P., Rodan, A.: Short term memory in input-driven linear dynamical systems. Neurocomputing (2013)
  51. Triesch, J.: A gradient rule for the plasticity of a neuron's intrinsic excitability. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) ICANN 2005. LNCS, vol. 3696, pp. 65–70. Springer, Heidelberg (2005)
  52. Voegtlin, T.: Recursive self-organizing maps. Neural Networks 15(8-9), 979–991 (2002)
  53. Werbos, P.J.: Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10), 1550–1560 (1990)
  54. Williams, P.L., Beer, R.D.: Information dynamics of evolved agents. From Animals to Animats 11, 38–49 (2010)
  55. Williams, R.J., Zipser, D.: A learning algorithm for continually running fully recurrent neural networks. Neural Computation 1(2), 270–280 (1989)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. CSIRO Computational Informatics, Epping, Australia
  2. Machine Learning Lab, University of Freiburg, Freiburg, Germany
