Improving Recurrent Neural Network Performance Using Transfer Entropy

  • Conference paper
Neural Information Processing. Models and Applications (ICONIP 2010)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6444)

Abstract

Reservoir computing approaches have been successfully applied to a variety of tasks. An inherent problem of these approaches, however, is that their performance varies with the fixed random initialisation of the reservoir. Self-organised approaches such as intrinsic plasticity have been applied to improve reservoir quality, but they do not take the task of the system into account. We present an approach to improve the hidden layer of recurrent neural networks that is guided by the learning goal of the system. Our reservoir adaptation optimises the information transfer at each individual unit, dependent on properties of the information transfer between the input and output of the system. Using synthetic data, we show that this reservoir adaptation improves the performance of offline echo state learning and recursive least squares online learning.
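The quantity named in the title is Schreiber's transfer entropy. The paper's adaptation rule itself is not reproduced on this page; purely as an illustration of the underlying measure, the Python sketch below estimates transfer entropy for discrete-valued time series using plug-in (histogram) probabilities. The function name, the history length parameter k, and the binary toy data are assumptions for demonstration, not code from the paper.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, k=1):
    """Plug-in estimate of Schreiber's transfer entropy TE(source -> target),
    in bits, for discrete-valued series:
        TE = sum p(y_{t+1}, y_t^(k), x_t)
             * log2[ p(y_{t+1} | y_t^(k), x_t) / p(y_{t+1} | y_t^(k)) ]
    where y_t^(k) is the length-k history of the target.
    (Illustrative sketch, not the paper's implementation.)"""
    joint, src_hist, next_hist, hist = Counter(), Counter(), Counter(), Counter()
    for t in range(k - 1, len(target) - 1):
        y_hist = tuple(target[t - k + 1 : t + 1])  # target history y_t^(k)
        y_next, x = target[t + 1], source[t]
        joint[(y_next, y_hist, x)] += 1
        src_hist[(y_hist, x)] += 1
        next_hist[(y_next, y_hist)] += 1
        hist[y_hist] += 1
    total = sum(joint.values())
    te = 0.0
    for (y_next, y_hist, x), c in joint.items():
        p_joint = c / total                                  # p(y_{t+1}, y^(k), x)
        p_full = c / src_hist[(y_hist, x)]                   # p(y_{t+1} | y^(k), x)
        p_self = next_hist[(y_next, y_hist)] / hist[y_hist]  # p(y_{t+1} | y^(k))
        te += p_joint * np.log2(p_full / p_self)
    return te

# Toy check: y copies x with a one-step delay, so x_t fully determines
# y_{t+1} and TE(x -> y) should approach 1 bit for binary data.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20000)
y = np.roll(x, 1)
print(transfer_entropy(x, y))   # ~1.0 bit
print(transfer_entropy(y, x))   # ~0.0 (no predictive influence)
```

In the setting the abstract describes, an estimator of this kind would be evaluated at individual reservoir units and related to the information transfer between the input and output of the system; the exact adaptation rule is given in the paper itself.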

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Obst, O., Boedecker, J., Asada, M. (2010). Improving Recurrent Neural Network Performance Using Transfer Entropy. In: Wong, K.W., Mendis, B.S.U., Bouzerdoum, A. (eds) Neural Information Processing. Models and Applications. ICONIP 2010. Lecture Notes in Computer Science, vol 6444. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17534-3_24

  • DOI: https://doi.org/10.1007/978-3-642-17534-3_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17533-6

  • Online ISBN: 978-3-642-17534-3

  • eBook Packages: Computer Science, Computer Science (R0)
