Improving Recurrent Neural Network Performance Using Transfer Entropy
Reservoir computing approaches have been successfully applied to a variety of tasks. An inherent problem of these approaches, however, is their variation in performance due to the fixed random initialisation of the reservoir. Self-organised approaches such as intrinsic plasticity have been applied to improve reservoir quality, but they do not take the task of the system into account. We present an approach to improve the hidden layer of recurrent neural networks, guided by the learning goal of the system. Our reservoir adaptation optimises the information transfer at each individual unit, based on properties of the information transfer between the input and output of the system. Using synthetic data, we show that this reservoir adaptation improves the performance of both offline echo state learning and recursive least squares (RLS) online learning.
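The quantity at the heart of the approach, transfer entropy, measures the directed information flow from a source series Y to a target series X: how much knowing the past of Y reduces uncertainty about the next value of X beyond what X's own past already provides. The following is a minimal sketch of a plug-in transfer entropy estimator for discrete time series with history length k = 1; it is an illustration of the measure itself, not the estimator or reservoir adaptation procedure used in the paper.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in estimate of transfer entropy T(source -> target) in bits,
    for discrete-valued series with history length k = 1:
    T = sum p(x_{n+1}, x_n, y_n) * log2[ p(x_{n+1} | x_n, y_n) / p(x_{n+1} | x_n) ].
    """
    x_next, x_past, y_past = target[1:], target[:-1], source[:-1]
    n = len(x_next)

    # Empirical counts of the joint and marginal configurations.
    joint = Counter(zip(x_next, x_past, y_past))
    past_xy = Counter(zip(x_past, y_past))
    pair_xx = Counter(zip(x_next, x_past))
    past_x = Counter(x_past)

    te = 0.0
    for (xn, xp, yp), c in joint.items():
        p_joint = c / n
        p_cond_full = c / past_xy[(xp, yp)]          # p(x_{n+1} | x_n, y_n)
        p_cond_self = pair_xx[(xn, xp)] / past_x[xp]  # p(x_{n+1} | x_n)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te
```

For a target that deterministically copies the source with a one-step lag, the estimate approaches 1 bit (the full entropy rate of a fair binary source), while for independent series it stays near zero, up to a small positive plug-in bias.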
Keywords: machine learning, recurrent neural networks, information theory, reservoir computing, guided self-organisation