Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome the limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is ‘yes’. I derive an initial gradient-based sequence learning algorithm for a ‘self-referential’ recurrent network that can ‘speak’ about its own weight matrix in terms of activations. It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first ‘introspective’ neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step, which is independent of the sequence length and equals O(n_conn log n_conn), where n_conn is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient ‘introspective’ or ‘self-referential’ weight change algorithm, but to show that such algorithms are possible at all.
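To make the self-referential mechanism concrete, the following is a minimal toy sketch (not the paper's exact algorithm, and not gradient-based): a recurrent net in which a few dedicated output units encode an address into the net's own weight matrix and a modification delta, while a dedicated input unit observes the addressed weight's current value. All names and the index-encoding scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # number of units (fully connected: n*n weights)
W = rng.normal(0.0, 0.1, size=(n, n))  # the weight matrix the net can inspect and change
x = np.zeros(n)                         # unit activations

def step(external_in):
    """One time step: ordinary recurrent dynamics plus self-modification.

    The net 'speaks' about its own weight matrix in terms of activations:
    three output units address a weight and propose a change to it, and one
    input unit receives the addressed weight's current value.
    """
    global W, x
    x = np.tanh(W @ x)
    x[:len(external_in)] += external_in           # ordinary external input units

    # Dedicated output units (here: the last three) encode a weight address
    # (row, col) and a small modification delta for the net's OWN weights.
    row = int((x[-3] * 0.5 + 0.5) * (n - 1))      # map tanh activation to an index
    col = int((x[-2] * 0.5 + 0.5) * (n - 1))
    delta = 0.01 * x[-1]

    # A dedicated input unit observes the addressed weight; then the net
    # modifies it. Note this can address the very weights that implement
    # the addressing and modification themselves.
    x[len(external_in)] = W[row, col]
    W[row, col] += delta
    return row, col, delta

for t in range(5):
    step(np.array([1.0, 0.0]))
```

In the paper proper, the weight changes are not applied by an ad-hoc rule like this but are learned: gradients flow through the self-modification pathway itself, which is what makes the per-time-step cost O(n_conn log n_conn).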