A Spiking Neural Sparse Distributed Memory Implementation for Learning and Predicting Temporal Sequences
In this paper we present a neural sequence machine that learns temporal sequences of discrete symbols and outperforms machines based on Elman-style context layers, time-delay networks, or shift-register context memories. The machine performs sequence detection, prediction, and one-shot learning of new sequences, both off-line and on-line. The network model is an associative memory with a separate store for the sequence context of a pattern. The machine is built on a sparse distributed memory, which stores associations between the current context and the input symbol. Numerical tests verify these properties, and we show that the memory can be implemented with spiking neurons.
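To make the storage scheme concrete, the following is a minimal NumPy sketch of a Kanerva-style binary sparse distributed memory used as a sequence machine, with the context as the address and the next symbol as the stored word. The dimensions, activation radius, and the first-order context (previous symbol only) are illustrative assumptions, not the paper's actual parameters; the paper uses a separate context store and a spiking-neuron implementation, neither of which is modeled here.

```python
import numpy as np

# Illustrative sizes only (not the paper's parameters).
rng = np.random.default_rng(0)
D = 256    # dimensionality of binary addresses/words
M = 1000   # number of hard locations
R = 115    # activation radius (Hamming distance)

hard_addresses = rng.integers(0, 2, size=(M, D))
counters = np.zeros((M, D), dtype=np.int32)

def active(addr):
    """Boolean mask of hard locations within Hamming radius R of addr."""
    return np.sum(hard_addresses != addr, axis=1) <= R

def write(addr, word):
    """One-shot store: +1/-1 counter updates at every active location."""
    counters[active(addr)] += np.where(word == 1, 1, -1)

def read(addr):
    """Recall: sum counters over active locations, threshold at zero."""
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

# One-shot learning of a short sequence A -> B -> C, using the
# previous symbol as a simple first-order context.
A, B, C = (rng.integers(0, 2, size=D) for _ in range(3))
write(A, B)   # "after A comes B"
write(B, C)   # "after B comes C"
```

After these two writes, `read(A)` recovers B and `read(B)` recovers C: each query activates roughly the same small set of hard locations as the corresponding write, and the summed counters are dominated by the stored word. Richer, higher-order contexts would replace the bare previous symbol as the address.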
- 1. Béroule, D.: Vocal interface for a man-machine dialog. In: ACL Proceedings, First European Conference (1983)
- 2. Durand, S., Alexandre, F.: Learning speech as acoustic sequences with the unsupervised model TOM. In: NEURAP, 8th International Conference on Neural Networks and Their Applications, France (1995)
- 4. Elman, J.L.: Finding structure in time. Cognitive Science 14 (1990)
- 5. Furber, S.B., Cumpstey, J.M., Bainbridge, W.J., Temple, S.: Sparse distributed memory using N-of-M codes. Neural Networks 17(10) (2004)
- 6. Lang, K.J., Hinton, G.E.: The development of the time delay neural network architecture for speech recognition. Technical Report CMU-CS-88-152, Carnegie Mellon University (1988)
- 7. Sun, R., Giles, C.L. (eds.): Sequence Learning. Springer, Heidelberg (2000)