Learning Context Sensitive Languages with LSTM Trained with Kalman Filters
Unlike traditional recurrent neural networks, the Long Short-Term Memory (LSTM) model generalizes well when presented with training sequences derived from regular and also simple nonregular languages. Our novel combination of LSTM and the decoupled extended Kalman filter, however, learns even faster and generalizes even better, requiring only the 10 shortest exemplars (n ≤ 10) of the context-sensitive language a^n b^n c^n to deal correctly with values of n up to 1000 and more. Even when the relatively high update complexity per timestep is taken into account, in many cases the hybrid offers faster learning than LSTM by itself.
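The decoupled extended Kalman filter (DEKF) treats the network weights as the state of a dynamical system and partitions them into groups (for example, one per memory block or neuron), each with its own error-covariance matrix, while all groups share a single global scaling matrix. The sketch below, a minimal Python/NumPy illustration and not the paper's implementation, shows one such update step; the grouping, the noise covariances, and the placeholder Jacobians are assumptions made for the example, whereas in the paper's setting the Jacobians would come from LSTM's truncated gradient computation.

```python
import numpy as np


def dekf_step(groups, H, err, R, Q):
    """One decoupled extended Kalman filter (DEKF) weight update.

    groups : list of (w_i, P_i) pairs, one per weight group
             w_i : (n_i,) weight vector of group i
             P_i : (n_i, n_i) error-covariance matrix of group i
    H      : list of (m, n_i) Jacobians of the network output w.r.t. w_i
    err    : (m,) error vector, target minus network output
    R      : (m, m) measurement-noise covariance
    Q      : scalar process noise added to each P_i diagonal
    Returns the updated list of (w_i, P_i) pairs.
    """
    # Global scaling matrix, accumulated over all weight groups.
    A = R.copy()
    for (w_i, P_i), H_i in zip(groups, H):
        A += H_i @ P_i @ H_i.T
    A_inv = np.linalg.inv(A)

    updated = []
    for (w_i, P_i), H_i in zip(groups, H):
        K_i = P_i @ H_i.T @ A_inv                      # Kalman gain of group i
        w_new = w_i + K_i @ err                        # weight update
        P_new = P_i - K_i @ H_i @ P_i + Q * np.eye(len(w_i))
        updated.append((w_new, P_new))
    return updated


if __name__ == "__main__":
    # Toy demonstration with random stand-in Jacobians (hypothetical values).
    rng = np.random.default_rng(0)
    m = 1                       # one output unit
    sizes = [4, 3]              # two illustrative weight groups
    groups = [(rng.normal(size=n), 0.1 * np.eye(n)) for n in sizes]
    H = [rng.normal(size=(m, n)) for n in sizes]
    err = np.array([0.5])       # target minus output at this timestep
    groups = dekf_step(groups, H, err, R=0.01 * np.eye(m), Q=1e-4)
```

Compared with a single EKF over the full weight vector, decoupling drops the covariances between groups, so the per-step cost scales with the sizes of the individual groups rather than with the whole network, which helps keep the per-timestep overhead mentioned above manageable.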
Keywords: Gradient Descent · Recurrent Neural Network · Training Sequence · Memory Block · Input Gate