
Online Symbolic-Sequence Prediction with Discrete-Time Recurrent Neural Networks

  • Juan Antonio Pérez-Ortiz
  • Jorge Calera-Rubio
  • Mikel L. Forcada
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2130)

Abstract

This paper studies the use of discrete-time recurrent neural networks for predicting the next symbol in a sequence. The focus is on online prediction, a task much harder than the classical offline grammatical inference with neural networks. The results obtained show that the performance of recurrent networks working online is acceptable when sequences come from finite-state machines or even from some chaotic sources. When predicting texts in human language, however, the dynamics seem to be too complex to be learned correctly in real time by the network. Two algorithms are considered for network training: real-time recurrent learning and the decoupled extended Kalman filter.
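The online setting described above can be illustrated with a toy sketch: a small recurrent net reads one symbol at a time, predicts the next, and updates its weights immediately after each prediction. For simplicity the sketch below truncates gradients to a single time step rather than implementing full real-time recurrent learning or the decoupled extended Kalman filter; the network sizes, learning rate, and the alternating two-state source are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy sketch of online next-symbol prediction with a discrete-time
# recurrent net. Gradients are truncated to one time step, a
# simplification of the full RTRL algorithm used in the paper.
rng = np.random.default_rng(0)
n_sym, n_hid, lr = 2, 8, 0.5          # assumed sizes and learning rate

U = rng.normal(0, 0.5, (n_hid, n_sym))  # input  -> hidden
W = rng.normal(0, 0.5, (n_hid, n_hid))  # hidden -> hidden
V = rng.normal(0, 0.5, (n_sym, n_hid))  # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Symbol sequence from a trivial 2-state machine: 0,1,0,1,...
seq = [t % 2 for t in range(2000)]

h = np.zeros(n_hid)
correct = 0
for t in range(len(seq) - 1):
    x = np.zeros(n_sym)
    x[seq[t]] = 1.0
    h_new = np.tanh(U @ x + W @ h)
    p = softmax(V @ h_new)            # distribution over the next symbol
    target = seq[t + 1]
    correct += int(p.argmax() == target)

    # online gradient step on the cross-entropy loss (one-step truncation)
    dy = p.copy()
    dy[target] -= 1.0
    dh = (V.T @ dy) * (1 - h_new ** 2)
    V -= lr * np.outer(dy, h_new)
    U -= lr * np.outer(dh, x)
    W -= lr * np.outer(dh, h)
    h = h_new

print(correct / (len(seq) - 1))       # fraction of correct predictions
```

On such a simple finite-state source the net quickly reaches near-perfect prediction, consistent with the paper's observation that online performance is acceptable for finite-state sequences; natural-language text would require far richer dynamics than this sketch captures.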



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Juan Antonio Pérez-Ortiz¹
  • Jorge Calera-Rubio¹
  • Mikel L. Forcada¹

  1. Departament de Llenguatges i Sistemes Informàtics, Universitat d’Alacant, Alacant, Spain
