Recurrent Networks for Learning Stochastic Sequences

  • Neil McCulloch

Abstract

This paper describes experiments exploring the ability of networks to learn the underlying statistics of artificially generated temporal data. In the first experiment, data generated by two simple Markov chains were fed into a multi-layer perceptron (MLP). The desired output was an indication of whether a transition out of one of the models had been made. The network produced a close approximation to the probability that a transition had just been made. In the second experiment, hidden Markov models were used to generate the data. This made determining whether a transition had occurred much more difficult, and the network produced a much poorer approximation to the correct probability.
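The first experiment's setup can be illustrated with a small sketch. This is an assumed reconstruction, not the paper's exact procedure: a symbol stream is produced by occasionally switching between two simple two-state Markov chains, and each step is labelled with whether a model transition has just occurred, which is the target the network would be trained to estimate.

```python
import random

# Hypothetical sketch (assumed setup): two simple two-state Markov chains.
# Each entry maps the current symbol to next-symbol probabilities [P(0), P(1)].
CHAIN_A = {0: [0.9, 0.1], 1: [0.8, 0.2]}
CHAIN_B = {0: [0.2, 0.8], 1: [0.1, 0.9]}
SWITCH_P = 0.05  # assumed probability of switching to the other model per step

def generate(n, seed=0):
    """Return n (symbol, switched) pairs: the input symbol and a 0/1 flag
    indicating whether a transition between the two models just occurred."""
    rng = random.Random(seed)
    model, symbol = CHAIN_A, 0
    data = []
    for _ in range(n):
        switched = rng.random() < SWITCH_P
        if switched:
            model = CHAIN_B if model is CHAIN_A else CHAIN_A
        # Draw the next symbol from the current model's transition row.
        symbol = 0 if rng.random() < model[symbol][0] else 1
        data.append((symbol, int(switched)))
    return data

pairs = generate(1000)
```

A network trained on such pairs with a squared-error or cross-entropy loss would, at best, learn to output the posterior probability of a switch given the recent symbols, which is the quantity the abstract says the MLP approximated.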

Keywords

Hidden Markov Model · True Probability · Confusion Matrix · Recurrent Network · Correct Probability
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer Science+Business Media Dordrecht 1990

Authors and Affiliations

  • Neil McCulloch
    1. Research Initiative in Pattern Recognition, Royal Signals and Radar Establishment, Malvern, Worcs, UK
