Learning a Markov process with a synchronous Boltzmann machine
In this paper we present simulations of a synchronous Boltzmann machine that learns to model a Markov process. The advantage of synchronous updating lies in the parallelism that the model offers. Learning with synchronous Boltzmann machines provides an attractive alternative to asynchronous learning, provided that a suitable theoretical framework can be established. The dynamics of synchronous networks of this kind were first studied by W.A. Little [Little 1974], [Little 1978] and later by Peretto [Peretto 1984]. The aim of the present study is to present results generated by a new local learning algorithm for synchronous Boltzmann machines. The algorithm uses gradient descent to update the weights and thresholds. Three different Markov processes were modelled with a three-unit network.
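To make the notion of synchronous updating concrete, the following is a minimal sketch of one parallel update step for a small stochastic binary network: every unit computes its net input from the current global state, and all units are then resampled at once. The three-unit weights and thresholds below are arbitrary illustrations, not the parameters or the learning algorithm studied in the paper.

```python
import math
import random

def synchronous_step(states, weights, thresholds, temperature=1.0, rng=random):
    """One synchronous (parallel) update of all units.

    Every unit reads the *current* global state and all units are resampled
    simultaneously, in contrast with asynchronous (one-unit-at-a-time)
    Glauber dynamics.
    """
    new_states = []
    for i in range(len(states)):
        net = sum(weights[i][j] * states[j]
                  for j in range(len(states))) - thresholds[i]
        p_on = 1.0 / (1.0 + math.exp(-net / temperature))  # logistic probability
        new_states.append(1 if rng.random() < p_on else 0)
    return new_states

# Illustrative three-unit network (symmetric weights, zero self-connections).
random.seed(0)
weights = [[0.0, 0.5, -0.3],
           [0.5, 0.0, 0.8],
           [-0.3, 0.8, 0.0]]
thresholds = [0.0, 0.0, 0.0]
state = [1, 0, 1]
for _ in range(5):
    state = synchronous_step(state, weights, thresholds)
```

Because every unit can be updated independently within a step, the whole update parallelises trivially, which is the advantage of synchronous dynamics mentioned above.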
Keywords: Boltzmann machine, Synchronous learning, Markov process, Gradient descent
- [Hinton et al. 1986] G.E. Hinton and T.J. Sejnowski. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition (D.E. Rumelhart and J.L. McClelland, eds.). Cambridge, MA: MIT Press, 1986.
- [Iturrarán 1996] Ursula Iturrarán and Antonia J. Jones. Learning a Markov process with a synchronous Boltzmann machine. Research Report. Available from: Department of Computer Science, University of Wales, Cardiff, PO Box 916, CF2 3XF, UK, 1996.
- [Kullback 1959] S. Kullback. Information Theory and Statistics. Wiley, New York, 1959.
- [Little 1974] W.A. Little. The existence of persistent states in the brain. Mathematical Biosciences 19:101, 1974.
- [Little 1978] W.A. Little and G.L. Shaw. Analytic study of the memory storage capability of a neural network. Mathematical Biosciences 39:281–290, 1978.
- [Peretto 1984] P. Peretto. Collective properties of neural networks: A statistical physics approach. Biological Cybernetics 50:51–62, 1984.