Contrastive Learning in Random Neural Networks and its Relation to Gradient-Descent Learning
We apply Contrastive Hebbian Learning to the recurrent Random Neural Network model. Under this learning rule, weights are adapted based on the difference between the network's dynamics when it runs input-free and when a teaching signal is imposed on it. We show that, when overall firing rates are held constant, the resulting weight changes are a first-order approximation to the gradient-descent algorithm for quadratic error minimization. The algorithm requires no matrix inversions, and no constraints are placed on network connectivity. A learning result on the XOR problem is presented as an empirical confirmation of these ideas.
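The two-phase rule summarized above can be sketched in a few lines. The sketch below is a generic Contrastive Hebbian step, assuming a simple sigmoid rate relaxation as an illustrative stand-in for the Random Neural Network's steady-state equations; the function names (`settle`, `chl_update`) and the learning rate `eta` are hypothetical, not taken from the paper.

```python
import numpy as np

def settle(W, x, clamp_idx=None, clamp_val=None, steps=100):
    """Relax activities toward a fixed point of a simple sigmoid
    rate dynamics (an assumed stand-in for the RNN steady state).
    If clamp_idx is given, those units are held at clamp_val,
    i.e. the teaching signal is imposed."""
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-(W @ x)))
        if clamp_idx is not None:
            x[clamp_idx] = clamp_val
    return x

def chl_update(W, x0, clamp_idx, target, eta=0.1):
    """One Contrastive Hebbian step: a Hebbian term from the
    clamped (teacher) phase minus an anti-Hebbian term from the
    free (input-free) phase."""
    x_free = settle(W, x0.copy())
    x_clamped = settle(W, x0.copy(), clamp_idx, target)
    return W + eta * (np.outer(x_clamped, x_clamped)
                      - np.outer(x_free, x_free))

# One illustrative update on a small random recurrent network,
# clamping the last unit to a target value of 1.0.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
x0 = rng.random(4)
W_new = chl_update(W, x0, clamp_idx=[3], target=np.array([1.0]))
```

When the free and clamped phases settle to the same activities, the two outer products cancel and the update vanishes, which is the sense in which the rule drives the free dynamics toward the teacher's; no matrix inversion appears anywhere in the step.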