
Contrastive Learning in Random Neural Networks and its Relation to Gradient-Descent Learning

  • Alexandre Romariz
  • Erol Gelenbe
Conference paper

Abstract

We apply Contrastive Hebbian Learning to the recurrent Random Neural Network model. Under this learning rule, weight adaptation is performed based on the difference between the network’s dynamics when it runs free of any teaching input and when a teaching signal is imposed. We show that the resulting weight changes are a first-order approximation to the gradient-descent algorithm for quadratic error minimization when overall firing rates are constant. The algorithm requires no matrix inversions, and no constraints are placed on network connectivity. A learning result on the XOR problem is presented as an empirical confirmation of these ideas.
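
The abstract does not spell out the update equations, so the sketch below is only a hedged illustration of the general idea: Gelenbe's random neural network relaxed to its fixed point by iteration, run once in a free phase and once in a teacher-clamped phase, with a generic contrastive Hebbian weight change formed from the difference of the two. The function names (rnn_rates, chl_step), the learning rate, the clipping, and the way the update is split across excitatory and inhibitory weights are illustrative assumptions, not the rule derived in the paper.

    # Minimal sketch (assumed details, not the authors' exact algorithm):
    # a Gelenbe-style random neural network solved by fixed-point iteration,
    # plus a generic contrastive Hebbian update comparing a free phase with
    # a teacher-clamped phase.
    import numpy as np

    def rnn_rates(W_plus, W_minus, lam_plus, lam_minus, clamp=None, n_iter=200):
        """Fixed-point iteration for the RNN firing rates q_i.

        q_i = lambda+_i / (r_i + lambda-_i), where
        lambda+_i = Lambda_i + sum_j q_j w+_{ji}   (excitatory arrivals)
        lambda-_i = lam_i    + sum_j q_j w-_{ji}   (inhibitory arrivals)
        r_i       = sum_j (w+_{ij} + w-_{ij})      (total firing rate)
        `clamp` optionally pins selected neurons to teacher-imposed values.
        """
        n = W_plus.shape[0]
        r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
        q = np.zeros(n)
        for _ in range(n_iter):
            excite = lam_plus + q @ W_plus       # sums over presynaptic j
            inhibit = lam_minus + q @ W_minus
            q = np.clip(excite / (r + inhibit + 1e-12), 0.0, 1.0)
            if clamp is not None:
                for i, val in clamp.items():     # clamped (teaching) phase
                    q[i] = val
        return q

    def chl_step(W_plus, W_minus, lam_plus, lam_minus, clamp, eta=0.05):
        """One contrastive Hebbian step: clamped-phase minus free-phase correlations."""
        q_free = rnn_rates(W_plus, W_minus, lam_plus, lam_minus)
        q_clamped = rnn_rates(W_plus, W_minus, lam_plus, lam_minus, clamp=clamp)
        delta = np.outer(q_clamped, q_clamped) - np.outer(q_free, q_free)
        np.fill_diagonal(delta, 0.0)                      # no self-connections
        W_plus = np.maximum(W_plus + eta * delta, 0.0)    # excitatory weights, kept >= 0
        W_minus = np.maximum(W_minus - eta * delta, 0.0)  # inhibitory weights, kept >= 0
        return W_plus, W_minus

Consistent with the claim that no matrix inversions are required, a step of this kind needs only two relaxations of the network (free and clamped) and an outer product of the resulting firing rates; training XOR, as in the reported experiment, would then amount to clamping the output neuron to its target rate for each input pattern.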


Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  1. Integrated Circuits and Devices Lab, Electrical Engineering Department, Universidade de Brasília, Brasília, Brazil
  2. Intelligent Systems and Networks Group, Department of Electrical Engineering, Imperial College London, London, UK
