A Novel Stochastic Learning Rule for Neural Networks

  • Frank Emmert-Streib
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


The purpose of this article is to introduce a novel stochastic Hebb-like learning rule for neural networks that combines features of unsupervised (Hebbian) and supervised (reinforcement) learning. The learning rule is stochastic with respect to the selection of the time points at which a synaptic modification is induced by simultaneous activation of the pre- and postsynaptic neuron. Moreover, the learning rule affects not only the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also more remote synapses of the pre- and postsynaptic neurons. This more complex form of plasticity, called heterosynaptic plasticity, has recently attracted experimental interest in neurobiology. Our learning rule is motivated by these experimental findings and gives a qualitative explanation of this kind of synaptic plasticity. Additionally, we present numerical results demonstrating that our learning rule works well in training neural networks, even in the presence of noise.
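
The abstract describes the rule only qualitatively, so the following Python sketch is purely illustrative: it shows how a reward-modulated, stochastic Hebb-like update with an additional heterosynaptic component could look in a toy winner-take-all network. The architecture, the update probability P_UPDATE, the step sizes ETA_HOMO and ETA_HETERO, and the parity task are assumptions chosen for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: one hidden layer with winner-take-all activation
# (hypothetical choice, not taken from the paper).
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.uniform(0.0, 1.0, (n_hid, n_in))   # input -> hidden synaptic weights
W2 = rng.uniform(0.0, 1.0, (n_out, n_hid))  # hidden -> output synaptic weights

P_UPDATE = 0.3     # probability that a co-activation triggers a change (assumed)
ETA_HOMO = 0.10    # homosynaptic step size (assumed)
ETA_HETERO = 0.02  # weaker heterosynaptic step size (assumed)

def forward(x):
    """Winner-take-all propagation: the most strongly driven unit fires."""
    h = int(np.argmax(W1 @ x))      # winning hidden neuron
    o = int(np.argmax(W2[:, h]))    # winning output neuron
    return h, o

def stochastic_hebb_update(x, h, o, reward):
    """Sketch of a stochastic Hebb-like rule with a heterosynaptic component.

    With probability P_UPDATE the co-activated synapses (active inputs -> winning
    hidden neuron, winning hidden neuron -> winning output neuron) are changed in
    a direction set by the scalar reinforcement signal `reward` (+1 correct,
    -1 wrong); a weaker change of the same sign is spread to the remaining
    synapses of the neurons involved (heterosynaptic plasticity).
    """
    if rng.random() >= P_UPDATE:
        return                                    # no modification at this time point
    pre = np.flatnonzero(x > 0)                   # active presynaptic input neurons
    # Homosynaptic change at the directly co-activated synapses.
    W1[h, pre] += ETA_HOMO * reward
    W2[o, h] += ETA_HOMO * reward
    # Heterosynaptic change at the other synapses onto the same neurons.
    W1[h, np.setdiff1d(np.arange(n_in), pre)] += ETA_HETERO * reward
    W2[o, np.setdiff1d(np.arange(n_hid), h)] += ETA_HETERO * reward

# Toy task: map the index of the active input bit to its parity.
for step in range(2000):
    k = int(rng.integers(n_in))
    x = np.zeros(n_in)
    x[k] = 1.0
    h, o = forward(x)
    stochastic_hebb_update(x, h, o, reward=1.0 if o == k % 2 else -1.0)

print("final accuracy:",
      np.mean([forward(np.eye(n_in)[k])[1] == k % 2 for k in range(n_in)]))
```

The essential point of the sketch is the separation of the two plasticity components: a homosynaptic change at the co-activated synapses, applied only at stochastically selected time points, and a weaker heterosynaptic change of the same sign at the other synapses of the neurons involved.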


Keywords: Neural Network, Hidden Layer, Synaptic Weight, Postsynaptic Neuron, Reinforcement Signal





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Frank Emmert-Streib, Institut für Theoretische Physik, Universität Bremen, Bremen, Germany
