
Hebbian Learning Meets Deep Convolutional Neural Networks

  • Giuseppe Amato
  • Fabio Carrara
  • Fabrizio Falchi
  • Claudio Gennaro
  • Gabriele Lagani
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11751)

Abstract

Neural networks are said to be biologically inspired because they mimic the behavior of real neurons. However, several processes in state-of-the-art neural networks, including Deep Convolutional Neural Networks (DCNNs), are far from those found in animal brains. One relevant difference is the training process. In state-of-the-art artificial neural networks, training is based on backpropagation and Stochastic Gradient Descent (SGD) optimization. However, studies in neuroscience strongly suggest that such processes do not occur in the biological brain. Rather, according to neuroscientists, learning methods based on Spike-Timing-Dependent Plasticity (STDP) or the Hebbian learning rule appear more plausible. In this paper, we investigate the use of the Hebbian learning rule when training Deep Neural Networks for image classification, proposing a novel weight update rule for shared kernels in DCNNs. We perform experiments on the CIFAR-10 dataset in which we employ Hebbian learning, along with SGD, to train parts of the model or whole networks for the task of image classification, and we discuss their performance thoroughly, considering both effectiveness and efficiency.
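The abstract does not spell out the proposed update rule, so the sketch below is only a hedged illustration of the general idea: a classic Oja-style Hebbian update applied to a convolutional layer, where the per-position updates from every receptive field are averaged into the single shared kernel. The function name hebbian_conv_update and the learning-rate parameter lr are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def hebbian_conv_update(weight, x, lr=0.01):
        """Oja-style Hebbian update for shared conv kernels (a sketch, not the paper's exact rule).

        weight: conv kernels, shape (O, I, kH, kW)
        x:      input batch,  shape (B, I, H, W)
        """
        O, I, kH, kW = weight.shape
        # Extract every receptive field: (B, I*kH*kW, L), with L spatial positions.
        patches = F.unfold(x, kernel_size=(kH, kW))
        B, D, L = patches.shape
        pre = patches.permute(0, 2, 1).reshape(B * L, D)   # presynaptic vectors
        w = weight.reshape(O, D)                           # flattened shared kernels
        post = pre @ w.t()                                 # postsynaptic responses
        # Oja's rule: dw = y*x - y^2*w, summed over batch and spatial positions,
        # then averaged so all positions sharing a kernel yield one update.
        hebb = post.t() @ pre                              # sum of y x^T, shape (O, D)
        decay = (post ** 2).sum(dim=0).unsqueeze(1) * w    # sum of y^2, times w
        dw = (hebb - decay) / (B * L)
        return weight + lr * dw.reshape(O, I, kH, kW)

    # Example: update 8 random 5x5 kernels on a batch of CIFAR-sized images.
    w = 0.1 * torch.randn(8, 3, 5, 5)
    imgs = torch.randn(16, 3, 32, 32)
    w = hebbian_conv_update(w, imgs)

Averaging the per-position deltas is one natural way to reconcile Hebbian locality with weight sharing; the paper's actual rule may aggregate the contributions differently.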

Keywords

Hebbian learning · Deep learning · Computer vision · Convolutional neural networks

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. ISTI-CNR, Pisa, Italy
  2. University of Pisa, Pisa, Italy
