
Improving multi-layer spiking neural networks by incorporating brain-inspired rules

  • Research Paper
  • Published in Science China Information Sciences

Abstract

This paper introduces seven brain-inspired rules, each grounded in experimental findings about the brain, to improve multi-layer spiking neural networks (SNNs). The dynamics of neurons, synapses, and plasticity models are considered major characteristics of information processing in biological neural networks; incorporating these models and rules into traditional SNNs is therefore expected to improve their efficiency. The proposed SNN model is divided into three parts: the spike generation layer, the hidden layers, and the output layer. In the spike generation layer, non-temporal signals such as static images are converted into spike trains by both local and global feature-conversion methods. In the hidden layers, rules governing dynamic neurons, dynamic synapses, the proportion of different neuron types, and several spike-timing-dependent plasticity (STDP) models are incorporated. In the output layer, excitatory neurons perform classification while inhibitory neurons implement winner-take-all (WTA) competition. The MNIST dataset is used to validate the classification accuracy of the proposed model. Experimental results show that higher accuracy is achieved as more carefully selected brain-inspired rules are integrated into the learning procedure.
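The abstract only outlines the spike generation layer; as a rough illustration of the general idea, the sketch below converts a static image into spike trains by Poisson rate coding, where brighter pixels fire more often. The function name, parameters, and constants are illustrative assumptions, not the authors' implementation, and the paper's local and global feature-conversion methods are not reproduced here.

    # Hedged sketch of rate-based spike generation (not the authors' exact method).
    import numpy as np

    def poisson_encode(image, duration_ms=100, dt_ms=1.0, max_rate_hz=100.0):
        """Return a (time_steps, num_pixels) boolean spike array for an image scaled to [0, 1]."""
        rates_hz = image.reshape(-1) * max_rate_hz       # brighter pixels fire at higher rates
        p_spike = rates_hz * dt_ms / 1000.0              # spike probability per time step
        steps = int(duration_ms / dt_ms)
        return np.random.rand(steps, rates_hz.size) < p_spike

    # Example: encode a random 28x28 "MNIST-like" image over 100 ms
    image = np.random.rand(28, 28)
    spikes = poisson_encode(image)
    print(spikes.shape)                                  # (100, 784)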

Highlights

This paper summarizes seven brain-inspired learning rules and applies them to improve spiking neural networks. These rules are all derived from experimental studies of the biological brain, and each reflects a different aspect of learning in biological networks, such as the dynamic allocation of neurons, the adaptive growth and elimination of synapses, different synaptic plasticity mechanisms (e.g., different types of spike-timing-dependent plasticity), the regulation of learning by background network noise, and the modulation of learning by the ratio of excitatory to inhibitory neurons. By combining these brain-inspired rules, the paper verifies experimentally that deep spiking neural networks achieve increasingly better classification performance as more carefully selected brain-inspired rules are introduced.
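As an illustration of one of the plasticity mechanisms mentioned above, the sketch below implements a standard pair-based STDP weight update with exponential potentiation and depression windows. This is a generic textbook form, not the specific STDP variants or parameter values used in the paper; all names and constants are illustrative assumptions.

    # Hedged sketch of a pair-based STDP update (generic form, not the paper's exact rule).
    import numpy as np

    def stdp_update(w, t_pre_ms, t_post_ms, a_plus=0.01, a_minus=0.012,
                    tau_plus_ms=20.0, tau_minus_ms=20.0, w_min=0.0, w_max=1.0):
        """Update one synaptic weight from a single pre/post spike-time pair."""
        dt = t_post_ms - t_pre_ms
        if dt > 0:                                   # pre fires before post: potentiation
            w += a_plus * np.exp(-dt / tau_plus_ms)
        elif dt < 0:                                 # post fires before pre: depression
            w -= a_minus * np.exp(dt / tau_minus_ms)
        return float(np.clip(w, w_min, w_max))

    # Example: a causal pairing strengthens the synapse, an anti-causal one weakens it
    print(stdp_update(0.5, t_pre_ms=10.0, t_post_ms=15.0))   # slightly above 0.5
    print(stdp_update(0.5, t_pre_ms=15.0, t_post_ms=10.0))   # slightly below 0.5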

Acknowledgments

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02060007) and by the Beijing Municipal Commission of Science and Technology (Grant Nos. Z151100000915070, Z161100000216124).

Author information

Correspondence to Yi Zeng or Tielin Zhang.

Cite this article

Zeng, Y., Zhang, T. & Xu, B. Improving multi-layer spiking neural networks by incorporating brain-inspired rules. Sci. China Inf. Sci. 60, 052201 (2017). https://doi.org/10.1007/s11432-016-0439-4
