
SIES: A Novel Implementation of Spiking Convolutional Neural Network Inference Engine on Field-Programmable Gate Array

  • Regular Paper
Journal of Computer Science and Technology

Abstract

Neuromorphic computing is widely regarded as a promising direction for machine learning, and it offers a new approach to cognitive computing. Inspired by the low power consumption and inherent parallelism of spiking neural networks (SNNs), many groups have tried to implement SNNs on hardware platforms. However, training SNNs directly with neuromorphic algorithms remains inefficient. To address this, Pfeiffer et al. proposed a method that solves the problem with the help of a DNN (deep neural network): a well-trained DNN can easily be converted into an SCNN (spiking convolutional neural network). So far, little work has focused on hardware acceleration of SCNNs. The motivation of this paper is to design an SNN processor that accelerates inference for SNNs obtained by this DNN-to-SNN method. We propose SIES (Spiking Neural Network Inference Engine for SCNN Accelerating). It uses a systolic array to compute membrane potential increments, and it integrates an optional max-pooling hardware module to reduce data movement between the host and the SIES. We also design a hardware data setup mechanism for the convolutional layers on the SIES that minimizes the time spent preparing input spikes. We implement the SIES on an FPGA XCVU440. It supports up to 4,000 neurons and 256,000 synapses, runs at a working frequency of 200 MHz, and achieves a peak performance of 1.5625 TOPS.
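To make the abstract's "membrane potential increments computation" concrete, here is a minimal software sketch (not the SIES hardware itself, and with illustrative names like `if_layer_step` that are not from the paper): in a converted SCNN, input spikes at each timestep are binary, so the per-layer increment is a spike-weight accumulation, which is the matrix operation the paper maps onto a systolic array; an integrate-and-fire threshold then produces output spikes.

```python
# Illustrative sketch of one timestep of an integrate-and-fire (IF) layer,
# as used in DNN-to-SNN converted networks. Assumed names and the soft-reset
# choice are for illustration only, not taken from the SIES paper.
import numpy as np

def if_layer_step(spikes_in, weights, v_mem, threshold=1.0):
    """One timestep of an IF layer.

    spikes_in : (n_in,)  binary spike vector for this timestep
    weights   : (n_out, n_in) synaptic weights taken from the trained DNN
    v_mem     : (n_out,) membrane potentials carried across timesteps
    """
    # Membrane potential increment: spikes select weight columns, so this
    # is the accumulation a systolic array can stream through its PEs.
    v_mem = v_mem + weights @ spikes_in
    spikes_out = (v_mem >= threshold).astype(float)   # fire where threshold crossed
    v_mem = np.where(spikes_out > 0, v_mem - threshold, v_mem)  # soft reset
    return spikes_out, v_mem

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(4, 8))          # toy layer: 8 inputs, 4 neurons
v = np.zeros(4)                                # potentials start at rest
s_in = (rng.random(8) > 0.5).astype(float)     # one timestep of binary spikes
s_out, v = if_layer_step(s_in, W, v)
```

Running this over many timesteps and counting output spikes approximates the activations of the original DNN layer, which is the premise of the conversion method the paper builds on.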


References

  1. Akopyan F, Sawada J, Cassidy A, Alvarez-Icaza R, Arthur J, Merolla P, Imam N, Nakamura Y, Datta P, Nam G J. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2015, 34(10): 1537-1557.


  2. Geddes J, Lloyd S, Simpson A C et al. NeuroGrid: Using grid technology to advance neuroscience. In Proc. the 18th IEEE Symposium on Computer-Based Medical Systems, June 2005, pp.570-572.

  3. Schemmel J, Grübl A, Hartmann S et al. Live demonstration: A scaled-down version of the BrainScaleS wafer-scale neuromorphic system. In Proc. the 2012 IEEE International Symposium on Circuits Systems, May 2012, p.702.

  4. Furber S B, Lester D R, Plana L A, Garside J D, Painkras E, Temple S, Brown A D. Overview of the SpiNNaker system architecture. IEEE Transactions on Computers, 2013, 62(12): 2454-2467.


  5. Davies M, Jain S, Liao Y et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 2018, 38(1): 82-99.


  6. Diehl P U, Neil D, Binas J, Cook M, Liu S C, Pfeiffer M. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Proc. the 2015 International Joint Conference on Neural Networks, July 2015.

  7. Rueckauer B, Lungu I A, Hu Y, Pfeiffer M, Liu S C. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience, 2017, 11: Article No. 682.

  8. Rueckauer B, Lungu I A, Hu Y H, Pfeiffer M. Theory and tools for the conversion of analog to spiking convolutional neural networks. arXiv: 1612.04052, 2016. https://arxiv.org/pdf/1612.04052.pdf, Nov. 2019.

  9. Du Z D, Fasthuber R, Chen T S, Ienne P, Li L, Luo T, Feng X B, Chen Y J, Temam O. ShiDianNao: Shifting vision processing closer to the sensor. In Proc. the 42nd ACM/IEEE International Symposium on Computer Architecture, June 2015, pp.92-104.


  10. Guan Y J, Yuan Z H, Sun G Y, Cong J. FPGA-based accelerator for long short-term memory recurrent neural networks. In Proc. the 22nd Asia and South Pacific Design Automation Conference, January 2017, pp.629-634.

  11. Zhou Y M, Jiang J F. An FPGA-based accelerator implementation for deep convolutional neural networks. In Proc. the 4th International Conference on Computer Science Network Technology, December 2016, pp.829-832.

  12. Neil D, Liu S C. Minitaur, an event-driven FPGA-based spiking network accelerator. IEEE Transactions on Very Large Scale Integration Systems, 2014, 22(12): 2621-2628.


  13. Wang R, Thakur C S, Cohen G, Hamilton T J, Tapson J, van Schaik A. Neuromorphic hardware architecture using the neural engineering framework for pattern recognition. IEEE Transactions on Biomedical Circuits and Systems, 2017, 11(3): 574-584.


  14. Glackin B, Mcginnity T M, Maguire L P, Wu Q X, Belatreche A. A novel approach for the implementation of large scale spiking neural networks on FPGA hardware. In Lecture Notes in Computer Science 3512, Cabestany J, Prieto A, Sandoval F (eds.), Springer, 2005, pp.552-563.

  15. Cheung K, Schultz S R, Luk W. A large-scale spiking neural network accelerator for FPGA systems. In Proc. the 22nd International Conference on Artificial Neural Networks, September 2012, pp.113-130.

  16. Benton A L. Foundations of physiological psychology. Neurology, 1968, 18(6): 609-612.


  17. Hodgkin A L, Huxley A F, Katz B. Measurement of current-voltage relations in the membrane of the giant axon of Loligo. J. Physiol., 1952, 116(4): 424-448.


  18. Izhikevich E M. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 2003, 14(6): 1569-1572.


  19. Brunel N, van Rossum M C W. Lapicque’s 1907 paper: From frogs to integrate-and-fire. Biological Cybernetics, 2007, 97(5/6): 337-339.


  20. Liu Y H, Wang X J. Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. Journal of Computational Neuroscience, 2001, 10(1): 25-45.


  21. Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 2005, 94(5): 3637-3642.


  22. Paninski L, Pillow J W, Simoncelli E P. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation, 2004, 16(12): 2533-2561.


  23. Tsumoto K, Kitajima H, Yoshinaga T, Aihara K, Kawakami H. Bifurcations in Morris-Lecar neuron model. Neurocomputing, 2006, 69(4-6): 293-316.


  24. Linares-Barranco B, Sanchez-Sinencio E, Rodriguez-Vazquez A, Huertas J L. A CMOS implementation of the Fitzhugh-Nagumo neuron model. IEEE Journal of Solid-State Circuits, 1991, 26(7): 956-965.


  25. Yadav R N, Kalra P K, John J. Time series prediction with single multiplicative neuron model. Applied Soft Computing, 2007, 7(4): 1157-1163.


  26. Maguire L P, Mcginnity T M, Glackin B, Ghani A, Belatreche A, Harkin J. Challenges for large-scale implementations of spiking neural networks on FPGAs. Neurocomputing, 2007, 71(1): 13-29.


  27. Gerstner W, Kistler W. Spiking Neuron Models: Single Neurons, Populations, Plasticity (1st edition). Cambridge University Press, 2002.

  28. Gerstner W. Spiking neuron models. In Encyclopedia of Neuroscience, Squire L R (ed.), Academic Press, 2009, pp.277-280.

  29. Lopresti D P. P-NAC: A systolic array for comparing nucleic acid sequences. Computer, 1987, 20(7): 98-99.


  30. Samajdar A, Zhu Y, Whatmough P, Mattina M, Krishna T. SCALE-Sim: Systolic CNN accelerator simulator. arXiv preprint, 2018.

  31. Jouppi N P, Young C, Patil N et al. In-datacenter performance analysis of a tensor processing unit. In Proc. International Symposium on Computer Architecture, May 2017.

  32. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In Proc. the 3rd International Conference on Learning Representations, May 2015, Article No. 4.

  33. Shen J C, Ma D, Gu Z H, Zhang M, Zhu X L, Xu X Q, Xu Q, Shen Y J, Pan G. Darwin: A neuromorphic hardware co-processor based on spiking neural networks. SCIENCE CHINA Information Sciences, 2016, 59(2): Article No. 023401.

  34. Kousanakis E, Dollas A, Sotiriades E et al. An architecture for the acceleration of a hybrid leaky integrate and fire SNN on the convey HC-2ex FPGA-based processor. In Proc. the 25th IEEE International Symposium on Field-programmable Custom Computing Machines, April 2017, pp.56-63.

  35. Fang H, Shrestha A, Ma D et al. Scalable NoC-based neuromorphic hardware learning and inference. arXiv:1810.09233, 2018. https://arxiv.org/pdf/1810.09233v1.pdf, Dec. 2019.

  36. Cheung K, Schultz S R, Luk W. NeuroFlow: A general purpose spiking neural network simulation platform using customizable processors. Frontiers in Neuroscience, 2015, 9: Article No. 516.

  37. Albericio J, Judd P, Hetherington T et al. Cnvlutin: Ineffectual-neuron-free deep neural network computing. ACM SIGARCH Computer Architecture News, 2016, 44(3): 1-13.


  38. Guo S, Wang L, Chen B, Dou Q. An overhead-free max-pooling method for SNN. IEEE Embedded Systems Letters. https://doi.org/10.1109/LES.2019.2919244.



Author information


Corresponding author

Correspondence to Lei Wang.

Electronic supplementary material

ESM 1

(PDF 310 kb)



Cite this article

Wang, SQ., Wang, L., Deng, Y. et al. SIES: A Novel Implementation of Spiking Convolutional Neural Network Inference Engine on Field-Programmable Gate Array. J. Comput. Sci. Technol. 35, 475–489 (2020). https://doi.org/10.1007/s11390-020-9686-z

