
GPU4SNN: GPU-Based Acceleration for Spiking Neural Network Simulations

  • Conference paper
Parallel Processing and Applied Mathematics (PPAM 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13826)


Abstract

Spiking Neural Networks (SNNs) are the most common and widely used artificial neural network models in bio-inspired computing. However, SNN simulation requires substantial computational resources, so multiple state-of-the-art (SOTA) algorithms explore parallel hardware-based implementations for SNN simulation, such as the use of Graphics Processing Units (GPUs). We identify inefficiencies in the hardware-resource utilization of the current SOTA implementations for SNN simulation, namely the Neuron (N)-, Synapse (S)-, and Action Potential (AP)-algorithms. This work proposes and implements two novel algorithms on an NVIDIA Ampere A100 GPU: the Active Block (AB)- and Single Kernel Launch (SKL)-algorithms. The proposed algorithms consider the computational resources available on both the Central Processing Unit (CPU) and the GPU, leading to a balanced workload for SNN simulation. Our SKL-algorithm removes the CPU bottleneck completely. The average speedups obtained by the best of the proposed algorithms are factors of 0.83×, 1.36×, and 1.55× over the SOTA algorithms for firing modes 0, 1, and 2, respectively. The maximum speedups obtained are factors of 1.9×, 2.1×, and 2.1× for modes 0, 1, and 2, respectively.
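The chapter text itself is paywalled here, but the workload being accelerated is well defined: GPU SNN simulators in this line of work commonly integrate the Izhikevich neuron model [21]. As a point of reference for the per-neuron, per-time-step update that each GPU thread performs, the following is a minimal single-neuron CPU sketch in Python. It is an illustrative sketch, not the paper's implementation: the "regular spiking" parameters (a, b, c, d) are the defaults from Izhikevich (2003), and the constant input current `I` is an assumption for demonstration.

```python
def simulate_izhikevich(I=10.0, steps=1000, dt=0.5,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of one Izhikevich neuron [21] under constant
    input current I (pA-scale, dimensionless in the original model).
    Returns the list of spike times in ms."""
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    spikes = []
    for step in range(steps):
        # Izhikevich (2003) dynamics:
        #   dv/dt = 0.04 v^2 + 5 v + 140 - u + I
        #   du/dt = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record time, then reset v and bump u
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

if __name__ == "__main__":
    times = simulate_izhikevich()
    print(f"{len(times)} spikes in {1000 * 0.5:.0f} ms")
```

In a GPU simulator this loop body becomes the per-thread kernel work; the algorithmic question the paper addresses is how to schedule that work (and the subsequent synaptic spike propagation) across blocks and kernel launches without leaving the CPU on the critical path.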


Notes

  1. However, a number of studies that rigorously analyze the performance aspects of the different models suggest otherwise. The interested reader is referred to [31] and references therein.

References

  1. Abadi, M., et al.: TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2016), pp. 265–283 (2016)

  2. Ahmad, N., Isbister, J.B., Smithe, T.S.C., Stringer, S.M.: Spike: a GPU optimised spiking neural network simulator. bioRxiv, p. 461160 (2018)

  3. Balaji, A., et al.: PyCARL: a PyNN interface for hardware-software co-simulation of spiking neural network. arXiv preprint arXiv:2003.09696 (2020)

  4. Barrett, D.G., Morcos, A.S., Macke, J.H.: Analyzing biological and artificial neural networks: challenges with opportunities for synergy? Curr. Opin. Neurobiol. 55, 55–64 (2019). https://doi.org/10.1016/j.conb.2019.01.007. Machine Learning, Big Data, and Neuroscience

  5. Beyeler, M., Carlson, K.D., Chou, T.S., Dutt, N., Krichmar, J.L.: CARLsim 3: a user-friendly and highly optimized library for the creation of neurobiologically detailed spiking neural networks. In: 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2015). https://doi.org/10.1109/IJCNN.2015.7280424

  6. Carnevale, N.T., Hines, M.L.: The NEURON Book. Cambridge University Press, Cambridge (2006). https://doi.org/10.1017/CBO9780511541612

  7. Davies, M., et al.: Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1), 82–99 (2018). https://doi.org/10.1109/MM.2018.112130359

  8. DeBole, M.V., et al.: Truenorth: accelerating from zero to 64 million neurons in 10 years. Computer 52(5), 20–29 (2019). https://doi.org/10.1109/MC.2019.2903009

  9. Demin, V., et al.: Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network. Neural Netw. 134, 64–75 (2021). https://doi.org/10.1016/j.neunet.2020.11.005

  10. Diamant, E.: Designing artificial cognitive architectures: brain inspired or biologically inspired? Procedia Comput. Sci. 145, 153–157 (2018)

  11. Eppler, J., Helias, M., Muller, E., Diesmann, M., Gewaltig, M.O.: PyNEST: a convenient interface to the NEST simulator. Front. Neuroinform. 2, 12 (2008). https://doi.org/10.3389/neuro.11.012.2008

  12. Fidjeland, A.K., Shanahan, M.P.: Accelerated simulation of spiking neural networks using GPUs. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2010)

  13. Furber, S.B., Galluppi, F., Temple, S., Plana, L.A.: The SpiNNaker project. Proc. IEEE 102(5), 652–665 (2014)

  14. Ghosh-Dastidar, S., Adeli, H.: Third Generation Neural Networks: Spiking Neural Networks. In: Yu, W., Sanchez, E.N. (eds.) Advances in Computational Intelligence, vol. 61, pp. 167–178. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03156-4_17

  15. Golosio, B., Tiddia, G., De Luca, C., Pastorelli, E., Simula, F., Paolucci, P.S.: Fast simulations of highly-connected spiking cortical models using GPUs. Front. Comput. Neurosci. 15, 13 (2021). https://doi.org/10.3389/fncom.2021.627620

  16. Gupta, K., Stuart, J.A., Owens, J.D.: A study of persistent threads style GPU programming for GPGPU workloads. In: 2012 Innovative Parallel Computing (InPar), pp. 1–14 (2012). https://doi.org/10.1109/InPar.2012.6339596

  17. Heaven, D.: Why deep-learning AIs are so easy to fool. Nature 574(7777), 163–166 (2019). https://doi.org/10.1038/d41586-019-03013-5

  18. Hoang, R.V., Tanna, D., Jayet Bray, L.C., Dascalu, S.M., Harris, F.C., Jr.: A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling. Front. Neuroinform. 7, 19 (2013)

  19. Hodgkin, A.L., Huxley, A.F.: A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117(4), 500 (1952)

  20. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018). https://doi.org/10.1109/CVPR.2018.00745

  21. Izhikevich, E.M.: Simple model of spiking neurons. IEEE Trans. Neural Networks 14(6), 1569–1572 (2003). https://doi.org/10.1109/TNN.2003.820440

  22. Izhikevich, E.M.: Which model to use for cortical spiking neurons? IEEE Trans. Neural Networks 15(5), 1063–1070 (2004)

  23. Kasap, B., van Opstal, A.J.: Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations. Neurocomputing 302, 55–65 (2018)

  24. Harris, M.: CUDA Pro Tip: Write Flexible Kernels with Grid-Stride Loops. NVIDIA Developer Blog (2013)

  25. Neftci, E.O., Mostafa, H., Zenke, F.: Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 36(6), 51–63 (2019)

  26. Oreshkin, B.N., Carpov, D., Chapados, N., Bengio, Y.: N-beats: neural basis expansion analysis for interpretable time series forecasting. In: International Conference on Learning Representations (2020)

  27. Paszke, A., et al.: Automatic differentiation in PyTorch. OpenReview (2017)

  28. Roy, K., Jaiswal, A., Panda, P.: Towards spike-based machine intelligence with neuromorphic computing. Nature 575(7784), 607–617 (2019)

  29. Schrittwieser, J., et al.: Mastering Atari, Go, chess and shogi by planning with a learned model. Nature 588(7839), 604–609 (2020). https://doi.org/10.1038/s41586-020-03051-4

  30. Stimberg, M., Brette, R., Goodman, D.F.: Brian 2, an intuitive and efficient neural simulator. eLife 8, e47314 (2019). https://doi.org/10.7554/eLife.47314

  31. Valadez-Godínez, S., Sossa, H., Santiago-Montero, R.: On the accuracy and computational cost of spiking neuron implementation. Neural Netw. 122, 196–217 (2020)

  32. Woźniak, S., Pantazi, A., Bohnstingl, T., Eleftheriou, E.: Deep learning incorporating biologically inspired neural dynamics and in-memory computing. Nat. Mach. Intell. 2(6), 325–336 (2020). https://doi.org/10.1038/s42256-020-0187-0

  33. Yavuz, E., Turner, J., Nowotny, T.: GeNN: a code generation framework for accelerated brain simulations. Sci. Rep. 6(1), 1–14 (2016)

  34. Yavuz, E., Turner, J., Nowotny, T.: GeNN: a code generation framework for accelerated brain simulations. Sci. Rep. 6(1), 1–14 (2016). https://doi.org/10.1038/srep18854

  35. Zenke, F., et al.: Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron 109(4), 571–575 (2021)


Author information

Corresponding author

Correspondence to Nitin Satpute.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Satpute, N., Hambitzer, A., Aljaberi, S., Aaraj, N. (2023). GPU4SNN: GPU-Based Acceleration for Spiking Neural Network Simulations. In: Wyrzykowski, R., Dongarra, J., Deelman, E., Karczewski, K. (eds) Parallel Processing and Applied Mathematics. PPAM 2022. Lecture Notes in Computer Science, vol 13826. Springer, Cham. https://doi.org/10.1007/978-3-031-30442-2_30

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-30442-2_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30441-5

  • Online ISBN: 978-3-031-30442-2

  • eBook Packages: Computer Science, Computer Science (R0)
