Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware

  • Conference paper in: Artificial Neural Networks and Machine Learning – ICANN 2020 (ICANN 2020)

Abstract

With more and more event-based neuromorphic hardware systems being developed at universities and in industry, there is a growing need to assess their performance with domain-specific measures. In this work, we use the methodology of converting pre-trained non-spiking neural networks to spiking neural networks to evaluate the performance loss and to measure the energy per inference for three neuromorphic hardware systems (BrainScaleS, Spikey, SpiNNaker) and common simulation frameworks for CPU (NEST) and CPU/GPU (GeNN). For the analog hardware, we further apply a re-training technique known as hardware-in-the-loop training to cope with device mismatch. This analysis is performed for five different networks, including three networks found by automated optimization with a neural architecture search framework. We demonstrate that the conversion loss is usually below one percent for digital implementations, and moderately higher for analog systems, which offer the benefit of much lower energy per inference.
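As a rough illustration of the conversion methodology mentioned above (a sketch of the widely used data-based weight normalization for mapping a pre-trained ReLU network onto rate-coded spiking neurons, not the authors' actual pipeline; the function name and calibration procedure are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def normalize_weights(weights, calib_inputs):
    """Rescale each layer's weights by the ratio of the previous and
    current layers' maximum ReLU activations (measured on a calibration
    set), so the scaled network's activations stay in [0, 1] and can be
    represented by bounded firing rates of integrate-and-fire neurons."""
    scaled = []
    prev_max = 1.0          # maximum activation of the previous layer
    a = calib_inputs
    for w in weights:
        a = relu(a @ w)      # forward pass with the ORIGINAL weights
        cur_max = a.max() if a.max() > 0 else 1.0
        scaled.append(w * prev_max / cur_max)
        prev_max = cur_max
    return scaled
```

Running the scaled network on the calibration data then yields activations bounded by 1, which is the property the spiking substitution relies on; the actual conversion loss reported in the paper depends on the target hardware and is measured separately.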


Notes

  1. Here, we use the most recent GeNN from GitHub (end of April 2020).

  2. https://github.com/hbp-unibi/cypress.

  3. The code for this and other work can be found at https://github.com/hbp-unibi/SNABSuite.

  4. https://github.com/JonasDHomburg/LAMARCK_ML.


Funding/Acknowledgment

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7) under grant agreement no. 604102 and the EU’s Horizon 2020 research and innovation programme under grant agreements no. 720270 and 785907 (Human Brain Project, HBP). It has been further supported by the Cluster of Excellence Cognitive Interaction Technology “CITEC” (EXC 277) at Bielefeld University, funded by the German Research Foundation (DFG). Furthermore, we thank the Electronic Vision(s) group at Heidelberg University and the Advanced Processor Technologies Research Group at the University of Manchester for access to their hardware systems and their continuous support, and James Knight from the University of Sussex for support with our GeNN implementation.

Corresponding author

Correspondence to Christoph Ostrau.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Ostrau, C., Homburg, J., Klarhorst, C., Thies, M., Rückert, U. (2020). Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware. In: Farkaš, I., Masulli, P., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol. 12397. Springer, Cham. https://doi.org/10.1007/978-3-030-61616-8_49

  • DOI: https://doi.org/10.1007/978-3-030-61616-8_49

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61615-1

  • Online ISBN: 978-3-030-61616-8

  • eBook Packages: Computer Science; Computer Science (R0)
