Accelerated Analog Neuromorphic Computing

Abstract

This chapter presents the concepts behind the BrainScaleS (BSS) accelerated analog neuromorphic computing architecture. It describes the second-generation BrainScaleS-2 (BSS-2) version and its most recent in silico realization, the HICANN-X Application-Specific Integrated Circuit (ASIC), as it has been developed as part of the neuromorphic computing activities within the European Human Brain Project (HBP). While the first generation is implemented in a 180 nm process, the second generation uses 65 nm technology. This allows the integration of a digital plasticity processing unit, a highly parallel microprocessor built specifically for the computational needs of learning in accelerated analog neuromorphic systems.

The presented architecture is based upon a continuous-time, analog, physical-model implementation of neurons and synapses, resembling an analog neuromorphic accelerator attached to built-in digital compute cores. While the analog part emulates the spike-based dynamics of the neural network in continuous time, the digital cores simulate biological processes happening on a slower timescale, such as structural and parameter changes. Compared to biological timescales, the emulation is highly accelerated, i.e., all time constants are several orders of magnitude smaller than in biology. Programmable ion-channel emulation and inter-compartmental conductances allow the modeling of nonlinear dendrites, back-propagating action potentials, and NMDA and calcium plateau potentials. To extend the usability of the analog accelerator, it also supports vector-matrix multiplication. Thereby, BSS-2 supports inference of deep convolutional networks as well as local learning with complex ensembles of spiking neurons within the same substrate. A prerequisite for successful training is the calibratability of the underlying analog circuits across the full range of process variations. For this purpose, a custom software toolbox has been developed, which facilitates complex calibrated Monte Carlo simulations.
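The vector-matrix multiplication mode mentioned above can be sketched conceptually as a quantized multiply-accumulate followed by a saturating readout. The bit widths and the function below are illustrative assumptions, not the actual BSS-2 parameters:

```python
import numpy as np

def analog_mac_sketch(x, w, x_bits=5, w_bits=6, adc_bits=8):
    """Conceptual model of an analog vector-matrix multiply:
    inputs and weights are quantized to limited hardware
    resolutions (the widths here are illustrative assumptions),
    the multiply-accumulate happens 'physically', and the result
    is read back through an ADC with a limited, saturating range."""
    # Unsigned input activations, limited to x_bits of resolution
    x_q = np.clip(np.round(x), 0, 2**x_bits - 1)
    # Signed synaptic weights, limited to w_bits of resolution
    w_q = np.clip(np.round(w), -2**(w_bits - 1), 2**(w_bits - 1) - 1)
    acc = x_q @ w_q  # idealized analog accumulation
    # ADC saturation: the readout clips to the converter's range
    lo, hi = -2**(adc_bits - 1), 2**(adc_bits - 1) - 1
    return np.clip(acc, lo, hi)
```

Such a model makes the key constraint of analog inference visible: large dot products saturate at the readout, so networks have to be trained or calibrated with the limited dynamic range in mind.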



Notes

  1. The Layer1 data format encodes a neural event as a parallel bit field containing the neuron address and a valid bit. It is real-time data with a temporal resolution of the system clock, which runs at 250 MHz in HICANN-X.
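The bit field described in this note can be sketched as a simple pack/unpack pair. The address width chosen below is an illustrative assumption, not the actual HICANN-X field layout:

```python
ADDR_BITS = 6  # assumed address width; illustrative only

def encode_event(neuron_addr, addr_bits=ADDR_BITS):
    """Pack a neural event into a parallel bit field: the neuron
    address in the low bits and a valid flag directly above it."""
    assert 0 <= neuron_addr < 2**addr_bits
    return (1 << addr_bits) | neuron_addr  # valid bit set

def decode_event(word, addr_bits=ADDR_BITS):
    """Split a received word back into (valid, neuron_addr)."""
    valid = bool((word >> addr_bits) & 1)
    return valid, word & (2**addr_bits - 1)
```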


Acknowledgements

The authors wish to express their gratitude to Andreas Grübl, Yannik Stradmann, Vitali Karasenko, Korbinian Schreiber, Christian Pehle, Ralf Achenbach, Markus Dorn, and Aron Leibfried for their invaluable help and active contributions in the development of the BrainScaleS 2 ASICs and systems.

They also acknowledge the important role their former colleagues Andreas Hartel, Syed Aamir, Gerd Kiene, Matthias Hock, Simon Friedmann, Paul Müller, Laura Kriener, and Timo Wunderlich played in these endeavors.

They also want to thank their collaborators Sebastian Höppner from TU Dresden and Tugba Demirci from EPFL Lausanne for their contributions to the BrainScaleS 2 prototype ASIC.

Very special thanks go to Eric Müller, Arne Emmel, Philipp Spilger, and the whole software development team, as well as Mihai Petrovici, Sebastian Schmitt, and the late Karlheinz Meier for their invaluable advice.

This work has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements no. 604102 (HBP ramp-up), 269921 (BrainScaleS), and 243914 (Brain-i-Nets), from the Horizon 2020 Framework Programme (H2020/2014-2020) under grant agreements no. 720270 and 785907 (HBP SGA1 and SGA2), as well as from the Manfred Stärk Foundation.

Author information


Correspondence to Johannes Schemmel.


Author Contribution

J.S. created the concept, has been the lead architect of the BSS systems, and wrote the manuscript except for Sect. 4, which was written by S.B. S.B. also created the teststand software and conceived the simulations jointly with P.D., who performed the simulations and prepared the results. J.W. performed the measurements for the HAGEN mode and created Fig. 6.10. All authors edited the manuscript together.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Schemmel, J., Billaudelle, S., Dauer, P., Weis, J. (2022). Accelerated Analog Neuromorphic Computing. In: Harpe, P., Makinwa, K.A., Baschirotto, A. (eds) Analog Circuits for Machine Learning, Current/Voltage/Temperature Sensors, and High-speed Communication. Springer, Cham. https://doi.org/10.1007/978-3-030-91741-8_6

  • DOI: https://doi.org/10.1007/978-3-030-91741-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91740-1

  • Online ISBN: 978-3-030-91741-8

  • eBook Packages: Engineering; Engineering (R0)
