
NengoDL: Combining Deep Learning and Neuromorphic Modelling Methods

Software Original Article, published in Neuroinformatics

Abstract

NengoDL is a software framework designed to combine the strengths of neuromorphic modelling and deep learning. NengoDL allows users to construct biologically detailed neural models, intermix those models with deep learning elements (such as convolutional networks), and then efficiently simulate those models in an easy-to-use, unified framework. In addition, NengoDL allows users to apply deep learning training methods to optimize the parameters of biological neural models. In this paper we present basic usage examples, benchmarking, and details on the key implementation elements of NengoDL. More details can be found at https://www.nengo.ai/nengo-dl.
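
As a flavour of this workflow, the following is a minimal sketch that builds a spiking Nengo model, intermixes it with a TensorFlow layer, and simulates the result. It is written against the public NengoDL 3.x API (which postdates this paper), so names such as nengo_dl.Layer are assumptions based on current documentation rather than the listings in the paper itself:

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

# Build a biologically detailed Nengo model and intermix it with a
# deep learning element (a Keras Dense layer wrapped as a TensorNode).
with nengo.Network() as net:
    stim = nengo.Node(np.zeros(2))                         # external input
    ens = nengo.Ensemble(100, 2, neuron_type=nengo.LIF())  # spiking ensemble
    nengo.Connection(stim, ens)

    # nengo_dl.Layer wraps a TensorFlow layer so that it can sit
    # inside the Nengo model alongside the spiking components.
    dense = nengo_dl.Layer(tf.keras.layers.Dense(units=2))(ens)
    probe = nengo.Probe(dense)

# Simulate the hybrid model through the NengoDL simulator.
with nengo_dl.Simulator(net) as sim:
    sim.run(1.0)                  # one second of simulated time
    print(sim.data[probe].shape)  # -> (n_steps, 2)
```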


Notes

  1. Note that this code is only intended to introduce the syntax; it would not result in particularly effective training if we were to run it, since better performance would require a more complicated Nengo model than we want to present here. An illustrative sketch of the training syntax follows these notes, and complete functional examples can be found at https://www.nengo.ai/nengo-dl/examples.

  2. Note that although we use Variables for all the Signals, not all Signals are trainable; we still optimize only the Signals corresponding to trainable parameters of the model (e.g., connection weights and biases). This distinction is illustrated in the second sketch following these notes.
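
To make Note 1 concrete, the following is a minimal sketch of the kind of training syntax involved. It is written against the Keras-style compile/fit interface of recent NengoDL releases (the version described in this paper exposed a sim.train method instead), and the toy model and data are invented purely for illustration:

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

# A deliberately simple model, as in Note 1: enough to show the
# training syntax, not enough to learn anything interesting.
with nengo.Network() as net:
    inp = nengo.Node(np.zeros(1))
    ens = nengo.Ensemble(50, 1)
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens)

# Toy data with shape (batch, n_steps, dimensions): learn x -> x**2.
x = np.random.uniform(-1, 1, size=(256, 10, 1))
y = x ** 2

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    sim.compile(optimizer=tf.optimizers.Adam(0.01),
                loss={probe: tf.losses.mse})
    sim.fit({inp: x}, {probe: y}, epochs=5)
```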
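
Similarly, the distinction in Note 2, holding all state in Variables while optimizing only a subset, can be illustrated in plain TensorFlow; this shows the general idea only, not NengoDL's actual internals:

```python
import tensorflow as tf

# All "signals" live in tf.Variables, but only those corresponding to
# model parameters are handed to the optimizer; simulator state such
# as membrane voltage is also a Variable, yet it is never optimized.
weights = tf.Variable(tf.random.normal([50, 1]))        # trainable parameter
bias = tf.Variable(tf.zeros([1]))                       # trainable parameter
voltage = tf.Variable(tf.zeros([50]), trainable=False)  # state, not optimized

opt = tf.optimizers.SGD(0.1)
with tf.GradientTape() as tape:
    out = tf.matmul(tf.ones([1, 50]), weights) + bias
    loss = tf.reduce_sum(out ** 2)

# Gradients are computed for, and applied to, only the trainable subset.
train_vars = [weights, bias]
opt.apply_gradients(zip(tape.gradient(loss, train_vars), train_vars))
```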


Acknowledgments

This work was supported by Applied Brain Research, Inc. and ONR MURI N00014-16-1-2832.

Author information

Corresponding author

Correspondence to Daniel Rasmussen.

Ethics declarations

Conflict of interest

DR is an employee/shareholder of Applied Brain Research, Inc., which owns the Nengo software package (including NengoDL). Nengo is free for research/personal/non-commercial use, but ABR charges a license fee for commercial use.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Rasmussen, D. NengoDL: Combining Deep Learning and Neuromorphic Modelling Methods. Neuroinform 17, 611–628 (2019). https://doi.org/10.1007/s12021-019-09424-z

