
Programming Neuromorphics Using the Neural Engineering Framework

Reference work entry in the Handbook of Neuroengineering

Abstract

As neuromorphic hardware begins to emerge as a viable target platform for artificial intelligence (AI) applications, there is a need for tools and software that can effectively compile a variety of AI models onto such hardware. Nengo (http://nengo.ai) is an ecosystem of software designed to fill this need with a suite of tools for creating, training, deploying, and visualizing neural networks for various hardware backends, including CPUs, GPUs, FPGAs, microcontrollers, and neuromorphic hardware. While backpropagation-based methods are powerful and fully supported in Nengo, there is also a need for frameworks that are capable of efficiently mapping dynamical systems onto such hardware while making the best use of its computational resources. The neural engineering framework (NEF) is one such method that is supported by Nengo. Most prominently, Nengo and the NEF have been used to engineer the world’s largest functional model of the human brain. In addition, as a particularly efficient approach to training neural networks for neuromorphics, the NEF has been ported to several neuromorphic platforms. In this chapter, we discuss the mathematical foundations of the NEF and a number of its extensions and review several recent applications that use Nengo to build models for neuromorphic hardware. We focus in depth on a particular class of dynamic neural networks, Legendre Memory Units (LMUs), which have demonstrated advantages over state-of-the-art approaches in deep learning with respect to energy efficiency, training time, and accuracy.
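As a minimal, hedged illustration of the workflow described above (assuming Nengo's core Python API; the input signal, neuron count, and filter time constant below are arbitrary choices, not values from the chapter), a model is defined once and can then be simulated on the reference CPU backend or swapped onto another supported backend:

```python
import numpy as np
import nengo

# Minimal sketch: encode a sine wave into a spiking ensemble and decode it back.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # scalar input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)    # population of LIF neurons
    nengo.Connection(stim, ens)                          # encode the input into spikes
    probe = nengo.Probe(ens, synapse=0.01)               # decode through a 10 ms low-pass filter

# The same model can target other backends (e.g., nengo_dl or nengo_loihi) by
# swapping the Simulator; here we use the reference CPU simulator.
with nengo.Simulator(model) as sim:
    sim.run(1.0)
decoded = sim.data[probe]                                # decoded estimate of sin(2*pi*t)
```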


Notes

  1. There is a similar theorem for continuous time [38].

  2. By linearity of convolution, it does not matter whether the filter is applied before or after the decoding or any subsequent encoding. For efficiency reasons, it is often applied in the lower-dimensional (i.e., decoded) space. Which placement is most efficient depends on the hardware and on the sparsity of neural activity relative to the integration timescale.

  3. The optimization problem from Eq. 6 need only be solved once to decode x(t) from the neural activity. The same decoders may then be transformed by each C without loss in optimality (by linearity); see the sketch following these notes.

  4. This also assumes the use of a dense state-space realization, such as the one obtained from zero-order hold discretization of the LMU dynamics.

  5. Goldman [43] has shown that repeated low-pass filtering can be usefully exploited to implement an integrator, by summing across all of the filters.

  6. Determined empirically using the nengo-loihi==0.5.0 emulator.

  7. Hyperopt [7] was used to tune hyperparameters, to the benefit of the LSMs and ESNs. All hyperparameters (apart from q) had minimal effect on the LMU’s performance compared to the usual defaults in Nengo.

  8. As additional validation, the LSM was able to handle lower input frequencies or shorter delay lengths.

  9. The postsynaptic filters are leveraged as part of the required computation (see Principle 3; Sect. 0.2.1), so there is no unwanted phase shift.
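To make note 3 concrete, here is a small, hypothetical NumPy sketch (not code from the chapter): the decoders are obtained once by regularized least squares over sampled activities, and any linear transform C can then be folded into those decoders without re-solving the optimization. The activity matrix, sample counts, and regularization constant below are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of note 3: solve the decoder optimization once,
# then reuse the same decoders under any linear transform C (by linearity).
rng = np.random.default_rng(0)
n_neurons, n_samples, d = 100, 500, 1

A = np.maximum(rng.normal(size=(n_samples, n_neurons)), 0)  # stand-in tuning-curve activities
X = rng.uniform(-1, 1, size=(n_samples, d))                  # sampled represented values x

# Regularized least squares (ridge regression) for the decoders D
lam = 0.1 * np.max(A) ** 2
D = np.linalg.solve(A.T @ A + lam * np.eye(n_neurons), A.T @ X)

# A transformed decode of C x uses the same D, post-multiplied by C.T
C = np.array([[2.0]])  # example linear transform
assert np.allclose((A @ D) @ C.T, A @ (D @ C.T))
```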

References

  1. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: TensorFlow: A system for large-scale machine learning. OSDI. 16, 265–283 (2016)


  2. Adrian, E.D.: The Basis of Sensation. Christophers, London (1928)


  3. Appeltant, L., Soriano, M.C., Van der Sande, G., Danckaert, J., Massar, S., Dambre, J., Schrauwen, B., Mirasso, C.R., Fischer, I.: Information processing using a single dynamical node as complex system. Nat. Commun. 2, 468 (2011)


  4. Armstrong-Gold, C.E., Rieke, F.: Bandpass filtering at the rod to second-order cell synapse in salamander (Ambystoma tigrinum) retina. J. Neurosci. 23(9), 3796–3806 (2003)


  5. Bekolay, T., Bergstra, J., Hunsberger, E., DeWolf, T., Stewart, T.C., Rasmussen, D., Choo, X., Voelker, A.R., Eliasmith, C.: Nengo: A Python tool for building large-scale functional brain models. Front. Neuroinform. 7(48) (2014). https://doi.org/10.3389/fninf.2013.00048

  6. Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 5(2), 157–166 (1994)


  7. Bergstra, J., Komer, B., Eliasmith, C., Yamins, D., Cox, D.D.: Hyperopt: A Python library for model selection and hyperparameter optimization. Comput. Sci. Discov. 8(1), 014008 (2015)


  8. Berzish, M., Eliasmith, C., Tripp, B.: Real-time FPGA simulation of surrogate models of large spiking networks. In: International Conference on Artificial Neural Networks (ICANN), Springer, Cham (2016)


  9. Blouw, P., Choo, X., Hunsberger, E., Eliasmith, C.: Benchmarking keyword spotting efficiency on neuromorphic hardware. arXiv preprint arXiv:1812.01739 (2018)


  10. Boahen, K.: A neuromorph’s prospectus. Comput. Sci. Eng. 19(2), 14–28 (2017)


  11. Brogan, W.L.: Modern Control Theory, 3rd edn. Prentice-Hall, New Jersey (1991)


  12. Choo, X.: Spaun 2.0: Extending the world’s largest functional brain model. Ph.D. thesis, University of Waterloo (2018)


  13. Choudhary, S., Sloan, S., Fok, S., Neckar, A., Trautmann, E., Gao, P., Stewart, T., Eliasmith, C., Boahen, K.: Silicon neurons that compute. In: International Conference on Artificial Neural Networks, vol. 7552, pp. 121–128. Springer (2012)


  14. Corradi, F., Eliasmith, C., Indiveri, G.: Mapping arbitrary mathematical functions and dynamical systems to neuromorphic VLSI circuits for spike-based neural computation. In: IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne (2014)


  15. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, New York (2012)


  16. Cunningham, J.P., Byron, M.Y.: Dimensionality reduction for large-scale neural recordings. Nat. Neurosci. 17(11), 1500–1509 (2014)


  17. Dambre, J., Verstraeten, D., Schrauwen, B., Massar, S.: Information processing capacity of dynamical systems. Sci. Rep. 2, 514 (2012)


  18. Davies, M., Srinivasa, N., Lin, T.-H., Chinya, G., Cao, Y., Choday, S.H., Dimou, G., Joshi, P., Imam, N., Jain, S., et al.: Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro. 38(1), 82–99 (2018)


  19. de Jong, J., Voelker, A.R., van Rijn, H., Stewart, T.C., Eliasmith, C.: Flexible timing with delay networks – The scalar property and neural scaling. In: International Conference on Cognitive Modelling, Society for Mathematical Psychology (2019)


  20. De Vries, B., Principe, J.C.: The gamma model – A new neural model for temporal processing. Neural Netw. 5(4), 565–576 (1992)


  21. DePasquale, B., Churchland, M.M., Abbott, L.: Using firing-rate dynamics to train recurrent networks of spiking model neurons. arXiv preprint arXiv:1601.07620 (2016)


  22. DePasquale, B., Cueva, C.J., Rajan, K., Abbott, L., et al.: Full-FORCE: A target-based method for training recurrent networks. PLoS One. 13(2), e0191527 (2018)


  23. Destexhe, A., Mainen, Z.F., Sejnowski, T.J.: An efficient method for computing synaptic conductances based on a kinetic model of receptor binding. Neural Comput. 6(1), 14–18 (1994)


  24. Dethier, J., Nuyujukian, P., Eliasmith, C., Stewart, T.C., Elasaad, S.A., Shenoy, K.V., Boahen, K.A.: A brain-machine interface operating with a real-time spiking neural network control algorithm. In: Advances in Neural Information Processing Systems, pp. 2213–2221. (2011)


  25. DeWolf, T., Jaworski, P., Eliasmith, C.: Nengo and low-power AI hardware for robust, embedded neurorobotics. Frontiers in Neurorobotics (2020)


  26. Duggins, P.: Incorporating biologically realistic neuron models into the NEF. Master’s thesis, University of Waterloo (2017)


  27. Duggins, P., Stewart, T.C., Choo, X., Eliasmith, C.: Effects of guanfacine and phenylephrine on a spiking neuron model of working memory. Top. Cogn. Sci. 9, 117–134 (2017)


  28. Eliasmith, C.: How to Build a Brain: A Neural Architecture for Biological Cognition. Oxford University Press, New York (2013)


  29. Eliasmith, C., Anderson, C.H.: Developing and applying a toolkit from a general neurocomputational framework. Neurocomputing. 26, 1013–1018 (1999)


  30. Eliasmith, C., Anderson, C.H.: Rethinking central pattern generators: A general approach. Neurocomputing. 32–33, 735–740 (2000)


  31. Eliasmith, C., Anderson, C.H.: Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. MIT Press, Cambridge, MA (2003)


  32. Eliasmith, C., Gosmann, J., Choo, X.: BioSpaun: A large-scale behaving brain model with complex neurons. arXiv preprint arXiv:1602.05220 (2016)


  33. Eliasmith, C., Stewart, T.C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., Rasmussen, D.: A large-scale model of the functioning brain. Science. 338(6111), 1202–1205 (2012)


  34. Fairhall, A.L., Lewen, G.D., Bialek, W., van Steveninck, R.R.d.R.: Efficiency and ambiguity in an adaptive neural code. Nature. 412(6849), 787 (2001)


  35. Fischl, K.D., Stewart, T.C., Fair, K.L., Andreou, A.G.: Implementation of the neural engineering framework on the TrueNorth neurosynaptic system. In: IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 587–590. IEEE (2018)


  36. Frady, E.P., Sommer, F.T.: Robust computation with rhythmic spike patterns. arXiv preprint arXiv:1901.07718 (2019)


  37. Friedl, K.E., Voelker, A.R., Peer, A., Eliasmith, C.: Human-inspired neurorobotic system for classifying surface textures by touch. Robot. Autom. Lett. 1(1), 516–523 (2016)


  38. Funahashi, K., Nakamura, Y.: Approximation of dynamical systems by continuous time recurrent neural networks. Neural Netw. 6(6), 801–806 (1993)


  39. Galluppi, F., Davies, S., Furber, S., Stewart, T., Eliasmith, C.: Real time on-chip implementation of dynamical systems with spiking neurons. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2012)


  40. Gautrais, J., Thorpe, S.: Rate coding versus temporal order coding: A theoretical approach. Biosystems. 48(1–3), 57–65 (1998)


  41. Gerstner, W.: Spiking neurons. Pulsed Neural Netw. 4, 3–54 (1999)


  42. GitHub: nengo/nengo-loihi==0.5.0: Run nengo models on Intel’s Loihi chip. https://github.com/nengo/nengo-loihi/ (2019). Accessed 20 Jan 2019

  43. Goldman, M.S.: Memory without feedback in a neural network. Neuron. 61(4), 621–634 (2009)


  44. Gosmann, J.: Precise Multiplications with the NEF. Technical Report. Centre for Theoretical Neuroscience, Waterloo (2015)


  45. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)


  46. Hunsberger, E.: Spiking deep neural networks: engineered and biological approaches to object recognition. Ph.D. thesis, University of Waterloo (2018)


  47. Hunsberger, E., Eliasmith, C.: Spiking deep networks with LIF neurons. arXiv preprint arXiv:1510.08829 (2015)


  48. Hunsberger, E., Eliasmith, C.: Training spiking deep networks for neuromorphic hardware. arXiv preprint arXiv:1611.05141 (2016)


  49. Jaeger, H.: The “Echo State” Approach to Analysing and Training Recurrent Neural Networks. GMD Technical Report 148, 34 pp. German National Research Center for Information Technology, Bonn (2001)


  50. Jaeger, H.: Short Term Memory in Echo State Networks. Technical report, Fraunhofer Institute for Autonomous Intelligent Systems (2002)


  51. Kauderer-Abrams, E., Gilbert, A., Voelker, A.R., Benjamin, B.V., Stewart, T.C., Boahen, K.: A population-level approach to temperature robustness in neuromorphic systems. In: IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, Baltimore (2017)


  52. Knight, J., Voelker, A.R., Mundy, A., Eliasmith, C., Furber, S.: Efficient SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework. In: International Joint Conference on Neural Networks (IJCNN). IEEE, Vancouver (2016)


  53. Koch, C., Segev, I.: Methods in Neuronal Modeling: From Ions to Networks. MIT Press, Cambridge, MA (1998)


  54. Komer, B., Stewart, T.C., Voelker, A.R., Eliasmith, C.: A neural representation of continuous space using fractional binding. In: 41st Annual Meeting of the Cognitive Science Society. Cognitive Science Society, Montreal (2019)


  55. Lagorce, X., Benosman, R.: STICK: Spike time interval computational kernel, a framework for general purpose computation using neurons, precise timing, delays, and synchrony. Neural Comput. 27(11), 2261–2317 (2015)


  56. Legendre, A.-M.: Recherches sur l’attraction des sphéroïdes homogènes. Mémoires de Mathématiques et de Physique, présentés à l’Académie Royale des Sciences, pp. 411–435 (1782)


  57. Lin, C.-K., Wild, A., Chinya, G.N., Lin, T.-H., Davies, M., Wang, H.: Mapping spiking neural networks onto a manycore neuromorphic architecture. In: Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 78–89. ACM (2018)


  58. Lukoševičius, M.: A practical guide to applying echo state networks. In: Neural Networks: Tricks of the Trade, pp. 659–686. Springer, Berlin, Heidelberg (2012)


  59. Lukoševičius, M.: Reservoir computing and self-organized neural hierarchies. Ph.D. thesis, Jacobs University Bremen (2012)


  60. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3(3), 127–149 (2009)


  61. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 14(11), 2531–2560 (2002)


  62. Mitra, P.P., Stark, J.B.: Nonlinear limits to the information capacity of optical fibre communications. Nature. 411(6841), 1027 (2001)


  63. Morcos, B., Stewart, T.C., Eliasmith, C., Kapre, N.: Implementing NEF neural networks on embedded FPGAs. In: 2018 International Conference on Field-Programmable Technology (FPT), pp. 22–29. IEEE (2018)


  64. Mundy, A.: Real time Spaun on SpiNNaker. Ph.D. thesis, University of Manchester (2016)


  65. Mundy, A., Knight, J., Stewart, T., Furber, S.: An efficient SpiNNaker implementation of the neural engineering framework. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2015)


  66. Neckar, A.: Braindrop: A mixed-signal neuromorphic architecture with a dynamical systems-based programming model. Ph.D. thesis, Stanford University (2018)


  67. Neckar, A., Fok, S., Benjamin, B.V., Stewart, T.C., Oza, N.N., Voelker, A.R., Eliasmith, C., Manohar, R., Boahen, K.: Braindrop: A mixed-signal neuromorphic architecture with a dynamical systems-based programming model. In: Proceedings of the IEEE (Accepted) (2019)


  68. Nicola, W., Clopath, C.: Supervised learning in spiking neural networks with FORCE training. Nat. Commun. 8(1), 2208 (2017)


  69. Patel, K.P., Hunsberger, E., Batir, S., Eliasmith, C.: A spiking neural network for image segmentation. Neuromorphic Computing and Engineering (2020) (submitted)


  70. Rall, W.: Distinguishing theoretical synaptic potentials computed for different soma-dendritic distributions of synaptic input. J. Neurophysiol. 30(5), 1138–1168 (1967)


  71. Rasmussen, D.: NengoDL: Combining deep learning and neuromorphic modelling methods. arXiv preprint arXiv:1805.11144 (2018)


  72. Rieke, F., Warland, D.: Spikes: Exploring the Neural Code. MIT Press, Cambridge, MA (1997)


  73. Rodrigues, O.: De l’attraction des sphéroïdes, Correspondence sur l’École Impériale Polytechnique. Ph.D. thesis, Faculty of Science of the University of Paris (1816)


  74. Roxin, A., Brunel, N., Hansel, D.: Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Phys. Rev. Lett. 94(23), 238103 (2005)


  75. Schäfer, A.M., Zimmermann, H.G.: Recurrent neural networks are universal approximators. In: International Conference on Artificial Neural Networks, pp. 632–640. Springer (2006)


  76. Sharma, S., Aubin, S., Eliasmith, C.: Large-scale cognitive model design using the Nengo neural simulator. In: Biologically Inspired Cognitive Architectures, pp. 86–100. Elsevier B.V., Amsterdam (2016)


  77. Singh, R., Eliasmith, C.: A Dynamic Model of Working Memory in the PFC During a Somatosensory Discrimination Task. In: Computational and Systems Neuroscience, Cold Spring Harbor Laboratory (2004)


  78. Singh, R., Eliasmith, C.: Higher-dimensional neurons explain the tuning and dynamics of working memory cells. J. Neurosci. 26, 3667–3678 (2006)


  79. Stöckel, A., Eliasmith, C.: Passive nonlinear dendritic interactions as a computational resource in spiking neural networks. Neural Comput. 33, 1–33 (2020)


  80. Stöckel, A., Stewart, T.C., Eliasmith, C.: Connecting biological detail with neural computation: Application to the cerebellar granule-golgi microcircuit. In: 18th Annual Meeting of the International Conference on Cognitive Modelling. Society for Mathematical Psychology, Toronto (2020)


  81. Stöckel, A., Voelker, A.R., Eliasmith, C.: Point Neurons with Conductance-Based Synapses in the Neural Engineering Framework. Technical Report. Centre for Theoretical Neuroscience, Waterloo (2017)


  82. Stöckel, A., Voelker, A.R., Eliasmith, C.: Nonlinear synaptic interaction as a computational resource in the neural engineering framework. In: Cosyne Abstracts, Denver (2018)


  83. Sussillo, D., Abbott, L.F.: Generating coherent patterns of activity from chaotic neural networks. Neuron. 63(4), 544–557 (2009)


  84. Sussillo, D., Barak, O.: Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25(3), 626–649 (2013)


  85. Thalmeier, D., Uhlmann, M., Kappen, H.J., Memmesheimer, R.-M.: Learning universal computations with spikes. PLoS Comput. Biol. 12(6), e1004895 (2016)


  86. Thorpe, S., Gautrais, J.: Rank order coding. In: Computational Neuroscience, pp. 113–118. Springer (1998)


  87. Tripp, B., Eliasmith, C.: Neural populations can induce reliable postsynaptic currents without observable spike rate changes or precise spike timing. Cereb. Cortex. 17(8), 1830–1840 (2006)


  88. Voelker, A.R.: Dynamical systems in spiking neuromorphic hardware. Ph.D. thesis, University of Waterloo (2019)


  89. Voelker, A.R., Benjamin, B.V., Stewart, T.C., Boahen, K., Eliasmith, C.: Extending the Neural Engineering Framework for nonideal silicon synapses. In: IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, Baltimore (2017)


  90. Voelker, A.R., Eliasmith, C.: Methods and systems for implementing dynamic neural networks. US Patent App. 15/243,223 (patent pending) (2016)


  91. Voelker, A.R., Eliasmith, C.: Methods for applying the neural engineering framework to neuromorphic hardware. arXiv preprint arXiv:1708.08133 (2017)


  92. Voelker, A.R., Eliasmith, C.: Improving spiking dynamical networks: Accurate delays, higher-order synapses, and time cells. Neural Comput. 30(3), 569–609 (2018)


  93. Voelker, A.R., Eliasmith, C.: Legendre memory units in recurrent neural networks. PCT App. PCT/CA2020/00989 (patent pending) (2019)


  94. Voelker, A.R., Gosmann, J., Stewart, T.C.: Efficiently Sampling Vectors and Coordinates from the n-Sphere and n-Ball. Technical Report. Centre for Theoretical Neuroscience, Waterloo (2017)


  95. Voelker, A.R., Kajić, I., Eliasmith, C.: Legendre memory units: continuous-time representation in recurrent neural networks. In: Advances in Neural Information Processing Systems, pp. 15544–15553 (2019)


  96. Voelker, A.R., Rasmussen, D., Eliasmith, C.: A spike in performance: Training hybrid-spiking neural networks with quantized activation functions. arXiv preprint arXiv:2002.03553 (2020)


  97. Waernberg, E., Kumar, A.: Low dimensional activity in spiking neuronal networks. bioRxiv (2017)


  98. Wallace, E., Maei, H.R., Latham, P.E.: Randomly connected networks have short temporal memory. Neural Comput. 25(6), 1408–1439 (2013)


  99. Wang, R., Hamilton, T.J., Tapson, J., van Schaik, A.: A compact neural core for digital implementation of the Neural Engineering Framework. In: Biomedical Circuits and Systems Conference (BioCAS), pp. 548–551. IEEE (2014)


  100. Wang, R., Thakur, C.S., Cohen, G., Hamilton, T.J., Tapson, J., van Schaik, A.: A neuromorphic hardware architecture using the neural engineering framework for pattern recognition. IEEE Trans. Biomed. Circuits Syst. 11(3), 574–584 (2017)


  101. Wilson, M.A., Bower, J.M.: The simulation of large-scale neural networks. In: Methods in Neuronal Modeling, pp. 291–333. MIT Press, Cambridge, MA (1989)



Author information

Corresponding author

Correspondence to Aaron R. Voelker.



Copyright information

© 2023 Springer Nature Singapore Pte Ltd.

About this entry


Cite this entry

Voelker, A.R., Eliasmith, C. (2023). Programming Neuromorphics Using the Neural Engineering Framework. In: Thakor, N.V. (eds) Handbook of Neuroengineering. Springer, Singapore. https://doi.org/10.1007/978-981-16-5540-1_115
