Efficient Neuromorphic Systems and Emerging Technologies: Prospects and Perspectives

Chapter

Abstract

Recent advances in machine learning, notably deep learning, have resulted in unprecedented success in a wide variety of recognition tasks including vision, speech, and natural language processing. However, implementation of such neural algorithms in conventional "von Neumann" architectures involves orders of magnitude more area and power consumption than the biological brain. This is mainly attributed to the inherent mismatch between the computational units of such models (neurons and synapses) and the underlying CMOS transistors. In addition, these algorithms are highly memory-intensive and suffer from memory bandwidth limitations due to the significant amount of data transferred between the memory and computing units. Recent experiments in spintronics have opened up the possibility of implementing such computing kernels with single device structures that can be arranged in crossbar architectures, resulting in a compact and energy-efficient "in-memory computing" platform. In this chapter, we will review spintronic device structures consisting of single-domain and domain-wall-motion-based devices for mimicking neuronal and synaptic units. System-level simulations indicate ∼100× improvement in energy consumption for such spintronic implementations over a corresponding CMOS implementation across different computing workloads.
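The "in-memory computing" idea above can be illustrated with a minimal numerical sketch. In a resistive crossbar, input voltages are applied on the rows, each cross-point device stores a synaptic weight as a conductance, and Kirchhoff's current law sums the products on each column, so a matrix-vector product is computed in a single step where the weights are stored. The array sizes, conductance values, and threshold below are illustrative assumptions, not parameters from the chapter:

```python
import numpy as np

# Illustrative sketch (not the chapter's device model): a crossbar
# computes a matrix-vector product in place. Row voltages encode the
# input vector, conductances encode synaptic weights, and the current
# summed on each column is the dot product.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 3
G = rng.uniform(0.0, 1.0, size=(n_inputs, n_neurons))  # conductances (weights)
v = rng.uniform(0.0, 1.0, size=n_inputs)               # input voltages

# Column currents: I_j = sum_i v_i * G[i, j]  (the in-memory dot product)
I = v @ G

# A simple threshold neuron at each column fires when its summed
# current exceeds a (hypothetical) threshold theta.
theta = 0.8
spikes = (I > theta).astype(int)
```

The point of the sketch is that the multiply-accumulate happens where the weights reside, avoiding the memory-to-compute data transfer that dominates energy in a von Neumann implementation.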

Keywords

Domain wall · Convolutional neural network · Magnetic tunnel junction · Spiking neural network · CMOS transistor

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. School of Electrical and Computer Engineering, Purdue University, West Lafayette, USA