Introduction to Neuro-Inspired Computing Using Resistive Synaptic Devices

  • Chapter
  • In: Neuro-inspired Computing Using Resistive Synaptic Devices (2017)

Abstract

This chapter gives an overview of the field of neuro-inspired computing using resistive synaptic devices. First, we discuss the demand for a neuro-inspired architecture that goes beyond today's von Neumann architecture. Second, we summarize the various approaches to designing neuromorphic hardware (digital vs. analog, spiking vs. non-spiking) and review recent progress in array-level demonstrations of resistive synaptic devices. Then, we discuss the desired characteristics of resistive synaptic devices and introduce the crossbar array architecture used to implement the weighted-sum and weight-update operations. Finally, we discuss the challenges of mapping learning algorithms to neuromorphic hardware and of building large-scale systems with resistive synaptic devices.
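For readers unfamiliar with the crossbar operations mentioned above: the weighted sum amounts to a vector-matrix multiplication in which row voltages multiply the array of device conductances (Ohm's law per cell, Kirchhoff's current law per column), and a weight update nudges each conductance within its programmable range. The following Python sketch is a minimal idealized illustration of these two operations, not the chapter's implementation; the function names, conductance range, and update step are assumptions chosen for the example.

    import numpy as np

    # Idealized crossbar: the synaptic weight between input row i and output
    # column j is stored as a device conductance G[i, j] (in siemens).

    def weighted_sum(v_rows, G):
        # Read operation: column current I_j = sum_i V_i * G[i, j]
        # (Ohm's law per device, Kirchhoff's current law per column).
        return v_rows @ G

    def weight_update(G, delta, g_min=1e-9, g_max=1e-6):
        # Write operation: shift conductances by delta and clip to the
        # device's programmable range (assumed 1 nS to 1 uS here).
        return np.clip(G + delta, g_min, g_max)

    rng = np.random.default_rng(0)
    G = rng.uniform(1e-9, 1e-6, size=(4, 3))    # 4 inputs x 3 outputs
    v = np.array([0.1, 0.0, 0.2, 0.1])          # read voltages on the rows
    currents = weighted_sum(v, G)               # weighted sum as column currents
    G = weight_update(G, delta=1e-8)            # uniform potentiation step

In a real array, nonidealities such as a limited number of conductance levels, update nonlinearity, and asymmetry would modify both operations; this device-algorithm interplay is exactly what the chapter surveys.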



Author information

Correspondence to Shimeng Yu.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Yu, S. (2017). Introduction to Neuro-Inspired Computing Using Resistive Synaptic Devices. In: Yu, S. (ed.) Neuro-inspired Computing Using Resistive Synaptic Devices. Springer, Cham. https://doi.org/10.1007/978-3-319-54313-0_1

  • DOI: https://doi.org/10.1007/978-3-319-54313-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-54312-3

  • Online ISBN: 978-3-319-54313-0

  • eBook Packages: Engineering, Engineering (R0)
