
Memristive dynamics enabled neuromorphic computing systems

  • Review
  • Published in Science China Information Sciences

Abstract

The slowing of transistor scaling and the explosive growth of demand for intelligent computing power have emerged as the two driving factors behind the study of novel devices and materials for highly efficient computing systems. Memristors, with their rich intrinsic dynamics, are a promising candidate for constructing efficient and scalable bio-inspired computing systems. In this progress report, we review the latest advances in novel types of memristors and their applications in neuromorphic computing systems. The paper covers not only memristive dynamics-enabled bionic computing systems but also memristive sensory systems that integrate sensing and computing. Device-circuit co-optimization methods are then presented to highlight the trend of cross-layer co-design in this fast-evolving field. Innovation in memristor devices mainly targets specialized computing hardware and yields superior computing and sensing efficiency. Finally, we offer our perspective on the trends of state-of-the-art research in memristive materials, devices, circuits, and systems.



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61925401, 92064004, 61927901, 92164302) and the 111 Project (Grant No. B18001). Yuchao Yang acknowledges support from the Fok Ying-Tong Education Foundation and the Tencent Foundation through the XPLORER PRIZE.

Author information


Correspondence to Yuchao Yang or Ru Huang.


About this article


Cite this article

Yan, B., Yang, Y. & Huang, R. Memristive dynamics enabled neuromorphic computing systems. Sci. China Inf. Sci. 66, 200401 (2023). https://doi.org/10.1007/s11432-023-3739-0

