
ReRAM-Based Processing-in-Memory (PIM)

Chapter in: Processing-in-Memory for AI

Abstract

Emerging applications such as artificial intelligence and machine learning have created strong interest in hardware accelerators that can process the highly parallel data of various neural networks. One of the most critical arithmetic functions in neural networks is Multiply-and-Accumulate (MAC). In the conventional computing architecture (i.e., the von Neumann architecture), processing elements and memory are separated, so MAC operations require massive data transfer between processing elements and memory, which consumes a huge amount of power. Recently, processing-in-memory (PIM) architectures have been introduced to address this von Neumann bottleneck [1–10]. Because PIM architectures place local computing circuits inside the memory, data transfer to and from external memory is minimized, and it is well known that PIM architectures can improve energy efficiency by orders of magnitude. While SRAM and DRAM are commonly considered for PIM architectures, ReRAM has also gained increasing interest, driven by accelerator development for edge computing [11–17]. Many edge devices, such as wearables, IoT devices, and biomedical devices, require high energy efficiency while tolerating modest performance; in particular, they process data only intermittently and spend most of their time in standby. Since ReRAM offers moderate performance together with non-volatility, ReRAM-based PIM architectures are promising solutions for edge computing. However, ReRAM technologies are not yet mature, and various design issues for ReRAM-based PIM must be tackled comprehensively. In this chapter, we introduce various ReRAM-based PIM designs, covering unit cells, circuit techniques, and architectures. Besides MAC, other essential logic-in-memory functions are also discussed.
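The central idea summarized above is that a ReRAM array can perform MAC directly where the weights are stored: weights map to cell conductances, inputs to applied row voltages, and each bit line accumulates the resulting currents. The short Python sketch below is an illustration of that idealized crossbar MAC, not a design from the chapter; the 64×32 array size, ON/OFF conductance values, read voltage, and binary weights/inputs are assumed purely for the example.

```python
# Minimal sketch (illustrative assumptions, not the chapter's design) of the
# analog MAC performed by a ReRAM crossbar: I = G^T @ V by Kirchhoff's law.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x32 crossbar: 64 input rows, 32 output columns.
weights = rng.integers(0, 2, size=(64, 32))   # binary weights, as in a binary DNN
g_on, g_off = 100e-6, 1e-6                    # assumed ON/OFF conductances (siemens)
G = np.where(weights == 1, g_on, g_off)       # weights stored as cell conductances

inputs = rng.integers(0, 2, size=64)          # binary input activations
v_read = 0.2                                  # assumed read voltage per activated row
V = inputs * v_read                           # inputs applied as word-line voltages

# Each column current is the sum of conductance * voltage products along the
# bit line: one analog multiply-and-accumulate per column.
I = G.T @ V                                   # shape (32,), units: amperes

# A column ADC would digitize each current; here we quantize to multiples of
# (g_on * v_read) as an idealized stand-in for that readout step.
lsb = g_on * v_read
mac_digital = np.round(I / lsb).astype(int)
print(mac_digital)
```

In a real macro, device variation, wire resistance, and limited ADC resolution perturb this ideal behavior, which is part of why the design issues mentioned above must be addressed at the cell, circuit, and architecture levels.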


References

  1. K. Ando et al., BRein memory: A single-chip binary/ternary reconfigurable in-memory deep neural network accelerator achieving 1.4 TOPS at 0.6 W. IEEE J. Solid State Circuits 53(4), 983–994 (2018)

  2. M. Kang et al., A multi-functional in-memory inference processor using a standard 6T SRAM array. IEEE J. Solid State Circuits 53(2), 642–655 (2018)

  3. A. Biswas et al., CONV-SRAM: An energy-efficient SRAM with in-memory dot-product computation for low-power convolutional neural networks. IEEE J. Solid State Circuits 54(1), 217–230 (2019)

  4. H. Valavi et al., A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute. IEEE J. Solid State Circuits 54(6), 1789–1799 (2019)

  5. S. Yin et al., XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks. IEEE J. Solid State Circuits 55(6), 1733–1743 (2020)

  6. S.K. Gonugondla et al., A 42pJ/decision 3.12TOPS/W robust in-memory machine learning classifier with on-chip training, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2018), pp. 490–492

  7. J. Su et al., A 28nm 64Kb inference-training two-way transpose multibit 6T SRAM compute-in-memory macro for AI edge chips, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2020), pp. 240–242

  8. X. Si et al., A 28nm 65Kb 6T SRAM computing-in-memory macro with 8b MAC operation for AI edge chips, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2020), pp. 246–248

  9. W. Khwa et al., A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2018), pp. 496–498

  10. X. Si et al., A twin-8T SRAM computation-in-memory macro for multiple-bit CNN-based machine learning, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2019), pp. 396–398

  11. W.-H. Chen et al., A 65nm 1Mb non-volatile computing-in-memory ReRAM macro with sub-16ns multiply-and-accumulate for binary DNN AI edge processors, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2018), pp. 494–496

  12. C. Xue et al., Embedded 1-Mb ReRAM-based computing-in-memory macro with multibit input and weight for CNN-based AI edge processors. IEEE J. Solid State Circuits 55, 203–215 (2020)

  13. C. Xue et al., A 22nm 2Mb ReRAM compute-in-memory macro with 121-28TOPS/W for multibit MAC computing for tiny AI edge devices, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2020), pp. 244–246

  14. Q. Liu et al., A fully integrated analog ReRAM based 78.4TOPS/W compute-in-memory chip with fully parallel MAC computing, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2020), pp. 500–502

  15. Y. Chen et al., Reconfigurable 2T2R ReRAM architecture for versatile data storage and computing in-memory. IEEE Trans. VLSI Syst. 28, 2636–2649 (2020)

  16. Y. Chen et al., A reconfigurable 4T2R ReRAM computing in-memory macro for efficient edge applications. IEEE Open J. Circuits Syst. 2, 210–222 (2021)

  17. C.-X. Xue et al., A 22nm 4Mb 8b-precision ReRAM computing-in-memory macro with 11.91 to 195.7TOPS/W for tiny AI edge devices, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2021), pp. 245–247

  18. C.-X. Xue et al., A 1Mb multibit ReRAM computing-in-memory macro with 14.6ns parallel MAC computing time for CNN based AI edge processors, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2019), pp. 388–390

  19. W. Lee et al., Multilevel resistive-change memory operation of Al-doped ZnO thin-film transistor. IEEE Electron. Dev. Lett. 37(8), 1014–1017 (2016)

  20. R. Yasuhara et al., Reliability issues in analog ReRAM based neural-network processor, in IEEE international reliability physics symposium (IRPS), (IEEE, Piscataway, 2019), pp. 1–5

  21. R. Mochida et al., A 4M synapses integrated analog ReRAM based 66.5 TOPS/W neural-network processor with cell current controlled writing and flexible network architecture. IEEE Symp. VLSI Technol. 2018, 175–176 (2018)

  22. A. Biswas et al., Conv-RAM: An energy-efficient SRAM with embedded convolution computation for low-power CNN-based machine learning applications, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2018), pp. 488–490

  23. C. Yu et al., A 16K current-based 8T SRAM compute-in-memory macro with decoupled read/write and 1-5bit column ADC, in Proc. IEEE custom integrated circuits conference (CICC), (2020)

  24. C. Yu et al., A zero-skipping reconfigurable SRAM in-memory computing macro with binary-searching ADC, in Proc. IEEE Eur. solid state circuits conf. (ESSCIRC), (2021)

  25. C. Yu et al., A logic-compatible eDRAM compute-in-memory with embedded ADCs for processing neural networks. IEEE Trans. Circuits Syst. I Regul. Pap. 68(2), 667–679 (2021)

  26. J.M. Correll et al., A fully integrated reprogrammable CMOS-RRAM compute-in-memory coprocessor for neuromorphic applications. IEEE J. Explor. Solid-State Computat. Devices Circuits 6(1), 36–44 (2020)

  27. W. Wan et al., A 74 TMACS/W CMOS-RRAM neurosynaptic core with dynamically reconfigurable dataflow and in-situ transposable weights for probabilistic graphical models, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (2020), pp. 498–499

  28. L. Zheng et al., Memristors-based ternary content addressable memory (mTCAM), in IEEE int. symp. on circuits and systems (ISCAS), (2014), pp. 2253–2256

  29. M. Chang et al., Designs of emerging memory based non-volatile TCAM for Internet-of-Things (IoT) and big-data processing: A 5T2R universal cell, in IEEE int. symp. on circuits and systems (ISCAS), (IEEE, Piscataway, 2016), pp. 1142–1145

  30. D. Ly et al., In-depth characterization of resistive memory-based ternary content addressable memories, in IEEE int. electron devices meeting (IEDM), (IEEE, Piscataway, 2018), pp. 20.3.1–20.3.4

  31. C. Chou et al., An N40 256K×44 embedded RRAM macro with SL-precharge SA and low-voltage current limiter to improve read and write performance, in Proc. IEEE int. solid-state circuits conf. (ISSCC), (IEEE, Piscataway, 2018), pp. 478–479

  32. M. Bocquet et al., In-memory and error-immune differential RRAM implementation of binarized deep neural networks, in IEEE int. electron devices meeting (IEDM), (IEEE, Piscataway, 2018), pp. 20.6.1–20.6.4

Author information

Correspondence to Tony Tae-Hyoung Kim.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Kim, T.TH., Lu, L., Chen, Y. (2023). ReRAM-Based Processing-in-Memory (PIM). In: Kim, JY., Kim, B., Kim, T.TH. (eds) Processing-in-Memory for AI. Springer, Cham. https://doi.org/10.1007/978-3-030-98781-7_5

  • DOI: https://doi.org/10.1007/978-3-030-98781-7_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98780-0

  • Online ISBN: 978-3-030-98781-7

  • eBook Packages: Engineering, Engineering (R0)
