
Memristor-only LSTM Acceleration with Non-linear Activation Functions

  • Conference paper

Designing Modern Embedded Systems: Software, Hardware, and Applications (IESS 2022)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 669)


Abstract

Long Short-Term Memory (LSTM) networks applied to speech recognition are an essential workload for modern embedded devices. Computing Matrix-Vector Multiplications (MVMs) with Resistive Random Access Memory (ReRAM) crossbars has paved the way to solving the memory-bottleneck issues of LSTM processing. However, mixed-signal and fully analog accelerators still lack energy-efficient and versatile circuitry for computing the activation functions between MVM operations. This paper proposes a design methodology and circuitry that achieve both energy efficiency and versatility by introducing a programmable memristor array for computing non-linearities. We exploit the inherent capability of ReRAM crossbars to compute MVMs in order to perform piecewise linear (PWL) interpolation of the non-linear activation functions, yielding a programmable device at lower cost. Experiments on representative speech recognition datasets show that our approach outperforms state-of-the-art LSTM accelerators, being 4.85× more efficient.
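Since only the abstract is openly available here, the sketch below is not the authors' circuit or code; it only illustrates, under assumed values, what a piecewise linear (PWL) approximation of an LSTM activation function looks like functionally. The breakpoints, slopes, and intercepts are hypothetical secant-line coefficients for tanh; in a memristor-only design, such coefficients would plausibly be programmed as crossbar conductances so that evaluating the interpolation reduces to a small matrix-vector product plus a segment select.

    import numpy as np

    # Hypothetical 4-segment PWL table for tanh on [0, 4]; symmetry covers x < 0.
    # Slopes/intercepts are secant-line values between breakpoints, chosen only
    # for illustration (not taken from the paper).
    BREAKPOINTS = np.array([0.0, 1.0, 2.0, 4.0])            # segment lower bounds
    SLOPES      = np.array([0.7616, 0.2019, 0.0177, 0.0])   # slope per segment
    INTERCEPTS  = np.array([0.0, 0.7616, 0.9640, 0.9993])   # tanh at each bound

    def pwl_tanh(x):
        """Piecewise linear approximation of tanh(x) using the table above."""
        sign = np.sign(x)
        a = np.abs(x)
        # Find the segment each |x| falls into.
        idx = np.searchsorted(BREAKPOINTS, a, side="right") - 1
        idx = np.clip(idx, 0, len(SLOPES) - 1)
        # Evaluate the local linear segment: intercept + slope * offset.
        y = INTERCEPTS[idx] + SLOPES[idx] * (a - BREAKPOINTS[idx])
        return sign * np.minimum(y, 1.0)

    if __name__ == "__main__":
        xs = np.linspace(-4.0, 4.0, 9)
        print(np.round(pwl_tanh(xs), 3))   # PWL approximation
        print(np.round(np.tanh(xs), 3))    # reference values

The same table-driven scheme would extend to the sigmoid used by the LSTM gates by swapping in a different coefficient set, which is what makes a programmable coefficient array attractive for this purpose.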



Author information

Corresponding author

Correspondence to Rafael Fão de Moura.


Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper


Cite this paper

de Moura, R.F., de Lima, J.P.C., Carro, L. (2023). Memristor-only LSTM Acceleration with Non-linear Activation Functions. In: Henkler, S., Kreutz, M., Wehrmeister, M.A., Götz, M., Rettberg, A. (eds) Designing Modern Embedded Systems: Software, Hardware, and Applications. IESS 2022. IFIP Advances in Information and Communication Technology, vol 669. Springer, Cham. https://doi.org/10.1007/978-3-031-34214-1_8


  • DOI: https://doi.org/10.1007/978-3-031-34214-1_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-34213-4

  • Online ISBN: 978-3-031-34214-1

  • eBook Packages: Computer Science, Computer Science (R0)
