
Multilayer Perceptron Algorithm: Impact of Nonideal Conductance and Area-Efficient Peripheral Circuits

Abstract

Large arrays of the same nonvolatile memories (NVMs) being developed for storage-class memory (SCM) – such as phase-change memory (PCM) and resistive RAM (RRAM) – can also be used in non-Von Neumann neuromorphic computational schemes, with device conductance serving as synaptic “weight.” This allows the all-important multiply-accumulate operation within these algorithms to be performed efficiently at the weight data.
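
As an illustration of this in-memory multiply-accumulate (a sketch, not code from the chapter), the snippet below models a crossbar read: each device contributes a current equal to its conductance times the applied row voltage, and the currents summed along a column form the dot product of the input vector with that column of weights. The array sizes and value ranges are arbitrary assumptions.

    import numpy as np

    def crossbar_mac(voltages, conductances):
        # Each column current is sum_i V_i * G_ij (Ohm's law per device,
        # Kirchhoff's current law along the column): the multiply-accumulate
        # happens where the "weights" (conductances) are stored.
        return voltages @ conductances

    rng = np.random.default_rng(0)
    V = rng.uniform(0.0, 0.2, size=64)            # read voltages on 64 rows (volts)
    G = rng.uniform(1e-6, 1e-4, size=(64, 32))    # 64 x 32 array of device conductances (siemens)
    I = crossbar_mac(V, G)                        # 32 column currents (amperes)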

In contrast to other groups working on spike-timing-dependent plasticity (STDP), we have been exploring the use of NVM and other inherently analog devices for artificial neural networks (ANN) trained with the backpropagation algorithm. We recently showed a large-scale (165,000 two-PCM synapses) hardware/software demo (IEDM 2014) and analyzed the potential speed and power advantages over GPU-based training (IEDM 2015).
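
For context, a two-PCM synapse of the kind used in that demonstration encodes each signed weight as the difference of two nonnegative conductances. A minimal sketch of that convention follows; the function name, values, and gain factor are illustrative assumptions, not taken from the cited papers.

    import numpy as np

    def weight_from_pair(g_plus, g_minus, gain=1.0):
        # Signed synaptic weight from two nonnegative device conductances:
        # w = gain * (G+ - G-), so potentiating either device moves the
        # weight in opposite directions.
        return gain * (g_plus - g_minus)

    g_plus  = np.array([5e-5, 1e-5, 8e-5])    # conductances of the "+" devices (siemens)
    g_minus = np.array([1e-5, 6e-5, 8e-5])    # conductances of the "-" devices (siemens)
    print(weight_from_pair(g_plus, g_minus))  # positive, negative, and zero weights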

In this chapter, we extend this work in several useful directions. In order to develop an intuitive understanding of the impact that various features of such jump tables have on the classification performance in the ANN application, we describe studies of various artificially constructed jump tables. We then assess the impact of undesired, time-varying conductance change, including drift in PCM and leakage of analog CMOS capacitors. We investigate the use of nonfilamentary, bidirectional RRAM devices based on PrCaMnO3, with an eye to developing material variants that provide sufficiently linear conductance change. And finally, we explore trade-offs in designing peripheral circuitry, balancing simplicity and area efficiency against the impact on ANN performance.
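
To make the jump-table idea concrete, the sketch below shows one simple way such a table can be modeled: a lookup giving the conductance change produced by a single programming pulse as a function of the present conductance. The table here is artificial (a linearly shrinking potentiation step), meant only to mimic the kind of nonlinear conductance response the chapter studies; it is not measured data.

    import numpy as np

    # Artificial jump table on a normalized conductance axis [0, 1]:
    # the potentiation step shrinks as the device nears its maximum.
    g_axis = np.linspace(0.0, 1.0, 101)
    dG_set = 0.05 * (1.0 - g_axis)

    def apply_set_pulse(g):
        # Look up the conductance change for the nearest tabulated state
        # and apply it, clipping to the allowed conductance range.
        idx = int(np.abs(g_axis - g).argmin())
        return float(np.clip(g + dG_set[idx], 0.0, 1.0))

    g = 0.1
    for _ in range(20):
        g = apply_set_pulse(g)    # conductance rises by ever-smaller jumps
    print(g)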

Keywords

  • Artificial Neural Network
  • Resistive Switching
  • High Classification Accuracy
  • Switching Energy
  • Crossbar Array




References

  1. G.W. Burr, R.M. Shelby, C. di Nolfo, J.W. Jang, R.S. Shenoy, P. Narayanan, K. Virwani, E.U. Giacometti, B. Kurdi, H. Hwang, Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element, in IEEE International Electron Devices Meeting (IEDM) (IEEE, San Francisco, 2014)

  2. G.W. Burr, R.M. Shelby, S. Sidler, C. di Nolfo, J. Jang, I. Boybat, R.S. Shenoy, P. Narayanan, K. Virwani, E.U. Giacometti, B. Kurdi, H. Hwang, Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element. IEEE Trans. Electron Devices 62(11), 3498–3507 (2015)

  3. G.W. Burr, P. Narayanan, R.M. Shelby, S. Sidler, I. Boybat, C. di Nolfo, Y. Leblebici, Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: comparative performance analysis (accuracy, speed, and power), in IEEE International Electron Devices Meeting (IEDM) (IEEE, Washington, DC, 2015)

  4. J.-W. Jang, S. Park, G.W. Burr, H. Hwang, Y.-H. Jeong, Optimization of conductance change in Pr1-xCaxMnO3-based synaptic devices for neuromorphic systems. IEEE Electron Device Lett. 36(5), 457–459 (2015)

  5. D. Rumelhart, G.E. Hinton, J.L. McClelland, A general framework for parallel distributed processing, in Parallel Distributed Processing (MIT Press, Cambridge, MA, 1986)

  6. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  7. S. Sidler, I. Boybat, R.M. Shelby, P. Narayanan, J. Jang, A. Fumarola, K. Moon, Y. Leblebici, H. Hwang, G.W. Burr, Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: impact of conductance response, in IEEE European Solid-State Device Research Conference (ESSDERC) (IEEE, Lausanne, 2016)

  8. A. Pirovano, A.L. Lacaita, F. Pellizzer, S.A. Kostylev, A. Benvenuti, R. Bez, Low-field amorphous state resistance and threshold voltage drift in chalcogenide materials. IEEE Trans. Electron Devices 51(5), 714–719 (2004)

  9. A. Fumarola, P. Narayanan, L.L. Sanches, S. Sidler, J. Jang, K. Moon, R.M. Shelby, H. Hwang, G.W. Burr, Accelerating machine learning with non-volatile memory: exploring device and circuit tradeoffs, in IEEE International Conference on Rebooting Computing (ICRC) (IEEE, San Diego, 2016)

Author information

Correspondence to Geoffrey W. Burr.

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Sanches, L.L. et al. (2017). Multilayer Perceptron Algorithm: Impact of Nonideal Conductance and Area-Efficient Peripheral Circuits. In: Yu, S. (ed.) Neuro-inspired Computing Using Resistive Synaptic Devices. Springer, Cham. https://doi.org/10.1007/978-3-319-54313-0_11

  • DOI: https://doi.org/10.1007/978-3-319-54313-0_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-54312-3

  • Online ISBN: 978-3-319-54313-0

  • eBook Packages: Engineering (R0)