
An analog feed-forward neural network with on-chip learning


Abstract

An analog continuous-time neural network with on-chip learning is presented. The 4-3-2 feed-forward network with a modified back-propagation learning scheme was built using micropower building blocks in a double-poly, double-metal 2 μm CMOS process. The weights are stored in non-volatile, UV-light-programmable analog floating-gate memories. A differential signal representation is used to design simple building blocks which may be used to build very large neural networks. Measured results from on-chip learning are shown and an example of generalization is demonstrated. The use of micropower building blocks allows very large networks to be implemented without significant power consumption.
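As a rough software illustration of the topology and learning rule named in the abstract, the sketch below trains a 4-3-2 feed-forward network with plain back-propagation in NumPy. It is an assumption-laden analogy only: the chip realizes a modified back-propagation scheme in continuous-time analog circuitry with floating-gate weight storage and differential signalling, none of which is modelled here. The tanh activation, learning rate, and toy patterns are illustrative choices, not taken from the paper.

```python
# Minimal sketch: a 4-3-2 feed-forward network trained with standard
# back-propagation (squared-error loss). Illustrative only; it does not
# model the paper's analog, continuous-time, floating-gate implementation.

import numpy as np

rng = np.random.default_rng(0)

# Weight matrices for the 4-3-2 topology (bias folded in as an extra input).
W1 = rng.normal(scale=0.5, size=(3, 5))   # hidden layer: 3 units, 4 inputs + bias
W2 = rng.normal(scale=0.5, size=(2, 4))   # output layer: 2 units, 3 hidden + bias

def forward(x):
    """Forward pass; returns hidden and output activations."""
    h = np.tanh(W1 @ np.append(x, 1.0))   # hidden activations
    y = np.tanh(W2 @ np.append(h, 1.0))   # output activations
    return h, y

def train_step(x, target, lr=0.1):
    """One back-propagation update on a single pattern."""
    global W1, W2
    h, y = forward(x)
    # Output-layer error term: (y - t) * tanh'(net).
    delta_out = (y - target) * (1.0 - y**2)
    # Hidden-layer error term, back-propagated through W2 (bias column excluded).
    delta_hid = (W2[:, :3].T @ delta_out) * (1.0 - h**2)
    # Gradient-descent weight updates.
    W2 -= lr * np.outer(delta_out, np.append(h, 1.0))
    W1 -= lr * np.outer(delta_hid, np.append(x, 1.0))
    return 0.5 * np.sum((y - target)**2)

# Toy usage: learn a small mapping from two 4-bit patterns to 2 targets.
patterns = [(np.array([0., 0., 1., 1.]), np.array([ 0.9, -0.9])),
            (np.array([1., 1., 0., 0.]), np.array([-0.9,  0.9]))]
for epoch in range(2000):
    for x, t in patterns:
        train_step(x, t)
print([forward(x)[1].round(2) for x, _ in patterns])
```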




Cite this article

Berg, Y., Sigvartsen, R.L., Lande, T.S. et al. An analog feed-forward neural network with on-chip learning. Analog Integr Circ Sig Process 9, 65–75 (1996). https://doi.org/10.1007/BF00158853
