Part of the book series: The Springer International Series in Engineering and Computer Science ((SECS,volume 127))

Abstract

The first neural hardware, built by M. Minsky and D. Edmonds in 1951, was composed of simple discrete devices such as vacuum tubes, motors, and manually adjusted resistors. The machine successfully demonstrated the learning capability of a Perceptron. The Madaline/Adaline [1] led to the first commercial product, the Memistor, which could be used for pattern recognition and adaptive control applications. In contrast, present neurocomputing machines are composed of VLSI chips and electronic storage elements. The first general-purpose neurocomputing machine, the Mark III, works with a VAX minicomputer to accelerate neural processing that would otherwise be performed in software [2]. The simulation speed of the integrated VAX-Mark III system was reported to be approximately 29 times that of a VAX computer alone. The ANZA and Delta-1 printed-circuit boards are accelerators for IBM PC/AT personal computers; they are composed of several VLSI chips that function as the CPU, mathematical coprocessor, and memory. To make neurocomputing hardware more powerful, the design and fabrication of dedicated VLSI neural chips are highly desirable.
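
The adaptive learning mentioned above amounts to iteratively adjusting weights so that a linear unit's output tracks a desired response. As a minimal illustration only (this sketch is not from the chapter; NumPy, the learning rate, the epoch count, and the AND-function example are arbitrary choices), the LMS (delta) rule on which the Adaline is based can be written as follows:

    # Illustrative Adaline (LMS / delta rule) sketch; not taken from this chapter.
    # A single linear unit adapts its weights so its output approaches the desired
    # response, the adaptive principle behind the Adaline/Madaline hardware [1].
    import numpy as np

    def adaline_train(X, d, lr=0.1, epochs=100):
        # X: (n_samples, n_features) input patterns; d: desired outputs (+1 / -1).
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for x, target in zip(X, d):
                y = np.dot(w, x) + b      # linear output before thresholding
                err = target - y          # LMS error
                w += lr * err * x         # delta-rule weight update
                b += lr * err
        return w, b

    # Example: learn the linearly separable AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    d = np.array([-1.0, -1.0, -1.0, 1.0])
    w, b = adaline_train(X, d)
    print(np.sign(X @ w + b))             # expected: [-1. -1. -1.  1.]

In the early hardware, the same update was realized with analog components: the memistor stored each weight as an electrochemically adjustable resistance rather than as a number in memory.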


References

  1. B. Widrow, “An adaptive ‘Adaline’ neuron using chemical ‘memistors’,” Stanford Electronics Laboratories, Technical Report No. 1553–2, Oct. 17, 1960.

  2. R. Hecht-Nielsen, “Neurocomputing: picking the human brain,” IEEE Spectrum, vol. 25, no. 3, pp. 36–41, Mar. 1988.

  3. P. Treleaven, M. Pacheco, and M. Vellasco, “VLSI architectures for neural networks,” IEEE Micro Magazine, vol. 9, no. 6, pp. 8–27, Dec. 1989.

  4. M. Sivilotti, M. R. Emerling, and C. Mead, “VLSI architectures for implementation of neural networks,” Neural Networks for Computing, AIP Conf. Proc. 151, Editor: J. S. Denker, pp. 408–413, Snowbird, UT, 1986.

  5. H. P. Graf, L. D. Jackel, and W. E. Hubbard, “VLSI implementation of a neural network model,” IEEE Computer Magazine, vol. 21, no. 3, pp. 41–49, Mar. 1988.

  6. M. Holler, S. Tam, H. Castro, and R. Benson, “An electrically trainable artificial neural network (ETANN) with 10240 ‘floating gate’ synapses,” Proc. of IEEE/INNS Inter. Joint Conf. Neural Networks, vol. II, pp. 191–196, Washington D.C., June 1989.

  7. B. W. Lee and B. J. Sheu, “Design of a neural-based A/D converter using modified Hopfield network,” IEEE Jour. of Solid-State Circuits, vol. SC-24, no. 4, pp. 1129–1135, Aug. 1989.

  8. K. Goser, U. Hilleringmann, U. Rueckert, and K. Schumacher, “VLSI technologies for artificial neural networks,” IEEE Micro Magazine, vol. 9, no. 6, pp. 28–44, Dec. 1989.

  9. R. E. Howard, D. B. Schwartz, J. S. Denker, R. W. Epworth, H. P. Graf, W. E. Hubbard, L. D. Jackel, B. L. Straughn, and D. M. Tennant, “An associative memory based on an electronic neural network architecture,” IEEE Trans. on Electron Devices, vol. ED-34, no. 7, pp. 1553–1556, July 1987.

  10. P. Mueller, J. V. D. Spiegel, D. Blackman, T. Chiu, T. Clare, C. Donham, T. P. Hsieh, M. Loinaz, “Design and fabrication of VLSI components for a general purpose analog neural computer,” in Analog VLSI Implementation of Neural Systems, Editors: C. Mead and M. Ismail, Boston, MA: Kluwer Academic, pp. 135–169, 1989.

  11. T. Morishita, Y. Tamura, and T. Otsuki, “A BiCMOS analog neural network with dynamically updated weights,” Tech. Digest of IEEE Inter. Solid-State Circuits Conf., pp. 142–143, San Francisco, CA, Feb. 1990.

  12. A. F. Murray, “Pulse arithmetic in VLSI neural networks,” IEEE Micro Magazine, vol. 9, no. 6, pp. 64–74, Dec. 1989.

  13. D. E. Van den Bout and T. K. Miller III, “A digital architecture employing stochasticism for the simulation of Hopfield neural nets,” IEEE Trans. on Circuits and Systems, vol. 36, no. 5, pp. 732–746, May 1989.

  14. A. Chiang, R. Mountain, J. Reinold, J. LaFranchise, J. Gregory, and G. Lincoln, “A programmable CCD signal processor,” Tech. Digest of IEEE Inter. Solid-State Circuits Conf., pp. 146–147, San Francisco, CA, Feb. 1990.

  15. C. F. Neugebauer, A. Agranat, and A. Yariv, “Optically configured phototransistor neural networks,” Proc. of IEEE/INNS Inter. Joint Conf. on Neural Networks, vol. 2, pp. 64–67, Washington D.C., Jan. 1990.

  16. H. P. Graf and D. Henderson, “A reconfigurable CMOS neural network,” Tech. Digest of IEEE Inter. Solid-State Circuits Conf., pp. 144–145, San Francisco, CA, Feb. 1990.

  17. C. A. Mead, Analog VLSI and Neural Systems, Reading, MA: Addison-Wesley, 1989.

  18. Y. P. Tsividis, Operation and Modeling of the MOS Transistor, New York: McGraw-Hill, 1987.

  19. J. E. Tanner and C. A. Mead, “A correlating optical motion detector,” Proc. of Conf. on Advanced Research in VLSI, Dedham, MA: Artech House, 1984.

  20. M. A. Sivilotti, M. A. Mahowald, and C. A. Mead, “Real-time visual computations using analog CMOS processing arrays,” Proc. of the Stanford Advanced Research in VLSI Conference, Cambridge, MA: The MIT Press, 1987.

  21. K. Goser, U. Hilleringmann, U. Rueckert, and K. Schumacher, “VLSI technologies for artificial neural networks,” IEEE Micro Magazine, vol. 9, no. 6, pp. 28–44, Dec. 1989.

  22. J. Lazzaro and C. Mead, “Circuit models of sensory transduction in the cochlea,” in Analog VLSI Implementation of Neural Systems, Editors: C. Mead and M. Ismail, Boston, MA: Kluwer Academic, pp. 85–102, 1989.

  23. C. Mead, “Adaptive retina,” in Analog VLSI Implementation of Neural Systems, Editors: C. Mead and M. Ismail, Boston, MA: Kluwer Academic, 1989.

  24. D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing, vol. I: Foundations, Chapter 7, Cambridge, MA: The MIT Press, 1987.

  25. W. P. Jones and J. Hoskins, “Back-propagation: a generalized delta learning rule,” Byte Magazine, pp. 155–162, Oct. 1987.

  26. Y. P. Tsividis, “Analog MOS integrated circuits — certain new ideas, trends, and obstacles,” IEEE Jour. Solid-State Circuits, vol. SC-22, no. 3, pp. 317–321, June 1987.

  27. Y. S. Yee, L. M. Terman, and L. G. Heller, “A 1mV MOS comparator,” IEEE Jour. of Solid-State Circuits, vol. SC-13, no. 3, pp. 294–298, June 1978.

  28. P. R. Gray and R. G. Meyer, Analysis and Design of Analog Integrated Circuits, 2nd Ed., New York: John Wiley & Sons, 1984.

  29. H. P. Graf, L. D. Jackel, R. E. Howard, B. Straughn, J. S. Denker, W. Hubbard, D. M. Tennant, and D. Schwartz, “VLSI implementation of a neural network memory with several hundreds of neurons,” Neural Networks for Computing, AIP Conf. Proc. 151, Editor: J. S. Denker, pp. 182–187, Snowbird, UT, 1986.

  30. M. Ismail, S. V. Smith, and R. G. Beale, “A new MOSFET-C universal filter structure for VLSI,” IEEE Jour. of Solid-State Circuits, vol. SC-23, pp. 183–194, Feb. 1988.

  31. A. Agranat and A. Yariv, “A new architecture for a microelectronic implementation of neural network models,” Proc. of IEEE First Inter. Conf. on Neural Networks, vol. III, pp. 403–409, San Diego, CA, June 1987.

  32. D. Hammerstrom, “A VLSI architecture for high-performance, low-cost, on-chip learning,” Proc. of IEEE/INNS Inter. Conf. on Neural Networks, vol. II, pp. 537–544, San Diego, CA, June 1990.

  33. H. Kato, H. Yoshizawa, H. Iciki, and K. Asakawa, “A parallel neurocomputer architecture towards billion connection updates per second,” Proc. of IEEE/INNS Inter. Joint Conf. on Neural Networks, vol. II, pp. 47–50, Washington D.C., Jan. 1990.

  34. T. Kohonen, Self-Organization and Associative Memory, 2nd Ed., New York: Springer-Verlag, 1987.

  35. “A heteroassociative memory using current-mode MOS analog VLSI circuits,” IEEE Trans. on Circuits and Systems, vol. 36, no. 5, pp. 747–755, May 1989.

  36. Am99C10 256x48 Content Addressable Memory Datasheet, Advanced Micro Devices Inc., Sunnyvale, CA, Feb. 1989.


Copyright information

© 1991 Springer Science+Business Media New York

About this chapter

Cite this chapter

Lee, B.W., Sheu, B.J. (1991). Alternative VLSI Neural Chips. In: Hardware Annealing in Analog VLSI Neurocomputing. The Springer International Series in Engineering and Computer Science, vol 127. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3984-1_6

  • DOI: https://doi.org/10.1007/978-1-4615-3984-1_6

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-6780-2

  • Online ISBN: 978-1-4615-3984-1
