Abstract
Recent advances in neural science and microelectronic technology provide an excellent means to boost the computational capability and efficiency of complicated engineering tasks by several orders of magnitude [1]. Because its structure is optimized for artificial intelligence applications, the neural computer is considered the most promising sixth-generation computing machine. The interdisciplinary nature of neural network studies spans many science and engineering fields, including neuroscience, cognitive science, psychology, computer science, physics, mathematics, electrical engineering, and biomedical engineering. Many digital neural coprocessors, which usually interface to personal computers and engineering workstations, have been commercialized for accelerating neuro-computation. Typical products include ANZA from Hecht-Nielsen Co., Sigma from SAIC, Odyssey from Texas Instruments Inc., and Mark III/IV from TRW Inc. [2]. However, a general-purpose digital neural coprocessor is usually much slower than special-purpose analog neural hardware that implements the neural network in an optimal fashion.
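The kind of neuro-computation these coprocessors accelerate can be illustrated by the recall phase of a Hopfield-type associative memory [18]: each neuron update is a dense multiply-accumulate over all synaptic weights, which analog hardware evaluates fully in parallel while a general-purpose digital machine evaluates it serially. The following minimal sketch (illustrative only, not code from the chapter; all names are chosen for this example) stores one pattern with a Hebbian outer-product rule [20] and recovers it from a corrupted probe.

```python
import numpy as np

def hopfield_recall(W, state, steps=20):
    """Asynchronous binary Hopfield recall: s_i <- sign(sum_j W_ij * s_j)."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            h = W[i] @ s                 # O(N) multiply-accumulate per neuron
            s[i] = 1 if h >= 0 else -1   # hard-limiting (two-state) neuron
    return s

# Store one bipolar pattern via the Hebbian outer-product rule,
# with the self-connection (diagonal) removed.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt one bit of the stored pattern, then let the network settle.
probe = pattern.copy()
probe[0] = -probe[0]
recalled = hopfield_recall(W, probe)
```

Here the N-neuron network performs N² weight multiplications per full sweep; it is this regular, massively parallel inner loop that dedicated analog implementations exploit.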
References
R. P. Lippmann, "An introduction to computing with neural nets," IEEE Acoustics, Speech, and Signal Processing Magazine, pp. 4–22, Apr. 1987.
R. Hecht-Nielsen, "Neurocomputing: picking the human brain," IEEE Spectrum, vol. 25, no. 3, pp. 36–41, Mar. 1988.
J. A. Anderson and E. Rosenfeld, Eds., Neurocomputing: Foundations of Research, Cambridge, MA: The MIT Press, 1988.
S. Grossberg, Neural Networks and Natural Intelligence, Cambridge, MA: The MIT Press, 1988.
W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
F. Rosenblatt, Principles of neurodynamics: perceptrons and the theory of brain mechanisms, Spartan Books, Washington D.C., 1961.
B. Widrow and M. E. Hoff, "Adaptive switching circuits," 1960 IRE WESCON Convention Record, Part 4, pp. 96–104, Aug. 23–26, 1960.
J. A. Anderson, “A simple neural network generating an interactive memory,” Mathematical Biosciences, vol. 14, pp. 197–220, 1972.
J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554–2558, Apr. 1982.
B. Kosko, "Adaptive bidirectional associative memories," Applied Optics, vol. 26, no. 23, pp. 4947–4960, Dec. 1987.
D. Hebb, The Organization of Behavior, New York: Wiley, 1949.
T. Kohonen, Self-Organization and Associative Memory, 2nd Ed., New York: Springer-Verlag, 1987.
R. Hecht-Nielsen, “Counter-propagation networks,” Proc. of IEEE First Inter. Conf. on Neural Networks, vol. II, pp. 19–32, San Diego, CA, 1987.
A. H. Klopf, “A drive-reinforcement model of single neuron function: an alternative to the Hebbian neural model,” Proc. of Conf. on Neural Networks for Computing, pp. 265–270, Snowbird, UT, Apr. 1986.
G. E. Hinton and T. J. Sejnowski, “A learning algorithm for Boltzmann machines,” Cognitive Science, vol. 9, pp. 147–169, 1985.
Y. P. Tsividis, “Analog MOS integrated circuits — certain new ideas, trends, and obstacles,” IEEE Jour. of Solid-State Circuits, vol. SC-22, no. 3, pp. 317–321, June 1987.
H. P. Graf and P. de Vegvar, “A CMOS implementation of a neural network model,” Proc. of the Stanford Advanced Research in VLSI Conference, pp. 351–362, Cambridge, MA: The MIT Press, 1987.
B. W. Lee and B. J. Sheu, “Design of a neural-based A/D converter using modified Hopfield network,” IEEE Jour. of Solid-State Circuits, vol. SC-24, no. 4, pp. 1129–1135, Aug. 1989.
G. Dahlquist and Å. Björck (translated by N. Anderson), Numerical Methods, pp. 269–273, Englewood Cliffs, NJ: Prentice-Hall, 1974.
J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, vol. 81, pp. 3088–3092, May 1984.
P. Treleaven, M. Pacheco, and M. Vellasco, “VLSI architectures for neural networks,” IEEE Micro Magazine, vol. 9, no. 6, pp. 8–27, Dec. 1989.
C. A. Mead, Analog VLSI and Neural Systems, New York: Addison-Wesley, 1989.
R. E. Howard, D. B. Schwartz, J. S. Denker, R. W. Epworth, H. P. Graf, W. E. Hubbard, L. D. Jackel, B. L. Straughn, and D. M. Tennant, “An associative memory based on an electronic neural network architecture,” IEEE Trans. on Electron Devices, vol. ED-34, no. 7, pp. 1553–1556, July 1987.
P. Mueller, J. V. D. Spiegel, D. Blackman, T. Chiu, T. Clare, C. Donham, T. P. Hsieh, M. Loinaz, “Design and fabrication of VLSI components for a general purpose analog neural computer,” in Analog VLSI Implementation of Neural Systems, Editors: C. Mead and M. Ismail, Boston, MA: Kluwer Academic, pp. 135–169, 1989.
M. Holler, S. Tam, H. Castro, and R. Benson, "An electrically trainable artificial neural network (ETANN) with 10240 'floating gate' synapses," Proc. of IEEE/INNS Inter. Joint Conf. on Neural Networks, vol. II, pp. 191–196, Washington D.C., June 1989.
T. Morishita, Y. Tamura, and T. Otsuki, "A BiCMOS analog neural network with dynamically updated weights," Tech. Digest of IEEE Inter. Solid-State Circuits Conf., pp. 142–143, San Francisco, CA, Feb. 1990.
A. F. Murray, "Pulse arithmetic in VLSI neural networks," IEEE Micro Magazine, vol. 9, no. 6, pp. 64–74, Dec. 1989.
D. E. Van den Bout and T. K. Miller III, “A digital architecture employing stochasticism for the simulation of Hopfield neural nets,” IEEE Trans. on Circuits and Systems, vol. 36, no. 5, pp. 732–746, May 1989.
M. S. Tomlinson Jr., D. J. Walker, M. A. Sivilotti, “A digital neural network architecture for VLSI,” Proc. of IEEE/INNS Inter. Joint Conf. on Neural Networks, vol. II, pp. 545–550, San Diego, CA, June 1990.
© 1991 Springer Science+Business Media New York
Lee, B.W., Sheu, B.J. (1991). Introduction. In: Hardware Annealing in Analog VLSI Neurocomputing. The Springer International Series in Engineering and Computer Science, vol 127. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3984-1_1
Print ISBN: 978-1-4613-6780-2
Online ISBN: 978-1-4615-3984-1