A Limited-Interconnect, Highly Layered Synthetic Neural Architecture

Chapter in VLSI for Artificial Intelligence

Part of the book series: The Kluwer International Series in Engineering and Computer Science (SECS, volume 68)

Abstract

Neuromorphic, i.e., neural-network-inspired, software simulations have recently produced encouraging results in speech synthesis, word recognition, and image processing. Real-time applications such as control and signal processing, however, require hardware implementations of neuromorphic systems. Two disparate groups are interested in VLSI hardware implementations of neural networks. The first seeks electronic implementations of neural networks and uses standard or custom VLSI chips for the design. The second wants to build fault-tolerant, adaptive VLSI chips and is much less concerned with whether the design rigorously duplicates the neural models. In either case, the central issue in constructing an electronic neural network is that the design constraints of VLSI differ from those of biology (Walker and Akers 1988). In particular, the high fan-ins and fan-outs of biology impose connectivity requirements such that an electronic implementation of a highly interconnected biological neural network of just a few thousand neurons would exceed the current, and even projected, interconnection density of ULSI systems. Fortunately, highly layered, limited-interconnect networks can be formed that are functionally equivalent to highly connected systems (Akers et al. 1988), and such architectures are especially well suited to VLSI implementation. The objective of our work is to design highly layered, limited-interconnect synthetic neural architectures and to develop training algorithms for systems built from these chips. These networks are specifically designed to scale to tens of thousands of processing elements on current production-size dies.
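The abstract's connectivity argument is easy to quantify. The sketch below (a back-of-the-envelope estimate, not code from the chapter; the fan-in of 4 and the neuron counts are illustrative assumptions) contrasts the O(N²) wire count of a fully interconnected N-neuron network with the O(N·F) count of a layered network whose neurons each receive a fixed fan-in F:

```python
# Back-of-the-envelope wiring comparison. The fan-in of 4 and the neuron
# counts are illustrative assumptions, not figures taken from the chapter.

def full_interconnect(n_neurons: int) -> int:
    """Directed connections if every neuron feeds every other neuron."""
    return n_neurons * (n_neurons - 1)

def layered_limited(n_neurons: int, fan_in: int) -> int:
    """Connections if each neuron receives only `fan_in` inputs from the
    previous layer, as in a limited-interconnect layered architecture."""
    return n_neurons * fan_in

for n in (1_000, 10_000, 50_000):
    print(f"{n:>6} neurons: full = {full_interconnect(n):>13,}   "
          f"limited (fan-in 4) = {layered_limited(n, 4):>9,}")
```

Even at a few thousand neurons, full interconnection already demands millions of wires, while the limited-interconnect count grows only linearly in the number of neurons, which is what makes the layered approach compatible with realistic VLSI interconnect densities.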


References

  • Abu-Mostafa, Y.S., and Psaltis, D., “Optical Neural Computers,” Scientific American, vol. 256, no. 3, March 1987.

  • Akers, L.A., Walker, M.R., Ferry, D.K., and Grondin, R.O., “Limited Interconnectivity in Synthetic Neural Systems,” in Rolf Eckmiller and Christoph v.d. Malsburg, eds., Neural Computers, Springer-Verlag, 1988.

  • Baum, E.B., “On the Capabilities of Multilayer Perceptrons,” IEEE Conference on Neural Information Processing Systems — Natural and Synthetic, Denver, CO, November 1987.

  • Ferry, D.K., Akers, L.A., and Greeneich, E., Ultra Large Scale Integrated Microelectronics, Prentice-Hall, 1988.

  • Hecht-Nielsen, R., “Kolmogorov’s Mapping Neural Network Existence Theorem,” Proceedings of the IEEE First International Conference on Neural Networks, vol. 3, pp. 11–12, 1987.

  • Lippmann, R.P., “An Introduction to Computing with Neural Nets,” IEEE ASSP Magazine, pp. 4–22, April 1987.

  • McClelland, J.L., “Resource Requirements of Standard and Programmable Nets,” in D.E. Rumelhart and J.L. McClelland, eds., Parallel Distributed Processing, Volume 1: Foundations, MIT Press, 1986.

  • Myers, G.J., Yu, A.Y., and House, D.L., “Microprocessor Technology Trends,” Proceedings of the IEEE, vol. 74, no. 12, p. 1605, December 1986.

  • Plaut, D.C., Nowlan, S.J., and Hinton, G.E., “Experiments on Learning by Back Propagation,” Carnegie-Mellon University, Department of Computer Science technical report, June 1986.

  • Walker, M.R., and Akers, L.A., “A Neuromorphic Approach to Adaptive Digital Circuitry,” Proceedings of the Seventh Annual International IEEE Phoenix Conference on Computers and Communications, p. 19, March 16, 1988.

  • Widrow, B., and Stearns, S.D., Adaptive Signal Processing, Prentice-Hall, 1985.


Copyright information

© 1989 Kluwer Academic Publishers

About this chapter

Cite this chapter

Akers, L., Walker, M., Ferry, D., Grondin, R. (1989). A Limited-Interconnect, Highly Layered Synthetic Neural Architecture. In: Delgado-Frias, J.G., Moore, W.R. (eds) VLSI for Artificial Intelligence. The Kluwer International Series in Engineering and Computer Science, vol 68. Springer, Boston, MA. https://doi.org/10.1007/978-1-4613-1619-0_20


  • DOI: https://doi.org/10.1007/978-1-4613-1619-0_20

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4612-8895-4

  • Online ISBN: 978-1-4613-1619-0

