
Implementing Neural Networks with the Associative String Processor

Chapter in: VLSI for Artificial Intelligence and Neural Networks

Abstract

The resurgence of activity in the area of neural computation is stimulated both by the increasing frequency with which traditional computational paradigms prove inefficient at handling fuzzy problems of large dimensionality (e.g. pattern recognition, associative information retrieval) and by technological advances. Indeed, with the huge strides in VLSI and WSI technologies and the emergence of electro-optics, massively parallel systems that were unrealisable only a few years ago are coming within reach.

The paper details the efficient implementation of two neural network models (i.e. Hopfield's relaxation model and the back-propagation model) on the ASP (Associative String Processor), a massively parallel, programmable, fault-tolerant architecture which can efficiently support low-MIMD/high-SIMD and other parallel computation paradigms.
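
For readers unfamiliar with the relaxation model mentioned above, the sketch below shows one synchronous Hopfield update written in data-parallel NumPy. It is a minimal illustration of the model itself, not the ASP mapping described in the chapter; the weight matrix W, the bipolar state vector s and the signum activation are generic assumptions about the model rather than details taken from the paper.

    import numpy as np

    def hopfield_step(W, s):
        # One synchronous relaxation step: W is the (N, N) symmetric weight
        # matrix with zero diagonal, s is the (N,) bipolar (+1/-1) state vector.
        # The matrix-vector product evaluates all N*N interconnections in a
        # data-parallel fashion, loosely echoing the SIMD-style evaluation
        # that the chapter maps onto the ASP.
        h = W @ s                        # net input to every neuron
        return np.where(h >= 0, 1, -1)   # signum activation

    def relax(W, s, max_iters=100):
        # Iterate until the state vector stops changing (a fixed point).
        for _ in range(max_iters):
            s_next = hopfield_step(W, s)
            if np.array_equal(s_next, s):
                return s_next
            s = s_next
        return s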

Indeed, the paper describes the mapping of the two neural networks onto the ASP, details the steps required to execute the network computations and reports the performance of the ASP implementations, which achieve computational rates of Giga-interconnections/sec (i.e. 10⁹ interconnections per second).
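
To make the Giga-interconnections/sec figure concrete, the short calculation below shows how such a rate is conventionally counted for a fully connected network: each update of an N-neuron network evaluates N × N interconnections. The network size and update time used here are hypothetical round numbers chosen purely for illustration, not measurements reported in the paper.

    # Hypothetical figures for illustration only; not measurements from the paper.
    n_neurons = 1024                           # fully connected: n*n interconnections per update
    connections_per_update = n_neurons ** 2    # 1,048,576 interconnections
    update_time_s = 1.0e-3                     # assumed time for one full network update
    rate = connections_per_update / update_time_s
    print(f"{rate:.2e} interconnections/sec")  # ~1.05e+09, i.e. about a Giga-interconnection/sec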



References

  • Lea, R. M., "The ASP: a cost-effective parallel microcomputer", IEEE Micro, pp. 10–29, October 1988.

  • Hopfield, J. J., "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proceedings of the National Academy of Sciences, USA, Vol. 79, pp. 2554–2558, April 1982.

  • Jones, W. P. and Hoskins, J., "Back-Propagation", BYTE, pp. 155–162, October 1987.




Copyright information

© 1991 Springer Science+Business Media New York

About this chapter

Cite this chapter

Krikelis, A., Grözinger, M. (1991). Implementing Neural Networks with the Associative String Processor. In: Delgado-Frias, J.G., Moore, W.R. (eds) VLSI for Artificial Intelligence and Neural Networks. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3752-6_39


  • DOI: https://doi.org/10.1007/978-1-4615-3752-6_39

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-6671-3

  • Online ISBN: 978-1-4615-3752-6

  • eBook Packages: Springer Book Archive
