
An Analogue Neuron Suitable for a Data Frame Architecture

Chapter in VLSI for Artificial Intelligence and Neural Networks

Abstract

This paper describes the VLSI realisation of a novel neural network implementation architecture geared to the processing of frame-based data. The chief advantage of this architecture is that it eliminates the need to implement total connectivity between neural units as hard-wired connections, and it does so without sacrificing performance or functionality. A detailed description of the implementation of this architecture in CMOS, using mixed analogue and digital building blocks, is given, together with details of the system-level design.
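To make the frame-based idea concrete, the sketch below models a neuron that receives its inputs serially over a shared bus as a "data frame" rather than through one dedicated wire per connection. This is a purely behavioural illustration under my own assumptions, not the authors' CMOS circuit: the class name FrameNeuron, the (source index, value) frame encoding, and the sigmoid activation are hypothetical choices for illustration only.

```python
# Behavioural sketch of a time-multiplexed, frame-based neuron
# (an assumption-laden illustration, not the chapter's actual design).
import math
from typing import Dict, Iterable, Tuple


class FrameNeuron:
    """Sum-of-products neuron fed by a serial data frame.

    Weights are stored locally at the neuron; inputs arrive one per
    frame slot over a shared bus, so full point-to-point wiring
    between units is not required.
    """

    def __init__(self, weights: Dict[int, float]) -> None:
        self.weights = weights        # synaptic weights, keyed by source unit index
        self.accumulator = 0.0        # stands in for the analogue charge accumulator

    def present_frame(self, frame: Iterable[Tuple[int, float]]) -> float:
        """Accumulate weight * input products over one frame, then activate."""
        self.accumulator = 0.0
        for source, value in frame:
            # Each frame slot contributes one multiply-accumulate step.
            self.accumulator += self.weights.get(source, 0.0) * value
        # A sigmoid stands in for the analogue output stage.
        return 1.0 / (1.0 + math.exp(-self.accumulator))


if __name__ == "__main__":
    neuron = FrameNeuron({0: 0.8, 1: -0.4, 2: 1.2})
    frame = [(0, 0.5), (1, 1.0), (2, 0.25)]   # one complete data frame
    print(neuron.present_frame(frame))
```

Usage is shown in the `__main__` block: presenting a three-slot frame drives three multiply-accumulate steps on the single neuron, which is the multiplexing trade (time for wiring) that the abstract attributes to the data frame architecture.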




Copyright information

© 1991 Springer Science+Business Media New York

About this chapter

Cite this chapter

Waller, W.A.J., Bisset, D.L., Daniell, P.M. (1991). An Analogue Neuron Suitable for a Data Frame Architecture. In: Delgado-Frias, J.G., Moore, W.R. (eds) VLSI for Artificial Intelligence and Neural Networks. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3752-6_19


  • DOI: https://doi.org/10.1007/978-1-4615-3752-6_19

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-6671-3

  • Online ISBN: 978-1-4615-3752-6

  • eBook Packages: Springer Book Archive
