
VLSI Fully Connected Neural Networks for the Implementation of Other Topologies

  • J. Carrabina
  • F. Lisa
  • N. Avellana
  • C. J. Perez-Vicente
  • E. Valderrama
Hardware Implementations
Part of the Lecture Notes in Computer Science book series (LNCS, volume 540)

Abstract

In this paper we study alternatives for implementing any topology on a fully connected neural network. This strategy is based on the fact that, at present, most programmable VLSI neural networks implement this topology, although the associated computations use different approaches such as analog computation and systolic or sequential digital computation. An efficient correspondence between a fully connected neural network and any other type of network can be established by following one of these strategies: transparent strategies, in which the network is processed as a fully connected network independently of its own topology, and specialized strategies, in which submatrices are taken into account to accelerate the computations. The difference between the two strategies essentially concerns the trade-off between the speed of the recall phase and the complexity of the hardware. Finally, we present two chips that implement a sequential dynamics with probabilistic criteria, one following each strategy, and evaluate the advantages and drawbacks of each.
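As an illustration of the two mapping strategies described in the abstract, the sketch below (Python with NumPy) embeds a small layered network into the weight matrix of a fully connected network. The layer sizes, random weights, and variable names are illustrative assumptions of ours, not the implementation used in the chips; the point is only that a transparent recall step over the full (zero-padded) matrix and a specialized step restricted to the non-zero submatrices compute the same result.

import numpy as np

# Assumed example: a 3-layer feedforward topology embedded in a fully
# connected network. Connections absent from the original topology are
# simply zero ("transparent" strategy); the "specialized" strategy keeps
# track of the non-zero blocks and visits only those during recall.

layer_sizes = [4, 3, 2]                  # illustrative topology
N = sum(layer_sizes)                     # neurons in the equivalent fully connected net

rng = np.random.default_rng(0)
W_full = np.zeros((N, N))                # fully connected weight matrix, zero-initialised

# Place each layer-to-layer weight matrix as an off-diagonal block.
offsets = np.cumsum([0] + layer_sizes)
blocks = []
for l in range(len(layer_sizes) - 1):
    W_l = rng.normal(size=(layer_sizes[l + 1], layer_sizes[l]))   # weights of layer l -> l+1
    r0, r1 = offsets[l + 1], offsets[l + 2]
    c0, c1 = offsets[l], offsets[l + 1]
    W_full[r0:r1, c0:c1] = W_l
    blocks.append(((r0, r1), (c0, c1), W_l))

state = rng.choice([-1.0, 1.0], size=N)  # example network state

# Transparent recall step: the hardware processes the full matrix, zeros included.
h_transparent = W_full @ state

# Specialized recall step: only the non-zero submatrices are visited.
h_specialized = np.zeros(N)
for (r0, r1), (c0, c1), W_l in blocks:
    h_specialized[r0:r1] += W_l @ state[c0:c1]

assert np.allclose(h_transparent, h_specialized)

The specialized loop performs fewer multiply-accumulate operations (it skips the zero blocks) at the cost of extra bookkeeping, which mirrors the speed-versus-hardware-complexity balance discussed in the paper.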



Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • J. Carrabina¹
  • F. Lisa¹
  • N. Avellana¹
  • C. J. Perez-Vicente¹
  • E. Valderrama¹

  1. Centre Nacional de Microelectrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
