Optimised attractor neural networks with external inputs

  • Anthony N. Burkitt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 686)


Attractor neural networks resemble the brain in many key respects, such as their high connectivity, feedback, non-local storage of information, and tolerance to damage. The models are also amenable both to analytical calculation, using mean-field theory, and to computer simulation. These methods have made it possible to determine accurately properties such as the storage capacity of the network, the quality of pattern retrieval, and the robustness to damage. In this paper a biologically motivated input method for external stimuli is studied. A straightforward signal-to-noise calculation gives an indication of the properties of the network. Mean-field calculations that include the external stimuli are carried out. A threshold is introduced, whose value is chosen to optimize the performance of the network, and sparsely coded patterns are considered. The network is shown to have enhanced capacity, improved quality of retrieval, and increased robustness to the random elimination of neurons.
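To make the model class concrete, the sketch below implements a minimal Hopfield-style attractor network with sparsely coded 0/1 patterns, a covariance learning rule, a firing threshold, and an additive external input field. This is an illustrative toy, not the network analysed in the paper: the coding level `a`, threshold `theta`, and field strength `h_ext` are placeholder values, not the optimized quantities derived from the mean-field calculation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N = 500      # number of neurons
P = 20       # number of stored patterns
a = 0.1      # coding level: fraction of active neurons per pattern (illustrative)
theta = 0.6  # firing threshold (illustrative; the paper optimizes this quantity)
h_ext = 0.3  # strength of the external input field (illustrative)

# Sparsely coded binary patterns xi in {0, 1}, each with roughly a*N active units
xi = (rng.random((P, N)) < a).astype(float)

# Covariance learning rule for low-activity patterns, a standard choice
# for sparsely coded attractor networks
J = (xi - a).T @ (xi - a) / (a * (1.0 - a) * N)
np.fill_diagonal(J, 0.0)  # no self-coupling

def overlap(s, pattern):
    """Normalized overlap between the network state and a stored pattern."""
    return (pattern - a) @ s / (a * (1.0 - a) * N)

def retrieve(cue, stimulus, sweeps=10):
    """Asynchronous 0/1 dynamics with an additive external field along `stimulus`."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            field = J[i] @ s + h_ext * stimulus[i]
            s[i] = 1.0 if field > theta else 0.0
    return s

# Cue with a corrupted copy of pattern 0 while the external stimulus points at it
target = xi[0]
cue = target.copy()
flips = rng.random(N) < 0.2  # corrupt 20% of the bits
cue[flips] = 1.0 - cue[flips]

final = retrieve(cue, stimulus=target)
print(f"overlap before retrieval: {overlap(cue, target):.3f}")
print(f"overlap after retrieval:  {overlap(final, target):.3f}")
```

In a sketch of this kind the external field simply biases each neuron's local field towards the stimulated pattern, which is the qualitative mechanism behind the improved retrieval quality discussed in the abstract; the paper's contribution lies in choosing the threshold and analysing the resulting network with mean-field theory.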




Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Anthony N. Burkitt
    1. Computer Sciences Laboratory, Research School of Physical Sciences, Australian National University, Canberra, Australia
