Distributed Representations in Radial Basis Function Networks

  • Neil Middleton
Conference paper
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)


This chapter locates Gaussian radial basis function (RBF) networks within the school of connectionist modelling that has successfully exploited the properties of multi-layer perceptrons (MLPs). In particular, the use of distributed representations, highly regarded in MLPs, is shown to arise in RBF networks as well. RBF networks are trained on a categorization task, and the way in which they learn the task is interpreted in terms of localist and distributed representations. Catastrophic forgetting, a psychologically implausible feature of MLPs and an undesirable property of engineered neural network systems, is also reconsidered.
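To make the architecture under discussion concrete, the following is a minimal sketch of a Gaussian RBF network of the standard kind (Gaussian hidden units, linear output layer fitted by least squares). It is an illustration only, not the authors' implementation; the toy categorization task, the choice of one centre per training pattern, and the width parameter `sigma` are all assumptions made here for the example.

```python
import numpy as np

def rbf_activations(X, centres, sigma):
    """Gaussian RBF hidden-unit activations for each input pattern."""
    # Squared Euclidean distance from every input to every centre
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centres, sigma):
    """Fit the linear output weights by least squares on hidden activations."""
    H = rbf_activations(X, centres, sigma)
    W, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W

def predict(X, centres, sigma, W):
    return rbf_activations(X, centres, sigma) @ W

# Toy two-class categorization task (XOR-like patterns),
# with one Gaussian centre placed on each training pattern.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
centres = X.copy()
W = train_rbf(X, y, centres, sigma=0.5)
preds = predict(X, centres, sigma=0.5, W=W)
```

Because each hidden unit responds most strongly to inputs near its centre, the hidden layer is locally tuned; the distributed character discussed in the chapter emerges in how several overlapping receptive fields jointly carry each categorization.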


Keywords: Radial Basis Function · Receptive Field · Radial Basis Function Network · Hidden Unit · Training Pattern
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag London Limited 1998

Authors and Affiliations

  • Neil Middleton
    Brunel University, UK
