Regularization and Realizability in Radial Basis Function Networks

  • Jason A. S. Freeman
  • David Saad
Part of the Operations Research/Computer Science Interfaces book series (ORCS, volume 8)

Abstract

Learning and generalization in a two-layer Radial Basis Function (RBF) network are examined within a stochastic training paradigm. Employing a Bayesian approach, expressions for the generalization error are derived under the assumption that the mechanism generating the training data (the teacher) is also an RBF network, but one whose basis function centres and widths need not coincide with those of the student network. The effect of regularization, via a weight decay term, is examined. Two cases of mismatch are studied: that in which the student has greater representational power than the teacher (over-realizable), and that in which the teacher has greater power than the student (unrealizable). Finally, simulations are performed which validate the analytic results.
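
A minimal sketch of the student-teacher setup described above, assuming Gaussian basis functions, a student whose output weights are fit by regularized least squares (quadratic weight decay of strength LAMBDA), and a Monte Carlo estimate of the generalization error against the noise-free teacher. All names and constants below (M_TEACHER, M_STUDENT, LAMBDA, etc.) are illustrative choices, not the authors' notation; setting M_STUDENT above or below M_TEACHER gives the over-realizable and unrealizable cases respectively.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(x, centres, width):
    """Gaussian basis activations: Phi[i, j] = exp(-|x_i - c_j|^2 / (2 width^2))."""
    sq_dist = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / (2.0 * width ** 2))

# Teacher RBF: fixed random centres and output weights; additive output
# noise models the stochastic training paradigm.
DIM, M_TEACHER, M_STUDENT = 2, 4, 8      # M_STUDENT > M_TEACHER: over-realizable
WIDTH, NOISE, LAMBDA = 1.0, 0.1, 1e-2    # LAMBDA is the weight-decay strength
teacher_c = rng.normal(size=(M_TEACHER, DIM))
teacher_w = rng.normal(size=M_TEACHER)

def teacher(x, noisy=False):
    y = rbf_design(x, teacher_c, WIDTH) @ teacher_w
    return y + NOISE * rng.normal(size=len(x)) if noisy else y

# Student RBF: its centres need not coincide with the teacher's; only the
# output weights are learned, here by ridge regression.
student_c = rng.normal(size=(M_STUDENT, DIM))
x_train = rng.normal(size=(200, DIM))
Phi = rbf_design(x_train, student_c, WIDTH)
y_train = teacher(x_train, noisy=True)
w = np.linalg.solve(Phi.T @ Phi + LAMBDA * np.eye(M_STUDENT), Phi.T @ y_train)

# Generalization error: mean squared deviation from the clean teacher on
# fresh inputs, estimated by Monte Carlo.
x_test = rng.normal(size=(5000, DIM))
err = np.mean((rbf_design(x_test, student_c, WIDTH) @ w - teacher(x_test)) ** 2)
print(f"estimated generalization error: {err:.4f}")
```

Sweeping LAMBDA in this sketch illustrates the usual effect of weight decay: a small amount suppresses fitting of the output noise, while too much biases the student away from the teacher.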

Keywords

Radial Basis Function, Radial Basis Function Network, Hidden Unit, Generalization Error, Neural Computation

Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Jason A. S. Freeman, Centre for Neural Systems, University of Edinburgh, Edinburgh, UK
  • David Saad, Department of Computer Science & Applied Mathematics, University of Aston, Birmingham, UK
