
On-Line Learning in Multilayer Neural Networks

  • David Saad
  • Sara A. Solla
Part of the Operations Research/Computer Science Interfaces book series (ORCS, volume 8)

Abstract

We present an analytic solution to the problem of on-line gradient-descent learning for two-layer neural networks with an arbitrary number of hidden units in both teacher and student networks. The technique, demonstrated here for the case of adaptive input-to-hidden weights, becomes exact as the dimensionality of the input space increases.
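The setting described here can be illustrated with a short numerical sketch. The code below is not taken from the chapter; it sets up a teacher and student of the soft committee machine form (hidden-to-output weights fixed to +1, erf hidden units, as is common in this line of work) and trains the student's input-to-hidden weights by on-line gradient descent, using each example once and discarding it. The network sizes, learning rate, and initialisation are illustrative assumptions.

```python
# Hedged sketch of on-line (single-example) gradient descent in a teacher-student
# soft committee machine. All sizes and hyperparameters below are illustrative.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

N = 500        # input dimension; the analytic treatment becomes exact as N -> infinity
M, K = 2, 2    # hidden units in the teacher and the student, respectively
eta = 1.5      # learning rate; the weight update below carries the usual 1/N scaling

def g(x):
    # hidden-unit transfer function often used in this analysis: erf(x / sqrt(2))
    return erf(x / np.sqrt(2.0))

def g_prime(x):
    # derivative of g
    return np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x**2)

# Teacher input-to-hidden weights B (rows normalised to unit length) define the target
# rule; the student weights J are the adaptive parameters.
B = rng.standard_normal((M, N))
B /= np.linalg.norm(B, axis=1, keepdims=True)
J = 1e-3 * rng.standard_normal((K, N))

def committee(W, xi):
    # soft committee machine output: sum of hidden activations, output weights fixed to +1
    return g(W @ xi).sum(axis=0)

def generalization_error(J, B, n_test=2000):
    # Monte Carlo estimate of eps_g = <(sigma - zeta)^2> / 2 over fresh Gaussian inputs
    XI = rng.standard_normal((N, n_test))
    return 0.5 * np.mean((committee(J, XI) - committee(B, XI)) ** 2)

# On-line learning: each input is drawn fresh, used for one gradient step, then discarded.
for step in range(20000):
    xi = rng.standard_normal(N)                     # input with i.i.d. unit-variance components
    x, y = J @ xi, B @ xi                           # student and teacher hidden fields
    delta = g_prime(x) * (g(y).sum() - g(x).sum())  # per-unit error signal (zeta - sigma)
    J += (eta / N) * np.outer(delta, xi)            # gradient step on the quadratic error
    if step % 5000 == 0:
        print(step, generalization_error(J, B))
```

The 1/N scaling of the update and the unit-variance inputs keep the hidden fields of order one as N grows, which is the regime in which the order-parameter description of the dynamics applies.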

Keywords

Hidden Unit; Generalization Error; Multilayer Neural Network; Niels Bohr Institute; Committee Machine



Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • David Saad (1)
  • Sara A. Solla (2)
  1. Dept. of Computer Science and Applied Mathematics, University of Aston, Birmingham, UK
  2. The Niels Bohr Institute, CONNECT, Copenhagen, Denmark
