Capacity of structured multilayer networks with shared weights

  • Sabine Kröner
  • Reinhard Moratz
Poster Presentations 1, Theory III: Generalization
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1112)

Abstract

The capacity, or Vapnik-Chervonenkis dimension, of a feedforward neural architecture is the maximum number of input patterns that can be mapped correctly to arbitrary fixed outputs. So far it is known that the upper bound on the capacity of two-layer feedforward architectures with independent weights depends on the number of connections in the neural architecture [1].

In this paper we focus on the capacity of multilayer feedforward networks structured by shared weights. We show that these structured architectures can be transformed into equivalent conventional multilayer feedforward architectures. Known estimates of the capacity are extended to obtain upper bounds for the capacity of these general multilayer feedforward architectures. As a result, an upper bound for the capacity of structured architectures is derived that increases with the number of independent network parameters. Since weight sharing reduces the number of independent parameters of a fixed neural architecture, it therefore leads to a significant reduction of the upper bound on the capacity.
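To make the effect concrete, the following minimal sketch (in Python; not taken from the paper) counts independent parameters for a hypothetical fully connected layer versus a shared-weight layer of the same shape and evaluates a Baum-Haussler style upper bound 2*W*log2(e*N), where W is the number of independent weights and N the number of computation nodes [1]. The layer sizes are illustrative assumptions only.

    # Illustrative only: a Baum-Haussler style bound 2*W*log2(e*N) on the
    # capacity (VC dimension), where W = number of independent weights and
    # N = number of computation nodes.
    import math

    def capacity_upper_bound(num_independent_weights, num_nodes):
        return 2 * num_independent_weights * math.log2(math.e * num_nodes)

    nodes = 50                   # hypothetical hidden layer of 50 units
    full_weights = 100 * 50      # fully connected: 100 inputs, no sharing
    shared_weights = 10          # shared-weight layer: one 10-weight kernel
                                 # reused by all 50 units

    print(capacity_upper_bound(full_weights, nodes))    # large bound
    print(capacity_upper_bound(shared_weights, nodes))  # far smaller bound

Only the count of independent parameters enters such a bound, which is why tying weights in a fixed architecture lowers the capacity estimate even though the number of connections stays the same.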


References

  1. E. B. Baum, D. Haussler: What Size Net Gives Valid Generalization?, Advances in Neural Information Processing Systems, D. Touretzky (Ed.), Morgan Kaufmann, (1989).
  2. T. M. Cover: Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition, IEEE Transactions on Electronic Computers, Vol. 14, 326–334, (1965).
  3. P. Koiran, E. D. Sontag: Neural Networks with Quadratic VC Dimension, NeuroCOLT Technical Report Series, NC-TR-95-044, London, (1995).
  4. S. Kröner, R. Moratz, H. Burkhardt: An Adaptive Invariant Transform Using Neural Network Techniques, Proceedings of EUSIPCO 94, 7th European Signal Processing Conf., Holt et al. (Eds.), Vol. III, 1489–1491, Edinburgh, (1994).
  5. Y. le Cun: Generalization and Network Design Strategies, Connectionism in Perspective, R. Pfeiffer, Z. Schreter, F. Fogelman-Soulié, L. Steels (Eds.), Elsevier Science Publishers B.V., 143–155, North-Holland, (1989).
  6. W. Maass: Vapnik-Chervonenkis Dimension of Neural Nets, Preprint, Technische Universität Graz, (1994).
  7. G. J. Mitchison, R. M. Durbin: Bounds on the Learning Capacity of Some Multi-Layer Networks, Biological Cybernetics, Vol. 60, No. 5, 345–356, (1989).
  8. P. Rieper: Zur Speicherfähigkeit vorwärtsgerichteter Architekturen künstlicher neuronaler Netze mit gekoppelten Knoten (On the Storage Capacity of Feedforward Architectures of Artificial Neural Networks with Coupled Nodes), Diploma thesis, Universität Hamburg, (1994).
  9. V. Vapnik: Estimation of Dependences Based on Empirical Data, Springer-Verlag, Berlin, (1982).
  10. A. Waibel: Modular Construction of Time-Delay Neural Networks for Speech Recognition, Neural Computation, Vol. 1, 39–46, (1989).

Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Sabine Kröner (1)
  • Reinhard Moratz (2)
  1. Technische Informatik I, TU Hamburg-Harburg, Hamburg, Germany
  2. AG Angewandte Informatik, Universität Bielefeld, Bielefeld, Germany
