Generalization of Elman networks

  • Barbara Hammer
Part III: Learning: Theory and Algorithms
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1327)


The Vapnik-Chervonenkis dimension of Elman networks is infinite. Here, we give constructions that lead to lower bounds on the fat-shattering dimension which are linear, respectively of order log², in the input length, even in the case of limited weights and inputs. Since finiteness of this quantity is equivalent to learnability, there is no a priori guarantee for the generalization capability of Elman networks.
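
For orientation, the sketch below illustrates the recurrence that defines an Elman network. It is a minimal illustration, not code from the paper; the names (elman_forward, W_in, W_rec, W_out) are chosen for exposition. The context layer feeds the previous hidden state back into the hidden layer, so the same weights are reused at every time step and the network accepts inputs of arbitrary length.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def elman_forward(xs, W_in, W_rec, W_out, b_h, b_y):
        # Context units hold the previous hidden state; they start at zero.
        s = np.zeros(W_rec.shape[0])
        ys = []
        for x in xs:
            # Hidden layer sees the current input plus the fed-back context:
            # s_t = sigma(W_in x_t + W_rec s_{t-1} + b_h)
            s = sigmoid(W_in @ x + W_rec @ s + b_h)
            # Output layer reads the new hidden state:
            # y_t = sigma(W_out s_t + b_y)
            ys.append(sigmoid(W_out @ s + b_y))
        return ys

Because the weights are shared across time steps, longer input sequences correspond to deeper unrolled computations, which is what lets the shattering bounds above grow with the input length.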





Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Barbara Hammer
    University of Osnabrück, Osnabrück, Germany
