Role of Function Complexity and Network Size in the Generalization Ability of Feedforward Networks

  • Leonardo Franco
  • José M. Jerez
  • José M. Bravo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3512)

Abstract

The generalization ability of feedforward network architectures of different sizes, with one and two hidden layers, trained with backpropagation combined with early stopping, is analyzed. The dependence of the generalization process on the complexity of the function being implemented is studied using a recently introduced measure for the complexity of Boolean functions. For the whole set of symmetric Boolean functions, it is found that large neural networks generalize better than smaller ones over a wide range of function complexities, and that introducing a small second hidden layer of neurons further improves generalization for very complex functions. Quasi-randomly generated Boolean functions were also analyzed; in this case, the generalization ability shows little variability across network sizes, for both one- and two-hidden-layer architectures.
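
As a rough illustration of the experimental setup described above, the sketch below trains one- and two-hidden-layer feedforward networks with gradient-descent backpropagation and early stopping on a symmetric Boolean function, measuring generalization as accuracy on the half of the input space withheld from training. It is a minimal sketch only: the majority function, the layer sizes, and all training parameters are assumptions made for the example rather than the paper's actual settings, and scikit-learn's MLPClassifier stands in for the original implementation.

# Minimal sketch: small feedforward nets with early stopping on a symmetric
# Boolean function (majority). All settings here are illustrative assumptions.
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

N = 10  # number of Boolean input variables

# Enumerate all 2^N input patterns and label them with a symmetric function
# (majority depends only on the number of 1s in the input).
X = np.array(list(product([0, 1], repeat=N)))
y = (X.sum(axis=1) > N // 2).astype(int)

# Train on half of the input space; the unseen half measures generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.5, random_state=0, stratify=y)

for hidden in [(5,), (20,), (80,), (20, 5)]:  # one- and two-hidden-layer sizes
    net = MLPClassifier(hidden_layer_sizes=hidden,
                        activation='logistic',
                        solver='sgd',            # gradient-descent backpropagation
                        learning_rate_init=0.1,
                        early_stopping=True,     # stop when a held-out validation split stops improving
                        validation_fraction=0.2,
                        max_iter=2000,
                        random_state=0)
    net.fit(X_train, y_train)
    print(hidden, 'test accuracy:', round(net.score(X_test, y_test), 3))

Comparing the printed test accuracies across the listed architectures mirrors, in miniature, the size and depth comparison carried out in the paper.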

Keywords

Hidden Layer, Boolean Function, Network Size, Symmetric Function, Hidden Neuron


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Leonardo Franco (1, 2)
  • José M. Jerez (2)
  • José M. Bravo (2)
  1. Dept. of Experimental Psychology, University of Oxford, Oxford, UK
  2. Departamento de Lenguajes y Ciencias de la Computación, University of Málaga, Málaga, Spain
