Complexity issues in neural network computations

  • Michel Cosnard
  • Pascal Koiran
  • Hélène Paugam-Moisy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 583)

Abstract

In this paper we describe new results on the complexity of computing dichotomies and dichotomies on examples, in particular bounds on the number of units in the hidden layers. Traditionally the number of hidden units is bounded by functions of the number of examples. We have introduced a new parameter: the distance between the classes. These two parameters are complementary, and it is still unknown whether other parameters could be used. The bounds that we have derived are not tight and should be improved.
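
For illustration only (the function name and the exact definition below are ours, not the paper's), the distance between the classes can be read as the minimum Euclidean distance between any pair of examples taken from opposite classes. A minimal NumPy sketch under that assumption:

    import numpy as np

    # Illustrative sketch: interpret "distance between the classes" as the
    # minimum Euclidean distance between a positive and a negative example.
    # The paper's formal definition may differ.
    def class_distance(positives, negatives):
        diffs = positives[:, None, :] - negatives[None, :, :]  # shape (p, n, d)
        return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

    # Example: two well-separated classes in the plane.
    pos = np.array([[0.0, 0.0], [0.0, 1.0]])
    neg = np.array([[3.0, 0.0], [4.0, 1.0]])
    print(class_distance(pos, neg))  # -> 3.0

The abstract's point is that this separation parameter complements the example count when bounding the size of the hidden layers.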

We have also shown that the use of a second hidden layer can reduce the total number of hidden units. What can be proved if more layers are added? More generally, the relationship between the capabilities of multilayer artificial neural networks and their number of layers and of hidden units is still a completely open problem.
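
To fix ideas, here is the kind of object these counts refer to: a feedforward network of linear threshold (McCulloch-Pitts) units computing a dichotomy of a finite example set. The weights below are hand-picked for the XOR dichotomy and are purely illustrative, not a construction from the paper:

    import numpy as np

    def threshold(x):
        # Heaviside step: the classic linear threshold unit.
        return (x >= 0).astype(float)

    def two_layer_net(x, W1, b1, w2, b2):
        # One hidden layer of threshold units, then a threshold output unit.
        h = threshold(W1 @ x + b1)
        return threshold(w2 @ h + b2)

    # XOR: two hidden threshold units suffice for this 4-example dichotomy.
    W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])   # h1 = [x1+x2 >= 0.5], h2 = [x1+x2 >= 1.5]
    w2 = np.array([1.0, -2.0])
    b2 = -0.5                     # output = [h1 - 2*h2 >= 0.5]
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, int(two_layer_net(np.array(x, float), W1, b1, w2, b2)))
    # -> 0, 1, 1, 0

The open question stated above is how such hidden-unit counts behave when a second (or further) hidden layer of threshold units is allowed.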

Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Michel Cosnard (1)
  • Pascal Koiran (1)
  • Hélène Paugam-Moisy (1)

  1. Ecole Normale Supérieure de Lyon, Lyon Cedex 07, France