# Complexity issues in neural network computations

## Abstract

In this paper we have presented new results on the complexity of computing dichotomies on sets of examples, focusing in particular on the number of units in the hidden layers. Traditionally, the number of units is bounded by functions of the number of examples. We have introduced a new parameter: the distance between the classes. These two parameters are complementary, and it remains unknown whether other parameters could be used. The bounds we derived are not tight and should be improved.

We have also shown that the use of a second hidden layer can reduce the total number of hidden units. What can be proved if more layers are added? More generally, the relationship between the capabilities of multilayer artificial neural networks and the numbers of layers and hidden units remains a completely open problem.
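As a small illustration of the kind of object studied here (not an example from the paper itself), the sketch below shows a one-hidden-layer network of linear threshold units computing a dichotomy of four points in the plane. The dichotomy is XOR, which no single threshold unit can realize, yet two hidden units suffice; all weights and thresholds are illustrative choices.

```python
def step(z):
    """Heaviside threshold unit: 1 if the weighted sum is nonnegative."""
    return 1 if z >= 0 else 0

def dichotomy(x1, x2):
    """XOR dichotomy computed by two hidden threshold units (illustrative weights)."""
    h1 = step(x1 + x2 - 0.5)   # fires when at least one input is on
    h2 = step(x1 + x2 - 1.5)   # fires only when both inputs are on
    # Output unit: class 1 iff exactly one input is on.
    return step(h1 - h2 - 0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, '->', dichotomy(*x))
```

The example also hints at why the number of hidden units matters: the dichotomy requires strictly more than zero hidden units, and how few suffice in general is exactly the quantity the bounds discussed above try to control.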
