Layered neural networks as universal approximators

  • I. Ciuca
  • J. A. Ware
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1226)


The paper considers Ito's results on the approximation capability of layered neural networks with sigmoid units on two layers. First, the paper recalls one of Ito's main results. Then Ito's results concerning the Heaviside function as a sigmoid function are extended to the signum function. For Heaviside functions, a layered neural network implementation is presented that is also valid for signum functions. The focus of the paper is the implementation of Ito's approximators as four-layer feed-forward neural networks.
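
As a concrete illustration of the step-function building block behind such approximators, the sketch below approximates a univariate continuous function by a finite sum of shifted Heaviside units, and uses the identity H(x) = (sgn(x) + 1)/2 to show why results for Heaviside units carry over to signum units. This is a minimal one-dimensional, single-hidden-layer sketch, not the paper's full four-layer multivariate construction; the target function, grid size, and the helper name step_approximator are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def heaviside(x):
    # H(x) = (sgn(x) + 1) / 2, so a signum unit realises a Heaviside unit
    # up to an affine rescaling of the output weights.
    return (np.sign(x) + 1.0) / 2.0

def step_approximator(f, n, a=0.0, b=1.0):
    """Staircase approximation g(x) = f(a) + sum_i c_i * H(x - t_i) on [a, b]."""
    t = np.linspace(a, b, n + 1)      # knots t_0 = a < t_1 < ... < t_n = b
    c = np.diff(f(t))                 # jump heights c_i = f(t_i) - f(t_{i-1})
    def g(x):
        x = np.atleast_1d(x)
        # One "hidden layer" of n Heaviside units, linearly combined.
        return f(a) + heaviside(x[:, None] - t[1:][None, :]) @ c
    return g

# Illustrative target: the sup-norm error shrinks as the number of units grows.
f = np.sin
g = step_approximator(f, n=200, a=0.0, b=np.pi)
x = np.linspace(0.0, np.pi, 1000)
print(np.max(np.abs(f(x) - g(x))))    # roughly pi/200 for this target
```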


References

  1. Cybenko G. 1989. Approximation by superpositions of a sigmoidal function. Math. Control Signals Systems 2, 303–314.
  2. Funahashi K. 1989. On the approximate realization of continuous mappings by neural networks. Neural Networks 2, 183–192.
  3. Girosi F. and Poggio T. 1989. Representation properties of networks: Kolmogorov's theorem is irrelevant. Neural Computation 1, 456–469.
  4. Hecht-Nielsen R. 1987. Kolmogorov's mapping neural network existence theorem. IEEE First Conf. Neural Networks III, 11–13.
  5. Hecht-Nielsen R. 1989. Theory of the back propagation neural network. '89 IJCNN Proc. I, 593–605.
  6. Hornik K. 1991. Approximation capabilities of multilayer feedforward networks. Neural Networks 4, 251–257.
  7. Hornik K., Stinchcombe M. and White H. 1989. Multilayer feedforward networks are universal approximators. Neural Networks 2, 359–366.
  8. Hornik K., Stinchcombe M. and White H. 1990. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks 3, 551–560.
  9. Ito Y. 1991a. Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory. Neural Networks 4, 385–394.
  10. Ito Y. 1991b. Approximation of functions on a compact set by finite sums of a sigmoid function without scaling. Neural Networks 4, 817–826.
  11. Ito Y. 1992. Approximation of continuous functions on R^d by linear combinations of shifted rotations of a sigmoid function with and without scaling. Neural Networks 5, 105–115.
  12. Ito Y. 1993. Approximations of differentiable functions and their derivatives on compact sets by neural networks. Math. Scient. 18, 11–19.
  13. Ito Y. 1994. Approximation capability of layered neural networks with sigmoid units on two layers. Neural Computation 6, 1233–1243.
  14. Kolmogorov A. N. 1957. On the representation of continuous functions of many variables by superpositions of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR 114 (5), 953–956.
  15. Kurkova V. 1991. Kolmogorov's theorem is relevant. Neural Computation 3, 617–622.
  16. Kurkova V. 1992. Kolmogorov's theorem and multilayer neural networks. Neural Networks 5, 501–506.
  17. Stinchcombe M. and White H. 1989. Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions. '89 IJCNN Proc. I, 613–617.
  18. Cardaliaguet P. and Euvrard G. 1992. Approximation of a function and its derivatives with a neural network. Neural Networks 5, 207–220.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • I. Ciuca (1)
  • J. A. Ware (2)
  1. Research Institute for Informatics, Bucharest
  2. Glamorgan University, Pontypridd, UK
