Integral Transform and Its Application to Neural Network Approximation

  • Feng-jun Li
  • Zongben Xu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


Neural networks are widely used to approximate nonlinear functions. To study their approximation capability, a theorem on the integral representation of functions is developed by means of an integral transform. Using this representation, an estimate of the approximation order of bell-shaped neural networks is obtained. The result reveals that the approximation accuracy of bell-shaped neural networks depends not only on the number of hidden neurons but also on the smoothness of the target function.
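The dependence on both neuron count and smoothness can be illustrated numerically. The sketch below (an illustration, not the paper's construction) approximates a smooth target by a superposition of Gaussian "bell-shaped" units with fixed centers, fitting only the output weights by least squares, and reports how the sup-norm error shrinks as the number of hidden neurons grows; the target function, centers, and width are all assumptions chosen for the demonstration.

```python
import numpy as np

# Bell-shaped activation: a Gaussian bump (illustrative choice).
def bell(x, center, width=0.1):
    return np.exp(-((x - center) / width) ** 2)

def approximate(target, n_neurons, x):
    # Evenly spaced centers on [0, 1]; only output weights are fit
    # (linear least squares), mimicking a one-hidden-layer network.
    centers = np.linspace(0.0, 1.0, n_neurons)
    phi = np.stack([bell(x, c) for c in centers], axis=1)  # design matrix
    w, *_ = np.linalg.lstsq(phi, target(x), rcond=None)
    return phi @ w

x = np.linspace(0.0, 1.0, 400)
f = lambda t: np.sin(2 * np.pi * t)  # a smooth target function (assumption)

for n in (5, 10, 20, 40):
    err = np.max(np.abs(approximate(f, n, x) - f(x)))
    print(f"n = {n:3d} hidden neurons, sup-norm error = {err:.2e}")
```

With a smooth target the error decays quickly as neurons are added; a rougher target (e.g. one with a kink) would decay more slowly, which is the qualitative content of the approximation-order result.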


Keywords: Neural Network · Hidden Neuron · Sigmoidal Function · Target Function · Integral Transform





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Feng-jun Li (1)
  • Zongben Xu (1)
  1. Faculty of Science, Institute for Information and System Science, Xi’an Jiaotong University, Xi’an, P.R. China
