The Essential Approximation Order for Neural Networks with Trigonometric Hidden Layer Units

  • Chunmei Ding
  • Feilong Cao
  • Zongben Xu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


There have been various studies of the approximation ability of feedforward neural networks. The existing studies, however, concern only the density of approximation or upper bound estimates of how well a multivariate function can be approximated by such networks, and therefore cannot reveal the networks' essential approximation ability. In this paper, by establishing both upper and lower bound estimates on the approximation order, the essential approximation ability of a class of feedforward neural networks with trigonometric hidden layer units is characterized in terms of the second-order modulus of smoothness of the approximated function.
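To make the setting concrete, the following is a minimal sketch (not the paper's construction) of a single-hidden-layer network whose hidden units are trigonometric, here cosines with integer frequencies, fitted to a smooth target by least squares on [0, 2π]. The choice of frequencies, the target function, and the least-squares fit are all illustrative assumptions; the paper's two-sided estimates concern the best attainable approximation order, which this sketch only probes numerically by growing the number of hidden units.

```python
import numpy as np

def fit_trig_network(f, n_units, n_samples=400):
    """Fit coefficients c_k of N(x) = sum_k c_k * cos(k*x), k = 0..n_units-1,
    to f by least squares on a uniform grid over [0, 2*pi].
    (Illustrative sketch; not the construction used in the paper.)"""
    x = np.linspace(0.0, 2 * np.pi, n_samples)
    # Design matrix: one column per trigonometric hidden unit cos(k*x).
    A = np.cos(np.outer(x, np.arange(n_units)))
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return c

def eval_trig_network(c, x):
    """Evaluate the fitted network at the points x."""
    return np.cos(np.outer(x, np.arange(len(c)))) @ c

if __name__ == "__main__":
    f = lambda x: np.exp(np.cos(x))  # smooth, even target (hypothetical example)
    x = np.linspace(0.0, 2 * np.pi, 1000)
    for n in (2, 4, 8):
        c = fit_trig_network(f, n)
        err = np.max(np.abs(eval_trig_network(c, x) - f(x)))
        print(f"n = {n:2d} hidden units, sup error = {err:.2e}")
```

For a smooth target the observed sup-norm error drops rapidly as hidden units are added, which is the kind of behavior the upper bound estimate quantifies; the paper's contribution is the matching lower bound, showing this order cannot in general be improved.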


Neural Network · Hidden Layer · Approximation Order · Feedforward Neural Network · Hidden Unit





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Chunmei Ding (1)
  • Feilong Cao (1, 2)
  • Zongben Xu (2)
  1. Department of Information and Mathematics Sciences, College of Science, China Jiliang University, Hangzhou, P.R. China
  2. Institute for Information and System Sciences, Faculty of Science, Xi’an Jiaotong University, Xi’an, P.R. China
