Approximation with Rates by Perturbed Kantorovich–Choquet Neural Network Operators

  • George A. Anastassiou
Chapter
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 190)

Abstract

This chapter determines the rate of convergence to the unit of perturbed Kantorovich–Choquet univariate and multivariate normalized neural network operators of one hidden layer. The rates are expressed through the univariate and multivariate moduli of continuity of the involved function, or of its higher-order derivatives, which appear in the right-hand sides of the associated univariate and multivariate Jackson-type inequalities. The activation function is very general; in particular, it can derive from any univariate or multivariate sigmoid or bell-shaped function. The right-hand sides of our convergence inequalities do not depend on the activation function. This chapter follows [1].
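The flavor of the quantities involved can be illustrated with a simplified, hypothetical sketch: a classical (non-Choquet) normalized univariate neural network operator with a Gaussian bell-shaped activation, together with a brute-force estimate of the first modulus of continuity. The names `G`, `b`, and `modulus_of_continuity`, the parameter `alpha`, and the summation range are illustrative choices, not the chapter's actual operators or notation.

```python
import math

def b(t):
    # A bell-shaped activation; here a Gaussian (illustrative choice).
    return math.exp(-t * t)

def G(f, n, x, alpha=0.5):
    # Normalized neural network operator (simplified sketch):
    #   G_n(f)(x) = sum_k f(k/n) b(n^(1-alpha) (x - k/n)) / sum_k b(n^(1-alpha) (x - k/n))
    # The normalization makes G_n reproduce constants exactly.
    num = den = 0.0
    for k in range(-n * n, n * n + 1):
        w = b(n ** (1 - alpha) * (x - k / n))
        num += f(k / n) * w
        den += w
    return num / den

def modulus_of_continuity(f, a, c, delta, grid=400):
    # First modulus of continuity omega_1(f, delta) on [a, c],
    # approximated on a uniform grid of sample points.
    h = (c - a) / grid
    pts = [a + i * h for i in range(grid + 1)]
    w = 0.0
    for x in pts:
        for y in pts:
            if abs(x - y) <= delta + 1e-12:  # tolerance for float round-off
                w = max(w, abs(f(x) - f(y)))
    return w
```

For a Lipschitz function such as sin, the pointwise error |G_n(f)(x) - f(x)| shrinks as n grows, which is the kind of behavior the Jackson-type inequalities of the chapter quantify via the modulus of continuity.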

References

  1. G.A. Anastassiou, Quantitative approximation by perturbed Kantorovich–Choquet neural network operators. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. (2018, accepted for publication)
  2. G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 212, 237–262 (1997)
  3. G.A. Anastassiou, Rate of convergence of some multivariate neural network operators to the unit. Comput. Math. Appl. 40, 1–19 (2000)
  4. G.A. Anastassiou, Quantitative Approximations (Chapman and Hall/CRC, New York, 2001)
  5. G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case, revisited. Mat. Vesnik 65(4), 511–518 (2013)
  6. G.A. Anastassiou, Rate of convergence of some multivariate neural network operators to the unit, revisited. J. Comput. Anal. Appl. 15(7), 1300–1309 (2013)
  7. A.R. Barron, Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Theory 39, 930–945 (1993)
  8. F.L. Cao, T.F. Xie, Z.B. Xu, The estimate for approximation error of neural networks: a constructive approach. Neurocomputing 71, 626–630 (2008)
  9. P. Cardaliaguet, G. Euvrard, Approximation of a function and its derivative with a neural network. Neural Netw. 5, 207–220 (1992)
  10. Z. Chen, F. Cao, The approximation operators with sigmoidal functions. Comput. Math. Appl. 58, 758–765 (2009)
  11. T.P. Chen, H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans. Neural Netw. 6, 911–917 (1995)
  12. G. Choquet, Theory of capacities. Ann. Inst. Fourier (Grenoble) 5, 131–295 (1954)
  13. C.K. Chui, X. Li, Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory 70, 131–141 (1992)
  14. D. Costarelli, R. Spigler, Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 44, 101–106 (2013)
  15. D. Costarelli, R. Spigler, Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 48, 72–77 (2013)
  16. G. Cybenko, Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2, 303–314 (1989)
  17. D. Denneberg, Non-additive Measure and Integral (Kluwer, Dordrecht, 1994)
  18. S. Ferrari, R.F. Stengel, Smooth function approximation using neural networks. IEEE Trans. Neural Netw. 16, 24–38 (2005)
  19. K.I. Funahashi, On the approximate realization of continuous mappings by neural networks. Neural Netw. 2, 183–192 (1989)
  20. S. Gal, Uniform and pointwise quantitative approximation by Kantorovich–Choquet type integral operators with respect to monotone and submodular set functions. Mediterr. J. Math. 14(5), Art. 205, 12 pp. (2017)
  21. N. Hahm, B.I. Hong, An approximation by neural networks with a fixed weight. Comput. Math. Appl. 47, 1897–1903 (2004)
  22. S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edn. (Prentice Hall, New York, 1998)
  23. K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366 (1989)
  24. K. Hornik, M. Stinchcombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Netw. 3, 551–560 (1990)
  25. M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 6, 861–867 (1993)
  26. V. Maiorov, R.S. Meir, Approximation bounds for smooth functions in \(C\left( R^{d}\right) \) by neural and mixture networks. IEEE Trans. Neural Netw. 9, 969–978 (1998)
  27. Y. Makovoz, Uniform approximation by neural networks. J. Approx. Theory 95, 215–228 (1998)
  28. W. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 7, 115–133 (1943)
  29. H.N. Mhaskar, C.A. Micchelli, Approximation by superposition of a sigmoidal function. Adv. Appl. Math. 13, 350–373 (1992)
  30. H.N. Mhaskar, C.A. Micchelli, Degree of approximation by neural networks with a single hidden layer. Adv. Appl. Math. 16, 151–183 (1995)
  31. T.M. Mitchell, Machine Learning (WCB-McGraw-Hill, New York, 1997)
  32. S. Suzuki, Constructive function approximation by three-layer artificial neural networks. Neural Netw. 11, 1049–1058 (1998)
  33. Z. Wang, G.J. Klir, Generalized Measure Theory (Springer, New York, 2009)
  34. Z.B. Xu, F.L. Cao, The essential order of approximation for neural networks. Sci. China Ser. F 47, 97–112 (2004)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mathematical Sciences, University of Memphis, Memphis, USA