A note on error bounds for function approximation using nonlinear networks

Published in Circuits, Systems and Signal Processing 17, 449–457 (1998).

Abstract

For many problems in classification, compensation, adaptivity, identification, and signal processing, results concerning the representation and approximation of nonlinear functions can be of particular interest to engineers. Here we consider a large class of functions f that map ℝ^n into the set of real or complex numbers, and we give bounds on the number of parameters of a certain approximation network so that f can be approximated to within a prescribed degree of accuracy using an appropriate configuration of the network. We also describe related work in the neural networks literature.
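Although the full text is not reproduced here, the flavor of such parameter-count bounds can be conveyed by a representative result from the related neural-network approximation literature, the Maurey–Jones–Barron lemma (used, for example, by Jones and by Barron). The statement below is given for orientation only; its hypotheses and constants are illustrative and are not necessarily those of the present paper. If f lies in the closure of the convex hull of a set G in a Hilbert space H, with ‖g‖ ≤ b for every g ∈ G, then for every n ≥ 1 there exist g_1, …, g_n ∈ G and convex weights c_1, …, c_n such that

\[
  \Bigl\| f - \sum_{i=1}^{n} c_i\, g_i \Bigr\|_{H}^{2}
  \;\le\; \frac{b^{2} - \|f\|_{H}^{2}}{n},
  \qquad c_i \ge 0, \quad \sum_{i=1}^{n} c_i = 1 .
\]

In network terms, G can be taken to be the set of functions realizable by a single unit (for instance, scaled sigmoidal ridge functions x ↦ α σ(a·x + θ)), so n is the number of units, the number of parameters grows linearly in n, and the squared approximation error decays at least as fast as 1/n.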


Cite this article

Dingankar, A.T., Sandberg, I.W. A note on error bounds for function approximation using nonlinear networks. Circuits Systems and Signal Process 17, 449–457 (1998). https://doi.org/10.1007/BF01201501

