Constructive Function Approximation: Theory and Practice

  • D. Docampo
  • D. R. Hush
  • C. T. Abdallah


In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space, using elements taken from a reduced subset. We also investigate the trade-off between the global error and the partial error incurred at each iteration of the solution. These results are then specialized to constructive function approximation using sigmoidal neural networks. The emphasis then shifts to implementation issues that arise when a prescribed approximation error must be achieved with a finite number of nodes and a finite training set.
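The constructive convex scheme the abstract refers to can be illustrated with a Jones/Barron-style greedy update: at step n a new dictionary element g is chosen and the iterate is updated as f_n = (1 - α) f_{n-1} + α g, with α chosen to minimize the squared error. The sketch below is a minimal NumPy illustration under assumed choices (a dictionary of random sigmoidal ridge nodes, a random candidate search, a sampled L2 error); it is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target function sampled on a grid; the L2 error is approximated
# by the sample mean of squared residuals.
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(np.pi * x)

def greedy_convex_fit(x, f, n_nodes=20, n_candidates=500):
    """Greedy convex approximation: at each step pick a sigmoidal node g
    and a weight alpha in [0, 1] so that the convex combination
    f_n = (1 - alpha) * f_{n-1} + alpha * g best reduces the error."""
    fn = np.zeros_like(f)
    errors = []
    for _ in range(n_nodes):
        best_err, best_fn = np.inf, fn
        for _ in range(n_candidates):
            # Candidate ridge node c * sigmoid(a x + b); the bound on c
            # keeps the iterates inside the convex hull of scaled
            # dictionary elements (the setting of the convergence results).
            a, b = rng.normal(scale=5.0, size=2)
            c = rng.uniform(-4.0, 4.0)
            g = c * sigmoid(a * x + b)
            # Optimal alpha for a quadratic error is closed-form;
            # clip to [0, 1] to stay a convex combination.
            r, d = f - fn, g - fn
            denom = np.dot(d, d)
            alpha = np.clip(np.dot(r, d) / denom, 0.0, 1.0) if denom > 0 else 0.0
            cand = (1.0 - alpha) * fn + alpha * g
            err = np.mean((f - cand) ** 2)
            if err < best_err:
                best_err, best_fn = err, cand
        fn = best_fn
        errors.append(best_err)
    return fn, errors

fn, errors = greedy_convex_fit(x, f)
```

Because alpha = 0 recovers the previous iterate, the clipped optimal step can never increase the error, so the error sequence is non-increasing; the theoretical results referred to above quantify how fast it must decrease.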







Copyright information

© Springer Science+Business Media New York 1997

