Nonlinear Function Learning by the Normalized Radial Basis Function Networks

  • Adam Krzyżak
  • Dominik Schäfer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4029)

Abstract

We study strong universal consistency and rates of convergence of nonlinear regression function learning algorithms based on normalized radial basis function networks. The network parameters, including the centers, covariance matrices, and synaptic weights, are trained by empirical risk minimization. We also derive rates of convergence for networks whose parameters are learned by complexity regularization.
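
For concreteness, the estimator class in question has the standard normalized RBF form; the display below follows the usual convention in the normalized RBF literature, and the symbols are ours rather than quoted from the paper:

\[
f_\theta(x) = \frac{\sum_{i=1}^{k} w_i\, K\big((x - c_i)^\top \Sigma_i^{-1} (x - c_i)\big)}{\sum_{i=1}^{k} K\big((x - c_i)^\top \Sigma_i^{-1} (x - c_i)\big)},
\qquad
\hat\theta_n = \operatorname*{arg\,min}_{\theta}\ \frac{1}{n} \sum_{j=1}^{n} \big(f_\theta(X_j) - Y_j\big)^2,
\]

where K is a radial kernel, the c_i are the centers, the \Sigma_i the covariance matrices, the w_i the synaptic weights, and \theta collects all trainable parameters. A minimal, self-contained sketch in Python follows; it is not the authors' implementation, all names in it are illustrative, and for simplicity only the output weights are fit (centers and widths stay fixed), whereas the paper minimizes the empirical risk over all parameters:

    import numpy as np

    def nrbf(x, centers, widths, weights):
        # Gaussian activations k_i(x) = exp(-||x - c_i||^2 / (2 s_i^2)).
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1) / widths ** 2
        k = np.exp(-0.5 * d2)
        # Normalized output: weighted activations divided by their sum.
        return (k * weights).sum(axis=1) / (k.sum(axis=1) + 1e-12)

    # Toy regression sample (X_j, Y_j), j = 1..n.
    rng = np.random.default_rng(0)
    x = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(200)

    centers = np.linspace(-3.0, 3.0, 10)[:, None]  # k = 10 fixed centers
    widths = np.full(10, 0.7)                      # shared kernel width

    # With centers and widths fixed, the normalized activations are linear
    # in the output weights, so empirical risk minimization under squared
    # loss reduces to ordinary least squares.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1) / widths ** 2
    k = np.exp(-0.5 * d2)
    phi = k / (k.sum(axis=1, keepdims=True) + 1e-12)  # normalized basis matrix
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)

    print("empirical L2 risk:", np.mean((nrbf(x, centers, widths, weights) - y) ** 2))

In the paper's setting, complexity regularization additionally penalizes the empirical risk by a term growing with the network size k, which is how the stated rates of convergence trade approximation error against estimation error.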

Keywords

Radial Basis Function, Hidden Neuron, Radial Basis Function Network, Output Weight, Probabilistic Neural Network



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Adam Krzyżak (1, 2)
  • Dominik Schäfer (3)

  1. Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
  2. Institute of Control Engineering, Technical University of Szczecin, Szczecin, Poland
  3. Fachbereich Mathematik, Universität Stuttgart, Stuttgart, Germany
