
Lazy Learning in Radial Basis Neural Networks: A Way of Achieving More Accurate Models

Neural Processing Letters

Abstract

Radial Basis Neural Networks have been successfully used in a large number of applications, their rapid convergence time being one of their most important advantages. However, their level of generalization is usually poor and very dependent on the quality of the training data, because some of the training patterns can be redundant or irrelevant. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be approximated. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: an artificial regression problem and two time series prediction problems. Results have been compared to those of the standard training method using the complete training data set, and the new method shows better generalization abilities.
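The sketch below, in Python with NumPy, illustrates one common way to realize the lazy strategy the abstract describes. It is not the authors' exact selection scheme: the k-nearest-neighbour selection rule, the fixed Gaussian width, and the names rbf_design and lazy_rbf_predict are illustrative assumptions. For each novel sample, the k closest training patterns are selected and a small Gaussian RBF model is fitted on that local subset only.

    import numpy as np

    def rbf_design(X, centers, width):
        # Gaussian activations: one row per input pattern, one column per center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    def lazy_rbf_predict(X_train, y_train, x_query, k=20, width=0.5, reg=1e-6):
        # Lazy step: keep only the k training patterns nearest to the query,
        # discarding patterns that are redundant or irrelevant for it.
        dists = np.linalg.norm(X_train - x_query, axis=1)
        idx = np.argsort(dists)[:k]
        Xk, yk = X_train[idx], y_train[idx]
        # Fit a small RBF model with centers at the selected patterns;
        # the ridge term reg keeps the linear solve well conditioned.
        Phi = rbf_design(Xk, Xk, width)
        w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(Xk)), Phi.T @ yk)
        # The local model answers this single query and is then discarded.
        return (rbf_design(x_query[None, :], Xk, width) @ w).item()

    # Example: noisy samples of sin(x) on [0, 2*pi].
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
    print(lazy_rbf_predict(X, y, np.array([1.0])))  # close to sin(1.0) = 0.84

Because each query gets its own local model, all of the fitting cost is deferred to prediction time; this is the defining trade-off of lazy learning, and it is what allows the approximation to be centered around the novel sample.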




Cite this article

Valls, J.M., Galván, I.M. & Isasi, P. Lazy Learning in Radial Basis Neural Networks: A Way of Achieving More Accurate Models. Neural Processing Letters 20, 105–124 (2004). https://doi.org/10.1007/s11063-004-0635-6

