Learning Classification RBF Networks by Boosting

  • Juan J. Rodríguez Diez
  • Carlos J. Alonso González
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2096)

Abstract

This work proposes a novel method for constructing RBF networks, based on boosting. The task assigned to the base learner is to select an RBF, while the boosting algorithm linearly combines the different RBFs. In each boosting iteration, a new neuron is incorporated into the network.

The method for selecting each RBF is based on randomly choosing several examples as centers, considering the distances to these centers as attributes of the examples, and selecting the best split on one of these attributes. The best split is selected in the same way as in the construction of decision trees. The RBF is computed from the selected center (attribute) and threshold.

This work is not about using RBFNs as base learners for boosting, but about constructing RBFNs by boosting.
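
The following is a minimal sketch of the construction described above, not the authors' exact algorithm. It assumes an AdaBoost-style re-weighting scheme, Gaussian RBFs, and binary (+1/-1) labels; the helper name select_rbf, the number of candidate centers, and the width heuristic (RBF value 0.5 at the selected threshold) are illustrative assumptions.

import numpy as np

def rbf_output(X, center, sigma):
    # Gaussian RBF activation for every example in X.
    d = np.linalg.norm(X - center, axis=1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def select_rbf(X, y, weights, n_candidates=10, rng=None):
    # Base learner sketch: try a few randomly chosen examples as centers,
    # treat the distance to each center as a numeric attribute, and pick the
    # threshold split with the lowest weighted error, as a decision tree would.
    rng = np.random.default_rng() if rng is None else rng
    best = None
    for idx in rng.choice(len(X), size=min(n_candidates, len(X)), replace=False):
        center = X[idx]
        dist = np.linalg.norm(X - center, axis=1)
        for thr in np.unique(dist):
            pred = np.where(dist <= thr, 1, -1)
            err = weights[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, center, thr)
    _, center, thr = best
    # Width chosen so the RBF equals 0.5 at the selected threshold (an assumption).
    sigma = max(thr, 1e-12) / np.sqrt(2 * np.log(2))
    return center, sigma

def boost_rbf_network(X, y, n_neurons=10, rng=None):
    # AdaBoost-style loop: each iteration adds one RBF neuron; the alphas act
    # as the linear output layer combining the RBF activations.
    y = np.asarray(y)
    weights = np.full(len(X), 1.0 / len(X))
    network = []  # list of (alpha, center, sigma)
    for _ in range(n_neurons):
        center, sigma = select_rbf(X, y, weights, rng=rng)
        h = np.where(rbf_output(X, center, sigma) >= 0.5, 1, -1)
        err = np.clip(weights[h != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        network.append((alpha, center, sigma))
        weights *= np.exp(-alpha * y * h)  # re-weight examples, as in AdaBoost
        weights /= weights.sum()
    return network

def predict(network, X):
    # Linear combination of the RBF outputs, thresholded at half the total weight.
    score = sum(alpha * rbf_output(X, c, s) for alpha, c, s in network)
    return np.where(score >= 0.5 * sum(alpha for alpha, _, _ in network), 1, -1)

The 0.5 activation threshold used to turn each RBF into a binary base hypothesis matches the width heuristic above; it is a design choice for this sketch rather than a detail taken from the paper, and the sketch handles only binary problems.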

References

  1. [1] Robert J. Alcock and Yannis Manolopoulos. Time-series similarity queries employing a feature-based approach. In 7th Hellenic Conference on Informatics, Ioannina, Greece, 1999.
  2. [2] Eric Bauer and Ron Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting and variants. Machine Learning, 36(1/2):105–139, 1999.
  3. [3] Stephen D. Bay. The UCI KDD archive, 1999.
  4. [4] D. J. Berndt and J. Clifford. Finding patterns in time series: a dynamic programming approach. In U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors, Advances in Knowledge Discovery and Data Mining, pages 229–248. AAAI Press / MIT Press, 1996.
  5. [5] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
  6. [6] L. Breiman, J. H. Friedman, A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman & Hall, New York, 1993.
  7. [7] Shimon Cohen and Nathan Intrator. A hybrid projection based and radial basis function architecture. In Kittler and Roli [13], pages 147–156.
  8. [8] Nigel Duffy and David Helmbold. Leveraging for regression. In Computational Learning Theory: 13th Conference (COLT 2000), Stanford University, 2000.
  9. [9] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In 13th International Conference on Machine Learning (ICML-96), pages 148–156, Bari, Italy, 1996.
  10. [10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
  11. [11] Venkatesan Guruswami and Amit Sahai. Multiclass learning, boosting, and error-correcting codes. In 12th Annual Conference on Computational Learning Theory (COLT 1999). ACM, 1999.
  12. [12] Mohammed W. Kadous. Learning comprehensible descriptions of multivariate time series. In Ivan Bratko and Saso Dzeroski, editors, 16th International Conference on Machine Learning (ICML-99). Morgan Kaufmann, 1999.
  13. [13] Josef Kittler and Fabio Roli, editors. Multiple Classifier Systems: 1st International Workshop; MCS 2000, volume 1857 of Lecture Notes in Computer Science. Springer, 2000.
  14. [14] Miroslav Kubat. Decision trees can initialize radial-basis function networks. IEEE Transactions on Neural Networks, 9:813–821, 1998.
  15. [15] Mark J. Orr. Introduction to radial basis function networks. Technical report, 1996. http://www.anc.ed.ac.uk/~mjo/papers/intro.ps.gz.
  16. [16] Mark J. Orr. Recent advances in radial basis function networks. Technical report, 1999. http://www.anc.ed.ac.uk/~mjo/papers/recad.ps.gz.
  17. [17] Juan J. Rodríguez, Carlos J. Alonso, and Henrik Boström. Learning first order logic time series classifiers: Rules and boosting. In Djamel A. Zighed, Jan Komorowski, and Jan Żytkow, editors, Principles of Data Mining and Knowledge Discovery: 4th European Conference; PKDD 2000, volume 1910 of Lecture Notes in Artificial Intelligence, pages 299–308. Springer, 2000.
  18. [18] Juan J. Rodríguez Diez and Carlos J. Alonso González. Applying boosting to similarity literals for time series classification. In Kittler and Roli [13], pages 210–219.
  19. [19] Naoki Saito. Local Feature Extraction and Its Applications Using a Library of Bases. PhD thesis, Department of Mathematics, Yale University, 1994.
  20. [20] Robert E. Schapire. Using output codes to boost multiclass learning problems. In 14th International Conference on Machine Learning (ICML-97), pages 313–321, 1997.
  21. [21] Robert E. Schapire. A brief introduction to boosting. In Thomas Dean, editor, 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 1401–1406. Morgan Kaufmann, 1999.
  22. [22] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. In 11th Annual Conference on Computational Learning Theory (COLT 1998), pages 80–91. ACM, 1998.
  23. [23] Ljupčo Todorovski and Sašo Džeroski. Combining multiple models with meta decision trees. In Principles of Data Mining and Knowledge Discovery: 4th European Conference; PKDD 2000, pages 54–64. Springer, 2000.

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Juan J. Rodríguez Diez (1)
  • Carlos J. Alonso González (2)
  1. Lenguajes y Sistemas Informáticos, Universidad de Burgos, Spain
  2. Grupo de Sistemas Inteligentes, Dpto. de Informática, Universidad de Valladolid, Spain