Adaptive Splitting and Selection Method of Classifier Ensemble Building

  • Konrad Jackowski
  • Michal Wozniak
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5572)

Abstract

The paper presents a novel machine learning method for building a compound classifier. Its idea is based on splitting the feature space into separate regions and choosing, for each region, the best classifier from an available set of recognizers. Splitting and selection take place simultaneously as part of a single optimization process, and an evolutionary algorithm is used to find the optimal solution. The quality of the proposed method is evaluated via computer experiments.
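For illustration only (this is not the authors' implementation), the sketch below mimics the splitting-and-selection idea in Python with scikit-learn: regions are Voronoi cells around K centroids, a simple (1+1)-style evolutionary loop mutates the centroids, and each region is assigned the pool classifier with the best local accuracy on a validation split. The classifier pool, the number of regions, and the mutation scheme are illustrative assumptions.

# Minimal sketch of splitting-and-selection ensemble building (assumptions noted above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=2, n_redundant=0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Pool of elementary classifiers trained on the same training data (illustrative choice).
pool = [GaussianNB().fit(X_tr, y_tr),
        KNeighborsClassifier(5).fit(X_tr, y_tr),
        DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)]

K = 4  # number of regions the feature space is split into (assumed)

def regions(centroids, X):
    # Assign each sample to its nearest centroid (Voronoi-style split of feature space).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def select_and_score(centroids):
    # Pick the best pool member for each region and return the ensemble's validation accuracy.
    r = regions(centroids, X_val)
    preds = np.stack([clf.predict(X_val) for clf in pool])  # shape: (n_classifiers, n_samples)
    choice, correct = [], 0
    for k in range(K):
        mask = r == k
        if not mask.any():
            choice.append(0)
            continue
        acc = (preds[:, mask] == y_val[mask]).mean(axis=1)
        best = int(acc.argmax())
        choice.append(best)
        correct += (preds[best, mask] == y_val[mask]).sum()
    return correct / len(y_val), choice

# (1+1) evolutionary loop: mutate the centroids and keep the mutant if it scores at least as well.
centroids = X_tr[rng.choice(len(X_tr), K, replace=False)]
best_acc, best_choice = select_and_score(centroids)
for _ in range(200):
    mutant = centroids + rng.normal(scale=0.3, size=centroids.shape)
    acc, choice = select_and_score(mutant)
    if acc >= best_acc:
        centroids, best_acc, best_choice = mutant, acc, choice

print("validation accuracy:", best_acc, "classifier chosen per region:", best_choice)

In a practical setting the final region/classifier assignment would be scored on a separate test set; the sketch reuses the validation split only to keep the example short.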

Keywords

Feature Space · Compound Classifier · Area Classifier · Elementary Classifier · Classifier Fusion

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Konrad Jackowski (1)
  • Michal Wozniak (1)
  1. Chair of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
