
Margin-based Diversity Measures for Ensemble Classifiers

  • Tomasz Arodź
Part of the Advances in Soft Computing book series (AINSC, volume 30)

Abstract

Classifier ensembles have been used successfully in many applications. Their superiority over single classifiers depends on the diversity of the classifiers forming the ensemble. To date, most ensemble diversity measures have been derived from binary classification information. In this paper we propose a new group of methods that use the margins of the individual classifiers in the ensemble. These methods process the margins with a bipolar sigmoid function, since the most important information is contained in margins of low magnitude. The proposed diversity measures are evaluated for three types of ensembles of linear classifiers. The tests show that these measures predict recognition accuracy better than established diversity measures such as the Q statistic, the disagreement measure, or entropy.
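The abstract does not give the paper's exact formulas, but the general idea can be sketched as follows: compute each linear classifier's margin (signed distance-like score) on a sample, squash it with a bipolar sigmoid such as tanh so that small-magnitude margins near the decision boundary retain the most resolution, and then measure how much the squashed outputs disagree across the ensemble. The `margin_diversity` function below is a hypothetical margin-based analogue of the disagreement measure, not the paper's definition; the classifier representation as `(weights, bias)` pairs and the `scale` parameter are illustrative assumptions.

```python
import numpy as np

def soft_votes(classifiers, X, scale=1.0):
    """tanh (bipolar sigmoid) transform of each linear classifier's margin.

    Large margins are compressed toward +/-1, while margins of low
    magnitude (points near the decision boundary) keep fine resolution.
    classifiers: list of (w, b) pairs; the margin on x is w.x + b.
    Returns an array of shape (n_classifiers, n_samples) in (-1, 1).
    """
    return np.array([np.tanh(scale * (X @ w + b)) for w, b in classifiers])

def margin_diversity(classifiers, X, scale=1.0):
    """Average pairwise disagreement of tanh-transformed margins.

    A hypothetical soft analogue of the binary disagreement measure:
    |tanh(m_i) - tanh(m_j)| / 2 lies in [0, 1] per sample, so the
    result is 0 for identical classifiers and at most 1.
    """
    V = soft_votes(classifiers, X, scale)   # shape (L, n_samples)
    L = V.shape[0]
    total = 0.0
    for i in range(L):
        for j in range(i + 1, L):
            total += np.mean(np.abs(V[i] - V[j])) / 2.0
    return total / (L * (L - 1) / 2)
```

A binary disagreement measure would replace the tanh outputs with hard signs; the soft version above weights each disagreement by how close both classifiers' margins are to the boundary.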

Keywords

Diversity measure, Feature subset, Decision boundary, Classifier ensemble, Weak classifier


References

  1. Freund Y, Schapire R (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55:119–139
  2. Breiman L (1996) Bagging predictors. Machine Learning 24:123–140
  3. Ho TK (1995) Random decision forests. In: Proc. of the 3rd Int'l Conference on Document Analysis and Recognition:278–282
  4. Bryll R, Gutierrez-Osuna R, Quek F (2003) Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition 36:1291–1302
  5. Schapire RE, Freund Y, Bartlett P, Lee WS (1997) Boosting the margin: a new explanation for the effectiveness of voting methods. In: Proc. 14th International Conference on Machine Learning:322–330, Morgan Kaufmann
  6. Cortes C, Vapnik V (1995) Support-vector networks. Machine Learning 20:273–297
  7. Brown G, Wyatt J, Harris R, Yao X (2005) Diversity creation methods: a survey and categorisation. Information Fusion Journal 6:5–20
  8. Kuncheva L (2003) That elusive diversity in classifier ensembles. In: Proc. First Iberian Conference on Pattern Recognition and Image Analysis:1126–1138
  9. Kuncheva LI, Whitaker CJ (2003) Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning 51:181–207
  10. Arodz T (2005) Boosting the Fisher Linear Discriminant with random feature subsets. To appear in: IV International Conference on Computer Recognition Systems, CORES 2005, Advances in Soft Computing, Springer

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Tomasz Arodź
  1. Institute of Computer Science, AGH University of Science and Technology, Kraków, Poland
