
“Good” and “Bad” Diversity in Majority Vote Ensembles

  • Gavin Brown
  • Ludmila I. Kuncheva
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5997)

Abstract

Although diversity in classifier ensembles is desirable, its relationship with ensemble accuracy is not straightforward. Here we derive a decomposition of the majority vote error into three terms: average individual error, “good” diversity and “bad” diversity. The good diversity term is subtracted from the average individual error, whereas the bad diversity term is added to it. We relate the two diversity terms to the majority vote limits defined previously (the patterns of success and failure). A simulation study demonstrates how the proposed decomposition can be used to gain insight into majority vote classifier ensembles.
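As a hedged illustration only (not code from the paper), the sketch below simulates a small voting ensemble on a binary problem and checks the decomposition numerically. It assumes the natural reading of the abstract: good diversity is the average disagreement of the members with the majority vote on examples the vote classifies correctly, and bad diversity is the same quantity on examples the vote classifies incorrectly. The ensemble size, the per-classifier accuracy of 0.6, and the independence of the voters are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M = 11, 1000                      # number of classifiers (odd, no ties) and examples

t = rng.choice([-1, 1], size=M)      # true labels
# each classifier is independently correct with probability 0.6 (illustrative assumption)
correct = rng.random((L, M)) < 0.6
y = np.where(correct, t, -t)         # individual predictions in {-1, +1}

y_maj = np.sign(y.sum(axis=0))       # majority vote of the L classifiers

e_ind = (y != t).mean()              # average individual error
e_maj = (y_maj != t).mean()          # majority vote error

disagree = (y != y_maj).mean(axis=0)            # per-example disagreement with the vote
good_div = disagree[y_maj == t].sum() / M       # "good" diversity: vote is correct
bad_div  = disagree[y_maj != t].sum() / M       # "bad" diversity: vote is wrong

# decomposition: majority vote error = average individual error - good + bad
print(e_maj, e_ind - good_div + bad_div)        # the two numbers coincide
```

With these definitions the two printed quantities match exactly, illustrating the identity in which the good diversity term is subtracted from the average individual error and the bad diversity term is added to it.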



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Gavin Brown (1)
  • Ludmila I. Kuncheva (2)
  1. School of Computer Science, University of Manchester, UK
  2. School of Computer Science, Bangor University, UK
