Analyzing the Relationship between Diversity and Evidential Fusion Accuracy

  • Yaxin Bi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6713)

Abstract

In this paper, we present an empirical analysis of the relationship between the diversity and accuracy of classifier ensembles in the context of the theory of belief functions. We provide a model for formulating classifier outputs as triplet mass functions and a unified notation for defining diversity measures, and then assess the correlation between the diversity obtained by four pairwise and non-pairwise diversity measures and the accuracy improvement of classifiers combined in decreasing and mixed orders by Dempster’s rule, the Proportion rule and Yager’s rule. Our experimental results reveal that the improved accuracy of classifiers combined by Dempster’s rule is positively correlated with the diversity obtained by the four measures, whereas the correlation between diversity and the improved accuracy of ensembles constructed by the Proportion and Yager’s rules is negative. The latter finding does not support the claim that increasing diversity leads to a reduction in the generalization error of classifier ensembles.
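For context, the fusion step described above combines classifier outputs, modelled as mass functions over a frame of discernment, using Dempster’s rule of combination. The following is a minimal sketch of that rule only; the triplet modelling, the ordering schemes and the other combination rules of the paper are not reproduced, focal elements are represented as `frozenset`s, and the function name and example masses are illustrative rather than taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) by Dempster's rule of combination."""
    combined = {}
    conflict = 0.0  # K: total mass falling on the empty set
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalize the surviving masses by 1 - K
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Illustrative example: two mass functions over the frame {a, b, c}
frame = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, frame: 0.4}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, frame: 0.2}
fused = dempster_combine(m1, m2)
```

After normalization by the conflict term 1 − K, the fused masses again sum to one; fusing a whole ensemble amounts to folding `dempster_combine` over the classifiers in the chosen order.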

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yaxin Bi
    School of Computing and Mathematics, University of Ulster, Newtownabbey, UK
