Semantic Explanations in Ensemble Learning

  • Md. Zahidul Islam (Email author)
  • Jixue Liu
  • Lin Liu
  • Jiuyong Li
  • Wei Kang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11439)

Abstract

A combination method is an integral part of an ensemble classifier. Existing combination methods determine the combined prediction of a new instance by relying on the predictions made by the majority of base classifiers, which can yield an incorrect combined prediction when the majority predicts the wrong class. It has been noted in group decision-making that a majority decision, if its members' reasons lack consistency, can be less reliable than a minority decision whose members' reasons are highly consistent. Based on this observation, in this paper we propose a new combination method, EBCM, which considers the consistency of the features, i.e. the explanations of individual predictions, for generating ensemble classifiers. EBCM first identifies the features accountable for each base classifier's prediction, then uses these features to measure the consistency among the predictions, and finally combines the predictions based on both the majority vote and the consistency of features. We evaluated the performance of EBCM on 16 real-world datasets and observed substantial improvement over existing techniques.
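As a rough illustration of the three steps described above, the sketch below combines base-classifier votes using the agreement of their explanations. It is not the paper's actual EBCM algorithm: the function combine_predictions, the use of the top-k explanation features, the mean pairwise Jaccard similarity as the consistency measure, and the vote-share-times-(1 + consistency) score are all assumptions made for this example; EBCM's own explanation extraction (e.g. via a LIME-style explainer) and weighting may differ.

```python
from collections import defaultdict
from itertools import combinations


def combine_predictions(predictions, explanations, top_k=5):
    """Combine base-classifier votes using the consistency of their explanations.

    predictions  -- predicted class label from each base classifier
    explanations -- for each base classifier, a ranked list of the features
                    deemed responsible for its prediction (e.g. from a
                    LIME-style explainer); only the top_k features are used
    """
    # Group the explanation feature sets by the class each classifier predicts.
    groups = defaultdict(list)
    for label, feats in zip(predictions, explanations):
        groups[label].append(set(feats[:top_k]))

    def consistency(feature_sets):
        # Mean pairwise Jaccard similarity of the explanation feature sets;
        # a class supported by a single classifier gets a neutral score of 0.
        if len(feature_sets) < 2:
            return 0.0
        sims = [len(a & b) / len(a | b) if a | b else 0.0
                for a, b in combinations(feature_sets, 2)]
        return sum(sims) / len(sims)

    # Score each candidate class by its vote share boosted by the consistency
    # of the explanations behind those votes, then return the best class.
    n = len(predictions)
    scores = {label: (len(sets) / n) * (1.0 + consistency(sets))
              for label, sets in groups.items()}
    return max(scores, key=scores.get)


# Toy example: three classifiers vote "spam" with unrelated explanations,
# two vote "ham" with identical explanations; the consistent minority wins.
preds = ["spam", "spam", "spam", "ham", "ham"]
expls = [["f1", "f7"], ["f3", "f9"], ["f2", "f8"], ["f4", "f5"], ["f4", "f5"]]
print(combine_predictions(preds, expls))   # -> ham
```

In the toy example, the two-classifier minority predicting "ham" shares an identical explanation and therefore outweighs the three-classifier majority whose explanations have no features in common, which is the behaviour the abstract motivates.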

Acknowledgements

We acknowledge the University of South Australia and Data to Decisions CRC (D2DCRC) for partially funding this research.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Md. Zahidul Islam (1) (Email author)
  • Jixue Liu (1)
  • Lin Liu (1)
  • Jiuyong Li (1)
  • Wei Kang (1)

  1. School of Information Technology and Mathematical Sciences (ITMS), University of South Australia, Adelaide, Australia
