
Building Locally Discriminative Classifier Ensemble Through Classifier Fusion Among Nearest Neighbors

  • Xiang-Jun Shen
  • Wen-Chao Zhang
  • Wei Cai
  • Ben-Bright B. Benuw
  • He-Ping Song
  • Qian Zhu (corresponding author)
  • Zheng-Jun Zha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9916)

Abstract

Many studies on ensemble learning, which combines multiple classifiers, have shown that it is an effective technique for improving the accuracy and stability of a single classifier. In this paper, we propose a novel discriminative classifier fusion method that uses the local classification results of classifiers among the nearest neighbors of a query sample to build a local classifier ensemble. Through this dynamic selection process, discriminative classifiers are weighted heavily to form a locally discriminative ensemble. Experimental results on several UCI datasets show that our proposed method achieves the best classification performance compared with individual classifiers, majority voting, and the AdaBoost algorithm.
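This page reproduces only the abstract, so no implementation details are given here. As a rough illustration of the idea the abstract describes (weighting each base classifier by its accuracy among a query point's nearest neighbors, then fusing by weighted vote), the following minimal Python sketch uses scikit-learn; the choice of base learners, the neighborhood size k = 5, and the validation split are illustrative assumptions, not the authors' actual method.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestNeighbors
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    # Illustrative sketch (not the paper's code): weight each base classifier
    # by its accuracy on the k nearest validation neighbors of the query
    # point, then fuse predictions by weighted vote.
    X, y = load_iris(return_X_y=True)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.4, random_state=0)

    classifiers = [DecisionTreeClassifier(random_state=0),
                   GaussianNB(),
                   LogisticRegression(max_iter=1000)]
    for clf in classifiers:
        clf.fit(X_train, y_train)

    # Precompute each classifier's predictions on the validation set.
    val_preds = np.array([clf.predict(X_val) for clf in classifiers])
    nn = NearestNeighbors(n_neighbors=5).fit(X_val)

    def predict_one(x, n_classes=3):
        """Fuse classifiers by their local accuracy among x's neighbors."""
        _, idx = nn.kneighbors(x.reshape(1, -1))
        idx = idx[0]
        # Local accuracy of each classifier in the neighborhood of x.
        weights = np.array([(val_preds[i, idx] == y_val[idx]).mean()
                            for i in range(len(classifiers))])
        votes = np.zeros(n_classes)
        for w, clf in zip(weights, classifiers):
            votes[clf.predict(x.reshape(1, -1))[0]] += w
        return votes.argmax()

For a query point x, classifiers that happen to be accurate in x's neighborhood dominate the weighted vote, which is the "locally discriminative" behavior the abstract refers to.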

Keywords

Ensemble learning · Classifier ensemble · Classifier combination · Classifier fusion


Acknowledgments

This work was funded in part by the National Natural Science Foundation of China (Nos. 61572240 and 61502208), the Natural Science Foundation of Jiangsu Province of China (No. BK20150522), and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR) (No. 201600005).


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Xiang-Jun Shen (1)
  • Wen-Chao Zhang (1)
  • Wei Cai (1)
  • Ben-Bright B. Benuw (1)
  • He-Ping Song (1)
  • Qian Zhu (1) (corresponding author)
  • Zheng-Jun Zha (2)

  1. School of Computer Science and Telecommunication Engineering, Jiangsu University, Zhenjiang, China
  2. School of Information Science and Technology, University of Science and Technology of China, Hefei, China
