Ensemble Enhanced Evidential k-NN Classifier Through Random Subspaces

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10369)


Combining an ensemble of classifiers has proven to be an effective way of improving performance on many classification problems. The Random Subspace Method, which trains each classifier in the ensemble on a different random subset of the feature space, has been shown to increase the accuracy of classifiers, notably the nearest neighbor classifier. Since, in many real-world domains, data may also suffer from several aspects of uncertainty, including incompleteness and inconsistency, an Enhanced Evidential k-Nearest Neighbor classifier has recently been introduced to handle, within the belief function framework, the uncertainty pervading both the attribute values and the classifier outputs. In this paper, we build primarily on the Enhanced Evidential k-Nearest Neighbor classifier to construct an ensemble pattern classification system. More precisely, we adapt the Random Subspace Method to this setting in order to build classifier ensembles from imperfect data.
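To make the ensemble construction concrete, the following is a minimal sketch of the Random Subspace Method with a plain majority-vote k-NN as the base classifier. It is only an illustration of the general scheme described above, not the paper's evidential method: the base learner here ignores uncertainty, and all function names and parameters (`random_subspace_knn`, `subspace_size`, etc.) are assumptions of this sketch.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain majority-vote k-NN on one feature subset (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)       # distances to all training points
        nn_labels = y_train[np.argsort(d)[:k]]        # labels of the k nearest neighbors
        preds.append(np.bincount(nn_labels).argmax()) # majority vote
    return np.array(preds)

def random_subspace_knn(X_train, y_train, X_test,
                        n_classifiers=10, subspace_size=0.5, k=3, seed=0):
    """Random Subspace ensemble: each base k-NN sees only a random subset
    of the features; the final label is a majority vote over the ensemble."""
    rng = np.random.default_rng(seed)
    n_features = X_train.shape[1]
    m = max(1, int(subspace_size * n_features))       # features per base classifier
    votes = []
    for _ in range(n_classifiers):
        feats = rng.choice(n_features, size=m, replace=False)
        votes.append(knn_predict(X_train[:, feats], y_train,
                                 X_test[:, feats], k))
    votes = np.stack(votes)                           # (n_classifiers, n_test)
    # Majority vote across classifiers for each test point
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

In the evidential variant studied in the paper, each base classifier would instead output a basic belief assignment, and the majority vote would be replaced by a combination rule from belief function theory.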


Keywords: Classifier ensemble · Random Subspace Method · Enhanced evidential k-NN · Belief function theory



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Université de Tunis, Institut Supérieur de Gestion de Tunis, LARODEC, Tunis, Tunisia
  2. Univ. Artois, EA 3926, Laboratoire de Génie Informatique et d'Automatique de l'Artois (LGI2A), Béthune, France
