Ensemble Enhanced Evidential k-NN Classifier Through Random Subspaces

  • Conference paper
  • In: Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10369)

Abstract

Combining an ensemble of classifiers has proven to be an effective way to improve performance on many classification problems. The Random Subspace Method, which trains a set of classifiers on different subsets of the feature space, has been shown to increase the accuracy of classifiers, notably the nearest neighbor classifier. Since, in several real-world domains, data may also suffer from various aspects of uncertainty, including incompleteness and inconsistency, an Enhanced Evidential k-Nearest Neighbor classifier was recently introduced to handle the uncertainty pervading both the attribute values and the classifier outputs within the belief function framework. In this paper, we therefore build on the Enhanced Evidential k-Nearest Neighbor classifier to construct an ensemble pattern classification system. More precisely, we adopt the Random Subspace Method in this context to build ensemble classifiers from imperfect data.
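
To make the construction concrete, below is a minimal sketch of a random-subspace ensemble built around a k-NN base learner. It is illustrative only: the RandomSubspaceKNN class, its subspace_ratio parameter, and the majority-vote combination are our assumptions, and the plain Euclidean k-NN stands in for the paper's Enhanced Evidential k-NN, whose outputs would instead be combined as belief functions.

```python
import numpy as np
from collections import Counter

class RandomSubspaceKNN:
    """Random Subspace ensemble of k-NN base classifiers (illustrative sketch)."""

    def __init__(self, n_estimators=10, subspace_ratio=0.5, k=5, seed=0):
        self.n_estimators = n_estimators      # number of base classifiers
        self.subspace_ratio = subspace_ratio  # fraction of features per classifier
        self.k = k                            # neighbors used by each base k-NN
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        d = max(1, int(self.subspace_ratio * X.shape[1]))
        # Draw one random feature subset (without replacement) per base classifier.
        self.subspaces_ = [self.rng.choice(X.shape[1], size=d, replace=False)
                           for _ in range(self.n_estimators)]
        self.X_, self.y_ = X, y
        return self

    def _knn_vote(self, cols, x):
        # Plain Euclidean k-NN majority vote, restricted to one feature subset.
        dists = np.linalg.norm(self.X_[:, cols] - x[cols], axis=1)
        nearest = self.y_[np.argsort(dists)[:self.k]]
        return Counter(nearest).most_common(1)[0][0]

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            # Majority vote across base classifiers; the paper would instead
            # combine the evidential (belief function) outputs of the Enhanced
            # Evidential k-NN before making a decision.
            votes = [self._knn_vote(cols, x) for cols in self.subspaces_]
            preds.append(Counter(votes).most_common(1)[0][0])
        return np.array(preds)
```

A typical call would be RandomSubspaceKNN(n_estimators=15, subspace_ratio=0.5, k=5).fit(X_train, y_train).predict(X_test), where each of the 15 base classifiers sees only half of the features. In the evidential version, each base classifier would return a basic belief assignment over the classes, and the per-sample votes above would be replaced by a combination rule (e.g., Dempster's rule) before deciding on a class.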

Author information

Correspondence to Asma Trabelsi.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Trabelsi, A., Elouedi, Z., Lefevre, E. (2017). Ensemble Enhanced Evidential k-NN Classifier Through Random Subspaces. In: Antonucci, A., Cholvy, L., Papini, O. (eds) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2017. Lecture Notes in Computer Science (LNAI), vol 10369. Springer, Cham. https://doi.org/10.1007/978-3-319-61581-3_20

  • DOI: https://doi.org/10.1007/978-3-319-61581-3_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-61580-6

  • Online ISBN: 978-3-319-61581-3

  • eBook Packages: Computer Science, Computer Science (R0)
