Dynamic Ensemble Selection Using Discriminant Functions and Normalization Between Class Labels – Approach to Binary Classification

  • Robert Burduk
  • Paulina Baczyńska
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9692)

Abstract

In the classification task, ensemble selection methods reduce the available pool of base classifiers. Dynamic ensemble selection methods find a subset of base classifiers for each test sample separately. In searching for the best subset of base classifiers, many methods use a so-called competence region determined from a validation data set. In this paper, we propose a dynamic ensemble selection method in which no validation data set is necessary and no competence region is determined for the test sample. Instead, the described method uses only the decision profiles in the selection process. Experimental results on ten data sets show that the proposed dynamic ensemble selection is a promising method for the development of multiple classifier systems.
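The abstract describes selection driven purely by decision profiles, with no validation set or competence region. The following Python sketch illustrates the general idea only, not the authors' exact algorithm: the pool, the `threshold` confidence criterion, and the sum-rule combiner are all illustrative assumptions, since the abstract does not specify the selection rule.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Train a small pool of base classifiers (bagging-style) for a binary task.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.RandomState(0)
pool = []
for _ in range(10):
    idx = rng.choice(len(X_train), len(X_train), replace=True)
    pool.append(
        DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train[idx], y_train[idx])
    )

def predict_des(x, pool, threshold=0.1):
    """Dynamic ensemble selection from decision profiles only (a sketch).

    For one test sample: build the decision profile (one row of class
    supports per base classifier), normalize each row between the two
    class labels, and keep only classifiers whose supports differ by
    more than `threshold` -- a hypothetical confidence criterion, not
    the paper's rule. Selected supports are combined by the sum rule.
    """
    # Decision profile: shape (n_classifiers, n_classes).
    dp = np.vstack([clf.predict_proba(x.reshape(1, -1))[0] for clf in pool])
    # Normalize supports between the class labels so each row sums to 1.
    dp = dp / dp.sum(axis=1, keepdims=True)
    # Select classifiers that discriminate clearly between the two labels.
    selected = dp[np.abs(dp[:, 1] - dp[:, 0]) > threshold]
    if len(selected) == 0:  # fall back to the whole pool
        selected = dp
    return int(selected.sum(axis=0).argmax())

preds = np.array([predict_des(x, pool) for x in X_test])
print("accuracy:", (preds == y_test).mean())
```

Note that no validation data enter `predict_des`: the per-sample decision profile alone drives both the selection and the final fused decision, which is the property the paper highlights.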

Keywords

Ensemble selection · Multiple classifier system · Binary classification task

Acknowledgments

This work was supported by the Polish National Science Center under grant no. DEC-2013/09/B/ST6/02264 and by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Technology.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland