Homogeneous Ensemble Selection - Experimental Studies

  • Robert Burduk
  • Paulina Heda
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 534)

Abstract

The paper presents a dynamic ensemble selection method. The proposed method uses information from so-called decision profiles, which are formed from the outputs of the base classifiers. To verify the algorithm, a number of experiments were carried out on several publicly available data sets. The proposed dynamic ensemble selection is experimentally compared against all base classifiers and against ensemble classifiers based on the sum and decision-profile combination methods. As base classifiers, we use a pool of homogeneous classifiers.
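The abstract's core ideas can be sketched in code: a pool of homogeneous base classifiers, a decision profile (a matrix whose rows are the class supports of each base classifier for a given instance), the sum combination rule, and a per-instance selection step. This is a minimal illustration, assuming a bootstrap-trained pool of decision trees and a hypothetical selection criterion (keep classifiers whose peak support exceeds the pool average); it is not the paper's actual selection rule.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Homogeneous pool: identical learners trained on bootstrap samples.
pool = []
for _ in range(7):
    idx = rng.integers(0, len(Xtr), len(Xtr))
    pool.append(DecisionTreeClassifier(max_depth=3).fit(Xtr[idx], ytr[idx]))

def decision_profile(x):
    """Matrix DP(x): row i holds classifier i's supports for each class."""
    return np.vstack([clf.predict_proba(x.reshape(1, -1))[0] for clf in pool])

def predict_sum(x):
    """Sum rule: add supports column-wise, pick the best-supported class."""
    return int(decision_profile(x).sum(axis=0).argmax())

def predict_dynamic(x):
    """Dynamic selection sketch: combine only the more confident classifiers.
    The criterion (peak support >= pool average) is a hypothetical stand-in."""
    dp = decision_profile(x)
    conf = dp.max(axis=1)
    keep = conf >= conf.mean()
    return int(dp[keep].sum(axis=0).argmax())

acc = float(np.mean([predict_dynamic(x) == t for x, t in zip(Xte, yte)]))
```

The key point is that the decision profile is computed per test instance, so the selected subset of the pool can change from one instance to the next, unlike static selection, which fixes the subset once on a validation set.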

Keywords

Multiple classifier system · Decision profile · Ensemble selection

Acknowledgments

This work was supported by the Polish National Science Center under the grant no. DEC-2013/09/B/ST6/02264 and by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Technology.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Science and Technology, Wroclaw, Poland