Integration Base Classifiers Based on Their Decision Boundary

  • Robert Burduk
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10246)

Abstract

Multiple classifier systems are used to improve the performance of base classifiers. One of the most important steps in the formation of a multiple classifier system is the integration process, in which the outputs of the base classifiers are combined. The most commonly combined outputs are class labels, rankings of the possible classes, or confidence levels. In this paper, we propose an integration process that takes place in the “geometry space”; that is, the decision boundaries of the base classifiers are used directly in the integration process. Experimental results on several data sets show that the proposed integration algorithm is a promising method for the development of multiple classifier systems.
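
To make the abstract's idea concrete, the sketch below shows one plausible reading of integration in the "geometry space", assuming linear base classifiers whose decision boundaries are hyperplanes w·x + b = 0 and a fusion rule that simply averages those hyperplane parameters. The data set, the use of logistic regression, and the averaging rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: fuse base classifiers by combining their decision
# boundaries (hyperplane parameters) instead of their labels or scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
coefs, intercepts = [], []
for _ in range(7):  # train each base classifier on a bootstrap sample
    idx = rng.integers(0, len(X_tr), len(X_tr))
    clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    coefs.append(clf.coef_.ravel())   # boundary normal vector w
    intercepts.append(clf.intercept_[0])  # boundary offset b

# Integration in geometry space: average the boundary parameters to obtain
# a single fused hyperplane (one possible choice of combination rule).
w = np.mean(coefs, axis=0)
b = np.mean(intercepts)
y_pred = (X_te @ w + b > 0).astype(int)  # classify by side of fused boundary
print("fused-boundary accuracy:", (y_pred == y_te).mean())
```

In contrast to majority voting, where each base classifier contributes a label vote per sample, the ensemble decision here depends only on the geometry of the fused boundary.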

Keywords

Ensemble selection · Multiple classifier system · Decision boundary

Acknowledgments

This work was supported by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Science and Technology.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Science and Technology, Wroclaw, Poland
