Integration Base Classifiers in Geometry Space by Harmonic Mean

  • Robert Burduk
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10841)

Abstract

One of the most important steps in the formation of multiple classifier systems is the fusion process. Fusion may be applied either to class labels or to confidence levels (discriminant functions). In this paper, we propose an integration process that takes place in the geometry space, i.e., the fusion of base classifiers is performed using their decision boundaries. In our approach, the final decision boundary is calculated as the harmonic mean of the base boundaries. The algorithm presented in the paper concerns the case of three base classifiers and a two-dimensional feature space. The results of experiments on several data sets show that the proposed integration algorithm is a promising method for the development of multiple classifier systems.
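The fusion idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three linear boundaries, their coefficients, the positivity assumption on the boundary values, and the `classify` helper are all assumptions introduced here for the example.

```python
import numpy as np

# Hypothetical linear decision boundaries of three base classifiers,
# each written as y = a*x + b in a two-dimensional feature space.
# The coefficients are illustrative only.
base_boundaries = [(0.8, 1.0), (1.0, 1.5), (1.2, 2.0)]

def boundary_value(a, b, x):
    """y-coordinate of the boundary y = a*x + b at a given x."""
    return a * x + b

def harmonic_mean_boundary(x):
    """Pointwise harmonic mean of the three boundary values at x.

    Assumes all boundary values are positive at x; the harmonic mean
    is undefined when a value is zero or the signs are mixed.
    """
    ys = np.array([boundary_value(a, b, x) for a, b in base_boundaries])
    return len(ys) / np.sum(1.0 / ys, axis=0)

def classify(point):
    """Assign class 1 if the point lies above the fused boundary, else 0."""
    x, y = point
    return int(y > harmonic_mean_boundary(x))

# Trace the fused boundary over a grid of x-values.
xs = np.linspace(0.0, 5.0, 6)
fused = harmonic_mean_boundary(xs)
```

Because the harmonic mean is dominated by the smallest value, the fused boundary in this sketch stays close to the lowest of the three base boundaries at each x.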

Keywords

Ensemble selection · Multiple classifier system · Decision boundary

Acknowledgments

This work was supported in part by the National Science Centre, Poland under the grant no. 2017/25/B/ST6/01750.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Science and Technology, Wroclaw, Poland
