Discriminant Function Selection in Binary Classification Task

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 403)


Ensemble selection is one of the important problems in building multiple classifier systems (MCSs). This paper presents a dynamic ensemble selection method based on the analysis of discriminant functions. The idea of the selection is presented in the context of binary classification tasks. The paper presents two approaches: one takes into account the normalization of the discriminant functions, while in the second approach normalization is not performed. The reported results, based on data sets from the UCI repository, show that the proposed ensemble selection is a promising method for the development of MCSs.
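The abstract outlines dynamic selection of ensemble members from their discriminant function values, with and without normalization. As a rough illustration only (the exact algorithm is not given here; the threshold, the min-max normalization, and the sum fusion below are all illustrative assumptions, not the paper's method), such a scheme might look like:

```python
# Hypothetical sketch: per-sample ensemble selection driven by discriminant
# function magnitudes. All names and thresholds are illustrative assumptions.

def normalize(values):
    """Min-max normalize a list of scores to [0, 1] (assumed scheme)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def select_and_fuse(discriminants, threshold=0.6, use_normalization=True):
    """Keep classifiers whose (optionally normalized) discriminant magnitude
    meets `threshold`, then fuse the survivors by summing their raw outputs.
    Positive sum -> class +1, otherwise class -1."""
    scores = [abs(d) for d in discriminants]
    if use_normalization:
        scores = normalize(scores)
    selected = [d for d, s in zip(discriminants, scores) if s >= threshold]
    if not selected:  # fall back to the full ensemble
        selected = discriminants
    return 1 if sum(selected) > 0 else -1

# Toy outputs for one test sample: signed discriminant values
# (e.g., signed distances to decision boundaries) of five base classifiers.
outputs = [0.9, -0.1, 0.7, -0.8, 0.05]
print(select_and_fuse(outputs, use_normalization=True))   # normalized variant
print(select_and_fuse(outputs, use_normalization=False))  # unnormalized variant
```

The point of the sketch is the two variants the abstract contrasts: with normalization, the selection threshold is relative to the spread of the ensemble's scores on that sample; without it, the threshold acts on the raw discriminant magnitudes.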


Keywords: Ensemble selection · Multiple classifier system · Binary classification task



This work was supported by the Polish National Science Center under the grant no. DEC-2013/09/B/ST6/02264 and by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Technology.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
