Method of Static Classifiers Selection Using the Weights of Base Classifiers

  • Robert Burduk
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 342)


The choice of a pertinent objective function is one of the most crucial elements in static ensemble selection. In this study, a new approach to calculating the weights of base classifiers is developed. The values of these weights form the basis for selecting classifiers from the initial pool. The obtained weights are interpreted in the context of interval logic. A number of experiments were carried out on several datasets from the UCI repository, comparing the proposed algorithms with the base classifiers and with the oracle, sum, product, and mean fusion methods.
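The general scheme described above can be sketched in a few lines of Python. This is a minimal illustration only: the accuracy-based weighting rule, the fixed selection threshold, and all function names are assumptions for the sake of the example, not the paper's exact formulation.

```python
# Hypothetical sketch of weight-based static classifier selection.
# Weights, the selection threshold, and the fusion rule are
# illustrative assumptions, not the paper's exact method.

from collections import Counter

def classifier_weights(predictions, y_true):
    """Weight each base classifier by its validation accuracy."""
    weights = []
    for preds in predictions:
        correct = sum(p == y for p, y in zip(preds, y_true))
        weights.append(correct / len(y_true))
    return weights

def select_static(weights, threshold=0.5):
    """Statically keep the classifiers whose weight exceeds the threshold."""
    return [i for i, w in enumerate(weights) if w > threshold]

def weighted_vote(predictions, weights, selected):
    """Fuse the selected classifiers by weighted majority vote."""
    fused = []
    for j in range(len(predictions[0])):
        score = Counter()
        for i in selected:
            score[predictions[i][j]] += weights[i]
        fused.append(score.most_common(1)[0][0])
    return fused

# Toy validation predictions of three base classifiers vs. true labels.
y_val = [0, 1, 1, 0, 1]
preds = [
    [0, 1, 1, 0, 1],   # perfect classifier  -> weight 1.0
    [0, 1, 0, 0, 1],   # one error           -> weight 0.8
    [1, 0, 0, 1, 0],   # always wrong        -> weight 0.0
]
w = classifier_weights(preds, y_val)
chosen = select_static(w)                 # -> [0, 1]: the weak classifier is dropped
print(weighted_vote(preds, w, chosen))    # -> [0, 1, 1, 0, 1]
```

The selection step is "static" in the sense that the subset of classifiers is fixed once, on validation data, rather than chosen per test sample as in dynamic selection.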


Keywords: Classifier fusion · Interval logic · Static classifiers selection · Multiple classifier system



This work was supported by the Polish National Science Center under the grant no. DEC-2013/09/B/ST6/02264 and by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Technology.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
