
Building an Ensemble of Classifiers via Randomized Models of Ensemble Members

  • Conference paper
Progress in Image Processing, Pattern Recognition and Communication Systems (CORES 2021, IP&C 2021, ACS 2021)

Abstract

Many dynamic ensemble selection (DES) methods are known in the literature. A method previously developed by the authors consists in building a randomized classifier that is treated as a model of the base classifier; the model is equivalent to the base classifier in a certain probabilistic sense, and the probability of correct classification of the randomized classifier is taken as the competence of the evaluated classifier. In this paper, a novel randomized model of the base classifier is developed. In the proposed method, the random operation of the model results from a random selection of the learning set from a family of learning sets of a fixed size. The paper presents the mathematical foundations of this approach and shows how, for a practical application in which learning and validation sets are given, one can determine the competence measure and build a multiclassifier (MC) system with the DES scheme. The DES scheme with the proposed competence model was experimentally evaluated on a collection of 67 benchmark datasets and compared, in terms of eight quality criteria, with two ensemble classifiers that use the previously proposed concepts of the randomized model. The proposed approach achieved the lowest ranks for almost all investigated quality criteria.
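
To make the general recipe above concrete, the following is a minimal sketch of a DES scheme in this spirit, written against scikit-learn and NumPy. It is not the authors' algorithm: the class name RandomizedModelDES, the number of randomized models, the fixed learning-set size, the neighbourhood size, the bootstrap-style sampling, and the competence threshold (random guessing) are all illustrative assumptions. Only the overall idea follows the abstract: estimate a base classifier's competence as the probability that a randomized model, built from learning sets of fixed size drawn at random, classifies the local region of a query point correctly, then select competent classifiers per query.

```python
import numpy as np
from sklearn.base import clone
from sklearn.neighbors import NearestNeighbors


class RandomizedModelDES:
    """Hypothetical DES sketch: a base classifier's competence is estimated via
    randomized models retrained on learning sets of a fixed size drawn at random."""

    def __init__(self, base_estimators, n_models=20, subset_size=50,
                 n_neighbors=7, random_state=None):
        self.base_estimators = base_estimators
        self.n_models = n_models        # randomized models per base classifier
        self.subset_size = subset_size  # fixed size of each random learning set
        self.n_neighbors = n_neighbors  # local region in the validation set
        self.random_state = random_state

    def fit(self, X_learn, y_learn, X_val, y_val):
        rng = np.random.RandomState(self.random_state)
        self.classes_ = np.unique(y_learn)
        self.X_val_, self.y_val_ = X_val, y_val
        self.nn_ = NearestNeighbors(n_neighbors=self.n_neighbors).fit(X_val)
        # Base classifiers are trained on the full learning set ...
        self.ensemble_ = [clone(e).fit(X_learn, y_learn) for e in self.base_estimators]
        # ... while each randomized model is a clone retrained on a learning set
        # of fixed size sampled at random (with replacement, as an assumption).
        self.models_ = []
        for est in self.base_estimators:
            models = []
            for _ in range(self.n_models):
                idx = rng.choice(len(X_learn), size=self.subset_size, replace=True)
                models.append(clone(est).fit(X_learn[idx], y_learn[idx]))
            self.models_.append(models)
        return self

    def _competence(self, x):
        # Competence at x: estimated probability that the randomized model
        # correctly classifies the validation neighbours of x.
        _, nbr = self.nn_.kneighbors(x.reshape(1, -1))
        Xn, yn = self.X_val_[nbr[0]], self.y_val_[nbr[0]]
        return np.array([np.mean([m.predict(Xn) == yn for m in models])
                         for models in self.models_])

    def predict(self, X):
        preds = []
        for x in np.asarray(X):
            comp = self._competence(x)
            # Select base classifiers more competent than random guessing;
            # fall back to the single most competent one if none qualifies.
            selected = np.flatnonzero(comp > 1.0 / len(self.classes_))
            if selected.size == 0:
                selected = np.array([np.argmax(comp)])
            votes = [self.ensemble_[i].predict(x.reshape(1, -1))[0] for i in selected]
            labels, counts = np.unique(votes, return_counts=True)
            preds.append(labels[np.argmax(counts)])
        return np.array(preds)
```

Under these assumptions, calling fit(X_learn, y_learn, X_val, y_val) followed by predict(X_test) selects, for every test point, only those base classifiers whose estimated local competence exceeds random guessing and combines their outputs by majority vote (all inputs taken to be NumPy arrays).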


Notes

  1. https://github.com/ptrajdos/rrcBasedClassifiers/tree/develop.

  2. https://github.com/ptrajdos/MLResults/blob/master/data/KeelData.tar.xz.

  3. https://github.com/ptrajdos/MLResults/raw/a3b4168a0b0aabee7ef8cd1056baf4a8578a9f6d/RandomizedClassifiers/CORES2021.zip.


Acknowledgment

This work was supported by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Science and Technology.

Author information

Corresponding author

Correspondence to Pawel Trajdos.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Trajdos, P., Kurzynski, M. (2022). Building an Ensemble of Classifiers via Randomized Models of Ensemble Members. In: Choraś, M., Choraś, R.S., Kurzyński, M., Trajdos, P., Pejaś, J., Hyla, T. (eds) Progress in Image Processing, Pattern Recognition and Communication Systems (CORES 2021, IP&C 2021, ACS 2021). Lecture Notes in Networks and Systems, vol 255. Springer, Cham. https://doi.org/10.1007/978-3-030-81523-3_1
