
On a New Method for Improving Weak Classifiers Using Bayes Metaclassifier

  • Marcin Majak
  • Marek Kurzyński
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 578)

Abstract

In this paper, a new algorithm called the Bayes metaclassifier (BMC) is introduced as a method for improving the performance of weak classifiers. In general, the BMC constitutes a probabilistic generalization of any base classifier and has the form of the Bayes scheme. To validate BMC classification, two experiments were designed. In the first, three synthetic datasets were generated from normal distributions to calculate and empirically check the upper bound on the improvement of a base classifier when the BMC approach is applied. Furthermore, to validate the usefulness of the algorithm, extensive simulations on 22 available benchmark datasets were performed, comparing the BMC model against 8 base classifiers with different design paradigms.
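The Bayes-scheme construction described above can be illustrated with a short sketch: treat the base classifier's crisp decision psi(x) as a random observation, estimate the class priors p(j) and the conditionals P(psi(x) = i | class j) on a validation set, and classify by the resulting posterior. This is a minimal illustration of the general idea only, not the authors' exact BMC (which is built on the randomized reference classifier); the names below (BayesMetaWrapper, ThresholdStump) are hypothetical.

    # Minimal sketch of a Bayes-scheme wrapper around a base classifier.
    # Assumption: the paper's BMC differs in detail; this only shows the
    # generic Bayes-scheme correction p(j | psi(x)=i) ~ p(j) * P(psi=i | j).
    import numpy as np

    class BayesMetaWrapper:
        def __init__(self, base, alpha=1.0):
            self.base = base    # any object with fit(X, y) and predict(X)
            self.alpha = alpha  # Laplace smoothing for the confusion counts

        def fit(self, X, y, X_val, y_val):
            self.classes_ = np.unique(y)
            self.base.fit(X, y)
            k = len(self.classes_)
            # Class priors p(j), estimated from the validation labels.
            self.prior_ = np.array([(y_val == c).mean() for c in self.classes_])
            # Smoothed estimate of P(psi(x) = i | true class j).
            pred = self.base.predict(X_val)
            counts = np.full((k, k), self.alpha)
            for j, cj in enumerate(self.classes_):
                for i, ci in enumerate(self.classes_):
                    counts[j, i] += np.sum((y_val == cj) & (pred == ci))
            self.cond_ = counts / counts.sum(axis=1, keepdims=True)
            return self

        def predict(self, X):
            pred = self.base.predict(X)
            idx = np.searchsorted(self.classes_, pred)
            # Posterior over true classes given the base decision (Bayes scheme).
            posterior = self.prior_[None, :] * self.cond_[:, idx].T
            return self.classes_[np.argmax(posterior, axis=1)]

    class ThresholdStump:
        # Deliberately weak base classifier: thresholds a single feature.
        def fit(self, X, y):
            self.t_ = X[:, 0].mean()
            return self

        def predict(self, X):
            return (X[:, 0] > self.t_).astype(int)

    # Toy check in the spirit of the first experiment: two Gaussian classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(1.5, 1.0, (500, 2))])
    y = np.repeat([0, 1], 500)
    perm = rng.permutation(1000)
    X, y = X[perm], y[perm]
    bmc = BayesMetaWrapper(ThresholdStump()).fit(X[:500], y[:500], X[500:800], y[500:800])
    print((bmc.predict(X[800:]) == y[800:]).mean())

For a two-class Gaussian problem like this, the posterior correction can at most recover the Bayes error of the underlying distributions, which is the empirical upper bound the first experiment probes.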

Keywords

Classification · Improving weak classifiers · Randomized reference classifier

Acknowledgments

This work was supported by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Science and Technology.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Wrocław University of Science and Technology, Wrocław, Poland
