
One-Class Classification Ensemble with Dynamic Classifier Selection

  • Bartosz Krawczyk
  • Michał Woźniak
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8866)

Abstract

The main problem of one-class classification lies in selecting a model for the data: since no counterexamples are available, standard methods for estimating classifier quality cannot be applied. Ensemble methods that utilize more than one model are therefore a highly attractive solution, as they avoid committing to the weakest model and improve the robustness of the recognition system. However, one cannot assume that all classifiers in the pool are globally accurate; they may instead have local areas of competence in which they should be utilized. We present a dynamic classifier selection method for constructing efficient one-class ensembles. We propose to calculate each classifier's competence at the validation points and to extend these values over the entire decision space with a Gaussian potential function. An experimental analysis, carried out on a number of benchmark datasets and backed up by a thorough statistical analysis, proves the usefulness of the proposed method.
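The core idea sketched in the abstract, measuring a classifier's competence at validation points and spreading those values over the decision space with a Gaussian potential function before selecting the locally most competent model, can be illustrated as follows. This is a minimal sketch, not the authors' exact formulation: it assumes a scikit-learn-like predict interface for the pool members, a simple ±1 correctness-based source competence, and illustrative names (source_competence, gaussian_competence, select_classifier) and parameter gamma.

import numpy as np

def source_competence(classifier, X_val, y_val):
    """Per-validation-point competence: +1 if the classifier decides the
    point correctly (target vs. outlier), -1 otherwise (illustrative choice)."""
    pred = classifier.predict(X_val)          # assumed scikit-learn-like API
    return np.where(pred == y_val, 1.0, -1.0)

def gaussian_competence(x, X_val, src, gamma=1.0):
    """Competence of one classifier at test point x, obtained by spreading the
    validation-point competences with a Gaussian potential function."""
    d2 = np.sum((X_val - x) ** 2, axis=1)     # squared distances to validation points
    w = np.exp(-gamma * d2)                   # Gaussian potential weights
    return np.sum(src * w) / (np.sum(w) + 1e-12)

def select_classifier(x, pool, X_val, y_val, gamma=1.0):
    """Dynamic selection: pick the pool member with the highest estimated
    competence in the neighbourhood of x."""
    scores = [gaussian_competence(x, X_val, source_competence(c, X_val, y_val), gamma)
              for c in pool]
    return pool[int(np.argmax(scores))]

In this sketch the spread parameter gamma controls how local the competence estimate is: larger values make the selection depend only on the nearest validation points, smaller values approach a global (static) ranking of the pool.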

Keywords

One-class classification · Classifier selection · Competence measure



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wrocław University of Technology, Wrocław, Poland
