Diversity in Ensembles for One-Class Classification

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 185)

Abstract

One-class classification, also known as learning in the absence of counterexamples, is one of the most challenging problems in contemporary machine learning. This paper focuses on creating one-class multiple classifier systems with diverse classifiers in the pool. An approach is proposed in which an ensemble of one-class classifiers, instead of a single one, is used for target class recognition. The paper introduces diversity measures dedicated to selecting such specific classifiers for the committee, and examines the influence of ensuring heterogeneity on the overall classification performance. Experimental investigations show that diversity measures for one-class classifiers are a promising research direction. Additionally, the paper addresses the lack of benchmark datasets for one-class problems and proposes a unified approach for training and testing one-class classifiers on publicly available multi-class datasets.
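To make the diversity idea concrete, the following is a minimal sketch of one classical pairwise measure, the disagreement between the target/outlier decisions of two one-class classifiers, averaged over all pairs in the pool. This illustrates the general notion of diversity-driven committee selection discussed in the abstract; the measures actually proposed in the paper may be defined differently, and the example data below is purely illustrative.

```python
from itertools import combinations

def disagreement(decisions_a, decisions_b):
    """Fraction of objects on which two classifiers' binary decisions differ."""
    assert len(decisions_a) == len(decisions_b)
    diff = sum(a != b for a, b in zip(decisions_a, decisions_b))
    return diff / len(decisions_a)

def ensemble_diversity(all_decisions):
    """Mean pairwise disagreement over every pair of classifiers in the pool."""
    pairs = list(combinations(all_decisions, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

# Hypothetical decisions of three one-class classifiers on five objects
# (1 = accepted as target, 0 = rejected as outlier):
pool = [[1, 1, 0, 1, 0],
        [1, 0, 0, 1, 1],
        [0, 1, 1, 1, 0]]
print(ensemble_diversity(pool))  # mean pairwise disagreement of the pool
```

A selection procedure could then keep the subset of classifiers maximizing such a score, trading off diversity against individual accuracy.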

Keywords

one-class classification, machine learning, multiple classifier system, diversity, classifier selection, one-class benchmarks



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
