Pruning One-Class Classifier Ensembles by Combining Sphere Intersection and Consistency Measures

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7894)

Abstract

One-class classification is considered one of the most challenging topics in contemporary machine learning. Building Multiple Classifier Systems for this task has proven to be a promising research direction. This raises the question of how to select valuable members for the committee, so far a largely unexplored area in one-class classification. This paper introduces a novel approach that selects models for the committee in a way that assures both high quality of the individual classifiers and high diversity among the pool members. We aim to prevent the selection of models that are too weak or too similar to one another. This is achieved by means of multi-objective optimization, which allows several criteria to be considered when searching for a good subset of classifiers. A memetic algorithm is applied owing to its efficiency and less random behaviour compared with a traditional genetic algorithm. As one-class classification differs from traditional multi-class problems, we propose two measures suited to it: a consistency measure, which ranks the quality of one-class models, and a sphere intersection measure introduced by us, which serves as a diversity metric. Experimental results on a number of benchmark datasets show that the proposed method outperforms traditional single-objective approaches.
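The selection criteria described above can be illustrated with a minimal sketch. It assumes, purely for illustration, that each one-class model is a hypersphere characterized by a centre and radius (as in Support Vector Data Description) and already carries a precomputed consistency score; the overlap formula below is a simple geometric proxy for sphere intersection, not the paper's exact measure, and the weighted scalarization stands in for the full multi-objective memetic search.

```python
import numpy as np

def sphere_overlap(c1, r1, c2, r2):
    """Illustrative overlap proxy between two hyperspheres:
    1.0 when the centres coincide, 0.0 when the spheres are disjoint."""
    d = float(np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float)))
    return max(0.0, (r1 + r2 - d) / (r1 + r2))

def committee_fitness(members, weight=0.5):
    """Bi-criteria fitness of a candidate committee: mean consistency
    (individual quality, higher is better) combined with mean pairwise
    sphere separation (diversity = 1 - overlap). A real multi-objective
    search would keep both criteria separate instead of scalarizing."""
    quality = float(np.mean([m["consistency"] for m in members]))
    overlaps = [
        sphere_overlap(a["center"], a["radius"], b["center"], b["radius"])
        for i, a in enumerate(members)
        for b in members[i + 1:]
    ]
    diversity = 1.0 - float(np.mean(overlaps)) if overlaps else 0.0
    return weight * quality + (1.0 - weight) * diversity
```

A candidate subset of models can then be scored with `committee_fitness`, penalizing committees whose members' decision spheres largely coincide (too similar) or whose mean consistency is low (too weak).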

Keywords

machine learning · one-class classification · ensemble pruning · classifier selection · diversity · random subspace · memetic algorithm · multi-objective optimisation

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland