Pruned Random Subspace Method for One-Class Classifiers

  • Veronika Cheplygina
  • David M. J. Tax
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6713)


The goal of one-class classification is to distinguish the target class from all other classes using training data from the target class only. Because it is difficult for a single one-class classifier to capture all the characteristics of the target class, combining several one-class classifiers may be required. Previous research has shown that the Random Subspace Method (RSM), in which classifiers are trained on different subsets of the feature space, can be effective for one-class classifiers. In this paper we show that the performance of the RSM can be noisy, and that pruning inaccurate classifiers from the ensemble can be more effective than using all available classifiers. We propose to prune the RSM ensemble of one-class classifiers using either a supervised criterion, the area under the ROC curve (AUC), or an unsupervised consistency criterion. It appears that with the AUC criterion the performance can increase dramatically, whereas with the consistency criterion the results do not improve but do become more predictable.
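As an illustration of the pruning scheme described in the abstract, the sketch below (an assumed reconstruction, not the authors' implementation) builds a random subspace ensemble of simple Gaussian one-class models and keeps only the members with the highest AUC on a validation set that contains example outliers, i.e. the supervised criterion. The Gaussian base model, the function names, and all parameter values are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gauss(X):
    # Simple Gaussian one-class model: per-feature mean and variance.
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-9  # avoid division by zero
    return mu, var

def score(model, X):
    # Negative squared Mahalanobis-style distance: higher = more target-like.
    mu, var = model
    return -(((X - mu) ** 2) / var).sum(axis=1)

def auc(scores_target, scores_outlier):
    # Area under the ROC curve via pairwise comparisons of scores.
    s_t = scores_target[:, None]
    s_o = scores_outlier[None, :]
    return (s_t > s_o).mean() + 0.5 * (s_t == s_o).mean()

def pruned_rsm(X_target, X_val_target, X_val_outlier, n_sub=25, dim=2, keep=10):
    # Train one base classifier per random feature subset, then keep the
    # `keep` members with the highest validation AUC (supervised pruning).
    d = X_target.shape[1]
    members = []
    for _ in range(n_sub):
        feats = rng.choice(d, size=dim, replace=False)
        model = fit_gauss(X_target[:, feats])
        a = auc(score(model, X_val_target[:, feats]),
                score(model, X_val_outlier[:, feats]))
        members.append((a, feats, model))
    members.sort(key=lambda m: m[0], reverse=True)
    return members[:keep]

def ensemble_score(members, X):
    # Mean combining of the retained members' scores.
    return np.mean([score(m, X[:, f]) for _, f, m in members], axis=0)
```

With the unsupervised consistency criterion, the validation outliers would not be needed; members would instead be ranked by how well the fraction of rejected target examples agrees with the classifier's preset rejection threshold.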


Keywords: One-class classification · Random Subspace Method · Ensemble learning · Pruning ensembles





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Veronika Cheplygina (1)
  • David M. J. Tax (1)
  1. Pattern Recognition Lab, Delft University of Technology, The Netherlands
