Combining Diverse One-Class Classifiers
Multiple Classifier Systems (MCSs) are the focus of intense research, and a wide variety of methods have been developed to exploit the strengths of individual classifiers. In this paper we address the problem of implementing a multi-class classifier as an ensemble of one-class classifiers. To improve the performance of a compound classifier, individual classifiers that differ in complexity, type, training algorithm, or other properties can be combined, which can increase both accuracy and robustness. Since a one-class classifier is designed to recognize a single class only, building MCSs from such models is difficult. One important problem is how to ensure the diversity of an ensemble composed of one-class classifiers, because well-known diversity measures were developed for committees of multi-class classifiers. In this work we propose a novel diversity measure applicable to a set of one-class classifiers. Additionally, we propose a classifier fusion model dedicated to one-class classifiers that allows more than one classifier per class, and we examine whether increasing the number of individual one-class classifiers affects the quality of the MCS. The proposed model was evaluated in computer experiments, whose results show that it can outperform well-known fusion methods.
Keywords: Pattern recognition, machine learning, one-class classification, classifier ensemble, diversity measure
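To make the basic idea concrete, the following is a minimal sketch of turning per-class one-class models into a multi-class decision: one model is fitted per class on that class's data only, and a test sample is assigned to the class whose model scores it highest. The Gaussian-density one-class model and the max-score fusion rule here are illustrative assumptions, not the diversity measure or fusion model proposed in the paper.

```python
import numpy as np

class GaussianOneClass:
    """A toy one-class model: score = Gaussian log-density of the target class."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        # Regularize the covariance so it is always invertible.
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.inv_cov = np.linalg.inv(cov)
        self.log_det = np.linalg.slogdet(cov)[1]
        return self

    def score(self, X):
        d = X - self.mean
        maha = np.einsum('ij,jk,ik->i', d, self.inv_cov, d)
        return -0.5 * (maha + self.log_det)

def fuse_one_class(models, X):
    """Max-score fusion: label each sample by the best-scoring one-class model."""
    scores = np.stack([m.score(X) for m in models])  # shape (n_classes, n_samples)
    return scores.argmax(axis=0)

# Toy data: two well-separated classes, one one-class model per class.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 2))
X1 = rng.normal(5.0, 1.0, size=(100, 2))
models = [GaussianOneClass().fit(X0), GaussianOneClass().fit(X1)]
pred = fuse_one_class(models, np.vstack([X0, X1]))
```

The paper's fusion model generalizes this picture by allowing more than one one-class classifier per class, so the scores of several models supporting the same class must be combined before the final decision.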