Multiple classifier systems were originally proposed for supervised classification tasks, and few works have dealt with semi-supervised multiple classifiers. However, there are important pattern recognition applications, such as multi-sensor remote sensing and multi-modal biometrics, that demand semi-supervised multiple classifier systems able to exploit both labelled and unlabelled data. In this paper, the use of two well-known semi-supervised learning methods, namely co-training and self-training, in multiple classifier systems is investigated experimentally. Results reported on benchmark data sets show that co-training and self-training allow unlabelled data to be exploited in different types of multiple classifier systems.
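Since the paper studies these two schemes experimentally, a minimal sketch of each may help fix ideas. The following Python sketch is an illustration, not the authors' implementation: the scikit-learn classifier interface (fit/predict_proba), the confidence threshold, the iteration cap, and the two-view feature split for co-training are all assumptions made here for concreteness.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB


def self_training(clf, X_lab, y_lab, X_unl, threshold=0.95, max_iter=10):
    """Self-training: one classifier pseudo-labels the unlabelled points
    it is most confident about and is retrained on the enlarged labelled
    set. `threshold` and `max_iter` are illustrative choices."""
    X_lab, y_lab, X_unl = X_lab.copy(), y_lab.copy(), X_unl.copy()
    for _ in range(max_iter):
        if len(X_unl) == 0:
            break
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unl)
        keep = proba.max(axis=1) >= threshold  # confident points only
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unl[keep]])
        y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
        X_unl = X_unl[~keep]
    return clf.fit(X_lab, y_lab)


def co_training(clf_a, clf_b, X_lab, y_lab, X_unl, view_a, view_b,
                threshold=0.95, max_iter=10):
    """Co-training (after Blum & Mitchell): two classifiers, each seeing
    one feature view, exchange confident pseudo-labels. Disagreements are
    resolved in favour of clf_a here, purely for brevity."""
    X_lab, y_lab, X_unl = X_lab.copy(), y_lab.copy(), X_unl.copy()
    for _ in range(max_iter):
        if len(X_unl) == 0:
            break
        clf_a.fit(X_lab[:, view_a], y_lab)
        clf_b.fit(X_lab[:, view_b], y_lab)
        keep = np.zeros(len(X_unl), dtype=bool)
        pseudo = np.zeros(len(X_unl), dtype=y_lab.dtype)
        for clf, view in ((clf_a, view_a), (clf_b, view_b)):
            proba = clf.predict_proba(X_unl[:, view])
            new = (proba.max(axis=1) >= threshold) & ~keep
            pseudo[new] = proba.argmax(axis=1)[new]
            keep |= new
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unl[keep]])
        y_lab = np.concatenate([y_lab, pseudo[keep]])
        X_unl = X_unl[~keep]
    return clf_a, clf_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 4))
    y = (X[:, 0] + X[:, 2] > 0).astype(int)
    # Pretend only the first 20 samples are labelled.
    clf = self_training(GaussianNB(), X[:20], y[:20], X[20:])
    print("self-training accuracy:", (clf.predict(X) == y).mean())
    a, b = co_training(GaussianNB(), GaussianNB(), X[:20], y[:20], X[20:],
                       view_a=[0, 1], view_b=[2, 3])
    print("co-training (view A) accuracy:",
          (a.predict(X[:, [0, 1]]) == y).mean())
```

Either loop can be wrapped around the members of a multiple classifier system (e.g., an ensemble built on different feature subsets); how best to do so is precisely what the paper investigates.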


Keywords: Feature Subset · Multiple Classifier · Classifier Ensemble · Multiple Classifier System · Supervised Classification Task


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Luca Didaci ¹
  • Fabio Roli ¹

  1. Dept. of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy