Representative Sampling for Text Classification Using Support Vector Machines

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2633)


To reduce human labeling effort, there has been increasing interest in applying active learning to the training of text classifiers. This paper describes a straightforward active learning heuristic, representative sampling, which explores the clustering structure of ‘uncertain’ documents and selects representative samples for the user to label, in order to speed up the convergence of Support Vector Machine (SVM) classifiers. In contrast to other active learning algorithms, representative sampling explicitly addresses the problem of selecting more than one unlabeled document per round. In an empirical study we compared representative sampling with both random sampling and SVM active learning. The results show that representative sampling achieves strong learning performance with fewer labeled documents and can therefore reduce human effort in text classification tasks.
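The heuristic described above can be sketched in a few lines: query candidates are the documents the current SVM is uncertain about (those inside the margin), these are clustered, and one representative per cluster is chosen for labeling. The sketch below is illustrative only, assuming a linear SVM whose weight vector `w` and bias `b` have already been trained; a small k-means stands in for the clustering step (the paper's keywords suggest a Gaussian mixture model is used instead), and the function names and the margin threshold of 1 are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: returns cluster centers and point labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def representative_sample(X, w, b, k):
    """Pick k documents to query: cluster the 'uncertain' points
    (inside the SVM margin, |w.x + b| < 1) and return the point
    closest to each cluster centroid."""
    margin_dist = np.abs(X @ w + b)
    uncertain = X[margin_dist < 1.0]
    if len(uncertain) <= k:
        return uncertain          # too few candidates: query them all
    centers, _ = kmeans(uncertain, k)
    picks = [uncertain[np.argmin(((uncertain - c) ** 2).sum(-1))]
             for c in centers]
    return np.array(picks)
```

Selecting one representative per cluster is what lets the method query a whole batch of documents at once, rather than picking the single most uncertain example as margin-based active learning does.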


Keywords: Support Vector Machine · Gaussian Mixture Model · Representative Sampling · Support Vector Machine Model · Text Classification
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  1. Tsinghua University, Beijing, China
  2. Institute for Computer Science, University of Munich, Germany
  3. Corporate Technology, Siemens AG, Munich, Germany
  4. University of Arkansas at Little Rock, Little Rock, USA
