
Partially Supervised Learning

  • Bing Liu
  • Wee Sun Lee
Chapter
Part of the Data-Centric Systems and Applications book series (DCSA)

Abstract

In supervised learning, the learning algorithm uses labeled training examples from every class to generate a classification function. One drawback of this classic paradigm is that a large number of labeled examples are needed in order to learn accurately. Since labeling is often done manually, it can be very labor-intensive and time-consuming. In this chapter, we study two partially supervised learning tasks. As their names suggest, these two learning tasks do not need full supervision, and thus are able to reduce the labeling effort. The first is the task of learning from labeled and unlabeled examples, which is commonly known as semi-supervised learning. In this chapter, we also call it LU learning (L and U stand for “labeled” and “unlabeled” respectively). In this setting, there is a small set of labeled examples of every class and a large set of unlabeled examples, and the objective is to make use of the unlabeled examples to improve learning. The second is the task of learning from positive and unlabeled examples, which we call PU learning (P and U stand for “positive” and “unlabeled” respectively). In this setting, there are no labeled negative examples, and the objective is to build a classifier from a set of positive examples and a set of unlabeled examples.
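The LU setting above can be illustrated with a minimal self-training sketch: fit a classifier on the small labeled set, then iteratively absorb unlabeled points that the current model labels with high confidence. The nearest-centroid classifier, the margin-based confidence test, and all function names below are illustrative assumptions for this sketch, not the specific algorithms covered in the chapter (which include EM-based and co-training-based methods).

```python
# Minimal LU-learning sketch: self-training with a toy nearest-centroid
# classifier. All names here (self_train, threshold, etc.) are hypothetical.
import math

def centroid(points):
    # Component-wise mean of a list of equal-length point tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def self_train(labeled, unlabeled, threshold=0.0, rounds=5):
    """labeled: dict mapping class -> list of points; unlabeled: list of points.
    Returns the final per-class centroids after self-training."""
    labeled = {c: list(pts) for c, pts in labeled.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {c: centroid(pts) for c, pts in labeled.items()}
        added = []
        for p in pool:
            # Sort classes by distance from p to each class centroid.
            d = sorted((math.dist(p, ctr), c) for c, ctr in cents.items())
            # "Confidence" = margin between the two nearest centroids; only
            # confidently classified points are moved into the labeled set.
            if len(d) == 1 or d[1][0] - d[0][0] > threshold:
                labeled[d[0][1]].append(p)
                added.append(p)
        pool = [p for p in pool if p not in added]
        if not added:  # no confident points left; stop early
            break
    return {c: centroid(pts) for c, pts in labeled.items()}
```

The key idea this sketch shares with the chapter's methods is that unlabeled data reshapes the decision boundary: after absorbing the unlabeled points, each class centroid reflects both the labeled seeds and the confidently self-labeled examples.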

Keywords

Support Vector Machine, Unlabeled Data, Unlabeled Instance, Probably Approximately Correct, Positive Document



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. Department of Computer Science, University of Illinois at Chicago, Chicago, USA
