Supervised Learning and Co-training

  • Malte Darnstädt
  • Hans Ulrich Simon
  • Balázs Szörényi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6925)

Abstract

Co-training under the Conditional Independence Assumption is among the models which demonstrate how radically the need for labeled data can be reduced if a huge amount of unlabeled data is available. In this paper, we explore how much credit for this saving must be assigned solely to the extra assumptions underlying the Co-training model. To this end, we compute general (almost tight) upper and lower bounds on the sample size needed to achieve the success criterion of PAC-learning within the model of Co-training under the Conditional Independence Assumption in a purely supervised setting. The upper bounds lie significantly below the lower bounds for PAC-learning without Co-training. Thus, Co-training saves labeled data even when it is not combined with unlabeled data. On the other hand, the saving is much less radical than the known savings in the semi-supervised setting.
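For context, the following well-known background facts (not results stated in this paper; the symbols $d$, $\varepsilon$, $\delta$, $x_1$, $x_2$, $y$ are standard notation introduced here only for illustration) make the comparison in the abstract concrete. The PAC success criterion is to output, with probability at least $1-\delta$, a hypothesis whose error with respect to the target concept is at most $\varepsilon$. For a concept class of VC dimension $d$, a labeled sample of size

$$ O\!\left(\frac{d\log(1/\varepsilon) + \log(1/\delta)}{\varepsilon}\right) $$

suffices for this, and a sample of size

$$ \Omega\!\left(\frac{d + \log(1/\delta)}{\varepsilon}\right) $$

is necessary, i.e. $\Theta(d/\varepsilon)$ up to logarithmic factors; this $\Omega(d/\varepsilon)$ baseline is what the abstract's lower bounds "for PAC-learning without Co-training" refer to. In the two-view Co-training model, each example is a pair of views $(x_1, x_2)$, and the Conditional Independence Assumption requires the two views to be independent given the label $y$:

$$ \Pr[x_1, x_2 \mid y] \;=\; \Pr[x_1 \mid y]\cdot\Pr[x_2 \mid y]. $$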

Keywords

Concept Class, Labeled Data, Unlabeled Data, Success Criterion, Target Concept

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Malte Darnstädt (1)
  • Hans Ulrich Simon (1)
  • Balázs Szörényi (2)
  1. Fakultät für Mathematik, Ruhr-Universität Bochum, Bochum, Germany
  2. Research Group on Artificial Intelligence, Hungarian Academy of Sciences and University of Szeged, Szeged, Hungary
