Learning a Classifier when the Labeling Is Known

  • Shalev Ben-David
  • Shai Ben-David
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6925)

Abstract

We introduce a new model of learning, Known-Labeling-Classifier-Learning (KLCL). The goal of such learning is to find a low-error classifier from some given target-class of predictors, when the correct labeling is known to the learner. This learning problem can be viewed as measuring the information conveyed by the identity of input examples, rather than by their labels.

Given some class of predictors \({\mathcal H}\), a labeling function, and an i.i.d. unlabeled sample generated by some unknown data distribution, the goal of our learner is to find a classifier in \({\mathcal H}\) that has as low as possible error with respect to the sample-generating distribution and the given labeling function. When the labeling function does not belong to the target class, the error of members of the class (and thus their relative quality as label predictors) varies with the marginal of the underlying data distribution.
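The setup above can be illustrated with a minimal empirical-risk-minimization sketch: since the labeling function is known, the learner can label its own unlabeled sample and pick the hypothesis with the smallest empirical error. All names (the threshold class, the labeling function, the sample) are illustrative assumptions, not taken from the paper.

```python
def klcl_erm(hypotheses, labeling, unlabeled_sample):
    """KLCL learning via ERM: label the i.i.d. unlabeled sample with the
    known labeling function, then return the hypothesis in the class with
    the lowest empirical error on that self-labeled sample."""
    labeled = [(x, labeling(x)) for x in unlabeled_sample]

    def empirical_error(h):
        return sum(h(x) != y for x, y in labeled) / len(labeled)

    return min(hypotheses, key=empirical_error)


# Toy example: threshold classifiers on the line, with the true labeling
# falling outside the class (the agnostic situation discussed above).
thresholds = [lambda x, t=t: int(x >= t) for t in (0.0, 0.25, 0.5, 0.75)]
labeling = lambda x: int(x >= 0.6)            # known labeling, not in the class
sample = [0.1, 0.3, 0.55, 0.62, 0.7, 0.9]     # unlabeled i.i.d. draws
best = klcl_erm(thresholds, labeling, sample)  # picks the threshold at 0.5
```

Note that which threshold wins depends on where the sample concentrates, which is exactly the point made above: when the labeling is not in the class, the relative quality of hypotheses varies with the marginal distribution.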

We prove a trichotomy with respect to the KLCL sample complexity. Namely, we show that for any learnable concept class \({\mathcal H}\), its KLCL sample complexity is either 0 or Θ(1/ε) or Ω(1/ε²). Furthermore, we give a simple combinatorial property of concept classes that characterizes this trichotomy.

Our results imply new sample-size lower bounds for the common agnostic PAC model: a lower bound of Ω(1/ε²) on the sample complexity of learning deterministic classifiers, as well as novel results about the utility of unlabeled examples in a semi-supervised learning setup.


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Shalev Ben-David: Faculty of Mathematics, University of Waterloo, Waterloo, Canada
  • Shai Ben-David: David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada