
Randomized hypotheses and minimum disagreement hypotheses for learning with noise

Extended abstract
  • Nicolò Cesa-Bianchi
  • Paul Fischer
  • Eli Shamir
  • Hans Ulrich Simon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1208)

Abstract

In this paper we prove various results about PAC learning in the presence of malicious and random classification noise. Our main theme is the use of randomized hypotheses for learning with small sample sizes and high malicious noise rates. We show an algorithm that PAC learns any target class of VC-dimension d using randomized hypotheses and order of d/ε training examples (up to logarithmic factors) while tolerating malicious noise rates even slightly larger than the information-theoretic bound ε/(1+ε) for deterministic hypotheses. Combined with previous results, this implies that the lower bound of order d/Δ + ε/Δ² on the sample size, where η = ε/(1+ε) − Δ is the malicious noise rate, applies only when deterministic hypotheses are used. We then show that the information-theoretic upper bound on the noise rate for deterministic hypotheses can be replaced by 2ε/(1+2ε) if randomized hypotheses are used. Investigating further the use of randomized hypotheses, we show a strategy for learning the powerset of d elements using an optimal sample size of order dε/Δ² (up to logarithmic factors) while tolerating a noise rate η = 2ε/(1+2ε) − Δ. We complement this result by proving that a sample size of this order is also necessary for any class C of VC-dimension d.
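
The gap between the two noise-rate bounds is easy to see numerically. The following Python sketch (our illustration, not part of the paper) evaluates the deterministic threshold ε/(1+ε) and the randomized threshold 2ε/(1+2ε) for a few accuracy values ε:

    # Compare the two malicious-noise thresholds from the abstract:
    # eps/(1+eps) for deterministic hypotheses, 2*eps/(1+2*eps) for randomized ones.
    def deterministic_threshold(eps):
        return eps / (1 + eps)

    def randomized_threshold(eps):
        return 2 * eps / (1 + 2 * eps)

    for eps in (0.01, 0.05, 0.10):
        print(f"eps={eps:.2f}: deterministic={deterministic_threshold(eps):.4f}, "
              f"randomized={randomized_threshold(eps):.4f}")

For small ε the randomized threshold is roughly twice the deterministic one, which is the sense in which randomization buys extra noise tolerance.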

We then discuss the performance of the minimum disagreement strategy under both malicious and random classification noise models. For malicious noise we show an algorithm that, using deterministic hypotheses, learns unions of d intervals on the continuous domain [0, 1) using a sample size significantly smaller than that needed by the minimum disagreement strategy. For classification noise we show, generalizing a result by Laird [7], that order of d/(εΔ²) training examples suffice (up to logarithmic factors) to learn by minimizing disagreements any target class of VC-dimension d while tolerating a random classification noise rate η = 1/2 − Δ. Using a lower bound by Simon [8], we also prove that this sample size bound cannot be significantly improved.
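
As a concrete reading of the minimum disagreement strategy, the Python sketch below picks, from a finite hypothesis class, the hypothesis with the fewest disagreements on a noisy sample. The powerset class over d elements and the noise rate η are illustrative choices echoing the classes discussed above, not the paper's constructions.

    import random

    def minimum_disagreement(sample, hypotheses):
        # Return the hypothesis minimizing the number of disagreements
        # with the (possibly noisy) labeled sample.
        return min(hypotheses, key=lambda h: sum(h(x) != y for x, y in sample))

    # Toy instance: the powerset of {0, ..., d-1} as target class, with each
    # label flipped independently with probability eta (classification noise).
    d, eta = 3, 0.2
    target = {0, 2}                      # hypothetical target concept
    random.seed(0)
    xs = [random.randrange(d) for _ in range(200)]
    sample = [(x, (x in target) != (random.random() < eta)) for x in xs]

    # Enumerate all 2^d subsets as hypotheses.
    hypotheses = [lambda x, s=frozenset(s): x in s
                  for s in ({i for i in range(d) if (m >> i) & 1}
                            for m in range(2 ** d))]

    best = minimum_disagreement(sample, hypotheses)
    print(sorted(i for i in range(d) if best(i)))  # with high probability: [0, 2]

With 200 examples and η = 0.2 this reliably recovers the target subset; the paper's concern is how large the sample must be as a function of d, ε, and Δ.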


References

  1. D. Angluin and P.D. Laird. Learning from noisy examples. Machine Learning, 2:343–370, 1988.
  2. N. Cesa-Bianchi, E. Dichterman, P. Fischer, and H. Simon. Noise-tolerant learning near the information-theoretic bound. In Proceedings of the 28th ACM Symposium on the Theory of Computing, pages 141–150. ACM Press, 1996.
  3. A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. Information and Computation, 82(3):247–261, 1989.
  4. M. Kearns and R.E. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48(3):464–497, 1994. An extended abstract appeared in the Proceedings of the 30th Annual Symposium on the Foundations of Computer Science.
  5. M.J. Kearns, R.E. Schapire, and L. Sellie. Toward efficient agnostic learning. Machine Learning, 17(2):115–141, 1994.
  6. M.J. Kearns and M. Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807–837, 1993. A preliminary version appeared in the Proceedings of the 20th ACM Symposium on the Theory of Computing.
  7. P.D. Laird. Learning from Good and Bad Data. Kluwer, 1988.
  8. H.U. Simon. General bounds on the number of examples needed for learning probabilistic concepts. Journal of Computer and System Sciences, 52:239–254, 1996.
  9. L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
  10. V.N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Nicolò Cesa-Bianchi (1)
  • Paul Fischer (2)
  • Eli Shamir (3)
  • Hans Ulrich Simon (2)
  1. DSI, Università di Milano, Milano, Italy
  2. Lehrstuhl Informatik II, Universität Dortmund, Dortmund, Germany
  3. Hebrew University, Jerusalem, Israel
