Learnability with restricted focus of attention guarantees noise-tolerance

  • Shai Ben-David
  • Eli Dichterman
Algorithmic Learning Theory (Selected Papers)
Part of the Lecture Notes in Computer Science book series (LNCS, volume 872)


Abstract

We consider the question of learning in the presence of classification noise. More specifically, we address the problem of identifying conditions under which a learning algorithm can be transformed into a noise-tolerant one.
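For concreteness, in the classification noise model of [AL88] each call to the example oracle returns a correctly drawn instance whose true label has been flipped independently with probability η < 1/2. The following is a minimal illustrative sketch in Python; the oracle and its argument names are ours, not the paper's.

```python
import random

def noisy_example_oracle(target, draw_x, eta):
    """One call to the noisy example oracle EX_eta(f, D):
    draw x from the instance distribution, then flip the true
    label f(x) independently with probability eta < 1/2."""
    x = draw_x()            # x ~ D (the unknown instance distribution)
    label = target(x)       # true label f(x)
    if random.random() < eta:
        label = 1 - label   # classification noise: flip with prob. eta
    return x, label
```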

While the question of whether every PAC learning algorithm can be made noise-tolerant remains open, the bottom line of this work, loosely stated, is that any restriction on the amount of data an algorithm is allowed to retrieve from its input samples suffices to guarantee a noise-tolerant variant that is efficient whenever the original algorithm is. The result is obtained by proving that such restricted learning is equivalent to learning from statistical queries, and by applying Kearns' transformation from statistical-query learning to noise-tolerant learning [Kea93], whose key step is sketched below.
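The noise-inversion step at the heart of Kearns' transformation [Kea93] is short enough to sketch. For a query predicate χ(x, ℓ) and noisy label ℓ', one has E[χ(x, ℓ')] = (1 - 2η)·P_χ + η·E[χ(x, 0) + χ(x, 1)], where P_χ = Pr[χ(x, f(x)) = 1] is the quantity the statistical query asks for; both expectations are estimable from noisy examples alone, so P_χ can be solved for. The Python below is an illustrative sketch, not code from the paper: the function and argument names are ours, and it assumes the noise rate η is known, an assumption [Kea93] lifts by searching over a small grid of candidate rates.

```python
def simulate_sq(chi, noisy_sample, eta, m):
    """Answer the statistical query chi from m classification-noise
    examples, following the inversion in Kearns' transformation [Kea93].

    chi(x, l) -> 0/1 is the query predicate, noisy_sample() returns a
    pair (x, noisy_label), and eta < 1/2 is the (assumed known) noise rate."""
    noisy_avg = 0.0   # estimates Pr[chi(x, noisy_label) = 1]
    both_avg = 0.0    # estimates E_x[chi(x, 0) + chi(x, 1)]; label-independent
    for _ in range(m):
        x, l = noisy_sample()
        noisy_avg += chi(x, l)
        both_avg += chi(x, 0) + chi(x, 1)
    noisy_avg /= m
    both_avg /= m
    # E[chi(x, noisy_label)] = (1 - 2*eta) * P_chi + eta * E[chi(x,0) + chi(x,1)],
    # so solve for P_chi = Pr[chi(x, f(x)) = 1]:
    return (noisy_avg - eta * both_avg) / (1 - 2 * eta)
```

Note the 1 - 2η denominator: as η approaches 1/2 the inversion degrades, which is why the sample size (and hence the running time) of the resulting noise-tolerant variant scales polynomially in 1/(1 - 2η).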



References

  1. [AL88] Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
  2. [BC92] Avrim Blum and Prasad Chalasani. Learning switching concepts. In 5th COLT, pages 231–242, 1992.
  3. [BDCGL92] Shai Ben-David, Benny Chor, Oded Goldreich, and Michael Luby. On the theory of average case complexity. Journal of Computer and System Sciences, 44(2):193–219, 1992.
  4. [BDD93] Shai Ben-David and Eli Dichterman. Learning with restricted focus of attention. In 6th COLT, pages 287–296, 1993.
  5. [BI88] Gyora M. Benedek and Alon Itai. Learnability by fixed distributions. In 1st COLT, pages 80–90, August 1988.
  6. [FJS91] Merrick L. Furst, Jeffrey C. Jackson, and Sean W. Smith. Improved learning of AC0 functions. In 4th COLT, pages 317–325, August 1991.
  7. [Kea93] Michael J. Kearns. Efficient noise-tolerant learning from statistical queries. In 25th STOC, pages 392–401, May 1993.
  8. [KL88] Michael J. Kearns and Ming Li. Learning in the presence of malicious errors. In 20th STOC, pages 267–280, May 1988.
  9. [KS90] Michael J. Kearns and Robert E. Schapire. Efficient distribution-free learning of probabilistic concepts. In 31st FOCS, pages 382–391, 1990.
  10. [KSS92] Michael J. Kearns, Robert E. Schapire, and Linda M. Sellie. Towards efficient agnostic learning. In 5th COLT, pages 341–352, 1992.
  11. [Lai87] Philip D. Laird. Learning from good and bad data. Technical Report YALEU/DCS/TR-551, Yale University, 1987. Ph.D. dissertation.
  12. [LMN89] Nathan Linial, Yishai Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. In 30th FOCS, pages 574–579, 1989.
  13. [Slo88] Robert H. Sloan. Types of noise in data for concept learning. In 1st COLT, pages 91–96, 1988.
  14. [Val84] L. G. Valiant. A theory of the learnable. CACM, 27(11):1134–1142, 1984.

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Shai Ben-David (1)
  • Eli Dichterman (1)

  1. Department of Computer Science, Technion - Israel Institute of Technology, Haifa, Israel
