ALT 1993: Algorithmic Learning Theory, pp. 265–278

On the sample complexity of consistent learning with one-sided error

  • Eiji Takimoto
  • Akira Maruoka
Selected Papers: Approximate Learning
Part of the Lecture Notes in Computer Science book series (LNCS, volume 744)

Abstract

Although consistent learning is sufficient for PAC-learning, it is not known what strategy makes learning more efficient, especially with respect to the sample complexity, i.e., the number of examples required. As a first step toward this problem, only classes that have consistent learning algorithms with one-sided error are considered. A combinatorial quantity called the maximal particle set is introduced, and an upper bound on the sample complexity of consistent learning with one-sided error is obtained in terms of maximal particle sets. For the class of n-dimensional axis-parallel rectangles, one of the classes that are consistently learnable with one-sided error, the cardinality of the maximal particle set is estimated, and an O(d/ε + (1/ε) log(1/δ)) upper bound on the sample complexity of the learning algorithm for the class is obtained. This bound improves the bounds due to Blumer et al. [2] and meets the lower bound to within a constant factor.
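For context: the generic bound of Blumer et al. [2] for consistent learners is O((d/ε) log(1/ε) + (1/ε) log(1/δ)), while the lower bound of Ehrenfeucht et al. [3] is Ω(d/ε + (1/ε) log(1/δ)); the bound stated above removes the log(1/ε) factor for this class. The paper itself is not reproduced on this page, but the canonical consistent learner with one-sided error for axis-parallel rectangles is the tightest-fit rule: output the smallest rectangle enclosing all positive examples. Since that hypothesis is always contained in the target rectangle, its errors are one-sided (it may reject positive points, but never accepts a negative one). The following Python sketch illustrates this rule only; it is not taken from the paper, and the function names are our own.

```python
import random

def tightest_fit_learner(sample):
    """Return the smallest axis-parallel rectangle (lower, upper)
    enclosing all positive examples in the sample."""
    positives = [x for x, label in sample if label]
    if not positives:
        return None  # empty hypothesis: classify every point negative
    dim = len(positives[0])
    lower = tuple(min(p[i] for p in positives) for i in range(dim))
    upper = tuple(max(p[i] for p in positives) for i in range(dim))
    return lower, upper

def classify(rect, x):
    """Membership test for a rectangle hypothesis (lower, upper)."""
    if rect is None:
        return False
    lower, upper = rect
    return all(lo <= xi <= hi for xi, lo, hi in zip(x, lower, upper))

# Illustrative run with a hypothetical 2-dimensional target rectangle.
target = ((0.2, 0.3), (0.7, 0.9))
points = [(random.random(), random.random()) for _ in range(100)]
sample = [(x, classify(target, x)) for x in points]
h = tightest_fit_learner(sample)

# h is consistent with every training example, and because h is
# contained in the target, any error on a fresh point is a false
# negative: the one-sided-error property studied in the paper.
assert all(classify(h, x) == label for x, label in sample)
```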

Keywords

Sample Complexity, Concept Class, Target Class, Target Concept, Learning Function


References

  1. M. Anthony, N. Biggs, and J. Shawe-Taylor. The learnability of formal concepts. In Proceedings of the 3rd Workshop on Computational Learning Theory, pages 246–257, 1990.
  2. A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36(4):929–965, Aug. 1989.
  3. A. Ehrenfeucht, D. Haussler, M. Kearns, and L. G. Valiant. A general lower bound on the number of examples needed for learning. In Proceedings of the 1988 Workshop on Computational Learning Theory, pages 110–120, 1988.
  4. D. Haussler, N. Littlestone, and M. Warmuth. Predicting {0,1}-functions on randomly drawn points. In Proceedings of the 29th Annual IEEE Symposium on Foundations of Computer Science, pages 100–109. IEEE, 1988.
  5. E. Maeda. Private communication.
  6. B. K. Natarajan. Machine Learning: A Theoretical Approach. Morgan Kaufmann, San Mateo, 1991.
  7. N. Pippenger. Information theory and the complexity of Boolean functions. Mathematical Systems Theory, 10:129–167, 1977.

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Eiji Takimoto (1)
  • Akira Maruoka (1)

  1. Graduate School of Information Sciences, Tohoku University, Sendai, Japan
