Refutably probably approximately correct learning

  • Satoshi Matsumoto
  • Ayumi Shinohara
Algorithmic Learning Theory: Selected Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 872)


We propose a notion of refutably PAC learning, which formalizes the refutability of hypothesis spaces in the PAC learning model. Intuitively, refutably PAC learning of a concept class F requires that the learning algorithm refute F with high probability whenever the target concept cannot be approximated by any concept in F with respect to the underlying probability distribution. We give a general upper bound of O((1/ε + 1/ε′) ln(|F[n]|/δ)) on the number of examples required for refutably PAC learning of F, where ε and δ are the standard accuracy and confidence parameters and ε′ is the refutation accuracy. Furthermore, we define strongly refutably PAC learning by introducing a refutation threshold, and prove a general upper bound of O((1/ε² + 1/ε′²) ln(|F[n]|/δ)) on the number of examples required for strongly refutably PAC learning of F. These upper bounds show that both refutable learnability and strong refutable learnability are equivalent to standard learnability under the polynomial-size restriction. We also define polynomial-time refutable learnability of a concept class, and characterize it.
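As a rough illustration of the protocol the abstract describes (and not the authors' algorithm), the Python sketch below learns a finite class F[n] by empirical error minimization and refutes the class when even the best hypothesis looks bad on the sample. The sample-size constant, the midpoint refutation threshold (ε + ε′)/2, and all identifiers are assumptions made for this example.

```python
# Illustrative sketch of strongly refutable PAC learning for a finite
# hypothesis class, via empirical error minimization with a refutation
# threshold.  Constants and the threshold are assumptions, not the paper's.
import math


def sample_size(eps, eps_ref, delta, class_size):
    # O((1/eps^2 + 1/eps_ref^2) * ln(|F[n]| / delta)) examples, matching
    # the shape of the paper's upper bound for strongly refutable PAC
    # learning; the leading constant 2 is chosen arbitrarily here.
    return math.ceil(2 * (1 / eps**2 + 1 / eps_ref**2)
                     * math.log(class_size / delta))


def learn_or_refute(hypotheses, oracle, eps, eps_ref, delta):
    """hypotheses: finite list of functions X -> {0, 1} (the class F[n]).
    oracle(): returns one labeled example (x, y) drawn from the target
    distribution.  Returns a hypothesis, or None to refute the class."""
    m = sample_size(eps, eps_ref, delta, len(hypotheses))
    sample = [oracle() for _ in range(m)]

    def err(h):
        # empirical error of h on the sample
        return sum(h(x) != y for x, y in sample) / m

    best = min(hypotheses, key=err)
    # Refutation threshold: if even the best hypothesis is empirically
    # bad, refute the hypothesis space instead of outputting a guess.
    if err(best) > (eps + eps_ref) / 2:
        return None  # refute F
    return best
```

Refuting only when the best empirical error exceeds a point strictly between the two accuracy parameters mirrors the role of the refutation threshold: with a sample of the size above, an approximable target is unlikely to trip the threshold, while an unapproximable one is unlikely to slip under it.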



Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Satoshi Matsumoto
  • Ayumi Shinohara

  Research Institute of Fundamental Information Science, Kyushu University 33, Fukuoka, Japan
