Sample PAC-learnability in model inference

  • S. H. Nienhuys-Cheng
  • M. Polman
Regular Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 784)

Abstract

In this article, PAC-learning theory is applied to model inference, which concerns the problem of inferring theories from facts in first-order logic. It is argued that uniform sample PAC-learnability cannot be expected for most of the ‘interesting’ model classes. Polynomial sample learnability can only be achieved for classes of programs with a fixed maximum number of clauses. We prove that the class of context-free programs with a fixed maximum number of clauses, each containing a fixed maximum number of literals, is learnable from a polynomial number of examples. The same result is then proved for a more general class of programs.
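The polynomial-sample claim above can be made concrete with the classic sample-size bound for a finite hypothesis class, m ≥ (1/ε)(ln |H| + ln(1/δ)), due to standard PAC theory rather than taken from this paper. The sketch below is illustrative only: the function name and the counting argument in the comment (bounding the number of programs with at most c clauses of at most l literals over n candidate literals) are assumptions made for the example, not the authors' construction.

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Classic bound for a finite hypothesis class H: with
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples, any consistent
    learner is (epsilon, delta)-PAC."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# A class with at most c clauses of at most l literals, each drawn from
# n candidate literals, has at most (n**l)**c programs, so
# ln|H| <= c * l * ln(n): polynomial in c and l, hence a polynomial
# number of examples suffices once c and l are fixed.
c, l, n = 4, 3, 10          # illustrative fixed bounds
H = (n ** l) ** c           # crude upper bound on the class size
m = pac_sample_bound(H, epsilon=0.1, delta=0.05)
```

The point of the sketch is only that fixing the maximum number of clauses and literals keeps ln |H|, and therefore the required sample size, polynomial; letting the number of clauses grow makes |H| explode and the bound with it.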

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • S. H. Nienhuys-Cheng (1)
  • M. Polman (1, 2)
  1. Dept. of Computer Science, Erasmus University Rotterdam, DR Rotterdam, The Netherlands
  2. Tinbergen Institute, The Netherlands