PAC-learning logic programs under the closed-world assumption

  • Luc De Raedt
Communications Session 6B: Logic for Artificial Intelligence
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1079)


Positive PAC-learning results are presented for the normal inductive logic programming setting. In our formalisation of this setting, examples are complete models of the target theory, which corresponds to making the closed-world assumption. The results obtained here are the first PAC-learning results for ILP in the normal setting that allow for recursive theories as well as multiple predicates.
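The formulation "examples are complete models" can be illustrated with a small sketch. In this view, an example is a Herbrand interpretation listing exactly the true ground atoms; the closed-world assumption makes every unlisted atom false, so a candidate clause can be checked directly against a single example. The predicate names, constants, and helper functions below are illustrative, not taken from the paper:

```python
# A minimal sketch (not from the paper) of examples as complete models:
# an interpretation is a finite set of ground atoms, taken to be exactly
# the true facts. Under the closed-world assumption, every atom not in
# the set is false. All names here are hypothetical.

def holds(model, atom):
    """Closed-world truth: an atom is true iff it occurs in the model."""
    return atom in model

def violates_clause(model, consts):
    """Check the clause grandparent(X,Z) :- parent(X,Y), parent(Y,Z):
    the model violates it if some ground instance makes the body true
    but the head false."""
    for x in consts:
        for y in consts:
            for z in consts:
                body = (holds(model, ("parent", (x, y)))
                        and holds(model, ("parent", (y, z))))
                if body and not holds(model, ("grandparent", (x, z))):
                    return True
    return False

# A positive example: a complete model that satisfies the target clause.
model = {
    ("parent", ("ann", "bob")),
    ("parent", ("bob", "carl")),
    ("grandparent", ("ann", "carl")),
}
consts = {"ann", "bob", "carl"}
print(violates_clause(model, consts))  # False: this model satisfies the clause
```

Because each example is a complete model, clause satisfaction is decidable by enumeration over the example alone; no background theory or membership queries are needed to evaluate a hypothesis on an example.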





Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Luc De Raedt
  1. Department of Computer Science, Katholieke Universiteit Leuven, Heverlee, Belgium
