Some lower bounds for the computational complexity of inductive logic programming

  • Jörg-Uwe Kietz
Research Papers: Inductive Logic Programming
Part of the Lecture Notes in Computer Science book series (LNCS, volume 667)


The field of Inductive Logic Programming (ILP), which is concerned with the induction of Horn clauses from examples and background knowledge, has received increased attention in recent years. Recently, some positive results concerning the learnability of restricted classes of logic programs have been published. In this paper we review these restrictions and prove some lower bounds on the computational complexity of learning. In particular, we show that a learning algorithm for i2-determinate Horn clauses (with variable i) could be used to decide the PSPACE-complete Finite State Automata Intersection problem, and that a learning algorithm for 12-nondeterminate Horn clauses could be used to decide the NP-complete Boolean Clause Satisfiability problem (SAT). This also shows that these Horn clauses are not PAC-learnable unless RP = NP = PSPACE.
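To make the restriction classes mentioned above concrete, the following minimal Prolog sketch (my own illustration, not a construction from the paper) contrasts a determinate literal, whose new variable receives at most one binding once the head variables are bound, with a nondeterminate one. The predicates father/2 and parent/2 and all facts are hypothetical.

```prolog
% Hypothetical background knowledge (illustration only, not from the paper).
father(adam, bob).     % father(F, C): F is the father of C.
father(bob, carl).     % Each person has at most one father, so father/2
father(bob, diana).    % is functional in its second argument.
parent(adam, bob).
parent(bob, carl).
parent(bob, diana).    % parent/2 is not functional: bob has two children.

% Determinate clause: once X is bound by an example, father(F, X) binds F
% to at most one value, and father(G, F) then binds G to at most one value.
grandfather(G, X) :- father(F, X), father(G, F).

% Nondeterminate clause: parent(X, C) can bind the new variable C to
% several values (e.g. X = bob gives C = carl and C = diana).
grandparent_of_someone(X) :- parent(X, C), parent(C, _).
```

The lower bounds in the paper concern exactly this distinction: learners restricted to determinate clauses (with unbounded depth i) or allowed nondeterminate literals face the hardness results stated in the abstract.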


Keywords: Inductive Logic Programming, PAC-Learning



Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Jörg-Uwe Kietz, German National Research Center for Computer Science, St. Augustin, Germany
