Logical Minimisation of Meta-Rules Within Meta-Interpretive Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9046)

Abstract

Meta-Interpretive Learning (MIL) is an ILP technique which uses higher-order meta-rules to support predicate invention and the learning of recursive definitions. In MIL the selection of meta-rules is analogous to the choice of refinement operators in a refinement graph search. The meta-rules determine the structure of permissible rules, which in turn defines the hypothesis space. Since the hypothesis space can be shown to grow rapidly in the number of meta-rules, reducing the set of meta-rules is desirable; however, methods for doing so have so far not been explored within MIL. In this paper we demonstrate that irreducible, or minimal, sets of meta-rules can be found automatically by applying Plotkin’s clausal theory reduction algorithm. When this approach is applied to a set of meta-rules consisting of an enumeration of all meta-rules in a given finite hypothesis language, we show that in some cases as few as two meta-rules are complete and sufficient for generating all hypotheses. In our experiments we compare the effect of using a minimal set of meta-rules against randomly chosen subsets of the maximal set of meta-rules. In general, the minimal set of meta-rules leads to lower runtimes and higher predictive accuracies than larger randomly selected sets of meta-rules.
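The reduction step described above can be illustrated with a sketch (not the authors' implementation): meta-rules are encoded as clauses, i.e. sets of literals, with body literals marked by a negated predicate symbol, and a clause is dropped when another clause in the theory θ-subsumes it. θ-subsumption is used here as a simple sound test for redundancy; Plotkin's full reduction algorithm works with entailment, of which subsumption is a special case. All names and the clause encoding below are illustrative assumptions.

```python
def is_var(t):
    # Convention assumed for this sketch: uppercase strings are variables.
    return isinstance(t, str) and t[0].isupper()

def subsumes(d, c):
    """True if clause d theta-subsumes clause c, i.e. there exists a
    substitution theta with d*theta a subset of c. Clauses are lists of
    (predicate, args) literals; body literals can be encoded by a
    negated predicate symbol such as '-q'."""
    def match(lits, theta):
        if not lits:
            return True                     # all literals of d placed in c
        (pred, args), rest = lits[0], lits[1:]
        for cpred, cargs in c:              # try every literal of c as a target
            if cpred != pred or len(cargs) != len(args):
                continue
            new, ok = dict(theta), True
            for a, b in zip(args, cargs):
                if is_var(a):
                    if a in new and new[a] != b:
                        ok = False          # variable already bound differently
                        break
                    new[a] = b
                elif a != b:
                    ok = False              # constant mismatch
                    break
            if ok and match(rest, new):
                return True
        return False
    return match(list(d), {})

def reduce_theory(clauses):
    """Plotkin-style reduction sketch: repeatedly drop any clause that is
    subsumed by another remaining clause, until no clause is redundant."""
    reduced = list(clauses)
    changed = True
    while changed:
        changed = False
        for c in reduced:
            if any(subsumes(d, c) for d in reduced if d is not c):
                reduced.remove(c)
                changed = True
                break
    return reduced
```

For example, the clause `p(X,Y)` subsumes `p(X,Y) :- q(X,X)` (encoded as `[('p',('X','Y')), ('-q',('X','X'))]`), so reduction keeps only the more general clause.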

Keywords

Logic Program · Inductive Logic Programming · Learning Time · Hypothesis Space · High Predictive Accuracy

Notes

Acknowledgements

The first author acknowledges the support of the BBSRC and Syngenta in funding his PhD CASE studentship. The second author would like to thank the Royal Academy of Engineering and Syngenta for funding his present five-year Research Chair.

References

  1. Muggleton, S.H., Fidjeland, A., Luk, W.: Scalable acceleration of inductive logic programs. In: IEEE International Conference on Field-Programmable Technology, pp. 252–259. IEEE (2002)
  2. Blockeel, H., Dehaspe, L., Demoen, B., Janssens, G., Ramon, J., Vandecasteele, H.: Improving the efficiency of inductive logic programming through the use of query packs. J. Artif. Intell. Res. 16(1), 135–166 (2002)
  3. Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.K.: Learnability and the Vapnik-Chervonenkis dimension. J. ACM 36(4), 929–965 (1989)
  4. Carlson, A., Betteridge, J., Kisiel, B., Settles, B., Hruschka Jr., E.R., Mitchell, T.M.: Toward an architecture for never-ending language learning. In: Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010) (2010)
  5. Cohen, W.: Grammatically biased learning: learning logic programs using an explicit antecedent description language. Artif. Intell. 68, 303–366 (1994)
  6. De Raedt, L.: Declarative modeling for machine learning and data mining. In: Bshouty, N.H., Stoltz, G., Vayatis, N., Zeugmann, T. (eds.) ALT 2012. LNCS, vol. 7568, pp. 12–12. Springer, Heidelberg (2012)
  7. Hinton, G.E.: Learning distributed representations of concepts. Artif. Intell. 40, 1–12 (1986)
  8. Lin, D., Dechter, E., Ellis, K., Tenenbaum, J.B., Muggleton, S.H.: Bias reformulation for one-shot function induction. In: Proceedings of the 23rd European Conference on Artificial Intelligence (ECAI 2014), pp. 525–530. IOS Press, Amsterdam (2014)
  9. Muggleton, S.H.: Inverse entailment and Progol. New Gener. Comput. 13, 245–286 (1995)
  10. Muggleton, S.H., Lin, D.: Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited. In: Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 1551–1557 (2013)
  11. Muggleton, S.H., Lin, D., Pahlavi, N., Tamaddoni-Nezhad, A.: Meta-interpretive learning: application to grammatical inference. Mach. Learn. 94, 25–49 (2014)
  12. Muggleton, S.H., Lin, D., Tamaddoni-Nezhad, A.: Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited. Mach. Learn. 100(1), 49–73 (2015)
  13. Nienhuys-Cheng, S.-H., de Wolf, R.: Foundations of Inductive Logic Programming. LNCS (LNAI), vol. 1228. Springer, Heidelberg (1997)
  14. Plotkin, G.D.: Automatic Methods of Inductive Inference. PhD thesis, Edinburgh University (1971)
  15. Shapiro, E.Y.: Algorithmic Program Debugging. MIT Press, Cambridge (1983)
  16. Srinivasan, A.: A study of two probabilistic methods for searching large spaces with ILP. Technical report PRG-TR-16-00, Oxford University Computing Laboratory, Oxford (2000)
  17. Srinivasan, A.: The ALEPH Manual. Machine Learning at the Computing Laboratory, Oxford University (2001)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computing, Imperial College London, London, UK
