Logical Minimisation of Meta-Rules Within Meta-Interpretive Learning
Meta-Interpretive Learning (MIL) is an ILP technique that uses higher-order meta-rules to support predicate invention and the learning of recursive definitions. In MIL the selection of meta-rules is analogous to the choice of refinement operators in a refinement-graph search: the meta-rules determine the structure of permissible rules, which in turn defines the hypothesis space. However, the hypothesis space grows rapidly with the number of meta-rules, and methods for reducing the set of meta-rules have so far not been explored within MIL. In this paper we demonstrate that irreducible, or minimal, sets of meta-rules can be found automatically by applying Plotkin’s clausal theory reduction algorithm. When this approach is applied to a set of meta-rules consisting of an enumeration of all meta-rules in a given finite hypothesis language, we show that in some cases as few as two meta-rules are complete and sufficient for generating all hypotheses. In our experiments we compare the effect of using a minimal set of meta-rules against randomly chosen subsets of the maximal set of meta-rules. In general the minimal set of meta-rules leads to lower runtimes and higher predictive accuracies than larger randomly selected sets of meta-rules.
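The core operation described above can be illustrated with a small sketch. The following is not the paper’s implementation: it uses θ-subsumption between clauses as a sound (though incomplete) redundancy test, and the encoding of meta-rules over a single higher-order predicate `m`, along with the helper names, is illustrative only.

```python
# Sketch of Plotkin-style clausal theory reduction via theta-subsumption.
# A literal is a (sign, predicate, args) tuple; uppercase strings are variables.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def subsumes(c, d):
    """True if clause c theta-subsumes clause d, i.e. some substitution theta
    maps every literal of c onto a literal of d. Theta-subsumption implies
    entailment, so it is a sound test that d is redundant given c."""
    def search(lits, theta):
        if not lits:
            return True
        sign, name, args = lits[0]
        for dsign, dname, dargs in d:
            if (dsign, dname, len(dargs)) != (sign, name, len(args)):
                continue
            new, ok = dict(theta), True
            for a, b in zip(args, dargs):
                if is_var(a):
                    if new.setdefault(a, b) != b:  # conflicting binding
                        ok = False
                        break
                elif a != b:  # constant mismatch
                    ok = False
                    break
            if ok and search(lits[1:], new):
                return True
        return False
    return search(list(c), {})

def reduce_theory(theory):
    """Repeatedly remove any clause subsumed by another clause, discarding
    longer clauses first; of two equivalent clauses one representative
    survives."""
    kept = sorted(theory, key=len, reverse=True)
    changed = True
    while changed:
        changed = False
        for i, clause in enumerate(kept):
            rest = kept[:i] + kept[i + 1:]
            if any(subsumes(other, clause) for other in rest):
                del kept[i]
                changed = True
                break
    return kept

# Meta-rules encoded as clauses over a higher-order predicate "m":
# identity:  P(X,Y) :- Q(X,Y)
# chain:     P(X,Y) :- Q(X,Z), R(Z,Y)
# conj:      P(X,Y) :- Q(X,Y), R(X,Y)   (redundant: identity subsumes it via R -> Q)
identity = [("+", "m", ("P", "X", "Y")), ("-", "m", ("Q", "X", "Y"))]
chain = [("+", "m", ("P", "X", "Y")), ("-", "m", ("Q", "X", "Z")), ("-", "m", ("R", "Z", "Y"))]
conj = [("+", "m", ("P", "X", "Y")), ("-", "m", ("Q", "X", "Y")), ("-", "m", ("R", "X", "Y"))]

print(len(reduce_theory([chain, identity, conj])))  # prints 2: conj is dropped
```

Because θ-subsumption is only a sufficient condition for entailment, a full reduction in the sense used in the paper would replace `subsumes` with a (resolution-based) entailment check of each clause against the rest of the theory.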
Keywords: Logic Program · Inductive Logic Programming · Learning Time · Hypothesis Space · High Predictive Accuracy
The first author acknowledges the support of the BBSRC and Syngenta in funding his PhD CASE studentship. The second author would like to thank the Royal Academy of Engineering and Syngenta for funding his present five-year Research Chair.