Logical reduction of metarules

Many forms of inductive logic programming (ILP) use \emph{metarules}, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called \emph{derivation reduction}, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets, both in terms of predictive accuracy and learning time.


P(A, B) ← Q(A, C), R(C, B)
In this metarule (the chain metarule) the letters P, Q, and R denote existentially quantified second-order variables (variables that can be bound to predicate symbols) and the letters A, B, and C denote universally quantified first-order variables (variables that can be bound to constant symbols). Given the chain metarule, the background parent/2 relation, and examples of the grandparent/2 relation, ILP approaches will try to find suitable substitutions for the existentially quantified second-order variables, such as the substitutions {P/grandparent, Q/parent, R/parent}, to induce the theory:

grandparent(A, B) ← parent(A, C), parent(C, B)
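The instantiation step can be sketched in a few lines. The following is a minimal illustration (our own tuple-based clause representation and helper names, not part of any ILP system) of how a meta-substitution maps the chain metarule to the first-order clause above:

```python
# Chain metarule as (head, body); uppercase first elements are second-order
# variables, remaining elements are first-order variables.
CHAIN = (("P", "A", "B"), [("Q", "A", "C"), ("R", "C", "B")])

def apply_meta_substitution(metarule, theta):
    """Replace each second-order variable with its bound predicate symbol."""
    head, body = metarule
    sub = lambda lit: (theta.get(lit[0], lit[0]),) + lit[1:]
    return (sub(head), [sub(lit) for lit in body])

def show(clause):
    head, body = clause
    fmt = lambda lit: f"{lit[0]}({', '.join(lit[1:])})"
    return f"{fmt(head)} :- {', '.join(fmt(l) for l in body)}"

theta = {"P": "grandparent", "Q": "parent", "R": "parent"}
print(show(apply_meta_substitution(CHAIN, theta)))
# prints: grandparent(A, B) :- parent(A, C), parent(C, B)
```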
However, despite the widespread use of metarules, there is little work determining which metarules to use for a given learning task. Instead, suitable metarules are assumed to be given as part of the background knowledge, and are often used without any theoretical justification. Deciding which metarules to use for a given learning task is a major open challenge [8,10] and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules [10,38], so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. For instance, it is impossible to learn the grandparent/2 relation using only metarules with monadic predicates.
In this paper, we study whether potentially infinite fragments of metarules can be logically reduced to minimal, or irreducible, finite subsets, where a fragment is a syntactically restricted subset of a logical theory [4].
Cropper and Muggleton [10] first studied this problem. They used Progol's entailment reduction algorithm [45] to identify entailment reduced sets of metarules, where a clause C is entailment redundant in a clausal theory T ∪ {C} when T |= C. To illustrate entailment redundancy, consider the following first-order clausal theory T1, where p, q, r, and s are first-order predicates:

C1 = p(A,B) ← q(A,B)
C2 = p(A,B) ← q(A,B), r(A,B)
C3 = p(A,B) ← q(A,B), r(A,B), s(A,B)

In T1 the clauses C2 and C3 are entailment redundant because they are both logical consequences of C1, i.e. {C1} |= {C2, C3}. Because {C1} cannot be reduced, it is a minimal entailment reduction of T1.
Cropper and Muggleton showed that in some cases as few as two metarules are sufficient to entail an infinite fragment of chained second-order dyadic Datalog [10]. They also showed that learning with minimal sets of metarules improves predictive accuracies and reduces learning times compared to non-minimal sets. To illustrate how a finite subset of metarules could entail an infinite set, consider the infinite set of metarules with only monadic literals and a single first-order variable A:

P(A) ← Q(A)
P(A) ← Q(A), R(A)
P(A) ← Q(A), R(A), S(A)
...

The first metarule subsumes, and thus entails, every other metarule in this set, so this infinite set can be entailment reduced to a finite subset.

Contributions
In the rest of this paper, we study whether fragments of metarules relevant to ILP can be logically reduced to minimal finite subsets. We study three forms of reduction: subsumption [55], entailment [45], and derivation. We also study how learning with reduced sets of metarules affects learning performance. To do so, we supply Metagol [13], a meta-interpretive learning (MIL) [12,48,49] implementation, with different reduced sets of metarules and measure the resulting learning performance on three domains: Michalski trains [35], string transformations, and game rules [9]. In general, using derivation-reduced sets of metarules outperforms using subsumption- and entailment-reduced sets, both in terms of predictive accuracies and learning times. Overall, our specific contributions are:
-We describe the logical reduction problem (Section 3).
-We describe subsumption and entailment reduction, and introduce derivation reduction, the problem of removing derivationally redundant clauses from a clausal theory (Section 3).
-We study the decidability of the three reduction problems and show, for instance, that the derivation reduction problem is undecidable for arbitrary Horn theories (Section 3).
-We introduce two general reduction algorithms that take a reduction relation as a parameter. We also study their complexity (Section 4).
-We run the reduction algorithms on finite sets of metarules to identify minimal sets (Section 5).
-We theoretically show whether infinite fragments of metarules can be logically reduced to finite sets (Section 5).
-We experimentally compare the learning performance of Metagol when supplied with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules (Section 6).

Related work
This section describes work related to this paper, mostly work on logical reduction techniques. We first, however, describe work related to MIL and metarules.

Meta-interpretive learning
Although the study of metarules has implications for many ILP approaches [1, 5, 14, 19-21, 32, 33, 49, 54, 58, 62], we focus on meta-interpretive learning (MIL), a form of ILP based on a Prolog meta-interpreter. The key difference between a MIL learner and a standard Prolog meta-interpreter is that whereas a standard Prolog meta-interpreter attempts to prove a goal by repeatedly fetching first-order clauses whose heads unify with a given goal, a MIL learner additionally attempts to prove a goal by fetching second-order metarules, supplied as background knowledge (BK), whose heads unify with the goal. The resulting meta-substitutions are saved and can be reused in later proofs. Following the proof of a set of goals, a logic program is formed by projecting the meta-substitutions onto their corresponding metarules, allowing for a form of ILP which supports predicate invention and learning recursive theories. Most existing work on MIL has assumed suitable metarules as input to the problem, or has used metarules without any theoretical justification. In this paper, we try to address this issue by identifying minimal sets of metarules for interesting fragments of logic, such as Datalog, from which a MIL system can theoretically learn any logic program.

Metarules
McCarthy [43] and Lloyd [40] advocated using second-order logic to represent knowledge. Similarly, Muggleton et al. [47] argued that using second-order representations in ILP provides more flexible ways of representing BK compared to existing methods. Metarules are second-order Horn clauses and are used as a form of declarative bias [50,53] to determine the structure of learnable programs which in turn defines the hypothesis space. In contrast to other forms of declarative bias, such as modes [45] or grammars [7], metarules are logical statements that can be reasoned about, such as to reason about the redundancy of sets of metarules, which we explore in this paper.
Metarules were introduced in the Blip system [19]. Kietz and Wrobel [33] studied generality measures for metarules in the RDT system. A generality order is necessary because the RDT system searches the hypothesis space (which is defined by the metarules) in a top-down general-to-specific order. A key difference between RDT and MIL is that whereas RDT requires metarules of increasing complexity (e.g. rules with an increasing number of literals in the body), MIL derives more complex metarules through SLD-resolution. This point is important because this ability allows MIL to start from smaller sets of primitive metarules. In this paper we try to identify such primitive sets.
Using metarules to build a logic program is similar to the use of refinement operators in ILP [51,57] to build a definite clause literal-by-literal. As with refinement operators, it seems reasonable to ask about completeness and irredundancy of a set of metarules, which we explore in this paper.

Logical redundancy
Detecting and eliminating redundancy in a clausal theory is useful in many areas of computer science. In ILP logically reducing a theory is useful to remove redundancy from a hypothesis space to improve learning performance [10,22]. In general, simplifying or reducing a theory often makes a theory easier to understand and use, and may also have computational efficiency advantages.

Literal redundancy
Plotkin [52] used subsumption to decide whether a literal is redundant in a first-order clause. Joyner [31] independently investigated the same problem, which he called clause condensation, where a condensation of a clause C is a minimum cardinality subset C′ of C such that C′ |= C. Gottlob and Fermüller [26] improved Joyner's algorithm and also showed that determining whether a clause is condensed is coNP-complete. In contrast to removing redundant literals, we focus on removing redundant clauses.

Clause redundancy
Plotkin [52] introduced methods to decide whether a clause is subsumption redundant in a first-order clausal theory. This problem has also been extensively studied in the context of first-order logic with equality due to its application in superposition-based theorem proving [30,63]. The same problem, and slight variants, has been extensively studied in the propositional case [36,37]. Removing redundant clauses has numerous applications, such as to improve the efficiency of SAT [29]. In contrast to these works, we focus on reducing theories formed of second-order Horn clauses (without equality), which to our knowledge has not yet been extensively explored. Another difference is that we additionally study redundancy based on SLD-derivations.
Cropper and Muggleton [10] used Progol's entailment-reduction algorithm [45] to identify irreducible, or minimal, sets of metarules. Their approach removed entailment redundant clauses from sets of metarules. They identified theories that are (1) entailment complete for certain fragments of second-order Horn logic, and (2) minimal or irreducible in that no further reductions are possible. They demonstrated that in some cases as few as two clauses are sufficient to entail an infinite theory. However, they only considered small and highly constrained fragments of metarules. In particular, they focused on an exactly-two-connected fragment of metarules where each literal is dyadic and each first-order variable appears exactly twice in distinct literals. However, as discussed in the introduction, entailment reduction is not always the most appropriate form of reduction because it can remove metarules necessary to specialise a clause. Therefore, in this paper, we go beyond entailment reduction and introduce derivation reduction. We also consider more general fragments of metarules, such as a fragment of metarules sufficient to learn Datalog programs.
Cropper and Tourret [16] introduced the derivation reduction problem and studied whether sets of metarules could be derivationally reduced. They considered the exactly-two-connected fragment previously considered by Cropper and Muggleton and a two-connected fragment in which every variable appears at least twice, which is analogous to our singleton-free fragment (Section 5.3). They used graph theoretic methods to show that certain fragments could not be completely derivationally reduced. They demonstrated on the Michalski trains dataset that the partially derivationally reduced set of metarules outperforms the entailment reduced set. In similar work Cropper and Tourret elaborated on their graph theoretic techniques and expanded the results to unconstrained resolution [61].
In this paper, we go beyond the work of Cropper and Tourret in several ways. First, we consider more general fragments of metarules, including connected and Datalog fragments. We additionally consider fragments with zero arity literals. In all cases we provide additional theoretical results showing whether certain fragments can be reduced, and, where possible, show the actual reductions. Second, Cropper and Tourret [61] focused on derivation reduction modulo first-order variable unification, i.e. they considered the case where factorisation [51] was allowed when resolving two clauses, which is not implemented in practice in current MIL systems. For this reason, although Section 5 in [61] and Section 5.1 in the present paper seemingly consider the same problem, the results are opposite to one another. Third, in addition to entailment and derivation reduction, we also consider subsumption reduction. We provide more theoretical results on the decidability of the reduction problems, such as showing a decidable case for derivation reduction (Theorem 4). Fourth, we describe the reduction algorithms and discuss their computational complexity. Finally, we corroborate the experimental results of Cropper and Tourret on Michalski's train problem [16] and provide additional experimental results on two more domains: real-world string transformations and inducing Datalog game rules from observations.

Theory minimisation
We focus on removing clauses from a clausal theory. A related yet distinct topic is theory minimisation where the goal is to find a minimum equivalent formula to a given input formula. This topic is often studied in propositional logic [28]. The minimisation problem allows for the introduction of new clauses. By contrast, the reduction problem studied in this paper does not allow for the introduction of new clauses and instead only allows for the removal of redundant clauses.

Prime implicates
Implicates of a theory T are the clauses that are entailed by T and are called prime when they do not themselves entail other implicates of T. This notion differs from subsumption and derivation reduction because it focuses on entailment, and it differs from entailment reduction because (1) the notion of a prime implicate has been studied only in propositional, first-order, and some modal logics [2,18,42]; and (2) the generation of prime implicates allows for the introduction of new clauses into the formula.

Logical reduction
We now introduce the reduction problem: the problem of finding redundant clauses in a theory. We first describe the reduction problem starting with preliminaries, and then describe three instances of the problem. The first two instances are based on existing logical reduction methods: subsumption and entailment. The third instance is a new form of reduction introduced in [16] based on SLD-derivations.

Preliminaries
We assume familiarity with logic programming notation [39] but we restate some key terminology. A clause is a disjunction of literals. A clausal theory is a set of clauses. A Horn clause is a clause with at most one positive literal. A Horn theory is a set of Horn clauses. A definite clause is a Horn clause with exactly one positive literal. A Horn clause is a Datalog clause if (1) it contains no function symbols, and (2) every variable that appears in the head of the clause also appears in a positive (i.e. not negated) literal in the body of the clause. We denote the powerset of the set S as 2^S.

Metarules
Although the reduction problem applies to any clausal theory, we focus on theories formed of metarules:

Definition 1 (Metarule) A metarule is a second-order Horn clause of the form:

A0 ← A1, . . . , Am

where each Ai is a literal of the form P(T1, . . . , Tn) where P is either a predicate symbol or a second-order variable that can be substituted by a predicate symbol, and each Ti is either a constant symbol or a first-order variable that can be substituted by a constant symbol.

Table 1 shows a selection of metarules commonly used in the MIL literature [11,12,14,15,44]. As Definition 1 states, metarules may include predicate and constant symbols. However, we focus on the more general case where metarules only contain variables, i.e. metarules that are independent of any particular ILP problem with particular predicate and constant symbols. In addition, although metarules can be any Horn clauses, we focus on definite clauses with at least one body literal, i.e. we disallow facts, because their inclusion leads to uninteresting reductions, where in almost all such cases the theories can be reduced to a single fact (for instance, the metarule P(A) ← entails and subsumes every metarule with a monadic head). We denote the infinite set of all such metarules as M. We focus on fragments of M, where a fragment is a syntactically restricted subset of a theory [4]:

Definition 2 (The fragment M^a_m) We denote as M^a_m the fragment of M where each literal has arity at most a and each clause has at most m literals in the body. We replace a by the explicit set of arities when we restrict the allowed arities further.

Example 1
M^{2}_2 is a subset of M where each predicate has arity 2 and each clause has at most 2 body literals.

Example 2
M^{2}_m is a subset of M where each predicate has arity 2 and each clause has at most m body literals.

Example 3
M^{0,2}_m is a subset of M where each predicate has arity 0 or 2 and each clause has at most m body literals.

Example 4
M^a_{1,2} is a subset of M where each predicate has arity at most a and each clause has either 1 or 2 body literals.
Let T be a clausal theory. Then we say that T is in the fragment M^a_m if and only if each clause in T is in M^a_m.
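Membership in a fragment M^a_m is a purely syntactic check. The following sketch (our own hypothetical representation, with clauses as (head, body) pairs and literals as tuples) makes the two conditions concrete:

```python
# A clause is (head, body); a literal is (predicate, term1, ..., termN),
# so its arity is len(literal) - 1. A theory is in M^a_m iff every literal
# has arity at most a and every clause has at most m body literals.

def in_fragment(theory, a, m):
    def lit_ok(lit):
        return len(lit) - 1 <= a          # arity bound
    return all(
        lit_ok(head) and len(body) <= m and all(lit_ok(l) for l in body)
        for head, body in theory
    )

chain = (("P", "A", "B"), [("Q", "A", "C"), ("R", "C", "B")])
assert in_fragment([chain], a=2, m=2)      # chain is in M^2_2
assert not in_fragment([chain], a=1, m=2)  # but not in M^1_2
```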

Meta-interpretive learning
In Section 6 we conduct experiments to see whether using reduced sets of metarules can improve learning performance. The primary purpose of the experiments is to test our claim that entailment reduction is not always the most appropriate form of reduction. Our experiments focus on MIL. For self-containment, we briefly describe MIL.

Definition 3 (MIL input) A MIL input is a triple (B, E, M) where B is a set of Horn clauses denoting background knowledge, E = (E+, E−) is a pair of sets of ground atoms denoting positive and negative examples respectively, and M is a set of metarules.

Definition 4 (MIL problem) Given a MIL input (B, E, M), the MIL problem is to return a logic program hypothesis H such that:
-∀e ∈ E+, H ∪ B |= e
-∀e ∈ E−, H ∪ B ̸|= e
-∀c ∈ H, ∃m ∈ M such that c = mθ, where θ is a substitution that grounds all the existentially quantified variables in m
We call H a solution to the MIL problem.
The metarules and background knowledge define the hypothesis space. To explain our experimental results in Section 6, it is important to understand the effect that metarules have on the size of the MIL hypothesis space, and thus on learning performance. The following result generalises previous results [12,38]:

Theorem 1 (MIL hypothesis space) Given p predicate symbols and k metarules in M^a_m, the number of programs expressible with n clauses is at most (p^(m+1) k)^n.
Proof The number of first-order clauses which can be constructed from an M^a_m metarule given p predicate symbols is at most p^(m+1) because for a given metarule there are at most m + 1 predicate variables with at most p^(m+1) possible substitutions. Therefore the set S of clauses which can be formed from k distinct metarules in M^a_m using p predicate symbols has cardinality at most p^(m+1) k. It follows that the number of programs which can be formed from a selection of n clauses chosen from S is at most (p^(m+1) k)^n.
⊓ ⊔ Theorem 1 shows that the MIL hypothesis space increases given more metarules. The Blumer bound [3] says that given two hypothesis spaces, searching the smaller space will result in fewer errors compared to searching the larger space, assuming that the target hypothesis is in both spaces. This result suggests that we should consider removing redundant metarules to improve learning performance. We explore this idea in the rest of the paper.
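The effect of extra metarules on the Theorem 1 bound is easy to see numerically. The following sketch (with hypothetical parameter values of our own choosing) computes the bound (p^(m+1) k)^n and shows that doubling the number of metarules k multiplies it by 2^n:

```python
# Upper bound from Theorem 1 on the number of n-clause programs expressible
# with p predicate symbols and k metarules in M^a_m.

def hypothesis_space_bound(p, m, k, n):
    return (p ** (m + 1) * k) ** n

small = hypothesis_space_bound(p=10, m=2, k=5, n=3)
large = hypothesis_space_bound(p=10, m=2, k=10, n=3)
assert large == small * 2 ** 3   # doubling k scales the bound by 2^n
```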

Encapsulation
To reason about metarules (especially when running the Prolog implementations of the reduction algorithms), we use a method called encapsulation [10] to transform a second-order logic program to a first-order logic program. We first define encapsulation for atoms:

Definition 5 (Atomic encapsulation) Let A be a second-order or first-order atom of the form P(T1, .., Tn). Then enc(A) = enc(P, T1, .., Tn) is the encapsulation of A.
For instance, the encapsulation of the atom parent(ann,andy) is enc(parent,ann,andy). Note that encapsulation essentially ignores the quantification of variables in metarules by treating all variables, including predicate variables, as first-order universally quantified variables of the first-order enc predicate. In particular, replacing existential quantifiers with universal quantifiers on predicate variables is fine for our work because we only reason about the form of metarules, not their semantics, i.e. we treat metarules as templates for first-order clauses. We extend atomic encapsulation to logic programs by applying enc to every atom in a program. We now have the proposition:

Proposition 1 (Encapsulation models [10]) A second-order logic program P has a model M if and only if enc(P) has the model enc(M).
Proof Follows trivially from the definitions of encapsulated programs and interpretations.

⊓ ⊔
We can extend the definition of entailment to logic programs:

Proposition 2 (Entailment [10]) Let P and Q be second-order logic programs. Then P |= Q if and only if every model enc(M) of enc(P) is also a model of enc(Q).
Proof Follows immediately from Proposition 1.

⊓ ⊔
These results allow us to reason about metarules using standard first-order logic. In the rest of the paper all the reasoning about second-order theories is performed at the first-order level. However, to aid the readability we continue to write non-encapsulated metarules in the rest of the paper, i.e. we will continue to refer to sets of metarules as second-order theories.
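The encapsulation transformation itself is mechanical. The following is a minimal sketch (our own tuple-based atom representation and function names, not the paper's Prolog implementation) of Definition 5 extended to clauses:

```python
# An atom is a tuple (predicate_or_variable, term1, ..., termN).
# enc(P(T1, ..., Tn)) = enc(P, T1, ..., Tn): the predicate position is
# demoted to an ordinary argument of the single first-order predicate enc.

def enc(atom):
    return ("enc",) + atom

def enc_clause(clause):
    """Encapsulate a clause by encapsulating each of its atoms."""
    head, body = clause
    return (enc(head), [enc(b) for b in body])

print(enc(("parent", "ann", "andy")))
# prints: ('enc', 'parent', 'ann', 'andy')
```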

Logical reduction problem
We now describe the logical reduction problem. For the clarity of the paper, and to avoid repeating definitions for each form of reduction that we consider (entailment, subsumption, and derivability), we describe a general reduction problem which is parametrised by a binary relation ⊏ defined over any clausal theory, although in the case of derivability, ⊏ is in fact only defined over Horn clauses. Our only constraint on the relation ⊏ is that it implies entailment, i.e. if T1 ⊏ T2 then T1 |= T2. We first define a redundant clause:

Definition 8 (⊏-redundant clause) A clause C is ⊏-redundant in the clausal theory T ∪ {C} if and only if T ⊏ C.

In a slight abuse of notation, we allow Definition 8 to also refer to a single clause, i.e. in our notation T ⊏ C is the same as T ⊏ {C}. We define a reduced theory:

Definition 9 (⊏-reduced theory) A clausal theory is ⊏-reduced if and only if it is finite and it does not contain any ⊏-redundant clauses.
We define the input to the reduction problem:

Definition 10 (⊏-reduction input) A reduction input is a pair (T,⊏)
where T is a clausal theory and ⊏ is a binary relation over a clausal theory.
Note that a reduction input may (and often will) contain an infinite clausal theory. We define the reduction problem:

Definition 11 (⊏-reduction problem) Let (T,⊏) be a reduction input. Then the ⊏-reduction problem is to find a finite theory T′ ⊆ T such that (1) T′ ⊏ T (i.e. T′ ⊏ C for every clause C in T), and (2) T′ is ⊏-reduced. We call T′ a ⊏-reduction of T.
Although the input to a ⊏-reduction problem may contain an infinite theory, the output (a ⊏-reduction) must be a finite theory. We also introduce a variant of the ⊏-reduction problem where the reduction must obey certain syntactic restrictions:

Definition 12 (M^a_m-⊏-reduction problem) Let (T,⊏,M^a_m) be a triple, where the first two elements are as in a standard reduction input and M^a_m is a target reduction fragment. Then the M^a_m-⊏-reduction problem is to find a finite theory T′ ⊆ M^a_m such that (1) T′ ⊏ T, and (2) T′ is ⊏-reduced.

Subsumption reduction
The first form of reduction we consider is based on subsumption, which, as discussed in Section 2, is often used to eliminate redundancy in a clausal theory: a clause C subsumes a clause D, written C ≼ D, if and only if there exists a substitution θ such that Cθ ⊆ D. Note that if a clause C subsumes a clause D then C |= D [55]. However, if C |= D then it does not necessarily follow that C ≼ D. Subsumption can therefore be seen as weaker than entailment. Whereas checking entailment between clauses is undecidable [6], Robinson [55] showed that checking subsumption between clauses is decidable (although in general deciding subsumption is an NP-complete problem [51]).
If T is a clausal theory then the pair (T, ≼) is an input to the ⊏-reduction problem, which leads to the subsumption reduction problem (S-reduction problem). We show that the S-reduction problem is decidable for finite theories:

Proposition 3 (Finite S-reduction problem decidability) Let T be a finite theory. Then the corresponding S-reduction problem is decidable.
Proof We can enumerate each element T′ of 2^T in ascending order on the cardinality of T′. For each T′ we can check whether T′ subsumes T, which is decidable because subsumption between clauses is decidable. If T′ subsumes T then we correctly return T′; otherwise we continue to enumerate. Because the set 2^T is finite the enumeration must halt. Because the set 2^T contains T the algorithm will in the worst case return T itself. Thus the problem is decidable.
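The enumeration in the proof can be sketched on a propositional toy, where a clause is a set of literals and subsumption reduces to the subset test; the representation and helper names below are our own illustrative assumptions, not the first-order algorithm of the paper:

```python
from itertools import combinations

# Propositional toy: a clause is a frozenset of literals; clause C subsumes
# clause D iff C is a subset of D, and a theory subsumes a clause iff some
# member subsumes it. We enumerate subsets in ascending cardinality, as in
# the proof, so the first subset that subsumes the whole theory is minimal.

def subsumes(theory, clause):
    return any(c <= clause for c in theory)

def s_reduction(theory):
    clauses = list(theory)
    for size in range(len(clauses) + 1):
        for subset in combinations(clauses, size):
            if all(subsumes(subset, c) for c in clauses):
                return set(subset)
    return set(theory)  # unreachable: the full theory always subsumes itself

T = {frozenset({"p"}), frozenset({"p", "q"}), frozenset({"r"})}
assert s_reduction(T) == {frozenset({"p"}), frozenset({"r"})}
```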

Entailment reduction
As mentioned in the introduction, Cropper and Muggleton [10] previously used entailment reduction [45] to reduce sets of metarules using the notion of an entailment redundant clause: a clause C is entailment redundant in the clausal theory T ∪ {C} when T |= C. If T is a clausal theory then the pair (T, |=) is an input to the ⊏-reduction problem, which leads to the entailment reduction problem (E-reduction problem). We show the relationship between an E-reduction and an S-reduction:

Proposition 4 Let T be a clausal theory, T_S be an S-reduction of T, and T_E be an E-reduction of T. Then T_E |= T_S.
Proof Assume the opposite, i.e. T_E ̸|= T_S. This assumption implies that there is a clause C ∈ T_S such that T_E ̸|= C. By the definition of an S-reduction, T_S is a subset of T, so C must be in T, which implies that T_E ̸|= T. But this contradicts the premise that T_E is an E-reduction of T. Therefore the assumption cannot hold, and thus T_E |= T_S.

⊓ ⊔
We show that the E-reduction problem is undecidable for arbitrary clausal theories:

Proposition 5 (E-reduction problem clausal decidability) The E-reduction problem for clausal theories is undecidable.
Proof Follows from the undecidability of entailment in clausal logic [6].

⊓ ⊔
The E-reduction problem for Horn theories is also undecidable:

Proposition 6 (E-reduction problem Horn decidability) The E-reduction problem for
Horn theories is undecidable.
Proof Follows from the undecidability of entailment in Horn logic [41].

⊓ ⊔
The E-reduction problem is, however, decidable for finite Datalog theories:

Proposition 7 (E-reduction problem Datalog decidability) The E-reduction problem for finite Datalog theories is decidable.
Proof Follows from the decidability of entailment in Datalog [17] using a similar algorithm to the one used in the proof of Proposition 3. ⊓ ⊔

Derivation reduction
As mentioned in the introduction, entailment reduction can be too strong a form of reduction. We therefore describe a new form of reduction based on derivability [16,61]. Although our notion of derivation reduction can be defined for any proof system (such as unconstrained resolution as is done in [61]), we focus on SLD-resolution because we want to reduce sets of metarules, which are definite clauses. We define the function R^n(T) of a Horn theory T as:

R^0(T) = T
R^(n+1)(T) = R^n(T) ∪ {C | C is an SLD-resolvent of two clauses in R^n(T)}

We use this function to define the Horn closure of a Horn theory:

Definition 15 (Horn closure) The Horn closure R*(T) of a Horn theory T is the set ∪_{n ∈ N} R^n(T).

We state our notion of derivability:

Definition 16 (Derivability)
A Horn clause C is derivable from the Horn theory T , written T ⊢ C, if and only if C ∈ R * (T ).
We define a derivationally redundant (D-redundant) clause:

Definition 17 (D-redundant clause) A Horn clause C is D-redundant in the Horn theory T ∪ {C} if and only if T ⊢ C.

Let T be a Horn theory, then the pair (T, ⊢) is an input to the ⊏-reduction problem, which leads to the derivation reduction problem (D-reduction problem). Note that a theory can have multiple D-reductions. For instance, consider the theory T:

C1 = P(A,B) ← Q(B,A)
C2 = P(A,B) ← Q(A,C), R(C,B)
C3 = P(A,B) ← Q(C,A), R(C,B)

One D-reduction of T is {C1, C2} because we can resolve the first body literal of C2 with C1 to derive C3 (up to variable renaming). Another D-reduction of T is {C1, C3} because we can likewise resolve the first body literal of C3 with C1 to derive C2. We can show the relationship between E- and D-reductions by restating the notion of a SLD-deduction [51]:

Definition 18 (SLD-deduction [51])
Let T be a Horn theory and C be a Horn clause. Then there exists a SLD-deduction of C from T , written T ⊢ d C, if C is a tautology or if there exists a clause D such that T ⊢ D and D subsumes C.
We can use the subsumption theorem [51] to show the relationship between SLD-deductions and logical entailment:

Theorem 2 (SLD-subsumption theorem [51]) Let T be a Horn theory and C be a Horn clause. Then T |= C if and only if T ⊢ d C.
We can use this result to show the relationship between an E-and a D-reduction:

Proposition 8 Let T be a Horn theory, T_E be an E-reduction of T, and T_D be a D-reduction of T. Then T_E |= T_D.
Proof Follows from the definitions of E-reduction and D-reduction because an E-reduction can be obtained from a D-reduction with an additional subsumption check.
⊓ ⊔ We also use the SLD-subsumption theorem to show that the D-reduction problem is undecidable for Horn theories:

Theorem 3 (D-reduction problem Horn decidability) The D-reduction problem for
Horn theories is undecidable.
Proof Assume the opposite, that the problem is decidable, which implies that T ⊢ C is decidable. Since T ⊢ C is decidable and subsumption between Horn clauses is decidable [24], finding a SLD-deduction is also decidable. Therefore, by the SLD-subsumption theorem, entailment between Horn clauses is decidable. However, entailment between Horn clauses is undecidable [56], so the assumption cannot hold. Therefore, the problem must be undecidable. ⊓ ⊔

However, the D-reduction problem is decidable for any fragment M^a_m (e.g. definite Datalog clauses where each clause has at least one body literal, with additional arity and body size constraints). To show this result, we first introduce two lemmas:

Lemma 1 Let C1 and C2 be definite clauses, each with at least one body literal, and let C be an SLD-resolvent of C1 and C2. Then C has at least as many body literals as each of C1 and C2.

Proof Follows from the definition of SLD-resolution [51].
⊓ ⊔ Note that Lemma 1 does not hold for unconstrained resolution because unconstrained resolution allows for factorisation [51]. Lemma 1 also does not hold when facts (bodyless definite clauses) are allowed because they would allow for resolvents that are smaller in body size than one of the original two clauses.

Lemma 2 The fragment M^a_m is finite up to variable renaming.

Proof Any literal in M^a_m has at most a first-order variables and 1 second-order variable, so any literal has at most a + 1 variables. Any metarule has at most m body literals plus the head literal, so any metarule has at most m + 1 literals. Therefore, any metarule has at most (a + 1)(m + 1) variables. We can arrange the variables in at most ((a + 1)(m + 1))! ways, so there are at most ((a + 1)(m + 1))! metarules in M^a_m up to variable renaming. Thus M^a_m is finite up to variable renaming.
⊓ ⊔ Note that the bound in the proof of Lemma 2 is a worst-case result. In practice there are fewer usable metarules because we consider fragments of constrained theories, so not all clauses are admissible, and in all cases the order of the body literals is irrelevant. We use these two lemmas to show that the D-reduction problem is decidable for M^a_m:

Theorem 4 (D-reduction problem M^a_m decidability) The D-reduction problem for finite theories in M^a_m is decidable.

Proof Let T be a finite clausal theory in M^a_m and C be a definite clause with n > 0 body literals. The problem is whether T ⊢ C is decidable. By Lemma 1, we cannot derive C from any clause which has more than n body literals. We can therefore restrict the resolution closure R*(T) to only include clauses with body lengths less than or equal to n. In addition, by Lemma 2 there are only a finite number of such clauses, so we can compute the fixed point of R*(T) restricted to clauses of size smaller than or equal to n in a finite number of steps and check whether C is in the set. If it is then T ⊢ C; otherwise T ̸⊢ C. Thus derivability, and hence the D-reduction problem, is decidable. ⊓ ⊔

k-derivable clauses
Propositions 3 and 7 and Theorem 4 show that the ⊏-reduction problem is decidable under certain conditions. However, as we show in Section 4, even in the decidable cases, solving the ⊏-reduction problem is computationally expensive. We therefore solve restricted k-bounded versions of the E- and D-reduction problems, which both rely on SLD-derivations. Specifically, we focus on resolution-depth-limited derivations using the notion of k-derivability:

Definition 19 (k-derivability) Let k be a natural number. Then a Horn clause C is k-derivable from the Horn theory T, written T ⊢_k C, if and only if C ∈ R^k(T).
The definitions of k-bounded E- and D-reduction follow from this definition but are omitted for brevity. In Section 4 we introduce a general algorithm (Algorithm 1) to solve the S-reduction problem and the k-bounded E- and D-reduction problems.
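To make R^k(T) and k-derivability concrete, the following sketch (our own toy encoding, not the authors' implementation) represents a metarule as a (head, body) pair of literal tuples whose elements are all variable names, and computes the SLD-resolvents of one metarule against another. Because metarules contain only variables, unification reduces to consistent variable binding:

```python
def _walk(v, sub):
    # follow a chain of variable bindings to its representative
    while v in sub:
        v = sub[v]
    return v

def resolve(c1, c2, tag):
    """All SLD-resolvents obtained by resolving a body literal of c1 against
    the head of c2, with c2 first renamed apart using `tag`."""
    h1, b1 = c1
    h2, b2 = c2
    ren = lambda lit: tuple(f"{v}#{tag}" for v in lit)  # standardise apart
    h2, b2 = ren(h2), tuple(ren(l) for l in b2)
    resolvents = []
    for i, lit in enumerate(b1):
        if len(lit) != len(h2):
            continue  # different arities: no unifier
        sub = {}
        for x, y in zip(lit, h2):
            x, y = _walk(x, sub), _walk(y, sub)
            if x != y:
                sub[x] = y  # variable-variable binding always succeeds
        apply = lambda l: tuple(_walk(v, sub) for v in l)
        body = b1[:i] + b1[i + 1:] + b2
        resolvents.append((apply(h1), tuple(apply(l) for l in body)))
    return resolvents
```

Resolving the chain metarule P(A, B) ← Q(A, C), R(C, B) against a renamed copy of itself yields three-literal chains; iterating this step at most k times over a theory gives a naive, unoptimised computation of R^k(T).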

Reduction algorithms
In Section 5 we logically reduce sets of metarules. We now describe the reduction algorithms that we use.

⊏-reduction algorithm
The reduce algorithm (Algorithm 1) shows a general ⊏-reduction algorithm that solves the ⊏-reduction problem (Definition 11) when the input theory is finite 11 . We ignore cases where the input is infinite because of the inherent undecidability of the problem. Algorithm 1 is largely based on Plotkin's clausal reduction algorithm [52]. Given a finite clausal theory T and a binary relation ⊏, the algorithm repeatedly tries to remove a ⊏-redundant clause from T. If it cannot find a ⊏-redundant clause, then it returns the ⊏-reduced theory. Note that since derivation reduction is only defined over Horn theories, in a ⊢-reduction input (T, ⊢) the theory T must be Horn. We show total correctness of the algorithm:

Proposition 9 (Algorithm 1 total correctness) Let (T, ⊏) be a ⊏-reduction input where T is finite, and let the corresponding ⊏-reduction problem be decidable. Then Algorithm 1 solves the ⊏-reduction problem.
Proof Trivial by induction on the size of T. □

Note that Proposition 9 assumes that the given reduction problem is decidable and that the input theory is finite. If Algorithm 1 is called with an arbitrary clausal theory and the |= relation, then it will not necessarily terminate. We can call Algorithm 1 with specific binary relations, where each variation has a different time complexity. Table 2 shows different ways of calling Algorithm 1 with their corresponding time complexities, where we assume finite theories as input. We first show the complexity of calling Algorithm 1 with the subsumption relation:

Proposition 10 (S-reduction complexity) If T is a finite clausal theory, then calling Algorithm 1 with T and the subsumption relation requires at most O(|T|³) calls to a subsumption algorithm.
Proof For every clause C in T, the algorithm checks whether any other clause in T subsumes C, which requires at most O(|T|²) calls to a subsumption algorithm. If any clause C is found to be S-redundant, then the algorithm repeats the procedure on the theory T \ {C}, so overall the algorithm requires at most O(|T|³) calls to a subsumption algorithm. □

Table 2: Outputs and complexity of Algorithm 1 for different input relations and an arbitrary finite clausal theory T. The time complexities are a function of the size of the given theory, denoted by |T|.
Note that a more detailed analysis of calling Algorithm 1 with the subsumption relation would depend on the subsumption algorithm used; deciding subsumption is an NP-complete problem [24]. The complexity of calling Algorithm 1 with the k-bounded entailment relation is analysed analogously; Table 2 lists the corresponding bounds.
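To tie the pieces together, the following sketch (ours, not the authors' code, using a toy encoding where a clause is a (head, body) pair of variable tuples) combines a naive θ-subsumption test with the Plotkin-style removal loop of Algorithm 1 to perform S-reduction; it is illustrative only and makes no attempt at the efficiency discussed above:

```python
def subsumes(c, d):
    """θ-subsumption for metarules: true if some substitution maps head(c)
    onto head(d) and body(c) into body(d)."""
    (ch, cb), (dh, db) = c, d

    def bind(sub, lc, ld):
        # extend sub so that literal lc maps onto literal ld, or return None
        if len(lc) != len(ld):
            return None
        new = dict(sub)
        for x, y in zip(lc, ld):
            if new.setdefault(x, y) != y:
                return None
        return new

    def search(sub, todo):
        if not todo:
            return True
        return any(
            (s := bind(sub, todo[0], ld)) is not None and search(s, todo[1:])
            for ld in db
        )

    start = bind({}, ch, dh)  # heads must match
    return start is not None and search(start, list(cb))

def reduce_theory(theory, redundant):
    """Plotkin-style loop of Algorithm 1: repeatedly remove one redundant
    clause until none remains; `redundant(c, rest)` is the plugged-in test."""
    theory = list(theory)
    changed = True
    while changed:
        changed = False
        for i, c in enumerate(theory):
            rest = theory[:i] + theory[i + 1:]
            if redundant(c, rest):
                theory, changed = rest, True
                break
    return theory

def s_reduce(theory):
    # S-reduction: a clause is redundant if another clause subsumes it
    return reduce_theory(theory, lambda c, rest: any(subsumes(d, c) for d in rest))
```

For example, given P(A, B) ← Q(A, C) and the chain clause P(A, B) ← Q(A, C), R(C, B), `s_reduce` discards the chain clause because the first clause subsumes it. Plugging in a k-bounded derivability test instead of subsumption yields the k-bounded D-reduction variant.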

Reduction of metarules
We now logically reduce fragments of metarules. Table 3 shows the four fragments we consider and their main restrictions; the subsequent sections describe each fragment precisely.
Our first goal (G1) is essentially to minimise the number of body literals in a set of metarules, which can be seen as enforcing an Occamist bias. We are particularly interested in reducing sets of metarules to fragments with at most two body literals because M^2_2 augmented with one function symbol has universal Turing machine expressivity [60]. In addition, previous work on MIL has almost exclusively used metarules from the fragment M^2_2. Our second goal (G2) is more general and concerns reducing an infinite set of metarules M^a_∞ to the finite fragment M^a_2. Our third goal (G3) is similar, but concerns determining whether an infinite set of metarules has any finite reduction.
We work towards these goals by first applying the reduction algorithms described in the previous section to finite fragments restricted to 5 body literals (i.e. M^a_5). This value gives us a sufficiently large set of metarules to reduce, but not so large that the reduction problem becomes intractable. When running the E- and D-reduction algorithms (both k-bounded), we use a resolution-depth bound of 7, which is the largest value for which the algorithms terminate in reasonable time 15 . After applying the reduction algorithms to the finite fragments, we then try to solve G2 by extrapolating the results to the infinite case (i.e. M^a_∞). In cases where M^a_2 is not a reduction of M^a_∞, we then try to solve G3 by determining whether there exists any natural number k such that M^a_k is a reduction of M^a_∞.

Connected (M^a_m) results
We first consider a general fragment of metarules. The only constraint is that we follow the standard ILP convention [10,20,27,51] and focus on connected clauses 16 :

Definition 20 (Connected clause)
A clause is connected if the literals in the clause cannot be partitioned into two sets such that the variables appearing in the literals of one set are disjoint from the variables appearing in the literals of the other set.
For example, the chain clause P(A, B) ← Q(A, C), R(C, B) is connected. By contrast, the clause P(A, B) ← Q(C, D) is not connected, because the head shares no variable with the body literal. We denote the connected fragment of metarules with arity at most a and at most m body literals by M^a_m; throughout this section, M^a_m refers to this connected fragment. Table 4 shows the maximum body size and the cardinality of the reductions obtained when applying the reduction algorithms to M^a_5 for different values of a. To give an idea of the scale of the reductions, the fragment M^{1,2}_5 contains 77398 unique metarules, of which E-reduction removes all but two.

Table 4: Cardinality and maximal body size of the reductions of M^a_5. All the fragments can be S- and E-reduced to M^a_1, but they cannot all be D-reduced to M^a_2.
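Definition 20 is straightforward to check mechanically. A minimal sketch (ours, not the authors' code), encoding a literal as a tuple (predicate_variable, arg_1, ..., arg_n) and a clause as a (head, body) pair:

```python
def is_connected(clause):
    """True if the literals cannot be split into two sets with disjoint
    first-order variables (Definition 20); checked via union-find over
    literals that share a first-order variable."""
    head, body = clause
    lits = [head] + list(body)
    parent = list(range(len(lits)))

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    for i in range(len(lits)):
        for j in range(i + 1, len(lits)):
            # lits[k][1:] are the first-order variables of literal k
            if set(lits[i][1:]) & set(lits[j][1:]):
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(lits))}) == 1
```

Under this encoding the chain metarule is connected, whereas P(A, B) ← Q(C, D) is not.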
As Table 4 shows, all the fragments can be S- and E-reduced to M^a_1. We show that in general M^a_∞ has an M^a_1-S-reduction:

Theorem 5 (M^a_∞ S-reducibility)
For all a > 0, the fragment M^a_∞ has an M^a_1-S-reduction.
Proof Let C be any clause in M^a_∞, where a > 0. By the definition of connected clauses, at least one body literal of C shares a variable with the head literal of C. The clause formed from the head of C and a body literal directly connected to it is by definition in M^a_1 and clearly subsumes C. Therefore M^a_1 subsumes M^a_∞.

□
We likewise show that M^a_∞ always has an M^a_1-E-reduction:

Theorem 6 (M^a_∞ E-reducibility) For all a > 0, the fragment M^a_∞ has an M^a_1-E-reduction.
Proof Follows from Theorem 5 and Proposition 4.

□
As Table 4 shows, the fragment M^2_5 could not be D-reduced to M^2_2 when running the derivation reduction algorithm. However, because we run the derivation reduction algorithm with a maximum derivation depth, this result alone is not enough to guarantee that the output cannot be further reduced. Therefore, we show directly that M^2_5 cannot be D-reduced to M^2_2:

Let C_I denote the clause P(A, B) ← Q(A, C), R(A, D), S(B, C), T(B, D), U(C, D), and let [C_I] denote the set of clauses equal to C_I up to variable renaming.
We prove that no clause in [C_I] can be derived from M^2_2 by induction on the length of derivations. Formally, we show that there exists no derivation of any length n from M^2_2 to a clause in [C_I]. We reason by contradiction and, w.l.o.g., consider only the clause C_I.
For the base case n = 0, assume that there is a derivation of length 0 from M^2_2 to C_I. This assumption implies that C_I ∈ M^2_2, which clearly cannot hold given the body size of C_I.
For the general case, assume that the property holds for all k < n and, by contradiction, consider the final inference in a derivation of length n of C_I from M^2_2. Let C_1 and C_2 denote the premises of this inference. Then the literals occurring in C_I must occur, up to variable renaming, in at least one of C_1 and C_2. We consider the following cases separately.
- All the literals of C_I occur in the same premise: by Lemma 1, this case is impossible because this premise would contain more literals than C_I (those of C_I plus the resolved literal).

- Only one literal of C_I occurs separately from the others: w.l.o.g., assume that the literal Q(A, C) occurs alone in C_2 (up to variable renaming). Then C_2 must be of the form H(A, C) ← Q(A, C) or H(C, A) ← Q(A, C) for some H, where the H-headed literal is the resolved literal of the inference, which allows the unification of A and C with their counterparts in C_1 17 . In this case, C_1 belongs to [C_I], and a derivation of C_1 from M^2_2 of length smaller than n exists as a strict subset of the derivation of C_I of length n. This contradicts the induction hypothesis, so the assumed derivation of C_I cannot exist.

- Otherwise, the split of the literals of C_I between C_1 and C_2 is such that at least three variables must be unified during the inference. For example, consider the case where P(A, B) ← Q(A, C) ⊆ C_1 and the set {R(A′, D), S(B′, C′), T(B′, D), U(C′, D)} occurs in the body of C_2 (up to variable renaming). Then A′, B′, and C′ must unify with A, B, and C respectively for C_I to be derived (up to variable renaming). However, the inference can unify at most two variable pairs, since the resolved literal is at most dyadic, so this inference is impossible, a contradiction.
Thus C_I, and every clause in [C_I], cannot be derived from M^2_2. Note that, since [C_I] is neither a subset of M^2_3 nor of M^2_4, this proof also shows that the clauses in [C_I] cannot be derived from M^2_3 or from M^2_4.

□
We generalise this result to M^2_∞:

Theorem 7 (M^2_∞ D-irreducibility) The fragment M^2_∞ has no D-reduction.

Proof It is enough to prove that M^2_∞ does not have an M^2_m-D-reduction for an arbitrary m, because any D-reduced theory, being finite, admits a bound on the body size of the clauses it contains. Starting from C_I as defined in the proof of Proposition 14, apply the following transformation iteratively for k from 1 to m: replace the literals containing Q and R (i.e. at first Q(A, C) and R(A, D)) with the set of literals Q(A, C_k), R(A, D_k), where all variables and predicate variables labelled with k are new. Let the resulting clause be denoted C_I^m. This clause has body size 3m + 5 and thus does not belong to M^2_m. Moreover, for the same reason that C_I cannot be derived from any M^2_{m′} with m′ < 5 (see the proof of Proposition 14), C_I^m cannot be derived from any M^2_{m′} with m′ < 3m + 5. In particular, C_I^m cannot be derived from M^2_m. □

Another way to generalise Proposition 14 is the following:

Theorem 8 (M^a_∞ D-irreducibility) For a ≥ 2, the fragment M^a_∞ has no M^a_{a²+a−2}-D-reduction.
Proof Let C_a denote the clause C_a = P(A_1, . . . , A_a) ← Q_{1,1}(A_1, B_{1,1}, . . . , B_{1,a−1}), . . . , Q_{1,a}(A_1, B_{a,1}, . . . , B_{a,a−1}), . . . Note that for a = 2, the clauses C_a and C_I from the proof of Proposition 14 coincide. In fact, to show that C_a is irreducible for any a, it is enough to consider the proof of Proposition 14 with C_a substituted for C_I and with the last case generalised in the following way: the split of the literals of C_a between C_1 and C_2 is always such that at least a + 1 variables must be unified during the inference, which is impossible since the resolved literal can hold at most a variables.
The reason this proof holds is that any subset of C_a contains at least a + 1 distinct variables. Since C_a has body size a² + a − 1, this counter-example proves that M^a_∞ has no M^a_{a²+a−2}-D-reduction.

□
Note that this is enough to conclude that M^a_∞ cannot be reduced to M^a_2, but it does not prove that M^a_∞ is not D-reducible. Table 6 summarises our theoretical results from this section. Theorems 5 and 6 show that M^a_∞ can always be S- and E-reduced to M^a_1 respectively. By contrast, Theorem 7 shows that M^2_∞ cannot be D-reduced to M^2_2. In fact, Theorem 7 shows that M^2_∞ has no D-reduction at all. Theorem 7 has direct (negative) implications for MIL systems such as Metagol and HEXMIL. We discuss these implications in more detail in Section 7.

Summary
Arity  S  E  D
1      ✓  ✓  ✓
2      ✓  ✓  ×
>2     ✓  ✓  ×

Table 6: Existence of an S-, E- or D-reduction of the connected fragment M^a_∞ to M^a_2. The symbol ✓ denotes that the fragment has such a reduction; the symbol × denotes that it does not.

Datalog ( a m ) results
We now consider Datalog clauses, which are often used in ILP [1,12,20,32,49,58]. The relevant Datalog restriction is that if a variable appears in the head of a clause, then it must also appear in a body literal. For example, among the S-reductions of M^{1,2}_5 in Table 5, the clause P(A, B) ← Q(B) is not a Datalog clause because the variable A appears in the head but not in the body. In this section, M^a_m denotes the Datalog fragment of metarules with arity at most a and at most m body literals. Table 7 shows the results of applying the reduction algorithms to M^a_5 for different values of a.

Table 7: Cardinality and maximal body size of the reductions of M^a_5. All the fragments can be S- and E-reduced to M^a_2, but they cannot all be D-reduced to M^a_2.
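The Datalog restriction is equally easy to check under the same toy encoding (our sketch, not the authors' code):

```python
def is_datalog(clause):
    """Datalog restriction: every variable in the head must also appear in
    some body literal, so P(A, B) <- Q(B) is rejected (A is head-only)."""
    head, body = clause
    head_vars = set(head[1:])                      # first-order head variables
    body_vars = {v for lit in body for v in lit[1:]}
    return head_vars <= body_vars
```

The chain metarule passes the check, while P(A, B) ← Q(B) fails it.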
We first show that the dyadic Datalog fragment can always be S-reduced to body size two.

Proof Follows by the same argument as in Theorem 5, except that the reduction is to M^2_2 instead of M^2_1. This difference is due to the Datalog constraint: if a variable appears in the head, it must also appear in the body. For clauses with dyadic heads, if the two head argument variables occur in two distinct body literals, then the clause cannot be reduced beyond M^2_2.

□
We now show that this result cannot be generalised to M^a_{a−1}:

Theorem 9 (M^a_∞ S-irreducibility) For a > 0, the fragment M^a_∞ does not have an M^a_{a−1}-S-reduction.
Proof As a counter-example to an M^a_{a−1}-S-reduction, consider C_a = P(X_1, . . . , X_a) ← Q_1(X_1), . . . , Q_a(X_a). The clause C_a does not belong to M^a_{a−1} and cannot be S-reduced to it, because removing any subset of its literals leaves argument variables in the head without their counterparts in the body; hence, no proper subset of C_a belongs to the Datalog fragment. Thus C_a cannot be subsumed by a clause in M^a_{a−1}.

□
However, we can show that M^a_∞ can always be S-reduced to M^a_a:

Theorem 10 (M^a_∞ to M^a_a S-reducibility) For a > 0, the fragment M^a_∞ has an M^a_a-S-reduction.
Proof To prove that M^a_∞ has an M^a_a-S-reduction, it is enough to remark that any clause in M^a_∞ has a subclause of body size at most a that is also in M^a_∞, the worst case being clauses such as C_a where each argument variable in the head occurs in a distinct literal in the body.

□
We also show that M^a_∞ always has an M^a_2-E-reduction. The proof relies on a lemma, Lemma 3, which holds for a > 0 and n ∈ {1, . . . , a} and is proved by induction on n; it states that a clause whose body literals pairwise share no variables is M^a_2-E-reducible.
Proof Let C be any clause in M^a_∞. We denote the head of C by P(A_1, . . . , A_n), where 0 < n ≤ a. The possibility that some of the A_i are equal does not affect the reasoning.
If n = 1, then by definition there exists a literal L_1 in the body of C such that A_1 occurs in L_1. It is enough to consider the clause P(A_1) ← L_1 to conclude: P(A_1) is the head of C and L_1 belongs to the body of C, so P(A_1) ← L_1 entails C, and this clause belongs to M^a_2. In the case where n > 1, there must exist literals L_1, . . . , L_n in the body of C such that A_i occurs in L_i for each i. Consider the clause C′ = P(A_1, . . . , A_n) ← L_1, . . . , L_n. There are a few things to stress about C′:

- The clause C′ belongs to M^a_∞.

- Some L_i may be identical to each other, since the A_i may occur together in literals or simply be equal, but this scenario does not affect the reasoning.

- The clause C′ entails C because C′ is equivalent to a subset of C (though this subset may be distinct from C′ due to C′ possibly including some duplicated literals).

Now consider the clause D′ = P(A_1, . . . , A_n) ← L′_1, . . . , L′_n, where each L′_i is obtained from L_i by renaming every variable other than A_i to a fresh variable. By Lemma 3, D′ is M^a_2-E-reducible. Note that this notation hides the fact that if a variable occurs in distinct body literals L_i of C′, this connection is not captured in D′, where distinct variables occur instead; thus there is no guarantee that D′ is equivalent to C′. However, it always holds that D′ |= C′, because D′ subsumes C′. In our small example, it is enough to consider the substitution θ = {B′/B, A′_2/A_2} to observe this. Thus, by transitivity of entailment, we can conclude that C is M^a_2-E-reducible. □

As Table 7 shows, not all of the fragments can be D-reduced to M^a_2. In particular, the result that M^2_∞ has no M^2_2-D-reduction follows from Theorem 7, because the counter-examples presented in that proof also belong to the Datalog fragment. Table 9 summarises our theoretical results from this section. Theorem 9 shows that M^a_∞ never has an M^a_{a−1}-S-reduction. This result differs from the connected fragment, which could always be S-reduced to M^a_1. However, Theorem 10 shows that M^a_∞ can always be S-reduced to M^a_a. As with the connected fragment, Theorem 11 shows that M^a_∞ can always be E-reduced to M^a_2. The result that M^2_∞ has no D-reduction follows from Theorem 7.

Summary
Arity  S  E  D
1      ✓  ✓  ✓
2      ✓  ✓  ×
>2     ×  ✓  ×

Table 9: Existence of an S-, E- or D-reduction of the Datalog fragment M^a_∞ to M^a_2.

Singleton-free (M^a_m) results
It is common in ILP to require that all the variables in a clause appear at least twice [10,46,54], which essentially eliminates singleton variables. We call this fragment the singleton-free fragment:

Definition 21 (Singleton-free) A clause is singleton-free if each first-order variable appears at least twice.
For example, among the E-reductions of the connected fragment M^{1,2}_5 shown in Table 5, the clause P(A) ← Q(B, A) is not singleton-free because the variable B appears only once. In this section, M^a_m denotes the singleton-free fragment of metarules with arity at most a and at most m body literals. Table 10 shows the results of applying the reduction algorithms to M^a_5, and Table 11 shows the resulting reductions.

Proof As a counter-example, consider the clause C. Removing any non-empty subset of literals from the body of C leaves a singleton variable in the remaining clause, so the result is not singleton-free. Moreover, for any other clause to subsume C, it must be more general than C, which is again impossible because of the singleton-free constraint 18 .

□
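Under the same toy encoding, the singleton-free constraint of Definition 21 can be checked mechanically (our sketch, not the authors' code):

```python
from collections import Counter

def is_singleton_free(clause):
    """Singleton-free (Definition 21): every first-order variable occurs at
    least twice across the whole clause."""
    head, body = clause
    counts = Counter(v for lit in [head] + list(body) for v in lit[1:])
    return all(n >= 2 for n in counts.values())
```

The chain metarule is singleton-free (A, B, and C each occur twice), whereas P(A) ← Q(B, A) is not, because B occurs only once.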
We can likewise show that this result holds in the general case:

Theorem 12 (M^a_∞ S-irreducibility) For a ≥ 2, the fragment M^a_∞ does not have an M^a_{2a−1}-S-reduction.
Proof We generalise the clause C from the proof of Proposition 16 to define the clause C_a. The same reasoning applies to C_a as to C (= C_2), making C_a irreducible in M^a_∞. Moreover, C_a has body size 2a, so C_a is a counter-example to an M^a_{2a−1}-S-reduction of M^a_∞.

□
However, all the fragments can be E-reduced to M^a_2:
Theorem 13 (M^a_∞ E-reducibility) For a > 0, the fragment M^a_∞ has an M^a_2-E-reduction.
Proof The proof of Theorem 13 is an adaptation of that of Theorem 11. The only difference is that if n = 1, then P(A_1) ← L_1, L̄_1 (where L̄_1 denotes a copy of L_1) must be considered instead of P(A_1) ← L_1 to ensure the absence of singleton variables in the body of the clause, and, for the same reason, in the general case the clause D′ = P(A_1, . . . , A_n) ← L_1, . . . , L_n must be replaced by D′ = P(A_1, . . . , A_n) ← L_1, L̄_1, . . . , L_n, L̄_n. Note that C′ is not modified and thus may or may not belong to the singleton-free fragment; however, it is enough that C′ belongs to the connected fragment M^a_∞. With these modifications, the proof carries over, including the results of Lemma 3. □

Table 12 summarises our theoretical results from this section. Theorem 12 shows that for a ≥ 2, the fragment M^a_∞ does not have an M^a_{2a−1}-S-reduction. This result contrasts with the Datalog fragment, where M^a_∞ always has an M^a_a-S-reduction. As is becoming clear, adding more restrictions to a fragment typically reduces S-reducibility. By contrast, as with the connected and Datalog fragments, Theorem 13 shows that the fragment M^a_∞ always has an M^a_2-E-reduction. In addition, as with the other fragments, M^a_∞ has no D-reduction for a ≥ 2.

18 Note that this proof also shows that M^2_∞ does not have an M^2_3-S-reduction.

Summary
Arity  S  E  D
1      ✓  ✓  ✓
2      ×  ✓  ×
>2     ×  ✓  ×

Table 12: Existence of an S-, E- or D-reduction of the singleton-free fragment M^a_∞ to M^a_2.

Duplicate-free (M^a_m) results
The previous three fragments are general in the sense that they have been widely used in ILP. By contrast, the final fragment that we consider is of particular interest to MIL. Table 1 shows a selection of metarules commonly used in the MIL literature. These metarules have been used successfully despite having no theoretical justification. However, if we consider the reductions of the three fragments so far, the identity, precon, and postcon metarules do not appear in any reduction. These metarules can be derived from the reductions, typically using either the P(A) ← Q(A, A) or the P(A, A) ← Q(A) metarule. To try to identify a reduction that more closely matches the metarules shown in Table 1, we consider a fragment that excludes clauses in which a literal contains multiple occurrences of the same variable. For instance, this fragment excludes the previously mentioned metarules and also excludes the metarule P(A, A) ← Q(B, A), which was in the D-reduction shown in Table 5. We call this fragment duplicate-free; it is a sub-fragment of the singleton-free fragment, and in this section M^a_m denotes it. Table 13 shows the reductions for the fragment M^{1,2}_5; reductions for other duplicate-free fragments are in Appendix A.4. As Table 13 shows, the D-reduction of M^{1,2}_5 contains some metarules commonly used in the MIL literature, for instance the identity_1, didentity_2, and precon metarules. We use the metarules shown in Table 13 in Experiments 1 and 2 (Sections 6.1 and 6.2) to learn Michalski trains solutions and string transformation programs respectively. Table 14 shows the results of applying the reduction algorithms to M^a_5 for different values of a. All the theoretical results that hold for the singleton-free fragments hold similarly for the duplicate-free fragments, for the following reasons:

- (S) The clauses in the proofs of Proposition 16 and Theorem 12 are duplicate-free.
- (E) If the clause C considered initially in the proof of Theorem 13 is duplicate-free, then all the subsequent clauses in that proof are also duplicate-free.
- (D) In the proof of Theorem 7, the C_I^m family of clauses are all duplicate-free.

Thus Table 12 also summarises the S-, E- and D-reduction results of the duplicate-free fragment M^a_∞ to M^a_2.
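The duplicate-free constraint can likewise be checked mechanically (our sketch, same toy encoding as before):

```python
def is_duplicate_free(clause):
    """Duplicate-free: no single literal contains the same first-order
    variable twice, so P(A, A) <- Q(B, A) is excluded."""
    head, body = clause
    return all(len(lit[1:]) == len(set(lit[1:])) for lit in [head] + list(body))
```

For example, the chain metarule is duplicate-free, whereas P(A, A) ← Q(B, A) is rejected because its head repeats A.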

Summary
We started this section with three goals (G1, G2, and G3). Table 15 summarises the results towards these goals for the fragments of metarules relevant to ILP (Table 3). For G1, our results are mostly empirical, i.e. the outputs of the reduction algorithms. For G2, Table 15 shows that the results are all positive for E-reduction but mostly negative for S- and D-reduction, especially for the Datalog fragments. Similarly, for G3 the results are again positive for E-reduction but negative for S- and D-reduction for the Datalog fragments. We discuss the implications of these results in Section 7.
Table 15: Existence of an S-, E- or D-reduction of each fragment M^a_∞ to M^a_2. The symbol ✓ denotes that the fragment has such a reduction; the symbol × denotes that it does not.

Experiments
As explained in Section 1, deciding which metarules to use for a given learning task is a major open problem: a trade-off between efficiency and expressivity. The hypothesis space grows given more metarules (Theorem 1), so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this section we experimentally explore this trade-off. As described in Section 2, Cropper and Muggleton [10] showed that learning with E-reduced sets of metarules can lead to higher predictive accuracies and lower learning times compared to learning with non-E-reduced sets. However, as argued in Section 1, we claim that E-reduction is not always the most suitable form of reduction because it can remove metarules necessary to learn programs with the appropriate specificity. To test this claim, we now conduct experiments that compare the learning performance of Metagol 2.3.0 19 , the main MIL implementation, when given different reduced sets of metarules 20 . We test the following null hypothesis:

Null hypothesis 1 There is no difference in the learning performance of Metagol when using different reduced sets of metarules.

To test this null hypothesis, we consider three domains: Michalski trains, string transformations, and game rules.

Michalski trains
In the Michalski trains problems [35] the task is to induce a program that distinguishes eastbound trains from westbound trains. Figure 1 shows an example target program, where the target concept (f/1) is that the train has a long carriage with two wheels and another with three wheels.

Materials
To obtain the experimental data, we first generated 8 random target train programs, where the programs are progressively more difficult: difficulty is measured by the number of literals in the generated program, from the easiest task T1 to the most difficult task T8. Figure 2 shows the background predicates available to Metagol. We vary the metarules given to Metagol, using the S-, E-, and D-reductions of the fragment M^{1,2}_5 (Table 13). In addition, we also consider the D* set, a subset of the D-reduced metarules. If a program is not found in 10 minutes, then no program is returned and every testing example is deemed to have failed. We measure mean predictive accuracies, mean learning times, and standard errors over 10 repetitions.

Results
The D* set performs particularly well on the more difficult tasks. The poor performance of the S and E sets on the more difficult tasks has one of two explanations. The first is that the S- and E-reduction algorithms have removed the metarules necessary to express the target concept. This observation strongly corroborates our claim that E-reduction can be too strong because it can remove metarules necessary to specialise a clause. The second is that the S- and E-reduction algorithms produce sets of metarules that are still sufficient to express the target theory, but doing so requires a much larger and more complex program, measured by the number of clauses needed.
The performance discrepancy between the D and D* sets of metarules can be explained by comparing the hypothesis spaces searched. For instance, when searching for a program with 3 clauses, Theorem 1 shows that when using the D set of metarules the hypothesis space contains approximately 10^24 programs. By contrast, when using the D* set of metarules the hypothesis space contains approximately 10^14 programs. As explained in Section 3.2, assuming that the target hypothesis is in both hypothesis spaces, the Blumer bound [3] tells us that searching the smaller hypothesis space will result in less error, which helps to explain these empirical results. Of course, the D* set could perform worse than the D set when the target theory requires the three removed metarules, but we did not observe this situation in this experiment. Figure 3 shows the target program for T8 and example programs learned by Metagol using the various reduced sets of metarules. Only the D* program is success-set equivalent 23 to the target program when restricted to the target predicate f/1. In all three cases Metagol discovered that if a carriage has three wheels then it is a long carriage, i.e. Metagol discovered that the literal long(C2) is redundant in the target program. Indeed, if we unfold the D* program to remove the invented predicates, then the resulting single-clause program is one literal shorter than the target program.
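To illustrate the effect numerically, the sketch below assumes the hypothesis-space bound has the form (m · p^(j+1))^n, which is our reading of bounds of this kind in the MIL literature (Theorem 1 itself is stated earlier in the paper, not in this section); the numbers in the usage are illustrative, not the experiment's actual parameters:

```python
def hypothesis_space_bound(p, m, j, n):
    """Assumed worst-case number of programs with n clauses built from m
    metarules with at most j body literals over p predicate symbols: each
    clause picks one of m metarules and binds its at most j + 1 predicate
    variables to one of p symbols."""
    return (m * p ** (j + 1)) ** n
```

Because the bound is exponential in the number of clauses n, even a modest cut in the number of metarules m shrinks the space dramatically, which is the Blumer-bound argument for preferring the smaller D* set when it still contains the target theory.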
Overall, the results from this experiment suggest that we can reject the null hypothesis, both in terms of predictive accuracies and learning times.

String transformations
In [38] and [14] the authors evaluate Metagol on 17 real-world string transformation tasks using a predefined (hand-crafted) set of metarules. In this experiment, we compare learning with different metarules on an expanded dataset with 250 string transformation tasks.

21 A statistical test on paired nominal data: https://en.wikipedia.org/wiki/McNemar%27s_test
22 A statistical test on paired ordinal data: http://www.biostathandbook.com/pairedttest.html
23 The success set of a logic program P is the set of ground atoms {A ∈ hb(P) | P ∪ {¬A} has an SLD-refutation}, where hb(P) denotes the Herbrand base of P. The success set restricted to a specific predicate symbol p is the subset of the success set restricted to atoms containing p.

Materials
Each string transformation task has 10 examples. Each example is an atom of the form f(x, y), where f is the task name and x and y are strings. Figure 4 shows task p6, where the goal is to learn a program that filters the capital letters from the input. We supply Metagol with dyadic background predicates, such as tail, dropLast, reverse, filter_letter, filter_uppercase, dropWhile_not_letter, and takeWhile_uppercase. The full details can be found in the code repository. We vary the metarules given to Metagol, using the S-, E-, and D-reductions of the fragment.

Method
Our experimental method is as follows: if a program is not found in 10 minutes, then no program is returned and every testing example is deemed to have failed. We measure mean predictive accuracies, mean learning times, and standard errors over 10 repetitions. Table 19 shows the mean predictive accuracies and learning times when learning with the different sets of metarules. Note that we are not interested in the absolute predictive accuracy, which is limited by factors such as the low timeout and the insufficiency of the BK; we are instead interested in the relative accuracies. Table 19 shows that the D set outperforms the S and E sets, with a higher mean accuracy of 33% versus 22% and 22% respectively. The D* set outperforms them all with a mean accuracy of 56%. A McNemar's test 21 on the D and D* accuracies confirmed the significance at the p < 0.01 level. Table 19 also shows the corresponding learning times when varying the metarules. Again, the D set outperforms the S and E sets, and again the D* set outperforms them all. A paired t-test 22 on the D and D* learning times confirmed the significance at the p < 0.01 level.

Results
Overall, the results from this experiment give further evidence to reject the null hypothesis, both in terms of predictive accuracies and learning times.

Inducing game rules
The general game playing (GGP) framework [25] is a system for evaluating an agent's general intelligence across a wide range of tasks. In the GGP competition, agents are tested on games they have never seen before. In each round, the agents are given the rules of a new game. The rules are described symbolically as a logic program. The agents are given a few seconds to think, to process the rules of the game, and to then start playing, thus producing game traces. The winner of the competition is the agent who gets the best total score over all the games. In this experiment, we use the IGGP dataset [9] which inverts the GGP task: an ILP system is given game traces and the task is to learn a set of rules (a logic program) that could have produced these traces.

Materials
The IGGP dataset contains problems drawn from 50 games. We focus on the eight games shown in Figure 5, which contain BK compatible with the metarule fragments we consider (i.e. the BK contains predicates in the dyadic fragment M^2_m); the other games contain predicates with arity greater than two. Each game has four target predicates, legal, next, goal, and terminal, where the arities depend on the game. Figure 6 shows the target solution for the next predicate in the minimal decay game. Each game contains training/validation/test data, composed of sets of ground atoms, in a 4:1:1 split. We vary the metarules given to Metagol, again using the S-, E-, and D-reductions. If no program is found in 10 minutes, then no program is returned and every testing example is deemed to have failed. Table 20 shows the balanced accuracies when learning with the different sets of metarules. Again, we are not interested in the absolute accuracies, only in the relative differences when learning with different sets of metarules. The D set outperforms the S and E sets, with a higher mean accuracy of 72% versus 66% and 66% respectively. The D* set again outperforms them all, with a mean accuracy of 73%. A McNemar's test on the D and D* accuracies confirmed the significance at the p < 0.01 level. Table 20 also shows the corresponding learning times when varying the metarules. Again, the D set outperforms the S and E sets, and again the D* set outperforms them all. However, a paired t-test on the D and D* learning times confirmed significance only at the p < 0.08 level, so the difference in learning times is not significant. Overall, the results from this experiment suggest that we can reject the null hypothesis in terms of predictive accuracies but not learning times.

Conclusions and further work
As stated in Section 1, despite the widespread use of metarules, there is little work determining which metarules to use for a given learning task. Instead, suitable metarules are assumed to be given as part of the background knowledge, or are used without any theoretical justification. Deciding which metarules to use for a given learning task is a major open challenge [8,10] and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules [10,38], so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. To address this issue, Cropper and Muggleton [10] used E-reduction on sets of metarules and showed that learning with E-reduced sets of metarules can lead to higher predictive accuracies and lower learning times compared to learning with non-E-reduced sets. However, as we claimed in Section 1, E-reduction is not always the most appropriate form of reduction because it can remove metarules that are necessary to learn programs of sufficient specificity.
To support our claim, we have compared three forms of logical reduction: S-, E-, and D-reduction, the last of which is a new form of reduction based on SLD-derivations. We have used the reduction algorithms to reduce finite sets of metarules. Table 15 summarises the results. We have shown that many sets of metarules relevant to ILP do not have finite reductions (Theorem 7). These negative results have direct (negative) implications for MIL. Specifically, our results mean that, in certain cases, a MIL system, such as Metagol or HEXMIL [32], cannot be given a finite set of metarules from which it can learn any program, such as when learning arbitrary Datalog programs. The results will also likely have implications for other forms of ILP which rely on metarules.
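To make the derivation-redundancy check concrete, the sketch below implements it for ground (propositional) Horn clauses, which sidesteps the unification needed for genuine second-order metarules: a clause is removed when the remaining clauses can derive it by resolution within a fixed depth. This is an illustrative simplification of D-reduction, not the algorithm used in the paper.

```python
def resolve(c1, c2):
    """Resolve c2's head against an atom in c1's body; clauses are
    (head, frozenset(body)) pairs over ground atoms."""
    h1, b1 = c1
    h2, b2 = c2
    if h2 in b1:
        return (h1, (b1 - {h2}) | b2)
    return None

def derivable(target, clauses, depth=3):
    """Can `clauses` derive `target` within `depth` rounds of resolution?"""
    pool = set(clauses)
    for _ in range(depth):
        if target in pool:
            return True
        new = {r for c1 in pool for c2 in pool
               if (r := resolve(c1, c2)) is not None}
        if new <= pool:  # fixpoint reached, nothing new derivable
            break
        pool |= new
    return target in pool

def d_reduce(clauses, depth=3):
    """Remove each clause that the remaining clauses can derive."""
    reduced = list(clauses)
    i = 0
    while i < len(reduced):
        rest = reduced[:i] + reduced[i + 1:]
        if derivable(reduced[i], rest, depth):
            reduced.pop(i)
        else:
            i += 1
    return reduced

chain1 = ('p', frozenset({'q'}))    # p :- q
chain2 = ('q', frozenset({'r'}))    # q :- r
composed = ('p', frozenset({'r'}))  # p :- r, one resolution step away

# `composed` is derivationally redundant, so only the chain clauses remain:
print(d_reduce([chain1, chain2, composed]))
```

The key contrast with S- and E-reduction is that the redundancy test is proof-theoretic: a clause is kept unless it can be rebuilt by resolution from the rest, which is why D-reduced sets retain clauses that subsumption or entailment checks would discard.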
Our experiments compared the learning performance of Metagol when using the different reduced sets of metarules. In general, using the D-reduced set outperforms both the S- and E-reduced sets in terms of predictive accuracy and learning time. Our experimental results provide strong evidence for our claim. We also compared a D*-reduced set, a subset of the D-reduced metarules, which, although derivationally incomplete, outperforms the other two sets in terms of predictive accuracies and learning times.

Limitations and future work
Theorem 7 shows that certain fragments of metarules do not have finite D-reductions. However, our experimental results show that using D-reduced sets of metarules leads to higher predictive accuracies and lower learning times compared to the other forms of reduction. Therefore, our work now opens up a new challenge of overcoming this negative theoretical result. One idea is to explore whether special metarules, such as a currying metarule [12], could alleviate the issue.
In future work we would also like to reduce more general fragments of logic, such as triadic logics, which would allow us to tackle a wider variety of problems, such as more of the games in the IGGP dataset.
We have compared the learning performance of Metagol when using different reduced sets of metarules. However, we have not investigated whether these reductions are optimal. For instance, when considering derivation reductions, it may, in some cases, be beneficial to re-add redundant metarules to the reduced sets to avoid having to derive them through SLD-resolution. In future work, we would like to investigate identifying an optimal set of metarules for a given learning task, or preferably learning which metarules to use for a given learning task.
We have shown that, although derivationally incomplete, the D*-reduced set of metarules outperforms the other reduced sets. In future work we would like to explore other methods which sacrifice completeness for efficiency.
We have used logical reduction techniques to remove redundant metarules. It may also be beneficial to simultaneously reduce metarules and standard background knowledge. The idea of purposely removing background predicates is similar to dimensionality reduction, widely used in other forms of machine learning [59], but under-researched in ILP [23]. Initial experiments indicate that this approach is feasible [8,10], and we aim to develop the idea in future work.