Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach

In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often explanations are given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by aligning a given instance of a concept with a similar counterexample. Contrasting a given instance with a structurally similar example which does not belong to the concept highlights which characteristics are necessary for concept membership. Such near misses have been proposed by Winston (1970) as efficient guidance for learning in relational domains. We introduce an explanation generation algorithm for relational concepts learned with Inductive Logic Programming (\textsc{GeNME}). The algorithm identifies near miss examples from a given set of instances and ranks these examples by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate \textsc{GeNME} with the well-known family domain consisting of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences of rule-based, example-based, and near miss explanations in the family and the arches domains.


Introduction
Explaining classifier decisions has gained much attention in current research. If explanations are intended for the end user, their main function is to make the human comprehend how the system reached a decision (Miller 2019). In recent years, a variety of approaches to explainability have been proposed (Adadi and Berrada 2018; Molnar 2019): Explanations can be local, focusing on the current class decision, or global, covering the learned model (Ribeiro et al. 2016; Adadi and Berrada 2018). A major branch of research addresses explanations by visualizations for end-to-end image classification (Samek et al. 2017; Ribeiro et al. 2016). Alternatively, explanations can be given in the form of symbolic rules (Lakkaraju et al. 2016; Muggleton et al. 2018) or in natural language (Stickel 1991; Ehsan et al. 2018; Siebers and Schmid 2019). A third approach is to offer prototypical examples to illustrate a model (Bien et al. 2011; Gurumoorthy et al.). Finally, counterexamples can be used as counterfactuals or contrastive explanations. Counterfactuals typically are minimal changes in feature values which would have resulted in a different decision, such as: You were denied a loan because your annual income was £30,000. If your income had been £45,000, you would have been offered a loan. (Wachter et al. 2017). In philosophy, counterfactuals have been characterized by the concept of a 'closest possible world', that is, the smallest change required to obtain a different (and more desirable) outcome (Pollock 1976). Contrastive explanations have been proposed mainly for image classification. For instance, the contrastive explanation method CEM (Dhurandhar et al. 2018) highlights what is minimally but critically absent in an image in order to belong to a given class. The MMD-critic (Kim et al. 2016) can identify nearest prototypes and nearest miss instances in image data such as handwritten digits and in Imagenet datasets.
Furthermore, the ProtoDash algorithm has been proposed to identify prototypes and criticisms for arbitrary symmetric positive definite kernels; it has been applied to both tabular and image data.
An approach related to counterexamples has been proposed in early AI research by Winston in the context of learning relational concepts such as arch (Winston 1970). He demonstrated that presenting near miss examples, in which only a small number of the relational aspects required for class membership are missing, speeds up learning. Similarly, in cognitive science research, it has been shown that the alignment of structured representations helps humans to understand and explain concepts (Gentner and Markman 1994). Gentner and Markman found that it is easier for humans to find the differences between pairs of similar items than between pairs of dissimilar items. For example, it is easier to explain the concept of a light bulb by contrasting it with a candle than with a cat (Gentner and Markman 1994).
Induction of relational concepts has been investigated in Inductive Logic Programming (ILP) (Muggleton and De Raedt 1994), statistical relational learning (Koller et al. 2007), and recently also in the context of deep learning with approaches such as RelNN (Kazemi and Poole 2018) and Differentiable Neural Computers (DNCs) (Graves et al. 2016). DNCs have been demonstrated to be able to learn symbolic relational concepts such as family relations or travel routes in the London underground system. These domains are typical examples for domains where ILP approaches have been demonstrated to be highly successful (Muggleton et al. 2018). For DNCs, questions and answers are represented as Prolog clauses. However, in contrast to ILP, the learned models are black-box. For the family domain as well as for an isomorphic fictitious chemistry domain it has been shown that rules learned with ILP fulfill Donald Michie's criterion of ultra-strong machine learning (Muggleton et al. 2018). Ultra-strong machine learning according to Michie requires a machine learning approach to teach the learned model to a human, whose performance is consequently increased to a level beyond that of the human studying the training data alone. In Muggleton et al. (2018) this characteristic has been related to the comprehensibility of learned rules or explanations: Comprehensibility has been defined such that a human who is presented with this information is able to classify new instances of the given domain correctly.
For ILP as well as for other relational learners such as DNCs, verbal explanations can be helpful to make a system decision transparent and comprehensible. For example, it can be explained why grandfather(ian, kate) holds by presenting the relations on the path from ian to kate in the family tree given in Figure 1: Ian is a grandfather of Kate because Ian is male and Ian is the father of Tom and Tom is the father of Kate. Alternatively, it might be helpful for understanding the concept grandfather to present a contrastive example in the form of a near miss explanation. For instance, Jodie is NOT the grandfather of Kate because she is NOT male, or Mat is NOT the grandfather of Ian because he is in a child-of-child relation to Ian and NOT in a parent-of-parent relation. The first near miss corresponds to the concept of a grandmother, emphasizing the importance of the attribute male for grandfather. The second near miss corresponds to the concept of a grandson, emphasizing the importance of the relation parent. To our knowledge, generating such near miss examples to explain learned relational concepts has not been investigated yet, neither in the context of ILP nor for other machine learning approaches.
In the following, we discuss the function of near miss examples. Afterwards, we present an algorithmic approach to generate near miss examples in the context of ILP and demonstrate the approach for a generic family domain, a visual domain and a real-world domain dealing with file management (Siebers and Schmid 2019). Finally, we present an empirical evaluation with human participants where we compare human preferences of different types of explanations for the family and the arches domain, namely rule-based, example-based, near miss, and far miss explanations.

The Function of Near Miss Examples
Near miss examples have been introduced by Winston as a human-like strategy for machine learning (Winston 1970): A near miss example for a concept is an example which does not belong to the concept but has a strong overlap with positive examples. Such near miss examples are helpful to guide the model construction of a machine learning algorithm (Telle et al. 2019). Winston illustrated learning with near misses in the context of relational visual domains. Concepts are represented as compounds of primitive blocks such as cubes. For instance, positive examples for arches must consist of at least two objects playing the role of supporters (pillars) and another object on top of them (roof). Negative examples for an arch might be a tower of several cubes (a far miss) or two pillars with no space between them covered by a roof (a near miss).
In the context of machine learning of relational concepts such as Winston's arches, molecules (King et al. 1996), or Turing-complete languages (Telle et al. 2019), carefully constructed near misses given as negative examples can speed up learning considerably (see Figure 2.a). In this case, the machine learning expert plays a role similar to that of a school teacher who identifies helpful examples (Schmid et al. 2003). We propose that what is effective for learning is also effective for explaining a learned model (see Figure 2.b): for some concept that an AI system has learned, it can explain its model by constructing a near miss example. While machine learning is typically unidirectional (the human provides the training examples and the system generalizes a model), explanations can support interactive machine learning scenarios based on a bidirectional partnership between human and AI system (Nguyen et al. 2018).
While there are some considerations about what constitutes helpful examples in educational psychology (Gentner et al. 2003) and the insights given in Winston's seminal work (Winston 1970), there exist no general principles for constructing helpful near miss explanations. We base the algorithm presented in the next section on some general observations which we illustrate with the family domain example of Figure 1.

Near Miss Explanations
In the following, we introduce the GENME algorithm for generating near miss explanations. Our approach extends the comprehensible machine learning approach ILP (Muggleton et al. 2018) with a contrastive explanation component. First, we introduce the basic notation and concepts of ILP. Then the concept of a near miss explanation is formally defined and the generation algorithm is presented.

Notation
We introduce basic notation for first-order logic theories based on the clause form underlying the logic programming language Prolog (Sterling and Shapiro 1994). Following Prolog's notational conventions, variables, constants and predicate symbols are represented as strings of letters, numbers, and underscores where variables must start with an upper case letter and constants and predicate symbols with a lower case letter. The arity of a predicate symbol is the number of arguments it takes. A predicate is called attribute if it has arity one and relation otherwise.
Every variable as well as every constant is a term. We call constants ground terms. A predicate symbol of arity n followed by a bracketed n-tuple of terms is called atom, or positive literal. Function sym(A) returns the predicate symbol of atom A. The negation of an atom is called negative literal. The negation symbol is ¬. A literal is ground if all terms in its n-tuple are ground.
A clause is an implication where the antecedent is a set of literals and the consequent is an atom. We write the implication reversed, as H ← {L 1 , . . . , L m }. The consequent of a clause C is called its head, head(C). Its antecedent is called the body of the clause, body(C). For convenience, we may omit the braces surrounding the body. If the body of clause C is the empty set, H ← {}, we call C a fact, omit the antecedents, and simply write H. A clause is called ground when all its literals are ground. A set of clauses is called a (clausal) theory.
A substitution is a mapping from variables to terms. We denote a substitution θ by {x_1 → t_1, . . . , x_k → t_k} where x_1, . . . , x_k are variables and t_1, . . . , t_k are terms. A substitution is applied to a term by simultaneously replacing all x_i in the term by the corresponding t_i. A substitution is applied to a literal by applying it to all terms in the literal, and to a clause by applying it to all literals in the clause. We denote the application of the substitution θ to a term, literal, or clause X by Xθ. If a literal or a set of literals K is true given a clausal theory T, we say that T models K, written T |= K. Theory T models an atom A if there exists a clause C ∈ T and a substitution θ such that A = head(Cθ) and T |= body(Cθ). Using negation by failure, a clausal theory T models a negative literal ¬A if T does not model A, T ⊭ A. A theory T models a set of literals {L_1, . . . , L_n} if there is a substitution θ such that T models L_iθ for 1 ≤ i ≤ n. By definition, the empty set {} is modeled by any theory.
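The notions of substitution and application can be made concrete with a small sketch (not the paper's code; the tuple representation of literals and the upper-case-variable convention are assumptions of this sketch, mirroring Prolog's conventions):

```python
# Terms are strings; variables start with an upper-case letter.
# A literal is a (predicate, args) tuple, a clause is (head, body).

def is_variable(term):
    return term[0].isupper()

def apply_to_literal(theta, literal):
    """Apply substitution theta (dict: variable -> term) to one literal."""
    pred, args = literal
    return (pred, tuple(theta.get(a, a) if is_variable(a) else a for a in args))

def apply_to_clause(theta, clause):
    """Apply theta to the head and every body literal of a clause."""
    head, body = clause
    return (apply_to_literal(theta, head),
            frozenset(apply_to_literal(theta, lit) for lit in body))

theta = {"A": "ian", "B": "kate", "C": "tom"}
clause = (("grandfather", ("A", "B")),
          frozenset({("male", ("A",)),
                     ("parent", ("A", "C")),
                     ("parent", ("C", "B"))}))
print(apply_to_clause(theta, clause))
```

Applying θ = {A → ian, B → kate, C → tom} grounds every literal of the clause simultaneously, as required by the definition above.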

Basic Concepts of ILP
ILP is a sub-field of symbolic machine learning which deals with learning clausal theories from examples (Muggleton and De Raedt 1994). Such clausal theories allow to represent relational concepts where either the target defines a relation (i.e. has more than one argument) or the target is defined over relational structures such as Winston's arches. For instance, a theory for grandfather can be learned from positive examples, such as grandfather(ian, kate), and negative examples, such as grandfather(alan, tom) (see Figure 3). Positive and negative examples for the target concept are represented as ground atoms. Additionally, a background knowledge theory is provided. In the family domain, the facts parent(tom, kate) and male(ian) belong to the background knowledge. From the examples and the background knowledge, a theory for the target concept can be learned, such as

grandfather(A, B) ← male(A), parent(A, C), parent(C, B). (1)

In general, a learned theory can include several clauses characterizing the target concept. For example, the target concept grandparent can be described by a set of clauses taking into account the genders of the respective parents. It can also be the case that target clauses are not exclusive. That is, a positive example P might follow from multiple clauses.
With the learned theory, new instances given as ground atoms can be classified. For example, grandfather(alan, kate) will be classified as positive; grandfather(becky,tom) will be classified as negative.
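How classification with such a learned clause works can be sketched as follows. The facts are illustrative stand-ins consistent with the examples in the text, not the paper's full background theory of Figure 3:

```python
# Hypothetical family facts (alan is assumed here to be a parent of becky).
FACTS = {("male", ("ian",)), ("male", ("alan",)), ("male", ("tom",)),
         ("female", ("becky",)), ("female", ("jodie",)), ("female", ("kate",)),
         ("parent", ("ian", "tom")), ("parent", ("jodie", "tom")),
         ("parent", ("alan", "becky")), ("parent", ("tom", "kate")),
         ("parent", ("becky", "kate"))}

def children_of(person):
    return {args[1] for (pred, args) in FACTS
            if pred == "parent" and args[0] == person}

def is_grandfather(x, y):
    """Check the clause grandfather(A, B) <- male(A), parent(A, C), parent(C, B)."""
    return ("male", (x,)) in FACTS and any(
        y in children_of(c) for c in children_of(x))

print(is_grandfather("alan", "kate"))   # True: classified as positive
print(is_grandfather("becky", "tom"))   # False: classified as negative
```

The check simply searches for an intermediate person C such that both parent literals of the clause body hold, which corresponds to finding a substitution θ with T |= body(Cθ).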

Near Miss Examples and Explanations
Positively classified instances are modeled by the learned theory as introduced in Subsection 3.1. As mentioned above, theory T consists of predefined background clauses and clauses learned for the target concept. For example, the grandfather relation holds for ian and kate given the theory in Equation 1 together with background knowledge about parent relations and the gender of persons in a given family domain such as the one shown in Figure 3. An explanation for this fact has to make explicit how it can be derived from the theory. The reason why grandfather(ian, kate) holds is that ian is male and ian is a parent of tom and tom is a parent of kate. In general, we call an explanation for a positive example a local explanation:

Definition 1 (Local Explanation). A local explanation for a positive example P is a ground clause Cθ where C ∈ T such that P = head(Cθ) and T |= body(Cθ).
To emphasize which information is crucial for making someone a grandfather of someone else, a near miss explanation might be helpful. As argued in Section 2, a near miss example is not a positive instance for the target concept, but illustrates a semantically similar concept. For example, given the positive example grandfather(ian, kate), possible near miss examples could be the female parent of a parent (that is, the grandmother) of kate or a male child of a child (that is, a grandson) of ian. Formally, we define near miss explanations and near miss examples:

Definition 2 (Near Miss Explanation). Given a local explanation Cθ and a minimally changed clause C′ with substitution θ′, we call C′θ′ a near miss explanation and ∆ head(C′θ′) a near miss example if T |= body(C′θ′) and T ⊭ head(C′θ′). The symbol ∆ marks literals as near miss examples.
What constitutes a minimally changed clause is domain dependent. In general, we understand changing a clause as replacing literals in its body by different literals. The most basic change is replacing a single literal by its negation. For example, the attribute male could be changed to ¬male; the relation parent could be changed to ¬parent. In a geometric domain, an attribute large could be changed to ¬large; a relation above could be changed to ¬above. However, such negations are too unspecific for many domains. In a more fine-grained geometric domain, ¬large could mean small or medium. Therefore, we propose that it is helpful to define pairs of semantically opposing predicate symbols, for example inverses, in T when modeling a particular domain. In natural language semantics, such relational opposites are one of the basic relations between lexical units (Palmer 1981).
In the family domain, the pairs male/female and parent/child are semantic opposites. To explain grandfather(ian, kate), the near miss example ∆ grandfather(jodie, kate) (which is actually the grandmother) can be derived by replacing male(A) with female(A) in Equation 1. An alternative near miss can be derived by inverting the parent relation to child. Because grandfather relies on the transitive sequence of two parent relations, both occurrences should be replaced, resulting in ∆ grandfather(mat, ian) (which should actually read grandson(mat, ian)).
Depending on the domain, a minimal change of a clause C might therefore consist of replacing either a single literal or multiple literals. Which literals may be replaced may also depend on the semantic opposites involved. Thus, we introduce domain-dependent rewriting filters V_{p→q} to formalize this connection. V_{p→q}(B) extracts all valid literal sets from clause body B such that the predicate symbol p may be replaced by predicate symbol q in those literal sets. For example, V_{male→female} applied to the body of Equation 1 yields {male(A)}, and V_{parent→child} applied to the body of Equation 1 yields {parent(A, C), parent(C, B)}. That is, either male may be replaced (by female) or both parent literals (by child). These rewriting filters can be selected by the respective user of the algorithm.
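A rewriting filter can be sketched as a small function over clause bodies. Whether a single occurrence or all occurrences of p must be replaced is part of the filter's definition; the `all_occurrences` flag is an assumption of this sketch, not the paper's notation:

```python
# Literals are (predicate, args) tuples; a body is a list of literals.

def rewriting_filter(p, q, all_occurrences=False):
    """Build V_{p->q}: returns the valid literal sets of a body in which
    predicate symbol p may be replaced by q."""
    def V(body):
        hits = [lit for lit in body if lit[0] == p]
        if not hits:
            return []
        if all_occurrences:
            return [set(hits)]          # one literal set: replace every p
        return [{lit} for lit in hits]  # one literal set per occurrence
    return V

# Body of Equation 1: grandfather(A, B) <- male(A), parent(A, C), parent(C, B)
BODY = [("male", ("A",)), ("parent", ("A", "C")), ("parent", ("C", "B"))]

V_male_female = rewriting_filter("male", "female")
V_parent_child = rewriting_filter("parent", "child", all_occurrences=True)

print(V_male_female(BODY))   # one literal set: {male(A)}
print(V_parent_child(BODY))  # one literal set containing both parent literals
```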
As the head of a clause is never changed, head(C) = head(C′), θ′ must be different from θ for each near miss explanation C′θ′ w.r.t. local explanation Cθ. Otherwise, the near miss example would be identical to the positive example, head(C′θ′) = head(Cθ) = P, which is by definition not a near miss example (T |= P). Based on this change we define the degree of a near miss explanation:

Definition 3 (Degree of Near Miss Explanation). Given a near miss explanation C′θ′ w.r.t. local explanation Cθ, the degree n of the near miss explanation is the number of changed replacements, n = |θ′ \ θ|.
In our approach, we derive near miss examples rather than counterfactuals. That is, near misses must be defined over constants in the background knowledge; it is not possible to invent new constants. Thus, we can define possible near miss examples, that is, near miss candidates:

Definition 4 (Near Miss Candidate). A near miss candidate for a positive example P is a ground atom N which has the same predicate symbol as P, sym(N) = sym(P), but is not modeled by the theory T, T ⊭ N.
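Enumerating near miss candidates can be sketched as follows. The set of persons and the set of modeled atoms are illustrative assumptions (the constant `mat` appears in the text; the others are stand-ins), with a plain set membership test standing in for T |= N:

```python
import itertools

PERSONS = ["alan", "becky", "ian", "jodie", "kate", "mat", "tom"]
# Stand-in for the atoms the theory models (T |= grandfather(., .)).
MODELED = {("grandfather", ("ian", "kate")), ("grandfather", ("alan", "kate"))}

def near_miss_candidates(pred, arity):
    """All ground atoms with predicate symbol `pred` over the domain
    constants that are not modeled by the theory."""
    return [(pred, args)
            for args in itertools.product(PERSONS, repeat=arity)
            if (pred, args) not in MODELED]

cands = near_miss_candidates("grandfather", 2)
print(len(cands))  # 7*7 - 2 = 47 candidates
```

This also illustrates the upper bound discussed later: with |C| constants and a target literal of arity n, at most |C|^n candidates exist.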

Algorithm
To generate near miss explanations we introduce GENME (Algorithm 1). Given a positive example P represented as a ground atom, a domain theory T given as a set of clauses modeling the target concept together with clauses that describe the application domain, and a set of rewriting filters O, GENME returns a set of sets of near miss examples E_d. Each set E_d contains near miss explanations of a given degree d, inducing a partial ordering over counterexamples in relation to instance P. To generate all near miss examples for the given positive example, GENME first generates the set of all near miss candidates and then iterates over all local explanations. To ensure that the algorithm terminates, we only allow near miss candidates that are ground with constants already present in the domain. For each local explanation, we iterate over all valid literal sets (lines 4 and 5) to generate a minimally changed clause C′ by renaming predicate symbol p to q (line 6, applying Algorithm 2).
For each such minimally changed clause C′, GENME iterates over all near miss candidates (line 7) and all possible substitutions θ′ in increasing distance from θ (lines 10-12). If there are substitutions θ′ such that head(C′θ′) equals the near miss candidate and the theory models body(C′θ′) for a given distance (line 13), we add all near miss explanations for this distance to E_d (lines 13 and 16). As soon as a near miss explanation for the given d is found, we continue with the next near miss candidate N ∈ N.
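The core loop can be illustrated end-to-end for the grandfather example. This is a compact, brute-force re-implementation sketch, not the authors' code; the family facts and the constant `mat` are assumptions of this sketch, and `child` is materialized as the inverse of `parent`:

```python
import itertools

# Illustrative family facts (an assumption of this sketch).
FACTS = {("male", ("ian",)), ("male", ("tom",)), ("male", ("mat",)),
         ("female", ("jodie",)), ("female", ("kate",)),
         ("parent", ("ian", "tom")), ("parent", ("jodie", "tom")),
         ("parent", ("tom", "kate")), ("parent", ("tom", "mat"))}
# `child` as the semantic opposite (inverse) of `parent`.
FACTS |= {("child", (f[1][1], f[1][0])) for f in list(FACTS) if f[0] == "parent"}

PERSONS = sorted({c for (_, args) in FACTS for c in args})

def subst(literal, th):
    pred, args = literal
    return (pred, tuple(th.get(a, a) for a in args))

RULE_HEAD = ("grandfather", ("A", "B"))
RULE_BODY = [("male", ("A",)), ("parent", ("A", "C")), ("parent", ("C", "B"))]

def modeled_heads(body):
    """All ground heads derivable from `body` over FACTS (stand-in for T |= .)."""
    heads = set()
    for vals in itertools.product(PERSONS, repeat=3):
        th = dict(zip("ABC", vals))
        if all(subst(lit, th) in FACTS for lit in body):
            heads.add(subst(RULE_HEAD, th))
    return heads

POSITIVES = modeled_heads(RULE_BODY)           # atoms the theory models
theta = {"A": "ian", "B": "kate", "C": "tom"}  # local explanation for P

def near_misses(changed_body):
    """Near miss examples for one minimally changed clause, grouped by degree
    (a head may show up at several degrees; GENME keeps the lowest one)."""
    by_degree = {}
    for vals in itertools.product(PERSONS, repeat=3):
        th = dict(zip("ABC", vals))
        head = subst(RULE_HEAD, th)
        if head in POSITIVES:          # modeled by the theory: not a miss
            continue
        if all(subst(lit, th) in FACTS for lit in changed_body):
            d = sum(theta[v] != th[v] for v in "ABC")   # |theta' \ theta|
            by_degree.setdefault(d, set()).add(head)
    return by_degree

# Minimally changed bodies produced by the two rewriting filters.
female_body = [("female", ("A",)), ("parent", ("A", "C")), ("parent", ("C", "B"))]
child_body = [("male", ("A",)), ("child", ("A", "C")), ("child", ("C", "B"))]

print(near_misses(female_body))  # nearest miss at degree 1: grandfather(jodie, kate)
print(near_misses(child_body))   # includes grandfather(mat, ian)
```

With this fact set, replacing male by female yields ∆ grandfather(jodie, kate) at the lowest degree, and inverting both parent literals yields ∆ grandfather(mat, ian), matching the examples discussed earlier.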

Termination, Time Complexity and Implementation Details
The four nested for-all loops in GENME (starting in lines 3, 4, 5 and 7) all iterate over finite sets and are therefore guaranteed to terminate. The while loop (starting in line 10) with the included for-all loop terminates at the latest when the degree reaches the magnitude of the substitution, |θ|. GENME is correct in the sense that it does not output explanations that are positive examples for the given theory. This is achieved by only considering rule heads that are not modeled by the theory. The algorithm is also guaranteed to find the miss explanations with lowest degree d for a given minimally changed clause C′, since we iteratively increment d and exit the search as soon as we find a miss explanation for C′.
In its given explicit, non-optimized realization, Algorithm 1 has a time complexity dependent on the number of constants |C| in the background theory. The number of clauses C ∈ T and the number of filters V_{p→q} ∈ O have a linear impact on the runtime. The upper bound for the number of candidates N is given by the magnitude of the Cartesian product, |C|^n, where n denotes the arity of the target literal. This upper bound is reached when no grounding of the target literal is modeled by theory T.
The algorithm iterates over N ∈ N and tests whether a substitution θ′ can be found for the minimally changed clause C′ such that T |= body(C′θ′) and head(C′θ′) = N. The algorithm first tries to find a near miss explanation by testing θ′ where only one substitution rule is different from the original θ (d = 1). If no near miss explanation is found, the degree d is gradually incremented until a near miss explanation is found or d reaches |θ|. For a given d, the number of possible combinations of substitution rules to change in θ is given by the binomial coefficient $\binom{|\theta|}{d}$. For each changed rule, the number of new terms to use as substitute is |C| - 1. The number of possible θ′ for a given d is therefore $\binom{|\theta|}{d} \cdot (|C| - 1)^d$. For the family domain the numbers would be |θ| = 3 and |C| = 10. The respective runs for all d would add up to $\sum_{d=1}^{3} \binom{3}{d} \cdot 9^d = 999$ altered substitutions that would have to be checked.
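The arithmetic of this worked example can be checked directly (`math.comb` computes the binomial coefficient):

```python
from math import comb

# |theta| = 3 substitution rules, |C| = 10 constants in the family domain.
theta_size, n_constants = 3, 10

# sum over degrees d of: (ways to pick d rules to change) * (alternatives per rule)^d
total = sum(comb(theta_size, d) * (n_constants - 1) ** d
            for d in range(1, theta_size + 1))
print(total)  # 999
```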
The inefficiency of the algorithm is mainly due to its naive filtering strategy. An implementation can be made more efficient with the following considerations: Since we are only interested in misses, we can constrain the substitutions for variables occurring in the head of C′ to the ones dictated by the current N ∈ N. If we also check the equality of head(C′θ′) and N before checking T |= body(C′θ′), we can safely skip to the next θ′ on a mismatch. Additionally, the user can be given the option to label particular constants in the local explanation as immutable; labeled constants are then excluded from the search for alternative substitutions θ′, which can improve overall efficiency. A further efficiency gain can be achieved by choosing rewriting filters that are meaningful in the chosen domain and whose predicates actually occur in T.

Application to Example Domains
In the following subsections we show that GENME generates plausible near miss explanations. We present the results of applying the algorithm to different domains: a generic family domain constituted by abstract family relations such as grandfather, a relational visual domain of blocksworld arches, and a real world domain dealing with file management.

Family Domain
The family domain models parent-child relations between persons. This domain is represented by the attributes male and female and the relations parent and child. Given this domain, we use GENME to explain the positive examples P1 = grandfather(ian, kate) and P2 = daughter(becky, jodie). We provide four rewriting filters: V_{male→female}, V_{female→male}, V_{parent→child}, and V_{child→parent}, where the first two filters allow changing a single occurrence of the predicate and the latter two require changing all occurrences.

The Winston Arches
The Winston arches visual domain models structural relations of building blocks. The domain features the relations contains (a structure contains a block), supports (a block supports another block), meets/not_meets (two blocks do/do not meet horizontally) and is_a (a block has a certain shape; the shape can either be wedge or brick). Figure 4 states the background clauses for the domain. When using GENME with this domain, we introduced a small change for handling constants that are contained in the theory (e.g. wedge, brick in Equation 3). In order to find alternative substitutions θ′ for these constants, we changed them to unique variables before applying GENME.
With the positive example arch(struct1) and the rewriting filters V_{supports→supported_by}, V_{supported_by→supports}, V_{meets→not_meets}, and V_{not_meets→meets}, which change all occurrences, GENME yields the following near miss explanation with degree d = 1:

arch(struct4) ← contains(struct4, a1), contains(struct4, b), contains(struct4, c), is_a(a1, wedge), is_a(b, brick), is_a(c, brick), supports(b, a1, struct4), supports(c, a1, struct4), meets(b, c, struct4)

∆ arch(struct4) is in fact a plausible near miss example since the only feature that has changed is the closure of the passage between the pillars. The wedge roof is a feature from the positive example that also holds for the miss.
Other explanations are found for d = 3:

arch(struct6) ← . . . , is_a(a2, brick), is_a(b, brick), is_a(c, brick), supports(b, a2, struct6), supports(c, a2, struct6), meets(b, c, struct6)

arch(struct5) ← . . . , supported_by(b, a2, struct5), supported_by(c, a2, struct5), . . .

The former explanation features the far miss example ∆ arch(struct6), where not only do the two pillars meet, but the roof also has a new shape. The latter explanation reflects the fact that ∆ arch(struct5) reverses the supports relation between pillars and roof. Both miss examples are farther away from the positive example and might be less helpful in explaining what an arch is. Table 2 shows the number of near misses GENME found for the different rewriting filters.

File Management
The purpose of the file management domain is to identify irrelevant files, that is, files that may be deleted by the user (see Siebers and Schmid 2019). For this, a file system with related files and folders is modeled. The domain is represented by relations such as creation time, file size, file name, and media type. Figure 6 shows an excerpt of the background clauses of the theory for a randomly generated file system. Additionally, the theory contains clauses for several auxiliary relations, such as older, larger, and in_same_folder. For simplicity, we assume that the learned irrelevancy concept is formalized in a single clause (see Equation 4).
As shown in Table 3, GENME identifies 68 near miss candidates as near miss examples for both examples (85.0 %). Nevertheless, a single near miss example is identified as nearest example (degree 1) for each example. Both are files of the same media type in the same folder which are newer than the file from the example.

Empirical Study of Human Preferences of Explanation Types
To investigate whether near miss explanations are considered useful by humans, we conducted an empirical study on preferences of explanation modalities for the abstract relational family domain and the visual relational arches domain. For both domains, a cover story has been given and participants had to evaluate the helpfulness of explanations given this setting which is described in the following subsections.
Overall, four different types of explanations have been considered:
- General rule (R): a global explanation of the concept a specific instance belongs to,
- Example (E): an example-based explanation in the form of a specific instance belonging to the concept,
- Near Miss (N): a contrastive example which has a high degree of structural similarity to the specific instance under consideration,
- Far Miss (F): a negative example for the considered concept which has a low degree of structural similarity to the specific instance under consideration.
All explanations were presented in the form of natural language sentences. Such natural language explanations can be generated from ILP-learned rules in a straightforward manner (Siebers and Schmid 2019). These four types of explanations address different information needs (Miller 2019): To understand the general concept, a global rule can be assumed to be especially helpful. However, it might be the case that the helpfulness differs for abstract in contrast to visual domains. In the latter case, a visual prototype might be more effective (Gurumoorthy et al.). In cognitive psychology, visual prototypes have been shown to be an effective means of concept representation for basic categories (Rosch 1979). For simple domains, an arbitrary instance might convey information similar to a prototype. For instance, in medical text books, example images are given to illustrate what a specific skin disease looks like. A near miss example should be especially helpful to highlight what (missing) information would be necessary to make an object belong to a class (Gentner and Markman 1994). This is often helpful if feature values or relations are hard to grasp. For instance, mushroom pickers use images to distinguish an edible mushroom from the visually most similar toadstool. Arbitrary negative examples, especially far misses, can be assumed to be less helpful to understand a concept or why a specific instance belongs to a concept. This type of explanation has been introduced as a baseline. We assume that combining different types of explanations can be more effective than each of these explanations alone. In particular, a combination of a global rule with a near miss might be most effective for explaining relational concepts.
Given the cover story, the helpfulness of the different types of explanations has been assessed with complete pairwise comparisons (Thurstone 1927). In the first part of the study, all pairings of the four explanation types have been presented in an arbitrary sequence and participants had to evaluate which one they found more helpful as an explanation given the cover story. In the second part of the study, pairs of pairs of explanations have been presented. In the final part, the helpfulness of the four explanation types for different information needs has been assessed explicitly: participants rated how helpful an explanation is for understanding the general concept, a particular example instance of the concept, and what does not belong to the concept (exclusion), on a scale from 1 (not at all) to 5 (absolutely).
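For illustration, the relative preference frequencies reported in the results section can be computed from pairwise-comparison counts as in the following sketch; the counts here are hypothetical, not the study's raw data:

```python
TYPES = ["R", "E", "N", "F"]

# wins[(a, b)] = number of participants preferring type a over type b
# (hypothetical counts for 73 participants x 6 pairings).
wins = {("R", "E"): 45, ("E", "R"): 28,
        ("R", "N"): 60, ("N", "R"): 13,
        ("R", "F"): 70, ("F", "R"): 3,
        ("E", "N"): 55, ("N", "E"): 18,
        ("E", "F"): 65, ("F", "E"): 8,
        ("N", "F"): 62, ("F", "N"): 11}

def preference_frequencies(wins):
    """Relative frequency with which each type was chosen over all pairings."""
    chosen = {t: 0 for t in TYPES}
    for (winner, _), n in wins.items():
        chosen[winner] += n
    total = sum(chosen.values())
    return {t: round(chosen[t] / total, 2) for t in TYPES}

print(preference_frequencies(wins))
```

These relative frequencies are the quantities behind rankings such as R > E > N > F; fitting a full Thurstone scale from such data would additionally map the frequencies onto an interval scale.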
The study was conducted as an online experiment with 73 participants (42 female, 31 male) with an average age of 35.72 years (min 18, max 64). 43 participants were employed, 27 were students, 2 were self-employed, and 1 person was retired. About half of the participants received the family domain first, followed by the arches domain; the other half started with the arches domain, followed by the family domain.
Although this is an exploratory study, given the considerations above, we can formulate the following hypotheses: (1) near miss explanations are preferred over far miss explanations in the pairwise comparisons; (2) near miss explanations are rated as the most helpful for understanding the boundaries of a given concept; (3) in the visual domain, example-based explanations are rated as more helpful than the rule.

Results for the Family Domain
The family domain (see Sections 3.2 and 4.1) has been presented with the family tree of Kate as given in Figure 1, with regular arrows and a legend that does not highlight miss examples. Participants were asked to imagine a conversation with their friend Kate, who comes from a native American tribe. She is curious about the different definitions of family relations in western culture, since she is not familiar with them and the definitions she grew up with are very different from the participant's. In particular, she wants to understand the grandfather relation between Kate and Ian.
The participants were then asked to give their preference on explanations following the pattern outlined above:
- (R) A grandfather is a male parent of one of your parents.
- (E) One of your parents, Tom, has a male parent called Ian. Ian is your grandfather.
- (N) Jodie, the female parent of your parent Tom, is NOT your grandfather; she is your grandmother.
- (F) Mat, the male child of Tom, who is the child of Ian, is NOT the grandfather of Ian; he is his grandson.
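Operationally, the rule (R) and the two miss examples can be checked against the kinship facts. The following Python fragment is only an illustrative encoding under assumed ground facts, not the representation used by the ILP system:

```python
# Assumed ground facts from the family tree (illustrative subset).
parent = {("ian", "tom"), ("jodie", "tom"), ("tom", "kate"), ("tom", "mat")}
male = {"ian", "tom", "mat"}
persons = {p for pair in parent for p in pair}

def grandfather(g, c):
    # grandfather(G, C) :- male(G), parent(G, P), parent(P, C).
    return g in male and any((g, p) in parent and (p, c) in parent for p in persons)

print(grandfather("ian", "kate"))    # True: the positive example (E)
print(grandfather("jodie", "kate"))  # False: near miss (N), male(G) fails
print(grandfather("mat", "ian"))     # False: far miss (F), parent direction reversed
```

The near miss fails on exactly one body literal of the rule, whereas the far miss violates the parent structure entirely.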
For the six simple explanation pairings we observed the following frequency ranking (rounded relative frequencies in brackets): R (0.43) > E (0.37) > N (0.17) > F (0.03). The rule for the relational concept was preferred over all other explanation modalities, followed by the example and the miss examples, with the far miss receiving the lowest preference. For an abstract relational domain, a rule describing the concept thus seems to be more helpful than stating an example. Nevertheless, when given the choice between a miss example close to the chosen example and a miss example further away, participants preferred the former, confirming the hypothesis that the choice of miss examples is important.
For the second set of 15 pairwise comparisons between pairings of the base explanations, we observed the frequency ranking RE (0.32) > RN (0.21) > EN (0.19) > EF (0.13) > RF (0.13) > NF (0.02). The rule-example pairing is favored over all other pairings, followed by the pairing of rule and low-degree (near) miss explanation. Again, a preference for the rule shows in this abstract domain. Table 4 shows the results of the questions on the purpose of explanations. The near miss explanation, although preferred over the far miss explanation, is rather unsuited for understanding the general concept of this abstract relational domain as well as for understanding why a particular example belongs to the concept. For the purpose of understanding the boundaries of the concept, however, the near miss explanation performs best compared to all other explanations. We therefore postulate that a near miss explanation can help humans to avoid false positives when making decisions.

Results for the Arches Domain
The Winston arches domain has been introduced as shown in Figure 5, without the object labels and with a focus on the arch labeled struct1. Participants were asked to imagine playing with building blocks with their five-year-old son. They want to show him a new structure called an arch, given the examples and counterexamples in Figure 5.
The explanations for the arches domain are:
- (R) An arch consists of two rectangle blocks that do not touch. They support a triangle block.
- (E) We showed the structure labeled struct1 as a positive example.
- (N) We showed the structure labeled struct4 as a near miss.
- (F) We showed the structure labeled struct6 as a far miss.
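The rule (R) can likewise be made operational. In the following Python sketch, the encodings of struct1 and struct4 are illustrative assumptions that follow the verbal descriptions above, not the actual domain representation:

```python
def is_arch(s):
    # arch :- rectangle(A), rectangle(B), A \= B, not touches(A, B),
    #         triangle(T), supports(A, T), supports(B, T).
    rects = [b for b, shape in s["shape"].items() if shape == "rectangle"]
    tris = [b for b, shape in s["shape"].items() if shape == "triangle"]
    return any(a != b
               and (a, b) not in s["touches"] and (b, a) not in s["touches"]
               and (a, t) in s["supports"] and (b, t) in s["supports"]
               for a in rects for b in rects for t in tris)

struct1 = {  # positive example: two free-standing pillars supporting a triangle
    "shape": {"p1": "rectangle", "p2": "rectangle", "top": "triangle"},
    "supports": {("p1", "top"), ("p2", "top")},
    "touches": set(),
}
struct4 = {  # near miss: identical, except that the two pillars touch
    "shape": {"p1": "rectangle", "p2": "rectangle", "top": "triangle"},
    "supports": {("p1", "top"), ("p2", "top")},
    "touches": {("p1", "p2")},
}

print(is_arch(struct1))  # True
print(is_arch(struct4))  # False: violates the "do not touch" literal
```

The near miss differs from the positive example in a single relation, which is exactly what makes it informative about the concept boundary.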
The first set of six questions yields the frequency ranking E (0.45) > R (0.30) > N (0.18) > F (0.08). For this visual domain it comes as no surprise that examples seem to be more helpful for explaining the concept. Also, the near miss explanation is preferred over the far miss explanation.
The 15 pairing decisions yield the ranking EN (0.27) > RE (0.25) > EF (0.21) > RN (0.14) > RF (0.08) > NF (0.04). This time the example-near miss pairing is a clear favorite. It seems that humans find stating an example along with a near miss more helpful for explaining concepts in a visual domain than stating a positive example along with the rule that describes the concept. Also, a rule in combination with a near miss seems to be more helpful than a rule with a far miss. Table 5 states the results for the questions on the purpose of explanations in the arches domain. As expected, the statement of an example seems to help humans understand the visual concept better than stating a general rule. Surprisingly, the rule was better suited for understanding why a particular example belongs to the concept. Both near and far misses were better suited for understanding what is not in the concept and can therefore be used to sharpen the boundaries of the given concept.

Discussion of Empirical Results
Given the presented results, the hypothesis that near miss examples should be preferred over far miss examples in the pairwise comparisons has been confirmed with one exception: For the arches domain, the combination of example and far miss was rated as more helpful than the combination of rule and near miss explanation. In accordance with the observation that examples are especially helpful as explanations in visual domains, a positive example together with a negative example for the concept of an arch is an effective means to indicate what aspects are necessary for a structure to belong to the concept. Our hypothesis that near miss examples should be rated as the most helpful for understanding the boundaries of a given concept has been confirmed for the family as well as for the arches domain. In the arches domain the differences between the helpfulness ratings of near vs. far miss explanations are not very pronounced, while for the family domain near miss explanations have been rated as much more helpful than far misses. This finding might be due to the rather simple category structure of the arches domain, where arbitrary negative examples might be suitable to highlight the relevance of what is missing to belong to the concept.
The third hypothesis has been confirmed for the helpfulness ratings in the visual domain. The example has been rated as more helpful than the rule for explaining the general concept. Furthermore, the example has been preferred over the rule in the complete pairwise comparison, and the combination of example and near miss example has been preferred most often.

Conclusions and Further Work
We introduced near miss explanations for relational concepts learned with ILP. The GeNME algorithm has been presented, which generates contrastive examples with different degrees of nearness to a specific positive instance of a given concept. The explanations facilitate identifying crucial aspects of the concept. The current version of GeNME relies on checking all candidate examples, which is not feasible for domains with a large number of constants. There, it might be necessary to restrict the candidate set to examples already close to the positive example. One way to achieve this might be to consider similarities within the graph spanned by the constants and predicate relations of the domain.
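As a rough illustration of the ranking idea (not the GeNME implementation itself), candidate miss examples can be ordered by how many body literals of the learned rule they still satisfy; all facts and candidate pairs below are assumptions for illustration:

```python
# Assumed ground facts (illustrative subset of the family domain).
parent = {("ian", "tom"), ("jodie", "tom"), ("tom", "kate"), ("tom", "mat")}
male = {"ian", "tom", "mat"}
persons = {p for pair in parent for p in pair}

def satisfied_literals(g, c):
    """Best number of body literals of grandfather(G, C) that hold for (g, c),
    i.e. male(G), parent(G, P), parent(P, C), over all bindings of P."""
    return max((g in male) + ((g, p) in parent) + ((p, c) in parent)
               for p in persons)

# Candidate miss examples for explaining grandfather(ian, kate):
candidates = [("jodie", "kate"), ("mat", "ian")]
ranked = sorted(candidates, key=lambda gc: satisfied_literals(*gc), reverse=True)
print(ranked)  # the near miss (jodie, kate) ranks before the far miss (mat, ian)
```

A candidate satisfying all but one body literal is a near miss; the fewer literals it satisfies, the farther the miss.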
In an empirical study we could demonstrate that humans rate near miss examples as helpful, especially in combination with a global rule-based explanation, and that humans evaluate near misses as especially helpful for understanding the crucial aspects that make an instance belong to a specific concept.
Initially, the focus of research on explainable AI (XAI) has been on visual highlighting of relevant information for black-box classifiers, which is mostly of interest to model developers (Samek et al. 2019). Recently, there is a growing interest in explanations for domain experts and end-users (Arrieta et al. 2020; Schmid to appear). For these groups, it is of special importance that explanations are easily comprehensible and effectively convey the most helpful information for a given task. Therefore, the next challenge for XAI systems will be to consider explanations of different modalities and to provide strategies for selecting the probably most helpful explanation for a given context (Williams and Lombrozo 2010). As cognitive science research suggests (Gentner and Markman 1994), near miss explanations can play an important role in highlighting what aspects are necessary for an instance to belong to a given class.