1 Introduction

Modern declarative programming paradigms allow for relatively simple modeling of various combinatorial problems. Nevertheless, solving these problems might become infeasible when the size of input instances and, correspondingly, the number of possible solution candidates start to grow (Dodaro et al., 2016). In many cases, these candidates are symmetric, i.e., one candidate can easily be obtained from another by renaming constants. In order to deal with large problem instances, the ability to encode Symmetry Breaking Constraints (SBCs) in a problem representation becomes an essential skill for programmers. However, the identification of symmetric solutions and the formulation of constraints that remove them is a tedious and time-consuming task. As a result, various tools have emerged that avoid the computation of symmetric solutions by, for instance, automatically finding a set of SBCs using properties of permutation groups or applying specific search methods that detect and ignore symmetric states; see (Margot, 2010; Sakallah, 2009; Walsh, 2012) for an overview.

Existing approaches to SBC generation can be classified into instance-specific and model-oriented ones. The former methods identify symmetries for a particular instance at hand by computing and adding ground SBCs to the problem representation (Cohen et al., 2006; Drescher et al., 2011; Puget, 2005). Unfortunately, computational advantages do not carry forward to large-scale instances or advanced encodings, where instance-specific symmetry breaking often requires as much time as it takes to solve the original problem. Moreover, ground SBCs generated by instance-specific approaches are (i) not transferable, since the knowledge obtained is limited to a single instance; (ii) usually hard to interpret and comprehend; (iii) derived from permutation group generators, whose computation is itself a combinatorial problem; and (iv) often redundant and might result in a degradation of the solving performance.

In contrast, model-oriented approaches aim to find general SBCs that depend less on a particular instance. The method presented in Devriendt et al. (2016) uses local domain symmetries of a given first-order theory. SBCs are generated by identifying argument positions in atoms of a formula that comprise object variables defined over the same subset of a domain given in the input. As a result, the computation of lexicographical SBCs is very fast. However, the method considers each first-order formula separately and thus cannot reliably remove symmetric solutions whose detection requires analyzing several formulas at once. The method of Mears et al. (2008) computes SBCs by generating small instances of parametrized constraint programs, and then finds candidate symmetries using Saucy (Codenotti et al., 2013; Darga et al., 2004) – a graph automorphism detection tool. Next, the algorithm removes all candidate symmetries that are valid only for some of the generated examples as well as those that cannot be proven to be parametrized symmetries using heuristic graph-based techniques. This approach can be seen as a simplified learning procedure that utilizes only negative examples represented by the generated SBCs.

In this work, we introduce a novel model-oriented method for Answer Set Programming (ASP) (Lifschitz, 2019) that aims to generalize the process of discarding redundant solution candidates for instances of a target domain using Inductive Logic Programming (ILP) (Cropper et al., 2020). The goal is to lift the SBCs of small problem instances and to obtain a set of interpretable first-order constraints. Such constraints cut the search space while preserving the satisfiability of a problem for the considered instance distribution, which improves the solving performance, especially in the case of unsatisfiability. The particular contributions of our work are:

  • We suggest several methods to generate a training set comprising positive and negative examples, allowing an ILP system to learn first-order SBCs for the problem at hand.

  • We define the components of an ILP learning task enabling the generation of lexicographical SBCs for ASP.

  • We analyze the features and characteristics of the results obtained by our methods, as well as the effects of language bias decisions on several combinatorial problems.

  • We present an approach that iteratively applies our method to revise constraints when new permutation group generators or more training instances become available.

  • We conduct performance experiments on variants of the pigeon-hole problem as well as the house-configuration problem (Friedrich et al., 2011). The obtained results show that the generated SBCs are easy to interpret in most cases and yield significant performance improvements over a state-of-the-art instance-specific method as well as over an ASP solver without SBCs.

This work extends the previous conference paper (Tarzariol et al., 2021) with two additional methods to obtain the positive and negative examples for an ILP task, i.e., an alternative atom ordering criterion and a full symmetry breaking approach, along with corresponding experimental results. Moreover, we provide much more detailed descriptions and analyses of experiments with the suggested language bias, the previously introduced learning settings, and the two new ones.

The structure of this paper is the following: a brief overview of the preliminaries is given in Sect. 2. Section 3 presents our approach, while Sect. 4 describes its implementation and specifies the components of an ILP learning task. In Sect. 5, we investigate observations from learning experiments conducted with our methods. Section 6 provides and discusses experimental results on the solving performance obtained for some combinatorial problems. Lastly, Sect. 7 concludes the paper and outlines directions for future work.

2 Background

This section introduces some basics and notations for ASP, symmetry breaking, and ILP.

2.1 Answer set programming

Answer Set Programming (ASP) (Lifschitz, 2019) is a declarative programming paradigm based on non-monotonic reasoning and the stable model semantics (Gelfond and Lifschitz, 1991). Over the past decades, ASP has attracted considerable interest thanks to its elegant syntax, expressiveness, and efficient system implementations, showing promising results in numerous domains, e.g., industrial, robotics, and biomedical applications (Erdem et al., 2016; Falkner et al., 2018). We will briefly define the syntax and semantics of ASP, and refer the reader to (Gebser et al., 2012; Lifschitz, 2019) for more comprehensive introductions.

Syntax An ASP program P is a set of rules r of the form:

$$\begin{aligned} a_0 \leftarrow a_1, \dots , a_m, not \,a_{m+1}, \dots , not \,a_n \end{aligned}$$

where \( not \) stands for default negation and \(a_i\), for \(0\le i \le n\), are atoms. An atom is an expression of the form \(p(\overline{t})\), where p is a predicate, \(\overline{t}\) is a possibly empty vector of terms, and the predicate \(\bot \) (with an empty vector of terms) represents the constant false. Each term t in \(\overline{t}\) is either a variable or a constant, and a literal l is an atom \(a_i\) (positive) or its negation \( not \,a_i\) (negative). The atom \(a_0\) is the head of a rule r, denoted by \(H(r)=a_0\), and the body of r includes the positive or negative, respectively, body atoms \(B^+(r) = \{a_1, \dots , a_m\}\) and \(B^-(r) = \{a_{m+1}, \dots , a_n\}\). A rule r is called a fact if \(B^+(r)\cup B^-(r)=\emptyset \), and a constraint if \(H(r) = \bot \).

Semantics The semantics of an ASP program P is given in terms of its ground instantiation \(P_{ grd }\), replacing each rule \(r\in P\) with instances obtained by substituting the variables in r by constants occurring in P. Then, an interpretation \({\mathcal {I}}\) is a set of (true) ground atoms occurring in \(P_{ grd }\) that does not contain \(\bot \). An interpretation \({\mathcal {I}}\) satisfies a rule \(r \in P_{ grd }\) if \(B^+(r) \subseteq {\mathcal {I}}\) and \(B^-(r) \cap {\mathcal {I}}=\emptyset \) imply \(H(r) \in {\mathcal {I}}\), and \({\mathcal {I}}\) is a model of P if it satisfies all rules \(r\in P_{ grd }\). A model \({\mathcal {I}}\) of P is stable if it is a subset-minimal model of the reduct \(\{H(r) \leftarrow B^+(r) \mid r \in P_{ grd }, B^-(r) \cap {\mathcal {I}} = \emptyset \}\), and we denote the set of all stable models of P by \( AS (P)\).
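To make these definitions concrete, the following minimal sketch (a toy two-rule program, assuming the clingo Python package is available; it is not taken from the paper) computes the stable models of a small program:

```python
import clingo

# Toy program: "a" holds if "b" cannot be derived, and vice versa.
PROGRAM = """
a :- not b.
b :- not a.
"""

ctl = clingo.Control(["0"])            # "0" requests all models
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])

with ctl.solve(yield_=True) as handle:
    for model in handle:
        # Each reported model is a stable model (answer set) of the program.
        print(sorted(str(atom) for atom in model.symbols(shown=True)))
# prints ['a'] and ['b'] (in some order), the two stable models
```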

2.2 Symmetry breaking

Most of the modern instance-specific approaches detect symmetries of a given object by representing it as an instance of the graph automorphism problem. This problem consists of finding all edge-preserving bijective mappings of a graph vertex set to itself. It is an attractive reduction target since the problem can be solved efficiently for many families of graphs using methods from group theory; see (Sakallah, 2009) for an overview.

A group is an abstract algebraic structure \(\langle G,*\rangle \) where G is a non-empty set, closed under a binary associative operator \(*\), such that G contains an identity element, and each element has a unique inverse. If the operator \(*\) is implicit, the group is represented by G. A subgroup H of a group G is a group such that \(H \subseteq G \). Given a set \(X=\{x_1,...,x_n\}\) of n elements, a permutation of X is a bijection that rearranges its elements. The symmetric group \(S_X\) is defined by the set of all the n! possible permutations of X, and its subgroups are called permutation groups. In cycle notation, we represent a permutation as a product of disjoint cycles, where each cycle \(\begin{pmatrix} x_1&x_2&x_3&\dots&x_k \end{pmatrix}\) means that the element \(x_1\) is mapped to \(x_2\), \(x_2\) to \(x_3\), and so on, until \(x_k\) is mapped back to \(x_1\); the elements mapped to themselves are not contained in the cycles.

Given a group G and a set X, the group action \(\alpha : G \xrightarrow {} S_X\) defines a permutation of the elements in X for each \(g \in G\). Then, each permutation \(\alpha (g)\) induces a partition on X, \({\mathcal {P}}^{\alpha (g)}(X)\), whose cells are called orbits of X under \(\alpha (g)\). The cells of the finest partition on X that refines \({\mathcal {P}}^{\alpha (g)}(X)\) for each \(g \in G\), \({\mathcal {P}}^{G}(X)\), are the orbits of X under group G.

Example 1

Given a set \(\{a, b, c, d, e\}\) of atoms ordered lexicographically and a permutation \(\pi = (a \; d \; e ) \; ( b \; c )\), let us consider the interpretation \(\{a, c\}\), represented by the binary integer 00101. To find the interpretations symmetric to it with respect to \(\pi \), we repeatedly apply the permutation to \(\{a, c\}\) until no new interpretation is obtained, e.g., \(a\mapsto d\) and \(c \mapsto b\) yielding \(\{b,d\}\), then \(d\mapsto e\) and \(b \mapsto c\) yielding \(\{c,e\}\), and so on. Thus, we get the orbit \(\{\{a, c\}, \{b, d\}, \{c, e\}, \{a, b\}, \{c, d\}, \{b, e\}\}\), or, in the integer representation, \(\{00101, 01010, 10100, 00011, 01100, 10010 \}\).
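The orbit computation of Example 1 can be reproduced in a few lines of plain Python; interpretations are represented as frozensets of atom names (a sketch added for illustration, not part of the original example):

```python
# Permutation pi = (a d e)(b c) as a mapping; atoms mapped to themselves are omitted.
pi = {"a": "d", "d": "e", "e": "a", "b": "c", "c": "b"}

def apply(perm, interpretation):
    """Apply a permutation to an interpretation (a set of atoms)."""
    return frozenset(perm.get(atom, atom) for atom in interpretation)

orbit = set()
current = frozenset({"a", "c"})
while current not in orbit:            # stop once the initial interpretation recurs
    orbit.add(current)
    current = apply(pi, current)

print(sorted(sorted(i) for i in orbit))
# the six interpretations {a,c}, {b,d}, {c,e}, {a,b}, {c,d}, {b,e}
```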

Letting G be a permutation group for a set X of ground atoms, the orbits of the set of interpretations over X under G identify equivalence classes of the truth assignments for X. When taking truth assignments as binary integers determined by some total order of the elements in X, the lex-leader scheme consists of specifying a set of Symmetry Breaking Constraints (SBCs) that may eliminate some interpretations but keep the smallest element in terms of the associated integer for each orbit. If the SBCs eliminate all symmetric assignments and preserve only the smallest element of each orbit, we obtain full symmetry breaking. However, dealing with up to n! permutations of the n atoms in X explicitly would be an infeasible approach, and a more efficient alternative is to focus on a subset of G to refine \({\mathcal {P}}^{G}(X)\). If this leads to a partition finer than \({\mathcal {P}}^{G}(X)\), then we get partial symmetry breaking. In this case, the SBCs preserve the smallest element as a representative of each orbit but also other symmetric interpretations. Considering a set of irredundant generators K of G is an effective heuristic for partial symmetry breaking since such generators represent G compactly. A set \(K \subset G\) of elements in a group \(\langle G,*\rangle \) is a set of generators for G if every element of G can be expressed as a combination of finitely many elements of K under the group operation. K is irredundant if no proper subset of it is a set of generators for G.

Example 2

Let us consider single applications of the generator \(\pi \) to the interpretations in the orbit obtained in Example 1. For each interpretation in the orbit, we check whether \(\pi \) maps it into a smaller interpretation according to the integer representation. Thus, the interpretations \(\{c, e\}\) and \(\{b, e\}\) are eliminated since applying \(\pi \) yields \(\{a, b\}\) or \(\{a, c\}\), respectively, while the interpretations \(\{a, c\}\), \(\{b, d\}\), \(\{a, b\}\), and \(\{c, d\}\) are preserved. As three symmetric interpretations are obtained in addition to the smallest interpretation of the orbit, the representative \(\{a, b\}\) (00011 in the integer representation), SBCs for direct applications of the generator \(\pi \) achieve partial symmetry breaking only.
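The corresponding domination check can be sketched as follows, using the binary-integer view from Example 1 (ascending atom order a, b, c, d, e, with the greatest atom as the most significant bit):

```python
ATOMS = ["a", "b", "c", "d", "e"]      # ascending atom order
pi = {"a": "d", "d": "e", "e": "a", "b": "c", "c": "b"}

def as_int(interpretation):
    """Binary integer of an interpretation; the greatest atom is the most significant bit."""
    return sum(1 << i for i, atom in enumerate(ATOMS) if atom in interpretation)

def dominated(interpretation, perm):
    """True if a single application of perm yields a lexicographically smaller interpretation."""
    image = {perm.get(atom, atom) for atom in interpretation}
    return as_int(image) < as_int(interpretation)

for i in [{"a","c"}, {"b","d"}, {"c","e"}, {"a","b"}, {"c","d"}, {"b","e"}]:
    print(sorted(i), "eliminated" if dominated(i, pi) else "preserved")
# {c,e} and {b,e} are eliminated; the other four interpretations are preserved
```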

Symmetry-Breaking Answer Set Solving (sbass) (Drescher et al., 2011) detects and eliminates syntactic symmetries in ASP by adding ground SBCs to an input ground program \(P_{ grd }\). A symmetry of \(P_{ grd }\) is given by a permutation \(\pi \) of ground atoms that keeps the program syntactically unchanged, i.e., \(P_{ grd }^\pi \) has the same rules as \(P_{ grd }\), where \(P_{ grd }^\pi \) is the set of rules obtained by applying \(\pi \) to the head and body literals of rules in \(P_{ grd }\). In the first step, sbass transforms \(P_{ grd }\) to a colored graph \({\mathcal {G}}_{P_{ grd }}\) such that permutation groups of \({\mathcal {G}}_{P_{ grd }}\) and their generators correspond one-to-one to those of \(P_{ grd }\). In the second step, it uses saucy (Codenotti et al., 2013; Darga et al., 2004) to find a set of group generators for \({\mathcal {G}}_{P_{ grd }}\). Finally, for each found generator, sbass constructs a set of SBCs based on the lex-leader scheme and appends them to \(P_{ grd }\). The modified ground program then provides an ASP solver with the means to avoid symmetric answer sets.

Example 3

To illustrate how sbass works, let us consider the pigeon-hole problem, which is about checking whether p pigeons can be placed into h holes such that each hole contains at most one pigeon. An encoding in ASP of this problem is:

[Listing a: ASP encoding of the pigeon-hole problem]

It takes as input ground facts over the predicates pigeon/1 and hole/1, where pigeons and holes are identified by consecutive integers, i.e., the facts pigeon(1..p). and hole(1..h). For example, solving the instance with \(p=3\) and \(h=3\) leads to six answer sets:

[Listing b: the six answer sets \( AS _1, \dots , AS _6\) with their binary integer representations]

The binary integer given on the right is obtained with the following total order of atoms:

[Listing d: the total order of the ground atoms p2h(P,H)]

Applying sbass to this pigeon-hole encoding, grounded with the previous input instance, produces the following set of generators:

[Listing c: the permutation group generators returned by sbass]

According to the lex-leader scheme, we should discard all answer sets but \( AS _6\), which is the only answer set such that applying either generator does not lead to a smaller interpretation:

[Listing e: results of applying the generators to the answer sets]

Therefore, with this instance, we obtain full symmetry breaking by applying the lex-leader scheme for the irredundant generators returned by sbass. However, symmetric solutions can be preserved for other inputs. E.g., with \(p=3\) and \(h=4\), two answer sets are preserved, even though the group generated by the returned generators puts all answer sets into a single cell with one representative.
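As the code listings are not reproduced in this text, the following sketch relies on a plausible reconstruction of the encoding (the choice rule matches the one quoted in Sect. 4.3) and uses the clingo Python API to enumerate the six answer sets of the \(p=h=3\) instance:

```python
import clingo

# Plausible reconstruction of the pigeon-hole encoding; the original listing
# is not reproduced here, but the choice rule is quoted in Sect. 4.3.
ENCODING = """
{ p2h(P,H) : hole(H) } = 1 :- pigeon(P).   % place each pigeon into exactly one hole
:- p2h(P1,H), p2h(P2,H), P1 < P2.          % at most one pigeon per hole
#show p2h/2.
"""
INSTANCE = "pigeon(1..3). hole(1..3)."

ctl = clingo.Control(["0"])                 # enumerate all answer sets
ctl.add("base", [], ENCODING + INSTANCE)
ctl.ground([("base", [])])
with ctl.solve(yield_=True) as handle:
    for model in handle:
        print(sorted(str(atom) for atom in model.symbols(shown=True)))
# six answer sets, one per bijection of pigeons {1,2,3} to holes {1,2,3}
```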

2.3 Inductive logic programming

Inductive Logic Programming (ILP) (Cropper et al., 2020) is a form of machine learning whose goal is to learn a logic program that explains a set of observations in the context of some pre-existing knowledge. Since the field's foundation, the majority of research has addressed Prolog semantics (Cropper and Muggleton, 2016; Muggleton, 1995; Srinivasan, 2004), even though applications in other logic paradigms have appeared in recent years. The most expressive ILP system for ASP is Inductive Learning of Answer Set Programs (ilasp) (Law et al., 2014, 2021), which can be used to solve a variety of ILP tasks.

In ILP, a learning task \(\langle P,E,H_M \rangle \) is defined by three elements: a background knowledge P, a set of (positive and negative) examples E, and a hypothesis space \(H_M\), which defines all the rules that can be learned. The learned hypothesis is a subset of the hypothesis space that satisfies a specified learning setting: for ilasp, the setting is learning from answer sets (Law et al., 2014). Before defining it, we introduce the terminology used by ilasp authors. A Partial Interpretation (PI) is a pair of sets of atoms, \(e_{pi}=\langle T, F\rangle \), called inclusions (T) and exclusions (F), respectively. Given a (total) interpretation \({\mathcal {I}}\) and a PI \(e_{pi}\), we say that \({\mathcal {I}}\) extends \(e_{pi}\) if \( T \subseteq {\mathcal {I}}\) and \(F \cap {\mathcal {I}} = \emptyset \). We can augment \(e_{pi}\) with an ASP program C to obtain a Context Dependent Partial Interpretation (CDPI) \(\langle e_{pi}, C\rangle \). Given a program P, a CDPI \(e=\langle e_{pi}, C\rangle \), and an interpretation \({\mathcal {I}}\), we say that \({\mathcal {I}}\) is an accepting answer set of e with respect to P if \({\mathcal {I}} \in AS (P \cup C)\) such that \({\mathcal {I}}\) extends \(e_{pi}\).

A learning task for ilasp is given by an ASP program P as background knowledge, two sets of CDPIs, \(E^+\) and \(E^-\), as positive and negative examples, and the hypothesis space \(H_M\) defined by a language bias M, which limits the potentially learnable rules. The learned hypothesis \(H \subseteq H_M\) must respect the following criteria: (i) for each positive example \(e \in E^+\), there is some accepting answer set of e with respect to \(P\cup H\); and (ii) for any negative example \(e \in E^-\), there is no accepting answer set of e with respect to \(P\cup H\). If multiple hypotheses satisfy the conditions, the system returns one of the shortest, i.e., with the minimum number of literals (Law et al., 2014). In Law et al. (2018), the authors extend the expressiveness of ilasp by allowing noisy examples. With this setting, if an example e is not covered (i.e., there is an accepting answer set for e if it is negative, and none, if it is positive), the corresponding weight is counted as a penalty. Therefore, the learning task becomes an optimization problem with two goals: minimize the size of H and minimize the total penalties for the uncovered examples.

Now, we will define the syntax of ilasp necessary for our work and refer the reader to the system’s manual (Law et al., 2021) for further details. A CDPI is expressed as follows:

$$\begin{aligned} \texttt {\#type(ID@W,\{Inc\},\{Exc\},\{C\}).} \end{aligned}$$

where type is either pos or neg, ID is a unique identifier for the example, W is a positive integer representing the example’s weight (if not defined, the weight is infinite), Inc and Exc are two sets of atoms, and C is an ASP program. The language bias can be specified by mode declarations, which define the predicates that may appear in a rule, their argument types, and their frequency. Since in our work we aim to learn constraints, we restrict the search space just to rules r with \(H(r) = \bot \). Hence, we only need to specify the mode declarations for the body of a rule, expressed by #modeb(R,P,(E)) where R and E are optional and P is a ground atom whose arguments are placeholders of type var(t) for some constant term t. In the learned rules, the placeholders will be replaced by variables of type t. The optional element R is a positive integer, called recall, which specifies the maximum number of times that the mode declaration can be used in each rule. Lastly, E is a condition that further restricts the hypothesis space. We limit our interest to the anti_reflexive option, which works with predicates of arity 2. When using it, atoms of the predicate P must be generated with two distinct argument values.
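For illustration, a small helper (a sketch; the identifier, weight, and instance facts shown are hypothetical) can assemble such example lines for an ilasp task file:

```python
def cdpi_example(kind, ident, inclusions, exclusions, context, weight=None):
    """Format a context-dependent partial interpretation as an ilasp example line."""
    assert kind in ("pos", "neg")
    head = f"{ident}@{weight}" if weight is not None else ident
    return (f"#{kind}({head},"
            f"{{{','.join(inclusions)}}},"
            f"{{{','.join(exclusions)}}},"
            f"{{{' '.join(context)}}}).")

# Hypothetical negative example marking one of the two symmetric answer sets of a
# two-pigeon, two-hole instance for elimination, with a penalty of 100 if uncovered.
print(cdpi_example("neg", "n1",
                   ["p2h(1,1)", "p2h(2,2)"], ["p2h(1,2)", "p2h(2,1)"],
                   ["pigeon(1..2).", "hole(1..2)."], weight=100))
# -> #neg(n1@100,{p2h(1,1),p2h(2,2)},{p2h(1,2),p2h(2,1)},{pigeon(1..2). hole(1..2).}).
```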

Choosing an appropriate language bias is still one of the major challenges for modern ILP systems. Whenever the bias does not provide enough limitations, the problem becomes intractable and ilasp might not be able to find useful constraints. In contrast, an overly strong bias may exclude solutions from the search space, thus resulting in suboptimal SBCs (Cropper & Dumančić, 2020).

3 Approach

We tackle combinatorial problems modeled in ASP such that the instances of a logic program P are generated by a discrete and often stationary stochastic process. Such situations occur, e.g., in industrial settings where the encoding of a manufacturing system is fixed and production orders vary. In this case, every problem instance can be seen as an outcome of the process. We assume that any instance (i) specifies the (true) atoms of unary domain predicates \(p_1,\dots ,p_k\) in P, where \(c_i\) is the number of atoms that hold for each \(p_i\); and (ii) the satisfiability of the instance depends on the number of atoms for each domain predicate, but not on the values of their terms. Thus, without loss of generality, we consider the terms for each \(p_i\) to be consecutive integers from 1 to \(c_i\).

Our method exploits instance-specific SBCs on a representative set of instances and utilizes them to generate examples for an ILP task. The learning method yields first-order constraints that remove symmetries in the analyzed problem instances as much as possible while preserving the instances’ satisfiability. We consider the following two learning settings:

  • enum is a cautious setting that preserves all answer sets that are not filtered out by the ground SBCs; and

  • the sat setting aims to learn tighter constraints which, however, preserve at least one answer set for each instance.

To compute the examples, our approach relies on small satisfiable instances (i.e., with a low value for each \(c_i\)), subdivided into two parts: S and \( Gen \). Each instance \(g \in Gen \) defines a positive example with empty inclusions and exclusions, and g as context. These examples, denoted by \( Ex _{ Gen }\), guarantee that the learned constraints generalize for the target distribution since they force the constraints to preserve some solution for each \(g \in Gen \). The instances \(i \in S\) are used to obtain positive and negative examples, representing answer sets of \(P \cup i\) to be preserved or filtered out, respectively, by corresponding SBCs. We denote their union by \( Ex _{S}\) in Fig. 1, where positive examples represent whole answer sets in the enum setting or, as for instances in \( Gen \), consist of empty inclusions and exclusions along with the context i in the sat setting.

[Fig. 1: ILP examples generation]

An ILP task further requires background knowledge and a hypothesis space \(H_M\). Both of them are defined by the user (for a possible instantiation, see Sect. 4.3). The background knowledge consists of a logic program P along with an Active Background Knowledge, denoted by \( ABK \) in Algorithm 1. We use \( ABK \) to simplify the management of auxiliary predicate definitions and constraints learned so far. The hypothesis space contains the mode declarations, and we assume it to be general enough to entail ground SBCs by learned first-order constraints. The remaining inputs of Algorithm 1 consist of the instances in \( Gen \) and S as well as the learning setting m. For each answer set \({\mathcal {I}}\) of an instance \(i\in S\) to be analyzed, the algorithm determines T and F by projecting \({\mathcal {I}}\) to the atoms occurring in the irredundant generators \( IG \) computed for the instance, denoted by \( atoms ( IG )\). Next, in line 10, the predicate \( lexLead (\langle T,F\rangle , IG )\) evaluates to true if \({\mathcal {I}}\) is dominated, i.e., \({\mathcal {I}}\) can be mapped to a lexicographically smaller, symmetric answer set by means of some irredundant generator in \( IG \). In this case, the negative example \( neg (T,F,i)\) is added to \( Ex _{S}\) in order to eliminate \({\mathcal {I}}\), while \( pos ({\mathcal {I}},\emptyset ,i)\) or \( pos (\emptyset ,\emptyset ,i)\) is taken as the positive example otherwise, depending on whether the enum or sat setting is selected. Positive examples of the form \( pos (\emptyset ,\emptyset ,g)\) are also gathered in \( Ex _{ Gen }\) for instances \(g \in Gen \), and solving the ILP task at line 16 gives new constraints C to extend \( ABK \).
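A condensed Python sketch of this example-generation loop follows; the callables passed as arguments (answer_sets, irredundant_generators, lex_lead, atoms_of) are hypothetical wrappers around clasp, sbass, and the lex-leader check:

```python
def generate_examples(P, ABK, S, Gen, setting,
                      answer_sets, irredundant_generators, lex_lead, atoms_of):
    """Sketch of the example-generation part of Algorithm 1.

    The four trailing arguments are hypothetical callables: enumerating answer
    sets with clasp, computing generators with sbass, checking lex-leader
    domination, and extracting the atoms occurring in the generators."""
    examples = []
    for i in S:
        IG = irredundant_generators(P + ABK + i)    # generators of P ∪ ABK ∪ i
        relevant = atoms_of(IG)                     # atoms(IG)
        has_undominated = False
        for I in answer_sets(P + ABK + i):
            T = I & relevant                        # projection of I to atoms(IG)
            F = relevant - I
            if lex_lead((T, F), IG):                # I is dominated by some generator
                examples.append(("neg", T, F, i))   # eliminate this answer set
            elif setting == "enum":
                examples.append(("pos", I, set(), i))
            else:
                has_undominated = True
        if setting == "sat" and has_undominated:    # one general positive example
            examples.append(("pos", set(), set(), i))
    for g in Gen:                                   # generalization examples Ex_Gen
        examples.append(("pos", set(), set(), g))
    return examples
```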

4 Implementation

The implementation of our framework relies on clingo (consisting of the grounding and solving components gringo and clasp), sbass and ilasp, and is available at Tarzariol et al. (2021). Figure 2 shows the pipeline to generate the examples for a given instance \(i \in S\) (the for-loop at line 5 of Algorithm 1). First, the union of P, i, and \( ABK \) is grounded with gringo to get the ground program \(P_{ grd }\) in smodels format. Then, the solver clasp enumerates all its answer sets, obtaining \( AS (P_{ grd })\). Independently, sbass is run on \(P_{ grd }\) with the option --show to output a set of irredundant permutation group generators. This set contains the vertex permutations of \({\mathcal {G}}_{P_{ grd }}\), expressed in cycle notation. We extract the cycles defined by vertices representing atoms of \(P_{ grd }\) and transform them from smodels format back into their original symbolic representation (by a predicate and integer terms).

[Fig. 2: Pipeline to compute examples from an instance i]

Next, we identify the symmetric answer sets in \( AS (P_{ grd })\) by using an ASP encoding similar to the lex-leader predicate definition in Sakallah (2009) to evaluate SBCs. To this end, we implement an ordering criterion to compare atoms according to their signatures. Given two ground atoms \(p_1(a_1,\dots ,a_n)\) and \(p_2(b_1,\dots ,b_m)\), the first is considered smaller than the second if: (i) \(p_1\) is lexicographically smaller than \(p_2\); (ii) \(p_1=p_2\) and \(n<m\); or (iii) \(p_1=p_2\), \(n=m\), and there are constants \(a_i<b_i\) such that \(a_j=b_j\) for all \(0<j<i\). Our ASP encoding then checks whether an answer set \({\mathcal {I}}\in AS (P_{ grd })\) is undominated by interpretations obtainable by applying the symbolic representation of some irredundant generator returned by sbass to \({\mathcal {I}}\).
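Expressed as a Python key function over ground atoms given as (predicate, argument-tuple) pairs, this criterion can be sketched as follows (a simplified illustration, not the ASP encoding used by the implementation):

```python
def default_key(atom):
    """Order ground atoms by predicate name, then arity, then argument values."""
    predicate, args = atom                  # e.g. ("p2h", (1, 3))
    return (predicate, len(args), args)

ground_atoms = [("p2h", (2, 1)), ("p2h", (1, 3)), ("hole", (2,)), ("p2h", (1, 2))]
print(sorted(ground_atoms, key=default_key))
# [('hole', (2,)), ('p2h', (1, 2)), ('p2h', (1, 3)), ('p2h', (2, 1))]
```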

In case \({\mathcal {I}}\) is dominated and thus must be eliminated as a symmetric answer set, we map it to a negative example with a unique identifier and a weight of 100. Due to the weights, ilasp returns a set of constraints even if some negative examples are not covered. Moreover, we use uniform weights so that all negative examples have the same relevance and as many as possible are to be eliminated. Lastly, answer sets that were not found to be dominated for any of the generators yield positive examples according to the selected setting—enum or sat. Such positive examples are unweighted so that the learned hypothesis must cover all of them.

4.1 Alternative atom ordering

Let us consider sets of n lexicographically ordered atoms that only differ in the values of the last terms in each atom. For two such sets \(A = \{p(\overrightarrow{x_1},a_1), \dots ,\) \( p(\overrightarrow{x_n},a_n)\}\) and \(B = \{p(\overrightarrow{x_1},b_1), \dots , p(\overrightarrow{x_n},b_n)\}\) of atoms, where \(\overrightarrow{x_i}\) contains all terms but the last, the lex-leader scheme starts by checking the atoms with the greatest \(\overrightarrow{x_i}\) vectors until there are two constants \(a_i \ne b_i\). Since various configuration problems yield answer sets of this kind, we devised an alternative atom ordering such that the lex-leader scheme starts from the smallest \(\overrightarrow{x_i}\) vectors when comparing two answer sets. To this end, an atom \(p_1(a_1,\dots ,a_n)\) is considered smaller than \(p_2(b_1,\dots ,b_m)\) if: (i) \(p_1\) is lexicographically smaller than \(p_2\); (ii) \(p_1=p_2\) and \(n<m\); (iii) \(p_1=p_2\), \(n=m\), and there are constants \(a_i>b_i\) such that \(i<n\) and \(a_j=b_j\) for all \(0<j<i\); or (iv) \(p_1=p_2\), \(n=m\), \(a_i=b_i\) for all \(0<i<n\), and \(a_n<b_n\).
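Using the same (predicate, argument-tuple) representation as in the earlier sketch, the alternative criterion amounts to reversing the comparison on all arguments except the last (assuming integer arguments, as in Sect. 3):

```python
def alternative_key(atom):
    """Like the default ordering, but all arguments except the last are compared in reverse."""
    predicate, args = atom
    return (predicate, len(args), tuple(-a for a in args[:-1]), args[-1])

ground_atoms = [("p2h", (1, 1)), ("p2h", (1, 3)), ("p2h", (3, 1)), ("p2h", (3, 3))]
print(sorted(ground_atoms, key=alternative_key))
# [('p2h', (3, 1)), ('p2h', (3, 3)), ('p2h', (1, 1)), ('p2h', (1, 3))]
```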

This alternative ordering allows for more natural, undominated answer sets, as illustrated in the following example.

Example 4

Applying the alternative ordering criterion to the same input as described in Example 3, we get the following total order of atoms:

[Listing g: the total order of atoms under the alternative ordering criterion]

Thus, the integers associated with the answer sets become:

[Listing h: the binary integers associated with the answer sets under the alternative ordering]

Now the lex-leader scheme discards all but the answer set \( AS _1\), and three permutations map \( AS _6\) to smaller answer sets:

[Listing i: permutations mapping \( AS _6\) to smaller answer sets]

The answer set \( AS _1\) only contains atoms complying with the general first-order constraint \(\texttt {:- p2h(P,H), \, P < H.}\), which removes all other symmetric solutions. In contrast, when taking \( AS _6\) as a representative solution, we have to distinguish particular cases for the assignment of the first and the last pigeon, resulting in longer and more specific constraints.

4.2 Exploiting generators for full symmetry breaking

When investigating irredundant generators to label an answer set as a positive or negative example according to the lex-leader scheme, there can be cases where the labeling achieves partial instead of full symmetry breaking. As illustrated in Example 2, this is because single applications of generators yield only a subset of the orbit of an interpretation. Thus, we implement an alternative setting to label the examples, named fullSBCs, which exploits the generators to explore the whole orbit of symmetric interpretations for every answer set. For each of the obtained cells, we label the smallest answer set as a positive example and all the remaining ones as negative. This approach reduces the sensitivity of ILP tasks to the particular irredundant generators returned by sbass and makes it possible to achieve full symmetry breaking for any instance \(i\in S\).

We implement this setting by means of the clingo API to interleave the solving phase, which returns a candidate answer set, with the analysis of its orbit. Then, before continuing with the search for the next answer set, we prohibit the explored interpretations by feeding respective constraints to clingo. This setting allows for reducing the number of positive examples produced, and as we can configure it to sample a limited subset of all answer sets, it is also useful for dealing with underconstrained configuration problems that yield plenty of answer sets even for very small instances.

Example 5

To illustrate the fullSBCs setting, let us reconsider the pigeon-hole problem introduced in Example 3, where the instance with three pigeons and four holes leads to 24 solutions. Running sbass on this instance yields five generators, which identify a single cell since all the answer sets are symmetric. However, we only consider the first two generators in the following, allowing us to demonstrate the fullSBCs approach on an example with several, i.e., four, cells. The generators we inspect are:

[Listing j: the two inspected generators \(\pi _1\) and \(\pi _2\)]

Let \( AS _1 ={}\){p2h(1,3), p2h(2,2), p2h(3,4)} be the first answer set found. Then, before searching for other solutions, we repeatedly apply \(\pi _1\) and \(\pi _2\) to \( AS _1\) to obtain the whole orbit of symmetric interpretations. The identified answer sets are:

[Listing k: the answer sets in the orbit of \( AS _1\)]

Once we have computed all answer sets symmetric to \( AS _1\), we produce a positive example for the smallest answer set encountered, i.e., \( AS _2\), while the other five answer sets constitute negative examples. Now, we can proceed with the search for the next answer set, e.g., \( AS _7={}\){p2h(1,2), p2h(2,1), p2h(3,3)}, and repeat the application of generators to explore its cell, identifying another five symmetric solutions of which {p2h(1,3), p2h(2,1), p2h(3,2)} is the smallest. This process continues until all 24 answer sets, partitioned into four cells with a smallest representative for each, are explored.

Algorithm 2 outlines the fullSBCs approach, providing an alternative implementation of the for-loop at line 7 of Algorithm 1. In the first line, we create a search control object, \( cnt \), using the clingo API. This object keeps track of already identified solutions and provides the get_new_solution method, which returns either a new answer set \({\mathcal {I}}\) or false if all solutions have been exhausted. Similar to the previously presented approaches to example generation, we project the atoms of \({\mathcal {I}}\) to \( atoms ( IG )\). The resulting interpretation \( min \) represents the smallest solution encountered so far in the current cell, and the set \( seen \) keeps track of already discovered interpretations belonging to the current cell. Starting with \( min \), the queue Q collects the interpretations to which all irredundant generators will be applied to yield new symmetric interpretations. The while-loop at line 7 checks whether there is an interpretation, T, left to pop. Applying each irredundant generator to T yields symmetric interpretations, and those not yet contained in \( seen \) are added to both \( seen \) and Q. Then, if T is greater than \( min \) (according to the applied atom ordering criterion), it constitutes a negative example, while a smaller T is taken as the new smallest interpretation and the previous \( min \) instead becomes a negative example. Only after the cell has been completely explored is the interpretation \( min \) labeled as a positive example. Lastly, before querying \( cnt \) for the next solution, we eliminate answer sets subsumed by the explored interpretations in \( seen \) from the search space of \( cnt \).
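One possible realization of this loop with the clingo Python API is sketched below; the generator dictionaries, the smaller comparison, and the projection set relevant_atoms are hypothetical inputs, and blocking constraints are added as new ground program parts between solver calls (the actual implementation may differ):

```python
import clingo
from collections import deque

def fullsbcs_examples(program, generators, relevant_atoms, smaller):
    """Sketch of the fullSBCs labeling (Algorithm 2).

    `generators` are permutations given as dicts over atom strings,
    `relevant_atoms` plays the role of atoms(IG), and `smaller` decides
    the chosen atom ordering between two interpretations."""
    ctl = clingo.Control(["0"])
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    examples, blocked = [], 0
    while True:
        with ctl.solve(yield_=True) as handle:   # ask for one new answer set
            model = next(iter(handle), None)
            found = (None if model is None else
                     frozenset(str(a) for a in model.symbols(shown=True)))
        if found is None:                        # all cells have been explored
            return examples
        minimum = found & relevant_atoms         # projection to atoms(IG)
        seen, queue = {minimum}, deque([minimum])
        while queue:                             # breadth-first orbit traversal
            current = queue.popleft()
            for pi in generators:                # apply every generator to current
                image = frozenset(pi.get(a, a) for a in current)
                if image not in seen:
                    seen.add(image)
                    queue.append(image)
            if current != minimum:
                if smaller(current, minimum):    # found a smaller cell member
                    examples.append(("neg", minimum))
                    minimum = current
                else:
                    examples.append(("neg", current))
        examples.append(("pos", minimum))        # representative of the explored cell
        for interp in seen:                      # block the whole cell for the solver
            if interp:
                ctl.add(f"cell{blocked}", [], ":- " + ", ".join(sorted(interp)) + ".")
                ctl.ground([(f"cell{blocked}", [])])
                blocked += 1
```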

4.3 ILP learning task

After considering the example generation, we specify components of the ILP learning task suitable for the learning of constraints. The idea is to encode the predicates used by lex-leader symmetry breaking to order atoms and extract the maximal values for domain predicates. Since the mode declarations of ilasp (v4.0.0) do not support arithmetic built-ins such as <, we provide auxiliary predicates in ABK to simulate them. Presupposing the presence of unary domain predicates \(p_1,\dots ,p_k\) with integers from 1 to \(c_i\) for each \(p_i\), ABK defines the auxiliary predicates max\(p_i(c_i)\) for each \(p_i\) and lessThan\((t_1\),\(t_2)\) for each pair of integers \(1\le t_1<t_2\le \max \{c_i \mid i=1,\dots ,k\}\). These two predicates, based exclusively on syntactic properties of a considered problem, constitute the minimal additions needed to overcome the limitations of ilasp when learning lex-leader SBCs. The selection of small yet representative instances for S and Gen depends on their hardness for learning. Regarding S, we pursued the strategy of empirically determining instances for which sbass yields a manageable number of permutation group generators. As mentioned in Sect. 4.2, the irredundant generators alone sometimes achieve only partial symmetry breaking, and we selected only instances with no or only a few "misclassified" answer sets. The instances in Gen are usually larger yet still solvable in a short running time to check that the learned constraints generalize.

The language bias of our learning task includes the mode declarations #modeb(2, \(p_i(\)var\((t_i)))\) and #modeb(1,max\(p_i(\)var\((t_i)))\) for each domain predicate \(p_i\), in which var\((t_i)\) is a placeholder indicating the domain for which each \(p_i\) holds. Moreover, for each (non-auxiliary) predicate P appearing in some of the generators computed for instances in S, we use #modeb(2,P), where the domains of variables in atoms of P are provided by a vector of the placeholders in \(\{{\texttt {var}}(t_i) \mid i = 1,\dots , k\}\), depending on the role of the respective predicate in the given program P. In addition, we include mode declarations #modeb(2,lessThan(var\((t_i)\),var\((t_j)))\) for all \(i,j = 1, \dots , k\), with the option anti_reflexive in case \(i=j\).

We decided to distinguish the variables’ types in the mode declarations in order to restrict the hypothesis space to rules such that a variable X of type t is included as an argument only in predicates defined over the same type t. To illustrate how this decision influences the search space of an ILP task, let us consider two extensions of the pigeon-hole problem introduced in Example 3, adding color and owner assignments. The pigeon-hole problem with colors associates a color with each pigeon and requires pigeons placed into neighboring holes to be of the same color. The version with colors and owners additionally assigns an owner to each pigeon and imposes the same constraint as with the colors for owners as well. For the pigeon-hole problem with colors, by using typed variables in the mode declarations, ilasp generates a search space of 1837 rules, while 9169 rules are obtained without distinguishing variables’ types. Regarding the extension to owners, this difference is even larger: 2895 rules using typed variables versus 21406 rules without distinguishing variables’ types.

To compare the learning performance of ilasp, we conducted several experiments on the pigeon-hole problem with colors and owners for a pool of instances and observed that applying our approach with typed variables in the mode declarations allows for learning constraints quicker than without distinguishing the types. When using the iterative approach described in Sect. 4.4, ilasp took on average less than two minutes to learn the shortest constraints related to holes, colors, and owners, and always finished in less than ten minutes. In contrast, a similar ILP task defined without distinction of variable types took on average thirty minutes, with cases where no hypothesis was found within an hour.

Reducing the hypothesis space has the potential drawback of learning less efficient rules since there can be situations where stronger constraints with fewer variables are excluded. For instance, a constraint like :- pigeon(X), not p2h(X,X). cannot be learned in the current setting, as the variable X is taken for a pigeon and a hole at the same time. However, we decided to use the restricted search space for our experiments in Sect. 6 because it leads to much better scalability of learning and constraints that still improve the solving performance. In fact, the ability to learn constraints in acceptable time is important for handling application scenarios better than with instance-specific symmetry breaking methods.

Example 6

To illustrate a feasible outcome of our ILP framework, let us inspect the constraints learned for the pigeon-hole problem and its instance with three pigeons and three holes, as also considered in Example 3. Applying the generators returned by sbass to the six answer sets gives one positive and five negative examples, and the resulting ILP task is as follows:

[Listings m and n: the resulting ILP task]

Note that the ASP input encoding in Example 3 has been adapted into an equivalent one above. Such a modification is necessary because the current version of ilasp does not support rules like {p2h(P,H) : hole(H)} = 1 :- pigeon(P). with the conditional operator ":" in the head. After running ilasp, the learned first-order constraints are:

[Listing o: the learned first-order constraints]

4.4 Iterative learning

Inspired by the lifelong learning approach (Cropper et al., 2020), we apply our framework incrementally to a split learning task. This idea is especially useful if the ASP encoding exhibits several symmetries, where some of them are independent of the others. The iterative approach simplifies the learning task by exploiting the incremental applicability of ILP: first, it solves a subtask to identify a subset of symmetries, and before addressing the remaining ones, we integrate the constraints just learned into the background knowledge. To this end, we divide the hypothesis space for programs with three or more types of variables in the language bias. Then, in the first step, we provide a set S of instances to address their symmetries involving only two types of variables and define the search space with mode declarations restricted to the two types of variables considered. Next, we solve the ILP subtask and append the learned constraints to \( ABK \). In the following steps, we repeat the procedure and analyze the same or different instances in S for symmetries going beyond those already handled by solving ILP subtasks with the mode declarations progressively extended to further types of variables.
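A compact sketch of one possible realization of this loop follows (all helpers and parameters are hypothetical placeholders; a concrete staging for the pigeon-hole problem with colors and owners is described below):

```python
def iterative_learning(P, ABK, Gen, stages, setting,
                       generate_examples, solve_ilp_task):
    """Sketch of the iterative learning loop.

    `stages` pairs an instance set S with a progressively extended language
    bias; `generate_examples` and `solve_ilp_task` are hypothetical wrappers
    around the example-generation pipeline and ilasp, respectively."""
    for S, bias in stages:               # e.g. pigeon/hole, then +colors, then +owners
        examples = generate_examples(P, ABK, S, Gen, setting)
        constraints = solve_ilp_task(P + ABK, examples, bias)   # run ilasp
        ABK = ABK + constraints          # feed the learned constraints back into ABK
    return ABK
```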

To illustrate a concrete application scenario, reconsider the pigeon-hole problem with color and owner assignments, introduced in Sect. 4.3. For this problem, the search space is split into three incremental parts:

  • the first is limited to predicates whose atoms exclusively include variables of the types pigeon and hole,

  • the second part extends mode declarations by allowing atoms with variables of the type color too, and, finally,

  • the third step includes variables of the type owner.

Initially, S contains instances with only one color and owner so that our framework produces examples entailing symmetries related exclusively to the pigeons’ placement. Next, we append the learned constraints to \( ABK \) and repeat the procedure by redefining S with instances with one owner but more than one color. Lastly, we analyze instances in S without restrictions on the numbers of colors and owners while considering the whole language bias. In this way, ilasp can learn new symmetries using predicates that involve more types of variables, as the language bias is progressively extended until it reaches the whole set of mode declarations.

By applying the iterative approach, the learning task can be decomposed into smaller and easier ILP subtasks. For an indication of the practical impact on the size of the search space(s), we note that ilasp generates 1040 rules for variables of the types pigeon and hole only, 1837 rules when variables of the type color are added, and 2895 rules with the full language bias for the pigeon-hole problem with colors and owners. That is, the search space for the ILP subtask in the last iteration includes the same rules as generated when addressing the full language bias in a single step, yet the background knowledge may already be extended by constraints reducing the number of (negative) examples still to investigate. We assessed the impact of the iterative approach in several experiments and observed that it allows us to learn constraints much quicker than when tackling all symmetries in a single pass. By splitting the learning task, ilasp took on average less than two minutes to learn constraints related to pigeons and holes, then colors, and finally owner assignments. In contrast, the ILP task that addresses the full language bias directly took on average more than thirty minutes to return the shortest hypothesis, and in some cases the search did not finish within one hour.

Splitting the learning task has the potential drawback that some of the symmetries can be lost in the process, as the updated \( ABK \) is considered in subsequent calls to sbass for identifying the remaining symmetries. However, for the combinatorial problems investigated in our experiments in Sect. 6, the results showed that, even if the learned constraints handle only a subset of all problem symmetries, the solving performance benefits substantially.

5 Learning performance

We tested the different settings of our implementation on the two extensions of the pigeon-hole problem described in Sect. 4.3. For every setting, we used the same initial set of instances in \( Gen \), auxiliary constraints in \( ABK \), and mode declarations in the language bias (split to apply the iterative approach). To keep the number of instances in \( Gen \) moderate, we hand-picked a few (satisfiable) instances to start from, applied our iterative learning approach, and then validated the learned constraints on other satisfiable instances as well. The instances for which learned constraints led to unsatisfiability were then also added to \( Gen \), and we repeated the learning phase until all instances were found satisfiable together with the learned first-order SBCs.

In the following, we report representative results and conclusions drawn from the instances and records of learning experiments provided in our repository (Tarzariol et al., 2021).

5.1 Enum vs sat setting

The difference between the enum and sat setting lies in the positive examples generated for the instances in S: in the first setting, we explicitly list all undominated answer sets as positive examples, while the second produces just a general positive example with empty inclusions and exclusions. That is, the sat setting abstracts over undominated answer sets, as they are neither labeled as positive nor negative examples in the ILP task. In this case, ilasp aims at eliminating as many symmetric answer sets as possible while preserving the satisfiability of a given instance. This even means that the preserved answer sets, required in view of the general positive example, might belong to negative examples but are not covered by the learned constraints. In this way, we may, in general, learn alternative constraints that preserve some specific pattern of solutions appearing in all satisfiable instances, regardless of symmetries, while representative solutions can be lost.

For example, in the first ILP iteration on the pigeon-hole problem with colors and owners, the instance with three pigeons and four holes (and only one color and owner) gives 24 answer sets, 22 of which are labeled as negative. In the enum setting, ilasp finds optimal constraints removing 12 negative examples and thus returns a hypothesis that applied to the same instance leaves 12 answer sets. In contrast, the sat setting enables learning of stronger constraints by ilasp, which preserve 2 answer sets only, both labeled as negative examples.

The complexity of the ILP task depends on the possibility of covering all negative examples. Since with enum we have tighter conditions on the candidate hypotheses, the search space is smaller than in the sat setting. Hence, the optimization problem regarding (weighted) negative examples addressed by ilasp generally takes longer for sat, but only if many negative examples cannot be covered even under the relaxed conditions on the candidate hypotheses. On the other hand, if the language bias permits many hypotheses covering all or most of the negative examples, an ILP task is usually quickly solved with the sat setting. E.g., the instance with three pigeons, four holes, one color, and one owner took 72.8 seconds to be solved in the enum setting (eliminating 12 out of 22 symmetric answer sets) and just 27.7 seconds in the sat setting (eliminating 20 symmetric answer sets and the 2 unlabeled ones).

5.2 Alternative atom ordering

When answer sets of the analyzed combinatorial problem have the property described in Sect. 4.1, our alternative ordering criterion for the lex-leader scheme can better separate the representative from the symmetric solutions. Hence, ILP tasks can be solved with shorter constraints than under the default atom ordering. For instance, the setting illustrated in Example 6 yields the representative answer set {p2h(1,1),p2h(2,2),p2h(3,3)} instead of {p2h(1,3),p2h(2,2),p2h(3,1)}. This allows ilasp to learn the short constraint :- p2h(X,Y), lessThan(Y,X)., expressing that no pigeon can be placed into a hole smaller than its label, which leaves just one answer set for instances with an equal number \(p=h\) of pigeons and holes.

Given that the positive examples kept after checking direct applications of the irredundant generators returned by sbass heavily depend on the computed generators, we found that often more positive examples are produced than with the default atom ordering. Namely, for the extensions of the pigeon-hole problem to colors as well as colors and owners, the generators preserve more symmetric solutions under the alternative ordering than under the default one. This leads to weaker (although shorter and easier to interpret) constraints, and better-suited ways of aligning generators with symbolic atom representations would be of interest.

5.3 Exploiting generators for full symmetry breaking

Section 4.2 describes an alternative implementation for labeling answer sets as positive or negative examples, called the fullSBCs setting. We can see the effects of always labeling the answer sets according to full SBCs on the same scenario as discussed in Sect. 5.1: instead of 22 negative and 2 positive examples generated with the enum setting, fullSBCs returns just one positive example, i.e., the representative of the single cell characterized by the generators of sbass. As a consequence, instead of 72.8 seconds to return a hypothesis that produces 12 of the original 24 answer sets, with the ILP task defined based on fullSBCs, ilasp took 21.4 seconds to find a hypothesis that preserves only 4 answer sets.

We note that reducing the number of examples for an ILP task generated by some of our settings has a limited impact on ilasp, as its latest versions implement mechanisms to scale with respect to the number of examples (Law et al., 2016; Law, 2021). However, for instances with many answer sets, the fullSBCs approach can be helpful because equivalent answer sets need not be exhaustively computed by clingo.

6 Solving experiments

To evaluate our approach and the implementation design, we applied it to a series of combinatorial search problems. For each considered problem, we compared the running time of the original encoding, the version extended with our learned constraints, and the instance-specific approach of sbass. The learned constraints depend on the instances used in S and \( Gen \) as well as on how we apply the iterative learning approach. In the following, we report results for constraints with good performance, learned by applying the definitions of Sect. 4.3. We ran our tests on an Intel® i7-3930K machine under Linux (Debian GNU/Linux 10), where each run was limited to 900 seconds and 20 GB of memory.

In Tables 1, 2, 3, and 4, the satisfiable instances are shown in even rows, while the odd rows contain unsatisfiable instances. The column base refers to clingo (v5.5.0) run on the original encoding, while Enum, Sat, Ord, and Full report results for the original encoding augmented with the first-order constraints learned in the enum setting, the sat setting, the enum setting with the alternative atom ordering, and the fullSBCs setting, respectively. The time required by sbass to compute ground SBCs is given in the corresponding column, and clasp\(^\pi \) provides the solving time obtained with these ground SBCs. Runs that did not finish within the time limit of 900 seconds are indicated by TO entries.

We first tested the pigeon-hole problem, working without any division and iterative analysis of the language bias: the four learning settings led to constraints with similar performance, although the ones obtained with the alternative ordering were shorter, as mentioned in Sect. 5.2. The running time comparison in Table 1 shows that all the settings of our approach bring about a similar speedup for solving satisfiable as well as unsatisfiable instances. In fact, the vast number of problem symmetries is cut by the learned first-order constraints. This is particularly important in case of unsatisfiability, where runs on the original encoding without additional constraints do not finish within the time limit. While sbass also manages to handle the two smallest instances, the computation of permutation group generators becomes too expensive when the instance size grows, in which case we cannot run clasp\(^\pi \) with ground SBCs from sbass.

Table 1 Runtime in seconds for pigeon-hole problem
Table 2 Runtime in seconds for pigeon-hole problem with colors
Table 3 Runtime in seconds for pigeon-hole problem with colors and owners

Next, we tested the pigeon-hole problem with added color and owner assignments. For the pigeon-hole problem with color assignments, we divided the language bias into two parts: the first limited to predicates whose atoms exclusively include variables of the types pigeon and hole, while the second part allows variables of the type color too. Likewise, the problem version with owners and colors required a third language bias extension to variables of the type owner. For both extensions of the pigeon-hole problem, the first-order constraints learned in sat turned out to be stronger than in the other settings. Nevertheless, all kinds of constraints helped to improve the search for solutions. Tables 2 and 3 show similar results: the constraints learned with the sat setting lead to the fastest running times for both satisfiable and unsatisfiable instances. The constraints learned with enum using the alternative ordering are shorter and easier to read than those of the other settings, but slightly less efficient since they break only a subset of all symmetries. In Table 2, the time taken to identify satisfiable and unsatisfiable instances is lower with the constraints learned with fullSBCs than with those learned with enum; on the other hand, in Table 3, we observe the opposite behavior, especially for the last instances. For the pigeon-hole problem with colors and owners, the same constraints could have been learned in both settings because, to obtain the constraints with enum, we used instances that identify full SBCs. However, we tested a different set of rules for fullSBCs since they were stricter than those of enum concerning the pigeons' placement symmetries. Indeed, the first unsatisfiable instance with one color and owner was solved faster with the constraints of fullSBCs. Lastly, for small unsatisfiable instances, the ground SBCs from sbass lead to better performance than the constraints learned with the enum setting. However, as soon as the color (or owner) dimension grows, the runs of clasp\(^\pi \) reach the timeout. This behavior is due to the redundancy of the ground SBCs, which slows down the search instead of facilitating it. For some of the satisfiable instances, finding a solution with the constraints learned in enum takes longer than with the original encoding alone. Nevertheless, the latter also has timeouts that do not occur with our learned first-order constraints.

To conclude, we applied the different settings of our approach to the house-configuration problem (Friedrich et al., 2011), which consists of assigning t things of p persons to c cabinets, where each cabinet has a capacity limit of two things that must belong to the same owner. Similar to the pigeon-hole problem with colors, we divided the language bias into two parts: the first limited to predicates whose atoms exclusively include variables of the types cabinet and thing, while the second part allows variables of the type person too. The running times in Table 4 exhibit the same trend as observed on the previous problems: our first-order constraints help the search, especially those learned with the sat setting. For this problem, the constraints learned with the enum setting, the alternative ordering, and the fullSBCs setting show similar performance. In some cases, the original encoding is quicker to solve satisfiable instances, although it takes considerably longer for unsatisfiable ones. On the other hand, sbass brings a moderate speedup for unsatisfiable instances, but its performance suffers a lot when the problem size grows.

Table 4 Runtime in seconds for house-configuration problem

7 Conclusions

This paper introduces methods to lift the SBCs of combinatorial problem encodings in ASP for a target distribution of instances. Our framework addresses the limitations of common instance-specific approaches, like sbass, since: (i) the knowledge is transferable, as learned constraints preserve the satisfiability for the considered instance distribution; (ii) the first-order constraints are easier to interpret than ground SBCs; (iii) the SBCs are computed offline, allowing for addressing large-scale instances, as shown in our experiments; and (iv) the learned constraints are non-redundant, avoiding performance degradation due to an excessive ground representation size. In the current implementation of our approach, ilasp learns shortest constraints that cover as many examples as possible, while there is no distinction regarding the solving performance of candidate hypotheses. Despite this, our experiments showed that the learned constraints significantly improve the solving performance on the analyzed problems. Moreover, the two example generation methods suggested in this work allowed for ILP tasks with (i) fewer positive examples and (ii) shorter learned constraints, in comparison to the two methods of our previous paper (Tarzariol et al., 2021). These results are due to the full symmetry breaking when enumerating all answer sets with the fullSBCs approach or an alternative atom ordering for the lex-leader scheme, respectively.

Nevertheless, there are still some limitations in the usability of our framework, which partially go back to the components used in our current implementation, i.e., sbass, clingo, and ilasp. The sbass tool does not support ASP programs with weak constraints (Calimeri et al., 2019), whose implementation is out of the scope of this work. However, extensions of instance-specific symmetry detection and model-oriented symmetry breaking to optimization problems are undoubtedly worthwhile. Optimization involves solving unsatisfiable subproblem(s) when attempting (and failing) to improve on an optimal answer set, where symmetry breaking is particularly crucial for performance. Concerning clingo, if a given encoding P leads to large ground instantiations, the addition of learned constraints does not reduce the size. Therefore, it would be desirable to directly incorporate the information about redundant answer sets into a modified encoding. For instance, for the pigeon-hole problem, this might prevent our method from even generating ground atoms representing the placement of a pigeon into some hole with a greater label. Lastly, ilasp currently does not scale well with respect to the size of the hypothesis space spanned by the language bias, which is a well-known issue tackled by next-generation ILP systems under development (Law et al., 2020, 2021).

At present, the successful application of our framework relies on the following characteristics of a combinatorial problem: (i) we can easily provide simple instances (i.e., the total number of solutions can be managed by our implementation) that entail the symmetries of the whole instance distribution; (ii) the object domains can be expressed in terms of unary predicates that hold for a range of consecutive integers; and (iii) the auxiliary predicate definitions suggested for \( ABK \) in Sect. 4.3 enable the learning of constraints that improve the solving performance. In particular, if it gets difficult to compute solutions for an instance in S to analyze, the formulation of an ILP task to learn constraints can become prohibitive.

In the future, we aim to investigate whether the learning of SBCs can be readily applied or further adapted to advanced industrial configuration problems, such as the Partner Units Problem (Dodaro et al., 2016), as well as complex combinatorial problems with specific instance distributions, like the labeling of Graceful Graphs (Petrie & Smith, 2003). For such application scenarios, the language bias may be enriched, possibly extending the background knowledge with additional predicates characterizing the structure of instances. Moreover, for problem instances that yield a vast number of solutions, we can take advantage of the incremental implementation of the fullSBCs approach to limit the number of answer sets to consider as examples for an ILP task. Lastly, we intend to develop automatic mechanisms to select suitable instances for S and \( Gen \) from instance collections, support lifelong learning, and further optimize the grounding and solving efficiency of learned constraints.