1 Introduction

Computational approaches to arguing in favour of or against statements under scrutiny are a main research theme in the field of computational argumentation [4, 38] within Artificial Intelligence (AI). Situated in the area of knowledge representation and reasoning, and in particular non-monotonic reasoning, computational argumentation features a diverse set of application avenues, such as legal reasoning, medical reasoning, and e-government [3].

Approaches to formalizing argumentative reasoning are studied in the field of structured argumentation [5]. Formalisms in structured argumentation usually follow the so-called argumentation workflow [14] to prescribe ways of finding arguments and their relationships. The starting point is a knowledge base, oftentimes assumed to be in rule-based form. Arguments are then instantiated as derivations applicable within the knowledge base. Reasoning based on the arguments and their relations is carried out using argumentation semantics, through which one can specify acceptable sets of arguments. Several approaches to structured argumentation have been studied, e.g., assumption-based argumentation (ABA) [12, 20], ASPIC\(^+\) [50, 51], defeasible logic programming (DeLP) [40, 41], and deductive argumentation [6, 7].

Computationally speaking, argumentative reasoning in these approaches to structured argumentation is hard: in almost all cases, reasoning in these formalisms is NP-hard; see, e.g., the survey by Dunne and Dvořák [32]. To address the complexity barrier, several algorithmic approaches have been developed [15, 16], and in the biennial International Competition on Computational Models of Argumentation (ICCMA) [8, 39, 46, 53], held this year for the fifth time, systems compete in terms of runtime performance.

An approach to tackling high complexity is the utilization of structural properties of instances, such as viewing instances in a graph-like manner and considering, e.g., acyclicity or other graph properties. A prominent such property is tree-width [10], which informally measures how close an instance is to a tree. The milder complexity of many problems on trees oftentimes transfers to problems on graphs of low tree-width. Dynamic programming algorithms can then operate on a tree-decomposition of the initial instance, which confines the combinatorial explosion of complex problems to subproblems whose size is bounded by the tree-width of the original instance. Tree-based forms, or forms close to trees, appear appealing for computational argumentation, since, e.g., dialogues might be represented in a tree-like structure. Indeed, tree-width has been studied in several works in argumentation [27,28,29,30,31, 34, 47]. These studies focus on the field of abstract argumentation, i.e., on formalisms where arguments are given in an abstracted form such as the well-known argumentation framework (AF) [26], and on a form of deductive argumentation. To the best of our knowledge, there is no current account of the utilization of tree-width for rule-based structured argumentation formalisms such as ABA, ASPIC\(^+\), or DeLP.

Recent works show that lifting computational approaches in abstract argumentation to structured argumentation is not immediate, and seems to involve dedicated research on the structured formalisms [48, 49]. We follow this line and take up this opportunity to study algorithmic approaches utilizing tree-width for ABA, as one of the prominent structured argumentation approaches with applications in medical decision making [19, 22] and in multi-agent contexts [33].

Our main contributions are as follows.

  • We first demonstrate the wide applicability of algorithms using tree-width by establishing fixed-parameter tractability of reasoning tasks in ABA under the parameter tree-width. We obtain these results by making use of Courcelle’s theorem [17] and expressing reasoning in ABA in monadic second order logic (MSO).

  • We develop tree-decomposition-based algorithms for ABA. Towards wider extensibility, we first give a detailed account of a dynamic programming algorithm for the stable semantics and instantiate algorithms for admissible, complete, and stable semantics in the framework of D-FLAT [1, 2, 9], which enables declarative specification of such algorithms in answer set programming (ASP) [13, 43, 52]. Together with expressing ABA reasoning in MSO, the declarative approach of D-FLAT leads to a system that has potential for adaptation to other forms of structured argumentation, further semantics, or other modes of reasoning.

  • Finally, we present an experimental evaluation of a prototype using D-FLAT, showing the promise of our approach for complex counting tasks, in particular those involving a high number of solutions.

Further material, including ASP encodings used within D-FLAT, can be found at https://gitlab.tugraz.at/krr/astra.

2 Background

We recall assumption-based argumentation (ABA) [12, 20], monadic second order logic, and tree-width and tree-decompositions [10], next.

Assumption-Based Argumentation. We assume a deductive system \((\mathcal {L},\mathcal {R})\), where \(\mathcal {L}\) is a set of atoms and \(\mathcal {R}\) a set of inference rules over \(\mathcal {L}\). A rule \(r \in \mathcal {R}\) has the form \(a_0 \leftarrow a_1,\ldots ,a_n\) with each \(a_i \in \mathcal {L}\). We denote the head of rule r by \(head(r) = a_0\) and the (possibly empty) body of r with \(body(r) = \{a_1,\ldots ,a_n\}\).

Definition 1

An ABA framework is a tuple \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\), where \((\mathcal {L},\mathcal {R})\) is a deductive system, \(\mathcal {A} \subseteq \mathcal {L}\) a non-empty set of assumptions, and \(\overline{\phantom{a}}\) a function mapping assumptions in \(\mathcal {A}\) to atoms in \(\mathcal {L}\).

In words, an ABA framework includes a deductive system, a distinction between assumptions and non-assumptions, and a contrary function that assigns contraries to assumptions. In this work, we focus on the commonly used logic programming fragment of ABA [12]. We assume that all sets and rules in an ABA framework are finite, and that no assumption occurs in the head of a rule. The latter condition means that the ABA framework is flat. As a slight generalization, we allow the contrary function to be partial.

Derivability in ABA can be defined in multiple ways; we recall the so-called forward-derivations, here called simply derivations. An atom \(a \in \mathcal {L}\) is derivable from a set \(X \subseteq \mathcal {A}\) using rules \(\mathcal {R}\), denoted by \(X \vdash _{\mathcal {R}} a\), if \(a\in X\) or there is a sequence of rules \((r_1,\ldots ,r_n)\) such that \(head(r_n) = a\) and for each rule \(r_i\) we have \(r_i \in \mathcal {R}\) and each atom in the body of \(r_i\) is derived by a rule earlier in the sequence or is contained in X, i.e., \(body(r_i) \subseteq X \cup \bigcup _{j < i} \{head(r_j)\}\). The deductive closure of an assumption set X w.r.t. rules \(\mathcal {R}\) is defined as \( Th _{\mathcal {R}}(X) = \{a\in \mathcal {L} \mid X \vdash _{\mathcal {R}} a\}\).
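To make forward-derivation concrete, the following Python sketch computes the deductive closure \( Th _{\mathcal {R}}(X)\) by a simple fixpoint iteration. This is our own illustration, not part of the paper's implementation; the representation of rules as (head, body) pairs is a choice we make for the sketch.

```python
def closure(X, rules):
    """Deductive closure Th_R(X): all atoms forward-derivable from
    the assumption set X, with rules given as (head, body) pairs."""
    derived = set(X)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # a rule fires once its whole body has been derived
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived
```

For instance, with rules \(x \leftarrow a\), \(y \leftarrow x\), and \(z \leftarrow b\), the closure of \(\{a\}\) is \(\{a,x,y\}\).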

Example 1

Our running example ABA framework \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\) is given with \(\mathcal {A} = \{a, b\}\), \(\mathcal {L} = \mathcal {A} \cup \{x,y,z\}\), the rules \(r_1 = (x\leftarrow a)\), \(r_2 = (y \leftarrow x)\), \(r_3 = (z \leftarrow b)\), and contraries \(\overline{a} = z\) and \(\overline{b} = y\).

The contrary function is used to define attacks between assumption sets.

Definition 2

Let \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\) be an ABA framework, and \(A,B \subseteq \mathcal {A}\) be two sets of assumptions. Assumption set A attacks assumption set B in F if \(A' \vdash _{\mathcal {R}} \overline{b}\) for some \(A'\subseteq A\) and \(b \in B\).

Conflict-free assumption sets and defense are defined as follows.

Definition 3

Let \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\) be an ABA framework. An assumption set \(A\subseteq \mathcal {A}\) is conflict-free in F iff A does not attack itself. Set A defends assumption set \(B \subseteq \mathcal {A}\) in F iff for all \(C \subseteq \mathcal {A}\) that attack B it holds that A attacks C.
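For small frameworks, Definitions 2 and 3 can be checked directly by brute force. The sketch below is our own illustration (the function names are not from the paper); it relies on the fact that forward-derivation in flat ABA is monotone, so \(A' \vdash _{\mathcal {R}} \overline{b}\) for some \(A' \subseteq A\) iff \(\overline{b} \in Th _{\mathcal {R}}(A)\).

```python
def closure(X, rules):
    """Forward-derivation closure Th_R(X), rules as (head, body) pairs."""
    derived = set(X)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

def attacks(A, B, rules, contrary):
    """A attacks B iff some subset of A derives the contrary of some b in B;
    by monotonicity it suffices to inspect the closure of A itself."""
    return any(contrary.get(b) in closure(A, rules) for b in B)

def conflict_free(A, rules, contrary):
    return not attacks(A, A, rules, contrary)

def defends(A, B, assumptions, rules, contrary):
    """A defends B iff A attacks every attacker C of B (C ranges over
    all subsets of the assumptions)."""
    from itertools import chain, combinations
    subsets = chain.from_iterable(
        combinations(sorted(assumptions), k)
        for k in range(len(assumptions) + 1))
    return all(attacks(A, set(C), rules, contrary)
               for C in subsets if attacks(set(C), B, rules, contrary))
```

The exponential subset enumeration in `defends` mirrors the quantification over all attackers in Definition 3 and is only meant for toy instances.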

The ABA semantics we focus on in this work are then defined next.

Definition 4

Let \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\) be an ABA framework. Further, let \(A \subseteq \mathcal {A}\) be a conflict-free set of assumptions in F. In F, set A is

  • admissible iff A defends itself,

  • complete iff A is admissible and contains all assumption sets defended by A,

  • preferred iff A is admissible and there is no admissible set of assumptions B with \(A \subset B\), and

  • stable iff each \(\{x\} \subseteq \mathcal {A} \setminus A\) is attacked by A.

Reasoning tasks on ABA include verifying whether a given set of assumptions is a \(\sigma \)-assumption set, and enumerating or counting \(\sigma \)-assumption sets. In addition, a relevant question is often whether a given atom is acceptable under a semantics. To answer this question, two prominent reasoning modes are credulous and skeptical acceptance of atoms in an ABA framework. A given atom \(s \in \mathcal {L}\) is credulously accepted in F under semantics \(\sigma \) iff there is a \(\sigma \)-assumption set A such that \(s \in Th _{\mathcal {R}}(A)\), and skeptically accepted in F under semantics \(\sigma \) iff \(s \in Th _{\mathcal {R}}(A)\) for all \(\sigma \)-assumption sets A. Credulous reasoning under admissible, complete, stable, and preferred semantics is NP-complete; skeptical acceptance is coNP-complete under stable semantics and \(\varPi ^P_2\)-complete under preferred semantics [21, 24].

Example 2

Continuing Example 1, there are two stable assumption sets \(\{a\}\) and \(\{b\}\), with deductive closures \( Th _{\mathcal {R}}(\{a\}) = \{a,x,y\}\) and \( Th _{\mathcal {R}}(\{b\}) = \{b,z\}\), respectively. In this example, atoms x, y, and z are credulously accepted under stable semantics, and no atom is skeptically accepted under stable semantics.
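For frameworks as small as the running example, stable assumption sets can be enumerated exhaustively. The following sketch is an illustration of Definition 4, not the paper's algorithm; it reproduces the two stable sets of Example 2.

```python
from itertools import combinations

def closure(X, rules):
    """Forward-derivation closure Th_R(X), rules as (head, body) pairs."""
    derived = set(X)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

def stable_sets(assumptions, rules, contrary):
    """Enumerate stable assumption sets by exhaustive search: A must be
    conflict-free and attack every assumption outside A."""
    result = []
    for k in range(len(assumptions) + 1):
        for A in combinations(sorted(assumptions), k):
            A = set(A)
            th = closure(A, rules)
            if any(contrary.get(a) in th for a in A):
                continue  # A attacks itself: not conflict-free
            if all(contrary.get(x) in th for x in assumptions - A):
                result.append(A)
    return result
```

On the running example this yields \(\{a\}\) and \(\{b\}\); intersecting their closures confirms that no atom is skeptically accepted under stable semantics.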

Monadic Second Order Logic and Tree-Decompositions. We recap monadic second order logic and tree-decompositions, following Gottlob et al. [44].

Monadic second order logic (MSO) extends first order logic by allowing set variables, which range over sets of domain elements, and quantification over these set variables. We write individual variables as lowercase letters x and set variables as uppercase letters X. For a set \(\tau = \{R_1,\ldots ,R_k\}\) of predicate symbols, a finite structure I over \(\tau \), also called a \(\tau \)-structure, has a finite domain \(D= dom (I)\) and relations \(R_i^I \subseteq D^{r_i}\) of arity \(r_i\) for each predicate symbol \(R_i \in \tau \). Evaluation of an MSO formula \(\phi \) over a \(\tau \)-structure I is defined as usual. For our purposes, it is sufficient to consider only unary and binary predicates.

A tree-decomposition of a \(\tau \)-structure I is a pair \((T,(D_t)_{t\in T})\), with T being a rooted tree and each \(D_t \subseteq D = dom (I)\), satisfying the following properties.

  1. Every domain element \(x \in D\) is part of some \(D_t\), i.e., \(x \in D_t\) for some \(t \in T\).

  2. For every \(R_i \in \tau \) and tuple \((a_1,\ldots ,a_{r_i}) \in R_i^I\) there is some node \(t \in T\) with \(\{a_1,\ldots ,a_{r_i}\} \subseteq D_t\).

  3. The set \(\{t \mid a \in D_t\}\) induces a subtree of T, for each \(a \in D\).

In brief terms, a tree-decomposition is a tree formed of so-called bags \(D_t\) consisting of sets of domain elements. The second condition ensures that each relation tuple is fully contained in at least one bag. The third condition, often referred to as the connectedness condition, states that whenever two bags \(D_t\) and \(D_{t'}\) both contain an element a, then every bag on the path between t and \(t'\) also contains a.

The width of a tree-decomposition \((T,(D_t)_{t\in T})\) is the maximum number of domain elements in bags minus one, i.e., \( max \{|D_t |\mid t \in T\}-1\). The tree-width of a \(\tau \)-structure I is the minimum width of all tree-decompositions of I.
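The three conditions and the width measure translate directly into code. The following sketch is our own helper (not from the paper); it assumes the rooted tree is given as a child-to-parent dict and the bags as a dict from node to set of domain elements.

```python
def width(bags):
    """Width of a tree-decomposition: largest bag size minus one."""
    return max(len(b) for b in bags.values()) - 1

def is_tree_decomposition(parent, bags, domain, tuples):
    """Check the three conditions for a candidate decomposition,
    where `tuples` collects the relation tuples of the structure."""
    # 1. every domain element occurs in some bag
    if not all(any(x in b for b in bags.values()) for x in domain):
        return False
    # 2. every relation tuple is covered by some single bag
    if not all(any(set(t) <= b for b in bags.values()) for t in tuples):
        return False
    # 3. connectedness: the nodes containing x induce one subtree,
    #    i.e., at most one of them lacks a parent that also contains x
    for x in domain:
        nodes = {t for t, b in bags.items() if x in b}
        roots = [t for t in nodes if parent.get(t) not in nodes]
        if len(roots) > 1:
            return False
    return True
```

The connectedness test exploits that in a rooted tree a set of nodes is connected iff exactly one of its members has its parent outside the set.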

Complexity-wise, MSO and tree-width are connected, as stated by Courcelle’s theorem.

Theorem 1

([17]). Let \(\phi \) be an MSO formula over a set \(\tau \) of predicate symbols, and let I be a \(\tau \)-structure of tree-width w. Evaluating \(\phi \) over I can be achieved in time \(\mathcal {O}(f(|\phi |,w) \cdot |I|)\), for some function f.

In brief, a problem expressible in MSO is then said to be fixed-parameter tractable (FPT) for the parameter tree-width of the underlying \(\tau \)-structure. For an overview of parametrized complexity (including FPT), see the book by Downey and Fellows [25].

3 Complexity of ABA Under the Lens of Tree-Width

In this section, we show that a large range of problems in ABA can be addressed algorithmically by utilizing tree-width; formally, we show that these problems are FPT with respect to the parameter tree-width.

Towards these results, we represent reasoning in ABA in MSO. We make use of the following set of predicate symbols, given with their arities: \(\tau _{ ABA } = \{ atom /1\), \( asm /1\), \( rule /1\), \( head /2\), \( body /2\), \( contrary /2\), \( query /1\}\). The intended meaning of the predicates is formalized next.

Definition 5

Let \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\) be an ABA framework. The associated \(\tau _{ ABA }\) structure, denoted by \(I_F\), is defined by \( atom (x)\) for each \(x \in \mathcal {L}\), \( asm (a)\) for each \(a \in \mathcal {A}\), \( rule (r)\), \( head (r,h)\), and \( body (r,b_1), \ldots , body (r,b_k)\) for each rule \(r=h \leftarrow b_1,\ldots ,b_k\), and \( contrary (a,x)\) for each \(a \in \mathcal {A}\) and \(x \in \mathcal {L}\) s.t. \(\overline{a} = x\).

The remaining \( query \) predicate is used to indicate what to query for credulous or skeptical reasoning.

Example 3

Consider again the ABA framework from Example 1, which can be written as a \(\tau _{ABA}\) structure containing \( atom (a)\), \( atom (b)\), \( atom (x)\), \( atom (y)\), \( atom (z)\), \( asm (a)\), \( asm (b)\), \( rule (r_1)\), \( rule (r_2)\), and \( rule (r_3)\) for the unary predicates, and \( contrary (a,z)\), \( contrary (b,y)\), \( head (r_1,x)\), \( head (r_2,y)\), \( head (r_3,z)\), \( body (r_1,a)\), \( body (r_2,x)\), and \( body (r_3,b)\) for the binary predicates. Part of a tree-decomposition of this ABA framework is depicted in Fig. 1. This decomposition is a so-called “nice” tree-decomposition, which has to satisfy further constraints useful for algorithms operating on such a nice tree-decomposition. We recall the formal definition of nice tree-decompositions in the next section. In this tree-decomposition there are 22 nodes. For instance, \(r_1\) and a are together in the bag of node 15, satisfying the condition that these two have to be together in one bag because \( body (r_1,a)\) holds.

Fig. 1. Part of an example tree-decomposition of the ABA framework of Example 1. (Color figure online)

We move on to expressing ABA semantics in terms of MSO. We make use of several common shortcuts, as defined next, in addition to “\(x \in X\)” to denote that a domain element is in the set.

$$\begin{aligned} x \notin X :=&\ \lnot (x \in X)&X \subseteq Y :=&\ \forall x (x \in X \rightarrow x \in Y) \\ X \subsetneq Y :=&\ (X \subseteq Y) \wedge \lnot (Y \subseteq X)&X \subseteq _\mathcal {A} Y :=&\ \forall x \big ( (x \in X \wedge asm (x))\rightarrow x \in Y \big ) \\ X \subsetneq _\mathcal {A} Y :=&\ (X \subseteq _\mathcal {A} Y) \wedge \lnot (Y \subseteq _\mathcal {A} X)&X =_\mathcal {A} Y :=&\ (X \subseteq _\mathcal {A} Y) \wedge (Y \subseteq _\mathcal {A} X) \end{aligned}$$

First, we encode derivability. For a given ABA framework F and \(A \subseteq \mathcal {A}\), there is a direct connection between \( Th _{\mathcal {R}}(A)\) and the unique \(\subseteq \)-minimal classical model of the propositional Horn formula \(\bigwedge _{a \in A} a \wedge \bigwedge _{r \in \mathcal {R}} \big ( \bigwedge _{b \in body (r)} b \rightarrow head (r) \big ), \) that is, the Horn theory consisting of each assumption in A as facts and rules in \(\mathcal {R}\) as implications. It holds that \(M \subseteq \mathcal {L}\) is the \(\subseteq \)-minimal model of this formula iff \(M= Th _{\mathcal {R}}(A)\).

We can directly make use of this fact and represent derivability in ABA by the following MSO formula, where we use quantification to express the same reasoning as in the above Horn formula. First, we define satisfaction of rules by

$$\forall r \Big ( rule (r) \rightarrow \exists s \big ( ( head (r,s) \wedge s \in E ) \vee ( body (r,s) \wedge s \notin E) \big ) \Big ).$$

In this formula the set variable E is open. The formula states that whenever r is a rule, then the rule has to be satisfied by E in the logical sense: either the head is in E or some body element is missing from E. We call this formula \(\phi _{ Sat }(E)\).

Derivability is then expressible by

$$\phi _{ Th }(E) =\ \phi _{ Sat }(E) \wedge \forall E' \big ( (E' \subsetneq E \wedge E' =_\mathcal {A} E) \rightarrow \lnot \phi _{ Sat }(E') \big ),$$

which states that E should satisfy the rules and no proper subset \(E' \subsetneq E\) that shares the same assumptions satisfies the rules. Then E corresponds to the least model of the above Horn formula (with assumptions stated as facts).

Attacks are expressible via \(\phi _{ att }(E,S) =\ \exists x,a \big (x \in E \wedge a \in S \wedge contrary (a,x) \big ),\) that is, set E attacks set S if E contains the contrary of some assumption in S. Note that the \( contrary \) predicate only relates assumptions (first argument) to atoms (second argument).

The property of being conflict-free can be expressed as \(\phi _{ cf }(E) = \phi _{ Th }(E) \wedge \lnot \phi _{ att }(E,E)\). The notion of defense can then be represented, as follows.

$$\phi _{ def }(E,A) =\ \forall S \big ( (S \subseteq \mathcal {L} \wedge \phi _{ Th }(S) \wedge \phi _{ att }(S,A) ) \rightarrow \phi _{ att }(E,S) \big )$$

In words, E defends A if for each S attacking A we find that E attacks S (for S we also need to check derivability via \(\phi _{ Th }(S)\)).

The admissible, complete, and preferred semantics can then be represented as stated next.

$$\begin{aligned} \phi _{ adm }(E) =&\ \phi _{ cf }(E) \wedge \phi _{ def }(E,E) \\ \phi _{com}(E) =&\ \phi _{ adm }(E) \wedge \forall S \big ( (\phi _{ def }(E,S) \wedge \phi _{ Th }(S)) \rightarrow S \subseteq E \big ) \\ \phi _{ prf }(E) =&\ \phi _{ adm }(E) \wedge \forall E' \lnot \big ( E' \subseteq \mathcal {L} \wedge E \subsetneq E' \wedge \phi _{ adm }(E') \big ) \end{aligned}$$

Finally, the stable semantics can be expressed by the formula \(\phi _{{ stb }}(E)\), given as \(\phi _{ cf }(E) \wedge \forall a \big ( asm (a) \rightarrow a \in E \vee \exists x (x \in E \wedge contrary (a,x) ) \big ),\) capturing directly the definition of stable semantics.

Credulous and skeptical reasoning can then be specified by stating

$$\begin{aligned} \phi ^ Cred _\sigma =&\ \exists E \big (E \subseteq \mathcal {L} \wedge \phi _{ query }(E) \wedge \phi _\sigma (E) \big ) \text { and }\\ \phi ^ Skept _\sigma =&\ \forall E \Big ( \big (E \subseteq \mathcal {L} \wedge \phi _\sigma (E)\big )\rightarrow \big (\phi _{ query }(E) \big ) \Big ). \end{aligned}$$

with \(\phi _{ query }(E) = (\forall x ( query (x) \rightarrow x \in E))\). The formula \(\phi _{ query }(E)\) directly states that the atoms defined by query are in E. We tacitly assume that queries defined by \( query \) refer only to atoms and not to rules.

By utilizing Courcelle’s theorem (Theorem 1), we can directly infer the following FPT result: its proof follows from the previous formulas, which declaratively represent the definitions of ABA, together with Theorem 1.

Theorem 2

Deciding credulous or skeptical acceptance of atoms in a given ABA framework under admissible, complete, stable, or preferred semantics is FPT w.r.t. tree-width.

4 Dynamic Programming Algorithms for ABA

In this section, we present our approach to compute reasoning tasks in ABA using dynamic programming (DP) algorithms. Due to page limitations, we present a DP algorithm using tree-decompositions for computing stable assumption sets, and discuss changes needed for admissible and complete assumption sets. We explain how to instantiate our algorithms in the D-FLAT [1, 2, 9] framework, allowing for a declarative specification of the DP algorithms, and give full declarative encodings in the online supplementary material.

On a high level, DP algorithms operating on tree-decompositions of a given instance usually work bottom-up, by computing tables for each bag in post-order. The tables computed for each bag represent partial solution candidates that can be inferred from the information encountered “so far”. In a final step, partial solutions are then combined into full solutions. We delegate this step to D-FLAT, which follows so-called extension pointers in a top-down fashion and combines compatible partial solutions.

For the sake of clarity, we present our DP algorithm for stable semantics by requiring that the tree-decomposition is nice and has empty bags as leaves and as the root. Our implementation in D-FLAT does not require nice tree-decompositions, however. In nice tree-decompositions each node has a type and is either a leaf, the root, an introduction node, a removal node, or a join node. Except for leaves and join nodes, the nodes have exactly one child, and join nodes have exactly two children. Bags of introduction nodes have all objects of the child bag and one additional object, while bags of removal nodes have exactly one object less. Join nodes and their children have exactly the same bags. These conditions allow for a more compact algorithm representation. One can efficiently compute a nice tree-decomposition from a given tree-decomposition [11].
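The node types of a nice tree-decomposition can be recognized purely from the bags. The following sketch is our own helper (not from the paper), assuming bags as a node-to-set dict and children as a node-to-list dict.

```python
def node_type(node, bags, children, root):
    """Classify a node of a nice tree-decomposition by comparing
    its bag with the bags of its children."""
    ch = children.get(node, [])
    if node == root:
        return "root"
    if not ch:
        return "leaf"
    if len(ch) == 2:
        # join node: both children carry exactly the same bag
        if bags[node] == bags[ch[0]] == bags[ch[1]]:
            return "join"
        raise ValueError("join node with differing bags")
    child_bag = bags[ch[0]]
    if child_bag < bags[node] and len(bags[node]) == len(child_bag) + 1:
        return "introduction"
    if bags[node] < child_bag and len(child_bag) == len(bags[node]) + 1:
        return "removal"
    raise ValueError("not a nice tree-decomposition")
```

Such a classifier is what lets a DP algorithm dispatch on node types, as Algorithm 2 does via its case distinction.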

Our algorithm is inspired by concepts for DP algorithms [35] for answer set programming [43]. For a given ABA framework \(F=(\mathcal {L},\mathcal {R},\mathcal {A},\overline{})\), we define a partial stable assumption set as a quadruple \((I, R, D, CW)\), where \(I \subseteq \mathcal {L}\) is called a witness, \(R \subseteq \mathcal {R}\), \(D \subseteq \mathcal {A}\), and CW is a set of so-called counterwitnesses, which are pairs \((C,R_C)\) with \(C \subseteq I\) and \(R_C \subseteq \mathcal {R}\). We utilize the concepts of witnesses and counterwitnesses, as presented by Fichte et al. [35], in our DP algorithms. Intuitively, each partial stable assumption set consists of a witness set I of atoms, comprising a candidate for a stable assumption set together with all atoms that can be derived from it, while D contains the assumptions attacked (“defeated”) by I. That is, I can be seen as the atoms and assumptions we “assume” to be part of a stable assumption set. Since we might encounter components of rules in various places in the tree-decomposition, we view the rules as Boolean Horn clauses and store in R all rules that are satisfied by I (similarly as in Sect. 3). Finally, a counterwitness is a pair \((C,R_C)\) where C shares the same assumptions as I, i.e., \(I \cap \mathcal {A} = C \cap \mathcal {A}\), but has strictly fewer atoms (i.e., represents a proper subset), and \(R_C\) is the associated set of satisfied rules. A counterwitness testifies that one can satisfy the rules \(R_C\) with fewer atoms, and is thus a counter to the derivability of the atoms in I from the assumptions in I if both I and C satisfy all rules in the root node. During the bottom-up computation, partial stable assumption sets and their counterwitnesses are modified, added, or removed, depending on the objects encountered in the bags. A partial stable assumption set contains only components of the current bag (and may use information from child bags).

Algorithm 1 (pseudocode listing)

Let us go over Algorithm 1 and Algorithm 2 for stable assumption sets. We show here the case for enumerating stable assumption sets, but credulous and skeptical reasoning can be achieved via small modifications: enumerating only stable assumption sets containing or not containing a query.

Algorithm 2 (pseudocode listing)

In Algorithm 1 we call Algorithm 2 for each node in the tree-decomposition of the given ABA instance and store the result in a table (Tab). Algorithm 2 computes these tables, given the tables of the child nodes. We store the partial stable assumption sets computed so far in Res (initially empty). In Line 3 we merge the tables of the two children of a join node, combining compatible partial stable assumption sets. Two such sets are compatible if they coincide on their first components I, i.e., for two sets \(\tau _i\) and \(\tau _j\) of partial stable assumption sets, the function \( merge \) returns \(\{( I, R_{i} \cup R_{j}, D_i \cup D_j,C_i \sqcup C_j) \mid ( I, R_{i}, D_i, C_{i}) \in \tau _i, ( I, R_{j}, D_j, C_{j}) \in \tau _j \} \). For merging the counterwitnesses, we use \(C_i \sqcup C_j = \{ ( C,R\cup R') \mid ( C,R) \in C_i, ( C,R') \in C_j \}\). If we are not in a join node, there is only one child, and we go over all previous partial stable assumption sets (loop beginning at Line 5, after extracting the single child table in the line before).
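The \( merge \) operation and the counterwitness combination \(\sqcup \) can be sketched as follows. Representing table rows as tuples of frozensets is our own choice for illustration; D-FLAT handles this storage internally.

```python
def join_cw(Ci, Cj):
    """C_i ⊔ C_j: pair up counterwitnesses sharing the same atom set C
    and take the union of their satisfied-rule sets."""
    return {(C, R | Rp) for (C, R) in Ci for (Cp, Rp) in Cj if C == Cp}

def merge(tau_i, tau_j):
    """Combine rows of the two child tables of a join node that agree
    on the witness set I (first component)."""
    return {(I, Ri | Rj, Di | Dj, frozenset(join_cw(Ci, Cj)))
            for (I, Ri, Di, Ci) in tau_i
            for (Ip, Rj, Dj, Cj) in tau_j if I == Ip}
```

Rows whose witnesses differ simply produce no combined row, mirroring the compatibility condition above.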

For the root node, there is a simple check: if an entry has no counterwitnesses (\(C= \emptyset \), Line 6), we have found an entry leading to a stable assumption set.

Line 7 considers bags in which an atom \( a \) is introduced. In this case, the set of partial stable assumption sets for this bag is obtained by taking each witness \( I \) of the child table and creating two partial stable assumption sets with witnesses \( I_{out} \) and \( I_{in} \), the former by not adding the introduced atom \( a \) (Line 8), and the latter by adding it (Line 13). Note that the partial stable assumption set with the introduced atom a is only constructed if it is not conflicting (Line 12). This can be seen in Fig. 1 at node 13, which introduces x, resulting in two witnesses on lines 1 and 2 of \(\tau _{13}\). In the figure, witnesses are shown in blue and counterwitnesses in red.

The \( Sat \) function computes the satisfied rules for a given node t, atom set I, and rule set R, defined as \(Sat(t,I, R) = \{r \in R \mid head(r) \in I \text { or } body(r)\cap bag(t) \nsubseteq I \}\), following a similar line of reasoning as in Sect. 3.
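A direct reading of the \( Sat \) function in Python, assuming rules are given as a dict from rule name to a (head, body) pair (a representation we choose here; the bag restriction on the body is the essential point):

```python
def sat(bag, I, R, rules):
    """Sat(t, I, R): the rules in R satisfied by the atom set I, where a
    body element counts against satisfaction only if it occurs in the bag."""
    return {r for r in R
            if rules[r][0] in I                      # head already in I, or
            or not (set(rules[r][1]) & bag) <= I}    # some visible body atom missing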

To construct the set of counterwitnesses for each newly constructed partial stable assumption set, an analogous process occurs (Line 10 and Line 15). In the simplest case, that of a new witness \(I_{out}\) constructed by not adding the introduced atom \( a \), the set \( C_{o} \) is obtained by preserving the counterwitnesses from the child partial solution, with an updated set of satisfied bag rules (Line 10).

In the case of a witness constructed by adding \( a \) to a child witness, if \( a \) is an introduced assumption, we require \( a \) to be part of the constructed counterwitnesses as well, and thus we do not preserve all the child counterwitnesses \( C \). In contrast, if the introduced atom is not an assumption (Line 16), we add the child counterwitnesses \( C \) to our set of newly created counterwitnesses \( C_{in} \). In intuitive terms, for a witness constructed by the addition of an introduced atom \(a \notin \mathcal {A}\), there can be counterwitnesses, i.e., subsets of the witness, in which the introduced atom has not been derived.

To enforce the return of stable assumption sets only, Algorithm 2 ensures that when an atom is removed at node \( t \), only those child assumption sets that satisfy the stability condition are preserved as partial stable assumption sets in \( t \). This is achieved through the check for stable assumption sets in Line 20. In case an atom \(a \notin \mathcal {A}\) is removed, this check is not required, and partial stable assumption sets are preserved with updated sets I and R. With respect to Fig. 1, node 2 removes a, and row 2 in \(\tau _{2}\) results from preserving a witness from \(\tau _{3}\) (not shown) such that \(a \in D\), \(b \notin I\), and \(b \notin D\).

Finally, consider the two possibilities of a bag either introducing or removing a rule r (Line 28 and Line 32, respectively). When a rule is introduced, witnesses and counterwitnesses are preserved, both with an updated set of satisfied rules accounting for the status of the introduced rule. In contrast, when a rule is removed, the connectedness property guarantees that we have already visited all atoms of the rule, hence the rule cannot become satisfied elsewhere. Algorithm 2 enforces satisfaction of the removed rule by not preserving partial stable assumption sets for which the removed rule has not been satisfied. Figure 1 depicts the case of an introduced rule in table \(\tau _{12}\), which introduces rule \(r_{2}\). In this case, preserved counterwitnesses, e.g., on line 2 of \(\tau _{12}\), must have an updated set of satisfied rules.

Admissibility and Complete Semantics. To verify whether an assumption set defends itself against all attacks, one can ensure that all its attackers are attacked by the set, or, equivalently, that there are no undefeated attackers. One can adapt Algorithm 2 as follows: (i) we track derivability for undefeated atoms, (ii) we check its correctness by ensuring that undefeated atoms are not attacked by the set of supported assumptions, i.e., by the candidate admissible set I, (iii) we track the set of atoms that are defeated, (iv) we require each assumption to be either undefeated or defeated, and (v) we add the admissibility check by not preserving those assumption sets that are attacked by some undefeated atom.

For complete semantics, one can ensure that the supported set of assumptions I includes all those assumptions that are undefeated and not attacked by some undefeated set of assumptions, i.e., those assumptions that are defended. In an algorithm for the complete semantics we can track an additional set AU of assumptions that are attacked by undefeated atoms. The algorithm then avoids preserving assumption sets that do not include undefeated assumptions that are not attacked by an undefeated assumption set.

Instantiating Our DP Algorithms. D-FLAT is a problem solving framework based on the DP paradigm that was specifically developed to provide means for declarative specification of algorithms operating on tree-decompositions of given problem instances. The framework allows prototyping an algorithm to solve a given problem by means of the ASP language.

We utilize D-FLAT to instantiate our algorithms for admissible, complete, and stable assumption sets, and associated reasoning tasks, in the ASP language. In the D-FLAT workflow, one can delegate the burden of computing a tree-decomposition of the problem instance, and of combining partial solutions, to D-FLAT itself. In this workflow, one specifies what partial solutions look like (for stable semantics, our Algorithm 2) and how they can be validly combined, also referred to as extending in D-FLAT terms. D-FLAT then takes care of the storage and actual combination of partial solutions. More concretely, at each node of the tree-decomposition, D-FLAT performs a call to an ASP solver and computes the partial solutions. Finally, D-FLAT combines partial solutions by following extension pointers, which intuitively specify which partial solutions can be appropriately combined into a complete solution of the problem instance.

5 Experiments

In this section, we present an empirical evaluation of our prototype implementation using D-FLAT. The encodings, instances, and an instance generator used in our evaluation are available with the online material.

Our prototype supports enumeration and counting of \(\sigma \)-assumption sets, for \(\sigma \in \{ adm , com ,{ stb }\}\), and checking skeptical and credulous acceptance for these semantics. Due to a potentially high number of assumption sets, we considered counting admissible, complete, and stable assumption sets in two modes, similar to previous works [18, 48]: first, counting all \(\sigma \)-assumption sets, and second, counting all \(\sigma \)-assumption sets with a given atom being derivable.

We observed that previous random generation methods for ABA instances often result in instances of high tree-width. To explore the potential of our tree-decomposition-based approach, we generated ABA frameworks exhibiting a controlled, low tree-width. Among general undirected graphs, \(k\times n\) grids (with vertices connected only to their vertical and horizontal neighbours) have tree-width \(\min (k,n)\). We adopted this structure by constructing \(k\times n\) grids of \(k \cdot n\) atoms. A third of these atoms are chosen as assumptions, uniformly at random. Each non-assumption atom in the grid is used as the head of a number rph (rules per head) of rules, with rph picked uniformly at random from a restricted range (0–3). The number of body elements of each rule is picked uniformly at random between 0 and the number of cross (horizontal and vertical) neighbours of the atom; the body elements themselves are then picked at random from the cross neighbourhood. This process is repeated rph times, and during the first iteration only assumptions are allowed as body elements. The contrary of each assumption is generated, based on a coin flip, from the two-step cross neighbourhood of the assumption in the grid. While the contraries might increase the tree-width, we observed that the resulting tree-width remains sufficiently bounded. Finally, a query atom is chosen uniformly at random for each instance.
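A minimal sketch of this generation scheme is given below. The function and parameter names are ours, and the exact randomization details (e.g., how the contrary coin flip and the body-size bound interact) are plausible readings of the description above rather than the generator's actual code.

```python
import random

def generate_grid_aba(k, n, seed=0):
    """Sketch of the grid-based ABA generator described above (names ours).

    Atoms sit on a k x n grid; a third are chosen as assumptions.  Each
    non-assumption atom becomes the head of rph (0..3) rules whose bodies
    are drawn from its cross (horizontal/vertical) neighbourhood, with
    only assumptions allowed in the body of the first rule.
    """
    rng = random.Random(seed)
    atoms = [(i, j) for i in range(k) for j in range(n)]
    assumptions = set(rng.sample(atoms, len(atoms) // 3))

    def cross(a):
        i, j = a
        return [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < k and 0 <= j + dj < n]

    rules = []
    for head in atoms:
        if head in assumptions:
            continue
        for it in range(rng.randint(0, 3)):  # rph rules for this head
            # first iteration: restrict the candidate pool to assumptions
            pool = [b for b in cross(head) if it > 0 or b in assumptions]
            body = rng.sample(pool, rng.randint(0, len(pool)))
            rules.append((head, body))

    # Contraries: coin flip over the two-step cross neighbourhood.
    contraries = {}
    for a in sorted(assumptions):
        cand = sorted({b for x in cross(a) for b in cross(x)} - {a})
        if cand and rng.random() < 0.5:
            contraries[a] = rng.choice(cand)

    query = rng.choice(atoms)  # uniformly random query atom
    return assumptions, rules, contraries, query
```

Since rule bodies only connect grid neighbours, the dependency structure of the generated framework stays close to the grid, keeping the tree-width near \(\min(k, n)\).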

Our experiments were conducted on a Linux machine with 64-bit architecture, powered by an Intel i5 CPU with 8 cores and 8 GB of memory. We imposed a timeout limit of 600 s per run, and a memory limit of 8192 MB.

We compared our approach against the current state-of-the-art ASP-based approach [48, 49] using Clingo [42] (version 5.4.1) as the ASP solver. Table 1 shows an overview of the results over a total of 81 instances, obtained by generating four instances for each combination of \(k \in \{2,3,5\}\) and \(n \in \{10, 20, 100, 200, 400, 500, 700\}\) and excluding 3 instances that resulted in errors for D-FLAT.

The results indicate that for the counting tasks, the semantics plays a major role: our prototype had fewer timeouts than Clingo when counting admissible assumption sets (the variant with a query is denoted by appending “-q”). For stable semantics, both approaches are roughly on par, with Clingo having a slight edge over the D-FLAT-based approach. For complete semantics, Clingo outperforms the D-FLAT approach.

We hypothesize that the runtime is largely explained by the number of solutions (assumption sets), together with the fact that the D-FLAT encoding of stable semantics is simpler. There are more admissible and complete assumption sets than stable assumption sets, and in particular, the number of admissible assumption sets can be high.

Table 1. Median running times (in seconds) and timeouts per task

6 Discussions

In this work we considered complex computational tasks arising in assumption-based argumentation, and showed that many such reasoning tasks are fixed-parameter tractable w.r.t. the tree-width of a given graph representation of ABA frameworks. We showed these results using monadic second-order logic (MSO) and Courcelle’s theorem. We developed DP algorithms for reasoning in ABA and implemented them in the D-FLAT framework, which allows for a declarative specification of such algorithms.

Taken together, our MSO and D-FLAT encodings can be useful for the extension of our work to other structured argumentation formalisms: the MSO encodings suggest wide applicability of FPT results, and our D-FLAT encodings have the potential for direct adaptations to related computational problems arising in structured argumentation, e.g., for the ASPIC\(^+\) formalism [51].

An interesting direction for future work is to utilize recent findings [36, 47] on deriving theoretical upper and lower runtime bounds by encoding problems, under certain constraints, in quantified Boolean logic instead of MSO. These allow for tighter bounds in terms of running time (e.g., they also include lower bounds). Moreover, decomposition-guided reductions were recently shown to be viable for problems in abstract and logic-based argumentation [34]. These reductions are guided by tree-decompositions and result in (quantified) Boolean formulas that linearly preserve tree-width. In contrast to these works, our approach uses D-FLAT, enabling ASP encodings of the DP algorithms. Our theoretical result (Theorem 2) complements existing results for abstract argumentation [27,28,29,30,31, 34, 47] and logic-based argumentation [34].

The performance of our prototype relies on the performance of D-FLAT. Recent works [23, 37] show that one can specify DP algorithms using database management systems (DBMS), which suggests another interesting route for extending our work with a different declarative framework for the specification of DP algorithms. Developing systems for ABA based on DP algorithms using DBMS, and on decomposition-guided reductions to quantified Boolean logic, appears to be a natural next step for further evaluating the strengths of these approaches, also in comparison to D-FLAT.

Our empirical results indicate the strength of our approach for complex counting tasks. We believe this could also be interesting for computationally intensive tasks in probabilistic argumentation [45], where counting or weighted summation problems arise naturally. Investigating possibilities for applying our approach to probabilistic argumentation appears to be a natural avenue for future work.