1 Introduction

Termination is an important topic in program verification. There is a wealth of work on automatic termination analysis of term rewrite systems (TRSs) which can also be used to analyze termination of programs in many other languages. Essentially all current termination tools for TRSs (e.g., AProVE [13], NaTT [36], MU-TERM [15], [27], etc.) use dependency pairs (DPs) [1, 11, 12, 16, 17].

A combination of two TRSs (a main TRS \(\mathcal {R}\) and a base TRS \(\mathcal {B}\)) is “relatively terminating” if there is no rewrite sequence that uses infinitely many steps with rules from \(\mathcal {R}\) (whereas rules from \(\mathcal {B}\) may be used infinitely often). Relative termination of TRSs has been studied for decades [8], and approaches based on relative rewriting are used for many applications, e.g., in complexity analysis [3, 6, 7, 29, 37], for proving confluence [19, 25], for certifying confluence proofs [30], for proving termination of narrowing [20, 31, 34], and for proving liveness [26].

However, while techniques and tools for analyzing ordinary termination of TRSs are very powerful due to the use of DPs, a direct application of standard DPs to analyze relative termination is not possible. Therefore, most existing approaches for automated analysis of relative termination are quite restricted in power. Hence, one of the largest open problems regarding DPs is Problem #106 of the RTA List of Open Problems [5]: Can we use the dependency pair method to prove relative termination? A first major step towards an answer to this question was presented in [21] by giving criteria for \(\mathcal {R}\) and \(\mathcal {B}\) that allow the use of ordinary DPs for relative termination.

Recently, we adapted DPs to analyze probabilistic innermost term rewriting by using so-called annotated dependency pairs (ADPs) [23] or dependency tuples (DTs) [22] (which were originally proposed for innermost complexity analysis of TRSs [32]). In these adaptations, one considers all defined function symbols in the right-hand side of a rule at once, whereas ordinary DPs consider them separately.

In this paper, we show that considering the defined symbols on right-hand sides separately (as for classical DPs) does not suffice for relative termination. On the other hand, we do not need to consider all of them at once either (i.e., we do not have to use the notions of ADPs or DTs from [22, 23, 32]). Instead, we introduce a new definition of ADPs that is suitable for relative termination and develop a corresponding ADP framework for automated relative termination proofs of TRSs. Moreover, while ADPs and DTs were only applicable for innermost rewriting in [22, 23, 32], we now adapt ADPs to full (relative) rewriting, i.e., we do not impose any specific evaluation strategy. So while [21] presented conditions under which the ordinary classical DP framework can be used to prove relative termination, in this paper we develop the first specific DP framework for relative termination.

Structure: We start with preliminaries on relative rewriting in Sect. 2. In Sect. 3 we recapitulate the core processors of the DP framework and show that classical DPs are unsound for relative termination in general. Moreover, we state the main results of [21] on criteria when ordinary DPs may nevertheless be used for relative termination. Afterwards, we introduce our novel notion of annotated dependency pairs for relative termination in Sect. 4 and present a corresponding new ADP framework in Sect. 5. We implemented our framework in the tool AProVE and in Sect. 6, we evaluate our implementation in comparison to other state-of-the-art tools. All proofs can be found in [24].

2 Relative Term Rewriting

We assume familiarity with term rewriting [2] and regard (finite) TRSs over a (finite) signature \(\varSigma \) and a set of variables \(\mathcal {V}\).

Example 1

Consider the following TRS \(\mathcal {R}_{\textsf{divL}}\), where \(\textsf{divL}(x, xs )\) computes the number that results from dividing x by each element of the list \( xs \). As usual, natural numbers are represented by the function symbols \(\textsf{0}\) and \(\textsf{s}\), and lists are represented via \(\textsf{nil}\) and \(\textsf{cons}\). Then \(\textsf{divL}(\textsf{s}^{24}(\textsf{0}), \textsf{cons}(\textsf{s}^4(\textsf{0}), \textsf{cons}(\textsf{s}^3(\textsf{0}),\textsf{nil})))\) evaluates to \(\textsf{s}^2(\textsf{0})\), because \((24/4)/3 = 2\). Here, \(\textsf{s}^2(\textsf{0})\) stands for \(\textsf{s}(\textsf{s}(\textsf{0}))\), etc.

$$\begin{aligned} \textsf{minus}(x, \textsf{0}) &\rightarrow x &(1)\\ \textsf{minus}(\textsf{s}(x), \textsf{s}(y)) &\rightarrow \textsf{minus}(x, y) &(2)\\ \textsf{div}(\textsf{0}, \textsf{s}(y)) &\rightarrow \textsf{0}&(3)\\ \textsf{div}(\textsf{s}(x), \textsf{s}(y)) &\rightarrow \textsf{s}(\textsf{div}(\textsf{minus}(x, y), \textsf{s}(y))) &(4)\\ \textsf{divL}(x, \textsf{nil}) &\rightarrow x &(5)\\ \textsf{divL}(x, \textsf{cons}(y, zs )) &\rightarrow \textsf{divL}(\textsf{div}(x, y), zs ) &(6) \end{aligned}$$

A TRS \(\mathcal {R}\) induces a rewrite relation \({\rightarrow _{\mathcal {R}}} \subseteq \mathcal {T}\left( \varSigma ,\mathcal {V}\right) \times \mathcal {T}\left( \varSigma ,\mathcal {V}\right) \) on terms where \(s \rightarrow _{\mathcal {R}} t\) holds if there is a \(\pi \in \textrm{Pos}(s)\), a rule \(\ell \rightarrow r \in \mathcal {R}\), and a substitution \(\sigma \) such that \(s|_{\pi }=\ell \sigma \) and \(t = s[r\sigma ]_{\pi }\). For example, \(\textsf{minus}(\textsf{s}(\textsf{0}),\textsf{s}(\textsf{0})) \rightarrow _{\mathcal {R}_{\textsf{divL}}} \textsf{minus}(\textsf{0},\textsf{0}) \rightarrow _{\mathcal {R}_{\textsf{divL}}} \textsf{0}\). We call a TRS \(\mathcal {R}\) terminating (abbreviated SN, for “strongly normalizing”) if \(\rightarrow _{\mathcal {R}}\) is well founded. Using the DP framework, one can easily prove that \(\mathcal {R}_{\textsf{divL}}\) is SN (see Sect. 3.1). In particular, in each application of the recursive \(\textsf{divL}\)-rule (6), the length of the list in \(\textsf{divL}\)’s second argument is decreased by one.
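To make the rewrite relation \(\rightarrow _{\mathcal {R}}\) concrete, the following minimal Python sketch (an illustration, not part of the paper's formal development) implements matching, substitution application, and a single rewrite step. Terms are encoded as nested tuples `("f", arg1, ...)` with variables as plain strings; the helper names `match`, `apply_sub`, and `rewrite_step` are our own.

```python
# A minimal sketch of first-order term rewriting. Terms are nested tuples
# ("f", arg1, ...); variables are plain Python strings.

def match(pattern, term, sub):
    """Extend substitution `sub` so that pattern instantiated by sub equals term,
    or return None if no such extension exists."""
    if isinstance(pattern, str):                      # pattern is a variable
        if pattern in sub:
            return sub if sub[pattern] == term else None
        return {**sub, pattern: term}
    if isinstance(term, str) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        sub = match(p, t, sub)
        if sub is None:
            return None
    return sub

def apply_sub(term, sub):
    if isinstance(term, str):
        return sub.get(term, term)
    return (term[0],) + tuple(apply_sub(a, sub) for a in term[1:])

def rewrite_step(trs, term):
    """Return some term reachable in one ->_R step, or None for a normal form."""
    for lhs, rhs in trs:
        sub = match(lhs, term, {})
        if sub is not None:                           # redex at the root
            return apply_sub(rhs, sub)
    if not isinstance(term, str):                     # otherwise rewrite below
        for i, arg in enumerate(term[1:], start=1):
            new = rewrite_step(trs, arg)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

# Rules (1) and (2): minus(x, 0) -> x and minus(s(x), s(y)) -> minus(x, y)
R = [(("minus", "x", ("0",)), "x"),
     (("minus", ("s", "x"), ("s", "y")), ("minus", "x", "y"))]

t = ("minus", ("s", ("0",)), ("s", ("0",)))
while True:
    nxt = rewrite_step(R, t)
    if nxt is None:
        break
    t = nxt
print(t)  # ('0',): minus(s(0), s(0)) ->* 0, as in the text
```

The loop reproduces the reduction \(\textsf{minus}(\textsf{s}(\textsf{0}),\textsf{s}(\textsf{0})) \rightarrow \textsf{minus}(\textsf{0},\textsf{0}) \rightarrow \textsf{0}\) from above.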

In the relative setting, one considers two TRSs \(\mathcal {R}\) and \(\mathcal {B}\). We say that \(\mathcal {R}\) is relatively terminating w.r.t. \(\mathcal {B}\) (i.e., \(\mathcal {R}/ \mathcal {B}\) is SN) if there is no infinite \((\rightarrow _{\mathcal {R}} \cup \rightarrow _{\mathcal {B}})\)-rewrite sequence that uses an infinite number of \(\rightarrow _{\mathcal {R}}\)-steps. We refer to \(\mathcal {R}\) as the main and \(\mathcal {B}\) as the base TRS.

Example 2

Let \(\mathcal {R}_{\textsf{divL}}\) be the main TRS. Since the order of the list elements does not affect the termination of \(\mathcal {R}_{\textsf{divL}}\), this algorithm also works for multisets. To abstract lists to multisets, we add the base TRS \(\mathcal {B}_{\textsf{mset}} = \{(7)\}\).

$$\begin{aligned} \textsf{cons}(x, \textsf{cons}(y, zs )) \rightarrow \textsf{cons}(y, \textsf{cons}(x, zs )) \end{aligned}$$
(7)

\(\mathcal {B}_{\textsf{mset}}\) is non-terminating, since it can switch elements in a list arbitrarily often. However, \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}}\) is SN as each application of Rule (6) still reduces the list length. Indeed, termination of \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}}\) can also be shown via the approach of [21], because it allows us to apply (standard) DPs in this example, see Example 13.

However, if \(\mathcal {B}_{\textsf{mset}}\) is replaced by the base TRS \(\mathcal {B}_{\textsf{mset}2}\) with the rule

$$\begin{aligned} \textsf{divL}(z,\textsf{cons}(x, \textsf{cons}(y, zs ))) \rightarrow \textsf{divL}(z,\textsf{cons}(y, \textsf{cons}(x, zs ))), \end{aligned}$$
(8)

then \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}2}\) remains terminating, but the approach of [21] is no longer applicable, see Example 14. In contrast, with our new DP framework in Sects. 4 and 5, termination of such examples can be proved automatically.

We will use the following four examples to illustrate the problems that one has to take into account when analyzing relative termination. So these examples show why a naive adaptation of dependency pairs does not work in the relative setting and why we need our new notion of annotated dependency pairs. The examples represent different types of infinite rewrite sequences that can lead to non-termination in the relative setting: redex-duplicating, redex-creating (or “-emitting”), and ordinary infinite sequences.

Example 3

(Redex-Duplicating). Consider the TRSs \(\mathcal {R}_1 = \{\textsf{a}\rightarrow \textsf{b}\}\) and \(\mathcal {B}_1 = \{\textsf{f}(x) \rightarrow \textsf{d}(\textsf{f}(x),x)\}\) from [21, Example 4]. \(\mathcal {R}_1 / \mathcal {B}_1\) is not SN due to the infinite rewrite sequence \(\underline{\textsf{f}(\textsf{a})} \rightarrow _{\mathcal {B}_1} \textsf{d}(\textsf{f}(\textsf{a}),\underline{\textsf{a}}) \rightarrow _{\mathcal {R}_1} \textsf{d}(\underline{\textsf{f}(\textsf{a})},\textsf{b}) \rightarrow _{\mathcal {B}_1} \textsf{d}(\textsf{d}(\textsf{f}(\textsf{a}),\underline{\textsf{a}}),\textsf{b})\rightarrow _{\mathcal {R}_1} \textsf{d}(\textsf{d}(\textsf{f}(\textsf{a}),\textsf{b}),\textsf{b}) \rightarrow _{\mathcal {B}_1} \ldots \) The reason is that \(\mathcal {B}_1\) can be used to duplicate an arbitrary \(\mathcal {R}_1\)-redex infinitely often.

Example 4

(Redex-Creating on Parallel Position). Next, consider \(\mathcal {R}_2 = \{\textsf{a}\rightarrow \textsf{b}\}\) and \(\mathcal {B}_2 = \{\textsf{f}\rightarrow \textsf{d}(\textsf{f},\textsf{a})\}\). \(\mathcal {R}_2 / \mathcal {B}_2\) is not SN as we have the infinite rewrite sequence \(\underline{\textsf{f}} \rightarrow _{\mathcal {B}_2} \textsf{d}(\textsf{f},\underline{\textsf{a}}) \rightarrow _{\mathcal {R}_2} \textsf{d}(\underline{\textsf{f}},\textsf{b}) \rightarrow _{\mathcal {B}_2} \textsf{d}(\textsf{d}(\textsf{f},\underline{\textsf{a}}),\textsf{b}) \rightarrow _{\mathcal {R}_2} \textsf{d}(\textsf{d}(\underline{\textsf{f}},\textsf{b}),\textsf{b}) \rightarrow _{\mathcal {B}_2} \ldots \) Here, \(\mathcal {B}_2\) can create an \(\mathcal {R}_2\)-redex infinitely often (where in the right-hand side \(\textsf{d}(\textsf{f},\textsf{a})\) of \(\mathcal {B}_2\)’s rule, the \(\mathcal {B}_2\)-redex \(\textsf{f}\) and the created \(\mathcal {R}_2\)-redex \(\textsf{a}\) are on parallel positions).

Example 5

(Redex-Creating on Position Above). Let \(\mathcal {R}_3 = \{\textsf{a}(x) \rightarrow \textsf{b}(x)\}\) and \(\mathcal {B}_3 = \{\textsf{f}\rightarrow \textsf{a}(\textsf{f})\}\). \(\mathcal {R}_3 / \mathcal {B}_3\) is not SN as we have \(\underline{\textsf{f}} \rightarrow _{\mathcal {B}_3} \underline{\textsf{a}}(\textsf{f}) \rightarrow _{\mathcal {R}_3} \textsf{b}(\underline{\textsf{f}}) \rightarrow _{\mathcal {B}_3} \textsf{b}(\underline{\textsf{a}}(\textsf{f})) \rightarrow _{\mathcal {R}_3} \textsf{b}(\textsf{b}(\underline{\textsf{f}})) \rightarrow _{\mathcal {B}_3}\ldots \), i.e., again \(\mathcal {B}_3\) can be used to create an \(\mathcal {R}_3\)-redex infinitely often. In the right-hand side \(\textsf{a}(\textsf{f})\) of \(\mathcal {B}_3\)’s rule, the position of the created \(\mathcal {R}_3\)-redex \(\textsf{a}(\ldots )\) is above the position of the \(\mathcal {B}_3\)-redex \(\textsf{f}\).

Example 6

(Ordinary Infinite). Finally, consider \(\mathcal {R}_4 = \{\textsf{a}\rightarrow \textsf{b}\}\) and \(\mathcal {B}_4 = \{ \textsf{b}\rightarrow \textsf{a}\}\). Here, the base TRS \(\mathcal {B}_4\) can neither duplicate nor create an \(\mathcal {R}_4\)-redex infinitely often, but in combination with the main TRS \(\mathcal {R}_4\) we obtain the infinite rewrite sequence \(\textsf{a}\rightarrow _{\mathcal {R}_4} \textsf{b}\rightarrow _{\mathcal {B}_4} \textsf{a}\rightarrow _{\mathcal {R}_4} \textsf{b}\rightarrow _{\mathcal {B}_4} \ldots \) Thus, \(\mathcal {R}_4 / \mathcal {B}_4\) is not SN.
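The alternation in Example 6 can be simulated directly. The toy Python sketch below (our illustration; terms are plain strings and each TRS is a dict from left- to right-hand sides, which suffices for these ground unary rules) runs a bounded prefix of the combined rewrite relation and counts the \(\rightarrow _{\mathcal {R}_4}\)-steps.

```python
# A bounded simulation of Example 6: R4 = {a -> b} (main), B4 = {b -> a} (base).
# Counting ->_R-steps shows why R4 / B4 is not relatively terminating: every
# second step of the combined relation is an R-step, so they never stop.

R4 = {"a": "b"}   # main TRS
B4 = {"b": "a"}   # base TRS

term, r_steps = "a", 0
for _ in range(10):                # 10 steps of ->_{R4} u ->_{B4}
    if term in R4:
        term, r_steps = R4[term], r_steps + 1   # a ->_R b
    elif term in B4:
        term = B4[term]                          # b ->_B a
print(r_steps)  # 5 R-steps in 10 combined steps
```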

3 DP Framework

We first recapitulate dependency pairs for ordinary (non-relative) rewriting in Sect. 3.1 and summarize existing results on DPs for relative rewriting in Sect. 3.2.

3.1 Dependency Pairs for Ordinary Term Rewriting

We recapitulate DPs and the two most important processors of the DP framework, and refer to, e.g., [1, 11, 12, 16, 17] for more details. As an example, we show how to prove termination of \(\mathcal {R}_{\textsf{divL}}\) without the base \(\mathcal {B}_{\textsf{mset}}\). We decompose the signature \(\varSigma = \mathcal {C}\uplus \mathcal {D}\) of a TRS \(\mathcal {R}\) such that \(f \in \mathcal {D}\) if \(f = {\text {root}}(\ell )\) for some rule \(\ell \rightarrow r \in \mathcal {R}\). The symbols in \(\mathcal {C}\) and \(\mathcal {D}\) are called constructors and defined symbols of \(\mathcal {R}\), respectively. For every \(f \in \mathcal {D}\), we introduce a fresh annotated (or “marked”) symbol \(f^{\#}\) of the same arity. Let \(\mathcal {D}^\#\) denote the set of all annotated symbols, and let \(\varSigma ^\#= \varSigma \uplus \mathcal {D}^\#\). To ease readability, we often use capital letters like \(\textsf{F}\) instead of \(\textsf{f}^\#\). For any term \(t = f(t_1,\ldots ,t_n) \in \mathcal {T}\left( \varSigma ,\mathcal {V}\right) \) with \(f \in \mathcal {D}\), let \(t^{\#} = f^{\#}(t_1,\ldots ,t_n)\). For each rule \(\ell \rightarrow r\) and each subterm t of r with defined root symbol, one obtains a dependency pair \(\ell ^\# \rightarrow t^\#\). Let \(\mathcal{D}\mathcal{P}(\mathcal {R})\) denote the set of all dependency pairs of the TRS \(\mathcal {R}\).
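The construction of \(\mathcal{D}\mathcal{P}(\mathcal {R})\) just described can be sketched in a few lines of Python (an illustration under our tuple encoding of terms: `("f", arg1, ...)` with variables as strings, and \(f^{\#}\) written by appending `"#"` to the symbol name; `dependency_pairs` is a hypothetical helper).

```python
# A sketch of the canonical dependency-pair construction: one DP l# -> t#
# for every subterm t of a right-hand side whose root is a defined symbol.

def subterms(t):
    yield t
    if not isinstance(t, str):
        for a in t[1:]:
            yield from subterms(a)

def mark(t):                       # t^# : annotate the root symbol
    return (t[0] + "#",) + t[1:]

def dependency_pairs(trs):
    defined = {lhs[0] for lhs, _ in trs}            # roots of left-hand sides
    return [(mark(lhs), mark(s))
            for lhs, rhs in trs
            for s in subterms(rhs)
            if not isinstance(s, str) and s[0] in defined]

# Rules (4)-(6) of R_divL (minus omitted, so minus is not defined here):
R = [(("divL", "x", ("nil",)), "x"),
     (("divL", "x", ("cons", "y", "zs")), ("divL", ("div", "x", "y"), "zs")),
     (("div", ("s", "x"), ("s", "y")), ("s", ("div", ("minus", "x", "y"), ("s", "y"))))]

for lhs, rhs in dependency_pairs(R):
    print(lhs, "->", rhs)
# includes DL(x, cons(y, zs)) -> DL(div(x, y), zs) and DL(x, cons(y, zs)) -> D(x, y)
```

Note that the first rule contributes no DP, since its right-hand side contains no defined symbol.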

Example 7

For \(\mathcal {R}_{\textsf{divL}}\) from Example 1, we obtain the following five dependency pairs.

$$\begin{aligned} \textsf{M}(\textsf{s}(x), \textsf{s}(y)) &\rightarrow \textsf{M}(x, y) &(9)\\ \textsf{D}(\textsf{s}(x), \textsf{s}(y)) &\rightarrow \textsf{M}(x, y) &(10)\\ \textsf{D}(\textsf{s}(x), \textsf{s}(y)) &\rightarrow \textsf{D}(\textsf{minus}(x, y), \textsf{s}(y)) &(11)\\ \textsf{DL}(x, \textsf{cons}(y, zs )) &\rightarrow \textsf{D}(x, y) &(12)\\ \textsf{DL}(x, \textsf{cons}(y, zs )) &\rightarrow \textsf{DL}(\textsf{div}(x, y), zs ) &(13) \end{aligned}$$

The DP framework operates on DP problems \((\mathcal {P}, \mathcal {R})\) where \(\mathcal {P}\) is a (finite) set of DPs, and \(\mathcal {R}\) is a (finite) TRS. A (possibly infinite) sequence \(t_0, t_1, t_2, \ldots \) with \(t_i \rightarrow _{\mathcal {P}}^{\varepsilon } \circ \rightarrow _{\mathcal {R}}^* \, t_{i+1}\) for all i is a \((\mathcal {P}, \mathcal {R})\)-chain. Here, \(\rightarrow _{\mathcal {P}}^{\varepsilon }\) denotes rewrite steps at the root. A chain represents subsequent “function calls” in evaluations. Between two function calls (corresponding to steps with \(\mathcal {P}\), called \(\textbf{p}\)-steps) one can evaluate the arguments using arbitrarily many steps with \(\mathcal {R}\) (called \(\textbf{r}\)-steps). So \(\textbf{r}\)-steps are rewrite steps that are needed in order to enable another \(\textbf{p}\)-step at a position above later on. Hence, \(\textsf{DL}(\textsf{s}(\textsf{0}), \textsf{cons}(\textsf{s}(\textsf{0}), \textsf{nil})), \textsf{DL}(\textsf{s}(\textsf{0}),\textsf{nil})\) is a \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}), \mathcal {R}_{\textsf{divL}})\)-chain, as \(\textsf{DL}(\textsf{s}(\textsf{0}), \textsf{cons}(\textsf{s}(\textsf{0}), \textsf{nil})) \rightarrow _{\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}})}^{\varepsilon } \textsf{DL}(\textsf{div}(\textsf{s}(\textsf{0}), \textsf{s}(\textsf{0})), \textsf{nil}) \rightarrow _{\mathcal {R}_{\textsf{divL}}}^* \textsf{DL}(\textsf{s}(\textsf{0}), \textsf{nil})\).

A DP problem \((\mathcal {P}, \mathcal {R})\) is called terminating (SN) if there is no infinite \((\mathcal {P}, \mathcal {R})\)-chain. The main result on DPs is the chain criterion which states that a TRS \(\mathcal {R}\) is SN iff \((\mathcal{D}\mathcal{P}(\mathcal {R}), \mathcal {R})\) is SN. The key idea of the DP framework is a divide-and-conquer approach which applies DP processors to transform DP problems into simpler sub-problems. A DP processor \({\text {Proc}}\) has the form \({\text {Proc}}(\mathcal {P}, \mathcal {R}) = \{(\mathcal {P}_1,\mathcal {R}_1), \ldots , (\mathcal {P}_n,\mathcal {R}_n)\}\), where \(\mathcal {P}, \mathcal {P}_1, \ldots , \mathcal {P}_n\) are sets of DPs and \(\mathcal {R}, \mathcal {R}_1, \ldots , \mathcal {R}_n\) are TRSs. \({\text {Proc}}\) is sound if \((\mathcal {P}, \mathcal {R})\) is SN whenever \((\mathcal {P}_i,\mathcal {R}_i)\) is SN for all \(1 \le i \le n\). It is complete if \((\mathcal {P}_i,\mathcal {R}_i)\) is SN for all \(1 \le i \le n\) whenever \((\mathcal {P}, \mathcal {R})\) is SN.

So for a TRS \(\mathcal {R}\), one starts with the initial DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}), \mathcal {R})\) and applies sound (and preferably complete) DP processors until all sub-problems are “solved” (i.e., processors transform them to the empty set). This allows for modular termination proofs, as different techniques can be applied on each sub-problem.

One of the most important processors is the dependency graph processor. The \((\mathcal {P}, \mathcal {R})\)-dependency graph indicates which DPs can be used after each other in chains. Its set of nodes is \(\mathcal {P}\) and there is an edge from \(s_1 \rightarrow t_1\) to \(s_2 \rightarrow t_2\) if there are substitutions \(\sigma _1, \sigma _2\) with \(t_1 \sigma _1 \rightarrow _{\mathcal {R}}^* s_2 \sigma _2\). Any infinite \((\mathcal {P}, \mathcal {R})\)-chain corresponds to an infinite path in the dependency graph, and since the graph is finite, this infinite path must end in a strongly connected component (SCC). Hence, it suffices to consider the SCCs of this graph independently.

Theorem 8

(Dep. Graph Processor). For the SCCs \(\mathcal {P}_1, \ldots , \mathcal {P}_n\) of the \((\mathcal {P}, \mathcal {R})\)-dependency graph, \({\text {Proc}}_{\texttt{DG}}(\mathcal {P},\mathcal {R}) = \{(\mathcal {P}_1,\mathcal {R}), \ldots , (\mathcal {P}_n,\mathcal {R})\}\) is sound and complete.

[Figure: the \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}), \mathcal {R}_{\textsf{divL}})\)-dependency graph on the nodes (9)–(13)]

While the exact dependency graph is not computable in general, there are several techniques to over-approximate it automatically [1, 12, 16]. The \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}), \mathcal {R}_{\textsf{divL}})\)-dependency graph for our example is shown above. Here, \({\text {Proc}}_{\texttt{DG}}(\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}),\!\mathcal {R}_{\textsf{divL}})\) yields \(\bigl (\{(9)\},\!\mathcal {R}_{\textsf{divL}}\bigr )\), \(\bigl (\{(11)\},\!\mathcal {R}_{\textsf{divL}}\bigr )\), and \(\bigl (\{(13)\},\!\mathcal {R}_{\textsf{divL}}\bigr )\).
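The SCC decomposition behind \({\text {Proc}}_{\texttt{DG}}\) is plain graph algorithmics. The sketch below (our illustration) applies Tarjan's algorithm to an edge list for the example's dependency graph; the edge list is our reconstruction from the example's DPs (an assumption, since the figure itself is not reproduced here), and only SCCs containing a cycle are kept, since only those can support infinite chains.

```python
# Dependency graph processor, SCC part: keep the SCCs that contain a cycle
# (size > 1, or a single node with a self-loop).

def sccs_with_cycle(nodes, edges):
    """Tarjan's SCC algorithm; returns the SCCs that contain a cycle."""
    index, low, stack, on, out, counter = {}, {}, [], set(), [], [0]
    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on.add(v)
        for w in edges.get(v, []):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                 # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on.discard(w); comp.append(w)
                if w == v:
                    break
            if len(comp) > 1 or v in edges.get(v, []):
                out.append(sorted(comp))
    for v in nodes:
        if v not in index:
            visit(v)
    return sorted(out)

# Nodes are the DPs (9)-(13); edges reconstructed from the example:
# (13) loops and reaches (12); (12) reaches the D-DPs (10), (11);
# (11) loops and reaches (10); (10) reaches (9); (9) loops.
nodes = [9, 10, 11, 12, 13]
edges = {13: [13, 12], 12: [11, 10], 11: [11, 10], 10: [9], 9: [9]}
print(sccs_with_cycle(nodes, edges))  # [[9], [11], [13]]
```

This matches the three sub-problems \(\bigl (\{(9)\}, \mathcal {R}_{\textsf{divL}}\bigr )\), \(\bigl (\{(11)\}, \mathcal {R}_{\textsf{divL}}\bigr )\), and \(\bigl (\{(13)\}, \mathcal {R}_{\textsf{divL}}\bigr )\) returned by the processor.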

The second crucial processor adapts classical reduction orders to DP problems. A reduction pair \((\succsim , \succ )\) consists of two relations on terms such that \(\succsim \) is reflexive, transitive, and closed under contexts and substitutions, and \(\succ \) is a well-founded order that is closed under substitutions but does not have to be closed under contexts. Moreover, \(\succsim \) and \(\succ \) must be compatible, i.e., \({\succsim } \circ {\succ } \circ {\succsim } \, \subseteq \, {\succ }\). The reduction pair processor requires that all rules and dependency pairs are weakly decreasing, and it removes those DPs that are strictly decreasing.

Theorem 9

(Reduction Pair Processor). Let \((\succsim , \succ )\) be a reduction pair such that \(\mathcal {P} \cup \mathcal {R}\subseteq \, \succsim \). Then \({\text {Proc}}_{\texttt{RPP}}(\mathcal {P},\mathcal {R}) = \{(\mathcal {P} \, \setminus \succ , \mathcal {R})\}\) is sound and complete.

For example, one can use reduction pairs based on polynomial interpretations [28]. A polynomial interpretation \({\text {Pol}}\) is a \(\varSigma ^\#\)-algebra which maps every function symbol \(f \in \varSigma ^\#\) to a polynomial \(f_{{\text {Pol}}} \in \mathbb {N}[\mathcal {V}]\). \({\text {Pol}}(t)\) denotes the interpretation of a term t by the \(\varSigma ^\#\)-algebra \({\text {Pol}}\). Then \({\text {Pol}}\) induces a reduction pair \((\succsim , \succ )\) where \(t_1 \succsim t_2\) (\(t_1 \succ t_2\)) holds if the inequation \({\text {Pol}}(t_1) \ge {\text {Pol}}(t_2)\) (\({\text {Pol}}(t_1) > {\text {Pol}}(t_2)\)) is true for all instantiations of its variables by natural numbers.

For the three remaining DP problems \(\bigl (\{(9)\}, \mathcal {R}_{\textsf{divL}}\bigr )\), \(\bigl (\{(11)\}, \mathcal {R}_{\textsf{divL}}\bigr )\), and \(\bigl (\{(13)\}, \mathcal {R}_{\textsf{divL}}\bigr )\) in our example, we can apply the reduction pair processor using the polynomial interpretation which maps \(\textsf{0}\) and \(\textsf{nil}\) to 0, \(\textsf{s}(x)\) to \(x + 1\), \(\textsf{cons}(y, xs )\) to \( xs + 1\), \(\textsf{DL}(x, xs )\) to \( xs \), and all other symbols to their first arguments. Since (9), (11), and (13) are strictly decreasing, \({\text {Proc}}_{\texttt{RPP}}\) transforms all three remaining DP problems into DP problems of the form \((\varnothing , \ldots )\). As \({\text {Proc}}_{\texttt{DG}}(\varnothing , \ldots ) = \varnothing \) and all processors used are sound, this means that there is no infinite chain for the initial DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}), \mathcal {R}_{\textsf{divL}})\) and thus, \(\mathcal {R}_{\textsf{divL}}\) is SN.
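The interpretation just given can be checked mechanically. In the sketch below (our illustration), linear polynomials over \(\mathbb {N}\) are dicts mapping a variable name to its coefficient (with the key `1` for the constant part), and coefficient-wise comparison is used as a sufficient condition for \({\text {Pol}}(s) \ge {\text {Pol}}(t)\) (resp. \(>\)) under all instantiations by naturals; terms are nested tuples with variables as strings.

```python
# Reduction pair processor with the linear interpretation from the text:
# Pol(0) = Pol(nil) = 0, Pol(s(x)) = x + 1, Pol(cons(y, xs)) = xs + 1,
# Pol(DL(x, xs)) = xs, every other symbol is mapped to its first argument.

def add(p, q):
    """Sum of two linear polynomials {var: coeff, 1: constant}."""
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, 0) + v
    return r

def poly(term):
    """Pol(t) for this fixed interpretation."""
    if isinstance(term, str):                 # a variable
        return {term: 1, 1: 0}
    f, args = term[0], [poly(a) for a in term[1:]]
    if f in ("0", "nil"):
        return {1: 0}
    if f == "s":
        return add(args[0], {1: 1})           # s(x)       -> x + 1
    if f == "cons":
        return add(args[1], {1: 1})           # cons(y,xs) -> xs + 1
    if f == "DL":
        return args[1]                        # DL(x,xs)   -> xs
    return args[0]                            # all other symbols: 1st argument

def geq(p, q):
    """Coefficient-wise >=: sufficient for Pol(s) >= Pol(t) over the naturals."""
    return all(p.get(k, 0) >= v for k, v in q.items())

def gt(p, q):
    """Coefficient-wise >= with a strictly larger constant: sufficient for >."""
    return geq(p, q) and p.get(1, 0) > q.get(1, 0)

# DP (13) is strictly decreasing, rule (6) is weakly decreasing:
dp_l, dp_r = ("DL", "x", ("cons", "y", "zs")), ("DL", ("div", "x", "y"), "zs")
rl_l, rl_r = ("divL", "x", ("cons", "y", "zs")), ("divL", ("div", "x", "y"), "zs")
print(gt(poly(dp_l), poly(dp_r)), geq(poly(rl_l), poly(rl_r)))  # True True
```

For DP (13), \({\text {Pol}}\) yields \( zs + 1 > zs \), which is exactly the strict decrease used in the text.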

3.2 Dependency Pairs for Relative Termination

Up to now, we only considered DPs for ordinary termination of TRSs. The easiest idea to use DPs in the relative setting is to start with the DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}\cup \mathcal {B}), \mathcal {R}\cup \mathcal {B})\). This would prove termination of \(\mathcal {R}\cup \mathcal {B}\), which implies termination of \(\mathcal {R}/ \mathcal {B}\), but ignores that the rules in \(\mathcal {B}\) do not have to terminate. Since termination of DP problems is already defined via a relative condition (finite chains can only have finitely many \(\textbf{p}\)-steps but there may exist rewrite sequences with infinitely many \(\textbf{r}\)-steps that are not chains), another idea for proving termination of \(\mathcal {R}/ \mathcal {B}\) is to start with the DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}), \mathcal {R}\cup \mathcal {B})\), which only considers the DPs of \(\mathcal {R}\). However, this is unsound in general.

Example 10

The only defined symbol of \(\mathcal {R}_2\) from Example 4 is \(\textsf{a}\). Since the right-hand side of \(\mathcal {R}_2\)’s rule does not contain defined symbols, we would get the DP problem \((\varnothing , \mathcal {R}_2 \cup \mathcal {B}_2)\), which is SN as it has no DP. Thus, we would falsely conclude that \(\mathcal {R}_2 / \mathcal {B}_2\) is SN. Similarly, this approach would also falsely “prove” SN for Examples 3 and 5. Thus, the standard notion of DPs is unsound for relative termination.

In [21], it was shown that under certain conditions on \(\mathcal {R}\) and \(\mathcal {B}\), starting with the DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}\cup \mathcal {B}_a), \mathcal {R}\cup \mathcal {B})\) for a subset \(\mathcal {B}_a \subseteq \mathcal {B}\) is sound for relative termination. The two conditions on the TRSs are dominance and being non-duplicating. We say that \(\mathcal {R}\) dominates \(\mathcal {B}\) if defined symbols of \(\mathcal {R}\) do not occur in the right-hand sides of rules of \(\mathcal {B}\). A TRS is non-duplicating if no variable occurs more often on the right-hand side of a rule than on its left-hand side.
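Both side conditions are simple syntactic checks. The sketch below (our illustration, using the nested-tuple term encoding with variables as strings; `dominates` and `non_duplicating` are hypothetical helper names) tests them on the TRSs from Examples 2 and 3.

```python
# Syntactic checks for the two conditions of [21]:
# dominance and being non-duplicating.

from collections import Counter

def symbols(t):
    if not isinstance(t, str):
        yield t[0]
        for a in t[1:]:
            yield from symbols(a)

def variables(t):
    if isinstance(t, str):
        yield t
    else:
        for a in t[1:]:
            yield from variables(a)

def dominates(R, B):
    """R dominates B: no defined symbol of R occurs in a right-hand side of B."""
    defined_R = {lhs[0] for lhs, _ in R}
    return all(f not in defined_R for _, rhs in B for f in symbols(rhs))

def non_duplicating(B):
    """No variable occurs more often in a right-hand side than in the left-hand side."""
    for lhs, rhs in B:
        lc, rc = Counter(variables(lhs)), Counter(variables(rhs))
        if any(rc[v] > lc[v] for v in rc):
            return False
    return True

B1 = [(("f", "x"), ("d", ("f", "x"), "x"))]          # Example 3's base TRS: duplicates x
Bmset = [(("cons", "x", ("cons", "y", "zs")),
          ("cons", "y", ("cons", "x", "zs")))]       # rule (7)
R = [(("divL", "x", ("cons", "y", "zs")),
      ("divL", ("div", "x", "y"), "zs"))]            # rule (6)
print(non_duplicating(B1), non_duplicating(Bmset), dominates(R, Bmset))
# False True True
```

So \(\mathcal {B}_1\) fails the non-duplication check, while \(\mathcal {B}_{\textsf{mset}}\) is non-duplicating and is dominated by the \(\textsf{divL}\)-rule (its right-hand side only uses the constructor \(\textsf{cons}\)).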

Theorem 11

(First Main Result of [21], Sound and Complete). Let \(\mathcal {R}\) and \(\mathcal {B}\) be TRSs such that \(\mathcal {B}\) is non-duplicating and \(\mathcal {R}\) dominates \(\mathcal {B}\). Then the DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}), \mathcal {R}\cup \mathcal {B})\) is SN iff \(\mathcal {R}/ \mathcal {B}\) is SN.

Theorem 12

(Second Main Result of [21], only Sound). Let \(\mathcal {R}\) and \(\mathcal {B}= \mathcal {B}_a \uplus \mathcal {B}_b\) be TRSs. If \(\mathcal {B}_b\) is non-duplicating, \(\mathcal {R}\cup \mathcal {B}_a\) dominates \(\mathcal {B}_b\), and the DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}\cup \mathcal {B}_a), \mathcal {R}\cup \mathcal {B})\) is SN, then \(\mathcal {R}/ \mathcal {B}\) is SN.

Example 13

For the main TRS \(\mathcal {R}_{\textsf{divL}}\) from Example 1 and base TRS \(\mathcal {B}_{\textsf{mset}}\) from Example 2 we can apply Theorem 11 and consider the DP problem \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}), \mathcal {R}_{\textsf{divL}} \cup \mathcal {B}_{\textsf{mset}})\), since \(\mathcal {B}_{\textsf{mset}}\) is non-duplicating and \(\mathcal {R}_{\textsf{divL}}\) dominates \(\mathcal {B}_{\textsf{mset}}\). As for \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}),\!\mathcal {R}_{\textsf{divL}})\), the DP framework can prove that \((\mathcal{D}\mathcal{P}(\mathcal {R}_{\textsf{divL}}), \mathcal {R}_{\textsf{divL}} \cup \mathcal {B}_{\textsf{mset}})\) is SN. In this way, the tool NaTT which implements the results of [21] proves that \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}}\) is SN. Note that sophisticated techniques like DPs are needed to prove SN for \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}}\) because classical (simplification) orders already fail to prove termination of \(\mathcal {R}_{\textsf{divL}}\).

Example 14

As mentioned in Example 2, if we consider \(\mathcal {B}_{\textsf{mset}2}\) with the rule

$$\begin{aligned} \textsf{divL}(z,\textsf{cons}(x, \textsf{cons}(y, zs ))) \rightarrow \textsf{divL}(z,\textsf{cons}(y, \textsf{cons}(x, zs ))) \end{aligned}$$
(8)

instead of \(\mathcal {B}_{\textsf{mset}}\) as the base TRS, then \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}2}\) is still terminating, but we cannot use Theorem 11 since \(\mathcal {R}_{\textsf{divL}}\) does not dominate \(\mathcal {B}_{\textsf{mset}2}\). If we try to split \(\mathcal {B}_{\textsf{mset}2}\) as in Theorem 12, then \(\varnothing \ne \mathcal {B}_a \subseteq \mathcal {B}_{\textsf{mset}2}\) implies \(\mathcal {B}_a = \mathcal {B}_{\textsf{mset}2}\), but \(\mathcal {B}_{\textsf{mset}2}\) is non-terminating. Therefore, all previous tools for relative termination fail in proving that \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}2}\) is SN. In Sect. 4 we will present our novel DP framework which can prove relative termination of relative TRSs like \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}2}\).

As remarked in [21], Theorems 11 and 12 are unsound if one only considers minimal chains, i.e., if for a DP problem \((\mathcal {P},\mathcal {R})\) one only considers chains \(t_0, t_1, \ldots \), where all \(t_i\) are \(\mathcal {R}\)-terminating. In the DP framework for ordinary rewriting, the restriction to minimal chains allows the use of further processors, e.g., based on usable rules [12, 17] or the subterm criterion [17]. As shown in [21], usable rules and the subterm criterion can nevertheless be applied if \(\mathcal {B}\) is quasi-terminating [4], i.e., \(\{t \mid s \rightarrow _{\mathcal {B}}^* t \}\) is finite for every term s. This restriction would also be needed to integrate processors that rely on minimality into our new framework in Sect. 4.

4 Annotated Dependency Pairs for Relative Termination

As shown in Sect. 3.2, up to now there only exist criteria [21] that state when it is sound to apply ordinary DPs for proving relative termination, but there is no specific DP-based technique to analyze relative termination directly. For ordinary termination, we create a separate DP for each occurrence of a defined symbol in the right-hand side of a rule (and no DP is created for rules without defined symbols in their right-hand sides). This would work to detect ordinary infinite sequences like the one in Example 6 in the relative setting, i.e., such an infinite sequence would give rise to an infinite chain. However, as shown in Example 10, this would not suffice to detect infinite redex-creating sequences as in Examples 4 and 5. Thus, ordinary DPs are unsound for analyzing relative termination.

To solve this problem, we now adapt the concept of annotated dependency pairs (ADPs) for relative termination. ADPs were introduced in [23] to prove innermost almost-sure termination of probabilistic term rewriting. In the relative setting, we can use similar dependency pairs as in the probabilistic setting, but with a different rewrite relation to deal with non-innermost steps. Compared to [21], we (a) remove the requirement of dominance, which will be handled by the dependency graph processor, and (b) allow for ADP processors that are specifically designed for the relative setting before possibly moving to ordinary DPs.

The requirement that \(\mathcal {B}\) must be non-duplicating remains: if relative non-termination is caused by duplicating rules, then it is not necessarily due to the relation between the left-hand side and the subterms with defined root symbols in the right-hand side of a rule, so it cannot be captured by (A)DPs. In other words, DPs do not help in analyzing redex-duplicating sequences as in Example 3, where the crucial redex \(\textsf{a}\) is not generated from a “function call” in the right-hand side of a rule, but just corresponds to a duplicated variable. To handle TRSs \(\mathcal {R}/ \mathcal {B}\) where \(\mathcal {B}_{dup} \subseteq \mathcal {B}\) is duplicating, one can move the duplicating rules to the main TRS \(\mathcal {R}\) and try to prove relative termination of \((\mathcal {R}\cup \mathcal {B}_{dup})/(\mathcal {B}\setminus \mathcal {B}_{dup})\) instead, or one can try to find a reduction pair \((\succsim , \succ )\) where \(\succ \) is closed under contexts such that \(\mathcal {R}\cup \mathcal {B}\subseteq {\succsim }\) and \(\mathcal {B}_{dup} \subseteq {\succ }\). Then it suffices to prove relative termination of \((\mathcal {R}\setminus \! \succ )/(\mathcal {B}\setminus \! \succ )\) instead.

We will now define a notion of DPs that can detect infinite redex-creating sequences as in Example 4 with \(\mathcal {R}_2 = \{\textsf{a}\rightarrow \textsf{b}\}\) and \(\mathcal {B}_2 = \{\textsf{f}\rightarrow \textsf{d}(\textsf{f},\textsf{a})\}\): \(\underline{\textsf{f}} \rightarrow _{\mathcal {B}_2} \textsf{d}(\textsf{f},\underline{\textsf{a}}) \rightarrow _{\mathcal {R}_2} \textsf{d}(\underline{\textsf{f}},\textsf{b}) \rightarrow _{\mathcal {B}_2} \textsf{d}(\textsf{d}(\textsf{f},\underline{\textsf{a}}),\textsf{b}) \rightarrow _{\mathcal {R}_2} \ldots \) To this end, (1) we need a DP for the rule \(\textsf{a}\rightarrow \textsf{b}\) to track the reduction of the created \(\mathcal {R}_2\)-redex \(\textsf{a}\), although \(\textsf{b}\) is a constructor. Moreover, (2) both defined symbols \(\textsf{f}\) and \(\textsf{a}\) in the right-hand side of the rule \(\textsf{f}\rightarrow \textsf{d}(\textsf{f}, \textsf{a})\) have to be considered simultaneously: We need \(\textsf{f}\) to create an infinite number of \(\mathcal {R}_2\)-redexes, and we need \(\textsf{a}\) since it is the created \(\mathcal {R}_2\)-redex. Hence, for rules from the base TRS \(\mathcal {B}_2\), we have to consider all possible pairs of defined symbols in their right-hand sides simultaneously. This is not needed for the main TRS \(\mathcal {R}_2\), i.e., if the \(\textsf{f}\)-rule were in the main TRS, then the \(\textsf{f}\) in the right-hand side could be considered separately from the \(\textsf{a}\) that it generates. Therefore, we distinguish between main and base ADPs (that are generated from the main and the base TRS, respectively).

As in [23], we now annotate defined symbols directly in the original rewrite rule instead of extracting annotated subterms from its right-hand side. In this way, we may have terms containing several annotated symbols, which allows us to consider pairs of defined symbols in right-hand sides simultaneously. At the same time, an ADP maintains the information on the positions of the subterms in the original right-hand side. (This information will be needed for the “completeness” of the chain criterion in Theorem 23, i.e., it allows us to obtain an equivalent characterization of relative termination via chains of ADPs.)

Definition 15

(Annotations). For \(t \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \) and \(\mathcal {X}\subseteq \varSigma ^\#\cup \mathcal {V}\), let \(\textrm{Pos}_{\mathcal {X}}(t)\) be the set of all positions of t with symbols or variables from \(\mathcal {X}\). For \(\varPhi \subseteq \textrm{Pos}_{\mathcal {D}\cup \mathcal {D}^\#}(t)\), \(\#_\varPhi (t)\) is the variant of t where the symbols at positions from \(\varPhi \) are annotated and all other annotations are removed. Thus, \(\textrm{Pos}_{\mathcal {D}^\#}(\#_\varPhi (t)) = \varPhi \), and \(\#_\varnothing (t)\) removes all annotations from t, where we often write \(\flat (t)\) instead of \(\#_\varnothing (t)\). Moreover, for a singleton \(\{\pi \}\), we often write \(\#_\pi \) instead of \(\#_{\{\pi \}}\). We write \(t \trianglelefteq _{\#}^\pi s\) if \(\pi \in \textrm{Pos}_{\mathcal {D}^\#}(s)\) and \(t = \flat (s|_\pi )\) (i.e., t results from a subterm of s with annotated root symbol by removing its annotations). We also write \(\trianglelefteq _{\#}\) instead of \(\trianglelefteq _{\#}^\pi \) if \(\pi \) is irrelevant.

Example 16

If \(\textsf{f}\in \mathcal {D}\), then we have \(\#_{1}(\textsf{f}(\textsf{f}(x))) = \#_{1}(\textsf{F}(\textsf{F}(x))) = \textsf{f}(\textsf{F}(x))\) and \(\flat (\textsf{F}(\textsf{F}(x))) = \textsf{f}(\textsf{f}(x))\). Moreover, we have \(\textsf{f}(x) \trianglelefteq _{\#}^{1} \textsf{f}(\textsf{F}(x))\).
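The operations \(\#_\varPhi \) and \(\flat \) of Definition 15 can be sketched as follows (our illustration: terms as nested tuples with variables as strings, annotation written by appending `"#"` to a symbol, and positions as tuples of argument indices, with `()` for the root).

```python
# A sketch of Definition 15: positions of defined symbols, the annotation
# operator #_Phi, and the flattening b(t) = #_{}(t), on Example 16's terms.

DEFINED = {"f"}                       # the defined symbols D of Example 16

def positions_D(t, pos=()):
    """Positions of t carrying a symbol from D or D#."""
    if isinstance(t, str):
        return []
    here = [pos] if t[0].rstrip("#") in DEFINED else []
    return here + [p for i, a in enumerate(t[1:], 1)
                   for p in positions_D(a, pos + (i,))]

def annotate(phi, t, pos=()):
    """#_Phi(t): annotate exactly the positions in phi, erase all other annotations."""
    if isinstance(t, str):
        return t
    f = t[0].rstrip("#")              # drop any existing annotation
    if pos in phi:
        f += "#"
    return (f,) + tuple(annotate(phi, a, pos + (i,)) for i, a in enumerate(t[1:], 1))

flat = lambda t: annotate(set(), t)   # b(t) = #_{}(t)

print(annotate({(1,)}, ("f", ("f", "x"))))   # ('f', ('f#', 'x'))  =  f(F(x))
print(flat(("f#", ("f#", "x"))))             # ('f', ('f', 'x'))   =  f(f(x))
```

This reproduces Example 16: \(\#_{1}(\textsf{f}(\textsf{f}(x))) = \textsf{f}(\textsf{F}(x))\) and \(\flat (\textsf{F}(\textsf{F}(x))) = \textsf{f}(\textsf{f}(x))\).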

While in [23] all defined symbols on the right-hand sides of rules were annotated, we now define our novel variant of annotated dependency pairs for relative rewriting. As explained before Definition 15, we have to track (at most) two redexes for base ADPs and only one redex for main ADPs.

Definition 17

(Annotated Dependency Pair). A rule \(\ell \!\rightarrow \!r\) with \(\ell \!\in \!\mathcal {T}\left( \varSigma ,\mathcal {V}\right) \setminus \mathcal {V}\), \(r \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \), and \(\mathcal {V}(r) \subseteq \mathcal {V}(\ell )\) is called an annotated dependency pair (ADP). Let \(\mathcal {D}\) be the defined symbols of \(\mathcal {R}\cup \mathcal {B}\), and for \(n \in \mathbb {N}\), let \(\mathcal {A}_{n}(\ell \rightarrow r) = \{\ell \rightarrow \#_{\varPhi }(r) \mid \varPhi \subseteq \textrm{Pos}_{\mathcal {D}}(r), |\varPhi | = \min (n, |\textrm{Pos}_{\mathcal {D}}(r)|)\}\). The canonical main ADPs for \(\mathcal {R}\) are \(\mathcal {A}_{1}(\mathcal {R}) = \bigcup \limits _{\ell \rightarrow r \in \mathcal {R}} \!\!\!\! \mathcal {A}_{1}(\ell \!\rightarrow \!r)\) and the canonical base ADPs for \(\mathcal {B}\!\) are \(\mathcal {A}_{2}(\mathcal {B})\!= \!\!\! \bigcup \limits _{\ell \rightarrow r \in \mathcal {B}} \!\!\!\! \mathcal {A}_{2}(\ell \!\rightarrow \!r)\).

So the left-hand side of an ADP is just the left-hand side of the original rule. The right-hand side results from the right-hand side of the original rule by replacing certain defined symbols f with \(f^{\#}\).
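The construction of the canonical ADPs \(\mathcal {A}_{n}\) from Definition 17 can be sketched as follows (a hedged illustration of ours, using the tuple representation and capitalization-as-annotation convention from above; the sets `DEFINED` are an assumption taken from Example 4):

```python
from itertools import combinations

# terms as nested tuples; upper case marks annotation, as in the paper's F vs. f
DEFINED = {"a", "f"}   # defined symbols of R2 ∪ B2 from Example 4

def dpos(t, pos=()):
    """Pos_D(t): positions of defined symbols (1-indexed, root = ())."""
    out = [pos] if t[0].lower() in DEFINED else []
    for i, a in enumerate(t[1:], 1):
        out += dpos(a, pos + (i,))
    return out

def mark(t, phi, pos=()):
    """#_phi(t): annotate exactly the positions in phi."""
    name = t[0].upper() if pos in phi else t[0].lower()
    return (name,) + tuple(mark(a, phi, pos + (i,)) for i, a in enumerate(t[1:], 1))

def canonical(rule, n):
    """A_n(l -> r): annotate every choice of min(n, |Pos_D(r)|) defined positions."""
    l, r = rule
    ps = dpos(r)
    k = min(n, len(ps))
    return {(l, mark(r, set(phi))) for phi in combinations(ps, k)}

main = canonical((("a",), ("b",)), 1)                  # A_1 of  a -> b
base = canonical((("f",), ("d", ("f",), ("a",))), 2)   # A_2 of  f -> d(f,a)
```

As in Example 18, `main` is \(\{\textsf{a}\rightarrow \textsf{b}\}\) (no defined symbol in the right-hand side, so nothing is annotated), and `base` is \(\{\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{A})\}\).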

Example 18

The canonical ADPs of Example 4 are \(\mathcal {A}_{1}(\mathcal {R}_2) = \{ \textsf{a}\rightarrow \textsf{b}\}\) and \(\mathcal {A}_{2}(\mathcal {B}_2) = \{\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{A})\}\), and for Example 5 we get \(\mathcal {A}_{1}(\mathcal {R}_3) = \{ \textsf{a}(x) \rightarrow \textsf{b}(x)\}\) and \(\mathcal {A}_{2}(\mathcal {B}_3) = \{\textsf{f}\rightarrow \textsf{A}(\textsf{F})\}\). For \(\mathcal {R}_{\textsf{divL}}/\mathcal {B}_{\textsf{mset}2}\) from Examples 1 and 14, the ADPs \(\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}})\) are

[Figure: the main ADPs \(\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}})\), numbered (15)–(21)]

and \(\mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2})\) contains

$$\begin{aligned} \textsf{divL}(z, \textsf{cons}(x, \textsf{cons}(y, zs ))) \rightarrow \textsf{DL}(z, \textsf{cons}(y, \textsf{cons}(x, zs ))) \qquad (22) \end{aligned}$$

In [23], ADPs were only used for innermost rewriting. We now modify their rewrite relation and define what happens with annotations inside the substitutions during a rewrite step. To simulate redex-creating sequences as in Example 5 with ADPs (where the position of the created redex \(\textsf{a}(\ldots )\) is above the position of the creating redex \(\textsf{f}\)), ADPs should be able to rewrite above annotated arguments without removing their annotation (we will demonstrate that in Example 25). Thus, for an ADP \(\ell \rightarrow r\) with a variable \(\ell |_\pi = x\), we use a variable reposition function (VRF) to indicate which occurrence of x in r should keep the annotations if one rewrites an instance of \(\ell \) where the subterm at position \(\pi \) is annotated. So a VRF maps positions of variables in the left-hand side of a rule to positions of the same variable in the right-hand side.

Definition 19

(Variable Reposition Function). Let \(\ell \rightarrow r\) be an ADP. A function \(\varphi : \textrm{Pos}_{\mathcal {V}}(\ell ) \rightarrow \textrm{Pos}_{\mathcal {V}}(r) \uplus \{\bot \}\) is called a variable reposition function (VRF) for \(\ell \rightarrow r\) iff \(\ell |_\pi = r|_{\varphi (\pi )}\) whenever \(\varphi (\pi ) \ne \bot \).

Example 20

For the ADP \(\textsf{a}(x) \rightarrow \textsf{b}(x)\) for \(\mathcal {R}_3\) from Example 5, if x on position 1 of the left-hand side is instantiated by \(\textsf{F}\), then the VRF \(\varphi (1) = 1\) indicates that this ADP rewrites \(\textsf{A}(\textsf{F})\) to \(\textsf{b}(\textsf{F})\), while \(\varphi (1) = \bot \) means that it rewrites \(\textsf{A}(\textsf{F})\) to \(\textsf{b}(\textsf{f})\).

With VRFs we can define the rewrite relation for ADPs w.r.t. full rewriting.

Definition 21

(\(\hookrightarrow _{\mathcal {P}}\)). Let \(\mathcal {P}\) be a set of ADPs. A term \(s \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \) rewrites to t using \(\mathcal {P}\) (denoted \(s \hookrightarrow _{\mathcal {P}} t\)) if there are an ADP \(\ell \rightarrow r \in \mathcal {P}\), a substitution \(\sigma \), a position \(\pi \in \textrm{Pos}_{\mathcal {D}\cup \mathcal {D}^\#}(s)\) such that \(\flat (s|_\pi ) = \ell \sigma \), a VRF \(\varphi \) for \(\ell \rightarrow r\), andFootnote 7

$$\begin{aligned} \begin{array}{rlllll} t &{}= &{}s[\#_{\varPhi }(r\sigma )]_{\pi } &{} \text {if} &{} \pi \in \textrm{Pos}_{\mathcal {D}^\#}(s) &{} (\textbf{pr})\\ t &{}= &{}s[\#_{\varPsi }(r\sigma )]_{\pi } &{} \text {if} &{} \pi \in \textrm{Pos}_\mathcal {D}(s) &{} (\textbf{r})\! \end{array} \end{aligned}$$

with \(\varPsi \!=\! \{\varphi (\rho ).\tau \!\mid \!\rho \!\in \! \textrm{Pos}_{\mathcal {V}}(\ell ), \, \varphi (\rho )\!\ne \!\bot , \, \rho .\tau \!\in \! \textrm{Pos}_{\mathcal {D}^\#}(s|_{\pi }) \}\) and \(\varPhi = \textrm{Pos}_{\mathcal {D}^\#}(r)\cup \varPsi \).

So \(\varPsi \) considers all positions of annotated symbols in \(s|_{\pi }\) that are below positions \(\rho \) of variables in \(\ell \). If the VRF maps \(\rho \) to a variable position \(\rho '\) in r, then the annotations below \(\pi .\rho \) in s are kept in the resulting subterm at position \(\pi .\rho '\) after the rewriting.

Rewriting with \(\mathcal {P}\) is like ordinary term rewriting, while considering and modifying annotations. Note that we represent a DP resulting from a rule as well as the original rule by just one ADP. So the ADP \(\textsf{div}(\textsf{s}(x),\textsf{s}(y)) \rightarrow \textsf{s}(\textsf{D}(\textsf{minus}(x,y),\textsf{s}(y)))\) represents both the DP resulting from \(\textsf{div}\) in the right-hand side of the rule (4), and the rule (4) itself (by simply disregarding all annotations of the ADP).

Similar to the classical DP framework, our goal is to track specific reduction sequences. As before, there are \(\textbf{p}\)-steps where a DP is applied at the position of an annotated symbol. These steps may introduce new annotations. Moreover, between two \(\textbf{p}\)-steps there can be several \(\textbf{r}\)-steps.

A step of the form \((\textbf{pr})\) at position \(\pi \) in Definition 21 represents a \(\textbf{p}\)- or an \(\textbf{r}\)-step (or both), where an \(\textbf{r}\)-step is only possible if one later rewrites an annotated symbol at a position above \(\pi \). All annotations are kept during this step except for annotations of subterms that correspond to variables of the applied rule. Here, the used VRF \(\varphi \) determines which of these annotations are kept and which are removed. As an example, with the canonical ADP \(\textsf{a}(x) \rightarrow \textsf{b}(x)\) from \(\mathcal {A}_{1}(\mathcal {R}_3)\) we can rewrite as in Example 20. Here, we have \(\pi = \varepsilon \), \(\flat (s|_\varepsilon ) = \textsf{a}(\textsf{f}) = \ell \sigma \), \(r = \textsf{b}(x)\), and the VRF \(\varphi \) with \(\varphi (1) = 1\) such that the annotation of \(\textsf{F}\) in \(\textsf{A}\)’s argument is kept in the argument of \(\textsf{b}\).

A step of the form \((\textbf{r})\) rewrites at the position of a non-annotated defined symbol, and represents just an \(\textbf{r}\)-step. Hence, we remove all annotations from the right-hand side r of the ADP. However, we may have to keep the annotations inside the substitution, hence we move them according to the VRF. For example, with the ADP \(\textsf{minus}(\textsf{s}(x),\textsf{s}(y)) \rightarrow \textsf{M}(x,y) \;\) (15) and any VRF, we obtain \((\textbf{r})\)-steps that rewrite non-annotated \(\textsf{minus}\)-subterms, where the annotation of \(\textsf{M}\) is removed in the contracted subterm.
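A single root-level step of Definition 21 can be sketched in Python. This is our own simplified illustration (left-linear rules, rewriting at the root, and a VRF given as "which occurrence of each variable keeps its annotations"); it reproduces the two alternatives of Example 20:

```python
# terms as nested tuples; as in the paper, "A" stands for the annotated symbol a#
VARS = {"x"}

def strip(t):
    """flatten: remove all annotations"""
    return (t[0].lower(),) + tuple(strip(a) for a in t[1:])

def match(pat, t, sub):
    """match the (annotation-free) left-hand side against the flattened t;
    variable bindings keep the annotations of the corresponding subterms of t"""
    if pat[0] in VARS:
        sub[pat[0]] = t
        return True
    return (pat[0] == t[0].lower() and len(pat) == len(t)
            and all(match(p, a, sub) for p, a in zip(pat[1:], t[1:])))

def build(r, sub, keep, seen=None):
    """r·sigma, where the binding of a variable v keeps its annotations only at
    its keep[v]-th occurrence in r -- a simplified VRF for left-linear rules"""
    if seen is None:
        seen = {}
    if r[0] in VARS:
        seen[r[0]] = seen.get(r[0], 0) + 1
        return sub[r[0]] if keep.get(r[0]) == seen[r[0]] else strip(sub[r[0]])
    return (r[0],) + tuple(build(a, sub, keep, seen) for a in r[1:])

# a (pr)-step at the root with the ADP a(x) -> b(x) from Example 20:
l, r = ("a", ("x",)), ("b", ("x",))
s = ("A", ("F",))                    # A(F): redex with annotated root
sub = {}
assert match(l, s, sub)
step_keep = build(r, sub, {"x": 1})  # VRF phi(1) = 1  =>  b(F)
step_drop = build(r, sub, {})        # VRF phi(1) = bottom  =>  b(f)
```

Since \(r = \textsf{b}(x)\) contains no annotations itself, the resulting term only keeps the annotations moved by the VRF, matching the two results \(\textsf{b}(\textsf{F})\) and \(\textsf{b}(\textsf{f})\) of Example 20.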

A (relative) ADP problem has the form \((\mathcal {P},\mathcal {S})\), where \(\mathcal {P}\) and \(\mathcal {S}\) are finite sets of ADPs. \(\mathcal {P}\) is the set of all main ADPs and \(\mathcal {S}\) is the set of all base ADPs. Now we can define chains in the relative setting.

Definition 22

(Chains and Terminating ADP Problems). Let \((\mathcal {P},\mathcal {S})\) be an ADP problem. A sequence of terms \(t_0, t_1, \ldots \) with \(t_i \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \) is a \((\mathcal {P},\mathcal {S})\)-chain if we have \(t_i \hookrightarrow _{\mathcal {P}\cup \mathcal {S}} t_{i+1}\) for all \(i \in \mathbb {N}\). The chain is called infinite if infinitely many of these rewrite steps use \(\hookrightarrow _{\mathcal {P}}\) with Case \((\textbf{pr})\). We say that an ADP problem \((\mathcal {P},\mathcal {S})\) is terminating (SN) if there is no infinite \((\mathcal {P},\mathcal {S})\)-chain.

Note the two different forms of relativity in Definition 22: In a finite chain, we may not only use infinitely many steps with \(\mathcal {S}\) but also infinitely many steps with \(\mathcal {P}\) where Case \((\textbf{r})\) applies. Thus, an ADP problem \((\mathcal {P},\mathcal {S})\) without annotated symbols or without any main ADPs (i.e., where \(\mathcal {P} = \varnothing \)) is obviously SN. Finally, we obtain our desired chain criterion.

Theorem 23

(Chain Criterion for Relative Rewriting). Let \(\mathcal {R}\) and \(\mathcal {B}\) be TRSs such that \(\mathcal {B}\) is non-duplicating. Then \(\mathcal {R}/ \mathcal {B}\) is SN iff the ADP problem \((\mathcal {A}_{1}(\mathcal {R}), \mathcal {A}_{2}(\mathcal {B}))\) is SN.

Example 24

The infinite rewrite sequence of Example 4 can be simulated by the following infinite chain using \(\mathcal {A}_{1}(\mathcal {R}_2) = \{ \textsf{a}\rightarrow \textsf{b}\}\) and \(\mathcal {A}_{2}(\mathcal {B}_2) = \{\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{A})\}\).

[Figure: the infinite \((\mathcal {A}_{1}(\mathcal {R}_2), \mathcal {A}_{2}(\mathcal {B}_2))\)-chain]

The steps with the base ADP \(\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{A})\) use Case (\(\textbf{pr}\)) at the position of the annotated symbol \(\textsf{F}\), and the steps with the main ADP \(\textsf{a}\rightarrow \textsf{b}\) use (\(\textbf{pr}\)) as well. For this infinite chain, we indeed need two annotated symbols in the right-hand side of the base ADP: If \(\textsf{A}\) were not annotated (i.e., if we had the ADP \(\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{a})\)), then the steps with the main ADP would just use Case (\(\textbf{r}\)) and the chain would not be considered “infinite”. If \(\textsf{F}\) were not annotated (i.e., if we had the ADP \(\textsf{f}\rightarrow \textsf{d}(\textsf{f},\textsf{A})\)), then the steps with the base ADP would use Case (\(\textbf{r}\)) and remove all annotations from the right-hand side. Hence, again the chain would not be considered “infinite”.

Example 25

The infinite rewrite sequence of Example 5 is simulated by the following chain with \(\mathcal {A}_{1}(\mathcal {R}_3) = \{ \textsf{a}(x) \rightarrow \textsf{b}(x)\}\) and \(\mathcal {A}_{2}(\mathcal {B}_3) = \{\textsf{f}\rightarrow \textsf{A}(\textsf{F})\}\).

[Figure: the infinite \((\mathcal {A}_{1}(\mathcal {R}_3), \mathcal {A}_{2}(\mathcal {B}_3))\)-chain]

Here, it is important to use the VRF \(\varphi (1) = 1\) for \(\textsf{a}(x) \rightarrow \textsf{b}(x)\) which keeps the annotation of \(\textsf{A}\)’s argument \(\textsf{F}\) when rewriting with \(\mathcal {A}_{1}(\mathcal {R}_3)\), i.e., these steps must yield \(\textsf{b}(\textsf{F})\) instead of \(\textsf{b}(\textsf{f})\) to generate further subterms \(\textsf{A}(\ldots )\) afterwards.

5 The Relative ADP Framework

Now we present processors for our novel relative ADP framework. An ADP processor \({\text {Proc}}\) has the form \({\text {Proc}}(\mathcal {P},\mathcal {S})\!=\!\{(\mathcal {P}_1,\mathcal {S}_1), \ldots , (\mathcal {P}_n,\mathcal {S}_n)\}\), where \(\mathcal {P}, \mathcal {P}_1, \ldots , \mathcal {P}_n, \mathcal {S}_1, \ldots , \mathcal {S}_n\) are sets of ADPs. \({\text {Proc}}\) is sound if \((\mathcal {P},\mathcal {S})\) is SN whenever \((\mathcal {P}_i,\mathcal {S}_i)\) is SN for all \(1 \le i \le n\). It is complete if \((\mathcal {P}_i,\mathcal {S}_i)\) is SN for all \(1 \le i \le n\) whenever \((\mathcal {P},\mathcal {S})\) is SN. To prove relative termination of \(\mathcal {R}/\mathcal {B}\), we start with the canonical ADP problem \((\mathcal {A}_{1}(\mathcal {R}),\mathcal {A}_{2}(\mathcal {B}))\) and apply sound (and preferably complete) ADP processors until all sub-problems are transformed to the empty set.

In Sect. 5.1, we present two processors to remove (base) ADPs, and in Sects. 5.2 and 5.3, we adapt the main processors of the classical DP framework from Sect. 3.1 to the relative setting. As mentioned, the soundness and completeness proofs for our processors and the chain criterion (Theorem 23) can be found in [24].

5.1 Derelatifying Processors

The following two derelatifying processors can be used to switch from ADPs to ordinary DPs, similar to Theorems 11 and 12. We extend \(\flat \) to ADPs and sets of ADPs \(\mathcal {S}\) by defining \(\flat (\ell \rightarrow r) = \ell \rightarrow \flat (r)\) and \(\flat (\mathcal {S}) = \{\ell \rightarrow \flat (r) \mid \ell \rightarrow r \in \mathcal {S}\}\).

If the ADPs in \(\mathcal {S}\) contain no annotations anymore, then it suffices to use ordinary DPs. The corresponding set of DPs for a set of ADPs \(\mathcal {P}\) is defined as \(\textrm{dp}(\mathcal {P}) = \{\ell ^\# \rightarrow t^\# \mid \ell \rightarrow r \in \mathcal {P}, t \trianglelefteq _{\#} r\}\).

Theorem 26

(Derelatifying Processor (1)). Let \((\mathcal {P}, \mathcal {S})\) be an ADP problem such that \(\flat (\mathcal {S}) = \mathcal {S}\). Then \({\text {Proc}}_{\texttt{DRP1}}(\mathcal {P}, \mathcal {S}) = \varnothing \) is sound and complete iff the ordinary DP problem \((\textrm{dp}(\mathcal {P}), \flat (\mathcal {P} \cup \mathcal {S}))\) is SN.

Furthermore, similar to Theorem 12, we can always move ADPs from \(\mathcal {S}\) to \(\mathcal {P}\), but such a processor is only sound and not complete. However, it may help to satisfy the requirements of Theorem 26 by moving ADPs with annotations from \(\mathcal {S}\) to \(\mathcal {P}\) such that the ordinary DP framework can be used afterwards.

Theorem 27

(Derelatifying Processor (2)). Let \((\mathcal {P}, \mathcal {S})\) be an ADP problem, and let \(\mathcal {S} = \mathcal {S}_{a} \uplus \mathcal {S}_{b}\). Then \({\text {Proc}}_{\texttt{DRP2}}(\mathcal {P}, \mathcal {S}) = \{(\mathcal {P} \cup \texttt{split}(\mathcal {S}_{a}), \mathcal {S}_{b})\}\) is sound. Here, \(\texttt{split}(\mathcal {S}_{a}) = \{\ell \rightarrow \#_{\pi }(r) \mid \ell \rightarrow r \in \mathcal {S}_{a}, \pi \in \textrm{Pos}_{\mathcal {D}^\#}(r)\}\).

So if \(\mathcal {S}_{a}\) contains an ADP with two annotations, then we split it into two ADPs, where each only contains a single annotation.

Example 28

There are also redex-creating examples that are terminating, e.g., \(\mathcal {R}_2 = \{ \textsf{a}\rightarrow \textsf{b}\}\) and the base TRS \(\mathcal {B}_2' = \{ \textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{f}(y),\textsf{a}) \}\). Relative (and full) termination of this example can easily be shown by using the second derelatifying processor from Theorem 27 to replace the base ADP \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{F}(y),\textsf{A})\) by the main ADPs \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{F}(y),\textsf{a})\) and \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{f}(y),\textsf{A})\). Then the processor of Theorem 26 is used to switch to the ordinary DPs \(\textsf{F}(\textsf{s}(y)) \rightarrow \textsf{F}(y)\) and \(\textsf{F}(\textsf{s}(y)) \rightarrow \textsf{A}\).
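The two derelatifying steps of Example 28, i.e., \(\texttt{split}\) from Theorem 27 and \(\textrm{dp}\) from before Theorem 26, can be reproduced with a small sketch (our own illustration, again with nested tuples and capitalization standing for annotation):

```python
# ADPs as pairs (lhs, rhs) of nested tuples; an upper-case symbol is annotated
VARS = {"y"}

def strip(t):
    return (t[0].lower(),) + tuple(strip(a) for a in t[1:])

def ann_positions(t, pos=()):
    out = [pos] if (t[0] not in VARS and t[0][0].isupper()) else []
    for i, a in enumerate(t[1:], 1):
        out += ann_positions(a, pos + (i,))
    return out

def keep_one(t, keep, pos=()):
    """#_pi(t): keep only the annotation at position keep"""
    name = t[0] if pos == keep else t[0].lower()
    return (name,) + tuple(keep_one(a, keep, pos + (i,)) for i, a in enumerate(t[1:], 1))

def split(adps):
    """Theorem 27's split: one ADP per annotated position of the right-hand side"""
    return {(l, keep_one(r, p)) for (l, r) in adps for p in ann_positions(r)}

def subterm(t, pos):
    for i in pos:
        t = t[i]          # children sit at tuple indices 1, 2, ... (1-indexed positions)
    return t

def dps(adps):
    """dp(P): one ordinary DP l# -> t# per annotated subterm t of a right-hand side"""
    out = set()
    for l, r in adps:
        for p in ann_positions(r):
            t = strip(subterm(r, p))
            out.add(((l[0].upper(),) + l[1:], (t[0].upper(),) + t[1:]))
    return out

base = {(("f", ("s", ("y",))), ("d", ("F", ("y",)), ("A",)))}   # f(s(y)) -> d(F(y),A)
mains = split(base)
```

Here `split` yields the two main ADPs \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{F}(y),\textsf{a})\) and \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{f}(y),\textsf{A})\), and `dps` then produces the ordinary DPs \(\textsf{F}(\textsf{s}(y)) \rightarrow \textsf{F}(y)\) and \(\textsf{F}(\textsf{s}(y)) \rightarrow \textsf{A}\) of Example 28.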

5.2 Relative Dependency Graph Processor

Next, we develop a dependency graph processor in the relative setting. The definition of the dependency graph is analogous to the one in the standard setting and thus, the same techniques can be used to over-approximate it automatically.

Definition 29

(Relative Dependency Graph). Let \((\mathcal {P}, \mathcal {S})\) be an ADP problem. The \((\mathcal {P}, \mathcal {S})\)-dependency graph has the set of nodes \(\mathcal {P} \cup \mathcal {S}\) and there is an edge from \(\ell _1 \rightarrow r_1\) to \(\ell _2 \rightarrow r_2\) if there exist substitutions \(\sigma _1, \sigma _2\) and a term \(t \trianglelefteq _{\#} r_1\) such that \(t^\# \sigma _1 \rightarrow _{\flat (\mathcal {P} \cup \mathcal {S})}^* \ell _2^\# \sigma _2\).

So similar to the standard dependency graph, there is an edge from an ADP \(\ell _1 \rightarrow r_1\) to \(\ell _2 \rightarrow r_2\) if the rules of \(\flat (\mathcal {P} \cup \mathcal {S})\) (without annotations) can reduce an instance of a subterm t of \(r_1\) to an instance of \(\ell _2\), if one only annotates the roots of t and \(\ell _2\) (i.e., then the rules can only be applied below the root).
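A very coarse over-approximation of Definition 29 can be sketched as follows. This is our own simplification: it draws an edge whenever root symbols agree, whereas actual termination tools use refined estimations (e.g., CAP-based ones) for the reachability condition:

```python
VARS = set()   # Example 4 has no variables

def annotated_subterms(t):
    out = [t] if (t[0] not in VARS and t[0][0].isupper()) else []
    for a in t[1:]:
        out += annotated_subterms(a)
    return out

def root(t):
    return t[0].lower()

def est_edges(adps):
    """coarse dependency graph estimation: an edge from (l1,r1) to (l2,r2)
    whenever the root of an annotated subterm of r1 equals the root of l2"""
    return {n1: [n2 for n2 in adps
                 if any(root(t) == root(n2[0]) for t in annotated_subterms(n1[1]))]
            for n1 in adps}

aR = (("a",), ("b",))                    # main ADP  a -> b
fB = (("f",), ("d", ("F",), ("A",)))     # base ADP  f -> d(F,A)
graph = est_edges([aR, fB])
```

For Example 4 this yields exactly the graph of Fig. 2: the base ADP has a self-edge (via \(\textsf{F}\)) and an edge to the main ADP (via \(\textsf{A}\)), while the main ADP has no outgoing edges.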

Fig. 1. [Figure: \((\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}), \mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2}))\)-Dep. Graph]

Fig. 2. [Figure: \((\mathcal {A}_{1}(\mathcal {R}_2), \mathcal {A}_{2}(\mathcal {B}_2))\)-Dep. Graph]

Example 30

The dependency graph for the ADP problem \((\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}), \mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2}))\) from Example 18 is shown in Fig. 1. Here, nodes from \(\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}})\) are denoted by rectangles and the node from \(\mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2})\) is a circle.

To detect possible ordinary infinite rewrite sequences as in Example 6, we again have to regard SCCs of the dependency graph, where we only need to consider SCCs that contain a node from \(\mathcal {P}\), because otherwise, all steps in the SCC are relative (base) steps. However, in the relative ADP framework, non-termination can also be due to chains representing redex-creating sequences. Here, it does not suffice to look at SCCs. Thus, the relative dependency graph processor differs substantially from the corresponding processor for ordinary rewriting (and also from the corresponding processor for the probabilistic ADP framework in [23]).

Example 31

(Dependency Graph for Redex-Creating TRSs). For \(\mathcal {R}_2\) and \(\mathcal {B}_2\) from Example 4, the dependency graph for \((\mathcal {A}_{1}(\mathcal {R}_2), \mathcal {A}_{2}(\mathcal {B}_2))\) from Example 24 is in Fig. 2. Here, we cannot regard the SCC \(\{\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{A})\}\) separately, as we need \(\mathcal {A}_{1}(\mathcal {R}_2)\)’s rule \(\textsf{a}\rightarrow \textsf{b}\) to reduce the created redex. To find the ADPs that can reduce the created redexes, we have to regard the outgoing paths from the SCCs of \(\mathcal {S}\) to ADPs of \(\mathcal {P}\).

The structure that we are looking for in the redex-creating case is a path from an SCC to a node from \(\mathcal {P}\) (i.e., a form of a lasso), which is minimal in the sense that if we reach a node from \(\mathcal {P}\), then we stop and do not move further along the edges of the graph. Moreover, the SCC needs to contain an ADP with more than one annotated symbol, as otherwise the generation of the infinitely many \(\mathcal {P}\)-redexes would not be possible. Here, it suffices to look at SCCs in the graph restricted to only \(\mathcal {S}\)-nodes (i.e., in the \((\flat (\mathcal {P}),\mathcal {S})\)-dependency graph). The reason is that if the SCC contains a node from \(\mathcal {P}\), then as mentioned above, we have to prove anyway that the SCC does not give rise to infinite chains.

Definition 32

(\(\texttt{SCC}^{(\mathcal {P},\mathcal {S})}_{\mathcal {P}'}\), \(\texttt{Lasso}\)). Let \((\mathcal {P},\mathcal {S})\) be an ADP problem. For any \(\mathcal {P}' \subseteq \mathcal {P} \cup \mathcal {S}\), let \(\texttt{SCC}^{(\mathcal {P},\mathcal {S})}_{\mathcal {P}'}\) denote the set of all SCCs of the \((\mathcal {P},\mathcal {S})\)-dependency graph that contain an ADP from \(\mathcal {P}'\). Moreover, let \(\mathcal {S}_{>1} \subseteq \mathcal {S}\) denote the set of all ADPs from \(\mathcal {S}\) with more than one annotation. Then the set of all minimal lassos is defined as \(\texttt{Lasso} = \{\mathcal {Q}\cup \{n_1, \ldots , n_k\} \mid \mathcal {Q}\in \texttt{SCC}^{(\flat (\mathcal {P}),\mathcal {S})}_{\mathcal {S}_{>1}}, \; n_1,\ldots ,n_k\) is a path in the \((\flat (\mathcal {P}),\mathcal {S})\)-dependency graph such that \(n_1 \in \mathcal {Q}\) and \(n_k \in \flat (\mathcal {P})\}\).
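The set \(\texttt{Lasso}\) can be computed with standard graph algorithms. The following sketch (our own illustration over an abstract graph whose nodes are ADP names) uses Tarjan's algorithm for SCCs and a breadth-first search that stops at the first \(\mathcal {P}\)-node it reaches; unlike Definition 32, it collects only one shortest path per reachable \(\mathcal {P}\)-node, which suffices to illustrate the idea:

```python
from collections import deque

def sccs(nodes, edges):
    """Tarjan's algorithm; a single node only counts as an SCC if it has a self-loop."""
    index, low, stack, on, out, c = {}, {}, [], set(), [], [0]
    def dfs(v):
        index[v] = low[v] = c[0]; c[0] += 1
        stack.append(v); on.add(v)
        for w in edges.get(v, []):
            if w not in index:
                dfs(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop(); on.discard(w); comp.add(w)
                if w == v:
                    break
            if len(comp) > 1 or v in edges.get(v, []):
                out.append(frozenset(comp))
    for v in nodes:
        if v not in index:
            dfs(v)
    return out

def lassos(P, S, edges, many_ann):
    """minimal lassos: an SCC of the graph restricted to S containing an ADP with
    more than one annotation, plus a path stopping at the first P-node reached"""
    s_edges = {v: [w for w in edges.get(v, []) if w in S] for v in S}
    result = []
    for comp in sccs(S, s_edges):
        if not (comp & many_ann):
            continue
        parent, queue = {v: None for v in comp}, deque(comp)
        while queue:
            v = queue.popleft()
            if v in P:                      # stop: do not search beyond P-nodes
                path, w = set(), v
                while w is not None:
                    path.add(w); w = parent[w]
                result.append(comp | path)
                continue
            for w in edges.get(v, []):
                if w not in parent:
                    parent[w] = v; queue.append(w)
    return result

# the graph of Fig. 2: the base ADP has a self-loop and an edge to the main ADP
P, S = {"a -> b"}, {"f -> d(F,A)"}
edges = {"f -> d(F,A)": ["f -> d(F,A)", "a -> b"]}
found = lassos(P, S, edges, many_ann=S)
```

For Fig. 2 this returns the single minimal lasso consisting of the base ADP's SCC together with the main ADP \(\textsf{a}\rightarrow \textsf{b}\), as discussed in Example 31.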

We remove the annotations of ADPs which do not have to be considered anymore for \(\textbf{p}\)-steps due to the dependency graph, but we keep the ADPs for possible \(\textbf{r}\)-steps and thus, consider them as relative (base) ADPs.

Theorem 33

(Dep. Graph Processor). Let \((\mathcal {P},\mathcal {S})\) be an ADP problem. Then

$$\begin{aligned} {\text {Proc}}_{\texttt{DG}}(\mathcal {P},\mathcal {S}) & = \{ (\,\mathcal {P} \cap \mathcal {Q}, \; (\mathcal {S} \cap \mathcal {Q})\cup \flat ( \,(\mathcal {P} \cup \mathcal {S}) \setminus \mathcal {Q}\,)\, ) \mid \mathcal {Q}\in \texttt{SCC}_{\mathcal {P}}^{(\mathcal {P}, \mathcal {S})} \cup \texttt{Lasso}\} \! \end{aligned}$$

is sound and complete.

Example 34

For \((\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}), \mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2}))\) from Example 30 we have three SCCs \(\{(15)\}\), \(\{(18)\}\), and \(\{(20),(22)\}\) containing nodes from \(\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}})\). The set \(\{(22)\}\) is the only SCC of the \((\flat (\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}})), \mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2}))\)-dependency graph, and there are paths from this SCC to the ADPs (20) and (21) of \(\mathcal {P}\). However, these paths do not yield elements of \(\texttt{Lasso}\), because the SCC \(\{(22)\}\) does not contain an ADP with more than one annotation. Hence, we obtain the three new ADP problems \((\{(15)\}, \{\flat (22)\} \cup \flat (\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}) \setminus \{(15)\}))\), \((\{(18)\}, \{\flat (22)\} \cup \flat (\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}) \setminus \{(18)\}))\), and \((\{(20)\},\{(22)\} \cup \flat (\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}) \setminus \{(20)\}))\). For the first two of these new ADP problems, we can use the derelatifying processor of Theorem 26 and prove SN via ordinary DPs, since their base ADPs do not contain any annotated symbols anymore.

The dependency graph processor in combination with the derelatifying processors of Theorems 26 and 27 already subsumes the techniques of Theorems 11 and 12. The reason is that if \(\mathcal {R}\) dominates \(\mathcal {B}\), then there is no edge from an ADP of \(\mathcal {A}_{2}(\mathcal {B})\) to any ADP of \(\mathcal {A}_{1}(\mathcal {R})\) in the \((\mathcal {A}_{1}(\mathcal {R}), \mathcal {A}_{2}(\mathcal {B}))\)-dependency graph. Hence, there are no minimal lassos and the dependency graph processor just creates ADP problems from the SCCs of \(\mathcal {A}_{1}(\mathcal {R})\) where the base ADPs do not have any annotations anymore. Then Theorem 26 allows us to switch to ordinary DPs. For example, if we consider \(\mathcal {B}_{\textsf{mset}}\) instead of \(\mathcal {B}_{\textsf{mset}2}\), then the dependency graph processor yields the three sub-problems for the SCCs \(\{(15)\}\), \(\{(18)\}\), and \(\{(20)\}\), where the base ADPs do not contain annotations anymore. Then, we can move to ordinary DPs via Theorem 26.

Compared to Theorems 11 and 12, the dependency graph allows for more precise over-approximations than just “dominance” to detect when the base ADPs do not depend on the main ADPs. Moreover, the derelatifying processors of Theorems 26 and 27 allow us to switch to the ordinary DP framework also for sub-problems which result from the application of other processors of our relative ADP framework. In other words, Theorems 26 and 27 allow us to apply this switch in a modular way, even if their prerequisites do not hold for the initial canonical ADP problem (i.e., even if the prerequisites of Theorems 11 and 12 do not hold for the whole TRSs).

5.3 Relative Reduction Pair Processor

Next, we adapt the reduction pair processor to ADPs for relative rewriting. While the reduction pair processor for ADPs in the probabilistic setting [23] was restricted to polynomial interpretations, we now allow arbitrary reduction pairs using a similar idea as in [18] for complexity analysis via DPs.

To find out which ADPs cannot be used for infinitely many \(\textbf{p}\)-steps, the idea is not to compare the annotated left-hand side with the whole right-hand side, but just with the set of its annotated subterms. To combine these subterms in the case of ADPs with two or no annotated symbols, we extend the signature by two fresh compound symbols \(\textsf{c}_{0}\) and \(\textsf{c}_{2}\) of arity 0 and 2, respectively. Similar to [18], we have to use \(\textsf{c}_{}\)-monotonic and \(\textsf{c}_{}\)-invariant reduction pairs.

Definition 35

(\(\textsf{c}_{}\)-Monotonic, \(\textsf{c}_{}\)-Invariant). For \(r \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \), we define \(\textrm{ann}(r) = \textsf{c}_{0}\) if r does not contain any annotation, \(\textrm{ann}(r) = t^\#\) if \(t \trianglelefteq _{\#} r\) and r only contains one annotated symbol, and \(\textrm{ann}(r) = \textsf{c}_{2}(r_1^\#, r_2^\#)\) if \(r_1 \trianglelefteq _{\#}^{\pi _1} r\), \(r_2 \trianglelefteq _{\#}^{\pi _2} r\), and \(\pi _1 <_{lex} \pi _2\) where \(<_{lex}\) is the (total) lexicographic order on positions.

A reduction pair \((\succsim , \succ )\) is called \(\textsf{c}_{}\)-monotonic if \(\textsf{c}_{2}(s_1, t) \succ \textsf{c}_{2}(s_2, t)\) and \(\textsf{c}_{2}(t, s_1) \succ \textsf{c}_{2}(t, s_2)\) for all \(s_1,s_2,t \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \) with \(s_1 \succ s_2\). Moreover, it is \(\textsf{c}_{}\)-invariant if \(\textsf{c}_{2}(s_1,s_2) \sim \textsf{c}_{2}(s_2,s_1)\) and \(\textsf{c}_{2}(s_1,\textsf{c}_{2}(s_2,s_3)) \sim \textsf{c}_{2}(\textsf{c}_{2}(s_1,s_2),s_3)\) for \({\sim } = {\succsim } \cap {\precsim }\) and all \(s_1,s_2,s_3 \in \mathcal {T}\left( \varSigma ^\#,\mathcal {V}\right) \).

So for example, reduction pairs based on polynomial interpretations are \(\textsf{c}_{}\)-monotonic and \(\textsf{c}_{}\)-invariant if \(\textsf{c}_{2}(x,y)\) is interpreted by \(x + y\).

For an ADP problem \((\mathcal {P},\mathcal {S})\), now the reduction pair processor has to orient the non-annotated rules \(\flat (\mathcal {P} \cup \mathcal {S})\) weakly and for all ADPs \(\ell \rightarrow r\), it compares the annotated left-hand side \(\ell ^\#\) with \(\textrm{ann}(r)\). In strictly decreasing ADPs, one can then remove all annotations and consider them as relative (base) ADPs again.

Theorem 36

(Reduction Pair Processor). Let \((\mathcal {P},\mathcal {S})\) be an ADP problem and let \((\succsim , \succ )\) be a \(\textsf{c}_{}\)-monotonic and \(\textsf{c}_{}\)-invariant reduction pair such that \(\flat (\mathcal {P} \cup \mathcal {S}) \subseteq {\succsim }\) and \(\ell ^\# \succsim \textrm{ann}(r)\) for all \(\ell \rightarrow r \in \mathcal {P} \cup \mathcal {S}\). Moreover, let \(\mathcal {P}_{\succ } \subseteq \mathcal {P} \cup \mathcal {S}\) such that \(\ell ^\# \succ \textrm{ann}(r)\) for all \(\ell \rightarrow r \in \mathcal {P}_{\succ }\). Then \({\text {Proc}}_{\texttt{RPP}}(\mathcal {P},\mathcal {S}) = \{(\mathcal {P} \setminus \mathcal {P}_{\succ }, (\mathcal {S} \setminus \mathcal {P}_{\succ }) \cup \flat (\mathcal {P}_{\succ }))\}\) is sound and complete.

Example 37

For the remaining ADP problem \((\{(20)\},\{(22)\} \cup \flat (\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}) \setminus \{(20)\}))\) from Example 34, we can apply the reduction pair processor using the polynomial interpretation from the end of Sect. 3.1 which maps \(\textsf{0}\) and \(\textsf{nil}\) to 0, \(\textsf{s}(x)\) to \(x + 1\), \(\textsf{cons}(y, xs )\) to \( xs + 1\), \(\textsf{DL}(x, xs )\) to \( xs \), and all other symbols to their first arguments. Then, (20) is oriented strictly (i.e., it is in \(\mathcal {P}_{\succ }\)), and (22) and all other base ADPs are oriented weakly. Hence, we remove the annotation from (20) and move it to the base ADPs. Now there is no main ADP anymore, and thus the dependency graph processor returns \(\varnothing \). This proves SN for \((\mathcal {A}_{1}(\mathcal {R}_{\textsf{divL}}), \mathcal {A}_{2}(\mathcal {B}_{\textsf{mset}2}))\), hence \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}2}\) is also SN.

Example 38

Regard the ADPs \(\textsf{a}\rightarrow \textsf{b}\) and \(\textsf{f}\rightarrow \textsf{d}(\textsf{F},\textsf{A})\) for the redex-creating Example 4 again. When using a polynomial interpretation \(\textrm{Pol}\) that maps \(\textsf{c}_{0}\) to 0 and \(\textsf{c}_{2}(x,y)\) to \(x + y\), then for the reduction pair processor one has to satisfy \(\textrm{Pol}(\textsf{A}) \ge 0\) and \(\textrm{Pol}(\textsf{F}) \ge \textrm{Pol}(\textsf{F}) + \textrm{Pol}(\textsf{A})\), i.e., one cannot make any of the ADPs strictly decreasing.

In contrast, for the variant with the terminating base rule \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{f}(y),\textsf{a})\) from Example 28, we have the ADPs \(\textsf{a}\rightarrow \textsf{b}\) and \(\textsf{f}(\textsf{s}(y)) \rightarrow \textsf{d}(\textsf{F}(y),\textsf{A})\). Here, the second constraint is \(\textrm{Pol}(\textsf{F}(\textsf{s}(y))) \ge \textrm{Pol}(\textsf{F}(y)) + \textrm{Pol}(\textsf{A})\). To make one of the ADPs strictly decreasing, one can set \(\textrm{Pol}(\textsf{F}(x)) = x\), \(\textrm{Pol}(\textsf{s}(x)) = x+1\), and \(\textrm{Pol}(\textsf{A}) = 1\) or \(\textrm{Pol}(\textsf{A}) = 0\). Then the reduction pair processor removes the annotations from the strictly decreasing ADP and the dependency graph processor proves SN.
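The constraints of Examples 37 and 38 are linear, so they can be checked mechanically. The following sketch (our own illustration) encodes linear polynomials \(a \cdot y + b\) as coefficient pairs; comparing coefficient-wise is a sound but incomplete criterion for (strict) decrease over all naturals, which is how such constraints are typically discharged in practice:

```python
# linear polynomial interpretations a*y + b over the naturals, encoded as (a, b)
def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def geq(p, q):   # coefficient-wise: sufficient for "p >= q for all y in N"
    return p[0] >= q[0] and p[1] >= q[1]

def gt(p, q):    # sufficient for strict decrease for all y in N
    return p[0] >= q[0] and p[1] > q[1]

y  = (1, 0)
s  = lambda p: add(p, (0, 1))   # Pol(s(x)) = x + 1
F  = lambda p: p                # Pol(F(x)) = x
A  = (0, 0)                     # Pol(A) = 0 (the alternative Pol(A) = 1 also works weakly)
c0 = (0, 0)                     # Pol(c_0) = 0

# terminating variant (Example 28): F(s(y)) > c2(F(y), A) = F(y) + A
assert gt(F(s(y)), add(F(y), A))

# non-terminating Example 4: the weak constraint Pol(F) >= Pol(F) + Pol(A)
# forces Pol(A) = 0, so neither ADP can be oriented strictly
Fc = (0, 0)                            # Pol(F) for the constant F
assert not geq(Fc, add(Fc, (0, 1)))    # fails whenever Pol(A) > 0
assert not gt(A, c0)                   # a -> b cannot be strict with Pol(A) = 0
assert not gt(Fc, add(Fc, A))          # f-ADP cannot be strict either
```

With \(\textrm{Pol}(\textsf{A}) = 0\), the base ADP of Example 28 is strictly decreasing, exactly as described above.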

6 Evaluation and Conclusion

In this paper, we introduced the first notion of (annotated) dependency pairs and the first DP framework for relative termination, which also features suitable dependency graph and reduction pair processors for relative ADPs. Of course, further classical DP processors can be adapted to our relative ADP framework as well. For example, in our implementation of the novel ADP framework in our tool AProVE [13], we also included a straightforward adaption of the classical rule removal processor [11], see [24].Footnote 8 While the soundness proofs for the processors in the new relative ADP framework are more involved than in the standard DP framework, the new processors themselves are quite analogous to their original counterparts and thus, adapting an existing implementation of the ordinary DP framework to the relative ADP framework does not require much effort. In future work, we will investigate how to use our new form of ADPs for full (instead of innermost) rewriting also in the probabilistic setting and for complexity analysis.

To evaluate the new relative ADP framework, we compared its implementation in “new AProVE” to all tools that participated in the most recent termination competition (TermComp 2023) [14] on relative rewriting, i.e., NaTT [36], the tool of [27], MultumNonMulta [9], and “old AProVE” which did not yet contain the contributions of the current paper. In TermComp 2023, 98 benchmarks were used for relative termination. However, these benchmarks only consist of examples where the main TRS \(\mathcal {R}\) dominates the base TRS \(\mathcal {B}\) (i.e., which can be handled by Theorem 11 from [21]) or which can already be solved via simplification orders directly.

Therefore, we extended the collection by 32 new “typical” examples for relative rewriting, including both \(\mathcal {R}_{\textsf{divL}}/\mathcal {B}_{\textsf{mset}}\) from Examples 1 and 2, and our leading example \(\mathcal {R}_{\textsf{divL}} / \mathcal {B}_{\textsf{mset}2}\) from Examples 1 and 14 (where only new AProVE can prove SN). Except for \(\mathcal {R}_{\textsf{divL}}/\mathcal {B}_{\textsf{mset}}\), in these examples \(\mathcal {R}\) does not dominate \(\mathcal {B}\). Most of these examples adapt well-known classical TRSs from the Termination Problem Data Base [33] used at TermComp to the relative setting. Moreover, 5 of our new examples illustrate the application of relative termination for proving confluence, i.e., in these examples one can prove confluence with the approach of [19] via our new technique for relative termination proofs.

In the following table, the number in the “YES” (“NO”) row indicates for how many of the 130 examples the respective tool could prove (disprove) relative termination, and “MAYBE” refers to the benchmarks where the tool could not solve the problem within the timeout of 300 s per example. The numbers in brackets are the respective results when only considering our 32 new examples. “AVG(s)” gives the average runtime of the tool on solved examples in seconds.

 

|        | new AProVE | NaTT    | old AProVE | TTT2    | MultumNonMulta |
|--------|------------|---------|------------|---------|----------------|
| YES    | 91 (32)    | 68 (10) | 48 (5)     | 39 (3)  | 0 (0)          |
| NO     | 13 (0)     | 5 (0)   | 13 (0)     | 7 (0)   | 13 (0)         |
| MAYBE  | 26 (0)     | 57 (22) | 69 (27)    | 84 (29) | 117 (32)       |
| AVG(s) | 5.11       | 0.41    | 4.02       | 1.67    | 1.60           |

The table clearly shows that while old AProVE was already the second most powerful tool for relative termination, the integration of the ADP framework in new AProVE yields a substantial advance in power (i.e., it only fails on 26 of the examples, compared to 57 and 69 failures of NaTT and old AProVE, respectively). In particular, previous tools (including old AProVE) often have problems with relative TRSs where the main TRS does not dominate the base TRS, whereas the ADP framework can handle such examples.

A special form of relative TRSs are relative string rewrite systems (SRSs), where all function symbols have arity 1. Since the base ADPs may contain two annotated symbols on their right-hand sides, the ADP framework is less powerful here than dedicated techniques for string rewriting. For the 403 relative SRSs at TermComp 2023, the ADP framework only finds 71 proofs, mostly due to the dependency graph and the rule removal processor, while termination analysis via AProVE’s standard strategy for relative SRSs succeeds on 209 examples, and the two most powerful tools for relative SRSs at TermComp 2023 (MultumNonMulta and Matchbox [35]) succeed on 274 and 269 examples, respectively.
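To recall the correspondence between string and term rewriting, every letter of an SRS becomes a unary function symbol (this is the standard encoding, not specific to our framework):

```latex
% Standard encoding of string rewriting as term rewriting:
% every letter of the alphabet becomes a unary function symbol,
% so a relative SRS is a relative TRS where all symbols have arity 1.
\[
  ab \to ba \;\;\text{(string rule)}
  \qquad\text{corresponds to}\qquad
  a(b(x)) \to b(a(x)) \;\;\text{(term rule)}
\]
```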

Another special form of relative rewriting is equational rewriting, where one has a set of equations E which correspond to relative rules that can be applied in both directions. In [10], DPs were adapted to equational rewriting. However, this approach requires E-unification to be decidable and finitary (i.e., for (certain) pairs of terms, it has to compute finite complete sets of E-unifiers). This works well if E consists of AC- or C-axioms, and for this special case, dedicated techniques like [10] are more powerful than our new ADP framework for relative termination. For example, on the 76 AC- and C-benchmarks for equational rewriting at TermComp 2023, the relative ADP framework finds 36 proofs, while dedicated tools for AC-rewriting like AProVE’s equational strategy or MU-TERM [15] succeed on 66 and 64 examples, respectively. However, in general, the requirement of a finitary E-unification algorithm is a hard restriction. In contrast to existing tools for equational rewriting, our new ADP framework can be used for arbitrary (non-duplicating) relative rules.
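To illustrate how equations give rise to relative rules, consider commutativity (a C-axiom) and associativity for a hypothetical binary symbol \(+\):

```latex
% Equations for a (hypothetical) binary symbol +:
\[
  E \;=\; \{\; x + y \approx y + x,\;\; (x + y) + z \approx x + (y + z) \;\}
\]
% Each equation is oriented in both directions, yielding the base rules
% (for commutativity, both orientations coincide up to variable renaming):
\[
  \mathcal{B}_E \;=\; \{\; x + y \to y + x,\;\;
                         (x + y) + z \to x + (y + z),\;\;
                         x + (y + z) \to (x + y) + z \;\}
\]
```

All of these rules are non-duplicating, so they satisfy the requirement on relative rules mentioned above.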

For details on our experiments, our collection of examples, and for instructions on how to run our implementation in AProVE via its web interface or locally, see: https://aprove-developers.github.io/RelativeDTFramework/.