Left-Linear Completion with AC Axioms

We revisit completion modulo equational theories for left-linear term rewrite systems where unification modulo the theory is avoided and the normal rewrite relation can be used in order to decide validity questions. To that end, we give a new correctness proof for finite runs and establish a simulation result between the two inference systems known from the literature. Given a concrete reduction order, novel canonicity results show that the resulting complete systems are unique up to the representation of their rules' right-hand sides. Furthermore, we show how left-linear AC completion can be simulated by general AC completion. In particular, this result allows us to switch from the former to the latter at any point during a completion process.


Introduction
Completion has been extensively studied since its introduction in the seminal paper by Knuth and Bendix [KB70]. One of the main limitations of the original formulation is its inability to deal with equations which cannot be oriented into a terminating rule, such as the commutativity axiom. This shortcoming can be resolved by completion modulo an equational theory E. In the literature, there are two different approaches to achieving this. The general approach [JK86, Bac91] requires E-unification and allows us to decide validity problems using the rewrite relation →R/E which is defined as ∼E • →R • ∼E. For left-linear term rewrite systems, however, there is Huet's approach [Hue80] which avoids E-unification. In particular, Huet's approach does not consider local peaks modulo E (R← • ∼E • →R) but works with ordinary local peaks (R← • →R) as well as local cliffs of the form ↔E • →R. Hence, instead of E-critical pairs we can use normal critical pairs when we also take overlaps between R and E into account. We call this approach left-linear completion modulo an equational theory. If we have a complete TRS R in the general sense, we can decide validity problems s ≈ t by rewriting both terms to normal form with →R/E and then checking the results for E-equivalence. For complete systems stemming from left-linear E-completion, however, it suffices to rewrite both s and t to normal form using the normal rewrite relation →R and then perform just one E-equivalence check on these normal forms.
In their respective books, Avenhaus [Ave95] and Bachmair [Bac91] present inference systems for left-linear completion modulo an equational theory. This article gives a detailed account of the nature of and relation between these two systems for finite runs. For the concrete case of AC (associative and commutative function symbols), we compare left-linear completion modulo equational theories with the general approach by presenting an implementation of left-linear AC completion and comparing it with the state of the art concerning general AC completion. After setting the stage in Section 2, we present a new criterion for the Church-Rosser modulo property of left-linear TRSs based on prime critical pairs in Section 3. Slightly modified versions (A and B) of the inference systems due to Avenhaus and Bachmair are discussed in Sections 4 and 5, respectively. Both sections include a new correctness proof of the given inference system for finite runs. For A, this is done from scratch by using the criterion from Section 3 in the spirit of [HMSW19]. Correctness of B is then reduced to the correctness of A by establishing a simulation result between finite runs in these systems. Furthermore, Section 6 reports on novel results on canonicity for this setting. For the concrete equational theory of associative and commutative (AC) function symbols, we also show the connection between the inference system A and general AC completion by means of another simulation result (Section 7). Finally, we describe our implementation of A in the tool accompll and present experimental results which show that the avoidance of AC unification and AC matching can result in significant performance improvements over general AC completion (Sections 8 and 9). This article extends our previous papers [NHM23b] and [NHM23a] by including full proof details, more examples and experimental data as well as novel results on canonicity.

Preliminaries
We assume familiarity with term rewriting and completion as described e.g. in [BN98] but recall some central notions.
2.1. Rewrite Systems. An abstract rewrite system (ARS) is a set A together with a binary relation → on A. If a → b for no b, then a is called a normal form of →. The set of normal forms is denoted by NF(→). Given an arbitrary binary relation →, we write ←, ↔, →=, →+ and →* to denote its inverse, its symmetric closure, its reflexive closure, its transitive closure and its reflexive and transitive closure, respectively. Hence, the relation ↔* denotes the symmetric, reflexive and transitive closure of → and is called conversion. The relation a →! b is defined as a →* b and b ∈ NF(→) and is used to denote rewriting to normal form. Finally, ↓ abbreviates the joinability relation →* • *← where • denotes relation composition. Given a signature F and a set of variables V, we consider the set of terms T(F, V) which is defined as usual. The set of variables in a term t is written as Var(t). Terms which do not contain the same variable more than once are referred to as linear terms. Each subterm of a term t has a unique position which is a finite sequence of positive integers, where the empty sequence representing the root position is written as ϵ. The set of positions in a term t is denoted by Pos(t) and further divided into the subset PosF(t) of positions which address function symbols and the subset PosV(t) = Pos(t) \ PosF(t) of variable positions. If a position p is a prefix of the position q we write p ⩽ q. Positions p and q are parallel, denoted by p ∥ q, if neither p ⩽ q nor q ⩽ p. If p ⩽ q then q \ p denotes the unique position r such that pr = q. By s|p we denote the subterm of s at position p. Mappings σ from variables to terms with a finite domain ({x ∈ V | x ≠ σ(x)}) are called substitutions. The application of a substitution σ to a term t is denoted by tσ. A renaming is a bijective substitution from V to V.
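As a quick operational illustration of these abstract notions (a hedged sketch; the function name and the toy ARS are our own and not part of the formal development), computing a →! b amounts to iterating a successor function until an element of NF(→) is reached:

```python
# A minimal sketch of an ARS as a successor function: step(a) returns some
# b with a -> b, or None when a is a normal form (a in NF(->)).

def rewrite_to_nf(step, a, limit=1000):
    """Repeatedly apply `step` until a normal form is reached.
    The `limit` merely guards against nonterminating ARSs."""
    for _ in range(limit):
        b = step(a)
        if b is None:
            return a  # a is in NF(->)
        a = b
    raise RuntimeError("no normal form reached within limit")

# Toy terminating ARS on integers: n -> n // 2 for n > 1.
step = lambda n: n // 2 if n > 1 else None
```

For this toy ARS, `rewrite_to_nf(step, 40)` follows the sequence 40 → 20 → 10 → 5 → 2 → 1 and stops at the normal form 1.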
A term s is a variant of a term t (s ·= t) if there exists a renaming ρ such that s = tρ. A context C is a term with exactly one occurrence of the special symbol □ ∉ F ∪ V called hole, which acts as a placeholder for concrete terms. Replacing the hole with a term t results in a term which we denote by C[t]. A term s encompasses a term t (s ·⊵ t) if s = C[tσ] for some context C and substitution σ. It is known that ·⊵ is a quasi-order on terms and its strict part ·▷ is a well-founded order with ·▷ = ·⊵ \ ·=. We write s[t]p for the term which is created from s by replacing its subterm at position p by t.
A pair of terms (s, t) can be viewed as an equation (s ≈ t) or a rule (s → t). In the latter case we assume that s is not a variable and Var(s) ⊇ Var(t). Equational systems (ESs) are sets of equations while term rewrite systems (TRSs) are sets of rules. Given a set of pairs of terms E, we define a rewrite relation →E as the closure of its pairs under substitutions and contexts. More formally, s →E t if there is a pair (ℓ, r) ∈ E, a position p and a substitution σ such that s|p = ℓσ and t = s[rσ]p. The equational theory induced by E consists of all pairs of terms (s, t) such that s ↔*E t. A TRS R represents an ES E if ↔*E = ↔*R. Two rules ℓ → r and ℓ′ → r′ are variants if there exists a renaming ρ such that ℓρ = ℓ′ and rρ = r′. A TRS R is left-linear if ℓ is a linear term for every rule ℓ → r ∈ R.

Given an ES E, E± denotes the ES E ∪ {t ≈ s | s ≈ t ∈ E}.
A TRS is complete if it is terminating and confluent. For standard rewriting, the Church-Rosser property (↔*R ⊆ ↓R) coincides with confluence. Hence, complete presentations R of an ES E can be used to decide the validity problem for E: s ↔*E t if and only if s →!R • R!← t. Note that confluence and the Church-Rosser property do not coincide for rewriting modulo equational theories [Ohl98]. Therefore, a notion of completeness for this setting has to depend on the Church-Rosser property instead of confluence in order to facilitate a decision procedure for validity problems of equational theories with complete presentations.
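The resulting decision procedure can be sketched as follows (an illustrative fragment, not the article's tooling; for brevity it uses string rewriting with the assumed complete system {aa → a} rather than a TRS from the text):

```python
# Deciding validity s ~ t with a complete presentation: rewrite both sides
# to normal form and compare the results syntactically.

def nf(rules, s):
    """Normal form of string s: repeatedly contract the first applicable
    left-hand side. Terminates for terminating rule sets."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                changed = True
                break
    return s

def valid(rules, s, t):
    # sound and complete only for complete (terminating + confluent) systems
    return nf(rules, s) == nf(rules, t)

R = [("aa", "a")]  # assumed complete presentation of idempotence
```

Here `valid(R, "aaa", "a")` holds since both sides rewrite to the normal form `"a"`, while `valid(R, "ab", "ba")` fails because the two sides are distinct normal forms.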
2.2. Critical Peaks. Confluence of terminating TRSs is characterized by joinability of critical pairs. Critical pairs are computed from overlaps, which are the potentially dangerous cases of nondeterminism in rewriting as long as termination holds.

Definition 2.1. An overlap of a TRS R is a triple ⟨ℓ1 → r1, p, ℓ2 → r2⟩ such that ℓ1 → r1 and ℓ2 → r2 are variable-disjoint variants of rules in R, p ∈ PosF(ℓ2), ℓ1 and ℓ2|p are unifiable, and if p = ϵ then ℓ1 → r1 and ℓ2 → r2 are not variants.
Overlaps give rise to critical peaks from which the critical pairs can then be extracted.

Definition 2.2. Let ⟨ℓ1 → r1, p, ℓ2 → r2⟩ be an overlap of a TRS R and σ a most general unifier of ℓ1 and ℓ2|p. The term ℓ2σ[ℓ1σ]p = ℓ2σ can be rewritten in two different ways, which gives rise to a critical peak. More formally, a critical peak is a quadruple ⟨ℓ2σ[r1σ]p, p, ℓ2σ, r2σ⟩ and the equation ℓ2σ[r1σ]p ≈ r2σ is a critical pair of R obtained from the original overlap.
Usually, we denote a critical peak ⟨t, p, s, u⟩ more illustratively by t pR← s →ϵR u. The set of all critical pairs of a TRS R (obtained from all possible overlaps) is denoted by CP(R). Knuth and Bendix' criterion [KB70] states that a terminating TRS R is confluent if and only if CP(R) ⊆ ↓R. Kapur et al. [KMN88] showed that joinability of prime critical pairs still guarantees confluence for terminating TRSs.
Definition 2.3. A critical peak t pR← s →ϵR u is prime if all proper subterms of s|p are in normal form. Critical pairs derived from prime critical peaks are called prime. The set of all prime critical pairs of a TRS R is denoted by PCP(R).

Theorem 2.4 [KMN88]. A terminating TRS R is confluent if and only if PCP(R) ⊆ ↓R.
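The computation of critical pairs via syntactic unification can be sketched as follows (an illustrative implementation under our own term representation, not the article's tooling; variables are strings, applications are tuples):

```python
# Critical pairs of a TRS via syntactic unification (sketch).

def is_var(t):
    return isinstance(t, str)

def apply_subst(s, t):
    if is_var(t):
        return apply_subst(s, s[t]) if t in s else t
    return (t[0],) + tuple(apply_subst(s, a) for a in t[1:])

def occurs(x, t, s):
    t = apply_subst(s, t)
    return t == x if is_var(t) else any(occurs(x, a, s) for a in t[1:])

def unify(t1, t2):
    """Most general unifier as a dict, or None if none exists."""
    s, stack = {}, [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = apply_subst(s, a), apply_subst(s, b)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, s):
                return None
            s[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return s

def fun_positions(t):
    if is_var(t):
        return
    yield ()
    for i, a in enumerate(t[1:], 1):
        for p in fun_positions(a):
            yield (i,) + p

def subterm(t, p):
    for i in p:
        t = t[i]
    return t

def replace(t, p, u):
    if p == ():
        return u
    i = p[0]
    return t[:i] + (replace(t[i], p[1:], u),) + t[i + 1:]

def rename(t, suffix):
    return t + suffix if is_var(t) else (t[0],) + tuple(rename(a, suffix) for a in t[1:])

def critical_pairs(rules):
    cps = []
    for i, (l1, r1) in enumerate(rules):
        l1, r1 = rename(l1, "'"), rename(r1, "'")  # variable-disjoint variant
        for j, (l2, r2) in enumerate(rules):
            for p in fun_positions(l2):
                if p == () and i == j:
                    continue  # no overlap of a rule with itself at the root
                mgu = unify(l1, subterm(l2, p))
                if mgu is not None:
                    peak_src = apply_subst(mgu, l2)
                    cps.append((replace(peak_src, p, apply_subst(mgu, r1)),
                                apply_subst(mgu, r2)))
    return cps
```

For the single (hypothetical) rule f(f(x)) → g(x), the overlap at position 1 yields the critical peak f(g(x′)) ← f(f(f(x′))) → g(f(x′)), i.e., the critical pair f(g(x′)) ≈ g(f(x′)).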
Example 2.5. Using Theorem 2.4, we show the confluence of the following TRS R: Since every rewrite step reduces the total number of f and a symbols, termination of R follows.
Observe that R admits six (modulo symmetry) critical peaks of the form t pR← s →ϵR u. Here the positions p in s are indicated by underlining. The first three critical peaks are not prime due to the reducible proper subterm a in s|p, while the others are prime. Therefore, R is confluent by Theorem 2.4.

2.3. Rewriting Modulo. We now turn our attention to rewriting modulo an equational theory. To that end, we start by giving general definitions for abstract rewrite systems (ARSs). Let A = ⟨A, →⟩ be an ARS and ∼ an equivalence relation on A. We write ⇔ for the relation ↔ ∪ ∼ and ↓∼ for →* • ∼ • *←. Given A, we denote ⟨A, →/∼⟩ by A/∼ where →/∼ abbreviates ∼ • → • ∼. The ARS A is terminating modulo ∼ if there are no infinite rewrite sequences with →/∼ and Church-Rosser modulo ∼ if ⇔* ⊆ ↓∼. The ARS A is complete modulo ∼ if it is terminating modulo ∼ and Church-Rosser modulo ∼. While there is no distinction for termination modulo ∼ between A and A/∼ (∼ • ∼ = ∼ by transitivity), it makes a considerable difference whether we talk about the Church-Rosser modulo ∼ property, and therefore completeness modulo ∼, of A or A/∼. The following lemma is taken from [Ave95, Lemma 4.1.12]. It establishes an important connection between the Church-Rosser modulo ∼ property of an ARS A and A/∼.

Lemma 2.6 [Ave95]. Let A = ⟨A, →⟩ and A′ = ⟨A, ⇀⟩ be ARSs and ∼ an equivalence relation on A.

The definitions and results for ARSs carry over to TRSs by replacing the equivalence relation ∼ by the equational theory ↔*B of an ES B. Most theoretical results of this article are not specific to AC but hold for an arbitrary base theory B of which we only demand that Var(ℓ) = Var(r) for all ℓ ≈ r ∈ B.
We abbreviate ↔*B by ∼B and the rewrite relation ∼B • →R • ∼B by →R/B. Termination modulo B is shown by B-compatible reduction orders >, i.e., > is well-founded, closed under contexts and substitutions, and satisfies ∼B • > • ∼B ⊆ >. The relation →R/B is not a very practical rewrite relation: it is undecidable in general, and even if the equational theory of B is decidable, rewriting a term t requires checking every member of its B-equivalence class. A more practical alternative due to Peterson and Stickel [PS81] is the relation →R,B defined as follows: s →R,B t if there exist a rule ℓ → r ∈ R, a substitution σ and a position p such that s|p ∼B ℓσ and t = s[rσ]p. This definition is very similar to the definition of standard rewriting but with B-matching instead of syntactic matching. It is immediate from the respective definitions that the inclusions →R ⊆ →R,B ⊆ →R/B hold. Note that for the concrete case of B = AC, all of these inclusions are actually strict for nonempty TRSs R.
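For the concrete case B = AC, deciding the B-equivalence s ∼AC t that appears in these definitions is straightforward; the following fragment (an illustrative sketch assuming a single AC symbol `+` and the tuple term representation from above, not part of the formal development) normalizes terms modulo AC by flattening and sorting:

```python
# AC-equivalence check: two terms are AC-equivalent iff their recursively
# flattened and sorted '+'-argument multisets coincide.

def flatten_ac(t, ac="+"):
    """Canonical form modulo AC: flatten nested '+' (associativity) and
    sort the collected arguments (commutativity)."""
    if isinstance(t, str):          # variable
        return t
    f, args = t[0], [flatten_ac(a, ac) for a in t[1:]]
    if f == ac:
        flat = []
        for a in args:
            # absorb nested '+' applications
            flat.extend(a[1:] if not isinstance(a, str) and a[0] == ac else [a])
        return (ac,) + tuple(sorted(flat, key=repr))
    return (f,) + tuple(args)

def ac_equal(s, t):
    return flatten_ac(s) == flatten_ac(t)
```

For instance, x + (y + z) and (z + y) + x are identified, while x + y and x + x are not; an →R,B implementation would use such a check for B-matching of subterms against left-hand sides.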

Church-Rosser Criterion
In this section we present a new characterization of the Church-Rosser property modulo an equational theory B for left-linear TRSs which are terminating modulo B. The original left-linear completion procedure [Ave95, Bac91] relies on the following theorem. We use critical pairs between two different TRSs R1 and R2 with the same signature: CP(R1, R2) denotes the set of all critical pairs originating from overlaps ⟨ρ1, p, ρ2⟩ where ρ1 is a variant of a rule in R1 and ρ2 is a variant of a rule in R2.

Theorem 3.1. A left-linear TRS R which is terminating modulo B is Church-Rosser modulo B if CP(R) ∪ CP±(R, B±) ⊆ ↓∼R.

Example 3.2. We show that the left-linear TRS R of Example 2.5 is Church-Rosser modulo B. As before, the termination of R/B follows from the fact that every rewrite step reduces the total number of f and a symbols. The six critical peaks of R are joinable as shown in Example 2.5. In addition, R and B admit four critical peaks of the forms t pR← s ↔ϵB u and t ↔pB s →ϵR u.

Theorem 3.1 allows us to use ordinary critical pairs instead of B-critical pairs. In particular, equational unification modulo B can be replaced by syntactic unification, which improves efficiency. Furthermore, the form of the joining sequence (↓∼R) is advantageous as it uses the normal rewrite relation and just one B-equality check in the end, as opposed to rewrite steps modulo the theory (∼B • →R • ∼B). However, left-linearity is necessary in Theorem 3.1, as the following example illustrates.
Example 3.3. Consider the AC-terminating TRS R consisting of the single rule f(x, x) → x with + as an additional AC function symbol, as well as the conversion x + y R← f(x + y, x + y) ∼AC f(x + y, y + x). There are no critical pairs in R and between R and AC±, so the critical pair condition of Theorem 3.1 is vacuously satisfied. Nevertheless, the Church-Rosser modulo AC property does not hold because x + y and f(x + y, y + x) are R-normal forms which are not AC equivalent.
In the remainder of this section we show that joinability of prime critical pairs suffices for the characterization of Theorem 3.1.
3.1. Peak-and-Cliff Decreasingness. We present a new Church-Rosser modulo criterion, dubbed peak-and-cliff decreasingness. This is an extension of peak decreasingness [HMSW19], a simple confluence criterion for ARSs designed to replace complicated proof orderings in the correctness proofs of completion procedures.
In the following, we assume that equivalence relations ∼ are defined as the reflexive and transitive closure of a symmetric relation ⊢, so ∼ = ⊢*. We refer to conversions of the form ← • ⊢ or ⊢ • → as local cliffs and conversions of the form ← • → as local peaks. Furthermore, we assume that steps are labeled with labels from a set I, so let A = ⟨A, {→α}α∈I⟩ be an ARS and ∼ = ({⊢α | α ∈ I})* an equivalence relation on A.

Definition 3.4. An ARS A is peak-and-cliff decreasing if there is a well-founded order > on I such that for all α, β ∈ I every local peak ←α • →β and every local cliff ⊢α • →β can be replaced by a conversion whose steps carry labels that are smaller than α or β.

We show that peak-and-cliff decreasingness is a sufficient condition for the Church-Rosser modulo property.

Lemma 3.5. Every conversion modulo ∼ is a valley modulo ∼ or contains a local peak or cliff.

The proof of the following theorem is based on a well-founded order on multisets. We denote the multiset extension of an order > by >mul. It is well-known that the multiset extension of a well-founded order is also well-founded.
Theorem 3.6.If A is a peak-and-cliff decreasing ARS then A is Church-Rosser modulo ∼.
Proof. With every conversion C we associate a multiset MC consisting of the labels of its rewrite and equivalence relation steps. Since A is peak-and-cliff decreasing, there is a well-founded order > on I which allows us to replace a conversion C containing a local peak or cliff by a conversion C′ with MC >mul MC′. Hence, we prove that A is Church-Rosser modulo ∼, i.e., ⇔* ⊆ ↓∼, by well-founded induction on >mul. Consider a conversion a ⇔* b which we call C. By Lemma 3.5 we either have a ↓∼ b (which includes the case that C is empty) or C contains a local peak or cliff. If a ↓∼ b we are immediately done. In the remaining cases, we have a local peak or cliff with concrete labels α and β, so we can replace it by a conversion with a smaller associated multiset and finish the proof by applying the induction hypothesis.
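The multiset extension >mul used in this induction can be sketched operationally as follows (an illustrative fragment in the classic Dershowitz-Manna formulation; the function names are our own, not part of the formal development):

```python
# Multiset extension of an order: M >mul N if N is obtained from M by
# replacing at least one element by finitely many strictly smaller ones.

from collections import Counter

def mul_ext(gt, m, n):
    """Does multiset m dominate multiset n in the extension of `gt`?"""
    m, n = Counter(m), Counter(n)
    common = m & n
    m, n = m - common, n - common  # cancel shared occurrences
    if not m:
        return False  # nothing was removed, so no strict decrease
    # every leftover element of n must lie below some leftover element of m
    return all(any(gt(x, y) for x in m) for y in n)
```

For instance, with the usual order on integers, {5, 3, 1} >mul {4, 4, 3, 1} holds since both occurrences of 4 are below the removed 5, whereas {1} >mul {1} fails. Well-foundedness of `gt` then guarantees well-foundedness of this comparison, which is exactly what the induction in the proof above relies on.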
The above theorem can also be shown by verifying that it is a special case of the Church-Rosser modulo criterion known as decreasing diagrams [FvO13, Theorem 31]. Note, however, that this is not as obvious as the fact that peak decreasingness [HMSW19] is an instance of decreasing diagrams for confluence [vO94]. For the main result of this section, a simpler version of peak-and-cliff decreasingness suffices.
Definition 3.7. Let A = ⟨A, →⟩ be an ARS equipped with a ∼-compatible well-founded order > on A and ∼ = ⊢* an equivalence relation on A. We label every step with its source, i.e., we write a →a b if a → b and a ⊢a b if a ⊢ b and b ∼ a. We say that A is source decreasing modulo ∼ if every local peak and local cliff with source s can be replaced by a conversion whose steps are labeled with elements that are smaller than s.

Corollary 3.8. If A is source decreasing modulo ∼ then A is Church-Rosser modulo ∼.

Proof. In the definition of peak-and-cliff decreasingness we set I = A. Note that this implies α = β for all local peaks and cliffs. Hence, the ARS is peak-and-cliff decreasing and we can conclude by an appeal to Theorem 3.6.
3.2. Prime Critical Pairs. We show that joinability of prime critical pairs is enough for characterizing the Church-Rosser modulo property. In the following, PCP±(R, B±) denotes the restriction of CP±(R, B±) to prime critical pairs, but where irreducibility is always checked with respect to R, i.e., the critical peaks t pR← s ↔ϵB u and t′ ↔pB s →ϵR u′ are both prime if all proper subterms of s|p are irreducible with respect to R.
Example 3.9 (continued from Example 3.2). Recall the four critical peaks between R and B. The first two peaks are not prime due to the reducible proper subterm a. The other two are prime.

Correctness of Theorem 3.1 can be shown by combining Corollary 3.8 with the following lemma.
Lemma 3.10 [Hue80]. For left-linear TRSs R, the following inclusion holds:

In order to integrate the refinement by prime critical pairs, some more observations are required.
Definition 3.11. Given a TRS R and terms s, t and u, we write t ▽s u if t ↓R u or t ↔PCP(R) u, and t ▽∼s u if t ↓∼R u or t ↔PCP±(R,B±) u.

Note that the joinability of ordinary critical peaks is not affected by incorporating B into conversions. Hence, the following result is taken from [HMSW19, Lemma 2.15] and therefore stated without proof.
(1) If

Proof. We only prove (1) as the other statement is symmetric. If all proper subterms of s|p are in normal form with respect to →R, then t ≈ u ∈ PCP±(R, B±), which establishes t ▽∼s u. Since also t ▽s t, we obtain the desired result. Otherwise, there are a position q > p and a term v such that s →qR v and all proper subterms of s|q are in normal form with respect to →R. Together with Lemma 3.10 we obtain either v ↓∼R u or v ↔PCP±(R,B±) u. In both cases v ▽∼s u holds. A similar case analysis applies to the local peak, which is an instance of a prime critical peak as q > p and all proper subterms of s|q are in normal form with respect to →R. Closure of rewriting under contexts and substitutions yields t ↔PCP(R) v. Therefore, we have t ▽s v in both cases, concluding the proof.
The following lemma generalizes the previous results of this section to arbitrary local peaks and cliffs.

Lemma 3.14. Let R be a left-linear TRS.

Theorem 3.15. Let R be a left-linear TRS such that R/B is terminating. If PCP(R) ∪ PCP±(R, B±) ⊆ ↓∼R then R is Church-Rosser modulo B.

Proof. (1) By definition, s > t, v and s ∼ u. The conversion between v and u is of the form v →*R • ∼ • kR← u for some k. If k = 0 then all steps between v and u are labeled with terms which are smaller than s. If k > 0 then there exists a w < s such that all steps of the conversion are labeled with terms which are smaller than s, except for the rightmost step, which we may label with s. Hence, the corresponding condition required by source decreasingness is fulfilled in all cases.

Example 3.16 (continued from Example 3.9). One can verify the termination of R/B and the inclusion PCP(R) ∪ PCP±(R, B±) ⊆ ↓∼R. By Theorem 3.15 the Church-Rosser modulo property holds.
Finally, we show that the previous result does not hold if we merely demand termination of R. The counterexample shows this for the concrete case of AC and is based on Example 4.1.8 from [Ave95], which uses an ARS. Note that the usage of prime critical pairs instead of critical pairs has no effect.
Example 3.17. Consider the TRS R consisting of the following rules, where + is an AC function symbol. Clearly, the (prime) critical pairs of R are joinable modulo AC because b + (a + a) ∼AC a + (b + a). For PCP±(R, AC±) we only have to consider the rules which rewrite to c and d, respectively, since all other rules only involve AC equivalent terms. Modulo symmetry, these (prime) critical pairs can be joined by adding the following rules to R. The new (prime) critical pairs in PCP(R) ∪ PCP±(R, AC±) are trivially joinable modulo AC as they are AC equivalent. To sum up, termination of R can be checked by e.g. TTT2, but the loop shows that R is not AC terminating. We have c ⇔* d but not c ↓∼R d as the terms are normal forms and not AC equivalent. Hence, R is not Church-Rosser modulo AC.

Avenhaus' Inference System
The idea of completion modulo an equational theory B for left-linear systems, where the normal rewrite relation can be used to decide validity problems, has been put forward by Huet [Hue80]. To the best of our knowledge, inference systems for this approach are only presented in the books by Avenhaus [Ave95] and Bachmair [Bac91]. This section presents a new correctness proof of a version of Avenhaus' inference system for finite runs in the spirit of [HMSW19] which does not rely on proof orderings. Correctness of Bachmair's system is established by a simulation result in Section 5.

4.1. Inference System.
Definition 4.1. The inference system A is parameterized by a fixed B-compatible reduction order >. It transforms pairs consisting of an ES E and a TRS R over the common signature F according to the following inference rules, where s ≈̇ t denotes either s ≈ t or t ≈ s:

A step in an inference system I from an ES E and a TRS R to an ES E′ and a TRS R′ is denoted by (E, R) ⊢I (E′, R′). The parentheses of the pairs are only used when the expression is surrounded by text in order to increase readability.

Definition 4.2. Let E be an ES. A finite sequence (E, ∅) = (E0, R0) ⊢A (E1, R1) ⊢A · · · ⊢A (En, Rn) is a run for E. The run is fair if Rn is left-linear and the following inclusions hold:

Intuitively, fair and non-failing runs yield a B-complete presentation Rn of the initial set of equations E. In particular, the inference rules are designed to preserve the equational theory augmented by B.
Example 4.3. In this example we illustrate a successful run for the ES E consisting of the following equations, where + is an AC function symbol. This example is taken from [Ave95, Example 4.2.15(b)].
As suggested by Definition 4.2, we only consider prime critical pairs. As AC-compatible reduction order we use the polynomial interpretation [BL87] and start by orienting equations 2 and 3 into rules, which only leads to prime critical pairs between rule 3′ and AC±. We add these prime critical pairs as rules by applying deduce. Keeping rule 4 enables us to collapse the remaining three rules to equations which can all be deleted. Next, we deduce the prime critical pair stemming from rules 3′ and 4, which is just the trivial equation 0 ≈ 0 and can therefore be deleted. We continue by deducing the prime critical pairs between rule 4 and AC±, which adds new rules that can all be collapsed to trivial equations and therefore deleted. Now we orient the only remaining original equation 1, which gives rise to two prime critical pairs between rule 1′ and rules 3′ and 4; these can be simplified to trivial equations and therefore deleted. Finally, we deduce rules corresponding to the prime critical pairs between rule 1′ and AC±. Applications of collapse and simplify transform these rules to AC equivalent equations which can be deleted. Thus, the TRS consisting of the rules 1′, 2′, 3′ and 4 is the result of a fair and non-failing run, which is an AC complete presentation of the original equations as we will show in the correctness proof.
The next example shows that deducing local cliffs as rules as well as the restriction to → R in the collapse rule are crucial properties of the inference system.
Example 4.4. Consider the ES E consisting of the single equation x + 0 ≈ x where + is an AC function symbol. We clearly have 0 + x ↔*E∪AC x, so an AC complete system C representing E has to satisfy 0 + x ↓∼C x. There is just one way to orient the only equation in E, which results in the rule x + 0 → x. Since we want our run to be fair, we add the rules stemming from the prime critical pairs between x + 0 → x and AC±. If collapsing with →R/AC were allowed, all these rules would become trivial equations and could therefore be deleted. Thus, the modified inference system allows for a fair run which is not complete, as 0 + x ↓∼R x does not hold for R = {x + 0 → x}. Furthermore, if we add pairs of terms stemming from local cliffs as equations, we get the same result by applications of simplify.
The inference system presented in Definition 4.1 is almost the same as the one presented by Avenhaus in [Ave95]. However, since we only consider finite runs, the encompassment condition for the collapse rule has been removed in the spirit of [ST13]. (The original side condition is s →R u with ℓ → r ∈ R and s ·▷ ℓ.) The following example shows that this can lead to smaller B-complete systems.
Example 4.5. Consider the ES E consisting of a single equation where + is an AC function symbol. The inference system presented in [Ave95] produces an AC complete system in which either of the rules could be collapsed if it were allowed to collapse with the other rule. In [Ave95] this is prevented by an encompassment condition which essentially forbids collapsing at the root position with a rewrite rule whose left-hand side is a variant of the left-hand side of the rule which should be collapsed. However, this is possible with the system presented in this article, so for an AC complete representation just one of the two rules suffices.
4.2. Correctness Proof. We show that every fair and non-failing finite run results in a B-complete presentation. To this end, we first verify that inference steps in A preserve convertibility. We abbreviate as follows; then the following inclusions hold:

By inspecting the inference rules of A we obtain the following inclusions for deduce:

Then, inclusion (2) follows directly from the closure of rewrite relations under contexts and substitutions. Statement (1) holds since it is a generalization that all cases have in common.
Proof. According to the assumption we have (E, R) ⊢nA (E′, R′) for some natural number n. We proceed by induction on n.
We continue with a case analysis on the rule applied in the step (E′′, R′′) ⊢A (E′, R′). For the rules delete and simplify there is nothing to show as the set of rewrite rules is not changed. If deduce is applied to a local peak there is nothing to show. In the case of compose, from the induction hypothesis we obtain s > t. Now R′ ⊆ > follows by the transitivity of >. Finally, the case of collapse is analogous.

Definition 4.9. Let ↔ be a rewrite relation or equivalence relation, M a finite multiset of terms and > a B-compatible reduction order. We write s ↔M t if s ↔ t and there exist terms s′, t′ ∈ M such that s′ ≳ s and t′ ≳ t. We follow the convention that if a conversion is labeled with M, all single steps can be labeled with M.
(1) For any finite multiset M we have: By definition, there exist terms s′, t′ ∈ M with s′ ≳ s and t′ ≳ t. According to Lemma 4.6 there exist terms u and v such that: Since R′ ⊆ > we have s ≳ u and t ≳ v and therefore s′ ≳ u and t′ ≳ v. Hence, every non-empty step can be labeled with M. For a proof of (2), let s →MR t. By definition, there exists an s′ ∈ M such that s′ ≳ s. We proceed by case analysis on the rule applied in the inference step. For deduce, orient, delete and simplify there is nothing to show since R ⊆ R′.
Suppose the step is an application of compose. If the rule used in the step s →MR t is not altered, we are done. Otherwise, the step was performed with a rule ℓ → r ∈ R which is changed to ℓ → r′ ∈ R′ with r →R′/B r′. There exist a substitution σ and a position p such that s = s[ℓσ]p and t = s[rσ]p. Since > is a B-compatible reduction order, we have s′ > t′ and therefore the label M is still valid. From t′ we can reach t. From s > t we obtain {s} >mul {t}, which means that the new conversion between s and t is of the desired form. Finally, suppose the step is an application of collapse. If the rule used in the step s →MR t is not altered, we are done immediately. Otherwise, the step was performed with a rule ℓ → r ∈ R which is changed to an equation ℓ′ ≈ r ∈ E′ with ℓ →R′ ℓ′. There exist a substitution σ and a position p such that s = s[ℓσ]p and t = s[rσ]p. Since > is a B-compatible reduction order, we have s′ > t′ and therefore the label M is still valid. From t′ we can reach t with t′ →NE′ t where N = {t′, t}. From s > t′ and s > t we obtain {s} >mul N, which means that the new conversion between s and t is of the desired form.
Finally, we are able to prove the correctness result for A, i.e., all finite fair and non-failing runs produce a B-complete TRS which represents the original set of equations. In contrast to [Ave95] and [Bac91], the proof shows that it suffices to consider prime critical pairs.

Theorem 4.11. Let E be an ES. For every fair and non-failing run:

Proof. Let > be the B-compatible reduction order used in the run. From fairness we obtain En = ∅ as well as the fact that Rn is left-linear. Corollary 4.7 establishes the preservation of the equational theory, and termination modulo B of Rn follows from Lemma 4.8. It remains to prove that Rn is Church-Rosser modulo B, which we do by showing peak-and-cliff decreasingness. So consider a labeled local peak t M1Rn← s →M2Rn u. Lemma 3.14(1) yields t ▽²s u. Let v ▽s w appear in this sequence (so v = t or w = u). By definition, v ↓Rn w or v ↔PCP(Rn) w. Together with fairness, the fact that ∼B is reflexive as well as closure of rewriting under contexts and substitutions, we obtain v ↓∼Rn w or (v, w) ∈ ⋃ni=0 ↔Ri. In both cases, it is possible to label all steps between v and w with {v, w}. Since s > v and s > w we have M1 >mul {v, w} and M2 >mul {v, w}. Repeated applications of Lemma 4.10(1) therefore yield a conversion in Rn ∪ B between v and w where every step is labeled with a multiset that is smaller than both M1 and M2. Hence, the corresponding condition required by peak-and-cliff decreasingness is fulfilled.
Next consider a labeled local cliff t M1Rn← s ↔M2B u. From Lemma 3.14(2) we obtain a term v such that t ▽s v ▽∼s u. As in the case for local peaks, we obtain a conversion between t and v where each step can be labeled with {t, v} <mul M1. Together with fairness, we can label the rightmost step with M2 and the remaining steps with {v, w}. Note that s > v. Since > is a B-compatible reduction order we also have s > w. Thus, M1 >mul {v, w}, which establishes the corresponding condition required by peak-and-cliff decreasingness for all k. In the remaining case we have (v, u) ∈ ⋃ni=0 ↔Ri, so there is some i ⩽ n such that v ↔Ri u. Actually, we know that u →M2Ri v since otherwise we would have both s > v and v > s by the B-compatibility of >. Repeated applications of Lemma 4.10(1,2) therefore yield a conversion between u and v labeled with a multiset N where {u} >mul N. By definition, s′ ≳ u for some s′ ∈ M1 and therefore M1 >mul N, which means that the corresponding condition required by peak-and-cliff decreasingness is fulfilled. Overall, it follows that Rn is peak-and-cliff decreasing and therefore Church-Rosser modulo B.
Note that the proofs of the previous theorem and Theorem 3.6 do not require multiset orders induced by quasi-orders but use multiset extensions of proper B-compatible reduction orders, which are easier to work with. This could be achieved by defining peak-and-cliff decreasingness in such a way that well-founded orders suffice for the abstract setting. However, the usage of multiset orders based on B-compatible reduction orders as well as a notion of labeled rewriting which allows us to label steps with B-equivalent terms are crucial in order to establish peak-and-cliff decreasingness for TRSs.

Bachmair's Inference System
As already mentioned, the inference system proposed by Avenhaus [Ave95] is essentially the same as A. The only other inference system for B-completion for left-linear TRSs is due to Bachmair [Bac91]. We investigate a slightly modified version of this inference system where arbitrary local peaks are deducible and the encompassment condition from the collapse rule is removed, as we only consider finite runs. The resulting system will be called B.
The main difference between A and B is that in B one may only use the standard rewrite relation →_R for simplifying equations and composing rules. This allows us to deduce local cliffs as equations. The goal of this section is to establish correctness of B via a simulation by A.
Definition 5.1. The inference system B is the same as A but with rewriting in compose and simplify restricted to →_R and the following rule which replaces the two deduction rules of A:

Definition 5.2. Let E be an ES. A finite sequence

The run is fair if R_n is left-linear and the following inclusion holds:

In contrast to Definition 4.2, the fairness condition is the same for all prime critical pairs since the inference rule deduce of B never produces rewrite rules.
In order to prove that fair and non-failing runs in B can be simulated in A, we start with the following technical lemma. We denote an application of the rule orient in an inference system I by o_I.
Let > be a fixed B-compatible reduction order which is used in both A and B. From

In order to simplify the formulation, we will refer to the sequence of orient steps between a pair and its primed variant as the invariant. We proceed by a case analysis on the rule applied in the inference step.

▷ In the case of deduce, we apply the same rule in A. For local peaks s → In both cases, the invariant is preserved.

▷ Suppose the inference step in B orients an equation s ≈ t. If s ≈ t ∈ E′_1, we perform the same step in A, which preserves the invariant. Otherwise, there is a rule v → w ∈ R′_1 where {v, w} = {s, t}. In this case, an empty step in A preserves the invariant.

▷ If the inference step in B deletes an equation s ≈ t, it has to be in E′_1, which enables us to perform the same step in A while preserving the invariant: Suppose that s ≈ t ∈ E_1 \ E′_1. Since the equation is deleted, we have s ∼_B t. Neither s > t nor t > s can hold as > is B-compatible and irreflexive. Hence, the equation cannot be oriented, which contradicts the assumption.

▷ If the inference step in B is compose or collapse, the same step can be performed in A while preserving the invariant as R_1 ⊆ R′_1.

▷ Finally, suppose that the inference step in B simplifies an equation s ≈ t. Since the orientation of equations does not matter in completion, we may assume without loss of generality that the simplification transforms the equation to s′ ≈ t. If s ≈ t ∈ E′_1, we perform the same step in A, which preserves the invariant. Otherwise, there is a rule v → w ∈ R′_1 such that {v, w} = {s, t}. If v = s and w = t, we use collapse for the inference step in A, which produces the same equation s′ ≈ t ∈ E′_2, so the invariant is preserved. If v = t and w = s, we use compose for the inference step in A in order to obtain the rule t → s′. From t > s and s > s′ we obtain t > s′ and therefore the equation s′ ≈ t can be oriented into the rule t → s′. Thus, the invariant is preserved.
For the proof of the simulation result, we need a slightly different form of the previous lemma. Analogous to the notation for rewrite relations, the relation o^!_I denotes the exhaustive application of the inference rule orient.
3 ) as desired.

Theorem 5.5. For every fair run (E, ∅) ⊢*_B (∅, R) there exists a fair run (E, ∅)

By n applications of Corollary 5.4 we arrive at the following situation:

The following two statements hold: (1) For 0 ⩽ i ⩽ n, all orientable equations in E_i are in R′_i (possibly reversed) and the other equations are in ) is a set of orientable equations. Statement (1) is immediate from the simulation relation o^!_B and statement (2) follows from B-compatibility of the used reduction order together with the fact that every (prime) critical pair is connected by one R_n-step and one B-step. Furthermore,

Hence, we obtain fairness of the run in A by showing the following inclusions:

By fairness of the run in B we obtain s ↓∼_{R′_n} t or s ↔_{E_k} t for some k ⩽ n. In the former case, we are immediately done. In the latter case we obtain s

By fairness of the run in B we obtain s ↓∼_{R′_n} t or s ↔_{E_k} t for some k ⩽ n. Again, we are immediately done in the former case. In the latter case we have s ↔_{R′_k} t because of (1) and (2). Therefore, the run in A is fair.

The previous theorem is an important simulation result which justifies the emphasis on A in this article. Moreover, together with Theorem 4.11, the correctness of the inference system B is an easy consequence.
Corollary 5.6. Every fair and non-failing run for E in B produces a B-complete presentation of E.

Canonicity
Complete representations resulting from completion may have redundant rules which do not contribute to the computation of normal forms. The notion of canonicity addresses this issue by defining a minimal and unique representation of a complete TRS for a given reduction order. In this section, canonicity results for B-complete TRSs are presented. After establishing results for abstract rewriting, means to compute B-canonical TRSs and the uniqueness of B-canonical TRSs are discussed. The results and proofs in this section closely follow the presentation of canonicity results for standard rewriting in [HMSW19] by carefully lifting definitions and results to rewriting modulo B. To the best of our knowledge, this section presents the first account of canonicity results for B-complete TRSs.
6.1. Results for Abstract Rewriting Modulo. In the following we assume that

If A_1 and A_2 are normalization equivalent modulo ∼ and terminating, they are equivalent modulo ∼.
Proof. We prove ⇔*_{A_1} ⊆ ⇔*_{A_2} by induction on the length of conversions in A_1. The claim follows by symmetry. If a ⇔⁰_{A_1} b then a = b and therefore also a ⇔⁰

We only consider the first case, by again exploiting symmetry. Since A_1 and A_2 are normalization equivalent and terminating, there is an element c such that b

Hence, we can connect a′ and b by a conversion in A_2 as desired.
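The flavor of these abstract statements can be checked on a tiny example. The sketch below is our own toy encoding (with ∼ taken to be plain equality): it represents two terminating abstract rewrite systems as sets of step pairs, computes normal forms, and confirms that the two systems are normalization equivalent even though their step relations differ.

```python
def normal_form(ars, a):
    """Follow steps until no successor exists; for a terminating and
    confluent toy ARS this yields the unique normal form of a."""
    while True:
        succ = [b for (x, b) in ars if x == a]
        if not succ:
            return a
        a = succ[0]

# Two different step relations over the elements {a, b, c}:
A1 = {('a', 'b'), ('b', 'c')}   # a -> b -> c
A2 = {('a', 'c'), ('b', 'c')}   # a -> c, b -> c
```

Both systems send every element to the normal form c, so they are normalization equivalent; being terminating, the lemma then tells us they induce the same conversion equivalence.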
Proof. From the inclusion →_{A_2} ⊆ (→_{A_1}/∼)⁺ as well as the termination of A_1 modulo ∼ we obtain that A_2 is terminating modulo ∼. Next, we show

Hence, A_1 and A_2 are normalization equivalent modulo ∼. Finally, the Church-Rosser modulo ∼ property of A_2 follows from the sequence of inclusions

the fact that A_1 is complete modulo ∼ and normalization equivalence modulo ∼ of A_1 and A_2, respectively.

6.2. Computing B-Canonical Term Rewrite Systems. Intuitively, B-complete TRSs need to have a variety of left-hand sides of rules in order to match every possible B-equivalent term of a reducible term. However, for the right-hand sides of rules only one representative of the B-equivalence class suffices. This rationale is reflected in the following series of definitions.

Definition 6.4. Two rules ℓ → r and ℓ′ → r′ are right-B-equivalent variants if there exists a renaming σ such that ℓσ = ℓ′ and rσ ∼_B r′. We write R_1 .=_∼ R_2 if every rule of R_1 has a right-B-equivalent variant in R_2 and vice versa. For any TRS R, the TRS R .
Example 6.5. From the TRS R consisting of the rules where + and × are AC function symbols, we obtain R.=_∼ by removing the third rule:

The relation .=_∼ weakens the notion of literal similarity of TRSs to the setting of the inference system A: while composing with →_{R/B} is allowed, left-hand sides may only be rewritten with →_R.

Definition 6.6. A TRS R is right-B-reduced if for every rewrite rule ℓ → r ∈ R, r is a normal form with respect to →_{R/B}. We say that R is canonical modulo B if it is complete modulo B, left-reduced and right-B-reduced.
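For AC as the theory B, B-equivalence of right-hand sides can be decided by comparing canonical forms in which nested occurrences of an AC symbol are flattened and the argument lists sorted. The following Python sketch illustrates this, together with a simplified variant check; terms are encoded as nested tuples whose first component is the function symbol (our own convention), and the check deliberately ignores variable renaming for brevity.

```python
def ac_canon(t, ac_syms):
    """Canonical form modulo AC: flatten nested applications of an
    AC symbol and sort its argument list."""
    if isinstance(t, str):                       # variable or constant
        return t
    f, args = t[0], [ac_canon(a, ac_syms) for a in t[1:]]
    if f in ac_syms:
        flat = []
        for a in args:
            flat.extend(a[1:] if isinstance(a, tuple) and a[0] == f else [a])
        args = sorted(flat, key=repr)
    return (f, *args)

def right_ac_equivalent(rule1, rule2, ac_syms):
    """Right-AC-equivalent variants, with renamings omitted for
    brevity: identical left-hand sides, AC-equal right-hand sides."""
    (l1, r1), (l2, r2) = rule1, rule2
    return l1 == l2 and ac_canon(r1, ac_syms) == ac_canon(r2, ac_syms)
```

For example, the rules f(x) → x + (y + z) and f(x) → (z + y) + x are right-AC-equivalent variants under this check, since both right-hand sides flatten to the same sorted argument list.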
The next definition is an extension of .= for B-complete TRSs. In particular, it is designed to transform B-complete TRSs into B-canonical TRSs.

Definition 6.7. Given a B-terminating TRS R, the TRSs ∼R and ∼•R are defined as follows:

Here, r↓_{R/B} denotes an arbitrary normal form of r with respect to →_{R/B}.
The following example shows that for preserving the equational theory, it does not suffice to make ∼ R variant-free.
Example 6.8. Suppose we change the definition to

where variant-freeness is implicit. Consider the TRS R consisting of the rules

where + is an AC function symbol. We have ∼R = R since the two rules are not variants and therefore ∼•R = ∅, which obviously induces a different equational theory than R. However, by using Definition 6.7 we obtain e.g. the following B-canonical TRS:

Before we can prove the main result of this section, we need the following technical lemma.

Lemma 6.9. If R is a B-complete TRS and ∼R is Church-Rosser modulo B then NF( R) for which it suffices to show that ℓ ∉ NF( R due to closure of rewriting under contexts and substitutions. We prove this by induction on ℓ with respect to the well-founded order

R follows from its definition. Hence, R is not only normalization equivalent modulo B but also (conversion) equivalent modulo B to R.
Due to the constructive nature of Definition 6.7, the previous theorem states that given a B-complete TRS R, it is possible to compute a B-canonical TRS which is equivalent modulo B to R. An inspection of Definition 6.6 further reveals that the inference system A can produce B-canonical TRSs: if (E, ∅) ⊢*_A (∅, R) is a fair run and neither compose nor collapse are applicable, R is not only complete but also canonical modulo B. The following lemma shows that this also holds for the inference system B.

Lemma 6.11. If (E, ∅) ⊢*_B (∅, R) is a fair run and neither compose nor collapse are applicable, R is B-canonical.
Proof. According to Corollary 5.6, the TRS R is a B-complete presentation of E. Furthermore, R is left-reduced by definition as collapse is not applicable. For a proof by contradiction, suppose that R is not right-B-reduced. Hence, there exists a rule ℓ → r in R which can be modified to ℓ → r′ where r →^!_{R/B} r′. Note that r and r′ are not equivalent modulo B as R is terminating modulo B. From the B-completeness of R we obtain r →^!_R • ∼_B • ^!_R← r′. Since r′ is a normal form with respect to →_{R/B}, we have r →⁺_R • ∼_B r′, which contradicts our assumption that compose is not applicable.

Therefore, both A and B can produce canonical systems due to the availability of compose and collapse, which is also referred to as inter-reduction. If a given completion procedure lacks inter-reduction, it is an instance of elementary completion. Note that the original completion procedure by Knuth and Bendix [KB70] performs elementary completion. The next example shows that there is a big difference between A and B with respect to elementary completion.

Example 6.12.

and consider a corresponding run in A.
There is only one option to orient the only equation into a terminating rewrite rule, namely x

we can deduce the rule x • (y • 1) → x • y. In B restricted to elementary completion, the result of deduce would be the corresponding equation, which can be simplified to a trivial equation and therefore deleted. In A restricted to elementary completion, however, there are no means of removing or even altering the deduced rule. Hence, the rule will be used to deduce more critical pairs with the associativity axiom, which are again kept as rules. Clearly, this process cannot terminate as overlaps with the associativity axiom can increase the size of the left-hand sides of the rules without bound.
The previous example shows that for many examples, there will not be a finite run in A restricted to elementary completion. Hence, the usage of inter-reduction and the resulting definition of canonicity is crucial for the success of A in generating finite solutions to validity problems.

6.3. Uniqueness of B-Canonical Term Rewrite Systems. The main result of this section states that B-canonical TRSs which are compatible with the same B-compatible reduction order are unique up to right-B-equivalent variants. This property justifies the usage of the term canonical and motivates the computation of TRSs which are canonical modulo B as discussed in the last section. In order to prove this result, we start with an auxiliary lemma.

Lemma 6.13. Let R be a TRS which is right-B-reduced and let s be a reducible term which is minimal with respect to •▷.
Proof. Let ℓ → r be the first rule which is applied in the sequence s →⁺_R • ∼_B t, so s •⊵ ℓ. Since s is a minimal reducible term with respect to •▷, we have s .= ℓ. By definition, there is a renaming σ such that s = ℓσ. Since R is right-B-reduced, r is a normal form with respect to →_{R/B} and therefore also →_R. Closure of normal forms under renamings yields rσ ∈ NF(R). Hence, t ∼_B rσ and we conclude that s → t and ℓ → r are right-B-equivalent variants.
In contrast to its counterpart for standard rewriting in [HMSW19, Theorem 4.9], the following lemma needs termination modulo B as an additional precondition. Note that it still treats a more general case than the main result of this subsection (Theorem 6.15) as the Church-Rosser modulo B property is not required.

Lemma 6.14. TRSs which are normalization equivalent modulo B, terminating modulo B, left-reduced and right-B-reduced are unique up to right-B-equivalent variants.
Proof. Let R and S be TRSs which are normalization equivalent modulo B, terminating modulo B, left-reduced and right-B-reduced. As the argument is symmetric, we only show that every rule of R has a right-B-equivalent variant in S. Consider ℓ → r ∈ R. Note that ℓ and r are not B-equivalent since otherwise the cycle r ∼_B ℓ →_R r contradicts the fact that R is terminating modulo B. Furthermore, r ∈ NF(R) as R is right-B-reduced. Hence, normalization equivalence modulo B of R and S yields ℓ →⁺_S • ∼_B r. Moreover, the left-reducedness of R yields that ℓ is a minimal R-reducible term with respect to •▷. Now suppose there exists a rule ℓ′ → r′ ∈ S such that ℓ •▷ ℓ′. Right-B-reducedness of S yields r′ ∈ NF(S) and termination modulo B of S implies that ℓ′ and r′ are not B-equivalent. Together with normalization equivalence modulo B of R and S we obtain ℓ′ →⁺_R • ∼_B r′, which contradicts the fact that ℓ is a minimal R-reducible term with respect to •▷. Therefore, ℓ is a minimal S-reducible term with respect to •▷ and from Lemma 6.13 we obtain that ℓ → r is a right-B-equivalent variant of a rule in S, which concludes the proof.
In the proof of the following theorem we now just need to establish the preconditions of the previous lemma.

Proof. Let R and S be compatible with the B-compatible reduction order >. We show that R and S are normalization equivalent modulo B, which allows us to conclude the proof by an appeal to Lemma 6.14. As the argument is symmetric, we only show

Since S is terminating, there exists a term u such that t′ →^!_S u. From the equivalence modulo B of R and S we obtain t′ *

where + is an AC function symbol. There are two B-complete presentations of E consisting of one rule each:

While the rules of the two TRSs are right-B-equivalent variants, they are not variants. Note that both systems are compatible with the same B-compatible reduction order.
Note that R and R′ in the previous example can also be obtained with the inference system B. This shows that while the relation .=_∼ is motivated by the definition of the inference system A, the notion of right-B-equivalent variants naturally arises in completion modulo B for left-linear TRSs.

AC Completion
So far, the theoretical results have been generalized by using the equational theory B as a placeholder. In practice, however, this article is concerned with the particular theory AC. The results of this section allow us to assess the effectiveness of the inference system A in the setting of AC completion.

7.1. Limitations of Left-Linear AC Completion. In addition to the restriction to left-linear rewrite rules, the following example demonstrates another severe limitation of the inference system A previously unmentioned in the literature.

where and is an AC function symbol. There is only one way to orient each equation. Furthermore, there are no critical pairs between the resulting rewrite rules. Hence, using the inference system A we arrive at the intermediate TRS

and(0, 0) → 0    and(1, 1) → 1    and(0, 1) → 0

where the only possible next step is to deduce local cliffs. We will now show that this has to be done infinitely many times. Note that an AC-complete presentation R of E has to be able to rewrite any AC-equivalent term of a redex. Consider the infinite family of terms

s_0 = and(0, 1)
s_1 = and(and(0, x_1), 1)
s_2 = and(and(and(0, x_1), x_2), 1)

as well as

t_0 = 0
t_1 = and(0, x_1)
t_2 = and(and(0, x_1), x_2)

Clearly, s_n ↔*_{E ∪ AC} t_n for all n ∈ N and therefore also s_n ↓∼_R t_n for all n ∈ N, but this demands infinitely many rules in R: for each s_n there is an AC-equivalent term in which the constants 0 and 1 are next to each other, which allows us to rewrite it using the rule and(0, 1) → 0. However, with n the number of variables between these constants also increases, which requires R to have infinitely many rules since rewrite rules can only be applied before the representation modulo AC is changed.
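The divergence in this example can be made concrete with a few lines of Python. Terms are encoded as nested tuples whose first component is the function symbol (our own toy representation): no term s_n with n ≥ 1 contains the literal subterm and(0, 1), yet after flattening modulo AC the constants 0 and 1 always meet as arguments of the same and, so a rewrite step modulo AC could fire where the plain relation →_R cannot.

```python
AND = 'and'

def s(n):
    """Build s_n = and(and(...and(0, x1)..., xn), 1) from Example 7.1."""
    t = '0'
    for i in range(1, n + 1):
        t = (AND, t, f'x{i}')
    return (AND, t, '1')

def subterms(t):
    yield t
    if isinstance(t, tuple):
        for a in t[1:]:
            yield from subterms(a)

def has_syntactic_redex(t):
    """Is the literal redex and(0, 1) a subterm of t?"""
    return any(u == (AND, '0', '1') for u in subterms(t))

def ac_args(t):
    """Arguments of the top-level 'and' after flattening modulo AC."""
    if not (isinstance(t, tuple) and t[0] == AND):
        return [t]
    return [a for arg in t[1:] for a in ac_args(arg)]
```

Here has_syntactic_redex(s(n)) is False for every n ≥ 1, while 0 and 1 both occur in ac_args(s(n)) for every n, so each s_n needs a rule with a strictly larger left-hand side, forcing infinitely many rules in R.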
Note that there is nothing special about this example except the fact that it contains at least one equation which can only be oriented such that the left-hand side contains an AC function symbol where both arguments have "structure", i.e., both arguments contain a function symbol which is different from the original AC function symbol. As a consequence, the necessity of infinitely many rules applies to all equational systems which have this property. Needless to say, this means that for a large class of equational systems the corresponding AC-canonical presentation (in the left-linear sense) is infinite if it exists. This observation is in stark contrast to the properties of general AC completion as presented in the next section, which can complete the ES E from Example 7.1 into a finite AC-canonical TRS by simply orienting all equations from left to right. The following example shows that, given a nonempty context, the same effect can also be seen even when only one argument contains a different function symbol.
Example 7.2. Consider the ES E consisting of the equation s(p(x) + y) ≈ x + y where + is an AC function symbol. Note that orienting the equation from left to right is the only way to convert E into a terminating TRS R. The rule has no critical pairs with itself, but as in the previous example, adding critical pairs with AC does not terminate. This can be seen by considering the infinite family of terms

as well as

For a complete presentation R of E ∪ AC we have s_n ↓∼_R t_n for all n ∈ N, but this demands infinitely many rules in R as before.

7.2. General AC Completion. Inference systems for completion modulo an equational theory which are not restricted to the left-linear case usually need more inference rules than the ones already covered in this article. For general AC completion, however, there exists a particularly simple inference system which constitutes a special case of normalized completion [Mar96] and can be found in Sarah Winkler's PhD thesis [Win13, p. 109].
for some t and u. Performing deduce and simplify, we obtain:

1 /AC the claim is concluded.
Theorem 7.5. For every fair run (E, ∅)

In addition to the result of the previous theorem, the proof of Lemma 7.4 provides a procedure to construct a KB_AC run which "corresponds" to a given A run. In particular, this means that it is possible to switch from A to KB_AC at any point while performing AC completion. This is of practical relevance: assume that AC completion is started with A in order to avoid AC unification. If A gets stuck due to simplified equations which are not orientable into a left-linear rule, or it seems to be the case that the procedure diverges due to the problem described in Example 7.1, starting from scratch with KB_AC is not necessary. We conclude the section by illustrating the practical relevance of the simulation result with an example.
Example 7.6. Consider the ES E for abelian groups consisting of the equations

where • is an AC symbol. Note that the well-known completion run for non-abelian group theory is also a run in A: critical pairs with respect to the associativity axiom are deducible via local cliffs, non-left-linear intermediate rules are allowed and all (intermediate) rules are orientable with, e.g., AC-KBO. Hence, we obtain the TRS R′ consisting of the rules

and switch to KB_AC where we can collapse the redundant rules 4, 6, 7 and 9. A final joinability check of all AC critical pairs reveals that the resulting TRS R is an AC-complete presentation of abelian groups. Hence, the simulation result allows us to make progress with A even when it is doomed to fail. In particular, critical pairs between rules whose left-hand sides do not contain AC symbols do not need to be recomputed.

Implementation
The command-line tool accompll implements AC completion for left-linear TRSs based on the inference system A (Definition 4.1). It uses external termination tools instead of a fixed AC-compatible reduction order and is written in the programming language Haskell. The source code of the tool accompll is available on GitHub.² As input, the tool expects a file in the WST³ format describing the equational theory on which left-linear AC completion should be performed. The user can choose whether →_R, →_{R,AC} or →_{R/AC} is used for rewriting in the inference rules simplify and compose. Furthermore, the generation of critical pairs can be restricted to prime critical pairs by the primality criterion. Note that these options are facilitated by the theoretical results in the previous sections.
Another feature is the validity problem solving mode which solves a given instance of the validity problem for an equational theory E upon successful completion of E. This mode can be triggered by supplying a concrete equation s ≈ t as a command line argument in addition to the file describing E.
In the tool accompll, external termination tools do much of the heavy lifting. In particular, the user can supply the executable of an arbitrary termination tool as long as the output starts with YES, MAYBE, NO or TIMEOUT (all other cases are treated as an error). The input format for the termination tool can be set by a command line argument. The available options are the WST format as well as the XML format of the Nagoya Termination Tool [YKS14].⁴ Since starting a new process for every call of the termination tool causes a lot of operating system overhead, the tool supports an interactive mode which allows it to communicate with a single process of the termination tool in a dialogue style. Here, the only constraint for the termination tool is that it accepts a sequence of termination problems separated by the keyword (RUN). This is currently only implemented in an experimental version of the Tyrolean Termination Tool 2 (TTT2) [KSZM09], but we hope that more termination tools will follow as this approach has a positive effect on the runtime of completion with termination tools while demanding comparatively little implementation effort. The remainder of this section discusses important aspects and properties of the implementation.

8.1. Termination Tools. The reduction order is a critical input parameter of a completion procedure. Finding an appropriate order can be very challenging as the equations which are generated during a completion run are usually not known in advance. Hence, the nature of this problem is different from the standard termination problem where the input is one specific TRS.
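The contract with the external termination tool described above is small enough to sketch: classify the tool's answer by the leading keyword of its output, treating everything else as an error. The function name below is our own, not part of accompll.

```python
def classify_verdict(output):
    """Map a termination tool's raw output to one of the accepted
    verdicts YES, MAYBE, NO, TIMEOUT; anything else is an error."""
    first_line = output.lstrip().splitlines()[0] if output.strip() else ''
    for verdict in ('YES', 'MAYBE', 'NO', 'TIMEOUT'):
        if first_line.startswith(verdict):
            return verdict
    return 'ERROR'
```

Only a YES verdict on the current constraint system justifies keeping an orientation; MAYBE and TIMEOUT give no usable information, and a completion tool must treat them as a failed termination check.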
While it is well known that termination is an undecidable property of TRSs, a number of termination tools have been developed which can solve the termination problem automatically in many practical cases. The usage of termination tools in completion has been pioneered by Wehrman et al. [WSW06]. Intuitively, the difficulty of finding an appropriate reduction order is replaced by a sequence of calls to a termination tool. This allows us to define a completion procedure which does not depend on a reduction order as input. An important ingredient for completion with termination tools is a constraint system which keeps track of all rules which have been produced during the run. Checking termination of the constraint system as opposed to the current TRS guarantees that there exists a single reduction order which can be used for the whole run. This is important as correctness is lost when the reduction order is changed during a completion run [SK94]. The next definition extends the inference system A by a constraint system.

Definition 8.1. The inference system A_TT transforms triples consisting of an ES E and TRSs R, C over the common signature F. Except for orient, the inference rules are trivial extensions of the rules in A where the constraint system C is not changed.
) denotes a step in the inference system A_TT and a sequence of steps starting with (E, ∅, ∅) constitutes a run for E. We will now prove that A_TT is sound and complete with respect to A.

Lemma 8.2. For every run there exists an equivalent run which uses the reduction order →⁺_{C_n/B}.

Proof. First of all, note that →⁺_{C_n/B} is a B-compatible reduction order as it is transitive, closed under contexts and substitutions, and the B-termination of C_n is established by induction on the length of the run in A_TT. Since the constraint system is only altered and used in the orient rule, all other steps are valid steps in A by just removing the constraint system. Since C_n includes all rules which are oriented during the run, the corresponding orient steps in A also succeed.

Lemma 8.3. For every run using the B-compatible reduction order > there exists an equivalent run

C_n/B ⊆ > as the constraint system is just carried along without any further restrictions in all rules except orient. Now assume that the step is an application of orient on the equation s ≈ t. We have E_n = E_{n−1} \ {s ≈ t} and R_n = R_{n−1} ∪ {v → w} where v > w and {v, w} = {s, t}.
) by an application of orient and we obtain (E_0, R_0, C_0) ⊢*_{A_TT} (E_n, R_n, C_n) as well as →⁺_{C_n/B} ⊆ > from the induction hypothesis together with v > w and the definition of C_n.
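The role of the constraint system C in orient can be sketched in a few lines of Python. This is a structural sketch only: `terminates` stands in for the call to the external termination tool, equations and rules are plain pairs, and all names are our own.

```python
def orient(eq, E, R, C, terminates):
    """One orient step of A_TT (sketch): an orientation of eq = (s, t)
    is accepted only if the extended constraint system still passes
    the termination check, which guarantees that a single reduction
    order exists for the whole run."""
    s, t = eq
    for rule in ((s, t), (t, s)):
        if terminates(C | {rule}):
            return E - {eq}, R | {rule}, C | {rule}
    return None  # orient is not applicable to this equation
```

Note that the rule is added to C even though later compose or collapse steps may remove it from R again; termination is always checked for the accumulated constraint system, never for the current TRS alone.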
Note that the presented translation between runs of A and A_TT allows us to speak about the fairness of runs in A_TT since this simply means that the corresponding run in A is fair.
Lemma 8.2 shows that the usage of termination tools is a sound extension which allows us to perform completion without constructing an appropriate reduction order beforehand. Furthermore, the usage of termination tools does not affect the applicability of our completion procedure due to the previous completeness result (Lemma 8.3). However, implementing completion with termination tools such that it does not affect the applicability in practice is a highly nontrivial task. The reason for this is that without a concrete reduction order, both versions of the orient rule may be applicable. This yields a potentially huge search space as orienting a rule in the "wrong" direction may cause completion to fail or diverge. In [WSW06] this problem is solved by traversing the search space, which is a binary tree, with some best-first strategy based on a cost function which takes the number of equations, rules and critical pairs into account.
The state of the art for solving this problem efficiently is multi-completion with termination tools due to Winkler et al. [WSMK13]. This method traverses the whole search space while being as efficient as possible by sharing computations between different nodes in the binary tree which represents the search space. The method is implemented in the tool mkbTT. As the implementation of this approach is a major effort, we adopt the strategy used by the automatic mode of the Knuth-Bendix Completion Visualizer (KBCV) [SZ12]. Instead of traversing the whole search space, KBCV runs two threads in parallel where one thread prefers to orient equations from left to right while the other one prefers to orient from right to left. If one of the threads finishes successfully, the corresponding result is reported. Completion fails if both threads fail. Needless to say, this compromises completeness, but it is a trade-off which works well in many practical cases. In particular, KBCV can also complete systems which mkbTT cannot within a given time constraint [SZ12].

8.2. Strategy. The presentation of A_TT as an inference system allows implementations to use the given rules in any order as long as the produced run is fair. This section is about the strategy for applying rules of A_TT which is employed by the tool accompll. From now on, we specialize to the equational theory AC as accompll was developed for this specific case.
The used strategy is based on Huet's completion procedure [BN98, Section 7.4]. An important property of this procedure is that the rules simplify, delete, compose, and collapse are applied eagerly in order to keep the intermediate ESs and TRSs as small as possible. Usually, this has a positive effect on the runtime of the completion process. Unlike Huet's completion procedure, in the employed strategy the orientation of a rule is always directly followed by the computation of all possible critical pairs which involve this rule. Furthermore, orient is only applied once per iteration to the smallest equation with respect to the term size. This modified version of Huet's completion procedure is implemented e.g. in KBCV, and the flow chart depicted in Figure 1 is based on [SZ12]. However, in contrast to standard completion, we have to add some of the critical pairs as rules in A_TT. Hence, we apply deduce on the selected equation before compose and collapse are applied exhaustively. Moreover, we need to keep track of a separate set of pending rules (P) as the eager recursive computation of critical pairs between rules and AC axioms might unnecessarily lead to non-termination of the completion procedure. We will now describe the flow chart depicted in Figure 1 in detail. As already mentioned, accompll also keeps track of a list of pending rules P.
Intuitively, a pending rule is like a normal rule with the exception that critical pairs involving it have not yet been computed and it is not used to collapse other rules. Hence, accompll works on a quadruple of an ES and three TRSs (E, P, R, C) which are processed by one main loop. First of all, simplify is applied exhaustively such that both sides of every equation are normal forms with respect to E and P. This preliminary step may already be enough to join some equations, so after that every AC-equivalent equation is removed with delete. If no equations or pending rules are left, we are done. Otherwise, an equation ℓ ≈ r or pending rule ℓ → r is selected which is minimal with respect to |ℓ| + |r|, where |·| denotes the size of a term. If it is a rule, orient is just the identity function. Otherwise, orient is applied with the orientation preference of the given thread, i.e., it first orients the equation in the preferred direction and only tries the other option in case of failure. An important feature of the implementation of orient is that it can be postponed, i.e., if one equation is not orientable in either direction, the next one is tried. If orient never succeeds, the thread terminates in a failure state.
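The selection step just described can be sketched directly. Terms are encoded as nested tuples whose first component is the function symbol, and the helper names are our own; pending rules and equations compete on equal footing for the minimal combined size.

```python
def size(t):
    """|t|: number of function symbol, constant and variable occurrences."""
    return 1 if isinstance(t, str) else 1 + sum(size(a) for a in t[1:])

def select(E, P):
    """Pick a pending rule or equation (l, r) that is minimal with
    respect to |l| + |r|; the tag records which kind was chosen."""
    candidates = [('rule', lr) for lr in P] + [('eq', lr) for lr in E]
    return min(candidates, key=lambda c: size(c[1][0]) + size(c[1][1]))
```

With E = {f(g(x)) ≈ x, f(x) ≈ x} and no pending rules, the smaller equation f(x) ≈ x (size 2 + 1 = 3) is selected before f(g(x)) ≈ x (size 3 + 1 = 4).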
In any case, a successful application of orient yields some rule ρ. Next, deduce is used to produce the sets CP({ρ}) and CP±(R, {ρ}), which are added to E, as well as CP±(AC±, {ρ}), which is added to P. After that, all rules in P ∪ R are exhaustively composed such that their right-hand sides are normal forms with respect to P ∪ R. Finally, collapse is applied exhaustively in R with respect to ρ and in P with respect to {ρ} ∪ R. Here, we differentiate between R and P because at this point the left-hand sides of R are known to be normal forms of the remaining rules of R while this cannot be ensured for the deduced rules in P. Note that we do not collapse P with respect to P as it would require checking rules for equality.
In order to produce a left-linear system, orient is only applied if the left-hand side of the resulting rule is linear, which makes all intermediate TRSs left-linear. Fairness only demands left-linearity for the resulting TRS, but improving this aspect was not considered, as it is unclear based on which criteria non-left-linear rules should be turned back into equations. Moreover, keeping each intermediate TRS left-linear makes intermediate rules stemming from critical pairs left-linear by definition: since the AC axioms are linear, overlaps between the AC axioms and a left-linear TRS always produce linear terms. Thus, the left-linearity of pending rules does not have to be checked, as rewriting with linear rules preserves linearity. We are now ready to prove the main result of this section.
Theorem 8.4. The TRSs produced by the tool accompll are AC-canonical presentations of the ES provided as input.
Proof. Suppose that, given an ES E, accompll outputs a TRS R_n, and consider the thread which produced this result in n iterations of the main loop as depicted in the flow chart (Figure 1). In particular, let E_i, P_i, R_i and C_i denote the respective values of E, P, R and C at the decision node in the flow chart in the i-th iteration. Note that E_0 = E, P_0 = R_0 = C_0 = ∅ and E_n = P_n = ∅. It is immediate from the flow chart and its textual description that this sequence constitutes a run for E. Furthermore, all prime critical pairs of R_n have been considered as an intermediate equation, and all prime critical pairs between R_n and AC± have been considered as an intermediate rule. The fact that R_n is an AC-complete presentation of E now follows from fairness (Definition 4.2) together with Lemma 8.2 and Theorem 4.11. Finally, a straightforward induction argument shows that the statements (1) ℓ ∈ NF(R_i) for every ℓ → r ∈ R_i ∪ P_i and (2) r ∈ NF(R_i ∪ P_i) for every ℓ → r ∈ R_i ∪ P_i hold for 0 ⩽ i ⩽ n. Together, (1) and (2) show that R_n is also AC-canonical: just as in the proof of Lemma 6.11, we conclude that if each rule of an AC-complete system is right-reduced, it is also right-AC-reduced.
8.3. Implementation Details. The implementation of the tool accompll is based on the Haskell term-rewriting library [FAS13], which takes care of most of our needs regarding term rewriting, except for rewriting modulo AC, prime critical pairs and the computation of normal forms. The following paragraphs describe selected implementation details.
Rewriting to normal forms. A naive implementation of the computation of normal forms is highly inefficient since, in general, the whole term has to be traversed for every rewrite step. As a trade-off between the time-consuming implementation of sophisticated term indexing techniques and the naive implementation, we employ a bottom-up construction of normal forms by innermost rewriting of marked terms, where normal form positions are remembered.
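A minimal sketch of the bottom-up scheme in Haskell. The names `Term`, `step` and `nf` are our illustrative choices, and the Peano-addition rules serve only as example input; this sketch captures the innermost, bottom-up order, while the actual implementation additionally marks normal-form positions so they are not re-traversed after a root step:

```haskell
data Term = Var String | Fun String [Term] deriving (Eq, Show)

-- One root rewrite step for an example rule set (Peano addition):
--   plus(0, y) -> y        plus(s(x), y) -> s(plus(x, y))
step :: Term -> Maybe Term
step (Fun "plus" [Fun "0" [], y])  = Just y
step (Fun "plus" [Fun "s" [x], y]) = Just (Fun "s" [Fun "plus" [x, y]])
step _                             = Nothing

-- Bottom-up normalization: all proper subterms are normalized before a
-- root step is attempted, so a failed root step certifies that the whole
-- term is a normal form.
nf :: Term -> Term
nf (Var x)    = Var x
nf (Fun f ts) =
  let t = Fun f (map nf ts)   -- arguments are normal forms here
  in maybe t nf (step t)      -- recurse only if a root step fired
```

For example, `nf` applied to plus(s(0), s(0)) yields s(s(0)) after one root step and one recursive normalization.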
AC equivalence of terms. Checking whether two terms are equivalent modulo AC is needed for the implementation of the inference rule deduce. To this end, an equation s ≈ t is transformed into a canonical form s′ ≈ t′ in which nested applications of AC symbols are flattened into a single n-ary application for arbitrary n, and the arguments are sorted with respect to some total order on terms. Then s ∼_AC t if and only if s′ = t′. As an example, the terms f(x, f(y, g(f(z, a)))) and f(f(y, x), g(f(a, z))) with f ∈ AC are AC equivalent because they have the same canonical form f(g(f(a, z)), x, y) with respect to the lexicographic order on the representation of terms as strings of ASCII symbols.
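A minimal Haskell sketch of this canonicalization. All names are our illustrative choices; the derived `Ord` instance stands in for the ASCII-based lexicographic order, so the concrete canonical forms may differ from the one shown above, but the equivalence check is unaffected because both sides use the same order:

```haskell
import Data.List (sort)

data Term = Var String | Fun String [Term] deriving (Eq, Ord, Show)

-- Assumed set of AC symbols for this example.
acSymbols :: [String]
acSymbols = ["f"]

-- Flatten nested applications of an AC symbol into one argument list.
flatten :: String -> Term -> [Term]
flatten f (Fun g ts) | g == f = concatMap (flatten f) ts
flatten _ t                   = [t]

-- Canonical form: canonicalize arguments bottom-up, then flatten and
-- sort the arguments of AC symbols with respect to a fixed total order.
canon :: Term -> Term
canon (Var x) = Var x
canon (Fun f ts)
  | f `elem` acSymbols = Fun f (sort (concatMap (flatten f) (map canon ts)))
  | otherwise          = Fun f (map canon ts)

-- Two terms are AC equivalent iff their canonical forms are equal.
acEquiv :: Term -> Term -> Bool
acEquiv s t = canon s == canon t
```

On the example above, `acEquiv` reports that f(x, f(y, g(f(z, a)))) and f(f(y, x), g(f(a, z))) are AC equivalent.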
Normal forms in rewriting modulo AC. As already mentioned in Section 2, a direct implementation of →_{R/AC} cannot be efficient. In the following, we prove that, when we are only interested in rewriting to normal forms, it suffices to implement the relation →_{R,AC} of Peterson and Stickel [PS81], which is easier to compute but relies on AC matching. Although more efficient implementations exist, we used the AC matching algorithm due to Contejean [Con04] as it has been certified. The result we are about to prove (Corollary 8.8) is a special case of more general results for conditional rewriting modulo an equational theory by Meseguer [Mes17, Corollary 3]. However, to the best of our knowledge, the literature does not contain a direct proof of the required result, despite its importance for effective implementations also in the unconditional case. Therefore, in the following, we give a detailed and direct proof of the result.
Let R be a TRS and let f(u, v) → r be a rule in R where f is an AC function symbol.
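Following Peterson and Stickel [PS81], the extension of such a rule adds a fresh variable below the AC root:

```latex
f(f(u, v), z) \to f(r, z) \qquad \text{($z$ a fresh variable)}
```

The extended rule compensates for rewriting below a flattened AC symbol: an AC-equivalent copy of the redex f(u, v) may occur inside a larger f-headed term, and the extended rule matches it there.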
The TRS consisting of R together with all extensions of rules in R is denoted by R^e. We will now prove that the relations →^*_{R/AC} · ∼_AC and →^*_{R^e,AC} · ∼_AC coincide. To this end, we show that t →^p_AC s →^q_{R^e,AC} u implies t →_{R^e,AC} · =_{AC}← u, by induction on s. Let ℓ_1 → r_1 and ℓ_2 → r_2 be the rules employed in the left and right steps, respectively. We distinguish four cases and use the facts that ∼_AC · →^ϵ_{R,AC} and →^ϵ_{R,AC} coincide and that →_AC is contained in ∼_AC.
(1) If q = ϵ then t →_{R^e,AC} u. Thus, the claim holds.
(3) If p = ip′ and q = jq′ with i, j ∈ N then we further distinguish two subcases. If i = j then t|_i →_AC s|_i →_{R^e,AC} u|_i and we obtain t →_{R^e,AC} · =_{AC}← u with the help of the induction hypothesis. If i ≠ j then we clearly have t →^q_{R^e,AC} · ^p_{AC}← u.
(4) In the final case, p = ϵ and q ∈ Pos_F(ℓ_1) with q ≠ ϵ. Since Pos_F(ℓ_1) ⊆ {ϵ, 1}, the rule [...] ∈ Var(ℓ). Define the substitution τ as follows: [...]
The other direction follows from the definition of →_{R,AC} as well as the fact that →^n_{R^e,AC} ⊆ →^*_{R/AC} · ∼_AC for all n ⩾ 0, which we show by induction on n. If n = 0 then the claim is trivial. If n > 0 then the claim is verified as follows:

Experimental Results
We evaluated the performance of our tool accompll on a problem set containing 52 ESs. The problem set is based on the one used in [WM11] and has been extended by further examples from the literature as well as handcrafted examples. Of the 52 ESs, 10 contain equations which cannot be oriented into a left-linear rule. Furthermore, 6 problems are ground, which means that accompll cannot find a finite solution by Example 7.1. This leaves 36 problems for which accompll may find an AC-complete presentation. However, for some of these ESs, simplified equations where both terms are non-linear may be deduced, which causes accompll to get stuck. Furthermore, many interesting problems exhibit the properties described in Examples 7.1 and 7.2 and are therefore out of reach for our method. Nevertheless, there are 7 examples in the problem set where left-linear completion with AC axioms is preferable to general AC completion due to significantly better performance. The experiments were performed on an Intel Core i7-7500U running at a clock rate of 2.7 GHz with 15.5 GiB of main memory. Our tool accompll was used with the termination tool TTT2 as well as an experimental version (denoted by TTT2e) which allows our tool to communicate a sequence of termination problems without having to start a new process every time, as described in the preceding section.
In Table 1, columns (1) show the execution time in seconds, where ∞ denotes that the timeout of 60 seconds has been reached and ⊥ denotes failure of completion. Columns (2) state the number of rules of the completed TRS. In Example 4.3, the equations just have to be oriented, and one additional rule has to be added in the case of left-linear AC completion. Hence, all three systems can handle this problem easily. Examples 7.1 and 7.6 show the two main limitations of left-linear AC completion: it diverges on problems which contain an AC symbol where both arguments have "structure", and non-left-linear presentations are out of reach. For general AC completion and normalized completion, respectively, these examples pose no problem. The remaining examples show that the absence of AC unification can make left-linear completion more practical than general AC completion. In Table 1, we can see that mkbTT needs considerably more time to complete E, and MaedMax even times out after 60 seconds. By inspecting R, it is easily seen that the operations as well as the unit elements are equivalent. Furthermore, the concise presentation of E as a complete TRS is facilitated by our canonicity results as well as the implementation of inter-reduction (collapse and compose). Using the inference system B without inter-reduction and with the same AC-compatible reduction order, we obtain a much larger complete but non-canonical presentation of E which extends R: together with the AC axioms for +. This is an extension of [Ave95, Exercise 4.2.4(b)] (the addition operation on lists) with the standard append and reverse functions on lists. We added the involution axiom for list reversal in order to generate critical pairs. Our tool accompll produces a complete presentation in less than a second by orienting all equations in E from left to right and adding the following rules: together with the AC axioms for + and ×, defining addition, multiplication, cutoff subtraction and round-up division on the natural numbers. Our tool accompll produced a complete presentation in less than a second by orienting all equations in E from left to right and adding the following rules:

x + 0 → x        x × 0 → 0
x + s(y) → s(y + x)        x × s(y) → (y × x) + x

Since round-up division cannot be handled by simplification orders [Der79], this example also shows the merits of using termination tools in completion. Note that it takes mkbTT much longer to complete E (more than 9 seconds) and that MaedMax times out on this problem after 60 seconds.
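As a quick sanity check, the four displayed rules for addition and multiplication can be executed directly as a rewrite system. This is a self-contained sketch with our own illustrative names (`Term`, `step`, `nf`, `num`), not output of accompll:

```haskell
data Term = Var String | Fun String [Term] deriving (Eq, Show)

-- Root steps for the completed system above:
--   x + 0 -> x            x * 0 -> 0
--   x + s(y) -> s(y + x)  x * s(y) -> (y * x) + x
step :: Term -> Maybe Term
step (Fun "+" [x, Fun "0" []])  = Just x
step (Fun "+" [x, Fun "s" [y]]) = Just (Fun "s" [Fun "+" [y, x]])
step (Fun "*" [_, Fun "0" []])  = Just (Fun "0" [])
step (Fun "*" [x, Fun "s" [y]]) = Just (Fun "+" [Fun "*" [y, x], x])
step _                          = Nothing

-- Innermost normalization with these rules.
nf :: Term -> Term
nf (Var x)    = Var x
nf (Fun f ts) = let t = Fun f (map nf ts) in maybe t nf (step t)

-- Peano numerals for convenience.
num :: Int -> Term
num 0 = Fun "0" []
num n = Fun "s" [num (n - 1)]
```

With these rules, 2 × 2 normalizes to the numeral 4 and 2 + 3 to the numeral 5, without any AC matching.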
Despite the successes in solving the previous three examples quickly, the severity of the limitations of left-linear AC completion is reflected in the total number of solved problems, as shown in Table 1. In particular, the problem set does not contain an ES which is only completed by accompll. However, given Theorem 7.5, this is not unexpected. Another noteworthy but unsurprising fact is that complete systems produced by accompll tend to have more rules, since every rule needs different versions of its left-hand side to facilitate rewriting without AC matching. The complete results are available on the tool's website.6 In addition to the full details for the experiments with TTT2 as shown in Table 1, the website also contains additional experiments with the termination tool MU-TERM [GL20]. For accompll, using MU-TERM instead of TTT2 does not change the set of solved problems, but it usually takes more time to complete ESs. The situation is different for mkbTT: using MU-TERM instead of TTT2 internally, it completes fewer systems overall (26 instead of 38 out of 52). However, five of these systems are not completed by mkbTT with TTT2, and three of them are not completed by any other tool configuration. We conclude with some additional comments on the results.
▷ The results are not cluttered with detailed results for the available options regarding prime critical pairs and the concrete rewrite relation used for simplify and compose, since these options did not lead to significant runtime differences. Instead, the default options (no prime critical pairs and the rewrite relation →_R) were used for the experiments.
▷ Due to the incompleteness of the used approach for completion with termination tools, some equations in the problems A95_ex4_2_4a.trs as well as sp.trs had to be reversed in order to get appropriate results. Note that this does not distort the experimental results for left-linear AC completion in general, as the problem lies in the particular implementation of completion with termination tools.

Conclusion
This article consolidates and extends existing work on left-linear B-completion [Hue80, Bac91, Ave95] by using and adapting elegant proof techniques which have been put forward for standard completion in [HMSW19]. This approach allowed for improvements of existing results: Huet's result could be strengthened to prime critical pairs in Theorem 3.15, which in turn improved the definition of fair runs in the inference system A (Definition 4.2). Furthermore, the usage of peak-and-cliff decreasingness instead of proof orderings simplified the proof of the correctness result for A (Theorem 4.11). The limitation to finite runs also facilitated the removal of the encompassment condition, which allows the inference system to produce smaller B-complete systems, as we showed in Example 4.5.
In addition, the relationship between the two existing inference systems from the literature has been investigated thoroughly, its core part being a simulation result (Theorem 5.5) which states that any fair run in B can be simulated by a fair run in A. With this novel simulation result, completion with B can be reduced to completion with A in the case of finite runs. Furthermore, the presentation of novel canonicity results facilitates a concrete definition of minimal complete systems in our setting. Since the inference system A adds critical pairs stemming from overlaps between (intermediate) rules and equations in B, its termination relies heavily on inter-reduction and therefore canonicity (Example 6.12). The final theoretical contribution of this article is a formal presentation of the correspondence between left-linear AC completion and general AC completion through another simulation result (Theorem 7.5).
The tool accompll is the first implementation of left-linear AC completion. Our novel results on canonicity define in which way the systems produced by accompll are minimal and unique. Unfortunately, the experimental results show that, despite the practical advantages of avoiding AC unification and deciding validity problems with the normal rewrite relation, left-linear AC completion often needs infinitely many rules and therefore diverges. This problem does not seem to be mentioned in the literature. However, the possible speedup due to the avoidance of AC unification reported in the experimental results, as well as the already mentioned simulation result for general AC completion, shows that left-linear AC completion also has merits in practice.
We conclude by giving some pointers for future work. First of all, the merits of our novel simulation result for general AC completion could be evaluated experimentally by providing an implementation. While accompll adopts the two-thread version of multi-completion [SZ12], we anticipate that left-linear AC completion can also be effectively implemented by a variant of maximal completion that aims to find a canonical system [SW15, Hir21]. Another interesting research direction is normalized completion for the left-linear case. If successful, this would facilitate the treatment of important cases such as abelian groups despite the restriction to left-linear TRSs. Furthermore, a formalization of the established theoretical results is desirable. To that end, the existing Isabelle/HOL formalization from [HMSW19] is a perfect starting point, as some results of this article are extensions of the results for standard rewriting presented there.
b by induction on n. If n = 0 then a = b and therefore also a ↓∼ b. If n > 0 then a ⇔ c ⇔^{n−1} b for some c. The induction hypothesis yields c ↓∼ b or c ↼⇀ b. In the latter case we are already done because ⇔ · ↼⇀ ⊆ ↼⇀. In the former case, we distinguish three subcases: a → c, a ← c or a ∼ c. If a → c, we immediately obtain a ↓∼ c. For the remaining two cases, note that there exists a k such that c →^k · ∼ · →^* b. We continue with a case analysis on k:
▷ k = 0: If a ← c we have a ← c ∼ c′ →^* b for some c′. Now either c = c′ and a ↓∼ b, or c · ∼ c′ and therefore a ↼⇀ b. If a ∼ c we have a ↓∼ b because ∼ is transitive.
▷ k > 0: If a ← c then there exists a c′ such that a ← c → c′ ⇔^* b and therefore a ↼⇀ b. Finally, if a ∼ c then a ∼ c → c′ ⇔^* b for some c′. If a = c then we obtain a ↓∼ b from the induction hypothesis, as there is a conversion between a and b of length n − 1. Otherwise, a ∼ · c and therefore a ↼⇀ b.
for all a ∈ A. Here ←a→ (←a∼) denotes the binary relation consisting of all pairs (b, c) such that a → b and a → c (a → b and a ∼ c). Furthermore, ⇔^*_{∨a} denotes the binary relation consisting of all pairs of elements which are connected by a conversion where each step is labeled by an element smaller than a.
Corollary 3.8. Every ARS that is source decreasing modulo ∼ is Church-Rosser modulo ∼.

the rules ℓ → r and ℓ′ → r′ are not right-B-equivalent variants by the definition of ∼R. By definition, ·⊵ = ·▷ ∪ .=, so we continue with a case analysis on ℓ ·⊵ ℓ′. If ℓ ·▷ ℓ′, the induction hypothesis yields ℓ′ ∉ NF([...]). If ℓ .= ℓ′ then by definition there exists a renaming σ such that ℓ = ℓ′σ. From the right-B-reducedness of ∼R we conclude that r and r′ are normal forms with respect to →_{∼R/B}. Since normal forms are closed under renamings, this also holds for r′σ. Together with r ∼R← ℓ = ℓ′σ →∼R r′σ and the Church-Rosser modulo B property of ∼R, we obtain r ∼B r′σ. Thus, ℓ → r and ℓ′ → r′ are right-B-equivalent variants. This contradicts the definition of ∼R, so this case cannot happen.
Theorem 6.10. If R is a B-complete TRS then ∼•R is a TRS which is normalization equivalent modulo B, conversion equivalent modulo B and canonical modulo B.
Proof. Let R be a B-complete TRS. By definition, [...]. Since R and ∼R have the same left-hand sides, their normal forms coincide. An application of Lemma 6.3 yields that ∼R is B-complete and normalization equivalent modulo B to R. Furthermore, NF([...]). Now, another application of Lemma 6.3 yields that ∼•R is also B-complete and normalization equivalent modulo B to R. The TRS ∼R is right-B-reduced by definition, hence ∼•R is also right-B-reduced. Together with the already established fact that NF([...]).
Theorem 6.15. Let R and S be TRSs which are equivalent modulo B and canonical modulo B. If R and S are compatible with the same B-compatible reduction order then R .= ∼S.

Proof.
With a straightforward induction argument, we obtain the run (E, ∅) ⊢^*_{KB_AC} (∅, R′) as well as R ⊆ →^+_{R′/AC} (∗) from Lemma 7.4. Furthermore, AC termination of R′ and ↔^*_{E ∪ AC} = ↔^*_{R′ ∪ AC} (∗∗) are easy consequences of the definition of KB_AC. AC-completeness of R follows from fairness of the run in A and Theorem 4.11. For the Church-Rosser modulo AC property of R′/AC, consider a conversion s ↔^*_{R′ ∪ AC} t. From (∗∗) we obtain s ↔^*_{E ∪ AC} t and therefore s →

Example 9.1. The Eckmann-Hilton argument [EH62] considers the equational theory E consisting of

0 ⊕ a ≈ a        1 ⊗ a ≈ a        (a ⊕ b) ⊗ (c ⊕ d) ≈ (a ⊗ c) ⊕ (b ⊗ d)        a ⊕ 0 ≈ a        a ⊗ 1 ≈ a

and proves 0 ≈ 1. Moreover, it establishes that ⊕ and ⊗ coincide and are associative as well as commutative. If we assume that ⊕ and ⊗ are AC symbols, accompll produces the following complete presentation R of E in less than a second:

b ⊕ a → a ⊗ b        1 ⊗ a → a        0 → 1        a ⊗ 1 → a

0 + x → x        rev(append(l, c(x, nil))) → c(x, rev(l))        s(x) + y → s(y + x)

It takes mkbTT more than 60 seconds to solve this problem, and MaedMax terminates with an error message.

Example 9.3. Consider the ES E consisting of

0 + x ≈ x        0 × x ≈ 0
s(x) + y ≈ s(x + y)        s(x) × y ≈ (x × y) + y
0 − x ≈ 0        (x + y) × z ≈ (x × z) + (y × z)
x − 0 ≈ x        div(0, s(y)) ≈ 0
s(x) − s(y) ≈ x − y        div(s(x), s(y)) ≈ s(div(x − y, s(y)))

CP±(R, B±) u. In the former case we are done as t ▽_s u ▽_s u. For the latter case we further distinguish between the two subcases t →_{CP(R, B±)} u and u →_{CP(B±, R)} t. Note that this list of subcases is exhaustive due to the direction of the local cliff. If t →_{CP(R, B±)} u, then t ▽_s · ▽_{∼s} u follows from Lemma 3.13(1) and closure of rewriting under contexts and substitutions. If u →_{CP(B±, R)} t, u ▽
Theorem 3.15. A left-linear TRS R which is terminating modulo B is Church-Rosser modulo B if and only if PCP(R) ∪ PCP±(R, B±) ⊆ ↓∼_R.
Proof. The only-if direction is trivial. For a proof of the if direction, we show that R is source decreasing; the Church-Rosser property modulo B is then an immediate consequence of Corollary 3.8. From the termination of R modulo B we obtain the well-founded order [...], which contradicts the irreflexivity of >. If t′ ≠ u, we also obtain a contradiction to the irreflexivity of > from u ∼_B t′ →^!_S u together with termination modulo B of S.
Hence, t′ = u, which means that t′ ∈ NF(S). Equivalence of R and S modulo B yields s ↔^* t′, from which we obtain s →^!_S · ∼_B t′ by B-completeness of S and t′ ∈ NF(S). Finally, t ∼_B t′ establishes s →^!_S · ∼_B t as desired. The previous result cannot be strengthened to literal similarity, as the following counterexample shows.
Example 6.16. Consider the ES E consisting of the single equation [...]
Proof. Lemma 8.5 generalizes to ^*_{AC}← · →_{R^e,AC} ⊆ →_{R^e,AC} · ^*_{AC}← by a straightforward induction argument. From →_AC ⊆ ^*_{AC}← we obtain ∼_AC ⊆ ^*_{AC}←, and hence the claim follows. The relations →_{R/AC} and →_{R^e,AC} · ∼_AC coincide.

Table 1: Experimental results on 52 problems (excerpt)

Table 1 compares the two configurations of accompll with the normalized completion [Mar96] mode of mkbTT [WM13] and the AC completion mode of MaedMax [Win19] on selected examples. The tool mkbTT is the original implementation of multi-completion with termination tools [WSMK13]. MaedMax, on the other hand, implements maximal completion, which makes use of MaxSAT/MaxSMT solvers instead of termination tools in order to avoid using concrete reduction orders as input. To the best of our knowledge, there is no other comparable completion tool which supports AC axioms. Since normalized completion subsumes general AC completion, a comparison with the aforementioned modes of both systems allows us to assess the effectiveness of accompll with respect to the state of the art in AC completion. Note that both normalized completion and general AC completion use AC unification.