The Power Word Problem in Graph Products

The power word problem of a group G asks whether an expression p_1^{x_1} · · · p_n^{x_n}, where the p_i are words and the x_i binary encoded integers, is equal to the identity of G. We show that the power word problem in a fixed graph product is AC^0-Turing-reducible to the word problem of the free group F_2 and the power word problems of the base groups. Furthermore, we look into the uniform power word problem in a graph product, where the dependence graph and the base groups are part of the input. Given a class of finitely generated groups C, the uniform power word problem in a graph product can be solved in AC^0(C_=L^{UPowWP(C)}). As a consequence of our results, the uniform knapsack problem in graph groups is NP-complete.


Introduction
From a power word (u_1, x_1, u_2, x_2, . . . , u_n, x_n) one can easily (e. g., by a uAC^0-reduction) compute a straight-line program for the word u_1^{x_1} u_2^{x_2} · · · u_n^{x_n}. In this sense, the power word problem is at most as difficult as the compressed word problem. On the other hand, both power words and straight-line programs achieve exponential compression in the best case; so the additional difficulty of the compressed word problem does not come from a higher compression rate but rather from the fact that straight-line programs can generate more "complex" words.
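To make this reduction concrete, here is a small illustrative sketch (not taken from the paper) that builds a straight-line program for u_1^{x_1} · · · u_n^{x_n} using square-and-multiply, so each power u_i^{x_i} needs only O(log x_i) productions. The dict-based SLP encoding, the function names, and the restriction to non-negative exponents are our own simplifications.

```python
def slp_for_power_word(factors):
    """factors: list of (word, exponent) with exponent >= 0.
    Returns (productions, start): productions maps a nonterminal (int)
    to a list of symbols, where a symbol is a letter (str) or a nonterminal."""
    prods = {}
    counter = [0]

    def new(rhs):
        counter[0] += 1
        prods[counter[0]] = rhs
        return counter[0]

    def power(sym, x):
        # square-and-multiply: O(log x) productions deriving sym^x
        if x == 0:
            return new([])
        result, square = None, sym
        while x:
            if x & 1:
                result = square if result is None else new([result, square])
            x >>= 1
            if x:
                square = new([square, square])
        return result

    parts = [power(new(list(u)), x) for u, x in factors]
    start = parts[0]
    for p in parts[1:]:
        start = new([start, p])
    return prods, start

def expand(prods, sym):
    """Expand a nonterminal back to the (possibly exponentially long) word."""
    if isinstance(sym, str):
        return sym
    return ''.join(expand(prods, s) for s in prods[sym])
```

Note that the SLP itself has size polynomial in the input, while `expand` may produce an exponentially longer word; this is exactly the compression gap discussed above.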
Our main results for the power word problem are the following; in each case we compare our results with the corresponding results for the compressed word problem:
• The power word problem for every finitely generated nilpotent group is in uTC^0 and hence has the same complexity as the word problem (or the problem of multiplying binary encoded integers). The proof is a straightforward adaptation of a proof from [51]. There, the special case where all words u_i in the input power word are single generators was shown to be in uTC^0. The compressed word problem for every finitely generated nilpotent group belongs to the class DET ⊆ uNC^2 and is hard for the counting class C_=L in case of a torsion-free nilpotent group [35].
• The power word problem for the Grigorchuk group is uAC^0-many-one-reducible to its word problem. Since the word problem for the Grigorchuk group is in L [8,23], also the power word problem is in L. Moreover, in [8] it is shown that the compressed word problem for the Grigorchuk group is PSPACE-complete. Hence, the Grigorchuk group is an example of a group for which the compressed word problem is provably more difficult than the power word problem.
• The power word problem for a finitely generated group G is uNC^1-many-one-reducible to the power word problem for any finite index subgroup of G. An analogous result holds for the compressed word problem as well [35].
• If G is a graph product of finitely generated groups G_1, . . . , G_n (the so-called base groups) not containing any elements of order two, then the power word problem in G can be decided in uAC^0 with oracle gates for (i) the word problem for the free group F_2 and (ii) the power word problems for the base groups G_i. In order to define a graph product of groups G_1, . . . , G_n, one needs a graph with vertices 1, . . . , n. The corresponding graph product is obtained as the quotient of the free product of G_1, . . . , G_n modulo the commutation relations that allow elements of G_i to commute with elements of G_j iff i and j are adjacent in the graph. Graph products were introduced by Green in 1990 [25]. The compressed word problem for a graph product is polynomial time Turing-reducible to the compressed word problems for the base groups [28].
• A right-angled Artin group (RAAG) can be defined as a graph product of copies of Z. As a corollary of our transfer theorem for graph products, it follows that the power word problem for a RAAG can be decided in uAC^0 with oracle gates for the word problem for the free group F_2. The same upper complexity bound was shown before by Kausch [33] for the ordinary word problem for a RAAG and in [43] for the power word problem for a finitely generated free group. As a consequence of our new result, the power word problem for a RAAG is in L (for the ordinary word problem this follows from the well-known fact that RAAGs are linear groups together with the above mentioned result of Lipton and Zalcstein [37]). The compressed word problem for every RAAG is in P (polynomial time) and P-complete if the RAAG is non-abelian [39].
In all the above mentioned results, the group is fixed, i.e., not part of the input. In general, it makes no sense to input an arbitrary finitely generated group, since there are uncountably many such groups. On the other hand, if we restrict to finitely generated groups with a finitary description, one may also consider a uniform version of the word problem/power word problem/compressed word problem, where the group is part of the input. We will consider the uniform power word problem for graph products for a fixed countable class C of finitely generated groups. We assume that the groups in C have a finitary description. Then a graph product is given by a list G_1, . . . , G_n of base groups from C together with an undirected graph on the indices 1, . . . , n. For this setting Kausch [33] proved that the uniform word problem for graph products belongs to C_=L^{UWP(C)}, i.e., the counting logspace class C_=L with an oracle for the uniform word problem for the class C (we write UWP(C) for the latter). We extend this result to the power word problem under the additional assumption that no group in C contains an element of order two. More precisely, we show that the uniform power word problem for graph products over the class C of base groups belongs to the closure of C_=L^{UPowWP(C)} under uAC^0-Turing-reductions, where UPowWP(C) denotes the uniform power word problem for the class C. Analogous results for the uniform compressed word problem are not known. Indeed, whether the uniform compressed word problem for RAAGs is solvable in polynomial time is posed as an open problem in [40]. Our result for the uniform power word problem for graph products implies that the uniform power word problem for RAAGs can be solved in polynomial time. We can apply this result to the knapsack problem for RAAGs. The knapsack problem is a classical optimization problem that was originally formulated for the integers. Myasnikov et al.
introduced the decision variant of the knapsack problem for an arbitrary finitely generated group G: Given g_1, . . . , g_n, g ∈ G, decide whether there are x_1, . . . , x_n ∈ N such that g_1^{x_1} · · · g_n^{x_n} = g holds in the group G [50]; see also [19,22,34,44] for further work. For many groups G one can show that, if such x_1, . . . , x_n ∈ N exist, then there exist such numbers of size 2^{poly(N)}, where N is the total length of all words representing the group elements g_1, . . . , g_n, g. This holds for instance for RAAGs. In this case, one nondeterministically guesses the binary encodings of numbers x_1, . . . , x_n and then verifies, using an algorithm for the power word problem, whether g_1^{x_1} · · · g_n^{x_n} g^{-1} = 1 holds. In this way, it was shown in [44] that for every RAAG the knapsack problem belongs to NP (using the fact that the compressed word problem and hence the power word problem for a fixed RAAG belongs to P). Moreover, if the commutation graph of the RAAG G contains an induced subgraph C4 (cycle on 4 nodes) or P4 (path on 4 nodes), then the knapsack problem for G is NP-complete [44]. However, membership of the uniform version of the knapsack problem for RAAGs in NP remained open. Our polynomial time algorithm for the uniform power word problem for RAAGs yields the missing piece: the uniform knapsack problem for RAAGs is indeed NP-complete.

Related work
Implicitly, (variants of) the power word problem have been studied long before. In the commutative setting, Ge [24] has shown that one can verify in polynomial time an identity α_1^{x_1} α_2^{x_2} · · · α_n^{x_n} = 1, where the α_i are elements of an algebraic number field and the x_i are binary encoded integers.
In [27], Gurevich and Schupp present a polynomial time algorithm for a compressed form of the subgroup membership problem for a free group F where group elements are represented in the form a_1^{x_1} a_2^{x_2} · · · a_n^{x_n} with binary encoded integers x_i. The a_i must be, however, standard generators of the free group F. This is the same input representation as in [51] (for nilpotent groups) and is more restrictive than our setting, where we allow powers of the form w^x for w an arbitrary word over the group generators (on the other hand, Gurevich and Schupp consider the subgroup membership problem, which is more general than the word problem).
Recently, the power word problem has been investigated in [19], where it is shown that the power word problem for a wreath product of the form G ≀ Z with G finitely generated nilpotent belongs to uTC^0. Moreover, the power word problem for iterated wreath products of the form Z^r ≀ (Z^r ≀ (Z^r · · · )) belongs to uTC^0. By a famous embedding theorem of Magnus [47], it follows that the power word problem for a free solvable group is in uTC^0. Finally, in [45] Zetzsche and the first author of this work showed that the power word problem for a solvable Baumslag-Solitar group BS(1, q) belongs to uTC^0.
The present paper is a combination of the two conference papers [43] (by the first and third author) and [57] (by the second and third author). Here we also correct a mistake that occurred in [57] and version 2 of this paper (see [58]): there, our results on graph products were stated without the additional assumption that the base groups do not have elements of order two. While we strongly conjecture the result to be true without this assumption, our proof only works with it. The key lies in the proof of Lemma 50 (which corresponds to Lemma 15 in [57,58]); indeed, this is the only place where we need the additional assumption. We give more technical details in Remark 43 and Remark 51.

Preliminaries
For integers a ≤ b we write [a, b] for the interval { x ∈ Z | a ≤ x ≤ b }. For an integer x ∈ Z let us define ⟦x⟧ = [0, x] if x ≥ 0 and ⟦x⟧ = [x, 0] if x < 0.

Words
An alphabet is a (finite or infinite) set Σ; an element a ∈ Σ is called a letter. The free monoid over Σ is denoted by Σ * ; its elements are called words. The multiplication of the free monoid is concatenation of words. The identity element is the empty word 1.
Consider a word w = a_1 · · · a_n with a_i ∈ Σ. For A ⊆ Σ we write |w|_A for the number of i ∈ [1, n] with a_i ∈ A and we set |w| = |w|_Σ (the length of w) and |w|_a = |w|_{{a}} for a ∈ Σ. A word w has period k if a_i = a_{i+k} for all i with i, i + k ∈ [1, n].
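The counting and period notions above can be made concrete in a few lines; the helper names here are our own, chosen only for illustration.

```python
def count_in(word, letters):
    """|w|_A: the number of positions of w carrying a letter from A."""
    return sum(1 for a in word if a in letters)

def has_period(word, k):
    """True iff a_i = a_{i+k} for all positions where both indices exist."""
    return all(word[i] == word[i + k] for i in range(len(word) - k))
```

With these definitions, `count_in(w, set(w))` equals the length |w|, and a word trivially has every period k ≥ |w|.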

Monoids
Let M be an arbitrary monoid. Later, we will consider finitely generated monoids M, where elements of M are described by words over an alphabet of monoid generators. To distinguish equality as words from equality as elements of M, we also write x =_M y (or x = y in M) to indicate equality in M (as opposed to equality as words). Let x =_M u v w for some x, u, v, w ∈ M. We say u is a prefix of x, v is a factor of x, and w is a suffix of x. A prefix (factor, suffix) of x is called proper if it is not equal to x in M. An element u ∈ M is primitive if u ≠_M v^k for all v ∈ M and k > 1. Two elements u, v ∈ M are transposed if there are x, y ∈ M such that u =_M x y and v =_M y x. We call u and v conjugate if there is an element t ∈ M such that u t = t v (note that this is also sometimes called left-conjugate in the literature). For a free monoid Σ*, two words u, v are transposed if and only if they are conjugate. In this case, we also say that the word u is a cyclic permutation of the word v.
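For free monoids, transposition and primitivity admit classical one-line string tests via the doubling trick (v is a cyclic permutation of u iff |u| = |v| and v occurs in uu; a nonempty word is a proper power iff it occurs non-trivially in its own square). This is an illustrative sketch with our own function names.

```python
def are_transposed(u, v):
    """Words over a free monoid are transposed (u = xy, v = yx)
    iff v is a cyclic permutation of u."""
    return len(u) == len(v) and v in u + u

def is_primitive(word):
    """A nonempty word is primitive iff it is not v^k for any k > 1,
    i.e., iff it occurs in word+word only at the two trivial positions."""
    return len(word) > 0 and word not in (word + word)[1:-1]
```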

Rewriting systems over monoids
A rewriting system over the monoid M is a subset S ⊆ M × M. We write ℓ → r if (ℓ, r) ∈ S. The corresponding rewriting relation ⇒_S over M is defined by: u ⇒_S v if and only if there exist ℓ → r ∈ S and s, t ∈ M such that u =_M s ℓ t and v =_M s r t. We also say that u can be rewritten to v in one step. We write u ⇒*_S v if u can be rewritten to v in finitely many (possibly zero) steps and u ⇒_S^{≤k} v if u can be rewritten to v using at most k steps. We say that w ∈ M is irreducible with respect to S if there is no v ∈ M with w ⇒_S v. The set of irreducible monoid elements is denoted by IRR(S). We write M/S for the quotient monoid M/≡_S, where ≡_S is the smallest congruence relation on M that contains S.
The above notion of a rewriting system over a monoid M is a generalization of the notion of a string rewriting system, which is a rewriting system over a free monoid Σ * . For further details on rewriting systems we refer to [10,32].
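As an illustration of rewriting to an element of IRR(S), here is a naive (quadratic) sketch for string rewriting systems over a free monoid; it assumes the given system terminates, and the function name and step bound are our own.

```python
def rewrite_to_irreducible(word, rules, max_steps=10_000):
    """Apply string rewriting rules (l -> r) until no rule matches;
    assumes the system is terminating. Returns an element of IRR(S)."""
    for _ in range(max_steps):
        for l, r in rules:
            i = word.find(l)
            if i != -1:
                word = word[:i] + r + word[i + len(l):]  # one rewriting step
                break
        else:
            return word  # no rule applies: word is irreducible
    raise RuntimeError("step bound exceeded (system may not terminate)")
```

For example, with the free-group-style rules aA → 1, Aa → 1 (uppercase playing the role of inverse letters), every word rewrites to its freely reduced form.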

Partially commutative monoids
In this subsection, we introduce a few basic notations concerning partially commutative monoids. More information can be found in [14].
Let Σ be an alphabet of symbols. We do not require Σ to be finite. Let I ⊆ Σ × Σ be a symmetric and irreflexive relation. The partially commutative monoid defined by (Σ, I) is the quotient monoid M(Σ, I) = Σ*/{ a b = b a | (a, b) ∈ I }. Thus, the relation I describes which generators commute; it is called the commutation relation or independence relation. The relation D = (Σ × Σ) \ I is called the dependence relation and (Σ, D) is called a dependence graph. The monoid M(Σ, I) is also called a trace monoid and its elements are called traces or partially commutative words. Note that for words u, v ∈ Σ* with u =_{M(Σ,I)} v we have |u|_a = |v|_a for every a ∈ Σ. Hence, |w| and |w|_a are well-defined for a trace w ∈ M(Σ, I) and we use this notation henceforth.
A letter a is called a minimal letter of w ∈ M(Σ, I) if w =_{M(Σ,I)} a u for some u ∈ M(Σ, I). Likewise, a letter a is called a maximal letter of w if w =_{M(Σ,I)} u a for some u ∈ M(Σ, I). When we say that a is minimal (maximal) in w ∈ Σ*, we mean that a is minimal (maximal) in the trace represented by w. Note that if both a and b ≠ a are minimal (maximal) letters of w, then (a, b) ∈ I. A trace rewriting system is simply a rewriting system over a trace monoid M(Σ, I) in the sense of Section 2.3. If ∆ ⊆ Σ is a subset, we write M(∆, I) for the submonoid of M(Σ, I) generated by ∆.
Elements of a partially commutative monoid can be represented by directed acyclic graphs: Let w = a_1 · · · a_n with a_i ∈ Σ. We define the dependence graph of w as follows: The node set is [1, n] and there is an edge i → j if and only if i < j and (a_i, a_j) ∈ D. Then, for two words u, v ∈ Σ* we have u =_{M(Σ,I)} v if and only if the dependence graphs of u and v are isomorphic (as labeled directed graphs). The dependence graph of a trace v ∈ M(Σ, I) is the dependence graph of (any) word representing v. The trace v is said to be connected if its dependence graph is weakly connected, or, equivalently, if the induced subgraph of (Σ, D) consisting only of the letters occurring in v is connected. The connected components of the trace v are the weakly connected components of the dependence graph of v.
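Instead of testing isomorphism of dependence graphs directly, one can compare canonical representatives. The following sketch (our own, not an algorithm from the paper) computes the lexicographically least word representing a trace by repeatedly extracting the least minimal letter; two words represent the same trace iff their canonical forms coincide.

```python
def trace_normal_form(word, independent):
    """Lexicographically least word representing the same trace.
    `independent(a, b)` must be symmetric and irreflexive."""
    letters = list(word)
    out = []
    while letters:
        best = None
        for i, a in enumerate(letters):
            # a is a minimal letter iff it is independent of all earlier letters
            if all(independent(a, letters[j]) for j in range(i)):
                if best is None or a < letters[best]:
                    best = i
        out.append(letters.pop(best))
    return ''.join(out)

def trace_equal(u, v, independent):
    """u = v in M(Sigma, I) iff their canonical representatives agree."""
    return trace_normal_form(u, independent) == trace_normal_form(v, independent)
```

Irreflexivity of `independent` guarantees that among several occurrences of the same letter only the leftmost one is minimal, so the procedure is well defined.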

Levi's lemma
As a consequence of the representation of traces by dependence graphs, one obtains Levi's lemma for traces (see e.g. [14, p. 74]), which is one of the fundamental facts in trace theory. The formal statement is as follows: if u_1 u_2 · · · u_m =_M v_1 v_2 · · · v_n in M = M(Σ, I), then there exist traces w_{i,j} ∈ M (i ∈ [1, m], j ∈ [1, n]) such that u_i =_M w_{i,1} w_{i,2} · · · w_{i,n} for every i ∈ [1, m], v_j =_M w_{1,j} w_{2,j} · · · w_{m,j} for every j ∈ [1, n], and (w_{i,j}, w_{k,ℓ}) ∈ I whenever i < k and j > ℓ.
The situation in the lemma will be visualized by a diagram of the following kind. The i-th column corresponds to u_i, the j-th row (read from bottom to top) corresponds to v_j, and the intersection of the i-th column and the j-th row represents w_{i,j}. Furthermore, w_{i,j} and w_{k,ℓ} are independent if one of them is left-above the other one. So, for instance, all w_{i,j} in the red part are independent from all w_{k,ℓ} in the blue part.
Usually, Levi's lemma is formulated for the case that the alphabet Σ is finite. But the finite case already implies the general case with Σ possibly infinite: simply replace the trace monoid M(Σ, I) by M(Σ′, I′), where Σ′ contains all symbols occurring in one of the traces u_i, v_j and I′ is the restriction of I to Σ′. A consequence of Levi's lemma is that trace monoids are cancellative, i.e., u s v = u t v implies s = t for all traces s, t, u, v ∈ M.

Projections to free monoids
It is a well-known result [17,18,61] that every trace monoid can be embedded into a direct product of free monoids. In this section we recall the corresponding results.
Consider a trace monoid M = M(Σ, I) with the property that there exist finitely many sets A_i ⊆ Σ (i ∈ [1, k] for some k ∈ N) fulfilling the following property: for all a, b ∈ Σ we have (a, b) ∈ D if and only if there is an i ∈ [1, k] with a, b ∈ A_i. Since D is reflexive, this implies that for every a ∈ Σ there is an i such that a ∈ A_i. All trace monoids M(Σ, I) that will appear in this paper have the above property if one takes for the A_i the maximal cliques in the dependence graph (Σ, D) [18]. If Σ is finite, one can take for the A_i also all sets {a, b} with (a, b) ∈ D together with all singletons {a} with a an isolated vertex in (Σ, D) [17]. Let π_i : Σ* → A_i* be the projection to the free monoid A_i* defined by π_i(a) = a for a ∈ A_i and π_i(a) = 1 otherwise. We define a projection Π : Σ* → A_1* × · · · × A_k* to a direct product of free monoids by Π(w) = (π_1(w), . . . , π_k(w)). It is straightforward to see that, if u =_M v, then also Π(u) = Π(v). Hence, we can consider Π also as a monoid morphism Π : M → A_1* × · · · × A_k* (which from now on we denote by the same letter Π). We will make use of the following two lemmata presented in [18].
Lemma 2 Let u, v ∈ M. Then u =_M v if and only if π_i(u) = π_i(v) for all i ∈ [1, k].
Lemma 3 Let u, v ∈ M. Then u is a prefix of v in M if and only if π_i(u) is a prefix of π_i(v) for all i ∈ [1, k].
Thus, Π is an injective monoid morphism Π : M → A_1* × · · · × A_k*. In [18] these lemmata are only proved for the case that Σ is finite, but as for Levi's lemma one obtains the general case by restricting (Σ, I) to those letters that appear in the traces involved.
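Assuming the projection criterion just described (traces are equal iff all their projections to the covering sets A_i agree), trace equality reduces to comparing tuples of ordinary words. A small sketch with our own names:

```python
def clique_projections(word, cliques):
    """Pi(w) = (pi_1(w), ..., pi_k(w)): erase all letters outside each A_i."""
    return tuple(''.join(a for a in word if a in A) for A in cliques)
```

For instance, over Σ = {a, b, c} with a and c independent, the dependence relation is covered by the cliques {a, b} and {b, c}; the words acb and cab have equal projection tuples (they represent the same trace), while abc and bac do not.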
Projections onto free monoids were used in [18] in order to show the following lemmata.
Lemma 4 ([18]) Two traces u, v ∈ M(Σ, I) are conjugate if and only if u can be transformed into v by a sequence of transpositions.
Lemma 4 gives us a tool for checking conjugacy in M(Σ, I); indeed, from now on, we will most of the time use that conjugate elements are related by a sequence of transpositions.
Lemma 5 ([18, Proposition 3.5]) Let M = M(Σ, I) and u, v, p, q ∈ M such that u = p^k and v = q^ℓ with p and q primitive and k, ℓ ≥ 1. Then u and v are conjugate if and only if k = ℓ and p and q are conjugate.
Note that Lemma 5 implies that if u is conjugate to a primitive trace, then u must be primitive as well.

Trace monoids defined by finite graphs
As a first step towards graph products let us consider trace monoids of a special form: Let L be a finite set of size σ = |L| and let I ⊆ L × L be irreflexive and symmetric (i. e., (L, I) is a finite undirected simple graph). Moreover, assume that for each ζ ∈ L we are given a (possibly infinite) alphabet Γ_ζ such that Γ_ζ ∩ Γ_ξ = ∅ for ζ ≠ ξ. By setting Γ = ⋃_{ζ∈L} Γ_ζ and I_Γ = { (a, b) ∈ Γ × Γ | a ∈ Γ_ζ, b ∈ Γ_ξ, (ζ, ξ) ∈ I }, we obtain a trace monoid M = M(Γ, I_Γ). Henceforth, we simply write I for I_Γ. For a ∈ Γ we define alph(a) = ζ if a ∈ Γ_ζ. For u = a_1 · · · a_k ∈ Γ* we define alph(u) = {alph(a_1), . . . , alph(a_k)}.
The following lemma characterizes the shape of a prefix, suffix or factor of a power in the above trace monoid M.
Lemma 6 Let p ∈ M be connected and k ∈ N. Then we have:
1. If p^k =_M u w for traces u, w ∈ M(Γ, I), then there exist s < σ, ℓ, m ∈ N with k = ℓ + s + m and factorizations p =_M u_i w_i with u_i ≠ 1 ≠ w_i for i ∈ [1, s] such that
• u =_M p^ℓ u_1 · · · u_s and w =_M w_1 · · · w_s p^m.
2. If v is a factor of p^k, then at least one of the following is true:
• v =_M v_1 · · · v_s v′_1 · · · v′_t, where s, t < σ, every v_i is a proper factor of p and every v′_j is a proper prefix of p, or
• v =_M v_1 · · · v_s p^m v′_1 · · · v′_t, where s, t < σ, m ≥ 1, every v_i is a proper suffix of p and every v′_j is a proper prefix of p.
Proof Let us start with the first statement. We apply Levi's lemma to the identity p^k =_M u w and obtain the diagram of Fig. 1 (a factorization of p^k as in case 1 of Lemma 6).
We have (w_i, u_j) ∈ I for all i < j; in particular, alph(u_{i+1}) ⊆ alph(u_i) for all i. Since p is connected, alph(u_{i+1}) ⊊ alph(u_i) whenever u_i ≠ 1 ≠ w_i. It follows that there are ℓ, m ≥ 0 and s < σ such that k = ℓ + s + m and
• u_i = p and w_i = 1 for i ∈ [1, ℓ],
• u_i ≠ 1 ≠ w_i for i ∈ [ℓ + 1, ℓ + s],
• u_i = 1 and w_i = p for i ∈ [ℓ + s + 1, k].
By renaming u_{ℓ+i} and w_{ℓ+i} into u_i and w_i, respectively, for i ∈ [1, s] we obtain factorizations u =_M p^ℓ u_1 · · · u_s and w =_M w_1 · · · w_s p^m for some s < σ, where the u_i are proper prefixes of p and the w_i are proper suffixes of p. To derive statement 2, consider the factorization p^k =_M u(vw). Applying the final conclusion of the previous paragraph, we obtain factorizations u =_M p^x u_1 · · · u_s and vw =_M x_1 · · · x_s p^ℓ, where s < σ, the u_i are proper prefixes of p, the x_i are proper suffixes of p and k = x + s + ℓ.
We then consider two cases. If ℓ = 0, then vw =_M x_1 · · · x_s. Applying Levi's lemma to this factorization yields v =_M v_1 · · · v_s, where every v_i is a prefix of the proper suffix x_i of p. Therefore, every v_i is a proper factor of p. Now assume that ℓ > 0. Applying Levi's lemma to vw =_M x_1 · · · x_s p^ℓ yields factorizations v =_M v_1 · · · v_{s+ℓ} and w =_M w_1 · · · w_{s+ℓ} with x_i =_M v_i w_i for i ∈ [1, s] and p =_M v_{s+i} w_{s+i} for i ∈ [1, ℓ]. To the factorizations p =_M v_{s+i} w_{s+i} (i ∈ [1, ℓ]) we apply the arguments used for the proof of statement 1. There are y, z ≥ 0 and t < σ such that ℓ = y + t + z and
• v_{s+i} = p and w_{s+i} = 1 for i ∈ [1, y],
• v_{s+i} ≠ 1 ≠ w_{s+i} and v_{s+i} w_{s+i} =_M p for i ∈ [y + 1, y + t], where every such v_{s+i} is a proper prefix of p, and
• v_{s+i} = 1 and w_{s+i} = p for i ∈ [y + t + 1, ℓ].
If y > 0 then (v_{s+1}, w_1 · · · w_s) ∈ I implies w_1 · · · w_s = 1. Hence, every v_i (i ∈ [1, s]) is a proper suffix of p and v =_M v_1 · · · v_s p^y v_{s+y+1} · · · v_{s+y+t} (by a symmetric argument, we could also write v as a concatenation of s < σ many proper suffixes of p followed by t < σ many proper factors of p). Finally, assume that y = 0. We get v =_M v_1 · · · v_s v_{s+1} · · · v_{s+t} with every v_i (i ∈ [1, s]) a proper factor of p and every v_{s+i} (i ∈ [1, t]) a proper prefix of p.
For a trace u ∈ M = M(Γ, I) and ζ ∈ L, we write |u|_ζ = |u|_{Γ_ζ} = Σ_{a∈Γ_ζ} |u|_a. Note that, while the sum might be infinite, only finitely many summands are non-zero.
Lemma 7 Let r, s, t, u ∈ M with r s =_M t u and, for all ζ ∈ L, |s|_ζ ≥ |u|_ζ or, equivalently, |r|_ζ ≤ |t|_ζ. Then, as elements of M, u is a suffix of s and r is a prefix of t. In particular, if for all ζ ∈ L we have |s|_ζ = |u|_ζ, then s =_M u and r =_M t.
Proof By Levi's lemma, there are p, q, x, y ∈ M with (x, y) ∈ I and r = p x, t = p y, s = y q, and u = x q. Because of the condition |s|_ζ ≥ |u|_ζ for all ζ ∈ L, and since any two letters with the same alph-value are dependent, x must be the empty trace. Hence u = q is a suffix of s = y q and r = p is a prefix of t = p y.
The second part of the lemma follows by using the first part for the two inequalities |s| ζ ≥ |u| ζ and |u| ζ ≥ |s| ζ .
Lemma 8 Let p^σ u =_M v p^σ for some primitive and connected trace p and let u ∈ M be a prefix of p^k for some k ∈ N. Then we have u =_M v =_M p^ℓ for some ℓ ∈ [0, k].
Proof If u is the empty prefix, we are done. Hence, from now on, we can assume that u is non-empty. First consider the case that p is a prefix of u. Then p^{σ+1} is a prefix of v p^σ. Hence, there is a trace q with v p^σ =_M p q, where p^σ is a factor of q. Then Lemma 7 implies that p is a prefix of v.
If v =_M p v′ and u =_M p u′, we obtain p^{σ+1} u′ =_M p v′ p^σ. Cancelling p yields p^σ u′ =_M v′ p^σ. Since u′ is a prefix of p^{k−1}, we can replace u and v by u′ and v′, respectively. Therefore, we can assume that p is not a prefix of u. Since u is a prefix of some p^k, Lemma 6 implies that u is already a prefix of p^σ.
Let us next show that u = v. To do so, we write p^σ =_M u w. Then we have u w u =_M p^σ u =_M v p^σ =_M v u w. Since |wu|_a = |uw|_a for all a ∈ Γ, Lemma 7 implies u = v. Now, we have p^σ u =_M u p^σ. Since p is connected, [17, Proposition 3.1] implies that there are i, j ∈ N with p^{σ·i} =_M u^j. Then, by [17, Theorem 1.5] it follows that there are t ∈ M and ℓ, m ∈ N with p =_M t^m and u =_M t^ℓ. As p is primitive, we have m = 1 and t = p and hence u =_M p^ℓ. Since u is a prefix of p^k, we have ℓ ≤ k.
To prove the next lemma, we want to apply Lemma 2. To do so, we use the following projections suitable for our use case. Let A = { Γ_ζ ∪ Γ_ξ | (ζ, ξ) ∈ D, ζ ≠ ξ } ∪ { Γ_ζ | ζ ∈ L is isolated }, where ζ is isolated if there is no ξ ≠ ζ with (ζ, ξ) ∈ D (and D = L × L \ I).
Notice that even though Γ might be infinite, A is finite in any case (because L is finite). Let us write A = {A_1, . . . , A_k} and π_i for the projection M(Γ, I) → A_i*.
Lemma 9 Let u, v, p, q ∈ M and k ∈ N with u q v =_M p^k and |p|_ζ = |q|_ζ for all ζ ∈ L. Then p and q are conjugate in M.
Proof First, we are going to show that the transposition q v u of u q v is equal to q^k in M. Consider the projections π_i onto cliques. By the assumption |p|_ζ = |q|_ζ for all ζ ∈ L, it follows that |π_i(p)| = |π_i(q)|. As π_i(p^k) has period |π_i(p)|, so has its cyclic permutation π_i(q v u). As its first |π_i(p)| letters are exactly π_i(q), it follows that π_i(q v u) = π_i(q^k). Since this holds for all i, it follows by Lemma 2 that q v u =_M q^k. Now, observe that q^k =_M q v u and p^k =_M u q v are conjugate in M. Hence, it remains to apply Lemma 5 to conclude that p and q are conjugate: we write p = p̂^i and q = q̂^j for primitive traces p̂, q̂. Then Lemma 5 tells us that i = j and that p̂ and q̂ are conjugate. Hence, also p and q are conjugate.

Groups
If G is a group, then u, v ∈ G are conjugate if and only if there is a g ∈ G such that u = G g −1 vg (note that this agrees with the above definition for monoids).

Free groups
Let X be a set and X̄ = { ā | a ∈ X } be a disjoint copy of X. We extend the mapping a ↦ ā to an involution without fixed points on Σ = X ∪ X̄ by letting the image of ā be a, and finally to an involution on Σ* by a_1 a_2 · · · a_n ↦ ā_n · · · ā_2 ā_1. The only fixed point of the latter involution is the empty word 1. The string rewriting system S_free = { a ā → 1 | a ∈ Σ } over Σ* is strongly confluent and terminating, meaning that for every word w ∈ Σ* there exists a unique word ŵ ∈ IRR(S_free) with w ⇒*_{S_free} ŵ. Words from IRR(S_free) are called freely reduced. The system S_free defines the free group F(X) = Σ*/S_free with basis X. Let η : Σ* → F(X) denote the canonical monoid homomorphism. Then we have η(w)^{-1} = η(w̄) for all words w ∈ Σ*. If |X| = 2, then we write F_2 for F(X). It is known that for every countable set X, F_2 contains an isomorphic copy of F(X).
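The normal form ŵ can be computed with the classical stack-based free reduction. In this sketch (the function name and the uppercase-for-inverse encoding are our own conventions), a letter cancels against the preceding one exactly when the two form a factor a ā:

```python
def free_reduce(word):
    """Stack-based free reduction; inverse letters written as uppercase.
    Returns the unique freely reduced word (an element of IRR(S_free))."""
    inv = lambda a: a.lower() if a.isupper() else a.upper()
    stack = []
    for a in word:
        if stack and stack[-1] == inv(a):
            stack.pop()       # factor a a-bar found: apply a rule of S_free
        else:
            stack.append(a)
    return ''.join(stack)
```

A word represents the identity of F(X) iff its free reduction is empty; the single left-to-right pass suffices because S_free is confluent.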

Finitely generated groups and the word problem
A group G is called finitely generated (f.g.) if there exists a finite set X and a surjective group homomorphism h : F(X) → G. In this situation, the set Σ = X ∪ X̄ is called a finite (symmetric) generating set for G. Usually, we write X^{-1} instead of X̄ and a^{-1} instead of ā for a ∈ Σ. Thus, for an integer z < 0 and w ∈ Σ* we write w^z for (w̄)^{-z}.
In many cases we can think of Σ as a subset of G, but, in general, we can also have more than one letter for the same group element. The group identity of G is denoted with 1 as well (this fits to our notation 1 for the empty word which is the identity of F (X)).
For words u, v ∈ Σ* we usually say that u = v in G or u =_G v in case h(η(u)) = h(η(v)) and we do not write η nor h from now on. The word problem for the finitely generated group G, WP(G) for short, is defined as follows:
Input: a word w ∈ Σ*.
Question: Does w =_G 1 hold?

The power word problem
A power word (over Σ) is a tuple (u_1, x_1, u_2, x_2, . . . , u_n, x_n) where u_1, . . . , u_n ∈ Σ* are words over the group generators and x_1, . . . , x_n ∈ Z are integers that are given in binary notation. Such a power word represents the word u_1^{x_1} u_2^{x_2} · · · u_n^{x_n}. Quite often, we will identify the power word (u_1, x_1, u_2, x_2, . . . , u_n, x_n) with the word u_1^{x_1} u_2^{x_2} · · · u_n^{x_n}. Moreover, if x_i = 1, then we usually omit the exponent 1 in a power word. The power word problem for the finitely generated group G, PowWP(G) for short, is defined as follows:

Input:
a power word (u_1, x_1, u_2, x_2, . . . , u_n, x_n).
Question: Does u_1^{x_1} u_2^{x_2} · · · u_n^{x_n} =_G 1 hold?
Due to the binary encoded exponents, a power word can be seen as a succinct description of an ordinary word. Hence, a priori, the power word problem for a group G could be computationally more difficult than the word problem. An example where this happens (under standard assumptions from complexity theory) is the wreath product S_5 ≀ Z (where S_5 is the symmetric group on 5 elements). The word problem for this group can be easily solved in logspace, whereas the power word problem for S_5 ≀ Z is coNP-complete [43].
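For G = Z the contrast is easy to see: deciding u_1^{x_1} · · · u_n^{x_n} = 1 reduces to an arithmetic check that is polynomial in the binary length of the exponents, while explicit expansion would produce a word of exponential length. A sketch with our own encoding ('a' the generator of Z, 'A' its inverse):

```python
def powwp_Z(power_word):
    """Power word problem for G = Z: each word u contributes its letter
    balance #a - #A, weighted by the (possibly negative, huge) exponent."""
    total = 0
    for u, x in power_word:
        weight = u.count('a') - u.count('A')
        total += weight * x
    return total == 0
```

Python's arbitrary-precision integers handle exponents like 2^40 directly, so the running time depends only on the size of the binary encodings, never on the length of the expanded word.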
Let C be a countable class of groups, where every group has a finite description. We also assume that the description of G ∈ C contains a generating set for G. We write UPowWP(C) for the uniform power word problem:

Input:
a group G ∈ C and a power word (u_1, x_1, u_2, x_2, . . . , u_n, x_n) over the generating set of G.
Question: Does u_1^{x_1} u_2^{x_2} · · · u_n^{x_n} =_G 1 hold?

Right-angled Artin groups
Right-angled Artin groups are defined similarly to partially commutative monoids. Again we have a symmetric and irreflexive commutation relation I ⊆ X × X. Then G(X, I) = F(X)/{ a b = b a | (a, b) ∈ I } is the corresponding right-angled Artin group (RAAG), also known as a graph group or free partially commutative group. The name graph group is due to the commutation relation being commonly visualized as an undirected graph. Note that we have M(X, I) ⊆ G(X, I).
We can view G(X, I) also as follows: let Σ = X ∪ X̄, where X̄ is a disjoint copy of X, and define the involution a ↦ ā as for free groups. Extend I to Σ × Σ by requiring that (a, b) ∈ I if and only if (ā, b) ∈ I for all a, b ∈ Σ. Then G(X, I) is the quotient of M(Σ, I) defined by the relations a ā = 1 for a ∈ Σ. A trace w ∈ M(Σ, I) is called reduced if it does not contain a factor a ā for a ∈ Σ. For every trace u ∈ M(Σ, I) there is a unique reduced trace v (the reduced normal form of u) with u = v in G(X, I). Like for free groups, it can be computed using the confluent and terminating trace rewriting system { a ā → 1 | a ∈ Σ }.
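The reduced normal form yields a simple quadratic-time decision procedure for the word problem of a RAAG, far from the uAC^0 and L bounds discussed above but useful as a reference. In this sketch (our own conventions: lowercase generators, uppercase inverses, `independent` defined on generators), a new letter cancels the nearest dependent letter to its left exactly when that letter is its inverse; otherwise the new letter is kept.

```python
def raag_trivial(word, independent):
    """Word problem for the RAAG G(X, I); uppercase letters are inverses.
    Returns True iff the word represents the identity."""
    inv = lambda a: a.lower() if a.isupper() else a.upper()
    dep = lambda a, b: not independent(a.lower(), b.lower())
    stack = []
    for a in word:
        cancelled = False
        # scan left over independent letters for the nearest dependent one
        for j in range(len(stack) - 1, -1, -1):
            if dep(stack[j], a):
                if stack[j] == inv(a):   # factor a a-bar in the trace
                    del stack[j]
                    cancelled = True
                break
        if not cancelled:
            stack.append(a)
    return len(stack) == 0
```

One left-to-right pass suffices: an invariant check shows the stack always represents a reduced trace, so the input is trivial iff the stack ends up empty.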

Graph products
Let (G ζ ) ζ∈L be a family of so-called base groups and I ⊆ L × L be an irreflexive and symmetric relation (the independence relation). As before, we assume that L is always finite and we write σ = |L|. The graph product GP(L, I, (G ζ ) ζ∈L ) is defined as the free product of the G ζ modulo the relations expressing that elements from G ζ and G ξ commute whenever (ζ, ξ) ∈ I. Below, we define this group by a group presentation.
Let Γ_ζ = G_ζ \ {1} be the set of non-trivial elements of the group G_ζ for ζ ∈ L. We assume w.l.o.g. that the sets Γ_ζ are pairwise disjoint. We then define Γ and I_Γ as in Section 2.5: Γ = ⋃_{ζ∈L} Γ_ζ (note that typically, Γ will be infinite) and I_Γ = { (a, b) ∈ Γ × Γ | (alph(a), alph(b)) ∈ I }. As in Section 2.5 we write I instead of I_Γ. For a, b ∈ G_ζ we write [ab] for the element of G_ζ obtained by multiplying a b in G_ζ (whereas a b denotes a two-letter word in Γ*). Here, we identify 1 ∈ G_ζ with the empty word 1. The relation I is extended to Γ* by I = { (u, v) ∈ Γ* × Γ* | alph(u) × alph(v) ⊆ I } (where alph(u) ⊆ L is defined as in Section 2.5). With these definitions we have GP(L, I, (G_ζ)_{ζ∈L}) = ⟨ Γ | a b = [ab] for ζ ∈ L and a, b ∈ Γ_ζ, a b = b a for (a, b) ∈ I ⟩.
Example 11 If all the base groups are the infinite cyclic group (i. e., for each ζ ∈ L we have G_ζ = Z), then the graph product GP(L, I, (G_ζ)_{ζ∈L}) is the RAAG G(L, I).
Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product and M = M(Γ, I) the corresponding trace monoid (see Section 2.4). Notice that M satisfies the setting of Section 2.5, so those results and definitions apply to the case of graph products. We can represent elements of G by elements of M. More precisely, there is a canonical surjective homomorphism h : M → G. A reduced representative of a group element g ∈ G is a trace w of minimal length such that h(w) = g. We also say that w is reduced. Equivalently, w ∈ M is reduced if there is no two-letter factor a b of w such that alph(a) = alph(b). A trace w ∈ M is called cyclically reduced if all transpositions of w are reduced. Equivalently, w is cyclically reduced if it is reduced and it cannot be written in the form a x b with a, b ∈ Γ_ζ for some ζ ∈ L and x ∈ M. Note that this definition agrees with [33], whereas in [19] a slightly different definition is used. We call a trace w ∈ M composite if |alph(w)| ≥ 2. Notice that a trace w, where every connected component is composite, is cyclically reduced if and only if w w is reduced (in that case, every w^k with k ≥ 2 is reduced). A word w ∈ Γ* is called reduced/cyclically reduced/composite if the trace represented by w is reduced/cyclically reduced/composite.
Note that a word w ∈ Γ* is cyclically reduced if and only if every cyclic permutation of the word w is reduced as a trace (be aware of the subtle difference between a cyclic permutation of a word w and a transposition of the trace represented by w): If the trace represented by w is cyclically reduced, then clearly every cyclic permutation of w must be reduced. On the other hand, assume that w =_M a w′ b with a, b ∈ Γ_ζ. Then we can write the word w as w = x a y b z such that (a, xz) ∈ I. Then y b z x a is a cyclic permutation of w that is not reduced.
On the free monoid Γ* we can define an involution (·)^{-1} by (a_1 a_2 ⋯ a_n)^{-1} = a_n^{-1} ⋯ a_2^{-1} a_1^{-1}. The counterpart of the rewriting system S_free for graph products is the trace rewriting system T = { ab → [ab] | ζ ∈ L, a, b ∈ Γ_ζ } on M. Note that G = M/T and that IRR(T) is the set of reduced traces. Moreover, T is terminating and confluent; the latter is shown in [36, Lemma 6.1]. The following lemma can be found in [28, Lemma 24].
The following commutative diagram summarizes the mappings between the sets introduced in this section (↪↠ indicates a bijection). An I-clique is a trace a_1 a_2 ⋯ a_k ∈ M such that a_i ∈ Γ and (a_i, a_j) ∈ I for all i ≠ j. Note that |v| ≤ σ for every I-clique v. The following lemma is a generalization of a statement from [15] (equation (21) in the proof of Lemma 22 there), where only the case q = 1 is considered.
Lemma 13 Let p, q, r, s ∈ M such that pq, qr, s ∈ IRR(T) and pqr ⇒*_T s. Then there exist factorizations

Proof We prove the lemma by induction over the length of T-derivations (recall that T is terminating). The case that pqr ∈ IRR(T) is clear (take t = u = v = w = 1). Now assume that pqr is not reduced. Since pq, qr ∈ IRR(T), the trace pqr must contain a factor ab with alph(a) = alph(b), where a is a maximal letter of p, b is a minimal letter of r, and (a, q), (b, q) ∈ I. Let us write p =_M p̂ a and r =_M b r̂.
If we set u = xa, we obtain exactly the situation from the lemma. Now assume that [ab] = c ≠ 1. We obtain pqr =_M p̂ a q b r̂ ⇒_T p̂ c q r̂ ⇒*_T s. Note that (c, q) ∈ I. Since p̂ a q, b q r̂ ∈ IRR(T), we also have p̂ c q, c q r̂ ∈ IRR(T). Hence, by induction we obtain factorizations These are I-cliques (since (c, t′) ∈ I) that satisfy the conditions from the lemma. Moreover, (c, u) ∈ I implies (a, u) ∈ I.

Since Γ might be an infinite alphabet, for inputs of algorithms we need to encode elements of Γ over a finite alphabet. For ζ ∈ L let Σ_ζ be a finite generating set for G_ζ such that Σ_ζ ∩ Σ_ξ = ∅ for ζ ≠ ξ. Then Σ = ⋃_{ζ∈L} Σ_ζ is a generating set for G. Every element of Γ_ζ can be represented as a word from Σ_ζ*. However, in general, representatives are not unique. Deciding whether two words w, v ∈ Σ_ζ* represent the same element of Γ_ζ is the word problem for G_ζ. We give more details on how to represent power words in Section 5.1.1.
Let C be a countable class of finitely generated groups with finite descriptions; one might, for instance, take a subclass of finitely (or recursively) presented groups. Then a graph product GP(L, I, (G_ζ)_{ζ∈L}) with G_ζ ∈ C for all ζ ∈ L has a finite description as well: such a group is given by the finite graph (L, I) and a list of the finite descriptions of the groups G_ζ ∈ C for ζ ∈ L. We denote by GP(C) the class of all such graph products.

Complexity
We assume that the reader is familiar with the complexity classes P and NP; see e.g. [5] for details. Let C be any complexity class and K ⊆ ∆*, L ⊆ Σ* languages. Then L is C-many-one-reducible to K if there is a C-computable function f : Σ* → ∆* such that for all w ∈ Σ* we have w ∈ L if and only if f(w) ∈ K.

Circuit complexity
We use circuit complexity for classes below deterministic logspace (L for short). Instead of defining these classes directly, we introduce the slightly more general notion of AC^0-Turing reducibility. A language L ⊆ {0, 1}* is AC^0-Turing-reducible to K ⊆ {0, 1}* if there is a family of constant-depth, polynomial-size Boolean circuits with oracle gates for K deciding L. More precisely, we can define the class AC^0(K) of languages which are AC^0-Turing-reducible to K ⊆ {0, 1}*: a language L ⊆ {0, 1}* belongs to AC^0(K) if there exists a family (C_n)_{n≥0} of Boolean circuits with the following properties:
• C_n has n distinguished input gates x_1, …, x_n and a distinguished output gate o.
• C_n accepts exactly the words from L ∩ {0, 1}^n, i.e., if the input gate x_i receives the input a_i ∈ {0, 1} for all i, then the output gate o evaluates to 1 if and only if a_1 a_2 ⋯ a_n ∈ L.
• Every circuit C_n is built up from input gates, not-gates, and-gates, or-gates, and oracle gates for K (which output 1 if and only if their input is in K).
The incoming wires for an oracle gate for K have to be ordered, since the language K is not necessarily closed under permutations of symbols.
• All gates may have unbounded fan-in, i.e., there is no bound on the number of incoming wires for a gate.
• There is a polynomial p(n) such that C_n has at most p(n) many gates and wires.
• There is a constant d such that every C_n has depth at most d (the depth is the length of a longest path from an input gate x_i to the output gate o).
This is in fact the definition of non-uniform AC^0(K). Here, "non-uniform" means that the mapping n ↦ C_n is not restricted in any way; in particular, it can be non-computable. For algorithmic purposes one usually adds a uniformity requirement to the above definition. The most "uniform" version of AC^0(K) is DLOGTIME-uniform AC^0(K). For this, one encodes the gates of each circuit C_n by bit strings of length O(log n). Then the circuit family (C_n)_{n≥0} is called DLOGTIME-uniform if (i) there exists a deterministic Turing machine that computes for a given gate u ∈ {0, 1}* of C_n (|u| ∈ O(log n)) in time O(log n) the type of gate u, where the types are x_1, …, x_n, not, and, or, and oracle gate, and (ii) there exists a deterministic Turing machine that decides for two given gates u, v ∈ {0, 1}* of C_n (|u|, |v| ∈ O(log n)) and a binary encoded integer i with O(log n) many bits in time O(log n) whether u is the i-th input gate for v. In the following, we write uAC^0(K) for DLOGTIME-uniform AC^0(K). For more details on these definitions we refer to [59]. If the language L (or K) in the above definition of uAC^0(K) is defined over a non-binary alphabet Σ, then one first has to fix a binary encoding of Σ as words in {0, 1}^ℓ for some large enough ℓ ∈ N. If C = {K_1, …, K_n} is a finite class of languages, then AC^0(C) is the same as AC^0(K_1, …, K_n), where circuits may contain oracle gates for each of the K_i. If C is an infinite complexity class, then uAC^0[C] is the union of all classes uAC^0(K) for K ∈ C. Note that uAC^0[C](K) is the same as ⋃_{L∈C} uAC^0(K, L).
The class uNC^1 is defined as the class of languages accepted by DLOGTIME-uniform families of Boolean circuits having bounded fan-in, polynomial size, and logarithmic depth. As a consequence of Barrington's theorem [6], we have uNC^1 = uAC^0(WP(A_5)), where A_5 is the alternating group on 5 elements [59, Corollary 4.54]. Moreover, the word problem for any finite group G is in uNC^1. If G is finite and non-solvable, its word problem is uNC^1-complete, even under uAC^0-many-one reductions. Robinson proved that the word problem for the free group F_2 is uNC^1-hard [55], i.e., uNC^1 ⊆ uAC^0(WP(F_2)).
The class uTC^0 is defined as uAC^0(Majority), where Majority is the language of all bit strings containing more 1s than 0s. Important problems that are complete (under uAC^0-Turing reductions) for uTC^0 are:
• the language {w ∈ {a, b}* | |w|_a = |w|_b}, where |w|_a denotes the number of occurrences of a in w, see e.g. [59],
• the computation (of a certain bit) of the binary representation of the product of two or any (unbounded) number of binary encoded integers [29],
• the computation (of a certain bit) of the binary representation of the integer quotient of two binary encoded integers [29],
• the word problem for every infinite finitely generated solvable linear group [35],
• the conjugacy problem for the Baumslag-Solitar group BS(1, 2) [16].

Counting complexity classes
Counting complexity classes are built on the idea of counting the number of accepting and rejecting computation paths of a Turing machine. For a nondeterministic Turing machine M, let accept_M (resp., reject_M) be the function that assigns to an input x for M the number of accepting (resp., rejecting) computation paths on input x. We define the function gap_M by gap_M(x) = accept_M(x) − reject_M(x). The class of functions GapL and the class of languages C=L are defined as follows: GapL = { gap_M | M is a non-deterministic, logarithmic space-bounded Turing machine } and C=L = { L | there is an f ∈ GapL such that x ∈ L if and only if f(x) = 0 }. We write GapL^K and C=L^K to denote the corresponding classes where the Turing machine M is equipped with an oracle for the language K. We have the following relationships of C=L with other complexity classes; see e.g. [1].

Groups with an easy power word problem

In this section we start with two easy examples of groups where the power word problem can be solved efficiently.
Theorem 14 If G is a finitely generated nilpotent group, then PowWP(G) is in uTC 0 .
Proof In [51], the so-called word problem with binary exponents was shown to be in uTC^0. Here the input is a power word u_1^{x_1} ⋯ u_n^{x_n}, but all the u_i are required to be one of the standard generators of the group G. For arbitrary power words, we can apply the same techniques as in [51]: we compute Mal'cev normal forms of all u_i using [51, Theorem 5], then we use the power polynomials from [51, Lemma 2] to compute Mal'cev normal forms with binary exponents of all u_i^{x_i}. Finally, we compute the Mal'cev normal form of u_1^{x_1} ⋯ u_n^{x_n}, again using [51, Theorem 5].
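To illustrate why binary encoded exponents are harmless for nilpotent groups, consider the discrete Heisenberg group (a standard finitely generated nilpotent group), realized as 3 × 3 upper unitriangular integer matrices. The following sketch is not the uTC^0 circuit from [51]; it merely shows that a power u^x with a binary encoded exponent x can be evaluated with O(log |x|) multiplications by repeated squaring instead of |x| multiplications:

```python
# Evaluating powers with binary exponents in the discrete Heisenberg group,
# realized as 3x3 upper unitriangular integer matrices (illustration only).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv(m):
    # inverse of [[1,a,c],[0,1,b],[0,0,1]] is [[1,-a,ab-c],[0,1,-b],[0,0,1]]
    a, b, c = m[0][1], m[1][2], m[0][2]
    return [[1, -a, a * b - c], [0, 1, -b], [0, 0, 1]]

def mat_pow(m, x):
    if x < 0:                      # unitriangular inverses are again integral
        m, x = mat_inv(m), -x
    r = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    while x:                       # repeated squaring: O(log x) products
        if x & 1:
            r = mat_mul(r, m)
        m = mat_mul(m, m)
        x >>= 1
    return r

# generators a = E + e12, b = E + e23
A = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
B = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]

# the power word A^5 B^3 A^-5 B^-3 evaluates to a central element
w = mat_mul(mat_mul(mat_pow(A, 5), mat_pow(B, 3)),
            mat_mul(mat_pow(A, -5), mat_pow(B, -3)))
```

Here the commutator of A^5 and B^3 lands in the center of the group, with the product 5 · 3 = 15 in the upper-right entry.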
Theorem 14 has been generalized in [19], where it is shown that the power word problem for a wreath product G ≀ Z with G finitely generated nilpotent belongs to uTC^0. Other classes of groups where the power word problem belongs to uTC^0 are iterated wreath products of the form Z^r ≀ (Z^r ≀ (Z^r ⋯)) and free solvable groups [19], as well as the solvable Baumslag-Solitar groups BS(1, q) [45].
The Grigorchuk group (defined in [26] and also known as the first Grigorchuk group) is a finitely generated subgroup of the automorphism group of an infinite binary rooted tree. It is a torsion group (every element has order 2^k for some k) and it was the first example of a group of intermediate growth.

Theorem 15 The power word problem for the Grigorchuk group is uAC^0-many-one-reducible to its word problem (under suitable assumptions on the input encoding).
Proof Let G denote the Grigorchuk group. By [7, Theorem 6.6], every element of G that can be represented by a word of length m over a finite set of generators has order at most Cm^{3/2} for some constant C. W.l.o.g. C = 2^ℓ for some ℓ ∈ N. On input of a power word u_1^{x_1} ⋯ u_n^{x_n} with all words u_i of length at most m, we can compute the smallest k with 2^k ≥ m in uAC^0. We have 2^k ≤ 2m. Now we know that an element of length at most m has order bounded by 2^{2k+ℓ}. Since the order of every element of G is a power of two, this means that g^{2^{2k+ℓ}} = 1 for all g ∈ G of length at most m. Thus, we can reduce all exponents modulo 2^{2k+ℓ} (i.e., we drop all but the 2k + ℓ least significant bits). Now all exponents are at most 2^{2k+ℓ} ≤ 4Cm^2 and the power word can be written as an ordinary word (to do this in uAC^0, we need a neutral letter to pad the output to a fixed word length). Note that this can be done by a uniform circuit family.
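The exponent truncation step of this proof can be sketched as follows; the order bound with C = 2^ℓ is taken as an assumption, and ℓ is simply a parameter:

```python
# Sketch of the exponent truncation in Theorem 15: since g^(2^(2k+ell)) = 1
# for all elements of word length <= m (with 2^k >= m), every exponent can
# be replaced by its 2k+ell least significant bits.

def truncate_exponents(powers, ell):
    """powers: list of (word, exponent) pairs; ell: the assumed constant
    with C = 2**ell. Returns the power word with truncated exponents."""
    m = max((len(u) for u, _ in powers), default=1)
    k = max(1, (m - 1).bit_length())        # smallest k >= 1 with 2**k >= m
    mask = (1 << (2 * k + ell)) - 1         # keep the 2k+ell low bits
    # Python's & on a negative int yields the value modulo 2**(2k+ell),
    # so negative exponents are handled correctly as well
    return [(u, x & mask) for u, x in powers]
```

For example, with ℓ = 0 and the word "ab" (so k = 1 and g^4 = 1 for all such g), the exponent −1 becomes 3, in accordance with g^{−1} = g^3.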
Theorem 15 applies only if the generating set contains a neutral letter; otherwise, the reduction is in uTC^0. It is well-known that the word problem for the Grigorchuk group is in L (see e.g. [49, 53]). Thus, the power word problem is in L as well. On the other hand, the compressed word problem (mentioned in the introduction) for the Grigorchuk group is PSPACE-complete [8].

Power word problems in finite extensions
Also for finite groups the power word problem is easy; it belongs to uNC^1. The following result generalizes this fact to finite extensions: if H is a finite-index subgroup of a finitely generated group G, then PowWP(G) reduces to PowWP(H).

Proof W.l.o.g. we may replace H by its normal core, a finite-index normal subgroup of G contained in H; the power word problem of the normal core is reducible via a homomorphism (i.e., in particular in uTC^0) to PowWP(H). Thus, we can assume that from the beginning H is normal and that Q = G/H is a finite quotient group. Notice that H is finitely generated as G is so; see e.g. [56, 1.6.11]. Let R ⊆ G denote a set of representatives of Q with 1 ∈ R. If we choose a finite generating set Σ for H, then Σ ∪ (R \ {1}) becomes a finite generating set for G.
Let u = u_1^{x_1} ⋯ u_n^{x_n} denote the input power word. As a first step, for every exponent x_i we compute numbers y_i, z_i ∈ Z with x_i = y_i |Q| + z_i and 0 ≤ z_i < |Q| (i.e., we compute the division with remainder by |Q|). This is possible in uNC^1 [29]. Note that u_i^{|Q|} is trivial in the quotient Q = G/H and, therefore, represents an element of H. Using the conjugate collection process from [55, Theorem 5.2] we can compute in uNC^1 a word h_i ∈ Σ* such that u_i^{|Q|} =_G h_i. Then we replace in the input word every u_i^{x_i} by h_i^{y_i} u_i^{z_i}, where we write u_i^{z_i} as a word without exponents. We have obtained a word where all factors with exponents represent elements of H. Finally, we proceed like Robinson [55] for the ordinary word problem, treating words with exponents as single letters (this is possible because they are in H).
To give some more details for the last step, let us denote the result of the previous step as g_0 h_1^{y_1} g_1 ⋯ h_n^{y_n} g_n. Once again, we follow [55] and write this word as h̄_0 r_0 h_1^{y_1} h̄_1 r_1 ⋯ h_n^{y_n} h̄_n r_n with h̄_i ∈ Σ* and r_i ∈ R. Pushing the representatives r_i to the right replaces each factor h_i^{y_i} h̄_i by its conjugate under some a_i ∈ R, i.e., by a homomorphism from a fixed finite set of homomorphisms. Thus, a power word P_i over the alphabet Σ with P_i =_H a_i h_i^{y_i} h̄_i a_i^{-1} can be computed in uTC^0. Also all w_i belong to H, since a_i r_i and a_{i+1} belong to the same coset of H. Moreover, every w_i comes from a fixed finite set (namely R · R · R^{-1}) and, thus, can be rewritten to a word w'_i ∈ Σ*. Now it remains to verify whether a_{n+1} = 1 (solving the word problem for Q, which is in uNC^1). If this is not the case, we output any non-identity word in H; otherwise we output the power word P = h̄_0 w'_0 P_1 w'_1 P_2 w'_2 ⋯ P_n w'_n. As a_{n+1} = 1, we have P =_G u.
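The first step of the proof above, splitting each exponent as x_i = y_i |Q| + z_i with 0 ≤ z_i < |Q|, is ordinary division with remainder: in uNC^1 on binary encodings [29], and in Python simply `divmod`, which produces the required non-negative remainder also for negative exponents:

```python
# Division with remainder by |Q|, as used to split u_i^{x_i} into a part
# h_i^{y_i} that lies in H and a short remainder u_i^{z_i}.

def split_exponent(x, q_order):
    """Return (y, z) with x = y*q_order + z and 0 <= z < q_order."""
    y, z = divmod(x, q_order)   # Python's floor division already
    return y, z                 # guarantees 0 <= z < q_order

# e.g. with |Q| = 6: u^{-7} = (u^6)^{-2} * u^5, and u^6 lies in H
```

The point of the non-negative remainder is that u_i^{z_i} can then be written out as an ordinary word of length less than |Q|.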

Power word problems in graph products
The main results of this section are transfer theorems for the complexity of the power word problem in graph products. We will prove such a transfer theorem for the non-uniform setting (where the graph product is fixed) as well as for the uniform setting (where the graph product is part of the input). Before that, we consider a special case, the so-called simple power word problem for graph products, in Section 5.1. In Section 5.2 we prove some further combinatorial results on traces. Finally, in Section 5.3 we prove the transfer theorems for graph products.

The simple power word problem for graph products
In this section we consider a restricted version of the power word problem for graph products. Later, we will use this restricted version in our algorithms for the unrestricted power word problem. Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product and define Γ_ζ, Σ_ζ, Γ, Σ as in Section 2.6.5. A simple power word is a word w = w_1^{x_1} ⋯ w_n^{x_n}, where w_1, …, w_n ∈ Γ and x_1, …, x_n ∈ Z are binary encoded integers. Each w_i is encoded as a word over some finite alphabet Σ_ζ. Note that this is more restrictive than a power word: we only allow powers of elements from a single base group. The simple power word problem SPowWP(G) is to decide whether w =_G 1, where w is a simple power word. We also consider a uniform version of this problem: with USPowWP(GP(C)) we denote the uniform simple power word problem for graph products from the class GP(C) (see the last paragraph of Section 2.6.5). The following results on the complexity of the (uniform) simple power word problem are obtained by using the corresponding algorithm for the (uniform) word problem [33, Theorem 5.6.5, Theorem 5.6.14] and replacing the oracles for the word problems of the base groups with oracles for the power word problems of the base groups.
Proposition 17 For the (uniform) simple power word problem the following holds.
• Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a fixed graph product of f.g. groups. Then SPowWP(G) ∈ uAC^0({WP(F_2)} ∪ {PowWP(G_ζ) | ζ ∈ L}).
• Let C be a class of f.g. groups. Then USPowWP(GP(C)) ∈ C=L^{UPowWP(C)}.

We adapt the proof from [33] for the word problem to the setting of the simple power word problem. The proofs for the non-uniform and uniform case are quite different. Indeed, in the non-uniform case, we can work by induction over the size of the (in-)dependence graph, while for the uniform case we rely on an embedding into some linear space of infinite dimension. Therefore, we split the proofs into two subsections: in Section 5.1.2 we treat the non-uniform case and later, in Section 5.1.3, we develop an algorithm for the uniform case.

Input encoding
Let us give some details on how to encode the input for the (simple) power word problem in graph products. There are certainly other ways to represent the input for our algorithms without changing the complexity; but whenever the encoding is important, we assume that it is done as described in this section. We will use blocks of equal size to encode the different parts of the input. This makes it possible that parts of the computation can be done in uAC^0. We assume that there is a letter 1 ∈ Σ representing the group identity.
The input of the power word problem in a graph product is p_1^{x_1} ⋯ p_n^{x_n}, where p_i = a_{i,1} ⋯ a_{i,m_i} ∈ Σ* (note that each letter of Γ can be written as a word over Σ). We can pad with the identity element so that each p_i has length n, i.e., p_i = a_{i,1} ⋯ a_{i,n} with a_{i,j} ∈ Σ. We encode each letter a ∈ Σ as a tuple (ζ, a) where ζ = alph(a). In the non-uniform case, there is a constant k such that k bits are sufficient to encode any element of L and any letter of any Σ_ζ for ζ ∈ L. In the uniform case, we encode the elements of L as well as the elements of each Σ_ζ using n bits. Encoding a word p_i thus requires 2nk bits in the non-uniform case and 2n^2 bits in the uniform case. For the simple power word problem we impose the restriction alph(a_{i,j}) = alph(a_{i,k}) for all i, j, k ∈ {1, …, n}, as mixed powers are not allowed.
We combine the above encoding of the words p_i with a binary encoding of the exponents x_i to obtain the encoding of a power word. Each exponent is encoded using n bits. Note that we can do this because, if an exponent is smaller, we can pad it with zeroes, and, if an exponent is larger, we can choose a larger n and pad the input word with the identity element 1. This leads to the following encoding of a power word, which in the non-uniform case uses (2k + 1)n^2 bits.
In the uniform case, this encoding requires (2n + 1)n^2 bits. Furthermore, we also need to encode the descriptions of the base groups and the independence graph. By padding the input appropriately, we may assume that there are n base groups and that each can be encoded using n bits. The independence graph can be given as an adjacency matrix, using n^2 bits.
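The fixed block sizes are what later allow uAC^0 circuits to locate the i-th letter of the j-th word by index arithmetic alone. A simplified sketch of the uniform-case layout (the exact bit layout in the paper may differ; the identity letter is assumed here to be encoded as (0, 0)):

```python
# Simplified block encoding of a power word in the uniform case: every
# letter is a pair (zeta, a) of n-bit blocks, every word is padded with
# the identity letter (assumed (0, 0)) to length n, every exponent gets
# n bits, and the list of factors is padded to length n.

def encode_power_word(powers, n):
    """powers: list of (word, exponent), a word being a list of (zeta, a)
    pairs of non-negative integers < 2**n. Returns a 0/1 string of
    (2n+1)*n**2 bits."""
    assert len(powers) <= n
    powers = powers + [([], 0)] * (n - len(powers))    # pad factor count
    bits = []
    for word, exp in powers:
        word = word + [(0, 0)] * (n - len(word))       # pad with identity
        for zeta, a in word:
            bits.append(format(zeta, f"0{n}b"))        # n bits for alph
            bits.append(format(a, f"0{n}b"))           # n bits for letter
        bits.append(format(exp % 2**n, f"0{n}b"))      # n bits for exponent
        # each factor: n letters * 2n bits + n exponent bits = (2n+1)*n
    return "".join(bits)
```

With n factors of (2n + 1)n bits each, the total is the (2n + 1)n^2 bits stated above.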

The non-uniform case
Before solving the simple power word problem in G we prove several lemmata to help us achieve this goal. Lemma 18 below is due to Kausch [33]. For this lemma, we first have to introduce some notation and the notion of a semidirect product: take two groups H and N with a left action • of H on N (i.e., for each h ∈ H the map g ↦ h • g is an automorphism of N). The corresponding semidirect product N ⋊ H is a group with underlying set N × H and the multiplication defined by (n_1, h_1)(n_2, h_2) = (n_1 (h_1 • n_2), h_1 h_2). If B is a group and u an arbitrary object, we write B_u = {(g, u) | g ∈ B} for an isomorphic copy of B with multiplication (g, u)(g′, u) = (gg′, u). In the following, let B be finitely generated. We begin by looking at the free product G ≅ ∗_{k∈N} B_k of countably many copies of B. Kausch [33, Lemma 5.4.5] has shown that the word problem for G can be solved in uAC^0 with oracle gates for WP(B) and WP(F_2). We show a similar result for the simple power word problem. Our proof is mostly identical to the one presented in [33], with only a few changes to account for the different encoding of the input. We use the following lemma on the algebraic structure of G.

Lemma 18 [33, Lemma 5.4.4] Let B be a f.g. group and G = ∗_{k∈N} B_k. Then we have G ≅ F(X) ⋊ B, where F(X) is a free group (with basis X as described below) and g ∈ B acts on F(X) by conjugating with (g, 0). The choice of 0 ∈ N in Lemma 18 as the distinguished element of N is arbitrary.
With Lemma 18, we can solve the simple power word problem for G.

Lemma 19
Let B and G be as in Lemma 18. Given a power word w = w_1^{x_1} ⋯ w_n^{x_n} with w_i ∈ B × N, where the exponents x_i ∈ Z are encoded as binary numbers, one can decide in uAC^0 with oracles for PowWP(B) and WP(F_2) whether w =_G 1.

Proof The set X is given by X = { (g, k)(g, 0)^{-1} | g ∈ B \ {1}, k ∈ N \ {0} }. Let ϕ : G → B be the homomorphism defined by ϕ(b, k) = b. We can assume ϕ(w) = w_1^{x_1} ⋯ w_n^{x_n} =_B 1, as otherwise w ≠_G 1. Now our aim is to write w as an element of the kernel of ϕ, which is F(X).
For w_i = (b_i, k_i) ∈ B × N let g_i denote the power word b_1^{x_1} ⋯ b_i^{x_i} over B. Observe that we can construct the g_i in uAC^0. Using the fact that g_n = ϕ(w) =_B 1, we can rewrite w as an element of the kernel over the basis X. Next we define a finite subset Y ⊆ X such that w ∈ F(Y) ≤ F(X). From the definition of Y it follows that |Y| ≤ 2(n − 1). Two elements (g_i, k)(g_i, 0)^{-1} and (g_j, ℓ)(g_j, 0)^{-1} from Y are equal if and only if k = ℓ and g_i g_j^{-1} =_B 1. Note that g_i g_j^{-1} =_B 1 is an instance of PowWP(B). Hence, using an oracle for PowWP(B) one can decide whether two elements of Y represent the same generator of F(Y).
As a last step we simplify the basis Y by mapping it to the integer interval [1, 2(n − 1)] via a map ψ : Y → [1, 2(n − 1)]. The map ψ can be computed in uAC^0 with oracle gates for PowWP(B) and defines an isomorphism between F(Y) and F([1, 2(n − 1)]). It is well known that the free group F(N) can be embedded into F_2 by the mapping k ↦ a^{-k} b a^k. Since this mapping can be computed in uTC^0 ⊆ uAC^0(WP(F_2)), we can finally check w =_{F(Y)} 1 in uAC^0 with oracles for PowWP(B) and WP(F_2).

For the following lemma we need the notion of an amalgamated product. For groups A, P and Q and injective homomorphisms φ : A → P and ψ : A → Q, the amalgamated product P ∗_A Q is the free product P ∗ Q modulo the relations {φ(a) = ψ(a) | a ∈ A}. In the following, A is a subgroup of P and Q and φ and ψ are the identity. The following lemma is due to Kausch [33]. 4

Lemma 20 [33, Lemma 5.5.2] Let G = P ∗_A (B × A) and consider the surjective homomorphism π : G → P with π(g) = g for g ∈ P and π(b) = 1 for all b ∈ B. Then we have G ≅ (ker π) ⋊ P and ker π ≅ ∗_{v∈P/A} B_v, where the isomorphism ϕ : ∗_{v∈P/A} B_v → ker π maps (b, v) to v b v^{-1}.
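The classical embedding F(N) ↪ F_2, k ↦ a^{-k} b a^k, used in the proof of Lemma 19 can be made concrete. In the sketch below, letters of F_2 are written as 'a', 'b' with uppercase 'A', 'B' for their inverses; a word over integer generators is translated and then freely reduced, and it is trivial in the free group if and only if its image reduces to the empty word:

```python
# Embedding F(N) into F_2 via k -> a^{-k} b a^{k}, plus free reduction.
# Letters: 'a', 'b'; inverses: 'A', 'B'.

def embed_generator(k, sign):
    image = 'A' * k + 'b' + 'a' * k          # a^{-k} b a^{k}
    if sign < 0:
        image = image.swapcase()[::-1]       # formal inverse of the image
    return image

def free_reduce(word):
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():  # cancel x x^{-1}
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def trivial_in_f2(gens):
    """gens: list of (k, sign) pairs, representing x_k^{sign} ... in F(N);
    returns True iff the word is trivial (checked via the F_2 image)."""
    return free_reduce(''.join(embed_generator(k, s) for k, s in gens)) == ''
```

Since the embedding is injective, triviality in F(N) and triviality of the image in F_2 coincide; e.g. x_1 x_2 x_1^{-1} x_2^{-1} maps to a non-trivial element.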
We want to solve the simple power word problem by induction. For the inductive step, we actually need to solve the following slightly more general problem.

Definition 21 Let G be a graph product and H ≤ G a fixed subgroup. The generalized simple power word problem GSPowWP(G, H) asks: given a simple power word w, does the element of G represented by w belong to H?
Let w = a_1^{x_1} ⋯ a_n^{x_n} be the input to the generalized simple power word problem and let π(w) = π(a_1)^{x_1} ⋯ π(a_n)^{x_n}. We have w =_G π(w) if and only if w ∈ G_S. This is equivalent to w^{-1} π(w) =_G 1. Moreover, the projection π can be computed in uAC^0 when elements of Γ are represented by words from ⋃_{ζ∈L} Σ_ζ* (since we assume 1 ∈ Σ).
Lemma 23 (Proposition 17, Part 1) Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product of f.g. groups. Then SPowWP(G) ∈ uAC^0({WP(F_2)} ∪ {PowWP(G_ζ) | ζ ∈ L}); that is, the simple power word problem in G can be solved in uAC^0 with oracles for the power word problem in each base group G_ζ and the word problem of the free group F_2.
Proof We proceed by induction on the cardinality of L. If |L| = 1, we can solve the simple power word problem in G by solving the power word problem in the base group. Otherwise, fix an arbitrary ξ ∈ L. We define L′ = L \ {ξ}, I′ = I ∩ (L′ × L′), link(ξ) = {ζ ∈ L | (ξ, ζ) ∈ I} and the three groups P = GP(L′, I′, (G_ζ)_{ζ∈L′}), A = GP(link(ξ), I ∩ (link(ξ) × link(ξ)), (G_ζ)_{ζ∈link(ξ)}) and B = G_ξ. Now we can write G as an amalgamated product: G = P ∗_A (A × B).
By the induction hypothesis we can solve SPowWP(P ) and SPowWP(A) in uAC 0 with oracles for PowWP(G ζ ) (for all ζ ∈ L) and WP(F 2 ). By Lemma 22 we can solve GSPowWP(A, P ) in uAC 0 with an oracle for SPowWP(P ). It remains to show how to solve the simple power word problem in the amalgamated product.

The uniform case
Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product of f.g. groups. The following embedding of G into a (possibly infinite-dimensional) linear group has been presented in [33]. We write Z^{(Γ)} for the free abelian group with basis Γ. It consists of all mappings f : Γ → Z such that f(c) ≠ 0 for only finitely many c ∈ Γ. We write such a mapping f as a formal sum S = Σ_{c∈Γ} λ_c · c with λ_c = f(c) ∈ Z and call λ_c the coefficient of c in S. The mapping σ : G → GL(Z^{(Γ)}) is defined by w ↦ σ_w, where σ_w = σ_{a_1} ⋯ σ_{a_n} for w = a_1 ⋯ a_n with a_i ∈ Γ. For a ∈ Γ the mapping σ_a : Z^{(Γ)} → Z^{(Γ)} is defined as the linear extension of a case distinction; in particular, σ_a(b) = b if a ∈ Γ_ζ, b ∈ Γ_ξ for some ζ ≠ ξ and (ζ, ξ) ∈ I.
Lemma 24 [33, Lemma 3.3.4] Let w ∈ Γ* be reduced and w b =_G u b v such that b ∈ Γ_ξ, u, v ∈ Γ*, (b, v) ∈ I and b is the unique maximal letter of ub. Moreover, let u = u_0 a_1 u_1 ⋯ a_n u_n with a_i ∈ Γ_ζ and u_i ∈ (Γ \ Γ_ζ)*. Then for all c ∈ Γ_ζ we have λ_c ≥ 0.

Our solution to the uniform simple power word problem is based on the solution to the word problem presented in [33]. The underlying idea is to add an additional free factor ⟨χ⟩ for a new generator χ to the graph product.

Algorithm 1 Computing the coefficient of a ∈ Γ_ζ in σ_w(χ)
Input: a ∈ Γ_ζ and a_1^{x_1} a_2^{x_2} ⋯ a_n^{x_n} with a_i ∈ Γ_{ζ_i} and b_i = [a_i^{x_i}] ∈ Γ_{ζ_i}
(k, ℓ, s) ← (n + 1, n + 1, 1)
for i in [n, …, 1] do
  if k = n + 1 ∧ ℓ = n + 1 then  ▷ σ_{b_i}(χ) = 2b_i + χ
Proof Let w = a_1^{x_1} ⋯ a_n^{x_n} ∈ G, where a_i ∈ Γ_{ζ_i} and x_i ∈ Z. If w ≠_G 1, then there are ζ ∈ L and 1 ≤ k ≤ ℓ ≤ n such that the coefficient of [π_ζ(a_k^{x_k} ⋯ a_ℓ^{x_ℓ})] in σ_w(χ) is not zero.

Fig. 2 Each inner node is labeled with the coefficient it contributes to. The algorithm stores the coefficient using two indices k and ℓ. The nodes on the second level correspond to σ_b(χ) = 2b + χ, the nodes on the third level correspond to σ_{ab}(χ) = σ_a(2b + χ) = 2b + 2a + χ. If they are labeled with a, they have one leaf node as a child, which is an accepting path (or a rejecting path if the sign is negative; here all signs are positive). If they are not labeled with a, then there are two leaf node children, one an accepting path, the other a rejecting path, so they do not affect the difference of accepting and rejecting paths. Here, the difference of accepting and rejecting paths is 2, which is the coefficient λ_a of σ_{ab}(χ).
To compute the coefficients we use Algorithm 1. For simplicity we assume a_i^{x_i} ≠_{G_{ζ_i}} 1 for all i ∈ [1, n]. This can be enforced by a precomputation using UPowWP(C) as an oracle. Let b_i ∈ Γ_{ζ_i} with b_i =_{G_{ζ_i}} a_i^{x_i}. Our nondeterministic logspace algorithm will produce a computation tree such that the coefficient of a ∈ Γ in σ_w(χ) will be the number of accepting leaves minus the number of rejecting leaves (as required by the definition of GapL). The algorithm stores in each configuration an element [π_{ζ_k}(b_k ⋯ b_ℓ)] ∈ Γ using the two indices k and ℓ. We use (k, ℓ) = (n + 1, n + 1) to represent χ. In addition to k and ℓ we store a sign s (1 or −1), saying whether the configuration gives a positive or negative contribution to the coefficient of [π_{ζ_k}(b_k ⋯ b_ℓ)]. The root node of the computation tree corresponds to χ. Let w = w′a with a ∈ Γ. Then σ_w(χ) = σ_{w′}(σ_a(χ)). The nodes on the second level, that is, the children of the root node, correspond to σ_a(χ). The last level made up of inner nodes corresponds to σ_w(χ). At that point the algorithm checks whether the node corresponds to the input element a ∈ Γ, i.e., whether a = [π_{ζ_k}(b_k ⋯ b_ℓ)] holds. This is done using the oracle for the uniform power word problem in C. If it holds, then the computation will accept the input if the stored sign s is 1 and reject if s = −1. If a = [π_{ζ_k}(b_k ⋯ b_ℓ)] does not hold, then the algorithm branches into two leaf nodes, one accepting and one rejecting, which gives a zero contribution to the coefficient of a. In this way, it is ensured that the coefficient of a is the difference of the number of accepting paths and the number of rejecting paths. Therefore, the computation of a coefficient is in GapL^{UPowWP(C)}, and we can check in C=L^{UPowWP(C)} whether a coefficient is zero. An example of a computation tree is presented in Fig. 2.
Finally, we can check in C=L^{UPowWP(C)} whether all coefficients of elements [π_{ζ_k}(b_k ⋯ b_ℓ)] are zero, as C=L is closed under conjunctive truth-table reductions [2, Proposition 17] and the proof holds for every relativized version C=L^A.

Combinatorics on Traces
In this section we develop various tools concerning combinatorics on traces, which later will be used to solve the power word problem in graph products. As a motivation and an easy example, we start with the analogous construction for free groups we presented in [43], before looking into the more technical case of graph products. The first task for solving the power word problem in a free group is to compute certain unique normal forms for the words u i of an instance of the power word problem as in (4) below.
We use the notation from Section 2.6.1. In particular, we use the rewriting system S_free = { a a^{-1} → 1 | a ∈ Σ }. Fix an arbitrary order on the input alphabet Σ. This gives us a lexicographic order on Σ*, which is denoted by ⪯. Let Ω ⊆ IRR(S_free) ⊆ Σ* denote the set of words w such that
• w is non-empty,
• w is cyclically reduced (i.e., w cannot be written as a u a^{-1} for a ∈ Σ),
• w is primitive (i.e., w cannot be written as u^n for n ≥ 2),
• w is lexicographically minimal among all cyclic permutations of w and of w^{-1} (i.e., w ⪯ uv for all u, v ∈ Σ* with vu = w or vu = w^{-1}).
Notice that Ω consists of Lyndon words [46, Chapter 5.1] with the stronger requirements of being freely reduced, cyclically reduced, and also minimal among the conjugacy class of the inverse. In [43], the first step is to rewrite the input power word in the form

w = s_0 u_1^{x_1} s_1 ⋯ u_n^{x_n} s_n with u_i ∈ Ω and s_i ∈ IRR(S_free). (4)

This transformation can be done by a rather easy uAC^0(WP(F_2)) computation. The reason for doing this lies in the following crucial lemma: essentially it says that, if a long factor of u_i^{x_i} cancels with some u_j^{x_j}, then already u_i = u_j. Thus, if a power word of the form (4) represents the group identity, every u_i with a large exponent must cancel against other occurrences of the very same word u_i. Hence, only powers of the same u_i can cancel, implying that we can make the exponents of the different u_i independently smaller.
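Membership in Ω can be tested directly. The sketch below works over the generators {a, b} with uppercase letters as formal inverses and uses Python's string order as the (arbitrary but fixed) lexicographic order — so uppercase letters sort before lowercase ones, which is harmless since any fixed order will do:

```python
# Membership test for the set Omega: non-empty, cyclically reduced,
# primitive, and lexicographically minimal among all cyclic permutations
# of w and of its inverse. Uppercase = formal inverse letter.

def inverse(w):
    return w.swapcase()[::-1]

def is_reduced(w):
    # no factor x x^{-1}
    return all(w[i] != w[i + 1].swapcase() for i in range(len(w) - 1))

def is_primitive(w):
    # w = u^n with n >= 2 iff w is a non-trivial rotation of itself;
    # classical check: the only occurrence of w in ww at position >= 1
    # is the trivial one at position len(w)
    return (w + w).find(w, 1) == len(w)

def in_omega(w):
    if not w or not is_reduced(w) or not is_reduced(w[-1] + w[0]):
        return False                    # empty or not cyclically reduced
    if not is_primitive(w):
        return False
    rotations = [w[i:] + w[:i] for i in range(len(w))]
    v = inverse(w)
    rotations += [v[i:] + v[:i] for i in range(len(v))]
    return w == min(rotations)
```

Under this order, exactly one word in the set of cyclic permutations of w and w^{-1} lies in Ω; e.g. "AB" is the Ω-representative of the conjugacy classes of ab and its inverse.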
Lemma 26 Let p, q ∈ Ω, x, y ∈ Z and let v be a factor of p^x and w a factor of q^y. If vw ⇒*_{S_free} 1 and |v| = |w| ≥ |p| + |q| − 1, then p = q.
Proof Since p and q are cyclically reduced, v and w are freely reduced; thus vw ⇒*_{S_free} 1 and |v| = |w| imply that w = v^{-1} as words. Hence, v has the two periods |p| and |q|. Since v is long enough, by the theorem of Fine and Wilf [20] it also has the period gcd(|p|, |q|). This means that also p and q have period gcd(|p|, |q|) (since cyclic permutations of p and q are factors of v). Assuming gcd(|p|, |q|) < |p| would mean that p is a proper power, contradicting the fact that p is primitive. Hence, |p| = |q|. Since |v| ≥ |p| + |q| − 1 = 2|p| − 1, p is a factor of v, which itself is a factor of q^{-y}. Thus, p is a cyclic permutation of q or of q^{-1}. By the last condition on Ω, this implies p = q.
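The period argument above can be sanity-checked mechanically. The sketch spells out the period property and the Fine–Wilf conclusion under the (stronger) length bound |p| + |q| − 1 used in the lemma; this is only a check of the combinatorial fact, not part of any algorithm:

```python
# Fine and Wilf: a word with periods t1 and t2 and length >= t1 + t2 - gcd
# also has period gcd(t1, t2); the lemma uses the coarser bound t1 + t2 - 1.

from math import gcd

def has_period(w, t):
    return all(w[i] == w[i + t] for i in range(len(w) - t))

def fine_wilf_holds(w, t1, t2):
    if has_period(w, t1) and has_period(w, t2) \
            and len(w) >= t1 + t2 - 1:
        return has_period(w, gcd(t1, t2))
    return True   # premise not satisfied: nothing to check
```

For example, "abababa" has periods 2 and 4 and length 7 ≥ 2 + 4 − 1, so it must (and does) have period gcd(2, 4) = 2.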
In the remainder of this section, we develop the requirements for a special normal form (like Ω above) and generalize Lemma 26 to graph products. In particular, we aim for some special kind of cyclic normal forms ensuring uniqueness within a conjugacy class (see Definition 37 below).

Cyclic normal forms and conjugacy
Recall that by Lemma 4, traces u, v ∈ M are conjugate if and only if they are related by a sequence of transpositions. By ≤_L we denote a linear order on the set L. For a, b ∈ Γ we write a <_L b if alph(a) <_L alph(b). The length-lexicographic normal form of g ∈ G is the reduced representative nf_G(g) = w ∈ Γ* of g that is lexicographically smallest. Note that this normal form is on the level of Γ; each letter of Γ still might have different representations over the finite generating set Σ, as outlined above. If G is clear from the context, we also write nf(g). Moreover, for a word u ∈ Γ* (or trace u ∈ M) we write nf(u) for nf(g), where g is the group element represented by u.
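On the level of Γ, the lexicographically smallest representative of a trace can be computed greedily: repeatedly output the smallest letter that is minimal in the remaining trace, i.e., that commutes with everything to its left. The simplified sketch below treats letters as single characters, uses Python's default character order in place of ≤_L, and ignores the group-level normalization of each letter; independence is given as a set of unordered pairs:

```python
# Greedy computation of the lexicographically smallest word representing
# a trace (simplified: letters are characters, order is character order).

def lex_normal_form(word, indep):
    """indep: set of frozensets {c, d} of independent letters."""
    word = list(word)
    out = []
    while word:
        best = None
        for i, c in enumerate(word):
            # c is a minimal letter of the trace iff it commutes with
            # every letter occurring to its left
            if all(frozenset((d, c)) in indep for d in word[:i]):
                if best is None or c < word[best]:
                    best = i
        out.append(word.pop(best))
    return ''.join(out)
```

For instance, if a and c are independent, then "ca" and "ac" represent the same trace and the greedy algorithm outputs "ac".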
Definition 28 Let w ∈ Γ * . We say w is a cyclic normal form if w and all its cyclic permutations are length-lexicographic normal forms and w is composite.

Remark 29
Observe that if w is a cyclic normal form, then as a trace from M it is cyclically reduced and all cyclic permutations of w are cyclic normal forms themselves.
Cyclic normal forms have been introduced in [12] for RAAGs. Moreover, by [12], given w ∈ Γ * , which has a cyclic normal form, a cyclic normal form for w can be computed in linear time. In Theorem 35 below, we show that cyclic normal forms also exist for certain elements in the case of graph products and that they also can be computed efficiently.
It is easy to see that every cyclic normal form is connected (see Remark 30). In particular, not every element has a cyclic normal form. Moreover, there can be more than one cyclic normal form per conjugacy class; however, by Lemma 32 below they are all cyclic permutations of each other.

Remark 30
Notice that, if w ∈ Γ^* is a cyclic normal form, then it is connected. Indeed, let d ∈ Γ be a ≤_L-largest letter occurring in w. After a cyclic permutation we can write w = dw′. Now, assume that w =_M uv with (u, v) ∈ I and u, v non-trivial. Without loss of generality d belongs to u. Let c denote the first letter of v. Since (u, v) ∈ I, we must have alph(c) ≠ alph(d) and therefore c <_L d. Since (c, u) ∈ I we obtain w =_M cw″ for some w″. But then w cannot start with d, which is a contradiction.
For the following considerations, it is useful to embed the trace monoid M = M(Γ, I) (and, thus, IRR(T)) via the trace monoid M(Γ ∪ Γ̄, I) into the right-angled Artin group G(Γ, I) as in (3). Note that this means that we add a formal inverse ā for every a ∈ Γ (which is different from the inverse a^{-1} of a in the group G_alph(a)). Be aware that Γ might be infinite and that a trace u ∈ M(Γ ∪ Γ̄, I) is reduced with respect to G(Γ, I) if it does not contain a factor aā or āa for a ∈ Γ (but it may contain a factor ab with alph(a) = alph(b) and therefore be non-reduced with respect to the graph product G).

Lemma 32 Let u, v ∈ Γ^* be cyclic normal forms that are conjugate in M. Then u and v are cyclic permutations of each other.

Proof The lemma can be shown by an almost verbatim repetition of the proof of [12, Proposition 2.21]. However, we can also use that result as a black box: it states that two cyclic normal forms in a RAAG are conjugate if and only if they are cyclic permutations of each other. 5 We apply this result to the RAAG G(Γ, I). In [12, Proposition 2.21] it is assumed that Γ is finite, whereas our Γ might be infinite; but we can restrict Γ to those symbols that appear in u and v.
Moreover, while we are given a linear order on L, we need a linear order on Γ to obtain the notion of a cyclic normal form in a RAAG. To solve this problem we fix on each Γ ζ for ζ ∈ L an arbitrary linear order (for different ζ, ξ ∈ L we use our order on L). This gives a linear order on Γ. As our definition of IRR(T ) implies that there are never two consecutive letters from Γ ζ for the same ζ ∈ L, the outcome for the cyclic normal form does not depend on the actual orders we chose on the Γ ζ . Therefore, every cyclic normal form according to Definition 28 is also a cyclic normal form in the RAAG G(Γ, I).
Let us now take two cyclic normal forms u, v ∈ Γ^* according to Definition 28. Then u and v are also cyclic normal forms with respect to the RAAG G(Γ, I), and the claim follows from [12, Proposition 2.21].

Lemma 33 Let w = dw′ ∈ Γ^* (with d ∈ Γ) be a cyclically reduced and composite length-lexicographic normal form such that w does not contain any letter c with d <_L c. Then for all k ≥ 1 we have nf(w^k) = w^k and w is a cyclic normal form.
Proof Note that d must be the unique minimal letter in the trace represented by w: if a ≠ d were also minimal, then (a, d) ∈ I (in particular, a and d do not belong to the same Γ_ζ) and d <_L a (since dw′ is a length-lexicographic normal form), contradicting the assumption on d. In particular, w must be connected as a trace.
We show that nf(w k ) = w k by showing that w k is a length-lexicographic normal form. Since w is cyclically reduced (and hence in particular reduced), connected and composite, also w k is cyclically reduced and composite.
Let us now prove that w^k is a length-lexicographic normal form. Assume the contrary. The characterization of lexicographically smallest words by Anisimov and Knuth [4] implies that w^k contains a factor bua where a <_L b and (a, bu) ∈ I. Since w is a length-lexicographic normal form, the factor bua cannot lie within a single factor w of w^k. Therefore, w has a prefix ya, where y is a suffix of u. Since (a, y) ∈ I, a is a minimal letter of w. Since d is the unique minimal letter of w, we have d = a, i.e., d <_L b, which contradicts the assumptions on d. Hence, w^k is indeed a length-lexicographic normal form, i.e., nf(w^k) = w^k.
Finally, we show that w is a cyclic normal form. Every cyclic permutation v of w is a factor of w^2. Since every factor of a length-lexicographic normal form is again a length-lexicographic normal form, v is a length-lexicographic normal form. As w is moreover composite, it is a cyclic normal form.
Corollary 34 Let u = du ′ ∈ Γ * (with d ∈ Γ) be a cyclic normal form such that u does not contain any letter c with d < L c. If u = M w k for a trace w, then nf(w) is a cyclic normal form and u = nf(w) k (as words).
Proof The case k = 1 is trivial, so let us assume that k ≥ 2. Let w ∈ Γ^* be such that u =_M w^k. By the argument from the proof of Lemma 33, d is the unique minimal letter of u. Since u is composite and connected and alph(u) = alph(w), also w is composite and connected. As ww is a factor of u, ww must be reduced. Hence, w is cyclically reduced. We can also write w as w =_M dw″, and d is also the unique minimal letter of w. In particular, nf(w) = dv for some word v ∈ Γ^*. Applying Lemma 33 (with w replaced by nf(w)) we obtain that nf(w^k) = nf(nf(w)^k) = nf(w)^k and nf(w) is a cyclic normal form.

Theorem 35
The following holds:
• Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product of f.g. groups. Then the following problem is in uTC^0 ⊆ uAC^0(WP(F_2)):
Input: a cyclically reduced, composite and connected w ∈ Γ^*
Output: a cyclic normal form that is conjugate in G to w
• Let C be any non-trivial class of f.g. groups. Then the following problem is in uAC^0[NL]:
Input: G = GP(L, I, (G_ζ)_{ζ∈L}) given by (L, I) and G_ζ ∈ C for ζ ∈ L, and a cyclically reduced, composite and connected w ∈ Γ^*
Output: a cyclic normal form that is conjugate in G to w
In both cases the output word starts with a letter that is maximal w.r.t. ≤_L.
Note that due to Lemma 27, in the situation of Theorem 35, being conjugate in G is equivalent to being conjugate in the trace monoid M = M (Γ, I) or being related by a sequence of transpositions.
Proof of Theorem 35 Let w ∈ Γ^* be the input word. Note that w is already cyclically reduced, composite and connected. The cyclic normal form can be computed with the following algorithm:
1. Compute the length-lexicographic normal form w̃ = nf_G(w^σ), where, as before, σ = |L|.
2. Write w̃ = ydz, where d is the first occurrence in w̃ of a letter that is ≤_L-maximal among the letters of w̃.
3. Compute the length-lexicographic normal form of dzy. We have nf_G(dzy) = u^σ, where u is a cyclic normal form conjugate to w.
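The three steps can be illustrated at the level of the trace monoid. The Python sketch below uses single-character letters and a hypothetical independence relation; cancellation inside base groups is ignored, so the input is trivially cyclically reduced:

```python
def normal_form(word, indep):
    # Greedy normal-form computation: emit the smallest letter that is
    # independent of everything preceding it (letters are single
    # characters, so length-lexicographic = lexicographic).
    rest, out = list(word), []
    while rest:
        best = None
        for i, x in enumerate(rest):
            if all((y, x) in indep for y in rest[:i]):
                if best is None or x < rest[best]:
                    best = i
        out.append(rest.pop(best))
    return "".join(out)

def cyclic_normal_form(w, indep, sigma):
    """The algorithm from the proof of Theorem 35, at the trace level:
    (1) compute nf(w^sigma), (2) split it as y d z at the first occurrence
    of a maximal letter d, (3) nf(d z y) equals u^sigma for a cyclic
    normal form u conjugate to w."""
    nf_pow = normal_form(w * sigma, indep)
    d = max(nf_pow)
    i = nf_pow.index(d)
    y, z = nf_pow[:i], nf_pow[i + 1:]
    u_pow = normal_form(d + z + y, indep)
    u = u_pow[:len(w)]
    assert u_pow == u * sigma  # the shape guaranteed by the correctness proof
    return u

# Toy example: L = {a, b, c} with a < b < c and only a, c independent;
# the input w = bca is cyclically reduced, composite and connected.
I = {("a", "c"), ("c", "a")}
u = cyclic_normal_form("bca", I, sigma=3)
assert u == "cba"  # starts with the maximal letter c
```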
First, we show that our algorithm is correct, i.e., nf_G(dzy) has the form u^σ and u is a cyclic normal form conjugate to w. For this we first prove that d is the unique minimal letter of the trace represented by dzy. To get a contradiction, assume that a ≠ d is another minimal letter. In particular, (a, d) ∈ I, which implies a <_L d. If a belongs to z, then we can write z = az′ and get ydz =_M yadz′, contradicting the fact that ydz is a length-lexicographic normal form. Now assume that a belongs to y, i.e., y =_M ay′ and (a, dz) ∈ I. Hence, in the trace monoid M, ya is a prefix of w^{2σ} =_M (ydz)(ay′dz). With yav =_M w^{2σ}, Levi's Lemma yields factorizations ya = y_1 y_2 ⋯ y_{2σ} and v = v_1 v_2 ⋯ v_{2σ} such that w =_M y_j v_j for all j (and (v_i, y_j) ∈ I for i < j). None of the v_j can be 1, since d belongs to w but not to ya. By Lemma 6 we obtain y_j = 1 for all j ≥ σ. In particular, ya is already a prefix of w^σ =_M ydz. But a does not occur in dz (we have (a, dz) ∈ I). Hence, since ya contains more a's than y, this is a contradiction. Next, let us show that y is a prefix of wy in the trace monoid M. To see this, observe that ydzw =_M w^{σ+1} =_M wydz. Since |y|_a ≤ |wy|_a for all a ∈ Γ, Lemma 7 implies that y is a prefix of wy.
Thus, we can find some û ∈ M with yû =_M wy. Observe that by its very definition û is conjugate to w. By Lemma 4, û can be obtained from the cyclically reduced w by a sequence of transpositions. Since the property of being cyclically reduced is preserved by transpositions, it follows that also û is cyclically reduced.
Second, we look at the complexity in the non-uniform case. Our algorithm requires solving the normal form problem twice. Since the input is cyclically reduced, computing the normal form only requires computing a lexicographically smallest ordering, which by [33, Theorem 6.3.7] can be done in uTC^0 ⊆ uAC^0(WP(F_2)). In addition, we need to compute a cyclic permutation, which can be done in uAC^0.
Third, we look at the complexity in the uniform case. By [33, Theorem 6.3.13], the normal form problem can be solved in uTC^0 with oracle gates for NL (more precisely, that theorem states that it can be decided in NL which of two letters comes first in the normal form; as sorting is in uTC^0, the statement follows).

Finally, we need the following lemma, which requires that none of the base groups G_ζ contains an element of order two. Hence, a ≠ a^{-1} holds for all a ∈ Γ.
Lemma 36 Assume that a ≠ a^{-1} for all a ∈ Γ. If p ∈ M \ {1} is reduced, then p and p^{-1} are not conjugate (in M).
Proof Assume that p ∈ M \ {1} is conjugate to p^{-1}. We show that p is not reduced. Let us first consider the case that p =_M p^{-1}. We show by induction on |p| that p is not reduced. Since p ≠ 1, we can write p =_M as for a ∈ Γ and s ∈ M. We obtain as =_M p =_M p^{-1} =_M s^{-1}a^{-1}. Since a ≠ a^{-1}, we can write s as s =_M ta^{-1} and obtain ata^{-1} =_M as =_M s^{-1}a^{-1} =_M at^{-1}a^{-1}. Since M is cancellative, we get t =_M t^{-1}. If t = 1, then p =_M aa^{-1} is not reduced and, if t ≠ 1, then t (and hence p) is not reduced by induction. Now assume that p ≠_M p^{-1}. Since p and p^{-1} are conjugate, [14, Proposition 4.4.5] yields factorizations p =_M q_1 q_2 ⋯ q_k and p^{-1} =_M q_k q_{k−1} ⋯ q_1 for traces q_1, …, q_k ∈ M \ {1}. Define r =_M q_2 ⋯ q_k and s =_M q_2^{-1} ⋯ q_k^{-1}. We obtain q_1 r =_M q_1^{-1} s. Levi's Lemma yields factorizations q_1 =_M tu and q_1^{-1} =_M tv with (u, v) ∈ I. We claim that u = v = 1: for every ζ ∈ L, the number of letters from Γ_ζ in q_1 and q_1^{-1} is the same. Hence, also the number of letters from Γ_ζ in u and v is the same. Since (u, v) ∈ I, this is only possible if u = v = 1.
We now get q_1 =_M q_1^{-1}. By the first paragraph of the proof, this shows that q_1 (and hence p) is not reduced.

A variant of Lyndon traces
A Lyndon trace w in a trace monoid M(Σ, I) is a connected primitive trace that is lexicographically minimal in its conjugacy class (where a trace u is smaller than a trace v if the lexicographic normal form of u is length-lexicographically smaller than the lexicographic normal form of v), see e.g. [14, Section 4.4]. We will work with the following variant of Lyndon traces. As usual, let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product and M = M(Γ, I) the corresponding trace monoid.

Definition 37
Let Ω be the set of all w ∈ Γ * satisfying the following properties: • w is a cyclic normal form (in particular, it is composite, cyclically reduced, and connected), • w represents a primitive element of M , • w is lexicographically minimal (w. r. t. ≤ L ) among its cyclic permutations and the cyclic permutations of a cyclic normal form conjugate (in M ) to w −1 .
Note that the last point in Definition 37 makes sense because if w is composite, cyclically reduced, connected and primitive, then w^{-1} as well as every w′ conjugate in M to w or w^{-1} have the same properties (that a trace conjugate to a primitive trace is primitive, too, follows from Lemma 5). Moreover, by Theorem 35 there is a cyclic normal form that is conjugate to w^{-1} and by Lemma 32 all cyclic normal forms conjugate to w^{-1} are cyclic permutations of each other.

Example 39 Consider the trace monoid M(Σ, I) with Σ = {a, b, c}, I = {(a, c), (c, a)}, and a < b < c. Then the trace abc is lexicographically smallest in its conjugacy class. However, it is not a cyclic normal form, since the cyclic permutation bca is not lexicographically minimal. The corresponding lexicographically smallest conjugate cyclic normal form would be acb.

Remark 40
Notice that, when solving the uniform power word problem for graph products, it is important that in Ω we require lexicographical minimality only among cyclic normal forms conjugate to w^{±1}. A straightforward generalization of the approach for free groups (see the beginning of Section 5.2) would search for a lexicographically minimal element in the full conjugacy class of w^{±1}. However, this approach does not seem to be feasible because there might be exponentially many conjugate traces for a given trace. Here is an example: consider M(Σ, I) with Σ = {a_1, …, a_n}, I = {(a_i, a_j) | |j − i| ≥ 2} and u = a_1 ⋯ a_n. Then every permutation a_{π(1)} ⋯ a_{π(n)} is conjugate to u; hence the conjugacy class of u contains exponentially many distinct traces.
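The growth of the conjugacy class can be observed by brute force. The Python sketch below represents a trace by the set of its linearizations and closes it under transpositions; this is feasible only for tiny n and is purely illustrative:

```python
from itertools import permutations

def trace_class(word, indep):
    """All words representing the same trace: closure under swapping
    adjacent independent letters."""
    seen, todo = {word}, [word]
    while todo:
        w = todo.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in indep:
                v = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
    return frozenset(seen)

def conjugates(word, indep):
    """Closure of the trace of `word` under transpositions uv -> vu,
    obtained by rotating every representative word at every position."""
    start = trace_class(word, indep)
    seen, todo = {start}, [start]
    while todo:
        cls = todo.pop()
        for w in cls:
            for i in range(1, len(w)):
                c = trace_class(w[i:] + w[:i], indep)
                if c not in seen:
                    seen.add(c)
                    todo.append(c)
    return seen

# Sigma = {1, ..., n} with a_i, a_j independent iff |i - j| >= 2.
n = 4
letters = "1234"
I = {(x, y) for x in letters for y in letters if abs(int(x) - int(y)) >= 2}
classes = conjugates(letters, I)
# Every permutation of 1234 represents a trace conjugate to u = 1234 ...
assert all(any("".join(p) in c for c in classes) for p in permutations(letters))
# ... and the number of distinct conjugate traces grows exponentially
# (2^(n-1) here: one choice of order for each dependent pair a_i, a_{i+1}).
assert len(classes) == 2 ** (n - 1)
```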
An interesting open question is the complexity of the following problem: given a trace monoid and a trace, find the lexicographically smallest conjugate trace. This problem can easily be seen to be in P^NP. However, it is totally unclear to us whether it can actually be solved in polynomial time, or whether its decision variant is NP-complete.
The crucial property of Ω is that each w ∈ Ω is a unique representative for its conjugacy class and the conjugacy class of its inverse. Similar to the case of a free group (see Lemma 26) this fact leads us to the following theorem, which is central to solving the power word problem in graph products (see Lemma 50 below). As for Lemma 26, the intuition behind it is that, if there are two powers p x and q y , where p, q ∈ Ω and q = p, then in p x q y only a small number of letters can cancel out. Conversely, if a sufficiently large suffix of p x cancels with a prefix of q y , then p = q. In the end this will allow us to decrease all the exponents of p simultaneously as described in Definition 55.
Theorem 41 Let p, q ∈ Ω and x, y ∈ Z, let u be a factor of p^x and v a factor of q^y such that u =_G v^{-1} and |u| = |v| > 2σ(|p| + |q|). Then p = q.

Note that in the above theorem u and v are reduced, as they are factors of p^x (resp. q^y). For the proof of Theorem 41 we apply the Lemmata 2 and 3 from Section 2.4.2 to the trace monoid M(Γ, I) that corresponds to the graph product G = GP(L, I, (G_ζ)_{ζ∈L}). To do so, we use the cliques and projections as defined in (1) in Section 2.5, which we recall for convenience: the cliques A_1, …, A_k are the sets of the form Γ_ζ ∪ Γ_ξ for (ζ, ξ) ∈ D with ζ ≠ ξ, together with the sets Γ_ζ, where ζ is isolated if there is no ξ ≠ ζ with (ζ, ξ) ∈ D. Now, π_i : M(Γ, I) → A_i^* denotes the canonical projection and Π : M(Γ, I) → A_1^* × ⋯ × A_k^* with Π(w) = (π_1(w), …, π_k(w)) is an injective monoid morphism by Lemma 2. We derive Theorem 41 from the following lemma.

Lemma 42 Let p, q, v ∈ M(Γ, I) and x, y ∈ N \ {0} such that p and q are primitive and connected, and p^x and q^y have the common factor v. If p^2 and q^2 are factors of v, then for all i the projections π_i(p) and π_i(q) are conjugate as words.
Proof For each i ∈ J_v we write π_i(p) = p̃_i^{s_i} and π_i(q) = q̃_i^{r_i}, where p̃_i, q̃_i ∈ A_i^* are primitive. As v is a common factor of p^x and q^y, its projection π_i(v) is a common factor of π_i(p^x) = p̃_i^{s_i x} and π_i(q^y) = q̃_i^{r_i y}. Thus, π_i(v) has the periods |p̃_i| and |q̃_i|. Since p^2 is a factor of v, π_i(p)^2 is a factor of π_i(v). This yields the lower bound 2|p̃_i|, and by symmetry 2|q̃_i|, on the length of π_i(v). Combining those, we obtain |π_i(v)| ≥ |p̃_i| + |q̃_i|. By the theorem of Fine and Wilf [20], gcd(|p̃_i|, |q̃_i|) is a period of π_i(v). As p̃_i and q̃_i are primitive, it follows that |p̃_i| = |q̃_i|. As p is a factor of v, in particular, p̃_i is a factor of π_i(v) and, thus, also of π_i(q^y) = q̃_i^{r_i y}. Hence, p̃_i and q̃_i are conjugate words for all i ∈ J_v.
In order to show that π_i(p) and π_i(q) are conjugate for all i ∈ J_v, it suffices to show that s_i = r_i for all i ∈ J_v. Assume for a contradiction that s_i ≠ r_i for some i ∈ J_v. Then there are λ, µ ∈ N \ {0} such that λs_i = µr_i. W.l.o.g. let µ > 1 and gcd(λ, µ) = 1. Now µ divides s_i. Let J = {j ∈ J_v | λs_j = µr_j}. Claim: J = J_v.
Proof of the Claim: Clearly i ∈ J. We want to show that J = J_v. This is the case if alph(v) = {ζ} for some isolated ζ (then also J_v is a singleton). Otherwise, for all i ∈ J_v the set A_i is of the form A_i = Γ_ζ ∪ Γ_ξ with (ζ, ξ) ∈ D and ζ ≠ ξ.
Thus, every s_i (for i ∈ J_v) is divisible by µ > 1. By Lemma 3 we can write p =_M u^µ for some trace u, contradicting the primitivity of p. This concludes the proof of the lemma.
The proof idea for Theorem 41 is as follows: We use the length bound from Lemma 6 in order to show that the requirements of Lemma 42 are satisfied. After applying that lemma, we show that p and q are conjugate using Lemma 2. Then we conclude from the definition of Ω that p = q.
Proof of Theorem 41 We have u =_G v^{-1} and hence u =_M v^{-1} (since u and v^{-1} are reduced). Thus, v^{-1} is a factor of p^x and therefore v is a factor of p^{-x}. Let Ω^± = Ω ∪ Ω^{-1} be the extension of Ω that includes the inverse of each element. Let x̂ = |x| and ŷ = |y|. Then there are p̂, q̂ ∈ Ω^± such that p̂ ∈ {p, p^{-1}}, q̂ ∈ {q, q^{-1}} and v is a common factor of p̂^x̂ and q̂^ŷ. As |v| > 2σ(|p| + |q|) ≥ 2σ|p|, by Lemma 6, v can be written as v = u_1 ⋯ u_t p̂^z v_s ⋯ v_1, where z ≥ 2. Hence, p̂^2 is a factor of v. By symmetry, q̂^2 is a factor of v.
By Lemma 42, for all i the projections π_i(p̂) and π_i(q̂) are conjugate words. In particular, for each ζ ∈ L we have |p̂|_ζ = |q̂|_ζ. Thus, as q̂ is a factor of p̂^x̂, it follows from Lemma 9 that p̂ is conjugate in M to q̂. Since p, q ∈ Ω, this finally implies p = q; see Remark 38.

Main proofs for the power word problem in graph products
In this section we show our main results for graph products (following the conference version [57]). In order to solve the power word problem, we follow the outline of [43] (which is for free groups). In particular, our proof also consists of three major steps:
• In a preprocessing step we replace all powers with powers of elements of Ω (Section 5.3.1).
• We define a symbolic rewriting system which we use to prove correctness (Section 5.3.2).
• We define the shortened word, replacing each exponent with a smaller one, bounded by a polynomial in the input (Section 5.3.3).
Finally, in Section 5.3.4, we combine these steps for the solution of the power word problem.
The main difference to [43] is that here we rely on Theorem 41 instead of [43, Lemma 11] (see Lemma 26), an easy fact about words. This is because the combinatorics of traces/graph products is much more involved than that of words/free groups. Furthermore, for free groups we did not have to bother with elements of order two, which led to the mistake in [57, 58] (see Remark 51). Another major difference to the case of free groups is that we need the results for the simple power word problem considered in Section 5.1. Apart from that, all steps are the same (with some minor technical differences).

Preprocessing
Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product of f.g. groups. As usual, σ = |L|. We define the alphabet Γ̂ = Γ × Z, where (v, z) represents the power v^z. Note that Γ̂ is the alphabet of the simple power word problem in G. During preprocessing, the input power word is transformed into the form

w = u_0 p_1^{x_1} u_1 ⋯ p_n^{x_n} u_n, where p_i ∈ Ω and u_i ∈ Γ̂^* for all i. (5)
We denote the uniform word problem for graph products with base groups in C by UWP(GP(C)). For some further thoughts on how to encode the input, see Section 5.1.1. The preprocessing consists of six steps:

Step 1: Cyclically reducing powers. Cyclically reducing every p_i can be done using the procedure from [33, Lemma 7.3.4]. We also need to compute a trace y_i such that y_i^{-1} p_i y_i is cyclically reduced. It follows from the proof of [33, Lemmata 7.3.2 and 7.3.3] that such a y_i can be obtained as a prefix of p_i. Let us quickly repeat the argument: assume that p_i is already reduced. First one computes the longest prefix t_i of p_i such that t_i^{-1} is also a suffix of p_i. Thus we can write p_i =_M t_i p′_i t_i^{-1}. The trace p′_i is not necessarily cyclically reduced. But there are elements a_1, …, a_k, b_1, …, b_k ∈ Γ such that alph(a_j) = alph(b_j) for all j and (a_j, a_ℓ) ∈ I for j ≠ ℓ (in particular, k ≤ σ) such that p′_i = a_1 ⋯ a_k p̃_i b_1 ⋯ b_k and p̂_i = p̃_i [a_1 b_1] ⋯ [a_k b_k] is cyclically reduced. Let us define the prefix y_i = t_i a_1 ⋯ a_k of p_i. Then we have p̂_i =_G y_i^{-1} p_i y_i and |y_i| ≤ |p_i|. We then replace the power p_i^{x_i} with y_i^{-1} p̂_i^{x_i} y_i; moreover, y_i^{-1} and y_i can be merged with u_{i−1} and u_i, respectively. Thus we can assume that for the next step the input again has the form w = u_0 p_1^{x_1} u_1 ⋯ p_n^{x_n} u_n, but now all p_i are cyclically reduced.
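Step 1 can be sketched in the special case of a free group, where the trace-level part with the letters a_1, …, a_k, b_1, …, b_k is not needed. In the Python toy below, uppercase letters play the role of formal inverses; this is an illustration, not the procedure of [33]:

```python
INV = str.swapcase  # a <-> A encodes a generator and its inverse

def free_reduce(w):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for c in w:
        if out and out[-1] == INV(c):
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def cyclically_reduce(w):
    """Return (t, c) with w = t c t^{-1} as group elements and c
    cyclically reduced; t is a prefix of the reduced form of w."""
    c = free_reduce(w)
    t = []
    while len(c) >= 2 and c[0] == INV(c[-1]):
        t.append(c[0])
        c = c[1:-1]
    return "".join(t), c

t, c = cyclically_reduce("Abca")
assert (t, c) == ("A", "bc")
# Sanity check: conjugating c back by t recovers the input
# (the inverse of the word t is INV(t) reversed).
assert free_reduce(t + c + INV(t)[::-1]) == free_reduce("Abca")
```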
Step 2: Replacing powers with powers of connected elements. We compute the connected components of p_i. More precisely, we compute p_{i,1}, …, p_{i,k_i} such that each p_{i,j} is connected, p_i =_G p_{i,1} ⋯ p_{i,k_i}, and (p_{i,j}, p_{i,ℓ}) ∈ I for j ≠ ℓ. Observe that k_i ≤ |L|. We replace the power p_i^{x_i} with p_{i,1}^{x_i} ⋯ p_{i,k_i}^{x_i}.
Step 3: Removing powers of a single letter. We use the alphabet Γ̂ = Γ × Z, where (v, z) represents the power v^z. We replace each power p_i^{x_i} where p_i ∈ Γ_ζ for some ζ ∈ L with the corresponding letter (p_i, x_i) ∈ Γ̂. Note that there is no real work to do in this step. What happens is that powers of a single letter will be ignored (i.e., treated as if they were part of the u_i from (5)) in the remaining preprocessing steps and when computing the shortened word. As a consequence, in the remaining preprocessing steps and during the computation of the shortened word we may assume that we only have powers of composite words. At the end, powers of a single letter will be the only powers remaining in the shortened word, and therefore they are the reason for reducing to the simple power word problem. For the next step we still assume that the input has the shape w = u_0 p_1^{x_1} u_1 ⋯ p_n^{x_n} u_n; however, from here on u_i ∈ Γ̂^*.
Step 4: Replacing each letter with a normal form specific to the input. Whereas the previous steps work on the level of the trace monoid, this step computes a normal form for the elements of Γ itself. For each i we write p_i = a_{i,1} ⋯ a_{i,k_i}, where a_{i,j} ∈ Γ. Recall that elements of Γ are given as words over Σ, i.e., the generators of the respective base groups. Let N = [a_{1,1}, a_{1,1}^{-1}, …, a_{1,k_1}, a_{1,k_1}^{-1}, …, a_{n,1}, a_{n,1}^{-1}, …, a_{n,k_n}, a_{n,k_n}^{-1}] be the list of letters of Γ (and their inverses) occurring in some power. For convenience, we write N = [b_1, …, b_m], where m = |N|. We replace each p_i with p̃_i = ã_{i,1} ⋯ ã_{i,k_i}, where ã_{i,j} is the first element in N equivalent to a_{i,j}. Note that we need to solve the word problem in the base groups G_ζ to compute this. After that transformation, any two ã_{i,j} and ã_{ℓ,m} representing the same element of Γ are equal as words over Σ (and so bit-wise equal). Thus, the letters are in a normal form. Be aware that this normal form depends on the input of the power word problem in G, but that is not an issue for our application. Again, we assume the input for the next step to be w = u_0 p_1^{x_1} u_1 ⋯ p_n^{x_n} u_n.
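The letter normal form of Step 4 can be sketched with a toy word-problem oracle; the infinite cyclic group used below is an assumption for illustration only:

```python
def make_nfletter(N, eq):
    """f(i) = least j with b_j equal (in the group) to b_i; nfletter
    replaces each letter of N by its first equivalent occurrence."""
    f = [next(j for j in range(i + 1) if eq(N[j], N[i]))
         for i in range(len(N))]
    return [N[f[i]] for i in range(len(N))]

# Toy oracle: the infinite cyclic group <x>, a letter given as a word over
# {x, X} with X = x^{-1}; two words are equal iff their exponent sums agree.
eq = lambda u, v: u.count("x") - u.count("X") == v.count("x") - v.count("X")
N = ["xx", "xXx", "x", "XX"]
# "x" and "xXx" both have exponent sum 1, so "x" is replaced by "xXx",
# the first equivalent entry of N.
assert make_nfletter(N, eq) == ["xx", "xXx", "xXx", "XX"]
```

After this replacement, equality of letters can be tested bit-wise, which is exactly what the later steps rely on.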
Step 5: Making each p_i a primitive cyclic normal form. The following is done for each i ∈ [1, n]; let us write p^x for p_i^{x_i}. We apply the algorithm presented in the proof of Theorem 35 and compute a cyclic normal form q that is conjugate to p in M. We have yp =_M qy for some y with |y| < σ · |p|. We replace p^x with y^{-1} q^x y and merge y^{-1} with u_{i−1} and y with u_i. Note that also q must be connected, composite and cyclically reduced as a trace.
Observe that any cyclic normal form u computed by the algorithm from the proof of Theorem 35 starts with a letter d such that u does not contain any letter c with d <_L c. If such a cyclic normal form is not primitive in the trace monoid M, i.e., u =_M w^k with k > 1, then, by Corollary 34, u = nf(w)^k (as words) and nf(w) is a cyclic normal form. Therefore, we compute a primitive word r ∈ Γ^* such that q = r^k for some k ≥ 1 and replace q^x by r^{kx} (clearly, q = r if q is already primitive). Also r must be connected, composite and cyclically reduced as a trace. Moreover, r is a cyclic normal form as well, since each cyclic permutation of r is a factor of q = r^k if k ≥ 2 and hence must be a length-lexicographic normal form. Again, we write the resulting power word as u_0 p_1^{x_1} u_1 ⋯ p_n^{x_n} u_n for the next step.
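Extracting the primitive root r with q = r^k is elementary string processing once letters are bit-wise comparable (which Step 4 guarantees). A Python sketch:

```python
def primitive_root(w):
    """Return (r, k) with w = r^k and r primitive; w is primitive iff k == 1.
    Tries every divisor d of |w| as a candidate period, smallest first."""
    n = len(w)
    for d in range(1, n + 1):
        if n % d == 0 and w == w[:d] * (n // d):
            return w[:d], n // d

assert primitive_root("abab") == ("ab", 2)
assert primitive_root("aab") == ("aab", 1)  # already primitive
```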
Step 6: Replacing each power with a power of an element in Ω. Let Ω be as in Definition 37. The previous steps have already taken care of most properties of Ω. In addition, Step 4 ensured that individual letters are in a normal form. The only requirement not yet fulfilled is that every p_i must be minimal w.r.t. ≤_L among its cyclic permutations and the cyclic permutations of a cyclic normal form conjugate (in M) to p_i^{-1}. Using Theorem 35, we compute a cyclic normal form p′_i that is conjugate (in M) to p_i^{-1}. It must be primitive, too: if p′_i =_M s^ℓ for some ℓ ≥ 1 and s ∈ M, then (s^{-1})^ℓ is conjugate in M to p_i, which implies by Lemma 5 that p_i =_M r^ℓ for some r ∈ M. As p_i is primitive, we have ℓ = 1. Hence, p′_i is primitive. Finally, we consider all cyclic permutations of p_i and p′_i and take the lexicographically smallest one; call it p̃_i. Moreover, let ι ∈ {−1, 1} be such that ι = 1 if p̃_i is conjugate to p_i and ι = −1 if p̃_i is conjugate to p_i^{-1}. Then we can replace the power p_i^{x_i} by t_i^{-1} p̃_i^{ι x_i} t_i for an appropriate conjugator t_i of length at most |p_i| (we can choose t_i as a prefix of p̃_i^{ι·sgn(x_i)}). Finally, t_i^{-1} and t_i can be merged with u_{i−1} and u_i, respectively.

Remark 43
Notice that it might happen that p_i is conjugate to p_i^{-1}. In this case the outcome of Step 6 is not uniquely defined: p_i^{x_i} could be replaced either by p̃_i^{x_i} or by p̃_i^{−x_i} (plus some appropriate conjugators). For the preprocessing itself this ambiguity is not a problem; however, it prevents Lemma 50 below from being true. Therefore, in the later steps of our proof, we will require that a ≠ a^{-1} for all a ∈ Γ, which, by Lemma 36, implies that p_i cannot be conjugate to p_i^{-1}.

Lemma 44
The preprocessing can be reduced to the word problem; more precisely:
• Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a fixed graph product of f.g. groups. Then the preprocessing can be computed in uAC^0(WP(G), WP(F_2)).
• Let C be a non-trivial class of f.g. groups. Given (L, I), G_ζ ∈ C for ζ ∈ L and an element w of the graph product G = GP(L, I, (G_ζ)_{ζ∈L}), the preprocessing can be done in uAC^0[NL](UWP(GP(C))).
Proof We look at the complexity of the individual steps of the preprocessing. For this proof we split step 5 into two parts: a) computing a cyclic normal form and b) making it primitive.
Step                                  | non-uniform                      | uniform
1. making p_i cyclically reduced      | uAC^0(WP(G))                     | uAC^0(UWP(GP(C)))
2. making p_i connected               | uAC^0                            | uAC^0[NL]
3. powers of single letters           | uAC^0                            | uAC^0
4. normal form of letters             | uAC^0({WP(G_ζ) | ζ ∈ L})         | uAC^0(UWP(C))
5a. making p_i cyclic normal forms    | uAC^0(WP(F_2))                   | uAC^0[NL]
5b. making p_i primitive              | uAC^0                            | uAC^0
6. bringing p_i to Ω                  | uAC^0(WP(F_2))                   | uAC^0[NL]

Step 1. By [33, Lemma 7.3.4], the cyclically reduced conjugate trace for a p_i can be computed in uAC^0 with oracle gates for the word problem in G in the non-uniform case and in uAC^0 with oracle gates for UWP(GP(C)) (the uniform word problem for graph products with base groups in C) in the uniform case. Also the conjugating element (called y_i in Step 1 above) can be computed within the same bound.
Step 2. To compute the connected components of a power p xi i (i ∈ [1, n]), let us define L i = alph(p i ) and the symmetric predicate con i (ζ, ξ) for ζ, ξ ∈ L, which is true if and only if there is a path from ζ to ξ in the dependence graph (L i , (L i × L i ) \ I).
If ζ or ξ does not belong to L_i, then con_i(ζ, ξ) is false. Note that if there is a path from ζ to ξ, then there is a path of length at most σ − 1. Moreover, there is then also a path of length exactly σ − 1, because the complement of I is reflexive. Therefore, in the non-uniform case con_i(ζ, ξ) is equivalent to the formula

∃ ζ_1, …, ζ_{σ−2} ∈ L_i : (ζ, ζ_1) ∉ I ∧ (ζ_1, ζ_2) ∉ I ∧ ⋯ ∧ (ζ_{σ−2}, ξ) ∉ I.

In the uniform case computing the predicate con_i requires solving the undirected path connectivity problem, which is in NL. Furthermore, we define the predicate smallest_i(ζ), which for ζ ∈ L is true if and only if ζ ∈ L_i is the smallest member of L in the connected component of (L_i, (L_i × L_i) \ I). It is equivalent to the formula

ζ ∈ L_i ∧ ∀ ξ ∈ L : (con_i(ζ, ξ) → ζ ≤_L ξ).

We define the projection π_{i,ζ} : Γ^* → Γ^* for i ∈ [1, n] and ζ ∈ L by π_{i,ζ}(a) = a if con_i(ζ, alph(a)) ∧ smallest_i(ζ), and π_{i,ζ}(a) = 1 otherwise.
Observe that p_i =_G ∏_{ζ∈L} π_{i,ζ}(p_i) and each π_{i,ζ}(p_i) is connected.
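Step 2 can be sketched as follows. In this Python toy model each base group contributes a single character (so alph is the identity) and the independence relation is a hypothetical example:

```python
def components(letters, indep):
    """Connected components of the dependence graph on the given letters
    (dependence = NOT independent)."""
    comps = []
    for x in letters:
        # Merge every existing component that x depends on.
        merged = [c for c in comps if any((x, y) not in indep for y in c)]
        rest = [c for c in comps if c not in merged]
        comps = rest + [set().union({x}, *merged)]
    return comps

def split_power(p, indep):
    """Factor p into independent connected parts p_1, ..., p_k, so that
    p^x = p_1^x ... p_k^x (the replacement performed in Step 2)."""
    comps = components(sorted(set(p)), indep)
    return ["".join(ch for ch in p if ch in comp) for comp in comps]

# a, b dependent; c, d dependent; everything across the two pairs commutes.
I = ({(x, y) for x in "ab" for y in "cd"}
     | {(y, x) for x in "ab" for y in "cd"})
parts = sorted(split_power("acbd", I))
assert parts == ["ab", "cd"]
```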
Step 3. Identifying powers of single letters is obviously in uAC 0 . This step does not actually replace them, instead they will be ignored during the remaining preprocessing steps and when computing the shortened word.
Step 4. Recall that N = [b_1, …, b_m]. To compute our normal form, we define the mapping f : [1, m] → [1, m] by f(i) = min{ j ∈ [1, m] | b_j =_G b_i } and the mapping nfletter : {b_1, …, b_m} → {b_1, …, b_m} by nfletter(b_i) = b_{f(i)}. Then f and hence nfletter can be computed in uAC^0 with oracle gates for the word problems in the base groups (resp., the uniform word problem for the class C in the uniform case).
Step 5a. By Theorem 35, we can compute a cyclic normal form in uAC 0 with oracle gates for the word problem in F 2 in the non-uniform case and in uAC 0 with oracle gates for NL in the uniform case.
Step 5b. Checking for periods in words and replacing each power with a power of a primitive factor is obviously in uAC^0 (recall that we encode every element of Γ using the same number of bits).
Step 6. A cyclic normal form p′_i conjugate to p_i^{-1} can be computed as in Step 5a. Computing all cyclic permutations of p_i and p′_i and selecting the lexicographically smallest one is obviously in uAC^0.
From the complexities of the individual steps we conclude that the preprocessing can be done in uAC 0 using oracle gates for the word problem in G and F 2 in the non-uniform case and in uAC 0 using oracle gates for UWP(GP(C)) and NL in the uniform case.

A symbolic rewriting system
We continue with a graph product of f.g. groups G = GP(L, I, (G_ζ)_{ζ∈L}). Recall the trace rewriting system T from (2) in Section 2.6.5. As before, let σ = |L|. From now on, we assume that a ≠ a^{-1} for all a ∈ Γ. Therefore, by Lemma 36, if p ∈ M \ {1} is reduced, then p and p^{-1} are not conjugate. For x ∈ Z \ {0} we denote by sgn x ∈ {−1, 1} the sign of x; moreover, let sgn 0 = 0. For every p ∈ Ω, we define the alphabet

∆_p = { (β, p^x, α) | x ∈ Z, α is a prefix of p^σ and p is no prefix of α, β is a suffix of p^σ and p is no suffix of β }.
Lemma 45 If (β, p^x, α) ∈ ∆_p, then |α| ≤ (σ − 1)|p| and |β| ≤ (σ − 1)|p|.

Proof By Lemma 6 we can write α = p^k u_1 ⋯ u_s with s < σ, where each u_i is a proper prefix of p. As p is not a prefix of α, we have k = 0. Regarding the length of α, we obtain |α| = Σ_{i=1}^{s} |u_i| < Σ_{i=1}^{s} |p| = s|p| ≤ (σ − 1)|p|. The bound on the length of β follows by symmetry.
Lemma 46 Let (β, p^x, α) ∈ ∆′ and a ∈ Γ.
• If a is a minimal letter of βp^x ∈ M(Γ, I), then there are β′ ∈ M(Γ, I) and d ∈ {0, sgn x} with (β′, p^{x−d}, α) ∈ ∆′ and βp^x =_M aβ′p^{x−d}.
• If a is a maximal letter of p^x α ∈ M(Γ, I), then there are α′ ∈ M(Γ, I) and d ∈ {0, sgn x} with (β, p^{x−d}, α′) ∈ ∆′ and p^x α =_M p^{x−d}α′a.

Proof We only prove the first statement; the second statement can be shown in the same way. Moreover, assume that x ≥ 0; the case x ≤ 0 is analogous. So, assume that a is a minimal letter of the trace βp^x. The case that a is a minimal letter of β, i.e., β =_M aβ′ for some β′, is clear. Otherwise, x > 0 and a must be a minimal letter of p with (a, β) ∈ I. Let γ ∈ M(Γ, I) be such that p =_M aγ. We obtain βp^x =_M aβγp^{x−1}. It remains to show that (βγ, p^{x−1}, α) ∈ ∆′, i.e., that βγ is a suffix of p^σ and p is not a suffix of βγ. We have p^σ = uβ for some u ∈ M. Moreover, p is not a suffix of β. The first statement of Lemma 6 implies that β is a suffix of p^{σ−1}. Hence, βγ is a suffix of p^σ. As (a, β) ∈ I, we have |β|_a = 0. Therefore, |βγ|_a = |γ|_a = |p|_a − 1. Hence, p cannot be a suffix of βγ.
Let us consider the corresponding trace monoid M (∆, I ∆ ): it contains M (Γ, I) and π defines a surjective homomorphism M (∆, I ∆ ) → M (Γ, I), which we denote by the same letter π. We define a trace rewriting system R over M (∆, I ∆ ) by the rules given in Table 1.
Remark 47
If p^x απ(u)δq^y ∉ IRR(T), then Lemma 13 tells us that there exist a prefix s of p^x α, a suffix t of δq^y, and an I-clique v with (π(u), v) ∈ I. Moreover, by Lemma 46 we can write s and t as p^{x′}α′ and δ′q^{y′}, respectively, where x′ ∈ {x, x − sgn x}, y′ ∈ {y, y − sgn y}, and (β, p^{x′}, α′), (δ′, q^{y′}, γ) ∈ ∆′.

Remark 48
In rules (2) and (3) we allow u to contain a minimal letter a such that (p, a) ∈ I. Similarly, u may contain a maximal letter b such that (b, p) ∈ I in rule (2) and (b, q) ∈ I in rule (3). On the other hand, we could forbid this situation and require that (β, p^x, α) is the only minimal letter of the left-hand sides of rules (2) and (3) (and similarly for the maximal letters). This would not change the arguments in our further considerations.
The following facts about R are crucial.
Proof We start with statement 1. Assume we have an element t ∈ IRR(R) with π(t) ∉ IRR(T). So, none of the rules of R can be applied to t. For rule (4) this implies that x ≠ 0 for every (β, p^x, α) ∈ ∆′ that occurs in t. Since π(t) ∉ IRR(T), there is a factor ab in π(t) with a, b ∈ Γ and alph(a) = alph(b). We have ab =⇒_T [ab] (we may have [ab] = 1). Let t = t_1 t_2 ··· t_m with t_i ∈ ∆. As every π(t_i) ∈ M(Γ, I) is reduced with respect to T, a and b must be located in different factors π(t_i). Assume that a belongs to π(t_i) and b belongs to π(t_j) for some j > i (note that j < i is not possible since (a, b) ∈ D). Let u = t_{i+1} ··· t_{j−1} (which might be empty). It follows that a is a maximal letter of π(t_i), b is a minimal letter of π(t_j), (π(u), a) ∈ I and (π(u), b) ∈ I. Moreover, we can assume that i and j are chosen such that j − i is minimal, which implies that π(u t_j), π(t_i u) ∈ IRR(T). If t_i and t_j are both in Γ, then rule (7) can be applied, which is a contradiction. If t_i = (β, p^x, α) ∈ ∆′ and t_j = b ∈ Γ, then a must be a maximal letter of p^x α (since x ≠ 0). Hence, by Lemma 46, rule (6) can be applied. Similarly, if t_i ∈ Γ and t_j ∈ ∆′, then rule (5) can be applied. In both cases we obtain a contradiction. Finally, if t_i, t_j ∈ ∆′, the situation is a bit more subtle. Assume that t_i = (β, p^x, α) and t_j = (δ, q^y, γ) with x ≠ 0 ≠ y. Clearly, a is a maximal letter of p^x α, and b is a minimal letter of δq^y. Moreover, p^x απ(u), π(u)δq^y ∈ IRR(T).

Table 1 (excerpt) The rules of the rewriting system R. Rule (1): (β, p^x, α)(δ, p^y, γ) → (β, p^{x+y+f}, γ). Condition for rule (2): if αδ *=⇒_T p^z for some z ∈ Z, then (p, π(u)) ∉ I; moreover, βp^x απ(u) ∈ IRR(T). Condition for rule (3): p ≠ q, βp^x απ(u) ∈ IRR(T) and π(u)δq^y γ ∈ IRR(T). Condition for rule (7): alph(a) = alph(b). All triples (β, p^x, α), (δ, p^y, γ) are letters from ∆′.
Our consideration from Remark 47 shows that there are an I-clique v with (π(u), v) ∈ I, exponents x′ ∈ {x, x − sgn x} and y′ ∈ {y, y − sgn y}, and traces α′, δ′ with (β, p^{x′}, α′), (δ′, q^{y′}, γ) ∈ ∆′. If p ≠ q, then rule (3) can be applied to t. On the other hand, if p = q, then rule (1) or (2) can be applied. Altogether, it follows that one of the rules of R can be applied, contradicting t ∈ IRR(R). Thus π(IRR(R)) ⊆ IRR(T). For statement 2 observe that the rules of R only allow reductions that are also allowed in T. To see statement 3, consider a rewriting step u =⇒_R v. This means that we have π(u) *=⇒_T π(v) and either |u|_{∆′} > |v|_{∆′} (for rules (1) and (4)) or π(u) +=⇒_T π(v) and |u|_{∆′} = |v|_{∆′} (for the other rules). Hence, as T is terminating, so is R (indeed, in Lemma 52 below, we give explicit bounds on the number of possible rewriting steps). Statement 4 follows from statements 1 and 2: If u *=⇒_R 1, then π(u) *=⇒_T 1 by statement 2, i.e., π(u) =_G 1. On the other hand, if u *=⇒_R 1 does not hold, then, since R is terminating, there exists v ∈ IRR(R) with u *=⇒_R v ≠ 1. We obtain π(u) *=⇒_T π(v) ≠ 1 by statement 2 and π(v) ∈ IRR(T) by statement 1. Since T is terminating and confluent, this implies π(u) =_G π(v) ≠_G 1.

Lemma 50
The following length bounds hold:
• Rule (1): |f| ≤ 2σ
• Rule (2): |d| ≤ 5σ and |e| ≤ 5σ
• Rule (3): |d| ≤ 4σ|q| and |e| ≤ 4σ|p|
• Rule (4): |βα| < 2(σ − 1)|p|
• Rules (5) and (6): |d| ≤ 1

Remark 51
Note that the proof of the bound for rule (2) in Lemma 50 essentially relies on the assumption a ≠ a^{−1} for all a ∈ Γ. Indeed, without this requirement one can construct examples where d and e in rule (2) are arbitrarily large. The corresponding statement in [57, 58, Lemma 15] contains the unfortunate mistake that this condition is not required. Notice that the correctness of the whole shortening process described below depends on the bounds provided by Lemma 50. Moreover, Lemma 50 is the only place in our construction for the power word problem in graph products where we explicitly use the requirement a ≠ a^{−1} for all a ∈ Γ.
Rule (2): Let ι = sgn x and κ = sgn y. We distinguish two cases. First, assume that (p, π(u)) ∈ I. Then, due to condition (2), αδ does not reduce with T to a power of p. Most importantly, we have αδ ≠_G 1.
We apply Lemma 13 (with q = 1) to the reduced traces p^x α and δp^y. Due to the form of rule (2) we obtain factorizations p^x α =_M p^{x−d}α′rs and δp^y =_M s^{−1}tδ′p^{y−e} such that r and t are I-cliques. Assume that |s| ≥ 3σ|p|. We will deduce a contradiction. Since |α|, |δ| ≤ σ|p|, this implies |y|, |x| ≥ 2σ and (by Lemma 6) s has a suffix p^{ισ}α. Since s^{−1} is a prefix of δp^y, it follows that p^{ισ}α is a suffix of p^{−y}δ^{−1}. Hence, there is a trace q with

qp^{ισ}α =_M p^{−y}δ^{−1}. (6)

First, consider the case ι = κ. As δ is a suffix of p^{κσ}, there is some q′ with δ^{−1}q′ =_M p^{−κσ}. Hence, by (6), we have qp^{ισ}αq′ =_M p^{−y−κσ} =_M (p^{−ι})^{|y|+σ}. As the number of letters from each ζ ∈ L is the same in p and p^{−1}, Lemma 9 implies that p is conjugate to p^{−1}. But since p and p^{−1} are both reduced, this contradicts Lemma 36. Be aware that here we rely upon the assumption a ≠ a^{−1} for all a ∈ Γ. Now consider the case ι ≠ κ. For simplicity, assume ι = 1 and κ = −1 (the other case works exactly the same way), i.e., y ≤ −2σ. Recall that α is a prefix of p^σ. With (6) we obtain qp^σ α =_M p^{−y}δ^{−1} =_M p^σ p^{−y−σ}δ^{−1}, where α is a prefix of p^{−y−σ}. It follows that α must be a suffix of p^{−y−σ}δ^{−1}. Hence, there exists q′′ such that p^{−y}δ^{−1} =_M qp^σ α =_M p^σ q′′α and therefore qp^σ =_M p^σ q′′. Note that p^{−y}δ^{−1} is a prefix of some p^z with z ∈ N. In particular, there is some z ≥ σ such that p^σ q′′α is a prefix of p^z. It follows that q′′ is a prefix of some p^k with k ∈ N. Since p ∈ Ω is connected and primitive as a trace, Lemma 8 implies that q = q′′ = p^ℓ for some ℓ ∈ N. We obtain p^{−y}δ^{−1} =_M p^{σ+ℓ}α and hence αδ =_G p^j for some j ∈ Z (indeed, as p^{−1} is not a suffix of δ and p is not a prefix of α, it follows that αδ ≠_G 1). Again, we obtain a contradiction.
Rule (5): There is a single-letter prefix b of βp^x with alph(a) = alph(b). Either b is a prefix of β, in which case d = 0, or, if it is not, then b must be a prefix of p^{sgn x}, in which case |d| = 1. The same bound for rule (6) follows by symmetry.
For a trace w = w_1 ··· w_n ∈ M(∆, I_∆) with w_i ∈ ∆, we define µ(w) = max{ |p| : (β, p^x, α) ∈ ∆′ occurs in w }. For convenience we define µ(w) = 2 if w does not contain any letter from ∆′. The reason behind this is that |p| ≥ 2 for all (β, p^x, α) ∈ ∆′, as p ∈ Ω is required to be composite. Thus, in any case we have µ(w) ≥ 2.
Proof We observe the following bounds on the numbers of applications of the individual rules of the rewriting system (which we will prove below):
1. Rules (1) and (4) can be applied at most |w|_{∆′} times in total.
2. Rules (2) and (3) can be applied at most 2σ|w|_{∆′} times.
2. For bounding the number of applications of rules (2) and (3), write w =_M w_1 ··· w_n with w_i ∈ ∆. We say that a pair (i, j) with 1 ≤ i < j ≤ n potentially cancels if w_i, w_j ∈ ∆′ and there is some ζ ∈ L such that ζ ∈ alph(w_i) ∩ alph(w_j) and for all i < k < j either w_k ∈ Γ or w_k ∈ ∆′ with ζ ∉ alph(w_k). Notice that the number of pairs that potentially cancel does not depend on the representative w_1 ··· w_n we started with (as the letters from Γ_ζ are linearly ordered). Moreover, if a rule (2) or (3) can be applied at positions i < j in w, then there must be letters a in π(w_i) and b in π(w_j) from the same alphabet Γ_ζ (in particular, ζ ∈ alph(w_i) ∩ alph(w_j)) such that (a, w_k) ∈ I_∆ for all i < k < j. Therefore, (i, j) potentially cancels; the converse, however, does not hold. Furthermore, for each pair that potentially cancels (at some point during the rewriting process w *=⇒_R v), a rule of type (2) or (3) can be applied at most once. This is because the right-hand side of the rule is in π^{−1}(IRR(T)) and irreducibility is not changed by the application of other rules (indeed, not by any application of rules from T).

z_i = 0 for all i ∈ [1, m]. As the intervals in C are ordered, there are ι and τ such that C_i(u) consists of all indices from ι to τ. In case y_i > 0 we have the claimed equality; the case y_i < 0 follows by symmetry.
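The counting argument for rules (2) and (3) can be made concrete in a small sketch. We use a hypothetical encoding of a representative w_1 ··· w_n (the encoding is ours, not from the text): a plain letter from Γ is written ('gamma', None), and a power letter from ∆′ is written ('power', A) where A stands for alph(w_i). The function counts the pairs that potentially cancel exactly as defined above.

```python
def potentially_cancels(letters):
    """Count the pairs (i, j) that 'potentially cancel'.

    A pair of power letters potentially cancels if they share some base
    group zeta and no power letter strictly between them contains zeta."""
    count = 0
    for i, (kind_i, alph_i) in enumerate(letters):
        if kind_i != 'power':
            continue
        for j in range(i + 1, len(letters)):
            kind_j, alph_j = letters[j]
            if kind_j != 'power':
                continue
            # base groups occurring in some power letter strictly between i and j
            blocked = set()
            for kind_k, alph_k in letters[i + 1:j]:
                if kind_k == 'power':
                    blocked |= alph_k
            # some shared zeta must avoid every intermediate power letter
            if (alph_i & alph_j) - blocked:
                count += 1
    return count
```

Since each potentially canceling pair accounts for at most one application of rule (2) or (3) during a derivation, counting these pairs bounds the number of such applications, in line with the proof above.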

Definition 57
We define the distance dist_p(u, C) between some u and the closest interval from C. From that definition the following statement follows immediately. We want to show that, provided certain requirements are fulfilled, any rewriting step that is possible on u is also possible on S_C(u).
Proof Observe that u is compatible with C. By Lemma 53 we have distp(v, C) > 0 and thus v is compatible with C. It follows that S C (u) and S C (v) are defined.
To prove the lemma, we compare the shortened version of u and v and show that a rule from R can be applied. We distinguish which rule from R has been applied in the rewrite step u =⇒ R v.
Rule (2), (3), (5), (6) or (7): If one of these rules has been applied, the shortening process has the same effect on u and v, i.e., C_i(u) = C_i(v) for all i (this is because by Lemma 53 we have |η_p^i(u) − η_p^i(v)| ≤ 5σµ(u) and by assumption dist_p(u, C) > 5σµ(u)). The same rule that has been applied in u =⇒_R v can also be used to get S_C(u) =⇒_R S_C(v): Consider a letter (β, p^{y_i}, α) in u that by the shortening process is changed to (β, p^{z_i}, α) with z_i ≠ y_i. Then C_i(u) ≠ ∅ and we have |z_i| = |y_i| − Σ_{j∈C_i(u)} d_j ≥ 2 dist_p(u, C) ≥ 5σµ(u).
Thus, by Lemma 50 the exponents in S C (u) are large enough to apply a rule of the same type as in u =⇒ R v.
Rule (4): If rule (4) is applied, then C ℓ (v) = C ℓ (u) for ℓ < i and C ℓ (v) = C ℓ+1 (u) for ℓ ≥ i. We also know y i = 0, which is not altered by the shortening process, i. e., C i (u) = ∅. Thus, rule (4) can be applied to S C (u) to obtain S C (v).
Therefore, in any case we have sgn(y_i + y_{i+1}) · Σ_{ℓ∈C_i(v)} d_ℓ = sgn(y_i + y_{i+1} + f) · Σ_{ℓ∈C_i(v)} d_ℓ, and it remains to show that z_i + z_{i+1} = y_i + y_{i+1} − sgn(y_i + y_{i+1}) · Σ_{ℓ∈C_i(v)} d_ℓ. Now let us distinguish two cases. First, consider the case that y_i and y_{i+1} have the same sign. In that case we have C_i(u) ∩ C_{i+1}(u) = ∅ and (again, because by Lemma 50, |f| ≤ 2σ < dist_p(u, C)) it follows that C_i(v) = C_i(u) ∪ C_{i+1}(u). Thus, we obtain the desired equality. Second, we look at the case where y_i and y_{i+1} have opposite signs. We assume |y_i| ≥ |y_{i+1}|; the other case is symmetric. We have C_{i+1}(u) ⊆ C_i(u) and C_i(v) = C_i(u) \ C_{i+1}(u). This implies the desired equality as well. Note that in the case y_i = −y_{i+1} we have C_i(u) = C_{i+1}(u) and C_i(v) = ∅, so the last equality also holds in this case. This concludes the proof of the lemma.
We continue by defining a concrete set of intervals C^K_{u,p} based on the following intuitive idea. From Lemma 49 and Lemma 52 we know that π(u) =_G 1 if and only if u =⇒_R^{≤k} 1 with k = 10σ²|u|²_∆ µ(u). By Lemma 53, each application of a rule changes η_p(·) by at most 5σµ(u). Thus, the partial sums of the exponents change by less than K = 50σ³|u|²_∆ µ(u)² + 1.
Let {c_1, …, c_ℓ} = {η_p^i(u) | i ∈ [0, m]} be the ordered set of the η_p^i(u), i.e., c_1 < ··· < c_ℓ. We define the set of intervals C^K_{u,p} as in (8). Let us write C for C^K_{u,p} in the following. Note that |C| ≤ m. The next lemma shows that the shortened word computed with the set C is the identity if and only if the original word is the identity.
The next lemma shows that when using the set C from (8), the exponents of the shortened word are bounded by a polynomial.

Solving the power word problem
Now we are ready for the proofs of our main results for graph products.
Theorem 63 Let G = GP(L, I, (G_ζ)_{ζ∈L}) be a graph product of f.g. groups such that no G_ζ contains an element a with a² =_{G_ζ} 1 and a ≠_{G_ζ} 1. Then the power word problem in G can be decided in uAC0 with oracle gates for the word problem in F2 and for the power word problems in the base groups G_ζ.
Proof By Lemma 44 the preprocessing can be done in uAC0 with oracles for the word problems in G and F2 (thus, by [33, Theorem 5.6.5, Theorem 5.6.14], in uAC0(WP(F2), (WP(G_ζ))_{ζ∈L}) ⊆ uAC0(WP(F2), (PowWP(G_ζ))_{ζ∈L})). Let (5) be the power word obtained after the preprocessing. The shortening procedure can be computed in parallel for each p ∈ {p_i | i ∈ [1, n]}. It requires iterated additions, which are in uTC0 ⊆ uAC0(WP(F2)). By Lemma 62 the exponents of the shortened word are bounded by a polynomial in the input length. We write the shortened word as a simple power word of polynomial length and solve the simple power word problem, which, by Proposition 17, is in uAC0(WP(F2), (PowWP(G_ζ))_{ζ∈L}).
Corollary 64 Let G be a RAAG. The power word problem in G is uAC 0 -Turing reducible to the word problem in the free group F 2 and, thus, in L.
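For intuition, here is a naive checker for the power word problem in the free group F2 (generators a, b; inverses written A, B). It expands the powers explicitly and then freely reduces, so its running time is exponential in the bit length of the exponents; the point of Corollary 64 is precisely that this blow-up can be avoided. All names in the sketch are ours.

```python
def free_reduce(w: str) -> str:
    """Freely reduce a word over {a, b, A, B}; X denotes x^{-1}."""
    out = []
    for c in w:
        # cancel adjacent inverse letters (e.g. 'a' followed by 'A')
        if out and out[-1] == c.swapcase() and out[-1] != c:
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def invert(w: str) -> str:
    """Formal inverse: reverse the word and invert each letter."""
    return ''.join(c.swapcase() for c in reversed(w))

def power_word_is_identity(factors) -> bool:
    """factors: list of (word, exponent). Expands each power explicitly,
    so this only illustrates the problem statement, not the uAC0
    reduction of Corollary 64."""
    w = ''
    for u, x in factors:
        piece = u if x >= 0 else invert(u)
        w = free_reduce(w + piece * abs(x))
    return w == ''
```

For example, power_word_is_identity([("ab", 2), ("BA", 2)]) returns True, since (ab)² (b⁻¹a⁻¹)² = 1 in F2.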
The proof of the following result is analogous to the proof of Theorem 63 using the respective statements of the lemmas for the uniform case.
Theorem 65 Let C be a non-trivial class of f.g. groups such that for all G ∈ C and all a ∈ G \ {1} we have a² ≠_G 1. Then UPowWP(GP(C)) belongs to uAC0(C=L^{UPowWP(C)}).
Corollary 66 Let RAAG denote the class of finitely generated RAAGs, each given by an alphabet X and an independence relation I ⊆ X × X. Then UPowWP(RAAG) is in uAC0(C=L) ⊆ uNC2.
Remark 67 One can consider variants of the power word problem where the exponents are given not in binary representation but in even more compact form. Power circuits, as defined in [52], are one such representation; they allow non-elementary compression for some integers. Our logspace algorithm for the power word problem in a RAAG involves iterated addition and comparison (for equality) of exponents. For arbitrary power circuits, unfortunately, comparison for less-than is P-complete and the complexity of equality checking is unknown. However, if we restrict to certain normal forms, called reduced power circuits, both iterated addition and comparison (for equality and for less-than) are in uTC0 [48]. Therefore, our techniques show that the power word problem for RAAGs with exponents given by reduced power circuits is also uAC0-Turing-reducible to the word problem for the free group F2.

Consequences for the knapsack problem in right-angled Artin groups
Recall that the knapsack problem for a finitely generated group G asks whether for given group elements g_1, …, g_n, g ∈ G (represented by words over the generators) there exist x_1, …, x_n ∈ N such that g_1^{x_1} ··· g_n^{x_n} =_G g holds. Using our results on the power word problem, we can show the following result, which solves an open problem from [44].
Corollary 68 The uniform knapsack problem for RAAGs is NP-complete: on input of a RAAG G = G(X, I), given by the graph (X, I), and words u_1, …, u_n, u ∈ (X ∪ X^{−1})*, it can be decided in NP whether there are x_1, …, x_n ∈ N with u_1^{x_1} ··· u_n^{x_n} =_G u.
Proof Let N = |X| + |u| + Σ_{i=1}^{n} |u_i| (this is roughly the input size). By [44, Theorem 3.11], there is a polynomial p(N) such that if there is a solution, then there is a solution x_1, …, x_n with x_i ≤ 2^{p(N)}. Therefore, we can guess a potential solution in polynomial time. From Corollary 66 it follows that the uniform power word problem in RAAGs belongs to P. Hence, the uniform knapsack problem can be decided in NP. Finally, NP-hardness follows immediately from the NP-completeness of the knapsack problem for a certain fixed RAAG, which has been shown in [44].
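To illustrate the guess-and-check structure of this NP algorithm in the simplest RAAG, consider the free abelian group Z^k (the RAAG on a complete independence graph), where u_1^{x_1} ··· u_n^{x_n} =_G u becomes a system of linear equations over N. The brute-force search below stands in for the nondeterministic guess, and the bound parameter plays the role of 2^{p(N)}; all names are ours and the sketch is purely illustrative.

```python
from itertools import product

def knapsack_zk(gens, target, bound):
    """Knapsack in Z^k: find x_i in [0, bound] with sum x_i * g_i = target.

    Brute force over (bound+1)^n candidates; the NP algorithm instead
    guesses one binary-encoded candidate and verifies it with a single
    power word query."""
    k = len(target)
    for xs in product(range(bound + 1), repeat=len(gens)):
        acc = [0] * k
        for x, g in zip(xs, gens):
            for i in range(k):
                acc[i] += x * g[i]  # x_i copies of generator g_i
        if acc == list(target):
            return list(xs)
    return None
```

For instance, knapsack_zk([(1, 0), (1, 1)], (3, 2), 5) returns [1, 2], since 1·(1, 0) + 2·(1, 1) = (3, 2).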
Note that this proof even shows NP-completeness of the slightly more general problem of uniformly solving exponent equations for RAAGs as defined in [44].

Open Problems
We strongly conjecture that the requirement a² ≠ 1 can be dropped in all our results (as falsely claimed in [57, 58]). Indeed, we believe that our methods can be extended to cope with that case. Still, this is a highly non-trivial question for further research.
Furthermore, we conjecture that the method of Section 5.3 can similarly be applied to hyperbolic groups, and hence that the power word problem for a hyperbolic group G is uAC 0 -Turing-reducible to the word problem for G. One may also try to prove transfer results for the power word problem with respect to group theoretical constructions other than graph products, e.g., HNN extensions and amalgamated products over finite subgroups. For a transfer result with respect to wreath products, see [19,Proposition 19]. However, many cases are still open.
For finitely generated linear groups, the power word problem leads to the problem of computing matrix powers with binary encoded exponents. The complexity of this problem is open; variants of this problem have been studied in [3,21].
Another open question is what happens if we allow nested exponents. We conjecture that in the free group, for any nesting depth bounded by a constant, the problem is still in uAC0(WP(F2)). For unbounded nesting depth, however, it is not clear what happens: we only know that the problem is in P, since it is a special case of the compressed word problem; it could still be in uAC0(WP(F2)), or it could be P-complete, or somewhere in between.