3XOR Games with Perfect Commuting Operator Strategies Have Perfect Tensor Product Strategies and are Decidable in Polynomial Time

We consider 3XOR games with perfect commuting operator strategies. Given any 3XOR game, we show that the existence of a perfect commuting operator strategy for the game can be decided in polynomial time. Previously this problem was not known to be decidable. Our proof leads to a construction, showing that a 3XOR game has a perfect commuting operator strategy iff it has a perfect tensor product strategy using a 3 qubit (8 dimensional) GHZ state. This shows that for perfect 3XOR games the advantage of a quantum strategy over a classical strategy (defined by the quantum-classical bias ratio) is bounded. This is in contrast to the general 3XOR case, where optimal quantum strategies can require high dimensional states and there is no bound on the quantum advantage. To prove these results, we first show an equivalence between deciding the value of an XOR game and solving an instance of the subgroup membership problem on a class of right angled Coxeter groups. We then show, in a proof that consumes most of this paper, that the instances of this problem corresponding to 3XOR games can be solved in polynomial time.


Introduction
One fantastic implication of quantum mechanics is that measurements made on quantum mechanical systems can produce correlated outcomes irreproducible by any classical system. This observation is at the heart of Bell's celebrated 1964 inequality [2] and has since found applications in cryptography [1,15,34,11], delegated computing [29] and short depth circuits [4,36,18], among others. Recent results have shown the sets of correlations producible by measuring quantum states are incredibly difficult to characterize [26,14,22,12,32,10].
In this work, we present a result in the opposite direction. We consider a natural question concerning existence of quantum correlations which has been open for decades and is comparable to the one shown to be undecidable in [32]. We show it can be answered in polynomial time. Furthermore we show that when these correlations can be produced, they can be produced by simple measurements of a finite dimensional quantum state. We begin by reviewing some necessary background.
Nonlocal Games. Nonlocal games describe experiments which test the correlations that can be produced by measurements of quantum systems. A nonlocal game involves a referee (also called the verifier) and k ≥ 1 players (also called provers). In a round of the game, the verifier selects a question vector q = (q_1, q_2, ..., q_k) randomly from a set S of possible question vectors, then sends player i question q_i. Each player responds with an answer a_i. The players cannot communicate with each other when choosing their answers. After receiving an answer from each player, the verifier computes a score V(a_1, a_2, ..., a_k | q_1, q_2, ..., q_k) which depends on the questions selected and answers received. The players know the set of possible questions S and the scoring function V. Their goal is to choose a strategy for responding to each possible question which maximizes their score in expectation. The difficulty for the players lies in the fact that in a given round each player only has partial information about the questions sent to other players.
For a given game G, the supremum of the expected scores achievable by players is called the value of the game. The value depends on the resources available to the players. If players are restricted to classical strategies, the value is called the classical value and denoted ω(G). If players can make measurements on a shared quantum state (but still can't communicate) the value can be larger and is called the entangled value. More specifically, if the players' shared state lives in a Hilbert space H = H_1 ⊗ H_2 ⊗ ... ⊗ H_k and the i-th player makes a measurement on the i-th Hilbert space, the supremum of the scores the players can obtain is called the tensor product value, denoted ω*_tp. If the players share an arbitrary state and the only restriction placed on their measurements is that measurement operators of different players commute (enforcing no-communication), the supremum of the achievable scores is called the commuting operator value, denoted ω*_co. When the state shared by the players is finite dimensional these definitions coincide. In the infinite dimensional case ω*_tp ≤ ω*_co, and there exist games for which the inequality is strict [22].
Bounds on the Value. The commuting operator and tensor product values of a game are in general uncomputable [22,32]. Intuitively, this is because the nonlocal games formalism places no restriction on the dimension of the state shared by the players, and so a brute force search over strategies will never terminate. However, such a search can provide a lower bound on the value of a game. Given a game G, let ω*_d(G) denote the maximum score achievable by players using states of dimension at most d. This value lower bounds the tensor product (hence, commuting operator) value, and converges to the tensor product value in the limit as d → ∞ [31], so sup_{d<∞} ω*_d = ω*_tp. Given a fixed d, ω*_d can be computed by exhaustive search. Computing ω*_d for an increasing sequence of d's produces a sequence of lower bounds that converge to ω*_tp from below. It is also possible to bound the commuting operator value of a nonlocal game from above, via a convergent hierarchy of semidefinite programs known as the NPA hierarchy [27,13]. (Both these papers focus on upper bounds in the two player case, but k player generalizations are straightforward.) When run to a finite level, this hierarchy gives an upper bound on the commuting operator value of a game. However there is no guarantee that this bound can be achieved by any commuting operator strategy, hence no guarantee that the upper bound matches the true commuting operator value. In general all that can be said is that this hierarchy is complete, meaning that the bounds computed necessarily converge to the commuting operator value of the game. Because of the previously mentioned undecidability results, no general bounds can be put on this rate of convergence.
XOR Games. XOR games are one family of games for which more concrete results are known. These are nonlocal games where each question q_j is drawn from an alphabet of size N, players' responses are single bits a_i ∈ {0, 1}, and the scoring function checks if the overall parity of the responses matches a desired parity s_j associated with the question, that is

V(a_1, a_2, ..., a_k | q_1, q_2, ..., q_k) = 1 if a_1 + a_2 + ... + a_k = s_j (mod 2), and 0 otherwise. (1.1)

We refer to an XOR game with k players as a kXOR game. It is helpful to think of a kXOR game as testing satisfiability of a set of clauses, where each clause

X^(1)_{q_j1} + X^(2)_{q_j2} + ... + X^(k)_{q_jk} = s_j

corresponds to a question vector (q_j1, q_j2, ..., q_jk) with associated parity bit s_j. If question vectors are chosen uniformly at random, the classical value of the game corresponds to the maximum fraction of simultaneously satisfiable clauses (see Section 2.1.2 for a proof of this fact). The tensor product and commuting operator values have no such interpretation, and may be larger.
Famously, Bell's inequality can be expressed as a 2XOR game called the CHSH game [7], with clauses

X^(1)_0 + X^(2)_0 = 0, X^(1)_0 + X^(2)_1 = 0, X^(1)_1 + X^(2)_0 = 0, X^(1)_1 + X^(2)_1 = 1.

At most 3 of these 4 clauses can be simultaneously satisfied, so the classical value of this game is 0.75. However, there exists a strategy involving measurements on the two qubit Bell state |Φ+⟩ = (1/√2)(|11⟩ + |00⟩) which achieves an expected score of cos²(π/8) ≈ 0.85. 2XOR games are well understood in general; in 1987 Tsirelson showed the optimal value for any 2XOR game can be achieved by a finite dimensional strategy which can be found in polynomial time [33]. This result shows the 2 qubit strategy is optimal for the CHSH game, so ω*_co(CHSH) = ω*_tp(CHSH) = cos²(π/8). More generally, Tsirelson's result showed ω*_co = ω*_tp for any 2XOR game. For kXOR games with k > 2 the situation is much more opaque. There exist polynomial time algorithms that can compute ω*_co and ω*_tp in special cases [35,37]. On the other hand it is NP-hard to compute the classical value of a 3XOR game [19], and there is no known upper bound on the runtime required to compute the commuting operator or tensor product value of a kXOR game when k ≥ 3. Furthermore, the commuting operator and tensor product values of a kXOR game are not known to coincide. One natural and efficiently solvable problem involving kXOR games is identifying games with perfect classical value ω = 1. This is equivalent to asking if the corresponding set of clauses is exactly solvable, so can be answered in polynomial time using Gaussian elimination.
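The Gaussian elimination check mentioned above is easy to make concrete. Below is a minimal sketch of a mod-2 solver; the clause encoding (a list of variable indices plus a parity bit, packed into Python integers as bitmasks) is our own illustrative convention, not notation from the paper.

```python
def gf2_solvable(clauses, parities, n_vars):
    """Decide whether the mod-2 linear system given by `clauses` (each a
    list of variable indices) and right-hand sides `parities` has an exact
    solution, via Gaussian elimination over GF(2)."""
    # Encode each equation as a bitmask: bit v = variable v, top bit = parity.
    rows = []
    for clause, s in zip(clauses, parities):
        row = 0
        for v in clause:
            row ^= 1 << v
        row |= (s & 1) << n_vars
        rows.append(row)

    pivots = {}  # leading variable -> reduced row with that leading variable
    for row in rows:
        for v in range(n_vars):
            if not (row >> v) & 1:
                continue
            if v in pivots:
                row ^= pivots[v]  # eliminate variable v, keep scanning
            else:
                pivots[v] = row
                break
        else:
            # Row reduced to 0 = parity; inconsistent iff parity bit is set.
            if (row >> n_vars) & 1:
                return False
    return True

# A solvable system: x0 + x1 = 1.
assert gf2_solvable([[0, 1]], [1], 2)
# The CHSH system (variables x0, x1, y0, y1 numbered 0..3) has no solution.
assert not gf2_solvable([[0, 2], [0, 3], [1, 2], [1, 3]], [0, 0, 0, 1], 4)
```

Since every clause touches k variables, building and reducing the rows takes time polynomial in the number of clauses and the alphabet size.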
Interestingly, there exist XOR games with ω*_tp = 1 and ω < 1; the sets of clauses associated with these games appear perfectly solvable when the game is played by players sharing an entangled state, despite the clauses having no actual solution. The most famous of these XOR pseudotelepathy games [3] is the GHZ game, a 3XOR game with 4 clauses and classical value ω = 3/4. There is a perfect value tensor product strategy for this game involving measurements of the GHZ state (1/√2)(|000⟩ + |111⟩), so ω*_tp(GHZ) = ω*_co(GHZ) = 1 [17,24]. The relative difficulty of computing the classical value of kXOR games compared to the ease of identifying perfect value kXOR games motivates an analogous question concerning the entangled values. Does there exist a non-commutative analogue of Gaussian elimination that can easily identify kXOR games with ω*_co or ω*_tp = 1? How hard is it to identify XOR pseudotelepathy games?
Bias. XOR games can also be characterized by their bias β(G), defined by β(G) = 2ω(G) − 1. The entangled biases β*_co and β*_tp are defined analogously. A completely random strategy for answering an XOR game will achieve a score of 1/2, hence ω(G) ≥ 1/2 and β(G) ∈ [0, 1], with identical bounds holding on the other biases. When comparing classical and entangled biases, the quantity usually considered is the ratio β*_tp(G)/β(G) (or β*_co(G)/β(G)), called the quantum-classical gap. For 2XOR games this gap can be related to the Grothendieck inequality, with

β*_tp(G)/β(G) ≤ K^R_G (1.2)

where K^R_G is the real Grothendieck constant (because ω*_co = ω*_tp for 2XOR games, we also have β*_co = β*_tp). For 3XOR games no such bound holds [28,6], and there exist families of games {G_n}_{n∈N} with

β*_tp(G_n)/β(G_n) → ∞ as n → ∞. (1.3)

All these families have the property that lim_{n→∞} β*_tp(G_n) = 0; it is open whether an unbounded quantum-classical gap can exist for kXOR games with β*_co bounded away from zero. One special case where a bound on the quantum-classical gap is known is 3XOR games with the players restricted to a GHZ state [28] (later generalized to Schmidt states in [5]). In this case the quantum-classical gap is bounded above by 4K^R_G [5].
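As a small sanity check on these definitions, the CHSH values quoted earlier give a quantum-classical gap of exactly √2, comfortably below the Grothendieck bound (K^R_G ≈ 1.78):

```python
import math

# Classical and entangled values of the CHSH game, as quoted in the text.
omega_classical = 0.75
omega_entangled = math.cos(math.pi / 8) ** 2

beta_classical = 2 * omega_classical - 1  # = 0.5
beta_entangled = 2 * omega_entangled - 1  # = cos(pi/4) = 1/sqrt(2)

gap = beta_entangled / beta_classical  # quantum-classical gap of CHSH
assert abs(gap - math.sqrt(2)) < 1e-12
```

This is just illustrative arithmetic; the identity 2cos²(π/8) − 1 = cos(π/4) is what makes the gap exactly √2.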
Our Main Results. This paper considers perfect commuting operator strategies for XOR games. We first show a link between XOR games and algebraic combinatorics: a kXOR game has value ω*_co = 1 iff a corresponding instance of the subgroup membership problem on a right angled Coxeter group has "no" for an answer. For kXOR games with k ≥ 3, the corresponding class of Coxeter groups has undecidable subgroup membership problem. A priori, it is not clear whether or not the instances determining if a game has value ω*_co = 1 are decidable. In this paper we resolve the 3XOR case by proving an algebraic result (whose proof consumes most of this paper) showing the instances of the subgroup membership problem determining the value of 3XOR games are equivalent to instances on a simpler group G/K obtained from G by modding out a particular normal subgroup K. This equivalence lets us construct a polynomial time algorithm that determines whether or not a 3XOR game has value ω*_co = 1. Previously this problem was not known to be decidable. For k ≥ 4 it remains open whether or not there is any algorithm which can decide in finite time if a game has a perfect commuting operator strategy.
Combining this result with arguments from [35] shows 3XOR games with ω*_co = 1 also have perfect value tensor product strategies, with the players sharing a three qubit GHZ state. Combining that observation with the known bounds on the quantum-classical gap for strategies using a GHZ state [28,5] shows that 3XOR games with ω*_co = 1 have classical value bounded a constant distance above 1/2. In other words, when ω*_co = 1, the advantage of the quantum bias over the classical bias is bounded. This is in contrast with the behavior, see Equation (1.3), of imperfect games. Section 2 gives basic definitions, precise statements of the main theorems, and proofs or proof sketches where appropriate. Section 3 gives proofs of the more involved algebraic results. The appendices fill in proof details and give perspectives, mostly about the subgroup K.
Comparison with Other Work. Our result shares high-level structure with the work of Cleve and coauthors [9,8] and followup work by Slofstra [32] concerning linear systems games, though our work comes to a very different conclusion than theirs. In both that work and ours, perfect value commuting operator strategies are shown to exist for a family of nonlocal games iff an algebraic property is satisfied on a related group. In [32], Slofstra showed that the algebraic property associated with linear systems games was undecidable, implying existence of a linear systems game whose only perfect value strategies were incredibly complicated (infinite dimensional). Here we show the algebraic property associated with perfect value 3XOR games can be checked in polynomial time, and give a finite dimensional strategy, called a MERP strategy, that achieves value 1 whenever a perfect value commuting operator strategy exists. The MERP strategy is a variant of the GHZ strategy that has been considered before. In [37] this strategy was shown to be optimal for kXOR games with two questions per player. In [35] this strategy was shown to be optimal for a restricted class of XOR games (symmetric kXOR games) with perfect value. In [21] a quantum circuit closely related to this strategy was used as a subroutine in short depth circuits.

A Detailed Overview
We begin this section by introducing notation necessary to state the main theorems of this work. Much of it is specific to this paper, so we suggest a reader familiar with the field still read Section 2.1 fairly closely. Section 2.2 contains all the major theorem statements of this paper.

Games
As mentioned in Section 1, we think of XOR games as testing satisfiability of an associated system of equations. Our starting point for defining any kXOR game is a system of m equations of the form

X^(1)_{n_i1} + X^(2)_{n_i2} + ... + X^(k)_{n_ik} = s_i, i ∈ [m],

where the X^(α)_1, ..., X^(α)_N are formal variables taking values in {0, 1} and the equations are all taken mod 2. N is called the alphabet size of the game, and m the number of clauses. The kXOR game associated to this system of equations has m question vectors {(n_11, n_12, ..., n_1k), ..., (n_m1, n_m2, ..., n_mk)}. In a round of the game the verifier selects i ∈ [m] uniformly at random, then sends question vector (n_i1, n_i2, ..., n_ik) to the players, i.e. player j receives question n_ij. The players respond with single bit answers and win (get a score of 1 on) the round if the sum of their responses equals s_i mod 2. They get a score of 0 otherwise. Any kXOR game where clauses are chosen uniformly at random can be described by specifying the associated system of equations. For the case of 3XOR games, we will simplify notation slightly by omitting a subindex and instead writing the j-th equation of our system as

X^(1)_{a_j} + X^(2)_{b_j} + X^(3)_{c_j} = s_j.

The question vector sent to the players is then (a_j, b_j, c_j), with the players winning the round if their responses sum to s_j mod 2.

Strategies
For ease of notation, we will describe strategies in the special case of 3XOR games. We note that all the definitions given here generalize naturally to the k-player case. We begin this section with a brief discussion of classical strategies, then move on to consider entangled strategies. The discussion of classical strategies is included mostly for perspective, and can be skipped. The definitions related to entangled strategies are essential.

The most general classical strategy can be described by specifying a response for each player based on the question received and some shared randomness λ. If we are only concerned with strategies that maximize the players' score, a convexity argument shows that we can ignore the shared randomness (fix λ to the value that maximizes the players' score in expectation), so optimal classical strategies can be described by fixing responses for each player to each possible question. To better align with the quantum case, we describe these strategies multiplicatively rather than additively. Define X^(α)_i to equal 1 if player α responds to question i with a 0, and X^(α)_i = −1 if the player responds with a 1. Players win on the j-th question vector iff X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j} (−1)^{s_j} = 1, so the expected score of the players conditioned on receiving the j-th question vector can be written

(1 + (−1)^{s_j} X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j}) / 2

and the expected score this strategy achieves on a XOR game is given by

(1/m) Σ_{j∈[m]} (1 + (−1)^{s_j} X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j}) / 2.

We refer to strategies where players share and measure a quantum state before deciding their response as entangled strategies.⁵ In the most general entangled strategy, players share a state |ψ⟩ and randomness λ. Then they receive a question, make a measurement on the quantum state based on the question and shared randomness, and then send a response to the verifier based on the measurement outcome. Mathematically, any strategy can be described by fixing the state |ψ⟩ and POVMs (positive operator-valued measures) for each possible question sent to the players.
A Naimark type dilation theorem tells us that any such strategy can be transformed to one where players' measurements are all described by PVMs (projection valued measures) without changing the score that strategy achieves on a game (the finite dimensional case is standard; see [16] Section 3 for the infinite dimensional argument). Thus, when considering whether or not a game has an optimal strategy we are free to consider only strategies which can be described by a shared state |ψ⟩ and PVMs for each possible player and question.
In this paper we describe entangled strategies using the PVM formalism. More specifically, we study self-adjoint operators associated with these PVMs. We define these self-adjoint operators as follows:

1. First, specify the shared state |ψ⟩, which lives in some Hilbert space H.

2. For each player α ∈ [k] and question i ∈ [N], let P^(α)_i be the projector onto the subspace of H associated with a 1 response by player α to question i. Similarly, let Q^(α)_i be the projector onto the subspace associated with a 0 response, so that P^(α)_i + Q^(α)_i = 1. Here 1 represents the identity operator.
3. For every α and i, define the strategy observable X^(α)_i = Q^(α)_i − P^(α)_i.

⁵ The name quantum strategies, while more natural, can cause confusion with strategies where questions and responses are themselves quantum states. Entanglement is not necessary for these strategies, but the players achieve a value exceeding their classical value only if the state they share is entangled.
The operators X^(α)_i satisfy some useful properties. Firstly, they are self-adjoint by construction with eigenvalues ±1. From this, or from direct calculation, it follows that

(X^(α)_i)² = (Q^(α)_i − P^(α)_i)² = Q^(α)_i + P^(α)_i = 1,

where we have used the fact that Q^(α)_i and P^(α)_i are orthogonal projectors on the last line. Secondly, the restriction that players be non-communicating means that a player's chance of responding 1 (resp. 0) should be independent of another player's response. Hence

X^(α)_i X^(β)_j = X^(β)_j X^(α)_i for any i, j and α ≠ β.

Defining the group commutator of two observables [y, z] := yzy⁻¹z⁻¹ we see [X^(α)_i, X^(β)_j] = 1 for any i, j and α ≠ β. Finally, we consider a product of operators corresponding to a question vector in the XOR game. A state is in the +1 eigenspace of X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j} iff the sum mod 2 of the players' responses to the verifier upon measuring this state is 0. Similarly a state is in the −1 eigenspace iff the sum of the players' responses upon measuring this state is 1. Then, players win on question vector j with probability

(1 + (−1)^{s_j} ⟨ψ| X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j} |ψ⟩) / 2

and their overall score on the game is given by

(1/m) Σ_{j∈[m]} (1 + (−1)^{s_j} ⟨ψ| X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j} |ψ⟩) / 2. (2.7)

An important consequence of Eq. (2.7) is that the players win the game with probability 1 iff

(−1)^{s_j} X^(1)_{a_j} X^(2)_{b_j} X^(3)_{c_j} |ψ⟩ = |ψ⟩ for all j ∈ [m].

This is because each X^(α)_i has norm ≤ 1.
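These properties are easy to verify numerically in a toy example. The sketch below (our own illustrative construction, using randomly chosen rank-1 projectors on a qubit) builds observables X = Q − P on two tensor factors and checks self-adjointness, X² = 1, and commutation between different players:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # local dimension for this toy example

def random_observable(d, rng):
    """Build X = Q - P from a random rank-1 PVM {P, Q} on C^d."""
    # Random unitary via QR decomposition of a complex Gaussian matrix.
    U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    P = np.outer(U[:, 0], U[:, 0].conj())  # projector: "respond 1"
    Q = np.eye(d) - P                      # projector: "respond 0"
    return Q - P

X1 = random_observable(d, rng)
X2 = random_observable(d, rng)
I = np.eye(d)

# Player 1 acts on the first tensor factor, player 2 on the second.
A = np.kron(X1, I)
B = np.kron(I, X2)

assert np.allclose(A.conj().T, A)         # self-adjoint
assert np.allclose(A @ A, np.eye(d * d))  # squares to the identity
assert np.allclose(A @ B, B @ A)          # different players commute
```

In the tensor product setting the commutation relation holds automatically; in the commuting operator setting it is imposed as an axiom.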

Groups
Now we introduce groups whose structure mimics the structure of the strategy observables introduced in Section 2.1.2. We describe these groups using the language of group presentations. The language in this section is, at times, technical and we alert the reader that explicit examples of this notation are given in Section 2.3. Given integers k and N we define the game group G to be the group with generators σ and x^(α)_i for all i ∈ [N], α ∈ [k], and relations:

σ² = 1, (x^(α)_i)² = 1, σ x^(α)_i = x^(α)_i σ for all i ∈ [N], α ∈ [k],
x^(α)_i x^(β)_j = x^(β)_j x^(α)_i for all i, j ∈ [N] and α ≠ β.

Here the x^(α)_i are group elements satisfying the same relations as the strategy observables defined in Section 2.1.2. The element σ is a formal variable playing the role of −1. Note σ ≠ 1 in the group. While it is not needed for the paper, we remark here that G is a right angled Coxeter group.
Given a k-player XOR game testing the system of m equations

X^(1)_{n_j1} + X^(2)_{n_j2} + ... + X^(k)_{n_jk} = s_j,

we define the clauses h_1, h_2, ..., h_m of the game by

h_j = σ^{s_j} x^(1)_{n_j1} x^(2)_{n_j2} ... x^(k)_{n_jk}, (2.9)

where σ⁰ = 1. We denote the set of all clauses by S and define the clause group H ≤ G to be the subgroup generated by the clauses, so

H = ⟨h_1, h_2, ..., h_m⟩.

We note that this construction lets us associate any k-player XOR game with a subgroup H of the group G. It is also worth noting that the clause group H is, in general, not a normal subgroup of the game group G. Here we recall that a subgroup T of G is called a normal subgroup (denoted T ⊳ G) if gTg⁻¹ = T for all g ∈ G, i.e. for all g ∈ G, t ∈ T we also have

g t g⁻¹ ∈ T. (2.10)

Important subgroups of the groups G and H are those consisting of words containing an even number of generators corresponding to each player. Define the even subgroups G_E, H_E by

G_E = {w ∈ G : for each α ∈ [k], w can be written as a word containing an even number of generators x^(α)_i}, H_E = H ∩ G_E. (2.11)

Given a set of elements R ⊆ G_E the normal closure of R in G_E, denoted in this paper by ⟨R⟩^{G_E}, is defined to be the smallest normal subgroup of G_E containing the elements of R. Equivalently, ⟨R⟩^{G_E} is the subgroup of G_E generated by the set of elements

{g r g⁻¹ : r ∈ R, g ∈ G_E}. (2.12)

Define the even commutator subgroup K of G_E by:

K = ⟨[w_1, w_2] : w_1, w_2 ∈ G_E⟩^{G_E}. (2.13)

In this paper we will frequently study the group G_E/K obtained by modding out the group G_E by the normal subgroup K. The first isomorphism theorem tells us that this is a well defined group whose elements can be identified with the cosets of K in the group G_E. In this paper we will denote the elements of G_E/K as [w]_K where w ∈ G_E, and [w_1]_K = [w_2]_K iff w_1 k = w_2 for some k ∈ K. The normal subgroup property ensures that elements in G_E/K multiply as in the group G_E, with

[w_1]_K [w_2]_K = [w_1 w_2]_K.

We can also understand subgroups of G_E/K using the (second) isomorphism theorem. This theorem tells us that, given any subgroup T of G_E, denoted T < G_E, the set (TK)/K is a subgroup of G_E/K, with (TK)/K ≅ T/(T ∩ K). Particularly important to this paper will be the group (H_E K)/K, which we view as a subgroup of G_E/K. For any element w ∈ G_E we have [w]_K ∈ (H_E K)/K iff

w k_1 = h k_2, equivalently w = h k_3,

for some k_1, k_2, k_3 ∈ K and h ∈ H_E (note in this equivalence we have again used the normal property of the subgroup K).
This condition is also equivalent to the condition

[w]_K = [h]_K for some h ∈ H_E.

This shows that (H_E K)/K is equal to the subgroup of G_E/K generated by the elements

{[h]_K : h a generator of H_E}

(that is, the generators of H_E taken mod K generate the subgroup (H_E K)/K of G_E/K). For this reason we use the notation [H_E]_K to denote the group (H_E K)/K. Of particular importance to the rest of this paper will be the condition

[σ]_K ∈ [H_E]_K,

which we also sometimes state as σ ∈ H_E (mod K).

Precise Statements of Main Results
In this section we give theorem statements covering the main results of this paper, along with some relevant theorems from previous work.

An algebraic characterization of perfect k-player XOR Games
Our first result shows the problem of determining if ω*_co = 1 is equivalent to an instance of the subgroup membership problem on the game group G.

Theorem 2.1. Let σ and H be defined relative to a kXOR game as described in Section 2.1.3. Then the game has value ω*_co = 1 iff σ ∉ H.
We should mention that some ingredients of this proof have appeared before in other contexts [27,35]. The key innovation of this theorem is the algebraic formulation of the issue.

Proof. For notational convenience, we prove the result here in the special case of k = 3 players. The proof generalizes easily to other values of k.
We first show that σ ∈ H ⇒ ω*_co < 1. Assume for contradiction that σ ∈ H and ω*_co = 1. Then, since σ ∈ H, there exists a sequence of clauses h_{t_1}, h_{t_2}, ..., h_{t_l} whose product satisfies

h_{t_1} h_{t_2} ... h_{t_l} = σ.

We can relate the group elements x^(α)_i to the strategy observables X^(α)_i of a strategy achieving value 1: the observables satisfy all the defining relations of G, and by the discussion at the end of Section 2.1.2 a perfect strategy satisfies (−1)^{s_{t_i}} X^(1)_{a_{t_i}} X^(2)_{b_{t_i}} X^(3)_{c_{t_i}} |ψ⟩ = |ψ⟩ for every clause h_{t_i}. Applying this relation repeatedly to the operator product corresponding to h_{t_1} h_{t_2} ... h_{t_l} shows −|ψ⟩ = |ψ⟩, a contradiction.

It remains to show σ ∉ H ⇒ ω*_co = 1. A proof of this fact that relies on completeness of the ncSoS hierarchy is given in [35] (Theorem 6.1, in which a sequence of clauses h_{t_1} h_{t_2} ... h_{t_l} satisfying h_{t_1} h_{t_2} ... h_{t_l} = σ is referred to as a refutation). Here we give a standalone proof, which can be viewed as a special case of the GNS construction. We assume σ ∉ H, and construct the strategy observables and state |ψ⟩ explicitly.
First we define a Hilbert space H with orthonormal basis vectors corresponding to the left cosets of H in G. That is, H is spanned by basis vectors {|H⟩, |g_1 H⟩, ...} with inner product

⟨g_i H | g_j H⟩ = 1 if g_i H = g_j H, and 0 otherwise.

Next we define the representation π : G → GL(H) to be the representation given by the left action of G on H, so π(g_1)|g_2 H⟩ = |g_1 g_2 H⟩.
As we shall see in this paper we find it much easier to study the question of whether σ ∈ H_E rather than whether σ ∈ H. The following lemma shows that these conditions are equivalent.
To see the converse direction, note that each clause h_i contains exactly one generator x^(α)_{n_iα} corresponding to each player α. Since the relations of G only ever cancel pairs of identical generators, the parity of the number of generators corresponding to each player remains fixed when applying the relations of G. Then any word in G which is equal to the product of an odd number of clauses from S contains an odd number of generators corresponding to each player α. Thus the word contains at least one generator corresponding to each player α and hence cannot equal σ.
From this, we conclude that if σ ∈ H there is a product of an even number of clauses h_{t_1} h_{t_2} ... h_{t_{2l}} ∈ H_E which equals σ, thus σ ∈ H_E as well.
2.2.2 Sufficient conditions for kXOR games to have ω*_co = 1

Theorem 2.1 and Lemma 2.2 imply that we could identify XOR games with value ω*_co = 1 by solving instances of the subgroup membership problem in the groups G or G_E. Unfortunately, the subgroup membership problem in these groups is, in general, undecidable.⁷ Instead of reasoning about this problem directly it is helpful to consider a computationally simpler subgroup membership problem obtained by modding out the group G_E by the normal subgroup K. We show this simpler problem can be solved in polynomial time.

Theorem 2.3. There is a polynomial time algorithm which decides, given a kXOR game with associated groups H_E, K < G_E, whether [σ]_K ∈ [H_E]_K.

Proof. First note that K ⊳ G_E and H_E < G_E, so the question is well defined. To show a polynomial time algorithm, note that G_E/K is an abelian group -- in fact we have modded out by exactly the commutator subgroup of G_E. The subgroup membership problem for any abelian group can be solved in polynomial time (see Theorem B.1 in Appendix B), so the result follows.
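To illustrate the kind of computation involved, here is a hedged sketch of abelian subgroup membership. We represent elements of G_E/K as integer exponent vectors over a chosen generating set, adding the relation vector (2, 0, ..., 0) to capture σ² = 1; membership then reduces to integer linear algebra (here a lightweight echelon-form elimination — the algorithm behind Theorem B.1 may differ). The example vectors encode the GHZ game analyzed in Section 2.3.2, with coordinates (σ, [x^(1)_0 x^(1)_1], [x^(2)_0 x^(2)_1], [x^(3)_0 x^(3)_1]).

```python
def in_integer_span(gens, target):
    """Decide whether `target` lies in the integer span of the vectors
    `gens`, via integer row elimination (a lightweight echelon form)."""
    n = len(target)
    rows = [list(g) for g in gens if any(g)]
    pivots = {}  # column index -> pivot row
    for col in range(n):
        # Euclidean elimination among rows with a nonzero entry in `col`.
        while True:
            nz = sorted((r for r in rows if r[col] != 0),
                        key=lambda r: abs(r[col]))
            if len(nz) <= 1:
                break
            small, other = nz[0], nz[1]
            q = other[col] // small[col]
            for j in range(n):
                other[j] -= q * small[j]
        nz = [r for r in rows if r[col] != 0]
        if nz:
            pivots[col] = nz[0]
            rows.remove(nz[0])
    # Reduce the target against the pivots, left to right.
    t = list(target)
    for col in range(n):
        if t[col] == 0:
            continue
        piv = pivots.get(col)
        if piv is None or t[col] % piv[col] != 0:
            return False
        q = t[col] // piv[col]
        for j in range(n):
            t[j] -= q * piv[j]
    return all(v == 0 for v in t)

# GHZ game: [H_E]_K is generated mod K by sigma*b*c, sigma*a*c, sigma*a*b
# in coordinates (sigma, a, b, c), plus the relation sigma^2 = 1.
gens = [(1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0), (2, 0, 0, 0)]
assert not in_integer_span(gens, (1, 0, 0, 0))  # sigma is NOT in [H_E]_K
assert in_integer_span(gens, (0, 1, -1, 0))     # but a product of clauses is
```

The elimination runs in polynomially many arithmetic steps; a production implementation would also control coefficient growth, e.g. via Hermite or Smith normal form.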
An immediate consequence of Lemma 2.2 and Theorem 2.1 is that

[σ]_K ∉ [H_E]_K ⇒ σ ∉ H_E ⇒ σ ∉ H ⇒ ω*_co = 1.

Then, Theorem 2.3 tells us that this sufficient condition for an XOR game to have ω*_co = 1 can be checked in polynomial time. In fact we can say something stronger: when this condition is met an optimal strategy can be chosen from a simple family of strategies which generalize the regular 3 qubit GHZ strategy. We introduce these strategies in Definition 2.4.

Definition 2.4. [MERP strategies]
A MERP (maximally entangled, relative phase) strategy for a kXOR game is one where the players share the k-qubit GHZ state |ψ⟩ = (1/√2)(|11...1⟩ + |00...0⟩) and, given question j, the α-th player measures the α-th qubit of the state with a strategy observable of the form

M^(α)_j = e^{i θ^(α)_j σ_z} σ_x,

where σ_x, σ_z are the Pauli X and Z matrices:

σ_x = (0 1; 1 0), σ_z = (1 0; 0 −1).

The angles θ^(α)_j parameterize the strategy. The MERP strategy observables for any choice of θ^(α)_j are valid strategy observables, that is, they are hermitian with eigenvalues ±1 and observables corresponding to different players commute.
We can now state the relationship between MERP strategies and the condition σ / ∈ H E (mod K).
Theorem 2.5. Let σ, H_E, K be defined relative to a kXOR game as described in Section 2.1.3. Then [σ]_K ∉ [H_E]_K iff the game has a perfect value MERP strategy. A description of this strategy can be found in polynomial time.
Proof. This theorem is a rephrasing of Theorem 5.30 from [35], where the condition [σ]_K ∈ [H_E]_K was referred to as existence of a PREF (parity refutation). The equivalence between the σ ∈ H_E (mod K) condition and existence of a parity refutation is elaborated on in Appendix A.3.
In Appendix A.4 we prove the theorem in one direction by showing that MERP matrices satisfy the defining relations of K. The other direction is proved by defining a system of linear diophantine equations which is solvable only when [σ]_K ∈ [H_E]_K, then showing, via a theorem of alternatives, that these equations being unsatisfiable implies a MERP strategy can achieve value 1.

⁸ In the language of Section 2.1.2, the state |ψ⟩ lives in the Hilbert space (C²)^⊗k and, given question j, player α measures a strategy observable of the form I^⊗(α−1) ⊗ M^(α)_j ⊗ I^⊗(k−α), where I is the 2 by 2 identity matrix.
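The claim that MERP observables are valid strategy observables can be checked directly. The sketch below uses the single-qubit form e^{iθσ_z}σ_x (our reading of Definition 2.4; equivalently, any observable of the form cos θ σ_x − sin θ σ_y behaves the same way):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def merp_observable(theta):
    """Single-qubit MERP-style observable: an X measurement rotated by a
    relative phase theta about the z axis, i.e. e^{i theta sigma_z} sigma_x."""
    phase = np.diag(np.exp(1j * np.array([theta, -theta])))
    return phase @ sigma_x  # = [[0, e^{i theta}], [e^{-i theta}, 0]]

M = merp_observable(0.37)  # arbitrary angle

assert np.allclose(M, M.conj().T)     # hermitian
assert np.allclose(M @ M, np.eye(2))  # squares to identity: eigenvalues +-1

# Observables of different players act on different tensor factors of the
# GHZ state's Hilbert space, so they commute automatically:
A = np.kron(M, np.eye(2))
B = np.kron(np.eye(2), merp_observable(1.2))
assert np.allclose(A @ B, B @ A)
```

So any assignment of angles gives a legal strategy; the content of Theorem 2.5 is that some assignment achieves value 1 exactly when [σ]_K ∉ [H_E]_K.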

For 3 player games the sufficient conditions are necessary
Theorem 2.5 gives a necessary and sufficient condition characterizing when an XOR game has a perfect MERP strategy. This also gives a sufficient (but not, in general, necessary) condition for a game to have ω*_co = 1.⁹ Theorem 2.6, the main mathematical engine underlying this paper, gives the surprising result that this sufficient condition is also necessary for 3XOR games.
Theorem 2.6. Let σ, H_E, K be defined relative to a 3XOR game as described in Section 2.1.3. Then σ ∈ H iff [σ]_K ∈ [H_E]_K.

The proof of this result is purely algebraic, but involved. We give the full proof in Section 3. We now state the main result of the paper, which follows as a consequence of Theorems 2.1, 2.3, 2.5 and 2.6 and Lemma 2.2.
Theorem 2.7. A 3XOR game has value ω*_co = 1 iff it has a perfect value MERP strategy, implying ω*_co = ω*_tp = 1. Additionally, there exists a polynomial time algorithm which decides if a 3XOR game has value ω*_co = 1, and outputs a description of the perfect value MERP strategy if one exists.

Proof. By Theorem 2.1, a 3XOR game has ω*_co = 1 iff σ ∉ H in the associated group. By Theorem 2.6, this is equivalent to the statement [σ]_K ∉ [H_E]_K. By Theorem 2.5 this implies existence of a perfect value MERP strategy, and the first part of the result follows.
To get the polynomial time algorithm, we just need to check whether [σ]_K ∈ [H_E]_K, which we can do in polynomial time by Theorem 2.3. If it does not hold, there exists a perfect value MERP strategy and we can find it by Theorem 2.5. If it does hold, the same chain of implications as above shows ω*_co < 1.
For k > 3 players, our arguments break down because we have no analog of Theorem 2.6. Indeed it remains open whether there is any finite time algorithm for identifying perfect k-player XOR games when k > 3. Some speculation about possible k-player analogues of Theorem 2.6 is provided in Appendix A.5.

Bounds on the bias ratio
Combining Theorem 2.7 with a result from [28] gives the following.
Corollary 2.8. Let G be a 3XOR game with ω*_co(G) = 1. Then β(G) ≥ 1/(4K^R_G), where K^R_G is the real Grothendieck constant.

Proof. By Theorem 2.7, a 3XOR game G with ω*_co = 1 must also have a perfect value MERP strategy. This strategy uses a GHZ state for the players, and a bound from [28] gives that

β*_GHZ(G) ≤ 4K^R_G · β(G),

where β*_GHZ is the maximum bias achieved with a strategy using a GHZ state. But then

1 = β*_GHZ(G) ≤ 4K^R_G · β(G)

and the result follows.

Examples
In this subsection we re-analyze some well known XOR games using the techniques developed in this paper.

The CHSH Game
The first game we analyze is the CHSH game, introduced in [7]. This is a two question, two player XOR game. Following convention, questions sent to the players are indicated with labels in {0, 1}. The CHSH game tests a system of 4 equations:

X^(1)_0 + X^(2)_0 = 0, X^(1)_0 + X^(2)_1 = 0, X^(1)_1 + X^(2)_0 = 0, X^(1)_1 + X^(2)_1 = 1.

Following the procedure as outlined in Section 2.1.3 (Equation (2.9)) we see the clause group H_CHSH associated with this game is generated by the clauses

h_1 = x^(1)_0 x^(2)_0, h_2 = x^(1)_0 x^(2)_1, h_3 = x^(1)_1 x^(2)_0, h_4 = σ x^(1)_1 x^(2)_1.

We can multiply these clauses together and then simplify using the relations of the game group G (generators of different players commute, and each generator squares to the identity) to show

h_1 h_2 h_4 h_3 = σ (x^(1)_0 x^(1)_0 x^(1)_1 x^(1)_1)(x^(2)_0 x^(2)_1 x^(2)_1 x^(2)_0) = σ,

so σ ∈ H_CHSH. We conclude the CHSH game does not have a perfect commuting operator strategy by Theorem 2.1.
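The group-theoretic refutation mirrors the elementary observation that the four CHSH equations are inconsistent mod 2: summing all four, every variable appears exactly twice while the right-hand sides sum to 1. A short check (with variables named x0, x1, y0, y1, our own convention):

```python
# The four CHSH clauses: (set of variables on the left, parity on the right).
clauses = [({"x0", "y0"}, 0),
           ({"x0", "y1"}, 0),
           ({"x1", "y0"}, 0),
           ({"x1", "y1"}, 1)]

# Sum all four equations mod 2. Symmetric difference of sets is exactly
# mod-2 addition of the indicator vectors of the left-hand sides.
lhs = set()
rhs = 0
for vars_, s in clauses:
    lhs ^= vars_
    rhs ^= s

assert lhs == set() and rhs == 1  # derives 0 = 1: no classical solution
```

This is the classical shadow of the identity h_1 h_2 h_4 h_3 = σ: the generators cancel in pairs while the σ's multiply to σ.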

The GHZ Game
Next, we analyze the GHZ game, introduced in [17]. This is a 3 player game testing a system of equations, which in ±1-valued variables and the standard question set take the form

x^{(1)}_0 x^{(2)}_0 x^{(3)}_0 = 1,  x^{(1)}_0 x^{(2)}_1 x^{(3)}_1 = -1,  x^{(1)}_1 x^{(2)}_0 x^{(3)}_1 = -1,  x^{(1)}_1 x^{(2)}_1 x^{(3)}_0 = -1.

Thus the associated clause group H_GHZ is generated by the clauses

h_1 = x^{(1)}_0 x^{(2)}_0 x^{(3)}_0,  h_2 = x^{(1)}_0 x^{(2)}_1 x^{(3)}_1 σ,  h_3 = x^{(1)}_1 x^{(2)}_0 x^{(3)}_1 σ,  h_4 = x^{(1)}_1 x^{(2)}_1 x^{(3)}_0 σ.

The GHZ game has a perfect MERP strategy. Here, we reprove this result using the techniques developed in the paper. The first step is to construct the even clause group H^E_GHZ, which is generated by the pairs of clauses h_i h_j (Equation (2.42)) (and, by definition, their inverses). Simplifying these using the relations of the game group G gives a generating set in which bracketed terms now indicate generators of G^E. Working mod K all the bracketed terms commute with each other,^10 so now straightforward linear algebra can be used to show that σ ∉ H^E_GHZ (mod K). Then we see that the GHZ game has a perfect MERP strategy by Theorem 2.5.

While we didn't use it in either of these examples, Theorem 2.6 tells us that the techniques used above to analyze the GHZ game can be used to analyze any 3 player game. In particular, analyzing any 3-player game G with even clause group H^E_G, we will either find that σ ∉ H^E_G (mod K), in which case the game (like the GHZ game) has a perfect MERP strategy, or that σ ∈ H^E_G (mod K), in which case σ ∈ H_G by Theorem 2.6 and so the game has no perfect commuting operator strategy of any kind.
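The conclusion can be checked numerically using the well-known GHZ strategy (a verification sketch of ours; this is the textbook X/Y presentation of the strategy, not necessarily the MERP presentation used in the paper): each player measures Pauli X on question 0 and Pauli Y on question 1, and on the GHZ state these observables satisfy all four clauses exactly.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# GHZ state (|000> + |111>)/sqrt(2) as a vector in C^8.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def observable(questions):
    """Tensor product of X (on question 0) or Y (on question 1), one factor per player."""
    ops = [X if q == 0 else Y for q in questions]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# Question vectors and target eigenvalues: XXX -> +1; XYY, YXY, YYX -> -1.
for questions, target in [((0, 0, 0), +1), ((0, 1, 1), -1), ((1, 0, 1), -1), ((1, 1, 0), -1)]:
    val = np.vdot(ghz, observable(questions) @ ghz).real
    assert np.isclose(val, target)
print("all four GHZ clauses satisfied exactly")
```

Each assertion checks that the GHZ state is a ±1 eigenvector of the corresponding product observable, which is exactly the statement that the strategy wins that clause with certainty.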

Acknowledgements
The authors thank Igor Klep, Aram Harrow, Gurtej Kanwar, Anand Natarajan, William Slofstra, and Jop Briet for discussions and helpful comments. They also thank Zinan Hu and Zehong Zhao for providing numerical examples valuable to our understanding. They heartily thank both an anonymous referee and Taro Spirig for careful reading and helpful comments.

^10 A careful reader might notice that all the bracketed terms actually commute with each other even before modding out by the subgroup K. This is a consequence of the fact that the GHZ game is a two question game, but doesn't hold in general. Elaborating on this observation, it is possible to show that a two question XOR game with any number of players has a perfect commuting operator strategy iff it has a perfect MERP strategy, giving a special case of the result shown in [37].

Technical Details
This section begins with definitions, then compares the algebraic structure defined in this paper to the one introduced in [8], then proves Theorem 2.6.

Background and Definitions
We briefly recap the definitions given in Section 2.1, then give some additional notation that will be useful in this section. In everything that follows [·, ·] denotes the group commutator, so [x, y] = xyx^{-1}y^{-1}.

Recap
We consider a 3XOR game with questions drawn from the alphabet [N]. The game has m question vectors labeled (a_i, b_i, c_i) for i ∈ [m]. There are several algebraic objects associated with the game. The first is the game group G, defined to be the group generated by the elements x^{(α)}_i for α ∈ {1, 2, 3} and i ∈ [N], together with the element σ. The generators x^{(α)}_i correspond to the observables measured by player α upon receiving question number i. The group element σ should be thought of as a formal variable corresponding to −1 in the group. Note σ has order two (σ² = 1) and commutes with all elements of the group ([σ, w] = 1 for any w ∈ G).
For all i ∈ [m] we define the associated clause h_i = x^{(1)}_{a_i} x^{(2)}_{b_i} x^{(3)}_{c_i} σ^{s_i}. The clause set S = {h_i}_{i ∈ [m]} contains all clauses of the game. The clause group H = ⟨S⟩ is the subgroup of G generated by the clauses. The even game group G^E is the subgroup of G consisting of words with an even number of generators corresponding to each player, and possibly the element σ. The even clause group H^E is the subgroup of G generated by products of an even number of clauses. Finally, K is the commutator subgroup of G^E, defined to be the normal closure of the set of commutators of the generators of G^E. In math: K = ⟨[g, g′] : g, g′ generators of G^E⟩^{G^E}, where X^Y denotes the normal closure of the set X in the group Y.

Projections and Clause Graphs
It will be helpful to have notation for referring to just the observables associated with a single player.
To this end, define player subgroups G_α ≤ G and G^E_α ≤ G^E for all α ∈ {1, 2, 3}, where G_α is the subgroup generated by the elements x^{(α)}_i. One advantage to working with the player subgroups G_α and G^E_α is that they have simple group presentations. We give these presentations in the following lemmas.
Lemma 3.1. The group G_α is presented by the generators given in Equation (3.8) and the relations given in Equation (3.9).

Proof. Since G_α was defined to be the subgroup of G generated by the elements x^{(α)}_i, it is clear that any element in G_α is a word in the generators presented above.
Because the relations given above are clearly true in the group G_α, all that remains to show is that the relations given in Equation (3.9) can transform two words into one another if they are equal in the group G_α. To prove this, we first say a word consisting of x^{(α)}_i generators is in fully reduced form iff no inverses appear in the word and no two generators with the same value of i are adjacent in the word.
Any word made up of the generators given in Equation (3.8) can be put in fully reduced form by repeated application of the relations given in Equation (3.9) (first by replacing each inverse (x^{(α)}_i)^{-1} with x^{(α)}_i, and then by deleting any two adjacent instances of the generator x^{(α)}_i). Furthermore, it is clear that two words in G_α made up of x^{(α)}_i generators are equal iff their fully reduced forms are equal (i.e. fully reduced forms are canonical forms for words in G_α). This shows that any two words in G_α made up of x^{(α)}_i generators can be transformed into each other via the relations given in Equation (3.9) iff their canonical forms are equal. The claim follows.
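The canonical-form argument can be made concrete. Representing a word in G_α as a list of indices i (each x^{(α)}_i is its own inverse, so inverses need no separate symbol) together with a central σ-parity bit, repeated cancellation of adjacent equal indices computes the fully reduced form (a sketch; the list-of-indices encoding is ours, not the paper's).

```python
def fully_reduce(word, sigma_parity=0):
    """Compute the fully reduced form of a word in G_alpha.

    `word` is a list of generator indices i (each generator squares to 1);
    sigma is central and of order two, so it is tracked as a parity bit.
    A stack implements repeated cancellation of adjacent equal generators.
    """
    stack = []
    for i in word:
        if stack and stack[-1] == i:
            stack.pop()  # x_i x_i = 1
        else:
            stack.append(i)
    return stack, sigma_parity % 2

# Two words are equal in G_alpha iff their fully reduced forms agree.
print(fully_reduce([1, 2, 2, 1, 3]))  # ([3], 0)
```

The stack-based scan runs in linear time, reflecting that word equality in these free products of Z_2 factors is easy; the difficulty in the paper lies in subgroup membership, not word reduction.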
Lemma 3.2. The group G^E_α is presented by the generators given in Equation (3.11) and relations (1) and (2).

Proof. Similarly to the proof of Lemma 3.1, it is immediate that the generators given in Equation (3.11) generate the group G^E_α. Also similarly to the proof of Lemma 3.1, we show that the relations given above can transform two words constructed from the generating set given in Equation (3.11) into one another if they are equal in the group G^E_α, by showing the relations can put words in fully reduced form. To see this, first notice we can remove inverses using relation (2), and then remove any adjacent x^{(α)}_i elements using relation (1). The proof follows.
Because observables corresponding to different players commute, we can write any w ∈ G as w = σ^{s_w} w_1 w_2 w_3, where w_α ∈ G_α for all α ∈ {1, 2, 3} and s_w ∈ {0, 1}. Similarly, any w′ ∈ G^E can be written as w′ = σ^{s′_w} w′_1 w′_2 w′_3 with w′_α ∈ G^E_α and s′_w ∈ {0, 1}.

For any α ∈ {1, 2, 3} we also define the projector onto player subgroups ϕ_α : G → G_α by defining its action on the generators of G: ϕ_α(x^{(β)}_i) = x^{(β)}_i if β = α, ϕ_α(x^{(β)}_i) = 1 if β ≠ α, and ϕ_α(σ) = 1, then extending ϕ_α to a homomorphism on G. To see this defines a valid homomorphism note that it preserves the group relations, with a similarly simple argument showing commutation relations are preserved. It is also helpful to define a projection ϕ_σ which acts on the generators of G as ϕ_σ(x^{(α)}_i) = 1 and ϕ_σ(σ) = σ. Combining Equation (3.13) with the definition of ϕ_α gives the equation w = ϕ_σ(w) ϕ_1(w) ϕ_2(w) ϕ_3(w) for any w ∈ G.

Next, we define the clause (hyper)graph^11 G_123, which gives a useful way of visualizing the clause structure of a game. The graph has 3N vertices which we identify with the generators x^{(α)}_i of the group G. We label the vertices by the corresponding generator. Hyperedges in the graph correspond to clauses, with a hyperedge going through the vertices x^{(1)}_{a_i}, x^{(2)}_{b_i}, and x^{(3)}_{c_i} for each clause h_i. Note that the existence of the hyperedge is independent of the value of s_i, so the clause graph contains no information about the parity bits. Because edges in the hypergraph correspond to clauses h ∈ S, we can identify any sequence of edges in G_123 with a word w ∈ H. We will use this relationship frequently in the future.
We also define important subgraphs of G_123 by taking the induced graphs on vertices corresponding to a subset of players.^12 For any α ≠ β ∈ {1, 2, 3} we define the multigraph G_αβ to be the subgraph of G_123 induced by the vertices corresponding to generators of G_α and G_β. See Figure 2 for an example. As with the graph G_123, edges in the graph G_αβ can be identified with clauses in H and sequences of edges in G_αβ can be identified with words w ∈ H.
In Section 3.3 we show that we can restrict our attention to the case where G 123 is a connected graph. The induced graph G αβ can be disconnected, and the different connected components of this graph (and representative elements from each) play an important role in the proof in Section 3.4.

Defining Homomorphisms via Group Presentations
We now recap a standard algebraic result which we shall use frequently when making arguments involving the groups G_α and G^E_α. In the following lemma we describe a group as being presented by a set of generators S and relations R. It is understood that these relations correspond to the set of equations {r = 1} for all r ∈ R.

Lemma 3.3. Let G be a group presented by generators S and relations R, let H be a group, and let f : S → H be some function mapping generators of G to elements in the group H. Then f can be extended to a homomorphism f : G → H which acts on inverses as f(s^{-1}) = f(s)^{-1} and on words s_1 s_2 ... s_l ∈ G as f(s_1 s_2 ... s_l) = f(s_1) f(s_2) ... f(s_l) iff f(r) = 1 for all r ∈ R.

Proof. The only if direction is clear: since r = 1 in G, any f with f(r) ≠ 1 can't extend to a homomorphism.
Figure 1: Sample hypergraph G_123 for a game with alphabet size N = 6 and 11 clauses. The hypergraph is generated by the clause set shown (σ terms omitted since they don't affect the graph).

To prove the if direction, we first show that f is well defined. To see this, note that any two words s_1 s_2 ... s_l and t_1 t_2 ... t_k made up of elements from the generating set S are equal in G iff one can be written as the other multiplied by conjugated relators w_i r_i^{±1} w_i^{-1}, where the words w_i ∈ G are arbitrary, each r_i is in R, and equality in the equation holds as words (that is, the only things that need to be cancelled are elements adjacent to their own inverses). Applying f sends each conjugated relator to the identity, and it is clear the function f is well defined. From here it is clear that f is a homomorphism, since for any words w_1 and w_2 we have f(w_1 w_2) = f(w_1) f(w_2), and we are done.
In practice, given a function f mapping the generators of some group G into a group H and satisfying the conditions of Lemma 3.3, we will refer to the homomorphism f : G → H constructed using the above procedure as the homomorphism constructed by "extending f in the natural way", or with similar language.
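As a concrete toy instance of Lemma 3.3 (an illustration of ours, not from the paper): the group ⟨a, b | a² = b² = 1⟩ is presented by two generators and two relations. A map of the generators into 2×2 matrices extends to a homomorphism "in the natural way" exactly when each relator maps to the identity.

```python
import numpy as np

# Candidate images for the generators a, b of <a, b | a^2 = b^2 = 1>.
# Both matrices are reflections, so both square to the identity.
f = {
    'a': np.array([[0, 1], [1, 0]]),
    'b': np.array([[1, 0], [0, -1]]),
}

def extend(word):
    """Extend f to a word in the generators, per Lemma 3.3:
    the image of a word is the product of the images of its letters."""
    out = np.eye(2, dtype=int)
    for s in word:
        out = out @ f[s]
    return out

# The extension is well defined iff every relator maps to the identity.
relators = ['aa', 'bb']
assert all(np.array_equal(extend(r), np.eye(2, dtype=int)) for r in relators)

print(extend('abab'))  # the image of (ab)^2, a rotation by 180 degrees
```

Had either candidate image failed to square to the identity, the "only if" direction of the lemma says no such extension exists; checking relators is the entire well-definedness obligation.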

Comparison with Linear Systems Games
A reader familiar with the work of Cleve, Liu and Slofstra concerning linear systems games [8] may notice a similarity between the solution group defined in that paper and the clause group defined in this work. In this section we give a direct comparison between the two. Our goal in doing this is not to provide any deep insights -we simply hope a direct comparison will help a reader already familiar with linear systems games to better understand our work. We do not define linear systems games here, and point readers to [8] for a formal introduction to them. This section is not critical and a reader can safely skip it without impacting their understanding of the rest of this paper.
Following [8], we consider a binary linear system of m equations on n variables, Mx = b, with M ∈ Z_2^{m×n} and b ∈ Z_2^m. M_ij specifies an individual entry in the matrix M, and b_i specifies an entry from the vector b. The solution group of the binary linear system is a group with generators g_1, g_2, ..., g_n, J and relations:

1. g_i² = J² = 1 for all i ∈ [n];
2. [g_i, J] = 1 for all i ∈ [n];
3. [g_i, g_j] = 1 whenever M_{ki} = M_{kj} = 1 for some k ∈ [m] (that is, whenever variables i and j appear in the same equation);
4. ∏_{j : M_{ij} = 1} g_j = J^{b_i} for all i ∈ [m].

In [8] the authors showed the following result:

Theorem 3.4 (Implied by Theorem 4 of [8], paraphrased). The linear system game associated to the system of equations Mx = b has a perfect value commuting operator strategy iff in the associated solution group we have J ≠ 1.
Theorem 2.1 can be thought of as an analog of Theorem 3.4 for 3XOR games. We can restate Theorem 3.4 in a way that makes the comparison even more apparent.
Given a system of equations Mx = b, define the group G_lsg to be the group with generators g_1, g_2, ..., g_n, J and relations 1-3 above. Note that J ≠ 1 in this group. Next, define the subgroup H_lsg ⊴ G_lsg to be the normal closure in G_lsg of the words corresponding to equations in the system Mx = b (that is, the words involved in relation 4 above), so

H_lsg = ⟨{J^{b_i} ∏_{j : M_{ij} = 1} g_j : i ∈ [m]}⟩^{G_lsg}. (3.28)

Using these definitions, an equivalent statement of Theorem 3.4 is:

Theorem 3.5. The linear system game associated to Mx = b has a perfect value commuting operator strategy iff J ∉ H_lsg.

We can compare the above theorem and Theorem 2.1 directly. We list, and briefly discuss, the key differences:

i) The group G contains an element for every player-question combination, while G_lsg only contains an element for every question. In a commuting operator (or tensor product) strategy for an XOR game, different players can measure completely different observables when sent the same question, and so we need a different group element to correspond to each player-question combination.^13 Conversely, in linear systems games there is a close relationship between Alice's and Bob's measurements given the same question, and both players' measurement operators can be constructed from representations (right and left actions) of the same group elements.
ii) Generators of G_lsg commute with each other if they appear in the same equation (relation 3 above). Generators of G satisfy no such relation. This difference reflects a difference between linear system games and XOR game strategies. In a linear systems game a single player must make simultaneous measurements of all the operators corresponding to a question in the game. This never happens in XOR games. From an algebraic point of view, these extra relations place a restriction on elements of G_lsg that is not placed on elements of G.
iii) The group H_lsg is a normal subgroup of G_lsg, while H is not a normal subgroup of G. This has an algebraic consequence: asking if J ∈ H_lsg is an instance of the word problem (mod out by the generators of H_lsg, then ask if J equals the identity), while asking if σ ∈ H is an instance of the subgroup membership problem. The word problem is in a sense "easier" than the subgroup membership problem: there are groups with solvable word problem but undecidable subgroup membership problem [25]. Still, both problems are undecidable in general. This difference also has consequences for game strategies. In a linear systems game, an identity of the form given by relation 4 holds in the group, hence holds as an operator identity on the strategy observables as well. In an XOR game, the operator identities codified in H need only hold acting on the state |ψ⟩, and there are games (for example, the GHZ game) where products of strategy observables act as the identity on |ψ⟩, but the operators themselves do not multiply to the identity.
We should also point out that a linear systems game can be defined for any system of equations of the form Mx = b, while XOR games require equations of a special form: exactly one variable corresponding to each player is involved in each equation. It is possible to define a slightly more general form of kXOR games, in which a subset of players (as opposed to all players) is queried on each question, but those are not considered here.
Theorem 2.7, in combination with [32] shows that there cannot exist a mapping which is computable in finite time and transforms linear systems games into XOR games while preserving the commuting operator value of the game. (Or else this mapping, in combination with Theorem 2.7, would give a finite time algorithm for deciding whether or not a linear systems game has perfect commuting-operator value. This is impossible by [32].) The question of finding a natural map in the other direction remains open.

Connectivity of the Clause Graph
In Section 3.1.2 we introduced the clause graph G_123, a graphical representation of the clause structure of a 3XOR game. In this section we consider 3XOR games whose associated clause graph is not connected. Given such a game we can always define smaller games, each involving only the clauses corresponding to a single connected component of the clause graph. Here, we show a 3XOR game has ω*_co = 1 iff each of these smaller games has a perfect commuting operator strategy.

This result is easy to prove from a strategies point of view. Recall that a clause h_i = x^{(1)}_{a_i} x^{(2)}_{b_i} x^{(3)}_{c_i} σ^{s_i} corresponds to a question vector (a_i, b_i, c_i) that could be sent to the players in a round of the game. If a game has a disconnected clause graph G_123, players will never be sent a question vector asking them to make measurements from different connected components of the graph. Thus, players can consider the measurements in each connected component of G_123 independently when coming up with a strategy for the game. If they come up with strategies that win for each connected component of clauses, they can always combine them (given a question, a player follows the strategy corresponding to the connected component that question came from) to create a strategy that wins the larger game.
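The strategy-combination argument suggests a simple preprocessing step: compute the connected components of the clause hypergraph and split the clause set accordingly. The sketch below (ours; clauses are encoded as question-vector triples (a_i, b_i, c_i), with vertices tagged by player) groups clauses by component using union-find.

```python
from collections import defaultdict

def split_by_component(clauses):
    """Group clauses (a_i, b_i, c_i) by connected component of the clause
    hypergraph G_123, whose vertices are (player, question) pairs."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(u, v):
        parent[find(u)] = find(v)

    # Each hyperedge joins the three vertices of its clause.
    for a, b, c in clauses:
        union((1, a), (2, b))
        union((2, b), (3, c))

    groups = defaultdict(list)
    for clause in clauses:
        a, _, _ = clause
        groups[find((1, a))].append(clause)
    return list(groups.values())

# Questions {0, 1} never mix with question 5, giving two components.
print(split_by_component([(0, 0, 0), (0, 1, 1), (5, 5, 5)]))
```

Each returned group is the clause set of one of the "smaller games" described above, and the union-find pass runs in near-linear time in the number of clauses.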
Below, we prove the result using algebraic techniques. The proof is considerably less natural in this setting, but provides a useful exercise in proving results about XOR games using the groups formalism.
Theorem 3.6. Let G be a 3XOR game with clause set S, clause group H, and clause graph G_123. Then σ ∈ H iff there exists a subset of clauses S′ ⊆ S corresponding to all the edges in a connected component of G_123 with σ ∈ ⟨S′⟩.
Proof. First note that if the clause graph G_123 is connected Theorem 3.6 is trivial, since the only subset of S corresponding to a connected component of G_123 is S itself. Also note that one direction of the above claim is immediate from the observation that ⟨S′⟩ ≤ ⟨S⟩, and so σ ∈ ⟨S′⟩ ⟹ σ ∈ ⟨S⟩ = H.
To deal with the converse direction, consider a game G with clause group H ∋ σ and a disconnected clause graph G_123. Let S_1, S_2, ..., S_l be subsets of S corresponding to all the edges in the connected components of the clause graph; note that the sets S_1, ..., S_l partition S. For all i ∈ [l], define a map ρ_i which acts on the generators of H by sending h_j to itself if h_j ∈ S_i and to the identity otherwise.^14 We have by assumption that σ ∈ H, so there exists a sequence of clauses with h_{r_1} h_{r_2} ... h_{r_t} = σ. We prove two claims:

Claim 1: ρ_i(h_{r_1}) ρ_i(h_{r_2}) ... ρ_i(h_{r_t}) ∈ ⟨S_i⟩ for every i ∈ [l].

Claim 2: there exists an i′ ∈ [l] for which ϕ_σ(ρ_{i′}(h_{r_1}) ρ_{i′}(h_{r_2}) ... ρ_{i′}(h_{r_t})) = σ.

To prove the first, define the set V_i to consist of all generators x^{(α)}_j corresponding to vertices in the same connected component of G_123 as the edges in S_i, and define V_i^α = V_i ∩ G_α to be the subset of generators in V_i corresponding to player α. Finally, we define the homomorphism π_i : G → G by its action on the generators of G: generators in V_i are sent to themselves, and all other generators to the identity. Routine calculation shows that π_i preserves the relations of G, and thus is a valid homomorphism. Now, to prove Claim 1 we compare the action of π_i on the word h_{r_1} h_{r_2} ... h_{r_t} with the action of ρ_i. The second equality in that comparison follows because we assumed h_{r_1} h_{r_2} ... h_{r_t} = σ, and the third equality holds by definition of ϕ_α. All that remains to show is the first, but this is straightforward: clauses h_{r_j} ∈ S_i are fixed by both maps, while clauses h_{r_j} ∉ S_i contain no generators in V_i.

To prove Claim 2, assume for the sake of contradiction that no such i′ exists. Here we use the fact that σ commutes with all elements of G to reorder elements, and then apply the assumption made for the sake of contradiction. But, by our assumption at the start of this proof, we also have ϕ_σ(h_{r_1} h_{r_2} ... h_{r_t}) = σ. The contradiction proves Claim 2.

Finally, to complete the proof we note that, by Equation (3.18), Claim 1, and Claim 2, σ ∈ ⟨S_{i′}⟩. Thus the claim holds with S′ = S_{i′}.
To prove the strongest form of Theorem 2.6, we also need a version of Theorem 3.6 that applies to words σ ∈ H^E (mod K). We give that theorem next. The proof is very similar to the proof of Theorem 3.6, with a few more technical details.^15

Theorem 3.7. Let G be a 3XOR game with clause set S, clause group H, and clause graph G_123. For any subset of clauses S′ ⊆ S, define H_{S′} = ⟨S′⟩ to be the clause group generated by just the clauses in S′, and define H^E_{S′} analogously. Then σ ∈ H^E (mod K) iff there exists a subset of clauses S′ ⊆ S corresponding to all the edges in a connected component of G_123 with σ ∈ H^E_{S′} (mod K).
Proof. As with the proof of Theorem 3.6, the case where the clause graph G_123 is connected is trivial, and one direction of the iff is immediate. To deal with the remaining case, let G be an XOR game with disconnected clause graph G_123 and σ ∈ H^E (mod K). Let S_1, S_2, ..., S_l be subsets of S corresponding to all edges in the connected components of the clause graph. For each S_i, we pick some representative clause ĥ_i ∈ S_i. Then, define a map ρ̃_i which acts on the generators of H analogously to ρ_i, but with the representative clause ĥ_i playing the role of the identity. Note that for any generator h_j h_{j′} of the even clause group H^E, the image ρ̃_i(h_j h_{j′}) is again a product of an even number of clauses, so ρ̃_i maps H^E into H^E. As in the proof of Theorem 3.6, define the subset of generators V_i to be the x^{(α)}_j corresponding to vertices in the same connected component as the edges in S_i. Then define the projector π̃_i which acts on the generators of G by sending generators in V_i to themselves and all other generators to the identity. An important observation is that π̃_i maps commutators of even pairs of generators to commutators of even pairs of generators (or the identity), so π̃_i(K) ⊆ K.

We again prove analogues of Claims 1 and 2, with Claim 2 now stating: there exists an i′ ∈ [l] for which ϕ_σ(ρ̃_{i′}(h_{r_1}) ρ̃_{i′}(h_{r_2}) ... ρ̃_{i′}(h_{r_t})) = σ.

The proof of the first equality in Claim 1 follows identically to the proof of Claim 1 in Theorem 3.6. The second equality holds because π̃_i(K) ⊆ K.
Proving Claim 2 requires a little more work. The complicating issue is that we can encounter a case where ϕ_σ(ρ̃_i(h_j)) = σ even if h_j ∉ S_i. Thus the corresponding equation might not hold, and we can't simply copy the proof of Claim 2 in Theorem 3.6. However, copying the proof of Claim 2 does give us that there exists an i′ ∈ [l] for which ϕ_σ(ρ_{i′}(h_{r_1}) ρ_{i′}(h_{r_2}) ... ρ_{i′}(h_{r_t})) = σ; that is, the claim holds when the map ρ̃_{i′} is replaced by the map ρ_{i′} defined in the proof of Theorem 3.6. Let n_{i′} be the number of clauses in the sequence h_{r_1} h_{r_2} ... h_{r_t} not contained in S_{i′}. We claim n_{i′} is even. To see this, note that any word w ∈ K contains each generator x^{(α)}_j an even number of times, so the number of player-1 generators appearing in h_{r_1} h_{r_2} ... h_{r_t} and lying outside V_{i′} must be even (the choice of player 1 here is arbitrary; all that matters is that we fix a player). But this count is equal to n_{i′} mod 2, and we conclude n_{i′} is even. Finally, we note that the words ρ̃_{i′}(h_{r_1}) ... ρ̃_{i′}(h_{r_t}) and ρ_{i′}(h_{r_1}) ... ρ_{i′}(h_{r_t}) have the same image under ϕ_σ, since n_{i′} is even and σ has order two. Combining Claims 1 and 2 with Equation (3.18) gives the theorem.

To close this section we observe that Theorem 3.7 implies that we can prove Theorem 2.6 for all 3XOR games by proving it in the special case of games whose clause graph G_123 is connected. To see why, consider a 3XOR game G with clause set S, a disconnected clause graph, and σ ∈ H^E (mod K). Theorem 3.7 says that we can find a connected subset of clauses S′ ⊂ S with σ ∈ H^E_{S′} (mod K). Then, we restrict to the 3XOR game G′ defined only on these clauses and note it has a connected clause graph. Theorem 2.6 then gives σ ∈ H_{S′} = ⟨S′⟩, which implies σ ∈ H for the original game G as well. For this reason, we assume the clause graph G_123 is connected in Section 3.4.

Proof of Theorem 2.6
The proof is involved, and we will build up to it slowly over the course of many lemmas. First, we recap the theorem and give an outline of the first stages of the proof. Note that notation, particularly the w, w′ and w̃, in this outline is simplified, and does not match the notation used in the remainder of this section.

Theorem 2.6 (Repeated). Let σ, H^E, K be defined relative to a 3XOR game as described in Section 2.1.3 and define [σ]_K, [H^E]_K as in Section 2.1.3. Then

σ ∈ H^E ⟺ [σ]_K ∈ [H^E]_K. (3.51)

Proof Outline (Part 1) of Theorem 2.6. The forwards direction is immediate from the discussion in Section 2.2.2. The backwards direction takes work. Our starting point is the observation that [σ]_K ∈ [H^E]_K iff there exists some h ∈ H^E satisfying h = σw, with w ∈ K. Our goal, given such an h, is to show that σ ∈ H^E. To do this we modify the word h by right multiplying by words in H^E until we have removed the w portion, producing the word σ ∈ H^E. We refer to this process as "clearing" the word w from the word h. To begin, we break w into three words: since G_1, G_2 and G_3 group elements all commute with each other we can separate them out and write w = w_1 w_2 w_3 with each w_α ∈ G^E_α ∩ K. Then we clear the word w one w_α at a time.
In Section 3.4.1 we show how to clear the w_1 part of the word w. To do this we define a homomorphism ϕ*_1 which maps any word v_1 ∈ G_1 to a word h ∈ H with the G_1 portion of the word h equal to v_1. Applying this homomorphism to w_1 and right multiplying h by the inverse of the result produces a word σw′ ∈ H^E. Importantly, w′ contains no terms in the G_1 subgroup; that is, we have successfully cleared the G_1 portion of the word w.
Our next step is to right multiply by a word which will clear the w ′ 2 term, while not introducing any new terms in the G 1 subgroup. We do this by constructing another homomorphism ϕ * 2,1 , which takes a word v 2 in G E 2 and produces a word in H E which equals v 2 in the G 2 subgroup and projects to the identity in the G 1 subgroup whenever possible. Details are given in Section 3.4.2.
Section 3.4.3 performs the process of removing the w_1 and w_2 words from h. The final result is a word σw′′_3 ∈ H^E, where w′′_3 ∈ G^E_3 ∩ K. Finally, we want to clear the word w′′_3 without introducing any new words in the G_1 or G_2 subgroups. Unlike previous sections, we do not do this by constructing a homomorphism. Instead, in Sections 3.4.4 and 3.4.5 we construct a series of gadget words designed to make a word easier to clear. Then, in Section 3.4.6 we right multiply the word w′′_3 by the gadget words, and clear the word with the gadgets introduced. This procedure is elaborated on in Part 2 of this proof outline, in Section 3.4.4.
We now begin the proof in earnest.

Projectors and a simple right inverse.
We start with some useful notation. Recall the projector ϕ_α : G → G_α onto group elements corresponding to player α defined in Section 3.1.2. It is a homomorphism, defined by

ϕ_α(x^{(β)}_i) = x^{(β)}_i if β = α, ϕ_α(x^{(β)}_i) = 1 if β ≠ α, and ϕ_α(σ) = 1. (3.54)

We also defined a projector onto the σ subgroup, ϕ_σ : G → {σ, 1}, which satisfies ϕ_σ(x^{(α)}_i) = 1 and ϕ_σ(σ) = σ. Because the map ϕ_α is many to one, there are many choices of right inverse; in the course of the paper we will define several. We use the notation ϕ*, with various subscripts, when referring to right inverses of ϕ. We first define the simple right inverse ϕ*_α : G_α → H, which maps each x^{(α)}_i to a single clause in S. For ease of notation, we give the definition when α = 1. ϕ*_1 is a homomorphism which acts on the generators of G_1 by sending x^{(1)}_i to a clause of the form x^{(1)}_i x^{(2)}_k x^{(3)}_{k′} σ^l. Such a clause must exist in S, or else the question x^{(1)}_i is never asked and the group element x^{(1)}_i can be removed from the game group (this can be viewed as a special case of the proof given in Section 3.3 that we can assume the clause graph is connected). If there are multiple clauses which contain the element x^{(1)}_i, we pick one arbitrarily. To verify ϕ*_1 is indeed a homomorphism, we can check that it respects the relations of G_1. ϕ*_α for general α is defined similarly.
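The projectors ϕ_α and ϕ_σ can be implemented directly on words. In the sketch below (our encoding, not the paper's), a word in G is a list whose entries are either generator symbols ('x', α, i) or the string 'sigma'; ϕ_α keeps player α's generators and kills everything else, while ϕ_σ records the parity of σ.

```python
def phi(alpha, word):
    """Project a word in G onto the player-alpha subgroup G_alpha:
    keep x^{(alpha)}_i generators; send other generators and sigma to 1."""
    return [s for s in word if s != 'sigma' and s[1] == alpha]

def phi_sigma(word):
    """Project onto {1, sigma}: since sigma is central of order two,
    only the parity of its occurrences survives."""
    return 'sigma' if word.count('sigma') % 2 else '1'

w = [('x', 1, 2), ('x', 2, 2), 'sigma', ('x', 1, 3)]
print(phi(1, w))     # [('x', 1, 2), ('x', 1, 3)]
print(phi_sigma(w))  # 'sigma'
```

Both maps visibly preserve the relations of G on this encoding: deleting symbols commutes with concatenation, which is the homomorphism property the text verifies algebraically.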

Identity preserving right inverse.
The next right inverse we define, ϕ*_{α,β}, acts as a right inverse to ϕ_α while also producing a word h ∈ H satisfying ϕ_β(h) = 1 whenever such a mapping is possible. In order to define ϕ*_{α,β} as a homomorphism, we restrict its action to the subgroup of even length words G^E_α. Now we give a "trick" we will use repeatedly to construct homomorphisms on the even subgroups.

Lemma 3.8. Let f̃ be the function defined on the generators of G^E_α in Equation (3.60), and extend it to act on elements in G^E_α in the natural way, so that for any word in G^E_α the image under f̃ is the product of the images of its generators. Then f̃ is a homomorphism.
Proof. By Lemma 3.3 the only thing we need to show is that f̃ respects the relations of G^E_α. By Lemma 3.2, G^E_α has only two families of relations, holding for all i, j, k ∈ [N]. We check that f̃ satisfies these through straightforward computation: one calculation shows f̃ satisfies relation (1), while another shows f̃ satisfies relations (2).

Now we turn to introducing an important homomorphism ϕ*_{α,β}. Our organization is unusual in that we give its properties first, as Lemma 3.9, and then define it and its key ingredients, Equations (3.69) to (3.71), during the proof of the lemma. We alert the reader that these objects will be re-used in future proofs.

Lemma 3.9. For each α, β ∈ [3] with α ≠ β there exists a homomorphism ϕ*_{α,β} : G^E_α → H^E satisfying:

A1. ϕ_α(ϕ*_{α,β}(w)) = w for all w ∈ G^E_α.

A2. ϕ_β(ϕ*_{α,β}(w)) = 1 whenever there exists an h ∈ H^E satisfying ϕ_α(h) = w and ϕ_β(h) = 1.
Proof. For ease of notation, we prove the result when α = 1, β = 2. The proof is identical for other α, β.
Recall the (multi)graph G_12, defined in Section 3.1.2. G_12 has 2N vertices, labeled by the group elements x^{(1)}_1, ..., x^{(1)}_N, x^{(2)}_1, ..., x^{(2)}_N. We identify vertices in the graph with generators of the game group G, and abuse notation slightly by referring to the two objects interchangeably. Edges in the graph correspond to clauses; the graph has one edge (x^{(1)}_i, x^{(2)}_j) for each clause containing both x^{(1)}_i and x^{(2)}_j. The graph is bipartite, with the vertices x^{(1)}_i for i ∈ [N] forming one half of the graph and x^{(2)}_j for j ∈ [N] forming the other. See Figure 3 for an example. Recall that sequences of edges in G_12 (and in particular, paths) can be identified with words in H.

Now, consider a word P corresponding to a path in G_12 from a vertex x^{(1)}_{i_1} associated with player 1 to a vertex x^{(2)}_{j_t} associated with player 2. Note the path has odd length because G_12 is bipartite, so the word P consists of an odd sequence of clauses. All generators in G_1, G_2 other than x^{(1)}_{i_1} and x^{(2)}_{j_t} are repeated adjacent to each other in the word P. These generators cancel, and so ϕ_1(P) = x^{(1)}_{i_1} and ϕ_2(P) = x^{(2)}_{j_t}. We note that we can construct a path with the above properties between any two vertices x^{(1)}_{i_1} and x^{(2)}_{j_t} in the same connected component of the multigraph G_12.

Next we develop some notation related to these connected components of G_12. Arbitrarily pick a representative vertex in each connected component, and define functions r^{(α)→(β)}_{1,2} which take each generator of G_α (a vertex in G_12) to the unique representative vertex in G_β in the same component as that generator. Each function r^{(α)→(β)}_{1,2} maps generators which square to the identity to generators which square to the identity, so can be extended to a homomorphism acting on words in G_α.

Figure 4: Sample graph repeated from Figure 3 with a choice of representative vertices in G_2 indicated in red. As an example of our notation, consider the first connected component.

Next, for each x^{(1)}_i ∈ G_1, fix a path, denoted P^{1,2}_{x^{(1)}_i}, between the vertex x^{(1)}_i and the representative vertex in the same connected component (see Figure 5).^16 Define the homomorphism ϕ*_{1,2} : G^E_1 → H^E by its action on the generators of G^E_1 (Equation (3.71)). Recall the conflation of notation defined above, so P^{1,2}_{x^{(1)}_i} defines both a path in the graph G_12 and a word in H. The function ϕ*_{1,2} is a valid homomorphism by Lemma 3.8. It remains to show ϕ*_{1,2} satisfies Properties A1 and A2.

Property A1 requires that ϕ_1(ϕ*_{1,2}(w)) = w for all w ∈ G^E_1. To prove this property we show ϕ*_{1,2} acts as desired on the generators of G^E_1; this follows from Equation (3.67). Property A2 requires that ϕ_2(ϕ*_{1,2}(w)) = 1 whenever there exists an h ∈ H^E satisfying ϕ_1(h) = w and ϕ_2(h) = 1. To show this we first establish the chain of equalities in Equation (3.73)

^16 We emphasize that the path P^{1,2}_{x^{(1)}_i} can be chosen arbitrarily.
Figure 5: Sample graph with representative vertices indicated in red and the path P^{1,2}_{x^{(1)}_i} indicated in blue. This path corresponds to a word in H, where k_1, k_2, k_3 ∈ [N] and l_1, l_2, l_3 ∈ {0, 1} are arbitrary.
for any h ∈ H^E (we only need the first equality to prove Property A2, but the second equality is an easy consequence and will be useful to us later). The equalities can be verified by checking the action of the two maps on generators h_i h_j of H^E; Line (3.78) follows, and this argument proves both equalities in Equation (3.73). Now, any h ∈ H satisfying ϕ_2(h) = 1 must have even length, so h ∈ H^E, and applying Equation (3.73) gives ϕ_2(ϕ*_{1,2}(ϕ_1(h))) = 1, where we used Equation (3.73) and the fact that r^{(1)→(2)}_{1,2} is a homomorphism in the last two equalities. This proves Property A2, and completes the proof.
The next lemma proves that the right inverses ϕ*_α and ϕ*_{α,β} map within the K subgroup. That is, they map words in K ∩ G^E_α to words in K ∩ H^E.

Lemma 3.10. Let v ∈ K ∩ G^E_α be arbitrary. Then ϕ*_α(v) ∈ K ∩ H^E and ϕ*_{α,β}(v) ∈ K ∩ H^E.

Proof. For notational convenience we prove the result when α = 1, β = 2.
The proof is mechanical. To show ϕ*_1(v) ∈ K, we expand v as a product of conjugated commutators of generators, apply ϕ*_1, and note that any factors of σ cancel in the commutators. A similar argument shows ϕ*_{1,2}(v) ∈ K: assume v ∈ K ∩ G^E, expand it in the same way, and then note that the words appearing in the image are in K. The full argument is given in an appendix (Lemma A.1).
An important consequence of Lemma 3.10 is the following corollary.

Clearing the G 1 and G 2 subgroups
The next lemma makes critical use of the right inverses ϕ*_α and ϕ*_{α,β}. It should be thought of as a "pre-processing" step that puts words in a convenient form to prove Theorem 2.6.

Lemma 3.12. If there exists a word w ∈ H^E satisfying w = σ (mod K), then there exists a word w′ ∈ H^E satisfying w′ = σ (mod K) and ϕ_1(w′) = ϕ_2(w′) = 1.

Proof. We construct w′ by right multiplying w by ϕ*_1(ϕ_1(w^{-1})) to clear the G_1 subgroup elements, then multiplying by ϕ*_{2,1}(ϕ_2((w ϕ*_1(ϕ_1(w^{-1})))^{-1})) to clear the G_2 subgroup. Formally:

w′ = w ϕ*_1(ϕ_1(w^{-1})) ϕ*_{2,1}(ϕ_2((w ϕ*_1(ϕ_1(w^{-1})))^{-1})).

First, we show that this expression is well defined, and that w′ = σ (mod K). By assumption, w = σ (mod K); equivalently, w = kσ for some k ∈ K. Then ϕ_1(w) ∈ K ∩ G^E_1, since ϕ_1 maps words in K to words inside K and words in H^E to words in G^E_1. The map ϕ_1 is a homomorphism, so we also have ϕ_1(w^{-1}) ∈ K ∩ G^E_1. Then, by Lemma 3.10, ϕ*_1(ϕ_1(w^{-1})) ∈ K ∩ H^E. A similar argument shows ϕ_2(w) ∈ K ∩ G^E_2. From this and Equation (3.96) it follows that the argument of ϕ*_{2,1} lies in K ∩ G^E_2, so ϕ*_{2,1} is applied to a word on which it is defined. Then, by Lemma 3.10, this final factor also lies in K ∩ H^E. Putting this all together gives w′ = σ (mod K), as desired.

Gadgets for processing words in G 3
We are now almost ready to prove Theorem 2.6. Before we do this, we introduce two final homomorphisms. 17 As in Lemma 3.9 we introduce the properties of these homomorphisms in the following lemma, then define the homomorphisms in the lemma's proof.
Lemma 3.13. There exist homomorphisms f α for α ∈ {1, 2} which map G E 3 → H E and satisfy: Remark 3.14. Properties B1, B2 and B4 are all satisfied if the homomorphism f α is replaced by ϕ * 3,α . Property B3 is not, but it is satisfied by f α in the special case that the graph G 3α is connected. Thus, the homomorphism f α can be thought of as producing words similar to those produced by the map ϕ * 3,α , with the additional feature that they also behave as if the graph G 3α is connected and hence satisfy Property B3. A motivated reader can also check that (with appropriately chosen conventions) the construction of f α given later satisfies f α = ϕ * 3,α when G 3α is connected.
Lemma 3.13 is the last major result needed to prove Theorem 2.6. Before proving the lemma we build intuition for its significance by sketching how Properties B1 to B4 are used in the proof of Theorem 2.6.
Proof Outline (Part 2) of Theorem 2.6. Recall that Lemma 3.12 (as foreshadowed in Part 1 of this proof outline) shows that existence of a word u ∈ H E with u = σ (mod K) implies existence of a word u ′ σ ∈ H E with u ′ ∈ G E 3 ∩ K. We now show how Lemma 3.13 lets us argue that u ′ σ ∈ H E implies that σ ∈ H E . For simplicity, 18 we consider the case where u ′ has the very basic form However, the intuition given here applies more generally. Properties B1 and B2 are used to reason about words w ∈ G E 3 ∩ H E . They show that (up to a factor of σ) existence of a word w ∈ G E 3 ∩ H E implies that the words ϕ 3 (f α (w)) are in G E 3 ∩ H E for α ∈ {1, 2}. 19 To understand why, note for any w ∈ G E 3 ∩ H E we have ϕ 3 (w) = w and ϕ 2 (w) = 1, so ϕ 2 (ϕ * 3,2 (w)) = 1 by Property A2. Then ϕ 2 (f 2 (w)) = ϕ 2 (ϕ * 3,2 (w)) = 1 by Property B1 and ϕ 1 (f 2 (w)) = ϕ 1 (ϕ * 3,2 (w)) by Property B2. Thus, Now, define w ′ = w(ϕ * 3,2 (w)) −1 f 2 (w). Since f 2 and ϕ * 3,2 both map into H E and w ∈ H E , we have w ′ ∈ H E . We have ϕ 1 (w ′ ) = ϕ 2 (w ′ ) = 1 by definition of w ′ and Equation (3.113). Furthermore, by Property A1. We conclude that, up to a potential factor of σ, A similar argument applies to the homomorphism f 1 . Property B3 gives us a powerful tool for working with words of the form ϕ 3 (f α (w)). Recall that we want to show that a word [v 1 , v 2 ]σ ∈ H E with v 1 , v 2 ∈ G E 3 implies that σ ∈ H E . Logic similar to that used to show Equation (3.116) can show (3.118) Now we define q := [ϕ * 3,1 (ϕ 3 (f 1 (v 1 ))), ϕ * 3,2 (ϕ 3 (f 2 (v 2 )))] (3.119) and note that q ∈ H E (because ϕ * 3,1 and ϕ * 3,2 map into H E ). Using Property B3 we see and while Property A1 of the maps ϕ * 3,α gives and direct computation gives Thus, checking the action of each projection ϕ α on the word we see which is the containment needed to prove Theorem 2.6.
For technical reasons in the full proof of Theorem 2.6 we do not apply the homomorphisms f 1 and f 2 to separate parts of a word w ∈ G E 3 , but instead chain them together as ϕ 3 (f 1 (ϕ 3 (f 2 (w)))). 20 Property B4 is a technical result that tells us this chaining together of maps f α behaves as desired.

Proof of Lemma 3.13
Now we turn to the proof of Lemma 3.13. To prepare, we construct "gadget" words which will be used in the definition of f α . These words depend on the representative vertices chosen from the connected components of G 13 and G 23 when constructing the right inverses ϕ * 3,1 and ϕ * 3,2 . To work with these representative vertices we define, for α, β ∈ {1, 3}, the functions r , analogously for α, β ∈ {2, 3}. Next, recall the hypergraph G 123 defined in Section 3.1.2.
Vertices are identified with elements x k σ l ∈ S, where l has value 0 or 1. By the arguments of Section 3.3, we can assume this hypergraph is connected. Then there exist paths in G 123 between any two vertices. Now, for each pair of vertices, let Q(x i , x j ) denote some fixed minimal-length path between these vertices. Then we fix some arbitrary vertex in G 3 , wlog chosen to be x 1 . Each path corresponds to a sequence of clauses, and we can identify sequences of clauses with words in H. A sample hypergraph G 123 is introduced in Figure 6, and a sample path is illustrated in Figure 8.
Next, given a sequence of clauses h p 1 , h p 2 , ..., h ps corresponding to a path in G 123 , define the subsequence of clauses s β (h p 1 , h p 2 , ..., h ps ) to be the sequence including only pairs consisting of adjacent clauses which are connected through the G β vertices. That is, s β (h p 1 , h p 2 , ..., h ps ) includes only adjacent clauses h p i h p i+1 which satisfy (3.125) Note s β (h p 1 , h p 2 , ..., h ps ) need not be a path. (Footnote 20: In the full proof, this composition is defined in Equation (3.198).) Finally, define words and The full sequence of steps involved in the construction of γ 2 is visualized in Figures 6 to 9. We alert the reader that we will most frequently use these gadget words with the fixed index α = 3, but will occasionally require this more general definition.
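As an illustration, the selection rule defining s β can be sketched in a few lines. This is a minimal sketch under our own encoding: each clause is a triple of generator labels, one per player; the function name and encoding are ours, not the paper's.

```python
def s_beta(path, beta):
    """Given a path of clauses (each a tuple with one generator label per
    player, players numbered 1..3), keep only the pairs of adjacent clauses
    connected through a G_beta vertex, i.e. sharing their player-beta
    generator.  Mirrors the selection rule for s_beta in the text."""
    pairs = []
    for h, h_next in zip(path, path[1:]):
        if h[beta - 1] == h_next[beta - 1]:  # overlap on a G_beta vertex
            pairs.append((h, h_next))
    return pairs
```

Note the output is a list of pairs of clauses, not necessarily a path, matching the remark above.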
The following lemma summarizes the important properties of the gadget words γ 2 x and γ 1 x . The words γ 2 x and γ 1 x , defined as in Equation (3.127), satisfy the following properties: Proof. We show the γ 2 case. The proof in the γ 1 case is identical up to a change of index.
To begin the proof, we note the word Q r corresponds to a minimal-length path, and so there are never more than two adjacent clauses containing the same element in the G 1 subgroup. (If there were three or more adjacent hyperedges containing the same element in G 1 , the middle hyperedges could be deleted and the path would remain connected, contradicting minimality.) Additionally, recall that each hyperedge in G 123 is of the form (x (1) i , x (2) j , x (3) k ), i.e. the hyperedge contains exactly one vertex in each G α for α ∈ {1, 2, 3}. For these reasons the subsequence s 1 (Q r ) consists of pairs of hyperedges h y j , h z j which overlap on some vertex in G 1 . Thus we can write where ϕ 1 (h y j h z j ) = 1. This shows Property C1.
Next, we prove Property C2. We start by numbering all clauses in the path Q r (3.129) Consider two adjacent hyperedges h pr h p r+1 in the path. Since these hyperedges appear in sequence they overlap on at least one vertex. (Footnote 21: The x is fixed, so nonessential. We keep it in the notation only to remind ourselves that the words correspond to a subsequence chosen from a sequence of clauses which corresponds to a path terminating at vertex x 1 .)

[Figure 6: A sample hypergraph G 123 generated by the clause set (σ terms omitted since they don't affect the graph); representative vertices are indicated in red.]

[Figure 7: Graph G 23 corresponding to the same set of clauses as used to generate the hypergraph in Figure 6. Representative vertices in G 2 are indicated in red.]

a) These hyperedges may overlap on a vertex x (1) i ; then, using the notation of Equation (3.128), we have h pr h p r+1 = h y j h z j for some j. b) Otherwise these hyperedges overlap on a vertex corresponding to a generator of either G 2 or G 3 (equivalently, these hyperedges overlap on a vertex contained in the graph G 23 ). In that case ϕ 3 (h pr ) and ϕ 3 (h p r+1 ) are in the same connected component in the graph G 23 so r The first equality holds by Equation (3.73).
Now consider a contiguous string of hyperedges of the form h z j , h p r+1 , h p r+2 ..., h p r+r ′ , h y j+1 contained in the path (3.129). Here h z j and h y j+1 belong to the subsequence defining γ 2 x , and no adjacent hyperedges between h z j and h y j+1 overlap on a vertex in the G 1 subspace, else they would be contained in the subsequence, a contradiction. Now that the intermediate clauses h p r+1 , ..., h p r+r ′ are introduced, we apply the observation of the previous paragraph inductively to see Multiplying these terms together and noting adjacent clauses cancel shows ϕ 2 ϕ * 3,2 (ϕ 3 (h z j h y j+1 )) = 1 for any j < L. Now we use this observation inductively, and compute where we used on the last line the fact that ϕ 3 (h y 1 ) is in the same connected component of G 23 as x 1 , so In addition to the gadget words defined above, we will need to recall the properties of the paths P α,β , defined analogously to the path P 1,2 defined at Equation (3.70); they are used to construct the homomorphisms ϕ * α,β . In particular, we care about the properties of those paths when α is 3 and β is 1 or 2. We recall the properties of those paths in the following lemma.
Proof. Properties D1 and D2 follow from the properties of paths in the graph G 3,β , as discussed in the proof of Lemma 3.9. Property D3 is just the definition of ϕ * 3,β , analogous to Equation (3.71).
Now we use the gadget words γ 1 (x 1 ) and γ 2 (x 1 ) along with the paths P 3,β to prove Lemma 3.13.
Proof (Lemma 3.13). Recall that Lemma 3.13 claimed the existence of homomorphisms f 1 and f 2 which map G E 3 → H E and satisfy certain desiderata (Properties B1 to B4). We will now give an explicit construction of these homomorphisms.
Define the homomorphism f 1 : G E 3 → H E by its action on the basis elements with f 2 defined similarly. Both maps are homomorphisms by Lemma 3.9. It remains to show they satisfy Properties B1 to B4. We will show explicitly that the homomorphism f 1 satisfies these properties; the reader can see that the argument for f 2 is identical. Property B2 applied to homomorphism f 1 requires that for any v ∈ G E 3 . We prove this by checking the action of f 1 on the generators of G E 3 . Direct calculation gives where we used Property C1 of the words γ 1 (x i , x 1 ) to go from the second line to the third, and Property D3 of the paths P 3,1 to go from the third line to the fourth.
Property B1 applied to homomorphism f 1 requires that for any v ∈ G E 3 with ϕ 1 ϕ * 3,1 (v) = 1. The proof of this is similar to the proof of Property A2 of the map ϕ * α,β . Recall the function r for any h ∈ H E . As in the proof of Property A2, we check this claim directly on the generators of H E : Note to get line (3.151) we used Property D2 of the paths P 3,1 . The key argument comes in getting to line (3.152), where we used the fact that x  and . (3.156) Since λ 1 , ϕ 1 , f 1 , and ϕ 3 are all homomorphisms, this proves the claim.
Next, for any v ∈ G E 3 satisfying ϕ 1 ϕ * 3,1 (v) = 1 we use Equation (3.147) with h = ϕ * 3,1 (v) to conclude (recalling that v = ϕ 3 ϕ * 3,1 (v) by Property A1): which proves Property B1. Property B4 applied to the homomorphism f 1 requires that for any v ∈ G E 3 . This follows from Property C1 of the words γ 1 (x 1 ) and Property A2 of the map ϕ * 3,2 . Property C1 gives and then Property A2 gives The idea is that the gadget words inserted by the map f 1 map to the identity under the composition ϕ 2 ϕ * 3,2 ϕ 3 , and Property B4 follows. We verify Property B4 algebraically by checking that Equation (3.160) holds on the generators of G E 3 : where we used Equation (3.162) to go from the second line to the third, and Property D1 of the paths P 3,1 to go from the third line to the fourth. Finally, Property B3 applied to f 1 requires that for any v ∈ G E 3 . This relies heavily on Property C2 of the words γ 1 (x i , x 1 ). Because v has even length, we can write Then and using Property C2 of the words γ 1 (x 1 ) and Property D1 of the paths P 3,1 gives This shows Property B3 and completes the proof of Lemma 3.13.
One final property of the maps f 1 and f 2 that we need is that they map words inside the K subgroup to words inside the K subgroup. We show this in the following lemma.
Proof. By assumption, we can write Then, a i 4 ∈ G E , so (by Lemma A.1 in the appendix) But K is normal, so we also have for all i, hence a i 4 The proof for f 2 is identical.
As a corollary, we note that the maps f 1 , f 2 don't introduce any undesired factors of σ.
Corollary 3.18. For any word v ∈ K ∩ G E 3 , we have ϕ σ (f α (v)) = 1 for α ∈ {1, 2}. Proof. Similarly to the proof of Corollary 3.11, note that f 1 (v) ∈ K by Lemma 3.17, so ϕ σ (f 1 (v)) = 1 by Lemma A.4. The proof for f 2 is similar.

Proof of Theorem 2.6
Finally, we are ready to prove Theorem 2.6.
Proof of Theorem 2.6. It is immediate that To see the reverse direction, assume that [σ] K ∈ H E (mod K). Then there exists some w ∈ H E satisfying w = σ (mod K). By Lemma 3.12, there exists a word w ′ ∈ H E satisfying ϕ 1 (w ′ ) = ϕ 2 (w ′ ) = 1 and w ′ = σ (mod K). Note that the last condition implies that w ′ = σk for some k ∈ K, hence We choose words u i ∈ G E 3 and indices a i 1 , ..., a i 4 ∈ [N ] so that Now we multiply gadgets onto w ′ . Consider the word Note that ϕ 1 (w ′ ) = 1, and w ′ ∈ H E . Hence the first equality holds by Property A2 of the map ϕ * 3,1 and the second by Property B1 of f 1 . Putting this all together, By Property B2 of the map f 1 we have Finally, by Property A1 of the map ϕ * 3,1 . Also note that ϕ 3 (f 1 (ϕ 3 (w ′ ))) ∈ K by Lemma 3.17 and the fact that ϕ 3 maps words in K to words in K (Lemma A.5).
We summarize: (3.190) and Now we again multiply gadgets onto w ′′ with the 1 and 2 indices swapped. Recall then define The same arguments as used to show Equations (3.190) and (3.191) then give and We have, by assumption, We define a composition of maps F : Then we have where we used the fact that each word u i x has even length on the first line, and that each word u i has even length on the second. Now by Property B3. Next where we used Property B4 and then Property B3 of the maps f 2 and f 1 . Finally, consider the word 22 We have A similar argument using Equation (3.204) shows ϕ 1 (w ′′′′ ) = 1. Finally, elements in the image of ϕ σ commute with each other (by an argument similar to the proof of Corollary 3.18), hence To put this all together and complete the proof, consider the word w ′′′ w ′′′′−1 . Using Eqs. (3.194) and (3.209) with a similar argument giving (3.215) (Footnote 22: Below, we could have replaced the ϕ * 3 appearing in the term ϕ * 3 (F (u i )) with either ϕ * 3,1 or ϕ * 3,2 and the proof would remain correct.)

Equation (3.206) gives
Finally, Equation (3.213), Corollary 3.18, and Corollary 3.11 give We conclude σ ∈ H E and thus the proof is complete.

A Properties of K and its Interactions
Here we prove several small facts used in the proof of Theorem 2.6 as well as some which add perspective on K.
A.1 Properties of K Lemma A.1. Let u, v be two even length words in G α . Then [u, v] ∈ K.
Proof. Let l(u) denote the length of u, with l(v) defined similarly. Define L = l(u) + l(v). We prove the result by induction on L.
When L = 4, u and v must both have length 2, hence [u, v] is a generator of K. Then the result is immediate.
Otherwise, we must have that either l(u) or l(v) is greater than 2. For now we assume l(v) > 2.
Write v = v ′ v ′′ with v ′ and v ′′ both even length words. Note that so v ′ and v ′′ both have length less than v. Then we can write where we have used the commutator identity and since K is a group The proof when l(u) > 2 is almost identical, except we use the commutator identity
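The two commutator identities invoked in this induction are the standard ones; with the convention [a, b] = a −1 b −1 ab (the paper's convention may differ by inversion) they read:

```latex
[u,\, v'v''] \;=\; [u, v'']\,\big(v''^{-1}[u, v']\,v''\big),
\qquad
[u'u'',\, v] \;=\; \big(u''^{-1}[u', v]\,u''\big)\,[u'', v].
```

Both right-hand sides lie in K whenever the shorter commutators do, since K is normal and hence closed under conjugation, which is exactly what the inductive step needs.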

A.1.1 Canonical form for monomials mod K
Consider the game group G defined for k players, and let ∼ K denote the equivalence relation on G defined by modding out by K. In this subsection we write down a canonical selection from the equivalence classes. This is not used in the proofs here, but it might be useful in other proofs, and it is certainly useful in computer experiments. While G is defined for k players, modding out by K acts independently on the variables x (α) j , j = 1, . . . , n, associated with each player α. Thus, without loss of generality we can take k = 1. Also G contains σ, but we shall ignore it, since σ has no impact on the canonical form.
The core observation is the following lemma. For monomials of degree 3 or more it immediately implies that interchanging any two even position variables, or any two odd position variables, in a monomial m produces a monomial m̃ with m ∼ K m̃.
Proof. We first show abcd ∼ K adcb by noting (adcb) −1 abcd = bcd bcd = bc dc cb cd ∼ K 1, (A.9) where the last equivalence holds by definition of K. The proof that the first and third monomials are equivalent goes similarly. If m has degree 3, write it as abc; then the property just proved for degree 4 gives abc ∼ K abcxx ⇐⇒ cbaxx ∼ K cba (A.10) as claimed.
Given an ordering on the generators of G, a canonical form of a monomial m is seen easily from the lemma. We describe it in terms of an algorithm.
Algorithm

Lemma A.4. For any k ∈ K we have ϕ σ (k) = 1.

Proof. We can write Then, where we used that Im(ϕ σ ) = {σ, 1} is a commutative group to show the commutator terms were the identity.
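The canonical form can be sketched concretely. The following is a minimal sketch under our reading of the lemma above: modulo K, letters in odd positions (and, independently, letters in even positions) of a single player's word may be freely permuted, so a word is determined by its two position-parity multisets after x·x cancellations. The encoding and all names are ours, not the paper's.

```python
from collections import Counter

def canonical_form(word):
    """Candidate canonical form mod K for one player's word, given as a
    sequence of generator labels.  Mod K we may swap any two odd-position
    letters or any two even-position letters, so the word is represented
    by the multiset of letters in odd positions and the multiset in even
    positions.  An adjacent equal pair x x occupies one odd and one even
    slot and cancels (x^2 = 1), so we subtract the common multiset, then
    interleave the sorted remainders."""
    odd = Counter(word[0::2])    # positions 1, 3, ... (1-indexed)
    even = Counter(word[1::2])   # positions 2, 4, ...
    common = odd & even          # pairs that cancel via x^2 = 1
    o = sorted((odd - common).elements())
    e = sorted((even - common).elements())
    out = []
    for i in range(max(len(o), len(e))):
        if i < len(o):
            out.append(o[i])
        if i < len(e):
            out.append(e[i])
    return tuple(out)
```

For example, canonical_form agrees on "abcd" and "adcb" (the equivalence proved in the lemma) and reduces "abba" to the empty word.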
Lemma A.5. For any k ∈ K and α ∈ {1, 2, 3}: ϕ α (k) ∈ K. (A.16) Proof. Define the set C to be all commutators of pairs, that is C = { [x α i x α j , x α k x α l ] : i, j, k, l ∈ [n], α ∈ [3] }. (A.17) Recall that K was defined to be the normal closure of C in G E , that is: We first show that ϕ α (c) ∈ C (A.19) for all c ∈ C. To see this, note for β ≠ α, and for β = α. Then, since ϕ α is a homomorphism mapping G E → G E α , and ϕ α (C) ⊂ C, we have The result follows.

A.3 Equivalence between a PREF and σ ∈ H (mod K)
In [35] an object called a parity refutation was defined. A (paraphrased) version of that definition using the language of Section 2.1.3 is repeated here. First, we define a parity preserving permutation.
Definition A.6. A parity preserving permutation of a sequence of generators (written here as a product) x (1) a 1 x (1) a 2 ...x (1) is a permutation P which satisfies with i = j (mod 2), similar restrictions for P (x (2) b i ′ ) and P (x (3) c i ′′ ), and the condition P (σ) = σ. An equivalent definition of parity preserving permutations, which will be useful to us later, is as permutations P which can be decomposed into products of transpositions of the form π Parity preserving permutations can be used to define an equivalence relation on the words g ∈ G. Definition A.7. Two words g 1 , g 2 ∈ G are parity permutation equivalent, written g 1 ∼ p g 2 , if there is a sequence of generators c l 3 σ s = g 1 (A.26) and a parity preserving permutation P acting on that sequence of generators satisfying Routine calculation (given in [35]) shows ∼ p is an equivalence relation on elements of G. Finally, we define a parity refutation (PREF).
Definition A.8. A sequence of clauses h r 1 , h r 2 , ..., h r l is called a parity refutation if h r 1 h r 2 ...h r l ∼ p σ.
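Definitions A.6 to A.8 suggest a simple polynomial-time check, which we sketch here under one natural reading: a parity preserving permutation can only cancel a generator at an odd position against the same generator at an even position, so the product of the clauses collapses to σ exactly when, for each player, the odd-position and even-position multisets of generators agree and the total σ exponent is odd. This is an illustrative sketch under that assumption, not the paper's algorithm; the clause encoding is ours.

```python
from collections import Counter

def is_parity_refutation(clauses):
    """Check Definition A.8 for a sequence of 3XOR clauses.  Each clause is
    (a, b, c, s): the generator index seen by players 1, 2, 3 and a sign
    bit s (1 encodes a factor of sigma).  Under the multiset reading, the
    product is parity-permutation equivalent to sigma iff every player's
    odd-position generator multiset equals the even-position one (so all
    generators cancel) and the total sigma exponent is odd."""
    sigma_parity = sum(s for *_, s in clauses) % 2
    for player in range(3):
        odd = Counter(c[player] for i, c in enumerate(clauses) if i % 2 == 0)
        even = Counter(c[player] for i, c in enumerate(clauses) if i % 2 == 1)
        if odd != even:
            return False
    return sigma_parity == 1
```

For instance, two copies of the same clause with opposite signs form a refutation, matching the classical intuition that such a pair of constraints is unsatisfiable.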
Existence of a parity refutation is exactly equivalent to σ ∈ H (mod K), as we show in the following theorem. (Actually, a stronger statement is true: the equivalence relation ∼ p is exactly the same as the equivalence relation on G induced by modding out by K. Small modifications to the proof below give that result.) Theorem A.9. A sequence of clauses h r 1 h r 2 ...h r l is a parity refutation iff the word h r 1 h r 2 ...h r l ∈ H obtained by multiplying the clauses together satisfies h r 1 h r 2 ...h r l = σ (mod K). (A.28) Proof. Both directions of the proof are nontrivial. We first show that if a sequence of clauses h r 1 h r 2 ...h r l forms a parity refutation then h r 1 h r 2 ...h r l = σ (mod K). Recall that any parity preserving permutation P can be decomposed into transpositions of the form π But we also have As a consequence, we also have Since the word x (α) a l was arbitrary and we could decompose P into products of transpositions of the form π

A.5 Some members of K ∩ H E and possible kXOR generalizations

Here we give some intuition for dealing with the subgroup K in relation to 3XOR. A major component of our 3XOR analysis has been showing that the special word σ is in K ∩ H E . This was difficult. For perspective, we ask a simpler question: is the intersection K ∩ H E necessarily nonempty for a 3XOR game? The next lemma says yes.
Lemma A.11. Suppose a 3XOR game is nontrivial in the sense that it contains at least two clauses which contain the same generator x (1) i for player 1, and also two such clauses for player 2. Then K ∩ H E is not empty; indeed, at least some generators of K are necessarily contained in H E .
Proof. Consider a pair of clauses h 1 , h 2 ∈ S corresponding to question vectors which send the same question to the first player, so h 1 = x (1) c 2 σ s 2 and a 1 = a 2 . Similarly, let clauses h 3 , h 4 be clauses which agree on the question sent to the second player so x b 3 = x b 4 . 23 We

B Subgroup Membership
Theorem B.1. The subgroup membership problem is solvable in polynomial time for any finitely generated abelian group. 24 Proof. It reduces to linear algebra over the integers. We can write all the relations in the group G and the generators of the subgroup G̃ as products of generators of G raised to some power. When we multiply generators or apply a relation we just add or subtract the multiplicities of the relevant generators. So the subgroup membership problem just asks if a given vector (corresponding to the group element) is in the integer span of the vectors corresponding to the relations and subgroup generators.
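The reduction in this proof can be sketched concretely: encode each relation and each subgroup generator as its vector of generator multiplicities, then decide integer-span membership by Euclidean column reduction (a minimal Hermite-style elimination). This is a library-free sketch; the function name and interface are ours.

```python
def in_integer_span(cols, target):
    """Decide whether `target` is an integer combination of the columns in
    `cols` (each a list of ints of the same length).  Columns are brought
    to a column-echelon form by Euclidean column operations, then `target`
    is cleared row by row; membership holds iff it clears completely."""
    m = len(target)
    cols = [list(c) for c in cols]
    t = list(target)
    j = 0  # index of the next pivot column
    for row in range(m):
        # Euclid: combine columns until at most one has a nonzero entry here
        while True:
            nz = [i for i in range(j, len(cols)) if cols[i][row] != 0]
            if len(nz) <= 1:
                break
            a, b = sorted(nz[:2], key=lambda i: abs(cols[i][row]))
            q = cols[b][row] // cols[a][row]
            for r in range(m):
                cols[b][r] -= q * cols[a][r]
        if nz:
            cols[j], cols[nz[0]] = cols[nz[0]], cols[j]
            piv = cols[j][row]
            if t[row] % piv != 0:
                return False
            q = t[row] // piv
            for r in range(m):
                t[r] -= q * cols[j][r]
            j += 1
        elif t[row] != 0:
            return False
    return True
```

For example, with columns (4) and (6) in Z the reachable set is 2Z, so 2 is in the span but 1 is not; all operations keep entries integral, giving the polynomial-time bound claimed in the theorem (with standard care about coefficient growth).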

C.1 Funding and Competing Interests
Financial Interests: J.W. Helton thanks the Center for Mathematical Sciences and Applications at Harvard for a stimulating stay leading to this collaboration and thanks the NSF for its support through DMS-1500835. A. Bene Watts was supported by NSF grant CCF-1729369.

C.2 Data Availability Statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Changes to the Published Version
We are grateful to Taro Spring for suggesting a few changes (see below) to the published version of this article. They are implemented in this second arXiv version.
-In Equation (3.14) a w was changed to w ′ .
-A missing ϕ α was added to the last line in the proof of Theorem 3.7.
-In the first paragraph of Section 3.4.5 a mislabeled G 12 was changed to G 23 .
-In Equation (3.146) a mislabeled r was corrected.
-Mislabeled H were changed to H E in Section 3.4.6.
-A mislabeled G 3 was changed to G E 3 right above Equation (3.198).
-A clarifying footnote (footnote 19) was added to the proof sketch (part 2) on page 37.