Graph isomorphism: Physical resources, optimization models, and algebraic characterizations

In the $(G,H)$-isomorphism game, a verifier interacts with two non-communicating players (called provers), whose aim is to convince the verifier that the two graphs $G$ and $H$ are isomorphic, by privately sending each of them a random vertex from either $G$ or $H$. In recent work along with Atserias, \v{S}\'amal and Severini [Journal of Combinatorial Theory, Series B, 136:289--328, 2019] we showed that a verifier can be convinced that two non-isomorphic graphs are isomorphic, if the provers are allowed to share quantum resources. In this paper we model classical and quantum graph isomorphism by linear constraints over certain complicated convex cones, which we then relax to a pair of tractable convex models (semidefinite programs). Our main result is a complete algebraic characterization of the corresponding equivalence relations on graphs in terms of appropriate matrix algebras. Our techniques are an interesting mix of algebra, combinatorics, optimization, and quantum information.


Introduction
Two graphs G and H are isomorphic, denoted G ≅ H, if there exists a bijective map from the vertex set of G to the vertex set of H that preserves adjacency and non-adjacency. The problem of deciding whether two given graphs are isomorphic is of fundamental practical interest, and at the same time it plays a central role in theoretical computer science as one of the few problems in the class NP not known to be polynomial-time solvable or NP-complete.
Along with Atserias, Šámal, and Severini, the authors recently introduced a nonlocal game that captures the notion of graph isomorphism [1]. Specifically, in the (G, H)-isomorphism game there are two players, Alice and Bob, trying to convince a third party, called the verifier, that the graphs G and H are isomorphic. For this, the verifier randomly selects a pair of vertices x_A, x_B ∈ V_G ∪ V_H and sends x_A to Alice and x_B to Bob. After receiving their vertices, and without communicating, Alice and Bob respond with vertices y_A, y_B ∈ V_G ∪ V_H, where we assume the vertex sets V_G and V_H are disjoint.
The players win the game if the questions they were asked and the answers they provided indeed model an isomorphism from G to H. Concretely, the first winning condition is that each player must respond with a vertex from the graph that the vertex they received was not from, i.e.,

x_A ∈ V_G ⟺ y_A ∈ V_H, and x_B ∈ V_G ⟺ y_B ∈ V_H. (1)

Furthermore, letting g_A be the unique vertex of G among x_A and y_A, and defining g_B, h_A, and h_B similarly, the second winning condition is that

rel(g_A, g_B) = rel(h_A, h_B), (2)

where rel(x, y) denotes the relationship between vertices x and y, i.e., whether they are equal, adjacent, or distinct and non-adjacent. Note that Equation (2) encodes many constraints: e.g., if Alice and Bob are sent the same vertex of G, then they must respond with the same vertex of H, and if they are sent the endpoints of an edge of G, they must respond with the endpoints of an edge of H. Furthermore, note that we do not explicitly require that G and H have the same number of vertices.
Alice and Bob are allowed to agree on a strategy before the start of the game, but are not allowed to communicate once the game has begun. This type of game is known as a nonlocal game, since the players are usually thought of as being separated in space, which prevents them from communicating after they receive their questions. The parties play only one round of this game, and we only consider strategies that win with certainty, i.e., with probability equal to one. We refer to such strategies as perfect.
It is easy to see that responding according to an isomorphism of G and H is a perfect strategy for the (G, H)-isomorphism game. Moreover, the converse also holds (see Section 2.1), and thus the isomorphism game characterizes the notion of isomorphism of graphs. Motivated by this, in the previous work [1] we introduced the notions of quantum and non-signalling isomorphism of graphs in terms of the existence of perfect quantum and non-signalling strategies for the graph isomorphism game. Furthermore, we investigated these two relations, proving various necessary conditions for quantum isomorphism, giving a complete characterization of non-signalling isomorphism, and providing a method for constructing pairs of non-isomorphic graphs that are nevertheless quantum isomorphic.
In this work we continue our study of the graph isomorphism problem within the framework of nonlocal games. Our point of departure is a new equivalence relation on graphs, defined in terms of the feasibility of a linear conic program over an appropriate convex cone. Specifically, for any convex cone of matrices K we say that graphs G and H are K-isomorphic, and write G ≅_K H, if there exists a matrix M with rows and columns indexed by V_G × V_H such that:

∑_{h,h' ∈ V_H} M_{gh,g'h'} = 1, for all g, g' ∈ V_G, (3)
∑_{g,g' ∈ V_G} M_{gh,g'h'} = 1, for all h, h' ∈ V_H, (4)
M_{gh,g'h'} = 0, if rel(g, g') ≠ rel(h, h'), (5)
M ∈ K. (6)

Any matrix satisfying (3)-(6) is called a K-isomorphism matrix for G to H. Note that the entries of a K-isomorphism matrix are not necessarily nonnegative, or even real, depending on the choice of the cone K. In this article we study the graph equivalences defined by the notion of K-isomorphism for four cones of matrices. The first is the cone of positive semidefinite (psd) matrices, denoted S_+, defined as the set of Gram matrices of vectors v_1, ..., v_n, i.e., M_{ij} = v_i^T v_j. Second, we consider the doubly nonnegative cone, denoted DNN, which consists of entrywise-nonnegative psd matrices. Third, we consider the cone of completely positive semidefinite matrices [17], denoted CS_+, which consists of Gram matrices of psd matrices. Concretely, a matrix M is completely positive semidefinite if there exist Hermitian psd matrices ρ_1, ..., ρ_n such that M_{ij} = ⟨ρ_i, ρ_j⟩ := Tr(ρ_i^† ρ_j). Lastly, we consider the completely positive cone, denoted CP, consisting of Gram matrices of entrywise-nonnegative vectors. It is straightforward to verify that

CP ⊆ CS_+ ⊆ DNN ⊆ S_+,

and these containments are all strict for matrices of size at least 5 [17].
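The linear conditions (3)-(5) are purely mechanical and can be verified directly for a candidate matrix. The Python sketch below (our own illustration, not part of the paper; `rel` and `is_isomorphism_matrix` are hypothetical helper names) checks them for the matrix induced by a genuine isomorphism, here the identity map of K2 onto itself; the conic condition (6), membership in K, is the separate part of the definition.

```python
from itertools import product

def rel(x, y, edges):
    """Relationship of two vertices: 'eq', 'adj', or 'non' (distinct non-adjacent)."""
    if x == y:
        return "eq"
    return "adj" if frozenset((x, y)) in edges else "non"

def is_isomorphism_matrix(M, VG, EG, VH, EH):
    """Check the linear conditions (3)-(5) on a candidate matrix M, given as a
    dict mapping ((g, h), (g2, h2)) -> entry. Cone membership is not checked."""
    EG = {frozenset(e) for e in EG}
    EH = {frozenset(e) for e in EH}
    # (3): each (g, g2) block sums to 1 over V_H x V_H.
    for g, g2 in product(VG, VG):
        if abs(sum(M[((g, h), (g2, h2))] for h, h2 in product(VH, VH)) - 1) > 1e-9:
            return False
    # (4): symmetrically, each (h, h2) block sums to 1 over V_G x V_G.
    for h, h2 in product(VH, VH):
        if abs(sum(M[((g, h), (g2, h2))] for g, g2 in product(VG, VG)) - 1) > 1e-9:
            return False
    # (5): entries vanish whenever the vertex relationships disagree.
    for (g, h), (g2, h2) in product(product(VG, VH), repeat=2):
        if rel(g, g2, EG) != rel(h, h2, EH) and M[((g, h), (g2, h2))] != 0:
            return False
    return True

# The identity isomorphism of K2 with itself, encoded as a rank-one 0/1 matrix.
VG = VH = [0, 1]
EG = EH = [(0, 1)]
sigma = {0: 0, 1: 1}
M = {((g, h), (g2, h2)): int(h == sigma[g] and h2 == sigma[g2])
     for g in VG for h in VH for g2 in VG for h2 in VH}
print(is_isomorphism_matrix(M, VG, EG, VH, EH))  # True
```

Any bijection σ that is a graph isomorphism yields such a feasible matrix, which is the easy direction of Result 1 below.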
All of these cones are of central importance to the field of mathematical optimization. Most notably, linear optimization over the cone of psd matrices corresponds to semidefinite programming, an important family of optimization models with extensive modeling power and efficient algorithms [11]. Additionally, linear optimization over the completely positive cone corresponds to completely positive programming, a family of optimization models that are hard to solve but have significant expressive power [5].
Summary of results and related work. In our first result we express both classical and quantum graph isomorphism as K-isomorphism over appropriate cones of matrices. Specifically, we have the following:

Result 1. For any pair of graphs G and H, we have that G and H are isomorphic if and only if G ≅_CP H, and furthermore, G and H are quantum isomorphic if and only if G ≅_{CS_+} H.
The fact that quantum isomorphism is equivalent to the feasibility of a linear conic program over the cpsd cone is not surprising in view of the strong connections between the cpsd cone and the set of quantum correlations, e.g. see [17, 26, 18, 24]. On the other hand, the formulation of graph isomorphism as a feasibility problem over the completely positive cone is, to the best of our knowledge, new. A related result is a formulation of graph isomorphism over the copositive cone [12], the dual of the completely positive cone.
Furthermore, in [24] the notion of K-homomorphism was considered for various cones K. These relations are related to homomorphisms in the same way that K-isomorphisms are related to isomorphisms. In particular, CP- and CS_+-homomorphisms are equivalent to classical and quantum homomorphisms respectively.
As the problem of deciding whether two graphs are isomorphic (or quantum isomorphic) is hard, it is important to identify tractable conditions that are necessary and/or sufficient for these relations. In view of our first result, we study the notion of K-isomorphism in the case of the doubly nonnegative and positive semidefinite cones. By the chain of inclusions CP ⊆ CS_+ ⊆ DNN ⊆ S_+, both DNN- and S_+-isomorphism are tractable relaxations of quantum (and of classical) graph isomorphism. The main contribution of this work is a complete algebraic characterization of the graphs that are DNN- and S_+-isomorphic, respectively, in terms of isomorphisms of appropriate matrix algebras.
A linear subspace of C^{n×n} that is also closed under matrix multiplication is an algebra. A subalgebra 𝒜 of C^{n×n} is called coherent if it is unital (i.e., contains the identity matrix), contains the all-ones matrix, is closed under Schur product, and is self-adjoint (i.e., closed under conjugate transpose). As the intersection of two coherent algebras is again a coherent algebra, we can define the coherent algebra of a graph G, denoted 𝒜_G, as the intersection of all coherent algebras containing the adjacency matrix of G.
Result 2. Consider two graphs G and H with adjacency matrices A_G and A_H and coherent algebras 𝒜_G and 𝒜_H, respectively. Then G ≅_DNN H if and only if there exists an isomorphism between the coherent algebras 𝒜_G and 𝒜_H that maps A_G to A_H.
As it turns out, the notion of DNN-isomorphism coincides with an equivalence relation on graphs introduced in 1968 by Weisfeiler and Leman [29], known today as the 2-dimensional Weisfeiler-Leman method. In order to explain this link we first need to introduce some necessary background.
It is well known that coherent algebras are in one-to-one correspondence with coherent configurations. Indeed, since a coherent algebra 𝒜 is closed under Schur product, it must have an orthogonal (with respect to the Hilbert-Schmidt inner product) basis of 01-matrices, denoted {A_i : i ∈ I}. Concretely, the matrices A_i satisfy the following properties, where • denotes the Schur, or entrywise, product:

(i) ∑_{i∈Ω} A_i = I for some Ω ⊆ I,
(ii) ∑_{i∈I} A_i = J, the all-ones matrix,
(iii) A_i • A_j = δ_{ij} A_i for all i, j ∈ I,
(iv) for each i ∈ I, there exists a j ∈ I such that A_i^† = A_j, and
(v) there exist numbers p^k_{ij} for i, j, k ∈ I, called the intersection numbers of 𝒜, such that A_i A_j = ∑_{k∈I} p^k_{ij} A_k.

Each basis matrix A_i corresponds to its support, namely the set of ordered pairs (g, g') such that the gg'-entry of A_i is 1. Equivalently, thinking of each such subset as a binary relation on V_G, properties (i)-(v) imply that these relations form a coherent configuration [13]. Conversely, any coherent configuration corresponds to some coherent algebra.
Given a graph G, the 2-dimensional Weisfeiler-Leman algorithm begins by labeling every ordered pair of vertices (g, g') according to whether they are equal, adjacent, or non-adjacent. At each step, the label of every ordered pair (g_1, g_2) is augmented with the |V_G|-element multiset of ordered pairs of labels of (g_1, g), (g, g_2) for g ∈ V_G. The algorithm terminates when the partition of V_G × V_G induced by the labels stabilizes. By specifying an ordering on the values of rel, and ordering the labels lexicographically, the parts of the resulting partition inherit an isomorphism-invariant ordering. If the partitions resulting from running this algorithm on two graphs differ (in a sense that can be made rigorous), then the graphs must be non-isomorphic. Otherwise, we say that the graphs are not distinguished by the Weisfeiler-Leman method, which yields an equivalence relation on graphs. As was shown in the original paper of Weisfeiler and Leman, the resulting partition of V_G × V_G given by the above algorithm is exactly the coherent configuration corresponding to the coherent algebra of G [29].
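The refinement loop described above is short enough to sketch in code. The following Python sketch (our own illustration, not from the paper; `wl2_histograms` is a hypothetical helper name) runs 2-dimensional Weisfeiler-Leman refinement jointly on several graphs, so the resulting colour histograms are directly comparable across graphs.

```python
from collections import Counter
from itertools import product

def wl2_histograms(graphs):
    """2-dimensional WL refinement run jointly on a list of (vertices, edges)
    pairs, sharing one colour palette. Returns one Counter of stable
    pair-colours per graph."""
    verts, colour = {}, {}
    for t, (V, E) in enumerate(graphs):
        verts[t] = list(V)
        Eset = {frozenset(e) for e in E}
        for u, v in product(verts[t], repeat=2):
            # Initial label: equal, adjacent, or distinct non-adjacent.
            colour[(t, u, v)] = 0 if u == v else (1 if frozenset((u, v)) in Eset else 2)
    while True:
        refined = {}
        for (t, u, v) in colour:
            # Augment with the multiset of colour pairs along every intermediate w.
            sig = sorted((colour[(t, u, w)], colour[(t, w, v)]) for w in verts[t])
            refined[(t, u, v)] = (colour[(t, u, v)], tuple(sig))
        palette = {c: i for i, c in enumerate(sorted(set(refined.values())))}
        new = {k: palette[refined[k]] for k in refined}
        if len(set(new.values())) == len(set(colour.values())):
            break  # partition is stable
        colour = new
    return [Counter(c for (t, u, v), c in colour.items() if t == i)
            for i in range(len(graphs))]

# C6 and two disjoint triangles are both 2-regular on 6 vertices, so the
# 1-dimensional method cannot tell them apart, but 2-WL sees the triangles.
C6 = (range(6), [(i, (i + 1) % 6) for i in range(6)])
two_C3 = (range(6), [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
h1, h2 = wl2_histograms([C6, two_C3])
print(h1 == h2)  # False: the Weisfeiler-Leman method distinguishes them
```

Equal histograms for the stable colouring are exactly what "not distinguished by the Weisfeiler-Leman method" means in this sketch.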
In Section 6 we characterize S_+-isomorphism by introducing an appropriate generalization of coherent algebras of graphs. Specifically, we say that a subalgebra 𝒜 of C^{n×n} is partially coherent (with respect to {I, A_G}) if it is unital, self-adjoint, contains the all-ones matrix, and is closed under Schur multiplication by the matrices I and A_G. As with coherent algebras, the intersection of two partially coherent algebras is again a partially coherent algebra. This allows us to define the partially coherent algebra of a graph G, denoted Â_G, as the minimal partially coherent algebra containing A_G. We show the following:

Result 3. Consider two graphs G and H with adjacency matrices A_G and A_H and partially coherent algebras Â_G and Â_H, respectively. Then G ≅_{S_+} H if and only if there exists a suitable linear bijection between Â_G and Â_H mapping A_G to A_H (the precise conditions are stated in Section 6).

The notion of S_+-isomorphism appears to be a new graph relation, which we show implies several forms of cospectrality, e.g. see Lemmas 5.3 and 5.4. Moreover, we show that S_+-isomorphism is equivalent to cospectrality of adjacency matrices when restricted to 1-walk-regular graphs (cf. Theorem 5.9).
Our main technique for studying DNN- and S_+-isomorphisms is a surprising correspondence between K-isomorphism matrices and linear maps Φ : C^{V_G×V_G} → C^{V_H×V_H}. Concretely, by considering a K-isomorphism matrix M for G to H as a Choi matrix, we can associate to it a linear map Φ_M. As it turns out, maps constructed in this manner have some remarkable properties. The idea for this construction is adapted from Ortiz and Paulsen, who applied it to winning correlations for the homomorphism game [21].
As an immediate consequence of the chain of inclusions CP ⊆ CS_+ ⊆ DNN ⊆ S_+, it follows that for all graphs G and H we have

G ≅ H ⟹ G ≅_{CS_+} H ⟹ G ≅_{DNN} H ⟹ G ≅_{S_+} H,

and in Section 7 we show that none of these implications can be reversed.
In Section 8 we give yet another characterization of K-isomorphism by combining a conic generalization of the celebrated Lovász theta function with a new product of graphs. Specifically, for any matrix cone K consider the graph parameter

ϑ^K(G) := max { ⟨J, M⟩ : Tr(M) = 1, M_{ij} = 0 if i ∼ j, M ∈ K }.

For K = S_+, the corresponding parameter is the celebrated Lovász theta function, denoted ϑ, whereas for K = DNN it is equal to a variant due to Schrijver, usually denoted ϑ' [25]. A nontrivial result [8] is that for K = CP the parameter ϑ^CP is equal to the independence number of a graph.
In order to reformulate K-isomorphism in terms of the graph parameter ϑ^K, we make use of the graph isomorphism product, denoted G ⋄ H, which has vertex set V_G × V_H and edges (g, h) ∼ (g', h') if rel(g, g') ≠ rel(h, h'). In other words, vertices of G ⋄ H are adjacent exactly when the corresponding entry in an isomorphism matrix for G to H is required to be zero. Note that the isomorphism product of G and H is the complement of the so-called weak modular product of graphs, e.g. see [22]. For K = CP, the above result implies that the graphs G and H are isomorphic if and only if α(G ⋄ H) = |V_G| = |V_H| [15], and in fact it has long been known that α(G ⋄ H) is the size of the largest common induced subgraph of G and H.
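To make the product concrete, here is a small Python sketch (our own illustration, not from the paper; `isomorphism_product` and `independence_number` are hypothetical helper names) that builds G ⋄ H and brute-forces its independence number. For the path P3 and the triangle K3, the largest common induced subgraph is a single edge, so α(P3 ⋄ K3) = 2.

```python
from itertools import combinations, product

def rel(x, y, E):
    """Relationship of two vertices: 'eq', 'adj', or 'non'."""
    if x == y:
        return "eq"
    return "adj" if frozenset((x, y)) in E else "non"

def isomorphism_product(VG, EG, VH, EH):
    """The isomorphism product G <> H: vertices are pairs (g, h), and two pairs
    are adjacent exactly when rel(g, g2) != rel(h, h2), i.e. when the
    corresponding isomorphism-matrix entry is forced to be zero."""
    EG = {frozenset(e) for e in EG}
    EH = {frozenset(e) for e in EH}
    V = list(product(VG, VH))
    E = {frozenset((a, b)) for a, b in combinations(V, 2)
         if rel(a[0], b[0], EG) != rel(a[1], b[1], EH)}
    return V, E

def independence_number(V, E):
    """Brute-force alpha; fine for the tiny examples used here."""
    for k in range(len(V), 0, -1):
        for S in combinations(V, k):
            if all(frozenset((a, b)) not in E for a, b in combinations(S, 2)):
                return k
    return 0

P3 = ([0, 1, 2], [(0, 1), (1, 2)])
K3 = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
V, E = isomorphism_product(*P3, *K3)
print(independence_number(V, E))  # 2: the largest common induced subgraph is an edge
```

An independent set in G ⋄ H is exactly a partial map g ↦ h that preserves rel, which is why its maximum size is the largest common induced subgraph.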
Lastly, in Appendix 9 we show that DNN- and S_+-isomorphism respectively correspond to the feasibility of (the first level of) the Lasserre hierarchy applied to appropriate relaxations of the quadratic integer programming formulation of the graph isomorphism problem. Specifically, two graphs G and H with adjacency matrices A_G and A_H are isomorphic if and only if there exists a permutation matrix X = (X_gh) such that A_G = X^T A_H X. Thus, the 0/1 solutions of the corresponding semialgebraic set, given by the constraints (9)-(11) below, encode all possible isomorphisms between G and H. Given a semialgebraic set K = {x ∈ [0, 1]^n : g_i(x) ≥ 0, i = 1, ..., m}, the Lasserre hierarchy is a systematic method for producing increasingly tight approximations of conv(K ∩ {0, 1}^n) [16]. Each level of the Lasserre hierarchy corresponds to a semidefinite program and can be constructed using sums-of-squares representations of polynomials and the dual theory of moments.
Result 5. Two graphs G and H are S_+- (respectively, DNN-) isomorphic if and only if the first level of the Lasserre hierarchy applied to (9)-(11) (respectively, with entrywise nonnegativity added) is feasible.

Summarizing, our results give a surprising, and previously unknown, connection between coherent algebras, the Weisfeiler-Leman algorithm, the Lasserre hierarchy, and Schrijver's theta function (see Theorem 9.2 and Remark 9.3).

Strategies for the isomorphism game
In this section we recall the isomorphism game and briefly explain classical and quantum strategies.For more detailed background and additional properties we refer the reader to [1].
Recall that in the (G, H)-isomorphism game, each player receives a vertex from one of the graphs G and H, and they must respond with a vertex from the other graph (see Equation (1)). In order to win, the vertices of G that they receive/send must relate to each other (i.e., be equal, adjacent, or distinct and non-adjacent) in the same way as their vertices of H (see Equation (2)). The players know G and H and can agree on a strategy beforehand, but they are not allowed to communicate once the game begins (i.e., once they receive their questions/vertices). We are interested in winning or perfect strategies, which are those that win with probability one.
Remark 2.1. Any winning strategy for the (G, H)-isomorphism game is also a winning strategy for the (H, G)-isomorphism game, as well as for the (G̅, H̅)-isomorphism game. Here G̅ refers to the complement of G, i.e., the graph obtained from G by replacing edges with non-edges and vice versa.
For any fixed strategy for the (G, H)-isomorphism game, there is an associated joint conditional probability distribution p(y_A, y_B | x_A, x_B), for x_A, x_B, y_A, y_B ∈ V_G ∪ V_H, which gives the probability of Alice and Bob responding with y_A and y_B when given inputs x_A and x_B respectively. The distribution p is usually referred to as a correlation. A given strategy for the (G, H)-isomorphism game is a winning strategy if and only if p(y_A, y_B | x_A, x_B) = 0 whenever x_A, x_B, y_A, and y_B do not meet the winning conditions (1) and (2) defined above.

Classical Strategies
In general, classical strategies allow Alice and Bob access to some shared randomness, such as a random binary string, which they can use to determine how they respond to the questions of the verifier. However, for each value the shared randomness may assume, the corresponding strategy becomes deterministic. Mathematically, this says that any classical correlation p can be written as p = ∑_i λ_i p_i, where λ_i ≥ 0 for each i, ∑_i λ_i = 1, and each p_i corresponds to a deterministic classical strategy. The coefficients λ_i encode the shared randomness used by the players. Since whether a correlation p corresponds to a winning strategy is determined by its zeros, the correlation p arises from a winning strategy if and only if p_i is winning for all i such that λ_i > 0.
A deterministic classical strategy consists of two functions f_A and f_B, for Alice and Bob respectively, that map inputs to outputs. Thus when Alice receives input x, she responds with f_A(x), and Bob acts analogously. For the isomorphism game, it is not difficult to see that the functions f_A and f_B must be equal, and moreover that their restriction to V_G (resp. V_H) is an isomorphism from G to H (resp. from H to G). Furthermore, the restriction to V_H is the inverse of the restriction to V_G. Thus the (G, H)-isomorphism game can be won perfectly with classical strategies if and only if G and H are actually isomorphic.

Quantum Strategies
In a quantum strategy the players can take advantage of shared quantum entanglement and measurements in order to produce their outputs. For our purposes we can restrict the shared entanglement to what are known as pure bipartite states of full Schmidt rank. Such a state corresponds to a unit vector ψ ∈ C^d ⊗ C^d which can be expressed as ψ = ∑_{i∈[d]} λ_i a_i ⊗ b_i, where {a_i : i ∈ [d]} and {b_i : i ∈ [d]} are two orthonormal bases of C^d and λ_i > 0 for all i. Any vector in C^d ⊗ C^d admits such a Schmidt decomposition if we allow λ_i ≥ 0; the restriction λ_i > 0 reflects our assumption that the shared entangled state has full Schmidt rank. The two orthonormal bases are known as Schmidt bases of ψ, and there can be several choices for such a pair of bases. For a given shared state ψ ∈ C^d ⊗ C^d, we intuitively think of the first tensor factor as describing Alice's part of the state and the second as describing Bob's part. In order to extract classical information from this shared state, Alice and Bob can measure their respective parts. A k-outcome quantum measurement of a d-dimensional system (space), also referred to as a POVM (Positive Operator Valued Measure), is described by a family of k operators from S^d_+ which sum to the identity. We say that such a measurement is projective if each of the positive semidefinite operators is an (orthogonal) projection. For a more in-depth explanation of general quantum strategies we refer the reader to [20], or to [1] for more details on quantum strategies for the isomorphism game specifically.
With these notions at hand we are ready to describe a general quantum strategy that Alice and Bob use to play the (G, H)-isomorphism game. First, Alice and Bob choose the shared entangled state that they will use. Next, each player chooses a quantum measurement to perform upon receiving x ∈ V_G ∪ V_H. Since any classical post-processing of the measurement outcome can be incorporated into the measurement itself, we can assume without loss of generality that each player responds with the measurement outcome they obtain. Hence, we index the measurement outcomes by elements of V_G ∪ V_H. So a quantum strategy consists of a shared entangled state ψ ∈ C^d ⊗ C^d for some d, and quantum measurements P_x = (P_xy : y ∈ V_G ∪ V_H) and Q_x = (Q_xy : y ∈ V_G ∪ V_H) for Alice and Bob respectively. According to the postulates of quantum mechanics, the probability of obtaining outcomes y and y' upon measuring P_x and Q_{x'} respectively is given by

p(y, y' | x, x') = ψ^† (P_xy ⊗ Q_{x'y'}) ψ. (12)

It will often be useful to use the fact that the probability from Equation (12) can also be expressed as

p(y, y' | x, x') = Tr(ρ^† P_xy ρ Q_{x'y'}^T), (13)

where ψ := vec(ρ) and vec : C^{d1×d2} → C^{d1} ⊗ C^{d2} is the linear map defined by vec(e_i e_j^T) = e_i ⊗ e_j and extended by linearity. In the above derivation we have used the identities vec(AXB^T) = (A ⊗ B) vec(X) and Tr(A^†B) = vec(A)^† vec(B), which can be verified by a direct calculation. We will also use the inverse map of vec, which we denote by mat; note that mat takes vectors to matrices, i.e., mat : C^{d1} ⊗ C^{d2} → C^{d1×d2}. For notational convenience, we usually choose to express the shared state ψ, as well as the operators P_xy and Q_xy, in a Schmidt basis of ψ. Note that in this basis, the operator ρ = mat(ψ) from Equation (13) is a diagonal matrix with positive diagonal entries.
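The vec/mat identities used above are easy to confirm numerically. The following sketch (our illustration, assuming NumPy is available) checks vec(AXB^T) = (A ⊗ B) vec(X) and Tr(A^†B) = vec(A)^† vec(B) for the row-stacking convention vec(e_i e_j^T) = e_i ⊗ e_j, on random complex matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def vec(M):
    """vec maps e_i e_j^T to e_i (x) e_j, i.e. row-major stacking of M."""
    return M.reshape(-1)

def mat(v, shape):
    """Inverse of vec for a fixed shape."""
    return v.reshape(shape)

A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# vec(A X B^T) = (A (x) B) vec(X)
lhs = vec(A @ X @ B.T)
rhs = np.kron(A, B) @ vec(X)
print(np.allclose(lhs, rhs))  # True

# Tr(A^dagger C) = vec(A)^dagger vec(C)
C = rng.standard_normal((3, 2))
print(np.isclose(np.trace(A.conj().T @ C), vec(A).conj() @ vec(C)))  # True
```

Note that NumPy's default row-major `reshape` matches exactly the convention vec(e_i e_j^T) = e_i ⊗ e_j adopted above; with the column-stacking convention the roles of A and B in the Kronecker product would be swapped.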
In [1] we showed that perfect strategies of the isomorphism game have a special form.

Theorem 2.2. Let G and H be graphs and let [...] 6. P_xy = P_yx for all x, y ∈ V.
We also observed in [1] that Theorem 2.2 allows us to reformulate the existence of a quantum isomorphism in the following way.
Theorem 2.3. Let G and H be graphs. Then G ≅_q H if and only if there exist projectors P_gh for g ∈ V_G and h ∈ V_H such that:

1. ∑_{h∈V_H} P_gh = I for all g ∈ V_G;

2. ∑_{g∈V_G} P_gh = I for all h ∈ V_H;

3. P_gh P_{g'h'} = 0 if rel(g, g') ≠ rel(h, h').
Note that by items (1) and (6) of Theorem 2.2, any quantum correlation p that wins the (G, H)-isomorphism game satisfies the following property: switching the input and output, for Alice or for Bob, does not affect the corresponding probability. We refer to any correlation with this property as input-output symmetric. This symmetry allows us to use a smaller matrix when formulating classical and quantum isomorphisms as conic feasibility problems. Note that since every classical correlation is in particular quantum, winning classical correlations for the isomorphism game are input-output symmetric as well.

Conic Formulations
In this section we will prove Result 1: graphs G and H are isomorphic (resp. quantum isomorphic) if and only if there is a CP-isomorphism matrix (resp. CS_+-isomorphism matrix) for G to H.

Classical Correlations
Suppose that p is a correlation for the (G, H)-isomorphism game. Define the matrix M^p, with rows and columns indexed by V_G × V_H, entrywise as

M^p_{gh,g'h'} = p(h, h' | g, g').

Note that the matrix M^p does not contain all of the probabilities of p, only those corresponding to inputs from V_G and outputs from V_H. Thus, in general, the matrix M^p may not completely determine the correlation p. However, since winning classical or quantum correlations are input-output symmetric, such correlations p are determined by the matrix M^p. Also note that since Alice and Bob are symmetric, i.e., p(y, y' | x, x') = p(y', y | x', x) for all x, x', y, y' ∈ V_G ∪ V_H for a winning classical or quantum correlation p, we have that M^p is symmetric.
Recall from above that a matrix is completely positive if it is the Gram matrix of entrywise-nonnegative vectors. Equivalently, a matrix M is completely positive if M = ∑_i p_i p_i^T, where the p_i ∈ R^d are entrywise-nonnegative vectors. The equivalence of these two definitions follows from the fact that the matrix P P^T is the Gram matrix of the rows of P, but is also equal to ∑_i p_i p_i^T where p_i is the i-th column of P. This formulation of completely positive matrices will be useful for proving Theorem 3.1 below.
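The equivalence of the two descriptions can also be seen numerically. The following sketch (our own illustration, assuming NumPy) builds a completely positive matrix from an entrywise-nonnegative factor P and checks that the Gram-of-rows description (P P^T) agrees with the sum of rank-one terms over the columns of P.

```python
import numpy as np

rng = np.random.default_rng(1)

# A completely positive matrix two ways: as P P^T for an entrywise-nonnegative
# P (the Gram matrix of the rows of P), and as the sum of rank-one terms
# p_i p_i^T over the columns p_i of P.
P = rng.random((4, 3))          # entrywise nonnegative by construction
M = P @ P.T                     # Gram matrix of the rows of P

rank_one_sum = sum(np.outer(P[:, i], P[:, i]) for i in range(P.shape[1]))
print(np.allclose(M, rank_one_sum))  # True
```

As expected for a completely positive matrix, M is both entrywise nonnegative and positive semidefinite, i.e., it lies in the doubly nonnegative cone as well.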
The proof of the following theorem resembles the proof of Theorem 4.2 from [24], a similar result for the homomorphism game. There, however, the only concern was showing that the existence of a homomorphism is equivalent to the existence of a completely positive matrix satisfying certain properties.

Theorem 3.1. Suppose G and H are graphs and p is a correlation for the (G, H)-isomorphism game. Then p is a winning classical correlation if and only if p is input-output symmetric and M^p is a CP-isomorphism matrix.
Proof. Suppose that p is a winning classical correlation. Then we already know that p is input-output symmetric, so it remains to show that M^p is a CP-isomorphism matrix. First, since p is a winning correlation, Equations (3) and (5) hold. Also, since p is input-output symmetric, Equation (4) holds. Thus we only need to show that M^p ∈ CP.
Since p is classical, it is a convex combination of correlations arising from deterministic classical strategies. In other words, there are positive numbers λ_1, ..., λ_k with ∑_{i=1}^k λ_i = 1 and deterministic correlations p_i such that p = ∑_i λ_i p_i. It is easy to see that M^p = ∑_i λ_i M^{p_i}. Since p_i is deterministic, there exists a graph isomorphism f_i from G to H such that M^{p_i} = v_i v_i^T, where v_i is the 01-vector with (v_i)_{gh} = 1 if and only if h = f_i(g). Since the v_i are entrywise nonnegative and the λ_i are positive, it follows that M^p is completely positive. Conversely, suppose that p is input-output symmetric and M^p is a CP-isomorphism matrix. It immediately follows from the definition of isomorphism matrices that p is a winning correlation for the (G, H)-isomorphism game, so it remains to show that it is classical.
Since M^p is completely positive, there are nonzero, entrywise-nonnegative vectors v_1, ..., v_k such that M^p = ∑_i v_i v_i^T; set N^i := v_i v_i^T. Our aim is to show that each v_i corresponds to a deterministic correlation, i.e., that N^i is a scalar multiple of M^{p_i} for some deterministic correlation p_i.
First, we show that the N^i have uniform block sums, i.e., that ∑_{h,h'∈V_H} N^i_{gh,g'h'} does not depend on g, g' ∈ V_G. For each i, let N̄^i be the matrix with rows and columns indexed by V_G such that N̄^i_{g,g'} = ∑_{h,h'∈V_H} N^i_{gh,g'h'}, and define M̄^p analogously, so that M̄^p = ∑_i N̄^i. Note that M̄^p can be obtained by conjugating M^p by the matrix I ⊗ e, where e is the all-ones vector, and N̄^i can be obtained from N^i similarly. Since M^p and the N^i are completely positive, they are also positive semidefinite, and therefore so are M̄^p and the N̄^i. Since ∑_{h,h'} p(h, h' | g, g') = 1 for all g, g' ∈ V_G, we have that M̄^p is equal to the all-ones matrix J. However, J is a rank-one positive semidefinite matrix that we have written as a sum of the positive semidefinite matrices N̄^i. This is only possible if each N̄^i is a positive (since v_i is nonzero) scalar multiple of J. Let µ_i be such that N̄^i = µ_i J, and note that µ_i > 0 for all i and ∑_i µ_i = 1. Now we will show that each v_i corresponds to a deterministic correlation. For notational simplicity, set v := v_1 and N := N^1. If h ≠ h', then rel(g, g) = rel(h, h') fails, and thus N_{gh,gh'} = v_gh v_gh' = 0; combined with ∑_{h,h'} N_{gh,gh'} = N̄_{g,g} = µ_1 > 0, this shows that for each g ∈ V_G there is a unique h ∈ V_H with v_gh ≠ 0, and we define f(g) to be this vertex. By an analogous argument to the above, we have that f is injective. Moreover, since N_{gh,g'h'} = 0 for rel(g, g') ≠ rel(h, h'), we must have that f is an isomorphism of G and H.
We want to show that v_{gf(g)} = v_{g'f(g')} for all g, g' ∈ V_G. However, v_{gf(g)} v_{g'f(g')} = ∑_{h,h'∈V_H} N_{gh,g'h'} = N̄_{g,g'}, since v_gh = 0 unless h = f(g). Since N̄ is a positive multiple of J by the above argument, we have that v_{gf(g)} v_{g'f(g')} is constant for all g, g' ∈ V_G, and therefore v_{gf(g)} is constant. It is easy to see that v_{gf(g)} = √µ_1 and N = µ_1 M^q, where q is the correlation arising from the deterministic strategy given by the isomorphism f. The same holds for all i, and therefore p is a convex combination of correlations arising from deterministic classical strategies, i.e., p is classical.
The above theorem is similar to a more general characterization of classical correlations given in [26]. However, our result is different in that we use a smaller matrix, whose rows and columns are indexed by V_G × V_H. This is possible because of the input-output symmetry of winning classical/quantum correlations for the isomorphism game, which is not present in arbitrary games.
Note that we must include the assumption of input-output symmetry in the theorem above, as otherwise we have no information about the other probabilities of p. Thus, if this assumption were dropped, it could be that p behaves classically when both inputs are from V_G, but not for other inputs.

Quantum Correlations
To prove the characterization of winning quantum correlations for the isomorphism game, we first need the following lemma, which was proven in [24] (see also [26, Lemma 3.6]).

Lemma 3.2. Let X and Y be finite sets and let M be the Gram matrix of vectors w_xy for x ∈ X, y ∈ Y, which lie in some inner product space. The following assertions are equivalent:

• There exists a unit vector w satisfying ∑_{y∈Y} w_xy = w for all x ∈ X.

• ∑_{y,y'∈Y} M_{xy,x'y'} = 1 for all x, x' ∈ X.

Using this we can now prove the following:

Theorem 3.3. Suppose G and H are graphs and p is a correlation for the (G, H)-isomorphism game. Then p is a winning quantum correlation if and only if p is input-output symmetric and M^p is a CS_+-isomorphism matrix.
Proof. Suppose that p is a winning quantum correlation for the (G, H)-isomorphism game. By the same reasoning as in the proof of Theorem 3.1, we have that M^p satisfies Equations (3), (4), and (5). So it remains to show that M^p ∈ CS_+.
Since p is a quantum correlation, by Theorem 2.2 and Equation (13) we have that M^p_{gh,g'h'} = Tr(ρ^† P_gh ρ P_{g'h'}).
The expression on the right-hand side above is the Hilbert-Schmidt inner product of the matrices ρ^{1/2} P_gh ρ^{1/2} and ρ^{1/2} P_{g'h'} ρ^{1/2}, which are both positive semidefinite. Thus M^p is a Gram matrix of positive semidefinite matrices, i.e., it is completely positive semidefinite, as desired.
Conversely, suppose that p is input-output symmetric and M^p is a CS_+-isomorphism matrix. It immediately follows from the definition of isomorphism matrices that p is a winning correlation for the (G, H)-isomorphism game, so it remains to show that p is quantum. Since M^p ∈ CS_+, there exist positive semidefinite matrices ρ_gh for g ∈ V_G and h ∈ V_H such that M^p_{gh,g'h'} = Tr(ρ_gh ρ_{g'h'}). Since ∑_{h,h'∈V_H} M^p_{gh,g'h'} = 1 for all g, g' ∈ V_G, we can apply Lemma 3.2. Therefore, there exists a matrix ρ such that ∑_{h∈V_H} ρ_gh = ρ for all g ∈ V_G and Tr(ρ^†ρ) = 1. This implies that ρ is positive semidefinite and that the column space of ρ_gh is contained in the column space of ρ for all g ∈ V_G, h ∈ V_H. Therefore, by restricting to a subspace if necessary, we may assume that ρ is positive definite, and thus ρ^{-1/2} exists. Define P_gh = ρ^{-1/2} ρ_gh ρ^{-1/2}, and note that this is positive semidefinite. We have that ∑_{h∈V_H} P_gh = ρ^{-1/2} (∑_{h∈V_H} ρ_gh) ρ^{-1/2} = ρ^{-1/2} ρ ρ^{-1/2} = I. Therefore, {P_gh : h ∈ V_H} is a valid quantum measurement. We would like to also show that {P_gh : g ∈ V_G} is a valid quantum measurement; to do this it suffices to show that ∑_g ρ_gh = ρ for all h ∈ V_H. Since M^p is an isomorphism matrix, we have that ∑_{g,g'∈V_G} M^p_{gh,g'h'} = 1 for all h, h' ∈ V_H. Thus we can apply Lemma 3.2 again to obtain a positive semidefinite matrix ρ' such that ∑_{g∈V_G} ρ_gh = ρ' for all h ∈ V_H. However, we must have that ∑_{g∈V_G} ∑_{h∈V_H} ρ_gh = |V_G| ρ = |V_H| ρ'. Since the sum of the entries of M^p is equal to both |V_G|^2 and |V_H|^2, depending on which of Equations (3) and (4) you consider, we have that |V_G| = |V_H|, and therefore ρ' = ρ and ∑_g ρ_gh = ρ as desired. Therefore, {P_gh : g ∈ V_G} is a valid quantum measurement. Define P_hg = P_gh and P_{gg'} = 0 = P_{hh'} for all g, g' ∈ V_G and h, h' ∈ V_H, and let ψ = vec(ρ). Then it is easy to see that for g, g' ∈ V_G and h, h' ∈ V_H,

ψ^† (P_gh ⊗ P_{g'h'}^T) ψ = Tr(ρ^† P_gh ρ P_{g'h'}) = M^p_{gh,g'h'} = p(h, h' | g, g').

Switching g and h, or g' and h', does not change any of the values above, and so the correlation p can be realized by Alice performing the measurements P_x = (P_xy : y ∈ V_G ∪ V_H) and Bob performing the corresponding transposed measurements on the shared state ψ. Thus p is a quantum correlation, and we have proven the theorem.
As with Theorem 3.1, the theorem above is similar to a more general characterization of synchronous quantum correlations given in [26], but we are able to use a smaller matrix due to input-output symmetry.
By the containments CP ⊆ CS + ⊆ DN N ⊆ S + and Theorems 3.1 and 3.3, we have the following chain of implications: We will see in Section 7 that none of these implications can be reversed.

From isomorphism matrices to isomorphism maps
Our main technique for studying DN N - and S + -isomorphism is a correspondence between isomorphism matrices and linear maps from the space of matrices indexed by V G to the space of matrices indexed by V H .

Linear maps preserving matrix cones and their properties
A linear map Φ : C m×m → C n×n is positive if it maps psd matrices to psd matrices, i.e., Φ(X) is psd whenever X is psd. We recall the following well-known theorem; see, e.g., [7]: Lemma 4.1. For a linear map Φ : C m×m → C n×n the following assertions are equivalent: (i) Φ is m-positive, i.e., the map id m ⊗ Φ is positive.
(ii) The Choi matrix of Φ, defined by (iii) Φ admits a Kraus decomposition, i.e., there exist matrices (iv) Φ is completely positive, i.e., the map id r ⊗ Φ is positive for all r ∈ N.
The implications (i) =⇒ (ii), (iii) =⇒ (iv), and (iv) =⇒ (i) are straightforward, whereas (ii) =⇒ (iii) follows by considering a Cholesky factorization of the Choi matrix C Φ . Furthermore, we note that there is no relation between completely positive maps and the cone of completely positive matrices.
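The equivalence between complete positivity of Φ and positive semidefiniteness of its Choi matrix is straightforward to verify numerically. The following Python sketch (using NumPy; the qubit dephasing map chosen as the example is our own illustration, not taken from the text) builds the Choi matrix of a map given as a function and checks that it is psd:

```python
import numpy as np

def choi_matrix(phi, m, n):
    """Choi matrix C_Phi = sum_{i,j} E_ij (x) Phi(E_ij) of a linear map
    Phi: C^{m x m} -> C^{n x n}, stored as an (mn) x (mn) array."""
    C = np.zeros((m * n, m * n), dtype=complex)
    for i in range(m):
        for j in range(m):
            E = np.zeros((m, m), dtype=complex)
            E[i, j] = 1.0
            C[i * n:(i + 1) * n, j * n:(j + 1) * n] = phi(E)
    return C

# Example map given by an explicit Kraus decomposition Phi(X) = sum_k K_k X K_k^dag
# (a qubit dephasing map; this example is our own choice).
K = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
phi = lambda X: sum(Kk @ X @ Kk.conj().T for Kk in K)

C = choi_matrix(phi, 2, 2)
eigs = np.linalg.eigvalsh(C)
assert eigs.min() > -1e-12          # Choi matrix psd <=> Phi completely positive
assert np.isclose(np.trace(C).real, 2.0)
```

Any map built from Kraus operators passes this test, in line with the implication (iii) =⇒ (ii) above.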
A linear map Φ : C m×m → C n×n is trace-preserving (TP) if Tr(Φ(X)) = Tr(X) for all X ∈ C m×m , and unital if Φ(I m ) = I n . Note that if Φ : C m×m → C n×n is both trace-preserving and unital, we necessarily have that m = n, since n = Tr(I n ) = Tr(Φ(I m )) = Tr(I m ) = m. Completely positive trace-preserving (CPTP) linear maps are important in the theory of quantum information because they are in some sense the most general type of map allowed by the formalism of quantum mechanics.
In the next lemma we collect some useful properties of completely positive maps in terms of their Kraus decompositions that we invoke throughout this section.The proofs are easy and are omitted.
Then, we have that: In particular, the adjoint of a completely positive map is also completely positive (as it itself admits a Kraus decomposition).
(ii) Φ is unital if and only if i K i K † i = I. (iii) Φ is trace-preserving if and only if i K † i K i = I. This is also equivalent to Φ † (I) = I.
As mentioned in Lemma 4.1, a linear map Φ : C m×m → C n×n is completely positive if and only if its Choi matrix is psd.It turns out that an analogous property holds for all cones of interest to this work.
Specifically, if Φ : C m×m → C n×n is a linear map and K is a matrix cone, we say that Φ is K-preserving if X ∈ K implies that Φ(X) ∈ K. Using this terminology, a linear map Φ is positive if and only if it is S + -preserving. We now prove a sufficient condition for showing a map is K-preserving: Lemma 4.3. For a matrix cone K ∈ {CP, CS + , DN N , S + } and a linear map Φ : Note that the matrix Thus, since the all ones matrix J is in K for all K ∈ {CP, CS + , DN N , S + }, and all such K are easily seen to be closed under tensor products and taking principal submatrices, we have that C • (X ⊗ J n ) ∈ K whenever X ∈ K. Furthermore, equation (15) implies that the (ℓ, k) entry of Φ(X) is obtained by summing up the entries of the (ℓ, k) block of the block matrix C • (X ⊗ J n ), and it is routine to check that K is closed under this operation for all K ∈ {CP, CS + , DN N , S + }.
We are now ready to prove the following analogue of Lemma 4.1 for the cones of interest: Lemma 4.4.Consider a matrix cone K ∈ {CP, CS + , DN N , S + } and a linear map Φ : The following assertions are equivalent: (ii) The Choi matrix C Φ lies in K.
(iii) Φ is completely K-preserving, i.e., the map id r ⊗ Φ is K-preserving for all r ∈ N.
Proof.(i) =⇒ (ii).The Choi matrix of Φ is equal to where (ii) =⇒ (iii).Assume that C := C Φ ∈ K.By Lemma 4.3 it suffices to show that the Choi matrix of id r ⊗ Φ is in K for all r.Using id rm = id r ⊗ id m , we have that the Choi matrix of id r ⊗ Φ is equal to Up to a permutation of the tensors, this is equal to which is in K since K is closed under tensor products.Since permuting the tensors is equivalent to conjugation by some permutation matrix, i.e., consistently relabeling rows and columns, this does not change whether the matrix is in K and thus we have proven the claim.(iii) =⇒ (i).Straightforward.
We conclude this section with a lemma we use repeatedly in the remainder of this article. (2) D ij = 0 whenever v i ≠ u j . Proof. Consider the linear maps f (w) = D(u • w) and g(w) = v • (Dw). Letting D j be the j th column of D and e j the j th standard basis vector, it follows that f (e j ) = u j D j and g(e j ) = v • D j . Consequently, which is in turn equivalent to D ij = 0 whenever u j ≠ v i . Lastly, to get the third equivalence, note that Through this correspondence Lemma 4.5 immediately yields the following: (2) M gh,g h = 0 whenever X gg ≠ Y hh .

The Choi matrix as a K-isomorphism matrix
The following assertions are equivalent: (1) The Choi matrix of Φ is a K-isomorphism matrix from G to H.
Remark 4.8. We conclude this section by listing some further useful properties satisfied by isomorphism maps. First, note that (17) and (18) further imply that since Ā G = J − I − A G and J is the identity with respect to the Schur product. Furthermore, since Φ(J) = J = Φ † (J) it follows respectively by (17) and (18) that: Lastly, the fact that Φ is sum-preserving combined with Φ(I) = I shows that G and H have the same number of vertices. Analogously, Φ(A G ) = A H implies that G and H have the same number of edges.

Some additional properties of isomorphism maps
Lemma 4.9.Consider a linear map Φ : C n×n → C n×n which is completely positive, trace-preserving, and unital.Then, for any pair of matrices X, Y such that Φ(X) = Y and Φ † (Y ) = X we have that , for all matrices W.
Proof. The presented proof is a small modification of the arguments in [28]. As Φ is completely positive it admits a Kraus decomposition Φ(Z) = ∑ m i=1 K i ZK † i . The crux of the proof is to show that For this, set where to get the last equality we used the assumption Φ(X) = Y and that Φ(X † ) = Y † , the latter following by Lemma 4.2 (iv). As Z is psd (since it is a sum of psd matrices) we have that where for the last equality we used that Tr(Φ(XX † )) = Tr(XX † ) since Φ is trace-preserving.
By assumption we also have that Φ † (Y ) = X, and Φ † is trace-preserving as Φ is assumed to be unital. In a similar manner as above we get that Combining (25) and (26) we get that Tr(Z) = Tr(XX † − Y Y † ) = 0, and as Z is psd, this further implies that Z = 0. As Z is the sum of psd matrices, every term in the sum Lastly, using that K i X = Y K i for all i we have that for any matrix W : Finally, repeating the above argument with the matrix i (XK Note that the conditions above imply that φ(Ā G ) = Ā H , where Ā G and Ā H are the adjacency matrices of the complements of G and H respectively. Furthermore, they also imply that φ(A G • N ) = A H • φ(N ) for all N ∈ ÂG . Note that if it exists, a partial equivalence φ of graphs G and H is completely determined since φ(A G ) = A H .
If φ is an equivalence of graphs G and H, then any function of I, A G , and J using the operations of addition, scalar multiplication, matrix multiplication, entrywise multiplication, and conjugate transposition is mapped to the same function with A G replaced by A H . This obviously still holds if we restrict to functions in which entrywise multiplication can only be used when one of the factors is I or A G . Since the space of matrices that can be written as such functions is exactly the partially coherent algebra of G, we have that the restriction of the equivalence φ to ÂG is a partial equivalence of G and H. Thus, any pair of equivalent graphs are also partially equivalent, as one would expect. Proof. Assume that G ∼ =S + H and let M be an S + -isomorphism matrix. Let Φ : C V G ×V G → C V H ×V H be the linear map whose Choi matrix is M. As M is an S + -isomorphism matrix it follows by Theorem 4.7 that Φ is an S + -isomorphism map, i.e., it satisfies Conditions (16)-(19). Furthermore, as already noted in Remark 4.8, the above properties also imply that Φ

The characterization
Additionally, by Theorem 4.7 the adjoint map Φ † is a K-isomorphism map from H to G, i.e., it satisfies: Now, (28) combined with Φ † (J) = J implies that Φ † (I) = I and Φ † (A H ) = A G . Summarizing, we have determined that Φ M is completely positive, unital, trace-preserving, and Consequently, Lemma 4.9 implies that for any W ∈ C n×n we have that Φ(XW ) = Φ(X)Φ(W ) and Φ(W X) = Φ(W )Φ(X) whenever X ∈ {I, J, A G }. Furthermore, by Lemma 4.6 and Condition (5) of isomorphism matrices, we have that for any W ∈ C n×n , and similarly for Φ † . Consequently, any finite expression involving I, A G , Ā G and the operations of addition, scalar multiplication, matrix multiplication, and Schur multiplication where at least one of the factors is I or A G , will be mapped by Φ to the same expression with A G and Ā G replaced with A H and Ā H respectively. Further, Φ † is the inverse of Φ on such expressions. This means that the restriction of Φ to the partially coherent algebra of G is a partial equivalence. Conversely, suppose that φ : ÂG → ÂH is a partial equivalence of G and H. By Lemma A.1, there exists a unitary matrix U such that φ(X) = U XU † for all X ∈ ÂG . Let φ̃ be the map X → U XU † on all of C V G ×V G . Obviously, φ and φ̃ agree on ÂG . Also, φ̃ is a CPTP unital map with adjoint φ̃ † (X) = φ̃ −1 (X) = U † XU . Let Π : C V G ×V G → ÂG be the orthogonal projection onto the partially coherent algebra of G and define the composition Φ = φ̃ • Π. Since Π is a CPTP unital map by Lemma A.4, we have that Φ is the composition of two CPTP unital maps and is thus CPTP and unital itself. We show that Φ is an S + -isomorphism map from G to H, and thus, Theorem 4.7 implies that its Choi matrix is an S + -isomorphism matrix.
We first aim to show that Φ satisfies property (18). We have that where the second-to-last equality uses the fact that Π(X) ∈ ÂG . So Φ satisfies property (18).
We can similarly show that Φ(I • X) = I • Φ(X) and Φ(A G • X) = A H • Φ(X), i.e., that Φ satisfies property (17). Therefore Φ is an S + -isomorphism map from G to H and we are done. Proof. Assume that G ∼ =S + H. By Theorem 5.2 there exists an S + -isomorphism map Φ from G to H. As we have already seen, the map Φ satisfies Φ(

Necessary conditions for S + -isomorphism
and the claim follows immediately by Lemma 4.10.
We note that the above result in the special case of quantum isomorphic graphs was proved in [1]. Moreover, there are other types of cospectrality that one can consider. Another common cospectrality relation is in terms of the (combinatorial) Laplacian of a graph G, defined as the matrix L = D − A G , where D is the diagonal matrix of vertex degrees and A G is the adjacency matrix.

Proof. It is easy to see that if
which is of course the Laplacian of H. Similarly, we have that Φ † (I and by Lemma 4.10 this implies that the Laplacians of G and H have the same eigenvalues.
One can similarly show that S + -isomorphic graphs are cospectral with respect to their signless or normalized Laplacians, as well as many other similarly constructed matrices.An important property of the Laplacian of a graph G is that the number of connected components of G is equal to the multiplicity of zero as an eigenvalue of its Laplacian [3,Proposition 1.3.7].Therefore, we have the following.
Corollary 5.5. If G ∼ =S + H, then G and H have the same number of connected components, as do their complements.
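The cited fact from [3], that the number of connected components equals the multiplicity of zero as a Laplacian eigenvalue, is easy to check in a few lines. A minimal NumPy sketch (the two-triangle example is our own illustration):

```python
import numpy as np

def n_components(A, tol=1e-8):
    """Number of connected components = multiplicity of eigenvalue 0
    of the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return int(np.sum(np.abs(np.linalg.eigvalsh(L)) < tol))

# two disjoint triangles: Laplacian spectrum {0, 0, 3, 3, 3, 3}, so 2 components
A = np.zeros((6, 6))
for b in (0, 3):
    for i, j in ((0, 1), (1, 2), (0, 2)):
        A[b + i, b + j] = A[b + j, b + i] = 1
assert n_components(A) == 2
```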
Another property preserved by S + -isomorphism has to do with the number of walks in a graph. We say that a graph G is walk-regular if the number of walks of length ℓ beginning and ending at a vertex of G is independent of the choice of vertex. Equivalently, for every ℓ ∈ N there exists a number a ℓ ∈ N such that I • A G ℓ = a ℓ I. We also say that a graph is 1-walk-regular if it is walk-regular and there exist numbers b ℓ such that A G • A G ℓ = b ℓ A G for all ℓ ∈ N. Obviously, this means that the number of walks of length ℓ starting at one end of an edge and ending at the other does not depend on the edge. It turns out that S + -isomorphism preserves both of the aforementioned properties: Lemma 5.6. If G and H are S + -isomorphic graphs, then G is walk-regular if and only if H is walk-regular. The same holds for 1-walk-regularity.
Proof. Suppose G is walk-regular and let a ℓ for ℓ ∈ N be such that I • A G ℓ = a ℓ I. By Theorem 5.2 there exists an S + -isomorphism map Φ from G to H. Then, we have that and thus H is walk-regular. Essentially the same proof works for 1-walk-regularity.
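Walk-regularity as defined above is directly testable: the diagonal of A^ℓ must be a constant vector for every ℓ, and powers up to n − 1 suffice since A satisfies its characteristic polynomial. A NumPy sketch (the two small test graphs are our own illustrations):

```python
import numpy as np

def is_walk_regular(A, tol=1e-9):
    """G is walk-regular iff diag(A^l) is constant for every l;
    powers up to n-1 suffice by the Cayley-Hamilton theorem."""
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(n - 1):
        P = P @ A
        d = np.diag(P)
        if np.abs(d - d[0]).max() > tol:
            return False
    return True

# C5 is vertex-transitive, hence walk-regular; the path P3 is not
C5 = np.zeros((5, 5))
for i in range(5):
    C5[i, (i + 1) % 5] = C5[(i + 1) % 5, i] = 1
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
assert is_walk_regular(C5) and not is_walk_regular(P3)
```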
Walk-regularity also turns out to be related to the partially coherent algebra of a graph.Below we refer to the algebra of polynomials in the adjacency matrix of a graph G as the adjacency algebra of G. Lemma 5.7.If the adjacency algebra of a graph G is equal to its partially coherent algebra, then G is connected and walk-regular.The converse implication does not hold.
Proof.Let ÂG be the partially coherent algebra of G and assume that this is equal to the adjacency algebra of G. Since J ∈ ÂG by definition, we have that J is contained in the adjacency algebra of G which happens if and only if G is connected and regular [14, Theorem 1].So it only remains to show that G is walk-regular.
Consider the subspace D = {I • X : X ∈ ÂG }.By Lemma A.2, there exists an orthogonal basis of D consisting of diagonal 01-matrices.Let {D 1 , . . ., D r } be this basis and suppose that r > 1.This implies that D 1 is not the identity matrix and therefore D 1 J ∈ ÂG is not symmetric.This contradicts the assumption that ÂG is equal to the adjacency algebra of G, which obviously contains only symmetric matrices.Therefore, we have that r = 1 and D is just the span of the identity matrix.However, since A G ∈ ÂG , we have that I • A G ∈ D. Therefore, for any ∈ N, we have that there exists a number a such that I • A G = a I, i.e., G is walk-regular.
To show that the converse does not hold, consider the 10-cycle C 10 , and let G be the graph with vertex set V (C 10 ) such that two vertices are adjacent if they are at distance one or two in C 10 (see Figure 1). Note that G is vertex-transitive and therefore walk-regular, and it is obviously connected. We will show that the adjacency matrix of C 10 is contained in the partially coherent algebra of G, but not in its adjacency algebra. For the former claim, it is straightforward to show (or simply compute) that the adjacency matrix of C 10 is equal to For the latter, if the adjacency algebra of G contains the adjacency matrix of C 10 , then it contains its entire adjacency algebra. However, the dimension of the adjacency algebra of a graph is equal to the number of distinct eigenvalues of its adjacency matrix. For C 10 this dimension is 6, but for G it is 5 (by direct computation). Thus the adjacency algebra of the latter cannot contain the adjacency matrix of the former, and we are done. Proof. It is obvious that the partially coherent algebra of G contains the adjacency algebra of G. To prove the first claim it therefore suffices to show that the adjacency algebra is S-partially coherent for S = {I, A G }. First, since G is 1-walk-regular, it is regular, and moreover it is also connected by assumption. Using the known fact that J is contained in the adjacency algebra if and only if G is connected and regular [14], we have that the all ones matrix J can be written as a polynomial in A G . Second, it is obvious that the adjacency algebra is closed under conjugate transpose. So it only remains to show that the adjacency algebra is closed under entrywise product with I and A G . Using the definition of 1-walk-regularity it follows that for any polynomial f (x) = ℓ c ℓ x ℓ , we have that , and thus we have proven the claim.
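The eigenvalue counts used in the C 10 example above ("6 for C 10 , 5 for G, by direct computation") can be verified in a few lines of NumPy:

```python
import numpy as np

def n_distinct_eigenvalues(A, tol=1e-6):
    # dimension of the adjacency algebra = number of distinct eigenvalues
    ev = np.sort(np.linalg.eigvalsh(A))
    return 1 + int(np.sum(np.diff(ev) > tol))

n = 10
C10 = np.zeros((n, n))
for i in range(n):
    C10[i, (i + 1) % n] = C10[(i + 1) % n, i] = 1

# G: vertex set of C10, two vertices adjacent iff at distance one or two in C10
G = np.zeros((n, n))
for i in range(n):
    for d in (1, 2):
        G[i, (i + d) % n] = G[(i + d) % n, i] = 1

assert n_distinct_eigenvalues(C10) == 6   # adjacency algebra of C10 has dimension 6
assert n_distinct_eigenvalues(G) == 5     # adjacency algebra of G has dimension 5
```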
To show that the converse does not hold, consider the 8-cycle C 8 and let G be the graph with vertex set V (C 8 ) such that two vertices are adjacent if they are at distance two or three in C 8 . We will show that the coherent algebra of G is equal to its adjacency algebra, but that G is not 1-walk-regular. Let A be the coherent algebra of G and let C be the coherent algebra of C 8 .
The graph C 8 is distance regular and thus both its adjacency and coherent algebras are equal to the span of its distance matrices which obviously contains A G .Thus by minimality of A, we have that A ⊆ C. On the other hand, C has dimension 5 since C 8 has diameter 4 and the adjacency algebra of G, which is contained in A, has dimension 5 since A G has 5 distinct eigenvalues.Thus we have that the adjacency algebra of G is equal to A = C.
However, the number of walks of length two between adjacent vertices of G is not constant; it depends on the distance between the vertices in C 8 . Therefore G is not 1-walk-regular.
The above two lemmas show that the property of having adjacency algebra equal to partially coherent algebra lies somewhere (strictly) in between being walk-regular and being 1-walk-regular.
Theorem 5.9.Let G be a connected 1-walk-regular graph.For any graph H we have that G ∼ =S + H if and only if H is a connected 1-walk-regular graph that is cospectral to G.
Proof. If G ∼ =S + H, it follows that H is a connected (Corollary 5.5) 1-walk-regular graph (Lemma 5.6) that is also cospectral to G (Lemma 5.3).
Conversely, suppose that H is a connected 1-walk-regular graph that is cospectral to G. Since they are cospectral, by the spectral theorem there exists a unitary matrix U such that U A G U † = A H . It is then easy to see that the map φ(X) = U XU † is an algebra isomorphism from the adjacency algebra of G to that of H. By Lemma 5.8, it follows that φ is an algebra isomorphism from ÂG to ÂH ; it remains to verify that this is a partial equivalence. Obviously, φ(X † ) = φ(X) † , and so this condition is met. We also need that φ(J) = J, but this holds because if E λ and F λ are the projections onto the λ-eigenspaces of G and H respectively, then U E λ U † = F λ , and 1 n J is the projection onto the maximal eigenspace of both G and H since they are connected and regular.
Lastly, we show that φ(I • X) = I • φ(X). As G is a connected 1-walk-regular graph, by Lemma 5.8 the partially coherent algebra of G is equal to the adjacency algebra of G and thus I • X = (Tr(X)/n)I for all X ∈ ÂG . Therefore, On the other hand, φ(X) ∈ ÂH and thus where the second equality follows from the fact that φ is trace-preserving. Thus we have shown that φ(I • X) = I • φ(X) for all X ∈ ÂG . We similarly have that A G • X = γA G where γ = Tr(A G X)/nk, where k is the common degree of G and H. Thus G and H are partially equivalent and by Theorem 5.2 we have G ∼ =S + H.
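The first step of the converse direction, obtaining a unitary U with U A G U † = A H from cospectrality alone, can be sketched numerically. In the sketch below (NumPy; the cospectral pair K 1,4 versus C 4 plus an isolated vertex is our own small illustration, and is of course not 1-walk-regular, so only this first step applies to it) U is built by matching sorted eigenspaces:

```python
import numpy as np

# Smallest cospectral pair: the star K_{1,4} and the disjoint union C4 + K1
# both have adjacency spectrum {2, 0, 0, 0, -2}.
A_G = np.zeros((5, 5)); A_G[0, 1:] = A_G[1:, 0] = 1     # K_{1,4}
A_H = np.zeros((5, 5))
for i in range(4):
    A_H[i, (i + 1) % 4] = A_H[(i + 1) % 4, i] = 1       # C4 plus isolated vertex

wG, VG = np.linalg.eigh(A_G)    # eigh returns eigenvalues in ascending order
wH, VH = np.linalg.eigh(A_H)
assert np.allclose(wG, wH)      # cospectral

U = VH @ VG.T                   # match sorted eigenspaces
assert np.allclose(U @ U.T, np.eye(5))      # U is (real) orthogonal
assert np.allclose(U @ A_G @ U.T, A_H)      # U A_G U^dag = A_H
```

Note that U is not unique: any choice of orthonormal bases within degenerate eigenspaces yields a valid intertwiner.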

Coherent algebras of graphs
The coherent algebra of a graph G, denoted A G , is defined to be the intersection of all coherent algebras containing its adjacency matrix A, i.e., the smallest coherent algebra containing A. Equivalently, this is all the matrices that can be written as a finite expression involving I, A, J, and the operations of addition, scalar multiplication, matrix multiplication, Schur multiplication, and conjugate transpose.
An isomorphism between coherent algebras A and B is a bijective linear map φ : A → B that preserves all operations of a coherent algebra, i.e., As a consequence of the above, we must have that φ(I) = I and φ(J) = J.More generally, if φ is an isomorphism of coherent algebras A and B, and A 1 , . . ., A d and B 1 , . . ., B d are the orthogonal 01-matrices forming bases of A and B respectively, then there exists a bijection f : If G and H are two graphs with respective adjacency matrices A G and A H and coherent algebras A G and A H , then we say that G and H are equivalent if there exists an isomorphism φ from A G to A H such that φ(A G ) = A H .We refer to the map φ as an equivalence of G and H.It is known [29] that two graphs are equivalent if and only if they are not distinguished by the Weisfeiler-Leman method.

The characterization
We will need the following: Lemma 6.1. Consider a doubly stochastic matrix D ∈ R d×d and column vectors u, v ∈ R d with the same multiset of entries. The following are equivalent: (1) Du = v. (2) D ij = 0 whenever v i ≠ u j . Proof. Let V be the set of indices of the largest entry of v and define U similarly. As D is doubly stochastic, the equation v i = (Du) i = j D ij u j shows that v i is a convex combination of the entries of u. If i ∈ V , no entry u j for j ∉ U can appear with nonzero weight in this convex combination. Therefore, we have that and thus 1 = Furthermore, we have that where for the first equality we use that u and v have the same multiset of entries and for the second equality we use (32). Thus, (33) holds throughout with equality, which in turn implies that Rearranging D so that the V -rows and U -columns are first, it follows by (31) and (34) that where D ′ and D ′′ are doubly stochastic matrices. The same argument can be applied to D ′′ (where V and U are now defined as the indices of the second largest entry in v and u respectively). Continuing in the same manner it follows that D ij = 0 whenever v i ≠ u j .
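A small consistent instance of Lemma 6.1 may make the block structure concrete (the vectors and matrix below are our own illustration): the doubly stochastic D mixes only within the level sets of u and v, so D ij = 0 whenever v i ≠ u j .

```python
import numpy as np

# D doubly stochastic, Du = v, u and v with the same multiset of entries
u = np.array([3.0, 3.0, 1.0])
v = np.array([3.0, 3.0, 1.0])
D = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1)
assert np.allclose(D @ u, v)
# the conclusion of the lemma: D_ij = 0 whenever v_i != u_j
assert all(D[i, j] == 0 for i in range(3) for j in range(3) if v[i] != u[j])
```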
As with Lemma 4.5, we now state a form of the above lemma in terms of maps between matrix spaces. As before this is equivalent to the above by the correspondence vec(Φ(X)) = M̃ vec(X), where Φ has Choi matrix M and M̃ hh ,gg = M gh,g h . Note that M̃ having row and column sums equal to 1 is equivalent to Φ(J) = J = Φ † (J). Lemma 6.2. Let Φ : C V G ×V G → C V H ×V H be a linear map with entrywise nonnegative Choi matrix M such that Φ(J) = J = Φ † (J). For any fixed pair of matrices X ∈ C V G ×V G and Y ∈ C V H ×V H with the same multiset of entries the following are equivalent: (1) Φ(X) = Y .
Proof. Suppose that G ∼ =DN N H and let M be a DN N -isomorphism matrix. Consider the linear map Φ whose Choi matrix is M. As M is a DN N -isomorphism matrix it follows by Theorem 4.7 that Φ is a DN N -isomorphism map. We show that Φ is the desired equivalence between the coherent algebras of G and H. The same arguments as in Theorem 5.2 apply to show that any finite expression involving I, A G , Ā G and the operations of addition, scalar multiplication, matrix multiplication, and Schur multiplication where at least one of the factors is I or A G , will be mapped by Φ to the same expression with A G and Ā G replaced with A H and Ā H respectively. Furthermore, Φ † is the inverse of Φ on such expressions. It remains to consider the case of arbitrary Schur products. For this, it suffices to show that for any pair of matrices X, Y such that Φ(X) = Y and Φ † (Y ) = X we have that However, since Φ is a DN N -isomorphism map, we have that its Choi matrix is entrywise nonnegative and Φ(J) = J = Φ † (J). This is the matrix-map analog of being doubly stochastic, and thus Φ(X) = Y and Φ † (Y ) = X implies that X and Y have the same multiset of entries. Therefore we can apply Lemma 6.2 to obtain Equation (35). We thus have that Φ(J) = J, and similarly for Φ † ; it follows that any expression in I, A G , J using addition, scalar multiplication, matrix multiplication, conjugate transposition, and Schur product is mapped by Φ to the same expression but with A G replaced with A H (and Φ † is the inverse of Φ on such expressions). Therefore the restriction of Φ to A G is an equivalence of G and H. Conversely, let φ : A G → A H be an equivalence of G and H. By Lemma A.1, there exists a unitary matrix U such that φ(X) = U XU † for all X ∈ A G . Let φ̃ be the map X → U XU † on all of C V G ×V G , and let Π : C V G ×V G → A G be the orthogonal projection onto the coherent algebra A G . The same arguments as in the proof of Theorem 5.2 imply that the composition Φ = φ̃ • Π : C V G ×V G → C V H ×V H is an S + -isomorphism map from G to H.
Thus, to show that Φ is a DN N -isomorphism map it remains to show that it is completely DN N -preserving, or equivalently, that its Choi matrix C Φ is entrywise nonnegative (we already know it is psd).
Let A 1 , . . ., A d and B 1 , . . ., B d be the 01-bases of A G and A H respectively. Then, we have that φ(A i ) = B i and, as φ is trace-preserving (cf. Lemma A.3), it follows that , where m i is the number of 1's in both A i and B i . Then, {(1/ √ m i )A i : i ∈ [d]} is an orthonormal basis for A G , and thus By (36) we clearly see that if X is entrywise nonnegative, then Φ(X) is also entrywise nonnegative. Thus the Choi matrix of Φ is doubly nonnegative and so Φ is a DN N -isomorphism map from G to H.

Necessary conditions on DN N -isomorphic graphs
Since any pair of DN N -isomorphic graphs are also S + -isomorphic, any of the necessary conditions for S + -isomorphism given in the previous section are also necessary for DN N -isomorphism. However, some of these necessary conditions can be strengthened. The d-distance graph of G is the graph with vertex set V G such that two vertices are adjacent if their distance in G is exactly d. The d-distance matrix of G is the adjacency matrix of its d-distance graph, so in particular, it has zero diagonal. Lemma 6.4. Consider two graphs G and H. Define X ℓ,i (respectively Y ℓ,i ) as the matrix whose (g, g )-entry is 1 if the number of walks of length ℓ in G from g to g is equal to i, and is otherwise zero. Moreover, let X (ℓ) and Y (ℓ) be the ℓ-distance matrices of G and H respectively. Assume that G and H are DN N -isomorphic graphs with isomorphism map Φ. Then we have that Φ(X ℓ,i ) = Y ℓ,i for all ℓ, i ∈ N and Φ(X (ℓ) ) = Y (ℓ) for all ℓ = 0, 1, . . ., diam(G). Proof. We have that Φ(A G ℓ ) = A H ℓ and Φ † (A H ℓ ) = A G ℓ , and thus A G ℓ and A H ℓ have the same multiset of entries. Let S be the set of entries of A G ℓ . Then A G ℓ = i∈S iX ℓ,i and A H ℓ = i∈S iY ℓ,i . It is then easy to see that for any j ∈ S, and similarly for Y ℓ,j . It then follows from the properties of DN N -isomorphism maps that Φ(X ℓ,i ) = Y ℓ,i for all ℓ, i. The second claim holds for ℓ = 0, 1, and we proceed by induction. It is easy to see that and thus the claim follows from the properties of Φ and the obvious induction argument.
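The ℓ-distance matrices used throughout this section are straightforward to compute; a NumPy sketch for connected graphs (detecting reachability via positivity of powers of A + I; the C 5 example is our own illustration):

```python
import numpy as np

def distance_matrices(A):
    """l-distance matrices X^(l) of a connected graph with adjacency matrix A,
    computed from positivity of powers of (A + I)."""
    n = A.shape[0]
    dist = np.full((n, n), -1)
    dist[np.eye(n, dtype=bool)] = 0
    reach = np.eye(n)
    for d in range(1, n):
        reach = reach @ (A + np.eye(n))       # positive iff distance <= d
        dist[(reach > 0) & (dist < 0)] = d
    return [np.asarray(dist == d, dtype=float) for d in range(dist.max() + 1)]

# C5 has diameter 2: its distance matrices are I, A, and the 2-distance matrix
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
X = distance_matrices(A)
assert len(X) == 3
assert np.allclose(X[0], np.eye(5)) and np.allclose(X[1], A)
assert np.allclose(sum(X), np.ones((5, 5)))   # the distance matrices partition J
```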
We saw in Section 5.3 that S + -isomorphism preserves the property of being 1-walk-regular. For DN N -isomorphism, an even stronger property known as distance regularity is preserved. A connected graph G of diameter d is distance regular if there exist integers p k ij for i, j, k ∈ {0, 1, . . ., d} such that the number of vertices w at distance i from u and distance j from v is equal to p k ij whenever u and v are at distance k, i.e., whenever dist(u, v) = k we have that |N i (u) ∩ N j (v)| = p k ij . Letting X (ℓ) for ℓ ∈ {0, 1, . . ., d} be the ℓ-distance matrices of G, one can see that this definition is equivalent to the equations In other words, the distance matrices are a 01 orthogonal basis of the algebra they generate. From here it is easy to see that this is a coherent algebra. Furthermore, since the distance matrices of G are contained in the coherent algebra of G by Equation (37), the coherent algebra generated by the distance matrices of G is equal to the coherent algebra of G. Thus a graph is distance regular if and only if its coherent algebra is equal to the span of its distance matrices, in which case all matrices in the coherent algebra are symmetric and thus they all commute. This allows us to prove the following: Proof. By assumption, there exists a DN N -isomorphism map Φ = Φ M , where M is a DN N -isomorphism matrix. It suffices to show that G being distance regular implies that H is distance regular. So let G be distance regular and let X (ℓ) and Y (ℓ) be the ℓ-distance matrices of G and H respectively. By Lemma 6.4, we have that Φ(X (ℓ) ) = Y (ℓ) for all ℓ. Since G is distance regular, the X (ℓ) form a basis of A G . Since the image of A G under Φ is A H , and since the restriction of Φ to A G is a linear bijection, we have that Φ maps any basis of A G to a basis of A H . Therefore, the distance matrices of H form a basis of A H and thus H is distance regular.
In Section 5.3 we showed that the partially coherent algebra of a connected 1-walk-regular graph is equal to its adjacency algebra.Analogously, it is well known that the coherent algebra of a distance regular graph is equal to its adjacency algebra [2].We will not give a proof of this here, but it suffices to show that the distance matrices of a distance regular graph are polynomials in its adjacency matrix, and this can be done with induction.As one might expect, this allows us to prove an analog of Theorem 5.9 for DN N -isomorphism.Theorem 6.6.Let G be a distance regular graph.If H is a graph, then G ∼ =DN N H if and only if H is a distance regular graph that is cospectral to G.
Proof. The only if direction follows from Lemmas 5.3 and 6.5. For the other direction, suppose that G and H are cospectral distance regular graphs with eigenvalues λ 1 ≥ . . . ≥ λ n for n = |V G | = |V H |, and let d be the diameter of both graphs (it is well known that the diameter of a distance regular graph is one less than the number of distinct eigenvalues [2], so this is the same for G and H). Note that the unique eigenvector for λ 1 (up to scaling) is the constant vector since the graphs are connected and regular. Also let X (ℓ) and Y (ℓ) be the ℓ-distance matrices of G and H respectively. Since G and H are cospectral, there exists a unitary matrix U such that U A G U † = A H . It is then immediate that the map φ(X) = U XU † is an algebra isomorphism of the adjacency algebras of G and H, which are respectively equal to their coherent algebras since the graphs are distance regular. We aim to show that φ is an equivalence.
Obviously, φ(X † ) = φ(X) † , and φ(J) = J since 1 n J is the projection onto the λ 1 -eigenspace for both G and H.So we only need to show that φ(X • X ) = φ(X) • φ(X ) for all X, X ∈ A G .
For any X, X ∈ A G , we have that X = ℓ α ℓ X (ℓ) and X = ℓ α ℓ X (ℓ) for some coefficients α ℓ , α ℓ for ℓ = 0, . . ., d, since the distance matrices of G span A G by the distance regularity of G. Then Thus it suffices to show that φ(X (ℓ) ) = Y (ℓ) for all ℓ = 0, . . ., d. Since G is distance regular, there exist polynomials f ℓ for ℓ = 0, . . ., d such that f ℓ (A G ) = X (ℓ) [2, Section 2.7]. Moreover, the polynomials f ℓ only depend on the eigenvalues of the distance regular graph. Since G and H are cospectral, we have that f ℓ (A H ) = Y (ℓ) , and thus φ(X (ℓ) ) = φ(f ℓ (A G )) = f ℓ (A H ) = Y (ℓ) for all ℓ.

Separations between the various notions of isomorphism
In Equation (14) we noted that the four different types of isomorphisms we consider in this paper satisfy the following chain of implications: In our earlier work [1], we showed that the first implication cannot be reversed, i.e., that there are quantum isomorphic graphs that are not isomorphic. Here we show that none of the other implications can be reversed. We begin by showing that S + -isomorphism does not imply DN N -isomorphism.
The first graph we consider is the 4-cube: the graph whose vertices are the binary strings of length 4, two being adjacent if they differ in exactly one position. This is a well-known distance-transitive (and therefore distance regular) graph. Less well known is the Hoffman graph, which is the unique graph cospectral to the 4-cube. The Hoffman graph is not distance regular but is 1-walk-regular. Both graphs are shown in Figure 7. Proof. Let G be the 4-cube and H the Hoffman graph. Since G is distance regular it is 1-walk-regular, and we already noted above that the Hoffman graph is 1-walk-regular. Since they are cospectral, by Theorem 5.9 they are S + -isomorphic. However, since G is distance regular but H is not, by Theorem 6.6 they are not DN N -isomorphic.
The next separation we will show is between quantum and DN N -isomorphism. The graphs we will use are the Cartesian product of K 4 with itself, and the Shrikhande graph. The Cartesian product of graphs G and H, denoted G □ H, has vertex set V G × V H , and vertices (g, h) and (g , h ) are adjacent if (g = g and h ∼ h ) or (g ∼ g and h = h ). For G = H = K 4 , the vertices of the graph can be thought of as lying in a 4 × 4 grid, such that two are adjacent if they are in the same row or column. From this description it is easy to see that K 4 □ K 4 contains K 4 as a subgraph.
The graph K 4 □ K 4 is what is known as a strongly regular graph, which is just a distance regular graph of diameter two. Equivalently, an n-vertex, k-regular graph G is strongly regular if there exist numbers λ and µ such that any two adjacent vertices of G share λ common neighbors, and any two distinct non-adjacent vertices of G share µ common neighbors. The numbers (n, k, λ, µ) are called the parameters of a strongly regular graph and these completely determine its spectrum. The parameters of K 4 □ K 4 are (16, 6, 2, 2).
The Shrikhande graph, pictured in Figure 7, is also a strongly regular graph with parameters (16, 6, 2, 2), and thus it is cospectral to K4 □ K4. We will show that these two graphs are DNN-isomorphic but not quantum isomorphic, thus separating these two relations.

Theorem 7.2. There exist graphs G and H that are DNN-isomorphic but not quantum isomorphic, i.e., G ≅_{DNN} H ⇏ G ≅_q H.
Proof. The Shrikhande graph and K4 □ K4 are strongly regular graphs with the same parameters and are thus cospectral. It then follows from Theorem 6.6 that G ≅_{DNN} H. It remains to show that these two graphs are not quantum isomorphic.
In [19] it is shown that if G ≅_q H, then G and H admit the same number of homomorphisms from any planar graph K. A homomorphism from K to G is simply an adjacency-preserving map from V_K to V_G. In the case where K is a complete graph, the existence of a homomorphism from K to G is equivalent to G containing K as a subgraph. It is easy to see that the graph K4 □ K4 contains K4 as a subgraph, whereas the Shrikhande graph does not. Thus the Shrikhande graph admits no homomorphism from K4, whereas K4 □ K4 admits some positive number of homomorphisms from K4. As K4 is planar, it follows that K4 □ K4 is not quantum isomorphic to the Shrikhande graph. We remark that the result of [19] used to prove the above theorem is somewhat more powerful than needed. In fact, the above can be proved more directly using the notion of projective representations of graphs and [1, Lemma 5.20]. However, this more direct proof is somewhat tedious, and the proof above allows us to avoid introducing projective representations here.
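The subgraph obstruction used in this proof can be checked by brute force. The sketch below is plain Python; the Cayley-graph construction of the Shrikhande graph on Z4 × Z4 with connection set ±{(1,0), (0,1), (1,1)} is a standard one, stated here as an assumption.

```python
from itertools import combinations

def rook():
    # K4 □ K4: 4x4 grid, adjacent iff same row or same column
    V = [(i, j) for i in range(4) for j in range(4)]
    return {u: {v for v in V if v != u and (v[0] == u[0] or v[1] == u[1])}
            for u in V}

def shrikhande():
    # Cayley graph on Z4 x Z4 with connection set +-{(1,0),(0,1),(1,1)}
    S = [(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)]
    V = [(i, j) for i in range(4) for j in range(4)]
    return {u: {((u[0] + a) % 4, (u[1] + b) % 4) for a, b in S} for u in V}

def has_k4(adj):
    # brute force over all 4-subsets: C(16,4) = 1820 candidates
    return any(all(b in adj[a] for a, b in combinations(quad, 2))
               for quad in combinations(list(adj), 4))

print(has_k4(rook()), has_k4(shrikhande()))  # True False
```

Since K4 is planar, this single homomorphism count already separates the two graphs under quantum isomorphism.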
The above results show that none of the implications in Equation (14) can be reversed. Before moving on, we briefly mention that it is not too difficult to show that G ≅_{S+} H implies that G and H are fractionally isomorphic, i.e., that there exists a doubly stochastic matrix D such that A_G D = D A_H. If M is an S+-isomorphism matrix for G to H, then letting D_{g,h} = M_{gh,gh} works (the proof is similar to that of [1, Lemma 4.2]). Again, the reverse implication does not hold: for instance, the 6-cycle and the disjoint union of two 3-cycles are fractionally isomorphic, but not S+-isomorphic since they are not cospectral.
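The closing example can be made concrete: for two k-regular graphs on the same number of vertices, the flat doubly stochastic matrix D = J/n always witnesses fractional isomorphism, while the spectra still tell the graphs apart. A minimal numpy sketch:

```python
import numpy as np

def cycle(n):
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

A_G = cycle(6)                                  # the 6-cycle
Z = np.zeros((3, 3), dtype=int)
A_H = np.block([[cycle(3), Z], [Z, cycle(3)]])  # two disjoint 3-cycles

D = np.full((6, 6), 1 / 6)                      # doubly stochastic
assert np.allclose(A_G @ D, D @ A_H)            # fractionally isomorphic

# ...but not cospectral, hence not S+-isomorphic
print(np.round(np.linalg.eigvalsh(A_G)).astype(int))  # [-2 -1 -1  1  1  2]
print(np.round(np.linalg.eigvalsh(A_H)).astype(int))  # [-1 -1 -1 -1  2  2]
```

Both graphs are 2-regular, so A_G D = D A_H = (2/6) J, yet only the 6-cycle has −2 as an eigenvalue.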

Conic theta functions
For any matrix cone K, one can define the graph parameter [9,17]:

ϑ_K(G) = sup{ Tr(MJ) : Tr(M) = 1, M_{u,v} = 0 if u ∼ v, M ∈ K }.

Note that Tr(MJ) is equal to the sum of the entries of M, which we also denote by sum(M). For K = S+, the parameter ϑ_K is exactly the celebrated Lovász theta function, denoted simply ϑ. For K = DNN, it is equal to a variant due to Schrijver that is usually denoted ϑ′ or ϑ−. A nontrivial result [8] is that ϑ_CP is equal to the independence number of a graph, denoted α. The parameter ϑ_{CS+} was first considered in [17]; it may or may not be equal to a related parameter known as the projective packing number [23], but in general it is much less understood than the other three parameters discussed above. One of the main reasons for this is that the cone CS+ is not closed [27]. Note that if K ⊆ K′ then ϑ_K(G) ≤ ϑ_{K′}(G). In particular, we have the following chain of inequalities:

α(G) = ϑ_CP(G) ≤ ϑ_{CS+}(G) ≤ ϑ_DNN(G) ≤ ϑ_{S+}(G) = ϑ(G).

In order to reformulate K-isomorphism in terms of the graph parameter ϑ_K, we make use of the graph isomorphism product, denoted G H, which has vertex set V_G × V_H, with (g, h) ∼ (g′, h′) if rel(g, g′) ≠ rel(h, h′). In other words, vertices of G H are adjacent exactly when the corresponding entry in an isomorphism matrix for G to H is required to be zero. Note that the isomorphism product of G and H is the complement of the modular product of these graphs.
Suppose that G ≅_K H and let M be the requisite K-isomorphism matrix. We show that M is, up to a scalar, a feasible solution for ϑ_K(G H). Conversely, let M be an optimal solution for ϑ_K(G H) of value |V_G| = |V_H|. We show that the block sums Σ_{h,h′∈V_H} M_{gh,g′h′} and Σ_{g,g′∈V_G} M_{gh,g′h′} are all equal to the same constant, and thus M is (a scalar multiple of) a K-isomorphism matrix for G to H. For this, define M̃ to be the matrix with rows and columns indexed by V_G whose (g, g′) entry is given by M̃_{g,g′} = Σ_{h,h′∈V_H} M_{gh,g′h′}. Then Tr(M̃) = Σ_g Σ_h M_{gh,gh} = Tr(M) = 1, where we used that M_{gh,gh′} = 0 when h ≠ h′. Thus the sum of the eigenvalues of M̃ is 1. Since M̃ is positive semidefinite, it must have exactly one nonzero eigenvalue, which must be equal to 1, and which has e as an eigenvector. This implies that M̃ is a multiple of the all-ones matrix. By the definition of M̃, this implies that the block sums Σ_{h,h′} M_{gh,g′h′} are constant (and equal to 1/|V_G|). The same argument shows that the block sums Σ_{g,g′} M_{gh,g′h′} are all equal to 1/|V_H| = 1/|V_G|, and thus we are done.

Connection with the Lasserre hierarchy
Consider a semialgebraic set K = {x ∈ [0, 1]^n : g_i(x) ≥ 0, i = 1, …, m}. The Lasserre hierarchy is a systematic method for producing tighter and tighter approximations to conv(K ∩ {0, 1}^n). As the goal is to characterize the convex hull of the 0/1 points in K, using that x_i² = x_i we may assume that each g_i is multilinear, so that its monomials are indexed by subsets of [n]. For t ≥ 0, the t-th level of the Lasserre hierarchy is an SDP defined as the set of all vectors y = (y_I), I ⊆ [n], |I| ≤ 2t, that satisfy:

M_t(y) ⪰ 0,  M_{t−⌈deg(g_i)/2⌉}(g_i ∗ y) ⪰ 0 (i ∈ [m]),  y_∅ = 1.
For completeness, we recall that M_t(y) is the matrix indexed by all sets I ⊆ [n] with |I| ≤ t, whose (I, J) entry is given by y_{I∪J}. Furthermore, M_{t−⌈deg(g_i)/2⌉}(g_i ∗ y) is the matrix indexed by all I ⊆ [n] with |I| ≤ t − ⌈deg(g_i)/2⌉, whose (I, J) entry is given by Σ_{K⊆[n]} g_i(K) y_{I∪J∪K}. Deciding graph isomorphism is equivalent to the feasibility of the following quadratic integer program in 0/1 variables X_{gh} for g ∈ V_G, h ∈ V_H:

Σ_h X_{gh} = 1 ∀g,  Σ_g X_{gh} = 1 ∀h,
X_{gh} X_{g′h′} = 0 if rel(g, g′) ≠ rel(h, h′).  (40)

We proceed to apply the Lasserre hierarchy to the semialgebraic set obtained by dropping the integrality constraints. We only consider the first level (i.e., t = 1), defined in terms of the variables y_∅, y_{(g,h)}, y_{{(g,h),(g′,h′)}}.
The first constraint is that M_1(y) ⪰ 0. This matrix is indexed by ∅ and the pairs (g, h); the entries in the row indexed by ∅ are the y_{(g,h)}, and the ((g, h), (g′, h′)) entry is y_{{(g,h),(g′,h′)}}. Moreover, the diagonal is equal to the row indexed by ∅. To handle equality constraints in the Lasserre hierarchy we write them as two inequalities. As an example, consider Σ_h X_{gh} − 1 ≥ 0. As this has degree one (and we consider level t = 1), we get the constraint M_0(g_i ∗ y) ⪰ 0. This is a trivial matrix: it is indexed only by the empty set, so it is just the scalar Σ_K g_i(K) y_K. Specializing to the polynomial Σ_h X_{gh} − 1 ≥ 0 (and its negation) gives the constraint Σ_h y_{(g,h)} = 1 for all g, and symmetrically, Σ_g y_{(g,h)} = 1 for all h.
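Collecting the pieces, the first level of the hierarchy for graph isomorphism can be summarized as follows; this is a reconstruction of constraints (44)–(48) from the surrounding discussion, so the exact numbering and ordering in the original may differ.

```latex
\begin{align}
  M_1(y) &\succeq 0, \qquad y_\emptyset = 1,\\
  \textstyle\sum_{h} y_{(g,h)} &= 1 \quad \forall g \in V_G,\\
  \textstyle\sum_{g} y_{(g,h)} &= 1 \quad \forall h \in V_H,\\
  y_{\{(g,h),(g',h')\}} &= 0 \quad \text{if } \operatorname{rel}(g,g') \neq \operatorname{rel}(h,h').
\end{align}
```

The last constraint is the first-level shadow of the quadratic constraints in (40).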
Proof. Let y = (y_∅, y_{(g,h)}, y_{{(g,h),(g′,h′)}}) be feasible for (44)–(48). As M_1(y) is positive semidefinite, it can be realized as the Gram matrix of a family of vectors v_∅ and v_{(g,h)}, g ∈ V_G, h ∈ V_H. The main step is to show that v_∅ = Σ_g v_{(g,h)} = Σ_h v_{(g,h)}. Toward this end, note that ⟨v_∅, Σ_h v_{(g,h)}⟩ = Σ_h y_{(g,h)} = 1, where the first equality follows from (47), and ‖Σ_h v_{(g,h)}‖² = Σ_{h,h′} y_{{(g,h),(g,h′)}} = Σ_h y_{(g,h)} = 1. Combining the above with y_∅ = 1, we get that ‖v_∅ − Σ_h v_{(g,h)}‖² = 0 for all g, and analogously that ‖v_∅ − Σ_g v_{(g,h)}‖² = 0 for all h. Lastly, it follows that the restriction of M_1(y) to the rows/columns indexed by the pairs (g, h) is an S+-isomorphism matrix, as Σ_{h,h′∈V_H} M_1(y)_{gh,g′h′} = ⟨Σ_h v_{(g,h)}, Σ_{h′} v_{(g′,h′)}⟩ = ⟨v_∅, v_∅⟩ = 1 for all g, g′ ∈ V_G, and similarly Σ_{g,g′} M_1(y)_{gh,g′h′} = 1 for all h, h′ ∈ V_H.
Conversely, let M be an S+-isomorphism matrix and let v_{(g,h)} be the vectors in a Gram decomposition of M. For any g ∈ V_G set v_g = Σ_h v_{(g,h)}, and for any h ∈ V_H define v_h = Σ_g v_{(g,h)}. As before, we can easily show that v_g = v_{g′} for all g, g′ ∈ V_G and v_h = v_{h′} for all h, h′ ∈ V_H, and thus we use v_G and v_H to refer to these two vectors. But we also have that Σ_g v_g = Σ_{g,h} v_{(g,h)} = Σ_h v_h, i.e., |V_G| v_G = |V_H| v_H, which in turn implies that v_G = v_H =: v (as S+-isomorphic graphs have the same number of vertices).
Lastly, extend M by adding an extra row/column corresponding to the vector v. It is then straightforward to check that the augmented matrix satisfies (44)–(48).
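The forward direction just proved can be exercised on a toy instance: an actual isomorphism yields a rank-one feasible point of the first level. The sketch below (plain Python with numpy; the index set and helper names are illustrative choices, not the paper's notation) builds the moment matrix from the identity isomorphism of C4 with itself and checks the first-level constraints.

```python
import numpy as np
from itertools import product

n = 4
VG = VH = list(range(n))
adj = {(i, (i + 1) % n) for i in range(n)} | {((i + 1) % n, i) for i in range(n)}
rel = lambda a, b: "eq" if a == b else ("adj" if (a, b) in adj else "non")

# a genuine isomorphism of C4 with itself: the identity map
X = {(g, h): 1 if g == h else 0 for g, h in product(VG, VH)}

# y_I = product of X over I; moment matrix indexed by {emptyset} u (VG x VH)
idx = [None] + list(product(VG, VH))          # None plays the role of the empty set
x = np.array([1] + [X[p] for p in idx[1:]])
M1 = np.outer(x, x)                           # rank one, hence psd

# feasibility checks for the first level
assert np.all(np.linalg.eigvalsh(M1) >= -1e-9)        # M1(y) psd
for g in VG:
    assert sum(X[g, h] for h in VH) == 1              # row sums
for (g, h), (g2, h2) in product(idx[1:], idx[1:]):
    if rel(g, g2) != rel(h, h2):
        assert M1[idx.index((g, h)), idx.index((g2, h2))] == 0
print("first-level feasible")
```

Since x is 0/1 and x_i² = x_i, the outer product indeed has (I, J) entry y_{I∪J}, matching the definition of M_1(y).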
Combining several results of this paper, we obtain the following:

Theorem 9.2. Let G and H be graphs. Then the following are equivalent:
(1) G ≅_{DNN} H.
(2) G and H are equivalent.
(3) ϑ−(G H) = |V_G| = |V_H|, and this value is attained.
(4) The first level of the Lasserre hierarchy for isomorphism of G and H has a nonnegative solution.
Remark 9.3. As mentioned in Section 6.1, graphs G and H are equivalent if and only if they are not distinguished by the Weisfeiler-Leman method. In turn, the latter is known to have many equivalent formulations, e.g., in terms of logic and pebbling games on graphs [6]. However, the connections to Schrijver's theta function and the first level of the Lasserre hierarchy given in the above theorem appear to be new.

A Tools from algebra
For our characterizations of DNN- and S+-isomorphism, we will need some tools from algebra. The first result was used in [10] and says that certain isomorphisms between matrix algebras can be realized as conjugation by a unitary matrix.
Lemma A.1. If A and B are self-adjoint unital subalgebras of C^{n×n} and φ : A → B is a trace-preserving isomorphism such that φ(X†) = φ(X)† for all X ∈ A, then there exists a unitary matrix U ∈ C^{n×n} such that φ(X) = U X U† for all X ∈ A.

Result 4.
Consider two graphs G and H and a matrix cone K ⊆ S+. Then G ≅_K H if and only if ϑ_K(G H) = |V_G| = |V_H| and this value is attained.
a winning strategy for the (G, H)-isomorphism game. Also, suppose that the operators P_{xy} and Q_{xy}, as well as the shared state ψ, are expressed in a Schmidt basis of ψ, and let ρ := mat(ψ). Then we have:
1. P_{xy} = Q_{xy}^T for all x, y ∈ V;
2. P_{xy} and Q_{xy} are projectors for all x, y ∈ V;
3. P_{xy} ρ = ρ P_{xy} and Q_{xy} ρ = ρ Q_{xy} for all x, y ∈ V;
4. p(y, y′ | x, x′) := ψ† (P_{xy} ⊗ Q_{x′y′}) ψ = 0 if and only if P_{xy} P_{x′y′} = 0;

Lemma 4.5.
Let D ∈ C^{m×n} be a matrix and let u ∈ C^n, v ∈ C^m. Then the following are equivalent: (1) D(u • w) = v • (Dw) for all w ∈ C^n.

Theorem 4.7.
Consider two graphs G, H, a cone of matrices K ∈ {CP, CS+, DNN, S+}, and a linear map Φ; thus, Φ(WX) = Φ(W)Y.

Lemma 4.10. Consider a linear map Φ : C^{n×n} → C^{n×n} which is completely positive, trace-preserving, and unital. Then, for any pair of Hermitian matrices X, Y such that Φ(X) = Y and Φ†(Y) = X, we have that X and Y are cospectral; furthermore, if E_λ and F_λ are the projections onto the λ-eigenspaces of X and Y respectively, then Φ(E_λ) = F_λ and Φ†(F_λ) = E_λ.

Theorem 5.2.
Two graphs G and H are partially equivalent if and only if G ≅_{S+} H.

Lemma 5.3.
If G ≅_{S+} H, then G and H have cospectral adjacency matrices, as do their complements.

Theorem 6.3.
Two graphs G and H are equivalent if and only if G ≅_{DNN} H.

Theorem 7.1.
There exist graphs G and H that are S+-isomorphic but not DNN-isomorphic, i.e., G ≅_{S+} H ⇏ G ≅_{DNN} H.

Theorem 8.1.
Consider two graphs G and H and a matrix cone K ⊆ S+. Then G ≅_K H if and only if ϑ_K(G H) = |V_G| = |V_H| and this value is attained.

First, by conditions (3) and (4), we have that sum(M) = |V_G|² = |V_H|², and thus |V_G| = |V_H|. Next, we have that M_{gh,g′h′} = 0 if rel(g, g′) ≠ rel(h, h′) by definition, and thus M_{gh,g′h′} = 0 if (g, h) ∼ (g′, h′) in G H. We also have that M ∈ K. Lastly, it follows again by (4) that Tr(M) = Σ_{g∈V_G, h∈V_H} M_{gh,gh} = Σ_{g∈V_G} Σ_{h,h′∈V_H} M_{gh,gh′} = |V_G|. Thus, setting M′ = M/Tr(M), it follows that Tr(M′) = 1 and sum(M′) = |V_G|, i.e., M′ is a feasible solution for ϑ_K(G H) of value |V_G| = |V_H|.

M̃_{g,g′} = Σ_{h,h′∈V_H} M_{gh,g′h′}. First, note that as M ∈ S+ we also have that M̃ ∈ S+. Indeed, if {v_{gh}}_{g∈V_G, h∈V_H} is a Gram decomposition of M, then {Σ_h v_{gh}}_{g∈V_G} is a Gram decomposition of the matrix M̃. Moreover, by the definition of M̃ we have that sum(M̃) = sum(M), and using that sum(M) = |V_G| we get e^T M̃ e / e^T e = sum(M̃)/|V_G| = 1. Consequently, the maximum eigenvalue of M̃ is at least 1. On the other hand, Tr(M̃) = Σ_{g∈V_G} Σ_{h,h′∈V_H} M_{gh,gh′} = Σ_{g∈V_G} Σ_{h∈V_H} M_{gh,gh} = Tr(M) = 1,

Σ_h y_{(g,h)} = 1, ∀g, (45)
Σ_g y_{(g,h)} = 1, ∀h,

a linear map and M its Choi matrix. Consider the matrix M̂ with columns indexed by V_G × V_G and rows indexed by V_H × V_H, defined as M̂_{hh′,gg′} = M_{gh,g′h′}. By Lemma 4.4, we have that M ∈ K if and only if Φ is completely K-preserving, i.e., (6) ⇔ (16).