1 Introduction

A pair of graphs G and H are isomorphic, denoted by \(G\cong H\), if there exists a bijective map from the vertex set of G to the vertex set of H that preserves adjacency and non-adjacency. The problem of deciding whether two given graphs are isomorphic is of fundamental practical interest, and at the same time, it plays a central role in theoretical computer science as one of the few problems in the class NP which is not known to be polynomial-time solvable or NP-complete.

Along with Atserias, Šámal, and Severini, the authors recently introduced a nonlocal game that captures the notion of graph isomorphism [1]. Specifically, in the (G,H)-isomorphism game there are two players, Alice and Bob, trying to convince a third party, called the verifier, that the graphs G and H are isomorphic. For this, the verifier randomly selects a pair of vertices \(x_{A}, x_{B} \in V_G \cup V_H\) and sends them to Alice and Bob respectively (we assume \(V_G\) and \(V_H\) are disjoint so that players know which graph their vertex is from). After receiving their vertices, and without communicating, Alice and Bob respond with vertices \(y_{A},y_{B} \in V_G \cup V_H\).

The players win the game if the questions they were asked and the answers they provided indeed model an isomorphism from G to H. Concretely, the first winning condition is that each player must respond with a vertex from the graph that the vertex they received was not from, i.e.,

$$\begin{aligned} x_{A} \in V_G \Leftrightarrow y_{A} \in V_H \text { and } x_{B} \in V_G \Leftrightarrow y_{B} \in V_H. \end{aligned}$$
(1)

Furthermore, letting \(g_{A}\) be the unique vertex of G among \(x_{A}\) and \(y_{A}\), \(h_{A}\) the unique vertex of H among \(x_{A}\) and \(y_{A}\), and defining \(g_{B}\) (resp. \(h_{B}\)) as the unique vertex of G (resp. H) among \(x_{B}\) and \(y_{B}\), the second winning condition is that

$$\begin{aligned} {{\,\textrm{rel}\,}}(g_{A},g_{B}) = {{\,\textrm{rel}\,}}(h_{A},h_{B}), \end{aligned}$$
(2)

where \({{\,\textrm{rel}\,}}(x,y)\) is the relationship between vertices x and y, i.e., whether they are equal, adjacent, or distinct and non-adjacent. Note that Eq. (2) encodes many constraints, e.g., if Alice and Bob are sent the same vertex of G, then they must respond with the same vertex of H, and if they are sent the endpoints of an edge of G, they must respond with the endpoints of an edge of H. Furthermore, note that we do not explicitly require that G and H have the same number of vertices.

Alice and Bob are allowed to agree on a strategy before the start of the game, but are not allowed to communicate once the game has begun. This type of game is known as a nonlocal game, since the players are usually thought of as being separated in space, which prevents them from communicating after they receive their questions. The parties only play one round of this game, and we only consider strategies that win with certainty, i.e., with probability equal to one. We refer to such strategies as perfect.

It is easy to see that responding according to an isomorphism of G and H is a perfect strategy for the (G,H)-isomorphism game. Moreover, the converse also holds (see Sect. 2.1) and thus the isomorphism game characterizes the notion of graph isomorphism. Motivated by this, in the previous work [1] we introduced the notions of quantum and non-signalling isomorphism of graphs in terms of the existence of perfect quantum and non-signalling strategies for the graph isomorphism game. Furthermore, we investigated these two relations, proving various necessary conditions for quantum isomorphism, giving a complete characterization of non-signalling isomorphism, and providing a method for constructing pairs of non-isomorphic graphs that are nevertheless quantum isomorphic.

In this work we continue our study of the graph isomorphism problem within the framework of nonlocal games. Our point of departure is a new equivalence relation on graphs, which is defined in terms of the feasibility of a certain linear conic program over an appropriate convex cone. Specifically, for any convex cone of matrices \({\mathcal {K}}\) we say that graphs G and H are \({\mathcal {K}}\)-isomorphic, and write \(G \cong _{{\mathcal {K}}} H\), if there exists a matrix M with rows and columns indexed by \(V_G \times V_H\) such that:

$$\begin{aligned} \sum _{h,h' \in V_H} M_{gh,g'h'}&= 1, \text { for all } g,g' \in V_G \end{aligned}$$
(3)
$$\begin{aligned} \sum _{g,g' \in V_G} M_{gh,g'h'}&= 1, \text { for all } h,h' \in V_H \end{aligned}$$
(4)
$$\begin{aligned} M_{gh,g'h'}&= 0, \text { if } {{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h'), \end{aligned}$$
(5)
$$\begin{aligned} M&\in {\mathcal {K}}. \end{aligned}$$
(6)

Any matrix satisfying (3)–(6) is called a \({\mathcal {K}}\)-isomorphism matrix for G to H.
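
When \({\mathcal {K}}\) is the positive semidefinite or doubly nonnegative cone, the feasibility of (3)–(6) can be checked directly with an off-the-shelf conic solver (for \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}\}\) no such tractable formulation is available). The following Python sketch, assuming the cvxpy and networkx packages and using helper names of our own choosing, sets up exactly the constraints (3)–(6):

```python
import itertools

import cvxpy as cp
import networkx as nx

def rel(A, i, j):
    # relationship of two vertices: equal, adjacent, or distinct non-adjacent
    if i == j:
        return "eq"
    return "adj" if A[i, j] else "non"

def conic_isomorphic(G, H, dnn=False):
    """Feasibility of (3)-(6) for K = S_+ (default) or K = DNN (dnn=True)."""
    AG, AH = nx.to_numpy_array(G), nx.to_numpy_array(H)
    nG, nH = AG.shape[0], AH.shape[0]
    N = nG * nH                        # rows/columns indexed by V_G x V_H via (g, h) -> g*nH + h
    M = cp.Variable((N, N), PSD=True)  # constraint (6) with K = S_+
    cons = [M >= 0] if dnn else []     # entrywise nonnegativity gives K = DNN
    # constraint (5): M[gh, g'h'] = 0 whenever rel(g, g') != rel(h, h')
    for g, h, g2, h2 in itertools.product(range(nG), range(nH), range(nG), range(nH)):
        if rel(AG, g, g2) != rel(AH, h, h2):
            cons.append(M[g * nH + h, g2 * nH + h2] == 0)
    # constraint (3): each V_H x V_H block sums to 1
    for g, g2 in itertools.product(range(nG), repeat=2):
        cons.append(cp.sum(M[g * nH:(g + 1) * nH, g2 * nH:(g2 + 1) * nH]) == 1)
    # constraint (4): the complementary blocks (fixed h, h') sum to 1
    for h, h2 in itertools.product(range(nH), repeat=2):
        cons.append(cp.sum(M[h::nH, h2::nH]) == 1)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```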

Note that the entries in a \({\mathcal {K}}\)-isomorphism matrix are not necessarily nonnegative, or even real, depending on the choice of cone \({\mathcal {K}}\). In this article we study the graph equivalences defined by the notion of \({\mathcal {K}}\)-isomorphism for four cones of matrices. The first one is the cone of positive semidefinite matrices (psd), denoted \({\mathcal {S}}_{+}\), defined as the set of Gram matrices of real vectors \(v_1, \ldots , v_n\), i.e., \(M_{ij} = v_i^Tv_j\). Second, we consider the doubly nonnegative cone, denoted \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\), which consists of entrywise-nonnegative psd matrices. Third, we consider the cone of completely positive semidefinite matrices [16], denoted \({{\mathcal{C}\mathcal{S}}_{+}}\), which consists of Gram matrices of psd matrices. Concretely, a matrix M is completely positive semidefinite if there exist Hermitian psd matrices \(\rho _1, \ldots , \rho _n\), such that \(M_{ij} = \langle \rho _i, \rho _j \rangle := {{\,\textrm{Tr}\,}}(\rho _i^\dagger \rho _j)\). Lastly, we consider the completely positive cone, denoted \({\mathcal {C}}{\mathcal {P}}\), which consists of Gram matrices of entrywise nonnegative vectors. It is straightforward to verify that

$$\begin{aligned} {\mathcal {C}}{\mathcal {P}} \subseteq {{\mathcal{C}\mathcal{S}}_{+}} \subseteq {\mathcal {D}}{\mathcal {N}}{\mathcal {N}} \subseteq {\mathcal {S}}_{+}, \end{aligned}$$
(7)

and these containments are all strict for matrices of size at least 5 [16].
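
To make these definitions concrete, the following numpy sketch (all function names are ours) tests membership in \({\mathcal {S}}_{+}\) and \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\) and builds Gram matrices certifying membership in \({\mathcal {C}}{\mathcal {P}}\) and \({{\mathcal{C}\mathcal{S}}_{+}}\); deciding membership in the latter two cones is hard in general, so we only construct members:

```python
import numpy as np

def in_S_plus(M, tol=1e-9):
    # psd test: eigenvalues of the symmetrized matrix are nonnegative
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

def in_DNN(M, tol=1e-9):
    # doubly nonnegative: psd and entrywise nonnegative
    return in_S_plus(M, tol) and M.min() >= -tol

def gram_of_vectors(V):
    # Gram matrix of the rows of V; if V >= 0 entrywise, the result lies in CP
    return V @ V.T

def gram_of_psd(rhos):
    # Gram matrix M_ij = Tr(rho_i^dagger rho_j) of Hermitian psd matrices: a CS_+ matrix
    n = len(rhos)
    return np.array([[np.trace(rhos[i].conj().T @ rhos[j]).real for j in range(n)]
                     for i in range(n)])
```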

All of these cones are of central importance to the field of mathematical optimization. Most notably, linear optimization over the cone of psd matrices is known as semidefinite programming, an important family of optimization models with extensive modeling power and efficient algorithms [10]. Additionally, linear optimization over the completely positive cone corresponds to completely positive programming, a family of optimization models that are hard to solve but have significant expressive power [5].

Summary of results and related work. In our first result we express both classical and quantum graph isomorphism as \({\mathcal {K}}\)-isomorphism over appropriate cones of matrices. Specifically, we have that:

Result 1

For any pair of graphs G, H we have that G and H are isomorphic if and only if \(G\cong _{{\mathcal {C}}{\mathcal {P}}} H\), and furthermore, G and H are quantum isomorphic if and only if \(G\cong _{{\mathcal{C}\mathcal{S}}_{+}} H\).

The fact that quantum isomorphism is equivalent to the feasibility of a linear conic program over the cpsd cone is not surprising in view of the strong connections between the cpsd cone and the set of quantum correlations, e.g. see [16, 17, 22, 24]. On the other hand, the formulation of graph isomorphism as a feasibility problem over the completely positive cone is to the best of our knowledge new. A different formulation of graph isomorphism over the copositive cone, the dual of the completely positive cone, is given in [11].

Furthermore, in [22], the notion of \({\mathcal {K}}\)-homomorphism for various cones \({\mathcal {K}}\) was considered. These relations are related to homomorphisms in the same way that \({\mathcal {K}}\)-isomorphisms are related to isomorphisms. In particular, \({\mathcal {C}}{\mathcal {P}}\)- and \({{\mathcal{C}\mathcal{S}}_{+}}\)-homomorphisms are equivalent to classical and quantum homomorphisms.

As the problem of deciding whether two graphs are isomorphic (or quantum isomorphic) is hard, it is important to identify tractable necessary and/or sufficient conditions for checking this. In view of our first result, we study the notion of \({\mathcal {K}}\)-isomorphism in the case of the doubly nonnegative and positive semidefinite cones. Moreover, by the chain of inclusions \({\mathcal {C}}{\mathcal {P}} \subseteq {{\mathcal{C}\mathcal{S}}_{+}} \subseteq {\mathcal {D}}{\mathcal {N}}{\mathcal {N}} \subseteq {\mathcal {S}}_{+}\), both \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)- and \({\mathcal {S}}_{+}\)-isomorphism are tractable relaxations of quantum (and of classical) graph isomorphism. The main contribution of this work is a complete algebraic characterization of the graphs that are \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)- and \({\mathcal {S}}_{+}\)-isomorphic respectively, in terms of isomorphisms of appropriate matrix algebras.

A linear subspace of \({\mathbb {C}}^{n \times n}\) which is also closed under matrix multiplication is an algebra. A subalgebra \({\mathcal {A}}\) of \({\mathbb {C}}^{n \times n}\) is called coherent if it is unital (i.e., contains the identity matrix), contains the all-ones matrix J, is closed under Schur product, and is self-adjoint (i.e., closed under conjugate transpose). As the intersection of two coherent algebras is a coherent algebra, we can define the coherent algebra of a graph G, denoted by \({\mathcal {A}}_G\), as the intersection of all coherent algebras containing the adjacency matrix of G.

Result 2

Consider two graphs G, H with adjacency matrices \(A_G\) and \(A_H\) and coherent algebras \({\mathcal {A}}_G\) and \({\mathcal {A}}_H\) respectively. Then, we have that \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\) if and only if there exists an isomorphism between the coherent algebras \({\mathcal {A}}_G\) and \({\mathcal {A}}_H\) that maps \(A_G\) to \(A_H\).

As it turns out, the notion of \({\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\)-isomorphism coincides with an equivalence relation on graphs introduced in 1968 by Weisfeiler and Leman [27], known today as the 2-dimensional Weisfeiler–Leman method. Specifically, it is known [27] that there exists an isomorphism between the coherent algebras \({\mathcal {A}}_G\) and \({\mathcal {A}}_H\) that maps \(A_G\) to \(A_H\) if and only if the graphs are not distinguished by the Weisfeiler–Leman method.
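
For reference, we include a minimal Python sketch of the 2-dimensional Weisfeiler–Leman color refinement (our own implementation, not the one from [27]); the stable coloring it computes corresponds to the coherent algebra of the graph, and to compare two graphs one runs the refinement on their disjoint union and compares the resulting color multisets.

```python
from itertools import product

def two_dim_wl(A):
    """Stable coloring of ordered vertex pairs under 2-dimensional Weisfeiler-Leman.

    A is an n x n 0/1 adjacency matrix; initial colors encode rel(i, j) and are
    refined by the multiset of color pairs (color(i,k), color(k,j)) over all k."""
    n = len(A)
    color = {(i, j): ("eq" if i == j else "adj" if A[i][j] else "non")
             for i, j in product(range(n), repeat=2)}
    while True:
        refined = {(i, j): (color[i, j],
                            tuple(sorted((color[i, k], color[k, j]) for k in range(n))))
                   for i, j in color}
        # canonicalize the refined colors as integers
        names = {c: t for t, c in enumerate(sorted(set(refined.values()), key=repr))}
        refined = {p: names[c] for p, c in refined.items()}
        if len(set(refined.values())) == len(set(color.values())):
            return refined                      # the coloring is stable
        color = refined
```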

In Sect. 7 we characterize \({\mathcal {S}}_{+}\)-isomorphism by introducing an appropriate generalization of coherent algebras of graphs. Specifically, we say that a subalgebra \({\mathcal {A}}\) of \({\mathbb {C}}^{n\times n}\) is partially coherent (with respect to \( \{I, A_G\}\)) if it is unital, self-adjoint, contains the all-ones matrix, and is closed under Schur multiplication with the matrices I and \(A_G\). As with coherent algebras, the intersection of two partially coherent algebras is again a partially coherent algebra. This allows us to define the partially coherent algebra of a graph G, denoted \(\hat{{\mathcal {A}}}_G\), to be the minimal partially coherent algebra containing \(A_G\). We show the following:

Result 3

Consider two graphs G, H with adjacency matrices \(A_G\) and \(A_H\) and partially coherent algebras \(\hat{{\mathcal {A}}}_G\) and \(\hat{{\mathcal {A}}}_H\) respectively. Then, we have that \(G \cong _{{\mathcal {S}}_{+}} H\) if and only if there exists a linear bijection \(\phi : \hat{{\mathcal {A}}}_G \rightarrow \hat{{\mathcal {A}}}_H\) such that

  1. \(\phi (M^\dagger ) = \phi (M)^\dagger \) for all \(M \in \hat{{\mathcal {A}}}_G\);

  2. \(\phi (MN) = \phi (M)\phi (N)\) for all \(M,N \in \hat{{\mathcal {A}}}_G\);

  3. \(\phi (I) = I\), \(\phi (A_G) = A_H\), and \(\phi (J) = J\);

  4. \(\phi (M\bullet N) = \phi (M)\bullet \phi (N)\) for all \(M \in \{I,A_G\}\) and \(N \in \hat{{\mathcal {A}}}_G\).

The notion of \({\mathcal {S}}_{+}\)-isomorphism appears to be a new graph relation, which we show implies several forms of cospectrality, e.g. see Lemmas 6.3 and 6.4. Moreover, \({\mathcal {S}}_{+}\)-isomorphism, when restricted to 1-walk-regular graphs, is equivalent to cospectrality of adjacency matrices (cf. Theorem 6.9).

Our main technique for studying \({\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\)- and \({\mathcal {S}}_{+}\)-isomorphisms is a surprising correspondence between \({\mathcal {K}}\)-isomorphism matrices and linear maps \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\). Concretely, by considering a \({\mathcal {K}}\)-isomorphism matrix M for G to H as a Choi matrix, we can associate to it a linear map \(\Phi _M: {\mathbb {C}}^{V_G \times V_G} \rightarrow ~{\mathbb {C}}^{V_H \times V_H}\) given by \(\left( \Phi _M(X)\right) _{h,h'} = \sum _{g,g'} M_{gh,g'h'} X_{g,g'}.\) As it turns out, maps constructed in this manner have some remarkable properties. The idea for this construction is adopted from Ortiz and Paulsen, who applied it to winning correlations for the homomorphism game [20].
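
In coordinates, applying \(\Phi _M\) amounts to a reshaping of M, as the following numpy sketch illustrates (the function name and the row indexing \((g,h) \mapsto g\cdot |V_H|+h\) are our conventions):

```python
import numpy as np

def phi_from_choi(M, nG, nH):
    """Return Phi_M with (Phi_M(X))_{h,h'} = sum_{g,g'} M_{gh,g'h'} X_{g,g'}."""
    C = np.asarray(M).reshape(nG, nH, nG, nH)   # axes ordered as (g, h, g', h')
    return lambda X: np.einsum("ghij,gi->hj", C, X)
```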

As an immediate consequence of the chain of inclusions \({\mathcal {C}}{\mathcal {P}} \subseteq {{\mathcal{C}\mathcal{S}}_{+}} \subseteq {\mathcal {D}}{\mathcal {N}}{\mathcal {N}} \subseteq {\mathcal {S}}_{+}\), it follows that for all graphs G and H we have that

$$\begin{aligned} G \cong H \ \Rightarrow \ G \cong _q H \ \Rightarrow \ G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H \ \Rightarrow \ G \cong _{{\mathcal {S}}_{+}} H, \end{aligned}$$
(8)

and in Sect. 8 we show that none of these implications can be reversed.

In Sect. 9 we give yet another characterization of \({\mathcal {K}}\)-isomorphism by combining a conic generalization of the celebrated Lovász theta function and a new product of graphs. Specifically, for any matrix cone \({\mathcal {K}}\) consider the graph parameter:

$$\begin{aligned} \vartheta ^{\mathcal {K}}(G) = \sup \left\{ {{\,\textrm{Tr}\,}}(MJ): \ M_{g,g'} = 0 \text { if } g \sim g', \ {{\,\textrm{Tr}\,}}(M) = 1, \ M \in {\mathcal {K}}\right\} . \end{aligned}$$

For \({\mathcal {K}} = {\mathcal {S}}_{+}\), the corresponding parameter is the celebrated Lovász theta function, denoted by \(\vartheta \), whereas for \({\mathcal {K}} = {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\), it is equal to a variant due to Schrijver, which is usually denoted \(\vartheta '\) [23]. A nontrivial result [7] is that for \({\mathcal {K}}={\mathcal {C}}{\mathcal {P}}\) the parameter \(\vartheta ^{{\mathcal {C}}{\mathcal {P}}}\) is equal to the independence number of a graph.

In order to reformulate \({\mathcal {K}}\)-isomorphism in terms of the graph parameter \(\vartheta ^{{\mathcal {K}}}\) we make use of the graph isomorphism product, denoted \(G \diamond H\), which has vertex set \(V_G \times V_H\) and edges \((g,h) \sim (g',h')\) if \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\). In other words, vertices of \(G \diamond H\) are adjacent exactly when the corresponding entry in an isomorphism matrix for G to H is required to be zero. Note that the isomorphism product of G and H is the complement of the so-called weak modular product of graphs, e.g. see [13].
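
The following Python sketch (again assuming cvxpy and networkx, with helper names of our choosing) constructs \(G \diamond H\) and evaluates \(\vartheta ^{{\mathcal {K}}}\) for \({\mathcal {K}} \in \{{\mathcal {S}}_{+}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\}\), which suffices to test the condition in Result 4 below for these two cones:

```python
import itertools

import cvxpy as cp
import networkx as nx

def iso_product(G, H):
    """Isomorphism product G ◊ H: (g,h) ~ (g',h') iff rel(g,g') != rel(h,h')."""
    def r(K, a, b):
        return "eq" if a == b else ("adj" if K.has_edge(a, b) else "non")
    P = nx.Graph()
    P.add_nodes_from(itertools.product(G.nodes, H.nodes))
    for (g, h), (g2, h2) in itertools.combinations(list(P.nodes), 2):
        if r(G, g, g2) != r(H, h, h2):
            P.add_edge((g, h), (g2, h2))
    return P

def theta(G, schrijver=False):
    """vartheta^K(G) for K = S_+ (Lovász theta) or, if schrijver=True, K = DNN (theta')."""
    nodes = list(G.nodes)
    n = len(nodes)
    M = cp.Variable((n, n), PSD=True)
    cons = [cp.trace(M) == 1]
    cons += [M[i, j] == 0 for i, j in itertools.combinations(range(n), 2)
             if G.has_edge(nodes[i], nodes[j])]
    if schrijver:
        cons.append(M >= 0)
    prob = cp.Problem(cp.Maximize(cp.sum(M)), cons)   # Tr(MJ) = sum of all entries
    prob.solve(solver=cp.SCS)
    return prob.value
```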

Result 4

Consider two graphs G and H and a matrix cone \({\mathcal {K}} \subseteq {\mathcal {S}}_{+}\). Then \(G \cong _{{\mathcal {K}}} H\) if and only if \(\vartheta ^{{\mathcal {K}}}(G \diamond H) = |V_G| = |V_H|\) and this value is attained.

For \({\mathcal {K}} = {\mathcal {C}}{\mathcal {P}}\), the above result implies that the graphs G and H are isomorphic if and only if \(\alpha (G \diamond H) = |V_G| = |V_H|\). This is not a new result [14], and in fact it has long been known that \(\alpha (G \diamond H)\) is the size of the largest common induced subgraph of G and H. For \({\mathcal {K}}={\mathcal {S}}_{+}\) and \({\mathcal {K}}={\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\) this result is new to the best of our knowledge.

Lastly, in Sect. 10 we show that \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)- and \({\mathcal {S}}_{+}\)-isomorphisms respectively correspond to the feasibility of (the first level of) the Lasserre hierarchy applied to appropriate relaxations of the quadratic integer programming formulation of the graph isomorphism problem. Specifically, two graphs G and H with adjacency matrices \(A_G\) and \(A_H\) are isomorphic if and only if there exists a permutation matrix \(X=(X_{gh})\) such that \(A_G=XA_HX^\top \). Thus, the 0/1 solutions in the semialgebraic set defined by

$$\begin{aligned}&\sum _g X_{gh}=1, \end{aligned}$$
(9)
$$\begin{aligned}&\sum _hX_{gh}=1, \end{aligned}$$
(10)
$$\begin{aligned}&X_{gh}X_{g'h'}=0 \text { if } {{\,\textrm{rel}\,}}(g,g')\ne {{\,\textrm{rel}\,}}(h,h'), \end{aligned}$$
(11)

encode all possible isomorphisms between G and H.
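
For instance, the matrix identity above is easy to check numerically for a candidate isomorphism; a small numpy sketch (our helper, assuming vertices are labeled \(0,\ldots ,n-1\) and perm[g] encodes f(g)):

```python
import numpy as np

def is_isomorphism(AG, AH, perm):
    """Check A_G = X A_H X^T for the permutation matrix X with X[g, perm[g]] = 1."""
    n = len(perm)
    X = np.zeros((n, n))
    X[np.arange(n), perm] = 1.0
    return np.array_equal(AG, X @ AH @ X.T)
```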

Given a semialgebraic set \({\mathcal {K}}=\{x\in [0,1]^n: g_i(x)\ge 0, \ i=1,\ldots , m\}\), the Lasserre hierarchy is a systematic method for producing tighter approximations to \(\textrm{conv}({\mathcal {K}}\cap \{0,1\}^n)\) [15]. Each level of the Lasserre hierarchy corresponds to a semidefinite program and can be constructed using sums of squares representations of polynomials and the dual theory of moments.

Result 5

Two graphs G and H are \({\mathcal {S}}_{+}\)-isomorphic if and only if the first level of the Lasserre hierarchy applied to (9)–(11) is feasible. Furthermore, two graphs G and H are \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic if and only if the first level of the Lasserre hierarchy applied to (9)–(11), augmented with the constraint that \(X=(X_{gh})\) is entrywise nonnegative, is feasible.

Summarizing, our results give a surprising, and previously unknown, connection between coherent algebras, the Weisfeiler–Leman algorithm, the Lasserre hierarchy, and Schrijver’s theta function.

2 Strategies for the isomorphism game

In this section we recall the isomorphism game and briefly explain classical and quantum strategies. For more detailed background and additional properties we refer the reader to [1].

Recall that in the (G,H)-isomorphism game, each player receives a vertex from one of the graphs G and H, and they must respond with a vertex from the other graph (see Eq. 1). In order to win, the vertices of G that they receive/send must relate to each other (i.e., be equal, adjacent, or distinct and non-adjacent) in the same way as their vertices of H (see Eq. 2). The players know G and H and can agree on a strategy beforehand, but they are not allowed to communicate once the game begins (i.e., once they receive their questions/vertices). We are interested in winning or perfect strategies, which are those that win with probability one.

For any fixed strategy for the (G,H)-isomorphism game, there is an associated joint conditional probability distribution, \(p(y_{A}, y_{B} | x_{A}, x_{B})\) for \(x_{A}\), \(x_{B}\), \(y_{A}\), \(y_{B} \in V_G \cup V_H\), which gives the probability of Alice and Bob responding with \(y_{A}\) and \(y_{B}\) when given inputs \(x_{A}\) and \(x_{B}\) respectively. The distribution p is usually referred to as a correlation. A given strategy for the (G,H)-isomorphism game is a winning strategy if and only if \(p(y_{A}, y_{B} | x_{A}, x_{B}) = 0\) whenever \(x_{A}, x_{B}, y_{A}\), and \(y_{B}\) do not meet the winning conditions (1) and (2) defined above.

Note that any winning strategy for the (G,H)-isomorphism game is also a winning strategy for the (H,G)-isomorphism game, as well as the \((\overline{G},\overline{H})\)-isomorphism game. Here \(\overline{G}\) refers to the complement of G, i.e., the graph obtained from G by replacing edges with non-edges and vice versa.

2.1 Classical strategies

In general, classical strategies allow Alice and Bob to have access to some shared randomness, such as a random binary string, which they can use to determine how they respond to the questions of the verifier. However, conditioned on each value the shared randomness may assume, the corresponding strategy is deterministic. Mathematically, this says that any classical correlation p can be written as \(p=\sum _i \lambda _i p_i\) where \(\lambda _i \ge 0\) for each i, with \(\sum _i \lambda _i = 1\), and each \(p_i\) corresponds to a deterministic classical strategy. The coefficients \(\lambda _i\) encode the shared randomness used by the players. Since whether a correlation p corresponds to a winning strategy is determined by its zeros, the correlation p arises from a winning strategy if and only if \(p_i\) is winning for all i such that \(\lambda _i > 0\).

A deterministic classical strategy consists of two functions \(f_A\) and \(f_B\) for Alice and Bob respectively that map inputs to outputs. Thus when Alice receives some input x, she will respond with \(f_A(x)\), and Bob acts analogously. For the isomorphism game, it is not difficult to see that the functions \(f_A\) and \(f_B\) must be equal, and moreover that the restriction of them to \(V_G\) (resp. \(V_H\)) is an isomorphism from G to H (resp. from H to G). Furthermore, the restriction to \(V_H\) is the inverse of the restriction to \(V_G\). Thus the (G,H)-isomorphism game can be won perfectly with classical strategies if and only if G and H are actually isomorphic.

2.2 Quantum strategies

In a quantum strategy the players can take advantage of shared quantum entanglement and measurements in order to produce their outputs. For our purposes we can restrict the shared entanglement to what are known as pure bipartite states of full Schmidt rank. Such a state corresponds to a unit vector \(\psi \in \mathbb {C}^d \otimes \mathbb {C}^d\) which can be expressed as \(\sum _{i\in [d]} \lambda _i \alpha _i \otimes \beta _i\) where \({\mathcal {A}} = \{\alpha _i: i\in [d]\}\) and \({\mathcal {B}} = \{\beta _i: i\in [d]\}\) are two orthonormal bases of \(\mathbb {C}^d\) and \(\lambda _i>0\) for all i. Any vector in \(\mathbb {C}^d \otimes \mathbb {C}^d\) admits such a decomposition with \(\lambda _i \ge 0\), where the restriction \(\lambda _i>0\) reflects our assumption that the shared entangled state has full Schmidt rank. The two orthonormal bases, \({\mathcal {A}}\) and \({\mathcal {B}}\), are known as Schmidt bases of \(\psi \), and there can be several choices for such a pair of bases. For a given shared state \(\psi \in \mathbb {C}^d \otimes \mathbb {C}^d\), we intuitively think of the first tensor factor as describing Alice’s part of the state and the second one as describing Bob’s part. In order to extract classical information from this shared state, Alice and Bob can measure their respective parts. A k-outcome quantum measurement of a d-dimensional system, also referred to as a POVM (Positive Operator Valued Measure), is described by a family of k operators from \({\mathcal {S}}^{d}_{+}\) which add up to the identity. We say that such a measurement is projective if each of the positive semidefinite operators is an (orthogonal) projection. For a more in-depth explanation of general quantum strategies we refer the reader to [19], or to [1] for more details on quantum strategies for the isomorphism game specifically.
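
As a small illustration of these definitions, the following numpy sketch (our function names) checks whether a family of operators forms a POVM, and whether the POVM is projective:

```python
import numpy as np

def is_povm(ops, tol=1e-9):
    # POVM: each operator is psd and the family sums to the identity
    d = ops[0].shape[0]
    psd = all(np.linalg.eigvalsh((P + P.conj().T) / 2).min() >= -tol for P in ops)
    return psd and np.allclose(sum(ops), np.eye(d), atol=1e-8)

def is_projective(ops, tol=1e-8):
    # projective measurement: every effect is an orthogonal projection (P^2 = P)
    return is_povm(ops) and all(np.allclose(P @ P, P, atol=tol) for P in ops)
```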

With these notions at hand we are ready to describe a general quantum strategy that Alice and Bob use to play the (G,H)-isomorphism game. First, Alice and Bob can choose the shared entangled state that they will use. Next, each player can choose a quantum measurement which they will perform upon receiving \(x\in V_G\cup V_H\). Since any classical processing of the measurement outcome can be included in the measurement, without loss of generality we can assume that each player responds with the measurement outcome they obtain. Hence, we index the measurement outcomes by elements of \(V_G \cup V_H\). So a quantum strategy consists of a shared entangled state \(\psi \in {\mathbb {C}}^d \otimes {\mathbb {C}}^d\) for some d, and quantum measurements \({\mathcal {P}}_x = (P_{xy} \in {\mathcal {S}}^{d}_{+}: y \in V_G \cup V_H)\) and \({\mathcal {Q}}_x = (Q_{xy} \in {\mathcal {S}}^{d}_{+}: y \in V_G \cup V_H)\) for each \(x \in V_G \cup V_H\) for Alice and Bob respectively. According to the postulates of quantum mechanics, the probability of obtaining outcomes y and \(y'\) upon measuring \({\mathcal {P}}_{x}\) and \({\mathcal {Q}}_{x'}\) respectively, is given by

$$\begin{aligned} p(y,y'| x,x') = \psi ^\dagger \left( P_{xy} \otimes Q_{x'y'}\right) \psi . \end{aligned}$$
(12)

It will often be useful that the probability from Eq. 12 can also be expressed as

$$\begin{aligned} \psi ^\dagger \left( P \otimes Q\right) \psi = {{\,\textrm{vec}\,}}(\rho )^\dagger \left( P \otimes Q\right) {{\,\textrm{vec}\,}}(\rho ) = {{\,\textrm{vec}\,}}(\rho )^\dagger {{\,\textrm{vec}\,}}(P\rho Q^T) = {{\,\textrm{Tr}\,}}(\rho ^\dagger P \rho Q^T) \end{aligned}$$
(13)

where \(\psi := {{\,\textrm{vec}\,}}(\rho )\) and \({{\,\textrm{vec}\,}}: {\mathbb {C}}^{d_1 \times d_2} \rightarrow {\mathbb {C}}^{d_1} \otimes {\mathbb {C}}^{d_2}\) is the linear map defined by \({{\,\textrm{vec}\,}}(e_i e_j^T) = e_i \otimes e_j\) and extended by linearity. In the above derivation, we have used the identities \({{\,\textrm{vec}\,}}(AXB^T) = (A \otimes B) {{\,\textrm{vec}\,}}(X)\) and \({{\,\textrm{Tr}\,}}(A^\dagger B) = {{\,\textrm{vec}\,}}(A)^\dagger {{\,\textrm{vec}\,}}(B)\) which can be verified by a direct calculation. We will also use the inverse map of \({{\,\textrm{vec}\,}}\) which we denote by \({{\,\textrm{mat}\,}}\). Note that \({{\,\textrm{mat}\,}}\) takes vectors to matrices, i.e., \({{\,\textrm{mat}\,}}: {\mathbb {C}}^{d_1} \otimes {\mathbb {C}}^{d_2} \rightarrow {\mathbb {C}}^{d_1 \times d_2}\). For notational convenience, we usually choose to express the shared state \(\psi \), as well as the operators \(P_{xy}\) and \(Q_{xy}\), in a Schmidt basis of \(\psi \). Note that in this basis, the operator \(\rho = {{\,\textrm{mat}\,}}(\psi )\) from Eq. (13) is a diagonal matrix with positive diagonal entries.
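
These identities are easily confirmed numerically; here is a small numpy sanity check, using real matrices for simplicity and the row-major reshape, which realizes \({{\,\textrm{vec}\,}}(e_ie_j^T) = e_i \otimes e_j\):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A, B, X = (rng.standard_normal((d, d)) for _ in range(3))

vec = lambda M: M.reshape(-1)    # row-major flattening: vec(e_i e_j^T) = e_i (x) e_j
mat = lambda v: v.reshape(d, d)  # inverse map

# vec(A X B^T) = (A (x) B) vec(X)
assert np.allclose(vec(A @ X @ B.T), np.kron(A, B) @ vec(X))

# psi^T (P (x) Q) psi = Tr(rho^T P rho Q^T) for psi = vec(rho)
P, Q, rho = (rng.standard_normal((d, d)) for _ in range(3))
psi = vec(rho)
assert np.allclose(psi @ np.kron(P, Q) @ psi,
                   np.trace(rho.T @ P @ rho @ Q.T))
```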

In [1] we showed that perfect strategies of the isomorphism game have a special form:

Theorem 2.1

Let G and H be graphs and let \(V:= V_G \cup V_H\). Suppose that a shared state \(\psi \in \mathbb {C}^d \otimes \mathbb {C}^d\) of full Schmidt rank and \({\mathcal {P}}_x = \{P_{xy}: y \in V\}\) and \({\mathcal {Q}}_x = \{Q_{xy}: y \in V\}\) for \(x \in V\) comprise a winning strategy for the (G,H)-isomorphism game. Also, suppose that the operators \(P_{xy}\) and \(Q_{xy}\) as well as the shared state \(\psi \) are expressed in a Schmidt basis of \(\psi \) and let \(\rho := {{\,\textrm{mat}\,}}(\psi )\). Then we have

  1. \(P_{xy} = Q_{xy}^T\) for all \(x, y \in V\);

  2. \(P_{xy}\) and \(Q_{xy}\) are projectors for all \(x, y \in V\);

  3. \(P_{xy}\rho = \rho P_{xy}\) and \(Q_{xy}\rho = \rho Q_{xy}\) for all \(x, y \in V\);

  4. \(p(y, y' | x, x'):= \psi ^\dagger \left( P_{xy} \otimes Q_{x'y'}\right) \psi = 0\) if and only if \(P_{xy}P_{x'y'} = 0\);

  5. \(P_{xy} = 0\) if \(x,y \in V_G\) or \(x,y \in V_H\);

  6. \(P_{xy} = P_{yx}\) for all \(x, y \in V\).

We also observed in [1] that Theorem 2.1 allows us to reformulate quantum isomorphism in the following way.

Theorem 2.2

Let G and H be graphs. Then \(G \cong _q H\) if and only if there exist projectors \(P_{gh}\) for \(g \in V_G\) and \(h \in V_H\) such that

  1. \(\sum _{h \in V_H} P_{gh} = I\) for all \(g \in V_G\);

  2. \(\sum _{g \in V_G} P_{gh} = I\) for all \(h \in V_H\);

  3. \(P_{gh} P_{g'h'} = 0\) if \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\).

Note that by items (1) and (6) of Theorem 2.1, any quantum correlation p that wins the (G,H)-isomorphism game satisfies the following:

$$\begin{aligned} p(y,y'|x,x') = p(x,y'|y,x') = p(y,x'|x,y') = p(x,x'|y,y') \text { for all } x,x',y,y' \in V_G \cup V_H. \end{aligned}$$

In other words, switching the input and output, for Alice or Bob, does not affect the corresponding probability. We refer to any correlation with this property as being input–output symmetric. This symmetry allows us to use a smaller matrix when formulating classical and quantum isomorphisms as conic feasibility problems. Note that since classical correlations are a subset of quantum correlations, winning classical correlations for the isomorphism game are also input–output symmetric.

3 Conic formulations

In this section we will prove Result 1, that graphs G and H are isomorphic (resp. quantum isomorphic) if and only if there is a \({\mathcal {C}}{\mathcal {P}}\)-isomorphism matrix (resp. \({{\mathcal{C}\mathcal{S}}_{+}}\)-isomorphism matrix) for G to H.

3.1 Classical correlations

Suppose that p is a correlation for the (G,H)-isomorphism game. Define the matrix \(M^p\) with rows and columns indexed by \(V_G \times V_H\) entrywise as:

$$\begin{aligned} M^p_{gh,g'h'} = p(h,h'|g,g'). \end{aligned}$$

Note that the matrix \(M^p\) does not contain all of the probabilities of p, only those corresponding to inputs from \(V_G\) and outputs from \(V_H\). Thus, in general the matrix \(M^p\) may not completely determine the correlation p. However, since winning classical or quantum correlations are input–output symmetric, such correlations p are determined by the matrix \(M^p\). Also note that since winning classical or quantum correlations are symmetric with respect to Alice and Bob, i.e., \(p(y,y'|x,x') = p(y',y|x',x)\) for all \(x,x',y,y' \in V_G \cup V_H\), we have that \(M^p\) is symmetric.

Recall from above that a matrix is completely positive if it is the Gram matrix of entrywise nonnegative vectors. Equivalently, a matrix M is completely positive if \(M = \sum _i p_ip_i^T\) where \(p_i \in {\mathbb {R}}^d\) are entrywise nonnegative vectors. The equivalence of these two definitions follows from the fact that the matrix \(PP^T\) is the Gram matrix of the rows of P but is also equal to \(\sum _i p_ip_i^T\) where \(p_i\) is the \(i^\text {th}\) column of P. This formulation of completely positive matrices will be useful for proving Theorem 3.1 below. The proof of the following theorem resembles the proof of Theorem 4.2 from [22] for the homomorphism game. But there the only concern was showing that the existence of a homomorphism was equivalent to the existence of a completely positive matrix satisfying certain properties. Here we show a bijection between certain completely positive matrices and classical correlations for the isomorphism game.
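
To illustrate the construction used in the proof below, here is a numpy sketch (our naming) of the rank-one matrix \(v v^T\) associated with a deterministic strategy playing according to an isomorphism f; convex combinations of such matrices, over isomorphisms f, yield \({\mathcal {C}}{\mathcal {P}}\)-isomorphism matrices:

```python
import numpy as np

def deterministic_iso_matrix(perm, nH):
    """M = v v^T for the 0/1 vector v with v[g*nH + h] = 1 iff h = f(g) = perm[g]."""
    nG = len(perm)
    v = np.zeros(nG * nH)
    for g, h in enumerate(perm):
        v[g * nH + h] = 1.0
    return np.outer(v, v)
```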

Theorem 3.1

Suppose G and H are graphs and p is a correlation for the (G,H)-isomorphism game. Then p is a winning classical correlation if and only if p is input–output symmetric and \(M^p\) is a \({\mathcal {C}}{\mathcal {P}}\)-isomorphism matrix.

Proof

Suppose that p is a winning classical correlation. Then we already know that p is input–output symmetric, so it remains to show that \(M^p\) is a \({\mathcal {C}}{\mathcal {P}}\)-isomorphism matrix. First, since p is a winning correlation we have that Eqs. (3) and (5) hold. Also, since p is input–output symmetric, we have that Eq. (4) holds. Thus we only need to show that \(M^p \in {\mathcal {C}}{\mathcal {P}}\).

Since p is classical, it is a convex combination of correlations arising from deterministic classical strategies. In other words, there are positive numbers \(\lambda _1, \ldots , \lambda _k\) such that \(\sum _{i=1}^k \lambda _i = 1\), and deterministic correlations \(p_i\) such that

$$\begin{aligned} p = \sum _{i=1}^k \lambda _i p_i. \end{aligned}$$

It is easy to see that \(M^p = \sum _i \lambda _i M^{p_i}\). Since \(p_i\) is deterministic, there exists a graph isomorphism \(f_i:V_G \rightarrow V_H\) such that

$$ \begin{aligned} p_i(h,h'|g,g') = {\left\{ \begin{array}{ll} 1 &{} \text {if } h = f_i(g) \ \& \ h' = f_i(g') \\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

for all \(g,g' \in V_G\) and \(h,h' \in V_H\). Let \(v^i\) be a real vector with coordinates indexed by \(V_G \times V_H\) such that

$$\begin{aligned} v^i_{gh} = {\left\{ \begin{array}{ll} 1 &{} \text {if } h = f_i(g) \\ 0 &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$

It is straightforward to see that \(M^{p_i} = v^i {v^i}^T\). Since the \(v^i\) are entrywise nonnegative and the \(\lambda _i\) are positive, it follows that \(M^p\) is completely positive.

Conversely, suppose that p is input–output symmetric and \(M^p\) is a \({\mathcal {C}}{\mathcal {P}}\)-isomorphism matrix. It immediately follows from the definition of isomorphism matrices that p is a winning correlation for the (G,H)-isomorphism game, so it remains to show that it is classical.

Since \(M^p\) is completely positive, there are nonzero, entrywise nonnegative vectors \(v^1, \ldots , v^k\) such that \(M^p = \sum _i v^i {v^i}^T\). Let \(N^i = v^i{v^i}^T\). Our aim is to show that each \(v^i\) corresponds to a deterministic correlation, i.e., that \(N^i\) is a scalar multiple of \(M^{p_i}\) for some deterministic correlation \(p_i\).

First, we show that the \(N^i\) have uniform block sums, i.e., that \(\sum _{h,h'} N^i_{gh,g'h'}\) does not depend on \(g,g' \in V_G\). For each i, let \(\widehat{N}^i\) be a matrix with rows and columns indexed by \(V_G\) such that

$$\begin{aligned} \widehat{N}^i_{gg'}:= \sum _{h,h' \in V_H} N^i_{gh,g'h'}. \end{aligned}$$

Also let

$$\begin{aligned} \widehat{M}^p_{gg'} = \sum _{h,h' \in V_H} M^p_{gh,g'h'}. \end{aligned}$$

So we have that \(\widehat{M}^p = \sum _i \widehat{N}^i\). Note that \(\widehat{M}^p\) can be obtained by conjugating \(M^p\) by the matrix \(I \otimes e\), where e is the all ones vector, and \(\widehat{N}^i\) can be obtained from \(N^i\) similarly. Since \(M^p\) and the \(N^i\) are completely positive, they are also positive semidefinite and therefore so are \(\widehat{M}^p\) and the \(\widehat{N}^i\).

Since \(\sum _{h,h'} p(h,h'|g,g') = 1\) for all \(g,g' \in V_G\), we have that \(\widehat{M}^p\) is equal to the all ones matrix J. However, J is a rank 1 positive semidefinite matrix and we have shown it can be written as the sum of the positive semidefinite matrices \(\widehat{N}^i\). This is only possible if each \(\widehat{N}^i\) is a positive (since \(v^i\) is nonzero) scalar multiple of J. Let \(\mu _i\) be such that \(\widehat{N}^i = \mu _i J\) and note that \(\mu _i > 0\) for all i and \(\sum _i \mu _i = 1\).

Now we will show that each \(v^i\) corresponds to a deterministic correlation. For notational simplicity, let \(v = v^1\), \(N = N^1\), and \(\widehat{N} = \widehat{N}^1\). Since the \(v^i\) are nonnegative, we must have that \(N_{gh,g'h'} = 0\) if \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\). Suppose that \(h \ne h'\) and \(v_{gh}, v_{gh'} > 0\). Then \(N_{gh,gh'} = v_{gh}v_{gh'} > 0\), a contradiction. Therefore, for each \(g \in V_G\), there exists a unique \(h \in V_H\) such that \(v_{gh} > 0\) (at least one such h exists since \(\widehat{N}_{gg} = \mu _1 > 0\)). Let \(f: V_G \rightarrow V_H\) be the function such that \(v_{gf(g)} > 0\) for all \(g \in V_G\). We claim that f is injective. Indeed, suppose that \(v_{gh}, v_{g'h}>0\) for \(g\ne g'\). Then, \(N_{gh,g'h}=v_{gh}v_{g'h}>0\), a contradiction as \(N_{gh,g'h'} = 0\) if \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\). By an analogous argument, using Eq. (4) in place of Eq. (3), for each \(h \in V_H\) there exists a unique \(g \in V_G\) such that \(v_{gh} > 0\), and thus f is surjective. Moreover, since \(N_{gh,g'h'} = 0\) for \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\), we must have that f is an isomorphism of G and H.

We want to show that \(v_{gf(g)} = v_{g'f(g')}\) for all \(g,g' \in V_G\). However,

$$\begin{aligned} \widehat{N}_{gg'}:= \sum _{h,h' \in V_H} N_{gh,g'h'} = v_{gf(g)} v_{g'f(g')}. \end{aligned}$$

Since \(\widehat{N}\) is a positive multiple of J by the above argument, we have that \(v_{gf(g)} v_{g'f(g')}\) is constant for all \(g,g' \in V_G\), and therefore \(v_{gf(g)}\) is constant. It is easy to see that \(v_{gf(g)} = \sqrt{\mu _1}\) and \(N = \mu _1 M^q\) where q is the correlation arising from the deterministic strategy obtained from using the isomorphism f. The same holds for all i and therefore p is a convex combination of correlations arising from deterministic classical strategies, i.e., p is classical. \(\square \)

The above theorem is similar to a more general characterization of classical correlations given in [24]. However, our result is different in that we use a smaller matrix whose rows and columns are indexed by \(V_G \times V_H\), instead of \((V_G \cup V_H) \times (V_G \cup V_H)\). This is possible because of the input–output symmetry of winning classical/quantum correlations for the isomorphism game that is not present in arbitrary games.

Note that we must include the assumption of input–output symmetry in the theorem above, as otherwise we do not have any information about the other probabilities of p. Thus, if this assumption is dropped, it could be possible that p is classical when the inputs are both from \(V_G\), but not for other inputs.

3.2 Quantum correlations

To prove the characterization for winning quantum correlations for the isomorphism game, we first need the following lemma which was proven in [22] (see also [24, Lemma 3.6]).

Lemma 3.2

Let X and Y be finite sets and let M be the Gram matrix of vectors \(w_{xy}\) for \(x \in X\), \(y \in Y\) which lie in some inner product space. The following assertions are equivalent:

  • There exists a unit vector w satisfying \( \sum _{y \in Y} w_{xy} = w \text { for all } x \in X\).

  • \(\sum _{y,y' \in Y} M_{xy,x'y'} = 1 \text { for all } x,x' \in X.\)

Using this we can now prove the following:

Theorem 3.3

Suppose G and H are graphs and p is a correlation for the (G,H)-isomorphism game. Then p is a winning quantum correlation if and only if p is input–output symmetric and \(M^p\) is a \({{\mathcal{C}\mathcal{S}}_{+}}\)-isomorphism matrix.

Proof

Suppose that p is a winning quantum correlation for the (G,H)-isomorphism game. By the same reasoning as in the proof of Theorem 3.1, we have that \(M^p\) satisfies Eqs. (3), (4), and (5). So it remains to show that \(M^p \in {{\mathcal{C}\mathcal{S}}_{+}}\).

Since p is a quantum correlation, by Theorem 2.1 and Eq. (13) we have that

$$\begin{aligned} M^p_{gh,g'h'} = {{\,\textrm{Tr}\,}}(\rho ^\dagger P_{gh} \rho P_{g'h'}) \text { for all } g,g' \in V_G, \ h,h' \in V_H \end{aligned}$$

where \(\rho = {{\,\textrm{mat}\,}}(\psi )\) for a quantum state \(\psi \) used in a quantum strategy along with measurement operators \(P_{gh}\) (for Alice). Moreover, we may assume that \(\rho \) is a diagonal matrix with strictly positive diagonal entries. Thus \(\rho \) is invertible and \(\rho ^{1/2}\) exists. Thus,

$$\begin{aligned} M^p_{gh,g'h'} = {{\,\textrm{Tr}\,}}(\rho P_{gh} \rho P_{g'h'}) = {{\,\textrm{Tr}\,}}\left( (\rho ^{1/2} P_{gh} \rho ^{1/2}) (\rho ^{1/2} P_{g'h'} \rho ^{1/2})\right) . \end{aligned}$$

The expression on the right-hand side above is the Hilbert–Schmidt inner product of the matrices \(\rho ^{1/2} P_{gh} \rho ^{1/2}\) and \(\rho ^{1/2} P_{g'h'} \rho ^{1/2}\), which are both positive semidefinite. Thus, \(M^p\) is a Gram matrix of positive semidefinite matrices, i.e., it is completely positive semidefinite, as desired.

Conversely, suppose that p is input–output symmetric and \(M^p\) is a \({{\mathcal{C}\mathcal{S}}_{+}}\)-isomorphism matrix. It immediately follows from the definition of isomorphism matrices that p is a winning correlation for the (GH)-isomorphism game. So it remains to show that p is quantum. Since \(M^p \in {{\mathcal{C}\mathcal{S}}_{+}}\), there exist positive semidefinite matrices \(\rho _{gh}\) for \(g \in V_G\) and \(h \in V_H\) such that \(M^p_{gh,g'h'} = {{\,\textrm{Tr}\,}}(\rho _{gh}\rho _{g'h'})\). Since

$$\begin{aligned} \sum _{h,h'} M^p_{gh,g'h'} = 1 \text { for all } g, g' \in V_G, \end{aligned}$$

we can apply Lemma 3.2. Therefore, there exists a matrix \(\rho \) such that

$$\begin{aligned} \sum _{h \in V_H} \rho _{gh} = \rho \text { for all } g \in V_G, \end{aligned}$$

and \({{\,\textrm{Tr}\,}}(\rho ^\dagger \rho ) = 1\). This implies that \(\rho \) is positive semidefinite and that the column space of \(\rho _{gh}\) is contained in the column space of \(\rho \) for all \(g \in V_G, h \in V_H\). Therefore, by restricting to a subspace if necessary, we may assume that \(\rho \) is positive definite, and thus \(\rho ^{-1/2}\) exists.

Define \(P_{gh} = \rho ^{-1/2}\rho _{gh}\rho ^{-1/2}\), and note that this is positive semidefinite. We have that

$$\begin{aligned} \sum _{h \in V_H} P_{gh} = \rho ^{-1/2}\left( \sum _{h \in V_H} \rho _{gh}\right) \rho ^{-1/2} = \rho ^{-1/2}\rho \rho ^{-1/2} = I. \end{aligned}$$

Therefore, \(\{P_{gh}: h \in V_H\}\) is a valid quantum measurement. We would like to also show that \(\{P_{gh}: g \in V_G\}\) is a valid quantum measurement. To do this it suffices to show that \(\sum _{g} \rho _{gh} = \rho \) for all \(h \in V_H\). Since \(M^p\) is an isomorphism matrix, we have that

$$\begin{aligned} \sum _{g,g'} M^p_{gh,g'h'} = 1 \text { for all } h, h' \in V_H. \end{aligned}$$

Thus we can apply Lemma 3.2 again to obtain a positive semidefinite matrix \(\rho '\) such that

$$\begin{aligned} \sum _{g \in V_G} \rho _{gh} = \rho ' \text { for all } h \in V_H. \end{aligned}$$

However, we must have that

$$\begin{aligned} |V_G|\rho = \sum _{g \in V_G} \sum _{h \in V_H} \rho _{gh} = \sum _{h \in V_H} \sum _{g \in V_G} \rho _{gh} = |V_H|\rho '. \end{aligned}$$

Since the sum of the entries of \(M^p\) is equal to both \(|V_G|^2\) and \(|V_H|^2\) (by Eqs. (3) and (4) respectively), we have that \(|V_G| = |V_H|\), and thus \(\rho ' = \rho \), i.e., \(\sum _{g} \rho _{gh} = \rho \) as desired. Therefore, \(\{P_{gh}: g \in V_G\}\) is a valid quantum measurement.

Define \(P_{hg} = P_{gh}\) and \(P_{gg'} = 0 = P_{hh'}\) for all \(g,g' \in V_G\) and \(h,h' \in V_H\), and let \(\psi = {{\,\textrm{vec}\,}}(\rho )\). Then it is easy to see that for \(g,g' \in V_G\) and \(h, h' \in V_H\),

$$\begin{aligned} \psi ^\dagger \left( P_{gh} \otimes P^T_{g'h'}\right) \psi = {{\,\textrm{Tr}\,}}(\rho P_{gh} \rho P_{g'h'}) = {{\,\textrm{Tr}\,}}(\rho _{gh}\rho _{g'h'}) = p(h,h'|g,g'). \end{aligned}$$

Switching g and h or \(g'\) and \(h'\) does not change any of the values above and so the correlation p can be realized by Alice performing measurements \({\mathcal {P}}_x = (P_{xy}: y \in V_G \cup V_H)\) and Bob performing measurements \({\mathcal {Q}}_x = (P^T_{xy}: y \in V_G \cup V_H)\) on the shared state \(\psi \). Thus p is a quantum correlation and we have proven the theorem. \(\square \)

As with Theorem 3.1, the theorem above is similar to a more general characterization of synchronous quantum correlations given in [24], but we are able to use a smaller matrix due to input–output symmetry.

By the containments \({\mathcal {C}}{\mathcal {P}} \subseteq {{\mathcal{C}\mathcal{S}}_{+}} \subseteq {\mathcal {D}}{\mathcal {N}}{\mathcal {N}} \subseteq {\mathcal {S}}_{+}\) and Theorems 3.1 and 3.3, we have the following chain of implications:

$$\begin{aligned} G \cong H \ \Rightarrow \ G \cong _q H \ \Rightarrow \ G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H \ \Rightarrow \ G \cong _{{\mathcal {S}}_{+}} H. \end{aligned}$$
(14)

We will see in Sect. 8 that none of these implications can be reversed.

4 From isomorphism matrices to isomorphism maps

Our main technique for studying \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)- and \({\mathcal {S}}_{+}\)-isomorphism is a correspondence between isomorphism matrices and linear maps between the space of matrices indexed by \(V_G\) and those indexed by \(V_H\).

4.1 Linear maps preserving matrix cones and their properties

A linear map \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\) is positive if it maps psd matrices to psd matrices, i.e., \(\Phi (X)\) is psd whenever X is psd.

We recall the following well-known result, e.g. see [6]:

Lemma 4.1

For a linear map \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\) the following assertions are equivalent:

  (i) \(\Phi \) is m-positive, i.e., the map \(\text {id}_m\otimes \Phi \) is positive.

  (ii) The Choi matrix of \(\Phi \), defined by \(C_\Phi = \sum _{i,j=1}^m E_{ij} \otimes \Phi (E_{ij}),\) is psd.

  (iii) \(\Phi \) admits a Kraus decomposition, i.e., there exist matrices \(K_i\in {\mathbb {C}}^{n\times m}, \ 1\le i\le mn\), such that

    $$\begin{aligned} \Phi (X)=\sum _iK_iXK_i^\dagger . \end{aligned}$$

  (iv) \(\Phi \) is completely positive, i.e., the map \(\text {id}_r \otimes \Phi \) is positive for all \(r \in {\mathbb {N}}\).

The implications \((i)\implies (ii), (iii)\implies (iv), (iv)\implies (i)\) are straightforward, whereas \( (ii) \implies (iii)\) follows by considering a Cholesky factorization of the Choi matrix \(C_\Phi \). Furthermore, we note that there is no relation between completely positive maps and the cone of completely positive matrices.
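
Concretely, Kraus operators can be recovered from any factorization of the Choi matrix; the following numpy sketch does so via an eigendecomposition instead of a Cholesky factorization (our function names, using the Choi convention \(C_\Phi = \sum _{i,j} E_{ij} \otimes \Phi (E_{ij})\) stated above):

```python
import numpy as np

def kraus_from_choi(C, m, n):
    """Kraus operators of the completely positive map with (mn) x (mn) psd Choi matrix C."""
    w, V = np.linalg.eigh(C)
    # each eigenvector, reshaped and transposed, gives one n x m Kraus operator
    return [np.sqrt(lam) * V[:, k].reshape(m, n).T
            for k, lam in enumerate(w) if lam > 1e-9]

def apply_map(Ks, X):
    """Evaluate Phi(X) = sum_i K_i X K_i^dagger."""
    return sum(K @ X @ K.conj().T for K in Ks)
```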

A linear map \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\) is trace-preserving (TP) if \({{\,\textrm{Tr}\,}}(\Phi (X)) = {{\,\textrm{Tr}\,}}(X)\) for all \(X \in {\mathbb {C}}^{m \times m}\), and unital if \(\Phi (I_m) = I_n\). Note that if \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\) is both trace-preserving and unital, we necessarily have that \(m = n\), since \(n={{\,\textrm{Tr}\,}}(I_n)={{\,\textrm{Tr}\,}}(\Phi (I_m))={{\,\textrm{Tr}\,}}(I_m)=m\). Completely positive trace-preserving (CPTP) linear maps are important in the theory of quantum information because they are in some sense the most general type of map allowed by the formalism of quantum mechanics.

In the next lemma we collect some useful properties of completely positive maps in terms of their Kraus decompositions that we invoke throughout this section. The proofs are easy and are omitted.

Lemma 4.2

Consider a completely positive map \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n} \) with Kraus decomposition \(\Phi (X)=\sum _iK_iXK_i^\dagger \). Then, we have that:

  (i) The adjoint of \(\Phi \) is given by \(\Phi ^\dagger (Y) = \sum _{i} K_i^\dagger Y K_i\) for all \(Y\in {\mathbb {C}}^{n\times n}\). In particular, the adjoint of a completely positive map is also completely positive (as it itself admits a Kraus decomposition).

  (ii) \(\Phi \) is unital if and only if \(\sum _i K_i K_i^\dagger =I\).

  (iii) \(\Phi \) is trace-preserving if and only if \(\sum _i K_i^\dagger K_i = I\). This is also equivalent to \(\Phi ^\dagger (I)=I\).

  (iv) \(\Phi (X^\dagger ) = \Phi (X)^\dagger \) for any \(X\in {\mathbb {C}}^{m\times m}\).

  (v) If \(\Phi ^\dagger (J)=J\) (resp. \(\Phi (J)=J\)), then \(\Phi \) (resp. \(\Phi ^\dagger \)) is sum-preserving, i.e., \(\langle \Phi (X), J\rangle =\langle X,J\rangle \) for any \(X\in {\mathbb {C}}^{m\times m}\).

As mentioned in Lemma 4.1, a linear map \(\Phi : {{\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}}\) is completely positive if and only if its Choi matrix is psd. It turns out that an analogous property holds for all cones of interest to this work.

Specifically, if \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\) is a linear map and \({\mathcal {K}}\) is a matrix cone, we say that \(\Phi \) is \({\mathcal {K}}\)-preserving if \(X \in {\mathcal {K}}\) implies that \(\Phi (X) \in {\mathcal {K}}\). Using this terminology, a linear map \(\Phi \) is positive if and only if it is \({\mathcal {S}}_{+}\)-preserving. We now prove a sufficient condition for showing a map is \({\mathcal {K}}\)-preserving:

Lemma 4.3

For a matrix cone \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\) and a linear map \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\), if \(C_\Phi \in {\mathcal {K}}\) then \(\Phi \) is \({\mathcal {K}}\)-preserving.

Proof

Set \(C:= C_\Phi \in {\mathcal {K}}\). Let \(X\in {\mathcal {K}}\); we need to show that \(\Phi (X)\in {\mathcal {K}}\).

Using the fact that \(X_{i,j} = (X \otimes J_n)_{i\ell ,jk}\) we have that

$$\begin{aligned} \Phi (X)_{\ell ,k} = \sum _{i,j \in [m]} C_{i\ell , jk}X_{i,j} =\sum _{i,j \in [m]} C_{i\ell ,jk} (X \otimes J_n)_{i\ell ,jk} = \sum _{i,j \in [m]} (C \bullet (X \otimes J_n))_{i\ell ,jk}, \end{aligned}$$
(15)

where \(\bullet \) denotes the Schur product (which is also called the Hadamard product). Note that the matrix \(C \bullet (X \otimes J_n)\) is a principal submatrix of \(C \otimes (X \otimes J_n)\). Thus, since the all ones matrix J is in \({\mathcal {K}}\) for all \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\), and all such \({\mathcal {K}}\) are easily seen to be closed under tensor products and taking principal submatrices, we have that \(C \bullet (X \otimes J_n) \in {\mathcal {K}}\) whenever \(X\in {\mathcal {K}}\). Furthermore, Eq. (15) implies that the \(\ell ,k\) entry of \(\Phi (X)\) is obtained by summing up the entries of the \(\ell ,k\) block of the block matrix \(C \bullet (X \otimes J_n)\), so it remains to check that \({\mathcal {K}}\) is closed under this operation for all \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\).

Indeed, consider a block matrix \(Z=\sum _{i,j=1}^nE_{ij}\otimes A_{ij}\in {\mathcal {K}}\), where each block \(A_{ij}\) is \(m\times m\), and construct a new matrix obtained by summing up the entries of each block of Z, i.e., \(\hat{Z}=\sum _{i,j=1}^nE_{ij}\langle A_{ij}, J\rangle \). For \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {S}}_{+}\}\) we have that Z has an inner product (aka Gram) decomposition \(Z_{i\ell , jk}=\langle u_{i\ell }, u_{jk}\rangle \), for \(i,j\in [n], \ell , k\in [m]\), where for \({\mathcal {K}}={\mathcal {S}}_{+}\) the \(u_{i\ell }\)’s are real vectors, for \({\mathcal {K}}={{\mathcal{C}\mathcal{S}}_{+}}\) they are psd matrices, and for \({\mathcal {K}}={\mathcal {C}}{\mathcal {P}}\) they are nonnegative vectors. Defining \(w_{i}=\sum _{k=1}^mu_{ik}\) for all \(i\in [n]\), we see that \(\langle w_i, w_j\rangle =\langle \sum _{\ell =1}^mu_{i\ell }, \sum _{k=1}^m u_{jk}\rangle =\langle A_{ij}, J\rangle \). Thus, the \(w_i\)’s give an inner product decomposition of \(\hat{Z}=\sum _{i,j=1}^nE_{ij}\langle A_{ij}, J\rangle \). Next, note that for the \({\mathcal {K}}={\mathcal {C}}{\mathcal {P}}\) case, the \(w_i\)’s are sums of nonnegative vectors, so themselves nonnegative, whereas for the \(\mathcal {K} = \mathcal{C}\mathcal{S}_+\) case, they are sums of psd matrices, so again psd. Finally, for the \({\mathcal {K}}={\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\) case we also see that the operation of summing up the entries of each block preserves nonnegativity. \(\square \)
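
The block-summing operation \(Z \mapsto \hat{Z}\) used in this proof is a simple reshape in numpy terms (a sketch, with our naming):

```python
import numpy as np

def block_sums(Z, n, m):
    """hat{Z}_ij = <A_ij, J>: sum the entries of each m x m block of the
    (nm) x (nm) block matrix Z = sum_ij E_ij (x) A_ij."""
    return np.asarray(Z).reshape(n, m, n, m).sum(axis=(1, 3))
```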

We are now ready to prove the following analogue of Lemma 4.1 for the cones of interest:

Lemma 4.4

Consider a matrix cone \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\) and a linear map \(\Phi : {\mathbb {C}}^{m \times m} \rightarrow {\mathbb {C}}^{n \times n}\). The following assertions are equivalent:

  (i) The map \(\text {id}_m\otimes \Phi \) is \({\mathcal {K}}\)-preserving.

  (ii) The Choi matrix \(C_\Phi \) lies in \( {\mathcal {K}}\).

  (iii) \(\Phi \) is completely \({\mathcal {K}}\)-preserving, i.e., the map \(\text {id}_r \otimes \Phi \) is \({\mathcal {K}}\)-preserving for all \(r \in {\mathbb {N}}\).

Proof

\((i) \implies (ii)\). The Choi matrix of \(\Phi \) is equal to

$$\begin{aligned} C_\Phi= & {} \sum _{i,j \in [m]} E_{ij} \otimes \Phi (E_{ij}) = (\text {id}_m \otimes \Phi )\left( \sum _{i,j} E_{ij} \otimes E_{ij}\right) \\= & {} (\text {id}_m \otimes \Phi ) \left( \sum _{i,j} e_ie_j^T \otimes e_ie_j^T\right) = (\text {id}_m \otimes \Phi )\left( \psi _m \psi _m^T\right) , \end{aligned}$$

where \(\psi _m = \sum _{i=1}^m e_i \otimes e_i \in {\mathbb {C}}^m \otimes {\mathbb {C}}^m\). Since \(\psi _m \psi _m^T \in {\mathcal {K}}\) for all \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\), and \(\text {id}_m\otimes \Phi \) is \({\mathcal {K}}\)-preserving, we have that \(C_\Phi \in {\mathcal {K}}\).

\((ii)\implies (iii)\). Assume that \(C:= C_\Phi \in {\mathcal {K}}\). By Lemma 4.3 it suffices to show that the Choi matrix of \(\textrm{id}_r \otimes \Phi \) is in \({\mathcal {K}}\) for all r. Using \(\text {id}_{rm} = \text {id}_r \otimes \text {id}_m\), we have that the Choi matrix of \(\text {id}_r \otimes \Phi \) is equal to

$$\begin{aligned}&(\text {id}_r \otimes \text {id}_m) \otimes (\text {id}_r \otimes \Phi ) \left( \sum _{i,j \in [r], \ \ell ,k \in [m]} (E_{ij} \otimes E_{\ell k}) \otimes (E_{ij} \otimes E_{\ell k}) \right) \\&\quad = \sum _{i,j \in [r], \ \ell ,k \in [m]} E_{ij} \otimes E_{\ell k} \otimes E_{ij} \otimes \Phi (E_{\ell k}). \end{aligned}$$

Up to a permutation of the tensors, this is equal to

$$\begin{aligned} \left( \sum _{i,j \in [r]} E_{ij} \otimes E_{ij}\right) \otimes C_\Phi = \psi _r\psi _r^T \otimes C_\Phi , \end{aligned}$$

which is in \({\mathcal {K}}\) since \({\mathcal {K}}\) is closed under tensor products. Since permuting the tensors is equivalent to conjugation by some permutation matrix, i.e., consistently relabeling rows and columns, this does not change whether the matrix is in \({\mathcal {K}}\) and thus we have proven the claim.

\((iii)\implies (i)\). Straightforward. \(\square \)

We conclude this section with a lemma we use repeatedly in the remainder of this article.

Lemma 4.5

Let \(D \in {\mathbb {C}}^{n \times n}\) be a matrix and let \(u \in {\mathbb {C}}^n\), \(v \in {\mathbb {C}}^n\). Then the following are equivalent:

  (1) \(D(u \bullet w) = v \bullet (Dw)\) for all \(w \in {\mathbb {C}}^n\).

  (2) \(D_{ij} = 0\) whenever \(v_{i} \ne u_{j}\).

  (3) \(D^\dag (v \bullet z) = u \bullet (D^\dag z)\) for all \(z \in {\mathbb {C}}^n\).

Proof

Consider the linear maps \(f(w) = D(u \bullet w)\) and \(g(w) = v \bullet (Dw)\). Letting \(D_j\) be the jth column of D and \(e_j\) the jth standard basis vector, it follows that \(f(e_j) = u_jD_j \) and \(g(e_j) = v \bullet D_j\). Consequently, \(D(u \bullet w) = v \bullet (Dw)\) for all \(w\in {\mathbb {C}}^n\) is equivalent to the statement

$$\begin{aligned} f(e_j) = g(e_j) \text { for all } j \in [n] \iff u_jD_j = v \bullet D_j \text { for all } j \in [n], \end{aligned}$$

which is in turn equivalent to \(D_{ij} = 0\) whenever \(u_j \ne v_i\). Lastly, to get the third equivalence, note that \(D_{ij} =0\) whenever \(v_i \ne u_j\) is equivalent to \(D^\dag _{ji}=0\) whenever \(u_j \ne v_i\). \(\square \)

Now consider a map \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) with Choi matrix M. Define \(\tilde{M}\) entrywise as \(\tilde{M}_{hh',gg'} = M_{gh,g'h'}\). It is straightforward to check that \({{\,\textrm{vec}\,}}(\Phi (X)) = \tilde{M}{{\,\textrm{vec}\,}}(X)\) for all \(X \in {\mathbb {C}}^{V_G \times V_G}\) and \({{\,\textrm{vec}\,}}(\Phi ^\dag (Y)) = \tilde{M}^\dag {{\,\textrm{vec}\,}}(Y)\) for all \(Y \in {\mathbb {C}}^{V_H \times V_H}\). Through this correspondence Lemma 4.5 immediately yields the following:
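
In numpy terms, \(\tilde{M}\) is obtained from M by reshaping and permuting axes; a short sketch with our conventions (row index \((g,h) \mapsto g\cdot |V_H|+h\) and row-major \({{\,\textrm{vec}\,}}\)):

```python
import numpy as np

def tilde(M, nG, nH):
    """tilde(M)[hh', gg'] = M[gh, g'h'], so that vec(Phi(X)) = tilde(M) @ vec(X)."""
    return (np.asarray(M).reshape(nG, nH, nG, nH)   # axes (g, h, g', h')
              .transpose(1, 3, 0, 2)                # reorder to (h, h', g, g')
              .reshape(nH * nH, nG * nG))
```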

Lemma 4.6

Let \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) be a linear map with Choi matrix M. For any fixed pair of matrices \(X \in {\mathbb {C}}^{V_G \times V_G}\) and \(Y \in {\mathbb {C}}^{V_H \times V_H}\) the following are equivalent:

  (1) \(\Phi (X \bullet W) = Y \bullet \Phi (W)\) for all \(W \in {\mathbb {C}}^{V_G \times V_G}\).

  (2) \(M_{gh,g'h'} = 0\) whenever \(X_{gg'} \ne Y_{hh'}\).

  (3) \(\Phi ^\dag (Y \bullet Z) = X \bullet \Phi ^\dag (Z)\) for all \(Z \in {\mathbb {C}}^{V_H \times V_H}\).

4.2 The Choi matrix as a \({\mathcal {K}}\)-isomorphism matrix

Definition 4.7

Consider two graphs G, H, a cone of matrices \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\), and a linear map \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow ~{\mathbb {C}}^{V_H \times V_H}\). We say that \(\Phi \) is a \({\mathcal {K}}\)-isomorphism map from G to H if it satisfies

$$\begin{aligned}&\Phi \text { is completely } {\mathcal {K}}\text {-preserving} \end{aligned}$$
(16)
$$\begin{aligned}&\Phi (I \bullet X) = I \bullet \Phi (X) \text { for all } X \in {\mathbb {C}}^{V_G \times V_G} \end{aligned}$$
(17)
$$\begin{aligned}&\Phi (A_G \bullet X) = A_H \bullet \Phi (X) \text { for all } X \in {\mathbb {C}}^{V_G \times V_G} \end{aligned}$$
(18)
$$\begin{aligned}&\Phi (J) = J = \Phi ^\dagger (J). \end{aligned}$$
(19)

The following result shows that the \({\mathcal {K}}\)-isomorphism maps are exactly the maps whose Choi matrices are \({\mathcal {K}}\)-isomorphism matrices.

Theorem 4.8

Consider two graphs G, H, a cone of matrices \({\mathcal {K}} \in \{{\mathcal {C}}{\mathcal {P}}, {{\mathcal{C}\mathcal{S}}_{+}}, {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}, {\mathcal {S}}_{+}\}\) and a linear map \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow ~{\mathbb {C}}^{V_H \times V_H}\). The following assertions are equivalent:

  1. (1)

    The Choi matrix of \(\Phi \) is a \({\mathcal {K}}\)-isomorphism matrix from G to H.

  2. (2)

    \(\Phi \) is a \({\mathcal {K}}\)-isomorphism map from G to H.

  3. (3)

    \(\Phi ^\dag \) is a \({\mathcal {K}}\)-isomorphism map from H to G.

Proof

\((1)\iff (2)\). Let \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) be a linear map and M its Choi matrix. Consider the matrix \(\tilde{M}\) with columns indexed by \(V_G \times V_G\) and rows indexed by \(V_H \times V_H\) defined as \(\tilde{M}_{hh',gg'} = M_{gh,g'h'}.\) By Lemma 4.4, we have that \(M \in {\mathcal {K}}\) if and only if \(\Phi \) is completely \({\mathcal {K}}\)-preserving, i.e.,

$$\begin{aligned} {}(6) \Leftrightarrow (16). \end{aligned}$$
(20)

Also, Conditions (3) and (4) are equivalent to \(\tilde{M}\) having all row and column sums equal to 1 respectively, which in turn are equivalent to \(\Phi (J) = J\) and \(\Phi ^\dag (J) = J\) respectively. Thus

$$ \begin{aligned} {}(3) \ \& \ (4) \Leftrightarrow (19). \end{aligned}$$
(21)

Lastly, we show that Condition (5) holding for M is equivalent to Conditions (17) and (18) holding for \(\Phi \). Indeed, by Lemma 4.6 we have that \(\Phi (A_G \bullet X) = A_H \bullet \Phi (X) \text { for all } X \in {\mathbb {C}}^{V_G \times V_G}\) is equivalent to \(M_{gh,g'h'} = 0\) whenever \((A_G)_{gg'} \ne (A_H)_{hh'}\), i.e., whenever (\(g \sim g'\) & \(h \not \sim h'\)) or (\(g \not \sim g'\) & \(h \sim h'\)). Similarly Lemma 4.6 implies that \(\Phi (I \bullet X) = I \bullet \Phi (X)\) is equivalent to \(M_{gh,g'h'} = 0\) whenever (\(g = g'\) & \(h \ne h'\)) or (\(g \ne g'\) & \(h = h'\)). Summarizing, we have that Conditions (17) and (18) holding for \(\Phi \) is equivalent to \(M_{gh,g'h'} = 0\) whenever \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\), i.e.,

$$ \begin{aligned} \big ((17) \ \& \ (18)\big ) \Leftrightarrow (5). \end{aligned}$$
(22)

Combining the equivalences in (20), (21), and (22) yields the equivalence of (1) and (2).

\((2) \iff (3)\). Follows immediately by Lemma 4.6. \(\square \)

Remark 4.9

We conclude this section by listing some further useful properties satisfied by isomorphism maps. First, note that (17) and (18) further imply that

$$\begin{aligned} \Phi (A_{\overline{G}} \bullet X) = A_{\overline{H}} \bullet \Phi (X) \text { for all } X \in {\mathbb {C}}^{V_G \times V_G}, \end{aligned}$$
(23)

since \(A_{\overline{G}} = J - I - A_G\) and J is the identity with respect to the Schur product. Furthermore, since \(\Phi (J)=J=\Phi ^\dag (J)\) it follows respectively by (17) and (18) that:

$$\begin{aligned} \Phi (I)=I \text { and } \Phi (A_G)=A_H. \end{aligned}$$
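For concreteness, these identities follow in one line by taking \(X = J\) in (17) and (18), since J is the Schur identity:

$$\begin{aligned} \Phi (I)=\Phi (I \bullet J)=I \bullet \Phi (J)=I \bullet J = I, \qquad \Phi (A_G)=\Phi (A_G \bullet J)=A_H \bullet \Phi (J)=A_H. \end{aligned}$$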

Lastly, the fact that \(\Phi \) is sum-preserving combined with \(\Phi (I)=I\) shows that G and H have the same number of vertices. Analogously, \(\Phi (A_G)=A_H\) implies that G and H have the same number of edges.

4.3 Some additional properties of isomorphism maps

Lemma 4.10

Consider a linear map \(\Phi : \mathbb {C}^{n \times n} \rightarrow \mathbb {C}^{n \times n}\) which is completely positive, trace-preserving, and unital. Then, for any matrix X such that \(\Phi ^\dagger (\Phi (X))=X\) we have that

$$\begin{aligned} \Phi (XW) = \Phi (X)\Phi (W) \ \text { and } \ \Phi (WX) = \Phi (W)\Phi (X), \text { for all matrices } W. \end{aligned}$$

Proof

The presented proof is a small modification of the arguments in [26]. Let \(Y = \Phi (X)\). As \(\Phi \) is completely positive it admits a Kraus decomposition \(\Phi (Z)=\sum _{i=1}^mK_i ZK_i^\dagger \). The crux of the proof is to show that

$$\begin{aligned} K_i X = YK_i\ \text { and }\ XK_i^\dagger = K_i^\dagger Y \text { for all } i \in [m]. \end{aligned}$$
(24)

Indeed, using that \(K_i X = YK_i\) for all i we have that for any matrix W:

$$\begin{aligned} \Phi (XW) = \sum _i K_i XW K_i^\dagger = \sum _i YK_i W K_i^\dagger = Y\Phi (W)=\Phi (X)\Phi (W). \end{aligned}$$

Similarly, if \(XK_i^\dagger = K_i^\dagger Y\) for all i it follows that \(\Phi (WX) = \Phi (W)Y\).

To prove (24), set \({{\mathcal {Z}}_1}= \sum _i (K_i X -YK_i)(K_i X - YK_i)^\dagger \) and note that

$$\begin{aligned} {{\mathcal {Z}}_1}&= \sum _i (K_i X - YK_i)(X^\dagger K_i^\dagger - K_i^\dagger Y^\dagger ) \\&= \sum _i K_i X X^\dagger K_i^\dagger - \sum _i K_i X K_i^\dagger Y^\dagger - \sum _i YK_i X^\dagger K_i^\dagger + \sum _iYK_i K_i^\dagger Y^\dagger \\&= \Phi (XX^\dagger ) - \Phi (X) Y^\dagger - Y\Phi (X^\dagger ) + Y Y^\dagger \\&= \Phi (XX^\dagger ) - YY^\dagger , \end{aligned}$$

where to get the last equality we used the assumption \(\Phi (X)=Y\) and that \(\Phi (X^\dagger )=Y^\dagger \), the latter following by Lemma 4.2 (iv). As \({{\mathcal {Z}}_1}\) is psd (since it is the sum of psd matrices) we have that

$$\begin{aligned} 0 \le {{\,\textrm{Tr}\,}}({{\mathcal {Z}}_1}) = {{\,\textrm{Tr}\,}}\left( \Phi (XX^\dagger ) - YY^\dagger \right) ={{\,\textrm{Tr}\,}}(XX^\dagger - YY^\dagger ), \end{aligned}$$
(25)

where for the last equality we used that \({{\,\textrm{Tr}\,}}(\Phi (XX^\dagger )) ={{\,\textrm{Tr}\,}}(XX^\dagger ) \) since \(\Phi \) is trace preserving.

In a similar manner as above, we define the psd matrix

$$\begin{aligned} {\mathcal {Z}}_2=\sum _i(XK_i^\dagger -K_i^\dagger Y) (XK_i^\dagger -K_i^\dagger Y)^\dagger , \end{aligned}$$

and using that \(\Phi ^\dag (Y) = X\) and that \(\Phi ^\dag \) is trace-preserving (as \(\Phi \) is unital) we get that

$$\begin{aligned} 0\le {{\,\textrm{Tr}\,}}({\mathcal {Z}}_2)={{\,\textrm{Tr}\,}}(YY^\dagger - XX^\dagger ). \end{aligned}$$
(26)

Combining (25) and (26) we get that \({{\,\textrm{Tr}\,}}({\mathcal {Z}}_1) = {{\,\textrm{Tr}\,}}({\mathcal {Z}}_2)={{\,\textrm{Tr}\,}}(XX^\dagger -YY^\dagger ) = 0\).

As \({\mathcal {Z}}_1\) and \({\mathcal {Z}}_2\) are psd, this further implies that \({\mathcal {Z}}_1 ={\mathcal {Z}}_2= 0\). As \({\mathcal {Z}}_1\) is the sum of psd matrices, every term in the sum \({\mathcal {Z}}_1=\sum _i (K_i X - YK_i)(K_i X - YK_i)^\dagger \) is equal to zero, which in turn implies that \(K_i X - YK_i = 0\) for all i, i.e., that \(K_i X = YK_i\). Similarly, every term in the sum \({\mathcal {Z}}_2=\sum _i(XK_i^\dagger -K_i^\dagger Y)(XK_i^\dagger -K_i^\dagger Y)^\dagger \) is zero and thus \(XK_i^\dagger =K_i^\dagger Y\) for all i. \(\square \)

Lemma 4.11

Consider a linear map \(\Phi : \mathbb {C}^{n \times n} \rightarrow \mathbb {C}^{n \times n}\) which is completely positive, trace-preserving, and unital. Then, for any pair of Hermitian matrices X, Y such that \(\Phi (X)=Y\) and \(\Phi ^\dag (Y)=X\) we have that X and Y are cospectral, and furthermore, if \(E_\lambda \) and \(F_\lambda \) are projections onto the \(\lambda \)-eigenspaces of X and Y respectively, then \(\Phi (E_\lambda ) = F_\lambda \) and \(\Phi ^\dag (F_\lambda ) = E_\lambda \).

Proof

By Lemma 4.10, we have that \(\Phi (XW) = Y\Phi (W)\) for any matrix W and therefore \(\Phi (f(X)) = f(Y)\) for any polynomial f. Let \(\lambda \) be an eigenvalue for X and let \(E_\lambda \) be the corresponding orthogonal projector. Then, we have that

$$\begin{aligned} \Phi (E_\lambda )= \Phi (E_\lambda ^2)=\Phi (E_\lambda )^2, \end{aligned}$$
(27)

where the first equality follows as \(E_\lambda \) is a projector and the second one as \(E_\lambda \) can be written as a polynomial in X, concretely \(E_\lambda =\prod _{\tau \ne \lambda }{(X-\tau I)\over \lambda -\tau }\). Consequently, \(\Phi (E_\lambda )\) is an orthogonal projector since it is idempotent by (27), and Hermitian since \(\Phi (E_\lambda )^\dag =\Phi (E_\lambda ^\dag )=\Phi (E_\lambda )\). Moreover, since \(\Phi \) is trace-preserving and the rank of a projector is equal to its trace, the rank of \(\Phi (E_\lambda )\) is equal to that of \(E_\lambda \). Furthermore,

$$\begin{aligned} Y\Phi (E_\lambda ) =\Phi (X)\Phi (E_\lambda )= \Phi (X E_\lambda ) = \Phi (\lambda E_\lambda ) = \lambda \Phi (E_\lambda ), \end{aligned}$$

which means that the range of \(\Phi (E_\lambda )\) is contained in the \(\lambda \)-eigenspace of Y. Summarizing, we showed that if \(\lambda \) is an eigenvalue of X then it is also an eigenvalue of Y, and furthermore \(\textrm{mult}(Y,\lambda )\ge \textrm{mult}(X,\lambda )\). The symmetric argument shows that X and Y have the same multiset of eigenvalues, i.e., they are cospectral. Lastly, combining the inclusion \(\textrm{range}(\Phi (E_\lambda ))\subseteq \textrm{Ker}(Y-\lambda I)\) with the fact that both subspaces have the same dimension, it follows that \(\Phi (E_\lambda ) = F_\lambda \) and similarly \(\Phi ^\dagger (F_\lambda ) = E_\lambda \). \(\square \)
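The interpolation formula \(E_\lambda =\prod _{\tau \ne \lambda }(X-\tau I)/(\lambda -\tau )\) used in this proof is easy to check numerically. A small sketch, assuming numpy; the Hermitian matrix below, with spectrum \(\{3,3,1,1,-2\}\), is chosen only for illustration:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
X = Q @ np.diag([3., 3., 1., 1., -2.]) @ Q.T   # Hermitian, eigenvalues 3, 1, -2

lam, others = 3., [1., -2.]
# E_lambda as the Lagrange interpolation polynomial evaluated at X
E = reduce(lambda P, t: P @ (X - t * np.eye(5)) / (lam - t), others, np.eye(5))
F = Q[:, :2] @ Q[:, :2].T                      # true projector onto the 3-eigenspace
assert np.allclose(E, F)
```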

5 Some tools from algebra

For our characterizations of \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)- and \({\mathcal {S}}_{+}\)-isomorphism, we will need some tools from algebra. The first result was used in [9] and says that certain isomorphisms between matrix algebras can be realized as conjugation by a unitary matrix.

Lemma 5.1

If \({\mathcal {A}}\) and \({\mathcal {B}}\) are self-adjoint unital subalgebras of \(\mathbb {C}^{n \times n}\) and \(\phi : {\mathcal {A}} \rightarrow {\mathcal {B}}\) is a trace-preserving isomorphism such that \(\phi (X^\dagger ) = \phi (X)^\dagger \) for all \(X \in {\mathcal {A}}\), then there exists a unitary matrix \(U \in \mathbb {C}^{n \times n}\) such that

$$\begin{aligned} \phi (X) = UXU^\dagger \text { for all } X \in {\mathcal {A}}. \end{aligned}$$

Lemma 5.2

Suppose that G is a graph with partially coherent algebra \(\hat{{\mathcal {A}}}_G\), and let \({\mathcal {D}} = \{I \bullet X: X \in \hat{{\mathcal {A}}}_G\}\). Then, \({\mathcal {D}}\) is a subalgebra of \(\hat{{\mathcal {A}}}_G\) and there exists an orthogonal basis of \({\mathcal {D}}\) consisting of diagonal 01 matrices.

Proof

By definition \(\hat{{\mathcal {A}}}_G\) is closed under Schur product with I, and thus \({\mathcal {D}} \subseteq \hat{{\mathcal {A}}}_G\). Since \(\hat{{\mathcal {A}}}_G\) is a vector space, and \(X \mapsto I \bullet X\) is the projection onto the vector space of diagonal matrices, we have that \({\mathcal {D}}\) is a subspace of \(\hat{{\mathcal {A}}}_G\), and that \(I \in {\mathcal {D}}\). Also, since \(\hat{{\mathcal {A}}}_G\) is closed under matrix product and the product of any two diagonal matrices is diagonal, we have that \({\mathcal {D}}\) is closed under matrix product.

For any matrix \(D \in {\mathcal {D}}\), let \(\alpha _1, \ldots , \alpha _k \in \mathbb {C}\) be the distinct values appearing on the diagonal of D and let \(I_1, \ldots , I_k\) be the 01 diagonal matrices indicating where these values occur, so that \(D = \sum _i \alpha _i I_i\) and \(\sum _i I_i = I\). It remains to show that all matrices \(I_i\) lie in \({\mathcal {D}}\). For this, note that for any \(i \in [k]\) we have that

$$\begin{aligned} I_i = \prod _{j \ne i} \frac{1}{\alpha _i - \alpha _j}(D - \alpha _j I), \end{aligned}$$

which shows that \(I_i\in {\mathcal {D}}\) as \({\mathcal {D}}\) is a subalgebra of \(\hat{{\mathcal {A}}}_G\) with \(I,D\in {\mathcal {D}}\). From here it is easy to see that there exists a set of 01 diagonal matrices in \({\mathcal {D}}\) whose nonzero entries are disjoint and which span \({\mathcal {D}}\). This is our desired basis of \({\mathcal {D}}\). \(\square \)

In order to use Lemma 5.1, we need to prove that equivalences and partial equivalences are trace-preserving. This was done for equivalences in [9], and their proof can be used to prove the same for partial equivalences, which we do here.

Lemma 5.3

Suppose that G and H are graphs with partially coherent algebras \(\hat{{\mathcal {A}}}_G\) and \(\hat{{\mathcal {A}}}_H\) respectively. If \(\phi : \hat{{\mathcal {A}}}_G \rightarrow \hat{{\mathcal {A}}}_H\) is a partial equivalence of G and H, then \(\phi \) is trace-preserving.

Proof

For any \(X \in \hat{{\mathcal {A}}}_G\) we have that \({{\,\textrm{Tr}\,}}(\phi (X)) = {{\,\textrm{Tr}\,}}(I \bullet \phi (X)) = {{\,\textrm{Tr}\,}}(\phi (I \bullet X))\). Thus it suffices to show that \(\phi \) is trace-preserving on the subalgebra \({\mathcal {D}} = \{I \bullet X: X \in \hat{{\mathcal {A}}}_G\}\). By Lemma 5.2, there exist diagonal 01 matrices \(I_1, \ldots , I_d\) that form an orthogonal basis of \({\mathcal {D}}\). By linearity, it suffices to show that \(\phi \) preserves the trace of each individual \(I_i\).

First, since \(\phi \) is a partial equivalence, we have by definition that

$$\begin{aligned} \phi (I_i) = \phi (I \bullet I_i) = I \bullet \phi (I_i), \end{aligned}$$

and therefore we have that \(\phi (I_i)\) is diagonal for all i. Moreover, since \(I_i^2 = I_i\) for all \(i \in [d]\), we have that

$$\begin{aligned} \phi (I_i)^2 = \phi (I_i^2) = \phi (I_i), \end{aligned}$$

and therefore \(\phi (I_i)\) is a 01 diagonal matrix for all \(i \in [d]\). Let \(n_i = {{\,\textrm{Tr}\,}}(I_i)\) be the number of 1’s in \(I_i\), and let \(n'_i = {{\,\textrm{Tr}\,}}(\phi (I_i))\) be the number of 1’s in \(\phi (I_i)\). We aim to show that \(n'_i = n_i\).

Recall that \(J \in \hat{{\mathcal {A}}}_G\), and thus \(J_i:= I_i J I_i \in \hat{{\mathcal {A}}}_G\) for all \(i \in [d]\). Let \(J'_i:=\phi (J_i) = \phi (I_i)J\phi (I_i)\). It is easy to see that \(J_i^2 =n_i J_i\), and similarly \((J'_i)^2 = n'_i J'_i\). Therefore,

$$\begin{aligned} \phi (J_i)^2 = \phi (J_i^2) = \phi (n_i J_i) = n_i \phi (J_i). \end{aligned}$$

However, we also have that

$$\begin{aligned} \phi (J_i)^2 = (J'_i)^2 = n'_i J'_i = n'_i \phi (J_i). \end{aligned}$$

Since \(\phi \) is a linear bijection and \(J_i \ne 0\), we have \(\phi (J_i) \ne 0\), so comparing the two expressions gives \(n'_i = n_i\), and we are done. \(\square \)

Lastly, we will need to use the fact that the orthogonal projection onto a unital self-adjoint algebra is a completely positive map, see [4, Theorems 1.5.10 and 1.5.11].

Lemma 5.4

Let \({\mathcal {A}}\) be a self-adjoint subalgebra of \(\mathbb {C}^{n \times n}\) containing the identity. If \(\Pi \) is the orthogonal projection onto \({\mathcal {A}}\), then \(\Pi \) is a CPTP unital map.

6 Characterizing \({\mathcal {S}}_{+} \)-isomorphic graphs

6.1 Partially coherent algebras

Suppose that S is some subset of \(\mathbb {C}^{n \times n}\). We say that an algebra \({\mathcal {A}}\) is an S-partially coherent algebra if \({\mathcal {A}}\)

  1. 1.

    is unital;

  2. 2.

    is self-adjoint;

  3. 3.

    contains the all ones matrix;

  4. 4.

    is closed under Schur multiplication by any matrix in S.

Note that the last two properties above imply that any S-partially coherent algebra must contain every element of S. On the other hand, any coherent algebra containing S is S-partially coherent. The smallest example that we know of an S-partially coherent algebra that is not a coherent algebra is the algebra of polynomials in the adjacency matrix of the Hoffman graph. We have verified by computer that this algebra is not a coherent algebra, but that it is an S-partially coherent algebra for \(S = \{I,A\}\), where A is the adjacency matrix of the Hoffman graph; we will encounter this graph again in Sect. 8.

For us, the set S will almost always be the set \(\{I,A_G\}\) for some graph G, and we will be interested in the smallest S-partially coherent algebra. Let us go into some detail about this here. It is not difficult to see that if \({\mathcal {A}}_\alpha \) is an S-partially coherent algebra for all \(\alpha \) in some index set \(\Omega \), then \(\cap _{\alpha \in \Omega } {\mathcal {A}}_\alpha \) is an S-partially coherent algebra. Therefore, the intersection of all S-partially coherent algebras, which we will denote by \({\mathcal {A}}_S\), is an S-partially coherent algebra and it is necessarily the unique minimal one. On the other hand, we can also consider the set \({\mathcal {A}}'_S\) of matrices that can be expressed using the elements of \(S \cup \{I,J\}\) and a finite number of the operations of addition, scalar multiplication, matrix multiplication, conjugate transposition, and Schur multiplication where at least one of the factors is an element of S. We claim that this is an S-partially coherent algebra. Clearly \({\mathcal {A}}'_S\) contains I and J. Moreover, if \(M,M' \in {\mathcal {A}}'_S\), then \(MM'\) can be written as the product of two expressions of the form described above and this product is itself such an expression, therefore \({\mathcal {A}}'_S\) is closed under multiplication. We can similarly show that \({\mathcal {A}}'_S\) is an algebra which is self-adjoint and closed under Schur multiplication by any matrix in S. Finally, we claim that \({\mathcal {A}}'_S = {\mathcal {A}}_S\). To show this it suffices to show that \({\mathcal {A}}'_S \subseteq {\mathcal {A}}_S\). But of course this holds since \({\mathcal {A}}'_S\) is simply the closure of \(S \cup \{I,J\}\) under all the operations an S-partially coherent algebra must be closed under.

We define the partially coherent algebra of a graph G, denoted \(\hat{{\mathcal {A}}}_G\), to be the minimal S-partially coherent algebra where \(S = \{I, A_G\}\). Note that this will also be \(S'\)-partially coherent for \(S' = \{I, A_G, A_{\overline{G}}\}\) since \(A_{\overline{G}} = J - I - A_G\) and J is the Schur identity.
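The closure description above translates directly into a procedure for computing the minimal S-partially coherent algebra: track an orthonormal basis of the current span and close it under the required operations until the dimension stabilizes. A naive sketch, assuming numpy; the function names are ours:

```python
import numpy as np

def orth_basis(mats, tol=1e-9):
    """Orthonormal basis (as matrices) of the span of `mats`, via an SVD."""
    n = mats[0].shape[0]
    A = np.array([M.reshape(-1) for M in mats])
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    return [Vt[i].reshape(n, n) for i in range(int((s > tol).sum()))]

def partially_coherent_algebra(S, n):
    """Minimal S-partially coherent subalgebra of the n x n matrices."""
    basis = orth_basis(list(S) + [np.eye(n), np.ones((n, n))])
    while True:
        new = list(basis)
        new += [X @ Y for X in basis for Y in basis]   # matrix products
        new += [A * X for A in S for X in basis]       # Schur products with S
        new += [X.conj().T for X in basis]             # conjugate transposes
        new_basis = orth_basis(new)
        if len(new_basis) == len(basis):               # closed, so we are done
            return basis
        basis = new_basis
```

Since products of spans are spanned by products of basis elements, closing a basis closes the span, and the loop terminates because the dimension is bounded by \(n^2\). For \(S = \{I, A_G\}\) this computes \(\hat{{\mathcal {A}}}_G\).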

Definition 6.1

Let G and H be graphs with adjacency matrices \(A_G\) and \(A_H\) and partially coherent algebras \(\hat{{\mathcal {A}}}_G\) and \(\hat{{\mathcal {A}}}_H\) respectively. We say that G and H are partially equivalent if there exists a linear bijection \(\phi : \hat{{\mathcal {A}}}_G \rightarrow \hat{{\mathcal {A}}}_H\) such that

  1. 1.

    \(\phi (M^\dagger ) = \phi (M)^\dagger \) for all \(M \in \hat{{\mathcal {A}}}_G\);

  2. 2.

    \(\phi (MN) = \phi (M)\phi (N)\) for all \(M,N \in \hat{{\mathcal {A}}}_G\);

  3. 3.

    \(\phi (I) = I\), \(\phi (A_G) = A_H\), and \(\phi (J) = J\);

  4. 4.

    \(\phi (M\bullet N) = \phi (M)\bullet \phi (N)\) for all \(M \in \{I,A_G\}\) and \(N \in \hat{{\mathcal {A}}}_G\).

We refer to \(\phi \) as a partial equivalence of G and H.

Note that the conditions above imply that \(\phi (A_{\overline{G}}) =A_{\overline{H}}\) where \(A_{\overline{G}}\) and \(A_{\overline{H}}\) are the adjacency matrices of the complements of G and H respectively. Furthermore, they also imply that \(\phi (A_{\overline{G}} \bullet N) = A_{\overline{H}} \bullet \phi (N)\) for all \(N \in \hat{{\mathcal {A}}}_G\). Note that if it exists, a partial equivalence \(\phi \) of graphs G and H is unique. This is because any matrix \(M \in \hat{{\mathcal {A}}}_G\) can be written as an expression in IJ, and \(A_G\) using the appropriate operations, and by definition of partial equivalence, \(\phi \) must map this to a matrix \(M'\) that is equal to the same expression except with \(A_G\) replaced with \(A_H\). Thus the image of every matrix in \(\hat{{\mathcal {A}}}_G\) is completely determined.

6.2 The characterization

Theorem 6.2

Two graphs G and H are partially equivalent if and only if \(G \cong _{{\mathcal {S}}_{+}} H\).

Proof

Assume that \(G \cong _{{\mathcal {S}}_{+}} H\) and let M be an \({\mathcal {S}}_{+}\)-isomorphism matrix. Let \(\Phi : \mathbb {C}^{V_G \times V_G} \rightarrow \mathbb {C}^{V_H \times V_H}\) be the linear map whose Choi matrix is M. As M is an \({\mathcal {S}}_{+}\)-isomorphism matrix it follows by Theorem 4.8 that \(\Phi \) is an \({\mathcal {S}}_{+}\)-isomorphism map, i.e., it satisfies Conditions (16)–(19). Furthermore, as already noted in Remark 4.9, the above properties also imply that \(\Phi (I) = I\), \( \Phi (A_G) = A_H \) and that \(|V_G|=|V_H|=:n\). Additionally, by Theorem 4.8 the adjoint map \(\Phi ^\dagger \) is an \({\mathcal {S}}_{+}\)-isomorphism map from H to G, i.e., it satisfies:

$$\begin{aligned} \Phi ^\dagger (I \bullet X) = I \bullet \Phi ^\dagger (X)\ \text { and } \ \Phi ^\dagger (A_H \bullet X) = A_G \bullet \Phi ^\dagger (X), \text { for all } X \in \mathbb {C}^{n \times n}. \end{aligned}$$
(28)

Now, (28) combined with \(\Phi ^\dagger (J)=J\) implies that \(\Phi ^\dag (I) = I \) and \(\Phi ^\dag (A_H) = A_G.\)

Summarizing, we have determined that \(\Phi \) is completely positive, unital, trace-preserving, and

$$\begin{aligned} \Phi (I)=I=\Phi ^\dag (I), \quad \Phi (J)=J=\Phi ^\dag (J), \quad \Phi (A_G)=A_H, \quad \Phi ^\dag (A_H)=A_G. \end{aligned}$$

Consequently, Lemma 4.10 implies that for any \( W\in {\mathbb {C}}^{n\times n}\) we have that \(\Phi (XW) = \Phi (X)\Phi (W) \) and \(\Phi (WX)=\Phi (W)\Phi (X)\) whenever \(X\in \{I,J,A_G\}\). Furthermore, by Lemma 4.6 and Condition (5) of isomorphism matrices, we have that for any \(W \in {\mathbb {C}}^{n \times n}\),

$$\begin{aligned} \Phi (I \bullet W) = I \bullet \Phi (W), \ \Phi (A_G \bullet W) = A_H \bullet \Phi (W), \ \Phi (A_{\overline{G}} \bullet W) = A_{\overline{H}} \bullet \Phi (W), \end{aligned}$$

and similarly for \(\Phi ^\dag \).

Consequently, any finite expression involving \(I, A_G, A_{\overline{G}}\) and the operations of addition, scalar multiplication, matrix multiplication, and Schur multiplication where at least one of the factors is I or \(A_G\), will be mapped by \(\Phi \) to the same expression with \(A_G\) and \(A_{\overline{G}}\) replaced with \(A_H\) and \(A_{\overline{H}}\) respectively. Further, \(\Phi ^\dag \) is the inverse of \(\Phi \) on such expressions. This means that the restriction of \(\Phi \) to the partially coherent algebra of G is a partial equivalence.

Conversely, suppose that \(\phi : \hat{{\mathcal {A}}}_G \rightarrow \hat{{\mathcal {A}}}_H\) is a partial equivalence of G and H. By Lemma 5.1, there exists a unitary matrix U such that \(\phi (X) = UXU^\dagger \) for all \(X \in \hat{{\mathcal {A}}}_G\). Let \(\hat{\phi }: {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) be defined as \(\hat{\phi }(X) = UXU^\dagger \) for all \(X \in {\mathbb {C}}^{V_G \times V_G}\). Obviously, \(\phi \) and \(\hat{\phi }\) agree on \(\hat{{\mathcal {A}}}_G\). Also, \(\hat{\phi }\) is a CPTP unital map with adjoint \(\hat{\phi }^\dag (X) = \hat{\phi }^{-1}(X) = U^\dagger X U\). Let \(\Pi : {\mathbb {C}}^{V_G \times V_G} \rightarrow \hat{{\mathcal {A}}}_G\) be the orthogonal projection onto the partially coherent algebra of G and define the composition

$$\begin{aligned} \Phi = \hat{\phi } \circ \Pi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}. \end{aligned}$$

Since \(\Pi \) is a CPTP unital map by Lemma 5.4, we have that \(\Phi \) is the composition of two CPTP unital maps and is thus CPTP and unital itself. We show that \(\Phi \) is an \({\mathcal {S}}_{+}\)-isomorphism map from G to H, and thus, Theorem 4.8 implies that its Choi matrix is an \({\mathcal {S}}_{+}\)-isomorphism matrix.

We have already established that \(\Phi \) is completely \({\mathcal {S}}_{+}\)-preserving, i.e., completely positive. Since \(J \in \hat{{\mathcal {A}}}_G\), we have \(\Pi (J) = J\). Also, since \(\phi \) is a partial equivalence it satisfies \(\phi (J)=J\), and consequently \(\hat{\phi }(J)=\phi (J)= J\). Therefore, it follows that \(\Phi (J)= \hat{\phi } \circ \Pi (J)= J\). On the other hand, we have that \(\Phi ^\dag = \Pi ^\dag \circ \hat{\phi }^\dag = \Pi \circ \hat{\phi }^{-1}\) and thus \(\Phi ^\dag (J) = J\). So \(\Phi \) satisfies property (19). Thus it only remains to show that \(\Phi \) satisfies properties (17) and (18).

We first aim to show that \(\Phi (A_G \bullet X) = A_H \bullet \Phi (X)\) for all \(X \in {\mathbb {C}}^{V_G \times V_G}\). Let \(\Lambda : {\mathbb {C}}^{V_G \times V_G}\rightarrow {\mathbb {C}}^{V_G \times V_G}\) be the linear map defined by \(\Lambda (X) = A_G \bullet X\). It is easy to see that \(\Lambda \) is a self-adjoint projection onto a subspace of \({\mathbb {C}}^{V_G \times V_G}\). Since \(A_G \bullet X \in \hat{{\mathcal {A}}}_G\) for all \(X \in \hat{{\mathcal {A}}}_G\), we have that

$$\begin{aligned} \Pi \circ \Lambda \circ \Pi = \Lambda \circ \Pi . \end{aligned}$$

It follows that

$$\begin{aligned} \Pi \circ \Lambda = \Pi ^\dag \circ \Lambda ^\dag = (\Lambda \circ \Pi )^\dag = (\Pi \circ \Lambda \circ \Pi )^\dag = \Pi \circ \Lambda \circ \Pi = \Lambda \circ \Pi , \end{aligned}$$

i.e., that \(\Lambda \) and \(\Pi \) commute. Therefore,

$$\begin{aligned} \Phi (A_G \bullet X)= & {} \hat{\phi } \circ \Pi \circ \Lambda (X) = \hat{\phi } \circ \Lambda \circ \Pi (X) = \hat{\phi }(A_G \bullet \Pi (X)) \\= & {} A_H \bullet \hat{\phi }(\Pi (X)) = A_H \bullet \Phi (X), \end{aligned}$$

where the second to last equality uses the fact that \(\Pi (X) \in \hat{{\mathcal {A}}}_G\). So \(\Phi \) satisfies property (18).

We can similarly show that \(\Phi (I \bullet X) = I \bullet \Phi (X)\) and \(\Phi (A_{\overline{G}} \bullet X) = A_{\overline{H}} \bullet \Phi (X)\), i.e., that \(\Phi \) satisfies property (17). Therefore \(\Phi \) is an \({\mathcal {S}}_{+}\)-isomorphism map from G to H and we are done. \(\square \)

6.3 Necessary conditions for \({\mathcal {S}}_{+}\)-isomorphism

Lemma 6.3

If \(G\cong _{{\mathcal {S}}_{+}} H\), then they have cospectral adjacency matrices, as do their complements.

Proof

Assume that \(G\cong _{{\mathcal {S}}_{+}} H\). By Theorem 6.2 there exists an \({\mathcal {S}}_{+}\)-isomorphism map \(\Phi \) from G to H. As we have already seen, the map \(\Phi \) satisfies \( \Phi (A_G) = A_H, \ \Phi ^\dag (A_H)=A_G, \ \Phi (A_{\overline{G}})=A_{\overline{H}}, \ \Phi ^\dag (A_{\overline{H}})=A_{\overline{G}},\) and the claim follows immediately by Lemma 4.11. \(\square \)

We note that the above result in the special case of quantum isomorphic graphs was proved in [1]. Moreover, there are other types of cospectrality that one can consider. Another common cospectrality relation is in terms of the (combinatorial) Laplacian of a graph G, defined as the matrix \(L = D - A_G\), where D is the diagonal matrix of vertex degrees and \(A_G\) is the adjacency matrix.

Lemma 6.4

If \(G\cong _{{\mathcal {S}}_{+}} H\), then they have cospectral Laplacian matrices, as do their complements.

Proof

It is easy to see that if \(A_G\) is the adjacency matrix of G, then \(I \bullet A_G^2 - A_G\) is the Laplacian of G. Suppose that \(\Phi \) is an \({\mathcal {S}}_{+}\)-isomorphism map from G to H. Then we have that

$$\begin{aligned} \Phi (I \bullet A_G^2 - A_G) = I \bullet \Phi (A_G)^2 - \Phi (A_G) = I \bullet A_H^2 - A_H, \end{aligned}$$

which is of course the Laplacian of H. Similarly, we have that \(\Phi ^\dag (I \bullet A_H^2 - A_H) = I \bullet A_G^2 - A_G\), and by Lemma 4.11 this implies that the Laplacians of G and H have the same eigenvalues. \(\square \)
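The identity \(L = I \bullet A_G^2 - A_G\) used here is worth a one-line check: the diagonal of \(A_G^2\) counts closed walks of length two, i.e., vertex degrees. A sketch assuming numpy, on a random simple graph:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.triu(rng.integers(0, 2, (6, 6)), 1)
A = (A + A.T).astype(float)                        # a random simple graph

L = np.diag(np.diag(A @ A)) - A                    # I • A^2 − A
assert np.allclose(L, np.diag(A.sum(axis=1)) - A)  # equals D − A
```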

One can similarly show that \({\mathcal {S}}_{+}\)-isomorphic graphs are cospectral with respect to their signless or normalized Laplacians, as well as many other similarly constructed matrices. An important property of the Laplacian of a graph G is that the number of connected components of G is equal to the multiplicity of zero as an eigenvalue of its Laplacian [3, Proposition 1.3.7]. Therefore, we have the following.

Corollary 6.5

If \(G\cong _{{\mathcal {S}}_{+}} H\), then they have the same number of connected components, as do their complements.

Another property preserved by \({\mathcal {S}}_{+}\)-isomorphism has to do with the number of walks in a graph. We say that a graph G is walk-regular if the number of walks of length \(\ell \) beginning and ending at a vertex of G is independent of the choice of vertex. Equivalently, for every \(\ell \in {\mathbb {N}}\) there exists a number \(a_\ell \in {\mathbb {N}}\) such that \(I \bullet A_G^\ell = a_\ell I\). We also say that a graph is 1-walk-regular if it is walk-regular and for every \(\ell \in {\mathbb {N}}\) there exists \(b_\ell \in {\mathbb {N}}\) such that \(A_G \bullet A_G^\ell = b_\ell A_G\). Obviously, this means that the number of walks of length \(\ell \) starting at one end of an edge and ending at the other does not depend on the edge. It turns out that \({\mathcal {S}}_{+}\)-isomorphism preserves both of the aforementioned properties:

Lemma 6.6

If G and H are \({\mathcal {S}}_{+}\)-isomorphic graphs, then G is walk-regular if and only if H is walk-regular. The same holds for 1-walk-regularity.

Proof

Suppose G is walk-regular and let \(a_\ell \) for \(\ell \in {\mathbb {N}}\) satisfy \(I \bullet A_G^\ell = a_\ell I\). By Theorem 6.2 there exists an \({\mathcal {S}}_{+}\)-isomorphism map \(\Phi \) from G to H. Then, we have that

$$\begin{aligned} I \bullet A_H^\ell = \Phi (I \bullet A_G^\ell ) = \Phi (a_\ell I) = a_\ell I, \end{aligned}$$

and thus H is walk-regular. Essentially the same proof works for 1-walk-regularity.

\(\square \)
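Both properties are straightforward to test mechanically: by the Cayley–Hamilton theorem, \(A_G^\ell \) for \(\ell \ge n\) is a linear combination of \(I, A_G, \ldots , A_G^{n-1}\), so it suffices to check \(\ell < n\). A sketch assuming numpy; the helper names are ours, and we assume the graph has at least one edge:

```python
import numpy as np

def is_walk_regular(A, tol=1e-8):
    n = len(A)
    P = np.eye(n)
    for _ in range(n):                # powers beyond n-1 add nothing new
        d = np.diag(P)
        if not np.allclose(d, d[0], atol=tol):   # I • A^l must equal a_l I
            return False
        P = P @ A
    return True

def is_one_walk_regular(A, tol=1e-8):
    if not is_walk_regular(A):
        return False
    n = len(A)
    i, j = map(int, np.argwhere(A > 0)[0])       # a reference edge
    P = np.eye(n)
    for _ in range(n):
        if not np.allclose(P[A > 0], P[i, j], atol=tol):  # A • A^l = b_l A
            return False
        P = P @ A
    return True
```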

Walk-regularity also turns out to be related to the partially coherent algebra of a graph. Below we refer to the algebra of polynomials in the adjacency matrix of a graph G as the adjacency algebra of G.

Lemma 6.7

If the adjacency algebra of a graph G is equal to its partially coherent algebra, then G is connected and walk-regular. The converse implication does not hold.

Proof

Let \(\hat{{\mathcal {A}}}_G\) be the partially coherent algebra of G and assume that this is equal to the adjacency algebra of G. Since \(J \in \hat{{\mathcal {A}}}_G\) by definition, we have that J is contained in the adjacency algebra of G which happens if and only if G is connected and regular [12, Theorem 1]. So it only remains to show that G is walk-regular.

Consider the subspace \({\mathcal {D}} = \{I \bullet X: X \in \hat{{\mathcal {A}}}_G\}\). By Lemma 5.2, there exists an orthogonal basis of \({\mathcal {D}}\) consisting of diagonal 01-matrices. Let \(\{D_1, \ldots , D_r\}\) be this basis and suppose that \(r > 1\). This implies that \(D_1\) is not the identity matrix and therefore \(D_1J \in \hat{{\mathcal {A}}}_G\) is not symmetric. This contradicts the assumption that \(\hat{{\mathcal {A}}}_G\) is equal to the adjacency algebra of G, which obviously contains only symmetric matrices. Therefore, we have that \(r = 1\) and \({\mathcal {D}}\) is just the span of the identity matrix. However, since \(A_G^\ell \in \hat{{\mathcal {A}}}_G\), we have that \(I \bullet A_G^\ell \in {\mathcal {D}}\). Therefore, for any \(\ell \in {\mathbb {N}}\), we have that there exists a number \(a_\ell \) such that \(I \bullet A_G^\ell = a_\ell I\), i.e., G is walk-regular.

To show that the converse does not hold, consider the 10-cycle \(C_{10}\), and let G be the graph with vertex set \(V(C_{10})\) such that two vertices are adjacent if they are at distance one or two in \(C_{10}\) (see Fig. 1). Note that G is vertex transitive and therefore walk-regular, and it is obviously connected. We will show that the adjacency matrix of \(C_{10}\) is contained in the partially coherent algebra of G, but not in its adjacency algebra. For the former claim, it is straightforward to show (or simply compute) that the adjacency matrix of \(C_{10}\) is equal to \(A_G \bullet A_G^2 - A_G \in \hat{{\mathcal {A}}}_G\). For the latter, if the adjacency algebra of G contained the adjacency matrix of \(C_{10}\), then it would contain its entire adjacency algebra. However, the dimension of the adjacency algebra of a graph is equal to the number of distinct eigenvalues of its adjacency matrix. For \(C_{10}\) this dimension is 6, but for G it is 5 (by direct computation). Thus the adjacency algebra of the latter cannot contain the adjacency matrix of the former, and we are done. \(\square \)

Fig. 1 Distance 1 and 2 graph of \(C_{10}\)
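The two computational claims in this proof, the Schur-product identity recovering \(A_{C_{10}}\) and the eigenvalue counts 6 versus 5, can be verified directly; a sketch assuming numpy:

```python
import numpy as np

n = 10
C = np.zeros((n, n))
for i in range(n):                                # the 10-cycle C_10
    C[i, (i + 1) % n] = C[(i + 1) % n, i] = 1

# G: same vertices, adjacent iff at distance 1 or 2 in C_10
A = (((C + C @ C) > 0) & ~np.eye(n, dtype=bool)).astype(float)

assert np.allclose(A * (A @ A) - A, C)            # A_{C_10} = A_G • A_G^2 − A_G

def num_distinct_eigenvalues(M, tol=1e-6):
    ev = np.sort(np.linalg.eigvalsh(M))
    return 1 + int((np.diff(ev) > tol).sum())

print(num_distinct_eigenvalues(C), num_distinct_eigenvalues(A))   # 6 5
```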

If we change walk-regular to 1-walk-regular, then the necessary condition of Lemma 6.7 becomes a sufficient condition:

Lemma 6.8

If G is a connected 1-walk-regular graph, then the partially coherent algebra of G is equal to the adjacency algebra of G. The converse does not hold.

Proof

It is obvious that the partially coherent algebra of G contains the adjacency algebra of G. To prove the first claim it therefore suffices to show that the adjacency algebra is S-partially coherent for \(S = \{I, A_G\}\). First, since G is 1-walk-regular, it is regular and moreover it is also connected by assumption. Using the known fact that J is contained in the adjacency algebra if and only if G is connected and regular [12], we have that the all ones matrix J can be written as a polynomial in \(A_G\). Second, it is obvious that the adjacency algebra is closed under conjugate transpose. So it only remains to show that the adjacency algebra is closed under entrywise product with I and \(A_G\). Using the definition of 1-walk-regularity it follows that for any polynomial \(f(x)=\sum _\ell c_\ell x^\ell \), we have that \(I \bullet f(A_G) = \sum _\ell c_\ell a_\ell I\) and \(A_G \bullet f(A_G) = \sum _\ell c_\ell b_\ell A_G\), and thus we have proven the claim.

To show that the converse does not hold, consider the 8-cycle \(C_8\) and let G be the graph with vertex set \(V(C_8)\) such that two vertices are adjacent if they are at distance two or three in \(C_8\). We will show that the coherent algebra of G is equal to its adjacency algebra, but that G is not 1-walk-regular. Let \({\mathcal {A}}\) be the coherent algebra of G and let \({\mathcal {C}}\) be the coherent algebra of \(C_8\).

The graph \(C_8\) is distance regular and thus both its adjacency and coherent algebras are equal to the span of its distance matrices which obviously contains \(A_G\). Thus by minimality of \({\mathcal {A}}\), we have that \({\mathcal {A}} \subseteq {\mathcal {C}}\). On the other hand, \({\mathcal {C}}\) has dimension 5 since \(C_8\) has diameter 4 and the adjacency algebra of G, which is contained in \({\mathcal {A}}\), has dimension 5 since \(A_G\) has 5 distinct eigenvalues. Thus we have that the adjacency algebra of G is equal to \({\mathcal {A}} = {\mathcal {C}}\).

However, the number of walks of length two between adjacent vertices of G is not constant; it depends on the distance between the vertices in \(C_8\). Therefore G is not 1-walk-regular. \(\square \)

The above two lemmas show that the property of having adjacency algebra equal to partially coherent algebra lies somewhere (strictly) in between being walk-regular and being 1-walk-regular.

Theorem 6.9

Let G be a connected 1-walk-regular graph. For any graph H we have that \(G \cong _{{\mathcal {S}}_{+}} H\) if and only if H is a connected 1-walk-regular graph that is cospectral to G.

Proof

If \(G \cong _{{\mathcal {S}}_{+}} H\), it follows that H is a connected (Corollary 6.5) 1-walk-regular graph (Lemma 6.6) that is also cospectral to G (Lemma 6.3).

Conversely, suppose that H is a connected 1-walk-regular graph that is cospectral to G. Since they are cospectral, by the spectral theorem there exists a unitary matrix U such that \(UA_GU^\dagger = A_H\). It is then easy to see that the map \(\phi (X) = UXU^\dagger \) is an algebra isomorphism from the adjacency algebra of G to that of H. By Lemma 6.8, it follows that \(\phi \) is an algebra isomorphism from \(\hat{{\mathcal {A}}}_G\) to \(\hat{{\mathcal {A}}}_H\); it remains to verify that this is a partial equivalence. Obviously, \(\phi (X^\dagger ) = \phi (X)^\dagger \), and so this condition is met. We also need that \(\phi (J) = J\), but this holds because if \(E_\lambda \) and \(F_\lambda \) are the projections onto the \(\lambda \)-eigenspaces of G and H respectively, then \(UE_\lambda U^\dagger = F_\lambda \), and \(\frac{1}{n}J\) (where \(n = |V_G| = |V_H|\)) is the projection onto the eigenspace of the largest eigenvalue of both G and H since they are connected and regular.

Lastly, we show that \(\phi (I \bullet X) = I \bullet \phi (X)\) and \(\phi (A_G \bullet X) = A_H \bullet \phi (X)\) for all \(X \in \hat{{\mathcal {A}}}_G\). As G is a connected 1-walk-regular graph, by Lemma 6.8 the partially coherent algebra of G is equal to the adjacency algebra of G and thus \(I \bullet X = ({{\,\textrm{Tr}\,}}(X)/n) I\) for all \(X \in \hat{{\mathcal {A}}}_G\). Therefore,

$$\begin{aligned} \phi (I \bullet X) = \phi \left( \frac{{{\,\textrm{Tr}\,}}(X)}{n}I\right) =\frac{{{\,\textrm{Tr}\,}}(X)}{n}I. \end{aligned}$$
(29)

On the other hand, \(\phi (X) \in \hat{{\mathcal {A}}}_H\) and thus

$$\begin{aligned} I \bullet \phi (X) = \left( \frac{{{\,\textrm{Tr}\,}}(\phi (X))}{n}\right) \! I =\left( \frac{{{\,\textrm{Tr}\,}}(X)}{n}\right) I, \end{aligned}$$
(30)

where the second equality follows from the fact that \(\phi \) is trace-preserving. Thus we have shown that \(\phi (I \bullet X) = I \bullet \phi (X)\) for all \(X \in \hat{{\mathcal {A}}}_G\).

We similarly have that \(A_G \bullet X = \gamma A_G\) with \(\gamma = {{\,\textrm{Tr}\,}}(A_GX)/nk\), where k is the common degree of G and H. Thus \(\phi (A_G \bullet X) = \gamma A_H\). Of course we also have that \(A_H \bullet \phi (X) = \gamma 'A_H\) where

$$\begin{aligned} \gamma ' = {{\,\textrm{Tr}\,}}(A_H \phi (X))/nk = {{\,\textrm{Tr}\,}}(\phi (A_G X))/nk = {{\,\textrm{Tr}\,}}(A_G X)/nk = \gamma . \end{aligned}$$

Thus G and H are partially equivalent and by Theorem 6.2 we have \(G \cong _{{\mathcal {S}}_{+}} H\). \(\square \)

7 Characterizing \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic graphs

7.1 Coherent algebras of graphs

The coherent algebra of a graph G, denoted \({\mathcal {A}}_G\), is defined to be the intersection of all coherent algebras containing its adjacency matrix A, i.e., the smallest coherent algebra containing A. Equivalently, it consists of all matrices that can be written as a finite expression involving I, A, J, and the operations of addition, scalar multiplication, matrix multiplication, Schur multiplication, and conjugate transpose.
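Computationally, the coherent algebra is obtained by the same closure procedure sketched in Sect. 6.1 for partially coherent algebras, except that Schur products are now taken over all pairs of basis elements. A sketch reusing orth_basis from that earlier snippet (same assumptions):

```python
def coherent_algebra(A):
    n = len(A)
    basis = orth_basis([np.eye(n), np.ones((n, n)), A])
    while True:
        new = list(basis)
        new += [X @ Y for X in basis for Y in basis]   # matrix products
        new += [X * Y for X in basis for Y in basis]   # all Schur products
        new += [X.conj().T for X in basis]             # conjugate transposes
        new_basis = orth_basis(new)
        if len(new_basis) == len(basis):
            return basis
        basis = new_basis
```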

An isomorphism between coherent algebras \({\mathcal {A}}\) and \({\mathcal {B}}\) is a bijective linear map \(\phi : {\mathcal {A}} \rightarrow {\mathcal {B}}\) that preserves all operations of a coherent algebra, i.e.,

  • \(\phi (M^\dagger ) = \phi (M)^\dagger \) for all \(M \in {\mathcal {A}}\);

  • \(\phi (MN) = \phi (M)\phi (N)\) for all \(M,N \in {\mathcal {A}}\);

  • \(\phi (M\bullet N) = \phi (M)\bullet \phi (N)\) for all \(M,N \in {\mathcal {A}}\).

As a consequence of the above, we must have that \(\phi (I) = I\) and \(\phi (J) = J\). More generally, if \(\phi \) is an isomorphism of coherent algebras \({\mathcal {A}}\) and \({\mathcal {B}}\), and \(A_1, \ldots , A_d\) and \(B_1, \ldots , B_d\) are the orthogonal 01-matrices forming bases of \({\mathcal {A}}\) and \({\mathcal {B}}\) respectively, then there exists a bijection \(f: [d] \rightarrow [d]\) such that \(\phi (A_i) = B_{f(i)}\) for all \(i \in [d]\).

If G and H are two graphs with respective adjacency matrices \(A_G\) and \(A_H\) and coherent algebras \({\mathcal {A}}_G\) and \({\mathcal {A}}_H\), then we say that G and H are equivalent if there exists an isomorphism \(\phi \) from \({\mathcal {A}}_G\) to \({\mathcal {A}}_H\) such that \(\phi (A_G) = A_H\). We refer to the map \(\phi \) as an equivalence of G and H. It is known [27] that two graphs are equivalent if and only if they are not distinguished by the Weisfeiler–Leman method.

If \(\phi \) is an equivalence of graphs G and H, then any function of \(I, A_G\), and J using the operations of addition, scalar multiplication, matrix multiplication, entrywise multiplication, and conjugate transposition is mapped to the same function with \(A_G\) replaced by \(A_H\). This obviously still holds if we restrict to functions in which entrywise multiplication can only be used when one of the factors is I or \(A_G\). Since the space of matrices that can be written as such functions is exactly the partially coherent algebra of G, we have that the restriction of the equivalence \(\phi \) to \(\hat{{\mathcal {A}}}_G\) is a partial equivalence of G and H. Thus, any pair of equivalent graphs are also partially equivalent, as one would expect.

7.2 The characterization

We will need the following:

Lemma 7.1

Consider a doubly stochastic matrix \(D \in {\mathbb {R}}^{d \times d}\) and column vectors \(u, v\in {\mathbb {R}}^d\) with the same multiset of entries. The following are equivalent:

  1. (1)

    \(Du = v\).

  2. (2)

    \(D_{ij} = 0\) whenever \(v_i\ne u_j\).

  3. (3)

    \(D(u \bullet w) = v \bullet (Dw)\) for all vectors w.

  4. (4)

    \(D^T (v \bullet w) = u \bullet (D^T w)\) for all vectors w.

  5. (5)

    \(D^Tv=u\).

Proof

\((1) \implies (2)\). Suppose that \(Du = v\). Set \(V=\{ i\in [d]: v_i =v^{\downarrow }_1\}\), i.e., the set of indices of the largest entry of v, and define U similarly for u. As D is doubly stochastic, the equation \(v_i = (Du)_i = \sum _j D_{ij}u_j\) shows that \(v_i\) is a convex combination of the entries of u. If \(i \in V\), no entry \(u_j\) for \(j \notin U\) can appear with nonzero weight in this convex combination. Therefore, we have that

$$\begin{aligned} i \in V, \ j \notin U \implies D_{ij} = 0, \end{aligned}$$
(31)

and thus

$$\begin{aligned} 1 = \sum _{j \in [d]} D_{ij} = \sum _{j \in U} D_{ij}, \text { for all } i\in V. \end{aligned}$$
(32)

Furthermore, we have that

$$\begin{aligned} |U| = |V| = \sum _{i \in V} \sum _{j \in U} D_{ij} =\sum _{j \in U} \sum _{i \in V} D_{ij} \le \sum _{j \in U} \sum _{i \in [d]} D_{ij} = |U|, \end{aligned}$$
(33)

where for the first equality we use that u and v have the same multiset of entries and for the second equality we use (32). Thus, (33) holds throughout with equality, which in turn implies that

$$\begin{aligned} j \in U, \ i \notin V \implies D_{ij} = 0. \end{aligned}$$
(34)

Rearranging D so that the V-rows and U-columns are first, it follows by (31) and (34) that

$$\begin{aligned} D = \begin{pmatrix} D' &{} 0 \\ 0 &{} D'' \end{pmatrix}, \end{aligned}$$

where \(D'\) and \(D''\) are doubly stochastic matrices. The same argument can be applied to \(D''\) (where V and U are now defined as the indices of the second largest entry in v and u respectively). Continuing in the same manner it follows that \(D_{ij} = 0\) whenever \(v_i \ne u_j\).

\((2) \iff (3) \). We have already established this in the proof of Lemma 4.5.

\((3)\implies (1).\) This follows by selecting \(w=e\), the all-ones vector.

Lastly, to get (4) and (5) simply note that (2) is equivalent to \(D^T_{ji}=0\) whenever \(u_j\ne v_i\). \(\square \)

As with Lemma 4.5, we now state a form of the above lemma in terms of maps between matrix spaces. As before, this is equivalent to the above by the correspondence \({{\,\textrm{vec}\,}}(\Phi (X)) = \tilde{M}{{\,\textrm{vec}\,}}(X)\), where \(\Phi \) has Choi matrix M and \(\tilde{M}_{hh',gg'} = M_{gh,g'h'}\). Note that \(\tilde{M}\) having row and column sums equal to 1 is equivalent to \(\Phi (J) = J = \Phi ^\dag (J)\).

Lemma 7.2

Let \(\Phi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) be a linear map with entrywise nonnegative Choi matrix M such that \(\Phi (J) = J = \Phi ^\dag (J)\). For any fixed pair of matrices \(X \in {\mathbb {C}}^{V_G \times V_G}\) and \(Y \in {\mathbb {C}}^{V_H \times V_H}\) with the same multiset of entries the following are equivalent:

  1. (1)

    \(\Phi (X) = Y\).

  2. (2)

    \(M_{gh,g'h'} = 0\) whenever \(X_{gg'} \ne Y_{hh'}\).

  3. (3)

    \(\Phi (X \bullet W) = Y \bullet \Phi (W)\) for all \(W \in {\mathbb {C}}^{V_G \times V_G}\).

  4. (4)

    \(\Phi ^\dag (Y \bullet Z) = X \bullet \Phi ^\dag (Z)\) for all \(Z \in {\mathbb {C}}^{V_H \times V_H}\).

  5. (5)

    \(\Phi ^\dag (Y) = X\).

Theorem 7.3

Two graphs G and H are equivalent if and only if \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\).

Proof

Suppose that \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\) and let M be a \({\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\)-isomorphism matrix. Consider the linear map \(\Phi : \mathbb {C}^{V_G \times V_G} \rightarrow \mathbb {C}^{V_H \times V_H}\) whose Choi matrix is equal to M. As M is a \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism matrix it follows by Theorem 4.8 that \(\Phi \) is a \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism map. We show that \(\Phi \) is the desired equivalence between the coherent algebras of G and H.

The same arguments as in Theorem 6.2 apply to show that any finite expression involving \(I, A_G, A_{\overline{G}}\) and the operations of addition, scalar multiplication, matrix multiplication, and Schur multiplication where at least one of the factors is I or \(A_G\), will be mapped by \(\Phi \) to the same expression with \(A_G\) and \(A_{\overline{G}}\) replaced with \(A_H\) and \(A_{\overline{H}}\) respectively. Furthermore, \(\Phi ^\dag \) is the inverse of \(\Phi \) on such expressions.

It remains to consider the case of arbitrary Schur products. For this, it suffices to show that for any pair of matrices X, Y such that \(\Phi (X)=Y\) and \(\Phi ^\dag (Y)=X\) we have that

$$\begin{aligned} \Phi (X \bullet W) = \Phi (X) \bullet \Phi (W) \ \text { and } \ \Phi ^\dag (Y \bullet W)=\Phi ^\dag (Y) \bullet \Phi ^\dag (W), \text { for all } W. \end{aligned}$$
(35)

However, since \(\Phi \) is a \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism map, we have that its Choi matrix is entrywise nonnegative and \(\Phi (J) = J = \Phi ^\dag (J)\). This is the matrix map analog of being doubly stochastic, and thus \(\Phi (X) = Y\) and \(\Phi ^\dag (Y) = X\) imply that X and Y have the same multiset of entries. Therefore we can apply Lemma 7.2 to obtain Eq. (35). We thus have that for \(X_1,X_2,Y_1,Y_2\) such that \(\Phi (X_i) = Y_i\) and \(\Phi ^\dag (Y_i) = X_i\), it holds that \(\Phi (X_1 \bullet X_2) = Y_1 \bullet Y_2\) and \(\Phi ^\dag (Y_1 \bullet Y_2) = X_1 \bullet X_2\). Since \(\Phi (I) = I\), \(\Phi (A_G) =A_H\), \(\Phi (J) = J\), and similarly for \(\Phi ^\dag \), it follows that any expression in \(I, A_G, J\) using addition, scalar multiplication, matrix multiplication, conjugate transposition, and Schur product is mapped by \(\Phi \) to the same expression but with \(A_G\) replaced with \(A_H\) (and \(\Phi ^\dag \) is the inverse of \(\Phi \) on such expressions). Therefore the restriction of \(\Phi \) to \({\mathcal {A}}_G\) is an equivalence of G and H.

Conversely, let \(\phi : {\mathcal {A}}_G \rightarrow {\mathcal {A}}_H\) be an equivalence of G and H. By Lemma 5.1, there exists a unitary matrix U such that \(\phi (X) = UXU^\dagger \) for all \(X \in {\mathcal {A}}_G\). Let \(\hat{\phi }: {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) be defined as \(\hat{\phi }(X) = UXU^\dagger \) for all \(X \in {\mathbb {C}}^{V_G \times V_G}\). Moreover, let \(\Pi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathcal {A}}_G\) be the orthogonal projection onto the coherent algebra \({\mathcal {A}}_G\). The same arguments as in the proof of Theorem 6.2 imply that the composition \(\Phi = \hat{\phi } \circ \Pi : {\mathbb {C}}^{V_G \times V_G} \rightarrow {\mathbb {C}}^{V_H \times V_H}\) is an \({\mathcal {S}}_{+}\)-isomorphism map from G to H. Thus, to show that \(\Phi \) is a \({\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\)-isomorphism map it remains to show that it is completely \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-preserving, or equivalently, that its Choi matrix \(C_\Phi \) is entrywise nonnegative (we already know it is psd).

Let \(A_1, \ldots , A_d\) and \(B_1, \ldots , B_d\) be the 01-bases of \({\mathcal {A}}_G\) and \({\mathcal {A}}_H\) respectively. Then, relabeling the \(B_i\) if necessary, we have that \(\phi (A_i) = B_i\), and as \(\phi \) is trace preserving (cf. Lemma 5.3) it follows that \(m_i = {{\,\textrm{Tr}\,}}(A_i^TA_i) = {{\,\textrm{Tr}\,}}(B_i^TB_i)\), where \(m_i\) is the number of 1’s in \(A_i\) and \(B_i\). Then, \(\{\frac{1}{\sqrt{m_i}} A_i: i \in [d]\}\) is an orthonormal basis for \({\mathcal {A}}_G\), and thus

$$\begin{aligned} \Pi (X) = \sum _{i=1}^d \frac{1}{m_i}\langle A_i, X\rangle A_i = \sum _{i=1}^d \frac{1}{m_i}{{\,\textrm{Tr}\,}}(A_i^T X) A_i, \end{aligned}$$

which in turn implies that

$$\begin{aligned} \Phi (X) = \sum _{i=1}^d \frac{1}{m_i}{{\,\textrm{Tr}\,}}(A_i^T X) B_i. \end{aligned}$$
(36)

By (36) we clearly see that if X is entrywise nonnegative, then \(\Phi (X)\) is also nonnegative. Thus the Choi matrix of \(\Phi \) is doubly nonnegative and so \(\Phi \) is a \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism map from G to H. \(\square \)

7.3 Necessary conditions on \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic graphs

Since any pair of \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic graphs are also \({\mathcal {S}}_{+}\)-isomorphic, we know that any of the necessary conditions for \({\mathcal {S}}_{+}\)-isomorphism given in the previous section are also necessary for \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism. However, some of these necessary conditions can be strengthened.

The d-distance graph of G is the graph with vertex set \(V_G\) such that two vertices are adjacent if their distance in G is exactly d. The d-distance matrix of G is the adjacency matrix of its d-distance graph, so in particular, it has zero diagonal.

Lemma 7.4

Consider two graphs G and H. Define \(X^{\ell ,i}\) as the matrix whose \(gg'\)-entry is 1 if the number of walks of length \(\ell \) in G from g to \(g'\) is equal to i, and is otherwise zero, and define \(Y^{\ell ,i}\) analogously for H. Moreover, let \(X^{(\ell )}\) and \(Y^{(\ell )}\) be the \(\ell \)-distance matrices of G and H respectively. Assume that G and H are \({\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\)-isomorphic graphs with isomorphism map \(\Phi \). Then we have that

$$\begin{aligned} \Phi (X^{\ell ,i}) = Y^{\ell ,i} \text { for all } \ell , i \in {\mathbb {N}}\ \text { and } \ \Phi (X^{(\ell )}) = Y^{(\ell )} \text { for all } \ell = 0, 1, \ldots , \textrm{diam}(G). \end{aligned}$$

Proof

We have that \(\Phi (A_G^\ell ) = A_H^\ell \) and \(\Phi ^\dag (A_H^\ell ) =A_G^\ell \) and thus \(A_G^\ell \) and \(A_H^\ell \) have the same multiset of entries. Let S be the set of entries of \(A_G^\ell \). Then \(A_G^\ell = \sum _{i \in S} iX^{\ell ,i}\) and \(A_H^\ell = \sum _{i \in S} iY^{\ell ,i}\). It is then easy to see that for any \(j \in S\),

$$\begin{aligned} X^{\ell ,j} = \bullet _{i \in S {\setminus } \{j\}}\frac{1}{j-i}(A_G^\ell - iJ), \end{aligned}$$

and similarly for \(Y^{\ell ,j}\). It then follows from the properties of \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism maps that \(\Phi (X^{\ell ,i}) = Y^{\ell ,i}\) for all \(\ell ,i\).

The second claim holds for \(\ell = 0,1\), and we proceed by induction. It is easy to see that

$$\begin{aligned} X^{(\ell )} = \left( \sum _{i \ge 1} X^{\ell ,i}\right) \bullet \left( J - \sum ^{\ell -1}_{k = 0} X^{(k)}\right) , \end{aligned}$$
(37)

and thus the claim follows from the properties of \(\Phi \) and the obvious induction argument. \(\square \)

We saw in Sect. 6.3 that \({\mathcal {S}}_{+}\)-isomorphism preserves the property of being 1-walk-regular. For \({\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\)-isomorphism, an even stronger property known as distance regularity is preserved. A connected graph G of diameter d is distance regular if there exist integers \(p_{ij}^k\) for \(i,j,k \in \{0,1,\ldots ,d\}\) such that the number of vertices w at distance i from u and distance j from v is equal to \(p_{ij}^k\) whenever u and v are at distance k, i.e., whenever \(\textrm{dist}(u,v)=k\) we have that \(|N_i(u)\cap N_j(v)|=p^k_{ij}. \) Letting \(X^{(\ell )}\) for \(\ell \in \{0,1,\ldots ,d\}\) be the \(\ell \)-distance matrix of G, one can see that this definition is equivalent to the equations

$$\begin{aligned} X^{(i)}X^{(j)} = \sum _k p_{ij}^k X^{(k)}. \end{aligned}$$

In other words, the distance matrices form a 01 orthogonal basis of the algebra they generate. From here it is easy to see that this is a coherent algebra. Furthermore, since the distance matrices of G are contained in the coherent algebra of G by Eq. (37), the coherent algebra generated by the distance matrices of G is equal to the coherent algebra of G. Thus a graph is distance regular if and only if its coherent algebra is equal to the span of its distance matrices, in which case all matrices in the coherent algebra are symmetric and thus they all commute. This allows us to prove the following:

Lemma 7.5

If \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\) then G is distance regular if and only if H is distance regular.

Proof

By assumption, there exists a \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism map \(\Phi =\Phi _M\), where M is a \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism matrix. It suffices to show that G being distance regular implies that H is distance regular. So let G be distance regular and let \(X^{(\ell )}\) and \(Y^{(\ell )}\) be the \(\ell \)-distance matrices of G and H respectively. By Lemma 7.4, we have that \(\Phi (X^{(\ell )}) = Y^{(\ell )}\) for all \(\ell \). Since G is distance regular, the \(X^{(\ell )}\) form a basis of \({\mathcal {A}}_G\). Since the image of \({\mathcal {A}}_G\) under \(\Phi \) is \({\mathcal {A}}_H\), and since the restriction of \(\Phi \) to \({\mathcal {A}}_G\) is a linear bijection, we have that \(\Phi \) maps any basis of \({\mathcal {A}}_G\) to a basis of \({\mathcal {A}}_H\). Therefore, the distance matrices of H form a basis of \({\mathcal {A}}_H\) and thus H is distance regular. \(\square \)
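Distance regularity can be tested directly from the relations \(X^{(i)}X^{(j)} = \sum _k p_{ij}^k X^{(k)}\): compute the distance matrices and check that each product \(X^{(i)}X^{(j)}\) is constant on the support of every \(X^{(k)}\). A sketch for connected graphs (the loop below would not terminate otherwise), assuming numpy; the helper names are ours:

```python
import numpy as np

def distance_matrices(A):
    """Distance matrices X^(0), ..., X^(d) of a connected graph."""
    n = len(A)
    dist = np.where(np.eye(n, dtype=bool), 0, -1)
    reach = np.eye(n)
    ell = 0
    while (dist < 0).any():
        ell += 1
        reach = ((reach + reach @ A) > 0).astype(float)  # distance <= ell
        dist[(reach > 0) & (dist < 0)] = ell
    return [(dist == k).astype(float) for k in range(dist.max() + 1)]

def is_distance_regular(A, tol=1e-8):
    X = distance_matrices(A)
    for Xi in X:
        for Xj in X:
            P = Xi @ Xj
            for Xk in X:              # P must be constant on supp(X^(k))
                vals = P[Xk > 0]
                if vals.size and not np.allclose(vals, vals[0], atol=tol):
                    return False
    return True
```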

In Sect. 6.3 we showed that the partially coherent algebra of a connected 1-walk-regular graph is equal to its adjacency algebra. Analogously, it is well known that the coherent algebra of a distance regular graph is equal to its adjacency algebra [2]. We will not give a proof of this here, but it suffices to show that the distance matrices of a distance regular graph are polynomials in its adjacency matrix, and this can be done with induction. As one might expect, this allows us to prove an analog of Theorem 6.9 for \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism.

Theorem 7.6

Let G be a distance regular graph. If H is a graph, then \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\) if and only if H is a distance regular graph that is cospectral to G.

Proof

The only if direction follows from Lemmas 6.3 and 7.5. For the other direction, suppose that G and H are cospectral distance regular graphs with eigenvalues \(\lambda _1 \ge \ldots \ge \lambda _n\), where \(n = |V_G| = |V_H|\), and let d be the diameter of both graphs (it is well known that the diameter of a distance regular graph is one less than the number of distinct eigenvalues [2], so this will be the same for G and H). Note that, up to scaling, the unique eigenvector for \(\lambda _1\) is the all-ones vector since the graphs are connected and regular. Also let \(X^{(\ell )}\) and \(Y^{(\ell )}\) be the \(\ell \)-distance matrices of G and H respectively. Since G and H are cospectral, there exists a unitary matrix U such that \(UA_GU^\dagger = A_H\). It is then immediate that the map \(\phi (X) = UXU^\dagger \) is an algebra isomorphism of the adjacency algebras of G and H, which are respectively equal to their coherent algebras since the graphs are distance regular. We aim to show that \(\phi \) is an equivalence.

Obviously, \(\phi (X^\dagger ) = \phi (X)^\dagger \), and \(\phi (J) = J\) since \(\frac{1}{n}J\) is the projection onto the \(\lambda _1\)-eigenspace for both G and H. So we only need to show that \(\phi (X \bullet X') = \phi (X) \bullet \phi (X')\) for all \(X,X' \in {\mathcal {A}}_G\).

For any \(X,X' \in {\mathcal {A}}_G\), we have that \(X = \sum _\ell \alpha _\ell X^{(\ell )}\) and \(X' = \sum _\ell \alpha '_\ell X^{(\ell )}\) for some coefficients \(\alpha _\ell , \alpha '_\ell \) for \(\ell = 0, \ldots , d\), since the distance matrices of G span \({\mathcal {A}}_G\) by the distance regularity of G. Then \(X \bullet X' = \sum _{\ell } \alpha _\ell \alpha '_\ell X^{(\ell )}\), since \(X^{(\ell )} \bullet X^{(k)} = \delta _{\ell k} X^{(\ell )}\). Suppose for the moment that \(\phi (X^{(\ell )}) = Y^{(\ell )}\) for all \(\ell \); we justify this below. Then,

$$\begin{aligned} \phi (X \bullet X')&= \phi \left( \sum _{\ell = 0}^d \alpha _\ell \alpha '_\ell X^{(\ell )} \right) \\&= \sum _{\ell = 0}^d \alpha _\ell \alpha '_\ell \phi \left( X^{(\ell )} \right) \\&= \sum _{\ell } \alpha _\ell \alpha '_\ell Y^{(\ell )} \\&= \left( \sum _{\ell =0}^d \alpha _\ell Y^{(\ell )}\right) \bullet \left( \sum _{\ell =0}^d \alpha '_\ell Y^{(\ell )}\right) \\&= \phi \left( \sum _{\ell =0}^d \alpha _\ell X^{(\ell )}\right) \bullet \phi \left( \sum _{\ell =0}^d \alpha '_\ell X^{(\ell )}\right) \\&= \phi (X) \bullet \phi (X'). \end{aligned}$$

Thus it suffices to show that \(\phi (X^{(\ell )}) = Y^{(\ell )}\) for all \(\ell = 0, \ldots , d\). Since G is distance regular, there exist polynomials \(f_\ell \) for \(\ell = 0, \ldots , d\) such that \(f_\ell (A_G) = X^{(\ell )}\) [2, Section 2.7]. Moreover, the polynomials \(f_\ell \) only depend on the eigenvalues of the distance regular graph. Since G and H are cospectral, we have that \(f_\ell (A_H) = Y^{(\ell )}\). Since \(\phi (f(A_G)) = f(\phi (A_G)) =f(A_H)\) for any polynomial f, it follows that \(\phi (X^{(\ell )}) =\phi (f_{\ell }(A_G)) = f_\ell (A_H) = Y^{(\ell )}\) as desired. \(\square \)

8 Separations between the various notions of isomorphism

In Eq. (14) we noted that the four different types of isomorphisms we consider in this paper satisfy the following chain of implications:

$$\begin{aligned} G \cong H \ \Rightarrow \ G \cong _q H \ \Rightarrow \ G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H \ \Rightarrow \ G \cong _{{\mathcal {S}}_{+}} H. \end{aligned}$$

In our earlier work [1], we showed that the first implication cannot be reversed, i.e., that there are quantum isomorphic graphs that are not isomorphic. Here we show that none of the other implications can be reversed. We begin by showing that \({\mathcal {S}}_{+}\)-isomorphism does not imply \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism.

The first graph we consider is the 4-cube: the graph whose vertices are the binary strings of length 4, two being adjacent if they differ in exactly one position. This is a well-known distance transitive (and therefore distance regular) graph. Less well-known is the Hoffman graph, which is the unique graph, other than the 4-cube itself, that is cospectral to the 4-cube.

Fig. 2: The 4-cube and Hoffman graphs

The Hoffman graph is not distance regular, but it is 1-walk-regular. Both graphs are shown in Fig. 2.
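For the reader who wants to check the cospectrality claim, here is a minimal sketch computing the spectrum of the 4-cube; the Hoffman graph is not built into networkx, so only the 4-cube's side is computed here.

```python
# Sketch: the spectrum of the 4-cube, which the Hoffman graph shares.
# Assumes networkx and numpy.
import networkx as nx
import numpy as np

Q4 = nx.hypercube_graph(4)
eigs = np.round(np.linalg.eigvalsh(nx.to_numpy_array(Q4)), 8)
vals, mults = np.unique(eigs, return_counts=True)
print(dict(zip(vals, mults)))   # {-4: 1, -2: 4, 0: 6, 2: 4, 4: 1}
```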

Theorem 8.1

There exist graphs G and H that are \({\mathcal {S}}_{+}\)-isomorphic but not \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic, i.e., \(G \cong _{{\mathcal {S}}_{+}} H \ \not \Rightarrow \ G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\).

Proof

Let G be the 4-cube and H the Hoffman graph. Since G is distance regular, it is 1-walk-regular, and we noted above that the Hoffman graph is also 1-walk-regular. Since the two graphs are cospectral, Theorem 6.9 implies that they are \({\mathcal {S}}_{+}\)-isomorphic. However, since G is distance regular and H is not, Theorem 7.6 implies that they are not \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic. \(\square \)

The next separation we will show is between quantum and \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism. The graphs we will use are the Cartesian product of \(K_4\) with itself, and the Shrikhande graph. The Cartesian product of graphs G and H, denoted \(G \square H\), has vertex set \(V_G \times V_H\), and vertices \((g,h)\) and \((g',h')\) are adjacent if (\(g = g'\) and \(h \sim h'\)) or (\(g \sim g'\) and \(h = h'\)). For \(G = H = K_4\), the vertices of the product can be arranged in a \(4 \times 4\) grid so that two are adjacent if and only if they are in the same row or column. From this description it is easy to see that \(K_4 \square K_4\) contains \(K_4\) as a subgraph.

The graph \(K_4 \square K_4\) is what is known as a strongly regular graph, which is just a distance regular graph of diameter two. Equivalently, an n-vertex, k-regular graph G is strongly regular if there exist numbers \(\lambda \) and \(\mu \) such that any two adjacent vertices of G share \(\lambda \) common neighbors, and any two distinct non-adjacent vertices of G share \(\mu \) common neighbors. The numbers \((n,k,\lambda , \mu )\) are called the parameters of a strongly regular graph, and they completely determine its spectrum. The parameters of \(K_4 \square K_4\) are (16, 6, 2, 2).
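As a sanity check, these parameters can be verified directly from the definition; a minimal sketch, assuming networkx:

```python
# Sketch: verify that K4 [] K4 is strongly regular with parameters
# (16, 6, 2, 2) by counting common neighbors of all vertex pairs.
import networkx as nx
from itertools import combinations

G = nx.cartesian_product(nx.complete_graph(4), nx.complete_graph(4))
common = lambda u, v: len(set(G[u]) & set(G[v]))
n = G.number_of_nodes()
degs = {d for _, d in G.degree()}
lams = {common(u, v) for u, v in G.edges()}
mus = {common(u, v) for u, v in combinations(G, 2) if not G.has_edge(u, v)}
print(n, degs, lams, mus)   # 16 {6} {2} {2}
```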

Fig. 3: The Shrikhande graph

The Shrikhande graph, pictured in Fig. 3, is also a strongly regular graph with parameters (16, 6, 2, 2), and thus it is cospectral to \(K_4 \square K_4\). We will show that these two graphs are \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic but not quantum isomorphic, thus separating these two relations.

Theorem 8.2

There exist graphs G and H that are \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphic but not quantum isomorphic, i.e., \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H \ \not \Rightarrow \ G \cong _q H\).

Proof

Let G be \(K_4 \square K_4\) and let H be the Shrikhande graph. These are strongly regular graphs with the same parameters and are thus cospectral. It then follows from Theorem 7.6 that \(G \cong _{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}} H\). It remains to show that these two graphs are not quantum isomorphic.

In [18], it is shown that if \(G \cong _q H\), then G and H admit the same number of homomorphisms from any planar graph K. A homomorphism from K to G is simply an adjacency-preserving map from \(V_K\) to \(V_G\). In the case where K is a complete graph, the existence of a homomorphism from K to G is equivalent to G containing K as a subgraph. It is easy to see that the graph \(K_4 \square K_4\) contains \(K_4\) as a subgraph, whereas the Shrikhande graph does not. Thus the Shrikhande graph admits no homomorphisms from \(K_4\), whereas \(K_4 \square K_4\) admits a positive number of them. As \(K_4\) is planar, it follows that \(K_4 \square K_4\) is not quantum isomorphic to the Shrikhande graph. \(\square \)
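The clique computation in this proof is easily reproduced by computer. The sketch below, assuming networkx, builds the Shrikhande graph via its standard Cayley graph description on \({\mathbb {Z}}_4 \times {\mathbb {Z}}_4\) with connection set \(\{\pm (1,0), \pm (0,1), \pm (1,1)\}\) (a construction we have not introduced in the text) and compares clique numbers:

```python
# Sketch: K4 [] K4 contains K4 (clique number 4), while the Shrikhande
# graph does not (clique number 3). The Shrikhande graph is built as a
# Cayley graph on Z_4 x Z_4 with connection set {+-(1,0),(0,1),(1,1)}.
import networkx as nx
from itertools import product

S = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}
V = list(product(range(4), repeat=2))
shrikhande = nx.Graph(
    ((a, b), (c, d)) for (a, b) in V for (c, d) in V
    if ((c - a) % 4, (d - b) % 4) in S)
rook = nx.cartesian_product(nx.complete_graph(4), nx.complete_graph(4))

clique_number = lambda G: max(len(c) for c in nx.find_cliques(G))
print(clique_number(rook), clique_number(shrikhande))   # 4 3
```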

We remark that the result of [18] used to prove the above theorem is a bit of overkill. In fact, the above can be proved more directly using the notion of projective representations of graphs and [1, Lemma 5.20]. However, this more direct proof is somewhat tedious and the above proof allows us to avoid introducing projective representations here.

The above results show that none of the implications in Eq. (14) can be reversed. Before moving on, we briefly mention that it is not too difficult to show that \(G \cong _{{\mathcal {S}}_{+}} H\) implies that G and H are fractionally isomorphic, i.e., there exists a doubly stochastic matrix D such that \(A_GD = DA_H\). If M is an \({\mathcal {S}}_{+}\)-isomorphism matrix for G to H then letting \(D_{gh} = M_{gh,gh}\) works (the proof is similar to [1, Lemma 4.2]). Again, the reverse implication does not hold, for instance the 6-cycle and the disjoint union of two 3-cycles are fractionally isomorphic, but not \({\mathcal {S}}_{+}\)-isomorphic since they are not cospectral.
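The fractional isomorphism of these two graphs can be exhibited by a small linear program; a minimal sketch, assuming networkx and cvxpy with any LP solver installed:

```python
# Sketch: the 6-cycle and the disjoint union of two 3-cycles are
# fractionally isomorphic; we search for a doubly stochastic D with
# A_G D = D A_H by linear programming.
import networkx as nx
import numpy as np
import cvxpy as cp

AG = nx.to_numpy_array(nx.cycle_graph(6))
AH = nx.to_numpy_array(nx.disjoint_union(nx.cycle_graph(3),
                                         nx.cycle_graph(3)))
D = cp.Variable((6, 6), nonneg=True)
cons = [cp.sum(D, axis=0) == 1,      # column sums 1
        cp.sum(D, axis=1) == 1,      # row sums 1
        AG @ D == D @ AH]
cp.Problem(cp.Minimize(0), cons).solve()
print(np.round(D.value, 3))          # e.g., the constant matrix J/6 works
```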

9 Conic theta functions

For any matrix cone \({\mathcal {K}}\), one can define the graph parameter [8, 16]:

$$\begin{aligned} \begin{array}{lll} \vartheta ^{\mathcal {K}}(G) = & \ \sup & \ {{\,\textrm{Tr}\,}}(MJ) \\ & \ \text {s.t.} & \ M_{g,g'} = 0 \text { if } g \sim g' \\ & & \ {{\,\textrm{Tr}\,}}(M) = 1 \\ & & \ M \in {\mathcal {K}}. \end{array} \end{aligned}$$

Note that \({{\,\textrm{Tr}\,}}(MJ)\) is equal to the sum of the entries of M, which we also denote by \(\text {sum}(M)\). For \({\mathcal {K}} ={\mathcal {S}}_{+}\), the parameter \(\vartheta ^{{\mathcal {K}}}\) is exactly the celebrated Lovász theta function, denoted simply \(\vartheta \). For \({\mathcal {K}} ={\mathcal {D}}{\mathcal {N}} {\mathcal {N}}\), it is equal to a variant due to Schrijver that is usually denoted \(\vartheta '\) or \(\vartheta ^{-}\). A nontrivial result [7] is that \(\vartheta ^{{\mathcal {C}{\mathcal {P}}} }\) is equal to the independence number of a graph, denoted \(\alpha \). The parameter \(\vartheta ^{{\mathcal{C}\mathcal{S}}_{+}}\) was first considered in [16]; it may or may not be equal to a related parameter known as the projective packing number [21], but in general it is much less understood than the other three parameters discussed above. One of the main reasons for this is that the cone \({{\mathcal{C}\mathcal{S}}_{+}}\) is not closed [25].
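For concreteness, the program above is a standard conic program and, for \({\mathcal {K}} = {\mathcal {S}}_{+}\) or \({\mathcal {K}} = {\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\), can be solved with off-the-shelf tools. A minimal sketch, assuming cvxpy with an SDP solver such as SCS installed (the function name theta is ours, not a standard API):

```python
# Sketch of the conic program above for K = S_+ (the Lovasz theta
# function) and K = DNN (Schrijver's theta').
import cvxpy as cp
import networkx as nx

def theta(G, dnn=False):
    G = nx.convert_node_labels_to_integers(G)  # so nodes index M directly
    n = G.number_of_nodes()
    M = cp.Variable((n, n), PSD=True)
    cons = [cp.trace(M) == 1]
    cons += [M[g, h] == 0 for g, h in G.edges()]  # M_{g,g'} = 0 if g ~ g'
    if dnn:
        cons.append(M >= 0)   # DNN: PSD and entrywise nonnegative
    prob = cp.Problem(cp.Maximize(cp.sum(M)), cons)  # sum(M) = Tr(MJ)
    prob.solve()
    return prob.value

C5 = nx.cycle_graph(5)
print(theta(C5))             # Lovasz theta of C5: ~ 2.236 = sqrt(5)
print(theta(C5, dnn=True))   # Schrijver theta' of C5: at most sqrt(5)
```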

Note that if \({\mathcal {K}} \subseteq {\mathcal {K}}'\) then \(\vartheta ^{{\mathcal {K}}}(G) \le \vartheta ^{{\mathcal {K}}'}(G)\). In particular, we have the following chain of inequalities:

$$\begin{aligned} \alpha (G) = \vartheta ^{{\mathcal {C}}{\mathcal {P}}}(G) \le \vartheta ^{{\mathcal{C}\mathcal{S}}_{+}}(G) \le \vartheta ^{{\mathcal {D}}{\mathcal {N}}{\mathcal {N}}}(G) \le \vartheta ^{{\mathcal {S}}_{+}}(G) = \vartheta (G) \le \chi (\overline{G}). \end{aligned}$$

In order to reformulate \({\mathcal {K}}\)-isomorphism in terms of the graph parameter \(\vartheta ^{{\mathcal {K}}}\) we make use of the graph isomorphism product, denoted \(G \diamond H\), which has vertex set \(V_G \times V_H\) and edges \((g,h) \sim (g',h')\) if \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\). In other words, vertices of \(G \diamond H\) are adjacent exactly when the corresponding entry in an isomorphism matrix for G to H is required to be zero. Note that the isomorphism product of G and H is the complement of the modular product of graphs.
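Constructing \(G \diamond H\) from this definition is mechanical; a minimal sketch, assuming networkx (the helper names rel and iso_product are ours):

```python
# Sketch: the isomorphism product G <> H, built directly from rel(.,.).
import networkx as nx
from itertools import product

def rel(K, x, y):
    # 0 = equal, 1 = adjacent, 2 = distinct non-adjacent
    if x == y:
        return 0
    return 1 if K.has_edge(x, y) else 2

def iso_product(G, H):
    P = nx.Graph()
    nodes = list(product(G, H))
    P.add_nodes_from(nodes)
    for (g, h), (g2, h2) in product(nodes, repeat=2):
        # (g,h) ~ (g2,h2) exactly when rel(g,g2) != rel(h,h2)
        if (g, h) != (g2, h2) and rel(G, g, g2) != rel(H, h, h2):
            P.add_edge((g, h), (g2, h2))
    return P
```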

Theorem 9.1

Consider two graphs G and H and a matrix cone \({\mathcal {K}} \subseteq {\mathcal {S}}_{+}\). Then \(G \cong _{{\mathcal {K}}} H\) if and only if \(\vartheta ^{{\mathcal {K}}}(G \diamond H) = |V_G| =|V_H|\) and this value is attained.

Proof

First, note that \(\vartheta ^{{\mathcal {K}}}(G \diamond H) \le \vartheta (G \diamond H) \le \chi (\overline{G \diamond H})\). Also, the set \(V_g = \{(g,h): h \in V_H\}\) is an independent set in \(\overline{G \diamond H}\) for all \(g \in V_G\). Therefore, \(\chi (\overline{G \diamond H}) \le |V_G|\), and similarly \(\chi (\overline{G \diamond H}) \le |V_H|\). Thus to show that \(\vartheta ^{{\mathcal {K}}}(G \diamond H) = |V_G|\) (or \(|V_H|\)), it suffices to find a solution of value \(|V_G|\) (or \(|V_H|\)).

Suppose that \(G \cong _{{\mathcal {K}}} H\) and let M be the requisite \({\mathcal {K}}\)-isomorphism matrix. We show that M is, up to a scalar, a feasible solution for \(\vartheta ^{{\mathcal {K}}}(G \diamond H)\) of value \(|V_G| = |V_H|\). First, by conditions (3) and (4), we have that \(\text {sum}(M)=|V_G|^2=|V_H|^2\), and thus \(|V_G| = |V_H|\). Next, we have that \(M_{gh,g'h'} = 0\) if \({{\,\textrm{rel}\,}}(g,g') \ne {{\,\textrm{rel}\,}}(h,h')\) by definition, and thus \(M_{gh,g'h'} = 0\) if \((g,h) \sim (g',h')\) in \(G \diamond H\). We also have that \(M \in {\mathcal {K}}\). Lastly, it follows again by (4) that

$$\begin{aligned} {{\,\textrm{Tr}\,}}(M) = \sum _{g \in V_G, \ h \in V_H} M_{gh,gh} = \sum _{g \in V_G} \sum _{h,h' \in V_H} M_{gh,gh'} = |V_G|. \end{aligned}$$

Thus, setting \(M' = M/{{\,\textrm{Tr}\,}}(M)\), it follows that \({{\,\textrm{Tr}\,}}(M') = 1\) and \(\text {sum}(M') = |V_G|\), i.e., \(M'\) is a feasible solution for \(\vartheta ^{{\mathcal {K}}}(G \diamond H)\) of value \(|V_G| = |V_H|\).

Conversely, let M be an optimal solution for \(\vartheta ^{{\mathcal {K}}}(G \diamond H)\) of value \(|V_G| = |V_H|\). We show that the block sums

$$\begin{aligned} \sum _{h,h' \in V_H} M_{gh,g'h'} \quad \text {and} \quad \sum _{g,g' \in V_G} M_{gh,g'h'} \end{aligned}$$

are all equal to the same constant, and thus M is (a scalar multiple of) a \({\mathcal {K}}\)-isomorphism matrix for G to H. For this, define \(\widehat{M}\) to be a matrix with rows and columns indexed by \(V_G\), whose \(g,g'\) entry is given by:

$$\begin{aligned} \widehat{M}_{g,g'} = \sum _{h,h' \in V_H} M_{gh,g'h'}. \end{aligned}$$

First, note that as \(M \in {\mathcal {S}}_{+}\) we also have that \(\widehat{M} \in {\mathcal {S}}_{+}\). Indeed, if \(\{v_{gh}\}_{g \in V_G, h\in V_H}\) is a Gram decomposition of M, then \(\{\sum _h v_{gh}\}_{g \in V_G}\) is a Gram decomposition for the matrix \(\widehat{M}\). Moreover, by definition of \(\widehat{M}\) we have that \(\text {sum}(\widehat{M}) = \text {sum}(M)\), and using that \(\text {sum}(M) = |V_G|\) we get that

$$\begin{aligned} \frac{e^T \widehat{M} e}{e^T e} = \frac{\text {sum}(\widehat{M})}{|V_G|}= 1. \end{aligned}$$

Consequently, the maximum eigenvalue of \(\widehat{M}\) is at least 1. On the other hand,

$$\begin{aligned} {{\,\textrm{Tr}\,}}(\widehat{M}) = \sum _{g\in V_G} \sum _{h,h' \in V_H} M_{gh,gh'} =\sum _{g\in V_G} \sum _{h \in V_H} M_{gh,gh}={{\,\textrm{Tr}\,}}(M) = 1, \end{aligned}$$

where we used that \(M_{gh,gh'} = 0\) when \(h \ne h'\). Thus, the sum of the eigenvalues of \(\widehat{M}\) is 1. Since \(\widehat{M}\) is positive semidefinite, it must have exactly one nonzero eigenvalue, which must be equal to 1, and which has e as an eigenvector. This implies that \(\widehat{M}\) is a multiple of the all ones matrix. By the definition of \(\widehat{M}\), this implies that the block sums \(\sum _{h,h'} M_{gh,g'h'}\) are constant (and equal to \(1/|V_G|\)). The same argument shows that the block sums \(\sum _{g,g'} M_{gh,g'h'}\) are all equal to \(1/|V_H| = 1/|V_G|\), and thus we are done. \(\square \)
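Combining the two sketches above gives a numerical illustration of Theorem 9.1: taking \(G = H = C_5\), which are trivially isomorphic and hence \({\mathcal {S}}_{+}\)-isomorphic, the solver should return \(\vartheta (G \diamond H) = 5\) up to numerical tolerance.

```python
# Usage sketch for Theorem 9.1, reusing theta() and iso_product() from
# the earlier sketches: for G = H = C5, theta(G <> H) should equal 5.
C5 = nx.cycle_graph(5)
P = iso_product(C5, C5)
print(theta(P))   # ~ 5.0 up to solver tolerance
```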

10 Connection with the Lasserre hierarchy

Consider a semialgebraic set \({\mathcal {K}}=\{x\in [0,1]^n: g_i(x)\ge 0, \ i=1,\ldots , m\}\). The Lasserre hierarchy is a systematic method for producing tighter approximations to \(\textrm{conv}({\mathcal {K}}\cap \{0,1\}^n)\). As the goal is to characterize the convex hull of \(\{0,1\}\) points in \({\mathcal {K}}\), using that \(x_i^2=x_i\), we may assume that

$$\begin{aligned} g_i(x)=\sum _{K\subseteq [n]} g_i(K)\prod _{j\in K}x_j, \end{aligned}$$

i.e., \(g_i\) is multilinear and its monomials are indexed by subsets of [n]. For \(t\ge 0\), the t-th level of the Lasserre hierarchy is an SDP defined as the set of all vectors \(y=(y_I), \ I \subseteq [n], |I|\le 2t, \) that satisfy:

$$\begin{aligned} M_t(y)\succeq 0, \quad M_{t-\lceil {\textrm{deg}(g_i)\over 2}\rceil } (g_i*y) \succeq 0 \ (i\in [m]), \quad y_\emptyset =1. \end{aligned}$$

For completeness, we recall that \(M_t(y)\) is a matrix indexed by all sets \(I\subseteq [n]\) with \(|I|\le t\) and its IJ entry is given by \(y_{I\cup J}\). Furthermore, \( M_{t-\lceil {\textrm{deg}(g_i)\over 2}\rceil }(g_i* y)\) is a matrix indexed by all \(I\subseteq [n]\) with \(|I|\le t-\lceil {\textrm{deg}(g_i)\over 2}\rceil \) and its IJ entry is given by \(\sum _{K\subseteq [n]}g_i(K)y_{I\cup J\cup K}.\)

Deciding graph isomorphism is equivalent to the feasibility of the following quadratic integer program:

$$\begin{aligned}&\sum _g X_{gh}=1, \ \forall h, \end{aligned}$$
(38)
$$\begin{aligned}&\sum _hX_{gh}=1, \ \forall g, \end{aligned}$$
(39)
$$\begin{aligned}&X_{gh}X_{g'h'}=0 \text { if } {{\,\textrm{rel}\,}}(g,g')\ne {{\,\textrm{rel}\,}}(h,h'), \end{aligned}$$
(40)
$$\begin{aligned}&X_{gh}\in \{0,1\}, \ \forall g,h. \end{aligned}$$
(41)

We proceed to apply the Lasserre hierarchy to the semialgebraic set obtained by dropping the integrality constraints. We only consider the first level (i.e., \(t=1\)), defined in terms of the variables

$$\begin{aligned} y_\emptyset , \quad y_{(g,h)}, \quad y_{\{(g,h), (g',h')\}}. \end{aligned}$$

The first constraint is that \(M_1(y)\succeq 0\). This matrix is indexed by \(\emptyset \) and the pairs (g, h); the entries in the 0-th row are \(y_{(g,h)}\), and the \((g,h), (g',h')\) entry is \(y_{\{(g,h), (g',h')\}}\). Moreover, the diagonal is equal to the 0-th row, since \(y_{I \cup I} = y_I\). To handle equality constraints in the Lasserre hierarchy we write them as two inequalities. As an example, consider \(\sum _h X_{gh}- 1\ge 0\). As this has degree one (and we consider level \(t=1\)), we get the constraint \(M_0(g_i*y)\succeq 0\). This is a trivial matrix: it is indexed only by the empty set, so it is just the scalar

$$\begin{aligned} \sum _{K\subseteq [n]}g_i(K)y_{K}. \end{aligned}$$
(42)

Specializing to the polynomial \(\sum _h X_{gh}- 1\ge 0\), (42) gives the constraint:

$$\begin{aligned} \sum _h y_{(g,h)}-y_\emptyset =\sum _h y_{(g,h)}-1\ge 0, \end{aligned}$$

whereas the polynomial constraint \(\sum _h X_{gh}- 1\le 0\) gives the reverse inequality. Thus, the first level of the Lasserre hierarchy includes the constraints

$$\begin{aligned} \sum _h y_{(g,h)}=1, \ \forall g, \end{aligned}$$
(43)

and symmetrically, it also includes

$$\begin{aligned} \sum _g y_{(g,h)}=1, \ \forall h. \end{aligned}$$

Finally, the constraints \(X_{gh}X_{g'h'}=0 \text { if } {{\,\textrm{rel}\,}}(g,g')\ne {{\,\textrm{rel}\,}}(h,h')\) translate to

$$\begin{aligned} y_{(g,h),(g',h')} =0 \text { if } {{\,\textrm{rel}\,}}(g,g')\ne {{\,\textrm{rel}\,}}(h,h'). \end{aligned}$$
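The feasibility problem assembled above is itself a small SDP that can be handed to a solver. A minimal sketch, assuming networkx and cvxpy with an SDP solver such as SCS; the indexing convention and the function name lasserre1_feasible are ours:

```python
# Sketch: the level-1 Lasserre feasibility problem derived above. Index
# 0 plays the role of the empty set, and 1 + g*|V_H| + h that of (g, h).
import cvxpy as cp
import networkx as nx

def lasserre1_feasible(G, H, dnn=False):
    VG, VH = list(G), list(H)
    nG, nH = len(VG), len(VH)
    if nG != nH:
        return False
    rel = lambda K, x, y: 0 if x == y else (1 if K.has_edge(x, y) else 2)
    idx = lambda g, h: 1 + g * nH + h
    N = 1 + nG * nH
    M = cp.Variable((N, N), PSD=True)                   # M_1(y) >= 0
    cons = [M[0, 0] == 1]                               # y_emptyset = 1
    cons += [M[k, k] == M[0, k] for k in range(1, N)]   # diagonal = 0-th row
    cons += [sum(M[0, idx(g, h)] for h in range(nH)) == 1 for g in range(nG)]
    cons += [sum(M[0, idx(g, h)] for g in range(nG)) == 1 for h in range(nH)]
    cons += [M[idx(g, h), idx(g2, h2)] == 0             # zero pattern via rel
             for g in range(nG) for h in range(nH)
             for g2 in range(nG) for h2 in range(nH)
             if rel(G, VG[g], VG[g2]) != rel(H, VH[h], VH[h2])]
    if dnn:
        cons.append(M >= 0)   # nonnegativity, for the DNN variant below
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

print(lasserre1_feasible(nx.cycle_graph(4), nx.cycle_graph(4)))  # True
```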

Lemma 10.1

\(G\cong _{{\mathcal {S}}_{+}} H\) if and only if the first level of the Lasserre hierarchy for graph isomorphism is feasible, i.e., there exists \(y=(y_\emptyset , y_{(g,h)}, y_{\{(g,h), (g',h')\}})\) such that

$$\begin{aligned}&M_1(y)\succeq 0, \end{aligned}$$
(44)
$$\begin{aligned}&\sum _h y_{(g,h)}=1, \ \forall g, \end{aligned}$$
(45)
$$\begin{aligned}&\sum _g y_{(g,h)}=1, \ \forall h, \end{aligned}$$
(46)
$$\begin{aligned}&y_{(g,h),(g',h')} =0, \text { if } {{\,\textrm{rel}\,}}(g,g')\ne {{\,\textrm{rel}\,}}(h,h'), \end{aligned}$$
(47)
$$\begin{aligned}&y_\emptyset =1. \end{aligned}$$
(48)

Furthermore, \({\mathcal {D}}{\mathcal {N}}{\mathcal {N}}\)-isomorphism is equivalent to the feasibility of (44)–(48) with the additional constraints that the variables are nonnegative.

Proof

Let \(y=(y_\emptyset , y_{(g,h)}, y_{\{(g,h), (g',h')\}})\) be feasible for (44)–(48). As \(M_1(y)\) is positive semidefinite, it can be realized as the Gram matrix of a family of vectors \(v_\emptyset \) and \(v_{(g,h)}\), for \(g\in V_G, h\in V_H\). The main step is to show that \(v_\emptyset =\sum _h v_{(g,h)}\) for all \(g \in V_G\), and \(v_\emptyset =\sum _g v_{(g,h)}\) for all \(h \in V_H\). Towards this end, note that

$$\begin{aligned} \left\langle \sum _h v_{(g,h)}, \sum _h v_{(g,h)}\right\rangle =\sum _h \langle v_{(g,h)}, v_{(g,h)}\rangle =\sum _h y_{(g,h)}=1, \end{aligned}$$

where the first equality follows from (47), and

$$\begin{aligned} \left\langle v_\emptyset ,\sum _h v_{(g,h)}\right\rangle =\sum _h y_{(g,h)}=1. \end{aligned}$$

Combining the above with \(y_\emptyset =1\) we get that

$$\begin{aligned} \left\| v_\emptyset -\sum _h v_{(g,h)}\right\| ^2=0, \ \forall g, \end{aligned}$$

and analogously that \(\Vert v_\emptyset -\sum _g v_{(g,h)}\Vert ^2=0, \ \forall h.\) Lastly, it follows that the restriction of \(M_1(y)\) to the rows/columns indexed by the pairs (g, h) is an \({\mathcal {S}}_{+}\)-isomorphism matrix, as

$$\begin{aligned} \sum _{h,h' \in V_H} M_1(y)_{gh,g'h'} =\sum _{h,h'\in V_H} \langle v_{(g,h)}, v_{(g',h')}\rangle =\langle v_\emptyset , v_\emptyset \rangle =1, \text { for all } g,g' \in V_G, \end{aligned}$$

and similarly \(\sum _{g,g'} M_1(y)_{gh,g'h'} = 1\) for all \(h,h' \in V_H\).

Conversely, let M be an \({\mathcal {S}}_{+}\)-isomorphism matrix and let \(v_{(g,h)}\) be the vectors in a Gram decomposition of M. For any \(g\in V_G\) set \(v_g=\sum _h v_{(g,h)}\), and for any \(h\in V_H\) define \(v_h=\sum _g v_{(g,h)}\). As before, one can easily show that \(v_g=v_{g'}\) for all \(g,g'\in V_G\) and \(v_h=v_{h'}\) for all \(h,h'\in V_H\), and thus we use \(v_G\) and \(v_H\) to refer to these two vectors. But we also have that

$$\begin{aligned} |V_G|v_{G}=\sum _g \sum _h v_{(g,h)}= \sum _h \sum _g v_{(g,h)}=|V_H|v_{H}, \end{aligned}$$

which in turn implies that \(v_G=v_H=:v\) (as \({\mathcal {S}}_{+}\)-isomorphic graphs have the same number of vertices). Lastly, extend M to the Gram matrix of the vectors \(v_{(g,h)}\) together with v, i.e., border M with an extra row and column containing the inner products of v with the Gram vectors. It is then straightforward to check that the augmented matrix satisfies (44)–(48). \(\square \)