1 Introduction

Let G be a graph of order n with vertex set \(V=\{v_{1},\dots ,v_{n}\}\) and adjacency matrix A. Let S be a non-empty subset of V and let \(\mathrm{e}=(x_{1}, \dots , x_{n})^\mathtt{T}\) be its characteristic vector, that is \(x_{\ell }=1\) if \(v_{\ell }\in S\) and \(x_{\ell }=0\) otherwise, for \(\ell =1,\dots ,n.\) Then the \(n\times n\) matrix

$$\begin{aligned} W^{S}:=\big [{\mathrm{e}}, A{\mathrm{e}}, A^{2}{\mathrm{e}},\dots ,A^{n-1}{\mathrm{e}}\big ], \end{aligned}$$
(2)

formed by the \(A^{i}{\mathrm{e}}\) as columns, is the walk matrix of G for S. This term refers to the fact that the \(k{\mathrm{th}}\) entry in the row indexed by \(v_{\ell }\) is the number of walks in G of length \(k-1\) from \(v_{\ell }\) to some vertex in S. Walk matrices first appeared in 1978 in Cvetković [5] for the case \(S=V\) and in 2012 in Godsil [14] for arbitrary non-empty \(S\subseteq V.\) A survey about walk matrices and the related topic of main eigenvalues and main eigenvectors can be found in [20]. More recently, walk matrices have been studied in spectral graph theory [27,28,29], in particular in connection with the question of whether a graph is identified up to isomorphism by its spectrum [17].
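For readers who want to experiment, definition (2) is straightforward to compute. The following sketch (Python with NumPy; the 4-cycle and the set \(S=\{v_{1}\}\) are illustrative choices, not taken from the text) builds \(W^{S}\) column by column:

```python
import numpy as np

def walk_matrix(A, S):
    """Walk matrix W^S = [e, Ae, A^2 e, ..., A^{n-1} e] of equation (2).

    A is the n x n adjacency matrix; S is a set of 0-based vertex
    indices.  Entry (v, k) counts the walks of length k from vertex v
    to some vertex in S.
    """
    n = A.shape[0]
    e = np.zeros(n, dtype=int)
    e[list(S)] = 1                    # characteristic vector of S
    cols = [e]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])     # next column: one more step of A
    return np.column_stack(cols)

# 4-cycle with S = {v_1}: row v, column k counts k-walks from v to v_1
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
W = walk_matrix(A, {0})
assert W.tolist() == [[1,0,2,0],[0,1,0,4],[0,0,2,0],[0,1,0,4]]
```

Each new column is one application of A to the previous column, so the whole matrix costs \(n-1\) matrix-vector products.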

Let \(\mu _{1},\dots ,\mu _{s}\) be the distinct eigenvalues of G. Since A is symmetric, every vector in \(\mathbb {R}^{n}\) can be written uniquely as a linear combination of eigenvectors of A. In particular, when S is a set of vertices, its characteristic vector \(\mathrm{e}\) can be written in this way. For convenience, we renumber the distinct eigenvalues of A so that

$$\begin{aligned} {\mathrm{SD}}(S)\!:\mathrm{e}= & {} \mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r} \text { with } \mathrm{e}_{i}\ne 0 ~\text {and} \nonumber \\ A\mathrm{e}_{i}= & {} \mu _{i}\mathrm{e}_{i}\text { for all }1\le i\le r \end{aligned}$$
(3)

for some \(r\le s.\) We refer to (3) as the spectral decomposition of S,  or more properly, of its characteristic vector.

We outline the main results. The key theorem shows that the walk matrix for S determines the spectral decomposition of S,  and vice versa. This holds for any graph G and any non-empty set S of vertices of G. To state this in a precise fashion, we use (3) to define the \(n\times r\) main eigenvector matrix

$$\begin{aligned} E^{S}=\big [\mathrm{e}_{1}, \mathrm{e}_{2}, \dots , \mathrm{e}_{r} \big ] \end{aligned}$$

and, corresponding to the eigenvalues \(\mu _{1}, \mu _{2}, \dots , \mu _{r},\) we define the \(r\times n\) main eigenvalue matrix

$$\begin{aligned} M^{S}= \left( \begin{array}{ccccc} 1 &{} \mu _1 &{} \mu _{1}^{2} &{} \cdots &{} \mu _1^{n-1} \\ 1 &{} \mu _2 &{} \mu _{2}^{2} &{} \cdots &{} \mu _2^{n-1} \\ \vdots &{} \vdots &{} \vdots &{} &{} \vdots \\ 1 &{} \mu _r &{} \mu _{r}^{2} &{} \cdots &{} \mu _r^{n-1} \end{array}\right) . \end{aligned}$$

The following is proved in Theorems 3.6 and 4.2 in Sects. 3 and 4:

Theorem 1.1

Let G be a graph and let S be a non-empty set of vertices of G. Let \(W:=W^{S},\) \(E:=E^{S}\) and \(M:=M^{S}\) be as above. Then

$$\begin{aligned} W=E\cdot M. \end{aligned}$$

Furthermore, if W is given, then both E and M are determined uniquely. In particular, the number of eigenvectors in the spectral decomposition of S is equal to the rank of W.
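As an informal check of the factorization \(W=E\cdot M,\) one can compute the spectral decomposition numerically and compare the product with the walk matrix. A minimal sketch (Python with NumPy; floating-point eigendecomposition stands in for exact arithmetic over the splitting field, and the 4-cycle is an assumed example):

```python
import numpy as np

def spectral_factorization(A, e, tol=1e-6):
    """Factor the walk matrix as W^S = E @ M as in Theorem 1.1.

    E has the main eigenvectors e_i (projections of e onto the
    eigenspaces of A that e meets) as columns; M is the r x n matrix
    whose rows are 1, mu_i, mu_i^2, ..., mu_i^{n-1}."""
    n = A.shape[0]
    vals, vecs = np.linalg.eigh(A)            # A is symmetric
    mus, E_cols = [], []
    for mu in np.unique(np.round(vals, 6)):   # distinct eigenvalues (numerically)
        basis = vecs[:, np.abs(vals - mu) < tol]
        e_i = basis @ (basis.T @ e)           # orthogonal projection of e
        if np.linalg.norm(e_i) > tol:         # keep only the main eigenvectors
            mus.append(mu)
            E_cols.append(e_i)
    E = np.column_stack(E_cols)
    M = np.vander(np.array(mus), n, increasing=True)
    return E, M, mus

A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
e = np.ones(4)                                # S = V
E, M, mus = spectral_factorization(A, e)
W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(4)])
assert np.allclose(E @ M, W)                  # W = E . M
assert len(mus) == np.linalg.matrix_rank(W)   # r = rank(W)
```

For this regular graph only the valency \(\mu =2\) is main, so r equals 1 and W has rank 1.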

The word ‘determine’ has a precise meaning that is defined in Sect. 4. The first part of the theorem is straightforward from the definition of the matrices. The second part does require a further analysis of the walk matrix and certain polynomials associated to it. While the proof is elementary, it appears that the theorem has many applications. The first concerns the adjacency matrix of the graph.

Theorem 1.2

Let G be a graph and let S be a set of vertices of G,  with walk matrix \(W:=W^{S}.\) Suppose that W has rank \(\ge n-1.\) Then W determines the adjacency matrix of G.

We give an explicit formula for the adjacency matrix when \(W^{S}\) has rank \(\ge n-1,\) see Theorems 5.1 and  5.5. The theorem is best possible in the following sense: There are graphs G and \(G^{*},\) with vertex sets S and \(S^{*},\) where \(W^{S}=W^{S^{*}}\) have rank \(n-2,\) while the corresponding adjacency matrices are not equal to each other. Generalizations for the case when \(W^{S}\) has rank \(< n-1\) are considered in Sect. 5.

Reordering the vertices of the graph amounts to permuting the rows of the walk matrix, and it is useful to bring walk matrices into some standard form by such row permutations. Here we use the lexicographical ordering of the rows. We denote the lex-ordered version of \(W^{S}\) by \(\mathrm{lex}(W^{S}),\) see Sect. 7. If \(S=V\), then \(W^{S}\) is the standard walk matrix of G. The following is proved in Theorem 6.3.

Theorem 1.3

Let G and \(G^{*}\) be graphs with standard walk matrices W and \(W^{*},\) respectively. Suppose that W has rank \(\ge n-1.\) Then G is isomorphic to \(G^{*}\) if and only if \(\mathrm{lex}(W)=\mathrm{lex}(W^{*}).\)

In Sect. 6 we consider walk equivalence: Two graphs G and \(G^{*}\) are walk equivalent to each other if their standard walk matrices are the same, \(W^{V}=W^{V^{*}}.\) In Proposition 6.1 we show that G is walk equivalent to \(G^{*}\) if and only if their adjacency matrices restrict to the same map on the space generated by the columns of \(W^{V}.\)

In Sect. 7 we discuss probabilistic applications. Let P(n) be a property of graphs on n vertices. Then we say that P(n) holds almost always, or that almost all graphs have property P, if the probability for P(n) to hold tends to 1 as n tends to infinity. O’Rourke and Touri [18], based on the work of Tao and Vu [23], have shown that the standard walk matrix is almost always invertible, see Theorem 7.1. From this we obtain the following consequence for the graph isomorphism problem; further comments are available in Sect. 7.

Theorem 1.4

For almost all graphs G the following holds: G is isomorphic to the graph \(G^{*}\) if and only if \(\mathrm{lex}(W)=\mathrm{lex}(W^{*}),\) where W and \(W^{*}\) are the standard walk matrices of G and \(G^{*},\) respectively.

Throughout we consider various notions of graph equivalences that arise from the eigenvalues and eigenvectors associated to walk matrices and spectral decompositions. In these problems it becomes evident that the Galois group of the characteristic polynomial of the adjacency matrix plays an important role. In the Appendix we give examples and counterexamples for several assertions about such spectral equivalences.

The results in this paper show that walk matrices form an essential bridge between the combinatorial and the algebraic, in particular spectral, properties of finite graphs. All graphs considered in the paper are finite, undirected and without loops or multiple edges. In Sect. 2 we state the basics required for spectral decompositions. In Sect. 3 we introduce main eigenvectors and main eigenvalues. Sections 4–7 contain the material discussed above.

2 Preliminaries and notation

Let G be a graph on the vertex set \(V=\{v_{1}, v_{2},\dots ,v_{n}\}.\) For u and v in V, we write \(u\sim v\) if u is adjacent to v. The adjacency matrix \(A=(a_{ij})\) of G is given by \(a_{ij}=1\) if \(v_{i}\sim v_{j}\) and \(a_{ij}=0\) otherwise. The characteristic polynomial of G,  denoted \({\mathrm{char}}_{G}(x),\) is the characteristic polynomial of A. The roots of \({\mathrm{char}}_{G}(x)\) are the eigenvalues of G and the collection of all roots is the spectrum of G. Since A is symmetric, all eigenvalues are real. We denote the distinct eigenvalues of G by

$$\begin{aligned} \mu _{1}, \mu _{2},\dots ,\mu _{s} \end{aligned}$$

for a certain \(s\le n,\) in some arbitrary order. (Later it will be essential to reorder the eigenvalues in particular circumstances.) The minimum polynomial of G is the monic polynomial f(x) of least degree with \(f(A)=0,\) denoted by \(f(x)={\min }_{G}(x).\) We have

$$\begin{aligned} {\min }_{G}(x)=(x-\mu _{1})(x-\mu _{2})\cdots (x-\mu _{s}) \end{aligned}$$

since A is symmetric.

The smallest field \(\mathbb {K}\) with \(\mathbb {Q}\subseteq \mathbb {K}\subset \mathbb {R}\) which contains all eigenvalues of G is the splitting field of \({\mathrm{char}}_{G}(x),\) or of G,  denoted \(\mathbb {K}=\mathbb {Q}[\mu _{1},\dots ,\mu _{s}].\) The set of all field automorphisms \(\gamma \!: \mathbb {K}\rightarrow \mathbb {K}\) which map eigenvalues of A to eigenvalues of A forms the Galois group of \({\mathrm{char}}_{G}(x),\) or of G,  denoted by \(\mathrm{Gal}(G).\) We express the action of the field automorphism \(\gamma \) by \(a\mapsto a^{\gamma }\) for \(a\in \mathbb {K}\) and extend this notation to vectors, matrices and polynomials in the obvious way. For instance, \(A^{\gamma }=(a_{ij}^{\gamma })=(a_{ij})=A\) for all \(\gamma \) in \(\mathrm{Gal}(G).\) We will use the fact that \(a\in \mathbb {K}\) belongs to \(\mathbb {Q}\) if and only if \(a^{\gamma }=a\) for all \(\gamma \in \mathrm{Gal}(G).\) An integer polynomial is monic if its leading coefficient is 1. A real number that is the root of a monic integer polynomial is an algebraic integer. Such a number is rational if and only if it is an ordinary integer. Two algebraic integers are algebraically conjugate if they are roots of the same irreducible monic integer polynomial.

Since \((\min _{G}(A))^{\gamma }=\min ^{\gamma }_{G}(A)=0\) for all \(\gamma \in \mathrm{Gal}(G),\) it follows that \(\min _{G}(x)=\min ^{\gamma }_{G}(x)\) and so this is an integer polynomial. Let

$$\begin{aligned} \min \nolimits _{G}(x)=f_{1}(x)\cdots f_{\ell }(x) \end{aligned}$$
(4)

be factored into irreducible integer polynomials \(f_{i}(x).\) Then two eigenvalues \(\mu \) and \(\mu ^{*}\) of G are algebraically conjugate if and only if they are roots of the same polynomial \(f_{i}(x)\) for some \(1\le i\le \ell .\) From Galois theory [21] we have

Theorem 2.1

The orbits of \(\mathrm{Gal}(G)\) on the spectrum of G are the equivalence classes of distinct algebraically conjugate eigenvalues of G.

For each i with \(1\le i\le s\), consider the polynomial

$$\begin{aligned} m_{i}(x):=(x-\mu _{1})\cdots (x-\mu _{i-1})(x-\mu _{i+1})\cdots (x-\mu _{s})=(x-\mu _{i})^{-1}\min \!\!_{G}(x) \end{aligned}$$

and define the \(n\times n\) matrix

$$\begin{aligned} E_{i} := \frac{1}{m_{i}(\mu _{i})} m_{i}(A) . \end{aligned}$$
(5)

Clearly, each \(E_{i}\) is symmetric and its coefficients belong to \(\mathbb {K}.\) The following lemma can be found in many books on linear algebra [7, 10, 13, 15]; it can also be verified directly from the definition. Usually these results are stated for matrices over the real numbers. However, since the polynomials \(m_{i}(x)\) are defined over \(\mathbb {K},\) all matrices and computations are over \(\mathbb {K}.\) This is essential for us: in particular, Galois automorphisms act on the \(E_{i}\) and all associated quantities over \(\mathbb {K}^{n}.\)

Lemma 2.2

Let A be the adjacency matrix of G with distinct eigenvalues \(\mu _{1},\dots ,\mu _{s}.\) For \(1\le i\le s\), let \(E_{i}\) be as above and denote the \(n\times n\) identity matrix by \({\mathrm{I}}.\) Then

  1. (i)

    \(E^{2}_{i} = E_{i}\) and \(E_{i}E_{j} = 0\) for all \(i\ne j\in \{1, \dots , s\},\)

  2. (ii)

    \({\mathrm{I}} = E_{1} + E_{2} +\dots + E_{s}\),

  3. (iii)

    \(A = \mu _{1}E_{1} + \mu _{2}E_{2} +\dots + \mu _{s}E_{s}.\)

The matrices \(E_{i}\) are the minimum idempotents or orthogonal idempotents of the graph. From the properties in (i) and (ii) it is easy to compute all powers of A and so we obtain from (iii) the principal equation for A,  namely

$$\begin{aligned} A^{k}=\mu _{1}^{k}E_{1}+\mu _{2}^{k}E_{2}+\cdots +\mu _{s}^{k}E_{s} \end{aligned}$$
(6)

for all \(k\ge 0.\) Further, let \(x\in \mathbb {K}^{n}\) and consider \(E_{i}\!\cdot \! x\) for some \(1\le i\le s.\) Since

$$\begin{aligned} A(E_{i}\!\cdot \! x)=(\mu _{1}E_{1}+\mu _{2}E_{2}+\dots +\mu _{s}E_{s})E_{i}\!\cdot \! x=\mu _{i}E_{i}^{2}\!\cdot \! x=\mu _{i}E_{i}\!\cdot \! x, \end{aligned}$$

using Lemma 2.2(i), it follows that \(E_{i}\!\cdot \! x\) is an eigenvector of A for eigenvalue \(\mu _{i},\) provided that \(E_{i}\!\cdot \! x\ne 0.\) From Lemma 2.2(ii) we conclude that

$$\begin{aligned} x=E_{1}\!\cdot \! x+\dots +E_{s}\!\cdot \! x. \end{aligned}$$

We call this expression the spectral decomposition of x into eigenvectors of A. Since \(E_{i}^{2}=E_{i},\) we can view \(E_{i}\) as a projection map \(\mathbb {K}^{n}\rightarrow \mathbb {K}^{n}.\) Its image

$$\begin{aligned} \mathrm{Eig}(A,\mu _{i}):= \{E_{i}\!\cdot \!x | x\in \mathbb {K}^{n} \} \end{aligned}$$

is the eigenspace of A for eigenvalue \(\mu _{i}.\) Clearly \(E_{i}\) and \(\mathrm{Eig}(A,\mu _{i})\) determine one another. There are many useful interconnections. For instance, the multiplicity of \(\mu _{i}\) in the spectrum of G is equal to \(\dim (\mathrm{Eig}(A,\mu _{i}))={\mathrm{trace}}(E_{i}),\) and so on.
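A small numerical sketch of definition (5), Lemma 2.2 and the principal equation (6) (Python with NumPy; the path on three vertices is an assumed example, and floating-point eigenvalues stand in for the exact ones):

```python
import numpy as np

def idempotents(A, tol=1e-8):
    """Orthogonal idempotents E_i = m_i(A) / m_i(mu_i) of equation (5),
    one per distinct eigenvalue, computed with floating-point eigenvalues."""
    vals = np.linalg.eigvalsh(A)
    mus = []
    for v in vals:                        # collect the distinct eigenvalues
        if not any(abs(v - m) < tol for m in mus):
            mus.append(v)
    n = A.shape[0]
    Es = []
    for i, mu in enumerate(mus):
        E_i = np.eye(n)
        scale = 1.0
        for j, nu in enumerate(mus):
            if j != i:
                E_i = E_i @ (A - nu * np.eye(n))   # build m_i(A)
                scale *= (mu - nu)                 # and m_i(mu_i)
        Es.append(E_i / scale)
    return mus, Es

# path on 3 vertices: distinct eigenvalues -sqrt(2), 0, sqrt(2)
A = np.array([[0,1,0],[1,0,1],[0,1,0]], dtype=float)
mus, Es = idempotents(A)
assert all(np.allclose(E @ E, E) for E in Es)                # Lemma 2.2(i)
assert np.allclose(sum(Es), np.eye(3))                       # Lemma 2.2(ii)
assert np.allclose(sum(m * E for m, E in zip(mus, Es)), A)   # Lemma 2.2(iii)
# principal equation (6): A^k = sum_i mu_i^k E_i, here for k = 3
assert np.allclose(sum(m**3 * E for m, E in zip(mus, Es)), A @ A @ A)
```

Here every eigenvalue is simple, so each \(E_{i}\) has trace 1, matching the multiplicity statement above.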

Next suppose that x is an eigenvector of A for eigenvalue \(\mu \) and let \(\gamma \in \mathrm{Gal}(G).\) Then \(Ax=\mu x\) implies \(Ax^{\gamma }=\mu ^{\gamma }x^{\gamma }\) since \(A^{\gamma }=A.\) Thus, \(x^{\gamma }\) is an eigenvector for eigenvalue \(\mu ^{\gamma }.\) This shows that the action of \(\mathrm{Gal}(G)\) on the set \(\{\mu _{1},\dots ,\mu _{s}\}\) of all distinct eigenvalues extends to an action on the set \(\{{\mathrm{Eig}}(A,\mu _{1}),\dots , {\mathrm{Eig}}(A,\mu _{s})\}\) of all eigenspaces, and hence also to an action on the set \(\{E_{1},\dots , E_{s}\}\) of all idempotents of G. Further actions will be discussed in Sect. 3. We collect these facts:

Theorem 2.3

[Spectral Decomposition]

Let \(\mu _{1},\dots , \mu _{s}\) be the distinct eigenvalues of the graph G. Let \(\mathbb {K}\) be its splitting field and let \(\mathrm{Eig}(A,\mu _{1}),\dots ,{\mathrm{Eig}}(A,\mu _{s})\) be the eigenspaces of its adjacency matrix. Then

$$\begin{aligned} \mathbb {K}^{n} = {\mathrm{Eig}}(A,\mu _{1}) \oplus \dots \oplus {\mathrm{Eig}}(A,\mu _{s}) \end{aligned}$$

and for \(x\in \mathbb {K}^{n}\) we have the spectral decomposition

$$\begin{aligned} x=E_{1}\!\cdot \! x+\dots +E_{s}\!\cdot \! x \quad \text {with} \quad E_{i}\!\cdot \!x \in {\mathrm{Eig}}(A,\mu _{i}). \end{aligned}$$

For \(i\ne j\), there exists a field automorphism \(\gamma \in \mathrm{Gal}(G)\) with \(\mathrm{Eig}(A,\mu _{i})^{\gamma }={\mathrm{Eig}}(A,\mu _{j})\) if and only if \(\mu _{i}\) is algebraically conjugate to \(\mu _{j}.\)

Remark 2.1

Spectral decompositions appear in many parts of combinatorics, see for instance [7], and most often this is for the purpose of decomposing an operator. In this paper, by contrast, we are interested in the decomposition of a vector into eigenvectors of the operator, applied in particular to the characteristic vector of a set of vertices of the graph.

3 From the spectral decomposition of a set to its walk matrix

Let G be a graph of order n on the vertex set V with adjacency matrix A and distinct eigenvalues \(\mu _{1},\dots ,\mu _{s}.\) Let \(\mathbb {K}=\mathbb {Q}[\mu _{1},\dots ,\mu _{s}]\) be the splitting field of G,  with Galois group \(\mathrm{Gal}(G),\) and let \(E_{i}:\mathbb {K}^{n}\rightarrow \mathbb {K}^{n}\) for \(1\le i\le s\) be the orthogonal idempotents of G. Throughout S is a non-empty subset of V with characteristic vector \(\mathrm{x}^{S}:=(x_{1},\dots ,x_{n})^\mathtt{T}\in \mathbb {K}^{n}.\) For convenience, we write \(\mathrm{e}=\mathrm{x}^{S}\) when the context is clear. Then the spectral decomposition of S is

$$\begin{aligned} {\mathrm{SD}}(S)\!:\mathrm{e}= & {} \mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r} \text { with } \mathrm{e}_{i}\ne 0~ \text { and} \nonumber \\ A\mathrm{e}_{i}= & {} \mu _{i}\mathrm{e}_{i}\text { for all } 1\le i\le r. \end{aligned}$$
(7)

From the general facts in Sect. 2 we have \(\mathrm{e}_{i}=E_{i}\mathrm{e}\) and

$$\begin{aligned} A^{k}\mathrm{e}= & {} \mu _{1}^{k}\mathrm{e}_{1}+\mu _{2}^{k}\mathrm{e}_{2}+\dots +\mu _{r}^{k}\mathrm{e}_{r} \text { for all }\, k\ge 0. \end{aligned}$$
(8)

We are interested in the combinatorial significance of this decomposition.

The \(\mathrm{e}_{i}\) and \(\mu _{i}\) are the main eigenvectors and main eigenvalues associated to S,  respectively. Comments and references concerning the notion of main eigenvectors and eigenvalues are available in Remark 3.1. It is important to emphasize that the actual eigenvalues which appear in (7) depend on S,  in general, while the numbering itself is only a matter of convenience.

We now associate to S its main polynomial

$$\begin{aligned} {\mathrm{main}}_{G}^{S}(x):=(x-\mu _{1})\cdot (x-\mu _{2})\cdots (x-\mu _{r}) \end{aligned}$$

where the \(\mu _{i}\) correspond to the eigenvectors in (7).

Lemma 3.1

Let S be a set of vertices of G and let \(\mathrm{x}^{S}=\mathrm{e}=\mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r} \) be its spectral decomposition. Then we have:

  1. (i)

    For each \(\gamma \in \mathrm{Gal}(G)\) the map \(\gamma \!: \mathrm{e}_{i}\mapsto \mathrm{e}_{i}^{\gamma }\) is a permutation of the set \(\{\mathrm{e}_{1},\mathrm{e}_{2},\dots ,\mathrm{e}_{r}\}\) and the map \(\gamma \!: \mu _{i}\mapsto \mu _{i}^{\gamma }\) is a permutation of the set \(\{\mu _{1},\mu _{2},\dots ,\mu _{r}\}.\)

  2. (ii)

    The polynomial \({\mathrm{main}}_{G}^{S}(x)\) is an integer polynomial dividing \({\mathrm{min}}_{G}(x).\) It is the unique monic polynomial f(x) of least degree such that \(f(A)(\mathrm{e})=0.\) (In ring theoretical terms, \(f(x)={\mathrm{main}}_{G}^{S}(x)\) is the A-annihilator of \(\mathrm{e}.)\)

Proof

(i) Since \(\mathrm{e}^{\gamma }=\mathrm{e},\) we have \(\mathrm{e}_{1}^{\gamma }+\mathrm{e}_{2}^{\gamma }+\dots +\mathrm{e}_{r}^{\gamma }=\mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r}\) and hence \(\{\mathrm{e}_{1}^{\gamma },\mathrm{e}_{2}^{\gamma },\dots ,\mathrm{e}_{r}^{\gamma }\}=\{\mathrm{e}_{1},\mathrm{e}_{2},\dots ,\mathrm{e}_{r}\},\) since the decomposition into eigenvectors is unique. For the same reason \(\mu _{i}^{\gamma }\in \{\mu _{1},\mu _{2},\dots ,\mu _{r}\}\) for each \(1\le i\le r.\)

(ii) It follows from (i) that \(\big ({\mathrm{main}}_{G}^{S}(x)\big )^{\gamma }={\mathrm{main}}_{G}^{S}(x)\) for all \(\gamma \in \mathrm{Gal}(G).\) Therefore, \({\mathrm{main}}_{G}^{S}(x)\) is an integer polynomial, with leading coefficient 1. Since \((A-\mu _{i} \mathrm{I})(\mathrm{e}_{i})=0,\) we have \(\big ({\mathrm{main}}_{G}^{S}(A)\big )\mathrm{e}=0.\) It is easy to check that if f(x) is a proper divisor of \({\mathrm{main}}_{G}^{S}(x)\) then \(f(A)(\mathrm{e})\ne 0.\) \(\square \)

The lemma imposes significant restrictions on the main eigenvalues that can appear in the spectral decomposition of a vertex set. For instance, the following is immediate from the lemma:

Corollary 3.2

Suppose that \({\mathrm{min}}_{G}(x)\) is irreducible. Then \({\mathrm{main}}^{S}_{G}(x)={\mathrm{min}}_{G}(x)\) for every non-empty set \(S\subseteq V.\)

Remark 3.1

The notion of main eigenvalues and main eigenvectors is due to Cvetković [5]. In the original context, see also [20], an eigenvalue is said to be a main eigenvalue if its eigenspace contains a vector that is not perpendicular to \(\mathrm{e}.\) This definition is equivalent to the one we are using here:

Lemma 3.3

The eigenvalue \(\mu _{i}\) is a main eigenvalue for \(\mathrm{e}\) if and only if \({\mathrm{Eig}}(A,\mu _{i})\) contains a vector that is not perpendicular to \(\mathrm{e}.\)

Proof

Clearly, \(E_{i}\mathrm{e}\) belongs to \({\mathrm{Eig}}(A,\mu _{i}),\) and if \(E_{i}\mathrm{e}\ne 0\) then \(\mathrm{e}^\mathtt{T}(E_{i}\mathrm{e})=\mathrm{e}^\mathtt{T}(E_{i}^{2}\mathrm{e})=(E_{i}\mathrm{e})^\mathtt{T}(E_{i}\mathrm{e})\ne 0\) by Lemma 2.2. Conversely, suppose that \(E_{i}a\) is not perpendicular to e for some \(a\in \mathbb {K}^{n}.\) Thus, \(0\ne (E_{i}a)^\mathtt{T}\mathrm{e}=a^\mathtt{T}(E_{i}\mathrm{e})\) and so \(E_{i}\mathrm{e}\ne 0\) is a main eigenvector. \(\square \)

In [5] and [20] the set S is equal to V and so \(\mathrm{e}=x^{V}\) is the all-one vector. We refer to this situation as the standard case. The general situation, when S is an arbitrary non-empty set of vertices, has been considered in Godsil [14]. Various results related to Lemma 3.1 for the standard case can be found in Cvetković [5], Godsil [13], Rowlinson [20] and Teranishi [24].

We come to the main topic in this paper: To a chosen set S of vertices of G, we associate its walk matrix \(W^{S}.\) This matrix is closely related to the spectral decomposition of S,  as we shall see.

As before let V be the vertex set of G and let \(k\ge 0\) be an integer. Then a walk of length k in G is a sequence of vertices \(w=(u_0,u_1,\ldots ,u_k)\) with \(u_{i}\in V\) such that \(u_{i-1}\sim u_i\) for \(i=1,2,\ldots , k.\) (These vertices are not necessarily distinct.) For short, we say that w is a k-walk from \(u_{0}\) to \(u_{k},\) and that these vertices are the ends of w. When A is the adjacency matrix of G, it is well known that the (ij)-entry of \(A^{k}\) is the total number of k-walks from \(v_{i}\) to \(v_{j},\) see for instance [2, 13]. Hence we have:

Proposition 3.4

Let S be a subset of V with characteristic vector \(\mathrm{e}=\mathrm{x}^{S}\) and let \(0\le k\le n.\) Then for all \(1\le j\le n\), the j-th entry of \(A^{k}\mathrm{e}\) is the total number of k-walks from \(v_{j}\) to some vertex in S.

Definition Let S be a subset of V with characteristic vector \(\mathrm{e}:=\mathrm{x}^{S}.\) Then the walk matrix for S is the \(n\times n\) matrix with columns \(\mathrm{e},\) \(A\mathrm{e},\) \(A^2\mathrm{e},\dots , A^{n-1}\mathrm{e},\) thus

$$\begin{aligned} W^{S}:=\big [\mathrm{e},A\mathrm{e},A^2\mathrm{e},\dots , A^{n-1}\mathrm{e}\big ]. \end{aligned}$$

In particular, the entries in the \(j{\mathrm{th}}\) row of \(W^{S}\) are the numbers of walks of length \(0,\dots , n-1\) from \(v_{j}\) to some vertex in S. When \(S=V\) we refer to \(W^{S}\) as the standard walk matrix of G.

For convenience, we extend this notation: For \(0\le i\le j\) let

$$\begin{aligned} W^{S}_{[i,j]}:=\big [A^{i}\mathrm{e},A^{i+1}\mathrm{e},\dots ,A^{j-1}\mathrm{e},A^{j}\mathrm{e}\big ]. \end{aligned}$$

In particular, \(W^{S}=W^{S}_{[0,n-1]},\) and \(W^{S}_{[0,0]}=\mathrm{e}\) is the first column of \(W^{S}.\)

Returning to the spectral decomposition \(\mathrm{x}^{S}=\mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r}\) for S, we define the main eigenvector matrix \(E^{S}\) for S. Its columns are the main eigenvectors of S,  that is

$$\begin{aligned} E^{S}:=\big [\mathrm{e}_{1},\mathrm{e}_{2},\dots ,\mathrm{e}_{r}\big ]. \end{aligned}$$

This matrix has size \(n\times r.\) We also need certain matrices associated to the main eigenvalues \(\mu _{1}, \mu _{2},\dots ,\mu _{r}\) for S. For \(0\le i\le j\), denote by \(M^{S}_{[i,j]}\) the \(r\times (j-i+1)\) matrix

$$\begin{aligned} M^{S}_{[i,j]}= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c}\mu _1^i &{} \mu _1^{i+1} &{} \cdots &{} \mu _1^j \\ \mu _2^i &{} \mu _2^{i+1} &{} \cdots &{} \mu _2^j \\ \vdots &{} \vdots &{} &{} \vdots \\ \mu _r^i &{} \mu _r^{i+1} &{} \cdots &{} \mu _r^j\end{array}\right) . \end{aligned}$$
(9)

This matrix is the [ij]-main eigenvalue matrix for S. Note that \(M^{S}_{[i,j]},\) \({\mathrm{main}}_{G}^{S}(x)\) and the main eigenvalues for S all determine one another when \(i<j\) or when \(i=j\) is odd. (For \(i=j\ne 0\), the sign of the eigenvalue may be determined from other information, in some cases.) The \(r\times r\) matrix

$$\begin{aligned} M^{S}:=M^{S}_{[0,r-1]} \end{aligned}$$

is the main eigenvalue matrix for S in G. Recall, these definitions refer to the specific arrangement of the eigenvectors for \(\mathrm{e}=\mathrm{x}^{S}\) in the decomposition (7).

Lemma 3.5

  1. (i)

    For all \(0\le j\), the matrix \(M^{S}_{[0,j]}\) has rank \(\min \{r, j+1\}.\)

  2. (ii)

    For \(0<i<j\), the matrix \(M^{S}_{[i,j]}\) has rank \( \min \{r, j-i+1\}\) if 0 is not a main eigenvalue. If  0 is a main eigenvalue, then \(M^{S}_{[i,j]}\) has rank \( \min \{r, j-i+1\}-1.\)

  3. (iii)

    \(M^{S}\) is invertible and \(\det (M^{S})^{2}\) is an integer.

Proof

For (i) and (ii) we use that \(M^{S}_{[i,j]}\) is of Vandermonde type and so it is easy to compute the determinant of a suitable square submatrix, having in mind that the main eigenvalues are pairwise distinct. If \(0<i\) and 0 is an eigenvalue, the result follows by removing a row of zeros from \(M^{S}_{[i,j]};\) we leave the details to the reader. For (iii) note that \(M=M^{S}\) is invertible by (i). By Lemma 3.1 any element \(\gamma \in \mathrm{Gal}(G)\) permutes the rows of M and so \(\det (M^{\gamma })=\pm \det (M).\) This gives \(\det ((M^{2})^{\gamma })=\det (M^{2})\) for all \(\gamma \in \mathrm{Gal}(G)\) and hence \(\det (M^{2})\) is an integer. \(\square \)

With these preparations in hand, we obtain the first part of Theorem 1.1:

Theorem 3.6

Let S be a non-empty set of vertices of G. Then for all \(0\le i\le j\), we have \(W^{S}_{[i,j]}=E^{S}\cdot M^{S}_{[i,j]}.\) In particular, \(W^{S}=E^{S}\cdot M^{S}.\)

Proof

Fix some k with \(i\le k\le j.\) Then \(A^{k}\mathrm{e}=\mu _{1}^{k}\mathrm{e}_{1}+\dots +\mu _{r}^{k}\mathrm{e}_{r}\) by (8). This expression is equal to the \((k-i+1){\mathrm{th}}\) column on each side of the equation, as required. \(\square \)

We note several consequences of the theorem.

Corollary 3.7

Let S be a non-empty set of vertices of G with walk matrix \(W^{S}.\) Let \(x^{S}=\mathrm{e}=\mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r}\) be the spectral decomposition of S. Then \(\mathrm{e}_{1},\mathrm{e}_{2},\dots ,\mathrm{e}_{r}\) is a basis for the \(\mathbb {K}\)-vector space spanned by the columns of \(W^{S}_{[0,r-1]}.\) In particular, \(r=\mathrm{rank}(W^{S}_{[0,r-1]})={\mathrm{rank}}(W^{S}).\)

Proof

Let X be the vector space spanned by \(\mathrm{e}_{1},\mathrm{e}_{2},\dots ,\mathrm{e}_{r}\) over \(\mathbb {K}.\) As the \(\mathrm{e}_{i}\) are orthogonal to each other, they are linearly independent. Let Y be the vector space spanned by the columns of \(W^{S}_{[0,r-1]}\) over \(\mathbb {K}.\) By Theorem 3.6, we have \(W^{S}_{[0,r-1]}=E^{S}\cdot M^{S}_{[0,r-1]}\) and so every column of \(W^{S}_{[0,r-1]}\) is a linear combination of \(\mathrm{e}_{1},\mathrm{e}_{2},\dots ,\mathrm{e}_{r}.\) Hence, \(Y\subseteq X.\) By Lemma 3.5, the matrix \(M^{S}_{[0,r-1]}\) has rank r and so it has a right-inverse \(M^{*},\) that is \(M^{S}_{[0,r-1]}M^{*}=\mathrm{I}_{r}.\) Hence, \(W^{S}_{[0,r-1]}M^{*}=E^{S}\) and thereby \(X\subseteq Y.\) \(\square \)
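The corollary also yields a practical way to compute the main polynomial: \(r={\mathrm{rank}}(W^{S}),\) and the coefficients of the monic A-annihilator of \(\mathrm{e}\) from Lemma 3.1(ii) solve a linear system in the first r columns of the walk matrix. A sketch (Python with NumPy; the 4-cycle is an assumed example, and floating-point linear algebra stands in for exact arithmetic):

```python
import numpy as np

def main_poly_coeffs(A, e):
    """Coefficients c_0, ..., c_{r-1}, 1 (lowest degree first) of the
    monic A-annihilator of e, i.e. of main_G^S(x).  Uses r = rank(W^S)
    and solves A^r e = -(c_0 e + c_1 Ae + ... + c_{r-1} A^{r-1} e)."""
    n = A.shape[0]
    cols = [np.asarray(e, dtype=float)]
    for _ in range(n):
        cols.append(A @ cols[-1])              # e, Ae, ..., A^n e
    r = np.linalg.matrix_rank(np.column_stack(cols[:n]))
    c, *_ = np.linalg.lstsq(np.column_stack(cols[:r]), -cols[r], rcond=None)
    return np.append(c, 1.0)

A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
cV = main_poly_coeffs(A, np.ones(4))           # S = V: main poly x - 2
c1 = main_poly_coeffs(A, [1, 0, 0, 0])         # S = {v_1}: main poly x^3 - 4x
assert np.allclose(cV, [-2.0, 1.0])
assert np.allclose(c1, [0.0, -4.0, 0.0, 1.0])
```

For \(S=V\) only the valency 2 is main; for \(S=\{v_{1}\}\) all three distinct eigenvalues \(2,-2,0\) are main and the degree of the main polynomial equals \({\mathrm{rank}}(W^{S})=3,\) as in Corollary 3.7.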

Corollary 3.8

Suppose that \({\mathrm{char}}_{G}(x)\) is irreducible. Then

  1. (i)

    \({\mathrm{char}}_{G}(x)={\mathrm{main}}^{S}_{G}(x)\) for all non-empty \(S\subseteq V,\) and

  2. (ii)

    \({\mathrm{rank}}(W^{S})=n\) for all non-empty \(S\subseteq V.\)

Proof

For (i), the irreducibility of \({\mathrm{char}}_{G}(x)\) and Lemma 3.1 imply that \({\mathrm{char}}_{G}(x)={\mathrm{main}}_{G}^{S}(x)\) for all \(\emptyset \ne S\subseteq V.\) By definition, \(\mathrm{main}_{G}^{S}(x)\) has degree r and so \(n=r.\) Property (ii) now follows from Corollary 3.7.\(\square \)

Example 3.1

The simplest kind of walk matrix occurs in regular graphs. Here \(\mathrm{e}_{1}=(1,1,\dots ,1)^\mathtt{T}\) is an eigenvector for eigenvalue \(\mu _{1}=k\) where k is the valency of the graph. Therefore, \(\mathrm{x}^{V}=\mathrm{e}=\mathrm{e}_{1}\) is the spectral decomposition for \(S=V,\) with main polynomial \(x-k\) and walk matrix \(W^{V}=[\mathrm{e},k\mathrm{e},k^{2}\mathrm{e},\dots ,k^{n-1}\mathrm{e}].\) Connected regular graphs are characterized by this property.

Example 3.2

The graph G in Fig. 1 on the vertex set \(V=\{1,2,3,4\}\) has characteristic polynomial \({\mathrm{char}}_{G}(x)=\mathrm{min}_G(x)=(x+1)(x^3-x^{2}-3x+1)\) where the second factor is irreducible. The Galois group of \({\mathrm{char}}_{G}(x)\) is \(\mathrm{Sym}(1)\times {\mathrm{Sym}}(3),\) fixing the rational root \(-1\) and permuting the irrational roots \(-1.48...\), 0.31... and 2.17... as a symmetric group.

The standard walk matrix of G is \(W=W^{V},\) its main polynomial is \({\mathrm{main}}_G^{V}(x)=x^3-x^{2}-3x+1.\)

Fig. 1
figure 1

Graph on 4 Vertices

We compute the walk matrix and main polynomial for some other subsets. We have

$$\begin{aligned} W^{\{1\}}=\left( \begin{array}{cccc} 1 &{} 0 &{} 1 &{} 0 \\ 0 &{} 1 &{} 0 &{} 3 \\ 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 &{} 1 \\ \end{array}\right) ,\quad W^{\{2\}}=\left( \begin{array}{cccc} 0 &{} 1 &{} 0 &{} 3 \\ 1 &{} 0 &{} 3 &{} 2 \\ 0 &{} 1 &{} 1 &{} 4 \\ 0 &{} 1 &{} 1 &{} 4 \\ \end{array} \right) , \end{aligned}$$
$$\begin{aligned} W^{\{3\}}=\left( \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 1 &{} 4 \\ 1 &{} 0 &{} 2 &{} 2 \\ 0 &{} 1 &{} 1 &{} 3 \\ \end{array} \right) \quad {\text {and}}\quad W^{\{4\}}=\left( \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 1 &{} 4 \\ 0 &{} 1 &{} 1 &{} 3 \\ 1 &{} 0 &{} 2 &{} 2 \\ \end{array} \right) . \end{aligned}$$

Their main polynomials are \({\mathrm{main}}_{G}^{\{1\}}(x)={\mathrm{main}}_{G}^{\{2\}}(x)=(x^3-x^{2}-3x+1),\) of degree \(3={\mathrm{rank}}(W^{\{1\}})={\mathrm{rank}}(W^{\{2\}}),\) according to Corollary 3.7. For the remaining sets, we have \({\mathrm{main}}_{G}^{\{3\}}(x)={\mathrm{main}}_{G}^{\{ 4\}}(x)=(1+x)(x^3-x^{2}-3x+1),\) of degree \(4={\mathrm{rank}}(W^{\{3\}})={\mathrm{rank}}(W^{\{4\}}).\) According to Proposition 4.1 in the next section, the walk matrix for any set \(S\subseteq V\) is of the form \(W^{S}=\sum _{i\in S} W^{\{i\}}.\)
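These computations can be reproduced mechanically: the second column of \(W^{\{i\}}\) is \(A\mathrm{x}^{\{i\}},\) that is, the \(i{\mathrm{th}}\) column of A, so the adjacency matrix can be read off the displayed walk matrices. A sketch in Python with NumPy (0-based vertex labels, so vertex 1 of the text is index 0):

```python
import numpy as np

# Adjacency matrix of the graph of Fig. 1, read off the walk matrices
# above: the second column of W^{i} is the i-th column of A.
A = np.array([[0,1,0,0],
              [1,0,1,1],
              [0,1,0,1],
              [0,1,1,0]])

def walk_matrix(A, S):
    """Walk matrix [e, Ae, A^2 e, A^3 e] for the vertex set S."""
    e = np.zeros(A.shape[0], dtype=int)
    e[list(S)] = 1
    cols = [e]
    for _ in range(A.shape[0] - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

W1 = walk_matrix(A, {0})                                 # W^{1} above
assert W1.tolist() == [[1,0,1,0],[0,1,0,3],[0,0,1,1],[0,0,1,1]]
assert np.linalg.matrix_rank(W1) == 3                    # deg main^{1} = 3
assert np.linalg.matrix_rank(walk_matrix(A, {2})) == 4   # deg main^{3} = 4
# characteristic polynomial (x+1)(x^3 - x^2 - 3x + 1) = x^4 - 4x^2 - 2x + 1
assert np.allclose(np.poly(A), [1, 0, -4, -2, 1])
```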

Remark 3.2

Let S be a set and let \(W^{S}=(w_{i,j})\) be its walk matrix. We observe that \(W^{S}\) contains information about the subgraph G[S] induced on \(S\!:\) Evidently \(w_{i,1}=1\) if and only if \(v_{i}\in S.\) Furthermore, according to Proposition 3.4, we have that \(w_{i,2}\) is the number of neighbors of \(v_{i}\) in S. Therefore, \((w_{i,2})_{w_{i,1}=1}\) is the degree sequence of G[S]. In particular, the second column of \(W^{V}\) is the degree sequence of G.

Remark 3.3

With regard to the spectral decomposition (7) of a set S,  say \(x^{S}=\mathrm{e}_{1}+\dots +\mathrm{e}_{r}\), one may ask if the \(\mathrm{e}_{i}\) by themselves already determine the \(\mu _{i}.\) However, in general this is not the case, as the following shows. Consider graphs G and \(G^{*}\) with vertex sets V and \(V^{*}\) which have the same splitting field \(\mathbb {K}\) and the same number s of distinct eigenvalues \(\mu _{1},\dots ,\mu _{s}\) and \(\mu ^{*}_{1},\dots ,\mu ^{*}_{s},\) respectively. Assume furthermore that the vertex sets can be reordered so that \({\mathrm{Eig}}(G,\mu _{i})= {\mathrm{Eig}}(G^{*},\mu ^{*}_{i})\) for all \(1\le i\le s.\) In this case, we call G and \(G^{*}\) eigenspace equivalent. Note that G is eigenspace equivalent to \(G^{*}\) if and only if their adjacency matrices A and \(A^{*}\) commute up to renumbering vertices. It is easy to see that every regular graph G is eigenspace equivalent to its complement \(\overline{G}.\) In this case, \(x^{S}=\mathrm{e}_{1}+\dots + \mathrm{e}_{r}\) is expressed by eigenvectors whose eigenvalues may be those of G or of \(\overline{G}.\) We have examples of non-isomorphic eigenspace equivalent graphs which are not of this kind; see Appendix 8.1. It is an open problem to determine graphs up to eigenspace equivalence.

Remark 3.4

A similar question occurs for the spectral decomposition \(x^{S}=\mathrm{e}_{1}+\dots + \mathrm{e}_{r}\) when we ask if the \(\mu _{i}\) by themselves already determine the \(\mathrm{e}_{i},\) up to rearranging vertices. Again, in general this is not the case, as the following shows. Consider two graphs G and \(G^{*}\) on the same vertex set \(V=V^{*}\) which have the same irreducible characteristic polynomial. (In particular, G and \(G^{*}\) are cospectral.) Let \(S=V,\) \(\mathrm{e}=\mathrm{x}^{S}\) and consider \(\mathrm{e}=\mathrm{e}_{1}+\dots +\mathrm{e}_{n}=\mathrm{e}^{*}_{1}+\dots + \mathrm{e}^{*}_{n},\) see Corollary 3.2. Then it is easy to show that G is isomorphic to \(G^{*}\) if and only if \(\mathrm{e}_{1}=\mathrm{e}^{*}_{1},\) up to a permutation of the entries of the vectors. There are no examples if the order of the graphs is \(n<8.\) For \(n=8\) an example of least order can be found in Appendix 8.6.

4 From the walk matrix for a set to its spectral decomposition

Let S be a set of vertices of the graph G with adjacency matrix A. In the current section we show that the walk matrix for S determines its spectral decomposition. We start with several general properties of walk matrices.
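As a concrete illustration first, the definition (2) of the walk matrix translates into a short computation: generate the columns \(\mathrm{e}, A\mathrm{e},\dots ,A^{n-1}\mathrm{e}\) by repeated multiplication. A minimal sketch in Python with NumPy; the small example graph and the set \(S=\{v_{3}\}\) are chosen here purely for illustration:

```python
import numpy as np

def walk_matrix(A, S):
    """Walk matrix W^S = [e, Ae, A^2 e, ..., A^{n-1} e] for the vertex set S."""
    n = A.shape[0]
    e = np.zeros(n)
    e[list(S)] = 1               # characteristic vector of S (0-based indices)
    cols, v = [], e
    for _ in range(n):
        cols.append(v)
        v = A @ v                # next column counts walks one step longer
    return np.column_stack(cols)

# Example graph on 4 vertices with edge set {12, 23, 24, 34} (labels 1-based,
# indices 0-based); chosen for illustration only
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])
W = walk_matrix(A, {2})          # S = {v_3}, i.e. index 2
print(W)
```

Each column is obtained from the previous one by a single matrix-vector product, so no matrix powers need to be formed explicitly.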

Proposition 4.1

  1. (i)

    Let \(S\subseteq V\) and let \(0\le i<j.\) Then \(AW^{S}_{[i,j]}=W^{S}_{[i+1,j+1]}.\)

  2. (ii)

    Let S and T be disjoint subsets of V. Then \(W^{S} + W^{T}=W^{S\cup T}.\)

  3. (iii)

    Let \(S\subseteq V\) and let \(0\le i<j.\) Then

    $$\begin{aligned} (W^{S}_{[i,j]})^\mathtt{T}\cdot W^{S}_{[i,j]}=\left( \begin{array}{ccccc}n_{2i} &{} n_{2i+1} &{} \cdots &{} n_{i+j} \\ n_{2i+1} &{} n_{2i+2} &{} \cdots &{} n_{i+j+1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ n_{i+j} &{} n_{i+j+1} &{} \cdots &{} n_{2j}\end{array}\right) \end{aligned}$$

    where \(n_{k}\) is the number of all k-walks in G with both ends in S. (Note, \((W^{S}_{[i,j]})^\mathtt{T}\) is the transpose of \(W^{S}_{[i,j]}.\))

Proof

The statement (i) follows from the definition of \(W^{S}_{[i+1,j+1]}.\) For the remainder, let \({\mathrm{s}}, {\mathrm{t}}, {\mathrm{u}}\) be the characteristic vectors of S,  T and \(U:=S\cup T.\) We denote the corresponding walk matrices by \(W^{S}, W^{T}, W^{U}.\) The statement (ii) is immediate since \({\mathrm{s}}+{\mathrm{t}}={\mathrm{u}}.\) To prove (iii) let \(S=\{a, b,\dots \}\) and let \({\mathrm{a}}\), \({\mathrm{b}},\dots \) be the corresponding characteristic vectors. Then \(W^{S}_{[i,j]}= W^{\mathrm{a}}_{[i,j]} + W^{\mathrm{b}}_{[i,j]} + \dots \) by (ii). So we expand \((W^{S}_{[i,j]})^\mathtt{T}\cdot W^{S}_{[i,j]}\) as a sum of terms of the form \(X=(W^{\mathrm{a}}_{[i,j]})^\mathtt{T}\cdot W^{\mathrm{c}}_{[i,j]}\) with \(a, c\in S.\) A row of \((W^{\mathrm{a}}_{[i,j]})^\mathtt{T}\) is of the form \({\mathrm{a}}^\mathtt{T}A^{k}\) and a column of \(W^\mathrm{c}_{[i,j]}\) is of the form \(A^{\ell }{\mathrm{c}}.\) Therefore the corresponding entry of X is \({\mathrm{a}}^\mathtt{T}A^{k}\cdot A^{\ell }{\mathrm{c}}={\mathrm{a}}^\mathtt{T}A^{k+\ell }{\mathrm{c}},\) the number of \((k+\ell )\)-walks from a to c. Summing over all pairs \(a, c\in S\) gives \(n_{k+\ell },\) as claimed. \(\square \)
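The Gram identity of part (iii) is easy to test numerically. A minimal sketch (Python/NumPy; the example graph and the set S are chosen for illustration) checks that the \((k,\ell )\) entry of \((W^{S})^\mathtt{T}W^{S}\) equals the walk number \(n_{k+\ell }=\mathrm{e}^\mathtt{T}A^{k+\ell }\mathrm{e}\):

```python
import numpy as np

# Small example graph (chosen for illustration) and S = {v_1, v_2}
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])
n = A.shape[0]
e = np.array([1.0, 1.0, 0.0, 0.0])          # characteristic vector of S

W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])
gram = W.T @ W

# n_k = e^T A^k e is the number of k-walks with both ends in S
walk_numbers = [e @ np.linalg.matrix_power(A, k) @ e for k in range(2 * n - 1)]
check = all(np.isclose(gram[k, l], walk_numbers[k + l])
            for k in range(n) for l in range(n))
print(check)
```

The identity holds because A is symmetric: \((A^{k}\mathrm{e})^\mathtt{T}(A^{\ell }\mathrm{e})=\mathrm{e}^\mathtt{T}A^{k+\ell }\mathrm{e}.\)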

In the following, if A and B are graph quantities, we say that ‘A determines B’ if there is an algorithm with input A and output B which is independent of any other property of the graph. For instance, Theorem 3.6 says that for any vertex set S the matrices \(E^{S}\) and \(M^{S}\) determine the walk matrix \(W^{S}.\) The next result proves the converse, and therefore verifies the second part of Theorem 1.1:

Theorem 4.2

Let S be a set of vertices of G with walk matrix \(W^{S}\) of rank r.

  1. (i)

    If \(r<n\), then \(W^{S}_{[0,r]}\) determines \(W^{S},\) \(M^{S}\) and \(E^{S}.\)

  2. (ii)

    If \(r=n\), then \(W^{S}_{[0,r-1]}(=W^{S})\) determines \(M^{S}\) and \(E^{S}.\)

Proof

We drop superscripts, writing \(W^{S}=W\) etc., where possible. For (i) we notice that the last column of \(W_{[0,r]}\) is a linear combination of the first r columns of \(W_{[0,r]},\) by Corollary 3.7. Hence there are rational coefficients \(f_{0},\dots ,f_{r-1}\) so that

$$\begin{aligned} A^{r}\mathrm{e}=f_{0}\mathrm{e}+f_{1}A\mathrm{e}+\dots +f_{r-1}A^{r-1}\mathrm{e}. \end{aligned}$$

Hence \(f(x)= -f_{0}-f_{1}x- \dots -f_{r-1}x^{r-1}+x^{r}\) is a monic polynomial with \(\big (f(A)\big )(\mathrm{e})=0.\) It follows from Lemma 3.1 that \(f(x)={\mathrm{main}}^{S}_{G}(x).\) Hence we have determined all main eigenvalues for S and hence also \(M_{[i,j]}\) for all \(0\le i\le j.\) We have \(W_{[0,r-1]}=E\cdot M_{[0,r-1]}\) by Theorem 3.6. Since \(M_{[0,r-1]}\) has an inverse \(M^{*}\) by Lemma 3.5, it follows that \(W_{[0,r-1]}\cdot M^{*}=E.\) Therefore, using Theorem 3.6 once more we have determined \(W_{[i,j]}\) for all \(0\le i\le j.\)

For (ii) we have \({\mathrm{char}}_{G}(x)={\mathrm{main}}_{G}^{S}(x)\) by Lemma 3.1 and since A satisfies its characteristic equation, we obtain

$$\begin{aligned} 0= & {} {\mathrm{char}}_{G}(A)=c_{0}{\mathrm{I}}+c_{1}A+\cdots +c_{n-1}A^{n-1}+A^{n}\nonumber \\ {}= & {} (A-\mu _{1}\mathrm{I\,})(A-\mu _{2}\mathrm{I\,})\cdots (A-\mu _{n}\mathrm{I\,}) . \end{aligned}$$
(10)

Here \(c_{n-1}=\mu _{1}+\mu _{2}+\cdots +\mu _{n}={\mathrm{trace}}(A)=0\) and so

$$\begin{aligned} -A^{n}= & {} c_{0}{\mathrm{I}}+c_{1}A+\cdots +c_{n-2}A^{n-2}+c_{n-1}A^{n-1}\nonumber \\ {}= & {} c_{0}{\mathrm{I}}+c_{1}A+\cdots +c_{n-2}A^{n-2} . \end{aligned}$$
(11)

It follows that

$$\begin{aligned} -A^{n}\mathrm{e}= & {} c_{0}\mathrm{e}+c_{1}A\mathrm{e}+\cdots + c_{n-2}A^{n-2}\mathrm{e}\nonumber \\ {}= & {} W_{[0,n-2]}\cdot c^\mathtt{T} \end{aligned}$$
(12)

where \(c:=(c_{0},c_{1},\dots , c_{n-2}).\) Since \(W=W_{[0,n-1]}\) is given, we can compute the walk numbers

$$\begin{aligned} n_{n}, n_{n+1},\dots , n_{2n-2} \end{aligned}$$

in Proposition 4.1(iii). (These are \((n-1)\) entries from row 2 to row n in the last column of \(W_{[0,n-1]}^\mathtt{T}W_{[0,n-1]}.)\) Therefore, if \(w:=(n_{n}, n_{n+1},\dots , n_{2n-2}),\) then we have

$$\begin{aligned} W_{[0,n-2]}^\mathtt{T}\cdot A^{n}\mathrm{e}=w^\mathtt{T} \end{aligned}$$
(13)

by Proposition 4.1(iii). Taking (12) and (13) together we have

$$\begin{aligned} (W_{[0,n-2]}^\mathtt{T}\cdot W_{[0,n-2]}) c^\mathtt{T}=-w^\mathtt{T} . \end{aligned}$$
(14)

Since \({\mathrm{rank}}(W_{[0,n-1]})=n,\) the matrix \(W_{[0,n-2]}\) and so also

$$\begin{aligned} W_{[0,n-2]}^\mathtt{T}\cdot W_{[0,n-2]} \end{aligned}$$

have rank \(n-1.\) It follows that \(W_{[0,n-2]}^\mathtt{T}\cdot W_{[0,n-2]}\) is invertible and so

$$\begin{aligned} c^\mathtt{T}=-(W_{[0,n-2]}^\mathtt{T}\cdot W_{[0,n-2]})^{-1}w^\mathtt{T}. \end{aligned}$$

This means that we have determined c from \(W=W_{[0,n-1]}.\) But these are the coefficients of the characteristic polynomial and so all eigenvalues are determined. Hence M is determined. Finally apply Theorem 3.6 to find E. \(\square \)
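The computation in part (ii) is entirely effective. A minimal sketch (Python/NumPy; the graph and the set S are chosen for illustration so that W has rank n) solves the system (14) for the coefficients \(c_{0},\dots ,c_{n-2}\) of the characteristic polynomial, using only walk numbers read off from \(W^\mathtt{T}W\):

```python
import numpy as np

# Example graph (chosen for illustration); S = {v_3} gives a rank-4 walk matrix
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])
n = A.shape[0]
e = np.array([0.0, 0.0, 1.0, 0.0])
W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])
assert np.linalg.matrix_rank(W) == n

# Gram matrix: entry (i, j) is the walk number n_{i+j} (Proposition 4.1(iii))
gram = W.T @ W
w = gram[1:, n - 1]                       # (n_n, ..., n_{2n-2})
W0 = W[:, : n - 1]                        # W_{[0, n-2]}
c = np.linalg.solve(W0.T @ W0, -w)        # equation (14)
print(c)                                  # (c_0, ..., c_{n-2})
```

For this example the output agrees with the characteristic polynomial \(x^{4}-4x^{2}-2x+1\) of A, i.e. \(c=(1,-2,-4)\).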

Remark 4.1

If S is a set of vertices, then we have seen that the spectral decomposition of S,  with the matrices \(E^{S}\) and \(M^{S},\) determines its walk matrix \(W^{S},\) and vice versa. We have examples of graphs G,  \(G^{*}\) with \(V=S=V^{*}=S^{*}\) and matrices \(E\ne E^{*}\) but \(M=M^{*}.\) For instance, the two graphs G and \(G^{*}\) of order 8 and labelled No. 79 and No. 80 in Cvetković [6] are cospectral. They have the same characteristic and main polynomials for \(S=V,\)

$$\begin{aligned} \begin{aligned}&{\mathrm{char}}_{G}(x)={\mathrm{char}}_{G^{*}}(x)=(x^3-x^2-5 x+1)(x-1)(x+1)^2,\\&{\mathrm{main}}^{V}_{G}(x)={\mathrm{main}}^{V}_{G^{*}}(x)=x^3-x^2-5x+1. \end{aligned} \end{aligned}$$

Hence, they have the same main eigenvalues. But computation shows that their main eigenvectors are different. Another pair of this kind are the graphs labelled No. 92 and No. 96 in [6].

Remark 4.2

We also have examples of graphs G,  \(G^{*}\) with \(V=S=V^{*}=S^{*}\) and matrices \(E= E^{*}\) but \(M\ne M^{*}.\) Trivial examples occur for pairs of regular graphs of the same order but with different valencies. For non-regular graphs with the same main eigenvectors but different main eigenvalues, see Appendix 8.2.

5 From the walk matrix to the adjacency matrix

As before, G denotes a graph of order n on the vertex set V with adjacency matrix A. Further, S denotes a non-empty set of vertices with walk matrix \(W=W^{S}.\) Here we investigate to what degree W determines the adjacency matrix of G. The following is a prototype of this problem:

Theorem 5.1

Suppose that W has rank \(r=n.\) Then W determines A.

Proof

Here \(W=W_{[0,n-1]}\) determines E and \(M=M_{[0,n-1]}\) by Theorem 4.2. From this we find \(M_{[1,n]}\) and hence \(W_{[1,n]}=EM_{[1,n]}\) by Theorem 3.6. By Proposition 4.1(i) we have \(AW_{[0,n-1]}=W_{[1,n]}.\) As W is invertible, we obtain \(A= EM_{[1,n]}W^{-1}.\) \(\square \)
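The proof is constructive. Since \(W_{[1,n]}=EM_{[1,n]},\) the reconstruction amounts to forming the shifted matrix \(W_{[1,n]}\) (its last column \(A^{n}\mathrm{e}\) follows from the coefficients determined via (14), cf. equation (12)) and computing \(A=W_{[1,n]}W^{-1}.\) A minimal sketch (Python/NumPy), with an example graph chosen for illustration:

```python
import numpy as np

def adjacency_from_walk_matrix(W):
    """Reconstruct A from an invertible walk matrix W = W_{[0, n-1]}."""
    n = W.shape[0]
    gram = W.T @ W                       # gram[i, j] = n_{i+j}
    w = gram[1:, n - 1]
    W0 = W[:, : n - 1]
    c = np.linalg.solve(W0.T @ W0, -w)   # char. coefficients c_0, ..., c_{n-2}
    last = -(W0 @ c)                     # A^n e, cf. equation (12)
    W_shift = np.column_stack([W[:, 1:], last])   # W_{[1, n]}
    return W_shift @ np.linalg.inv(W)    # A W_{[0,n-1]} = W_{[1,n]}

# Example graph (chosen for illustration), S = {v_3}
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])
e = np.array([0.0, 0.0, 1.0, 0.0])
W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(4)])
A_rec = adjacency_from_walk_matrix(W)
print(np.allclose(A_rec, A))
```

Note that the adjacency matrix A appears here only to generate the test input W; the reconstruction itself uses W alone.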

One important consequence occurs for graphs with irreducible characteristic polynomial. Here Corollary 3.8 and Theorem 5.1 provide the following:

Theorem 5.2

Suppose that the characteristic polynomial of G is irreducible. Then W determines A.

Remark 5.1

Some results about the irreducibility of the characteristic polynomial of a graph are available in [19]. For probabilistic results on the rank of the standard walk matrix, see Sect. 7. From the results there it becomes clear that Theorem 5.1 covers almost all graphs.

Next we come to the case when W has rank \(r<n.\) This happens if and only if there are eigenvectors of A which are not scalar multiples of main eigenvectors. For the remainder of this section, we assume that \(r={\mathrm{rank}}(W)<n.\) Let \(\mu _{1},\dots , \mu _{r}\) and \(\mathrm{e}_{1},\dots , \mathrm{e}_{r}\) be the main eigenvalues and main eigenvectors that appear in the decomposition

$$\begin{aligned} {\mathrm{SD}}(S)\!: \mathrm{e}= & {} \mathrm{e}_{1}+\mathrm{e}_{2}+\dots +\mathrm{e}_{r} . \end{aligned}$$
(15)

Let \(\lambda _{r+1},\dots , \lambda _{n}\) be the remaining non-main eigenvalues, with eigenvectors

$$\begin{aligned} \mathrm{f}_{r+1},\dots , \mathrm{f}_{n}\in \mathbb {K}^{n}. \end{aligned}$$

To avoid any confusion: it may happen that \(\lambda _{j}=\mu _{i}\) for some i,  j. Indeed, this will be the case precisely when the eigenspace for \(\mu _{i}\) has dimension \(>1.\) In this case we take \(\mathrm{f}_{j}\) perpendicular to \(\mathrm{e}_{i}.\) It follows that the \(\mathrm{f}_{\ell }\) for \(r+1\le \ell \le n\) are orthogonal to the columns of W. In addition, we select the \(\mathrm{f}_{\ell }\) to be an orthonormal set. Hence,

$$\begin{aligned} \mathrm{f}_{r+1},\dots , \mathrm{f}_{n} \, \text {is an orthonormal basis of } {\mathrm{ker}}(W^\mathtt{T}). \end{aligned}$$
(16)

Next consider the matrix

$$\begin{aligned} \widehat{W}:=\big [W_{[0,r-1]} \big | \mathrm{f}_{r+1},\mathrm{f}_{r+2},\dots ,\mathrm{f}_{n}\big ] . \end{aligned}$$
(17)

Since \(W_{[0,r-1]}\) has rank r by Corollary 3.7, we conclude from (16) that \(\widehat{W}\) is invertible; the inverse is given as follows. Since \(W_{[0,r-1]}\) has rank r,  it follows that \(W_{[0,r-1]}^\mathtt{T}W_{[0,r-1]}\) is invertible and so we put

$$\begin{aligned} W^{\dagger }:=(W_{[0,r-1]}^\mathtt{T} W_{[0,r-1]})^{-1}\cdot W_{[0,r-1]}^\mathtt{T} , \end{aligned}$$

a matrix of size \(r\times n.\) Next let

$$\begin{aligned} \overline{W}:=\left( \begin{array}{c} W^\dagger \\ \mathrm{f}^\mathtt{T}_{r+1} \\ \vdots \\ \mathrm{f}^\mathtt{T}_{n}\end{array}\right) \end{aligned}$$
(18)

and verify that \(\overline{W}\cdot \widehat{W}={\mathrm{I}}_{n}\) by using (16). When we compute \(\widehat{W}\cdot \overline{W}=\mathrm{I}_{n}\), we obtain the equation

$$\begin{aligned} \sum _{j=r+1}^{n} \mathrm{f}_{j}\cdot \mathrm{f}^\mathtt{T}_{j} = {\mathrm{I}}_{n} - W_{[0,r-1]}\cdot W^{\dagger }. \end{aligned}$$
(19)

The matrix \(\mathrm{f}_{j}\cdot \mathrm{f}_{j}^\mathtt{T}\) is the orthogonal projection of \(\mathbb {K}^{n}\) onto the line spanned by \(\mathrm{f}_{j}\) and so the sum on the left represents the eigenspace decomposition of \(\mathrm{ker}(W^\mathtt{T}).\) From (17) we have

$$\begin{aligned} A\cdot \widehat{W} =\big [W_{[1,r]} \big | \lambda _{r+1}\mathrm{f}_{r+1},\lambda _{r+2}\mathrm{f}_{r+2},\dots ,\lambda _{n}\mathrm{f}_{n}\big ], \end{aligned}$$

see also Proposition 4.1(i), and therefore

$$\begin{aligned} A= & {} \big [W_{[1,r]} \big | \lambda _{r+1}\mathrm{f}_{r+1}, \lambda _{r+2}\mathrm{f}_{r+2},\dots , \lambda _{n}\mathrm{f}_{n}\big ] \cdot \widehat{W}^{-1}\nonumber \\ A= & {} \big [W_{[1,r]} \big | \lambda _{r+1}\mathrm{f}_{r+1}, \lambda _{r+2}\mathrm{f}_{r+2},\dots ,\lambda _{n}\mathrm{f}_{n}\big ] \cdot \overline{W}\nonumber \\ A= & {} W_{[1,r]}\cdot W^{\dagger } + \sum _{j=r+1}^{n}\lambda _{j}(\mathrm{f}_{j}\cdot \mathrm{f}_{j}^\mathtt{T}) . \end{aligned}$$
(20)

Alternatively, this equation can be derived from (19) by multiplying both sides by A and using Proposition 4.1(i). Collecting these observations, we have proved the following:

Theorem 5.3

Suppose that W has rank \(r< n.\) Denote the non-main eigenvalues by \(\lambda _{r+1}, \dots , \lambda _{n}.\) Then

$$\begin{aligned} A= W_{[1,r]}\cdot (W_{[0,r-1]}^\mathtt{T} W_{[0,r-1]})^{-1}\cdot W_{[0,r-1]}^\mathtt{T} + \sum _{j=r+1}^{n}\lambda _{j}(\mathrm{f}_{j}\cdot \mathrm{f}_{j}^\mathtt{T}) \end{aligned}$$

where \(\mathrm{f}_{r+1},\dots ,\mathrm{f}_{n}\) is an orthonormal basis of \(\mathrm{ker}(W^\mathtt{T})\) consisting of the non-main eigenvectors of G with \(A\mathrm{f}_{j}=\lambda _{j}\mathrm{f}_{j}\) for \(r<j\le n.\)
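Theorem 5.3 can be checked numerically on a rank-deficient example. For the 4-cycle with \(S=V\) the all-ones vector is an eigenvector, so \(r=1.\) The sketch below (Python/NumPy) computes the first summand \(W_{[1,r]}W^{\dagger }\) from the walk matrix, takes for the \(\mathrm{f}_{j}\) the eigenvectors of A orthogonal to \(\mathrm{e},\) and confirms that the two summands add up to A. (The \(\lambda _{j}\) and \(\mathrm{f}_{j}\) are taken here from an eigendecomposition of A, since the theorem presupposes that the non-main eigenvalues are known.)

```python
import numpy as np

# 4-cycle, S = V: the standard walk matrix has rank r = 1
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
n = A.shape[0]
e = np.ones(n)
W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])
r = np.linalg.matrix_rank(W)             # r = 1 here

W0 = W[:, :r]                            # W_{[0, r-1]}
W1 = W[:, 1 : r + 1]                     # W_{[1, r]}
W_dag = np.linalg.solve(W0.T @ W0, W0.T) # the matrix W†
A_W = W1 @ W_dag                         # first summand of the theorem

# Non-main part: eigenvectors of A orthogonal to e, with their eigenvalues
vals, vecs = np.linalg.eigh(A)
S_nm = sum(vals[j] * np.outer(vecs[:, j], vecs[:, j])
           for j in range(n) if abs(e @ vecs[:, j]) < 1e-9)
ok = np.allclose(A_W + S_nm, A)
print(ok)
```

Here the first summand equals \(\tfrac{1}{2}J\) (J the all-ones matrix), and the non-main part supplies the rest of A.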

Definition

Let S be a set of vertices of the graph G and denote its walk matrix by W. Then

$$\begin{aligned} A_{W} := W_{[1,r]}\cdot (W_{[0,r-1]}^\mathtt{T} W_{[0,r-1]})^{-1}\cdot W_{[0,r-1]}^\mathtt{T} \end{aligned}$$

is the \(W\!\)-restriction of A,  or S-restriction of A.

We note some properties of this matrix.

Proposition 5.4

Suppose that W has rank \(r<n.\) Then \(A_{W}\) is a symmetric matrix with eigenbasis \(\mathrm{e}_{1},\dots , \mathrm{e}_{r}, \mathrm{f}_{r+1},\dots , \mathrm{f}_{n}\) and eigenvalues \(\mu _{1},\dots ,\mu _{r}, 0,\dots , 0.\) In particular,

  1. (i)

    \({\mathrm{rank}} A_{W}=r\) if 0 is not a main eigenvalue for S and \({\mathrm{rank}} A_{W}=r-1\) otherwise;

  2. (ii)

    \(A\cdot A_{W}=A_{W}\cdot A;\)

  3. (iii)

    if X denotes the \(\mathbb {K}\)-vector space spanned by the columns of W then \(\{A_{W}\cdot x | x\in \mathbb {K}^{n}\}\subseteq X\) with equality if and only if 0 is not a main eigenvalue for S.

Proof

Since A and \(\mathrm{f}_{j}\cdot \mathrm{f}^\mathtt{T}_{j}\) are symmetric, also \(A_{W}\) is symmetric. It is straightforward to verify from (20) that \(\mathrm{e}_{1},\dots , \mathrm{e}_{r}, \mathrm{f}_{r+1},\dots , \mathrm{f}_{n}\) is an eigenbasis for the given eigenvalues. To prove (i) notice that the main eigenvalues are always distinct. Verify (ii) on the given basis. For (iii), notice that \(A_{W}\cdot x\) is a linear combination of the columns of \(W_{[1,r]}.\) The remainder follows from (i).\(\square \)
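The statements of Proposition 5.4 can be observed on a small rank-deficient example. For the 4-cycle with \(S=V\) we have \(r=1\) and the single main eigenvalue 2, so 0 is not a main eigenvalue. A minimal sketch (Python/NumPy) checks symmetry, \({\mathrm{rank}}\,A_{W}=r\) and commutation with A:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])              # 4-cycle
n = A.shape[0]
e = np.ones(n)
W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])
r = np.linalg.matrix_rank(W)

W0, W1 = W[:, :r], W[:, 1 : r + 1]
A_W = W1 @ np.linalg.solve(W0.T @ W0, W0.T)   # W-restriction of A

sym = np.allclose(A_W, A_W.T)                  # A_W is symmetric
rk = np.linalg.matrix_rank(A_W)                # = r, since 0 is not main
comm = np.allclose(A @ A_W, A_W @ A)           # property (ii)
print(sym, rk, comm)
```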

We come to applications of Theorem 5.3 when \(W^{S}\) has rank \(n-1\) or \(n-2.\)

Theorem 5.5

Suppose that W has rank \(n-1.\) Then W determines A.

Proof

Let \(\mathrm{f}_{n}\) be a non-main eigenvector of G with eigenvalue \(\lambda _{n}.\) Then \(\mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T}\) is determined by W according to (19). By Theorem 4.2 the main eigenvalues \(\mu _{1},\dots ,\mu _{n-1}\) are determined by W and so \(\lambda _{n}=-\mu _{1}-\dots -\mu _{n-1}\) is known. Now A can be determined by Theorem 5.3. \(\square \)

Theorem 5.6

Suppose that \(W=W^{S}\) has rank \(n-2.\) Assume that (a) \(S=V\) or (b) that the number of edges of G is given. Then W determines all eigenvalues of G. Let \(\lambda _{n-1}\) and \(\lambda _{n}\) be the two non-main eigenvalues of G.

  1. (i)

    If \(\lambda _{n-1}=\lambda _{n}\), then W determines A.

  2. (ii)

    If \(\lambda _{n-1}\ne \lambda _{n}\), then W determines the adjacency matrix of at most two distinct graphs G and \(G^{*}\) with walk matrix W.

Proof

By Theorem 4.2, W determines M and hence the main polynomial, say

$$\begin{aligned} {\mathrm{main}}(x)=x^{n-2}+a_1x^{n-3}+\cdots +a_{n-3}x+a_{n-2}. \end{aligned}$$

Let \(\lambda _{n-1}\) and \(\lambda _{n}\) be the two non-main eigenvalues and put

$$\begin{aligned} (x-\lambda _{n-1})(x-\lambda _n)=:x^2+b_1x+b_2. \end{aligned}$$

Therefore, when

$$\begin{aligned} {\mathrm{char}}(x)=x^n+c_1x^{n-1}+c_2x^{n-2}+\cdots +c_{n-1}x+c_n , \end{aligned}$$

then \({\mathrm{char}}(x)=(x-\lambda _{n-1})(x-\lambda _n)\cdot {\mathrm{main}}(x)=(x^2+b_1x+b_2)\cdot {\mathrm{main}}(x)\) gives

$$\begin{aligned} c_1=b_1+a_1=0 \quad {\text {and}} \quad c_2=b_2+a_1b_1+a_2=-m, \end{aligned}$$

where m is the number of edges of G. (We are using the well-known fact that the coefficient of \(x^{n-2}\) in the characteristic polynomial always equals \(-m,\) the number of edges of the graph, see [2, 13].) Under the assumption (a), the column \(W_{[1,1]}\) determines the vertex degrees and from these we determine m. Under assumption (b), this information is given anyhow. Therefore, \(b_1=-a_1\) and \(b_2=a_1^2-a_2-m\) and so the two non-main eigenvalues

$$\begin{aligned} \lambda _{n-1,n}=\frac{a_1\pm \sqrt{4(a_2+m)-3a_1^2}}{2}. \end{aligned}$$

of G are determined by W and m.

Next let \(\mathrm{f}_{n-1}\) and \(\mathrm{f}_{n}\) be non-main eigenvectors of G for the eigenvalues \(\lambda _{n-1}\) and \(\lambda _{n}.\) Then \(\mathrm{f}_{n-1}\cdot \mathrm{f}_{n-1}^\mathtt{T} + \mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T}\) is determined by W according to (19). If \(\lambda _{n-1}=\lambda _{n}\), we can determine A by Theorem 5.3.

It remains to consider the case \(\lambda _{n-1}\ne \lambda _{n}.\) Here we write

$$\begin{aligned} \lambda _{n-1}\mathrm{f}_{n-1}\cdot \mathrm{f}_{n-1}^\mathtt{T} + \lambda _{n}\mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T}= & {} \lambda _{n-1}(\mathrm{f}_{n-1}\cdot \mathrm{f}_{n-1}^\mathtt{T} + \mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T})\nonumber \\&+(\lambda _{n}-\lambda _{n-1})\mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T}\nonumber \\= & {} \lambda _{n-1}({\mathrm{I}}_{n}- W_{[0,n-3]}\cdot W^{\dagger })\nonumber \\&+(\lambda _{n}-\lambda _{n-1})\mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T} , \end{aligned}$$
(21)

using (19). Furthermore, from (20) we obtain

$$\begin{aligned} A= & {} W_{[1,r]}\cdot W^{\dagger } + \sum _{j=n-1}^{n}\lambda _{j}(\mathrm{f}_{j}\cdot \mathrm{f}_{j}^\mathtt{T})\nonumber \\ {}= & {} W_{[1,r]}\cdot W^{\dagger }+\lambda _{n-1}({\mathrm{I}}_{n}- W_{[0,n-3]}\cdot W^{\dagger }) + (\lambda _{n}-\lambda _{n-1})\mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T} . \end{aligned}$$
(22)

In this equation \(W_{[1,r]}\cdot W^{\dagger }+\lambda _{n-1}({\mathrm{I}}_{n}- W_{[0,n-3]}\cdot W^{\dagger })\) is known and so we let \(w_{1},\dots ,w_{n}\) be the entries on the diagonal of that matrix. Similarly, denote the diagonal entries of \(\mathrm{f}_{n}\cdot \mathrm{f}_{n}^\mathtt{T}\) by \(\mathrm{f}_{n,1}^{2},\dots ,\mathrm{f}_{n,n}^{2}\) where \(\mathrm{f}_{n}=(\mathrm{f}_{n,1},\dots ,\mathrm{f}_{n,n})^\mathtt{T}.\) Since the diagonal entries of A are zero, we have

$$\begin{aligned} w_{\ell }+(\lambda _{n}-\lambda _{n-1})\,\mathrm{f}_{n,\ell }^{2}=0 \quad \text { for }\ell =1,\dots ,n. \end{aligned}$$
(23)

Recall that \(\mathrm{f}_{n}\) is a unit vector in the kernel of \(W^\mathtt{T}.\) This space has dimension 2 and so \(\mathrm{f}_{n}\) depends on one parameter t,  i.e., an angle of rotation. The equation (23) consists of a system of quadratic equations for t and therefore it has at most two solutions. Hence, there are at most two options for \(\mathrm{f}_{n},\) and hence by (22), there are at most two options for A. \(\square \)

Remark 5.2

The exception in (ii) does occur; an example is given in Appendix 8.3. There we obtain two different \(8\times 8\) adjacency matrices with the same walk matrix of rank 6. In this particular example, however, the corresponding graphs are isomorphic. We conjecture that the two graphs in (ii) of the theorem are indeed always isomorphic.

Remark 5.3

The next case to consider is that of graphs \(G\not \simeq G^{*}\) of order n with the same standard walk matrix W of rank \(n-3.\) By computation, we have shown that \(n=7\) is the least n for which a pair of this kind exists. One example is available in Appendix 8.4.

6 Walk equivalence and isomorphism

Here we consider pairs of graphs which have the same walk matrix for suitably chosen sets of vertices. The following is a characterization in terms of W-restrictions of the adjacency matrix.

Proposition 6.1

Let G and \(G^{*}\) be graphs on the vertex set V and let \(S, S^{*}\subseteq V.\) Denote by W and \(W^{*}\) the corresponding walk matrices, with restrictions \(A_{W}\) and \(A^{*}_{W^{*}}\), respectively. Then the following are equivalent:

  1. (i)

    \(W=W^{*},\)

  2. (ii)

G and \(G^{*}\) have the same main eigenvalues and main eigenvectors for S and \(S^{*},\) respectively, and

  3. (iii)

    \(S=S^{*}\) and \(A_{W}=A^{*}_{W^{*}}.\)

In particular, if W and \(W^{*}\) are the standard walk matrices for G and \(G^{*},\) respectively, then \(W=W^{*}\) if and only if \(A_{W}=A^{*}_{W^{*}}.\)

Proof

If (i) holds then the first column of \(W=W^{*}\) determines \(S=S^{*}\) and from (20) we have \(A_{W}=A^{*}_{W^{*}}.\) Hence (i) implies (iii). Next (ii) implies (i) by Theorem 3.6.

It remains to show that (iii) implies (ii). Let r and \(r^{*}\) be the ranks of W and \(W^{*}\), respectively. By Proposition 5.4, the condition \(A_{W}=A^{*}_{W^{*}}\) implies \(r^{*}=r\) or \(r^{*}=r+1,\) without loss of generality. First suppose that \(r^{*}=r.\) Then the proposition implies \(\mu _{i}^{*}=\mu _{i}\) and \(\mathrm{e}_{i}^{*}=c_{i}\mathrm{e}_{i}\) with certain coefficients \(c_{i}\in \mathbb {K},\) for \(1\le i\le r.\) Since \(S=S^{*},\) we have \(x^{S}=\sum _{i} \mathrm{e}_{i}=x^{S^{*}}=\sum _{i} \mathrm{e}^{*}_{i}=\sum _{i} c_{i}\mathrm{e}_{i}\) and hence \(\mathrm{e}_{i}^{*}=\mathrm{e}_{i}\) for \(1\le i\le r.\) Thus (ii) holds in this case. Next suppose that \(r^{*}=r+1.\) Then \(\mu _{i}^{*}=\mu _{i}\) and \(\mathrm{e}_{i}^{*}=c_{i}\mathrm{e}_{i}\) with certain coefficients \(c_{i}\in \mathbb {K},\) for \(1\le i\le r,\) and \(\mathrm{e}^{*}_{r^{*}}=c_{r^{*}}\mathrm{f}_{r+1},\) say. Since \(S=S^{*}\) we have \(x^{S}=\sum _{i=1}^{r} \mathrm{e}_{i}=x^{S^{*}}=\sum _{i=1}^{r} \mathrm{e}^{*}_{i}+\mathrm{e}^{*}_{r^{*}}=\sum _{i} c_{i}\mathrm{e}_{i}+\mathrm{e}^{*}_{r^{*}}.\) This implies \(\mathrm{e}^{*}_{r^{*}}=0,\) a contradiction. Hence, (iii) implies (ii). \(\square \)

It is essential to be able to reorder the vertices of a graph and change all associated matrices accordingly. Let \(V=\{v_{1},\dots ,v_{n}\}\) be the vertices of G and let \(\mathrm{Sym}(n)\) be the symmetric group on \(\{1,2,\dots ,n\}.\) For \(g\in {\mathrm{Sym}}(n)\) we write \(g\!: i\rightarrow i^{g}\) for \(i=1,\dots ,n.\) From this we obtain permutations of the elements and subsets of V by setting \(g\!: v_{i}\rightarrow v_{i}^{g}:=v_{i^{g}}\) for \(i=1,\dots ,n\) and \(g\!: T\rightarrow T^{g}:=\{v^{g} \mid v\in T\}\) for any \(T\subseteq V.\)

The same permutation gives rise to a new adjacency relation \(\sim _{g}\) and a new graph \(G^{*}=G^{g}\) by setting \(v^{g}\sim _{g} u^{g}\) if and only if \(v\sim u\) in G. In this way g defines an isomorphism \(g\!: G\rightarrow G^{*},\) denoted \(G\simeq G^{*}.\)

With \(g \in {\mathrm{Sym}}(n)\) we associate the \(n\times n\) permutation matrix \(P=P(g).\) It has the property that for all \(1\le i\le n\) we have \(v_{j}=v_{i}^{g}\) if and only if \(\mathrm{v}_{j}=P\cdot \mathrm{v}_{i}.\) (Here \(\mathrm{v}_{i}\) is the characteristic vector of \(v_{i},\) etc.) It follows that \(P\cdot \mathrm{x}^{T}=\mathrm{x}^{(T^{g})}\) for all \(T\subseteq V\) when \(\mathrm{x}^{T}\) denotes the characteristic vector of T. The proof of the next lemma is left to the reader.

Lemma 6.2

Let G and \(G^{*}\) be graphs on the vertex sets V and \(V^{*}\) with walk matrices W and \(W^{*},\) defined for certain sets \(S\subseteq V\) and \(S^{*}\subseteq V^{*},\) respectively. Suppose that \(g\in {\mathrm{Sym}}(n)\) is a permutation for which \(G^{g}=G^{*}\) and \(S^{g}=S^{*}.\) If \(P:=P(g)\), then

  1. (i)

    \(\mathrm{e}_{j}^{*}=P\mathrm{e}_{j}\) for all \(1\le j\le r^{*}=r,\) \(\mathrm{e}^{*}=P\mathrm{e}\) and \(W^{*}=P\cdot W,\)

  2. (ii)

    \(A^{*}=PAP^\mathtt{T},\) and

  3. (iii)

    \(A^{*}_{W^{*}}=PA_{W}P^\mathtt{T}.\)

Definition

Let G and \(G^{*}\) be graphs on the vertex sets V and \(V^{*},\) respectively, and let \(S\subseteq V\) and \(S^{*}\subseteq V^{*}.\) Denote the corresponding walk matrices by W and \(W^{*}.\) Then (GS) is walk equivalent to \((G^{*},S^{*}),\) denoted \((G,S)\sim (G^{*},S^{*}),\) if there is a permutation matrix P such that \(W^{*}=P\cdot W.\) If \((G,S)\sim (G^{*},S^{*})\) with \(S=V\), then G is walk equivalent to \(G^{*},\) denoted \(G\sim G^{*}.\)

Remark 6.1

In the definition, if we have \(W^{*}=P\cdot W,\) then \(S^{*}\) is determined by \(\mathrm{e}^{*}=P\cdot \mathrm{e},\) the first column of \(W^{*}.\) From this it follows easily that \(\sim \) indeed is an equivalence relation. To be precise, it is a relation for graphs with a distinguished vertex set. It turns into an equivalence relation on graphs per se only when \(S=V.\)

Remark 6.2

If G is isomorphic to \(G^{*}\) via the permutation g, then it follows from Lemma 6.2 that \((G,S)\sim (G^{*},S^{*})\) for any \(S\subseteq V\) when we set \(S^{*}=S^{g}.\) Conversely however, walk equivalence does not imply isomorphism. For instance, if G and \(G^{*}\) are regular graphs with the same valency then \(G\sim G^{*},\) and we may not conclude that G is isomorphic to \(G^{*}.\) Furthermore, if \(G\sim G^{*}\) then we cannot conclude much in general about the walk equivalence of pairs \((G,S), (G^{*},S^{*})\) with \(S\subset V\) and \(S^{*}\subset V^{*}.\) There are simple examples for \(G\sim G^{*}\) but \((G,S)\not \sim (G^{*},S^{*})\) for all \(|S|, |S^{*}|<n.\) (For instance, let G be the union of two 3-cycles and \(G^{*}\) a 6-cycle.)

Lexicographical Order: In order to decide whether (GS) is walk equivalent to \((G^{*},S^{*})\) it is essential to be able to transform the corresponding walk matrices into some standard format.

Let \(W=W^{S}\) be the walk matrix of G for S. Then there is a permutation \(g\in {\mathrm{Sym}}(n)\) of the rows of W,  with corresponding permutation matrix \(P=P(g),\) so that the rows of PW are in lexicographical order. The matrix

$$\begin{aligned} \mathrm{lex}(W):=PW \end{aligned}$$

then is the lex-form of W with reordering matrix P. Evidently, P is unique if and only if the rows of W are pairwise distinct. It is also clear that \((G,S)\sim (G^{*},S^{*})\) if and only if \(\mathrm{lex}(W^{S})=\mathrm{lex}(W^{S^{*}}).\) In order to keep track of the reordering, we append the ‘label vector’ \(\mathrm{L}:=(v_{1},\dots ,v_{n})^\mathtt{T}\) as last column to W,  obtaining

$$\begin{aligned} W^{\ddag }=\big [W \big | {\mathrm{L}}\big ]. \end{aligned}$$

The matrix \(PW^{\ddag }=\mathrm{lex}(W^{\ddag })\) then is the vertex lex-form of W.

Example 6.1

Let G be the graph in Fig. 1 on \(V=\{v_{1},v_{2},v_{3},v_{4}\},\) let \(S=\{v_{3}\}\) and \(W:=W^{S}.\) Then

$$\begin{aligned} W=\left( \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 1 &{} 4 \\ 1 &{} 0 &{} 2 &{} 2 \\ 0 &{} 1 &{} 1 &{} 3 \\ \end{array} \right) , W^{\ddag }=\left( \begin{array}{ccccc} 0 &{} 0 &{} 1 &{} 1 &{}v_{1}\\ 0 &{} 1 &{} 1 &{} 4 &{}v_{2}\\ 1 &{} 0 &{} 2 &{} 2 &{}v_{3} \\ 0 &{} 1 &{} 1 &{} 3 &{}v_{4}\\ \end{array} \right) \text { and } \mathrm{lex}~W^{\ddag }=\left( \begin{array}{ccccc} 1 &{} 0 &{} 2 &{} 2 &{}v_{3} \\ 0 &{} 1 &{} 1 &{} 4 &{}v_{2}\\ 0 &{} 1 &{} 1 &{} 3 &{}v_{4}\\ 0 &{} 0 &{} 1 &{} 1 &{}v_{1}\\ \end{array} \right) . \end{aligned}$$

It follows that the reordering permutation is \((v_{3},v_{1},v_{4})(v_{2}).\)
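The lex form is computed by a plain row sort that carries the vertex labels along. In Example 6.1 the rows are arranged in decreasing lexicographical order, and the sketch below (Python) follows that convention:

```python
# Walk matrix W and labels from Example 6.1
W = [(0, 0, 1, 1), (0, 1, 1, 4), (1, 0, 2, 2), (0, 1, 1, 3)]
labels = ["v1", "v2", "v3", "v4"]

# Sort rows in decreasing lexicographic order, carrying vertex labels along
order = sorted(range(len(W)), key=lambda i: W[i], reverse=True)
lex_rows = [W[i] for i in order]
lex_labels = [labels[i] for i in order]
print(lex_rows)
print(lex_labels)   # the reordering of the vertices
```

For this input the labels come out in the order \(v_{3}, v_{2}, v_{4}, v_{1},\) matching the vertex lex-form displayed above.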

The next theorem proves Theorem 1.3 in the introduction.

Theorem 6.3

Let G and \(G^{*}\) be graphs of order n and suppose that there is a subset S of vertices of G such that \(W^{S}\) has rank \(\ge n-1.\) Then G is isomorphic to \(G^{*}\) if and only if \((G,S)\sim (G^{*},S^{*})\) for some vertex set \(S^{*}\) of \(G^{*}.\) Furthermore, if G is isomorphic to \(G^{*}\) then the isomorphism is determined uniquely from the lex forms of \(W^{S}\) and \(W^{S^{*}},\) unless \(W^{S}\) has a repeated row, in which case there are two isomorphisms.

Proof

If G is isomorphic to \(G^{*}\) via \(g\in {\mathrm{Sym}}(n)\), the result follows from Lemma 6.2 and Remark 6.2, taking \(S^{*}=S^{g}.\) Conversely, let \(g\in {\mathrm{Sym}}(n)\) be such that \(P(g)W=W^{*}\) where \(W=W^{S}\) and \(W^{*}=W^{S^{*}}.\) Consider the graph \(H=G^{g}.\) By Lemma 6.2 its walk matrix is \(P(g)W=W^{*}.\) Hence, H and \(G^{*}\) have the same walk matrix. As the rank of \(W^{*}\) is \(\ge n-1,\) it follows from Theorems 5.1 and 5.5 that H and \(G^{*}\) have the same adjacency matrix. Hence, \(H=G^{*}\) and so G is isomorphic to \(G^{*}.\) Suppose that \(P(h)W=\mathrm{lex}(W)=P(h^{*})W^{*}\) for the corresponding reordering permutations h and \(h^{*}.\) It follows that \(P(h)=P(h^{*})P(g)\) and so \(g=(h^{*})^{-1}h.\) If the rows of W (and hence of \(W^{*})\) are pairwise distinct then h and \(h^{*}\) are unique. If W has repeated rows, then there is exactly one pair of repeated rows since \({\mathrm{rank}}(W)\ge n-1\) by assumption. In this case, \(g=(h^{*})^{-1}h\) is unique up to the transposition interchanging these rows. \(\square \)

The following is a special case of Theorem 6.3.

Theorem 6.4

Let G be a graph of order n and let \(S, S^{*}\) be sets of vertices of G. Suppose that \(W^{S}\) has rank \(\ge n-1.\) Then there is an automorphism g of G with \(S^{g}=S^{*}\) if and only if \(\mathrm{lex}(W^{S})=\mathrm{lex}(W^{S^{*}}).\)

Example 6.2

The graph in Fig. 1 has the following walk matrices for \(S=\{3\}\) and \(S=\{4\},\) respectively,

$$\begin{aligned} W^{\{3\}}=\left( \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 1 &{} 4 \\ 1 &{} 0 &{} 2 &{} 2 \\ 0 &{} 1 &{} 1 &{} 3 \\ \end{array} \right) \quad \text {and} \quad W^{\{4\}}=\left( \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 1 &{} 1 &{} 4 \\ 0 &{} 1 &{} 1 &{} 3 \\ 1 &{} 0 &{} 2 &{} 2 \\ \end{array} \right) . \end{aligned}$$

These matrices are lex-equivalent with two repeated rows and have rank \(\ge 3.\) Theorem 1.3 implies that there is an automorphism of G which interchanges 3 and 4.
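Example 6.2 can be reproduced computationally: the two walk matrices have the same multiset of rows, so their lex forms agree. The sketch below (Python/NumPy) computes \(W^{\{3\}}\) and \(W^{\{4\}}\) and compares their lex forms; the adjacency matrix used here is a hypothetical reconstruction, chosen only to be consistent with the walk matrices displayed above:

```python
import numpy as np

# A hypothetical 4-vertex graph consistent with the walk matrices above
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])
n = A.shape[0]

def walk_matrix(A, v):
    e = np.zeros(n)
    e[v] = 1                              # S = {v}, a single vertex
    return np.column_stack([np.linalg.matrix_power(A, k) @ e
                            for k in range(n)])

def lex_form(W):
    # rows in decreasing lexicographic order, as in Example 6.1
    return sorted(map(tuple, W.astype(int)), reverse=True)

W3 = walk_matrix(A, 2)   # S = {v3}
W4 = walk_matrix(A, 3)   # S = {v4}
eq = lex_form(W3) == lex_form(W4)
print(eq)
```

Equal lex forms signal, by Theorem 6.4, an automorphism carrying one set to the other.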

Remark 6.3

Suppose that G has order n and average degree d. Let S be a set of vertices. Then \(W^{S}\) can be computed from A by carrying out \(d\cdot n\cdot (n-1)\) additions. It follows from Theorem 1.3 that isomorphism testing for G is polynomial if \(W^{S}\) has rank \(\ge n-1.\) The complexity analysis of this problem is interesting but remains outside the scope of this paper.

Remark 6.4

Recently all connected graphs of order \(n=7\) and \(n=8\) for which the standard walk matrix \(W^{V}\) has rank \(n-1\) have been enumerated in [16]. For \(n=7\) also the Galois group of such graphs has been computed.

7 Probabilistic results

Let P(n) be a property of finite undirected simple graphs on n vertices. Then P(n) holds almost always, or almost all graphs have property P, if the probability for P(n) to hold tends to 1 as n tends to infinity. The following is due to O’Rourke and Touri [18] and is based on Tao and Vu [23]. Recall that the standard walk matrix of G with vertex set V is \(W^{V}\!:\)

Theorem 7.1

For almost all graphs, the standard walk matrix is invertible.

Recall that G and \(G^{*}\) are walk-equivalent if their standard walk matrices have the same lex form. From Theorems 5.1 and  6.4, we have immediately the next theorem; its second part proves Theorem 1.4.

Theorem 7.2

(i) For almost all graphs, the standard walk matrix determines the adjacency matrix of the graph.

(ii) For almost all graphs G, we have that G is isomorphic to the graph \(G^{*}\) if and only if G is walk-equivalent to \(G^{*}.\)

Remark 7.1

Following on from Remark 6.3, the isomorphism testing problem \(G\simeq G^{*}\) is polynomial for almost all G.

Let G be a graph with characteristic polynomial \({\mathrm{char}}_{G}(x).\) In Theorem 5.2 we have noted that if \({\mathrm{char}}_{G}(x)\) is irreducible then \(W^{S}\) is invertible for any vertex set S. Therefore, in this case the conclusion of Theorem 7.1 holds for walk matrices of general type. In the literature there are several papers in which this irreducibility problem is considered from a probabilistic point of view, see [3, 4, 8, 19, 26]. In fact, there is the

Conjecture 7.3

For almost all graphs, the characteristic polynomial is irreducible.

The characteristic polynomial of a graph is irreducible if and only if all eigenvalues are simple and its Galois group acts transitively on these eigenvalues. For recent papers on the Galois group of integer polynomials, see Dietmann [8], Bary-Soroker, Koukoulopoulos & Kozma [1] and Eberhard [9]. We have the following stronger

Conjecture 7.4

For almost all graphs G, the Galois group \(\mathrm{Gal}(G)\) contains the alternating group \({\mathrm{Alt}}(n),\) where n is the order of G.

This conjecture is supported by a theorem of Van der Waerden [25] which states that for almost all monic integer polynomials the Galois group is isomorphic to \({\mathrm{Sym}}(n),\) where n is the degree of the polynomial. There are examples of graphs with irreducible characteristic polynomial where \(\mathrm{Gal}(G)\not \supseteq {\mathrm{Alt}}(n).\) But these are difficult to find; in most practical computations the group turns out to be \({\mathrm{Sym}}(n).\)

Theorem 7.5

If Conjecture 7.3 holds, then the following is true for almost all graphs G and an arbitrary non-empty vertex set S of \(G\!:\) For any graph \(G^{*}\), there is an isomorphism \(G\rightarrow G^{*}\) if and only if \((G,S)\sim (G^{*},S^{*})\) for some vertex set \(S^{*}\) of \(G^{*}.\) Furthermore, if G is isomorphic to \(G^{*},\) then the isomorphism can be determined from the lex forms of \(W^{S}\) and \(W^{S^{*}}.\)

8 Appendix

We give details and examples of graphs and walk matrices with particular features mentioned in earlier sections of the paper.

Appendix 8.1

The following two graphs G and \(G^{*}\) of order 8 have the same eigenspaces, and hence are eigenspace equivalent, but are not isomorphic (Fig. 2).

Fig. 2: Graphs with the same eigenspaces

Appendix 8.2

The following two non-isomorphic graphs of order 8 have the same main eigenvectors for \(S=V\) but different main eigenvalues. They appear in L. Collins and I. Sciriha [22]. Let G and \(G^{*}\) be as shown in Fig. 3, and let

$$\begin{aligned} e_1= & {} \left( \frac{1+\sqrt{5}}{2}, \frac{1+\sqrt{5}}{2}, \frac{1+\sqrt{5}}{2},\frac{1+\sqrt{5}}{2} ,1,1,1,1\right) ^\mathtt{T},\\ e_2= & {} \left( \frac{1-\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2},\frac{1-\sqrt{5}}{2},1,1,1,1 \right) ^\mathtt{T}. \end{aligned}$$

It is straightforward to verify that \(e_1\) and \(e_2\) are the main eigenvectors of G and of \(G^{*}\), with the two main eigenvalues

$$\begin{aligned} 1+\sqrt{5},\,1-\sqrt{5} \,\,\text { and }\,\, \frac{3}{2}(1+\sqrt{5}),\,\frac{3}{2}(1-\sqrt{5}), \end{aligned}$$

respectively (Fig. 3).
Fig. 3: Graphs with the same main eigenvectors but different main eigenvalues

Appendix 8.3

The following two adjacency matrices on 8 vertices

$$\begin{aligned} A_1=\begin{bmatrix} 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \end{bmatrix},\ A_2=\begin{bmatrix} 0 &{} 0 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \end{bmatrix}, \end{aligned}$$

give rise to the same standard walk matrix W of rank 6, 

$$\begin{aligned} W=\begin{bmatrix} 1 &{} 3 &{} 8 &{} 23 &{} 64 &{} 181 &{} 506 &{} 1425 \\ 1 &{} 3 &{} 8 &{} 23 &{} 64 &{} 181 &{} 506 &{} 1425 \\ 1 &{} 3 &{} 9 &{} 24 &{} 69 &{} 190 &{} 539 &{} 1502 \\ 1 &{} 2 &{} 5 &{} 13 &{} 37 &{} 101 &{} 287 &{} 797 \\ 1 &{} 4 &{} 11 &{} 32 &{} 89 &{} 252 &{} 705 &{} 1984 \\ 1 &{} 2 &{} 5 &{} 14 &{} 37 &{} 106 &{} 291 &{} 826 \\ 1 &{} 2 &{} 7 &{} 19 &{} 55 &{} 153 &{} 433 &{} 1211 \\ 1 &{} 1 &{} 3 &{} 8 &{} 23 &{} 64 &{} 181 &{} 506 \\ \end{bmatrix}. \end{aligned}$$

In this case the graphs of \(A_1\) and \(A_2\) are isomorphic to each other. This graph is drawn in Fig. 4.
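This equality can be confirmed directly. Below is a small NumPy sketch (the helper name `walk_matrix` is ours) that computes \(W=[\mathrm{e},A\mathrm{e},\dots ,A^{n-1}\mathrm{e}]\) for \(S=V\) from both matrices and checks that the results agree and have rank 6.

```python
# Verify that the two adjacency matrices of Appendix 8.3 yield the same
# standard walk matrix (S = V) of rank 6.
import numpy as np

def walk_matrix(A, e=None):
    """W^S = [e, Ae, ..., A^{n-1}e]; e defaults to the all-ones vector (S = V)."""
    n = A.shape[0]
    v = np.ones(n, dtype=np.int64) if e is None else e
    cols = []
    for _ in range(n):
        cols.append(v)
        v = A @ v
    return np.column_stack(cols)

A1 = np.array([[0,0,0,1,1,0,1,0],
               [0,0,1,0,1,0,0,1],
               [0,1,0,0,1,1,0,0],
               [1,0,0,0,0,1,0,0],
               [1,1,1,0,0,0,1,0],
               [0,0,1,1,0,0,0,0],
               [1,0,0,0,1,0,0,0],
               [0,1,0,0,0,0,0,0]], dtype=np.int64)
A2 = np.array([[0,0,1,0,1,0,0,1],
               [0,0,0,1,1,0,1,0],
               [1,0,0,0,1,1,0,0],
               [0,1,0,0,0,1,0,0],
               [1,1,1,0,0,0,1,0],
               [0,0,1,1,0,0,0,0],
               [0,1,0,0,1,0,0,0],
               [1,0,0,0,0,0,0,0]], dtype=np.int64)

W1, W2 = walk_matrix(A1), walk_matrix(A2)
print(np.array_equal(W1, W2), np.linalg.matrix_rank(W1))
```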

Fig. 4: Two different adjacency matrices with the same walk matrix

Appendix 8.4

A pair of non-isomorphic graphs G and \(G^*\) of least order \(n=7\) with the same walk matrix is given in Fig. 5. Their walk matrix is

$$\begin{aligned} W=W^*=\begin{bmatrix} 1 &{} 4 &{} 11 &{} 35 &{} 104 &{} 318 &{} 960 \\ 1 &{} 3 &{} 9 &{} 27 &{} 82 &{} 248 &{} 752 \\ 1 &{} 2 &{} 7 &{} 20 &{} 62 &{} 186 &{} 566 \\ 1 &{} 2 &{} 8 &{} 22 &{} 70 &{} 208 &{} 636 \\ 1 &{} 2 &{} 7 &{} 20 &{} 62 &{} 186 &{} 566 \\ 1 &{} 3 &{} 9 &{} 27 &{} 82 &{} 248 &{} 752 \\ 1 &{} 4 &{} 11 &{} 35 &{} 104 &{} 318 &{} 960 \\ \end{bmatrix}. \end{aligned}$$
Fig. 5: A pair of non-isomorphic graphs with the same walk matrix of \({\mathrm{rank}}(W)=n-3=4\)
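The stated rank is easy to verify numerically (a NumPy sketch, not part of the paper). Note that rows 1 and 7, rows 2 and 6, and rows 3 and 5 of W coincide, so at most four rows are linearly independent.

```python
# Check the rank of the shared walk matrix W = W* from Appendix 8.4.
import numpy as np

W = np.array([[1,4,11,35,104,318,960],
              [1,3,9,27,82,248,752],
              [1,2,7,20,62,186,566],
              [1,2,8,22,70,208,636],
              [1,2,7,20,62,186,566],
              [1,3,9,27,82,248,752],
              [1,4,11,35,104,318,960]], dtype=np.int64)

# Duplicate rows leave only 4 distinct rows, and these turn out to be
# linearly independent, giving rank n - 3 = 4.
print(np.linalg.matrix_rank(W))
```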

Appendix 8.5

We return to the graph G in Fig. 1 to show by example how the Galois group and the automorphism group act on the main eigenvectors and main eigenvalues of the graph. The characteristic polynomial of G is \((x+1)(x^3-x^{2}-3x+1);\) it has the roots

$$\begin{aligned} \lambda _{0}=-1, \lambda _{1}=-1.48..., \lambda _{2}=0.31... \text { and } \lambda _{3}=2.17... \end{aligned}$$

with corresponding eigenvectors

$$\begin{aligned} a_{0}=\left( \begin{array}{c}0 \\ 0 \\ -1\\ 1 \end{array}\right) ,\quad a_{1}=\left( \begin{array}{c}-0.67... \\ 1\\ -0.40... \\ -0.40...\end{array}\right) ,\quad a_{2}=\left( \begin{array}{c}3.21... \\ 1 \\ -1.45...\\ -1.45...\end{array}\right) \quad \text {and}\quad a_{3}=\left( \begin{array}{c}0.46... \\ 1 \\ 0.85... \\ 0.85...\end{array}\right) . \end{aligned}$$

The graph automorphism which fixes \(v_{1}, v_{2}\) and interchanges \(v_{3}, v_{4}\) maps \(a_{0}\) to \(-a_{0}\) and fixes \(a_{1}, a_{2}, a_{3}.\) (Clearly, graph automorphisms always leave all eigenspaces invariant: In matrix terms, if P is a permutation matrix with \(A P=P A\) and if \(Ax=\lambda x,\) then \(A Px=P Ax=\lambda Px.\)) The Galois group \({\mathrm{Gal}}(G)\) of \((x^3-x^{2}-3x+1)\) is the symmetric group on \(\{\lambda _{1},\lambda _{2}, \lambda _{3}\};\) its elements permute \(\mathrm{Eig}(A,\lambda _{1}), {\mathrm{Eig}}(A,\lambda _{2}), {\mathrm{Eig}}(A,\lambda _{3})\) in the same way.

Next we find the main eigenvectors for \(S=\{v_{1}\},\) for instance. The main polynomial of this set is \((x^3-x^{2}-3x+1).\) We write \(\mathrm{e}=(1,0,0,0)^\mathtt{T}\) in terms of its main eigenvectors,

$$\begin{aligned} \mathrm{e}=(1,0,0,0)^\mathtt{T}=\mathrm{e}_{1}+\mathrm{e}_{2}+\mathrm{e}_{3}=c_{1}a_{1}+c_{2}a_{2}+c_{3}a_{3} \end{aligned}$$

for certain coefficients \(c_{i}\in \mathbb {K},\) the splitting field of \((x^3-x^{2}-3x+1).\) In fact, \(c_{1}=-0.37...,\) \(c_{2}=0.20...\) and \(c_{3}=0.17...\). By Lemma 3.1, \({\mathrm{Gal}}(G)\) permutes the \(\mathrm{e}_{i},\) and so we obtain the explicit action of \({\mathrm{Gal}}(G)\) on the main eigenvectors of \(S=\{v_{1}\}.\)
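These numbers can be reproduced with a short computation. Fig. 1 is not part of this excerpt, so the adjacency matrix below is an assumption: the stated spectrum and automorphism are consistent with G being the paw graph, a triangle \(v_{2}v_{3}v_{4}\) with a pendant vertex \(v_{1}\) attached to \(v_{2}.\) Since each \(a_{i}\) \((i\ge 1)\) is normalized with second coordinate 1, the coefficient \(c_{i}\) is simply the second coordinate of the projection \(\mathrm{e}_{i}.\)

```python
import numpy as np

# Assumed adjacency matrix of G (the paw graph; see the lead-in above):
# triangle v2 v3 v4 with pendant vertex v1 attached to v2.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

vals, vecs = np.linalg.eigh(A)        # eigenvalues ascending, orthonormal columns
e = np.array([1.0, 0.0, 0.0, 0.0])    # characteristic vector of S = {v1}

# e_i = orthogonal projection of e onto Eig(A, mu_i); the projection onto
# Eig(A, -1) vanishes, so -1 is not a main eigenvalue of S = {v1}.
for lam, v in zip(vals, vecs.T):
    e_i = (v @ e) * v
    print(f"mu = {lam:+.4f}   second coordinate of e_i = {e_i[1]:+.4f}")
```

The three nonzero second coordinates recover \(c_{1}=-0.37...,\) \(c_{2}=0.20...\) and \(c_{3}=0.17...\) as in the text.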

Appendix 8.6

Fig. 6: Two non-isomorphic graphs of order 8 with the same irreducible characteristic polynomial

By computation, we found that the common characteristic polynomial of every pair of cospectral non-isomorphic graphs of order \(n<8\) is reducible. The following is an example for \(n=8\) of a pair of non-isomorphic graphs G and \(G^{*}\) with the irreducible characteristic polynomial

$$\begin{aligned} x^8-10x^6-4 x^5+24 x^4+8x^3-16x^2+1. \end{aligned}$$

Their complements have the same characteristic polynomial

$$\begin{aligned} x^8-18 x^6-26 x^5+26 x^4+42 x^3-16 x^2-16 x+4 \end{aligned}$$

and this polynomial is again irreducible (Fig. 6).
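Both irreducibility claims are easy to check by machine. The following sketch (Python with SymPy, our own illustration rather than part of the paper) tests the two degree-8 polynomials above.

```python
# Check irreducibility over Q of the two characteristic polynomials
# from Appendix 8.6 (the graphs of Fig. 6 and their complements).
from sympy import Poly, symbols

x = symbols('x')
p = Poly(x**8 - 10*x**6 - 4*x**5 + 24*x**4 + 8*x**3 - 16*x**2 + 1, x)
q = Poly(x**8 - 18*x**6 - 26*x**5 + 26*x**4 + 42*x**3 - 16*x**2 - 16*x + 4, x)
print(p.is_irreducible, q.is_irreducible)  # the text asserts both are irreducible
```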