Notes on simplicial rook graphs

The simplicial rook graph ${\rm SR}(m,n)$ is the graph whose vertices are the sequences of nonnegative integers of length $m$ summing to $n$, two such sequences being adjacent when they differ in precisely two places. We show that ${\rm SR}(m,n)$ has integral eigenvalues and smallest eigenvalue $s = \max (-n, -{m \choose 2})$, and that this graph has a large part of its spectrum in common with the Johnson graph $J(m+n-1,n)$. We determine the automorphism group and several other properties.


Introduction
Let $\mathbb{N}$ be the set of nonnegative integers, and let $m, n \in \mathbb{N}$. The simplicial rook graph ${\rm SR}(m,n)$ is the graph obtained by taking as vertices the vectors in $\mathbb{N}^m$ with coordinate sum $n$, and letting two vertices be adjacent when they differ in precisely two coordinate positions. Then ${\rm SR}(m,n)$ has $v = {n+m-1 \choose n}$ vertices and is regular of valency $n(m-1)$.
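The vertex count and valency are easy to confirm by computer. The following is a minimal pure-Python sketch (the helper names `vertices` and `adjacent` are ad hoc, not from the paper):

```python
from math import comb

def vertices(m, n):
    """All length-m tuples of nonnegative integers with coordinate sum n."""
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

def adjacent(u, v):
    """Adjacent in SR(m, n): differ in precisely two coordinate positions."""
    return sum(1 for a, b in zip(u, v) if a != b) == 2

m, n = 4, 3
V = vertices(m, n)
assert len(V) == comb(n + m - 1, n)            # v = C(n+m-1, n) vertices
degrees = {u: sum(adjacent(u, v) for v in V) for u in V}
assert set(degrees.values()) == {n * (m - 1)}  # regular of valency n(m-1)
```

The same two helpers are reused in the sketches below.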

Integrality of the eigenvalues
We start with the main result, proved by a somewhat tricky induction.

Theorem 2.1 All eigenvalues of SR(m, n) are integers.
Proof Let $\Gamma$ be the graph ${\rm SR}(m, n)$, and let $X$ be its vertex set. The adjacency matrix $A$ of $\Gamma$ acts as a linear operator on $\mathbb{R}^X$ (sending each vertex to the sum of its neighbors). By induction, we construct a series of subspaces $0 = U_0 \subseteq U_1 \subseteq \cdots \subseteq U_t = \mathbb{R}^X$ and find integers $c_i$ such that $(A - c_i I)U_i \subseteq U_{i-1}$ ($1 \le i \le t$). Then $p(A) := \prod_i (A - c_i I)$ vanishes identically, and all eigenvalues of $A$ are among the integers $c_i$.
For $j \ne k$, let $A_{jk}$ be the matrix that describes adjacency between vertices that differ only in the $j$- and $k$-coordinates. Then $A = \sum_{j<k} A_{jk}$. If $(A_{jk} - c_{jk} I)u \in U$ for all $j, k$, then $(A - cI)u \in U$ for $c = \sum_{j<k} c_{jk}$.
A basis for $\mathbb{R}^X$ is given by the vectors $e_x$ ($x \in X$) that have $y$-coordinate 0 for $y \ne x$, and $x$-coordinate 1. For $S \subseteq X$, let $e_S := \sum_{x \in S} e_x$, so that $e_X$ is the all-1 vector. Since $(A - cI)e_X = 0$ for $c = (m-1)n$, we can put $U_1 = \langle e_X \rangle$.
For partitions $\Pi$ of the set of coordinate positions $\{1, \ldots, m\}$ and integral vectors $z$ indexed by $\Pi$ that sum to $n$, let $S_{\Pi,z}$ be the set of all $u \in X$ with $\sum_{i \in \pi} u_i = z_\pi$ for all $\pi \in \Pi$. If $\Pi$ is a partition into singletons, then $|S_{\Pi,z}| = 1$.
For a vector $y$ indexed by a partition $\Pi$, let $\tilde{y}$ be the sequence of pairs $(y_\pi, |\pi|)$ ($\pi \in \Pi$) sorted lexicographically: with the $y_\pi$ in nondecreasing order, and for given $y_\pi$ with the $|\pi|$ in nondecreasing order.
We use induction to show for $S = S_{\Pi,z}$ and suitable $c$ that the image $(A - cI)e_S$ lies in the subspace $U$ spanned by the $e_T$ for $T = S_{\Pi',y}$, where $(\Pi', y) < (\Pi, z)$ (comparing $\tilde{y}$ and $\tilde{z}$).
Note that the sets $S = S_{\Pi,z}$ induce regular subgraphs of $\Gamma$. Indeed, the induced subgraph is a copy of the Cartesian product $\prod_{\pi \in \Pi} {\rm SR}(|\pi|, z_\pi)$. The image $(A - cI)e_S$ can be viewed as a multiset in which the $x \in X$ occur with certain multiplicities. The fact that $S$ induces a regular subgraph means that we can adjust $c$ to give all $x \in S$ any desired given multiplicity, while the multiplicity of $x \notin S$ does not depend on $c$. If $j, k$ belong to the same part of $\Pi$, then $A_{jk} e_S$ only contains points of $S$ and can be ignored. So, let $j \in \pi$, $k \in \rho$, where $\pi, \rho \in \Pi$, $\pi \ne \rho$, and consider $A_{jk} e_S$. Abbreviate $\pi \cup \{k\}$ with $\pi + k$ and $\pi \setminus \{j\}$ with $\pi - j$.
The image $(A_{jk} - cI)e_S$ equals $\Sigma_1 - \Sigma_2$, where $\Sigma_1$ is the sum of all $e_T$, with $T = S_{\Pi',y}$ and $\Pi' = (\Pi \setminus \{\pi, \rho\}) \cup \{\pi - j, \rho + j\}$ (omitting $\pi - j$ if it is empty) and $y$ agrees with $z$ except that $y_{\pi - j} \le z_\pi$ and $y_{\rho + j} \ge z_\rho$ (of course $y_{\pi - j} + y_{\rho + j} = z_\pi + z_\rho$), and $\Sigma_2$ is the sum of all $e_T$, with $T = S_{\Pi',y}$ and $\Pi' = (\Pi \setminus \{\pi, \rho\}) \cup \{\pi + k, \rho - k\}$ and $y$ agrees with $z$ except that $y_{\pi + k} < z_\pi$ and $y_{\rho - k} > z_\rho$.
[Let $u$ be a $(j,k)$-neighbor of $s \in S$. Since $\sum_{i \in \pi} s_i = z_\pi$, it follows that $\sum_{i \in \pi - j} u_i = \sum_{i \in \pi - j} s_i \le z_\pi$, so that $u$ is counted in $\Sigma_1$. Conversely, if $u$ is counted in $\Sigma_1$, then we find a $(j,k)$-neighbor $s \in S$ by moving $u_j - s_j$ from position $j$ to position $k$ (if $u_j > s_j$) or moving $s_j - u_j$ from position $k$ to position $j$ (if $s_j > u_j$). The latter is impossible if $u_k < s_j - u_j$, i.e., if $\sum_{i \in \pi + k} u_i < z_\pi$, and these cases are subtracted in $\Sigma_2$.] We are done by induction. Indeed, for the pair $\{j, k\}$ we can choose which of the two is called $j$, and we pick notation such that $(z_\pi, |\pi|) \le (z_\rho, |\rho|)$ in lexicographic order. Now in $\Sigma_1$ and $\Sigma_2$ only $(\Pi', y)$ occur with $(\Pi', y) < (\Pi, z)$.
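Theorem 2.1 is easy to check numerically for small parameters. The sketch below (assuming numpy is available; helper names are ad hoc) builds the adjacency matrix of ${\rm SR}(m,n)$ and verifies that every eigenvalue rounds to an integer:

```python
import numpy as np

def vertices(m, n):
    # all length-m tuples of nonnegative integers with coordinate sum n
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

def adjacency(m, n):
    V = vertices(m, n)
    A = np.zeros((len(V), len(V)), dtype=float)
    for i, u in enumerate(V):
        for j, v in enumerate(V):
            if sum(1 for a, b in zip(u, v) if a != b) == 2:
                A[i, j] = 1.0
    return A

for m, n in [(3, 3), (3, 4), (4, 3)]:
    eig = np.linalg.eigvalsh(adjacency(m, n))
    assert np.allclose(eig, np.round(eig), atol=1e-6)  # all eigenvalues integral
```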

The smallest eigenvalue
We find the smallest eigenvalue of $\Gamma = {\rm SR}(m,n)$ by observing that $\Gamma$ is a halved graph of a bipartite graph $\Delta$.
Consider the bipartite graph $\Delta$ whose vertices are the vectors in $\mathbb{N}^m$ with coordinate sum at most $n$, where two vertices are adjacent when one has coordinate sum $n$, the other coordinate sum less than $n$, and both differ in precisely one coordinate. Let $V$ be the set of vectors in $\mathbb{N}^m$ with coordinate sum $n$. Two vectors $u, v$ in $V$ are adjacent in $\Gamma$ precisely when they have distance 2 in $\Delta$. If the adjacency matrix of $\Delta$ is $\left(\begin{smallmatrix} 0 & N \\ N^\top & 0 \end{smallmatrix}\right)$, with top and left indexed by $V$, then for the adjacency matrix $A$ of $\Gamma$ we find $A + nI = NN^\top$, so that $A + nI$ is positive semidefinite, and the smallest eigenvalue of $A$ is not smaller than $-n$.
Together with the results of [9], this proves that the smallest eigenvalue of $A$ equals $\max(-n, -{m \choose 2})$.

Proof Let $s$ be the smallest eigenvalue of $A$. We just saw that $s \ge -n$. Elkies [6] observed that $s \ge -{m \choose 2}$, since $A$ is the sum of ${m \choose 2}$ matrices $A_{jk}$ that describe adjacency where only coordinates $j, k$ are changed. Each $A_{jk}$ is the adjacency matrix of a graph that is a union of cliques and hence has smallest eigenvalue not smaller than $-1$. Then $A = \sum_{j<k} A_{jk}$ has smallest eigenvalue not smaller than $-{m \choose 2}$. It is shown in [9] that the eigenvalue $-{m \choose 2}$ has multiplicity at least ${n - {m \choose 2} + m - 1 \choose m-1}$ and hence occurs with nonzero multiplicity if $n \ge {m \choose 2}$. It is also shown in [9] that the multiplicity of the eigenvalue $-n$ is at least the number of permutations in ${\rm Sym}(m)$ with precisely $n$ inversions, that is, the number of words $w$ of length $n$ in this Coxeter group, and this is nonzero precisely when $n \le {m \choose 2}$.
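The factorization $A + nI = NN^\top$ underlying the bound $s \ge -n$ can be verified entry by entry for a small case. A pure-Python sketch (names ad hoc):

```python
def vectors(m, s):
    # all length-m tuples of nonnegative integers with coordinate sum s
    if m == 1:
        return [(s,)]
    return [(c,) + rest for c in range(s + 1) for rest in vectors(m - 1, s - c)]

def diff(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)

m, n = 3, 3
V = vectors(m, n)                                   # coordinate sum n
W = [w for s in range(n) for w in vectors(m, s)]    # coordinate sum < n
N = [[1 if diff(u, w) == 1 else 0 for w in W] for u in V]
A = [[1 if diff(u, v) == 2 else 0 for v in V] for u in V]
for i in range(len(V)):
    for j in range(len(V)):
        nn = sum(N[i][k] * N[j][k] for k in range(len(W)))
        assert nn == A[i][j] + (n if i == j else 0)  # N N^T = A + nI
```

The diagonal entries of $NN^\top$ count, for each vertex, the $n$ ways of lowering one coordinate; the off-diagonal entries count the unique common lower neighbor of two adjacent vertices.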

Proposition 3.2
The eigenvalue $-{m \choose 2}$ has multiplicity precisely ${n - {m \choose 2} + m - 1 \choose m - 1}$.

Proof For each vertex $u$, and $1 \le j < k \le m$, let $C_{jk}(u)$ be the $(j,k)$-clique on $u$, that is, the set of all vertices $v$ with $v_i = u_i$ for $i \ne j, k$. An eigenvector $a = (a_u)$ for the eigenvalue $-{m \choose 2}$ must be a common eigenvector of all $A_{jk}$ for the eigenvalue $-1$. That means that $\sum_{v \in C} a_v = 0$ for each set $C = C_{jk}(u)$.
Order the vertices by $u > v$ when $u_d > v_d$, where $d = d_{uv}$ is the largest index where $u, v$ differ. Suppose $u_i = s$ for some index $i$ with $s \le m - i - 1$. We can express $a_u$ in terms of the $a_v$ for smaller $v$ with $d_{uv} \ge m - s$ via $\sum_{v \in C} a_v = 0$, where $C = C_{i,m-s}(u)$. Indeed, this equation expresses $a_u$ in terms of such $a_v$; by induction each $a_v$ can in its turn be expressed in terms of $a_w$ where $w$ is smaller and $d_{vw} \ge m - t > m - s$ (for the relevant value $t < s$), so that $w$ is smaller than $u$, and $d_{uw} > m - s$.
In this way, we expressed $a_u$ whenever $u_i \le m - i - 1$ for some $i$. The free $a_u$ have $u_i \ge m - i$ for all $i$, and then the vector $u'$ with $u'_i = u_i - (m - i)$ is nonnegative and sums to $n - {m \choose 2}$. There are ${n - {m \choose 2} + m - 1 \choose m - 1}$ such vectors, so this is an upper bound for the multiplicity. But by [9] this is also a lower bound.
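The count at the end of the proof is easy to confirm directly: the "free" vertices $u$ (those with $u_i \ge m - i$ for all $i$) are in bijection with the nonnegative vectors summing to $n - {m \choose 2}$. A pure-Python sketch:

```python
from math import comb

def vertices(m, n):
    # all length-m tuples of nonnegative integers with coordinate sum n
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

m, n = 3, 5  # any case with n >= C(m,2) = 3
free = [u for u in vertices(m, n)
        if all(u[i] >= m - (i + 1) for i in range(m))]   # u_i >= m - i (1-based i)
assert len(free) == comb(n - comb(m, 2) + m - 1, m - 1)
```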
Thanks to a suggestion by Aart Blokhuis, we can also settle the multiplicity of the eigenvalue −n.

Proposition 3.3
The multiplicity of the eigenvalue $-n$ equals the number of elements of ${\rm Sym}(m)$ with $n$ inversions, that is, the coefficient of $t^n$ in the product $\prod_{i=2}^{m} (1 + t + \cdots + t^{i-1})$.
Proof As already noted, it is shown in [9] that the multiplicity of the eigenvalue $-n$ is at least the number of permutations in ${\rm Sym}(m)$ with precisely $n$ inversions. The identity $\sum_w t^{\ell(w)} = \prod_{i=2}^{m} (1 + t + \cdots + t^{i-1})$, where $w$ runs over ${\rm Sym}(m)$ and $\ell(w)$ is the number of inversions of $w$, is standard, cf. [8], p. 73.
Since $A + nI = NN^\top$, the multiplicity of the eigenvalue $-n$ is the nullity of $N^\top$, that is, the number of vertices minus the rank of $N$, and we need an upper bound for that.
We first define a matrix $P$ and observe that $N$ and $P$ have the same column space and hence the same rank. For $u, v \in \mathbb{N}^m$, write $u \preceq v$ when $u_i \le v_i$ for all $i$. Let $P$ be the 0-1 matrix with the same row and column indices (elements of $\mathbb{N}^m$ with sum $n$ and sum smaller than $n$, respectively) where $P_{xy} = 1$ when $y \preceq x$. Recall that $N$ is the 0-1 matrix with $N_{xy} = 1$ when $x$ and $y$ differ in precisely one coordinate position. Let $M(y)$ denote column $y$ of the matrix $M$.
For $d = n - \sum_i y_i$, we find that $N(y) = \sum_{i=0}^{d-1} (-1)^{i-1} (i - d) \sum_{w \in W_i} P(y + w)$, where $W_i$ is the set of vectors in $\{0,1\}^m$ with sum $i$; note that the $i = 0$ term equals $d\,P(y)$. Indeed, suppose that $y \preceq x$ and that $x$ and $y$ differ in $j$ positions. Then $1 \le j \le d$, and $N_{xy} = \delta_{1j}$, while the $x$-entry of the right-hand side equals $\sum_{i=0}^{d-1} (-1)^{i-1}(i-d) {j \choose i} = \delta_{1j}$; if $y \not\preceq x$, both sides have $x$-entry 0. We see that $N(y)$ and $d\,P(y)$ differ by a linear combination of columns $P(y')$ where $\sum_i y'_i > \sum_i y_i$, and hence that $N$ and $P$ have the same column space.
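The relation between the columns of $N$ and $P$ can be checked exactly for a small case. The sketch below verifies, for every column $y$ and row $x$ with $m = n = 3$, that $N_{xy}$ equals $\sum_{i=0}^{d-1}(-1)^{i-1}(i-d)\sum_{w \in W_i} P_{x,\,y+w}$ (pure Python; names ad hoc):

```python
from itertools import combinations

def vectors(m, s):
    # all length-m tuples of nonnegative integers with coordinate sum s
    if m == 1:
        return [(s,)]
    return [(c,) + rest for c in range(s + 1) for rest in vectors(m - 1, s - c)]

m, n = 3, 3
rows = vectors(m, n)                                  # coordinate sum n
cols = [y for s in range(n) for y in vectors(m, s)]   # coordinate sum < n

def P_entry(x, y):        # P_xy = 1 when y <= x coordinatewise
    return 1 if all(b <= a for a, b in zip(x, y)) else 0

def N_entry(x, y):        # N_xy = 1 when x and y differ in exactly one place
    return 1 if sum(1 for a, b in zip(x, y) if a != b) == 1 else 0

for y in cols:
    d = n - sum(y)
    for x in rows:
        rhs = 0
        for i in range(d):                               # the i = d term vanishes
            coeff = (d - i) if i % 2 == 0 else (i - d)   # (-1)^(i-1) (i - d)
            for S in combinations(range(m), i):          # w runs over W_i
                w = tuple(1 if j in S else 0 for j in range(m))
                rhs += coeff * P_entry(x, tuple(a + b for a, b in zip(y, w)))
        assert rhs == N_entry(x, y)
```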
Aart Blokhuis remarked that the coefficient of $t^n$ in the product $\prod_{i=2}^{m} (1 + t + \cdots + t^{i-1})$ is precisely the number of vertices $u$ satisfying $u_i < i$ for $1 \le i \le m$. Thus, it suffices to show that the rows of $N$ (or $P$) indexed by the remaining vertices are linearly independent.
Consider a linear dependence between the rows of $P$ indexed by the remaining vertices, and let $P'$ be the submatrix of $P$ containing the rows that occur in this dependence. Order vertices in reverse lexicographic order, so that $u$ is earlier than $u'$ when $u_h < u'_h$ and $u_i = u'_i$ for $i > h$. Let $x$ be the last row index of $P'$ (in this order). Let $h$ be an index where the inequality $x_i < i$ is violated, so that $x_h \ge h$. Let $e_i$ be the element of $\mathbb{N}^m$ that has all coordinates 0 except for the $i$-coordinate, which is 1. Let $z = x - h e_h$. Let $H = \{1, \ldots, h-1\}$. For $S \subseteq H$, let $\chi(S)$ be the element of $\mathbb{N}^m$ that has $i$-coordinate 1 if $i \in S$, and 0 otherwise.
Consider the linear combination $p = \sum_{S \subseteq H} (-1)^{|S|} P'(z + \chi(S))$ of the columns of $P'$. We shall see that $p$ has $x$-entry 1 and all other entries equal to 0. But that contradicts the existence of a linear dependence.
If $u$ is a row index of $P'$ and not $z \preceq u$, then $p_u = 0$. If $z \preceq u$ and $z_i < u_i$ for some $i < h$, then the alternating sum vanishes, and again $p_u = 0$. In the remaining case $z \preceq u$ and $u_i = z_i$ for all $i < h$, and then $u = x$: if $d$ is the largest index where $u$ and $x$ differ, then $d > h$ is impossible since $u_d \ge z_d = x_d$ would make $u$ later than $x$, and $d = h$ is impossible since the coordinate sums agree. Hence $p_x = 1$ and all other entries of $p$ are 0, the desired contradiction.

These three propositions settle conjectures from [9].
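The standard identity used above, $\sum_w t^{\ell(w)} = \prod_{i=2}^{m}(1 + t + \cdots + t^{i-1})$, is easy to check by brute force. A pure-Python sketch (names ad hoc):

```python
from itertools import permutations

def inversions(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def poincare_poly(m):
    """Coefficient list of prod_{i=2}^m (1 + t + ... + t^{i-1})."""
    coeffs = [1]
    for i in range(2, m + 1):
        new = [0] * (len(coeffs) + i - 1)
        for a, c in enumerate(coeffs):
            for b in range(i):
                new[a + b] += c
        coeffs = new
    return coeffs

m = 4
counts = {}
for w in permutations(range(1, m + 1)):
    counts[inversions(w)] = counts.get(inversions(w), 0) + 1
poly = poincare_poly(m)
assert [counts.get(k, 0) for k in range(len(poly))] == poly  # t^n coeff = #perms with n inversions
```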

An equitable partition
A partition $\{X_1, \ldots, X_t\}$ of the vertex set $X$ of a graph $\Gamma$ is called equitable when for all $i, j$ the number $e_{ij}$ of vertices in $X_j$ adjacent to a given vertex $x \in X_i$ does not depend on the choice of $x \in X_i$. In this case, the matrix $E = (e_{ij})$ is called the quotient matrix of the partition. All eigenvalues of $E$ are also eigenvalues of $\Gamma$, realized by eigenvectors that are constant on the sets $X_i$. There is a basis of $\mathbb{R}^X$ consisting of eigenvectors that either are constant on all $X_i$ or sum to zero on all $X_i$. The partition of $X$ into orbits of an automorphism group $G$ of $\Gamma$ is always equitable.
In this section, we indicate an equitable partition of SR(m, n), and in the next section a much finer one.
Let $\Gamma$ be the graph ${\rm SR}(m, n)$ where $n > 0$, and let $V_i$ be the set of vertices with precisely $i$ nonzero coordinates, so that $|V_i| = {m \choose i}{n-1 \choose i-1}$.

The partition $\{V_1, \ldots, V_{\min(m,n)}\}$ is equitable: a vertex in $V_i$ has $i(i-1)$ neighbors in $V_{i-1}$, $(m-i)(n-i)$ neighbors in $V_{i+1}$, and all other neighbors in $V_i$. The quotient matrix $E$ is tridiagonal and has eigenvalues $(n-i)(m-i) - n$ for $0 \le i \le \min(m,n) - 1$.
Proof The $e_{ij}$ are easily checked. It remains to find the eigenvalues. Let $u$ be an eigenvector for the Johnson graph $J(m+n-1, n)$. It is not wrong to say that it has these eigenvalues and multiplicities for $0 \le i \le n$, since by convention the multiplicities of coinciding eigenvalues are added, and eigenvalues with multiplicity 0 are no eigenvalues. For example, $J(5, 4)$ has spectrum $4^1 \, (-1)^4$ (where multiplicities are written as exponents), and appending eigenvalues of multiplicity 0 does not change this spectrum.
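The intersection numbers can be checked mechanically: for every vertex with $i$ nonzero coordinates one counts $i(i-1)$ neighbors with $i-1$ nonzero coordinates and $(m-i)(n-i)$ with $i+1$. A pure-Python sketch (names ad hoc):

```python
def vertices(m, n):
    # all length-m tuples of nonnegative integers with coordinate sum n
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

def nz(u):
    return sum(1 for a in u if a)

m, n = 4, 5
V = vertices(m, n)
results = {}
for u in V:
    i = nz(u)
    down = sum(1 for v in V
               if sum(1 for a, b in zip(u, v) if a != b) == 2 and nz(v) == i - 1)
    up = sum(1 for v in V
             if sum(1 for a, b in zip(u, v) if a != b) == 2 and nz(v) == i + 1)
    results[u] = (down, up)
    assert down == i * (i - 1)        # neighbors with one fewer nonzero coordinate
    assert up == (m - i) * (n - i)    # neighbors with one more nonzero coordinate
```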

The common part of the spectra of SR(m, n) and J(m + n − 1, n)
Both ${\rm SR}(m, n)$ and $J(m+n-1, n)$ have ${m+n-1 \choose n}$ vertices. Both have valency $n(m-1)$. These graphs resemble each other and have a large part of their spectrum in common. Let $m, n > 0$.

Proposition 5.1
The graphs ${\rm SR}(m, n)$ and $J(m+n-1, n)$ have equitable partitions with the same quotient matrix $E$, where $E$ has eigenvalues $(n-i)(m-i) - n$ with multiplicity ${m \choose i}$ for $0 \le i \le n - 1$, and multiplicity ${m \choose n} - 1$ for $i = n$. In particular, the spectrum of $E$ is a common part of the spectrum of ${\rm SR}(m,n)$ and that of $J(m+n-1, n)$.
if $|S| = |T| = i$ and $S, T$ differ in two places, and 0 otherwise, in both cases. It follows that our partitions are equitable with the same quotient matrix $E$. We may conclude that $J(m+n-1, n)$ and ${\rm SR}(m, n)$ have the $\sum_{i=1}^{n} {m \choose i}$ eigenvalues of the matrix $E$ in common.
Claim: These eigenvalues are $(n-i)(m-i) - n$ with multiplicity ${m \choose i}$ for $0 \le i < n$, and multiplicity ${m \choose n} - 1$ for $i = n$. These are eigenvalues of $J(m+n-1, n)$, so we need only confirm the multiplicities.
Let $W_{ij}$ be the (symmetrized) inclusion matrix of $i$-subsets against $j$-subsets in a $v$-set; $W_{i,-1}$ has no columns, and $W_{-1,j}$ has no rows.

Let $x = (a, b, c, \ldots)$ and $y = (a+d, b-d, c, \ldots)$ be adjacent vertices, and consider their common neighbors $(a', b', c, \ldots)$ with $a' + b' = a + b$. Note for later use the structure of the graph $\Gamma(x, y)$ induced by the common neighbors of $x$ and $y$. It is $K_{a+b-1} + K_{m-2} + K_g$, where $g$ is the number of $c$ (common coordinates of $x$ and $y$) not less than $d$.
If $\lambda_{xy} = m + n - 3$ for all $n(m-1)$ neighbors $y$ of $x$ (and $m > 2$), then $x$ has either a unique nonzero coordinate $n$ or only coordinates 0, 1. Thus, we can recognize this set of $m + {m \choose n}$ vertices. The induced subgraph (for $n > 2$) is isomorphic to $K_m + J(m, n)$. We see that $\Gamma$ determines $m + n - 3$ and also the pair $\{m, {m \choose n}\}$. Now $m$ is the smallest element of the pair distinct from 0, 1, so we find $m$ and $n$.
Suppose first that $n \ne m - 1$. Then we have recognized the set $S$ of vectors with a unique nonzero coordinate. At distance $i$ from $S$ lie the vectors with precisely $i + 1$ nonzero coordinates, and the positions of the nonzero coordinates of a vector $u$ are determined by the set of nearest vertices in $S$. We show by induction on $m$ that all vertex labels are determined. If a vertex $(a, b, \ldots)$ has at least two nonzero coordinates, and $m > 3$, then its neighbor $(0, a+b, \ldots)$ lies in the ${\rm SR}(m-1, n)$ on the vertices with first coordinate zero, and by induction $a + b$ is determined. If it has at least three nonzero coordinates, $(a, b, c, \ldots)$, then each of $a+b$, $a+c$, $b+c$ is determined and hence also $a$, $b$, $c$. If it has precisely two nonzero coordinates, $(a, n-a, 0, \ldots)$, then it has neighbors $(a-i, n-a, i, 0, \ldots)$ ($1 \le i \le a-1$) and $(a, n-a-j, j, 0, \ldots)$ ($1 \le j \le n-a-1$) of which all coordinates are known, and $a$ and $n-a$ follow unless $\{a, n-a\} = \{1, 2\}$. This settles all claims when $m > 3$, $n \ne m - 1$.
If $m = 3$, $n \ge 3$, we recall that the common neighbors of adjacent vertices $x$ and $y$ induce $K_{a+b-1} + K_{m-2} + K_g$, where $m - 2 = 1$ and $g \le 1$, so that $a + b$ can be recognized directly when $a + b \ge 3$. But we also know the zero pattern, so we can also recognize $a, b$ when $a + b \le 2$. This determines all labels for $m = 3$.
Finally, if $m = n + 1 \ge 4$, we have to distinguish the copy of $K_m$ on the vectors of shape $n^1 0^n$ from that on the vectors of shape $1^n 0$. Both sets have $m {m-1 \choose 2}$ neighbors, but if $x = (n, 0, \ldots)$ and $y = (n-a, a, \ldots)$, then $\Gamma(x,y) \cong 2K_{n-1}$, while $\Gamma(x,y) \cong K_{n-1} + K_{n-2} + K_1$ if $x = (1, \ldots, 1, 1, 0)$ and $y = (2, 1, \ldots, 1, 0, 0)$. This settles all cases.

The diameter of ${\rm SR}(m, n)$ equals $\min(m-1, n)$.

Proof The diameter is at most $m - 1$, since one can walk from one vertex to another and decrease the number of differing coordinates by at least one at each step. The diameter is also at most $n$, since one can walk from one vertex to another and decrease the sum of the absolute values of the coordinate differences by at least two at each step. If $m > n$, then $(0, \ldots, 0, n)$ and $(1, \ldots, 1, 0, \ldots, 0)$ show that the diameter is at least $n$. If $m \le n$, then $(0, \ldots, 0, n)$ and $(1, \ldots, 1, n-m+1)$ show that the diameter is at least $m - 1$.
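The diameter bounds proved here combine to $\min(m-1, n)$, which breadth-first search confirms for small cases. A pure-Python sketch (names ad hoc):

```python
from collections import deque

def vertices(m, n):
    # all length-m tuples of nonnegative integers with coordinate sum n
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

def diameter(m, n):
    V = vertices(m, n)
    nbrs = {u: [v for v in V if sum(1 for a, b in zip(u, v) if a != b) == 2]
            for u in V}
    best = 0
    for s in V:                      # BFS from every vertex
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in nbrs[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

for m, n in [(3, 2), (3, 4), (4, 2), (4, 3), (5, 3)]:
    assert diameter(m, n) == min(m - 1, n)
```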

Maximal cliques and local graphs
We classify the cliques (complete subgraphs) of $\Gamma$ and find the maximal ones. We also examine the structure of the local graphs of $\Gamma$.

Lemma 8.1 Cliques C in SR(m, n) are of three types:
1. All adjacencies are $(j,k)$-adjacencies for fixed $j, k$. Now $|C| \le n + 1$.
2. $C \subseteq \{x + a e_i \mid i \in I\}$ for some vector $x$, some integer $a > 0$, and some $I \subseteq \{1, \ldots, m\}$. Now $|C| \le m$.
3. $C \subseteq \{x - a e_i \mid i \in I\}$ for some vector $x$, some integer $a > 0$, and some $I \subseteq \{1, \ldots, m\}$ with $x_i \ge a$ for $i \in I$. Now $|C| \le m$.
Proof Suppose $u, v, w$ are pairwise adjacent, not all $(j,k)$-adjacent for the same pair $(j,k)$. Then $u, v$ are $(i,j)$-adjacent, $u, w$ are $(i,k)$-adjacent, and $v, w$ are $(j,k)$-adjacent, for certain $i, j, k$. Now $u_k = v_k$, $u_j = w_j$ and $v_i = w_i$, so that $u = x + a e_i$, $v = x + a e_j$, $w = x + a e_k$, where $a > 0$ or $a < 0$.

It follows that the maximum size of a clique in ${\rm SR}(m,n)$ (for $m > 1$, $n > 0$) is $\max(m, n+1)$.

Proof For $n > 0$, the $m$ vectors $n e_i$ are distinct and mutually adjacent, forming an $m$-clique. And for $m > 1$, the $n + 1$ vectors $a e_1 + (n-a) e_2$ ($0 \le a \le n$) form an $(n+1)$-clique. Conversely, no larger cliques occur, as we just saw.
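The two extremal cliques from this proof can be written down explicitly and checked. A pure-Python sketch (names ad hoc):

```python
def e(m, i):
    return tuple(1 if j == i else 0 for j in range(m))

def adjacent(u, v):
    return sum(1 for a, b in zip(u, v) if a != b) == 2

m, n = 4, 3
C1 = [tuple(n * x for x in e(m, i)) for i in range(m)]           # the m vectors n e_i
C2 = [tuple(a * x + (n - a) * y for x, y in zip(e(m, 0), e(m, 1)))
      for a in range(n + 1)]                                     # a e_1 + (n-a) e_2
assert len(set(C1)) == m and all(adjacent(u, v) for u in C1 for v in C1 if u != v)
assert len(set(C2)) == n + 1 and all(adjacent(u, v) for u in C2 for v in C2 if u != v)
```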
Fix a vertex $u$ of ${\rm SR}(m, n)$. We describe the structure of the local graph of $u$, that is, the graph induced by ${\rm SR}(m, n)$ on the set $U$ of neighbors of $u$. If $vw$ is an edge in this local graph, then $uvw$ is a clique in ${\rm SR}(m, n)$, so we can invoke the above classification. The set $U$ has a partition into ${m \choose 2}$ cliques of type 1, where the $(j,k)$-clique has size $u_j + u_k$. The set $U$ has a partition into $n$ cliques of type 2, each of size $m - 1$. Finally, $U$ has a partition into cliques of type 3.

Lemma 8.4 Let m, n ≥ 3, and fix a vertex u. Each neighbor v of u is contained in at most two maximal cliques precisely when u has only one nonzero coordinate.
Proof Suppose each point $v$ of $U$ is covered by at most two maximal cliques. Then one of the cliques of types 1 or 3 on $v$ in $U$ has size 1. This means that whenever $u_j + u_k \ge 2$, we have $u_i = 0$ for $i \ne j, k$. If $u_j \ge 2$, this means that $u$ has only one nonzero coordinate. If $u_j = u_k = 1$, this means that $n = 2$.
Suppose m, n ≥ 3. We see that we can retrieve V 1 as the set of vertices that are locally the union of two cliques.

Cospectral mates
For $m \le 2$ or $n \le 2$, the graph ${\rm SR}(m, n)$ is complete or triangular and hence determined by its spectrum, except in the case $m = 7$, $n = 2$, where it is isomorphic to the triangular graph $T(8)$ and cospectral with the three Chang graphs (cf. [4,5]). The graph ${\rm SR}(3, 3)$ is 6-regular on 10 vertices, and we find that its complement is cubic with spectrum $3^1 \, 2^1 \, 1^3 \, (-1)^2 \, (-2)^3$. All integral cubic graphs are known, and ${\rm SR}(3, 3)$ is uniquely determined by its spectrum, cf. [3], Sect. 3.8. We give some further cases where ${\rm SR}(m, n)$ is not determined by its spectrum: this holds for $m = 4$, $n \ge 3$, and for $n = 3$, $m \ge 4$.

Proof Apply Godsil-McKay switching (cf. [3,7], 1.8.3, 14.2.3). Switch with respect to a 4-clique $B$ such that every vertex outside $B$ is adjacent to 0, 2 or 4 vertices inside. If $m = 4$, take $B = \{n000, 0n00, 00n0, 000n\}$. If $n = 3$, $m \ge 2$, take $B = \{a e_1 + b e_2 \mid a + b = 3\}$. In both cases, every vertex outside $B$ is adjacent to 0 or 2 vertices inside. The switching operation preserves all edges and nonedges, except that it changes adjacency for pairs $bc$ with $b \in B$, $c \notin B$, and $c$ adjacent to 2 vertices of $B$, turning edges (resp. nonedges) into nonedges (resp. edges). The resulting graph has the same spectrum. We show that it is nonisomorphic to ${\rm SR}(m, n)$ for $m = 4$, $n \ge 3$ and for $n = 3$, $m \ge 4$.
In the former case, $B = V_1$. If switching does not change the isomorphism type, then $B$ must remain the $V_1$ of the new graph (since it is a single orbit of size $m$ contained in $V_1 \cup V_2$). But after switching, the common neighbors of $n000$ and $0n'10$ (with $n' = n - 1$) include the pairwise nonadjacent $0n'01$, $01n'0$, $001n'$, contradicting Lemma 8.4.
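The switching step can be replayed numerically for ${\rm SR}(4, 3)$: switch on $B = \{3000, 0300, 0030, 0003\}$ and compare spectra. This sketch assumes numpy is available and only checks cospectrality, not nonisomorphism:

```python
import numpy as np

def vertices(m, n):
    # all length-m tuples of nonnegative integers with coordinate sum n
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

m, n = 4, 3
V = vertices(m, n)
idx = {u: i for i, u in enumerate(V)}
A = np.zeros((len(V), len(V)), dtype=float)
for u in V:
    for v in V:
        if sum(1 for a, b in zip(u, v) if a != b) == 2:
            A[idx[u], idx[v]] = 1.0

B = [idx[tuple(n if j == i else 0 for j in range(m))] for i in range(m)]
S = A.copy()
for c in range(len(V)):
    if c in B:
        continue
    if sum(A[c, b] for b in B) == 2:        # switch adjacency to B for such c
        for b in B:
            S[c, b] = S[b, c] = 1.0 - S[c, b]

eigA = np.sort(np.linalg.eigvalsh(A))
eigS = np.sort(np.linalg.eigvalsh(S))
assert np.allclose(eigA, eigS)              # the switched graph is cospectral
```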

The eigenspace of the smallest eigenvalue
Fix $\pi \in {\rm Sym}(m)$, and let $a_i = \#\{j \mid i < j \text{ and } \pi_i > \pi_j\}$ for $1 \le i \le m$. Then $a = (a_i)$ is a vertex of ${\rm SR}(m, n)$ when $n$ is the number of inversions of $\pi$.
Say that $\sigma \in {\rm Sym}(m)$ is $\pi$-admissible if $a_i + i - \sigma_i \ge 0$ for $1 \le i \le m$. Let ${\rm Adm}(\pi)$ be the set of $\pi$-admissible permutations and define $x(\sigma)$ by $x(\sigma)_i = a_i + i - \sigma_i$. Then $\sigma \in {\rm Adm}(\pi)$ if and only if $x(\sigma)$ is a vertex of ${\rm SR}(m, n)$.

Theorem 11.1 (Martin and Wagner [9], Thm. 3.8) For each $\pi \in {\rm Sym}(m)$ with $n$ inversions, let $F_\pi = \sum_{\sigma \in {\rm Adm}(\pi)} {\rm sgn}(\sigma) \, e_{x(\sigma)}$. Then each $F_\pi$ is an eigenvector of ${\rm SR}(m, n)$ with eigenvalue $-n$, and the $F_\pi$ are linearly independent. Analogously, [9] constructs eigenvectors $F_{p,w}$: each $F_{p,w}$ is an eigenvector of ${\rm SR}(m, n)$ with eigenvalue $-{m \choose 2}$, and for fixed $w$, the collection of all such $F_{p,w}$ is linearly independent.
Picking $w = \frac{1}{2}(1-m, 3-m, \ldots, m-3, m-1)$ yields the lower bound already mentioned earlier: the multiplicity of the eigenvalue $-{m \choose 2}$ is at least ${n - {m \choose 2} + m - 1 \choose m - 1}$. For the eigenvalue $-n$, it follows that its multiplicity is at least the number of elements in ${\rm Sym}(m)$ with precisely $n$ inversions, and one conjectures that equality holds.
The proof of Theorem 11.1 shows that for each $\pi \in {\rm Sym}(m)$ with $n$ inversions, the set $X_\pi = \{x(\sigma) \mid \sigma \in {\rm Adm}(\pi)\}$ induces a bipartite subgraph $\Gamma(m, n, \pi)$ of ${\rm SR}(m, n)$ that is regular of valency $n$.
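The subgraph induced on $X_\pi$ can be inspected directly for a small permutation; here $\pi = (2,3,1)$, which has $n = 2$ inversions, and the induced subgraph turns out to be a 4-cycle ($Q_2$), as expected. A pure-Python sketch:

```python
from itertools import permutations

pi = (2, 3, 1)                 # a permutation in Sym(3) with n = 2 inversions
m = len(pi)
a = tuple(sum(1 for j in range(i + 1, m) if pi[i] > pi[j]) for i in range(m))
n = sum(a)                     # the number of inversions of pi

X = []
for sigma in permutations(range(1, m + 1)):
    x = tuple(a[i] + (i + 1) - sigma[i] for i in range(m))
    if all(c >= 0 for c in x):             # sigma is pi-admissible
        X.append(x)

def adjacent(u, v):
    return sum(1 for p, q in zip(u, v) if p != q) == 2

degs = [sum(adjacent(u, v) for v in X) for u in X]
assert set(degs) == {n}                    # induced subgraph is n-regular
```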

Proposition 11.3 For
It follows that classifying all $\Gamma(m, n, \pi)$ for fixed $n$ is a finite job. Let $Q_k$ denote the $k$-cube. Using Sage, we find for $n = 1$ that only $Q_1$ occurs, for $n = 2$ that only $Q_2$ occurs, for $n = 3$ that only $K_{3,3}$ and $Q_3$ occur, and for $n = 4$ that only $K_{3,3} \times K_2$ and $Q_4$ occur. For larger $n$, one finds more complicated shapes.
It was conjectured in [9] that all graphs $\Gamma(m, n, \pi)$ have integral spectrum.
Spectra for small m or n

If we fix a small value of $n$, we find a nice spectrum (eigenvalues and multiplicities are polynomials in $m, n$). If we fix a small value of $m \ge 3$, we get a messy result (congruence conditions also play a rôle). Below, multiplicities are written as exponents.
Let $a^m \downarrow b$ denote the sequence of eigenvalues and multiplicities found as follows: the eigenvalues are the integers $c$ with $a \ge c \ge b$, where the first multiplicity is $m$, and each following multiplicity is 2 larger for even $c$, and 10 larger for odd $c$. Now the conjectured spectrum of ${\rm SR}(4, n)$, $n \ge 6$, $n \ne 7$, consists of sequences of this type. For example, $-5$ has multiplicity $6n - 28$. The above is trivial for $m < 3$ or $n < 3$. It was done in [9] for $m = 3$ and will be done below for $n = 3, 4$. The suggested spectra for $n = 5$ were extrapolated from small cases. We have not attempted to write down a proof.

Proof In view of the common part of the spectra of ${\rm SR}(m, 3)$ and $J(m+2, 3)$, and the fact that $m(m^2-7)/6$ is the coefficient of $t^3$ in $\prod_{i=2}^{m} (1 + t + \cdots + t^{i-1})$ (for $m \ge 3$), and the fact that the stated multiplicities sum to the total number of vertices, it follows that we only have to show the presence of the part $(m-3)^{m-1}$.
Fix an index $h$, $1 \le h \le m$, and consider the vector $p$ indexed by the vertices that is 1 on the vertices $2e_h + e_i$, is $-1$ on the vertices $e_h + 2e_i$, and is 0 elsewhere. One checks that this is an eigenvector with eigenvalue $m - 3$, and the $m$ vectors defined in this way have only a single dependency (namely, they sum to 0).

Any eigenvector for one of these eigenvalues sums to zero on each part of the fine equitable partition found earlier, that is, on each set of vertices with given support. Since there are unique vertices with support of sizes 1 or 4, these eigenvectors are 0 there, and we need only look at the vertices $3e_i + e_j$ and $2e_i + 2e_j$ and $2e_i + e_j + e_k$.
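The first of these eigenvectors can be verified mechanically for ${\rm SR}(4, 3)$: with $h$ fixed, the vector that is 1 on $2e_h + e_i$ and $-1$ on $e_h + 2e_i$ should satisfy $Ap = (m-3)p$. A pure-Python sketch (0-based coordinates):

```python
def vertices(m, n):
    # all length-m tuples of nonnegative integers with coordinate sum n
    if m == 1:
        return [(n,)]
    return [(c,) + rest for c in range(n + 1) for rest in vertices(m - 1, n - c)]

m, n, h = 4, 3, 0              # h is the fixed coordinate (0-based here)
V = vertices(m, n)

def e(i):
    return tuple(1 if j == i else 0 for j in range(m))

p = {}
for i in range(m):
    if i != h:
        p[tuple(2 * a + b for a, b in zip(e(h), e(i)))] = 1    # 2e_h + e_i
        p[tuple(a + 2 * b for a, b in zip(e(h), e(i)))] = -1   # e_h + 2e_i

ok = True
for u in V:
    image = sum(p.get(v, 0) for v in V
                if sum(1 for a, b in zip(u, v) if a != b) == 2)
    ok = ok and image == (m - 3) * p.get(u, 0)                 # A p = (m-3) p
assert ok
```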
Fix an index $h$, $1 \le h \le m$, and consider the vector $p$ (indexed by the vertices) that vanishes on each vertex where $h$ is not in the support, is $-1$ on $2e_h + 2e_i$ and on $3e_h + e_i$, is 2 on $e_h + 3e_i$, is $-2$ on $2e_h + e_i + e_j$, and is 1 on $e_h + 2e_i + e_j$. One checks that this is an eigenvector with eigenvalue $2m - 5$ and that the $m$ vectors defined in this way are linearly independent. That settles the part $(2m-5)^m$.
Fix a pair of indices $h, i$, $1 \le h < i \le m$, and consider the vector $p$ (indexed by the vertices) that is 1 on $e_h + 3e_j$, $2e_i + 2e_j$ and $2e_h + e_i + e_j$, is $-1$ on $e_i + 3e_j$, $2e_h + 2e_j$ and $e_h + 2e_i + e_j$, and is 0 elsewhere. One checks that this is an eigenvector with eigenvalue $m - 6$ and that the ${m \choose 2}$ vectors defined in this way are linearly independent. That settles the part $(m-6)^{m \choose 2}$. Having found all desired eigenvalues except one, it is not necessary to construct eigenvectors for the final one, since checking $\sum \theta = {\rm tr}\, A = 0$ and $\sum \theta^2 = {\rm tr}\, A^2 = vk$ suffices.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.