1 Introduction

A tensegrity graph is defined as a graph \(G = ([n],E)\) where the edge set E is partitioned into three disjoint sets B, C, and S. The elements of B, C, and S are called bars, cables, and struts respectively. A tensegrity framework \(G({\mathbf{p}})\) consists of a tensegrity graph G and an assignment of real vectors \({\mathbf{p}}= (p_1, p_2, \ldots , p_n)\) to the vertices of G. In this work we consider the vectors \(p_i\) defining a tensegrity framework as column vectors. For a tensegrity framework \(G({\mathbf{p}})\) where \(p_i\in \mathbb {R}^{d}\) for all \(i\in [n]\) we write \(G({\mathbf{p}})\subseteq \mathbb {R}^d\). The framework matrix associated to a tensegrity framework \(G({\mathbf{p}})\subseteq \mathbb {R}^d\), usually denoted by P, is the \(n\times d\) matrix whose \(i\text {th}\) row is given by the vector \(p_i^{T}\). The Gram matrix of a tensegrity framework \(G({\mathbf{p}})\), denoted by \(\hbox {Gram}(p_1,\ldots ,p_n),\) is defined as the symmetric \(n\times n\) matrix whose \((i,j)\) entry is given by \(p_i^{T}p_j\) for all \(i,j\in [n]\). Note that for any tensegrity framework \(G({\mathbf{p}})\) we have that \(\hbox {Gram}(p_1,\ldots ,p_n)=\textit{PP}^{T}\) and that \(\mathrm{rank}(\hbox {Gram}(p_1,\ldots ,p_n))=\dim {{\mathrm{span}}}(p_1,\ldots ,p_n).\)
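As a concrete illustration (not taken from the paper), the framework matrix and Gram matrix can be computed in a few lines of numpy; the vectors below are arbitrary.

```python
import numpy as np

# Toy framework G(p) with n = 4 vertices in R^2; the vectors are
# chosen only for illustration.
p = [np.array([1.0, 0.0]),
     np.array([0.0, 1.0]),
     np.array([-1.0, 0.0]),
     np.array([1.0, 1.0])]

# Framework matrix P: the i-th row is p_i^T, so P is n x d.
P = np.vstack(p)

# Gram matrix: its (i, j) entry is p_i^T p_j, i.e. Gram = P P^T.
gram = P @ P.T

# rank(Gram(p_1, ..., p_n)) = dim span(p_1, ..., p_n) = rank(P).
assert np.linalg.matrix_rank(gram) == np.linalg.matrix_rank(P)
```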

Consider two tensegrity frameworks \(G({\mathbf{p}})\) and \(G({\mathbf{q}})\). We say that \(G({\mathbf{p}})\) dominates \(G({\mathbf{q}})\) if the following three conditions hold:

  1. (i)

    \(q_i^{T}q_j = p_i^{T}p_j, \text { for } ij \in B \text { and for } i = j; \)

  2. (ii)

    \(q_i^{T}q_j \ge p_i^{T}p_j, \text { for } ij \in C;\)

  3. (iii)

    \(q_i^{T}q_j \le p_i^{T}p_j, \text { for } ij \in S. \)

We use the notation \(G({\mathbf{p}})\succeq G({\mathbf{q}})\) to indicate that \(G({\mathbf{p}})\) dominates \(G({\mathbf{q}})\). Furthermore, two tensegrity frameworks \(G({\mathbf{p}})\) and \(G({\mathbf{q}})\) are called congruent if

$$\begin{aligned} \hbox {Gram}(p_1,\ldots ,p_n)=\hbox {Gram}(q_1,\ldots ,q_n). \end{aligned}$$

Lastly, a tensegrity framework \(G({\mathbf{p}})\) is called universally completable if it is congruent to any framework it dominates, i.e.,

$$\begin{aligned} G({\mathbf{p}})\succeq G({\mathbf{q}}) \ \Longrightarrow \ \hbox {Gram}(p_1,\ldots ,p_n)=\hbox {Gram}(q_1,\ldots ,q_n). \end{aligned}$$
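Conditions (i)–(iii) and the congruence test translate directly into numerical checks on Gram matrices. A minimal numpy sketch (the helper names and tolerance are our own):

```python
import numpy as np

def dominates(P, Q, bars, cables, struts, tol=1e-9):
    """Conditions (i)-(iii): does the framework with matrix P dominate
    the one with matrix Q?  Edges are given as (i, j) pairs."""
    X, Y = P @ P.T, Q @ Q.T                 # Gram matrices
    n = X.shape[0]
    if any(abs(Y[i, i] - X[i, i]) > tol for i in range(n)):
        return False                        # (i) equality on the diagonal
    if any(abs(Y[i, j] - X[i, j]) > tol for i, j in bars):
        return False                        # (i) equality on bars
    if any(Y[i, j] < X[i, j] - tol for i, j in cables):
        return False                        # (ii) cables: q_i^T q_j >= p_i^T p_j
    if any(Y[i, j] > X[i, j] + tol for i, j in struts):
        return False                        # (iii) struts: q_i^T q_j <= p_i^T p_j
    return True

def congruent(P, Q, tol=1e-9):
    """Congruence: identical Gram matrices."""
    return np.allclose(P @ P.T, Q @ Q.T, atol=tol)
```

Since both relations only involve the Gram matrices, applying any orthogonal matrix to the vectors \(q_i\) preserves them.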

The notion of universal completability was originally introduced and investigated in [19] due to its relevance to the low-rank positive semidefinite matrix completion problem. Recall that a (real) symmetric \(n\times n\) matrix Z is called positive semidefinite (psd), denoted by \(Z\in \mathcal {S}^n_+\), if all of its eigenvalues are nonnegative. To illustrate the connection, consider a tensegrity framework \(G({\mathbf{p}})\), let \(X:=\hbox {Gram}(p_1,\ldots ,p_n)\) be the corresponding Gram matrix and define

$$\begin{aligned} \mathcal {S}(G,{\mathbf{p}}):=\big \{ Z\in \mathcal {S}^n_+: Z_{ij}&= p_i^{T}p_j \quad \text { if } ij \in B \text { or } i = j; \\ Z_{ij}&\ge p_i^{T}p_j \quad \text { if } ij \in C; \\ Z_{ij}&\le p_i^{T}p_j \quad \text { if } ij \in S\big \}, \end{aligned}$$

where B, C, and S are the sets of bars, cables, and struts respectively. The set \(\mathcal {S}(G,{\mathbf{p}})\) consists of all \(n\times n\) psd matrices with the following properties: (i) diagonal entries and entries corresponding to bars coincide with the respective entries of X, (ii) entries corresponding to cables are lower bounded by the respective entries of X, and (iii) entries corresponding to struts are upper bounded by the respective entries of X.

As a matrix is psd if and only if it is the Gram matrix of a family of vectors, it follows that \(\hbox {Gram}(p_1,\ldots ,p_n)\) is an element of \(\mathcal {S}(G,{\mathbf{p}})\), and that \(G({\mathbf{p}})\) is universally completable if and only if \(\hbox {Gram}(p_1,\ldots ,p_n)\) is the unique element of \(\mathcal {S}(G,{\mathbf{p}})\).

For the remainder of this section we consider the special case where both S and C in the definition of \(\mathcal {S}(G,{\mathbf{p}})\) are empty. In this setting, any tensegrity framework \(G({\mathbf{p}})\) defines a G-partial matrix, i.e., a matrix whose entries are only specified along the diagonal and at the off-diagonal positions corresponding to edges of G. In this case we have that

$$\begin{aligned} \mathcal {S}(G,{\mathbf{p}})=\big \{ Z\in \mathcal {S}^n_+: Z_{ij} = p_i^{T}p_j \text { if } ij \in B \text { or } i = j\big \}, \end{aligned}$$

and any element of \(\mathcal {S}(G,{\mathbf{p}})\) is called a psd completion of the G-partial matrix specified by the tensegrity framework \(G({\mathbf{p}})\).

Given a framework \(G({\mathbf{p}})\), a question of fundamental interest is to identify the smallest rank of an element in \(\mathcal {S}(G,{\mathbf{p}})\). An important instance of this problem is the low-rank correlation matrix completion problem. The correlation matrix of a family of random variables \(X_1, \ldots , X_n\) is the matrix whose ij-entry is equal to the correlation of \(X_i\) and \(X_j\). Equivalently, a matrix is a correlation matrix of some family of random variables if and only if it is psd with all diagonal entries equal to one [25]. The rank of a correlation matrix turns out to be equal to the number of uncorrelated random variables. Consequently, identifying a psd completion of smallest rank for a partial correlation matrix corresponds to finding the simplest model compatible with the observed correlations.

The Gram dimension of a graph G, denoted by \(\mathrm{gd}(G),\) was introduced in [20] to address the low-rank psd matrix completion problem described above. It is defined as the smallest integer \(k\ge 1\) with the following property: For any framework \(G({\mathbf{p}})\) there exists an element \(Z\in \mathcal {S}(G,{\mathbf{p}})\) such that \(\mathrm{rank} (Z)\le k\). Notice that the Gram dimension is a well-defined graph parameter as it is bounded above by the number of vertices of G. Furthermore, it was shown in [20] that \(\mathrm{gd}(\cdot )\) is minor-monotone, i.e., whenever H is a minor of G we have that \(\mathrm{gd}(H)\le \mathrm{gd}(G)\). By the graph minor theorem of Robertson and Seymour it follows that for any fixed integer \(k\ge 1\), the class of graphs satisfying \(\mathrm{gd}(G)\le k\) can be characterized by a finite list of minimal forbidden minors. The complete list of forbidden minors was identified for \(k\in \{1,2,3,4\}\) in [20].

To show that a graph G is a forbidden minor for the class of graphs satisfying \(\mathrm{gd}(H)\le k\) we need to construct a framework \(G({\mathbf{p}})\) for which any \(Z\in \mathcal {S}(G,{\mathbf{p}})\) satisfies \(\mathrm{rank}(Z)> k\). Clearly, placing a lower bound on the rank of all elements of \(\mathcal {S}(G,{\mathbf{p}})\) is a challenging task. Nevertheless, there is one special case where this can be achieved: if there exists a unique psd completion of the G-partial matrix specified by \(G({\mathbf{p}})\), i.e., \(\mathcal {S}(G,{\mathbf{p}})=\{\hbox {Gram}(p_1,\ldots ,p_n)\}\). Consequently, if \(G({\mathbf{p}})\) is a universally completable framework with \(\dim {{\mathrm{span}}}(p_1,\ldots ,p_n)>k\) then G is a (not necessarily minimal) forbidden minor for the class of graphs satisfying \(\mathrm{gd}(G)\le k\). This is the approach taken in [19], where the notion of universal completability was introduced, to identify families of forbidden minors for the Gram dimension.

The use of the term “universally completable” instead of the perhaps more intuitive “uniquely completable” stems from a close relationship to the theory of universal rigidity [5]. A framework \(G({\mathbf{p}})\) is called universally rigid if for any other framework \(G({\mathbf{q}})\subseteq \mathbb {R}^d\) (for any \(d\ge 1\)) the following holds:

$$\begin{aligned}&\Vert q_i - q_j\Vert = \Vert p_i - p_j\Vert \quad \text { for all } i \sim j \\&\quad \Longrightarrow \Vert q_i - q_j\Vert = \Vert p_i - p_j\Vert \quad \text { for all } i,j\in [n]. \end{aligned}$$

Here \(\Vert \cdot \Vert \) denotes the usual Euclidean norm of a vector.

Universal completability can be thought of as a “spherical analog” of universal rigidity, where norms of differences are replaced by inner products. In fact, their relationship can be made more precise, as was done in [19]. Specifically, given a framework \(G({\mathbf{p}})\), let \(G'\) be the apex graph obtained from G by adding a vertex, labelled by 0, adjacent to every vertex in G, and let \({\mathbf{p}}' = (p_0, p_1, \ldots , p_n)\) where \(p_0\) is the zero vector. Then it is not difficult to see that \(G({\mathbf{p}})\) is universally completable if and only if \(G'({\mathbf{p}}')\) is universally rigid. Thus, this entire paper could have been written in terms of universal rigidity of apex graphs. Nevertheless, when working in the setting of psd completions, as psd matrices are defined in terms of inner products, it is more natural to work with the notion of universal completability. On the other hand, for problems arising in discrete geometry and rigidity theory the notion of universal rigidity is more suitable. For a detailed comparison between these two notions the reader is referred to [19].
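The apex construction is easy to make explicit. The following sketch (our own helper, not from [19]) builds the adjacency matrix of \(G'\) and the extended framework with \(p_0\) the zero vector:

```python
import numpy as np

def apex_framework(A, P):
    """Build the apex graph G' (a new vertex 0 adjacent to every
    vertex of G) and the extended framework p' = (p_0, p_1, ..., p_n)
    with p_0 = 0, as in the reduction of universal completability to
    universal rigidity."""
    n = A.shape[0]
    A_apex = np.zeros((n + 1, n + 1))
    A_apex[1:, 1:] = A
    A_apex[0, 1:] = A_apex[1:, 0] = 1       # vertex 0 joined to all of G
    P_apex = np.vstack([np.zeros((1, P.shape[1])), P])
    return A_apex, P_apex
```

Since \(p_0 = 0\), the edge 0i of \(G'\) fixes \(\Vert p_i\Vert ^2 = p_i^{T}p_i\), and combined with \(\Vert p_i - p_j\Vert ^2 = p_i^{T}p_i + p_j^{T}p_j - 2p_i^{T}p_j\) on the edges of G this recovers the inner-product constraints.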

1.1 Contributions and Related Work

The main result in [19] is a sufficient condition for showing that a framework is universally completable (cf. Theorem 2.5). This condition consists of two parts: (1) the existence of a “stress matrix” for the framework and (2) certain algebraic constraints on the vectors defining the framework. Our first result in this paper is an explicit description of the set of Gram matrices of the tensegrity frameworks that are dominated by a framework \(G({\mathbf{p}})\) that has a stress matrix (cf. Theorem 2.3). Based on this we immediately get a necessary and sufficient condition for the universal completability of tensegrity frameworks (that admit a stress matrix) and furthermore, recover the sufficient condition from [19]. We note that our proof is a slight modification of the main result from [19].

The sufficient condition for universal completability is used in [19] to give universally completable frameworks for several graph classes. On the other hand, these constructions were all ad hoc and no systematic procedure was given to identify universally completable frameworks for an arbitrary graph. In this paper we address this problem by using least eigenvalue frameworks (LEF), i.e., graph embeddings obtained using the eigenvectors corresponding to the least eigenvalue of the adjacency matrix of the graph. LEFs are central to this work as they always have an appropriate stress matrix. Based on this, we identify a necessary and sufficient condition for showing that the LEF of an arbitrary graph is universally completable (cf. Theorem 3.4). As an application we construct universally completable frameworks for the Kneser graph \(K_{n:r}\) and the q-Kneser graph \(qK_{n:r}\) for all \(n \ge 2r+1\) (cf. Theorems 3.17 and 3.18).

To further substantiate the usefulness of LEFs we investigate how easy it is to test the condition for universal completability of LEFs and furthermore, how often it happens that a LEF is universally completable. Using results from [19] we can rephrase our necessary and sufficient condition for universal completability of LEFs in terms of the Strong Arnold Property (cf. Proposition 3.8). This version of our result gives an algorithm for deciding whether a LEF is universally completable. The algorithm only involves solving a system of \(|V(G)|^2\) linear equations in \(|E(\overline{G})|\) variables. Using this we show that 1293 out of the 1326 connected Cayley graphs for \(\mathbb {Z}_2^n \ (n \le 5)\) have universally completable LEFs.

In the second part of this paper we focus on uniquely vector colorable (UVC) graphs, i.e., graphs for which one of the semidefinite programming formulations corresponding to the Lovász theta number of the complementary graph admits a unique optimal solution. The study of UVC graphs was initiated by Pak and Vilenchik in [22] where the first family of UVC graphs was given. We identify an interesting connection between universal completability and UVC graphs. Specifically, we give a sufficient condition for showing that \(i\mapsto p_i\) is the unique optimal vector coloring by means of the universal completability of an appropriately defined tensegrity framework (cf. Theorem 4.5). This condition combined with our results on universal completability of LEFs shows that odd cycles, Kneser and q-Kneser graphs are UVC (cf. Theorems 4.7 and 4.8). Lastly, we study 1-walk-regular graphs (cf. Definition 4.9) for which we provide a full description of the set of optimal vector colorings (cf. Theorem 4.10). This yields a necessary and sufficient condition for a 1-walk-regular graph to be UVC.

Our sufficient condition for showing that a graph is UVC generalizes the approach taken by Pak and Vilenchik [22]. Specifically, the main result in [22] is that the categorical product of a complete graph \(K_r\) with a regular graph H (satisfying some additional spectral conditions) is UVC; this is proved by showing that a certain framework corresponding to the apex graph of \(K_r\times H\) is universally rigid. Furthermore, the framework they consider is a (generalized) LEF. Consequently, recalling the equivalence between the universal completability of a framework and the universal rigidity of the corresponding apex framework, their construction can be recovered as a special case of our sufficient condition (cf. Theorem 4.6). Nevertheless, our sufficient condition is more general as it applies to arbitrary graphs (not necessarily of the form \(K_r\times H\)) and in particular, it can also handle the case of non-regular graphs.

We note here that the use of eigenvector techniques in the investigation of various graph properties has a long history, and we are certainly not the first to make use of the eigenvectors corresponding to the least eigenvalue of a graph. As far back as the early 90’s Alon and Kahale [1] used a linear combination of eigenvectors for the smallest two eigenvalues of certain graphs to approximate a 3-coloring, and more generally the idea of partitioning graphs according to eigenvectors goes back at least to the 70’s with the work of Fiedler [7]. More recently, similar techniques have been used for the community detection problem [16].

Note A subset of the results appearing in this paper were published as an extended abstract in the proceedings of the 8th European Conference on Combinatorics, Graph Theory and Applications, EuroComb 2015 [11].

2 Universal Completability

2.1 Basic Definitions and Notation

2.1.1 Linear algebra

We denote by \(e_i\) the \(i\text {th}\) standard basis vector and by \(\vec {1}\) the all ones vector. Furthermore, we denote by \({{\mathrm{span}}}(p_1,\ldots ,p_n)\) the linear span of the vectors \(\{p_i\}_{i=1}^n\). The set of \(n\times n\) real symmetric matrices is denoted by \(\mathcal {S}^n\), and the set of matrices in \(\mathcal {S}^n\) with nonnegative eigenvalues, i.e. the real positive semidefinite matrices, is denoted by \(\mathcal {S}^n_+\). Given a matrix \(X\in \mathcal {S}^n\) we denote its kernel/null space by \(\hbox {Ker }X\) and its image/column space by \(\hbox {Im }X\). The corank of a matrix \(X\in \mathcal {S}^n\), denoted \(\hbox {corank }X\), is defined as the dimension of \(\hbox {Ker }X\). The Schur product of two matrices \(X,Y\in \mathcal {S}^n\), denoted by \(X\circ Y\), is the matrix whose entries are given by \((X\circ Y)_{ij}=X_{ij}Y_{ij}\) for all \(i,j\in [n]\). Lastly, the direct sum of two matrices \(X,Y\in \mathcal {S}^n\), denoted by \(X\oplus Y\), is given by the matrix

$$\begin{aligned} \begin{pmatrix} X&{}\quad 0\\ 0 &{} \quad Y\end{pmatrix}. \end{aligned}$$

A matrix \(X\in \mathcal {S}^n\) has real eigenvalues, and we denote the smallest one by \(\lambda _\mathrm{min}(X)\).
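For concreteness, the notation above can be mirrored in numpy (the matrices are arbitrary illustrations):

```python
import numpy as np

X = np.array([[2.0, 1.0],
              [1.0, 2.0]])
Y = np.array([[0.0, 3.0],
              [3.0, 0.0]])

schur = X * Y                                     # Schur product (entrywise)
direct_sum = np.block([[X, np.zeros((2, 2))],     # direct sum of X and Y
                       [np.zeros((2, 2)), Y]])
corank_X = X.shape[0] - np.linalg.matrix_rank(X)  # corank X = dim Ker X
lam_min = np.linalg.eigvalsh(X)[0]                # smallest eigenvalue of X

assert corank_X == 0 and np.isclose(lam_min, 1.0)
```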

2.1.2 Graph theory

Given a graph \(G=([n],E)\) we write \(i\sim j\) to indicate that the vertices \(i,j\in [n]\) are adjacent and \(i\simeq j\) to indicate that \(i=j\) or \(i\sim j\). For a vertex \(i\in [n]\), we use N[i] to denote the closed neighborhood of i, i.e. \(N[i]:=\{i\}\cup \{j \in [n] : j \sim i\}\). Also, for any subset \(S\subseteq [n]\), we write \(G\setminus S\) to denote the subgraph of G induced by the vertices in \([n]{\setminus }S\). We denote by \(\overline{G}\) the complement of the graph G. A clique in a graph is a subset of pairwise adjacent vertices, and an independent set is a subset of pairwise nonadjacent vertices. A graph G is called split if there exists a partition \(\{C, I\}\) of the vertex set such that C is a clique in G and I is an independent set in G. The Cayley graph over a group \(\Gamma \) with inverse-closed connection set \(C\subseteq \Gamma {\setminus }\{\text {id}\}\) has the elements of \(\Gamma \) as its vertices, such that vertices \(a, b\in \Gamma \) are adjacent if \(a - b \in C\) (writing the group operation additively). If the group \(\Gamma \) is abelian, then there exists a simple description of the eigenvalues and eigenvectors of the adjacency matrix of the Cayley graph. Specifically, if \(\chi \) is a character of \(\Gamma \), then the vector \((\chi (a))_{a\in \Gamma }\) is an eigenvector with corresponding eigenvalue \(\sum _{c \in C}\chi (c)\). Moreover, ranging over all characters of \(\Gamma \) provides a full set of eigenvectors.
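For \(\Gamma = \mathbb {Z}_2^n\) (the case relevant to Sect. 1.1) every element is its own inverse and the characters are \(\chi _b(a) = (-1)^{b\cdot a}\), so the eigenvalues can be listed directly from the connection set. A small sketch (helper names are ours):

```python
import itertools
import numpy as np

def cayley_Z2n(n, conn):
    """Adjacency matrix of the Cayley graph over Z_2^n with connection
    set conn (nonzero 0/1 tuples; inverse-closed automatically, since
    every element of Z_2^n is its own inverse)."""
    elems = list(itertools.product([0, 1], repeat=n))
    idx = {e: k for k, e in enumerate(elems)}
    A = np.zeros((2 ** n, 2 ** n))
    for a in elems:
        for c in conn:
            b = tuple((x + y) % 2 for x, y in zip(a, c))
            A[idx[a], idx[b]] = 1
    return A, elems

def character_eigenvalues(n, conn):
    """Each character chi_b(a) = (-1)^(b . a) yields the eigenvalue
    sum over c in conn of chi_b(c)."""
    elems = list(itertools.product([0, 1], repeat=n))
    return sorted(sum((-1) ** (sum(bi * ci for bi, ci in zip(b, c)) % 2)
                      for c in conn)
                  for b in elems)

# Example: the 3-cube, i.e. the Cayley graph of Z_2^3 with the
# standard basis as connection set.
conn = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
A, _ = cayley_Z2n(3, conn)
assert np.allclose(sorted(np.linalg.eigvalsh(A)),
                   character_eigenvalues(3, conn))
```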

2.2 A Sufficient Condition for Universal Completability

In this section we show that under appropriate assumptions we can derive a complete description for the set of tensegrity frameworks that a fixed tensegrity framework \(G({\mathbf{p}})\) dominates (cf. Theorem 2.3). As an immediate consequence, we identify a sufficient condition for showing that a tensegrity framework is universally completable as in [19]. Furthermore, this result is used in Sect. 4 where we study uniquely vector colorable graphs.

We start with a definition which is central to the results in this section.

Definition 2.1

Consider a tensegrity framework \(G({\mathbf{p}})\subseteq \mathbb {R}^d\) and let P be the corresponding framework matrix. A spherical stress matrix for \(G({\mathbf{p}})\) is a symmetric matrix \(Z \in \mathcal {S}^n\) with the following properties:

  1. (i)

    Z is positive semidefinite;

  2. (ii)

    \(Z_{ij} = 0\) whenever \(i \ne j, \ ij \not \in E\);

  3. (iii)

    \(Z_{ij} \ge 0\) for all (struts) \(ij \in S\) and \(Z_{ij} \le 0\) for all (cables) \(ij \in C\);

  4. (iv)

    \(ZP = 0\);

  5. (v)

    \(\hbox {corank }Z=\dim {{\mathrm{span}}}(p_1, \ldots , p_n)\).

Although the above definition may make it seem like the existence of a spherical stress matrix is rare, the frameworks we will consider in this paper always admit a natural spherical stress matrix. Moreover, for any framework that is an optimal vector coloring (see Sect. 4), there always exists a (nonzero) matrix satisfying all but possibly condition (v) of being a spherical stress matrix.
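Conditions (i)–(v) are straightforward to verify numerically for a candidate matrix Z. The following sketch (our own helper, with ad hoc tolerances) checks them against a framework given by its framework matrix:

```python
import numpy as np

def is_spherical_stress_matrix(Z, A, P, struts=(), cables=(), tol=1e-9):
    """Check conditions (i)-(v) of Definition 2.1 for a candidate Z,
    where A is the adjacency matrix of G and P the framework matrix."""
    n = A.shape[0]
    if np.linalg.eigvalsh(Z).min() < -tol:                    # (i) psd
        return False
    for i in range(n):                                        # (ii) zero on non-edges
        for j in range(n):
            if i != j and A[i, j] == 0 and abs(Z[i, j]) > tol:
                return False
    if any(Z[i, j] < -tol for i, j in struts):                # (iii) signs on struts
        return False
    if any(Z[i, j] > tol for i, j in cables):                 # (iii) signs on cables
        return False
    if not np.allclose(Z @ P, 0, atol=1e-8):                  # (iv) ZP = 0
        return False
    corank = n - np.linalg.matrix_rank(Z, tol=1e-8)           # (v) corank condition
    return corank == np.linalg.matrix_rank(P, tol=1e-8)
```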

We continue with a simple lemma we use in the proof of our main result below.

Lemma 2.2

Let \(X\in \mathcal {S}^n_+\) and \(Y\in \mathcal {S}^n\) satisfy \(\mathrm{Ker}\, X \subseteq \mathrm{Ker}\, Y\). If \(X=\textit{PP}^{T}\) then there exists a symmetric matrix R such that

$$\begin{aligned} Y=PRP^{T}\quad \text {and} \quad \mathrm{Im}\, R\subseteq \mathrm{Im}\, P^{T}. \end{aligned}$$

Proof

We first prove the claim in the case that P has full column rank. In this case we can extend P to a full-rank square matrix \(\tilde{P}\) and define the symmetric matrix \(\tilde{R}:=\tilde{P}^{-1}Y(\tilde{P}^{-1})^{T}\). By assumption we have that \(\hbox {Ker }X \subseteq \hbox {Ker }Y\) and thus

$$\begin{aligned} \hbox {Ker }X \oplus 0 = \hbox {Ker }\tilde{P}(I\oplus 0)\tilde{P}^{T}\subseteq \hbox {Ker }\tilde{P}\tilde{R}\tilde{P}^{T}= \hbox {Ker }Y \oplus 0. \end{aligned}$$

Since \(\tilde{P}\) is invertible this gives that \(\mathrm{Ker}(I\oplus 0)\subseteq \hbox {Ker }\tilde{R}\) and thus \(\tilde{R}=R \oplus 0.\) This shows that \(Y=PRP^{T}\) for some symmetric matrix R. In this case we assumed that P has full column rank, and so \(\hbox {Im }R \subseteq \hbox {Im }P^{T}\) holds trivially since the latter is the whole space.

Lastly, we consider the case when P does not have full column rank. We have that \(X = QQ^{T}\) for some full column-rank matrix Q, and thus by the above there exists a symmetric matrix \(R'\) such that \(Y=QR'Q^{T}\). Since \(\hbox {Im }Q = \hbox {Im }X =\hbox {Im }P\) there exists a matrix U such that \(Q=PU\) and therefore \(Y=PUR'U^{T}P^{T}\). If we let E be the orthogonal projection onto \(\hbox {Im }P^{T}\), then \(EP^{T}= P^{T}\) and \(PE = P\) since E is symmetric. Thus \(Y = PEUR'U^{T}EP^{T}\). Letting \(R = EUR'U^{T}E\) completes the proof. \(\square \)

In [19] it is shown that if there exists a spherical stress matrix for a framework \(G({\mathbf{p}})\), where \({\mathbf{p}}\) spans the space it lives in, and \(R= 0\) is the unique symmetric matrix satisfying condition (b) in Theorem 2.3 below, then \(G({\mathbf{p}})\) is universally completable. Our main result of this section, presented here, slightly generalizes the proof of this result to obtain a characterization of all frameworks dominated by \(G({\mathbf{p}})\), assuming that there exists a spherical stress matrix for \(G({\mathbf{p}})\).

Theorem 2.3

Consider a tensegrity framework \(G({\mathbf{p}})\subseteq \mathbb {R}^d\) and let \(P\in \mathbb {R}^{n\times d}\) be the corresponding framework matrix. Let \(Z\in \mathcal {S}_+^n\) be a spherical stress matrix for \(G({\mathbf{p}})\). The framework \(G({\mathbf{p}})\) dominates the framework \(G({\mathbf{q}})\) if and only if

$$\begin{aligned} \mathrm{Gram}(q_1, \ldots , q_n) = \textit{PP}^{T}+ \textit{PRP}^{T}, \end{aligned}$$
(1)

where R is a symmetric \(d\times d\) matrix satisfying:

  1. (a)

    \(\mathrm{Im}\, R \subseteq {{\mathrm{span}}}(p_1,\ldots ,p_n)\);

  2. (b)

    \(p_i^{T}R p_j = 0 \text { for } i = j \text { and } ij \in B \cup \{\ell k\in C\cup S : Z_{\ell k} \ne 0 \};\)

  3. (c)

    \(p_i^{T}R p_j \ge 0 \text { for } ij \in C;\)

  4. (d)

    \(p_i^{T}R p_j \le 0 \text { for } ij \in S.\)

Proof

Assume there exists a matrix \(R\in \mathcal {S}^d\) satisfying (a)–(d) such that \(\mathrm{Gram}(q_1, \ldots , q_n) = \textit{PP}^{T}+ \textit{PRP}^{T}\). This shows that \(q_i^{T}q_j = p_i^{T}p_j + p_i^{T}R p_j\) for all \(i,j\in [n]\), and using (b)–(d) it follows that \(G({\mathbf{p}})\succeq G({\mathbf{q}})\). For the converse implication, suppose that \(G({\mathbf{p}})\succeq G({\mathbf{q}})\) and define \(X := \textit{PP}^{T}= \text {Gram}(p_1, \ldots , p_n)\) and \(Y := \text {Gram}(q_1, \ldots , q_n)\). Since Z is a spherical stress matrix for \(G({\mathbf{p}})\), condition (iv) implies that \(ZX = 0\). This shows that \(\hbox {Im }X \subseteq \hbox {Ker }Z\). By (v) we have \(\hbox {corank }Z = \mathrm{rank}\ X\) and thus \(\hbox {Ker }X = \hbox {Im }Z\). Furthermore, since \(G({\mathbf{p}})\) dominates \(G({\mathbf{q}})\) and using the fact that Y and Z are positive semidefinite, we have that

$$\begin{aligned} 0 \le {{\mathrm{Tr}}}(ZY) = \sum _{ i \simeq j} Z_{ij}Y_{ij} \le \sum _{i \simeq j} Z_{ij}X_{ij} = {{\mathrm{Tr}}}(ZX) = 0, \end{aligned}$$
(2)

and thus (2) holds throughout with equality. In particular, again using that Y and Z are positive semidefinite, \({{\mathrm{Tr}}}(YZ) = 0\) implies that \(YZ = 0\) and therefore \(\hbox {Ker }Y \supseteq \hbox {Im }Z = \hbox {Ker }X\). If v is a vector such that \(Xv = 0\), then by the above \(Yv = 0\) and therefore \((Y-X)v = 0\). This implies that \(\mathrm{Ker}(Y - X) \supseteq \hbox {Ker }X\). By Lemma 2.2 we have that \(Y-X = \textit{PRP}^{T}\) for some symmetric matrix R with \(\hbox {Im }R \subseteq \hbox {Im } P^{T}= {{\mathrm{span}}}(p_1, \ldots , p_n)\). By assumption \(G({\mathbf{p}})\succeq G({\mathbf{q}})\) and thus \(p_i^{T}R p_j \le 0 \text { for } ij \in S\), \(p_i^{T}R p_j \ge 0 \text { for } ij \in C\), and \(p_i^{T}Rp_j = 0 \text { for } ij \in B \text { and } i =j\). By (2) we have that \(\sum _{i\simeq j}Z_{ij}(X_{ij}-Y_{ij})=0\) and since \(Z_{lk}(X_{lk}-Y_{lk})\ge 0\) for all \(lk\in C\cup S\) it follows that \(X_{lk} = Y_{lk}\) for all \(lk \in C \cup S\) with \(Z_{lk} \ne 0\). \(\square \)

Remark 2.4

Note that the matrix R in the statement of Theorem 2.3 satisfies \(I+R \succeq 0\). This is because \(\hbox {Im }R \subseteq \hbox {Im }P^{T}\) and \(P(I+R)P^{T}\) is psd since it is a Gram matrix. Conversely, if R is any matrix satisfying conditions (a)–(d) with \(I+R \succeq 0\), then \(P(I+R)P^{T}\) is the Gram matrix of a framework dominated by \(G({\mathbf{p}})\).

As an immediate consequence of Theorem 2.3 we get the following sufficient condition for showing that a tensegrity framework \(G({\mathbf{p}})\) is universally completable. This is essentially the same sufficient condition as that given in [19].

Theorem 2.5

Consider a tensegrity framework \(G({\mathbf{p}})\subseteq \mathbb {R}^d\) and let \(Z\in \mathcal {S}^n_+\) be a spherical stress matrix for \(G({\mathbf{p}})\). If \(R=0\) is the unique symmetric matrix satisfying conditions \((\mathrm{a})\)\((\mathrm{d})\) of Theorem 2.3, then \(G({\mathbf{p}})\) is universally completable.

We note that the existence of a spherical stress matrix is not a requirement for a framework to be universally completable. For an example of a framework which is universally completable but admits no spherical stress matrix see [19].

3 Least Eigenvalue Frameworks

3.1 Definition and Basic Properties

In order to use Theorem 2.5 to show that a framework \(G({\mathbf{p}})\) is universally completable, the first step is to construct a spherical stress matrix for \(G({\mathbf{p}})\). In view of Definition 2.1, it is not obvious how to do this, and this task is equivalent to a feasibility semidefinite program with a rank constraint. In this section we show how for any graph G, using the eigenvectors of the least eigenvalue of the adjacency matrix of G, we can construct a tensegrity framework which we call the least eigenvalue framework. Least eigenvalue frameworks are important to this work as they come with an associated spherical stress matrix. Consequently, to show that such a framework is universally completable it suffices to check that the only matrix R satisfying conditions (a)–(d) of Theorem 2.3 is \(R = 0\).

Definition 3.1

Consider a graph \(G=([n],E)\) and let d be the multiplicity of the least eigenvalue of its adjacency matrix. Let P be an \(n\times d\) matrix whose columns form an orthonormal basis for the eigenspace of the least eigenvalue of G. To each vertex \(i\in [n]\) we associate the vector \(p_i\in \mathbb {R}^d\) that corresponds to the i-th row of P. We refer to any framework constructed in this manner as a least eigenvalue framework (LEF) of G.

Clearly, there can be multiple least eigenvalue frameworks for a graph G, since there are many choices of orthonormal basis for the least eigenspace. However, for any choice of orthonormal basis, the Gram matrix of the corresponding least eigenvalue framework is equal to the orthogonal projector onto the least eigenspace of G. To see this, let d be the dimension of the least eigenspace of G. Note that \(P^{T}P = I_d\) and therefore \((\textit{PP}^{T})^2 = \textit{PP}^{T}\). Therefore \(\textit{PP}^{T}\) is an orthogonal projector. Since the columns of P are eigenvectors for the least eigenvalue of G, the column space of \(\textit{PP}^{T}\) is contained in the least eigenspace of G. To show that the column space of \(\textit{PP}^{T}\) is equal to the least eigenspace of G, it suffices to show that \(\textit{PP}^{T}\) has rank d. However, the eigenvalues of a projector take values in \(\{0,1\}\) and so its rank is equal to its trace. Therefore, \(\mathrm{rank} (\textit{PP}^{T}) = {{\mathrm{Tr}}}(\textit{PP}^{T}) = {{\mathrm{Tr}}}(P^{T}P) = {{\mathrm{Tr}}}(I_d) = d\). Consequently, all LEFs are congruent and thus indistinguishable for our purposes. We refer to any such framework as the least eigenvalue framework of G.
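The construction of Definition 3.1 and the projector property above can be reproduced in a few lines of numpy; the 5-cycle below serves only as an example.

```python
import numpy as np

def least_eigenvalue_framework(A, tol=1e-8):
    """Rows of the returned n x d matrix P are the LEF vectors p_i:
    the columns of P form an orthonormal basis of the eigenspace of
    the least eigenvalue of A (Definition 3.1)."""
    evals, evecs = np.linalg.eigh(A)          # eigenvalues in ascending order
    return evecs[:, np.abs(evals - evals[0]) < tol]

# Example: the 5-cycle, whose least eigenvalue has multiplicity d = 2.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

P = least_eigenvalue_framework(A)
X = P @ P.T   # Gram matrix of the LEF

# X is the orthogonal projector onto the least eigenspace:
assert np.allclose(X @ X, X)                  # idempotent
assert np.isclose(np.trace(X), P.shape[1])    # rank = trace = d
```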

In general, one can consider eigenvalue frameworks for eigenvalues other than the minimum as well. In fact, this idea is not new, and such frameworks have been studied in their own right by algebraic graph theorists, under different names. Brouwer and Haemers [2] refer to eigenvalue frameworks as “Euclidean representations,” and use them to derive the characterization of graphs with least eigenvalue at least \(-2\), originally due to Cameron et al. [3]. Eigenvalue frameworks are used as a tool to prove statements about the structure of graphs from the geometry of their eigenspaces. In [8], Godsil presents several results on distance-regular graphs that use this approach (Godsil refers to eigenvalue frameworks as “representations”). Furthermore, LEFs also appear in the literature as the vertex sets of “eigenpolytopes” [9]. Also, LEFs fit in the general framework of “nullspace representations” studied in [21] due to the relation with the celebrated Colin de Verdière graph parameter [4]. Lastly, we stress that the idea of exploiting the spectral properties of the adjacency matrix of a graph is the most basic approach in the spectral analysis of graph algorithms and it has been successfully applied to a wide array of problems, e.g. graph coloring, graph expansion and max-cut among others. Nevertheless, the techniques employed in that line of research are usually randomized and thus quite distant from our approach. For an extensive survey the reader is referred to [17] and references therein.

We will be interested in a similar, but more general, construction of frameworks based on eigenvectors of a graph:

Definition 3.2

Consider a graph G and let P be a matrix whose columns span the eigenspace of the least eigenvalue of G. We say that the vectors \(p_i\) that are the rows of P are a generalized least eigenvalue framework of G.

As an example of a generalized least eigenvalue framework, consider the orthogonal projection \(E_\tau \) onto the eigenspace of the least eigenvalue \(\tau \) of some graph G. We claim that the rows (equivalently, the columns) of \(E_\tau \) form a generalized least eigenvalue framework for G. Indeed, it is clear that these span the \(\tau \)-eigenspace of G, since this space is equal to the column space of \(E_\tau \). Interestingly, the Gram matrix of this framework is \(E_\tau ^2 = E_\tau \), and so this framework is congruent to the least eigenvalue framework of G, even though it was not explicitly constructed according to Definition 3.1. We will see another example of this phenomenon in Sect. 3.5 when we construct a least eigenvalue framework for Kneser graphs.

In the next result we show that for any generalized least eigenvalue framework there exists a canonical choice for a spherical stress matrix.

Proposition 3.3

Let G be a tensegrity graph with no cables (i.e., \(C=\emptyset \)). Also let A be the adjacency matrix of G and set \(\tau =\lambda _\mathrm{min}(A)\). The matrix \(A-\tau I\) is a spherical stress matrix for any generalized least eigenvalue framework of G.

Proof

We check the validity of conditions (i)–(v) from Definition 2.1. Clearly, \(A-\tau I \succeq 0\) and so condition (i) holds. It is also trivial to see that conditions (ii) and (iii) hold. Condition (iv) holds since the columns of P are \(\tau \)-eigenvectors of G by the definition of generalized least eigenvalue framework. For condition (v), note that the corank of \(A- \tau I\) is equal to the dimension of the \(\tau \)-eigenspace of A, which is exactly the dimension of the span of the \(\{p_i\}_{i=1}^n.\) \(\square \)

Note that in Proposition 3.3 we restrict to tensegrities with no cables since the definition of a spherical stress matrix requires the entries corresponding to cables to be nonpositive. This fails for \(A-\tau I\), whose entries on the edges of G are equal to 1.

Given the restriction to tensegrity frameworks with no cables, the scope of Proposition 3.3 appears to be limited. Nevertheless, it turns out this is exactly what we need for the study of uniquely vector colorable graphs in Sect. 4.
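As a numerical sanity check (ours, not from the paper), the conditions of Proposition 3.3 can be verified directly for the 5-cycle, viewed as a tensegrity graph with no cables:

```python
import numpy as np

# The 5-cycle.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

evals, evecs = np.linalg.eigh(A)
tau = evals[0]                                  # least eigenvalue
P = evecs[:, np.isclose(evals, tau)]            # LEF framework matrix

Z = A - tau * np.eye(n)
assert np.linalg.eigvalsh(Z).min() > -1e-9      # (i)  Z is psd
assert np.allclose(Z @ P, 0)                    # (iv) ZP = 0
corank = n - np.linalg.matrix_rank(Z, tol=1e-8)
assert corank == P.shape[1]                     # (v)  corank Z = dim span(p_i)
```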

3.2 Conditions for Universal Completability

In this section we give a necessary and sufficient condition for showing that a generalized least eigenvalue framework of a tensegrity graph with no cables is universally completable. We then apply our condition to show that the least eigenvalue framework of an odd cycle is universally completable.

Theorem 3.4

Let G be a tensegrity graph with \(C=\emptyset \) and let \(G({\mathbf{p}})\subseteq ~\mathbb {R}^d\) be a generalized LEF of G. Then \(G({\mathbf{p}})\) is universally completable if and only if

$$\begin{aligned} p_i^{T}R p_j = 0 \quad \mathrm{for}\, i\simeq j \Longrightarrow R = 0, \end{aligned}$$
(3)

for all symmetric matrices \(R\in \mathcal {S}^d\) with \(\mathrm{Im}\, R \subseteq {{\mathrm{span}}}(p_1, \ldots , p_n)\).

Proof

First, suppose \(R\in \mathcal {S}^d\) is a nonzero matrix satisfying \(p_i^{T}R p_j = 0\) for \(i\simeq j\) and \(\hbox {Im }R \subseteq {{\mathrm{span}}}(p_1, \ldots , p_n)\). Without loss of generality we may assume that \(\lambda _\mathrm{min}(R)\ge -1\). Let P be the framework matrix corresponding to \(G({\mathbf{p}})\). Then the matrix \(X:= P(I+R)P^{T}\) is positive semidefinite, and by assumption satisfies

$$\begin{aligned} (P(I+R)P^{T})_{ij} = p^{T}_ip_j + p_i^{T}R p_j = p^{T}_ip_j \quad \text {for all } i\simeq j. \end{aligned}$$

Furthermore, since \(R \ne 0\) and \(\hbox {Im }R \subseteq {{\mathrm{span}}}(p_1, \ldots , p_n) = \hbox {Im }P^{T}\), the matrix \(\textit{PRP}^{T}\) is not zero. Thus \(X = P(I+R)P^{T}\ne \textit{PP}^{T}\). Since X is positive semidefinite, it is the Gram matrix of some set of vectors which form a framework that is dominated by \(G({\mathbf{p}})\). Therefore, \(G({\mathbf{p}})\) is not universally completable.

For the other direction we use Theorem 2.5 to show that \(G({\mathbf{p}})\) is universally completable. Let A be the adjacency matrix of G and set \(\tau =\lambda _\mathrm{min}(A)\). By Proposition 3.3 the matrix \(A-\tau I \) is a spherical stress matrix for \(G({\mathbf{p}})\). Specializing the conditions (a)–(d) from Theorem 2.5 to \(A-\tau I\) it remains to show that \(R=0\) is the only symmetric matrix satisfying \(p_i^{T}R p_j =~0\) for \(i\simeq j\) and \(\hbox {Im }R \subseteq {{\mathrm{span}}}(p_1, \ldots , p_n)\). This is exactly the assumption of the theorem. \(\square \)

Remark 3.5

Notice that if \(G({\mathbf{p}})\) is a (generalized) LEF we have that

$$\begin{aligned} p_i^{T}R p_j = 0 \text { for } i\simeq j \Longleftrightarrow p_i^{T}R p_j = 0 \,\,\, \text {for } i\sim j. \end{aligned}$$

To see this, let A be the adjacency matrix of G, let P be the framework matrix associated to \(G({\mathbf{p}})\) and \(\tau =\lambda _\mathrm{min}(A)\). By definition of the least eigenvalue framework we have that \(AP=\tau P\) which is equivalent to

$$\begin{aligned} \tau p_i=\sum _{j:\, i\sim j}p_j \quad \text {for all } i\in [n], \end{aligned}$$

and the claim follows.

To illustrate the usefulness of Theorem 3.4 we now show that the least eigenvalue framework of an odd cycle is universally completable. For this we must first calculate the eigenvectors corresponding to the least eigenvalue of an odd cycle. These are well known, but we briefly explain how to derive them.

The odd cycle of length \(n:=2k+1\), denoted \(C_{2k+1}\), can be described as the Cayley graph over the abelian group \(\mathbb {Z}_{n}:=\{0,\ldots ,n-1\}\) with connection set \(C = \{\pm 1 \mod n\}\). As described in Sect. 2.1 we see that \(\lambda _\mathrm{min}(C_{2k+1})=2\cos {2\pi k\over n } \) with multiplicity two. Furthermore, a basis for the corresponding eigenspace is given by the vectors \(\{ u,v\}\subseteq \mathbb {C}^n\) where \(u(x)=\exp ({2\pi i k x\over n})\) and \(v(x)=\exp ({2\pi i (n-k) x\over n})\). Recall that the least eigenvalue framework of a graph is defined in terms of real vectors but the eigenvectors identified above are complex valued. It is easy to see that the vectors \(\{ {{\mathrm{Re}}}(u), {{\mathrm{Im}}}(u)\}\subseteq \mathbb {R}^n\) form a real valued orthogonal basis for the least eigenspace of \(C_{2k+1}\). Note that \({{\mathrm{Re}}}(u)_x=\cos ({2\pi kx \over n})\) and \({{\mathrm{Im}}}(u)_x=\sin ({2\pi kx \over n})\) for all \(x\in \mathbb {Z}_n\). Consequently, the least eigenvalue framework for \(C_{2k+1}\), up to a scalar, is given by

$$\begin{aligned} p_x=\biggl (\cos \biggl ({2\pi kx \over n}\biggr ),\sin \biggl ({2\pi kx \over n}\biggr )\biggr )^{T}\quad \text {for all } x\in \mathbb {Z}_n. \end{aligned}$$
(4)

Geometrically, this assigns \(2k+1\) points evenly spaced around the unit circle to the vertices of \(C_{2k+1}\) such that adjacent vertices are at maximum distance.
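For concreteness, here is a NumPy sketch (our own, not part of the original derivation) of the framework (4) for \(C_5\), i.e., \(k=2\), verifying that the columns of the framework matrix are \(\tau \)-eigenvectors:

```python
import numpy as np

# Least eigenvalue framework (4) of the odd cycle C_5 (k = 2, n = 5).
k = 2
n = 2 * k + 1
theta = 2 * np.pi * k / n
P = np.array([[np.cos(theta * x), np.sin(theta * x)] for x in range(n)])

A = np.zeros((n, n))
for x in range(n):
    A[x, (x + 1) % n] = A[x, (x - 1) % n] = 1

tau = 2 * np.cos(2 * np.pi * k / n)         # least eigenvalue of C_5
# The columns of P are tau-eigenvectors, so (4) is indeed a least
# eigenvalue framework (up to scaling).
assert np.allclose(A @ P, tau * P)
# Adjacent vertices have the smallest possible inner product on the circle:
assert np.isclose(P[0] @ P[1], np.cos(2 * np.pi * k / n))
```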

Theorem 3.6

The LEF of an odd cycle is universally completable.

Proof

Set \(n:=2k+1\). Suppose \(R \in \mathcal {S}^2\) satisfies

$$\begin{aligned} p_x^{T}R p_y = 0 \quad \text { for }x\simeq y. \end{aligned}$$
(5)

By Theorem 3.4 it suffices to show that \(R = 0\). For every \(x \in \mathbb {Z}_{n}\) it follows from (5) that \(p_x\) is orthogonal to the image of \({{\mathrm{span}}}\{p_y : x \sim y\}\) under the map R. However, for every \(x\in \mathbb {Z}_{n}\) we have \({{\mathrm{span}}}\{p_y : x \sim y\}=~\mathbb {R}^2\) and thus \(p_x\) is orthogonal to \(\hbox {Im }R\). Since this is true for all \(x \in \mathbb {Z}_n\), and \({{\mathrm{span}}}(p_x: x \in \mathbb {Z}_n)=\mathbb {R}^2\), we must have that \(R = 0\). \(\square \)
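The linear condition used in this proof can also be checked mechanically. The sketch below (ours) encodes \(p_x^{T}Rp_y=0\) for \(x\simeq y\) as a homogeneous linear system in the three entries of a symmetric \(2\times 2\) matrix R, and verifies for \(C_5\) that the system has full rank, so \(R=0\) is the only solution:

```python
import numpy as np

# Numerical check of Theorem 3.6 for C_5: the only symmetric 2x2 matrix R
# with p_x^T R p_y = 0 for x ~ y and x = y is R = 0.
k, n = 2, 5
theta = 2 * np.pi * k / n
P = np.array([[np.cos(theta * x), np.sin(theta * x)] for x in range(n)])

rows = []
# one constraint per pair x = y and per edge {x, x+1} (symmetry of R
# makes the reversed edge redundant)
pairs = [(x, x) for x in range(n)] + [(x, (x + 1) % n) for x in range(n)]
for x, y in pairs:
    (a, b), (c, d) = P[x], P[y]
    # p_x^T R p_y as a linear form in the unknowns (R_00, R_01, R_11)
    rows.append([a * c, a * d + b * c, b * d])

rank = np.linalg.matrix_rank(np.array(rows))
assert rank == 3    # full rank, so R = 0 is the only solution
```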

3.3 Computations and the Strong Arnold Property

In the previous section we identified a necessary and sufficient condition for a least eigenvalue framework to be universally completable (Theorem 3.4). It is possible to turn this condition into an algorithm for determining when a least eigenvalue framework is universally completable; in fact, we investigate this approach in a follow-up to this paper [12]. However, the algorithm investigated there only determines universal completability; it does not provide a method for determining all frameworks dominated by a least eigenvalue framework. Here we will present an alternative necessary and sufficient condition which yields a straightforward method for doing exactly this. The resulting algorithm corresponds to simply solving a homogeneous system of linear equations, and the framework is universally completable if and only if the system has only the trivial solution. Using this we examine how often the least eigenvalue framework of a graph happens to be universally completable. Our computations show that this is the case for the vast majority of Cayley graphs on \(\mathbb {Z}_2^n\) \((n\le 5)\).

To derive our second condition for the universal completability of least eigenvalue frameworks we exploit a connection with the Strong Arnold Property (SAP) that we now introduce. Consider a graph \(G = ([n], E)\) and let A be its adjacency matrix. Set

$$\begin{aligned} \mathcal {C}(G):= \Big \{M \in \mathbb {R}^{n \times n} : M_{ij} = 0 \text { if } i \not \simeq j\Big \}. \end{aligned}$$

A matrix \(M \in \mathcal {C}(G)\) has the Strong Arnold Property if for every \(X\in \mathcal {S}^n\):

$$\begin{aligned} (A+I)\circ X=0 \text { and } MX = 0 \ \Longrightarrow \ X = 0. \end{aligned}$$

The Strong Arnold Property is related to the celebrated Colin de Verdière graph parameter [4], but Laurent and Varvitsiotis have also identified a link between the SAP and their sufficient condition for universal completability.

Theorem 3.7

[19] Consider a graph \(G = ([n],E)\) and a matrix \(M \in \mathcal {C}(G)\) with \(\hbox {corank }M=d\). Let \(P \in \mathbb {R}^{n \times d}\) be a matrix whose columns form an orthonormal basis for \(\mathrm{Ker}\, M\) and let \(p_1, \ldots , p_n\) denote the rows of P. The following are equivalent:

  1. (i)

    M has the Strong Arnold Property.

  2. (ii)

    For any \(d \times d\) symmetric matrix R,

    $$\begin{aligned}p_i^{T}R p_j = 0 \quad \mathrm{for\,all}\, i\simeq j \Longrightarrow R = 0.\end{aligned}$$

We now give our second necessary and sufficient condition for the universal completability of least eigenvalue frameworks.

Proposition 3.8

Let G be a tensegrity graph with no cables, and let \(G({\mathbf{p}})\) be its least eigenvalue framework. Furthermore, let A be the adjacency matrix of G and let \(\tau =\lambda _\mathrm{min}(A)\). The following are equivalent:

  1. (i)

    \(G({\mathbf{p}})\) is universally completable;

  2. (ii)

    \(A-\tau I \) has the Strong Arnold Property. Explicitly, for any \(X\in \mathcal {S}^n\):

    $$\begin{aligned} (A+I) \circ X = 0 \text { and } (A-\tau I)X = 0 \Longrightarrow X=0. \end{aligned}$$

Proof

The proof follows by combining Theorem 3.4 with Theorem 3.7. \(\square \)

Proposition 3.8 (ii) provides us with a polynomial time algorithm for determining whether the least eigenvalue framework of a graph is universally completable, assuming we can compute its least eigenvalue exactly. In particular, finding all matrices X such that \(X_{ij} = 0\) when \(i\simeq j\) and \((A - \tau I)X = 0\) is equivalent to solving a homogeneous system of \(|V(G)|^2\) equations (one for each entry of \((A-\tau I)X\)) in \(|E(\overline{G})|\) variables. We then conclude that the least eigenvalue framework of G is universally completable if and only if the only solution to this system of equations is \(X = 0\).
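A minimal floating-point implementation of this algorithm might look as follows (a sketch under our own conventions: the function name and the rank tolerance `tol` are ours, and for graphs with irrational least eigenvalue exact arithmetic would be preferable). The negative example \(2K_2\), two disjoint edges, is our choice of a graph whose least eigenvalue framework is not universally completable:

```python
import numpy as np
from itertools import combinations

def lef_universally_completable(A, tol=1e-8):
    """True iff the only symmetric X with X_ij = 0 for i adjacent or
    equal to j, and (A - tau*I)X = 0, is X = 0 (Proposition 3.8 (ii))."""
    n = A.shape[0]
    tau = np.linalg.eigvalsh(A)[0]          # least eigenvalue of A
    M = A - tau * np.eye(n)
    # one free variable per non-edge {i,j}; diagonal and edge entries are 0
    free = [(i, j) for i, j in combinations(range(n), 2) if A[i, j] == 0]
    if not free:
        return True
    cols = []
    for i, j in free:
        X = np.zeros((n, n))
        X[i, j] = X[j, i] = 1
        cols.append((M @ X).flatten())      # the |V|^2 equations for this variable
    # trivial kernel <=> full column rank of the |V|^2 x |E(G-complement)| system
    return np.linalg.matrix_rank(np.array(cols).T, tol=tol) == len(free)

# C_5: its LEF is universally completable (Theorem 3.6).
A5 = np.zeros((5, 5))
for i in range(5):
    A5[i, (i + 1) % 5] = A5[i, (i - 1) % 5] = 1
assert lef_universally_completable(A5)

# 2K_2 (two disjoint edges): nontrivial X exist, so its LEF is not.
A22 = np.zeros((4, 4))
A22[0, 1] = A22[1, 0] = A22[2, 3] = A22[3, 2] = 1
assert not lef_universally_completable(A22)
```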

Using Sage [24], we applied the algorithm described above to all Cayley graphs for \(\mathbb {Z}_2^n\) for \(n \le 5\) and obtained the results summarized in Table 1. Note that these graphs all have integral spectra, which allows us to use exact arithmetic when looking for possible solutions X.

Table 1 Data for Cayley graphs on \(\mathbb {Z}_2^n\)

Note that we have not yet shown how to use this algorithm to determine all frameworks dominated by the least eigenvalue framework of a graph. To do this, we will need a correspondence between matrices X satisfying the hypotheses of Proposition 3.8 (ii) and matrices R satisfying \(p_i^{T}R p_j = 0\) for \(i \simeq j\), where \((p_1, \ldots , p_n)\) is the least eigenvalue framework of a graph. First we introduce some notation. For a graph G with adjacency matrix A and least eigenvalue \(\tau \) with multiplicity d, we define the following:

$$\begin{aligned} \mathcal {X}(G)&= \big \{X \in \mathcal {S}^n: (A+I)\circ X = 0 \quad \text {and}\quad (A - \tau I)X = 0\big \} \\ \mathcal {R}(G)&= \big \{R \in \mathcal {S}^d: p_i^{T}R p_j = 0 \text { for } i \simeq j\big \}. \end{aligned}$$

Note that \(\mathcal {X}(G)\) and \(\mathcal {R}(G)\) are both clearly vector spaces. We will construct a linear map between these two spaces that will serve as the needed correspondence.

Lemma 3.9

Let G be a graph and let P be the framework matrix for the least eigenvalue framework of G. Define a map \(\Phi \) as follows:

$$\begin{aligned} \Phi (R) = \textit{PRP}^{T}\quad \text {for } R \in \mathcal {R}(G). \end{aligned}$$

Then the map \(\Phi \) is a linear bijection between \(\mathcal {R}(G)\) and \(\mathcal {X}(G)\), with \(\Phi ^{-1}(X) = P^{T}X P\). Moreover, \(\textit{PP}^{T}+ \Phi (R)\) is psd if and only if \(I+R\) is psd.

Proof

The fact that \(\Phi \) is linear is obvious. Next we will show that \(\Phi (R) \in \mathcal {X}(G)\) for all \(R \in \mathcal {R}(G)\). If \(R \in \mathcal {R}(G)\), then R is symmetric and so \(\textit{PRP}^{T}\) is symmetric. Moreover,

$$\begin{aligned} \Phi (R)_{ij} = (\textit{PRP}^{T})_{ij} = p_i^{T}R p_j = 0 \quad \text {for } i \simeq j. \end{aligned}$$

This implies that \((A+I) \circ \Phi (R) = 0\) where A is the adjacency matrix of G. Also, by the definition of least eigenvalue framework, the columns of P are eigenvectors for the minimum eigenvalue, say \(\tau \), of A. Therefore,

$$\begin{aligned} (A-\tau I)\Phi (R) = (A- \tau I)\textit{PRP}^{T}= 0, \end{aligned}$$

as desired. This shows that the image of \(\Phi \) is contained in \(\mathcal {X}(G)\). Now we will show that \(\Phi \) is surjective. Suppose \(X \in \mathcal {X}(G)\). We have that

$$\begin{aligned} \Phi (P^{T}X P) = \textit{PP}^{T}X \textit{PP}^{T}, \end{aligned}$$

and moreover \(\textit{PP}^{T}=: E_\tau \) is the orthogonal projection onto the \(\tau \)-eigenspace of A. Since \((A- \tau I)X = 0\), the columns (and thus rows) of X are all \(\tau \)-eigenvectors of A and thus

$$\begin{aligned} \Phi (P^{T}X P) = E_\tau X E_\tau = X. \end{aligned}$$

Therefore \(\Phi \) is surjective.

Now we show that \(\Phi \) is injective by verifying that \(\Phi ^{-1}(X) = P^{T}X P\). Since the columns of P form an orthonormal basis, we have that \(P^{T}P = I\). Therefore,

$$\begin{aligned} P^{T}\Phi (R)P = P^{T}\textit{PRP}^{T}P = R. \end{aligned}$$

So \(\Phi ^{-1}(X) = P^{T}X P\).

Lastly, \(\textit{PP}^{T}+ \Phi (R) = P(I+R)P^{T}\) which is psd if and only if \(I+R\) is psd since P has full column-rank. \(\square \)

The above lemma gives us the following corollary:

Corollary 3.10

Let G be a tensegrity graph with no cables and let \({\mathbf{p}}\) be its least eigenvalue framework with corresponding framework matrix P. Then the matrix \(\textit{PP}^{T}+ X\) is the Gram matrix of a framework dominated by \(G({\mathbf{p}})\) if and only if \(X \in \mathcal {X}(G)\) and \(\lambda _{\min }(X) \ge -1\).

Proof

By Theorem 2.3 and Remark 2.4, we have that \(\textit{PP}^{T}+ X\) is the Gram matrix of a framework dominated by \(G({\mathbf{p}})\) if and only if

$$\begin{aligned} \textit{PP}^{T}+ X = P(I+R)P^{T}\end{aligned}$$

for some \(R \in \mathcal {R}(G)\) such that \(I+R\) is psd. By Lemma 3.9 we have that this is equivalent to \(X \in \mathcal {X}(G)\) and \(\textit{PP}^{T}+ X \succeq 0\). The former implies that \(\hbox {Im }X \subseteq \hbox {Im }\textit{PP}^{T}\), and so the latter is equivalent to \(\lambda _{\min }(X) \ge -1\) since \(\textit{PP}^{T}\) is an orthogonal projector. \(\square \)

By the above corollary, in order to determine all frameworks dominated by the least eigenvalue framework of a graph G, it suffices to determine \(\mathcal {X}(G)\), which is equivalent to solving a system of linear equations. Then, for any \(X \in \mathcal {X}(G)\), one can positively scale X until \(\lambda _{\min }(X) \ge -1\). This is especially useful when \(\mathcal {X}(G)\) is 1-dimensional, which is a case we pay special attention to in [12].
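The procedure above can be sketched numerically (our illustration; we use \(2K_2\), two disjoint edges, because its space \(\mathcal {X}(G)\) is nontrivial, and the particular matrix X below was found by hand for this graph):

```python
import numpy as np

# Enumerating frameworks dominated by the LEF of 2K_2 via Corollary 3.10.
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = A[2, 3] = A[3, 2] = 1
tau = -1.0                                   # least eigenvalue of 2K_2

vals, vecs = np.linalg.eigh(A)
P = vecs[:, np.isclose(vals, tau)]           # framework matrix of the LEF

# A nonzero element of X(G), found by hand for this particular graph:
X = np.array([[0., 0., 1., -1.],
              [0., 0., -1., 1.],
              [1., -1., 0., 0.],
              [-1., 1., 0., 0.]])
assert np.allclose((A + np.eye(4)) * X, 0)        # (A + I) o X = 0
assert np.allclose((A - tau * np.eye(4)) @ X, 0)  # (A - tau I) X = 0

# Scale X so that lambda_min(X) >= -1; then PP^T + X is the Gram matrix
# of a framework dominated by the LEF.
X = X / (-np.linalg.eigvalsh(X)[0])
G_new = P @ P.T + X
assert np.linalg.eigvalsh(G_new)[0] >= -1e-9          # psd
assert np.allclose(np.diag(G_new), np.diag(P @ P.T))  # unit-norm entries kept
```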

3.4 Two Additional Sufficient Conditions

In this section we use Proposition 3.8 to derive two additional sufficient conditions for the least eigenvalue framework of a graph to be universally completable. For the first of these, we recall the following well-known result (see, e.g., [15]):

Theorem 3.11

Let G be a graph and H an induced subgraph of G. Then the least eigenvalue of H is greater than or equal to the least eigenvalue of G.

Let G be a tensegrity graph and let \(\tau \) be the minimum eigenvalue of its adjacency matrix. As an immediate consequence of Theorem 3.11 we have that

$$\begin{aligned} \lambda _\mathrm{min}(G{\setminus }N[i])\ge \tau \quad \text {for all } i\in V(G). \end{aligned}$$
(6)

Using (6) we next derive a sufficient condition for universal completability.

Proposition 3.12

Let \(G=([n],E)\) be a tensegrity graph with no cables, let A be its adjacency matrix and set \(\tau =\lambda _\mathrm{min}(A)\). If

$$\begin{aligned} \lambda _\mathrm{min}(G{\setminus }N[i])>\tau \quad \mathrm{for\,all}\, i\in V(G), \end{aligned}$$

then the least eigenvalue framework \(G({\mathbf{p}})\) is universally completable.

Proof

Suppose that \(G({\mathbf{p}})\) is not universally completable. We show there exists \(i\in [n]\) for which \(\lambda _\mathrm{min}(G{\setminus }N[i]) = \tau \). By Proposition 3.8, there exists a nonzero symmetric matrix \(X\in \mathcal {S}^n\) such that \((A+I) \circ X = 0\) and \((A-\tau I)X = 0\). Let \(x_1, \ldots , x_n\) be the columns of X. Since X is nonzero, there exists a vertex \(i\in [n]\) such that \(x_i\ne 0\). Furthermore, as \((A-\tau I)X = 0\) it follows that \(x_i\) is a \(\tau \)-eigenvector of G. However, since \((A+ I) \circ X = 0\) we have that the entries of \(x_i\) corresponding to N[i] are equal to zero. Consequently, the restriction of \(x_i\) to \([n]{\setminus }N[i]\) is a nonzero \(\tau \)-eigenvector of \(G{\setminus }N[i]\), and so \(G{\setminus }N[i]\) has \(\tau \) as an eigenvalue. \(\square \)
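The hypothesis of Proposition 3.12 is easy to test numerically. The sketch below (ours) verifies it for \(C_5\), recovering the conclusion of Theorem 3.6:

```python
import numpy as np

# Check the hypothesis of Proposition 3.12 for C_5: deleting any closed
# neighborhood must strictly increase the least eigenvalue.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
tau = np.linalg.eigvalsh(A)[0]

mins = []
for i in range(n):
    keep = [j for j in range(n) if j != i and A[i, j] == 0]   # V \ N[i]
    mins.append(np.linalg.eigvalsh(A[np.ix_(keep, keep)])[0])

# lambda_min(G \ N[i]) > tau for every vertex i, so by Proposition 3.12
# the LEF of C_5 is universally completable.
assert min(mins) > tau + 1e-9
```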

We continue with our second sufficient condition for universal completability.

Lemma 3.13

Let \(G=([n],E)\) be a tensegrity graph with no cables, let A be its adjacency matrix and set \(\tau =\lambda _\mathrm{min}(A)\). Suppose there exists a clique C in G such that the principal submatrix of \(A-\tau I\) induced by the nodes in \([n]{\setminus }C\) is invertible. Then the least eigenvalue framework \(G({\mathbf{p}})\) is universally completable.

Proof

Suppose that \(X\in \mathcal {S}^n\) is a symmetric matrix satisfying \((A+I) \circ X = 0\) and \((A - \tau I)X = 0\). By Proposition 3.8 it suffices to show that \(X = 0\). By labeling the vertices of G appropriately we can assume that \(A-\tau I\) has the following block structure:

$$\begin{aligned} \begin{pmatrix} D &{} B\\ B^{T} &{} F \end{pmatrix}, \end{aligned}$$

where the upper left block corresponds to the clique C whereas the lower right block corresponds to \([n]{\setminus }C\). By assumption we have that F is invertible. Since C is a clique, all of the entries of \(A + I\) in the block corresponding to C are 1. Since we require \((A+ I) \circ X = 0\), the corresponding block structure for X is given by:

$$\begin{aligned} \begin{pmatrix} 0 &{} Z\\ Z^{T} &{} Y \end{pmatrix}. \end{aligned}$$

Then \((A-\tau I)X=0\) implies that

$$\begin{aligned} FZ^{T}= 0 \quad \text {and}\quad B^{T}Z + FY = 0. \end{aligned}$$

Since F is invertible it follows that \(Z = Y = 0\) and thus \(X = 0\). \(\square \)

Using Lemma 3.13 we now show that the least eigenvalue framework of any split graph is universally completable.

Corollary 3.14

Let G be a tensegrity graph with no cables. If G is a split graph, then the least eigenvalue framework \(G({\mathbf{p}})\) is universally completable.

Proof

Since G is a split graph there exists a partition \(\{C,I\}\) of V(G) such that C is a clique and \(I = V(G){\setminus }C\) is an independent set of G. This implies that the principal submatrix of \(A- \tau I\) corresponding to \(V(G){\setminus }C\) is a nonzero scalar multiple of the identity matrix, and is therefore invertible. By Lemma 3.13 this implies that \(G({\mathbf{p}})\) is universally completable. \(\square \)
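As a concrete check (our sketch, using the star \(K_{1,3}\) as a small split graph with clique \(C=\{0\}\) and independent set \(I=\{1,2,3\}\)):

```python
import numpy as np

# Corollary 3.14 for the star K_{1,3}: the principal submatrix of
# A - tau*I on the independent set is -tau * Identity, hence invertible.
A = np.zeros((4, 4))
for leaf in (1, 2, 3):
    A[0, leaf] = A[leaf, 0] = 1
tau = np.linalg.eigvalsh(A)[0]          # equals -sqrt(3) for K_{1,3}

F = (A - tau * np.eye(4))[1:, 1:]       # block on V \ C = {1, 2, 3}
assert np.allclose(F, -tau * np.eye(3)) # independent set => scalar multiple of I
assert abs(np.linalg.det(F)) > 1e-9     # invertible, so Lemma 3.13 applies
```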

3.5 Application: Kneser Graphs

In this section we use Theorem 3.4 to show that certain generalized least eigenvalue frameworks of the Kneser and q-Kneser graphs are universally completable. This is used later in Sect. 4 to show that for \(n \ge 2r+1\), both the Kneser graph \(K_{n:r}\) and the q-Kneser graph \(qK_{n:r}\) are uniquely vector colorable. Interestingly, even though the frameworks studied in this section are constructed as generalized least eigenvalue frameworks, we will later see that they are in fact congruent to the corresponding least eigenvalue frameworks.

For two positive integers nr the Kneser graph, denoted by \(K_{n:r}\), is the graph whose vertices correspond to the subsets of [n] of cardinality r, where two vertices are adjacent if the corresponding sets are disjoint. The q-Kneser graph, denoted by \(qK_{n:r}\), has as its vertices the r-dimensional subspaces of the finite vector space \(\mathbb {F}_q^n\), and two of these subspaces are adjacent if they are skew, i.e., the subspaces intersect trivially. In this section we construct universally completable generalized least eigenvalue frameworks for both \(K_{n:r}\) and \(qK_{n:r}\) (for \(n \ge 2r+1\)). We only give the proof for the q-Kneser graphs, but the proof for Kneser graphs is similar; we discuss the differences at the end of the section.

Let P be a matrix with rows indexed by the r-dimensional subspaces of \(\mathbb {F}_q^n\) (i.e. by the vertices of \(qK_{n:r}\)) and columns indexed by the lines (1-dimensional subspaces) of \(\mathbb {F}_q^n\) such that

$$\begin{aligned} P_{S,\ell }:=\left\{ \begin{array}{ll} \alpha &{} \text {if } \ell \subseteq S, \\ \beta &{} \text {if } \ell \cap S = \{0\}. \end{array}\right. \end{aligned}$$
(7)

In other words, P is a “weighted incidence matrix” of the r-dimensional subspaces and lines of \(\mathbb {F}_q^n\). Further suppose that \(\alpha \) and \(\beta \) are chosen such that \(P\vec {1}=0\). The precise values of \(\alpha \) and \(\beta \) are not important (since we can apply a global scalar without changing the proof), but one suitable choice is

$$\begin{aligned}\alpha := [r]_q - [n]_q\quad \text {and}\quad \beta := [r]_q, \end{aligned}$$

where \([k]_q:= \frac{q^k - 1}{q-1} = \sum _{i=0}^{k-1} q^i\) gives the number of lines contained in a k-dimensional subspace of \(\mathbb {F}_q^n\). Using P we construct a generalized least eigenvalue framework for \(qK_{n:r}\) by assigning to each r-dimensional subspace \(S\subseteq \mathbb {F}_q^n\) the vector \(p_S\) corresponding to the S-row of P. Note that the columns of P are not orthogonal; however, it is known that they span the least eigenspace of \(qK_{n:r}\) [10]. Therefore the vectors \(\{p_S: S \in V(qK_{n:r})\}\) form a generalized least eigenvalue framework of \(qK_{n:r}\). Lastly, note that the vectors \(p_S\) lie in \(\mathbb {R}^{[n]_q}\) but do not span it, since they are all orthogonal to the all-ones vector \(\vec {1}\). In Remark 3.16 we show that \({{\mathrm{span}}}(p_S: S \in V(qK_{n:r}))=\{\vec {1}\}^\bot \).
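The set analogue of this construction (the one used for Theorem 3.18, with subsets in place of subspaces and \([k]_q\) replaced by k) is easy to verify computationally. The following sketch (ours) builds P for the Kneser graph \(K_{5:2}\), i.e., the Petersen graph, with \(\alpha = r-n\) and \(\beta = r\):

```python
import numpy as np
from itertools import combinations

# Weighted incidence construction (7), set version, for K_{5:2} (Petersen).
n, r = 5, 2
verts = list(combinations(range(n), r))
alpha, beta = r - n, r            # chosen so that alpha*r + beta*(n - r) = 0

P = np.array([[alpha if e in S else beta for e in range(n)] for S in verts],
             dtype=float)
assert np.allclose(P.sum(axis=1), 0)        # P @ 1 = 0

# Adjacency of the Kneser graph: vertices adjacent iff the sets are disjoint.
A = np.array([[1.0 if set(S).isdisjoint(T) else 0.0 for T in verts]
              for S in verts])
tau = np.linalg.eigvalsh(A)[0]
assert np.isclose(tau, -2)                  # least eigenvalue of Petersen
# Columns of P are tau-eigenvectors: a generalized LEF of K_{5:2}.
assert np.allclose(A @ P, tau * P)
```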

By Theorem 3.4, to conclude that \(qK_{n:r}({\mathbf{p}})\) is universally completable it suffices to show that

$$\begin{aligned} p_S^{T}Rp_T=0 \quad \text {for } S\simeq T \Longrightarrow R=0, \end{aligned}$$

for any \(R\in \mathcal {S}^{[n]_q}\) with \(\hbox {Im }R\subseteq {{\mathrm{span}}}(p_S: S \in V(qK_{n:r}))\). For this we need to introduce some notation and an auxiliary lemma. For a subspace F of \(\mathbb {F}^n_q\), define \(\vec {1}_F\) as a vector indexed by the lines of \(\mathbb {F}_q^n\) as follows:

$$\begin{aligned} \big (\vec {1}_F\big )_\ell = \left\{ \begin{array}{cl} 1 &{} \text {if } \ell \subseteq F, \\ 0 &{} \text {if } \ell \cap F = \{0\}. \end{array} \right. \end{aligned}$$

We also define the following two subspaces of \(\mathbb {R}^n\) for any subspace F of \(\mathbb {F}_q^n\):

$$\begin{aligned} \begin{aligned} P_F&:= {{\mathrm{span}}}\big (\{p_S : S \cap F = \{0\}\} \cup \{\vec {1}\}\big ), \text { and } \\ E_F&:= {{\mathrm{span}}}\big (\{e_\ell : \ell \cap F = \{0\}\} \cup \{\vec {1}_F\}\big ). \end{aligned} \end{aligned}$$
(8)

We will need the following technical lemma:

Lemma 3.15

Let \(n \ge 2r+1\), and let F be a subspace of \(\mathbb {F}^n_q\) of dimension at most r. Then \(P_F = E_F\).

Proof

Clearly, \(E_F\) is exactly the subspace of vectors which are constant on the lines contained in F. From this, it is easy to see that \(P_F \subseteq E_F\). To show the other containment, we will prove that \(e_\ell \in P_F\) for all \(\ell \) skew to F, and that \(\vec {1}_F \in P_F\).

First, suppose \(\ell \) is a line skew to F. Then, since \(n \ge 2r+1\), there exists some \((r+1)\)-dimensional subspace U of \(\mathbb {F}^n_q\) containing \(\ell \) and skew to F. Let \(\mathcal {U}\) be the set of all r-dimensional subspaces of U. Since U is skew to F, so is every element of \(\mathcal {U}\). Therefore, for all \(S \in \mathcal {U}\), we have \(p_S \in P_F\). Furthermore, since \(\vec {1}\in P_F\), we have that \(\vec {1}_S \in P_F\) for all \(S \in \mathcal {U}\). The vectors \(\vec {1}_S\) for \(S \in \mathcal {U}\) are all 0 on the lines not contained in U, and thus the matrix whose rows are these \(\vec {1}_S\) vectors looks like

$$\begin{aligned}{}[\ M \ | \ 0 \ ], \end{aligned}$$

where M is the incidence matrix whose rows are indexed by the r-dimensional subspaces of U, and whose columns are indexed by the 1-dimensional subspaces of U. We will show that \(e_{\ell '} \in P_F\) for all \(\ell ' \subseteq U\). To do this it suffices to show that M has full column rank, which is equivalent to the matrix \(M^{T}M\) having no zero eigenvalues.

To see this, note that \((M^{T}M)_{\ell '\ell ''}\) is equal to the number of r-dimensional subspaces of U which contain both \(\ell '\) and \(\ell ''\). This value depends only on whether \(\ell ' = \ell ''\), and it is strictly larger when \(\ell ' = \ell ''\). Therefore,

$$\begin{aligned} M^{T}M = aI + bJ, \end{aligned}$$

where \(a > 0\) and \(b \ge 0\). This clearly has no zero eigenvalues and thus \(e_{\ell '} \in P_F\) for all \(\ell ' \subseteq U\), and in particular \(e_\ell \in P_F\). Since \(\ell \) was an arbitrary line skew to F, this shows that \(e_{\ell '} \in P_F\) for all \(\ell '\) skew to F. To see that \(\vec {1}_F \in P_F\), simply note that

$$\begin{aligned} \vec {1}_F = \vec {1}- \sum _{\ell \text { skew to } F} e_\ell .\square \end{aligned}$$

Remark 3.16

Setting F to be the zero subspace, Lemma 3.15 implies that

$$\begin{aligned} {{\mathrm{span}}}(\{p_S : S \in V(qK_{n:r})\} \cup \{\vec {1}\})=\mathbb {R}^{[n]_q}, \end{aligned}$$

and since \(p_S^{T}\vec {1}=0 \) for all \(S\subseteq \mathbb {F}_q^n\) it follows that

$$\begin{aligned} {{\mathrm{span}}}(p_S : S \in V(qK_{n:r}))= \{\vec {1}\}^\perp . \end{aligned}$$
(9)

This is used in the proof of our main theorem below.

Theorem 3.17

For \(n \ge 2r+1\), the generalized least eigenvalue framework of \(qK_{n:r}\) described in (7) is universally completable.

Proof

Combining Theorem 3.4 with (9), it suffices to show that \(R=0\) for any \([n]_q \times [n]_q\) symmetric matrix R satisfying \(\hbox {Im }R \subseteq \{\vec {1}\}^\bot \) and

$$\begin{aligned} p_S^{T}R p_T = 0\quad \text {for all } S,T \in V(qK_{n:r}) \text { such that } S \simeq T. \end{aligned}$$
(10)

For any \(T \in V(qK_{n:r})\) it follows from (10) that the vector \(R p_T\) is orthogonal to \(p_S\) for all \(S \in V(qK_{n:r})\) skew to T. Furthermore, as \(\hbox {Im }R \subseteq \{\vec {1}\}^\bot \) the vector \(Rp_T\) is orthogonal to \(\vec {1}\). This implies that \(Rp_T\) is orthogonal to \(P_T\) as defined in (8). By Lemma 3.15, we have that \(P_T = E_T\) and therefore

$$\begin{aligned} Rp_T \perp e_\ell \quad \text {for all } \ell \text { skew to } T. \end{aligned}$$

Since R is symmetric, this implies that

$$\begin{aligned} Re_\ell \perp p_T \quad \text {for all } T \text { skew to } \ell . \end{aligned}$$

As \(\hbox {Im }R \subseteq \{\vec {1}\}^\bot \), the latter implies that \(Re_\ell \) is orthogonal to \(P_F\) for \(F = \ell \). Applying Lemma 3.15 again, we obtain

$$\begin{aligned} Re_\ell \perp E_\ell = \mathbb {R}^{[n]_q}. \end{aligned}$$

Since this is true for all lines \(\ell \) of \(\mathbb {F}_q^n\), we have that \(R = 0\). \(\square \)

The proof for Kneser graphs (both the lemma and the theorem) is essentially identical to the above, except that subspaces are replaced by subsets and lines are replaced by elements. Therefore we have the following:

Theorem 3.18

For \(n \ge 2r+1\), the generalized least eigenvalue framework of \(K_{n:r}\) described in (7) is universally completable.

4 Vector Colorings

4.1 Definitions and Properties

For \(t \ge 2\), a vector t-coloring of a graph \(G=([n],E)\) consists of an assignment \({\mathbf{p}}= (p_1, \ldots , p_n)\) of real unit vectors to the vertices of G such that

$$\begin{aligned} p_i^{T}p_j \le \frac{-1}{t-1} \quad \text {for all } i \sim j. \end{aligned}$$
(11)

We say that \({\mathbf{p}}\) is a strict vector t-coloring if (11) holds with equality for all edges of G.

The vector chromatic number of a nonempty graph G, denoted \(\chi _\mathrm{v}(G)\), is the minimum real number \(t\ge 2\) such that G admits a vector t-coloring. The vector chromatic number of empty graphs is defined to be one. The strict vector chromatic number, \(\chi _\mathrm{sv}(G)\), is defined analogously. We say that the value of a vector coloring is the smallest t for which (11) is satisfied.

Vector and strict vector colorings, as well as their associated chromatic numbers, were introduced by Karger, Motwani, and Sudan in [18]. As it turns out, for any graph G we have \(\chi _\mathrm{sv}(G)=\vartheta (\overline{G})\), where \(\vartheta (\cdot )\) denotes the Lovász theta number of a graph [23]. Furthermore, for any graph G we have that \(\chi _\mathrm{v}(G)=\vartheta '(\overline{G})\), where \(\vartheta '(\cdot )\) is a variant of the Lovász theta function introduced by Schrijver in [23].

Clearly, for any graph G we have that \(\chi _\mathrm{v}(G)\le \chi _\mathrm{sv}(G)\). Furthermore, notice that if G admits a k-coloring (in the usual sense), then mapping each color class to a distinct vertex of a regular \((k-1)\)-simplex centered at the origin yields a strict vector k-coloring. This implies that \(\chi _\mathrm{sv}(G) \le \chi (G)\).
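The simplex construction can be sketched as follows (our illustration, using a proper 3-coloring of \(C_5\) fixed by hand and the vertices of an equilateral triangle as the 2-simplex):

```python
import numpy as np

# Vertices of a regular triangle centered at the origin: unit vectors with
# pairwise inner product cos(2*pi/3) = -1/2 = -1/(t-1) for t = 3.
simplex = np.array([[np.cos(2 * np.pi * c / 3), np.sin(2 * np.pi * c / 3)]
                    for c in range(3)])
coloring = [0, 1, 0, 1, 2]          # a proper 3-coloring of C_5, chosen by hand
p = simplex[coloring]               # the induced strict vector 3-coloring

for i in range(5):
    j = (i + 1) % 5                 # edges of C_5
    assert coloring[i] != coloring[j]
    assert np.isclose(p[i] @ p[j], -0.5)     # equality in (11) on every edge
assert np.allclose(np.linalg.norm(p, axis=1), 1)
```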

Our main goal in this section is to identify families of graphs that admit unique optimal vector (resp. strict vector) colorings. Similarly to tensegrity frameworks, any (strict) vector t-coloring can be rotated, reflected, or otherwise orthogonally transformed to generate another (strict) vector t-coloring. This is analogous to permuting the colors in a usual coloring of a graph. Consequently, when defining uniquely vector colorable graphs we must mod out by this equivalence to arrive at a meaningful definition. As with tensegrity frameworks, we accomplish this using Gram matrices.

Definition 4.1

A graph G is called uniquely (strict) vector colorable if for any two optimal (strict) vector colorings \({\mathbf{p}}\) and \({\mathbf{q}}\), we have

$$\begin{aligned} \hbox {Gram}(p_1, \ldots , p_n) = \hbox {Gram}(q_1, \ldots , q_n). \end{aligned}$$

In this section we identify an interesting connection between universal completability and uniquely vector colorable graphs. Specifically, in Sect. 4.2 we develop a sufficient condition for showing that a vector coloring of a graph G is the unique optimal vector coloring of G. Combining this with our results from the previous section allows us to show that for \(n \ge 2r+1\), both the Kneser graph \(K_{n:r}\) and the q-Kneser graph \(qK_{n:r}\) are uniquely vector colorable. Furthermore, in Sect. 4.3 we introduce the class of 1-walk-regular graphs for which we can fully characterize the set of optimal vector colorings. To achieve this we need to use the characterization of frameworks dominated by a fixed tensegrity framework \(G({\mathbf{p}})\) given in Theorem 2.3.

4.2 Uniqueness of Vector Colorings

In this section we mainly focus on unique vector colorability, but for the graph classes we consider this is equivalent to unique strict vector colorability. In order to show that a graph \(G=([n],E)\) is uniquely vector colorable we start with a candidate vector coloring \({\mathbf{p}}\) of G and show it is the unique optimal vector coloring of G. To achieve this we use the tools we developed in the previous sections concerning tensegrity frameworks. Specifically, we associate to the vector coloring \({\mathbf{p}}\) a tensegrity framework \(\tilde{G}({\mathbf{p}})\) and relate the universal completability of \(\tilde{G}({\mathbf{p}})\) to the uniqueness of \({\mathbf{p}}\) as an optimal vector coloring.

Definition 4.2

Consider a graph \(G=([n],E)\) and let \({\mathbf{p}}\) be a vector coloring of G. Define \(\tilde{G}\) to be the tensegrity graph obtained from G by setting \(S=E\) (and thus \(C=B=\emptyset \)) and let \(\tilde{G}({\mathbf{p}})\) be the corresponding tensegrity framework.

Using the construction described above we now state and prove a lemma that is used throughout this section.

Lemma 4.3

Consider a graph \(G=([n], E)\) and let \({\mathbf{p}}\) be a strict vector coloring of G. A vector coloring \({\mathbf{q}}\) achieves a value less than or equal to that of \({\mathbf{p}}\) if and only if \(\tilde{G}({\mathbf{p}})\succeq \tilde{G}({\mathbf{q}})\).

Proof

Let t be the value of \({\mathbf{p}}\) as a strict vector coloring. Let \({\mathbf{q}}\) be a vector coloring of G satisfying \(q_i^{T}q_j\le {-1\over t'-1}\) for some \(t'\le t\). As \({\mathbf{p}}\) is a strict vector t-coloring we have that \(q_i^{T}q_j\le {-1\over t-1}= p_i^{T}p_j\) for all \(i\sim j\). Since the vectors \(\{p_i\}_{i=1}^n\) and \(\{q_i\}_{i=1}^n\) have unit norm it follows that \(\tilde{G}({\mathbf{p}})\succeq \tilde{G}({\mathbf{q}})\). The converse direction follows easily (and holds even if \({\mathbf{p}}\) is not strict as a vector coloring). \(\square \)

Remark 4.4

Note that the forward implication in Lemma 4.3 is not guaranteed to hold if \({\mathbf{p}}\) is not a strict vector t-coloring. Indeed, in this case there exist adjacent vertices i and j such that \(p_i^{T}p_j < -1/(t-1)\), so there might exist a vector coloring \({\mathbf{q}}\) whose value is better than that of \({\mathbf{p}}\) but which satisfies

$$\begin{aligned} p_i^{T}p_j< q_i^{T}q_j < \frac{-1}{t-1}. \end{aligned}$$

In this case we have \(\tilde{G}({\mathbf{p}})\not \succeq \tilde{G}({\mathbf{q}})\).

Using Lemma 4.3 we arrive at our main result in this section where we make the connection between universal completability and unique vector colorability.

Theorem 4.5

Consider a graph \(G=([n], E)\), let \({\mathbf{p}}\) be a strict vector coloring of G and let \(\tilde{G}({\mathbf{p}})\) be the tensegrity framework given in Definition 4.2. Then:

  1. (i)

    If \(\tilde{G}({\mathbf{p}})\) is universally completable, then G is UVC and, furthermore, the matrix \(\mathrm{Gram}(p_1,\ldots ,p_n)\) is the unique optimal vector coloring of G.

  2. (ii)

    If \({\mathbf{p}}\) is optimal as a vector coloring, then G is uniquely vector colorable if and only if \(\tilde{G}({\mathbf{p}})\) is universally completable.

Proof

(i) Let \({\mathbf{q}}\) be a vector coloring of G whose value is better than or equal to that of \({\mathbf{p}}\). By Lemma 4.3 we have that \(\tilde{G}({\mathbf{p}})\succeq \tilde{G}({\mathbf{q}})\). By assumption \(\tilde{G}({\mathbf{p}})\) is universally completable, which implies \(\hbox {Gram}(p_1,\ldots ,p_n)=\hbox {Gram}(q_1,\ldots ,q_n)\). In particular, no vector coloring achieves a strictly better value than \({\mathbf{p}}\), so \({\mathbf{p}}\) is optimal and \(\mathrm{Gram}(p_1,\ldots ,p_n)\) is the unique optimal vector coloring of G.

(ii) By assumption we have \(p_i^{T}p_j=-1/(\chi _\mathrm{v}(G)-1)\) for all \(i\sim j\). First, assume that G is uniquely vector colorable and consider a framework \(\tilde{G}({\mathbf{q}})\) satisfying \(\tilde{G}({\mathbf{p}})\succeq \tilde{G}({\mathbf{q}}) \). This gives that \(q_i^{T}q_j\le p_i^{T}p_j= -1/(\chi _\mathrm{v}(G)-1)\) for all \(i\sim j\) and since \({\mathbf{p}}\) is the unique optimal vector coloring of G it follows that \(\hbox {Gram}(p_1,\ldots ,p_n)=\hbox {Gram}(q_1,\ldots ,q_n)\). Conversely, say that \(\tilde{G}({\mathbf{p}})\) is universally completable and let \({\mathbf{q}}\) be another optimal vector coloring of G. This means that \(q_i^{T}q_j\le -1/(\chi _\mathrm{v}(G)-1)=p_i^{T}p_j\) for all \(i \sim j\). Since \(\tilde{G}({\mathbf{p}})\) is universally completable it follows that \(\hbox {Gram}(p_1,\ldots ,p_n)=\hbox {Gram}(q_1,\ldots ,q_n)\). \(\square \)

The categorical product of graphs G and H, denoted \(G \times H\), is the graph with vertex set \(V(G) \times V(H)\) where vertices \((i, \ell )\) and \((j, k)\) are adjacent if i and j are adjacent in G and \(\ell \) and k are adjacent in H. Pak and Vilenchik considered vector colorings of categorical products where one of the factors is complete, and using Theorem 4.5 we can recover one of their main results [22, Thm. 3].

Theorem 4.6

Let \(r\ge 2\) and let \(H = ([n], E)\) be a k-regular graph whose adjacency matrix has eigenvalues \(\{\lambda _i\}_{i=1}^n\), ordered so that \(\lambda _1=k\), satisfying \(\max _{2\le i\le n} |\lambda _i| < k/(r-1)\). Then the graph \(K_r \times H\) is UVC.

Proof

By the assumption on the eigenvalues of H, the least eigenvalue of \(K_r \times H\) is \(-k\) with multiplicity \(r-1\). Moreover, this eigenspace is spanned by vectors of the form \(u \otimes \vec {1}\) where u is an eigenvector for the \(-1\) eigenspace of \(K_r\) and \(\vec {1}\in \mathbb {R}^{|V(H)|}\) is the all ones vector (the eigenvector for the k-eigenspace of H). Note that the \(r\times r\) matrix \({r\over r-1}I-{1\over r-1}J\) is psd with rank \(r-1\). Consider a family of vectors \( \{w_i\}_{i=1}^{r}\subseteq \mathbb {R}^{r-1}\) such that \(\mathrm{span}(\{w_i\}_{i=1}^r)=\mathbb {R}^{r-1}\) and \(\hbox {Gram}(\{w_i\}_{i=1}^r)={r\over r-1}I-{1\over r-1}J\). Furthermore, note that \(\sum _{i=1}^{r}w_i=0\) since the row sums of \({r\over r-1}I-{1\over r-1}J\) are all zero. Then, a generalized LEF for \(K_r\times H\) is given by \((i,h)\mapsto w_i\), for all \(i\in [r], h\in V(H)\). To see this note that the columns of the \(r\times (r-1)\) matrix P whose rows are given by the \(w_i\)’s span the \(-1\) eigenspace of \(K_r\). Indeed, the columns of P are linearly independent as \(\mathrm{span}(\{w_i\}_{i=1}^r)=\mathbb {R}^{r-1}\) and they lie in \(\{\vec {1}\}^\perp \) as \(\sum _{i=1}^{r}w_i=0\). Thus, the columns of the matrix \(P \otimes \vec {1}\) span the least eigenspace of \(K_r \times H\) and the \((i,h)\) row of \(P \otimes \vec {1}\) is equal to \(w_i\). We denote this framework by \(G_r(\mathbf{w})\). Since \(w_i^T w_i = 1\) for all i and \(w_i^Tw_j=-1/(r-1)\) for all \(i\ne j\in [r]\) the framework \(G_r(\mathbf{w})\) is also a strict vector coloring. Thus by Theorem 4.5 (i) it suffices to show that \(\tilde{G}_r(\mathbf{w})\) is universally completable. As \(G_r(\mathbf{w})\) is a generalized LEF, by Theorem 3.4 it remains to show that (3) holds. For this, consider a matrix \(R\in \mathcal {S}^{r-1}\) for which \(w_{(i,h)}^TRw_{(i,h)}=0, \ \forall i,h\) and \(w_{(i,h)}^TRw_{(i',h')}=0\) for \(i\ne i'\) and \(h \sim h'\).
This implies that \(w_i^TRw_j=0\) for all \(i,j\in [r]\) and as \(\mathrm{span}(\{w_i\}_{i=1}^r)=\mathbb {R}^{r-1}\) it follows that \(R=0\). \(\square \)
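The vectors \(\{w_i\}_{i=1}^r\) used in the proof can be constructed explicitly from an eigendecomposition of \({r\over r-1}I-{1\over r-1}J\); a minimal numerical sketch (the helper name `simplex_vectors` is ours, and any construction realizing the stated Gram matrix works equally well):

```python
import numpy as np

def simplex_vectors(r):
    """Return w_1,...,w_r in R^{r-1} (as rows) whose Gram matrix is
    r/(r-1) I - 1/(r-1) J, built from an eigendecomposition of that
    psd rank-(r-1) matrix."""
    G = (r / (r - 1)) * np.eye(r) - (1 / (r - 1)) * np.ones((r, r))
    vals, vecs = np.linalg.eigh(G)          # ascending; first eigenvalue is 0
    W = vecs[:, 1:] * np.sqrt(vals[1:])     # drop the kernel (all-ones) direction
    return W                                # W @ W.T reproduces G

r = 4
W = simplex_vectors(r)
G = W @ W.T
assert np.allclose(G, (r/(r-1))*np.eye(r) - (1/(r-1))*np.ones((r, r)))
assert np.allclose(np.diag(G), 1)           # unit vectors
assert np.allclose(W.sum(axis=0), 0)        # rows sum to zero
assert np.linalg.matrix_rank(W) == r - 1    # they span R^{r-1}
```

The off-diagonal Gram entries are \(-1/(r-1)\), so these vectors form a strict vector r-coloring of \(K_r\), as in the proof.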

We note that using our techniques we can actually greatly generalize the above result. The spectral condition on H can be replaced by a much more natural condition on its vector chromatic number, and \(K_r\) can be replaced by a much larger class of graphs. In particular, this allows us to handle the case where H is non-regular. This more general result requires us to develop our methods into a form that better handles vector colorings of categorical products, and thus it will appear in a later work which specifically concerns these products [13]. To our knowledge, the above mentioned result by Pak and Vilenchik is the only previously known result concerning the uniqueness of vector colorings.

We conclude this section with an easy application of Theorem 4.5 where we show that odd cycles, Kneser, and q-Kneser graphs are uniquely vector colorable.

Theorem 4.7

For \(k \in \mathbb {N}\), the odd cycle \(C_{2k+1}\) is uniquely vector colorable.

Proof

Let \({\mathbf{p}}\) be the least eigenvalue framework of \(C_{2k+1}\), as described in Sect. 3.2. It is easy to see that the inner products of the vectors in \({\mathbf{p}}\) are constant on edges and that this constant is negative. Therefore, after appropriate scaling, \({\mathbf{p}}\) is a strict vector coloring. By Theorem 4.5, it remains to show that \(\tilde{C}_{2k+1}({\mathbf{p}})\) is universally completable. However this was shown in Theorem 3.6. \(\square \)
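The properties of the least eigenvalue framework used in this proof can be verified numerically for a small odd cycle. The sketch below computes the LEF of \(C_5\) from the adjacency matrix and checks that the edge inner products are a negative constant:

```python
import numpy as np

n = 5                                        # the odd cycle C_5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

vals, vecs = np.linalg.eigh(A)               # eigenvalues in ascending order
tau = vals[0]                                # least eigenvalue of C_5
d = int(np.sum(np.isclose(vals, tau)))       # its multiplicity (here 2)
P = vecs[:, :d]                              # rows p_i of the LEF
G = P @ P.T                                  # Gram matrix = projector E_tau

edge_vals = [G[i, (i + 1) % n] for i in range(n)]
assert np.allclose(edge_vals, edge_vals[0]) and edge_vals[0] < 0
assert np.allclose(np.diag(G), d / n)        # constant squared vertex norms
```

Scaling the rows of P by \(\sqrt{n/d}\) then gives unit vectors with constant negative edge inner products, i.e., a strict vector coloring.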

Theorem 4.8

For \(n \ge 2r+1\), both the Kneser graph \(K_{n:r}\) and the q-Kneser graph \(qK_{n:r}\) are uniquely vector colorable.

Proof

Let \({\mathbf{p}}\) denote the generalized least eigenvalue framework for the q-Kneser graph \(qK_{n:r}\) described in Sect. 3.5. It is easy to see that this framework has constant inner product on edges of \(qK_{n:r}\), and a straightforward computation shows that this constant is negative. Therefore, after appropriate scaling, this forms a strict vector coloring. Consequently, by Theorem 4.5 (i), it suffices to show that \(q\tilde{K}_{n:r}({\mathbf{p}})\) is universally completable. That was already established in Theorem 3.17. The case of Kneser graphs follows in a similar manner. \(\square \)

For \(n \le 2r-1\), the graphs \(K_{n:r}\) and \(qK_{n:r}\) are empty, and so they are clearly not uniquely vector colorable. Furthermore, for \(n = 2r\) (and \(r > 1\)), the graph \(K_{n:r}\) is disconnected and therefore not uniquely vector colorable. This leaves the case \(n=2r\) for the q-Kneser graphs. These graphs are not bipartite, so it is not clear if they are uniquely vector colorable. However, using the algorithm described in Sect. 3.3, we found that \(2K_{4:2}\) is not uniquely vector colorable.

4.3 1-Walk-Regular Graphs

In this section we focus on the class of 1-walk-regular graphs. These graphs are relevant to this work since they exhibit sufficient regularity so as to guarantee that their least eigenvalue frameworks are always (up to a global scalar) strict vector colorings. This fact combined with Theorem 2.3 allows us to characterize the set of optimal vector colorings for a 1-walk-regular graph (cf. Theorem 4.10). This implies that the least eigenvalue framework of a 1-walk-regular graph is always an optimal strict vector coloring and moreover, yields a necessary and sufficient condition for a 1-walk-regular graph to be uniquely vector colorable.

Definition 4.9

A graph with adjacency matrix A is called 1-walk-regular if there exist \(a_k, b_k \in \mathbb {N}\) for all \(k \in \mathbb {N}\) such that:

  1. (i)

    \(A^k \circ I = a_k I\);

  2. (ii)

    \(A^k \circ A = b_k A\).

Equivalently, a graph is 1-walk-regular if for all \(k \in \mathbb {N}\), (i) the number of walks of length k starting and ending at a vertex does not depend on the choice of vertex, and (ii) the number of walks of length k between the endpoints of an edge does not depend on the edge.

Note that a 1-walk-regular graph must be regular. Also, any graph which is vertex- and edge-transitive is easily seen to be 1-walk-regular. Other classes of 1-walk-regular graphs include distance-regular graphs and, more generally, graphs which form a single class in an association scheme.
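Definition 4.9 can be tested directly by computing powers of the adjacency matrix. A small sketch (the helper name `is_1_walk_regular` is ours; checking k up to the number of distinct eigenvalues suffices, since higher powers of A are polynomials in the lower ones, but we take a fixed bound for simplicity):

```python
import numpy as np

def is_1_walk_regular(A, kmax=10, tol=1e-8):
    """Check A^k o I = a_k I and A^k o A = b_k A for k = 0, ..., kmax,
    i.e., that diagonal entries of A^k are constant and that entries of
    A^k on edges are constant."""
    n = A.shape[0]
    Ak = np.eye(n)
    for _ in range(kmax + 1):
        diag = np.diag(Ak)
        if not np.allclose(diag, diag[0], atol=tol):
            return False                       # condition (i) fails
        on_edges = Ak[A > 0]
        if on_edges.size and not np.allclose(on_edges, on_edges[0], atol=tol):
            return False                       # condition (ii) fails
        Ak = Ak @ A
    return True

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

path = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])   # P_3 is not 1-walk-regular
assert is_1_walk_regular(cycle(5)) and not is_1_walk_regular(path)
```

Cycles are vertex- and edge-transitive and hence pass the test, while the path \(P_3\) already fails condition (i) at \(k=2\) (its endpoints and midpoint have different numbers of closed 2-walks).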

Our main result in this section is a characterization of the set of optimal vector colorings of a 1-walk-regular graph which we now state and prove.

Theorem 4.10

Consider a 1-walk-regular graph \(G=([n],E)\). Let \(G({\mathbf{p}})\subseteq \mathbb {R}^d\) be its least eigenvalue framework and \(P\in \mathbb {R}^{n\times d}\) the corresponding framework matrix.

The vector coloring \({\mathbf{q}}\) is optimal if and only if

$$\begin{aligned} \mathrm{Gram}(q_1, \ldots , q_n) = \frac{n}{d}\big (\textit{PP}^{T}+ \textit{PRP}^{T}\big ), \end{aligned}$$
(12)

where \(R\in \mathcal {S}^d \) is a symmetric matrix satisfying

$$\begin{aligned} p_i^{T}R p_j = 0\quad \text { for all } i \simeq j. \end{aligned}$$

Proof

Let A be the adjacency matrix of G and set \(\tau :=\lambda _{\min }(A)\). Note that \(d = \hbox {corank }(A-\tau I)\), and that the matrix \(E_\tau := \textit{PP}^{T}\) is the orthogonal projector onto the \(\tau \)-eigenspace of G. We first show that the assignment \(i\mapsto \sqrt{\frac{n}{d}}p_i\) is a strict vector coloring. For this we show that \(p_i^{T}p_i = d/n\) for all \(i \in [n]\) and that \(p_i^{T}p_j\) is a negative constant for all \(i \sim j\).

First, note that \(E_\tau \) can be expressed as a polynomial in A. To see this set

$$\begin{aligned} Z:=\prod _{\lambda \ne \tau }\frac{1}{\tau - \lambda }\,(A - \lambda I), \end{aligned}$$

where the product ranges over the eigenvalues of A. Considering Zv where v ranges over an orthonormal basis composed of eigenvectors of A shows that \(Z = E_\tau \).

Since G is 1-walk-regular and \(E_\tau \) is a polynomial in A, there exist scalars \(a, b\) such that

$$\begin{aligned} E_\tau \circ I = aI \quad \text {and}\quad \ E_\tau \circ A = bA. \end{aligned}$$
(13)

Using the fact that \(E_\tau =\textit{PP}^{T}\) it follows from (13) that

$$\begin{aligned} p_i^{T}p_i=a \quad \text {for all } i\in [n], \quad \text {and}\quad p_i^{T}p_j=b \quad \text {for all } i\sim j. \end{aligned}$$
(14)

Moreover, since \(E_\tau \) is the projector onto \(\mathrm{Ker}(A-\tau I)\) and \(d=\hbox {corank }(A-\tau I)\), we have that \({{\mathrm{Tr}}}(E_\tau )=\mathrm{rank}(E_\tau )=d \). On the other hand (13) implies that \({{\mathrm{Tr}}}(E_\tau )=na\) and thus \(a=d/n\), as previously claimed.

Let \(\text {sum}(M)\) denote the sum of the entries of the matrix M. Using (13) combined with the fact that G is r-regular for some r, we get that

$$\begin{aligned} brn = \text { sum}(A \circ E_\tau ) = {{\mathrm{Tr}}}(AE_\tau ) = {{\mathrm{Tr}}}(\tau E_\tau ) = \tau d, \end{aligned}$$

and thus \(b=\tau d/(nr) < 0\), since \(\tau < 0\).

For \(i\in [n]\), set \( \tilde{p}_i:= \sqrt{\frac{n}{d}}p_i\). Since

$$\begin{aligned} \tilde{p}_i^{T}\tilde{p}_i=1 \quad \text {for all } i\in [n] \quad \text {and}\quad \tilde{p}_i^{T}\tilde{p}_j={\tau \over r} \quad \text {for all } i\sim j, \end{aligned}$$

the assignment \(\tilde{\mathbf{p}}\) is a strict vector coloring of G.

Let \(\tilde{G}\) denote the tensegrity graph obtained from the graph G by making all its edges into struts (recall Definition 4.2). By Lemma 4.3 we have that a vector coloring \({\mathbf{q}}\) of G achieves the same or better value compared to \(\tilde{\mathbf{p}}\) if and only if \(\tilde{G}(\tilde{\mathbf{p}})\succeq \tilde{G}({\mathbf{q}})\). Furthermore, since \({\mathbf{p}}\) is the least eigenvalue framework of G, by Proposition 3.4 we know that \(A-\tau I\) is a spherical stress matrix for \(\tilde{G}(\tilde{{\mathbf{p}}})\). Consequently, it follows from Theorem 2.3 that \(\tilde{G}(\tilde{\mathbf{p}})\succeq \tilde{G}({\mathbf{q}})\) is equivalent to

$$\begin{aligned} \hbox {Gram}(q_1,\ldots ,q_n)= \frac{n}{d}\big (\textit{PP}^{T}+ \textit{PRP}^{T}\big ), \end{aligned}$$
(15)

for some \(R\in \mathcal {S}^d\) satisfying \( p_i^{T}Rp_j = 0\) whenever \(i \simeq j\). Lastly, this implies that

$$\begin{aligned} q_i^{T}q_j=\tilde{p}_i^{T}\tilde{p}_j \quad \text {for all } i\simeq j, \end{aligned}$$

and thus \({\mathbf{q}}\) is a vector coloring of G achieving the same value as the vector coloring \(\tilde{\mathbf{p}}\). In particular, no vector coloring achieves a strictly better value than \(\tilde{\mathbf{p}}\), so \(\tilde{\mathbf{p}}\) is optimal and the optimal vector colorings are exactly those satisfying (15). \(\square \)
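The constants a and b from (14) can be checked numerically on a concrete 1-walk-regular graph; the sketch below uses the Petersen graph, i.e., the Kneser graph \(K_{5:2}\), which has \(n=10\), \(r=3\), \(\tau =-2\) and \(d=4\):

```python
import numpy as np
from itertools import combinations

# Petersen graph = Kneser graph K_{5:2}: vertices are the 2-subsets of
# {0,...,4}, adjacent exactly when disjoint.  It is 1-walk-regular.
V = list(combinations(range(5), 2))
n = len(V)
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in V] for u in V])
r = int(A[0].sum())

vals, vecs = np.linalg.eigh(A)               # ascending eigenvalues
tau = vals[0]                                # least eigenvalue, here -2
d = int(np.sum(np.isclose(vals, tau)))       # its multiplicity, here 4
P = vecs[:, :d]                              # framework matrix of the LEF
E_tau = P @ P.T                              # projector onto the tau-eigenspace

# Constants of (14): a = d/n on the diagonal, b = tau d/(nr) on edges.
assert np.allclose(np.diag(E_tau), d / n)
assert np.allclose(E_tau[A > 0], tau * d / (n * r))
# After scaling by sqrt(n/d) the edge inner products become tau/r.
assert np.isclose((n / d) * E_tau[A > 0][0], tau / r)
```

Here the scaled edge inner product is \(\tau /r = -2/3\), matching the value of the strict vector coloring constructed in the proof.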

As a consequence of Theorem 4.10 we identify a necessary and sufficient condition for showing that a 1-walk-regular graph is uniquely vector colorable.

Corollary 4.11

Let \(G=([n],E)\) be 1-walk-regular and let \(G({\mathbf{p}})\subseteq \mathbb {R}^d\) be its least eigenvalue framework. We have that:

  1. (i)

    The assignment \(i\mapsto \sqrt{\frac{n}{d}}p_i \) is an optimal strict vector coloring of G.

  2. (ii)

    G is uniquely vector colorable if and only if for any \(R\in \mathcal {S}^d \) we have:

    $$\begin{aligned} p_i^{T}R p_j = 0 \text { for all } i \simeq j \Longrightarrow R=0. \end{aligned}$$
  3. (iii)

    \({n\over d}\mathrm{Gram}(p_1,\ldots ,p_n)\) is the unique optimal vector coloring of G if and only if the least eigenvalue framework \(G({\mathbf{p}})\) is universally completable.

We note that the construction of an optimal strict vector coloring described above originally appeared in [14], though the proof of optimality there is different and uniqueness is not discussed.
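Condition (ii) of Corollary 4.11 is a finite linear-algebra condition: each pair \(i\simeq j\) imposes a linear functional on the \(d(d+1)/2\)-dimensional space \(\mathcal {S}^d\), and uniqueness amounts to these functionals having full rank. A sketch (the helper name is ours), illustrated on \(C_5\), which is UVC by Theorem 4.7, and on two disjoint copies of \(C_5\), which is not (each copy can be rotated independently):

```python
import numpy as np

def lef_is_uvc_certificate(A, tol=1e-8):
    """Test condition (ii) of Corollary 4.11 for a 1-walk-regular graph:
    the only symmetric R with p_i^T R p_j = 0 whenever i = j or i ~ j
    is R = 0.  Each constraint is a linear functional on S^d, so the
    condition holds iff the constraint matrix has rank d(d+1)/2."""
    vals, vecs = np.linalg.eigh(A)            # ascending eigenvalues
    d = int(np.sum(np.isclose(vals, vals[0])))
    P = vecs[:, :d]                           # least eigenvalue framework
    n = A.shape[0]
    rows = []
    for i in range(n):
        for j in range(n):
            if i == j or A[i, j] > 0:         # the relation i ~= j
                M = np.outer(P[i], P[j])
                M = (M + M.T) / 2             # only the symmetric part acts on R
                rows.append(M[np.triu_indices(d)])
    rank = np.linalg.matrix_rank(np.array(rows), tol=tol)
    return rank == d * (d + 1) // 2

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

assert lef_is_uvc_certificate(cycle(5))                 # C_5 is UVC
assert not lef_is_uvc_certificate(np.kron(np.eye(2), cycle(5)))
```

This is essentially the rank computation behind the algorithm of Sect. 3.3, restricted to the 1-walk-regular case.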

Remark 4.12

Since the Kneser and q-Kneser graphs are 1-walk-regular, by Corollary 4.11(i) the assignment \(i\mapsto \sqrt{\frac{n}{d}}p_i \) is an optimal strict vector coloring, where \(\{p_i\}\) is the corresponding LEF. On the other hand, the frameworks for the Kneser and q-Kneser graphs described in Sect. 3.5 are universally completable and thus, by Theorem 4.5, they are congruent to the corresponding LEFs.

5 Concluding Remarks

In the first part of this work we considered general tensegrity frameworks. We showed that for any framework \(G({\mathbf{p}})\) for which there exists a spherical stress matrix, we can provide a description of the set of frameworks that are dominated by \(G({\mathbf{p}})\). We then introduced least eigenvalue frameworks and identified two necessary and sufficient conditions for determining when a least eigenvalue framework is universally completable. Using these conditions we showed that a family of least eigenvalue frameworks for the Kneser and q-Kneser graphs are universally completable. Lastly, by reformulating our conditions in terms of the Strong Arnold Property, we gave an efficient algorithm for determining when a least eigenvalue framework is universally completable. Using this, we collected data on Cayley graphs over \(\mathbb {Z}_2^n\ (n\le 5)\) indicating that it is fairly common for the least eigenvalue framework to be universally completable.

In the second part of this work, we introduced the notion of unique vector colorability and showed that certifying the optimality of a strict vector coloring can be reduced to the universal completability of an appropriately defined tensegrity framework. This fact allowed us to conclude that odd cycles, Kneser, and q-Kneser graphs are uniquely vector colorable. Lastly, we characterized the set of optimal vector colorings for the class of 1-walk-regular graphs. As a corollary we obtained that the least eigenvalue framework of a 1-walk-regular graph is always an optimal strict vector coloring and, moreover, we identified a necessary and sufficient condition for a 1-walk-regular graph to be uniquely vector colorable.

As a follow up to this work, we are currently preparing two other articles on vector colorings and unique vector colorability [12, 13]. The first of these focuses on the relationship between vector colorings and graph homomorphisms, and in particular, how to use knowledge of the former to obtain information about the latter. This allows us to prove that, under certain conditions, unique vector colorability is a sufficient condition for a graph to be a core (have no proper endomorphisms). Our other article investigates vector colorings of categorical products. Here we prove a vector coloring analog of the well-known Hedetniemi conjecture, and also greatly generalize the result of Pak and Vilenchik mentioned in Sect. 4.2 above. Specifically, we show that if G is any uniquely vector colorable graph admitting a spherical stress matrix, and H is any connected graph with \(\chi _\mathrm{v}(H) > \chi _\mathrm{v}(G)\), then \(G \times H\) is uniquely vector colorable. Moreover, we prove that if \(\chi _\mathrm{v}(G) = \chi _\mathrm{v}(H)\), and both G and H are UVC and admit spherical stress matrices, then there are exactly two optimal vector colorings of \(G \times H\): one induced by each of the factors. These results constitute a vector coloring analog of three unproven statements about colorings of categorical products investigated by Duffus, Sands, and Woodrow [6]. Finally, we show that even when the uniqueness condition is dropped all of the optimal vector colorings of the product can still be described in terms of the optimal vector colorings of the factors.