Abstract
A tensegrity is a structure made from cables, struts, and stiff bars. A ddimensional tensegrity is universally rigid if it is rigid in any dimension \(d'\) with \(d'\ge d\). The celebrated super stability condition due to Connelly gives a sufficient condition for a tensegrity to be universally rigid. Gortler and Thurston showed that super stability characterizes universal rigidity when the point configuration is generic and every member is a stiff bar. We extend this result in two directions. We first show that a generic universally rigid tensegrity is super stable. We then extend it to tensegrities with point group symmetry, and show that this characterization still holds as long as a tensegrity is generic modulo symmetry. Our strategy is based on the blockdiagonalization technique for symmetric semidefinite programming problems, and our proof relies on the theory of real irreducible representations of finite groups.
1 Introduction
A tensegrity is a stable structure made from cables, struts, and stiff bars. Since their invention by Kenneth Snelson, the theory and the applications of tensegrities have been extensively studied from various perspectives. A mathematical foundation for the rigidity or stability analysis has been established in the context of rigidity theory [8, 15, 25]. Following the notation in that context, we define a (d-dimensional) tensegrity as a triple \((G, \sigma , p)\) of an edge-signed graph \((G,\sigma )\) with \(\sigma :E(G)\rightarrow \{-1,0,+1\}\) and a point-configuration \(p:V(G)\rightarrow {\mathbb {R}}^d\). Here each vertex i corresponds to a joint \(p_i=p(i)\in {\mathbb {R}}^d\), and each edge \(e=ij\) with \(\sigma (e)=+1/0/-1\) corresponds to a cable/bar/strut, respectively, between joints \(p_i\) and \(p_j\). When every member is a stiff bar (that is, \(\sigma (e)=0\) for every \(e\in E(G)\)), a tensegrity is called a bar-joint framework, which is the central object of study in rigidity theory.
In a tensegrity, all bars are stiff and cannot change length, while cables are allowed to decrease in length and struts are allowed to increase in length. Given the system of these geometric constraints, the global rigidity of the tensegrity is defined in terms of the uniqueness of the solution of the system up to isometries. More formally, given an edge-signed graph \((G,\sigma )\), two point-configurations p, q for \((G,\sigma )\) are said to be congruent if
$$\begin{aligned} \Vert p_i-p_j\Vert =\Vert q_i-q_j\Vert \qquad (i,j\in V(G)), \end{aligned}$$
where \(\Vert \cdot \Vert \) denotes the Euclidean norm, and a tensegrity \((G,\sigma ,p)\) is congruent to a tensegrity \((G,\sigma ,q)\) if p is congruent to q. We say that a tensegrity \((G,\sigma ,p)\) dominates a tensegrity \((G,\sigma ,q)\) if
$$\begin{aligned} \Vert q_i-q_j\Vert&\le \Vert p_i-p_j\Vert \qquad (ij\in E(G) \text { with } \sigma (ij)=+1),\\ \Vert q_i-q_j\Vert&=\Vert p_i-p_j\Vert \qquad (ij\in E(G) \text { with } \sigma (ij)=0),\\ \Vert q_i-q_j\Vert&\ge \Vert p_i-p_j\Vert \qquad (ij\in E(G) \text { with } \sigma (ij)=-1). \end{aligned}$$
This dominance captures the set of possible deformations of a given tensegrity \((G,\sigma , p)\), where a tensegrity \((G,\sigma ,q)\) satisfies the geometric constraints posed by cables/bars/struts of \((G,\sigma , p)\) if and only if \((G,\sigma ,q)\) is dominated by \((G,\sigma ,p)\). A d-dimensional tensegrity \((G,\sigma ,p)\) is globally rigid if every d-dimensional tensegrity \((G,\sigma ,q)\) dominated by \((G,\sigma ,p)\) is congruent to \((G,\sigma ,p)\).
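The dominance relation above can be checked mechanically. The following is a minimal numerical sketch (the helper `dominates` and its signature are ours, not the paper's), illustrated on a square with cable sides and strut diagonals:

```python
import numpy as np

# Hypothetical helper: check whether configuration q is dominated by p for an
# edge-signed graph, i.e. cables (+1) may only shorten, bars (0) keep their
# length, and struts (-1) may only lengthen.
def dominates(p, q, signed_edges, tol=1e-9):
    """signed_edges: list of (i, j, sigma) with sigma in {-1, 0, +1}."""
    for i, j, sigma in signed_edges:
        dp = np.linalg.norm(p[i] - p[j])
        dq = np.linalg.norm(q[i] - q[j])
        if sigma == 0 and abs(dp - dq) > tol:   # bar: length fixed
            return False
        if sigma == +1 and dq > dp + tol:       # cable: cannot stretch
            return False
        if sigma == -1 and dq < dp - tol:       # strut: cannot shrink
            return False
    return True

# Unit square with cable sides and strut diagonals.
p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1, +1), (1, 2, +1), (2, 3, +1), (3, 0, +1), (0, 2, -1), (1, 3, -1)]
q = 0.5 * p   # a uniform shrink: cables shorten, but struts shorten too
```

Here `dominates(p, p, edges)` holds trivially, while the shrunken configuration `q` is not dominated by `p` because the struts become shorter.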
Connelly [8] initiated the rigidity analysis of tensegrities, and in his paper [8] in 1982 he gave a celebrated sufficient condition for global rigidity in terms of stress matrices (that is, graph Laplacian matrices weighted by equilibrium self-stresses). Tensegrities satisfying his sufficient condition are called super stable, and super stability is now used as a major criterion for structural engineers to develop new tensegrities (see, e.g., [31]).
Recently, Connelly’s super stability condition received attention in the context of the sensor network localization or the graph realization problem [1, 2, 28]. To understand the exact solvability of the SDP relaxation, So and Ye [28] looked at a stronger rigidity property, called universal rigidity. Suppose that \((G,\sigma ,p)\) is a d-dimensional tensegrity whose ambient space \({\mathbb {R}}^d\) lies in \({\mathbb {R}}^{d'}\) for each integer \(d'\ge d\). Then \((G,\sigma ,p)\) is also a tensegrity in \({\mathbb {R}}^{d'}\). We say that \((G,\sigma ,p)\) is universally rigid if \((G,\sigma ,p)\) is globally rigid in \({\mathbb {R}}^{d'}\) for every integer \(d'\ge d\). Clearly, universal rigidity implies global rigidity, but the converse implication does not hold in general, as indicated in Fig. 1 (see, e.g., [13] for further interaction between the two rigidity concepts).
Although universal rigidity is stronger than global rigidity, super stability still implies universal rigidity, as is implicit in Connelly’s original work [8]. It turns out that super stability even characterizes universal rigidity for almost all bar-joint frameworks. Specifically, we say that a tensegrity (or a bar-joint framework) is generic if the set of coordinates of the points is algebraically independent over \({\mathbb {Q}}\). In 2014, Gortler and Thurston [20] proved that a generic bar-joint framework (G, p) is universally rigid if and only if it is super stable.
The goal of this paper is to extend the Gortler–Thurston characterization in two directions. We first extend the result to tensegrities, and show that universal rigidity and super stability coincide for generic tensegrities. We then extend it to tensegrities with point group symmetry, where a finite point group faithfully acts on the underlying signed graphs and the point-configurations are compatible with this action (see Sect. 4 for the formal definition). Note that a priori a tensegrity with point group symmetry is not generic, but we shall prove that the characterization still holds as long as tensegrities are “generic modulo symmetry”.
Infinitesimal rigidity of symmetric frameworks has been widely investigated; see, e.g., [22, 26] and references therein. These studies deal with the rigidity of symmetric structures in two settings: forced-symmetric rigidity and incidentally symmetric rigidity. Forced-symmetric rigidity considers only deformations that maintain the underlying symmetry, while incidentally symmetric rigidity deals with deformations that may break the underlying symmetry. In this paper, for symmetric tensegrities, we focus on the universal rigidity of the latter type.
As given in tensegrity catalogues, most existing tensegrities exhibit symmetry or are compositions of simple symmetric modules, and building larger tensegrities based on group symmetry is now a standard technique in structural engineering. The technique was initiated by Connelly and Terrell [14], who showed how to simplify the super stability condition via finite group representation theory. Although their paper focuses on particular instances, the technique is general enough to design a larger class of symmetric tensegrities [10]. An implication of our result is that any universally rigid tensegrity whose point-configuration is generic modulo symmetry can be obtained from stress matrices constructed as in the method of Connelly and Terrell.
We should remark that, as shown by Connelly and Gortler [11], the universal rigidity of tensegrities can be characterized by a sequence of dual solutions in the facial reduction procedure due to Borwein and Wolkowicz [7]. We however believe that the characterization in terms of stress matrices (or weighted Laplacians) is important toward characterizing the global rigidity of symmetric tensegrities. A characterization of the global rigidity of generic bar-joint frameworks is known in terms of stress matrices [19].
Technically, our work is closely related to the topic of strict complementarity in semidefinite programming (SDP) problems, or equivalently to the facial exposedness of projections of positive semidefinite cones. Understanding the existence of a strictly complementary pair of primal and dual solutions is a classical but still ongoing research topic in convex optimization (see, e.g., [16] for a recent result). The characterization problem of the universal rigidity of bar-joint frameworks is known to be equivalent to the existence of strictly complementary pairs of primal and dual solutions in the Euclidean matrix completion problem (see Sect. 2 for details). Several works answer the characterization problem (or the existence of a strictly complementary pair) for special classes of graphs [4, 17, 29], while Gortler–Thurston [20] solved the problem assuming a certain genericity of input entries. This paper provides a new direction based on group symmetry to go beyond generic instances.
Our proof strategy is based on the block-diagonalization technique for symmetric SDP problems. Here the general idea is to use the block-diagonalization of the underlying matrix algebra to decompose SDP instances into smaller pieces, and the method has been successfully used to solve large-scale SDP problems; see, e.g., [5, 18, 23]. Also, prior research [6, 21] on this technique is motivated by the optimal design of truss structures. Our technical contribution is to use the block-diagonalization technique to analyze the facial structure of SDP problems rather than to reduce computational cost, and our proof essentially relies on the theory of real irreducible representations.
2 Semidefinite programming problem for universal rigidity
In this section we explain the background material for analyzing universal rigidity from the viewpoint of semidefinite programming.
Throughout the paper we use the following notation. Let V be a finite set with \(|V|=n\) (typically \(V=\{1,2,\dots , n\}\)). For a finite set X with \(|X|=m\), let \({\mathbb {R}}^X\) be the m-dimensional Euclidean space whose entries are indexed by the elements of X. For \(i\in X\), let \({\varvec{e}}_i\) be the unit vector of \({\mathbb {R}}^X\) whose ith entry is one and all other entries are zero, and let \({{\varvec{1}}}_X=\sum _{i\in X} {\varvec{e}}_i\). Similarly, let \({\mathcal {S}}^X\) be the set of all \(m \times m\) symmetric matrices whose entries are indexed by the pairs of elements in X. Throughout the paper, \({\mathcal {S}}^X\) is regarded as a Euclidean space with the trace inner product \(\langle \cdot , \cdot \rangle \) defined by \(\langle A,B \rangle =\mathrm{tr}\,AB\). If \(A \in {\mathcal {S}}^X\) is positive semidefinite, we write \(A \succeq 0\), and let \({\mathcal {S}}^X_+=\{ A \in {\mathcal {S}}^X: A\succeq 0\}\).
For a graph G, let \(N_G(i)\) be the set of all neighbors of \(i\in V(G)\) in G, and let \({\overline{N}}_G(i)=N_G(i)\cup \{i\}\).
2.1 Weighted Laplacian and configurations
Given a graph \(G=(V,E)\) with edge weight \(\omega :E(G)\rightarrow {\mathbb {R}}\), its Laplacian \(L_{G,\omega }\) is defined by
$$\begin{aligned} L_{G,\omega }[i,j]= {\left\{ \begin{array}{ll} -\omega _{ij} &{}\quad \text {if } i\ne j \text { and } ij\in E,\\ 0 &{}\quad \text {if } i\ne j \text { and } ij\notin E,\\ \sum _{k: ik\in E} \omega _{ik} &{}\quad \text {if } i=j, \end{array}\right. } \end{aligned}$$
where \(\omega _{ij}=\omega (ij)\).
It is symmetric and always satisfies \(L_{G,\omega }{{\varvec{1}}}_V=0\). A weighted Laplacian of the complete graph on V is simply called a Laplacian matrix. (Equivalently, a symmetric matrix L is a Laplacian matrix if \(L{{\varvec{1}}}_V=0\).) Let \({\mathcal {L}}^V\) be the set of all Laplacian matrices. Then \({\mathcal {L}}^V\) is a linear subspace of \({\mathcal {S}}^V\) given by
$$\begin{aligned} {\mathcal {L}}^V=\left\{ \sum _{i,j\in V,\, i\ne j} \omega _{ij} F_{ij}: \omega _{ij}\in {\mathbb {R}}\right\} \quad \text {with}\quad F_{ij}=({\varvec{e}}_i-{\varvec{e}}_j)({\varvec{e}}_i-{\varvec{e}}_j)^{\top }, \end{aligned}$$
where \(\{F_{ij}: i,j\in V, i\ne j\}\) forms a basis.
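The matrices \(F_{ij}\) and the resulting weighted Laplacian are easy to build explicitly. A minimal numpy sketch (helper names `F` and `laplacian` are ours), using the square-with-diagonals weights that reappear later:

```python
import numpy as np

# F_ij = (e_i - e_j)(e_i - e_j)^T, and L_{G,w} = sum_{ij in E} w_ij F_ij.
def F(i, j, n):
    v = np.zeros(n)
    v[i], v[j] = 1.0, -1.0
    return np.outer(v, v)

def laplacian(n, weighted_edges):
    """weighted_edges: list of (i, j, w)."""
    return sum(w * F(i, j, n) for i, j, w in weighted_edges)

n = 4
# Square edges with weight +1, diagonals with weight -1.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0),
         (0, 2, -1.0), (1, 3, -1.0)]
L = laplacian(n, edges)
```

One can verify that `L` is symmetric, satisfies \(L{{\varvec{1}}}_V=0\), and here happens to have rank 1.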
Let \(J_V={{\varvec{1}}}_V {{\varvec{1}}}_V^{\top }\). When \(L\succeq 0\), \(L\in {\mathcal {L}}^V\) if and only if \(\langle L, J_V\rangle =0\). Hence the set of positive semidefinite Laplacian matrices \({\mathcal {L}}_+^V\) is given by
$$\begin{aligned} {\mathcal {L}}_+^V=\{L\in {\mathcal {S}}^V_+: \langle L, J_V\rangle =0\}. \end{aligned}$$
Let \(q:V\rightarrow {\mathbb {R}}^d\) be a d-dimensional point configuration for some positive integer d. We identify q with a matrix Q of size \(d\times n\) whose ith column vector is \(q_i\). We then have \(Q^{\top } Q\succeq 0\), and \(\langle Q^\top Q, J_V \rangle = 0\) holds if and only if the center of gravity of q(V) is the origin, i.e., \(\sum _{i\in V} q_i=0\). The matrix \(Q^\top Q\) is called the Gram matrix of q. Since the properties we are interested in (such as universal rigidity) are invariant under translations, throughout the paper we shall focus on tensegrities whose center of gravity is at the origin.
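The relation between centering and the Gram matrix can be checked numerically; the following is a small sketch (the configuration is an arbitrary example of ours):

```python
import numpy as np

# Center a configuration, form the Gram matrix Q^T Q, and verify that
# <Q^T Q, J_V> = 0 encodes "center of gravity at the origin".
q = np.array([[1.0, 2.0], [3.0, 0.0], [2.0, 4.0], [0.0, 2.0]])  # rows = points
q_centered = q - q.mean(axis=0)   # translate the center of gravity to 0
Q = q_centered.T                  # d x n matrix whose columns are the q_i
X = Q.T @ Q                       # Gram matrix, positive semidefinite
J = np.ones((4, 4))               # J_V = 1 1^T
```

The trace inner product \(\langle X, J_V\rangle \) equals the sum of all entries of \(X\), which is \(\Vert \sum _i q_i\Vert ^2\) and hence vanishes after centering.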
We denote by \({\mathcal {C}}_d(V)\) the set of all point configurations \(q:V\rightarrow {\mathbb {R}}^d\) such that \(\sum _{i\in V} q_i=0\) and q(V) affinely spans \({\mathbb {R}}^d\), and let \({\mathcal {C}}(V)=\bigcup _{d\in {\mathbb {Z}}_{\ge 0}} {\mathcal {C}}_d(V)\). Then we have
$$\begin{aligned} \{Q^{\top }Q: q\in {\mathcal {C}}_d(V)\}=\{L\in {\mathcal {L}}^V_+: \mathrm{rank}\,L=d\} \end{aligned}$$
(1)
and
$$\begin{aligned} \{Q^{\top }Q: q\in {\mathcal {C}}(V)\}={\mathcal {L}}^V_+. \end{aligned}$$
(2)
2.2 SDP formulation
Let \((G,\sigma , p)\) be a d-dimensional tensegrity. Let \(E_0=\sigma ^{-1}(0)\), \(E_+=\sigma ^{-1}(+1)\), and \(E_-=\sigma ^{-1}(-1)\). We consider the following semidefinite programming problem for \((G,\sigma ,p)\):
$$\begin{aligned} \text {(P)}\qquad \text {find } X\in {\mathcal {L}}^V_+ \text { such that } {\left\{ \begin{array}{ll} \langle F_{ij}, X\rangle \le \langle F_{ij}, P^{\top }P\rangle &{}\quad (ij\in E_+),\\ \langle F_{ij}, X\rangle =\langle F_{ij}, P^{\top }P\rangle &{}\quad (ij\in E_0),\\ \langle F_{ij}, X\rangle \ge \langle F_{ij}, P^{\top }P\rangle &{}\quad (ij\in E_-). \end{array}\right. } \end{aligned}$$
By (2), any feasible X is written as \(X=Q^{\top }Q\) for some \(q\in {\mathcal {C}}(V)\). Moreover,
$$\begin{aligned} \langle F_{ij}, Q^{\top }Q\rangle =\Vert q_i-q_j\Vert ^2\qquad (i,j\in V) \end{aligned}$$
holds, which means that \(Q^\top Q\) is feasible if and only if \((G,\sigma ,q)\) is dominated by \((G,\sigma ,p)\). It can also be checked that \(Q^\top Q=P^{\top } P\) holds if and only if p and q are congruent. Therefore, we have the following.
Proposition 2.1
\((G,\sigma ,p)\) is universally rigid if and only if (P) has a unique feasible solution.
Since \({\mathcal {L}}^V_+\subset {\mathcal {L}}^V\) and \(F_{ij}\in {\mathcal {L}}^V\), we can consider the dual problem of (P) in \({\mathcal {L}}^V\), that is,
$$\begin{aligned} \text {(D)}\qquad \begin{array}{lll} \text {minimize} &{} \sum _{ij\in E(G)} \omega _{ij} \langle F_{ij}, P^{\top }P\rangle &{}\\ \text {subject to} &{} \sum _{ij\in E(G)} \omega _{ij} F_{ij}\succeq 0, &{}\\ &{} \sigma (ij)\,\omega _{ij}\ge 0 &{}\quad (ij\in E_+\cup E_-). \end{array} \end{aligned}$$
By weak duality, the dual optimal value is at least 0, and it is indeed 0 as it is attained by \(\omega =0\).
If we regard a dual variable \(\omega :E(G)\rightarrow {\mathbb {R}}; ij\mapsto \omega _{ij}\) as an edge weight of G, then \(L_{G,\omega }=\sum _{ij\in E(G)} \omega _{ij} F_{ij}\), and hence the first dual constraint is written as \(L_{G,\omega }\succeq 0\). Moreover, the objective function is equal to \(\langle P^\top P,L_{G,\omega } \rangle \). Hence, for \(\omega \) with \(L_{G,\omega }\succeq 0\), \(\omega \) attains the dual optimal value if and only if \(PL_{G,\omega }=0\) holds. In terms of p, the latter condition becomes
$$\begin{aligned} \sum _{j\in N_G(i)} \omega _{ij}\left( p_i-p_j\right) =0\qquad (i\in V(G)). \end{aligned}$$
(3)
Equation (3) is nothing but the equilibrium condition for structures to be statically rigid, and the equation frequently appears in rigidity theory. In general, for a tensegrity \((G,\sigma , p)\), an edge weight \(\omega :E(G)\rightarrow {\mathbb {R}}\) is said to be an equilibrium stress if \(\omega \) satisfies (3). Also, \(\omega \) is said to be proper if
$$\begin{aligned} \sigma (ij)\,\omega _{ij}\ge 0\qquad (ij\in E_+\cup E_-). \end{aligned}$$
(4)
We further say that \(\omega \) is strictly proper if (4) holds with strict inequality for every \(ij \in E_+ \cup E_-\). The condition (4) reflects the physical fact that a cable can only carry tension while a strut can only carry compression (see [25] for more details).
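The equilibrium condition can be verified numerically on a concrete example. The sketch below (helper `is_equilibrium` is ours) checks the classical square tensegrity, whose cable weights \(+1\) and strut weights \(-1\) form a strictly proper equilibrium stress:

```python
import numpy as np

# w is an equilibrium stress iff sum_j w_ij (p_i - p_j) = 0 at every vertex i.
def is_equilibrium(p, weighted_edges, tol=1e-9):
    resid = np.zeros_like(p)
    for i, j, w in weighted_edges:
        resid[i] += w * (p[i] - p[j])   # contribution at vertex i
        resid[j] += w * (p[j] - p[i])   # contribution at vertex j
    return np.max(np.abs(resid)) < tol

# Square tensegrity: cables (w=+1) on the sides, struts (w=-1) on the diagonals.
p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w_edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0),
           (0, 2, -1.0), (1, 3, -1.0)]

# Perturbing one joint destroys the equilibrium.
p_perturbed = p.copy()
p_perturbed[0, 0] = 0.1
```

The stress is proper since cables carry positive and struts negative weights, matching condition (4).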
With this notation, the discussion is summarized as follows.
Proposition 2.2
An edge weight \(\omega :E(G)\rightarrow {\mathbb {R}}\) is an optimal solution of (D) if and only if it is a proper equilibrium stress of \((G,\sigma ,p)\).
2.3 Facial structure of \({\mathcal {L}}_+^V\)
In the next two subsections, we present the high-level ideas of Connelly’s sufficient condition and Gortler–Thurston’s characterization, since our technical result is built on these ideas. The key ingredient in both results is the facial structure of \({\mathcal {L}}_+^V\).
Let C be a nonempty convex set in a Euclidean space. The dimension of C is the dimension of the smallest affine subspace containing C and is denoted by \(\dim C\). A convex subset \(F \subseteq C\) is a face if for any \(x,y \in C\), \(\frac{x+y}{2} \in F\) implies \(x,y \in F\). For \(x \in C\), the smallest face containing x is called the minimal face of x and is denoted by \(F_C(x)\). We say that a hyperplane H exposes a face F of C if \(F=C\cap H\) and H supports C (i.e., C lies in a closed halfspace defined by H). A face F is said to be exposed if there is a hyperplane exposing F. To simplify the presentation, we also regard the ambient space as a hyperplane whose normal vector is the zero vector; then C itself is always exposed. C is called exposed if every face of C is exposed. It is well-known that \({{{\mathcal {S}}}}_+^V\) (in \({{{\mathcal {S}}}}^V\)) is exposed, but this is not a general property of convex sets. Moreover, the following properties are known for the facial structure of \({{{\mathcal {S}}}}^n_+\) (see, e.g., [24]).
Proposition 2.3
Let \(A \in {{{\mathcal {S}}}}^n_+\) be a matrix with rank d. Then \(\dim F_{{{{\mathcal {S}}}}^n_+} (A) = \left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \).
For \(B \in {{{\mathcal {S}}}}^n\), the hyperplane \(\{X \in {{{\mathcal {S}}}}^n : \langle X,B \rangle =0\}\) exposes \(F_{{{{\mathcal {S}}}}^n_+} (A)\) if and only if B satisfies \(\mathrm{rank}\,A + \mathrm{rank}\,B =n\), \(\langle A,B \rangle =0\), and \(B \succeq 0\).
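The exposing condition of Proposition 2.3 can be illustrated numerically: for a positive semidefinite A of rank d, the orthogonal projector onto \(\ker A\) is a natural exposing matrix. A small sketch (the construction via SVD is ours):

```python
import numpy as np

# Build a random PSD matrix A of rank 2 in S^5, and take B to be the
# orthogonal projector onto ker(A); then rank A + rank B = n, <A,B> = 0,
# and B is positive semidefinite, as in Proposition 2.3.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 5))
A = Q.T @ Q                       # PSD of rank 2 (generically)
U, s, _ = np.linalg.svd(A)
B = U[:, 2:] @ U[:, 2:].T         # projector onto the kernel of A
```

This is exactly the complementary-slackness pattern \(\mathrm{rank}\,A + \mathrm{rank}\,B = n\) used throughout the facial analysis.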
Now we are interested in the facial structure of \({\mathcal {L}}_+^V\) in \({\mathcal {L}}^V\). It is known that \({\mathcal {L}}_+^V\) is a face of \({\mathcal {S}}_+^V\), and one can understand the facial structure of \({\mathcal {L}}_+^V\) by restricting the ambient matrix space to \({\mathcal {L}}^V\). Based on Proposition 2.3, the following properties follow easily.
Proposition 2.4
Let \(L\in {\mathcal {L}}_+^V\) be a matrix with rank d. Then \(\dim F_{{\mathcal {L}}_+^V}(L)=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \).
For \(M\in {\mathcal {L}}^V\), the hyperplane \(\{X\in {\mathcal {L}}^V: \langle X, M\rangle =0\}\) exposes \(F_{{\mathcal {L}}_+^V}(L)\) if and only if M satisfies \(\mathrm{rank}\,L+\mathrm{rank}\,M=|V|-1\), \(\langle L, M\rangle =0\), and \(M\succeq 0\).
2.4 Connelly’s sufficient condition
The following is Connelly’s super stability condition.
Theorem 2.5
(Connelly [8]). Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with n vertices, and suppose that

(i)
it has a strictly proper equilibrium stress \(\omega \) such that \(L_{G,\omega }\succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\), and

(ii)
there is no nonzero symmetric matrix S of size \(d\times d\) such that
$$\begin{aligned} \left( p_i-p_j\right) ^{\top } S\left( p_i-p_j\right) =0\qquad (ij\in E(G)). \end{aligned}$$
Then \((G,\sigma ,p)\) is universally rigid.
The condition (ii) is referred to as the conic condition for the edge directions.
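Both parts of Connelly’s condition can be checked numerically on the square tensegrity (cables on the sides with stress \(+1\), struts on the diagonals with stress \(-1\)); the code below is our illustrative sketch, not the paper’s:

```python
import numpy as np

p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
n, d = 4, 2
w = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0,
     (0, 2): -1.0, (1, 3): -1.0}

# (i) stress matrix: should be PSD of rank n - d - 1 = 1.
L = np.zeros((n, n))
for (i, j), wij in w.items():
    L[i, j] -= wij; L[j, i] -= wij
    L[i, i] += wij; L[j, j] += wij

# (ii) conic condition: the only symmetric S with (p_i-p_j)^T S (p_i-p_j) = 0
# for all edges is S = 0.  In 2D, S has 3 unknowns (s11, s12, s22), so we
# check that the linear system below has full column rank.
rows = []
for i, j in w:
    x, y = p[i] - p[j]
    rows.append([x * x, 2 * x * y, y * y])
M = np.array(rows)
```

Full column rank of `M` means the conic condition holds, so this tensegrity is super stable and hence universally rigid.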
Connelly [9] pointed out that for a generic bar-joint framework with at least \(d+1\) vertices the conic condition for the edge directions always holds. Recent papers [3, 12] examine how to ensure the conic condition for the edge directions without genericity. For practical purposes, the following statement due to Alfakih and Nguyen is sufficiently general.
Theorem 2.6
(Alfakih and Nguyen [3]). Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with n vertices such that \(p({\overline{N}}_G(i))\) affinely spans \({\mathbb {R}}^d\) for each \(i\in V(G)\). Suppose also that \((G,\sigma ,p)\) has a strictly proper equilibrium stress \(\omega \) such that \(L_{G,\omega }\succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\). Then the conic condition for the edge directions holds.
See [12] for stronger sufficient conditions.
2.5 Characterization by Gortler and Thurston
Gortler and Thurston [20] proved the following converse of Theorem 2.5 for generic bar-joint frameworks.
Theorem 2.7
(Gortler–Thurston [20]). A generic d-dimensional bar-joint framework (G, p) with \(n\ge d+2\) vertices is universally rigid if and only if it has an equilibrium stress \(\omega \) such that \(L_{G,\omega }\succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\).
The sufficiency is due to Connelly. The proof of the necessity goes as follows. Suppose that (G, p) is a universally rigid generic framework, and we want to find \(L_{G,\omega }\) as given in the statement. Translate p so that the center of gravity is at the origin, and consider the face \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\). By Proposition 2.4, \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) is exposed by the hyperplane \(\{X\in {\mathcal {L}}^V: \langle X, L\rangle =0\}\) for some \(L\in {\mathcal {L}}^V_+\) with
$$\begin{aligned} \mathrm{rank}\,P^{\top }P+\mathrm{rank}\,L=n-1 \end{aligned}$$
(5)
and
$$\begin{aligned} \langle P^{\top }P, L\rangle =0. \end{aligned}$$
(6)
By \(L\in {\mathcal {L}}^V_+\), we have \(L\succeq 0\), and by (5), we also have \(\mathrm{rank}\,L=n-d-1\). Hence, if \(L=L_{G,\omega }\) for some \(\omega :E(G)\rightarrow {\mathbb {R}}\) (i.e., the (i, j)th entry of L is zero whenever \(ij\notin E(G)\) and \(i\ne j\)), then Proposition 2.2 and (6) imply that \(\omega \) is an equilibrium stress.
Therefore, what remains is to prove that \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) is exposed by the hyperplane defined by \(L_{G,\omega }\) for some \(\omega \).
To find such \(L_{G,\omega }\), consider the subspace
$$\begin{aligned} {\mathcal {L}}(G)=\left\{ \sum _{ij\in E(G)} \omega _{ij} F_{ij}: \omega :E(G)\rightarrow {\mathbb {R}}\right\} \end{aligned}$$
of \({\mathcal {L}}^V\). The idea is to look at the projection \(\pi \) of \({\mathcal {L}}^V\) to \({\mathcal {L}}(G)\), and compute the hyperplane H of \({\mathcal {L}}(G)\) exposing the minimal face of \(\pi (P^{\top }P)\) in \(\pi ({\mathcal {L}}^V_+)\). This hyperplane H is defined by some \(L\in {\mathcal {L}}(G)\), which is equivalent to having an expression \(L=L_{G,\omega }\) for some \(\omega \), and \(\pi ^{-1}(H)\) (defined by L) would be the hyperplane of \({\mathcal {L}}^V\) exposing \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\), as required.
There is one technical subtlety in this argument: the minimal face of \(\pi (P^{\top }P)\) in \(\pi ({\mathcal {L}}_+^V)\) may not be exposed. (Even if every face of \({\mathcal {L}}_+^V\) is exposed, \(\pi ({\mathcal {L}}_+^V)\) may have a non-exposed face.) The main technical observation of Gortler–Thurston [20] is that, if \(P^{\top } P\) is generic in a certain sense, the minimal face of \(\pi (P^{\top }P)\) is exposed.
Our proof follows the same technique, and a detailed description will be given in Sect. 3. In order to give a rigorous discussion, we review the following materials from [20].
A subset of a Euclidean space is semialgebraic over \({\mathbb {Q}}\) if it is described by a finite number of algebraic equalities and inequalities with rational coefficients. Let S be a semialgebraic set defined over \({\mathbb {Q}}\). A point \(x \in S\) is generic in S if there is no polynomial f with rational coefficients such that \(f(x)=0\) and \(f(y) \ne 0\) for some \(y \in S\). A point \(x \in S\) is locally generic in S if, for small enough \(\epsilon >0\), x is generic in \(S \cap B_\epsilon (x)\). The following proposition can be used to “transfer” the genericity of a point configuration p to \(P^{\top }P\).
Proposition 2.8
(Gortler–Thurston [20, Lemma 2.6]). Let S be a semialgebraic set defined over \({\mathbb {Q}}\) and f be an algebraic map from S to a Euclidean space. If x is generic in S, then f(x) is generic in f(S).
Let C be a nonempty convex set in a Euclidean space. C is line-free if it contains no complete affine line. A point \(x \in C\) is k-extreme if \(\dim F_C(x) \le k\). We denote by \(\text {ext}_k(C)\) the set of k-extreme points of C.
We will use the following combination of [20, Proposition 4.14] and [20, Theorem 2], which is also explicit in the proof of the main theorem of [20].
Proposition 2.9
Let C be a closed line-free convex semialgebraic set in \({\mathbb {R}}^m\), and \(\pi :{\mathbb {R}}^m\rightarrow {\mathbb {R}}^n\) be a projection, both defined over \({\mathbb {Q}}\). Suppose that x is locally generic in \(\text {ext}_k(C)\) for some k and \(\pi ^{-1}(\pi (x))\cap C\) is a singleton set. Then there exists a hyperplane H in \({\mathbb {R}}^n\) such that \(\pi ^{-1}(H)\) exposes \(F_C(x)\).
3 Characterizing the universal rigidity of tensegrities
In this section we prove an extension of Theorem 2.7 to tensegrities.
Theorem 3.1
Let \((G,\sigma ,p)\) be a generic d-dimensional tensegrity with \(n \ge d+2\) vertices. Then \((G,\sigma ,p)\) is universally rigid if and only if it has a strictly proper equilibrium stress \(\omega \) such that \(\mathrm{rank}\,L_{G,\omega }=n-d-1\) and \(L_{G,\omega }\succeq 0\).
The sufficiency follows from Theorems 2.5 and 2.6, and we focus on the necessity. When applying Gortler–Thurston’s proof to tensegrities, we need to further ensure that \(\omega \) is strictly proper, i.e., that the sign condition (4) holds with strict inequality. In the above proof sketch, this requires us to find an exposing hyperplane of \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) satisfying a sign condition on its nonzero entries. We show how to get around this by a simple trick.
Proof of Theorem 3.1
To see the necessity, suppose that \((G,\sigma ,p)\) is universally rigid. Translate the configuration so that the center of gravity is at the origin. Our idea is to introduce a slack variable for each constraint of (P). For this, we consider the ambient space \({{{\mathcal {K}}}}\) and the convex cone \({{{\mathcal {K}}}}_+\) defined by
$$\begin{aligned} {\mathcal {K}}={\mathcal {L}}^V\times \left( {\mathbb {R}}^{E_\pm }\times \{0\}^{E_0}\right) \quad \text {and}\quad {\mathcal {K}}_+={\mathcal {L}}^V_+\times \left( {\mathbb {R}}_{\ge 0}^{E_\pm }\times \{0\}^{E_0}\right) , \end{aligned}$$
respectively, where \(E_{\pm }=E_+ \cup E_-\). In the following discussion, an element in \({{{\mathcal {K}}}}\) is often denoted by a pair (X, s) with \(X \in {{{\mathcal {L}}}}^V\) and \(s \in {\mathbb {R}}^{E_\pm } \times \{0\}^{E_0}\). \(\langle \cdot , \cdot \rangle \) denotes the Euclidean inner product in \({{{\mathcal {K}}}}\). Consider the following SDP over \({\mathcal {K}}\):
$$\begin{aligned} \text {(P')}\qquad \text {find } (X,s)\in {\mathcal {K}}_+ \text { such that } \langle (X,s), (F_{ij}, \sigma (ij){{\varvec{e}}}_{ij})\rangle =\langle F_{ij}, P^{\top }P\rangle \quad (ij\in E(G)), \end{aligned}$$
where, for \(ij \in E(G)\), \({{\varvec{e}}}_{ij}\) denotes the unit vector in \({\mathbb {R}}^{E(G)}\) such that the ijth entry is one. Observe that X is feasible in (P) if and only if (X, s) is feasible in (P’) for some \(s\in {\mathbb {R}}_{\ge 0}^{E_\pm } \times \{0\}^{E_0}\). As \((G,\sigma ,p)\) is universally rigid, Proposition 2.1 implies that \((P^{\top }P, 0)\in {\mathcal {K}}\) is the unique solution of (P’).
We shall apply Proposition 2.9 to this setting. To do so, we need to prove the local genericity of \((P^{\top }P, 0)\).
Claim 3.2
Let \(k=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \). Then \((P^\top P,0)\in {\mathcal {K}}\) is locally generic in \(\text {ext}_k({\mathcal {K}}_+)\).
Proof
The map \(f: {\mathcal {C}}(V) \rightarrow {\mathcal {L}}^V_+\); \(p \mapsto P^\top P\) is algebraic over \({\mathbb {Q}}\), and \(f({\mathcal {C}}_d(V) )\) equals \({\mathcal {L}}_{+,d}:=\{L\in {\mathcal {L}}^V_+: \mathrm{rank}\,L=d\}\) by (1). Hence, by Proposition 2.8, \(P^\top P\) is generic in \(\bigcup _{i\le d} {\mathcal {L}}_{+,i}\).
From Proposition 2.4, for \(k=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \),
$$\begin{aligned} \text {ext}_k({\mathcal {L}}_+^V)=\bigcup _{i\le d} {\mathcal {L}}_{+,i}. \end{aligned}$$
(7)
Hence, \(P^\top P\) is generic in \(\text {ext}_k({\mathcal {L}}_+^V)\). (7) also implies
$$\begin{aligned} \text {ext}_k({\mathcal {K}}_+)=\left\{ (X,s)\in {\mathcal {K}}_+: \left( {\begin{array}{c}\mathrm{rank}\,X+1\\ 2\end{array}}\right) +\Vert s\Vert _0\le k\right\} , \end{aligned}$$
(8)
where \(\Vert s\Vert _0\) denotes the number of nonzero elements of s. By the lower semicontinuity of rank, there exists a neighborhood U of \(P^{\top }P\) in \({\mathcal {L}}^V\) in which the rank of any matrix is at least d. By (8) and \(k=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \), we have
$$\begin{aligned} \left( U \times {\mathbb {R}}_{\ge 0}^{E_\pm } \times \{0\}^{E_0} \right) \cap \text {ext}_k({\mathcal {K}}_+)=\left( U\cap \text {ext}_k({\mathcal {L}}_+^V)\right) \times \{0\}. \end{aligned}$$
Hence \((P^\top P, 0)\) is generic in \(\left( U \times {\mathbb {R}}_{\ge 0}^{E_\pm } \times \{0\}^{E_0} \right) \cap \text {ext}_k({\mathcal {K}}_+)\), meaning that \((P^\top P, 0)\) is locally generic in \(\text {ext}_k({\mathcal {K}}_+)\). \(\square \)
We are now in a position to complete the proof. We consider the subspace
$$\begin{aligned} {\mathcal {K}}(G)=\left\{ \sum _{ij\in E(G)} \omega _{ij} \left( F_{ij}, \sigma (ij){{\varvec{e}}}_{ij}\right) : \omega :E(G)\rightarrow {\mathbb {R}}\right\} \end{aligned}$$
of \({\mathcal {K}}\), and let \(\pi : {\mathcal {K}} \rightarrow {\mathcal {K}}(G)\) be the projection. Since \((P^\top P,0)\) is the unique solution of (P’), \(\pi ^{-1}\left( \pi ( (P^\top P,0) )\right) \cap {\mathcal {K}}_+\) is a singleton set. Since \({\mathcal {K}}_+\) is a closed line-free convex set and \((P^\top P,0)\) is locally generic in \(\text {ext}_{\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) }({\mathcal {K}}_+)\) by Claim 3.2, Proposition 2.9 can be applied. Hence, there exists a hyperplane \(H=\{(X,s)\in {\mathcal {K}}(G): \langle (X,s), (L,t)\rangle =0\}\) defined by \((L,t)\in {\mathcal {K}}(G)\) such that \(\pi ^{-1}(H)=\{(X,s)\in {\mathcal {K}}: \langle (X,s), (L,t)\rangle =0\}\) exposes \(F_{{\mathcal {K}}_+}((P^{\top }P, 0))\). Note that \(F_{{\mathcal {K}}_+}((P^{\top }P, 0))=F_{{\mathcal {L}}_+^V}(P^{\top }P)\times \{0\}\). This and Proposition 2.4 imply
$$\begin{aligned} L\succeq 0,\qquad PL=0,\qquad \mathrm{rank}\,L=n-d-1, \end{aligned}$$
(9)
and
$$\begin{aligned} t_{ij}>0\qquad (ij\in E_\pm ). \end{aligned}$$
(10)
By \((L,t)\in {\mathcal {K}}(G)\), \((L,t)=\sum _{ij\in E(G)} \omega _{ij} (F_{ij}, \sigma (ij){{\varvec{e}}}_{ij})\) for some \(\omega :E(G)\rightarrow {\mathbb {R}}\). Hence \(L=L_{G,\omega }\). By \(PL=0\) from (9), \(\omega \) is an equilibrium stress of \((G,\sigma ,p)\). Moreover, (10) implies that \(\sigma (ij)\omega _{ij}=t_{ij}>0\) for every \(ij\in E_\pm \), that is, \(\omega \) is strictly proper. Together with (9), we conclude that \(\omega \) satisfies the properties of the statement. \(\square \)
4 Extension to groupsymmetric tensegrities
In this section, we extend Theorem 3.1 to tensegrities with finite point group symmetry. We use the following notation. Let \(\Gamma \) be a finite group with the identity element \(e_\Gamma \).
The set of \(n \times m\) real matrices is denoted by \({\mathbb {R}}^{n \times m}\). For a finite set X with \(|X|=m\), let \({\mathbb {R}}^{X \times X}\) be the set of \(m \times m\) real matrices whose rows and columns are indexed by the elements of X. For \(A\in {\mathbb {R}}^{n\times m}\), the (i, j)th entry is denoted by A[i, j]. The identity matrix of size n is denoted by \(I_n\). The general linear group and the orthogonal group of \({\mathbb {R}}^n\) are denoted by \(GL_n({\mathbb {R}})\) and \(O({\mathbb {R}}^n)\), respectively. For two matrices \(A \in {\mathbb {R}}^{k \times l}\) and \(B \in {\mathbb {R}}^{k' \times l'}\), their matrix direct sum \(A \oplus B \in {\mathbb {R}}^{(k+k')\times (l+l')}\) and their matrix tensor product \(A \otimes B \in {\mathbb {R}}^{kk' \times ll'}\) are defined by
$$\begin{aligned} A \oplus B=\begin{pmatrix} A &{} 0 \\ 0 &{} B \end{pmatrix} \quad \text {and}\quad A \otimes B=\begin{pmatrix} A[1,1]B &{} \cdots &{} A[1,l]B\\ \vdots &{} \ddots &{} \vdots \\ A[k,1]B &{} \cdots &{} A[k,l]B \end{pmatrix}, \end{aligned}$$
respectively. For subsets of matrices \({\mathcal {F}}_1 \subseteq {\mathbb {R}}^{k \times l}\) and \({\mathcal {F}}_2 \subseteq {\mathbb {R}}^{k' \times l'}\), \({\mathcal {F}}_1 \oplus {\mathcal {F}}_2 \subseteq {\mathbb {R}}^{(k+k') \times (l+l')}\) and \(I_n \otimes {\mathcal {F}}_1 \subseteq {\mathbb {R}}^{nk \times nl}\) are defined by
$$\begin{aligned} {\mathcal {F}}_1 \oplus {\mathcal {F}}_2=\{A\oplus B: A\in {\mathcal {F}}_1,\ B\in {\mathcal {F}}_2\} \quad \text {and}\quad I_n \otimes {\mathcal {F}}_1=\{I_n\otimes A: A\in {\mathcal {F}}_1\}, \end{aligned}$$
respectively.
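The two matrix operations just defined can be sketched directly in numpy (the helper `direct_sum` is ours; `np.kron` computes the tensor product):

```python
import numpy as np

# A ⊕ B places A and B as diagonal blocks; A ⊗ B is the Kronecker product.
def direct_sum(A, B):
    k, l = A.shape
    kp, lp = B.shape
    C = np.zeros((k + kp, l + lp))
    C[:k, :l] = A
    C[k:, l:] = B
    return C

A = np.array([[1.0, 2.0]])        # 1 x 2
B = np.array([[3.0], [4.0]])      # 2 x 1
S = direct_sum(A, B)              # (1+2) x (2+1)
K = np.kron(np.eye(2), A)         # I_2 ⊗ A, shape 2 x 4
```

The block pattern of `K` is exactly the set \(I_n \otimes {\mathcal {F}}_1\) used in the block-diagonalization of symmetric SDPs.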
4.1 Main result
We define basic notions regarding groupsymmetric tensegrities and then state our main theorem.
Let \({\hat{V}}\) be a finite set and \(\Gamma \) be a finite group. A \(\Gamma \)-gain graph on \({\hat{V}}\) is a directed graph \(({\hat{V}}, {\hat{E}})\) in which each edge is labeled by an element of \(\Gamma \). An edge from a vertex u to a vertex v with label \(\gamma \in \Gamma \) is denoted by a triple \((u,v,\gamma )\), and we will identify \((u,v,\gamma )\) with \((v,u,\gamma ^{-1})\).
More rigorously, a \(\Gamma \)-gain graph is defined as a pair \(({\hat{V}}, {\hat{E}})\) of a finite set \({\hat{V}}\) and a subset \({\hat{E}}\) of \(({\hat{V}}\times {\hat{V}}\times \Gamma )/\sim \), where \(\sim \) is the equivalence relation on \({\hat{V}}\times {\hat{V}}\times \Gamma \) defined by
$$\begin{aligned} (u,v,\gamma )\sim (v,u,\gamma ^{-1})\qquad (u,v\in {\hat{V}},\ \gamma \in \Gamma ). \end{aligned}$$
The lift of a \(\Gamma \)-gain graph \({\hat{G}}=({\hat{V}},{\hat{E}})\) is the undirected graph \(G=(V,E)\) on \(V:=\Gamma \times {\hat{V}}\) such that \(\{(\alpha ,u),(\beta ,v)\}\) is an edge of G if and only if \((u,v,\alpha ^{-1}\beta ) \in {\hat{E}}\). For simplicity, we often denote an edge \(\{(\alpha ,u),(\beta ,v)\}\) of the lift by \((\alpha ,u)(\beta ,v)\).
An undirected graph \(G=(V,E)\) is said to be a \(\Gamma \)-symmetric graph if it is the lift of some \(\Gamma \)-gain graph \({\hat{G}}=({\hat{V}},{\hat{E}})\). For example, a Cayley graph is a group-symmetric graph with \(|{\hat{V}}|=1\). Figure 2 gives an example of a \({\mathbb {Z}}_2\)-symmetric graph. For \(v \in {\hat{V}}\), the subset of vertices \(\Gamma \times \{v\}\subseteq V\) is called a vertex orbit. For \((u,v,\gamma ) \in {\hat{E}}\), the subset of edges \(\{ (\alpha ,u)(\alpha \gamma ,v) : \alpha \in \Gamma \} \subseteq E\) is called an edge orbit. A \(\Gamma \)-symmetric graph can be the lift of more than one \(\Gamma \)-gain graph \({\hat{G}}\) because the representative vertices can be picked arbitrarily from each vertex orbit. In the subsequent discussion, for a \(\Gamma \)-symmetric graph G, it is convenient to fix a representative vertex in each vertex orbit. The resulting \(\Gamma \)-gain graph \({\hat{G}}\) is called the quotient of G and is denoted by \(G/\Gamma \).
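The lift construction can be sketched in a few lines of Python (the encoding of \({\mathbb {Z}}_2\) and the helper `lift` are ours): vertices of the lift are pairs \((\gamma , v)\), and each gain edge \((u,v,\gamma )\) produces one lifted edge per group element.

```python
# Z_2 encoded as {0, 1} with addition mod 2.
Z2 = [0, 1]
mul = lambda g, h: (g + h) % 2

def lift(gain_edges, group=Z2):
    """gain_edges: list of (u, v, gamma); returns the edge set of the lift,
    where (alpha, u)(beta, v) is an edge iff (u, v, alpha^{-1} beta) is a
    gain edge, i.e. beta = alpha * gamma."""
    edges = set()
    for u, v, gamma in gain_edges:
        for alpha in group:
            a, b = (alpha, u), (mul(alpha, gamma), v)
            edges.add(frozenset([a, b]))
    return edges

# Gain edge (0, 0, 1) lifts to a single edge between (0,0) and (1,0), while
# (0, 1, 0) lifts to one edge per group element.
E = lift([(0, 0, 1), (0, 1, 0)])
```

Note that the two orientations \((u,v,\gamma )\) and \((v,u,\gamma ^{-1})\) produce the same lifted edges, matching the identification in the definition.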
Next we define a group-symmetric tensegrity. A group homomorphism \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) is called a point group. A d-dimensional point configuration \(p:V(G) \rightarrow {\mathbb {R}}^d\) is compatible with a point group \(\theta \) if the following relations are satisfied:
$$\begin{aligned} p_{(\gamma \alpha ,\, v)}=\theta (\gamma )\, p_{(\alpha , v)}\qquad (\gamma , \alpha \in \Gamma ,\ v\in {\hat{V}}). \end{aligned}$$
Let \({\mathcal {C}}_\theta (V(G))\) (or \({{{\mathcal {C}}}}_\theta (V)\)) be the set of all d-dimensional point configurations \(p:V(G) \rightarrow {\mathbb {R}}^d\) such that \(\sum _{i \in V(G)}p_i=0\) and p is compatible with \(\theta \). A d-dimensional tensegrity \((G,\sigma ,p)\) is a \(\theta \)-symmetric tensegrity if \((V,E_0)\), \((V,E_+)\), \((V,E_-)\) are \(\Gamma \)-symmetric graphs and p is compatible with \(\theta \).
With this definition, our goal is to extend Theorem 3.1 to \(\theta \)-symmetric tensegrities. Note that a priori a group-symmetric tensegrity is not generic, and Theorem 3.1 cannot be applied. In order to extend Theorem 3.1, we need to introduce genericity modulo symmetry, which is commonly used in the context of infinitesimal rigidity [22, 26]. Let \({\mathbb {Q}}_{\theta ,\Gamma }\) be the finite extension field of \({\mathbb {Q}}\) generated by the entries of \(\theta (\gamma )\) for all \(\gamma \in \Gamma \) and those of (representative) irreducible representations of \(\Gamma \) (see Subsection 4.4 and Subsection 5.3 for the formal definition). A \(\theta \)-symmetric tensegrity \((G,\sigma ,p)\) is generic modulo symmetry if the translation of p is generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \({{{\mathcal {C}}}}_\theta (V)\). In other words, we choose a representative vertex from each vertex orbit of G, and \((G,\sigma ,p)\) is said to be generic modulo symmetry if the set of coordinates of the points of these representative vertices is algebraically independent over \({\mathbb {Q}}_{\theta ,\Gamma }\). Note that, in the latter definition, as \({\mathbb {Q}}_{\theta ,\Gamma }\) contains the entries of \(\theta \), genericity modulo symmetry is independent of the choice of representative vertices.
Now we are in a position to state our main result, an extension of Theorem 3.1 to group-symmetric tensegrities.
Theorem 4.1
Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with n vertices which is \(\theta \)-symmetric and is generic modulo symmetry. Suppose also that \(p({\overline{N}}_G(i))\) affinely spans \({\mathbb {R}}^d\) for all \(i \in V(G)\). Then \((G,\sigma ,p)\) is universally rigid if and only if it has a strictly proper equilibrium stress \(\omega \) satisfying \(L_{G,\omega } \succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\).
The sufficiency of Theorem 4.1 immediately follows from Theorem 2.5 and Theorem 2.6. The necessity of Theorem 4.1 holds without assuming the neighbor-general position of p, as follows.
Theorem 4.2
Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with \(n\ge d+2\) vertices which is \(\theta \)-symmetric and is generic modulo symmetry. Suppose also that p affinely spans \({\mathbb {R}}^d\). If \((G,\sigma ,p)\) is universally rigid, then it has a strictly proper equilibrium stress \(\omega \) satisfying \(L_{G,\omega } \succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\).
The subsequent discussion is devoted to the proof of Theorem 4.2. In order to simplify the description, we shall focus on bar-joint frameworks (G, p) rather than tensegrities \((G,\sigma ,p)\). For general tensegrities the proof easily follows by combining the proof for the bar-joint case with that of Theorem 3.1.
A high-level idea of the proof is as follows. Proposition 2.1 states that, if (G, p) is universally rigid, then the Gram matrix \(P^\top P\) of p is the unique feasible solution of the SDP problem (P). To use Gortler–Thurston’s argument (Proposition 2.9), the unique solution must be generic in a certain sense. When (G, p) is group-symmetric, however, genericity in this sense does not hold in the cone of positive semidefinite Laplacian matrices \({\mathcal {L}}^V_+\). To make the unique solution generic, we restrict the ambient space to the space of \(\Gamma \)-symmetric Laplacian matrices. To investigate the facial structure of the restricted cone, we block-diagonalize the ambient matrix space by using the structure theorem (Proposition 4.4). If the original point configuration p is generic modulo symmetry, the image of \(P^\top P\) under this transformation is generic. Therefore Gortler–Thurston’s argument can be applied.
In order to explain how to implement this idea, we shall first give the proof of Theorem 4.2 for the case when every real irreducible representation of \(\Gamma \) is absolutely irreducible (see Subsection 4.3 for the definition). The proof for the general case follows the same idea but is technically more involved. We will give it in Sect. 5.
4.2 Restriction of (P) to the space of group-symmetric Laplacians
In this subsection, we define the space of \(\Gamma \)-symmetric Laplacian matrices as a subspace of the space of Laplacian matrices.
Let V be a finite set, and let \(K_V\) be the complete graph on V. If we can decompose V into \(V=\Gamma \times (V/\Gamma )\) for some finite set \(V/\Gamma \), then \(K_V\) is \(\Gamma \)-symmetric, where its quotient \(K_V/\Gamma \) consists of the vertex set \(V/\Gamma \) and the edge set \(((V/\Gamma ) \times (V/\Gamma ) \times \Gamma \setminus \{ (v,v,e_\Gamma ) : v \in V/\Gamma \} )/\sim \). (Note that, since \(K_V\) has no loop, its quotient has no loop with the identity label.) The quotient \(K_V/\Gamma \) is called the complete \(\Gamma \)-gain graph. In the subsequent discussion, each \(\Gamma \)-symmetric graph \(G=(V,E)\) and its quotient \(G/\Gamma \) are assumed to be subgraphs of \(K_V\) and \(K_V/\Gamma \), respectively.
For a \(\Gamma \)-symmetric graph \(G=(V,E)\), an edge weight \(\omega :E(G) \rightarrow {\mathbb {R}}\) is \(\Gamma \)-symmetric if \(\omega \) is constant on each edge orbit. A Laplacian matrix is said to be \(\Gamma \)-symmetric if it is the Laplacian of \(K_V\) weighted by a \(\Gamma \)-symmetric edge weight \(\omega \). Equivalently, \(L \in {{{\mathcal {L}}}}^V\) is \(\Gamma \)-symmetric if and only if
\(L_{(\alpha ,u),(\beta ,v)} = L_{(\gamma \alpha ,u),(\gamma \beta ,v)} \qquad (11)\)
for any \((\alpha ,u), (\beta ,v) \in V\) and any \(\gamma \in \Gamma \). Let \(({{{\mathcal {L}}}}^V)^\Gamma \) be the set of all \(\Gamma \)-symmetric Laplacian matrices. Then \(({{{\mathcal {L}}}}^V)^\Gamma \) is a linear subspace of \({\mathcal {S}}^V\) given by
\(({{{\mathcal {L}}}}^V)^\Gamma = \mathrm{span}\{ F_{(u,v,\gamma )} : (u,v,\gamma ) \in E(K_V/\Gamma ) \},\)
where
\(F_{(u,v,\gamma )} = \sum _{\alpha \in \Gamma } F_{(\alpha ,u)(\alpha \gamma ,v)}.\)
Let \(({{{\mathcal {L}}}}^V)^\Gamma _+\) be the set of positive semidefinite \(\Gamma \)symmetric Laplacian matrices.
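These definitions can be checked numerically. The sketch below (a hypothetical small instance of our own, with \(\Gamma ={\mathbb {Z}}_2\) and \({\hat{n}}=2\)) assembles a weighted Laplacian from a weight that is constant on each edge orbit and verifies the invariance \(L_{(\alpha ,u),(\beta ,v)} = L_{(\gamma \alpha ,u),(\gamma \beta ,v)}\):

```python
import numpy as np

# Sketch (assumed instance): a Z2-symmetric weighted Laplacian of K_V
# with V = Z2 x {0, 1}; one weight per edge orbit.
rng = np.random.default_rng(1)
idx = {(a, u): 2 * a + u for a in (0, 1) for u in (0, 1)}   # our vertex ordering
n = 4
L = np.zeros((n, n))
quotient_edges = [(0, 1, 0), (0, 1, 1), (0, 0, 1), (1, 1, 1)]  # (u, v, gain)
for (u, v, g) in quotient_edges:
    w = rng.standard_normal()           # constant on the edge orbit
    for a in (0, 1):
        i, j = idx[(a, u)], idx[((a + g) % 2, v)]
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        L += w * np.outer(e, e)         # add w * F_{ij}, F_{ij} = (e_i-e_j)(e_i-e_j)^T

# invariance under the Z2-action on both indices
for g in (0, 1):
    for (a, u), i in idx.items():
        for (b, v), j in idx.items():
            assert np.isclose(L[i, j], L[idx[((g + a) % 2, u)], idx[((g + b) % 2, v)]])
assert np.allclose(L @ np.ones(n), 0)   # L is a Laplacian: row sums vanish
```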
Let \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) be a point group. For a given \(\theta \)-symmetric framework (G, p), we consider the following SDP feasibility problem:
(P\(^\Gamma \)): find \(X \in ({\mathcal {L}}^V)^\Gamma _+\) subject to \(\langle X, F_{(u,v,\gamma )} \rangle = \langle P^\top P, F_{(u,v,\gamma )} \rangle \) for all \((u,v,\gamma ) \in E(G/\Gamma )\).
The cone of (P\(^\Gamma \)) is restricted to \(({\mathcal {L}}^V)^\Gamma _+\). We have the following.
Proposition 4.3
If a framework (G, p) with \(p \in {{{\mathcal {C}}}}_\theta (V)\) is universally rigid and \(\theta \)-symmetric for some point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\), then (P\(^\Gamma \)) has the unique solution \(P^\top P\).
Proof
We first check that the Gram matrix \(P^\top P \in {\mathcal {L}}^V_+\) of p is \(\Gamma \)-symmetric by checking (11) as follows:
\((P^\top P)_{(\gamma \alpha ,u),(\gamma \beta ,v)} = p_{(\gamma \alpha ,u)}^\top p_{(\gamma \beta ,v)} = (\theta (\gamma ) p_{(\alpha ,u)})^\top \theta (\gamma ) p_{(\beta ,v)} = p_{(\alpha ,u)}^\top p_{(\beta ,v)} = (P^\top P)_{(\alpha ,u),(\beta ,v)},\)
where the second equation follows from \(\theta \)-symmetry and the third equation follows from \(\theta (\gamma ) \in O({\mathbb {R}}^d)\). Hence \(P^\top P \in ({\mathcal {L}}^V)^\Gamma _+\), and \(P^\top P\) is feasible.
To see the uniqueness, consider any \(X \in ({{{\mathcal {L}}}}^V)^\Gamma \). Then X is a linear combination of \(F_{(u,v,\gamma )}\) over \((u,v,\gamma ) \in E(K_V/\Gamma )\), while each \(F_{(u,v,\gamma )}\) is the sum of \(F_{(\alpha ,u)(\alpha \gamma ,v)}\) over \(\alpha \in \Gamma \). Hence we have
\(\langle X, F_{(\alpha ,u)(\alpha \gamma ,v)} \rangle = \frac{1}{|\Gamma |}\langle X, F_{(u,v,\gamma )} \rangle \qquad (12)\)
for any \((u,v,\gamma ) \in E(G/\Gamma )\) and \(\alpha \in \Gamma \).
Since (12) holds for \(P^\top P\) by \(P^\top P\in ({\mathcal {L}}^V)^\Gamma _+\), \(\langle X, F_{(u,v,\gamma )} \rangle = \langle P^\top P,F_{(u,v,\gamma )} \rangle \) for all \((u,v,\gamma ) \in E(G/\Gamma )\) if and only if \(\langle X, F_{(\alpha ,u)(\alpha \gamma ,v)} \rangle = \langle P^\top P,F_{(\alpha ,u)(\alpha \gamma ,v)} \rangle \) for all \((\alpha ,u)(\alpha \gamma ,v) \in E(G)\). In other words, \(X \in ({\mathcal {L}}^V)^\Gamma _+\) is feasible in (P\(^\Gamma \)) if and only if X is feasible in (P) (for (G, p)). Hence the uniqueness of the feasible solution of (P\(^\Gamma \)) follows by applying Proposition 2.1 to (P). \(\square \)
4.3 Block-diagonalization of (P\(^\Gamma \))
To investigate the facial structure of \(({\mathcal {L}}^V)^\Gamma _+\), we use the structure theorem (Proposition 4.4) given below.
Let W be a finite dimensional real or complex vector space, and let GL(W) be the general linear group of W. A representation of \(\Gamma \) is a group homomorphism \(\rho :\Gamma \rightarrow GL(W)\), and \(d_\rho :=\dim W\) is called the degree of \(\rho \). Two representations \(\rho :\Gamma \rightarrow GL(W)\) and \(\rho ':\Gamma \rightarrow GL(W')\) are equivalent if there is an isomorphism \(T:W \rightarrow W'\) satisfying \(T \circ \rho (\gamma ) =\rho '(\gamma ) \circ T\) for all \(\gamma \in \Gamma \). For a representation \(\rho :\Gamma \rightarrow GL(W)\), a linear subspace \(W'\) of W is \(\Gamma \)-invariant if \(\rho (\gamma ) (W') \subseteq W'\) for all \(\gamma \in \Gamma \). A representation \(\rho :\Gamma \rightarrow GL(W)\) is irreducible if the only \(\Gamma \)-invariant subspaces are \(\{0\}\) and W. The simplest example is the trivial representation, where W is one-dimensional and every group element acts as the identity. The trivial representation is denoted by \(\mathrm{tri}\).
A real irreducible representation \(\rho :\Gamma \rightarrow GL(W)\) is called absolutely irreducible if \(\rho \) is also irreducible over \({\mathbb {C}}\). We also call \(\Gamma \) absolutely irreducible if every real irreducible representation of \(\Gamma \) is absolutely irreducible. This is equivalent to saying that every complex irreducible representation is equivalent to a real representation. For example, dihedral groups and symmetric groups are known to be absolutely irreducible while cyclic groups of order more than two are not, see [27] for example.
In this section we shall focus on the case where \(\Gamma \) is absolutely irreducible. This special case is sufficiently general to explain our technical idea, and moreover it includes many of the finite groups that appear in applications. Throughout this section, a representation always means a real representation.
Let \({\tilde{\Gamma }}\) be the set of all equivalence classes of irreducible representations of \(\Gamma \). By fixing a representative of each class, each element of \({\tilde{\Gamma }}\) is regarded as a representation. Since every representation is equivalent to an orthogonal representation, we further assume that each \(\rho \in {\tilde{\Gamma }}\) is an orthogonal matrix representation \(\rho :\Gamma \rightarrow O({\mathbb {R}}^{d_\rho })\).
The following is a basic relation on the degrees of the complex irreducible representations, and it is also valid for the real irreducible representations since \(\Gamma \) is absolutely irreducible:
\(\sum _{\rho \in {\tilde{\Gamma }}} d_\rho ^2 = |\Gamma |. \qquad (13)\)
We can now state the structure theorem.
Proposition 4.4
Let \(\Gamma \) be a finite group, \(V/\Gamma \) be a finite set with \({\hat{n}}\) elements, and \(V=\Gamma \times V/\Gamma \). Suppose that \(\Gamma \) is absolutely irreducible. Then there exists an orthogonal transformation \(\Psi :{\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) satisfying
\(\Psi \big( ({{{\mathcal {L}}}}^V)^\Gamma \big) = \Big\{ \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes X_\rho \;:\; X_\mathrm{tri} \text{ is an } {\hat{n}}\times {\hat{n}} \text{ Laplacian matrix, and } X_\rho \in {\mathcal {S}}^{d_\rho {\hat{n}}} \text{ for } \rho \in {\tilde{\Gamma }}\setminus \{\mathrm{tri}\} \Big\}.\)
The proof is given in Subsection 4.5.
Following Proposition 4.4, for each irreducible representation \(\rho \in {\tilde{\Gamma }}\), we define a linear space \({\mathcal {K}}_\rho \) and a convex cone \({\mathcal {K}}_{+,\rho }\) by
\({\mathcal {K}}_\mathrm{tri} := {\mathcal {L}}^{V/\Gamma }, \quad {\mathcal {K}}_\rho := {\mathcal {S}}^{d_\rho {\hat{n}}} \ (\rho \ne \mathrm{tri}), \quad \text{and} \quad {\mathcal {K}}_{+,\rho } := \{ X \in {\mathcal {K}}_\rho : X \succeq 0 \},\)
respectively. The ambient space \({\mathcal {K}}_\Gamma \) and the convex cone \({\mathcal {K}}_{+,\Gamma }\) are defined by
\({\mathcal {K}}_\Gamma := \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes {\mathcal {K}}_\rho \quad \text{and} \quad {\mathcal {K}}_{+,\Gamma } := \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes {\mathcal {K}}_{+,\rho },\)
respectively. Note that \({\mathcal {K}}_\Gamma = \Psi (({\mathcal {L}}^V)^\Gamma )\) by Proposition 4.4, and \({\mathcal {K}}_{+,\Gamma } = \Psi (({\mathcal {L}}^V)^\Gamma _+)\) also holds since an orthogonal transformation preserves positive semidefiniteness.
For each \(X \in {{{\mathcal {K}}}}_\Gamma \) and \(\rho \in {\tilde{\Gamma }}\), the projected image of X to \({{{\mathcal {K}}}}_\rho \) is denoted by \(X_\rho \), i.e., \(X= \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes X_\rho \).
Using the orthogonal transformation \(\Psi \) in Proposition 4.4, the SDP problem (P\(^\Gamma \)) is transformed into the following block-diagonal form:
(P\(^\Psi \)): find \(X \in {\mathcal {K}}_{+,\Gamma }\) subject to \(\langle X, \Psi (F_{(u,v,\gamma )}) \rangle = \langle \Psi (P^\top P), \Psi (F_{(u,v,\gamma )}) \rangle \) for all \((u,v,\gamma ) \in E(G/\Gamma )\).
Since \(\Psi \) is an orthogonal transformation, Proposition 4.3 immediately gives the following.
Proposition 4.5
If a framework (G, p) with \(p \in {{{\mathcal {C}}}}_\theta (V)\) is universally rigid and \(\theta \)-symmetric for some point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\), then (P\(^\Psi \)) has the unique solution \(\Psi (P^\top P)\).
4.4 Proof of Theorem 4.2 for absolutely irreducible \(\Gamma \)
Let \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) be a finite point group. We take an orthogonal matrix \(Z_\Gamma \in O({\mathbb {R}}^V)\) representing \(\Psi \), i.e., \(\Psi (X)=Z_\Gamma ^\top X Z_\Gamma \) for any \(X\in {\mathbb {R}}^{V\times V}\).
We define the finite extension field \({\mathbb {Q}}_{\theta ,\Gamma }\) of \({\mathbb {Q}}\) generated by the entries of \(\theta (\gamma )\) for all \(\gamma \in \Gamma \) and the entries of \(Z_\Gamma \). As in [20, Section 5.5], genericity and local genericity over \({\mathbb {Q}}_{\theta ,\Gamma }\) are defined similarly, and Proposition 2.8 and Proposition 2.9 can be generalized by changing the coefficient field to \({\mathbb {Q}}_{\theta ,\Gamma }\).
Our strategy is to apply Proposition 2.9 over \({{{\mathcal {K}}}}_{+,\Gamma }\), and to do that we need to show that \(\Psi (P^{\top }P)\) is generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in the set of “low rank” matrices in \({{{\mathcal {K}}}}_{+,\Gamma }\). This can be done by showing that any “low rank” matrix in \({{{\mathcal {K}}}}_{+,\Gamma }\) is the \(\Psi \)-image of the Gram matrix of some \(\theta \)-symmetric point configuration in \({{{\mathcal {C}}}}_\theta (V)\). Since p is generic in \({{{\mathcal {C}}}}_\theta (V)\), this would imply that \(\Psi (P^{\top }P)\) is generic in the set of “low rank” matrices in \({{{\mathcal {K}}}}_{+,\Gamma }\).
A precise statement we want to show is given as follows.
Proposition 4.6
Let \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) be a point group, and let \(m_\rho \) be the multiplicity of an irreducible representation \(\rho \in {\tilde{\Gamma }}\) in \(\theta \). Also, define a map \(f:{{{\mathcal {C}}}}_\theta (V) \rightarrow {\mathcal {K}}_\Gamma \) by
\(f(q) = \Psi (Q^\top Q) \qquad (q \in {{{\mathcal {C}}}}_\theta (V)),\)
where \(Q^\top Q\) is the Gram matrix of q. If \(\Gamma \) is absolutely irreducible, then
\(f({{{\mathcal {C}}}}_\theta (V)) = \{ X \in {\mathcal {K}}_{+,\Gamma } : \mathrm{rank}\,X_\rho \le m_\rho \ (\rho \in {\tilde{\Gamma }}) \}.\)
The proof of Proposition 4.6 is given in Subsection 4.5. Using Proposition 4.4 and Proposition 4.6, we can now give the proof of Theorem 4.2 for absolutely irreducible \(\Gamma \).
Proof of Theorem 4.2
for absolutely irreducible \(\Gamma \). Let (G, p) be a d-dimensional \(\theta \)-symmetric framework with a point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) which is generic modulo symmetry and is universally rigid. Suppose also that p affinely spans \({\mathbb {R}}^d\). Let \(G/\Gamma \) be the quotient of G, and let \({\hat{n}}=|V(G/\Gamma )|\), which is the number of vertex orbits in G. (The notation \({\hat{n}}\) is consistent with that in Proposition 4.4 by setting \(V/\Gamma =V(G/\Gamma )\) and \(V=V(G)\).)
By a translation, we may suppose that the center of gravity is at the origin, i.e., \(p \in {{{\mathcal {C}}}}_\theta (V)\). As the original configuration affinely spans \({\mathbb {R}}^d\), \(\mathrm{rank}\,P =d\). Let \(m_\rho \) be the multiplicity of an irreducible representation \(\rho \in {\tilde{\Gamma }}\) in \(\theta \). Then,
\(\sum _{\rho \in {\tilde{\Gamma }}} m_\rho d_\rho = d. \qquad (14)\)
Claim 4.7
Let \(k=\sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) \). Then \(\Psi (P^\top P) \in {{{\mathcal {K}}}}_\Gamma \) is locally generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\).
Proof
Since (G, p) is generic modulo symmetry, p is generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \({{{\mathcal {C}}}}_\theta (V)\). The map \(f:q \mapsto \Psi (Q^\top Q)\) is algebraic over \({\mathbb {Q}}_{\theta ,\Gamma }\). By Proposition 2.8 over \({\mathbb {Q}}_{\theta ,\Gamma }\), \(\Psi (P^\top P)\) is generic in \(f({\mathcal {C}}_\theta (V))\) over \({\mathbb {Q}}_{\theta ,\Gamma }\). Also, \(\mathrm{rank}\,\Psi (P^\top P) = \mathrm{rank}\,P^\top P =\mathrm{rank}\,P=d\). Hence, by Proposition 4.6 and (14), we obtain \(\mathrm{rank}\,\Psi (P^\top P)_\rho =m_\rho \) for \(\rho \in {\tilde{\Gamma }}\).
By the definition of \({\mathcal {K}}_{+,\Gamma }\), Proposition 2.3 and Proposition 2.4,
For each \(\rho \in {\tilde{\Gamma }}\), by the lower semicontinuity of rank, there exists a neighborhood \(U_\rho \) of \(\Psi (P^\top P)_\rho \) in \({{{\mathcal {K}}}}_\rho \) in which the rank of any matrix is at least \(\mathrm{rank}\,\Psi (P^\top P)_\rho =m_\rho \). Then \(U=\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes U_\rho \) is a neighborhood of \(\Psi (P^\top P)\) in \({{{\mathcal {K}}}}_\Gamma \). Hence, by (15) and \(k=\sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) \), we have
which further implies, by Proposition 4.6,
Hence, since \(\Psi (P^\top P)\) is generic in \(f({{{\mathcal {C}}}}_{\theta }(V))\), \(\Psi (P^\top P)\) is locally generic in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\). \(\square \)
Consider the subspace
\({\mathcal {K}}_\Gamma (G) := \mathrm{span}\{ \Psi (F_{(u,v,\gamma )}) : (u,v,\gamma ) \in E(G/\Gamma ) \}\)
of \({\mathcal {K}}_\Gamma \) and the projection \(\pi :{\mathcal {K}}_\Gamma \rightarrow {\mathcal {K}}_\Gamma (G)\). Proposition 4.5 states that \(\Psi (P^\top P)\) is the unique solution of (P\(^\Psi \)), and hence \(\pi ^{-1}(\pi (\Psi (P^\top P))) \cap {\mathcal {K}}_{+,\Gamma }\) is a singleton. Since \({\mathcal {K}}_{+,\Gamma }\) is a closed line-free convex set and \(\Psi (P^\top P)\) is locally generic in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\) with \(k=\sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) \) by Claim 4.7, Proposition 2.9 can be applied. Hence there exists a hyperplane \(H=\{X \in {\mathcal {K}}_\Gamma (G) : \langle X,L \rangle =0 \}\) in \({\mathcal {K}}_\Gamma (G)\) defined by some \(L \in {\mathcal {K}}_\Gamma (G)\) such that \(\pi ^{-1}(H)=\{X \in {\mathcal {K}}_\Gamma : \langle X,L \rangle =0 \}\) exposes \(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P))\).
Following the decomposition of the ambient space \({{{\mathcal {K}}}}_{\Gamma }\), we have the decomposition of the face \(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P))\) as follows:
\(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P)) = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes F_{{\mathcal {K}}_{+,\rho }}(\Psi (P^\top P)_\rho ).\)
In each component \({{{\mathcal {K}}}}_{\rho }\) of \({{{\mathcal {K}}}}_{\Gamma }\), \(\pi ^{-1}(H)\) restricted to \({{{\mathcal {K}}}}_{\rho }\) is a hyperplane of \({{{\mathcal {K}}}}_{\rho }\) given by \(\{X \in {\mathcal {K}}_\rho : \langle X,L_\rho \rangle =0 \}\), and it exposes \(F_{{\mathcal {K}}_{+,\rho }} (\Psi (P^\top P)_\rho )\). (Recall that \(L_\rho \) denotes the projected image of L to \({{{\mathcal {K}}}}_\rho \).) Since \(\mathrm{rank}\,\Psi (P^\top P)_\rho =m_{\rho }\), Propositions 2.3 and 2.4 imply
for every \(\rho \in {\tilde{\Gamma }}\). By (13), (14), (16),
By \(L \in {\mathcal {K}}_\Gamma (G)\), \(L=\sum _{(u,v,\gamma ) \in E(G/\Gamma )} {\hat{\omega }}_{(u,v,\gamma )}\Psi (F_{(u,v,\gamma )})\) for some \({\hat{\omega }}: E(G/\Gamma ) \rightarrow {\mathbb {R}}\). Hence, by (17) and (18), \(\Psi ^{-1}(L) = \sum _{(u,v,\gamma ) \in E(G/\Gamma )} {\hat{\omega }}_{(u,v,\gamma )}F_{(u,v,\gamma )}\) is a \(\Gamma \)-symmetric weighted Laplacian of G satisfying
\(\Psi ^{-1}(L) \succeq 0 \quad \text{and} \quad \mathrm{rank}\,\Psi ^{-1}(L) = n-d-1.\)
Therefore \(\Psi ^{1}(L)\) satisfies the properties of the statement. \(\square \)
4.5 Proofs of Proposition 4.4 and Proposition 4.6
We give the proofs of Propositions 4.4 and 4.6. Let \(V=\Gamma \times V/\Gamma \) be a finite set, and denote \(n=|V|\) and \({\hat{n}}=|V/\Gamma |\) as in the statement of Proposition 4.4. In the subsequent discussion, we shall frequently look at vectors in \({\mathbb {R}}^{\Gamma }\otimes {\mathbb {R}}^{V/\Gamma }\) (which can be identified with \({\mathbb {R}}^V\)) and matrices in \({\mathbb {R}}^{\Gamma \times \Gamma }\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\). Let \(\{{\varvec{e}}_\gamma : \gamma \in \Gamma \}\) be the standard basis of \({\mathbb {R}}^{\Gamma }\) and \(\{{\varvec{e}}_v: v\in V/\Gamma \}\) be that of \({\mathbb {R}}^{V/\Gamma }\). Then \(\{E_{\alpha ,\beta }:={\varvec{e}}_\alpha {\varvec{e}}_\beta ^{\top }: \alpha ,\beta \in \Gamma \}\) and \(\{E_{u,v}:={\varvec{e}}_u{\varvec{e}}_v^\top : u,v\in V/\Gamma \}\) form a basis of \({\mathbb {R}}^{\Gamma \times \Gamma }\) and a basis of \({\mathbb {R}}^{V/\Gamma \times V/\Gamma }\), respectively.
With this notation, the right regular representation \(R:\Gamma \rightarrow {\mathbb {R}}^{\Gamma \times \Gamma }\) of \(\Gamma \) is defined by
\(R(\gamma )\,{\varvec{e}}_\alpha = {\varvec{e}}_{\alpha \gamma ^{-1}} \qquad (\alpha ,\gamma \in \Gamma ).\)
A basic fact from representation theory is that, over the complex field, the right regular representation is (unitarily) equivalent to \(\bigoplus _{\rho } I_{d_\rho } \otimes \rho (\gamma )\), where the sum is taken over all non-equivalent complex irreducible representations. Since \(\Gamma \) is assumed to be absolutely irreducible, this in turn implies that there is an orthogonal matrix \(Z\in O({\mathbb {R}}^{\Gamma })\) such that
\(Z^\top R(\gamma ) Z = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes \rho (\gamma ) \qquad (\gamma \in \Gamma ), \qquad (20)\)
where recall that \({\tilde{\Gamma }}\) is the set of all real irreducible representations of \(\Gamma \). This orthogonal transformation gives a linear isomorphism between \({{{\mathcal {T}}}}:=\mathrm{span}\{R(\gamma ):\gamma \in \Gamma \}\) and \(\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\). Indeed, if we set \(\psi :{\mathbb {R}}^{\Gamma \times \Gamma }\ni X\mapsto Z^{\top } XZ \in {\mathbb {R}}^{\Gamma \times \Gamma }\), then by (20), \(\psi ({{{\mathcal {T}}}}) \subseteq \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\), and since Z is nonsingular, \(\psi |_{{{\mathcal {T}}}}\) is an injective linear map. By (13), \({{{\mathcal {T}}}}\) and \(\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\) have the same dimension, and hence \(\psi |_{{{\mathcal {T}}}}\) is an isomorphism.
Using Z, we consider an orthogonal map \(\Psi :{\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) given by \(\Psi (X)=(Z \otimes I_{{\hat{n}}})^\top X (Z \otimes I_{{\hat{n}}})\). We show this is the desired map for Proposition 4.4.
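The construction of Z can be illustrated numerically. In the sketch below (our own code, for \(\Gamma = S_3 \cong D_3\), an absolutely irreducible group), Z is assembled from the matrix coefficients \(\sqrt{d_\rho /|\Gamma |}\,\rho (\gamma )_{k,l}\); Schur orthogonality makes Z orthogonal, and conjugation by Z block-diagonalizes the right regular representation:

```python
import numpy as np
from itertools import product

# Gamma = S3 realized as D3 (rotations/reflections of a triangle)
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])                     # rotation by 120 degrees
f = np.diag([1.0, -1.0])                            # a reflection
group = [np.linalg.matrix_power(r, k) @ m for k in range(3) for m in (np.eye(2), f)]
key = lambda g: tuple(np.round(g, 6).ravel())
index = {key(g): i for i, g in enumerate(group)}
n = len(group)                                      # |Gamma| = 6

# the three real irreducible representations: trivial, sign, 2-dim standard
irreps = [(1, lambda g: np.eye(1)),
          (1, lambda g: np.array([[np.linalg.det(g)]])),
          (2, lambda g: g)]
assert sum(d * d for d, _ in irreps) == n           # sum of d_rho^2 equals |Gamma|

# columns of Z: sqrt(d_rho/|Gamma|) * rho(g)_{k,l}, ordered by (rho, k, l)
Z = np.array([[np.sqrt(d / n) * rho(g)[k, l] for g in group]
              for d, rho in irreps for k, l in product(range(d), repeat=2)]).T
assert np.allclose(Z.T @ Z, np.eye(n))              # Z is orthogonal

def R(gamma):                                       # right regular representation
    M = np.zeros((n, n))
    for h in group:
        M[index[key(h @ gamma.T)], index[key(h)]] = 1.0
    return M

for g in group:                                     # Z^T R(g) Z = direct sum of I_d x rho(g)
    blocks = [np.kron(np.eye(d), rho(g)) for d, rho in irreps]
    i, expected = 0, np.zeros((n, n))
    for B in blocks:
        expected[i:i + len(B), i:i + len(B)] = B
        i += len(B)
    assert np.allclose(Z.T @ R(g) @ Z, expected)
```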
Proof of Proposition 4.4
Since \(\psi \) is a linear isomorphism between \({{{\mathcal {T}}}}\) and \(\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\), we have
\(\Psi ({{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }) = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }{\hat{n}}\times d_{\rho }{\hat{n}}}. \qquad (21)\)
Also (11) says that a Laplacian \(L\in {\mathbb {R}}^{V \times V}\) is in \(({{{\mathcal {L}}}}^V)^{\Gamma }\) if and only if it is spanned by \(\{R(\gamma ) \otimes E_{u,v}:\gamma \in \Gamma ,\ u,v \in V/\Gamma \}\). In other words,
\(({{{\mathcal {L}}}}^V)^{\Gamma } = \{ X \in {{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma } : X \text{ is a Laplacian} \}. \qquad (22)\)
Instead of computing \(\Psi (({{{\mathcal {L}}}}^V)^{\Gamma })\), we now compute the \(\Psi \)-image of the right side of (22).
Claim 4.8
\(X\in {{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\) is Laplacian if and only if \(\Psi (X)\) is symmetric and \(\Psi (X)_\mathrm{tri}\) is Laplacian, where \(\Psi (X)_\mathrm{tri}\) stands for the projected image of \(\Psi (X)\) to the component associated with the trivial representation in (21).
Proof
Clearly, X is symmetric if and only if \(\Psi (X)\) is symmetric. Also, a symmetric matrix \(X \in {\mathcal {S}}^n\) is Laplacian if and only if \(X{\varvec{1}}_n=0\). Hence it suffices to show that
\(X{\varvec{1}}_V=0 \ \text{ if and only if } \ \Psi (X)_\mathrm{tri}{\varvec{1}}_{V/\Gamma }=0. \qquad (23)\)
To see this, we use the fact that the span of the vector \({\varvec{1}}_\Gamma \in {\mathbb {R}}^{\Gamma }\) is a subrepresentation of R and corresponds to the trivial representation. (This can be checked by \(R(\gamma ){\varvec{1}}_\Gamma ={\varvec{1}}_\Gamma \) for any \(\gamma \in \Gamma \).) Hence \(Z^\top {\varvec{1}}_\Gamma =\sqrt{|\Gamma |}\,{\varvec{e}}_\mathrm{tri}\), where \({\varvec{e}}_\mathrm{tri}\) is a unit vector in \({\mathbb {R}}^\Gamma \). This implies that
\((Z^\top \otimes I_{V/\Gamma }){\varvec{1}}_V = \sqrt{|\Gamma |}\,({\varvec{e}}_\mathrm{tri} \otimes {\varvec{1}}_{V/\Gamma }).\)
Since \(X{\varvec{1}}_V=0\) if and only if \(\Psi (X)(Z^{\top }\otimes I_{V/\Gamma }){\varvec{1}}_V=0\), we obtain (23), and hence the claim follows. \(\square \)
Combining the relations obtained so far, we get
as required. \(\square \)
Next we prove Proposition 4.6. In the proof, we use the following orthogonality relation of complex irreducible representations (see, e.g., [27] for details).
Proposition 4.9
(Schur orthogonality). Let \(\pi :\Gamma \rightarrow GL_{d_\pi }({\mathbb {C}})\) and \(\pi ':\Gamma \rightarrow GL_{d_{\pi '}}({\mathbb {C}})\) be complex irreducible representations. If \(\pi \) and \(\pi '\) are not equivalent, then
\(\sum _{\gamma \in \Gamma } \pi (\gamma )_{k,l}\, \overline{\pi '(\gamma )_{k',l'}} = 0\)
for any \(1 \le k,l \le d_\pi \) and \(1 \le k',l' \le d_{\pi '}\). If \(\pi =\pi '\), then
\(\sum _{\gamma \in \Gamma } \pi (\gamma )_{k,l}\, \overline{\pi (\gamma )_{k',l'}} = \frac{|\Gamma |}{d_\pi }\, \delta _{k,k'}\delta _{l,l'}\)
for any \(1 \le k,l,k',l' \le d_\pi \).
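As a sanity check (our own numerical sketch), the orthogonality relations can be verified for \(\Gamma = S_3\), taking \(\pi \) to be the 2-dimensional standard representation and \(\pi '\) the sign representation; both are real, so complex conjugation is trivial in the sums:

```python
import numpy as np

# Gamma = S3 as D3; pi = 2-dim standard rep, pi' = sign rep
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])
f = np.diag([1.0, -1.0])
group = [np.linalg.matrix_power(r, k) @ m for k in range(3) for m in (np.eye(2), f)]

# inequivalent pair: the sums vanish for every choice of indices
for k in range(2):
    for l in range(2):
        assert np.isclose(sum(g[k, l] * np.linalg.det(g) for g in group), 0)

# pi = pi': sum_gamma pi(gamma)_{k,l} pi(gamma)_{k',l'} = (|Gamma|/d_pi) * delta * delta
for k, l, k2, l2 in np.ndindex(2, 2, 2, 2):
    val = sum(g[k, l] * g[k2, l2] for g in group)
    assert np.isclose(val, 3.0 * (k == k2) * (l == l2))
```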
Let \(\Gamma \) be an absolutely irreducible finite group and (G, p) be a d-dimensional \(\theta \)-symmetric framework with \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\). We fix a representative vertex from each vertex orbit, and let \({\tilde{P}}\) be the \(d \times {\hat{n}}\) matrix given by arranging the coordinates of the representative vertices. Then we may suppose (by permuting the columns appropriately) that
\(P = \sum _{\gamma \in \Gamma } {\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ){\tilde{P}}).\)
Since \(\theta \) is an orthogonal representation, there is an orthogonal matrix \(Y \in O({\mathbb {R}}^d)\) such that
\(Y \theta (\gamma ) Y^{\top } = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{m_\rho } \otimes \rho (\gamma ) \qquad (\gamma \in \Gamma ),\)
where recall that \(m_\rho \) denotes the multiplicity of \(\rho \in {\tilde{\Gamma }}\) in \(\theta \).
Let S be the set of indices defined by
\(S = \{ (\rho ,t,k) : \rho \in {\tilde{\Gamma }},\ 1 \le t \le m_\rho ,\ 1 \le k \le d_\rho \}.\)
By (14), \(|S|=d\). Hence in the following discussion we identify \({\mathbb {R}}^S\) with \({\mathbb {R}}^d\) by taking the standard basis \(\{{\varvec{e}}_{(\rho ,t,k)}:(\rho ,t,k)\in S\}\).
With these notations, we can calculate \(\Psi (P^\top P)\) explicitly.
Lemma 4.10
For each \(\rho \in {\tilde{\Gamma }}\) and t with \(1\le t\le m_{\rho }\), define \(w_{\rho ,t} \in {\mathbb {R}}^{d_\rho }\otimes {\mathbb {R}}^{V/\Gamma }\) by
\(w_{\rho ,t} = \sqrt{\frac{|\Gamma |}{d_\rho }} \sum _{1 \le k \le d_\rho } {\varvec{e}}_k \otimes \big( (Y{\tilde{P}})^\top {\varvec{e}}_{(\rho ,t,k)} \big).\)
Then, for each \(\rho \in {\tilde{\Gamma }}\),
\(\Psi (P^\top P)_\rho = \sum _{1 \le t \le m_\rho } w_{\rho ,t}\, w_{\rho ,t}^\top .\)
Proof
Hence
Note that, by Schur orthogonality (Proposition 4.9) and the absolute irreducibility of \(\Gamma \), for any \(\rho , \rho ' \in {\tilde{\Gamma }}\),
\(\sum _{\gamma \in \Gamma } \rho (\gamma )_{k,l}\, \rho '(\gamma )_{k',l'} = \frac{|\Gamma |}{d_\rho }\, \delta _{\rho ,\rho '}\delta _{k,k'}\delta _{l,l'}.\)
Hence, we have
Therefore, we obtain \(\Psi (P^{\top } P)_\rho =\sum _{1\le t\le m_{\rho }} w_{\rho ,t} w_{\rho ,t}^\top \) for each \(\rho \in {\tilde{\Gamma }}\), as required. \(\square \)
Proof of Proposition 4.6
For any \(p \in {\mathcal {C}}_\theta (V)\), by Lemma 4.10, \(\mathrm{rank}\,\Psi (P^\top P)_\rho \le m_\rho \). So, it is sufficient to prove that for any \(X \in {\mathcal {K}}_{+,\Gamma }\) satisfying \(\mathrm{rank}\,X_\rho \le m_\rho \) (\(\rho \in {\tilde{\Gamma }}\)), there exists \(q \in {\mathcal {C}}_\theta (V)\) satisfying \(\Psi (Q^\top Q)=X\).
Since \(\mathrm{rank}\,X_\rho \le m_\rho \) and \(X_\rho \) is positive semidefinite, there exist \(v_{\rho ,1}, \ldots , v_{\rho ,m_\rho } \in {\mathbb {R}}^{d_\rho {\hat{n}}}\) such that \(X_\rho = \sum _{1 \le t \le m_\rho } v_{\rho ,t} {v_{\rho ,t}}^\top \).
Claim 4.11
There exists a matrix \({\tilde{Q}}\in {\mathbb {R}}^{d\times {\hat{n}}}\) such that
\(\sqrt{\frac{|\Gamma |}{d_\rho }} \sum _{1 \le k \le d_\rho } {\varvec{e}}_k \otimes \big( (Y{\tilde{Q}})^\top {\varvec{e}}_{(\rho ,t,k)} \big) = v_{\rho ,t}\)
for all \(\rho \in {\tilde{\Gamma }}\) and t with \(1\le t\le m_{\rho }\).
Proof
We decompose \({\mathbb {R}}^{d_{\rho }{\hat{n}}}\) into \(d_{\rho }\) copies of \({\mathbb {R}}^{{\hat{n}}}\) such that \({\mathbb {R}}^{d_{\rho }{\hat{n}}}={\mathbb {R}}^{{\hat{n}}}\times \cdots \times {\mathbb {R}}^{{\hat{n}}}\). Then each \(v_{\rho ,t}\in {\mathbb {R}}^{d_{\rho }{\hat{n}}}\) is written as \(v_{\rho ,t}=(v_{\rho ,t,1},v_{\rho ,t,2},\dots , v_{\rho ,t,d_{\rho }})\) for some \(v_{\rho ,t,1}, \dots , v_{\rho ,t,d_{\rho }}\in {\mathbb {R}}^{{\hat{n}}}\). Let U be the matrix obtained by aligning \(v_{\rho ,t,k}\) for all \((\rho ,t,k)\in S\). Since \(|S|=d\), the size of U is \({\hat{n}}\times d\). Moreover, \(\sum _{1\le k\le d_\rho } {\varvec{e}}_k\otimes (U{\varvec{e}}_{(\rho ,t,k)})=v_{\rho ,t}\). Hence, by setting \({\tilde{Q}}=\sqrt{\frac{d_{\rho }}{|\Gamma |}}\,Y^{-1} U^{\top }\), we obtain the claimed \({\tilde{Q}}\). \(\square \)
Let \({\tilde{Q}}\) be as in Claim 4.11, and let \(Q=\sum _{\gamma \in \Gamma }{\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ){\tilde{Q}})\). The size of Q is \(d \times n\), and it can be identified with a configuration \(q:V \rightarrow {\mathbb {R}}^d\). Then q is compatible with \(\theta \) and satisfies \(\Psi (Q^\top Q)=X\) by Lemma 4.10 and Claim 4.11. Since \(Q^{\top }Q=\Psi ^{-1}(X)\), we have \(Q^{\top }Q\in {{{\mathcal {L}}}}^V\), implying \(Q{\varvec{1}}_V=0\). Hence \(q \in {{{\mathcal {C}}}}_\theta (V)\). This completes the proof. \(\square \)
4.6 Example
As a working example, we consider the reflection symmetry \(C_s\) with respect to the x-axis in the plane. In this case, \(d=2\) and \(C_s\) consists of two elements, denoted by \(\{e,s\}\). The point group representation \(\theta :\{e,s\}\rightarrow O({\mathbb {R}}^2)\) is given by
\(\theta (e)=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \theta (s)=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.\)
\(\theta \) is decomposed as \(\theta ={+} \oplus {-}\), where \(+\) denotes the trivial representation and \(-:C_s \rightarrow O({\mathbb {R}})\) denotes the other irreducible representation of degree 1, given by \(-(e)=1\), \(-(s)=-1\). Let G be a \(C_s\)-symmetric graph and \({\hat{n}}\) be the number of its vertex orbits. Denote \(V(G)=\{1,2,\dots , 2{\hat{n}}\}\), and we assume that, for \(1 \le i \le {\hat{n}}\), i and \(i+{\hat{n}}\) are in the same vertex orbit and i is the representative vertex in this vertex orbit. Then we have
\(({\mathcal {L}}^V)^{C_s} = \Big\{ \begin{pmatrix} A & B \\ B & A \end{pmatrix} : A, B \in {\mathcal {S}}^{{\hat{n}}},\ (A+B){\varvec{1}}_{{\hat{n}}}=0 \Big\}.\)
The orthogonal transformation \(\Psi :({\mathcal {L}}^V)^{C_s} \rightarrow {\mathcal {K}}_+ \oplus {\mathcal {K}}_-\) in Proposition 4.4 is given by \(\Psi (X)=(Z \otimes I_{{\hat{n}}})^\top X (Z \otimes I_{{\hat{n}}})\) with
\(Z = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\)
and hence we have
\(\Psi \begin{pmatrix} A & B \\ B & A \end{pmatrix} = (A+B) \oplus (A-B).\)
By \(\theta ={+} \oplus {-}\), we have \(m_+=m_-=1\).
For \(p \in {\mathcal {C}}_\theta (V)\), let \({\tilde{p}}_x\) (resp. \({\tilde{p}}_y\)) \(\in {\mathbb {R}}^{{\hat{n}}}\) be the x-coordinate (resp. y-coordinate) vector of the representative vertices. Then the corresponding matrix P (whose ith column is \(p_i\)) is
\(P = \begin{pmatrix} {\tilde{p}}_x^\top & {\tilde{p}}_x^\top \\ {\tilde{p}}_y^\top & -{\tilde{p}}_y^\top \end{pmatrix},\)
and the Gram matrix of p is
\(P^\top P = \begin{pmatrix} {\tilde{p}}_x {\tilde{p}}_x^\top + {\tilde{p}}_y {\tilde{p}}_y^\top & {\tilde{p}}_x {\tilde{p}}_x^\top - {\tilde{p}}_y {\tilde{p}}_y^\top \\ {\tilde{p}}_x {\tilde{p}}_x^\top - {\tilde{p}}_y {\tilde{p}}_y^\top & {\tilde{p}}_x {\tilde{p}}_x^\top + {\tilde{p}}_y {\tilde{p}}_y^\top \end{pmatrix}.\)
Hence,
\(\Psi (P^\top P) = (2{\tilde{p}}_x {\tilde{p}}_x^\top ) \oplus (2{\tilde{p}}_y {\tilde{p}}_y^\top ).\)
Therefore, we have
\(\mathrm{rank}\,\Psi (P^\top P)_+ = \mathrm{rank}\,\Psi (P^\top P)_- = 1 = m_+ = m_-,\)
which is as stated in Proposition 4.6.
5 Complete proof of Theorem 4.1
In the last section we proved Theorem 4.2 for absolutely irreducible \(\Gamma \). In this section, we give a complete proof of Theorem 4.2. If \(\Gamma \) is not absolutely irreducible, the corresponding structure theorem has a slightly more involved form (Proposition 5.2), and accordingly we need a further technical analysis of the facial structure of the cone of positive semidefinite \(\Gamma \)symmetric Laplacian matrices.
Throughout this section, we distinguish real representations from complex representations, and denote the set of equivalence classes of real irreducible representations of \(\Gamma \) by \({\tilde{\Gamma }}\). Again, we may assume that each \(\rho \in {\tilde{\Gamma }}\) is an orthogonal matrix representation \(\rho :\Gamma \rightarrow O({\mathbb {R}}^{d_\rho })\). Let \({\mathbb {H}}\) be the algebra of quaternions. For \(x=a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k}\in {\mathbb {H}}\) (\(a,b,c,d \in {\mathbb {R}}\)), its conjugate \(x^*\) is \(x^*=a-b\mathrm{i}-c\mathrm{j}-d\mathrm{k}\).
5.1 Block-diagonalization
We first review basic facts on real representations; [6, 23] give more detailed algebraic expositions aimed at applications to semidefinite programming problems.
Let \(\rho :\Gamma \rightarrow GL(W)\) be a real irreducible representation. The set of isomorphisms of W that commute with \(\rho (\gamma )\) for all \(\gamma \in \Gamma \) is denoted by \(\mathrm{Hom}(\rho ,\rho )\). \(\mathrm{Hom}(\rho ,\rho )\) forms a division algebra over \({\mathbb {R}}\), called the commutant algebra, and it is known (by the Frobenius theorem) that \(\mathrm{Hom}(\rho ,\rho )\) is isomorphic to one of \({\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\). Accordingly, \(\rho \) is said to be of real, complex, or quaternionic type, respectively. For a finite group \(\Gamma \), the set of real irreducible representations of real (resp., complex, quaternionic) type is denoted by \({\tilde{\Gamma }}_{\mathbb {R}}\) (resp., \({\tilde{\Gamma }}_{\mathbb {C}}\), \({\tilde{\Gamma }}_{\mathbb {H}}\)). Note that \({\tilde{\Gamma }}={\tilde{\Gamma }}_{\mathbb {R}}\cup {\tilde{\Gamma }}_{\mathbb {C}}\cup {\tilde{\Gamma }}_{\mathbb {H}}\).
The types of real representations can also be understood in terms of decomposability over \({\mathbb {C}}\) as follows.
Proposition 5.1
(See, e.g., [27, Chapter 13.2]). For a real irreducible representation \(\rho \in {\tilde{\Gamma }}\),

\(\rho \) is of real type if and only if it is irreducible over \({\mathbb {C}}\) (i.e., absolutely irreducible);

\(\rho \) is of complex type if and only if it is decomposed over \({\mathbb {C}}\) into the direct sum of a complex irreducible representation \(\pi \) and its complex conjugate \({\overline{\pi }}\) such that \(\pi \) and \({\overline{\pi }}\) are not equivalent to each other;

\(\rho \) is of quaternionic type if and only if it is decomposed over \({\mathbb {C}}\) into the direct sum of two copies of a self-conjugate complex irreducible representation \(\pi \), i.e., \(\pi ={\overline{\pi }}\).
The trivial representation \(\mathrm{tri}\) is always of real type, and hence \({\tilde{\Gamma }}_{\mathbb {R}}\ne \emptyset \). The absolutely irreducible groups studied in the last section are exactly those with \({\tilde{\Gamma }}_{\mathbb {C}}={\tilde{\Gamma }}_{\mathbb {H}}=\emptyset \). The simplest example of a non-absolutely irreducible group is the cyclic group \(C_n\) of order n with \(n \ge 3\), which has a real irreducible representation of complex type of degree 2. Another fundamental example is the quaternion group \(Q_8\), whose real irreducible representations consist of four representations of real type of degree 1 and one representation of quaternionic type of degree 4.
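The trichotomy above can be detected computationally via the Frobenius–Schur indicator \(\nu (\pi )=\frac{1}{|\Gamma |}\sum _{\gamma \in \Gamma }\chi _\pi (\gamma ^2)\) of a complex irreducible character \(\chi _\pi \): the values \(+1\), 0, \(-1\) correspond to real, complex, and quaternionic type, respectively. The following Python sketch (an illustration, not part of the paper's argument) verifies that the 2-dimensional complex irreducible representation of \(Q_8\) has indicator \(-1\):

```python
import numpy as np

# Generators of the quaternion group Q_8 inside its 2-dim complex irrep:
# i -> diag(i, -i),  j -> [[0, 1], [-1, 0]]
I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)

# Generate all 8 group elements by closure under the generators
elems = [I2]
frontier = [I2]
while frontier:
    g = frontier.pop()
    for h in (qi, qj):
        gh = g @ h
        if not any(np.allclose(gh, e) for e in elems):
            elems.append(gh)
            frontier.append(gh)
assert len(elems) == 8

# Frobenius-Schur indicator: nu = (1/|G|) * sum_g chi(g^2), chi = trace
nu = sum(np.trace(g @ g) for g in elems).real / len(elems)
print(nu)  # -1.0: the complex irrep is quaternionic, so it yields one
           # degree-4 real irreducible representation of quaternionic type
```

Consistently with the degree count of the regular representation, \(Q_8\) satisfies \(4\cdot \frac{1^2}{1}+\frac{4^2}{4}=8=|Q_8|\).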
To describe the structure theorem, we introduce two linear matrix spaces. For a complex number \(a+b\mathrm{i}\in {\mathbb {C}}\) with \(a,b\in {\mathbb {R}}\), let
\(C(a+b\mathrm{i})=\begin{pmatrix} a & -b \\ b & a \end{pmatrix}.\)
The map C can be extended to \(C:{\mathbb {C}}^{n\times n}\rightarrow {\mathbb {R}}^{2n\times 2n}\) by applying it entrywise, i.e., \(C(Z)=(C(z_{ij}))\) for \(Z=(z_{ij})\in {\mathbb {C}}^{n\times n}\). The space of all real expressions of \(n \times n\) complex matrices is denoted by \({\mathcal {C}}^{2n}\), i.e., \({\mathcal {C}}^{2n}=C({\mathbb {C}}^{n\times n})\). Similarly, for \(a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k}\in {\mathbb {H}}\) with \(a, b, c, d \in {\mathbb {R}}\), let
\(H(a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k})=\begin{pmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{pmatrix},\)
and extend it over \({\mathbb {H}}^{n\times n}\) by \(H(X)=(H(x_{ij}))\) for \(X=(x_{ij})\in {\mathbb {H}}^{n\times n}\). Let \({{{\mathcal {H}}}}^{4n}=H({\mathbb {H}}^{n\times n})\).
Note that C and H defined above are compatible with matrix multiplication, and \(C(X^*)=C(X)^\top \) and \(H(Y^*)=H(Y)^\top \) hold, where \(^*\) denotes the conjugate transpose. For \(A \in {\mathcal {C}}^{2n}\) (resp., \(A \in {{{\mathcal {H}}}}^{4n}\)), the matrix \(X \in {\mathbb {C}}^{n \times n}\) satisfying \(C(X)=A\) (resp., \(X \in {\mathbb {H}}^{n \times n}\) satisfying \(H(X)=A\)) is denoted by \(C^{-1}(A)\) (resp., \(H^{-1}(A)\)).
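These two properties of C are easy to confirm numerically. The sketch below is an illustration only; it assumes the standard left-regular convention \(C(a+b\mathrm{i})=\begin{pmatrix} a & -b \\ b & a \end{pmatrix}\) applied entrywise (any consistent sign convention behaves the same way):

```python
import numpy as np

def C(Z):
    """Entrywise realification C : C^{n x n} -> R^{2n x 2n},
    sending each entry a+bi to the 2x2 block [[a, -b], [b, a]]."""
    n, m = Z.shape
    out = np.zeros((2 * n, 2 * m))
    for i in range(n):
        for j in range(m):
            a, b = Z[i, j].real, Z[i, j].imag
            out[2*i:2*i+2, 2*j:2*j+2] = [[a, -b], [b, a]]
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# C is multiplicative and turns conjugate transpose into transpose
assert np.allclose(C(X @ Y), C(X) @ C(Y))
assert np.allclose(C(X.conj().T), C(X).T)
```

The analogous checks for H go through with 4x4 blocks in place of the 2x2 blocks.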
For each \(\rho \in {\tilde{\Gamma }}\), define a linear space \({\mathcal {K}}_\rho \) by
and let
where \(\dim {\mathbb {R}}=1\), \(\dim {\mathbb {C}}=2\), and \(\dim {\mathbb {H}}=4\). Then we have the following structure theorem. Recall that \(\{E_{u,v}: u,v\in V/\Gamma \}\) stands for the standard basis of \({\mathbb {R}}^{V/\Gamma \times V/\Gamma }\).
Proposition 5.2
There exists an orthogonal transformation \(\Psi :{\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) satisfying \(\Psi (({\mathcal {L}}^V)^\Gamma ) ={\mathcal {K}}_\Gamma \) and
In the proof of Proposition 5.2, we use the fact that real irreducible representations of complex and quaternionic type can be represented by matrices in \({{{\mathcal {C}}}}^{d_{\rho }}\) and \({{{\mathcal {H}}}}^{d_\rho }\). This fact follows from the following lemma.
Lemma 5.3

(i)
For each real irreducible representation \(\rho :\Gamma \rightarrow GL(V)\) of complex type, there exists a basis B of V such that the matrix expression of \(\rho (\gamma )\) with respect to B satisfies \(\rho (\gamma ) \in {\mathcal {C}}^{d_\rho }\).

(ii)
For each real irreducible representation \(\rho :\Gamma \rightarrow GL(V)\) of quaternionic type, there exists a basis B of V such that the matrix expression of \(\rho (\gamma )\) with respect to B satisfies \(\rho (\gamma ) \in {\mathcal {H}}^{d_\rho }\).
Proof
We prove (i). As the commutant algebra \(\mathrm{Hom}(\rho ,\rho )\) of \(\rho \) is isomorphic to \({\mathbb {C}}\), there exists \(J \in \mathrm{Hom}(\rho , \rho )\) satisfying \(J^2=-\mathrm{id}_V\). Let \(B=\emptyset \). We pick any nonzero \(v_1\) from V and add \(v_1\) and \(J(v_1)\) to B. We then pick \(v_2\) from the complement of \(\mathrm{span}\,B\) and add \(v_2\) and \(J(v_2)\) to B. We repeat this procedure until we obtain \(B=\{v_1, J(v_1), v_2, J(v_2), \ldots , v_{d_{\rho }/2}, J( v_{d_{\rho }/2})\}\). By \(J^2=-\mathrm{id}_V\), one can easily check that B is a basis of V. As J commutes with \(\rho (\gamma )\), we can deduce that the matrix expression of \(\rho (\gamma )\) with respect to B has the desired form.
The proof for (ii) is identical. \(\square \)
In view of Lemma 5.3, we may suppose that each \(\rho \in {\tilde{\Gamma }}_{\mathbb {C}}\) (resp., \(\rho \in {\tilde{\Gamma }}_{\mathbb {H}}\)) is an orthogonal matrix representation with \(\rho \in {{{\mathcal {C}}}}^{d_{\rho }}\) (resp., \(\rho \in {{{\mathcal {H}}}}^{d_{\rho }}\)).
We now consider the decomposition of the regular representation R into real irreducible representations. Recall first that the multiplicity of a complex irreducible representation \(\pi \) in R is equal to the degree of \(\pi \). A representation \(\rho \in {\tilde{\Gamma }}_{\mathbb {R}}\) of real type is absolutely irreducible, and hence its multiplicity in R is its degree \(d_\rho \). A representation \(\rho \in {\tilde{\Gamma }}_{\mathbb {C}}\) of complex type is the direct sum of a complex irreducible representation \(\pi \) and its conjugate \({\overline{\pi }}\). Hence the degree of \(\pi \) is \(\frac{d_\rho }{2}\), and the multiplicity of \(\rho \) in R is \(\frac{d_\rho }{2}\). A representation \(\rho \in {\tilde{\Gamma }}_{\mathbb {H}}\) of quaternionic type decomposes into two copies of a self-conjugate complex irreducible representation \(\pi \). Hence the degree of \(\pi \) is \(\frac{d_\rho }{2}\), and the multiplicity of \(\rho \) in R is \(\frac{d_\rho }{4}\).
Therefore, there exists an orthogonal matrix \(Z \in O({\mathbb {R}}^\Gamma )\) satisfying
Note also that, by comparing the degree of R with that of its decomposition into real irreducible representations, we have
\(\sum _{{\mathbb {F}}\in \{{\mathbb {R}},{\mathbb {C}},{\mathbb {H}}\}}\sum _{\rho \in {\tilde{\Gamma }}_{\mathbb {F}}}\frac{d_\rho ^2}{\dim {\mathbb {F}}}=|\Gamma |. \qquad \mathrm{(30)}\)
For each \(\rho \in {\tilde{\Gamma }}\), let \({{{\mathcal {K}}}}'_\rho \) be a linear space defined by
and let
Equation (29) implies that, under an orthogonal transformation, each \(R(\gamma )\) is mapped into \({{{\mathcal {K}}}}'\). If we set \({{{\mathcal {T}}}} = \mathrm{span}\{R(\gamma ):\gamma \in \Gamma \}\), then the map
is a linear isomorphism between \({{{\mathcal {T}}}}\) and \({{{\mathcal {K}}}}'\).
Based on \(\psi \), we construct an orthogonal transformation \(\Psi : {\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) by \(X \mapsto (Z \otimes I_{{\hat{n}}})^{\top } X(Z \otimes I_{{\hat{n}}})\) for \(X \in {\mathbb {R}}^{V\times V}={\mathbb {R}}^{\Gamma \times \Gamma }\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\). Then the restriction of \(\Psi \) to \(({{{\mathcal {L}}}}^V)^{\Gamma }\) gives the map claimed in Proposition 5.2. Since the proof is identical to that of Proposition 4.4, we omit the details.
5.2 Extending Proposition 4.6
In this subsection, we prove Proposition 5.4, which extends Proposition 4.6 to general finite groups. Recall that the projection of \(X \in {\mathcal {K}}_\Gamma \) to \({\mathcal {K}}_\rho \) is denoted by \(X_\rho \). Given a point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\), the multiplicity of \(\rho \in {\tilde{\Gamma }}\) in \(\theta \) is denoted by \(m_\rho \). We have
Let \({{{\mathcal {K}}}}_{+,\Gamma }=\Psi (({\mathcal {L}}^V)_+^\Gamma )\). The following proposition extends Proposition 4.6 to general groups.
Proposition 5.4
For a map \(f:{\mathcal {C}}_\theta (V) \rightarrow {\mathcal {K}}_{+,\Gamma };q\mapsto \Psi (Q^\top Q)\),
In the proof of Proposition 5.4, we use the explicit form of \(\Psi (P^\top P)\) (Lemma 5.5). For this, as in Sect. 4.5, we define \({\tilde{P}}\) and Y as follows. We fix a representative vertex from each vertex orbit, and let \({\tilde{P}}\) be a \(d \times {\hat{n}}\) matrix given by arranging the coordinates of the representative vertices. Then we may suppose (by permuting the columns appropriately)
Since \(\theta \) is an orthogonal representation, there is an orthogonal matrix \(Y \in O({\mathbb {R}}^d)\) such that
Let S be the set of indices defined by
By (31), \(|S|=d\). Hence in the following discussion we identify \({\mathbb {R}}^S\) with \({\mathbb {R}}^d\) by taking the standard basis \(\{{\varvec{e}}_{(\rho ,t,k,a)}: (\rho ,t,k,a)\in S\}\).
Lemma 5.5
For each \(\rho \in {\tilde{\Gamma }}_{{\mathbb {F}}}\), we have
where \(W_{\rho ,t} \in {\mathbb {R}}^{d_\rho {\hat{n}}\times (\dim {\mathbb {F}}) }\) is defined by
The proof of Lemma 5.5 is given in Appendix A. For positive semidefinite complex Hermitian matrices, the following properties are well-known. A complex Hermitian matrix X is positive semidefinite if and only if C(X) is positive semidefinite. A positive semidefinite complex Hermitian matrix \(X \in {\mathbb {C}}^{n \times n}\) with rank r can be written as \(X=\sum _{1\le i\le r} v_i v_i^*\) for some \(v_1, \ldots , v_r \in {\mathbb {C}}^n\). Also, \(\mathrm{rank}\,C(X) = 2\,\mathrm{rank}\,X\) for any complex matrix X.
Although \({\mathbb {H}}\) is not a field, the corresponding properties are known even for quaternionic matrices. For a set \(v_1,\dots , v_k\) of vectors in \({\mathbb {H}}^n\), (right) linear dependence is defined by the existence of scalars \(\lambda _i\in {\mathbb {H}}\), not all zero, such that \(\sum _{i=1}^kv_i\lambda _i=0\). The rank of a quaternionic matrix X is then defined as the maximum size of a linearly independent set of columns. By [30, Theorem 7.3], \(\mathrm{rank}\,H(X) = 4\,\mathrm{rank}\,X\). By [30, Remark 6.1], X is positive semidefinite if and only if H(X) is positive semidefinite. Also, by [30, Corollary 6.2], a positive semidefinite \(X \in {\mathbb {H}}^{n \times n}\) with rank r can be written as \(X=\sum _{1\le i\le r} v_i v_i^*\) for some \(v_1, \ldots , v_r \in {\mathbb {H}}^n\).
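The complex case of these rank and positivity relations can be checked numerically as follows (a sketch; the entrywise convention \(a+b\mathrm{i}\mapsto \begin{pmatrix} a & -b \\ b & a\end{pmatrix}\) for C is an assumption of this illustration):

```python
import numpy as np

def C(Z):
    """Entrywise realification a+bi -> [[a, -b], [b, a]]."""
    n, m = Z.shape
    out = np.zeros((2 * n, 2 * m))
    for i in range(n):
        for j in range(m):
            a, b = Z[i, j].real, Z[i, j].imag
            out[2*i:2*i+2, 2*j:2*j+2] = [[a, -b], [b, a]]
    return out

rng = np.random.default_rng(1)
r, n = 2, 5
V = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
X = V @ V.conj().T                     # Hermitian PSD with rank r = 2

A = C(X)
assert np.allclose(A, A.T)             # C(X) is symmetric since X is Hermitian
assert np.linalg.eigvalsh(A).min() > -1e-10                       # C(X) is PSD
assert np.linalg.matrix_rank(A) == 2 * np.linalg.matrix_rank(X)   # rank doubles
```

The quaternionic statements from [30] admit the same kind of check with the 4x4-block realification H.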
Now we are ready to prove Proposition 5.4.
Proof of Proposition 5.4
By Lemma 5.5, for any \(q \in {{{\mathcal {C}}}}_\theta (V)\), we have \(\mathrm{rank}\,\Psi (Q^\top Q)_\rho \le \dim {\mathbb {F}}\cdot m_\rho \) (\(\rho \in {\tilde{\Gamma }}_{\mathbb {F}}\)). Hence it suffices to show the other inclusion. Take any \(X \in {{{\mathcal {K}}}}_{+,\Gamma }\) satisfying \(\mathrm{rank}\,X_\rho \le \dim {\mathbb {F}}\cdot m_\rho \) (\(\rho \in {\tilde{\Gamma }}_{\mathbb {F}}\)). We want to find \(p \in {{{\mathcal {C}}}}_\theta (V)\) satisfying \(\Psi (P^\top P)=X\).
The proof is identical to that for the case of absolutely irreducible groups given in Proposition 4.6. Indeed, if \(\rho \in {\tilde{\Gamma }}_{\mathbb {C}}\) (resp., \(\rho \in {\tilde{\Gamma }}_{\mathbb {H}}\)), then \(C^{-1}(X_\rho )\) (resp., \(H^{-1}(X_\rho )\)) is a positive semidefinite complex (resp., quaternionic) matrix with rank at most \(m_\rho \). Hence, for each \(\rho \in {\tilde{\Gamma }}_{{\mathbb {F}}}\), there exist vectors \(v_{\rho ,1}, \ldots , v_{\rho ,m_\rho }\) in \({\mathbb {F}}^\frac{d_\rho {\hat{n}}}{\dim {\mathbb {F}}}\) such that
depending on the type of \(\rho \). Applying the same calculation as that in Proposition 4.6, one can find a \(d \times {\hat{n}}\) matrix \({\tilde{P}}\) such that
for all \(\rho \in {\tilde{\Gamma }}\) and \(1\le t\le m_{\rho }\).
Let p be a d-dimensional configuration defined by \(P=\sum _{\gamma \in \Gamma } {\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ){\tilde{P}})\). Then p is compatible with \(\theta \). Also, by (34), the definition of \({\tilde{P}}\), and Lemma 5.5, we have \(\Psi (P^\top P)_\rho =X_\rho \). Since \(P^{\top }P=\Psi ^{-1}(X)\in {{{\mathcal {L}}}}^V\), we have \(P^{\top }P {\varvec{1}}_V=0\), meaning that \(\sum _{i\in V} p_i=0\). Thus p satisfies \(p\in {{{\mathcal {C}}}}_{\theta }(V)\) and \(\Psi (P^{\top }P)=X\) as required. \(\square \)
5.3 Proof of Theorem 4.1 for the general case
We give the proof of Theorem 4.1 for the general case. Let \({\mathbb {Q}}_{\theta , \Gamma }\) be the finite extension field of \({\mathbb {Q}}\) generated by the entries of \(\theta (\gamma )\) for \(\gamma \in \Gamma \) and those of \(\rho (\gamma )\) for \(\rho \in {\tilde{\Gamma }}\) and \(\gamma \in \Gamma \). Let \(\Psi \) be the orthogonal transformation given in Proposition 5.2, and consider the block-diagonalization of the SDP problem. The resulting SDP problem (P\(^\Psi \)) is as given in Sect. 4.3. Proposition 2.1 implies that (P\(^\Psi \)) has the unique solution \(\Psi (P^{\top }P)\) if (G, p) is universally rigid (cf. Proposition 4.5).
To apply GortlerThurston’s argument, we need to understand the facial structure of \({\mathcal {K}}_{+,\Gamma } = \Psi (({{{\mathcal {L}}}}^V)^\Gamma )\). By (28), \({\mathcal {K}}_{+,\Gamma }\) is the direct sum of positive semidefinite cones, positive semidefinite Laplacian cones, cones of the form \({\mathcal {C}}^k\cap {\mathcal {S}}_+^k\), and cones of the form \({\mathcal {H}}^k\cap {\mathcal {S}}_+^k\) for some integer k.
Through the map C, \({\mathcal {C}}^k\cap {\mathcal {S}}_+^k\) can be identified with the cone of positive semidefinite complex Hermitian matrices of size \(\frac{k}{2}\times \frac{k}{2}\). Similarly, through the map H, \({\mathcal {H}}^k\cap {\mathcal {S}}_+^k\) can be identified with the cone of positive semidefinite quaternionic Hermitian matrices of size \(\frac{k}{4}\times \frac{k}{4}\). Hence the following proposition can be proved in the same manner as Proposition 2.3.
Proposition 5.6

(i)
For \(A \in {\mathcal {C}}^k\cap {\mathcal {S}}_+^k\) with rank 2r, \(\dim F_{{\mathcal {C}}^k\cap {\mathcal {S}}_+^k}(A)=r^2\).
For \(B \in {\mathcal {C}}^k \cap {\mathcal {S}}^k\), the hyperplane \(\{ \langle X,B \rangle =0 : X \in {\mathcal {C}}^k \cap {\mathcal {S}}^k\}\) in \({\mathcal {C}}^k \cap {\mathcal {S}}^k\) exposes \(F_{{\mathcal {C}}^k\cap {\mathcal {S}}_+^k}(A)\) if and only if \(\mathrm{rank}\,A + \mathrm{rank}\,B = k\), \(\langle A,B \rangle =0\), and \(B \succeq 0\).

(ii)
For \(A \in {\mathcal {H}}^k\cap {\mathcal {S}}_+^k\) with rank 4r, \(\dim F_{{\mathcal {H}}^k\cap {\mathcal {S}}_+^k}(A)=2r^2r\).
For \(B \in {\mathcal {H}}^k \cap {\mathcal {S}}^k\), the hyperplane \(\{ \langle X,B \rangle =0 : X \in {\mathcal {H}}^k \cap {\mathcal {S}}^k\}\) in \({\mathcal {H}}^k \cap {\mathcal {S}}^k\) exposes \(F_{{\mathcal {H}}^k\cap {\mathcal {S}}_+^k}(A)\) if and only if \(\mathrm{rank}\,A + \mathrm{rank}\,B = k\), \(\langle A,B \rangle =0\), and \(B \succeq 0\).
Now we give the proof of Theorem 4.1 for the general case.
Proof of the necessity of Theorem 4.1
Let (G, p) be a \(\theta \)symmetric framework with a point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) which is generic modulo symmetry and universally rigid. Suppose also that p affinely spans \({\mathbb {R}}^d\). By a translation, we may suppose \(p \in {{{\mathcal {C}}}}_\theta (V)\). We shall apply Proposition 2.9 to the ambient space \({\mathcal {K}}_\Gamma \) defined in (28) and a point \(\Psi (P^\top P)\). To do so, we need to prove the local genericity of \(\Psi (P^\top P)\).
Claim 5.7
For
\(\Psi (P^\top P)\) is locally generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\).
Proof
The facial structure of \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\) is described by Propositions 2.3, 2.4 and 5.6. We also have Proposition 5.4. Hence the proof follows the same line as that of Claim 4.7. \(\square \)
Define a linear subspace \({\mathcal {K}}_\Gamma (G)\) of \({{{\mathcal {K}}}}_\Gamma \) and a projection \(\pi : {{{\mathcal {K}}}}_\Gamma \rightarrow {{{\mathcal {K}}}}_\Gamma (G)\) as in the previous section. Then by (the generalization of) Proposition 2.9, there exists a hyperplane \(H=\{\langle X,L \rangle =0 : X \in {\mathcal {K}}_\Gamma (G)\}\) in \({\mathcal {K}}_\Gamma (G)\) defined by some \(L \in {\mathcal {K}}_\Gamma (G)\) such that \(\pi ^{-1}(H)=\{\langle X,L \rangle =0 : X \in {\mathcal {K}}_\Gamma \}\) exposes \(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P))\). By Propositions 2.3, 2.4 and 5.6, we have
As \(L\in {\mathcal {K}}_\Gamma (G)\), \(\Psi ^{-1}(L)\) is a weighted Laplacian on G satisfying
Therefore \(\Psi ^{-1}(L)\) satisfies the property of the statement. This completes the proof. \(\square \)
Notes
Although the details are omitted here, one can explicitly describe the entries of \(Z_\Gamma \) in terms of those of \(\rho (\gamma )\) for \(\rho \in {\tilde{\Gamma }}\).
References
Alfakih, A.Y.: On dimensional rigidity of bar-and-joint frameworks. Discrete Appl. Math. 155(10), 1244–1253 (2007)
Alfakih, A.Y.: On bar frameworks, stress matrices and semidefinite programming. Math. Program. 129, 113–128 (2011)
Alfakih, A.Y., Nguyen, V.H.: On affine motions and universal rigidity of tensegrity frameworks. Linear Algebra Its Appl. 439(10), 3134–3147 (2013)
Alfakih, A.Y., Taheri, N., Ye, Y.: On stress matrices of \((d+1)\)-lateration frameworks in general position. Math. Program. 137, 1–17 (2013)
Bachoc, C., Gijswijt, D., Schrijver, A., Vallentin, F.: Invariant semidefinite programs. In: Anjos, M.F., Lasserre, J.B. (eds.) Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 219–269. Springer, Berlin (2012)
Bai, Y., de Klerk, E., Pasechnik, D., Sotirov, R.: Exploiting group symmetry in truss topology optimization. Optim. Eng. 10, 331–349 (2009)
Borwein, J.M., Wolkowicz, H.: Facial reduction for a cone-convex programming problem. J. Aust. Math. Soc. 30, 369–380 (1981)
Connelly, R.: Rigidity and energy. Invent. Math. 66(1), 11–33 (1982)
Connelly, R.: Generic global rigidity. Discrete Comput. Geom. 33(4), 549–563 (2005)
Connelly, R., Back, A.: Mathematics and tensegrity: Group and representation theory make it possible to form a complete catalogue of strut-cable constructions with prescribed symmetries. Am. Sci. 86, 142–151 (1998)
Connelly, R., Gortler, S.: Iterative universal rigidity. Discrete Comput. Geom. 53(4), 847–877 (2015)
Connelly, R., Gortler, S., Theran, L.: Affine rigidity and conics at infinity. Int. Math. Res. Not. 2018(13), 4084–4102 (2017)
Connelly, R., Gortler, S., Theran, L.: Generically globally rigid graphs have generic universally rigid frameworks. Combinatorica 40, 1–37 (2020)
Connelly, R., Terrell, M.: Globally rigid symmetric tensegrities. Struct. Topol. 21, 59–78 (1995)
Connelly, R., Whiteley, W.: Second-order rigidity and prestress stability for tensegrity frameworks. SIAM J. Discrete Math. 9, 453–491 (1995)
de Carli Silva, M.K., Tunçel, L.: Strict complementarity in semidefinite optimization with elliptopes including the max-cut SDP. SIAM J. Optim. 29, 2650–2676 (2019)
Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Optim. 25, 1160–1178 (2015)
Gatermann, K., Parrilo, P.: Symmetry groups, semidefinite programs, and sums of squares. J. Pure Appl. Algebra 192(1–3), 95–128 (2004)
Gortler, S., Healy, A., Thurston, D.: Characterizing generic global rigidity. Am. J. Math. 132(4), 897–939 (2010)
Gortler, S., Thurston, D.: Characterizing the universal rigidity of generic frameworks. Discrete Comput. Geom. 51(4), 1017–1036 (2014)
Kanno, Y., Ohsaki, M., Murota, K., Katoh, N.: Group symmetry in interior-point methods for semidefinite program. Optim. Eng. 2, 293–320 (2001)
Malestein, J., Theran, L.: Generic rigidity with forced symmetry and sparse colored graphs. In: Rigidity and Symmetry, Fields Inst. Commun., vol. 70, pp. 227–252. Springer (2014)
Murota, K., Kanno, Y., Kojima, M., Kojima, S.: A numerical algorithm for block-diagonal decomposition of matrix \(*\)-algebras with application to semidefinite programming. Jpn. J. Ind. Appl. Math. 27(1), 125–160 (2010)
Pataki, G.: The geometry of semidefinite programming. In: Aardal, K., Nemhauser, G., Weismantel, R. (eds.) Handbook of Semidefinite Programming, pp. 29–65. Springer, Berlin (2000)
Roth, B., Whiteley, W.: Tensegrity frameworks. Trans. Am. Math. Soc. 265, 419–446 (1981)
Schulze, B., Whiteley, W.: Rigidity of symmetric frameworks. In: O’Rourke, J., Tóth, C.D. (eds.) Handbook of Discrete and Computational Geometry, 3rd edn., pp. 1633–1659. Chapman and Hall/CRC, Boca Raton (2017)
Serre, J.P.: Linear Representations of Finite Groups. Springer, Berlin (1977)
So, A.M., Ye, Y.: Theory of semidefinite programming for sensor network localization. Math. Program. 109, 367–384 (2007)
Tanigawa, S.: Singularity degree of the positive semidefinite matrix completion problem. SIAM J. Optim. 27, 986–1009 (2017)
Zhang, F.: Quaternions and matrices of quaternions. Linear Algebra Its Appl. 251, 21–57 (1997)
Zhang, J.Y., Ohsaki, M.: Tensegrity Structures: Form, Stability, and Symmetry. Springer, Berlin (2015)
Acknowledgements
The authors would like to thank the referees for their valuable comments which helped to improve the manuscript.
This work was supported by JST ERATO Grant Number JPMJER1903 and JSPS KAKENHI Grant Number JP18K11155.
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix A: Proof of Lemma 5.5
In the proof of Lemma 5.5, we use the orthogonality relation of real irreducible representations (Proposition A.1). Recall that every \(\rho \in {\tilde{\Gamma }}_{\mathbb {R}}\) (resp., \(\rho \in {\tilde{\Gamma }}_{{\mathbb {C}}}, \rho \in {\tilde{\Gamma }}_{\mathbb {H}}\)) is an orthogonal representation with \(\rho (\gamma )\in {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\) (resp., \({{{\mathcal {C}}}}^{d_\rho }\), \({{{\mathcal {H}}}}^{d_\rho }\)) for \(\gamma \in \Gamma \). Fix the standard basis \({{{\mathcal {B}}}}_{\rho }\) of \({\mathbb {R}}^{d_{\rho }\times d_{\rho }}\), \({\mathcal {C}}^{d_{\rho }}\), \({\mathcal {H}}^{d_{\rho }}\) by
respectively. For each \(B \in {{{\mathcal {B}}}}_\rho \), the coordinate vector \(\rho _B \in {\mathbb {R}}^{\Gamma }\) with respect to B is defined by
Then, we have the following.
Proposition A.1
\(\left\{ \sqrt{ \frac{d_\rho }{|\Gamma |} } \rho _B : \rho \in {\tilde{\Gamma }}, B \in {{{\mathcal {B}}}}_\rho \right\} \) forms an orthonormal basis of \({\mathbb {R}}^{\Gamma }\).
Proof
By definition, for each \(\rho \in {\tilde{\Gamma }}_{\mathbb {F}}\), \(|{{{\mathcal {B}}}}_\rho |=\frac{{d_\rho }^2}{\dim {\mathbb {F}}}\). Hence by (30), \(\sum _{\rho \in {\tilde{\Gamma }}}|{{{\mathcal {B}}}}_{\rho }|=|\Gamma |\). Therefore, it suffices to prove that \(\left\{ \sqrt{ \frac{d_\rho }{|\Gamma |} } \rho _B : \rho \in {\tilde{\Gamma }}, B \in {{{\mathcal {B}}}}_\rho \right\} \) is an orthonormal set. By Schur orthogonality (Proposition 4.9) over \({\mathbb {C}}\), for two inequivalent real irreducible representations \(\rho , \rho ' \in {\tilde{\Gamma }}\) and any \(B \in {{{\mathcal {B}}}}_\rho , B' \in {{{\mathcal {B}}}}_{\rho '}\), the vectors \(\rho _B\) and \(\rho '_{B'}\) are orthogonal to each other. Hence it suffices to prove that for each \(\rho \), \(\left\{ \sqrt{\frac{d_\rho }{|\Gamma |}} \rho _B:B\in {{{\mathcal {B}}}}_\rho \right\} \) forms an orthonormal set.
When \(\rho \) is of real type (i.e., \(\rho \) is irreducible over \({\mathbb {C}}\)), this follows from Schur orthogonality, again.
Suppose that \(\rho \) is of complex type. By Proposition 5.1, \(\rho \) is (unitarily) equivalent to \(\pi \oplus {\overline{\pi }}\) for some complex irreducible representation \(\pi \) of degree \(\frac{d_{\rho }}{2}\), and by \(\rho \in {{{\mathcal {C}}}}^{d_{\rho }}\) we may suppose \(\rho (\gamma )=\mathfrak {R}\pi (\gamma ) \otimes C(1) + \mathfrak {I}\pi (\gamma ) \otimes C(\mathrm{i})\) for all \(\gamma \in \Gamma \). Then
where \(\pi _{k,l}\) denotes the vector in \({\mathbb {C}}^{\Gamma }\) such that \(\pi _{k,l}(\gamma )\) is the (k, l)entry of \(\pi (\gamma )\).
For a complex number \(c\in {\mathbb {C}}\), we have \(\begin{pmatrix} c&{\overline{c}}\end{pmatrix}\begin{pmatrix} \frac{1}{2} & -\frac{\mathrm{i}}{2} \\ \frac{1}{2} & \frac{\mathrm{i}}{2}\end{pmatrix} =\begin{pmatrix} \mathfrak {R}c&\mathfrak {I}c\end{pmatrix}\). Note that \(\sqrt{2}\begin{pmatrix} \frac{1}{2} & -\frac{\mathrm{i}}{2} \\ \frac{1}{2} & \frac{\mathrm{i}}{2}\end{pmatrix}\) is unitary. Applying this transformation entrywise to each pair \((\pi _{k,l},\overline{\pi _{k,l}})\), the set of vectors \(\{\pi _{k,l}: 1\le k,l\le \frac{d_{\rho }}{2}\}\cup \{\overline{\pi _{k,l}}: 1\le k,l\le \frac{d_{\rho }}{2}\}\) is mapped to \(\{\mathfrak {R}\pi _{k,l}: 1\le k,l\le \frac{d_{\rho }}{2}\}\cup \{\mathfrak {I}\pi _{k,l}: 1\le k,l\le \frac{d_{\rho }}{2}\}\). Since the transformation is a scalar multiple of a unitary transformation, the orthogonality of the former set (which follows from Schur orthogonality) implies the orthogonality of the latter set. By (35), this in turn implies the orthogonality of \(\left\{ \sqrt{\frac{d_\rho }{|\Gamma |}} \rho _B:B\in {{{\mathcal {B}}}}_\rho \right\} \).
Finally, suppose that \(\rho \) is of quaternionic type. By Proposition 5.1, \(\rho \) is (complex-)equivalent to the direct sum of two copies of a self-conjugate irreducible representation \(\pi \). As \(\rho \in {{{\mathcal {H}}}}^{d_{\rho }}\), we have \(\rho (\gamma )=a(\gamma )\otimes H(1) + b(\gamma ) \otimes H(\mathrm{i})+c(\gamma ) \otimes H(\mathrm{j}) + d(\gamma ) \otimes H(\mathrm{k})\) for some \(a,b,c,d:\Gamma \rightarrow M_{\frac{d_\rho }{4}}({\mathbb {R}})\), and we may take \(\pi \) such that
Hence, by using the Schur orthogonality of \(\pi \), one can check the statement in the same manner as in the case of complex type. \(\square \)
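Proposition A.1 can be verified concretely for a small non-absolutely-irreducible group. For \(\Gamma =C_4\), the real irreducible representations are the trivial and sign representations (real type) and the degree-2 rotation representation (complex type), so \(1+1+2=4=|\Gamma |\) scaled coordinate vectors should form an orthonormal basis of \({\mathbb {R}}^{\Gamma }\). The following sketch (an illustration under the conventions of the proposition) checks this:

```python
import numpy as np

# Real irreducible representations of the cyclic group C_4:
#   trivial and sign (real type, degree 1), and the rotation
#   representation rho(m) = rotation by (pi/2)*m (complex type, degree 2).
n = 4                                    # |Gamma|
thetas = np.pi / 2 * np.arange(n)

vectors = []   # scaled coordinate vectors sqrt(d_rho/|Gamma|) * rho_B
vectors.append(np.sqrt(1 / n) * np.ones(n))                      # trivial
vectors.append(np.sqrt(1 / n) * np.array([1.0, -1, 1, -1]))      # sign
# For the rotation irrep, rho(m) = cos(theta_m) C(1) + sin(theta_m) C(i),
# so the basis {C(1), C(i)} yields the coordinate vectors cos and sin
vectors.append(np.sqrt(2 / n) * np.cos(thetas))
vectors.append(np.sqrt(2 / n) * np.sin(thetas))

# Proposition A.1: these four vectors form an orthonormal basis of R^Gamma
G = np.array([[u @ v for v in vectors] for u in vectors])
assert np.allclose(G, np.eye(n), atol=1e-12)
```

Note how the scaling \(\sqrt{d_\rho /|\Gamma |}=\sqrt{2/4}\) for the complex-type irrep compensates for the fact that \(\cos \) and \(\sin \) each appear in two of the four matrix entries.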
Proof of Lemma 5.5
We apply the same calculation as that in the proof of Lemma 4.10. Then, for each \(\rho \in {\tilde{\Gamma }}\), we have
with
If \(\rho \) is of real type, the proof is identical to that of Lemma 4.10.
Suppose that \(\rho \) is of complex type. By Proposition A.1, for a real irreducible representation \(\rho ' \in {\tilde{\Gamma }}\), we have
To see how the sum in the above relation for \(\rho =\rho '\) can be simplified, let \(\{{\varvec{e}}_{(\ell ,a)}: 1\le \ell \le \frac{d_{\rho }}{2}, 1\le a\le 2\}\) be the standard basis of \({\mathbb {R}}^{d_{\rho }}\). Then
Combining it with (37), we have
with
Hence by (36) and (38), we have \(\Psi (P^{\top }P)_{\rho }=\sum _{1\le t\le m_{\rho }}W_{\rho ,t}W_{\rho ,t}^{\top }\) with
as required.
Finally suppose that \(\rho \) is of quaternionic type. For a real irreducible representation \(\rho ' \in {\tilde{\Gamma }}\), by Proposition A.1, \(\sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \rho '(\gamma )\) is 0 if \(\rho \) and \(\rho '\) are not equivalent, and if \(\rho = \rho '\), this is equal to
For \(\rho =\rho '\), this value can be written as \(\frac{|\Gamma |}{d_\rho } B B^\top \) with
Applying the same calculation as that of the complex case, one can derive the desired form. \(\square \)
Cite this article
Oba, R., Tanigawa, Si. Characterizing the universal rigidity of generic tensegrities. Math. Program. 197, 109–145 (2023). https://doi.org/10.1007/s10107021017302
Keywords
 Tensegrity
 Graph rigidity
 Universal rigidity
 Semidefinite programming
 Strict complementarity
 Super stability
Mathematics Subject Classification
 52C25 Rigidity and flexibility of structures (aspects of discrete geometry)
 90C22 Semidefinite programming