
Characterizing the universal rigidity of generic tensegrities

Abstract

A tensegrity is a structure made from cables, struts, and stiff bars. A d-dimensional tensegrity is universally rigid if it is rigid in any dimension \(d'\) with \(d'\ge d\). The celebrated super stability condition due to Connelly gives a sufficient condition for a tensegrity to be universally rigid. Gortler and Thurston showed that super stability characterizes universal rigidity when the point configuration is generic and every member is a stiff bar. We extend this result in two directions. We first show that a generic universally rigid tensegrity is super stable. We then extend it to tensegrities with point group symmetry, and show that this characterization still holds as long as a tensegrity is generic modulo symmetry. Our strategy is based on the block-diagonalization technique for symmetric semidefinite programming problems, and our proof relies on the theory of real irreducible representations of finite groups.

Introduction

A tensegrity is a stable structure made from cables, struts, and stiff bars. Since their invention by Kenneth Snelson, the theory and applications of tensegrities have been extensively studied from various perspectives. A mathematical foundation for rigidity and stability analysis has been established in the context of rigidity theory [8, 15, 25]. Following the notation in that context, we define a (d-dimensional) tensegrity as a triple \((G, \sigma , p)\) of an edge-signed graph \((G,\sigma )\) with \(\sigma :E(G)\rightarrow \{-1,0,+1\}\) and a point-configuration \(p:V(G)\rightarrow {\mathbb {R}}^d\). Here each vertex i corresponds to a joint \(p_i=p(i)\in {\mathbb {R}}^d\), and each edge \(e=ij\) with \(\sigma (e)=+1/0/-1\) corresponds to a cable/bar/strut, respectively, between joints \(p_i\) and \(p_j\). When every member is a stiff bar (that is, \(\sigma (e)=0\) for every \(e\in E(G)\)), a tensegrity is called a bar-joint framework, which is the central object of study in rigidity theory.

In a tensegrity, all bars are stiff and cannot change length, while cables are allowed to decrease in length and struts are allowed to increase in length. Given the system of these geometric constraints, the global rigidity of the tensegrity is defined in terms of the uniqueness of the solution of the system up to isometries. More formally, given an edge-signed graph \((G,\sigma )\), two point-configurations p, q for \((G,\sigma )\) are said to be congruent if

$$\begin{aligned} \Vert p_i -p_j\Vert =\Vert q_i -q_j\Vert \text { for all } i, j \in V(G), \end{aligned}$$

where \(\Vert \cdot \Vert \) denotes the Euclidean norm, and a tensegrity \((G,\sigma ,p)\) is congruent to a tensegrity \((G,\sigma ,q)\) if p is congruent to q. We say that a tensegrity \((G,\sigma ,p)\) dominates a tensegrity \((G,\sigma ,q)\) if

$$\begin{aligned} \Vert p_i -p_j\Vert&\ge \Vert q_i -q_j\Vert \text { for all}\, e=ij \in E(G)\,\text { with}\, \sigma (e)=+1, \\ \Vert p_i -p_j\Vert&= \Vert q_i -q_j\Vert \text { for all}\, e=ij \in E(G)\, \text { with}\, \sigma (e)=0,\,\text { and} \\ \Vert p_i -p_j\Vert&\le \Vert q_i -q_j\Vert \text { for all}\, e=ij \in E(G)\,\text { with}\, \sigma (e)=-1. \end{aligned}$$

This dominance captures the set of possible deformations of a given tensegrity \((G,\sigma , p)\), where a tensegrity \((G,\sigma ,q)\) satisfies the geometric constraints posed by cables/bars/struts of \((G,\sigma , p)\) if and only if \((G,\sigma ,q)\) is dominated by \((G,\sigma ,p)\). A d-dimensional tensegrity \((G,\sigma ,p)\) is globally rigid if every d-dimensional tensegrity \((G,\sigma ,q)\) dominated by \((G,\sigma ,p)\) is congruent to \((G,\sigma ,p)\).

Connelly [8] initiated the rigidity analysis of tensegrities, and in his paper [8] in 1982 he gave a celebrated sufficient condition for the global rigidity in terms of stress matrices (that is, graph Laplacian matrices weighted by equilibrium self-stresses). Tensegrities satisfying his sufficient condition are called super stable, and super stability is now used as a major criterion for structural engineers to develop new tensegrities (see, e.g., [31]).

Recently, Connelly’s super stability condition has received attention in the context of sensor network localization and the graph realization problem [1, 2, 28]. To understand the exact solvability of the SDP relaxation, So and Ye [28] looked at a stronger rigidity property, called universal rigidity. Suppose that \((G,\sigma ,p)\) is a d-dimensional tensegrity whose ambient space \({\mathbb {R}}^d\) lies in \({\mathbb {R}}^{d'}\) for each integer \(d'\ge d\). Then \((G,\sigma ,p)\) is also a tensegrity in \({\mathbb {R}}^{d'}\). We say that \((G,\sigma ,p)\) is universally rigid if \((G,\sigma ,p)\) is globally rigid in \({\mathbb {R}}^{d'}\) for every integer \(d'\ge d\). Clearly, universal rigidity implies global rigidity, but the converse implication does not hold in general, as indicated in Fig. 1 (see, e.g., [13] for further interactions between the two rigidity concepts).

Fig. 1 The framework on the left is globally rigid in \({\mathbb {R}}^2\), but it is not universally rigid. The framework on the right is universally rigid since it is a Cauchy polygon [8]. These examples show that universal rigidity is not a generic property of a graph.

Although universal rigidity is stronger than global rigidity, super stability still implies universal rigidity, as is implicit in Connelly’s original work [8]. It turns out that super stability even characterizes universal rigidity for almost all bar-joint frameworks. Specifically, we say that a tensegrity (or a bar-joint framework) is generic if the set of coordinates of the points is algebraically independent over \({\mathbb {Q}}\). In 2014, Gortler and Thurston [20] proved that a generic bar-joint framework \((G,p)\) is universally rigid if and only if it is super stable.

The goal of this paper is to extend the Gortler-Thurston characterization in two directions. We first extend the result to tensegrities, and show that universal rigidity and super stability coincide for generic tensegrities. We then extend it to tensegrities with point group symmetry, where a finite point group acts faithfully on the underlying signed graph and the point-configurations are compatible with this action (see Sect. 4 for the formal definition). Note that a priori a tensegrity with point group symmetry is not generic, but we shall prove that the characterization still holds as long as tensegrities are “generic modulo symmetry”.

The infinitesimal rigidity of symmetric frameworks has been widely investigated; see, e.g., [22, 26] and references therein. This research deals with the rigidity of symmetric structures in two settings: forced symmetric rigidity and incidentally symmetric rigidity. Forced symmetric rigidity considers only deformations that maintain the underlying symmetry, while incidentally symmetric rigidity deals with deformations that may break the underlying symmetry. In this paper, for symmetric tensegrities, we focus on the universal rigidity of the latter type.

As tensegrity catalogues show, most existing tensegrities exhibit symmetry or are compositions of simple symmetric modules, and building larger tensegrities based on group symmetry is now a standard technique in structural engineering. The technique was initiated by Connelly and Terrell [14], who showed how to simplify the super stability condition via finite group representation theory. Although their paper focuses on particular instances, the technique is general enough to design a larger class of symmetric tensegrities [10]. An implication of our result is that any universally rigid tensegrity whose point-configuration is generic modulo symmetry can be obtained from stress matrices constructed as in the method of Connelly and Terrell.

We should remark that, as shown by Connelly and Gortler [11], the universal rigidity of tensegrities can be characterized by a sequence of dual solutions in the facial reduction procedure due to Borwein and Wolkowicz [7]. We believe, however, that the characterization in terms of stress matrices (or weighted Laplacians) is important toward characterizing the global rigidity of symmetric tensegrities. A characterization of the global rigidity of generic bar-joint frameworks is known in terms of stress matrices [19].

Technically, our work is closely related to the topic of strict complementarity in semidefinite programming (SDP), or equivalently to the facial exposedness of projections of positive semidefinite cones. Understanding the existence of a strict complementary pair of primal and dual solutions is a classical but still ongoing research topic in convex optimization (see, e.g., [16] for a recent result). The problem of characterizing the universal rigidity of bar-joint frameworks is known to be equivalent to the existence of strict complementary pairs of primal and dual solutions in the Euclidean matrix completion problem (see Sect. 2 for details). Several works answer the characterization problem (or the existence of a strict complementary pair) for special classes of graphs [4, 17, 29], while Gortler-Thurston [20] solved the problem assuming a certain genericity of the input entries. This paper provides a new direction based on group symmetry to go beyond generic instances.

Our proof strategy is based on the block-diagonalization technique for symmetric SDP problems. The general idea is to use the block-diagonalization of the underlying matrix algebra to decompose SDP instances into smaller pieces, and the method has been used successfully to solve large-scale SDP problems; see, e.g., [5, 18, 23]. Also, prior research [6, 21] on this technique is motivated by the optimal design of truss structures. Our technical contribution is to use the block-diagonalization technique to analyze the facial structure of SDP problems rather than to reduce computational cost, and our proof essentially relies on the theory of real irreducible representations.

Semidefinite programming problem for universal rigidity

In this section we explain the background material for analyzing universal rigidity from the viewpoint of semidefinite programming.

Throughout the paper we use the following notation. Let V be a finite set with \(|V|=n\) (typically \(V=\{1,2,\dots , n\}\)). For a finite set X with \(|X|=m\), let \({\mathbb {R}}^X\) be the m-dimensional Euclidean space whose entries are indexed by the elements of X. For \(i\in X\), let \({\varvec{e}}_i\) be the unit vector of \({\mathbb {R}}^X\) whose i-th entry is one and all other entries are zero, and let \({{\varvec{1}}}_X=\sum _{i\in X} {\varvec{e}}_i\). Similarly, let \({\mathcal {S}}^X\) be the set of all \(m \times m\) symmetric matrices whose entries are indexed by pairs of elements in X. Throughout the paper, \({\mathcal {S}}^X\) is regarded as a Euclidean space with the trace inner product \(\langle \cdot , \cdot \rangle \) defined by \(\langle A,B \rangle =\text {tr}AB\). If \(A \in {\mathcal {S}}^X\) is positive semidefinite, we write \(A \succeq 0\), and let \({\mathcal {S}}^X_+=\{ A \in {\mathcal {S}}^X: A\succeq 0\}\).

For a graph G, let \(N_G(i)\) be the set of all neighbors of \(i\in V(G)\) in G, and let \({\overline{N}}_G(i)=N_G(i)\cup \{i\}\).

Weighted Laplacian and configurations

Given a graph \(G=(V,E)\) with edge weight \(\omega :E(G)\rightarrow {\mathbb {R}}\), its Laplacian \(L_{G,\omega }\) is defined by

$$\begin{aligned} L_{G,\omega }:=\sum _{e=ij\in E} \omega _{ij} F_{ij}, \end{aligned}$$

where \(\omega _{ij}=\omega (ij)\) and

$$\begin{aligned} F_{ij}:=\left( {\varvec{e}}_i-{\varvec{e}}_j\right) \left( {\varvec{e}}_i-{\varvec{e}}_j\right) ^\top . \end{aligned}$$

It is symmetric and always satisfies \(L_{G,\omega }{{\varvec{1}}}_V=0\). A weighted Laplacian of the complete graph on V is simply called a Laplacian matrix. (Equivalently, a symmetric matrix L is a Laplacian matrix if \(L{{\varvec{1}}}_V=0\).) Let \({\mathcal {L}}^V\) be the set of all Laplacian matrices. Then \({\mathcal {L}}^V\) is a linear subspace of \({\mathcal {S}}^V\) given by

$$\begin{aligned} {\mathcal {L}}^V=\mathrm{span}\{F_{ij}: i,j\in V, i\ne j\}, \end{aligned}$$

where \(\{F_{ij}: i,j\in V, i\ne j\}\) forms a basis.
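As a concrete illustration of these definitions, the following Python/NumPy sketch (the 4-cycle and its unit weights are illustrative choices, not from the paper) assembles \(L_{G,\omega }\) from the rank-one matrices \(F_{ij}\) and verifies \(L_{G,\omega }{{\varvec{1}}}_V=0\):

```python
import numpy as np

def F(n, i, j):
    # F_ij = (e_i - e_j)(e_i - e_j)^T, a rank-one matrix
    v = np.zeros(n)
    v[i], v[j] = 1.0, -1.0
    return np.outer(v, v)

def weighted_laplacian(n, weighted_edges):
    # L_{G,omega} = sum_{ij in E} omega_ij * F_ij
    L = np.zeros((n, n))
    for (i, j), w in weighted_edges.items():
        L += w * F(n, i, j)
    return L

# Illustrative example: a 4-cycle with unit weights.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
L = weighted_laplacian(4, edges)
assert np.allclose(L, L.T)             # L is symmetric
assert np.allclose(L @ np.ones(4), 0)  # L 1_V = 0 always holds
```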

Let \(J_V={{\varvec{1}}}_V {{\varvec{1}}}_V^{\top }\). When \(L\succeq 0\), \(L\in {\mathcal {L}}^V\) if and only if \(\langle L, J_V\rangle =0\). Hence the set of positive semidefinite Laplacian matrices \({\mathcal {L}}_+^V\) is given by

$$\begin{aligned} {\mathcal {L}}_+^V=\left\{ L \in {\mathcal {S}}_+^V : \langle L,J_V \rangle =0 \right\} . \end{aligned}$$

Let \(q:V\rightarrow {\mathbb {R}}^d\) be a d-dimensional point configuration for some positive integer d. We identify q with a matrix Q of size \(d\times n\) whose ith column vector is \(q_i\). We then have \(Q^{\top } Q\succeq 0\), and \(\langle Q^\top Q, J_V \rangle = 0\) holds if and only if the center of gravity of q(V) is the origin, i.e., \(\sum _{i\in V} q_i=0\). \(Q^\top Q\) is called the Gram matrix of q. Since the properties we are interested in (such as universal rigidity) are invariant by translations, throughout the paper we shall focus on tensegrities whose center of gravity is at the origin.
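As a quick numerical check of this equivalence (a sketch with an arbitrary random configuration; the sizes are illustrative), centering the configuration makes its Gram matrix orthogonal to \(J_V\):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 5))      # 5 illustrative points in R^2, as columns
Q -= Q.mean(axis=1, keepdims=True)   # translate the center of gravity to the origin

X = Q.T @ Q                          # Gram matrix of the configuration
J = np.ones((5, 5))                  # J_V = 1_V 1_V^T
assert np.linalg.eigvalsh(X).min() > -1e-9  # Q^T Q >= 0
assert abs(np.trace(X @ J)) < 1e-9          # <Q^T Q, J_V> = 0 since sum_i q_i = 0
```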

We denote by \({\mathcal {C}}_d(V)\) the set of all point configurations \(q:V\rightarrow {\mathbb {R}}^d\) such that \(\sum _{i\in V} q_i=0\) and q(V) affinely spans \({\mathbb {R}}^d\), and let \({\mathcal {C}}(V)=\bigcup _{d\in {\mathbb {Z}}_{\ge 0}} {\mathcal {C}}_d(V)\). Then we have

$$\begin{aligned} \left\{ L\in {\mathcal {L}}^V_+: \mathrm{rank}\,L=d\right\} =\left\{ Q^{\top } Q: q\in {\mathcal {C}}_d(V)\right\} \end{aligned}$$
(1)

and

$$\begin{aligned} {\mathcal {L}}^V_+=\{Q^{\top } Q: q\in {\mathcal {C}}(V)\}. \end{aligned}$$
(2)

SDP formulation

Let \((G,\sigma , p)\) be a d-dimensional tensegrity. Let \(E_0=\sigma ^{-1}(0)\), \(E_+=\sigma ^{-1}(+1)\), \(E_-=\sigma ^{-1}(-1)\). We consider the following semidefinite programming problem for \((G,\sigma ,p)\):

$$\begin{aligned} \begin{array}{llll} \text {(P)} &{} \text {max.} &{} 0 \\ &{} \text {s.t.} &{} \langle X, F_{ij} \rangle = \Vert p_i-p_j\Vert ^2 &{}\left( ij \in E_0\right) \\ &{} &{} \langle X, F_{ij} \rangle \le \Vert p_i-p_j\Vert ^2 &{}\left( ij \in E_+\right) \\ &{} &{} \langle X, F_{ij} \rangle \ge \Vert p_i-p_j\Vert ^2 &{}\left( ij \in E_-\right) \\ &{} &{} X \in {\mathcal {L}}^V_+. \end{array} \end{aligned}$$

By (2) any feasible X is written as \(X=Q^{\top }Q\) for some \(q\in {\mathcal {C}}(V)\). Moreover,

$$\begin{aligned} \langle Q^\top Q, F_{ij}\rangle =\langle Q^\top Q, \left( {\varvec{e}}_i-{\varvec{e}}_j\right) \left( {\varvec{e}}_i-{\varvec{e}}_j\right) ^\top \rangle = \Vert q_i-q_j\Vert ^2 \end{aligned}$$

holds, which means that \(Q^\top Q\) is feasible if and only if \((G,\sigma ,q)\) is dominated by \((G,\sigma ,p)\). It can also be checked that \(Q^\top Q=P^{\top } P\) holds if and only if p and q are congruent. Therefore, we have the following.
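The key identity \(\langle Q^\top Q, F_{ij}\rangle =\Vert q_i-q_j\Vert ^2\) is easy to verify numerically; a sketch with an arbitrary random configuration (illustrative sizes, not tied to any particular tensegrity):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.standard_normal((3, 6))   # 6 points in R^3, as columns
X = Q.T @ Q                       # Gram matrix

for i in range(6):
    for j in range(i + 1, 6):
        v = np.zeros(6)
        v[i], v[j] = 1.0, -1.0
        F_ij = np.outer(v, v)
        # <Q^T Q, F_ij> = ||q_i - q_j||^2
        assert abs(np.trace(X @ F_ij)
                   - np.linalg.norm(Q[:, i] - Q[:, j]) ** 2) < 1e-9
```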

Proposition 2.1

\((G,\sigma ,p)\) is universally rigid if and only if (P) has a unique feasible solution.

Since \({\mathcal {L}}^V_+\subset {\mathcal {L}}^V\) and \(F_{ij}\in {\mathcal {L}}^V\), we can consider the dual problem of (P) in \({\mathcal {L}}^V\), that is,

$$\begin{aligned} \begin{array}{llll} \text {(D)} &{} \text {min.} &{} \sum _{ij \in E(G)} \omega _{ij}\Vert p_i-p_j\Vert ^2 \\ &{} \text {s.t.} &{} \sum _{ij \in E(G)} \omega _{ij} F_{ij} \succeq 0 \\ &{} &{} \sigma (ij)\omega _{ij} \ge 0 &{} (ij\in E(G)). \end{array} \end{aligned}$$

By weak duality, the dual optimal value is at least 0, and it is indeed 0 as it is attained by \(\omega =0\).

If we regard a dual variable \(\omega :E(G)\rightarrow {\mathbb {R}}; ij\mapsto \omega _{ij}\) as an edge weight of G, then \(L_{G,\omega }=\sum _{ij\in E(G)} \omega _{ij} F_{ij}\), and hence the first dual constraint can be written as \(L_{G,\omega }\succeq 0\). Moreover, the objective function equals \(\langle P^\top P,L_{G,\omega } \rangle \). Hence, when \(L_{G,\omega }\succeq 0\), \(\omega \) attains the dual optimal value if and only if \(PL_{G,\omega }=0\) holds. In terms of p, the latter condition becomes

$$\begin{aligned} \sum _{j \in N_G(i)} \omega _{ij}\left( p_i-p_j\right) =0 \qquad (i\in V(G)). \end{aligned}$$
(3)

Equation (3) is nothing but the equilibrium condition for structures to be statically rigid, and it appears frequently in rigidity theory. In general, for a tensegrity \((G,\sigma , p)\), an edge weight \(\omega :E(G)\rightarrow {\mathbb {R}}\) is said to be an equilibrium stress if \(\omega \) satisfies (3). Also, \(\omega \) is said to be proper if

$$\begin{aligned} \sigma (ij)\omega _{ij}\ge 0\qquad (ij\in E(G)). \end{aligned}$$
(4)

We further say that \(\omega \) is strictly proper if (4) holds with strict inequality for every \(ij \in E_+ \cup E_-\). Condition (4) reflects the physical fact that each cable can only carry a tension while each strut can only carry a compression (see [25] for more details).
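To illustrate conditions (3) and (4), consider the classical square tensegrity (cables on the four sides, struts on the two diagonals); this folklore example is illustrative and not taken from the paper. The weights \(\omega =+1\) on cables and \(\omega =-1\) on struts form a strictly proper equilibrium stress; a sketch:

```python
import numpy as np

# A classical folklore example: the square with cables on the four sides
# (sigma = +1) and struts on the two diagonals (sigma = -1).
p = {0: np.array([1.0, 1.0]), 1: np.array([-1.0, 1.0]),
     2: np.array([-1.0, -1.0]), 3: np.array([1.0, -1.0])}
sigma = {(0, 1): +1, (1, 2): +1, (2, 3): +1, (3, 0): +1,
         (0, 2): -1, (1, 3): -1}
omega = {e: float(s) for e, s in sigma.items()}  # +1 on cables, -1 on struts

# Equilibrium condition (3): sum_{j in N(i)} omega_ij (p_i - p_j) = 0.
for i in p:
    resultant = np.zeros(2)
    for (a, b), w in omega.items():
        if i == a:
            resultant += w * (p[i] - p[b])
        elif i == b:
            resultant += w * (p[i] - p[a])
    assert np.allclose(resultant, 0)

# Strictly proper: sigma(ij) * omega_ij > 0 on every cable and strut.
assert all(s * omega[e] > 0 for e, s in sigma.items())
```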

With this notation, the discussion is summarized as follows.

Proposition 2.2

An edge weight \(\omega :E(G)\rightarrow {\mathbb {R}}\) is an optimal solution of (D) if and only if it is a proper equilibrium stress of \((G,\sigma ,p)\).

Facial structure of \({\mathcal {L}}_+^V\)

In the next two subsections, we provide the high-level ideas behind Connelly’s sufficient condition and the Gortler-Thurston characterization, since our technical result builds on these ideas. The key ingredient in both results is the facial structure of \({\mathcal {L}}_+^V\).

Let C be a non-empty convex set in a Euclidean space. The dimension of C is the dimension of the smallest affine subspace containing C and is denoted as \(\dim C\). A convex subset \(F \subseteq C\) is a face if for any \(x,y \in C\), \(\frac{x+y}{2} \in F\) implies \(x,y \in F\). For \(x \in C\), the smallest face containing x is called the minimal face of x and is denoted as \(F_C(x)\). We say that a hyperplane H exposes a face F of C if \(F=C\cap H\) and H supports C (i.e., C lies on the closed halfspace defined by H). A face F is said to be exposed if there is a hyperplane exposing F. To simplify the presentation, we also consider the ambient space as a hyperplane whose normal vector is the zero vector. Then C itself is always exposed. C is called exposed if every face of C is exposed. It is well-known that \({{{\mathcal {S}}}}_+^V\) (in \({{{\mathcal {S}}}}^V\)) is exposed, but this is not a general property of convex sets. Moreover the following properties are known for the facial structure of \({{{\mathcal {S}}}}^n_+\) (see, e.g., [24]).

Proposition 2.3

Let \(A \in {{{\mathcal {S}}}}^n_+\) be a matrix with rank d. Then \(\dim F_{{{{\mathcal {S}}}}^n_+} (A) = \left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \).

For \(B \in {{{\mathcal {S}}}}^n\), the hyperplane \(\{X \in {{{\mathcal {S}}}}^n : \langle X,B \rangle =0\}\) exposes \(F_{{{{\mathcal {S}}}}^n_+} (A)\) if and only if B satisfies \(\mathrm{rank}\,A + \mathrm{rank}\,B =n\), \(\langle A,B \rangle =0\), and \(B \succeq 0\).

Now we are interested in the facial structure of \({\mathcal {L}}_+^V\) in \({\mathcal {L}}^V\). \({\mathcal {L}}_+^V\) is known to be a face of \({\mathcal {S}}_+^V\), and one can understand the facial structure of \({\mathcal {L}}_+^V\) by restricting the ambient matrix space to \({\mathcal {L}}^V\). Based on Proposition 2.3, the following properties easily follow.

Proposition 2.4

Let \(L\in {\mathcal {L}}_+^V\) be a matrix with rank d. Then \(\dim F_{{\mathcal {L}}_+^V}(L)={d+1 \atopwithdelims ()2}\).

For \(M\in {\mathcal {L}}^V\), the hyperplane \(\{X\in {\mathcal {L}}^V: \langle X, M\rangle =0\}\) exposes \(F_{{\mathcal {L}}_+^V}(L)\) if and only if M satisfies \(\mathrm{rank}\,L+\mathrm{rank}\,M=|V|-1\), \(\langle L, M\rangle =0\), and \(M\succeq 0\).
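For a concrete instance of Proposition 2.4, the centered unit square (d = 2, n = 4) together with its classical rank-one stress matrix satisfies the three exposing conditions; the example is illustrative and not taken from the paper:

```python
import numpy as np

# Centered unit-square configuration P (2 x 4) and the rank-one matrix
# M = v v^T with v = (1, -1, 1, -1), which is the weighted Laplacian for
# weights +1 on the sides and -1 on the diagonals.
P = np.array([[1.0, -1.0, -1.0, 1.0],
              [1.0,  1.0, -1.0, -1.0]])
v = np.array([1.0, -1.0, 1.0, -1.0])
M = np.outer(v, v)

X = P.T @ P
# Conditions of Proposition 2.4: rank L + rank M = |V| - 1, <L, M> = 0, M >= 0.
assert np.linalg.matrix_rank(X) + np.linalg.matrix_rank(M) == 4 - 1
assert abs(np.trace(X @ M)) < 1e-9
assert np.linalg.eigvalsh(M).min() > -1e-9
```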

Connelly’s sufficient condition

The following is Connelly’s super stability condition.

Theorem 2.5

(Connelly [8]). Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with n vertices, and suppose that

  1. (i)

    it has a strictly proper equilibrium stress \(\omega \) such that \(L_{G,\omega }\succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\), and

  2. (ii)

    there is no non-zero symmetric matrix S of size \(d\times d\) such that

    $$\begin{aligned} \left( p_i-p_j\right) ^{\top } S\left( p_i-p_j\right) =0\qquad (ij\in E(G)). \end{aligned}$$

Then \((G,\sigma ,p)\) is universally rigid.

The condition (ii) is referred to as the conic condition for the edge directions.
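Both conditions of Theorem 2.5 can be checked numerically for the classical square tensegrity (cables on the sides with \(\omega =+1\), struts on the diagonals with \(\omega =-1\)); again this is an illustrative folklore example, not one from the paper:

```python
import numpy as np

# The classical square tensegrity: cables on the sides (omega = +1),
# struts on the diagonals (omega = -1), with n = 4 and d = 2.
P = np.array([[1.0, -1.0, -1.0, 1.0],
              [1.0,  1.0, -1.0, -1.0]])      # points as columns
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0,
         (0, 2): -1.0, (1, 3): -1.0}

# Condition (i): L_{G,omega} is PSD with rank n - d - 1 = 1.
L = np.zeros((4, 4))
for (i, j), w in edges.items():
    v = np.zeros(4)
    v[i], v[j] = 1.0, -1.0
    L += w * np.outer(v, v)
assert np.linalg.eigvalsh(L).min() > -1e-9
assert np.linalg.matrix_rank(L) == 4 - 2 - 1

# Condition (ii): each edge direction (a, b) imposes the linear equation
# a^2 S11 + 2ab S12 + b^2 S22 = 0 on a symmetric 2x2 matrix S; the conic
# condition holds iff the only solution is S = 0, i.e., the system has rank 3.
rows = []
for (i, j) in edges:
    a, b = P[:, i] - P[:, j]
    rows.append([a * a, 2 * a * b, b * b])
assert np.linalg.matrix_rank(np.array(rows)) == 3
```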

Connelly [9] pointed out that for a generic bar-joint framework with at least \(d+1\) vertices the conic condition for the edge directions always holds. Recent papers [3, 12] examine how to ensure the conic condition for the edge directions without genericity. For practical purposes, the following statement due to Alfakih and Nguyen would be sufficiently general.

Theorem 2.6

(Alfakih and Nguyen [3]). Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with n vertices such that \(p({\overline{N}}_G(i))\) affinely spans \({\mathbb {R}}^d\) for each \(i\in V(G)\). Suppose also that \((G,\sigma ,p)\) has a strictly proper equilibrium stress \(\omega \) such that \(L_{G,\omega }\succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\). Then the conic condition for the edge directions holds.

See [12] for stronger sufficient conditions.

Characterization by Gortler and Thurston

Gortler-Thurston [20] established the reverse direction of Theorem 2.5 for generic bar-joint frameworks.

Theorem 2.7

(Gortler-Thurston [20]). A generic d-dimensional bar-joint framework \((G,p)\) with \(n\ge d+2\) vertices is universally rigid if and only if it has an equilibrium stress \(\omega \) such that \(L_{G,\omega }\succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\).

The sufficiency is due to Connelly. The proof of the necessity goes as follows. Suppose that \((G,p)\) is a universally rigid generic framework, and we want to find \(L_{G,\omega }\) as given in the statement. Translate p so that the center of gravity is at the origin, and consider the face \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\). By Proposition 2.4, \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) is exposed by the hyperplane \(\{X\in {\mathcal {L}}^V: \langle X, L\rangle =0\}\) for some \(L\in {\mathcal {L}}^V_+\) with

$$\begin{aligned}&\mathrm{rank}\,P^{\top }P+\mathrm{rank}\,L=n-1, \end{aligned}$$
(5)
$$\begin{aligned}&\langle P^{\top } P, L\rangle =0. \end{aligned}$$
(6)

By \(L\in {\mathcal {L}}^V_+\), we have \(L\succeq 0\), and by (5), we also have \(\mathrm{rank}\,L=n-d-1\). Hence, if \(L=L_{G,\omega }\) for some \(\omega :E(G)\rightarrow {\mathbb {R}}\) (i.e., the (i, j)-th entry of L is zero if \(ij\notin E(G)\)), then Proposition 2.2 and (6) imply that \(\omega \) is an equilibrium stress.

Therefore, what remains is to prove that \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) is exposed by the hyperplane defined by \(L_{G,\omega }\) for some \(\omega \).

To find such \(L_{G,\omega }\), consider the subspace

$$\begin{aligned} {\mathcal {L}}(G):=\mathrm{span}\{F_{ij}: ij\in E(G)\} \end{aligned}$$

of \({\mathcal {L}}^V\). The idea is to look at the projection \(\pi \) of \({\mathcal {L}}^V\) to \({\mathcal {L}}(G)\), and compute the hyperplane H of \({\mathcal {L}}(G)\) exposing the minimal face of \(\pi (P^{\top }P)\) in \(\pi ({\mathcal {L}}^V_+)\). Then this hyperplane H is defined by \(L\in {\mathcal {L}}(G)\), which is equivalent to having an expression \(L=L_{G,\omega }\) for some \(\omega \), and \(\pi ^{-1}(H)\) (defined by L) would be the hyperplane of \({\mathcal {L}}^V\) exposing \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) as required.

There is one technical subtlety in this argument: the minimal face of \(\pi (P^{\top }P)\) in \(\pi ({\mathcal {L}}_+^V)\) may not be exposed. (Even if \({\mathcal {L}}_+^V\) is exposed, \(\pi ({\mathcal {L}}_+^V)\) may not be exposed.) The main technical observation of Gortler-Thurston [20] is that, if \(P^{\top } P\) is generic in a certain sense, the minimal face of \(\pi (P^{\top }P)\) is exposed.

Our proof follows the same technique, and a detailed description will be given in Sect. 3. In order to give a rigorous discussion, we review the following materials from [20].

A subset of a Euclidean space is semi-algebraic over \({\mathbb {Q}}\) if it is described by a finite number of algebraic equalities and inequalities with rational coefficients. Let S be a semi-algebraic set defined over \({\mathbb {Q}}\). A point \(x \in S\) is generic in S if there is no polynomial f with rational coefficients such that \(f(x)=0\) and \(f(y) \ne 0\) for some \(y \in S\). A point \(x \in S\) is locally generic in S if, for small enough \(\epsilon >0\), x is generic in \(S \cap B_\epsilon (x)\). The following proposition can be used to “transfer” the genericity of a point configuration p to \(P^{\top }P\).

Proposition 2.8

(Gortler-Thurston [20, Lemma 2.6]). Let S be a semi-algebraic set defined over \({\mathbb {Q}}\) and f be an algebraic map from S to a Euclidean space. If x is generic in S, then f(x) is generic in f(S).

Let C be a non-empty convex set in a Euclidean space. C is line-free if it contains no complete affine line. A point \(x \in C\) is k-extreme if \(\dim F_C(x) \le k\). We denote by \(\text {ext}_k(C)\) the set of k-extreme points of C.

We will use the following combination of [20, Proposition 4.14] and [20, Theorem 2], which is also explicit in the proof of the main theorem of [20].

Proposition 2.9

Let C be a closed line-free convex semi-algebraic set in \({\mathbb {R}}^m\), and \(\pi :{\mathbb {R}}^m\rightarrow {\mathbb {R}}^n\) be a projection, both defined over \({\mathbb {Q}}\). Suppose that x is locally generic in \(\text {ext}_k(C)\) for some k and \(\pi ^{-1}(\pi (x))\cap C\) is a singleton set. Then there exists a hyperplane H in \({\mathbb {R}}^n\) such that \(\pi ^{-1}(H)\) exposes \(F_C(x)\).

Characterizing the universal rigidity of tensegrities

In this section we prove an extension of Theorem 2.7 to tensegrities.

Theorem 3.1

Let \((G,\sigma ,p)\) be a generic d-dimensional tensegrity with \(n \ge d+2\) vertices. Then \((G,\sigma ,p)\) is universally rigid if and only if it has a strictly proper equilibrium stress \(\omega \) such that \(\mathrm{rank}\,L_{G,\omega }=n-d-1\) and \(L_{G,\omega }\succeq 0\).

The sufficiency follows from Theorems 2.5 and 2.6, so we focus on the necessity. When applying the Gortler-Thurston proof to tensegrities, we need to further ensure that \(\omega \) is proper, i.e., that the sign condition (4) holds. In the above proof sketch, this requires us to find an exposing hyperplane of \(F_{{\mathcal {L}}_+^V}(P^{\top }P)\) satisfying a sign condition on its non-zero entries. We show how to get around this with a simple trick.

Proof of Theorem 3.1

To see the necessity, suppose that \((G,\sigma ,p)\) is universally rigid. Translate the configuration so that the center of gravity is at the origin. Our idea is to introduce a slack variable for each constraint of (P). For this, we consider the ambient space \({{{\mathcal {K}}}}\) and convex cone \({{{\mathcal {K}}}}_+\) defined by

$$\begin{aligned} {\mathcal {K}}:={{{\mathcal {L}}}}^V\times {\mathbb {R}}^{E_\pm } \times \{0\}^{E_0} \text { and } {\mathcal {K}}_+:={{{\mathcal {L}}}}^V_+\times {\mathbb {R}}^{E_\pm }_{\ge 0} \times \{0\}^{E_0}, \end{aligned}$$

respectively, where \(E_{\pm }=E_+ \cup E_-\). In the following discussion, an element in \({{{\mathcal {K}}}}\) is often denoted by a pair (X, s) with \(X \in {{{\mathcal {L}}}}^V\) and \(s \in {\mathbb {R}}^{E_\pm } \times \{0\}^{E_0}\). \(\langle \cdot ,\cdot \rangle \) denotes the Euclidean inner product in \({{{\mathcal {K}}}}\). Consider the following SDP over \({\mathcal {K}}\):

$$\begin{aligned} \begin{array}{llll} \text {(P')} &{} \text {max.} &{} 0 \\ &{} \text {s.t.} &{} \langle (X,s), \left( F_{ij}, \sigma (ij) {{\varvec{e}}}_{ij}\right) \rangle = \Vert p_i-p_j\Vert ^2 &{}\left( ij \in E(G)\right) \\ &{} &{} (X,s) \in {\mathcal {K}}_+, \end{array} \end{aligned}$$

where, for \(ij \in E(G)\), \({{\varvec{e}}}_{ij}\) denotes the unit vector in \({\mathbb {R}}^{E(G)}\) whose ij-th entry is one. Observe that X is feasible in (P) if and only if (X, s) is feasible in (P') for some \(s\in {\mathbb {R}}_{\ge 0}^{E_\pm } \times \{0\}^{E_0}\). As \((G,\sigma ,p)\) is universally rigid, Proposition 2.1 implies that \((P^{\top }P, 0)\in {\mathcal {K}}\) is the unique feasible solution of (P').

We shall apply Proposition 2.9 to this setting. To do so, we need to prove the local genericity of \((P^{\top }P, 0)\).

Claim 3.2

Let \(k=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \). Then \((P^\top P,0)\in {\mathcal {K}}\) is locally generic in \(\text {ext}_k({\mathcal {K}}_+)\).

Proof

The map \(f: {\mathcal {C}}(V) \rightarrow {\mathcal {L}}^V_+\); \(p \mapsto P^\top P\) is algebraic over \({\mathbb {Q}}\), and \(f({\mathcal {C}}_d(V))\) equals \({\mathcal {L}}_{+,d}:=\{L\in {\mathcal {L}}^V_+: \mathrm{rank}\,L=d\}\) by (1). Hence, by Proposition 2.8, \(P^\top P\) is generic in \(\bigcup _{i\le d} {\mathcal {L}}_{+,i}\).

From Proposition 2.4 for \(k=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \),

$$\begin{aligned} \text {ext}_k\left( {\mathcal {L}}_+^V\right) =\left\{ L\in {\mathcal {L}}^V_+:\left( {\begin{array}{c}\mathrm{rank}\,L+1\\ 2\end{array}}\right) \le k \right\} =\bigcup _{i\le d} {\mathcal {L}}_{+,i}. \end{aligned}$$
(7)

Hence, \(P^\top P\) is generic in \(\text {ext}_k({\mathcal {L}}_+^V)\). (7) also implies

$$\begin{aligned} \text {ext}_k\left( {\mathcal {K}}_+\right) = \left\{ (L,s) \in {\mathcal {K}}_+ : \left( {\begin{array}{c}\mathrm{rank}\,L+1\\ 2\end{array}}\right) + \Vert s\Vert _0 \le k \right\} , \end{aligned}$$
(8)

where \(\Vert s\Vert _0\) denotes the number of non-zero elements of s. By the lower semi-continuity of rank, there exists a neighborhood U of \(P^{\top }P\) in \({\mathcal {L}}^V\) in which the rank of any matrix is at least d. By (8) and \(k=\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) \), we have

$$\begin{aligned} \left( U \times {\mathbb {R}}_{\ge 0}^{E_\pm } \times \{0\}^{E_0} \right) \cap \text {ext}_k({\mathcal {K}}_+) = \left\{ (L,0) \in {\mathcal {K}}_+ : \mathrm{rank}\,L =d \right\} . \end{aligned}$$

Hence \((P^\top P, 0)\) is generic in \(\left( U \times {\mathbb {R}}_{\ge 0}^{E_\pm } \times \{0\}^{E_0} \right) \cap \text {ext}_k({\mathcal {K}}_+)\), meaning that \((P^\top P, 0)\) is locally generic in \(\text {ext}_k({\mathcal {K}}_+)\). \(\square \)

We are now in a position to complete the proof. We consider the subspace

$$\begin{aligned} {\mathcal {K}}(G):=\text {span} \left\{ (F_{ij},\sigma (ij){\varvec{e}}_{ij}) : ij\in E(G)\right\} \end{aligned}$$

of \({\mathcal {K}}\), and let \(\pi : {\mathcal {K}} \rightarrow {\mathcal {K}}(G)\) be the projection. Since \((P^\top P,0)\) is the unique feasible solution of (P'), \(\pi ^{-1}\left( \pi ( (P^\top P,0) )\right) \cap {\mathcal {K}}_+\) is a singleton set. Since \({\mathcal {K}}_+\) is a closed line-free convex set and \((P^\top P,0)\) is locally generic in \(\text {ext}_{\left( {\begin{array}{c}d+1\\ 2\end{array}}\right) }({\mathcal {K}}_+)\) by Claim 3.2, Proposition 2.9 can be applied. Hence, there exists a hyperplane \(H=\{(X,s)\in {\mathcal {K}}(G): \langle (X,s), (L,t)\rangle =0\}\) defined by \((L,t)\in {\mathcal {K}}(G)\) such that \(\pi ^{-1}(H)=\{(X,s)\in {\mathcal {K}}: \langle (X,s), (L,t)\rangle =0\}\) exposes \(F_{{\mathcal {K}}_+}((P^{\top }P, 0))\). Note that \(F_{{\mathcal {K}}_+}((P^{\top }P, 0))=F_{{\mathcal {L}}_+}(P^{\top }P)\times \{0\}\). This and Proposition 2.4 imply

$$\begin{aligned} L\succeq 0,\ \mathrm{rank}\,L=n-d-1,\ \langle P^\top P, L\rangle =0, \end{aligned}$$
(9)

and

$$\begin{aligned} t_{ij} > 0 \qquad (ij\in E_\pm ). \end{aligned}$$
(10)

By \((L,t)\in {\mathcal {K}}(G)\), \((L,t)=\sum _{ij\in E(G)} \omega _{ij} (F_{ij}, \sigma (ij){{\varvec{e}}}_{ij})\) for some \(\omega :E(G)\rightarrow {\mathbb {R}}\). Hence \(L=L_{G,\omega }\). By \(PL=0\) from (9), \(\omega \) is an equilibrium stress of \((G,\sigma ,p)\). Moreover, (10) implies that \(\sigma (ij)\omega _{ij}=t_{ij}>0\) for every \(ij\in E_\pm \), that is, \(\omega \) is strictly proper. Together with (9), we conclude that \(\omega \) satisfies the properties of the statement. \(\square \)

Extension to group-symmetric tensegrities

In this section, we extend Theorem 3.1 to tensegrities with finite point group symmetry. We use the following notation. Let \(\Gamma \) be a finite group with the identity element \(e_\Gamma \).

The set of \(n \times m\) real matrices is denoted as \({\mathbb {R}}^{n \times m}\). For a finite set X with \(|X|=m\), let \({\mathbb {R}}^{X \times X}\) be the set of \(m \times m\) real matrices whose rows and columns are indexed by the elements of X. For \(A\in {\mathbb {R}}^{n\times m}\), the (i, j)-th entry is denoted by A[i, j]. The identity matrix of size n is denoted as \(I_n\). The general linear group and the orthogonal group of \({\mathbb {R}}^n\) are denoted as \(GL_n({\mathbb {R}})\) and \(O({\mathbb {R}}^n)\), respectively. For two matrices \(A \in {\mathbb {R}}^{k \times l}\) and \(B \in {\mathbb {R}}^{k' \times l'}\), their matrix direct sum \(A \oplus B \in {\mathbb {R}}^{(k+k')\times (l+l')}\) and their matrix tensor product \(A \otimes B \in {\mathbb {R}}^{kk' \times ll'}\) are defined by

$$\begin{aligned} A \oplus B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \text { and } A \otimes B = \begin{pmatrix} A[1,1]B & \cdots & A[1,l]B \\ \vdots & \ddots & \vdots \\ A[k,1]B & \cdots & A[k,l]B \end{pmatrix}, \end{aligned}$$

respectively. For subsets of matrices \({\mathcal {F}}_1 \subseteq {\mathbb {R}}^{k \times l}\) and \({\mathcal {F}}_2 \subseteq {\mathbb {R}}^{k' \times l'}\), \({\mathcal {F}}_1 \oplus {\mathcal {F}}_2 \subseteq {\mathbb {R}}^{(k+k') \times (l+l')}\) and \(I_n \otimes {\mathcal {F}}_1 \subseteq {\mathbb {R}}^{nk \times nl}\) are defined by

$$\begin{aligned} {\mathcal {F}}_1 \oplus {\mathcal {F}}_2 = \left\{ A\oplus B : A \in {\mathcal {F}}_1, B \in {\mathcal {F}}_2 \right\} \text { and } I_n \otimes {\mathcal {F}}_1 = \{ I_n \otimes A : A \in {\mathcal {F}}_1 \}, \end{aligned}$$

respectively.
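As an illustrative aside (not part of the formal development), the direct sum and tensor product above can be checked numerically. In the following Python sketch, the helper `direct_sum` is our own hypothetical name; the tensor product is numpy's built-in Kronecker product:

```python
import numpy as np

def direct_sum(A, B):
    """Matrix direct sum A ⊕ B: A and B placed as diagonal blocks."""
    k, l = A.shape
    kp, lp = B.shape
    out = np.zeros((k + kp, l + lp))
    out[:k, :l] = A
    out[k:, l:] = B
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0]])
S = direct_sum(A, B)   # 3x3 matrix with blocks A and B on the diagonal
T = np.kron(A, B)      # tensor (Kronecker) product; here simply 5A
```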

Main result

We define basic notions regarding group-symmetric tensegrities and then state our main theorem.

Let \({\hat{V}}\) be a finite set and \(\Gamma \) be a finite group. A \(\Gamma \)-gain graph on \({\hat{V}}\) is a directed graph \(({\hat{V}}, {\hat{E}})\) in which each edge is labeled by an element in \(\Gamma \). An edge from a vertex u to a vertex v with label \(\gamma \in \Gamma \) is denoted by a triple \((u,v,\gamma )\), and we will identify \((u,v,\gamma )\) with \((v,u,\gamma ^{-1})\).

More rigorously, a \(\Gamma \)-gain graph is defined as a pair \(({\hat{V}}, {\hat{E}})\) of a finite set \({\hat{V}}\) and a subset \({\hat{E}}\) of \(({\hat{V}}\times {\hat{V}}\times \Gamma )/\sim \), where \(\sim \) is an equivalence relation on \({\hat{V}}\times {\hat{V}}\times \Gamma \) defined by

$$\begin{aligned} (u,v,\gamma ) \sim (u',v',\gamma ') \Longleftrightarrow (u',v',\gamma ') = (u,v,\gamma ) \text { or } (u',v',\gamma ')=(v,u,\gamma ^{-1}). \end{aligned}$$

The lift of a \(\Gamma \)-gain graph \({\hat{G}}=({\hat{V}},{\hat{E}})\) is an undirected graph \(G=(V,E)\) on \(V:=\Gamma \times {\hat{V}}\) such that \(\{(\alpha ,u),(\beta ,v)\}\) is an edge of G if and only if \((u,v,\alpha ^{-1}\beta ) \in {\hat{E}}.\) For simplicity, we often denote an edge \(\{(\alpha ,u),(\beta ,v)\}\) of the lift by \((\alpha ,u)(\beta ,v)\).

An undirected graph \(G=(V,E)\) is said to be a \(\Gamma \)-symmetric graph if it is the lift of some \(\Gamma \)-gain graph \({\hat{G}}=({\hat{V}},{\hat{E}})\). For example, a Cayley graph is a group-symmetric graph with \(|{\hat{V}}|=1\). Figure 2 gives an example of a \({\mathbb {Z}}_2\)-symmetric graph. For \(v \in {\hat{V}}\), a subset of vertices \(\Gamma \times \{v\}\subseteq V\) is called a vertex orbit. For \((u,v,\gamma ) \in {\hat{E}}\), a subset of edges \(\{ (\alpha ,u)(\alpha \gamma ,v) : \alpha \in \Gamma \} \subseteq E\) is called an edge orbit. A \(\Gamma \)-symmetric graph can be the lift of more than one \(\Gamma \)-gain graph \({\hat{G}}\) because the representative vertices can be picked arbitrarily from each vertex orbit. In the subsequent discussion, for a \(\Gamma \)-symmetric graph G, it would be convenient to fix representative vertices from each vertex orbit. The resulting \(\Gamma \)-gain graph \({\hat{G}}\) is called the quotient of G and is denoted by \(G/\Gamma \).

Fig. 2

An example of a \({\mathbb {Z}}_2\)-gain graph with \({\mathbb {Z}}_2=\{-1,+1\}\) and its lift. (Recall that, in a \(\Gamma \)-gain graph, \((u,v,\gamma )\sim (v,u,\gamma ^{-1})\). In the left figure, we only draw an edge from each equivalence class with respect to \(\sim \).)
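To make the lift construction concrete, here is a hedged Python sketch; the helper `lift` and the encoding of \(\Gamma \) as a list with multiplication and inverse maps are our own illustrative choices, applied to a small \({\mathbb {Z}}_2\) example with \({\hat{V}}=\{u,v\}\) and gain edges \((u,v,+1)\) and \((u,u,-1)\):

```python
import itertools

def lift(group, mul, inv, Vhat, Ehat):
    """Lift of a Γ-gain graph: vertex set Γ × V̂, and (α,u)(β,v) is an
    edge iff (u, v, α⁻¹β) is a gain edge, where gain edges are closed
    under the identification (u, v, γ) ~ (v, u, γ⁻¹)."""
    gains = set(Ehat) | {(v, u, inv(g)) for (u, v, g) in Ehat}
    V = [(a, u) for a in group for u in Vhat]
    E = set()
    for (a, u), (b, v) in itertools.combinations(V, 2):
        if (u, v, mul(inv(a), b)) in gains:
            E.add(frozenset([(a, u), (b, v)]))
    return V, E

# Z2 = {+1, -1} under multiplication; gain edges (u, v, +1) and (u, u, -1)
V, E = lift([1, -1], lambda x, y: x * y, lambda x: x,
            ["u", "v"], [("u", "v", 1), ("u", "u", -1)])
```

The resulting lift has four vertices and three edges: the gain edge \((u,v,+1)\) lifts to two edges (one per group element), while \((u,u,-1)\) lifts to a single edge joining the two copies of u.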

Next we define a group-symmetric tensegrity. A group homomorphism \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) is called a point group. A d-dimensional point configuration \(p:V(G) \rightarrow {\mathbb {R}}^d\) is compatible with a point group \(\theta \) if the following relations are satisfied:

$$\begin{aligned} \theta (\gamma ) p_{(\alpha ,v)} = p_{(\gamma \alpha ,v)} \qquad (\gamma \in \Gamma , (\alpha ,v) \in V(G) ). \end{aligned}$$

Let \({\mathcal {C}}_\theta (V(G))\) (or \({{{\mathcal {C}}}}_\theta (V)\)) be the set of all d-dimensional point configurations \(p:V(G) \rightarrow {\mathbb {R}}^d\) such that \(\sum _{i \in V(G)}p_i=0\) and p is compatible with \(\theta \). A d-dimensional tensegrity \((G,\sigma ,p)\) is a \(\theta \)-symmetric tensegrity if \((V,E_0)\), \((V,E_+)\), \((V,E_-)\) are \(\Gamma \)-symmetric graphs and p is compatible with \(\theta \).

With this definition, our goal is to extend Theorem 3.1 to \(\theta \)-symmetric tensegrities. Note that a priori a group-symmetric tensegrity is not generic, and Theorem 3.1 cannot be applied. In order to extend Theorem 3.1, we need to introduce genericity modulo symmetry, which is commonly used in the context of infinitesimal rigidity [22, 26]. Let \({\mathbb {Q}}_{\theta ,\Gamma }\) be the finite extension field of \({\mathbb {Q}}\) generated by the entries of \(\theta (\gamma )\) for all \(\gamma \in \Gamma \) and those of (representative) irreducible representations of \(\Gamma \) (see Subsection 4.4 and Subsection 5.3 for the formal definition). A \(\theta \)-symmetric tensegrity \((G,\sigma ,p)\) is generic modulo symmetry if a translate of p is generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \({{{\mathcal {C}}}}_\theta (V)\). In other words, we choose a representative vertex from each vertex orbit of G, and \((G,\sigma ,p)\) is said to be generic modulo symmetry if the set of coordinates of the points of these representative vertices is algebraically independent over \({\mathbb {Q}}_{\theta ,\Gamma }\). Note that, in the latter definition, as \({\mathbb {Q}}_{\theta ,\Gamma }\) contains the entries of \(\theta \), genericity modulo symmetry is independent of the choice of representative vertices.

Now we are in a position to state our main result, an extension of Theorem 3.1 to group-symmetric tensegrities.

Theorem 4.1

Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with n vertices which is \(\theta \)-symmetric and is generic modulo symmetry. Suppose also that \(p({\overline{N}}_G(i))\) affinely spans \({\mathbb {R}}^d\) for all \(i \in V(G)\). Then, \((G,\sigma ,p)\) is universally rigid if and only if it has a strictly proper equilibrium stress \(\omega \) satisfying \(L_{G,\omega } \succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\).

The sufficiency of Theorem 4.1 immediately follows from Theorem 2.5 and Theorem 2.6. The necessity part of Theorem 4.1 holds even without assuming the neighbor-general position of p, as follows.

Theorem 4.2

Let \((G,\sigma ,p)\) be a d-dimensional tensegrity with \(n\ge d+2\) vertices which is \(\theta \)-symmetric and is generic modulo symmetry. Suppose also that p affinely spans \({\mathbb {R}}^d\). If \((G,\sigma ,p)\) is universally rigid, then it has a strictly proper equilibrium stress \(\omega \) satisfying \(L_{G,\omega } \succeq 0\) and \(\mathrm{rank}\,L_{G,\omega }=n-d-1\).

The subsequent discussion is devoted to the proof of Theorem 4.2. In order to simplify the description, we shall focus on bar-joint frameworks \((G,p)\) rather than tensegrities \((G,\sigma ,p)\). For general tensegrities, the proof follows by combining the argument for the bar-joint case with that of Theorem 3.1.

A high-level idea of the proof is as follows. Proposition 2.1 states that, if \((G,p)\) is universally rigid, then the Gram matrix \(P^\top P\) of p is the unique feasible solution of the SDP problem (P). To use Gortler-Thurston’s argument (Proposition 2.9), the unique solution must be generic in a certain sense. When \((G,p)\) is group-symmetric, however, genericity in this sense does not hold in the cone of positive semidefinite Laplacian matrices \({\mathcal {L}}^V_+\). To make the unique solution generic, we restrict the ambient space to \(\Gamma \)-symmetric Laplacian matrices. To investigate the facial structure of the restricted cone, we block-diagonalize the ambient matrix space by using the structure theorem (Proposition 4.4). If the original point configuration p is generic modulo symmetry, the image of \(P^\top P\) under this transformation is generic. Therefore Gortler-Thurston’s argument can be applied.

In order to explain how to implement this idea, we shall first give the proof of Theorem 4.2 for the case when every real irreducible representation of \(\Gamma \) is absolutely irreducible (see Subsection 4.3 for the definition). The proof for the general case follows the same idea but is technically more involved. We will give it in Sect. 5.

Restriction of (P) to the space of group-symmetric Laplacians

In this subsection, we define the space of \(\Gamma \)-symmetric Laplacian matrices as a subspace of Laplacian matrices.

Let V be a finite set, and let \(K_V\) be the complete graph on V. If we can decompose V into \(V=\Gamma \times (V/\Gamma )\) for some finite set \(V/\Gamma \), then \(K_V\) is \(\Gamma \)-symmetric, where its quotient \(K_V/\Gamma \) consists of the vertex set \(V/\Gamma \) and the edge set \(((V/\Gamma ) \times (V/\Gamma ) \times \Gamma \setminus \{ (v,v,e_\Gamma ) : v \in V/\Gamma \} )/\sim \). (Note that, since \(K_V\) has no loop, its quotient has no loop with the identity label.) The quotient \(K_V/\Gamma \) is called the complete \(\Gamma \)-gain graph. In the subsequent discussion, each \(\Gamma \)-symmetric graph \(G=(V,E)\) and its quotient \(G/\Gamma \) are assumed to be subgraphs of \(K_V\) and \(K_V/\Gamma \), respectively.
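As an illustration of the quotient construction, the following sketch (with a hypothetical helper `quotient_edges`) enumerates the edge set of the complete \(\Gamma \)-gain graph for \(\Gamma ={\mathbb {Z}}_2\) and \(|V/\Gamma |=2\):

```python
from itertools import product

def quotient_edges(Vhat, group, inv, e_id):
    """Edge set of the complete Γ-gain graph: all triples (u, v, γ) except
    loops with the identity label, modulo (u, v, γ) ~ (v, u, γ⁻¹)."""
    triples = {(u, v, g) for u, v, g in product(Vhat, Vhat, group)
               if not (u == v and g == e_id)}
    return {frozenset([(u, v, g), (v, u, inv(g))]) for (u, v, g) in triples}

# Γ = Z2 = {0, 1} under addition mod 2 (identity 0, every element self-inverse)
E_hat = quotient_edges(["u", "v"], [0, 1], lambda g: g, 0)
# 8 triples, minus 2 identity loops, identified in pairs by ~ : 4 classes
```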

For a \(\Gamma \)-symmetric graph \(G=(V,E)\), an edge weight \(\omega :E(G) \rightarrow {\mathbb {R}}\) is \(\Gamma \)-symmetric if \(\omega \) is constant on each edge orbit. A Laplacian matrix is said to be \(\Gamma \)-symmetric if it is the Laplacian of \(K_V\) weighted by a \(\Gamma \)-symmetric edge weight \(\omega \). Equivalently, \(L \in {{{\mathcal {L}}}}^V\) is \(\Gamma \)-symmetric if and only if

$$\begin{aligned} L[(\alpha ,u),(\beta ,v)]=L[(\gamma \alpha ,u),(\gamma \beta ,v)] \end{aligned}$$
(11)

for any \((\alpha ,u), (\beta ,v) \in V\) and any \(\gamma \in \Gamma \). Let \(({{{\mathcal {L}}}}^V)^\Gamma \) be the set of all \(\Gamma \)-symmetric Laplacian matrices. Then \(({{{\mathcal {L}}}}^V)^\Gamma \) is a linear subspace of \({\mathcal {S}}^V\) given by

$$\begin{aligned} \left( {\mathcal {L}}^V\right) ^\Gamma = \mathrm{span} \left\{ F_{(u,v,\gamma )} : (u,v,\gamma ) \in E\left( K_V/\Gamma \right) \right\} \end{aligned}$$

where

$$\begin{aligned} F_{(u,v,\gamma )} = \sum _{\alpha \in \Gamma } F_{(\alpha ,u)(\alpha \gamma ,v)}. \end{aligned}$$

Let \(({{{\mathcal {L}}}}^V)^\Gamma _+\) be the set of positive semidefinite \(\Gamma \)-symmetric Laplacian matrices.
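The matrices \(F_{(u,v,\gamma )}\) and the symmetry condition (11) can be checked numerically. The following Python sketch is our own illustrative encoding, with \(\Gamma ={\mathbb {Z}}_2\) written additively; it builds a \(\Gamma \)-symmetric Laplacian as a weighted sum of edge-orbit Laplacians:

```python
import numpy as np

# Γ = Z2 = {0, 1} under addition mod 2; V̂ = {0, 1}; V = Γ × V̂
Gamma = [0, 1]
Vhat = [0, 1]
V = [(a, u) for a in Gamma for u in Vhat]
idx = {w: i for i, w in enumerate(V)}

def F_edge(i, j):
    """Edge Laplacian F_ij = (e_i - e_j)(e_i - e_j)^T on R^V."""
    e = np.zeros(len(V))
    e[idx[i]], e[idx[j]] = 1.0, -1.0
    return np.outer(e, e)

def F_orbit(u, v, g):
    """F_{(u,v,γ)} = Σ_{α∈Γ} F_{(α,u)(αγ,v)}: the edge-orbit Laplacian."""
    return sum(F_edge((a, u), ((a + g) % 2, v)) for a in Gamma)

# A Γ-symmetric Laplacian: any weighted sum of edge-orbit Laplacians
L = 2.0 * F_orbit(0, 1, 0) + 0.5 * F_orbit(0, 1, 1)
```

Shifting the summation index \(\alpha \) by \(\gamma \) permutes the terms of each orbit sum, which is exactly why L satisfies (11).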

Let \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) be a point group. For a given \(\theta \)-symmetric framework \((G,p)\), we consider the following SDP problem:

$$\begin{aligned} \begin{array}{llll} (\text {P}^\Gamma ) & \text {max.} & 0 \\ & \text {s.t.} & \langle X, F_{(u,v,\gamma )} \rangle = \langle P^\top P,F_{(u,v,\gamma )} \rangle & ((u,v,\gamma ) \in E(G/\Gamma )) \\ & & X \in \left( {\mathcal {L}}^V\right) ^\Gamma _+. \end{array} \end{aligned}$$

In (P\(^\Gamma \)), the cone is restricted to \(({\mathcal {L}}^V)^\Gamma _+\). We have the following.

Proposition 4.3

If a framework \((G,p)\) with \(p \in {{{\mathcal {C}}}}_\theta (V)\) is universally rigid and \(\theta \)-symmetric for some point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\), then (P\(^\Gamma \)) has the unique solution \(P^\top P\).

Proof

We first check that the Gram matrix \(P^\top P \in {\mathcal {L}}^V_+\) of p is \(\Gamma \)-symmetric by checking (11) as follows:

$$\begin{aligned} \left( P^\top P\right) \left[ (\gamma \alpha ,u),(\gamma \beta ,v)\right]&= {p_{(\gamma \alpha ,u)}}^\top p_{(\gamma \beta ,v)} = {p_{(\alpha ,u)}}^\top \theta (\gamma )^\top \theta (\gamma ) p_{(\beta ,v)} \\&= {p_{(\alpha ,u)}}^\top p_{(\beta ,v)} = (P^\top P)[(\alpha ,u),(\beta ,v)], \end{aligned}$$

where the second equation follows from \(\theta \)-symmetry and the third equation follows from \(\theta (\gamma ) \in O({\mathbb {R}}^d)\). Hence \(P^\top P \in ({\mathcal {L}}^V)^\Gamma _+\), and \(P^\top P\) is feasible.

To see the uniqueness, consider any \(X \in ({{{\mathcal {L}}}}^V)^\Gamma \). Then X is a linear combination of \(F_{(u,v,\gamma )}\) over \((u,v,\gamma ) \in E(K_V/\Gamma )\), while each \(F_{(u,v,\gamma )}\) is the sum of \(F_{(\alpha ,u)(\alpha \gamma ,v)}\) over \(\alpha \in \Gamma \). Hence we have

$$\begin{aligned} \langle X,F_{(u,v,\gamma )} \rangle = |\Gamma | \langle X,F_{(\alpha ,u)(\alpha \gamma ,v)} \rangle \end{aligned}$$
(12)

for any \((u,v,\gamma ) \in E(G/\Gamma )\) and \(\alpha \in \Gamma \).

Since (12) holds for \(P^\top P\) by \(P^\top P\in ({\mathcal {L}}^V)^\Gamma _+\), \(\langle X, F_{(u,v,\gamma )} \rangle = \langle P^\top P,F_{(u,v,\gamma )} \rangle \) for all \((u,v,\gamma ) \in E(G/\Gamma )\) if and only if \(\langle X, F_{(\alpha ,u)(\alpha \gamma ,v)} \rangle = \langle P^\top P,F_{(\alpha ,u)(\alpha \gamma ,v)} \rangle \) for all \((\alpha ,u)(\alpha \gamma ,v) \in E(G)\). In other words, \(X \in ({\mathcal {L}}^V)^\Gamma _+\) is feasible in (P\(^\Gamma \)) if and only if X is feasible in (P) (for (Gp)). Hence the uniqueness of the feasible solution of (P\(^\Gamma \)) follows by applying Proposition 2.1 to (P). \(\square \)
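The invariance computation in the proof above can be verified numerically on a small example. In the sketch below, \(\theta (1)\) is a hypothetical reflection in \({\mathbb {R}}^2\) and the representative points are chosen arbitrarily (the configuration is not centered, which does not affect condition (11)):

```python
import numpy as np

# Γ = Z2 = {0, 1}; θ(1) is a hypothetical reflection through the y-axis.
theta = {0: np.eye(2), 1: np.diag([-1.0, 1.0])}

# V̂ = {0, 1}; pick arbitrary representative points, then generate the
# whole configuration via compatibility: p_{(γ,v)} = θ(γ) p_{(0,v)}.
reps = {0: np.array([1.0, 2.0]), 1: np.array([0.5, -1.0])}
V = [(a, u) for a in (0, 1) for u in (0, 1)]
idx = {w: i for i, w in enumerate(V)}
p = {(a, u): theta[a] @ reps[u] for (a, u) in V}

P = np.column_stack([p[w] for w in V])
G = P.T @ P   # Gram matrix; it satisfies the symmetry condition (11)
```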

Block-diagonalization of (P\(^\Gamma \))

To investigate the facial structure of \(({\mathcal {L}}^V)^\Gamma _+\), we use the structure theorem (Proposition 4.4) given below.

Let W be a finite dimensional real or complex vector space. Let GL(W) be the general linear group of W. A representation of \(\Gamma \) is a group homomorphism \(\rho :\Gamma \rightarrow GL(W)\). \(d_\rho :=\dim W\) is called the degree of \(\rho \). Two representations \(\rho :\Gamma \rightarrow GL(W)\) and \(\rho ':\Gamma \rightarrow GL(W')\) are equivalent if there is an isomorphism \(T:W \rightarrow W'\) satisfying \(T \circ \rho (\gamma ) =\rho '(\gamma ) \circ T\) for all \(\gamma \in \Gamma \). For a representation \(\rho :\Gamma \rightarrow GL(W)\), a linear subspace \(W'\) of W is \(\Gamma \)-invariant if \(\rho (\gamma ) (W') \subseteq W'\) for all \(\gamma \in \Gamma \). A representation \(\rho :\Gamma \rightarrow GL(W)\) is irreducible if the only \(\Gamma \)-invariant subspaces are \(\{0\}\) and W. The simplest example is the trivial representation, where W is one-dimensional and every group element acts as the identity. The trivial representation is denoted by \(\mathrm{tri}\).

A real irreducible representation \(\rho :\Gamma \rightarrow GL(W)\) is called absolutely irreducible if \(\rho \) is also irreducible over \({\mathbb {C}}\). We also call \(\Gamma \) absolutely irreducible if every real irreducible representation of \(\Gamma \) is absolutely irreducible. This is equivalent to saying that every complex irreducible representation is equivalent to a real representation. For example, dihedral groups and symmetric groups are known to be absolutely irreducible while cyclic groups of order more than two are not; see, e.g., [27].

In this section we shall focus on the case where \(\Gamma \) is absolutely irreducible. This special case is sufficiently general to explain our technical idea, and moreover it covers many finite groups that appear in applications. Hence, throughout this section, a real representation is simply called a representation.

Let \({\tilde{\Gamma }}\) be the set of all equivalence classes of irreducible representations of \(\Gamma \). By fixing a representative of each class, each element of \({\tilde{\Gamma }}\) is regarded as a representation. Since every representation is equivalent to an orthogonal representation, we further assume that each \(\rho \in {\tilde{\Gamma }}\) is an orthogonal matrix representation \(\rho :\Gamma \rightarrow O({\mathbb {R}}^{d_\rho })\).

The following is a basic relation on the degrees of complex irreducible representations, and it is also valid for real irreducible representations since \(\Gamma \) is absolutely irreducible:

$$\begin{aligned} \sum _{\rho \in {\tilde{\Gamma }}} {d_\rho }^2 = |\Gamma |. \end{aligned}$$
(13)
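For instance, for \(\Gamma =S_3\), whose irreducible representations have degrees 1, 1, 2 (all absolutely irreducible), (13) reads \(1+1+4=6=|S_3|\); a trivial check in Python:

```python
# Degrees of the irreducible representations of S3: trivial (1), sign (1),
# and the standard representation (2); all are absolutely irreducible.
degrees = [1, 1, 2]
order_S3 = 6
total = sum(d * d for d in degrees)   # left-hand side of (13)
```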

We can now state the structure theorem.

Proposition 4.4

Let \(\Gamma \) be a finite group, \(V/\Gamma \) be a finite set with \({\hat{n}}\) elements, and \(V=\Gamma \times V/\Gamma \). Suppose that \(\Gamma \) is absolutely irreducible. Then there exists an orthogonal transformation \(\Psi :{\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) satisfying

$$\begin{aligned} \Psi \left( \left( {\mathcal {L}}^V\right) ^\Gamma \right) = {\mathcal {L}}^{V/\Gamma } \oplus \ \bigoplus _{\rho \in {\tilde{\Gamma }} \setminus \{\mathrm{tri}\}} \left( I_{d_\rho }\otimes {\mathcal {S}}^{d_\rho {\hat{n}}} \right) . \end{aligned}$$

The proof is given in Subsection 4.5.

Following Proposition 4.4, for each irreducible representation \(\rho \in {\tilde{\Gamma }}\), we define a linear space \({\mathcal {K}}_\rho \) and a convex cone \({\mathcal {K}}_{+,\rho }\) by

$$\begin{aligned} \begin{array}{ll} {\mathcal {K}}_\rho := {\left\{ \begin{array}{ll} {\mathcal {L}}^{V/\Gamma } & (\rho = \mathrm{tri}) \\ {\mathcal {S}}^{d_\rho {\hat{n}}} & (\rho \in {\tilde{\Gamma }} \setminus \{\mathrm{tri}\}) \end{array}\right. } \text { and } {\mathcal {K}}_{+,\rho } := {\left\{ \begin{array}{ll} {\mathcal {L}}^{V/\Gamma }_+ & (\rho = \mathrm{tri}) \\ {\mathcal {S}}^{d_\rho {\hat{n}}}_+ & (\rho \in {\tilde{\Gamma }} \setminus \{\mathrm{tri}\}) \end{array}\right. }, \end{array} \end{aligned}$$

respectively. The ambient space \({\mathcal {K}}_\Gamma \) and the convex cone \({\mathcal {K}}_{+,\Gamma }\) are defined by

$$\begin{aligned} {\mathcal {K}}_\Gamma = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes {\mathcal {K}}_\rho \text { and } {\mathcal {K}}_{+,\Gamma } = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes {\mathcal {K}}_{+,\rho }, \end{aligned}$$

respectively. Note that \({\mathcal {K}}_\Gamma = \Psi (({\mathcal {L}}^V)^\Gamma )\) by Proposition 4.4, and \({\mathcal {K}}_{+,\Gamma } = \Psi (({\mathcal {L}}^V)^\Gamma _+)\) also holds since an orthogonal transformation preserves positive semidefiniteness.

For each \(X \in {{{\mathcal {K}}}}_\Gamma \) and \(\rho \in {\tilde{\Gamma }}\), the projected image of X to \({{{\mathcal {K}}}}_\rho \) is denoted by \(X_\rho \), i.e., \(X= \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes X_\rho \).

Using the orthogonal transformation \(\Psi \) in Proposition 4.4, the SDP problem (P\(^\Gamma \)) is transformed into the following block-diagonal form:

$$\begin{aligned} \begin{array}{llll} (\text {P}^\Psi ) & \text {max.} & 0 \\ & \text {s.t.} & \langle X, \Psi (F_{(u,v,\gamma )}) \rangle = \langle \Psi (P^\top P), \Psi (F_{(u,v,\gamma )}) \rangle & ((u,v,\gamma ) \in E(G/\Gamma )) \\ & & X \in {{{\mathcal {K}}}}_{+,\Gamma }. \end{array} \end{aligned}$$

Since \(\Psi \) is an orthogonal transformation, Proposition 4.3 immediately gives the following.

Proposition 4.5

If a framework \((G,p)\) with \(p \in {{{\mathcal {C}}}}_\theta (V)\) is universally rigid and \(\theta \)-symmetric for some point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\), then (P\(^\Psi \)) has the unique solution \(\Psi (P^\top P)\).

Proof of Theorem 4.2 for absolutely irreducible \(\Gamma \)

Let \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) be a finite point group. We take an orthogonal matrix \(Z_\Gamma \in O({\mathbb {R}}^V)\) representing \(\Psi \), i.e., \(\Psi (X)=Z_\Gamma ^\top X Z_\Gamma \) for any \(X\in {\mathbb {R}}^{V\times V}\).

We define the finite extension field \({\mathbb {Q}}_{\theta ,\Gamma }\) of \({\mathbb {Q}}\) generated by the entries of \(\theta (\gamma )\) for all \(\gamma \in \Gamma \) and the entries of \(Z_\Gamma \). As in [20, Section 5.5], genericity and local genericity over \({\mathbb {Q}}_{\theta ,\Gamma }\) are defined similarly, and Proposition 2.8 and Proposition 2.9 can be generalized by changing the coefficient field to \({\mathbb {Q}}_{\theta ,\Gamma }\).

Our strategy is to apply Proposition 2.9 over \({{{\mathcal {K}}}}_{+,\Gamma }\), and to do that we need to show that \(\Psi (P^{\top }P)\) is generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in the set of “low rank” matrices in \({{{\mathcal {K}}}}_{+,\Gamma }\). This can be done by showing that any “low rank” matrix in \({{{\mathcal {K}}}}_{+,\Gamma }\) is the \(\Psi \)-image of the Gram matrix of some \(\theta \)-symmetric point configuration in \({{{\mathcal {C}}}}_\theta (V)\). Since p is generic in \({{{\mathcal {C}}}}_\theta (V)\), this would imply that \(\Psi (P^{\top }P)\) is generic in the set of “low rank” matrices in \({{{\mathcal {K}}}}_{+,\Gamma }\).

A precise statement we want to show is given as follows.

Proposition 4.6

Let \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) be a point group, and let \(m_\rho \) be the multiplicity of an irreducible representation \(\rho \in {\tilde{\Gamma }}\) in \(\theta \). Also, define a map f by

$$\begin{aligned} f:{\mathcal {C}}_\theta (V)\ni q \mapsto \Psi (Q^\top Q)\in {\mathcal {K}}_{+,\Gamma }, \end{aligned}$$

where \(Q^\top Q\) is the Gram matrix of q. If \(\Gamma \) is absolutely irreducible, then

$$\begin{aligned} f\left( {\mathcal {C}}_\theta (V)\right) = \left\{ X \in {\mathcal {K}}_{+,\Gamma } : \mathrm{rank}\,X_\rho \le m_\rho ~ \left( \rho \in {\tilde{\Gamma }}\right) \right\} . \end{aligned}$$

The proof of Proposition 4.6 is given in Subsection 4.5. Using Proposition 4.4 and Proposition 4.6, we can now give the proof of Theorem 4.2 for absolutely irreducible \(\Gamma \).

Proof of Theorem 4.2

for absolutely irreducible \(\Gamma .\) Let \((G,p)\) be a d-dimensional \(\theta \)-symmetric framework with a point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) which is generic modulo symmetry and is universally rigid. Suppose also that p affinely spans \({\mathbb {R}}^d\). Let \(G/\Gamma \) be the quotient of G, and let \({\hat{n}}=|V(G/\Gamma )|\), which is the number of vertex orbits in G. (The notation \({\hat{n}}\) is consistent with that in Proposition 4.4 by setting \(V/\Gamma =V(G/\Gamma )\) and \(V=V(G)\).)

By a translation, we may suppose that the center of gravity is at the origin, i.e., \(p \in {{{\mathcal {C}}}}_\theta (V)\). As the original configuration affinely spans \({\mathbb {R}}^d\), \(\mathrm{rank}\,P =d\). Let \(m_\rho \) be the multiplicity of an irreducible representation \(\rho \in {\tilde{\Gamma }}\) in \(\theta \). Then,

$$\begin{aligned} \sum _{\rho \in {\tilde{\Gamma }}} d_\rho m_\rho =d. \end{aligned}$$
(14)

Claim 4.7

Let \(k=\sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) \). Then \(\Psi (P^\top P) \in {{{\mathcal {K}}}}_\Gamma \) is locally generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\).

Proof

Since \((G,p)\) is generic modulo symmetry, p is generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \({{{\mathcal {C}}}}_\theta (V)\). The map \(f:q \mapsto \Psi (Q^\top Q)\) is algebraic over \({\mathbb {Q}}_{\theta ,\Gamma }\). By Proposition 2.8 over \({\mathbb {Q}}_{\theta ,\Gamma }\), \(\Psi (P^\top P)\) is generic in \(f({\mathcal {C}}_\theta (V))\) over \({\mathbb {Q}}_{\theta ,\Gamma }\). Also, \(\mathrm{rank}\,\Psi (P^\top P) = \mathrm{rank}\,P^\top P =\mathrm{rank}\,P=d\). Hence, by Proposition 4.6 and (14), we obtain \(\mathrm{rank}\,\Psi (P^\top P)_\rho =m_\rho \) for \(\rho \in {\tilde{\Gamma }}\).

By the definition of \({\mathcal {K}}_{+,\Gamma }\), Proposition 2.3 and Proposition 2.4,

$$\begin{aligned} \text {ext}_k\left( {\mathcal {K}}_{+,\Gamma }\right) = \left\{ X \in {\mathcal {K}}_{+,\Gamma } : \sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}\mathrm{rank}\,X_\rho +1\\ 2\end{array}}\right) \le k \right\} . \end{aligned}$$
(15)

For each \(\rho \in {\tilde{\Gamma }}\), by the lower semi-continuity of rank, there exists a neighborhood \(U_\rho \) of \(\Psi (P^\top P)_\rho \) in \({{{\mathcal {K}}}}_\rho \) in which the rank of any matrix is at least \(\mathrm{rank}\,\Psi (P^\top P)_\rho =m_\rho \). \(U=\oplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes U_\rho \) is a neighborhood of \(\Psi (P^\top P)\) in \({{{\mathcal {K}}}}_\Gamma \). Hence, by (15) and \(k=\sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) \), we have

$$\begin{aligned} U \cap \text {ext}_k\left( {\mathcal {K}}_{+,\Gamma }\right) \subseteq \left\{ X \in {\mathcal {K}}_{+,\Gamma } : \mathrm{rank}\,X_\rho = m_\rho ~ (\rho \in {\tilde{\Gamma }})\right\} , \end{aligned}$$

which further implies, by Proposition 4.6,

$$\begin{aligned} U \cap \text {ext}_k\left( {\mathcal {K}}_{+,\Gamma }\right) \subseteq \left\{ X \in {\mathcal {K}}_{+,\Gamma } : \mathrm{rank}\,X_\rho = m_\rho ~ (\rho \in {\tilde{\Gamma }}) \right\} \subseteq f\left( {{{\mathcal {C}}}}_{\theta }(V)\right) . \end{aligned}$$

Hence, since \(\Psi (P^\top P)\) is generic in \(f({{{\mathcal {C}}}}_{\theta }(V))\), \(\Psi (P^\top P)\) is locally generic in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\). \(\square \)

Consider the subspace

$$\begin{aligned} {\mathcal {K}}_\Gamma (G)=\text {span} \left\{ \Psi \left( F_{(u,v,\gamma )}\right) : (u,v,\gamma ) \in E(G/\Gamma )\right\} \end{aligned}$$

of \({\mathcal {K}}_\Gamma \) and the projection \(\pi :{\mathcal {K}}_\Gamma \rightarrow {\mathcal {K}}_\Gamma (G)\). Proposition 4.5 states that \(\Psi (P^\top P)\) is the unique solution of (P\(^\Psi \)), and hence \(\pi ^{-1}(\pi (\Psi (P^\top P))) \cap {\mathcal {K}}_{+,\Gamma }\) is a singleton. Since \({\mathcal {K}}_{+,\Gamma }\) is a closed line-free convex set and \(\Psi (P^\top P)\) is locally generic in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\) with \(k=\sum _{\rho \in {\tilde{\Gamma }}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) \) by Claim 4.7, Proposition 2.9 can be applied. Hence there exists a hyperplane \(H=\{X \in {\mathcal {K}}_\Gamma (G) : \langle X,L \rangle =0\}\) in \({\mathcal {K}}_\Gamma (G)\) defined by \(L \in {\mathcal {K}}_\Gamma (G)\) such that \(\pi ^{-1}(H)=\{X \in {\mathcal {K}}_\Gamma : \langle X,L \rangle =0\}\) exposes \(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P))\).

Following the decomposition of the ambient space \({{{\mathcal {K}}}}_{\Gamma }\), we have the decomposition of the face \(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P))\) as follows:

$$\begin{aligned} F_{{\mathcal {K}}_{+,\Gamma }}\left( \Psi \left( P^\top P\right) \right) = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes F_{{\mathcal {K}}_{+,\rho }} \left( \Psi \left( P^\top P\right) _\rho \right) . \end{aligned}$$

In each component \({{{\mathcal {K}}}}_{\rho }\) of \({{{\mathcal {K}}}}_{\Gamma }\), \(\pi ^{-1}(H)\) restricted to \({{{\mathcal {K}}}}_{\rho }\) is a hyperplane of \({{{\mathcal {K}}}}_{\rho }\) given by \(\{X \in {\mathcal {K}}_\rho : \langle X,L_\rho \rangle =0\}\) and it exposes \(F_{{\mathcal {K}}_{+,\rho }} (\Psi (P^\top P)_\rho )\). (Recall that \(L_\rho \) denotes the projected image of L to \({{{\mathcal {K}}}}_\rho \).) Since \(\mathrm{rank}\,\Psi (P^\top P)_\rho =m_{\rho }\), Propositions 2.3 and 2.4 imply

$$\begin{aligned}&\mathrm{rank}\,L_\rho = {\left\{ \begin{array}{ll} {\hat{n}}-m_\mathrm{tri}-1 & (\text {if }\rho =\mathrm{tri}) \\ {\hat{n}}d_\rho -m_\rho & (\text {if } \rho \in {\tilde{\Gamma }} \setminus \{\mathrm{tri}\}) \end{array}\right. } \end{aligned}$$
(16)
$$\begin{aligned}&L_\rho \succeq 0, ~ \langle \Psi \left( P^\top P\right) _\rho , L_\rho \rangle =0. \end{aligned}$$
(17)

for every \(\rho \in {\tilde{\Gamma }}\). By (13), (14), (16),

$$\begin{aligned} \mathrm{rank}\,L = {\hat{n}}-m_\mathrm{tri}-1 + \sum _{\rho \in {\tilde{\Gamma }} \setminus \{\mathrm{tri}\}} d_\rho \left( {\hat{n}}d_\rho -m_\rho \right) = {\hat{n}}|\Gamma |-d-1 = n-d-1. \end{aligned}$$
(18)

By \(L \in {\mathcal {K}}_\Gamma (G)\), \(L=\sum _{(u,v,\gamma ) \in E(G/\Gamma )} {\hat{\omega }}_{(u,v,\gamma )}\Psi (F_{(u,v,\gamma )})\) for some \({\hat{\omega }}: E(G/\Gamma ) \rightarrow {\mathbb {R}}\). Hence, by (17) and (18), \(\Psi ^{-1}(L) = \sum _{(u,v,\gamma ) \in E(G/\Gamma )} {\hat{\omega }}_{(u,v,\gamma )}F_{(u,v,\gamma )}\) is a \(\Gamma \)-symmetric weighted Laplacian of G satisfying

$$\begin{aligned} \Psi ^{-1}(L) \succeq 0, \quad \mathrm{rank}\,\Psi ^{-1}(L)=n-d-1, \quad \langle P^\top P, \Psi ^{-1}(L) \rangle =0. \end{aligned}$$

Therefore \(\Psi ^{-1}(L)\) satisfies the properties of the statement. \(\square \)
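The rank bookkeeping in (14), (16), and (18) can be sanity-checked on a hypothetical \({\mathbb {Z}}_2\) example with two degree-one irreducible representations (trivial and sign), each of multiplicity one in \(\theta \):

```python
# Hypothetical setting: Γ = Z2 with irreps tri and sign (both of degree 1),
# θ = tri ⊕ sign on R² (so m_tri = m_sign = 1), and n̂ = 5 vertex orbits.
n_hat = 5
group_order = 2
m_tri, m_sign = 1, 1
d = 1 * m_tri + 1 * m_sign            # eq. (14): Σ d_ρ m_ρ
rank_tri = n_hat - m_tri - 1          # eq. (16), trivial block
rank_sign = n_hat * 1 - m_sign        # eq. (16), nontrivial block
rank_L = rank_tri + 1 * rank_sign     # eq. (18), weighting blocks by d_ρ
n = n_hat * group_order
```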

Proofs of Proposition 4.4 and Proposition 4.6

We give a proof of Propositions 4.4 and  4.6. Let \(V=\Gamma \times V/\Gamma \) be a finite set, and denote \(n=|V|\) and \({\hat{n}}=|V/\Gamma |\) as in the statement of Proposition 4.4. In the subsequent discussion, we shall frequently look at vectors in \({\mathbb {R}}^{\Gamma }\otimes {\mathbb {R}}^{V/\Gamma }\) (which can be identified with \({\mathbb {R}}^V\)) and matrices in \({\mathbb {R}}^{\Gamma \times \Gamma }\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\). Let \(\{{\varvec{e}}_\gamma : \gamma \in \Gamma \}\) be the standard basis of \({\mathbb {R}}^{\Gamma }\) and \(\{{\varvec{e}}_v: v\in V/\Gamma \}\) be that of \({\mathbb {R}}^{V/\Gamma }\). Then \(\{E_{\alpha ,\beta }:={\varvec{e}}_\alpha {\varvec{e}}_\beta ^{\top }: \alpha ,\beta \in \Gamma \}\) and \(\{E_{u,v}:={\varvec{e}}_u{\varvec{e}}_v^\top : u,v\in V/\Gamma \}\) form a basis of \({\mathbb {R}}^{\Gamma \times \Gamma }\) and a basis of \({\mathbb {R}}^{V/\Gamma \times V/\Gamma }\), respectively.

With this notation, the right regular representation \(R:\Gamma \rightarrow {\mathbb {R}}^{\Gamma \times \Gamma }\) of \(\Gamma \) is defined by

$$\begin{aligned} R(\gamma ) = \sum _{\alpha \in \Gamma } E_{\alpha ,\alpha \gamma }. \end{aligned}$$
(19)
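Definition (19) can be made concrete in a few lines of Python. The sketch below is our own encoding of \(S_3\) as permutation tuples; it builds each \(R(\gamma )\) as a permutation matrix, and one can then check that R is a homomorphism:

```python
import numpy as np
from itertools import permutations

# Γ = S3, encoded as permutation tuples; composition (g·h)(i) = g(h(i)).
Gamma = list(permutations(range(3)))
mul = lambda g, h: tuple(g[h[i]] for i in range(3))
idx = {g: i for i, g in enumerate(Gamma)}

def R(g):
    """Right regular representation (19): R(γ)[α, αγ] = 1, else 0."""
    M = np.zeros((6, 6))
    for a in Gamma:
        M[idx[a], idx[mul(a, g)]] = 1.0
    return M
```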

A basic fact from representation theory is that, over the complex field, the right regular representation is (unitarily) equivalent to \(\bigoplus _{\rho } I_{d_\rho } \otimes \rho (\gamma )\), where the sum is taken over all non-equivalent complex irreducible representations. Since \(\Gamma \) is assumed to be absolutely irreducible, this in turn implies that there is an orthogonal matrix \(Z\in O({\mathbb {R}}^{\Gamma })\) such that

$$\begin{aligned} Z^\top R(\gamma ) Z = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_\rho } \otimes \rho (\gamma ) \quad (\gamma \in \Gamma ), \end{aligned}$$
(20)

where recall that \({\tilde{\Gamma }}\) is the set of all real irreducible representations of \(\Gamma \). This orthogonal transformation gives a linear isomorphism between \({{{\mathcal {T}}}}:=\mathrm{span}\{R(\gamma ):\gamma \in \Gamma \}\) and \(\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\). Indeed, if we set \(\psi :{\mathbb {R}}^{\Gamma \times \Gamma }\ni X\mapsto Z^{\top } XZ \in {\mathbb {R}}^{\Gamma \times \Gamma }\), then by (20), \(\psi ({{{\mathcal {T}}}}) \subseteq \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\), and since Z is non-singular, \(\psi |_{{{\mathcal {T}}}}\) is an injective linear map. By (13), \({{{\mathcal {T}}}}\) and \(\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\) have the same dimension, and hence \(\psi |_{{{\mathcal {T}}}}\) is an isomorphism.
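For \(\Gamma ={\mathbb {Z}}_2\), the decomposition (20) can be written out explicitly: the irreducible representations are the trivial one (\(\gamma \mapsto 1\)) and the sign one (\(\gamma \mapsto \gamma \)), and the orthogonal matrix Z below block-diagonalizes the regular representation. An illustrative sketch:

```python
import numpy as np

# Γ = Z2 = {+1, -1}: R(+1) = I₂ and R(-1) swaps the two basis vectors.
R = {1: np.eye(2), -1: np.array([[0.0, 1.0], [1.0, 0.0]])}

# Z2 has two degree-1 irreps, trivial (γ ↦ 1) and sign (γ ↦ γ), so (20)
# predicts Z^T R(γ) Z = diag(1, γ) for the orthogonal Z below.
Z = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
blocks = {g: Z.T @ R[g] @ Z for g in (1, -1)}
```

The first column of Z is the normalized all-ones vector, which carries the trivial component; this is the same phenomenon used in the proof of Claim 4.8 below.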

Using Z, we consider an orthogonal map \(\Psi :{\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) given by \(\Psi (X)=(Z \otimes I_{{\hat{n}}})^\top X (Z \otimes I_{{\hat{n}}})\). We show this is the desired map for Proposition 4.4.
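The decomposition (20) can be checked numerically on a small example. The following sketch (a minimal illustration of ours, not the paper's construction) takes the two-element group \(\{e,s\}\), writes out its right regular representation (19), and verifies that the explicit orthogonal matrix Z block-diagonalizes it into the trivial and the sign representation, both of degree 1.

```python
import numpy as np

# Right regular representation (19) of the two-element group {e, s}:
# R(e) is the identity and R(s) swaps the two coordinates.
R_e = np.eye(2)
R_s = np.array([[0.0, 1.0],
                [1.0, 0.0]])

# Orthogonal matrix Z realizing the decomposition (20) into the
# trivial and the sign representation (each of degree 1 here).
Z = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

print(np.allclose(Z.T @ R_e @ Z, np.diag([1.0, 1.0])))   # True
print(np.allclose(Z.T @ R_s @ Z, np.diag([1.0, -1.0])))  # True
```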

Proof of Proposition 4.4

Since \(\psi \) is a linear isomorphism between \({{{\mathcal {T}}}}\) and \(\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\), we have

$$\begin{aligned} \Psi \left( {{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\right) =\bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }. \end{aligned}$$
(21)

Also, (11) says that a Laplacian \(L\in {\mathcal {S}}^V\) is in \(({{{\mathcal {L}}}}^V)^{\Gamma }\) if and only if it lies in the span of \(\{R(\gamma ) \otimes E_{u,v}:\gamma \in \Gamma , u,v \in V/\Gamma \}\). In other words,

$$\begin{aligned} \left( {{{\mathcal {L}}}}^V\right) ^{\Gamma }={{{\mathcal {L}}}}^V\cap \left( {{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\right) . \end{aligned}$$
(22)

Instead of computing \(\Psi (({{{\mathcal {L}}}}^V)^{\Gamma })\) directly, we compute the \(\Psi \)-image of the right-hand side of (22).

Claim 4.8

\(X\in {{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\) is Laplacian if and only if \(\Psi (X)\) is symmetric and \(\Psi (X)_\mathrm{tri}\) is Laplacian, where \(\Psi (X)_\mathrm{tri}\) stands for the projection of \(\Psi (X)\) onto the component associated with the trivial representation in (21).

Proof

Clearly, X is symmetric if and only if \(\Psi (X)\) is symmetric. Also, a symmetric matrix \(X \in {\mathcal {S}}^n\) is Laplacian if and only if \(X{\varvec{1}}_n=0\). Hence it suffices to show that

$$\begin{aligned} X{\varvec{1}}_V=0\,\text {if and only if}\, \Psi (X)_\mathrm{tri}{\varvec{1}}_{V/\Gamma }=0. \end{aligned}$$
(23)

To see this, we use the fact that the span of the vector \({\varvec{1}}_\Gamma \in {\mathbb {R}}^{\Gamma }\) is a subrepresentation of R corresponding to the trivial representation. (This can be checked from \(R(\gamma ){\varvec{1}}_\Gamma ={\varvec{1}}_\Gamma \) for any \(\gamma \in \Gamma \).) Since Z is orthogonal, \(Z^\top {\varvec{1}}_\Gamma =\sqrt{|\Gamma |}\,{\varvec{e}}_\mathrm{tri}\), where \({\varvec{e}}_\mathrm{tri}\) is a unit vector in \({\mathbb {R}}^\Gamma \). This implies that

$$\begin{aligned} \left( Z^{\top }\otimes I_{V/\Gamma }\right) {\varvec{1}}_V =\left( Z^{\top }\otimes I_{V/\Gamma }\right) \left( {\varvec{1}}_\Gamma \otimes {\varvec{1}}_{V/\Gamma }\right) =\left( Z^{\top }{\varvec{1}}_{\Gamma }\right) \otimes {\varvec{1}}_{V/\Gamma } =\sqrt{|\Gamma |}\, {\varvec{e}}_{\mathrm{tri}}\otimes {\varvec{1}}_{V/\Gamma }. \end{aligned}$$

Since \(X{\varvec{1}}_V=0\) if and only if \(\Psi (X)(Z^{\top }\otimes I_{V/\Gamma }){\varvec{1}}_V=0\), we obtain (23), and hence the claim follows. \(\square \)

Combining the relations obtained so far, we get

$$\begin{aligned} \Psi \left( \left( {{{\mathcal {L}}}}^V\right) ^{\Gamma }\right)&=\Psi \left( {{{\mathcal {L}}}}^V\cap \left( {{{\mathcal {T}}}}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\right) \right) \qquad \qquad (\text {by }(22)) \\&=\left\{ X \in \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{d_{\rho }} \otimes {\mathbb {R}}^{d_\rho \times d_{\rho }}\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma } : X \in {{{\mathcal {S}}}}^V,\ X_\mathrm{tri}{\varvec{1}}_{V/\Gamma }=0 \right\} \qquad (\text {by }(21)\text { and Claim}~4.8) \\&=\left\{ X \in \bigoplus _{\rho \in {\tilde{\Gamma }}}I_{d_{\rho }}\otimes {\mathcal {S}}^{d_\rho {\hat{n}}} : X_\mathrm{tri}{\varvec{1}}_{V/\Gamma }=0 \right\} \\&={\mathcal {L}}^{V/\Gamma }\oplus \bigoplus _{\rho \in {\tilde{\Gamma }} \setminus \{\mathrm{tri}\}}I_{d_{\rho }}\otimes {\mathcal {S}}^{d_\rho {\hat{n}}} \end{aligned}$$

as required. \(\square \)

Next we prove Proposition 4.6. In the proof, we use the following orthogonality relation of complex irreducible representations (see, e.g., [27] for details).

Proposition 4.9

(Schur orthogonality). Let \(\pi :\Gamma \rightarrow GL_{d_\pi }({\mathbb {C}})\) and \(\pi ':\Gamma \rightarrow GL_{d_{\pi '}}({\mathbb {C}})\) be complex irreducible representations. If \(\pi \) and \(\pi '\) are not equivalent, then

$$\begin{aligned} \sum _{\gamma \in \Gamma } (\pi (\gamma )[k,l]) \left( \overline{\pi '(\gamma )[k',l']}\right) =0 \end{aligned}$$

for any \(1 \le k,l \le d_\pi \) and \(1 \le k',l' \le d_{\pi '}\). If \(\pi =\pi '\), then

$$\begin{aligned} \sum _{\gamma \in \Gamma } \left( \pi (\gamma )[k,l]\right) \left( \overline{\pi (\gamma )[k',l']}\right) = {\left\{ \begin{array}{ll} \frac{|\Gamma |}{d_\pi } &{} ( k=k', l=l'), \\ 0 &{} (\text {otherwise}) \end{array}\right. } \end{aligned}$$

for any \(1 \le k,l,k',l' \le d_\pi \).
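Schur orthogonality is easy to verify numerically. A minimal sketch of ours, using the degree-1 complex irreducible representations of the cyclic group \(C_3\) (the function name pi and the indexing are our choices for illustration), checks both cases of Proposition 4.9.

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)

def pi(k):
    # Degree-1 complex irreducible representation of C_3 sending the
    # j-th power of the generator to omega**(j*k), for k = 0, 1, 2.
    return [omega ** (j * k) for j in range(3)]

# Inequivalent irreducible representations pair to zero ...
s01 = sum(a * b.conjugate() for a, b in zip(pi(0), pi(1)))
# ... while an irreducible paired with itself gives |Gamma|/d_pi = 3/1.
s11 = sum(a * b.conjugate() for a, b in zip(pi(1), pi(1)))

print(abs(s01) < 1e-12)      # True
print(abs(s11 - 3) < 1e-12)  # True
```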

Let \(\Gamma \) be an absolutely irreducible finite group and \((G,p)\) be a d-dimensional \(\theta \)-symmetric framework with \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\). We fix a representative vertex from each vertex orbit, and let \({\tilde{P}}\) be the \(d \times {\hat{n}}\) matrix given by arranging the coordinates of the representative vertices. Then we may suppose (by permuting the columns appropriately)

$$\begin{aligned} P=\sum _{\gamma \in \Gamma } {\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ) {\tilde{P}}). \end{aligned}$$
(24)

Since \(\theta \) is an orthogonal representation, there is an orthogonal matrix \(Y \in O({\mathbb {R}}^d)\) such that

$$\begin{aligned} Y \theta (\gamma ) Y^\top = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{m_\rho } \otimes \rho (\gamma ), \end{aligned}$$
(25)

where recall that \(m_\rho \) denotes the multiplicity of \(\rho \in {\tilde{\Gamma }}\) in \(\theta \).

Let S be a set of indices defined by

$$\begin{aligned} S = \left\{ (\rho ,t,k) : \rho \in {\tilde{\Gamma }}, 1 \le t \le m_\rho , 1 \le k \le d_\rho \right\} . \end{aligned}$$

By (14), \(|S|=d\). Hence in the following discussion we identify \({\mathbb {R}}^S\) with \({\mathbb {R}}^d\) by taking the standard basis \(\{{\varvec{e}}_{(\rho ,t,k)}:(\rho ,t,k)\in S\}\).

With this notation, we can compute \(\Psi (P^\top P)\) explicitly.

Lemma 4.10

For each \(\rho \in {\tilde{\Gamma }}\) and t with \(1\le t\le m_{\rho }\), define \(w_{\rho ,t} \in {\mathbb {R}}^{d_\rho }\otimes {\mathbb {R}}^{V/\Gamma }\) by

$$\begin{aligned} w_{\rho ,t}= \sqrt{\frac{|\Gamma |}{d_\rho }} \sum _{1 \le k \le d_\rho } {\varvec{e}}_k \otimes \left( {\tilde{P}}^\top Y^\top {\varvec{e}}_{(\rho ,t,k)} \right) . \end{aligned}$$
(26)

Then, for each \(\rho \in {\tilde{\Gamma }}\),

$$\begin{aligned} \Psi \left( P^\top P\right) _\rho = \sum _{1 \le t \le m_\rho } w_{\rho ,t} {w_{\rho ,t}}^\top . \end{aligned}$$

Proof

By (19) and (24), we have

$$\begin{aligned} P^\top P =\sum _{\alpha ,\beta \in \Gamma } E_{\alpha ,\beta } \otimes \left( {\tilde{P}}^\top \theta \left( \alpha ^{-1} \beta \right) {\tilde{P}}\right) =\sum _{\gamma \in \Gamma } R(\gamma ) \otimes \left( {\tilde{P}}^\top \theta (\gamma ) {\tilde{P}} \right) . \end{aligned}$$
(27)

Hence

$$\begin{aligned} \Psi \left( P^\top P\right) _\rho&= \sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \left( {\tilde{P}}^\top \theta (\gamma ) {\tilde{P}} \right) \qquad \qquad \qquad (\text {by} (20)\,\text {and}\,(27))\\&=\sum _{\gamma \in \Gamma } (I_{d_\rho }\otimes {\tilde{P}})^\top \left( \rho (\gamma ) \otimes \theta (\gamma ) \right) \left( I_{d_\rho } \otimes {\tilde{P}}\right) \\&=(I_{d_\rho } \otimes {\tilde{P}}^\top Y^\top ) \left( \sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \bigoplus _{\rho ' \in {\tilde{\Gamma }}} \bigoplus _{1 \le t \le m_{\rho '}} \rho '(\gamma ) \right) ( I_{d_\rho } \otimes Y{\tilde{P}})&(\text {by}\, (25)). \end{aligned}$$

Note that, by Schur orthogonality (Proposition 4.9) and the absolute irreducibility of \(\Gamma \), for any \(\rho , \rho ' \in {\tilde{\Gamma }}\),

$$\begin{aligned} \sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \rho '(\gamma ) = {\left\{ \begin{array}{ll} \frac{|\Gamma |}{d_\rho } \sum _{1 \le k,l \le d_\rho } E_{kl} \otimes E_{kl} &{} (\rho = \rho '), \\ 0 &{} (\rho \ne \rho '). \end{array}\right. } \end{aligned}$$

Hence, we have

$$\begin{aligned} \sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \bigoplus _{\rho ' \in {\tilde{\Gamma }}} \bigoplus _{1 \le t \le m_{\rho '}} \rho '(\gamma )&= \frac{|\Gamma |}{d_\rho } \sum _{1\le t \le m_{\rho }} \sum _{1 \le k,l \le d_\rho } E_{kl} \otimes E_{(\rho ,t,k)(\rho ,t,l)} \\&= \frac{|\Gamma |}{d_\rho } \sum _{1\le t \le m_{\rho }} \left( \sum _{1 \le k \le d_\rho } {\varvec{e}}_k \otimes {\varvec{e}}_{(\rho ,t,k)}\right) \left( \sum _{1 \le l \le d_\rho } {\varvec{e}}_l \otimes {\varvec{e}}_{(\rho ,t,l)}\right) ^\top . \end{aligned}$$

Therefore, we obtain \(\Psi (P^{\top } P)_\rho =\sum _{1\le t\le m_{\rho }} w_{\rho ,t} w_{\rho ,t}^\top \) with

$$\begin{aligned} w_{\rho ,t}=\left( I_{d_\rho } \otimes {\tilde{P}}^\top Y^\top \right) \left( \sqrt{\frac{|\Gamma |}{d_\rho }}\sum _{1 \le k \le d_\rho } {\varvec{e}}_k \otimes {\varvec{e}}_{(\rho ,t,k)}\right) =\sqrt{\frac{|\Gamma |}{d_\rho }} \sum _{1 \le k \le d_\rho } {\varvec{e}}_k \otimes \left( {\tilde{P}}^\top Y^\top {\varvec{e}}_{(\rho ,t,k)}\right) \end{aligned}$$

as required. \(\square \)
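The tensor identity used in the proof above, \(\sum _{\gamma } \rho (\gamma ) \otimes \rho (\gamma ) = \frac{|\Gamma |}{d_\rho }\sum _{k,l} E_{kl}\otimes E_{kl}\), can be verified numerically. A minimal sketch of ours, realizing the degree-2 real irreducible representation of \(S_3\) as the six symmetries of an equilateral triangle (this concrete realization is our choice for illustration):

```python
import numpy as np

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def ref(t):
    # Reflection across the line through the origin at angle t/2.
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [s, -c]])

# Degree-2 real irreducible representation of S_3: three rotations and
# three reflections of an equilateral triangle; |Gamma| = 6, d_rho = 2.
group = [rot(2 * np.pi * j / 3) for j in range(3)] \
      + [ref(2 * np.pi * j / 3) for j in range(3)]

total = sum(np.kron(g, g) for g in group)

# (|Gamma|/d_rho) * sum_{k,l} E_kl (tensor) E_kl:
I2 = np.eye(2)
target = 3 * sum(np.kron(np.outer(I2[k], I2[l]), np.outer(I2[k], I2[l]))
                 for k in range(2) for l in range(2))
print(np.allclose(total, target))  # True
```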

Proof of Proposition 4.6

For any \(p \in {\mathcal {C}}_\theta (V)\), by Lemma 4.10, \(\mathrm{rank}\,\Psi (P^\top P)_\rho \le m_\rho \). So, it is sufficient to prove that for any \(X \in {\mathcal {K}}_{+,\Gamma }\) satisfying \(\mathrm{rank}\,X_\rho \le m_\rho \) (\(\rho \in {\tilde{\Gamma }}\)), there exists \(q \in {\mathcal {C}}_\theta (V)\) satisfying \(\Psi (Q^\top Q)=X\).

Since \(\mathrm{rank}\,X_\rho \le m_\rho \) and \(X_\rho \) is positive semidefinite, there exist \(v_{\rho ,1}, \ldots , v_{\rho ,m_\rho } \in {\mathbb {R}}^{d_\rho {\hat{n}}}\) such that \(X_\rho = \sum _{1 \le t \le m_\rho } v_{\rho ,t} {v_{\rho ,t}}^\top \).

Claim 4.11

There exists a matrix \({\tilde{Q}}\in {\mathbb {R}}^{d\times {\hat{n}}}\) such that

$$\begin{aligned} v_{\rho ,t}=\sqrt{\frac{|\Gamma |}{d_{\rho }}} \sum _{1\le k\le d_{\rho }} {\varvec{e}}_k \otimes \left( {\tilde{Q}}^{\top } Y^{\top } {\varvec{e}}_{(\rho ,t,k)} \right) \end{aligned}$$

for all \(\rho \in {\tilde{\Gamma }}\) and t with \(1\le t\le m_{\rho }\).

Proof

We decompose \({\mathbb {R}}^{d_{\rho }{\hat{n}}}\) into \(d_{\rho }\) copies of \({\mathbb {R}}^{{\hat{n}}}\), writing \({\mathbb {R}}^{d_{\rho }{\hat{n}}}={\mathbb {R}}^{{\hat{n}}}\times \cdots \times {\mathbb {R}}^{{\hat{n}}}\). Then each \(v_{\rho ,t}\in {\mathbb {R}}^{d_{\rho }{\hat{n}}}\) can be written as \(v_{\rho ,t}=(v_{\rho ,t,1},v_{\rho ,t,2},\dots , v_{\rho ,t,d_{\rho }})\) for some \(v_{\rho ,t,1}, \dots , v_{\rho ,t,d_{\rho }}\in {\mathbb {R}}^{{\hat{n}}}\). Let U be the matrix whose \((\rho ,t,k)\)-th column is \(\sqrt{\frac{d_{\rho }}{|\Gamma |}}\,v_{\rho ,t,k}\) for each \((\rho ,t,k)\in S\). Since \(|S|=d\), the size of U is \({\hat{n}}\times d\). Moreover, \(\sum _{1\le k\le d_\rho } {\varvec{e}}_k\otimes (U{\varvec{e}}_{(\rho ,t,k)})=\sqrt{\frac{d_{\rho }}{|\Gamma |}}\,v_{\rho ,t}\). Hence, by setting \({\tilde{Q}}=Y^{-1} U^{\top }\), we obtain the claimed \({\tilde{Q}}\). \(\square \)

Let \({\tilde{Q}}\) be as in Claim 4.11, and set \(Q=\sum _{\gamma \in \Gamma }{\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ){\tilde{Q}})\). The size of Q is \(d \times n\), and its columns can be identified with a configuration \(q:V \rightarrow {\mathbb {R}}^d\). Then q is compatible with \(\theta \) and satisfies \(\Psi (Q^\top Q)=X\) by Lemma 4.10 and Claim 4.11. Since \(Q^{\top }Q=\Psi ^{-1}(X)\), we have \(Q^{\top }Q\in {{{\mathcal {L}}}}^V\), implying \(Q{\varvec{1}}_V=0\). Hence \(q \in {{{\mathcal {C}}}}_\theta (V)\). This completes the proof. \(\square \)

Example

As a working example, we consider the reflectional symmetry \(C_s\) with respect to the x-axis in the plane. In this case, \(d=2\) and \(C_s\) consists of two elements, denoted by \(\{e,s\}\). The point group representation \(\theta :\{e,s\}\rightarrow O({\mathbb {R}}^2)\) is given by

$$\begin{aligned} \theta (e)=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 1\end{pmatrix} \text { and } \theta (s)=\begin{pmatrix} 1 &{} 0 \\ 0 &{} -1\end{pmatrix}. \end{aligned}$$

The representation \(\theta \) decomposes as \(\theta =+ \oplus -\), where \(+\) denotes the trivial representation and \(-:C_s \rightarrow O({\mathbb {R}})\) denotes the other irreducible representation of degree 1, given by \(-(e)=1\), \(-(s)=-1\). Let G be a \(C_s\)-symmetric graph and \({\hat{n}}\) the number of its vertex orbits. Denote \(V(G)=\{1,2,\dots , 2{\hat{n}}\}\), and assume that, for \(1 \le i \le {\hat{n}}\), the vertices i and \(i+{\hat{n}}\) are in the same vertex orbit, with i the representative vertex of that orbit. Then we have

$$\begin{aligned} \left( {\mathcal {L}}^V\right) ^{C_s} = \left\{ \begin{pmatrix} A &{} B \\ B &{} A\end{pmatrix}: A, B \in {\mathcal {S}}^{{\hat{n}}}, (A+B){\varvec{1}}=0 \right\} , \, {\mathcal {K}}_+={\mathcal {L}}^{{\hat{n}}}, \, {\mathcal {K}}_-={\mathcal {S}}^{{\hat{n}}}. \end{aligned}$$

The orthogonal transformation \(\Psi :({\mathcal {L}}^V)^{C_s} \rightarrow {\mathcal {K}}_+ \oplus {\mathcal {K}}_-\) in Proposition 4.4 is given by

$$\begin{aligned} Z= \begin{pmatrix} \frac{1}{\sqrt{2}} &{} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &{} -\frac{1}{\sqrt{2}} \end{pmatrix} \otimes I_{{\hat{n}}}, \end{aligned}$$

and hence we have

$$\begin{aligned} \Psi \left( \begin{pmatrix} A &{} B \\ B &{} A\end{pmatrix} \right) = Z^\top \begin{pmatrix} A &{} B \\ B &{} A\end{pmatrix} Z = \begin{pmatrix} A+B &{} 0 \\ 0 &{} A-B\end{pmatrix}. \end{aligned}$$

By \(\theta =+ \oplus -\), we have \(m_+=m_-=1\).

For \(p \in {\mathcal {C}}_\theta (V)\), let \({\tilde{p}}_x\) (resp. \({\tilde{p}}_y\)) \(\in {\mathbb {R}}^{{\hat{n}}}\) be the x-coordinate (resp. y-coordinate) vector of the representative vertices. Then the corresponding matrix P (whose i-th column is \(p_i\)) is

$$\begin{aligned} P= \begin{pmatrix} {\tilde{p}}_x^\top &{} {\tilde{p}}_x^\top \\ {\tilde{p}}_y^\top &{} -{\tilde{p}}_y^\top \end{pmatrix}, \end{aligned}$$

and the Gram matrix of p is

$$\begin{aligned} P^\top P = \begin{pmatrix} {\tilde{p}}_x &{} {\tilde{p}}_y \\ {\tilde{p}}_x &{} -{\tilde{p}}_y \end{pmatrix} \begin{pmatrix} {\tilde{p}}_x^\top &{} {\tilde{p}}_x^\top \\ {\tilde{p}}_y^\top &{} -{\tilde{p}}_y^\top \end{pmatrix}= \begin{pmatrix} {\tilde{p}}_x {\tilde{p}}_x^\top + {\tilde{p}}_y {\tilde{p}}_y^\top &{} {\tilde{p}}_x {\tilde{p}}_x^\top - {\tilde{p}}_y {\tilde{p}}_y^\top \\ {\tilde{p}}_x {\tilde{p}}_x^\top - {\tilde{p}}_y {\tilde{p}}_y^\top &{} {\tilde{p}}_x {\tilde{p}}_x^\top + {\tilde{p}}_y {\tilde{p}}_y^\top \end{pmatrix}. \end{aligned}$$

Hence,

$$\begin{aligned} \Psi \left( P^\top P\right) _+ = 2{\tilde{p}}_x {\tilde{p}}_x^\top , \text { and } \Psi \left( P^\top P\right) _-=2{\tilde{p}}_y {\tilde{p}}_y^\top . \end{aligned}$$

Therefore, we have

$$\begin{aligned} f\left( {\mathcal {C}}_\theta (V)\right)&= \left\{ \left( 2{\tilde{p}}_x {\tilde{p}}_x^\top \right) \oplus \left( 2{\tilde{p}}_y {\tilde{p}}_y^\top \right) : {\tilde{p}}_x, {\tilde{p}}_y \in {\mathbb {R}}^{{\hat{n}}}, {\tilde{p}}_x\cdot {\varvec{1}}=0 \right\} \\&= \left\{ X \in {\mathcal {K}}_{+,C_s} : \mathrm{rank}\,X_+, \mathrm{rank}\,X_- \le 1 \right\} \\&= \left\{ X \in {\mathcal {K}}_{+,C_s} : \mathrm{rank}\,X_+\le m_+, \mathrm{rank}\,X_- \le m_- \right\} , \end{aligned}$$

in agreement with Proposition 4.6.
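This worked example can be replayed numerically. A minimal sketch of ours (assuming numpy; the orbit count and the random coordinates are arbitrary illustrative choices) builds a \(C_s\)-symmetric configuration with \({\tilde{p}}_x\cdot {\varvec{1}}=0\) and checks that \(\Psi (P^\top P)\) block-diagonalizes into \(2{\tilde{p}}_x {\tilde{p}}_x^\top \oplus 2{\tilde{p}}_y {\tilde{p}}_y^\top \):

```python
import numpy as np

rng = np.random.default_rng(0)
nh = 3  # number of vertex orbits (n-hat)

px = rng.standard_normal(nh)
px = px - px.mean()  # enforce p_x . 1 = 0 (centroid condition)
py = rng.standard_normal(nh)

# Columns 1..nh are the representatives (px_i, py_i);
# columns nh+1..2nh are their mirror images (px_i, -py_i).
P = np.vstack([np.concatenate([px, px]),
               np.concatenate([py, -py])])

# Psi(X) = (Z)^T X (Z) with Z the 2x2 block tensored with the identity.
Z = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2), np.eye(nh))
M = Z.T @ (P.T @ P) @ Z

print(np.allclose(M[:nh, :nh], 2 * np.outer(px, px)))  # (+)-block: True
print(np.allclose(M[nh:, nh:], 2 * np.outer(py, py)))  # (-)-block: True
print(np.allclose(M[:nh, nh:], 0))                     # off-diagonal: True
```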

Complete proof of Theorem 4.2

In the last section we proved Theorem 4.2 for absolutely irreducible \(\Gamma \). In this section, we give a complete proof of Theorem 4.2. If \(\Gamma \) is not absolutely irreducible, the corresponding structure theorem has a slightly more involved form (Proposition 5.2), and accordingly we need a further technical analysis of the facial structure of the cone of positive semidefinite \(\Gamma \)-symmetric Laplacian matrices.

Throughout this section, we distinguish real representations and complex representations, and denote the equivalence classes of real irreducible representations of \(\Gamma \) by \({\tilde{\Gamma }}\). Again, we may assume that each \(\rho \in {\tilde{\Gamma }}\) is an orthogonal matrix representation \(\rho :\Gamma \rightarrow O({\mathbb {R}}^{d_\rho })\). Let \({\mathbb {H}}\) be the algebra of quaternions. For \(x=a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k}\in {\mathbb {H}}\) (\(a,b,c,d \in {\mathbb {R}}\)), its conjugate \(x^*\) is \(x^*=a-b\mathrm{i}-c\mathrm{j}-d\mathrm{k}\).

Block-diagonalization

We first review basic facts on real representations; see [6, 23] for more detailed algebraic expositions aimed at applications to semidefinite programming problems.

Let \(\rho :\Gamma \rightarrow GL(W)\) be a real irreducible representation. The set of isomorphisms of W commuting with \(\rho (\gamma )\) for all \(\gamma \in \Gamma \) is denoted by \(\mathrm{Hom}(\rho ,\rho )\). \(\mathrm{Hom}(\rho ,\rho )\) forms a division algebra over \({\mathbb {R}}\), called the commuting algebra, and it is known (by the Frobenius theorem) that \(\mathrm{Hom}(\rho ,\rho )\) is isomorphic to one of \({\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\). Accordingly, \(\rho \) is said to be of real, complex, or quaternionic type, respectively. For a finite group \(\Gamma \), the set of real irreducible representations of real (resp., complex, quaternionic) type is denoted by \({\tilde{\Gamma }}_{\mathbb {R}}\) (resp., \({\tilde{\Gamma }}_{\mathbb {C}}\), \({\tilde{\Gamma }}_{\mathbb {H}}\)). Note that \({\tilde{\Gamma }}={\tilde{\Gamma }}_{\mathbb {R}}\cup {\tilde{\Gamma }}_{\mathbb {C}}\cup {\tilde{\Gamma }}_{\mathbb {H}}\).

The types of real representations can also be understood in terms of decomposability over \({\mathbb {C}}\) as follows.

Proposition 5.1

(See, e.g., [27, Chapter 13.2]). For a real irreducible representation \(\rho \in {\tilde{\Gamma }}\),

  • \(\rho \) is of real type if and only if it is irreducible over \({\mathbb {C}}\) (i.e., absolutely irreducible);

  • \(\rho \) is of complex type if and only if it is decomposed over \({\mathbb {C}}\) into the direct sum of a complex irreducible representation \(\pi \) and its complex conjugate \({\overline{\pi }}\) such that \(\pi \) and \({\overline{\pi }}\) are not equivalent to each other;

  • \(\rho \) is of quaternionic type if and only if it is decomposed over \({\mathbb {C}}\) into the direct sum of two copies of a self-conjugate complex irreducible representation \(\pi \), i.e., \(\pi ={\overline{\pi }}\).

The trivial representation \(\mathrm{tri}\) is always of real type, and hence \({\tilde{\Gamma }}_{\mathbb {R}}\ne \emptyset \). The absolutely irreducible groups studied in the last section are exactly those with \({\tilde{\Gamma }}_{\mathbb {C}}={\tilde{\Gamma }}_{\mathbb {H}}=\emptyset \). The simplest example of a non-absolutely irreducible group is the cyclic group \(C_n\) of order n with \(n \ge 3\), which has a real irreducible representation of complex type of degree 2. Another fundamental example is the quaternion group \(Q_8\), whose real irreducible representations consist of four representations of real type of degree 1 and one representation of quaternionic type of degree 4.

To describe the structure theorem, we introduce two linear matrix spaces. For a complex number \(a+b\mathrm{i}\in {\mathbb {C}}\) with \(a,b\in {\mathbb {R}}\), let

$$\begin{aligned} C(a+b\mathrm{i}) = \begin{pmatrix} a&{} -b \\ b &{} a \end{pmatrix}. \end{aligned}$$

The map C can be extended to \(C:{\mathbb {C}}^{n\times n}\rightarrow {\mathbb {R}}^{2n\times 2n}\) by applying it entry-wise, i.e.,

$$\begin{aligned} C(Z)=\begin{pmatrix} C(z_{11}) &{} \cdots &{} C(z_{1n}) \\ \vdots &{} \ddots &{} \vdots \\ C(z_{n1}) &{} \cdots &{} C(z_{nn}) \end{pmatrix} \end{aligned}$$

for \(Z=(z_{ij})\in {\mathbb {C}}^{n\times n}\). The space of all real expressions of \(n \times n\) complex matrices is denoted by \({\mathcal {C}}^{2n}\), i.e., \({\mathcal {C}}^{2n}=C({\mathbb {C}}^{n\times n})\). Similarly, for \(a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k}\in {\mathbb {H}}\) with \(a, b, c, d \in {\mathbb {R}}\), let

$$\begin{aligned} H(a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k}) = \begin{pmatrix} a &{} -b &{} c &{} -d \\ b &{} a &{} d &{} c \\ -c &{} -d &{} a &{} b \\ d &{} -c &{} -b &{} a \end{pmatrix}, \end{aligned}$$

and extend it over \({\mathbb {H}}^{n\times n}\) by

$$\begin{aligned} H(X)=\begin{pmatrix} H(x_{11}) &{} \cdots &{}H(x_{1n}) \\ \vdots &{} \ddots &{} \vdots \\ H(x_{n1}) &{} \cdots &{} H(x_{nn}) \end{pmatrix} \end{aligned}$$

for \(X=(x_{ij})\in {\mathbb {H}}^{n\times n}\). Let \({{{\mathcal {H}}}}^{4n}=H({\mathbb {H}}^{n\times n})\).

Note that C and H defined above are compatible with matrix multiplication (i.e., \(C(XY)=C(X)C(Y)\) and \(H(XY)=H(X)H(Y)\)), and \(C(X^*)=C(X)^\top \) and \(H(Y^*)=H(Y)^\top \) hold, where \(^*\) denotes the conjugate transpose. For \(A \in {\mathcal {C}}^{2n}\) (resp., \(A \in {{{\mathcal {H}}}}^{4n}\)), the matrix \(X \in {\mathbb {C}}^{n \times n}\) satisfying \(C(X)=A\) (resp., \(X \in {\mathbb {H}}^{n \times n}\) satisfying \(H(X)=A\)) is denoted by \(C^{-1}(A)\) (resp., \(H^{-1}(A)\)).
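The compatibility of C and H with multiplication and conjugation can be sanity-checked numerically. A minimal sketch of ours (the helper qmul, implementing the Hamilton quaternion product, and the sample values are our choices for illustration):

```python
import numpy as np

def C(z):
    # Real 2x2 expression of a complex scalar a + bi.
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

def H(q):
    # Real 4x4 expression of a quaternion (a, b, c, d) = a + bi + cj + dk,
    # with the sign pattern displayed in the text.
    a, b, c, d = q
    return np.array([[ a, -b,  c, -d],
                     [ b,  a,  d,  c],
                     [-c, -d,  a,  b],
                     [ d, -c, -b,  a]])

def qmul(p, q):
    # Hamilton product of two quaternions given as (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

z, w = 1 + 2j, -3 + 1j
p, q = (1.0, 2.0, -1.0, 0.5), (0.0, 1.0, 3.0, -2.0)

print(np.allclose(C(z * w), C(z) @ C(w)))       # True: C(zw) = C(z)C(w)
print(np.allclose(C(z.conjugate()), C(z).T))    # True: C(z*) = C(z)^T
print(np.allclose(H(qmul(p, q)), H(p) @ H(q)))  # True: H(pq) = H(p)H(q)
pc = (p[0], -p[1], -p[2], -p[3])
print(np.allclose(H(pc), H(p).T))               # True: H(p*) = H(p)^T
```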

For each \(\rho \in {\tilde{\Gamma }}\), define a linear space \({\mathcal {K}}_\rho \) by

$$\begin{aligned} {\mathcal {K}}_\rho = {\left\{ \begin{array}{ll} {\mathcal {L}}^{V/\Gamma } &{} (\rho =\mathrm{tri}) \\ {\mathcal {S}}^{d_\rho {\hat{n}}} &{} \left( \rho \in {\tilde{\Gamma }}_{\mathbb {R}} \setminus \{\mathrm{tri}\}\right) \\ {\mathcal {C}}^{d_\rho {\hat{n}}} \cap {\mathcal {S}}^{d_\rho {\hat{n}}} &{} \left( \rho \in {\tilde{\Gamma }}_{\mathbb {C}}\right) \\ {\mathcal {H}}^{d_\rho {\hat{n}}} \cap {\mathcal {S}}^{d_\rho {\hat{n}}} &{} \left( \rho \in {\tilde{\Gamma }}_{\mathbb {H}}\right) \end{array}\right. }, \end{aligned}$$

and let

$$\begin{aligned} {\mathcal {K}}_\Gamma =\bigoplus _{{\mathbb {F}}\in \{ {\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\}} \bigoplus _{\rho \in {\tilde{\Gamma }}_{\mathbb {F}}} I_{ \frac{d_\rho }{\dim {\mathbb {F}}}} \otimes {\mathcal {K}}_\rho , \end{aligned}$$
(28)

where \(\dim {\mathbb {R}}=1\), \(\dim {\mathbb {C}}=2\), and \(\dim {\mathbb {H}}=4\). Then we have the following structure theorem. Recall that \(\{E_{u,v}: u,v\in V/\Gamma \}\) stands for the standard basis of \({\mathbb {R}}^{V/\Gamma \times V/\Gamma }\).

Proposition 5.2

There exists an orthogonal transformation \(\Psi :{\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) satisfying \(\Psi (({\mathcal {L}}^V)^\Gamma ) ={\mathcal {K}}_\Gamma \) and

$$\begin{aligned} \Psi \left( R(\gamma )\otimes E_{u,v}\right) = \bigoplus _{{\mathbb {F}}\in \{{\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\}} \bigoplus _{\rho \in {\tilde{\Gamma }}_{\mathbb {F}}} I_{ \frac{d_\rho }{\dim {\mathbb {F}}}} \otimes \rho (\gamma ) \otimes E_{u,v}. \end{aligned}$$

In the proof of Proposition 5.2, we use the fact that real irreducible representations of complex and quaternionic type can be represented by matrices in \({{{\mathcal {C}}}}^{d_{\rho }}\) and \({{{\mathcal {H}}}}^{d_\rho }\). This fact follows from the following lemma.

Lemma 5.3

  1. (i)

    For each real irreducible representation \(\rho :\Gamma \rightarrow GL(V)\) of complex type, there exists a basis B of V such that the matrix expression of \(\rho (\gamma )\) with respect to B satisfies \(\rho (\gamma ) \in {\mathcal {C}}^{d_\rho }\).

  2. (ii)

    For each real irreducible representation \(\rho :\Gamma \rightarrow GL(V)\) of quaternionic type, there exists a basis B of V such that the matrix expression of \(\rho (\gamma )\) with respect to B satisfies \(\rho (\gamma ) \in {\mathcal {H}}^{d_\rho }\).

Proof

We prove (i). As the commuting algebra \(\mathrm{Hom}(\rho ,\rho )\) of \(\rho \) is isomorphic to \({\mathbb {C}}\), there exists \(J \in \mathrm{Hom}(\rho , \rho )\) satisfying \(J^2=-\mathrm{id}_V\). Let \(B=\emptyset \). We pick any non-zero \(v_1\) from V and add \(v_1\) and \(J(v_1)\) to B. We then pick \(v_2\) outside \(\mathrm{span}\,B\) and add \(v_2\) and \(J(v_2)\) to B. We repeat this procedure until we get \(B=\{v_1, J(v_1), v_2, J(v_2), \ldots , v_{d_{\rho }/2}, J( v_{d_{\rho }/2})\}\). By \(J^2=-\mathrm{id}_V\), one can easily check that B is a basis of V. As J commutes with \(\rho (\gamma )\), we can deduce that the matrix expression of \(\rho (\gamma )\) with respect to B has the desired form.

The proof of (ii) is analogous. \(\square \)

In view of Lemma 5.3, we may suppose that each \(\rho \in {\tilde{\Gamma }}_{\mathbb {C}}\) (resp., \(\rho \in {\tilde{\Gamma }}_{\mathbb {H}}\)) is an orthogonal matrix representation with \(\rho \in {{{\mathcal {C}}}}^{d_{\rho }}\) (resp., \(\rho \in {{{\mathcal {H}}}}^{d_{\rho }}\)).

We now consider the decomposition of the regular representation R into the real irreducible representations. Recall first that the multiplicity of a complex irreducible representation \(\pi \) in R is equal to the degree of \(\pi \). A representation \(\rho \in {\tilde{\Gamma }}_{\mathbb {R}}\) of real type is absolutely irreducible, and hence its multiplicity in R is its degree \(d_\rho \). A representation \(\rho \in {\tilde{\Gamma }}_{\mathbb {C}}\) of complex type is the direct sum of a complex irreducible representation \(\pi \) and its conjugate \({\overline{\pi }}\). Hence the degree of \(\pi \) is \(\frac{d_\rho }{2}\), and the multiplicity of \(\rho \) in R is \(\frac{d_\rho }{2}\). A representation \(\rho \in {\tilde{\Gamma }}_{\mathbb {H}}\) of quaternionic type decomposes into two copies of a self-conjugate complex irreducible representation \(\pi \). Hence the degree of \(\pi \) is \(\frac{d_\rho }{2}\), and the multiplicity of \(\rho \) in R is \(\frac{d_\rho }{4}\).

Therefore, there exists an orthogonal matrix \(Z \in O({\mathbb {R}}^\Gamma )\) satisfying

$$\begin{aligned} Z^\top R(\gamma ) Z = \bigoplus _{{\mathbb {F}}\in \{ {\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\}} \bigoplus _{\rho \in {\tilde{\Gamma }}_{\mathbb {F}}} I_{ \frac{d_\rho }{\dim {\mathbb {F}}}} \otimes \rho (\gamma ). \end{aligned}$$
(29)

Note also that by comparing the degree of R and that of its decomposition into real irreducible representations, we have

$$\begin{aligned} \sum _{{\mathbb {F}} \in \{ {\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\}} \sum _{\rho \in {\tilde{\Gamma }}_{\mathbb {F}}} \frac{{d_\rho }^2}{\dim {\mathbb {F}}} = |\Gamma |. \end{aligned}$$
(30)
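The dimension count (30) can be verified for the two examples mentioned earlier, \(C_3\) (whose real irreducible representations are the trivial one plus one of complex type of degree 2) and \(Q_8\). A small arithmetic check of ours:

```python
# Degrees of the real irreducible representations of two small groups,
# grouped by type; dim R, dim C, dim H = 1, 2, 4.
groups = {
    # C_3: the trivial representation (real type) and one representation
    # of complex type of degree 2.
    "C_3": {"order": 3, "R": [1], "C": [2], "H": []},
    # Q_8: four degree-1 representations of real type and one degree-4
    # representation of quaternionic type.
    "Q_8": {"order": 8, "R": [1, 1, 1, 1], "C": [], "H": [4]},
}

for name, g in groups.items():
    lhs = (sum(d * d for d in g["R"])
           + sum(d * d // 2 for d in g["C"])
           + sum(d * d // 4 for d in g["H"]))
    print(name, lhs == g["order"])  # C_3 True, Q_8 True
```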

For each \(\rho \in {\tilde{\Gamma }}\), let \({{{\mathcal {K}}}}'_\rho \) be a linear space defined by

$$\begin{aligned} {{{\mathcal {K}}}}'_\rho = {\left\{ \begin{array}{ll} {\mathbb {R}}^{d_\rho \times d_{\rho }} &{} \left( \rho \in {\tilde{\Gamma }}_{\mathbb {R}}\right) \\ {{{\mathcal {C}}}}^{d_\rho } &{} \left( \rho \in {\tilde{\Gamma }}_{\mathbb {C}}\right) \\ {{{\mathcal {H}}}}^{d_\rho } &{} \left( \rho \in {\tilde{\Gamma }}_{\mathbb {H}}\right) \end{array}\right. }, \end{aligned}$$

and let

$$\begin{aligned} {{{\mathcal {K}}}}'=\bigoplus _{{\mathbb {F}}\in \{{\mathbb {R}},{\mathbb {C}},{\mathbb {H}}\}}\bigoplus _{\rho \in {\tilde{\Gamma }}_{{\mathbb {F}}}} I_{\frac{d_{\rho }}{\dim {\mathbb {F}}}}\otimes {{{\mathcal {K}}}}_{\rho }'. \end{aligned}$$

(29) implies that, under the orthogonal transformation \(X\mapsto Z^\top XZ\), each \(R(\gamma )\) is mapped into \({{{\mathcal {K}}}}'\). If we set \({{{\mathcal {T}}}} = \mathrm{span}\{R(\gamma ):\gamma \in \Gamma \}\), then the map

$$\begin{aligned} \psi :{{{\mathcal {T}}}} \rightarrow {{{\mathcal {K}}}}'; \quad X\mapsto Z^\top X Z \end{aligned}$$

is a linear isomorphism between \({{{\mathcal {T}}}}\) and \({{{\mathcal {K}}}}'\).

Based on \(\psi \), we construct an orthogonal transformation \(\Psi : {\mathbb {R}}^{V\times V} \rightarrow {\mathbb {R}}^{V\times V}\) by \(X \mapsto (Z \otimes I_{{\hat{n}}})^\top X(Z \otimes I_{{\hat{n}}})\) for \(X \in {\mathbb {R}}^{V\times V}={\mathbb {R}}^{\Gamma \times \Gamma }\otimes {\mathbb {R}}^{V/\Gamma \times V/\Gamma }\). Then the restriction of \(\Psi \) to \(({{{\mathcal {L}}}}^V)^{\Gamma }\) is the map claimed in Proposition 5.2. Since the proof is identical to that of Proposition 4.4, we omit the details.

Extending Proposition 4.6

In this subsection, we prove Proposition 5.4, which extends Proposition 4.6 to general finite groups. Recall that the projection of \(X \in {\mathcal {K}}_\Gamma \) to \({\mathcal {K}}_\rho \) is denoted by \(X_\rho \). Given a point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\), the multiplicity of \(\rho \in {\tilde{\Gamma }}\) in \(\theta \) is denoted by \(m_\rho \). We have

$$\begin{aligned} d = \sum _{\rho \in {\tilde{\Gamma }}}d_\rho m_\rho . \end{aligned}$$
(31)

Let \({{{\mathcal {K}}}}_{+,\Gamma }=\Psi (({\mathcal {L}}^V)_+^\Gamma )\). The following proposition extends Proposition 4.6 to general groups.

Proposition 5.4

Define \(f:{\mathcal {C}}_\theta (V) \rightarrow {\mathcal {K}}_{+,\Gamma }\) by \(q\mapsto \Psi (Q^\top Q)\). Then

$$\begin{aligned} f\left( {\mathcal {C}}_\theta (V)\right) = \left\{ X \in {\mathcal {K}}_{+,\Gamma } : \mathrm{rank}\,X_\rho \le \dim {\mathbb {F}} \cdot m_\rho ~ \left( \rho \in {\tilde{\Gamma }}_{\mathbb {F}}, {\mathbb {F}}\in \{{\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}\}\right) \right\} . \end{aligned}$$

In the proof of Proposition 5.4, we use the explicit form of \(\Psi (P^\top P)\) (Lemma 5.5). For this, as in Sect. 4.5, we define \({\tilde{P}}\) and Y as follows. We fix a representative vertex from each vertex orbit, and let \({\tilde{P}}\) be a \(d \times {\hat{n}}\) matrix given by arranging the coordinates of the representative vertices. Then we may suppose (by permuting the columns appropriately)

$$\begin{aligned} P=\sum _{\gamma \in \Gamma } {\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ) {\tilde{P}}). \end{aligned}$$
(32)

Since \(\theta \) is an orthogonal representation, there is an orthogonal matrix \(Y \in O({\mathbb {R}}^d)\) such that

$$\begin{aligned} Y \theta (\gamma ) Y^\top = \bigoplus _{\rho \in {\tilde{\Gamma }}} I_{m_\rho } \otimes \rho (\gamma ). \end{aligned}$$
(33)

Let S be the set of indices defined by

$$\begin{aligned} S= \bigcup _{{\mathbb {F}}\in \{{\mathbb {R}},{\mathbb {C}},{\mathbb {H}}\}} \left\{ (\rho ,t,l,a) : \rho \in {\tilde{\Gamma }}_{\mathbb {F}}, 1 \le t \le m_\rho , 1 \le l \le \frac{d_\rho }{\dim {\mathbb {F}}}, 1\le a \le \dim {\mathbb {F}} \right\} . \end{aligned}$$

By (31), \(|S|=d\). Hence in the following discussion we identify \({\mathbb {R}}^S\) with \({\mathbb {R}}^d\) by taking the standard basis \(\{{\varvec{e}}_{(\rho ,t,l,a)}: (\rho ,t,l,a)\in S\}\).

Lemma 5.5

For each \(\rho \in {\tilde{\Gamma }}_{{\mathbb {F}}}\), we have

$$\begin{aligned} \Psi \left( P^\top P\right) _\rho = \sum _{1 \le t \le m_\rho } W_{\rho ,t} W_{\rho ,t}^\top \end{aligned}$$

where \(W_{\rho ,t} \in {\mathbb {R}}^{d_\rho {\hat{n}}\times (\dim {\mathbb {F}}) }\) is defined by

$$\begin{aligned}&W_{\rho ,t} \\&\quad =\sqrt{\frac{|\Gamma |}{d_\rho }}\cdot {\left\{ \begin{array}{ll} \displaystyle \sum _{1 \le l \le d_\rho } {\varvec{e}}_l \otimes {\tilde{P}}^{\top }Y^{\top }{\varvec{e}}_{(\rho ,t,l,1)} &{} \left( \mathrm {if}\,\, {\mathbb {F}}={\mathbb {R}}\right) \\ C\left( \displaystyle \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \left( {\tilde{P}}^\top Y^\top \left( {\varvec{e}}_{(\rho ,t,l,1)}+\mathrm{i} {\varvec{e}}_{(\rho ,t,l,2)}\right) \right) \right) &{} (\mathrm {if}\,\,{\mathbb {F}}={\mathbb {C}}) \\ H\left( \displaystyle \sum _{1 \le l \le \frac{d_\rho }{4}} {\varvec{e}}_l \otimes \left( {\tilde{P}}^\top Y^\top ({\varvec{e}}_{(\rho ,t,l,1)}+\mathrm{i}{\varvec{e}}_{(\rho ,t,l,2)} -\mathrm{j} {\varvec{e}}_{(\rho ,t,l,3)} +\mathrm{k} {\varvec{e}}_{(\rho ,t,l,4)})\right) \right) &{} (\mathrm {if }\,\,{\mathbb {F}}={\mathbb {H}}). \\ \end{array}\right. } \end{aligned}$$

The proof of Lemma 5.5 is given in Appendix A. The following properties of positive semidefinite complex Hermitian matrices are well known. A complex Hermitian matrix X is positive semidefinite if and only if C(X) is positive semidefinite. A positive semidefinite complex Hermitian matrix \(X \in {\mathbb {C}}^{n \times n}\) of rank r can be written as \(X=\sum _{1\le i\le r} v_i v_i^*\) for some \(v_1, \ldots , v_r \in {\mathbb {C}}^n\). Also, \(\mathrm{rank}\,C(X) = 2\,\mathrm{rank}\,X\) for any complex matrix X.

Although \({\mathbb {H}}\) is not a field, the corresponding properties hold even for quaternionic matrices. A set \(v_1,\dots , v_k\) of vectors in \({\mathbb {H}}^n\) is (right) linearly dependent if there are scalars \(\lambda _1,\dots ,\lambda _k\in {\mathbb {H}}\), not all zero, such that \(\sum _{i=1}^k v_i\lambda _i=0\); otherwise it is (right) linearly independent. The rank of a quaternionic matrix X is then defined as the maximum size of a linearly independent set of its columns. By [30, Theorem 7.3], \(\mathrm{rank}\,H(X) = 4\,\mathrm{rank}\,X\). By [30, Remark 6.1], X is positive semidefinite if and only if H(X) is positive semidefinite. Also, by [30, Corollary 6.2], a positive semidefinite \(X \in {\mathbb {H}}^{n \times n}\) of rank r can be written as \(X=\sum _{1\le i\le r} v_i v_i^*\) for some \(v_1, \ldots , v_r \in {\mathbb {H}}^n\).
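The complex-case properties are easy to check numerically. The sketch below (our own sanity check, not from the paper) takes the realification C entrywise as \(z=a+b\mathrm{i}\mapsto a\,C(1)+b\,C(\mathrm{i})\), with \(C(1)=I_2\) and \(C(\mathrm{i})\) the standard \(90^\circ \) rotation matrix, and verifies the rank-doubling and positive semidefiniteness claims on a random example.

```python
import numpy as np

def C(X):
    """Realification of a complex matrix: Re(X) ⊗ C(1) + Im(X) ⊗ C(i)."""
    C1 = np.eye(2)
    Ci = np.array([[0.0, -1.0], [1.0, 0.0]])
    return np.kron(X.real, C1) + np.kron(X.imag, Ci)

rng = np.random.default_rng(0)
# random positive semidefinite Hermitian matrix of rank 2 in C^{4x4}
V = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
X = V @ V.conj().T

assert np.allclose(C(X), C(X).T)                  # C(X) is real symmetric
assert np.linalg.matrix_rank(C(X)) == 2 * np.linalg.matrix_rank(X)
assert np.all(np.linalg.eigvalsh(C(X)) > -1e-9)   # C(X) is also PSD
```

The quaternionic realification H behaves analogously, with rank multiplied by 4 instead of 2.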

Now we are ready to prove Proposition 5.4.

Proof of Proposition 5.4

By Lemma 5.5, for any \(q \in {{{\mathcal {C}}}}_\theta (V)\), we have \(\mathrm{rank}\,\Psi (Q^\top Q)_\rho \le \dim {\mathbb {F}}\cdot m_\rho \) (\(\rho \in {\tilde{\Gamma }}_{\mathbb {F}}\)). Hence it suffices to show the other inclusion. Take any \(X \in {{{\mathcal {K}}}}_{+,\Gamma }\) satisfying \(\mathrm{rank}\,X_\rho \le \dim {\mathbb {F}}\cdot m_\rho \) (\(\rho \in {\tilde{\Gamma }}_{\mathbb {F}}\)). We want to find \(p \in {{{\mathcal {C}}}}_\theta (V)\) satisfying \(\Psi (P^\top P)=X\).

The proof is identical to that for the case of absolutely irreducible groups given in Proposition 4.6. Indeed, if \(\rho \in {\tilde{\Gamma }}_{\mathbb {C}}\) (resp., \(\rho \in {\tilde{\Gamma }}_{\mathbb {H}}\)), then \(C^{-1}(X_\rho )\) (resp., \(H^{-1}(X_\rho )\)) is a positive semidefinite complex (resp., quaternionic) matrix with rank at most \(m_\rho \). Hence, for each \(\rho \in {\tilde{\Gamma }}_{{\mathbb {F}}}\), there exist vectors \(v_{\rho ,1}, \ldots , v_{\rho ,m_\rho }\) in \({\mathbb {F}}^\frac{d_\rho {\hat{n}}}{\dim {\mathbb {F}}}\) such that

$$\begin{aligned} X_\rho =\sum _{1 \le t \le m_\rho } v_{\rho ,t} v_{\rho ,t}^*, \quad C^{-1}(X_\rho )=\sum _{1 \le t \le m_\rho } v_{\rho ,t} v_{\rho ,t}^*, \quad \text { or } \quad H^{-1}(X_\rho )=\sum _{1 \le t \le m_\rho } v_{\rho ,t} v_{\rho ,t}^*\nonumber \\ \end{aligned}$$
(34)

depending on the type of \(\rho \). Applying the same calculation as that in Proposition 4.6, one can find a \(d \times {\hat{n}}\) matrix \({\tilde{P}}\) such that

$$\begin{aligned} v_{\rho ,t}&=\sqrt{\frac{|\Gamma |}{d_{\rho }}}\displaystyle \sum _{1 \le l \le d_\rho } {\varvec{e}}_l \otimes {\tilde{P}}^\top Y^\top {\varvec{e}}_{(\rho ,t,l,1)} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (\text {if }{\mathbb {F}}={\mathbb {R}}), \\ C(v_{\rho ,t})&=\sqrt{\frac{|\Gamma |}{d_{\rho }}}C\left( \displaystyle \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \left( {\tilde{P}}^\top Y^\top ( {\varvec{e}}_{(\rho ,t,l,1)}+\mathrm{i} {\varvec{e}}_{(\rho ,t,l,2)})\right) \right) \qquad \qquad (\text {if }{\mathbb {F}}={\mathbb {C}}), \\ H(v_{\rho ,t})&=\sqrt{\frac{|\Gamma |}{d_{\rho }}}H\left( \displaystyle \sum _{1 \le l \le \frac{d_\rho }{4}} {\varvec{e}}_l \otimes \left( {\tilde{P}}^\top Y^\top \left( {\varvec{e}}_{(\rho ,t,l,1)} +\mathrm{i}{\varvec{e}}_{(\rho ,t,l,2)} -\mathrm{j} {\varvec{e}}_{(\rho ,t,l,3)} +\mathrm{k} {\varvec{e}}_{(\rho ,t,l,4)}\right) \right) \right) \qquad (\text {if }{\mathbb {F}}={\mathbb {H}}) \end{aligned}$$

for all \(\rho \in {\tilde{\Gamma }}\) and \(1\le t\le m_{\rho }\).

Let p be the d-dimensional configuration defined by \(P=\sum _{\gamma \in \Gamma } {\varvec{e}}_\gamma ^\top \otimes (\theta (\gamma ){\tilde{P}})\). Then p is compatible with \(\theta \). Also, by (34), the definition of \({\tilde{P}}\), and Lemma 5.5, we have \(\Psi (P^\top P)_\rho =X_\rho \). Since \(P^{\top }P=\Psi ^{-1}(X)\in {{{\mathcal {L}}}}^V\), we have \(P^{\top }P {\varvec{1}}_V=0\), and hence \(P{\varvec{1}}_V=0\), meaning that \(\sum _{i\in V}p_i=0\). Thus p satisfies \(p\in {{{\mathcal {C}}}}_{\theta }(V)\) and \(\Psi (P^{\top }P)=X\), as required. \(\square \)
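The final step of the proof produces a configuration from a Gram matrix. As a minimal sketch of that basic operation (ignoring the symmetry constraints handled above; names and tolerances are our own), one can recover a \(d\times n\) matrix P with \(P^\top P=X\) from any positive semidefinite X of rank at most d via an eigendecomposition.

```python
import numpy as np

def gram_factor(X, d):
    """Return P (d x n) with P.T @ P == X, for symmetric PSD X of rank <= d."""
    w, U = np.linalg.eigh(X)           # eigenvalues in ascending order
    w = np.clip(w[-d:], 0.0, None)     # keep the d largest, clamp noise
    return np.sqrt(w)[:, None] * U[:, -d:].T

rng = np.random.default_rng(1)
P0 = rng.standard_normal((2, 5))       # a planar 5-point configuration
X = P0.T @ P0
P = gram_factor(X, 2)
assert np.allclose(P.T @ P, X)         # same Gram matrix, so the recovered
                                       # points are congruent to the originals
```
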

Proof of Theorem 4.1 for the general case

We give the proof of Theorem 4.1 for the general case. Let \({\mathbb {Q}}_{\theta , \Gamma }\) be the finite extension field of \({\mathbb {Q}}\) generated by the entries of \(\theta (\gamma )\) for \(\gamma \in \Gamma \) and those of \(\rho (\gamma )\) for \(\rho \in {\tilde{\Gamma }}\) and \(\gamma \in \Gamma \). Let \(\Psi \) be the orthogonal transformation given in Proposition 5.2, and consider the block-diagonalization of the SDP problem. The resulting SDP problem (P\(^\Psi \)) is as given in Sect. 4.3. Proposition 2.1 implies that, if (G, p) is universally rigid, then (P\(^\Psi \)) has the unique solution \(\Psi (P^{\top }P)\) (cf. Proposition 4.5).

To apply Gortler-Thurston’s argument, we need to understand the facial structure of \({\mathcal {K}}_{+,\Gamma } = \Psi (({{{\mathcal {L}}}}^V)^\Gamma )\). By (28), \({\mathcal {K}}_{+,\Gamma }\) is the direct sum of positive semidefinite cones, positive semidefinite Laplacian cones, cones of the form \({\mathcal {C}}^k\cap {\mathcal {S}}_+^k\), and cones of the form \({\mathcal {H}}^k\cap {\mathcal {S}}_+^k\) for some integer k.

Through the map C, \({\mathcal {C}}^k\cap {\mathcal {S}}_+^k\) can be identified with the cone of positive semidefinite complex Hermitian matrices of size \(k\times k\). Similarly, through the map H, \({\mathcal {H}}^k\cap {\mathcal {S}}_+^k\) can be identified with the cone of positive semidefinite quaternionic Hermitian matrices of size \(k\times k\). Hence the following proposition can be proved in the same manner as Proposition 2.3.

Proposition 5.6

  1. (i)

    For \(A \in {\mathcal {C}}^k\cap {\mathcal {S}}_+^k\) with rank 2r, \(\dim F_{{\mathcal {C}}^k\cap {\mathcal {S}}_+^k}(A)=r^2\).

    For \(B \in {\mathcal {C}}^k \cap {\mathcal {S}}^k\), the hyperplane \(\{ X \in {\mathcal {C}}^k \cap {\mathcal {S}}^k : \langle X,B \rangle =0\}\) in \({\mathcal {C}}^k \cap {\mathcal {S}}^k\) exposes \(F_{{\mathcal {C}}^k\cap {\mathcal {S}}_+^k}(A)\) if and only if \(\mathrm{rank}\,A + \mathrm{rank}\,B = k\), \(\langle A,B \rangle =0\), and \(B \succeq 0\).

  2. (ii)

    For \(A \in {\mathcal {H}}^k\cap {\mathcal {S}}_+^k\) with rank 4r, \(\dim F_{{\mathcal {H}}^k\cap {\mathcal {S}}_+^k}(A)=2r^2-r\).

    For \(B \in {\mathcal {H}}^k \cap {\mathcal {S}}^k\), the hyperplane \(\{ X \in {\mathcal {H}}^k \cap {\mathcal {S}}^k : \langle X,B \rangle =0\}\) in \({\mathcal {H}}^k \cap {\mathcal {S}}^k\) exposes \(F_{{\mathcal {H}}^k\cap {\mathcal {S}}_+^k}(A)\) if and only if \(\mathrm{rank}\,A + \mathrm{rank}\,B = k\), \(\langle A,B \rangle =0\), and \(B \succeq 0\).
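The face dimensions in Proposition 5.6 have a simple counting interpretation (our own sanity check, not part of the paper): a rank-2r point of \({\mathcal {C}}^k\cap {\mathcal {S}}_+^k\) corresponds to a complex Hermitian PSD matrix of complex rank r, and its face is linearly isomorphic to the space of \(r\times r\) complex Hermitian matrices, of real dimension \(r^2\); in the quaternionic case the corresponding space of \(r\times r\) quaternionic Hermitian matrices has real dimension \(2r^2-r\).

```python
def herm_dim(r, dimF):
    """Real dimension of r x r Hermitian matrices over a field of real
    dimension dimF: r real diagonal entries plus dimF real parameters
    per off-diagonal pair."""
    return r + dimF * r * (r - 1) // 2

for r in range(1, 6):
    assert herm_dim(r, 2) == r**2             # complex case: r^2
    assert herm_dim(r, 4) == 2 * r**2 - r     # quaternionic case: 2r^2 - r
```
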

Now we give the proof of Theorem 4.1 for the general case.

Proof of the necessity of Theorem 4.1

Let (G, p) be a \(\theta \)-symmetric framework with a point group \(\theta :\Gamma \rightarrow O({\mathbb {R}}^d)\) which is generic modulo symmetry and universally rigid. Suppose also that p affinely spans \({\mathbb {R}}^d\). By a translation, we may assume \(p \in {{{\mathcal {C}}}}_\theta (V)\). We shall apply Proposition 2.9 to the ambient space \({\mathcal {K}}_\Gamma \) defined in (28) and the point \(\Psi (P^\top P)\). To do so, we need to prove the local genericity of \(\Psi (P^\top P)\).

Claim 5.7

For

$$\begin{aligned} k=\sum _{\rho \in {\tilde{\Gamma }}_{\mathbb {R}}} \left( {\begin{array}{c}m_\rho +1\\ 2\end{array}}\right) + \sum _{\rho \in {\tilde{\Gamma }}_{\mathbb {C}}} m_\rho ^2 + \sum _{\rho \in {\tilde{\Gamma }}_{\mathbb {H}}} \left( 2m_\rho ^2-m_\rho \right) , \end{aligned}$$

\(\Psi (P^\top P)\) is locally generic over \({\mathbb {Q}}_{\theta ,\Gamma }\) in \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\).

Proof

The facial structure of \(\text {ext}_k({\mathcal {K}}_{+,\Gamma })\) is described by Propositions 2.3, 2.4 and 5.6, and we also have Proposition 5.4. Hence the proof follows the same lines as that of Claim 4.7. \(\square \)

Define a linear subspace \({\mathcal {K}}_\Gamma (G)\) of \({{{\mathcal {K}}}}_\Gamma \) and a projection \(\pi : {{{\mathcal {K}}}}_\Gamma \rightarrow {{{\mathcal {K}}}}_\Gamma (G)\) as in the previous section. Then, by (the generalization of) Proposition 2.9, there exists a hyperplane \(H=\{X \in {\mathcal {K}}_\Gamma (G) : \langle X,L \rangle =0\}\) in \({\mathcal {K}}_\Gamma (G)\) defined by some \(L \in {\mathcal {K}}_\Gamma (G)\) such that \(\pi ^{-1}(H)=\{X \in {\mathcal {K}}_\Gamma : \langle X,L \rangle =0\}\) exposes \(F_{{\mathcal {K}}_{+,\Gamma }}(\Psi (P^\top P))\). By Propositions 2.3, 2.4 and 5.6, we have

$$\begin{aligned}&\mathrm{rank}\,L_\rho ={\left\{ \begin{array}{ll} {\hat{n}}-m_\mathrm{tri}-1 &{} (\rho =\mathrm{tri}) \\ d_\rho {\hat{n}} - \dim {\mathbb {F}} \cdot m_\rho &{} (\rho \in {\tilde{\Gamma }}_{\mathbb {F}} \setminus \{\mathrm{tri}\}), \end{array}\right. } \\&L_\rho \succeq 0,\quad \langle L_\rho , \Psi (P^\top P)_\rho \rangle =0. \end{aligned}$$

By (30) and (31), we have

$$\begin{aligned} \mathrm{rank}\,L&= {\hat{n}}-m_\mathrm{tri}-1 + \sum _{{\mathbb {F}} = {\mathbb {R}}, {\mathbb {C}}, {\mathbb {H}}} \sum _{\rho \in {\tilde{\Gamma }}_{\mathbb {F}} \setminus \{\mathrm{tri}\}} \frac{d_\rho }{\dim {\mathbb {F}}}(d_\rho {\hat{n}} - \dim {\mathbb {F}} \cdot m_\rho ) \\&={\hat{n}}|\Gamma | - d -1 = n-d-1. \end{aligned}$$
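This bookkeeping can be checked on a concrete group (our own verification; the multiplicities and \({\hat{n}}\) below are arbitrary): with \(|\Gamma |=\sum _\rho d_\rho ^2/\dim {\mathbb {F}}\) as in (30) and \(d=\sum _\rho m_\rho d_\rho \) as in (31), the displayed sum collapses to \({\hat{n}}|\Gamma |-d-1\). For \(\Gamma =C_4\), the real irreducibles are the trivial and sign representations (real type) and one degree-2 representation of complex type.

```python
# (name, d_rho, dim F) for the real irreducible representations of C_4
irreps = [("trivial", 1, 1), ("sign", 1, 1), ("rot", 2, 2)]
mult = {"trivial": 2, "sign": 1, "rot": 1}   # arbitrary multiplicities m_rho
n_hat = 5                                     # arbitrary orbit count n^

order = sum(d * d // f for _, d, f in irreps)      # (30): = 4 = |C_4|
d_dim = sum(mult[nm] * d for nm, d, _ in irreps)   # (31): = 5

rank_L = (n_hat - mult["trivial"] - 1) + sum(
    (d // f) * (d * n_hat - f * mult[nm])
    for nm, d, f in irreps if nm != "trivial")

assert order == 4
assert rank_L == n_hat * order - d_dim - 1   # = n - d - 1
```
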

As \(L\in {\mathcal {K}}_\Gamma (G)\), \(\Psi ^{-1}(L)\) is a weighted Laplacian on G satisfying

$$\begin{aligned} \Psi ^{-1}(L) \succeq 0,\quad \langle \Psi ^{-1}(L), P^\top P \rangle =0, \quad \mathrm{rank}\,\Psi ^{-1}(L)=n-d-1. \end{aligned}$$

Therefore \(\Psi ^{-1}(L)\) satisfies the properties in the statement. This completes the proof. \(\square \)

Notes

  1.

    Although the details are omitted here, one can explicitly describe the entries of \(Z_\Gamma \) in terms of those of \(\rho (\gamma )\) for \(\rho \in {\tilde{\Gamma }}\).

References

  1. Alfakih, A.Y.: On dimensional rigidity of bar-and-joint frameworks. Discrete Appl. Math. 155(10), 1244–1253 (2007)

  2. Alfakih, A.Y.: On bar frameworks, stress matrices and semidefinite programming. Math. Program. 129, 113–128 (2011)

  3. Alfakih, A.Y., Nguyen, V.-H.: On affine motions and universal rigidity of tensegrity frameworks. Linear Algebra Its Appl. 439(10), 3134–3147 (2013)

  4. Alfakih, A.Y., Taheri, N., Ye, Y.: On stress matrices of \((d + 1)\)-lateration frameworks in general position. Math. Program. 137, 1–17 (2013)

  5. Bachoc, C., Gijswijt, D., Schrijver, A., Vallentin, F.: Invariant semidefinite programs. In: Anjos, M.F., Lasserre, J.B. (eds.) Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 219–269. Springer, Berlin (2012)

  6. Bai, Y., de Klerk, E., Pasechnik, D., Sotirov, R.: Exploiting group symmetry in truss topology optimization. Optim. Eng. 10, 331–349 (2009)

  7. Borwein, J.M., Wolkowicz, H.: Facial reduction for a cone-convex programming problem. J. Aust. Math. Soc. 30, 369–380 (1981)

  8. Connelly, R.: Rigidity and energy. Invent. Math. 66(1), 11–33 (1982)

  9. Connelly, R.: Generic global rigidity. Discrete Comput. Geom. 33(4), 549–563 (2005)

  10. Connelly, R., Back, A.: Mathematics and tensegrity: group and representation theory make it possible to form a complete catalogue of strut-cable constructions with prescribed symmetries. Am. Sci. 86, 142–151 (1998)

  11. Connelly, R., Gortler, S.: Iterative universal rigidity. Discrete Comput. Geom. 53(4), 847–877 (2015)

  12. Connelly, R., Gortler, S., Theran, L.: Affine rigidity and conics at infinity. Int. Math. Res. Not. 2018(13), 4084–4102 (2017)

  13. Connelly, R., Gortler, S., Theran, L.: Generically globally rigid graphs have generic universally rigid frameworks. Combinatorica 40, 1–37 (2020)

  14. Connelly, R., Terrell, M.: Globally rigid symmetric tensegrities. Struct. Topol. 21, 59–78 (1995)

  15. Connelly, R., Whiteley, W.: Second-order rigidity and prestress stability for tensegrity frameworks. SIAM J. Discrete Math. 9, 453–491 (1995)

  16. de Carli Silva, M.K., Tunçel, L.: Strict complementarity in semidefinite optimization with elliptopes including the maxcut SDP. SIAM J. Optim. 29, 2650–2676 (2019)

  17. Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Optim. 25, 1160–1178 (2015)

  18. Gatermann, K., Parrilo, P.: Symmetry groups, semidefinite programs, and sums of squares. J. Pure Appl. Algebra 192(1–3), 95–128 (2004)

  19. Gortler, S., Healy, A., Thurston, D.: Characterizing generic global rigidity. Am. J. Math. 132(4), 897–939 (2010)

  20. Gortler, S., Thurston, D.: Characterizing the universal rigidity of generic frameworks. Discrete Comput. Geom. 51(4), 1017–1036 (2014)

  21. Kanno, Y., Ohsaki, M., Murota, K., Katoh, N.: Group symmetry in interior-point methods for semidefinite program. Optim. Eng. 2, 293–320 (2001)

  22. Malestein, J., Theran, L.: Generic rigidity with forced symmetry and sparse colored graphs. Rigidity and Symmetry, Fields Inst. Commun. 70, 227–252 (2014)

  23. Murota, K., Kanno, Y., Kojima, M., Kojima, S.: A numerical algorithm for block-diagonal decomposition of matrix *-algebras with application to semidefinite programming. Jpn. J. Ind. Appl. Math. 27(1), 125–160 (2010)

  24. Pataki, G.: The geometry of semidefinite programming. In: Aardal, K., Nemhauser, G., Weismantel, R. (eds.) Handbook of Semidefinite Programming, pp. 29–65. Springer, Berlin (2000)

  25. Roth, B., Whiteley, W.: Tensegrity frameworks. Trans. Am. Math. Soc. 265, 419–446 (1981)

  26. Schulze, B., Whiteley, W.: Rigidity of symmetric frameworks. In: O'Rourke, J., Tóth, C.D. (eds.) Handbook of Discrete and Computational Geometry, 3rd edn., pp. 1633–1659. Chapman and Hall/CRC, Boca Raton (2017)

  27. Serre, J.-P.: Linear Representations of Finite Groups. Springer, Berlin (1977)

  28. So, A.M., Ye, Y.: Theory of semidefinite programming for sensor network localization. Math. Program. 109, 367–384 (2007)

  29. Tanigawa, S.: Singularity degree of the positive semidefinite matrix completion problem. SIAM J. Optim. 27, 986–1009 (2017)

  30. Zhang, F.: Quaternions and matrices of quaternions. Linear Algebra Its Appl. 251, 21–57 (1997)

  31. Zhang, J.Y., Ohsaki, M.: Tensegrity Structures: Form, Stability, and Symmetry. Springer, Berlin (2015)


Acknowledgements

The authors would like to thank the referees for their valuable comments which helped to improve the manuscript.

This work was supported by JST ERATO Grant Number JPMJER1903 and JSPS KAKENHI Grant Number JP18K11155.

Open Access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Author information

Correspondence to Ryoshun Oba.


Appendix A: Proof of Lemma 5.5

In the proof of Lemma 5.5, we use the orthogonality relation of real irreducible representations (Proposition A.1). Recall that every \(\rho \in {\tilde{\Gamma }}_{\mathbb {R}}\) (resp., \(\rho \in {\tilde{\Gamma }}_{{\mathbb {C}}}, \rho \in {\tilde{\Gamma }}_{\mathbb {H}}\)) is an orthogonal representation with \(\rho (\gamma )\in {\mathbb {R}}^{d_{\rho }\times d_{\rho }}\) (resp., \({{{\mathcal {C}}}}^{d_\rho }\), \({{{\mathcal {H}}}}^{d_\rho }\)) for \(\gamma \in \Gamma \). Fix the standard basis \({{{\mathcal {B}}}}_{\rho }\) of \({\mathbb {R}}^{d_{\rho }\times d_{\rho }}\), \({\mathcal {C}}^{d_{\rho }}\), \({\mathcal {H}}^{d_{\rho }}\) by

$$\begin{aligned}&\{E_{lm}:1\le l,m \le d_{\rho }\}, \\&\quad \{E_{lm} \otimes C(1), E_{lm} \otimes C(\mathrm{i}) : 1\le l,m\le d_{\rho }/2 \}, \\&\quad \{E_{lm} \otimes H(1), E_{lm} \otimes H(\mathrm{i}), E_{lm} \otimes H(\mathrm{j}), E_{lm} \otimes H(\mathrm{k}) : 1\le l,m\le d_{\rho }/4\}, \end{aligned}$$

respectively. For each \(B \in {{{\mathcal {B}}}}_\rho \), the coordinate vector \(\rho _B \in {\mathbb {R}}^{\Gamma }\) with respect to B is defined by

$$\begin{aligned} \rho _B(\gamma ) = \frac{1}{\dim {\mathbb {F}}} \langle \rho (\gamma ), B \rangle \qquad (\gamma \in \Gamma ). \end{aligned}$$

Then, we have the following.

Proposition A.1

\(\left\{ \sqrt{ \frac{d_\rho }{|\Gamma |} } \rho _B : \rho \in {\tilde{\Gamma }}, B \in {{{\mathcal {B}}}}_\rho \right\} \) forms an orthonormal basis of \({\mathbb {R}}^{\Gamma }\).

Proof

By definition, \(|{{{\mathcal {B}}}}_\rho |=\frac{{d_\rho }^2}{\dim {\mathbb {F}}}\) for each \(\rho \in {\tilde{\Gamma }}_{\mathbb {F}}\). Hence, by (30), \(\sum _{\rho \in {\tilde{\Gamma }}}|{{{\mathcal {B}}}}_{\rho }|=|\Gamma |\). Therefore, it suffices to prove that \(\left\{ \sqrt{ \frac{d_\rho }{|\Gamma |} } \rho _B : \rho \in {\tilde{\Gamma }}, B \in {{{\mathcal {B}}}}_\rho \right\} \) is an orthonormal set. By Schur orthogonality (Proposition 4.9) over \({\mathbb {C}}\), for two inequivalent real irreducible representations \(\rho , \rho ' \in {\tilde{\Gamma }}\) and any \(B \in {{{\mathcal {B}}}}_\rho , B' \in {{{\mathcal {B}}}}_{\rho '}\), \(\rho _B\) and \(\rho _{B'}\) are orthogonal to each other. Hence it suffices to prove that, for each \(\rho \), \(\left\{ \sqrt{\frac{d_\rho }{|\Gamma |}} \rho _B:B\in {{{\mathcal {B}}}}_\rho \right\} \) forms an orthonormal set.

When \(\rho \) is of real type (i.e., \(\rho \) is irreducible over \({\mathbb {C}}\)), this again follows from Schur orthogonality.

Suppose that \(\rho \) is of complex type. By Proposition 5.1, \(\rho \) is (unitarily) equivalent to \(\pi \oplus {\overline{\pi }}\) for some complex irreducible representation \(\pi \) of degree \(\frac{d_{\rho }}{2}\), and since \(\rho (\gamma )\in {{{\mathcal {C}}}}^{d_{\rho }}\) we may suppose \(\rho (\gamma )=\mathfrak {R}\pi (\gamma ) \otimes C(1) + \mathfrak {I}\pi (\gamma ) \otimes C(\mathrm{i})\) for all \(\gamma \in \Gamma \). Then

$$\begin{aligned} \rho _{E_{k,l}\otimes C(1)}=\mathfrak {R}\pi _{k,l}\text { and } \rho _{E_{k,l}\otimes C(\mathrm{i})}=\mathfrak {I}\pi _{k,l}, \end{aligned}$$
(35)

where \(\pi _{k,l}\) denotes the vector in \({\mathbb {C}}^{\Gamma }\) such that \(\pi _{k,l}(\gamma )\) is the (kl)-entry of \(\pi (\gamma )\).

For a complex number \(c\in {\mathbb {C}}\), we have \(\begin{pmatrix} c&{\overline{c}}\end{pmatrix}\begin{pmatrix} \frac{1}{2} &{} -\frac{\mathrm{i}}{2} \\ \frac{1}{2} &{} \frac{\mathrm{i}}{2}\end{pmatrix} =\begin{pmatrix} \mathfrak {R}c&\mathfrak {I}c\end{pmatrix}\). Note that \(\sqrt{2}\begin{pmatrix} \frac{1}{2} &{} -\frac{\mathrm{i}}{2} \\ \frac{1}{2} &{} \frac{\mathrm{i}}{2}\end{pmatrix}\) is unitary, so this transformation preserves orthogonality (and scales all norms by the common factor \(\frac{1}{\sqrt{2}}\)). Applying it entry-wise to each pair \((\pi _{k,l},\overline{\pi _{k,l}})\), the set of vectors \(\{\pi _{k,l}: 1\le k,l\le \frac{d_{\rho }}{2}\}\cup \{\overline{\pi _{k,l}}: 1\le k,l\le \frac{d_{\rho }}{2}\}\) is mapped to \(\{\mathfrak {R}\pi _{k,l}: 1\le k,l\le \frac{d_{\rho }}{2}\}\cup \{\mathfrak {I}\pi _{k,l}: 1\le k,l\le \frac{d_{\rho }}{2}\}\). Hence the orthogonality of the former set (which follows from Schur orthogonality) implies the orthogonality of the latter. By (35), this in turn implies the orthogonality of \(\left\{ \sqrt{\frac{d_\rho }{|\Gamma |}} \rho _B:B\in {{{\mathcal {B}}}}_\rho \right\} \).

Finally, suppose that \(\rho \) is of quaternionic type. By Proposition 5.1, \(\rho \) is (complex-)equivalent to the direct sum of two copies of a self-conjugate irreducible representation \(\pi \). As \(\rho (\gamma )\in {{{\mathcal {H}}}}^{d_{\rho }}\), we can write \(\rho (\gamma )=a(\gamma )\otimes H(1) + b(\gamma ) \otimes H(\mathrm{i})+c(\gamma ) \otimes H(\mathrm{j}) + d(\gamma ) \otimes H(\mathrm{k})\) for some \(a,b,c,d:\Gamma \rightarrow M_{\frac{d_\rho }{4}}({\mathbb {R}})\), and we may take \(\pi \) such that

$$\begin{aligned} \pi (\gamma )&= a(\gamma ) \otimes \begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix} + b(\gamma ) \otimes \begin{pmatrix} \mathrm{i}&{} 0 \\ 0 &{} -\mathrm{i}\end{pmatrix} + c(\gamma ) \otimes \begin{pmatrix} 0 &{} 1 \\ -1 &{} 0 \end{pmatrix} + d(\gamma ) \otimes \begin{pmatrix} 0 &{} \mathrm{i}\\ \mathrm{i}&{} 0 \end{pmatrix}. \end{aligned}$$

Hence, by using the Schur orthogonality of \(\pi \), one can check the statement in the same manner as in the complex-type case. \(\square \)
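As a concrete illustration (our own, not part of the proof), Proposition A.1 can be checked numerically for \(\Gamma =C_3\): its real irreducibles are the trivial representation and one degree-2 representation of complex type (rotation by \(120^\circ \)), and the three scaled coordinate vectors form an orthonormal basis of \({\mathbb {R}}^{\Gamma }\cong {\mathbb {R}}^3\).

```python
import numpy as np

thetas = [2 * np.pi * k / 3 for k in range(3)]
rho = [np.array([[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]]) for t in thetas]

C1 = np.eye(2)                              # C(1)
Ci = np.array([[0.0, -1.0], [1.0, 0.0]])    # C(i)

# coordinate vectors rho_B(gamma) = (1/dim F) <rho(gamma), B>; here dim F = 2
vecs = [
    np.sqrt(1 / 3) * np.ones(3),                                       # trivial
    np.sqrt(2 / 3) * np.array([np.trace(C1.T @ R) / 2 for R in rho]),  # B = C(1)
    np.sqrt(2 / 3) * np.array([np.trace(Ci.T @ R) / 2 for R in rho]),  # B = C(i)
]
G = np.array([[u @ v for v in vecs] for u in vecs])
assert np.allclose(G, np.eye(3))   # an orthonormal basis of R^Gamma
```
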

Proof of Lemma 5.5

We apply the same calculation as that in the proof of Lemma 4.10. Then, for each \(\rho \in {\tilde{\Gamma }}\), we have

$$\begin{aligned} \Psi \left( P^\top P\right) _\rho = \left( I_{d_\rho }\otimes Y{\tilde{P}}\right) ^\top A_\rho \left( I_{d_\rho }\otimes Y{\tilde{P}}\right) \end{aligned}$$
(36)

with

$$\begin{aligned} A_\rho =\sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \bigoplus _{\rho ' \in {\tilde{\Gamma }}} \bigoplus _{1 \le t \le m_{\rho '}} \rho '(\gamma ). \end{aligned}$$
(37)

If \(\rho \) is of real type, the proof is identical to that of Lemma 4.10.

Suppose that \(\rho \) is of complex type. By Proposition A.1, for a real irreducible representation \(\rho ' \in {\tilde{\Gamma }}\), we have

$$\begin{aligned}&\sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \rho '(\gamma ) \\&\quad = {\left\{ \begin{array}{ll} \displaystyle \frac{|\Gamma |}{d_\rho } \sum _{1 \le l,m \le \frac{d_\rho }{2}} \left( E_{lm} \otimes C(1) \otimes E_{lm} \otimes C(1)+ E_{lm}\otimes C(\mathrm{i})\otimes E_{lm} \otimes C(\mathrm{i})\right) &{} (\text {if }\rho =\rho '), \\ 0 &{} (\text {if }\rho \ne \rho '). \end{array}\right. } \end{aligned}$$

To see how the sum in the above relation for \(\rho =\rho '\) can be simplified, let \(\{{\varvec{e}}_{(l,a)}: 1\le l \le \frac{d_{\rho }}{2}, 1\le a\le 2\}\) be the standard basis of \({\mathbb {R}}^{d_{\rho }}\). Then

$$\begin{aligned}&\sum _{1 \le l,m \le \frac{d_\rho }{2}} \left( E_{lm} \otimes C(1) \otimes E_{lm} \otimes C(1)+ E_{lm}\otimes C(\mathrm{i})\otimes E_{lm} \otimes C(\mathrm{i})\right) \\&\quad = \sum _{1 \le l,m \le \frac{d_\rho }{2}} E_{lm} \otimes \begin{pmatrix} E_{lm} \otimes C(1) &{} -E_{lm} \otimes C(\mathrm{i}) \\ E_{lm} \otimes C(\mathrm{i}) &{} E_{lm} \otimes C(1) \end{pmatrix} \\&\quad =\sum _{1 \le l,m \le \frac{d_\rho }{2}} {\varvec{e}}_l{\varvec{e}}_m^{\top }\otimes \begin{pmatrix} {\varvec{e}}_l{\varvec{e}}_m^{\top } \otimes C(1)C(1)^{\top } &{} {\varvec{e}}_l{\varvec{e}}_m^{\top } \otimes C(1)C(\mathrm{i})^{\top } \\ {\varvec{e}}_l{\varvec{e}}_m^{\top } \otimes C(\mathrm{i})C(1)^{\top } &{} {\varvec{e}}_l{\varvec{e}}_m^{\top } \otimes C(\mathrm{i})C(\mathrm{i})^{\top } \end{pmatrix} \\&\quad = \left( \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_l \otimes C(1) \\ {\varvec{e}}_l \otimes C(\mathrm{i}) \end{pmatrix} \right) \left( \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_l \otimes C(1) \\ {\varvec{e}}_l \otimes C(\mathrm{i}) \end{pmatrix} \right) ^\top \\&\quad =\left( \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_{(l,1)} &{} {\varvec{e}}_{(l,2)}\\ {\varvec{e}}_{(l,2)} &{} -{\varvec{e}}_{(l,1)} \end{pmatrix} \right) \left( \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_{(l,1)} &{} {\varvec{e}}_{(l,2)}\\ {\varvec{e}}_{(l,2)} &{} -{\varvec{e}}_{(l,1)} \end{pmatrix} \right) ^\top \\&\quad =\left( \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_{(l,1)} &{} -{\varvec{e}}_{(l,2)}\\ {\varvec{e}}_{(l,2)} &{} {\varvec{e}}_{(l,1)} \end{pmatrix} \right) \left( \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_{(l,1)} &{} -{\varvec{e}}_{(l,2)}\\ {\varvec{e}}_{(l,2)} &{} {\varvec{e}}_{(l,1)} \end{pmatrix} \right) ^\top . \end{aligned}$$

Combining it with (37), we have

$$\begin{aligned} A_\rho = \frac{|\Gamma |}{d_\rho } \sum _{1 \le t \le m_\rho } U_{\rho ,t}U_{\rho ,t}^\top \end{aligned}$$
(38)

with

$$\begin{aligned} U_{\rho ,t}= \sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_{(\rho ,t,l,1)} &{} -{\varvec{e}}_{(\rho ,t,l,2)}\\ {\varvec{e}}_{(\rho ,t,l,2)} &{} {\varvec{e}}_{(\rho ,t,l,1)} \end{pmatrix}. \end{aligned}$$

Hence by (36) and (38), we have \(\Psi (P^{\top }P)_{\rho }=\sum _{1\le t\le m_{\rho }}W_{\rho ,t}W_{\rho ,t}^{\top }\) with

$$\begin{aligned} W_{\rho ,t}&=\sqrt{\frac{|\Gamma |}{d_{\rho }}}\sum _{1 \le l \le \frac{d_\rho }{2}} {\varvec{e}}_l \otimes \begin{pmatrix} {\tilde{P}}^{\top }Y^{\top }{\varvec{e}}_{(\rho ,t,l,1)} &{} -{\tilde{P}}^{\top }Y^{\top }{\varvec{e}}_{(\rho ,t,l,2)}\\ {\tilde{P}}^{\top }Y^{\top }{\varvec{e}}_{(\rho ,t,l,2)} &{} {\tilde{P}}^{\top }Y^{\top }{\varvec{e}}_{(\rho ,t,l,1)} \end{pmatrix} \\&=\sqrt{\frac{|\Gamma |}{d_{\rho }}}\sum _{1 \le l \le \frac{d_\rho }{2}} C\left( {\varvec{e}}_l \otimes \left( {\tilde{P}}^\top Y^\top ( {\varvec{e}}_{(\rho ,t,l,1)}+\mathrm{i} {\varvec{e}}_{(\rho ,t,l,2)})\right) \right) \end{aligned}$$

as required.

Finally suppose that \(\rho \) is of quaternionic type. For a real irreducible representation \(\rho ' \in {\tilde{\Gamma }}\), by Proposition A.1, \(\sum _{\gamma \in \Gamma } \rho (\gamma ) \otimes \rho '(\gamma )\) is 0 if \(\rho \) and \(\rho '\) are not equivalent, and if \(\rho = \rho '\), this is equal to

$$\begin{aligned} \frac{|\Gamma |}{d_\rho } \sum _{1 \le l,m \le \frac{d_\rho }{4}} E_{lm} \otimes \begin{pmatrix} E_{lm} \otimes H(1) &{} -E_{lm} \otimes H(\mathrm{i}) &{} E_{lm} \otimes H(\mathrm{j}) &{} -E_{lm} \otimes H(\mathrm{k}) \\ E_{lm} \otimes H(\mathrm{i}) &{} E_{lm} \otimes H(1) &{} E_{lm} \otimes H(\mathrm{k}) &{} E_{lm} \otimes H(\mathrm{j}) \\ -E_{lm} \otimes H(\mathrm{j}) &{} -E_{lm} \otimes H(\mathrm{k}) &{} E_{lm} \otimes H(1) &{} E_{lm} \otimes H(\mathrm{i}) \\ E_{lm} \otimes H(\mathrm{k}) &{} -E_{lm} \otimes H(\mathrm{j}) &{}-E_{lm} \otimes H(\mathrm{i}) &{} E_{lm} \otimes H(1) \end{pmatrix}. \end{aligned}$$

For \(\rho =\rho '\), this value can be written as \(\frac{|\Gamma |}{d_\rho } B B^\top \) with

$$\begin{aligned} B = \sum _{1 \le l \le \frac{d_\rho }{4}} {\varvec{e}}_l \otimes \begin{pmatrix} {\varvec{e}}_l\otimes H(1) \\ {\varvec{e}}_l\otimes H(\mathrm{i}) \\ -{\varvec{e}}_l\otimes H(\mathrm{j}) \\ {\varvec{e}}_l\otimes H(\mathrm{k}) \end{pmatrix}. \end{aligned}$$

Applying the same calculation as that of the complex case, one can derive the desired form. \(\square \)


About this article

Oba, R., Tanigawa, S.: Characterizing the universal rigidity of generic tensegrities. Math. Program. (2021). https://doi.org/10.1007/s10107-021-01730-2

Keywords

  • Tensegrity
  • Graph rigidity
  • Universal rigidity
  • Semidefinite programming
  • Strict complementarity
  • Super stability

Mathematics Subject Classification

  • 52C25 Rigidity and flexibility of structures (aspects of discrete geometry)
  • 90C22 Semidefinite programming