Spectrahedral cones generated by rank 1 matrices

Abstract

Let \(\mathcal{S}_+^n \subset \mathcal{S}^n\) be the cone of positive semi-definite matrices as a subset of the vector space of real symmetric \(n \times n\) matrices. The intersection of \(\mathcal{S}_+^n\) with a linear subspace of \(\mathcal{S}^n\) is called a spectrahedral cone. We consider spectrahedral cones K such that every element of K can be represented as a sum of rank 1 matrices in K. We shall call such spectrahedral cones rank one generated (ROG). We show that ROG cones which are linearly isomorphic as convex cones are also isomorphic as linear sections of the positive semi-definite matrix cone, which is not the case for general spectrahedral cones. We give many examples of ROG cones and show how to construct new ROG cones from given ones by different procedures. We provide classifications of some subclasses of ROG cones; in particular, we classify all ROG cones for matrix sizes not exceeding 4. Further we prove some results on the structure of ROG cones. We also briefly consider the case of complex or quaternionic matrices. ROG cones are closely related to the exactness of semi-definite relaxations of quadratically constrained quadratic optimization problems and to relaxations approximating the cone of nonnegative functions in squared functional systems.

Notes

  1. The reason for defining the operator \(\Lambda \) by virtue of its adjoint \(\Lambda ^*\) is to stay in line with the notation of [15]. This definition explicitly uses a basis of the space F. In a coordinate-free definition, the source space of \(\Lambda ^*\) should be the space \({{\mathrm{Sym}}}^2(F)\) of contravariant symmetric 2-tensors over F, and the operator \(\Lambda ^*\) itself should be defined by linear extension of the map \(f \otimes f \mapsto f^2\), \(f \in F\).

  2. The extension from ROG cones to the subclass considered in Lemma 3.6 is due to Gregory Blekherman.

  3. We propose to reserve the term irreducible for ROG cones \(K \subset \mathcal{S}_+^n\) such that the real projective variety defined by the set \(\{ x \in \mathbb R^n \,|\, xx^T \in K \}\) is irreducible.

References

  1. Agler, J., Helton, J.W., McCullough, S., Rodman, L.: Positive semidefinite matrices with a given sparsity pattern. Linear Algebra Appl. 107, 101–149 (1988)

  2. Brenner, J.L.: Matrices of quaternions. Pac. J. Math. 1, 329–335 (1951)

  3. Brickman, L.: On the field of values of a matrix. Proc. Am. Math. Soc. 12, 61–66 (1961)

  4. Cohen, N., Johnson, C.R., Rodman, L., Woerdeman, H.J.: Ranks of completions of partial matrices. Oper. Theory Adv. Appl. 40, 165–185 (1989)

  5. Dines, L.L.: On linear combinations of quadratic forms. Bull. Am. Math. Soc. 49, 388–393 (1943)

  6. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco (1979)

  7. Goemans, M.X., Williamson, D.P.: Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. Assoc. Comput. Mach. 42(6), 1115–1145 (1995)

  8. Güler, O., Tunçel, L.: Characterization of the barrier parameter of homogeneous convex cones. Math. Program. 81(1), 55–76 (1998)

  9. Gwynne, E., Libine, M.: On a quaternionic analogue of the cross-ratio. Adv. Appl. Clifford Algebras 22(4), 1041–1053 (2012)

  10. Hadwin, D., Harrison, K.J., Ward, J.A.: Rank-one completions of partial matrices and completely rank-nonincreasing linear functionals. Proc. Am. Math. Soc. 134(8), 2169–2178 (2006)

  11. Helton, J.W., Vinnikov, V.: Linear matrix inequality representation of sets. Commun. Pure Appl. Math. 60(5), 654–674 (2007)

  12. Hilbert, D.: Über die Darstellung definiter Formen als Summe von Formenquadraten. Math. Ann. 32, 342–350 (1888)

  13. Laurent, M.: On the sparsity order of a graph and its deficiency in chordality. Combinatorica 21(4), 543–570 (2001)

  14. Luo, Z.-Q., Ma, W.-K., So, A.M.-C., Ye, Y., Zhang, S.: Semidefinite relaxation of quadratic optimization problems. IEEE Signal Proc. Mag. 27(3), 20–34 (2010)

  15. Nesterov, Y.: Squared functional systems and optimization problems. In: Frenk, H., Roos, K., Terlaky, T., Zhang, S. (eds.) High Performance Optimization, Chapter 17, pp. 405–440. Kluwer Academic Press, Dordrecht (2000)

  16. Paulsen, V.I., Power, S.C., Smith, R.R.: Schur products and matrix completions. J. Funct. Anal. 85, 151–178 (1987)

  17. Ramana, M., Goldman, A.J.: Some geometric results in semidefinite programming. J. Global Optim. 7, 33–50 (1995)

  18. Rosenblum, M., Rovnyak, J.: Hardy Classes and Operator Theory. Oxford Mathematical Monographs. Oxford University Press, Oxford (1985)

  19. Tismenetsky, M.: Matrix generalizations of a moment problem theorem I. The Hermitian case. SIAM J. Matrix Anal. Appl. 14, 92–112 (1993)

  20. Yakubovitch, V.A.: Factorization of symmetric matrix polynomials. Dokl. Akad. Nauk SSSR 194(3), 1261–1264 (1970)

  21. Ycart, B.: Extrémales du cône des matrices de type non négatif, à coefficients positifs ou nuls. Linear Algebra Appl. 48, 317–330 (1982)

Author information

Correspondence to Roland Hildebrand.

Additional information

The author gratefully acknowledges support from the project GéoLMI of the French National Research Agency (ANR).

Appendices

Appendix 1: Plücker embeddings of real Grassmannians

The purpose of this section is to provide Lemma 10.5, which is needed for the proof of Theorem 3.18. It turns out that Lemma 10.5 is essentially equivalent to a property of the Plücker embedding of real Grassmannians, which is stated below as Lemma 10.4. However, we start with results on the rank 1 completion of partially specified matrices, which will be needed to prove Lemma 10.4.

Definition 10.1

A real partially specified \(n \times m\) matrix is defined by an index subset \(\mathcal{P} \subset \{1,\ldots ,n\} \times \{1,\ldots ,m\}\), called a pattern, together with a collection of real numbers \((A_{ij})_{(i,j) \in \mathcal{P}}\). A completion of a partially specified matrix \((\mathcal{P},(A_{ij})_{(i,j) \in \mathcal{P}})\) is a real \(n \times m\) matrix C such that \(C_{ij} = A_{ij}\) for all \((i,j) \in \mathcal{P}\).

We shall be concerned with the question of when a partially specified matrix possesses a completion of rank 1. This problem has been solved in [4], see also [10]. In order to formulate the result, we need to define a weighted bipartite graph G associated to the partially specified matrix. The two groups of vertices will be the row indices \(1,\ldots ,n\) and the column indices \(1,\ldots ,m\). The edges will be the elements of \(\mathcal{P}\), with the weight of the edge (i, j) equal to \(A_{ij}\).

Lemma 10.2

[10, Theorem 5] A partially specified matrix \((\mathcal{P},(A_{ij})_{(i,j) \in \mathcal{P}})\) has a rank 1 completion if and only if the following conditions are satisfied. If for some \((i,j) \in \mathcal{P}\) we have \(A_{ij} = 0\), then either \(A_{ij'} = 0\) for all \((i,j') \in \mathcal{P}\), or \(A_{i'j} = 0\) for all \((i',j) \in \mathcal{P}\). Further, for every cycle \(i_1\)-\(j_1\)-\(i_2\)-\(\cdots \)-\(i_k\)-\(j_k\)-\(i_1\), \(k \ge 2\), of the bipartite graph G corresponding to the partially specified matrix, where \(i_1\) in the representation of the cycle is a row index, we have \(\prod _{l=1}^k A_{i_lj_l} = A_{i_kj_1} \cdot \prod _{l=1}^{k-1} A_{i_{l+1}j_l}\).

Note that the relation in the second condition of the lemma depends only on the cycle itself, but not on its starting point or on the direction in which the edges are traversed. Since the products in the lemma are multiplicative under the concatenation of paths [10, p. 2171], we may also restrict the condition to prime cycles (i.e., chordless cycles where a given vertex appears at most once).
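
To make the completion procedure behind Lemma 10.2 concrete, the following Python sketch (ours, not taken from [4] or [10]) propagates the factors of a rank 1 completion \(C = ef^T\) along a spanning forest of the bipartite graph G and fails exactly when a cycle condition is violated; the zero condition of the lemma is not handled here.

```python
import numpy as np

def rank1_complete(n, m, entries):
    """Try to complete a partially specified n x m matrix to rank 1.

    `entries` maps (i, j) -> A_ij, all A_ij nonzero. Factors with
    A_ij = e_i * f_j are propagated along a spanning forest of the
    bipartite graph G; the remaining, cycle-closing edges then act as
    consistency checks equivalent to the cycle condition of Lemma 10.2."""
    e, f = [None] * n, [None] * m
    adj = {}
    for (i, j), a in entries.items():
        adj.setdefault(('r', i), []).append((('c', j), a))
        adj.setdefault(('c', j), []).append((('r', i), a))
    for start in adj:
        if (e if start[0] == 'r' else f)[start[1]] is not None:
            continue
        (e if start[0] == 'r' else f)[start[1]] = 1.0  # normalize component
        stack = [start]
        while stack:
            node = stack.pop()
            node_val = (e if node[0] == 'r' else f)[node[1]]
            for (kind, idx), a in adj[node]:
                vec = e if kind == 'r' else f
                val = a / node_val               # from A_ij = e_i * f_j
                if vec[idx] is None:
                    vec[idx] = val
                    stack.append((kind, idx))
                elif not np.isclose(vec[idx], val):
                    return None                  # a cycle condition fails
    e = np.array([1.0 if v is None else v for v in e])  # unconstrained rows
    f = np.array([1.0 if v is None else v for v in f])  # and columns
    return np.outer(e, f)

# the single 4-cycle on rows 1, 2 and columns 1, 2 satisfies
# A_11 * A_22 = A_21 * A_12, so a rank 1 completion exists
A = {(0, 0): 2.0, (0, 1): 6.0, (1, 0): 1.0, (1, 1): 3.0, (2, 1): 5.0}
print(rank1_complete(3, 2, A))
```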

Corollary 10.3

Let \(A = (\mathcal{P},(A_{ij})_{(i,j) \in \mathcal{P}})\) be a partially specified matrix such that \(A_{ij} = \pm 1\) for all \((i,j) \in \mathcal{P}\), and G the corresponding bipartite graph. Assume further that for every prime cycle \(i_1\)-\(j_1\)-\(\cdots \)-\(j_k\)-\(i_1\) of G with \(k \ge 2\), where the representation of the cycle begins with a row index, we have \(\prod _{l=1}^k A_{i_lj_l} = A_{i_kj_1} \cdot \prod _{l=1}^{k-1} A_{i_{l+1}j_l}\). Then there exists a rank 1 completion \(C = ef^T\) of A such that \(e \in \{-1,+1\}^n\), \(f \in \{-1,+1\}^m\).

Proof

By Lemma 10.2 there exists a rank 1 completion \(\tilde{C} = \tilde{e}\tilde{f}^T\) of A, where \(\tilde{e} \in \mathbb R^n\), \(\tilde{f} \in \mathbb R^m\). Suppose there exists an index i such that \(\tilde{e}_i = 0\). Then all elements of the i-th row of \(\tilde{C}\) vanish, and all elements of this row are unspecified in A. We may then set \(\tilde{e}_i = 1\) and \(\tilde{e}\tilde{f}^T\) would still be a completion of A. Hence assume without loss of generality that all elements of \(\tilde{e}\) are nonzero. In a similar manner, we may assume that the elements of \(\tilde{f}\) are nonzero.

We then define the vectors \(e \in \mathbb R^n\), \(f \in \mathbb R^m\) element-wise by the signs of the elements of \(\tilde{e},\tilde{f}\), respectively. For every \((i,j) \in \mathcal{P}\) we then have \(e_if_j = \frac{\tilde{e}_i\tilde{f}_j}{|\tilde{e}_i\tilde{f}_j|} = \frac{A_{ij}}{|A_{ij}|} = A_{ij}\), because \(A_{ij} = \pm 1\). It follows that \(C = ef^T\) is also a completion of A. \(\square \)

We now come to the Grassmannian \(Gr(n,\mathbb R^m)\), i.e., the space of linear n-planes in \(\mathbb R^m\). Fix a basis in \(\mathbb R^m\). Then an n-plane \(\Lambda \) can be represented by an n-tuple of linearly independent vectors in \(\mathbb R^m\), namely those spanning \(\Lambda \). Let us treat these vectors as row vectors and stack them into an \(n \times m\) matrix M. The matrix M is determined only up to left multiplication by a nonsingular \(n \times n\) matrix, reflecting the ambiguity in the choice of vectors spanning \(\Lambda \). The Plücker coordinate \(\Delta _{i_1\ldots i_n}\) of \(\Lambda \), where \(1 \le i_1 < \ldots < i_n \le m\), is defined as the determinant of the \(n \times n\) submatrix formed of the columns \(i_1,\ldots ,i_n\) of M. The vector \(\Delta \) of all Plücker coordinates is determined by the n-plane \(\Lambda \) up to multiplication by a nonzero constant and corresponds to a point in the projectivization \(\mathbb P(\wedge ^n \mathbb R^m)\) of the n-th exterior power of \(\mathbb R^m\). The map \(\Lambda \mapsto \Delta \) from \(Gr(n,\mathbb R^m)\) to \(\mathbb P(\wedge ^n \mathbb R^m)\) is called the Plücker embedding.
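
As a small illustration of this definition (our own sketch, not part of the paper), the following fragment computes the Plücker coordinate vector from a spanning matrix M and confirms that another choice of spanning vectors, i.e., left multiplication of M by a nonsingular T, rescales all coordinates by the common factor \(\det T\).

```python
import numpy as np
from itertools import combinations

def pluecker(M):
    """Pluecker coordinates of the n-plane spanned by the rows of the
    n x m matrix M: one n x n minor per tuple i_1 < ... < i_n."""
    n, m = M.shape
    return np.array([np.linalg.det(M[:, list(c)])
                     for c in combinations(range(m), n)])

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 4))   # a 2-plane in R^4
T = rng.standard_normal((2, 2))   # change of spanning vectors
print(np.allclose(pluecker(T @ M), np.linalg.det(T) * pluecker(M)))  # True
```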

Lemma 10.4

Let \(\Lambda ,\Lambda ' \subset \mathbb R^m\) be two n-planes with Plücker coordinate vectors \(\Delta ,\Delta '\), respectively. Suppose there exists a positive constant c such that \(|\Delta _{i_1\ldots i_n}| = c|\Delta '_{i_1\ldots i_n}|\) for all n-tuples \((i_1,\ldots ,i_n)\). Then there exists a linear automorphism of \(\mathbb R^m\), given by a diagonal coefficient matrix \(\Sigma = {{\mathrm{diag}}}(\sigma _1,\ldots ,\sigma _m)\), where \(\sigma _i \in \{-1,+1\}\) for all \(i = 1,\ldots ,m\), which takes the n-plane \(\Lambda \) to \(\Lambda '\).

Proof

Assume the conditions of the lemma. Without loss of generality let \(\Delta _{1\ldots n} \not = 0\); then also \(\Delta '_{1\ldots n} \not = 0\). Otherwise we may permute the basis vectors of \(\mathbb R^m\) to obtain these inequalities. Then we may choose the \(n \times m\) matrix M representing \(\Lambda \) such that the first n columns of M form the identity matrix. Make a similar choice for the \(n \times m\) matrix \(M'\) representing \(\Lambda '\). Then we have \(\Delta _{1\ldots n} = \Delta '_{1\ldots n} = 1\) and hence \(c = 1\) for this choice of \(M,M'\). If \(m = n\), then we may take \(\Sigma \) as the identity matrix. Let \(m > n\).

Let k, l be indices such that \(1 \le k \le n\), \(n < l \le m\). The determinant \(\Delta _{1,\ldots ,k-1,k+1,\ldots ,n,l}\) is then given by \((-1)^{n-k}M_{kl}\). Likewise, \(\Delta '_{1,\ldots ,k-1,k+1,\ldots ,n,l} = (-1)^{n-k}M'_{kl}\), and hence \(|M_{kl}| = |M'_{kl}|\) by the assumption on \(\Delta ,\Delta '\). Since the first n columns of \(M,M'\) coincide with the identity matrix, we get \(|M_{kl}| = |M'_{kl}|\) for all \(k = 1,\ldots ,n\), \(l = 1,\ldots ,m\).

Let now \(\mathcal{P}\) be the set of index pairs (kl) such that \(M_{kl} \not = 0\), and set \(A_{kl} = \frac{M'_{kl}}{M_{kl}} \in \{-1,+1\}\) for \((k,l) \in \mathcal{P}\). Then for every completion C of the partially specified matrix \(A = (\mathcal{P},(A_{kl})_{(k,l) \in \mathcal{P}})\) we have \(M' = M \cdot C\), where \(\cdot \) denotes the Hadamard matrix product.

We shall now show that the partially specified matrix A satisfies the condition of Corollary 10.3. Let \(i_1\)-\(j_1\)-\(\cdots \)-\(j_k\)-\(i_1\) be a prime cycle of the bipartite graph G corresponding to A, where \(k \ge 2\), \(i_1,\ldots ,i_k\) are row indices, and \(j_1,\ldots ,j_k\) are column indices. Since the cycle is prime, the row and column indices are mutually distinct. The \(k \times k\) submatrix \(\hat{M}\) of M consisting of elements with row indices \(i_1,\ldots ,i_k\) and column indices \(j_1,\ldots ,j_k\) does not have any nonzero elements except those specified by the edges of the cycle, because any such element would render the cycle non-prime. In particular, every row and every column of \(\hat{M}\) contains exactly two nonzero elements. The index set \(\{j_1,\ldots ,j_k\}\) then has an empty intersection with \(\{1,\ldots ,n\}\), because the first n columns of M contain strictly less than two nonzero elements each. Moreover, in the Leibniz formula for the determinant \(\det \hat{M}\) only two products are nonzero, and the corresponding permutations are related by a cyclic permutation, which has sign \((-1)^{k-1}\). Therefore we have \(|\det \hat{M}| = \left| \prod _{l=1}^k M_{i_lj_l} - (-1)^kM_{i_kj_1} \cdot \prod _{l=1}^{k-1} M_{i_{l+1}j_l} \right| \).

Consider the \(n \times n\) submatrix of M consisting of columns with indices in \((\{1,\ldots ,n\} \setminus \{i_1,\ldots ,i_k\}) \cup \{ j_1,\ldots ,j_k\}\). The determinant of this submatrix has absolute value \(|\det \hat{M}|\) by construction. A similar formula holds for the absolute value of the determinant of the corresponding \(n \times n\) submatrix of \(M'\). By the assumption on \(\Delta ,\Delta '\) we then have

$$\begin{aligned} \left| \prod _{l=1}^k M_{i_lj_l} - (-1)^kM_{i_kj_1} \cdot \prod _{l=1}^{k-1} M_{i_{l+1}j_l} \right| = \left| \prod _{l=1}^k M'_{i_lj_l} - (-1)^kM'_{i_kj_1} \cdot \prod _{l=1}^{k-1} M'_{i_{l+1}j_l} \right| . \end{aligned}$$

It follows that either

$$\begin{aligned} \left( 1 - \prod _{l=1}^k A_{i_lj_l}\right) \prod _{l=1}^k M_{i_lj_l} = \left( 1 - A_{i_kj_1} \cdot \prod _{l=1}^{k-1} A_{i_{l+1}j_l}\right) (-1)^kM_{i_kj_1} \cdot \prod _{l=1}^{k-1} M_{i_{l+1}j_l} \end{aligned}$$

or

$$\begin{aligned} \left( 1 + \prod _{l=1}^k A_{i_lj_l}\right) \prod _{l=1}^k M_{i_lj_l} = \left( 1 + A_{i_kj_1} \cdot \prod _{l=1}^{k-1} A_{i_{l+1}j_l}\right) (-1)^kM_{i_kj_1} \cdot \prod _{l=1}^{k-1} M_{i_{l+1}j_l}. \end{aligned}$$

Note that all the involved elements of M are nonzero, while those of A equal \(\pm 1\). The relation \(\prod _{l=1}^k A_{i_lj_l} = -A_{i_kj_1} \cdot \prod _{l=1}^{k-1} A_{i_{l+1}j_l}\) would then imply that in each of the two equations above, one side is zero while the other is not. Therefore we must have \(\prod _{l=1}^k A_{i_lj_l} = A_{i_kj_1} \cdot \prod _{l=1}^{k-1} A_{i_{l+1}j_l}\), and the condition in Corollary 10.3 is fulfilled.

By this corollary there exists a rank 1 completion \(C = ef^T\) of A such that \(e \in \{-1,+1\}^n\), \(f \in \{-1,+1\}^m\). We then have \(M' = M \cdot (ef^T) = {{\mathrm{diag}}}(e)\cdot M\cdot {{\mathrm{diag}}}(f)\). Setting \(\Sigma = {{\mathrm{diag}}}(f)\) completes the proof. \(\square \)

We now provide the technical result which is necessary for the proof of Theorem 3.18.

Lemma 10.5

Let \(x_1,\ldots ,x_m,y_1,\ldots ,y_m \in \mathbb R^n\) be such that \({{\mathrm{span}}}\{x_1,\ldots ,x_m\} = {{\mathrm{span}}}\{y_1,\ldots ,y_m\} = \mathbb R^n\). Denote by \(L \subset \mathcal{S}^n\) the linear span of the set \(\{x_1x_1^T,\ldots ,x_mx_m^T\}\) and assume there exists a linear map \(\tilde{f}: L \rightarrow \mathcal{S}^n\) such that \(\tilde{f}(x_ix_i^T) = y_iy_i^T\) for all \(i = 1,\ldots ,m\). Assume further that there exists a positive constant \(c > 0\) such that \(\det Z = c\det \tilde{f}(Z)\) for all \(Z \in L\). Then there exists a non-singular \(n \times n\) matrix S such that \(\tilde{f}(Z) = SZS^T\) for all matrices \(Z \in L\).

Proof

Assemble the column vectors \(x_i\) into an \(n \times m\) matrix X and the column vectors \(y_i\) into an \(n \times m\) matrix Y. By assumption these matrices have full row rank n. For mutually distinct indices \(i_1,\ldots ,i_n \in \{1,\ldots ,m\}\), let \(X_{i_1\ldots i_n},Y_{i_1\ldots i_n}\) be the \(n \times n\) submatrices formed of the columns \(i_1,\ldots ,i_n\) of X, Y, respectively. We have \(\det (X_{i_1\ldots i_n}X_{i_1\ldots i_n}^T) = \det (\sum _{k=1}^n x_{i_k}x_{i_k}^T) = c\det (\sum _{k=1}^n y_{i_k}y_{i_k}^T) = c\det (Y_{i_1\ldots i_n}Y_{i_1\ldots i_n}^T)\), which implies \(|\det X_{i_1\ldots i_n}| = \sqrt{c}|\det Y_{i_1\ldots i_n}|\).

Since the n-tuple \((i_1,\ldots ,i_n)\) was chosen arbitrarily, the n-planes spanned in \(\mathbb R^m\) by the row vectors of X, Y, respectively, fulfill the conditions of Lemma 10.4. By this lemma there exists a diagonal matrix \(\Sigma = {{\mathrm{diag}}}(\sigma _1,\ldots ,\sigma _m)\) with \(\sigma _i \in \{-1,+1\}\) taking the row space of X to the row space of Y; hence there exists a nonsingular \(n \times n\) matrix S such that \(Y = SX\Sigma \), or equivalently \(y_i = \sigma _iSx_i\) for all \(i = 1,\ldots ,m\). For every \(i = 1,\ldots ,m\) we then have \(\tilde{f}(x_ix_i^T) = y_iy_i^T = Sx_ix_i^TS^T\), and by linear extension we get the claim of the lemma. \(\square \)
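
The statement of Lemma 10.5 is easy to probe numerically. In the following sketch (ours; the matrix S and the sign pattern are chosen at random) we set \(y_i = \sigma _i S x_i\), so that \(\tilde{f}(Z) = SZS^T\) on L, and check the determinant relation with \(c = (\det S)^{-2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5
S = rng.standard_normal((n, n))            # hidden congruence transform
sigma = rng.choice([-1.0, 1.0], size=m)    # hidden signs sigma_i
X = rng.standard_normal((n, m))            # columns x_1, ..., x_m
Y = (S @ X) * sigma                        # y_i = sigma_i * S * x_i

# on L = span{x_i x_i^T} the map Z -> S Z S^T takes x_i x_i^T to y_i y_i^T
c = np.linalg.det(S) ** (-2)
for _ in range(5):
    w = rng.standard_normal(m)
    Z = (X * w) @ X.T      # sum_i w_i x_i x_i^T, a generic element of L
    FZ = (Y * w) @ Y.T     # sum_i w_i y_i y_i^T = S Z S^T since sigma_i^2 = 1
    assert np.isclose(np.linalg.det(Z), c * np.linalg.det(FZ))
print("det Z = c det f(Z) holds on L with c =", c)
```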

Appendix 2: Extreme elements of rank 2

In this section we provide auxiliary results which are needed for the classification of ROG spectrahedral cones of codimension 2 in Sect. 6.2. By virtue of Lemma 6.2 these results would in principle also allow a classification of ROG cones of codimensions 3 and 4, but the number and complexity of the cases to be considered becomes prohibitive in the framework of this paper.

We first provide the following structural result on real symmetric matrix pencils. Recall that y is called an eigenvector of the pencil \(Q_1 + \lambda Q_2\) if the linear forms \(Q_1y,Q_2y\) are linearly dependent.

Lemma 10.6

Let \(Q_1,Q_2\) be quadratic forms on \(\mathbb R^n\) such that the pencil \(Q_1 + \lambda Q_2\) possesses n linearly independent real eigenvectors. Then there exists a direct sum decomposition \(\mathbb R^n = H_0 \oplus H_1 \oplus \cdots \oplus H_m\), non-degenerate quadratic forms \(\Phi _k\) on \(H_k\), \(k = 1,\ldots ,m\), and mutually distinct angles \(\varphi _1,\ldots ,\varphi _m \in [0,\pi )\) with the following properties. For every vector \(x = \sum _{k=0}^m x_k\), where \(x_k \in H_k\), we have \(Q_1(x) = \sum _{k=1}^m \cos \varphi _k \Phi _k(x_k)\), \(Q_2(x) = \sum _{k=1}^m \sin \varphi _k \Phi _k(x_k)\). Moreover, the set of real eigenvectors of the pencil \(Q_1 + \lambda Q_2\) is given by the union \(\bigcup _{k=1}^m (H_0 + H_k)\).

Proof

We define the subspace \(H_0\) as the intersection \(\ker Q_1 \cap \ker Q_2\). For every real eigenvector \(y \not \in H_0\) of the pencil \(Q_1 + \lambda Q_2\), the linear span of the set \(\{ Q_1y,Q_2y \}\) of linear forms then has dimension 1. Hence there exists a unique angle \(\varphi (y) \in [0,\pi )\) such that \(\sin \varphi (y) Q_1y - \cos \varphi (y) Q_2y = 0\).

By assumption we find linearly independent real eigenvectors \(y_1,\ldots ,y_{n-\dim H_0}\) of the pencil \(Q_1 + \lambda Q_2\) such that \({{\mathrm{span}}}(H_0 \cup \{ y_1,\ldots ,y_{n-\dim H_0} \}) = \mathbb R^n\). Regroup these vectors into subsets \(\{y_{11},\ldots ,y_{1d_1}\}\), \(\ldots \), \(\{y_{m1},\ldots ,y_{md_m}\}\) such that \(\varphi (y_{kl}) = \varphi _k\), \(k = 1,\ldots ,m\), \(l = 1,\ldots ,d_k\), where \(\varphi _1,\ldots ,\varphi _m \in [0,\pi )\) are mutually distinct angles, and \(d_k\) is the number of eigenvectors corresponding to angle \(\varphi _k\). Define the subspace \(H_k\) as the linear span of \(y_{k1},\ldots ,y_{kd_k}\), \(k = 1,\ldots ,m\). Then by construction we have that \(H_0 \oplus H_1 \oplus \cdots \oplus H_m\) is a direct sum decomposition of \(\mathbb R^n\). Moreover, every vector \(y \in H_k\) is an eigenvector and we have \(\sin \varphi _k Q_1y - \cos \varphi _k Q_2y = 0\) for all \(y \in H_k\), \(k = 1,\ldots ,m\). It follows that there exist quadratic forms \(\Phi _k\) on \(H_k\), \(k = 1,\ldots ,m\), such that \(Q_1|_{H_k} = \cos \varphi _k \Phi _k\), \(Q_2|_{H_k} = \sin \varphi _k \Phi _k\).

Let now \(k,k' \in \{1,\ldots ,m\}\) be distinct indices and \(y \in H_k\), \(y' \in H_{k'}\) be arbitrary vectors. By construction we have \(\sin \varphi _k Q_1y - \cos \varphi _k Q_2y = \sin \varphi _{k'} Q_1y' - \cos \varphi _{k'} Q_2y' = 0\). Therefore \(\sin \varphi _k y^TQ_1y' - \cos \varphi _k y^TQ_2y' = \sin \varphi _{k'} y^TQ_1y' - \cos \varphi _{k'} y^TQ_2y' = 0\). But \(\varphi _k,\varphi _{k'}\) are distinct, and thus this linear system on \(y^TQ_iy'\) has only the trivial solution \(y^TQ_1y' = y^TQ_2y' = 0\). The decomposition formulas \(Q_1(x) = \sum _{k=1}^m \cos \varphi _k \Phi _k(x_k)\), \(Q_2(x) = \sum _{k=1}^m \sin \varphi _k \Phi _k(x_k)\) now readily follow.

Let \(1 \le k \le m\). Suppose there exists a vector \(y \in H_k\) such that \(\Phi _ky = 0\). Then we have \(Q_1y = Q_2y = 0\), and \(y \in H_0\). Thus \(y = 0\), and the form \(\Phi _k\) must be non-degenerate.

Let now \(x = \sum _{k=0}^m x_k\) be a real eigenvector of the pencil \(Q_1 + \lambda Q_2\), where \(x_k \in H_k\). Then we have \(\sin \varphi Q_1x - \cos \varphi Q_2x = 0\) for some angle \(\varphi \in [0,\pi )\). Let \(z_k \in H_k\), \(k = 0,\ldots ,m\) be arbitrary vectors, and set \(z = \sum _{k=0}^m z_k\). Then we have \(x^TQ_1z = \sum _{k=1}^m \cos \varphi _k \Phi _k(x_k,z_k)\), \(x^TQ_2z = \sum _{k=1}^m \sin \varphi _k \Phi _k(x_k,z_k)\). It follows that

$$\begin{aligned} 0 = \sin \varphi \, x^TQ_1z - \cos \varphi \, x^TQ_2z = \sum _{k=1}^m (\sin \varphi \cos \varphi _k - \cos \varphi \sin \varphi _k) \Phi _k(x_k,z_k) = \sum _{k=1}^m \sin (\varphi -\varphi _k) \Phi _k(x_k,z_k). \end{aligned}$$

Here \(\Phi _k(x_k,z_k) = \frac{1}{4}(\Phi _k(x_k+z_k)-\Phi _k(x_k-z_k))\) is as usual the bilinear form defined by the quadratic form \(\Phi _k\). Since this holds identically for all \(z_k \in H_k\) and the forms \(\Phi _k\) are non-degenerate, we must have either \(\varphi = \varphi _k\) or \(x_k = 0\) for each \(k = 1,\ldots ,m\). Therefore \(x \in H_0 + H_k\) for some k. On the other hand, every vector \(x \in H_0+H_k\) is an eigenvector of the pencil \(Q_1 + \lambda Q_2\), since it satisfies \(\sin \varphi _k Q_1x - \cos \varphi _k Q_2x = 0\). \(\square \)
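
A quick numerical illustration of Lemma 10.6 (our sketch; the block sizes and angles are arbitrary): build \(Q_1,Q_2\) from non-degenerate forms \(\Phi _k\) on complementary blocks \(H_k\) and verify that vectors inside a block are eigenvectors of the pencil, while generic mixed vectors are not.

```python
import numpy as np

rng = np.random.default_rng(2)
dims, phis = [2, 3], [0.4, 1.7]      # blocks H_1, H_2 in R^5, H_0 = {0}
n = sum(dims)
Q1, Q2 = np.zeros((n, n)), np.zeros((n, n))
off = 0
for d, phi in zip(dims, phis):
    B = rng.standard_normal((d, d))
    Phi = B + B.T                    # generically non-degenerate form on H_k
    Q1[off:off+d, off:off+d] = np.cos(phi) * Phi
    Q2[off:off+d, off:off+d] = np.sin(phi) * Phi
    off += d

def is_pencil_eigenvector(x):
    # x is an eigenvector iff the linear forms Q1 x, Q2 x are dependent
    return np.linalg.matrix_rank(np.column_stack([Q1 @ x, Q2 @ x])) <= 1

y = np.concatenate([np.zeros(2), rng.standard_normal(3)])   # y in H_2
x = rng.standard_normal(n)                                  # mixed vector
print(is_pencil_eigenvector(y), is_pencil_eigenvector(x))   # True False
```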

We now come to spectrahedral cones \(K \subset \mathcal{S}_+^n\) of codimension d. We represent these as in Lemma 6.1 by linearly independent quadratic forms \(Q_1,\ldots ,Q_d\) on \(\mathbb R^n\), \(K = \{ X \in \mathcal{S}_+^n \,|\, \langle X,Q_i \rangle = 0,\ i = 1,\ldots ,d \}\). We study the intersections of the spectrahedral cone K with faces \(\mathcal{F}_n(H)\) of \(\mathcal{S}_+^n\) of rank not exceeding 2, i.e., where \(H = {{\mathrm{span}}}\{x,y\}\) for some vectors \(x,y \in \mathbb R^n\).

Lemma 10.7

Assume the above notations. The face \(\mathcal{F}_n(H) \cap K\) of K is generated by an extreme element of rank 2 if and only if the \(d \times 3\) matrix

$$\begin{aligned} M(x,y) = \begin{pmatrix} x^TQ_1x &amp; 2x^TQ_1y &amp; y^TQ_1y \\ \vdots &amp; \vdots &amp; \vdots \\ x^TQ_dx &amp; 2x^TQ_dy &amp; y^TQ_dy \end{pmatrix} \end{aligned}$$

has rank 2 and its kernel is generated by a vector \((a,b,c)^T \in \mathbb R^3\) such that the matrix \(A = \begin{pmatrix} a &amp; b \\ b &amp; c \end{pmatrix}\) is definite.

Proof

If x, y are linearly dependent, then both the face \(\mathcal{F}_n(H)\) and the matrix M(x, y) have rank at most 1. Hence we may assume that x, y are linearly independent.

The intersection \(\mathcal{F}_n(H) \cap K\) is given by all matrices \(X = axx^T + b(xy^T+yx^T) + cyy^T\) such that \(\begin{pmatrix} a &amp; b \\ b &amp; c \end{pmatrix} \succeq 0\) and

$$\begin{aligned} \begin{pmatrix} \langle X,Q_1 \rangle \\ \vdots \\ \langle X,Q_d \rangle \end{pmatrix} = M(x,y) \cdot \begin{pmatrix} a \\ b \\ c \end{pmatrix} = 0. \end{aligned}$$

The signature of \(X = axx^T + b(xy^T+yx^T) + cyy^T\) equals the signature of A.

Suppose that \({{\mathrm{rk}}}M(x,y) = 2\) and the kernel of M(x, y) is generated by a vector \((a,b,c)^T\) such that \(A \succ 0\). Then \(X = axx^T + b(xy^T+yx^T) + cyy^T\) is positive semi-definite of rank 2, and the face \(\mathcal{F}_n(H) \cap K\) is generated by X, which proves the “if” direction.

Suppose now that \(X = axx^T + b(xy^T+yx^T) + cyy^T\) is positive semi-definite of rank 2 and generates the face \(\mathcal{F}_n(H) \cap K\). Then \((a,b,c)^T \in \ker M(x,y)\) and \(A \succ 0\). Moreover, the dimension of \(\ker M(x,y)\) is 1 and it must be generated by the vector \((a,b,c)^T\), otherwise we would have \(\dim (\mathcal{F}_n(H) \cap K) > 1\). It follows that \({{\mathrm{rk}}}M(x,y) = 2\), which proves the “only if” direction. \(\square \)
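
The criterion of Lemma 10.7 translates directly into a numerical test; a hedged sketch (the function name and the random instance are ours):

```python
import numpy as np

def generates_extreme_rank2(Qs, x, y):
    """Lemma 10.7 test: F_n(H) with H = span{x, y} meets K in a face
    generated by an extreme element of rank 2 iff M(x, y) has rank 2 and
    its kernel is spanned by (a, b, c) with [[a, b], [b, c]] definite."""
    M = np.array([[x @ Q @ x, 2 * x @ Q @ y, y @ Q @ y] for Q in Qs])
    if np.linalg.matrix_rank(M) != 2:
        return False
    a, b, c = np.linalg.svd(M)[2][-1]   # generator of the kernel of M
    return b * b - a * c < 0            # det [[a, b], [b, c]] > 0: definite

rng = np.random.default_rng(3)
B1, B2 = rng.standard_normal((2, 4, 4))
Q1, Q2 = B1 + B1.T, B2 + B2.T           # two random quadratic forms on R^4
x, y = rng.standard_normal(4), rng.standard_normal(4)
print(generates_extreme_rank2([Q1, Q2], x, y))
```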

Lemma 10.8

Assume the notations of the previous lemma and set \(d = 2\). The matrix M(x, y) satisfies the conditions of the previous lemma if and only if the bi-quartic polynomial p(x, y) given by

$$\begin{aligned} p(x,y) = {}&amp;\left( y^TQ_1y\cdot x^TQ_2x - x^TQ_1x\cdot y^TQ_2y\right) ^2 - 4\left( x^TQ_1y\cdot y^TQ_2y - x^TQ_2y\cdot y^TQ_1y\right) \\&amp;\times \left( x^TQ_1x\cdot x^TQ_2y - x^TQ_1y\cdot x^TQ_2x\right) \end{aligned}$$

is negative at (x, y).

Proof

The matrix A is definite if and only if \(b^2 - ac < 0\), and M(x, y) has full rank 2 if and only if the cross product

$$\begin{aligned} \begin{pmatrix} x^TQ_1x \\ 2x^TQ_1y \\ y^TQ_1y \end{pmatrix} \times \begin{pmatrix} x^TQ_2x \\ 2x^TQ_2y \\ y^TQ_2y \end{pmatrix} = \begin{pmatrix} 2(x^TQ_1y\cdot y^TQ_2y - x^TQ_2y\cdot y^TQ_1y) \\ y^TQ_1y\cdot x^TQ_2x - x^TQ_1x\cdot y^TQ_2y \\ 2(x^TQ_1x\cdot x^TQ_2y - x^TQ_1y\cdot x^TQ_2x) \end{pmatrix} \end{aligned}$$

is nonzero. In this case the kernel of M(x, y) is generated by this cross product, and hence \(b^2 - ac < 0\) if and only if \(p(x,y) < 0\). \(\square \)
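
In code, the proof amounts to a single cross product. The following sketch (ours) checks that \(b^2 - ac\) computed from the cross product of the rows of M(x, y) coincides with the displayed formula for p(x, y):

```python
import numpy as np

def p_via_cross(Q1, Q2, x, y):
    # kernel generator (a, b, c) of M(x, y) as the cross product of its rows
    r1 = np.array([x @ Q1 @ x, 2 * x @ Q1 @ y, y @ Q1 @ y])
    r2 = np.array([x @ Q2 @ x, 2 * x @ Q2 @ y, y @ Q2 @ y])
    a, b, c = np.cross(r1, r2)
    return b * b - a * c

def p_direct(Q1, Q2, x, y):
    # the bi-quartic polynomial displayed in Lemma 10.8
    q = lambda Q, u, v: u @ Q @ v
    return (q(Q1, y, y) * q(Q2, x, x) - q(Q1, x, x) * q(Q2, y, y)) ** 2 \
        - 4 * (q(Q1, x, y) * q(Q2, y, y) - q(Q2, x, y) * q(Q1, y, y)) \
            * (q(Q1, x, x) * q(Q2, x, y) - q(Q1, x, y) * q(Q2, x, x))

rng = np.random.default_rng(4)
B1, B2 = rng.standard_normal((2, 4, 4))
Q1, Q2 = B1 + B1.T, B2 + B2.T
x, y = rng.standard_normal(4), rng.standard_normal(4)
print(np.isclose(p_via_cross(Q1, Q2, x, y), p_direct(Q1, Q2, x, y)))  # True
```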

Lemma 10.9

Assume the notations of the previous lemma and suppose that the polynomial p(x, y) is nonnegative for all \(x,y \in \mathbb R^n\). Suppose there exists \(z \in \mathbb R^n\) such that \(z^TQ_1z = z^TQ_2z = 0\) and the linear forms \(q_1 = Q_1z\), \(q_2 = Q_2z\) are linearly independent. Then there exists a linear form u which is linearly independent of \(q_1,q_2\) and such that \(Q_1 = u \otimes q_1 + q_1 \otimes u\), \(Q_2 = u \otimes q_2 + q_2 \otimes u\).

Proof

By virtue of the condition \(z^TQ_1z = z^TQ_2z = 0\) the nonnegative polynomial p(x, y) vanishes for \(x = z\) and all \(y \in \mathbb R^n\). Therefore \(\left. \frac{\partial p(x,y)}{\partial x}\right| _{x = z} = 0\) for all \(y \in \mathbb R^n\). By virtue of \(z^TQ_1z = z^TQ_2z = 0\), at \(x = z\) this gradient is given by

$$\begin{aligned} \left. \frac{\partial p(x,y)}{\partial x}\right| _{x = z} = -8\left( q_1^Ty\cdot y^TQ_2y - q_2^Ty\cdot y^TQ_1y\right) \left( q_2^Ty \cdot q_1 - q_1^Ty\cdot q_2\right) = 0. \end{aligned}$$

Since \(q_1,q_2\) are linearly independent, the linear form \(q_2^Ty \cdot q_1 - q_1^Ty\cdot q_2\) is nonzero if \(q_1^Ty \not = 0\) or \(q_2^Ty \not = 0\). Therefore \(q_1^Ty\cdot y^TQ_2y = q_2^Ty\cdot y^TQ_1y\) for all such y, i.e., for a dense subset of \(\mathbb R^n\). It follows that \(q_1^Ty\cdot y^TQ_2y = q_2^Ty\cdot y^TQ_1y\) identically for all \(y \in \mathbb R^n\).

In particular, for every \(y \in \mathbb R^n\) such that \(q_1^Ty = 0\), \(q_2^Ty \not = 0\) we have \(y^TQ_1y = 0\). The subset of such vectors y is dense in the kernel of \(q_1\), and hence \(Q_1\) is zero on this kernel. It follows that there exists a linear form \(u_1\) such that \(Q_1 = q_1 \otimes u_1 + u_1 \otimes q_1\). In a similar manner, there exists a linear form \(u_2\) such that \(Q_2 = q_2 \otimes u_2 + u_2 \otimes q_2\). It follows that \(q_1^Ty\cdot q_2^Ty \cdot u_2^Ty = q_2^Ty\cdot q_1^Ty \cdot u_1^Ty\) identically for all \(y \in \mathbb R^n\). For all \(y \in \mathbb R^n\) such that \(q_1^Ty \not = 0\) and \(q_2^Ty \not = 0\) it follows that \(u_2^Ty = u_1^Ty\). Since the set of such vectors y is dense in \(\mathbb R^n\), we get that \(u_1,u_2\) are equal to the same linear form u.

Note that \(q_1^Tz = q_2^Tz = 0\) by assumption. We obtain \(q_1 = Q_1z = q_1^Tz \cdot u + u^Tz \cdot q_1 = u^Tz \cdot q_1\), and hence \(u^Tz = 1\). Therefore u must be linearly independent of \(q_1,q_2\). \(\square \)

Lemma 10.10

Let \(Q_1,Q_2\) be linearly independent quadratic forms on \(\mathbb R^n\). Consider the spectrahedral cone \(K = \{ X \in \mathcal{S}_+^n \,|\, \langle X,Q_1 \rangle = \langle X,Q_2 \rangle = 0 \}\). Suppose that K does not have extreme elements of rank 2. Then one of the following conditions holds.

  1. (i)

    For every \(z \in \mathbb R^n\) such that \(z^TQ_1z = z^TQ_2z = 0\) the linear forms \(q_1 = Q_1z\), \(q_2 = Q_2z\) are linearly dependent.

  2. (ii)

    There exist linearly independent linear forms \(q_1,q_2,u\) such that \(Q_1 = u \otimes q_1 + q_1 \otimes u\), \(Q_2 = u \otimes q_2 + q_2 \otimes u\).

Proof

Suppose that condition (i) does not hold. Then there exists \(z \in \mathbb R^n\) such that \(z^TQ_1z = z^TQ_2z = 0\) and the linear forms \(q_1 = Q_1z\), \(q_2 = Q_2z\) are linearly independent. Further, since K has no extreme elements of rank 2, by Lemmas 10.7 and 10.8 the polynomial p(x, y) defined in Lemma 10.8 is nonnegative for all \(x,y \in \mathbb R^n\). Hence the conditions of Lemma 10.9 are fulfilled and condition (ii) holds. \(\square \)

Appendix 3: Coordinate-free characterization of \({{\mathrm{Han}}}_+^4\)

In this section we provide an auxiliary result which is needed for the classification of simple ROG cones of degree 4 and dimension 7.

Lemma 10.11

Let \(e_1,\ldots ,e_4\) be the canonical basis vectors of \(\mathbb R^4\). Let \(P \subset \mathbb R^4\) be a 2-dimensional subspace which is transversal to all coordinate planes spanned by pairs of basis vectors. Define the set of vectors

$$\begin{aligned} \mathcal{R} = \{ \alpha e_i \,|\, \alpha \in \mathbb R,\ i = 1,2,3,4 \} \cup \{ z = (z_1,z_2,z_3,z_4)^T \in \mathbb R^4 \,|\, z_i \not = 0 \text{ for all } i = 1,\ldots ,4,\ (z_1^{-1},z_2^{-1},z_3^{-1},z_4^{-1})^T \in P \}. \end{aligned}$$

Let \(L \subset \mathcal{S}^4\) be the linear span of the set \(\{ zz^T \,|\, z \in \mathcal{R} \}\). Then \(\dim L = 7\), and the spectrahedral cone \(K = L \cap \mathcal{S}_+^4\) is isomorphic to the cone \({{\mathrm{Han}}}_+^4\) of positive semi-definite \(4 \times 4\) Hankel matrices.

Proof

Let \((r_1\cos \varphi _1,\ldots ,r_4\cos \varphi _4)^T, (r_1\sin \varphi _1,\ldots ,r_4\sin \varphi _4)^T \in \mathbb R^4\) be two linearly independent vectors spanning P. By the transversality property of P all \(2 \times 2\) minors of the \(4 \times 2\) matrix composed of these vectors are nonzero. Hence the angles \(\varphi _1,\ldots ,\varphi _4\) are mutually distinct modulo \(\pi \), and the scalars \(r_1,\ldots ,r_4\) are nonzero. We may also assume without loss of generality that none of the angles \(\varphi _i\) is a multiple of \(\pi \), otherwise we choose slightly different basis vectors in P.

For all \(\xi \in [0,\pi )\) we then have that the vector \((r_1\sin (\varphi _1+\xi ),\ldots ,r_4\sin (\varphi _4+\xi ))^T\) is an element of P. More precisely, we get

$$\begin{aligned} \mathcal{R} = \{ \alpha e_i \,|\, \alpha \in \mathbb R,\ i = 1,2,3,4 \} \cup \left\{ \alpha \left( \frac{1}{r_1\sin (\varphi _1+\xi )},\ldots ,\frac{1}{r_4\sin (\varphi _4+\xi )}\right) ^T \,\Big|\, \alpha \in \mathbb R,\ \sin (\varphi _i+\xi ) \not = 0 \text{ for all } i \right\} . \end{aligned}$$

Now set \(\cos \xi = \frac{1-t^2}{1+t^2}\), \(\sin \xi = \frac{2t}{1+t^2}\), and \(s = t - \frac{1}{t}\). Then \(\frac{1}{r_i\sin (\varphi _i+\xi )} = \frac{1+t^2}{r_it(2\cos \varphi _i-s\sin \varphi _i)}\). Define the vector \(\mu (s) = \left( \frac{1}{r_1(2\cos \varphi _1-s\sin \varphi _1)},\ldots ,\frac{1}{r_4(2\cos \varphi _4-s\sin \varphi _4)}\right) ^T\) for all \(s \in \mathbb R\) except the values \(s = 2\cot \varphi _i\), \(i = 1,\ldots ,4\). We then get

$$\begin{aligned} \mathcal{R} = \{ \alpha e_i \,|\, \alpha \in \mathbb R,\ i = 1,2,3,4 \} \cup \{ \alpha \mu (s) \,|\, \alpha \in \mathbb R,\ s \in \mathbb R,\ s \not = 2\cot \varphi _i \text{ for all } i \} \cup \left\{ \alpha \left( \frac{1}{r_1\sin \varphi _1},\ldots ,\frac{1}{r_4\sin \varphi _4}\right) ^T \,\Big|\, \alpha \in \mathbb R \right\} . \end{aligned}$$

Multiplying the vector \(\mu (s)\) by the common denominator of its elements, we obtain the vector

$$\begin{aligned} \nu (s) = \mu (s) \cdot \prod _{i=1}^4 (2\cos \varphi _i-s\sin \varphi _i) = {{\mathrm{diag}}}\left( r_1^{-1},r_2^{-1},r_3^{-1},r_4^{-1}\right) \cdot M \cdot {{\mathrm{diag}}}(8,-4,2,-1) \cdot \eta (s), \end{aligned}$$
(11)

where \(\eta (s) = (1,s,s^2,s^3)^T\) and the matrix M is given by

$$\begin{aligned} M = \begin{pmatrix} \cos \varphi _2\cos \varphi _3\cos \varphi _4 &amp; \sin \varphi _2\sin \varphi _3\sin \varphi _4 + \sin (\varphi _2+\varphi _3+\varphi _4) &amp; \cos \varphi _2\cos \varphi _3\cos \varphi _4 - \cos (\varphi _2+\varphi _3+\varphi _4) &amp; \sin \varphi _2\sin \varphi _3\sin \varphi _4 \\ \cos \varphi _1\cos \varphi _3\cos \varphi _4 &amp; \sin \varphi _1\sin \varphi _3\sin \varphi _4 + \sin (\varphi _1+\varphi _3+\varphi _4) &amp; \cos \varphi _1\cos \varphi _3\cos \varphi _4 - \cos (\varphi _1+\varphi _3+\varphi _4) &amp; \sin \varphi _1\sin \varphi _3\sin \varphi _4 \\ \cos \varphi _1\cos \varphi _2\cos \varphi _4 &amp; \sin \varphi _1\sin \varphi _2\sin \varphi _4 + \sin (\varphi _1+\varphi _2+\varphi _4) &amp; \cos \varphi _1\cos \varphi _2\cos \varphi _4 - \cos (\varphi _1+\varphi _2+\varphi _4) &amp; \sin \varphi _1\sin \varphi _2\sin \varphi _4 \\ \cos \varphi _1\cos \varphi _2\cos \varphi _3 &amp; \sin \varphi _1\sin \varphi _2\sin \varphi _3 + \sin (\varphi _1+\varphi _2+\varphi _3) &amp; \cos \varphi _1\cos \varphi _2\cos \varphi _3 - \cos (\varphi _1+\varphi _2+\varphi _3) &amp; \sin \varphi _1\sin \varphi _2\sin \varphi _3 \end{pmatrix} . \end{aligned}$$

Here, in the computation of M, we used the formulas

$$\begin{aligned}&amp;\sin \varphi _i\cos \varphi _j\cos \varphi _k+\sin \varphi _j \cos \varphi _i\cos \varphi _k+\sin \varphi _k\cos \varphi _i\cos \varphi _j = \sin \varphi _i\sin \varphi _j\sin \varphi _k + \sin (\varphi _i+\varphi _j+\varphi _k), \\&amp;\sin \varphi _i\sin \varphi _j\cos \varphi _k+\sin \varphi _i\sin \varphi _k\cos \varphi _j+\sin \varphi _j \sin \varphi _k\cos \varphi _i = \cos \varphi _i\cos \varphi _j\cos \varphi _k - \cos (\varphi _i+\varphi _j+\varphi _k). \end{aligned}$$
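
Both formulas are elementary product-to-sum identities; a short sympy check (our sketch) confirms that each difference expands to zero:

```python
import sympy as sp

pi_, pj, pk = sp.symbols('phi_i phi_j phi_k', real=True)
lhs1 = (sp.sin(pi_) * sp.cos(pj) * sp.cos(pk)
        + sp.sin(pj) * sp.cos(pi_) * sp.cos(pk)
        + sp.sin(pk) * sp.cos(pi_) * sp.cos(pj))
rhs1 = sp.sin(pi_) * sp.sin(pj) * sp.sin(pk) + sp.sin(pi_ + pj + pk)
lhs2 = (sp.sin(pi_) * sp.sin(pj) * sp.cos(pk)
        + sp.sin(pi_) * sp.sin(pk) * sp.cos(pj)
        + sp.sin(pj) * sp.sin(pk) * sp.cos(pi_))
rhs2 = sp.cos(pi_) * sp.cos(pj) * sp.cos(pk) - sp.cos(pi_ + pj + pk)
print(sp.expand(sp.expand_trig(lhs1 - rhs1)))   # 0
print(sp.expand(sp.expand_trig(lhs2 - rhs2)))   # 0
```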

Note that the vector \(\nu (s)\) can also be defined by the right-hand side of (11) for \(s = 2\cot \varphi _i\), and for this value of s it is proportional to \(e_i\). Defining \(\eta (\infty ) = e_4\) and \(\nu (\infty ) = {{\mathrm{diag}}}(r_1^{-1},r_2^{-1},r_3^{-1},r_4^{-1}) \cdot M \cdot {{\mathrm{diag}}}(8,-4,2,-1) \cdot \eta (\infty )\), we finally get

$$\begin{aligned} \mathcal{R} = \{ \alpha \nu (s) \,|\, \alpha \in \mathbb R,\ s \in \mathbb R \cup \{\infty \} \}. \end{aligned}$$

A symbolic computation with a computer algebra system yields

$$\begin{aligned} \det M = \sin (\varphi _1-\varphi _2)\sin (\varphi _1-\varphi _3) \sin (\varphi _1-\varphi _4)\sin (\varphi _2-\varphi _3)\sin (\varphi _2-\varphi _4)\sin (\varphi _3-\varphi _4) \not = 0. \end{aligned}$$
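
The formula is easy to spot-check numerically (our sketch; M is rebuilt from its row pattern for random angles):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
phi = rng.uniform(0.1, 3.0, size=4)       # four mutually distinct angles
rows = []
for i in range(4):
    j, k, l = (t for t in range(4) if t != i)   # complementary indices
    cc = np.cos(phi[j]) * np.cos(phi[k]) * np.cos(phi[l])
    ss = np.sin(phi[j]) * np.sin(phi[k]) * np.sin(phi[l])
    rows.append([cc,
                 ss + np.sin(phi[j] + phi[k] + phi[l]),
                 cc - np.cos(phi[j] + phi[k] + phi[l]),
                 ss])
M = np.array(rows)
target = np.prod([np.sin(phi[i] - phi[j])
                  for i, j in combinations(range(4), 2)])
print(np.isclose(np.linalg.det(M), target))   # True
```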

Hence the matrix product \({{\mathrm{diag}}}\left( r_1^{-1},r_2^{-1},r_3^{-1},r_4^{-1}\right) \cdot M \cdot {{\mathrm{diag}}}(8,-4,2,-1)\) is non-degenerate, and the subspace \(L = {{\mathrm{span}}}\{ zz^T \,|\, z = \nu (s),\ s \in \mathbb R \cup \{\infty \} \}\) is isomorphic to the subspace \(L' = {{\mathrm{span}}}\{ zz^T \,|\, z = \eta (s),\ s \in \mathbb R \cup \{\infty \} \}\).

The subspace \(L'\), however, is the subspace of Hankel matrices in \(\mathcal{S}^4\). The claim of the lemma now easily follows. \(\square \)
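
The last step can also be seen computationally (our sketch): \(\eta (s)\eta (s)^T\) has (i, j) entry \(s^{i+j-2}\), hence is a Hankel matrix, and a handful of sample values of s already spans the full 7-dimensional Hankel subspace of \(\mathcal{S}^4\).

```python
import numpy as np

# eta(s) = (1, s, s^2, s^3); eta(s) eta(s)^T has (i, j) entry s^(i+j-2),
# a Hankel matrix, and eta(infinity) corresponds to e_4 e_4^T
vecs = []
for s in np.linspace(-2.0, 2.0, 9):
    eta = np.array([1.0, s, s ** 2, s ** 3])
    vecs.append(np.outer(eta, eta).ravel())
e4 = np.array([0.0, 0.0, 0.0, 1.0])
vecs.append(np.outer(e4, e4).ravel())
# the span has one dimension per power s^0, ..., s^6
print(np.linalg.matrix_rank(np.array(vecs)))  # 7
```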

Cite this article

Hildebrand, R. Spectrahedral cones generated by rank 1 matrices. J Glob Optim 64, 349–397 (2016). https://doi.org/10.1007/s10898-015-0313-4
