
Arveson extreme points span free spectrahedra

Mathematische Annalen

Abstract

Let \( {SM_n( {\mathbb {R}} )^g}\) denote g-tuples of \(n \times n\) real symmetric matrices. Given tuples \(X=(X_1, \ldots , X_g) \in {SM_{n_1}( {\mathbb {R}} )^g}\) and \(Y=(Y_1, \ldots , Y_g) \in {SM_{n_2}( {\mathbb {R}} )^g}\), a matrix convex combination of X and Y is a sum of the form

$$\begin{aligned} V_1^* XV_1+V_2^* Y V_2 \qquad \qquad V_1^* V_1+V_2^* V_2=I_n \end{aligned}$$

where \(V_1: {\mathbb {R}} ^n \rightarrow {\mathbb {R}} ^{n_1}\) and \(V_2: {\mathbb {R}} ^n \rightarrow {\mathbb {R}} ^{n_2}\) are contractions. Matrix convex sets are sets which are closed under matrix convex combinations. A key feature of matrix convex combinations is that the g-tuples XY, and \(V_1^* XV_1+V_2^* Y V_2\) do not need to have the same size. As a result, matrix convex sets are a dimension free analog of convex sets. While in the classical setting there is only one notion of an extreme point, there are three main notions of extreme points for matrix convex sets: ordinary, matrix, and absolute extreme points. Absolute extreme points are closely related to the classical Arveson boundary. A central goal in the theory of matrix convex sets is to determine if one of these types of extreme points for a matrix convex set minimally recovers the set through matrix convex combinations. This article shows that every real compact matrix convex set which is defined by a linear matrix inequality is the matrix convex hull of its absolute extreme points, and that the absolute extreme points are the minimal set with this property. Furthermore, we give an algorithm which expresses a tuple as a matrix convex combination of absolute extreme points with optimal bounds. Similar results hold when working over the field of complex numbers rather than the reals.


Notes

  1. If \(\tilde{Y}_c\) is an element of \({{{\mathcal {D}}} _A^{\mathbb {R}}}(n+1)\) then so is \(\tilde{Y}_{-c}\). For this reason, it is equivalent to require \(|c| \le 1\).

References

  1. Agler, J.: An abstract approach to model theory. In: Surveys of Some Recent Results in Operator Theory, Vol. II, pp. 1–23, Pitman Res. Notes Math. Ser. 192. Longman Sci. Tech., Harlow (1988)

  2. Arveson, W.: Subalgebras of \(C^*\)-algebras. Acta Math. 123, 141–224 (1969)

  3. Arveson, W.: Subalgebras of \(C^*\)-algebras. II. Acta Math. 128, 271–308 (1972)

  4. Arveson, W.: The noncommutative Choquet boundary. J. Am. Math. Soc. 21, 1065–1084 (2008)

  5. Davidson, K.R.: \(C^*\)-Algebras by Example. American Mathematical Society, Providence (1996)

  6. Davidson, K.R., Kennedy, M.: The Choquet boundary of an operator system. Duke Math. J. 164, 2989–3004 (2015)

  7. Dritschel, M.A., McCullough, S.A.: Boundary representations for families of representations of operator algebras and spaces. J. Oper. Theory 53, 159–168 (2005)

  8. Effros, E.G., Winkler, S.: Matrix convexity: operator analogues of the bipolar and Hahn–Banach theorems. J. Funct. Anal. 144, 117–152 (1997)

  9. Evert, E.: Matrix convex sets without absolute extreme points. Linear Algebra Appl. 537, 287–301 (2018)

  10. Evert, E., Helton, J.W., Klep, I., McCullough, S.: Extreme points of matrix convex sets, free spectrahedra and dilation theory. J. Geom. Anal. 28, 1373–1498 (2018)

  11. Farenick, D.R.: Extremal matrix states on operator systems. J. Lond. Math. Soc. 61, 885–892 (2000)

  12. Farenick, D.R.: Pure matrix states on operator systems. Linear Algebra Appl. 393, 149–173 (2004)

  13. Fritz, T., Netzer, T., Thom, A.: Spectrahedral containment and operator systems with finite-dimensional realization. SIAM J. Appl. Algebra Geom. 1, 556–574 (2017)

  14. Fuller, A.H., Hartz, M., Lupini, M.: Boundary representations of operator spaces, and compact rectangular matrix convex sets. J. Oper. Theory 79, 139–172 (2018)

  15. Hamana, M.: Injective envelopes of operator systems. Publ. Res. Inst. Math. Sci. 15, 773–785 (1979)

  16. Helton, J.W., Klep, I., McCullough, S.: The matricial relaxation of a linear matrix inequality. Math. Program. 138, 401–445 (2013)

  17. Helton, J.W., McCullough, S.: Every free basic convex semi-algebraic set has an LMI representation. Ann. Math. (2) 176, 979–1013 (2012)

  18. Kleski, C.: Boundary representations and pure completely positive maps. J. Oper. Theory 71, 45–62 (2014)

  19. Kriel, T.: Free spectrahedra, determinants of monic linear pencils and decompositions of pencils. Preprint arXiv:1611.03103

  20. Muhly, P.S., Solel, B.: An algebraic characterization of boundary representations. In: Nonselfadjoint Operator Algebras, Operator Theory, and Related Topics, Oper. Theory Adv. Appl., vol. 104, pp. 189–196. Birkhäuser, Basel (1998)

  21. Passer, B., Shalit, O., Solel, B.: Minimal and maximal matrix convex sets. J. Funct. Anal. 274, 3197–3253 (2018)

  22. Paulsen, V.: Completely Bounded Maps and Operator Algebras. Cambridge Studies in Advanced Mathematics 78. Cambridge University Press, Cambridge (2002)

  23. Webster, C., Winkler, S.: The Krein–Milman theorem in operator convexity. Trans. Am. Math. Soc. 351, 307–322 (1999)

  24. Zalar, A.: Operator Positivstellensätze for noncommutative polynomials positive on matrix convex sets. J. Math. Anal. Appl. 445, 32–80 (2017)


Author information

Correspondence to Eric Evert.

Additional information

Communicated by Andreas Thom.


E. Evert and J.W. Helton: Research supported by the NSF Grant DMS-1500835.

Appendix

The appendix contains an NC \(\hbox {LDL}^*\) formula and the proof of Theorem 1.2 over the reals.

1.1 The NC \(\hbox {LDL}^*\) of block \(3 \times 3\) matrices

This subsection contains a brief discussion of the NC \(\hbox {LDL}^*\) decomposition of the evaluation of a linear pencil \(L_A\) on a block \(3 \times 3\) matrix. Consider a general block \(3 \times 3\) tuple

$$\begin{aligned} Z:=\begin{pmatrix} X & \beta & \eta \\ \beta ^* & \gamma & \sigma \\ \eta ^* & \sigma ^* & \psi \end{pmatrix} \end{aligned}$$

where \(X \in {SM_{n_1}( {{\mathbb {K}}})^g}\), \(\gamma \in {SM_{n_2}( {{\mathbb {K}}})^g}\), and \(\psi \in {SM_{n_3}( {{\mathbb {K}}})^g}\), and where \(\beta , \eta ,\) and \(\sigma \) are g-tuples of matrices of the appropriate sizes. We know that

$$\begin{aligned} L_A \begin{pmatrix} X & \beta & \eta \\ \beta ^* & \gamma & \sigma \\ \eta ^* & \sigma ^* & \psi \end{pmatrix} \sim _{\mathrm {c.s.} } \begin{pmatrix} L_A(X) & \Lambda _A (\beta ) & \Lambda _A (\eta ) \\ \Lambda _A (\beta ^*) & L_A (\gamma ) & \Lambda _A (\sigma ) \\ \Lambda _A (\eta ^*) & \Lambda _A (\sigma ^*) & L_A (\psi ) \end{pmatrix}=:{\mathfrak {Z}} \end{aligned}$$

where \(\sim _{\mathrm {c.s.}}\) denotes equivalence up to permutations (canonical shuffles). It follows that

$$\begin{aligned} L_A (Z) \succeq 0 \mathrm {\ if \ and \ only \ if \ } {\mathfrak {Z}} \succeq 0. \end{aligned}$$

The NC \(\hbox {LDL}^*\) of \({\mathfrak {Z}}\) has as its block diagonal factor D the matrix

$$\begin{aligned} D=\begin{pmatrix} L_A(X) & 0 & 0\\ 0 & S & 0 \\ 0 & 0 & L_A (\gamma )-\Lambda _A (\beta ^*) L_A (X)^\dagger \Lambda _A (\beta )-W^* S^\dagger W \end{pmatrix} \end{aligned}$$

where

$$\begin{aligned} \begin{array}{rcl} S&=& L_A (\psi )- \Lambda _A (\eta ^*) L_A(X)^\dagger \Lambda _A (\eta ) \\ W&=& \Lambda _A (\sigma ^*)-\Lambda _A(\eta ^*) L_A (X)^\dagger \Lambda _A (\beta ). \end{array} \end{aligned}$$

It follows that \(L_A (Z) \succeq 0\) if and only if \(L_A (X) \succeq 0\) and \(S \succeq 0\) and

$$\begin{aligned} L_A (\gamma )-\Lambda _A (\beta ^*) L_A (X)^\dagger \Lambda _A (\beta )-W^* S^\dagger W \succeq 0. \end{aligned}$$

Considering the case where \( {{\mathbb {K}}}= {\mathbb {R}} \) and \(\gamma \in {\mathbb {R}} ^g\) and \(\psi =0 \in {\mathbb {R}} ^g\), hence \(\sigma =\sigma ^* \in {\mathbb {R}} ^g\), and substituting \(\eta =c {\hat{\beta }}\) or \(\eta =0\) gives Eqs. (2.8) and (2.16), respectively.
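The positivity criterion above can be checked numerically. The following sketch replaces the pencil evaluations \(L_A(\cdot )\), \(\Lambda _A(\cdot )\) with generic symmetric blocks and assumes the pivots are invertible, so ordinary inverses stand in for the pseudoinverses of the general statement; the block sizes and test matrices are arbitrary choices. It follows the same elimination order as the decomposition above (block 1, then block 3, then block 2).

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3 = 3, 2, 2
m = n1 + n2 + n3

def schur_conditions(M):
    """Positivity test for a symmetric block 3x3 matrix via successive
    Schur complements, pivoting on block 1, then block 3, then block 2.
    Assumes the pivots X and S are invertible."""
    X = M[:n1, :n1]                  # plays the role of L_A(X)
    B = M[:n1, n1:n1 + n2]           # plays the role of Lambda_A(beta)
    H = M[:n1, n1 + n2:]             # plays the role of Lambda_A(eta)
    G = M[n1:n1 + n2, n1:n1 + n2]    # the L_A(gamma) block
    Sg = M[n1:n1 + n2, n1 + n2:]     # the Lambda_A(sigma) block
    P = M[n1 + n2:, n1 + n2:]        # the L_A(psi) block
    Xi = np.linalg.inv(X)
    S = P - H.T @ Xi @ H
    W = Sg.T - H.T @ Xi @ B
    final = G - B.T @ Xi @ B - W.T @ np.linalg.inv(S) @ W
    pos = lambda A: np.linalg.eigvalsh(A).min() > 0
    return pos(X) and pos(S) and pos(final)

# Positive definite example: all three Schur conditions hold.
R = rng.standard_normal((m, m))
M_pos = R @ R.T + m * np.eye(m)
assert schur_conditions(M_pos)

# Indefinite example: the criterion correctly fails.
M_neg = M_pos - 3 * m * np.eye(m)
assert not schur_conditions(M_neg)
```

The design point is that the diagonal factor D of the \(\hbox {LDL}^*\) reduces positivity of the big block matrix to positivity of its three (Schur-complement) diagonal blocks.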

1.2 Proof of Theorem 1.2 over the real numbers

We now give a proof of Theorem 1.2 over the real numbers. To emphasize the real setting, in this subsection we use the terms symmetric and orthogonal in place of self-adjoint and unitary. Recall that a tuple \(X \in {SM_n( {\mathbb {R}} )^g}\) is irreducible over \( {\mathbb {R}} \) if the matrices \(X_1, \ldots , X_g\) have no common nontrivial reducing subspaces in \( {\mathbb {R}} ^n\); a tuple is reducible over \( {\mathbb {R}} \) if it is not irreducible over \( {\mathbb {R}} \).

Lemma 5.1

Let \(X \in {SM_n( {\mathbb {R}} )^g}\) be a g-tuple of real symmetric matrices which is irreducible over \( {\mathbb {R}} \) and let \(W \in SM_n ( {\mathbb {R}} )\) be a real symmetric matrix which commutes with X. Then W is a constant multiple of the identity.

Proof

Let \(W \in SM_n ( {\mathbb {R}} )\) be a real symmetric matrix such that \(WX=XW\) and let \({{\mathcal {E}}} _1, \ldots , {{\mathcal {E}}} _k \subset {\mathbb {R}} ^n\) denote the real eigenspaces of W corresponding to the distinct eigenvalues \(\lambda _1, \ldots , \lambda _k\) of W, respectively. Since X is real and \(WX=XW\), each \({{\mathcal {E}}} _j\) is a reducing subspace for X. If \(k>1\), then each \({{\mathcal {E}}} _j\) is a nontrivial proper real reducing subspace of X, which would imply that X is reducible over \( {\mathbb {R}} \). It follows that \(k=1\) and \(W= \lambda _1 I\). \(\square \)
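Lemma 5.1 can be verified computationally for a small example. The sketch below (the pair \(X_1, X_2\) is my own illustrative choice, not from the paper) computes the symmetric commutant of an \( {\mathbb {R}} \)-irreducible pair as the null space of a linear map, and confirms it is spanned by the identity.

```python
import numpy as np

# An R-irreducible pair of 2x2 real symmetric matrices: X2 swaps the
# eigenvectors of X1, so the pair has no common invariant line in R^2.
X1 = np.diag([1.0, -1.0])
X2 = np.array([[0.0, 1.0], [1.0, 0.0]])

# Basis of the 3-dimensional space of 2x2 real symmetric matrices.
basis = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, 0.0], [0.0, 1.0]]),
         np.array([[0.0, 1.0], [1.0, 0.0]])]

# Matrix of the linear map W -> (W X1 - X1 W, W X2 - X2 W) in this basis;
# its null space is the symmetric commutant of the pair (X1, X2).
K = np.column_stack([np.concatenate([(E @ X1 - X1 @ E).ravel(),
                                     (E @ X2 - X2 @ E).ravel()])
                     for E in basis])

U, s, Vt = np.linalg.svd(K)
assert np.sum(s < 1e-10) == 1                # the commutant is 1-dimensional
W = sum(v * E for v, E in zip(Vt[-1], basis))
assert np.allclose(W, W[0, 0] * np.eye(2))   # and it is spanned by I
```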

We now prove Theorem 1.2, our real analogue of [10, Theorem 1.1 (3)].

The proof over \( {\mathbb {R}} \) follows exactly the proof over \( {\mathbb {C}} \) in [10] as we now outline. That an irreducible Arveson extreme point is absolute extreme is a simple argument given in [10, Section 3.4] based on [10, Lemma 3.14] which (over \( {\mathbb {R}} \)) says the following.

Lemma 5.2

Fix positive integers n and m and suppose \(C \in {\mathbb {R}} ^{m \times n}\) is a nonzero matrix, the tuple \(X\in {SM_n( {\mathbb {R}} )^g}\) is irreducible over \( {\mathbb {R}} \), and \(E\in {SM_m( {\mathbb {R}} )^g}\). If \(CX_j = E_j C\) for each j, then \(C^TC\) is a nonzero multiple of the identity. Moreover, the range of C reduces the set \(\{E_1,\ldots ,E_g\}\), so there is an orthogonal matrix \(U \in M_m(\mathbb {R})\) such that for each j we have \(U^T E_j U= X_j\oplus Z_j\) for some \(Z_j \in SM_k ( {\mathbb {R}} ),\) where \(k=m-n\).

Proof

To prove this statement, note that transposing \(CX_j = E_j C\) gives \( X_j C^T = C^T E_j. \) It follows that

$$\begin{aligned} X_j C^TC = C^T E_j C = C^T C X_j. \end{aligned}$$

Using Lemma 5.1 together with the irreducibility of \(\{X_1,\ldots ,X_g\}\) shows \(C^TC\) is a nonzero multiple of the identity, and therefore C is a real multiple of an isometry. Further, since \(CX=EC\), the range of C is invariant for E. Since each \(E_j\) is symmetric, the range of C reduces each \(E_j\). Moreover, C, being a multiple of an isometry, extends to a multiple of an orthogonal matrix U. \(\square \)
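The conclusion of Lemma 5.2 can be illustrated on a constructed instance. In the sketch below (all sizes, the scale s, and the random matrices are my own illustrative choices), \(E_j\) is built as an orthogonal conjugate of \(X_j \oplus Z_j\) and C as a scaled embedding, so the intertwining relation \(CX_j = E_j C\) holds by construction; the point being checked is that \(C^TC\) then comes out as a nonzero multiple of the identity, as the lemma asserts.

```python
import numpy as np

rng = np.random.default_rng(2)

# An R-irreducible pair of 2x2 real symmetric matrices (no common
# invariant line in R^2).
X = [np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]])]
n, k = 2, 3
m = n + k

# Random symmetric tuple Z and random orthogonal U.
Z = []
for _ in range(2):
    M = rng.standard_normal((k, k))
    Z.append((M + M.T) / 2)
U, _ = np.linalg.qr(rng.standard_normal((m, m)))

# E_j = U (X_j (+) Z_j) U^T and the intertwiner C = s * U @ P, where P
# is the embedding R^n -> R^m and s a nonzero scale.
P = np.vstack([np.eye(n), np.zeros((k, n))])
s = 0.7
C = s * U @ P
E = [U @ np.block([[X[j], np.zeros((n, k))],
                   [np.zeros((k, n)), Z[j]]]) @ U.T for j in range(2)]

for j in range(2):
    assert np.allclose(C @ X[j], E[j] @ C)    # intertwining relation

# As the lemma predicts, C^T C is a nonzero multiple of the identity.
assert np.allclose(C.T @ C, s**2 * np.eye(n))
```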

Proof of Theorem 1.2 when \( {{\mathbb {K}}}= {\mathbb {R}} \). Suppose X is both irreducible over \( {\mathbb {R}} \) and in the Arveson boundary of \({{{\mathcal {D}}} _A^{\mathbb {R}}}\). To prove X is an absolute extreme point, suppose \( X= \sum _{i=1}^\nu C_i^T E^i C_i, \) where each \(C_i\) is nonzero, \(\sum _{i=1}^\nu C_i^T C_i =I\), and \(E^i \in {{{\mathcal {D}}} _A^{\mathbb {R}}}\). In this case, let

$$\begin{aligned} C= \begin{pmatrix} C_1 \\ \vdots \\ C_\nu \end{pmatrix} \quad \text { and }\quad E= E^1\oplus E^2\oplus \cdots \oplus E^\nu \end{aligned}$$

and observe that C is an isometry and \(X=C^T E C\). Hence, as X is in the Arveson boundary, \(CX=EC\). It follows that \(C_i X_k = E^i_k C_i\) for each i and k. Thus, by Lemma 5.2, it follows that for each i there is an orthogonal matrix \(U_i\) such that \(U_i^T E^i U_i= X\oplus Z^i\) for some \(Z^i\in {{{\mathcal {D}}} _A^{\mathbb {R}}}\). Therefore X is an absolute extreme point of \({{{\mathcal {D}}} _A^{\mathbb {R}}}\).

The converse, that an absolute extreme point of \({{{\mathcal {D}}} _A^{\mathbb {R}}}\) is irreducible over \( {\mathbb {R}} \) and Arveson extreme, follows from [10, Lemma 3.11] and [10, Lemma 3.13], whose proofs, while stated over \( {\mathbb {C}} \), are unchanged over \( {\mathbb {R}} \). \(\square \)


Cite this article

Evert, E., Helton, J.W. Arveson extreme points span free spectrahedra. Math. Ann. 375, 629–653 (2019). https://doi.org/10.1007/s00208-019-01858-9
