Dimension reduction for semidefinite programs via Jordan algebras

  • Full Length Paper
  • Series A
Mathematical Programming

Abstract

We propose a new method for simplifying semidefinite programs (SDPs) inspired by symmetry reduction. Specifically, we show that if an orthogonal projection map satisfies certain invariance conditions, then restricting to its range yields an equivalent primal–dual pair over a lower-dimensional symmetric cone, namely the cone-of-squares of a Jordan subalgebra of symmetric matrices. We present a simple algorithm for minimizing the rank of this projection and hence the dimension of this subalgebra. We also show that minimizing rank optimizes the direct-sum decomposition of the algebra into simple ideals, yielding an optimal “block-diagonalization” of the SDP. Finally, we give combinatorial versions of our algorithm that execute at reduced computational cost and illustrate the effectiveness of an implementation on examples. Through the theory of Jordan algebras, the proposed method easily extends to linear programming, second-order-cone programming and, more generally, symmetric cone optimization.

Notes

  1. We omit the Albert algebra from this list since it is exceptional, i.e., it is an algebra that is not special. By definition, all subalgebras of \({\mathbb {S}}^n\) are special [22, 2.3.1]; hence, no subalgebra of \({\mathbb {S}}^n\) is isomorphic to the Albert algebra.

  2. Note that the related algorithm [2, Section 5] uses noncommuting indeterminates.

  3. These formats also allow for Lorentz cones. None of the examples presented, however, use this type of cone.

References

  1. Alizadeh, F., Schmieta, S.: Symmetric cones, potential reduction methods and word-by-word extensions. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming, pp. 195–233. Springer, Berlin (2000)

  2. Babel, L., Chuvaeva, I.V., Klin, M., Pasechnik, D.V.: Algebraic combinatorics in mathematical chemistry. Methods and algorithms. II. Program implementation of the Weisfeiler–Leman algorithm. arXiv preprint arXiv:1002.1921 (2010)

  3. Bachoc, C., Gijswijt, D.C., Schrijver, A., Vallentin, F.: Invariant semidefinite programs. In: Anjos, M.F., Lasserre, J.B. (eds.) Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 219–269. Springer, Berlin (2012)

  4. Bhatia, R.: Positive Definite Matrices. Princeton University Press, Princeton (2009)

  5. Bödi, R., Grundhöfer, T., Herr, K.: Symmetries of linear programs. Note di Matematica 30(1), 129–132 (2011)

  6. Borwein, J., Wolkowicz, H.: Regularizing the abstract convex program. J. Math. Anal. Appl. 83(2), 495–530 (1981)

  7. Caluza Machado, F., de Oliveira Filho, F.M.: Improving the semidefinite programming bound for the kissing number by exploiting polynomial symmetry. Exp. Math. 27(3), 362–369 (2018)

  8. de Klerk, E.: Exploiting special structure in semidefinite programming: a survey of theory and applications. Eur. J. Oper. Res. 201(1), 1–10 (2010)

  9. de Klerk, E., Sotirov, R.: A new library of structured semidefinite programming instances. Optim. Methods Softw. 24(6), 959–971 (2009)

  10. de Klerk, E., Dobre, C., Pasechnik, D.V.: Numerical block diagonalization of matrix *-algebras with application to semidefinite programming. Math. Program. 129(1), 91–111 (2011)

  11. Dobre, C., Vera, J.: Exploiting symmetry in copositive programs via semidefinite hierarchies. Math. Program. 151(2), 659–680 (2015)

  12. Drusvyatskiy, D., Wolkowicz, H.: The many faces of degeneracy in conic optimization. arXiv preprint arXiv:1706.03705 (2017)

  13. Eberly, W., Giesbrecht, M.: Efficient decomposition of associative algebras. In: Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, pp. 170–178. ACM (1996)

  14. Faraut, J., Korányi, A.: Analysis on Symmetric Cones. Oxford University Press, Oxford (1994)

  15. Farenick, D.: Algebras of Linear Transformations. Universitext. Springer, New York (2012). ISBN 9781461300977

  16. Fawzi, H., Parrilo, P.A.: Self-scaled bounds for atomic cone ranks: applications to nonnegative rank and cp-rank. arXiv preprint arXiv:1404.3240 (2014)

  17. Faybusovich, L.: Linear systems in Jordan algebras and primal–dual interior-point algorithms. J. Comput. Appl. Math. 86(1), 149–175 (1997)

  18. Fujisawa, K., Kojima, M., Nakata, K., Yamashita, M.: SDPA (semidefinite programming algorithm) user’s manual—version 6.2.0. Department of Mathematical and Computing Sciences, Tokyo Institute of Technology. Research Reports on Mathematical and Computing Sciences, Series B: Operations Research (2002)

  19. Gatermann, K., Parrilo, P.A.: Symmetry groups, semidefinite programs, and sums of squares. J. Pure Appl. Algebra 192(1–3), 95–128 (2004). https://doi.org/10.1016/j.jpaa.2003.12.011

  20. Gijswijt, D.: Matrix algebras and semidefinite programming techniques for codes. arXiv preprint arXiv:1007.0906 (2010)

  21. Grohe, M., Kersting, K., Mladenov, M., Selman, E.: Dimension reduction via colour refinement. In: Algorithms-ESA 2014, pp. 505–516. Springer (2014)

  22. Hanche-Olsen, H., Størmer, E.: Jordan Operator Algebras, vol. 21. Pitman Advanced Publishing Program, Edinburgh (1984)

  23. Higman, D.: Coherent algebras. Linear Algebra Appl. 93, 209–239 (1987)

  24. Idel, M.: On the Structure of Positive Maps. Technical University of Munich, Munich (2013)

  25. Maehara, T., Murota, K.: A numerical algorithm for block-diagonal decomposition of matrix *-algebras with general irreducible components. Jpn. J. Ind. Appl. Math. 27(2), 263–293 (2010)

  26. Margot, F.: Exploiting orbits in symmetric ILP. Math. Program. 98(1–3), 3–21 (2003)

  27. Mittelmann, H.D.: An independent benchmarking of SDO and SOCP solvers. Math. Program. 95(2), 407–430 (2003)

  28. Németh, A., Németh, S.: Lattice-like subsets of Euclidean Jordan algebras. arXiv preprint arXiv:1401.3581 (2014)

  29. Nesterov, Y., Nemirovskii, A., Ye, Y.: Interior-Point Polynomial Algorithms in Convex Programming, vol. 13. SIAM, Philadelphia (1994)

  30. Packard, A., Doyle, J.: The complex structured singular value. Automatica 29(1), 71–109 (1993)

  31. Papachristodoulou, A., Anderson, J., Valmorbida, G., Prajna, S., Seiler, P., Parrilo, P.: SOSTOOLS version 3.00 sum of squares optimization toolbox for MATLAB. arXiv preprint arXiv:1310.4716 (2013)

  32. Pataki, G.: Strong duality in conic linear programming: facial reduction and extended duals. Comput. Anal. Math, pp. 613–634 (2013)

  33. Pataki, G., Schmieta, S.: The DIMACS library of semidefinite-quadratic-linear programs. http://dimacs.rutgers.edu/Challenges/Seventh/Instances (1999). Accessed Dec 2018

  34. Permenter, F.: Reduction methods in semidefinite and conic optimization. Ph.D. thesis, MIT. http://hdl.handle.net/1721.1/114005 (2018). Accessed Dec 2018

  35. Permenter, F., Parrilo, P.A.: Finding sparse, equivalent SDPs via minimal-coordinate-projections. In: IEEE 54th Annual Conference on Decision and Control (CDC). IEEE (2015)

  36. Schrijver, A.: A comparison of the Delsarte and Lovász bounds. IEEE Trans. Inf. Theory 25(4), 425–429 (1979)

  37. Seiler, P.: SOSOPT: A toolbox for polynomial optimization. arXiv preprint arXiv:1308.1889 (2013)

  38. Størmer, E.: Positive Linear Maps of Operator Algebras. Springer, Berlin (2013)

  39. Størmer, E., Effros, E.G.: Positive projections and Jordan structure in operator algebras. Math. Scand. 45, 127–138 (1979)

  40. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11(1–4), 625–653 (1999)

  41. Vallentin, F.: Symmetry in semidefinite programs. Linear Algebra Appl. 430(1), 360–369 (2009)

  42. Wedderburn, J.M.: On hypercomplex numbers. Proc. Lond. Math. Soc. 2(1), 77–118 (1908)

  43. Weisfeiler, B.: On Construction and Identification of Graphs. Springer, Berlin (1977)

Acknowledgements

We thank Etienne de Klerk for useful discussions during the beginning stages of this work. We also thank anonymous referees for comments that improved our presentation.

Author information

Corresponding author

Correspondence to Frank Permenter.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Proof of Theorem 2.1

We now prove Theorem 2.1, which states that a subspace \({\mathcal {S}} \subseteq {\mathbb {S}}^n\) is a Jordan subalgebra if and only if its orthogonal projection \(P_{{\mathcal {S}}}\) is unital and positive. Analogues for complex Jordan algebras are well known; see [38, 39] and also the thesis [24]. One direction is also shown in [28]. The converse direction is shown in part by translating an argument of [38] from the complex case to the real case. Since they are short and self-contained, we give full proofs of both directions.
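As a quick numerical illustration of the theorem (ours, not part of the original argument), consider the Jordan subalgebra of diagonal matrices in \({\mathbb {S}}^n\): its orthogonal projection under the trace inner product zeroes the off-diagonal entries, and one can check directly that this projection is unital and positive. The dimension and random seed below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# S = diagonal matrices, a Jordan subalgebra of S^n (closed under X -> X^2).
# Its orthogonal projection w.r.t. the trace inner product zeroes the
# off-diagonal entries.
def P(X):
    return np.diag(np.diag(X))

# Unital: P fixes the identity matrix.
assert np.allclose(P(np.eye(n)), np.eye(n))

# Positive: P maps PSD matrices to PSD matrices (the diagonal of a PSD
# matrix is non-negative).
A = rng.standard_normal((n, n))
X = A @ A.T                      # a random PSD matrix
assert np.linalg.eigvalsh(P(X)).min() >= -1e-12
```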

To begin, we need the following lemma relating invariance under squaring to eigenvalue decompositions.

Lemma 7.1

For a non-zero \(X \in {\mathbb {S}}^n\), let \(E_X \subset {\mathbb {S}}^n\) be the set of pairwise orthogonal idempotent matrices for which

$$\begin{aligned} X = \sum _{ E \in E_X} \lambda _E E, \end{aligned}$$

where the range of \(E \in E_X\) is an eigenspace of X and \({\{\lambda _E\}}_{E \in E_X}\) is the set of non-zero (distinct) eigenvalues of X. For a subspace \({\mathcal {S}} \subseteq {\mathbb {S}}^n\), the following are equivalent.

  1. \({\mathcal {S}}\) contains the set \(E_X\) for all non-zero \(X \in {\mathcal {S}}\).

  2. \({\mathcal {S}}\) is invariant under squaring, i.e., \({\mathcal {S}} \supseteq \{ X^2 : X \in {\mathcal {S}} \}\).

Proof

That statement one implies two is immediate given that \(X^2 = \sum _{ E \in E_X } \lambda ^2_E E\). Conversely, suppose X has non-zero eigenvalue \(\lambda \) of maximum magnitude. Then, if statement two holds, the idempotent \({\hat{E}} = \lim _{n \rightarrow \infty }{(|\lambda |^{-1} X )}^{2n}\) is contained in \({\mathcal {S}}\) and has range equal to an eigenspace or, if \(\pm |\lambda |\) are both eigenvalues, the sum of two eigenspaces. Replacing X with \(X-\lambda {\hat{E}}\) and iterating yields a set of idempotents whose span contains \(E_X\); moreover, this set is contained in \({\mathcal {S}}\). \(\square \)

We now use this lemma and the aforementioned argument of [38] to prove Theorem 2.1.

To prove (\(2 \Rightarrow 1)\), consider \(X \succeq 0\) and suppose \(P_{{\mathcal {S}}}(X)\) is non-zero. For a non-zero eigenvalue \(\lambda _E\) of \(P_{{\mathcal {S}}}(X)\), let \(E \in {\mathbb {S}}^n\) denote the idempotent with range equal to the associated eigenspace. If (2) holds, then Lemma 7.1 implies \(P_{{\mathcal {S}}}(E) = E\). Hence,

$$\begin{aligned} 0 \le E\cdot X = P_{{\mathcal {S}}}(E)\cdot X = E\cdot P_{{\mathcal {S}}}(X) = \lambda _E ||E ||^2. \end{aligned}$$

We conclude the eigenvalues of \(P_{{\mathcal {S}}}(X)\) are non-negative, i.e., that \(P_{{\mathcal {S}}}(X) \succeq 0\). To show the unitality condition, let Z be a matrix in \({\mathcal {S}}\) of maximum rank and let

$$\begin{aligned} {\hat{E}}= \sum _{ E \in E_Z} E. \end{aligned}$$

For all \(X \in {\mathcal {S}}\), it holds that \(t {\hat{E}} \succeq X^2\) for some \(t > 0\). This shows the range of \({\hat{E}}\) contains the range of \(X^2\) and hence the range of X. It follows \({\hat{E}} X=X\).

To prove (\(1 \Rightarrow 2)\), suppose the unit element E has rank r. Then we can find an orthogonal matrix \(Q = (Q_1, Q_2) \in {\mathbb {R}}^{n \times n}\) for which \(E=Q_{1} Q_{1}^T\) and

$$\begin{aligned} {\mathcal {S}} = \left\{ Q \left( \begin{array}{cc} X &{} 0 \\ 0 &{} 0 \end{array} \right) Q^T : X \in \mathcal {{\hat{S}}} \subseteq {\mathbb {S}}^{r}\right\} , \end{aligned}$$

where \(\mathcal {{\hat{S}}} := Q_{1}^T {\mathcal {S}} Q_{1}\). Further, the projection \(P_{{\mathcal {S}}}\) satisfies

$$\begin{aligned} P_{{\mathcal {S}}}(X) = Q_{1} Q_{1}^T P_{\mathcal {{\hat{S}}} }(X) Q_{1} Q_{1}^T \end{aligned}$$

where \(P_{\mathcal {{\hat{S}}} } : {\mathbb {S}}^{r} \rightarrow {\mathbb {S}}^{r}\) is the orthogonal projection onto \(\mathcal {{\hat{S}}}\). It follows that if \(\mathcal {{\hat{S}}}\) is invariant under squaring, so is \({\mathcal {S}}\), and if \(P_{{\mathcal {S}}}\) is positive, so is \(P_{\mathcal {{\hat{S}}} }\). Hence, Statement 2 follows by showing \(\mathcal {{\hat{S}}}\) is invariant under squaring.

We show this by applying the argument from [38, Theorem 2.2.2], using the fact that \(\mathcal {{\hat{S}}}\) contains the identity matrix of order r. Dropping the subscript \(\mathcal {{\hat{S}}}\) from \(P_{\mathcal {{\hat{S}}}}\), we first note that, since P is positive and \(P(I)=I\), it satisfies the Kadison inequality, which states \(P(X^2) -P(X)P(X) \succeq 0\) for all \(X \in {\mathbb {S}}^r\) (e.g., Theorem 2.3.4 of [4]). Hence, for X in the range of P,

$$\begin{aligned} P(X^2) -X^2 \succeq 0. \end{aligned}$$

Letting \(Z=P(X^2) -X^2\) and taking the trace shows

$$\begin{aligned} {{\,\mathrm{Tr}\,}}Z = I\cdot Z = P(I)\cdot Z = I\cdot P(Z) = {{\,\mathrm{Tr}\,}}\big (P^2(X^2) -P(X^2)\big ) = {{\,\mathrm{Tr}\,}}\big ( P(X^2) -P(X^2) \big ) = 0. \end{aligned}$$

Since \(Z \succeq 0\), we conclude \(Z=0\), i.e., that \(P(X^2)=X^2\). Therefore \(X^2\) is in the range of P.
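The Kadison inequality and the trace argument above can be checked numerically for a concrete unital positive projection, here the projection onto diagonal matrices (an illustrative choice playing the role of \(P_{\mathcal {{\hat{S}}}}\), not the paper's construction).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
X = (A + A.T) / 2

# P = projection onto diagonal matrices: unital, positive, with the
# identity in its range, so the Kadison inequality applies.
def P(M):
    return np.diag(np.diag(M))

# Kadison gap Z = P(X^2) - P(X)^2 is PSD for any symmetric X.
Z = P(X @ X) - P(X) @ P(X)
assert np.linalg.eigvalsh(Z).min() >= -1e-12

# For X already in the range of P (i.e., diagonal), the trace argument
# forces the gap to vanish: P(X^2) = X^2.
D = P(X)
assert np.allclose(P(D @ D) - D @ D, 0)
```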

1.2 Invariant affine sets of projections

Recall that Conditions 2.1-(b) and 2.1-(c) require invariance of the affine sets \(Y+{\mathcal {L}}\) and \(C+\mathcal {L}^{\perp }\) under the projection \(P_{{\mathcal {S}}}\). We now prove the characterization of these conditions provided by Lemma 3.1.

Lemma 7.2

For an affine set \(Y + {\mathcal {L}}\), let \(Y_{{\mathcal {L}}^{\perp }} \in {\mathbb {S}}^n\) denote the projection of \(Y\in {\mathbb {S}}^n\) onto the subspace \({\mathcal {L}}^{\perp }\). Let \(P_{{\mathcal {S}}} : {\mathbb {S}}^n \rightarrow {\mathbb {S}}^n\) denote the orthogonal projection onto a subspace \({\mathcal {S}}\) of \({\mathbb {S}}^n\). The following statements are equivalent.

  1. \(P_{{\mathcal {S}}}(Y + {\mathcal {L}}) \subseteq Y + {\mathcal {L}}\)

  2. \(P_{{\mathcal {S}}}(Y_{{\mathcal {L}}^{\perp }}) = Y_{{\mathcal {L}}^{\perp }}\) and \(P_{{\mathcal {S}}}({\mathcal {L}}) \subseteq {\mathcal {L}}\)

Proof

To begin, first note \(P_{{\mathcal {S}}}\)—being an orthogonal projection—is a contraction with respect to the Frobenius norm \(\Vert X \Vert _{F}\) (recalling our use of the trace inner-product); further, \(Y_{{\mathcal {L}}^{\perp }}\) is the unique minimizer of this norm over \(Y + {\mathcal {L}}\). Hence, if \(P_{{\mathcal {S}}}(Y + {\mathcal {L}}) \subseteq Y + {\mathcal {L}}\), then \(P_{{\mathcal {S}}}(Y_{{\mathcal {L}}^{\perp }}) = Y_{{\mathcal {L}}^{\perp }}\); in addition, since \(Y + {\mathcal {L}} = Y_{{\mathcal {L}}^{\perp }} + {\mathcal {L}}\),

$$\begin{aligned} Y_{{\mathcal {L}}^{\perp }} + P_{{\mathcal {S}}}({\mathcal {L}}) = P_{{\mathcal {S}}}( Y_{{\mathcal {L}}^{\perp }} +{\mathcal {L}}) \subseteq Y_{{\mathcal {L}}^{\perp }} + {\mathcal {L}}, \end{aligned}$$

which implies \(P_{{\mathcal {S}}}({\mathcal {L}}) \subseteq {\mathcal {L}}\). The converse direction is obvious given that \(Y + {\mathcal {L}} = Y_{{\mathcal {L}}^{\perp }} + {\mathcal {L}}\). \(\square \)
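The minimum-norm fact used in this proof, that \(Y_{{\mathcal {L}}^{\perp }}\) uniquely minimizes the Frobenius norm over \(Y + {\mathcal {L}}\), admits a direct numerical check; the one-dimensional \({\mathcal {L}}\) and the matrices below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# L = span{B}, a one-dimensional subspace of S^n; Y a generic symmetric matrix.
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2

# Projection of Y onto L^perp under the trace inner product <A, C> = Tr(AC).
Y_perp = Y - (np.trace(Y @ B) / np.trace(B @ B)) * B

# Y_perp minimizes the Frobenius norm over the affine set Y + L = {Y + tB},
# the fact the proof combines with P_S being a contraction.
for t in np.linspace(-2.0, 2.0, 101):
    assert np.linalg.norm(Y + t * B) >= np.linalg.norm(Y_perp) - 1e-9
```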

If we apply the previous lemma to both the primal and dual affine sets we obtain the conditions \(P_{{\mathcal {S}}}({\mathcal {L}}) \subseteq {\mathcal {L}}\) and \(P_{{\mathcal {S}}}({\mathcal {L}}^{\perp }) \subseteq {\mathcal {L}}^{\perp }\). However, Lemma 3.1 only contains one of these conditions, since they turn out to be equivalent. Consider the following.

Lemma 7.3

[15, Proposition 3.8] Let \(P_{{\mathcal {L}}} : {\mathbb {S}}^n \rightarrow {\mathbb {S}}^n\) and \(P_{{\mathcal {S}}}: {\mathbb {S}}^n \rightarrow {\mathbb {S}}^n\) denote the orthogonal projections onto subspaces \({\mathcal {L}}\) and \({\mathcal {S}}\) of \({\mathbb {S}}^n\). The following four statements are equivalent.

  • \({\mathcal {L}}\) is an invariant subspace of \(P_{{\mathcal {S}}}\)

  • \({\mathcal {L}}^{\perp }\) is an invariant subspace of \(P_{{\mathcal {S}}}\)

  • \({\mathcal {S}}\) is an invariant subspace of \(P_{{\mathcal {L}}}\)

  • \({\mathcal {S}}^{\perp }\) is an invariant subspace of \(P_{{\mathcal {L}}}\)

Combining these two lemmas proves Lemma 3.1.
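For coordinate subspaces, all four statements of Lemma 7.3 can be verified at once; the sketch below (an illustrative example, not a proof) uses the standard characterization that the range of a projection Q is invariant under a projection P exactly when \(PQ = QPQ\).

```python
import numpy as np

# Coordinate subspaces of R^3: L = span{e1, e3}, S = span{e1, e2}.
PL = np.diag([1.0, 0.0, 1.0])
PS = np.diag([1.0, 1.0, 0.0])

def invariant(P, Q):
    # range(Q) is invariant under P  <=>  P Q = Q P Q
    return np.allclose(P @ Q, Q @ P @ Q)

I = np.eye(3)
checks = [invariant(PS, PL),         # L invariant under P_S
          invariant(PS, I - PL),     # L^perp invariant under P_S
          invariant(PL, PS),         # S invariant under P_L
          invariant(PL, I - PS)]     # S^perp invariant under P_L
assert all(checks)                   # the four statements hold together
```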

1.3 Linear images of self-dual cones

The following was used to prove Proposition 2.6.

Lemma 7.4

Let \({\mathcal {W}}\) and \({\mathcal {V}}\) be inner-product spaces and \(\mathcal {C}\subseteq {\mathcal {V}} \) and \({\mathcal {K}}\subseteq {\mathcal {W}}\) self-dual convex cones. Let \(T : {\mathcal {V}} \rightarrow {\mathcal {W}}\) be an injective linear map with adjoint \(T^* : {\mathcal {W}} \rightarrow {\mathcal {V}}\). If \({\mathcal {K}}= T(\mathcal {C})\), then \(T^*T(\mathcal {C}) = \mathcal {C}\).

Proof

For all \(x,y \in \mathcal {C}\),

$$\begin{aligned} \langle T^*T(x), y \rangle = \langle T(x), T(y) \rangle \ge 0 \end{aligned}$$

by self-duality of \({\mathcal {K}}\). By self-duality of \(\mathcal {C}\), we conclude \(T^*T(x) \in \mathcal {C}\). On the other hand, since T is injective, \(T^*\) is surjective; hence, for any \(x \in \mathcal {C}\) there exists \(w\in {\mathcal {W}}\) for which \(x=T^*w\). Further, for all \(y \in \mathcal {C}\),

$$\begin{aligned} 0 \le \langle T^*w, y\rangle = \langle w, T y\rangle \end{aligned}$$

which, since \({\mathcal {K}}= T(\mathcal {C})\), shows \(w \in {\mathcal {K}}\). Hence, \(w=Tz\) for some \(z \in \mathcal {C}\), showing \(x = T^*Tz\). \(\square \)
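A minimal numerical illustration of the lemma (our example, not from the text): take \(\mathcal {C}= {\mathcal {K}}\) to be the nonnegative orthant in \({\mathbb {R}}^2\), which is self-dual, and T an injective scale-and-swap map satisfying \(T(\mathcal {C}) = {\mathcal {K}}\).

```python
import numpy as np

# C = K = nonnegative orthant in R^2 (self-dual under the usual inner
# product). T scales and swaps coordinates, so T is injective and T(C) = K.
T = np.array([[0.0, 2.0],
              [3.0, 0.0]])

TstarT = T.T @ T                     # the adjoint of T is T^T here
rng = np.random.default_rng(4)
for _ in range(100):
    x = rng.random(2)                # a point of C
    y = TstarT @ x
    assert (y >= 0).all()            # T^*T(C) is contained in C
    # Conversely, x = T^*T(z) for z = (T^*T)^{-1} x, and z lies in C:
    z = np.linalg.solve(TstarT, x)
    assert (z >= -1e-12).all()
```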

About this article

Cite this article

Permenter, F., Parrilo, P.A. Dimension reduction for semidefinite programs via Jordan algebras. Math. Program. 181, 51–84 (2020). https://doi.org/10.1007/s10107-019-01372-5
