1 Introduction

1.1 Matrix Factorization Ranks

A factorization of a matrix \(A \in \mathbb {R}^{m \times n}\) over a sequence \(\{K^d\}_{d\in \mathbb {N}}\) of cones that are each equipped with an inner product \(\langle \cdot ,\cdot \rangle \) is a decomposition of the form \(A=(\langle X_i,Y_j\rangle )\) with \(X_i, Y_j \in K^d\) for all \((i,j)\in [m]\times [n]\), for some integer \(d\in \mathbb {N}\). Following [34], the smallest integer d for which such a factorization exists is called the cone factorization rank of A over \(\{K^d\}\).
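As a small numerical illustration of this definition (a sketch assuming numpy, not taken from the paper), take \(K^d=\mathbb {R}^d_+\) with the usual inner product: any choice of nonnegative factors \(X_i,Y_j\in \mathbb {R}^d_+\) produces an entrywise nonnegative matrix \(A=(\langle X_i,Y_j\rangle )\), certifying that its cone factorization rank over \(\{\mathbb {R}^d_+\}\) is at most d:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 3, 4, 5

# Nonnegative factors: row i of X is X_i, row j of Y is Y_j, both in R^d_+.
X = rng.random((m, d))
Y = rng.random((n, d))

# A = (<X_i, Y_j>)_{i,j}: each entry is an inner product of nonnegative
# vectors, so A is entrywise nonnegative.
A = X @ Y.T

assert np.all(A >= 0)
# d factors suffice, so rank_+(A) <= d; the ordinary rank is a lower bound.
assert np.linalg.matrix_rank(A) <= d
```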

The cones \(K^d\) we use in this paper are the nonnegative orthant \(\mathbb {R}^d_+\) with the usual inner product and the cone \(\mathrm {S}^d_+\) (resp., \(\mathrm {H}^d_+\)) of \(d\times d\) real symmetric (resp., Hermitian) positive semidefinite matrices with the trace inner product \(\langle X, Y \rangle = \mathrm {Tr}(X^\textsf {T}Y)\) (resp., \(\langle X, Y \rangle = \mathrm {Tr}(X^* Y)\)). We obtain the nonnegative rank, denoted \({{\,\mathrm{rank}\,}}_+(A)\), which uses the cones \(K^d=\mathbb {R}^d_+\), and the positive semidefinite rank, denoted \(\hbox {psd-rank}_\mathbb {K}(A)\), which uses the cones \(K^d=\mathrm {S}^d_+\) for \(\mathbb {K}= \mathbb {R}\) and \(K^d=\mathrm {H}^d_+\) for \(\mathbb {K}=\mathbb {C}\). Both the nonnegative rank and the positive semidefinite rank are defined whenever A is entrywise nonnegative.

The study of the nonnegative rank is largely motivated by the groundbreaking work of Yannakakis [78], who showed that the linear extension complexity of a polytope P is given by the nonnegative rank of its slack matrix. The linear extension complexity of P is the smallest integer d for which P can be obtained as the linear image of an affine section of the nonnegative orthant \(\mathbb {R}^d_+\). The slack matrix of P is given by the matrix \((b_i-a_i^\mathsf{T}v)_{v\in V,i\in I}\), where \(P= \text {conv}(V)\) and \(P= \{x: a_i^\mathsf{T}x\le b_i\ (i\in I)\}\) are the point and hyperplane representations of P. Analogously, the semidefinite extension complexity of P is the smallest d such that P is the linear image of an affine section of the cone \(\mathrm {S}^d_+\) and it is given by the (real) positive semidefinite rank of its slack matrix [34].
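To make the slack matrix concrete, here is a small sketch (assuming numpy; the polytope is our illustrative choice) for the unit square \(P=[0,1]^2\), with V its four vertices and four facet inequalities \(a_i^\mathsf{T}x\le b_i\):

```python
import numpy as np

# Unit square P = [0,1]^2.
# Vertex representation P = conv(V): one vertex per row.
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
# Halfspace representation a_i^T x <= b_i: -x1 <= 0, -x2 <= 0, x1 <= 1, x2 <= 1.
a = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]])
b = np.array([0, 0, 1, 1])

# Slack matrix (b_i - a_i^T v), rows indexed by vertices v, columns by facets i.
S = b[None, :] - V @ a.T

assert np.all(S >= 0)                 # slacks of a polytope are nonnegative
assert np.linalg.matrix_rank(S) == 3  # rank of a slack matrix is dim(P) + 1
```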

The motivation to study the linear and semidefinite extension complexities is that polytopes with small extension complexity admit efficient algorithms for linear optimization. Well-known examples include spanning tree polytopes [54] and permutahedra [32], which have polynomial linear extension complexity, and the stable set polytope of perfect graphs, which has polynomial semidefinite extension complexity [40] (see, e.g., the surveys [18, 25]). The above connection to the nonnegative rank and to the positive semidefinite rank of the slack matrix can be used to show that a polytope does not admit a small extended formulation. Recently, this connection was used to show that the linear extension complexities of the traveling salesman, cut, and stable set polytopes are exponential in the number of nodes [29], and this result was extended to their semidefinite extension complexities in [51]. Surprisingly, the linear extension complexity of the matching polytope is also exponential [66], even though linear optimization over this set is polynomial time solvable [23]. It is an open question whether the semidefinite extension complexity of the matching polytope is exponential.
The motivation to study the linear and semidefinite extension complexities is that polytopes with small extension complexity admit efficient algorithms for linear optimization. Well-known examples include spanning tree polytopes [54] and permutahedra [32], which have polynomial linear extension complexity, and the stable set polytope of perfect graphs, which has polynomial semidefinite extension complexity [40] (see, e.g., the surveys [18, 25]). The above connection to the nonnegative rank and to the positive semidefinite rank of the slack matrix can be used to show that a polytope does not admit a small extended formulation. Recently, this connection was used to show that the linear extension complexities of the traveling salesman, cut, and stable set polytopes are exponential in the number of nodes [29], and this result was extended to their semidefinite extension complexities in [51]. Surprisingly, the linear extension complexity of the matching polytope is also exponential [66], even though linear optimization over this polytope is solvable in polynomial time [23]. It is an open question whether the semidefinite extension complexity of the matching polytope is exponential.

Besides this link to extension complexity, the nonnegative rank also finds applications in probability theory and in communication complexity, and the positive semidefinite rank has applications in quantum information theory and in quantum communication complexity (see, e.g., [24, 29, 42, 55]).

For square symmetric matrices (\(m=n\)), we are also interested in symmetric analogs of the above matrix factorization ranks, where we require the same factors for the rows and columns (i.e., \(X_i = Y_i\) for all \(i\in [n]\)). The symmetric analog of the nonnegative rank is the completely positive rank, denoted \(\hbox {cp-rank}(A)\), which uses the cones \(K^d = \mathbb {R}_+^d\), and the symmetric analog of the positive semidefinite rank is the completely positive semidefinite rank, denoted \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {K}(A)\), which uses the cones \(K^d=\mathrm {S}^d_+\) if \(\mathbb {K}=\mathbb {R}\) and \(K^d=\mathrm {H}^d_+\) if \(\mathbb {K}=\mathbb {C}\). These symmetric factorization ranks are not always well defined since not every symmetric nonnegative matrix admits a symmetric factorization by nonnegative vectors or positive semidefinite matrices. The symmetric matrices for which these parameters are well defined form convex cones known as the completely positive cone, denoted \(\hbox {CP}^n\), and the completely positive semidefinite cone, denoted \(\mathrm {CS}_{+}^n\). We have the inclusions \(\hbox {CP}^n \subseteq \mathrm {CS}_{+}^n \subseteq \mathrm {S}_+^n\), which are known to be strict for \(n\ge 5\). For details on these cones see [6, 17, 50] and references therein.

Motivation for the cones \(\hbox {CP}^n\) and \(\mathrm {CS}_{+}^n\) comes in particular from their use to model classical and quantum information optimization problems. For instance, graph parameters such as the stability number and the chromatic number can be written as linear optimization problems over the completely positive cone [45], and the same holds, more generally, for quadratic problems with mixed binary variables [13]. The \(\hbox {cp-rank}\) is widely studied in the linear algebra community; see, e.g., [6, 10, 68, 69].

The completely positive semidefinite cone was first studied in [50] to describe quantum analogs of the stability number and of the chromatic number of a graph. This was later extended to general graph homomorphisms in [72] and to graph isomorphism in [2]. In addition, as shown in [53, 72], there is a close connection between the completely positive semidefinite cone and the set of quantum correlations. This also gives a relation between the completely positive semidefinite rank and the minimal entanglement dimension necessary to realize a quantum correlation. This connection has been used in [38, 62, 63] to construct matrices whose completely positive semidefinite rank is exponentially large in the matrix size. For the special case of synchronous quantum correlations, the minimum entanglement dimension is directly given by the completely positive semidefinite rank of a certain matrix (see [37]).

The following inequalities relate the nonnegative rank and the positive semidefinite rank: we have

$$\begin{aligned} \hbox {psd-rank}_\mathbb {C}(A)\le \hbox {psd-rank}_\mathbb {R}(A) \le {{\,\mathrm{rank}\,}}_+(A) \le \mathrm {min}\{m,n\} \end{aligned}$$

for any \(m\times n\) nonnegative matrix A and \(\hbox {cp-rank}(A)\le \left( {\begin{array}{c}n+1\\ 2\end{array}}\right) \) for any \(n\times n\) completely positive matrix A. However, the situation for the cpsd-rank is very different. Exploiting the connection between the completely positive semidefinite cone and quantum correlations it follows from results in [73] that the cone \(\mathrm {CS}_{+}^n\) is not closed for \(n\ge 1942\). The results in [22] show that this already holds for \(n\ge 10\). As a consequence, there does not exist an upper bound on the \(\hbox {cpsd-rank}\) as a function of the matrix size. For small matrix sizes, very little is known. It is an open problem whether \(\mathrm {CS}_{+}^5\) is closed, and we do not even know how to construct a \(5 \times 5\) matrix whose cpsd-rank exceeds 5.
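The inequality \(\hbox {psd-rank}_\mathbb {R}(A)\le {{\,\mathrm{rank}\,}}_+(A)\) above can be seen via a simple diagonal construction. The following sketch (assuming numpy; the data are illustrative) turns a nonnegative factorization \(A_{ij}=a_i^\mathsf{T}b_j\) into a positive semidefinite factorization with \(X_i=\mathrm {Diag}(a_i)\) and \(Y_j=\mathrm {Diag}(b_j)\):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 3, 4, 4
a = rng.random((m, d))   # nonnegative row factors a_i in R^d_+
b = rng.random((n, d))   # nonnegative column factors b_j in R^d_+
A = a @ b.T              # nonnegative factorization, so rank_+(A) <= d

# Diagonal psd factors: X_i = Diag(a_i) and Y_j = Diag(b_j) lie in S^d_+.
X = [np.diag(ai) for ai in a]
Y = [np.diag(bj) for bj in b]

# Tr(X_i^T Y_j) = sum_k a_i[k] * b_j[k] = A_ij, so psd-rank_R(A) <= d as well.
Apsd = np.array([[np.trace(Xi.T @ Yj) for Yj in Y] for Xi in X])
assert np.allclose(A, Apsd)
```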

The \({{\,\mathrm{rank}\,}}_+\), \(\hbox {cp-rank}\), and \(\text {psd-rank}\) are known to be computable; this follows using results from [65] since upper bounds exist on these factorization ranks that depend only on the matrix size; see [5] for a proof for the case of the \(\hbox {cp-rank}\). However, computing the nonnegative rank is NP-hard [76]. In fact, determining the \({{\,\mathrm{rank}\,}}_+\) and \(\hbox {psd-rank}\) of a matrix are both equivalent to the existential theory of the reals [70, 71]. For the cp-rank and the cpsd-rank, no such results are known, but there is no reason to assume they are any easier. In fact, it is not even clear whether the cpsd-rank is computable in general.

To obtain upper bounds on the factorization rank of a given matrix, one can employ heuristics that try to construct small factorizations. Many such heuristics exist for the nonnegative rank (see the overview [30] and references therein), factorization algorithms exist for completely positive matrices (see the recent paper [39], also [20] for structured completely positive matrices), and algorithms to compute positive semidefinite factorizations are presented in the recent work [75]. In this paper, we want to compute lower bounds on matrix factorization ranks, which we achieve by employing a relaxation approach based on (noncommutative) polynomial optimization.

1.2 Contributions and Connections to Existing Bounds

In this work, we provide a unified approach to obtain lower bounds on the four matrix factorization ranks mentioned above, based on tools from (noncommutative) polynomial optimization.

We sketch the main ideas of our approach in Sect. 1.4 below, after having introduced some necessary notation and preliminaries about (noncommutative) polynomials in Sect. 1.3. We then indicate in Sect. 1.5 how our approach relates to the more classical use of polynomial optimization dealing with the minimization of polynomials over basic closed semialgebraic sets. The main body of the paper consists of four sections each dealing with one of the four matrix factorization ranks. We start with presenting our approach for the completely positive semidefinite rank and then explain how to adapt this to the other ranks.

For our results, we need several technical tools about linear forms on spaces of polynomials, both in the commutative and noncommutative setting. To ease the readability of the paper, we group these technical tools in Appendix A. Moreover, we provide full proofs, so that our paper is self-contained. In addition, some of the proofs might differ from the customary ones in the literature since our treatment in this paper is consistently on the ‘moment’ side rather than using real algebraic results about sums of squares.

In Sect. 2, we introduce our approach for the completely positive semidefinite rank. We start by defining a hierarchy of lower bounds

$$\begin{aligned} {\xi _{1}^{\mathrm {cpsd}}}(A) \le {\xi _{2}^{\mathrm {cpsd}}}(A) \le \ldots \le {\xi _{t}^{\mathrm {cpsd}}}(A)\le \ldots \le {{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A), \end{aligned}$$

where \({\xi _{t}^{\mathrm {cpsd}}}(A)\), for \(t \in \mathbb {N}\), is given as the optimal value of a semidefinite program whose size increases with t. Not much is known about lower bounds for the cpsd-rank in the literature. The inequality \(\sqrt{{{\,\mathrm{rank}\,}}(A)} \le {{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\) is known, which follows by viewing a Hermitian \(d\times d\) matrix as a \(d^2\)-dimensional real vector, and an analytic lower bound is given in [62]. We show that the new parameter \({\xi _{1}^{\mathrm {cpsd}}}(A)\) is at least as good as this analytic lower bound, and we give a small example where a strengthening of \({\xi _{2}^{\mathrm {cpsd}}}(A)\) is strictly better than both of the above-mentioned generic lower bounds. Currently, we lack evidence that the lower bounds \({\xi _{t}^{\mathrm {cpsd}}}(A)\) can be larger than, for example, the matrix size, but this could be because small matrices with large cpsd-rank are hard to construct or might not even exist. We also introduce several ideas leading to strengthenings of the basic bounds \({\xi _{t}^{\mathrm {cpsd}}}(A)\).

We then adapt these ideas to the other three matrix factorization ranks discussed above, where for each of them we obtain analogous hierarchies of bounds.

For the nonnegative rank and the completely positive rank, much more is known about lower bounds. The best-known generic lower bounds are due to Fawzi and Parrilo [26, 27]. In [27], the parameters \(\tau _+(A)\) and \(\tau _{\mathrm {cp}}(A)\) are defined, which, respectively, lower bound the nonnegative rank and the \(\hbox {cp-rank}\), along with their computable semidefinite programming relaxations \(\tau _\mathrm {+}^\mathrm {sos}(A)\) and \(\tau _\mathrm {cp}^\mathrm {sos}(A)\). In [27] it is also shown that \(\tau _+(A)\) is at least as good as certain norm-based lower bounds. In particular, \(\tau _+(\cdot )\) is at least as good as the \(\ell _\infty \) norm-based lower bound, which was used by Rothvoß [66] to show that the matching polytope has exponential linear extension complexity. In [26] it is shown that for the Frobenius norm, the square of the norm-based bound is still a lower bound on the nonnegative rank, but it is not known how this lower bound compares to \(\tau _+(\cdot )\).

Fawzi and Parrilo [27] use the atomicity of the nonnegative and completely positive ranks to derive the parameters \(\tau _+(A)\) and \(\tau _{\mathrm {cp}}(A)\); i.e., they use the fact that the nonnegative rank (cp-rank) of A is equal to the smallest d for which A can be written as the sum of d nonnegative (positive semidefinite) rank one matrices. As the \(\hbox {psd-rank}\) and \(\hbox {cpsd-rank}\) are not known to admit atomic formulations, the techniques from [27] do not extend directly to these factorization ranks. However, our approach via polynomial optimization captures these factorization ranks as well.

In Sects. 3 and 4, we construct semidefinite programming hierarchies of lower bounds \({\xi _{t}^{\mathrm {cp}}}(A)\) and \({\xi _{t}^{\mathrm {+}}}(A)\) on \(\hbox {cp-rank}(A)\) and \({{\,\mathrm{rank}\,}}_+(A)\), respectively. We show that the bounds \({\xi _{t}^{\mathrm {+}}}(A)\) converge to \(\tau _+(A)\) as \(t \rightarrow \infty \). The basic hierarchy \(\{{\xi _{t}^{\mathrm {cp}}}(A)\}\) for the cp-rank does not converge to \(\tau _{\mathrm {cp}}(A)\) in general, but we provide two types of additional constraints that can be added to the program defining \({\xi _{t}^{\mathrm {cp}}}(A)\) to ensure convergence to \(\tau _{\mathrm {cp}}(A)\). First, we show how a generalization of the tensor constraints that are used in the definition of the parameter \(\tau _{\mathrm {cp}}^\mathrm {sos}(A)\) can be used for this, and we also give a more efficient (using smaller matrix blocks) description of these constraints. This strengthening of \({\xi _{2}^{\mathrm {cp}}}(A)\) is then at least as strong as \(\tau _{\mathrm {cp}}^\mathrm {sos}(A)\), but requires matrix variables of roughly half the size. Alternatively, we show that for every \(\varepsilon >0\) there is a finite number of additional linear constraints that can be added to the basic hierarchy \(\{{\xi _{t}^{\mathrm {cp}}}(A)\}\) so that the limit of the resulting sequence of strengthened lower bounds is at least \(\tau _{\mathrm {cp}}(A)-\varepsilon \). We give numerical results on small matrices studied in the literature, which show that \({\xi _{3}^{\mathrm {+}}}(A)\) can improve over \(\tau _{+}^\mathrm {sos}(A)\).

Finally, in Sect. 5, we derive a hierarchy \(\{{\xi _{t}^{\mathrm {psd}}}(A)\}\) of lower bounds on the psd-rank. We compare the new bounds \({\xi _{t}^{\mathrm {psd}}}(A)\) to a bound from [52], and we provide some numerical examples illustrating their performance.

We provide two implementations of all the lower bounds introduced in this paper, available with the arXiv submission of this paper. One implementation uses Matlab and the CVX package [36], and the other uses Julia [8]. The implementations support various semidefinite programming solvers; for our numerical examples we used Mosek [56].

1.3 Preliminaries

In order to explain our basic approach in the next section, we first need to introduce some notation. We denote the set of all words in the symbols \(x_1,\ldots ,x_n\) by \(\langle \mathbf{x}\rangle = \langle x_1, \ldots , x_n \rangle \), where the empty word is denoted by 1. This is a semigroup with involution, where the binary operation is concatenation, and the involution of a word \(w\in \langle \mathbf{x}\rangle \) is the word \(w^*\) obtained by reversing the order of the symbols in w. The \(*\)-algebra of all real linear combinations of these words is denoted by \(\mathbb {R}\langle \mathbf{x} \rangle \), and its elements are called noncommutative polynomials. The involution extends to \(\mathbb {R}\langle \mathbf{x}\rangle \) by linearity. A polynomial \(p\in \mathbb {R}\langle \mathbf{x}\rangle \) is called symmetric if \(p^*=p\) and \(\mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \) denotes the set of symmetric polynomials. The degree of a word \(w\in \langle \mathbf{x}\rangle \) is the number of symbols composing it, denoted as |w| or \(\deg (w)\), and the degree of a polynomial \(p=\sum _wp_ww\in \mathbb {R}\langle \mathbf{x}\rangle \) is the maximum degree of a word w with \(p_w\ne 0\). Given \(t\in \mathbb {N}\cup \{\infty \}\), we let \(\langle \mathbf{x} \rangle _t\) be the set of words w of degree \(|w| \le t\), so that \(\langle \mathbf{x} \rangle _\infty =\langle \mathbf{x}\rangle \), and \(\mathbb {R}\langle \mathbf{x} \rangle _t\) is the real vector space of noncommutative polynomials p of degree \(\mathrm {deg}(p) \le t\). Given \(t \in \mathbb {N}\), we let \(\langle \mathbf{x} \rangle _{=t}\) be the set of words of degree exactly equal to t.
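These word operations are straightforward to model concretely. Below is a minimal sketch (plain Python with numpy; the encoding of words as tuples of symbol indices is our illustrative choice) of the involution, the degree, the truncated sets \(\langle \mathbf{x}\rangle _t\), and the evaluation of a word at a tuple of matrices:

```python
import numpy as np
from itertools import product

# A word in <x_1,...,x_n> is a tuple of symbol indices; () is the empty word 1.
def star(w):
    """Involution w -> w*: reverse the order of the symbols."""
    return w[::-1]

def degree(w):
    """The degree |w| of a word is the number of its symbols."""
    return len(w)

def words_up_to(n, t):
    """The truncated set <x>_t of words of degree at most t in n symbols."""
    return [w for k in range(t + 1) for w in product(range(n), repeat=k)]

def evaluate(w, Xs):
    """w(X_1,...,X_n): substitute the matrix X_i for x_i and multiply."""
    M = np.eye(Xs[0].shape[0])
    for i in w:
        M = M @ Xs[i]
    return M

w = (0, 1, 1)                                  # the word x_1 x_2 x_2
assert star(w) == (1, 1, 0) and degree(w) == 3
assert len(words_up_to(2, 2)) == 1 + 2 + 4     # |<x>_2| = 7 for n = 2 symbols
assert np.allclose(evaluate((0, 0), [2 * np.eye(2), np.eye(2)]),
                   4 * np.eye(2))              # (2I)(2I) = 4I
```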

For a set \(S\subseteq \mathrm {Sym} \,\mathbb {R}\langle \mathbf{x}\rangle \) and \(t\in \mathbb {N}\cup \{\infty \}\), the truncated quadratic module at degree 2t associated to S is defined as the cone generated by all polynomials \(p^*g p \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}\) with \(g\in S\cup \{1\}\):

$$\begin{aligned} {{\mathscr {M}}}_{2t}(S)=\mathrm {cone}\Big \{p^*gp: p\in \mathbb {R}\langle \mathbf{x}\rangle , \ g\in S\cup \{1\},\ \deg (p^*gp)\le 2t\Big \}. \end{aligned}$$

Likewise, for a set \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), we can define the truncated ideal at degree 2t, denoted by \(\mathscr {I}_{2t}(T)\), as the vector space spanned by all polynomials \(p h \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}\) with \(h \in T\):

$$\begin{aligned} {\mathscr {I}}_{2t}(T) = \mathrm {span}\big \{ ph : p \in \mathbb {R}\langle \mathbf {x}\rangle , \, h \in T, \, \mathrm {deg}(ph) \le 2t \big \}. \end{aligned}$$

We say that \({{\mathscr {M}}}(S) + {{\mathscr {I}}}(T)\) is Archimedean when there exists a scalar \(R>0\) such that

$$\begin{aligned} R-\sum _{i=1}^n x_i^2\in {\mathscr {M}}(S)+ {\mathscr {I}}(T). \end{aligned}$$

Throughout, we are interested in the space \(\mathbb {R}\langle \mathbf{x} \rangle _t^*\) of real-valued linear functionals on \(\mathbb {R}\langle \mathbf{x} \rangle _t\). We list some basic definitions: A linear functional \(L \in \mathbb {R}\langle \mathbf{x} \rangle _t^*\) is symmetric if \(L(w) = L(w^*)\) for all \(w \in \langle \mathbf{x} \rangle _t\) and tracial if \(L(ww') = L(w'w)\) for all \(w,w' \in \langle \mathbf{x} \rangle _t\). A linear functional \(L \in \mathbb {R}\langle \mathbf{x} \rangle _{2t}^*\) is said to be positive if \(L(p^*p) \ge 0\) for all \(p \in \mathbb {R}\langle \mathbf{x} \rangle _t\). Many properties of a linear functional \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) can be expressed as properties of its associated moment matrix (also known as its Hankel matrix). For \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) we define its associated moment matrix, which has rows and columns indexed by words in \(\langle \mathbf{x}\rangle _t\), by

$$\begin{aligned} M_t(L)_{w,w'} = L(w^* w') \quad \text {for} \quad w,w' \in \langle \mathbf{x}\rangle _t, \end{aligned}$$

and as usual we set \(M(L) = M_\infty (L)\). It then follows that L is symmetric if and only if \(M_t(L)\) is symmetric, and L is positive if and only if \(M_t(L)\) is positive semidefinite. In fact, one can even express nonnegativity of a linear form \(L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) on \({{\mathscr {M}}}_{2t}(S)\) in terms of certain associated positive semidefinite moment matrices. For this, given a polynomial \(g\in \mathbb {R}\langle \mathbf{x}\rangle \), define the linear form \(gL \in \mathbb {R}\langle \mathbf{x}\rangle _{2t-\deg (g)}^*\) by \((gL)(p)=L(gp)\). Then, we have

$$\begin{aligned} L(p^*gp)\ge 0 \text { for all } p\in \mathbb {R}\langle \mathbf{x}\rangle _{t-d_g} \iff M_{t-d_g}(gL)\succeq 0, \quad (d_g = \lceil \deg (g)/2\rceil ), \end{aligned}$$

and thus \(L\ge 0\) on \({{\mathscr {M}}}_{2t}(S)\) if and only if \(M_{t-d_{g}}(gL) \succeq 0\) for all \(g\in S \cup \{1\}\). Also, the condition \(L=0\) on \({{\mathscr {I}}}_{2t}(T)\) corresponds to linear equalities on the entries of \(M_t(L)\).

The moment matrix also allows us to define a property called flatness. For \(t \in \mathbb {N}\), a linear functional \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) is called \(\delta \)-flat if the rank of \(M_t(L)\) is equal to that of its principal submatrix indexed by the words in \(\langle \mathbf{x}\rangle _{t-\delta }\), that is,

$$\begin{aligned} {{\,\mathrm{rank}\,}}(M_t(L))={{\,\mathrm{rank}\,}}(M_{t-\delta }(L)). \end{aligned}$$

We call \(L\) flat if it is \(\delta \)-flat for some \(\delta \ge 1\). When \(t=\infty \), L is said to be flat when \(\mathrm {rank}(M(L))<\infty \), which is equivalent to \({{\,\mathrm{rank}\,}}(M(L))={{\,\mathrm{rank}\,}}(M_s(L))\) for some \(s\in \mathbb {N}\).

A key example of a flat symmetric tracial positive linear functional on \(\mathbb {R}\langle \mathbf{x}\rangle \) is given by the trace evaluation at a given matrix tuple \(\mathbf {X}= (X_1,\ldots ,X_n) \in (\mathrm {H}^d)^n\):

$$\begin{aligned} p \mapsto \mathrm {Tr}(p(\mathbf {X})). \end{aligned}$$

Here, \(p(\mathbf {X})\) denotes the matrix obtained by substituting \(x_i\) by \(X_i\) in p, and throughout \(\mathrm {Tr}(\cdot )\) denotes the usual matrix trace, which satisfies \(\mathrm {Tr}(I) = d\) where I is the identity matrix in \(\mathrm {H}^d\). We mention in passing that we use \(\mathrm {tr}(\cdot )\) to denote the normalized matrix trace, which satisfies \(\mathrm {tr}(I) = 1\) for \(I \in \mathrm {H}^d\). Throughout, we use \(L_\mathbf {X}\) to denote the real part of the above functional, that is, \(L_\mathbf {X}\) denotes the linear form on \(\mathbb {R}\langle \mathbf{x}\rangle \) defined by

$$\begin{aligned} L_{\mathbf{X}}(p) = \mathrm {Re}( \mathrm {Tr}(p(X_1,\ldots ,X_n))) \quad \text {for} \quad p \in \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$

Observe that \(L_\mathbf {X}\) too is a symmetric tracial positive linear functional on \(\mathbb {R}\langle \mathbf{x}\rangle \). Moreover, \(L_\mathbf {X}\) is nonnegative on \({{\mathscr {M}}}(S)\) if the matrix tuple \(\mathbf {X}\) is taken from the matrix positivity domain \({\mathscr {D}}(S)\) associated to the finite set \(S \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \), defined as

$$\begin{aligned} {\mathscr {D}}(S)=\bigcup _{d\ge 1} \Big \{\mathbf {X}=(X_1,\ldots ,X_n)\in (\mathrm {H}^d)^n: g(\mathbf {X})\succeq 0 \text { for } g\in S\Big \}. \end{aligned}$$

Similarly, the linear functional \(L_\mathbf {X}\) is zero on \({{\mathscr {I}}}(T)\) if the matrix tuple \(\mathbf {X}\) is taken from the matrix variety \(\mathscr {V}(T)\) associated to the finite set \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), defined as

$$\begin{aligned} {\mathscr {V}}(T) = \bigcup _{d\ge 1} \big \{\mathbf {X}\in (\mathrm {H}^d)^n : h(\mathbf {X}) = 0 \text { for all } h \in T\big \}. \end{aligned}$$
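A quick numerical sanity check (assuming numpy; the matrices are random illustrative data) that trace evaluation at symmetric matrices yields a symmetric, tracial, positive functional: we evaluate \(L_\mathbf {X}\) on low-degree words and verify that the moment matrix \(M_1(L_\mathbf {X})\), indexed by the words \(1, x_1, x_2\), is symmetric positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 2

# Random real symmetric matrices X_1, X_2, so that w*(X) = w(X)^T.
Xs = []
for _ in range(n):
    B = rng.standard_normal((d, d))
    Xs.append((B + B.T) / 2)

def L(w):
    """Trace evaluation L_X(w) = Tr(w(X_1,...,X_n)) for a word w (tuple)."""
    M = np.eye(d)
    for i in w:
        M = M @ Xs[i]
    return np.trace(M)

# Tracial: L(w w') = L(w' w), by cyclicity of the trace.
assert np.isclose(L((0, 1)), L((1, 0)))
# Symmetric: L(w) = L(w*), checked on w = x_1 x_1 x_2 with w* = x_2 x_1 x_1.
assert np.isclose(L((0, 0, 1)), L((1, 0, 0)))

# Moment matrix M_1(L), rows/columns indexed by the words 1, x_1, x_2:
# entry (w, w') is L(w* w'), i.e., Tr(w(X)^T w'(X)), a Gram matrix.
words = [(), (0,), (1,)]
M1 = np.array([[L(w[::-1] + v) for v in words] for w in words])
assert np.allclose(M1, M1.T)                    # L is symmetric
assert np.all(np.linalg.eigvalsh(M1) >= -1e-9)  # L is positive
```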

To discuss convergence properties of our lower bounds for matrix factorization ranks, we will need to consider infinite-dimensional analogs of matrix algebras, namely \(C^*\)-algebras admitting a tracial state. Let us introduce some basic notions we need about \(C^*\)-algebras; see, e.g., [9] for details. For our purposes, we define a \(C^*\)-algebra to be a norm-closed \(*\)-subalgebra of the complex algebra \({{\mathscr {B}}}({\mathscr {H}})\) of bounded operators on a complex Hilbert space \({\mathscr {H}}\). In particular, we have \(\Vert a^*a\Vert = \Vert a\Vert ^2\) for all elements a in the algebra. Such an algebra \({{\mathscr {A}}}\) is said to be unital if it contains the identity operator (denoted 1). For instance, any full complex matrix algebra \(\mathbb {C}^{d\times d}\) is a unital \(C^*\)-algebra. Moreover, by a fundamental result of Artin and Wedderburn, any \(C^*\)-algebra that is finite dimensional (as a vector space) is \(*\)-isomorphic to a direct sum \(\bigoplus _{m=1}^M \mathbb {C}^{d_m\times d_m}\) of full complex matrix algebras [3, 77]. In particular, any finite dimensional \(C^*\)-algebra is unital.

An element b in a \(C^*\)-algebra \({{\mathscr {A}}}\) is called positive, denoted \(b\succeq 0\), if it is of the form \(b=a^*a\) for some \(a\in {{\mathscr {A}}}\). For finite sets \(S \subseteq \mathrm {Sym} \,\mathbb {R}\langle \mathbf{x}\rangle \) and \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), the \(C^*\)-algebraic analogs of the matrix positivity domain and matrix variety are the sets

$$\begin{aligned} {\mathscr {D}}_{{\mathscr {A}}}(S)&= \big \{\mathbf{X}=(X_1,\ldots ,X_n) \in \mathscr {A}^n : X_i^* = X_i \text { for } i \in [n], \, g(\mathbf{X}) \succeq 0 \text { for } g \in S \big \},\\ {\mathscr {V}}_{{\mathscr {A}}}(T)&= \big \{\mathbf{X}=(X_1,\ldots ,X_n) \in \mathscr {A}^n : X_i^* = X_i \text { for } i \in [n], \, h(\mathbf{X}) = 0 \text { for } h \in T \big \}. \end{aligned}$$

A state \(\tau \) on a unital \(C^*\)-algebra \({{\mathscr {A}}}\) is a linear form on \({{\mathscr {A}}}\) that is positive, i.e., \(\tau (a^*a)\ge 0\) for all \(a\in {{\mathscr {A}}}\), and satisfies \(\tau (1)=1\). Since \({{\mathscr {A}}}\) is a complex algebra, every state \(\tau \) is Hermitian: \(\tau (a) = \tau (a^*)\) for all \(a \in {{\mathscr {A}}}\). We say that a state is tracial if \(\tau (ab) = \tau (ba)\) for all \(a,b \in {\mathscr {A}}\) and faithful if \(\tau (a^*a)=0\) implies \(a=0\). A useful fact is that on a full matrix algebra \(\mathbb {C}^{d\times d}\) the normalized matrix trace is the unique tracial state (see, e.g., [15]). Now, given a tuple \(\mathbf {X}=(X_1,\ldots ,X_n)\in {{\mathscr {A}}}^n\) in a \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \), the second key example of a symmetric tracial positive linear functional on \(\mathbb {R}\langle \mathbf{x}\rangle \) is given by the trace evaluation map, which we again denote by \(L_\mathbf {X}\) and which is defined by

$$\begin{aligned} L_\mathbf {X}(p)=\tau (p(X_1,\ldots ,X_n)) \quad \text {for all} \quad p\in \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$

1.4 Basic Approach

To explain the basic idea of how we obtain lower bounds for matrix factorization ranks, we consider the case of the completely positive semidefinite rank. Given a minimal factorization \(A=(\mathrm {Tr}(X_iX_j))\), with \(d={{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\) and \(\mathbf {X}=(X_1,\ldots ,X_n)\) in \((\mathrm {H}_+^d)^n\), consider the linear form \(L_{\mathbf{X}}\) on \(\mathbb {R}\langle \mathbf{x}\rangle \) as defined in (5):

$$\begin{aligned} L_{\mathbf{X}}(p) = \mathrm {Re}( \mathrm {Tr}(p(X_1,\ldots ,X_n))) \quad \text {for} \quad p \in \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$

Then, we have \(A=(L_{\mathbf {X}}(x_ix_j))\) and \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A) = d=L_{\mathbf {X}}(1)\). To obtain lower bounds on \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\), we minimize L(1) over a set of linear functionals L that satisfy certain computationally tractable properties of \(L_{\mathbf{X}}\). Note that this idea of minimizing L(1) has recently been used in the works [59, 74] in the commutative setting to derive a hierarchy of lower bounds converging to the nuclear norm of a symmetric tensor.

The above linear functional \(L_{\mathbf{X}}\) is symmetric and tracial. Moreover, it satisfies some positivity conditions, since we have \(L_{\mathbf{X}}(q) \ge 0\) whenever \(q(\mathbf{X})\) is positive semidefinite. It follows that \(L_{\mathbf{X}}(p^*p) \ge 0\) for all \(p\in \mathbb {R}\langle \mathbf{x}\rangle \) and, as we explain later, \(L_{\mathbf{X}}\) satisfies the localizing conditions \(L_{\mathbf{X}}(p^*(\sqrt{A_{ii}} x_i - x_i^2)p) \ge 0\) for all p and i. Truncating the linear form yields the following hierarchy of lower bounds:

$$\begin{aligned} {\xi _{t}^{\mathrm {cpsd}}}(A) = \mathrm {min} \Big \{ L(1) : \;&L \in \mathbb {R}\langle x_1,\ldots ,x_n \rangle _{2t}^* \text { tracial and symmetric},\\&L(x_ix_j) = A_{ij} \quad \text {for} \quad i,j \in [n],\\&L \ge 0\quad \text {on} \quad {\mathscr {M}}_{2t}\big ( \{\sqrt{A_{11}} x_1-x_1^2, \ldots ,\sqrt{A_{nn}} x_n-x_n^2 \}\big )\Big \}. \end{aligned}$$

The bound \({\xi _{t}^{\mathrm {cpsd}}}(A)\) is computationally tractable (for small t). Indeed, as was explained in Sect. 1.3, the localizing constraint “\(L\ge 0\) on \({\mathscr {M}}_{2t}(S)\)” can be enforced by requiring certain matrices, whose entries are determined by L, to be positive semidefinite. This makes the problem defining \({\xi _{t}^{\mathrm {cpsd}}}(A)\) into a semidefinite program. The localizing conditions ensure the Archimedean property of the quadratic module, which allows us to establish certain convergence properties of the bounds \({\xi _{t}^{\mathrm {cpsd}}}(A)\).
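For \(t=1\) the program can be written down fully explicitly. Writing \(y_i=L(x_i)\), the moment matrix is \(M_1(L)=\begin{pmatrix} L(1) &{} y^\mathsf{T} \\ y &{} A \end{pmatrix}\), and the only localizing constraints of degree at most 2 read \(L(\sqrt{A_{ii}}x_i-x_i^2)\ge 0\), i.e., \(y_i\ge \sqrt{A_{ii}}\). For positive definite A, a Schur complement then reduces \({\xi _{1}^{\mathrm {cpsd}}}(A)\) to the small convex problem \(\min \{y^\mathsf{T}A^{-1}y : y_i\ge \sqrt{A_{ii}}\}\). The following sketch (assuming numpy and scipy; the function name is ours) solves this reduced problem:

```python
import numpy as np
from scipy.optimize import minimize

def xi1_cpsd(A):
    """Sketch of the level-1 bound for positive definite A.

    By a Schur complement in M_1(L) = [[L(1), y^T], [y, A]] >= 0, the
    program reduces to: minimize y^T A^{-1} y subject to y_i >= sqrt(A_ii),
    where y_i = L(x_i) and the bound constraints encode the localizing
    conditions L(sqrt(A_ii) x_i - x_i^2) >= 0.
    """
    Ainv = np.linalg.inv(A)
    lb = np.sqrt(np.diag(A))
    res = minimize(lambda y: y @ Ainv @ y, x0=lb,
                   jac=lambda y: 2 * Ainv @ y,
                   bounds=[(l, None) for l in lb], method="L-BFGS-B")
    return res.fun

# For A = I_n the bound evaluates to n, matching cpsd-rank(I_n) = n.
assert abs(xi1_cpsd(np.eye(2)) - 2.0) < 1e-6
assert abs(xi1_cpsd(np.eye(3)) - 3.0) < 1e-6
```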

The above approach extends naturally to the other matrix factorization ranks, using the following two basic ideas. First, since the cp-rank and the nonnegative rank deal with factorizations by diagonal matrices, we use linear functionals acting on classical commutative polynomials. Second, the asymmetric factorization ranks (psd-rank and nonnegative rank) can be seen as analogs of the symmetric ranks in the partial matrix setting, where we know only the values of L on the quadratic monomials corresponding to entries in the off-diagonal blocks (this will require scaling of the factors in order to be able to define localizing constraints ensuring the Archimedean property). A main advantage of our approach is that it applies to all four matrix factorization ranks, after easy suitable adaptations.

1.5 Connection to Polynomial Optimization

In classical polynomial optimization, the problem is to find the global minimum of a commutative polynomial f over a semialgebraic set of the form

$$\begin{aligned} D(S) = \{x \in \mathbb {R}^n : g(x) \ge 0 \text { for } g \in S\}, \end{aligned}$$

where \(S \subseteq \mathbb {R}[\mathbf {x}] = \mathbb {R}[x_1,\ldots ,x_n]\) is a finite set of polynomials. Tracial polynomial optimization is a noncommutative analog, where the problem is to minimize the normalized trace \(\mathrm {tr}(f(\mathbf {X}))\) of a symmetric polynomial f over a matrix positivity domain \({\mathscr {D}}(S)\), where \(S \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \) is a finite set of symmetric polynomials. Notice that the distinguishing feature here is the dimension independence: the optimization is over all possible matrix sizes. Perhaps counterintuitively, in this paper, we use techniques similar to those used for the tracial polynomial optimization problem to compute lower bounds on factorization dimensions.

For classical polynomial optimization Lasserre [46] and Parrilo [60] have proposed hierarchies of semidefinite programming relaxations based on the theory of moments and the dual theory of sums of squares polynomials. These can be used to compute successively better lower bounds converging to the global minimum (under the Archimedean condition). This approach has been used in a wide range of applications and there is an extensive literature (see, e.g., [1, 47, 49]). Most relevant to this work, it is used in [48] to design conic approximations of the completely positive cone and in [58] to check membership in the completely positive cone. This approach has also been extended to the noncommutative setting, first to the eigenvalue optimization problem [57, 61] (which will not play a role in this paper), and later to tracial optimization [14, 43].

For our paper, the moment formulation of the lower bounds is most relevant: For all \(t \in \mathbb {N}\cup \{\infty \}\), we can define the bounds

$$\begin{aligned} f_t&=\mathrm {inf}_{}\big \{L(f) : L\in \mathbb {R}[\mathbf{x}]_{2t}^*,\, L(1)=1,\, L\ge 0 \text { on } {{\mathscr {M}}}_{2t}(S)\big \}, \\ f_t^\mathrm {tr}&=\mathrm {inf}_{}\big \{L(f) : L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^* \text { tracial and symmetric},\, L(1)=1,\, L \ge 0 \text { on } {{\mathscr {M}}}_{2t}(S)\big \}, \end{aligned}$$

where \(f_t\) (resp., \(f_t^\mathrm {tr}\)) lower bounds the (tracial) polynomial optimization problem.

The connection between the parameters \({\xi _{t}^{\mathrm {cpsd}}}(A)\) and \(f_t^\mathrm {tr}\) is now clear: in the former we do not have the normalization property “\(L(1)=1\)” but we do have the additional affine constraints “\(L(x_i x_j) = A_{ij}\)”. This close relation to (tracial) polynomial optimization allows us to use that theory to understand the convergence properties of our bounds. Since throughout the paper we use (proof) techniques from (tracial) polynomial optimization, we will state the main convergence results we need, with full proofs, in Appendix A. Moreover, we give all proofs from the “moment side”, which is most relevant to our treatment. Below we give a short summary of the convergence results for the hierarchies \(\{f_t\}\) and \(\{f_t^\mathrm {tr}\}\) that are relevant to our paper. We refer to Appendix A.3 for details.

Under the condition that \({{\mathscr {M}}}(S)\) is Archimedean, we have asymptotic convergence: \(f_t \rightarrow f_\infty \) and \(f_t^\mathrm {tr} \rightarrow f_\infty ^\mathrm {tr}\) as \(t \rightarrow \infty \). In the commutative setting, one can moreover show that \(f_\infty \) is equal to the global minimum of f over the set D(S). However, in the noncommutative setting, the parameter \(f_\infty ^\mathrm {tr}\) is in general not equal to the minimum of \(\mathrm {tr}(f(\mathbf {X}))\) over \(\mathbf {X}\in {\mathscr {D}}(S)\). Instead, we need to consider the \(C^*\)-algebraic version of the tracial polynomial optimization problem: one can show that

$$\begin{aligned} f_\infty ^\mathrm {tr}= \mathrm {inf} \big \{ \tau (f(\mathbf{X})) : \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S), \, {\mathscr {A}} \text { is a unital }C^*\text { -algebra with tracial state } \tau \big \}. \end{aligned}$$

An important additional convergence result holds under flatness. If the program defining the bound \(f_t\) (resp., \(f_t^\mathrm {tr}\)) admits a sufficiently flat optimal solution, then equality holds: \(f_t = f_\infty \) (resp., \(f_t^\mathrm {tr} = f_\infty ^\mathrm {tr}\)). Moreover, in this case, the parameter \(f_t^\mathrm {tr}\) is equal to the minimum value of \(\mathrm {tr}(f(\mathbf {X}))\) over the matrix positivity domain \({\mathscr {D}}(S)\).

2 Lower Bounds on the Completely Positive Semidefinite Rank

Let A be a completely positive semidefinite \(n \times n\) matrix. For \(t \in \mathbb {N}\cup \{\infty \}\), we consider the following semidefinite program, which, as we see below, lower bounds the complex completely positive semidefinite rank of A:

$$\begin{aligned} {\xi _{t}^{\mathrm {cpsd}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}\langle x_1,\ldots ,x_n \rangle _{2t}^* \text { tracial and symmetric},\\&L(x_ix_j) = A_{ij} \quad \text {for} \quad i,j \in [n],\\&L \ge 0\quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{\mathrm {cpsd}}) \big \}, \end{aligned}$$

where we set

$$\begin{aligned} S_A^{\mathrm {cpsd}}= \big \{\sqrt{A_{11}} x_1 - x_1^2, \ldots , \sqrt{A_{nn}} x_n - x_n^2\big \}. \end{aligned}$$

Additionally, define the parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\), obtained by adding the rank constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to the program defining \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\), where we consider the infimum instead of the minimum since we do not know whether the infimum is always attained. (In Proposition 1 we show the infimum is attained in \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for \(t\in \mathbb {N}\cup \{\infty \}\)). This gives a hierarchy of monotone nondecreasing lower bounds on the completely positive semidefinite rank:

$$\begin{aligned} {\xi _{1}^{\mathrm {cpsd}}}(A) \le \ldots \le {\xi _{t}^{\mathrm {cpsd}}}(A)\le \ldots \le {\xi _{\infty }^{\mathrm {cpsd}}}(A) \le {\xi _{*}^{\mathrm {cpsd}}}(A)\le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A). \end{aligned}$$

The inequality \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\le {\xi _{*}^{\mathrm {cpsd}}}(A)\) is clear, and so is monotonicity: if L is feasible for \({\xi _{k}^{\mathrm {cpsd}}}(A)\) with \(t \le k \le \infty \), then its restriction to \(\mathbb {R}\langle \mathbf{x}\rangle _{2t}\) is feasible for \({\xi _{t}^{\mathrm {cpsd}}}(A)\).

The following notion of localizing polynomials will be useful. A set \(S\subseteq \mathbb {R}\langle \mathbf{x}\rangle \) is said to be localizing at a matrix tuple \(\mathbf {X}\) if \(\mathbf {X}\in {\mathscr {D}}(S)\) (i.e., \(g(\mathbf {X})\succeq 0\) for all \(g\in S\)) and we say that S is localizing for A if S is localizing at some factorization \(\mathbf {X}\in (\mathrm {H}_+^d)^n\) of A with \(d={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). The set \(S_A^{\mathrm {cpsd}}\) as defined in (7) is localizing for A, and, in fact, it is localizing at any factorization \(\mathbf {X}\) of A by Hermitian positive semidefinite matrices. Indeed, since

$$\begin{aligned} A_{ii}={{\,\mathrm{Tr}\,}}(X_i^2)\ge \lambda _{\mathrm {max}}(X_i^2) = \lambda _{\mathrm {max}} (X_i)^2 \end{aligned}$$

we have \(\sqrt{A_{ii}} X_i - X_i^2 \succeq 0\) for all \(i\in [n]\).
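This localizing property is easy to verify numerically. The following sketch (Python/NumPy, on a randomly generated real symmetric factorization; the instance is illustrative, not from the paper) checks that the polynomials in \(S_A^{\mathrm {cpsd}}\) evaluate to positive semidefinite matrices at a Gram factorization of A:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3

# A random Gram factorization A = (Tr(X_i X_j)) by real symmetric PSD matrices
# (an illustrative instance; any PSD factorization of a cpsd matrix works).
Xs = [B @ B.T for B in rng.standard_normal((n, d, d))]
A = np.array([[np.trace(Xi @ Xj) for Xj in Xs] for Xi in Xs])

# Each polynomial sqrt(A_ii) x_i - x_i^2 in S_A^cpsd is PSD at X, since
# A_ii = Tr(X_i^2) >= lambda_max(X_i)^2 and X_i commutes with itself.
for i, Xi in enumerate(Xs):
    G = np.sqrt(A[i, i]) * Xi - Xi @ Xi
    assert np.linalg.eigvalsh(G).min() >= -1e-8
```

The same check goes through verbatim for Hermitian factors, replacing the transpose by the conjugate transpose.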

We can now use this to show the inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). For this set \(d = {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\), let \(\mathbf {X}\in (\mathrm {H}_+^d)^n\) be a Gram factorization of A, and consider the linear form \(L_\mathbf {X}\in \mathbb {R}\langle \mathbf {x}\rangle ^*\) defined by

$$\begin{aligned} L_\mathbf {X}(p) = \mathrm {Re}(\mathrm {Tr}(p(\mathbf {X}))) \quad \text {for all} \quad p \in \mathbb {R}\langle \mathbf {x}\rangle . \end{aligned}$$

By construction \(L_\mathbf {X}\) is symmetric and tracial, and we have \(A=(L(x_ix_j))\). Moreover, since the set of polynomials \(S_A^{\mathrm {cpsd}}\) is localizing for A, the linear form \(L_\mathbf {X}\) is nonnegative on \({\mathscr {M}}(S_A^{\mathrm {cpsd}})\). Finally, we have \({{\,\mathrm{rank}\,}}(M(L_\mathbf {X}))<\infty \), since the algebra generated by \(X_1, \ldots , X_n\) is finite dimensional. Hence, \(L_\mathbf {X}\) is feasible for \({\xi _{*}^{\mathrm {cpsd}}}(A)\) with \(L_\mathbf {X}(1)=d\), which shows \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).
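To make the construction of \(L_\mathbf {X}\) concrete, the following sketch (Python/NumPy, real symmetric factors for simplicity, so the real part is not needed; the random instance is illustrative) evaluates \(L_\mathbf {X}\) on words of degree at most 2 and checks that the truncated moment matrix indexed by the words \(1, x_1, \ldots , x_n\) is positive semidefinite, being the Gram matrix of \((I, X_1, \ldots , X_n)\) in the trace inner product:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 3, 4
Xs = [B @ B.T for B in rng.standard_normal((n, d, d))]
A = np.array([[np.trace(Xi @ Xj) for Xj in Xs] for Xi in Xs])

# Values of L_X(p) = Tr(p(X)) on words of degree <= 2.
L1 = float(d)                                # L_X(1) = Tr(I_d) = d
Lx = np.array([np.trace(Xi) for Xi in Xs])   # L_X(x_i)
Lxx = A                                      # L_X(x_i x_j) = Tr(X_i X_j) = A_ij

# Truncated moment matrix indexed by the words 1, x_1, ..., x_n:
# M[u, v] = L_X(u* v), the Gram matrix of (I, X_1, ..., X_n).
M = np.block([[np.array([[L1]]), Lx[None, :]],
              [Lx[:, None], Lxx]])
assert np.linalg.eigvalsh(M).min() >= -1e-8  # M is PSD
assert np.allclose(Lxx, A) and L1 == d       # (L_X(x_i x_j)) = A, L_X(1) = d
```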

The inclusions in (8) below show the quadratic module \({{\mathscr {M}}}(S_A^{\mathrm {cpsd}})\) is Archimedean (recall the definition in (3)). Moreover, although there are other possible choices for the localizing polynomials to use in \(S_A^{\mathrm {cpsd}}\), these inclusions also show that the choice made in (7) leads to the largest truncated quadratic module and thus to the best bound. For any scalar \(c > 0\), we have the inclusions

$$\begin{aligned} {{\mathscr {M}}}_{2t}(x,c-x) \subseteq {{\mathscr {M}}}_{2t}(x,c^2-x^2) \subseteq {{\mathscr {M}}}_{2t}(cx-x^2) \subseteq {{\mathscr {M}}}_{2t+2}(x,c-x), \end{aligned}$$

which hold in light of the following identities:

$$\begin{aligned} c-x&= \big ((c-x)^2 + c^2-x^2\big )/(2c), \end{aligned}$$
$$\begin{aligned} c^2 - x^2&= (c-x)^2 + 2(cx - x^2), \end{aligned}$$
$$\begin{aligned} cx - x^2&= \big ((c-x) x (c-x) + x(c-x)x\big )/c, \end{aligned}$$
$$\begin{aligned} x&= \big ( (cx - x^2) + x^2\big )/c. \end{aligned}$$
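Since x is a single variable, these four identities can be checked by substituting any symmetric matrix for x. A short NumPy sketch (random symmetric X and an arbitrary scalar c, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, c = 4, 2.5
B = rng.standard_normal((d, d))
X = (B + B.T) / 2          # a symmetric matrix plays the role of x
I = np.eye(d)

# The four identities behind the chain of inclusions, evaluated at X.
pairs = [
    (c * I - X, ((c * I - X) @ (c * I - X) + c**2 * I - X @ X) / (2 * c)),
    (c**2 * I - X @ X, (c * I - X) @ (c * I - X) + 2 * (c * X - X @ X)),
    (c * X - X @ X, ((c * I - X) @ X @ (c * I - X) + X @ (c * I - X) @ X) / c),
    (X, ((c * X - X @ X) + X @ X) / c),
]
for lhs, rhs in pairs:
    assert np.allclose(lhs, rhs)
```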

In the rest of this section, we investigate properties of the hierarchy \(\{{\xi _{t}^{\mathrm {cpsd}}}(A)\}\) as well as some variations on it. We discuss convergence properties, asymptotically and under flatness, and we give another formulation for the parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\). Moreover, as the inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\) is typically strict, we present an approach to strengthen the bounds in order to go beyond \({\xi _{*}^{\mathrm {cpsd}}}(A)\). Then, we propose some techniques to simplify the computation of the bounds, and we illustrate the behavior of the bounds on some examples.

2.1 The Parameters \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) and \({\xi _{*}^{\mathrm {cpsd}}}(A)\)

In this section, we consider convergence properties of the hierarchy \({\xi _{t}^{\mathrm {cpsd}}}(\cdot )\), both asymptotically and under flatness. We also give equivalent reformulations of the limiting parameters \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) and \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in terms of \(C^*\)-algebras with a tracial state, which we will use in Sects. 2.3 and 2.4 to show properties of these parameters.

Proposition 1

Let \(A \in \mathrm {CS}_{+}^n\). For \(t \in \mathbb {N}\cup \{\infty \}\) the optimum in \({\xi _{t}^{\mathrm {cpsd}}}(A)\) is attained, and

$$\begin{aligned} \lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{\infty }^{\mathrm {cpsd}}}(A). \end{aligned}$$

Moreover, \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) is equal to the smallest scalar \(\alpha \ge 0\) for which there exists a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \((X_1,\ldots ,X_n) \in {\mathscr {D}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\).


Proof

The sequence \(({\xi _{t}^{\mathrm {cpsd}}}(A))_t\) is monotonically nondecreasing and upper bounded by \({\xi _{\infty }^{\mathrm {cpsd}}}(A) <\infty \), which implies its limit exists and is at most \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\).

As \({\xi _{t}^{\mathrm {cpsd}}}(A)\le {\xi _{\infty }^{\mathrm {cpsd}}}(A)\), we may add the redundant constraint \(L(1) \le {\xi _{\infty }^{\mathrm {cpsd}}}(A)\) to the problem \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for every \(t \in \mathbb {N}\). By (10), we have \(\mathrm {Tr}(A) -\sum _ix_i^2 \in {{\mathscr {M}}}_2(S_A^{\mathrm {cpsd}})\). Hence, using the result of Lemma 13, the feasible region of \({\xi _{t}^{\mathrm {cpsd}}}(A)\) is compact, and thus, it has an optimal solution \(L_t\). Again by Lemma 13, the sequence \((L_t)\) has a pointwise converging subsequence with limit \(L \in \mathbb {R}\langle \mathbf {x}\rangle ^*\). This pointwise limit L is symmetric, tracial, satisfies \((L(x_ix_j)) = A\), and is nonnegative on \({\mathscr {M}}(S_A^{\mathrm {cpsd}})\). Hence, L is feasible for \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) with \(L(1)=\lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {cpsd}}}(A) \le {\xi _{\infty }^{\mathrm {cpsd}}}(A)\), which shows that L is optimal for \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) and that \(\lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{\infty }^{\mathrm {cpsd}}}(A)\).

The reformulation of \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) in terms of \(C^*\)-algebras with a tracial state follows directly using Theorem 1. \(\square \)

Next, we give some equivalent reformulations for the parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\), which follow as a direct application of Theorem 2. In general, we do not know whether the infimum in \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is attained. However, as a direct application of Corollary 1, we see that this infimum is attained if there is an integer \(t \in \mathbb {N}\) for which \({\xi _{t}^{\mathrm {cpsd}}}(A)\) admits a flat optimal solution.

Proposition 2

Let \(A \in \mathrm {CS}_{+}^n\). The parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is given by the infimum of L(1) taken over all conic combinations L of trace evaluations at elements in \({\mathscr {D}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) for which \(A=(L(x_ix_j))\). The parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is also equal to the infimum over all \(\alpha \ge 0\) for which there exist a finite dimensional \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \((X_1,\ldots ,X_n) \in {\mathscr {D}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\).

In addition, if \({\xi _{t}^{\mathrm {cpsd}}}(A)\) admits a flat optimal solution, then \({\xi _{t}^{\mathrm {cpsd}}}(A)= {\xi _{*}^{\mathrm {cpsd}}}(A)\).

Next we show a formulation for \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in terms of factorization by block-diagonal matrices, which helps explain why the inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\) is typically strict. Here \(\Vert \cdot \Vert \) is the operator norm, so that \(\Vert X\Vert = \lambda _\mathrm {max}(X)\) for \(X \succeq 0\).

Proposition 3

For \(A \in \mathrm {CS}_{+}^n\) we have

$$\begin{aligned} {\xi _{*}^{\mathrm {cpsd}}}(A) = \mathrm {inf} \Bigg \{ \sum _{m=1}^M d_m \cdot \underset{i \in [n]}{\mathrm {max}} \frac{\Vert X^m_{i}\Vert ^2}{A_{ii}} : \;&M \in \mathbb {N},\, d_1,\ldots ,d_M \in \mathbb {N}, \nonumber \\&X_i^m \in \mathrm {H}_+^{d_m} \text { for } i \in [n], m \in [M],\nonumber \\&A = \mathrm {Gram}\big (\oplus _{m=1}^M X_1^m,\ldots ,\oplus _{m = 1}^M X_n^m\big )\Bigg \}. \end{aligned}$$

Note that using matrices from \(\mathrm {S}_+^{d_m}\) instead of \(\mathrm {H}_+^{d_m}\) does not change the optimal value.


Proof

The proof uses the formulation of \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in terms of conic combinations of trace evaluations at matrix tuples in \({\mathscr {D}}(S_A^{\mathrm {cpsd}})\) as given in Proposition 2. We first show the inequality \(\beta \le {\xi _{*}^{\mathrm {cpsd}}}(A)\), where \(\beta \) denotes the optimal value of the program in (13).

For this, assume \(L\in \mathbb {R}\langle \mathbf{x}\rangle ^*\) is a conic combination of trace evaluations at elements of \({{\mathscr {D}}}(S_A^{\mathrm {cpsd}})\) such that \(A=(L(x_ix_j))\). We will construct a feasible solution for (13) with objective value L(1). The linear functional L can be written as

$$\begin{aligned} L=\sum _{m=1}^M \lambda _m L_{\mathbf Y^m}, \text { where } \lambda _m > 0 \text { and } \mathbf Y^m=(Y^m_1,\ldots ,Y^m_n) \in {{\mathscr {D}}}(S_A^{\mathrm {cpsd}}) \text { for } m \in [M]. \end{aligned}$$

Let \(d_m\) denote the size of the matrices \(Y_1^m, \ldots , Y_n^m\), so that \(L(1)=\sum _m \lambda _m d_m\). Since \(\mathbf Y^m \in {{\mathscr {D}}}(S_A^{\mathrm {cpsd}})\), we have \(Y^m_i \succeq 0\) and \(A_{ii}I-(Y^m_i)^2\succeq 0\) by identities (10) and (12). This implies \(\Vert Y^m_i\Vert ^2 \le A_{ii}\) for all \(i\in [n]\) and \(m \in [M]\). Define \(\mathbf X^m = \sqrt{\lambda _m} \, \mathbf Y^m\). Then, \(L(x_ix_j)= \sum _m {{\,\mathrm{Tr}\,}}(X^m_iX^m_j)\), so that the matrices \(\oplus _m X^m_1,\ldots ,\oplus _m X^m_n\) form a Gram decomposition of A. This gives a feasible solution to (13) with value

$$\begin{aligned} \sum _{m=1}^M d_m \cdot \underset{i\in [n]}{\mathrm {max}}\frac{\Vert X^m_i\Vert ^2}{A_{ii}} =\sum _{m=1}^M d_m \lambda _m \, \underset{i\in [n]}{\mathrm {max}}\frac{\Vert Y^m_i\Vert ^2}{A_{ii}} \le \sum _{m=1}^M d_m\lambda _m =L(1), \end{aligned}$$

which shows \(\beta \le L(1)\), and hence \(\beta \le {\xi _{*}^{\mathrm {cpsd}}}(A)\).

For the other direction, we assume

$$\begin{aligned} A = \mathrm {Gram}\big (\oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n\big ), \quad X^m_1,\ldots ,X^m_n \in \mathrm {S}^{d_m}_+ \ \text { for } m \in [M]. \end{aligned}$$

Set \(\lambda _m = \mathrm {max}_{i\in [n]} {\Vert X^m_i\Vert ^2/ A_{ii}}\), and define the linear form L by

$$\begin{aligned} L= \sum _{m=1}^M \lambda _m L_{\mathbf Y^m}, \quad \text {where} \quad \mathbf Y^m = \mathbf {X}^m / \sqrt{\lambda _m} \quad \text {for all} \quad m \in [M]. \end{aligned}$$

We have \(L(1)=\sum _m \lambda _m d_m\) and \(A=(L(x_ix_j))\), and thus it suffices to show that each matrix tuple \(\mathbf Y^m\) belongs to \({{\mathscr {D}}}(S_A^{\mathrm {cpsd}})\). For this we observe that \(\lambda _mA_{ii}\ge \Vert X^m_i\Vert ^2\). Therefore \(\lambda _m A_{ii} I \succeq (X_i^m)^2\), and thus \(A_{ii} I \succeq (Y_i^m)^2\), which implies \(\sqrt{A_{ii}} Y_i^m - (Y_i^m)^2 \succeq 0\). This shows \({\xi _{*}^{\mathrm {cpsd}}}(A) \le L(1)=\sum _m \lambda _m d_m\), and thus \({\xi _{*}^{\mathrm {cpsd}}}(A) \le \beta \). \(\square \)

We can say a bit more when the matrix A lies on an extreme ray of the cone \(\mathrm {CS}_{+}^n\): in the formulation from Proposition 3, it then suffices to restrict the minimization to factorizations of A consisting of a single block. However, we know very little about the extreme rays of \(\mathrm {CS}_{+}^n\), particularly in view of the recent result that the cone is not closed for large n [22, 73].

Proposition 4

If A lies on an extreme ray of the cone \(\mathrm {CS}_{+}^n\), then

$$\begin{aligned} {\xi _{*}^{\mathrm {cpsd}}}(A) = {\text {inf}} \left\{ d \cdot \underset{i \in [n]}{\mathrm {max}} \frac{\Vert X_{i}\Vert ^2}{A_{ii}} : d \in \mathbb {N}, X_1,\ldots ,X_n \in \mathrm {H}_+^{d}, \, A = \mathrm {Gram}\big (X_1, \ldots , X_n \big )\right\} . \end{aligned}$$

Moreover, if \(\oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n\) is a Gram decomposition of A providing an optimal solution to (13) and some block \(X^m_i\) has rank 1, then \({\xi _{*}^{\mathrm {cpsd}}}(A)={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).


Proof

Let \(\beta \) be the infimum in Proposition 4. The inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le \beta \) follows from the reformulation of \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in Proposition 3. To show the reverse inequality we consider a solution \( \oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n \) to (13), and set \(\lambda _m= \mathrm {max}_i\Vert X^m_i\Vert ^2/A_{ii}\). We will show \(\beta \le \sum _m d_m\lambda _m\). For this define the matrices \( A_m={{\,\mathrm{Gram}\,}}(X^m_1,\cdots ,X^m_n), \) so that \(A=\sum _m A_m\). As A lies on an extreme ray of \(\mathrm {CS}_{+}^n\), we must have \(A_m = \alpha _m A\) for some \(\alpha _m>0\) with \(\sum _m\alpha _m=1\). Hence, since

$$\begin{aligned} A=A_m/\alpha _m={{\,\mathrm{Gram}\,}}(X^m_1/\sqrt{\alpha _m}, \cdots , X^m_n/\sqrt{\alpha _m}), \end{aligned}$$

we have \(\beta \le d_m\lambda _m/\alpha _m\) for all \(m\in [M]\). It suffices now to use \(\sum _m \alpha _m=1\) to see that \(\mathrm {min}_m d_m\lambda _m/\alpha _m \le \sum _m d_m\lambda _m\). So we have shown \(\beta \le \mathrm {min}_m d_m\lambda _m/\alpha _m \le \sum _m d_m\lambda _m.\) This implies \(\beta \le {\xi _{*}^{\mathrm {cpsd}}}(A)\), and thus equality holds.

Assume now that \(\oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n\) is optimal to (13) and that there is a block \(X_i^m\) of rank 1. By Proposition 3 we have \(\sum _m d_m\lambda _m= {\xi _{*}^{\mathrm {cpsd}}}(A)\). From the argument just made above it follows that

$$\begin{aligned} {\xi _{*}^{\mathrm {cpsd}}}(A)= \mathrm {min}_m d_m\lambda _m/\alpha _m =\sum _m d_m \lambda _m. \end{aligned}$$

As \(\sum _m \alpha _m=1\) this implies \(d_m\lambda _m/\alpha _m =\mathrm {min}_m d_m\lambda _m/\alpha _m\) for all m; that is, all terms \(d_m\lambda _m/\alpha _m\) take the same value \({\xi _{*}^{\mathrm {cpsd}}}(A)\). By assumption, there exist some \(m\in [M]\) and \(i\in [n]\) for which \(X^m_i\) has rank 1. Then \(\Vert X^m_i\Vert ^2=\langle X^m_i,X^m_i\rangle \), which gives \(\lambda _m =\alpha _m\), and thus \({\xi _{*}^{\mathrm {cpsd}}}(A) = d_m\). On the other hand, \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\le d_m\) since \((X^m_i/\sqrt{\alpha _m})_i\) forms a Gram decomposition of A, so equality \({\xi _{*}^{\mathrm {cpsd}}}(A)=d_m={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\) holds. \(\square \)

2.2 Additional Localizing Constraints to Improve on \({\xi _{*}^{\mathrm {cpsd}}}(A)\)

In order to strengthen the bounds, we may require nonnegativity over a (truncated) quadratic module generated by a larger set of localizing polynomials for A. The following lemma gives one such approach.

Lemma 1

Let \(A \in \mathrm {CS}_{+}^n\). For \(v\in \mathbb {R}^n\) and \(g_v= v^\textsf {T}Av -\big (\sum _{i=1}^n v_ix_i\big )^2\), the set \(\{g_v\}\) is localizing at every Gram factorization of A by Hermitian positive semidefinite matrices (in particular, \(\{g_v\}\) is localizing for A).


Proof

If \(X_1,\ldots ,X_n\) is a Gram decomposition of A by Hermitian positive semidefinite matrices, then

$$\begin{aligned} v^\textsf {T}Av= {{\,\mathrm{Tr}\,}}\left( \Big (\sum _{i=1}^n v_iX_i\Big )^2\right) \ge \lambda _{\mathrm {max}}\left( \Big (\sum _{i=1}^n v_iX_i\Big )^2\right) , \end{aligned}$$

hence \(v^\textsf {T}AvI-(\sum _{i=1}^nv_iX_i)^2\succeq 0\). \(\square \)
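A quick numerical illustration of the lemma (Python/NumPy, with a random real symmetric PSD factorization and random directions v; the instance is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 4, 3
Xs = [B @ B.T for B in rng.standard_normal((n, d, d))]
A = np.array([[np.trace(Xi @ Xj) for Xj in Xs] for Xi in Xs])

# g_v(X) = v^T A v I - (sum_i v_i X_i)^2 is PSD for every v, since
# v^T A v = Tr(S^2) >= lambda_max(S^2) for S = sum_i v_i X_i.
for _ in range(5):
    v = rng.standard_normal(n)
    S = sum(vi * Xi for vi, Xi in zip(v, Xs))
    G = (v @ A @ v) * np.eye(d) - S @ S
    assert np.linalg.eigvalsh(G).min() >= -1e-8
```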

Given a set \(V\subseteq \mathbb {R}^n\), we consider the larger set

$$\begin{aligned} S_{A,V}^{\mathrm {cpsd}}= S_A^{\mathrm {cpsd}}\cup \{g_v: v\in V\} \end{aligned}$$

of localizing polynomials for A. For \(t \in \mathbb {N}\cup \{\infty ,*\}\), denote by \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) the parameter obtained by replacing in \({\xi _{t}^{\mathrm {cpsd}}}(A)\) the nonnegativity constraint on \({{\mathscr {M}}}_{2t}(S_A^{\mathrm {cpsd}})\) by nonnegativity on the larger set \({{\mathscr {M}}}_{2t}(S_{A,V}^{\mathrm {cpsd}})\). We have \({\xi _{t,\emptyset }^{\mathrm {cpsd}}}(A)={\xi _{t}^{\mathrm {cpsd}}}(A)\) and

$$\begin{aligned} {\xi _{t}^{\mathrm {cpsd}}}(A)\le {\xi _{t,V}^{\mathrm {cpsd}}}(A)\le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A) \quad \text {for all} \quad V \subseteq \mathbb {R}^n. \end{aligned}$$

By scaling invariance, we can add the above constraints for all \(v \in \mathbb {R}^n\) by setting V to be the unit sphere \(\mathbb {S}^{n-1}\). Since \(\mathbb {S}^{n-1}\) is a compact metric space, there exists a sequence \(V_1 \subseteq V_2 \subseteq \ldots \subseteq \mathbb {S}^{n-1}\) of finite subsets such that \(\bigcup _{k\ge 1} V_k\) is dense in \(\mathbb {S}^{n-1}\). Each of the parameters \({\xi _{t,V_k}^{\mathrm {cpsd}}}(A)\) involves finitely many localizing constraints, and, as we now show, they converge to the parameter \({\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A)\).

Proposition 5

Consider a matrix \(A\in \mathrm {CS}_{+}^n\). For \(t \in \{\infty , *\}\), we have

$$\begin{aligned} \lim _{k \rightarrow \infty } {\xi _{t,V_k}^{\mathrm {cpsd}}}(A) = {\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A). \end{aligned}$$


Proof

Let \(\varepsilon > 0\). Since \(\bigcup _k V_k\) is dense in \(\mathbb {S}^{n-1}\), there is an integer \(k\ge 1\) so that for every \(u \in \mathbb {S}^{n-1}\) there exists a vector \(v \in V_k\) satisfying

$$\begin{aligned} \Vert u-v\Vert _1 \le \frac{\varepsilon \lambda _\mathrm {min}(A)}{4 \sqrt{n} \, \mathrm {max}_i A_{ii}} \quad \text {and} \quad \Vert u-v\Vert _2 \le \frac{\varepsilon \lambda _\mathrm {min}(A)}{4\mathrm {Tr}(A^2)^{1/2}}. \end{aligned}$$

The above Propositions 1 and 2 have natural analogs for the programs \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\). These show that for \(t = \infty \) (\(t = *\)) the parameter \({\xi _{t,V_k}^{\mathrm {cpsd}}}(A)\) is the infimum over all \(\alpha \ge 0\) for which there exist a (finite dimensional) unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,V_k}^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\).

Below we will show that \(\mathbf{X}' = \sqrt{1-\varepsilon } \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cpsd}})\). This implies that the linear form \(L \in \mathbb {R}\langle \mathbf {x}\rangle ^*\) defined by \(L(p) = \alpha /(1-\varepsilon ) \tau (p(\mathbf{X'}))\) is feasible for \({\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A)\) with objective value \(L(1) = \alpha /(1-\varepsilon )\). This shows

$$\begin{aligned} {\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A) \le {1\over 1-\varepsilon }\ {\xi _{t,V_k}^{\mathrm {cpsd}}} (A) \le {1\over 1-\varepsilon }\ \lim _{k\rightarrow \infty } {\xi _{t,V_k}^{\mathrm {cpsd}}}(A). \end{aligned}$$

As \(\varepsilon >0\) was arbitrary, this completes the proof.

We now show \(\mathbf{X}' = \sqrt{1-\varepsilon } \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cpsd}})\). For this consider the map

$$\begin{aligned} f_{\mathbf{X}} :\mathbb {S}^{n-1} \rightarrow \mathbb {R}, \, v \mapsto \Big \Vert \sum _{i=1}^n v_i X_i\Big \Vert ^2, \end{aligned}$$

where \(\Vert \cdot \Vert \) denotes the \(C^*\)-algebra norm of \({\mathscr {A}}\). For \(\alpha \ge 0\) and \(a\in {{\mathscr {A}}}\) with \(a^*=a\), we have \(\alpha \ge \Vert a\Vert \) if and only if \(\alpha ^2-a^2\succeq 0\) in \({{\mathscr {A}}}\) (equivalently, \(\alpha -a\succeq 0\) when \(a\succeq 0\)). Since \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,V_k}^{\mathrm {cpsd}})\) we have \(v^\textsf {T}A v - f_{\mathbf{X}}(v) \ge 0\) for all \(v \in V_k\), and hence

$$\begin{aligned} v^\textsf {T}A v - f_{\mathbf{X}'}(v)= & {} v^\textsf {T}A v \left( 1 - (1-\varepsilon ) \frac{f_{\mathbf{X}}(v)}{v^\textsf {T}A v}\right) \ge v^\textsf {T}A v \big ( 1 - (1-\varepsilon ) \big ) \\= & {} \varepsilon v^\textsf {T}A v \ge \varepsilon \lambda _\mathrm {min}(A). \end{aligned}$$

Let \(u \in \mathbb {S}^{n-1}\) and let \(v \in V_k\) be such that (14) holds. Using Cauchy-Schwarz we have

$$\begin{aligned} | u^\textsf {T}A u - v^\textsf {T}A v |&= | (u-v)^\textsf {T}A (u + v)| = |\langle A, (u-v) (u+v)^\textsf {T}\rangle | \\&\le \sqrt{\mathrm {Tr}(A^2)} \sqrt{\mathrm {Tr}((u+v) (u-v)^\textsf {T}(u-v) (u+v)^\textsf {T})}\\&\le \sqrt{\mathrm {Tr}(A^2)} \Vert u-v\Vert _2 \Vert u+v\Vert _2 \le 2\sqrt{\mathrm {Tr}(A^2)} \Vert u-v\Vert _2\\&\le 2\sqrt{\mathrm {Tr}(A^2)} \frac{\varepsilon \lambda _\mathrm {min}(A)}{4\sqrt{\mathrm {Tr}(A^2)}}= \frac{\varepsilon \lambda _\mathrm {min}(A)}{2}. \end{aligned}$$

Since \(\sqrt{A_{ii}} X_i - X_i^2\) is positive in \({\mathscr {A}}\), we have that \(\sqrt{A_{ii}} -X_i\) is positive in \({\mathscr {A}}\) by (9) and (10), which implies \(\Vert X_i\Vert \le \sqrt{A_{ii}}\). By the reverse triangle inequality, we then have

$$\begin{aligned} |f_{\mathbf{X'}}(u) - f_{\mathbf{X'}}(v)|&= \left| \big \Vert \sum _{i=1}^n u_i X_i'\big \Vert - \big \Vert \sum _{i=1}^n v_i X_i'\big \Vert \right| \left( \big \Vert \sum _{i=1}^n u_i X_i'\big \Vert + \big \Vert \sum _{i=1}^n v_i X_i'\big \Vert \right) \\&\le \left\| \sum _{i=1}^n (v_i - u_i) X_i'\right\| 2\sqrt{n} \, \mathrm {max}_i \sqrt{A_{ii}} \\&\le \left( \sum _{i=1}^n |v_i - u_i| \Vert X_i'\Vert \right) 2\sqrt{n} \, \mathrm {max}_i \sqrt{A_{ii}}\\&\le \Vert u-v\Vert _1 2 \sqrt{n} \, \mathrm {max}_i A_{ii}\\&\le \frac{\varepsilon \lambda _\mathrm {min}(A)}{4 \sqrt{n} \, \mathrm {max}_i A_{ii}} 2\sqrt{n}\, \mathrm {max}_i A_{ii}= \frac{\varepsilon \lambda _\mathrm {min}(A)}{2}. \end{aligned}$$

Combining the above inequalities we obtain that \(u^\textsf {T}A u - f_{\mathbf{X}'}(u) \ge 0\) for all \(u \in \mathbb {S}^{n-1}\), and hence \(u^\textsf {T}A u - \big (\sum _{i=1}^n u_i X_i'\big )^2\) is positive in \({\mathscr {A}}\). Thus, we have \(\mathbf {X}' \in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cpsd}})\). \(\square \)

We now discuss two examples where the bounds \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) go beyond \({\xi _{*}^{\mathrm {cpsd}}}(A)\).

Example 1

Consider the matrix

$$\begin{aligned} A = \begin{pmatrix} 1 &{}1/2\\ 1/2 &{} 1 \end{pmatrix}= {{\,\mathrm{Gram}\,}}\ \left( \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \begin{pmatrix} 1/2 &{} 1/2 \\ 1/2 &{} 1/2\end{pmatrix} \right) , \end{aligned}$$

with \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A) = 2\). We can also write \(A = \mathrm {Gram}(Y_1, Y_2)\), where

$$\begin{aligned} Y_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 &{} 0 &{} 0\\ 0 &{} 1 &{} 0\\ 0 &{} 0 &{}0 \end{pmatrix}, \quad Y_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 \end{pmatrix}. \end{aligned}$$

With \(X_i= \sqrt{2} \ Y_i\) we have \(I - X_i^2 \succeq 0\) for \(i=1,2\). Hence the linear form \(L = L_\mathbf {X}/2\) is feasible for \({\xi _{*}^{\mathrm {cpsd}}}(A)\), which shows that \({\xi _{*}^{\mathrm {cpsd}}}(A) \le L(1) = 3/2\). In fact, this form L gives an optimal flat solution to \({\xi _{2}^{\mathrm {cpsd}}}(A)\), as we can check using a semidefinite programming solver, so \({\xi _{*}^{\mathrm {cpsd}}}(A) = 3/2\). In passing, we observe that \({\xi _{1}^{\mathrm {cpsd}}}(A) = 4/3\), which coincides with the analytic lower bound (18) (see also Lemma 6 below).

For \(e = (1,1) \in \mathbb {R}^2\) and \(V = \{e\}\), this form L is not feasible for \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\), because for the polynomial \(p = 1-3 x_1 - 3x_2\) we have \(L(p^*g_ep) = -9/2 < 0\). This means that the localizing constraint \(L(p^*g_ep)\ge 0\) is not redundant: For \(t\ge 2\) it cuts off part of the feasibility region of \({\xi _{t}^{\mathrm {cpsd}}}(A)\). Indeed, using a semidefinite programming solver, we find an optimal flat solution of \({\xi _{3,V}^{\mathrm {cpsd}}}(A)\) with objective value \((5-\sqrt{3})/2\approx 1.633\), hence

$$\begin{aligned} {\xi _{*,V}^{\mathrm {cpsd}}}(A) = (5-\sqrt{3})/2 > 3/2 = {\xi _{*}^{\mathrm {cpsd}}}(A). \end{aligned}$$
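The computations in this example are small enough to reproduce directly; the following NumPy sketch verifies the Gram decomposition, the value \(L(1)=3/2\), and the violated localizing constraint \(L(p^*g_ep)=-9/2\):

```python
import numpy as np

# Example 1: A = Gram(Y_1, Y_2), with rescaled factors X_i = sqrt(2) Y_i.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
Y1 = np.diag([1.0, 1.0, 0.0]) / np.sqrt(2)
Y2 = np.diag([1.0, 0.0, 1.0]) / np.sqrt(2)
Ys = [Y1, Y2]
assert np.allclose(A, [[np.trace(Yi @ Yj) for Yj in Ys] for Yi in Ys])

X1, X2 = np.sqrt(2) * Y1, np.sqrt(2) * Y2
L = lambda P: np.trace(P) / 2                 # L = L_X / 2 on evaluated words
assert np.isclose(L(np.eye(3)), 3 / 2)        # L(1) = 3/2
assert np.isclose(L(X1 @ X2), A[0, 1])        # (L(x_i x_j)) = A

# The constraint for g_e with e = (1, 1) is violated by L:
# g_e = e^T A e - (x_1 + x_2)^2 with e^T A e = 3, and p = 1 - 3 x_1 - 3 x_2.
ge = 3 * np.eye(3) - (X1 + X2) @ (X1 + X2)
p = np.eye(3) - 3 * X1 - 3 * X2
assert np.isclose(L(p.T @ ge @ p), -9 / 2)    # L(p* g_e p) = -9/2 < 0
```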

Example 2

Consider the symmetric circulant matrices

$$\begin{aligned} M(\alpha ) = \begin{pmatrix} 1 &{} \alpha &{} 0 &{} 0 &{} \alpha \\ \alpha &{} 1 &{} \alpha &{} 0 &{} 0 \\ 0 &{} \alpha &{} 1 &{} \alpha &{} 0 \\ 0 &{} 0 &{} \alpha &{} 1 &{} \alpha \\ \alpha &{} 0 &{} 0 &{} \alpha &{} 1 \end{pmatrix}\quad \text { for } \quad \alpha \in \mathbb {R}. \end{aligned}$$

For \(0\le \alpha \le 1/2\), we have \(M(\alpha ) \in \mathrm {CS}_{+}^5\) with \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(M(\alpha )) \le 5\). To see this, we set \(\beta =(1+\sqrt{1-4\alpha ^2})/2\) and observe that the matrices

$$\begin{aligned} X_i = \mathrm {Diag}(\sqrt{\beta } \, e_i + \sqrt{1-\beta }\, e_{i+1}) \in \mathrm {S}^5_+, \quad i\in [5], \quad (\text {with }e_6 := e_1), \end{aligned}$$

form a factorization of \(M(\alpha )\). As \(M(\alpha )\) is supported by a cycle, we have \(M(\alpha )\in \mathrm {CS}_{+}^5\) if and only if \(M(\alpha )\in \hbox {CP}^5\) [50]. Thus, \(M(\alpha ) \in \mathrm {CS}_{+}^5\) if and only if \(0 \le \alpha \le 1/2\).
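The claimed factorization of \(M(\alpha )\) is easy to check numerically; a NumPy sketch, purely a verification of the identities above:

```python
import numpy as np

def M(alpha):
    # The 5x5 symmetric circulant matrix with first row (1, alpha, 0, 0, alpha).
    row = np.array([1.0, alpha, 0.0, 0.0, alpha])
    return np.array([np.roll(row, k) for k in range(5)])

def factors(alpha):
    # X_i = Diag(sqrt(beta) e_i + sqrt(1 - beta) e_{i+1}), indices mod 5.
    beta = (1 + np.sqrt(1 - 4 * alpha**2)) / 2
    e = np.eye(5)
    return [np.diag(np.sqrt(beta) * e[i] + np.sqrt(1 - beta) * e[(i + 1) % 5])
            for i in range(5)]

for alpha in [0.0, 0.2, 0.5]:
    Xs = factors(alpha)
    G = np.array([[np.trace(Xi @ Xj) for Xj in Xs] for Xi in Xs])
    assert np.allclose(G, M(alpha))  # Gram(X_1, ..., X_5) = M(alpha)
```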

Using the formulation in Proposition 3, the above factorization yields the inequality \({\xi _{*}^{\mathrm {cpsd}}}(M(1/2))\le 5/2\). However, using a semidefinite programming solver, we see that

$$\begin{aligned} {\xi _{2,V}^{\mathrm {cpsd}}}(M(1/2)) = 5, \end{aligned}$$

where V is the set containing the vector \((1,-1,1,-1,1)\) and its cyclic shifts. Hence, the bound \({\xi _{2,V}^{\mathrm {cpsd}}}(M(1/2))\) is tight: It certifies \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(M(1/2))=5\), while the other known bounds, the rank bound \(\sqrt{{{\,\mathrm{rank}\,}}(M(1/2))}\) and the analytic bound (18), only give \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(M(1/2)) \ge 3\).
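The inequality \({\xi _{*}^{\mathrm {cpsd}}}(M(1/2))\le 5/2\) mentioned above comes from plugging the single-block diagonal factorization into the objective of (13); a short numerical check (NumPy, using the factorization with \(\beta =1/2\)):

```python
import numpy as np

alpha = 0.5
beta = (1 + np.sqrt(1 - 4 * alpha**2)) / 2     # beta = 1/2 at alpha = 1/2
e = np.eye(5)
Xs = [np.diag(np.sqrt(beta) * e[i] + np.sqrt(1 - beta) * e[(i + 1) % 5])
      for i in range(5)]

# Single-block objective of (13): d * max_i ||X_i||^2 / A_ii, with d = 5 and
# A_ii = M(1/2)_ii = 1; here ||.|| is the operator (spectral) norm.
val = 5 * max(np.linalg.norm(Xi, 2) ** 2 for Xi in Xs)
assert np.isclose(val, 5 / 2)                  # hence xi_*^cpsd(M(1/2)) <= 5/2
```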

We now observe that there exist \(0<\varepsilon ,\delta <1/2\) such that \(\hbox {cpsd-rank}_\mathbb {C}(M(\alpha )) = 5\) for all \( \alpha \in [0,\varepsilon ] \cup [\delta ,1/2]\). Indeed, this follows from the fact that \({\xi _{1}^{\mathrm {cpsd}}}(M(0)) = 5\) (by Lemma 6), the above result that \({\xi _{2,V}^{\mathrm {cpsd}}}(M(1/2)) = 5\), and the lower semicontinuity of \(\alpha \mapsto {\xi _{2,V}^{\mathrm {cpsd}}}(M(\alpha ))\), which is shown in Lemma 7 below.

As the matrices \(M(\alpha )\) are nonsingular, the above factorization shows that their cp-rank is equal to 5 for all \(\alpha \in [0,1/2]\); whether they all have \(\hbox {cpsd-rank}\) equal to 5 is not known.

2.3 Boosting the Bounds

In this section, we propose some additional constraints that can be added to strengthen the bounds \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) for finite t. These constraints may shrink the feasibility region of \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) for \(t \in \mathbb {N}\), but they are redundant for \(t\in \{\infty ,*\}\). The latter is shown using the reformulation of the parameters \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) in terms of \(C^*\)-algebras.

We first mention how to construct localizing constraints of “bilinear type”, inspired by the work of Berta, Fawzi and Scholz [7]. Note that, as for the localizing constraints, these bilinear constraints can be modeled as semidefinite constraints.

Lemma 2

Let \(A\in \mathrm {CS}_{+}^n\), \(t \in \mathbb {N}\cup \{\infty , *\}\), and let \(\{g,g'\}\) be localizing for A. If we add the constraints

$$\begin{aligned} L(p^*gpg')\ge 0 \quad \text {for} \quad p\in \mathbb {R}\langle \mathbf{x}\rangle \quad \text {with} \quad \deg (p^*gpg')\le 2t \end{aligned}$$

to \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\), then we still get a lower bound on \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). However, the constraints (16) are redundant for \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) when \(g,g' \in {{\mathscr {M}}}(S_{A,V}^{\mathrm {cpsd}})\).


Proof

Let \(\mathbf {X}\in (\mathrm {H}^d_+)^n\) be a Gram decomposition of A, and let \(L =L_\mathbf {X}\) be the real part of the trace evaluation at \(\mathbf {X}\). Then, \(p(\mathbf {X})^* g(\mathbf {X}) p(\mathbf {X})\succeq 0\) and \(g'(\mathbf {X})\succeq 0\), and thus

$$\begin{aligned} L(p^*gpg') =\text {Re}( {{\,\mathrm{Tr}\,}}( p(\mathbf {X})^* g(\mathbf {X}) p(\mathbf {X}) g'(\mathbf {X})))\ge 0. \end{aligned}$$

So by adding the constraints (16) we still get a lower bound on \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).

To show that the constraints (16) are redundant for \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) when \(g,g'\in {{\mathscr {M}}}(S_{A,V}^{\mathrm {cpsd}})\), we let \(t\in \{\infty ,*\}\) and assume L is feasible for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\). By Theorem 1 there exist a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\in {\mathscr {D}}(S_{A,V}^{\mathrm {cpsd}})\) such that \(L(p)=L(1) \tau (p(\mathbf {X}))\) for all \(p\in \mathbb {R}\langle \mathbf{x}\rangle \). Since \(g,g' \in {{\mathscr {M}}}(S_{A,V}^{\mathrm {cpsd}})\) we know that \(g(\mathbf {X}), g'(\mathbf {X})\) are positive elements in \({{\mathscr {A}}}\), so \(g(\mathbf {X}) = a^* a\) and \(g'(\mathbf {X}) = b^* b\) for some \(a,b \in {{\mathscr {A}}}\). Then, we have

$$\begin{aligned} L(p^* g p g')&= L(1) \, \tau (p^*(\mathbf {X}) \, g(\mathbf {X}) \, p(\mathbf {X}) \, g'(\mathbf {X}) ) \\&= L(1) \, \tau (p^*(\mathbf {X}) \, a^* a \, p(\mathbf {X}) \, b^* b) \\&= L(1) \, \tau ((a \, p(\mathbf {X}) \, b^*)^* a \, p(\mathbf {X}) \, b^*) \ge 0, \end{aligned}$$

where we use that \(\tau \) is a positive tracial state on \({{\mathscr {A}}}\). \(\square \)

Second, we show how to use zero entries in A and vectors in the kernel of A to enforce new constraints on \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\).

Lemma 3

Let \(A\in \mathrm {CS}_{+}^n\) and \(t \in \mathbb {N}\cup \{\infty , *\}\). If we add the constraint

$$\begin{aligned} L=0 \quad \text { on } \quad {\mathscr {I}}_{2t}\left( \big \{\sum _{i=1}^nv_ix_i: v\in \ker A\big \} \cup \big \{x_ix_j: A_{ij}=0 \big \} \right) \end{aligned}$$

to \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\), then we still get a lower bound on \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). Moreover, these constraints are redundant for \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\).


Proof

Let \(\mathbf {X}\in (\mathrm {H}^d_+)^n\) be a Gram factorization of A and let \(L_\mathbf {X}\) be as in (5). If \(Av=0\), then \(0=v^\textsf {T}Av = {{\,\mathrm{Tr}\,}}((\sum _{i=1}^n v_iX_i)^2)\) and thus \(\sum _{i=1}^nv_iX_i=0\). Hence \(L_\mathbf {X}((\sum _{i=1}^nv_ix_i)p)=\mathrm {Re}({{\,\mathrm{Tr}\,}}((\sum _{i=1}^nv_iX_i)p(\mathbf {X})))=0\). If \(A_{ij}=0\), then \({{\,\mathrm{Tr}\,}}(X_iX_j)=0\), which implies \(X_iX_j=0\), since \(X_i\) and \(X_j\) are positive semidefinite. Hence, \(L_\mathbf {X}(x_ix_jp)=\text {Re}({{\,\mathrm{Tr}\,}}(X_iX_jp(\mathbf {X})))=0\). Therefore, adding the constraints (17) still lower bounds \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).

As in the proof of the previous lemma, if \(t \in \{\infty ,*\}\) and L is feasible for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) then, by Theorem 1, there exist a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\) in \({\mathscr {D}}(S_{A,V}^{\mathrm {cpsd}})\) such that \(L(p)=L(1) \tau (p(\mathbf {X}))\) for all \(p\in \mathbb {R}\langle \mathbf{x}\rangle \). Moreover, by Lemma 12, we may assume \(\tau \) to be faithful. For a vector v in the kernel of A, we have \(0 = v^\textsf {T}A v = L((\sum _i v_i x_i)^2) = L(1) \tau ( (\sum _i v_i X_i)^2)\), and hence, since \(\tau \) is faithful, \(\sum _i v_i X_i = 0\) in \({{\mathscr {A}}}\). It follows that \(L(p (\sum _i v_i x_i)) = L(1) \tau (p(\mathbf {X}) \, 0) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle \). Analogously, if \(A_{ij}=0\), then \(L(x_ix_j)=0\) implies \(\tau (X_iX_j)=0\) and thus \(X_iX_j=0\), since \(X_i, X_j\) are positive in \({{\mathscr {A}}}\) and \(\tau \) is faithful. This implies \(L(p x_i x_j) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle \). This shows that the constraints (17) are redundant. \(\square \)

Note that the constraints \(L(p \, (\sum _{i=1}^nv_ix_i))=0\) for \(p\in \mathbb {R}\langle \mathbf{x}\rangle _t,\) which are implied by (17), are in fact redundant: if \(v \in \ker (A)\), then the vector obtained by extending v with zeros belongs to \(\ker (M_t(L))\), since \(M_t(L)\succeq 0\). Also, for an implementation of \({\xi _{t}^{\mathrm {cpsd}}}(A)\) with the additional constraints (17), it is more efficient to index the moment matrices with a basis for \(\mathbb {R}\langle \mathbf{x}\rangle _{t}\) modulo the ideal \({\mathscr {I}}_t\big (\{ \sum _i v_i x_i: v \in \ker (A)\} \cup \{x_i x_j : A_{ij} = 0\}\big )\).

2.4 Additional Properties of the Bounds

Here, we list some additional properties of the parameters \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for \(t \in \mathbb {N}\cup \{\infty , *\}\). First, we state some properties whose proofs are immediate and thus omitted.

Lemma 4

Suppose \(A\in \mathrm {CS}_{+}^n\) and \(t \in \mathbb {N}\cup \{\infty ,*\}\).

  1. If P is a permutation matrix, then \({\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{t}^{\mathrm {cpsd}}}(P^\textsf {T}A P)\).

  2. If B is a principal submatrix of A, then \({\xi _{t}^{\mathrm {cpsd}}}(B) \le {\xi _{t}^{\mathrm {cpsd}}}(A)\).

  3. If D is a positive definite diagonal matrix, then \({\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{t}^{\mathrm {cpsd}}}(D A D).\)

We also have the following direct sum property, where the equality follows using the \(C^*\)-algebra reformulations as given in Propositions 1 and 2.

Lemma 5

If \(A \in \mathrm {CS}_{+}^n\) and \(B \in \mathrm {CS}_{+}^m\), then \({\xi _{t}^{\mathrm {cpsd}}}(A\oplus B) \le {\xi _{t}^{\mathrm {cpsd}}}(A) + {\xi _{t}^{\mathrm {cpsd}}}(B)\) for all \(t \in \mathbb {N}\cup \{\infty , *\}\), where equality holds for \(t \in \{\infty , *\}\).


Proof

To prove the inequality, we take \(L_A\) and \(L_B\) feasible for \({\xi _{t}^{\mathrm {cpsd}}}(A)\) and \({\xi _{t}^{\mathrm {cpsd}}}(B)\), and construct a feasible L for \({\xi _{t}^{\mathrm {cpsd}}}(A\oplus B)\) by \(L(p(\mathbf{x}, \mathbf{y})) = L_A(p(\mathbf{x}, \mathbf{0})) + L_B(p(\mathbf{0}, \mathbf{y}))\).

Now we show equality for \(t = \infty \) (\(t=*\)). By Proposition 1 (Proposition 2), \({\xi _{t}^{\mathrm {cpsd}}}(A\oplus B)\) is equal to the infimum over all \(\alpha \ge 0\) for which there exists a (finite dimensional) unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \((\mathbf {X}, \mathbf{Y}) \in {{\mathscr {D}}}_{{\mathscr {A}}}(S_{A\oplus B}^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\), \(B = \alpha \cdot (\tau (Y_iY_j))\) and \((\tau (X_iY_j))=0\). This implies \(\mathbf {X}\in {{\mathscr {D}}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) and \(\mathbf Y\in {{\mathscr {D}}}_{{\mathscr {A}}}(S_B^{\mathrm {cpsd}})\). Let \(P_A\) be the projection onto the space \(\sum _i \mathrm {Im}(X_i)\) and define the linear form \(L_A \in \mathbb {R}\langle \mathbf {x}\rangle ^*\) by \(L_A(p) = \alpha \cdot \tau (p(\mathbf {X}) P_A)\). It follows that \(L_A\) is nonnegative on \({\mathscr {M}}(S_A^{\mathrm {cpsd}})\), and

$$\begin{aligned} L_A(x_ix_j) = \alpha \, \tau (X_iX_jP_A) = \alpha \, \tau (X_iX_j) = A_{ij}, \end{aligned}$$

so \(L_A\) is feasible for \({\xi _{t}^{\mathrm {cpsd}}}(A)\) with \(L_A(1)=\alpha \tau (P_A)\). In the same way, we consider the projection \(P_B\) onto the space \(\sum _j \mathrm {Im}(Y_j)\) and define a feasible solution \(L_B\) for \({\xi _{t}^{\mathrm {cpsd}}}(B)\) with \(L_B(1)=\alpha \tau (P_B)\). By Lemma 12 we may assume \(\tau \) to be faithful, so that positivity of \(X_i\) and \(Y_j\) together with \(\tau (X_iY_j) = 0\) implies \(X_iY_j = 0\) for all i and j, and thus \(\sum _i \mathrm {Im}(X_i) \perp \sum _j \mathrm {Im}(Y_j)\). This implies \(I \succeq P_A + P_B\) and thus \(\tau (P_A+P_B)\le \tau (1)=1\). We have

$$\begin{aligned} L_A(1) + L_B(1) = \alpha \, \tau (P_A) + \alpha \tau (P_B) \le \alpha \, \tau (1) = \alpha , \end{aligned}$$

so \({\xi _{t}^{\mathrm {cpsd}}}(A)+{\xi _{t}^{\mathrm {cpsd}}}(B) \le L_A(1)+L_B(1)\le \alpha \), completing the proof. \(\square \)

Note that the \(\hbox {cpsd-rank}\) of a matrix satisfies the same properties as those mentioned in the above two lemmas, where the inequality in Lemma 5 is always an equality: \(\hbox {cpsd-rank}_\mathbb {C}(A~\oplus ~B)=\hbox {cpsd-rank}_\mathbb {C}(A)+\hbox {cpsd-rank}_\mathbb {C}(B)\) [38, 62].

The following lemma shows that the first level of our hierarchy is at least as good as the analytic lower bound (18) on the cpsd-rank derived in [62, Theorem 10].

Lemma 6

For any non-zero matrix \(A \in \mathrm {CS}_{+}^n\), we have

$$\begin{aligned} {\xi _{1}^{\mathrm {cpsd}}}(A) \ge \frac{\left( \sum _{i=1}^n \sqrt{A_{ii}}\right) ^2}{\sum _{i,j=1}^n A_{ij}}. \end{aligned}$$


Proof

Let L be feasible for \({\xi _{1}^{\mathrm {cpsd}}}(A)\). Since L is nonnegative on \({{\mathscr {M}}}_{2}(S_A^{\mathrm {cpsd}})\), it follows that \(L(\sqrt{A_{ii}}x_i-x_i^2)\ge 0\), implying \(\sqrt{A_{ii}} L(x_i)\ge L(x_i^2)=A_{ii}\) and thus \(L(x_i)\ge \sqrt{A_{ii}}\). Moreover, the matrix \(M_1(L)\) is positive semidefinite. By taking the Schur complement with respect to its upper left corner (indexed by 1), it follows that the matrix \(L(1)\cdot A- (L(x_i)L(x_j))\) is positive semidefinite. Hence, the sum of its entries is nonnegative, which gives \(L(1)(\sum _{i,j}A_{ij})\ge (\sum _i L(x_i))^2\ge (\sum _i \sqrt{A_{ii}})^2\) and shows the desired inequality. \(\square \)

As an application of Lemma 6, the first bound \({\xi _{1}^{\mathrm {cpsd}}}\) is exact for the \(k\times k\) identity matrix: \({\xi _{1}^{\mathrm {cpsd}}}(I_k)={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(I_k)=k\). Moreover, by combining this with Lemma 4, it follows that \({\xi _{1}^{\mathrm {cpsd}}}(A)~\ge ~k\) if A contains a diagonal positive definite \(k\times k\) principal submatrix. A slightly more involved example is given by the \(5 \times 5\) circulant matrix A whose entries are given by \(A_{ij} = \cos ((i-j)4\pi /5)^2\) (\(i,j \in [5]\)); this matrix was used in [25] to show a separation between the completely positive semidefinite cone and the completely positive cone, and it was shown that \(\hbox {cpsd-rank}_\mathbb {C}(A) =2\). The analytic lower bound of [62] also evaluates to 2, hence Lemma 6 shows that our bound is tight on this example.
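Both evaluations can be checked with a few lines of plain Python (the helper name `analytic_bound` is ours, not from the text):

```python
import math

def analytic_bound(A):
    """Right-hand side of Lemma 6: (sum_i sqrt(A_ii))^2 / sum_ij A_ij."""
    n = len(A)
    num = sum(math.sqrt(A[i][i]) for i in range(n)) ** 2
    den = sum(A[i][j] for i in range(n) for j in range(n))
    return num / den

# Identity matrix: the bound is k^2 / k = k, matching cpsd-rank(I_k) = k.
I4 = [[float(i == j) for j in range(4)] for i in range(4)]
print(analytic_bound(I4))  # 4.0

# 5x5 circulant matrix A_ij = cos((i-j) 4 pi / 5)^2 from [25]: the bound is 2.
C = [[math.cos((i - j) * 4 * math.pi / 5) ** 2 for j in range(5)] for i in range(5)]
print(round(analytic_bound(C), 9))  # 2.0
```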

We now examine further analytic properties of the parameters \({\xi _{t}^{\mathrm {cpsd}}}(\cdot )\). For each \(r \in \mathbb {N}\), the set of matrices \(A\in \mathrm {CS}_{+}^n\) with \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A) \le r\) is closed, which shows that the function \(A \mapsto \hbox {cpsd-rank}_\mathbb {C}(A)\) is lower semicontinuous. We now show that the functions \(A \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A)\) have the same property. The other bounds defined in this paper are also lower semicontinuous, with a similar proof.

Lemma 7

For every \(t \in \mathbb {N}\cup \{\infty \}\) and \(V \subseteq \mathbb {R}^n\), the function

$$\begin{aligned} \mathrm {S}^n \rightarrow \mathbb {R}\cup \{\infty \}, \, A \mapsto {\xi _{t,V}^{\mathrm {cpsd}}}(A) \end{aligned}$$

is lower semicontinuous.


Proof

It suffices to show the result for \(t\in \mathbb {N}\), because \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)=\mathrm {sup}_t\, {\xi _{t,V}^{\mathrm {cpsd}}}(A)\), and the pointwise supremum of lower semicontinuous functions is lower semicontinuous. We show that the level sets \(\{A \in \mathrm {S}^n: {\xi _{t,V}^{\mathrm {cpsd}}}(A) \le r\}\) are closed. For this, we consider a sequence \((A_k)_{k\in \mathbb {N}}\) in \(\mathrm {S}^n\) converging to \(A \in \mathrm {S}^n\) such that \({\xi _{t,V}^{\mathrm {cpsd}}}(A_k) \le r\) for all k. We show that \({\xi _{t,V}^{\mathrm {cpsd}}}(A) \le r\). Let \(L_k\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) be an optimal solution to \({\xi _{t,V}^{\mathrm {cpsd}}}(A_k)\). As \(L_k(1) \le r\) for all k, it follows from Lemma 13 that there is a pointwise converging subsequence of \((L_k)_k\), still denoted \((L_k)_k\) for simplicity, that has a limit \(L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) with \(L(1)\le r\). To complete the proof we show that L is feasible for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\). By the pointwise convergence of \(L_k\) to L, for every \(\varepsilon >0\), \(p \in \mathbb {R}\langle \mathbf{x}\rangle \), and \(i \in [n]\), there exists a \(K \in \mathbb {N}\) such that for all \(k \ge K\) we have

$$\begin{aligned} |L(p^* x_i p) - L_k(p^* x_i p) |&< \mathrm {min}\left\{ 1,\frac{\varepsilon }{\sqrt{A_{ii}}}\right\} , \qquad |L(p^* x_i^2 p) - L_k(p^* x_i^2 p)|< \varepsilon , \\ |\sqrt{A_{ii}} - \sqrt{(A_k)_{ii}}|&< \frac{\varepsilon }{L(p^* x_i p) + 1}. \end{aligned}$$

Hence, we have

$$\begin{aligned} L(p^*(\sqrt{A_{ii}} x_i - x_i^2) p)&= \sqrt{A_{ii}} \Big (L(p^* x_i p) - L_k(p^* x_i p) + L_k (p^* x_i p) \Big ) \\&\quad - \Big ( L(p^* x_i^2 p) -L_k(p^* x_i^2 p ) + L_k(p^* x_i^2p)\Big ) \\&\ge -2 \varepsilon + \sqrt{A_{ii}} \, L_k (p^* x_i p) - L_k(p^* x_i^2p) \\&\ge -3 \varepsilon + \sqrt{(A_k)_{ii}} \, L_k (p^* x_i p) - L_k(p^* x_i^2p) \\&= -3 \varepsilon + L_k(p^*(\sqrt{(A_k)_{ii}} \, x_i - x_i^2) p) \ge -3 \varepsilon , \end{aligned}$$

where in the second inequality we use that \(0 \le L_k(p^* x_i p) \le L(p^* x_i p) + 1\). Letting \(\varepsilon \rightarrow 0\) gives \(L(p^*(\sqrt{A_{ii}}x_i-x_i^2)p)\ge 0\).

Similarly, one can show \(L(p^*(v^\textsf {T}Av - (\sum _i v_i x_i)^2) p) \ge 0\) for \(v \in V\), \(p \in \mathbb {R}\langle \mathbf{x}\rangle \). \(\square \)

If we restrict to completely positive semidefinite matrices with an all-ones diagonal, that is, to \(\mathrm {CS}_{+}^n \cap \mathrm {E}_n\), we can show an even stronger property. Here, \(\mathrm {E}_n\) is the elliptope, which is the set of \(n \times n\) positive semidefinite matrices with an all-ones diagonal.

Lemma 8

For every \(t \in \mathbb {N}\cup \{\infty \}\), the function

$$\begin{aligned} \mathrm {CS}_{+}^n \cap \mathrm {E}_n \rightarrow \mathbb {R},\, A \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A) \end{aligned}$$

is convex, and hence continuous on the interior of its domain.


Proof

Let \(A,B\in \mathrm {CS}_{+}^n\cap \mathrm {E}_n\) and \(0<\lambda <1\). Let \(L_A\) and \(L_B\) be optimal solutions for \({\xi _{t}^{\mathrm {cpsd}}}(A)\) and \({\xi _{t}^{\mathrm {cpsd}}}(B)\). Since the diagonals of A and B are the same, we have \(S_A^{\mathrm {cpsd}}=S_B^{\mathrm {cpsd}}\). So the linear functional \(L=\lambda L_A+(1-\lambda )L_B\) is feasible for \({\xi _{t}^{\mathrm {cpsd}}}(\lambda A+(1-\lambda )B)\), hence \( {\xi _{t}^{\mathrm {cpsd}}}(\lambda A+(1-\lambda )B)\le \lambda L_A(1)+(1-\lambda )L_B(1) = \lambda {\xi _{t}^{\mathrm {cpsd}}}(A)+ (1-\lambda ){\xi _{t}^{\mathrm {cpsd}}}(B). \)    \(\square \)

Example 3

In this example, we show that for \(t \ge 1\), the function

$$\begin{aligned} \mathrm {CS}_{+}^n \rightarrow \mathbb {R}, \, A \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A) \end{aligned}$$

is not continuous. For this, we consider the matrices

$$\begin{aligned} A_k = \begin{pmatrix} 1/k & 0 \\ 0 & 1 \end{pmatrix}\in \mathrm {CS}_{+}^2, \end{aligned}$$

with \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A_k) = 2\) for all \(k\ge 1\). As \(A_k\) is diagonal positive definite, we have \({\xi _{t}^{\mathrm {cpsd}}}(A_k) = 2\) for all \(t,k\ge 1\), while \({\xi _{t}^{\mathrm {cpsd}}}(\lim _{k \rightarrow \infty } A_k) = 1\). This argument extends to \(\mathrm {CS}_{+}^n\) with \(n > 2\). This example also shows that the first level of the hierarchy \({\xi _{1}^{\mathrm {cpsd}}}(\cdot )\) can be strictly better than the analytic lower bound (18) of [62].
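Concretely, the analytic bound (18) evaluated at \(A_k\) tends to 1, while \({\xi _{t}^{\mathrm {cpsd}}}(A_k)=2\) for all k (plain Python; the helper name `bound` is ours):

```python
import math

# analytic bound (18) for A_k = diag(1/k, 1): (sqrt(1/k) + 1)^2 / (1/k + 1)
def bound(k):
    return (math.sqrt(1.0 / k) + 1.0) ** 2 / (1.0 / k + 1.0)

for k in (1, 10, 100, 10**6):
    print(k, round(bound(k), 4))  # decreases from 2.0 toward 1
```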

Example 4

In this example, we determine \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for all \(t \ge 1\) and \(A \in \mathrm {CS}_{+}^2\). In view of Lemma 4(3), we only need to find \({\xi _{t}^{\mathrm {cpsd}}}(A(\alpha ))\) for \(0 \le \alpha \le 1\), where \( A(\alpha )= \bigl ({\begin{matrix} 1 & \alpha \\ \alpha & 1\end{matrix}}\bigr ). \)

The first bound \({\xi _{1}^{\mathrm {cpsd}}}(A(\alpha ))\) is equal to the analytic bound \(2/(\alpha +1)\) from (18): the functional L given by \(L(x_i x_j) = A(\alpha )_{ij}\), \(L(x_1)=L(x_2)=1\) and \(L(1)=2/(\alpha +1)\) is feasible for \({\xi _{1}^{\mathrm {cpsd}}}(A(\alpha ))\), which gives \({\xi _{1}^{\mathrm {cpsd}}}(A(\alpha ))\le 2/(\alpha +1)\), and the reverse inequality holds by Lemma 6.

For \(t \ge 2\), we show \({\xi _{t}^{\mathrm {cpsd}}}(A(\alpha )) = 2-\alpha \). By the above, this is true for \(\alpha = 0\) and \(\alpha = 1\), and in Example 1 we show \({\xi _{t}^{\mathrm {cpsd}}}(A(1/2)) =3/2\) for \(t\ge 2\). The claim then follows since the function \(\alpha \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A(\alpha ))\) is convex by Lemma 8.
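For completeness, the convexity step can be spelled out. Write \(f(\alpha ) = {\xi _{t}^{\mathrm {cpsd}}}(A(\alpha ))\), so \(f(0)=2\), \(f(1/2)=3/2\), \(f(1)=1\); the following two inequalities (stated for \(\alpha \in [0,1/2]\); the interval \([1/2,1]\) is analogous) force \(f(\alpha )=2-\alpha \):

```latex
% Upper bound: alpha = 2*alpha*(1/2) + (1-2*alpha)*0 is a convex combination.
f(\alpha) \le 2\alpha\, f(\tfrac{1}{2}) + (1-2\alpha)\, f(0) = 3\alpha + 2 - 4\alpha = 2-\alpha.
% Lower bound: 1/2 = \lambda\alpha + (1-\lambda)\cdot 1 with \lambda = \tfrac{1}{2(1-\alpha)}, so
\tfrac{3}{2} = f(\tfrac{1}{2}) \le \lambda f(\alpha) + (1-\lambda) f(1)
\quad\Longrightarrow\quad
f(\alpha) \ge 1 + \tfrac{1}{2\lambda} = 1 + (1-\alpha) = 2-\alpha.
```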

3 Lower Bounds on the Completely Positive Rank

The best current approach for lower bounding the completely positive rank of a matrix is due to Fawzi and Parrilo [27]. Their approach relies on the atomicity of the completely positive rank, that is, the fact that \(\hbox {cp-rank}(A)\) is the smallest r for which A admits an atomic decomposition \(A=\sum _{k=1}^r v_k v_k^\textsf {T}\) with nonnegative vectors \(v_k\). In particular, if \(\hbox {cp-rank}(A)=r\), then A / r can be written as a convex combination of r rank-one positive semidefinite matrices \(v_k v_k^\textsf {T}\) that satisfy \(0 \le v_k v_k^\textsf {T}\le A\) (entrywise) and \(v_k v_k^\textsf {T}\preceq A\). Based on this observation, Fawzi and Parrilo define the parameter

$$\begin{aligned} \tau _\mathrm {cp}(A) \!=\! \mathrm {min}\Big \{ \alpha : \alpha \!\ge \!0,\, A \in \alpha \cdot \mathrm {conv} \big \{ R \in \mathrm {S}^n : 0 \!\le \!R \le A, \,R \preceq A,\, {{\,\mathrm{rank}\,}}(R) \le \! 1\big \}\Big \}, \end{aligned}$$

as lower bound for \(\hbox {cp-rank}(A)\). They also define the semidefinite programming parameter

$$\begin{aligned} \tau _{\mathrm {cp}}^{\mathrm {sos}}(A) = \mathrm {min} \big \{ \alpha : \;&\alpha \in \mathbb {R}, \, X \in \mathrm {S}^{n^2},\\&\begin{pmatrix} \alpha & \text {vec}(A)^\textsf {T}\\ \text {vec}(A) & X \end{pmatrix} \succeq 0,\\&X_{(i,j),(i,j)} \le A_{ij}^2 \quad \text {for} \quad 1 \le i,j \le n, \\&X_{(i,j),(k,l)} = X_{(i,l),(k,j)} \quad \text {for} \quad 1 \le i< k \le n, \; 1 \le j < l \le n,\\&X \preceq A \otimes A\big \}, \end{aligned}$$

as an efficiently computable relaxation of \(\tau _\mathrm {cp}(A)\), and they show \({{\,\mathrm{rank}\,}}(A) \le \tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\). Therefore, we have

$$\begin{aligned} {{\,\mathrm{rank}\,}}(A) \le \tau _{\mathrm {cp}}^{\mathrm {sos}}(A) \le \tau _\mathrm {cp}(A)\le \hbox {cp-rank}(A). \end{aligned}$$

Instead of the atomic point of view, here we take the matrix factorization perspective, which allows us to obtain bounds by adapting the techniques from Sect. 2 to the commutative setting. Indeed, we may view a factorization \(A =(a_i^\mathsf{T}a_j)\) by nonnegative vectors as a factorization by diagonal (and thus pairwise commuting) positive semidefinite matrices.

Before presenting the details of our hierarchy of lower bounds, we mention some of our results in order to make the link to the parameters \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\) and \( \tau _\mathrm {cp}(A)\). The direct analog of \(\{{\xi _{t}^{\mathrm {cpsd}}}(A)\}\) in the commutative setting leads to a hierarchy that does not converge to \(\tau _{\mathrm {cp}}(A)\), but we provide two approaches to strengthen it that do converge to \(\tau _{\mathrm {cp}}(A)\). The first approach is based on a generalization of the tensor constraints in \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\). We also provide a computationally more efficient version of these tensor constraints, leading to a hierarchy whose second level is at least as good as \(\tau _{\mathrm {cp}}^\mathrm {sos}(A)\) while being defined by a smaller semidefinite program. The second approach relies on adding localizing constraints for vectors in the unit sphere as in Sect. 2.2.

The following hierarchy is a commutative analog of the hierarchy from Sect. 2, where we may now add the localizing polynomials \(A_{ij}-x_ix_j\) for the pairs \(1 \le i < j \le n\), which was not possible in the noncommutative setting of the completely positive semidefinite rank. For each \(t \in \mathbb {N}\cup \{\infty \}\), we consider the semidefinite program

$$\begin{aligned} {\xi _{t}^{\mathrm {cp}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}[x_1,\ldots ,x_n]_{2t}^*,\\&L(x_ix_j) = A_{ij} \quad \text {for} \quad i,j \in [n],\\&L \ge 0 \quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{\mathrm {cp}}) \big \}, \end{aligned}$$

where we set

$$\begin{aligned} S_A^{\mathrm {cp}}= \big \{\sqrt{A_{ii}}x_i - x_i^2 : i \in [n]\big \} \cup \big \{A_{ij} - x_i x_j : 1 \le i < j \le n\big \}. \end{aligned}$$

We additionally define \({\xi _{*}^{\mathrm {cp}}}(A)\) by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to \({\xi _{\infty }^{\mathrm {cp}}}(A)\). We also consider the strengthening \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\), where we add to \({\xi _{t}^{\mathrm {cp}}}(A)\) the positivity constraints

$$\begin{aligned} L(gu) \ge 0 \quad \text {for} \quad g \in \{1\} \cup S_A^{\mathrm {cp}}\quad \text {and} \quad u \in [\mathbf{x}]_{2t-\deg (g)} \end{aligned}$$

and the tensor constraints

$$\begin{aligned} (L((ww')^c))_{w,w' \in \langle \mathbf {x}\rangle _{=l}} \preceq A^{\otimes l} \quad \text {for all integers } \quad 2 \le l \le t, \end{aligned}$$

which generalize the case \(l=2\) used in the relaxation \(\tau _\mathrm {cp}^\mathrm {sos}(A)\). Here, for a word \(w \in \langle \mathbf {x}\rangle \), we denote by \(w^c\) the corresponding (commutative) monomial in \([\mathbf {x}]\). The tensor constraints (20) involve matrices indexed by the noncommutative words of length exactly l. In Sect. 3.4, we show a more economical way to rewrite these constraints as \( (L(mm'))_{m,m' \in [\mathbf {x}]_{=l}} \preceq Q_l A^{\otimes l} Q_l^\textsf {T}, \) thus involving smaller matrices indexed by commutative words of degree l.

Note that, as before, we can strengthen the bounds by adding other localizing polynomials to the set \(S_A^{\mathrm {cp}}\). In particular, we can follow the approach of Sect. 2.2. Another possibility is to add localizing constraints specific to the commutative setting: we can add each monomial \(u \in [\mathbf{x}]\) to \(S_A^{\mathrm {cp}}\) (see Sect. 3.5.2 for an example).

The bounds \({\xi _{t}^{\mathrm {cp}}}(A)\) and \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) are monotonically nondecreasing in t, and they are invariant under simultaneously permuting the rows and columns of A and under scaling a row and column of A by a positive number. In Propositions 6 and 7, we show

$$\begin{aligned} \tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\le {\xi _{t,\dagger }^{\mathrm {cp}}}(A)\le \tau _{\mathrm {cp}}(A) \quad \text {for} \quad t \ge 2, \end{aligned}$$

and in Proposition 10, we show the equality \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) = \tau _{\mathrm {cp}}(A)\).

3.1 Comparison to \(\tau _\mathrm {cp}^\mathrm {sos}(A)\)

We first show that the semidefinite programs defining \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) are valid relaxations for the completely positive rank. More precisely, we show that they lower bound \(\tau _{\mathrm {cp}}(A)\).

Proposition 6

For \(A \in \hbox {CP}^n\) and \(t \in \mathbb {N}\cup \{\infty ,*\}\), we have \({\xi _{t,\dagger }^{\mathrm {cp}}}(A) \le \tau _{\mathrm {cp}}(A)\).


Proof

It suffices to show the inequality for \(t=*\). For this, consider a decomposition \(A=\alpha \, \sum _{k=1}^r \lambda _k R_k\), where \(\alpha \ge 1\), \(\lambda _k>0\), \(\sum _{k=1}^r \lambda _k = 1\), \(0\le R_k\le A\), \(R_k\preceq A\), and \({{\,\mathrm{rank}\,}}R_k= 1\). There are nonnegative vectors \(v_k\) such that \(R_k=v_k v_k^\textsf {T}\). Define the linear map \(L\in \mathbb {R}[\mathbf{x}]^*\) by \(L=\alpha \sum _{k=1}^r \lambda _k L_{v_k}\), where \(L_{v_k}\) is the evaluation at \(v_k\) mapping any polynomial \(p\in \mathbb {R}[\mathbf{x}]\) to \(p(v_k)\).

The equality \((L(x_ix_j))=A\) follows from the identity \(A=\alpha \sum _{k=1}^r \lambda _k R_k\). The constraints \( L((\sqrt{A_{ii}} x_i - x_i^2) p^2) \ge 0 \) follow because

$$\begin{aligned} L_{v_k}((\sqrt{A_{ii}} x_i - x_i^2) p^2) = (\sqrt{A_{ii}} (v_k)_i - (v_k)_i^2) \, p(v_k)^2 \ge 0, \end{aligned}$$

where we use that \((v_k)_i \ge 0\) and \((v_k)_i^2 = (R_k)_{ii} \le A_{ii}\) imply \((v_k)_i^2 \le (v_k)_i \sqrt{A_{ii}}\). The constraints \( L((A_{ij} - x_ix_j) p^2) \ge 0 \) and

$$\begin{aligned} L(gu) \ge 0 \quad \text {for} \quad g \in \{1\} \cup S_A^{\mathrm {cp}}\quad \text {and} \quad u \in [\mathbf{x}] \end{aligned}$$

follow in a similar way.

It remains to be shown that \(X_l \preceq A^{\otimes l}\) for all l, where we set \(X_l = (L(uv))_{u,v\in \langle \mathbf{x}\rangle _{=l}}\). Note that \(X_1=A\). We adapt the argument used in [27] to show \(X_l \preceq A^{\otimes l}\) using induction on \(l \ge 2\). Suppose \(A^{\otimes (l-1)}\succeq X_{l-1}\). Combining \(A-R_k\succeq 0\) and \(R_k\succeq 0\) gives \((A-R_k)\otimes R_k^{\otimes (l-1)}\succeq 0\) and thus \(A\otimes R_k^{\otimes (l-1)}\succeq R_k^{\otimes l}\) for each k. Scale by factor \(\alpha \lambda _k\) and sum over k to get

$$\begin{aligned} A\otimes X_{l-1}=\sum _k \alpha \lambda _k A\otimes R_k^{\otimes (l-1)} \succeq \sum _k \alpha \lambda _k R_k^{\otimes l}= X_l. \end{aligned}$$

Finally, combining with \(A^{\otimes (l-1)}-X_{l-1}\succeq 0\) and \(A\succeq 0\), we obtain

$$\begin{aligned} A^{\otimes l} =A\otimes (A^{\otimes (l-1)}-X_{l-1})+ A\otimes X_{l-1} \succeq A\otimes X_{l-1}\succeq X_l. \end{aligned}$$

\(\square \)
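The inductive tensor inequality can be sanity-checked numerically on a random completely positive matrix (a sketch assuming NumPy; with \(\alpha = r\) and \(\lambda _k = 1/r\), so that \(X_l = \sum _k R_k^{\otimes l}\)):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
n, r = 3, 4
V = rng.random((r, n))                 # nonnegative atoms v_1, ..., v_r
A = sum(np.outer(v, v) for v in V)     # A = sum_k v_k v_k^T is completely positive

def kron_power(M, l):
    """l-fold Kronecker power M (x) ... (x) M."""
    return reduce(np.kron, [M] * l)

# The proof gives A^{(x) l} - X_l >= 0 in the Loewner order, with R_k = v_k v_k^T.
for l in range(2, 5):
    Xl = sum(kron_power(np.outer(v, v), l) for v in V)
    print(l, np.linalg.eigvalsh(kron_power(A, l) - Xl).min() >= -1e-8)
```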

Now we show that the new parameter \({\xi _{2,\dagger }^{\mathrm {cp}}}(A)\) is at least as good as \(\tau _\mathrm {cp}^\mathrm {sos}(A)\). Later in Sect. 3.5.1, we will give an example where the inequality is strict.

Proposition 7

For \(A \in \hbox {CP}^n\) we have \( \tau _{\mathrm {cp}}^{\mathrm {sos}}(A) \le {\xi _{2,\dagger }^{\mathrm {cp}}}(A). \)


Proof

Let L be feasible for \({\xi _{2,\dagger }^{\mathrm {cp}}}(A)\). We will construct a feasible solution to the program defining \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\) with objective value L(1), which implies \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\le L(1)\) and thus the desired inequality. For this, set \(\alpha = L(1)\) and define the symmetric \(n^2 \times n^2\) matrix X by \( X_{(i,j),(k,l)} =L(x_ix_jx_kx_l)\) for \(i,j,k,l \in [n]\). Then, the matrix

$$\begin{aligned} M:=\begin{pmatrix} \alpha & \text {vec}(A)^\textsf {T}\\ \text {vec}(A) & X \end{pmatrix} \end{aligned}$$

is positive semidefinite. This follows because M is obtained from the principal submatrix of \(M_2(L)\) indexed by the monomials 1 and \(x_ix_j\) (\(1\le i\le j\le n\)) by duplicating, for \(1 \le i < j \le n\), the row/column indexed by \(x_ix_j\) to serve as the row/column indexed by \(x_j x_i\), and duplicating rows and columns preserves positive semidefiniteness.

We have \(L((A_{ij} - x_ix_j)x_ix_j) \ge 0\) for all \(i,j\): For \(i \ne j\) this follows using the constraint \(L((A_{ij} - x_ix_j)u) \ge 0\) with \(u = x_ix_j\) (from (19)), and for \(i = j\) this follows from

$$\begin{aligned} L((A_{ii} -x_i^2) x_i^2) = L\left( \left( (\sqrt{A_{ii}} - x_i)^2 + 2 (\sqrt{A_{ii}} x_i - x_i^2)\right) x_i^2\right) \ge 0, \end{aligned}$$

which holds because of (10), the constraint \(L(p^2) \ge 0\) for \(\deg (p)\le 2\), and the constraint \(L((\sqrt{A_{ii}} x_i - x_i^2)\,x_i^2) \ge 0\) from (19). Using \(L(x_ix_j) = A_{ij}\), we get \( X_{(i,j),(i,j)} = L(x_i^2x_j^2) \le A_{ij}^2. \) We also have \( X_{(i,j),(k,l)} = L(x_ix_jx_kx_l) = L(x_ix_lx_kx_j) = X_{(i,l),(k,j)}, \) and the constraint \((L(uv))_{u,v \in \langle \mathbf {x}\rangle _{=2}} \preceq A^{\otimes 2}\) implies \(X \preceq A \otimes A\). \(\square \)
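The \(i = j\) case rests on the algebraic identity \(a - x^2 = (\sqrt a - x)^2 + 2(\sqrt a\, x - x^2)\), multiplied by \(x_i^2 \ge 0\); a quick numeric spot check in plain Python:

```python
import math, random

# Identity (10): a - x^2 = (sqrt(a) - x)^2 + 2*(sqrt(a)*x - x^2).
random.seed(0)
for _ in range(1000):
    a, x = random.uniform(0.0, 10.0), random.uniform(-5.0, 5.0)
    s = math.sqrt(a)
    lhs = a - x * x
    rhs = (s - x) ** 2 + 2 * (s * x - x * x)
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(lhs))
print("identity verified on random samples")
```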

3.2 Convergence of the Basic Hierarchy

We first summarize convergence properties of the hierarchy \({\xi _{t}^{\mathrm {cp}}}(A)\). Note that unlike in Sect. 2, where we can only claim the inequality \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\le {\xi _{*}^{\mathrm {cpsd}}}(A)\), here we can show the equality \({\xi _{\infty }^{\mathrm {cp}}}(A) = {\xi _{*}^{\mathrm {cp}}}(A)\). This is because we can use Theorem 7, which allows us to represent certain truncated linear functionals by finitely atomic measures.

Proposition 8

Let \(A \in \hbox {CP}^n\). For every \(t \in \mathbb {N}\cup \{\infty , *\}\) the optimum in \({\xi _{t}^{\mathrm {cp}}}(A)\) is attained, and \({\xi _{t}^{\mathrm {cp}}}(A) \rightarrow {\xi _{\infty }^{\mathrm {cp}}}(A) = {\xi _{*}^{\mathrm {cp}}}(A)\) as \(t\rightarrow \infty \). If \({\xi _{t}^{\mathrm {cp}}}(A)\) admits a flat optimal solution, then \({\xi _{t}^{\mathrm {cp}}}(A) = {\xi _{\infty }^{\mathrm {cp}}}(A)\). Moreover, \({\xi _{\infty }^{\mathrm {cp}}}(A) = {\xi _{*}^{\mathrm {cp}}}(A)\) is the minimum value of L(1) taken over all conic combinations \(L\) of evaluations at elements of \(D(S_A^{\mathrm {cp}})\) satisfying \(A = (L(x_ix_j))\).


Proof

We may assume \(A\ne 0\). Since \(\sqrt{A_{ii}} x_i -x_i^2 \in S_A^{\mathrm {cp}}\) for all i, using (10) we obtain that \(\mathrm {Tr}(A) -\sum _i x_i^2 \in {{\mathscr {M}}}_2(S_A^{\mathrm {cp}})\). By adapting the proof of Proposition 1 to the commutative setting, we see that the optimum in \({\xi _{t}^{\mathrm {cp}}}(A)\) is attained for \(t \in \mathbb {N}\cup \{\infty \}\), and \({\xi _{t}^{\mathrm {cp}}}(A) \rightarrow {\xi _{\infty }^{\mathrm {cp}}}(A)\) as \(t\rightarrow \infty \).

We now show the inequality \({\xi _{*}^{\mathrm {cp}}}(A)\le {\xi _{\infty }^{\mathrm {cp}}}(A)\), which implies that equality holds. For this, let L be optimal for \({\xi _{\infty }^{\mathrm {cp}}}(A)\). By Theorem 7, the restriction of L to \(\mathbb {R}[\mathbf {x}]_2\) extends to a conic combination of evaluations at points in \(D(S_A^{\mathrm {cp}})\). It follows that this extension is feasible for \({\xi _{*}^{\mathrm {cp}}}(A)\) with the same objective value. This shows that \({\xi _{*}^{\mathrm {cp}}}(A)\le {\xi _{\infty }^{\mathrm {cp}}}(A)\), that the optimum in \({\xi _{*}^{\mathrm {cp}}}(A)\) is attained, and that \({\xi _{*}^{\mathrm {cp}}}(A)\) is the minimum of L(1) over all conic combinations \(L\) of evaluations at elements of \(D(S_A^{\mathrm {cp}})\) such that \(A = (L(x_ix_j))\). Finally, by Theorem 6, we have \({\xi _{t}^{\mathrm {cp}}}(A) = {\xi _{\infty }^{\mathrm {cp}}}(A)\) if \({\xi _{t}^{\mathrm {cp}}}(A)\) admits a flat optimal solution. \(\square \)

Next, we give a reformulation for the parameter \({\xi _{*}^{\mathrm {cp}}}(A)\), which is similar to the formulation of \(\tau _\mathrm {cp}(A)\), although it lacks the constraint \(R \preceq A\) which is present in \(\tau _\mathrm {cp}(A)\).

Proposition 9

We have

$$\begin{aligned} {\xi _{*}^{\mathrm {cp}}}(A) = \mathrm {min}\Big \{ \alpha : \alpha \ge 0,\, A \in \alpha \cdot \mathrm {conv} \big \{ R \in \mathrm {S}^n : 0 \le R \le A, \, {{\,\mathrm{rank}\,}}(R) \le 1\big \}\Big \}. \end{aligned}$$


This follows directly from the reformulation of \({\xi _{*}^{\mathrm {cp}}}(A)\) in Proposition 8 in terms of conic evaluations at points in \(D(S_A^{\mathrm {cp}})\) after observing that, for \(v \in \mathbb {R}^n\), we have \(v \in D(S_A^{\mathrm {cp}})\) if and only if the matrix \(R = vv^\textsf {T}\) satisfies \(0 \le R \le A\). \(\square \)
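The equivalence invoked in this proof can be checked numerically. Assuming, as in the preceding part of the section, that \(S_A^{\mathrm {cp}}\) consists of the localizing polynomials \(\sqrt{A_{ii}}x_i - x_i^2\) and \(A_{ij} - x_ix_j\), membership of a nonnegative vector v in \(D(S_A^{\mathrm {cp}})\) coincides with the entrywise conditions \(0 \le vv^\textsf {T}\le A\). A minimal numpy sketch, with A a randomly generated completely positive matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sample completely positive matrix: A = B B^T with B entrywise nonnegative.
B = rng.random((4, 6))
A = B @ B.T

def in_D(v, A, tol=1e-12):
    # All localizing polynomials are nonnegative at v:
    # sqrt(A_ii) v_i - v_i^2 >= 0 and A_ij - v_i v_j >= 0.
    n = len(v)
    diag_ok = all(np.sqrt(A[i, i]) * v[i] - v[i] ** 2 >= -tol for i in range(n))
    offd_ok = all(A[i, j] - v[i] * v[j] >= -tol for i in range(n) for j in range(n))
    return diag_ok and offd_ok

def rank1_ok(v, A, tol=1e-12):
    # The rank-one matrix R = v v^T satisfies 0 <= R <= A entrywise.
    R = np.outer(v, v)
    return bool((R >= -tol).all() and (R <= A + tol).all())

# For nonnegative vectors v the two conditions agree.
for _ in range(500):
    v = rng.random(4) * np.sqrt(A.max())
    assert in_D(v, A) == rank1_ok(v, A)
```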

3.3 Additional Constraints and Convergence to \(\tau _\mathrm {cp}(A)\)

The reformulation of the parameter \({\xi _{*}^{\mathrm {cp}}}(A)\) in Proposition 9 differs from \(\tau _\mathrm {cp}(A)\) in that the constraint \(R\preceq A\) is missing. In order to have a hierarchy converging to \(\tau _\mathrm {cp}(A)\), we need to add constraints to enforce that L can be decomposed as a conic combination of evaluation maps at nonnegative vectors v satisfying \(vv^\mathsf{T}\preceq A\). Here, we present two ways to achieve this goal. First, we show that the tensor constraints (20) suffice in the sense that \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) =\tau _{\mathrm {cp}}(A)\) (note that the constraints (19) are not needed for this result). However, because of the special form of the tensor constraints we do not know whether \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) admitting a flat optimal solution implies \({\xi _{t,\dagger }^{\mathrm {cp}}}(A) = {\xi _{*,\dagger }^{\mathrm {cp}}}(A)\), and we do not know whether \({\xi _{\infty ,\dagger }^{\mathrm {cp}}}(A) = {\xi _{*,\dagger }^{\mathrm {cp}}}(A)\). Second, we adapt the approach of adding additional localizing constraints from Sect. 2.2 to the commutative setting, where we do show \({\xi _{\infty ,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) = {\xi _{*,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) = \tau _{\mathrm {cp}}(A)\). This yields a doubly indexed sequence of semidefinite programs whose optimal values converge to \(\tau _{\mathrm {cp}}(A)\).

Proposition 10

Let \(A \in \hbox {CP}^n\). For every \(t \in \mathbb {N}\cup \{\infty \}\), the optimum in \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) is attained. We have \({\xi _{t,\dagger }^{\mathrm {cp}}}(A) \rightarrow {\xi _{\infty ,\dagger }^{\mathrm {cp}}}(A)\) as \(t\rightarrow \infty \) and \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) =\tau _{\mathrm {cp}}(A)\).


The attainment of the optima in \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) for \(t \in \mathbb {N}\cup \{ \infty \}\) and the convergence of \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) to \({\xi _{\infty ,\dagger }^{\mathrm {cp}}}(A)\) can be shown in the same way as the analogous statements for \({\xi _{t}^{\mathrm {cp}}}(A)\) in Proposition 8.

We have seen the inequality \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) \le \tau _{\mathrm {cp}}(A)\) in Proposition 6. Now we show the reverse inequality. Let L be feasible for \({\xi _{*,\dagger }^{\mathrm {cp}}}(A)\). We will show that L is feasible for \(\tau _{\mathrm {cp}}(A)\), which implies \(\tau _{\mathrm {cp}}(A)\le L(1)\) and thus \(\tau _{\mathrm {cp}}(A)\le {\xi _{*,\dagger }^{\mathrm {cp}}}(A)\).

By Proposition 7 and the fact that \({{\,\mathrm{rank}\,}}(A) \le \tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\), we have \(L(1) > 0\) (where we assume \(A\ne 0\)). By Theorem 5, we may write

$$\begin{aligned} L= L(1) \sum _{k=1}^K \lambda _k L_{v_k}, \end{aligned}$$

where \(\lambda _k>0\), \(\sum _k \lambda _k =1\), and \(L_{v_k}\) is an evaluation map at a point \(v_k \in D(S_A^{\mathrm {cp}})\). We define the matrices \(R_k = v_k v_k^\textsf {T}\), so that \(A = L(1) \sum _{k=1}^K \lambda _k R_k\). The matrices \(R_k\) satisfy \(0 \le R_k \le A\) since \(v_k \in D(S_A^{\mathrm {cp}})\). Clearly also \(R_k \succeq 0\). It remains to show that \(R_k \preceq A\). For this we use the tensor constraints (20). Using that L is a conic combination of evaluation maps, we may rewrite these constraints as

$$\begin{aligned} L(1) \sum _{k=1}^K \lambda _k R_k^{\otimes l} \preceq A^{\otimes l}, \end{aligned}$$

from which it follows that \(L(1) \lambda _k R_k^{\otimes l} \preceq A^{\otimes l}\) for all \(k\in [K]\). Therefore, for all \(k\in [K]\) and all vectors v with \(v^\mathsf{T}R_kv>0\), we have

$$\begin{aligned} L(1) \lambda _k \le \left( \frac{v^\textsf {T}A v}{v^\textsf {T}R_kv}\right) ^l \quad \text {for all} \quad l \in \mathbb {N}. \end{aligned}$$

Suppose there is a k such that \(R_k \not \preceq A\). Then there exists a v such that \(v^\textsf {T}R_k v > v^\textsf {T}A v\). As \((v^\textsf {T}A v) / (v^\textsf {T}R_kv) < 1\), letting l tend to \(\infty \) we obtain \(L(1)\lambda _k=0\), reaching a contradiction. It follows that \(R_k \preceq A\) for all \(k \in [K]\). \(\square \)
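The tensor-power argument rests on the identity \((v^{\otimes l})^\textsf {T}M^{\otimes l}\,v^{\otimes l} = (v^\textsf {T}Mv)^l\), which turns the ratio in the displayed bound into its l-th power and makes it decay geometrically. A small numerical illustration, with arbitrary test matrices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Tensor powers turn quadratic forms into powers:
# (v^{(x) l})^T M^{(x) l} (v^{(x) l}) = (v^T M v)^l.
A = rng.random((3, 3)); A = A @ A.T
R = A + np.outer([1, 0, 0], [1, 0, 0])   # R is NOT below A in the psd order
v = np.array([1.0, 0.0, 0.0])
assert v @ R @ v > v @ A @ v

for l in range(1, 4):
    vl, Al, Rl = v, A, R
    for _ in range(l - 1):
        vl = np.kron(vl, v)
        Al, Rl = np.kron(Al, A), np.kron(Rl, R)
    assert np.isclose(vl @ Al @ vl, (v @ A @ v) ** l)
    assert np.isclose(vl @ Rl @ vl, (v @ R @ v) ** l)

# Hence lam * (v^T R v)^l <= (v^T A v)^l forces lam -> 0 as l grows,
# whenever v^T R v > v^T A v.
ratio = (v @ A @ v) / (v @ R @ v)
assert ratio < 1 and ratio ** 100 < 1e-3
```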

The second approach for reaching \(\tau _\mathrm {cp}(A)\) is based on using the extra localizing constraints from Sect. 2.2. For a subset \(V\subseteq \mathbb {S}^{n-1}\), define \({\xi _{t,V}^{\mathrm {cp}}}(A)\) by replacing the truncated quadratic module \({\mathscr {M}}_{2t}(S_A^{\mathrm {cp}})\) in \({\xi _{t}^{\mathrm {cp}}}(A)\) by \({\mathscr {M}}_{2t}(S_{A,V}^{\mathrm {cp}})\), where

$$\begin{aligned} S_{A,V}^{\mathrm {cp}}= S_A^{\mathrm {cp}}\cup \left\{ v^\textsf {T}Av-\Big (\sum _{i=1}^n v_ix_i\Big )^2 : v\in V \right\} . \end{aligned}$$

Proposition 5 can be adapted to the completely positive setting, so that we have a sequence of finite subsets \(V_1 \subseteq V_2 \subseteq \ldots \subseteq \mathbb {S}^{n-1}\) with \( {\xi _{*,V_k}^{\mathrm {cp}}}(A) \rightarrow {\xi _{*,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) \) as \(k\rightarrow \infty \). Proposition 8 still holds when adding extra localizing constraints, so that for any \(k\ge 1\) we have

$$\begin{aligned} \lim _{t \rightarrow \infty } {\xi _{t,V_k}^{\mathrm {cp}}}(A) = {\xi _{*,V_k}^{\mathrm {cp}}}(A). \end{aligned}$$

Combined with Proposition 11, this shows that we have a doubly indexed sequence \({\xi _{t,V_k}^{\mathrm {cp}}}(A)\) of semidefinite programs that converges to \(\tau _\mathrm {cp}(A)\) as \(t \rightarrow \infty \) and \(k \rightarrow \infty \).

Proposition 11

For \(A \in \hbox {CP}^n\) we have \({\xi _{*,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) = \tau _{\mathrm {cp}}(A)\).


The proof is the same as the proof of Proposition 9, with the following additional observation: Given a vector \(u \in \mathbb {R}^n\), we have \(u \in D(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cp}})\) only if \(uu^\textsf {T}\preceq A\). The latter follows from the additional localizing constraints: for each \(v \in \mathbb {S}^{n-1}\) (and hence, by homogeneity, for all \(v \in \mathbb {R}^n\)) we have

$$\begin{aligned} 0 \le v^\textsf {T}A v - \Big (\sum _i v_i u_i \Big )^2 = v^{\textsf {T}} ( A - uu^\textsf {T}) v. \end{aligned}$$

\(\square \)
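The observation in this proof can be illustrated numerically in its contrapositive form: when \(uu^\textsf {T}\not \preceq A\), an eigenvector of \(A - uu^\textsf {T}\) for a negative eigenvalue is a unit vector at which one of the extra localizing constraints is violated. A sketch with randomly generated data:

```python
import numpy as np

rng = np.random.default_rng(1)

# A sample completely positive matrix.
B = rng.random((4, 6))
A = B @ B.T

# A vector u for which u u^T is NOT below A in the psd order.
u = 2.0 * np.sqrt(np.diag(A))            # too large: (A - uu^T)_{11} < 0
w, V = np.linalg.eigh(A - np.outer(u, u))
assert w[0] < 0                          # so uu^T is not <= A

# The unit eigenvector for the most negative eigenvalue violates the
# localizing constraint v^T A v - (sum_i v_i u_i)^2 >= 0.
v = V[:, 0]                              # v lies on the unit sphere
assert v @ A @ v - (v @ u) ** 2 < 0
```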

3.4 More Efficient Tensor Constraints

Here, we show that for any integer \(l\ge 2\) the constraint \(A^{\otimes l} -(L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\succeq 0\), used in the definition of \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\), can be reformulated in a more economical way using matrices indexed by commutative monomials in \([\mathbf{x}]_{=l}\) instead of noncommutative words in \(\langle \mathbf{x}\rangle _{=l}\). For this we exploit the symmetry in the matrices \(A^{\otimes l}\) and \((L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\) for \(L \in \mathbb {R}[\mathbf {x}]_{2l}^*\). Recall that for a word \(w \in \langle \mathbf {x}\rangle \), we let \(w^c\) denote the corresponding (commutative) monomial in \([\mathbf {x}]\).

Define the matrix \(Q_l \in \mathbb {R}^{[\mathbf{x}]_{=l} \times \langle \mathbf{x}\rangle _{=l}}\) by

$$\begin{aligned} (Q_l)_{m,w} = {\left\{ \begin{array}{ll} 1/d_m &{} \text { if } w^c = m,\\ 0 &{} \text { otherwise,} \end{array}\right. } \end{aligned}$$

where, for \(m = x_1^{\alpha _1} \cdots x_n^{\alpha _n} \in [\mathbf {x}]_{=l}\), we define the multinomial coefficient

$$\begin{aligned} d_m = \big |\big \{w\in \langle \mathbf{x}\rangle _{=l}: w^c = m\big \}\big | = \frac{l!}{\alpha _1! \cdots \alpha _n!}. \end{aligned}$$
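The count defining \(d_m\) can be confirmed by brute-force enumeration of words; the sketch below uses the sample parameters \(n = 3\), \(l = 4\) and the monomial \(m = x_1^2x_2x_3\):

```python
import itertools
import math

# Count the words of length l in n noncommuting variables whose commutative
# image has exponent vector alpha, and compare with the multinomial
# coefficient l! / (alpha_1! ... alpha_n!).
n, l = 3, 4
alpha = (2, 1, 1)                        # m = x1^2 x2 x3

count = sum(1 for word in itertools.product(range(n), repeat=l)
            if tuple(word.count(i) for i in range(n)) == alpha)

d_m = math.factorial(l) // math.prod(math.factorial(a) for a in alpha)
assert count == d_m == 12
```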

Lemma 9

For \(L \in \mathbb {R}[\mathbf{x}]_{2l}^*\) we have

$$\begin{aligned} Q_l (L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}} Q_l^\textsf {T}= (L(mm'))_{m,m'\in [\mathbf{x}]_{=l}}. \end{aligned}$$


For \(m,m'\in [\mathbf{x}]_{=l}\), the \((m,m')\)-entry of the left-hand side is equal to

$$\begin{aligned} \sum _{w,w'\in \langle \mathbf{x}\rangle _{=l}} (Q_l)_{m,w}(Q_l)_{m',w'}L((ww')^c)&= \sum _{\underset{w^c = m}{w \in \langle \mathbf{x}\rangle _{=l}}} \sum _{\underset{(w')^c = m'}{w' \in \langle \mathbf{x}\rangle _{=l}}} \frac{L((ww')^c)}{d_md_{m'}} = L(mm'). \end{aligned}$$

\(\square \)
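Lemma 9 is easy to verify computationally for small parameters. The sketch below builds \(Q_l\) for \(n = l = 2\), simulates a linear functional L by assigning a random value to each commutative monomial of degree 2l (identified with its sorted tuple of letters), and checks the matrix identity:

```python
import itertools
import numpy as np

# Check Q_l (L((w w')^c))_{w,w'} Q_l^T = (L(m m'))_{m,m'} for n = l = 2.
rng = np.random.default_rng(2)
n, l = 2, 2

words = list(itertools.product(range(n), repeat=l))      # <x>_{=l}, as tuples
monos = sorted({tuple(sorted(w)) for w in words})        # [x]_{=l}

# A random linear functional on [x]_{=2l}: one value per commutative monomial.
values = {}
def L(letters):
    key = tuple(sorted(letters))
    if key not in values:
        values[key] = rng.standard_normal()
    return values[key]

d = {m: sum(1 for w in words if tuple(sorted(w)) == m) for m in monos}
Q = np.array([[1.0 / d[m] if tuple(sorted(w)) == m else 0.0 for w in words]
              for m in monos])

N = np.array([[L(w + w2) for w2 in words] for w in words])   # (L((ww')^c))
M = np.array([[L(m + m2) for m2 in monos] for m in monos])   # (L(mm'))

assert np.allclose(Q @ N @ Q.T, M)
```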

The symmetric group \(S_l\) acts on \(\langle \mathbf {x}\rangle _{=l}\) by \((x_{i_1} \cdots x_{i_l})^\sigma = x_{i_{\sigma (1)}} \cdots x_{i_{\sigma (l)}}\) for \(\sigma \in S_l\). Let

$$\begin{aligned} P = \frac{1}{l!} \sum _{\sigma \in S_l} P_\sigma , \end{aligned}$$

where, for any \(\sigma \in S_l\), \(P_\sigma \in \mathbb {R}^{\langle \mathbf{x}\rangle _{=l} \times \langle \mathbf{x}\rangle _{=l}}\) is the permutation matrix defined by

$$\begin{aligned} (P_\sigma )_{w,w'} = {\left\{ \begin{array}{ll} 1 &{} \text {if } w^\sigma = w',\\ 0 &{} \text {otherwise}.\end{array}\right. } \end{aligned}$$

A matrix \(M \in \mathbb {R}^{\langle \mathbf{x}\rangle _{=l} \times \langle \mathbf{x}\rangle _{=l}}\) is said to be \(S_l\)-invariant if \(P_\sigma M = M P_\sigma \) for all \(\sigma \in S_l\).

Lemma 10

If \(M \in \mathbb {R}^{\langle \mathbf{x}\rangle _{=l} \times \langle \mathbf{x}\rangle _{=l}}\) is symmetric and \(S_l\)-invariant, then

$$\begin{aligned} M\succeq 0 \quad \Longleftrightarrow \quad Q_l M Q_l^\textsf {T}\succeq 0. \end{aligned}$$


The implication \(M \succeq 0 \Longrightarrow Q_l M Q_l^\textsf {T}\succeq 0\) is immediate. For the other implication, we need a preliminary fact. Consider the diagonal matrix \(D \in \mathbb {R}^{[\mathbf{x}]_{=l}\times [\mathbf{x}]_{=l}}\) with \(D_{mm}= d_m\) for \(m \in [\mathbf{x}]_{=l}\). We claim that \(Q_l^\textsf {T}D Q_l = P\), the matrix in (24). Indeed, for any \(w,w'\in \langle \mathbf{x}\rangle _{=l}\), we have

$$\begin{aligned} (Q_l^\textsf {T}D Q_l)_{ww'}&= \sum _{m\in [\mathbf{x}]_{=l}} (Q_l)_{mw}(Q_l)_{mw'}D_{mm} = {\left\{ \begin{array}{ll} 1/d_m &{} \text {if } w^c = (w')^c=m,\\ 0 &{} \text {otherwise}\end{array}\right. }\\&= \frac{|\{\sigma \in S_l: w^\sigma =w'\}|}{l!} = P_{ww'}. \end{aligned}$$

Suppose \(Q_l M Q_l^\textsf {T}\succeq 0\), and let \(\lambda \) be an eigenvalue of M with eigenvector z. Since \(MP=PM\), we may assume \(Pz=z\), for otherwise we can replace z by Pz, which is still an eigenvector of M with eigenvalue \(\lambda \). We may also assume z to be a unit vector. Then, \(\lambda \ge 0\) can be shown using the identity \(Q_l^\textsf {T}D Q_l=P\) as follows:

$$\begin{aligned} \lambda \!=\! z^\textsf {T}M z \!=\! z^\textsf {T}P M P z \!=\! z^\textsf {T}(Q_l^\textsf {T}D Q_l) M(Q_l^\textsf {T}D Q_l)z = (D Q_l z)^\textsf {T}(Q_l M Q_l^\textsf {T}) D Q_l z \ge 0. \end{aligned}$$

\(\square \)
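The identity \(Q_l^\textsf {T}D Q_l = P\) used in this proof can likewise be verified numerically, for instance with the sample parameters \(n = 2\) and \(l = 3\):

```python
import itertools
import math
import numpy as np

# Check the identity Q_l^T D Q_l = P for n = 2, l = 3.
n, l = 2, 3
words = list(itertools.product(range(n), repeat=l))
monos = sorted({tuple(sorted(w)) for w in words})
idx = {w: i for i, w in enumerate(words)}

d = {m: sum(1 for w in words if tuple(sorted(w)) == m) for m in monos}
Q = np.array([[1.0 / d[m] if tuple(sorted(w)) == m else 0.0 for w in words]
              for m in monos])
D = np.diag([float(d[m]) for m in monos])

# P = (1/l!) sum_sigma P_sigma, with (P_sigma)_{w,w'} = 1 iff w^sigma = w'.
P = np.zeros((len(words), len(words)))
for sigma in itertools.permutations(range(l)):
    for w in words:
        w_sigma = tuple(w[sigma[k]] for k in range(l))
        P[idx[w], idx[w_sigma]] += 1.0 / math.factorial(l)

assert np.allclose(Q.T @ D @ Q, P)
assert np.allclose(P @ P, P)   # P is an orthogonal projection
```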

We can now derive our symmetry reduction result:

Proposition 12

For \(L \in \mathbb {R}[\mathbf{x}]_{2l}^*\) we have

$$\begin{aligned} A^{\otimes l}-(L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\succeq 0 \quad \Longleftrightarrow \quad Q_l A^{\otimes l}Q_l^\textsf {T}- (L(mm'))_{m,m'\in [\mathbf{x}]_{=l}}\succeq 0. \end{aligned}$$


For any \(w,w'\in \langle \mathbf{x}\rangle _{=l}\), we have \((P_\sigma A^{\otimes l} P_\sigma ^\textsf {T})_{w,w'} = A^{\otimes l}_{w^\sigma , (w')^\sigma } = A^{\otimes l}_{w,w'}\) and

$$\begin{aligned} (P_\sigma (L((uu')^c))_{u,u'\in \langle \mathbf{x}\rangle _{=l}} P_\sigma ^\textsf {T})_{w,w'} = L((w^\sigma (w')^\sigma )^c) = L((ww')^c). \end{aligned}$$

This shows that the matrix \(A^{\otimes l}-(L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\) is \(S_l\)-invariant. Hence, the claimed result follows by using Lemmas 9 and 10. \(\square \)

3.5 Computational Examples

3.5.1 Bipartite Matrices

Consider the \((p+q)\times (p+q)\) matrices

$$\begin{aligned} P(a,b) = \begin{pmatrix} (a+q) I_p &{} J_{p,q} \\ J_{q,p} &{} (b+p) I_q \end{pmatrix}, \quad a,b \in \mathbb {R}_+, \end{aligned}$$

where \(J_{p,q}\) denotes the all-ones matrix of size \(p \times q\). We have \(P(a,b)=P(0,0)+D\) for some nonnegative diagonal matrix D. As can be easily verified, P(0, 0) is completely positive with \(\hbox {cp-rank}(P(0,0))=pq\), so P(ab) is completely positive with \(pq \le \hbox {cp-rank}(P(a,b)) \le pq + p + q\).
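One factorization witnessing \(\hbox {cp-rank}(P(0,0)) \le pq\) can be written down explicitly (this is easily checked by hand): \(P(0,0) = \sum _{i \in [p], j \in [q]} (e_i + e_{p+j})(e_i + e_{p+j})^\textsf {T}\), a sum of pq nonnegative rank-one atoms. A quick numerical confirmation for \(p = 2\), \(q = 3\):

```python
import numpy as np

p, q = 2, 3
P0 = np.block([[q * np.eye(p), np.ones((p, q))],
               [np.ones((q, p)), p * np.eye(q)]])   # P(0,0)

# pq nonnegative rank-one atoms: one for each pair (i, j).
atoms = []
for i in range(p):
    for j in range(q):
        v = np.zeros(p + q)
        v[i] = 1.0
        v[p + j] = 1.0
        atoms.append(np.outer(v, v))

assert len(atoms) == p * q
assert np.allclose(sum(atoms), P0)
```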

For \(p=2\) and \(q=3\), we have \(\hbox {cp-rank}(P(a,b))=6\) for all \(a,b \ge 0\), which follows from the fact that \(5 \times 5\) completely positive matrices with at least one zero entry have \(\hbox {cp-rank}\) at most 6; see [6, Theorem 3.12]. Fawzi and Parrilo [27] show that \(\tau _{\text {cp}}^{\mathrm {sos}}(P(0,0)) = 6\), and give a subregion of \([0,1]^2\) where \(5< \tau _{\text {cp}}^{\mathrm {sos}}(P(a,b)) < 6\). The next lemma shows the bound \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b))\) is tight for all \(a,b \ge 0\) and therefore strictly improves on \(\tau _{\mathrm {cp}}^{\mathrm {sos}}\) in this region.

Lemma 11

For \(a,b \ge 0\) we have \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b)) \ge pq\).


Let L be feasible for \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b))\) and let

$$\begin{aligned} B = \begin{pmatrix} \alpha &{} c^\textsf {T}\\ c &{} X \end{pmatrix} \end{aligned}$$

be the principal submatrix of \(M_2(L)\) where the rows and columns are indexed by

$$\begin{aligned} \{1\} \cup \{x_ix_j : 1 \le i \le p, \, p+1 \le j \le p+q\}. \end{aligned}$$

It follows that c is the all-ones vector \(c = \mathbf {1}\). Moreover, if \(P(a,b)_{ij} = 0\) for some \(i\ne j\), then the constraints \(L(x_ix_ju) \ge 0\) and \(L((P(a,b)_{ij} - x_ix_j)u) \ge 0\) imply \(L(x_i x_j u) = 0\) for all \(u \in [\mathbf {x}]_2\). Hence, \(X_{x_ix_j,x_kx_l} = L(x_i x_j x_k x_l) = 0\) whenever \(x_ix_j \ne x_k x_l\). It follows that X is a diagonal matrix. We write

$$\begin{aligned} B = \begin{pmatrix} \alpha &{} \mathbf {1}^\textsf {T}\\ \mathbf {1} &{} \mathrm {Diag}(z_1, \ldots , z_{pq}) \end{pmatrix}. \end{aligned}$$

Since \(\begin{pmatrix} 1 &{} - \mathbf {1}^\textsf {T}\\ -\mathbf {1} &{} J \end{pmatrix} \succeq 0\) we have

$$\begin{aligned} 0 \le \mathrm {Tr}\left( \begin{pmatrix} \alpha &{} \mathbf {1}^\textsf {T}\\ \mathbf {1} &{} \mathrm {Diag}(z_1, \ldots , z_{pq}) \end{pmatrix} \begin{pmatrix} 1 &{} - \mathbf {1}^\textsf {T}\\ -\mathbf {1} &{} J \end{pmatrix}\right) = \alpha - 2 pq + \sum _{k = 1}^{pq} z_k. \end{aligned}$$

Finally, by the constraints \(L((P(a,b)_{ij} - x_i x_j) u) \ge 0\) (with \(i \in [p], j \in p+[q]\) and \(u = x_i x_j\)) and \(L(x_i x_j) = P(a,b)_{ij}\) we obtain \(z_k \le 1\) for all \(k \in [pq]\). Combined with the above inequality, it follows that

$$\begin{aligned} L(1) = \alpha \ge 2pq - \sum _{k=1}^{pq} z_k \ge pq, \end{aligned}$$

and hence \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b)) \ge pq\). \(\square \)

3.5.2 Examples Related to the DJL-Conjecture

The Drew–Johnson–Loewy conjecture [21] states that the maximal \(\hbox {cp-rank}\) of an \(n~\times ~n\) completely positive matrix is equal to \(\lfloor n^2/4 \rfloor \). Recently, this conjecture has been disproven for \(n=7,8,9,10,11\) in [10] and for all \(n \ge 12\) in [11] (interestingly, it remains open for \(n=6\)). Here, we study our bounds on the examples of [10]. Although our bounds are not tight for the \(\hbox {cp-rank}\), they are non-trivial and as such may be of interest for future comparisons. For numerical stability reasons, we have evaluated our bounds on scaled versions of the matrices from [10], so that the diagonal entries become equal to 1. In Table 1 the matrices \(\tilde{M}_7\), \(\tilde{M}_8\) and \(\tilde{M}_9\) correspond to the matrices \(\tilde{M}\) in Examples 1–3 of [10], and \(M_7\), \(M_{11}\) correspond to the matrices M in Examples 1 and 4. The column \({\xi _{2,\dagger }^{\mathrm {cp}}}(\cdot ) + x_i x_j\) corresponds to the bound \({\xi _{2,\dagger }^{\mathrm {cp}}}(\cdot )\) where we replace \(S_A^{\mathrm {cp}}\) by \(S_A^{\mathrm {cp}}\cup \{ x_i x_j : 1 \le i < j \le n\}\).

Table 1 Examples from [10] with various bounds on their cp-rank

4 Lower Bounds on the Nonnegative Rank

In this section, we adapt the techniques for the cp-rank from Sect. 3 to the asymmetric setting of the nonnegative rank. We now view a factorization \(A = (a_i^\textsf {T}b_j)_{i \in [m], j \in [n]}\) by nonnegative vectors as a factorization by positive semidefinite diagonal matrices, by writing \(A_{ij} = {{\,\mathrm{Tr}\,}}(X_i X_{m+j})\), with \(X_i =\mathrm{Diag}(a_i)\) and \(X_{m+j} = \mathrm{Diag}(b_j)\). Note that we can view this as a “partial matrix” setting, where for the symmetric matrix \(({{\,\mathrm{Tr}\,}}(X_iX_k))_{i,k\in [m+n]}\) of size \(m+n\), only the off-diagonal entries at the positions \((i,m+j)\) for \(i\in [m], j\in [n]\) are specified.

This asymmetry requires rescaling the factors in order to get upper bounds on their maximal eigenvalues, which is needed to ensure the Archimedean property for the selected localizing polynomials. For this we use the well-known fact that for any \(A \in \mathbb {R}_+^{m \times n}\) there exists a factorization \(A=({{\,\mathrm{Tr}\,}}(X_iX_{m+j}))\) by diagonal nonnegative matrices of size \({{\,\mathrm{rank}\,}}_+(A)\), such that

$$\begin{aligned} \lambda _\mathrm {max}(X_i), \lambda _\mathrm {max}(X_{m+j}) \le \sqrt{A_\mathrm {max}} \quad \text {for all} \quad i \in [m], j \in [n], \end{aligned}$$

where \(A_\mathrm {max}:= \mathrm {max}_{i,j} A_{ij}\). To see this, observe that for any rank one matrix \(R = u v^\textsf {T}\) with \(0 \le R \le A\), one may assume \(0 \le u_i, v_j \le \sqrt{A_\mathrm {max}}\) for all ij. Hence, the set

$$\begin{aligned} S_A^{+}= \big \{\sqrt{A_\mathrm {max}}x_i - x_i^2 : i \in [m+n]\big \} \cup \big \{A_{ij} - x_i x_{m+j} : i \in [m], j \in [n] \big \} \end{aligned}$$

is localizing for A; that is, there exists a minimal factorization \(\mathbf {X}\) of A with \(\mathbf {X}\in {\mathscr {D}}(S_A^+)\).
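One way to see the entrywise bound used above is to balance the two factors of a rank-one matrix: if \(R = uv^\textsf {T}\le A\) with \(u, v \ge 0\), replacing \((u,v)\) by \((u/t, tv)\) with \(t = \sqrt{\max _i u_i / \max _j v_j}\) makes all entries of both vectors at most \(\sqrt{A_\mathrm {max}}\). A small numpy sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((3, 5)) + 0.5

# A rank-one matrix 0 <= R <= A: scale a random outer product to fit below A.
u, v = rng.random(3) + 0.1, rng.random(5) + 0.1
u = u * (A / np.outer(u, v)).min()       # now u v^T <= A entrywise

# Balance the factors; since (max u)(max v) <= A_max, both maxima end up
# below sqrt(A_max).
t = np.sqrt(u.max() / v.max())
u, v = u / t, v * t

A_max = A.max()
assert (np.outer(u, v) <= A + 1e-12).all()
assert u.max() <= np.sqrt(A_max) + 1e-12
assert v.max() <= np.sqrt(A_max) + 1e-12
```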

Given \(A\in \mathbb {R}^{m\times n}_{\ge 0}\), for each \(t \in \mathbb {N}\cup \{\infty \}\) we consider the semidefinite program

$$\begin{aligned} {\xi _{t}^{\mathrm {+}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}[x_1,\ldots ,x_{m+n}]_{2t}^*,\\&L(x_ix_{m+j}) = A_{ij} \quad \text {for} \quad i \in [m], j \in [n],\\&L \ge 0 \quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{+}) \big \}. \end{aligned}$$

Moreover, define \({\xi _{*}^{\mathrm {+}}}(A)\) by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to the program defining \({\xi _{\infty }^{\mathrm {+}}}(A)\). It is easy to check that \({\xi _{t}^{\mathrm {+}}}(A)\le {\xi _{\infty }^{\mathrm {+}}}(A)\le {\xi _{*}^{\mathrm {+}}}(A)\le {{\,\mathrm{rank}\,}}_+(A)\) for \(t \in \mathbb {N}\).

Denote by \({\xi _{t,\dagger }^{\mathrm {+}}}(A)\) the strengthening of \({\xi _{t}^{\mathrm {+}}}(A)\) where we add the positivity constraints

$$\begin{aligned} L(gu) \ge 0 \quad \text {for} \quad g \in \{1\} \cup S_A^{+}\quad \text {and} \quad u \in [\mathbf{x}]_{2t-\deg (g)}. \end{aligned}$$

Note that these extra constraints can help for finite t, but are redundant for \(t \in \{\infty , *\}\).

4.1 Comparison to Other Bounds

As in the previous section, we compare our bounds to the bounds by Fawzi and Parrilo [27]. They introduce the following parameter \(\tau _+(A)\) as analog of the bound \(\tau _\mathrm {cp}(A)\) for the nonnegative rank:

$$\begin{aligned} \tau _+(A) = \mathrm {min}\Big \{ \alpha : \alpha \ge 0,\,A \in \alpha \cdot \mathrm {conv} \big \{ R \in \mathbb {R}^{m \times n}: 0 \le R \le A, \, {{\,\mathrm{rank}\,}}(R) \le 1\big \}\Big \}, \end{aligned}$$

and the analog \(\tau _+^\mathrm {sos}(A)\) of the bound \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\) for the nonnegative rank:

$$\begin{aligned} \tau _{+}^{\mathrm {sos}}(A) = \mathrm {inf} \big \{ \alpha : \;&X \in \mathbb {R}^{mn \times mn}, \, \alpha \in \mathbb {R},\\&\begin{pmatrix} \alpha &{} \text {vec}(A)^\textsf {T}\\ \text {vec}(A) &{} X \end{pmatrix} \succeq 0, \\&X_{(i,j),(i,j)} \le A_{ij}^2 \quad \text {for} \quad 1 \le i \le m, 1 \le j \le n, \\&X_{(i,j),(k,l)} = X_{(i,l),(k,j)} \quad \text {for} \quad 1 \le i< k \le m, \; 1 \le j < l \le n \big \}. \end{aligned}$$

First, we give the analog of Proposition 8, whose proof we omit since it is very similar.

Proposition 13

Let \(A \in \mathbb {R}_+^{m \times n}\). For every \(t \in \mathbb {N}\cup \{\infty , *\}\) the optimum in \({\xi _{t}^{\mathrm {+}}}(A)\) is attained, and \({\xi _{t}^{\mathrm {+}}}(A) \rightarrow {\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)\) as \(t\rightarrow \infty \). If \({\xi _{t}^{\mathrm {+}}}(A)\) admits a flat optimal solution, then \({\xi _{t}^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)\). Moreover, \({\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)\) is the minimum of L(1) over all conic combinations \(L\) of evaluations at elements of \(D(S_A^{+})\) satisfying \(A =( L(x_ix_{m+j}))\).

Now we observe that the parameters \({\xi _{\infty }^{\mathrm {+}}}(A)\) and \({\xi _{*}^{\mathrm {+}}}(A)\) coincide with \(\tau _+(A)\), so that we have a sequence of semidefinite programs converging to \(\tau _+(A)\).

Proposition 14

For any \(A \in \mathbb {R}_{\ge 0}^{m \times n}\), we have \({\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A) = \tau _+(A).\)


The discussion at the beginning of Sect. 4 shows that for any rank one matrix R satisfying \(0 \le R \le A\) we may assume that \(R=uv^\textsf {T}\) with \((u,v)\in \mathbb {R}^m_+ \times \mathbb {R}^n_+\) and \( u_i,v_j \le \sqrt{A_{\mathrm {max}}}\) for \(i\in [m],j\in [n]\). Hence, \(\tau _+(A)\) can be written as:

$$\begin{aligned} \mathrm {min}\Big \{\alpha : \alpha \!\ge \! 0,\, A \!\in \!\alpha&\cdot \mathrm {conv} \big \{ uv^\textsf {T}:u \in \Big [0, \sqrt{A_\mathrm {max}}\Big ]^m, v \in \Big [0, \sqrt{A_\mathrm {max}}\Big ]^n,\, uv^\textsf {T}\le A \big \} \Big \} \\&=\mathrm {min}\Big \{ \alpha : \alpha \ge 0,\, A \in \alpha \cdot \mathrm {conv}\big \{uv^\textsf {T}: (u,v) \in D(S_A^{+})\big \} \Big \}. \end{aligned}$$

The equality \({\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)=\tau _+(A)\) now follows from the reformulation of \({\xi _{*}^{\mathrm {+}}}(A)\) in Proposition 13 in terms of conic evaluations, after noting that for (uv) in \( \mathbb {R}^m\times \mathbb {R}^n\) we have \((u,v)\in D(S_A^{+})\) if and only if the matrix \(R=uv^\textsf {T}\) satisfies \(0\le R\le A\). \(\square \)

Analogously to the case of the completely positive rank, we have the following proposition. The proof is similar to that of Proposition 7, considering now for M the principal submatrix of \(M_2(L)\) indexed by the monomials 1 and \(x_ix_{m+j}\) for \(i\in [m]\) and \(j\in [n]\).

Proposition 15

If A is a nonnegative matrix, then \({\xi _{2,\dagger }^{\mathrm {+}}}(A) \ge \tau _{+}^{\mathrm {sos}}(A)\).

In the remainder of this section, we recall how \(\tau _+(A)\) and \(\tau _{+}^{\mathrm {sos}}(A)\) compare to other bounds in the literature. These bounds can be divided into two categories: combinatorial lower bounds and norm-based lower bounds. A diagram in [27] summarizes how \(\tau _+^{\mathrm {sos}}(A)\) and \(\tau _+(A)\) relate to the combinatorial lower bounds.

Here \(\mathrm {RG}(A)\) is the rectangular graph, with \(V = \{(i,j)\in [m]\times [n]: A_{ij} > 0\}\) as vertex set and \(E = \{ ((i,j),(k,l)): A_{il} A_{kj}= 0\}\) as edge set. The coloring number of \(\mathrm {RG}(A)\) coincides with the well-known rectangle covering number (also denoted \({{\,\mathrm{rank}\,}}_B(A)\)), which was used, e.g., in [29] to show that the extension complexity of the correlation polytope is exponential. The clique number of \(\mathrm {RG}(A)\) is also known as the fooling set number (see, e.g., [28]). Observe that the above combinatorial lower bounds only depend on the sparsity pattern of the matrix A, and that they are all equal to one for a strictly positive matrix.

Fawzi and Parrilo [27] have furthermore shown that the bound \(\tau _+(A)\) is at least as good as norm-based lower bounds:

$$\begin{aligned} \tau _+(A) = \underset{\begin{array}{c} {\mathscr {N}} \text { monotone and} \\ \text { positively homogeneous} \end{array}}{\mathrm {sup}} \frac{{\mathscr {N}}^*(A)}{{\mathscr {N}}(A)}. \end{aligned}$$

Here, a function \({\mathscr {N}}: \mathbb {R}^{m \times n}_{+} \rightarrow \mathbb {R}_{+}\) is positively homogeneous if \({\mathscr {N}}(\lambda A) = \lambda {\mathscr {N}}(A)\) for all \(\lambda \ge 0\) and monotone if \({\mathscr {N}}(A) \le {\mathscr {N}}(B)\) for \(A \le B\), and \({\mathscr {N}}^*(A)\) is defined as

$$\begin{aligned} {\mathscr {N}}^*(A) = \mathrm {max}\{ L(A) :&\ L:\mathbb {R}^{m\times n}\rightarrow \mathbb {R} \text{ linear } \text{ and } L(X) \le 1 \text { for all } X \in \mathbb {R}^{m \times n}_{+} \\&\text { with } {{\,\mathrm{rank}\,}}(X) \le 1 \text { and } {\mathscr {N}}(X) \le 1\}. \end{aligned}$$

These bounds are called norm-based since norms often provide valid functions \({\mathscr {N}}\). For example, when \({\mathscr {N}}\) is the \(\ell _\infty \)-norm, Rothvoß [66] used the corresponding lower bound to show that the matching polytope has exponential extension complexity.

When \({\mathscr {N}}\) is the Frobenius norm \({\mathscr {N}}(A) = (\sum _{i,j} A_{ij}^2)^{1/2}\), the parameter \({\mathscr {N}}^*(A)\) is known as the nonnegative nuclear norm. In [26] it is denoted by \(\nu _+(A)\), shown to satisfy \({{\,\mathrm{rank}\,}}_+(A)\ge \left( \nu _+(A)/||A||_F\right) ^2\), and reformulated as

$$\begin{aligned} \nu _+(A)&= \mathrm {min}\left\{ \sum _i \lambda _i : A = \sum _{i} \lambda _i u_i v_i^\textsf {T}, \, (\lambda _i,u_i,v_i) \in \mathbb {R}^{1+m+n}_{+}, \, ||u_i||_2 = ||v_i||_2 = 1 \right\} \end{aligned}$$
$$\begin{aligned}&= \mathrm {max}\big \{ \langle A, W \rangle : W \in \mathbb {R}^{m \times n}, \, \bigl ({\begin{matrix} I &{} -W \\ -W^\textsf {T}&{} I\end{matrix}}\bigr ) \text { is copositive} \big \}. \end{aligned}$$

Here, the cone of copositive matrices is the dual of the cone of completely positive matrices. Fawzi and Parrilo [26] use the copositive formulation (27) to provide bounds \(\nu _+^{[k]}(A)\) (\(k\ge 0\)), based on inner approximations of the copositive cone from [60], which converge to \(\nu _+(A)\) from below. We now observe that by Theorem 7 the atomic formulation of \(\nu _+(A)\) from (26) can be seen as a moment optimization problem:

$$\begin{aligned} \nu _+(A) = \mathrm {min}\int _{V(S)} 1 \, {\hbox {d}} \mu (x) \quad \text {s.t.} \quad A_{ij} = \int _{V(S)} x_i x_{m+j} \, {\hbox {d}} \mu (x) \quad \text {for}\quad i\in [m], j\in [n]. \end{aligned}$$

Here, the optimization variable \(\mu \) is required to be a Borel measure on the variety V(S), where

$$\begin{aligned} S=\textstyle {\left\{ \sum _{i=1}^mx_i^2-1, \ \sum _{j=1}^n x_{m+j}^2-1\right\} }. \end{aligned}$$

(The same observation is made in [74] for the real nuclear norm of a symmetric 3-tensor and in [59] for symmetric odd-dimensional tensors.) For \(t \in \mathbb {N}\cup \{\infty \}\), let \(\mu _t(A)\) denote the parameter defined analogously to \({\xi _{t}^{\mathrm {+}}}(A)\), where we replace the condition \(L\ge 0\) on \({\mathscr {M}}_{2t}(S_A^+)\) by \(L\ge 0\) on \({\mathscr {M}}_{2t}(\{x_1,\ldots , x_{m+n}\})\) and \(L=0\) on \({\mathscr {I}}_{2t}(S)\), and let \(\mu _*(A)\) be obtained by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to \(\mu _\infty (A)\). We have \(\mu _t(A) \rightarrow \mu _\infty (A) = \mu _*(A) = \nu _+(A)\) by Theorem 7 and (a non-normalized analog of) Theorem 8. One can show that \(\mu _1(A)\), with the additional constraints \(L(u) \ge 0\) for all \(u \in [\mathbf{x}]_2\), is at least as good as \(\nu _+^{[0]}(A)\). It is not clear how the hierarchies \(\mu _t(A)\) and \(\nu _+^{[k]}(A)\) compare in general.

4.2 Computational Examples

We illustrate the performance of our approach by comparing our lower bounds \({\xi _{2,\dagger }^{\mathrm {+}}}\) and \({\xi _{3,\dagger }^{\mathrm {+}}}\) to the lower bounds \(\tau _+\) and \(\tau _+^{\mathrm {sos}}\) on the two examples considered in [27].

4.2.1 All Nonnegative \(2 \times 2\) Matrices

For \(A(\alpha ) = \bigl ({\begin{matrix} 1 &{} 1 \\ 1 &{} \alpha \end{matrix}}\bigr )\), Fawzi and Parrilo [27] show that

$$\begin{aligned} \tau _+(A(\alpha )) = 2-\alpha \quad \text {and} \quad \tau _+^{\mathrm {sos}}(A(\alpha )) = \frac{2}{1+\alpha } \quad \text {for all} \quad 0 \le \alpha \le 1. \end{aligned}$$

Since the parameters \(\tau _+(A)\) and \(\tau _+^{\mathrm {sos}}(A)\) are invariant under scaling and permuting rows and columns of A, one can use the identity

$$\begin{aligned} \begin{pmatrix} 1 &{} 1 \\ 1 &{} \alpha \end{pmatrix} = \begin{pmatrix} 1 &{} 0 \\ 0 &{} \alpha \end{pmatrix}\begin{pmatrix} 1 &{} 1 \\ 1 &{} 1/\alpha \end{pmatrix}\begin{pmatrix} 0 &{} 1 \\ 1 &{} 0 \end{pmatrix} \end{aligned}$$

to see that this identity describes the parameters for all nonnegative \(2 \times 2\) matrices. By using a semidefinite programming solver for \(\alpha = k/100\), \(k \in [100]\), we see that \({\xi _{2}^{\mathrm {+}}}(A(\alpha ))\) coincides with \(\tau _+(A(\alpha ))\).
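The scaling identity displayed above is a one-line numerical check (\(\alpha = 0.37\) is an arbitrary test value):

```python
import numpy as np

alpha = 0.37
A = np.array([[1.0, 1.0], [1.0, alpha]])

# A(alpha) is obtained from A(1/alpha) by a diagonal row scaling and a
# column permutation.
D = np.diag([1.0, alpha])
B = np.array([[1.0, 1.0], [1.0, 1.0 / alpha]])      # this is A(1/alpha)
Pi = np.array([[0.0, 1.0], [1.0, 0.0]])

assert np.allclose(D @ B @ Pi, A)
```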

4.2.2 The Nested Rectangles Problem

In this section, we consider the nested rectangles problem as described in [27, Section 2.7.2] (see also [55]), which asks for which ab there exists a triangle T such that \(R(a,b) \subseteq T \subseteq P\), where \(R(a,b) = [-a,a] \times [-b,b]\) and \(P = [-1,1]^2\).

The nonnegative rank relates not only to the extension complexity of a polytope [78], but also to extended formulations of nested pairs [12, 31]. An extended formulation of a pair of polytopes \(P_1\subseteq P_2 \subseteq \mathbb {R}^d\) is a (possibly) higher dimensional polytope K whose projection \(\pi (K)\) is nested between \(P_1\) and \(P_2\). Let us suppose \(\pi (K)= \{ x \in \mathbb {R}^d : \exists y \in \mathbb {R}_+^k, \, (x,y) \in K\}\) and \(K= \{(x,y): Ex+Fy = g,\, y \in \mathbb {R}^k_+\}\); then k is the size of the extended formulation, and the smallest such k is called the extension complexity of the pair \((P_1, P_2)\). It is known (cf. [12, Theorem 1]) that the extension complexity of the pair \((P_1,P_2)\), where

$$\begin{aligned} P_1 = \mathrm {conv}(\{v_1, \ldots , v_n\}) \quad \text {and} \quad P_2 = \left\{ x : a_i^\textsf {T}x \le b_i \text { for } i \in [m]\right\} , \end{aligned}$$

is equal to the nonnegative rank of the generalized slack matrix \(S_{P_1,P_2} \in \mathbb {R}^{n \times m}\), defined by

$$\begin{aligned} (S_{P_1,P_2})_{ij} = b_j - a_j^\textsf {T}v_i \quad \text {for} \quad i\in [n], j\in [m]. \end{aligned}$$

Any nonnegative matrix is the slack matrix of some nested pair of polytopes [34, Lemma 4.1] (see also [31]).

Applying this to the pair (R(ab), P), one immediately sees that there exists a polytope K with at most three facets whose projection \(T = \pi (K)\subseteq \mathbb {R}^2\) satisfies \(R(a,b) \subseteq T \subseteq P\) if and only if the pair (R(ab), P) admits an extended formulation of size 3. For \(a,b>0\), the polytope T has to be 2-dimensional, so K has to be at least 2-dimensional as well; it follows that K and T have to be triangles. Hence, there exists a triangle T such that \(R(a,b) \subseteq T \subseteq P\) if and only if the nonnegative rank of the slack matrix \(S(a,b) := S_{R(a,b),P}\) is equal to 3. One can verify that

$$\begin{aligned} S(a,b) = \begin{pmatrix} 1-a &{} 1+a &{} 1-b &{} 1+b \\ 1+a &{} 1-a &{} 1-b &{} 1+b \\ 1+a &{} 1-a &{} 1+b &{}1-b \\ 1-a &{} 1+a &{} 1+b &{} 1-b \end{pmatrix}. \end{aligned}$$
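The matrix S(a,b) can be reproduced directly from the definition of the generalized slack matrix, with rows indexed by the vertices of R(a,b) and columns by the four facet inequalities of \(P = [-1,1]^2\); the orderings below (vertices \((a,b), (-a,b), (-a,-b), (a,-b)\) and facets \(x \le 1\), \(-x \le 1\), \(y \le 1\), \(-y \le 1\)) are assumptions chosen to match the displayed matrix:

```python
import numpy as np

a, b = 0.3, 0.5                          # arbitrary test values

# Vertices of R(a,b), ordered to match the rows of S(a,b).
V = np.array([[a, b], [-a, b], [-a, -b], [a, -b]])
# Facets of P = [-1,1]^2 as pairs (c, beta) with c^T x <= beta.
C = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
beta = np.ones(4)

S = beta[None, :] - V @ C.T              # S_ij = beta_j - c_j^T v_i

expected = np.array([[1 - a, 1 + a, 1 - b, 1 + b],
                     [1 + a, 1 - a, 1 - b, 1 + b],
                     [1 + a, 1 - a, 1 + b, 1 - b],
                     [1 - a, 1 + a, 1 + b, 1 - b]])
assert np.allclose(S, expected)
```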

Such a triangle exists if and only if \((1+a)(1+b) \le 2\) (see [27, Proposition 4] for a proof sketch). To test the quality of their bound, Fawzi and Parrilo [27] compute \(\tau _+^{\mathrm {sos}}(S(a,b))\) for different values of a and b. In doing so, they determine the region where \(\tau _+^{\mathrm {sos}}(S(a,b))>3\). We do the same for the bounds \({\xi _{1,\dagger }^{\mathrm {+}}}(S(a,b)), {\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))\) and \({\xi _{3,\dagger }^{\mathrm {+}}}(S(a,b))\), see Fig. 1. The results show that \({\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))\) strictly improves upon the bound \(\tau _+^{\mathrm {sos}}(S(a,b))\), and that \({\xi _{3,\dagger }^{\mathrm {+}}}(S(a,b))\) is again a strict improvement over \({\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))\).

Fig. 1

The colored region corresponds to \({{\,\mathrm{rank}\,}}_+(S(a,b)) = 4\). The top right region (black) corresponds to \({\xi _{1,\dagger }^{\mathrm {+}}}(S(a,b)) >3\), the two top right regions (black and red) together correspond to \(\tau _+^\mathrm {sos}(S(a,b)) > 3\), the three top right regions (black, red and yellow) to \({\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))>3\), and the four top right regions (black, red, yellow, and green) to \({\xi _{3,\dagger }^{\mathrm {+}}}(S(a,b))>3\) (Color figure online)

5 Lower Bounds on the Positive Semidefinite Rank

The positive semidefinite rank can be seen as an asymmetric version of the completely positive semidefinite rank. Hence, as in the previous section for the nonnegative rank, we need to select suitable factors in a minimal factorization so that we can bound their maximal eigenvalues and obtain a set of localizing polynomials that generates an Archimedean quadratic module.

For this we can follow, e.g., the approach in [52, Lemma 5] to rescale a factorization and claim that, for any \(A \in \mathbb {R}^{m\times n}_+\) with psd-rank\(_\mathbb {C}(A) = d\), there exists a factorization \(A =( \langle X_i, X_{m+j}\rangle )\) by matrices \(X_1, \ldots , X_{m+n} \in \mathrm {H}_{+}^d\) such that \(\sum _{i=1}^m X_i = I\) and \(\mathrm {Tr}(X_{m+j}) = \sum _i A_{ij}\) for all \(j\in [n]\). Indeed, starting from any factorization \(X_i,X_{m+j}\) in \(\mathrm {H}^d_+\) of A, we may replace \(X_i\) by \(X^{-1/2}X_iX^{-1/2}\) and \(X_{m+j}\) by \(X^{1/2}X_{m+j}X^{1/2}\), where \(X:=\sum _{i=1}^m X_i\) is positive definite (by minimality of d). This argument shows that the set of polynomials

$$\begin{aligned} S_A^{\mathrm {psd}}= \left\{ x_i - x_i^2 : i \in [m]\right\} \cup \left\{ \Big (\sum _{i=1}^m A_{ij}\Big ) x_{m+j} - x_{m+j}^2 : j \in [n] \right\} \end{aligned}$$

is localizing for A; that is, there is at least one minimal factorization \(\mathbf {X}\) of A such that \(g(\mathbf {X})\succeq 0\) for all polynomials \(g\in S_A^{\mathrm {psd}}\). Moreover, for the same minimal factorization \(\mathbf {X}\) of A, we have \(p(\mathbf {X}) (1-\sum _{i=1}^m X_i) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle \).
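The rescaling argument can be verified numerically. The sketch below (illustrative dimensions; real symmetric factors for simplicity, as a special case of Hermitian ones) starts from a random psd factorization, rescales it as above, and checks that the factorization is preserved, that \(\sum _{i=1}^m X_i = I\), and that \(\mathrm {Tr}(X_{m+j}) = \sum _i A_{ij}\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 3, 4, 2

def rand_psd():
    """A random d x d real symmetric psd matrix."""
    B = rng.standard_normal((d, d))
    return B @ B.T

# A random psd factorization A_ij = Tr(X_i Y_j).
Xs = [rand_psd() for _ in range(m)]
Ys = [rand_psd() for _ in range(n)]
A = np.array([[np.trace(X @ Y) for Y in Ys] for X in Xs])

# Rescale: X_i -> X^{-1/2} X_i X^{-1/2}, Y_j -> X^{1/2} Y_j X^{1/2},
# where X = sum_i X_i (positive definite here; in the theorem this is
# guaranteed by minimality of d).
X = sum(Xs)
w, U = np.linalg.eigh(X)
Xhalf = U @ np.diag(np.sqrt(w)) @ U.T
Xinvhalf = U @ np.diag(1 / np.sqrt(w)) @ U.T
Xs2 = [Xinvhalf @ Xi @ Xinvhalf for Xi in Xs]
Ys2 = [Xhalf @ Yj @ Xhalf for Yj in Ys]

# The factorization is preserved (by cyclicity of the trace),
# the X-factors sum to the identity, and Tr(Y_j) = sum_i A_ij.
A2 = np.array([[np.trace(X2 @ Y2) for Y2 in Ys2] for X2 in Xs2])
assert np.allclose(A, A2)
assert np.allclose(sum(Xs2), np.eye(d))
assert np.allclose([np.trace(Y2) for Y2 in Ys2], A.sum(axis=0))
```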

Given \(A\in \mathbb {R}^{m\times n}_{+}\), for each \(t \in \mathbb {N}\cup \{\infty \}\) we consider the semidefinite program

$$\begin{aligned} {\xi _{t}^{\mathrm {psd}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}\langle x_1,\ldots ,x_{m+n}\rangle _{2t}^*,\\&L(x_ix_{m+j}) = A_{ij} \quad \text {for} \quad i \in [m], j \in [n],\\&L \ge 0 \quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{\mathrm {psd}}), \\&L = 0 \quad \text {on} \quad {\mathscr {I}}_{2t}(1-\textstyle {\sum _{i=1}^m x_i}) \big \}. \end{aligned}$$

We additionally define \({\xi _{*}^{\mathrm {psd}}}(A)\) by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to the program defining \({\xi _{\infty }^{\mathrm {psd}}}(A)\) (taking an infimum instead of a minimum, since we do not know whether the infimum in \({\xi _{*}^{\mathrm {psd}}}(A)\) is attained). By the above discussion, it follows that the parameter \({\xi _{*}^{\mathrm {psd}}}(A)\) is a lower bound on psd-rank\(_\mathbb {C}(A)\) and we have

$$\begin{aligned} {\xi _{1}^{\mathrm {psd}}}(A)\le \ldots \le {\xi _{t}^{\mathrm {psd}}}(A)\le \ldots \le {\xi _{\infty }^{\mathrm {psd}}}(A)\le {\xi _{*}^{\mathrm {psd}}}(A)\le \hbox {psd-rank}_\mathbb {C}(A). \end{aligned}$$

Note that, in contrast to the previous bounds, the parameter \({\xi _{t}^{\mathrm {psd}}}(A)\) is not invariant under rescaling the rows of A or under taking the transpose of A (see Sect. 5.2.2).
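For intuition on the size of these programs: at level t, the linear form L is determined by its values on noncommutative words of length at most 2t, and the moment matrix \(M_t(L)\) is indexed by words of length at most t. A small sketch (the helper name is ours) enumerates this index set, without solving the semidefinite program:

```python
from itertools import product

def nc_words(nvars, degree):
    """All noncommutative words in x_1, ..., x_nvars of length <= degree."""
    words = [()]  # the empty word, representing the monomial 1
    for d in range(1, degree + 1):
        words += list(product(range(1, nvars + 1), repeat=d))
    return words

# For a 3x3 matrix A there are m + n = 6 variables: the level-1 moment
# matrix of xi_1^psd is indexed by the 7 words of length <= 1, while the
# linear form acts on the 43 words of length <= 2.
assert len(nc_words(6, 1)) == 1 + 6
assert len(nc_words(6, 2)) == 1 + 6 + 36
```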

It follows from the construction of \(S_A^{\mathrm {psd}}\) and Eq. (10) that the quadratic module \({{\mathscr {M}}}(S_A^{\mathrm {psd}})\) is Archimedean, and hence the following analog of Proposition 1 can be shown.

Proposition 16

Let \(A \in \mathbb {R}^{m \times n}_+\). For each \(t \in \mathbb {N}\cup \{\infty \}\), the optimum in \({\xi _{t}^{\mathrm {psd}}}(A)\) is attained, and we have

$$\begin{aligned} \lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {psd}}}(A) = {\xi _{\infty }^{\mathrm {psd}}}(A). \end{aligned}$$

Moreover, \({\xi _{\infty }^{\mathrm {psd}}}(A)\) is equal to the infimum over all \(\alpha \ge 0\) for which there exists a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}} (S_A^{\mathrm {psd}}) \cap {\mathscr {V}}_\mathscr {A}(1-\textstyle {\sum _{i=1}^m x_i})\) such that \(A = \alpha \cdot (\tau (X_iX_{m+j}))_{i\in [m],j\in [n]}\).

5.1 Comparison to Other Bounds

In [52], the following bound on the complex positive semidefinite rank was derived:

$$\begin{aligned} \hbox {psd-rank}_\mathbb {C}(A) \ge \sum _{i=1}^m \mathrm {max}_{j \in [n]} \frac{ A_{ij}}{\sum _i A_{ij}}. \end{aligned}$$

If a linear form L feasible for \({\xi _{t}^{\mathrm {psd}}}(A)\) satisfies the inequalities \(L(x_i( \sum _i A_{ij} - x_{m+j}))~\ge ~0\) for all \(i \in [m], j \in [n]\), then L(1) is at least the above lower bound. Indeed, the inequalities give

$$\begin{aligned} L(x_i) \ge \mathrm {max}_{j \in [n]} \, \frac{L(x_i x_{m+j})}{\sum _i A_{ij}} = \mathrm {max}_{j \in [n]} \, \frac{A_{ij}}{ \sum _i A_{ij}}, \end{aligned}$$

and hence

$$\begin{aligned} L(1) = \sum _{i = 1}^m L(x_i) \ge \sum _{i=1}^m \mathrm {max}_{j \in [n]} \frac{ A_{ij}}{\sum _i A_{ij}}. \end{aligned}$$

The inequalities \(L(x_i( \sum _i A_{ij} - x_{m+j})) \ge 0\) are easily seen to be valid for trace evaluations at points of \({{\mathscr {D}}}(S_A^{\mathrm {psd}})\). More importantly, as in Lemma 2, these inequalities are satisfied by feasible linear forms to the programs \({\xi _{\infty }^{\mathrm {psd}}}(A)\) and \({\xi _{*}^{\mathrm {psd}}}(A)\). Hence, \({\xi _{\infty }^{\mathrm {psd}}}(A)\) and \({\xi _{*}^{\mathrm {psd}}}(A)\) are at least as good as the lower bound (28).
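The bound (28) itself is cheap to evaluate. A minimal sketch (the function name is ours); on the identity matrix the bound evaluates to n, which matches the trivial upper bound \(\hbox {psd-rank}_\mathbb {C}(A) \le \min (m,n)\) and is therefore tight there:

```python
def psd_rank_lower_bound(A):
    """The bound (28): sum_i max_j A_ij / (sum_k A_kj), a lower bound on
    psd-rank_C(A); assumes every column of A has a nonzero entry."""
    m, n = len(A), len(A[0])
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    return sum(max(A[i][j] / col_sums[j] for j in range(n)) for i in range(m))

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert psd_rank_lower_bound(I3) == 3  # certifies psd-rank_C(I_3) = 3
```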

In [52], two other fidelity-based lower bounds on the psd-rank were defined; we do not know how they compare to \({\xi _{t}^{\mathrm {psd}}}(A)\).

5.2 Computational Examples

In this section, we apply our bounds to some (small) examples taken from the literature, namely \(3\times 3\) circulant matrices and slack matrices of small polygons.

5.2.1 Nonnegative Circulant Matrices of Size 3

We consider the nonnegative circulant matrices of size 3, which are, up to scaling, of the form

$$\begin{aligned} M(b,c) = \begin{pmatrix} 1 &{} b &{} c \\ c &{} 1 &{} b \\ b &{} c &{} 1 \end{pmatrix} \quad \text {with} \quad b,c \ge 0. \end{aligned}$$

If \(b=1=c\), then \({{\,\mathrm{rank}\,}}(M(b,c)) = \hbox {psd-rank}_\mathbb {R}(M(b,c)) = \hbox {psd-rank}_\mathbb {C}(M(b,c)) = 1\). Otherwise, we have \({{\,\mathrm{rank}\,}}(M(b,c))\ge 2\), which implies \(\hbox {psd-rank}_\mathbb {K}(M(b,c)) \ge 2\) for \(\mathbb {K}\in \{\mathbb {R},\mathbb {C}\}\). In [25, Example 2.7] it is shown that

$$\begin{aligned} \hbox {psd-rank}_\mathbb {R}(M(b,c)) \le 2 \quad \Longleftrightarrow \quad 1 +b^2 +c^2 \le 2(b + c + bc). \end{aligned}$$

Hence, if b and c do not satisfy the above relation, then \(\hbox {psd-rank}_\mathbb {R}(M(b,c))=3\).
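Combining the rank argument with the characterization from [25, Example 2.7] determines the real psd-rank of M(b, c) in all cases. A small sketch (the function name is ours):

```python
def psd_rank_R_circulant(b, c):
    """psd-rank_R of the circulant M(b,c) for b, c >= 0: it is 1 iff
    b = c = 1, and otherwise 2 or 3 according to [25, Example 2.7]."""
    if b == 1 == c:
        return 1
    return 2 if 1 + b**2 + c**2 <= 2 * (b + c + b * c) else 3

assert psd_rank_R_circulant(1, 1) == 1
assert psd_rank_R_circulant(1, 0.5) == 2  # 2.25 <= 4
assert psd_rank_R_circulant(4, 0) == 3    # 17 > 8
```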

Fig. 2

The colored region corresponds to the values (b, c) for which \(\hbox {psd-rank}_\mathbb {R}(M(b,c)) =3\); the outer region (yellow) shows the values of (b, c) for which \({\xi _{2}^{\mathrm {psd}}}(M(b,c))>2\) (Color figure online)

To see how good our lower bounds are for this example, we use a semidefinite programming solver to compute \({\xi _{2}^{\mathrm {psd}}}(M(b,c))\) for \((b,c) \in [0,4]^2\) (with stepsize 0.01). In Fig. 2, we see that the bound \({\xi _{2}^{\mathrm {psd}}}(M(b,c))\) certifies that \(\hbox {psd-rank}_\mathbb {R}(M(b,c)) =\hbox {psd-rank}_\mathbb {C}(M(b,c))=3\) for most values (b, c) where \(\hbox {psd-rank}_\mathbb {R}(M(b,c))=3\).

5.2.2 Polygons

Here, we consider the slack matrices of two polygons in the plane, for which the bounds are sharp (after rounding) and which illustrate the dependence on scaling the rows or taking the transpose. We consider the quadrilateral Q with vertices (0, 0), (0, 1), (1, 0), (2, 2), and the regular hexagon H, whose slack matrices are given by

$$\begin{aligned} S_Q = \begin{pmatrix} 0 &{}0 &{}2 &{}2 \\ 1 &{} 0 &{} 0&{} 3 \\ 0 &{} 1 &{} 3 &{} 0 \\ 2 &{}2 &{}0 &{} 0\end{pmatrix}, \qquad S_H = \begin{pmatrix} 0 &{} 1&{} 2&{} 2&{} 1&{} 0 \\ 0&{} 0&{} 1&{}2 &{}2&{} 1 \\ 1 &{} 0 &{} 0 &{} 1 &{} 2 &{} 2 \\ 2&{} 1&{} 0&{} 0&{} 1&{} 2\\ 2&{} 2&{} 1&{} 0&{} 0 &{}1 \\ 1&{} 2&{} 2&{} 1&{} 0 &{}0 \end{pmatrix}. \end{aligned}$$

Our lower bounds on \(\hbox {psd-rank}_\mathbb {C}\) are not invariant under taking the transpose; indeed, numerically we have \({\xi _{2}^{\mathrm {psd}}}(S_Q) \approx 2.266\) and \({\xi _{2}^{\mathrm {psd}}}(S_Q^\textsf {T}) \approx 2.5\). The slack matrix \(S_Q\) satisfies \(\hbox {psd-rank}_\mathbb {R}(S_Q) = 3\) (a corollary of [35, Theorem 4.3]), and therefore both bounds certify \(\hbox {psd-rank}_\mathbb {C}(S_Q) = 3 = \hbox {psd-rank}_\mathbb {R}(S_Q)\).

Secondly, our bounds are not invariant under rescaling the rows of a nonnegative matrix. Numerically we have \({\xi _{2}^{\mathrm {psd}}}(S_H) \approx 1.99\) while \({\xi _{2}^{\mathrm {psd}}}(DS_H) \approx 2.12\), where \(D = \mathrm {Diag}(2,2,1,1,1,1)\). The bound \({\xi _{2}^{\mathrm {psd}}}(DS_H)\) is in fact tight (after rounding) for the complex positive semidefinite rank of \(DS_H\) and hence of \(S_H\): in [33] it is shown that \(\hbox {psd-rank}_\mathbb {C}(S_H) = 3\).
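As a sanity check, \(S_Q\) can be reconstructed from a vertex and facet description of Q, and both slack matrices have ordinary rank \(\dim + 1 = 3\). In the sketch below, the facet description was derived by hand, with the ordering chosen to match the printed \(S_Q\):

```python
import numpy as np

# Vertices of Q and facet inequalities a_j^T x <= b_j (derived by hand):
# -x <= 0, -y <= 0, 2x - y <= 2, -x + 2y <= 2.
V = np.array([[0, 0], [1, 0], [0, 1], [2, 2]])
Afac = np.array([[-1, 0], [0, -1], [2, -1], [-1, 2]])
bfac = np.array([0, 0, 2, 2])

S_Q = bfac[None, :] - V @ Afac.T  # slack entry (i, j): b_j - a_j^T v_i
assert (S_Q == np.array([[0, 0, 2, 2],
                         [1, 0, 0, 3],
                         [0, 1, 3, 0],
                         [2, 2, 0, 0]])).all()

S_H = np.array([[0, 1, 2, 2, 1, 0],
                [0, 0, 1, 2, 2, 1],
                [1, 0, 0, 1, 2, 2],
                [2, 1, 0, 0, 1, 2],
                [2, 2, 1, 0, 0, 1],
                [1, 2, 2, 1, 0, 0]])

# Slack matrices of full-dimensional polygons have ordinary rank 3.
assert np.linalg.matrix_rank(S_Q) == 3
assert np.linalg.matrix_rank(S_H) == 3
```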

6 Discussion and Future Work

In this work, we provide a unified approach for the four matrix factorizations obtained by considering (a)symmetric factorizations by nonnegative vectors and positive semidefinite matrices. Our methods can be extended to the nonnegative tensor rank, which is defined as the smallest integer d for which a k-tensor \(A \in \mathbb {R}_+^{n_1 \times \cdots \times n_k}\) can be written as \(A = \sum _{l=1}^d u_{1,l}\otimes \cdots \otimes u_{k,l}\) for nonnegative vectors \(u_{j,l} \in \mathbb {R}_+^{n_j}\). The approach from Sect. 4 for \({{\,\mathrm{rank}\,}}_+\) can be extended to obtain a hierarchy of lower bounds on the nonnegative tensor rank. For instance, if A is a 3-tensor, the analogous bound \({\xi _{t}^{\mathrm {+}}}(A)\) is obtained by minimizing L(1) over \(L\in \mathbb {R}[x_{1},\ldots ,x_{n_1+n_2+n_3}]^*\) such that \(L(x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3})=A_{i_1i_2i_3}\) (for \(i_1\in [n_1],i_2\in [n_2],i_3\in [n_3]\)), using as localizing polynomials in \(S_A^+\) the polynomials \(\root 3 \of {A_\mathrm {max}}x_i-x_i^2\) and \(A_{i_1i_2i_3}- x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3}\). As in the matrix case one can compare to the bounds \(\tau _+(A)\) and \(\tau _+^{\mathrm {sos}}(A)\) from [27]. One can show \({\xi _{*}^{\mathrm {+}}}(A)=\tau _+(A)\), and one can show \({\xi _{3,\dagger }^{\mathrm {+}}}(A) \ge \tau _+^{\mathrm {sos}}(A)\) after adding the conditions \(L(x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3}(A_{i_1i_2i_3}- x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3}))\ge 0\) to \({\xi _{3}^{\mathrm {+}}}(A)\).
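The tensor decomposition in the definition above can be illustrated directly. The sketch below (all names are illustrative) builds a small nonnegative 3-tensor of nonnegative tensor rank at most 2 from its factors:

```python
from itertools import product

def tensor_from_factors(us, vs, ws):
    """The 3-tensor sum_l u_l (x) v_l (x) w_l, as a nested list."""
    n1, n2, n3 = len(us[0]), len(vs[0]), len(ws[0])
    T = [[[0.0] * n3 for _ in range(n2)] for _ in range(n1)]
    for u, v, w in zip(us, vs, ws):
        for i, j, k in product(range(n1), range(n2), range(n3)):
            T[i][j][k] += u[i] * v[j] * w[k]
    return T

# A 2x2x2 nonnegative tensor with nonnegative tensor rank at most 2.
T = tensor_from_factors([[1.0, 0.0], [0.0, 1.0]],
                        [[1.0, 1.0], [2.0, 0.0]],
                        [[1.0, 2.0], [1.0, 1.0]])
assert T[0][0][0] == 1.0  # only the l = 1 term contributes: 1 * 1 * 1
# Nonnegative factors give a nonnegative tensor.
assert min(x for M in T for row in M for x in row) >= 0.0
```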

Testing membership in the completely positive cone and the completely positive semidefinite cone is another important problem, to which our hierarchies can also be applied. It follows from the proof of Proposition 8 that if A is not completely positive then, for some order t, the program \({\xi _{t}^{\mathrm {cp}}}(A)\) is infeasible or its optimum value is larger than the Carathéodory bound on the cp-rank (which is similar to an earlier result in [58]). In the noncommutative setting, the situation is more complicated: If \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is feasible, then \(A\in \mathrm {CS}_{+}\), and if \(A\not \in \mathrm {CS}_{+,\mathrm {vN}}^n\), then \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) is infeasible (Propositions 1 and 2). Here, \(\mathrm {CS}_{+,\mathrm {vN}}^n\) is the cone defined in [17] consisting of the matrices admitting a factorization in a von Neumann algebra with a trace. By Lemma 12, \(\mathrm {CS}_{+,\mathrm {vN}}^n\) can equivalently be characterized as the set of matrices of the form \(\alpha \, (\tau (a_ia_j))\) for some \(C^*\)-algebra \({\mathscr {A}}\) with tracial state \(\tau \), positive elements \(a_1,\ldots ,a_n\in {\mathscr {A}}\) and \(\alpha \in \mathbb {R}_+\).

Our lower bounds are on the complex version of the (completely) positive semidefinite rank. As far as we are aware, the existing lower bounds (except for the dimension-counting rank lower bound) are also on the complex (completely) positive semidefinite rank. It would be interesting to find a lower bound on the real (completely) positive semidefinite rank that can go beyond the complex (completely) positive semidefinite rank.

We conclude with some open questions regarding applications of lower bounds on matrix factorization ranks. First, as was shown in [38, 62, 63], completely positive semidefinite matrices whose \(\hbox {cpsd-rank}_\mathbb {C}\) is larger than their size do exist, but currently we do not know how to construct small examples for which this holds. Hence, a concrete question: Does there exist a \(5 \times 5\) completely positive semidefinite matrix whose \(\hbox {cpsd-rank}_\mathbb {C}\) is at least 6? Second, as we mentioned before, the asymmetric setting corresponds to (semidefinite) extension complexity of polytopes. Rothvoß’ result [66] (indirectly) shows that the parameter \({\xi _{\infty }^{\mathrm {+}}}\) is exponential (in the number of nodes of the graph) for the slack matrix of the matching polytope. Can this result also be shown directly using the dual formulation of \({\xi _{\infty }^{\mathrm {+}}}\), that is, by a sum-of-squares certificate? If so, could one extend the argument to the noncommutative setting (which would show a lower bound on the semidefinite extension complexity)?