Abstract
We study symmetric nonnegative forms and their relationship with symmetric sums of squares. For a fixed number of variables n and degree 2d, symmetric nonnegative forms and symmetric sums of squares form closed, convex cones in the vector space of n-variate symmetric forms of degree 2d. Using representation theory of the symmetric group we characterize both cones in a uniform way. Further, we investigate the asymptotic behavior when the degree 2d is fixed and the number of variables n grows. Here, we show that, in sharp contrast to the general case, the difference between symmetric nonnegative forms and sums of squares does not grow arbitrarily large for any fixed degree 2d. We consider the case of symmetric quartic forms in more detail and give a complete characterization of quartic symmetric sums of squares. Furthermore, we show that in degree 4 the cones of nonnegative symmetric forms and symmetric sums of squares approach the same limit, thus these two cones asymptotically become closer as the number of variables grows. We conjecture that this is true in arbitrary degree 2d.
Introduction
Throughout the paper let \({\mathbb {R}}[X_1,\ldots , X_n]\) denote the ring of polynomials in n real variables and \(H_{n,k}\) the set of homogeneous polynomials (forms) of degree k in \({\mathbb {R}}[X_1,\ldots , X_n]\). Certifying that a form \(f\in H_{n,2d}\) assumes only nonnegative values is one of the fundamental questions of real algebra. One such possible certificate is a decomposition of f as a sum of squares, i.e., one finds forms \(p_1,\ldots ,p_m\in H_{n,d}\) such that \(f=p_1^2+\dots +p_m^2\). In 1888 Hilbert [16] gave a beautiful proof showing that in general not all nonnegative forms can be written as a sum of squares. In fact, he showed that the sum of squares property only characterizes nonnegativity in the cases of binary forms, of quadratic forms, and of ternary quartics. In all other cases there exist forms that are nonnegative but do not allow a decomposition as a sum of squares. Despite its elegance, Hilbert’s proof was not constructive. A constructive approach to Hilbert’s proof appeared in an article by Terpstra [37] in 1939, but the first explicit example was found by Motzkin in 1965 [22] and an explicit example based on Hilbert’s method was constructed by Robinson in 1969 [29]. We refer the interested reader to [24, 33] for more background on this topic.
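Motzkin's example is the form \(m(x,y,z)=x^4y^2+x^2y^4+z^6-3x^2y^2z^2\), which is nonnegative by the arithmetic mean–geometric mean inequality yet admits no sum of squares decomposition. A short numerical sanity check of the nonnegativity (an illustrative sketch only; it samples a grid and of course does not verify the sum of squares obstruction):

```python
from itertools import product

def motzkin(x, y, z):
    # Motzkin's form: nonnegative by AM-GM applied to the three
    # terms x^4 y^2, x^2 y^4, z^6, but not a sum of squares.
    return x**4 * y**2 + x**2 * y**4 + z**6 - 3 * x**2 * y**2 * z**2

# Sample a grid in [-2, 2]^3; the minimum value 0 is attained whenever
# |x| = |y| = |z|, e.g., at (1, 1, 1).
vals = [motzkin(x, y, z)
        for x, y, z in product([i / 4 for i in range(-8, 9)], repeat=3)]
assert min(vals) >= 0
assert motzkin(1, 1, 1) == 0
```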
The sum of squares decomposition of nonnegative polynomials has been the cornerstone of recent developments in polynomial optimization. Following ideas of Lasserre and Parrilo, polynomial optimization problems, i.e., the task of finding \(f^{*}=\min f(x)\) for a polynomial f, can be relaxed to semidefinite optimization problems. If \(f-f^{*}\) can be written as a sum of squares, these semidefinite relaxations are in fact exact. Hence a better understanding of the difference between sums of squares and nonnegative polynomials is highly desirable.
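The link to semidefinite optimization rests on the fact that a form f is a sum of squares exactly when \(f = z^T Q z\) for some positive semidefinite Gram matrix Q, where z is a vector of monomials. A hand-picked toy instance of this correspondence (illustrative only; the polynomial and matrix are chosen by hand, no SDP solver is involved):

```python
import random

# f(x, y) = x^4 + 2 x^2 y^2 + y^4 together with a Gram matrix Q for it:
# with the monomial vector z = (x^2, xy, y^2) we have f = z^T Q z.
Q = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]

def f(x, y):
    return x**4 + 2 * x**2 * y**2 + y**4

def gram_eval(Q, x, y):
    z = [x * x, x * y, y * y]
    return sum(Q[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

# Q is diagonal with nonnegative entries, hence positive semidefinite;
# reading off its square root gives the explicit decomposition
# f = (x^2)^2 + (sqrt(2) * xy)^2 + (y^2)^2.
random.seed(0)
for _ in range(100):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    assert abs(f(x, y) - gram_eval(Q, x, y)) < 1e-9
```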
We study the case of forms in n variables of degree 2d that are symmetric, i.e., invariant under the action of the symmetric group \({\mathcal {S}}_n\) that permutes the variables. Let \({\mathbb {R}}[X_1,\ldots , X_n]^S\) denote the ring of symmetric polynomials and \(H^S_{n,2d}\) denote the real vector space of symmetric forms of degree 2d in n variables. Let \(\Sigma _{n,2d}^S\) be the cone of forms in \(H^S_{n,2d}\) that can be decomposed as sums of squares and \({\mathcal {P}}^S_{n,2d}\) be the cone of nonnegative symmetric forms. Choi and Lam [7] showed that the following symmetric form of degree 4 in 4 variables is nonnegative but cannot be written as a sum of squares:
Thus one can conclude that \(\Sigma _{4,4}^S\ne {\mathcal {P}}^S_{4,4}\) and therefore even in the case of symmetric polynomials the sum of squares property already fails to characterize nonnegativity in the first case covered by Hilbert’s classical result. These results have been recently extended by Goel et al. [13] into a full characterization of equality cases between \(\Sigma _{n,2d}^S\) and \({\mathcal {P}}_{n,2d}^S\). Unfortunately, there are no other interesting cases of equality beyond those covered by Hilbert’s Theorem.
The case of even symmetric forms has also received some attention. Choi et al. [8] fully described the cones of even symmetric sextics in any number of variables, and showed that under some normalization these cones have the same limit as the number of variables grows. Harris [15] showed that even symmetric ternary octics are nonnegative if and only if they are sums of squares, providing a new interesting case of equality between nonnegative polynomials and sums of squares. Goel et al. [14] showed that there are no other interesting cases of equality beyond Harris’ and Hilbert’s results for even symmetric forms.
In addition to the qualitative statement of Hilbert’s characterization, a quantitative understanding of the gap between sums of squares and nonnegative forms has been studied by several authors. In particular, in [3] the first author added to the work of Hilbert by showing that the gap between sums of squares and nonnegative forms of fixed degree grows infinitely large with the number of variables if the degree is at least 4. This result has been recently refined by Ergur to the multihomogeneous case [10]. In this article we study the relationship between symmetric sums of squares and symmetric nonnegative forms. In particular, we are interested in the asymptotic behavior of the cones, which we can interpret, for example, via symmetric mean inequalities naturally associated to a symmetric polynomial. The study of such symmetric inequalities has a long history (see for example [9]) and it is an interesting question to ask when one can use sum of squares certificates to verify such an inequality. For instance, Hurwitz [17] showed that a sum of squares decomposition can be used to verify the arithmetic mean–geometric mean inequality. Recently, Frenkel and Horváth [11] studied the connection of Minkowski’s inequality to sums of squares. Our results imply that a positive fraction of such inequalities come from symmetric polynomials that are sums of squares. Furthermore, in degree 4 we show that a family of symmetric power mean inequalities is valid for all n if and only if each member can be written as a sum of squares. We conjecture that this holds for all degrees.
Overview and Main Results
Symmetric Sums of Squares
Symmetric polynomials are classical objects in algebra. In order to represent symmetric polynomials, we will make use of the power sum polynomials.
Definition 2.1
For \(i\in {\mathbb {N}}\) define
to be the ith power sum polynomial. We will also work with the power means:
It is known (for example [20, 2.11]) that \({\mathbb {R}}[X_1,\ldots , X_n]^S\) is freely generated by the algebraically independent polynomials \(P_1^{(n)},\ldots ,P_n^{(n)}\). Hence it follows that every symmetric polynomial \(f\in {\mathbb {R}}[X_1,\ldots , X_n]^S\) of degree \(2d\le n\) can uniquely be written as
for some polynomial \(g\in {\mathbb {R}}[z_1,\ldots ,z_{2d}]\), with \(\deg _w g=\deg f\), where \(\deg _w\) denotes the weighted degree corresponding to the weight \((1,2,\ldots ,2d)\). Recall that for a natural number k a partition \(\lambda \) of k (written \(\lambda \vdash k\)) is a sequence of weakly decreasing positive integers \(\lambda =(\lambda _1,\lambda _2,\ldots ,\lambda _l)\) with \(\sum _{i=1}^l\lambda _i=k\). For \(n\ge k\) and a partition \(\lambda =(\lambda _1,\ldots ,\lambda _l)\vdash k\) we associate the polynomials
It now follows that for every \(n\ge k\) the families of polynomials \(\{P_\lambda :\lambda \vdash k\}\) as well as \(\bigl \{p_\lambda ^{(n)}:\lambda \vdash k\bigr \}\) form a basis of \(H_{n,k}^S\). In particular, if \(n\ge k\) then the dimension of \(H_{n,k}^S\) is equal to \(\pi (k)\), the number of partitions of k. Thus the dimension of \(H_{n,k}^S\) is constant for fixed k and all sufficiently large n.
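As a concrete sketch, the partitions of k and the count \(\pi (k)\) can be enumerated directly; for instance \(\pi (4)=5\), matching the dimension of \(H^S_{n,4}\) for \(n\ge 4\) (illustrative code, not part of the text above):

```python
def partitions(k, max_part=None):
    """Yield the partitions of k as weakly decreasing tuples."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

# pi(k) = number of partitions of k = dim H^S_{n,k} for n >= k.
pi = lambda k: sum(1 for _ in partitions(k))
assert list(partitions(4)) == [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
assert pi(4) == 5 and pi(6) == 11
```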
Using representation theory of the symmetric group, and in particular so-called higher Specht polynomials, we are able to give a uniform representation of the cone of symmetric sums of squares of fixed degree 2d in terms of matrix polynomials, with coefficients that are rational functions in n (see Theorem 4.15), and similarly a uniform representation of the sequence of dual cones in terms of linear matrix polynomials whose coefficients are “symmetrizations” of sums of squares in 2d variables. This gives us in particular a better understanding of the faces of \(\Sigma ^S_{n,2d}\) that are not faces of \({\mathcal {P}}^S_{n,2d}\). We make these findings more concrete in the case of quartic symmetric forms, where we completely characterize the cone \(\Sigma _{n,4}^S\) and its boundary. This in particular allows us to easily compute a family of symmetric sum of squares polynomials that lie on the boundary of \(\Sigma _{n,4}^S\) without having a real zero, thus certifying the difference between symmetric sums of squares and symmetric nonnegative forms (see Theorem 5.5).
Asymptotic Behavior of Sums of Squares and Non-Negative Forms
Our characterization allows us to study the asymptotic relationship between symmetric sums of squares and symmetric nonnegative forms of fixed degree in a growing number of variables. Even though the vector spaces \(H^{S}_{n,2d}\) have the same dimension \(\pi (2d)\) for all \(n\ge 2d\), there is no canonical way to identify the vector spaces \(H^{S}_{n,2d}\) for different n. In fact there are several natural ways to define transition maps identifying vector spaces of symmetric forms in different numbers of variables (see for example [2]), and different transition maps will lead to different limits as n goes to infinity. The system of vector spaces \(H^S_{n,2d}\) together with transition maps defines a directed system of vector spaces, and we can define the direct limit \(H^S_{\infty ,2d}\) of the vector spaces \(H_{n,2d}^S\) [30, Sect. 7.6]. One way of defining these transitions is by symmetrization:
Definition 2.2
For \(f\in {\mathbb {R}}[X]\) we define the symmetrization of f as
The composition of the natural inclusion \(i_{n,n+1}:H_{n,2d} \rightarrow H_{n+1,2d}\) with \({{\,\mathrm{sym}\,}}_{n+1}\) defines injective maps \(\varphi _{n,n+1}:H_{n,2d}^S\rightarrow H_{n+1,2d}^S\). Therefore, we have the following.
Proposition 2.3
For \(n,m\in {\mathbb {N}}\) with \(n>m\) consider the maps \(\varphi _{m,n}:H_{m,2d}^S\rightarrow H_{n,2d}^S\) defined by
Then the system of vector spaces \(H^S_{n,2d}\) together with the maps \(\varphi _{m,n}\) defines a directed system and for \(m\ge 2d\) the maps \(\varphi _{m,n}\) are isomorphisms.
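The symmetrization underlying these transition maps can be sketched concretely. Below, polynomials are represented as dictionaries mapping exponent tuples to rational coefficients (a hypothetical encoding chosen for illustration), and \({{\,\mathrm{sym}\,}}_n f\) is the average of f over all permutations of the variables:

```python
from fractions import Fraction
from itertools import permutations

# Polynomials in n variables as dicts {exponent tuple: coefficient}.
def sym(f, n):
    """Symmetrization: average of f over all permutations of the variables."""
    out = {}
    perms = list(permutations(range(n)))
    for expo, c in f.items():
        for sigma in perms:
            e = tuple(expo[sigma[i]] for i in range(n))
            out[e] = out.get(e, Fraction(0)) + c / len(perms)
    return {e: c for e, c in out.items() if c}

# sym_3(X_1^2) = (X_1^2 + X_2^2 + X_3^2)/3.
f = {(2, 0, 0): Fraction(1)}
expected = {(2, 0, 0): Fraction(1, 3), (0, 2, 0): Fraction(1, 3),
            (0, 0, 2): Fraction(1, 3)}
assert sym(f, 3) == expected
# Idempotence: symmetric polynomials are fixed by sym.
assert sym(sym(f, 3), 3) == sym(f, 3)
```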
We consider the direct limit \(H_{\infty ,2d}^{\varphi }\) of the directed system above. Since the maps \(\varphi _{m,n}\) are isomorphisms for \(m \ge 2d\), it follows that \(H_{\infty ,2d}^{\varphi }\) is also a real vector space of dimension \(\pi (2d)\). Therefore we have natural isomorphisms \(\varphi _n:H_{\infty ,2d}^\varphi \rightarrow H_{n,2d}^S\) for \(n \ge 2d\), which allow us to view the cones \(\Sigma _{n,2d}^S\) and \({\mathcal {P}}_{n,2d}^S\) as subsets of \(H_{\infty ,2d}^\varphi \). Note that we have \(\varphi _{m,n}(\Sigma _{m,2d}^S)\subseteq \Sigma _{n,2d}^S\) and \(\varphi _{m,n}({\mathcal {P}}^S_{m,2d})\subseteq {\mathcal {P}}^S_{n,2d}\). It follows that with transition maps \(\varphi _{m,n}\) the cones of sums of squares and the cones of nonnegative polynomials form nested increasing sequences in \(H_{\infty ,2d}^\varphi \). We define the following cones of nonnegative elements and sums of squares in \(H_{\infty ,2d}^{\varphi }\):
The following theorem is immediate from the above discussion.
Theorem 2.4
The cones \({\mathcal {P}}_{\infty ,2d}^{\varphi }\) and \(\Sigma _{\infty ,2d}^{\varphi }\) are full-dimensional convex cones in \(H_{\infty ,2d}^\varphi \simeq {\mathbb {R}}^{\pi (2d)}\).
Sums of squares of fixed degree make up a vanishingly small portion of nonnegative forms as the number of variables grows [3]. More precisely, (non-symmetric) nonnegative forms and sums of squares in n variables of degree 2d with average 1 on the unit sphere form compact convex sets \({\bar{P}}_{n,2d}\) and \({\bar{\Sigma }}_{n,2d}\) of dimension \(D=\left( {\begin{array}{c}n+2d-1\\ 2d\end{array}}\right) -1\). It was shown in [3] that the ratio of volumes
converges to 0 for all \(2d\ge 4\) as n goes to infinity. The ratio of volumes is raised to the power 1/D to take into account the effects of large dimension on volumes, as the volume of \((1{+}\varepsilon ){\bar{\Sigma }}_{n,2d}\) is equal to \((1{+}\varepsilon )^D{{\,\mathrm{vol}\,}}{\bar{\Sigma }}_{n,2d}\).
By contrast, the cones of symmetric nonnegative forms and sums of squares of fixed degree live in the vector space \(H^S_{n,2d}\) which has fixed dimension \(\pi (2d)\) for a sufficiently large number of variables n. Therefore, to prove that asymptotically symmetric sums of squares make up a nontrivial portion of symmetric nonnegative forms (with respect to some transition maps) it suffices to show that both limits are full-dimensional in \(H_{\infty ,2d}^\varphi \simeq {\mathbb {R}}^{\pi (2d)}\), which is done in Theorem 2.4.
Besides the direct limit we also study symmetric power mean inequalities. We can express a symmetric form f in \(H^S_{n,2d}\) in the power mean basis \(p_{\lambda }^{(n)}\) with \(\lambda \vdash 2d\):
Using the power mean basis we can define transition maps \(\rho _{m,n}\) by identifying
As before the system of vector spaces \(H^S_{n,2d}\) together with the maps \(\rho _{m,n}\) defines a directed system, and for \(m\ge 2d\) the maps \(\rho _{m,n}\) are isomorphisms. We consider the direct limit \(H_{\infty ,2d}^{\rho }\). Since the maps \(\rho _{m,n}\) are isomorphisms for \(m \ge 2d\), it follows that \(H_{\infty ,2d}^{\rho }\) is again a real vector space of dimension \(\pi (2d)\). The natural isomorphisms \(\rho _n:H_{\infty ,2d}^\rho \rightarrow H_{n,2d}^S\) for \(n \ge 2d\) allow us to view the cones \(\Sigma _{n,2d}^S\) and \({\mathcal {P}}_{n,2d}^S\) as subsets of \(H_{\infty ,2d}^\rho \). We will denote these images by \(\Sigma _{n,2d}^\rho \) and \({\mathcal {P}}_{n,2d}^\rho \) and consider the limit cones:
Definition 2.5
The sequences \({\mathcal {P}}^{\rho }_{n,2d}\) and \(\Sigma _{n,2d}^{\rho }\) are not nested in general. Let \(x=(x_1,\dots ,x_n)\) be a point in \({\mathbb {R}}^n\) and let \({\tilde{x}}\) be the point in \({\mathbb {R}}^{k \cdot n}\) with each coordinate \(x_i\) repeated k times. Then
It follows that \(f^{(k\cdot n)}\in {\mathcal {P}}^{\rho }_{k\cdot n,2d}\) implies \(f^{(n)}\in {\mathcal {P}}^{\rho }_{n,2d}\) and hence we get the following.
Proposition 2.6
Consider the cones \({\mathcal {P}}^{\rho }_{n,2d}\) as convex subsets of \({\mathbb {R}}^{\pi (2d)}\) using the coefficients \(c_{\lambda }\) of \(p_{\lambda }^{(n)}\). Then for every \(n\ge 2d\) and \(k\in {\mathbb {N}}\) we have
Remark 2.7
We note that the same proof also yields that \(\Sigma ^{\rho }_{k \cdot n,2d}\subseteq \Sigma ^{\rho }_{n,2d}\).
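The duplication argument behind Proposition 2.6 and Remark 2.7 can be checked directly: power means do not change when every coordinate of a point is repeated k times. A sketch with exact rational arithmetic (illustrative only):

```python
from fractions import Fraction

def power_mean(x, i):
    """p_i(x) = (x_1^i + ... + x_n^i) / n."""
    return sum(Fraction(t) ** i for t in x) / len(x)

def duplicate(x, k):
    """The point in R^{k*n} with every coordinate of x repeated k times."""
    return [t for t in x for _ in range(k)]

# Power means are invariant under duplicating coordinates, which is the
# key observation behind Proposition 2.6.
x = [3, -1, 2, 5]
for i in range(1, 7):
    for k in (2, 3, 5):
        assert power_mean(duplicate(x, k), i) == power_mean(x, i)
```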
It is not directly clear from Proposition 2.6 that the sequences \({\mathcal {P}}^{\rho }_{n,2d}\) and \(\Sigma ^{\rho }_{n,2d}\) have limits, which we show separately:
Theorem 2.8
(a) The cones \({\mathfrak {S}}_{2d}\) and \({\mathfrak {P}}_{2d}\) are full-dimensional cones.
Although the cone of symmetric nonnegative quartics is strictly bigger than the cone of symmetric quartic sums of squares for any number of variables \(n\ge 4\), we show that in the limit the two cones coincide:
Theorem 2.9
\({\mathfrak {P}}_{4}={\mathfrak {S}}_{4}\).
In particular, this result applies in the situation of power mean inequalities studied in [23], and hence it is possible to verify any such inequality using sums of squares. We conjecture that this happens in arbitrary degree 2d, i.e., we suggest the following.
Conjecture 1
\({\mathfrak {P}}_{2d}={\mathfrak {S}}_{2d}\) for all \(d \in {\mathbb {N}}\).
Structure of the Article and Guide for the Reader
This article is structured as follows: We provide a characterization of symmetric nonnegative forms and the limit cone in Sect. 3. Section 4 provides a detailed study of symmetric sums of squares. To this end we present the general framework of how to use representation theory to study invariant sums of squares in Sect. 4.1. In Sect. 4.2 we outline the basic notions of the representation theory of the symmetric group. These results are then used in Sect. 4.3 to represent the cone of symmetric sums of squares (without restrictions on the degree) in terms of matrix polynomials in Theorems 4.11 and 4.12. The subsequent Sect. 4.4 then discusses how restricting the degree allows for a uniform description of the cones \(\Sigma _{n,2d}^S\) in terms of the power mean bases \(p_\lambda ^{(n)}\) (Theorem 4.15). The final subsection of Sect. 4 discusses some results on the dual cone which are needed in the sequel. The subsequent Sect. 5 makes these results more concrete as we give a description of the cone of symmetric quartic sums of squares (Theorem 5.1). Furthermore, we describe the elements of the boundary of \(\Sigma _{n,4}^S\) that are strictly positive in Theorem 5.3 and give an explicit example of such a polynomial for every \(n\ge 4\) in Example 5.4. From this example it follows in particular that, besides the cases where Hilbert showed the equality of sums of squares and nonnegative forms, there always exist symmetric positive definite forms which are not sums of squares (see Theorem 5.5). In Sect. 6 we explore the two notions of limits and prove Theorem 2.8. We also discuss the connection with the power mean inequalities. These power mean inequalities are then again studied in more detail in the final Sect. 7, where we show in particular that all valid power mean inequalities of degree 4 are sums of squares (Theorem 2.9).
The order of sections was chosen to present the more general statements in Sects. 3, 4, and 6 and then apply them in the quartic case in Sects. 5 and 7. Depending on the reader’s preferences, one can also read Sect. 5 before diving into Sect. 4, and similarly Sect. 7 before Sect. 6, while taking the necessary results from previous sections for granted.
Symmetric PSD Forms
We begin by characterizing the cone \({\mathfrak {P}}_{2d}\). One key result needed to describe the nonnegative symmetric forms is the so-called half-degree principle (see [26, 27, 38]): For a natural number \(k\in {\mathbb {N}}\) we define \(A_k\) to be the set of all points in \({\mathbb {R}}^n\) with at most k distinct components, i.e.,
The half-degree principle says that a symmetric form of degree \(2d> 2\) is nonnegative if and only if it is nonnegative on \(A_d\):
Proposition 3.1
(Half-degree principle) Let \(f\in H^S_{n,2d}\) and set \(k:=\max \{2,d\}\). Then f is nonnegative if and only if
Remark 3.2
By considering \(f-\epsilon (X_1^2+\dots +X_n^2)^d\) for a sufficiently small \(\epsilon >0\) we see that we can also replace nonnegative by positive in the above proposition, thus characterizing strict positivity of symmetric forms.
A nonincreasing sequence of k natural numbers \(\vartheta :=(\vartheta _1,\ldots , \vartheta _k)\) such that \(\vartheta _1+\dots + \vartheta _k=n\) is called a k-partition of n (written \(\vartheta \vdash _k n\)). Given a symmetric form \(f\in H^S_{n,2d}\) and \(\vartheta \) a k-partition of n we define \(f^{\vartheta }\in {\mathbb {R}}[t_1,\ldots , t_k]\) via
From now on assume that \(2d>2\). Then the half-degree principle implies that nonnegativity of \(f=\sum _{\lambda \vdash 2d}c_{\lambda }p_{\lambda }\) is equivalent to nonnegativity of
for all \(\vartheta \vdash _d n\), since the polynomials \(f^{\vartheta }\) give the values of f at all points with at most d distinct components. We note that for all \(i\in {\mathbb {N}}\) we have
For a partition \(\lambda =(\lambda _1,\dots ,\lambda _l) \vdash 2d\) we define a 2d-variate form \(\Phi _{\lambda }\) in the variables \(s_1,\dots ,s_d\) and \(t_1,\dots , t_d\) by
and use it to associate to any form \(f \in H^S_{n,2d}\), \(f=\sum _{\lambda \vdash 2d}c_{\lambda }p_{\lambda }\), the form
Note that
We define the set
It follows from the arguments above that \(f \in H^S_{n,2d}\) is nonnegative if and only if the forms \(\Phi _{f}(w,t)\) are nonnegative forms in t for all \(w \in W_n\). This is summarized in the following corollary.
Corollary 3.3
Let \(f=\sum _{\lambda \vdash 2d }c_{\lambda }p_{\lambda }\) be a form in \(H^S_{n,2d}\). Then f is nonnegative (positive) if and only if for all \(w\in W_n\) the d-variate forms \(\Phi _f(w,t)\) are nonnegative (positive).
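To make the restriction to points with few distinct values concrete: on a point carrying \(\vartheta _j\) copies of \(t_j\) (with \(\vartheta \vdash _k n\)) each power mean collapses to a polynomial in \(t_1,\ldots ,t_k\) alone, namely \(\sum _j (\vartheta _j/n)\,t_j^i\). A sketch verifying this identity with exact arithmetic (illustrative only):

```python
from fractions import Fraction

def power_mean(point, i):
    """p_i(x) = (x_1^i + ... + x_n^i) / n."""
    return sum(Fraction(t) ** i for t in point) / len(point)

def theta_point(theta, t):
    """The point in R^n (n = sum(theta)) with theta_j copies of t_j."""
    return [tj for th, tj in zip(theta, t) for _ in range(th)]

def restricted_power_mean(theta, t, i):
    # On points with theta_j copies of t_j the power mean collapses to
    # sum_j (theta_j / n) * t_j^i, a polynomial in t_1, ..., t_k only.
    n = sum(theta)
    return sum(Fraction(th, n) * Fraction(tj) ** i for th, tj in zip(theta, t))

theta = (4, 2, 1)          # a 3-partition of n = 7
t = (2, -1, 3)
for i in range(1, 6):
    assert power_mean(theta_point(theta, t), i) == restricted_power_mean(theta, t, i)
```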
This result enables us to characterize the elements of \({\mathfrak {P}}_{2d}\). We expand the sets \(W_n\) to the standard simplex \(\Delta \) in \({\mathbb {R}}^d\):
Then we have the following theorem characterizing \({\mathfrak {P}}_{2d}\).
Theorem 3.4
Let \({\mathfrak {f}}\in H^{\rho }_{\infty ,2d}\) be the sequence defined by \(f^{(n)}=\sum _{\lambda \vdash 2d}c_\lambda p_\lambda ^{(n)}\). Then \({\mathfrak {f}}\in {\mathfrak {P}}_{2d}\) if and only if the 2d-variate polynomial \(\Phi _{f}(s,t)\) is nonnegative on \(\Delta \times {\mathbb {R}}^d\).
Proof
Suppose that \(\Phi _{f}(s,t)\) is nonnegative on \(\Delta \times {\mathbb {R}}^d\). Let \(f^{(n)}=\sum c_{\lambda }p^{(n)}_{\lambda }\). Since \(W_{n}\subset \Delta \) for all n, we see from Corollary 3.3 that \(f^{(n)}\) is a nonnegative form for all n and thus \({\mathfrak {f}} \in {\mathfrak {P}}_{2d}\).
On the other hand, suppose there exists \(\alpha _0 \in \Delta \) such that \(\Phi _{f}(\alpha _0,t)<0\) for some \(t\in {\mathbb {R}}^d\). Then we can find a rational point \(\alpha \in \Delta \) with all positive coordinates and sufficiently close to \(\alpha _0\) so that \(\Phi _{f}(\alpha ,t) < 0\). Let h be the least common multiple of the denominators of \(\alpha \). Then we have \(\alpha \in W_{ah}\) for all \(a\in {\mathbb {N}}\). Choose a such that \(ah \ge 2d\). Then \(f^{(ah)}\) is negative at the corresponding point and we have \({\mathfrak {f}} \notin {\mathfrak {P}}_{2d}\). \(\square \)
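The last step of the proof can be illustrated numerically: a rational point \(\alpha \) in the simplex with positive coordinates lies in \(W_n\) (up to reordering its parts) whenever n is a positive multiple of the least common multiple h of its denominators, assuming, as in the discussion above, that \(W_n\) collects the frequency vectors \(\vartheta /n\) with \(\vartheta \vdash _d n\). A sketch:

```python
from fractions import Fraction
from math import lcm

# A rational point in the simplex with positive coordinates.
alpha = (Fraction(1, 6), Fraction(1, 3), Fraction(1, 2))
assert sum(alpha) == 1 and all(a > 0 for a in alpha)

h = lcm(*(a.denominator for a in alpha))
for a_mult in (1, 2, 7):       # any positive multiple of h works
    n = a_mult * h
    theta = [a * n for a in alpha]
    # Each theta_j is a positive integer and, after reordering into
    # weakly decreasing order, theta is a d-partition of n; hence
    # alpha = theta / n lies in W_n.
    assert all(th.denominator == 1 and th > 0 for th in theta)
    assert sum(theta) == n
```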
Symmetric Sums of Squares
We now consider symmetric sums of squares. It was already observed in [12] that invariance under a group action allows one to require sum of squares decompositions which put strong restrictions on the underlying squares. First, we explain the general approach, which uses representation theory and can be used for other groups as well. Our presentation follows the ideas of [12], which we present in a slightly different way; the interested reader is advised to consult [12] for more details.
Invariant Sums of Squares
Let G be a finite group acting linearly on \({\mathbb {R}}^{n}\). Since G acts linearly on \({\mathbb {R}}^n\), the \({\mathbb {R}}\)-vector space \({\mathbb {R}}[X]\) can also be viewed as a G-module, and by Maschke’s theorem (the reader may consult for example [34] for basics in linear representation theory) there exists a decomposition of the form
with \(V^{(j)} = W^{(j)}_1 \oplus \cdots \oplus W^{(j)}_{\eta _j}\) and \(\nu _j := \dim W^{(j)}_i\). Here, the \(W^{(j)}_i\) are the irreducible components and the \(V^{(j)}\) are the isotypic components, i.e., the direct sums of isomorphic irreducible components. The component with respect to the trivial irreducible representation is the invariant ring \({\mathbb {R}}[X]^G\). The elements of the other isotypic components are called semi-invariants. It is classically known that each isotypic component is a finitely generated \({\mathbb {R}}[X]^{G}\)-module (see [36, Theorem 1.3]). To any element \(f\in H_{n,d}\) we can associate a symmetrization, by which we mean its image under the following linear map:
Definition 4.1
For a finite group G the linear map \({\mathcal {R}}_G:H_{n,d}\rightarrow H_{n,d}^{G}\) defined by
is called the Reynolds operator of G. In the case of \(G={\mathcal {S}}_n\) we say that \({\mathcal {R}}_{{\mathcal {S}}_n}(f)\) is the symmetrization of f and we write \({{\,\mathrm{sym}\,}}f\) in this case.
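A small sketch of the Reynolds operator for a group other than \({\mathcal {S}}_n\): the group \(G=\{\pm 1\}^2\) acting on \({\mathbb {R}}[x,y]\) by sign changes (this group and action are chosen purely for illustration and do not appear in the text). Averaging over G kills every monomial with an odd exponent, and the operator is a projection onto the invariants:

```python
from fractions import Fraction
from itertools import product

# G = {±1}^2 acting by (x, y) -> (e1 * x, e2 * y).
group = list(product([1, -1], repeat=2))

def reynolds(f):
    """Average of f over G; polynomials are dicts {(i, j): coefficient}."""
    out = {}
    for (i, j), c in f.items():
        for e1, e2 in group:
            # (e1*x)^i * (e2*y)^j = e1^i * e2^j * x^i * y^j
            out[(i, j)] = out.get((i, j), Fraction(0)) + c * e1**i * e2**j / len(group)
    return {e: c for e, c in out.items() if c}

f = {(2, 0): Fraction(1), (1, 1): Fraction(5), (0, 3): Fraction(-2)}
# Only the even monomial x^2 survives the averaging.
assert reynolds(f) == {(2, 0): Fraction(1)}
# The Reynolds operator is a projection: applying it twice changes nothing.
assert reynolds(reynolds(f)) == reynolds(f)
```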
For a set of polynomials \(f_1,\ldots ,f_l\) we will write \(\sum {\mathbb {R}}\{f_1,\ldots ,f_l\}^2\) to refer to the sums of squares of elements in the linear span of the polynomials \(f_1,\ldots ,f_l\). It has already been observed by Gatermann and Parrilo [12] that invariant sums of squares can be written as sums of squares of semi-invariants using Schur’s Lemma. However, a closer inspection of the situation allows in many cases, as for example in the case of \({\mathcal {S}}_n\), a finer analysis of the decomposition into sums of squares. Consider a set of forms \(\{f_{1,1},\ldots ,f_{1,\eta _1},f_{2,1},\ldots ,f_{h,\eta _h}\}\) such that for fixed j the forms \(f_{j,i}\) generate irreducible components of \(V^{(j)}\). Further assume that they are chosen in such a way that for each j and each pair (l, k) there exists a G-isomorphism \(\rho _{l,k}^{(j)}:V^{(j)}\rightarrow V^{(j)}\) which maps \(f_{j,l}\) to \(f_{j,k}\). Now for every j we consider the set \(\{f_{j,1},\ldots ,f_{j,\eta _j}\}\), which contains only one polynomial per irreducible module. However, since every irreducible module is generated by the G-orbit of a single element, every such set uniquely describes the chosen decomposition. We call such a set a symmetry basis and show that invariant sums of squares are in fact symmetrizations of sums of squares of a symmetry basis. The following theorem, which we state in a slightly more general setup, highlights the use of a symmetry basis.
Theorem 4.2
Let G be a finite group and assume that all real irreducible representations \(V\subset H_{n,d}\) are also irreducible over their complexification. Let p be a form of degree 2d that is invariant with respect to G. If p is a sum of squares, then p can be written in the form
The main tool for the proof is Schur’s Lemma, and we remark that a dual version of this theorem can be found in [28, Thm. 3.4] and [25].
Proof
Let \(p\in H_{n,2d}\) be a Ginvariant sum of squares. Then there exists a symmetric positive semidefinite bilinear form
which is a Gram matrix for p, i.e., for every \(x\in {\mathbb {R}}^{n}\) we can write \(p(x)=B(x^{d},x^{d})\), where \(x^{d}\) stands for the dth power of x in the symmetric algebra of \({\mathbb {R}}^{n}\). Since p is G-invariant, we have \(p={\mathcal {R}}_G(p)\) and by linearity we may assume that B is a G-invariant bilinear form. Now decompose \(H_{n,2d}\) as in (4.1) and consider the restriction of B to
For every \(v\in V^{(i)}\) the quadratic form \(B^{ij}\) defines a linear map \(\phi _v:V^{(j)}\rightarrow {\mathbb {R}}\) via \(\phi _v(w):=B^{ij}(v,w)\), and so the form \(B^{ij}\) can naturally be seen as an element of \({{\,\mathrm{Hom}\,}}^{G}\bigl ({V^{(i)}}^{*},V^{(j)}\bigr )\). Since real representations are self-dual, the modules \({V^{(i)}}^{*}\) and \(V^{(j)}\) are not isomorphic for \(i\ne j\), and thus by Schur’s Lemma we find that \(B^{ij}(v,w)=0\) for all \(v\in V^{(i)}\) and \(w\in V^{(j)}\). So the isotypic components are orthogonal with respect to B and hence it suffices to look at
individually. We have \(V^{(j)}:=\bigoplus _{k=1}^{\eta _j} W^{(j)}_{k}\), where each \(W^{(j)}_k\) is generated by a semi-invariant \(f_{j,k}\), i.e., there is a basis \(f_{j,k,1},\ldots ,f_{j,k,\nu _j}\) for every \(W^{(j)}_k\) such that the basis elements \(f_{j,k,i}\) are taken from the orbit of \(f_{j,k}\) under G. To again use Schur’s Lemma we identify \(B_j\) with its complexification \(B_j^{{\mathbb {C}}}\), which is possible since we assumed that all representations are irreducible also over \({\mathbb {C}}\). Consider a pair \(W^{(j)}_{k_1}, W^{(j)}_{k_2}\), where we allow \(k_1=k_2\). To apply Schur’s Lemma we relate the quadratic form \(B^{jj}\) to a linear map \(\psi ^{(j)}_{k_1,k_2}:W^{(j)}_{k_1}\rightarrow W^{(j)}_{k_2}\) defined on the generating set \(f_{j,k_1,1},\ldots ,f_{j,k_1,\nu _j}\) by
Since we assumed that \(W^{(j)}_{k}\) are absolutely irreducible we have by Schur’s Lemma
and we can conclude that this map is unique up to scalar multiplication. Therefore it can be represented in the form \(\psi ^{(j)}_{k_1,k_2}=c_{k_1,k_2}\rho _{k_1,k_2}\), where \(\rho _{k_1,k_2}\) is the G-isomorphism with \(\rho _{k_1,k_2}(f_{j,k_1})=f_{j,k_2}\) as above. It therefore follows that
where \(\delta _{u,v}\) denotes the Kronecker delta. By considering the matrix of B with respect to the basis \(f_{j,k,l}\) of \(H_{n,d}\) we see that p has the desired decomposition. \(\square \)
Remark 4.3
The above statement also holds true in the situation where one looks at sums of squares of elements of an arbitrary G-closed submodule \(T\subset {\mathbb {R}}[X]\).
In some situations it is convenient to formulate the above Theorem 4.2 in terms of matrix polynomials, i.e., matrices with polynomial entries. Given two \(k\times k\) symmetric matrices A and B define their inner product as \(\langle A,B \rangle ={\text {trace}}{(AB)}\). Define a block-diagonal symmetric matrix A with h blocks \(A^{(1)},\dots ,A^{(h)}\) with the entries of each block given by:
Then Theorem 4.2 is equivalent to the following statement:
Corollary 4.4
With the conditions as in Theorem 4.2 let \(p\in {\mathbb {R}}[X]^G\). Then p is a sum of squares of polynomials in T if and only if p can be written as \(p=\langle A,B \rangle \), where B is a positive semidefinite matrix with real entries.
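For the inner product \(\langle A,B \rangle ={\text {trace}}(AB)\) used here, a quick sketch confirming that for symmetric matrices it coincides with the entrywise sum \(\sum _{i,j} a_{ij}b_{ij}\) (illustrative only):

```python
def trace_inner(A, B):
    """<A, B> = trace(A B) for symmetric matrices given as lists of rows."""
    n = len(A)
    return sum(A[i][k] * B[k][i] for i in range(n) for k in range(n))

A = [[1, 2], [2, 5]]
B = [[3, -1], [-1, 4]]
# For symmetric A and B, trace(AB) equals the entrywise sum of a_ij * b_ij.
entrywise = sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))
assert trace_inner(A, B) == entrywise == 19
```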
We now aim to apply Theorem 4.2 to a symmetric form \(p \in H_{n,2d}^S\). In order to do this we need to identify an explicit representative in every irreducible \({\mathcal {S}}_n\)-submodule of \(H_{n,d}\). We first recall some useful facts from the representation theory of \({\mathcal {S}}_n\). The irreducible representations in this case are the so-called Specht modules, which we will define in the following section. We refer to [18, 31] for more details.
Specht Modules as Polynomials
Let \(\lambda =(\lambda _1,\lambda _2,\ldots ,\lambda _l)\vdash n\) be a partition of n. A Young tableau of shape \(\lambda \) consists of l rows, with \(\lambda _i\) entries in the ith row. Each entry is an element in \(\{1, \ldots , n\}\), and each of these numbers occurs exactly once. A standard Young tableau is a Young tableau in which all rows and columns are increasing. An element \(\sigma \in {\mathcal {S}}_n\) acts on a Young tableau by replacing each entry by its image under \(\sigma \). Two Young tableaux \(T_1\) and \(T_2\) are called row-equivalent if the corresponding rows of the two tableaux contain the same numbers. The classes of row-equivalent Young tableaux are called tabloids, and the equivalence class of a tableau T is denoted by \(\{T\}\). The stabilizer of a row-equivalence class is called the row-stabilizer, denoted by \({{\,\mathrm{RStab}\,}}_T\). If \(R_1,\ldots ,R_l\) are the rows of a given Young tableau T, this group can be written as
where \({\mathcal {S}}_{R_i}\) is the symmetric group on the elements of row i. The action of \({\mathcal {S}}_n\) on the equivalence classes of row-equivalent Young tableaux gives rise to the permutation module \(M^{\lambda }\) corresponding to \(\lambda \), which is the \({\mathcal {S}}_n\)-module defined by
where \(\{T_1\}, \ldots , \{T_s\}\) is a complete list of \(\lambda \)-tabloids and \({\mathbb {R}}\{ \{T_1\}, \ldots ,\{T_s\}\}\) denotes their \({\mathbb {R}}\)-linear span.
Let T be a Young tableau for \(\lambda \vdash n\), and let \(C_i\) be the entries in the ith column of T. The group
where \({\mathcal {S}}_{C_i}\) is the symmetric group on the elements of column i, is called the column stabilizer of T. The irreducible representations of the symmetric group \({\mathcal {S}}_n\) are in one-to-one correspondence with the partitions of n, and they are given by the Specht modules, as explained below. For \(\lambda \vdash n\), the polytabloid associated with T is defined by
Then for a partition \(\lambda \vdash n\), the Specht module \(S^{\lambda }\) is the submodule of the permutation module \(M^\lambda \) spanned by the polytabloids \(e_T\). The dimension of \(S^{\lambda }\) is given by the number of standard Young tableaux for \(\lambda \vdash n\), which we will denote by \(s_\lambda \).
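The number \(s_\lambda \) of standard Young tableaux can be computed by direct enumeration. A brute-force sketch (illustrative only; for small shapes the counts agree with the hook length formula):

```python
def standard_tableaux(shape):
    """Enumerate standard Young tableaux of the given shape by placing
    1, ..., n in increasing order, always at an admissible row end."""
    n = sum(shape)
    results = []

    def place(k, row_lengths, tableau):
        if k > n:
            results.append([tuple(r) for r in tableau])
            return
        for i, target in enumerate(shape):
            filled = row_lengths[i]
            above = row_lengths[i - 1] if i > 0 else float("inf")
            # k may go at the end of row i if the row is not full and the
            # row above already reaches past that column (so that columns
            # stay increasing).
            if filled < target and filled < above:
                row_lengths[i] += 1
                tableau[i].append(k)
                place(k + 1, row_lengths, tableau)
                tableau[i].pop()
                row_lengths[i] -= 1

    place(1, [0] * len(shape), [[] for _ in shape])
    return results

# s_lambda = number of standard Young tableaux of shape lambda.
assert len(standard_tableaux((2, 1))) == 2      # the tableaux 12/3 and 13/2
assert len(standard_tableaux((2, 2))) == 2
assert len(standard_tableaux((3, 2))) == 5      # matches the hook length formula
```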
A classical construction of Specht realizes Specht modules as submodules of the polynomial ring (see [35]): For \(\lambda \vdash n\) let \(T_{\lambda }\) be a standard Young tableau of shape \(\lambda \) and \({\mathcal {C}}_1,\ldots ,{\mathcal {C}}_{\nu }\) be the columns of \(T_\lambda \). To \(T_\lambda \) we associate the monomial \(X^{t_{\lambda }}:=\prod _{i=1}^{n}X_i^{m(i)-1}\), where m(i) is the index of the row of \(T_{\lambda }\) containing i. Note that for any \(\lambda \)-tabloid \(\{T_{\lambda }\}\) the monomial \(X^{T_{\lambda }}\) is well defined, and the mapping \(\{T_{\lambda }\} \mapsto X^{T_{\lambda }}\) is an \({\mathcal {S}}_n\)-isomorphism. For any column \({\mathcal {C}}_i\) of \(T_\lambda \) we denote by \({\mathcal {C}}_i(j)\) the element in the jth row and we associate to it a Vandermonde determinant:
$$\begin{aligned} {{\,\mathrm{Van}\,}}({\mathcal {C}}_i):=\prod _{j<k}\bigl (X_{{\mathcal {C}}_i(j)}-X_{{\mathcal {C}}_i(k)}\bigr ). \end{aligned}$$
The Specht polynomial \(sp_{T_{\lambda }}\) associated to \(T_\lambda \) is defined as
$$\begin{aligned} sp_{T_{\lambda }}:=\sum _{\sigma \in {{\,\mathrm{CStab}\,}}_{T_{\lambda }}}{{\,\mathrm{sgn}\,}}(\sigma )\,\sigma \bigl (X^{t_{\lambda }}\bigr ), \end{aligned}$$
where \({{\,\mathrm{CStab}\,}}_{T_{\lambda }}\) is the column stabilizer of \(T_\lambda \). By the \({\mathcal {S}}_n\)-isomorphism \(\{T_{\lambda }\} \mapsto X^{t_{\lambda }}\), \({\mathcal {S}}_n\) acts on \(sp_{T_{{\lambda }}}\) in the same way as on the polytabloid \(e_{T_{\lambda }}\). If \(T_{\lambda ,1},\ldots ,T_{\lambda ,k}\) denote all standard Young tableaux associated to \(\lambda \), then the polynomials \(sp_{T_{\lambda ,1}},\ldots ,sp_{T_{\lambda ,k}}\) are called the Specht polynomials associated to \(\lambda \). We then have the following proposition; see [35].
Proposition 4.5
The Specht polynomials \(sp_{T_{\lambda ,1}},\ldots ,sp_{T_{\lambda ,k}}\) span an \({\mathcal {S}}_n\)-submodule of \({\mathbb {R}}[X]\) which is isomorphic to the Specht module \(S^{\lambda }\).
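As a concrete sanity check, the construction of Specht polynomials as products of column Vandermonde determinants can be carried out in a computer algebra system. The following sketch (using sympy; the helper names are ours, not the paper's) builds the two Specht polynomials for \(\lambda =(2,1)\vdash 3\) and verifies that total symmetrization annihilates them, as it must for a module containing no trivial component.

```python
from itertools import permutations
import sympy

n = 3
X = sympy.symbols(f"x1:{n+1}")  # x1, x2, x3

def specht_poly(tableau):
    """Product of the column Vandermonde determinants of a Young
    tableau, given as a list of rows with entries in 1..n."""
    ncols = max(len(row) for row in tableau)
    poly = sympy.Integer(1)
    for j in range(ncols):
        col = [row[j] for row in tableau if len(row) > j]
        for a in range(len(col)):
            for b in range(a + 1, len(col)):
                poly *= X[col[a] - 1] - X[col[b] - 1]
    return sympy.expand(poly)

def act(sigma, f):
    """A permutation sigma (tuple of images of 1..n) acting on a
    polynomial by permuting the variables."""
    return f.subs({X[i]: X[sigma[i] - 1] for i in range(n)}, simultaneous=True)

# the two standard tableaux of shape (2, 1) for n = 3
sp1 = specht_poly([[1, 2], [3]])   # x1 - x3
sp2 = specht_poly([[1, 3], [2]])   # x1 - x2
total = sympy.expand(sum(act(s, sp1) for s in permutations(range(1, n + 1))))
print(sp1, sp2, total)  # x1 - x3, x1 - x2, 0
```

The vanishing of the total symmetrization reflects the fact that \(S^{(2,1)}\) does not contain the trivial representation.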
The Specht polynomials identify a submodule of \({\mathbb {R}}[X]\) isomorphic to \(S^{\lambda }\). In order to obtain a decomposition of the entire ring \({\mathbb {R}}[X]\) we will use a generalization of this construction, which is described in the next section.
Higher Specht Polynomials and the Decomposition of \({\mathbb {R}}[X]\)
In what follows we will need to understand the decomposition of the polynomial ring \({\mathbb {R}}[X]\) and of the \({\mathcal {S}}_n\)-module \(H_{n,d}\) into \({\mathcal {S}}_n\)-irreducible representations. Notice that such a decomposition is not unique. It is classically known that the ring \({\mathbb {R}}[X]\) is a free module of rank n! over the ring of symmetric polynomials. Similarly, every isotypic component is a free \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\)-module. Therefore, one general strategy to obtain a symmetry basis of \({\mathbb {R}}[X]\) consists in building a free module basis for \({\mathbb {R}}[X]\) over \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\) which additionally is symmetry adapted, i.e., which respects a decomposition into irreducible \({\mathcal {S}}_n\)-modules. One such construction, which generalizes Specht's original construction presented above, is due to Ariki et al. [1].
Definition 4.6
Let \(n\in {\mathbb {N}}\).

(i)
A finite sequence \(w=(w_1,\ldots ,w_n)\) of nonnegative integers is called a word of length n. A word w of length n is called a permutation if its entries are exactly the numbers \(1,\ldots ,n\), each occurring once.

(ii)
Given a word w and a permutation u we define the monomial associated to the pair as \(X_u^{w}:=X_{u_1}^{w_1}\cdots X_{u_n}^{w_n}\).

(iii)
Given a permutation w we associate to w its index i(w), a word of length n constructed as follows: i(w) contains 0 exactly at the position where 1 occurs in w, and the remaining entries are defined recursively by the following rule: if the entry of i(w) at the position of k is c, then the entry of i(w) at the position of \(k+1\) is c if \(k+1\) occurs to the right of k in w, and \(c+1\) if it occurs to the left of k.

(iv)
For \(\lambda \vdash n\) and T being a standard Young tableau of shape \(\lambda \), we define the word of T, denoted by w(T), by collecting the entries of T from the bottom to the top in consecutive columns starting from the left.

(v)
For a pair (T, V) of standard \(\lambda \)-tableaux we define the monomial associated to this pair as \(X_{w(V)}^{i(w(T))}\).
Example 4.7
Consider the tableau T of shape \(\lambda =(3,2)\) with first row (1, 2, 4) and second row (3, 5). The resulting word is given by \(w(T)=31524\), with \(i(w(T))=(1,0,2,0,1)\). Taking V to be the standard \(\lambda \)-tableau with first row (1, 3, 5) and second row (2, 4), so that \(w(V)=21435\),
we obtain \(X_{w(V)}^{i(w(T))}=X_1^0X_2^1X_3^0X_4^2X_5^1\).
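The word, the index, and the associated monomial can be computed mechanically. The sketch below (plain Python; the fillings of T and V are our reconstruction, chosen consistently with the data of the example) recovers the exponent vector of the monomial \(X_2X_4^2X_5\) above.

```python
def word(tableau):
    """Read the entries bottom-to-top in consecutive columns, left to right."""
    ncols = max(len(row) for row in tableau)
    w = []
    for j in range(ncols):
        col = [row[j] for row in tableau if len(row) > j]
        w.extend(reversed(col))
    return tuple(w)

def index_word(w):
    """The index i(w) of a permutation word w (Definition 4.6 (iii))."""
    pos = {value: p for p, value in enumerate(w)}
    i = [0] * len(w)
    c = 0
    for k in range(2, len(w) + 1):
        if pos[k] < pos[k - 1]:   # k occurs to the left of k - 1
            c += 1
        i[pos[k]] = c
    return tuple(i)

T = [[1, 2, 4], [3, 5]]   # reconstructed filling with w(T) = 31524
V = [[1, 3, 5], [2, 4]]   # reconstructed filling with w(V) = 21435
wT, wV = word(T), word(V)
iT = index_word(wT)
expo = {v: e for v, e in zip(wV, iT)}   # exponent of X_v in X_{w(V)}^{i(w(T))}
print(wT, iT)   # (3, 1, 5, 2, 4) (1, 0, 2, 0, 1)
```

The dictionary `expo` assigns exponent 1 to \(X_2\), 2 to \(X_4\), 1 to \(X_5\), and 0 to the remaining variables.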
Definition 4.8
Let \(\lambda \vdash n\) and T be a \(\lambda \)-tableau. Then the Young symmetrizer associated to T is the element in the group algebra \({\mathbb {R}}[{\mathcal {S}}_n]\) defined to be
Now let T be a standard Young tableau, and define the higher Specht polynomial associated with the pair (T, V) to be
For \(\lambda \vdash n\) we will denote by
$$\begin{aligned} {\mathcal {F}}_{\lambda }:=\bigl \{F^T_V:T,V \text { standard } \lambda \text {-tableaux}\bigr \} \end{aligned}$$the set of all standard higher Specht polynomials corresponding to \(\lambda \) and by
$$\begin{aligned} {\mathcal {F}}:=\bigcup _{\lambda \vdash n}{\mathcal {F}}_{\lambda } \end{aligned}$$the set of all standard higher Specht polynomials.
Remark 4.9
Let \(s_{\lambda }\) denote the number of standard Young tableaux of shape \(\lambda \). It follows from the so-called Robinson–Schensted correspondence (see [31]) that
$$\begin{aligned} \sum _{\lambda \vdash n}s_{\lambda }^{2}=n!. \end{aligned}$$
Therefore the cardinality of \({\mathcal {F}}\) is exactly \(n!\).
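This count can be confirmed computationally via the hook length formula \(s_\lambda =n!/\prod \text {hooks}\) (a standard fact quoted here, not proved in the text). A short plain-Python sketch for \(n=5\):

```python
from math import factorial

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        return [()]
    out = []
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            out.append((first,) + rest)
    return out

def num_standard_tableaux(lam):
    """s_lambda via the hook length formula n! / prod(hook lengths)."""
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])]
    hooks = 1
    for i, part in enumerate(lam):
        for j in range(part):
            hooks *= (part - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // hooks

n = 5
total = sum(num_standard_tableaux(lam) ** 2 for lam in partitions(n))
print(total, factorial(n))  # 120 120
```

For instance \(s_{(3,2)}=5\) and \(s_{(2,1)}=2\), and the squares over the seven partitions of 5 sum to \(5!=120\).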
The importance of the higher Specht polynomials is now summarized in the following theorem, which can be found in [1, Thm. 1].
Theorem 4.10
The following holds for the set of higher Specht polynomials.

(i)
The set \({\mathcal {F}}\) is a free basis of the ring \({\mathbb {R}}[X]\) over the invariant ring \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\).

(ii)
For any \(\lambda \vdash n\) and standard \(\lambda \)tableau T, the space spanned by the polynomials in
$$\begin{aligned} {\mathcal {F}}^T_\lambda :=\bigl \{F^T_V:V \text { runs over all standard } \lambda \text {-tableaux}\bigr \} \end{aligned}$$is an irreducible \({\mathcal {S}}_n\)-module isomorphic to the Specht module \(S^{\lambda }\).
For every \(\lambda \vdash n\) we denote by \(V_0^{\lambda }\) the standard \(\lambda \)-tableau with entries \(\{1,\ldots ,\lambda _1\}\) in the first row, \(\{\lambda _1+1,\ldots ,\lambda _1+\lambda _2\}\) in the second row, and so on. Consider the set
$$\begin{aligned} {\mathcal {Q}}_\lambda :=\bigl \{F^T_{V_0^{\lambda }}:T \text { a standard } \lambda \text {-tableau}\bigr \}, \end{aligned}$$
which is of cardinality \(s_\lambda \). The set \({\mathcal {Q}}_\lambda \) is a symmetry basis of the vector space spanned by \({\mathcal {F}}\). Using these polynomials we define \(s_\lambda \times s_\lambda \) matrix polynomials \(Q^{\lambda }\) by
$$\begin{aligned} Q^{\lambda }_{T,T'}:={{\,\mathrm{sym}\,}}_n\bigl (F^{T}_{V_0^{\lambda }}\,F^{T'}_{V_0^{\lambda }}\bigr ), \end{aligned}$$(4.2)
where \(T,T'\) run over all standard \(\lambda \)-tableaux. Since by (i) in Theorem 4.10 we know that every polynomial \(h\in {\mathbb {R}}[X]\) can be uniquely written as a linear combination of elements in \({\mathcal {F}}\) with coefficients in \({\mathbb {R}}[X]^{{\mathcal {S}}_n}\), the following theorem can be thought of as a generalization of Corollary 4.4 to sums of squares from an \({\mathcal {S}}_n\)-module with coefficients in an \({\mathcal {S}}_n\)-invariant ring (see also [12, Thm. 6.2]):
Theorem 4.11
Let \(p\in {\mathbb {R}}[X]^{{\mathcal {S}}_n}\) be a symmetric polynomial. Then p is a sum of squares if and only if it can be written in the form
where \(Q^\lambda \) is defined in (4.2) and each \(B^\lambda \in {\mathbb {R}}[X]^{s_\lambda \times s_\lambda }\) is a sum of symmetric squares matrix polynomial, i.e., \(B^\lambda (x)=L^{t}(x)L(x)\) for some matrix polynomial L(x) whose entries are symmetric polynomials.
Each entry of the matrix \(Q^{\lambda }\) is a symmetric polynomial and thus can be represented as a polynomial in any set of generators of the ring of symmetric polynomials. We will use the power means \(p_1,\ldots ,p_n\) to phrase the next theorem. However, any other choice works similarly. With this choice of basis it follows that there exists a matrix polynomial \({\tilde{Q}}^{\lambda }(z_1,\ldots ,z_n)\) in n variables \(z_1,\ldots ,z_n\) such that
With this notation one can restate Theorem 4.11 in the following way:
Theorem 4.12
Let \(f\in {\mathbb {R}}[X]^{{\mathcal {S}}_n}\) be a symmetric polynomial and \(g\in {\mathbb {R}}[z_1,\ldots ,z_n]\) such that \(f=g(p_1,\ldots ,p_n)\). Then f is a sum of squares if and only if g can be written in the form
where \({\tilde{Q}}^\lambda \) is defined in (4.3) and each \(B^\lambda \in {\mathbb {R}}[z]^{s_\lambda \times s_\lambda }\) is a sum of squares matrix polynomial, i.e., \(B^\lambda :=L(z)^{t}L(z)\) for some matrix polynomial L.
While Theorems 4.11 and 4.12 give a characterization of symmetric sums of squares in a given number of variables, we need to understand the behavior of the \({\mathcal {S}}_n\)module \(H_{n,d}\) for polynomials of a fixed degree d in a growing number of variables n. This will be done in the next section.
The Cone \(\Sigma _{n,2d}^S\)
A symmetric sum of squares \(f \in \Sigma ^S_{n,2d}\) has to be a sum of squares of forms from \(H_{n,d}\). Therefore we now consider restricting the degree of the squares in the underlying sum of squares representation. With a little abuse of notation we denote by \({\mathcal {F}}_{n,d}\) the vector space spanned by higher Specht polynomials for the group \({\mathcal {S}}_n\) of degree at most d. Further, for a partition \(\lambda \vdash n\) let \({\mathcal {F}}_{\lambda ,d}\) denote the span of the higher Specht polynomials of degree at most d corresponding to the Specht module \(S^{\lambda }\), i.e., \({\mathcal {F}}_{\lambda ,d}\) is exactly the isotypic component of \({\mathcal {F}}_{n,d}\) corresponding to \(S^{\lambda }\). In order to describe this isotypic component combinatorially, recall that the degree of the higher Specht polynomial \(F_T^S\) is given by the charge c(S) of S. Thus, it follows from the above construction that
We now show that sums of squares of degree 2d in n variables can be constructed by symmetrizing sums of squares in 2d variables. So we first consider the case \(n=2d\). Let
be the decomposition of \({\mathcal {F}}_{2d,d}\) as an \({\mathcal {S}}_{2d}\)-module. The following proposition gives the multiplicities of the different \({\mathcal {S}}_n\)-modules appearing in the vector space of homogeneous polynomials of degree d.
Proposition 4.13
The multiplicities \(m_{\lambda }\) of the \({\mathcal {S}}_n\)-modules \(S^\lambda \) which appear in an isotypic decomposition of \(H_{n,d}\) coincide with the number of standard \(\lambda \)-tableaux S with charge at most d: \(c(S)\le d\).
For a partition \(\lambda \vdash 2d\) and \(n \ge 2d\) define a new partition \(\lambda ^{(n)} \vdash n\) by simply increasing the first part of \(\lambda \) by \(n-2d\): \(\lambda ^{(n)}_1=\lambda _1+n-2d\) and \(\lambda ^{(n)}_i=\lambda _i\) for \(i\ge 2\). Then the decomposition Theorem 4.10 in combination with [28, Thm. 4.7] yields that
For every \(\lambda \vdash 2d\) we choose \(m_\lambda \) many higher Specht polynomials \(q_1^{\lambda },\ldots ,q_{m_\lambda }^{\lambda }\) that form a symmetry basis of the \(\lambda \)-isotypic component of \({\mathcal {F}}_{2d,d}\). Let \(q_\lambda =(q_1^{\lambda },\ldots ,q_{m_\lambda }^{\lambda })\) be the vector with entries \(q_i^{\lambda }\). As before we construct the matrix \(Q^{\lambda }_{2d}\) by
Further, we define the matrix \(Q_n^{\lambda }\) by
By construction we have the following:
Proposition 4.14
The matrix \(Q_n^{\lambda }\) is the \({\mathcal {S}}_n\)-symmetrization of the matrix \(Q_{2d}^{\lambda }\):
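The symmetrization in Proposition 4.14 is the Reynolds operator, i.e., the average over all permutations of the variables. A minimal sympy sketch (the function names are ours), applied to the square of the degree-one Specht polynomial \(X_n-X_1\) for \(n=4\); the result is an invariant quadratic form, here expressed through \(p_1=\sum X_i\) and \(\sum X_i^2\):

```python
from itertools import permutations
from math import factorial
import sympy

n = 4
X = sympy.symbols(f"x1:{n+1}")

def reynolds(f):
    """S_n-symmetrization: average f over all permutations of X."""
    total = sum(f.subs(dict(zip(X, sigma)), simultaneous=True)
                for sigma in permutations(X))
    return sympy.expand(total / factorial(n))

g = reynolds((X[-1] - X[0]) ** 2)
p1 = sum(X)
q2 = sum(x**2 for x in X)
# for n = 4 the average of (x_i - x_j)^2 over ordered pairs i != j
# equals (4*q2 - p1^2)/6
print(sympy.expand(g - (4 * q2 - p1**2) / 6))  # 0
```

Since the symmetrization of a product of two polynomials supported on few variables involves only finitely many distinct terms, the same computation scales to the matrix entries \(Q_n^{\lambda }(i,j)\).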
We now give a parametric description of the family of cones \(\Sigma ^S_{n,2d}\). Note again that this statement is given in terms of a particular basis, but similarly can be stated with any set of generators.
Theorem 4.15
Let \(f:=\sum _{\lambda \vdash 2d} c_\lambda p_\lambda ^{(n)}\in H_{n,2d}^S\). Then f is a sum of squares if and only if it can be written in the form
where each \(B^\lambda \in {\mathbb {R}}\bigl [p_1^{(n)},\ldots ,p_{d}^{(n)}\bigr ]^{m_\lambda \times m_\lambda }\) is a sum of squares matrix of power sum polynomials, i.e., \(B^\lambda =L_{\lambda }^{t}L_{\lambda }\) for some matrix polynomial \(L_{\lambda }\bigl (p_1^{(n)},\ldots ,p_{d}^{(n)}\bigr )\) whose entries are weighted homogeneous forms. Additionally, we have for every column k of \(L_{\lambda }\),
or, equivalently, every entry \(B^{\lambda }(i,j)\) of \(B^{\lambda }\) is a weighted homogeneous form such that
Proof
In order to apply Theorem 4.11 to our fixed degree situation we have to show that the forms \(\{q_1^{\lambda },\ldots ,q_{m_\lambda }^{\lambda }\}\), when viewed as functions in n variables, also form a symmetry basis of the \(\lambda ^{(n)}\)-isotypic component of \({\mathcal {F}}_{n,d}\) for all \(n \ge 2d\). Indeed, consider a standard Young tableau \(t_{\lambda }\) of shape \(\lambda \) and construct a standard Young tableau \(t_{\lambda ^{(n)}}\) of shape \(\lambda ^{(n)}\) by adding the numbers \(2d+1,\dots ,n\) as rightmost entries of the top row of \(t_{\lambda ^{(n)}}\), while keeping the rest of the filling of \(t_{\lambda ^{(n)}}\) the same as for \(t_{\lambda }\). It follows by construction of the Specht polynomials that
We may assume that the \(q_k^{\lambda }\) were chosen so that they map to \(sp_{t_{\lambda }}\) by an \({\mathcal {S}}_{2d}\)-isomorphism. We observe that \(sp_{t_{\lambda }}\) (and therefore \(sp_{t_\lambda ^{(n)}}\)) and \(q_k^{\lambda }\) do not involve any of the variables \(X_{j}\), \(j>2d\). Therefore both are stabilized by \({\mathcal {S}}_{n-2d}\) (operating on the last \(n-2d\) variables), and further the action on the first 2d variables is exactly the same. Thus there is an \({\mathcal {S}}_{n}\)-isomorphism mapping \(q_k^{\lambda }\) to \(sp_{t_\lambda ^{(n)}}\), and the \({\mathcal {S}}_{n}\)-modules generated by the two polynomials are isomorphic. Therefore it follows that the \(q_k^{\lambda }\) also form a symmetry basis of the \(\lambda ^{(n)}\)-isotypic component of \({\mathcal {F}}_{n,d}\). \(\square \)
Remark 4.16
We remark that the sum of squares decomposition of \(f=\sum _{\lambda \vdash 2d} \langle B^{\lambda },Q^{\lambda }_n \rangle \), with \(B^{\lambda }=L_\lambda ^tL_{\lambda }\) can be read off as follows:
In particular, if for a fixed \(\lambda \vdash 2d\) and for every \(1\le i\le m_\lambda \) we denote \(\delta _i:=d-\deg q_i^{\lambda }\), then the set of polynomials
is a symmetry basis of the isotypic component of \(H_{n,d}\) corresponding to \(\lambda \).
The Dual Cone of Symmetric Sums of Squares
Recall that for a convex cone \(K\subset {\mathbb {R}}^n\) the dual cone \(K^*\) is defined as
$$\begin{aligned} K^*:=\bigl \{\ell \in ({\mathbb {R}}^n)^*:\ell (x)\ge 0 \text { for all } x\in K\bigr \}. \end{aligned}$$
Our analysis of the dual cone \((\Sigma _{n,2d}^S)^*\) proceeds similarly to the analysis of the dual cone in the nonsymmetric situation given in [4, 6].
Let \(S_{n,d}\) be the vector space of real quadratic forms on \(H_{n,d}\). Let \(S_{n,d}^+\) be the cone of positive semidefinite quadratic forms in \(S_{n,d}\). An element \({\mathcal {Q}}\in S_{n,d}\) is said to be \({\mathcal {S}}_n\)-invariant if \({\mathcal {Q}}(f)={\mathcal {Q}}(\sigma (f))\) for all \(\sigma \in {\mathcal {S}}_n\), \(f \in H_{n,d}\). We will denote by \({\bar{S}}_{n,d}\) the space of \({\mathcal {S}}_n\)-invariant quadratic forms on \(H_{n,d}\). Further we can identify a linear functional \(\ell \in (H_{n,2d}^{S})^*\) with a quadratic form \({\mathcal {Q}}_{\ell }\) defined by
$$\begin{aligned} {\mathcal {Q}}_{\ell }(f):=\ell \bigl ({{\,\mathrm{sym}\,}}_n(f^2)\bigr )\quad \text {for all } f\in H_{n,d}. \end{aligned}$$
Let \({\bar{S}}_{n,d}^+\) be the cone of positive semidefinite forms in \({\bar{S}}_{n,d}\), i.e.,
$$\begin{aligned} {\bar{S}}_{n,d}^+:={\bar{S}}_{n,d}\cap S_{n,d}^+. \end{aligned}$$
The following lemma is straightforward, but very important, as it allows us to identify the elements \(\ell \in (\Sigma _{n,2d}^S)^*\) of the dual cone with quadratic forms \({\mathcal {Q}}_{\ell }\) in \({\bar{S}}_{n,d}^+\).
Lemma 4.17
A linear functional \(\ell \in (H_{n,2d}^S)^*\) belongs to the dual cone \((\Sigma _{n,2d}^S)^*\) if and only if the quadratic form \({\mathcal {Q}}_{\ell }\) is positive semidefinite.
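To illustrate Lemma 4.17 numerically: for a point evaluation \(\ell _v\) the matrix of \({\mathcal {Q}}_{\ell _v}\) in a monomial basis b of \(H_{n,d}\) is the rank-one Gram matrix \(b(v)b(v)^T\), hence positive semidefinite. A numpy sketch (the basis ordering and names are ours):

```python
import itertools
import numpy as np

n, d = 3, 2
# monomial basis of H_{n,d}: exponent vectors with entries summing to d
basis = [e for e in itertools.product(range(d + 1), repeat=n) if sum(e) == d]

def mono(e, v):
    """Evaluate the monomial with exponent vector e at the point v."""
    out = 1.0
    for ei, vi in zip(e, v):
        out *= vi ** ei
    return out

v = np.array([1.0, -2.0, 0.5])
b = np.array([mono(e, v) for e in basis])
M = np.outer(b, b)                  # matrix of Q_{ell_v}: entries (b_i b_j)(v)
eigs = np.linalg.eigvalsh(M)
print(len(basis), eigs.min() >= -1e-9, np.linalg.matrix_rank(M))  # 6 True 1
```

A general element of the dual cone corresponds to a matrix in this linear section of the positive semidefinite cone, not necessarily of rank one.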
Since for \(\ell \in (H_{n,2d}^S)^*\) we have \({\mathcal {Q}}_{\ell }\in {\bar{S}}_{n,d}\), Schur's Lemma again applies and we can use the symmetry basis constructed above to simplify the condition that \({\mathcal {Q}}_{\ell }\) is positive semidefinite. In order to arrive at a dual statement of Theorem 4.15 we construct the following matrices:
Definition 4.18
For a partition \(\lambda \vdash 2d\) consider the block matrix \(M_{n,\lambda }\) defined by
where in each block i, j the indices \((\alpha ,\beta )\) run through all pairs of weakly decreasing sequences \(\alpha =(\alpha _1,\ldots ,\alpha _a)\) and \(\beta =(\beta _1,\ldots ,\beta _b)\) such that
With this notation the following lemma is just the dual version of Theorem 4.15 and is established by expressing Lemma 4.17 in the basis given in (4.5):
Lemma 4.19
Let \(\ell \in (H_{n,2d}^S)^{*}\) be a linear functional. Then \(\ell \in (\Sigma _{n,2d}^S)^{*}\) if and only if for all \(\lambda \vdash 2d\) the above matrices \(M_{n,\lambda }\) are positive semidefinite.
In order to examine the kernels of quadratic forms we use the following construction. Let \(W\subset H_{n,d}\) be any linear subspace. We define \(W^{<2>}\) to be the symmetrization of the degree 2d part of the ideal generated by W:
$$\begin{aligned} W^{<2>}:={{\,\mathrm{span}\,}}\bigl \{{{\,\mathrm{sym}\,}}_n(pq):p\in W,\, q\in H_{n,d}\bigr \}. \end{aligned}$$
In Lemma 4.17 we identified the dual cone \((\Sigma _{n,2d}^{S})^*\) with a linear section of the cone of positive semidefinite quadratic forms \(S_{n,d}^+\), namely its intersection with the subspace \({\bar{S}}_{n,d}\) of invariant quadratic forms. By a slight abuse of terminology we think of positive semidefinite forms \({\mathcal {Q}}_{\ell }\) as elements of the dual cone \((\Sigma _{n,2d}^S)^*\). The following important proposition is a straightforward adaptation of the analogous result in the nonsymmetric case [6, Proposition 2.1]:
Proposition 4.20
Let \(\ell \in (\Sigma ^{S}_{n,2d})^*\) be a linear functional nonnegative on squares and let \(W_{\ell }\subset H_{n,d}\) be the kernel of the quadratic form \({\mathcal {Q}}_{\ell }\). The linear functional \(\ell \) spans an extreme ray of \((\Sigma _{n,2d}^S)^*\) if and only if \(W_{\ell }^{<2>}\) is a hyperplane in \(H_{n,2d}^S\). Equivalently, the kernel of \({\mathcal {Q}}_\ell \) is maximal, i.e., if \(\ker {\mathcal {Q}}_\ell \subseteq \ker {\mathcal {Q}}_m\) for some \(m \in H_{n,2d}^*\) then \(m=\lambda \ell \) for some \(\lambda \in {\mathbb {R}}\).
The dual correspondence yields that any facet F of a cone K, i.e., any maximal face of K, is given by an extreme ray of the dual cone \(K^*\). More precisely, for any maximal face F of K there exists an extreme ray of \(K^*\) spanned by a linear functional \(\ell \in K^*\) such that
$$\begin{aligned} F=K\cap \{x:\ell (x)=0\}. \end{aligned}$$
We now aim to characterize the extreme rays of \((\Sigma _{n,2d}^{S})^*\) that are not extreme rays of the cone \(({\mathcal {P}}_{n,2d}^{S})^*\). For \(v \in {\mathbb {R}}^n\) define a linear functional
$$\begin{aligned} \ell _v(f):=f(v)\quad \text {for all } f\in H_{n,2d}^S. \end{aligned}$$
We say that the linear functional \(\ell _v\) corresponds to point evaluation at v. It is easy to show with the same proof as in the nonsymmetric case that the extreme rays of the cone \(({\mathcal {P}}_{n,2d}^{S})^*\) are precisely the point evaluations \(\ell _v\) (see [5, Chap. 4] for the nonsymmetric case). Therefore we need to identify extreme rays of \((\Sigma _{n,2d}^{S})^*\) which are not point evaluations. We now examine the case of degree 4 in detail, and give an explicit construction of an element of \((\Sigma _{n,4}^{S})^{*}\) which does not belong to \(({\mathcal {P}}_{n,4}^{S})^*\).
Symmetric Quartic Sums of Squares
We now look at the decomposition of \(H_{n,2}\) as an \({\mathcal {S}}_n\)-module in order to apply Theorem 4.2 and characterize all symmetric sums of squares of degree 4.
Theorem 5.1
Let \(f^{(n)}\in H_{n,4}\) be symmetric and \(n\ge 4\). If \(f^{(n)}\) is a sum of squares then it can be written in the form
where \(\gamma \ge 0\) and the matrices \((\alpha _{ij})_{2\times 2}\) and \((\beta _{ij})_{2\times 2}\) are positive semidefinite.
Proof
The statement follows directly from the arguments presented in Sect. 4.4. Following Theorem 4.15 we get that \(f^{(n)}\) has a decomposition in the form
where \(B^{(n)}\) is a sum of symmetric squares, \(B^{(n-1,1)}\) is a \(2\times 2\) sum of symmetric squares matrix polynomial and, due to the degree restrictions, \(B^{(n-2,2)}\) is a nonnegative scalar. It remains to calculate the matrices \(Q_n^{(n-1,1)}\) and \(Q_n^{(n-2,2)}\) appearing in the decomposition in the statement. These are defined as the symmetrization of the pairwise products of those Specht polynomials which generate the corresponding Specht modules in degree 2. In degree 2 the Specht polynomials \(X_n-X_1\) and \(X_n^2-X_1^2\) generate two distinct irreducible \({\mathcal {S}}_n\)-modules isomorphic to \(S^{(n-1,1)}\) and the Specht polynomial \((X_{n-1}-X_1)(X_{n}-X_2)\) generates a module isomorphic to \(S^{(n-2,2)}\). Thus we have:
Then the symmetrization can be calculated quite directly, since every product involves at most 4 variables. These calculations then yield
which gives exactly the statement in the theorem. \(\square \)
The Boundary of \(\Sigma _{n,4}^{S}\)
We now apply Proposition 4.20 to the case of degree 4 and examine the possible kernels of an extreme ray of \((\Sigma ^S_{n,4})^*\) which do not come from a point evaluation.
Lemma 5.2
Suppose a linear functional \(\ell \) spans an extreme ray of \((\Sigma _{n,4}^{S})^{*}\) that is not an extreme ray of \(({\mathcal {P}}_{n,4}^S)^*\). Let Q be the quadratic form corresponding to \(\ell \). Then
and n is odd.
Proof
Since Q is an \({\mathcal {S}}_n\)-invariant quadratic form, its kernel \({{\,\mathrm{Ker}\,}}Q\subseteq H_{n,2}\) is an \({\mathcal {S}}_n\)-module. It follows from the arguments in the proof of Theorem 5.1 that \({{\,\mathrm{Ker}\,}}Q\) decomposes as
where \(\alpha ,\beta \in \{0,1,2\}\) and \(\gamma \in \{0,1\}\). We now examine the possible combinations of \(\alpha \), \(\beta \), and \(\gamma \).
As above let W denote the kernel of Q. We first observe that \(\alpha =2\) is not possible: if \(\alpha =2\) then we have \(p_2\in W\) and so \(p_2^2\in W^{<2>}\), which is a contradiction since \(p_2^2\) is not on the boundary of \(\Sigma _{n,4}^{S}\).
By Proposition 4.20 the kernel W of Q must be maximal. Let \(w \in {\mathbb {R}}^n\) be the all-ones vector: \(w=(1,\dots ,1)\). We now observe that \(\alpha =0\) is also impossible: if \(\alpha =0\) then all forms in the kernel W of Q vanish at w. Therefore \(\ker Q \subseteq \ker Q_{\ell _w}\) and by Proposition 4.20 we have \(Q=\lambda Q_{\ell _w}\), which is a contradiction, since Q does not correspond to a point evaluation. Thus we must have \(\alpha =1\).
Since we have \(\dim H_{n,4}^{S}=5\), from Proposition 4.20 we see that \(\dim W^{<2>}=4\). This excludes the case \(\beta =0\), since even with \(\alpha =1\) and \(\gamma =1\) the dimension of \(W^{<2>}\) is at most 3. Now suppose that \(\beta =2\), i.e., the \({\mathcal {S}}_n\)-module generated by \((X_1-X_2)p_{1}\) and \(X_1^2-X_2^2\) lies in W as well as a polynomial \(q=a p_1^2+b p_2\). We consider the symmetrizations of the five pairwise products and express these in the basis \(\{p_{(4)},p_{(3,1)}, p_{(2^2)},p_{(2,1^2)}, p_{(1^4)}\}\).
Now the condition \(\dim W^{<2>}=4\) implies that these five products cannot be linearly independent and an explicit calculation of the determinant of the corresponding matrix M yields \(\det M=b(a+b)\). We now examine the possible roots of this determinant. In the case when \(a=-b\) all polynomials in W (even if \(\gamma =1\)) will be zero at \((1,\dots ,1)\), which is excluded. Therefore the only possible case is \(b=0\). In that case, by calculating the kernel of M we see that the unique (up to a constant multiple) linear functional \(\ell \) vanishing on \(W^{<2>}\) is given by
We observe using (5.1) that we must have \(\gamma =0\) since \(\ell \bigl ({{\,\mathrm{sym}\,}}_n (X_1-X_2)^2(X_3-X_4)^2\bigr )>0\) for \(n \ge 4\). Now suppose that n is even and let \(w\in {\mathbb {R}}^n\) be given by \(w=(1,\dots ,1,-1,\dots ,-1)\), where 1 and \(-1\) occur n/2 times each. It is easy to verify that for all \(f \in W\) we have \(f(w)=0\). Therefore it follows that \(W \subseteq \ker Q_{\ell _w}\), which is a contradiction, since W is the kernel of an extreme ray that does not come from a point evaluation.
When n is odd the forms in W have no common zeros and therefore \(\ell \) is not a positive combination of point evaluations. It is not hard to verify that \(\ell \) is nonnegative on squares and the kernel W is maximal. Therefore by Proposition 4.20 we know that \(\ell \) spans an extreme ray of \((\Sigma ^S_{n,4})^*\).
Finally we need to deal with the case \(\alpha =\beta =\gamma =1\). Suppose that the \({\mathcal {S}}_n\)-module W is generated by three polynomials:
Again we consider the symmetrizations of the five pairwise products and represent these in a matrix M. Explicit calculations now show that
As \(\alpha =\beta =\gamma =1\) we must have \({\text {rank}}\, M=4\) since the rows of M generate \(W^{<2>}\). Again we cannot have \(a=-b\), and thus
Therefore there exists a unique linear functional \(\ell \), which vanishes on \(W^{<2>}\) and comes from the kernel of M.
Let \(w\in {\mathbb {R}}^n\) be a point with coordinates \(w=(s,\dots ,s,t)\) with \(s,t \in {\mathbb {R}}\) such that
We see that \(q_3(w)=0\) and from the above equation it also follows that for all f in the \({\mathcal {S}}_n\)-module generated by \(q_2\) we have \(f(w)=0\). Direct calculation shows that (5.2) also implies \(q_1(w)=0\). Thus we have \(W\subseteq \ker Q_{\ell _w}\), which is a contradiction by Proposition 4.20, since W is the kernel of an extreme ray that does not come from a point evaluation. We remark that it is possible to show that the functional \(\ell \) vanishing on \(W^{<2>}\) and giving rise to W is in fact a multiple of \(\ell _w\), but this is not necessary for us to finish the proof. \(\square \)
The above description allows us to explicitly characterize degree 4 symmetric sums of squares that are positive and on the boundary of \(\Sigma _{n,4}^S\).
Theorem 5.3
Let \(n\ge 4\) and let \(f^{(n)}\in H_{n,4}\) be symmetric, strictly positive, and on the boundary of \(\Sigma _{n,4}^{S}\). Then

(i)
either \(f^{(n)}\) can be written as
$$\begin{aligned} f^{(n)}&=a^2p_{(4)}^{(n)}+2abp_{(3,1)}^{(n)}+(c^2-a^2)p_{(2^2)}^{(n)}\nonumber \\&\quad +\,(2cd+b^2-2ab)p_{(2,1^2)}^{(n)}+(d^2-b^2)p_{(1^4)}^{(n)}, \end{aligned}$$with nonzero coefficients \(a,b,c,d\in {\mathbb {R}}\setminus \{0\}\) which additionally satisfy
$$\begin{aligned} 0&\le \frac{a(c-d)+b(d+c)}{ac}, \qquad 0\le \frac{a (c + d) (b c - a d)}{ac},\nonumber \\ 0&\le \frac{c+d}{a^2c^2}\bigl (a^2 (c - d) + b (a c + b c)\bigr ),\nonumber \\ 0&\le \frac{c+d}{a^2c^2}\bigl ((a b c + b^2 c - a^2 d)\, a^2 c - (a^2 d)^2\bigr ),\nonumber \\ 0&\le (c+d)\bigl ((c{a}^{2}+cab){n}^{2}+({b}^{2}c-3cab+3{a}^{2}d)n-{b}^{2}c+3cab-3{a}^{2}d\bigr ), \end{aligned}$$(5.3)
(ii)
or, if n is odd, then \(f^{(n)}\) may have the form
$$\begin{aligned} f^{(n)}&=a^2p_{(1^4)}+b_{11}\bigl (p_{(2,1^2)}-p_{(1^4)}\bigr )+2b_{12}\bigl (p_{(3,1)}-p_{(2,1^2)}\bigr )\nonumber \\&\quad +\, b_{22}\bigl (p_{(4)}-p_{(2^2)}\bigr ), \end{aligned}$$with coefficients \(a,b_{11}, b_{12}, b_{22}\in {\mathbb {R}}\) which additionally satisfy
$$\begin{aligned} a\ne 0,\quad b_{11}+b_{22}\ge 0,\quad b_{11}b_{22}-b_{12}^2\ge 0. \end{aligned}$$(5.4)
Proof
Suppose that \(f^{(n)}\) is a strictly positive form on the boundary of \(\Sigma _{n,4}^{S}\). Then there exists a nontrivial functional \(\ell \) spanning an extreme ray of the dual cone \((\Sigma _{n,4}^{S})^{*}\) such that \(\ell (f^{(n)})=0\). Let \(W_{\ell }\subset H_{n,2}\) denote the kernel of \(Q_\ell \). In view of Lemma 5.2 we see that there are two possible situations that we need to take into consideration.
(i) We first assume that
In view of (5.5) we may assume that the \({\mathcal {S}}_n\)-module \(W_\ell \) is generated by two polynomials:
where \(a,b,c,d\in {\mathbb {R}}\) are chosen such that \((0,0)\ne (a,b)\) and \((0,0)\ne (c,d)\).
Let \(q\in H_{n,2}\). By Proposition 4.20 we have
The dimension of the vector space of \({\mathcal {S}}_n\)-invariant quadratic maps from \(H_{n,2}\) to \({\mathbb {R}}\) is 5. However, since \(q\in W_{\ell }\), Schur's lemma implies \(\ell (qr)=0\) for all r in the isotypic component of type \((n-2,2)\). Let \(y_{\lambda }=\ell (p_{\lambda })\). Using explicit calculations we find that the coefficients \(y_{\lambda }\) are characterized by the following system of four linear equations:
Since in addition we want \(\ell \in (\Sigma _{n,4}^S)^{*}\), we must also take into account that the corresponding quadratic form \(Q_\ell \) has to be positive semidefinite. By Lemma 4.17 this is equivalent to checking that each of the two matrices
is positive semidefinite and
Now assuming \(a=0\) we find that either \(b=0\), which is excluded, or any solution of the above linear system will have
By substituting this into \(M_{(n-2,2)}\) we find that
while from \(M_{(n-1,1)}\) we have \(y_{(4)}-y_{(2^2)} \ge 0\). It follows that
But then we find that \(\ell \) is proportional to the functional that simply evaluates at the point \((1,1,1,\ldots ,1)\), which is a contradiction since \(f^{(n)}\) is strictly positive. Thus \(a\ne 0\).
Now suppose that \(c=0\). Then we find that
Since \(a\ne 0\) we find that the linear functional \(\ell \) is given by
By Lemma 5.2 we must have n odd in order for \(\ell \) not to be a point evaluation and this sends us to case (ii) discussed below. Meanwhile with \(a,c \ne 0\) the solution of the linear system (up to a common multiple) is given by
which then yields the conditions in (5.3).
(ii) If n is odd we know from Lemma 5.2 that there is one additional case: \(f^{(n)}\) is the sum of the square \((a\,p_{(1^2)})^2\) and a sum of squares of elements from the isotypic component of \(H_{n,2}\) which corresponds to the representation \(S^{(n-1,1)}\). Since \(f^{(n)}\) is strictly positive, we must have \(a\ne 0\) (otherwise \(f^{(n)}\) has a zero at \((1,\dots ,1)\)) and it also follows that the matrix \((b_{ij})_{2\times 2}\) must be positive definite. Therefore we get the announced decomposition from Theorem 5.1. \(\square \)
Note that although the first symmetric counterexample, given by Choi and Lam in four variables, shows \(\Sigma _{4,4}^S\subsetneq {\mathcal {P}}_{4,4}^S\), it does not immediately imply that we have strict containment for all n. However, using our methods, one can produce a sequence of strictly positive symmetric quartics that lie on the boundary of \(\Sigma _{n,4}^{S}\) for all n as a witness for the strict inclusion.
Example 5.4
For \(n\ge 4\) consider the family of polynomials
where we set \(a=1\), \(b={13}/{10}\), \(c=1\), and \(d={5}/{4}\). Further consider the linear functional \(\ell \in H_{n,4}^{*}\) with
Then we have \(\ell (f^{(n)})=0\). In addition the corresponding matrices become
These matrices are all positive semidefinite for \(n\ge 4\) and therefore we have \(\ell \in (\Sigma _{n,4}^{S})^{*}\). This implies that \(f^{(n)}\in \partial \Sigma _{n,4}^{S}\).
Now we argue that for any \(n\in {\mathbb {N}}\) the forms \(f^{(n)}\) are strictly positive. By Corollary 3.3 it follows that \(f^{(n)}\) has a zero if and only if there exists \(k\in \{{1}/{n},\ldots ,{(n-1)}/{n}\}\) such that the bivariate form
has a real projective zero (x, y). Since \(f^{(n)}\) is a sum of squares and therefore nonnegative, we also know that \(h_k(x,y)\) is nonnegative for all \(k \in \{{1}/{n},\ldots ,{(n-1)}/{n}\}\). Therefore the real projective roots of \(h_k(x,y)\) must have even multiplicity. This implies that \(h_k(x,y)\) has a real root only if its discriminant \(\delta (h_k)\), viewed as a polynomial in the parameter k, has a root in the admissible range for k. We calculate
We see that \(\delta (h_k)\) is zero only for
Thus we see that for all natural numbers n there is no \(k\in \{{1}/{n},\ldots ,{(n-1)}/{n}\}\) such that \(h_k(x,y)\) has a real projective zero. Therefore we can conclude that for any \(n\in {\mathbb {N}}\) the form \(f^{(n)}\) will be strictly positive.
From the above example the following characterization, which was recently and independently given by Goel et al. [13], is an immediate consequence.
Theorem 5.5
The inclusion \(\Sigma _{n,2d}^S\subset {\mathcal {P}}_{n,2d}^S\) is strict except in the cases of symmetric bivariate forms, or symmetric quadratic forms, or symmetric ternary quartics.
Proof
The well-known Robinson form
is a nonnegative form which is not a sum of squares. Furthermore, for the case \(2d=4\) and \(n\ge 4\), Example 5.4 above gives for every n a positive polynomial \(f^{(n)}\) which lies on the boundary of \(\Sigma _{n,4}^S\) and therefore guarantees the existence of \(h_{n,4}\in {\mathcal {P}}_{n,4}^S\) which is positive definite but not a sum of squares. The result now follows by observing that for any positive definite form \(h\in H_{n,2d}\) that is not a sum of squares, the form \((X_1+\cdots +X_n)^2 h\in H_{n,2d+2}\) is also positive definite and not a sum of squares. Indeed, if \((X_1+\cdots +X_n)^2h=f_1^2+\cdots +f_m^2\), then \(X_1+\cdots +X_n\) divides each \(f_i\), which yields that h is a sum of squares. \(\square \)
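The divisibility step at the end of this proof can be spelled out as follows (a routine argument; the abbreviation \(\ell\) is ours):

```latex
% Write \ell := X_1 + \cdots + X_n. If \ell^2 h = f_1^2 + \cdots + f_m^2, then the
% right-hand side vanishes on the hyperplane \{\ell = 0\}, so every f_i vanishes
% there; since \ell is linear, hence irreducible, \ell divides each f_i. Therefore
h \;=\; \sum_{i=1}^{m} \left(\frac{f_i}{\ell}\right)^{2}
% is a sum of squares of forms, contradicting the choice of h.
```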
Asymptotic Behavior
In this section we study the relationship of sums of squares and nonnegative forms when the number of variables tends to infinity.
Full-Dimensionality
We now consider the power mean inequalities and their limits. In order to talk about limits of our sequences of cones we use the following notion of limit of a sequence of sets, which is due to Kuratowski [19], and we refer the reader to [21, 32] for details in the context of sequences of convex sets.
Definition 6.1
Let \(\{K_n\}_{n\in {\mathbb {N}}}\) be a sequence of subsets of \({\mathbb {R}}^k\). Then a set \(K\subset {\mathbb {R}}^k\) is called the limit of the sequence, denoted by \(K=\lim \limits _{n\rightarrow \infty } K_n\), if we have
where
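In the standard Kuratowski formulation [19], these conditions read:

```latex
K \;=\; \liminf_{n\to\infty} K_n \;=\; \limsup_{n\to\infty} K_n,
\qquad\text{where}\quad
\begin{aligned}
\liminf_{n\to\infty} K_n &= \bigl\{\, x \in \mathbb{R}^k : x = \lim_{n\to\infty} x_n
  \text{ for some sequence } x_n \in K_n \,\bigr\},\\
\limsup_{n\to\infty} K_n &= \bigl\{\, x \in \mathbb{R}^k : x \text{ is an accumulation
  point of some sequence } x_n \in K_n \,\bigr\}.
\end{aligned}
```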
Remark 6.2
Note that the limit defined above is a closed set.
It will be convenient for the proof of Theorem 2.8 to relate the power mean inequalities to the sequences formed by the Reynolds operator. Let \(\mu =(\mu _1,\dots ,\mu _r)\) be a partition of 2d. Associate to \(\mu \) the monomial \(X_1^{\mu _1}\cdots X_{r}^{\mu _r}\) and define a symmetric form \(m_\mu ^{(n)}\) by:
This is a monomial mean basis of \(H_{n,2d}^S\). We observe that with this choice of basis of \(H_{n,2d}^S\) the transition maps \(\varphi _{m,n}\) are given by the identity matrices. Since the stabilizer of the monomial \(X_1^{\mu _1}\cdots X_{r}^{\mu _r}\) is isomorphic to \({\mathcal {S}}_{s_1}\times \ldots \times {\mathcal {S}}_{s_t}\times {\mathcal {S}}_{n-r}\), it follows that
where \({\bar{m}}_{\mu }^{(n)}\) is the monomial symmetric polynomial associated to \(\mu \).
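Concretely, applying the Reynolds operator to the monomial and using the orbit–stabilizer count above, the monomial mean presumably takes the form

```latex
m_{\mu}^{(n)} \;=\; \frac{1}{n!} \sum_{\sigma \in \mathcal{S}_n}
  \sigma\bigl(X_1^{\mu_1}\cdots X_r^{\mu_r}\bigr)
\;=\; \frac{s_1! \cdots s_t!\,(n-r)!}{n!}\; \bar{m}_{\mu}^{(n)},
```

where \(s_1,\ldots,s_t\) denote the multiplicities of the distinct parts of \(\mu\).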
Proposition 6.3
Consider the sequences \(\Sigma _{n,2d}^{\varphi }\) and \({\mathcal {P}}^{\varphi }_{n,2d}\) embedded into \({\mathbb {R}}^{\pi (2d)}\) via the monomial mean basis. Then the resulting sequences of convex cones in \({\mathbb {R}}^{\pi (2d)}\) have limits, which we will denote by \({\mathfrak {S}}_{2d}^{\varphi }\) and \({\mathfrak {P}}_{2d}^{\varphi }\). Both of these limits are closed and full-dimensional.
Proof
Since we have \(\varphi _{n,n+1}(\Sigma _{n,2d}^{S})\subseteq \Sigma _{n+1,2d}^{S}\) and \(\varphi _{n,n+1}({\mathcal {P}}_{n,2d}^{S})\subseteq {\mathcal {P}}^{S}_{n+1,2d}\), the resulting sequences of cones are increasing. Thus by [32, Prop. 1] the limits exist and are given by
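For an increasing sequence of sets the Kuratowski limit is simply the closure of the union, so the limits in question are:

```latex
\mathfrak{S}_{2d}^{\varphi} \;=\; \overline{\bigcup_{n} \Sigma_{n,2d}^{\varphi}}
\qquad\text{and}\qquad
\mathfrak{P}_{2d}^{\varphi} \;=\; \overline{\bigcup_{n} \mathcal{P}_{n,2d}^{\varphi}}.
```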
Clearly, both cones are full-dimensional. \(\square \)
In order to establish the result for the power mean basis, we first have to study the relationship between these two bases:
Proposition 6.4
Let \(M_n\) be the transition matrix between the monomial mean and power mean bases of \(H_{n,2d}^S\). Then \(M_n\) converges entrywise to a full-rank matrix \(M^*\) as n grows to infinity.
Proof
The transition matrix between power sum symmetric polynomials and monomial symmetric polynomials is well understood [2]. Converting to our mean bases we have the following: let \(\nu :=(\nu _1,\ldots ,\nu _l)\vdash 2d\), \(\mu =(\mu _1,\dots ,\mu _r) \vdash 2d\), then
where \(\mathcal {BL}(\mu )^{\nu }\) is the number of \(\mu \)-brick permutations of shape \(\nu \) [2]. We observe that the unique highest order of growth in n for a coefficient of \(p_{\nu }^{(n)}\) occurs when the number of parts of \(\nu \) is maximized. The unique \(\nu \) with the largest number of parts and nonzero \(\mathcal {BL}(\mu )^{\nu }\) is \(\mu \). Thus we have \(\nu =\mu \), \(r=l\), and
Therefore we see that asymptotically
where the coefficients \(a_{\mu ,\nu }(n)\) tend to 0 as \(n \rightarrow \infty \). The proposition now follows. \(\square \)
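The flavor of this computation can be reproduced symbolically in a toy case. The sketch below works in degree \(2d=2\) (whereas the case of interest in the paper is \(2d=4\)): it expresses the power means in the monomial mean basis for finite n and exhibits the transition matrix tending to the identity.

```python
from sympy import Rational, expand, symbols

def transition_matrix(n):
    """Transition matrix from the monomial mean basis {m_(2), m_(1,1)} to the
    power mean basis {p_(2), p_(1,1)} of degree-2 symmetric forms in n variables."""
    xs = symbols(f'x0:{n}')
    m2 = sum(x**2 for x in xs) / n                                   # mean of the orbit of X_1^2
    m11 = sum(xs[i] * xs[j] for i in range(n)
              for j in range(i + 1, n)) / Rational(n * (n - 1), 2)   # mean of the orbit of X_1*X_2
    p2 = sum(x**2 for x in xs) / n                                   # power mean p_2
    p11 = (sum(xs) / n)**2                                           # product of power means p_1*p_1
    # p_(2) = m_(2)  and  p_(1,1) = (1/n) m_(2) + ((n-1)/n) m_(1,1); verify symbolically:
    assert expand(p2 - m2) == 0
    assert expand(p11 - (m2 / n + Rational(n - 1, n) * m11)) == 0
    return [[1, 0], [Rational(1, n), Rational(n - 1, n)]]

M4, M50 = transition_matrix(4), transition_matrix(50)
```

As n grows, the off-diagonal entry \(1/n\) tends to 0 and the diagonal entry \((n-1)/n\) tends to 1, so the matrices converge entrywise to the identity, as in Proposition 6.4.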
Now with these preparations the proof of Theorem 2.8 will be immediate after the following two lemmas.
Lemma 6.5
Let V be a finite-dimensional vector space. Let \(A_i\) be a sequence of subsets of V converging to A. Let \(M_i\) be a sequence of linear maps from V to itself converging to the identity. Let \(B_i=M_i(A_i)\). Then
so A is the limit of \(B_i\).
Proof
For the first inclusion, let \(a\in A\). Since A is the limit of \(A_i\), there exists N such that for all \(i\ge N\), a is contained in \(A_i\). Let \(b_i=M_i a\). Then \(b_i\in B_i\) for all \(i\ge N\) and moreover, since the linear maps \(M_i\) converge to the identity, we have that \(b_i\) converges to a. This in turn implies that \(a\in \liminf _{i \rightarrow \infty } B_i\). For the second inclusion, we remark that \(\bigl (\limsup _{i \rightarrow \infty } B_i\bigr )^c= \liminf _{i \rightarrow \infty } B_i^c\). Hence, one can argue in an analogous way by considering the complement of A. \(\square \)
From the above lemma we can easily obtain the following generalization, which shows that the conclusions also hold if the limit of the linear maps \(M_i\) is any full-rank map.
Lemma 6.6
Let V be a finite-dimensional vector space. Let \(A_i\) be a sequence of subsets of V converging to A. Let \(M_i\) be a sequence of linear maps from V to itself, converging to a full-rank linear map M. Let \(B_i=M_i(A_i)\). Then
so M(A) is the limit of \(B_i\).
Proof
We can apply Lemma 6.5 to the sequence \(C_i=M^{-1}M_i(A_i)\). Since \(B_i=M(C_i)\), and M is a full-rank linear map, the desired conclusions follow for the sequence \(B_i\) as well. \(\square \)
The existence of the limits of the sequences and their full-dimensionality can now be established by translating from Proposition 6.3.
Proof of Theorem 2.8
We only give the proof for \({\mathfrak {S}}_{2d}\) since the statement for \({\mathfrak {P}}_{2d}\) follows in an analogous manner. We first observe that the sequence \(\Sigma _{n,2d}^\rho \) is semi-nested; it follows that \(\liminf \Sigma _{n,2d}^\rho =\bigcap _{n\ge 2d} \Sigma _{n,2d}^\rho \). We now apply Lemma 6.5 to the sequence \(\Sigma _{n,2d}^\rho \), with \(A_n=\Sigma ^{\varphi }_{n,2d}\) and \(M_n\) being the transition maps between the monomial mean and power mean bases. From Proposition 6.4 we know that the maps \(M_n\) converge to the identity. Therefore, we see that
The theorem now follows, since the fulldimensionality is a direct consequence of Proposition 6.3. \(\square \)
Symmetric Mean Inequalities of Degree Four
In this last section we characterize quartic symmetric mean inequalities that are valid for all values of n. Recall from Section 2 that \({\mathfrak {P}}_4\) denotes the cone of all sequences \({\mathfrak {f}}=(f^{(4)},f^{(5)},\ldots )\) of degree 4 power means that are nonnegative for all n and \({\mathfrak {S}}_4\) the cone of such sequences that can be written as sums of squares.
In the case of quartic forms the elements of \({\mathfrak {P}}_{4}\) can be characterized by a family of univariate polynomials, as Theorem 3.4 specializes to the following:
Proposition 7.1
Let
be a linear combination of quartic symmetric power means. Then \({\mathfrak {f}}\in {\mathfrak {P}}_4\) if and only if for all \(\alpha \in [0,1]\) the bivariate form
is nonnegative.
Now we turn to the characterization of the elements on the boundary of \({\mathfrak {P}}_4\).
Lemma 7.2
Let \(0\ne {\mathfrak {f}}\in {\mathfrak {P}}_4\). Then \({\mathfrak {f}}\) is on the boundary \(\partial {\mathfrak {P}}_4\) if and only if there exists \(\alpha \in (0,1)\) such that the bivariate form \(\Phi ^{\alpha }_{{\mathfrak {f}}}(x,y)\) has a double real root.
Proof
Let \({\mathfrak {f}}\in \partial {\mathfrak {P}}_4\). Suppose that for all \(\alpha \in (0,1)\) the bivariate form \(\Phi _{{\mathfrak {f}}}^\alpha \) has no double real roots. From Proposition 7.1 we know that \(\Phi _{{\mathfrak {f}}}^\alpha \) is a nonnegative form for all \(\alpha \in [0,1]\) and thus \(\Phi _{{\mathfrak {f}}}^\alpha \) is strictly positive for all \(\alpha \in (0,1)\). Thus for a sufficiently small perturbation \(\tilde{{\mathfrak {f}}}\) of the coefficients \(c_{\lambda }\) of \({\mathfrak {f}}\) the form \(\Phi _{\tilde{{\mathfrak {f}}}}^\alpha \) will remain positive for all \(\alpha \in (0,1)\). Now we deal with the cases \(\alpha =0,1\).
We observe that for all \({\mathfrak {g}} \in H^\rho _{\infty ,4}\),
By the above we must have \(\Phi ^{1/2}_{\mathfrak {g}}(1,1)>0\) and the same will be true for a sufficiently small perturbation \(\tilde{{\mathfrak {f}}}\) of \({\mathfrak {f}}\). But then it follows by Proposition 7.1 that a neighborhood of \({\mathfrak {f}}\) is in \({\mathfrak {P}}_4\), which contradicts the assumption that \({\mathfrak {f}}\in \partial {\mathfrak {P}}_4\). Therefore there exists \(\alpha \in (0,1)\) such that \(\Phi ^\alpha _{\mathfrak {{f}}}(x,y)\) has a double real root.
Now suppose \({\mathfrak {f}} \in {\mathfrak {P}}_4\) and \(\Phi _{{\mathfrak {f}}}^\alpha (x,y)\) has a double real root for some \(\alpha \in (0,1)\). Let \({\mathfrak {f}}_{\epsilon }={\mathfrak {f}}-\epsilon \,{\mathfrak {p}}_{(2^2)}\). It follows that for all \(\epsilon >0\) we have \({\mathfrak {f}}_\epsilon \notin {\mathfrak {P}}_4\), since \(\Phi _{{\mathfrak {f}}_\epsilon }^{\alpha }\) is strictly negative at the double zero of \(\Phi _{{\mathfrak {f}}}^{\alpha }\). Thus \({\mathfrak {f}}\) is on the boundary of \({\mathfrak {P}}_4\). \(\square \)
We now deduce a corollary from Theorem 5.1, completely describing the elements of \({\mathfrak {S}}_4\).
Corollary 7.3
We have \({\mathfrak {f}}\in {\mathfrak {S}}_4\) if and only if
where the matrices \((\alpha _{ij})_{2\times 2}\) and \((\beta _{ij})_{2\times 2}\) are positive semidefinite.
Proof
We observe from Theorem 5.1 that the coefficients of the squares of symmetric polynomials and of \((n-1,1)\) semi-invariants do not depend on n. Thus the cone generated by these sums of squares is the same for any n, and it corresponds precisely to the cone given in the statement of the corollary. Now observe that the limit of the square of the \((n-2,2)\) component is equal to \({\mathfrak {p}}_{(1^4)}/2-{\mathfrak {p}}_{(21^2)}+{\mathfrak {p}}_{(2^2)}/2\), which is a sum of symmetric squares. Thus the squares from the \((n-2,2)\) component do not contribute anything in the limit. \(\square \)
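Writing \({\mathfrak {p}}_{\lambda}\) for the product of power means indexed by the parts of \(\lambda\), the limit computed in the proof above is in fact a perfect square:

```latex
\frac{1}{2}\,\mathfrak{p}_{(1^4)} \;-\; \mathfrak{p}_{(2\,1^2)} \;+\; \frac{1}{2}\,\mathfrak{p}_{(2^2)}
\;=\; \frac{1}{2}\bigl(\mathfrak{p}_{(1^2)} - \mathfrak{p}_{(2)}\bigr)^{2},
```

which exhibits it directly as a symmetric square, as one checks by expanding the right-hand side.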
In order to algebraically characterize the elements on the boundary, recall that the discriminant \({\text {disc}}\,f\) of a bivariate form f is a homogeneous polynomial in the coefficients of f which vanishes exactly on the set of forms with multiple projective roots. However, note that \({\text {disc}}\,f=0\) alone does not guarantee that f has a double real root, since the double root may be complex.
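This caveat can be seen on a one-line example: \((x^2+1)^2\) has vanishing discriminant but only complex double roots. A quick check in sympy:

```python
from sympy import discriminant, roots, symbols

x = symbols('x')

# A nonnegative quartic with discriminant 0 whose double roots x = +-I are not real,
# illustrating that disc f = 0 alone does not certify a real double root.
f = (x**2 + 1)**2
disc_value = discriminant(f, x)
real_double_roots = [r for r, m in roots(f, x).items() if m >= 2 and r.is_real]
```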
Proposition 7.4
Let \({\mathfrak {f}}\in H^\rho _{\infty ,4}\) be of the form
such that the coefficients meet the conditions in (5.4). Then for \(\alpha =1/2\) the associated form \(\Phi _{{\mathfrak {f}}}^{\alpha }(X,Y)\) has a double root at \((x,y)=(1,1)\).
Lemma 7.5
Let \({\mathfrak {f}}\in H^\rho _{\infty ,4}\) be of the form
such that the coefficients a, b, c, d meet the conditions in (5.3). Consider the associated form \(\Phi _{{\mathfrak {f}}}^{\alpha }\). Then there is a value \(\alpha \), \(0<\alpha <1\), such that \(\Phi _{{\mathfrak {f}}}^{\alpha }\) has a real double root.
Proof
We first show that there is a (possibly complex) double root by examining the discriminant. To this end, we find that this discriminant \(\delta _{{\mathfrak {f}}}(\alpha )\) factors as
where \(\delta _1\) and \(\delta _2\) are quadratic polynomials in \(\alpha \). We examine these factors \(\delta _1\) and \(\delta _2\) now assuming the conditions on a, b, c, d imposed by (5.3).
One easily checks that \(\delta _1(\alpha )=\delta _1(1-\alpha )\) and \(\delta _1(0)=\delta _1(1)=-16{a}^{2}(c+d)^2<0\). Further,
This clearly implies that the quadratic polynomial \(\delta _1\) is strictly negative on (0, 1). Moreover, the conditions in (5.3) yield \(\delta _2(0)=\delta _2(1)=a^2(c+d)>0\) and \(\delta _2({1}/{2})=c(2a+b)^{2}/4<0\) since c is supposed to be negative.
It now follows that \(\delta _2(\alpha )\) has two real roots \(\alpha _{1},1-\alpha _{1}\in (0,1)\), and hence the polynomial \(\Phi _{{\mathfrak {f}}}(\alpha _{1},1-\alpha _{1},x,1)\) has a double root. However, it remains to verify that this double root is indeed real. In order to establish this we examine the polynomial \(\delta _2(\alpha ,a,b,c,d)\) more carefully. We have
and for \(\alpha \ne {1}/{2}\), one can solve for d to find
This yields that \(\Phi _{{\mathfrak {f}}}(\alpha ^{*},1-\alpha ^{*},x,1)\) contains the factor \((ax+a+b\alpha x-b\alpha +b)^{2}\) and hence in this case \(\Phi _{{\mathfrak {f}}}(\alpha ^{*},1-\alpha ^{*},x,1)\) has a real double root.
In the case \(\alpha ={1}/{2}\) it follows from the observations made above that at a root of \(\delta _2(\alpha ,a,b,c,d)\) all second partial derivatives also have to vanish. By explicit calculation one finds that this can happen only if \(a={1}/{2}\), in which case the polynomial \(\delta _2\) specializes to \(c(b-1)^{2}/4\). Since \(c<0\), the discriminant can vanish only for \(b=1\). In this situation one gets
and it follows that \(X_{1/2}:=\pm \frac{d+2\sqrt{c(d-c)}}{2c+d-1}\) are the two double roots in this case. The conditions imposed on c, d by Theorem 5.3 ensure that \(c(d-c)\ge 0\), and hence these roots will also be real. Therefore we have shown that in all cases the roots are indeed real. \(\square \)
We are now in the position to show that \({\mathfrak {P}}_4={\mathfrak {S}}_4\).
Proof of Theorem 2.9
Since \({\mathfrak {S}}_4\subset {\mathfrak {P}}_4\) and both sets are closed convex cones, it suffices to show that every \({\mathfrak {f}}\) on the boundary of \({\mathfrak {S}}_4\) also lies in the boundary of \({\mathfrak {P}}_4\). It follows from Theorem 5.3 that a sequence \({\mathfrak {f}}:=(f^{(4)},f^{(5)},\ldots )\) in the boundary of \({\mathfrak {S}}_4\) that is not in the boundary of \({\mathfrak {P}}_4\) would have to be of the form considered in Lemma 7.5. Combining Lemmas 7.5 and 7.2, we find that \({\mathfrak {f}}\in \partial {\mathfrak {S}}_{4}\) implies \({\mathfrak {f}}\in \partial {\mathfrak {P}}_4\), and we conclude that \({\mathfrak {S}}_4={\mathfrak {P}}_4\). \(\square \)
Conclusion and Open Questions
Besides Conjecture 1 there is another important question left open in our work. Corollary 7.3 gave a description of the asymptotic symmetric sums of squares cone in terms of the squares involved. In this description of the limit not all semi-invariant polynomials were necessary. It is natural to investigate the situation also in arbitrary degree:
Question 1
Let \({\mathfrak {f}}\in {\mathfrak {S}}_{2d}\). Which semi-invariant polynomials are necessary for a description of \({\mathfrak {f}}\) as a sum of squares?
The general setup of our work focused on the case of a fixed degree. Examples like the difference of the geometric and the arithmetic mean show, however, that it would also be very interesting to understand the situation where the degree is not fixed.
Question 2
What can be said about the quantitative relationship between the cones \(\Sigma ^S_{n,2d}\) and \({\mathcal {P}}^S_{n,2d}\) in asymptotic regimes other than fixed degree 2d?
References
Ariki, S., Terasoma, T., Yamada, H.F.: Higher Specht polynomials. Hiroshima Math. J. 27(1), 177–188 (1997)
Beck, D.A., Remmel, J.B., Whitehead, T.: The combinatorics of transition matrices between the bases of the symmetric functions and the \(B_n\) analogues. Discrete Math. 153(1–3), 3–27 (1996)
Blekherman, G.: There are significantly more nonnegative polynomials than sums of squares. Israel J. Math. 153, 355–380 (2006)
Blekherman, G.: Nonnegative polynomials and sums of squares. J. Am. Math. Soc. 25(3), 617–635 (2012)
Blekherman, G., Parrilo, P.A., Thomas, R.R. (eds.): Semidefinite Optimization and Convex Algebraic Geometry. MOS-SIAM Series on Optimization, vol. 13. Society for Industrial and Applied Mathematics & Mathematical Optimization Society, Philadelphia (2013)
Blekherman, G., Sinn, R.: Extreme rays of Hankel spectrahedra for ternary forms. J. Symb. Comput. 79(1), 23–42 (2017)
Choi, M.D., Lam, T.Y.: Extremal positive semidefinite forms. Math. Ann. 231(1), 1–18 (1977)
Choi, M.D., Lam, T.Y., Reznick, B.: Even symmetric sextics. Math. Z. 195(4), 559–580 (1987)
Cuttler, A., Greene, C., Skandera, M.: Inequalities for symmetric means. Eur. J. Comb. 32(6), 745–761 (2011)
Ergür, A.A.: Multihomogeneous nonnegative polynomials and sums of squares. Discrete Comput. Geom. 60(2), 318–344 (2018)
Frenkel, P.E., Horváth, P.: Minkowski’s inequality and sums of squares. Cent. Eur. J. Math. 12(3), 510–516 (2014)
Gatermann, K., Parrilo, P.A.: Symmetry groups, semidefinite programs, and sums of squares. J. Pure Appl. Algebra 192(1–3), 95–128 (2004)
Goel, C., Kuhlmann, S., Reznick, B.: On the Choi–Lam analogue of Hilbert’s 1888 theorem for symmetric forms. Linear Algebra Appl. 496, 114–120 (2016)
Goel, C., Kuhlmann, S., Reznick, B.: The analogue of Hilbert’s 1888 theorem for even symmetric forms. J. Pure Appl. Algebra 221(6), 1438–1448 (2017)
Harris, W.R.: Real even symmetric ternary forms. J. Algebra 222(1), 204–245 (1999)
Hilbert, D.: Ueber die Darstellung definiter Formen als Summe von Formenquadraten. Math. Ann. 32(3), 342–350 (1888)
Hurwitz, A.: Ueber den Vergleich des arithmetischen und des geometrischen Mittels. J. Reine Angew. Math. 108, 266–268 (1891)
James, G., Kerber, A.: The Representation Theory of the Symmetric Group. Encyclopedia of Mathematics and its Applications, vol. 16. Addison-Wesley, Reading (1981)
Kuratowski, C.: Topologie. Vol. I. Monografie Matematyczne, vol. 20. Państwowe Wydawnictwo Naukowe, Warsaw (1958)
Macdonald, I.G.: Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs. Oxford University Press, New York (1995)
Mosco, U.: Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 3(4), 510–585 (1969)
Motzkin, T.S.: The arithmetic-geometric inequality. In: Inequalities (Proc. Sympos. Wright–Patterson Air Force Base, Ohio 1965), pp. 205–224. Academic Press, New York (1967)
Reznick, B.: Some inequalities for products of power sums. Pac. J. Math. 104(2), 443–463 (1983)
Reznick, B.: Some concrete aspects of Hilbert’s 17th Problem. In: Real Algebraic Geometry and Ordered Structures (Baton Rouge 1996). Contemp. Math., vol. 253, pp. 251–272. American Mathematical Society, Providence (2000)
Riener, C.: Symmetries in Semidefinite and Polynomial Optimization. PhD thesis, Johann Wolfgang Goethe-Universität, Frankfurt am Main (2011)
Riener, C.: On the degree and halfdegree principle for symmetric polynomials. J. Pure Appl. Algebra 216(4), 850–856 (2012)
Riener, C.: Symmetric semialgebraic sets and nonnegativity of symmetric polynomials. J. Pure Appl. Algebra 220(8), 2809–2815 (2016)
Riener, C., Theobald, T., Andrén, L.J., Lasserre, J.B.: Exploiting symmetries in SDPrelaxations for polynomial optimization. Math. Oper. Res. 38(1), 122–141 (2013)
Robinson, R.M.: Some definite polynomials which are not sums of squares of real polynomials. In: Selected Questions of Algebra and Logic, pp. 264–282. Izdat. ”Nauka” Sibirsk. Otdel., Novosibirsk (1973). (in Russian)
Rotman, J.J.: Advanced Modern Algebra. Graduate Studies in Mathematics, vol. 114. American Mathematical Society, Providence (2010)
Sagan, B.E.: The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions. Graduate Texts in Mathematics, vol. 203. Springer, New York (2001)
Salinetti, G., Wets, R.J.B.: On the convergence of sequences of convex sets in finite dimensions. SIAM Rev. 21(1), 18–33 (1979)
Scheiderer, C.: Positivity and sums of squares: a guide to recent results. In: Emerging Applications of Algebraic Geometry. IMA Vol. Math. Appl., vol. 149, pp. 271–324. Springer, New York (2009)
Serre, J.P.: Linear Representations of Finite Groups. Graduate Texts in Mathematics, vol. 42. Springer, New York (1977)
Specht, W.: Zur Darstellungstheorie der symmetrischen Gruppe. Math. Z. 42(1), 774–779 (1937)
Stanley, R.P.: Invariants of finite groups and their applications to combinatorics. Bull. Am. Math. Soc. 1(3), 475–511 (1979)
Terpstra, F.J.: Die Darstellung biquadratischer Formen als Summen von Quadraten mit Anwendung auf die Variationsrechnung. Math. Ann. 116(1), 166–180 (1939)
Timofte, V.: On the positivity of symmetric polynomial functions. Part I: general results. J. Math. Anal. Appl. 284(1), 174–190 (2003)
Acknowledgements
Open Access funding provided by UiT – The Arctic University of Norway. This research was initiated during the IPAM program on Modern Trends in Optimization and its Application and the authors would like to thank the Institute for Pure and Applied Mathematics for the hospitality during the program and the organizers of the program for the invitation to participate in it. We thank an anonymous referee for helpful comments which greatly improved this paper and Roland Hildebrand for bringing the article by Terpstra to our attention. The second author acknowledges support of the Tromsø Research Foundation and Grant Agreement 17matteCRMarie.
Editor in Charge: Kenneth Clarkson
Blekherman, G., Riener, C.: Symmetric Non-Negative Forms and Sums of Squares. Discrete Comput. Geom. 65, 764–799 (2021). https://doi.org/10.1007/s00454-020-00208-w
Keywords
 Non-negative polynomials
 Sums of squares
 Symmetric polynomials
 Symmetric inequalities
 Symmetric group
Mathematics Subject Classification
 14P99