Convergence analysis of a Lasserre hierarchy of upper bounds for polynomial minimization on the sphere

We study the convergence rate of a hierarchy of upper bounds for polynomial minimization problems, proposed by Lasserre [SIAM J. Optim. 21(3) (2011), pp. 864-885], for the special case when the feasible set is the unit (hyper)sphere. The upper bound at level r of the hierarchy is defined as the minimal expected value of the polynomial over all probability distributions on the sphere, when the probability density function is a sum-of-squares polynomial of degree at most 2r with respect to the surface measure. We show that the exact rate of convergence is Theta(1/r^2), and explore the implications for the related rate of convergence for the generalized problem of moments on the sphere.


Introduction
We consider the problem of minimizing an n-variate polynomial f : R^n → R over a compact set K ⊆ R^n, i.e., the problem of computing the parameter

f_{min,K} := min_{x ∈ K} f(x).  (1)

In this paper we focus on the case when K is the unit sphere K = S^{n-1} = {x ∈ R^n : ‖x‖ = 1}, in which case we omit the subscript K and simply write f_min = min_{x ∈ S^{n-1}} f(x).
Problem (1) is in general computationally hard, already for simple sets K like the hypercube, the standard simplex, and the unit ball or sphere. For instance, the problem of finding the maximum cardinality α(G) of a stable set in a graph G = ([n], E) can be expressed as optimizing a quadratic polynomial over the standard simplex [19]:

1/α(G) = min { x^T (I + A_G) x : x ≥ 0, Σ_{i ∈ [n]} x_i = 1 },

where A_G is the adjacency matrix of G; substituting x_i → x_i^2 turns this into the minimization of a quartic form over the unit sphere. Other applications of polynomial optimization over the unit sphere include deciding whether homogeneous polynomials are positive semidefinite. Indeed, a homogeneous polynomial f is positive semidefinite precisely when min_{x ∈ S^{n-1}} f(x) ≥ 0, and positive definite when this inequality is strict; see, e.g., [23]. As a special case, one may decide whether a symmetric matrix A = (a_{ij}) ∈ R^{n×n} is copositive, by deciding whether the associated form f(x) = Σ_{i,j ∈ [n]} a_{ij} x_i^2 x_j^2 is positive semidefinite; see, e.g., [21].
Another special case is deciding convexity of a homogeneous polynomial f, by considering the parameter min_{(x,y) ∈ S^{2n-1}} y^T ∇^2 f(x) y, which is nonnegative if and only if f is convex. This decision problem is known to be NP-hard, already for forms of degree 4 [1].
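The Hessian criterion above is easy to probe numerically. The sketch below (function names are ours) evaluates the quadratic form y^T ∇^2 f(x) y, with the Hessians written out in closed form, for two quartic forms: ‖x‖^4 (convex) and x_1^4 − x_2^4 (not convex). Random sampling can of course only certify non-convexity; for ‖x‖^4 the sampled form happens to be pointwise nonnegative, which is consistent with convexity.

```python
import numpy as np

rng = np.random.default_rng(0)

def hessian_quadform_norm4(x, y):
    # f(x) = ||x||^4,  Hessian = 4||x||^2 I + 8 x x^T
    return 4*np.dot(x, x)*np.dot(y, y) + 8*np.dot(x, y)**2

def hessian_quadform_diff4(x, y):
    # f(x) = x1^4 - x2^4,  Hessian = diag(12 x1^2, -12 x2^2)
    return 12*x[0]**2*y[0]**2 - 12*x[1]**2*y[1]**2

def min_on_sphere(quadform, n=2, samples=20000):
    # sample (x, y) uniformly on S^{2n-1} and track the smallest value
    best = np.inf
    for _ in range(samples):
        z = rng.normal(size=2*n)
        z /= np.linalg.norm(z)
        best = min(best, quadform(z[:n], z[n:]))
    return best

assert min_on_sphere(hessian_quadform_norm4) >= 0   # ||x||^4 is convex
assert min_on_sphere(hessian_quadform_diff4) < 0    # x1^4 - x2^4 is not
```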
As shown by Lasserre [16], the parameter (1) can be reformulated via the infinite-dimensional program

f_{min,K} = inf_{h ∈ Σ[x]} ∫_K f(x) h(x) dµ(x)  s.t.  ∫_K h(x) dµ(x) = 1,  (2)

where Σ[x] denotes the set of sums of squares of polynomials, and µ is a given Borel measure supported on K. Given an integer r ∈ N, by bounding the degree of the polynomial h ∈ Σ[x] by 2r, Lasserre [16] defined the parameter

f^{(r)}_K := min_{h ∈ Σ[x]_r} ∫_K f(x) h(x) dµ(x)  s.t.  ∫_K h(x) dµ(x) = 1,  (3)

where Σ[x]_r consists of the polynomials in Σ[x] with degree at most 2r. Here we use the 'overline' symbol to indicate that these parameters provide upper bounds for f_{min,K}, in contrast to the parameters f_{(r)} in (9) below, which provide lower bounds for it.
Since sums of squares of polynomials can be formulated using semidefinite programming, the parameter (3) can be expressed via a semidefinite program. In fact, since this program has only one affine constraint, it even admits an eigenvalue reformulation [16], which will be given in (12) in Section 2.2 below. Of course, in order to compute the parameter (3) in practice, one needs to know (explicitly, or via some computational procedure) the moments of the reference measure µ on K. These moments are known for simple sets like the simplex, the box, the sphere, the ball, and some simple transforms of them (they can be found, e.g., in Table 1 of [10]).
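For K = S^{n-1} the required moments are available in closed form: odd moments vanish, and even moments are ratios of double factorials. The sketch below (helper names ours) computes f^{(r)} for f(x) = x_1 on S^2 as the smallest generalized eigenvalue of the pair (localized moment matrix, moment matrix); it uses the fact that on S^2 the monomials with x_3-exponent at most 1 form a basis of the polynomials on the sphere (since x_3^2 = 1 − x_1^2 − x_2^2 there). At r = 1 the value is −1/√3, the smallest root of the degree-2 Legendre polynomial, consistent with the pushforward of σ under x_1 being the uniform measure on [−1, 1].

```python
import math
import numpy as np

n = 3  # work on S^2, minimize f(x) = x_1

def moment(e):
    # E[x^e] for x uniform on S^{n-1}: 0 if some e_i is odd, else
    # prod_i (e_i - 1)!!  /  ( n (n+2) ... (n + 2(|e|/2) - 2) )
    if any(ei % 2 for ei in e):
        return 0.0
    num = 1.0
    for ei in e:
        num *= math.prod(range(1, ei, 2))   # (e_i - 1)!!
    den = math.prod(n + 2*j for j in range(sum(e)//2))
    return num / den

def upper_bound(r):
    # monomial basis of the polynomials of degree <= r on the sphere
    # (exponent of x_3 at most 1, since x_3^2 = 1 - x_1^2 - x_2^2 on S^2)
    basis = [(a, b, c) for a in range(r + 1) for b in range(r + 1)
             for c in (0, 1) if a + b + c <= r]
    N = len(basis)
    M = np.empty((N, N))   # moment matrix      <b_i, b_j>
    A = np.empty((N, N))   # localized matrix   <x_1 b_i, b_j>
    for i, u in enumerate(basis):
        for j, v in enumerate(basis):
            s = tuple(ui + vi for ui, vi in zip(u, v))
            M[i, j] = moment(s)
            A[i, j] = moment((s[0] + 1, s[1], s[2]))
    # upper bound = smallest generalized eigenvalue of A v = lambda M v
    return min(np.linalg.eigvals(np.linalg.solve(M, A)).real)

for r in (1, 2, 3):
    print(r, upper_bound(r))
```

The bounds decrease with r toward f_min = −1, at the Θ(1/r^2) rate analyzed in this paper.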
As a direct consequence of the formulation (2), the bounds f^{(r)}_K converge asymptotically to the global minimum f_{min,K} as r → ∞. How fast the bounds converge in terms of the degree r has been investigated in [12,7,9], which show, respectively, a convergence rate in O(1/√r) for general compact K (satisfying a minor geometric condition), a convergence rate in O(1/r) when K is a convex body, and a convergence rate in O(1/r^2) when K is the box [−1, 1]^n. In these works the reference measure µ is the Lebesgue measure, except for the box [−1, 1]^n, where more general measures are considered (see Theorem 3 below for details).
In this paper we are interested in analyzing the worst-case convergence of the bounds (3) in the case of the unit sphere K = S^{n-1}, when selecting as reference measure the surface (Haar) measure dσ(x) on S^{n-1}. We let σ_{n-1} denote the surface measure of S^{n-1}, so that dσ(x)/σ_{n-1} is a probability measure on S^{n-1}, with

σ_{n-1} = 2π^{n/2} / Γ(n/2).  (4)

(See, e.g., [6, relation (2.2.3)].) To simplify notation we throughout omit the subscript K = S^{n-1} in the parameters (1) and (3), which we simply denote as f_min and f^{(r)}.

Example 1. Consider the minimization of the Motzkin form f(x) = x_1^4 x_2^2 + x_1^2 x_2^4 − 3 x_1^2 x_2^2 x_3^2 + x_3^6 on S^2. This form has 12 minimizers on the sphere, namely (±1, ±1, ±1)/√3 as well as (±1, 0, 0) and (0, ±1, 0), and one has f_min = 0.
In Table 1 we give the bounds f^{(r)} for the Motzkin form for r ≤ 9. In Figure 1 we show a contour plot of the form on the sphere (top left), as well as contour plots of the optimal density function for r = 3 (top right), r = 6 (bottom left), and r = 9 (bottom right). In the figure, the red end of the spectrum denotes higher function values. Some local maximizers of the Motzkin form are visible, corresponding to |x_3| = 1 (at the poles) and x_3 = 0 (on the equator).
When r = 3 and r = 6, the modes of the optimal density are at the global minimizers (±1, 0, 0) and (0, ±1, 0) (one may see the contours of two of these modes in one hemisphere). On the other hand, when r = 9, the mass of the distribution is concentrated at the 8 global minimizers (±1, ±1, ±1)/√3 (one may see 4 of these in one hemisphere), and there are no modes at the global minimizers (±1, 0, 0) and (0, ±1, 0).
It is also illustrative to make the same plots in spherical coordinates: in Figure 2 we plot the Motzkin form in spherical coordinates (top left), as well as the optimal density function for r = 3 (top right), r = 6 (bottom left), and r = 9 (bottom right). For example, when r = 9 one can see the 8 modes (peaks) of the density corresponding to the 8 global minimizers (±1, ±1, ±1)/√3. (Note that the peaks at φ = 0 and φ = 2π correspond to the same mode of the density, due to periodicity.) Likewise, when r = 3 and r = 6 one may see 4 modes corresponding to (±1, 0, 0) and (0, ±1, 0).
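The claims in Example 1 are easy to check numerically. The sketch below writes out the Motzkin form x_1^4 x_2^2 + x_1^2 x_2^4 − 3 x_1^2 x_2^2 x_3^2 + x_3^6, verifies that it vanishes at the 12 listed minimizers, and samples random points of S^2 to confirm nonnegativity there (so f_min = 0 on the sphere).

```python
import math
import random

def motzkin(x, y, z):
    # the Motzkin form: nonnegative on R^3 but not a sum of squares
    return x**4*y**2 + x**2*y**4 - 3*x**2*y**2*z**2 + z**6

# value 0 at the 12 claimed minimizers on the sphere
s = 1/math.sqrt(3)
minimizers = [(sx*s, sy*s, sz*s) for sx in (1, -1)
              for sy in (1, -1) for sz in (1, -1)]
minimizers += [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
for p in minimizers:
    assert abs(motzkin(*p)) < 1e-12

# nonnegativity at random points of S^2
random.seed(0)
for _ in range(10000):
    v = [random.gauss(0, 1) for _ in range(3)]
    nrm = math.sqrt(sum(t*t for t in v))
    assert motzkin(*(t/nrm for t in v)) >= -1e-12
```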
The convergence rate of the bounds f^{(r)} was investigated by Doherty and Wehner [4], who showed

f^{(r)} − f_min = O(1/r)  (6)

when f is a homogeneous polynomial. As we briefly recap in Section 2.1, their result follows in fact as a byproduct of their analysis of another Lasserre hierarchy of bounds for f_min, namely the lower bounds (9) below.
Our main contribution in this paper is to show that the convergence rate of the bounds f^{(r)} is O(1/r^2) for any polynomial f and, moreover, that this analysis is tight for any (nonzero) linear polynomial f. This is summarized in the following theorem.

Theorem 1. (i) For any polynomial f we have f^{(r)} − f_min = O(1/r^2). (ii) For any (nonzero) linear polynomial f we have f^{(r)} − f_min = Ω(1/r^2).

Let us say a few words about the proof technique. For the first part (i), our analysis relies on two basic steps: first, we observe that it suffices to consider the case when f is linear (which follows using Taylor's theorem), and then we show how to reduce to the case of minimizing a linear univariate polynomial over the interval [−1, 1], where we can rely on the analysis completed in [9]. For the second part (ii), by exploiting a connection recently mentioned in [18] between the bounds (3) and cubature rules, we can rely on known results for cubature rules on the unit sphere to show tightness of the bounds.
Organization of the paper. In Section 2 we recall some previously known results that are most relevant to this paper. First, we give in Section 2.1 a brief recap of the approach of Doherty and Wehner [4] for analyzing bounds for polynomial optimization over the unit sphere. After that, we recall our earlier results on the quality of the bounds (3) in the case of the interval K = [−1, 1]. Section 3 contains our main results on the convergence analysis of the bounds (3) for the unit sphere: after showing in Section 3.1 that the convergence rate is in O(1/r^2), we prove in Section 3.2 that the analysis is tight for nonzero linear polynomials.

2 Preliminaries

2.1 The approach of Doherty & Wehner for the sphere
Here we briefly sketch the approach followed by Doherty and Wehner [4] for showing the convergence rate O(1/r) mentioned above in (6). Their approach applies to the case when f is a homogeneous polynomial, which enables using the tensor analysis framework. A first observation made in [4] is that we may restrict to the case when f has even degree: if f is homogeneous with odd degree d, then minimizing f over S^{n-1} amounts, up to a multiplicative constant depending only on d, to minimizing the even-degree form (x, x_{n+1}) ↦ x_{n+1} f(x) over the sphere S^n. So we now assume that f is homogeneous with even degree d = 2a.
The approach in [4] in fact also permits to analyze the following hierarchy of lower bounds on f_min:

f_{(r)} := max { λ ∈ R : f(x) − λ = σ(x) + u(x)(1 − ‖x‖^2) for some σ ∈ Σ[x]_r and u ∈ R[x] },  (9)

which are the usual sums-of-squares bounds for polynomial optimization (as introduced in [14,22]). Here and throughout, ‖x‖ denotes the Euclidean norm for real vectors. One can verify that (9) can be reformulated as

f_{(r)} = max { λ ∈ R : ‖x‖^{2(r−a)} (f(x) − λ‖x‖^{2a}) ∈ Σ[x] }  (10)

(see [11]). For any integer r ∈ N we have f_{(r)} ≤ f_min ≤ f^{(r)}. The following error estimate is shown on the range f_max − f_min.

Theorem 2 ([4]). Assume n ≥ 3 and f is a homogeneous polynomial of degree 2a. There exists a constant C_{n,a} (depending only on n and a) such that, for any integer r ≥ a(2a^2 + n − 2) − n/2, we have

f^{(r)} − f_{(r)} ≤ (C_{n,a}/r)(f_max − f_min),

where f_max is the maximum value of f taken over S^{n-1}.
The starting point of the approach in [4] is reformulating the problem in terms of tensors. For this we need the following notion of 'maximally symmetric matrix'. Given a real symmetric matrix M = (M_{i,j}) indexed by sequences i ∈ [n]^a, M is called maximally symmetric if it is invariant under the action of the permutation group Sym(2a) after viewing M as a 2a-tensor acting on R^n. This notion is the analogue of the 'moment matrix' property, when expressed in the tensor setting. To see this, for a sequence i = (i_1, …, i_a) ∈ [n]^a, define α(i) = (α_1, …, α_n) ∈ N^n by letting α_ℓ denote the number of occurrences of ℓ within the multiset {i_1, …, i_a} for each ℓ ∈ [n], so that a = |α| = Σ_{ℓ=1}^n α_ℓ. Then the matrix M is maximally symmetric if and only if each entry M_{i,j} depends only on the n-tuple α(i) + α(j). Following [4], we let MSym((R^n)^{⊗a}) denote the set of maximally symmetric matrices acting on (R^n)^{⊗a}. It is not difficult to see that any degree 2a homogeneous polynomial f can be represented in a unique way as

f(x) = (x^{⊗a})^T Z_f x^{⊗a},

where the matrix Z_f ∈ MSym((R^n)^{⊗a}) is maximally symmetric.
Given an integer r ≥ a, define the polynomial f_r(x) := f(x)‖x‖^{2r−2a}, which is homogeneous of degree 2r. The parameter (10) can now be reformulated as a semidefinite program (11) over the cone of positive semidefinite, maximally symmetric matrices M ∈ MSym((R^n)^{⊗r}). The approach in [4] can be sketched as follows. Let M be an optimal solution to the program (11) (which exists since the feasible region is a compact set). Then the polynomial Q_M(x) := (x^{⊗r})^T M x^{⊗r} is a sum of squares, since M ⪰ 0.
After scaling, we obtain the polynomial

h(x) := Q_M(x) / ∫_{S^{n-1}} Q_M(x) dσ(x),

which defines a probability density function on S^{n-1}, i.e., ∫_{S^{n-1}} h(x) dσ(x) = 1. In this way h provides a feasible solution for the program defining the upper bound f^{(r)}. This implies the chain of inequalities

f_{(r)} ≤ f_min ≤ f^{(r)} ≤ ∫_{S^{n-1}} f(x) h(x) dσ(x).

The main contribution in [4] is their analysis bounding the range between the two extreme values in the above chain, which yields Theorem 2; it relies, in particular, on Fourier analysis on the unit sphere.
Using different techniques, we will show below a convergence rate in O(1/r^2) for the upper bounds f^{(r)}, which is stronger than the O(1/r) rate of Theorem 2 and applies to any polynomial (not necessarily homogeneous). On the other hand, while the constant in Theorem 2 depends only on the degree of f and the dimension n, the constant in our result also depends on other characteristics of f (its first and second order derivatives). A key ingredient in our analysis is a reduction to the univariate case, namely to the optimization of a linear polynomial over the interval [−1, 1]. We next recall the relevant known results needed for our treatment.

2.2 Convergence analysis for the interval [−1, 1]
We start by recalling the following eigenvalue reformulation for the bound (3), which holds for general compact K and plays a key role in the analysis for the case K = [−1, 1]. For this, consider the inner product

⟨f, g⟩_µ := ∫_K f(x) g(x) dµ(x)

on the space of polynomials on K, and let {b_α(x) : α ∈ N^n} denote a basis of this polynomial space that is orthonormal with respect to this inner product; that is, ∫_K b_α(x) b_β(x) dµ(x) = δ_{α,β}. Then the bound (3) can be equivalently rewritten as

f^{(r)}_K = λ_min(A_f),  where  A_f := ( ∫_K f(x) b_α(x) b_β(x) dµ(x) )_{|α|, |β| ≤ r}  (12)

(see [16,7]). Using this reformulation we could show in [7] that the bounds (3) have a convergence rate in O(1/r^2) for the case of the interval K = [−1, 1]. The key fact is that, for the univariate polynomial f(x) = x, the matrix A_f in (12) is tri-diagonal, which follows from the 3-term recurrence relation satisfied by the orthogonal polynomials. In fact, A_f coincides with the so-called Jacobi matrix from the theory of orthogonal polynomials, and its eigenvalues are the roots of the orthogonal polynomial of degree r + 1 (see, e.g., [6, Chapter 1]). This fact is key to the following result.

Theorem 3 ([7]). Consider the measure dµ(x) = (1 − x)^a (1 + x)^b dx on the interval [−1, 1], where a, b > −1. For the univariate polynomial f(x) = x, the parameter f^{(r)} is equal to the smallest root of the Jacobi polynomial P^{a,b}_{r+1} (with degree r + 1). In particular, f^{(r)} − f_min = O(1/r^2).

3 Convergence analysis for the unit sphere

In this section we analyze the quality of the bounds f^{(r)} when minimizing a polynomial f over the unit sphere S^{n-1}.
In Section 3.1 we show that the range f^{(r)} − f_min is in O(1/r^2), and in Section 3.2 we show that the analysis is tight for linear polynomials.

3.1 The bound O(1/r^2)
We first deal with the n-variate linear (coordinate) polynomial f(x) = x_1, and after that we indicate how the general case can be reduced to this special case. The key idea is to fall back on the analysis of Section 2.2 for the interval [−1, 1] with an appropriate weight function. We begin by introducing the necessary notation.
To simplify notation we set d = n − 1 (which also matches the customary notation in the theory of orthogonal polynomials, where d usually is the number of variables). For λ > −1/2 we let

w_{d,λ}(x) := (1 − ‖x‖^2)^{λ−1/2}  (x ∈ R^d)

(well-defined when ‖x‖ < 1) and set

C_{d,λ} := ∫_{B^d} w_{d,λ}(x) dx = π^{d/2} Γ(λ + 1/2) / Γ(λ + (d+1)/2),  (14)

so that C^{-1}_{d,λ} w_{d,λ} defines a probability measure over the unit ball B^d.

Lemma 1. Fix x_1 ∈ [−1, 1] and let d ≥ 2. Then we have

∫_{{x' ∈ R^{d-1} : ‖x'‖^2 ≤ 1 − x_1^2}} w_{d,λ}(x_1, x') dx' = C_{d-1,λ} (1 − x_1^2)^{λ + (d-2)/2},

which is thus equal to C_{d-1,λ} w_{1, λ+(d-1)/2}(x_1).
Proof. Change variables and set x' = √(1 − x_1^2) u with u ∈ B^{d-1}, so that dx' = (1 − x_1^2)^{(d-1)/2} du and w_{d,λ}(x_1, x') = (1 − x_1^2)^{λ−1/2} (1 − ‖u‖^2)^{λ−1/2}. Putting things together and using relation (14) we obtain the desired result.
We also need the following lemma, which relates integration over the unit sphere S^d ⊆ R^{d+1} and integration over the unit ball B^d ⊆ R^d, and which can be found, e.g., in [6, Lemma 3.8.1] and [2, Lemma 11.7.1].

Lemma 2. Let g be a (d+1)-variate integrable function defined on S^d and d ≥ 1. Then we have

∫_{S^d} g(x) dσ(x) = ∫_{B^d} [ g(x, √(1 − ‖x‖^2)) + g(x, −√(1 − ‖x‖^2)) ] (1 − ‖x‖^2)^{−1/2} dx.

By combining these two lemmas we obtain the following result.

Lemma 3. Let g(x_1) be a univariate polynomial and d ≥ 1. Then we have

(1/σ_d) ∫_{S^d} g(x_1) dσ(x) = C^{-1}_{1,ν} ∫_{−1}^{1} g(x_1) w_{1,ν}(x_1) dx_1,

where we set ν = (d−1)/2.
Proof. Applying Lemma 2 to the function x ∈ R^{d+1} ↦ g(x_1) we get

∫_{S^d} g(x_1) dσ(x) = 2 ∫_{B^d} g(x_1)(1 − ‖x‖^2)^{−1/2} dx = 2 ∫_{B^d} g(x_1) w_{d,0}(x) dx.  (15)

If d = 1 then ν = 0, and the right-hand side of (15) is equal to

2 ∫_{−1}^{1} g(x_1) w_{1,0}(x_1) dx_1 = σ_1 C^{-1}_{1,0} ∫_{−1}^{1} g(x_1) w_{1,0}(x_1) dx_1,

as desired, since 2σ_1^{-1} C_{1,0} = 1 using σ_1 = 2π and C_{1,0} = π (by (4) and (14)). Assume now d ≥ 2. Then the right-hand side of (15) is equal to

2 ∫_{−1}^{1} g(x_1) C_{d-1,0} w_{1,(d-1)/2}(x_1) dx_1 = 2 C_{d-1,0} ∫_{−1}^{1} g(x_1) w_{1,ν}(x_1) dx_1,

where we have used Lemma 1 (with λ = 0) for the first equality. Finally we verify that the constant 2 C_{d-1,0} is equal to σ_d C^{-1}_{1,ν} (using relations (4) and (14)), and thus we arrive at the desired identity.
We can now complete the convergence analysis for the minimization of x_1 on the unit sphere.

Lemma 4. For the minimization of the polynomial f(x) = x_1 over S^d with d ≥ 1, the order-r upper bound (3) satisfies f^{(r)} − f_min = O(1/r^2).

Proof. Let h(x_1) be an optimal univariate sum-of-squares polynomial of degree 2r for the order-r upper bound corresponding to the minimization of x_1 over [−1, 1], when using as reference measure on [−1, 1] the measure with weight function C^{-1}_{1,ν} w_{1,ν}(x_1) and ν = (d−1)/2 (thus ν > −1). Applying Lemma 3 to the univariate polynomials h(x_1) and x_1 h(x_1), we obtain

(1/σ_d) ∫_{S^d} h(x_1) dσ(x) = C^{-1}_{1,ν} ∫_{−1}^{1} h(x_1) w_{1,ν}(x_1) dx_1 = 1,
(1/σ_d) ∫_{S^d} x_1 h(x_1) dσ(x) = C^{-1}_{1,ν} ∫_{−1}^{1} x_1 h(x_1) w_{1,ν}(x_1) dx_1,

so that h(x_1)/σ_d is a feasible density for the order-r bound on S^d with the same objective value. Since the function x_1 has the same global minimum −1 over [−1, 1] and over the sphere S^d, we can apply Theorem 3 (with a = b = ν) to conclude that f^{(r)} − f_min = O(1/r^2).

We now indicate how the analysis for an arbitrary polynomial f reduces to the case of the linear coordinate polynomial x_1. Suppose a ∈ S^{n-1} is a global minimizer of f over S^{n-1}, and let C_f denote a bound on the norm of the Hessian ∇^2 f over the sphere. Using Taylor's theorem, we can upper-estimate f as follows: for x ∈ S^{n-1},

f(x) ≤ f(a) + ∇f(a)^T (x − a) + (C_f/2)‖x − a‖^2 = f(a) + ∇f(a)^T (x − a) + C_f (1 − a^T x) =: g(x),

where we use that ‖x − a‖^2 = 2(1 − a^T x) on the sphere. Note that the upper estimate g(x) is a linear polynomial with the same minimum value as f(x) on S^{n-1}, namely g_min = g(a) = f(a) = f_min. From this it follows that f^{(r)} − f_min ≤ g^{(r)} − g_min, and thus we may restrict to analyzing the bounds for a linear polynomial.
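The linear upper estimate can be made concrete on a small example. Take the quadratic form f(x) = x^T Q x with Q = diag(1, 2, 3) on S^2: a global minimizer is a = e_1 with f(a) = 1, and one may take C_f = 6 as a bound on the Hessian norm ‖2Q‖ (notation as above; the example is ours). A sketch verifying that g upper-estimates f on the sphere, with equality at a:

```python
import random
import numpy as np

Q = np.diag([1.0, 2.0, 3.0])
f = lambda x: x @ Q @ x
a = np.array([1.0, 0.0, 0.0])      # global minimizer of f on S^2, f(a) = 1
grad_a = 2*Q @ a                   # gradient of f at a
C_f = 2*np.max(np.diag(Q))         # bound on ||Hessian|| = ||2Q||

def g(x):
    # linear in x on the sphere, since ||x - a||^2 = 2(1 - a.x) there
    return f(a) + grad_a @ (x - a) + C_f*(1 - a @ x)

random.seed(1)
for _ in range(10000):
    x = np.array([random.gauss(0, 1) for _ in range(3)])
    x /= np.linalg.norm(x)
    assert g(x) >= f(x) - 1e-9     # g upper-estimates f on S^2
assert abs(g(a) - f(a)) < 1e-12    # with equality at the minimizer
```

(For this instance one can even check by hand that g − f = 2(1 − x_1)^2 on the sphere.)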
Next, assume f is a linear polynomial, of the form f(x) = c^T x with (up to scaling) ‖c‖ = 1. We can then apply a change of variables to bring f into the form x_1: let U be an orthogonal n × n matrix such that Uc = e_1.
Then the polynomial g(x) := f(U^T x) = x_1 has the desired form and the same minimum value −1 over S^{n-1} as f. As the sphere is invariant under orthogonal transformations, it follows that f^{(r)} = g^{(r)}, and hence f^{(r)} − f_min = O(1/r^2) by Lemma 4 applied to g(x) = x_1. Summarizing, we have shown the following.

Theorem 4. For the minimization of any polynomial f over S^{n-1} with n ≥ 2, the order-r upper bound (3) satisfies f^{(r)} − f_min = O(1/r^2).

Note the difference to Theorem 2, where the constant depends only on the degree of f and the number n of variables; here the constant in O(1/r^2) also depends on the polynomial f itself, namely on the norm of the gradient ∇f(a) at a global minimizer a of f in S^{n-1} and on the size of its second-order derivatives over the sphere.

3.2 The analysis is tight for linear polynomials
In this section we show, through an example, that the convergence rate cannot be better than Ω(1/r^2). The example is simply minimizing x_1 over the sphere S^{n-1}. The key tool we use is a link between the bounds f^{(r)} and properties of some known cubature rules on the unit sphere. This connection, recently mentioned in [18], holds for any compact set K and goes as follows.
Suppose the points x^{(1)}, …, x^{(N)} ∈ K and the weights w_1, …, w_N > 0 provide a (positive) cubature rule for K for a given measure µ, which is exact up to degree d + 2r; that is,

∫_K g(x) dµ(x) = Σ_{ℓ=1}^{N} w_ℓ g(x^{(ℓ)})

for all polynomials g with degree at most d + 2r. Then, for any polynomial f with degree at most d, we have

f^{(r)}_K ≥ min_{ℓ ∈ [N]} f(x^{(ℓ)}).  (16)

The argument is simple: if h ∈ Σ[x]_r is an optimal sum-of-squares density for the parameter f^{(r)}_K, then, since fh has degree at most d + 2r,

f^{(r)}_K = ∫_K f(x) h(x) dµ(x) = Σ_{ℓ=1}^{N} w_ℓ f(x^{(ℓ)}) h(x^{(ℓ)}) ≥ min_{ℓ ∈ [N]} f(x^{(ℓ)}) · Σ_{ℓ=1}^{N} w_ℓ h(x^{(ℓ)}) = min_{ℓ ∈ [N]} f(x^{(ℓ)}),

where the last equality uses Σ_ℓ w_ℓ h(x^{(ℓ)}) = ∫_K h(x) dµ(x) = 1. As a warm-up we consider the case n = 2, where we can use the cubature rule in Theorem 5 below for the unit circle. We use spherical coordinates (x_1, x_2) = (cos θ, sin θ) to express a polynomial f in x_1, x_2 as a polynomial g in cos θ, sin θ.

Theorem 5 ([2, Proposition 6.5.1]). For each d ∈ N, the cubature formula

(1/2π) ∫_0^{2π} g(θ) dθ = (1/(d+1)) Σ_{j=0}^{d} g(2πj/(d+1))

is exact for all g ∈ span{1, cos θ, sin θ, …, cos(dθ), sin(dθ)}, i.e., for all polynomials of degree at most d restricted to the unit circle.
Using this cubature rule on S 1 we can lower bound the parameters f (r) for the minimization of f (x) = x 1 over S 1 .
Namely, by setting x_1 = cos θ, we derive directly from the above theorem combined with relation (16) that, taking N := 2r + 3 equally spaced nodes (so that the rule is exact up to degree 2r + 2 ≥ 1 + 2r, and no node lies at the angle π),

f^{(r)} ≥ min_{0 ≤ j ≤ 2r+2} cos(2πj/(2r+3)) = −cos(π/(2r+3)),

and therefore f^{(r)} − f_min ≥ 1 − cos(π/(2r+3)) = Ω(1/r^2). This reasoning extends to any dimension n ≥ 2, by using product-type cubature formulas on the sphere S^{n-1}. In particular we will use the cubature rule described in [2, Theorem 6.2.3]; see Theorem 7 below.
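Both the exactness of the equal-weight rule on the circle and the resulting Ω(1/r^2) gap are easy to confirm numerically; a sketch (the choice N = 2r + 3 follows the argument above):

```python
import math

def cubature(g, N):
    # equal-weight rule at N equally spaced angles:
    # exact for trigonometric polynomials of degree <= N - 1
    return sum(g(2*math.pi*j/N) for j in range(N)) / N

# exactness check for N = 9: integrals of cos(kt), sin(kt) vanish for 1 <= k <= 8
for k in range(1, 9):
    assert abs(cubature(lambda t: math.cos(k*t), 9)) < 1e-12
    assert abs(cubature(lambda t: math.sin(k*t), 9)) < 1e-12
assert abs(cubature(lambda t: math.cos(t)**2, 9) - 0.5) < 1e-12

# lower bound (16) for f = x_1 on S^1: with N = 2r + 3 nodes no node sits at
# angle pi, so the bound is -cos(pi/N), and 1 - cos(pi/N) >= 2/N^2,
# i.e. the gap to f_min = -1 is Omega(1/r^2)
for r in (5, 10, 20):
    N = 2*r + 3
    lb = min(math.cos(2*math.pi*j/N) for j in range(N))
    assert abs(lb + math.cos(math.pi/N)) < 1e-12
    assert 1 + lb >= 2/N**2
```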
To define the nodes of the cubature rule on S^{n-1} we need the Gegenbauer polynomials C^λ_d(x), where λ > −1/2. Recall that these are the orthogonal polynomials with respect to the weight function

(1 − x^2)^{λ−1/2}

on [−1, 1]. We will not need explicit expressions for the polynomials C^λ_d(x); we only need the following information about their extremal roots, shown in [7] (for general Jacobi polynomials, using results of [3,5]). It is well known that each C^λ_d(x) has d distinct roots, lying in (−1, 1).

Theorem 6. Denote the roots of the polynomial C^λ_d(x) by t^λ_{1,d} > ⋯ > t^λ_{d,d}. Then 1 + t^λ_{d,d} = Θ(1/d^2); that is, there exist constants c_λ, C_λ > 0 such that c_λ/d^2 ≤ 1 + t^λ_{d,d} ≤ C_λ/d^2 for all d.

The cubature rule we will use may now be stated.

Theorem 7 ([2, Theorem 6.2.3]). Let f : S^{n-1} → R be a polynomial of degree at most 2d − 1, and let g(θ_1, …, θ_{n-1}) := f(x_1, …, x_n) be the expression of f in the generalized spherical coordinates (17). Then (1/σ_{n-1}) ∫_{S^{n-1}} f dσ equals a sum, with positive weights as in relation (6.2.3) of [2], of the values of g at finitely many nodes (θ_{j_1,d}, …, θ_{j_{n-1},d}), where the angles in the last coordinate satisfy cos θ_{j,d} = t^λ_{j,d} with λ = (n−2)/2.
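The Θ(1/d^2) behaviour of the extremal root in Theorem 6 can be observed numerically. For λ = 1/2 the Gegenbauer polynomial C^{1/2}_d is (up to normalization) the Legendre polynomial P_d, whose roots numpy can compute; the sketch below (with generous constants standing in for c_λ and C_λ) checks the scaling:

```python
import numpy as np

def smallest_root(d):
    # roots of the Legendre polynomial P_d (the Gegenbauer case lambda = 1/2,
    # up to normalization), via numpy's companion-matrix root finder
    return min(np.polynomial.legendre.legroots([0]*d + [1]))

for d in (10, 20, 40, 80):
    t = smallest_root(d)
    # the extremal root approaches -1 at the rate Theta(1/d^2)
    assert 1.0/d**2 <= 1 + t <= 10.0/d**2
```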
We can now show the tightness of the convergence rate Ω(1/r^2) for the minimization of a coordinate polynomial on S^{n-1}.

Theorem 8. Consider the problem of minimizing the coordinate polynomial x_n on the unit sphere S^{n-1} with n ≥ 2.
The convergence rate for the parameters (3) satisfies f^{(r)} − f_min = Ω(1/r^2).

Proof. We have f(x_1, …, x_n) = x_n, so that g(θ_1, …, θ_{n-1}) = cos θ_{n-1}. Take d = r + 1, so that the cubature rule of Theorem 7 is exact up to degree 2d − 1 ≥ 1 + 2r. Using (16) we obtain

f^{(r)} ≥ min_j cos θ_{j,d} = t^λ_{d,d}  with λ = (n−2)/2,

where we use the fact that t^λ_{d,d} is the smallest root of C^λ_d. By Theorem 6 we have 1 + t^λ_{d,d} ≥ c_λ/d^2, and thus f^{(r)} − f_min = f^{(r)} + 1 ≥ c_λ/(r+1)^2 = Ω(1/r^2).

4 Implications for the generalized problem of moments

In this section we describe the implications of our results for the generalized problem of moments (GPM), defined as follows for a compact set K ⊂ R^n:
val := inf_{µ ∈ M(K)_+} { ∫_K f_0(x) dµ(x) : ∫_K f_i(x) dµ(x) = b_i  (i ∈ [m]) },  (19)

where
• the functions f_i (i = 0, …, m) are continuous on K;
• M(K)_+ denotes the set of probability measures supported on the set K;
• b_i ∈ R (i ∈ [m]) are given scalars.

As before, we are interested in the special case K = S^{n-1}. This special case is already of independent interest, since it contains the problem of finding cubature schemes for numerical integration on the sphere; see, e.g., [10] and the references therein. Our main result in Theorem 4 has the following implication for the GPM on the sphere, as a corollary of the following result in [13] (which applies to any compact K; see also [10] for a sketch of the proof in the setting described here).

Theorem 9 (De Klerk-Postek-Kuhn [13]). Assume that f_0, …, f_m are polynomials, K is compact, µ is a Borel measure supported on K, and the GPM (19) has an optimal solution. If, for any polynomial f, one has f^{(r)} − f_{min,K} ≤ ε(r) with lim_{r→∞} ε(r) = 0, then for any r ∈ N there exists h_r ∈ Σ_r such that the measure h_r(x) dµ(x) satisfies the constraints of (19), and attains the value val, up to an error in O(√ε(r)).

As a consequence of our main result in Theorem 4, combined with Theorem 9, we immediately obtain the following corollary.

Corollary 1. Assume that f_0, …, f_m are polynomials, K = S^{n-1}, and the GPM (19) has an optimal solution. Then, for any integer r ∈ N, there is an h_r ∈ Σ_r such that

| ∫_{S^{n-1}} f_i(x) h_r(x) dσ(x) − b_i | = O(1/r)  (i ∈ [m])  and  ∫_{S^{n-1}} f_0(x) h_r(x) dσ(x) − val = O(1/r).

Minimization of a rational function on K is a special case of the GPM for which we may prove a better rate of convergence. In particular, we now consider the global optimization problem

val := min_{x ∈ K} p(x)/q(x),  (20)

where p, q are polynomials such that q(x) > 0 for all x ∈ K, and K ⊆ R^n is compact.
It is well known that one may reformulate this problem as the GPM with m = 1, f_0 = p, f_1 = q, and b_1 = 1, i.e.,

val = inf_{µ ∈ M(K)_+} { ∫_K p(x) dµ(x) : ∫_K q(x) dµ(x) = 1 }.

Analogously to (3), we now define the hierarchy of upper bounds on val:

p/q^{(r)}_K := min_{h ∈ Σ[x]_r} ∫_K p(x) h(x) dµ(x)  s.t.  ∫_K q(x) h(x) dµ(x) = 1,  (21)

where µ is a Borel measure supported on K.

Theorem 10. Consider the rational optimization problem (20). If, for any polynomial f, it holds that f^{(r)}_K − f_{min,K} = O(ε(r)) where lim_{r→∞} ε(r) = 0, then one also has p/q^{(r)}_K − val = O(ε(r)). In particular, if K = S^{n-1} and µ = σ, then p/q^{(r)} − val = O(1/r^2).

Proof. Consider the polynomial f(x) := p(x) − val · q(x). Then f(x) ≥ 0 for all x ∈ K and f_{min,K} = 0, with global minimizer given by the minimizer of problem (20). Now, for given r ∈ N, let h ∈ Σ_r be such that

f^{(r)}_K = ∫_K f(x) h(x) dµ(x)  and  ∫_K h(x) dµ(x) = 1,

where µ is the reference measure for K. Setting h* := h / ∫_K q(x) h(x) dµ(x), one has h* ∈ Σ_r and ∫_K h*(x) q(x) dµ(x) = 1. Thus h* is feasible for problem (21). Moreover, by construction,

∫_K p(x) h*(x) dµ(x) − val = ∫_K (p(x) − val · q(x)) h*(x) dµ(x) = f^{(r)}_K / ∫_K q(x) h(x) dµ(x) ≤ f^{(r)}_K / min_{x ∈ K} q(x) = O(ε(r)),

using that q > 0 on the compact set K. The final result for the special case K = S^{n-1} and µ = σ (surface measure) now follows from our main result in Theorem 4.
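One can check that restricting (21) to single-square densities h = s(x)^2 does not change its value (the same argument as for the eigenvalue reformulation (12) applies), which turns the bound into a generalized eigenvalue computation with the matrices (∫ p b_i b_j dµ) and (∫ q b_i b_j dµ). A small illustrative sketch, on an example of our own choosing: K = [−1, 1] with the normalized Lebesgue measure, p(x) = x and q(x) = x + 2, so that val = −1 (attained at x = −1):

```python
import numpy as np

def m(k):
    # moments of the normalized Lebesgue measure on [-1, 1]
    return 1.0/(k + 1) if k % 2 == 0 else 0.0

def rational_upper_bound(r):
    # upper bound on val = min x/(x+2) on [-1, 1] (true value: -1):
    # minimize c^T A_p c subject to c^T A_q c = 1, i.e. the smallest
    # generalized eigenvalue of the pair (A_p, A_q) in the monomial basis
    Ap = np.array([[m(i + j + 1) for j in range(r + 1)]
                   for i in range(r + 1)])
    Aq = np.array([[m(i + j + 1) + 2*m(i + j) for j in range(r + 1)]
                   for i in range(r + 1)])
    return min(np.linalg.eigvals(np.linalg.solve(Aq, Ap)).real)

bounds = [rational_upper_bound(r) for r in (2, 4, 8)]
assert all(b > -1 for b in bounds)                   # upper bounds on val = -1
assert bounds[2] <= bounds[1] <= bounds[0] + 1e-9    # improving with r
```

(We keep r modest here because the monomial-basis moment matrices are ill-conditioned; an orthogonal polynomial basis would be the numerically sound choice.)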

Concluding remarks
In this paper we have improved on the O(1/r) convergence result of Doherty and Wehner [4] for the Lasserre hierarchy of upper bounds (3) for (homogeneous) polynomial optimization on the sphere. Having said that, Doherty and Wehner also showed that the hierarchy of lower bounds (9) of Lasserre satisfies the same O(1/r) rate of convergence, by Theorem 2.
In view of the improved O(1/r^2) rate for the upper bounds, and the fact that the lower bound hierarchy empirically converges much faster in practice, one would expect that the lower bounds (9) also converge at a rate no worse than O(1/r^2). However, our analysis does not extend to the lower bound hierarchy, and this remains an interesting open problem.
Another open problem is the exact rate of convergence of the bounds in Theorem 10 for the generalized problem of moments (GPM). In our analysis of the GPM on the sphere in Corollary 1, we could only obtain O(1/r) convergence, which is a square root worse than the rates for the special cases of polynomial and rational function minimization. We do not know at the moment whether this is a weakness of the analysis or inherent to the GPM.
Note that if we pick another reference measure dµ(x) = q(x) dσ(x), where q is strictly positive on the sphere, then the convergence rates with respect to the two measures σ and µ have the same behaviour (up to a multiplicative constant).
It would be interesting to understand the convergence rate for more general reference measures.

Figure 1: Contour plots of the Motzkin form on the sphere (top left) and optimal density for r = 3 (top right), r = 6 (bottom left), and r = 9 (bottom right).

Figure 2: Plots of the Motzkin form on the sphere (top left) and optimal density for r = 3 (top right), r = 6 (bottom left), and r = 9 (bottom right), in spherical coordinates.

The bounds (3) have a convergence rate in O(1/r^2) for the case of the interval K = [−1, 1] (and, as an application, also for the n-dimensional box [−1, 1]^n). This result holds for a large class of measures on [−1, 1], namely those admitting a weight function w(x) = (1 − x)^a (1 + x)^b (with a, b > −1) with respect to the Lebesgue measure. The corresponding orthogonal polynomials are the Jacobi polynomials P^{a,b}_d(x), where d ≥ 0 is their degree. The case a = b = −1/2 (resp., a = b = 0) corresponds to the Chebyshev polynomials (resp., the Legendre polynomials), and when a = b = λ − 1/2 the corresponding polynomials are the Gegenbauer polynomials C^λ_d(x), where d is their degree. See, e.g., [6, Chapter 1] for a general reference on orthogonal polynomials.
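In the Chebyshev case a = b = −1/2 the Jacobi polynomial P^{−1/2,−1/2}_{r+1} is, up to normalization, the Chebyshev polynomial T_{r+1}, whose roots are known in closed form; so here the O(1/r^2) rate can be checked directly. The smallest root is −cos(π/(2(r+1))), and the gap to −1 behaves like π^2/(8r^2). A sketch:

```python
import math

def cheb_bound(r):
    # smallest root of T_{r+1} (the Jacobi case a = b = -1/2):
    # the roots are cos((2k+1) pi / (2(r+1))), k = 0, ..., r
    return -math.cos(math.pi/(2*(r + 1)))

for r in (10, 100, 1000):
    gap = cheb_bound(r) - (-1)         # distance of the bound from f_min = -1
    # gap * (r+1)^2 tends to pi^2/8 = 1.2337...
    assert abs(gap*(r + 1)**2 - math.pi**2/8) < 1e-2
```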

[7] Consider the measure dµ(x) = (1 − x)^a (1 + x)^b dx on the interval [−1, 1], where a, b > −1. For the univariate polynomial f(x) = x, the parameter f^{(r)} is equal to the smallest root of the Jacobi polynomial P^{a,b}_{r+1} (of degree r + 1).

The normalized weight C^{-1}_{d,λ} w_{d,λ} defines a probability measure over the unit ball B^d; see, e.g., [6, Section 2.3.2] or [2, Section 11]. Lemma 1 indicates how to integrate the d-variate weight function w_{d,λ} along d − 1 variables.

Table 1: Upper bounds f^{(r)} for the Motzkin form.