Abstract
We address the following generalization \(\mathbf {P}\) of the Löwner-John ellipsoid problem. Given a (not necessarily convex) compact set \(\mathbf {K}\subset \mathbb {R}^n\) and an even integer \(d\in \mathbb {N}\), find a homogeneous polynomial \(g\) of degree \(d\) such that \(\mathbf {K}\subset \mathbf {G}:=\{\mathbf {x}:g(\mathbf {x})\le 1\}\) and \(\mathbf {G}\) has minimum volume among all such sets. We show that \(\mathbf {P}\) is a convex optimization problem even if neither \(\mathbf {K}\) nor \(\mathbf {G}\) is convex! We next show that \(\mathbf {P}\) has a unique optimal solution, and we provide a characterization with at most \(\binom{n+d-1}{d}\) contact points in \(\mathbf {K}\cap \mathbf {G}\). This is the analogue for \(d>2\) of Löwner-John’s theorem in the quadratic case \(d=2\); importantly, we require neither the set \(\mathbf {K}\) nor the sublevel set \(\mathbf {G}\) to be convex. More generally, there is also a homogeneous polynomial \(g\) of even degree \(d\) and a point \(\mathbf {a}\in \mathbb {R}^n\) such that \(\mathbf {K}\subset \mathbf {G}_\mathbf {a}:=\{\mathbf {x}:g(\mathbf {x}-\mathbf {a})\le 1\}\) and \(\mathbf {G}_\mathbf {a}\) has minimum volume among all such sets (but uniqueness is not guaranteed). Finally, we outline a numerical scheme to approximate the optimal value and an optimal solution as closely as desired. It consists of solving a hierarchy of convex optimization problems with strictly convex objective function and Linear Matrix Inequality constraints.
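The convex formulation of \(\mathbf {P}\) rests on the layer-cake identity \(\mathrm{vol}(\mathbf {G})=\Gamma (1+n/d)^{-1}\int _{\mathbb {R}^n}\exp (-g(\mathbf {x}))\,d\mathbf {x}\), valid for any positive homogeneous \(g\) of degree \(d\). The following numerical sanity check is a sketch not taken from the paper (it assumes NumPy is available; the choice \(n=1\), \(d=4\), \(g(x)=x^4\) is purely illustrative):

```python
import numpy as np
from math import gamma

# Check the layer-cake identity
#   vol({x : g(x) <= 1}) = (1 / Gamma(1 + n/d)) * int_{R^n} exp(-g(x)) dx
# for n = 1, d = 4, g(x) = x^4, whose sublevel set is [-1, 1] (volume 2).
d = 4
h = 1e-3
x = np.arange(-10.0, 10.0, h) + h / 2   # midpoint grid; tails beyond |x|=10 are negligible
integral = np.exp(-x**d).sum() * h      # int_R exp(-x^4) dx = 2 * Gamma(5/4)
vol = 2.0                               # vol([-1, 1])
assert abs(integral / gamma(1.0 + 1.0 / d) - vol) < 1e-6
```

The identity reduces the (non-convex-looking) volume objective to the integral of \(\exp(-g)\), which is a convex functional of the coefficients of \(g\).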
Notes
For instance, some well-known NP-hard 0/1 optimization problems reduce to conic LP optimization problems over the convex cone of copositive matrices (and/or its dual), for which the associated membership problem is hard.
We thank Pham Tien Son for providing these two examples.
We have used the GloptiPoly software [20] dedicated to solving the generalized problem of moments.
A semidefinite program is a finite-dimensional convex optimization problem which in canonical form reads: \(\min _\mathbf {x}\{\mathbf {c}^T\mathbf {x}:\mathbf {A}_0+\sum _{k=1}^t\mathbf {A}_kx_k\succeq 0\}\), where \(\mathbf {c}\in \mathbb {R}^t\) and the \(\mathbf {A}_k\)’s are real symmetric matrices. Importantly, up to arbitrary fixed precision it can be solved in time polynomial in the input size of the problem.
A Linear Matrix Inequality (LMI) is a constraint of the form \(\mathbf {A}(\mathbf {x}):=\mathbf {A}_0+\sum _{\ell =1}^t\mathbf {A}_\ell x_\ell \succeq 0\) where each \(\mathbf {A}_\ell \), \(\ell =0,\ldots ,t\), is a real symmetric matrix; so each entry of the real symmetric matrix \(\mathbf {A}(\mathbf {x})\) is affine in \(\mathbf {x}\in \mathbb {R}^t\). An LMI always defines a convex set, i.e., the set \(\{\mathbf {x}\in \mathbb {R}^t:\mathbf {A}(\mathbf {x})\succeq 0\}\) is convex.
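To make the convexity of LMI-defined sets concrete, here is a minimal numerical sketch (assuming NumPy; the matrices \(\mathbf {A}_0,\mathbf {A}_1,\mathbf {A}_2\) are illustrative, chosen so that the feasible set is the parabola epigraph \(\{(x_1,x_2):x_2\ge x_1^2\}\)):

```python
import numpy as np

# LMI A(x) = A0 + x1*A1 + x2*A2 >= 0 (positive semidefinite).
# With the matrices below, A(x) = [[1, x1], [x1, x2]], so the feasible
# set is {(x1, x2) : x2 >= x1^2}, a convex set.
A0 = np.array([[1.0, 0.0], [0.0, 0.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])

def feasible(x, tol=1e-9):
    """True iff A(x) is positive semidefinite (smallest eigenvalue >= 0)."""
    A = A0 + x[0] * A1 + x[1] * A2
    return np.linalg.eigvalsh(A).min() >= -tol

# Convexity: the midpoint of two feasible points stays feasible.
p, q = np.array([1.0, 2.0]), np.array([-1.0, 1.5])
assert feasible(p) and feasible(q)
assert feasible((p + q) / 2)
# A point strictly below the parabola violates the LMI.
assert not feasible(np.array([1.0, 0.5]))
```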
References
Anastassiou, G.A.: Moments in Probability Approximation Theory. Longman Scientific & Technical, UK (1993)
d’Aspremont, A.: Smooth optimization with approximate gradient. SIAM J. Optim. 19, 1171–1183 (2008)
Ash, R.B.: Real Analysis and Probability. Academic Press Inc., Boston (1972)
Ball, K.: Ellipsoids of maximal volume in convex bodies. Geom. Dedicata 41, 241–250 (1992)
Ball, K.: Convex geometry and functional analysis. In: Johnson, W.B., Lindenstrauss, J. (eds.) Handbook of the Geometry of Banach Spaces I, pp. 161–194. North Holland, Amsterdam (2001)
Barvinok, A.I.: Computing the volume, counting integral points, and exponential sums. Discrete Comput. Geom. 10, 123–141 (1993)
Bastero, J., Romance, M.: John’s decomposition of the identity in the non-convex case. Positivity 6, 1–16 (2002)
Bayer, C., Teichmann, J.: The proof of Tchakaloff’s theorem. Proc. Am. Math. Soc. 134, 3035–3040 (2006)
Bookstein, F.L.: Fitting conic sections to scattered data. Comp. Graph. Image. Process. 9, 56–71 (1979)
Calafiore, G.: Approximation of \(n\)-dimensional data using spherical and ellipsoidal primitives. IEEE Trans. Syst. Man. Cyb. 32, 269–276 (2002)
Chernousko, F.L.: Guaranteed estimates of undetermined quantities by means of ellipsoids. Sov. Math. Dokl. 21, 396–399 (1980)
Croux, C., Haesbroeck, G., Rousseeuw, P.J.: Location adjustment for the minimum volume ellipsoid estimator. Stat. Comput. 12, 191–200 (2002)
Giannopoulos, A., Perissinaki, I., Tsolomitis, A.: A John’s theorem for an arbitrary pair of convex bodies. Geom. Dedicata 84, 63–79 (2001)
Dyer, M.E., Frieze, A.M., Kannan, R.: A random polynomial-time algorithm for approximating the volume of convex bodies. J. ACM 38, 1–17 (1991)
Faraut, J., Korányi, A.: Analysis on Symmetric Cones. Clarendon Press, Oxford (1994)
Freitag, E., Busam, R.: Complex Analysis, 2nd edn. Springer, Berlin (2009)
Gander, W., Golub, G.H., Strebel, R.: Least-squares fitting of circles and ellipses. BIT 34, 558–578 (1994)
Helton, J.W., Nie, J.: A semidefinite approach for truncated K-moment problems. Found. Comput. Math. 12, 851–881 (2012)
Henk, M.: Löwner-John ellipsoids. Documenta Math., Extra Volume: Optimization Stories, 95–106 (2012)
Henrion, D., Lasserre, J.B., Lofberg, J.: Gloptipoly 3: moments, optimization and semidefinite programming. Optim. Method. Softw. 24, 761–779 (2009)
Henrion, D., Lasserre, J.B., Savorgnan, C.: Approximate volume and integration of basic semi-algebraic sets. SIAM Rev. 51, 722–743 (2009)
Henrion, D., Peaucelle, D., Arzelier, D., Sebek, M.: Ellipsoidal approximation of the stability domain of a polynomial. IEEE Trans. Autom. Control 48, 2255–2259 (2003)
Henrion, D., Lasserre, J.B.: Inner approximations for polynomial matrix inequalities and robust stability regions. IEEE Trans. Autom. Control 57, 1456–1467 (2012)
Henrion, D., Sebek, M., Kucera, V.: Positive polynomials and robust stabilization with fixed-order controllers. IEEE Trans. Autom. Control 48, 1178–1186 (2003)
Hiriart-Urruty, J.B., Lemarechal, C.: Convex Analysis and Minimization Algorithms I. Springer, Berlin (1993)
Hiriart-Urruty, J.B., Lemarechal, C.: Convex Analysis and Minimization Algorithms II. Springer, Berlin (1993)
Karimi, A., Khatibi, H., Longchamp, R.: Robust control of polytopic systems by convex optimization. Automatica 43, 1395–1402 (2007)
Kemperman, J.H.B.: Geometry of the moment problem. In: Landau, H.J. (ed.) Moments in Mathematics. Proceedings of Symposia in Applied Mathematics, vol. 37, pp. 16–53 (1987)
Kemperman, J.H.B.: The general moment problem, a geometric approach. Annals Math. Stat. 39, 93–122 (1968)
Lasserre, J.B.: Global optimization with polynomials and the problem of moments. SIAM J. Optim. 11, 796–817 (2001)
Lasserre, J.B.: Moments, Positive Polynomials and Their Applications. Imperial College, London (2009)
Lasserre, J.B.: Recovering an homogeneous polynomial from moments of its level set. Discrete Comput. Geom. 50, 673–678 (2013)
Morozov, A., Shakirov, S.: New and old results in resultant theory. Theor. Math. Phys. 163, 587–617 (2010)
Morozov, A., Shakirov, S.: Introduction to integral discriminants. J. High Energy Phys. 12 (2009). arXiv:0911.5278v1
Nurges, U.: Robust pole assignment via reflection coefficients of polynomials. Automatica 42, 1223–1230 (2006)
O’Rourke, J., Badler, N.I.: Decomposition of three-dimensional objects into spheres. IEEE Trans. Pattern Anal. Machine Intell. 1, 295–305 (1979)
Pratt, V.: Direct least squares fitting of algebraic surfaces. ACM J. Comp. Graph. 21, 145–152 (1987)
Putinar, M.: Positive polynomials on compact semi-algebraic sets. Indiana Univ. Math. J. 42, 969–984 (1993)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, NJ (1970)
Rosen, J.B.: Pattern separation by convex programming techniques. J. Math. Anal. Appl. 10, 123–134 (1965)
Rosin, P.L.: A note on the least squares fitting of ellipses. Pattern Recog. Lett. 14, 799–808 (1993)
Rosin, P.L., West, G.A.: Nonparametric segmentation of curves into various representations. IEEE Trans. Pattern Anal. Machine Intell. 17, 1140–1153 (1995)
Rousseeuw, P.J., Leroy, A.M.: Robust Regression and Outlier Detection. John Wiley, New York (1987)
Royden, H.L.: Real Analysis. Macmillan, New York (1968)
Sun, P., Freund, R.: Computation of minimum-volume covering ellipsoids. Oper. Res. 52, 690–706 (2004)
Taubin, G.: Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Machine Intell. 13, 1115–1138 (1991)
Vandenberghe, L., Boyd, S.: Semidefinite programming. SIAM Rev. 38, 49–95 (1996)
Widder, D.V.: The Laplace Transform. Princeton University Press, Princeton (1946)
Acknowledgments
This work was partially supported by a grant from the Gaspar Monge Program for Optimization and Operations Research (PGMO) of the Fondation Mathématique Jacques Hadamard (France).
Appendix
1.1 First-order KKT-optimality conditions
Consider the finite dimensional optimization problem:
for some real matrix \(\mathbf {A}\in \mathbb {R}^{m\times n}\), vector \(\mathbf {b}\in \mathbb {R}^m\), some closed convex cone \(C\subset \mathbb {R}^n\) (with dual cone \(C^*=\{\,\mathbf {y}:\mathbf {y}^T\mathbf {x}\ge 0,\,\forall \mathbf {x}\,\in C\,\}\)) and some convex and differentiable function \(f\) with domain \(D\). Suppose that \(C\) has a nonempty interior \(\mathrm{int}(C)\) and Slater’s condition holds, that is, there exists \(\mathbf {x}_0\in D\cap \mathrm{int}(C)\) such that \(\mathbf {A}\mathbf {x}_0=\mathbf {b}\). The normal cone at a point \(0\ne \mathbf {x}\in C\) is the set \(N_C(\mathbf {x})=\{\mathbf {y}\in C^*:\,\langle \mathbf {y},\mathbf {x}\rangle =0\}\) (see e.g. [25, p. 189]).
Then by Theorem 5.3.3, p. 188 in [26], \(\mathbf {x}^*\in C\) is an optimal solution if and only if there exists \((\lambda ,\mathbf {y})\in \mathbb {R}^m\times N_C(\mathbf {x}^*)\) such that:
and \(\langle \mathbf {x}^*,\mathbf {y}\rangle =0\) follows because \(\mathbf {y}\in N_C(\mathbf {x}^*)\).
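Explicitly, the optimality conditions (7.1) take the form below (a reconstruction consistent with [26, Theorem 5.3.3] and with the complementarity remark above; the sign conventions are the standard ones for minimization over the cone \(C\)):

```latex
\nabla f(\mathbf{x}^*) \,-\, \mathbf{A}^T\lambda \,-\, \mathbf{y} \;=\; 0,
\qquad
\mathbf{A}\,\mathbf{x}^* \;=\; \mathbf{b},
\qquad
\mathbf{y} \;\in\; N_C(\mathbf{x}^*).
```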
1.2 Measures with finite support
We restate the following important result from [29, Theorem 1] and [1, Theorem 2.1.1, p. 39].
Theorem 7.1
([1, 29]) Let \(f_1,\ldots ,f_N\) be real-valued Borel measurable functions on a measurable space \(\Omega \) and let \(\mu \) be a probability measure on \(\Omega \) such that each \(f_i\) is integrable with respect to \(\mu \). Then there exists a probability \(\nu \) with finite support in \(\Omega \) and such that:
One can moreover ensure that the support of \(\nu \) has at most \(N+1\) points.
In fact if \(\mathcal {M}(\Omega )_+\) denotes the space of probability measures on \(\Omega \), then the moment space
is the convex hull of the set \(f(\Omega )=\{(f_1(\mathbf {x}),\ldots ,f_N(\mathbf {x})):\,\mathbf {x}\in \Omega \}\), and each point \(\mathbf {y}\in Y_N\) can be represented as a convex combination of at most \(N+1\) points \(f(\mathbf {x}_i)\), \(i=1,\ldots ,N+1\). (See e.g. Sect. 3, p. 29 in Kemperman [28].)
In the proof of Theorem 3.2 one uses Theorem 7.1 with the \(f_i\)’s being all monomials \((\mathbf {x}^\alpha )\) of degree equal to \(d\) (and so \(N=\binom{n+d-1}{d}\)). We could also use Tchakaloff’s Theorem [8], but then we would potentially need \(\binom{n+d}{d}\) points. An alternative would be to use Tchakaloff’s Theorem after “de-homogenizing” the measure \(\mu \) so that \(n\)-dimensional moments of order \(\vert \alpha \vert =d\) become \((n-1)\)-dimensional moments of order \(\vert \alpha \vert \le d\); one then retrieves the bound \(\binom{n-1+d}{d}\).
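Gaussian quadrature provides a concrete instance of Theorem 7.1: a 2-node Gauss-Legendre rule is an atomic probability measure matching the first three moments of the uniform measure on \([0,1]\). The sketch below is illustrative and assumes NumPy:

```python
import numpy as np

# Theorem 7.1 in action: Omega = [0, 1], mu = Lebesgue (uniform),
# f_i(x) = x^i for i = 1..3 (so N = 3). A 2-node Gauss-Legendre rule
# gives an atomic probability measure nu with the same moments as mu
# (2 <= N + 1 points, as the theorem allows).
nodes, weights = np.polynomial.legendre.leggauss(2)   # rule on [-1, 1]
x = 0.5 * (nodes + 1.0)                               # map nodes to [0, 1]
w = 0.5 * weights                                     # weights now sum to 1

assert abs(w.sum() - 1.0) < 1e-12                     # nu is a probability measure
for i in range(1, 4):
    exact = 1.0 / (i + 1)                             # int_0^1 x^i dx
    assert abs((w * x**i).sum() - exact) < 1e-12      # moments match
```

The rule is exact for all polynomials of degree \(\le 3\), which is precisely the moment-matching property the theorem guarantees abstractly.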
1.3 Proof of Theorem 3.2
Proof
-
(a)
As \(\mathcal {P}\) is a minimization problem, its feasible set \(\{\,g\in \mathbf {H}[\mathbf {x}]_{d}:1-g\in C_{d}(\mathbf {K})\,\}\) can be replaced by the smaller set
$$\begin{aligned} F\,:=\,\left\{ g\in \mathbf {H}[\mathbf {x}]_{d}\,:\,\begin{array}{l}\displaystyle \int _{\mathbb {R}^n}\exp (-g(\mathbf {x}))\,d\mathbf {x}\,\le \,\int _{\mathbb {R}^n}\exp (-g_0(\mathbf {x}))\,d\mathbf {x}\\ 1-g\in \,C_{d}(\mathbf {K}) \end{array}\right\} , \end{aligned}$$
for some \(g_0\in \mathbf {P}[\mathbf {x}]_{d}\). Notice that \(F\subset \mathbf {P}[\mathbf {x}]_d\) and \(F\) is a closed convex set since the convex function \(g\mapsto \int _{\mathbb {R}^n}\exp (-g)d\mathbf {x}\) is continuous on the interior of its domain.
Next, let \(\mathbf {z}=(z_\alpha )\), \(\alpha \in \mathbb {N}^n_{d}\), be a (fixed) element of \(\mathrm{int}(C_{d}(\mathbf {K})^*)\) (hence \(z_0>0\)). By Lemma 2.6 such an element exists and \(\langle \mathbf {z},\mathbf {g}\rangle \ge 0\) (as \(g\in \mathbf {P}[\mathbf {x}]_d\) is nonnegative). Next, as \(\langle \mathbf {z},1-g\rangle \ge 0\) one has \(z_0\ge \langle \mathbf {z},\mathbf {g}\rangle \), and so by Corollary I.1.6 in Faraut and Korányi [15], \(\mathbf {g}\) is bounded. Therefore the set \(F\) is a compact convex set. Finally, since \(g\mapsto \int _{\mathbb {R}^n}\exp (-g(\mathbf {x}))d\mathbf {x}\) is strictly convex, it is continuous on the interior of its domain and so it is continuous on \(F\). Hence problem \(\mathcal {P}\) has a unique optimal solution \(g^*\in \mathbf {P}[\mathbf {x}]_d\).
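The strict convexity of \(g\mapsto \int _{\mathbb {R}^n}\exp (-g)\,d\mathbf {x}\) can be checked in closed form on quadratic forms, where \(\int _{\mathbb {R}^n}\exp (-\mathbf {x}^T\mathbf {A}\mathbf {x})\,d\mathbf {x}=\pi ^{n/2}/\sqrt{\det \mathbf {A}}\). The sketch below (assuming NumPy; the two matrices are illustrative) verifies the midpoint inequality:

```python
import numpy as np

# For g_A(x) = x^T A x on R^2 (A positive definite),
#   int_{R^2} exp(-g_A(x)) dx = pi / sqrt(det A).
# Since g_A is linear in A, convexity of g -> int exp(-g) in g translates
# into the midpoint inequality in A checked below.
def vol_integral(A):
    return np.pi / np.sqrt(np.linalg.det(A))

A0 = np.array([[2.0, 0.3], [0.3, 1.0]])
A1 = np.array([[1.0, -0.5], [-0.5, 3.0]])

lhs = vol_integral((A0 + A1) / 2)                   # value at the midpoint
rhs = 0.5 * (vol_integral(A0) + vol_integral(A1))   # average of the values
assert lhs < rhs                                    # strict convexity
```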
-
(b)
We may and will consider any homogeneous polynomial \(g\) as an element of \(\mathbb {R}[\mathbf {x}]_{d}\) whose coefficient vector \(\mathbf {g}=(g_\alpha )\) is such that \(g_\alpha =0\) whenever \(\vert \alpha \vert <d\). And so Problem \(\mathcal {P}\) is equivalent to the problem
$$\begin{aligned} \mathcal {P}':\quad \left\{ \begin{array}{ll}\rho =\displaystyle \inf _{g\in \mathbb {R}[\mathbf {x}]_{d}}&{}\displaystyle \int _{\mathbb {R}^n} \exp (-g(\mathbf {x}))\,d\mathbf {x}\\ \text{ s.t. }&{}g_\alpha =0,\quad \forall \,\alpha \in \mathbb {N}^n_{d};\,\vert \alpha \vert <d\\ &{}1-g\,\in \,C_{d}(\mathbf {K}),\end{array}\right. \end{aligned}$$(7.2)
where we replaced \(g\in \mathbf {P}[\mathbf {x}]_d\) with the equivalent constraints \(g\in \mathbb {R}[\mathbf {x}]_{d}\) and \(g_\alpha :=0\) for all \(\alpha \in \mathbb {N}^n_{d}\) with \(\vert \alpha \vert <d\). Next, with the change of variable \(h=1-g\), \(\mathcal {P}'\) reads:
$$\begin{aligned} \mathcal {P}':\quad \left\{ \begin{array}{ll}\rho =\displaystyle \inf _{h\in \mathbb {R}[\mathbf {x}]_{d}}&{}\displaystyle \int _{\mathbb {R}^n} \exp (h(\mathbf {x})-1)\,d\mathbf {x}\\ \text{ s.t. }&{}h_\alpha =0,\quad \forall \,\alpha \in \mathbb {N}^n_{d};\,0<\vert \alpha \vert <d\\ &{}h_0=1\\ &{}h\,\in \,C_{d}(\mathbf {K}),\end{array}\right. \end{aligned}$$(7.3)
As \(\mathbf {K}\) is compact, there exists \(\theta \in \mathbf {P}[\mathbf {x}]_{d}\) such that \(1-\theta \in \mathrm{int}(C_{d}(\mathbf {K}))\), i.e., Slater’s condition holds for the convex optimization problem \(\mathcal {P}'\). Indeed, choose \(\mathbf {x}\mapsto \theta (\mathbf {x}):=M^{-1}\Vert \mathbf {x}\Vert ^{d}\) for \(M>0\) sufficiently large so that \(1-\theta >0\) on \(\mathbf {K}\). Hence with \(\Vert g\Vert _1\) denoting the \(\ell _1\)-norm of the coefficient vector of \(g\) (in \(\mathbb {R}[\mathbf {x}]_{d}\)), there exists \(\epsilon >0\) such that for every \(h\in B(\theta ,\epsilon )(:=\{h\in \mathbb {R}[\mathbf {x}]_{d}:\Vert \theta -h\Vert _1<\epsilon \}\)), the polynomial \(1-h\) is (strictly) positive on \(\mathbf {K}\).
Therefore, the unique optimal solution \((1-g^*)=:h^*\in \mathbb {R}[\mathbf {x}]_{d}\) of \(\mathcal {P}'\) in (7.3) satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions (7.1), which for problem (7.3) read:
for some \(\mathbf {y}^*=(y^*_\alpha )\), \(\alpha \in \mathbb {N}^n_{d}\), in the dual cone \(C_{d}(\mathbf {K})^*\subset \mathbb {R}^{s(d)}\) of \(C_{d}(\mathbf {K})\), and some vector \(\gamma =(\gamma _\alpha )\), \(0\le \vert \alpha \vert <d\). By Lemma 2.5,
and so (3.2) is just (7.4) restated in terms of \(\mu ^*\).
Next, the condition \(\langle h^*,\mathbf {y}^*\rangle =0\) (or equivalently, \(\langle 1-g^*,\mathbf {y}^*\rangle =0\)), reads:
which combined with \(1-g^*\in C_{d}(\mathbf {K})\) and \(\mu ^*\in \mathcal {M}(\mathbf {K})_+\), implies that \(\mu ^*\) is supported on \(\mathbf {K}\cap \{\mathbf {x}:g^*(\mathbf {x})=1\}=\mathbf {K}\cap \mathbf {G}^*_1\).
Next, let \(s:=\sum _{\vert \alpha \vert =d}g^*_\alpha y^*_\alpha \,(=y^*_0)\). From \(\langle 1-g^*,\mu ^*\rangle =0\), the measure \(s^{-1}\mu ^*=:\psi \) is a probability measure supported on \(\mathbf {K}\cap \mathbf {G}^*_1\), and satisfies \(\int \mathbf {x}^\alpha d\psi =s^{-1}y^*_\alpha \) for all \(\vert \alpha \vert =d\) (and \(\langle 1-g^*,\psi \rangle =0\)).
Hence by Theorem 7.1 there exists an atomic probability measure \(\nu ^*\in \mathcal {M}(\mathbf {K}\cap \mathbf {G}^*_1)_+\) such that
In addition \(\nu ^*\) may be chosen to be supported on at most \(N=\binom{n+d-1}{d}\) points in \(\mathbf {K}\cap \mathbf {G}^*_1\), and not \(N+1\) points as predicted by Theorem 7.1. This is because one among the \(N\) conditions
is redundant as \(\langle g^*,\mathbf {y}^*\rangle =y^*_0\) and \(\nu ^*\) is supported on \(\mathbf {K}\cap \mathbf {G}_1^*\). In other words, \(\mathbf {y}^*\) is not in the interior of the moment space \(Y_N\). Hence in (3.2) the measure \(\mu ^*\) can be substituted with the atomic measure \(s\,\nu ^*\) supported on at most \(\binom{n+d-1}{d}\) contact points in \(\mathbf {K}\cap \mathbf {G}^*_1\).
To obtain \(\mu ^*(\mathbf {K})=\frac{n}{d}\int _{\mathbb {R}^n}\exp (-g^*)\), multiply both sides of (7.4)-(7.5) by \(h^*_\alpha \) for every \(\alpha \ne 0\), sum up and use \(\langle h^*,\mathbf {y}^*\rangle =0\) to obtain
where we have also used (2.5).
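The relation \(\mu ^*(\mathbf {K})=\frac{n}{d}\int _{\mathbb {R}^n}\exp (-g^*)\) ultimately rests on the Euler-type identity behind (2.5): for \(g\) positive homogeneous of degree \(d\) on \(\mathbb {R}^n\), \(\int g\,\exp (-g)\,d\mathbf {x}=\frac{n}{d}\int \exp (-g)\,d\mathbf {x}\). A quick numerical check (assuming NumPy; \(n=1\), \(d=4\), \(g(x)=x^4\) is an illustrative choice):

```python
import numpy as np

# Check int g(x) exp(-g(x)) dx = (n/d) * int exp(-g(x)) dx
# for n = 1, d = 4, g(x) = x^4, via a midpoint rule on [-10, 10]
# (both integrands are negligible outside this interval).
h = 1e-3
x = np.arange(-10.0, 10.0, h) + h / 2   # midpoint grid
g = x**4
lhs = (g * np.exp(-g)).sum() * h        # int g exp(-g) dx
rhs = (1.0 / 4.0) * np.exp(-g).sum() * h  # (n/d) * int exp(-g) dx
assert abs(lhs - rhs) < 1e-6
```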
-
(c)
Let \(\mu ^*:=\sum _{i=1}^s \lambda _i\delta _{\mathbf {x}_i}\) where \(\delta _{\mathbf {x}_i}\) is the Dirac measure at the point \(\mathbf {x}_i\in \mathbf {K}\), \(i=1,\ldots ,s\). Next, let \(y^*_\alpha :=\int \mathbf {x}^\alpha d\mu ^*\) for all \(\alpha \in \mathbb {N}^n_{d}\), so that \(\mathbf {y}^*\in C_d(\mathbf {K})^*\). In particular \(\mathbf {y}^*\) and \(g^*\) satisfy
$$\begin{aligned} \langle 1-g^*,\mathbf {y}^*\rangle \,=\,\int _\mathbf {K}( 1-g^*)d\mu ^*\,=\,0, \end{aligned}$$
because \(g^*(\mathbf {x}_i)=1\) for all \(i=1,\ldots ,s\). In other words, the pair \((g^*,\mathbf {y}^*)\) satisfies the KKT-optimality conditions associated with the convex problem \(\mathcal {P}\). But since Slater’s condition holds for \(\mathcal {P}\), those conditions are also sufficient for \(g^*\) to be an optimal solution of \(\mathcal {P}\), which is the desired result.
\(\square \)
1.4 Proof of Theorem 4.1
Proof
First observe that (4.2) reads
and notice that the constraint \(1-g_\mathbf {a}\in C(\mathbf {K})\) is the same as \(1-g\in C(\mathbf {K}-\mathbf {a})\). And so for every \(\mathbf {a}\in \mathbb {R}^n\), the inner minimization problem
of (7.7) reads
From Theorem 3.2 (with \(\mathbf {K}-\mathbf {a}\) in lieu of \(\mathbf {K}\)), problem (7.8) has a unique minimizer \(g^\mathbf {a}\in \mathbf {P}[\mathbf {x}]_d\) with value \(\rho _\mathbf {a}=\int _{\mathbb {R}^n}\exp (-g^\mathbf {a})d\mathbf {x}=\int _{\mathbb {R}^n}\exp (-g^\mathbf {a}_\mathbf {a})d\mathbf {x}\).
Therefore, in a minimizing sequence \((\mathbf {a}_\ell ,g^{\mathbf {a}_\ell })\subset \mathbb {R}^n\times \mathbf {P}[\mathbf {x}]_d\), \(\ell \in \mathbb {N}\), for problem \(\mathcal {P}\) in (4.2) with
we may and will assume that for every \(\ell \), the homogeneous polynomial \(g^{\mathbf {a}_\ell }\in \mathbf {P}[\mathbf {x}]_d\) solves the inner minimization problem (7.8) with \(\mathbf {a}_\ell \) fixed. For simplicity of notation, rename \(g^{\mathbf {a}_\ell }\) as \(g^\ell \) and \(g^{\mathbf {a}_\ell }_{\mathbf {a}_\ell }\) (\(=g^{\mathbf {a}_\ell }(\mathbf {x}-\mathbf {a}_\ell )\)) as \(g^\ell _{\mathbf {a}_\ell }\).
As observed in the proof of Theorem 3.2, there is \(\mathbf {z}\in \mathrm{int}(C_d(\mathbf {K})^*)\) such that \(\langle 1-g^\ell _{\mathbf {a}_\ell },\mathbf {z}\rangle \ge 0\), and by Corollary I.1.6 in Faraut and Korányi [15], the set \(\{h\in C_{d}(\mathbf {K}):\langle \mathbf {z},h\rangle \le z_0\}\) is compact.
Also, \(\mathbf {a}_\ell \) can be chosen with \(\Vert \mathbf {a}_\ell \Vert \le M\) for all \(\ell \) (and some \(M\)); otherwise the constraint \(1-g_{\mathbf {a}_\ell }\in C_d(\mathbf {K})\) would force the volume \(\mathrm{vol}(\mathbf {G}^{\mathbf {a}_\ell }_1)\) to be arbitrarily large.
Therefore, there is a subsequence \((\ell _k)\), \(k\in \mathbb {N}\), and a point \((\mathbf {a}^*,\theta ^*)\in \mathbb {R}^n\times C_d(\mathbf {K})\) such that
Recall the definition (4.1) of \(g^\ell _{\mathbf {a}_\ell }(\mathbf {x})=g^\ell (\mathbf {x}-\mathbf {a}_\ell )\) for the homogeneous polynomial \(g^\ell \in \mathbf {P}[\mathbf {x}]_d\) with coefficient vector \(\mathbf {g}^\ell \), i.e.,
for some polynomials \((p_\alpha )\subset \mathbb {R}[\mathbf {x},\mathbf {g}]\), \(\alpha \in \mathbb {N}^n_d\). In particular, for every \(\alpha \in \mathbb {N}^n_d\) with \(\vert \alpha \vert =d\), \(p_\alpha (\mathbf {a}_\ell ,\mathbf {g}^\ell )=(g^\ell )_\alpha \). And so for every \(\alpha \in \mathbb {N}^n_d\) with \(\vert \alpha \vert =d\),
If we define the homogeneous polynomial \(g^*\) of degree \(d\) by \((g^*)_\alpha =\theta ^*_\alpha \) for every \(\alpha \in \mathbb {N}^n_d\) with \(\vert \alpha \vert =d\), then
This means that for every \(\alpha \in \mathbb {N}^n_d\),
In addition, as \(\mathbf {g}^{\ell _k}\rightarrow \mathbf {g}^*\) as \(k\rightarrow \infty \), one has the pointwise convergence \(g^{\ell _k}(\mathbf {x})\rightarrow g^*(\mathbf {x})\) for all \(\mathbf {x}\in \mathbb {R}^n\). Therefore, by Fatou’s Lemma (see e.g. Ash [3]),
which proves that \((\mathbf {a}^*,g^*)\) is an optimal solution of (4.2).
In addition \(g^*\in \mathbf {P}[\mathbf {x}]_d\) is an optimal solution of the inner minimization problem in (7.8) with \(\mathbf {a}:=\mathbf {a}^*\). Otherwise an optimal solution \(h\in \mathbf {P}[\mathbf {x}]_d\) of (7.8) with \(\mathbf {a}=\mathbf {a}^*\) would yield a solution \((\mathbf {a}^*,h)\) with associated cost \(\int _{\mathbb {R}^n}\exp (-h)\) strictly smaller than \(\rho \), a contradiction.
Hence by Theorem 3.2 (applied to problem (7.8)) there is a finite Borel measure \(\mu ^*\in \mathcal {M}(\mathbf {K}-\mathbf {a}^*)_+\) such that
And so \(\mu ^*\) is supported on the set
Invoking again [1, Theorem 2.1.1, p. 39], there exists an atomic measure \(\nu ^*\in \mathcal {M}(\mathbf {K}-\mathbf {a}^*)_+\) supported on at most \(\binom{n-1+d}{d}\) points of \(\mathbf {K}-\mathbf {a}^*\) with the same moments of order \(d\) as \(\mu ^*\). \(\square \)
Lasserre, J.B. A generalization of Löwner-John’s ellipsoid theorem. Math. Program. 152, 559–591 (2015). https://doi.org/10.1007/s10107-014-0798-5