Abstract
We study the expected value of support functions of random polytopes in a certain direction, where the random polytope is given by independent random vectors uniformly distributed in an isotropic convex body. All results are obtained using probabilistic estimates in terms of Orlicz norms that were not used in this connection before.
1 Introduction and Notation
The study of random polytopes began nearly 150 years ago with Sylvester and his famous four-point problem, which asks for the probability that the convex hull of four randomly chosen points in a planar region is a quadrilateral; the answer depends on the probability distribution of the random points. It was the starting point of an extensive line of research, which Rényi and Sulanke continued in their groundbreaking work [30] from 1963, studying expectations of various basic functionals of random polytopes. Important quantities are the expectations, variances and distributions of those functionals, and their study combines convex geometry with geometric analysis and geometric probability (see also [2, 29]).
In the last 30 years a tremendous effort has been made to explore properties of random polytopes, as they gained more and more importance due to many applications and connections to other fields. These can be found not only in statistics (extreme points of random samples) and convex geometry (approximation of convex sets), but also in computer science in the analysis of the average complexity of algorithms [22] and optimization [5], and even in biology [33]. In 1989, Milman and Pajor revealed a deep connection to functional analysis, proving that the expected volume of a certain random simplex is closely related to the isotropic constant of a convex set, a fundamental quantity in convex geometry and the local theory of Banach spaces [17].
Since Gluskin’s result [8], random polytopes are known to provide many examples of convex bodies (and related normed spaces) with a “pathologically bad” behavior of various parameters of a linear and geometric nature (see for instance the survey [16] and references therein). Consequently, they were also a natural candidate for a potential counterexample for the hyperplane conjecture. The isotropic constant of certain classes of random polytopes has been studied in [1, 7] and [12], showing that they do not provide a counterexample for the hyperplane conjecture.
Some other recent developments in the study of random polytopes can be found in [7] or [21], where the authors studied the relation between certain parameters of a random polytope in an isotropic convex body and the isotropic constant of the body. Their results provide sharp estimates whenever \(n^{1+\delta} \leq N \leq e^{\sqrt{n}}\) for some δ>0. However, their method does not cover the case N∼n, and a new approach seems to be needed. Our paper serves this purpose, providing a new tool in the study of random polytopes with which results are obtained in the range \(n \leq N \leq e^{\sqrt{n}}\). More precisely, we estimate the expected value of support functions of random polytopes in a fixed direction, using a representation of this parameter via Orlicz norms.
Even though the motivation is of a geometrical nature, the tools we use are mainly probabilistic and analytical, involving Orlicz norms and therefore spaces which naturally appear in Banach space theory. It is interesting that those spaces, as we will see, also naturally appear in the study of certain parameters of random polytopes. Hence, this interplay between convex geometry and classical Orlicz spaces is attractive both from the analytical and from the geometrical point of view.
Before stating the exact results, we fix some notation and recall basic definitions. A convex body K⊂ℝ^n is a compact convex set with non-empty interior. It is called symmetric if −x∈K whenever x∈K. We denote its volume (Lebesgue measure) by |⋅|. A convex body is said to be in isotropic position if it has volume 1 and satisfies the following two conditions:
-
\(\int_{K} x\,dx=0\) (center of mass at 0),
-
\(\int_{K}\langle x,\theta\rangle^{2}\,dx=L_{K}^{2}\ \forall\theta \in S^{n-1}\),
where \(L_{K}\) is a constant independent of θ, which is called the isotropic constant of K. Here, 〈⋅,⋅〉 denotes the standard scalar product in ℝ^n.
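As a quick illustration (our own, not from the paper): the cube \(Q=[-\frac{1}{2},\frac{1}{2}]^{n}\) is in isotropic position with \(L_{Q}^{2}=\frac{1}{12}\), since \(\int_{Q}\langle x,\theta\rangle^{2}\,dx=\sum_{i}\theta_{i}^{2}\int_{-1/2}^{1/2}t^{2}\,dt=\frac{1}{12}\) for every unit vector θ. A minimal Monte Carlo sketch in Python (`second_moment` is our own helper name):

```python
import math
import random

random.seed(0)

def second_moment(theta, samples=200_000):
    """Monte Carlo estimate of int_Q <x,theta>^2 dx over the unit-volume
    cube Q = [-1/2, 1/2]^n (volume 1, so the integral is an expectation)."""
    n = len(theta)
    total = 0.0
    for _ in range(samples):
        x = [random.uniform(-0.5, 0.5) for _ in range(n)]
        total += sum(xi * ti for xi, ti in zip(x, theta)) ** 2
    return total / samples

# two different unit directions in R^5
n = 5
e1 = [1.0] + [0.0] * (n - 1)
g = [random.gauss(0, 1) for _ in range(n)]
norm = math.sqrt(sum(t * t for t in g))
theta = [t / norm for t in g]

# both estimates should be close to L_Q^2 = 1/12, independently of theta
print(second_moment(e1), second_moment(theta))
```

The two printed estimates agree up to simulation error, reflecting that the second moment does not depend on the direction θ.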
We will use the notation a∼b to express that there exist two positive absolute constants \(c_{1},c_{2}\) such that \(c_{1}a\leq b\leq c_{2}a\), and a∼ δ b in case the constants depend on some constant δ>0. Similarly, we write a≲b if there exists a positive absolute constant c such that a≤cb. The letters \(c,c',C,C',c_{1},c_{2},\dots\) will denote positive absolute constants whose values may change from line to line. We will write C(r) if the constant depends on some parameter r>0.
Let K be a convex body, and θ∈S^{n−1} a unit vector. The support function of K in the direction θ is defined by \(h_{K}(\theta)=\max\{\langle x,\theta\rangle : x\in K\}\). The mean width of K is
\(w(K)=2\int_{S^{n-1}}h_{K}(\theta)\,d\mu(\theta),\)
where dμ denotes the uniform probability measure on S^{n−1}.
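For the symmetric random polytopes considered below, the support function has the explicit form \(h_{\operatorname{conv}\{\pm x_{1},\ldots,\pm x_{N}\}}(\theta)=\max_{1\leq i\leq N}|\langle x_{i},\theta\rangle|\), and the mean width can be estimated by averaging over random directions. A small sketch (our own helper names, assuming only the formula above):

```python
import math
import random

def support(points, theta):
    """Support function of conv{+-x_1,...,+-x_N} in direction theta:
    h(theta) = max_i |<x_i, theta>|."""
    return max(abs(sum(xi * ti for xi, ti in zip(x, theta))) for x in points)

def mean_width(points, n, samples=5000, seed=1):
    """Monte Carlo estimate of w = 2 * average of h over uniform theta in S^{n-1}."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        g = [rng.gauss(0, 1) for _ in range(n)]
        nrm = math.sqrt(sum(t * t for t in g))
        acc += support(points, [t / nrm for t in g])
    return 2 * acc / samples

# Sanity check: with the standard basis as point set, the symmetric hull
# is the cross-polytope B_1^n, whose support function is the sup-norm.
n = 4
basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
theta = [0.5, -0.5, 0.5, 0.5]
print(support(basis, theta))          # equals max_i |theta_i| = 0.5
print(mean_width(basis, n))           # between 1 and 2, since B_1^4 is inside B_2^4
```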
Given an isotropic convex body K, let us consider the random polytope \(K_{N}=\operatorname{conv}\{\pm X_{1},\dots,\pm X_{N}\}\), where X 1,…,X N are independent random vectors uniformly distributed in K. It is known (see for instance [7] or [20]) that the expected value of the mean width of K N is bounded from above by
\({\mathbb{E}}w(K_{N}) \leq C L_{K}\sqrt{\log N},\quad (1)\)
where C is a positive absolute constant. In [7] the authors showed that if \(N\leq e^{\sqrt{n}}\),
As a consequence, they obtained
if the number of random points defining K N satisfies \(n^{1+\delta} \leq N \leq e^{\sqrt{n}}\) for some constant δ>0.
Now, let us be more precise and outline what we will prove in the following. First of all, by Fubini’s theorem, the expected value of the mean width of K N is the average over S^{n−1} of the expected value of the support function of K N in the direction θ:
\({\mathbb{E}}w(K_{N})=2\int_{S^{n-1}}{\mathbb{E}}h_{K_{N}}(\theta)\,d\mu(\theta).\)
Initially, we are interested in estimating \({\mathbb{E}} h_{K_{N}}(\theta)={\mathbb{E}}\max_{1\leq i \leq N}|\langle X_{i},\theta\rangle|\) for a fixed direction θ∈S^{n−1}, but we will also derive “high probability” (in the set of directions) results. To this end, we establish a completely new approach, applying probabilistic estimates in connection with Orlicz norms. These were first studied by Kwapień and Schütt in the discrete case in [14] and [15] and later extended by Gordon, Litvak, Schütt and Werner in [9] and [10] (for recent developments, see also [24, 25] and [26]). Using this method to estimate support functions of random polytopes is interesting in itself and introduces a new tool in convex geometry.
As we will see, the expected value of the mean width of a random polytope in (1) is equivalent to an average of Orlicz norms, i.e.,
\({\mathbb{E}}w(K_{N})\sim\int_{S^{n-1}}\bigl\lVert(1,\ldots,1)\bigr\rVert_{M_{\theta}}\,d\mu(\theta),\)
where \(M_{\theta}\) is the Orlicz function associated to K and the direction θ (see Sect. 2).
This is not just a nice representation, but an interesting observation in its own right: it carries information about the expected value of the mean width that is worth studying in more detail. Notice that averages of Orlicz norms naturally appear in functional analysis when studying symmetric subspaces of the classical Banach space L 1 (see [3, 14, 23], to mention just a few). More precisely, as shown in [14], every finite-dimensional symmetric subspace of L 1 is C-isomorphic to an average of Orlicz spaces (see [28] for the corresponding result for rearrangement-invariant spaces).
In Sect. 2 we will introduce the aforementioned Orlicz norm method that we will use throughout this paper to prove estimates for support functions of random polytopes.
In Sect. 3, with this approach, denoting by e j the canonical basis vectors of ℝ^n, we first compute \({\mathbb{E}}h_{K_{N}}(e_{j})\) when the isotropic convex body containing K N is the normalized \(\ell_{p}^{n}\) ball, i.e., \(D_{p}^{n}=\frac{B_{p}^{n}}{|B_{p}^{n}|^{\frac{1}{n}}}\). Namely, using these ideas, we prove the following:
Theorem 1
Let X 1,…,X N be independent random vectors uniformly distributed in \(D_{p}^{n}\), 1≤p≤∞, with n≤N≤e c′n, and \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\). Then, for all j=1,…,n,
\({\mathbb{E}}h_{K_{N}}(e_{j}) \sim (\log N)^{\frac{1}{p}}.\)
Many properties of random variables distributed in \(\ell_{p}^{n}\) balls have already been studied, see for instance [4, 31] and [32].
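For numerical experiments, points uniform in \(B_{p}^{n}\) can be generated with the representation from [4]: if g 1,…,g n are i.i.d. with density proportional to \(e^{-|t|^{p}}\) and W is an independent standard exponential random variable, then \((g_{1},\dots,g_{n})/(\sum_{i}|g_{i}|^{p}+W)^{1/p}\) is uniformly distributed in \(B_{p}^{n}\). A sketch in Python (the normalization to \(D_{p}^{n}\) is left out; `sample_Bpn` is our own helper name):

```python
import random

def sample_Bpn(n, p, rng):
    """One point uniform in B_p^n via the Barthe-Guedon-Mendelson-Naor
    representation: g_i has density ~ exp(-|t|^p), obtained as a signed
    (1/p)-th power of a Gamma(1/p, 1) variable; W ~ Exp(1)."""
    g = []
    for _ in range(n):
        r = rng.gammavariate(1.0 / p, 1.0) ** (1.0 / p)
        g.append(r if rng.random() < 0.5 else -r)
    w = rng.expovariate(1.0)
    denom = (sum(abs(t) ** p for t in g) + w) ** (1.0 / p)
    return [t / denom for t in g]

rng = random.Random(42)
n, p, N = 5, 3, 20_000
pts = [sample_Bpn(n, p, rng) for _ in range(N)]

# every point lies in B_p^n, and E ||X||_p^p = n/(n+p) for the uniform
# measure on B_p^n (the radius in the p-norm is U^{1/n} with U uniform)
norms_p = [sum(abs(t) ** p for t in x) for x in pts]
print(max(norms_p), sum(norms_p) / N)  # max < 1, mean close to 5/8
```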
By rotational invariance in the Euclidean case, we obtain the same estimate for the expected value of the mean width of a random polytope in \(D_{2}^{n}\), under milder conditions on the number of points N:
Corollary 2
Let X 1,…,X N be independent random vectors uniformly distributed in \(D_{2}^{n}\), with n≤N≤e n, and let \(K_{N}= \operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\). Then
\({\mathbb{E}}w(K_{N}) \sim \sqrt{\log N}.\)
In Sect. 4 we will use our approach to give a general upper bound for \({\mathbb{E}}h_{K_{N}}(\theta)\) when K is symmetric and under some smoothness conditions on the function \(h(t)=|K \cap\{ \langle x,\theta\rangle=t \} |^{\frac{1}{n-1}}\). This general case will include the case when \(K=D_{p}^{n}\) with 2≤p<∞ and θ=e j .
As proved in [21], the expected values of the intrinsic volumes (in particular the mean width) of K N are minimized when \(K=D_{2}^{n}\). Thus, \({\mathbb{E}}w(K_{N})\gtrsim\sqrt{\log N}\), and \({\mathbb{E}}w(K_{N})\sim L_{K}\sqrt{\log N}\) for bodies with bounded isotropic constant. We prove the existence of directions in which the expected value of the support function is bounded from above, respectively from below, by a constant times \(L_{K} \sqrt{\log N}\). As a consequence, we estimate the measure of the set of directions verifying such estimates; this is stated in the following corollary. Notice that the constant L K appears explicitly also in the lower bound.
Corollary 3
Let \(n \leq N \leq e^{\sqrt{n}}\), let K be an isotropic convex body in ℝ^n, and let X 1,…,X N be independent random vectors uniformly distributed in K. Let \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\). For every r>0, there exist positive constants C(r),C 1(r),C 2(r) such that
for a set of directions with measure greater than \(1-\frac{1}{N^{r}}\) and \(\frac{C(r)\sqrt{\log N}}{N^{r}}\) respectively.
All the estimates we prove using our approach hold when \(n\leq N\leq e^{\sqrt{n}}\). Thus, our method might provide a tool to prove \({\mathbb{E}}w(K_{N}) \sim L_{K} \sqrt{\log N}\) for this range of N and hence close the gap mentioned in [7], where the authors’ result was restricted to the case \(n^{1+\delta} \leq N \leq e^{\sqrt{n}}\), δ>0, and constants depending on δ.
2 Preliminaries
A convex function M:[0,∞)→[0,∞) with M(0)=0 and M(t)>0 for t>0 is called an Orlicz function. If there is a t 0>0 such that M(t)=0 for all t≤t 0, then M is called a degenerate Orlicz function. The dual function M ∗ of an Orlicz function M is given by the Legendre transform
\(M^{*}(t)=\sup_{s\in[0,\infty)}\bigl(st-M(s)\bigr).\)
Again, M ∗ is an Orlicz function, and M ∗∗=M. For instance, taking \(M(t)=\frac{1}{p}t^{p}\), p≥1, the dual function is given by \(M^{*}(t)=\frac{1}{p^{*}}t^{p^{*}}\) with \(\frac{1}{p^{*}}+\frac{1}{p}=1\). The n-dimensional Orlicz space \(\ell_{M}^{n}\) is ℝ^n equipped with the norm
\(\lVert x \rVert_{M}=\inf \Bigl\{\rho>0 : \sum_{i=1}^{n} M \Bigl(\frac{|x_{i}|}{\rho} \Bigr)\leq 1 \Bigr\}.\)
In case M(t)=t p, 1≤p<∞, we just have \(\lVert \cdot \rVert_{M}= \lVert\cdot \rVert_{p}\). For a detailed and thorough introduction to the theory of Orlicz spaces, we refer the reader to [13] and [27].
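A small numerical sketch (our own helper, hypothetical name `orlicz_norm`) computing this norm by bisection; for M(t)=t^p it reproduces the \(\ell_p\) norm:

```python
def orlicz_norm(x, M, tol=1e-10):
    """||x||_M = inf{rho > 0 : sum_i M(|x_i|/rho) <= 1}, found by bisection;
    rho -> sum_i M(|x_i|/rho) is decreasing in rho for an Orlicz function M."""
    lo, hi = tol, 1.0
    while sum(M(abs(t) / hi) for t in x) > 1.0:
        hi *= 2.0                      # grow until the constraint is satisfied
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(M(abs(t) / mid) for t in x) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi

# For M(t) = t^p the Orlicz (Luxemburg) norm is exactly the ell_p norm
p = 3
x = [1.0, -2.0, 2.0]
lp = sum(abs(t) ** p for t in x) ** (1.0 / p)
print(orlicz_norm(x, lambda t: t ** p), lp)  # both ~ 17^(1/3)
```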
In [10] the authors obtained the following result:
Theorem 4
([10, Lemma 5.2])
Let X 1,…,X N be i.i.d. random variables with finite first moments. For all s≥0, let
\(M(s)=\int_{0}^{s}\int_{\{\frac{1}{t}\leq \lvert X_{1}\rvert\}} \lvert X_{1}\rvert \,d\mathbb{P}\,dt.\)
Then, for all \(x=(x_{i})_{i=1}^{N}\in{\mathbb{R}}^{N}\),
\(c_{1}\lVert x\rVert_{M}\leq {\mathbb{E}}\max_{1\leq i\leq N}\lvert x_{i}X_{i}\rvert\leq c_{2}\lVert x\rVert_{M},\)
where \(c_{1},c_{2}\) are positive absolute constants.
Obviously, the function \(M(s)=\int_{0}^{s}\int_{\{\frac{1}{t}\leq \lvert X\rvert\}} \lvert X\rvert \,d\mathbb{P}\,dt\) is non-negative and convex, since \(\int_{\{\frac{1}{t}\leq \lvert X \rvert\}} \lvert X \rvert \,d\mathbb{P}\) is increasing in t. Furthermore, we have M(0)=0, and M is continuous. One can easily show that this Orlicz function can also be written in the following way:
\(M(s)=\int_{\{\lvert X\rvert\geq\frac{1}{s}\}}\bigl(s\lvert X\rvert-1\bigr)\,d\mathbb{P}.\)
As a corollary, we obtain the following result, which is the one we use to estimate the support functions of random polytopes.
Corollary 5
Let X 1,…,X N be i.i.d. random vectors in ℝ^n, and let \(K_{N}=\operatorname{conv}\{\pm X_{1}, \ldots,\pm X_{N}\}\). Let θ∈S^{n−1} and
\(M(s)=\int_{0}^{s}\int_{\{\frac{1}{t}\leq\lvert\langle X_{1},\theta\rangle\rvert\}}\lvert\langle X_{1},\theta\rangle\rvert\,d\mathbb{P}\,dt.\)
Then
\({\mathbb{E}}h_{K_{N}}(\theta)={\mathbb{E}}\max_{1\leq i\leq N}\lvert\langle X_{i},\theta\rangle\rvert\sim\inf \Bigl\{s>0 : M \Bigl(\frac{1}{s} \Bigr)\leq\frac{1}{N} \Bigr\}.\)
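To see Corollary 5 in action in a toy case (our own example, not from the paper): take K=[−1/2,1/2]^n and θ=e 1, so that 〈X,θ〉 is uniform on [−1/2,1/2]. A direct Fubini computation gives M(u)=u/4−1+1/u for u≥2 and M(u)=0 otherwise, while \({\mathbb{E}}\max_{1\leq i\leq N}|\langle X_{i},e_{1}\rangle|=\frac{N}{2(N+1)}\) exactly. The sketch below compares \(\inf\{s>0:M(\frac{1}{s})\leq\frac{1}{N}\}\) with this exact value:

```python
def M_uniform(u):
    """Orlicz function of Corollary 5 for <X, e_1> uniform on [-1/2, 1/2]:
    integrating (u|t| - 1) over {|t| >= 1/u} with density 2 on [0, 1/2]
    gives u/4 - 1 + 1/u for u >= 2, and 0 for u < 2 (empty domain)."""
    return u / 4.0 - 1.0 + 1.0 / u if u >= 2.0 else 0.0

def s_star(N, tol=1e-12):
    """inf{s > 0 : M(1/s) <= 1/N}, by bisection (M(1/s) decreases in s)."""
    lo, hi = tol, 0.5           # M(1/0.5) = M(2) = 0 <= 1/N always holds
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if M_uniform(1.0 / mid) > 1.0 / N:
            lo = mid
        else:
            hi = mid
    return hi

N = 100
expected_max = 0.5 * N / (N + 1)   # exact E max_i |X_i| for this toy case
print(s_star(N), expected_max)     # comparable up to absolute constants
```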
3 Random Polytopes in Normalized \(\ell_{p}^{n}\)-Balls
In this section we consider random polytopes \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\), where X 1,…,X N are independent random vectors uniformly distributed in the normalized \(\ell_{p}^{n}\) ball \(D_{p}^{n}=\frac{B_{p}^{n}}{|B_{p}^{n}|^{\frac{1}{n}}}\). Let us recall that the volume of \(B_{p}^{n}\) equals
\(|B_{p}^{n}|=\frac{\bigl(2\Gamma (1+\frac{1}{p} ) \bigr)^{n}}{\Gamma (1+\frac{n}{p} )},\)
and so, using Stirling’s formula, we have that \(|B_{p}^{n}|^{1/n} \sim n^{-\frac{1}{p}}\) and \(\frac{|B_{p}^{n-1}|}{|B_{p}^{n}|}\sim n^{\frac{1}{p}}\).
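These asymptotics are easy to check numerically from the Gamma-function formula for \(|B_{p}^{n}|\) (a sketch with our own helper name `vol_Bpn`, working in logarithms to avoid overflow):

```python
import math

def vol_Bpn(n, p):
    """log |B_p^n|, with |B_p^n| = (2*Gamma(1+1/p))^n / Gamma(1+n/p)."""
    return n * (math.log(2.0) + math.lgamma(1.0 + 1.0 / p)) \
        - math.lgamma(1.0 + n / p)

# |B_p^n|^{1/n} * n^{1/p} stays bounded above and below by absolute
# constants, i.e. |B_p^n|^{1/n} ~ n^{-1/p}
for p in (1, 2, 4):
    for n in (10, 100, 1000):
        ratio = math.exp(vol_Bpn(n, p) / n + math.log(n) / p)
        print(p, n, round(ratio, 3))
```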
We are going to estimate \({\mathbb{E}}h_{K_{N}}(e_{j})\) using the Orlicz norm approach introduced in Sect. 2. To do so, we need to compute the Orlicz function M from Corollary 5; this is done in the following lemma.
Lemma 6
Let 1≤p<∞, and M:[0,∞)→[0,∞) be the function
Then, if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\),
Also, if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\),
Proof
The (n−1)-dimensional volume \(|D_{p}^{n}\cap\{ \langle x,e_{j}\rangle=t\} |\) equals
By Fubini’s theorem we have that if \(s\geq|B_{p}^{n}|^{1/n}\),
Otherwise M is 0. Integration by parts yields
Now, making the change of variables
we obtain
Therefore,
if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\) and 0 otherwise, which is the expression in (3). The first term in the previous sum equals
and integration by parts yields that this equals
The integral inside the second term equals
and, integrating by parts, this equals
and so, the second term above equals
Thus, adding the two terms we have that if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\),
which is the expression in (4). □
Now we are going to prove Theorem 1. It will be a consequence of the next two propositions, in which we prove the upper and the lower bound for \({\mathbb{E}}h_{K_{N}}(e_{j})\), respectively.
Proposition 7
For every n,N∈ℕ, with n≤N, and every 1≤p<∞, we have that if X 1,…,X N are independent random vectors uniformly distributed in \(D_{p}^{n}\), then
for all j=1,…,n.
Remark 1
Notice that for p=2, this result is analogous to the one for Gaussian random vectors.
Proof
If p≥2 and \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\), the second term in the expression of \(M (\frac{1}{s} )\) given by (3) is negative, and so
Integration by parts gives
Take \(s_{0}=\frac{1}{2^{\frac{1}{p}}|B_{p}^{n}|^{\frac{1}{n}}}\min \{ \alpha (\frac{p}{n-1+p} )(\log N),1 \}^{\frac{1}{p}}\), α>0 to be specified later. Since \(s_{0}^{p}|B_{p}^{n}|^{\frac{p}{n}}\leq\frac{1}{2}\), there exists a constant c such that
Take \(\alpha=\frac{2}{c}\). If the minimum in the definition of s 0 is \(\frac{1}{2}\), then trivially we have
If not, then
Since \(|B_{p}^{n-1}|/|B_{p}^{n}| \sim n^{1/p}\), we get
when N≥N 0 for some sufficiently large N 0∈ℕ. Altogether, for p≥2, we obtain
where C is an absolute positive constant. The minimum is 1 if and only if \(\log N\geq1+\frac{n-1}{p}\). In this case the upper bound we obtain is \(\frac{C}{|B_{p}^{n}|^{\frac{1}{n}}}\sim Cn^{\frac{1}{p}}\), and since n−1≤plogN, it is at most \(C(\log N)^{\frac{1}{p}}\). If the minimum is not 1, then since \(|B_{p}^{n}|^{\frac{1}{n}}\sim n^{-\frac{1}{p}}\), we again obtain an upper bound of order \((\log N)^{\frac{1}{p}}\).
If p∈[1,2], we use that in the representation of \(M (\frac{1}{s} )\) given by (4) only the first term is positive and so
Take \(s_{0}=\frac{1}{2^{\frac{1}{p}}|B_{p}^{n}|^{\frac{1}{n}}}\min \{ \alpha (\frac{p}{n-1+2p} )(\log N),1 \}^{\frac{1}{p}}\), α>0 to be specified later. Since \(s_{0}^{p}|B_{p}^{n}|^{\frac{p}{n}}\leq\frac{1}{2}\), there exists a constant c such that
Take \(\alpha=\frac{2}{c}\). If the minimum in the definition of s 0 is \(\frac{1}{2}\), then trivially we have
If not, then
Since \(|B_{p}^{n-1}|/|B_{p}^{n}| \sim n^{1/p}\) and p∈[1,2], we get
when N≥N 0 for some sufficiently large N 0∈ℕ. Altogether, for 1≤p≤2, we obtain
where C is an absolute positive constant. □
In order to prove the lower bound for \({\mathbb{E}}h_{K_{N}}(e_{j})\), we need the following two technical results:
Lemma 8
Let α,β∈ℝ∖{−1}. Then we have
Proof
We consider ∫sinα+2(θ)cosβ(θ) dθ. Integration by parts yields
Since cosβ+2(θ)=cosβ(θ)(1−sin2(θ)), we obtain
Thus,
and so
□
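The reduction obtained from this integration by parts can be checked numerically. In the rearranged form (our rearrangement, which follows by solving the two identities in the proof for the integral),
\(\int_{0}^{x}\sin^{\alpha+2}\theta\cos^{\beta}\theta\,d\theta=\frac{\alpha+1}{\alpha+\beta+2}\int_{0}^{x}\sin^{\alpha}\theta\cos^{\beta}\theta\,d\theta-\frac{\sin^{\alpha+1}x\,\cos^{\beta+1}x}{\alpha+\beta+2},\)
a sketch verifying it with a composite Simpson rule:

```python
import math

def simpson(f, a, b, m=20_000):
    """Composite Simpson rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def I(alpha, beta, x):
    """int_0^x sin^alpha(t) cos^beta(t) dt, numerically."""
    return simpson(lambda t: math.sin(t) ** alpha * math.cos(t) ** beta, 0.0, x)

alpha, beta, x = 3.5, 2.0, 1.0
lhs = I(alpha + 2, beta, x)
rhs = ((alpha + 1) * I(alpha, beta, x)
       - math.sin(x) ** (alpha + 1) * math.cos(x) ** (beta + 1)) \
    / (alpha + beta + 2)
print(lhs, rhs)  # the two sides agree
```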
As a corollary, we obtain the kth iteration of Lemma 8.
Corollary 9
Let α,β∈ℝ∖{−1}. Then, for any k∈ℕ, we have
We will now prove the lower estimate.
Proposition 10
There exists a positive absolute constant c′ such that for every n,N∈ℕ, with n≤N≤e c′n, and every 1≤p<∞, we have that if X 1,…,X N are independent random vectors uniformly distributed on \(D_{p}^{n}\), then
for all j=1,…,n.
Proof
We start with the case 1<p≤2 where we use the recursion formula. Since 1<p≤2, using the representation of M in (3), we have that
Using Corollary 9 with \(\alpha= \frac{2n}{p}-\frac{2}{p}+3\) and \(\beta= \frac{2}{p}-3\), we have −1≤β+1<0, and for any k∈ℕ, we get
Since \(\beta+1=\frac{2}{p}(1-p)\), we get
So this yields
If we choose k=n and take into account that 1<p≤2, we get
We take \(s_{0}=\frac{\gamma^{\frac{1}{p}}(\log N)^{\frac{1}{p}}}{|B_{p}^{n}|^{\frac{1}{n}}n^{\frac{1}{p}}}\) with γ a constant to be chosen later. Then, since N≤e n, we obtain
Choosing γ small enough, so that c 1 γ<1, we get
if N≥N 0 for some N 0∈ℕ large enough. Therefore, there exists an absolute positive constant c such that
Now, let us consider the easier case where p=1. In this case, we have
If we now choose s 0=αlogN, where α is a constant to be chosen later, we obtain
and so, choosing α a constant small enough so that cα<1, we obtain that
whenever N≥N 0. Therefore, if p=1, there exists an absolute positive constant c such that
Now, let us treat the case p≥2. We will assume that \(p-1\leq c\frac{n}{\alpha\log N}\), where α is a constant that will be determined later, and c is a sufficiently small absolute constant. We will also assume that \(p\leq N^{\frac{1}{4}}\). We have seen that the second term in (3) equals
and so if p≥2, the second term in the expression (3) defining \(M (\frac{1}{s} )\) is greater than or equal to
Integration by parts yields that this quantity equals
Thus, putting this together with the first term, we have that if p≥2,
Using integration by parts, the first term in the previous expression equals
Using the recursion formula in Corollary 9, we obtain that for any k∈ℕ, this quantity equals
Estimating the cosine in the denominator inside the integral by the value at its extreme point, we obtain that this quantity is greater than
Since for every m, we have that \(\frac{2\frac{n}{p}+2m}{2\frac{n-1}{p}+2m+2}=1-\frac{2-\frac{2}{p}}{2\frac{n-1}{p}+2m+2}\leq1\), this expression is greater than
Hence,
We take
Then,
if n≥n 0. On the other hand, choosing k so that \(k+1=\frac{2n}{\alpha (p-1)\log N}\), we have
where the last inequality holds because of our assumptions on p. This last quantity is greater than
if N≥N 0. Taking c small enough so that 6e −1(1−c)>2.1, we have that
since we are assuming that \(p\leq N^{\frac{1}{4}}\). Taking α such that \(c_{1}\alpha+\frac{1}{2}<1\), we obtain
if N≥N 1 and n≥n 0 for some n 0,N 1 big enough. Therefore,
where N≥N 0, and C is a positive absolute constant.
Now we consider the case \(p\geq c\frac{n}{\log N}\) or p≥N 1/4. In that case we choose
Then
We want the latter expression to be greater than or equal to N^{−1}, i.e.,
which is equivalent to
To obtain this, it is enough to show
and since \(p\geq c\frac{n}{\log N}\) and N≤e c′n, to obtain the latter inequality, it is enough to have
But
if c′ is small enough. So we obtain the estimate. If \(p\geq N^{\frac{1}{4}}\), we immediately obtain
for N≥N 0. Therefore, in these two cases, we obtain the estimate
□
Remark 2
In the case p=∞ it is very easy to check that
and so \({\mathbb{E}}h_{K_{N}}(e_{j}) \sim1\).
4 General Results
Using our approach, we will now prove more general bounds for symmetric isotropic convex bodies. In the first theorem we assume some mild technical conditions which are verified by the \(\ell_{p}^{n}\) balls (p≥2). In this way we recover the upper estimates proved in the previous section.
Since \({\mathbb{E}}h_{K_{N}}(\theta) \sim\inf \{ s>0 : M_{\theta } ( \frac{1}{s} ) \leq\frac{1}{N} \}\), it seems natural to study for which value of s
As one might expect, this value of s is of the order \(L_{K} \sqrt{\log N}\). As a consequence of Chebyshev’s inequality, we will obtain probability estimates for the set of directions satisfying \({\mathbb{E}}h_{K_{N}}(\theta) \leq C L_{K} \sqrt {\log N}\) or \({\mathbb{E}}h_{K_{N}}(\theta) \geq C L_{K} \sqrt{\log N}\).
Theorem 11
Let K be a symmetric and isotropic convex body, n≤N, θ∈S n−1, and X 1,…,X N be independent random vectors uniformly distributed in K. Define \(h(t)= | K \cap\{ \langle x,\theta\rangle=t\}|^{\frac{1}{n-1}}\). Assume that h is twice differentiable and that h′(t)≠0 for all t∈(0,h K (θ)). Assume also that −h′(t)/t is increasing and that h(h K (θ))=0. Then,
where α and C are positive absolute constants with α>C.
Proof
First of all, notice that h is a concave function on its support; this follows from the Brunn–Minkowski inequality. Then, using Theorem 4, we get
Integration by parts yields
Since h′(t)−th″(t)≥0, we have
Again we use integration by parts and get
Furthermore, since we have h′(t)−th″(t)≥0, we get
Thus,
Choosing
we have that there exists a positive constant c 1 such that
Since K is isotropic, s 0≤(n+1)L K . Therefore,
By Hensley’s result (see [11]), \(L_{K} \sim\frac{1}{|K\cap \theta^{\bot}|}\), and because n≤N, we have
Taking α so that c 1 α>2, we have \(M(\frac{1}{s_{0}}) \leq\frac{1}{N}\) for N≥N 0 for some N 0∈ℕ big enough. □
With the method introduced in Sect. 2, we are also able to prove the following general result, which leads to estimates of the support function of random polytopes in symmetric isotropic convex bodies in certain directions:
Theorem 12
Let \(n \leq N \leq e^{\sqrt{n}}\), let K be a symmetric isotropic convex body in ℝ^n, and let X 1,…,X N be independent random vectors uniformly distributed in K. Then,
and
where C 1,C 2 are positive absolute constants.
Consequently, if \(\tilde{s}\) is chosen such that
then \(\tilde{s} \sim L_{K} \sqrt{\log N}\).
In order to prove this theorem, we need the following proposition:
Proposition 13
Let K be a symmetric convex body in ℝn of volume 1. Let s>0, θ∈S n−1, and M θ be the Orlicz function associated to the random variable 〈X,θ〉, where X is uniformly distributed in K. Then,
where \(M_{\langle\theta,e_{1}\rangle}\) is the Orlicz function associated to the random variable 〈θ,e 1〉 with θ uniformly distributed on S n−1. For any \(s\leq \lVert x \rVert_{2}\),
and 0 otherwise.
Proof
Using the definition of M θ , we obtain
where the last equality is obtained by the change of variable \(t=\frac{u}{ \lVert x \rVert_{2}}\). Hence, by the rotational invariance of S n−1,
Now, let us compute \(M_{ \langle\theta,e_{1}\rangle }\). For any s>1, otherwise the function is 0, we have
The change of variables \(\frac{1}{t} = \cos y\) yields
□
Since the expected mean width of K N is minimized when \(K=D_{2}^{n}\), it is natural to expect that, for given s, the average \(\int_{S^{n-1}}M_{\theta}(\frac{1}{s} )\,d\mu(\theta)\) is also minimized when \(K=D_{2}^{n}\). We prove this, using the above representation, in the following corollary:
Corollary 14
Let K be a symmetric convex body in ℝn of volume 1, and let s>0. Then
where \(M_{D_{2}^{n},\theta}\) denotes the Orlicz function associated to \(D_{2}^{n}\).
Proof
By (5) and the facts that \(M_{\langle\theta,e_{1}\rangle}\) is increasing and \(|K|=|D_{2}^{n}|=1\) we have that if r n is the radius of \(D_{2}^{n}\),
□
Now, we give the proof of Theorem 12:
Proof
By (6), if \(\lVert x \rVert_{2} \geq s\), we have
Integration by parts yields
We start with the upper bound where we will use Paouris’ result about the concentration of mass on isotropic convex bodies from [18]. First of all, we have
From (5) and since \(M_{\langle\theta,e_{1}\rangle} (\frac{ \lVert x \rVert_{2}}{s} ) = 0\) for \(s> \lVert x \rVert_{2}\), we get
We choose \(s_{0} = \sqrt{\alpha} L_{K} \sqrt{\log N}\), with α>0 a constant to be chosen later. Then, if \(N \leq e^{\sqrt{n}}\),
We choose γ>0 such that \(c_{1}\gamma- \frac{1}{2}>1\) and then α>0 so that \(\frac{c_{1}\alpha}{\gamma^{2}}>1\). Then,
for \(N\leq e^{\sqrt{n}}\) and N≥N 0.
To prove the lower bound, we use the recursion formula (9). For \(\lVert x \rVert_{2} \geq s \) and any k∈ℕ,
Taking k=n, we have
Thus,
Take \(s_{1} = \sqrt{\beta} L_{K} \sqrt{\log N}\), β>0 a constant to be chosen later. Then
Using the small ball probability result proved in [19], we get that there exists a constant c 5>0 such that
for N≤e n. Therefore,
where the inequality before the last one holds because \(\lVert x \rVert_{2}^{2} \geq c_{5}^{2} n L_{K}^{2}\). We take β small enough, so that c 6 β<1 and \(2\sqrt{\beta}\sqrt{\log N} \leq c_{5} \sqrt{n}\). Then
for N≥N 0 and N≤e n. Hence,
□
Obviously, the theorem implies that there are directions θ 1,θ 2∈S^{n−1} in which the expectation of the support function is bounded from above, respectively from below, by a constant times \(L_{K}\sqrt{\log N}\). In Corollary 3 we give estimates for the measure of the set of directions verifying such bounds. However, we do not believe that the estimate we give for the measure of the set of directions satisfying the lower bound is optimal.
Proof of Corollary 3
To prove that the upper bound is true for most directions, we proceed as in the proof of Theorem 12. We choose s 0 like there and α, γ so that \(c_{1}\gamma- \frac{1}{2}> 2(r+1)\) and \(\frac{c_{1}\alpha}{\gamma^{2}}>2(r+1)\) and obtain
Then, by Chebyshev’s inequality,
Thus,
and so
To prove the probability estimate for the lower bound, we can assume that r<1. We proceed as in Theorem 12. We choose s 1 like there and take β small enough so that c 6 β<r. We obtain
Then, for any decreasing, positive and concave function f, we get
Using Jensen’s inequality, from this we obtain
Thus,
and therefore,
This means that
We choose \(f(t) = -at + a \max_{\theta\in S^{n-1}} M_{\theta}(\frac{1}{s_{1}})\), a>0. Then
and thus
From Hölder’s inequality we obtain
Because of our choice of s 1, we get
Therefore,
This yields
□
References
Alonso-Gutiérrez, D.: On the isotropy constant of random convex sets. Proc. Am. Math. Soc. 136(9), 3293–3300 (2008)
Bárány, I.: Random polytopes, convex bodies, and approximation. In: Stochastic Geometry. Lecture Notes in Mathematics, vol. 1892, pp. 77–118 (2007)
Bretagnolle, J., Dacunha-Castelle, D.: Application de l’étude de certaines formes linéaires aléatoires au plongement d’espaces de Banach dans les espaces L p . Ann. Sci. Éc. Norm. Super. 2, 437–480 (1969)
Barthe, F., Guédon, O., Mendelson, S., Naor, A.: A probabilistic approach to the geometry of the \(\ell_{p}^{n}\) ball. Ann. Probab. 33, 480–513 (2005)
Borgwardt, K.H.: The Simplex Method: a Probabilistic Analysis. Algorithms and Combinatorics, vol. 1. Springer, Berlin (1987)
Dafnis, N., Giannopoulos, A., Guédon, O.: On the isotropic constant of random polytopes. Adv. Geom. 10, 311–322 (2010)
Dafnis, N., Giannopoulos, A., Tsolomitis, A.: Quermaßintegrals and asymptotic shape of random polytopes in an isotropic convex body (2012). Preprint
Gluskin, E.D.: The diameter of the Minkowski compactum is approximately equal to n. Funkc. Anal. Prilozh. 15(1), 72–73 (1981). English translation: Funct. Anal. Appl. 15(1), 57–58 (1981)
Gordon, Y., Litvak, A.E., Schütt, C., Werner, E.: Orlicz norms of sequences of random variables. Ann. Probab. 30, 1833–1853 (2002)
Gordon, Y., Litvak, A.E., Schütt, C., Werner, E.: Uniform estimates for order statistics and Orlicz functions. Positivity 16, 1–28 (2012)
Hensley, D.: Slicing convex bodies, bounds of slice area in terms of the body’s covariance. Proc. Am. Math. Soc. 79, 619–625 (1980)
Klartag, B., Kozma, G.: On the hyperplane conjecture for random convex sets (2008). Manuscript
Krasnoselski, M.A., Rutickii, Y.B.: Convex Functions and Orlicz Spaces. Noordhoff, Groningen (1961)
Kwapień, S., Schütt, C.: Some combinatorial and probabilistic inequalities and their application to Banach space theory. Stud. Math. 82, 91–106 (1985)
Kwapień, S., Schütt, C.: Some combinatorial and probabilistic inequalities and their application to Banach space theory II. Stud. Math. 95, 141–154 (1989)
Mankiewicz, P., Tomczak-Jaegermann, N.: Quotients of finite-dimensional Banach spaces; random phenomena. In: Johnson, W.B., Lindenstrauss, J. (eds.) Handbook of the Geometry of Banach spaces, vol. 2, pp. 1201–1246. Elsevier, Amsterdam (2003)
Milman, V.D., Pajor, A.: Isotropic position and inertia ellipsoids and zonoids of the unit ball of a normed n-dimensional space. In: Geometric Aspects of Functional Analysis. Lecture Notes in Math., vol. 1376, pp. 64–104 (1989)
Paouris, G.: Concentration of mass on isotropic convex bodies. In: Geometric and Functional Analysis, vol. 16, pp. 1021–1049 (2006)
Paouris, G.: Small ball probability estimates for log-concave measures. Trans. Am. Math. Soc. 364, 287–308 (2012)
Pivovarov, P.: On the volume of caps and bounding the mean-width of an isotropic convex body. Math. Proc. Camb. Philos. Soc. 149, 317–331 (2010)
Paouris, G., Pivovarov, P.: A probabilistic take on isoperimetric-type inequalities (2012). Preprint
Preparata, F.P., Shamos, M.I.: Computational Geometry: an Introduction. Texts and Monographs in Computer Science. Springer, New York (1990)
Prochno, J.: Subspaces of L 1 and combinatorial inequalities in Banach space theory. Dissertation (2011)
Prochno, J.: A combinatorial approach to Musielak–Orlicz spaces. Banach J. Math. Anal. 7(1) (2013)
Prochno, J., Riemer, S.: On the maximum of random variables on product spaces. Houst. J. Math. (2012, to appear)
Prochno, J., Schütt, C.: Combinatorial inequalities and subspaces of L 1. Stud. Math. (2012, to appear)
Rao, M.M., Ren, Z.D.: Theory of Orlicz Spaces. Dekker, New York (1991)
Raynaud, Y., Schütt, C.: Some results on symmetric subspaces of L 1. Stud. Math. 89, 2–35 (1988)
Reitzner, M.: Random polytopes. In: New Perspectives in Stochastic Geometry, pp. 45–76. Oxford University Press, Oxford (2010)
Rényi, A., Sulanke, R.: Über die konvexe Hülle von n zufällig gewählten Punkten. Z. Wahrscheinlichkeitstheor. Verw. Geb. 2, 75–84 (1963)
Schechtman, G., Zinn, J.: On the volume of the intersection of two \(L_{p}^{n}\) balls. Proc. Am. Math. Soc. 110, 217–224 (1990)
Schechtman, G., Zinn, J.: Concentration on the \(\ell_{p}^{n}\) ball. In: Geometric Aspects of Functional Analysis (Notes of GAFA Seminar). Lecture Notes in Math., vol. 1745, pp. 245–256. Springer, Berlin (2000)
Solomon, H.: Geometric Probability. Regional Conference Series in Applied Mathematics, vol. 28. SIAM, Philadelphia (1989)
Acknowledgements
This work was done while the authors were postdoctoral fellows at the Department of Mathematical and Statistical Sciences at the University of Alberta. We would like to thank the department for providing such a good environment and working conditions. In particular, we would like to thank Nicole Tomczak-Jaegermann and Alexander Litvak for pointing out the problem to us and for useful comments.
D. Alonso-Gutiérrez is partially supported by the grant MTM2010-16679.
Alonso-Gutiérrez, D., Prochno, J. Estimating Support Functions of Random Polytopes via Orlicz Norms. Discrete Comput Geom 49, 558–588 (2013). https://doi.org/10.1007/s00454-012-9468-7