1 Introduction and Notation

The study of random polytopes began with Sylvester and the famous four-point problem nearly 150 years ago. This problem asks for the probability that the convex hull of four randomly chosen points in a planar region forms a four-sided polygon; its solution depends on the probability distribution of the random points. It was the starting point of an extensive line of research, which Rényi and Sulanke continued in their groundbreaking work [30] from 1963, studying expectations of various basic functionals of random polytopes. Important quantities are expectations, variances and distributions of those functionals, and their study combines convex geometry with geometric analysis and geometric probability (see also [2, 29]).

In the last 30 years a tremendous effort has been made to explore properties of random polytopes, as they have gained more and more importance due to numerous applications and connections to various other fields. These can be found not only in statistics (extreme points of random samples) and convex geometry (approximation of convex sets), but also in computer science, in the analysis of the average complexity of algorithms [22] and in optimization [5], and even in biology [33]. In 1989, Milman and Pajor revealed a deep connection to functional analysis, proving that the expected volume of a certain random simplex is closely related to the isotropic constant of a convex set, a fundamental quantity in convex geometry and the local theory of Banach spaces [17].

Since Gluskin’s result [8], random polytopes are known to provide many examples of convex bodies (and related normed spaces) with a “pathologically bad” behavior of various parameters of a linear and geometric nature (see for instance the survey [16] and references therein). Consequently, they were also a natural candidate for a potential counterexample to the hyperplane conjecture. The isotropic constant of certain classes of random polytopes has been studied in [1, 7] and [12], showing that they do not provide a counterexample to the hyperplane conjecture.

Some other recent developments in the study of random polytopes can be found in [7] or [21], where the authors studied the relation between some parameters of a random polytope in an isotropic convex body and the isotropic constant of the body. Their results provide sharp estimates whenever \(n^{1+\delta} \leq N \leq e^{\sqrt{n}}\) for some δ>0. However, their method does not cover the case where N is proportional to n, and it seems that a new approach is needed. Our paper serves this purpose, providing a new tool in the study of random polytopes with which results are obtained in the whole range \(n \leq N \leq e^{\sqrt{n}}\). More precisely, we will estimate the expected value of support functions of random polytopes for a fixed direction, using a representation of this parameter via Orlicz norms.

Even though the motivation is of a geometrical nature, the tools we use are mainly probabilistic and analytical, involving Orlicz norms and therefore spaces which naturally appear in Banach space theory. It is interesting that those spaces, as we will see, also naturally appear in the study of certain parameters of random polytopes. Hence, this interplay between convex geometry and classical Orlicz spaces is attractive both from the analytical and from the geometrical point of view.

Before stating the exact results, and to allow a better understanding, we start with some basic definitions. A convex body K⊂ℝn is a compact convex set with non-empty interior. It is called symmetric if −x∈K whenever x∈K. We will denote its volume (or Lebesgue measure) by |⋅|. A convex body is said to be in isotropic position if it has volume 1 and satisfies the following two conditions:

  • \(\int_{K} x\,dx=0\) (center of mass at 0),

  • \(\int_{K}\langle x,\theta\rangle^{2}\,dx=L_{K}^{2}\ \forall\theta \in S^{n-1}\),

where L K is a constant independent of θ, which is called the isotropic constant of K. Here, 〈⋅,⋅〉 denotes the standard scalar product in ℝn.
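
For instance, the cube \([-\frac{1}{2},\frac{1}{2}]^{n}\) is in isotropic position: it has volume 1, its center of mass is at the origin, and, by symmetry, for every θ∈S n−1,

$$\int_{[-\frac{1}{2},\frac{1}{2}]^{n}}\langle x,\theta\rangle^{2}\,dx = \sum_{i=1}^{n}\theta_{i}^{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}t^{2}\,dt = \frac{1}{12}, $$

so that its isotropic constant equals \(\frac{1}{2\sqrt{3}}\).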

We will use the notation \(a\sim b\) to express that there exist two positive absolute constants c 1,c 2 such that \(c_1 a\leq b\leq c_2 a\), and we write \(a\sim_{\delta}b\) in case the constants depend on some constant δ>0. Similarly, we write \(a\lesssim b\) if there exists a positive absolute constant c such that \(a\leq cb\). The letters c,c′,C,C′,c 1,c 2,… will denote positive absolute constants whose values may change from line to line. We will write C(r) if the constant depends on some parameter r>0.

Let K be a convex body, and θ∈S n−1 a unit vector. The support function of K in the direction θ is defined by \(h_K(\theta)=\max\{\langle x,\theta\rangle : x\in K\}\). The mean width of K is

$$w(K)=\int_{S^{n-1}}h_K(\theta)\,d\mu(\theta), $$

where μ denotes the uniform probability measure on S n−1.
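
With this normalization, for instance, \(h_{B_2^n}(\theta)=1\) for every θ∈S n−1, and hence \(w(B_2^n)=1\).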

Given an isotropic convex body K, let us consider the random polytope \(K_{N}=\operatorname{conv}\{\pm X_{1},\dots,\pm X_{N}\}\), where X 1,…,X N are independent random vectors uniformly distributed in K. It is known (see for instance [7] or [20]) that the expected value of the mean width of K N is bounded from above by

$${\mathbb{E}}w(K_N)\leq C L_K \sqrt{\log N}, $$

where C is a positive absolute constant. In [7] the authors showed that if \(N\leq e^{\sqrt{n}}\),

$${\mathbb{E}} \biggl( \frac{|K_N|}{|B_2^n|} \biggr)^{\frac{1}{n}} \geq C L_K \sqrt{\log\frac{N}{n}}. $$

As a consequence, they obtained

$${\mathbb{E}}w(K_N) \sim_{\delta} L_K \sqrt{\log N} $$

if the number of random points defining K N verifies \(n^{1+\delta} \leq N \leq e^{\sqrt{n}}\), δ>0 a constant.
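
One way to pass from this volume estimate to the mean width is Urysohn’s inequality, which in the above normalization reads \(( \frac{|K_N|}{|B_2^n|} )^{\frac{1}{n}}\leq w(K_N)\). Together with the upper bound above, it gives

$$c L_K \sqrt{\log\frac{N}{n}} \leq {\mathbb{E}}w(K_N)\leq C L_K \sqrt{\log N}, $$

and for \(N\geq n^{1+\delta}\) one has \(\log\frac{N}{n}\geq\frac{\delta}{1+\delta}\log N\), so the two bounds agree up to constants depending only on δ.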

Now, let us be more precise and outline what we will prove and study in the following. First of all, by Fubini’s theorem, the expected value of the mean width of K N is the average on S n−1 of the expected value of the support function of K N in the direction θ:

$$ {\mathbb{E}}w(K_N)= {\mathbb{E}}\int _{S^{n-1}} h_{K_N}(\theta)\,d\mu =\int _{S^{n-1}}{\mathbb{E}}h_{K_N}(\theta)\,d\mu. $$
(1)

Initially, in this paper we are interested in estimating \({\mathbb{E}} h_{K_{N}}(\theta)={\mathbb{E}}\max_{1\leq i \leq N}|\langle X_{i},\theta\rangle|\) for a fixed direction θ∈S n−1, but we will also derive “high probability” (in the set of directions) results. In order to do so, we establish a completely new approach applying probabilistic estimates in connection with Orlicz norms. Those were first studied by Kwapień and Schütt in the discrete case in [14] and [15] and later extended by Gordon, Litvak, Schütt and Werner in [9] and [10] (for recent developments, see also [24, 25] and [26]). Using this method to estimate support functions of random polytopes is interesting in itself and introduces a new tool in convex geometry.

As we will see, the expected value of the mean width of a random polytope in (1) is equivalent to an average of Orlicz norms, i.e.,

$${\mathbb{E}}w(K_N) \sim\int_{S^{n-1}}\big \lVert(1, \ldots,1) \big\rVert_{M_{\theta}} \,d\mu(\theta). $$

This is not just a nice representation; it is an observation that carries information about the expected value of the mean width and is worth studying in more detail. Notice that averages of Orlicz norms naturally appear in functional analysis when studying symmetric subspaces of the classical Banach space L 1 (see [3, 14, 23], just to mention a few). To be more precise, as shown in [14], every finite-dimensional symmetric subspace of L 1 is C-isomorphic to an average of Orlicz spaces (see [28] for the corresponding result for rearrangement invariant spaces).

In Sect. 2 we will introduce the aforementioned Orlicz norm method that we will use throughout this paper to prove estimates for support functions of random polytopes.

In Sect. 3, with this approach, denoting by e j the canonical basis vectors in ℝn, we first compute \({\mathbb{E}}h_{K_{N}}(e_{j})\) when the isotropic convex body in which K N lies is the normalized \(\ell_{p}^{n}\) ball, i.e., in \(D_{p}^{n}=\frac{B_{p}^{n}}{|B_{p}^{n}|^{\frac{1}{n}}}\). Namely, using these ideas, we prove the following:

Theorem 1

Let X 1,…,X N be independent random vectors uniformly distributed in \(D_{p}^{n}\), 1≤p≤∞, with \(n\leq N\leq e^{cn}\), and \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\). Then, for all j=1,…,n,

$${\mathbb{E}}h_{K_N}(e_j)={\mathbb{E}}\max_{1\leq i\leq N}\big| \langle X_i,e_j\rangle\big|\sim(\log N)^{\frac{1}{p}}. $$

Many properties of random variables distributed in \(\ell_{p}^{n}\) balls have already been studied, see for instance [4, 31] and [32].

By rotational invariance in the Euclidean case, we obtain the same estimate for the expected value of the mean width of a random polytope in \(D_{2}^{n}\), under milder conditions on the number of points N:

Corollary 2

Let X 1,…,X N be independent random vectors uniformly distributed in \(D_{2}^{n}\), with \(n\leq N\leq e^{n}\), and let \(K_{N}= \operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\). Then

$${\mathbb{E}}w(K_N)\sim\sqrt{\log N}. $$

In Sect. 4 we will use our approach to give a general upper bound for \({\mathbb{E}}h_{K_{N}}(\theta)\) when K is symmetric and under some smoothness conditions on the function \(h(t)=|K \cap\{ \langle x,\theta\rangle=t \} |^{\frac{1}{n-1}}\). This general case will include the case when \(K=D_{p}^{n}\) with 2≤p<∞ and θ=e j .

As proved in [21], the expected values of the intrinsic volumes (in particular, of the mean width) of K N are minimized when \(K=D_{2}^{n}\). Thus, we have \({\mathbb{E}}w(K_{N})\gtrsim\sqrt{\log N}\), and \({\mathbb{E}}w(K_{N})\sim L_{K}\sqrt{\log N}\) for those bodies with bounded isotropic constant. We prove the existence of directions such that the expected value of the support function in these directions is bounded from above, respectively from below, by a constant times \(L_{K} \sqrt{\log N}\). In fact, as a consequence, we estimate the measure of the set of directions verifying such estimates; this is stated in the following corollary. Notice that the constant L K appears explicitly also in the lower bound.

Corollary 3

Let \(n \leq N \leq e^{\sqrt{n}}\), K be an isotropic convex body in ℝn, and let X 1,…,X N be independent random variables uniformly distributed on K. Let \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\). For every r>0, there exist positive constants C(r),C 1(r),C 2(r) such that

$$\mathbb{E}h_{K_N}(\theta)\leq C_1(r)L_K\sqrt{\log N} \quad\mbox{and}\quad \mathbb{E}h_{K_N}(\theta)\geq C_2(r)L_K\sqrt{\log N} $$

for a set of directions with measure greater than \(1-\frac{1}{N^{r}}\) and \(\frac{C(r)\sqrt{\log N}}{N^{r}}\), respectively.

All the estimates we prove using our approach hold when \(n\leq N\leq e^{\sqrt{n}}\). Thus, our method might provide a tool to prove \({\mathbb{E}}w(K_{N}) \sim L_{K} \sqrt{\log N}\) for this range of N and hence close the gap mentioned in [7], where the authors’ result was restricted to the case \(n^{1+\delta} \leq N \leq e^{\sqrt{n}}\), δ>0, and constants depending on δ.

2 Preliminaries

A convex function M:[0,∞)→[0,∞) where M(0)=0 and M(t)>0 for t>0 is called an Orlicz function. If there is a t 0>0 such that M(t)=0 for all t≤t 0, then M is called a degenerated Orlicz function. The dual function \(M^{*}\) of an Orlicz function M is given by the Legendre transform

$$M^*(x) = \sup_{t\in[0,\infty)}\bigl(xt-M(t)\bigr). $$

Again, \(M^{*}\) is an Orlicz function, and \(M^{**}=M\). For instance, taking \(M(t)=\frac{1}{p}t^{p}\), p≥1, the dual function is given by \(M^{*}(t)=\frac{1}{p^{*}}t^{p^{*}}\) with \(\frac{1}{p^{*}}+\frac{1}{p}=1\). The n-dimensional Orlicz space \(\ell_{M}^{n}\) is ℝn equipped with the norm

$$\lVert x \rVert_M = \inf \Biggl\{ \rho>0 : \sum _{i=1}^n M \biggl(\frac{ \lvert x_i \rvert}{\rho} \biggr) \leq1 \Biggr\}. $$

In case M(t)=t p, 1≤p<∞, we just have \(\lVert \cdot \rVert_{M}= \lVert\cdot \rVert_{p}\). For a detailed and thorough introduction to the theory of Orlicz spaces, we refer the reader to [13] and [27].
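
A simple example, which is the one relevant for Theorem 4 and Corollary 5 below, is the norm of the all-ones vector \((1,\ldots,1)\in{\mathbb{R}}^{N}\): if M is strictly increasing, then directly from the definition,

$$\big\lVert(1,\ldots,1)\big\rVert_M = \inf \biggl\{ \rho>0 : N M \biggl(\frac{1}{\rho} \biggr)\leq1 \biggr\} = \frac{1}{M^{-1} (\frac{1}{N} )}. $$

In particular, for M(t)=t p this gives \(\lVert(1,\ldots,1)\rVert_M=N^{\frac{1}{p}}\), in accordance with \(\lVert\cdot\rVert_M=\lVert\cdot\rVert_p\).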

In [10] the authors obtained the following result:

Theorem 4

([10, Lemma 5.2])

Let X 1,…,X N be iid random variables with finite first moments. For all s≥0, let

$$M(s)=\int _0 ^s \int _{\{\frac{1}{t}\leq|X_1|\}} |X_1| \,d\mathbb{P} \,dt. $$

Then, for all \(x=(x_{i})_{i=1}^{N}\in{\mathbb{R}}^{N}\),

$$\mathbb{E} \max _{1\leq i \leq N}|x_iX_i| \sim \lVert x \rVert_M. $$

Obviously, the function

$$ M(s)=\int_0^s \int_{\{\frac{1}{t}\leq|X_1|\}} |X_1| \,d\mathbb{P} \,dt $$
(2)

is non-negative and convex, since \(\int_{\{\frac{1}{t}\leq \lvert X \rvert\}} \lvert X \rvert \,d\mathbb{P}\) is increasing in t. Furthermore, we have M(0)=0 and M is continuous. One can easily show that this Orlicz function M can also be written in the following way:

$$M(s) = \int_0^s \biggl(\frac{1}{t} \mathbb{P} \biggl( \lvert X \rvert\geq\frac{1}{t}\biggr) + \int _{\frac{1}{t}}^{\infty} \mathbb{P}\bigl( \lvert X \rvert\geq u\bigr) \,du \biggr) \,dt. $$
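
Indeed, for any a>0 one has the elementary identity

$$\int_{\{|X_1|\geq a\}} |X_1| \,d\mathbb{P} = a\,\mathbb{P}\bigl( \lvert X_1 \rvert\geq a\bigr) + \int_a^{\infty}\mathbb{P}\bigl( \lvert X_1 \rvert\geq u\bigr)\,du, $$

obtained by writing \(|X_1|=a+\int_a^{|X_1|}du\) on the event \(\{|X_1|\geq a\}\) and applying Fubini’s theorem; taking \(a=\frac{1}{t}\) and integrating in t over [0,s] gives the expression above.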

As a corollary, we obtain the following result, which is the one we use to estimate the support functions of random polytopes.

Corollary 5

Let X 1,…,X N be iid random vectors in ℝn, and let \(K_{N}=\operatorname{conv}\{\pm X_{1}, \ldots,\pm X_{N}\}\). Let θ∈S n−1 and

$$M_{\theta}(s)=\int_0^s \int _{\{\frac{1}{t}\leq|\langle X_1,\theta \rangle|\}} \big|\langle X_1,\theta\rangle\big| \,d\mathbb{P} \,dt. $$

Then

$${\mathbb{E}}h_{K_N}(\theta)\sim\inf \biggl\{ s>0 : M_{\theta } \biggl(\frac{1}{s} \biggr) \leq\frac{1}{N} \biggr \}. $$
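
Indeed, Corollary 5 is Theorem 4 applied to the iid random variables \(\langle X_i,\theta\rangle\) with x=(1,…,1): since \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\), we have \(h_{K_N}(\theta)=\max_{1\leq i\leq N}|\langle X_i,\theta\rangle|\), and, by the computation of the norm of the all-ones vector above,

$${\mathbb{E}}h_{K_N}(\theta) \sim \big\lVert(1,\ldots,1)\big\rVert_{M_{\theta}} = \inf \biggl\{ s>0 : M_{\theta} \biggl(\frac{1}{s} \biggr) \leq\frac{1}{N} \biggr\}. $$

Averaging over θ∈S n−1 and using (1) also yields the representation \({\mathbb{E}}w(K_N) \sim\int_{S^{n-1}}\lVert(1,\ldots,1)\rVert_{M_{\theta}}\,d\mu(\theta)\) stated in the Introduction.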

3 Random Polytopes in Normalized \(\ell_{p}^{n}\)-Balls

In this section we consider random polytopes \(K_{N}=\operatorname{conv}\{\pm X_{1},\ldots,\pm X_{N}\}\), where X 1,…,X N are independent random vectors uniformly distributed in the normalized \(\ell_{p}^{n}\) ball \(D_{p}^{n}=\frac{B_{p}^{n}}{|B_{p}^{n}|^{\frac{1}{n}}}\). Let us recall that the volume of \(B_{p}^{n}\) equals

$$\big|B_p^n\big| = \frac{(\varGamma(1+\frac{1}{p}))^n}{\varGamma(1+\frac{n}{p})}, $$

and so, using Stirling’s formula, we have that \(|B_{p}^{n}|^{1/n} \sim \frac{1}{n^{\frac{1}{p}}}\) and \(\frac{|B_{p}^{n-1}|}{|B_{p}^{n}|}\sim n^{\frac{1}{p}}\).
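
Both estimates can be checked directly. For instance, when \(\frac{n}{p}\geq1\), using Stirling’s formula \(\varGamma(1+x)\sim\sqrt{2\pi x} (\frac{x}{e} )^{x}\) (valid up to absolute constants for x≥1), we get

$$\big|B_p^n\big|^{\frac{1}{n}} = \frac{\varGamma (1+\frac{1}{p} )}{\varGamma (1+\frac{n}{p} )^{\frac{1}{n}}} \sim \varGamma \biggl(1+\frac{1}{p} \biggr) \biggl(\frac{pe}{n} \biggr)^{\frac{1}{p}} \sim\frac{1}{n^{\frac{1}{p}}}, $$

since \(\varGamma(1+\frac{1}{p})\) and \((pe)^{\frac{1}{p}}\) are bounded from above and below by absolute constants for 1≤p<∞; the estimate for \(\frac{|B_p^{n-1}|}{|B_p^n|}\) follows in the same way.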

We are going to estimate \({\mathbb{E}}h_{K_{N}}(e_{j})\) using the Orlicz norm approach introduced in Sect. 2. In order to do so, we need to compute the Orlicz function M introduced in Corollary 5. We do this in the following lemma.

Lemma 6

Let 1≤p<∞, and M:[0,∞)→[0,∞) be the function

$$M(s):= M_{e_j}(s)=\int_0^s \int _{\{x\in D_p^n: |\langle x,e_j\rangle|\geq\frac{1}{t}\}} \big|\langle x,e_j\rangle\big| \,dx \,dt. $$

Then, if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\),

(3)

Also, if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\),

(4)

Proof

The (n−1)-dimensional volume \(|D_{p}^{n}\cap\{ \langle x,e_{j}\rangle=t\} |\) equals

By Fubini’s theorem we have that if \(s\geq|B_{p}^{n}|^{1/n}\),

Otherwise M is 0. Integration by parts yields

Now, making the change of variables

$$\frac{|B_p^n|^{\frac{1}{n}}}{t} = (\cos\theta)^{\frac{2}{p}}\quad \Longrightarrow\quad \frac{dt}{d\theta} =|B_p^n|^{\frac{1}{n}} \frac{2}{p}\frac{\sin\theta}{(\cos\theta )^{1+\frac{2}{p}}}, $$

we obtain

Therefore,

if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\) and 0 otherwise, which is the expression in (3). The first term in the previous sum equals

$$\frac{4}{p(n-1+p)}\frac{|B_p^{n-1}|}{|B_p^n|}\int_0^{\cos ^{-1}(s|B_p^n|^{\frac{1}{n}})^{\frac{p}{2}}} \frac{(\sin\theta )^{2\frac{n-1}{p}+3}(\cos\theta)}{(\cos\theta)^{4-\frac{2}{p}}}\,d\theta, $$

and integration by parts yields that this equals

The integral inside the second term equals

and, integrating by parts, this equals

and so, the second term above equals

Thus, adding the two terms we have that if \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\),

which is the expression in (4). □

Now we are going to prove Theorem 1. It will be a consequence of the next two propositions, where we will prove the upper and lower bound for \({\mathbb{E}}h_{K_{N}}(e_{j})\) respectively.

Proposition 7

For every n,N∈ℕ with n≤N, and every 1≤p<∞, we have that if X 1,…,X N are independent random vectors uniformly distributed in \(D_{p}^{n}\), then

$$\mathbb{E} \max_{1\leq i \leq N}\big\lvert\langle X_i,e_j\rangle\big\rvert \lesssim (\log N )^{\frac{1}{p}} $$

for all j=1,…,n.

Remark 1

Notice that for p=2, this result is similar to the analogous one for Gaussian random vectors.

Proof

If p≥2 and \(s\leq\frac{1}{|B_{p}^{n}|^{\frac{1}{n}}}\), the second term in the expression of \(M (\frac{1}{s} )\) given by (3) is negative, and so

$$M \biggl(\frac{1}{s} \biggr) \leq \frac{4}{p(n-1+p)} \frac{|B_p^{n-1}|}{|B_p^n|} \int_0^{\cos^{-1}(s|B_p^n|^{\frac{1}{n}})^{\frac{p}{2}}} \frac{(\sin \theta)^{\frac{2(n-1)}{p}+3}}{(\cos\theta)^{3-\frac{2}{p}}} \,d\theta. $$

Integration by parts gives

Take \(s_{0}=\frac{1}{2^{\frac{1}{p}}|B_{p}^{n}|^{\frac{1}{n}}}\min \{ \alpha (\frac{p}{n-1+p} )(\log N),1 \}^{\frac{1}{p}}\), α>0 to be specified later. Since \(s_{0}^{p}|B_{p}^{n}|^{\frac{p}{n}}\leq\frac{1}{2}\), there exists a constant c such that

$$M \biggl(\frac{1}{s_0} \biggr)\leq\frac{2|B_p^{n-1}|}{(p-1)(n-1+p)|B_p^n|}\frac{1}{(s_0|B_p^n|^{\frac{1}{n}})^{p-1}} e^{-cs_0^p|B_p^n|^{\frac{p}{n}}\frac{n-1+p}{p}}. $$

Take \(\alpha=\frac{2}{c}\). If the minimum in the definition of s 0 is \(\frac{1}{2}\), then trivially we have

$$\mathbb{E} \max_{1\leq i \leq N} \big\lvert\langle X_i,e_1 \rangle \big\rvert \leq\frac{1}{|B_p^n|^{\frac{1}{n}}}. $$

If not, then

$$M \biggl(\frac{1}{s_0} \biggr)\leq\frac{2|B_p^{n-1}|e^{-\log N}}{(p-1)(n-1+p)|B_p^n| (\frac{1}{c}\frac{p}{n-1+p}\log N )^{\frac{p-1}{p}}}. $$

Since \(|B_{p}^{n-1}|/|B_{p}^{n}| \sim n^{1/p}\), we get

when N≥N 0 for some sufficiently large N 0∈ℕ. Altogether, for p≥2, we obtain

$$\mathbb{E} h_{K_N}(e_1) = \mathbb{E} \max_{1\leq i \leq N} \big\lvert \langle X_i,e_1\rangle \big\rvert \leq \frac{C}{|B_p^n|^{\frac{1}{n}}}\min \biggl\{ \biggl(\frac{p}{n-1+p} \biggr) (\log N),1 \biggr\}^{\frac{1}{p}}, $$

where C is an absolute positive constant. This minimum is 1 if and only if \(\log N\geq1+\frac{n-1}{p}\). In this case the upper bound we obtain is \(\frac{C}{|B_{p}^{n}|^{\frac{1}{n}}}\sim Cn^{\frac{1}{p}}\). Since n−1≤plogN, we have that the upper bound \(Cn^{\frac{1}{p}}\leq C(\log N)^{\frac{1}{p}}\). If the minimum is not 1, since \(|B_{p}^{n}|^{\frac{1}{n}}\sim\frac{1}{n^{\frac{1}{p}}}\), we also obtain an upper bound of the order \((\log N)^{\frac{1}{p}}\).

If p∈[1,2], we use that in the representation of \(M (\frac{1}{s} )\) given by (4) only the first term is positive and so

Take \(s_{0}=\frac{1}{2^{\frac{1}{p}}|B_{p}^{n}|^{\frac{1}{n}}}\min \{ \alpha (\frac{p}{n-1+2p} )(\log N),1 \}^{\frac{1}{p}}\), α>0 to be specified later. Since \(s_{0}^{p}|B_{p}^{n}|^{\frac{p}{n}}\leq\frac{1}{2}\), there exists a constant c such that

$$M \biggl(\frac{1}{s_0} \biggr)\leq\frac{2}{(n-1+p)(n-1+2p)}\frac{|B_p^{n-1}|}{|B_p^n|} \frac{1}{(s_0|B_p^n|^{\frac{1}{n}})^{2p-1}} e^{-cs_0^p|B_p^n|^{\frac{p}{n}}\frac{n-1+2p}{p}}. $$

Take \(\alpha=\frac{2}{c}\). If the minimum in the definition of s 0 is \(\frac{1}{2}\), then trivially we have

$$\mathbb{E} \max_{1\leq i \leq N} \big\lvert\langle X_i,e_1 \rangle \big\rvert \leq\frac{1}{|B_p^n|^{\frac{1}{n}}}. $$

If not, then

$$M \biggl(\frac{1}{s_0} \biggr)\leq\frac{2e^{-\log N}}{(n-1+p)(n-1+2p)}\frac{|B_p^{n-1}|}{|B_p^n|} \frac{1}{ (\frac{1}{c}\frac{p}{n-1+2p}\log N )^{\frac{2p-1}{p}}}. $$

Since \(|B_{p}^{n-1}|/|B_{p}^{n}| \sim n^{1/p}\) and p∈[1,2], we get

when N≥N 0 for some sufficiently large N 0∈ℕ. Altogether, for 1≤p≤2, we obtain

$$\mathbb{E} h_{K_N}(e_1) = \mathbb{E} \max_{1\leq i \leq N} \big\lvert \langle X_i,e_1\rangle \big\rvert \leq \frac{C}{|B_p^n|^{\frac{1}{n}}}\min \biggl\{\frac{\log N}{n},1 \biggr\}^{\frac{1}{p}}\leq C(\log N)^{\frac{1}{p}}, $$

where C is an absolute positive constant. □

In order to prove the lower bound for \({\mathbb{E}}h_{K_{N}}(e_{j})\), we need the two following technical results:

Lemma 8

Let α,β∈ℝ∖{−1}. Then we have

$$\int\sin^{\alpha}(\theta) \cos^{\beta}(\theta) \,d\theta= \frac{\sin^{\alpha+1}(\theta) \cos^{\beta+1}(\theta)}{\alpha+1} + \frac{\alpha+ \beta+ 2}{\alpha+1} \int\sin^{\alpha+2}(\theta ) \cos^{\beta}(\theta) \,d\theta. $$

Proof

We consider \(\int\sin^{\alpha+2}(\theta) \cos^{\beta}(\theta) \,d\theta\). Integration by parts yields

Since cosβ+2(θ)=cosβ(θ)(1−sin2(θ)), we obtain

Thus,

and so

 □

As a corollary, we obtain the kth iteration of Lemma 8.

Corollary 9

Let α,β∈ℝ∖{−1}. Then, for any k∈ℕ, we have

We will now prove the lower estimate.

Proposition 10

There exists a positive absolute constant c such that for every n,N∈ℕ with \(n\leq N\leq e^{cn}\), and every 1≤p<∞, we have that if X 1,…,X N are independent random vectors uniformly distributed on \(D_{p}^{n}\), then

$$\mathbb{E} \max_{1\leq i \leq N} \big\lvert\langle X_i,e_j \rangle \big\rvert \gtrsim (\log N )^{\frac{1}{p}} $$

for all j=1,…,n.

Proof

We start with the case 1<p≤2 where we use the recursion formula. Since 1<p≤2, using the representation of M in (3), we have that

$$M \biggl(\frac{1}{s} \biggr) \geq\frac{4}{p(n-1+p)}\frac{|B_p^{n-1}|}{|B_p^n|}\int _0^{\cos^{-1}(s|B_p^n|^{\frac{1}{n}})^{\frac{p}{2}}} (\sin\theta)^{2\frac{n-1}{p}+3} (\cos \theta)^{\frac{2}{p}-3}\,d\theta. $$

Using Corollary 9 with \(\alpha= \frac{2n}{p}-\frac{2}{p}+3\) and \(\beta= \frac{2}{p}-3\), we have −1≤β+1<0, and for any k∈ℕ, we get

Since \(\beta+1=\frac{2}{p}(1-p)\), we get

So this yields

$$M \biggl(\frac{1}{s} \biggr) \geq\frac{2(k+1)}{p(n-1+p)}\frac{|B_p^{n-1}|(1-s^p|B_p^n|^{\frac{p}{n}})^{\frac{n-1}{p}+k+2}}{|B_p^n| (\frac{n-1}{p}+2)(s|B_p^n|^{\frac{1}{n}})^{p-1}} \biggl(1-\frac{2-\frac{1}{p}}{\frac{n-1}{p}+k+1} \biggr)^k. $$

If we choose k=n and take into account that 1<p≤2, we get

$$M \biggl(\frac{1}{s} \biggr) \geq C \frac{|B_p^{n-1}|}{|B_p^{n}|} \frac{e^{\frac{n-1+np}{p}\log(1-s^p|B_p^n|^{\frac{p}{n}})}}{(n-1+2p)(s|B_p^n|^{\frac{1}{n}})^{p-1}}. $$

We take \(s_{0}=\frac{\gamma^{\frac{1}{p}}(\log N)^{\frac{1}{p}}}{|B_{p}^{n}|^{\frac{1}{n}}n^{\frac{1}{p}}}\) with γ a constant to be chosen later. Then, since \(N\leq e^{n}\), we obtain

$$M \biggl(\frac{1}{s_0} \biggr) \geq C \frac{|B_p^{n-1}|}{|B_p^{n}|} \frac{e^{-c_1\gamma\log N}}{(n-1+2p)(\gamma\frac{\log N}{n})^{1-\frac{1}{p}}} \geq\frac{C'}{N^{c_1\gamma}(\gamma\log N)^{1-\frac{1}{p}}}. $$

Choosing γ small enough, so that c 1 γ<1, we get

$$M \biggl(\frac{1}{s_0} \biggr) \geq\frac{1}{N} $$

if N≥N 0 for some N 0∈ℕ large enough. Therefore, there exists an absolute positive constant c such that

$$\mathbb{E} \max_{1\leq i \leq N}\big \lvert\langle X_i,e_j \rangle \big\rvert \geq c (\log N)^{\frac{1}{p}}. $$

Now, let us consider the easier case where p=1. In this case, we have

$$M \biggl(\frac{1}{s} \biggr) = \frac{2}{n(n+1)}\frac{|B_1^{n-1}|}{|B_1^n|} \frac{(1-s|B_1^n|^{\frac{1}{n}})^{n+1}}{s|B_1^n|^{\frac{1}{n}}}. $$

If we now choose s 0=αlogN, where α is a constant to be chosen later, we obtain

$$M \biggl(\frac{1}{s_0} \biggr) \geq\frac{C}{N^{c\alpha}\log N}, $$

and so, choosing a constant α small enough so that cα<1, we obtain that

$$M \biggl(\frac{1}{s_0} \biggr) \geq\frac{1}{N} $$

whenever N≥N 0. Therefore, if p=1, there exists an absolute positive constant c such that

$$\mathbb{E} \max_{1\leq i \leq N} \big\lvert\langle X_i,e_j \rangle \big\rvert \geq c (\log N). $$

Now, let us treat the case 2≤p. We will assume that \(p-1\leq c\frac{n}{\alpha\log N}\), where α is a constant that will be determined later, and c is an absolute constant small enough. We will also assume that \(p\leq N^{\frac{1}{4}}\). We have seen that the second term in (3) equals

and so if p≥2, the second term in the expression (3) defining \(M (\frac{1}{s} )\) is greater than or equal to

$$\frac{4(2-p)}{p (n-1+2p )(n-1+p)}\frac{|B_p^{n-1}|}{|B_p^n|}\int_0^{\cos^{-1}(s|B_p^n|^{\frac{1}{n}})^{\frac{p}{2}}} \frac{(\sin\theta)^{2\frac{n-1}{p}+5}}{(\cos \theta)^{5-\frac{2}{p}}}\,d\theta. $$

Integration by parts yields that this quantity equals

Thus, putting this together with the first term, we have that if p≥2,

Using integration by parts, the first term in the previous expression equals

Using the recursion formula in Corollary 9, we obtain that for any k∈ℕ, this quantity equals

Estimating the cosine in the denominator inside the integral by the value at its extreme point, we obtain that this quantity is greater than

Since for every m, we have that \(\frac{2\frac{n}{p}+2m}{2\frac{n-1}{p}+2m+2}=1-\frac{2-\frac{2}{p}}{2\frac{n-1}{p}+2m+2}\leq1\), this expression is greater than

Hence,

We take

$$s_0 = \frac{ \alpha^{\frac{1}{p}} (p-1)^{\frac{1}{p}}}{ |B_p^n|^{\frac{1}{n}} n^{\frac{1}{p}} } ( \log N )^{\frac{1}{p}}. $$

Then,

if n≥n 0. On the other hand, choosing k so that \(k+1=\frac{2n}{\alpha (p-1)\log N}\), we have

where the last inequality holds because of our assumptions on p. This last quantity is greater than

if N≥N 0. Taking c small enough so that \(6e^{-1}(1-c)>2.1\), we have that

$$ M \biggl(\frac{1}{s_0} \biggr) \geq \frac{C}{p^2 N^{c_1\alpha }(\alpha\log N)^{2-\frac{2}{p}}}\geq \frac{C}{N^{c_1\alpha+\frac{1}{2}}(\alpha\log N)^{2-\frac{2}{p}}}, $$

since we are assuming that \(p\leq N^{\frac{1}{4}}\). Taking α such that \(c_{1}\alpha+\frac{1}{2}<1\), we obtain

$$M \biggl(\frac{1}{s_0} \biggr) \geq\frac{1}{N} $$

if N≥N 1 and n≥n 0 for some n 0,N 1 big enough. Therefore,

$$\mathbb{E} \max_{1 \leq i \leq N} \big\lvert\langle X_i,e_1 \rangle \big\rvert \geq\tilde{C} \biggl( \log\frac{N}{p^{\frac{1}{4}}} \biggr)^{\frac{1}{p}} \geq C ( \log N )^{\frac{1}{p}}, $$

where N≥N 0, and C is a positive absolute constant.

Now we consider the case \(p\geq c\frac{n}{\log N}\) or \(p\geq N^{\frac{1}{4}}\). In that case we choose

$$s_0 = \frac{1}{2|B_p^n|^{\frac{1}{n}}}. $$

Then

We want the latter expression to be greater than or equal to \(N^{-1}\), i.e.,

$$C_1 n^{\frac{1}{p}} e^{-c_2\frac{n-1}{p (3/2 )^p}} \geq \frac{1}{N}, $$

which is equivalent to

$$\log N + \log(C_1) + \frac{1}{p} \log(n) \geq c_2 \frac{n-1}{p (\frac{3}{2} )^p}. $$

To obtain this, it is enough to show

$$\log N \geq c_2\frac{n-1}{p (\frac{3}{2} )^p}, $$

and since \(p\geq c\frac{n}{\log N}\) and \(N\leq e^{c'n}\), to obtain the latter inequality, it is enough to have

$$\log N \geq c_2\frac{n-1}{p (\frac{3}{2} )^{\frac{c}{c'}}}. $$

But

$$c_2\frac{n-1}{p (\frac{3}{2} )^{\frac{c}{c'}}} \leq c_2\frac{n-1}{c\frac{n}{\log N } (\frac{3}{2} )^{\frac{c}{c'}}} \leq c_2\frac{\log N}{c (\frac{3}{2} )^{\frac{c}{c'}}} \leq\log N $$

if c′ is small enough. So we obtain the estimate. If \(p\geq N^{\frac{1}{4}}\), we immediately obtain

$$C_1 n^{\frac{1}{p}} e^{-c_2\frac{n-1}{p (3/2 )^p}} \geq C > \frac{1}{N} $$

for N≥N 0. Therefore, in these two cases, we obtain the estimate

$${\mathbb{E}}h_{K_N}(e_j) \sim\frac{1}{|B_p^n|^{\frac{1}{n}}} \sim n^{\frac{1}{p}} \gtrsim(\log N)^{\frac{1}{p}}. $$

 □

Remark 2

In the case p=∞ it is very easy to check that

$$\inf \biggl\{ s>0 : M \biggl(\frac{1}{s} \biggr) \leq\frac{1}{N} \biggr\} = \frac{ 1+\frac{1}{N} - \sqrt{\frac{2}{N}+\frac{1}{N^2}} }{2} \sim1, $$

and so \({\mathbb{E}}h_{K_{N}}(e_{j}) \sim1\).
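
Indeed, in this case \(D_{\infty}^{n}=[-\frac{1}{2},\frac{1}{2}]^{n}\), so \(\langle X_1,e_j\rangle\) is uniformly distributed on \([-\frac{1}{2},\frac{1}{2}]\) and \(\int_{\{|\langle X_1,e_j\rangle|\geq\frac{1}{t}\}}|\langle X_1,e_j\rangle|\,d\mathbb{P}=\frac{1}{4}-\frac{1}{t^2}\) for t≥2. Hence, for s≥2,

$$M(s) = \int_2^s \biggl(\frac{1}{4}-\frac{1}{t^2} \biggr)\,dt = \frac{s}{4}+\frac{1}{s}-1, $$

and M(s)=0 for s≤2. Solving \(M (\frac{1}{s} )=\frac{1}{N}\) with \(s\leq\frac{1}{2}\) leads to the quadratic equation \(4s^{2}-4 (1+\frac{1}{N} )s+1=0\), whose smaller root is the value displayed above.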

4 General Results

Using our approach, we will now prove more general bounds for symmetric isotropic convex bodies. In the first theorem we assume some mild technical conditions which are verified by the \(\ell_{p}^{n}\) balls (p≥2). In this way we recover the upper estimates proved in the previous section.

Since \({\mathbb{E}}h_{K_{N}}(\theta) \sim\inf \{ s>0 : M_{\theta } ( \frac{1}{s} ) \leq\frac{1}{N} \}\), it seems natural to study for which value of s

$$\int_{S^{n-1}} M_{\theta}\biggl(\frac{1}{s}\biggr) \,d\mu(\theta) = \frac{1}{N}. $$

As one could expect, this value of s is of the order \(L_{K} \sqrt{\log N}\). As a consequence of Chebychev’s inequality, we will obtain probability estimates for the set of directions verifying \({\mathbb{E}}h_{K_{N}}(\theta) \leq C L_{K} \sqrt {\log N}\) or \({\mathbb{E}}h_{K_{N}}(\theta) \geq C L_{K} \sqrt{\log N}\) .

Theorem 11

Let K be a symmetric and isotropic convex body, n≤N, θ∈S n−1, and X 1,…,X N be independent random vectors uniformly distributed in K. Define \(h(t)= | K \cap\{ \langle x,\theta\rangle=t\}|^{\frac{1}{n-1}}\). Assume that h is twice differentiable and that h′(t)≠0 for all t∈(0,h K (θ)). Assume also that −h′(t)/t is increasing and that h(h K (θ))=0. Then,

$$\mathbb{E} \max_{1\leq i \leq N}\big|\langle X_i,\theta\rangle\big| \leq C h^{-1} \biggl(h(0) \biggl(1-\alpha\frac{\log N}{n}\biggr) \biggr), $$

where α and C are positive absolute constants.

Proof

First of all, notice that h is a concave function. Then, using Theorem 4, we get

$$M \biggl(\frac{1}{s} \biggr) = \int_{\frac{1}{h_K(\theta)}}^{\frac{1}{s}} 2 \int_{\frac{1}{t}}^{h_K(\theta)} r h(r)^{n-1} \,dr \,dt = 2 \int_{\frac{1}{h_K(\theta)}}^{\frac{1}{s}} \int_{\frac{1}{t}}^{h_K(\theta)} \frac{r}{h'(r)} h'(r)h(r)^{n-1} \,dr\,dt. $$

Integration by parts yields

$$M \biggl(\frac{1}{s} \biggr) = 2 \int_{\frac{1}{h_K(\theta)}}^{\frac{1}{s}} -\frac{\frac{1}{t}h(\frac{1}{t})^n}{nh'(\frac{1}{t})} \,dt - \int_{\frac{1}{h_K(\theta)}}^{\frac{1}{s}} \int _{\frac{1}{t}}^{h_K(\theta)} h(r)^n \frac{h'(r)-rh''(r)}{nh'(r)^2} \,dr \,dt. $$

Since h′(t)−th″(t)≥0, we have

Again we use integration by parts and get

Furthermore, since we have h′(t)−th″(t)≥0, we get

$$-sh'(s) = \big|sh'(s)\big| = \int_0^s \bigl(- h'(t)-th''(t)\bigr) \,dt \geq-2\int _0^sh'(t)\,dt = 2\bigl(h(0)-h(s) \bigr). $$

Thus,

$$M \biggl(\frac{1}{s} \biggr) \leq\frac{sh(s)^{n-1}}{2n(n+1)\bigl(\frac{h(0)}{h(s)}-1\bigr)^2} = \frac{se^{(n-1) \log\frac{h(s)}{h(0)} } |K\cap \theta^{\bot}|h(s)^2 }{2n(n+1)\bigl(1-\frac{h(s)}{h(0)}\bigr)^2h(0)^2}. $$

Choosing

$$s_0 = h^{-1} \biggl( h(0) \biggl(1-\alpha \frac{\log N}{n}\biggr) \biggr), $$

we have that there exists a positive constant c 1 such that

$$M \biggl(\frac{1}{s_0} \biggr) \leq C \frac{s_0|K\cap\theta^{\bot }|}{N^{c_1\alpha}\alpha^2(\log N)^2}. $$

Since K is isotropic, s 0≤(n+1)L K . Therefore,

$$M \biggl(\frac{1}{s_0} \biggr) \leq C \frac{nL_K |K\cap\theta^{\bot }| }{N^{c_1\alpha}\alpha^2(\log N)^2}. $$

By Hensley’s result (see [11]), \(L_{K} \sim\frac{1}{|K\cap \theta^{\bot}|}\), and because n≤N, we have

$$M \biggl(\frac{1}{s_0} \biggr) \leq\frac{ CN }{ N^{c_1\alpha} \alpha^2(\log N)^2} = \frac{ C }{ N^{c_1\alpha-1} \alpha^2(\log N)^2}. $$

Taking α so that c 1 α>2, we have \(M(\frac{1}{s_{0}}) \leq\frac{1}{N}\) for N≥N 0 for some N 0∈ℕ big enough. □

With the method introduced in Sect. 2, we are also able to prove the following general result, which will lead us to estimates of the support function of random polytopes in symmetric isotropic convex bodies for certain directions:

Theorem 12

Let \(n \leq N \leq e^{\sqrt{n}}\), K be a symmetric isotropic convex body in ℝn, and let X 1,…,X N be independent random variables uniformly distributed in K. Then,

$$\int_{S^{n-1}} M_{\theta} \biggl(\frac{1}{C_1L_K \sqrt{\log N}} \biggr) \,d\mu(\theta) \leq\frac{1}{N}, $$

and

$$\int_{S^{n-1}} M_{\theta} \biggl(\frac{1}{C_2 L_K \sqrt{\log N}} \biggr) \,d\mu(\theta) \geq\frac{1}{N}, $$

where C 1,C 2 are positive absolute constants.

Consequently, if \(\tilde{s}\) is chosen such that

$$\int_{S^{n-1}} M_{\theta}\biggl(\frac{1}{\tilde{s}} \biggr) \,d\mu(\theta) = \frac{1}{N}, $$

then \(\tilde{s} \sim L_{K} \sqrt{\log N}\).
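
The last assertion follows from the two estimates above, since for every θ the function \(s\mapsto M_{\theta}(\frac{1}{s})\) is non-increasing, and hence so is its average over S n−1: comparing the value \(\frac{1}{N}\) attained at \(\tilde{s}\) with the two displayed inequalities yields \(C_{2}L_{K}\sqrt{\log N}\leq\tilde{s}\leq C_{1}L_{K}\sqrt{\log N}\).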

In order to prove this theorem, we need the following proposition:

Proposition 13

Let K be a symmetric convex body in ℝn of volume 1. Let s>0, θ∈S n−1, and M θ be the Orlicz function associated to the random variable 〈X,θ〉, where X is uniformly distributed in K. Then,

$$ \int _{S^{n-1}} M_{\theta} \biggl(\frac{1}{s} \biggr) \,d \mu(\theta) = \int_{K} M_{\langle\theta,e_1\rangle} \biggl( \frac{ \lVert x \rVert_2}{s} \biggr) \,dx, $$
(5)

where \(M_{\langle\theta,e_{1}\rangle}\) is the Orlicz function associated to the random variable \(\langle\theta,e_{1}\rangle\) with θ uniformly distributed on S n−1. For any \(s\leq \lVert x \rVert_{2}\),

$$ M_{\langle\theta,e_1\rangle} \biggl( \frac{ \lVert x \rVert_2}{s} \biggr) = \frac{2w_{n-1}}{nw_n} \int_0^{\cos ^{-1}(\frac{s}{ \lVert x \rVert_2})} \frac{\sin^n y}{\cos^2 y} \,dy, $$
(6)

and 0 otherwise.

Proof

Using the definition of M θ , we obtain

where the last equality is obtained by the change of variable \(t=\frac{u}{ \lVert x \rVert_{2}}\). Hence, by the rotational invariance of S n−1,

$$ \int_{S^{n-1}} M_{\theta} \biggl(\frac{1}{s} \biggr) \,d\mu(\theta) = \int_K M_{ \langle\theta,e_1\rangle } \biggl( \frac{ \lVert x \rVert_2}{s} \biggr) \,dx. $$

Now, let us compute \(M_{ \langle\theta,e_{1}\rangle }\). For any s>1 (otherwise the function is 0), we have

The change of variables \(\frac{1}{t} = \cos y\) yields

$$ M_{\langle\theta,e_1\rangle}(s) = \frac{2w_{n-1}}{nw_n} \int_0^{\cos^{-1}(\frac{1}{s})} \frac{\sin^n y}{\cos^2 y} \,dy. $$

 □

Given that the expected mean width of K N is minimized when \(K=D_{2}^{n}\), it is natural to expect that, given s, the average \(\int_{S^{n-1}}M_{\theta}(\frac{1}{s} )\,d\mu(\theta)\) is also minimized when \(K=D_{2}^{n}\). We prove this, using the above representation, in the following corollary:

Corollary 14

Let K be a symmetric convex body in ℝn of volume 1, and let s>0. Then

$$\int_{S^{n-1}}M_\theta \biggl(\frac{1}{s} \biggr)\,d\mu(\theta)\geq \int_{S^{n-1}}M_{D_2^n,\theta} \biggl( \frac{1}{s} \biggr)\,d\mu(\theta )=M_{D_2^n,e_1} \biggl( \frac{1}{s} \biggr), $$

where \(M_{D_{2}^{n},\theta}\) denotes the Orlicz function associated to \(D_{2}^{n}\).

Proof

By (5) and the facts that \(M_{\langle\theta,e_{1}\rangle}\) is increasing and \(|K|=|D_{2}^{n}|=1\) we have that if r n is the radius of \(D_{2}^{n}\),

 □

Now, we give the proof of Theorem 12:

Proof

By (6), if \(\lVert x \rVert_{2} \geq s\), we have

Integration by parts yields

We start with the upper bound where we will use Paouris’ result about the concentration of mass on isotropic convex bodies from [18]. First of all, we have

$$M_{\langle\theta,e_1\rangle} \biggl(\frac{ \lVert x \rVert_2}{s} \biggr) \leq\frac{2w_{n-1}}{nw_n} \frac{ \lVert x \rVert_2}{s} \biggl(1-\frac{s^2}{ \lVert x \rVert_2^2} \biggr)^{\frac{n-1}{2}}. $$

From (5) and since \(M_{\langle\theta,e_{1}\rangle} (\frac{ \lVert x \rVert_{2}}{s} ) = 0\) for \(s> \lVert x \rVert_{2}\), we get

We choose \(s_{0} = \sqrt{\alpha} L_{K} \sqrt{\log N}\), with α>0 a constant to be chosen later. Then, if \(N \leq e^{\sqrt{n}}\),

We choose γ>0 such that \(c_{1}\gamma- \frac{1}{2}>1\) and then α>0 so that \(\frac{c_{1}\alpha}{\gamma^{2}}>1\). Then,

$$\int_{S^{n-1}} M_{\theta}\biggl(\frac{1}{s_0}\biggr) \,d\mu(\theta) \leq \frac{C}{\sqrt{\alpha}\sqrt{\log N}} \biggl[ \frac{\gamma}{N^{\frac{c_1\alpha}{\gamma^2}}} + \frac{1}{N^{c_1\gamma-\frac{1}{2}}} \biggr] \leq\frac{1}{N} $$

for \(N\leq e^{\sqrt{n}}\) and N≥N 0.

To prove the lower bound, we use the recursion formula (9). For \(\lVert x \rVert_{2} \geq s \) and any k∈ℕ,

Taking k=n, we have

Thus,

Take \(s_{1} = \sqrt{\beta} L_{K} \sqrt{\log N}\), β>0 a constant to be chosen later. Then

$$\int_{S^{n-1}} M_{\theta}\biggl(\frac{1}{s_1} \biggr) \,d\mu(\theta) \geq \int_{K\setminus2\sqrt{\beta} L_K \sqrt{\log N}B_2^n} \frac{Cw_{n-1}}{nw_n} \frac{ \lVert x \rVert_2}{\sqrt{\beta} L_K \sqrt{\log N}} e^{-c_4n\frac{\beta L_K^2\log N}{ \lVert x \rVert_2^2}} \,dx. $$

Using the small ball probability result proved in [19], we get that there exists a constant c 5>0 such that

$$\big|K\setminus c_5\sqrt{n}L_K B_2^n\big| \geq\frac{1}{2} $$

for \(N\leq e^{n}\). Therefore,

where the inequality before the last one holds because \(\lVert x \rVert_{2}^{2} \geq c_{5}^{2} n L_{K}^{2}\). We take β small enough, so that c 6 β<1 and \(2\sqrt{\beta}\sqrt{\log N} \leq c_{5} \sqrt{n}\). Then

$$\frac{C''}{N^{c_6\beta}\sqrt{\beta}\sqrt{\log N}} \geq\frac{1}{N} $$

for N≥N 0 and \(N\leq e^{n}\). Hence,

$$\int_{S^{n-1}} M_{\theta}\biggl(\frac{1}{s_1} \biggr) \,d\mu(\theta) \geq\frac{1}{N}. $$

 □

Obviously, the theorem implies that there are directions θ 1,θ 2∈S n−1 such that the expectation of the support function in those directions is bounded from above and below, respectively, by a constant times \(L_{K}\sqrt{\log N}\). In Corollary 3 we give estimates for the measure of the set of directions verifying such estimates. However, we do not think that the estimate we give for the measure of the set of directions verifying the lower bound is optimal.

Proof of Corollary 3

To prove that the upper bound is true for most directions, we proceed as in the proof of Theorem 12. We choose s 0 like there and α, γ so that \(c_{1}\gamma- \frac{1}{2}> 2(r+1)\) and \(\frac{c_{1}\alpha}{\gamma^{2}}>2(r+1)\) and obtain

$$\int_{S^{n-1}} M_{\theta}\biggl(\frac{1}{s_0}\biggr) \,d\mu(\theta) \leq\frac{1}{N^{r+1}}. $$

Then, by Chebychev’s inequality,

$$\frac{1}{N^{r+1}} \geq \int_{S^{n-1}} M_{\theta}\biggl( \frac{1}{s_0}\biggr) \,d\mu(\theta) \geq\frac{1}{N} \mu \biggl\{ \theta\in S^{n-1} : M_{\theta} \biggl( \frac{1}{s_0} \biggr) > \frac{1}{N} \biggr\}. $$

Thus,

$$\mu \biggl\{ \theta\in S^{n-1} : M_{\theta} \biggl( \frac{1}{s_0} \biggr) \leq\frac{1}{N} \biggr\} \geq1- \frac{1}{N^{r}}, $$

and so

$$\mu \bigl\{ \theta\in S^{n-1} :{\mathbb{E}}h_{K_N}(\theta) \leq C_1(r)L_K\sqrt{\log N} \bigr\} \geq1- \frac{1}{N^{r}}. $$

To prove the probability estimate for the lower bound, we can assume that r<1. We proceed as in Theorem 12. We choose s 1 like there and take β small enough so that c 6 β<r. We obtain

$$\int_{S^{n-1}} M_{\theta} \biggl( \frac{1}{s_1} \biggr) \,d\mu(\theta ) > \frac{1}{N^{r}}. $$

Then, for any decreasing, positive and concave function f, we get

$$f \biggl( \int_{S^{n-1}} M_{\theta} \biggl( \frac{1}{s_1} \biggr) \,d\mu(\theta) \biggr) < f \biggl( \frac{1}{N^{r}} \biggr). $$

Using Jensen’s inequality, from this we obtain

Thus,

$$\mu \biggl\{ \theta\in S^{n-1} : M_{\theta} \biggl( \frac{1}{s_1} \biggr) < \frac{1}{N} \biggr\} \leq\frac{f ( \frac{1}{N^{r}} )}{f ( \frac{1}{N} )}, $$

and therefore,

$$\mu \biggl\{ \theta\in S^{n-1} : M_{\theta} \biggl( \frac{1}{s_1} \biggr) \geq \frac{1}{N} \biggr\} \geq1- \frac{f ( \frac{1}{N^{r}} )}{f ( \frac{1}{N} )}. $$

This means that

$$\mu \bigl\{ \theta\in S^{n-1} : {\mathbb{E}}h_{K_N}(\theta) \geq cs_1 \bigr\} \geq1- \frac{f ( \frac{1}{N^{r}} )}{f ( \frac{1}{N} )}. $$

We choose \(f(t) = -at + a \max_{\theta\in S^{n-1}} M_{\theta}(\frac{1}{s_{1}})\), a>0. Then

$$\frac{f ( \frac{1}{N^{r}} )}{f ( \frac{1}{N} )} = \frac{-\frac{1}{N^{r}}+\max_{\theta\in S^{n-1}} M_{\theta }(\frac{1}{s_1})}{-\frac{1}{N}+\max_{\theta\in S^{n-1}} M_{\theta }(\frac{1}{s_1})}, $$

and thus

$$1- \frac{f ( \frac{1}{N^{r}} )}{f ( \frac{1}{N} )} = \frac{\frac{1}{N^{r}} - \frac{1}{N} }{\max_{\theta\in S^{n-1}} M_{\theta}(\frac{1}{s_1})-\frac{1}{N}}. $$

From Hölder’s inequality we obtain

Because of our choice of s 1, we get

$$M_{\theta} \biggl( \frac{1}{s_1} \biggr) \leq\frac{C(r)}{\sqrt{\log N}}. $$

Therefore,

$$1- \frac{f ( \frac{1}{N^{r}} )}{f ( \frac{1}{N} )} \geq\frac{\frac{1}{N^{r}} - \frac{1}{N} }{\frac{C(r)}{\sqrt{\log N}}-\frac{1}{N}} \geq\frac{C'(r)\sqrt{\log N}}{N^{r}}. $$

This yields

$$\mu \bigl\{ \theta\in S^{n-1} : {\mathbb{E}}h_{K_N}(\theta) \geq C_2(r)L_K \sqrt{\log N} \bigr\} \geq \frac{C(r)\sqrt{\log N}}{N^{r}}. $$

 □