1 Introduction

We study the question of how to obtain a threshold for the expected measure of a random polytope defined as the convex hull of independent random points with a log-concave distribution. The general formulation of the problem is the following. Given a log-concave probability measure \(\mu \) on \({\mathbb R}^n\), let \(X_1,X_2,\ldots \) be independent random points in \({\mathbb R}^n\) distributed according to \(\mu \) and for any \(N>n\) define the random polytope

$$\begin{aligned} K_N=\textrm{conv}\{X_1,\ldots ,X_N\}. \end{aligned}$$

Then, consider the expectation \({\mathbb {E}}_{\mu ^N}[\mu (K_N)]\) of the measure of \(K_N\), where \(\mu ^N=\mu \otimes \cdots \otimes \mu \) (N times). This is an affinely invariant quantity, so we may assume that \(\mu \) is centered, i.e. the barycenter of \(\mu \) is at the origin.
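As a toy illustration of the quantity \({\mathbb {E}}_{\mu ^N}[\mu (K_N)]\) (not part of the mathematical argument), consider dimension one: there \(K_N\) is simply the interval \([\min _iX_i,\max _iX_i]\), and for the uniform measure on \([0,1]\) the expected measure has the closed form \((N-1)/(N+1)\). The following sketch checks this by Monte Carlo; the trial count and seed are arbitrary choices.

```python
import random

def expected_measure_1d(N, trials=20000, seed=0):
    """Monte Carlo estimate of E[mu(K_N)] for mu = uniform on [0, 1].

    In dimension one the convex hull of the sample is the interval
    [min X_i, max X_i], so mu(K_N) = max X_i - min X_i.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(N)]
        total += max(xs) - min(xs)
    return total / trials

# Closed form: E[max - min] = (N - 1) / (N + 1) for N uniform points.
for N in (2, 5, 50):
    est = expected_measure_1d(N)
    exact = (N - 1) / (N + 1)
    assert abs(est - exact) < 0.01
```

Already in this trivial case one sees the threshold phenomenon in miniature: the expected measure increases from near 0 to near 1 as \(N\) grows.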

Given \(\delta \in (0,1)\) we say that \(\mu \) satisfies a “\(\delta \)-upper threshold” with constant \(\varrho _1\) if

$$\begin{aligned} \sup \{{\mathbb {E}}_{\mu ^N}[\mu (K_N)]:N\leqslant \exp (\varrho _1n)\}\leqslant \delta \end{aligned}$$
(1.1)

and that \(\mu \) satisfies a “\(\delta \)-lower threshold” with constant \(\varrho _2\) if

$$\begin{aligned} \inf \{{\mathbb {E}}_{\mu ^N}[\mu (K_N)]:N\geqslant \exp (\varrho _2n)\}\geqslant 1-\delta . \end{aligned}$$
(1.2)

Then, we define \(\varrho _1(\mu ,\delta )=\sup \{\varrho _1:(1.1)\; \text {holds true}\}\) and \(\varrho _2(\mu ,\delta )=\inf \{\varrho _2:(1.2)\; \text {holds true}\}\). Our main goal is to obtain upper bounds for the difference

$$\begin{aligned} \varrho (\mu ,\delta ):=\varrho _2(\mu ,\delta )-\varrho _1(\mu ,\delta ) \end{aligned}$$

for any fixed \(\delta \in \left( 0,\tfrac{1}{2}\right) \).

One may also consider a sequence \(\{\mu _n\}_{n=1}^{\infty }\) of log-concave probability measures \(\mu _n\) on \({\mathbb R}^n\). Then, we say that \(\{\mu _n\}_{n=1}^{\infty }\) exhibits a “sharp threshold” if there exists a sequence \(\{\delta _n\}_{n=1}^{\infty }\) of positive reals such that \(\delta _n\rightarrow 0\) and \(\varrho (\mu _n,\delta _n)\rightarrow 0\) as \(n\rightarrow \infty \). This terminology may be used to describe a variety of results that have been obtained for specific sequences of measures (in most cases, product measures or rotationally invariant measures). In Sect. 2 we provide a non-exhaustive list of contributions to this topic; starting with the classical work [14] of Dyer, Füredi and McDiarmid, which concerns the uniform measure on the cube, most of them establish a sharp threshold.

Our aim is to describe a general approach to the problem, working with an arbitrary log-concave probability measure \(\mu \) on \({\mathbb R}^n\). The approach is based on the Cramér transform of \(\mu \). Recall that the logarithmic Laplace transform of \(\mu \) is defined by

$$\begin{aligned} \Lambda _{\mu }(\xi )=\ln \left( \int _{{\mathbb R}^n}e^{\langle \xi ,z\rangle }d\mu (z)\right) ,\qquad \xi \in {\mathbb R}^n \end{aligned}$$

and the Cramér transform of \(\mu \) is the Legendre transform of \(\Lambda _{\mu }\), defined by

$$\begin{aligned} \Lambda _{\mu }^{*}(x)= \sup _{\xi \in {\mathbb R}^n} \left\{ \langle x, \xi \rangle -\Lambda _{\mu }(\xi )\right\} , \qquad x\in {\mathbb R}^n. \end{aligned}$$
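For the standard Gaussian measure on \({\mathbb R}\) one has \(\Lambda _{\mu }(\xi )=\xi ^2/2\), and hence \(\Lambda _{\mu }^{*}(x)=x^2/2\). A short numerical sketch (a grid-based Legendre transform; the grid range and step size are arbitrary choices, and this is an illustration only) confirms the closed form:

```python
def log_laplace_gaussian(xi):
    # For the standard Gaussian measure, Lambda(xi) = xi^2 / 2 in closed form.
    return xi * xi / 2.0

def cramer_transform(x, lam, xi_grid):
    # Legendre transform approximated by a supremum over a finite grid of xi.
    return max(x * xi - lam(xi) for xi in xi_grid)

xi_grid = [i / 100.0 for i in range(-500, 501)]
for x in (0.0, 0.5, 1.0, 2.0):
    approx = cramer_transform(x, log_laplace_gaussian, xi_grid)
    # Known closed form: Lambda*(x) = x^2 / 2 for the standard Gaussian.
    assert abs(approx - x * x / 2.0) < 1e-3
```

The supremum is attained at \(\xi =x\), which is why the grid approximation is exact whenever \(x\) lies on the grid.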

For every \(t>0\) we set

$$\begin{aligned} B_t(\mu ):=\{x\in {\mathbb R}^n:\Lambda ^{*}_{\mu }(x)\leqslant t\} \end{aligned}$$

and for any \(x\in {\mathbb R}^n\) we denote by \({{\mathcal {H}}}(x)\) the set of all half-spaces H of \({\mathbb R}^n\) containing x. Then we consider the function \(\varphi _{\mu }\), called Tukey’s half-space depth, defined by

$$\begin{aligned} \varphi _{\mu }(x)=\inf \{\mu (H):H\in {{\mathcal {H}}}(x)\}. \end{aligned}$$

We refer the reader to the article of Nagy, Schütt and Werner [23] for an extensive and comprehensive survey of Tukey’s half-space depth, with an emphasis on its connections with convex geometry, and for many references. From the definition of \(\Lambda _{\mu }^{*}\) one can easily check that for every \(x\in {\mathbb R}^n\) we have \(\varphi _{\mu } (x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\) (see Lemma 3.1 in Sect. 3 below). In particular, for any \(t>0\) and for all \(x\notin B_t(\mu )\) we have that \(\varphi _{\mu }(x)\leqslant \exp (-t)\). A main idea, which appears in all the previous works on this topic, is to show that \(\varphi _{\mu }\) is almost constant on the boundary \(\partial (B_t(\mu ))\) of \(B_t(\mu )\). Our first main result shows that this is true, in general, if \(\mu =\mu _K\) is the uniform measure on a centered convex body of volume 1 in \({\mathbb R}^n\).

Theorem 1.1

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Then, for every \(t>0\) we have that

$$\begin{aligned} \inf \{\varphi _{\mu _K}(x):x\in B_t(\mu _K)\}\geqslant \frac{1}{10}\exp (-t-2\sqrt{n}). \end{aligned}$$

This implies that

$$\begin{aligned} \omega _{\mu _K} (x)-5\sqrt{n}\leqslant \Lambda _{\mu _K}^{*}(x)\leqslant \omega _{\mu _K}(x) \end{aligned}$$

for every \(x\in {\mathbb R}^n\), where \(\omega _{\mu _K}(x)=\ln \left( \frac{1}{\varphi _{\mu _K}(x)}\right) \).

Theorem 1.1 may be viewed as a version of Cramér’s theorem (see [13]) for random vectors uniformly distributed in convex bodies. We present the proof in Sect. 3. It exploits techniques from the theory of large deviations and a theorem of Nguyen [25] (proved independently by Wang [31]; see also [16]), which is exactly the ingredient that forces us to consider only uniform measures on convex bodies. It seems harder to prove, if true, an analogous estimate for an arbitrary centered log-concave probability measure \(\mu \) on \({\mathbb R}^n\); this is a basic question that our work leaves open.

The second step in our approach is to consider, for any centered log-concave probability measure \(\mu \) on \({\mathbb R}^n\), the parameter

$$\begin{aligned} \beta (\mu )=\frac{\textrm{Var}_{\mu }(\Lambda _{\mu }^{*})}{({\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*}))^2} \end{aligned}$$
(1.3)

provided that

$$\begin{aligned} \Vert \Lambda _{\mu }^{*}\Vert _{L^2(\mu )}=\big ({\mathbb {E}}_{\mu } \big ((\Lambda _{\mu }^{*})^2\big )\big )^{1/2}<\infty . \end{aligned}$$

Roughly speaking, the plan is the following: provided that \(\varphi _{\mu }\) is “almost constant” on \(\partial (B_t(\mu ))\) for all \(t>0\) and that \(\beta (\mu )=o_n(1)\), one can establish a “sharp threshold” for the expected measure of \(K_N\) with

$$\begin{aligned} \varrho _2\approx \varrho _1\approx \Vert \Lambda _{\mu }^{*}\Vert _{L^1(\mu )}={\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*}). \end{aligned}$$
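To get a feeling for the parameter \(\beta (\mu )\), take \(\mu \) to be the standard Gaussian measure on \({\mathbb R}^n\), for which \(\Lambda _{\mu }^{*}(x)=|x|^2/2\); then \(\Lambda _{\mu }^{*}(X)\) is half a \(\chi ^2_n\) variable, so \({\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*})=n/2\), \(\textrm{Var}_{\mu }(\Lambda _{\mu }^{*})=n/2\) and \(\beta (\mu )=2/n\). A Monte Carlo sketch (sample size and seed are arbitrary; this is an illustration, not part of the argument):

```python
import random

def beta_gaussian_mc(n, samples=20000, seed=1):
    """Monte Carlo estimate of beta(mu) for the standard Gaussian on R^n.

    Here Lambda*(x) = |x|^2 / 2, so Lambda*(X) = chi^2_n / 2 and the exact
    value is beta(mu) = Var / Mean^2 = (n/2) / (n/2)^2 = 2/n.
    """
    rng = random.Random(seed)
    vals = []
    for _ in range(samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        vals.append(sum(t * t for t in x) / 2.0)  # Lambda*(X)
    mean = sum(vals) / samples
    var = sum((v - mean) ** 2 for v in vals) / samples
    return var / mean ** 2

for n in (5, 20):
    assert abs(beta_gaussian_mc(n) - 2.0 / n) < 0.05
```

In this example \(\beta (\mu )=2/n=o_n(1)\), which is exactly the behavior that the plan above requires.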

We make these ideas more precise in Sect. 5, where we also illustrate them with a number of examples. Note that it is not clear in advance that \(\Lambda _{\mu }^{*}\) has bounded second or higher order moments, which is necessary for \(\beta (\mu )\) to be well-defined. We study this question in Sect. 4, where we obtain an affirmative answer in the case of the uniform measure on a convex body. In fact we cover the more general case of \(\kappa \)-concave probability measures, \(\kappa \in (0,1/n]\), which are supported on a centered convex body.

Theorem 1.2

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Let \(\kappa \in (0,1/n]\) and let \(\mu \) be a centered \(\kappa \)-concave probability measure with \(\textrm{supp}(\mu )=K\). Then,

$$\begin{aligned} \int _{{\mathbb R}^n}e^{\frac{\kappa \Lambda _{\mu }^{*}(x)}{2}}d\mu (x)<\infty . \end{aligned}$$

In particular, for all \(p\geqslant 1\) we have that \({\mathbb {E}}_{\mu }\big ((\Lambda _{\mu }^{*})^p\big )<\infty \).

The method of proof of Theorem 1.2 in fact gives reasonable upper bounds for \(\Vert \Lambda _{\mu }^{*}\Vert _{L^p(\mu )}\). In particular, if we assume that \(\mu =\mu _K\) is the uniform measure on a centered convex body, then we obtain a sharp two-sided estimate for the most interesting cases \(p=1\) and \(p=2\).

Theorem 1.3

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\), \(n\geqslant 2\). Then,

$$\begin{aligned} c_1n/L_{\mu _K}^2\leqslant \Vert \Lambda _{\mu _K}^{*}\Vert _{L^1(\mu _K)}\leqslant \Vert \Lambda _{\mu _K}^{*}\Vert _{L^2(\mu _K)}\leqslant c_2n\ln n, \end{aligned}$$

where \(L_{\mu _K}\) is the isotropic constant of the uniform measure \(\mu _K\) on K and \(c_1,c_2>0\) are absolute constants.

The left-hand side inequality of Theorem 1.3 follows easily from one of the main results in [9]. Both the lower and the upper bound are of optimal order with respect to the dimension; this can be seen from the examples of the uniform measure on the cube and on the Euclidean ball (see Sect. 5), respectively.

Besides Theorem 1.2, we show in Sect. 4 that \(\Lambda _{\mu }^{*}\) has finite moments of all orders in the following cases:

(i) If \(\mu \) is a centered probability measure on \({\mathbb R}\) which is absolutely continuous with respect to the Lebesgue measure, or a product of such measures.

(ii) If \(\mu \) is a centered log-concave probability measure on \({\mathbb R}^n\) and there exists a function \(g:[1,\infty )\rightarrow [1,\infty )\) with \(\lim _{t\rightarrow \infty }g(t)/\ln (t+1)=+\infty \) such that \(Z_t^+(\mu )\supseteq g(t)Z_2^+(\mu )\) for all \(t\geqslant 2\), where \(\{Z_t^+(\mu )\}_{t\geqslant 1}\) is the family of one-sided \(L_t\)-centroid bodies of \(\mu \).

Again, it seems harder to prove, if true, an analogous result for any centered log-concave probability measure \(\mu \) on \({\mathbb R}^n\); this is a second basic question that our work leaves open.

In Sect. 5 we describe the approach to the main problem and show how one can use the previous results to obtain bounds for \(\varrho (\mu ,\delta )\). We also clarify the role of the parameter \(\beta (\mu )\). One would hope that \(\beta (\mu )\) is small as the dimension increases, e.g. \(\beta (\mu )\leqslant c/\sqrt{n}\). If so, then the next general result provides satisfactory lower bounds for \(\varrho _1(\mu ,\delta )\).

Theorem 1.4

Let \(\mu \) be a centered log-concave probability measure on \({\mathbb R}^n\). Assume that \(\beta (\mu )<1/8\) and \(8\beta (\mu )<\delta <1\). If \(n/L_{\mu }^2\geqslant c_2\ln (2/\delta )\sqrt{\delta /\beta (\mu )}\) where \(L_{\mu }\) is the isotropic constant of \(\mu \), then

$$\begin{aligned} \varrho _1(\mu ,\delta )\geqslant \left( 1-\sqrt{8\beta (\mu )/\delta }\right) \frac{\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})}{n}. \end{aligned}$$

We are able to give satisfactory upper bounds for \(\varrho _2(\mu ,\delta )\) in the case where \(\mu =\mu _K\) is the uniform measure on a centered convex body K of volume 1 in \({\mathbb R}^n\).

Theorem 1.5

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Assume that \(\beta (\mu _K )<1/2\) and \(2\beta (\mu _K )<\delta <1\). If \(n/L_{\mu _K}^2\geqslant c_2\ln (2/\delta )\sqrt{\delta /\beta (\mu _K)}\) then

$$\begin{aligned} \varrho _2(\mu _K,\delta )\leqslant \left( 1+\sqrt{8\beta (\mu _K)/\delta }\right) \frac{\mathbb {E}_{\mu _K}(\Lambda _{\mu _K}^{*})}{n}. \end{aligned}$$

Combining these two results we see that, provided that \(\beta (\mu _K)\) is small compared to a fixed \(\delta \in (0,1)\), we have a threshold of the order

$$\begin{aligned} \varrho (\mu _K,\delta )\leqslant \frac{c}{n}\sqrt{\frac{\textrm{Var}_{\mu _K }(\Lambda _{\mu _K}^{*})}{\delta }}. \end{aligned}$$

The above discussion leaves open a third basic question: to estimate

$$\begin{aligned} \beta _n:=\sup \{\beta (\mu ):\mu \;\hbox {is a centered log-concave probability measure on}\;{\mathbb R}^n\}. \end{aligned}$$

We illustrate the method that we develop in this work with a number of examples. We consider first the standard examples of the uniform measure on the unit cube and the Gaussian measure. As a direct consequence of our results, in both cases we obtain a bound \(\varrho (\mu ,\delta )\leqslant c(\delta )/\sqrt{n}\) for the threshold, where \(c(\delta )>0\) is a constant depending on \(\delta \). Finally, we examine the case of the uniform measure on the Euclidean ball \(D_n\) of volume 1 in \({\mathbb R}^n\) and obtain the following sharp threshold.

Theorem 1.6

Let \(D_n\) be the centered Euclidean ball of volume 1 in \({\mathbb R}^n\). Then, the sequence \(\mu _n:=\mu _{D_n}\) exhibits a sharp threshold with \(\varrho (\mu _n,\delta )\leqslant \frac{c}{\sqrt{\delta n}}\) and e.g. in the case where n is even we have that

$$\begin{aligned} {\mathbb {E}}_{\mu _n}(\Lambda _{\mu _n}^{*})=\frac{(n+1)}{2}H_{\frac{n}{2}}+O(\sqrt{n}) \end{aligned}$$

as \(n\rightarrow \infty \), where \(H_m=\sum _{k=1}^m\frac{1}{k}\).
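Since \(H_m=\ln m+\gamma +O(1/m)\), the leading term \(\frac{n+1}{2}H_{n/2}\) grows like \(\frac{n}{2}\ln n\), consistent with the fact (recalled in Sect. 2.2) that the ball requires \(N\geqslant \exp (cn\ln n)\) points. A small computation illustrating this (not part of the argument):

```python
import math

def harmonic(m):
    # H_m = 1 + 1/2 + ... + 1/m
    return sum(1.0 / k for k in range(1, m + 1))

# Leading term of E[Lambda*] for the ball, n even: (n+1)/2 * H_{n/2},
# compared with (n/2) ln(n/2).  Since H_m = ln m + gamma + O(1/m),
# the ratio exceeds 1 and tends to 1 slowly as n grows.
for n in (10, 100, 1000):
    lead = (n + 1) / 2.0 * harmonic(n // 2)
    ratio = lead / ((n / 2.0) * math.log(n / 2.0))
    assert ratio > 1.0
```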

2 Notation, background information and related literature

In this section we introduce notation and terminology that we use throughout this work, and provide background information on convex bodies and log-concave probability measures. We write \(\langle \cdot ,\cdot \rangle \) for the standard inner product in \({\mathbb R}^n\) and denote the Euclidean norm by \(|\cdot |\). In what follows, \(B_2^n\) is the Euclidean unit ball, \(S^{n-1}\) is the unit sphere, and \(\sigma \) is the rotationally invariant probability measure on \(S^{n-1}\). Lebesgue measure in \({\mathbb R}^n\) is also denoted by \(|\cdot |\). The letters \(c, c^{\prime },c_j,c_j^{\prime }\) etc. denote absolute positive constants whose value may change from line to line. Whenever we write \(a\approx b\), we mean that there exist absolute constants \(c_1,c_2>0\) such that \(c_1a\leqslant b\leqslant c_2a\).

We refer to Schneider’s book [29] for basic facts from the Brunn–Minkowski theory and to the book [2] for basic facts from asymptotic convex geometry. We also refer to [10] for more information on isotropic convex bodies and log-concave probability measures.

2.1 Log-concave probability measures

A convex body in \({\mathbb R}^n\) is a compact convex set \(K\subset {\mathbb R}^n\) with non-empty interior. We say that K is centrally symmetric if \(-K=K\) and that K is centered if the barycenter \(\textrm{bar}(K)=\frac{1}{|K|}\int _Kx\,dx\) of K is at the origin. The Minkowski functional \(\Vert \cdot \Vert _K\) of a convex body K in \({\mathbb R}^n\) with \(0\in \textrm{int}(K)\) is defined for all \(x\in {\mathbb R}^n\) by \(\Vert x\Vert _K=\inf \{s\geqslant 0:x\in sK\}\) and the support function of K is the function

$$\begin{aligned} h_K(x) = \sup \{\langle x,y\rangle :y\in K\},\qquad x\in {\mathbb R}^n. \end{aligned}$$

A Borel measure \(\mu \) on \(\mathbb R^n\) is called log-concave if \(\mu (\lambda A+(1-\lambda )B) \geqslant \mu (A)^{\lambda }\mu (B)^{1-\lambda }\) for any compact subsets A and B of \({\mathbb R}^n\) and any \(\lambda \in (0,1)\). A function \(f:\mathbb R^n \rightarrow [0,\infty )\) is called log-concave if its support \(\{f>0\}\) is a convex set in \({\mathbb R}^n\) and the restriction of \(\ln {f}\) to it is concave. If f has finite positive integral then there exist constants \(A,B>0\) such that \(f(x)\leqslant Ae^{-B|x|}\) for all \(x\in {\mathbb R}^n\) (see [10, Lemma 2.2.1]). In particular, f has finite moments of all orders. It is known (see [6]) that if a probability measure \(\mu \) is log-concave and \(\mu (H)<1\) for every hyperplane H in \({\mathbb R}^n\), then \(\mu \) has a log-concave density \(f_{{\mu }}\). We say that \(\mu \) is even if \(\mu (-B)=\mu (B)\) for every Borel subset B of \({\mathbb R}^n\) and that \(\mu \) is centered if

$$\begin{aligned} \textrm{bar}(\mu ):=\int _{\mathbb R^n} \langle x, \xi \rangle d\mu (x) = \int _{\mathbb R^n} \langle x, \xi \rangle f_{\mu }(x) dx = 0 \end{aligned}$$

for all \(\xi \in S^{n-1}\). We shall use the fact that if \(\mu \) is a centered log-concave probability measure on \({\mathbb R}^k\) then

$$\begin{aligned} \Vert f_{\mu }\Vert _{\infty }\leqslant e^kf_{\mu }(0). \end{aligned}$$
(2.1)

This is a result of Fradelizi from [15]. Note that if K is a convex body in \(\mathbb R^n\) then the Brunn–Minkowski inequality implies that the indicator function \(\textbf{1}_{K} \) of K is the density of a log-concave measure, namely the restriction of the Lebesgue measure to K.
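In dimension one the constant \(e^k\) in (2.1) is attained by the shifted exponential density \(f(x)=e^{-(x+1)}\), \(x\geqslant -1\), which is centered and satisfies \(\Vert f\Vert _{\infty }=1=e\cdot f(0)\). A two-line check (illustration only):

```python
import math

# Centered log-concave density on R: f(x) = exp(-(x + 1)) for x >= -1
# (a standard exponential shifted so that its barycenter is the origin).
def f(x):
    return math.exp(-(x + 1.0)) if x >= -1.0 else 0.0

sup_f = f(-1.0)            # ||f||_inf = 1, attained at the left endpoint
bound = math.e * f(0.0)    # e^k f(0) with k = 1
assert sup_f <= bound + 1e-12       # inequality (2.1)
assert abs(sup_f - bound) < 1e-12   # the shifted exponential attains equality
```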

Given \(\kappa \in [-\infty ,1/n]\) we say that a measure \(\mu \) on \({\mathbb R}^n\) is \(\kappa \)-concave if

$$\begin{aligned} \mu ((1-\lambda )A+\lambda B)\geqslant ((1-\lambda )\mu ^{\kappa }(A)+\lambda \mu ^{\kappa }(B))^{1/\kappa } \end{aligned}$$
(2.2)

for all compact subsets \(A,B\) of \({\mathbb R}^n\) with \(\mu (A)\mu (B)>0\) and all \(\lambda \in (0,1)\). The limiting cases are defined appropriately. For \(\kappa =0\) the right-hand side in (2.2) becomes \(\mu (A)^{1-\lambda }\mu (B)^{\lambda }\) (therefore, 0-concave measures are exactly the log-concave measures). In the case \(\kappa =-\infty \) the right-hand side in (2.2) becomes \(\min \{\mu (A),\mu (B)\}\). Note that if \(\mu \) is \(\kappa \)-concave and \(\kappa _1\leqslant \kappa \) then \(\mu \) is \(\kappa _1\)-concave.

Next, let \(\gamma \in [-\infty ,\infty ]\). A function \(f:\mathbb R^n\rightarrow [0,\infty )\) is called \(\gamma \)-concave if

$$\begin{aligned} f((1-\lambda )x+\lambda y)\geqslant ((1-\lambda )f^{\gamma }(x)+\lambda f^{\gamma }(y))^{1/\gamma } \end{aligned}$$

for all \(x,y\in {\mathbb R}^n\) with \(f(x)f(y)>0\) and all \(\lambda \in (0,1)\). Again, the cases \(\gamma =0,+\infty \) are defined appropriately. Borell [7] studied the relation between \(\kappa \)-concave probability measures and \(\gamma \)-concave functions. He showed that if \(\mu \) is a measure on \({\mathbb R}^n\) and the affine subspace F spanned by the support \(\textrm{supp}(\mu )\) of \(\mu \) has dimension \(\textrm{dim}(F)=n\), then for every \(-\infty \leqslant \kappa <1/n\) the measure \(\mu \) is \(\kappa \)-concave if and only if it has a non-negative density \(\psi \in L_{\textrm{loc}}^1({\mathbb R}^n,dx)\) which is \(\gamma \)-concave, where \(\gamma =\frac{\kappa }{1-\kappa n}\in [-1/n,+\infty )\).

Let \(\mu \) and \(\nu \) be two log-concave probability measures on \({\mathbb R}^n\). Let \(T:{\mathbb R}^n\rightarrow {\mathbb R}^n\) be a measurable function which is defined \(\nu \)-almost everywhere and satisfies

$$\begin{aligned} \mu (B)=\nu (T^{-1}(B)) \end{aligned}$$

for every Borel subset B of \({\mathbb R}^n\). We then say that T pushes forward \(\nu \) to \(\mu \) and write \(T_*\nu =\mu \). It is easy to see that \(T_*\nu =\mu \) if and only if for every bounded Borel measurable function \(g:{\mathbb R}^n\rightarrow {\mathbb R}\) we have

$$\begin{aligned} \int _{{\mathbb R}^n}g(x)d\mu (x)=\int _{{\mathbb R}^n}g(T(y))d\nu (y). \end{aligned}$$
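A one-dimensional illustration of this identity is inverse-CDF sampling, which realizes a pushforward map; the test function \(g\) and the sample size below are arbitrary choices, and the exponential target is used only because both sides can be computed exactly:

```python
import math
import random

# T(y) = -ln(1 - y) pushes the uniform measure nu on [0, 1] forward to the
# exponential distribution mu on [0, inf) (inverse-CDF sampling).
def T(y):
    return -math.log(1.0 - y)

def g(x):
    return math.exp(-x)  # a bounded Borel test function

rng = random.Random(3)
# Right-hand side: int g(T(y)) dnu(y), estimated by Monte Carlo.
rhs = sum(g(T(rng.random())) for _ in range(40000)) / 40000
# Left-hand side: int g dmu = int_0^inf e^{-x} e^{-x} dx = 1/2 exactly.
assert abs(rhs - 0.5) < 0.01
```

Here \(g(T(y))=1-y\), so the Monte Carlo average is simply estimating \(\int _0^1(1-y)\,dy=1/2\), matching \(\int g\,d\mu \).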

If \(\mu \) is a log-concave measure on \({\mathbb R}^n\) with density \(f_{\mu }\), we define the isotropic constant of \(\mu \) by

$$\begin{aligned} L_{\mu }:=\left( \frac{\sup _{x\in {\mathbb R}^n} f_{\mu } (x)}{\int _{{\mathbb R}^n}f_{\mu }(x)dx}\right) ^{\frac{1}{n}} [\det \text {Cov}(\mu )]^{\frac{1}{2n}}, \end{aligned}$$

where \(\text {Cov}(\mu )\) is the covariance matrix of \(\mu \) with entries

$$\begin{aligned} \text {Cov}(\mu )_{ij}:=\frac{\int _{{\mathbb R}^n}x_ix_j f_{\mu } (x)\,dx}{\int _{{\mathbb R}^n} f_{\mu } (x)\,dx}-\frac{\int _{{\mathbb R}^n}x_i f_{\mu } (x)\,dx}{\int _{{\mathbb R}^n} f_{\mu } (x)\,dx}\frac{\int _{{\mathbb R}^n}x_j f_{\mu } (x)\,dx}{\int _{{\mathbb R}^n} f_{\mu } (x)\,dx}. \end{aligned}$$

We say that a log-concave probability measure \(\mu \) on \({\mathbb R}^n\) is isotropic if it is centered and \(\text {Cov}(\mu )=I_n\), where \(I_n\) is the identity \(n\times n\) matrix. In this case, \(L_{\mu }=\Vert f_{\mu }\Vert _{\infty }^{1/n}\). For every \(\mu \) there exists an affine transformation T such that \(T_{*}\mu \) is isotropic. The hyperplane conjecture asks if there exists an absolute constant \(C>0\) such that

$$\begin{aligned} L_n:= \max \{ L_{\mu }:\mu \ \hbox {is an isotropic log-concave probability measure on}\ {\mathbb R}^n\}\leqslant C \end{aligned}$$

for all \(n\geqslant 1\). Bourgain [8] established the upper bound \(L_n\leqslant c\sqrt[4]{n}\ln n\); later, Klartag [21] improved this estimate to \(L_n\leqslant c\sqrt[4]{n}\). In a breakthrough work, Chen [12] proved that for any \(\varepsilon >0\) there exists \(n_0(\varepsilon )\in {\mathbb N}\) such that \(L_n\leqslant n^{\varepsilon }\) for every \(n\geqslant n_0(\varepsilon )\). Very recently, Klartag and Lehec [22] showed that the hyperplane conjecture and the stronger Kannan–Lovász–Simonovits isoperimetric conjecture hold true up to a factor that is polylogarithmic in the dimension; more precisely, they achieved the bound \(L_n\leqslant c(\ln n)^4\), where \(c>0\) is an absolute constant.
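As a concrete instance of the definitions above, for the uniform measure on the unit-volume cube \(K=[-1/2,1/2]^n\) one has \(\Vert f_{\mu }\Vert _{\infty }=\int f_{\mu }=1\) and \(\textrm{Cov}(\mu )=\frac{1}{12}I_n\), hence \(L_{\mu }=12^{-1/2}\). A Monte Carlo sketch (sample size and seed arbitrary; an illustration, not part of the argument):

```python
import random

def isotropic_constant_cube_mc(n, samples=20000, seed=2):
    """Estimate L_{mu_K} for the unit-volume cube K = [-1/2, 1/2]^n.

    Here sup f = int f = 1, so L reduces to (det Cov)^{1/(2n)}.  Cov is
    diagonal with entries 1/12, giving L = 12^{-1/2}.  We estimate the
    diagonal variances by Monte Carlo; off-diagonal terms vanish by
    symmetry and are ignored."""
    rng = random.Random(seed)
    sums = [0.0] * n
    sq = [0.0] * n
    for _ in range(samples):
        for i in range(n):
            x = rng.random() - 0.5
            sums[i] += x
            sq[i] += x * x
    det = 1.0
    for i in range(n):
        det *= sq[i] / samples - (sums[i] / samples) ** 2
    return det ** (1.0 / (2 * n))

assert abs(isotropic_constant_cube_mc(5) - 12 ** -0.5) < 0.01
```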

2.2 Known results

Several variants of the threshold problem have been studied, starting with the work of Dyer, Füredi and McDiarmid who established in [14] a sharp threshold for the expected volume of random polytopes with vertices uniformly distributed in the discrete cube \(E_2^n=\{-1,1\}^n\) or in the solid cube \(B_{\infty }^n=[-1,1]^n\). For example, in the first case, if \(\kappa =2/\sqrt{e}\) then for every \(\varepsilon \in (0,1)\) one has

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup \left\{ 2^{-n} {\mathbb {E}}|K_N|:N\leqslant (\kappa -\varepsilon )^n\right\} =0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\inf \left\{ 2^{-n} {\mathbb {E}}|K_N|:N\geqslant (\kappa +\varepsilon )^n\right\} =1. \end{aligned}$$

A similar result holds true for the expected volume of random polytopes with vertices uniformly distributed in the cube \(B_{\infty }^n\); the corresponding value of the constant \(\kappa \) is \(\kappa =2\pi /e^{\gamma +1/2}\), where \(\gamma \) is Euler’s constant. In the terminology of the introduction, this last result is equivalent to \(\varrho (\mu _{B_{\infty }^n},\delta )=O_{\delta }(1)\) for any fixed value of \(\delta \in \left( 0,\tfrac{1}{2}\right) \).

Further sharp thresholds for the volume of various classes of random polytopes have been given. In [18] a threshold for \({\mathbb {E}}_{\mu ^N}|K_N|/(2\alpha )^n\) was established for the case where the \(X_i\) have independent identically distributed coordinates supported on a bounded interval \([-\alpha ,\alpha ]\), under some mild additional assumptions. The articles [27] and [3, 4] address the same question for a number of cases where the \(X_i\) have rotationally invariant densities. Upper and lower thresholds, exponential in the dimension, are obtained in [17] for the case where the \(X_i\) are uniformly distributed in a simplex.

Upper and lower thresholds were obtained recently by Chakraborti, Tkocz and Vritsiou in [11] for some general families of distributions. If \(\mu \) is an even log-concave probability measure supported on a convex body K in \({\mathbb R}^n\) and if \(X_1,X_2,\ldots \) are independent random points distributed according to \(\mu \), then for any \(n<N\leqslant \exp (c_1n/L_{\mu }^2)\) we have that

$$\begin{aligned} \frac{{\mathbb {E}}_{\mu ^N}(|K_N|)}{|K|} \leqslant \exp \left( -c_2n/L_{\mu }^2\right) , \end{aligned}$$
(2.3)

where \(c_1,c_2>0\) are absolute constants. A lower threshold is also established in [11] for the case where \(\mu \) is an even \(\kappa \)-concave measure on \({\mathbb R}^n\) with \(0<\kappa <1/n\), supported on a convex body K in \({\mathbb R}^n\). If \(X_1,X_2,\ldots \) are independent random points in \({\mathbb R}^n\) distributed according to \(\mu \) and \(K_N=\textrm{conv}\{X_1,\ldots ,X_N\}\) as before, then for any \(M\geqslant C\) and any \(N\geqslant \exp \left( \frac{1}{\kappa }(\log n+2\log M)\right) \) we have that

$$\begin{aligned} \frac{{\mathbb {E}}_{\mu ^N}(|K_N|)}{|K|}\geqslant 1-\frac{1}{M}, \end{aligned}$$
(2.4)

where \(C>0\) is an absolute constant.

Analogues of these results in the setting of the present work were obtained in [9] for 0-concave, i.e. log-concave, probability measures. There exists an absolute constant \(c>0\) such that if \(N_1(n)=\exp (cn/L_n^2)\) then

$$\begin{aligned} \sup _{\mu }\Big (\sup \Big \{{\mathbb {E}}_{\mu ^N}[\mu (K_N)]:N\leqslant N_1(n)\Big \}\Big )\longrightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \), where the first supremum is over all log-concave probability measures \(\mu \) on \({\mathbb R}^n\). On the other hand, if \(\delta \in (0,1)\) then,

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{ {\mathbb {E}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]: N\geqslant \exp \big (C\delta ^{-1}\ln \left( 2/\delta \right) n\ln n\big )\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the first infimum is over all log-concave probability measures \(\mu \) on \({\mathbb R}^n\) and \(C>0\) is an absolute constant.

It should be noted that an exponential in the dimension lower threshold is not possible in full generality. For example, in the case where the \(X_i\) are uniformly distributed in the Euclidean ball one needs \(N\geqslant \exp (cn\ln n)\) points so that the volume of a random \(K_N\) becomes significantly large. Thus, apart from the constants depending on \(\delta \), the lower threshold above is sharp. However, it provides a weak threshold in the sense that it estimates the expectation \({\mathbb {E}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]\) (for an arbitrarily small but positive value of \(\delta \)) while we would like to have a similar result for \({\mathbb {E}}_{\mu ^N}\big [\mu (K_N)\big ]\). It is shown in [9] that there exists an absolute constant \(C>0\) such that

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{ {\mathbb {E}}\,\big [\mu (K_N)\big ]: N\geqslant \exp (C(n\ln n)^2u(n))\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the first infimum is over all log-concave probability measures \(\mu \) on \({\mathbb R}^n\) and u(n) is any function with \(u(n)\rightarrow \infty \) as \(n\rightarrow \infty \).

3 Estimates for the half-space depth

Let \(\mu \) be a centered log-concave probability measure on \({\mathbb R}^n\) with density \(f:=f_{\mu }\). Recall that the logarithmic Laplace transform of \(\mu \) on \({\mathbb R}^n\) is defined by

$$\begin{aligned} \Lambda _{\mu }(\xi )=\ln \left( \int _{{\mathbb R}^n}e^{\langle \xi ,z\rangle }f(z)dz\right) . \end{aligned}$$

It is easily checked by means of Hölder’s inequality that \(\Lambda _{\mu }\) is convex and \(\Lambda _{\mu }(0)=0\). Since \(\textrm{bar}(\mu )=0\), Jensen’s inequality shows that

$$\begin{aligned} \Lambda _{\mu }(\xi )=\ln \left( \int _{{\mathbb R}^n}e^{\langle \xi ,z\rangle }f(z)dz\right) \geqslant \int _{{\mathbb R}^n}\langle \xi ,z\rangle f(z)dz=0 \end{aligned}$$

for all \(\xi \). Therefore, \(\Lambda _{\mu }\) is a non-negative function. One can check that the set \(A(\mu )=\{\Lambda _{\mu }<\infty \}\) is open and \(\Lambda _{\mu }\) is \(C^{\infty }\) and strictly convex on \(A(\mu )\).

We also define

$$\begin{aligned} \Lambda _{\mu }^{*}(x)= \sup _{\xi \in {\mathbb R}^n} \left\{ \langle x, \xi \rangle - \Lambda _{\mu }(\xi )\right\} . \end{aligned}$$

In other words, \(\Lambda _{\mu }^{*}\) is the Legendre transform of \(\Lambda _{\mu }\): recall that given a convex function \(g:{\mathbb R}^n\rightarrow (-\infty ,\infty ]\), the Legendre transform \({{\mathcal {L}}}(g)\) of g is defined by

$$\begin{aligned} {{\mathcal {L}}}(g)(x):=\sup _{\xi \in {\mathbb R}^n}\{ \langle x,\xi \rangle - g(\xi )\}. \end{aligned}$$

The function \(\Lambda ^{*}_{\mu }\) is called the Cramér transform of \(\mu \). For every \(t>0\) we define the convex set

$$\begin{aligned} B_t(\mu ):=\{x\in {\mathbb R}^n:\Lambda ^{*}_{\mu }(x)\leqslant t\}. \end{aligned}$$

For any \(x\in {\mathbb R}^n\) we denote by \({{\mathcal {H}}}(x)\) the set of all half-spaces H of \({\mathbb R}^n\) containing x. Then we define

$$\begin{aligned} \varphi _{\mu }(x)=\inf \{\mu (H):H\in {{\mathcal {H}}}(x)\}. \end{aligned}$$

The function \(\varphi _{\mu }\) is called Tukey’s half-space depth. Our aim is to give upper and lower bounds for \(\varphi _{\mu }(x)\) when \(x\in \partial (B_t(\mu ))\), \(t>0\).

Lemma 3.1

Let \(\mu \) be a Borel probability measure on \({\mathbb R}^n\). For every \(x\in {\mathbb R}^n\) we have \(\varphi _{\mu } (x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\). In particular, for any \(t>0\) and for all \(x\notin B_t(\mu )\) we have that \(\varphi _{\mu }(x)\leqslant \exp (-t)\).

Proof

Let \(x\in {\mathbb R}^n\). We start with the observation that for any \(\xi \in {\mathbb R}^n\) the half-space \(\{z:\langle z-x,\xi \rangle \geqslant 0\}\) is in \({{\mathcal {H}}}(x)\), therefore

$$\begin{aligned} \varphi _{\mu } (x) \leqslant \mu (\{z:\langle z,\xi \rangle \geqslant \langle x,\xi \rangle \} ) \leqslant e^{-\langle x,\xi \rangle }{\mathbb {E}}_{\mu }\big (e^{\langle z,\xi \rangle }\big ) =\exp \big (-[\langle x,\xi \rangle -\Lambda _{\mu }(\xi )]\big ), \end{aligned}$$

and taking the infimum over all \(\xi \in {\mathbb R}^n\) we see that \(\varphi _{\mu } (x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\), as claimed. \(\square \)
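In the Gaussian case on \({\mathbb R}\), Lemma 3.1 reduces to the classical Chernoff bound: there \(\varphi _{\mu }(x)=\min \{\Phi (x),1-\Phi (x)\}\) and \(\Lambda _{\mu }^{*}(x)=x^2/2\), so the lemma reads \(1-\Phi (x)\leqslant e^{-x^2/2}\) for \(x\geqslant 0\). A numerical check over a grid (illustration only; the grid is an arbitrary choice):

```python
import math

def Phi(x):
    # Standard Gaussian CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Lemma 3.1 for the standard Gaussian on R:
# min(Phi(x), 1 - Phi(x)) <= exp(-Lambda*(x)) = exp(-x^2 / 2).
for i in range(51):
    x = i / 10.0
    depth = min(Phi(x), 1.0 - Phi(x))
    assert depth <= math.exp(-x * x / 2.0) + 1e-15
```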

Next, we would like to obtain a lower bound for \(\varphi _{\mu }(x)\) when \(x\in B_t(\mu )\). In the case where \(\mu =\mu _K\) is the uniform measure on a centered convex body K of volume 1 in \({\mathbb R}^n\), our estimate is the following.

Theorem 3.2

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Then, for every \(t>0\) we have that

$$\begin{aligned} \inf \{\varphi _{\mu _K}(x):x\in B_t(\mu _K)\}\geqslant \frac{1}{10}\exp (-t-2\sqrt{n}). \end{aligned}$$

The first part of the argument works for any centered log-concave probability measure \(\mu \) with density f on \({\mathbb R}^n\). For every \(\xi \in {\mathbb R}^n\) we define the probability measure \(\mu _{\xi }\) with density

$$\begin{aligned} f_{\xi }(z)=e^{-\Lambda _{\mu } (\xi )+\langle \xi , z\rangle }f(z). \end{aligned}$$

In the next lemma (see [10, Proposition 7.2.1]) we recall some basic facts for \(\mu _{\xi }\).

Lemma 3.3

The barycenter of \(\mu _{\xi }\) is \(x=\nabla \Lambda _{\mu } (\xi )\) and \(\textrm{Cov}(\mu _{\xi })=\textrm{Hess}\,(\Lambda _{\mu })(\xi )\).

Next, we set

$$\begin{aligned} \sigma _{\xi }^2=\int _{{\mathbb R}^n}\langle z-x,\xi \rangle ^2d\mu _{\xi }(z). \end{aligned}$$

Let \(t>0\). Since \(B_t(\mu )\) is convex, in order to give a lower bound for \(\inf \{\varphi _{\mu }(x):x\in B_t(\mu )\}\) it suffices to give a lower bound for \(\mu (H)\), where H is any closed half-space whose bounding hyperplane supports \(B_t(\mu )\). In that case,

$$\begin{aligned} \mu (H)=\mu (\{z:\langle z-x,\xi \rangle \geqslant 0\}) \end{aligned}$$
(3.1)

for some \(x\in \partial (B_t(\mu ))\), with \(\xi =\nabla \Lambda _{\mu }^{*}(x)\), or equivalently \(x=\nabla \Lambda _{\mu }(\xi )\) (see e.g. Theorem 23.5 and Corollary 23.5.1 in [28]). Note that

$$\begin{aligned} \mu (\{z:\langle z-x,\xi \rangle \geqslant 0\})&=\int _{\mathbb {R}^n}{} \textbf{1}_{[0,\infty )} (\langle z-x,\xi \rangle ) f(z)\, dz\nonumber \\&=e^{\Lambda _{\mu } (\xi )}\int _{\mathbb {R}^n}{} \textbf{1}_{[0,\infty )} (\langle z-x,\xi \rangle ) e^{-\langle z,\xi \rangle }\, d\mu _{\xi }(z)\nonumber \\&=e^{\Lambda _{\mu } (\xi )}e^{-\langle x,\xi \rangle }\int _{\mathbb {R}^n}{} \textbf{1}_{[0,\infty )}(\langle z-x,\xi \rangle ) e^{-\langle z-x,\xi \rangle }\, d\mu _{\xi }(z)\nonumber \\&\geqslant e^{-\Lambda _{\mu }^{*}(x)}\int _{0}^{\infty } \sigma _{\xi }e^{-\sigma _{\xi }t} \mu _{\xi }(\{z:0\leqslant \langle z-x,\xi \rangle \leqslant \sigma _{\xi }t\})\, dt. \end{aligned}$$
(3.2)

From Markov’s inequality we see that

$$\begin{aligned} \mu _{\xi }(\{z:\langle z-x,\xi \rangle \geqslant 2\sigma _{\xi }\})\leqslant \frac{1}{4}. \end{aligned}$$

Moreover, since x is the barycenter of \(\mu _{\xi }\), Grünbaum’s lemma (see [10, Lemma 2.2.6]) implies that

$$\begin{aligned} \mu _{\xi }(\{z:\langle z-x,\xi \rangle \geqslant 0\})\geqslant \frac{1}{e}. \end{aligned}$$

Therefore,

$$\begin{aligned} \int _{0}^{\infty } \sigma _{\xi }e^{-\sigma _{\xi }t}\mu _{\xi }(\{z:0\leqslant \langle z-x,\xi \rangle \leqslant \sigma _{\xi }t\})\, dt\geqslant \int _2^{\infty } \sigma _{\xi }e^{-\sigma _{\xi }t}\left( \frac{1}{e}-\frac{1}{4}\right) \, dt\geqslant \frac{4-e}{4e}e^{-2\sigma _{\xi }}. \end{aligned}$$
(3.3)

We now need an upper bound for \(\sup _{\xi }\sigma _{\xi }\). Such a bound is available when \(\mu =\mu _K\) is the uniform measure on a centered convex body K of volume 1 in \({\mathbb R}^n\), thanks to a theorem of Nguyen [25] (proved independently by Wang [31]; see also [16]).

Theorem 3.4

Let \(\nu \) be a log-concave probability measure on \({\mathbb R}^n\) with density \(g=\exp (-p)\), where p is a convex function. Then,

$$\begin{aligned} \textrm{Var}_{\nu }(p)\leqslant n. \end{aligned}$$
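As a quick numerical sanity check of this inequality (an illustrative sketch, assuming \(\nu \) is the standard Gaussian on \({\mathbb R}^n\), whose density is \(\exp (-p)\) with \(p(x)=|x|^2/2+\tfrac{n}{2}\ln (2\pi )\), so that \(\textrm{Var}_{\nu }(p)=\textrm{Var}(|X|^2/2)=n/2\leqslant n\)):

```python
import random

random.seed(0)

def var_of_potential_gaussian(n, samples=100_000):
    """Monte Carlo estimate of Var_nu(p) for nu = standard Gaussian on R^n,
    whose density is exp(-p) with p(x) = |x|^2/2 + (n/2) ln(2*pi)."""
    vals = []
    for _ in range(samples):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        # the additive constant (n/2) ln(2*pi) does not affect the variance
        vals.append(0.5 * sum(t * t for t in x))
    m = sum(vals) / samples
    return sum((v - m) ** 2 for v in vals) / samples

n = 5
v = var_of_potential_gaussian(n)
print(v)  # close to n/2 = 2.5, consistent with Var_nu(p) <= n
```

The Gaussian shows the bound is tight only up to a factor 2; log-concave densities closer to the exponential get closer to the value n.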

Proof of Theorem 3.2

Set \(\mu :=\mu _K\). Since \(f(z)={\textbf{1}}_K(z)\), the density \(f_{\xi }\) of \(\mu _{\xi }\) is proportional to \(e^{\langle \xi , z\rangle }{\textbf{1}}_K(z)\). Using the fact that

$$\begin{aligned} \mathbb {E}_{\mu _{\xi }}({\langle \xi , z\rangle })=\langle \nabla \Lambda _{\mu } (\xi ),\xi \rangle =\langle x,\xi \rangle , \end{aligned}$$

from Theorem 3.4 we get that

$$\begin{aligned} \sigma _{\xi }^2=\mathbb {E}_{\mu _{\xi }}\big (\langle z-x,\xi \rangle ^2\big )=\textrm{Var}_{\mu _{\xi }}(\langle \xi , z\rangle )\leqslant n. \end{aligned}$$

Then, combining (3.1), (3.2) and (3.3), for any bounding hyperplane H of \(B_t(\mu )\) we have

$$\begin{aligned} \mu (H)&\geqslant e^{-\Lambda _{\mu }^{*}(x)}\int _{0}^{\infty } \sigma _{\xi }e^{-\sigma _{\xi }t} \mu _{\xi }(0\leqslant \langle z-x,\xi \rangle \leqslant \sigma _{\xi }t)\, dt\\&\geqslant \frac{4-e}{4e}e^{-\Lambda _{\mu }^{*}(x)-2\sigma _{\xi }}\geqslant \frac{1}{10}\exp (-t-2\sqrt{n}), \end{aligned}$$

as claimed. \(\square \)

Theorem 3.2 shows that if K is a centered convex body of volume 1 in \({\mathbb R}^n\) then

$$\begin{aligned} 10\varphi _{\mu _K}(x)\geqslant \exp (-\Lambda _{\mu _K}^{*}(x)-2\sqrt{n}) \end{aligned}$$

for all \(x\in {\mathbb R}^n\). Setting

$$\begin{aligned} \omega _{\mu _K} (x)=\ln \left( \frac{1}{\varphi _{\mu _K}(x)}\right) \end{aligned}$$
(3.4)

and taking into account Lemma 3.1 we have the next two-sided estimate.

Corollary 3.5

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Then, for every \(x\in \textrm{int}(K)\) we have that

$$\begin{aligned} \omega _{\mu _K} (x)-5\sqrt{n}\leqslant \Lambda _{\mu _K}^{*}(x)\leqslant \omega _{\mu _K}(x). \end{aligned}$$
(3.5)
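The two-sided estimate (3.5) can be checked numerically in the simplest case (an illustrative sketch, assuming \(n=1\) and \(K=[-1/2,1/2]\), where \(\Lambda _{\mu _K}(t)=\ln (\sinh (t/2)/(t/2))\) and \(\varphi _{\mu _K}(x)=1/2-|x|\); the Legendre transform is approximated from below by a sup over a grid):

```python
import math

def Lambda(t):
    # log-Laplace transform of the uniform measure on K = [-1/2, 1/2]:
    # Lambda(t) = ln( sinh(t/2) / (t/2) )
    if abs(t) < 1e-8:
        return t * t / 24.0              # Taylor expansion near t = 0
    u = t / 2.0
    return math.log(math.sinh(u) / u)

def Lambda_star(x, tmax=100.0, steps=100_000):
    # grid approximation (from below) of sup_t { t x - Lambda(t) }
    best = 0.0                           # value at t = 0
    for i in range(steps + 1):
        t = -tmax + 2.0 * tmax * i / steps
        best = max(best, t * x - Lambda(t))
    return best

def omega(x):
    # omega = ln(1/phi) with phi(x) = 1/2 - |x| in dimension one
    return math.log(1.0 / (0.5 - abs(x)))

results = {x: (Lambda_star(x), omega(x)) for x in (0.0, 0.2, 0.4)}
print(results)  # in each pair: omega - 5*sqrt(1) <= Lambda* <= omega
```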

Note

A basic question that arises from the results of this section is whether an analogue of (3.5) holds true for any centered log-concave probability measure \(\mu \) on \({\mathbb R}^n\). This would allow us to apply the next steps of the procedure that our approach suggests to all log-concave probability measures.

4 Moments of the Cramér transform

As explained in the introduction, we would like to know for which centered log-concave probability measures \(\mu \) on \({\mathbb R}^n\) we have that \(\Lambda _{\mu }^{*}\) has finite moments of all orders. Our first result provides an affirmative answer in the case where \(\mu =\mu _K\) is the uniform measure on a centered convex body K of volume 1 in \({\mathbb R}^n\). In fact, the next theorem covers a more general case.

Theorem 4.1

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Let \(\kappa \in (0,1/n]\) and let \(\mu \) be a centered \(\kappa \)-concave probability measure with \(\textrm{supp}(\mu )=K\). Then,

$$\begin{aligned} \int _{{\mathbb R}^n}e^{\frac{\kappa \Lambda _{\mu }^{*}(x)}{2}}d\mu (x)<\infty . \end{aligned}$$

In particular, for all \(p\geqslant 1\) we have that \({\mathbb {E}}_{\mu }\big ((\Lambda _{\mu }^{*}(x))^p\big )<\infty \).

The proof of Theorem 4.1 is based on the next lemma, which is proved in [11, Lemma 7] in the symmetric case.

Lemma 4.2

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Let \(\kappa \in (0,1/n]\) and let \(\mu \) be a centered \(\kappa \)-concave probability measure with \(\textrm{supp}(\mu )=K\). Then,

$$\begin{aligned} \varphi _{\mu }(x)\geqslant e^{-2}\kappa (1-\Vert x\Vert _K)^{1/\kappa } \end{aligned}$$
(4.1)

for every \(x\in K\), where \(\Vert x\Vert _K\) is the Minkowski functional of K.

Sketch of the proof

We modify the argument from [11, Lemma 7] to cover the not necessarily symmetric case. First, consider the case \(0<\kappa <1/n\). Let X be a random vector distributed according to \(\mu \). Given \(\theta \in S^{n-1}\) let \(b=h_K(\theta )\) and \(a=h_K(-\theta )\). If \(g_{\theta }\) is the density of \(\langle X,\theta \rangle \) then \(g_{\theta }^{\frac{\kappa }{1-\kappa }}\) is concave on \([-a,b]\), therefore

$$\begin{aligned} g_{\theta }(t)\geqslant g_{\theta }(0)\left( 1-\frac{t}{b}\right) ^{\frac{1-\kappa }{\kappa }} \end{aligned}$$

for all \(t\in [0,b]\). It follows that, for every \(0<s<b\),

$$\begin{aligned} {\mathbb P}(\langle X,\theta \rangle \geqslant s)=\int _s^bg_{\theta }(t)\,dt\geqslant g_{\theta }(0)\int _s^b\left( 1-\frac{t}{b}\right) ^{\frac{1-\kappa }{\kappa }}\,dt =\kappa g_{\theta }(0)b\left( 1-\frac{s}{b}\right) ^{\frac{1}{\kappa }}. \end{aligned}$$

Note that \(g_{\theta }\) is a centered log-concave density. Therefore, \(g_{\theta }(0)\geqslant e^{-1}\Vert g_{\theta }\Vert _{\infty }\) by (2.1) and \(\Vert g_{\theta }\Vert _{\infty }b\geqslant {\mathbb P}(\langle X,\theta \rangle \geqslant 0)\geqslant e^{-1}\) by Grünbaum’s lemma [10, Lemma 2.2.6], which implies that \(g_{\theta }(0)b\geqslant e^{-2}\). It follows that

$$\begin{aligned} {\mathbb P}(\langle X,\theta \rangle \geqslant s)=\int _s^bg_{\theta }(t)\,dt\geqslant e^{-2}\kappa \left( 1-\frac{s}{b}\right) ^{\frac{1}{\kappa }}. \end{aligned}$$

Now, let \(x\in K\). Then \(\langle x,\theta \rangle \leqslant \Vert x\Vert _Kh_K(\theta )=\Vert x\Vert _Kb\), therefore

$$\begin{aligned} {\mathbb P}(\langle X,\theta \rangle \geqslant \langle x,\theta \rangle )\geqslant {\mathbb P}(\langle X,\theta \rangle \geqslant \Vert x\Vert _Kb) \geqslant e^{-2}\kappa \left( 1-\Vert x\Vert _K\right) ^{\frac{1}{\kappa }}. \end{aligned}$$

For the case \(\kappa =1/n\) recall that a 1/n-concave measure is \(\kappa \)-concave for every \(\kappa \in (0,1/n)\). This means that (4.1) holds true for all \(\kappa \in (0,1/n)\) and letting \(\kappa \rightarrow 1/n\) we obtain the result. \(\square \)
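In the simplest instance, (4.1) is an elementary inequality (a sketch assuming \(n=1\), \(\kappa =1\) and \(K=[-1/2,1/2]\), so that \(\mu _K\) is 1-concave, \(\Vert x\Vert _K=2|x|\), and in dimension one \(\varphi _{\mu _K}(x)=\min \{F(x),1-F(x)\}=1/2-|x|\)):

```python
import math

def phi(x):
    # in dimension one, phi(x) = min{F(x), 1 - F(x)} = 1/2 - |x|
    # for the uniform measure on K = [-1/2, 1/2]
    return 0.5 - abs(x)

def rhs(x, kappa=1.0):
    # right-hand side of (4.1): e^{-2} kappa (1 - ||x||_K)^{1/kappa},
    # where ||x||_K = 2|x| for K = [-1/2, 1/2]
    return math.exp(-2.0) * kappa * (1.0 - 2.0 * abs(x)) ** (1.0 / kappa)

grid = [i / 100.0 for i in range(-49, 50)]
ok = all(phi(x) >= rhs(x) for x in grid)
print(ok)  # True: here (4.1) reduces to 1/2 >= e^{-2}
```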

Proof of Theorem 4.1

From Lemma 3.1 we know that \(\varphi _{\mu } (x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\), or equivalently,

$$\begin{aligned} e^{\frac{\kappa \Lambda _{\mu }^{*}(x)}{2}}\leqslant \frac{1}{\varphi _{\mu }(x)^{\kappa /2}} \end{aligned}$$

for all \(x\in K\). From Lemma 4.2 we know that

$$\begin{aligned} \varphi _{\mu }(x)\geqslant e^{-2}\kappa (1-\Vert x\Vert _K)^{1/\kappa } \end{aligned}$$

for every \(x\in K\). It follows that

$$\begin{aligned} \int _Ke^{\frac{\kappa \Lambda _{\mu }^{*}(x)}{2}}d\mu (x)\leqslant (e^2/\kappa )^{\kappa /2}\int _K\frac{1}{(1-\Vert x\Vert _K)^{1/2}}d\mu (x). \end{aligned}$$

Recall that the cone probability measure \(\nu _K\) on the boundary \(\partial (K)\) of a convex body K with \(0\in \textrm{int}(K)\) is defined by

$$\begin{aligned} \nu _K(B)=\frac{|\{rx:x\in B,0\leqslant r\leqslant 1\}|}{|K|} \end{aligned}$$

for all Borel subsets B of \(\partial (K)\). We shall use the identity

$$\begin{aligned} \int _{{\mathbb R}^n}g(x)\,dx=n|K|\int _0^{\infty }r^{n-1}\int _{\partial (K)}g(rx)\,d\nu _K(x)\,dr \end{aligned}$$

which holds for every integrable function \(g:{\mathbb R}^n\rightarrow {\mathbb R}\) (see [24, Proposition 1]). Let f denote the density of \(\mu \) on K. We write

$$\begin{aligned} \int _K\frac{1}{(1-\Vert x\Vert _K)^{1/2}}d\mu (x)&=\int _{{\mathbb R}^n}\frac{f(x)}{(1-\Vert x\Vert _K)^{1/2}}{\textbf{1}}_K(x)\,dx\\&= n|K|\int _0^{\infty }r^{n-1}\int _{\partial (K)}\frac{f(ry)}{(1-\Vert ry\Vert _K)^{1/2}}{\textbf{1}}_K(ry)\,d\nu _K(y)\,dr\\&=n|K|\int _0^1\frac{r^{n-1}}{\sqrt{1-r}}\int _{\partial (K)}f(ry)\,d\nu _K(y)\,dr\\&\leqslant n|K|\Vert f\Vert _{\infty }\int _0^1\frac{r^{n-1}}{\sqrt{1-r}}\,dr \\&=n|K|B(n,1/2)\Vert f\Vert _{\infty }\leqslant c\sqrt{n}\Vert f\Vert _{\infty }<+\infty , \end{aligned}$$

and the proof is complete.\(\square \)

In the case of the uniform measure \(\mu =\mu _K\) on a centered convex body K of volume 1 in \({\mathbb R}^n\) we see that

$$\begin{aligned} \int _K\big (\Lambda _{\mu _K}^{*}(x)/2n\big )^pdx\leqslant (c_1p)^p\int _Ke^{\frac{\Lambda _{\mu _K}^{*}(x)}{2n}}dx \leqslant (c_2p)^p\sqrt{n}, \end{aligned}$$

where \(c_1,c_2>0\) are absolute constants. This gives the following estimate for the moments of \(\Lambda _{\mu _K}^{*}\):

$$\begin{aligned} \Vert \Lambda _{\mu _K}^{*}\Vert _{L^p(\mu _K)}\leqslant cpn^{1+\frac{1}{2p}} \end{aligned}$$

for all \(p\geqslant 1\). However, essentially repeating the argument that we used for Theorem 4.1 we may obtain sharp estimates in the most interesting cases \(p=1\) and \(p=2\). We need the next lemma.

Lemma 4.3

Let \(H_n=1+\frac{1}{2}+\cdots +\frac{1}{n}\). Then,

$$\begin{aligned} \int _0^1 r^{n-1}\ln (1-r)\,dr =-\frac{1}{n}H_n \end{aligned}$$

and

$$\begin{aligned} \int _0^1 r^{n-1}\ln ^2(1-r)\,dr=\frac{1}{n}H_n^2+\frac{1}{n}\sum _{k=1}^n\frac{1}{k^2}. \end{aligned}$$

Proof

We consider the beta integral

$$\begin{aligned} B(x,y)=\int _0^1 r^{x-1}(1-r)^{y-1}\, dr \end{aligned}$$

and differentiate it with respect to y. Then, the desired integrals are equal to

$$\begin{aligned} \frac{\partial }{\partial y}B(x,y)\Big |_{y=1}\qquad \hbox {and}\qquad \frac{\partial ^2}{\partial y^2}B(x,y)\Big |_{y=1}. \end{aligned}$$

We have

$$\begin{aligned} \frac{\partial }{\partial y} B(x, y) = B(x, y) \left( \frac{\Gamma '(y)}{\Gamma (y)} - \frac{\Gamma '(x + y)}{\Gamma (x + y)} \right) = B(x, y) \big (\psi (y) - \psi (x + y)\big ), \end{aligned}$$

where \(\psi (y)=\frac{\Gamma '(y)}{\Gamma (y)}\) is the digamma function. Moreover,

$$\begin{aligned} \frac{\partial ^2}{\partial y^2} B(x, y)= B(x, y) \big ((\psi (y) - \psi (x + y))^2+(\psi ^{\prime }(y) - \psi ^{\prime }(x + y))\big ). \end{aligned}$$

Recall (see e.g. [1]) that

$$\begin{aligned} \psi (n+1)-\psi (1)=H_n:=\sum _{k=1}^n\frac{1}{k} \qquad \hbox {and}\qquad \psi ^{\prime }(n)=\sum _{k=n}^{\infty }\frac{1}{k^2} \end{aligned}$$

for all \(n\geqslant 1\). Therefore,

$$\begin{aligned} \int _0^1 r^{n-1}\ln (1-r)\,dr =B(n, 1) \big (\psi (1) - \psi (n + 1)\big )&=-\frac{1}{n}H_n, \end{aligned}$$

and

$$\begin{aligned} \int _0^1 r^{n-1}\ln ^2(1-r)\,dr&=B(n, 1) \big ((\psi (1) - \psi (n + 1))^2+(\psi '(1) - \psi '(n + 1))\big )\\&=\frac{1}{n}H_n^2+\frac{1}{n}\sum _{k=1}^n\frac{1}{k^2}, \end{aligned}$$

as claimed. \(\square \)
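Both identities can also be verified numerically (an illustrative quadrature sketch; the substitution \(r=1-e^{-s}\) removes the logarithmic singularity at \(r=1\), and the case \(n=1\) of the second identity reduces to \(\int _0^1\ln ^2(1-r)\,dr=\Gamma (3)=2=H_1^2+H_1^{(2)}\), which confirms the plus sign in front of the last term):

```python
import math

def beta_log_integral(n, power):
    # \int_0^1 r^{n-1} ln(1-r)^power dr via the substitution r = 1 - e^{-s},
    # which turns it into \int_0^infty (1-e^{-s})^{n-1} (-s)^power e^{-s} ds
    S, steps = 60.0, 200_000
    h = S / steps
    total = 0.0
    for i in range(steps + 1):
        s = i * h
        w = 0.5 if i in (0, steps) else 1.0    # trapezoid weights
        total += w * (1.0 - math.exp(-s)) ** (n - 1) * (-s) ** power * math.exp(-s)
    return total * h

def H(n, p=1):
    # generalized harmonic number H_n^{(p)}
    return sum(1.0 / k ** p for k in range(1, n + 1))

n = 5
I1, I2 = beta_log_integral(n, 1), beta_log_integral(n, 2)
print(I1, -H(n) / n)                  # both approx -0.45667
print(I2, (H(n) ** 2 + H(n, 2)) / n)  # both approx 1.33544
```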

Theorem 4.4

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\), \(n\geqslant 2\). Let \(\kappa \in (0,1/n]\) and let \(\mu \) be a centered \(\kappa \)-concave probability measure with \(\textrm{supp}(\mu )=K\). Then,

$$\begin{aligned} {\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*})\leqslant \big ({\mathbb {E}}_{\mu }[(\Lambda _{\mu }^{*})^2]\big )^{1/2}\leqslant \frac{c\ln n}{\kappa }\Vert f\Vert _{\infty }^{1/2}, \end{aligned}$$

where \(c>0\) is an absolute constant and f is the density of \(\mu \).

Proof

Following the proof of Theorem 4.1 we write

$$\begin{aligned} \int _K\big (\Lambda _{\mu }^{*}(x)\big )^2\,d\mu (x)\leqslant \int _K\ln ^2\left( \frac{e^2}{\kappa }\frac{1}{(1-\Vert x\Vert _K)^{1/\kappa }}\right) d\mu (x). \end{aligned}$$

If f is the density of \(\mu \) on K and \(\nu _K\) is the cone measure of K, using the inequality \(\ln ^2(ab)\leqslant 2(\ln ^2a+\ln ^2b)\) where \(a,b>0\), we may write

$$\begin{aligned}&\frac{1}{2}\int _K\ln ^2\left( \frac{e^2}{\kappa }\frac{1}{(1-\Vert x\Vert _K)^{1/\kappa }}\right) d\mu (x) -\ln ^2\left( \frac{e^2}{\kappa }\right) \\&\quad \leqslant \int _{{\mathbb R}^n}f(x)\ln ^2\left( \frac{1}{(1-\Vert x\Vert _K)^{1/\kappa }}\right) {\textbf{1}}_K(x)\,dx\\&\quad = n|K|\int _0^{\infty }r^{n-1}\int _{\partial (K)}f(ry)\ln ^2\left( \frac{1}{(1-\Vert ry\Vert _K)^{1/\kappa }}\right) {\textbf{1}}_K(ry)\,d\nu _K(y)\,dr\\&\quad =\frac{n}{\kappa ^2}\int _0^1r^{n-1}\ln ^2(1-r)\int _{\partial (K)}f(ry)\,d\nu _K(y)\,dr\\&\quad \leqslant \frac{n}{\kappa ^2}\Vert f\Vert _{\infty }\int _0^1r^{n-1}\ln ^2(1-r)\,dr. \end{aligned}$$

Since \(\Vert f\Vert _{\infty }\geqslant \int _Kf(x)\,dx=1\) and \(\sum _{k=1}^n\frac{1}{k^2}\leqslant H_n\leqslant H_n^2\), using also Lemma 4.3 we get

$$\begin{aligned} \int _K\big (\Lambda _{\mu }^{*}(x)\big )^2\,d\mu (x)&\leqslant \frac{2n}{\kappa ^2}\left( \frac{1}{n}H_n^2+\frac{1}{n}\sum _{k=1}^n\frac{1}{k^2}\right) \Vert f\Vert _{\infty } +2\ln ^2\left( \frac{e^2}{\kappa }\right) \\&\leqslant \left( \frac{4H_n^2}{\kappa ^2}+2\ln ^2(e^2/\kappa )\right) \Vert f\Vert _{\infty }\leqslant \frac{c_1\ln ^2n}{\kappa ^2}\Vert f\Vert _{\infty }, \end{aligned}$$

where \(c_1>0\) is an absolute constant. This completes the proof.\(\square \)

Our next result concerns the one-dimensional case. Let \(\mu \) be a centered probability measure on \({\mathbb R}\) which is absolutely continuous with respect to Lebesgue measure and consider a random variable X, on some probability space \((\varOmega ,\mathcal {F},P),\) with distribution \(\mu \), i.e., \(\mu (B):=P(X\in B),\ B\in \mathcal {B}(\mathbb {R})\). We define

$$\begin{aligned}{} & {} \alpha _+ =\alpha _+ (\mu ):=\sup \left\{ x\in \mathbb {R}:\mu ([x,\infty ))>0\right\} \\{} & {} \quad \hbox {and}\quad \alpha _- =\alpha _- (\mu ):=\sup \left\{ x\in \mathbb {R}:\mu ((-\infty ,-x])>0\right\} . \end{aligned}$$

Thus, \(-\alpha _-,\alpha _+\) are the endpoints of the support of \(\mu \). Note that we may have \(\alpha _{\pm } =+\infty \). We define \(I_{\mu }=(-\alpha _-,\alpha _+ )\). Recall that

$$\begin{aligned} \Lambda _{\mu }^{*}(x):=\sup \{tx-\Lambda _{\mu } (t):t\in \mathbb {R}\},\qquad x\in \mathbb {R}. \end{aligned}$$

In fact, since \(tx-\Lambda _{\mu }(t)\leqslant 0\) for \(t<0\) when \(x\in [0,\alpha _+ )\), we have that \(\Lambda _{\mu }^{*}(x)=\sup \{tx-\Lambda _{\mu } (t):t\geqslant 0\}\) in this case, and similarly \(\Lambda _{\mu }^{*}(x)=\sup \{tx-\Lambda _{\mu } (t):t\leqslant 0\}\) when \(x\in (-\alpha _-,0]\). One can also check that \(\Lambda _{\mu }^{*}(\alpha _{\pm } )=+\infty \); see [18, Lemma 2.8] for the case \(\alpha _{\pm }<+\infty \). In the case \(\alpha _{\pm }=\pm \infty \), the convexity and monotonicity properties of \(\Lambda _{\mu }^{*}\) imply again that \(\lim _{x\rightarrow \pm \infty }\Lambda _{\mu }^{*}(x)=+\infty \).
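To make the one-dimensional picture concrete (a sketch for one explicit measure, not used in the sequel): for the centered exponential law, i.e. \(X=E-1\) with E standard exponential, one has \(\alpha _-=1\), \(\alpha _+=+\infty \), \(\Lambda _{\mu }(t)=-t-\ln (1-t)\) for \(t<1\), and optimizing at \(t=x/(1+x)\) gives \(\Lambda _{\mu }^{*}(x)=x-\ln (1+x)\) on \(I_{\mu }=(-1,\infty )\), which indeed blows up at the finite endpoint.

```python
import math

def Lambda(t):
    # log-Laplace transform of X = E - 1, E standard exponential (t < 1)
    return -t - math.log(1.0 - t)

def Lambda_star_numeric(x, tmin=-60.0, tmax=1.0 - 1e-9, steps=200_000):
    # grid approximation of Lambda*(x) = sup_t { t x - Lambda(t) }
    best = -math.inf
    for i in range(steps + 1):
        t = tmin + (tmax - tmin) * i / steps
        best = max(best, t * x - Lambda(t))
    return best

def Lambda_star_closed(x):
    # the supremum is attained at t = x/(1+x), giving x - ln(1+x) for x > -1
    return x - math.log(1.0 + x)

pairs = [(Lambda_star_numeric(x), Lambda_star_closed(x))
         for x in (-0.9, -0.5, 0.0, 0.5, 2.0)]
print(pairs)  # numeric and closed-form values agree
```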

Proposition 4.5

Let \(\mu \) be a centered probability measure on \({\mathbb R}\) which is absolutely continuous with respect to Lebesgue measure. Then,

$$\begin{aligned} \int _{I_{\mu }}e^{\Lambda _{\mu }^{*}(x)/2}d\mu (x)\leqslant 4, \end{aligned}$$

where \(I_{\mu }=(-\alpha _-,\alpha _+)\) is the interior of \(\textrm{supp}(\mu )\). In particular, for all \(p\geqslant 1\) we have that

$$\begin{aligned} \int _{I_{\mu }}(\Lambda _{\mu }^{*}(x))^p\,d\mu (x)<+\infty . \end{aligned}$$

Proof

Let \(F(x)=\mu ((-\infty ,x])\). For any \(x\in I_{\mu }\) we have

$$\begin{aligned} \min \{F(x),1-F(x)\}=\varphi _{\mu }(x)\leqslant e^{-\Lambda _{\mu }^{*}(x)}. \end{aligned}$$

It follows that

$$\begin{aligned} \int _{I_{\mu }}e^{\Lambda _{\mu }^{*}(x)/2}d\mu (x)&\leqslant \int _{I_{\mu }}\frac{1}{\sqrt{\min \{F(x),1-F(x)\}}}f(x)\,dx\nonumber \\&\leqslant \int _{I_{\mu }}\frac{1}{\sqrt{F(x)}}f(x)\,dx +\int _{I_{\mu }}\frac{1}{\sqrt{1-F(x)}}f(x)\,dx. \end{aligned}$$
(4.2)

Write f for the density of \(\mu \) with respect to Lebesgue measure. Then, \((1-F)^{\prime }(x)=-f(x)\), which implies that

$$\begin{aligned} \int _0^{\alpha _+ }\frac{1}{\sqrt{1-F(x)}}f(x)\,dx\leqslant -\int _0^{\alpha _+ } \frac{1}{\sqrt{1-F(x)}}(1-F)^{\prime }(x)dx\\ =-2\sqrt{1-F(x)}\Big |_0^{\alpha _+ }=2\sqrt{1-F(0)} \end{aligned}$$

since \(F(\alpha _+ )=1\). In the same way we check that

$$\begin{aligned} \int _{-\alpha _-}^0\frac{1}{\sqrt{1-F(x)}}f(x)\,dx\leqslant & {} -\int _{-\alpha _-}^0 \frac{1}{\sqrt{1-F(x)}}(1-F)^{\prime }(x)dx\\= & {} -2\sqrt{1-F(x)}\Big |_{-\alpha _-}^0=2-2\sqrt{1-F(0)}. \end{aligned}$$

This shows that

$$\begin{aligned} \int _{I_{\mu }}\frac{1}{\sqrt{1-F(x)}}f(x)\,dx\leqslant 2. \end{aligned}$$

In a similar way we obtain the same upper bound for the second summand in (4.2) and the result follows. \(\square \)
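On one explicit measure the integral in Proposition 4.5 can be computed exactly (a sketch assuming \(\mu \) is the standard Gaussian on \({\mathbb R}\), for which \(\Lambda _{\mu }^{*}(x)=x^2/2\), so that \(\int e^{\Lambda _{\mu }^{*}/2}\,d\mu =\sqrt{2}\), comfortably below the bound 4):

```python
import math

def integrand(x):
    # e^{Lambda*(x)/2} f(x) for the standard Gaussian on R:
    # Lambda*(x) = x^2/2 and f(x) = e^{-x^2/2}/sqrt(2 pi),
    # so the product simplifies to e^{-x^2/4}/sqrt(2 pi)
    return math.exp(-x * x / 4.0) / math.sqrt(2.0 * math.pi)

# composite trapezoid rule; the integrand is negligible beyond |x| = 30
a, b, steps = -30.0, 30.0, 200_000
h = (b - a) / steps
total = h * sum((0.5 if i in (0, steps) else 1.0) * integrand(a + i * h)
                for i in range(steps + 1))
print(total)  # approx sqrt(2) = 1.41421..., well below 4
```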

Proposition 4.5 can be extended to products. Let \(\mu _i\), \(1\leqslant i\leqslant n\) be centered probability measures on \({\mathbb R}\), all of them absolutely continuous with respect to Lebesgue measure. If \(\overline{\mu }=\mu _1\otimes \cdots \otimes \mu _n\) then \(I_{\overline{\mu }}=\prod _{i=1}^nI_{\mu _i}\) and we can easily check that

$$\begin{aligned} \Lambda _{\overline{\mu }}^{*}(x)=\sum _{i=1}^n\Lambda _{\mu _i}^{*}(x_i) \end{aligned}$$

for all \(x=(x_1,\ldots ,x_n)\in I_{\overline{\mu }}\), which implies that

$$\begin{aligned} \int _{I_{\overline{\mu }}}e^{\Lambda _{\overline{\mu }}^{*}(x)/2}d\overline{\mu }(x) =\prod _{i=1}^n\left( \int _{I_{\mu _i}}e^{\Lambda _{\mu _i}^{*}(x_i)/2}d\mu _i(x_i)\right) \leqslant 4^n. \end{aligned}$$

In particular, for all \(p\geqslant 1\) we have that

$$\begin{aligned} \int _{I_{\overline{\mu }}}(\Lambda _{\overline{\mu }}^{*}(x))^p\,d\overline{\mu }(x)<+\infty . \end{aligned}$$

We close this section with one more case where we can establish that \(\Lambda _{\mu }^{*}\) has finite moments of all orders. We consider an arbitrary centered log-concave probability measure on \({\mathbb R}^n\) but we have to impose some conditions on the growth of its one-sided \(L_t\)-centroid bodies \(Z_t^+(\mu )\). Recall that for every \(t\geqslant 1\), the one-sided \(L_t\)-centroid body \(Z_t^+(\mu )\) of \(\mu \) is the convex body with support function

$$\begin{aligned} h_{Z_t^+(\mu )}(y)=\left( 2\int _{{\mathbb R}^n}\langle x,y\rangle _+^tf_{\mu }(x)dx\right) ^{1/t}, \end{aligned}$$

where \(a_+=\max \{a,0\}\). If \(\mu \) is isotropic then \(Z_2^+(\mu )\supseteq cB_2^n\) for an absolute constant \(c>0\). One can also check that if \(1\leqslant t<s\) then

$$\begin{aligned} \left( \frac{2}{e}\right) ^{\frac{1}{t}-\frac{1}{s}}Z_t^+(\mu )\subseteq Z_s^+(\mu ) \subseteq c_1\left( \frac{2e-2}{e}\right) ^{\frac{1}{t}-\frac{1}{s}}\frac{s}{t}Z_t^+(\mu ). \end{aligned}$$

For a proof of these claims see [19]. The condition we need is that the family of the one-sided \(L_t\)-centroid bodies grows with some mild rate as \(t\rightarrow \infty \) (note that the assumption in the next proposition can be satisfied only for log-concave probability measures \(\mu \) with support \(\textrm{supp}(\mu )={\mathbb R}^n\)).
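For instance (a sketch under the assumption that \(\mu =\gamma _n\) is the standard Gaussian measure, which is rotationally invariant, so that \(Z_t^+(\gamma _n)=r(t)B_2^n\) for a scalar r(t)): a direct computation with the one-dimensional moments \({\mathbb E}\,g_+^t=2^{t/2-1}\Gamma ((t+1)/2)/\sqrt{\pi }\) gives \(r(t)\asymp \sqrt{t}\), so \(g(t)=r(t)/r(2)\) grows much faster than \(\ln (t+1)\) and the growth condition is satisfied.

```python
import math

def radius(t):
    # radius of Z_t^+(gamma_n) for the standard Gaussian, a Euclidean ball
    # by rotational invariance:
    # r(t) = (2 E <x,theta>_+^t)^{1/t} = (2^{t/2} Gamma((t+1)/2)/sqrt(pi))^{1/t}
    log_r = ((t / 2.0) * math.log(2.0) + math.lgamma((t + 1.0) / 2.0)
             - 0.5 * math.log(math.pi)) / t
    return math.exp(log_r)

print(radius(2.0))  # 1.0: Z_2^+(gamma_n) is the Euclidean unit ball
for t in (4.0, 16.0, 64.0, 256.0):
    print(t, radius(t) / math.sqrt(t))  # ratio settles near 1/sqrt(e) ~ 0.61
```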

Proposition 4.6

Let \(\mu \) be a centered log-concave probability measure on \({\mathbb R}^n\). Assume that there exists an increasing function \(g:[1,\infty )\rightarrow [1,\infty )\) with \(\lim _{t\rightarrow \infty }g(t)/\ln (t+1)=+\infty \) such that \(Z_t^+(\mu )\supseteq g(t)Z_2^+(\mu )\) for all \(t\geqslant 2\). Then,

$$\begin{aligned} \int _{{\mathbb R}^n}|\Lambda _{\mu }^{*}(x)|^pd\mu (x)<+\infty \end{aligned}$$

for every \(p\geqslant 1\).

Proof

We use the following fact, proved in [9, Lemma 4.3]: If \(t\geqslant 1\) then for every \(x\in \tfrac{1}{2}Z_t^+(\mu )\) we have

$$\begin{aligned} \varphi _{\mu }(x)\geqslant e^{-c_1t}, \end{aligned}$$

where \(c_1>1\) is an absolute constant. Since \(\Lambda _{\mu }^{*}(x)\leqslant \ln \frac{1}{\varphi _{\mu }(x)}\), this shows that \(\Lambda _{\mu }^{*}(x)\leqslant c_1t\) for all \(x\in \tfrac{1}{2}Z_t^+(\mu )\). In other words,

$$\begin{aligned} \frac{1}{2}Z_{t/c_1}^+(\mu )\subseteq B_t(\mu ),\qquad t\geqslant c_1. \end{aligned}$$
(4.3)

Since \(\lim _{t\rightarrow \infty }g(t)=+\infty \), there exists \(t_0\geqslant c_1\) such that \(\mu \left( \frac{g(t_0/c_1)}{2}Z_2^+(\mu )\right) \geqslant 2/3\). From Borell’s lemma [10, Lemma 2.4.5] we know that, for all \(t\geqslant t_0\),

$$\begin{aligned} 1-\mu \left( \frac{g(t/c_1)}{2}Z_2^+(\mu )\right) \leqslant e^{-c_2g(t/c_1)/g(t_0/c_1)}, \end{aligned}$$

where \(c_2>0\) is an absolute constant. We write

$$\begin{aligned}{} & {} \int _{{\mathbb R}^n}|\Lambda _{\mu }^{*}(x)|^pd\mu (x) \\{} & {} \quad =\int _0^{\infty }pt^{p-1}\mu (\{x:\Lambda _{\mu }^{*}(x)\geqslant t\})\,dt =p\int _0^{\infty }t^{p-1}(1-\mu (B_t(\mu )))\,dt. \end{aligned}$$

From (4.3) it follows that

$$\begin{aligned} 1-\mu (B_t(\mu ))\leqslant 1-\mu \left( \frac{1}{2}Z_{t/c_1}^+(\mu )\right) \leqslant 1-\mu \left( \frac{g(t/c_1)}{2}Z_2^+(\mu )\right) \leqslant e^{-c_2g(t/c_1)/g(t_0/c_1)} \end{aligned}$$

for all \(t\geqslant t_0\). Since \(\lim _{t\rightarrow \infty }g(t)/\ln (t+1)=+\infty \), there exists \(t_p\geqslant t_0\) such that

$$\begin{aligned} (p-1)\ln (t)\leqslant \frac{c_2}{2g(t_0/c_1)}g(t/c_1) \end{aligned}$$

for all \(t\geqslant t_p\). Assume that \(p>2\). Then, from the previous observations we get

$$\begin{aligned} p\int _{t_p}^{\infty }t^{p-1}(1-\mu (B_t(\mu )))\,dt&\leqslant p\int _{t_p}^{\infty }t^{p-1}\left( 1-\mu \left( \tfrac{g(t/c_1)}{2}Z_2^+(\mu )\right) \right) \,dt\\&\leqslant p\int _{t_p}^{\infty }t^{p-1}t^{-2(p-1)}\,dt= p\int _{t_p}^{\infty }t^{-(p-1)}\,dt <\infty . \end{aligned}$$

This proves the result for \(p>2\) and then from Hölder’s inequality it is clear that the assertion of the proposition is also true for all \(p\geqslant 1\). \(\square \)

Note

It is not hard to construct examples of log-concave probability measures \(\mu \) with \(\textrm{supp}(\mu )={\mathbb R}^n\), even on the real line, for which the assumption of Proposition 4.6 is not satisfied. Consider for example a measure \(\mu \) on \({\mathbb R}\) with density \(f(x)=c\cdot \exp (-p(x))\), where p is an even convex function rapidly increasing to infinity, e.g. \(p(t)=e^{t^2}\).

However, this does not exclude the possibility that for every centered log-concave probability measure \(\mu \) on \({\mathbb R}^n\) the function \(\Lambda _{\mu }^{*}\) has finite second or higher moments.

5 Threshold for the measure: the approach and examples

For any log-concave probability measure \(\mu \) on \({\mathbb R}^n\) we define the parameter

$$\begin{aligned} \beta (\mu )=\frac{\textrm{Var}_{\mu }(\Lambda _{\mu }^{*})}{({\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*}))^2} \end{aligned}$$
(5.1)

provided that

$$\begin{aligned} \Vert \Lambda _{\mu }^{*}\Vert _{L^2(\mu )}=\big ({\mathbb {E}}_{\mu }[(\Lambda _{\mu }^{*})^2]\big )^{1/2}<\infty . \end{aligned}$$
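To get a feeling for \(\beta (\mu )\) (an illustrative sketch for the standard Gaussian on \({\mathbb R}^n\), which is log-concave and for which \(\Lambda _{\mu }^{*}(x)=|x|^2/2\) is explicit): then \({\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*})=n/2\), \(\textrm{Var}_{\mu }(\Lambda _{\mu }^{*})=n/2\), hence \(\beta (\mu )=2/n\rightarrow 0\), the kind of decay that makes the lower bounds of this section effective.

```python
import random

random.seed(1)

def beta_gaussian_mc(n, samples=100_000):
    # Monte Carlo estimate of beta(mu) = Var(Lambda*)/(E Lambda*)^2 for the
    # standard Gaussian on R^n, where Lambda*(x) = |x|^2/2, so that
    # E(Lambda*) = n/2, Var(Lambda*) = n/2 and beta = 2/n exactly
    vals = []
    for _ in range(samples):
        vals.append(0.5 * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)))
    m = sum(vals) / samples
    var = sum((v - m) ** 2 for v in vals) / samples
    return var / (m * m)

n = 10
b = beta_gaussian_mc(n)
print(b)  # close to 2/n = 0.2
```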

One of the main results in [9] states that if \(\mu \) is a log-concave probability measure on \({\mathbb R}^n\) then

$$\begin{aligned} \int _{\mathbb {R}^n}\varphi _{\mu }(x)\,d\mu (x) \leqslant \exp \left( -cn/L_{\mu }^2\right) , \end{aligned}$$

where \(c>0\) is an absolute constant. In fact, the proof of this estimate starts with Lemma 3.1 and follows from the next stronger result: If \(n\geqslant n_0\) then

$$\begin{aligned} \int _{{\mathbb R}^n}\exp (-\Lambda _{\mu }^{*}(x))\,d\mu (x) \leqslant \exp \left( -cn/L_{\mu }^2\right) \end{aligned}$$

where \(L_{\mu }\) is the isotropic constant of \(\mu \) and \(c>0\), \(n_0\in {\mathbb N}\) are absolute constants. Then, Jensen’s inequality implies that

$$\begin{aligned} e^{-{\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*})}\leqslant \int _{{\mathbb R}^n}\exp (-\Lambda _{\mu }^{*}(x))\,d\mu (x) \leqslant \exp \left( -cn/L_{\mu }^2\right) . \end{aligned}$$

We will need this lower bound for \({\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*})\).

Lemma 5.1

Let \(\mu \) be a log-concave probability measure on \({\mathbb R}^n\), \(n\geqslant n_0\). Then,

$$\begin{aligned} {\mathbb {E}}_{\mu }(\Lambda _{\mu }^{*})\geqslant cn/L_{\mu }^2, \end{aligned}$$

where \(L_{\mu }\) is the isotropic constant of \(\mu \) and \(c>0\), \(n_0\in {\mathbb N}\) are absolute constants.

We will also need a number of observations in the case \(\mu =\mu _K\) where K is a centered convex body of volume 1 in \({\mathbb R}^n\). The next lemma provides a lower bound for \(\textrm{Var}(\Lambda _{\mu _K}^{*})\).

Lemma 5.2

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Then,

$$\begin{aligned} \textrm{Var}(\Lambda _{\mu _K}^{*})\geqslant c/L_{\mu _K}^4, \end{aligned}$$

where \(c>0\) is an absolute constant.

Proof

Borell has proved in [5, Theorem 1] that if T is a convex body in \({\mathbb R}^n\) and f is a non-negative, bounded and convex function on T, not identically zero and with \(\min (f)=0\), then the function

$$\begin{aligned} \Phi _f(p)=\ln \big [(n+p)\Vert f\Vert _p^p\big ] \end{aligned}$$

is convex on \([0,\infty )\). Consider a centered convex body K of volume 1 in \({\mathbb R}^n\). Applying Borell’s theorem for the function \(\Lambda _{\mu _K}^{*}\) on rK, \(r\in (0,1)\) and the triple \(p=0,1\) and 2, and finally letting \(r\rightarrow 1^-\), we see that

$$\begin{aligned} (n+1)^2\Vert \Lambda _{\mu _K}^{*}\Vert _{L^1(\mu _K)}^2\leqslant n(n+2)\Vert \Lambda _{\mu _K}^{*}\Vert _{L^2(\mu _K)}^2, \end{aligned}$$

which implies that

$$\begin{aligned} \textrm{Var}(\Lambda _{\mu _K}^{*})\geqslant \frac{1}{n(n+2)}\Vert \Lambda _{\mu _K}^{*}\Vert _{L^1(\mu _K)}^2. \end{aligned}$$

Then, taking into account Lemma 5.1 we obtain the result. \(\square \)

Recall the definition of \(\omega _{\mu _K}=\ln (1/\varphi _{\mu _K})\) in (3.4) and consider the parameter

$$\begin{aligned} \tau (\mu _K)=\frac{\textrm{Var}_{\mu _K }(\omega _{\mu _K})}{({\mathbb {E}}_{\mu _K }(\omega _{\mu _K}))^2}. \end{aligned}$$
(5.2)

The next lemma shows that we can estimate \(\beta (\mu _K)\) if we can compute \(\tau (\mu _K)\).

Lemma 5.3

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\). Then,

$$\begin{aligned} \beta (\mu _K)=\left( \tau (\mu _K)+O(L_{\mu _K}^2/\sqrt{n})\right) \left( 1+O(L_{\mu _K}^2/\sqrt{n})\right) . \end{aligned}$$

Proof

From Corollary 3.5 we know that if K is a centered convex body of volume 1 in \({\mathbb R}^n\) then for every \(x\in \textrm{int}(K)\) we have that \(\omega _{\mu _K} (x)-5\sqrt{n}\leqslant \Lambda _{\mu _K}^{*}(x)\leqslant \omega _{\mu _K}(x)\). Writing \(\Lambda _{\mu _K}^{*}=\omega _{\mu _K}+h\) where \(\Vert h\Vert _{\infty }\leqslant 5\sqrt{n}\) we easily see that

$$\begin{aligned} \textrm{Var}_{\mu _K }(\Lambda _{\mu _K}^{*})= \textrm{Var}_{\mu _K }(\omega _{\mu _K})+O(\sqrt{n}{\mathbb {E}}_{\mu _K }(\Lambda _{\mu _K }^{*})) \end{aligned}$$

where \(X=O(Y)\) means that \(|X|\leqslant cY\) for an absolute constant \(c>0\). Lemma 5.1 and the fact that \({\mathbb {E}}_{\mu _K}(\omega _{\mu _K})={\mathbb {E}}_{\mu _K}(\Lambda _{\mu _K }^{*})+O(\sqrt{n})\) imply that

$$\begin{aligned} \frac{{\mathbb {E}}_{\mu _K}(\omega _{\mu _K})}{{\mathbb {E}}_{\mu _K}(\Lambda _{\mu _K }^{*})}=1+O(L_{\mu _K}^2/\sqrt{n}). \end{aligned}$$

Taking also into account the fact that \(L_{\mu _K}^2/\sqrt{n}=O((\ln n)^8/\sqrt{n})=o(1)\) we get

$$\begin{aligned} {\mathbb {E}}_{\mu _K}(\omega _{\mu _K})\approx {\mathbb {E}}_{\mu _K}(\Lambda _{\mu _K }^{*}). \end{aligned}$$

Combining the above we see that

$$\begin{aligned} \beta (\mu _K)&= \frac{\textrm{Var}_{\mu _K }(\Lambda _{\mu _K}^{*})}{({\mathbb {E}}_{\mu _K}(\Lambda _{\mu _K }^{*}))^2} =\frac{\textrm{Var}_{\mu _K }(\omega _{\mu _K})+O(\sqrt{n}{\mathbb {E}}_{\mu _K }(\Lambda _{\mu _K }^{*}))}{({\mathbb {E}}_{\mu _K}(\omega _{\mu _K}))^2} \left( \frac{{\mathbb {E}}_{\mu _K}(\omega _{\mu _K})}{{\mathbb {E}}_{\mu _K}(\Lambda _{\mu _K }^{*})}\right) ^2\\&= \left( \frac{\textrm{Var}_{\mu _K }(\omega _{\mu _K})}{({\mathbb {E}}_{\mu _K}(\omega _{\mu _K }))^2}+O\left( L_{\mu _K}^2/\sqrt{n}\right) \right) \left( 1+O\left( L_{\mu _K}^2/\sqrt{n}\right) \right) ^2\\&=\left( \tau (\mu _K)+O\left( L_{\mu _K}^2/\sqrt{n}\right) \right) \left( 1+O\left( L_{\mu _K}^2/\sqrt{n}\right) \right) , \end{aligned}$$

as claimed. \(\square \)

Recall that \(B_t(\mu )=\{x\in {\mathbb R}^n:\Lambda ^{*}_{\mu }(x)\leqslant t\}\), where \(\Lambda ^{*}_{\mu }\) is the Cramér transform of \(\mu \). A version of the next lemma appeared originally in [14].

Lemma 5.4

Let \(t>0\). For every \(N>n\) we have

$$\begin{aligned} {\mathbb {E}}_{\mu ^N}(\mu (K_N))\leqslant \mu (B_t(\mu ))+ N\exp (-t). \end{aligned}$$
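To see the quantity \({\mathbb {E}}_{\mu ^N}(\mu (K_N))\) concretely (an illustrative planar Monte Carlo sketch with \(\mu \) the uniform measure on the unit square, so that \(\mu (K_N)\) is simply the area of the hull; the hull is computed with Andrew's monotone chain and its area with the shoelace formula):

```python
import random

random.seed(2)

def hull_area(pts):
    # convex hull via Andrew's monotone chain, area via the shoelace formula
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(list(reversed(pts)))[:-1]
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def expected_measure(N, trials=200):
    # Monte Carlo estimate of E_{mu^N}[mu(K_N)] for mu uniform on [0,1]^2
    # (for this mu, the mu-measure of K_N is just its area)
    return sum(hull_area([(random.random(), random.random()) for _ in range(N)])
               for _ in range(trials)) / trials

a10, a200 = expected_measure(10), expected_measure(200)
print(a10, a200)  # the expected measure grows toward 1 as N increases
```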

We use the lemma in the following way. Let \(m:=\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})\). Then, for all \(\varepsilon \in (0,1)\), from Chebyshev’s inequality we have that

$$\begin{aligned} \mu (\{\Lambda _{\mu }^{*}\leqslant m -\varepsilon m \})&\leqslant \mu (\{|\Lambda _{\mu }^{*}-m |\geqslant \varepsilon m \}) \leqslant \frac{\mathbb {E}_{\mu }|\Lambda _{\mu }^{*}-m|^2}{\varepsilon ^2m^2}=\frac{\beta (\mu )}{\varepsilon ^2}. \end{aligned}$$

Equivalently,

$$\begin{aligned} \mu (B_{(1-\varepsilon )m }(\mu ))\leqslant \frac{\beta (\mu )}{\varepsilon ^2}. \end{aligned}$$

Let \(\delta \in (\beta (\mu ),1)\). We distinguish two cases:

(i) If \(\beta (\mu )<1/8\) and \(8\beta (\mu )<\delta <1\) then, choosing \(\varepsilon =\sqrt{2\beta (\mu )/\delta }\) we have that

$$\begin{aligned} \mu (B_{(1-\varepsilon )m }(\mu ))\leqslant \frac{\delta }{2}. \end{aligned}$$

Then, from Lemma 5.4 we see that

$$\begin{aligned}\sup \left\{ {\mathbb {E}}_{\mu ^N}(\mu (K_N)):N\leqslant e^{(1-2\varepsilon )m} \right\}&\leqslant \mu (B_{(1-\varepsilon )m }(\mu ))+e^{(1-2\varepsilon )m }e^{-(1-\varepsilon )m }\\&\leqslant \frac{\delta }{2}+e^{-\varepsilon m}\leqslant \delta , \end{aligned}$$

provided that \(\varepsilon m\geqslant \ln (2/\delta )\). Since \(m\geqslant c_1n/L_{\mu }^2\), this condition is satisfied if \(n/L_{\mu }^2\geqslant c_2\ln (2/\delta )\sqrt{\delta /\beta (\mu )}\). By the choice of \(\varepsilon \) we conclude that

$$\begin{aligned} \varrho _1(\mu ,\delta )\geqslant \left( 1-\sqrt{8\beta (\mu )/\delta }\right) \frac{\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})}{n}. \end{aligned}$$

(ii) If \(1/8\leqslant \beta (\mu )<1\) and \(\beta (\mu )<\delta <1\) then, choosing \(\varepsilon =\sqrt{\frac{2\beta (\mu )}{\beta (\mu )+\delta }}\) we have that

$$\begin{aligned} \mu (B_{(1-\varepsilon )m }(\mu ))\leqslant \frac{\beta (\mu )+\delta }{2}. \end{aligned}$$

Then, exactly as in (i), we see that

$$\begin{aligned} \sup \{ {\mathbb {E}}_{\mu ^N}(\mu (K_N)):N\leqslant e^{(1-\sqrt{\varepsilon } )m} \}\leqslant & {} \frac{\beta (\mu )+\delta }{2}+e^{(1-\sqrt{\varepsilon } )m }e^{-(1-\varepsilon )m }\\\leqslant & {} \frac{\beta (\mu )+\delta }{2}+e^{-(\sqrt{\varepsilon }-\varepsilon ) m}\leqslant \delta , \end{aligned}$$

provided that

$$\begin{aligned} (\sqrt{\varepsilon }-\varepsilon ) m\geqslant \ln \left( \frac{2}{\delta -\beta (\mu )}\right) . \end{aligned}$$
(5.3)

Note that \(1>\varepsilon \geqslant \sqrt{\beta (\mu )}\geqslant \frac{1}{2\sqrt{2}}\), and hence

$$\begin{aligned} \sqrt{\varepsilon }-\varepsilon =\frac{\sqrt{\varepsilon }}{(1+\sqrt{\varepsilon })(1+\varepsilon )} (1-\varepsilon ^2)\geqslant c_1^{\prime }(1-\varepsilon ^2)=c_1^{\prime }\frac{\delta -\beta (\mu )}{\delta +\beta (\mu )}\geqslant c_2^{\prime }(\delta -\beta (\mu )), \end{aligned}$$

where \(c_i^{\prime }>0\) are absolute constants. Since \(m\geqslant c_1n/L_{\mu }^2\), we see that if \(n/L_{\mu }^2\geqslant \frac{c_2}{\delta -\beta (\mu )}\ln \left( \frac{2}{\delta -\beta (\mu )}\right) \) then (5.3) is satisfied. Therefore, we conclude that

$$\begin{aligned} \varrho _1(\mu ,\delta )\geqslant \left( 1-\root 4 \of {\frac{2\beta (\mu )}{\beta (\mu )+\delta }}\right) \frac{\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})}{n}. \end{aligned}$$

We summarize the above in the next theorem.

Theorem 5.5

Let \(\mu \) be a log-concave probability measure on \({\mathbb R}^n\).

  1. (i)

    Let \(\beta (\mu )<1/8\) and \(8\beta (\mu )<\delta <1\). If \(n/L_{\mu }^2\geqslant c_2\ln (2/\delta )\sqrt{\delta /\beta (\mu )}\) then

    $$\begin{aligned} \varrho _1(\mu ,\delta )\geqslant \left( 1-\sqrt{8\beta (\mu )/\delta }\right) \frac{\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})}{n}. \end{aligned}$$
  2. (ii)

    Let \(1/8\leqslant \beta (\mu )<1\) and \(\beta (\mu )<\delta <1\). If \(n/L_{\mu }^2\geqslant \frac{c_2}{\delta -\beta (\mu )}\ln \left( \frac{2}{\delta -\beta (\mu )}\right) \) then

    $$\begin{aligned} \varrho _1(\mu ,\delta )\geqslant \left( 1-\root 4 \of {\frac{2\beta (\mu )}{\beta (\mu )+\delta }}\right) \frac{\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})}{n}. \end{aligned}$$

Remark 5.6

Paouris and Valettas have proved in [26, Theorem 5.6] that if \(\mu \) is a log-concave probability measure on \({\mathbb R}^n\) and p is a convex function on \({\mathbb R}^n\) then

$$\begin{aligned} \mu \left( \left\{ x:p(x)<M(p)-t\Vert p -M(p)\Vert _{L^1(\mu )}\right\} \right) \leqslant \frac{1}{2}\exp (-t/16) \end{aligned}$$
(5.4)

for all \(t>0\), where M(p) is a median of p with respect to \(\mu \). Consider the parameter

$$\begin{aligned} \tilde{\beta }(\mu )=\frac{\Vert \Lambda _{\mu }^{*}-{\mathbb {E}}(\Lambda _{\mu }^{*})\Vert _{L^2(\mu )}}{M(\Lambda _{\mu }^{*})}. \end{aligned}$$

Recall that \(\Lambda _{\mu }^{*}\) is convex and set \(M=M(\Lambda _{\mu }^{*})\). Since

$$\begin{aligned} \Vert \Lambda _{\mu }^{*}-M(\Lambda _{\mu }^{*})\Vert _{L^1(\mu )}\leqslant \Vert \Lambda _{\mu }^{*}-{\mathbb {E}}(\Lambda _{\mu }^{*})\Vert _{L^1(\mu )} \leqslant \Vert \Lambda _{\mu }^{*}-{\mathbb {E}}(\Lambda _{\mu }^{*})\Vert _{L^2(\mu )}, \end{aligned}$$

from (5.4) we see that

$$\begin{aligned} \mu \left( \left\{ x:\Lambda _{\mu }^{*}(x)<(1-t\tilde{\beta }(\mu ))M\right\} \right) \leqslant \frac{1}{2}\exp (-t/16) \end{aligned}$$
(5.5)

for every \(t>0\). This shows that

$$\begin{aligned} \mu (B_{(1-t\tilde{\beta }(\mu ))M}(\mu ))\leqslant \frac{\delta }{4} \end{aligned}$$

if \(t\geqslant 16\ln (2/\delta )\). Then, from Lemma 5.4 we see that

$$\begin{aligned}&\sup \left\{ {\mathbb {E}}_{\mu ^N}(\mu (K_N)):N \leqslant e^{(1-2t\tilde{\beta }(\mu ))M} \right\} \\&\qquad \leqslant \mu (B_{(1-t\tilde{\beta }(\mu ))M}(\mu ))+e^{(1-2t\tilde{\beta }(\mu ))M}e^{-(1-t\tilde{\beta }(\mu ))M}\\&\qquad = \mu (B_{(1-t\tilde{\beta }(\mu ))M}(\mu ))+e^{-t\tilde{\beta }(\mu )M}\leqslant \frac{\delta }{4}+e^{-t\tilde{\beta }(\mu )M}\leqslant \delta , \end{aligned}$$

provided that \(t\tilde{\beta }(\mu )M\geqslant \ln (2/\delta )\). Now, we restrict our attention to the case where \(\mu =\mu _K\) is the uniform measure on a centered convex body K of volume 1 in \({\mathbb R}^n\). Then, Lemma 5.2 shows that

$$\begin{aligned} \tilde{\beta }(\mu _K)M(\Lambda _{\mu _K}^{*})=\Vert \Lambda _{\mu _K}^{*}-{\mathbb {E}}(\Lambda _{\mu _K}^{*})\Vert _{L^2(\mu )}\geqslant c_1/L_{\mu _K}^2 \end{aligned}$$

where \(c_1>0\) is an absolute constant. Choosing \(t=c_2L_{\mu _K}^2\ln (2/\delta )\) we get

$$\begin{aligned} \varrho _1(\mu _K ,\delta )\geqslant \left( 1-c_3L_{\mu _K}^2\ln (2/\delta )\tilde{\beta }(\mu _K)\right) \frac{M(\Lambda _{\mu _K}^{*})}{n}. \end{aligned}$$
(5.6)

A natural question is to examine how close \({\mathbb {E}}(\Lambda _{\mu }^{*})\) and \(M(\Lambda _{\mu }^{*})\) are. This would allow us to compare (5.6) with Theorem 5.5 at least in the case of the uniform measure on a convex body. From [10, Lemma 2.4.10] we know that

$$\begin{aligned} \frac{1}{2}\Vert \Lambda _{\mu }^{*}-{\mathbb {E}}(\Lambda _{\mu }^{*})\Vert _{L^2(\mu )}\leqslant \Vert \Lambda _{\mu }^{*}-M(\Lambda _{\mu }^{*})\Vert _{L^2(\mu )} \leqslant 3\Vert \Lambda _{\mu }^{*}-{\mathbb {E}}(\Lambda _{\mu }^{*})\Vert _{L^2(\mu )} \end{aligned}$$

for any Borel probability measure \(\mu \) on \({\mathbb R}^n\). Therefore, if we assume that \(\beta (\mu _K)\leqslant \eta \) for some small enough \(\eta \in (0,1)\), we see that

$$\begin{aligned} M(\Lambda _{\mu _K}^{*})\geqslant \Vert \Lambda _{\mu _K}^{*}\Vert _2-3\sqrt{\eta }\Vert \Lambda _{\mu _K}^{*}\Vert _1\geqslant (1-3\sqrt{\eta }){\mathbb {E}}(\Lambda _{\mu _K}^{*}). \end{aligned}$$

This gives a variant of Theorem 5.5 with a much better dependence on \(\delta \).

Theorem 5.7

Let K be a centered convex body of volume 1 in \({\mathbb R}^n\) and let \(\delta \in (0,1)\) and \(\eta \in (0,1/9)\). If \(\beta (\mu _K)\leqslant \eta \) and \(c_3L_{\mu _K}^2\ln (2/\delta )\tilde{\beta }(\mu _K) <1\) then

$$\begin{aligned} \varrho _1(\mu _K ,\delta )\geqslant \left( 1-c_3L_{\mu _K}^2\ln (2/\delta )\tilde{\beta }(\mu _K)\right) (1-3\sqrt{\eta })\frac{{\mathbb {E}}(\Lambda _{\mu _K}^{*})}{n}. \end{aligned}$$

For the proof of the lower threshold we need a basic fact that plays a central role in the proofs of all the lower thresholds that have been obtained so far. It is stated in the form below in [11, Lemma 3]. For a proof see [14] or [18, Lemma 4.1].

Lemma 5.8

For every Borel subset A of \({\mathbb R}^n\) we have that

$$\begin{aligned} 1-\mu ^N(K_N\supseteq A)\leqslant 2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left( 1-\inf _{x\in A}\varphi _{\mu }(x)\right) ^{N-n}. \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathbb {E}}\,[\mu (K_N)]\geqslant \mu (A)\left( 1-2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left( 1-\inf _{x\in A}\varphi _{\mu }(x)\right) ^{N-n}\right) . \end{aligned}$$
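In the simplest case \(n=1\), with \(\mu \) the uniform measure on [0, 1], one has \(K_N=[\min _iX_i,\max _iX_i]\) and the half-space depth is \(\varphi _{\mu }(x)=\min (x,1-x)\), so Lemma 5.8 can be sanity-checked by direct simulation. The following is a small illustrative experiment (the parameter values are ad hoc and not part of the proof):

```python
import math
import random

# Dimension n = 1, mu = uniform on [0, 1]:
# K_N = [min X_i, max X_i] and phi(x) = min(x, 1 - x).
# For A = [a, 1 - a] we have inf_{x in A} phi = a, and Lemma 5.8 reads
#   1 - P(K_N contains A) <= 2 * C(N, 1) * (1 - a)^(N - 1).

random.seed(0)
N, a, trials = 30, 0.2, 100_000

misses = 0
for _ in range(trials):
    xs = [random.random() for _ in range(N)]
    if min(xs) > a or max(xs) < 1 - a:   # K_N does not contain A
        misses += 1

empirical = misses / trials
bound = 2 * math.comb(N, 1) * (1 - a) ** (N - 1)
print(empirical, bound)
assert empirical <= bound
```

Here the exact miss probability is \(2(1-a)^N-(1-2a)^N\approx 2.5\cdot 10^{-3}\), comfortably below the (rather generous) bound of the lemma.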

In order to apply Lemma 5.8, set \(m:=\mathbb {E}_{\mu }(\Lambda _{\mu }^{*})\). As before, for all \(\varepsilon \in (0,1)\), Chebyshev’s inequality gives

$$\begin{aligned} \mu (\{\Lambda _{\mu }^{*}\geqslant m +\varepsilon m \})\leqslant \mu (\{|\Lambda _{\mu }^{*}-m |\geqslant \varepsilon m \})\leqslant \frac{\beta (\mu )}{\varepsilon ^2}. \end{aligned}$$

If \(\beta (\mu )<1/2\) and \(2\beta (\mu )<\delta <1\) then, choosing \(\varepsilon =\sqrt{2\beta (\mu )/\delta }\) we have that

$$\begin{aligned} \mu (B_{(1+\varepsilon )m }(\mu ))\geqslant 1-\frac{\delta }{2}. \end{aligned}$$

Therefore, we will have that

$$\begin{aligned} \varrho _2(\mu ,\delta )\leqslant (1+2\varepsilon )m/n \end{aligned}$$

if our lower bound for \(\inf _{x\in B_{(1+\varepsilon )m}(\mu )}\varphi _{\mu }(x)\) gives

$$\begin{aligned} 2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left( 1-\inf _{x\in B_{(1+\varepsilon )m}(\mu )}\varphi _{\mu }(x)\right) ^{N-n}\leqslant \frac{\delta }{2} \end{aligned}$$
(5.7)

for all \(N\geqslant N_0:=\exp ((1+2\varepsilon )m)\). Recall that in the case of the uniform measure on a centered convex body of volume 1, Theorem 3.2 shows that

$$\begin{aligned} \inf _{x\in B_{(1+\varepsilon )m}(\mu _K)}\varphi _{\mu _K}(x)\geqslant \frac{1}{10}\exp (-(1+\varepsilon )m-2\sqrt{n}). \end{aligned}$$

We require that n and m are large enough so that \(1/2^n<\delta /2\) and \(2\sqrt{n}\leqslant \frac{\varepsilon m}{2}\). Using also the fact that \(\left( {\begin{array}{c}N\\ n\end{array}}\right) \leqslant e^{-1}\left( \frac{eN}{n}\right) ^n\) we see that (5.7) will be satisfied if we also have

$$\begin{aligned} \left( \frac{2eN}{n}\right) ^ne^{-\frac{N-n}{10}e^{-(1+3\varepsilon /2)m}}<1. \end{aligned}$$

Setting \(x:=N/n\) we see that this last is equivalent to

$$\begin{aligned} e^{(1+3\varepsilon /2)m}<\frac{x-1}{10\ln (2ex)}. \end{aligned}$$
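Both elementary estimates in this step, the bound \(\left( {\begin{array}{c}N\\ n\end{array}}\right) \leqslant e^{-1}\left( \frac{eN}{n}\right) ^n\) and the displayed condition at \(N=\exp ((1+2\varepsilon )m)\), can be checked numerically, working in log scale since the quantities themselves overflow floating point. A minimal sketch with illustrative values of n, m, \(\varepsilon \) (hypothetical, not the constants of the proof):

```python
import math

# (1) Binomial bound C(N, n) <= e^{-1} (eN/n)^n, compared in log form.
for N, n in [(50, 10), (200, 17), (1000, 30)]:
    log_binom = math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
    log_rhs = -1 + n * (1 + math.log(N / n))
    assert log_binom <= log_rhs

# (2) For N = exp((1+2*eps)m) and x = N/n, check
#     e^{(1+3*eps/2)m} < (x - 1)/(10 ln(2ex))
# in log form; since x is astronomically large, ln(x - 1) ~ ln(x).
n, m, eps = 1000, 5000, 0.1                       # hypothetical sample values
log_x = (1 + 2 * eps) * m - math.log(n)           # ln(N/n)
lhs = (1 + 1.5 * eps) * m                         # ln of the left-hand side
rhs = log_x - math.log(10 * (log_x + math.log(2 * math.e)))  # ln of right-hand side
assert lhs < rhs
```

The margin in (2) is of order \(\varepsilon m/2-\ln n-\ln (10\,(1+2\varepsilon )m)\), which is why the extra room \(\varepsilon m/2\) in the exponent suffices once n and m are large.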

One can now check that if \(N\geqslant \exp ((1+2\varepsilon )m)\) then all the restrictions are satisfied if we assume that \(n/L_{\mu _K}^2\geqslant c_2\ln (2/\delta )\sqrt{\delta /\beta (\mu _K)}\). In this way we get the following.

Theorem 5.9

Let \(\beta ,\delta >0\) with \(2\beta<\delta <1\). If K is a centered convex body of volume 1 in \({\mathbb R}^n\) with \(\beta (\mu _K )=\beta \) and \(n/L_{\mu _K}^2\geqslant c_2\ln (2/\delta )\sqrt{\delta /\beta }\) then

$$\begin{aligned} \varrho _2(\mu _K,\delta )\leqslant \left( 1+\sqrt{8\beta /\delta }\right) \frac{\mathbb {E}_{\mu _K}(\Lambda _{\mu _K}^{*})}{n}. \end{aligned}$$

An estimate analogous to the one in Theorem 5.5 (ii) is also possible but we shall not go through the details. From the discussion in this section it is clear that our approach is able to provide good bounds for the threshold \(\varrho (\mu ,\delta )\) if the parameter \(\beta (\mu )\) is small, especially if \(\beta (\mu )=o_n(1)\) as the dimension increases. We illustrate this with a number of examples.

Example 5.10

(Uniform measure on the cube) Let \(\mu _{C_n}\) be the uniform measure on the unit cube \(C_n=\left[ -\tfrac{1}{2},\tfrac{1}{2}\right] ^n\). Since \(\mu _{C_n}=\mu _{C_1}\otimes \cdots \otimes \mu _{C_1}\) we have

$$\begin{aligned} \textrm{Var}_{\mu _{C_n}}(\Lambda _{\mu _{C_n}}^{*})=n\textrm{Var}_{\mu _{C_1}}(\Lambda _{\mu _{C_1}}^{*})\qquad \hbox {and} \qquad {\mathbb {E}}_{\mu _{C_n} }(\Lambda _{\mu _{C_n} }^{*})=n{\mathbb {E}}_{\mu _{C_1} }(\Lambda _{\mu _{C_1} }^{*}). \end{aligned}$$

Therefore,

$$\begin{aligned} \beta (\mu _{C_n})=\frac{\textrm{Var}_{\mu _{C_n}}(\Lambda _{\mu _{C_n}}^{*})}{({\mathbb {E}}_{\mu _{C_n} }(\Lambda _{\mu _{C_n} }^{*}))^2} =\frac{\beta (\mu _{C_1})}{n}\longrightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \). Then, Theorems 5.5 and 5.9 show that for any \(\delta \in (0,1)\) there exists \(n_0(\delta )\) such that, for any \(n\geqslant n_0\),

$$\begin{aligned} \varrho _1(\mu _{C_n},\delta )\geqslant \left( 1-\sqrt{\frac{8\beta (\mu _{C_n})}{\delta }}\right) \frac{{\mathbb {E}}(\Lambda _{\mu _{C_n}}^{*})}{n} \geqslant \left( 1-\frac{c_1}{\sqrt{\delta n}}\right) {\mathbb {E}}(\Lambda _{\mu _{C_1}}^{*}) \end{aligned}$$

and

$$\begin{aligned} \varrho _2(\mu _{C_n},\delta )\leqslant \left( 1+\sqrt{\frac{8\beta (\mu _{C_n})}{\delta }}\right) \frac{{\mathbb {E}}(\Lambda _{\mu _{C_n}}^{*})}{n} \leqslant \left( 1+\frac{c_2}{\sqrt{\delta n}}\right) {\mathbb {E}}(\Lambda _{\mu _{C_1}}^{*}), \end{aligned}$$

which shows that

$$\begin{aligned} \varrho (\mu _{C_n},\delta )\leqslant \frac{c}{\sqrt{\delta n}}, \end{aligned}$$

where \(c>0\) is an absolute constant. This estimate provides a sharp threshold for the measure of a random polytope \(K_N\) with independent vertices uniformly distributed in \(C_n\). It gives a direct proof of the result of Dyer, Füredi and McDiarmid in [14], with a stronger estimate for the “width of the threshold”.
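The one-dimensional quantities entering this example can be approximated numerically. For the uniform measure on \([-\tfrac{1}{2},\tfrac{1}{2}]\) one has \(\Lambda _{\mu _{C_1}}(\xi )=\ln \big (\sinh (\xi /2)/(\xi /2)\big )\), and \(\Lambda _{\mu _{C_1}}^{*}(x)\) can be evaluated by maximizing the concave function \(\xi \mapsto x\xi -\Lambda _{\mu _{C_1}}(\xi )\). The sketch below (an illustration with ad hoc discretization parameters, not part of the argument) estimates \({\mathbb {E}}(\Lambda _{\mu _{C_1}}^{*})\) and \(\beta (\mu _{C_1})\); the elementary bounds \(\Lambda ^{*}(x)\geqslant 6x^2\) (from \(\Lambda (\xi )\leqslant \xi ^2/24\)) and \(\Lambda ^{*}(x)\leqslant \ln \frac{1}{1/2-|x|}\) (Chernoff) confine the expectation to \([\tfrac{1}{2},\tfrac{1}{2}+\tfrac{\ln 2}{2}]\), which the computation should reproduce.

```python
import math

def cgf(xi):
    """Lambda(xi) = ln( sinh(xi/2) / (xi/2) ) for the uniform measure on
    [-1/2, 1/2], in the overflow-safe form xi/2 + ln(1 - e^{-xi}) - ln(xi)."""
    xi = abs(xi)                       # Lambda is even
    if xi < 1e-6:
        return xi * xi / 24.0          # Taylor expansion near 0
    return xi / 2 + math.log1p(-math.exp(-xi)) - math.log(xi)

def rate(x):
    """Legendre transform Lambda*(x) = sup_xi (x*xi - Lambda(xi)) by ternary
    search; the objective is concave in xi since Lambda is convex."""
    x = abs(x)
    lo, hi = 0.0, 50.0 + 8.0 / (0.5 - x)   # optimum is near 1/(1/2 - x)
    for _ in range(100):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if x * m1 - cgf(m1) < x * m2 - cgf(m2):
            lo = m1
        else:
            hi = m2
    return x * lo - cgf(lo)

# Midpoint quadrature of E(Lambda*) and E((Lambda*)^2) over (-1/2, 1/2),
# using the symmetry of Lambda* in x.
M = 2000
h = 0.5 / M
vals = [rate((i + 0.5) * h) for i in range(M)]
E1 = 2 * h * sum(vals)
E2 = 2 * h * sum(v * v for v in vals)
beta1 = (E2 - E1 * E1) / (E1 * E1)    # beta(mu_{C_1})

# Provable bracket: 1/2 <= E(Lambda*) <= 1/2 + (ln 2)/2.
assert 0.49 <= E1 <= 0.86 and beta1 > 0
print(E1, beta1, "beta(mu_{C_n}) = beta1/n -> 0")
```

By the tensorization identities above, \(\beta (\mu _{C_n})=\texttt{beta1}/n\) and \({\mathbb {E}}(\Lambda _{\mu _{C_n}}^{*})=n\cdot \texttt{E1}\), so the width of the threshold window shrinks like \(1/\sqrt{n}\).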

Example 5.11

(Gaussian measure) Let \(\gamma _n\) denote the standard n-dimensional Gaussian measure with density \(f_{\gamma _n}(x)=(2\pi )^{-n/2}e^{-|x|^2/2}\), \(x\in {\mathbb R}^n\). Note that \(\gamma _n=\gamma _1\otimes \cdots \otimes \gamma _1\), and hence we may argue as in the previous example. We may also use direct computation to see that

$$\begin{aligned} \Lambda _{\gamma _n}(\xi )=\ln \Big (\int _{{\mathbb R}^n}e^{\langle \xi ,z\rangle }f_{\gamma _n}(z)dz\Big )=\frac{1}{2}|\xi |^2 \end{aligned}$$

for all \(\xi \in {\mathbb R}^n\) and

$$\begin{aligned} \Lambda _{\gamma _n }^{*}(x) = \sup _{\xi \in {\mathbb R}^n} \left\{ \langle x, \xi \rangle - \Lambda _{\gamma _n}(\xi )\right\} =\frac{1}{2}|x|^2 \end{aligned}$$

for all \(x\in {\mathbb R}^n\). It follows that

$$\begin{aligned} B_t(\gamma _n)=\{x\in {\mathbb R}^n:\Lambda _{\gamma _n}^{*}(x)\leqslant t\}=\{x\in {\mathbb R}^n:|x|\leqslant \sqrt{2t}\} =\sqrt{2t}B_2^n. \end{aligned}$$
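The two closed-form facts used in Example 5.11, the Legendre transform \(\Lambda _{\gamma _n}^{*}(x)=|x|^2/2\) and the lower bound for the Gaussian tail, can be verified numerically in dimension one; a small sanity check (illustrative, with an arbitrary grid):

```python
import math

# (1) Legendre transform of Lambda(xi) = xi^2/2 in one dimension:
# a grid maximization of x*xi - xi^2/2 should return x^2/2 (attained at xi = x).
for x in [0.0, 0.7, 1.3, 2.5]:
    grid = [i * 1e-3 for i in range(-4000, 4001)]   # xi in [-4, 4]
    val = max(x * xi - xi * xi / 2 for xi in grid)
    assert abs(val - x * x / 2) < 1e-5

# (2) The Gaussian tail bound: for 1 <= t <= 50,
#     (2*pi)^{-1/2} * int_{sqrt(2t)}^inf e^{-u^2/2} du = erfc(sqrt(t))/2,
# and this is at least (1/10) * e^{-t} / sqrt(t).
for k in range(1, 51):
    t = float(k)
    tail = 0.5 * math.erfc(math.sqrt(t))
    assert tail >= 0.1 * math.exp(-t) / math.sqrt(t)
```

Part (2) is the inequality \(\varphi _{\gamma _n}(x)\geqslant \frac{c}{\sqrt{t}}e^{-t}\) with the explicit constant \(c=1/10\); the true constant from the Mills-ratio bound is somewhat better for large t.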

Note that if \(x\in \partial (B_t(\gamma _n))\) then

$$\begin{aligned} \varphi _{\gamma _n}(x)=\frac{1}{\sqrt{2\pi }}\int _{\sqrt{2t}}^{\infty }e^{-u^2/2}du\geqslant \frac{c}{\sqrt{t}}e^{-t} \end{aligned}$$

for all \(t\geqslant 1\) (see [20, p. 17] for a refined form of the lower bound that we use). By the standard concentration estimate for the Euclidean norm with respect to \(\gamma _n\) (see [30, Theorem 3.1.1]), we have \(\Vert \;|x|-\sqrt{n}\;\Vert _{\psi _2}\leqslant C\), where \(C>0\) is an absolute constant, or equivalently, for any \(s>0\),

$$\begin{aligned} \gamma _n(\{x\in {\mathbb R}^n:|\;|x|-\sqrt{n}\;|\geqslant s\sqrt{n}\})\leqslant 2\exp (-cs^2n), \end{aligned}$$

where \(c>0\) is an absolute constant. This shows that

$$\begin{aligned} \max \{\gamma _n((1-s)\sqrt{n}B_2^n), 1-\gamma _n((1+s)\sqrt{n}B_2^n)\}\leqslant 2\exp (-cs^2n) \end{aligned}$$

for every \(s\in (0,1)\). Let \(\varepsilon \in (0,1/2)\). Applying Lemma 5.4 with \(t=\left( 1-\varepsilon \right) n/2\) and \(N\leqslant \exp ((1/2-\varepsilon )n)\) we see that

$$\begin{aligned} {\mathbb {E}}_{\gamma _n^N}(\gamma _n (K_N))&\leqslant \gamma _n (\sqrt{(1-\varepsilon )n}B_2^n)+ \exp (-\varepsilon n/2)\\&\leqslant 2\exp (-c\varepsilon ^2n)+\exp (-\varepsilon n/2), \end{aligned}$$

using the fact that \(\sqrt{1-\varepsilon }\leqslant 1-\varepsilon /2\). It follows that, for any \(\delta \in (0,1)\), if we choose \(\varepsilon =c_1\sqrt{\ln (4/\delta )}/\sqrt{n}\) we have

$$\begin{aligned} \sup \left\{ {\mathbb {E}}_{\gamma _n^N}\big (\gamma _n(K_N)\big ):N\leqslant e^{\left( \frac{1}{2}-\varepsilon \right) n}\right\} \leqslant \delta , \end{aligned}$$

and hence

$$\begin{aligned} \varrho _1(\gamma _n,\delta )\geqslant \frac{1}{2}-\frac{c_1\sqrt{\ln (4/\delta )}}{\sqrt{n}}. \end{aligned}$$

Now, let \(N\geqslant \exp ((1/2+\varepsilon )n)\). Applying Lemma 5.8 with \(A=B_t(\gamma _n)\) where \(t=(1+\varepsilon )n/2\), we see that \(\gamma _n (B_t(\gamma _n))=\gamma _n(\sqrt{(1+\varepsilon )n}B_2^n)\geqslant 1-2\exp (-c\varepsilon ^2n)\), because \(\sqrt{1+\varepsilon }\geqslant 1+\varepsilon /3\). We also have

$$\begin{aligned} 2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left( 1-\inf _{x\in B_t(\gamma _n)}\varphi _{\gamma _n }(x)\right) ^{N-n}\leqslant \left( \frac{2eN}{n}\right) ^n \exp \left( -\frac{c(N-n)}{\sqrt{n}}e^{-(1+\varepsilon )n/2}\right) . \end{aligned}$$

Let \(\delta \in (0,1)\). We choose \(\varepsilon =c_2\sqrt{\ln (4/\delta )}/\sqrt{n}\) and insert these estimates into Lemma 5.8. Arguing as in the verification of (5.7), we see that if \(n\geqslant n_0(\delta )\) then

$$\begin{aligned} \inf \left\{ {\mathbb {E}}_{\gamma _n^N}\big (\gamma _n(K_N)\big ):N\geqslant e^{\left( \frac{1}{2}+\varepsilon \right) n}\right\} \geqslant 1-\delta , \end{aligned}$$

and hence

$$\begin{aligned} \varrho _2(\gamma _n,\delta )\leqslant \frac{1}{2}+\frac{c_2\sqrt{\ln (4/\delta )}}{\sqrt{n}}. \end{aligned}$$

Combining the above we get

$$\begin{aligned} \varrho (\gamma _n,\delta )\leqslant \frac{C\sqrt{\ln (4/\delta )}}{\sqrt{n}}, \end{aligned}$$

where \(C>0\) is an absolute constant.

We close this article with the example of the uniform measure on the Euclidean ball. It was proved in [3] that if \(\varepsilon \in (0,1)\) and \(K_N=\textrm{conv}\{x_1,\ldots ,x_N\}\) where \(x_1,\ldots ,x_N\) are random points independently and uniformly chosen from \(B_2^n\) then

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup \left\{ \frac{{\mathbb {E}} |K_N|}{|B_2^n|}:N\leqslant \exp \left( (1-\varepsilon )\left( \frac{n+1}{2}\right) \ln n\right) \right\} =0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\inf \left\{ \frac{{\mathbb {E}} |K_N|}{|B_2^n|}:N\geqslant \exp \left( (1+\varepsilon )\left( \frac{n+1}{2}\right) \ln n\right) \right\} =1. \end{aligned}$$

We shall obtain a similar conclusion with the approach of this work (the estimate below is in fact stronger since it sharpens the width of the threshold from O(1) to \(O(1/\sqrt{n})\)).

Theorem 5.12

Let \(D_n\) be the centered Euclidean ball of volume 1 in \({\mathbb R}^n\). Then the sequence \(\mu _n:=\mu _{D_n}\) exhibits a sharp threshold with \(\varrho (\mu _n,\delta )\leqslant \frac{c}{\sqrt{\delta n}}\); moreover, if n is even then we have that

$$\begin{aligned} {\mathbb {E}}_{\mu _n}(\Lambda _{\mu _n}^{*})=\frac{(n+1)}{2}H_{\frac{n}{2}}+O(\sqrt{n}) \end{aligned}$$

as \(n\rightarrow \infty \), where \(H_m=\sum _{k=1}^m\frac{1}{k}\) is the m-th harmonic number.

Proof

Note that if K is a centered convex body in \({\mathbb R}^n\) and \(r>0\) then \(\Lambda _{\mu _{rK}}^{*}(x)=\Lambda _{\mu _K}^{*}(x/r)\) for all \(x\in {\mathbb R}^n\), where \(\mu _{rK}\) is the uniform measure on rK. It follows that

$$\begin{aligned} \frac{1}{|rK|}\int _{rK}[\Lambda _{\mu _{rK}}^{*}(x)]^pdx=\frac{1}{|K|}\int _K[\Lambda _{\mu _K}^{*}(x)]^pdx \end{aligned}$$

for every \(p>0\) and \(r>0\). This shows that in order to compute \(\beta (\mu _{D_n})\) it suffices to compute the ratio

$$\begin{aligned} \beta (\mu _{D_n})+1=\frac{\frac{1}{|B_2^n|}\int _{B_2^n}\Lambda ^{*}(x)^2dx}{\left( \frac{1}{|B_2^n|}\int _{B_2^n}\Lambda ^{*}(x)dx\right) ^2} \end{aligned}$$

where \(\Lambda ^{*}:=\Lambda _{\mu _{B_2^n}}^{*}\). Having in mind Lemma 5.3 we start by computing \(\tau (\mu _{B_2^n})\). Set \(\omega :=\omega _{\mu _{B_2^n}}\). Then, \(\omega (x)=\ln (1/\varphi (x))\) where \(\varphi (x)=F(|x|)\),

$$\begin{aligned} F(r)=c_n\int _r^{1}(1-t^2)^{\frac{n-1}{2}}dt,\qquad r\in [0,1] \end{aligned}$$

and \(c_n=\pi ^{-1/2}\Gamma (\frac{n}{2}+1)/\Gamma (\frac{n+1}{2})\). From [3, Lemma 2.2] we know that

$$\begin{aligned} F(r)=(1-r^2)^{\frac{n+1}{2}}h(r,n), \end{aligned}$$

where

$$\begin{aligned} \frac{1}{\sqrt{2\pi (n+2)}}\leqslant h(r,n)\leqslant \frac{1}{r\sqrt{2\pi n}} \end{aligned}$$
(5.8)

for all \(r\in (0,1]\). We assume that n is even (the case where n is odd can be treated in a similar way). Using polar coordinates we compute

$$\begin{aligned} \frac{1}{|B_2^n|}\int _{B_2^n}\omega (x)\,dx&=-n\int _0^{1}r^{n-1}\ln (F(r))\, dr\\&=-n\int _0^1 r^{n-1}\ln ((1-r^2)^{\frac{n+1}{2}})\, dr-n\int _0^1 r^{n-1}\ln \left( h(r,n)\right) \, dr. \end{aligned}$$

The leading term is the first one and one can compute it explicitly. After making the change of variables \(r^2=u\), we get

$$\begin{aligned}&-n\int _0^1 r^{n-1}\ln ((1-r^2)^{\frac{n+1}{2}})\, dr = -\frac{n(n+1)}{2}\int _0^1 r^{n-1}\ln (1-r^2)\, dr\nonumber \\&\quad =-\frac{n(n+1)}{4}\int _0^1u^{\frac{n-2}{2}}\ln (1-u)\,du=\frac{n+1}{2}H_{\frac{n}{2}}, \end{aligned}$$
(5.9)

using also Lemma 4.3. For the second term we recall from (5.8) that \(0\leqslant -\ln (h(r,n))\leqslant \tfrac{1}{2}\ln (2\pi (n+2))\leqslant c_1\ln n\), and hence

$$\begin{aligned} -n\int _0^1 r^{n-1}\ln \left( h(r,n)\right) \, dr\leqslant c_1\ln n\int _0^1nr^{n-1}dr=c_1\ln n. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{1}{|B_2^n|}\int _{B_2^n}\omega (x)\,dx=\frac{n+1}{2}H_{\frac{n}{2}}+O(\ln n). \end{aligned}$$
(5.10)
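The integral behind (5.9) and (5.10), \(\int _0^1u^{m-1}\ln (1-u)\,du=-H_m/m\) (presumably the content of Lemma 4.3, which is cited after (5.9)), follows from the expansion \(\ln (1-u)=-\sum _{k\geqslant 1}u^k/k\) together with the telescoping \(\frac{1}{k(k+m)}=\frac{1}{m}\big (\frac{1}{k}-\frac{1}{k+m}\big )\). A quick numerical confirmation:

```python
from fractions import Fraction

# Identity: int_0^1 u^{m-1} ln(1-u) du = -H_m/m, via
#   ln(1-u) = -sum_k u^k/k  =>  integral = -sum_k 1/(k(k+m)),
# and the partial fractions 1/(k(k+m)) = (1/m)(1/k - 1/(k+m)) telescope.

def lhs_series(m, K=1_000_000):
    # truncated series for -int_0^1 u^{m-1} ln(1-u) du; the tail is O(1/K)
    return sum(1.0 / (k * (k + m)) for k in range(1, K))

for m in [1, 2, 5, 10]:
    H_m = sum(Fraction(1, k) for k in range(1, m + 1))   # exact harmonic number
    assert abs(lhs_series(m) - float(H_m) / m) < 1e-5
```

With \(m=n/2\) this gives \(-\frac{n(n+1)}{4}\int _0^1u^{\frac{n-2}{2}}\ln (1-u)\,du=\frac{n+1}{2}H_{\frac{n}{2}}\), the value appearing in (5.9).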

Using again polar coordinates we write

$$\begin{aligned} \frac{1}{|B_2^n|}\int _{B_2^n}(\omega (x))^2dx&=n\int _0^{1}r^{n-1}\ln ^2(F(r))\, dr\\&=n\int _0^1 r^{n-1}\ln ^2((1-r^2)^{\frac{n+1}{2}})\, dr+n\int _0^1 r^{n-1}\ln ^2(h(r,n))\, dr\\&\quad +2n\int _0^1 r^{n-1}\ln ((1-r^2)^{\frac{n+1}{2}})\ln (h(r,n))\, dr. \end{aligned}$$

As before, the leading term is the first one and we can compute it explicitly. After making the change of variables \(r^2=u\), we get

$$\begin{aligned} n\left( \frac{n+1}{2}\right) ^2\int _0^1 r^{n-1}\ln ^2(1-r^2)\, dr&=n\frac{(n+1)^2}{8}\int _0^1u^{\frac{n-2}{2}}\ln ^2(1-u)\,du\\&=n\frac{(n+1)^2}{8}\left( \frac{2}{n}H_{\frac{n}{2}}^2+\frac{2}{n}\sum _{k=1}^{n/2}\frac{1}{k^2}\right) . \end{aligned}$$

On the other hand, from (5.8) we see that if \(h(r,n)\leqslant 1\) then \(0\leqslant -\ln (h(r,n))\leqslant \tfrac{1}{2}\ln (2\pi (n+2))\leqslant c_1\ln n\), while if \(h(r,n)>1\) then \(0\leqslant \ln (h(r,n))\leqslant \ln (1/r)\). Therefore, \(\ln ^2(h(r,n))\leqslant c_2(\ln n)^2+\ln ^2(r)\) for all \(r\in (0,1]\). It follows that

$$\begin{aligned} n\int _0^1 r^{n-1}\ln ^2(h(r,n))\, dr\leqslant c_3(\ln n)^2+n\int _0^1r^{n-1}\ln ^2r\,dr\leqslant c_4(\ln n)^2. \end{aligned}$$

Using again the fact that \(\ln (h(r,n)^{-1})\leqslant c_1\ln n\) as well as (5.9), we check that

$$\begin{aligned} 2n\int _0^1 r^{n-1}\ln ((1-r^2)^{\frac{n+1}{2}})\ln (h(r,n))\, dr\leqslant 2\cdot \frac{n+1}{2}H_{\frac{n}{2}}\cdot c_1\ln n\leqslant c_5n(\ln n)^2. \end{aligned}$$

From these estimates we have

$$\begin{aligned} \frac{1}{|B_2^n|}\int _{B_2^n}(\omega (x))^2\,dx=\frac{(n+1)^2}{4}H^2_{\frac{n}{2}}+O(n(\ln n)^2). \end{aligned}$$
(5.11)

From (5.10) and (5.11) we finally get

$$\begin{aligned} \tau (\mu _{B_2^n})=O\left( \frac{n(\ln n)^2}{n^2H^2_{\frac{n}{2}}}\right) =O(1/n). \end{aligned}$$

Then, Lemma 5.3 and a simple computation show that

$$\begin{aligned} \beta (\mu _{D_n})&=\left( \tau (\mu _{B_2^n})+O(L_{\mu _{B_2^n}}^2/\sqrt{n})\right) \left( 1+O(L_{\mu _{B_2^n}}^2/\sqrt{n})\right) \\&=O(1/\sqrt{n}), \end{aligned}$$

because \(L_{\mu _{B_2^n}}\approx 1\). Finally, note that by the estimate (3.5) in Corollary 3.5 we have

$$\begin{aligned} {\mathbb {E}}_{\mu _n}(\Lambda _{\mu _n}^{*})=\frac{1}{|B_2^n|}\int _{B_2^n}\omega (x)\,dx+O(\sqrt{n}) =\frac{(n+1)}{2}H_{\frac{n}{2}}+O(\sqrt{n}) \end{aligned}$$

as \(n\rightarrow \infty \). \(\square \)

Note

The above discussion leaves open the following basic question: to estimate

$$\begin{aligned} \beta _n^{*}:=\sup \{\beta (\mu _K):K\;\hbox {is a centered convex body of volume}\;1\;\hbox {in}\;{\mathbb R}^n\} \end{aligned}$$

or, more generally,

$$\begin{aligned} \beta _n:=\sup \{\beta (\mu ):\mu \;\hbox {is a centered log-concave probability measure on}\;{\mathbb R}^n\}. \end{aligned}$$