1 Introduction

Let \(\mu \) be a probability measure on \({{\mathbb {R}}}^n\). For any \(x\in {{\mathbb {R}}}^n\) we denote by \({{{\mathcal {H}}}}(x)\) the set of all half-spaces H of \({{\mathbb {R}}}^n\) containing x. The function

$$\begin{aligned} \varphi _{\mu }(x)=\inf \{\mu (H):H\in {{{\mathcal {H}}}}(x)\} \end{aligned}$$

is called Tukey’s half-space depth. The first work in statistics where some form of the half-space depth appears is an article of Hodges [1] from 1955. Tukey introduced the half-space depth for data sets in [2] as a tool that enables efficient visualization of random samples in the plane. The term “depth” also comes from Tukey’s article. A formal definition of the half-space depth, as a way to distinguish points that fit the overall pattern of a multivariate probability distribution and to obtain an efficient description, visualization, and nonparametric statistical inference for multivariate data, was given by Donoho and Gasko in [3] (see also [4]). We refer the reader to the survey article of Nagy et al. [5] for an overview of this topic, with an emphasis on its connections with convex geometry, and for many references.
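As an illustration, the depth of the standard Gaussian measure \(\gamma _n\) on \({{\mathbb {R}}}^n\) can be computed explicitly: a half-space \(H=\{z:\langle z,\theta \rangle \geqslant s\}\), \(\theta \in S^{n-1}\), contains x if and only if \(s\leqslant \langle x,\theta \rangle \), and \(\gamma _n(H)=1-\Phi (s)\), where \(\Phi \) is the standard normal distribution function. Minimizing over \(\theta \) and s we get

$$\begin{aligned} \varphi _{\gamma _n}(x)=1-\Phi (|x|) \end{aligned}$$

for all \(x\in {{\mathbb {R}}}^n\).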

In the first part of this article we study the expectation of the half-space depth in the context of log-concave probability measures. In what follows, these are the Borel probability measures \(\mu \) on \({\mathbb {R}}^n\) that satisfy \(\mu (\lambda A+(1-\lambda )B) \geqslant \mu (A)^{\lambda }\mu (B)^{1-\lambda }\) for any compact subsets \(A,B\subseteq {{\mathbb {R}}}^n\) and any \(\lambda \in (0,1)\), as well as the non-degeneracy condition \(\mu (H)<1\) for every hyperplane H in \({{\mathbb {R}}}^n\). The question whether there exists an absolute constant \(c\in (0,1)\) such that

$$\begin{aligned} {{\mathbb {E}}}_{\mu }(\varphi _{\mu }):=\int _{{{\mathbb {R}}}^n}\varphi _{\mu }(x)\,d\mu (x)\leqslant c^n\end{aligned}$$
(1.1)

for all \(n\geqslant 1\) and all log-concave probability measures \(\mu \) on \({{\mathbb {R}}}^n\) was asked in [6] in connection with stochastic separability and applications to machine learning and error-correction mechanisms in artificial intelligence systems; for the origin of the question we refer to [7] and to the references therein. In the context of asymptotic geometric analysis, the validity of (1.1) implies that if \(m\leqslant C^n\), where \(C>1\) is an absolute constant, then a set of m independent random points with a log-concave distribution has, with probability close to 1, the property that every point in the set can be separated from all others by a hyperplane.
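Let us briefly indicate the connection. Arguing as in the proof of Lemma 5.2 below, for any fixed \(x\in {{\mathbb {R}}}^n\) the probability that x belongs to the convex hull of \(m-1\) independent points with distribution \(\mu \) is at most \((m-1)\varphi _{\mu }(x)\); integrating with respect to \(x\sim \mu \) and taking a union bound over the m points we see that

$$\begin{aligned} {{\mathbb {P}}}\big (\exists \,i\leqslant m:\ X_i\in \textrm{conv}\{X_j:j\ne i\}\big )\leqslant m(m-1)\,{{\mathbb {E}}}_{\mu }(\varphi _{\mu })\leqslant m^2c^n, \end{aligned}$$

which tends to 0 provided that \(m\leqslant C^n\) for some \(1<C<c^{-1/2}\).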

Our first result shows that (1.1) holds true modulo the isotropic constant \(L_{\mu }\) of \(\mu \), defined in (2.3).

Theorem 1.1

Let \(\mu \) be a log-concave probability measure on \({{\mathbb {R}}}^n\), \(n\geqslant n_0\). Then, \({{\mathbb {E}}}_{\mu }(\varphi _{\mu }) \leqslant \exp \left( -cn/L_{\mu }^2\right) \), where \(L_{\mu }\) is the isotropic constant of \(\mu \) and \(c>0\), \(n_0\in {{\mathbb {N}}}\) are absolute constants.

Background information on isotropic log-concave probability measures and the isotropic constant is provided in Sect. 2. The well-known hyperplane conjecture asks whether there exists an absolute constant \(C>0\) such that \(L_n\leqslant C\) for every \(n\geqslant 2\), where

$$\begin{aligned} L_n=\sup \{L_{\mu }:\mu \ \hbox {is an isotropic log-concave probability measure on}\ {{\mathbb {R}}}^n\}. \end{aligned}$$

The best known upper bound, due to Klartag [8], asserts that \(L_n\leqslant C\sqrt{\ln n}\) for some absolute constant \(C>0\); therefore, Theorem 1.1 shows that

$$\begin{aligned} {{\mathbb {E}}}_{\mu }(\varphi _{\mu }) \leqslant \exp \left( -cn/\ln n\right) \end{aligned}$$

provided that n is large enough. The quantity \({{\mathbb {E}}}_{\mu }(\varphi _{\mu })\) is affinely invariant and hence for the proof of Theorem 1.1 we may assume that \(\mu \) is isotropic. Actually, we obtain Theorem 1.1 as a special case of a more general result which is presented in Sect. 3.

Theorem 1.2

Let \(\mu \) and \(\nu \) be two isotropic log-concave probability measures on \({{\mathbb {R}}}^n\), \(n\geqslant n_0\). Then,

$$\begin{aligned} {{\mathbb {E}}}_{\nu }(\varphi _{\mu }):=\int _{{\mathbb {R}}^n}\varphi _{\mu }(x)\,d\nu (x)\leqslant \exp \left( -cn/L_{\nu }^2\right) , \end{aligned}$$

where \(c>0\), \(n_0\in {{\mathbb {N}}}\) are absolute constants.

The proof of Theorem 1.2 starts with the known estimate \(\varphi _{\mu }(x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\) where \(\Lambda _{\mu }^{*}\) is the Cramér transform of \(\mu \) (defined in Sect. 2), and actually establishes the stronger inequality

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}e^{-\Lambda _{\mu }^{*}(x)}d\nu (x) \leqslant \exp \left( -cn/L_{\nu }^2\right) ,\end{aligned}$$
(1.2)

exploiting upper bounds for the volume of the sets \(B_t(\mu )=\{x\in {\mathbb R}^n:\Lambda _{\mu }^{*}(x)\leqslant t\}\). The assumption that both \(\mu \) and \(\nu \) are isotropic is not necessary. One can consider a different type of normalization. We discuss this matter in Sect. 2 and we state another version of Theorem 1.2 that might be useful (see Theorem 3.2). In any case, setting \(\nu =\mu \) we obtain Theorem 1.1 as an immediate consequence of any of these statements.

In Sect. 4 we show that, apart from the value of the isotropic constant \(L_{\mu }\), the exponential estimate provided by Theorem 1.1 is sharp.

Theorem 1.3

Let \(\mu \) be a log-concave probability measure on \({{\mathbb {R}}}^n\). Then,

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\varphi _{\mu }(x)d\mu (x)\geqslant e^{-cn}, \end{aligned}$$

where \(c>0\) is an absolute constant.

The proof of Theorem 1.3 makes use of several facts about isotropic log-concave probability measures. In the case where \(\mu \) is the uniform measure on a convex body K of volume 1 in \({{\mathbb {R}}}^n\), one can show that \(\varphi _{\mu }(x)\geqslant e^{-c_1n}\) for all \(x\in \tfrac{1}{2}K\) and then simply apply Markov’s inequality and use the fact that \(\left| \tfrac{1}{2}K\right| =2^{-n}\). When \(\mu \) is an arbitrary log-concave probability measure on \({{\mathbb {R}}}^n\), in order to obtain the same lower bound, exponential in the dimension, we have to exploit the family of the one-sided \(L_t\)-centroid bodies of \(\mu \). More precisely, we use the fact that, in order to have the lower bound \(\varphi _{\mu }(x)\geqslant e^{-c_1n}\), we may use, instead of \(\tfrac{1}{2}K\), the convex body \(\tfrac{1}{2}Z_t^+(\mu )\) with e.g. \(t=5n\), where \(Z_t^+(\mu )\) is the one-sided \(L_t\)-centroid body of \(\mu \), and we establish an appropriate lower bound for \(\mu \left( \tfrac{1}{2}Z_{5n}^+(\mu )\right) \). This last estimate requires the use of some other families of convex sets that are associated with a log-concave probability measure; these are introduced in the next section as well as in Sect. 4. For the reader’s convenience we first present the proof of Theorem 1.3 in the simpler case where \(\mu \) is the uniform measure on a convex body K in \({{\mathbb {R}}}^n\) and then in the general case of an arbitrary log-concave probability measure.

In the second part of this article we consider the question of obtaining uniform upper and lower thresholds for the expected measure of a random polytope, defined as the convex hull of independent random points with a log-concave distribution. The general formulation of the problem is the following. Given a log-concave probability measure \(\mu \) on \({{\mathbb {R}}}^n\), we consider independent random points \(X_1,X_2,\ldots \) in \({{\mathbb {R}}}^n\) distributed according to \(\mu \) and, for any \(N>n\), we consider the random polytope

$$\begin{aligned} K_N=\textrm{conv}\{X_1,\ldots ,X_N\} \end{aligned}$$

and the expectation \({{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]\). Tukey’s half-space depth plays a crucial role in the study of these random polytopes and of their threshold behavior, starting with the classical work of Dyer, Füredi and McDiarmid who established in [9] a sharp threshold for the expected volume of random polytopes with vertices uniformly distributed in the discrete cube \(E_2^n=\{-1,1\}^n\) or in the solid cube \(B_{\infty }^n=[-1,1]^n\). They proved that in the first case, if \(\kappa =\ln 2-\tfrac{1}{2}\) then for every \(\varepsilon \in (0,\kappa )\) one has the upper threshold

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup \left\{ 2^{-n} {{\mathbb {E}}}|K_N|:N\leqslant \exp ((\kappa -\varepsilon )n)\right\} =0 \end{aligned}$$
(1.3)

and the lower threshold

$$\begin{aligned} \lim _{n\rightarrow \infty }\inf \left\{ 2^{-n} {{\mathbb {E}}}|K_N|:N\geqslant \exp ((\kappa +\varepsilon )n)\right\} =1.\end{aligned}$$
(1.4)

A similar result holds true for the expected volume of random polytopes with vertices uniformly distributed in the cube \(B_{\infty }^n\); the corresponding value of the constant \(\kappa \) is \(\kappa =\ln (2\pi )-\gamma -\tfrac{1}{2}\), where \(\gamma \) is Euler’s constant. Half-space depth plays a key role in the proof of these results: the starting points for the proofs of the upper and the lower threshold are variants of Lemmas 5.2 and 5.7, respectively. Further sharp thresholds (meaning that there exists some constant \(\kappa =\kappa _{\mu }\) such that the expected volume of \(K_N\) changes behavior around \(N=\exp (\kappa _{\mu }n)\)) have been given in a number of other special cases; see [10] for the case where the \(X_i\) have independent identically distributed coordinates supported on a bounded interval, and the articles [11] and [12, 13] for a number of cases where the \(X_i\) have rotationally invariant densities. All these works follow the same strategy and use estimates for the half-space depth. Upper and lower thresholds which are exponential in the dimension, but not sharp, are obtained in [14] for the case where the \(X_i\) are uniformly distributed in a simplex. All these results suggest that, at least in the case where \(\mu =\mu _K\) is the uniform measure on a high-dimensional convex body, the expectation \({{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]\) of the measure of \(K_N\) exhibits a threshold with constant \(\kappa _{\mu } =\frac{1}{n}{{\mathbb {E}}}_{\mu }(\Lambda _{\mu }^{*})\), where \(\Lambda _{\mu }^{*}\) is the Cramér transform of \(\mu \), in the sense that the following statement might be true: given \(\delta \in \left( 0,\tfrac{1}{2}\right) \), there exists \(n_0(\delta ,\varepsilon )\in {{\mathbb {N}}}\) such that if \(n\geqslant n_0\) and K is a convex body in \({{\mathbb {R}}}^n\) then

$$\begin{aligned} \sup \left\{ {{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]:N\leqslant \exp ((\kappa _{\mu }-\varepsilon ) n)\right\} \leqslant \delta \end{aligned}$$

and

$$\begin{aligned} \inf \left\{ {{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]:N\geqslant \exp ((\kappa _{\mu }+\varepsilon ) n)\right\} \geqslant 1-\delta \end{aligned}$$

for some \(\varepsilon =c(n,\delta )\kappa _{\mu }\) with \(\lim \limits _{n\rightarrow \infty }c(n,\delta )=0\). Some steps in this direction have been made in [15]. Note that by (1.2) and Jensen’s inequality one has that \(\kappa _{\mu }\geqslant c/L_n^2\) for every log-concave probability measure \(\mu \) on \({{\mathbb {R}}}^n\).
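To make the last claim explicit, note that, since \(s\mapsto e^{-s}\) is convex, Jensen’s inequality and (1.2) (applied with \(\nu =\mu \), after bringing \(\mu \) to isotropic position) give, for \(n\geqslant n_0\),

$$\begin{aligned} e^{-n\kappa _{\mu }}=\exp \left( -\int _{{{\mathbb {R}}}^n}\Lambda _{\mu }^{*}(x)\,d\mu (x)\right) \leqslant \int _{{{\mathbb {R}}}^n}e^{-\Lambda _{\mu }^{*}(x)}\,d\mu (x)\leqslant \exp \left( -cn/L_{\mu }^2\right) , \end{aligned}$$

and hence \(\kappa _{\mu }\geqslant c/L_{\mu }^2\geqslant c/L_n^2\).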

Here, we are interested in uniform upper and lower thresholds for the class of all log-concave probability measures. The question that we study is to find a constant \(N_1(n)\), depending only on n and as large as possible, so that

$$\begin{aligned} \sup _{\mu }\Big (\sup \Big \{{{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]:N\leqslant N_1(n)\Big \}\Big )\longrightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \) and a second constant \(N_2(n)\), depending only on n and as small as possible, so that

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{{{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]:N\geqslant N_2(n)\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the supremum and the infimum are over all log-concave probability measures. We shall call the first type of result a “uniform upper threshold” and the second type a “uniform lower threshold”.

Such uniform upper and lower thresholds were obtained recently by Chakraborti, Tkocz and Vritsiou in [16] for some families of distributions. They showed that if \(\mu \) is an even log-concave probability measure supported on a convex body K in \({\mathbb R}^n\) and if \(X_1,X_2,\ldots \) are independent random points distributed according to \(\mu \), then for any \(n<N\leqslant \exp (c_1n/L_{\mu }^2)\) we have that

$$\begin{aligned} \frac{{{\mathbb {E}}}_{\mu ^N}(|K_N|)}{|K|} \leqslant \exp \left( -c_2n/L_{\mu }^2\right) ,\end{aligned}$$
(1.5)

where \(c_1,c_2>0\) are absolute constants. We obtain an upper threshold for a pair of log-concave probability measures \(\mu \) and \(\nu \), provided that they can simultaneously be put in isotropic position.

Theorem 1.4

Let \(\mu \) and \(\nu \) be isotropic log-concave probability measures on \({{\mathbb {R}}}^n\). Let \(X_1,X_2,\ldots \) be independent random points in \({{\mathbb {R}}}^n\) distributed according to \(\mu \) and for any \(N>n\) consider the random polytope \(K_N=\textrm{conv}\{X_1,\ldots ,X_N\}\). Then, for any \(N\leqslant \exp (c_1n/L_{\nu }^2)\) we have that

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N)) \leqslant 2\exp \left( -c_2n/L_{\nu }^2\right) , \end{aligned}$$

where \(c_1,c_2>0\) are absolute constants.

As a corollary of Theorem 1.4 we get:

Corollary 1.5

There exists an absolute constant \(c>0\) such that if \(N_1(n)=\exp (cn/L_n^2)\) then

$$\begin{aligned} \sup _{\mu }\Big (\sup \Big \{{{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]:N\leqslant N_1(n)\Big \}\Big )\longrightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \), where the first supremum is over all log-concave probability measures \(\mu \) on \({{\mathbb {R}}}^n\).

The proof of Theorem 1.4 exploits some of the ideas that are used for the proof of (1.5) in [16]: Lemma 5.2 is a variant of a known idea which is often used for upper thresholds and is based again on the inequality \(\varphi _{\mu }(x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\). Then, one has to use upper bounds for the volume of the sets \(B_t(\mu )\). The assumption that both \(\mu \) and \(\nu \) are isotropic may be replaced by different types of normalization. We discuss other versions of Theorem 1.4 in Sect. 5 and we show that one can recover (1.5) from these.

The uniform lower threshold which is established in [16] concerns the case where \(\mu \) is an even \(\kappa \)-concave measure on \({{\mathbb {R}}}^n\) with \(0<\kappa <1/n\), supported on a convex body K in \({{\mathbb {R}}}^n\). If \(X_1,X_2,\ldots \) are independent random points in \({\mathbb R}^n\) distributed according to \(\mu \) and \(K_N=\textrm{conv}\{X_1,\ldots ,X_N\}\) as before, then for any \(M\geqslant C\) and any \(N\geqslant \exp \left( \frac{1}{\kappa }(\ln n+2\ln M)\right) \) we have that

$$\begin{aligned} \frac{{{\mathbb {E}}}_{\mu ^N}(|K_N|)}{|K|}\geqslant 1-\frac{1}{M},\end{aligned}$$
(1.6)

where \(C>0\) is an absolute constant.

Since the family of log-concave probability measures corresponds to the case \(\kappa =0\), it is natural to ask for analogues of this result for 0-concave, i.e. log-concave, probability measures. We obtain a uniform lower threshold for the class of log-concave probability measures.

Theorem 1.6

Let \(\delta \in (0,1)\). Then,

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{ {{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]: N\geqslant \exp \big (C\delta ^{-1}\ln \left( 2/\delta \right) n\ln n\big )\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the first infimum is over all log-concave probability measures \(\mu \) on \({{\mathbb {R}}}^n\) with barycenter at the origin, and \(C>0\) is an absolute constant.

The proof of Theorem 1.6 exploits the half-space depth as follows. By a known fact, Lemma 5.7, roughly speaking it suffices to have a good lower bound for \(\varphi _{\mu }(x)\) on a set \(A\subset {{\mathbb {R}}}^n\) of measure close to 1. We show that if \(\mu \) has its barycenter at the origin then, as in the proof of Theorem 1.3, the role of A can be played by \((1+\delta )Z_t^+(\mu )\) where, this time, \(t\geqslant C_{\delta }n\ln n\) and \(C_{\delta }=C\delta ^{-1}\ln \left( 2/\delta \right) \). Theorem 1.6 provides a weak threshold in the sense that we estimate the expectation \({{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]\) (for an arbitrarily small but positive value of \(\delta \)) while we would like to have a similar result for \({{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]\). One can “remove the \(\delta \)-term”, however the dependence on n becomes worse. More precisely, we show in Theorem 5.8 that there exists an absolute constant \(C>0\) such that

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{ {{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]: N\geqslant \exp (C(n\ln n)^2u(n))\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the first infimum is over all log-concave probability measures \(\mu \) on \({{\mathbb {R}}}^n\) and u(n) is any function with \(u(n)\rightarrow \infty \) as \(n\rightarrow \infty \).

It should be noted that a lower threshold which is exponential in the dimension is not possible in full generality. For example, in the case where the \(X_i\) are uniformly distributed in the Euclidean ball the sharp threshold for the problem is

$$\begin{aligned} \exp \left( (1\pm \epsilon )\tfrac{1}{2}n\ln n\right) ,\qquad \epsilon >0. \end{aligned}$$

See [17] where a related estimate first appears, and [11, 12] for sharp estimates; one more proof is given in [15].

2 Notation and background information

In this section we introduce notation and terminology that we use throughout this work, and provide background information on isotropic convex bodies and log-concave probability measures. We write \(\langle \cdot ,\cdot \rangle \) for the standard inner product in \({{\mathbb {R}}}^n\) and denote the Euclidean norm by \(|\cdot |\). In what follows, \(B_2^n\) is the Euclidean unit ball, \(S^{n-1}\) is the unit sphere, and \(\sigma \) is the rotationally invariant probability measure on \(S^{n-1}\). Lebesgue measure in \({{\mathbb {R}}}^n\) is denoted by \(|\cdot |\). The letters \(c, c^{\prime },c_j,c_j^{\prime }\) etc. denote absolute positive constants whose value may change from line to line. Whenever we write \(a\approx b\), we mean that there exist absolute constants \(c_1,c_2>0\) such that \(c_1a\leqslant b\leqslant c_2a\). Similarly, if \(A,B\) are sets, then \(A \approx B\) will state that \(c_1A\subseteq B \subseteq c_2 A\) for some absolute constants \(c_1,c_2>0\). We refer to Schneider’s book [18] for basic facts from the Brunn–Minkowski theory and to the book [19] for basic facts from asymptotic convex geometry. We also refer to [20] for more information on isotropic convex bodies and log-concave probability measures.

2.1. Convex bodies. A convex body in \({{\mathbb {R}}}^n\) is a compact convex set \(K\subset {{\mathbb {R}}}^n\) with non-empty interior. In this work we often consider bounded convex sets K in \({{\mathbb {R}}}^n\) with \(0\in \textrm{int}(K)\); since their closure is a convex body, we shall call these sets convex bodies too. We say that K is centrally symmetric if \(-K=K\) and that K is centered if the barycenter \(\textrm{bar}(K)=\frac{1}{|K|}\int _Kx\,dx\) of K is at the origin. We shall use the fact that if K is a centered convex body in \({{\mathbb {R}}}^n\) then

$$\begin{aligned} \max _{y\in {{\mathbb {R}}}^n}|K\cap (y+\xi ^{\perp })|_{n-1}\leqslant e\,|K\cap \xi ^{\perp }|_{n-1} \end{aligned}$$
(2.1)

for all \(\xi \in S^{n-1}\), where \(\xi ^{\perp }=\{x\in {\mathbb R}^n:\langle x,\xi \rangle =0\}\) and \(|\cdot |_{n-1}\) denotes \((n-1)\)-dimensional volume. This is a result of Fradelizi; for a proof see [20, Proposition 6.1.9]. The radial function \(\varrho _K\) of K is defined for all \(x\ne 0\) by \(\varrho _K(x)=\sup \{\lambda >0:\lambda x\in K\}\) and the support function of K is given by \(h_K(x) = \sup \{\langle x,y\rangle :y\in K\}\) for all \(x\in {{\mathbb {R}}}^n\). The polar body \(K^{\circ }\) of a convex body K in \({{\mathbb {R}}}^n\) with \(0\in \textrm{int}(K)\) is the convex body

$$\begin{aligned} K^{\circ }:=\bigl \{y\in {{\mathbb {R}}}^n: \langle x,y\rangle \leqslant 1\;\hbox {for all}\; x\in K\bigr \}. \end{aligned}$$

A convex body K in \({{\mathbb {R}}}^n\) is called isotropic if it has volume 1, it is centered, and its inertia matrix is a multiple of the identity matrix: there exists a constant \(L_K>0\), the isotropic constant of K, such that

$$\begin{aligned} \Vert \langle \cdot ,\xi \rangle \Vert _{L_2(K)}^2:=\int _K\langle x,\xi \rangle ^2dx =L_K^2 \end{aligned}$$

for all \(\xi \in S^{n-1}\).

2.2. Log-concave probability measures. In this article, a Borel measure \(\mu \) on \({\mathbb {R}}^n\) is called log-concave if \(\mu (H)<1\) for every hyperplane H in \({\mathbb R}^n\) and \(\mu (\lambda A+(1-\lambda )B) \geqslant \mu (A)^{\lambda }\mu (B)^{1-\lambda }\) for any compact subsets \(A,B\) of \({{\mathbb {R}}}^n\) and any \(\lambda \in (0,1)\). A theorem of Borell [21] shows that, under these assumptions, \(\mu \) has a log-concave density \(f_{{\mu }}\). A function \(f:{\mathbb {R}}^n \rightarrow [0,\infty )\) is called log-concave if its support \(\{f>0\}\) is a convex set in \({{\mathbb {R}}}^n\) and the restriction of \(\ln {f}\) to it is concave. If f has finite positive integral then there exist constants \(A,B>0\) such that \(f(x)\leqslant Ae^{-B|x|}\) for all \(x\in {{\mathbb {R}}}^n\) (see [20, Lemma 2.2.1]). In particular, f has finite moments of all orders. We say that \(\mu \) is even if \(\mu (-B)=\mu (B)\) for every Borel subset B of \({{\mathbb {R}}}^n\) and that \(\mu \) is centered if

$$\begin{aligned} \int _{{\mathbb {R}}^n} \langle x, \xi \rangle d\mu (x) = \int _{\mathbb R^n} \langle x, \xi \rangle f_{\mu }(x) dx = 0 \end{aligned}$$

for all \(\xi \in S^{n-1}\). We shall use the fact that if \(\mu \) is a centered log-concave probability measure on \({{\mathbb {R}}}^n\) then

$$\begin{aligned} \Vert f_{\mu }\Vert _{\infty }\leqslant e^nf_{\mu }(0). \end{aligned}$$
(2.2)

This is a result of Fradelizi; for a proof see [20, Theorem 2.2.2]. Note that if K is a convex body in \({\mathbb {R}}^n\) then the Brunn–Minkowski inequality implies that the indicator function \({\textbf{1}}_{K} \) of K is the density of a log-concave measure, namely the Lebesgue measure restricted to K.

If \(\mu \) is a log-concave measure on \({{\mathbb {R}}}^n\) with density \(f_{\mu }\), we define the isotropic constant of \(\mu \) by

$$\begin{aligned} L_{\mu }:=\left( \frac{\sup _{x\in {{\mathbb {R}}}^n} f_{\mu } (x)}{\int _{{{\mathbb {R}}}^n}f_{\mu }(x)dx}\right) ^{\frac{1}{n}} [\det \text {Cov}(\mu )]^{\frac{1}{2n}}, \end{aligned}$$
(2.3)

where \(\text {Cov}(\mu )\) is the covariance matrix of \(\mu \) with entries

$$\begin{aligned} \text {Cov}(\mu )_{ij}:=\frac{\int _{{{\mathbb {R}}}^n}x_ix_j f_{\mu } (x)\,dx}{\int _{{{\mathbb {R}}}^n} f_{\mu } (x)\,dx}-\frac{\int _{{\mathbb R}^n}x_i f_{\mu } (x)\,dx}{\int _{{{\mathbb {R}}}^n} f_{\mu } (x)\,dx}\frac{\int _{{{\mathbb {R}}}^n}x_j f_{\mu } (x)\,dx}{\int _{{{\mathbb {R}}}^n} f_{\mu } (x)\,dx}. \end{aligned}$$

We say that a log-concave probability measure \(\mu \) on \({\mathbb R}^n\) is isotropic if it is centered and \(\text {Cov}(\mu )=I_n\), where \(I_n\) is the identity \(n\times n\) matrix. In this case, \(L_{\mu }=\Vert f_{\mu }\Vert _{\infty }^{1/n}\). For every \(\mu \) there exists an affine transformation T such that \(T_{*}\mu \) is isotropic, where \(T_{*}\mu \) is the push-forward of \(\mu \) defined by \(T_{*}\mu (A)=\mu (T^{-1}(A))\). Note that a convex body K of volume 1 is isotropic if and only if the log-concave probability measure with density \(L_K^n{{\textbf{1}}}_{K / L_K}\) is isotropic. The hyperplane conjecture asks if there exists an absolute constant \(C>0\) such that

$$\begin{aligned} L_n:= \max \{ L_{\mu }:\mu \ \hbox {is an isotropic log-concave probability measure on}\ {{\mathbb {R}}}^n\}\leqslant C \end{aligned}$$

for all \(n\geqslant 1\). Bourgain [22] established the upper bound \(L_n\leqslant c\root 4 \of {n}\ln n\); later, Klartag, in [23], improved this estimate to \(L_n\leqslant c\root 4 \of {n}\). In a breakthrough work, Chen [24] proved that for any \(\epsilon >0\) there exists \(n_0(\epsilon )\in {{\mathbb {N}}}\) such that \(L_n\leqslant n^{\epsilon }\) for every \(n\geqslant n_0(\epsilon )\). Subsequently, Klartag and Lehec [25] showed that \(L_n\leqslant c(\ln n)^4\), and very recently Klartag [8] achieved the best known bound \(L_n\leqslant c\sqrt{\ln n}\).

2.3. Centroid bodies. Let \(\mu \) be a log-concave probability measure on \({\mathbb {R}}^n\). For any \(t\geqslant 1\) we define the \(L_t\)-centroid body \(Z_t(\mu )\) of \(\mu \) as the centrally symmetric convex body whose support function is

$$\begin{aligned} h_{Z_t(\mu )}(y):=\left( \int _{{\mathbb {R}}^n} |\langle x,y\rangle |^t f_{\mu }(x)dx \right) ^{1/t},\qquad y\in {{\mathbb {R}}}^n. \end{aligned}$$

Note that \(Z_t(\mu )\) is always centrally symmetric, and \(Z_t(T_{*}\mu )=T(Z_t(\mu ))\) for every \(T\in GL(n)\) and \(t\geqslant 1\). Note also that a centered log-concave probability measure \(\mu \) is isotropic if and only if \(Z_2(\mu )=B_2^n\). The next result of Paouris (see [20, Theorem 5.1.17]) provides upper bounds for the volume of the \(L_t\)-centroid bodies of isotropic log-concave probability measures.

Theorem 2.1

If \(\mu \) is a centered log-concave probability measure on \(\mathbb R^n\), then for every \(2\leqslant t\leqslant n\) we have that

$$\begin{aligned} |Z_t(\mu )|^{1/n}\leqslant c\sqrt{t/n}[\det \textrm{Cov}(\mu )]^{\frac{1}{2n}}, \end{aligned}$$

where \(c>0\) is an absolute constant. In particular, if \(\mu \) is isotropic then \(|Z_t(\mu )|^{1/n}\leqslant c\sqrt{t/n}\) for all \(2\leqslant t\leqslant n\).

A variant of the \(L_t\)-centroid bodies of \(\mu \) is defined as follows. For every \(t\geqslant 1\) we consider the convex body \(Z_t^+(\mu )\) with support function

$$\begin{aligned} h_{Z_t^+(\mu )}(y)=\left( \int _{{{\mathbb {R}}}^n}\langle x,y\rangle _+^tf_{\mu }(x)dx\right) ^{1/t},\qquad y\in {{\mathbb {R}}}^n, \end{aligned}$$

where \(a_+=\max \{a,0\}\). When \(f_{\mu }\) is even, we have that \(Z_t^+(\mu )=2^{-1/t}Z_t(\mu )\). In any case, we easily verify that

$$\begin{aligned} Z_t^+(\mu )\subseteq Z_t(\mu ). \end{aligned}$$
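Both facts follow by direct computation: the inclusion is immediate from \(\langle x,y\rangle _+\leqslant |\langle x,y\rangle |\), while for even \(f_{\mu }\) the change of variables \(x\mapsto -x\) shows that

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\langle x,y\rangle _+^tf_{\mu }(x)dx=\frac{1}{2}\int _{{{\mathbb {R}}}^n}|\langle x,y\rangle |^tf_{\mu }(x)dx=\frac{1}{2}[h_{Z_t(\mu )}(y)]^t, \end{aligned}$$

so that \(h_{Z_t^+(\mu )}=2^{-1/t}h_{Z_t(\mu )}\).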

Moreover, if \(\mu \) is isotropic then \(Z_2^+(\mu )\supseteq cB_2^n\) for an absolute constant \(c>0\). One can also check that if \(1\leqslant t<s\) then

$$\begin{aligned} \left( \frac{4}{e}\right) ^{\frac{1}{t}-\frac{1}{s}}Z_t^+(\mu )\subseteq Z_s^+(\mu ) \subseteq c_1\left( \frac{4(e-1)}{e}\right) ^{\frac{1}{t}-\frac{1}{s}}\frac{s}{t}Z_t^+(\mu ). \end{aligned}$$

The right-hand side inequality gives

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\langle x,\xi \rangle _+^{2t}f_{\mu }(x)dx =[h_{Z_{2t}^+(\mu )}(\xi )]^{2t} \leqslant C^{2t}[h_{Z_{t}^+(\mu )}(\xi )]^{2t} =C^{2t}\left( \int _{{{\mathbb {R}}}^n}\langle x,\xi \rangle _+^tf_{\mu }(x)dx\right) ^2, \end{aligned}$$
(2.4)

for all \(\xi \in S^{n-1}\), where \(C>1\) is an absolute constant. For a proof of all these claims see [26], where the family of bodies \({\tilde{Z}}_t^+(\mu )=2^{1/t}Z_t^+(\mu )\) is considered. We have made the necessary adjustments in the inclusions that we use.

2.4. The bodies \(B_t(\mu )\). Let \(\mu \) be a probability measure on \({{\mathbb {R}}}^n\). We define

$$\begin{aligned} M_{\mu }(v):= \int _{{{\mathbb {R}}}^n} e^{\langle v,x\rangle }d\mu (x)=\exp (\Lambda _{\mu }(v)) \end{aligned}$$

where

$$\begin{aligned} \Lambda _{\mu }(v)=\ln \left( \int _{{{\mathbb {R}}}^n} e^{\langle v,x\rangle }d\mu (x)\right) \end{aligned}$$

is the logarithmic Laplace transform of \(\mu \). We also define

$$\begin{aligned} \Lambda _{\mu }^{*}(v):= {{{\mathcal {L}}}}(\Lambda _{\mu })(v) = \sup _{u\in {{\mathbb {R}}}^n} \left\{ \langle v, u\rangle - \ln \int _{{{\mathbb {R}}}^n} e^{\langle u,x\rangle }d\mu (x)\right\} , \end{aligned}$$

where, given a convex function \(g:{{\mathbb {R}}}^n\rightarrow (-\infty ,\infty ]\), the Legendre transform \({{{\mathcal {L}}}}(g)\) of g is defined by

$$\begin{aligned} {{{\mathcal {L}}}}(g)(x):=\sup _{y\in {{\mathbb {R}}}^n}\{ \langle x,y\rangle - g(y)\}. \end{aligned}$$

The function \(\Lambda ^{*}_{\mu }\) is called the Cramér transform of \(\mu \) and plays a crucial role in the theory of large deviations. For every \(t\geqslant 1\) we define

$$\begin{aligned} M_t(\mu ):= \left\{ v\in {{\mathbb {R}}}^n: \int _{{{\mathbb {R}}}^n} | \langle v, x\rangle |^td\mu (x)\leqslant 1\right\} . \end{aligned}$$

Note that

$$\begin{aligned} Z_t(\mu ):= (M_t(\mu ))^{\circ }= \left\{ x\in {{\mathbb {R}}}^n: | \langle v, x\rangle |^t \leqslant \int _{{\mathbb R}^n} | \langle v, y\rangle |^td\mu (y) \;\;\hbox {for all}\; v\in {{\mathbb {R}}}^n\right\} . \end{aligned}$$

For every \(t>0\) we also set

$$\begin{aligned} B_t(\mu ):=\{v\in {{\mathbb {R}}}^n:\Lambda ^{*}_{\mu }(v)\leqslant t\}. \end{aligned}$$

We say that a measure \(\mu \) on \({{\mathbb {R}}}^n\) is \(\alpha \)-regular if for any \(s\geqslant t\geqslant 2\) and every \(v\in {{\mathbb {R}}}^n\),

$$\begin{aligned} \left( \int _{{{\mathbb {R}}}^n} |\langle v,x\rangle |^sd\mu (x)\right) ^{1/s} \leqslant \alpha \frac{s}{t}\left( \int _{{{\mathbb {R}}}^n} |\langle v,x\rangle |^td\mu (x)\right) ^{1/t}. \end{aligned}$$

For all \(s\geqslant t\) we have \(M_s(\mu )\subseteq M_t(\mu )\) and \(Z_t(\mu ) \subseteq Z_s(\mu )\). If the measure \(\mu \) is \(\alpha \)-regular, then \(M_t(\mu ) \subseteq \alpha \frac{s}{t} M_s(\mu )\) and \(Z_s(\mu ) \subseteq \alpha \frac{s}{t}Z_t(\mu )\) for all \(s\geqslant t\geqslant 2\). Moreover, for every centered probability measure \(\mu \) we have \(\Lambda ^{*}_{\mu }(0)=0\) by Jensen’s inequality, and the convexity of \(\Lambda ^{*}_{\mu }\) implies that \(B_t(\mu ) \subseteq B_s(\mu ) \subseteq \frac{s}{t}B_t(\mu )\) for all \(s\geqslant t>0\).
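The last chain of inclusions is a direct consequence of convexity: if \(v\in B_s(\mu )\) then, since \(\Lambda ^{*}_{\mu }(0)=0\),

$$\begin{aligned} \Lambda ^{*}_{\mu }\left( \frac{t}{s}v\right) =\Lambda ^{*}_{\mu }\left( \frac{t}{s}v+\Big (1-\frac{t}{s}\Big )0\right) \leqslant \frac{t}{s}\Lambda ^{*}_{\mu }(v)\leqslant t, \end{aligned}$$

which shows that \(\frac{t}{s}B_s(\mu )\subseteq B_t(\mu )\), i.e. \(B_s(\mu )\subseteq \frac{s}{t}B_t(\mu )\).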

Recall that, by Borell’s lemma, every log-concave probability measure is c-regular (see [20, Theorem 2.4.6] for a proof).

Proposition 2.2

Every log-concave probability measure is c-regular, where \(c\geqslant 1\) is an absolute constant.

The next proposition compares \(B_t(\mu )\) with \(Z_t(\mu )\) when \(\mu \) is \(\alpha \)-regular.

Proposition 2.3

If \(\mu \) is \(\alpha \)-regular for some \(\alpha \geqslant 1\), then for any \(t\geqslant 2\) we have

$$\begin{aligned}B_t(\mu ) \subseteq 4e\alpha Z_t(\mu ).\end{aligned}$$

Proof

We first check that if \(u\in M_t(\mu )\) then

$$\begin{aligned} \Lambda _{\mu }\left( \frac{tu}{2e\alpha } \right) \leqslant t. \end{aligned}$$

We fix \(u\in M_t(\mu )\) and set \({\tilde{u}}:=\frac{tu}{2e\alpha }\). Then,

$$\begin{aligned} \left( \int _{{{\mathbb {R}}}^n} |\langle {\tilde{u}},x\rangle |^kd\mu (x)\right) ^{1/k}= \frac{t}{2e\alpha }\left( \int _{{{\mathbb {R}}}^n} |\langle u,x\rangle |^kd\mu (x)\right) ^{1/k}, \end{aligned}$$

which is bounded by \(\frac{t}{2e\alpha }\) if \(k\leqslant t\) and by \(\frac{k}{2e}\) if \(k>t\). It follows that

$$\begin{aligned} \int _{{{\mathbb {R}}}^n} e^{\langle {\tilde{u}},x\rangle }d\mu (x)&\leqslant \int _{{{\mathbb {R}}}^n} e^{|\langle {\tilde{u}},x\rangle |}d\mu (x) = \sum _{k=0}^{\infty }\frac{1}{k!}\int _{{{\mathbb {R}}}^n} |\langle {\tilde{u}}, x\rangle |^kd\mu (x)\\&\leqslant \sum _{k\leqslant t}\frac{1}{k!} \left| \frac{t}{2e\alpha }\right| ^k + \sum _{k>t}\frac{1}{k!}\left| \frac{k}{2e}\right| ^k \leqslant e^{\frac{t}{2e\alpha }}+1\leqslant e^t \end{aligned}$$

and the claim follows.
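In the last line the second sum was estimated using \(k!\geqslant (k/e)^k\), which gives

$$\begin{aligned} \sum _{k>t}\frac{1}{k!}\left( \frac{k}{2e}\right) ^k\leqslant \sum _{k>t}\left( \frac{e}{k}\right) ^k\left( \frac{k}{2e}\right) ^k=\sum _{k>t}2^{-k}\leqslant 1. \end{aligned}$$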

Now, let \(v\notin 4e\alpha Z_t(\mu )\). We can find \(u\in M_t(\mu )\) such that \(\langle v,u\rangle > 4e\alpha \) and then

$$\begin{aligned} \Lambda ^{*}_{\mu }(v) \geqslant \Big \langle v, \frac{tu}{2e\alpha }\Big \rangle -\Lambda _{\mu }\left( \frac{tu}{2e\alpha }\right) > \frac{t}{2e\alpha }4e\alpha -t=t. \end{aligned}$$

Therefore, \(v\notin B_t(\mu )\). \(\square \)

By Proposition 2.2, we have that Proposition 2.3 holds true (with an absolute constant in place of \(4e\alpha \)) for every log-concave probability measure.

2.5. Ball’s bodies \(K_t(\mu )\). If \(\mu \) is a log-concave probability measure on \({\mathbb {R}}^n\) then, for every \(t>0\), we define

$$\begin{aligned} K_t(\mu ):=K_t(f_\mu )=\left\{ x\in {{\mathbb {R}}}^n: \int _0^\infty r^{t-1}f_\mu (rx)\, dr\geqslant \frac{f_{\mu }(0)}{t} \right\} . \end{aligned}$$

From the definition it follows that the radial function of \(K_t(\mu )\) is given by

$$\begin{aligned} \varrho _{K_t(\mu )}(x)=\left( \frac{1}{f_{\mu }(0)}\int _0^{\infty }tr^{t-1}f_{\mu }(rx)\,dr\right) ^{1/t}\end{aligned}$$
(2.5)

for \(x\ne 0\). The bodies \(K_t(\mu )\) were introduced by K. Ball, who also established their convexity. If \(\mu \) is also centered then, for every \(0 < t\leqslant s\),

$$\begin{aligned} \frac{\Gamma (t+1)^{\frac{1}{t}}}{\Gamma (s+1)^{\frac{1}{s}}} K_s(\mu )\subseteq K_{t}(\mu )\subseteq e^{\frac{n}{t}-\frac{n}{s}}K_{s}(\mu ). \end{aligned}$$
(2.6)

A proof is given in [20, Proposition 2.5.7]. It is easily checked that

$$\begin{aligned} |K_n(\mu )|\,f_{\mu }(0)=\int _{{\mathbb {R}}^n} f_{\mu }(x)dx=1 \end{aligned}$$
(2.7)

(see e.g. [20, Lemma 2.5.6]) and then we can use the inclusions (2.6) in order to estimate the volume of \(K_t(\mu )\). For every \(t>0\) we have

$$\begin{aligned} e^{-1}\leqslant f_{\mu }(0)^{\frac{1}{n}+\frac{1}{t}}|K_{n+t}(\mu )|^{\frac{1}{n}+\frac{1}{t}}\leqslant e\frac{n+t}{n}. \end{aligned}$$
(2.8)

We are mainly interested in the convex body \(K_{n+1}(\mu )\). We shall use the fact that \(K_{n+1}(\mu )\) is centered (see [20, Proposition 2.5.3 (v)]) and that

$$\begin{aligned} f_{\mu }(0)|K_{n+1}(\mu )|\approx 1. \end{aligned}$$
(2.9)

The last estimate follows immediately from (2.7) and (2.8).
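Indeed, applying (2.8) with \(t=1\), for which \(\frac{1}{n}+\frac{1}{t}=\frac{n+1}{n}\), we get

$$\begin{aligned} e^{-1}\leqslant \big (f_{\mu }(0)|K_{n+1}(\mu )|\big )^{\frac{n+1}{n}}\leqslant e\,\frac{n+1}{n}, \end{aligned}$$

and raising to the power \(\frac{n}{n+1}\) gives \(e^{-1}\leqslant f_{\mu }(0)|K_{n+1}(\mu )|\leqslant 2e\).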

3 Upper bound for the expected value of the half-space depth

Let \(\mu \) and \(\nu \) be two log-concave probability measures on \({{\mathbb {R}}}^n\) with the same barycenter. If \(T:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}^n\) is an invertible affine transformation and \(T_{*}\mu \) is the push-forward of \(\mu \) defined by \(T_{*}\mu (A)=\mu (T^{-1}(A))\) then we observe that \(\varphi _{T_{*}\mu }(x)=\varphi _{\mu }(T^{-1}(x))\) for all \(x\in {{\mathbb {R}}}^n\), and hence

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\varphi _{T_{*}\mu }(x)dT_{*}\nu (x)=\int _{{{\mathbb {R}}}^n}\varphi _{\mu }(T^{-1}(x))dT_{*}\nu (x) =\int _{{{\mathbb {R}}}^n}\varphi _{\mu }(x)d\nu (x). \end{aligned}$$

Therefore, Theorem 1.1 is a consequence of Theorem 1.2. Starting with a log-concave probability measure \(\mu \) on \({{\mathbb {R}}}^n\), we consider an affine transformation T such that \(T_{*}\mu \) is isotropic and then apply Theorem 1.2 to the measures \(T_{*}\mu \) and \(\nu =T_{*}\mu \).

Proof of Theorem  1.2

Consider two isotropic log-concave probability measures \(\mu , \nu \) on \({{\mathbb {R}}}^n\). We will show that

$$\begin{aligned} \int _{{\mathbb {R}}^n}\varphi _{\mu }(x)\,d\nu (x) \leqslant e^{-cn/L_{\nu }^2} \end{aligned}$$

for some absolute constant \(c>0\). We start with the observation that for any \(v\in {{\mathbb {R}}}^n\) the half-space \(\{z:\langle z,v \rangle \geqslant \langle x,v\rangle \}\) is in \({{{\mathcal {H}}}}(x)\), therefore

$$\begin{aligned} \varphi _{\mu } (x) \leqslant \mu (\{z:\langle z,v \rangle \geqslant \langle x,v\rangle \} ) \leqslant e^{-\langle x,v\rangle }{{\mathbb {E}}}_{\mu }\big (e^{\langle z,v\rangle }\big )=\exp \big (-[\langle x,v\rangle -\Lambda _{\mu }(v)]\big ), \end{aligned}$$

and taking the infimum over all \(v\in {{\mathbb {R}}}^n\) we see that

$$\begin{aligned} \varphi _{\mu } (x)\leqslant \exp (-\Lambda _{\mu }^{*}(x)). \end{aligned}$$

Then we write

$$\begin{aligned} \int _{{\mathbb {R}}^n}\varphi _{\mu } (x)\,d\nu (x)&\leqslant \int _{{\mathbb {R}}^n}e^{-\Lambda _{\mu }^{*} (x)}f_{\nu }(x)\, dx=\int _{{\mathbb {R}}^n}\left( \int _{\Lambda _{\mu }^{*} (x)}^{\infty }e^{-t}dt\,\right) f_{\nu }(x)dx\\&=\int _0^{\infty }e^{-t}\int _{{\mathbb {R}}^n}{\textbf{1}}_{B_t(\mu )}(x)f_{\nu }(x)dx\,dt=\int _0^{\infty }e^{-t}\nu (B_t(\mu ))\,dt. \end{aligned}$$

Fix \(b\in (2/n,1/2]\), which will be chosen appropriately. Since \(\nu (B_t(\mu ))\leqslant 1\) and also \(\nu (B_t(\mu ))\leqslant \Vert f_{\nu }\Vert _{\infty }|B_t(\mu )|\) for all \(t>0\), we may write

$$\begin{aligned} \int _{{\mathbb {R}}^n}\varphi _{\mu }(x)\,d\nu (x)&\leqslant \int _{bn}^{\infty }e^{-t}\nu (B_t(\mu ))dt +\Vert f_{\nu }\Vert _{\infty }\int _0^{bn}e^{-t}|B_t(\mu )|\,dt\\&\leqslant \int _{bn}^{\infty }e^{-t}\,dt+L_{\nu }^n\int _0^2e^{-t}|B_t(\mu )|\,dt+L_{\nu }^n\int _2^{bn}e^{-t}|B_t(\mu )|\,dt \\&\leqslant e^{-bn}+L_{\nu }^n|B_2(\mu )|+L_{\nu }^n\int _2^{bn}e^{-t}|B_t(\mu )|\,dt. \end{aligned}$$

Applying Proposition 2.3 and Theorem 2.1 we get

$$\begin{aligned} |B_t(\mu )|^{1/n}\leqslant c_1|Z_t(\mu )|^{1/n}\leqslant c_2\sqrt{t/n} \end{aligned}$$

for all \(2\leqslant t\leqslant n\), where \(c_1,c_2>0\) are absolute constants. It is also known that \(L_{\nu }\geqslant c_3\) where \(c_3>0\) is an absolute constant (see [20, Proposition 2.3.12] for a proof). So, we may assume that \(c_2L_{\nu }\geqslant \sqrt{2}\). Choosing \(b_0:=1/(c_2L_{\nu })^2\leqslant 1/2\) we write

$$\begin{aligned} L_{\nu }^n\int _2^{b_0n}e^{-t}|B_t(\mu )|\,dt\leqslant c_2^nL_{\nu }^n\int _2^{b_0n}(t/n)^{n/2}e^{-t}dt =(c_2L_{\nu })^n\int _2^{b_0n}(t/n)^{n/2}e^{-t}dt, \end{aligned}$$

and since \(b_0n\leqslant n/2\) and the function \(t\mapsto t^{n/2}e^{-t}\) is increasing on [0, n/2], we get

$$\begin{aligned} (c_2L_{\nu })^n\int _2^{b_0n}(t/n)^{n/2}e^{-t}\,dt\leqslant (b_0n-2)\cdot (c_2L_{\nu })^nb_0^{n/2}e^{-b_0n}=(b_0n-2)e^{-b_0n}. \end{aligned}$$

Moreover, \(|B_2(\mu )|^{1/n}\leqslant c_2\sqrt{2/n}\), therefore

$$\begin{aligned} L_{\nu }^n|B_2(\mu )|\leqslant (c_4L_{\nu }^2/n)^{n/2}\leqslant e^{-b_0n}, \end{aligned}$$

because \(c_4L_{\nu }^2/n\leqslant e^{-2}\) if \(n\geqslant n_0\). Combining the above we get

$$\begin{aligned} \int _{{\mathbb {R}}^n}\varphi _{\mu } (x)\,d\nu (x)\leqslant e^{-b_0n}+e^{-b_0n}+(b_0n-2)e^{-b_0n}, \end{aligned}$$

and hence

$$\begin{aligned} \int _{{\mathbb {R}}^n}\varphi _{\mu }(x)\,d\nu (x) \leqslant n\exp \left( -n/(c_2L_{\nu })^2\right) \end{aligned}$$

which implies the result. \(\square \)

Remark 3.1

In the introduction we have already mentioned that the assumption that both \(\mu \) and \(\nu \) are isotropic is not necessary. One may consider different situations, where \(\mu \) and \(\nu \) are centered and \(\Vert f_{\nu }\Vert _{\infty }\) is comparable with \(\Vert f_{\mu }\Vert _{\infty }\). For example, the next result can be obtained with the ideas that were used in the proof of Theorem 1.2.

Theorem 3.2

Let \(\mu \) and \(\nu \) be two centered log-concave probability measures on \({{\mathbb {R}}}^n\), \(n\geqslant n_0\), such that \(\Vert f_{\mu }\Vert _{\infty }=\Vert f_{\nu }\Vert _{\infty }\). Then,

$$\begin{aligned} {{\mathbb {E}}}_{\nu }(\varphi _{\mu }):=\int _{{\mathbb {R}}^n}\varphi _{\mu }(x)\,d\nu (x)\leqslant \exp \left( -cn/L_{\mu }^2\right) , \end{aligned}$$

where \(c>0\), \(n_0\in {{\mathbb {N}}}\) are absolute constants.

The proof of Theorem 3.2 is quite similar to the one of Theorem 1.2. We fix \(b\in (2/n,1/2]\) and write

$$\begin{aligned} \int _{{\mathbb {R}}^n}\varphi _{\mu } (x)\,d\nu (x)&\leqslant \int _{{\mathbb {R}}^n}e^{-\Lambda _{\mu }^{*} (x)}f_{\nu }(x)\, dx =\int _0^{\infty }e^{-t}\nu (B_t(\mu ))\,dt\\&\leqslant e^{-bn}+\Vert f_{\nu }\Vert _{\infty }|B_2(\mu )|+\Vert f_{\nu }\Vert _{\infty }\int _2^{bn}e^{-t}|B_t(\mu )|\,dt. \end{aligned}$$

Then, we use the upper bound

$$\begin{aligned} |B_t(\mu )|^{1/n}\leqslant c_1|Z_t(\mu )|^{1/n}\leqslant c_2\sqrt{t/n}[\det \text {Cov}(\mu )]^{\frac{1}{2n}}, \end{aligned}$$

observe that

$$\begin{aligned} \Vert f_{\nu }\Vert _{\infty }[\det \text {Cov}(\mu )]^{\frac{1}{2}}=\Vert f_{\mu }\Vert _{\infty }[\det \text {Cov}(\mu )]^{\frac{1}{2}}=L_{\mu }^n \end{aligned}$$

and continue as in the proof of Theorem 1.2.

4 Lower bound for the expected value of the half-space depth

In this section we show that the exponential estimate of Theorem 1.1 is sharp. As explained in the introduction, for the reader’s convenience we consider first the simpler case where \(\mu \) is the uniform measure on a convex body K in \({{\mathbb {R}}}^n\) and then present the more technical tools and computations that are required for the case of an arbitrary log-concave probability measure \(\mu \) on \({{\mathbb {R}}}^n\).

4.1 Uniform measure on a convex body

The next proposition provides an exponential lower bound for \({{\mathbb {E}}}_{\mu _K}(\varphi _{\mu _K})\), where \(\mu _K\) is the uniform measure on K.

Proposition 4.1

Let K be a convex body of volume 1 in \({{\mathbb {R}}}^n\). Then,

$$\begin{aligned} \int _K\varphi _{\mu _K}(x)dx\geqslant e^{-cn}, \end{aligned}$$

where \(c>0\) is an absolute constant.

Proof

By translation invariance we may assume that the barycenter of K is at the origin. Let \(x\in \frac{1}{2}K\). We will show that \(\varphi _{\mu _K}(x)\geqslant \frac{1}{e^2n}\cdot \frac{1}{2^n}\). It suffices to show that

$$\begin{aligned} \inf \,|\{z\in K:\langle z,\xi \rangle \geqslant \langle x,\xi \rangle \}|\geqslant \frac{1}{e^2n}\cdot \frac{1}{2^n},\end{aligned}$$
(4.1)

where the infimum is over all \(\xi \in S^{n-1}\), because by the definition of \(\varphi _{\mu _K}(x)\) we only have to check the half-spaces \(H\in {{{\mathcal {H}}}}(x)\) for which x is a boundary point. Moreover, we may consider only those \(\xi \in S^{n-1}\) that satisfy \(\langle x,\xi \rangle \geqslant 0\), because if \(\langle x,\xi \rangle <0\) then

$$\begin{aligned} |\{z\in K:\langle z,\xi \rangle \geqslant \langle x,\xi \rangle \}|\geqslant |\{z\in K:\langle z,\xi \rangle \geqslant 0\}|\geqslant 1/e \end{aligned}$$

by Grünbaum’s lemma (see [20, Lemma 2.2.6]). Fix \(\xi \in S^{n-1}\) with \(\langle x,\xi \rangle \geqslant 0\) and set \(m=h_K(\xi )=\max \{\langle z,\xi \rangle :z\in K\}\). Since \(\langle x,\xi \rangle \leqslant m/2\), it is enough to show that

$$\begin{aligned} |\{z\in K:\langle z,\xi \rangle \geqslant m/2\}|\geqslant \frac{1}{e^2n}\cdot \frac{1}{2^n}. \end{aligned}$$
(4.2)

Consider the function \(g(t)=|K(\xi ,t)|_{n-1}\), where \(K(\xi ,t)=\{z\in K:\langle z,\xi \rangle =t\}\), \(t\in [0,m]\) and \(|\cdot |_{n-1}\) denotes \((n-1)\)-dimensional volume. The Brunn-Minkowski inequality implies that \(g^{\frac{1}{n-1}}\) is concave. Therefore, for every \(r\in [0,m]\) we have that

$$\begin{aligned} g(r)\geqslant \left( 1-\frac{r}{m}\right) ^{n-1}g(0). \end{aligned}$$
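Indeed, writing \(r=\left( 1-\frac{r}{m}\right) \cdot 0+\frac{r}{m}\cdot m\) and using the concavity of \(g^{\frac{1}{n-1}}\) on [0, m] together with \(g(m)\geqslant 0\), we see that

$$\begin{aligned} g(r)^{\frac{1}{n-1}}\geqslant \left( 1-\frac{r}{m}\right) g(0)^{\frac{1}{n-1}}+\frac{r}{m}\,g(m)^{\frac{1}{n-1}}\geqslant \left( 1-\frac{r}{m}\right) g(0)^{\frac{1}{n-1}}. \end{aligned}$$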

We write

$$\begin{aligned} |\{z\in K:\langle z,\xi \rangle \geqslant m/2\}|&=\int _{m/2}^mg(r)\,dr\geqslant g(0)\int _{m/2}^m\left( 1-\frac{r}{m}\right) ^{n-1}dr\\&= g(0)m\int _{1/2}^1(1-s)^{n-1}ds=\frac{1}{n2^n}g(0)m. \end{aligned}$$

Since K is centered, we know that \(\Vert g\Vert _{\infty }\leqslant e\,|K\cap \xi ^{\perp }|_{n-1}=eg(0)\) from (2.1). Then, using also Grünbaum’s lemma, we see that

$$\begin{aligned} \frac{1}{e}\leqslant \int _0^mg(r)\,dr\leqslant \Vert g\Vert _{\infty }m\leqslant eg(0)m, \end{aligned}$$

and (4.2) follows. It is now clear that

$$\begin{aligned} \int _K\varphi _{\mu _K}(x)dx\geqslant \int _{\frac{1}{2}K}\varphi _{\mu _K}(x)dx\geqslant \Big |\frac{1}{2}K\Big |\cdot \frac{1}{e^2n}\cdot \frac{1}{2^n}= \frac{1}{e^2n}\cdot \frac{1}{4^n}\geqslant e^{-cn} \end{aligned}$$

for some absolute constant \(c>0\). \(\square \)

4.2 Log-concave probability measures

Next, we assume that \(\mu \) is a log-concave probability measure on \({{\mathbb {R}}}^n\). Our aim is to prove the next theorem.

Theorem 4.2

Let \(\mu \) be a log-concave probability measure on \({{\mathbb {R}}}^n\). Then,

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\varphi _{\mu }(x)d\mu (x)\geqslant e^{-cn}, \end{aligned}$$

where \(c>0\) is an absolute constant.

By the affine invariance of \({{\mathbb {E}}}_{\mu }(\varphi _{\mu })\) we may assume that \(\mu \) is centered. The proof is based on a number of observations. The first one is a consequence of the Paley-Zygmund inequality; we just adapt here the proof of [20, Lemma 11.3.3] to give a lower bound for \(\varphi _{\mu }(x)\) when \(x\in \delta Z_t^+(\mu )\) for some \(\delta \in (0,1)\).

Lemma 4.3

Let \(t\geqslant 1\) and \(\delta \in (0,1)\). For every \(x\in \delta Z_t^+(\mu )\) one has

$$\begin{aligned} \varphi _{\mu }(x)\geqslant \frac{(1-\delta ^t)^2}{C_1^t}, \end{aligned}$$

where \(C_1>1\) is an absolute constant.

Proof

Let \(x\in \delta Z_t^+(\mu )\). As in the proof of Proposition 4.1, using Grünbaum’s lemma we see that it is enough to show that

$$\begin{aligned} \inf \mu (\{z\in {{\mathbb {R}}}^n:\langle z,\xi \rangle \geqslant \langle x,\xi \rangle \})\geqslant \frac{(1-\delta ^t)^2}{C_1^t}, \end{aligned}$$
(4.3)

where the infimum is over all \(\xi \in S^{n-1}\) with \(\langle x,\xi \rangle \geqslant 0\).

Since \(x\in \delta Z_t^+(\mu )\), we have \(\langle x,\xi \rangle \leqslant \delta h_{Z_t^+(\mu )}(\xi )\) for any such \(\xi \in S^{n-1}\), so it is enough to show that

$$\begin{aligned} \mu (\{z\in {{\mathbb {R}}}^n:\langle z,\xi \rangle \geqslant \delta h_{Z_t^+(\mu )}(\xi )\})\geqslant \frac{(1-\delta ^t)^2}{C_1^t}. \end{aligned}$$
(4.4)

We apply the Paley-Zygmund inequality

$$\begin{aligned} \mu (\{z:g(z)\geqslant \delta ^t{{\mathbb {E}}}_{\mu }(g)\})\geqslant (1-\delta ^t)^2\frac{[{{\mathbb {E}}}_{\mu }(g)]^2}{{{\mathbb {E}}}_{\mu }(g^2)} \end{aligned}$$

for the function \(g(z)=\langle z,\xi \rangle _+^t\). From (2.4) we see that

$$\begin{aligned} {{\mathbb {E}}}_{\mu }(g^2)\leqslant C_1^t\,[{{\mathbb {E}}}_{\mu }(g)]^2 \end{aligned}$$

for some absolute constant \(C_1>0\), and the lemma follows. \(\square \)

Definition 4.4

For every \(t\geqslant 1\) we consider the convex set

$$\begin{aligned} R_t(\mu )=\{x\in {{\mathbb {R}}}^n:f_{\mu }(x)\geqslant e^{-t}f_{\mu }(0)\}. \end{aligned}$$

The convexity of \(R_t(\mu )\) is an immediate consequence of the log-concavity of \(f_{\mu }\). Note that \(R_t(\mu )\) is bounded and \(0\in \textrm{int}(R_t(\mu ))\).

Lemma 4.5

For every \(t\geqslant 5n\) we have \(R_t(\mu )\supseteq c_0K_{n+1}(\mu )\), where \(c_0>0\) is an absolute constant.

Proof

Let \(t\geqslant 5n\). Given any \(\xi \in S^{n-1}\) consider the log-concave function \(h:[0,\infty )\rightarrow [0,\infty )\) defined by \(h(r)=f_{\mu }(r\xi )\). From [27, Lemma 5.2] we know that

$$\begin{aligned} \int _0^{\varrho _{R_t(\mu )}(\xi )}r^{n-1}h(r)dr\geqslant (1-e^{-t/8})\int _0^{\infty }r^{n-1}h(r)dr. \end{aligned}$$

By the definition of \(K_n(\mu )\) we have

$$\begin{aligned} \int _0^{\infty }r^{n-1}h(r)dr=\frac{f_{\mu }(0)}{n}[\varrho _{K_n(\mu )}(\xi )]^n. \end{aligned}$$

On the other hand,

$$\begin{aligned} \int _0^{\varrho _{R_t(\mu )}(\xi )}r^{n-1}h(r)dr\leqslant \Vert f_{\mu }\Vert _{\infty }\int _0^{\varrho _{R_t(\mu )}(\xi )}r^{n-1}dr =\frac{\Vert f_{\mu }\Vert _{\infty }}{n}[\varrho _{R_t(\mu )}(\xi )]^n. \end{aligned}$$

Using also the fact that \(\Vert f_{\mu }\Vert _{\infty }\leqslant e^nf_{\mu }(0)\) from (2.2), we get

$$\begin{aligned} e^n[\varrho _{R_t(\mu )}(\xi )]^n\geqslant (1-e^{-t/8})[\varrho _{K_n(\mu )}(\xi )]^n. \end{aligned}$$

This shows that \(R_t(\mu )\supseteq c_0K_n(\mu )\), where \(c_0>0\) is an absolute constant. From (2.6) we know that \(K_n(\mu )\approx K_{n+1}(\mu )\), and this completes the proof.\(\square \)

Our final lemma compares \(Z_t^+(\mu )\) with \(K_{n+1}(\mu )\) when \(t\geqslant 5n\).

Lemma 4.6

For every \(t\geqslant 5n\) we have that \(Z_t^+(\mu )\supseteq c_0^{\prime }K_{n+1}(\mu )\), where \(c_0^{\prime }>0\) is an absolute constant.

Proof

From Lemma 4.5 we know that \(c_0K_{n+1}(\mu )\subseteq R_t(\mu )\) for all \(t\geqslant 5n\), where \(c_0>0\) is an absolute constant. Let \(\xi \in S^{n-1}\) and set \(m:=h_{c_0K_{n+1}(\mu )}(\xi )=c_0h_{K_{n+1}(\mu )}(\xi )\). Define

$$\begin{aligned} A_{\xi }=c_0K_{n+1}(\mu )\cap \{ x:\langle x,\xi \rangle \geqslant m/2\}. \end{aligned}$$

Since \(K_{n+1}(\mu )\) is centered, the proof of Proposition 4.1 shows that

$$\begin{aligned} |A_{\xi }|\geqslant \frac{|c_0K_{n+1}(\mu )|}{e^2n\cdot 2^n}\geqslant \frac{|c_0K_{n+1}(\mu )|}{C^n} \end{aligned}$$

for some absolute constant \(C>c_0\). Moreover, if \(x\in A_{\xi }\) then \(x\in R_t(\mu )\) and hence \(f_{\mu }(x)\geqslant e^{-t}f_{\mu }(0)\). We write

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\langle x,\xi \rangle _+^td\mu (x)&\geqslant \int _{A_{\xi }}\langle x,\xi \rangle _+^td\mu (x)\\&\geqslant \left( \frac{m}{2}\right) ^te^{-t}f_{\mu }(0)|A_{\xi }|\geqslant \left( \frac{m}{2e}\right) ^t\left( \frac{c_0}{C}\right) ^nf_{\mu }(0)|K_{n+1}(\mu )|. \end{aligned}$$

Using also the fact that \((c_0/C)^n\geqslant (c_0/C)^t\) because \(t\geqslant 5n\), we get

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\langle x,\xi \rangle _+^td\mu (x) \geqslant (c_1m)^tf_{\mu }(0)|K_{n+1}(\mu )|, \end{aligned}$$

where \(c_1>0\) is an absolute constant. Finally, \(f_{\mu }(0)|K_{n+1}(\mu )|\approx 1\) by (2.9), which implies that

$$\begin{aligned} h_{Z_t^+(\mu )}(\xi )\geqslant c_2m=c_0^{\prime }h_{K_{n+1}(\mu )}(\xi ), \end{aligned}$$

where \(c_0^{\prime }=c_2c_0\), and the lemma is proved. \(\square \)

Proof of Theorem 4.2

Combining Lemmas 4.5 and 4.6 we see that

$$\begin{aligned} R_{5n}(\mu )\cap Z_{5n}^+(\mu )\supseteq c_1K_{n+1}(\mu ) \end{aligned}$$

for some absolute constant \(c_1>0\). We apply Lemma 4.3 with \(t=5n\) and \(\delta =\frac{1}{2}\). For every \(x\in \frac{1}{2}Z_{5n}^+(\mu )\) we have

$$\begin{aligned} \varphi _{\mu }(x)\geqslant C_1^{-n} \end{aligned}$$

for some absolute constant \(C_1>1\). It follows that

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\varphi _{\mu }(x)\,d\mu (x)\geqslant C_1^{-n}\mu \left( \tfrac{1}{2}Z_{5n}^+(\mu )\right) . \end{aligned}$$

Then, by Lemma 4.6 we have \(\frac{1}{2}Z_{5n}^+(\mu )\supseteq \frac{c_1}{2}K_{n+1}(\mu )\). Since \(\frac{c_1}{2}K_{n+1}(\mu )\subseteq R_{5n}(\mu )\), we know that \(f_{\mu }(x)\geqslant e^{-5n}f_{\mu }(0)\) for all \(x\in \frac{c_1}{2}K_{n+1}(\mu )\). Using also (2.9), we get

$$\begin{aligned} \mu \left( \tfrac{1}{2}Z_{5n}^+(\mu )\right)&\geqslant \mu \left( \frac{c_1}{2}K_{n+1}(\mu )\right) =\int _{\frac{c_1}{2}K_{n+1}(\mu )}f_{\mu }(x)\,dx\geqslant e^{-5n}f_{\mu }(0)\left| \frac{c_1}{2}K_{n+1}(\mu )\right| \\&= e^{-5n}(c_1/2)^nf_{\mu }(0)|K_{n+1}(\mu )|\geqslant e^{-5n}c_2^n. \end{aligned}$$

Combining the above we conclude that

$$\begin{aligned} \int _{{{\mathbb {R}}}^n}\varphi _{\mu }(x)\,d\mu (x)\geqslant C_1^{-n}e^{-5n}c_2^n\geqslant e^{-cn}, \end{aligned}$$

for some absolute constant \(c>0\). \(\square \)

5 Bounds for the expected measure of random polytopes

Let \(\mu \) be a log-concave probability measure on \({{\mathbb {R}}}^n\). Let \(X_1,X_2,\ldots \) be independent random points in \({\mathbb R}^n\) distributed according to \(\mu \) and for any \(N>n\) consider the random polytope \(K_N=\textrm{conv}\{X_1,\ldots ,X_N\}\). Given a second log-concave probability measure \(\nu \) on \({{\mathbb {R}}}^n\) with the same barycenter as \(\mu \), consider the expectation \({\mathbb E}_{\mu ^N}[\nu (K_N)]\) of the \(\nu \)-measure of \(K_N\). Note that if \(T:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}^n\) is an invertible affine transformation and \(T_{*}\mu \) is the push-forward of \(\mu \) defined by \(T_{*}\mu (A)=\mu (T^{-1}(A))\) then

$$\begin{aligned} {{\mathbb {E}}}_{(T_{*}\mu )^N}[(T_{*}\nu )(K_N)]={{\mathbb {E}}}_{\mu ^N}[\nu (K_N)]. \end{aligned}$$
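This identity holds because convex hulls commute with affine maps: if \(Y_i=T(X_i)\) then \(Y_1,Y_2,\ldots \) are independent random points with distribution \(T_{*}\mu \) and

$$\begin{aligned} \textrm{conv}\{Y_1,\ldots ,Y_N\}=T\big (\textrm{conv}\{X_1,\ldots ,X_N\}\big )=T(K_N), \end{aligned}$$

while \((T_{*}\nu )(T(K_N))=\nu (K_N)\).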

So, we may assume that \(\mu \) is isotropic and \(\nu \) is centered. In the next theorem we actually assume that both \(\mu \) and \(\nu \) are isotropic.

Theorem 5.1

Let \(\mu \) and \(\nu \) be isotropic log-concave probability measures on \({{\mathbb {R}}}^n\), \(n\geqslant n_0\). For any \(N\leqslant \exp (c_1n/L_{\nu }^2)\) we have that

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N)) \leqslant 2 \exp \left( -c_2n/L_{\nu }^2\right) , \end{aligned}$$

where \(c_1,c_2>0\) and \(n_0\in {{\mathbb {N}}}\) are absolute constants.

The proof of Theorem 5.1 will exploit the same tools that were used in the previous section, combined with a variant of the standard lemma that is used in order to establish upper thresholds. Recall that \(B_t(\mu )=\{v\in {\mathbb R}^n:\Lambda ^{*}_{\mu }(v)\leqslant t\}\), where \(\Lambda ^{*}_{\mu }\) is the Cramér transform of \(\mu \). A version of the next lemma appeared originally in [9].

Lemma 5.2

Let \(t>0\). For every \(N>n\) we have

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))\leqslant \nu (B_t(\mu ))+ N\exp (-t). \end{aligned}$$

Proof

We write

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))&={{\mathbb {E}}}_{\mu ^N}(\nu (K_N\cap B_t(\mu )))+{{\mathbb {E}}}_{\mu ^N}(\nu (K_N\setminus B_t(\mu )))\\&\leqslant \nu (B_t(\mu ))+{{\mathbb {E}}}_{\mu ^N}(\nu (K_N\setminus B_t(\mu ))). \end{aligned}$$

Observe that if H is a closed half-space containing x, and if \(x\in K_N\), then there exists \(i\leqslant N\) such that \(X_i\in H\) (otherwise we would have \(x\in K_N\subseteq H^\prime \), where \(H^\prime \) is the complementary half-space). It follows that

$$\begin{aligned} \mu ^N\bigl (x\in K_N\bigr ) \leqslant N\varphi _{\mu }(x). \end{aligned}$$
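Indeed, for any closed half-space \(H\in {{{\mathcal {H}}}}(x)\),

$$\begin{aligned} \mu ^N\bigl (x\in K_N\bigr )\leqslant \mu ^N\Big (\bigcup _{i\leqslant N}\{X_i\in H\}\Big )\leqslant \sum _{i=1}^N\mu ^N\big (X_i\in H\big )=N\mu (H), \end{aligned}$$

and taking the infimum over all \(H\in {{{\mathcal {H}}}}(x)\) gives the displayed bound.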

Then, Fubini’s theorem shows that

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N\setminus B_t(\mu )))&=\int _{{{\mathbb {R}}}^n\setminus B_t(\mu )}\mu ^N(x\in K_N)\,d\nu (x)\\&\leqslant \int _{{{\mathbb {R}}}^n\setminus B_t(\mu )}N\varphi _{\mu }(x)\,d\nu (x)\leqslant N\,e^{-t} \end{aligned}$$

because \(\varphi _{\mu }(x)\leqslant \exp (-\Lambda _{\mu }^{*}(x))\leqslant e^{-t}\) for all \(x\notin B_t(\mu )\).\(\square \)

Proof of Theorem 5.1

Using the estimate \(\nu (B_t(\mu ))\leqslant \Vert f_{\nu }\Vert _{\infty }|B_t(\mu )|\), Proposition 2.3 and Theorem 2.1, from Lemma 5.2 we get

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))\leqslant \left( c_1\Vert f_{\nu }\Vert _{\infty }^{1/n}\sqrt{t/n}\right) ^n+ N\exp (-t) \end{aligned}$$

for every \(N>n\) and \(2\leqslant t\leqslant n\). Recall that \(\nu \) is isotropic, therefore \(\Vert f_{\nu }\Vert _{\infty }^{2/n}=L_{\nu }^2=O(\sqrt{n})\); in fact, Klartag’s estimate for \(L_n\) gives much more. Then, if \(n\geqslant n_0\) where \(n_0\in {{\mathbb {N}}}\) is an absolute constant, the choice \(t:=(c_1e)^{-2}n/\Vert f_{\nu }\Vert _{\infty }^{2/n}\) satisfies \(2\leqslant t\leqslant n\) and gives

$$\begin{aligned} \left( c_1\Vert f_{\nu }\Vert _{\infty }^{1/n}\sqrt{t/n}\right) ^n\leqslant e^{-n}. \end{aligned}$$

Therefore,

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))\leqslant e^{-n}+ N\exp (-c_2n/\Vert f_{\nu }\Vert _{\infty }^{2/n}), \end{aligned}$$

where \(c_2=(c_1e)^{-2}\). It follows that if \(N\leqslant \exp (c_3n/\Vert f_{\nu }\Vert _{\infty }^{2/n})\) where \(c_3=c_2/2\), then we have

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))\leqslant e^{-n}+\exp (-c_3n/\Vert f_{\nu }\Vert _{\infty }^{2/n}) \end{aligned}$$

and the result follows from the fact that \(\Vert f_{\nu }\Vert _{\infty }^{2/n}=L_{\nu }^2\geqslant c\). \(\square \)

Remark 5.3

Let \(\mu \) be isotropic. For the proof of Theorem 5.1, what we actually need about \(\nu \) is that \(\nu \) is centered and that \(\Vert f_{\nu }\Vert _{\infty }^{1/n}=o_n(\sqrt{n})\). Then the argument of the previous proof gives

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))\leqslant \exp (-c_2n/\max \{1,\Vert f_{\nu }\Vert _{\infty }^{2/n}\}) \end{aligned}$$

if \(N\leqslant \exp (c_1n/\Vert f_{\nu }\Vert _{\infty }^{2/n})\). Note that the proof of (1.5) in [16] exploits the same ideas. The role of \(\nu \) is played by the uniform measure on a convex body K, therefore \(\Vert f_{\nu }\Vert _{\infty }=\frac{1}{|K|}\). On the other hand, \(\mu \) is isotropic and supported on K, and hence

$$\begin{aligned} |K|\cdot L_{\mu }^n\geqslant \int _Kf_{\mu }(x)dx=\mu (K)=1. \end{aligned}$$

This shows that \(\Vert f_{\nu }\Vert _{\infty }\leqslant L_{\mu }^n\), therefore \(n/\Vert f_{\nu }\Vert _{\infty }^{\frac{2}{n}}\geqslant n/L_{\mu }^2\), which (combined with the above) proves (1.5).

A second possible normalization is to assume that \(\mu \) and \(\nu \) are simply centered and that \(\Vert f_{\mu }\Vert _{\infty }= \Vert f_{\nu }\Vert _{\infty }\). Then, starting the computation as in the proof of Theorem 5.1 we get

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N))&\leqslant \left( c_1\Vert f_{\nu }\Vert _{\infty }^{1/n}[\det \textrm{Cov}(\mu )]^{\frac{1}{2n}}\sqrt{t/n}\right) ^n+ N\exp (-t)\\&= \left( c_1\Vert f_{\mu }\Vert _{\infty }^{1/n}[\det \textrm{Cov}(\mu )]^{\frac{1}{2n}}\sqrt{t/n}\right) ^n+ N\exp (-t)\\&= \left( c_1L_{\mu }\sqrt{t/n}\right) ^n+ N\exp (-t). \end{aligned}$$

Choosing \(t=(c_1e)^{-2}n/L_{\mu }^2\) and continuing as above, we get:

Theorem 5.4

Let \(\mu \) and \(\nu \) be two centered log-concave probability measures on \({{\mathbb {R}}}^n\) with \(\Vert f_{\mu }\Vert _{\infty }=\Vert f_{\nu }\Vert _{\infty }\). For any \(N\leqslant \exp (c_1n/L_{\mu }^2)\) we have that

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}(\nu (K_N)) \leqslant 2 \exp \left( -c_2n/L_{\mu }^2\right) , \end{aligned}$$

where \(c_1,c_2>0\) are absolute constants.

We pass now to the lower threshold. It is useful to observe that in the case where \(X_1,X_2,\ldots \) are uniformly distributed in the Euclidean unit ball, the sharp threshold for the problem (see [11] and [12]) is

$$\begin{aligned} \exp \left( (1\pm \epsilon )\tfrac{1}{2}n\ln n\right) ,\qquad \epsilon >0. \end{aligned}$$

We concentrate on the case \(\nu =\mu \) of our problem, for which we shall establish a weak lower threshold of this order. The precise formulation of our result is the following.

Theorem 5.5

Let \(\delta \in (0,1)\). Then,

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{ {{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]: N\geqslant \exp \big (C\delta ^{-1}\ln \left( 2/\delta \right) n\ln n\big )\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the first infimum is over all centered log-concave probability measures \(\mu \) on \({{\mathbb {R}}}^n\) and \(C>0\) is an absolute constant.

This is a weak threshold in the sense that we consider the expected measure of \((1+\delta )K_N\) instead of \(K_N\), where \(\delta >0\) is arbitrarily small. The reason for this is the dependence on \(\delta \) in the next technical proposition.

Proposition 5.6

Let \(\mu \) be an isotropic log-concave probability measure on \({{\mathbb {R}}}^n\). For any \(\delta \in (0,1)\) and any \(t\geqslant C_{\delta }n\ln n\) we have that

$$\begin{aligned} \mu ((1+\delta )Z_t^+(\mu ))\geqslant 1-e^{-c_{\delta }t} \end{aligned}$$

where \(C_{\delta }=C\delta ^{-1}\ln \left( 2/\delta \right) \) and \(c_{\delta }=c\delta \) are positive constants depending only on \(\delta \).

Proof

Let \(\delta \in (0,1)\) and set \(\epsilon =\delta /5\). Fix \(t\geqslant n\), to be determined below. Recall that \(b_1B_2^n\subseteq Z_t^+(\mu )\subseteq b_2tB_2^n\) for some absolute constants \(b_1,b_2>0\). This implies that if \(v,w\in S^{n-1}\) and \(|v-w|\leqslant \frac{b_1\epsilon }{b_2t}\) then

$$\begin{aligned} h_{Z_t^+(\mu )}(v-w)\leqslant b_2t|v-w|\quad \hbox {and}\quad b_1\leqslant \min \{h_{Z_t^+(\mu )}(v),h_{Z_t^+(\mu )}(w)\}, \end{aligned}$$

therefore

$$\begin{aligned} h_{Z_t^+(\mu )}(v-w)\leqslant b_2t|v-w|\leqslant \epsilon \min \{h_{Z_t^+(\mu )}(v),h_{Z_t^+(\mu )}(w)\}.\end{aligned}$$
(5.1)

Set \(b:=b_2/b_1\) and consider an \(\frac{\epsilon }{bt}\)-net \({{\mathcal {N}}}\) of the Euclidean unit sphere \(S^{n-1}\) with cardinality \(|{{\mathcal {N}}}|\leqslant (1+2bt/\epsilon )^n\leqslant (3bt/\epsilon )^n\); for a proof of the estimate on the cardinality of \({{\mathcal {N}}}\) see e.g. [19, Lemma 5.2.5]. We define

$$\begin{aligned} W=\bigcap _{\xi \in {{\mathcal {N}}}}\left\{ x:\langle x,\xi \rangle _+\leqslant \frac{1}{1+\epsilon } h_{Z_t^+(\mu )}(\xi )\right\} . \end{aligned}$$

Let \(x\in W\). Then, \(\langle x,\xi \rangle _+\leqslant \frac{1}{1+\epsilon } h_{Z_t^+(\mu )}(\xi )\) for all \(\xi \in {{\mathcal {N}}}\). We will show that \((1-\epsilon )\langle x,w\rangle _+\leqslant h_{Z_t^+(\mu )}(w)\) for all \(w\in S^{n-1}\), which is equivalent to \((1-\epsilon )x\in Z_t^+(\mu )\). We set

$$\begin{aligned} \alpha _{\mu }(x):=\max \left\{ \frac{\langle x,w\rangle _+}{h_{Z_t^+(\mu )}(w)}:w\in S^{n-1}\right\} \end{aligned}$$

and consider \(v\in S^{n-1}\) such that \(\langle x,v\rangle _+=\alpha _{\mu }(x)\cdot h_{Z_t^+(\mu )}(v)\). There exists \(\xi \in {{\mathcal {N}}}\) such that \(|\xi -v|\leqslant \frac{\epsilon }{bt}\). Using the fact that \(\langle x,v-\xi \rangle _+\leqslant \alpha _{\mu }(x)h_{Z_t^+(\mu )}(v-\xi )\), we write

$$\begin{aligned} \langle x,v\rangle _+ \leqslant \langle x,\xi \rangle _+ +\langle x,v-\xi \rangle _+ \leqslant \frac{1}{1+\epsilon } h_{Z_t^+(\mu )}(\xi ) +\alpha _{\mu }(x)h_{Z_t^+(\mu )}(v-\xi ). \end{aligned}$$

From (5.1) it follows that

$$\begin{aligned} \langle x,v\rangle _+ \leqslant \frac{1}{1+\epsilon } h_{Z_t^+(\mu )}(\xi )+\epsilon \alpha _{\mu }(x)h_{Z_t^+(\mu )}(v)= \frac{1}{1+\epsilon } h_{Z_t^+(\mu )}(\xi )+\epsilon \langle x,v\rangle _+, \end{aligned}$$

which gives

$$\begin{aligned} \langle x,v\rangle _+\leqslant \frac{1}{1-\epsilon ^2} h_{Z_t^+(\mu )}(\xi ). \end{aligned}$$

Moreover,

$$\begin{aligned} h_{Z_t^+(\mu )}(\xi )&\leqslant h_{Z_t^+(\mu )}(v)+h_{Z_t^+(\mu )}(\xi -v)\\&\leqslant h_{Z_t^+(\mu )}(v)+\epsilon h_{Z_t^+(\mu )}(v) =(1+\epsilon )h_{Z_t^+(\mu )}(v), \end{aligned}$$

which finally gives \(\alpha _{\mu }(x)h_{Z_t^+(\mu )}(v)=\langle x,v\rangle _+\leqslant \frac{1+\epsilon }{1-\epsilon ^2}h_{Z_t^+(\mu )}(v)\), that is, \(\alpha _{\mu }(x)\leqslant 1/(1-\epsilon )\). This shows that \((1-\epsilon )W\subseteq Z_t^+(\mu )\). On the other hand, for every \(\xi \in {{\mathcal {N}}}\), Markov's inequality applied to \(\langle \cdot ,\xi \rangle _+^t\) gives

$$\begin{aligned} \mu (\{x:\langle x,\xi \rangle _+\geqslant (1+\epsilon )\Vert \langle \cdot ,\xi \rangle _+ \Vert _t\})\leqslant (1+\epsilon )^{-t}. \end{aligned}$$

Since \(\delta \in (0,1)\) we have \(0<\epsilon <1/5\), therefore \(\frac{(1+\epsilon )^2}{1-\epsilon }\leqslant 1+5\epsilon =1+\delta \); indeed, the inequality \((1+\epsilon )^2\leqslant (1+5\epsilon )(1-\epsilon )\) reduces to \(6\epsilon ^2\leqslant 2\epsilon \), which is valid for all \(\epsilon \leqslant 1/3\). Then,

$$\begin{aligned} \mu ((1+\delta )Z_t^+(\mu ))&\geqslant \mu \left( \frac{(1+\epsilon )^2}{1-\epsilon }Z_t^+(\mu )\right) \geqslant \mu ((1+\epsilon )^2W)\\&=\mu \left( \bigcap _{\xi \in {{\mathcal {N}}}}\left\{ x:\langle x,\xi \rangle _+\leqslant (1+\epsilon ) h_{Z_t^+(\mu )}(\xi )\right\} \right) \\&\geqslant 1-|{{\mathcal {N}}}|\cdot (1+\epsilon )^{-t}\geqslant 1-(C^{\prime }_{\epsilon }t)^n(1+\epsilon )^{-t}, \end{aligned}$$

where \(C_{\epsilon }^{\prime }=3b/\epsilon \). It follows that there exists \(C_{\epsilon }>1\) such that if \(t\geqslant C_{\epsilon }n\ln n\) then

$$\begin{aligned} (C_{\epsilon }^{\prime }t)^n(1+\epsilon )^{-t}\leqslant (1+\epsilon )^{-t/2}\leqslant e^{-\epsilon t/4}. \end{aligned}$$
(5.2)

To see this, consider the function

$$\begin{aligned} \ell (t)=\frac{t}{2}\ln (1+\epsilon )-n\ln (3bt/\epsilon ). \end{aligned}$$

Since \(\ell ^{\prime }(t)=\frac{1}{2}\ln (1+\epsilon )-\frac{n}{t}\geqslant 0\) for all \(t\geqslant 2n/\ln (1+\epsilon )\), the function \(\ell \) is increasing on \([2n/\ln (1+\epsilon ),\infty )\). Therefore, if \(t\geqslant C_{\epsilon }n\ln n\) where \(C_{\epsilon }=\frac{C}{\epsilon }\ln \left( \frac{2}{\epsilon }\right) \) for a large enough absolute constant \(C>0\), one can check that \(\ell (t)\geqslant \ell (C_{\epsilon }n\ln n)>0\). This proves the first inequality in (5.2); the second one follows since \(\ln (1+\epsilon )\geqslant \epsilon /2\) for \(0<\epsilon <1\). Since \(\epsilon =\delta /5\), we obtain the assertion of the proposition with the stated dependence of the constants \(C_{\delta },c_{\delta }\) on \(\delta \). \(\square \)

For the proof of Theorem 5.5 we also need a basic fact that plays a central role in the proofs of all the lower thresholds that have been obtained so far. It is stated in the form below in [16, Lemma 3]. For a proof see [9] or [10, Lemma 4.1].

Lemma 5.7

For every Borel subset A of \({{\mathbb {R}}}^n\) we have that

$$\begin{aligned} 1-\mu ^N(K_N\supseteq A)\leqslant 2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left( 1-\inf _{x\in A}\varphi _{\mu }(x)\right) ^{N-n}. \end{aligned}$$

Therefore,

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]\geqslant \mu (A)\left( 1-2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left( 1-\inf _{x\in A}\varphi _{\mu }(x)\right) ^{N-n}\right) . \end{aligned}$$
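Indeed, the second assertion follows from the first one, since \(\mu (K_N)\geqslant \mu (A)\) on the event \(\{K_N\supseteq A\}\):

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}[\mu (K_N)]\geqslant {{\mathbb {E}}}_{\mu ^N}\big [\mu (A)\mathbf {1}_{\{K_N\supseteq A\}}\big ]=\mu (A)\,\mu ^N\big (K_N\supseteq A\big ). \end{aligned}$$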

Proof of Theorem 5.5

Let \(0<\delta <1\) and set \(\epsilon =\delta /3\). Let \(\mu \) be a centered log-concave probability measure on \({{\mathbb {R}}}^n\). Since the expectation \({{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]\) is a linearly invariant quantity, we may assume that \(\mu \) is isotropic. From Lemma 4.3 we know that for every \(x\in (1-\epsilon ) Z_t^+(\mu )\) we have

$$\begin{aligned}\varphi _{\mu }(x)\geqslant \frac{(1-(1-\epsilon )^t)^2}{C_1^t},\end{aligned}$$

where \(C_1>1\) is an absolute constant. Then, from Lemma 5.7 we get

$$\begin{aligned}\mu ^N\Big (K_N\supseteq (1-\epsilon )Z_t^+(\mu )\Big ) \geqslant 1-2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left[ 1-\frac{(1-(1-\epsilon )^t)^2}{C_1^t}\right] ^{N-n}.\end{aligned}$$

By the mean value theorem we have \(1-(1-\epsilon )^t=t\epsilon z^{t-1}\) for some \(z\in (1-\epsilon ,1)\), and hence \(1-(1-\epsilon )^t\geqslant t\epsilon (1-\epsilon )^{t-1}\). Taking into account the fact that \(1-\epsilon >2/3\), we get

$$\begin{aligned} \mu ^N\Big (K_N\supseteq (1-\epsilon )Z_t^+(\mu )\Big )&\geqslant 1-2\left( {\begin{array}{c}N\\ n\end{array}}\right) \left[ 1-\frac{({t\epsilon }(1-\epsilon )^{t-1})^2}{C_1^t}\right] ^{N-n}\\&\geqslant 1-\left( \frac{2eN}{n}\right) ^n\exp \left( -(N-n)\frac{(t\epsilon )^2}{(3C_1)^t}\right) . \end{aligned}$$
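For completeness, we note that the second inequality above follows from the standard estimates

$$\begin{aligned} 2\left( {\begin{array}{c}N\\ n\end{array}}\right) \leqslant 2\left( \frac{eN}{n}\right) ^n\leqslant \left( \frac{2eN}{n}\right) ^n\qquad \hbox {and}\qquad (1-x)^{N-n}\leqslant e^{-(N-n)x}, \end{aligned}$$

applied with \(x=(t\epsilon (1-\epsilon )^{t-1})^2/C_1^t\), together with the bound \((1-\epsilon )^{2(t-1)}\geqslant (2/3)^{2t}=(9/4)^{-t}\), which gives \(x\geqslant (t\epsilon )^2/(3C_1)^t\) because \(\frac{9}{4}C_1\leqslant 3C_1\).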

The right-hand side of the last estimate tends to 1 as \(n\rightarrow \infty \) if

$$\begin{aligned} (3C_1)^tn\ln (4eN/n)<(N-n)(t\epsilon )^2, \end{aligned}$$
(5.3)

and assuming that \(\delta \in (1/n^2,1)\) and \(t\geqslant C_{\epsilon }n\ln n\) where \(C_{\epsilon }\) is the constant from Proposition 5.6, we check that (5.3) holds true if \(N\geqslant \exp (C_2t)\) for a large enough absolute constant \(C_2>0\).
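One way to check this is the following. The right-hand side of (5.3) grows linearly in N while the left-hand side grows only logarithmically in N, so it suffices to verify (5.3) for \(N=\lceil \exp (C_2t)\rceil \); in that case \(N-n\geqslant N/2\) and \(\ln (4eN/n)\leqslant 3C_2t\), so (5.3) reduces, up to absolute constants, to

$$\begin{aligned} C_2(3C_1)^tnt\leqslant e^{C_2t}(t\epsilon )^2, \end{aligned}$$

or equivalently \(C_2(3C_1)^tn\leqslant e^{C_2t}t\epsilon ^2\). Since \(\delta >1/n^2\) implies \(\epsilon \geqslant 1/(3n^2)\), it is enough to have \(9C_2n^5(3C_1)^t\leqslant e^{C_2t}\), which is satisfied once \(C_2\geqslant 2\ln (3C_1)+2\), because then \((3C_1)^t\leqslant e^{C_2t/2}\) and \(e^{C_2t/2}\geqslant e^t\geqslant 9C_2n^5\) for \(t\geqslant C_{\epsilon }n\ln n\) and all large n.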

Note that \(\epsilon =\delta /3\) implies that \(1+\delta >\frac{1+\epsilon }{1-\epsilon }\), since \(\frac{1+\epsilon }{1-\epsilon }\leqslant 1+3\epsilon \) for all \(0<\epsilon \leqslant 1/3\). Then, if \(N\geqslant \exp (C_2C_{\epsilon }n\ln n)\) we see that

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}\left[ \mu \left( (1+\delta )K_N\right) \right]&\geqslant {{\mathbb {E}}}_{\mu ^N}\left[ \mu \left( \frac{1+\epsilon }{1-\epsilon }K_N\right) \right] \\&\geqslant \mu ((1+\epsilon )Z_t^+(\mu ))\times \mu ^N\Big (K_N\supseteq (1-\epsilon )Z_t^+(\mu )\Big )\\&\geqslant \big (1-e^{-c\epsilon t}\big )\left[ 1-\left( \frac{2eN}{n}\right) ^n\exp \left( -(N-n)\frac{(t\epsilon )^2}{(3C_1)^t}\right) \right] \longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \). \(\square \)

We have already mentioned that Theorem 5.5 provides a weak threshold in the sense that we estimate the expectation \({{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta )K_N)\big ]\) (for an arbitrarily small but positive value of \(\delta \)) while the original question is about \({{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]\). The next result provides an estimate where “\(\delta \) is removed"; however, the dependence on n is worse. The argument below was suggested by the referee and replaces our much more complicated original argument, leading to the same final estimate.

Theorem 5.8

There exists an absolute constant \(C>0\) such that

$$\begin{aligned} \inf _{\mu }\Big (\inf \Big \{ {{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]: N\geqslant \exp (C(n\ln n)^2u(n))\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), where the first infimum is over all log-concave probability measures \(\mu \) on \({{\mathbb {R}}}^n\) and u(n) is any function with \(u(n)\rightarrow \infty \) as \(n\rightarrow \infty \).

Proof

Let \(\mu \) be a log-concave probability measure on \({{\mathbb {R}}}^n\). Since the expectation \({{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]\) is an affinely invariant quantity, we may assume that \(\mu \) is centered. Note that if \(A\subset {{\mathbb {R}}}^n\) is a Borel set, then

$$\begin{aligned} \mu ((1+\delta )A)=\int _{(1+\delta )A}f_{\mu }(x)\,dx=(1+\delta )^n\int _Af_{\mu }((1+\delta )x)\,dx. \end{aligned}$$

Since \(f_{\mu }\) is log-concave and \(x=\frac{1}{1+\delta }(1+\delta )x+\frac{\delta }{1+\delta }\cdot 0\), we have \(f_{\mu }(x)\geqslant f_{\mu }((1+\delta )x)^{\frac{1}{1+\delta }}f_{\mu }(0)^{\frac{\delta }{1+\delta }}\), and hence

$$\begin{aligned} f_{\mu }((1+\delta )x)\leqslant f_{\mu }(x)\left( \frac{f_{\mu }(x)}{f_{\mu }(0)}\right) ^{\delta }\leqslant e^{n\delta }f_{\mu }(x) \end{aligned}$$

for every \(x\in {{\mathbb {R}}}^n\), because \(f_{\mu }(x)\leqslant e^nf_{\mu }(0)\) by (2.2). It follows that

$$\begin{aligned} \mu ((1+\delta )A)\leqslant (1+\delta )^ne^{n\delta }\mu (A)\leqslant e^{2n\delta }\mu (A). \end{aligned}$$
(5.4)

Given a function u(n) with \(u(n)\rightarrow \infty \) as \(n\rightarrow \infty \), choose \(\delta _n=(nu(n))^{-1}\). From (5.4) we see that

$$\begin{aligned} {{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]\geqslant e^{-2n\delta _n}{{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta _n)K_N)\big ]. \end{aligned}$$

Therefore, we see that

$$\begin{aligned}&\inf _{\mu }\Big (\inf \Big \{ {{\mathbb {E}}}_{\mu ^N}\big [\mu (K_N)\big ]: N\geqslant \exp \big (C\delta _n^{-1}\ln \left( 2/\delta _n \right) n\ln n\big )\Big \}\Big )\\&\quad \geqslant e^{-2n\delta _n}\inf _{\mu }\Big (\inf \Big \{ {{\mathbb {E}}}_{\mu ^N}\big [\mu ((1+\delta _n)K_N)\big ]: N\geqslant \exp \big (C\delta _n^{-1}\ln \left( 2/\delta _n \right) n\ln n\big )\Big \}\Big )\longrightarrow 1 \end{aligned}$$

as \(n\rightarrow \infty \), using Theorem 5.5 and the fact that \(e^{-2n\delta _n}=e^{-2/u(n)}\rightarrow 1\). We may clearly assume that \(u(n)=O(n)\), so that \(\ln (2nu(n))=O(\ln n)\). Then,

$$\begin{aligned} \delta _n^{-1}\ln \left( 2/\delta _n \right) n\ln n=n^2\ln n\ln (2nu(n))u(n)\approx (n\ln n)^2u(n), \end{aligned}$$

and the result follows.\(\square \)