Half-space depth of log-concave probability measures

Given a probability measure $\mu $ on ${\mathbb R}^n$, Tukey's half-space depth is defined for any $x\in {\mathbb R}^n$ by $\varphi_{\mu }(x)=\inf\{\mu (H):H\in {\cal H}(x)\}$, where ${\cal H}(x)$ is the set of all half-spaces $H$ of ${\mathbb R}^n$ containing $x$. We show that if $\mu $ is log-concave then $$e^{-c_1n}\leq \int_{\mathbb{R}^n}\varphi_{\mu }(x)\,d\mu(x) \leq e^{-c_2n/L_{\mu}^2}$$ where $L_{\mu }$ is the isotropic constant of $\mu $ and $c_1,c_2>0$ are absolute constants. The proofs combine large deviations techniques with a number of facts from the theory of $L_q$-centroid bodies of log-concave probability measures. The same ideas lead to general estimates for the expected measure of random polytopes whose vertices have a log-concave distribution.


Introduction
Let µ be a probability measure on R^n. For any x ∈ R^n we denote by H(x) the set of all half-spaces H of R^n containing x. The function ϕ_µ(x) = inf{µ(H) : H ∈ H(x)} is called Tukey's half-space depth. The first work in statistics where some form of the half-space depth appears is an article of Hodges [18] from 1955. Tukey introduced the half-space depth for data sets in [27] as a tool that enables efficient visualization of random samples in the plane. The term "depth" also comes from Tukey's article. A formal definition of the half-space depth, as a way to distinguish points that fit the overall pattern of a multivariable probability distribution and to obtain an efficient description, visualization, and nonparametric statistical inference for multivariable data, was given by Donoho and Gasko in [11] (see also [26]). We refer the reader to the survey article of Nagy, Schütt and Werner [23] for an overview of this topic, with an emphasis on its connections with convex geometry, and many references.
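Although this article is purely theoretical, the definition of ϕ_µ is easy to probe numerically. The following sketch (our illustration; the function name and the random-direction approximation are not from the text) estimates the depth of a point with respect to an empirical sample by minimizing the half-space mass over finitely many random directions. Since the true depth is an infimum over all half-spaces, the value returned is only an upper bound on ϕ_µ(x).

```python
import numpy as np

def halfspace_depth(x, sample, n_dirs=1000, rng=None):
    """Monte Carlo upper estimate of Tukey's half-space depth of x with
    respect to the empirical measure of `sample` (an (N, n) array).

    Every unit direction xi determines the closed half-space
    {y : <y - x, xi> >= 0}, which contains x; its empirical measure is
    the fraction of sample points in it.  The depth is the infimum over
    ALL half-spaces containing x, so minimizing over finitely many
    random directions only gives an upper bound on the true depth.
    """
    rng = np.random.default_rng(rng)
    sample = np.asarray(sample, dtype=float)
    x = np.asarray(x, dtype=float)
    dirs = rng.standard_normal((n_dirs, sample.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # proj[i, k] = <sample_i - x, dir_k>
    proj = (sample - x) @ dirs.T
    return (proj >= 0).mean(axis=0).min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.standard_normal((4000, 2))
    # depth is largest near the center of mass and decays outward
    print(halfspace_depth(np.zeros(2), pts, rng=1),
          halfspace_depth(np.array([3.0, 0.0]), pts, rng=1))
```

For a symmetric distribution the estimate is close to 1/2 at the center and small far from it, in line with the qualitative picture used throughout the paper.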
In the first part of this article we study the expectation of the half-space depth in the context of log-concave probability measures. In what follows, these are the Borel probability measures µ on R^n that satisfy µ(λA + (1 − λ)B) ≥ µ(A)^λ µ(B)^{1−λ} for any compact subsets A, B ⊆ R^n and any λ ∈ (0, 1), as well as the non-degeneracy condition µ(H) < 1 for every hyperplane H in R^n. The question whether there exists an absolute constant c ∈ (0, 1) such that

(1.1) E_µ(ϕ_µ) := ∫_{R^n} ϕ_µ(x) dµ(x) ≤ c^n

for all n ≥ 1 and all log-concave probability measures µ on R^n was asked in [1] in connection with stochastic separability and applications to machine learning and error-correction mechanisms in artificial intelligence systems; for the origin of the question we refer to [16] and to the references therein. In the context of asymptotic geometric analysis, the validity of (1.1) implies that if m ≤ C^n, where C > 1 is an absolute constant, then a set of m independent random points with a log-concave distribution has, with probability close to 1, the property that every point in the set can be separated from all others by a hyperplane.
Our first result shows that (1.1) holds true modulo the isotropic constant L_µ of µ, defined in (2.3).

Theorem 1.1. Let µ be a log-concave probability measure on R^n. Then

E_µ(ϕ_µ) ≤ exp(−cn/L_µ²),

where c > 0 is an absolute constant.
Background information on isotropic log-concave probability measures and the isotropic constant is provided in Section 2. The well-known hyperplane conjecture asks whether there exists an absolute constant C > 0 such that L_n ≤ C for every n ≥ 2, where L_n = sup{L_µ : µ is an isotropic log-concave probability measure on R^n}.
The best known upper bound, due to Klartag [21], asserts that L_n ≤ C√(ln n) for some absolute constant C > 0; therefore Theorem 1.1 shows that E_µ(ϕ_µ) ≤ exp(−cn/ln n) provided that n is large enough. The quantity E_µ(ϕ_µ) is affinely invariant, and hence for the proof of Theorem 1.1 we may assume that µ is isotropic. Actually, we obtain Theorem 1.1 as a special case of a more general result which is presented in Section 3.
Theorem 1.2. Let µ and ν be two isotropic log-concave probability measures on R^n, n ≥ n_0. Then

∫_{R^n} ϕ_µ(x) dν(x) ≤ exp(−cn/L_ν²),

where c > 0 and n_0 ∈ N are absolute constants.
The proof of Theorem 1.2 starts with the known estimate ϕ_µ(x) ≤ exp(−Λ*_µ(x)), where Λ*_µ is the Cramér transform of µ (defined in Section 2), and actually establishes a stronger inequality, exploiting upper bounds for the volume of the sets B_t(µ) = {x ∈ R^n : Λ*_µ(x) ≤ t}.
The assumption that both µ and ν are isotropic is not necessary. One can consider a different type of normalization. We discuss this matter in Section 2 and we state another version of Theorem 1.2 that might be useful (see Theorem 3.2). In any case, setting ν = µ we obtain Theorem 1.1 as an immediate consequence of any of these statements.
In Section 4 we show that, apart from the value of the isotropic constant L µ , the exponential estimate provided by Theorem 1.1 is sharp.
Theorem 1.3. Let µ be a log-concave probability measure on R^n. Then

E_µ(ϕ_µ) ≥ e^{−cn},

where c > 0 is an absolute constant.
The proof of Theorem 1.3 makes use of several facts about isotropic log-concave probability measures. In the case where µ is the uniform measure on a convex body K of volume 1 in R^n, one can show that ϕ_µ(x) ≥ e^{−c_1 n} for all x ∈ (1/2)K and then simply apply Markov's inequality and use the fact that |(1/2)K| = 2^{−n}. When µ is an arbitrary log-concave probability measure on R^n, in order to obtain the same exponential in the dimension lower bound we have to exploit the family of the one-sided L_t-centroid bodies of µ. More precisely, we use the fact that in order to have the lower bound ϕ_µ(x) ≥ e^{−c_1 n} we may use, instead of (1/2)K, the convex body (1/2)Z^+_t(µ) with e.g. t = 5n, where Z^+_t(µ) is the one-sided L_t-centroid body of µ, and we establish an appropriate lower bound for µ((1/2)Z^+_{5n}(µ)). This last estimate requires the use of some other families of convex sets that are associated with a log-concave probability measure; these are introduced in the next section as well as in Section 4. For the reader's convenience we present first the proof of Theorem 1.3 in the simpler case where µ is the uniform measure on a convex body K in R^n and then in the general case of an arbitrary log-concave probability measure.
In the second part of this article we consider the question to obtain uniform upper and lower thresholds for the expected measure of a random polytope defined as the convex hull of independent random points with a log-concave distribution. The general formulation of the problem is the following. Given a log-concave probability measure µ on R^n, we consider independent random points X_1, X_2, ... in R^n distributed according to µ and for any N > n we consider the random polytope K_N = conv{X_1, ..., X_N}. Tukey's half-space depth plays a crucial role in the study of these random polytopes and of their threshold behavior, starting with the classical work of Dyer, Füredi and McDiarmid, who established in [13] a sharp threshold for the expected volume of random polytopes with vertices uniformly distributed in the discrete cube {−1, 1}^n or in the solid cube [−1, 1]^n. They proved that in the first case, if κ = ln 2 − 1/2, then for every ε ∈ (0, κ) one has the upper threshold 2^{−n} E[|K_N|] → 0 when N ≤ exp((κ − ε)n) and the lower threshold 2^{−n} E[|K_N|] → 1 when N ≥ exp((κ + ε)n). A similar result holds true for the expected volume of random polytopes with vertices uniformly distributed in the cube B^n_∞; the value of the corresponding constant involves Euler's constant γ. Half-space depth plays a key role in the proof of these results: the starting points for the proof of the upper and lower threshold are variants of Lemma 5.2 and Lemma 5.7 respectively. Further sharp thresholds (meaning that there exists some constant κ = κ_µ such that the expected volume of K_N changes behavior around N = exp(κ_µ n)) have been given in a number of other special cases; see [15] for the case where the X_i have independent identically distributed coordinates supported on a bounded interval, and the articles [24] and [3], [4] for a number of cases where the X_i have rotationally invariant densities. All these works follow the same strategy and use estimates for the half-space depth. Non-sharp upper and lower thresholds, both of them exponential in the dimension, are obtained in [14] for the case where the X_i are uniformly distributed in a simplex.
All these results suggest that, at least in the case where µ = µ_K is the uniform measure on a high-dimensional convex body, the expectation E_{µ^N}[µ(K_N)] of the measure of K_N exhibits a threshold with constant κ_µ = (1/n) E_µ(Λ*_µ), where Λ*_µ is the Cramér transform of µ, in the sense that the following statement might be true: given δ ∈ (0, 1/2), there exists n_0(δ, ε) ∈ N such that if n ≥ n_0 and K is a convex body in R^n, then E_{µ^N}[µ(K_N)] ≤ δ for N ≤ exp((κ_µ − ε)n) and E_{µ^N}[µ(K_N)] ≥ 1 − δ for N ≥ exp((κ_µ + ε)n), for some ε = c(n, δ)κ_µ with lim_{n→∞} c(n, δ) = 0. Some steps in this direction have been made in [7]. Note that by (1.2) and Jensen's inequality one has that κ_µ ≥ c/L_n² for every log-concave probability measure µ on R^n. Here, we are interested in uniform upper and lower thresholds for the class of all log-concave probability measures. The question that we study is to find a constant N_1(n), depending only on n and as large as possible, so that sup{E_{µ^N}[µ(K_N)] : n < N ≤ N_1(n)} → 0 as n → ∞, and a second constant N_2(n), depending only on n and as small as possible, so that inf{E_{µ^N}[µ(K_N)] : N ≥ N_2(n)} → 1 as n → ∞, where the supremum and the infimum are over all log-concave probability measures. We shall call the first type of result a "uniform upper threshold" and the second type a "uniform lower threshold". Such uniform upper and lower thresholds were obtained recently by Chakraborti, Tkocz and Vritsiou in [9] for some families of distributions. They showed that if µ is an even log-concave probability measure supported on a convex body K in R^n and if X_1, X_2, ... are independent random points distributed according to µ, then for any n < N ≤ exp(c_1 n/L_µ²) we have that

(1.5) E_{µ^N}[µ(K_N)] ≤ e^{−c_2 n/L_µ²},

where c_1, c_2 > 0 are absolute constants. We obtain an upper threshold for a pair of log-concave measures µ and ν, if they can be simultaneously put in the isotropic position.
Theorem 1.4. Let µ and ν be isotropic log-concave probability measures on R^n. Let X_1, X_2, ... be independent random points in R^n distributed according to µ and for any N > n consider the random polytope K_N = conv{X_1, ..., X_N}. Then, for any N ≤ exp(c_1 n/L_ν²) we have that

E_{µ^N}[ν(K_N)] ≤ e^{−c_2 n/L_ν²},

where c_1, c_2 > 0 are absolute constants.
As a corollary of Theorem 1.4 we get:

Corollary 1.5. There exists an absolute constant c > 0 such that

sup_µ sup{E_{µ^N}[µ(K_N)] : n < N ≤ exp(cn/L_n²)} → 0

as n → ∞, where the first supremum is over all log-concave probability measures µ on R^n.
The proof of Theorem 1.4 exploits some of the ideas that are used for the proof of (1.5) in [9]: Lemma 5.2 is a variant of a known idea which is often used for upper thresholds and is based again on the inequality ϕ_µ(x) ≤ exp(−Λ*_µ(x)). Then, one has to use upper bounds for the volume of the sets B_t(µ). The assumption that both µ and ν are isotropic may be replaced by different types of normalization. We discuss other versions of Theorem 1.4 in Section 5 and we show that one can recover (1.5) from these.
The uniform lower threshold which is established in [9] concerns the case where µ is an even κ-concave measure on R^n with 0 < κ < 1/n, supported on a convex body K in R^n. If X_1, X_2, ... are independent random points in R^n distributed according to µ and K_N = conv{X_1, ..., X_N} as before, then for any M ≥ C and any N ≥ exp((1/κ)(ln n + 2 ln M)) we have that E_{µ^N}[µ(K_N)] ≥ 1 − 1/M, where C > 0 is an absolute constant.
Since the family of log-concave probability measures corresponds to the case κ = 0, it is natural to ask for analogues of this result for 0-concave, i.e. log-concave, probability measures.We obtain a uniform lower threshold for the class of log-concave probability measures.
Theorem 1.6. Let δ ∈ (0, 1). Then

inf_µ inf{E_{µ^N}[µ((1 + δ)K_N)] : N ≥ exp(Cδ^{−1} ln(2/δ) n ln n)} → 1

as n → ∞, where the first infimum is over all log-concave probability measures µ on R^n with barycenter at the origin, and C > 0 is an absolute constant.
The proof of Theorem 1.6 exploits the half-space depth as follows. By a known fact, Lemma 5.7, roughly speaking it suffices to have a good lower bound for ϕ_µ(x) on a set A ⊂ R^n of measure close to 1. We show that if µ has its barycenter at the origin then, as in the proof of Theorem 1.3, the role of A can be played by (1 + δ)Z^+_t(µ) where, this time, t ≥ C_δ n ln n and C_δ = Cδ^{−1} ln(2/δ). Theorem 1.6 provides a weak threshold in the sense that we estimate the expectation E_{µ^N}[µ((1 + δ)K_N)] (for an arbitrarily small but positive value of δ) while we would like to have a similar result for E_{µ^N}[µ(K_N)]. One can "remove the δ-term"; however, the dependence on n becomes worse. More precisely, we show in Theorem 5.8 that there exists an absolute constant C > 0 such that

inf_µ inf{E_{µ^N}[µ(K_N)] : N ≥ exp(C u(n) n ln n)} → 1

as n → ∞, where the first infimum is over all log-concave probability measures µ on R^n and u(n) is any function with u(n) → ∞ as n → ∞. It should be noted that an exponential in the dimension lower threshold is not possible in full generality. For example, in the case where the X_i are uniformly distributed in the Euclidean ball, the sharp threshold for the problem is exp((1 ± ε)(n/2) ln n). See [12], where a related estimate first appears, and [24], [3] for sharp estimates; one more proof is given in [7].
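The threshold phenomenon discussed above can be observed numerically in low dimension. The following sketch (our illustration only, not part of the argument; all function names are ours) estimates E[µ(K_N)] for µ the uniform measure on the planar unit disk, where the expected measure of the convex hull grows from near 0 to near 1 as N increases:

```python
import numpy as np

def hull_area(points):
    """Area of the convex hull of 2-D points: Andrew's monotone chain
    for the hull, shoelace formula for the area."""
    pts = np.unique(np.asarray(points, dtype=float), axis=0)  # lexicographic sort
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):  # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]
    hull = np.array(half(list(pts)) + half(list(pts[::-1])))  # CCW hull
    x, y = hull[:, 0], hull[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def expected_mass(N, trials=100, rng=0):
    """Monte Carlo estimate of E[mu(K_N)] for mu uniform on the unit
    disk: mu(K_N) = area(conv{X_1, ..., X_N}) / pi."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(trials):
        # sample N uniform points in the disk via polar coordinates
        r = np.sqrt(rng.uniform(0, 1, N))
        th = rng.uniform(0, 2 * np.pi, N)
        total += hull_area(np.column_stack((r * np.cos(th), r * np.sin(th)))) / np.pi
    return total / trials
```

Plotting `expected_mass(N)` against ln N makes the slow (polynomial in this fixed dimension) passage from 0 to 1 visible; the results quoted above concern the much more delicate regime where the dimension n tends to infinity.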

Notation and background information
In this section we introduce notation and terminology that we use throughout this work, and provide background information on isotropic convex bodies and log-concave probability measures. We write ⟨·, ·⟩ for the standard inner product in R^n and denote the Euclidean norm by | · |. In what follows, B^n_2 is the Euclidean unit ball, S^{n−1} is the unit sphere, and σ is the rotationally invariant probability measure on S^{n−1}. Lebesgue measure in R^n is also denoted by | · |. The letters c, c′, c_j, c′_j etc. denote absolute positive constants whose value may change from line to line. Whenever we write a ≈ b, we mean that there exist absolute constants c_1, c_2 > 0 such that c_1 a ≤ b ≤ c_2 a. Similarly, if A, B are sets, then A ≈ B will state that c_1 A ⊆ B ⊆ c_2 A for some absolute constants c_1, c_2 > 0. We refer to Schneider's book [25] for basic facts from the Brunn-Minkowski theory and to the book [2] for basic facts from asymptotic convex geometry. We also refer to [8] for more information on isotropic convex bodies and log-concave probability measures.

Convex bodies.
A convex body in R^n is a compact convex set K ⊂ R^n with non-empty interior. In this work we often consider bounded convex sets K in R^n with 0 ∈ int(K); since their closure is a convex body, we shall call these sets convex bodies too. We say that K is centrally symmetric if K = −K, and that K is centered if its barycenter is at the origin; for a related result of Fradelizi see [8, Proposition 6.1.9]. The radial function ̺_K of K is defined for all x ≠ 0 by ̺_K(x) = sup{λ > 0 : λx ∈ K} and the support function of K is given by h_K(x) = sup{⟨x, y⟩ : y ∈ K} for all x ∈ R^n. The polar body K° of a convex body K in R^n with 0 ∈ int(K) is the convex body K° = {y ∈ R^n : ⟨x, y⟩ ≤ 1 for all x ∈ K}. A convex body K in R^n is called isotropic if it has volume 1, it is centered, and its inertia matrix is a multiple of the identity matrix: there exists a constant L_K > 0, the isotropic constant of K, such that ∫_K ⟨x, ξ⟩² dx = L_K² for every ξ ∈ S^{n−1}.

2.2. Log-concave probability measures. In this article, a Borel measure µ on R^n is called log-concave if µ(H) < 1 for every hyperplane H in R^n and µ(λA + (1 − λ)B) ≥ µ(A)^λ µ(B)^{1−λ} for any compact subsets A, B of R^n and any λ ∈ (0, 1). A theorem of Borell [5] shows that under these assumptions µ has a log-concave density f_µ. A function f : R^n → [0, ∞) is called log-concave if its support {f > 0} is a convex set in R^n and the restriction of ln f to it is concave. If f has finite positive integral then there exist constants A, B > 0 such that f(x) ≤ Ae^{−B|x|} for all x ∈ R^n (see [8, Lemma 2.2.1]). In particular, f has finite moments of all orders. We say that µ is even if µ(−B) = µ(B) for every Borel subset B of R^n, and that µ is centered if ∫⟨x, ξ⟩ f_µ(x) dx = 0 for all ξ ∈ S^{n−1}. Here Cov(µ) is the covariance matrix of µ, with entries Cov(µ)_{ij} = ∫ x_i x_j f_µ(x) dx − ∫ x_i f_µ(x) dx ∫ x_j f_µ(x) dx. We say that a log-concave probability measure µ on R^n is isotropic if it is centered and Cov(µ) = I_n, where I_n is the identity n × n matrix. For every µ there exists an affine transformation T such that T_*µ is isotropic, where T_*µ is the push-forward of µ defined by T_*µ(A) = µ(T^{−1}(A)). Note that a convex body K of volume 1 is isotropic if and only if the log-concave probability measure with density L_K^n 1_{K/L_K} is isotropic. The hyperplane conjecture asks if there exists an absolute constant C > 0 such that L_n := max{L_µ : µ is an isotropic log-concave probability measure on R^n} ≤ C for all n ≥ 1. Bourgain [6] established the upper bound L_n ≤ c n^{1/4} ln n; later, Klartag [19] improved this estimate to L_n ≤ c n^{1/4}. In a breakthrough work, Chen [10] proved that for any ε > 0 there exists n_0(ε) ∈ N such that L_n ≤ n^ε for every n ≥ n_0(ε). Subsequently, Klartag and Lehec [22] showed that L_n ≤ c(ln n)^4, and very recently Klartag [21] achieved the best known bound L_n ≤ c√(ln n).

Centroid bodies.
Let µ be a log-concave probability measure on R^n. For any t ≥ 1 we define the L_t-centroid body Z_t(µ) of µ as the centrally symmetric convex body whose support function is

h_{Z_t(µ)}(ξ) = (∫_{R^n} |⟨x, ξ⟩|^t dµ(x))^{1/t}.

Note that Z_t(T_*µ) = T(Z_t(µ)) for every T ∈ GL(n) and t ≥ 1. Note also that a centered log-concave probability measure µ is isotropic if and only if Z_2(µ) = B^n_2. The next result of Paouris (see [8, Theorem 5.1.17]) provides upper bounds for the volume of the L_t-centroid bodies of isotropic log-concave probability measures.
Theorem 2.1. If µ is a centered log-concave probability measure on R^n, then for every 2 ≤ t ≤ n we have that

|Z_t(µ)|^{1/n} ≤ c √(t/n) [det Cov(µ)]^{1/(2n)},

where c > 0 is an absolute constant. In particular, if µ is isotropic then |Z_t(µ)|^{1/n} ≤ c√(t/n) for all 2 ≤ t ≤ n.
A variant of the L_t-centroid bodies of µ is defined as follows. For every t ≥ 1 we consider the convex body Z^+_t(µ) with support function

h_{Z^+_t(µ)}(ξ) = (∫_{R^n} ⟨x, ξ⟩_+^t dµ(x))^{1/t},

where a_+ = max{a, 0}. When f_µ is even, we have that Z^+_t(µ) = 2^{−1/t} Z_t(µ). In any case, one can easily verify that c Z_t(µ) ⊆ Z^+_t(µ) ⊆ Z_t(µ) for every t ≥ 2, where c > 0 is an absolute constant. One can also check that if 1 ≤ t < s then Z^+_t(µ) ⊆ Z^+_s(µ) ⊆ C(s/t) Z^+_t(µ); the right-hand side inclusion gives h_{Z^+_s(µ)}(ξ) ≤ C(s/t) h_{Z^+_t(µ)}(ξ) for all ξ ∈ S^{n−1}, where C > 1 is an absolute constant. For a proof of all these claims see [17], where the family of bodies Z̃^+_t(µ) = 2^{1/t} Z^+_t(µ) is considered. We have made the necessary adjustments in the inclusions that we use.

2.4. The bodies B_t(µ). Let µ be a probability measure on R^n. We define

Λ_µ(ξ) = ln ∫_{R^n} e^{⟨x, ξ⟩} dµ(x),

the logarithmic Laplace transform of µ. We also define Λ*_µ = L(Λ_µ), where, given a convex function g : R^n → (−∞, ∞], the Legendre transform L(g) of g is defined by

L(g)(x) = sup_{ξ ∈ R^n} (⟨x, ξ⟩ − g(ξ)).

The function Λ*_µ is called the Cramér transform of µ and plays a crucial role in the theory of large deviations. For every t ≥ 1 we define

B_t(µ) = {x ∈ R^n : Λ*_µ(x) ≤ t}.

For every t > 0 we also set M_t(µ) = {v ∈ R^n : h_{Z_t(µ)}(v) ≤ 1}. We say that a measure µ on R^n is α-regular if for any s ≥ t ≥ 2 and every v ∈ R^n, h_{Z_s(µ)}(v) ≤ α(s/t) h_{Z_t(µ)}(v). The next proposition compares B_t(µ) with Z_t(µ) when µ is α-regular.
Proposition 2.3. If µ is α-regular for some α ≥ 1, then for any t ≥ 2 we have B_t(µ) ⊆ 4eα Z_t(µ).

Proof. We fix u ∈ M_t(µ) and set ũ := tu/(2eα). Then one checks, using the α-regularity of µ, that Λ_µ(ũ) ≤ t, and the claim follows. Now, let v ∉ 4eαZ_t(µ). We can find u ∈ M_t(µ) such that ⟨v, u⟩ > 4eα, and then

Λ*_µ(v) ≥ ⟨v, ũ⟩ − Λ_µ(ũ) > 2t − t = t,

so that v ∉ B_t(µ). By Proposition 2.2, Proposition 2.3 holds true (with an absolute constant in place of 4eα) for every log-concave probability measure.
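As a concrete illustration (our computation, not from the text), for the standard Gaussian measure $\gamma_n$ all of the objects of this subsection can be computed in closed form:

```latex
\[
\Lambda_{\gamma_n}(\xi)=\ln\int_{\mathbb{R}^n}e^{\langle x,\xi\rangle}\,d\gamma_n(x)
=\frac{|\xi|^2}{2},
\qquad
\Lambda^{*}_{\gamma_n}(x)=\sup_{\xi\in\mathbb{R}^n}\Big(\langle x,\xi\rangle-\frac{|\xi|^2}{2}\Big)
=\frac{|x|^2}{2},
\]
\[
B_t(\gamma_n)=\{x\in\mathbb{R}^n:\Lambda^{*}_{\gamma_n}(x)\leq t\}=\sqrt{2t}\,B_2^n,
\qquad
|B_t(\gamma_n)|^{1/n}=\sqrt{2t}\,|B_2^n|^{1/n}\approx\sqrt{t/n},
\]
```

since $|B_2^n|^{1/n}\approx 1/\sqrt{n}$. This matches the order $\sqrt{t/n}$ of the volume bound for $B_t(\mu)$ obtained by combining Proposition 2.3 with Theorem 2.1.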
2.5. Ball's bodies K_t(µ). If µ is a log-concave probability measure on R^n then, for every t > 0, we define

K_t(µ) = {x ∈ R^n : ∫_0^∞ f_µ(rx) r^{t−1} dr ≥ f_µ(0)/t}.

From the definition it follows that the radial function of K_t(µ) is given by

̺_{K_t(µ)}(x) = ((t/f_µ(0)) ∫_0^∞ f_µ(rx) r^{t−1} dr)^{1/t}

for x ≠ 0. The bodies K_t(µ) were introduced by K. Ball, who also established their convexity. If µ is also centered then, for every 0 < t ≤ s, one has inclusions (2.6) comparing K_t(µ) and K_s(µ), with constants of the form Γ(t + 1)^{1/t}/Γ(s + 1)^{1/s} (see e.g. [8, Lemma 2.5.6]), and we can use the inclusions (2.6) in order to estimate the volume of K_t(µ) for every t > 0; this gives the estimates (2.7) and (2.8). We are mainly interested in the convex body K_{n+1}(µ). We shall use the fact that K_{n+1}(µ) is centered (see [8, Proposition 2.5.3 (v)]) and that

(2.9) |K_{n+1}(µ)|^{1/n} ≈ f_µ(0)^{−1/n}.

The last estimate follows immediately from (2.7) and (2.8).

Upper bound for the expected value of the half-space depth
Let µ and ν be two log-concave probability measures on R^n with the same barycenter. If T : R^n → R^n is an invertible affine transformation and T_*µ is the push-forward of µ defined by T_*µ(A) = µ(T^{−1}(A)), then we observe that ϕ_{T_*µ}(x) = ϕ_µ(T^{−1}(x)) for all x ∈ R^n, and hence

∫_{R^n} ϕ_{T_*µ}(x) d(T_*ν)(x) = ∫_{R^n} ϕ_µ(x) dν(x).

Therefore, Theorem 1.1 is a consequence of Theorem 1.2. Starting with a log-concave probability measure µ on R^n, we consider an affine transformation T such that T_*µ is isotropic and then apply Theorem 1.2 to the measures T_*µ and ν = T_*µ.
Proof of Theorem 1.2. Consider two isotropic log-concave probability measures µ, ν on R^n. We will show that

∫_{R^n} ϕ_µ(x) dν(x) ≤ e^{−cn/L_ν²}

for some absolute constant c > 0. We start with the observation that for any v ∈ R^n the half-space H = {y : ⟨y − x, v⟩ ≥ 0} belongs to H(x), and Markov's inequality gives µ(H) ≤ exp(Λ_µ(v) − ⟨x, v⟩); taking the infimum over all v ∈ R^n we see that ϕ_µ(x) ≤ exp(−Λ*_µ(x)). Then we write

∫_{R^n} ϕ_µ(x) dν(x) ≤ ν(B_t(µ)) + e^{−t}, t > 0.

Fix b ∈ (2/n, 1/2] which will be chosen appropriately. Since ν(B_t(µ)) ≤ 1 and also ν(B_t(µ)) ≤ ∥f_ν∥_∞ |B_t(µ)| for all t > 0, we may write

∫_{R^n} ϕ_µ(x) dν(x) ≤ min{1, ∥f_ν∥_∞ |B_{bn}(µ)|} + e^{−bn}.

Applying Proposition 2.3 and Theorem 2.1 we get |B_t(µ)|^{1/n} ≤ c_1 |Z_t(µ)|^{1/n} ≤ c_2 √(t/n) for all 2 ≤ t ≤ n, where c_1, c_2 > 0 are absolute constants. It is also known that L_ν ≥ c_3, where c_3 > 0 is an absolute constant (see [8, Proposition 2.3.12] for a proof). Since ν is isotropic, ∥f_ν∥_∞^{1/n} ≤ c L_ν, and hence

∥f_ν∥_∞^{1/n} |B_{bn}(µ)|^{1/n} ≤ c_4 L_ν √b.

So, we may choose b = b_0 := c_5/L_ν² with c_5 > 0 a small enough absolute constant; note that b_0 ≤ 1/2 because L_ν ≥ c_3, so b_0 n ≤ n/2, and the function t ↦ t^{n/2} e^{−t} is increasing on [0, n/2]. Moreover, |B_2(µ)|^{1/n} ≤ c_2 √(2/n). Combining the above we get

∫_{R^n} ϕ_µ(x) dν(x) ≤ 2^{−n} + e^{−c_5 n/L_ν²} ≤ e^{−cn/L_ν²},

which implies the result.
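To see the mechanism in the simplest possible case (our computation, not part of the proof), take $\mu=\nu=\gamma_n$, the standard Gaussian measure, for which $\Lambda^{*}_{\gamma_n}(x)=|x|^2/2$. The first step of the proof already gives an exponential bound, without any splitting of the integral:

```latex
\[
\int_{\mathbb{R}^n}\varphi_{\gamma_n}(x)\,d\gamma_n(x)
\;\leq\;\int_{\mathbb{R}^n}e^{-\Lambda^{*}_{\gamma_n}(x)}\,d\gamma_n(x)
\;=\;(2\pi)^{-n/2}\int_{\mathbb{R}^n}e^{-|x|^{2}}\,dx
\;=\;2^{-n/2},
\]
```

using $\int_{\mathbb{R}^n}e^{-|x|^2}\,dx=\pi^{n/2}$. Since $L_{\gamma_n}$ is bounded by an absolute constant, this is consistent with the bound $e^{-cn/L_\nu^2}$ of Theorem 1.2.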
Remark 3.1. In the introduction we have already mentioned that the assumption that both µ and ν are isotropic is not necessary. One may consider different situations, where µ and ν are centered and ∥f_ν∥_∞ is comparable with ∥f_µ∥_∞. For example, the next result can be obtained with the ideas that were used in the proof of Theorem 1.2.
Theorem 3.2. Let µ and ν be two centered log-concave probability measures on R^n, n ≥ n_0, such that ∥f_µ∥_∞ = ∥f_ν∥_∞. Then

∫_{R^n} ϕ_µ(x) dν(x) ≤ e^{−cn/L_µ²},

where c > 0 and n_0 ∈ N are absolute constants.
The proof of Theorem 3.2 is quite similar to the one of Theorem 1.2. We fix b ∈ (2/n, 1/2], write the analogous decomposition of ∫ ϕ_µ dν, and then use the upper bound ν(B_t(µ)) ≤ ∥f_ν∥_∞ |B_t(µ)|.

Lower bound for the expected value of the half-space depth
In this section we show that the exponential estimate of Theorem 1.1 is sharp. As explained in the introduction, for the reader's convenience we consider first the simpler case where µ is the uniform measure on a convex body K in R^n and then present the more technical tools and computations that are required for the case of an arbitrary log-concave probability measure µ on R^n.

Uniform measure on a convex body
The next proposition provides an exponential lower bound for E_{µ_K}(ϕ_{µ_K}), where µ_K is the uniform measure on K.

Proposition 4.1. Let K be a convex body of volume 1 in R^n. Then

E_{µ_K}(ϕ_{µ_K}) ≥ e^{−cn},

where c > 0 is an absolute constant.
Proof. By translation invariance we may assume that the barycenter of K is at the origin. Let x ∈ (1/2)K. We will show that ϕ_{µ_K}(x) ≥ e^{−cn}. By the definition of ϕ_{µ_K}(x) we only have to check the half-spaces H ∈ H(x) for which x is a boundary point, so it suffices to bound inf |K ∩ {y : ⟨y − x, ξ⟩ ≥ 0}|, where the infimum is over all ξ ∈ S^{n−1}. Moreover, we may consider only those ξ ∈ S^{n−1} that satisfy ⟨x, ξ⟩ ≥ 0, because if ⟨x, ξ⟩ < 0 then |K ∩ {y : ⟨y − x, ξ⟩ ≥ 0}| ≥ |K ∩ {y : ⟨y, ξ⟩ ≥ 0}| ≥ 1/e by Grünbaum's lemma. We write x = z/2 for some z ∈ K. If w ∈ K satisfies ⟨w, ξ⟩ ≥ 0, then y := (w + z)/2 ∈ K and ⟨y − x, ξ⟩ = ⟨w, ξ⟩/2 ≥ 0; it follows that

|K ∩ {y : ⟨y − x, ξ⟩ ≥ 0}| ≥ 2^{−n} |K ∩ {w : ⟨w, ξ⟩ ≥ 0}| ≥ 2^{−n}/e ≥ e^{−cn}

for some absolute constant c > 0, again by Grünbaum's lemma.
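For reference, the form of Grünbaum's lemma used above states that if $K$ is a convex body in $\mathbb{R}^n$ with barycenter at the origin, then every central half-space carries a constant proportion of the volume:

```latex
\[
\frac{\big|K\cap\{y:\langle y,\xi\rangle\geq 0\}\big|}{|K|}
\;\geq\;\Big(\frac{n}{n+1}\Big)^{n}\;\geq\;\frac{1}{e}
\qquad\text{for every }\xi\in S^{n-1}.
\]
```

The constant $(n/(n+1))^n$ is sharp, attained for a cone, and its limit $1/e$ is the dimension-free bound used in the estimates of this section.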

Log-concave probability measures
Next, we assume that µ is a log-concave probability measure on R^n. Our aim is to prove the next theorem.
Theorem 4.2. Let µ be a log-concave probability measure on R^n. Then

E_µ(ϕ_µ) ≥ e^{−cn},

where c > 0 is an absolute constant.
By the affine invariance of E_µ(ϕ_µ) we may assume that µ is centered. The proof is based on a number of observations. The first one is a consequence of the Paley-Zygmund inequality; we just adapt here the proof of [8, Lemma 11.3.3] to give a lower bound for ϕ_µ(x) when x ∈ δZ^+_t(µ) for some δ ∈ (0, 1).
Proof. Let x ∈ δZ^+_t(µ). As in the proof of Proposition 4.1, using Grünbaum's lemma we see that it is enough to bound from below inf µ({z : ⟨z − x, ξ⟩ ≥ 0}), where the infimum is over all ξ ∈ S^{n−1} with ⟨x, ξ⟩ ≥ 0. Since x ∈ δZ^+_t(µ), we have ⟨x, ξ⟩ ≤ δ h_{Z^+_t(µ)}(ξ) for any such ξ ∈ S^{n−1}, so it is enough to bound from below µ({z : ⟨z, ξ⟩ ≥ δ h_{Z^+_t(µ)}(ξ)}). We apply the Paley-Zygmund inequality for the function g(z) = ⟨z, ξ⟩_+^t. From (2.4) we see that E_µ(g²) ≤ C_1^{2t} (E_µ g)² for some absolute constant C_1 > 0, and the lemma follows.
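For reference, the Paley-Zygmund inequality invoked in the proof states that if $g\geq 0$ has finite second moment and $\theta\in(0,1)$, then

```latex
\[
\mathbb{P}\big(g\geq\theta\,\mathbb{E}g\big)\;\geq\;(1-\theta)^{2}\,
\frac{(\mathbb{E}g)^{2}}{\mathbb{E}(g^{2})}.
\]
```

Applied to $g(z)=\langle z,\xi\rangle_+^t$, a reverse moment comparison of the form $\mathbb{E}(g^2)\leq C^{2t}(\mathbb{E}g)^2$ turns this into a lower bound of the order $e^{-ct}$ for the half-space mass, which is the shape of estimate needed here.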
Definition 4.4. For every t ≥ 1 we consider the convex set

R_t(µ) = {x ∈ R^n : f_µ(x) ≥ e^{−t} f_µ(0)}.

The convexity of R_t(µ) is an immediate consequence of the log-concavity of f_µ. Note that R_t(µ) is bounded and 0 ∈ int(R_t(µ)).

Lemma 4.5. For every t ≥ 5n we have that R_t(µ) ⊇ c_0 K_{n+1}(µ), where c_0 > 0 is an absolute constant.
Proof. Let t ≥ 5n. Given any ξ ∈ S^{n−1} consider the log-concave function h(r) := f_µ(rξ), r ≥ 0. By the definition of K_n(µ) we have

̺_{K_n(µ)}(ξ)^n = (n/f_µ(0)) ∫_0^∞ h(r) r^{n−1} dr.

On the other hand, standard one-dimensional estimates for the log-concave function h bound this integral in terms of the last point where h exceeds e^{−t} h(0). Using also the fact that ∥f_µ∥_∞ ≤ e^n f_µ(0) from (2.2), we get ̺_{R_t(µ)}(ξ) ≥ c_0 ̺_{K_n(µ)}(ξ). This shows that R_t(µ) ⊇ c_0 K_n(µ), where c_0 > 0 is an absolute constant. From (2.6) we know that K_n(µ) ≈ K_{n+1}(µ), and this completes the proof.
Lemma 4.6. For every t ≥ 5n we have that Z^+_t(µ) ⊇ c′_0 R_t(µ), where c′_0 > 0 is an absolute constant.
Proof of Theorem 4.2. Combining Lemma 4.5 and Lemma 4.6 we see that Z^+_{5n}(µ) ⊇ c_1 K_{n+1}(µ) for some absolute constant c_1 > 0. We apply Lemma 4.3 with t = 5n and δ = 1/2. For every x ∈ (1/2)Z^+_{5n}(µ) we have ϕ_µ(x) ≥ e^{−c_2 n}. Then, by the inclusion above, µ((1/2)Z^+_{5n}(µ)) ≥ µ((c_1/2) K_{n+1}(µ)), and using also (2.9) we get µ((1/2)Z^+_{5n}(µ)) ≥ e^{−c_3 n}. Combining the above we conclude that

E_µ(ϕ_µ) ≥ ∫_{(1/2)Z^+_{5n}(µ)} ϕ_µ(x) dµ(x) ≥ e^{−c_2 n} e^{−c_3 n} ≥ e^{−cn}.

Bounds for the expected measure of random polytopes

Let µ be a log-concave probability measure on R^n. Let X_1, X_2, ... be independent random points in R^n distributed according to µ and for any N > n consider the random polytope K_N = conv{X_1, ..., X_N}. Given a second log-concave probability measure ν on R^n with the same barycenter as µ, consider the expectation E_{µ^N}[ν(K_N)] of the ν-measure of K_N. Note that if T : R^n → R^n is an invertible affine transformation and T_*µ is the push-forward of µ defined by T_*µ(A) = µ(T^{−1}(A)), then E_{(T_*µ)^N}[(T_*ν)(K_N)] = E_{µ^N}[ν(K_N)]. So, we may assume that µ is isotropic and ν is centered. In the next theorem we actually assume that both µ and ν are isotropic.
Theorem 5.1. Let µ and ν be isotropic log-concave probability measures on R^n, n ≥ n_0. For any N ≤ exp(c_1 n/L_ν²) we have that

E_{µ^N}[ν(K_N)] ≤ e^{−c_2 n/L_ν²},

where c_1, c_2 > 0 and n_0 ∈ N are absolute constants.
The proof of Theorem 5.1 will exploit the same tools that were used in the previous section, combined with a variant of the standard lemma that is used in order to establish upper thresholds. Recall that B_t(µ) = {x ∈ R^n : Λ*_µ(x) ≤ t}, where Λ*_µ is the Cramér transform of µ. A version of the next lemma appeared originally in [13].
Lemma 5.2. Let t > 0. For every N > n we have

E_{µ^N}[ν(K_N)] ≤ ν(B_t(µ)) + N e^{−t}.

Proof. Observe that if H is a closed half-space containing x, and if x ∈ K_N, then there exists i ≤ N such that X_i ∈ H (otherwise we would have x ∈ K_N ⊆ H′, where H′ is the complementary half-space). It follows that

P_{µ^N}(x ∈ K_N) ≤ N ϕ_µ(x) ≤ N exp(−Λ*_µ(x)).

Then, Fubini's theorem shows that

E_{µ^N}[ν(K_N)] = ∫_{R^n} P_{µ^N}(x ∈ K_N) dν(x) ≤ ν(B_t(µ)) + N e^{−t}

for every N > n and 2 ≤ t ≤ n. Recall that ν is isotropic, therefore ∥f_ν∥_∞^{2/n} ≈ L_ν² (Klartag's estimate for L_n gives much more). Then, if n ≥ n_0 where n_0 ∈ N is an absolute constant, the choice t = c_2 n/∥f_ν∥_∞^{2/n} satisfies 2 ≤ t ≤ n and gives

ν(B_t(µ)) ≤ ∥f_ν∥_∞ |B_t(µ)| ≤ (c_1 ∥f_ν∥_∞^{1/n} √(t/n))^n ≤ e^{−n},

where c_2 = (c_1 e)^{−2}. It follows that if N ≤ exp(c_3 n/∥f_ν∥_∞^{2/n}), where c_3 = c_2/2, then we have

E_{µ^N}[ν(K_N)] ≤ e^{−n} + e^{−c_3 n/∥f_ν∥_∞^{2/n}},

and the result follows from the fact that ∥f_ν∥_∞^{2/n} ≈ L_ν².

Remark 5.3. Let µ be isotropic. For the proof of Theorem 5.1, what we actually need about ν is that ν is centered and that ∥f_ν∥_∞ is appropriately bounded. Then the argument of the previous proof gives an upper threshold in terms of ∥f_ν∥_∞^{2/n}. Note that the proof of (1.5) in [9] exploits the same ideas. The role of ν is played by the uniform measure on a convex body K, therefore ∥f_ν∥_∞ = 1/|K|. On the other hand, µ is isotropic and supported on K, and hence c_3 n/∥f_ν∥_∞^{2/n} ≥ cn/L_µ², which (combined with the above) proves (1.5). A second possible normalization is to assume that µ and ν are simply centered and that ∥f_µ∥_∞ = ∥f_ν∥_∞. Then, starting the computation as in the proof of Theorem 5.1, choosing t = (c_1 e)^{−2} n/L_µ² and continuing as above, we get:

Theorem 5.4. Let µ and ν be two centered log-concave probability measures on R^n with ∥f_µ∥_∞ = ∥f_ν∥_∞. For any N ≤ exp(c_1 n/L_µ²) we have that

E_{µ^N}[ν(K_N)] ≤ e^{−c_2 n/L_µ²},

where c_1, c_2 > 0 are absolute constants.
We pass now to the lower threshold. It is useful to observe that in the case where X_1, X_2, ... are uniformly distributed in the Euclidean unit ball, the sharp threshold for the problem (see [24] and [3]) is exp((1 ± ε)(n/2) ln n), ε > 0.
We concentrate on the case ν = µ of our problem, in which we shall establish a weak lower threshold of this order. The precise formulation of our result is the following.
Theorem 5.5. Let δ ∈ (0, 1). Then

inf_µ inf{E_{µ^N}[µ((1 + δ)K_N)] : N ≥ exp(Cδ^{−1} ln(2/δ) n ln n)} → 1

as n → ∞, where the first infimum is over all centered log-concave probability measures µ on R^n and C > 0 is an absolute constant.
This is a weak threshold in the sense that we consider the expected measure of (1 + δ)K_N instead of K_N, where δ > 0 is arbitrarily small. The reason for this is the dependence on δ in the next technical proposition.

Proposition 5.6. Let µ be an isotropic log-concave probability measure on R^n. For any δ ∈ (0, 1) and any t ≥ C_δ n ln n we have that µ((1 + δ)Z^+_t(µ)) ≥ 1 − e^{−c_δ t}, where C_δ = Cδ^{−1} ln(2/δ) and c_δ = cδ are positive constants depending only on δ.
Proof. Let δ ∈ (0, 1) and set ε = δ/5. Fix t ≥ n which will be determined. Recall that b ≥ 1 is an absolute constant and set C′_ε = 3b/ε. It follows that there exists a suitable net of directions certifying the claim. To see this, consider the function ℓ arising from the preceding estimates. It is easily checked that ℓ is increasing on [2n/ln(1 + ε), ∞). Therefore, if t ≥ C_ε n ln n, where C_ε = Cε^{−1} ln(2/ε) for a large enough absolute constant C > 0, one can check that ℓ(t) ≥ ℓ(C_ε n ln n) > 0. This implies (5.2). Since ε = δ/5, we obtain the assertion of the proposition with the stated dependence of the constants C_δ, c_δ on δ.
For the proof of Theorem 5.5 we also need a basic fact that plays a main role in the proof of all the lower thresholds that have been obtained so far. It is stated in the form below in [9, Lemma 3]. For a proof see [13] or [15, Lemma 4.1].
We have already mentioned that Theorem 5.5 provides a weak threshold in the sense that we estimate the expectation E_{µ^N}[µ((1 + δ)K_N)] (for an arbitrarily small but positive value of δ) while the original question is about E_{µ^N}[µ(K_N)]. The next result provides an estimate where "δ is removed"; however, the dependence on n is worse. The argument below was suggested by the referee and replaces our much more complicated original argument, leading to the same final estimate.

Theorem 5.8. We have

inf_µ inf{E_{µ^N}[µ(K_N)] : N ≥ exp(C u(n) n ln n)} → 1

as n → ∞, where the first infimum is over all log-concave probability measures µ on R^n and u(n) is any function with u(n) → ∞ as n → ∞.
Proof. Let µ be a log-concave probability measure on R^n. Since the expectation E_{µ^N}[µ(K_N)] is an affinely invariant quantity, we may assume that µ is centered. Note that if A ⊂ R^n is a Borel set, then

µ((1 + δ)A) = (1 + δ)^n ∫_A f_µ((1 + δ)x) dx.

Since f_µ is log-concave, we see that f_µ((1 + δ)x) ≤ e^{nδ} f_µ(x) for every x ∈ R^n, because f_µ(x) ≤ e^n f_µ(0) by (2.2). It follows that

(5.4) µ((1 + δ)A) ≤ (1 + δ)^n e^{nδ} µ(A) ≤ e^{2nδ} µ(A).

Then we choose b of the order of 1/L_µ² and continue as in the proof of Theorem 1.2.
This is a result of Fradelizi; for a proof see [8, Theorem 2.2.2]. Note that if K is a convex body in R^n then the Brunn-Minkowski inequality implies that the indicator function 1_K of K is the density of a log-concave measure, the Lebesgue measure on K. If µ is a log-concave measure on R^n with density f_µ, we define the isotropic constant of µ by

(2.3) L_µ := ∥f_µ∥_∞^{1/n} [det Cov(µ)]^{1/(2n)}.

For all s ≥ t we have M_s(µ) ⊆ M_t(µ) and Z_t(µ) ⊆ Z_s(µ). If the measure µ is α-regular, then M_t(µ) ⊆ α(s/t) M_s(µ) and Z_s(µ) ⊆ α(s/t) Z_t(µ) for all s ≥ t ≥ 2. Moreover, for every centered probability measure µ we have Λ*_µ(0) = 0 by Jensen's inequality, and the convexity of Λ*_µ implies that B_t(µ) ⊆ B_s(µ) ⊆ (s/t) B_t(µ) for all s ≥ t > 0. Recall that, by Borell's lemma, every log-concave probability measure is c-regular (see [8, Theorem 2.4.6] for a proof).

Proposition 2.2. Every log-concave probability measure is c-regular, where c ≥ 1 is an absolute constant.
Let b ≥ 1 and consider an (ε/bt)-net N of the Euclidean unit sphere S^{n−1} with cardinality |N| ≤ (1 + 2bt/ε)^n ≤ (3bt/ε)^n; for a proof of the estimate on the cardinality of N see e.g. [2, Lemma 5.2.5].