Estimating the probability that a given vector is in the convex hull of a random sample

For a d-dimensional random vector X, let p_{n,X}(θ) be the probability that the convex hull of n independent copies of X contains a given point θ. We provide several sharp inequalities regarding p_{n,X}(θ) and N_X(θ), the smallest n for which p_{n,X}(θ) ≥ 1/2. As a main result, we derive the totally general inequality 1/2 ≤ α_X(θ)N_X(θ) ≤ 3d + 1, where α_X(θ) (a.k.a. the Tukey depth) is the minimum probability that X is in a fixed closed halfspace containing the point θ. We also show several applications of our general results: one is a moment-based bound on N_X(E[X]), which is an important quantity in randomized approaches to cubature construction or the measure reduction problem. Another application is the determination of the canonical convex body included in a random convex polytope given by independent copies of X, where our combinatorial approach allows us to significantly generalize existing results from the random matrix community.


Introduction
Consider generating independent and identically distributed d-dimensional random vectors. How many vectors do we have to generate so that a point θ ∈ R^d is contained in the convex hull of the sample with probability at least 1/2? More generally, what is the probability of this event for an n-point sample for each n? These questions were first solved by Wendel (1963) for a general distribution with a certain symmetry about θ. Let us describe the problem more formally.
Let X be a d-dimensional random vector and X_1, X_2, . . . be independent copies of X. For each θ ∈ R^d and positive integer n, define

p_{n,X}(θ) := P(θ ∈ conv{X_1, . . ., X_n}),    N_X(θ) := min{n ≥ 1 | p_{n,X}(θ) ≥ 1/2},

where N_X(θ) represents the reasonable number of observations we need. As p_{n,X} and N_X only depend on the probability distribution of X, we also write p_{n,µ} and N_µ when X follows the distribution µ. We want to evaluate p_{n,X} as well as N_X for a general X. Wendel (1963) showed that

p_{n,X}(0) = 1 − 2^{−(n−1)} Σ_{k=0}^{d−1} \binom{n−1}{k}    (1)

holds for an X such that X and −X have the same distribution and X_1, . . ., X_d are almost surely linearly independent. In particular, N_X(0) = 2d holds for such random vectors. For an X whose distribution is absolutely continuous with respect to the Lebesgue measure, Wagner and Welzl (2001) showed more generally that the right-hand side of (1) is an upper bound of p_{n,X}, and they also characterized the condition for equality (see Theorem 6). Moreover, Kabluchko and Zaporozhets (2020) recently gave an explicit formula for p_{n,X} when X is a shifted Gaussian.
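As a concrete illustration (not part of the original development), the event θ ∈ conv{X_1, . . ., X_n} can be tested as a linear feasibility problem, which yields a simple Monte Carlo estimator of p_{n,X}(θ). The following sketch assumes SciPy's linprog; the sampler draw_X is a hypothetical stand-in for the distribution of X.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, theta):
    """Test theta in conv(points) via the LP feasibility problem
    sum_i w_i x_i = theta, sum_i w_i = 1, w >= 0 (no objective needed)."""
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])  # (d+1) x n constraints
    b_eq = np.append(theta, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

def estimate_p(draw_X, n, theta, trials=1000, seed=0):
    """Monte Carlo estimate of p_{n,X}(theta)."""
    rng = np.random.default_rng(seed)
    hits = sum(in_convex_hull(draw_X(n, rng), theta) for _ in range(trials))
    return hits / trials

# Sanity check against Wendel's formula: for the standard Gaussian in d = 2,
# p_{n,X}(0) = 1 - 2^{-(n-1)} * (1 + (n-1)), e.g. 1 - 5/16 = 0.6875 for n = 5.
draw = lambda n, rng: rng.standard_normal((n, 2))
print(estimate_p(draw, n=5, theta=np.zeros(2)))
```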
In this paper, our aim is to give generic bounds of p_{n,X} and N_X, and we are particularly interested in upper bounds of N_X, which are opposite in direction to the bound given by Wagner and Welzl (2001). Estimating p_{n,X} and N_X is of great interest in applications ranging from numerical analysis to statistics and compressed sensing. As a by-product, we also give a general result describing the deterministic body included in the random polytope conv{X_1, . . ., X_n}, which sharply generalizes a recent work from the random matrix community (Guédon et al., 2019). The remainder of this section explains the motivation from related fields and the implications of our results in more detail.
Throughout the paper, let ⟨·, ·⟩ be any inner product on R^d, and let ‖·‖ be the norm it induces.
Cubature and measure reduction

Theorem 1 (Tchakaloff). Let µ be a probability measure on a measurable space X and f_1, . . ., f_d be µ-integrable functions. Then, there exist points x_1, . . ., x_{d+1} ∈ X and weights w_1, . . ., w_{d+1} ≥ 0 with Σ_{j=1}^{d+1} w_j = 1 such that

∫_X f_i dµ = Σ_{j=1}^{d+1} w_j f_i(x_j),  i = 1, . . ., d.    (2)

The proof is essentially given by the classical Carathéodory theorem. The points and weights treated in Tchakaloff's theorem are an important object in the field of numerical integration, called cubature (Stroud, 1971). An equivalent problem is also treated as a beneficial way of data compression in the field of data science (Maalouf et al., 2019; Cosentino et al., 2020). A typical choice of test functions f_i is monomials when X is a subset of a Euclidean space, so integration with respect to the measure Σ_{j=1}^{d+1} w_j δ_{x_j} gives a good approximation of ∫_X f dµ for a smooth integrand f. However, constructions in more general settings are also useful; for example, in cubature on Wiener space (Lyons and Victoir, 2004), X is the space of continuous paths, µ is the Wiener measure, and the test functions are iterated integrals of paths.
For this generalized cubature construction (or measure reduction) problem, there are efficient deterministic approaches (Litterer and Lyons, 2012; Tchernychova, 2015; Maalouf et al., 2019) when µ is discrete. Using randomness for the construction has recently been considered (Cosentino et al., 2020; Hayakawa, 2020), and it is important to know p_{n,X}(E[X]) for the d-dimensional random variable X = f(Y), where Y is drawn from µ. Indeed, once we have E[X] ∈ conv{X_1, . . ., X_n} (where X_i = f(Y_i) are independent copies of X), we can choose d + 1 points and weights satisfying (2) by solving a simple linear programming problem. An evaluation of N_X is sought for estimating the computational complexity of this naive scheme.
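To make the last step concrete, here is a hedged sketch of the LP in question: once E[X] ∈ conv{X_1, . . ., X_n}, a basic feasible solution of the moment-matching LP is supported on at most d + 1 atoms, which is exactly the Tchakaloff-type reduction. The use of SciPy's HiGHS solver (which returns a vertex solution when its simplex variant runs) is our illustrative assumption, not a prescription from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def reduce_measure(samples, target, tol=1e-12):
    """Find w >= 0 with sum_i w_i = 1 and sum_i w_i X_i = target.
    A vertex (basic feasible) solution has at most d + 1 nonzero weights."""
    n = samples.shape[0]
    A_eq = np.vstack([samples.T, np.ones((1, n))])
    b_eq = np.append(target, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    if not res.success:
        return None  # target not yet in the hull: draw more samples
    keep = res.x > tol
    return samples[keep], res.x[keep]
```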

Statistical depth
From the statistical context, p_{d+1,X}(θ) for a d-dimensional X is called the simplicial depth of θ ∈ R^d with respect to the (population) distribution of X (Liu, 1990; Cascos, 2007), which mathematically characterizes the intuitive "depth" of each point θ given the distribution of X. For an empirical measure, it corresponds to the number of simplices (whose vertices are data points) containing θ.
There are also various other concepts measuring depth, collectively called statistical depth (Cascos, 2007; Mosler, 2013). One of the first such concepts is the halfspace depth proposed by Tukey (1975):

α_X(θ) := inf_{‖c‖=1} P(⟨c, X − θ⟩ ≥ 0),

which can equivalently be defined as the minimum measure of a closed halfspace containing θ. Donoho et al. (1992) and Rousseeuw and Ruts (1999) extensively studied general features of α_X. We call it the Tukey depth throughout the paper.
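For intuition, the Tukey depth of an empirical distribution can be bounded from above by scanning random directions; the following minimal sketch (our illustration, not an algorithm from the paper) approaches α_X(θ) only as the number of directions grows.

```python
import numpy as np

def tukey_depth_upper(points, theta, n_dirs=10_000, seed=0):
    """Upper bound on the Tukey depth of theta w.r.t. the empirical
    measure of `points`: minimize over random unit vectors c the mass
    of the closed halfspace {x : <c, x - theta> >= 0}."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    depth = 1.0
    for _ in range(n_dirs):
        c = rng.standard_normal(d)
        c /= np.linalg.norm(c)
        depth = min(depth, np.mean((points - theta) @ c >= 0.0))
    return depth
```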
Our finding is that these two depth notions are indeed deeply related. We prove that the rate of convergence p_{n,X} → 1 is essentially determined by α_X (Proposition 13), and we obtain the beautiful relation 1/2 ≤ α_X N_X ≤ 3d + 1 in Theorem 16.

Inclusion of deterministic convex bodies
Although we have so far discussed p_{n,X}(θ), which only describes the probability that a single vector is contained in the random convex polytope, several other aspects of such random polytopes have been studied (Majumdar et al., 2010; Hug, 2013). In particular, deterministic convex bodies associated with the distribution of a random vector have also been studied. For example, one consequence of the well-known Dvoretzky-Milman theorem (see, e.g., Vershynin 2018, Chapter 11) is that the convex hull of n independent samples from the d-dimensional standard normal distribution is "approximately" a Euclidean ball of radius ∼ √(log n) with high probability for sufficiently large n. Mainly in the context of random matrices, there has been a series of research on the interior convex body of conv{X_1, . . ., X_n} or its "absolute" version conv{±X_1, . . ., ±X_n} for various classes of X such as Gaussian, Rademacher, or vectors with i.i.d. subgaussian entries (Gluskin, 1989; Giannopoulos and Hartzoulaki, 2002; Litvak et al., 2005; Dafnis et al., 2009; Guédon et al., 2020). One result about the Rademacher vector is the following:

Theorem 2 (Giannopoulos and Hartzoulaki 2002). Let d be a sufficiently large positive integer and X_1, X_2, . . . be independent samples from the uniform distribution over the set {−1, 1}^d ⊂ R^d. Then, there exists an absolute constant c > 0 such that, for each integer n ≥ d(log d)², we have, with high probability, conv{±X_1, . . ., ±X_n} ⊇ c(√(log(n/d)) B_2^d ∩ B_∞^d), where B_2^d and B_∞^d denote the Euclidean and supremum unit balls.

Although each of those results in the literature was based on specific assumptions on the distribution of X, Guédon et al. (2019) found a way of treating these results in a unified manner under some technical assumptions on X. They introduced the floating body associated with X,

K̃_α(X) := {s ∈ R^d | P(⟨s, X⟩ ≥ 1) ≤ α},

to our context (the notation here is slightly changed from the original one), and argued that, under some assumptions on X, with high probability, conv{X_1, . . ., X_n} includes a constant multiple of the polar body of K̃_α(X) with log(1/α) ∼ 1 + log(n/d). Note that their main object of interest is the absolute convex hull, but their results can be extended to the ordinary convex hull (see Guédon et al. 2019, Remark 1.7).
Let us explain more formally. Firstly, for a set A ⊂ R^d, the polar body of A is defined as A° := {x ∈ R^d | ⟨x, a⟩ ≤ 1 for all a ∈ A}. Secondly, we shall describe the assumptions used in Guédon et al. (2019). Let |||·||| be a norm on R^d and γ, δ, r, R > 0 be constants; their assumptions are stated in terms of these quantities. Under these conditions, they proved, by using concentration inequalities, that a constant multiple of K̃_α(X)° is included in the random polytope with high probability (Theorem 3).
Though computing K̃_α(X)° for an individual X is not necessarily an easy task, this gives a unified understanding of existing results in terms of the polar of the floating body K̃_α(X). However, its use is limited by the technical assumptions. In this paper, we show that we can completely remove the assumptions in Theorem 3 and obtain a similar statement with explicit constants (see Proposition 22 and Corollary 25, or the next section). Finally, we add that this interior body of random polytopes, and its radius, has recently been reported to be essential in the robustness of sparse recovery (Krahmer et al., 2018) and in the convergence rate of greedy approximation algorithms (Mirrokni et al., 2017; Combettes and Pokutta, 2019) when the data is random.

Organization of the paper
In this paper, our aim is to derive general inequalities for p_{n,X} and N_X. The main part of this paper is Sections 2 to 5. The following is a broad description of the contents of each section.
• Section 2: General bounds of p_{n,X} without specific quantitative assumptions
• Section 3: Bounds of p_{n,X} uniformly determined by α_X
• Section 4: Bounds of N_X(E[X]) uniformly determined by the moments of X
• Section 5: Results on deterministic convex bodies included in random polytopes

Let us give a more detailed explanation of each section. Section 2 generalizes the results of Wagner and Welzl (2001): we give generic bounds of p_{n,X}(θ) under the mild assumption p_{d,X}(θ) = 0, which is satisfied by absolutely continuous distributions as well as typical empirical distributions. Our main result in Section 2 is as follows (Theorem 8):

Theorem. Let X be an arbitrary d-dimensional random vector and θ ∈ R^d with p_{d,X}(θ) = 0. Then the inequalities (3) and (5) of Section 2 hold for p_{n,X}(θ).

In Section 3, we introduce p^ε_{n,X} and α^ε_X for ε ≥ 0, which are "ε-relaxations" of p_{n,X} and α_X in the sense that p^0_{n,X} = p_{n,X} and α^0_X = α_X hold. For this generalization, we prove that the convergence p^ε_{n,X} → 1 is uniformly evaluated in terms of α^ε_X (Proposition 13), and obtain the following result (Theorem 14):

Theorem. Let X be an arbitrary d-dimensional random vector and θ ∈ R^d. Then, for each ε ≥ 0 and positive integer n ≥ 3d/α^ε_X(θ), we have p^ε_{n,X}(θ) ≥ 1/2.

Although we do not define the ε-relaxed versions here, we can see from the case ε = 0 that, for example, N_X(θ) ≤ ⌈3d/α_X(θ)⌉ holds in general (see also Theorem 16).
In Section 4, we derive upper bounds of N_X that do not rely on α_X, which may be hard to access directly. By using the results of the preceding section and the Berry-Esseen theorem, we show upper bounds of N_X in terms of the (normalized) moments of X, as follows (Theorem 19):

Theorem. Let X be a centered d-dimensional random vector with nonsingular covariance matrix V. Then, N_X = O(d · sup_{‖c‖₂=1} E[|c^⊤V^{−1/2}X|³]²) with an explicit universal constant.

Here, ‖·‖₂ denotes the usual Euclidean norm on R^d. Note that the right-hand side can easily be replaced by a moment of ‖V^{−1/2}X‖₂ (see also Corollary 20). Section 5 asserts that K_α(X) := {θ ∈ R^d | α_X(θ) ≥ α} is a canonical deterministic body included in the random convex polytope conv{X_1, . . ., X_n}. We see in Proposition 22 that this body is essentially equivalent to the K̃_α(X)° mentioned in Section 1.3, and prove the following (Theorem 24):

Theorem. Let X be an arbitrary symmetric d-dimensional random vector, and let α, δ, ε ∈ (0, 1).
If a positive integer n satisfies n ≥ 12d/α together with an explicit condition involving log(1/δ) and log(1/ε) (stated in Section 5), then we have, with probability at least 1 − δ, (1 − ε)K_α(X) ⊂ conv{X_1, . . ., X_n}, where X_1, X_2, . . . are independent copies of X.
A consequence of this theorem (Corollary 25) enables us to remove the technical assumptions of Theorem 3. Note that all these results come with explicit constants of reasonable magnitude, thanks to the combinatorial approach typified by the proofs of Proposition 10 and Proposition 15. After these main sections, we discuss the implications of our results for the motivating examples (introduced in Sections 1.1 and 1.2) in Section 6, and we finally give our conclusion in Section 7.

General bounds of p_{n,X}
In this section, we simply write p_{n,X} for p_{n,X}(0). As we always have p_{n,X}(θ) = p_{n,X−θ}(0), it suffices to treat p_{n,X}(0) unless we consider properties of p_{n,X} as a function of θ.
Let us start with easier observations. Proposition 4 and Proposition 5 are almost dimension-free. Firstly, as one would expect, the following simple assertion holds.
Proposition 4. For an arbitrary d-dimensional random vector X with E[X] = 0 and P(X = 0) > 0, we have 0 < p_{d+1,X} ≤ p_{d+2,X} ≤ · · · and lim_{n→∞} p_{n,X} = 1. The conclusion still holds if we only assume p_{n,X} > 0 for some n instead of E[X] = 0.
Proving p_{n,X} → 1 is also easy. By independence, splitting the sample into blocks of size d + 1 gives 1 − p_{k(d+1),X} ≤ (1 − p_{d+1,X})^k for every positive integer k. This leads to the conclusion combined with the monotonicity of p_{n,X}.
Note that we have used the condition E[X] = 0 only to ensure p_{d+1,X} > 0. Hence the latter statement readily holds from the same argument.
The next one gives a more quantitative relation between p_{n,X} and N_X.
Proposition 5. For an arbitrary d-dimensional random vector X and integers n ≥ m ≥ d + 1, we have p_{n,X} ≤ \binom{n}{m} p_{m,X} and N_X ≤ n⌈1/p_{n,X}⌉.

Proof. Let M be the number of m-point subsets of {X_1, . . ., X_n} whose convex hull contains 0. Then, we have E[M] = \binom{n}{m} p_{m,X}. As p_{n,X} = P(M ≥ 1) ≤ E[M] by Carathéodory's theorem, we obtain the first inequality.
For the second part, we carry out the following rough estimate when p_{n,X} < 1/2: for the minimum integer k satisfying k ≥ 1/p_{n,X}, we have 1 − p_{kn,X} ≤ (1 − p_{n,X})^k ≤ 1/2. Indeed, by the monotonicity of (1 + 1/x)^x over x > 0, we have (1 − p_{n,X})^{1/p_{n,X}} ≤ e^{−1} < 1/2, so the conclusion follows.
Remark 1. Although the estimate N_X ≤ n⌈1/p_{n,X}⌉ looks loose in general, N_X ≤ 2d⌈1/p_{2d,X}⌉ is a sharp uniform bound for each dimension d up to a universal constant. Indeed, in Example 34 and Example 35 (Appendix B), we prove that this bound is attained in the limit ε ↓ 0 for each positive integer d. In contrast, the other inequality p_{n,X} ≤ \binom{n}{m} p_{m,X} is indeed very loose and is drastically improved in Proposition 7.
In Propositions 4 and 5, we have not used any information about the dimension, except for observing p_{d+1,X} > 0 in Proposition 4. However, when the distribution of X has a certain regularity, there is already a strong result reflecting the dimensionality.
Theorem 6 (Wagner and Welzl 2001). When the distribution of X is absolutely continuous with respect to the Lebesgue measure on R^d,

p_{n,X} ≤ 1 − 2^{−(n−1)} Σ_{k=0}^{d−1} \binom{n−1}{k}    (3)

holds for each n ≥ d + 1. Equality is attained if and only if the distribution is balanced, i.e., P(⟨c, X⟩ ≤ 0) = 1/2 holds for all unit vectors c ∈ R^d.
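As a quick arithmetic check of the threshold n = 2d in the balanced case (a worked instance of (1), using the symmetry of binomial coefficients):

Σ_{k=0}^{d−1} \binom{2d−1}{k} = (1/2) Σ_{k=0}^{2d−1} \binom{2d−1}{k} = 2^{2d−2},

so the right-hand side of (1) at n = 2d equals 1 − 2^{−(2d−1)} · 2^{2d−2} = 1/2, matching N_X(0) = 2d for balanced distributions.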
The authors of Wagner and Welzl (2001) derived this result by showing the existence of a nonnegative continuous function h_X on [0, 1] with h_X(t) = h_X(1 − t) such that

p_{n,X} = \binom{n}{d+1} ∫_0^1 t^{n−d−1} h_X(t) dt    (4)

holds for each n ≥ d + 1. We shall provide an intuitive description of the function h_X. Let us consider a one-dimensional i.i.d. sequence Y_1, Y_2, . . . (also independent from X_1, X_2, . . .), where each Y_i follows the uniform distribution over (0, 1). If we consider the (d + 1)-dimensional random vectors X̃_i := (X_i, Y_i), then, for each n, 0 ∈ conv{X_1, . . ., X_n} ⊂ R^d is obviously equivalent to the condition that the (d + 1)-th coordinate axis (denoted by ℓ) intersects the convex set C̃_n := conv{X̃_1, . . ., X̃_n} ⊂ R^{d+1}. Under a certain regularity condition, there are exactly two facets (d-dimensional faces of C̃_n), each composed of a (d + 1)-point subset of {X̃_1, . . ., X̃_n}, that intersect ℓ. Let us call them top and bottom, where the top is the facet whose intersection with ℓ has the bigger (d + 1)-th coordinate. Let us define another random variable H as the conditional probability, given X̃_1, . . ., X̃_{d+1}, that 0 and X̃_{d+2} are on the same side of the hyperplane supporting conv{X̃_1, . . ., X̃_{d+1}} (whenever this hyperplane is well defined).
Then, for a given realization of {X̃_1, . . ., X̃_n}, the conditional probability that conv{X̃_1, . . ., X̃_{d+1}} becomes the top of C̃_n is H^{n−d−1}. As there are \binom{n}{d+1} (equally possible) choices of the "top," we can conclude that p_{n,X} = \binom{n}{d+1} E[(H^{n−d−1} + (1 − H)^{n−d−1})/2; H > 0], and so we can understand h_X as the density of a half mixture of H and 1 − H over {H > 0}. This has been a simplified explanation of h_X; for more rigorous arguments and proofs, see Wagner and Welzl (2001).
By using this "density" function, we can prove the following interesting relationship.
Proposition 7. Let X be an R^d-valued random variable with an absolutely continuous distribution. Then, for any integers n ≥ m ≥ d + 1, we have

2^{−(n−m)} (\binom{n}{d+1}/\binom{m}{d+1}) p_{m,X} ≤ p_{n,X} ≤ (\binom{n}{d+1}/\binom{m}{d+1}) p_{m,X}.    (5)

Proof. The right inequality is clear from (4), since h_X ≥ 0 and t^{n−d−1} ≤ t^{m−d−1} on [0, 1]. For the left inequality, by using h_X(t) = h_X(1 − t), we can rewrite (4) as

p_{n,X} = \binom{n}{d+1} ∫_0^1 ((t^{n−d−1} + (1 − t)^{n−d−1})/2) h_X(t) dt.

We can prove for a ≥ b ≥ 0 that (t^a + (1 − t)^a)/(t^b + (1 − t)^b) attains its minimum over t ∈ [0, 1] at t = 1/2, e.g., by using the method of Lagrange multipliers. Accordingly, we obtain p_{n,X}/\binom{n}{d+1} ≥ 2^{−(n−m)} p_{m,X}/\binom{m}{d+1}, which is equivalent to the inequality to prove.
Remark 2. The left inequality says nothing when n − m is large, since 2^{n−m} grows faster than (n/m)^d. However, for small n and m, it works as a nice estimate. Consider the case n = 2d and m = d + 1. Then, the proposition and the usual estimate for central binomial coefficients yield a lower bound on p_{2d,X}/p_{d+1,X} of order 2^d/√d (see the worked computation below). This is comparable to the symmetric case, where p_{d+1,X} = 1/2^d and p_{2d,X} = 1/2 hold. The right inequality is an obvious improvement of the dimension-free estimate given in Proposition 5.
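A worked version of this computation, under the reconstruction of (5) above and the standard bound \binom{2d}{d} ≥ 4^d/(2√d): since \binom{2d}{d+1} = (d/(d+1))\binom{2d}{d} ≥ 4^d/(4√d) for d ≥ 1, the left inequality with n = 2d and m = d + 1 gives

p_{2d,X} ≥ 2^{(d+1)−2d} \binom{2d}{d+1} p_{d+1,X} ≥ (2^{d−1}/√d) p_{d+1,X},

which matches the ratio p_{2d,X}/p_{d+1,X} = 2^{d−1} of the balanced case up to the factor √d.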
We next generalize these results to general distributions, including discrete ones such as empirical measures. At a minimum, we have to assume p_{d,X} = 0. Note that this is weaker than requiring X to have an absolutely continuous distribution, as it is satisfied by usual empirical measures (see Proposition 9).
From smoothing arguments, we obtain the following generalization of inequalities (3) and (5).
Theorem 8. Let X be an arbitrary d-dimensional random vector with p_{d,X} = 0. Then, inequalities (3) and (5) hold for any n ≥ m ≥ d + 1.

Proof. Let U be a uniform random variable over the unit ball of R^d, independent of X. Let also U_1, U_2, . . . be independent copies of U, independent of X_1, X_2, . . .. We shall prove that lim_{ε↓0} p_{n,X+εU} = p_{n,X} for each n. Note that the distribution of X + εU has the probability density function x ↦ P(‖x − X‖ ≤ ε)/(ε^d V), where V denotes the volume of the unit ball, so (3) and (5) apply to X + εU. Therefore, once we establish the limit lim_{ε↓0} p_{n,X+εU} = p_{n,X}, the statement of the theorem is clear.
From p_{d,X} = 0, we know that, almost surely, the origin avoids the convex hull of every d-point subset of the sample (6). For each n ≥ d + 1, consider the event 0 ∈ conv{X_i}_{i=1}^n; on this event, 0 remains in conv{X_i + εU_i}_{i=1}^n once ε is smaller than the distance from 0 to every facet, as ‖εU_i‖ ≤ ε for all i (more precisely, we can prove this by using the separating hyperplane theorem). Therefore, by considering the facets of the convex hull and using (6), we obtain lim inf_{ε↓0} p_{n,X+εU} ≥ p_{n,X}.
On the other hand, if we have 0 ∈ conv{X_i + εU_i}_{i=1}^n but 0 ∉ conv{X_i}_{i=1}^n, then there exists J ⊂ {1, . . ., n} with |J| = d such that inf_{y∈conv{X_i}_{i∈J}} ‖y‖ ≤ ε. Indeed, we can write 0 as a convex combination Σ_{i=1}^n λ_i(X_i + εU_i) = 0, so ‖Σ_{i=1}^n λ_i X_i‖ ≤ ε; as 0 lies outside conv{X_i}_{i=1}^n, there is a facet within ε-distance from 0. Therefore, we obtain p_{n,X+εU} ≤ p_{n,X} + P(∃J, |J| = d, dist(0, conv{X_i}_{i∈J}) ≤ ε), and it follows from (6) and the continuity of probability that the latter term vanishes as ε ↓ 0. Thus we finally obtain lim_{ε↓0} p_{n,X+εU} = p_{n,X}.
We should remark that p_{d,X} = 0 is naturally satisfied by (centered) empirical measures.
Proposition 9. Let µ be an absolutely continuous probability distribution on R^d and Y_1, Y_2, . . . be i.i.d. samples from µ. Then, with probability one, for each M ≥ d + 1, the empirical distribution µ_M := (1/M) Σ_{i=1}^M δ_{Y_i} and its centered version µ̄_M satisfy p_{d,µ_M} = p_{d,µ̄_M} = 0.

Proof. For µ_M, it suffices to prove that with probability one there is no J ⊂ {1, . . ., M} with |J| = d such that 0 ∈ conv{Y_i}_{i∈J}. This readily follows from the absolute continuity of the original measure µ. The extension to the case where µ only satisfies p_{d,µ} = 0 is immediate.
For the centered version µ̄_M, what we must prove is that with probability one there is no J ⊂ {1, . . ., M} with |J| = d such that (1/M) Σ_{j=1}^M Y_j ∈ conv{Y_i}_{i∈J}. Suppose this occurs for some J. Then, (1/(M − d)) Σ_{i∉J} Y_i is on the affine hull of {Y_i}_{i∈J}. However, as {Y_i}_{i∉J} is independent from {Y_i}_{i∈J} for a fixed J, this probability is zero, again from the absolute continuity of µ. Therefore, we have the desired conclusion.
Bounds of p_{n,X} via the Tukey depth

In this section, we give bounds of p_{n,X} uniformly determined by the Tukey depth and its relaxation. We shall fix an arbitrary real inner product ⟨·, ·⟩ on R^d, and use the induced norm ‖·‖ and the notation dist(x, A) := inf_{a∈A} ‖x − a‖ for x ∈ R^d and A ⊂ R^d.
For a d-dimensional random vector X and θ ∈ R^d, define an ε-relaxed version of the Tukey depth by

α^ε_X(θ) := inf_{‖c‖=1} P(⟨c, X − θ⟩ ≥ −ε).

We also define, for a positive integer n,

p^ε_{n,X}(θ) := P(dist(θ, conv{X_1, . . ., X_n}) ≤ ε),

where X_1, . . ., X_n are independent copies of X. Note that p_{n,X} = p^0_{n,X}. Although we regard these as functions of θ in Section 5, we only treat the case θ = 0 and omit the argument θ in this section.
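Numerically, the ε-relaxed quantity replaces the membership test by a distance computation; a minimal sketch (our illustration, under the definitions above) solves dist(θ, conv{X_i}) as a small quadratic program over the probability simplex.

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_hull(points, theta):
    """dist(theta, conv{x_1, ..., x_n}) = min_w ||sum_i w_i x_i - theta||
    over w >= 0 with sum_i w_i = 1, solved with SLSQP."""
    n = points.shape[0]
    obj = lambda w: np.sum((points.T @ w - theta) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    res = minimize(obj, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return float(np.sqrt(max(res.fun, 0.0)))

# p^eps_{n,X}(0) is then the probability of {dist_to_hull(sample, 0) <= eps},
# which can be estimated by Monte Carlo exactly as in the Section 1 sketch.
```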
Proposition 10. Let X be a d-dimensional random vector with an absolutely continuous distribution with respect to the Lebesgue measure. Then, for each ε ≥ 0 and positive integer n ≥ d + 1, the quantity 1 − p^ε_{n,X} is bounded by an explicit expression depending only on d, n, and α^ε_X (denoted g_{d,n}(α^ε_X) in (7) below).

Before going into the details of quantitative results, we note the following equivalence of the positivity of α^ε_X and p^ε_{n,X}, which immediately follows from this assertion.
Proposition 11. Let X be an arbitrary d-dimensional random vector and let ε ≥ 0. Then, α^ε_X > 0 holds if and only if p^ε_{n,X} > 0 holds for some (equivalently, every) n ≥ d + 1.

Proof. Suppose p^ε_{n,X} > 0 for some n, and work on the event that some x ∈ conv{X_1, . . ., X_n} satisfies ‖x‖ ≤ ε. Then, for each c ∈ R^d with ‖c‖ = 1, we have ⟨c, x⟩ ≤ ε and so ⟨c, X_i⟩ ≤ ε for at least one i ∈ {1, . . ., n}. Hence we have the uniform evaluation P(⟨c, X⟩ ≤ ε) ≥ p^ε_{n,X}/n for every unit vector c, and the first assertion follows.
For the latter, if α^ε_X is positive, we have p^ε_{n,X} > 0 for sufficiently large n from Proposition 10. Finally, Carathéodory's theorem yields the positivity for all n ≥ d + 1.
Let us prove Proposition 10.
Proof of Proposition 10. Let m ≥ d be an integer. We first consider the quantity q_m := 1 − p^ε_{m,X} = P(dist(0, conv{X_i}_{i=1}^m) > ε). Let A_m be the event {dist(0, conv{X_i}_{i=1}^m) > ε}, and let B_m be the event that {X_1, . . ., X_m} is in general position. Then, we have P(B_m) = 1 and q_m = P(A_m ∩ B_m). On this event, let h_m be the unique point of conv{X_i}_{i=1}^m minimizing the distance from the origin, and let H_m := {x ∈ R^d | ⟨h_m, x − h_m⟩ ≥ 0}. Then, the boundary ∂H_m is the hyperplane going through h_m and perpendicular to h_m. From the general-position assumption, there are at most d sample points on ∂H_m. Let I_m be the set of indices i satisfying X_i ∈ ∂H_m; then I_m is a random subset of {1, . . ., m} with 1 ≤ |I_m| ≤ d. As I_m is a uniquely determined random set, we can decompose the probability P(A_m ∩ B_m) by symmetry into the probabilities P(I_m = {1, . . ., k}) for k = 1, . . ., d. Hence, we want to evaluate the probability P(I_m = {1, . . ., k}). Note that we can similarly define h_k as the unique point in conv{X_i}_{i=1}^k that minimizes the distance from the origin; estimating P(I_m = {1, . . ., k}) through h_k and the defining property of α^ε_X leads, after summation, to a recursion between q_m and q_{m+1}. By letting n = m + 1, we obtain the conclusion.
If we define g_{d,n}(α) by g_{d,n}(α) := 1 for n = 1, . . ., d, and by the right-hand side of the bound in Proposition 10 for n = d + 1, d + 2, . . . (7), then we clearly have 1 − p^ε_{n,X} ≤ g_{d,n}(α^ε_X) from Proposition 10 for a d-dimensional X having a density. We can actually generalize this to an arbitrary X.
Lemma 12. Let X be an arbitrary d-dimensional random vector. Then, for each ε ≥ 0 and positive integer n, we have 1 − p^ε_{n,X} ≤ g_{d,n}(α^ε_X).

Proof. Let δ > 0 and take a random vector X̃ with an absolutely continuous distribution such that ‖X − X̃‖ ≤ δ holds almost surely. Hence we have α^ε_X ≤ α^{ε+δ}_{X̃}. Consider generating i.i.d. pairs (X_1, X̃_1), . . ., (X_n, X̃_n) that are copies of (X, X̃). Then, for each x ∈ conv{X_i}_{i=1}^n, there is a convex combination x = Σ_{i=1}^n λ_i X_i with λ_i ≥ 0 and Σ_{i=1}^n λ_i = 1. Then, we have ‖x − Σ_{i=1}^n λ_i X̃_i‖ ≤ δ. It means that inf_{y∈conv{X̃_i}_{i=1}^n} ‖x − y‖ ≤ δ holds for every x ∈ conv{X_i}_{i=1}^n, and we can deduce that p^{ε+2δ}_{n,X} ≥ p^{ε+δ}_{n,X̃} holds.
In particular, we can choose X̃ having a density, so that we have 1 − p^{ε+δ}_{n,X̃} ≤ g_{d,n}(α^{ε+δ}_{X̃}). Therefore, from the monotonicity of g_{d,n}, we have 1 − p^{ε+2δ}_{n,X} ≤ g_{d,n}(α^ε_X). As δ > 0 can be taken arbitrarily small, we finally obtain the assertion by letting δ → 0. The δ-relaxation technique used in this proof is a major advantage of introducing p^ε_{n,X} as an extension of p_{n,X}.
From this lemma, we obtain the following general bound.
Proposition 13. Let X be an arbitrary d-dimensional random vector. Then, for each ε ≥ 0 and positive integer n ≥ d/α^ε_X, the bound (8) on 1 − p^ε_{n,X} holds.

Proof. From Lemma 12, it suffices to prove that the corresponding bound on g_{d,n}(α) holds for each α ∈ (0, 1) and n ≥ d/α. From the definition of g_{d,n} (see (7)), if we set n_0 := ⌈d/α⌉, then g_{d,n}(α) can be estimated through g_{d,n_0}(α). As we know d/α ≤ n_0 < d/α + 1 by definition, this estimate simplifies to the desired inequality (8).
Remark 3. As (1/α) log(1/(1 − α)) ≥ 1 holds on (0, 1), for n ≥ (1 + α^ε_X)d/α^ε_X the bound (8) yields a looser but more understandable variant. Note that we also have the trivial lower bound 1 − p^ε_{n,X} ≥ (1 − α^ε_X)^n, which is proven by fixing a separating hyperplane between the origin and the sample points.
For the special choice n = ⌈3d/α⌉, the following is readily available.

Theorem 14. Let X be an arbitrary d-dimensional random vector. Then, for each ε ≥ 0 and positive integer n ≥ 3d/α^ε_X, we have p^ε_{n,X} ≥ 1/2.

Proof. From Proposition 13, it suffices to prove that the right-hand side of (8) is at most 1/2 in this regime, i.e., that the inequality (9) holds for all α ∈ (0, 1). If we let f(x) = ((x − 2)/x) log(1/(1 − x)) for x ∈ (0, 1) and set t := log(1/(1 − x)), then t ranges over the positive reals and the problem reduces to a one-variable estimate. Therefore, it suffices to consider the limit α ↓ 0. In this limit, the left-hand side of (9) is equal to 3e^{−2}, which is smaller than 1/2 since e > √6 holds.
We complete this section with a stronger version of Proposition 10 for ε = 0. Indeed, by summing up the following inequality, we can immediately obtain the ε = 0 case of Proposition 10.
Proposition 15. Let X be a d-dimensional random vector with an absolutely continuous distribution with respect to the Lebesgue measure. Then, a one-step inequality bounding 1 − p_{n+1,X} in terms of 1 − p_{n,X} and α_X holds for all n ≥ d + 1 (with the natural convention in degenerate cases).
Bounds of N_X via the Berry-Esseen theorem

In this section, we discuss upper bounds of N_X for a centered X, which are of particular interest for randomized measure reduction (see Section 1.1).
We know the following assertion as a consequence of Theorem 14.
Theorem 16. Let X be an arbitrary d-dimensional random vector. Then, we have 1/2 ≤ α_X N_X ≤ 3d + 1.

Proof. The right inequality is an immediate consequence of Theorem 14. To prove the left one, let n be a positive integer satisfying 1/(2n) > α_X. Then, there exists a vector c ∈ R^d \ {0} such that P(⟨c, X⟩ ≤ 0) < 1/(2n). Then, for X_1, X_2, . . ., X_n (i.i.d. copies of X), we have p_{n,X} ≤ P(⟨c, X_i⟩ ≤ 0 for some i) ≤ nP(⟨c, X⟩ ≤ 0) < 1/2, and so n < N_X. Therefore, N_X must satisfy 1/(2N_X) ≤ α_X.

Remark 4. The above theorem states that 1/2 ≤ α_X N_X ≤ 3d + 1. This evaluation of α_X N_X is indeed tight up to a universal constant. For example, if X is a d-dimensional standard Gaussian, we have α_X = 1/2 and N_X = 2d, so α_X N_X = d. Moreover, for a small ε ∈ (0, 1), if we consider an X = (X_1, . . ., X_d) constructed so that the direction (0, . . ., 0, 1) is observed only with probability ε, then we can see α_X = ε/2 and N_X = Ω((d − 1)/ε), as (0, . . ., 0, 1) has to be in the convex hull of the samples in order to include the origin. Hence the bound α_X N_X = O(d) is sharp even for a small α_X.
On the contrary, inf_{X: d-dim} α_X N_X ≤ 2 holds (even when requiring p_{d,X} = 0) for each positive integer d, from Example 34 and Example 35 in the appendix (Section B).
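A quick numerical sanity check of Remark 4, reusing the Monte Carlo sketch from Section 1 (an illustration under the assumptions stated there): for the standard Gaussian, Wendel's formula gives p_{2d,X}(0) = 1/2 exactly, so the estimate should concentrate near 1/2.

```python
import numpy as np

d = 4
draw = lambda n, rng: rng.standard_normal((n, d))
# alpha_X = 1/2 and N_X = 2d for the standard Gaussian, so alpha_X * N_X = d.
print(estimate_p(draw, n=2 * d, theta=np.zeros(d), trials=2000))  # ~0.5
```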
Although Theorem 16 is very general, in many situations we have little information about the Tukey depth α_X. Indeed, approximately computing the Tukey depth itself is an important and difficult problem (Cuesta-Albertos and Nieto-Reyes, 2008; Zuo, 2019). However, if we restrict attention to a centered X, we can obtain various moment-based bounds as shown below. In this section, we use the usual Euclidean norm ‖·‖₂ given by ‖x‖₂ = √(x^⊤x) for simplicity. Let X be a d-dimensional centered random vector whose covariance matrix V := E[XX^⊤] is nonsingular, and define V^{−1/2} as the positive-definite square root of V^{−1}. Then, for each unit vector c ∈ R^d (namely ‖c‖₂ = 1), the projection Y_c := c^⊤V^{−1/2}X satisfies E[Y_c] = 0 and E[Y_c²] = 1 (10). We have the following simple result for a bounded X.
Proposition 17. Let X be a centered d-dimensional random vector with nonsingular covariance matrix V. If ‖V^{−1/2}X‖₂ ≤ B holds almost surely for a positive constant B, then we have α_X ≥ 1/(2B²), and consequently N_X ≤ 2(3d + 1)B².

Proof. For a one-dimensional random variable Y with E[Y] = 0, E[Y²] = 1 and |Y| ≤ B, we have E[|Y|] ≥ E[Y²]/B = 1/B, and so P(Y ≥ 0) ≥ E[Y 1_{Y≥0}]/B = E[|Y|]/(2B) ≥ 1/(2B²). By observing this inequality for each Y = c^⊤V^{−1/2}X with ‖c‖₂ = 1, we obtain the bound on α_X. The latter bound then follows from Theorem 16.
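As a small numerical illustration of Proposition 17 (a sketch reusing tukey_depth_upper from Section 1.2, with the constant 1/(2B²) taken from our reconstruction above): for the Rademacher vector, V = I_d and ‖V^{−1/2}X‖₂ = √d, so the proposition guarantees α_X ≥ 1/(2d), while symmetry in fact gives α_X = 1/2; the moment bound is uniform over all bounded X, not tight for any particular one.

```python
import numpy as np

d = 6
rng = np.random.default_rng(1)
pts = rng.choice([-1.0, 1.0], size=(20_000, d))       # Rademacher sample
print(tukey_depth_upper(pts, np.zeros(d), n_dirs=2000))  # ~0.5 empirically
print(1.0 / (2 * d))                                   # guaranteed lower bound
```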
Let us consider the unbounded case. The Berry-Esseen theorem quantifies the speed of convergence in the central limit theorem (Berry, 1941; Esseen, 1942). The following is a recent version with an explicit small constant.
Theorem 18 (Korolev and Shevtsova 2012). Let Y be a random variable with E[Y] = 0, E[Y²] = 1, and E[|Y|³] < ∞, and let Y_1, Y_2, . . . be independent copies of Y. Also let Z be a one-dimensional standard Gaussian. Then, we have

|P((Y_1 + · · · + Y_n)/√n ≤ x) − P(Z ≤ x)| ≤ 0.33554(E[|Y|³] + 0.415)/√n

for arbitrary x ∈ R and n ≥ 1.
We can apply the Berry-Esseen theorem to evaluate the probability P(c^⊤S_n ≤ 0) via (10), where S_n := (V^{−1/2}X_1 + · · · + V^{−1/2}X_n)/√n is the normalized i.i.d. sum. By elaborating this idea, we obtain the following bound on N_X.
Theorem 19. Let X be a centered d-dimensional random vector with nonsingular covariance matrix V. Then, N_X = O(d · sup_{‖c‖₂=1} E[|c^⊤V^{−1/2}X|³]²) with an explicit universal constant.

Proof. Let n be an integer large enough that, by Theorem 18, the Berry-Esseen error of each one-dimensional projection c^⊤S_n is at most a fixed fraction of 1/2, so that the depth of the distribution of the block sum X_1 + · · · + X_n is bounded below by a constant. Since 0 lies in the convex hull of independent block sums only if it lies in the convex hull of the underlying sample points, combining this with Theorem 16 applied to the block sums yields the claim.
Remark 5. The bound in Theorem 19 is sharp up to a constant as a uniform bound in terms of the third moments: one can construct an X for which E[|c^⊤V^{−1/2}X|³] = O(1) holds for all ‖c‖₂ = 1 while N_X = 2d, so the supremum of N_X over such X matches the bound up to a universal constant. From Theorem 19, we can also obtain several looser but more tractable bounds.
Corollary 20. Let X be a centered d-dimensional random vector with nonsingular covariance matrix V. Then N_X can be bounded as in Theorem 19 with sup_{‖c‖₂=1} E[|c^⊤V^{−1/2}X|³] replaced by either E[‖V^{−1/2}X‖₂³] or √(E[‖V^{−1/2}X‖₂⁴]).

Proof. From Theorem 19, it suffices to bound E[|c^⊤V^{−1/2}X|³] for each unit vector c ∈ R^d. The first bound is clear from |c^⊤V^{−1/2}X| ≤ ‖V^{−1/2}X‖₂. The second bound can also be derived as E[|c^⊤V^{−1/2}X|³] ≤ √(E[(c^⊤V^{−1/2}X)²] E[(c^⊤V^{−1/2}X)⁴]) ≤ √(E[‖V^{−1/2}X‖₂⁴]), where we have used the Cauchy-Schwarz inequality.
Remark 6. In order notation, the first bound in this corollary states N_X = O(d E[‖V^{−1/2}X‖₂³]²). This estimate is also sharp up to an O(d) factor, in the sense that a matching family of examples exists for each positive integer d. For the proof of this fact, see Example 34 and Example 35 in the appendix (Section B).
We finally remark that there are multivariate versions of the Berry-Esseen theorem (Zhai, 2018; Raič, 2019), and we can use them to derive bounds of N_X by a different approach that does not use α_X. However, these only give the estimate (11), which is far worse than the bounds obtained in Theorem 19 and Corollary 20. Nevertheless, it is notable that the approach via multidimensional Berry-Esseen formulas is applicable to non-identically distributed X_i's as long as the second and third moments are uniformly bounded, while the combinatorial approach based on α_X seems to fully exploit the i.i.d. assumption. Therefore, we provide the details of this alternative approach in the appendix (Section A).

Deterministic interior body of random polytopes
For each α > 0, define the deterministic set given by the level sets of the Tukey depth, K_α(X) := {θ ∈ R^d | α_X(θ) ≥ α}. This set is known to be compact and convex (Rousseeuw and Ruts, 1999). We can also naturally generalize this set to the ε-relaxation of the Tukey depth, and the generalization satisfies the following:

Proposition 21. Let X be a d-dimensional random vector. Then, for each ε ≥ 0 and α > 0, the set K^ε := {θ ∈ R^d | α^ε_X(θ) ≥ α} is compact and convex, and satisfies {θ ∈ R^d | dist(θ, K⁰) ≤ ε} ⊂ K^ε.

Proof. We fix α and write K^ε as above; for each unit vector c, let t(c) denote the corresponding critical level for the direction c. If t(c) = ∞, i.e., the right-hand set is empty for some c, then each set K^ε is empty. That t(c) > −∞ is clear from α > 0. Suppose t(c) ∈ R for all c. From the continuity of probability, the infimum can actually be replaced by a minimum, and the membership of each θ ∈ R^d in K^ε is characterized through t(c). Hence, if θ₀ ∈ K⁰ and ‖θ − θ₀‖ ≤ ε, then θ ∈ K^ε, so we obtain the inclusion statement.
Let us prove that K^ε is compact and convex. Define H^ε(c) := {θ ∈ R^d | P(⟨c, X − θ⟩ ≥ −ε) ≥ α} for each unit vector c, so that K^ε = ∩_{‖c‖=1} H^ε(c). As each H^ε(c) is closed and convex, K^ε is also closed and convex. To prove compactness, we shall prove that K^ε is bounded. As X is a random vector, there is an R > 0 such that P(‖X‖ ≥ R) < α. Then, for each θ ∈ R^d satisfying ‖θ‖ ≥ R + ε, taking c := θ/‖θ‖ gives α^ε_X(θ) ≤ P(⟨c, X − θ⟩ ≥ −ε) ≤ P(‖X‖ ≥ ‖θ‖ − ε) < α. Therefore, we have ‖θ‖ < R + ε for each θ ∈ K^ε and so K^ε is bounded.
Remark 7. Note that the inclusion stated in Proposition 21 can be strict. For example, if X is a d-dimensional standard Gaussian, K_α(X) is empty for each α > 1/2, but the ε-relaxed Tukey depth can be greater than 1/2 for ε > 0.
From this proposition, we can naturally generalize the arguments given in this section to the ε-relaxed case; natural interior bodies of the ε-neighborhood of conv{X_1, . . ., X_n} are given by the ε-relaxation of the Tukey depth. However, to keep the notation simple, in what follows we only treat K_α(X), the interior body of the usual convex hull.
We next prove that the polar body K̃_α(X)° used in Guédon et al. (2019), which we introduced in Section 1.3, is essentially the same as K_α(X) in their setting, i.e., when X is symmetric.
Recall that K̃_α(X) is defined as K̃_α(X) = {s ∈ R^d | P(⟨s, X⟩ ≥ 1) ≤ α}. Note that the following proposition is not surprising if we go back to the original background of K̃_α (Schütt and Werner, 1990), where X is uniform over some deterministic convex set, and to recent research on its deep relation to the Tukey depth (Nagy et al., 2019).
Proposition 22. Let X be a d-dimensional symmetric random vector. Then, for each α ∈ (0, 1/2), we have K̃_α(X)° = K_α(X).

Proof. Consider the set A_α := {rc | r > 0, ‖c‖ = 1, P(⟨c, X⟩ ≥ 1/r) < α}. Then, we clearly have A_α ⊂ K̃_α(X) and so (A_α)° ⊃ K̃_α(X)°. We first prove that (A_α)° = K_α(X) actually holds. From the definition of the polar, θ ∈ (A_α)° if and only if

⟨c, θ⟩ ≤ r whenever P(⟨c, X⟩ ≥ r) < α    (13)

for each r > 0 and ‖c‖ = 1. As we have assumed that X is symmetric and α < 1/2, (13) remains an equivalent condition even if we allow r to take all real values. We shall prove that, for a fixed c, (13) is equivalent to P(⟨c, X − θ⟩ ≥ 0) ≥ α. Indeed, if P(⟨c, X − θ⟩ ≥ 0) < α holds, there exists a δ > 0 such that P(⟨c, X⟩ ≥ ⟨c, θ⟩ − δ) < α. Then, we obtain the negation of (13) by letting r = ⟨c, θ⟩ − δ. For the opposite direction, if we assume P(⟨c, X⟩ ≥ ⟨c, θ⟩) ≥ α, we have P(⟨c, X⟩ ≥ r) ≥ α for all r < ⟨c, θ⟩, and so (13) is true. Therefore, we obtain (A_α)° = K_α(X).
We are going to prove the extension of Theorem 3 by finding a finite set of points whose convex hull approximates K_α(X). The following statement is essentially well known (Pisier, 1999; Barvinok, 2014), but we give the precise statement and a brief proof for completeness.
Proposition 23. Let K be a compact and convex subset of R^d such that K = −K. Then, for each ε ∈ (0, 1), there is a finite set A ⊂ K with |A| ≤ (1 + 2/ε)^d such that (1 − ε)K ⊂ conv A.

Proof. We may restrict to the case where K has full dimension, i.e., K has a nonempty interior. Then, the Minkowski functional of K (see, e.g., Conway, 2007, IV.1.14) defines a norm |||·||| on R^d (note that all norms on R^d are equivalent). For this norm, it is known that there is a finite subset A of the unit ball B of |||·||| such that min_{y∈A} |||x − y||| ≤ ε for all x ∈ B and |A| ≤ (1 + 2/ε)^d (Pisier, 1999, Lemma 4.10). It suffices to prove (1 − ε)K ⊂ conv A. Assume the contrary, i.e., let x_0 be a point such that |||x_0||| ≤ 1 − ε and x_0 ∉ conv A. Then, there exists a (d − 1)-dimensional hyperplane H ⊂ R^d such that x_0 ∈ H and all the points in A lie (strictly) on the same side as the origin with respect to H. Let y ∈ argmin_{x∈H} |||x|||. Then, we have |||y||| ≤ 1 − ε, and z := |||y|||^{−1}y satisfies min_{x∈H} |||z − x||| ≥ ε. Hence, we have min_{x∈A} |||z − x||| > ε, which contradicts the choice of A.

Theorem 24. Let X be an arbitrary symmetric d-dimensional random vector, and let α, δ, ε ∈ (0, 1). If a positive integer n satisfies n ≥ 12d/α together with the explicit condition involving log(1/δ) and log(1/ε) stated in the display, then we have, with probability at least 1 − δ, (1 − ε)K_α(X) ⊂ conv{X_1, . . ., X_n}, where X_1, X_2, . . . are independent copies of X.
Proof. As K_α(X) is symmetric and convex, by Proposition 23 there is a set A ⊂ K_α(X) with cardinality at most (1 + 2/ε)^d such that (1 − ε)K_α(X) ⊂ conv A. By the union bound over A, it suffices to control the probability (14) that θ ∉ conv{X_1, . . ., X_n} for each θ ∈ A. Hence, it suffices to prove that the right-hand side of (14) is bounded by (1 + 2/ε)^{−d}δ.
By taking the logarithm, this is equivalent to a one-variable inequality. Let us write x := nα/d. For x ≥ 12, as x/2 − log x is increasing, the required estimate follows by a simple computation. Therefore, from log(1 + 2/ε) ≤ log 3 + log(1/ε) and the assumption on n, we obtain the inequality (14).
Remark 8. Although the bound given in Theorem 24 requires n ≥ 12d/α, it can be loosened for moderate δ and ε. For example, if we want a bound for the case δ = ε = 1/2, then n ≥ 5d/α can be shown to suffice by using the bound in Proposition 13. Moreover, we should note that we have used the assumption that X is symmetric only to ensure that K_α(X) is symmetric so that we can use Proposition 23. If we take a symmetric convex subset K ⊂ K_α(X), we can prove a similar inclusion statement for K even for a nonsymmetric X.
As a generalized version of Theorem 3, we can prove the following:

Corollary 25. Let X be an arbitrary d-dimensional symmetric random vector. Let β ∈ (0, 1) and set α = (en/d)^{−β}. Then, there exists an absolute constant c > 0.45 such that, for each integer n satisfying the displayed condition, the corresponding inclusion of a constant multiple of K_α(X) in conv{X_1, . . ., X_n} holds with probability at least 1 − exp(−c e^{−β} n^{1−β} d^β), where X_1, X_2, . . . are independent copies of X.
Proof. For α = (en/d)^{−β}, the required quantities can be computed directly. Hence, from Theorem 24, it suffices to determine how small δ can be taken while the assumption remains satisfied. As n ≥ 12d holds for all β, for a suitable constant a < 0.1, we have an ≥ (2d/α) log 2. Therefore, we can take δ small enough that the failure probability matches the claimed exponential bound, and we can take c = (1 − a)/2 > 0.45 as desired.

Application
We discuss the implications of the results of this paper in two parts. The first part discusses the use of our bounds on p_{n,X}, while the second gives implications of the bounds on N_X for randomized cubature construction.

Bounds of p_{n,X}
Firstly, the inequality between p_{n,X} and p_{m,X} given in Proposition 7 provides the inequality p_{2d,X} ≥ 2^{1−d}\binom{2d}{d+1} p_{d+1,X} (15), as mentioned in Remark 2.
Measure reduction. Consider a discrete (probability) measure µ = Σ_{x∈X} w_x δ_x for a finite subset X ⊂ R^d. In Cosentino et al. (2020), randomized algorithms for constructing a convex combination satisfying (2), whose existence is assured by Tchakaloff's theorem (Tchakaloff, 1957; Bayer and Teichmann, 2006), are considered. As a basic algorithm, the authors consider a scheme that maintains a candidate set A of sample points and, in step (a.2), determines for each newly drawn x whether the desired convex-combination condition holds or not, finishing the algorithm and returning A ∪ {x} if it holds.
Although we can execute the decision in (a.2) for each x with O(d²) computational cost after an O(d³) preprocessing for a fixed A, the overall expected computational cost until the end of the algorithm is at least Ω(d²/p_{d+1,X}) under some natural assumptions on µ (see Proposition 9). However, we can also consider the following naive procedure:

(b.1) Randomly sample 2d points from µ.
(b.2) Determine whether the target is contained in the convex hull of the sampled points by using an LP solver; finish if it is, and go back to (b.1) if not.

By using an LP solver with the simplex method we can execute (b.2) in (empirically) O(d³) time (Pan, 1985; Shamir, 1987). Hence the overall computational cost can heuristically be bounded above by O(d³/p_{2d,X}), which is faster than the former by Ω(d^{−3/2}2^d) from the evaluation in (15). Note also that we have rigorously polynomial bounds via other LP methods (e.g., an infeasible-interior-point method (Mizuno, 1994)), so the latter scheme is preferable even in the worst case when the dimension d becomes large.
Relation between two depths. We can also deduce an inequality between two depth concepts in statistics. As mentioned in the Introduction, for a random vector X ∈ R^d, p_{d+1,X} is the simplicial depth, whereas α_X is the Tukey depth of the origin with respect to X. Naively, we have α_X ≥ p_{n,X}/n for each n, so α_X ≥ p_{d+1,X}/(d + 1) holds. However, by using (15) here, we obtain the sharper estimate α_X ≥ 2^{−d}\binom{2d}{d+1} p_{d+1,X}/d. In contrast, deriving a nontrivial upper bound of α_X in terms of p_{d+1,X} still seems difficult.
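For intuition on the size of this improvement, under the reconstruction of (15) above: for the d-dimensional standard Gaussian, Wendel's formula gives p_{d+1,X} = 2^{−d}, so the naive bound yields only α_X ≥ 2^{−d}/(d + 1), whereas the sharper estimate gives

α_X ≥ 4^{−d}\binom{2d}{d+1}/d ≥ 1/(4d^{3/2}),

much closer to the true value α_X = 1/2.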

Bounds of N_X
Secondly, we give applications of the bounds of N_X given in Section 4.
Random trigonometric cubature. Consider the d-dimensional random vector X = (cos θ, . . ., cos dθ)^⊤ ∈ R^d for a positive integer d, where θ is a uniform random variable over (−π, π). Then, an easy computation gives V := E[XX^⊤] = (1/2)I_d, and so we obtain ‖V^{−1/2}X‖₂ = √2‖X‖₂ ≤ √(2d) almost surely. Therefore, from Proposition 17, we have N_X = O(d²). This example is equivalent to a random construction of the so-called Gauss-Chebyshev quadrature (Mason and Handscomb, 2002, Chapter 8). Although we can thus bound the number of observations required in a random construction, concrete constructions with fewer points are already known.
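A sketch of this random Gauss-Chebyshev construction, reusing in_convex_hull and reduce_measure from the earlier sketches (our illustration; since E[cos kθ] = 0 for k ≥ 1, the target mean is the origin):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

def draw(n, rng):
    t = rng.uniform(-np.pi, np.pi, size=n)
    return np.cos(np.outer(t, np.arange(1, d + 1)))  # rows X = (cos t, ..., cos dt)

# Sample until 0 = E[X] enters the hull, then reduce to at most d + 1 nodes.
n = 2 * d
while True:
    pts = draw(n, rng)
    if in_convex_hull(pts, np.zeros(d)):
        nodes, weights = reduce_measure(pts, np.zeros(d))
        break
    n *= 2
print(nodes.shape, weights)  # nodes/weights integrating cos(k*t) exactly
```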
Deriving a bound for a random construction of cubature without any known deterministic construction, such as cubature on Wiener space (Lyons and Victoir, 2004; Hayakawa and Tanaka, 2020), which would be more important, remains unsolved and is left for future work.
Beyond naive cubature construction. Recall the cubature construction problem described in Section 1.1. We consider a random variable of the form X = f(Y), where Y is a random variable on some topological space X and f = (f_1, . . ., f_d)^⊤ : X → R^d is a d-dimensional vector-valued integrable function. Our aim is to find points y_1, . . ., y_{d+1} ∈ X and weights w_1, . . ., w_{d+1} ≥ 0 with total one such that (16) holds, i.e., Σ_{j=1}^{d+1} w_j f(y_j) = E[f(Y)]. A naive algorithm proposed by Hayakawa (2020) is to generate independent copies Y_1, Y_2, . . . of Y and choose the y_j from these random samples. Without any knowledge of N_X, the algorithm would take the following form:

(c.1) Take k = 2d.
(c.2) Randomly generate Y_i up to i = k and determine whether (16) can be satisfied with y_j ∈ {Y_i}_{i=1}^k by using an LP solver.
(c.3) If we find a solution, stop the algorithm. Otherwise, go back to (c.2) after replacing k by 2k. A sketch of this doubling scheme is given below.
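The following minimal sketch of steps (c.1)-(c.3) reuses in_convex_hull and reduce_measure from Section 1.1 (our illustration; draw_Y and f are hypothetical stand-ins for the sampler and the test functions):

```python
import numpy as np

def doubling_cubature(draw_Y, f, target, d, rng):
    """Steps (c.1)-(c.3): double the sample size k until `target` (= E[X])
    lies in conv{f(Y_1), ..., f(Y_k)}, then reduce to d + 1 weighted points
    with a single LP (reduce_measure from Section 1.1)."""
    k = 2 * d
    samples = np.array([f(draw_Y(rng)) for _ in range(k)])
    while not in_convex_hull(samples, target):
        samples = np.vstack([samples,
                             [f(draw_Y(rng)) for _ in range(k)]])
        k *= 2  # stops at k <= 2 N_X(E[X]) with probability more than half
    return reduce_measure(samples, target)
```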
This procedure ends at k ≤ 2N_X(E[X]) with probability more than half. We can then heuristically estimate the computational cost as Θ(C(d, N_X(E[X]))), where C(d, n) denotes the computational complexity of a linear programming problem finding a solution of (16) from n sample points. Empirically, this is estimated as Ω(d²n) or more when we use the simplex method (Shamir, 1987).
However, our analysis of N_X via the Berry-Esseen bound suggests an alternative (Algorithm 1).
As we can carry out Algorithm 1 within O(2^k ℓd² + kC(d, ℓd)), the overall computational cost is O(kC(d, ℓd) + 2^k ℓd²). We then heuristically have the bound O(kℓd³ + 2^k ℓd²) for a small ℓ. In terms of the number N = 2^k ℓd of randomly generated copies of Y, this cost is rewritten as O(log(N/(ℓd))ℓd³ + Nd).
As our bound for N_X(E[X]) in Theorem 19 is applicable to this N because of the use of a Berry-Esseen type estimate (ℓ = 17 is used in the proof), we can also give an estimate for this alternative algorithm. If N is not as large as Ω(dN_X(E[X])) for an appropriate choice of ℓ, we indeed have a better scheme, though the comparison itself may be a nontrivial problem in general. In any event, the fact that we can avoid solving a large LP problem is an obvious advantage.

Concluding remarks
In this paper, we have investigated inequalities regarding p_{n,X}, N_X and α_X, motivated by problems from numerical analysis, data science, statistics, and random matrix theory. We generalized the existing inequalities for p_{n,X} in Section 2. After showing in Section 3 that the convergence rate of p_{n,X} is determined by α_X, with the introduction of ε-relaxations of both quantities, we proved that N_X and 1/α_X are of the same magnitude up to an O(d) factor in Theorem 16. We also gave estimates of N_X based on the moments of X in Section 4 by using Berry-Esseen type bounds.
Although our arguments have been based on whether a given vector is included in the random convex polytope conv{X_1, . . ., X_n}, in Section 5 we extended our results to the analysis of deterministic convex bodies included in the random convex hull, which immediately led to a technical improvement of a result from the random matrix community. We finally discussed several implications of our results for applications in Section 6.
A Bounds of N_X via multivariate Berry-Esseen theorems

In this section, we provide two different estimates of N_X. Although we can prove that the first bound (Section A.2) is strictly stronger than the second one (Section A.3), we also give the proof of the second, as there seems to be more room for improvement in the second approach than in the first.
The first of these bounds is the one mentioned in (11); the proof is given in Section A.2.
Theorem 26. Let X be an R^d-valued random vector which is centered and has a nonsingular covariance matrix V. Then, the bound stated in Section A.2 holds. Note that the leading term dominates, so we can ignore the O(d) term. In the case sup‖V^{−1/2}X‖₂ < ∞, the bound simplifies accordingly. Therefore, we also record the following proposition, which only states N_X = Õ(d^{15/2} sup‖V^{−1/2}X‖₂³):

Proposition 27. Let X be an R^d-valued random vector which is centered, bounded, and has a nonsingular covariance matrix V. Then, p_{n,X} ≥ 1/2 holds for all n satisfying the stated growth condition in d and sup‖V^{−1/2}X‖₂.

A.1 Multivariate Berry-Esseen bounds
Before proceeding to the evaluation of N_X, we briefly review multivariate Berry-Esseen type theorems. The following theorem appears to give the best known bound with explicit constants and explicit dependence on the dimension.
Theorem 28 (Raič 2019). Let Y_1, . . ., Y_n be i.i.d. D-dimensional random vectors with mean zero and covariance I_D. For any convex measurable set A ⊂ R^D, it holds that

|P((Y_1 + · · · + Y_n)/√n ∈ A) − P(Z ∈ A)| ≤ (42D^{1/4} + 16)E[‖Y_1‖₂³]/√n,

where Z is a D-dimensional standard Gaussian.
Note that the original statement is not limited to the i.i.d. case. However, like the other existing Berry-Esseen type bounds, Theorem 28 only gives information about convex measurable sets, so we cannot use this result directly; Section A.2 nevertheless gives a creative use of Theorem 28.
Unlike the usual Berry-Esseen results, the next theorem can be used in the nonconvex case with reasonable dependence on the dimension. We denote by W₂(µ, ν) the Wasserstein-2 distance between two probability measures µ and ν on the same domain, defined formally as

W₂(µ, ν) := inf (E[‖Y − Z‖₂²])^{1/2},

where the infimum is taken over all joint distributions of (Y, Z) with marginals Y ∼ µ and Z ∼ ν. Although it is an abuse of notation, we also write W₂(Y, Z) for W₂(µ, ν) when Y ∼ µ and Z ∼ ν for some random variables Y and Z.
Theorem 29 (Zhai 2018). Let Y_1, . . ., Y_n be D-dimensional independent random vectors with mean zero, covariance Σ, and ‖Y_i‖₂ ≤ B almost surely for each i. If we let Z be a Gaussian with covariance Σ, then W₂((Y_1 + · · · + Y_n)/√n, Z) admits an explicit bound of order √D · B · log n/√n. For a set A ⊂ R^D and an ε > 0, define the ε-neighborhood A^ε := {x ∈ R^D | dist(x, A) ≤ ε}. By combining the following assertion with Theorem 29, we derive another bound of N_X in Section A.3.
Proposition 30. Let Y, Z be D-dimensional random vectors. Then, for any measurable set A ⊂ R^D and any ε > 0, the following estimates hold:

P(Y ∈ A) ≤ P(Z ∈ A^ε) + W₂(Y, Z)²/ε²,    P(Z ∈ A) ≤ P(Y ∈ A^ε) + W₂(Y, Z)²/ε².

Proof. This proof is essentially the same as the argument given in the proof of Zhai (2018, Proposition 1.4). Let (Y′, Z′) be an arbitrary coupling of random variables such that Y′ ∼ Y and Z′ ∼ Z. Then, we have (by Chebyshev's inequality)

P(Y′ ∈ A) ≤ P(Z′ ∈ A^ε) + P(‖Y′ − Z′‖₂ > ε) ≤ P(Z′ ∈ A^ε) + E[‖Y′ − Z′‖₂²]/ε².

By taking the infimum of the right-hand side over all possible couplings (Y′, Z′), we obtain the former result. The latter can be derived by the same evaluation with the roles of Y and Z exchanged, again taking the infimum.

A.2 The first bound
In this section, we prove Theorem 26. We shall set D = d and make use of Theorem 28. First, for a set S ⊂ R^d, consider the set S⁰ := {x ∈ R^d | 0 ∈ conv(S ∪ {x})}. Writing 0 as a convex combination λx + Σ_i µ_i s_i with s_i ∈ S, the coefficient λ > 0 comes from the assumption 0 ∉ conv S, and membership then occurs if and only if x is contained in the negative cone of S, i.e., C(S) = {Σ_{i=1}^k λ_i x_i | k ≥ 0, λ_i ≤ 0, x_i ∈ S}; if instead 0 ∈ conv S, then S⁰ = R^d. In both cases C(S) is convex, so S⁰ is always convex (and of course measurable).
Let X be an R^d-valued random vector with mean 0 and nonsingular covariance V, and suppose E[‖V^{−1/2}X‖₂³] < ∞. Let X_1, X_2, . . . be independent copies of X and, for a fixed positive integer n, define the normalized block sums X̄_i for i = 1, . . ., 2d. We also let Z_1, . . ., Z_{2d} be independent d-dimensional standard Gaussians, also independent from X_1, X_2, . . .. Then, by using Theorem 28 and the above-mentioned convexity of C(S), we can pass between the X̄_i and the Z_i. Indeed, if the relevant combination were convex, it would contradict the assumption 0 ∉ conv{x_i}_{i=1}^{2d}; therefore, we can take a unit vector c ∈ R^d separating 0 from this convex hull. Let us assume the closed ball with center x_k and radius δ is included in N({x_i | i ≠ k}) for a δ > 0.
Then, if δ > ‖x_k − x̃_k‖₂, the closed ball with center x̃_k and radius δ − ‖x_k − x̃_k‖₂ is also included in N({x_i | i ≠ k}). In particular, we have some coefficients realizing the corresponding combination. By inequality (18) and the definition of δ′, we obtain the required bound by Cauchy-Schwarz and the assumption, which immediately implies the desired assertion.
Proof. By Lemma 31 and Theorem 28, letting Z = (Z_1, . . ., Z_{2d}) be a standard Gaussian in R^D (where each Z_i is an independent standard Gaussian in R^d), we can carry out the required evaluation. For each k, Z_k is independent from the random convex set N({Z_i | i ≠ k}); therefore, we can use the result of Ball (1993) for a constant B. Using (19), we then obtain from Theorem 29 (for ε = 2^{−13/2}d^{−7/4}) and Proposition 30 the required estimates. Therefore, 0 is contained in the convex hull of {X_1, . . ., X_{2dn}} with probability at least 1/4. Since (1 − 1/4)³ < 1/2, N_X ≤ 6dn holds, and our proof of Proposition 27 is complete.

B Extreme examples
Before treating concrete examples, we prove a lemma that is useful for evaluating N_X.
Lemma 33. For a random vector X and its independent copies X_1, X_2, . . ., define Ñ_X as the minimum index n satisfying 0 ∈ conv{X_1, . . ., X_n}. Then, we have N_X/2 ≤ E[Ñ_X] ≤ 2N_X.

Proof. From the definition of N_X, P(0 ∈ conv{X_1, . . ., X_{N_X−1}}) < 1/2 holds. Thus P(Ñ_X ≥ N_X) ≥ 1/2, and so we obtain E[Ñ_X] ≥ N_X/2. For the other inequality, we use the evaluation P(Ñ_X ≥ kN_X) ≤ 2^{−k} for each nonnegative integer k, which follows by dividing the first kN_X samples into k independent blocks. As Ñ_X is a nonnegative discrete random variable, we have E[Ñ_X] ≤ N_X Σ_{k≥0} P(Ñ_X ≥ kN_X) ≤ N_X Σ_{k≥0} 2^{−k} = 2N_X.

Note that all the examples given below satisfy p_{d,X} = 0. They are given as worst-case examples for the uniform estimates of N_X in Proposition 5 or Theorem 26. Let us start with the simplest extreme case.
The next example is a multi-dimensional version of the previous one. Let us estimate p_{d+1,X}, p_{2d,X} and N_X for this X. To contain the origin in the convex hull, we have to observe at least one X_i with Y = 0. Therefore, for ε ≪ 1/d, we have the estimates (21), where X′ represents a (d − 1)-dimensional uniform random vector over the box [−1, 1]^{d−1}. We can see that p_{2d,X} ≈ 2^{d−1} p_{d+1,X} holds for a small ε, as Remark 2 suggests. For the calculation of N_X, we can exploit Lemma 33. We first bound the expectation of Ñ_X. For independent copies X_1, X_2, . . . of X, let N_1 be the minimum integer n satisfying X_n = ε^{−1}e_d. We also define N_2 as the minimum integer n satisfying −(1 − ε)^{−1}e_d ∈ conv{X_1, . . ., X_n}. Then, Ñ_X = max{N_1, N_2} holds, and thus N_1 ≤ Ñ_X ≤ N_1 + N_2. Clearly E[N_1] = 1/ε. For N_2, we can evaluate the expectation (again using X′), where we have used Lemma 33 for the inequality. Therefore, from Lemma 33, we obtain two-sided bounds on N_X. We finally compare the naive general estimate N_X ≤ n⌈1/p_{n,X}⌉ of Proposition 5 with this example. From (21), we conclude that the evaluation N_X ≤ 2d⌈1/p_{2d,X}⌉ is sharp even for small p_{2d,X} up to a constant, in the sense that the corresponding supremum stays bounded below by a universal constant as ε → 0. Also in this example, we have α_X = ε for ε ∈ (0, 1/3). Hence, combined with (21), we obtain inf_{X: d-dim} α_X N_X ≤ 2. We next evaluate the value of E[‖V^{−1/2}X‖₂³], where V = (V_{ij}) is the covariance matrix of X with respect to the basis {e_1, . . ., e_d}. For (i, j) ∈ {1, . . ., d − 1}², we obtain V_{ij} proportional to δ_{ij} (δ_{ij}: Kronecker's delta) by using the independence of Y, Z_1, . . ., Z_{d−1}; the remaining entry V_{dd} is computed analogously.
Therefore, V^{−1/2}X can be written explicitly, and we obtain a bound on E[‖V^{−1/2}X‖₂³] that holds when 0 < ε < 1/2. By using (21) and taking ε → 0, we finally obtain the estimate mentioned in Remark 6.