Abstract
We prove a central limit theorem for the volume of projections of the cube \([-1,1]^N\) onto a random subspace of dimension \(n\), when \(n\) is fixed and \(N\rightarrow \infty \). Randomness in this case is with respect to the Haar measure on the Grassmannian manifold.
1 Main result
The focus of this paper is the volume of random projections of the cube \(B_{\infty }^N=[-1,1]^N\) in \(\mathbb R ^N\). To fix the notation, let \(n\geqslant 1\) be an integer and for \(N\geqslant n\), let \(G_{N,n}\) denote the Grassmannian manifold of all \(n\)-dimensional linear subspaces of \(\mathbb R ^N\). Equip \(G_{N,n}\) with the Haar probability measure \(\nu _{N,n}\), which is invariant under the action of the orthogonal group. Suppose that \((E(N))_{N\geqslant n}\) is a sequence of random subspaces with \(E(N)\) distributed according to \(\nu _{N,n}\). We consider the random variables
$$\begin{aligned} Z_N = \left|P_{E(N)} B_{\infty }^N\right|,\quad N\geqslant n, \end{aligned}$$(1.1)
where \(P_{E(N)}\) denotes the orthogonal projection onto \(E(N)\) and \(|\cdot |\) is \(n\)-dimensional volume, when \(n\) is fixed and \(N\rightarrow \infty \). We show that \(Z_N\) satisfies the following central limit theorem.
Theorem 1.1
Let \(n\geqslant 1\) be a fixed integer and, for each \(N\geqslant n\), let \(Z_N=\left|P_{E(N)}B_{\infty }^N\right|\), where \(E(N)\) is distributed according to \(\nu _{N,n}\). Then
$$\begin{aligned} \frac{Z_N-\mathbb{E }Z_N}{\sqrt{\mathop {\mathrm{var}}(Z_N)}} \overset{d}{\rightarrow } \mathcal N (0,1)\quad \text {as } N\rightarrow \infty . \end{aligned}$$(1.2)
Here \(\overset{d}{\rightarrow }\) denotes convergence in distribution and \(\mathcal N (0,1)\) is a standard Gaussian random variable with mean \(0\) and variance \(1\). Our choice of scaling for the cube is immaterial, as the quantity in (1.2) is invariant under scaling and translation of \([-1,1]^N\).
Gaussian random matrices play a central role in the proof of Theorem 1.1, as is often the case with results about random projections onto subspaces \(E\in G_{N,n}\). Specifically, we let \(G\) be an \(n\times N\) random matrix with independent columns \(g_1,\ldots ,g_N\) distributed according to standard Gaussian measure \(\gamma _n\) on \(\mathbb R ^n\), i.e.,
$$\begin{aligned} d\gamma _n(x) = (2\pi )^{-n/2} e^{-\Vert x\Vert _2^2/2}\,dx,\quad x\in \mathbb R ^n. \end{aligned}$$
We view \(G\) as a linear operator from \(\mathbb R ^N\) to \(\mathbb R ^n\). If \(C\subset \mathbb R ^N\) is any convex body, then
$$\begin{aligned} |GC| = \det {(GG^*)}^{1/2}\, |P_E C|, \end{aligned}$$(1.3)
where \(E= \mathop {\mathrm{Range}}(G^*)\) is distributed uniformly on \(G_{N,n}\). Moreover, \(\det {(GG^*)}^{1/2}\) and \(|P_E C|\) are independent. The latter fact underlies the Gaussian representation of intrinsic volumes, as proved by Tsirelson in [24] (see also [28]); it is also used in R. Vitale’s probabilistic derivation of the Steiner formula [27]. Passing between Gaussian vectors and random orthogonal projections is useful in a variety of contexts, e.g., [1, 5, 6, 8, 12, 14, 16, 18]. As we will show, however, it is a delicate matter to use (1.3) to prove limit theorems, especially with the normalization required in Theorem 1.1. Our path will involve analyzing asymptotic normality of \(|G B_{\infty }^N|\) before dealing with the quotient \(|GB_{\infty }^N|/\det {(GG^*)}^{1/2}\).
The set
$$\begin{aligned} GB_{\infty }^N = \sum _{i=1}^N [-g_i,g_i] \end{aligned}$$
is a random zonotope, i.e., a Minkowski sum of the random segments \([-g_i,g_i]=\{\lambda g_i: |\lambda |\leqslant 1 \}\). By the well-known zonotope volume formula (e.g. [15]), \(X_N=|GB_{\infty }^N|\) satisfies
$$\begin{aligned} X_N = 2^n \sum _{1\leqslant i_1<\cdots <i_n\leqslant N} |\det {[g_{i_1}\cdots g_{i_n}]}|, \end{aligned}$$(1.4)
where \(\det {[g_{i_1}\cdots g_{i_n}]}\) is the determinant of the matrix with columns \(g_{i_1},\ldots ,g_{i_n}\). The quantity
$$\begin{aligned} {N \atopwithdelims ()n}^{-1} \sum _{1\leqslant i_1<\cdots <i_n\leqslant N} |\det {[g_{i_1}\cdots g_{i_n}]}| \end{aligned}$$
is a U-statistic and central limit theorems for U-statistics go back to Hoeffding [11]. In fact, formula (1.4) for \(X_N\) is simply a special case of Minkowski’s theorem on mixed volumes of convex sets (see §2). In [26], Vitale proved a central limit theorem for Minkowski sums of more general random convex sets, using mixed volumes and U-statistics (discussed in detail below). In particular, it follows from Vitale’s results that \(X_N\) satisfies a central limit theorem, namely,
where \(s_{N,n}\) is a certain conditional standard deviation (see Corollary 3.4). It follows that \(X_N\) also satisfies the central limit theorem with the canonical normalization:
$$\begin{aligned} \frac{X_N-\mathbb{E }X_N}{\sqrt{\mathop {\mathrm{var}}(X_N)}} \overset{d}{\rightarrow } \mathcal N (0,1)\quad \text {as } N\rightarrow \infty \end{aligned}$$(1.6)
(using, e.g., Theorem 3.1 or Proposition 4.2).
It is tempting to think that the latter central limit theorem for \(X_N\) easily yields Theorem 1.1. However, for a family of convex bodies \(C=C_N\subset \mathbb R ^N, N=n, n+1, \ldots \), asymptotic normality of \(|GC|\) is not sufficient to conclude that \(|P_{E(N)} C|\) is asymptotically normal. For example, if \(C=B_2^N\) is the Euclidean ball in \(\mathbb R ^N\), then \(|GB_2^N|= \det {(GG^*)}^{1/2}|B_2^n|\) is asymptotically normal (e.g., [2, Theorems 4.2.3, 7.5.3]); however, \(|P_{E(N)}B_2^N|\) is constant.
In fact, as we show in Proposition 4.4, both \(X_N\) and \(\det {(GG^*)}^{1/2}\) contribute to asymptotic normality of \(Z_N=|P_{E(N)}B_{\infty }^N|\), a technical difficulty that requires careful analysis. In particular, we invoke a randomization inequality from [7, Chapter 3] to deal with the canonical normalization for \(Z_N\) in Theorem 1.1. As a by-product, we also obtain the limiting behavior of the variance of \(Z_N\) as \(N\rightarrow \infty \).
We mention that when \(n=1\), Theorem 1.1 implies that if \((\theta _N)\) is a sequence of random vectors with \(\theta _N\) distributed uniformly on the sphere \(S^{N-1}\), then the \(\ell _1\)-norm \(||\cdot ||_1\) (the support function of the cube) satisfies
$$\begin{aligned} \frac{\Vert \theta _N\Vert _1 - \mathbb{E }\Vert \theta _N\Vert _1}{\sqrt{\mathop {\mathrm{var}}(\Vert \theta _N\Vert _1)}} \overset{d}{\rightarrow } \mathcal N (0,1)\quad \text {as } N\rightarrow \infty . \end{aligned}$$
Theorem 1.1 complements recent research on central limit phenomena for various quantities that arise in Asymptotic Geometric Analysis; see, e.g., the papers of Bárány and Vu [4], Klartag [13], Reitzner [20] and the references therein. In fact, the central limit theorem for \(X_N\) in (1.6) can be seen as a counterpart to the Bárány-Vu result for convex hulls of Gaussian vectors [4]. Namely, when \(n\geqslant 2\) the quantity \(V_N = |\mathop {\mathrm{conv}}\left\{ g_1,\ldots ,g_N\right\} |\) satisfies
see the latter article for the corresponding Berry-Esseen type estimate. The latter result is one of several recent deep central limit theorems in stochastic geometry concerning random convex hulls, e.g., [3, 29]. The techniques used in this paper are different and the main focus here is to understand the Grassmannian setting.
Lastly, for a thorough exposition of the properties of the cube, see [30].
2 Preliminaries
The setting is \(\mathbb R ^n\) with the usual inner-product \(\langle \cdot , \cdot \rangle \) and Euclidean norm \(||\cdot ||_2\); \(n\)-dimensional Lebesgue measure is denoted by \(|\cdot |\). For sets \(A,B \subset \mathbb R ^n\) and scalars \(\alpha , \beta \in \mathbb R \), we define \(\alpha A +\beta B\) by usual scalar multiplication and Minkowski addition: \(\alpha A +\beta B = \{\alpha a +\beta b: a\in A, b\in B\}\).
2.1 Mixed volumes
The mixed volume \(V(K_1,\ldots ,K_n)\) of compact convex sets \(K_1,\ldots ,K_n\) in \(\mathbb R ^n\) is defined by
By a theorem of Minkowski, if \(t_1,\ldots ,t_N\) are non-negative real numbers then the volume of \(K=t_1K_1+\cdots +t_NK_N\) can be expressed as
$$\begin{aligned} |t_1K_1+\cdots +t_NK_N| = \sum _{i_1,\ldots ,i_n=1}^{N} V(K_{i_1},\ldots ,K_{i_n})\, t_{i_1}\cdots t_{i_n}. \end{aligned}$$(2.1)
The coefficients \(V(K_{i_1},\ldots , K_{i_n})\) are non-negative and invariant under permutations of their arguments. When the \(K_i\)’s are origin-symmetric line segments, say \(K_i=[-x_i,x_i]=\{\lambda x_i:|\lambda |\leqslant 1\}\), for some \(x_1,\ldots ,x_n\in \mathbb R ^n\), we simplify the notation and write
$$\begin{aligned} V(x_1,\ldots ,x_n) = V([-x_1,x_1],\ldots ,[-x_n,x_n]). \end{aligned}$$
We will make use of the following properties:
(i) \(V(K_1,\ldots ,K_n)>0\) if and only if there are line segments \(L_i\subset K_i\) with linearly independent directions.
(ii) If \(x_1,\ldots , x_n\in \mathbb R ^n\), then
$$\begin{aligned} n!V(x_1,\ldots ,x_n) = 2^n|\det {[x_1\cdots x_n]}|, \end{aligned}$$(2.3)where \(\det {[x_1\cdots x_n]}\) denotes the determinant of the matrix with columns \(x_1,\ldots ,x_n\).
(iii) \(V(K_1,\ldots ,K_n)\) is increasing in each argument (with respect to inclusion).
For further background we refer the reader to [22, Chapter 5] or [10, Appendix A].
A zonotope is a Minkowski sum of line segments. If \(x_1,\ldots ,x_N\) are vectors in \(\mathbb R ^n\), then
Alternatively, a zonotope can be seen as a linear image of the cube \(B_{\infty }^N = [-1,1]^N\). If \(x_1,\ldots ,x_N\in \mathbb R ^n\), one can view the \(n\times N\) matrix \(X = [x_1\cdots x_N]\) as a linear operator from \(\mathbb R ^N\) to \(\mathbb R ^n\); in this case, \(X B_{\infty }^N = \sum _{i=1}^N [-x_i,x_i]\).
By (2.1) and properties (i) and (ii) of mixed volumes, the volume of \(\sum _{i=1}^N[-x_i,x_i]\) satisfies
$$\begin{aligned} \Big |\sum _{i=1}^N[-x_i,x_i]\Big | = 2^n \sum _{1\leqslant i_1<\cdots <i_n\leqslant N} |\det {[x_{i_1}\cdots x_{i_n}]}|. \end{aligned}$$(2.4)
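In the plane (\(n=2\)) this zonotope volume formula reads \(|\sum _i[-x_i,x_i]| = 4\sum _{i<j}|\det [x_i\, x_j]|\). As a sanity check (a sketch of ours, not from the paper), the determinant sum can be compared with the shoelace area of the zonogon built explicitly from the generators:

```python
import math
from itertools import combinations

def zonogon_area_formula(gens):
    # Zonotope volume formula for n = 2:
    # |sum_i [-x_i, x_i]| = 4 * sum_{i<j} |det[x_i x_j]|.
    return 4 * sum(abs(a[0] * b[1] - a[1] * b[0])
                   for a, b in combinations(gens, 2))

def zonogon_area_shoelace(gens):
    # Build the boundary of sum_i [-x_i, x_i] directly: orient every
    # generator into the upper half-plane, sort by angle, then walk the
    # edges 2*x_i followed by 2*(-x_i) around the convex polygon.
    g = [v if (v[1] > 0 or (v[1] == 0 and v[0] > 0)) else (-v[0], -v[1])
         for v in gens]
    g.sort(key=lambda v: math.atan2(v[1], v[0]))
    verts = [(-sum(v[0] for v in g), -sum(v[1] for v in g))]
    for v in g + [(-a, -b) for a, b in g]:
        x, y = verts[-1]
        verts.append((x + 2 * v[0], y + 2 * v[1]))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(verts, verts[1:]):
        area += x0 * y1 - x1 * y0      # shoelace formula
    return abs(area) / 2

gens = [(1.0, 0.0), (0.5, 1.0), (-0.3, 0.7)]
A1 = zonogon_area_formula(gens)    # 4 * (1 + 0.7 + 0.65) = 9.4
A2 = zonogon_area_shoelace(gens)
```

Both computations agree for these generators, as the formula predicts.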
Note that for \(x_1,\ldots ,x_n\in \mathbb R ^n\),
$$\begin{aligned} |\det {[x_1\cdots x_n]}| = \Vert x_1\Vert _2 \prod _{k=1}^{n-1} \Vert P_{F_k^{\perp }} x_{k+1}\Vert _2, \end{aligned}$$(2.5)
where \(F_k=\mathop {\mathrm{span}}\{x_1,\ldots ,x_k\}\) for \(k=1,\ldots ,n-1\) (which can be proved using Gram-Schmidt orthogonalization, e.g., [2, Theorem 7.5.1]).
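This determinant-as-product-of-distances identity is easy to verify numerically by computing the orthogonal projections onto the spans \(F_k\) explicitly (a NumPy sketch; the random instance is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x = [rng.standard_normal(n) for _ in range(n)]

# Left-hand side: |det [x_1 ... x_n]|.
lhs = abs(np.linalg.det(np.column_stack(x)))

# Right-hand side: ||x_1||_2 times the product of the distances from
# x_{k+1} to F_k = span{x_1, ..., x_k}, i.e. ||P_{F_k^perp} x_{k+1}||_2.
rhs = np.linalg.norm(x[0])
for k in range(1, n):
    B = np.column_stack(x[:k])               # columns span F_k
    P = B @ np.linalg.solve(B.T @ B, B.T)    # orthogonal projection onto F_k
    rhs *= np.linalg.norm(x[k] - P @ x[k])   # component orthogonal to F_k
```

The two sides coincide up to floating-point error, mirroring the Gram-Schmidt proof.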
We will also use the Cauchy-Binet formula. Let \(x_1,\ldots , x_N\in \mathbb R ^n\) and let \(X\) be the \(n\times N\) matrix with columns \(x_1,\ldots ,x_N\), i.e., \(X= [x_1\cdots x_N]\). Then
$$\begin{aligned} \det {(XX^*)} = \sum _{1\leqslant i_1<\cdots <i_n\leqslant N} \det {}^2[x_{i_1}\cdots x_{i_n}]; \end{aligned}$$(2.6)
for a proof, see e.g. [9, §3.2].
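The Cauchy-Binet formula \(\det (XX^*)=\sum _{i_1<\cdots <i_n}\det {}^2[x_{i_1}\cdots x_{i_n}]\) can also be checked numerically (a NumPy sketch; the random instance is ours):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, N = 3, 6
X = rng.standard_normal((n, N))

# Left-hand side: det(X X^T).
lhs = np.linalg.det(X @ X.T)

# Right-hand side: sum of squared determinants over all n-column
# submatrices X_I, I ranging over the C(N, n) index subsets.
rhs = sum(np.linalg.det(X[:, list(I)]) ** 2
          for I in combinations(range(N), n))
```

Agreement holds up to floating-point round-off, and \(\det (XX^T)>0\) almost surely since the columns are Gaussian.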
2.2 Slutsky’s theorem
We will make frequent use of Slutsky’s theorem on convergence of random variables (see, e.g., [23, §1.5.4]).
Theorem 2.1
Let \((X_N)\) and \((\alpha _N)\) be sequences of random variables. Suppose that \(X_N\overset{d}{\rightarrow } X_0\) and \(\alpha _N\overset{\mathbb{P }}{\rightarrow } \alpha _0\), where \(\alpha _0\) is a finite constant. Then
$$\begin{aligned} X_N + \alpha _N \overset{d}{\rightarrow } X_0 + \alpha _0 \end{aligned}$$
and
$$\begin{aligned} \alpha _N X_N \overset{d}{\rightarrow } \alpha _0 X_0 . \end{aligned}$$
Slutsky’s theorem also applies when the \(X_N\)’s take values in \(\mathbb R ^k\) with \(X_N\overset{d}{\rightarrow } X_0\), and \((A_N)\) is a sequence of \(m\times k\) random matrices such that \(A_N\overset{\mathbb{P }}{\rightarrow } A_0\), where the entries of \(A_0\) are constants. In this case, \(A_N X_N\overset{d}{\rightarrow } A_0 X_0\).
3 U-statistics
In this section, we give the requisite results from the theory of U-statistics needed to prove asymptotic normality of \(X_N\) and \(Z_N\) stated in the introduction. For further background on U-statistics, see e.g. [7, 21, 23].
Let \(X_1,X_2,\ldots \) be a sequence of i.i.d. random variables with values in a measurable space \((S, \mathcal{S })\). Let \(h:S^m \rightarrow \mathbb R \) be a measurable function. For \(N\geqslant m\), the U-statistic of order \(m\) with kernel \(h\) is defined by
$$\begin{aligned} U_N = U_N(h) = \frac{(N-m)!}{N!} \sum _{(i_1,\ldots ,i_m)\in I_N^m} h(X_{i_1},\ldots ,X_{i_m}), \end{aligned}$$
where
$$\begin{aligned} I_N^m = \{(i_1,\ldots ,i_m): 1\leqslant i_j\leqslant N,\ i_j\ne i_k \text { if } j\ne k\}. \end{aligned}$$
When \(h\) is symmetric, i.e., \(h(x_1, \ldots , x_m) = h(x_{\sigma (1)}, \ldots , x_{\sigma (m)})\) for every permutation \(\sigma \) of \(m\) elements, we can write
$$\begin{aligned} U_N = {N \atopwithdelims ()m}^{-1} \sum _{i_1<\cdots <i_m} h(X_{i_1},\ldots ,X_{i_m}); \end{aligned}$$
here the sum is taken over all \({N \atopwithdelims ()m}\) subsets \(\{i_1,\ldots , i_m\}\) of \(\{1,\ldots , N\}\).
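As a toy instance of these definitions (ours, not from the paper): with the symmetric order-2 kernel \(h(x,y)=(x-y)^2/2\), the U-statistic is exactly the unbiased sample variance.

```python
from itertools import combinations

def u_statistic(h, sample, m):
    # U_N with a symmetric kernel h of order m: the average of h over
    # all C(N, m) increasing index subsets of the sample.
    vals = [h(*s) for s in combinations(sample, m)]
    return sum(vals) / len(vals)

# Order-2 kernel whose U-statistic is the unbiased sample variance:
# (1 / C(N,2)) * sum_{i<j} (x_i - x_j)^2 / 2 = SS / (N - 1).
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
u = u_statistic(lambda x, y: (x - y) ** 2 / 2, data, 2)
```

For this data set the sample mean is \(5\) and the sum of squared deviations is \(32\), so `u` equals \(32/7\).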
Using the latter notation, we state several well-known results, due to Hoeffding (see, e.g., [23, Chapter 5]).
Theorem 3.1
For \(N\geqslant m\), let \(U_N\) be a U-statistic with kernel \(h:S^m \rightarrow \mathbb R \). Set \(\zeta = \mathop {\mathrm{var}}(\mathbb{E }[h(X_1,\ldots ,X_m) | X_1])\).
(1) The variance of \(U_N\) satisfies
$$\begin{aligned} \mathop {\mathrm{var}}(U_N) = \frac{m^2 \zeta }{N} + O(N^{-2})\quad \text {as }N\rightarrow \infty . \end{aligned}$$
(2) If \(\mathbb{E }|h(X_1,\ldots ,X_m)| <\infty \), then \(U_N \overset{a.s.}{\rightarrow } \mathbb{E }U_N\) as \(N\rightarrow \infty \).
(3) If \(\mathbb{E }h^2(X_1, \ldots , X_m)< \infty \) and \(\zeta >0\), then
$$\begin{aligned} \sqrt{N} \left( \frac{ U_{N}- \mathbb{E }U_N}{ m\sqrt{\zeta } }\right) \overset{d}{\rightarrow } \mathcal{{N}}(0,1)\quad \text {as } N\rightarrow \infty . \end{aligned}$$
The corresponding Berry-Esseen type bounds are also available (see, e.g., [23, p. 193]), stated here in terms of the function
Theorem 3.2
With the preceding notation, suppose that \(\xi =\mathbb{E }|h(X_{1}, \ldots , X_{m})|^3 <\infty \) and
Then
where \(c>0\) is a universal constant.
3.1 U-statistics and mixed volumes
Let \(\mathcal C _n\) denote the class of all compact, convex sets in \(\mathbb R ^n\). A topology on \(\mathcal C _n\) is induced by the Hausdorff metric
$$\begin{aligned} \delta _H(K,L) = \max \Big \{ \sup _{x\in K}\inf _{y\in L}\Vert x-y\Vert _2,\; \sup _{y\in L}\inf _{x\in K}\Vert x-y\Vert _2 \Big \}. \end{aligned}$$
A random convex set is a Borel measurable map from a probability space into \(\mathcal{C }_n\). A key ingredient in our proof is the following theorem for Minkowski sums of random convex sets due to Vitale [26]; we include the proof for completeness.
Theorem 3.3
Let \(n\geqslant 1\) be an integer. Suppose that \(K_1,K_2,\ldots \) are i.i.d. random convex sets in \(\mathbb R ^n\) such that \(\mathbb{E }\sup _{x\in K_1}||x||_2<\infty \). Set \(V_N =|\sum _{i=1}^N K_i|\) and suppose that \(\mathbb{E }V(K_1,\ldots ,K_n)^2<\infty \) and furthermore that \(\zeta = \mathop {\mathrm{var}}(\mathbb{E }[V(K_1,\ldots ,K_n)| K_1]) >0\). Then
$$\begin{aligned} \sqrt{N}\, \frac{V_N - \mathbb{E }V_N}{(N)_n\, n\sqrt{\zeta }} \overset{d}{\rightarrow } \mathcal N (0,1) \quad \text {as } N\rightarrow \infty , \end{aligned}$$
where \((N)_n=\frac{N!}{(N-n)!}\).
Proof
Taking \(h:(\mathcal C _n)^n \rightarrow \mathbb R \) to be \(h(K_1,\ldots ,K_n) = V(K_1,\ldots ,K_n)\) and using (2.1), we have
$$\begin{aligned} \frac{V_N}{(N)_n} = U_N(h) + \frac{1}{(N)_n}\sum _{(i_1,\ldots ,i_n)\in J} V(K_{i_1},\ldots ,K_{i_n}), \end{aligned}$$(3.3)
where
$$\begin{aligned} U_N(h) = \frac{1}{(N)_n}\sum _{(i_1,\ldots ,i_n)\in I_N^n} V(K_{i_1},\ldots ,K_{i_n}) \end{aligned}$$
and \(J=\{1,\ldots ,N\}^n\backslash I_N^n\). Note that \(|J|/(N)_n = O(\frac{1}{N})\) and thus the second term on the right-hand side of (3.3) tends to zero in probability. Applying Theorem 3.1(3) and Slutsky’s theorem leads to the desired conclusion.\(\square \)
In the special case when the \(K_i\)’s are line segments, say \(K_i = [-X_i, X_i]\) where \(X_1,X_2,\ldots \) are i.i.d. random vectors in \(\mathbb R ^n\), the assumptions in the latter theorem can be readily verified by using (2.3). Furthermore, if the \(X_i\)’s are rotationally-invariant, the assumptions simplify further as follows (essentially from [26], stated here in a form that best serves our purpose).
Corollary 3.4
Let \(X=R\theta \) be a random vector such that \(\theta \) is uniformly distributed on the sphere \(S^{n-1}\) and \(R\geqslant 0\) is independent of \(\theta \) and satisfies \(\mathbb{E }R^2 <\infty \) and \(\mathop {\mathrm{var}}(R)>0\). For each \(i=1,2,\ldots \), let \(X_i=R_i\theta _i\) be independent copies of \(X\). Let \(D_n=|\det {[\theta _1\cdots \theta _n]}|\) and set
$$\begin{aligned} \zeta _1 = 4^n \mathop {\mathrm{var}}(R)\, \mathbb{E }^{2(n-1)}R\, \mathbb{E }^2 D_n . \end{aligned}$$
Then \(V_N=|\sum _{i=1}^N [-X_i,X_i]|\) satisfies
$$\begin{aligned} \sqrt{N}\, \frac{(n-1)!\,\big (V_N - \mathbb{E }V_N\big )}{(N)_n \sqrt{\zeta _1}} \overset{d}{\rightarrow } \mathcal N (0,1) \quad \text {as } N\rightarrow \infty . \end{aligned}$$
Proof
Plugging \(X_i=R_i\theta _i, i=1,\ldots ,n\), into (2.3) gives
$$\begin{aligned} V(X_1,\ldots ,X_n) = \frac{2^n}{n!}\, R_1\cdots R_n\, D_n. \end{aligned}$$(3.4)
By (2.5),
$$\begin{aligned} D_n = \prod _{k=1}^{n-1} \Vert P_{F_k^{\perp }}\theta _{k+1}\Vert _2, \end{aligned}$$(3.5)
with \(F_k = \mathop {\mathrm{span}}\{\theta _1,\ldots ,\theta _k\}\) for \(k=1,\ldots ,n-1\). In particular, \(D_n\leqslant 1\) and thus (3.4) implies
$$\begin{aligned} \mathbb{E }V(X_1,\ldots ,X_n)^2 \leqslant \frac{4^n}{(n!)^2}\, (\mathbb{E }R^2)^n < \infty . \end{aligned}$$
Using (3.4) once more, together with (3.5), we have
$$\begin{aligned} \mathbb{E }[V(X_1,\ldots ,X_n)\,|\,X_1] = \frac{2^n}{n!}\, R_1\, \mathbb{E }^{n-1}R\, \mathbb{E }D_n; \end{aligned}$$(3.6)
here we have used the fact that \(\mathbb{E }||P_{{F_k}^{\perp }}\theta _{k+1}||_2\) depends only on the dimension of \(F_k\) (which is equal to \(k\) a.s.) and that \(||\theta _1||_2=1\) a.s. By (3.6) and our assumption \(\mathop {\mathrm{var}}(R)>0\), we can apply Theorem 3.3 with
$$\begin{aligned} \zeta = \mathop {\mathrm{var}}\big (\mathbb{E }[V(X_1,\ldots ,X_n)\,|\,X_1]\big ) = \frac{\zeta _1}{(n!)^2}, \end{aligned}$$
where \(\zeta _1\) is defined in the statement of the corollary.\(\square \)
For further information on Theorem 3.3, including a CLT for the random sets themselves, or the case when \(\zeta =0\), see [26] or [17, p. 232]; see also [25].
Corollary 3.4 implies the first central limit theorem for \(X_N\) stated in the introduction (1.5). To prove Theorem 1.1, however, we will need some additional tools.
3.2 Randomization
In this subsection, we discuss a randomization inequality for U-statistics. It will be used for variance estimates and will play a crucial role in the proof of Theorem 1.1.
Using the notation at the beginning of §3, suppose that \(h:(\mathbb R ^n)^m\rightarrow \mathbb R \) satisfies \(\mathbb{E }|h(X_1,\ldots ,X_m)| <\infty \) and let \(1<r\leqslant m\). Following [7, Definition 3.5.1], we say that \(h\) is degenerate of order \(r-1\) if
$$\begin{aligned} \mathbb{E }[h(x_1,\ldots ,x_{r-1},X_r,\ldots ,X_m)] = \mathbb{E }[h(X_1,\ldots ,X_m)] \end{aligned}$$
for all \(x_1,\ldots ,x_{r-1}\in \mathbb R ^n\), and the function
$$\begin{aligned} (x_1,\ldots ,x_r) \mapsto \mathbb{E }[h(x_1,\ldots ,x_r,X_{r+1},\ldots ,X_m)] \end{aligned}$$
is non-constant. If \(h\) is not degenerate of any positive order \(r\), we say it is non-degenerate or degenerate of order \(0\). We will make use of the following randomization theorem, which is a special case of [7, Theorem 3.5.3].
Theorem 3.5
Let \(1\leqslant r \leqslant m\) and \(p\geqslant 1\). Suppose that \(h:S^m\rightarrow \mathbb R \) is degenerate of order \(r-1\) and \(\mathbb{E }|h(X_1,\ldots ,X_m)|^p<\infty \). Set
Let \(\varepsilon _1,\ldots ,\varepsilon _N\) denote i.i.d. Rademacher random variables, independent of \(X_1,\ldots ,X_N\). Then
Here \(A\simeq _{m,p}B\) means \(C^{\prime }_{m,p} A \leqslant B \leqslant C^{\prime \prime }_{m,p} A\), where \(C^{\prime }_{m,p}\) and \(C^{\prime \prime }_{m,p}\) are constants that depend only on \(m\) and \(p\).
Corollary 3.6
Let \(\mu \) be a probability measure on \(\mathbb R ^n\), absolutely continuous with respect to Lebesgue measure. Suppose that \(X_1,\ldots ,X_N\) are i.i.d. random vectors distributed according to \(\mu \). Let \(p\geqslant 2\) and suppose \(\mathbb{E }|\det {[X_1\cdots X_n]}|^p<\infty \). Define \(f:(\mathbb R ^n)^n \rightarrow \mathbb R \) by
$$\begin{aligned} f(x_1,\ldots ,x_n) = |\det {[x_1\cdots x_n]}|. \end{aligned}$$
Then
$$\begin{aligned} \mathbb{E }\Big |\sum _{1\leqslant i_1<\cdots <i_n\leqslant N} \big (f(X_{i_1},\ldots ,X_{i_n}) - \mathbb{E }f(X_{i_1},\ldots ,X_{i_n})\big )\Big |^p \leqslant C_{n,p}\, N^{p(n-\frac{1}{2})}\, \mathbb{E }|\det {[X_1\cdots X_n]}|^p, \end{aligned}$$
where \(C_{n,p}\) is a constant that depends on \(n\) and \(p\).
Proof
Since \(\mu \) is absolutely continuous, \(\mathop {\mathrm{dim}}\left( \mathop {\mathrm{span}}\{X_1,\ldots , X_k\}\right) =k\) a.s. for \(k=1,\ldots , n\). Moreover, \( f(ax_1, x_2,\ldots , x_n) = |a|f(x_1,\ldots ,x_n)\) for any \(a\in \mathbb R \), hence \(f\) is non-degenerate (cf. (2.5)). Thus we may apply Theorem 3.5 with \(r=1\):
Suppose now that \(X_1,\ldots ,X_N\) are fixed. Taking expectation in \(\mathbf{\varepsilon }= (\varepsilon _1,\ldots ,\varepsilon _N)\) and applying Khintchine’s inequality and then Hölder’s inequality twice, we have
where \(C\) is an absolute constant. Taking expectation in the \(X_i\)’s gives
The corollary follows as stated by using the estimate \({N\atopwithdelims ()n}\leqslant (eN/n)^n\).\(\square \)
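The Khintchine step above can be made concrete: for \(p=4\) one has the exact identity \(\mathbb{E }\big (\sum _i\varepsilon _i a_i\big )^4 = 3\big (\sum _i a_i^2\big )^2 - 2\sum _i a_i^4 \leqslant 3\big (\sum _i a_i^2\big )^2\), with the Gaussian constant \(\mathbb{E }g^4=3\). A short check by exhaustive enumeration of sign patterns (a sketch of ours; the coefficient vector is arbitrary):

```python
from itertools import product

def exact_fourth_moment(a):
    # E (sum_i eps_i a_i)^4, computed exactly by enumerating all 2^N
    # equally likely Rademacher sign patterns eps in {-1, +1}^N.
    n = len(a)
    total = sum(sum(e * x for e, x in zip(eps, a)) ** 4
                for eps in product((-1, 1), repeat=n))
    return total / 2 ** n

a = [0.9, -0.4, 1.3, 0.2, -0.7]
m4 = exact_fourth_moment(a)
s2 = sum(x * x for x in a)
bound = 3 * s2 ** 2   # Khintchine bound for p = 4 (constant E g^4 = 3)
```

Expanding the fourth power shows that only the \(a_i^4\) and \(a_i^2a_j^2\) terms survive the sign averaging, which gives the identity verified below.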
4 Proof of Theorem 1.1
As explained in the introduction, our first step is identity (1.3), the proof of which is included for completeness.
Proposition 4.1
Let \(N\geqslant n\) and let \(G\) be an \(n\times N\) random matrix with i.i.d. standard Gaussian entries. Let \(C\subset \mathbb R ^N\) be a convex body. Then
$$\begin{aligned} |GC| = \det {(GG^*)}^{1/2}\, |P_E C|, \end{aligned}$$(4.1)
where \(E = \mathop {\mathrm{Range}}(G^*)\). Moreover, \(E\) is distributed uniformly on \(G_{N,n}\), and \(\det {(GG^*)}^{\frac{1}{2}}\) and \(\left|P_E C\right|\) are independent.
Proof
Identity (4.1) follows from polar decomposition; see, e.g., [18, Theorem 2.1(iii)]. To prove that the two factors are independent, we note that if \(U\) is an orthogonal transformation, we have \(\mathop {\mathrm{det}}(GG^*)^{1/2} = \mathop {\mathrm{det}}((GU)(GU)^*)^{1/2}\); moreover, \(G\) and \(GU\) have the same distribution. Thus if \(U\) is a random orthogonal transformation distributed according to the Haar measure, we have for \(s, t\geqslant 0\),
\(\square \)
Taking \(C=B_{\infty }^N\) in (4.1), we set
$$\begin{aligned} X_N = |GB_{\infty }^N| \end{aligned}$$(4.2)
[cf. (2.4)],
$$\begin{aligned} Y_N = \det {(GG^*)}^{1/2} \end{aligned}$$(4.3)
[cf. (2.6)], and
$$\begin{aligned} Z_N = |P_E B_{\infty }^N|, \end{aligned}$$(4.4)
where \(E\) is distributed according to \(\nu _{N,n}\) on \(G_{N,n}\). Then \(X_N = Y_N Z_N\), where \(Y_N\) and \(Z_N\) are independent. In order to prove Theorem 1.1, we start with several properties of \(X_N\) and \(Y_N\).
Proposition 4.2
Let \(X_N\) be as defined in (4.2).
(1) For each \(p\geqslant 2\),
$$\begin{aligned} \mathbb{E }|X_N-\mathbb{E }X_N|^p\leqslant C_{n,p} N^{p(n-\frac{1}{2})}. \end{aligned}$$
(2) The variance of \(X_N\) satisfies
$$\begin{aligned} \frac{\mathop {\mathrm{var}}(X_N)}{N^{2n-1}} \rightarrow c_n \quad \text {as }N\rightarrow \infty , \end{aligned}$$where \(c_n\) is a positive constant that depends only on \(n\).
(3) \(X_N\) is asymptotically normal; i.e.,
$$\begin{aligned} \frac{X_N-\mathbb{E }X_N}{\sqrt{\mathop {\mathrm{var}}(X_N)}} \overset{d}{\rightarrow } \mathcal N (0,1) \quad \text {as } N\rightarrow \infty . \end{aligned}$$
Proof
Statement (1) follows from Corollary 3.6.
To prove (2), let \(g\) be a random vector distributed according to \(\gamma _n\). Then Corollary 3.4 with \(\zeta _1 =4^n\mathop {\mathrm{var}}(||g||_2)\mathbb{E }^{2(n-1)}||g||_2\mathbb{E }^2 D_n \) yields
$$\begin{aligned} \sqrt{N}\, \frac{(n-1)!\,\big (X_N - \mathbb{E }X_N\big )}{(N)_n \sqrt{\zeta _1}} \overset{d}{\rightarrow } \mathcal N (0,1) \quad \text {as } N\rightarrow \infty . \end{aligned}$$(4.5)
On the other hand, by part (1) we have
This implies that the sequence \((X_N-\mathbb{E }X_N)/N^{n-\frac{1}{2}}\) is uniformly integrable, hence
which implies (2).
Part (3) follows from (4.5) and Slutsky’s theorem.\(\square \)
We now turn to \(Y_N=\det {(GG^*)}^{\frac{1}{2}}\). It is well-known that
$$\begin{aligned} Y_N \overset{d}{=} \chi _N\, \chi _{N-1}\cdots \chi _{N-n+1}, \end{aligned}$$(4.6)
where \(\chi _k=\sqrt{\chi _k^2}\) and the \(\chi _k^2\)’s are independent chi-squared random variables with \(k\) degrees of freedom, \(k=N, \ldots ,N-n+1\) (see, e.g., [2, Chapter 7]). Consequently,
Additionally, we will use the following basic properties of \(Y_N\).
Proposition 4.3
Let \(Y_N\) be as defined in (4.3).
(1) For each \(p\geqslant 2\),
$$\begin{aligned} \mathbb{E }|Y_N^2 - \mathbb{E }Y_N^2|^p \leqslant C_{n,p} N^{p(n-\frac{1}{2})}. \end{aligned}$$
(2) The variance of \(Y_N\) satisfies
$$\begin{aligned} \frac{\mathop {\mathrm{var}}(Y_{N})}{N^{n-1}} \rightarrow \frac{n}{2} \quad \text {as } N\rightarrow \infty . \end{aligned}$$
(3) \(Y_N^2\) is asymptotically normal; i.e.,
$$\begin{aligned} \sqrt{N}\left( \frac{Y_N^2}{N^n}-1\right) \overset{d}{\rightarrow } \mathcal N (0,2n) \quad \text {as } N\rightarrow \infty . \end{aligned}$$
Proof
To prove part (1), we apply Corollary 3.6 to \(Y_N^2\).
To prove part (2), we use (4.6) and define \(Y_{N,n}\) by \(Y_{N,n}=Y_N = \chi _N\chi _{N-1}\cdots \chi _{N-n+1}\) and proceed by induction on \(n\). Suppose first that \(n=1\) so that \(Y_{N,1} =\chi _N\). By the concentration of Gaussian measure (e.g., [19, Remark 4.8]), there is an absolute constant \(c_1\) such that \(\mathbb{E }|\chi _N - \mathbb{E }\chi _N|^4<c_1\) for all \(N\), which implies that the sequence \((\chi _N- \mathbb{E }\chi _N)_N\) is uniformly integrable. By the law of large numbers, \(\chi _N/\sqrt{N}\rightarrow 1\) a.s., and hence \(\mathbb{E }\chi _N /\sqrt{N}\rightarrow 1\) by uniform integrability. Note that
By Slutsky’s theorem and the classical central limit theorem,
while
since \(\mathop {\mathrm{var}}(\chi _N)=N-\mathbb{E }^2 \chi _N <c_1^{1/2}\). Thus
Appealing again to uniform integrability of \((\chi _N-\mathbb{E }\chi _N)_N\), we have
Assume now that
Note that
We conclude the proof of part (2) by applying (4.7),
and, using the inductive hypothesis,
Lastly, statement (3) is well-known (see, e.g., [2, §7.5.3]).\(\square \)
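For orientation, part (3) also admits a quick first-order heuristic via the factorization (4.6); this is a sketch, not the cited proof:

```latex
\sqrt{N}\left(\frac{Y_N^2}{N^n}-1\right)
  \;\approx\; \sum_{j=0}^{n-1}\sqrt{N}\left(\frac{\chi^2_{N-j}}{N-j}-1\right)
  \;\overset{d}{\longrightarrow}\; \mathcal N(0,2n),
```

since the \(\chi ^2\)-factors are independent, each term converges to \(\mathcal N(0,2)\) by the classical central limit theorem, and the \(n\) independent Gaussian limits add in variance.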
The next proposition is the key identity for \(Z_N\). To state it we will use the following notation:
$$\begin{aligned} \Delta _{n,p} = \left(\mathbb{E }\,|\det {[g_1\cdots g_n]}|^p\right)^{1/p},\quad p\geqslant 1, \end{aligned}$$
where \(g_1,\ldots ,g_n\) are independent standard Gaussian random vectors in \(\mathbb R ^n\).
Explicit formulas for \(\Delta _{n,p}^p\) are well-known and follow from identity (2.5); see, e.g., [2, p. 269].
Proposition 4.4
Let \(X_N, Y_N\) and \(Z_N\) be as above (cf. (4.2)–(4.4)). Then
where
(i) \(\alpha _{N,n} \overset{a.s.}{\rightarrow } 1\) as \(N\rightarrow \infty \);
(ii) \(\beta _{N,n} \overset{a.s.}{\rightarrow } \beta _n=\frac{2^{n-1}\Delta _{n,1}}{\Delta _{n,2}^2}\) as \(N\rightarrow \infty \);
(iii) \(\delta _{N,n}\overset{a.s.}{\rightarrow } 0\) as \(N\rightarrow \infty \).
Moreover, for all \(p\geqslant 1\),
The latter proposition is the first step in passing from the quotient \(Z_N=X_N/Y_N\) to the normalization required in Theorem 1.1. The fact that \(N^{n-\frac{1}{2}}\) appears in both of the denominators on the right-hand side of (4.9) indicates that both \(X_N\) and \(Y_N^2\) must be accounted for in order to capture the asymptotic normality of \(Z_N\).
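To see this heuristically, write \(Z_N=X_N/Y_N\) and linearize the square root \(Y_N=(Y_N^2)^{1/2}\) around \(\mathbb{E }Y_N^2\); this first-order sketch (ours, not the paper's argument) gives

```latex
Z_N \;\approx\; \frac{X_N}{\sqrt{\mathbb{E}\,Y_N^2}}
 \;-\; \frac{\mathbb{E}\,X_N}{2\left(\mathbb{E}\,Y_N^2\right)^{3/2}}
        \left(Y_N^2-\mathbb{E}\,Y_N^2\right).
```

Since \(\mathbb{E }X_N\) and \(\mathbb{E }Y_N^2\) are both of order \(N^n\) while \(X_N-\mathbb{E }X_N\) and \(Y_N^2-\mathbb{E }Y_N^2\) both fluctuate at scale \(N^{n-\frac{1}{2}}\), the two terms contribute to the fluctuations of \(Z_N\) at the same order, so neither can be discarded.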
Proof
Write
Thus
which shows that (4.9) holds with
Using the factorization of \(Y_N\) in (4.6) and applying the SLLN for each \(\chi _k\) (\(k=N,\ldots ,N-n+1\)), we have
and hence
By the Cauchy-Binet formula (2.6) and the SLLN for U-statistics (Theorem 3.1(2)), we have
Thus
By Proposition 4.3(2), we also have \(\delta _{N,n}\overset{a.s.}{\rightarrow } 0 \text { as } N\rightarrow \infty \). To prove the last assertion, we note that for \(1\leqslant p\leqslant (N-n+1)/2\),
where \(C_{n,p}\) is a constant that depends on \(n\) and \(p\) only (see, e.g., [18, Lemma 4.2]).\(\square \)
Proof of Theorem 1.1
To simplify the notation, for \(I = \{i_1,\ldots ,i_n\}\subset \{1,\ldots ,N\}\), write \(d_I=|\mathop {\mathrm{det}}[g_{i_1}\cdots g_{i_n}]|\). Applying Proposition 4.4, we can write
where
and
Set \(I_0=\{1,\ldots ,n\}\). Applying Theorem 3.1(3) with
yields
By Proposition 4.4, \(\alpha _{N,n}\overset{a.s.}{\rightarrow } 1, \beta _{N,n}\overset{a.s.}{\rightarrow } \beta _n\) and \(\delta _{N,n}\overset{a.s.}{\rightarrow } 0\); moreover, each of the latter sequences is uniformly integrable. Thus by Hölder’s inequality and Proposition 4.2(1)
Similarly, using Proposition 4.3(1),
By Slutsky’s theorem and the fact that \({N\atopwithdelims ()n}/N^n\rightarrow 1/n!\) as \(N\rightarrow \infty \), we have
To conclude the proof of the theorem, it is sufficient to show that
Once again we appeal to uniform integrability: by Proposition 4.4,
By Hölder’s inequality and Propositions 4.2(1), 4.3(1) and 4.4,
References
Affentranger, F., Schneider, R.: Random projections of regular simplices. Discrete Comput. Geom. 7(3), 219–226 (1992)
Anderson, T.W.: An introduction to multivariate statistical analysis, 3rd edn. Wiley Series in Probability and Statistics. Wiley-Interscience, Hoboken (2003)
Bárány, I., Reitzner, M.: Poisson polytopes. Ann. Probab. 38(4), 1507–1531 (2010)
Bárány, I., Vu, V.: Central limit theorems for Gaussian polytopes. Ann. Probab. 35(4), 1593–1621 (2007)
Baryshnikov, Y.M., Vitale, R.A.: Regular simplices and Gaussian samples. Discrete Comput. Geom. 11(2), 141–147 (1994)
Böröczky, K. Jr., Henk, M.: Random projections of regular polytopes. Arch. Math. (Basel) 73(6), 465–473 (1999)
de la Peña, V.H., Giné, E.: Decoupling. From dependence to independence. Randomly stopped processes, \(U\)-statistics and processes, martingales and beyond. Probability and its Applications (New York). Springer, New York (1999)
Donoho, D.L., Tanner, J.: Counting the faces of randomly-projected hypercubes and orthants, with applications. Discrete Comput. Geom. 43(3), 522–541 (2010)
Evans, L.C., Gariepy, R.F.: Measure theory and fine properties of functions. In: Studies in Advanced Mathematics. CRC Press, Boca Raton (1992)
Gardner, R.J.: Geometric tomography, 2nd edn. Encyclopedia of Mathematics and its Applications, vol. 58. Cambridge University Press, Cambridge (2006)
Hoeffding, W.: A class of statistics with asymptotically normal distribution. Ann. Math. Stat. 19, 293–325 (1948)
James, A.T.: Normal multivariate analysis and the orthogonal group. Ann. Math. Stat. 25, 40–75 (1954)
Klartag, B.: A central limit theorem for convex sets. Invent. Math. 168(1), 91–131 (2007)
Mankiewicz, P., Tomczak-Jaegermann, N.: Geometry of families of random projections of symmetric convex bodies. Geom. Funct. Anal. 11(6), 1282–1326 (2001)
McMullen, P.: Volumes of projections of unit cubes. Bull. London Math. Soc. 16(3), 278–280 (1984)
Miles, R.E.: Isotropic random simplices. Adv. Appl. Probab. 3, 353–382 (1971)
Molchanov, I.: Theory of random sets. In: Probability and its Applications (New York). Springer London Ltd., London (2005)
Paouris, G., Pivovarov, P.: Small-ball probabilities for the volume of random convex sets. Discrete Comput. Geom. 49(3), 601–646 (2013)
Pisier, G.: The volume of convex bodies and Banach space geometry. In: Cambridge Tracts in Mathematics, vol. 94. Cambridge University Press, Cambridge (1989)
Reitzner, M.: Central limit theorems for random polytopes. Probab. Theory Relat. Fields 133(4), 483–507 (2005)
Rubin, H., Vitale, R.A.: Asymptotic distribution of symmetric statistics. Ann. Stat. 8(1), 165–170 (1980)
Schneider, R.: Convex bodies: the Brunn-Minkowski theory. In: Encyclopedia of Mathematics and its Applications, vol. 44. Cambridge University Press, Cambridge (1993)
Serfling, R.J.: Approximation theorems of mathematical statistics. In: Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1980)
Tsirelson, B.S.: A geometric approach to maximum likelihood estimation for an infinite-dimensional Gaussian location. II. Teor. Veroyatnost. i Primenen. 30(4), 772–779 (1985). English translation: Theory Probab. Appl. 30(4), 820–827 (1985)
Vitale, R.A.: Asymptotic area and perimeter of sums of random plane convex sets. University of Wisconsin-Madison, Mathematics Research Center, no. 1770 (1977)
Vitale, R.A.: Symmetric statistics and random shape. In: Proceedings of the 1st World Congress of the Bernoulli Society, vol. 1, pp. 595–600 (Tashkent, 1986). VNU Science Press, Utrecht (1987)
Vitale, R.A.: On the volume of parallel bodies: a probabilistic derivation of the Steiner formula. Adv. Appl. Probab. 27(1), 97–101 (1995)
Vitale, R.A.: On the Gaussian representation of intrinsic volumes. Stat. Probab. Lett. 78(10), 1246–1249 (2008)
Vu, V.: Central limit theorems for random polytopes in a smooth convex set. Adv. Math. 207(1), 221–243 (2006)
Zong, C.: The cube: a window to convex and discrete geometry. In: Cambridge Tracts in Mathematics, vol. 168. Cambridge University Press, Cambridge (2006)
Acknowledgments
It is our pleasure to thank R. Vitale for helpful comments on an earlier version of this paper.
G. Paouris was supported by the A. Sloan Foundation, BSF grant 2010288 and the US National Science Foundation, grants DMS-0906150 and CAREER-1151711.
P. Pivovarov was supported by a Postdoctoral Fellowship award from the Natural Sciences and Engineering Research Council of Canada and the Department of Mathematics at Texas A&M University.
J. Zinn was partially supported by NSF grant DMS-1208962.
Paouris, G., Pivovarov, P. & Zinn, J. A central limit theorem for projections of the cube. Probab. Theory Relat. Fields 159, 701–719 (2014). https://doi.org/10.1007/s00440-013-0518-8
Mathematics Subject Classification
- Primary 52A23 (Asymptotic theory of convex bodies)
- Secondary 52A22 (Random convex sets and integral geometry)
- 60D05 (Geometric probability and stochastic geometry)
- 60F05 (Central limit and other weak theorems)