Threshold phenomena for random cones

We consider an even probability distribution on $d$-dimensional Euclidean space with the property that it assigns measure zero to every hyperplane through the origin. Given $N$ independent random vectors with this distribution, conditioned on the event that they do not positively span the whole space, the positive hull of these vectors is a random polyhedral cone (and its intersection with the unit sphere is a random spherical polytope). This model was first studied by Cover and Efron. We consider the expected face numbers of these random cones and describe a threshold phenomenon when the dimension $d$ and the number $N$ of random vectors tend to infinity. In a similar way, we treat the solid angle, and more generally the Grassmann angles. We further consider the expected numbers of $k$-faces and the Grassmann angles of index $d-k$ when also $k$ tends to infinity.


Introduction
The following is a literal quotation from [5]: "Recent work has exposed a phenomenon of abrupt phase transitions in high-dimensional geometry. The phase transitions amount to a rapid shift in the likelihood of a property's occurrence when a dimension parameter crosses a critical level (a threshold)." Two early observations of this phenomenon were published in 1992. Dyer, Füredi and McDiarmid [8] considered the convex hull of $N = N(d)$ vertices chosen independently at random (with equal chances) from the vertices of the unit cube in $\mathbb{R}^d$. Let $V_{d,N}$ denote the volume of this random polytope. Then, for every $\varepsilon > 0$,
$$\lim_{d\to\infty} \mathbb{E}\, V_{d,N} = \begin{cases} 0 & \text{if } N \le (2/\sqrt{e} - \varepsilon)^d, \\ 1 & \text{if } N \ge (2/\sqrt{e} + \varepsilon)^d. \end{cases}$$
Here $\mathbb{E}$ denotes mathematical expectation. The paper [8] contains a similar result for the convex hull of i.i.d. uniform random points from the unit cube. We have quoted this example as an illustration of what we have in mind: for instance, a $d$-dimensional random polytope with its number $N$ of vertices depending on $d$, where a small change of this dependence causes an abrupt change of some property as $d \to \infty$. In the work of Vershik and Sporyshev [15], a $d$-dimensional random polytope is obtained as a uniform random orthogonal projection of a fixed regular simplex with $N$ vertices in a higher-dimensional space, and threshold phenomena are exhibited for the expected numbers of $k$-faces, under the assumption of a linearly coordinated growth of the parameters $d, N, k$. Similar models, also with the regular simplex replaced by the regular cross-polytope, and with random projections extended to more general random linear mappings, have found important applications in the work of Donoho and collaborators. We refer to the paper of Donoho and Tanner [6], where earlier work of these authors is also cited and explained. The paper [7] of the same authors treats random projections of the cube and of the positive orthant in a similar way.
Generally in stochastic geometry, threshold phenomena have been investigated for face numbers, neighborliness properties, volumes, intrinsic volumes, more general measures, and for several different models of random polytopes. Different phase transitions were exhibited. We mention that [9] has extended the model of [8] by introducing more general distributions for the random points. The paper [13] considers convex hulls of i.i.d. random points with either Gaussian distribution or uniform distribution on the unit sphere. In [1], [2], the points have a beta or beta-prime distribution. The paper [3] studies facet numbers of convex hulls of random points on the unit sphere in different regimes. The papers [13] and [1] deal also with polytopes generated by intersections of random closed halfspaces.
In this note, we consider a model of random polyhedral convex cones (or, equivalently, of random spherical polytopes) that was introduced by Cover and Efron [4] (and more closely investigated in [11]). Let φ be a probability measure on the Euclidean space $\mathbb{R}^d$ which is even (invariant under reflection in the origin $o$) and assigns measure zero to each hyperplane through the origin. For $n \in \mathbb{N}$, the (φ, n)-Cover-Efron cone $C_n$ is defined as the positive hull of $n$ independent random vectors $X_1, \dots, X_n$ with distribution φ, under the condition that this positive hull is different from $\mathbb{R}^d$. The intersection $C_n \cap S^{d-1}$ with the unit sphere $S^{d-1}$ is a spherical random polytope, contained in some closed halfsphere. In the following, it will be convenient to work with polyhedral cones instead of spherical polytopes.
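For illustration, the conditioning event can be sampled directly. The following numerical sketch is ours and makes assumptions not present in the text: φ is taken to be the standard Gaussian distribution on $\mathbb{R}^2$ (which is even and assigns measure zero to every line through $o$), and we use the planar fact that finitely many vectors have positive hull different from $\mathbb{R}^2$ exactly when they lie in a common closed halfplane, i.e. when some gap between consecutive polar angles is at least π. All function names are ours.

```python
import math
import random

def in_closed_halfplane(vectors):
    """Planar criterion: vectors in R^2 lie in a common closed halfplane
    iff some cyclic gap between consecutive polar angles is >= pi."""
    angles = sorted(math.atan2(y, x) for x, y in vectors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) >= math.pi

def wendel_probability(d, n):
    """Wendel's formula for P(pos{X_1,...,X_n} != R^d)."""
    return sum(math.comb(n - 1, i) for i in range(d)) / 2 ** (n - 1)

random.seed(1)
trials = 40000
hits = sum(
    in_closed_halfplane([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)])
    for _ in range(trials)
)
estimate = hits / trials
print(estimate, wendel_probability(2, 4))  # both close to 0.5
```

By Wendel's formula, recalled in the Preliminaries below, the probability of the conditioning event for $d = 2$, $n = 4$ equals $2^{-3}\big(\binom{3}{0} + \binom{3}{1}\big) = 1/2$, which the Monte Carlo estimate reproduces.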
For $k \in \{1, \dots, d-1\}$, let $f_k(C_n)$ denote the number of $k$-dimensional faces of the cone $C_n$ (equivalently, the number of $(k-1)$-dimensional faces of the spherical polytope $C_n \cap S^{d-1}$). We are interested in the asymptotic behavior of the expectation $\mathbb{E}\, f_k(C_n)$ as $d$ tends to infinity and $n$ grows suitably with $d$.
Convention. In the following, $C_N$ is a Cover-Efron cone in $\mathbb{R}^d$, and $N = N(d) \ge d$ is an integer depending on the dimension $d$, but we will omit the dimension $d$ in the notation. Further (until stated otherwise), $k$ is a fixed positive integer, and we consider only dimensions $d > k$.
Any $k$-face of the cone $C_N$ is a.s. the positive hull of $k$ vectors from $X_1, \dots, X_N$, and there are $\binom{N}{k}$ possible choices. Therefore, we consider the quotient $\mathbb{E}\, f_k(C_N)/\binom{N}{k}$. For this quotient, we can state the following phase transition.
Theorem 1. Suppose that $d/N \to \delta$ as $d \to \infty$, with a number $\delta \in [0, 1]$. Then the Cover-Efron cone $C_N$ satisfies
$$\lim_{d\to\infty} \frac{\mathbb{E}\, f_k(C_N)}{\binom{N}{k}} = \begin{cases} 1 & \text{if } \delta > 1/2, \\ (2\delta)^k & \text{if } \delta < 1/2. \end{cases}$$
We do not know what happens if $\delta = 1/2$. However, if $N$ is close to $2d$, or even equal to it, then more precise asymptotic statements are possible.

Theorem 2. If $N - 2d$ is bounded from above, then
$$\lim_{d\to\infty} \frac{\mathbb{E}\, f_k(C_N)}{\binom{N}{k}} = 1.$$
If $N = 2d$, then
$$\frac{\mathbb{E}\, f_k(C_N)}{\binom{N}{k}} = 1 - \frac{k}{\sqrt{\pi d}}\,(1 + o(1)) \qquad \text{as } d \to \infty.$$
Of course, the first part of Theorem 2 implies the first part of Theorem 1.
As another functional of a closed convex cone $C \subset \mathbb{R}^d$, we consider the solid angle $v_d(C)$. This is the normalized spherical Lebesgue measure of $C \cap S^{d-1}$. (We avoid the notation $V_d(C)$ used in [11], since $V_d$ is often used for the volume of a convex body. The reader is warned that what we denote here by $v_d(C)$ was denoted by $v_{d-1}(C \cap S^{d-1})$ in [14, Sect. 6.5].) More generally, we consider the Grassmann angles. For a closed convex cone $C \subset \mathbb{R}^d$ which is not a subspace, the $j$th Grassmann angle of $C$, for $j \in \{1, \dots, d-1\}$, is defined by
$$U_j(C) := \frac{1}{2}\, \nu_{d-j}\big(\{\mathcal{L} \in G(d, d-j) : C \cap \mathcal{L} \neq \{o\}\}\big).$$
The latter is the unique rotation invariant (Haar) probability measure on $G(d, d-j)$, the Grassmannian of $(d-j)$-dimensional linear subspaces of $\mathbb{R}^d$. Grassmann angles were introduced by Grünbaum [10], in a slightly different, though equivalent way.

Theorem 3. Suppose that $d/N \to \delta$ as $d \to \infty$, with a number $\delta \in [0, 1]$. Then the Cover-Efron cone $C_N$ satisfies
$$\lim_{d\to\infty} \mathbb{E}\, U_{d-k}(C_N) = \begin{cases} 0 & \text{if } \delta > 1/2, \\[1mm] \dfrac{1}{2}\left(1 - \left(\dfrac{\delta}{1-\delta}\right)^{k}\right) & \text{if } \delta < 1/2. \end{cases}$$
Again, the first part of this theorem has a stronger version, given by the first part of the following theorem.
Clearly, in Theorem 1 (and similarly in Theorem 3), the change when passing the threshold is not as abrupt as in the other examples mentioned above: below the threshold $\delta = 1/2$, the limit in question increases (respectively decreases) with the parameter to an extremal value; above the threshold, it remains constant. The situation changes if the number $k$, the dimension of the considered faces, also increases with the dimension; then a more abrupt phase transition is observed. Under a linearly coordinated growth, we find for $k$-faces the same threshold as established by Donoho and Tanner [7] in their investigation of random linear images of orthants. This is unexpected, since the random cones considered in [7] and here have different distributions.
We replace the convention made earlier by the following one.
Convention. In the following theorems and in Sections 6 and 7, $N = N(d) \ge d$ and $k = k(d) < d$ are integers depending on the dimension $d$, but we will omit the argument $d$ in the notation.
Theorem 5. Let $0 < \delta, \rho < 1$ be given. Let $k < d < N$ be integers such that
$$\frac{d}{N} \to \delta, \qquad \frac{k}{d} \to \rho \qquad \text{as } d \to \infty.$$
Then
$$\lim_{d\to\infty} \frac{\mathbb{E}\, f_k(C_N)}{\binom{N}{k}} = \begin{cases} 1 & \text{if } \rho < \rho_W(\delta), \\ 0 & \text{if } \rho > \rho_W(\delta), \end{cases}$$
where $\rho_W$ denotes the weak threshold function of Donoho and Tanner [7]. We note that the condition $\rho < \rho_W(\delta)$ in the first case implies that for large $d$ we have $N + k < 2d$.
Adapting an argument of Donoho and Tanner [7] to the present situation, we can also replace the convergence of an expectation in the first part of Theorem 5 by the convergence of a probability, at the cost of a smaller threshold.

Theorem 6. Let $0 < \delta, \rho < 1$ be given, where $\delta > 1/2$. Let $k < d < N$ be integers such that
$$\frac{d}{N} \to \delta, \qquad \frac{k}{d} \to \rho \qquad \text{as } d \to \infty.$$
There exists a positive number $\rho_S(\delta)$ such that
$$\lim_{d\to\infty} \mathbb{P}\left(f_k(C_N) = \binom{N}{k}\right) = 1 \qquad \text{if } \rho < \rho_S(\delta).$$

There is also a counterpart to Theorem 5 for Grassmann angles.
Theorem 7. Let $0 < \delta, \rho < 1$ be given. Let $k < d < N$ be integers such that
$$\frac{d}{N} \to \delta, \qquad \frac{k}{d} \to \rho \qquad \text{as } d \to \infty.$$
Then
$$\lim_{d\to\infty} \mathbb{E}\, U_{d-k}(C_N) = \begin{cases} 0 & \text{if } \rho < 1 - (2\delta)^{-1}, \\ 1/2 & \text{if } \rho > 1 - (2\delta)^{-1}. \end{cases}$$

After some preliminaries in the next section, we collect a number of auxiliary results about sums of binomial coefficients in Section 3. Then we prove the first two theorems in Section 4, Theorems 3 and 4 in Section 5, Theorems 5 and 6 in Section 6, and Theorem 7 in Section 7.

Preliminaries
First we recall two classical facts. For $n \in \mathbb{N}$, let $H_1, \dots, H_n \in G(d, d-1)$. Assume that these hyperplanes are in general position, that is, the intersection of any $m \le d$ of them has dimension $d - m$. Then the number of $d$-dimensional cones in the tessellation of $\mathbb{R}^d$ induced by these hyperplanes is given by
$$C(n, d) = 2 \sum_{i=0}^{d-1} \binom{n-1}{i}.$$
From this result of Steiner (in dimension three) and Schläfli, Wendel has deduced the following. If $X_1, \dots, X_n$ are i.i.d. random vectors in $\mathbb{R}^d$ with distribution φ (enjoying the properties mentioned above), then
$$P_{d,n} := \mathbb{P}\big(\mathrm{pos}\{X_1, \dots, X_n\} \neq \mathbb{R}^d\big) = \frac{C(n,d)}{2^n} = 2^{-n+1} \sum_{i=0}^{d-1} \binom{n-1}{i},$$
where $\mathbb{P}$ stands for probability and pos denotes the positive hull. For references and proofs, we refer to [14, Sect. 8.2.1]. Now we can write down the distribution of the Cover-Efron cone $C_n$, namely
$$\mathbb{P}(C_n \in B) = P_{d,n}^{-1}\, \mathbb{P}\big(\mathrm{pos}\{X_1, \dots, X_n\} \in B,\ \mathrm{pos}\{X_1, \dots, X_n\} \neq \mathbb{R}^d\big)$$
for Borel sets $B$ of closed convex cones. There is an equivalent representation of $C_n$. For this, we denote by φ* the image measure of φ under the mapping $x \mapsto x^{\perp}$ from $\mathbb{R}^d \setminus \{o\}$ to $G(d, d-1)$. Let $H_1, \dots, H_n$ be i.i.d. random hyperplanes with distribution φ*. They are almost surely in general position. The (φ*, n)-Schläfli cone $S_n$ is obtained by picking at random (with equal chances) one of the $d$-dimensional cones from the tessellation induced by $H_1, \dots, H_n$. Its distribution is given by
$$\mathbb{P}(S_n \in B) = \mathbb{E}\left[\frac{1}{C(n,d)} \sum_{S \in \mathcal{F}_d(H_1,\dots,H_n)} \mathbf{1}\{S \in B\}\right],$$
where $\mathcal{F}_d(H_1, \dots, H_n)$ is the set of $d$-cones in the tessellation induced by $H_1, \dots, H_n$. We have (see [11])
$$C_n \overset{\mathcal{D}}{=} S_n^{\circ},$$
where $S_n^{\circ}$ denotes the polar cone of $S_n$ and $\overset{\mathcal{D}}{=}$ means equality of distributions. For the expectations appearing in our theorems, explicit representations are available. The proofs of Theorems 1, 2 and 5 are based on the formula
$$\frac{\mathbb{E}\, f_k(C_n)}{\binom{n}{k}} = 2^k\, \frac{\sum_{i=0}^{d-k-1} \binom{n-k-1}{i}}{\sum_{i=0}^{d-1} \binom{n-1}{i}} \tag{1}$$
for $k \in \{0, \dots, d-1\}$ (see [4, Formula (3.3)] or [11, Formula (27)]). For the proofs of Theorems 3, 4 and 7, we use the explicit formula
$$\mathbb{E}\, U_j(C_n) = \frac{1}{2}\left(1 - \frac{C(n,j)}{C(n,d)}\right) \tag{2}$$
for $j \in \{1, \dots, d-1\}$ (see [11, (29)]). It is sometimes useful to write this in the form
$$\mathbb{E}\, U_{d-k}(C_n) = \frac{\sum_{i=d-k}^{d-1} \binom{n-1}{i}}{2 \sum_{i=0}^{d-1} \binom{n-1}{i}} \tag{3}$$
for $k \in \{1, \dots, d-1\}$ (where an empty sum is zero, by definition).
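The expected face numbers can be evaluated exactly with integer arithmetic, which makes the threshold of Theorem 1 visible already at moderate dimensions. The following sketch is ours and not part of the argument; it assumes the representation $\mathbb{E}\, f_k(C_N)/\binom{N}{k} = 2^k \sum_{i=0}^{d-k-1}\binom{N-k-1}{i} \big/ \sum_{i=0}^{d-1}\binom{N-1}{i}$, our reading of [4, Formula (3.3)], and the function name is ours.

```python
from fractions import Fraction
from math import comb

def face_ratio(d, N, k):
    """E f_k(C_N) / binom(N, k) for the Cover-Efron cone, evaluated from
    2^k * sum_{i<=d-k-1} C(N-k-1, i) / sum_{i<=d-1} C(N-1, i)."""
    num = sum(comb(N - k - 1, i) for i in range(d - k))
    den = sum(comb(N - 1, i) for i in range(d))
    return float(Fraction(2 ** k * num, den))

# d/N = 1/4 < 1/2: the quotient is near (2*delta)^k = 0.5 for k = 1
print(face_ratio(100, 400, 1))
# d/N = 2/3 > 1/2: the quotient is near 1
print(face_ratio(100, 150, 1))
```

For $N = d$ the quotient equals $1$ exactly, in accordance with the fact that $C_d$ is a.s. a simplicial cone.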

Auxiliary results on binomial coefficients
From the last expressions it is obvious that some information on binomial coefficients is required. First we note Stirling's formula
$$n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{\vartheta_n/(12n)}, \qquad \vartheta_n \in (0,1). \tag{4}$$
It implies, in particular, that
$$\binom{2m}{m} = \frac{4^m}{\sqrt{\pi m}}\,\big(1 + O(m^{-1})\big) \qquad \text{as } m \to \infty. \tag{5}$$
We need some information on the Wendel probabilities
$$P_{d,N} = 2^{-N+1} \sum_{i=0}^{d-1} \binom{N-1}{i}.$$
The following lemma considers the reciprocal of the quotient appearing in (1).
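The central binomial asymptotics $\binom{2m}{m} \sim 4^m/\sqrt{\pi m}$, a standard consequence of Stirling's formula, is the kind of estimate used repeatedly below. A quick numerical check (ours, not part of the text):

```python
import math

def central_binomial(m):
    return math.comb(2 * m, m)

def stirling_approx(m):
    """Leading term from Stirling's formula: 4^m / sqrt(pi * m)."""
    return 4.0 ** m / math.sqrt(math.pi * m)

for m in (10, 100, 400):
    # the quotient tends to 1; the relative error is of order 1/(8m)
    print(m, central_binomial(m) / stirling_approx(m))
```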
This and induction can be used to prove the corresponding bound for all orders. For $j \in \{1, \dots, k\}$ we have an analogous estimate; this gives the required inequality and thus the assertion.
Hence, for arbitrary $p \ge 1$ we may write the sum accordingly, with the convention that the last sum is zero if $2p > M$. The choice $M = N - k - 1$ and $p = d - k$ now gives the assertion.
The following lemma gives upper and lower bounds for the expressions appearing in (3). For the proof of the upper bound, we adjust and slightly refine the argument for Proposition 1(c) in [12], in the current framework. The improved lower bound in (8) will be crucial in the following.
If $m = n$, then (9) holds trivially, since $\binom{n}{n+1} = 0$. It also holds for $m = 0$. In the remaining cases, a direct computation gives the assertion. This completes the proof of (b).
We assume that $d$ is so large that $\alpha > 2$. From (10) we obtain an upper estimate for the limit superior. Lemma 3(b) provides the lower bound for each fixed $\ell_1 \ge 2$. Letting $\ell_1 \to \infty$, we obtain the matching estimate for the limit inferior. Together with (11), this completes the proof.
We state another simple lemma.

Lemma 5. For $m \in \mathbb{N}$,
$$\sum_{i=0}^{m} \binom{2m}{i} = 2^{2m-1} + \frac{1}{2}\binom{2m}{m} \qquad \text{and} \qquad \sum_{i=0}^{m} \binom{2m+1}{i} = 2^{2m}.$$

Proof. We use $\binom{n}{\ell} = \binom{n}{n-\ell}$. If $x := \sum_{i=0}^{m} \binom{2m}{i}$, then
$$x + \Big(x - \binom{2m}{m}\Big) = \sum_{i=0}^{m} \binom{2m}{i} + \sum_{i=m+1}^{2m} \binom{2m}{i} = 2^{2m},$$
which gives the first relation. If $y := \sum_{i=0}^{m} \binom{2m+1}{i}$, then $2y = 2^{2m+1}$ by the same symmetry, which gives the second relation.
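The two binomial-sum identities behind this lemma, $\sum_{i=0}^{m}\binom{2m}{i} = 2^{2m-1} + \frac12\binom{2m}{m}$ and $\sum_{i=0}^{m}\binom{2m+1}{i} = 2^{2m}$ (our reading of the proof), can be confirmed by direct computation:

```python
from math import comb

def sum_binom(n, m):
    """Partial binomial sum: sum_{i=0}^{m} binom(n, i)."""
    return sum(comb(n, i) for i in range(m + 1))

# sum_{i<=m} C(2m, i)   = 2^(2m-1) + C(2m, m)/2   (first relation)
# sum_{i<=m} C(2m+1, i) = 2^(2m)                  (second relation)
for m in range(1, 60):
    assert 2 * sum_binom(2 * m, m) == 2 ** (2 * m) + comb(2 * m, m)
    assert sum_binom(2 * m + 1, m) == 4 ** m
print("both identities hold for m = 1, ..., 59")
```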

Proofs of Theorems 1 and 2
Proof of Theorem 1. As already mentioned, the first part of Theorem 1 follows from the first part of Theorem 2, which will be proved below.
To prove the second part of Theorem 1, we assume that $d/N \to \delta$ as $d \to \infty$, where $0 \le \delta < 1/2$. We write (1) in the form (12), with normalized binomial sums in the numerator and denominator. Since also $(d-k)/(N-k) \to \delta$, we deduce from Lemma 4 that the normalized sums in the numerator and denominator of (12) tend to the same finite limit. It follows that
$$\lim_{d\to\infty} \frac{\mathbb{E}\, f_k(C_N)}{\binom{N}{k}} = (2\delta)^k.$$

Proof of Theorem 2. First we assume that $N - 2d$ is bounded from above. From (1) and Lemma 1 we obtain a representation in which a term $A$ appears; here $A$ depends on $d, N, k$ and is estimated by Lemma 2.
The argument is different according to whether N − 2d + k − 1 < 0 or not. Therefore, we consider separately the subsequence of all dimensions d for which N − 2d + k − 1 < 0 and the subsequence for which N − 2d + k − 1 ≥ 0. Without loss of generality, we will assume that one of the assumptions holds for the whole sequence.
Assume first that $N - 2d + k - 1 < 0$ for all $d$. We recall that in this case the sum in the last numerator of (14) is zero. We compare the different binomial coefficients appearing in (14) with a suitably chosen special one; estimating each of the resulting quotients, and arguing similarly for the remaining terms, this gives the required bounds.
As $d \to \infty$, the terms $P(d, k)$ and $Q(d, k)$ remain between two positive constants, and the remaining quotient of binomial coefficients is controlled by (5). It follows that $A \to 0$ as $d \to \infty$. By (13), this implies that $\lim_{d\to\infty} \mathbb{E}\, f_k(C_N)/\binom{N}{k} = 1$.

To prove the second part, suppose that $N = 2d$. Then (1) can be evaluated directly. We distinguish two cases. If $k$ is odd, say $k = 2\ell - 1$ with $\ell \in \mathbb{N}$, then the binomial sums are evaluated by Lemma 5, and Stirling's formula (4) yields the asymptotic behavior of the central binomial coefficients, so that (again using Lemma 5) the assertion follows. If $k$ is even, say $k = 2\ell$ with $\ell \in \mathbb{N}$, then Lemma 5 applies in the same way, and by Stirling's approximation (4) we again obtain the assertion. Thus, in both cases the asymptotic relation is proved.
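The critical case $N = 2d$ can also be explored numerically from formula (1) (the sketch and its function name are ours, not from the paper); in these computations the quotient still tends to $1$, but the deficit decays only at the rate $d^{-1/2}$:

```python
from fractions import Fraction
from math import comb, sqrt

def face_ratio(d, N, k):
    """E f_k(C_N) / binom(N, k), evaluated exactly from the binomial sums."""
    num = sum(comb(N - k - 1, i) for i in range(d - k))
    den = sum(comb(N - 1, i) for i in range(d))
    return float(Fraction(2 ** k * num, den))

# at the threshold N = 2d the quotient approaches 1 only slowly
for d in (50, 200, 800):
    r = face_ratio(d, 2 * d, 3)
    print(d, r, (1 - r) * sqrt(d))  # the rescaled deficit stays roughly constant
```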

Proofs of Theorems 3 and 4
Proof of Theorem 3. The first part of Theorem 3 follows from the first part of Theorem 4, which will be proved below.
For the second part of the proof, we assume that $d/N \to \delta$ as $d \to \infty$, with $0 \le \delta < 1/2$. We note that relation (2), written in the form (3), expresses $\mathbb{E}\, U_{d-k}(C_N)$ as a quotient of binomial sums. Together with $d/N \to \delta$ we have $(d-k+1)/N \to \delta$ and $(d+1)/N \to \delta$; hence Lemma 4 yields the limits of the normalized numerator and denominator. The assertion follows.
Proof of Theorem 4. First we assume that $N \le 2d + b$ with a fixed number $b$. We want to make use of (3). Writing $n := \lfloor N/2 \rfloor$, we obtain the representation (15), where an empty sum is zero.
The last sum in (15) has at most $n - d + 2 \le b + 2$ summands. Therefore, there is a constant $c$, independent of $d$, such that the numerator and the denominator admit the required estimates, where (5) was used.
To prove the second part, let $N = 2d$. Using Lemma 5, we evaluate the binomial sums explicitly. By Stirling's approximation (4), we obtain the asymptotic behavior of the resulting terms for $j \in \{0, \dots, k-1\}$. Hence we arrive at the asserted asymptotic relation, which completes the proof of the second part.

Faces of increasing dimension
In this section and the next, we allow also the number $k$ to depend on the dimension $d$. In the present section, we are interested in a phase transition for the expectation $\mathbb{E}\, f_k(C_N)/\binom{N}{k}$. It turns out that it appears at the same threshold as was observed earlier by Donoho and Tanner [7] for a different, but closely related, class of random polyhedral cones. These authors considered a real random $d \times N$ matrix $A$ of rank $d$, where $d < N$, and the nonnegative orthant
$$\mathbb{R}^N_+ := \{x = (x_1, \dots, x_N) \in \mathbb{R}^N : x_i \ge 0 \text{ for } i = 1, \dots, N\}.$$
Considering the column vectors of $A$ as random vectors in $\mathbb{R}^d$, the image $A\mathbb{R}^N_+$ is just the positive hull of these vectors. For a suitable distribution, the random cone $A\mathbb{R}^N_+$ is obtained in a similar way as the Cover-Efron cone, just by omitting the condition that the cone is different from $\mathbb{R}^d$. Imposing this condition leads, of course, to different distributions of the random cones. Comparing formula (13) of [7] with our formula (1), where the right-hand side of (1) can be written as a quotient of normalized binomial sums, we see that the conditioning results in an additional denominator in the expression for the expected number of $k$-faces, thus increasing this expectation. Therefore our estimates, though leading to the same threshold, require more effort.
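The effect of the additional denominator can be checked numerically. The sketch below is ours; it assumes that formula (13) of [7] gives the unconditioned expectation as $\mathbb{E}\, f_k(A\mathbb{R}^N_+)/\binom{N}{k} = 2^{-(N-k-1)}\sum_{i=0}^{d-k-1}\binom{N-k-1}{i}$ (our reading, stated here as an assumption), so that dividing by the Wendel probability $P_{d,N} \le 1$ yields the Cover-Efron expectation.

```python
from fractions import Fraction
from math import comb

def partial_sum(n, m):
    return sum(comb(n, i) for i in range(m + 1))

def ratio_projected_orthant(d, N, k):
    """Unconditioned model: E f_k(A R^N_+) / binom(N, k),
    our reading of [7, formula (13)] (an assumption)."""
    return Fraction(partial_sum(N - k - 1, d - k - 1), 2 ** (N - k - 1))

def ratio_cover_efron(d, N, k):
    """Cover-Efron model: the same quantity divided by the Wendel
    probability P_{d,N} = 2^{-N+1} * partial_sum(N-1, d-1) <= 1."""
    wendel = Fraction(partial_sum(N - 1, d - 1), 2 ** (N - 1))
    return ratio_projected_orthant(d, N, k) / wendel

d, N, k = 30, 75, 5
print(float(ratio_projected_orthant(d, N, k)), float(ratio_cover_efron(d, N, k)))
# conditioning on pos != R^d increases the expected number of k-faces
```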
First we assume that $0 < \rho < \rho_W(\delta)$. For sufficiently large $d$, we then have $0 < \rho_d < \rho_W(\delta_d)$, which implies that $\delta_d > 1/2$ and $N - 2d + k < 0$. We assume in the following that $d$ is so large that this holds.
We need only consider the case $a > 1/2$. For fixed $a > 1/2$, we consider $F_a(b) := F(a, b)$ for $0 < b \le 2 - a^{-1}$. Note that $2 - a^{-1} \in (0, 1)$ for $a > 1/2$. Differentiation yields the monotonicity behavior of $F_a$. Now we assume that $\rho > \rho_W(\delta)$. Then $\rho > 2 - \delta^{-1}$, irrespective of whether $\delta \ge 1/2$ or not. For sufficiently large $d$ (which we assume in the following), we then have $\rho_d > 2 - \delta_d^{-1}$, which implies $N - 2d + k > 0$. Thus, we can apply (10) to the normalized sum in the numerator of (12). Here the last denominator is positive, and it follows that the required estimate holds for all sufficiently large $d$. Here and below we denote by $c_i$ a positive constant that is independent of $d$.
In view of (12), we now determine the asymptotic behavior of the remaining terms. To treat them, we use the Stirling formula (4). This gives an expression in which $\vartheta$ is contained in a fixed interval independent of $d$, and we remark that $H(a, b) = g(a)\, F(a, b)$. Note that for $a, b \in (0, 1)$ we have $b > 2 - a^{-1}$ in the cases under consideration. Differentiation yields the behavior of $H(a, b)$.

by (21). This completes the proof also in the case $\rho > \rho_W(\delta)$.

Proof of Theorem 7
We use the representation (3) of $\mathbb{E}\, U_{d-k}(C_N)$ as a quotient of binomial sums and show the convergence of this quotient under the different assumptions.
First we deal with the second part of the proof and point out that our argument requires us to distinguish whether $\rho > \rho_W(\delta)$ or not. We begin with the case $\rho > \rho_W(\delta)$; then $\rho \ge 2 - \delta^{-1}$. Since (a fortiori) $\rho > 1 - (2\delta)^{-1}$, we have $N - 2d + 2k > 0$ for sufficiently large $d$, hence Lemma 3 yields the asymptotic behavior of the relevant binomial sums as $d \to \infty$, and the last denominator is positive. Hence, if $d$ is large enough (which is always assumed in the following), there are constants $C_1, C_2$, independent of $d$, bounding the quotient for $a \in (0, 1)$, $b \in [0, 1)$.
We use repeatedly the estimates just established. We note that still $N - 2d + 2k > 0$ for sufficiently large $d$, so that (24) can be applied. Here and below, $C_m$ denotes a positive constant independent of $d$.