Abstract
The convex hull of N independent random points chosen on the boundary of a simple polytope in \( {\mathbb {R}}^n\) is investigated. Asymptotic formulas for the expected number of vertices and facets, and for the expectation of the volume difference, are derived. This is one of the first investigations leading to rigorous results for random polytopes which are neither simple nor simplicial. The results contrast with existing results for points chosen in the interior of a convex set.
1 Introduction and Statement of Results
Let \(K \subset {\mathbb {R}}^n \) be a convex set of dimension n, \(n \ge 2\). Let \(N \in {\mathbb {N}}\) and choose N random points \(X_1, \dots , X_N\) uniformly distributed either in the interior of K or on the boundary \({\partial }K\) of K. Write [A] for the convex hull of a set A, and denote by \(P_N= [X_1, \dots , X_N]\) the convex hull of the random points. The expected number of vertices \({\mathbb {E}}f_0(P_N)\), the expected number of \((n-1)\)-dimensional faces \({\mathbb {E}}f_{n-1}(P_N)\), and the expectation of the volume difference \(V_n(K)-{\mathbb {E}}V_n (P_N)\) of K and \(P_N\) are of interest. Since explicit results for fixed N cannot be expected, one investigates the asymptotics as \(N \rightarrow \infty \).
If the vertices of the random polytopes are chosen from the interior of a convex set, there is a vast amount of literature. Investigations started with two famous articles by Rényi and Sulanke [26, 27] who obtained in the planar case the asymptotic behavior of the expected area \({\mathbb {E}}V_2(P_N)\) when the boundary of K is sufficiently smooth and when K is a polygon. In a series of papers these formulae were generalized to higher dimensions. In the case when the boundary of K is sufficiently smooth, we know by work of Wieacker [34], Schneider and Wieacker [30], Bárány [2], Schütt [32], and Böröczky et al. [8] that the volume difference behaves like
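The display omitted here should, judging from the references to (1.1) further below and the role of \(\Omega (K)\) and \(c_n\) in the following sentence, be the classical expansion (our reconstruction, assuming the normalization \(V_n(K)=1\)):

```latex
\[
  V_n(K)-{\mathbb {E}}V_n (P_N)
    = c_n\, \Omega (K)\, N^{-2/(n+1)}\, (1+o(1)),
  \qquad N \rightarrow \infty , \tag{1.1}
\]
```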
where \(P_N\) is the convex hull of uniform iid random points in the interior of K, \(\Omega (K)\) denotes the affine surface area of K and \(c_n\) is a constant that depends only on n. The generalization to all intrinsic volumes is due to Bárány [2, 3] and Reitzner [23]. The corresponding results for random points chosen in a polytope P are much more difficult. In a long and intricate proof Bárány and Buchta [4] settled the case of polytopes \(P \subset {\mathbb {R}}^n \),
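The omitted Bárány–Buchta expansion is presumably the following (a reconstruction; we state the constant in its commonly cited form, again under the normalization \(V_n(P)=1\)):

```latex
\[
  V_n(P)-{\mathbb {E}}V_n (P_N)
    = \frac{{\text {flag}}(P)}{(n+1)^{n-1}(n-1)!}\,
      \frac{(\ln N)^{n-1}}{N}\,(1+o(1)),
  \qquad N \rightarrow \infty ,
\]
```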
where \({\text {flag}}(P)\) is the number of flags of the polytope P. A flag is a sequence of i-dimensional faces \(F_i\) of P, \(i=0,\dots , n-1\), such that \(F_i \subset F_{i+1}\). The phenomenon that the expression should only depend on this combinatorial structure of the polytope had been discovered in connection with floating bodies by Schütt [31].
Due to Efron’s identity [11] the results on \({\mathbb {E}}V_n (P_N)\) can be used to determine the expected number of vertices of \(P_N\). The general results for the number of \(\ell \)-dimensional faces \(f_\ell (P_N)\) are due to Wieacker [34], Bárány and Buchta [4], and Reitzner [24]: if K is a smooth convex body and \(\ell \in \{0, \dots , n-1\}\), then
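The display omitted after "then" should, in view of the surrounding text, be of the following form (our reconstruction, assuming \(V_n(K)=1\), with \(c_{n,\ell }\) depending only on n and \(\ell \)):

```latex
\[
  {\mathbb {E}}f_\ell (P_N)
    = c_{n,\ell }\, \Omega (K)\, N^{(n-1)/(n+1)}\, (1+o(1)),
  \qquad N \rightarrow \infty ,
\]
```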
and if P is a polytope, then, with a different constant, but still denoted \(c_{n,\ell }\),
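The later references to (1.3), with \(c_{n-1,0}\) and \(c_{n-1,\ell }\) applied to the \((n-1)\)-dimensional facets, pin down the omitted display to the following form (reconstruction):

```latex
\[
  {\mathbb {E}}f_\ell (P_N)
    = c_{n,\ell }\, {\text {flag}}(P)\, (\ln N)^{n-1}\, (1+o(1)),
  \qquad N \rightarrow \infty . \tag{1.3}
\]
```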
Choosing random points from the interior of a convex body always produces a simplicial polytope with probability one. Yet applications of the above-mentioned results in computational geometry, in the average-case analysis of algorithms, and in optimization often necessarily deal with non-simplicial polytopes, and it became crucial to have analogous results for random polytopes without this very specific combinatorial structure. The only classical results for this question concern 0/1-polytopes in high dimensions [6, 10, 13, 14, 20], which have a highly interesting combinatorial structure, yet in a very specific setting. And very recently Newman [21] used a somewhat dual approach to construct general random polytopes from random polyhedra.
In view of the applications it is also of high interest to show that the face numbers of most realizations of random polytopes are close to the expected ones, and thus to prove variance estimates, central limit theorems and deviation inequalities. There has been serious progress in this direction in recent years, and we refer to the survey articles [18, 19, 25].
In all these results there is a general scheme: if the underlying convex sets are smooth then the number of faces and the volume difference behave asymptotically like powers of N, if the underlying sets are convex polytopes then logarithmic factors show up. Metric and combinatorial quantities only differ by a factor N.
In this paper we discuss the case where the random points are chosen from the boundary of a polytope P. In dimensions \(n\ge 3\), this produces random polytopes which are neither simple nor simplicial with high probability as \(N\rightarrow \infty \), although most of the facets are still simplices. Thus our results are a decisive step toward addressing the point mentioned above. The applications in computational geometry, in the average-case analysis of algorithms, and in optimization need formulae for the combinatorial structure of the involved random polytopes, and thus the questions of the number of facets and vertices are of interest.
From (1.3) it follows immediately that for random polytopes whose points are chosen from the boundary of a polytope the expected number of vertices is
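The omitted display is determined by the sentence that follows it and by the argument sketched below; it reads:

```latex
\[
  {\mathbb {E}}f_0(P_N)
    = c_{n-1,0}\, {\text {flag}}(P)\, (\ln N)^{n-2}\, (1+o(1)),
  \qquad N \rightarrow \infty ,
\]
```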
with \(c_{n-1,0}\) from (1.3), independent of P. Indeed, a chosen point is a vertex of a random polytope if and only if it is a vertex of the convex hull of all the random points chosen in the same facet of P. We define \(\ln _+ x=\max {\{0, \ln x\}}\). By (1.3) we get that the expected number of vertices equals
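Applying (1.3) facet by facet, the omitted display should read (reconstruction, with the notation explained in the next sentence):

```latex
\[
  {\mathbb {E}}f_0(P_N)
    = \sum _{F_i} c_{n-1,0}\, {\text {flag}}(F_i)\,
      {\mathbb {E}}\,(\ln _+ N_{i})^{n-2}\, (1+o(1)),
\]
```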
where we sum over all facets \(F_i\) of P and \(N_{i}\) is a binomially distributed random variable with parameters N and \(p_i={\lambda }_{n-1}(F_i)/\bigl (\sum _{F_j} {\lambda }_{n-1}(F_j)\bigr )\). Here \({\lambda }_{n-1}\) is the \((n-1)\)-dimensional Lebesgue measure. It remains to observe that \({\mathbb {E}}(\ln _+ N_{i})^{n-2} = (\ln N)^{n-2}(1+o(1))\) and \(\sum _{F_i} {\text {flag}}(F_i) = {\text {flag}}(P)\).
For our first main results we restrict our investigations to simple polytopes. We recall that a polytope in \({\mathbb {R}}^n\) is called simple if each of its vertices is contained in exactly n facets.
Theorem 1.1
Let \(n \ge 2\) and choose N uniform random points on the boundary of a simple polytope P in \({\mathbb {R}}^n\). For the expected number of facets of the random polytope \(P_N\), we have
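In view of the discussion following the theorem (in particular the rewriting via \({\text {flag}}(P)=n!\,f_0(P)\)), the omitted display should read:

```latex
\[
  {\mathbb {E}}f_{n-1}(P_N)
    = c_n\, {\text {flag}}(P)\, (\ln N)^{n-2}\, (1+o(1)),
  \qquad N \rightarrow \infty ,
\]
```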
with some \( c_{n} >0\) independent of P.
The case \(n=2\) is particularly simple: \({\mathbb {E}}f_{1}(P_N)\) is asymptotically, as \(N \rightarrow \infty \), equal to \(2 f_0( P) = 2 f_1(P)\). Note that for a simple polytope \({\text {flag}}(P) = n!f_0(P)\) and therefore Theorem 1.1 can also be written as
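Substituting \({\text {flag}}(P)=n!\,f_0(P)\) into the statement of Theorem 1.1, the rewritten form should be:

```latex
\[
  {\mathbb {E}}f_{n-1}(P_N)
    = c_n\, n!\, f_0(P)\, (\ln N)^{n-2}\, (1+o(1)),
  \qquad N \rightarrow \infty .
\]
```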
We conjecture this formula to hold for general polytopes. Yet this seems to be much more involved. We are showing here that for \(n \ge 3\) and for \(1\le \ell \le n-2\)
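Since the argument indicated next counts only the \(\ell \)-dimensional faces lying inside facets of P, a plausible reading of the omitted display is the lower bound (our reconstruction):

```latex
\[
  {\mathbb {E}}f_\ell (P_N)
    \ge c_{n-1,\ell }\, {\text {flag}}(P)\, (\ln N)^{n-2}\, (1+o(1)),
\]
```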
with \( c_{n-1,\ell }\) defined in (1.3). For this we count those \(\ell \)-dimensional faces which are contained in the facets \(F_i\) of P. Analogous to the case \(\ell =0\) we have
For the case \(\ell =n-1\) and \(n\ge 3\), we observe that each \((n-2)\)-dimensional face of a polytope is contained in precisely two \((n-1)\)-dimensional facets. Assume that not all random points are contained in the same facet of P which happens with probability tending to one as \(N\rightarrow \infty \). Then, each \((n-2)\)-dimensional face of \(P_N\) in a facet F of P is contained in at least one facet of \(P_N\) not contained in F, and thus gives rise to a facet of \(P_N\) which is the convex hull of this face and one additional point in another facet of P. This shows
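Since each such \((n-2)\)-dimensional face of \(P_N\) gives rise to a facet of \(P_N\), and each facet of \(P_N\) contains at most boundedly many of them, the display indicated by "This shows" is plausibly a lower bound of the form (reconstruction, generic constant):

```latex
\[
  {\mathbb {E}}f_{n-1}(P_N)
    \ge c_{n}\, {\text {flag}}(P)\, (\ln N)^{n-2}\, (1+o(1))
\]
```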
for general polytopes P in dimension \(n\ge 3\).
This sheds some light on the geometry of \(P_N\) if P is a simple polytope. The number of those facets of the random polytope that are not contained in the boundary of P is already of the same order as the number of facets that have one vertex in one facet of P and all the others in another one. In fact it follows from our proof that for simple polytopes the main contribution comes from those facets of \(P_N\) whose vertices lie on precisely two facets of P. We refer to the end of Sect. 3.5 for the details.
Surprisingly this is no longer true for the expectation of the volume difference. Here the main contribution comes from all facets of \(P_N\). And—to our great surprise—the volume difference contains no logarithmic factor. This is in sharp contrast to the results for random points inside convex sets and shows that the phenomenon mentioned above does not persist for more general random polytopes.
Theorem 1.2
For the expected volume difference between a simple polytope \(P \subset {\mathbb {R}}^n\) and the random polytope \(P_N\) with vertices chosen from the boundary of P, we have
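Given the constant \(c_{n,P}\) named after the theorem and the order \(N^{-n/(n-1)}\) discussed below it, the omitted display should read:

```latex
\[
  V_n(P)-{\mathbb {E}}V_n (P_N)
    = c_{n,P}\, N^{-n/(n-1)}\, (1+o(1)),
  \qquad N \rightarrow \infty ,
\]
```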
with some \(c_{n,P}>0\) depending on n and P.
Intuitively, the volume difference for a random polytope whose vertices are chosen from the boundary should be smaller than for one whose vertices are chosen from the body. Our result confirms this for N sufficiently large: the first is of the order \(N^{-n/(n-1)}\) compared to \(N^{-1}(\ln N)^{n-1}\). It is well known that for uniform random polytopes in the interior of a convex set the expected missed volume is, for N large, minimized by the ball [7, 16, 17], a smooth convex set, and—in the planar case—maximized by a triangle [7, 9, 15], or more generally by polytopes [5]. Hence one should also compare the result of Theorem 1.2 to the result of choosing random points on the boundary of a smooth convex set. This clearly leads to a random polytope with N vertices. And by results of Schütt and Werner [33], see also Reitzner [22], the expected volume difference is of order \(N^{-2/(n-1)}\), which is smaller than the order in (1.1), as is to be expected, but also surprisingly much bigger than the order \(N^{-n/(n-1)}\) occurring in Theorem 1.2.
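To make the planar case tangible: for \(n=2\) Theorem 1.2 predicts a missed area of order \(N^{-2}\) for points chosen on the boundary of a square. The following self-contained simulation (illustrative only, not part of the paper; convex hull via Andrew's monotone chain) estimates this quantity:

```python
import random

def convex_hull_area(pts):
    """Area of the convex hull of 2D points: monotone chain, then shoelace."""
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]     # counterclockwise hull vertices
    area = 0.0
    for i in range(len(hull)):         # shoelace formula
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1*y2 - x2*y1
    return abs(area) / 2.0

def random_boundary_point(rng):
    """Uniform random point on the boundary of the unit square [0,1]^2."""
    t = rng.random()
    side = rng.randrange(4)
    return [(t, 0.0), (1.0, t), (t, 1.0), (0.0, t)][side]

def mean_missed_area(N, trials, rng):
    """Monte Carlo estimate of E[1 - area(P_N)] for N boundary points."""
    total = 0.0
    for _ in range(trials):
        pts = [random_boundary_point(rng) for _ in range(N)]
        total += 1.0 - convex_hull_area(pts)
    return total / trials

rng = random.Random(0)
m100 = mean_missed_area(100, 50, rng)
m1000 = mean_missed_area(1000, 50, rng)
# increasing N tenfold should shrink the missed area by roughly a factor 100
```

Printing `m100 / m1000` should give a ratio in the rough vicinity of \(10^2\), consistent with the \(N^{-2}\) order; the constant of course depends on the square.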
We give a simple argument that shows that the volume difference between the cube and a random polytope is at least of the order \(N^{-n/(n-1)}\). We denote by \(e_1,\dots ,e_n\) the unit vectors of the standard orthonormal basis in \({\mathbb {R}}^n\). We consider the cube \(C^{n}=[0,1]^{n}\) and the subset of the boundary
where \(H_+(h,u)=\{x:\langle x,u \rangle \ge h \}\). These sets are the union of small simplices in the facets of the cube close to the vertices. Then
where \({\lambda }_{n-1}\) denotes the \((n-1)\)-dimensional Lebesgue measure, and the probability that none of the points \(x_{1},\dots ,x_{N}\) is chosen from this set equals
Therefore, with probability approximately 1/e the union of the simplices in (1.4) is not contained in the random polytope and the difference volume is greater than
which is in accordance with Theorem 1.2.
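For concreteness, the scaling behind this argument can be sketched as follows (our sketch; the exact constants in the paper's displays may differ). Cutting at distance h from a vertex of the cube, the simplex obtained in each adjacent facet and the cap at the vertex satisfy

```latex
\[
  {\lambda }_{n-1}\bigl(\text{simplex in one facet}\bigr)
     = \frac{h^{n-1}}{(n-1)!},
  \qquad
  V_n\bigl(\text{cap at one vertex}\bigr) = \frac{h^{n}}{n!},
\]
\[
  \text{so with } h = N^{-1/(n-1)}:\quad
  {\lambda }_{n-1}\bigl(\text{simplex}\bigr) = \frac{1}{(n-1)!\,N},
  \qquad
  V_n\bigl(\text{cap}\bigr) = \frac{1}{n!}\,N^{-n/(n-1)} .
\]
```

The total boundary measure of the union is then of order \(N^{-1}\), so the probability that no point hits it is of the form \((1-cN^{-1})^N \rightarrow e^{-c}\), while each missed cap contributes volume of order \(N^{-n/(n-1)}\).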
The paper is organized in the following way. The next section contains a tool from integral geometry and two asymptotic expansions. The proof of the asymptotic expansions is rather technical and is deferred to Appendices A, B, and C at the end of the paper. The third section is devoted to the proofs of Theorems 1.1 and 1.2. There, we first derive two formulas for the quantities appearing in Theorems 1.1 and 1.2 and combine them with the necessary asymptotic results. These results are proven in Sects. 3.5–3.7, using computations for the moments of the volume of the involved random simplices in Sect. 3.3.
Throughout this paper \(c_n, c_{{\varvec{m}}, P, n, \dots }, \ldots \) are generic constants depending on \({\varvec{m}}\), P, n, etc. whose precise values may differ from occurrence to occurrence.
2 Tools
We work in the Euclidean space \({\mathbb {R}}^n\) with inner product \(\langle \,{\cdot }\,,\,{\cdot }\,\rangle \) and norm \(\Vert \,{\cdot }\,\Vert \). We write \(H=H(h,u)\) for the hyperplane with unit normal vector \(u\in S^{n-1}\) and signed distance h to the origin, \(H(h,u)=\{x:\langle x,u\rangle =h\}\). We denote by \(H_-=H_-(h,u)=\{x:\langle x,u\rangle \le h\}\) and by \(H_+=H_+(h,u)=\{x:\langle x,u\rangle \ge h\}\) the two closed halfspaces bounded by the hyperplane H. For a set \(A\subset {\mathbb {R}}^n\) we write [A] for the convex hull of A.
In this paper we need a formula for n points distributed on the boundary of a given convex body. A theorem of Blaschke–Petkantschin type which deals with such a situation is a special case of Theorem 1 in Zähle [35]. We state it here only for an \((n-1)\)-dimensional set X, which is what we need in the following. Denote by \({{\mathcal {H}}}_{n-1}\) the \((n-1)\)-dimensional Hausdorff measure. A set X is \({{\mathcal {H}}}_{n-1}\)-rectifiable if it is the countable union of images of bounded subsets of \({\mathbb {R}}^{n-1}\) under some Lipschitz maps, up to a set of \({{\mathcal {H}}}_{n-1}\)-measure zero. Then, for \({{\mathcal {H}}}_{n-1}\)-almost all points \(x\in X\) there exists a (generalized) tangent hyperplane \(T_{x}\) at x to X consisting of all approximate tangent vectors v at x. Essentially, v is an approximate tangent vector at x if for each \(\varepsilon >0\) there exists \(x'\in X\) with \(\Vert x-x'\Vert \le \varepsilon \) and \(\alpha >0\) such that \(\Vert \alpha (x-x')-v\Vert \le \varepsilon \). For the precise definition we refer to the book by Schneider and Weil [29, p. 634], and for a general introduction to Hausdorff measure, the existence of tangent hyperplanes and facts on geometric measure theory we refer to Federer [12].
For two hyperplanes \(H_1, H_2\) let \( J(H_1, H_2)\) be the length of the projection of a unit interval in \(H_1 \cap (H_1 \cap H_2)^\perp \) onto \(H_2^\perp \), or \(J(H_1, H_2)=0\) if \(H_1\,{\parallel }\,H_2\). Observe that \(J(H(h_1,u_1),H(h_2, u_2))\) is just the length of the projection of \(u_2\) onto \(H_1\), which equals \(\sin \sphericalangle (u_1, u_2)\).
Note that the theorem of Zähle is stated for \({{\mathcal {H}}}_{n-1}\)-rectifiable sets, although Zähle remarks that the result is true under the weaker assumption of \({{\mathcal {H}}}_{n-1}\)-measurability.
Theorem 2.1
(Zähle [35]) Suppose \(X \subset {\mathbb {R}}^n\) is an \({{\mathcal {H}}}_{n-1}\)-rectifiable set and let \(g:({\mathbb {R}}^n)^{n-1} \rightarrow [0, \infty ) \) be a measurable function. Then there is a constant \({\beta }\) such that
with dx, du, dh denoting integration with respect to the Hausdorff measure on the respective range of integration, and where the hyperplane \(H(x_1,\dots , x_n)\) is the affine hull of \(x_1, \dots , x_n\).
In our case X is the boundary of a polytope P, and almost all \(x \in {\partial }P\) are in the relative interior of a facet of P where \(T_x\) is simply the supporting hyperplane. Thus \(J (T_{x_j},H(x_1, \dots , x_n))=0\) if all points are on the same facet of P. To exclude this from the range of integration, we write \(({\partial }P)^n_{\ne }\) for the set of all n-tuples \( x_1, \dots , x_n \in {\partial }P\) which are not all contained in the same facet. Also, ignoring sets of \({{\mathcal {H}}}_{n-1}\)-measure zero, we may assume that \( x_1, \dots , x_n \) are in general position when integrating on \(({\partial }P)^n_{\ne }\). And, again ignoring sets of measure zero, a hyperplane H(h, u) either meets \({\partial }P\) in at least n facets, or \({\partial }P\cap H(h,u)=\emptyset \). Thus Zähle’s result takes the following form, which is useful in our context.
Lemma 2.2
Let \(g(x_1, \dots , x_n)\) be a continuous function. Then there is a constant \({\beta }\) such that
with dx, du, dh denoting integration with respect to the Hausdorff measure on the respective range of integration.
Essential ingredients of our proof are two asymptotic expansions of the function
of \({\varvec{l}}=(l_{1},\dots ,l_{n})\) as \(N \rightarrow \infty \). Here, \(l_i \in {\mathbb {R}}\), \(l_i >0\) for all i and \(\alpha \in {\mathbb {R}}\), \(\alpha >0\). We need them for the computation of the expectations of the number of facets and of the expected volume difference. The proofs of these results are rather technical and lengthy, and can be found in Sects. A–C of the appendix.
Lemma 2.3
Assume that \(n \ge 2\), \(0<{\alpha }<1/n\), and that \({\varvec{l}}=(l_1,\dots ,l_n)\), \(L=\sum _{i=1}^nl_i\), with \(n-1>l_i>L/(n-1)-1\) for all \(i=1, \dots , n\). Then
as \(N \rightarrow \infty \), where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\).
Lemma 2.4
Assume that \(n \ge 2\), \(0<{\alpha }\le 1/(2n)\), and \({\varvec{l}}=(l_1, \dots , l_n)\), \(L=\sum _{i=1}^n l_i \), with \( n-1 > l_{i} \ge L/(n-1)-1\) for all \(i=1,\dots ,n\). If for at least three different indices i, j, k we have the strict inequality that \(l_i,l_j,l_k>L/(n-1) -1 \), then
as \(N \rightarrow \infty \), where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\). If for exactly two different indices i, j we have the strict inequality that \( l_{i}, l_j >L/(n-1)-1\) and equality \(l_k =L/(n-1)-1\) for all other \(l_k\), then
as \(N \rightarrow \infty \) with \(c_n>0\), where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\).
3 Proof of Theorems 1.1 and 1.2
3.1 The Number of Facets
Let \(P\subset {\mathbb {R}}^n\) be a simple polytope, and assume w.l.o.g. that the surface area satisfies \({\lambda }_{n-1} ( {\partial }P)=1\). As usual denote by \({{\mathcal {F}}}_k(P)\) the set of k-dimensional faces of P. Choose random points \(X_1,\dots ,X_N\) on the boundary of P with respect to Lebesgue measure, and denote by \(P_N=[X_1, \dots , X_N]\) their convex hull. In general \({\mathcal {F}}_{n-1}(P_N)\) consists of facets contained in facets of P and facets which are formed by random points on different facets of P. The latter facets are simplices, almost surely. The number of facets contained in \(\partial P\) is bounded by the number of facets of P and thus by a constant. Hence we assume in the following that \((X_1,\dots ,X_n)\in ({\partial }P)_{\ne }^n\). The convex hull of such points \(X_i\), \(i\in I=\{i_1,\dots ,i_n\}\), forms a facet \([X_{i_1},\dots ,X_{i_n}]\) of \(P_N\) if their affine hull does not intersect the convex hull of the remaining points \([\{X_{j}\}_{j \notin I}]\).
To simplify notation we set \(H= \text {aff}[X_1,\dots , X_n]\). If the points \(X_1, \dots , X_n\) form a facet then their affine hull is a supporting hyperplane \(H=H(h,u)\) of the random polytope \(P_N\). The unit vector u is the unit outer normal vector of this facet. Then the halfspace \(H_-=H_-(h,u)=\{x:\langle x,u\rangle \le h\}\) bounded by the hyperplane H contains the random polytope \(P_N\). The probability that \(X_{n+1},\dots , X_N\) are contained in \(H_- \) equals
thus
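The two omitted displays combine, in the spirit of the classical Rényi–Sulanke approach, into the following identity (our reconstruction; the O(1) accounts for the boundedly many facets of \(P_N\) contained in \({\partial }P\), and \(H=H(h,u)\) is the affine hull of \(X_1, \dots , X_n\)):

```latex
\[
  {\mathbb {E}}f_{n-1}(P_N)
    = \binom{N}{n}\,
      {\mathbb {E}}\Bigl[ {\mathbb {1}}\bigl((X_1,\dots ,X_n)\in ({\partial }P)^n_{\ne }\bigr)\,
      \bigl(1-{\lambda }_{n-1}({\partial }P\cap H_+)\bigr)^{N-n} \Bigr] + O(1),
\]
```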
Denote by H(P, u) a support hyperplane with normal u, supporting P in \(v \in {{\mathcal {F}}}_0(P)\). Then the normal cone \({{\mathcal {N}}}(v,P)\) is defined as (see e.g., [28]),
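The standard definition (cf. [28]) is:

```latex
\[
  {{\mathcal {N}}}(v,P)
    = \{\, u \in {\mathbb {R}}^n : \langle x - v, u \rangle \le 0
      \ \text {for all } x \in P \,\}.
\]
```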
With probability one the vector u is contained in the interior of one of the normal cones \({{\mathcal {N}}}(v,P)\) of the vertices \(v \in {{\mathcal {F}}}_0(P)\) of P because the boundaries of the normal cones have measure 0. Hence
Now we fix a vertex v. Since P is simple, v is contained in precisely n facets \(F_1, \dots , F_n\). There is an affine transformation \(A_v\) which maps v to the origin and the n edges \([v, v_i]\) containing v onto segments \([0, s_i e_i]\) on the coordinate axes. Here we have a free choice for the n parameters \(s_i>0\) which we will fix soon. We assume \(s_i \ge n\), \(i=1, \dots , n\), which implies
The image measure \({\lambda }_{n-1, A_v}\) of the Lebesgue measure \({\lambda }_{n-1 }\) on the facets of P under the affine transformation \(A_v\) is—up to a constant—again Lebesgue measure, where the constant may differ for different facets. We choose the n parameters \(s_i \ge n\) in such a way that the constant equals the same \(a_v>0\) for the n facets \(F_1, \dots , F_n\) containing v,
Note that the last condition means that for all such facets \(F_i\) and all measurable \(B \subset A_v F_i\),
Note also that \([0, 1]^{n-1} \subset A_v F_i\), \(i=1, \dots ,n\), and thus by (3.7),
Such a uniform bound on \(a_v\) is needed in Sect. 3.5 so that \(\alpha =a_v/{(n-1)!}\le 1/(2n)\).
To keep the notation short, we write \(d_{A_v} x = d {\lambda }_{n-1,A_v}(x)\) for integration with respect to this image measure of the Lebesgue measure on \({\partial }P\) under the map \(A_v\). Equation (3.7) shows that for \(x \in A_v F_i\),
for \(i=1,\dots ,n\), where dx is again shorthand for Lebesgue measure (or equivalently Hausdorff measure) on the respective facet \(F_i\). This yields
We use Zähle’s formula (2.5) which transforms the integral over the points \(x_i \in {\partial }P\) into an integral over all the hyperplanes \(H=H(h,u)\), \(u \in S^{n-1}\), \(h \in {\mathbb {R}}\), and integrals over \({\partial }P \cap H\):
We have \({{\mathcal {N}}}(0, A_v P)=-S_{+}^{n-1}\), where we denote \(S_+^{n-1}= S^{n-1} \cap {\mathbb {R}}_+^n\). The condition \({\mathbb {1}}(u \in {{\mathcal {N}}}(0, A_v P))\) will be taken into account in the range of integration in the form \( u \in - S_+^{n-1}\). Now we fix u and split the integral into two parts. In the first one \(H_- \) contains all the unit vectors \(e_i\). We write this condition in the form
Note that \(h \le 0\), since \(u \in - S_+^{n-1}\). The second part is over \( h \le \max _{i=1, \dots , n} u_i\). Thus the expected number of facets is
where we have used that in the first case \({\partial }A_v P \cap H_+ = {\partial }{\mathbb {R}}_+^n \cap H_+\). The substitution \(u \mapsto -u\) and \(h \mapsto -h\) yields the more convenient formula
with
The asymptotically dominating term will be \(I_v^1\). Using (3.7) and (3.9) for \(I_v^1\) we have
In Sect. 3.5 we will determine the precise asymptotics. Equation (3.28) will tell us that
with some constant \(c_n >0\) as \(N \rightarrow \infty \). The error term \(E_v^1\) can be estimated by using the fact that there are constants \({\overline{a}},{\underline{a}}\) such that
for all \(v \in {{\mathcal {F}}}_0(P)\) and all \(B \subset {\partial }P\). This shows
In Sect. 3.6 we will show that this is of order \(O(N^{-n}(\ln N)^{n-3})\), see (3.35). This implies
with some \(c_n >0\) which is Theorem 1.1.
3.2 The Volume Difference
We are interested in the expected volume difference
With probability one the random polytope \(P_{N}\) has the following property: For each facet \(F \in {{\mathcal {F}}}_{n-1}(P_N)\) that is not contained in a facet of P there exists a unique vertex \(v\in {{\mathcal {F}}}_0(P)\), such that the outer unit normal vector \(u_{F}\) of F is contained in the normal cone \({{\mathcal {N}}}(v,P)\), or equivalently the hyperplane H containing F is parallel to a supporting hyperplane to P at v. Clearly all the sets [F, v] are contained in \(P\setminus P_N\) and they have pairwise disjoint interiors. This is immediate in dimension two, and holds in arbitrary dimensions because it holds for all two-dimensional sections through P and \(P_N\) containing two vertices of P. We set
where \(D_N\) is the subset of \(P \setminus P_N\) not covered by one of the simplices [F, v] with \(u_F \in {{\mathcal {N}}}(v,P)\). We have
For the first summand we follow the approach already worked out in detail in the last section. The convex hull \([X_{i_1},\dots ,X_{i_n}]\) forms a facet of \(P_N\) if the affine hull of these points does not intersect the convex hull of the remaining points, and to simplify notation we set \(u=u_F\) and \(H(h,u)=H=\text {aff}[X_1,\dots ,X_n]\). The halfspace \(H_-\) contains the random polytope \(P_N\), and the probability that \(X_{n+1},\dots ,X_N\) are contained in \(H_-\) equals
Thus
We fix v and use the affine transformation \(A_v\) defined in the last section which maps v to the origin and the edges onto the coordinate axes. The transformation rule yields
The volume of the simplex \([\{A_v^{-1}x_i\}_{i=1,\dots ,n},0]\) is a constant \(d_v=\det A_v^{-1}\) times the volume of \([\{x_i\}_{i=1,\dots ,n},0]\) which equals \(n^{-1}\) times the height |h| times the volume of the base \([\{ x_i\}_{i=1, \dots , n}]\). By Zähle’s formula (2.5) we obtain
We split the integral into the two parts \(\max _{i=1,\dots ,n}u_i\le h\le 0\) and \(h\le \max _{i=1,\dots ,n}u_i\) and substitute \(u\mapsto -u\), \(h\mapsto -h\). The main part of the expected volume difference is
with
The asymptotically dominating term will be \(I_v^2\). In Sect. 3.5 we determine the precise asymptotics. Equation (3.27) will tell us that
with some constant \(c_n >0\) as \(N \rightarrow \infty \). The error term \(E_v^2\) can be estimated by
where \({\overline{a}}, {\underline{a}}\) are as in (3.11). In Sect. 3.6, (3.36), we show that
It remains to estimate
The following argument is proved in detail in the paper of Affentranger and Wieacker [1, p. 302] and will only be sketched here.
If \(y \in D_N\), then the normal cone \({{\mathcal {N}}}(y, [y,P_N])\) is not contained in any of the normal cones \({{\mathcal {N}}}(v,P)\) of P, \(v \in {{\mathcal {F}}}_0(P)\). Hence \({{\mathcal {N}}}(y,[y,P_N])\) meets at least two neighbouring normal cones \({{\mathcal {N}}}(v_1,P), {{\mathcal {N}}}(v_2,P)\), and thus the normal cone of the edge \(e=[v_1,v_2]\in {{\mathcal {F}}}_1(P)\). This implies that there exists a supporting hyperplane H of P with \(H \cap P=e\) with the property that the parallel hyperplane through y does not meet \(P_N\).
We apply an affine map \(A_e\), similar to the one defined above, which maps \(e=[v,w]\) to the unit interval \([0, e_n]\) and v to the origin, and such that the images of the other edges containing v contain the remaining unit intervals \([0,e_i]\). After applying this map the situation described above is the following: for \(x=(x_1,\dots ,x_n)=A_e y \in A_e D_N\) the supporting hyperplane \( A_e H =H(0, u)\) to \(A_e P\) intersects \(A_e P\) in the edge \([0, e_n]\). The parallel hyperplane \(H(\langle x,u\rangle ,u)\) contains x and cuts off from \(A_e P\) a cap disjoint from \(A_e P_N\). This cap contains the simplex
Hence if \(x \in A_e D_N\) then
The probability of this event is given by
with some \({\underline{a}} >0\). We denote by \(d_e\) the involved Jacobian of \(A_e^{-1}\) and by \({\overline{d}}\) the maximum of \(d_e\). This implies the estimate
assuming again that \(A_e P \subset [0,{\tau }]^n\) for all e. In Sect. 3.7 we prove that
Combining our results we get
which is Theorem 1.2.
3.3 Random Simplices in Simplices
For \(u \in S_+^{n-1}\), \(h \ge 0\), and \(H=H(h,u)\) we set
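From the description in the following sentence, the omitted definition (3.20) should read (reconstruction):

```latex
\[
  {{\mathcal {E}}}_k(h,u)
    = \int _{({\partial }{\mathbb {R}}_+^n \cap H)^n_{\ne }}
        {\lambda }_{n-1}\bigl([x_1,\dots ,x_n]\bigr)^k
        \prod _{j=1}^n J (T_{x_j},H(1,u))^{-1}\,
      dx_1 \cdots dx_n , \tag{3.20}
\]
```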
which is the (not normalized) k-th moment of the volume of a random simplex in \( {\mathbb {R}}_+^n \cap H(h,u)\) where the random points are chosen on the boundary of this simplex according to the weight functions \(J (T_{x_j},H(1,u))^{-1} \). Recall that for almost all \(x_j\), \(T_{x_j}\) is the supporting hyperplane at \(x_j\). In fact it is the coordinate hyperplane which contains \(x_j\).
Lemma 3.1
For \(k \ge 0\), there are constants \({{\mathcal {E}}}_{k,{\varvec{f}}}>0\) independent of u, such that
Proof
For a point \(x_j\) in the coordinate hyperplane \(e_i^\perp \), the weight function \(J (T_{x_j},H(1,u))^{-1} \) is the reciprocal of the sine of the angle between \(e_i\) and u. Thus
and hence is independent of h as long as u is fixed. In (3.20) we substitute \(x_j = h y_j\) with \(y_j \in H(1,u)\). The \((n-1)\)-dimensional volume is homogeneous of degree \(n-1\), hence
and since \(x_j\) are in the \((n-2)\)-dimensional planes \( {\partial }{\mathbb {R}}_+^{n} \cap H(h,u)\) we have \( dx_j = h^{n-2}dy_j\).
To evaluate \({{\mathcal {E}}}_k(1,u)\) we condition on the facets in \(e_1^\perp , \dots , e_n^\perp \) of \({\mathbb {R}}_+^n \cap H(1,u)\) from where the random points are chosen. Thus for
we condition on the event \(y_i\in e_{f_i}^\perp \). Because \(\{y_1,\dots ,y_n\}\in ({\partial }{\mathbb {R}}_+^n \cap H(1,u))_{\ne }^n\), which means that not all points are contained in the same facet, we may assume that \({\varvec{f}}\in \{1,\dots ,n\}^n_{\ne }\) where we remove all n-tuples of the form \((i,\dots , i)\) and denote the remaining set by \(\{1,\dots ,n\}^n_{\ne }\). Recalling (3.21), we obtain
A short computation shows that H(1, u) meets the coordinate axes in the points \((1/u_i)e_i\). We substitute \(z= Ay\), \(y = A^{-1} z\), where A is the affine map transforming \(H(1, {\textbf{1}}_n)\) into H(1, u). Here \({\textbf{1}}_n\) is the vector \((1, \dots , 1)^T\). The map is given by
The volume of the simplex \({\mathbb {R}}_+^n\cap H_-(1,u)\) is given by \((1/n!) \prod _{i=1}^n(1/u_i)\), thus the ‘base’ \({\mathbb {R}}_+^n \cap H(1,u)\) of this simplex has \((n-1)\)-volume \((1/(n-1)!)\prod _{i=1}^n(1/u_i)\). The regular simplex \({\mathbb {R}}_+^n\cap H(1,{\textbf{1}}_n)\) has \((n-1)\)-volume \(\sqrt{n}/(n-1)!\). Hence
The \((n-1)\)-volume of the simplex spanned by the origin and the facet of \({\partial }{\mathbb {R}}_+^n \cap H(1,u)\) in \(e_i^\perp \) is given by \((1/(n-1)!)\prod _{j\ne i}(1/u_j)\), its height by \(\Vert (u_1, \dots , u_{i-1}, u_{i+1}, \dots , u_n)\Vert ^{-1}=(1-u_i^2)^{-1/2}\), and hence the \((n-2)\)-volume of the facet of \({\partial }{\mathbb {R}}_+^n\cap H(1,u)\) in \(e_i^\perp \) is
Comparing this to the volume \(\sqrt{n-1}/(n-2)!\) of the facet of the simplex \({\partial }{\mathbb {R}}_+^n \cap H(1,{\textbf{1}}_n)\) in \(e_i^\perp \) which equals the volume of \({\mathbb {R}}_+^{n-1} \cap H (1,{\textbf{1}}_{n-1})\) shows that the Jacobian in \(e_{f_i}^\perp \) of the map A is
Combining these Jacobians with (3.23) we obtain
where \({{\mathcal {E}}}_{k,{\varvec{f}}}\) is independent of u. Together with (3.22) this finishes the proof. \(\square \)
In Sect. 3.6 we need an estimate for \({{\mathcal {E}}}_0 (h,u)\) in the case when there is a \( k \le n-1\) such that
see (3.29). Then H meets the coordinate axes in the points \((h/u_i)e_i\in [0,1]^n\) for \(i=1,\dots ,k\), and the other points of intersection are outside of \([0, 1]^n\). We set
Lemma 3.2
Let \( k \le n-1\) be given such that (3.25) holds. Then we have
with \(m_j = m_j({\varvec{f}})= \sum _{i=1}^{n} {\mathbb {1}}( f_i=j) \le n-1\) for \(j \le k\), and \(\sum _{i=1}^k m_i \le n\).
Proof
First we compare the intersection of H with the facet of \([0,1]^n \) in \(e_f^\perp \) to the intersection of H with the opposite facet of \([0,1]^n \) in \(e_f+e_f^\perp \), \(f=1,\dots ,n\). For \(i \ne f\) the hyperplane H meets the coordinate axes \(\text {lin}\{e_i\}\) in \(e_f^\perp \) in points \((h/{u_i})e_i\). It meets the shifted coordinate axes \(e_f+ \text {lin}\{ e_i\}\) in the opposite facet in the points \(e_f+((h-u_f)/u_i)e_i\). Because \(u \in S_+^{n-1}\) we have \(u_f\ge 0\). This shows that the facet of \(H \cap [0,1]^n\) in \(e_f^\perp \) contains the simplex
The opposite facet contains the smaller simplex
if \(f\ge k+1\) and \(h/u_f>1\); otherwise the intersection is empty, \((H \cap [0,1]^n)\cap (e_f+e_f^\perp )=\emptyset \). The simplex (3.26) has volume
for \(f \le k\), and
for \(f \ge k+1\), the volume in the opposite facet clearly is smaller for \(f \ge k+1\) or vanishes for \(f \le k\). We use \(J(T_{x},H(h,u))^{-1}=J(T_{x},H(1,u))^{-1}=( 1- u_f^2)^{-1/2} \) for \(x \in e_f^\perp \). For \(f \le k\) this proves
since \((n-k)^{1/2}\) is the diameter of the \((n-k)\)-dimensional unit cube. In this case there is no simplex in the opposite facet. Analogously, for \(f\ge k+1\)
Again we condition on the facets \({\partial }[0,1]^n \cap H(1,u)\) from where the random points are chosen. Because of the term \({\mathbb {1}}(|\{ x_1, \dots , x_n\} \cap e_f^\perp | \le n-1)\), it is impossible that all points are contained in one of the facets in \(e_f^\perp \). Thus for \(f\le k\) we have at most \(n-1\) points in \(([0,1]^n\cap H)\cap e_f^\perp \) and no point in \(([0,1]^n\cap H)\cap (e_f+e_f^\perp )\) because this set is empty. For \(f\ge k+1\) we have at most \(n-1\) points in \(([0,1]^n\cap H) \cap e_f^\perp \) and maybe some additional points in \(([0,1]^n \cap H) \cap (e_f +e_f^\perp )\).
Now for \(j=1, \dots , n\) there is some \(f_j\) such that \( x_j\) is either in the facet \([0,1]^n \cap e_{f_j}^\perp \) or in the opposite facet \([0,1]^n\cap (e_{f_j}+e_{f_j}^\perp )\). This defines a vector
and we take into account that for \(f \le k\),
This yields
with \(m_j = m_j({\varvec{f}}) = \sum _i {\mathbb {1}}( f_i=j) \le n-1\) for \(j \le k\), and \(\sum _1^k m_i \le \sum _1^n m_i =n\).
\(\square \)
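The volumes of the simplices appearing in this proof can be computed from the standard formula for a simplex spanned by points on the coordinate axes. As a hedged aside (this display is not part of the original text), for \(a_1,\dots ,a_m>0\),
$$\begin{aligned} {\lambda }_{m-1}\bigl ([a_1e_1, \dots , a_me_m]\bigr )=\frac{a_1\cdots a_m}{(m-1)!}\,\biggl (\,\sum _{i=1}^{m} a_i^{-2}\biggr )^{\!1/2}. \end{aligned}$$
For \(m=2\) this is the length \(\sqrt{a_1^2+a_2^2}\) of the segment from \(a_1e_1\) to \(a_2e_2\). Applied with \(a_i=h/u_i\), \(i\ne f\), it gives the volume of the simplex (3.26), provided (3.26) is the simplex \([\{(h/u_i)e_i: i\ne f\}]\) suggested by the proof.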
3.4 The Crucial Substitution
In the next sections we will end up with integrals over \(u\in S_+^{n-1}\), which we split into the part where \(h\ge \min _{1\le i\le n}u_i\) and the part where \(h\le \min _{1\le i\le n}u_i\). Then the following substitution is helpful.
Lemma 3.3
Let \(f:({\mathbb {R}}_+)^n \rightarrow {\mathbb {R}}\) be an integrable function such that both sides of the following equation are finite. Then
In particular we will make extensive use of the following version, where we use that the range of integration \(0 \le h \le u_i\) for all \(i=1, \dots , n\) is equivalent to \(t_i \in [0,1]\):
where \({\varepsilon }\in \{0,1\}\), N is the number of chosen points and the \(m_i\) are as in Lemma 3.2.
Proof
The goal is to rewrite the integration \(dh\,du\) over the set of hyperplanes as an integration with respect to \(t_1, \dots , t_n\), where these are the intersections of the hyperplane H(h, u) with the coordinate axes. First, the substitution \(r = h^{-1}\) leads to \(dh = -r^{-2}dr\). Then we pass from polar coordinates (r, u) to Cartesian coordinates: for \(h,r \in {\mathbb {R}}_+\) and \(u\in S^{n-1}_+\) this gives
Now we substitute \(x_i=1/t_i\) and take into account that
Thus finally we have
with \(h^{-1} u_i= r u_i=x_i=t_i^{-1}\). \(\square \)
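Collecting the substitutions in the proof (\(r=h^{-1}\), \(x=ru\), \(x_i=t_i^{-1}\)), the identity of Lemma 3.3 plausibly reads as follows; this is a reconstruction from the proof, since the display itself is not reproduced here:
$$\begin{aligned} \int _{S_+^{n-1}}\int _0^{\infty } f(h,u)\,dh\,du =\int _{(0,\infty )^n} f\bigl (h(t),u(t)\bigr )\, \biggl (\,\sum _{j=1}^n t_j^{-2}\biggr )^{\!-(n+1)/2}\, \prod _{i=1}^n t_i^{-2}\,dt, \end{aligned}$$
with \(h(t)=\bigl (\sum _j t_j^{-2}\bigr )^{-1/2}\) and \(u_i(t)=t_i^{-1}\bigl (\sum _j t_j^{-2}\bigr )^{-1/2}\), so that indeed \(t_i=h/u_i\).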
3.5 The Main Term
By (3.10) and (3.15), for \({\varepsilon }\in \{0,1\}\) we have to investigate
where we use the notation from (3.20). Recall that \(H=H(h,u)\) meets the coordinate axes in the points \(t_i e_i =(h/u_i)e_i\), and hence
We plug this and the result of Lemma 3.1 into \(I_v^{1+{\varepsilon }}\), set \(m_i= \sum _j {\mathbb {1}}(f_j=i)\), and obtain
Note that \(\sum m_i=n\). We use the substitution introduced in Lemma 3.3 and the notation from Lemmata 2.3 and 2.4; in particular we use the function \({{\mathcal {J}}}(\,{\cdot }\,)\) introduced in (2.6),
with \({\varvec{m}}=(m_1, \dots , m_n)\). In the case \({\varepsilon }=1\),
and Lemma 2.3 (with \(L=-n\)) implies with a constant \(c_{{\varvec{m}} , n}\) that depends on \({\varvec{m}}\) and n,
where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \(a_v\). Because \(c_{{\varvec{m}}, n}>0\), all terms with \({\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n \) contribute. Geometrically this means that the contribution for the volume difference comes from all facets of \(P_N\).
In the case \({\varepsilon }=0\) the asymptotic results from Lemma 2.4 (with \(L=0\)) give
where only those terms contribute for which \(f_i\) is concentrated on two values, and where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \(a_v\). We can apply Lemma 2.4 since (3.8) holds. Geometrically this implies that the main contribution comes from those facets of \(P_N\) whose vertices are on precisely two facets of P.
3.6 The Error of the First Kind
Denote by \(\text {diam}(K)\) the diameter of a convex set K. By (3.17) and (3.12), for the error term we have to estimate
for \({\varepsilon }=0,1\). Recall that the hyperplane \(H=H(h,u)\) meets the coordinate axes in the points \((h/{u_i})e_i\). Hence the halfspace \(H_-\) contains at least one unit vector since \(h \ge \min u_i\). Multiplying by a factor \(n \atopwithdelims ()k\), we may assume w.l.o.g. that it contains \(e_{k+1}, \dots , e_n\), and thus the points of intersection satisfy
with some \(0\le k\le n-1\). Then the convex hull of \((h/u_1)e_1, \dots ,(h/{u_k})e_k, e_{k+1}, \dots , e_n\) is contained in \(A_vP\cap H_-\) and we estimate
For \(k=0,1\) we have \({\lambda }_{n-1}( {\partial }A_v P \cap H_-)\ge 1/(n-1)!\) and thus \(E_{\varepsilon }= O\bigl (e^{-\underline{a}N/(n-1)!}\bigr )\), so serious estimates are only necessary in the cases \(2 \le k \le n-1\). Next we use that \(A_v P \subset [0, {\tau }]^n\) for all \(A_v\) and for some \({\tau }>0\). Thus
because H meets the first k coordinate axes in \(h/u_1, \dots ,h/u_k\). This gives
Now we deal with the inner integration with respect to \(x_1, \dots , x_n\). We want to replace \({\partial }A_v P \cap H\) by \({\partial }[0,1]^n \cap H\). The main point here is to estimate \(J(T_x, H)^{-1}\) for \(x \notin {\partial }{\mathbb {R}}_+^n\).
In general we have \(J (T_{x}, H ) \in [0,1]\) by definition. Recall that \(x \in H\). The critical equality \(J (T_{x},H) =0\) can occur only if \(T_x=H\), thus if H is a supporting hyperplane \(H(h_{A_v P}(u),u)\) or \(H(h_{A_v P}(-u),-u)\) to \({A_v P}\). Since \(u \in S_+^{n-1}\), in the second case we have \(h_{A_v P}(-u)=0\) and \(x \in {\partial }{\mathbb {R}}_+^n\).
To exclude the first case we assume that \({\lambda }_{n-1}({\partial }A_v P \cap H_-)\le 1/2\). In this case \(H_-\) cannot contain the point \(n^{-1/(n-1)} (1,\dots ,1)^T\) since otherwise \({\partial }A_v P \cap H_-\) would contain \({\partial }A_v P \cap n^{-1/(n-1)} [0,1]^n\) (recall that \(u \in S_+^{n-1}\)) and this part has surface area 1. Now we claim that there is a constant \(c_{A_v P} >0\) such that
If no such positive constant existed, then (by the compactness of \({\partial }A_v P\)) there would be a convergent sequence \((x_k,H_k)\rightarrow (x_0,H_0)\) with \( J (T_{x_k},H_k)\rightarrow 0\), where \(x_k \in H_k\) yields \(x_0\in H_0=H(h_0,u_0)\), \(u_0\in S_+^{n-1}\). But in this case also
and \(H_0\) is a supporting hyperplane at \(x_0\). Since \(u_0 \in S_+^{n-1}\) this leads to two cases. The first case is that
but \(x_{k}\) is not in \({\partial }A_v P \cap {\partial }{\mathbb {R}}^n_+\) and thus contained in some other facet of \(A_v P\). This implies \(J(T_{x_k},H_0)\nrightarrow 0\) as \(x_k \rightarrow x_0\), which is impossible. The second case is that \(x_0\) is contained in \(H_0=H(h_{A_v P}(u_0),u_0)\) where \((1,\dots , 1)^T \in H_{0-}\). Since this point is in \(A_v P\), but all \(H_{k -}\) do not contain \(n^{-1/(n-1)}(1, \dots , 1)^T\) this again contradicts the convergence \(H_k \rightarrow H_0\). Hence such a sequence \(x_k\) cannot exist, and (3.32) holds with some constant \(c_{A_v P}>0\). Thus from now on we assume that \({\lambda }_{n-1}({\partial }A_vP\cap H_-) \le 1/2\), take into account an error term of order
and obtain by (3.32) that
because \(A_v P\) is contained in the larger cube \([0,{\tau }]^n\).
In the following we denote by \(F_c\) the union of the facets of \(A_v P\) contained in \({\partial }{\mathbb {R}}_+^n\), and by \(F_{0}\) the union of the remaining facets which cover \({\partial }A_v P \setminus {\partial }{\mathbb {R}}^n_+\), \({\partial }A_v P = F_c \cup F_0 \). Then
Because of (3.34) and using \(F_c \subset {\partial }[0,{\tau }]^n \), we obtain the upper bounds
and
where \((F_c \cap H)^{n-k}_{\ne }=(F_c \cap H)^{n-k}\) for \(k \ge 1\). Combining these we get for \(k \ge 1\)
Observe that for the \((n-1)\)-dimensional polytope \([0,{\tau }]^n\cap H\) the area of each \((n-2)\)-dimensional facet is bounded by the sum of the areas of all other facets. Hence excluding a facet from the range of integration of the inner integral with respect to \(x_1\) can be compensated by a factor 2,
Since \(J (T_{x_j},H)\) is always at most one, this yields
Substituting \(x_i\) by \({\tau }x_i\) we obtain
with \({{\mathcal {E}}}_0^1 (\,{\cdot }\,)\) defined before Lemma 3.2. We make use of Lemma 3.2 for \({{\mathcal {E}}}_0^1\) and the error term (3.33): for \(m_i= \sum _j {\mathbb {1}}(f_j=i)\) we get
with \(m_i \le n-1\) for \(i \le k\), \(\sum _1^k m_i \le n\), and where \(c_{v,P}\) depends on \(n, c_{A_v P}, \max c_{n,k}\), and \({\tau }\). Next we use the substitution from Lemma 3.3:
The integrations with respect to \(t_{k+1}, \dots , t_n\) are immediate since the only terms occurring are \(t_i^{-2}\), and we have
with \(0 \le m_i \le n-1\). We set \(l_i=m_i- (n-k+1+{\varepsilon })\). To apply Lemma 2.4 in the case \({\varepsilon }=0\) we have to check that there are \(i \ne j\) with \(l_i, l_j >L/(k-1)-1\). Set \(M=\sum _1^k m_i \le n\). We have
and equality holds only if \(M=n\) and \(m_i=0\). But \(M=n \) and \(m_i \le n-1\) imply that there are at least two different indices i, j with \(m_i>0\). Hence we may apply Lemma 2.4 (and if \(m_i \ge 1\) for all i even Lemma 2.3) which tells us that the integral is bounded by
This finally proves
In the case \({\varepsilon }=1\) we have \(l_i=m_i- (n-k+2)\), and with \(M=\sum _1^k m_i \le n\) this gives
since \(M \le n\). Thus the integral is of order
and
3.7 The Error of the Second Kind
Here we have to evaluate the following estimate for \({\mathbb {E}}V_n(D_N)\) from (3.18),
The integration with respect to \(x_n\) is immediate. We may assume without loss of generality that for \(k = 0, \dots , n-1\) precisely k of the coordinates of x are bounded by 1,
For \(k=0,1\) we have
So we assume \(2 \le k \le n-1\). Then the volume of the boundary of the simplex is given by
Therefore we obtain
where we used Lemma 2.3 with \(l_i=k-2\), \(L= k(k-2)\), which implies \(l_i>L/(k-1)-1= k-2 -1/(k-1)\). As \( k \le n-1\) we obtain
References
Affentranger, F., Wieacker, J.A.: On the convex hull of uniform random points in a simple \(d\)-polytope. Discrete Comput. Geom. 6(4), 291–305 (1991)
Bárány, I.: Random polytopes in smooth convex bodies. Mathematika 39(1), 81–92 (1992)
Bárány, I.: Random polytopes in smooth convex bodies: corrigendum. Mathematika 51(1–2), 31 (2004)
Bárány, I., Buchta, Ch.: Random polytopes in a convex polytope, independence of shape, and concentration of vertices. Math. Ann. 297(3), 467–497 (1993)
Bárány, I., Larman, D.G.: Convex bodies, economic cap coverings, random polytopes. Mathematika 35(2), 274–291 (1988)
Bárány, I., Pór, A.: On \(0\)-\(1\) polytopes with many facets. Adv. Math. 161(2), 209–228 (2001)
Blaschke, W.: Vorlesungen über Differentialgeometrie und geometrische Grundlagen von Einsteins Relativitätstheorie. II. Affine Differentialgeometrie. Grundlehren der Mathematischen Wissenschaften, vol. 7. Springer, Berlin (1923)
Böröczky, K.J., Jr., Hoffmann, L.M., Hug, D.: Expectation of intrinsic volumes of random polytopes. Period. Math. Hungar. 57(2), 143–164 (2008)
Dalla, L., Larman, D.G.: Volumes of a random polytope in a convex set. In: Applied Geometry and Discrete Mathematics. DIMACS Ser. Discrete Math. Theoret. Comput. Sci., vol. 4, pp. 175–180. American Mathematical Society, Providence (1991)
Dyer, M.E., Füredi, Z., McDiarmid, C.: Volumes spanned by random points in the hypercube. Random Struct. Algorith. 3(1), 91–106 (1992)
Efron, B.: The convex hull of a random set of points. Biometrika 52, 331–343 (1965)
Federer, H.: Geometric Measure Theory. Grundlehren der Mathematischen Wissenschaften, vol. 153. Springer, New York (1969)
Gatzouras, D., Giannopoulos, A., Markoulakis, N.: Lower bound for the maximal number of facets of a 0/1 polytope. Discrete Comput. Geom. 34(2), 331–349 (2005)
Gatzouras, D., Giannopoulos, A., Markoulakis, N.: On the maximal number of facets of \(0/1\) polytopes. In: Geometric Aspects of Functional Analysis (Israel 2004–2005). Lecture Notes in Math., vol. 1910, pp. 117–125. Springer, Berlin (2007)
Giannopoulos, A.A.: On the mean value of the area of a random polygon in a plane convex body. Mathematika 39(2), 279–290 (1992)
Groemer, H.: On some mean values associated with a randomly selected simplex in a convex set. Pacific J. Math. 45, 525–533 (1973)
Groemer, H.: On the mean value of the volume of a random polytope in a convex set. Arch. Math. (Basel) 25, 86–90 (1974)
Hug, D.: Random polytopes. In: Stochastic Geometry, Spatial Statistics and Random Fields (Hirschegg 2009). Lecture Notes in Math., vol. 2068, pp. 205–238. Springer, Heidelberg (2013)
Hug, D., Reitzner, M.: Introduction to stochastic geometry. In: Stochastic Analysis for Poisson Point Processes. Bocconi Springer Math., vol. 7, pp. 145–184. Springer, Cham (2016)
Mendelson, S., Pajor, A., Rudelson, M.: The geometry of random \(\{-1,1\}\)-polytopes. Discrete Comput. Geom. 34(3), 365–379 (2005)
Newman, A.: Doubly random polytopes. Random Struct. Algorith. 61(2), 364–382 (2022)
Reitzner, M.: Random points on the boundary of smooth convex bodies. Trans. Am. Math. Soc. 354(6), 2243–2278 (2002)
Reitzner, M.: Stochastical approximation of smooth convex bodies. Mathematika 51(1–2), 11–29 (2004)
Reitzner, M.: The combinatorial structure of random polytopes. Adv. Math. 191(1), 178–208 (2005)
Reitzner, M.: Random polytopes. In: New Perspectives in Stochastic Geometry, pp. 45–76. Oxford University Press, Oxford (2010)
Rényi, A., Sulanke, R.: Über die konvexe Hülle von \(n\) zufällig gewählten Punkten. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 2, 75–84 (1963)
Rényi, A., Sulanke, R.: Über die konvexe Hülle von \(n\) zufällig gewählten Punkten. II. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 3, 138–147 (1964)
Schneider, R.: Convex Bodies: The Brunn–Minkowski Theory. Encyclopedia of Mathematics and its Applications, vol. 44. Cambridge University Press, Cambridge (1993)
Schneider, R., Weil, W.: Stochastic and Integral Geometry. Probability and its Applications (New York). Springer, Berlin (2008)
Schneider, R., Wieacker, J.A.: Random polytopes in a convex body. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 52(1), 69–73 (1980)
Schütt, C.: The convex floating body and polyhedral approximation. Israel J. Math. 73(1), 65–77 (1991)
Schütt, C.: Random polytopes and affine surface area. Math. Nachr. 170, 227–249 (1994)
Schütt, C., Werner, E.: Polytopes with vertices chosen randomly from the boundary of a convex body. In: Geometric Aspects of Functional Analysis (Israel 2001–2002). Lecture Notes in Math., vol. 1807, pp. 241–422. Springer, Berlin (2003)
Wieacker, J.A.: Einige Probleme der polyedrischen Approximation. Diploma work, Freiburg im Breisgau (1978)
Zähle, M.: A kinematic formula and moment measures of random sets. Math. Nachr. 149, 325–340 (1990)
Acknowledgements
The authors want to thank the referees for their careful reading of the manuscript and various suggestions for improvement.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Additional information
Editor in Charge: János Pach
Partially supported by NSF Grant DMS 1811146.
Appendices
Appendix: Some Asymptotic Expansions
Appendix A: A Useful Substitution
Let \({\mathcal {S}}_{n}\) be the set of all permutations of \(\{1,\dots ,n\}\). We start with the following observation.
Lemma A.1
Let \(f:(0,\infty )^{n}\rightarrow (0,\infty )^{n}\) be defined by
(i) The inverse function to f is \(g:(0,\infty )^{n}\rightarrow (0,\infty )^{n}\) given by
$$\begin{aligned} g_{i}(x)=\frac{1}{x_{i}}\left( \prod _{k=1}^{n}x_{k}\right) ^{\!1/(n-1)}. \end{aligned}$$
(ii) f maps the open set \((0,1)^{n}\) bijectively onto
$$\begin{aligned} \left\{ y\in (0,1)^{n}\;\Big |\;\forall \, i=1,\dots ,n: \prod _{k=1}^{n}y_{k}< y_{i}^{n-1}\right\} . \end{aligned}$$(A.37)
(iii) The set
$$\begin{aligned} \left\{ x\in (0,\beta )^{n}\;\Big |\;\forall \,i=1,\dots ,n: \prod _{k=1}^{n}x_{k}<\beta \cdot x_{i}^{n-1}\right\} \end{aligned}$$(A.38)
equals
$$\begin{aligned} \bigcup _{\pi \in {\mathcal {S}}_{n}}\{(x_{\pi (1)},\dots ,x_{\pi (n)})\mid x\in M\}, \end{aligned}$$(A.39)
where M is the set of all \(x\in (0,\infty )^{n}\) with \(x_{n}\le x_{n-1} \le \ldots \le x_{1}\) and
$$\begin{aligned} \begin{aligned} \beta \cdot x_{3}&>x_{1}\cdot x_{2}, \\ \beta \cdot x_{4}^{2}&>x_{1}\cdot x_{2}\cdot x_{3}, \\&\ \vdots&\\ \beta \cdot x_{n}^{n-2}&>x_{1}\cdots x_{n-1} . \end{aligned} \end{aligned}$$(A.40)
Proof
(i) For all \(j=1,\dots ,n\)
and for all \(i=1,\dots ,n\)
(ii) We show that f maps an element \(x \in (0,1)^{n}\) to an element of the set defined by (A.37). Indeed, for all \(x\in (0,1)\) we have
Moreover,
Since for all \(i=1,\dots ,n\) we have \(x_{i}\in (0,1)\) we get for all \(i=1,\dots ,n\)
Thus f maps \((0,1)^{n}\) into the set defined by (A.37). Now we show that g maps an element y of the set defined by (A.37) to an element of \((0,1)^{n}\). Since \(\prod _{k=1}^{n}y_{k}<y_{i}^{n-1}\),
(iii) We show that the set defined by (A.39) contains the set defined by (A.38). Let x be an element of the set defined by (A.38). There is a permutation \(\pi \) such that
and for all \(i=1,\dots ,n\),
We prove by induction that \((x_{\pi (1)},\dots ,x_{\pi (n)})\in M\). The last inequality of (A.41) follows from (A.38) for \(i=n\). Suppose now that we have verified the last k inequalities, i.e.,
By (A.40),
We substitute for \(x_{n}, \dots , x_{n-k+1}\) using the above inequalities already obtained.
or, equivalently,
as long as \(n-k-2 \ge 1\). Thus the last inequality is \(x_{3}>(1/\beta )\cdot x_{1}\cdot x_{2}\). Now we show that (A.39) is contained in (A.38). It is enough to show that M is a subset of (A.38). Let \(x\in M\). By the last inequality of (A.40)
Since \(x_{n}<x_{n-1}<\ldots <x_{1}\) we get for all \(i=1,\dots ,n\)
\(\square \)
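Part (i) of Lemma A.1 can be checked numerically. The defining formula for f is not reproduced in the text; the sketch below assumes \(f_i(t)=\prod _{k\ne i}t_k\), which is the map inverted by the stated g and is consistent with part (ii):

```python
import math
import random

def f(t):
    # Assumed map (reconstructed from part (i)): f_i(t) = prod_{k != i} t_k
    p = math.prod(t)
    return [p / ti for ti in t]

def g(x):
    # Inverse map from Lemma A.1(i): g_i(x) = x_i^{-1} * (prod_k x_k)^{1/(n-1)}
    n = len(x)
    p = math.prod(x) ** (1.0 / (n - 1))
    return [p / xi for xi in x]

random.seed(0)
n = 5
t = [random.uniform(0.1, 0.9) for _ in range(n)]
y = f(t)

# (i): g inverts f
assert all(abs(a - b) < 1e-12 for a, b in zip(g(y), t))

# (ii): y lies in the set (A.37): prod_k y_k < y_i^{n-1} for all i
py = math.prod(y)
assert all(py < yi ** (n - 1) for yi in y)
```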
Recall the definition (2.6):
Lemma A.2
Let \(\alpha >0\), and \({\varvec{l}}=(l_1, \dots , l_n)\), \(L=\sum _1^n l_i \), with \( l_i <n-1 \) for all \(i=1,\dots ,n\). Then we have
Proof
By the assumption \( l_i <n-1 \) for all \(i=1,\dots ,n\) the integral is finite. We use the transformation of Lemma A.1: For \(i,j=1,\dots ,n\),
The partial derivatives of t with respect to v are for \(i\ne j\)
and for \(i=j\)
This allows computation of the Jacobian
The remaining determinant can be calculated explicitly by use of the formula
which yields, with \(x_i=1-n\),
Applying the transformation theorem gives
In the last step we substitute \(v_{i}=s_{i}/({\alpha }(N-n))\) and obtain
\(\square \)
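The Jacobian computation can likewise be sanity-checked numerically. Under the same assumption \(f_i(t)=\prod _{k\ne i}t_k\) as above, the determinant calculation indicated in the proof (with the factor \(1-n\)) amounts, up to sign, to \(|\det Df(t)|=(n-1)\bigl (\prod _k t_k\bigr )^{n-2}\); the following sketch verifies this for small n with finite differences:

```python
import math
import itertools

def f(t):
    # Assumed map from Appendix A (reconstructed from part (i)): f_i(t) = prod_{k != i} t_k
    p = math.prod(t)
    return [p / ti for ti in t]

def num_jacobian(func, t, h=1e-6):
    # Central finite differences for the Jacobian matrix of func at t
    n = len(t)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        tp, tm = list(t), list(t)
        tp[j] += h
        tm[j] -= h
        fp, fm = func(tp), func(tm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def det(m):
    # Leibniz formula; adequate for the small dimensions used here
    n = len(m)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        term = float((-1) ** inv)
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

n = 4
t = [0.3, 0.6, 0.8, 1.2]
# Claimed closed form (up to sign): |det Df(t)| = (n - 1) * (prod_k t_k)^(n - 2)
expected = (n - 1) * math.prod(t) ** (n - 2)
assert abs(abs(det(num_jacobian(f, t))) - expected) < 1e-5
```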
Lemma A.3
Let \(\alpha >0\), and \({\varvec{l}}=(l_1, \dots , l_n)\), \(L=\sum _1^n l_i \), with \( l_i <n-1 \) for all \(i=1,\dots ,n\). Then we have
Proof
By the assumption \( l_i <n-1 \) for all \(i=1,\dots ,n\) the integrals are finite. The result follows from Lemmas A.1 and A.2. \(\square \)
Appendix B: Proof of Lemma 2.3
Our goal is to prove Lemma 2.3, which is the asymptotic formula
as \(N \rightarrow \infty \), where \(n \ge 2\), \(0<{\alpha }<1/n\), and \({\varvec{l}}=(l_1, \dots , l_n)\), \(L=\sum _1^n l_i \), with \( n-1> l_{i} >{L}/(n-1)- 1\).
Proof
By Lemma A.2 we have
Because \( e^{t} (1-t) \ge (1+t)(1-t)=(1-t^2) \) for \(|t| \le 1\) and \((1-t^2)^m \ge 1- m t^2\), we have
for \(|x| \le N-n\). This yields
Integrating the terms containing \(O\bigl ( N^{-1} \sum _{i=1}^n s_i^2 \bigr )\) yields incomplete Gamma functions times a term \(O\bigl (N^{-n+L/(n-1)-1}\bigr )\). The main term gives
where \(D_N\) is the set where at least one of the terms
equals zero. Thus \(D_N\) is covered by the unions of the sets
Integration on the set \(D_{N,k}\) gives
and the contribution of the sets \(D_{N,k}'\) gives
Hence the error term of the integration over \(D_N\) is of order
for \(l_i-L/(n-1) +1 >0\). \(\square \)
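The elementary inequality used at the start of the proof can be stated and checked as follows. This is a minimal numerical sketch of the two-sided bound \(e^{-x}(1-x^2/(N-n))\le (1-x/(N-n))^{N-n}\le e^{-x}\) for \(0\le x\le N-n\), which follows from \(e^t(1-t)\ge 1-t^2\) and \((1-t^2)^m\ge 1-mt^2\):

```python
import math

def bounds_hold(x, m):
    # Two-sided bound used in the proof (valid for 0 <= x <= m, here m plays
    # the role of N - n):
    #   e^{-x} * (1 - x^2 / m)  <=  (1 - x / m)^m  <=  e^{-x}
    val = (1.0 - x / m) ** m
    lower = math.exp(-x) * (1.0 - x * x / m)
    upper = math.exp(-x)
    return lower <= val + 1e-12 and val <= upper + 1e-12

m = 100
assert all(bounds_hold(x, m) for x in [0.0, 0.5, 1.0, 5.0, 10.0, 50.0, 99.0])
```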
Appendix C: Proof of Lemma 2.4
Our next goal is to prove Lemma 2.4 which deals with the case when some of the \(l_i\) are extremal in the sense that \(l_i=L/(n-1)-1\). If for at least three different indices i, j, k we have the strict inequality that \(l_i,l_j,l_k>L/(n-1)-1\), then we want to prove that
If for exactly two different indices i, j we have the strict inequality that \( l_{i}, l_j > {L}/({n-1}) -1 \) and equality \(l_k=L/({n-1}) -1 \) for all other \(l_k\), then we will show that
with some \(c_n > 0\). First we show that \({{\mathcal {J}}}({\varvec{l}})\) is at least of order \( N^{-n+L/({n-1})}(\ln N)^{n-2}\), and thus the strict inequality \(c_n>0\).
Lemma C.1
There is a constant \(c_{n,{\alpha }}>0\) such that for all \(n-1 > l_i \ge L/(n-1)-1\) we have
for N sufficiently large.
Proof
We use Lemma A.2. Since the integrand is positive, for N sufficiently large,
For those i with \(l_i - L/(n-1)= -1\),
and for those i with \(l_i -L/(n-1)>-1\),
both for N sufficiently large. \(\square \)
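The two estimates above presumably come from integrals of the type \(\int s^{\,l_i-L/(n-1)}e^{-s}\,ds\); as a hedged illustration (not part of the original text) of where the logarithmic factors originate,
$$\begin{aligned} \int _{1/N}^{1} s^{-1} e^{-s}\,ds \;\ge \; e^{-1}\int _{1/N}^{1} s^{-1}\,ds \;=\; e^{-1}\ln N, \end{aligned}$$
while for an exponent \(q=l_i-L/(n-1)>-1\) the corresponding integral is bounded below by a positive constant (a truncated Gamma integral) for N sufficiently large.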
To show that this is in fact the correct order, we introduce, in light of Lemma A.3, integrals of the type
Lemma C.2
Assume \({\alpha }\le 1/(2n)\), and that \({\varvec{q}}=(q_1, \dots , q_n) \in {\mathbb {R}}^n\), \(q_i \ge -1\), and there are \(i \ne j\) with \(q_i, q_j >-1\). Then there is a constant \(c_{{\varvec{q}},n} \ge 0\) independent of \({\alpha }\) such that
as \(N \rightarrow \infty \). More precisely, if \(q_1, q_2 > -1\) and \(q_3= \ldots =q_n=-1\), then
with some \(c_n \ge 0\). If there exists an \(m\ge 3\) with \(q_m > -1\), then \(c_{{\varvec{q}}, n}=0\) and
In other words, the only asymptotically contributing terms are those with \(q_1,q_2 > -1\) and \(q_{3}= \ldots = q_{n}=-1\). We will prove Lemma C.2 below; first we show that it implies Lemma 2.4.
Proof of Lemma 2.4
For \({\varvec{l}}=(l_1, \dots , l_n)\), \(L=\sum _1^n l_i \), with \(l_i < n-1\), Lemma A.3 tells us that
Assume that \( l_{ i} \ge L/(n-1)-1\) for all i, and there exists some tuple \( i \ne j\) with \(l_i,l_j>L/(n-1)-1\). If \( l_{\pi (1)} , l_{\pi (2)}>L/(n-1)-1\) and \( l_{\pi (i)} =L/(n-1)-1\) for all \(i \ge 3\), we have that
where the constant is non-negative. If \( l_{\pi (i)} > L/(n-1)-1\) for some \(i \ge 3\), then
Hence, depending on \({\varvec{l}}=(l_1, \dots , l_n)\), there are two cases.
If \( l_{k} =L/(n-1)-1\) for all except two indices \(i \ne j\), then there are \((n-2)!\) permutations which bring \(l_i, l_j\) into the first two places with order \((l_i, l_j)\), resp. \((l_j, l_i)\), and allow for an application of (C.46). All other permutations add terms of order \(O( (\ln N)^{n-3})\). Summing over these possibilities, we have
$$\begin{aligned} {{\mathcal {J}}}({\varvec{l}})&=\biggl (\frac{1}{{\alpha }(N-n)}\biggr )^{\!n-L/(n-1)}\frac{(n-2)!}{n-1}c_n \Gamma \biggl (l_i- \frac{L}{n-1} +1\biggr )\\&\quad \times \Gamma \biggl (l_j- \frac{L}{n-1} +1\biggr ) (\ln N)^{n-2} (1+O((\ln N)^{-1}))\\&=c_n {\alpha }^{-n+L/(n-1)} \Gamma \biggl (l_i- \frac{L}{n-1} +1\biggr )\Gamma \biggl (l_j- \frac{L}{n-1} +1\biggr )\\&\quad \times N^{-n+L/(n-1)}(\ln N)^{n-2}(1+O((\ln N)^{-1})). \end{aligned}$$
If instead there exist at least three different \( l_i,l_j,l_k>L/(n-1)-1\), then
$$\begin{aligned} {{\mathcal {J}}}({\varvec{l}}) = O\bigl (N^{-n+L/(n-1)}(\ln N)^{n-3}\bigr ). \end{aligned}$$
The implicit constants in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\). These estimates imply Lemma 2.4. \(\square \)
Proof of Lemma C.2
The proof of the lemma is divided into four parts. Lemmata C.3 and C.5 give the crucial estimates. Equation (C.46) when \(q_3= \ldots = q_n=-1\) follows from Lemma C.4,
We replace \((1- (s_1+s_2)/(N-n))^{N-n}\) by the exponential function using (B.45):
Clearly the integral in the last line is of order \(O(e^{-N} N^{q_1})\). Hence
(C.47) is proved in Lemma C.6. \(\square \)
Lemma C.3
Assume \(s_n \le \ldots \le s_1 \le {\alpha }(N-n) \), \(3 \le m \le n\), and \(s_{m-1} \le 1\). Then for \({\alpha }\le 1/(2n)\) and \(k \ge 0\) we have
Proof
We use the notation \(S:= \bigl (\sum _1^{m-1} s_i\bigr )/(N-n)\). By assumption \(\alpha \le 1/(2n)\). This implies
And for \(S \le 1/2\) and \(x\ge 0\) we have
The essential observation is that for \(a,b \in (0,1)\) and \(k\ge 0\),
and
Because of (C.48) and (C.50) we obtain
Again by (C.48) and (C.50), by the elementary inequality \( (1-y)^{k} \ge (1-ky) \) for \(y\le 1\) and by (C.49)
This proves the lemma. \(\square \)
With the help of this lemma we determine the asymptotic behavior of the dominant terms.
Lemma C.4
There is a constant \(c_n\), such that for \(q_1, q_2 > -1 \) and \({\alpha }\le 1/(2n)\) we have
Proof
We denote the range of integration of \({{\mathcal {S}}}({\varvec{q}})\) by I and dissect this along the sets
for \(k=3,\dots ,n\), and
The dominant term is the one with \(I \cap I_{3}=I \cap \{0 \le s_n \le \ldots \le s_3 \le 1\}\) as range of integration. Hence in the first part of the proof we assume \(s_i \le 1\) for \(i=3, \dots , n\). For \(m=2, \dots , n-1 \) we define
and claim that
where \(P_{n-m}\) is a homogeneous polynomial of degree \(n-m\) independent of \({\alpha }\), and the error term \(E_{n-m}\) is a function whose absolute value is bounded by a polynomial \(Q_{n-m-1}\) of degree at most \(n-m-1\) whose coefficients may depend on \({\alpha }\). To shorten the following formulae we suppress the arguments of \(P_{n-m}\), \(E_{n-m}\), and \(Q_{n-m-1}\) from now on.
We use induction on m, starting with \(m=n-1\) and going down to \(m=2\). For \(m=n-1\) and \( P_0=1\) in the first step we obtain \({{\mathcal {S}}}_1=(1-(N-n)^{-1}(\sum s_i))(P_1+E_1)\) by Lemma C.3 (with \(k=0\)) with
Assume that (C.51) holds. Then
with
where the coefficients \(p_{n-m-k}\) are polynomials in \(\ln (N-n), \ln s_1, \dots , \ln s_{m-1}\) of degree \(n-m-k\) independent of \({\alpha }\). And the absolute value of \(E_{n-m}\) is bounded by a polynomial \(Q_{n-m-1}\) of degree \(n-m-1\).
In Lemma C.3 both bounds are—up to the term \((1-S)^{N-n}\)—polynomials of degree \(k+1\) where the sum of the monomials of top degree \(k+1\) is denoted by \(H_{k+1}\) and is independent of \({\alpha }\). We have
Thus by Lemma C.3 the integration of \(P_{n-m}\) yields homogeneous polynomials \(H_{k+1}\) of degree \(k+1\), and hence a homogeneous polynomial \(P_{n-m+1}\) of degree \(n-m+1\):
independent of \({\alpha }\). The other terms of lower degree and the error term in Lemma C.3 produce error terms which can be bounded by a polynomial of degree k. Multiplied by the polynomials \(p_{n-m-k}\) from the representation (C.52) this yields an error term \(E'_{n-m+1}\) bounded by a polynomial \(Q'_{n-m}\) in \(\ln (N-n), \ln s_1, \dots , \ln s_{m-1}\) of order \(n-m\),
For the absolute value of the integration over \(E_{n-m}\) we obtain
where in the third line we used Lemma C.3 again which leads to a polynomial \(Q''_{n-m}\) of degree \(n-m\). Hence \(E_{n-m+1} := E'_{n-m+1}+E''_{n-m+1}\) is bounded by \(Q'_{n-m} + Q''_{n-m}\), a polynomial of degree \(n-m\). This proves (C.51). On \(I \cap I_3\) we take \(\min (s_2, 1)\) as the upper limit of integration with respect to \(s_3\). Thus we obtain on \(I \cap I_3\) that
It remains to consider the last two integrations with \(q_{1},q_{2} > -1\). The dominating term in Lemma C.4 is the term of \(P_{n-2}\) with \((\ln (N-n))^{n-2}\),
For the terms \((\ln (N-n))^k (\ln s_1)^{j_1} (\ln (\min (s_2,1)))^{j_2} \) with \(k=n-2-j_1-j_2<n-2\) we obtain
with \(k \le n-3\), since integrals of the form \(\int _0^{\infty } e^{-t}t^k|{\ln t}|^j\,dt\) are convergent. For the integral over the error term we get
Combining these estimates yields Lemma C.4 for \(s_3 \le 1\), i.e., on \(I \cap I_3\).
It remains to show that the integration over \(I_4\cup \dots \cup I_{n+1}\) is of order \(O((\ln N)^{n-3})\). Consider the range of integration \(I \cap I_k\), \(k \ge 4\), with
Then the integrations up to \(s_k\) just yield (C.51) and in the remaining integrations we have
since \({{\mathcal {S}}}_{n-k+1}\) is bounded by polynomials in \(\ln (N-n), \ln s_1, \dots , \ln s_{k-1}\) of order \(n-k+1\), and all occurring integrals
are finite. This finishes the proof of Lemma C.4. \(\square \)
For the second part of Lemma C.2, i.e., for (C.47), we investigate the terms with \( q_{m} > -1 \) for some \(m \in \{3, \dots ,n\} \). We start by restating the following simple analogue of Lemma C.3. We recall that \(S=\bigl (\sum _{i=1}^{m-1}s_i\bigr )/(N-n)\).
Lemma C.5
For \(q _m> -1\), \(k \ge 0\), \(s_{{m-1}} \le 1\), and \(s_{m-1} \le s_2\) we have
Proof
We use that the antiderivative of \(e^{-t} t^k\) is given by \(e^{-t} P_k(t)\) where \(P_k\) is a polynomial of degree k:
\(\square \)
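The antiderivative used in the proof of Lemma C.5 has the standard explicit form \(\int e^{-t}t^k\,dt=-e^{-t}\sum _{j=0}^k (k!/j!)\,t^j\); the polynomial \(P_k\) itself is not written out in the text, so the following numerical sketch checks this candidate form:

```python
import math

def antiderivative(t, k):
    # Candidate antiderivative of e^{-t} t^k:  -e^{-t} * sum_{j=0}^{k} (k!/j!) t^j
    # (this explicit P_k is an assumption; the text only states that some
    #  degree-k polynomial P_k works)
    s = sum(math.factorial(k) // math.factorial(j) * t ** j for j in range(k + 1))
    return -math.exp(-t) * s

def numeric_integral(a, b, k, steps=200000):
    # Midpoint rule for the integral of e^{-t} t^k over [a, b]
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        total += math.exp(-t) * t ** k
    return total * h

k, a, b = 3, 0.0, 5.0
assert abs((antiderivative(b, k) - antiderivative(a, k)) - numeric_integral(a, b, k)) < 1e-6
```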
Lemma C.6
Assume that \(q_n, \dots , q_{m+1}= -1 \), \(q_m >-1\) for some \(m \ge 3\), and \(q_{m-1}, \dots , q_1 \ge -1\). Then we have
Proof
We proceed precisely as in the previous proof of Lemma C.4. We denote the range of integration by I and dissect this set by
for \(k=3, \dots , n+1\). First we deal with the term with \(I \cap I_{3}=I \cap \{\ldots \le s_3 \le 1\}\) as range of integration, hence we assume \(s_i \le 1\) for \(i=3, \dots , n\). We define
We know from the proof of Lemma C.3 that
Because \(q_{m} > -1\), the next integration by Lemma C.5 yields as a bound again a polynomial of degree \(n-m\) in \(\ln (N-n),\ln s_1, \dots , \ln s_{m-1}\), times \(s_2^{q_m+1}\).
Proceeding in this way, each integration with respect to \(s_i\) with \(q_i=-1\) increases the degree of the polynomial bound by one, and each integration with respect to \(s_m\) with \(q_m>-1\) leads to a polynomial bound again of the same degree and multiplies this new polynomial bound by \(s_2^{q_m+1}\). Thus we obtain on \(I\cap I_3\) that
where we put \(q_{-}=\sum _{l=3}^n{\mathbb {1}}(q_l=-1) \) and \(q_{+}=\sum _{l=3}^n(q_l+1){\mathbb {1}}(q_l > -1)\). This now yields
as a bound for \({{\mathcal {S}}}(q_1, \dots , q_n)\). By our assumption \(q_{-} \le n-3\), \(q_{+} > 0\), thus
Hence \({{\mathcal {S}}}(q_1, \dots , q_n)\) is bounded by
on \(I \cap I_3\). On \(I \cap I_k\) with \(k \ge 4\), the term \({{\mathcal {S}}}(q_1, \dots , q_m)\) is by monotonicity (observe that \(s_k, \dots , s_n \le 1\)) bounded by \({{\mathcal {S}}}(q_1, \dots , q_{k-1}, -1, \dots , -1)\) which in turn is bounded by
as in the proof of Lemma C.4, (C.55). Hence the integration over \(I_4\cup I_5 \cup \ldots \) leads to a term of order \(O((\ln (N-n))^{n-3})\). This proves our lemma. \(\square \)
Reitzner, M., Schütt, C. & Werner, E.M. The Convex Hull of Random Points on the Boundary of a Simple Polytope. Discrete Comput Geom 69, 453–504 (2023). https://doi.org/10.1007/s00454-022-00460-2