1 Introduction and Statement of Results

Let \(K \subset {\mathbb {R}}^n \) be a convex set of dimension n, \(n \ge 2\). Let \(N \in {\mathbb {N}}\) and choose N random points \(X_1, \dots , X_N\) uniformly distributed either in the interior of K or on the boundary \({\partial }K\) of K. Write [A] for the convex hull of a set A, and denote by \(P_N= [X_1, \dots , X_N]\) the convex hull of the random points. The expected number of vertices \({\mathbb {E}}f_0(P_N)\), the expected number of \((n-1)\)-dimensional faces \({\mathbb {E}}f_{n-1}(P_N)\), and the expected volume difference \(V_n(K)-{\mathbb {E}}V_n (P_N)\) between K and \(P_N\) are of interest. Since explicit results for fixed N cannot be expected, one investigates the asymptotics as \(N \rightarrow \infty \).
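
For intuition, these quantities are easy to approximate by simulation. The following minimal Python sketch (our illustration, not taken from the literature; it assumes NumPy and SciPy, takes K to be the unit square, and the helper name hull_stats is ours) estimates \({\mathbb {E}}f_0(P_N)\) and \(V_n(K)-{\mathbb {E}}V_n(P_N)\) for points chosen in the interior of K.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def hull_stats(N, n=2, reps=200):
    """Monte Carlo estimates of E f_0(P_N) and V_n(K) - E V_n(P_N) for K = [0,1]^n."""
    f0, missed = 0.0, 0.0
    for _ in range(reps):
        hull = ConvexHull(rng.random((N, n)))   # P_N = [X_1, ..., X_N]
        f0 += len(hull.vertices)                # number of vertices f_0(P_N)
        missed += 1.0 - hull.volume             # V_n(K) - V_n(P_N); here V_n(K) = 1
    return f0 / reps, missed / reps

for N in (100, 1000, 10000):
    print(N, hull_stats(N))
```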

For random polytopes whose vertices are chosen from the interior of a convex set there is a vast literature. Investigations started with two famous articles by Rényi and Sulanke [26, 27] who obtained, in the planar case, the asymptotic behavior of the expected area \({\mathbb {E}}V_2(P_N)\) when the boundary of K is sufficiently smooth and when K is a polygon. In a series of papers these formulae were generalized to higher dimensions. In the case when the boundary of K is sufficiently smooth, we know by work of Wieacker [34], Schneider and Wieacker [30], Bárány [2], Schütt [32], and Böröczky et al. [8] that the volume difference behaves like

$$\begin{aligned} V_n(K)-{\mathbb {E}}V_n (P_N) = c_n \Omega (K) V_n(K)^{2/(n+1)} N^{-2/(n+1)}(1+o(1)), \end{aligned}$$
(1.1)

where \(P_N\) is the convex hull of uniform iid random points in the interior of K, \(\Omega (K)\) denotes the affine surface area of K and \(c_n\) is a constant that depends only on n. The generalization to all intrinsic volumes is due to Bárány [2, 3] and Reitzner [23]. The corresponding results for random points chosen in a polytope P are much more difficult. In a long and intricate proof Bárány and Buchta [4] settled the case of polytopes \(P \subset {\mathbb {R}}^n \),

$$\begin{aligned} V_n(P)-{\mathbb {E}}V_n(P_N) = \frac{ {\text {flag}}(P)}{(n+1)^{n-1} (n-1)!} N^{-1} (\ln N)^{n-1} (1+o(1)), \end{aligned}$$

where \({\text {flag}}(P)\) is the number of flags of the polytope P. A flag is a sequence of i-dimensional faces \(F_i\) of P, \(i=0,\dots , n-1\), such that \(F_i \subset F_{i+1}\). The phenomenon that the expression should only depend on this combinatorial structure of the polytope had been discovered in connection with floating bodies by Schütt [31].

Due to Efron’s identity [11] the results on \({\mathbb {E}}V_n (P_N)\) can be used to determine the expected number of vertices of \(P_N\). The general results for the number of \(\ell \)-dimensional faces \(f_\ell (P_N)\) are due to Wieacker [34], Bárány and Buchta [4], and Reitzner [24]: if K is a smooth convex body and \(\ell \in \{0, \dots , n-1\}\), then

$$\begin{aligned} {\mathbb {E}}f_\ell (P_N) = c_{n,\ell }\Omega (K)N^{ (n-1)/(n+1)}(1+o(1)), \end{aligned}$$
(1.2)

and if P is a polytope, then, with a different constant, but still denoted \(c_{n,\ell }\),

$$\begin{aligned} {\mathbb {E}} f_\ell (P_N) = c_{n,\ell }{\text {flag}}(P)(\ln N)^{ {n-1}} (1+o(1)) . \end{aligned}$$
(1.3)

Choosing random points from the interior of a convex body produces a simplicial polytope with probability one. Yet applications of the above-mentioned results in computational geometry, in the analysis of the average complexity of algorithms, and in optimization often deal with non-simplicial polytopes, so it became crucial to have analogous results for random polytopes without this very specific combinatorial structure. The only classical results in this direction concern 0/1-polytopes in high dimensions [6, 10, 13, 14, 20], which have a highly interesting combinatorial structure, yet in a very specific setting. Very recently, Newman [21] used a somewhat dual approach to construct general random polytopes from random polyhedra.

In view of the applications it is also of high interest to show that the face numbers of most realizations of random polytopes are close to the expected ones, and thus to prove variance estimates, central limit theorems and deviation inequalities. There has been serious progress in this direction in recent years, and we refer to the survey articles [18, 19, 25].

In all these results there is a general scheme: if the underlying convex sets are smooth, then the number of faces and the volume difference behave asymptotically like powers of N; if the underlying sets are convex polytopes, then logarithmic factors show up. Metric and combinatorial quantities only differ by a factor N.

In this paper we discuss the case where the random points are chosen from the boundary of a polytope P. In dimensions \(n\ge 3\) this produces random polytopes which, with high probability as \(N\rightarrow \infty \), are neither simple nor simplicial, although most of the facets are still simplices. Thus our results are a decisive step towards addressing the point mentioned above. Applications in computational geometry, the analysis of the average complexity of algorithms, and optimization need formulae for the combinatorial structure of the random polytopes involved, and thus the questions on the number of facets and vertices are of interest.

From (1.3) it follows immediately that for random polytopes whose points are chosen from the boundary of a polytope the expected number of vertices is

$$\begin{aligned} {\mathbb {E}}f_{0}(P_{N})=c_{n-1,0} {\text {flag}}(P)(\ln N)^{ {n-2}} (1+o(1)) \end{aligned}$$

with \(c_{n-1,0}\) from (1.3), independent of P. Indeed, a chosen point is a vertex of a random polytope if and only if it is a vertex of the convex hull of all the random points chosen in the same facet of P. We define \(\ln _+ x=\max {\{0, \ln x\}}\). By (1.3) we get that the expected number of vertices equals

$$\begin{aligned} c_{n-1,0} \sum _{F_i} {\text {flag}}(F_i){\mathbb {E}}(\ln _+ N_{i} )^{ {n-2}} (1+o(1)), \end{aligned}$$

where we sum over all facets \(F_i\) of P and \(N_{i}\) is a binomially distributed random variable with parameters N and \(p_i={\lambda }_{n-1}(F_i)/\bigl (\sum _{F_j} {\lambda }_{n-1}(F_j)\bigr )\). Here \({\lambda }_{n-1}\) is the \((n-1)\)-dimensional Lebesgue measure. It remains to observe that \({\mathbb {E}}(\ln _+ N_{i})^{n-2} = (\ln N)^{n-2}(1+o(1))\) and \(\sum _{F_i} {\text {flag}}(F_i) = {\text {flag}}(P)\).
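
As a quick numerical illustration of the last observation (ours, not part of the argument), the following sketch estimates \({\mathbb {E}}(\ln _+ N_{i})^{n-2}\) for a binomially distributed \(N_i\) and compares it with \((\ln N)^{n-2}\); the parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 4, 0.25   # illustrative dimension n and facet probability p_i
for N in (10**3, 10**5, 10**7):
    Ni = rng.binomial(N, p, size=200_000)   # N_i ~ Bin(N, p_i)
    ln_plus = np.log(np.maximum(Ni, 1))     # ln_+ N_i (equals 0 when N_i = 0)
    ratio = np.mean(ln_plus ** (n - 2)) / np.log(N) ** (n - 2)
    print(N, ratio)   # tends to 1, slowly, since ln(Np) = ln N + ln p
```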

For our first main results we restrict our investigations to simple polytopes. We recall that a polytope in \({\mathbb {R}}^n\) is called simple if each of its vertices is contained in exactly n facets.

Theorem 1.1

Let \(n \ge 2\) and choose N uniform random points on the boundary of a simple polytope P in \({\mathbb {R}}^n\). For the expected number of facets of the random polytope \(P_N\), we have

$$\begin{aligned}{\mathbb {E}}f_{n-1}(P_N)=c_{n} f_0( P) (\ln N)^{n-2} (1+O((\ln N)^{-1})),\end{aligned}$$

with some \( c_{n} >0\) independent of P.

The case \(n=2\) is particularly easy: \({\mathbb {E}}f_{1}(P_N)\) is asymptotically, as \(N \rightarrow \infty \), equal to \(2 f_0( P) = 2 f_1(P)\). Note that for a simple polytope \({\text {flag}}(P) = n!\,f_0(P)\), and therefore Theorem 1.1 can also be written as

$$\begin{aligned} {\mathbb {E}}f_{n-1}(P_N)=\frac{c_{n}}{n!} {\text {flag}}(P) (\ln N)^{n-2} (1+O((\ln N)^{-1})). \end{aligned}$$

We conjecture this formula to hold for general polytopes, yet this seems to be much more involved. We show here that for \(n \ge 3\) and for \(1\le \ell \le n-2\)

$$\begin{aligned} {\mathbb {E}}f_{\ell }(P_N)\ge c_{n-1,\ell } {\text {flag}}(P) (\ln N)^{n-2} (1+o(1)) \end{aligned}$$

with \( c_{n-1,\ell }\) defined in (1.3). For this we count those \(\ell \)-dimensional faces which are contained in the facets \(F_i\) of P. Analogously to the case \(\ell =0\) we have

$$\begin{aligned} {\mathbb {E}}f_{\ell }(P_N)&\ge \sum _{F_i} {\mathbb {E}}f_{\ell }(P_N \cap F_i)\\&=c_{n-1,\ell } \sum _{F_i} {\text {flag}}(F_i) {\mathbb {E}}(\ln _+ N_i)^{n-2}(1+o(1))\\ {}&= c_{n-1,\ell } {\text {flag}}(P) (\ln N)^{n-2} (1+o(1)). \end{aligned}$$

For the case \(\ell =n-1\) and \(n\ge 3\), we observe that each \((n-2)\)-dimensional face of a polytope is contained in precisely two \((n-1)\)-dimensional facets. Assume that not all random points are contained in the same facet of P, which happens with probability tending to one as \(N\rightarrow \infty \). Then each \((n-2)\)-dimensional face of \(P_N\) in a facet F of P is contained in at least one facet of \(P_N\) not contained in F, and thus gives rise to a facet of \(P_N\) which is the convex hull of this face and one additional point in another facet of P. This shows

$$\begin{aligned} {\mathbb {E}}f_{n-1}(P_N)&\ge \sum _{F_i} {\mathbb {E}}f_{n-2}(P_N \cap F_i)(1-o(1)) + o(1)\\&=c_{n-1,n-2} {\text {flag}}(P) (\ln N)^{n-2} (1+o(1))\end{aligned}$$

for general polytopes P in dimension \(n\ge 3\).

This sheds some light on the geometry of \(P_N\) if P is a simple polytope. The number of those facets of the random polytope that are not contained in the boundary of P is already of the same order as the number of facets that have one vertex in one facet of P and all the others in another one. In fact it follows from our proof that for simple polytopes the main contribution comes from those facets of \(P_N\) whose vertices lie on precisely two facets of P. We refer to the end of Sect. 3.5 for the details.

Surprisingly, this is no longer true for the expectation of the volume difference. Here the main contribution comes from all facets of \(P_N\), and, to our big surprise, the volume difference contains no logarithmic factor. This is in sharp contrast to the results for random points inside convex sets and shows that the phenomenon mentioned above does not hold for more general random polytopes.

Theorem 1.2

For the expected volume difference between a simple polytope \(P \subset {\mathbb {R}}^n\) and the random polytope \(P_N\) with vertices chosen from the boundary of P, we have

$$\begin{aligned} {\mathbb {E}}(V_n(P)-V_n(P_N)) =c_{n,P} N^{-n/(n-1)} \bigl (1+ O\bigl (N^{- 1/((n-1)(n-2))}\bigr )\bigr ) \end{aligned}$$

with some \(c_{n,P}>0\) depending on n and P.

Intuitively, the volume difference for a random polytope whose vertices are chosen from the boundary should be smaller than for one whose vertices are chosen from the body. Our result confirms this for N sufficiently large: the first is of the order \(N^{-n/(n-1)}\) compared to \(N^{-1}(\ln N)^{n-1}\). It is well known that for uniform random polytopes in the interior of a convex set the expected missed volume is, for large N, minimized by the ball [7, 16, 17], a smooth convex set, and, in the planar case, maximized by triangles [7, 9, 15], or more generally by polytopes [5]. Hence one should also compare the result of Theorem 1.2 to the result of choosing random points on the boundary of a smooth convex set. This clearly leads to a random polytope with N vertices. By results of Schütt and Werner [33], see also Reitzner [22], the expected volume difference is then of order \(N^{-2/(n-1)}\), which is smaller than the order in (1.1), as is to be expected, but also surprisingly much bigger than the order \(N^{-n/(n-1)} \) occurring in Theorem 1.2.
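
For instance, for \(n=3\) these orders read: \(N^{-2/(n+1)}=N^{-1/2}\) for points in the interior of a smooth body by (1.1), \(N^{-1}(\ln N)^{2}\) for points in the interior of a polytope, \(N^{-2/(n-1)}=N^{-1}\) for points on a smooth boundary, and \(N^{-n/(n-1)}=N^{-3/2}\) in Theorem 1.2.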

We give a simple argument that shows that the volume difference between the cube and a random polytope is at least of the order \(N^{-n/(n-1)}\). We denote by \(e_1,\dots ,e_n\) the unit vectors of the standard orthonormal basis in \({\mathbb {R}}^n\). We consider the cube \(C^{n}=[0,1]^{n}\) and the subset of the boundary

$$\begin{aligned} \begin{aligned}&\partial C^{n}\cap H_{-}\!\left( \biggl (\frac{(n-1)!}{n N}\biggr )^{\!1/(n-1)}\!,\,(1,\dots ,1)\right) \\&\quad =\,\bigcup _{i=1}^{n}\,\biggl (\frac{(n-1)!}{n N}\biggr )^{\!1/(n-1)}\![0,e_{1},\dots ,e_{i-1},e_{i+1},\dots ,e_{n}], \end{aligned} \end{aligned}$$
(1.4)

where \(H_-(h,u)=\{x:\langle x,u \rangle \le h \}\). This set is the union of small simplices in the facets of the cube close to the vertex at the origin. Then

$$\begin{aligned} \frac{1}{N}= \sum _{i=1}^{n} {\lambda }_{n-1}\biggl (\biggl (\frac{(n-1)!}{n N}\biggr )^{\!1/(n-1)}\![0,e_{1},\dots ,e_{i-1},e_{i+1},\dots ,e_{n}]\biggr ), \end{aligned}$$

where \({\lambda }_{n-1}\) denotes the \((n-1)\)-dimensional Lebesgue measure, and the probability that none of the points \(X_{1},\dots ,X_{N}\) is chosen from this set equals

$$\begin{aligned} \biggl (1-\frac{1}{N}\biggr )^{\!N}\sim \frac{1}{e}. \end{aligned}$$

Therefore, with probability approximately 1/e the union of the simplices in (1.4) is not contained in the random polytope and the volume difference is greater than

$$\begin{aligned} \frac{1}{n!}\biggl (\frac{(n-1)!}{n N}\biggr )^{\!n/(n-1)}\!\sim \,\frac{N^{-n/(n-1)}}{n}, \end{aligned}$$

which is in accordance with Theorem 1.2.
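
This scaling is also easy to probe numerically. The following Python sketch (our illustration, assuming SciPy's ConvexHull; not part of the argument) samples N uniform points on \({\partial }[0,1]^3\) and estimates the expected missed volume, which should decay roughly like \(N^{-3/2}\).

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)

def boundary_points(N, n):
    """N uniform points on the boundary of [0,1]^n (all 2n facets have equal area)."""
    X = rng.random((N, n))
    facet = rng.integers(2 * n, size=N)                      # choose a facet uniformly
    X[np.arange(N), facet % n] = (facet // n).astype(float)  # pin one coordinate to 0 or 1
    return X

n = 3
for N in (100, 400, 1600, 6400):
    missed = np.mean([1.0 - ConvexHull(boundary_points(N, n)).volume
                      for _ in range(50)])
    print(N, missed)   # expect decay roughly like N**(-n/(n-1)) = N**(-1.5)
```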

The paper is organized in the following way. The next section contains a tool from integral geometry and two asymptotic expansions. The proofs of the asymptotic expansions are rather technical and are shifted to the end of the paper, Appendices A, B, and C. The third section is devoted to the proofs of Theorems 1.1 and 1.2. There we first derive formulas for the quantities appearing in Theorems 1.1 and 1.2 and combine them with the necessary asymptotic results. These results are proven in Sects. 3.5–3.7, using computations for the moments of the volume of the involved random simplices from Sect. 3.3.

Throughout this paper \(c_n, c_{{\varvec{m}}, P, n, \dots }, \ldots \) are generic constants depending on \({\varvec{m}}\), P, n, etc. whose precise values may differ from occurrence to occurrence.

2 Tools

We work in the Euclidean space \({\mathbb {R}}^n\) with inner product \(\langle \,{\cdot }\,,\,{\cdot }\,\rangle \) and norm \(\Vert \,{\cdot }\,\Vert \). We write \(H=H(h,u)\) for the hyperplane with unit normal vector \(u\in S^{n-1}\) and signed distance h to the origin, \(H(h,u)=\{x:\langle x,u\rangle =h\}\). We denote by \(H_-=H_-(h,u)=\{x:\langle x,u\rangle \le h\}\) and by \(H_+=H_+(h,u)=\{x:\langle x,u\rangle \ge h\}\) the two closed halfspaces bounded by the hyperplane H. For a set \(A\subset {\mathbb {R}}^n\) we write [A] for the convex hull of A.

In this paper we need a formula for n points distributed on the boundary of a given convex body. A theorem of Blaschke–Petkantschin type which deals with such a situation is a special case of Theorem 1 in Zähle [35]. We state it here only for a \((n-1)\)-dimensional set X, which is what we need in the following. Denote by \({{\mathcal {H}}}_{n-1}\) the \((n-1)\)-dimensional Hausdorff measure. A set X is \({{\mathcal {H}}}_{n-1}\)-rectifiable if it is the countable union of images of bounded subsets of \({\mathbb {R}}^{n-1}\) under some Lipschitz maps, up to a set of \({{\mathcal {H}}}_{n-1}\)-measure zero. Then, for \({{\mathcal {H}}}_{n-1}\)-almost all points \(x\in X\) there exists a (generalized) tangent hyperplane \(T_{x}\) at x to X consisting of all approximate tangent vectors v at x. Essentially, v is an approximate tangent vector at x if for each \(\varepsilon >0\) there exists \(x'\in X\) with \(\Vert x-x'\Vert \le \varepsilon \) and \(\alpha >0\) such that \(\Vert \alpha (x-x')-v\Vert \le \varepsilon \). For the precise definition we refer to the book by Schneider and Weil [29, p. 634], and for a general introduction to Hausdorff measure, the existence of tangent hyperplanes and facts on geometric measure theory we refer to Federer [12].

For two hyperplanes \(H_1, H_2\) let \( J(H_1, H_2)\) be the length of the projection of a unit interval in \(H_1 \cap (H_1 \cap H_2)^\perp \) onto \(H_2^\perp \), or \(J(H_1, H_2)=0\) if \(H_1\,{\parallel }\,H_2\). Observe that \(J(H(h_1,u_1),H(h_2, u_2))\) is just the length of the projection of \(u_2\) onto \(H_1\), which equals \(\sin \sphericalangle (u_1, u_2)\).

Note that the theorem of Zähle is stated for \({{\mathcal {H}}}_{n-1}\)-rectifiable sets, although Zähle remarks that the result holds under the weaker assumption of \({{\mathcal {H}}}_{n-1}\)-measurability.

Theorem 2.1

(Zähle [35])      Suppose \(X \subset {\mathbb {R}}^n\) is an \({{\mathcal {H}}}_{n-1}\)-rectifiable set and let \(g:({\mathbb {R}}^n)^{n-1} \rightarrow [0, \infty ) \) be a measurable function. Then there is a constant \({\beta }\) such that

$$\begin{aligned}&\int \limits _{S^{n-1}}\int \limits _{{\mathbb {R}}}\int \limits _{X \cap H}\!\cdots \!\int \limits _{X \cap H}{\mathbb {1}}(x_1, \dots , x_{n} \text { in general position})\, g(x_1,\dots , x_n) \, d x_1 \ldots d x_n\, dh\, du\\&\quad =\frac{{\beta }}{(n-1)!}\int \limits _{X}\!\cdots \!\int \limits _{X}{\mathbb {1}}(x_1, \dots , x_n \text { in general position})\, g(x_1,\dots , x_n)\\&\qquad \times {\lambda }_{n-1} ( [x_1, \dots , x_n])^{-1}\prod \limits _{j=1}^n J (T_{x_j},H(x_1, \dots , x_n))\, d x_1 \ldots d x_n, \end{aligned}$$

with dxdudh denoting integration with respect to the Hausdorff measure on the respective range of integration, and where the hyperplane \(H(x_1,\dots , x_n)\) is the affine hull of \(x_1, \dots , x_n\).

In our case X is the boundary of a polytope P, and almost all \(x \in {\partial }P\) are in the relative interior of a facet of P, where \(T_x\) is simply the supporting hyperplane. Thus \(J (T_{x_j},H(x_1, \dots , x_n))=0\) if all points are on the same facet of P. To exclude this from the range of integration, we write \(({\partial }P)^n_{\ne }\) for the set of all n-tuples \( x_1, \dots , x_n \in {\partial }P\) which are not all contained in the same facet. Also, ignoring sets of \({{\mathcal {H}}}_{n-1}\)-measure zero, we may assume that \( x_1, \dots , x_n \) are in general position when integrating on \(({\partial }P)^n_{\ne }\). And, again ignoring sets of measure zero, a hyperplane H(h,u) meets \({\partial }P\) in at least n facets, or \({\partial }P\cap H(h,u)=\emptyset \). Thus Zähle's result takes the following form, which is useful in our context.

Lemma 2.2

Let \(g(x_1, \dots , x_n)\) be a continuous function. Then there is a constant \({\beta }\) such that

$$\begin{aligned}&\int \limits _{({\partial }P)^n_{\ne } }\!\!g(x_1,\dots , x_n) \, d x_1 \ldots d x_n\nonumber \\&\quad ={\beta }^{-1} (n-1)!\int \limits _{S^{n-1}} \! \int \limits _{{\mathbb {R}}}\!\int \limits _{({\partial }P \cap H)^n_{\ne }}\!\!g(x_1,\dots , x_n){\lambda }_{n-1} ( [x_1, \dots , x_n])\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \times \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\,dx_1\ldots d x_ndh du \nonumber \end{aligned}$$
(2.5)

with dxdudh denoting integration with respect to the Hausdorff measure on the respective range of integration.

Essential ingredients of our proof are two asymptotic expansions of the function

$$\begin{aligned} {{\mathcal {J}}}({\varvec{l}})=\int \limits _0^1\!\dots \!\int \limits _0^{1}\left( 1- {\alpha }\sum _{i=1}^{n} \prod _{j \ne i} t_j \right) ^{\!N-n}\prod _{i=1}^n t_i^{n-2-l_i}\,dt_1 \ldots dt_n \end{aligned}$$
(2.6)

of \({\varvec{l}}=(l_{1},\dots ,l_{n})\) as \(N \rightarrow \infty \). Here \(l_i \in {\mathbb {R}}\) with \(l_i <n-1\) for all i, which guarantees convergence of the integral, and \(\alpha \in {\mathbb {R}}\), \(\alpha >0\). We need these expansions for the computation of the expected number of facets and of the expected volume difference. Their proofs are rather technical and lengthy, and will be found in Appendices A–C.

Lemma 2.3

Assume that \(n \ge 2\), \(0<{\alpha }<1/n\), and that \({\varvec{l}}=(l_1,\dots ,l_n)\), \(L=\sum _{i=1}^nl_i\), with \(n-1>l_i>L/(n-1)-1\) for all \(i=1, \dots , n\). Then

$$\begin{aligned} {{\mathcal {J}}}({\varvec{l}})&={\alpha }^{-n +{L/(n-1)}}(n-1)^{-1}\prod _{i=1}^n \Gamma \biggl (l_i-\frac{L}{n-1}+1\biggr )\\&\quad \times N ^{-n+L/(n-1)}\bigl (1+ O\bigl (N^{-(\min _k l_k-L/(n-1)+1)/(n-2)} \bigr )\bigr ) \end{aligned}$$

as \(N \rightarrow \infty \), where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\).
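
Lemma 2.3 is easy to test numerically. The following sketch (ours, purely illustrative) estimates \({{\mathcal {J}}}({\varvec{l}})\) for \(n=2\), \(l_1=l_2=0.3\), \({\alpha }=1/4\) by importance sampling, drawing each \(t_i\) with density \((1-l_i)t_i^{-l_i}\) on [0, 1], and compares it with the leading term predicted by the lemma.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

alpha, l = 0.25, 0.3      # n = 2 and l1 = l2 = l, hence L = 2 * l
L, M = 2 * 0.3, 10**6
for N in (10**2, 10**3, 10**4):
    # draw each t_i with density (1 - l) * t**(-l) on [0, 1] via inverse CDF
    t = rng.random((M, 2)) ** (1.0 / (1.0 - l))
    # J(l) = E[(1 - alpha * (t1 + t2))**(N - 2)] / (1 - l)**2 under this sampling
    J = np.mean((1.0 - alpha * t.sum(axis=1)) ** (N - 2)) / (1.0 - l) ** 2
    lead = alpha ** (-2 + L) * gamma(l - L + 1) ** 2 * N ** (-2 + L)
    print(N, J / lead)    # ratio drifts toward 1; Monte Carlo noise grows with N
```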

Lemma 2.4

Assume that \(n \ge 2\), \(0<{\alpha }\le 1/(2n)\), and \({\varvec{l}}=(l_1, \dots , l_n)\), \(L=\sum _{i=1}^n l_i \), with \( n-1 > l_{i} \ge L/(n-1)-1\) for all \(i=1,\dots ,n\). If for at least three different indices i, j, k we have the strict inequality \(l_i,l_j,l_k>L/(n-1) -1 \), then

$$\begin{aligned} {{\mathcal {J}}}({\varvec{l}})=O\bigl ( N^{-n+L/(n-1)} (\ln N)^{n-3}\bigr ) \end{aligned}$$

as \(N \rightarrow \infty \), where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\). If for exactly two different indices i, j we have the strict inequality \( l_{i}, l_j >L/(n-1)-1\) and equality \(l_k =L/(n-1)-1\) for all other indices k, then

$$\begin{aligned} {{\mathcal {J}}}({\varvec{l}})&=c_n {\alpha }^{-n+L/(n-1)}\Gamma \biggl (l_i- \frac{L}{n-1} +1\biggr )\Gamma \biggl (l_j- \frac{L}{n-1} +1\biggr )\\&\quad \times N^{-n+L/(n-1)} (\ln N)^{n-2} \bigl (1+O((\ln N)^{-1})\bigr ) \end{aligned}$$

as \(N \rightarrow \infty \) with \(c_n>0\), where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \({\alpha }\).

3 Proof of Theorems 1.1 and 1.2

3.1 The Number of Facets

Let \(P\subset {\mathbb {R}}^n\) be a simple polytope, and assume w.l.o.g. that the surface area satisfies \({\lambda }_{n-1} ( {\partial }P)=1\). As usual denote by \({{\mathcal {F}}}_k(P)\) the set of k-dimensional faces of P. Choose random points \(X_1,\dots ,X_N\) on the boundary of P with respect to Lebesgue measure, and denote by \(P_N=[X_1, \dots , X_N]\) their convex hull. In general \({\mathcal {F}}_{n-1}(P_N)\) consists of facets contained in facets of P and facets which are formed by random points on different facets of P. The latter facets are simplices, almost surely. The number of facets contained in \(\partial P\) is bounded by the number of facets of P and thus by a constant. Hence we assume in the following that \((X_1,\dots ,X_n)\in ({\partial }P)_{\ne }^n\). The convex hull of such points \(X_i\), \(i\in I=\{i_1,\dots ,i_n\}\), forms a facet \([X_{i_1},\dots ,X_{i_n}]\) of \(P_N\) if their affine hull does not intersect the convex hull of the remaining points \([\{X_{j}\}_{j \notin I}]\).

To simplify notation we set \(H= \text {aff}[X_1,\dots , X_n]\). If the points \(X_1, \dots , X_n\) form a facet then their affine hull is a supporting hyperplane \(H=H(h,u)\) of the random polytope \(P_N\). The unit vector u is the unit outer normal vector of this facet. Then the halfspace \(H_-=H_-(h,u)=\{x:\langle x,u\rangle \le h\}\) bounded by the hyperplane H contains the random polytope \(P_N\). The probability that \(X_{n+1},\dots , X_N\) are contained in \(H_- \) equals

$$\begin{aligned} {\lambda }_{n-1}({\partial }P \cap H_- )^{N-n} = (1-{\lambda }_{n-1}( {\partial }P \cap H_+ ))^{N-n} , \end{aligned}$$

thus

$$\begin{aligned} {\mathbb {E}}f_{n-1}(P_N)=\left( {\begin{array}{c}N\\ n\end{array}}\right) {\mathbb {E}}\bigl ( (1- {\lambda }_{n-1}( {\partial }P \cap H_+ ))^{N-n}\,{\mathbb {1}}(\{ X_i \}_{i \le n} \in ({\partial }P)^n_{\ne })\bigr ) + O(1). \end{aligned}$$

Denote by H(P, u) a supporting hyperplane with normal u, supporting P in \(v \in {{\mathcal {F}}}_0(P)\). Then the normal cone \({{\mathcal {N}}}(v,P)\) is defined as (see e.g. [28])

$$\begin{aligned} {{\mathcal {N}}}(v,P) = \{u \in {\mathbb {R}}^n \setminus \{0\}: v \in H(P,u) \} \cup \{0\}. \end{aligned}$$

With probability one the vector u is contained in the interior of one of the normal cones \({{\mathcal {N}}}(v,P)\) of the vertices \(v \in {{\mathcal {F}}}_0(P)\) of P because the boundaries of the normal cones have measure 0. Hence

$$\begin{aligned}&{\mathbb {E}}f_{n-1}(P_N)\\&\quad \,=\!\sum _{v \in {{\mathcal {F}}}_0(P)}\left( {\begin{array}{c}N\\ n\end{array}}\right) {\mathbb {E}}\bigl ((1-{\lambda }_{n-1}( {\partial }P \cap H_+ ))^{N-n}{\mathbb {1}}( u \in {{\mathcal {N}}}(v, P),\, \{ X_i \}_{i \le n} \in ({\partial }P)^n_{\ne }) \bigr )\\&\qquad +O(1)\\&\quad \,=\!\sum _{v \in {{\mathcal {F}}}_0(P)} \left( {\begin{array}{c}N\\ n\end{array}}\right) \idotsint \limits _{({\partial }P)_{\ne }^n}(1- {\lambda }_{n-1}( {\partial }P \cap H_+ ))^{N-n}{\mathbb {1}}( u \in {{\mathcal {N}}}(v, P)) \, d x_1 \dots dx_n\\&\qquad +O(1). \end{aligned}$$

Now we fix a vertex v. Since P is simple, v is contained in precisely n facets \(F_1, \dots , F_n\). There is an affine transformation \(A_v\) which maps v to the origin and the n edges \([v, v_i]\) containing v onto segments \([0, s_i e_i]\) on the coordinate axes. Here we have a free choice of the n parameters \(s_i>0\), which we will fix soon. We assume \(s_i \ge n\), \(i=1, \dots , n\), which implies

$$\begin{aligned}{}[0, 1]^n \subset A_v P. \end{aligned}$$

The image measure \({\lambda }_{n-1, A_v}\) of the Lebesgue measure \({\lambda }_{n-1 }\) on the facets of P under the affine transformation \(A_v\) is—up to a constant—again Lebesgue measure, where the constant may differ for different facets. We choose the n parameters \(s_i \ge n\) in such a way that the constant equals the same \(a_v>0\) for the n facets \(F_1, \dots , F_n\) containing v,

$$\begin{aligned} {\lambda }_{n-1} (F_i) = a_v {\lambda }_{n-1} (A_v F_i). \end{aligned}$$

Note that the last condition means that for all such facets \(F_i\) and all measurable \(B \subset A_v F_i\),

$$\begin{aligned} {\lambda }_{n-1, A_v} ( B) = {\lambda }_{n-1} (A_v^{-1} B) = a_v {\lambda }_{n-1}(B) . \end{aligned}$$
(3.7)

Note also that \([0, 1]^{n-1} \subset A_v F_i\), \(i=1, \dots ,n\), and thus by (3.7),

$$\begin{aligned} n=\sum _{i=1}^n {\lambda }_{n-1}\left( [0, 1]^{n-1}\right) \le \frac{1}{a_v}\sum _{i=1}^n{\lambda }_{n-1}(F_i)\le \frac{S(P)}{a_v}= \frac{1}{a_v}. \end{aligned}$$
(3.8)

Such a uniform bound on \(a_v\) is needed in Sect. 3.5 so that \(\alpha =a_v/{(n-1)!}\le 1/(2n)\).

To keep the notation short, we write \(d_{A_v} x = d {\lambda }_{n-1,A_v}(x)\) for integration with respect to this image measure of the Lebesgue measure on \({\partial }P\) under the map \(A_v\). Equation (3.7) shows that for \(x \in A_v F_i\),

$$\begin{aligned} d_{A_v} x = a_vdx \end{aligned}$$
(3.9)

for \(i=1,\dots ,n\), where dx is again shorthand for Lebesgue measure (or equivalently Hausdorff measure) on the respective facet \(F_i\). This yields

$$\begin{aligned} {\mathbb {E}}f_{n-1}(P_N)&=\sum _{v \in {{\mathcal {F}}}_0( P)}\left( {\begin{array}{c}N\\ n\end{array}}\right) \idotsint \limits _{({\partial }A_v P)_{\ne }^n} \, (1- {\lambda }_{n-1, A_v} ({\partial }A_v P \cap H_+ ))^{N-n}\\&\quad \times {\mathbb {1}}( u \in {{\mathcal {N}}}(0, A_v P)) \, d_{A_v} x_1 \dots d_{A_v} x_n+O(1). \end{aligned}$$

We use Zähle’s formula (2.5) which transforms the integral over the points \(x_i \in {\partial }P\) into an integral over all the hyperplanes \(H=H(h,u)\), \(u \in S^{n-1}\), \(h \in {\mathbb {R}}\), and integrals over \({\partial }P \cap H\):

$$\begin{aligned}&{\mathbb {E}}f_{n-1}(P_N)=\sum _{v \in {{\mathcal {F}}}_0(P)}\left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1} (n-1)!\int \limits _{S^{n-1}} \! \int \limits _{{\mathbb {R}}}\,\idotsint \limits _{({\partial }A_vP\cap H)_{\ne }^n}(1-{\lambda }_{n-1, A_v}({\partial }A_v P\cap H_+))^{N-n}\\&\quad \times {\lambda }_{n-1} ([x_1, \dots , x_n])\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}{\mathbb {1}}(u\in {{\mathcal {N}}}(0,A_vP))\, d_{A_v}x_1 \dots d_{A_v} x_n dhdu+O(1). \end{aligned}$$

We have \({{\mathcal {N}}}(0, A_v P)\cap S^{n-1}=-S_{+}^{n-1}\), where we denote \(S_+^{n-1}= S^{n-1} \cap {\mathbb {R}}_+^n\). The condition \({\mathbb {1}}(u \in {{\mathcal {N}}}(0, A_v P))\) will be taken into account in the range of integration in the form \( u \in - S_+^{n-1}\). Now we fix u and split the integral into two parts. In the first one \(H_- \) contains all the unit vectors \(e_i\). We write this condition in the form

$$\begin{aligned} \max _{i=1, \dots , n} u_i \le h \le 0 . \end{aligned}$$

Note that \(h \le 0\), since \(u \in - S_+^{n-1}\). The second part is over \( h \le \max _{i=1, \dots , n} u_i\). Thus the expected number of facets is

$$\begin{aligned}&{\mathbb {E}}f_{n-1}(P_N)\\&\quad =\sum _{v \in {{\mathcal {F}}}_0( P)} \left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1}(n-1)!\left( \,\,\int \limits _{-S_+^{n-1}}\int \limits _{\max u_i}^0(1- {\lambda }_{n-1, A_v} ( {\partial }{\mathbb {R}}_+^n \cap H_+))^{N-n}\,\right. \\&\qquad \times \idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H)_{\ne }^n}\,{\lambda }_{n-1} ( [x_1, \dots , x_n]) \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}d_{A_v} x_1 \dots d_{A_v} x_n dhdu\\&\qquad +\int \limits _{-S_+^{n-1}}\!\int \limits _{-\infty }^{\max u_i}(1-{\lambda }_{n-1,A_v}({\partial }A_vP\cap H_+))^{N-n}\,\\&\qquad \times \left. \idotsint \limits _{({\partial }A_vP\cap H)_{\ne }^n} {\lambda }_{n-1}([x_1,\dots ,x_n])\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}d_{A_v} x_1 \dots d_{A_v} x_ndhdu\right) \\&\qquad +O(1), \end{aligned}$$

where we have used that in the first case \({\partial }A_v P \cap H_+ = {\partial }{\mathbb {R}}_+^n \cap H_+\). The substitution \(u \mapsto -u\) and \(h \mapsto -h\) yields the more convenient formula

$$\begin{aligned}{\mathbb {E}}f_{n-1}(P_N)=\sum _{v \in {{\mathcal {F}}}_0( P)} \left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1} (n-1)!( I_v^1+E_v^1) \ + O(1)\end{aligned}$$

with

$$\begin{aligned} I_v^1&=\int \limits _{S_+^{n-1}} \int \limits _0^{\min u_i}(1- {\lambda }_{n-1, A_v} ({\partial }{\mathbb {R}}_+^n \cap H_-))^{N-n}\,\\&\quad \times \idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n]) \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d_{A_v} x_1 \dots d_{A_v} x_n dhdu,\\ E_v^1&=\int \limits _{S_+^{n-1}} \int \limits _{\min u_i}^{\infty }(1- {\lambda }_{n-1, A_v} ({\partial }A_v P \cap H_-))^{N-n}\,\\&\quad \times \idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n} {\lambda }_{n-1} ( [x_1, \dots , x_n]) \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d_{A_v}x_1\dots d_{A_v}x_ndhdu. \end{aligned}$$

The asymptotically dominating term will be \(I_v^1\). Using (3.7) and (3.9) for \(I_v^1\) we have

$$\begin{aligned} \begin{aligned} I_v^1&=a_v^n \int \limits _{S_+^{n-1}} \!\int \limits _0^{\min u_i}(1- a_v {\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n \cap H_-))^{N-n}\,\\&\quad \times \idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H)_{\ne }^n}{\lambda }_{n-1} ([x_1,\dots ,x_n])\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d x_1\dots d x_n dh du. \end{aligned} \end{aligned}$$
(3.10)

In Sect. 3.5 we will determine the precise asymptotics. Equation (3.28) will tell us that

$$\begin{aligned} I_v^1 = c_n N^{-n} (\ln N)^{n-2} (1+O((\ln N)^{-1})) \end{aligned}$$

with some constant \(c_n >0\) as \(N \rightarrow \infty \). The error term \(E_v^1\) can be estimated by using the fact that there are constants \({\overline{a}},{\underline{a}}\) such that

$$\begin{aligned} {\underline{a}} {\lambda }_{n-1} (B) \le {\lambda }_{n-1, A_v} ( B) \le {\overline{a}} {\lambda }_{n-1}(B) \end{aligned}$$
(3.11)

for all \(v \in {{\mathcal {F}}}_0(P)\) and all \(B \subset {\partial }P\). This shows

$$\begin{aligned} E_v^1&\le (2{\overline{a}})^n \int \limits _{S_+^{n-1}}\int \limits _{\min u_i}^{\infty }(1- {\underline{a}}{\lambda }_{n-1}({\partial }A_v P \cap H_-))^{N}\\&\quad \times \idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n])\prod \limits _{j=1}^nJ(T_{x_j},H)^{-1}d x_1 \ldots d x_n dhdu. \nonumber \end{aligned}$$
(3.12)

In Sect. 3.6 we will show that this is of order \(O(N^{-n}(\ln N)^{n-3})\), see (3.35). This implies

$$\begin{aligned} {\mathbb {E}}f_{n-1}(P_N)&=\sum _{v \in {{\mathcal {F}}}_0( P)} \left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1} (n-1)!c_n N^{-n} (\ln N)^{n-2} (1+O((\ln N)^{-1}))\nonumber \\&=c_n f_0( P) (\ln N)^{n-2} (1+O((\ln N)^{-1})) \end{aligned}$$
(3.13)

with some \(c_n >0\), which proves Theorem 1.1.

3.2 The Volume Difference

We are interested in the expected volume difference

$$\begin{aligned} {\mathbb {E}}(V_n(P)-V_n(P_N)). \end{aligned}$$

With probability one the random polytope \(P_{N}\) has the following property: For each facet \(F \in {{\mathcal {F}}}_{n-1}(P_N)\) that is not contained in a facet of P there exists a unique vertex \(v\in {{\mathcal {F}}}_0(P)\), such that the outer unit normal vector \(u_{F}\) of F is contained in the normal cone \({{\mathcal {N}}}(v,P)\), or equivalently the hyperplane H containing F is parallel to a supporting hyperplane to P at v. Clearly all the sets [Fv] are contained in \(P\setminus P_N\) and they have pairwise disjoint interiors. This is immediate in dimension two, and holds in arbitrary dimensions because it holds for all two-dimensional sections through P and \(P_N\) containing two vertices of P. We set

$$\begin{aligned} C_N= \bigcup _{v \in {{\mathcal {F}}}_{0}(P)} \, \bigcup _{ \begin{array}{c} F \in {{\mathcal {F}}}_{n-1}(P_N) \\ u_F \in {{\mathcal {N}}}(v,P) \\ F \nsubseteq {\partial }P \end{array} } [F, v],\qquad D_N= P\setminus ( P_N \cup C_N ), \end{aligned}$$
(3.14)

where \(D_N\) is the subset of \(P \setminus P_N\) not covered by one of the simplices [Fv] with \(u_F \in {{\mathcal {N}}}(v,P)\). We have

$$\begin{aligned}&{\mathbb {E}}(V_n(P)-V_n(P_N))={\mathbb {E}}V_n(C_N)+ {\mathbb {E}}V_n(D_N)\\&\qquad ={\mathbb {E}}\sum \limits _{v \in {{\mathcal {F}}}_0(P)}\sum \limits _{F\in {{\mathcal {F}}}_{n-1}(P_N)}V_n([F,v]){\mathbb {1}}(u_F \in {{\mathcal {N}}}(v,P)) +{\mathbb {E}}V_n(D_N) . \end{aligned}$$

For the first summand we follow the approach already worked out in detail in the last section. The convex hull \([X_{i_1},\dots ,X_{i_n}]\) forms a facet of \(P_N\) if the affine hull of these points does not intersect the convex hull of the remaining points, and to simplify notation we set \(u=u_F\) and \(H(h,u)=H=\text {aff}[X_1,\dots ,X_n]\). The halfspace \(H_-\) contains the random polytope \(P_N\), and the probability that \(X_{n+1},\dots ,X_N\) are contained in \(H_-\) equals

$$\begin{aligned} {\lambda }_{n-1}({\partial }P \cap H_- )^{N-n} = (1-{\lambda }_{n-1}({\partial }P \cap H_+ ))^{N-n}. \end{aligned}$$

Thus

$$\begin{aligned}&{\mathbb {E}}V_n (C_N)\\&\quad =\!\!\sum _{v \in {{\mathcal {F}}}_0(P)}\left( {\begin{array}{c}N\\ n\end{array}}\right) {\mathbb {E}}\bigl ( (1- {\lambda }_{n-1}( {\partial }P \cap H_+ ))^{N-n}\\&\qquad \times {\mathbb {1}}( u \in {{\mathcal {N}}}(v, P),\, \{ X_i \}_{i \le n} \in ({\partial }P)^n_{\ne })V_n[X_1, \dots , X_n, v] \bigr )\\&\quad =\sum _{v \in {{\mathcal {F}}}_0(P)}\left( {\begin{array}{c}N\\ n\end{array}}\right) \idotsint \limits _{({\partial }P)_{\ne }^n}(1-{\lambda }_{n-1}({\partial }P \cap H_+ ))^{N-n}\\ {}&\qquad \times {\mathbb {1}}( u \in {{\mathcal {N}}}(v, P))V_n[x_1,\dots ,x_n,v]\,dx_1\dots dx_n. \end{aligned}$$

We fix v and use the affine transformation \(A_v\) defined in the last section which maps v to the origin and the edges onto the coordinate axes. The transformation rule yields

$$\begin{aligned}&{\mathbb {E}}V_n(C_N)\\&\quad =\sum _{v \in {{\mathcal {F}}}_0( P)} \left( {\begin{array}{c}N\\ n\end{array}}\right) \idotsint \limits _{({\partial }A_v P)_{\ne }^n} (1- {\lambda }_{n-1, A_v} ({\partial }A_v P \cap H_+ ))^{N-n}{\mathbb {1}}( u \in {{\mathcal {N}}}(0, A_v P))\\&\qquad \times V_n [A_v^{-1} x_1, \dots , A_v^{-1} x_n,0]\, d_{A_v}x_1\dots d_{A_v} x_n. \end{aligned}$$

The volume of the simplex \([\{A_v^{-1}x_i\}_{i=1,\dots ,n},0]\) is a constant \(d_v=\det A_v^{-1}\) times the volume of \([\{x_i\}_{i=1,\dots ,n},0]\) which equals \(n^{-1}\) times the height |h| times the volume of the base \([\{ x_i\}_{i=1, \dots , n}]\). By Zähle’s formula (2.5) we obtain

$$\begin{aligned} {\mathbb {E}}V_n(C_N)&\,=\sum _{v \in {{\mathcal {F}}}_0( P)} d_v \left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1} \frac{(n-1)!}{n}\\&\quad \times \int \limits _{S^{n-1}} \int \limits _{{\mathbb {R}}}\,\idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}(1- {\lambda }_{n-1, A_v}({\partial }A_vP\cap H_+ ))^{N-n}\,\\&\quad \times {\lambda }_{n-1}([x_1,\dots ,x_n])^2\prod \limits _{j=1}^nJ(T_{x_j},H)^{-1}\\&\quad \times {\mathbb {1}}(u \in {{\mathcal {N}}}(0, A_v P))\, d_{A_v} x_1 \dots d_{A_v} x_n\,dhdu. \end{aligned}$$

We split the integral into the two parts \(\max _{i=1,\dots ,n}u_i\le h\le 0\) and \(h\le \max _{i=1,\dots ,n}u_i\) and substitute \(u\mapsto -u\), \(h\mapsto -h\). The main part of the expected volume difference is

$$\begin{aligned}{\mathbb {E}}V_n(C_N)=\sum _{v \in {{\mathcal {F}}}_0( P)} d_v \left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1} \frac{(n-1)!}{n}( I_v^2+E_v^2)\end{aligned}$$

with

$$\begin{aligned} I_v^2&=a_v^n \int \limits _{S_+^{n-1}}\!\int \limits _0^{\min u_i}(1- a_v {\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n \cap H_-))^{N-n}\, \\ {}&\quad \times h \idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n])^2 \prod \limits _{j=1}^n J (T_{x_j},H)^{-1} \, d x_1 \dots d x_n \, dhdu , \nonumber \end{aligned}$$
(3.15)
$$\begin{aligned} E_v^2&=\int \limits _{S_+^{n-1}} \int \limits _{\min u_i}^{\infty }(1- {\lambda }_{n-1, A_v} ({\partial }A_v P \cap H_-))^{N-n}\, \\ {}&\quad \times h \idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n])^2 \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d_{A_v} x_1 \dots d_{A_v} x_n \, dhdu.\nonumber \end{aligned}$$
(3.16)

The asymptotically dominating term will be \(I_v^2\). In Sect. 3.5 we determine the precise asymptotics. Equation (3.27) will tell us that

$$\begin{aligned} I_v^2 = c_n a_v^{-n/({n-1})} N^{-n-n/({n-1})}\bigl (1+ O\bigl (N^{-1/({(n-1)(n-2)})}\bigr )\bigr ) \end{aligned}$$

with some constant \(c_n >0\) as \(N \rightarrow \infty \). The error term \(E_v^2\) can be estimated by

$$\begin{aligned} E_v^2&\le (2{\overline{a}})^n \int \limits _{S_+^{n-1}} \int \limits _{\min u_i}^{\infty }(1- {\underline{a}} {\lambda }_{n-1}({\partial }A_v P \cap H_-))^{N} \\&\quad \times h \idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n])^2 \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d x_1 \dots d x_n \, dhdu, \nonumber \end{aligned}$$
(3.17)

where \({\overline{a}}, {\underline{a}}\) are as in (3.11). In Sect. 3.6, (3.36), we show that

$$\begin{aligned}E_v^2=O\bigl ( N^{-n-({n-1})/({n-2}) }\bigr ).\end{aligned}$$

It remains to estimate

$$\begin{aligned} {\mathbb {E}}V_n(D_N)= {\mathbb {E}}V_n\left( P\setminus \left( P_N \,\cup \!\bigcup _{F \in {{\mathcal {F}}}_{n-1}(P_N)} [F, v_F] \right) \right) . \end{aligned}$$

The following argument is proved in detail in the paper of Affentranger and Wieacker [1, p. 302] and will only be sketched here.

If \(y \in D_N\), then the normal cone \({{\mathcal {N}}}(y, [y,P_N])\) is not contained in any of the normal cones \({{\mathcal {N}}}(v,P)\) of P, \(v \in {{\mathcal {F}}}_0(P)\). Hence \({{\mathcal {N}}}(y,[y,P_N])\) meets at least two neighbouring normal cones \({{\mathcal {N}}}(v_1,P), {{\mathcal {N}}}(v_2,P)\), and thus the normal cone of the edge \(e=[v_1,v_2]\in {{\mathcal {F}}}_1(P)\). This implies that there exists a supporting hyperplane H of P with \(H \cap P=e\) with the property that the parallel hyperplane through y does not meet \(P_N\).

We apply an affine map \(A_e\) similar to the one defined above which maps \(e=[v,w]\) to the unit interval \([0, e_n]\) and v to the origin, and which maps the other edges containing v onto segments containing the remaining unit intervals \([0,e_i]\). After applying this map the situation described above is the following: for \(x=(x_1,\dots ,x_n)=A_e y \in A_e D_N\) the supporting hyperplane \( A_e H =H(0, u)\) to \(A_e P\) intersects \(A_e P\) in the edge \([0, e_n]\). The parallel hyperplane \(H(\langle x,u\rangle ,u)\) contains x and cuts off from \(A_e P\) a cap disjoint from \(A_e P_N\). This cap contains the simplex

$$\begin{aligned}{}[0, \min (1,x_1)e_1, \dots , \min (1, x_{n-1})e_{n-1},e_n]. \end{aligned}$$

Hence if \(x \in A_e D_N\) then

$$\begin{aligned}{}[0, \min (1,x_1)e_1, \dots , \min (1, x_{n-1})e_{n-1},e_n]\cap A_e P_N=\emptyset . \end{aligned}$$

The probability of this event is given by

$$\begin{aligned}&{\mathbb {P}}([0, \min (1,x_1) e_1,\dots , \min (1,x_{n-1}) e_{n-1}, e_n ] \cap A_e P_N= \emptyset )\\&\qquad =\bigl ( 1- {\lambda }_{n-1} (A_e^{-1} ( {\partial }{\mathbb {R}}_+^n \cap [0, \min (1,x_1)e_1, \dots , \min (1,x_{n-1})e_{n-1}, e_n]))\bigr )^{N}\\&\qquad \le \bigl ( 1- {\underline{a}}{\lambda }_{n-1} ({\partial }{\mathbb {R}}_+^n \cap [0, \min (1,x_1)e_1, \dots , \min (1,x_{n-1})e_{n-1}, e_n ])\bigr )^{N}, \end{aligned}$$

with some \({\underline{a}} >0\). We denote by \(d_e\) the involved Jacobian of \(A_e^{-1}\) and by \({\overline{d}}\) the maximum of \(d_e\). This implies the estimate

$$\begin{aligned} {\mathbb {E}}V_n(D_N)&=\int \limits _{P} {\mathbb {P}}(x \in D_N)\,dx\\&\le \sum _{e\in {{\mathcal {F}}}_1(P)}d_e\!\int \limits _{A_eP}\bigl (1-{\underline{a}}{\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n\cap [0,\min (1,x_1)e_1,\dots ,\min (1,x_{n-1})e_{n-1},e_n])\bigr )^{N}dx\\&\le f_1(P)\,{\overline{d}}\!\int \limits _{[0,{\tau }]^n}\bigl (1-{\underline{a}}{\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n\cap [0,\min (1,x_1)e_1,\dots ,\min (1,x_{n-1})e_{n-1},e_n])\bigr )^{N}dx \end{aligned}$$

assuming again that \(A_e P \subset [0,{\tau }]^n\) for all e. In Sect. 3.7 we prove that

$$\begin{aligned} {\mathbb {E}}V_n(D_N)=O\bigl (N^{-(n-1)/(n-2)}\bigr ). \end{aligned}$$
(3.18)

Combining our results we get

$$\begin{aligned} {\mathbb {E}}(V_n(P)-V_n(P_N))&= \sum _{v \in {{\mathcal {F}}}_0( P)} d_v \left( {\begin{array}{c}N\\ n\end{array}}\right) {\beta }^{-1} \frac{(n-1)!}{n}( I_v^2+E_v^2)+ {\mathbb {E}}V_n(D_N)\nonumber \\&=c_n \sum _{v \in {{\mathcal {F}}}_0( P)} d_v a_v^{-n/(n-1)}N^{-n/(n-1)}\bigl (1+ O\bigl (N^{-1/((n-1)(n-2))}\bigr )\bigr )\nonumber \\&=c_{n,P} N^{-n/(n-1)} \bigl (1+ O\bigl (N^{-1/((n-1)(n-2))}\bigr )\bigr ), \end{aligned}$$
(3.19)

which is Theorem 1.2.

3.3 Random Simplices in Simplices

For \(u \in S_+^{n-1}\), \(h \ge 0\), and \(H=H(h,u)\) we set

$$\begin{aligned} {{\mathcal {E}}}_k(h, u)=\idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n])^{k} \prod \limits _{j=1}^n J (T_{x_j},H(1,u))^{-1}\, d x_1 \dots d x_n,\nonumber \\ \end{aligned}$$
(3.20)

which is the (not normalized) k-th moment of the volume of a random simplex in \( {\mathbb {R}}_+^n \cap H(h,u)\) where the random points are chosen on the boundary of this simplex according to the weight functions \(J (T_{x_j},H(1,u))^{-1} \). Recall that for almost all \(x_j\), \(T_{x_j}\) is the supporting hyperplane at \(x_j\). In fact it is the coordinate hyperplane which contains \(x_j\).

Lemma 3.1

For \(k \ge 0\), there are constants \({{\mathcal {E}}}_{k,{\varvec{f}}}>0\) independent of u, such that

$$\begin{aligned}{{\mathcal {E}}}_k(h, u)=h^{-(n+k )} n^{-k/2}(n-1)^{-n/2}\!\!\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\left( \,\prod _{i=1}^n \frac{h}{u_i} \right) ^{\!n+k}\prod \limits _{i=1}^n \frac{u_{f_i}}{h} {{\mathcal {E}}}_{k, {\varvec{f}}}.\end{aligned}$$

Proof

For a point \(x_j\) in the coordinate hyperplane \(e_i^\perp \), the factor \(J (T_{x_j},H(1,u))\) appearing in the weight function is the sine of the angle between \(e_i\) and u. Thus

$$\begin{aligned} J (T_{x_j},H(1,u))= \Vert u\vert _{e_i^\perp }\Vert =( 1- u_i^2)^{1/2} \end{aligned}$$
(3.21)

and hence is independent of h as long as u is fixed. In (3.20) we substitute \(x_j = h y_j\) with \(y_j \in H(1,u)\). The \((n-1)\)-dimensional volume is homogeneous of degree \(n-1\), hence

$$\begin{aligned} {\lambda }_{n-1} ([x_1, \dots , x_n]) = h^{n-1} {\lambda }_{n-1} ([y_1, \dots , y_n]) , \end{aligned}$$

and since \(x_j\) are in the \((n-2)\)-dimensional planes \( {\partial }{\mathbb {R}}_+^{n} \cap H(h,u)\) we have \( dx_j = h^{n-2}dy_j\).

$$\begin{aligned} {{\mathcal {E}}}_k(h, u)&=h^{(n-1)k + n(n-2)}\!\!\idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H(1,u))_{\ne }^n}\!{\lambda }_{n-1} ([y_1, \dots , y_n])^{k}\nonumber \\&\quad \qquad \qquad \qquad \qquad \times \prod \limits _{j=1}^n J (T_{x_j},H(1,u))^{-1} d y_1 \ldots d y_n\\&=h^{(n-1)k + n(n-2)} {{\mathcal {E}}}_k (1,u). \nonumber \end{aligned}$$
(3.22)

To evaluate \({{\mathcal {E}}}_k(1,u)\) we condition on the facets in \(e_1^\perp , \dots , e_n^\perp \) of \({\mathbb {R}}_+^n \cap H(1,u)\) from where the random points are chosen. Thus for

$$\begin{aligned} {\varvec{f}} \in \{1, \dots , n\}^n \end{aligned}$$

we condition on the event \(y_i\in e_{f_i}^\perp \). Because \(\{y_1,\dots ,y_n\}\in ({\partial }{\mathbb {R}}_+^n \cap H(1,u))_{\ne }^n\), which means that not all points are contained in the same facet, we may assume that \({\varvec{f}}\in \{1,\dots ,n\}^n_{\ne }\) where we remove all n-tuples of the form \((i,\dots , i)\) and denote the remaining set by \(\{1,\dots ,n\}^n_{\ne }\). Recalling (3.21), we obtain

$$\begin{aligned} {{\mathcal {E}}}_k(1, u)&=\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\prod \limits _{i=1}^n\,(1-u_{f_i}^2 )^{-1/2}\\&\quad \times \idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H(1,u))_{\ne }^n}\!{\lambda }_{n-1} ([y_1, \dots , y_n])^{k}\prod \limits _{i=1}^n {\mathbb {1}}( y_i \in e_{f_i}^\perp )\, d y_1\ldots d y_n. \nonumber \end{aligned}$$
(3.23)

A short computation shows that H(1, u) meets the coordinate axes in the points \((1/u_i)e_i\). We substitute \(z= Ay\), \(y = A^{-1} z\), where A is the linear map transforming H(1, u) into \(H(1, {\textbf{1}}_n)\). Here \({\textbf{1}}_n\) is the vector \((1, \dots , 1)^T\). The map is given by

$$\begin{aligned} A=\begin{pmatrix} u_1 &{}\quad &{}\quad 0 \\ &{}\quad \ddots &{}\quad \\ 0 &{}\quad &{}\quad u_n \end{pmatrix}. \end{aligned}$$
(3.24)

The volume of the simplex \({\mathbb {R}}_+^n\cap H_-(1,u)\) is given by \((1/n!) \prod _{i=1}^n(1/u_i)\), thus the ‘base’ \({\mathbb {R}}_+^n \cap H(1,u)\) of this simplex has \((n-1)\)-volume \((1/(n-1)!)\prod _{i=1}^n(1/u_i)\). The regular simplex \({\mathbb {R}}_+^n\cap H(1,{\textbf{1}}_n)\) has \((n-1)\)-volume \(\sqrt{n}/(n-1)!\). Hence

$$\begin{aligned} {\lambda }_{n-1}([A^{-1}z_1,\dots ,A^{-1}z_n])^k =n^{-k/2}\left( \,\prod _{i=1}^n\frac{1}{u_i}\right) ^{\!k}{\lambda }_{n-1}([z_1,\dots ,z_n])^k. \end{aligned}$$

The \((n-1)\)-volume of the simplex spanned by the origin and the facet of \({\partial }{\mathbb {R}}_+^n \cap H(1,u)\) in \(e_i^\perp \) is given by \((1/(n-1)!)\prod _{j\ne i}(1/u_j)\), its height by \(\Vert (u_1, \dots , u_{i-1}, u_{i+1},\dots ,u_n)\Vert ^{-1}=(1-u_i^2)^{-1/2}\), and hence the \((n-2)\)-volume of the facet of \({\partial }{\mathbb {R}}_+^n\cap H(1,u)\) in \(e_i^\perp \) is

$$\begin{aligned} {\lambda }_{n-2}({\partial }{\mathbb {R}}_+^n \cap e_i^\perp \cap H(1,u) )= \frac{(1- u_i^2)^{1/2}}{(n-2)!}\prod _{j \ne i} \frac{1}{u_j} . \end{aligned}$$

Comparing this to the volume \(\sqrt{n-1}/(n-2)!\) of the facet of the simplex \({\partial }{\mathbb {R}}_+^n \cap H(1,{\textbf{1}}_n)\) in \(e_i^\perp \), which equals the volume of \({\mathbb {R}}_+^{n-1} \cap H (1,{\textbf{1}}_{n-1})\), shows that the Jacobian in \(e_{f_i}^\perp \) of the map \(A^{-1}\) is

$$\begin{aligned} (n-1)^{-1/2}( 1- u_{f_i}^2)^{1/2}\prod _{j \ne f_i} \frac{1}{u_j}. \end{aligned}$$

Combining these Jacobians with (3.23) we obtain

$$\begin{aligned} {{\mathcal {E}}}_k(1, u)=n^{-k/2} (n-1)^{-n/2}\!\!\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\left( \,\prod _{i=1}^n \frac{1}{u_i} \right) ^{\!n+k}\prod \limits _{i=1}^n u_{f_i}\,{{\mathcal {E}}}_{k, {\varvec{f}}}, \end{aligned}$$

where

$$\begin{aligned} {{\mathcal {E}}}_{k, {\varvec{f}}}=\idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H(1,{\textbf{1}}_n))_{\ne }^n}\!{\lambda }_{n-1} ([z_1, \dots , z_n])^{k}\prod \limits _{i=1}^n {\mathbb {1}}( z_i \in e_{f_i}^\perp )\, d z_1\ldots d z_n \end{aligned}$$

is independent of u. Together with (3.22) this finishes the proof. \(\square \)

In Sect. 3.6 we need an estimate for \({{\mathcal {E}}}_0 (h,u)\) in the case when there is a \( k \le n-1\) such that

$$\begin{aligned} \frac{h}{u_1} , \dots , \frac{h}{u_k} \le 1\qquad \text {and}\qquad \frac{h}{u_{k+1}}, \dots , \frac{h}{u_n} \ge 1, \end{aligned}$$
(3.25)

see (3.29). Then H meets the coordinate axes in the points \((h/u_i)e_i\in [0,1]^n\) for \(i=1,\dots ,k\), and the other points of intersection are outside of \([0, 1]^n\). We set

$$\begin{aligned}{{\mathcal {E}}}_0^1 (h, u)&=\idotsint \limits _{({\partial }[0,1]^n \cap H)^n}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\\&\quad \times \prod \limits _{f=1}^n {\mathbb {1}}(|\{ x_1, \dots , x_n\}\cap e_f^\perp | \le n-1)\, d x_1 \ldots d x_n. \end{aligned}$$

Lemma 3.2

Let \( k \le n-1\) be given such that (3.25) holds. Then we have

$$\begin{aligned}{{\mathcal {E}}}_0^1 (h, u)\le c_{n, k}h^{-n} \prod _{j=1}^k \biggl (\frac{h}{u_j}\biggr )^{\!n}\!\sum _{{\varvec{f}} \in \{1, \dots , n\}^n}\prod _{j =1}^k\biggl (\frac{u_j}{h}\biggr )^{\!m_j}\end{aligned}$$

with \(m_j = m_j({\varvec{f}})= \sum _{i=1}^{n} {\mathbb {1}}( f_i=j) \le n-1\) for \(j \le k\), and \(\sum _{i=1}^k m_i \le n\).

Proof

First we compare the intersection of H with the facet of \([0,1]^n \) in \(e_f^\perp \) to the intersection of H with the opposite facet of \([0,1]^n \) in \(e_f+e_f^\perp \), \(f=1,\dots ,n\). For \(i \ne f\) the hyperplane H meets the coordinate axes \(\text {lin}\{e_i\}\) in \(e_f^\perp \) in points \((h/{u_i})e_i\). It meets the shifted coordinate axes \(e_f+ \text {lin}\{ e_i\}\) in the opposite facet in the points \(e_f+((h-u_f)/u_i)e_i\). Because \(u \in S_+^{n-1}\) we have \(u_f\ge 0\). This shows that the facet of \(H \cap [0,1]^n\) in \(e_f^\perp \) contains the simplex

$$\begin{aligned} \left[ \biggl \{ \frac{h}{u_i}e_i \biggr \}_{i \le k,i \ne f}\right] . \end{aligned}$$
(3.26)

The opposite facet contains either the smaller simplex

$$\begin{aligned} \left[ \biggl \{ \frac{h-u_f}{u_i}e_i\biggr \}_{i \le k,i \ne f}\right] \end{aligned}$$

if \(f\ge k+1\) and \(h/u_f>1\), and otherwise the intersection is empty, \((H \cap [0,1]^n)\cap (e_f+e_f^\perp )=\emptyset \). The simplex (3.26) has volume

$$\begin{aligned} \frac{1}{(k-2)!}\cdot \frac{1}{h ( 1- u_f^2 )^{-1/2}}\prod _{i \le k,i \ne f}\frac{h}{u_i} \end{aligned}$$

for \(f \le k\), and

$$\begin{aligned} \frac{1}{(k-1)!}\cdot \frac{1}{h ( 1- u_f^2 )^{-1/2}} \prod _{i \le k} \frac{h}{u_i} \end{aligned}$$

for \(f \ge k+1\). The volume in the opposite facet is clearly smaller for \(f \ge k+1\), and the opposite intersection vanishes for \(f \le k\). We use \(J(T_{x},H(h,u))^{-1}=J(T_{x},H(1,u))^{-1}=( 1- u_f^2)^{-1/2} \) for \(x \in e_f^\perp \). For \(f \le k\) this proves

$$\begin{aligned} \int \limits _{([0,1]^n \cap H)\cap e_f^\perp }\!\!\!J(T_{x},H)^{-1}d x&\,=\,( 1- u_f^2)^{-1/2}\!\!\int \limits _{([0,1]^n \cap H) \cap e_f^\perp }\!d x\\&\,\le \,\frac{(n-k)^{(n-k)/2}}{(k-2)!}\cdot \frac{1}{h} \prod _{i \le k,i \ne f}\frac{h}{u_i}, \end{aligned}$$

since \((n-k)^{1/2}\) is the diameter of the \((n-k)\)-dimensional unit cube. In this case there is no simplex in the opposite facet. Analogously, for \(f\ge k+1\)

$$\begin{aligned} \int \limits _{([0,1]^n \cap H)\cap e_f^\perp }\!\!\!J(T_x,H)^{-1}\,d x&\le \frac{(n-k-1)^{(n-k-1)/2}}{(k-1)!}\cdot \frac{1}{h}\prod _{i \le k} \frac{h}{u_i}\qquad \text {and}\\ \int \limits _{([0,1]^n \cap H)\cap (e_f+e_f^\perp )}\!\!\!J (T_{x},H)^{-1}\, d x&\le \frac{(n-k-1)^{(n-k-1)/2}}{(k-1)!}\cdot \frac{1}{h} \prod _{i \le k} \frac{h}{u_i}. \end{aligned}$$

Again we condition on the facets \({\partial }[0,1]^n \cap H(1,u)\) from where the random points are chosen. Because of the term \({\mathbb {1}}(|\{ x_1, \dots , x_n\} \cap e_f^\perp | \le n-1)\), it is impossible that all points are contained in one of the facets in \(e_f^\perp \). Thus for \(f\le k\) we have at most \(n-1\) points in \(([0,1]^n\cap H)\cap e_f^\perp \) and no point in \(([0,1]^n\cap H)\cap (e_f+e_f^\perp )\) because this set is empty. For \(f\ge k+1\) we have at most \(n-1\) points in \(([0,1]^n\cap H) \cap e_f^\perp \) and maybe some additional points in \(([0,1]^n \cap H) \cap (e_f +e_f^\perp )\).

Now for \(j=1, \dots , n\) there is some \(f_j\) such that \( x_j\) is either in the facet \([0,1]^n \cap e_{f_j}^\perp \) or in the opposite facet \([0,1]^n\cap (e_{f_j}+e_{f_j}^\perp )\). This defines a vector

$$\begin{aligned} {\varvec{f}}\in \{1,\dots ,n\}^n, \end{aligned}$$

and we take into account that for \(f \le k\),

$$\begin{aligned} m_f= \sum _{j=1}^{n} {\mathbb {1}}(f_j=f) \le n-1 . \end{aligned}$$

This yields

$$\begin{aligned} {{\mathcal {E}}}_0^1 (h, u)&\,=\sum _{{\varvec{f}} \in \{1, \dots , n\}^n}\prod _{j : f_j\le k} \left( \,\int \limits _{{\partial }[0,1]^n \cap H}\!\!J (T_{x_j},H)^{-1}{\mathbb {1}}( x_j\in e_{f_j}^\perp )\,d x_j \right) \\&\quad \times \prod _{j:f_j \ge k+1} \left( \,\int \limits _{{\partial }[0,1]^n \cap H}\!\!J (T_{x_j},H)^{-1}{\mathbb {1}}\bigl ( x_j \in e_{f_j}^\perp \cup (e_{f_j} + e_{f_j}^\perp )\bigr )\,dx_j\right) \\&\,\le \,c_{n,k}h^{-n} \prod _{j=1}^k \biggl ( \frac{h}{u_j}\biggr )^{\!n}\!\sum _{{\varvec{f}} \in \{1, \dots , n\}^n}\prod _{j =1}^k \biggl ( \frac{u_{j}}{h}\biggr )^{\!m_j} \end{aligned}$$

with \(m_j = m_j({\varvec{f}}) = \sum _i {\mathbb {1}}( f_i=j) \le n-1\) for \(j \le k\), and \(\sum _1^k m_i \le \sum _1^n m_i =n\).

\(\square \)

3.4 The Crucial Substitution

In the next sections we will end up with integrals over \(u\in S_+^{n-1}\) and \(h\in {\mathbb {R}}\), which we split into the part where \(h\ge \min _{1\le i\le n}u_i\) and the part where \(h\le \min _{1\le i\le n}u_i\). Then the following substitution is helpful.

Lemma 3.3

Let \(f:({\mathbb {R}}_+)^n \rightarrow {\mathbb {R}}\) be an integrable function such that both sides of the following equation are finite. Then

$$\begin{aligned}\int \limits _{S_+^{n-1}} \int \limits _{{\mathbb {R}}_+}f\biggl ( \frac{h}{u_1}, \dots , \frac{h}{u_n}\biggr )h^{-(n+1)}dhdu\,=\,\idotsint \limits _{({\mathbb {R}}_+)^n}f(t_1, \dots , t_n) \prod _{i=1}^{n}t_{i}^{-2}\,dt_1 \dots dt_n.\end{aligned}$$

In particular we will make extensive use of the following version where we use that the range of integration \(0 \le h \le u_i\) for all \(i=1, \dots , n\), is equivalent to \(t_i \in [0,1]\):

$$\begin{aligned}&\int \limits _{S_+^{n-1}}\!\!\int \limits _0^{\min _{1\le i\le n}\{u_i\}}\left( 1-a\frac{\sum u_i}{\prod u_i}h^{n-1}\right) ^{\!N-n}h^{-(n+1)}\prod _{i=1}^n\biggl (\frac{u_i}{h}\biggr )^{\!-(n+1 + {\varepsilon }) + m_i}dhdu\\&\qquad \qquad =\int \limits _0^1\dots \int \limits _0^1\left( 1- a \sum _i \prod _{j \ne i} t_j \right) ^{\!N-n}\prod _{i=1}^n t_i^{n-1 + {\varepsilon }-m_i}dt_1\ldots dt_n, \end{aligned}$$

where \({\varepsilon }\in \{0,1\}\), N is the number of chosen points and the \(m_i\) are as in Lemma 3.2.

Proof

The goal is to rewrite the integration \(dhdu\) over the set of hyperplanes into an integration with respect to \(t_1, \dots , t_n\), where these are the intersections of the hyperplane H(h, u) with the coordinate axes. First, the substitution \(r = h^{-1}\) leads to \(dh = -r^{-2}dr\). Then we pass from polar coordinates (r, u) to the Cartesian coordinate system: for \(h,r \in {\mathbb {R}}_+\) and \(u\in S^{n-1}_+\) this gives

$$\begin{aligned} h^{-(n+1)}\, dhdu =r^{n-1}\, drdu =dx_{1}\ldots dx_{n}. \end{aligned}$$

Now we substitute \(x_i=1/t_i\) and take into account that

$$\begin{aligned} h^{-1}=r=\left( \,\sum _{i=1}^{n}|x_{i}|^{2}\right) ^{\!1/2}=\left( \,\sum _{i=1}^{n}\,\biggl |\frac{1}{t_{i}}\biggr |^{2}\right) ^{\!1/2}. \end{aligned}$$

Thus finally we have

$$\begin{aligned} h^{-(n+1)}\, dhdu = \prod _{i=1}^{n}t_{i}^{-2}\, dt_{1}\ldots dt_{n} \end{aligned}$$

with \(h^{-1} u_i= r u_i=x_i=t_i^{-1}\). \(\square \)
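
As a sanity check of the substitution (ours, not part of the original argument): for \(n=2\) and the test function \(f(t_1,t_2)=e^{-t_1-t_2}(t_1t_2)^3\), both sides of Lemma 3.3 equal 1, which the following sketch confirms numerically.

```python
import numpy as np
from scipy.integrate import dblquad

# test function for which both sides of Lemma 3.3 (n = 2) are finite
f = lambda t1, t2: np.exp(-t1 - t2) * (t1 * t2) ** 3

# left-hand side: u = (cos a, sin a) ranges over S_+^1, h over (0, infinity)
lhs, _ = dblquad(lambda h, a: f(h / np.cos(a), h / np.sin(a)) / h**3,
                 0.0, np.pi / 2, 0.0, 60.0)

# right-hand side: integrate f(t1, t2) * t1**-2 * t2**-2 over the positive quadrant
rhs, _ = dblquad(lambda t1, t2: f(t1, t2) / (t1 * t2) ** 2,
                 0.0, 60.0, 0.0, 60.0)

print(lhs, rhs)   # both printed values should be close to 1
```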

3.5 The Main Term

By (3.10) and (3.15), for \({\varepsilon }\in \{0,1\}\) we have to investigate

$$\begin{aligned} I_v^{1+{\varepsilon }}&=a_v^n \int \limits _{S_+^{n-1}}\!\int \limits _0^{\min u_i}(1- a_v {\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n \cap H_- ))^{N-n} h^{{\varepsilon }}\\ {}&\quad \times \idotsint \limits _{({\partial }{\mathbb {R}}_+^n \cap H)_{\ne }^n}{\lambda }_{n-1} ( [x_1, \dots , x_n])^{1+{\varepsilon }} \prod \limits _{j=1}^n J(T_{x_j},H)^{-1}\, d x_1 \ldots d x_n \, dhdu\\&=a_v^n \int \limits _{S_+^{n-1}}\!\int \limits _0^{\min u_i}(1- a_v {\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n \cap H_- ))^{N-n} h^{{\varepsilon }}{{\mathcal {E}}}_{1+{\varepsilon }}(h, u)\,dhdu, \end{aligned}$$

where we use the notation from (3.20). Recall that \(H=H(h,u)\) meets the coordinate axes in the points \(t_i e_i =(h/u_i)e_i\), and hence

$$\begin{aligned} {\lambda }_{n-1}({\partial }{\mathbb {R}}_+^n \cap H_- )=\frac{1}{(n-1)!}\cdot \frac{\sum u_i}{\prod u_i}h^{n-1}. \end{aligned}$$
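
For instance, for \(n=2\) the set \({\partial }{\mathbb {R}}_+^2 \cap H_-\) consists of the two segments \([0,(h/u_1)e_1]\) and \([0,(h/u_2)e_2]\), of total length \(h/u_1+h/u_2=((u_1+u_2)/(u_1u_2))\,h\), in accordance with the displayed formula.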

We plug this and the result of Lemma 3.1 into \(I_v^{1+{\varepsilon }}\), set \(m_i= \sum _j {\mathbb {1}}(f_j=i)\), and obtain

$$\begin{aligned} I_v^{1+{\varepsilon }}&=n^{-(1+{\varepsilon })/2}(n-1)^{-n/2}a_v^n\!\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\!\!{{\mathcal {E}}}_{1+{\varepsilon },{\varvec{f}}}\\&\quad \times \int \limits _{S_+^{n-1}}\!\int \limits _0^{\min u_i}\biggl (1- \frac{a_v}{(n-1)!}\cdot \frac{\sum u_i}{\prod u_i} h^{n-1} \biggr )^{\!N-n}h^{-(n +1)}\prod _{i=1}^n \biggl (\frac{u_i}{h} \biggr )^{\!-(n+1+{\varepsilon })+m_i}\, dhdu. \end{aligned}$$

Note that \(\sum m_i=n\). We apply the substitution introduced in Lemma 3.3 together with the notation from Lemmata 2.3 and 2.4; in particular we use the function \({{\mathcal {J}}}(\,{\cdot }\,)\) introduced in (2.6):

$$\begin{aligned} I_v^{1+{\varepsilon }}&=n^{-(1+{\varepsilon })/2}(n-1)^{-n/2}a_v^n(n-1)^{-{\varepsilon }}\!\!\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\!\! {{\mathcal {E}}}_{1+{\varepsilon },{\varvec{f}}}\\&\qquad \times \,\int \limits _0^1 \!\dots \!\int \limits _0^1\left( 1- \frac{a_v}{(n-1)!}\sum _i \prod _{j\ne i} t_j \right) ^{\!N-n}\prod _{i=1}^nt_i^{n-1+{\varepsilon }-m_i}dt_1 \dots dt_n\\&=n^{-(1+{\varepsilon })/2} (n-1)^{-n/2}a_v^n (n-1)^{-{\varepsilon }}\!\!\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\!\!{{\mathcal {E}}}_{1+{\varepsilon },{\varvec{f}}}{{\mathcal {J}}}({\varvec{m}} - (1+{\varepsilon }){\varvec{1}}), \end{aligned}$$

with \({\varvec{m}}=(m_1, \dots , m_n)\). In the case \({\varepsilon }=1\),

$$\begin{aligned}I_v^2=n^{-2/2} (n-1)^{-n/2}a_v^n (n-1)^{-1}\sum _{{\varvec{f}} \in \{ 1,\dots ,n\}_{\ne }^n }\!\!{{\mathcal {E}}}_{2,{\varvec{f}}}{{\mathcal {J}}}({\varvec{m}} - 2\,{\cdot }\,{\varvec{1}}),\end{aligned}$$

and Lemma 2.3 (with \(L=\sum _i(m_i-2)=-n\), as \(\sum m_i=n\)) implies, with a constant \(c_{{\varvec{m}} , n}>0\) that depends on \({\varvec{m}}\) and n,

$$\begin{aligned} \begin{aligned} I_v^2&=c_n a_v^{-n/(n-1)}\sum _{{\varvec{f}} } {{\mathcal {E}}}_{2, {\varvec{f}}} c_{{\varvec{m}} , n}N ^{-n-n/(n-1)}\bigl (1+ O\bigl (N^{-1/((n-1)(n-2))}\bigr )\bigr )\\&=c_na_v^{-n/(n-1)}N ^{-n-n/(n-1)}\bigl (1+ O\bigl (N^{-1/((n-1)(n-2))}\bigr )\bigr ), \end{aligned} \end{aligned}$$
(3.27)

where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \(a_v\). Because \(c_{{\varvec{m}}, n}>0\), all terms with \({\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n \) contribute. Geometrically this means that the contribution to the volume difference comes from all facets of \(P_N\).

In the case \({\varepsilon }=0\) the asymptotic results from Lemma 2.4 (with \(L=\sum _i(m_i-1)=0\)) give

$$\begin{aligned} I_v^1&=n^{-1/2}(n-1)^{-n/2}a_v^n\!\sum _{{\varvec{f}} \in \{ 1, \dots , n\}_{\ne }^n }\!\! {{\mathcal {E}}}_{1,{\varvec{f}}}{{\mathcal {J}}}({\varvec{m}}-{\varvec{1}})\nonumber \\&= c_n\!\! \sum _{{\varvec{f}}:\sharp \{m_i>0\}=2 }\!\! {{\mathcal {E}}}_{1,{\varvec{f}}}d_{{\varvec{m}},n} N^{-n}(\ln N)^{n-2} (1+O((\ln N)^{-1})) \\&\quad +c_n\!\!\sum _{{\varvec{f}}:\sharp \{m_i>0\}\ge 3}\!\!O(N^{-n} (\ln N)^{n-3}) \nonumber \\&=c_n N^{-n} (\ln N)^{n-2} (1+O((\ln N)^{-1})),\nonumber \end{aligned}$$
(3.28)

where only those terms contribute for which \({\varvec{f}}\) is concentrated on two values, and where the implicit constant in \(O(\,{\cdot }\,)\) may depend on \(a_v\). We can apply Lemma 2.4 since (3.8) holds. Geometrically this implies that the main contribution comes from those facets of \(P_N\) whose vertices lie on precisely two facets of P.

3.6 The Error of the First Kind

Denote by \(\text {diam}(K)\) the diameter of a convex set K. By (3.17) and (3.12), for the error term we have to estimate

$$\begin{aligned} E_v^{1+{\varepsilon }}&\le (2{\overline{a}})^n\!\int \limits _{S_+^{n-1}}\!\int \limits _{\min u_i}^{\text {diam}(A_vP)}\!(1-{\underline{a}}{\lambda }_{n-1}({\partial }A_vP\cap H_-))^{N}h^{{\varepsilon }}\\&\quad \times \idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n} {\lambda }_{n-1} ( [x_1, \dots , x_n])^{1+{\varepsilon }} \prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\,dx_1\ldots d x_ndhdu\\&\le (2{\overline{a}})^n\!\int \limits _{S_+^{n-1}}\!\int \limits _{\min u_i}^{\text {diam}(A_v P)}\!(1-{\underline{a}}{\lambda }_{n-1}({\partial }A_vP\cap H_-))^Nh^{{\varepsilon }}\\&\quad \times {\lambda }_{n-1}(A_v P \cap H)^{1+{\varepsilon }}\idotsint \limits _{({\partial }A_vP\cap H)_{\ne }^n}\,\prod \limits _{j=1}^nJ(T_{x_j},H)^{-1}\, dx_1 \ldots dx_ndhdu \end{aligned}$$

for \({\varepsilon }=0,1\). Recall that the hyperplane \(H=H(h,u)\) meets the coordinate axes in the points \((h/{u_i})e_i\). Hence the halfspace \(H_-\) contains at least one of the unit vectors \(e_i\) since \(h \ge \min u_i\). At the cost of a factor \({n \atopwithdelims ()k}\), which accounts for the choice of coordinates, we may assume w.l.o.g. that \(H_-\) contains \(e_{k+1}, \dots , e_n\), and thus the points of intersection satisfy

$$\begin{aligned} \frac{h}{u_1} , \dots , \frac{h}{u_k} \le 1\qquad \text {and}\qquad \frac{h}{u_{k+1}}, \dots , \frac{h}{u_n} \ge 1 \end{aligned}$$
(3.29)

with some \(0\le k\le n-1\). Then the convex hull of \((h/u_1)e_1, \dots ,(h/{u_k})e_k, e_{k+1}, \dots , e_n\) is contained in \(A_vP\cap H_-\) and we estimate

$$\begin{aligned} \begin{aligned} {\lambda }_{n-1}( {\partial }A_v P \cap H_-)&\ge \frac{1}{(n-1)!} \sum \limits _{j=1}^n \prod \limits _{i \ne j} \min \biggl (1,\frac{h}{u_i}\biggr ) \\&\ge \frac{1}{(n-1)!}\sum \limits _{j=1}^k \prod \limits _{i \le k, i \ne j} \frac{h}{u_i}. \end{aligned} \end{aligned}$$
(3.30)
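Let us record why the second estimate in (3.30) holds: by (3.29) we have \(\min {(1,h/u_i)}=h/u_i\) for \(i\le k\) and \(\min {(1,h/u_i)}=1\) for \(i>k\), so that for every \(j \le k\)

$$\begin{aligned} \prod _{i \ne j}\min \biggl (1,\frac{h}{u_i}\biggr )=\prod _{i \le k,\, i \ne j} \frac{h}{u_i}, \end{aligned}$$

and the second line of (3.30) follows by discarding the nonnegative summands with \(j > k\).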

For \(k=0,1\) we have \({\lambda }_{n-1}( {\partial }A_v P \cap H_-)\ge 1/(n-1)!\), and thus the corresponding contribution to \(E_v^{1+{\varepsilon }}\) is \(O\bigl (e^{-{\underline{a}}N/(n-1)!}\bigr )\); serious estimates are only necessary in the cases \(2 \le k \le n-1\). Next we use that \(A_v P \subset [0, {\tau }]^n\) for all \(A_v\) and for some \({\tau }>0\). Thus

$$\begin{aligned} {\lambda }_{n-1}(A_v P \cap H)\le {\lambda }_{n-1}([0, {\tau }]^n \cap H)\le c_nh^{-1}\prod _{i=1}^k\frac{h}{u_i}{\tau }^{n-k} \end{aligned}$$
(3.31)

because H meets the first k coordinate axes in \(h/u_1, \dots ,h/u_k\). This gives

$$\begin{aligned} E_v^{1+{\varepsilon }}&\le (2{\overline{a}})^nc_n \sum _{k=0}^{n-1} {n \atopwithdelims ()k} {\tau }^{(n-k)(1+{\varepsilon })}\\&\quad \times \!\int \limits _{S_+^{n-1}}\int \limits _{\begin{array}{c} h\le {u_1} , \dots , {u_k} \\ h \ge {u_{k+1}}, \dots , {u_n} \end{array}}\!\!\left( 1- \frac{{\underline{a}}}{(n-1)!}\sum \limits _{j=1}^k \prod \limits _{i \le k, i \ne j} \frac{h}{u_i} \right) ^{\!N}h^{-1}\\&\quad \times \,\prod _1^k \biggl ( \frac{h}{u_i} \biggr )^{\!1+ {\varepsilon }}\!\idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}\,\prod \limits _{j=1}^n J(T_{x_j},H)^{-1}\, d x_1 \ldots d x_n dh du. \end{aligned}$$

Now we deal with the inner integration with respect to \(x_1, \dots , x_n\). We want to replace \({\partial }A_v P \cap H\) by \({\partial }[0,{\tau }]^n \cap H\). The main point here is to estimate \(J(T_x, H)^{-1}\) for \(x \notin {\partial }{\mathbb {R}}_+^n\).

In general we have \(J (T_{x}, H ) \in [0,1]\) by definition. Recall that \(x \in H\). The critical equality \(J (T_{x},H) =0\) can occur only if \(T_x=H\), thus if H is a supporting hyperplane \(H(h_{A_v P}(u),u)\) or \(H(h_{A_v P}(-u),-u)\) to \({A_v P}\). Since \(u \in S_+^{n-1}\), in the second case we have \(h_{A_v P}(-u)=0\) and \(x \in {\partial }{\mathbb {R}}_+^n\).

To exclude the first case we assume that \({\lambda }_{n-1}({\partial }A_v P \cap H_-)\le 1/2\). In this case \(H_-\) cannot contain the point \(n^{-1/(n-1)} (1,\dots ,1)^T\), since otherwise \({\partial }A_v P \cap H_-\) would contain \({\partial }A_v P \cap n^{-1/(n-1)} [0,1]^n\) (recall that \(u \in S_+^{n-1}\)) and this part has surface area 1. Now we claim that there is a constant \(c_{A_v P} >0\) such that

$$\begin{aligned} J(T_x, H) \ge c_{A_v P}\ \ \text { if }\ \ {\lambda }_{n-1}({\partial }A_v P \cap H_-) \le \frac{1}{2}\ \ \text { and }\ \ x \in {\partial }A_v P \setminus {\partial }{\mathbb {R}}_+^n.\nonumber \\ \end{aligned}$$
(3.32)

If no such positive constant existed, then (by the compactness of \({\partial }A_v P\)) there would be a convergent sequence \((x_k,H_k)\rightarrow (x_0,H_0)\) with \( J (T_{x_k},H_k)\rightarrow 0\), where \(x_k \in H_k\) yields \(x_0\in H_0=H(h_0,u_0)\), \(u_0\in S_+^{n-1}\). But in this case also

$$\begin{aligned} J(T_{x_{k}}, H_0) \rightarrow 0 \end{aligned}$$

and \(H_0\) is a supporting hyperplane at \(x_0\). Since \(u_0 \in S_+^{n-1}\) this leads to two cases. The first case is that

$$\begin{aligned} x_0 \in H_0= H(h_{A_v P}(-u_0),-u_0) ,\quad \ \ x_0 \in {\partial }A_v P \cap {\partial }{\mathbb {R}}^n_+, \end{aligned}$$

but \(x_{k}\) is not in \({\partial }A_v P \cap {\partial }{\mathbb {R}}^n_+\) and thus contained in some other facet of \(A_v P\). This implies \(J(T_{x_k},H_0)\nrightarrow 0\) as \(x_k \rightarrow x_0\), which is impossible. The second case is that \(x_0\) is contained in \(H_0=H(h_{A_v P}(u_0),u_0)\), so that \(H_{0-}\) contains \(A_v P\) and in particular the point \(n^{-1/(n-1)}(1, \dots , 1)^T\). But none of the \(H_{k -}\) contains \(n^{-1/(n-1)}(1, \dots , 1)^T\), which again contradicts the convergence \(H_k \rightarrow H_0\). Hence such a sequence \(x_k\) cannot exist, and (3.32) holds with some constant \(c_{A_v P}>0\). Thus from now on we assume that \({\lambda }_{n-1}({\partial }A_vP\cap H_-) \le 1/2\), take into account an error term of order

$$\begin{aligned} \biggl (1- \frac{{\underline{a}}}{2} \biggr )^{\!N}=e^{-cN}, \end{aligned}$$
(3.33)

where \(c=-\ln {(1-{\underline{a}}/2)}>0\), and obtain by (3.32) that

$$\begin{aligned} \begin{aligned}&\int \limits _{({\partial }A_vP\setminus {\partial }{\mathbb {R}}^n_+)\cap H}\!\!\!\!J (T_{x},H)^{-1}\,dx\, \\&\quad \le \,c_{A_v P}^{-1}{\lambda }_{n-2} ({\partial }[0,{\tau }]^n\cap H)\,=\,c_{A_v P}^{-1}\!\!\int \limits _{{\partial }[0,{\tau }]^n \cap H } dx \end{aligned} \end{aligned}$$
(3.34)

because \(A_v P\) is contained in the larger cube \([0,{\tau }]^n\).

In the following we denote by \(F_c\) the union of the facets of \(A_v P\) contained in \({\partial }{\mathbb {R}}_+^n\), and by \(F_{0}\) the union of the remaining facets which cover \({\partial }A_v P \setminus {\partial }{\mathbb {R}}^n_+\), \({\partial }A_v P = F_c \cup F_0 \). Then

$$\begin{aligned}&\idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d x_1 \ldots dx_n\le \idotsint \limits _{(F_c \cap H)_{\ne }^n}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1} \, d x_1 \ldots d x_n\\&\qquad \qquad \qquad +\sum _{k=1}^n {n \atopwithdelims ()k}\idotsint \limits _{(F_0 \cap H)^{k} \times (F_c \cap H)^{n-k}}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\, d x_1 \ldots d x_n. \end{aligned}$$

Because of (3.34) and using \(F_c \subset {\partial }[0,{\tau }]^n \), we obtain the upper bounds

$$\begin{aligned} \idotsint \limits _{(F_0 \cap H)^{k} }\,\prod \limits _{j=1}^k J (T_{x_j},H)^{-1} \, d x_1 \ldots d x_k\le c_{A_v P}^{-n}\idotsint \limits _{({\partial }[0,{\tau }]^n \cap H)^k }d x_1 \ldots d x_k \end{aligned}$$

and

$$\begin{aligned}&\idotsint \limits _{(F_c \cap H)^{n-k}_{\ne }}\,\prod \limits _{j=k+1}^n J (T_{x_j},H)^{-1}\,d x_{k+1} \ldots d x_n\\&\quad \le \idotsint \limits _{({\partial }[0,{\tau }]^n \cap H)^{n-k}_{\ne }}\,\prod \limits _{j=k+1}^n J (T_{x_j},H)^{-1}\, d x_{k+1} \dots d x_n, \end{aligned}$$

where \((F_c \cap H)^{n-k}_{\ne }=(F_c \cap H)^{n-k}\) for \(k \ge 1\). Combining these we get for \(k \ge 1\)

$$\begin{aligned}&\idotsint \limits _{(F_0 \cap H)^{k} \times (F_c \cap H)^{n-k}}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\,d x_1 \ldots d x_n\\&\quad \le c_{A_v P}^{-n}\idotsint \limits _{({\partial }[0,{\tau }]^n \cap H)^n}\,\prod \limits _{j=k+1}^n J (T_{x_j},H)^{-1}\,dx_1\ldots dx_n.\end{aligned}$$

Observe that for the \((n-1)\)-dimensional polytope \([0,{\tau }]^n\cap H\) the area of each \((n-2)\)-dimensional facet is bounded by the sum of the areas of all other facets, since the orthogonal projection of the remaining facets onto the hyperplane of a given facet covers that facet. Hence excluding a facet from the range of integration of the inner integral with respect to \(x_1\) can be compensated by a factor 2,

$$\begin{aligned} \int \limits _{{\partial }[0,{\tau }]^n\cap H}\!\!\!dx_{1}\,\le \,2\!\!\int \limits _{{\partial }[0,{\tau }]^n \cap H}\,\prod \limits _{f=1}^n{\mathbb {1}}\bigl (|\{x_1, \dots ,x_n\}\cap e_f^\perp |\le n-1\bigr )\,dx_1. \end{aligned}$$

Since \(J (T_{x_j},H)\) is always less than or equal to one, this yields

$$\begin{aligned}&\idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}d x_1 \ldots d x_n\\ {}&\quad \le 2 c_{A_v P}^{-n}\sum _{k=0}^n {n \atopwithdelims ()k}\idotsint \limits _{{({\partial }[0,{\tau }]^n \cap H)^n} }\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\\&\qquad \times \prod \limits _{f=1}^n {\mathbb {1}}\bigl (|\{x_1, \dots , x_n\}\cap e_f^\perp | \le n-1\bigr )\, d x_1 \ldots dx_n\\&\quad \le 2^{n+1} c_{A_v P}^{-n}\idotsint \limits _{{({\partial }[0,{\tau }]^n \cap H)^n} }\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1}\\&\qquad \times \prod \limits _{f=1}^n{\mathbb {1}}\bigl (|\{x_1, \dots , x_n\}\cap e_f^\perp | \le n-1\bigr )\, d x_1 \ldots d x_n. \end{aligned}$$

Substituting \(x_i\) by \({\tau }x_i\) we obtain

$$\begin{aligned} \idotsint \limits _{({\partial }A_v P \cap H)_{\ne }^n}\,\prod \limits _{j=1}^n J (T_{x_j},H)^{-1} d x_1 \ldots d x_n\le 2^n {\tau }^{n(n-2)} c_{A_vP}^{-n}{{\mathcal {E}}}_0^1 \biggl (\frac{h}{{\tau }}, u\biggr ) \end{aligned}$$

with \({{\mathcal {E}}}_0^1 (\,{\cdot }\,)\) defined just before Lemma 3.2. We apply Lemma 3.2 to \({{\mathcal {E}}}_0^1\) and take into account the error term (3.33): for \(m_i= \sum _j {\mathbb {1}}(f_j=i)\) we get

$$\begin{aligned} E_v^{1+{\varepsilon }}&\le c_{v,P} \sum _{k=0}^{n-1}\,\int \limits _{S_+^{n-1}}\int \limits _{ \begin{array}{c} h \le {u_1} , \dots , {u_k}\\ h \ge {u_{k+1}}, \dots ,u_n \end{array}}\!\!\!\left( 1- \frac{{\underline{a}}}{(n-1)!} \sum \limits _{j=1}^k \prod \limits _{i \le k, i \ne j} \frac{h}{u_i} \right) ^{\!N}h^{ -(n+1) }\\&\quad \times \prod _{j=1}^k \biggl ( \frac{h}{u_j}\biggr )^{\!n+1+{\varepsilon }}\left( \,\sum _{{\varvec{f}} \in \{1, \dots , n\}^n}\prod _{j =1}^k \biggl ( \frac{u_{j}}{h}\biggr )^{\!m_j}\right) \, dhdu+ O(e^{-cN}) \end{aligned}$$

with \(m_i \le n-1\) for \(i \le k\), \(\sum _1^k m_i \le n\), and where \(c_{v,P}\) depends on \(n\), \(c_{A_v P}\), \(\max c_{n,k}\), and \({\tau }\). Next we use the substitution from Lemma 3.3:

$$\begin{aligned} E_v^{1+{\varepsilon }}&\le c_{n, A_v P} \sum _{k=0}^{n-1}\sum _{{\varvec{f}} \in \{1, \dots , n\}^n}\,\idotsint \limits _{\begin{array}{c} t_1, \dots , t_k \le 1 \\ t_{k+1}, \dots , t_n \ge 1 \end{array}}\left( 1- \frac{{\underline{a}}}{(n-1)!} \sum \limits _{j=1}^k\,\prod \limits _{i \le k, i \ne j} t_i \right) ^{\!N}\\&\quad \times \,\prod _{i=1}^k t_i^{n-1+{\varepsilon }-m_i}\prod _{i=k+1}^n t_i^{-2}dt_1 \ldots dt_n+ O(e^{-cN}). \end{aligned}$$

The integrations with respect to \(t_{k+1}, \dots , t_n\) are immediate since the only terms occurring are \(t_i^{-2}\) and \(\int _1^\infty t_i^{-2}\,dt_i=1\), and we have

$$\begin{aligned} E_v^{1+{\varepsilon }}&\le c_{n, A_v P} \sum _{k=0}^{n-1}\sum _{{\varvec{f}} \in \{1, \dots , n\}^n}\,\int \limits _0^1\!\dots \!\int \limits _0^1\left( 1- \frac{{\underline{a}}}{(n-1)!} \sum \limits _{j=1}^k \prod \limits _{i \le k, i \ne j} t_i \right) ^{\!N}\\&\quad \times \,\prod _{i=1}^k t_i^{k-2 - (m_i- (n-k+1+{\varepsilon }))}dt_1 \ldots dt_k+ O(e^{-cN}), \end{aligned}$$

with \(0 \le m_i \le n-1\). We set \(l_i=m_i- (n-k+1+{\varepsilon })\). To apply Lemma 2.4 in the case \({\varepsilon }=0\) we have to check that there are \(i \ne j\) with \(l_i, l_j >L/(k-1)-1\). Set \(M=\sum _1^k m_i \le n\). We have

$$\begin{aligned} l_i -\frac{L}{k-1} +1= & {} m_i -(n-k+1) - \frac{1}{k-1}\sum _{j=1}^k\,(m_j- (n-k+1)) + 1\\= & {} m_i + \frac{n-M}{k-1} \ge 0 \end{aligned}$$

and equality holds only if \(M=n\) and \(m_i=0\). But \(M=n\) and \(m_i \le n-1\) imply that there are at least two different indices \(i,j\) with \(m_i, m_j>0\). Hence we may apply Lemma 2.4 (and if \(m_i \ge 1\) for all i even Lemma 2.3), which tells us that the integral is bounded by

$$\begin{aligned} O\bigl (N^{-k+L/(k-1)}(\ln N)^{k-2}\bigr )=O\bigl (N^{(M-nk)/(k-1)}(\ln N)^{n-3}\bigr )=O\bigl (N^{-n} (\ln N)^{n-3}\bigr ). \end{aligned}$$

Since \(M \le n\) and \(k \le n-1\), we have \((M-nk)/(k-1)\le (n-nk)/(k-1)=-n\) as well as \((\ln N)^{k-2}\le (\ln N)^{n-3}\), which explains the last two steps. This finally proves

$$\begin{aligned} E_v^1 =O\bigl (N^{-n} (\ln N)^{n-3}\bigr ). \end{aligned}$$
(3.35)

In the case \({\varepsilon }=1\) we have \(l_i=m_i- (n-k+2)\), and with \(M=\sum _1^k m_i \le n\) this gives

$$\begin{aligned} l_i -\frac{L}{k-1} +1= & {} m_i -(n-k+2) - \frac{1}{k-1}\sum _{j=1}^k\,(m_j- (n-k+2)) + 1\\= & {} m_i + \frac{n+1-M}{k-1} > 0 \end{aligned}$$

since \(M \le n\). Thus the integral is of order

$$\begin{aligned} O\bigl (N^{-k+L/(k-1)}\bigr )=O\bigl (N^{(M-k(n+1))/(k-1)}\bigr )=O\bigl (N^{-n -(n-1)/(n-2)}\bigr ) \end{aligned}$$

where the last step uses that the exponent \((M-k\,(n+1))/(k-1)\) is increasing in \(k\) (as \(M \le n\)) and therefore maximal for \(k=n-1\) and \(M=n\). Hence

$$\begin{aligned} E_v^2=O\bigl (N^{-n -(n-1)/(n-2)}\bigr ). \end{aligned}$$
(3.36)

3.7 The Error of the Second Kind

Here we have to evaluate the following estimate for \({\mathbb {E}}V_n(D_N)\) from (3.18):

$$\begin{aligned} f_1(P)\, {\overline{d}}\int \limits _0^{\tau }\!\dots \!\int \limits _0^{\tau }\left( 1- {\underline{a}}{\lambda }_{n-1} ( {\partial }{\mathbb {R}}_+^n \cap [0, \min (1,x_1)e_1, \dots , \min (1,x_{n-1})e_{n-1}, e_n ])\right) ^{\!N} dx_1 \ldots dx_n. \end{aligned}$$

The integration with respect to \(x_n\) is immediate since the integrand does not depend on \(x_n\); it yields a factor \({\tau }\). We may assume without loss of generality that, for some \(k = 0, \dots , n-1\), precisely the first k coordinates of x are bounded by 1,

$$\begin{aligned} x_1, \dots , x_k \le 1,\qquad x_{k+1} , \dots , x_{n-1} \ge 1 . \end{aligned}$$

For \(k=0,1\) we have

$$\begin{aligned} \int \limits _0^{\tau }\!\dots \!\int \limits _0^{\tau }\biggl ( 1- \frac{{\underline{a}}}{(n-1)!}\biggr )^{\!N} dx_1 \ldots dx_{n-1}= O\bigl (e^{-{\underline{a}N}/(n-1)!}\bigr ). \end{aligned}$$

So we assume \(2 \le k \le n-1\). Then the \((n-1)\)-dimensional volume of the part of the boundary of the simplex contained in \({\partial }{\mathbb {R}}_+^n\) is given by

$$\begin{aligned}&{\lambda }_{n-1}\bigl ( {\partial }{\mathbb {R}}_+^n \cap [0, \min (1,x_1)e_1, \dots , \min (1,x_{n-1})e_{n-1}, e_n ]\bigr )\\&\qquad \qquad =\sum _{j=1}^n\frac{1}{(n-1)!}\prod _{i\le n,i\ne j}\!\min (1,x_i)\ge \sum _{j=1}^k \frac{1}{(n-1)!} \prod _{i \le k, i \ne j}x_i. \end{aligned}$$

Therefore we obtain

$$\begin{aligned}&\int \limits _0^{\tau }\!\dots \!\int \limits _0^{\tau }\left( 1- \frac{{\underline{a}} }{(n-1)!} \sum _{j=1}^k \prod _{i \le k, i \ne j} x_i \right) ^{\!N} dx_1 \dots dx_{n-1}\\&\qquad \qquad \le {\tau }^n \int \limits _0^1\! \dots \! \int \limits _0^1\left( 1- \frac{{\underline{a}}{\tau }^{k-1}}{(n-1)!} \sum _{j=1}^k \prod _{i \le k, i \ne j} t_i \right) ^{\!N-k} dt_1 \ldots dt_{n-1}\\&\qquad \qquad ={\tau }^n\left( \frac{{\underline{a}}{\tau }^{k-1}}{(n-1)!}\right) ^{\!-k/(k-1)}\!(k-1)^{-1}\Gamma \biggl (\frac{1}{k-1}\biggr )^{\!k}N ^{-k/(k-1)}(1+o(1)), \end{aligned}$$

where we used Lemma 2.3 with \(l_i=k-2\), \(L= k(k-2)\), which implies \(l_i>L/(k-1)-1= k-2 -1/(k-1)\). As \(k/(k-1)\) is decreasing in \(k\) and \(k \le n-1\), we obtain

$$\begin{aligned}{\mathbb {E}}V_n(D_N)= O\bigl (N^{-(n-1)/(n-2)}\bigr ).\end{aligned}$$
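As a side remark, the smallest relevant case \(k=2\) of the Lemma 2.3 asymptotics used above can be verified completely by hand and by machine: writing b as a stand-in for the constant \({\underline{a}}{\tau }^{k-1}/(n-1)!\) of the last display, for \(k=2\) the predicted value \((k-1)^{-1}\Gamma (1/(k-1))^{k}\,b^{-k/(k-1)}N^{-k/(k-1)}\) reduces to \((bN)^{-2}\), and the integral admits a closed form. A minimal sketch:

\begin{verbatim}
# k = 2 sanity check: I(N) = int_0^1 int_0^1 (1 - b*(t1 + t2))^N dt1 dt2.
# Integrating first in t1, then in t2, gives the closed form
#   I(N) = (1 - 2*(1-b)^(N+2) + (1-2b)^(N+2)) / (b^2 * (N+1) * (N+2)),
# which should behave like (b*N)^(-2) as N grows.
b = 0.3   # stand-in for a*tau^(k-1)/(n-1)!; any b in (0, 1/2] works

def I(N):
    return (1 - 2 * (1 - b) ** (N + 2) + (1 - 2 * b) ** (N + 2)) \
        / (b ** 2 * (N + 1) * (N + 2))

for N in (10 ** 2, 10 ** 4, 10 ** 6):
    print(N, I(N) * (b * N) ** 2)   # ratio tends to 1 as N -> infinity
\end{verbatim}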