1 Introduction and main results

1.1 Background

The theory of random convex hulls has a long history, going back to Sylvester’s famous four-point problem [62]. Since the seminal papers of Rényi and Sulanke [54, 55], it has become a mainstream research topic in convex, stochastic and integral geometry, with connections to asymptotic geometric analysis, optimization and multivariate statistics, to name just a few areas.

In this article we focus on the convex hull of independent and identically distributed points taken from the boundary of a fixed convex body K. This model of a so-called random inscribed polytope in K was investigated in [13, 16, 51, 52, 56, 57, 60, 64], mainly from an asymptotic point of view (as the number of points tends to infinity). In particular, in [63] a central limit theorem is proven for the volume of the random inscribed polytope inside a sufficiently smooth convex body in Euclidean space. Our goal is to generalize this result to the setting of non-Euclidean geometries. Of particular interest are the cases of random inscribed polytopes inside convex bodies in spherical or hyperbolic geometry. This continues a recent trend in stochastic geometry of generalizing known results to the non-Euclidean setting, and in particular to spherical and hyperbolic geometry, see e.g. [5, 6, 24, 28, 29, 32,33,34,35,36,37,38].

More generally, we work with projective Finsler geometries, i.e., ones for which geodesics are affine line segments. These are the Finsler solutions of Hilbert’s fourth problem, and have been studied intensively, see e.g. [3, 20, 47, 50]. Since on a Finsler manifold there is no canonical volume measure, we establish our results for a general definition of volume, which is an assignment of Finsler volume measure obeying some natural axioms [4].

Following the ideas of [8], we reformulate the problem in terms of weighted random inscribed polytopes in Euclidean space. This approach is more general and also paves the way to some new directions. For example, it allows us to prove central limit theorems for dual volumes, which are central to Lutwak’s dual Brunn–Minkowski theory [40, 42], as well as for the mean width of random polyhedral sets circumscribing a convex body. Finally, let us also mention that the analogous result for the random model where points are distributed inside the convex body was proven for the Euclidean case in [53], and was recently generalized to the non-Euclidean setting in [9].

1.2 Random inscribed polytopes in projective Riemannian geometries

We begin with the setting of Riemannian geometry. In this case there is a canonical notion of volume: the Riemannian volume measure. It may be defined as the d-dimensional Hausdorff measure of the metric space associated with the d-dimensional Riemannian manifold \((\Omega ,g)\), or equivalently, as the integral of the Riemannian volume density, which in local coordinates reads

$$\begin{aligned} \sqrt{\det (g_{ij}(x))} \,|dx_1 \wedge \cdots \wedge dx_d|, \end{aligned}$$
(1.1)

where \(g_{ij}(x)\) are the metric coefficients in the given coordinates (see, e.g., [17, §5.5.1]).

A Riemannian metric on a domain \(\Omega \subset \mathbb {R}^d\) is called projective if affine line segments are geodesics. We consider a projective \(C^2\)-Riemannian metric g on a convex domain \(\Omega \subset \mathbb {R}^d\). The regularity assumption ensures uniqueness of geodesics, so in particular affine line segments are the only geodesics of g.

Let \(K \subset \Omega \) be a convex body of class \(C^2_+\), that is, the boundary \({{\,\mathrm{bd}\,}}K\) of K is a \(C^2\)-smooth hypersurface with everywhere strictly positive Gauss–Kronecker curvature. Denote by \(\Phi _g\) the Riemannian volume measure on K, and by \(\sigma _g\) the normalized Riemannian surface measure on \({{\,\mathrm{bd}\,}}K\). Let \(X_1,X_2,\ldots \) be a sequence of independent random points on \({{\,\mathrm{bd}\,}}K\) distributed according to \(\sigma _g\) and for \(n\ge d+1\) define their convex hull \(K_g(n):=[X_1, \ldots , X_n]\), which is what we call a random Riemannian inscribed polytope.

Theorem 1.1

Under the above assumptions, the Riemannian volume \(\Phi _g(K_g(n))\) satisfies a central limit theorem, that is,

$$\begin{aligned} \frac{\Phi _g(K_g(n)) - \mathbb {E}\Phi _g(K_g(n)) }{\sqrt{{{\,\mathrm{Var}\,}}\Phi _g(K_g(n)) }} \overset{d}{\longrightarrow } Z \qquad \text {as}~{n\rightarrow \infty }, \end{aligned}$$

where Z is a standard Gaussian random variable. Here \(\overset{d}{\longrightarrow }\) denotes convergence in distribution.

Fig. 1

Illustration of the hyperbolic inscribed random polytope \(K_h(n)\) generated in a hyperbolic convex body K in the hyperboloid model of the hyperbolic plane

Example 1.2

(Hyperbolic geometry) The d-dimensional hyperbolic space is realized as a projective Riemannian space in the Beltrami–Klein model. This is the unit ball \(\Omega = \{x \in \mathbb {R}^d \,:\, \Vert x\Vert <1 \}\) equipped with the Riemannian metric with length element

$$\begin{aligned}ds^2 = \frac{(1-\Vert x\Vert ^2 ) \Vert dx\Vert ^2 + \langle x, dx \rangle ^2 }{(1-\Vert x\Vert ^2 )^2},\end{aligned}$$

where we denote by \(\langle \,\cdot \,, \,\cdot \, \rangle \) and \(\Vert \,\cdot \,\Vert \) the Euclidean inner product and norm, respectively, on \(\mathbb {R}^d\). This defines a (complete) projective Riemannian metric of constant sectional curvature \(-1\) (see e.g. [1, 21] for more details, and relations with other models of hyperbolic space). Then Theorem 1.1 implies that the hyperbolic volume of the random polytope generated by independent points on the boundary of a hyperbolic convex body obeys a central limit theorem. Figure 1 illustrates a random inscribed polytope in the hyperboloid model of the hyperbolic plane.
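As a purely illustrative numerical check (ours, not part of the argument), projectivity can be made concrete: integrating the length element above along an affine segment reproduces the closed-form hyperbolic distance of the Klein model, \(\cosh \rho (x,y) = (1-\langle x,y\rangle )/\sqrt{(1-\Vert x\Vert ^2)(1-\Vert y\Vert ^2)}\). A minimal Python sketch (all function names are ours):

```python
import math

def klein_speed(p, v):
    """Length element of the Beltrami-Klein metric at p in direction v:
    ds^2 = ((1-|p|^2)|v|^2 + <p,v>^2) / (1-|p|^2)^2."""
    p2 = sum(c * c for c in p)
    v2 = sum(c * c for c in v)
    pv = sum(a * b for a, b in zip(p, v))
    return math.sqrt((1 - p2) * v2 + pv * pv) / (1 - p2)

def klein_length(x, y, steps=20000):
    """Numerically integrate ds along the affine segment [x, y] (midpoint rule)."""
    v = [b - a for a, b in zip(x, y)]
    h = 1.0 / steps
    return sum(
        klein_speed([a + (k + 0.5) * h * w for a, w in zip(x, v)], v) * h
        for k in range(steps)
    )

def klein_distance(x, y):
    """Closed-form hyperbolic distance in the Klein model."""
    xy = sum(a * b for a, b in zip(x, y))
    x2 = sum(a * a for a in x)
    y2 = sum(b * b for b in y)
    return math.acosh((1 - xy) / math.sqrt((1 - x2) * (1 - y2)))

x, y = (0.3, 0.0), (0.0, 0.5)
# The straight segment is a geodesic, so both computations must agree.
assert abs(klein_length(x, y) - klein_distance(x, y)) < 1e-6
```

Refining `steps` shrinks the midpoint-rule error quadratically.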

Example 1.3

(Spherical geometry) The spherical geometry in a hemisphere may also be realized in a projective model. The gnomonic projection maps the upper hemisphere \(S^d_+ := \{x \in S^{d} : x_{d+1} > 0\} \subset \mathbb {R}^{d+1}\) onto its tangent hyperplane at the north pole, \(H:=\{x_{d+1}=1\}\), by projecting along rays emanating from the origin (see Fig. 2). To be more precise, the point \(x=(x_1,\dotsc , x_{d+1}) \in S^d_+\) is mapped to the point \(p(x)=(\frac{x_1}{x_{d+1}}, \dotsc , \frac{x_d}{x_{d+1}}) \in \mathbb {R}^d\), where we have identified H with \(\mathbb {R}^d\) by means of an isometry that maps the north pole of \(S^d\) to the origin of \(\mathbb {R}^d\). The standard Riemannian metric on the hemisphere \(S^d_+\) is identified with the Riemannian metric on \(\mathbb {R}^d\) with length element

$$\begin{aligned} ds^2 = \frac{(1+\Vert x\Vert ^2 ) \Vert dx\Vert ^2 - \langle x, dx \rangle ^2 }{(1+\Vert x\Vert ^2 )^2}. \end{aligned}$$
Fig. 2

The gnomonic (central) projection from the upper hemisphere

This defines a projective Riemannian metric on \(\Omega = \mathbb {R}^d\) with constant sectional curvature \(+1\). Theorem 1.1 then implies that the spherical volume of the random polytope generated by independent points on the boundary of a spherical convex body obeys a central limit theorem. Figure 3 illustrates a random inscribed polytope on the upper hemisphere \(S^2_+\).
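Projectivity of the gnomonic model can likewise be checked numerically (an illustration of ours, with hypothetical helper names): the great-circle midpoint of two points on \(S^2_+\) must project onto the affine segment joining their projections.

```python
import math

def normalize(x):
    n = math.sqrt(sum(c * c for c in x))
    return tuple(c / n for c in x)

def gnomonic(x):
    """Central projection of a point on the upper hemisphere to its tangent
    hyperplane at the north pole, identified with R^d."""
    return tuple(c / x[-1] for c in x[:-1])

# Two points on the upper hemisphere S^2_+ in R^3.
x = normalize((0.3, 0.2, 0.9))
y = normalize((-0.4, 0.5, 0.8))
m = normalize(tuple(a + b for a, b in zip(x, y)))  # great-circle midpoint

px, py, pm = gnomonic(x), gnomonic(y), gnomonic(m)
# pm must lie on the line through px and py: the cross product vanishes.
cross = (py[0] - px[0]) * (pm[1] - px[1]) - (py[1] - px[1]) * (pm[0] - px[0])
assert abs(cross) < 1e-9
```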

Fig. 3

Illustration of the spherical inscribed random polytope \(K_s(n)\) generated in a spherical convex body K contained in the open hemisphere \(S^2_+\)

Remark 1.4

The classical Beltrami theorem states that any projective Riemannian metric on a convex domain \(\Omega \subset \mathbb {R}^d\) is of constant sectional curvature, and hence locally isometric to (a rescaling of) either the Euclidean, hyperbolic, or spherical space (see, e.g., [10, 19, 44, 58]). Moreover, if the metric is complete and the underlying space simply connected, it is globally isometric to one of these spaces.

1.3 Random inscribed polytopes in projective Finsler geometries

We turn now to the more general case of projective Finsler metrics. For this we let \(\Omega \subset \mathbb {R}^d\) be a convex body, equipped with a Finsler metric F, i.e., a continuous function \(F:T\Omega \rightarrow \mathbb {R}\) on the tangent bundle \(T\Omega \) of \(\Omega \) such that for all \(x \in \Omega \), \(F(x,\, \cdot \,) :T_x\Omega \rightarrow \mathbb {R}\) is a norm, where we write \(T_x\Omega \) for the tangent space of \(\Omega \) at x. We assume that F is \(C^3\)-smooth away from the zero section of \(T\Omega \) and is strongly convex, that is, the vertical Hessian \(\frac{\partial ^2 F^2}{\partial v_i \partial v_j}(x,v)\) is non-degenerate at every \(v \ne 0\). We assume moreover that F is projective, that is, straight line segments are geodesics of F. Again, the regularity assumption on F implies that these are the only geodesics.

We note that in Finsler geometry, unlike Riemannian geometry, there is no canonical choice of volume measure. However, any 'reasonable' notion of Finsler volume is completely determined by its values on normed spaces (see e.g. [17, §5.5.3]). A Lebesgue measure on a d-dimensional normed space X can be described in terms of a (positive) density, that is, a norm on the (1-dimensional) top exterior power \(\bigwedge ^dX\). This leads to the following axiomatic definition due to Álvarez Paiva and Thompson [4].

Definition 1.5

A definition of volume on d-dimensional normed spaces is an assignment to each d-dimensional normed space X of a norm \(\mu _X\) on \(\bigwedge ^d X\) such that the following conditions are satisfied:

  1. 1.

    If \(T:X \rightarrow Y\) is a short map (i.e., a linear map of norm \(\le 1\)), the induced map \(\bigwedge ^d T : \bigwedge ^dX \rightarrow \bigwedge ^d Y\) is short as well.

  2. 2.

    The assignment \(X \mapsto (\bigwedge ^d X, \mu _X)\) is continuous in the Banach–Mazur topology.

  3. 3.

    If X is a Euclidean space, \(\mu _X\) is the standard Euclidean volume measure.

Given a definition of volume on d-dimensional normed spaces, one can define a volume on a general d-dimensional Finsler manifold by the following procedure. If \((M, F)\) is a Finsler manifold, that is, a differentiable manifold M together with a Finsler metric F on TM, then for each \(x \in M\) we obtain a norm \(\mu _{T_xM}\) on \(\bigwedge ^dT_xM\). This norm varies continuously with x, and hence defines a continuous volume density on M. Volume densities can be integrated (see e.g. [11, 46]), yielding a volume measure on M.

The following examples are taken from [4].

Example 1.6

(The Busemann definition [18]) The Busemann definition of volume of a d-dimensional normed space X is such that the volume of the unit ball B of X is \(\mathrm{Vol}_{\mathrm{Bus}}(B) = \kappa _d\), where \(\kappa _d\) is the volume of the d-dimensional Euclidean unit ball. The corresponding density on X is given by

$$\begin{aligned} \mu _{\mathrm{Bus}}(v_1 \wedge \cdots \wedge v_d)= \frac{\kappa _d}{\mathrm{Vol}(B;v_1, \ldots , v_d)}, \end{aligned}$$

where \(\mathrm{Vol}(B;v_1, \ldots , v_d)\) denotes the volume of B with respect to the Lebesgue measure determined by the basis \(v_1, \ldots , v_d\). The resulting volume measure on a d-dimensional continuous Finsler manifold is known to coincide with its d-dimensional Hausdorff measure (see [18, §6]).
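For a concrete (if crude) illustration of ours, consider \(X = \ell _1^3\): the unit ball has Lebesgue volume \(2^3/3! = 4/3\), so \(\mu _{\mathrm{Bus}}(e_1 \wedge e_2 \wedge e_3) = \kappa _3/(4/3) = \pi \). The following Monte Carlo sketch (not from the text) estimates this value:

```python
import math
import random

def busemann_density_l1(d=3, samples=200000, seed=0):
    """Monte Carlo estimate of the Busemann density mu_Bus(e_1 ^ ... ^ e_d)
    = kappa_d / Vol(B) for the l^1 unit ball B in R^d (exact Vol(B) = 2^d/d!)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(samples)
        if sum(abs(rng.uniform(-1, 1)) for _ in range(d)) <= 1
    )
    vol_B = (2 ** d) * hits / samples      # sampling cube [-1,1]^d has volume 2^d
    kappa_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return kappa_d / vol_B

# Exact value for d = 3: kappa_3 / (2^3/3!) = (4*pi/3) / (4/3) = pi.
assert abs(busemann_density_l1() - math.pi) < 0.1
```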

Example 1.7

(The Holmes–Thompson definition [30]) The Holmes–Thompson volume definition uses the canonical symplectic structure on \(X \times X^*\) (see e.g. [4, 45]), and the associated symplectic volume. The Holmes–Thompson volume of the unit ball B of X is equal to the symplectic volume of \(B \times B^* \subset X \times X^*\) divided by \(\kappa _d\). The corresponding density is given by

$$\begin{aligned} \mu _{\mathrm{HT}}(v_1 \wedge \cdots \wedge v_d) = \frac{ \mathrm{Vol}(B^*; \xi _1, \ldots , \xi _d) }{\kappa _d}, \end{aligned}$$

where \(\xi _1, \ldots , \xi _d\) is the basis of \(X^*\) dual to \(v_1, \ldots ,v_d\). It is known that the resulting Holmes–Thompson volume of a d-dimensional Finsler manifold is equal to the symplectic volume of the unit co-disc bundle \(B^*(M) \subset T^*M\) with respect to the canonical symplectic structure on \(T^*M\), divided by \(\kappa _d\) (see e.g. [4, 45]).

Example 1.8

(The Gromov mass and \(\hbox {mass}^*\) definitions [27]) The Gromov mass definition is such that the maximal cross-polytope inscribed in the unit ball of X has volume \(2^d/d!\). The corresponding density is given by

$$\begin{aligned} \mu _{\mathrm{mass}}(a) = \inf \Vert v_1\Vert \cdots \Vert v_d\Vert , \end{aligned}$$

where the infimum extends over all \(v_1, \ldots , v_d\) such that \(a= v_1 \wedge \cdots \wedge v_d\).

The dual notion is the Gromov \(\hbox {mass}^*\) definition, for which the minimal parallelotope circumscribed about the unit ball of X has volume \(2^d\). The corresponding density is given by

$$\begin{aligned} \mu _{\mathrm{{mass}}^* }(v_1 \wedge \cdots \wedge v_d) = \left[ \mu _{\mathrm{{mass}}} (\xi _1 \wedge \cdots \wedge \xi _d) \right] ^{-1}, \end{aligned}$$

where \(\xi _1,\ldots , \xi _d\) is the dual basis to \(v_1, \ldots , v_d\), and the mass definition on the right hand side is applied to the dual space of X.

We now return to our setting of a projective Finsler metric on a convex domain \(\Omega \). We fix a definition of volume on d-dimensional normed spaces, which defines a volume measure on \(\Omega \), as we explained above. We denote this volume measure by \(\Phi \). Now, given a convex body \(K \subset \Omega \) of class \(C^2_+\), fixing another definition of volume on \((d-1)\)-dimensional normed spaces defines a surface measure on \({{\,\mathrm{bd}\,}}K\), and we denote the resulting normalized probability measure on \({{\,\mathrm{bd}\,}}K\) by \(\sigma \). Let \(X_1,X_2, \ldots \) be a sequence of independent random points on \({{\,\mathrm{bd}\,}}K\) distributed according to \(\sigma \). For \(n\ge d+1\) the convex hull \(K_F(n):=[X_1, \ldots , X_n]\) is called the random inscribed Finsler polytope.

Theorem 1.9

Let \(\Omega \subset \mathbb {R}^d\) be an open and convex domain, and F be a projective Finsler metric on \(\Omega \) that is strongly convex and \(C^3\)-smooth away from the zero section of \(T\Omega \). Then the Finsler volume \(\Phi (K_F(n))\) of the random Finsler polytope \(K_F(n)\) satisfies a central limit theorem, that is,

$$\begin{aligned} \frac{\Phi (K_F(n)) - \mathbb {E}\Phi (K_F(n)) }{\sqrt{{{\,\mathrm{Var}\,}}\Phi (K_F(n))}} \overset{d}{\longrightarrow }Z \qquad \text {as}~{ n\rightarrow \infty }, \end{aligned}$$

where Z is a standard Gaussian random variable.

Remark 1.10

Theorem 1.1 is almost a special case of Theorem 1.9, except that it allows for slightly weaker regularity of the metric.

Example 1.11

(Hilbert geometry) The best known example of a projective Finsler metric is the Hilbert metric inside an open and convex domain \(\Omega \subset \mathbb {R}^d\). The Hilbert–Finsler norm is defined by

$$\begin{aligned} H_\Omega (x, v) = \frac{1}{2}\left[ \frac{1}{t_+(x,v)} + \frac{1}{t_-(x,v)}\right] , \end{aligned}$$

where \(t_\pm (x,v)\) are defined by (see Fig. 4)

$$\begin{aligned} t_\pm (x,v) = \sup \{t > 0 \,:\, x \pm t v \in \Omega \}. \end{aligned}$$
(1.2)
Fig. 4

a The Hilbert distance in a convex domain \(\Omega \). b The Finsler norm of a Hilbert (or Funk) geometry in \(\Omega \)

The induced distance function on \(\Omega \) is given by

$$\begin{aligned} \rho _{\Omega }(x,y) = \frac{1}{2} \log \left( \frac{\Vert a-y\Vert }{\Vert a-x\Vert } \, \frac{\Vert b-x\Vert }{\Vert b-y\Vert }\right) , \end{aligned}$$

where a and b are the intersection points of the line passing through x and y with \({{\,\mathrm{bd}\,}}\Omega \), arranged so that a, x, y, b lie in this order on the line (see Fig. 4). We refer the reader to [48] for more details on Hilbert geometries.
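For \(\Omega \) the open unit disk, the Hilbert metric coincides with the Beltrami–Klein model of the hyperbolic plane, so \(\rho _\Omega (0,y) = {{\,\mathrm{artanh}\,}}\Vert y\Vert \). A small Python sketch of the cross-ratio formula (ours; helper names are hypothetical) makes this concrete:

```python
import math

def hilbert_distance_disk(x, y):
    """Hilbert distance in the unit disk: a and b are the hits of the line
    through x and y with the circle, ordered so that a, x, y, b are collinear
    in this order; the distance is half the log of the cross-ratio."""
    vx, vy = y[0] - x[0], y[1] - x[1]
    # Solve |x + t v| = 1: the root t_- < 0 gives a, the root t_+ > 1 gives b.
    A = vx * vx + vy * vy
    B = 2 * (x[0] * vx + x[1] * vy)
    C = x[0] ** 2 + x[1] ** 2 - 1
    disc = math.sqrt(B * B - 4 * A * C)
    t_minus, t_plus = (-B - disc) / (2 * A), (-B + disc) / (2 * A)
    a = (x[0] + t_minus * vx, x[1] + t_minus * vy)
    b = (x[0] + t_plus * vx, x[1] + t_plus * vy)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return 0.5 * math.log((dist(a, y) * dist(b, x)) / (dist(a, x) * dist(b, y)))

# Agreement with the Klein-model distance from the origin.
assert abs(hilbert_distance_disk((0.0, 0.0), (0.6, 0.0)) - math.atanh(0.6)) < 1e-9
```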

From Theorem 1.9 we deduce that, for any two fixed definitions of volume on d- and \((d-1)\)-dimensional normed spaces, the Finsler volume of the convex hull of independent random points on the boundary of a convex body in a Hilbert geometry obeys a central limit theorem.

Let us remark that in order to apply Theorem 1.9, we need to assume that the Hilbert–Finsler norm \(H_\Omega \) is \(C^3\)-smooth and strongly convex (which is the case if \(\Omega \) is a bounded convex domain of class \(C^3\) whose boundary has everywhere positive Gauss–Kronecker curvature). However, in the case of Hilbert geometries we may in fact relax these regularity assumptions, which were made in order to ensure the uniqueness of geodesics. For Hilbert geometries, uniqueness of geodesics is known to hold whenever \(\Omega \) is a strictly convex domain (see [49, Corollary 12.7]). Thus, if \(\Omega \) is a strictly convex domain of class \(C^1\), the asymptotic normality of random inscribed polytopes still holds.

Example 1.12

(Funk geometry) The preceding discussion of definitions of volume applies to Finsler norms that are reversible, i.e., satisfy \(F(x,-v)=F(x,v)\) for all \(x \in \Omega \) and \(v \in T_x \Omega \). However, for non-reversible Finsler norms one may still define the Busemann and Holmes–Thompson volume densities. A famous example of a projective non-reversible Finsler norm is the Funk geometry in a convex domain \(\Omega \subset \mathbb {R}^d\). This is the Finsler norm

$$\begin{aligned} F_\Omega (x,v) = \frac{1}{t_+(x,v)}, \end{aligned}$$

where \(t_+(x,v)\) is defined as in (1.2), see also Fig. 4. We refer again to [48] for more details on Funk geometries. If \(\Omega \) is \(C^1\)-smooth and strictly convex, then the Funk metric is uniquely geodesic (see [49, Corollary 7.8]). Fixing either the Busemann or the Holmes–Thompson volume definition for the d- and \((d-1)\)-dimensional volume measurements, we obtain that the Finsler volume of the convex hull of independent random points on the boundary of a convex body in a Funk geometry obeys a central limit theorem.
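Since \(t_-(x,v) = t_+(x,-v)\), the Hilbert norm is exactly the symmetrization of the Funk norm, \(H_\Omega (x,v) = \frac{1}{2}\bigl (F_\Omega (x,v) + F_\Omega (x,-v)\bigr )\). A quick numerical illustration of ours for the unit disk (function names hypothetical):

```python
import math

def t_plus_disk(x, v):
    """t_+(x, v) = sup{t > 0 : x + t v in Omega} for Omega the open unit disk."""
    A = v[0] ** 2 + v[1] ** 2
    B = 2 * (x[0] * v[0] + x[1] * v[1])
    C = x[0] ** 2 + x[1] ** 2 - 1
    return (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)

def funk_norm(x, v):
    """Funk norm F_Omega(x, v) = 1 / t_+(x, v)."""
    return 1.0 / t_plus_disk(x, v)

def hilbert_norm(x, v):
    """Hilbert norm H_Omega(x, v) = (F(x, v) + F(x, -v)) / 2."""
    return 0.5 * (funk_norm(x, v) + funk_norm(x, (-v[0], -v[1])))

# At the origin of the disk the Hilbert norm is the Euclidean norm.
assert abs(hilbert_norm((0.0, 0.0), (0.3, 0.4)) - 0.5) < 1e-12
# Away from the origin the Funk norm is genuinely non-reversible.
assert funk_norm((0.2, -0.3), (1.0, 0.5)) > funk_norm((0.2, -0.3), (-1.0, -0.5))
```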

1.4 Dual Brunn–Minkowski theory

The dual Brunn–Minkowski theory, introduced by Lutwak [40, 42], is a variant of classical Brunn–Minkowski theory, which has become a central piece of modern convex geometry, see e.g. [2, 7, 14, 25, 26, 31, 43]. Its starting point is the replacement of Minkowski sum by the so-called radial sum of convex bodies, or more generally, star bodies. Dual mixed volumes and related concepts are then derived analogously to classical mixed volumes. While not dual to the classical theory in a precise sense, many results and constructions of the dual theory mirror those of the classical one (see e.g. [59] for details about dual Brunn–Minkowski theory as well as the references cited therein). Here we focus on the dual volumes, which may be derived from dual mixed volumes, or defined directly by dualizing the Kubota formula. Namely, the j-th dual volume of a star body \(A\subset \mathbb {R}^d\) is, up to a constant, the average volume of the intersection of A with a j-dimensional linear subspace, chosen according to the Haar probability measure on the Grassmannian of j-dimensional linear subspaces of \(\mathbb {R}^d\). In [41] Lutwak proved the following formula for the j-th dual volume of a convex body \(A \subset \mathbb {R}^d\) containing the origin in terms of its radial function \(\rho _A:S^{d-1}\rightarrow (0,+\infty )\), defined by \(\rho _A(u) = \max \{r>0: ru\in A\}\):

$$\begin{aligned} \widetilde{V} _j (A) = \kappa _d \int _{S^{d-1}} \rho _A(u)^j du, \end{aligned}$$
(1.3)

where we recall that \(\kappa _d\) is the volume of the d-dimensional Euclidean unit ball. Moreover, du denotes the infinitesimal element of the normalized surface measure on the unit sphere \(S^{d-1}\). Using (1.3), Lutwak extended the definition of the dual volumes \({ \widetilde{V} }_j\) to any \(j \in \mathbb {R}\), and it is this extension which we investigate here. Our next result is a central limit theorem for dual volumes of random inscribed polytopes. However, we note that with positive probability, the random inscribed polytope does not contain the origin. To remedy this, we consider the convex hull of the random polytope with a fixed convex set T containing the origin, which is strictly contained in K, where the specific choice of T is irrelevant for our result. Let us emphasize that, for large n, the random inscribed polytope contains T with overwhelming probability, in which case this convex hull is simply the polytope itself.
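As a sanity check on (1.3) (ours, not from the text): for \(j = d\) the dual volume recovers the ordinary volume. In the plane, a quadrature over the normalized measure on \(S^1\) applied to the radial function of an ellipse with semi-axes a, b should therefore return \(\pi a b\):

```python
import math

def dual_volume(rho, j, steps=4096):
    """Approximate the j-th dual volume (1.3) in the plane:
    V~_j(A) = kappa_2 * average of rho_A(u)^j over the unit circle."""
    kappa_2 = math.pi  # area of the unit disk
    avg = sum(rho(2 * math.pi * (k + 0.5) / steps) ** j for k in range(steps)) / steps
    return kappa_2 * avg

# Radial function of the ellipse x^2/a^2 + y^2/b^2 <= 1.
a, b = 2.0, 1.0
rho = lambda theta: 1.0 / math.sqrt((math.cos(theta) / a) ** 2 + (math.sin(theta) / b) ** 2)

# For j = d = 2 the dual volume is the area: V~_2(ellipse) = pi * a * b.
assert abs(dual_volume(rho, 2) - math.pi * a * b) < 1e-9
```

For a disk of radius r the radial function is constant, and \(\widetilde{V}_j = \pi r^j\) for every real j, matching Lutwak's extension to arbitrary exponents.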

Theorem 1.13

Let \(K \subset \mathbb {R}^d\) be a convex body of class \(C^2_+\) and let \(T\subset K\) be another convex body, which is strictly contained in K and contains the origin. Let \(\sigma \) be a probability measure on \({{\,\mathrm{bd}\,}}K\) with positive continuous density. Denote by \(K_{\sigma ,T}(n) \) the convex hull of the random inscribed polytope \(K_\sigma (n)\) and T. Then, for any real \(j \ne 0\), the dual volume \( \widetilde{V} _j(K_{\sigma ,T}(n))\) satisfies a central limit theorem, that is

$$\begin{aligned} \frac{ \widetilde{V} _j(K_{\sigma ,T}(n)) - \mathbb {E} \widetilde{V} _j(K_{\sigma ,T}(n))}{ \sqrt{{{\,\mathrm{Var}\,}} \widetilde{V} _j(K_{\sigma ,T}(n)) } } \overset{d}{\longrightarrow } Z \qquad \text {as }~{n\rightarrow \infty }, \end{aligned}$$
(1.4)

where Z is a standard Gaussian random variable.

1.5 Random polyhedral sets

In this section we consider a dual model of a random circumscribing polyhedral set. Let \(K \subset \mathbb {R}^d\) be a convex body of class \(C^2_+\). Fix a probability measure \(\sigma \) on \({{\,\mathrm{bd}\,}}K\) with a positive and continuous density \(\varsigma \) with respect to the \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\). For a point \(x \in {{\,\mathrm{bd}\,}}K\) denote by H(x) the unique supporting affine hyperplane to K at x, and by \(H^-(x)\) the closed half-space determined by H(x) containing K. We define a (weighted) random polyhedral set as follows: Let \(X_1, X_2,\ldots \) be a sequence of independent random points on \({{\,\mathrm{bd}\,}}K\) distributed according to \(\sigma \), and define

$$\begin{aligned} P_\sigma (n) = \bigcap _{i=1}^n H^-(X_i) \end{aligned}$$

for \(n\ge d+1\). We denote by W(L) the mean width of a convex body \(L\subset \mathbb {R}^d\), that is,

$$\begin{aligned} W(L) = \int _{S^{d-1}} w(L,u) \,du, \end{aligned}$$

where, as above, the integration is with respect to the normalized spherical Lebesgue measure and \(w(L,u)\) is the width of L in direction u, i.e., \(w(L,u)=h_L(u)+h_L(-u)\), where \(h_L(y)=\max \{\langle x,y\rangle :x\in L\}\), \(y\in \mathbb {R}^d\), is the support function of L. We show a central limit theorem for the mean width of the random polyhedral set \(P_\sigma (n)\). However, as this set is unbounded with positive probability, we will consider its intersection with a fixed convex window L which strictly contains K. A common choice for L in the literature is the parallel body \(K_1:=\{x \in \mathbb {R}^d \,:\, \mathrm{dist} (x, K) \le 1 \}\), but the result does not depend on the choice of L.
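For illustration (a sketch of ours with hypothetical names, not the paper's computation), the mean width of the square \([-1,1]^2\) can be evaluated from its support function \(h(u) = |u_1| + |u_2|\); with the normalized spherical measure the exact value is \(8/\pi \):

```python
import math

def mean_width(h, steps=20000):
    """Mean width W(L): average of h_L(u) + h_L(-u) over the unit circle,
    with respect to the normalized spherical Lebesgue measure."""
    total = 0.0
    for k in range(steps):
        theta = 2 * math.pi * (k + 0.5) / steps
        u = (math.cos(theta), math.sin(theta))
        total += h(u) + h((-u[0], -u[1]))
    return total / steps

# Support function of the square [-1,1]^2: h(u) = |u_1| + |u_2|.
h_square = lambda u: abs(u[0]) + abs(u[1])

# Exact mean width: (1/2pi) * Int_0^{2pi} 2(|cos t| + |sin t|) dt = 8/pi.
assert abs(mean_width(h_square) - 8 / math.pi) < 1e-4
```

As a cross-check, the unit disk has constant width 2, and the quadrature returns exactly 2.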

Theorem 1.14

Under the above assumptions, the mean width of \(P_\sigma (n) \cap L\) satisfies a central limit theorem, that is,

$$\begin{aligned} \frac{W(P_\sigma (n) \cap L) - \mathbb {E}W(P_\sigma (n) \cap L)}{\sqrt{{{\,\mathrm{Var}\,}}W(P_\sigma (n) \cap L)} } \overset{d}{\longrightarrow } Z \qquad \text {as}~{ n\rightarrow \infty }, \end{aligned}$$

where Z is a standard Gaussian random variable.

In this context we would like to mention that the expected mean width of random polyhedral sets \(P_\sigma (n) \cap L\) has been analysed in detail in [12, 15] under different smoothness assumptions on the body K. Theorem 1.14 adds a central limit theorem to this line of research.

1.6 Weighted random inscribed polytopes

Let us finally turn to the main result of this paper, which we use to derive Theorems 1.1, 1.9 and 1.13 as special cases, as we shall explain in Sect. 4. It is the counterpart for inscribed random polytopes of [9, Theorem 2.1], which holds for random convex hulls with points chosen inside a convex body. At the same time it generalizes the main result of [63] on the volume of random convex hulls of uniformly distributed random points on the boundary of a convex body of class \(C_+^2\) to weighted volumes and to random points chosen according to a density.

To formally describe the set-up, fix a space dimension \(d\ge 2\) and let \(K \subset \mathbb {R}^d\) be a convex body whose boundary \({{\,\mathrm{bd}\,}}K\) is a \(C^2\)-smooth submanifold of \(\mathbb {R}^d\) with everywhere positive Gauss–Kronecker curvature. We fix a probability measure \(\sigma \) on \({{\,\mathrm{bd}\,}}K\) with a continuous and positive density \(\varsigma > 0\) with respect to the \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\). Additionally, we let \(\Phi \) be a measure on K with a positive density \(\phi > 0\) with respect to the Lebesgue measure on K, such that \(\phi \) is continuous on a (relative) neighbourhood of \( {{\,\mathrm{bd}\,}}K\) in K.

This puts us into the position to define what we mean by a weighted random inscribed polytope in K. We choose a sequence \(X_1,X_2,\ldots \) of independent random points on \({{\,\mathrm{bd}\,}}K\) according to the probability measure \(\sigma \), and for \(n\ge d+1\) set \(K_\sigma (n) := [X_1, \ldots , X_n]\) to be the convex hull of \(X_1,\ldots ,X_n\). We will prove that the \(\Phi \)-measure of \(K_\sigma (n)\) satisfies a central limit theorem, as \(n\rightarrow \infty \).
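A minimal simulation of this model (our own sketch) for K the unit disk and \(\sigma \) the uniform measure on the circle: since the sampled points lie on \({{\,\mathrm{bd}\,}}K\), every point is a vertex of \(K_\sigma (n)\), so the hull area is simply a shoelace sum over the angularly sorted sample.

```python
import math
import random

def inscribed_polygon_area(angles):
    """Area (shoelace formula) of the convex hull of the points
    (cos t, sin t), t in `angles`: points on the circle are in convex
    position once sorted by angle, so no hull algorithm is needed."""
    ts = sorted(a % (2 * math.pi) for a in angles)
    pts = [(math.cos(t), math.sin(t)) for t in ts]
    n = len(pts)
    return 0.5 * abs(sum(
        pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
        for i in range(n)
    ))

# Sanity check: n equally spaced points give the regular n-gon,
# whose area is (n/2) * sin(2*pi/n) -> pi as n -> infinity.
n = 1000
assert abs(inscribed_polygon_area([2 * math.pi * k / n for k in range(n)])
           - (n / 2) * math.sin(2 * math.pi / n)) < 1e-9

# One sample of Phi(K_sigma(n)) for the uniform measure sigma on the circle.
rng = random.Random(1)
area = inscribed_polygon_area([rng.uniform(0, 2 * math.pi) for _ in range(n)])
assert 0 < area < math.pi
```

Repeating the last two lines over many independent trials and standardizing the resulting areas gives an empirical version of the Gaussian limit asserted in the central limit theorem.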

Theorem 1.15

Under the above assumptions one has

$$\begin{aligned} \frac{\Phi (K_\sigma (n)) - \mathbb {E}\Phi (K_\sigma (n))}{ \sqrt{{{\,\mathrm{Var}\,}}\Phi (K_\sigma (n)) }} \overset{d}{\longrightarrow } Z\qquad \text {as}~{n\rightarrow \infty }, \end{aligned}$$

where Z is a standard Gaussian random variable.

Remark 1.16

In our proof we establish the following quantitative version of Theorem 1.15:

$$\begin{aligned} \sup _{t\in \mathbb {R}}\Big |\mathbb {P}\Big (\frac{\Phi (K_\sigma (n)) - \mathbb {E}\Phi (K_\sigma (n))}{ \sqrt{{{\,\mathrm{Var}\,}}\Phi (K_\sigma (n)) }}\le t\Big )-\mathbb {P}(Z\le t)\Big | \le C\, n^{-\frac{1}{2}}\, (\log n)^{3\frac{d+1}{d-1} } \end{aligned}$$

for some constant \(C=C(K,\varsigma ,\phi )>0\) only depending on K, \(\varsigma \) and \(\phi \). Clearly, taking \(n\rightarrow \infty \) this yields the distributional convergence stated in Theorem 1.15. In the same spirit it is possible to upgrade Theorem 1.1, Theorem 1.9, Theorem 1.13 and Theorem 1.14 as well.

2 Preliminaries

In this paper we denote absolute constants by \(c,C,\ldots \) and whenever a constant depends on additional parameters \(a,b,\ldots \), say, we indicate this by writing \(c=c(a,b,\ldots ),C=C(a,b,\ldots )\) etc. Our convention is that constants may depend on the convex body K and the measures \(\Phi \) and \(\sigma \), but never on the number of points n.

2.1 Geometric tools

Throughout this section we keep the assumptions from Sect. 1.6, namely, \(\Omega \subset \mathbb {R}^d\) is a convex domain, and \(K \subset \Omega \) is a convex body of class \(C^2_+\), \(\Phi \) and \(\sigma \) are measures on K and \({{\,\mathrm{bd}\,}}K\), respectively, with positive densities \(\phi \) and \(\varsigma \), with respect to the Lebesgue measure and \((d-1)\)-dimensional Hausdorff measure, respectively, such that \(\varsigma \) is continuous and \(\phi \) is continuous in a neighbourhood of \({{\,\mathrm{bd}\,}}K\).

For a hyperplane \(H \subset \mathbb {R}^d\) we use the notation \(H^\pm \) for the two closed half-spaces bounded by H. For a parameter \(t \in (0,1)\), the \(\sigma \)-surface body (or weighted surface body) of K with parameter t is defined by

$$\begin{aligned} K_\sigma ^t = \bigcap \{H^+ \,:\, H \subset \mathbb {R}^d \text { a hyperplane}, \sigma ({{\,\mathrm{bd}\,}}K \cap H^-) \le t \}. \end{aligned}$$

Note that we get back the classical surface body \(K^t\) from [61] if we choose for \(\sigma \) the normalized \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\).

For a point \(z \in {{\,\mathrm{bd}\,}}K\) and \(t > 0\), define the visibility region of z (with respect to the measure \(\sigma \)) as the set of all points in \(K {\setminus } K_\sigma ^t\) visible from z around the 'obstacle' \(K_\sigma ^t\), that is,

$$\begin{aligned} {{\,\mathrm{Vis}\,}}_\sigma (z, t) = \{ y \in K {\setminus } K_\sigma ^t \,:\, [z,y] \cap K_\sigma ^t = \emptyset \}. \end{aligned}$$
Fig. 5

The visibility region \({{\,\mathrm{Vis}\,}}_\sigma (z,t)\) of a point \(z\in {{\,\mathrm{bd}\,}}K\)

As above, when \(\sigma \) is the \((d-1)\)-dimensional Hausdorff measure we denote the visibility region simply by \({{\,\mathrm{Vis}\,}}(z, t)\). We will require the following estimates on visibility regions (Fig. 5):

Lemma 2.1

Let \(K \subset \mathbb {R}^d\) be a convex body of class \(C^2_+\). Then there is a constant \(C=C(K, \sigma , \Phi )\) such that for all sufficiently small \(t>0\) one has

$$\begin{aligned} \sup _{z \in {{\,\mathrm{bd}\,}}K} \Phi ({{\,\mathrm{Vis}\,}}_\sigma (z, t)) \le C t^{\frac{d+1}{d-1}} \end{aligned}$$
(2.1)

and

$$\begin{aligned} \sup _{z\in {{\,\mathrm{bd}\,}}K} \sigma (\left\{ y \in {{\,\mathrm{bd}\,}}K \,:\, {{\,\mathrm{Vis}\,}}_\sigma (z, t) \cap {{\,\mathrm{Vis}\,}}_\sigma (y,t) \ne \emptyset \right\} ) \le C t. \end{aligned}$$
(2.2)

The proofs of (2.1) and (2.2) for the unweighted case (i.e., when \(\Phi \) is the Lebesgue measure and \(\sigma \) is the \((d-1)\)-dimensional Hausdorff measure) can be extracted from existing literature (see [65, Lemma 6.3] and [57, Lemma 6.2]), and the general case follows by a ‘sandwiching’ argument similar to [8, Lemma 5.2]. For transparency, we sketch a direct argument below.

Proof

The result follows from elementary properties of caps. By definition, a cap in K is a subset of the form \(K \cap H^+\), where H is an affine hyperplane. Any cap \(C^K\) contains a unique point \(z \in {{\,\mathrm{bd}\,}}K\) of maximal distance from H, which we call the center of \(C^K\). When \(\sigma (C^K \cap {{\,\mathrm{bd}\,}}K)=t\), we call \(C^K\) a t-cap. The important (and trivial) observation here is that \({{\,\mathrm{Vis}\,}}_{\sigma }(z,t)\) is precisely the union of all t-caps containing z.

The proof requires the following estimates on caps: there exist positive constants \(M, t_0, \rho _0\), depending only on K, \(\sigma \) and \(\Phi \), for which the following holds.

  • Any t-cap with \(t \le t_0\) has diameter \(\le M t^{\frac{1}{d-1}}\) and \(\Phi \)-measure \(\le M t^{\frac{d+1}{d-1}}\).

  • Any subset \(Y \subset {{\,\mathrm{bd}\,}}K\) with diameter \(\rho \le \rho _0\) is contained in the t-cap centered at any point \(y \in Y\), where \(t=M \rho ^{d-1}\).

These facts can be proven by a direct computation, using the fact that our assumptions on K imply uniform upper and lower (away from zero) bounds on the principal curvatures. Assuming this, it easily follows that for any \(z\in {{\,\mathrm{bd}\,}}K\), \({{\,\mathrm{Vis}\,}}_\sigma (z,t)\) is contained in the (Mt)-cap centered at z. Together with the bound on the \(\Phi \)-measure in the first item, this implies (2.1). Moreover, this fact also yields uniform bounds on the diameter of the sets \(\left\{ y \in {{\,\mathrm{bd}\,}}K \,:\, {{\,\mathrm{Vis}\,}}_\sigma (z, t) \cap {{\,\mathrm{Vis}\,}}_\sigma (y,t) \ne \emptyset \right\} \), which by the second item implies (2.2). \(\square \)

Finally, we will make extensive use of the fact that \(K_\sigma (n)\) contains the \(\sigma \)-surface body with overwhelming probability. More precisely, we require the following result from [57, Lemma 4.2].

Lemma 2.2

Let \(K \subset \mathbb {R}^d\) be a convex body of class \(C^2_+\), and let \(\sigma \) be a probability measure on \({{\,\mathrm{bd}\,}}K\) with positive and continuous density with respect to the \((d-1)\)-dimensional Hausdorff measure. Then for any \(\alpha > 0\) there exists \(c =c(\alpha ) > 0\) such that for n sufficiently large one has, denoting \(\tau = c \,\frac{\log n}{n}\),

$$\begin{aligned} \mathbb {P}\bigl ( K_\sigma ^\tau \not \subset K_\sigma (n) \bigr ) \le n^{-\alpha }. \end{aligned}$$

2.2 A normal approximation bound

The purpose of this section is to rephrase a very general normal approximation bound for non-linear functionals of independent and identically distributed random variables from [22, 39]. We present it in the framework of a general Polish space E with a probability measure \(\mu \), although in our application in the proof of Theorem 1.15, E will be the boundary of a smooth convex body in \(\mathbb {R}^d\) and \(\mu \) the probability measure \(\sigma \) on \({{\,\mathrm{bd}\,}}K\). For \(n\in \mathbb {N}\), let \(f:\bigcup _{k=1}^nE^k\rightarrow \mathbb {R}\) be a symmetric and measurable function. By this we mean that f is a symmetric function acting on point configurations in E of at most n points. If \(x=(x_1,\ldots ,x_n)\in E^n\) and \(i\in \{1,\ldots ,n\}\), we introduce the notation

$$\begin{aligned} x_{\lnot i}:= (x_1,\dotsc ,x_{i-1}, x_{i+1},\dotsc ,x_n)\in E^{n-1}. \end{aligned}$$

Similarly, for two indices \(i,j\in \{1,\ldots ,n\}\) with \(i<j\) we denote by \(x_{\lnot i,j}\in E^{n-2}\) the \((n-2)\)-tuple arising from x by removing both \(x_i\) and \(x_j\). We are now in the position to define the first- and second-order difference operator of f by

$$\begin{aligned} D_i f(x)&:= f(x)-f(x_{\lnot i}), \end{aligned}$$

and

$$\begin{aligned} D_{i,j} f(x)&:= D_i(D_j f(x)) = f(x)-f(x_{\lnot i})-f(x_{\lnot j})+f(x_{\lnot i,j}), \end{aligned}$$

respectively. In other words, \(D_if(x)\) measures the effect on the functional f when \(x_i\) is removed from x, and a similar interpretation is valid for \(D_{i,j}f(x)\).
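The difference operators admit a direct computational description. The following sketch is purely illustrative (the functional \(f=\max \) is a toy choice, not one appearing in this paper); it implements \(D_i\) and \(D_{i,j}\) for a symmetric function of a point configuration:

```python
def remove(x, *idx):
    """Return the tuple x with the entries at the given 1-based indices removed."""
    drop = set(idx)
    return tuple(v for k, v in enumerate(x, start=1) if k not in drop)

def D(f, x, i):
    """First-order difference operator D_i f(x) = f(x) - f(x_{not i})."""
    return f(x) - f(remove(x, i))

def D2(f, x, i, j):
    """Second-order difference operator
    D_{i,j} f(x) = f(x) - f(x_{not i}) - f(x_{not j}) + f(x_{not i,j})."""
    return f(x) - f(remove(x, i)) - f(remove(x, j)) + f(remove(x, i, j))

# Toy symmetric functional (an assumption for illustration only): the maximum.
f = max
x = (3.0, 1.0, 4.0, 1.5)
print(D(f, x, 3))      # removing the maximum: 4.0 - 3.0 = 1.0
print(D2(f, x, 1, 2))  # neither x_1 nor x_2 attains the maximum: 0.0
```

Note that \(D_{1,2}f(x)=0\) whenever the effect of removing \(x_1\) does not interact with the effect of removing \(x_2\); this is exactly the mechanism exploited for the polytope functional later on.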

Let now \(X=(X_1,\ldots ,X_n)\) be an n-tuple of independent random elements from E with distribution \(\mu \), and let \(X'\) and \(X''\) be independent random copies of X whose coordinates are denoted by \(X_i'\) and \(X_i''\), \(i\in \{1,\ldots ,n\}\), respectively. By a recombination of \(\{X,X',X''\}\) we understand a random vector \(Z=(Z_1,\ldots ,Z_n)\) having the property that \(Z_i\in \{X_i,X_i',X''_i\}\) for each \(i\in \{1,\ldots ,n\}\). This allows us to introduce the following quantities:

$$\begin{aligned} B_1(f)&:= \sup _{(Y,Y',Z,Z')}\mathbb {E}\big [{\mathbf {1}}\{D_{1,2}f(Y)\ne 0\}{\mathbf {1}}\{D_{1,3}f(Y')\ne 0\}\,(D_2f(Z))^2(D_3f(Z'))^2\big ]\,,\\ B_2(f)&:= \sup _{(Y,Z,Z')}\mathbb {E}\big [{\mathbf {1}}\{D_{1,2}f(Y)\ne 0\}\,(D_1f(Z))^2(D_2f(Z'))^2\big ]\,,\\ B_{3}(f)&:= \sup _Y \mathbb {E}|f(X) (D_1f(Y))^3| \,,\\ B_4(f)&:= \mathbb {E}|D_1f(X)|^3\,,\\ B_5(f)&:= \mathbb {E}|D_1f(X)|^4\,, \end{aligned}$$

where the suprema in the definitions of \(B_1\), \(B_2\) and \(B_3\) are taken over all tuples of recombinations \((Y,Y', Z, Z')\), \((Y, Z, Z')\) and Y, respectively, of \(\{X, X' ,X''\}\).

To measure the distance between (the laws of) two random variables W and V we use the Kolmogorov distance. We recall that the Kolmogorov distance between W and V is given by

$$\begin{aligned} d_{{{\,\mathrm{Kol}\,}}}(W,V) := \sup _{x\in \mathbb {R}}\big |\mathbb {P}(V\le x)-\mathbb {P}(W\le x)\big |\,, \end{aligned}$$
(2.3)

and note that convergence of the Kolmogorov distance implies convergence in distribution.
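The Kolmogorov distance (2.3) to the standard normal can be evaluated empirically at the jump points of an empirical distribution function. A minimal illustrative sketch (not part of the formal development):

```python
import math, random

def kolmogorov_distance_to_normal(sample):
    """Empirical Kolmogorov distance, cf. (2.3), between a sample and N(0, 1):
    sup_x |F_n(x) - Phi(x)|, attained at a jump point of the empirical CDF F_n."""
    xs = sorted(sample)
    n = len(xs)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    # At x_k the empirical CDF jumps from (k-1)/n to k/n; check both sides.
    return max(max(abs(k / n - Phi(x)), abs((k - 1) / n - Phi(x)))
               for k, x in enumerate(xs, start=1))

random.seed(0)
gauss_sample = [random.gauss(0.0, 1.0) for _ in range(10_000)]
print(kolmogorov_distance_to_normal(gauss_sample))  # small, of order n^{-1/2}
```

For a genuinely Gaussian sample the distance decays like \(n^{-1/2}\), which is the benchmark against which the rates of convergence below should be read.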

We are now prepared to rephrase the following normal approximation bound, which combines, in a slightly simplified form, Theorem 5.1 and Proposition 5.3 from [39].

Lemma 2.3

Fix \(n\in \mathbb {N}\). Let \(X_1,\dotsc ,X_n\) be independent random elements taking values in a Polish space E, distributed according to a probability measure \(\mu \), and let \(f:\bigcup _{k=1}^n E^k\rightarrow \mathbb {R}\) be a symmetric and measurable function. Define \(W(n):=f(X_1,\ldots ,X_n)\) and assume that \(\mathbb {E}\, W(n)=0\) and \(\mathbb {E}\, W(n)^2=1\). Then there exists an absolute constant \(c>0\) such that

$$\begin{aligned} \begin{aligned} d_{{{\,\mathrm{Kol}\,}}}\left( W(n),Z\right)&\le c\left[ n\sqrt{nB_1(f)}+n\sqrt{B_2(f)}+nB_{3}(f) + n B_4(f) + \sqrt{n B_5(f)}\right] , \end{aligned} \end{aligned}$$
(2.4)

where Z is a standard Gaussian random variable.

Remark 2.4

Let us point out that in the recent (and yet unpublished) work [23], the bound given by Lemma 2.3 was improved by removing the terms \(B_3\) and \(B_4\) from (2.4). Using this improved result leads to a somewhat simpler proof of Theorem 1.15, as well as to a slightly improved rate of convergence, given by a better exponent of the logarithmic term (see Remark 1.16). For the sake of completeness, we have chosen to apply the results of [39].

3 Proof of Theorem 1.15

This section is devoted to the proof of our main result about weighted random inscribed polytopes. The proof uses Lemma 2.3. To obtain the required bound on the right-hand side of (2.4) we need to combine an upper bound on the difference operators with a lower bound on the variance. As the two are independent, we treat them separately, the latter in Sect. 3.1 and the former in Sect. 3.2.

3.1 A lower bound for the variance

Richardson, Vu and Wu [57, Theorem 1.1] established a lower bound for the variance of the volume of the random inscribed polytope \(K_\sigma (n)\) by adapting the proof of Reitzner [53, Theorem 3]. With some further adaptation of their arguments we will show that the following more general theorem holds as well.

Theorem 3.1

Let \(K\subset \mathbb {R}^d\) be a convex body of class \(C_+^2\) and fix a probability measure \(\sigma \) on \({{\,\mathrm{bd}\,}}K\) with continuous density \(\varsigma >0\) with respect to the \((d-1)\)-dimensional Hausdorff measure. Let \(K_\sigma (n)\) be the random inscribed polytope generated as the convex hull of n independent random points distributed with respect to \(\sigma \). Furthermore, let \(\Phi \) be a measure on K with continuous density \(\phi >0\) with respect to the Lebesgue measure on K. Then there exist constants \(c=c(K,\varsigma ,\phi )>0\) and \(N=N(K,\varsigma ,\phi )\in \mathbb {N}\) such that for all \(n\ge N\) we have that

$$\begin{aligned} {{\,\mathrm{Var}\,}}\Phi (K_\sigma (n)) \ge cn^{-\frac{d+3}{d-1}}. \end{aligned}$$
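As an illustrative sanity check (not part of the proof), one can estimate the variance decay by Monte Carlo in the simplest admissible case: K the unit disk (\(d=2\)), \(\sigma \) the uniform distribution on the circle and \(\Phi \) the area, where the predicted order is \(n^{-(d+3)/(d-1)}=n^{-5}\). Comparing the variance at two sample sizes recovers the exponent approximately:

```python
import math, random

def inscribed_polygon_area(n, rng):
    """Area of the convex hull of n uniform points on the unit circle
    (d = 2): sort by angle and apply the shoelace formula."""
    angles = sorted(rng.uniform(0.0, 2.0 * math.pi) for _ in range(n))
    pts = [(math.cos(a), math.sin(a)) for a in angles]
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

def area_variance(n, trials, rng):
    vals = [inscribed_polygon_area(n, rng) for _ in range(trials)]
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / (trials - 1)

rng = random.Random(1)
v1 = area_variance(32, 4000, rng)
v2 = area_variance(64, 4000, rng)
# Estimated decay exponent of the variance; the predicted value is
# (d + 3)/(d - 1) = 5 for d = 2.
print(math.log(v1 / v2) / math.log(2.0))
```

The estimated exponent is close to 5, in line with the lower bound of the theorem (for this example matching upper bounds are known as well).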

One of the key constructions is to approximate \({{\,\mathrm{bd}\,}}K\) around a fixed point \(x\in {{\,\mathrm{bd}\,}}K\) by an elliptic paraboloid \(Q_x\). Choose coordinates such that x is at the origin, \(\mathbb {R}^{d-1}\) is the tangent hyperplane to \({{\,\mathrm{bd}\,}}K\) at x, and the outer unit normal \(n_K(x)\) of \({{\,\mathrm{bd}\,}}K\) at x is \(-e_d\). Then

$$\begin{aligned} Q_x := \{z\in \mathbb {R}^d : \kappa _1(x) z_1^2 + \cdots +\kappa _{d-1}(x) z_{d-1}^2 \le 2 z_d\}, \end{aligned}$$

where \(\kappa _1(x),\dotsc ,\kappa _{d-1}(x)\) are the principal curvatures of \({{\,\mathrm{bd}\,}}K\) at x. We may map the standard elliptic paraboloid \(E=\{z\in \mathbb {R}^d: z_1^2+\cdots +z_{d-1}^2 \le z_d\}\) to \(Q_x\) via a linear map, i.e., if we set

$$\begin{aligned} A_x := \mathrm {diag}\left( \sqrt{\frac{2h}{\kappa _1(x)}},\dotsc ,\sqrt{\frac{2h}{\kappa _{d-1}(x)}},h\right) , \end{aligned}$$
(3.1)

then \(Q_x = A_xE\). Here, the dependence on \(h>0\) is chosen in such a way that the cap \(C^{E}(0,1):=\{z\in E : z_d\le 1\}\) of height 1 is mapped to

$$\begin{aligned} C^{Q_x}(x,h) := \{z\in Q_x: z_d\le h\} = A_xC^{E}(0,1). \end{aligned}$$

Note also that

$$\begin{aligned} \det A_x = 2^{\frac{d-1}{2}} \kappa (x)^{-\frac{1}{2}} h^{\frac{d+1}{2}}, \end{aligned}$$

where \(\kappa (x) := \prod _{i=1}^{d-1} \kappa _i(x)\) is the Gauss–Kronecker curvature of \({{\,\mathrm{bd}\,}}K\) at x. Since K is of class \(C_+^2\) there are \(h_0>0\) and \(c_0>1\) such that for all \(h\in (0,h_0)\) we have that

$$\begin{aligned} \frac{1}{c_0} h^{\frac{d+1}{2}} \le \left| \det A_x\right| \le c_0 h^{\frac{d+1}{2}}. \end{aligned}$$
(3.2)
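The closed form for \(\det A_x\) and the resulting two-sided bound (3.2) follow directly by multiplying the diagonal entries in (3.1). A quick numerical check, with hypothetical curvature values chosen purely for illustration:

```python
import math

# Check det A_x from (3.1) against the closed form
# 2^{(d-1)/2} * kappa(x)^{-1/2} * h^{(d+1)/2}, where kappa is the product
# of the principal curvatures (the Gauss-Kronecker curvature).
def det_A(kappas, h):
    diag = [math.sqrt(2.0 * h / k) for k in kappas] + [h]
    prod = 1.0
    for entry in diag:
        prod *= entry
    return prod

kappas = [0.5, 1.0, 2.0]           # hypothetical principal curvatures (d - 1 = 3)
h = 1e-3
d = len(kappas) + 1
kappa = math.prod(kappas)          # Gauss-Kronecker curvature
closed_form = 2 ** ((d - 1) / 2) * kappa ** (-0.5) * h ** ((d + 1) / 2)
print(det_A(kappas, h), closed_form)
```

Uniform bounds on the curvatures then translate directly into the uniform constants \(c_0\) in (3.2).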

Since the (Lebesgue) density function \(\phi \) of \(\Phi \) is positive and continuous near \({{\,\mathrm{bd}\,}}K\), we also find \(c_1>1\) such that for all h small enough we have that

$$\begin{aligned}&\frac{1}{c_1} {{\,\mathrm{Vol}\,}}_d(C^{Q_x}(x,h)) (\phi (x)+o_h(1)) \le \Phi (C^{K}(x,h))\\&\le c_1 {{\,\mathrm{Vol}\,}}_d(C^{Q_x}(x,h)) (\phi (x)+o_h(1)), \end{aligned}$$

where \(o_h(1) \rightarrow 0\) as \(h\rightarrow 0^+\), and \(C^K(x,h):= \{z\in K : \langle x-z,n_K(x) \rangle \le h\}\) is the cap of K of height h with apex in x. Here we use the fact that for \(h>0\) small enough (independently of \(x \in {{\,\mathrm{bd}\,}}K\)), the cap \(C^{K}(x,h)\) is contained in the neighbourhood of \({{\,\mathrm{bd}\,}}K\) where \(\phi \) is continuous.

Next, let us repeat the random simplex construction in the standard paraboloid E. In the following we denote by H(u,t) the affine hyperplane with unit normal \(u\in S^{d-1}\) and signed distance t from the origin, i.e., \(H(u,t) = \{x\in \mathbb {R}^d : \langle x, u\rangle = t\}\). We first consider the simplex \(S=[v_0,\dotsc ,v_d]\) in the cap \(C^{E}(0,1)\), where the vertex \(v_0\) is the origin and \([v_1,\dotsc ,v_d]\) is a regular simplex inscribed in the \((d-1)\)-dimensional ball \(\{z\in E: z_d = h_d\}\), where \(h_d<\frac{1}{2d^2}\) is chosen small enough so that

$$\begin{aligned}&\{\lambda z : \lambda \ge 0,\, z\in S\} \supset (2E)\cap H(e_d,1)\\&= \{(z_1,\dotsc ,z_{d-1},1) \in \mathbb {R}^d: \Vert (z_1,\dotsc ,z_{d-1})\Vert \le \sqrt{2}\}. \end{aligned}$$

This condition ensures that the cone spanned by S is “flat” enough and will be important later on, see (3.3), to ensure a certain independence property, see (3.5).

Now we consider the orthogonal projection \(\mathrm {proj}_{\mathbb {R}^{d-1}}: \mathbb {R}^d \rightarrow \mathbb {R}^{d-1}\), defined by \(\mathrm {proj}_{\mathbb {R}^{d-1}}(z) = (z_1,\dotsc ,z_{d-1})\). For \(i=0,\dotsc ,d\) let \(B_i\subset \mathbb {R}^{d-1}\) be the ball of radius \(r>0\) around \(v_i' := \mathrm {proj}_{\mathbb {R}^{d-1}}(v_i)\), where \(v_0'=v_0=0\). We further set \(B_i' := {{\,\mathrm{bd}\,}}E\cap \mathrm {proj}_{\mathbb {R}^{d-1}}^{-1}(B_i)\) for \(i=0,\dotsc ,d\). We will choose \(r>0\) later, but it will be small enough so that for all \(w_i\in B_i'\), \(i=0,\dotsc ,d\), the simplex \([w_0,\dotsc ,w_d]\) is sufficiently close to \(S=[v_0,\dotsc ,v_d]\). In particular, we have that

$$\begin{aligned} \{\lambda z: \lambda \ge 0,\, z\in [w_0,\dotsc ,w_d]\} \supset (2E)\cap H(e_d,1), \end{aligned}$$

for all \(w_i\in B_i'\), \(i=0,\dotsc ,d\) (see Fig. 6).

Fig. 6
figure 6

Construction of the random simplex \([w_0, \ldots , w_d]\)

Furthermore, if W is randomly distributed on \(B_0'\) with respect to a continuous and positive probability density \(\vartheta \), then there exists \(c_3>0\) such that

$$\begin{aligned} {{\,\mathrm{Var}\,}}_W({{\,\mathrm{Vol}\,}}_d([W,v_1,\dotsc ,v_d])) \ge c_3>0, \end{aligned}$$

where for some random element X the notation \({{\,\mathrm{Var}\,}}_X\) (and also \(\mathbb {E}_X\) below) indicates that the variance (or the expectation) is taken with respect to the law of X.

Using the linear transformation \(A_x\) defined at (3.1), we set

$$\begin{aligned} D_i(x) := A_x B_i \subset T_x{{\,\mathrm{bd}\,}}K\cong \mathbb {R}^{d-1}, \end{aligned}$$

where \(T_x{{\,\mathrm{bd}\,}}K\) is the tangent space of \({{\,\mathrm{bd}\,}}K\) at x, which is isometric (\(\cong \)) to \(\mathbb {R}^{d-1}\). Further, we set

$$\begin{aligned} D_i'(x) := \tilde{f}_x(D_i(x)) \subset {{\,\mathrm{bd}\,}}K \cap C^K(x,h), \end{aligned}$$

where \(f_x:\mathbb {R}^{d-1}\rightarrow \mathbb {R}\) locally defines \({{\,\mathrm{bd}\,}}K\) around x via \(\tilde{f}_x(y) = (y,f_x(y)) \in {{\,\mathrm{bd}\,}}K\). We also stress that \(D_i'(x)\subset {{\,\mathrm{bd}\,}}K\) is not the image of \(B_i'\) under \(A_x\) since \(A_xB_i'\subset {{\,\mathrm{bd}\,}}Q_x\). Finally, for sufficiently small \(h>0\), we find that

$$\begin{aligned} \frac{1}{c_4} h^{\frac{d-1}{2}} \le \sigma (D_i'(x)) \le c_4 h^{\frac{d-1}{2}}, \end{aligned}$$

for some constant \(c_4>1\) and

$$\begin{aligned} \begin{aligned} \{\lambda z: \lambda \ge 0, \, z\in [y_0,\dotsc ,y_d]\}&\supset (2Q_x)\cap H(n_K(x),\langle x,n_K(x) \rangle -h)\\&\supset K \cap H(n_K(x), \langle x, n_K(x)\rangle -h), \end{aligned} \end{aligned}$$
(3.3)

for all \(y_i\in D'_i(x)\).

We are now ready to adapt to our situation the main lemma [57, Lemma 3.1], which is the part of the proof of [57, Theorem 1.1] that has to be changed.

Lemma 3.2

There exists \(r_0>0\) such that for all \(r\in (0,r_0)\) there are \(h_0=h_0(r)>0\) and \(c_5=c_5(r)>1\) such that for all \(y_i\in D_i'(x)\), \(i=1,\dotsc ,d\), and \(h\in (0,h_0)\) we have that

$$\begin{aligned} \frac{1}{c_5} h^{d+1} \le {{\,\mathrm{Var}\,}}_Y \Phi ([Y,y_1,\dotsc ,y_d]) \le c_5 h^{d+1}, \end{aligned}$$
(3.4)

where Y is a random point in \(D_0'(x)\subset {{\,\mathrm{bd}\,}}K\) distributed with respect to a continuous density function \(\varsigma >0\).

Proof

To prove [57, Lemma 3.1] one first shows [57, Claim 8.1], where the first and second moment of the volume are asymptotically bounded. Following the proof of [57, Claim 8.1] and [57, Claim 8.2] we obtain

$$\begin{aligned}&\mathbb {E}_Y\Phi ([Y,y_1,\dotsc ,y_d]) = (1+o_{r,h}(1)) \frac{1}{{{\,\mathrm{Vol}\,}}_{d-1}(B_0)}\\&\int _{B_0} \Phi ([\tilde{f}_x(A_x'z),y_1,\dotsc ,y_d]) \, \mathrm {d}z, \end{aligned}$$

where we recall from (3.1) that \(A_x\) is the linear transformation that maps the standard paraboloid E to the approximating paraboloid \(Q_x=A_xE\) of \({{\,\mathrm{bd}\,}}K\) around x and \(A_x'\) is the restriction of \(A_x\) to \(\mathbb {R}^{d-1}\), i.e., \(A_x'=\mathrm {diag}\left( \sqrt{\frac{2h}{\kappa _1(x)}},\dotsc ,\sqrt{\frac{2h}{\kappa _{d-1}(x)}}\right) \). Now, since \(\phi \) is continuous at \(x\in {{\,\mathrm{bd}\,}}K\) and since \([Y,y_1,\dotsc , y_d]\subset C^K(x,h)\), we find

$$\begin{aligned} \Phi ([\tilde{f}_x(A_x'z),y_1,\dotsc ,y_d])&= {{\,\mathrm{Vol}\,}}_d([\tilde{f}_x(A_x'z), y_1, \dotsc , y_d]) (\phi (x)+o_{h}(1)). \end{aligned}$$

By setting

$$\begin{aligned} \psi _1(r) := \frac{1}{{{\,\mathrm{Vol}\,}}_{d-1}(B_0)} \int _{B_0} {{\,\mathrm{Vol}\,}}_d([\tilde{b}(z),v_1,\dotsc ,v_d])\, \mathrm {d}z, \end{aligned}$$

where \(\tilde{b}(z) = (z,\Vert z\Vert ^2)\) parametrizes E, we therefore derive

$$\begin{aligned} \lim _{h\rightarrow 0^+} \frac{\mathbb {E}_Y \Phi ([Y,y_1,\dotsc ,y_d])}{\left| \det A_x\right| \psi _1(r)} = (\phi (x)+o_r(1)), \end{aligned}$$

similar to [57, Equation (33)]. Analogously, by setting

$$\begin{aligned} \psi _2(r) := \frac{1}{{{\,\mathrm{Vol}\,}}_{d-1}(B_0)} \int _{B_0} {{\,\mathrm{Vol}\,}}_d([\tilde{b}(z),v_1,\dotsc ,v_d])^2\, \mathrm {d}z, \end{aligned}$$

we obtain

$$\begin{aligned} \lim _{h\rightarrow 0^+} \frac{\mathbb {E}_Y \Phi ([Y,y_1,\dotsc ,y_d])^2}{\left| \det A_x\right| ^2\psi _2(r)} = (\phi (x)^2+o_r(1)). \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{h\rightarrow 0^+} \frac{{{\,\mathrm{Var}\,}}_Y \Phi ([Y,y_1,\dotsc ,y_d])}{\left| \det A_x\right| ^2}&= (\psi _2(r) - \psi _1(r)^2)\,(\phi (x)^2+o_r(1))\\&= \big [\!{{\,\mathrm{Var}\,}}_W{{\,\mathrm{Vol}\,}}_d([W,v_1,\dotsc ,v_d])\big ]\, (\phi (x)^2+o_r(1)) >0 \end{aligned}$$

for \(r>0\) small enough, since \(\phi (x)>0\) and \({{\,\mathrm{Var}\,}}_W{{\,\mathrm{Vol}\,}}_d([W,v_1,\dotsc ,v_d])>0\) for all \(r>0\).

Thus, there is \(c_6>1\) and \(h_0>0\) such that for all \(h\in (0,h_0)\) we have that

$$\begin{aligned} \frac{1}{c_6} \left| \det A_x\right| ^2 \le {{\,\mathrm{Var}\,}}_Y \Phi ([Y,y_1,\dotsc ,y_d]) \le c_6\left| \det A_x\right| ^2, \end{aligned}$$

which completes the proof by (3.2). \(\square \)

Proof of Theorem 3.1

Replacing [57, Lemma 3.1] with Lemma 3.2 in the proof of [57, Theorem 1.1] essentially yields the statement. Let us briefly recall the main steps: Choose n points \(Y_1,\dotsc ,Y_n\) in \({{\,\mathrm{bd}\,}}K\) at random according to \(\varsigma \). Furthermore, choose n points \(x_1,\dotsc ,x_n\in {{\,\mathrm{bd}\,}}K\) and corresponding disjoint caps \(C^K(x_j,h_n)\), \(j=1,\dotsc ,n\), according to the economic cap covering, see [57, Lemma 6.6], and in each cap \(C^K(x_j,h_n)\) define the sets \(D_i'(x_j)\), \(i=0,\dotsc ,d\) as constructed before. Here,

$$\begin{aligned} \frac{1}{c_7} n^{-\frac{2}{d-1}} \le h_n \le c_7 n^{-\frac{2}{d-1}}, \end{aligned}$$

and

$$\begin{aligned} \frac{1}{c_8 n} \le \sigma (C^K(x_j,h_n)\cap {{\,\mathrm{bd}\,}}K) \le \frac{c_8}{n}, \end{aligned}$$

for some constants \(c_7,c_8>1\) and n large enough.

Now let \(A_j\), \(j=1,\dotsc ,n\), be the event that each of the sets \(D_i'(x_j)\) contains exactly one random point, say \(Y_i\in D_i'(x_j)\) for \(i=0,\dotsc ,d\), and every other point lies outside of \(C^K(x_j,h_n)\cap {{\,\mathrm{bd}\,}}K\), i.e., \(Y_i\not \in C^K(x_j,h_n)\cap {{\,\mathrm{bd}\,}}K\) for \(i=d+1,\dotsc , n-1\). Then,

$$\begin{aligned} \mathbb {P}(A_j)&= \left( {\begin{array}{c}n\\ d+1\end{array}}\right) \, \mathbb {P}(Y_i \not \in C^K(x_j,h_n)\cap {{\,\mathrm{bd}\,}}K, i\ge d+1)\, \mathbb {P}( Y_i\in D_i'(x_j), i=0,\dotsc ,d )\\&\ge \left( {\begin{array}{c}n\\ d+1\end{array}}\right) \, (1-\sigma (C^K(x_j,h_n)\cap {{\,\mathrm{bd}\,}}K))^{n-d-1} \,\prod _{i=0}^d \sigma (D_i'(x_j))\\&\ge \left( {\begin{array}{c}n\\ d+1\end{array}}\right) \, \left( 1-\frac{c_8}{n}\right) ^{n-d-1} c_7^{\frac{d^2-1}{2}} (c_4n)^{-d-1}. \end{aligned}$$

As a consequence, there is \(c_9>0\) such that for all n large enough and all \(j=1,\dotsc ,n\), we have that

$$\begin{aligned} \mathbb {P}(A_j) \ge c_9 >0, \end{aligned}$$

which yields

$$\begin{aligned} \mathbb {E}\sum _{j=1}^n {\mathbf {1}}_{A_j} = \sum _{j=1}^n \mathbb {P}(A_j) \ge c_9n >0. \end{aligned}$$

Next, let \(\mathcal {F}\) be the \(\sigma \)-algebra generated by the positions of all \((Y_1,\dotsc ,Y_n)\) except those which are contained in \(D_0'(x_j)\) with \({\mathbf {1}}_{A_j}= 1\) for at least one \(j=1,\dotsc ,n\). Hence, if \((Y_1,\dotsc ,Y_n)\) is \(\mathcal {F}\)-measurable and \({\mathbf {1}}_{A_j}=1\), then, up to reordering, we may assume that \(Y_j\in D_0'(x_j)\) is random and \(Y_{j+k}=y_k^j \in D_k'(x_j)\) is fixed for \(k=1,\dotsc ,d\). Let \((Y_1,\dotsc ,Y_n)\) be an arbitrary \(\mathcal {F}\)-measurable random vector. If \({\mathbf {1}}_{A_j}(Y_1,\dotsc ,Y_n) ={\mathbf {1}}_{A_k}(Y_1,\dotsc ,Y_n)=1\) for some \(j,k\in \{1,\dotsc ,n\}\), \(j\ne k\), and assuming without loss of generality that \(Y_j\in D_0'(x_j)\) and \(Y_k\in D_0'(x_k)\), then \(Y_j\) and \(Y_k\) are vertices of \(K_\sigma (n)=[Y_1,\dotsc ,Y_n]\), and by (3.3) there cannot be an edge between \(Y_j\) and \(Y_k\). Therefore, the change of weighted volume effected by moving \(Y_j\) within \(D_0'(x_j)\) is independent of the change effected by moving \(Y_k\) within \(D_0'(x_k)\). This yields

$$\begin{aligned} {{\,\mathrm{Var}\,}}[\Phi (K_\sigma (n))|\mathcal {F}] = \sum _{j=1}^n {\mathbf {1}}_{A_j} {{\,\mathrm{Var}\,}}_{Y_j} \Phi ([Y_j,y_1^j,\dotsc ,y_d^j]). \end{aligned}$$
(3.5)

Thus, for large enough n, we finally derive from the total variance formula that

$$\begin{aligned} {{\,\mathrm{Var}\,}}\Phi (K_{\sigma }(n))&= \mathbb {E}{{\,\mathrm{Var}\,}}[\Phi (K_\sigma (n))|\mathcal {F}] + {{\,\mathrm{Var}\,}}\mathbb {E}[\Phi (K_\sigma (n))|\mathcal {F}] \\&\ge \mathbb {E}{{\,\mathrm{Var}\,}}[\Phi (K_{\sigma }(n))|\mathcal {F}]\\&= \mathbb {E}\sum _{j=1}^n {\mathbf {1}}_{A_j} {{\,\mathrm{Var}\,}}_{Y_j}\Phi ([Y_j,y_1^j,\dotsc ,y_d^j]) \\&\ge \frac{1}{c_5} (h_n)^{d+1}\, \mathbb {E}\sum _{j=1}^n {\mathbf {1}}_{A_j} \ge c n^{-\frac{2(d+1)}{d-1}+1} = cn^{-\frac{d+3}{d-1}}, \end{aligned}$$

where \(c>0\) is some constant. This completes the proof of Theorem 3.1. \(\square \)

3.2 Proof of the main theorem

In this section we prove Theorem 1.15. To do so, we apply Lemma 2.3 to the random variable

$$\begin{aligned} W(n) = \frac{ \Phi (K_\sigma (n) ) - \mathbb {E}\Phi (K_\sigma (n) )}{\sqrt{{{\,\mathrm{Var}\,}}\Phi (K_\sigma (n) ) }} \end{aligned}$$

and deduce that \(d_{{{\,\mathrm{Kol}\,}}} (W(n), Z) \rightarrow 0\) as \(n\rightarrow \infty \), where Z is a standard Gaussian random variable. To apply the lemma, we consider a vector \(X = (X_1, \ldots , X_n)\) of independent points on \({{\,\mathrm{bd}\,}}K\) distributed according to \(\sigma \). We set \(\tilde{f}(X) = \Phi ([X_1, \ldots , X_n]) \) and \(f(X) = \frac{\tilde{f}(X)-\mathbb {E}\tilde{f}(X )}{\sqrt{{{\,\mathrm{Var}\,}}\tilde{f}(X)} }\) and note that by definition \(f(X) = W(n)\). Note moreover that f and \(\tilde{f}\) may be extended in an obvious manner to symmetric functions on \(\bigcup _{k=1}^n({{\,\mathrm{bd}\,}}K)^k\).
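Before estimating the individual terms, the following Monte Carlo sketch (illustrative only; it uses the unit disk with uniform boundary points, so that \(\Phi (K_\sigma (n))\) is the area of a random inscribed polygon) shows that the standardized variable W(n) is already close to Gaussian in Kolmogorov distance for moderate n:

```python
import math, random

def polygon_area(n, rng):
    """Shoelace area of the convex hull of n uniform points on the unit
    circle -- the case d = 2, K the unit disk, Phi the Lebesgue measure."""
    angles = sorted(rng.uniform(0.0, 2.0 * math.pi) for _ in range(n))
    pts = [(math.cos(a), math.sin(a)) for a in angles]
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

rng = random.Random(7)
n, trials = 400, 2000
areas = [polygon_area(n, rng) for _ in range(trials)]
mean = sum(areas) / trials
std = math.sqrt(sum((a - mean) ** 2 for a in areas) / (trials - 1))
w = sorted((a - mean) / std for a in areas)   # empirical analogue of W(n)

# Empirical Kolmogorov distance to the standard normal, cf. (2.3).
Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
d_kol = max(max(abs(k / trials - Phi(t)), abs((k - 1) / trials - Phi(t)))
            for k, t in enumerate(w, start=1))
print(f"mean area = {mean:.4f} (close to pi), d_Kol = {d_kol:.3f}")
```

Increasing n drives the empirical Kolmogorov distance to zero, which is the qualitative content of Theorem 1.15; the proof below quantifies the rate.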

Our estimation of the first- and second-order difference operators is based on the following simple observation. For a point \(z \in {{\,\mathrm{bd}\,}}K\) and a convex body \(L \subset K\) we set \(\Delta (z, L) := \mathrm{conv} (L \cup \{z\}) {\setminus } L\). Then, if the \(\sigma \)-surface body \(K_\sigma ^{\tau }\) is contained in \([X_2, \ldots , X_n]\), one has

$$\begin{aligned} \Delta (X_1, [X_2, \ldots , X_n]) \subset {{\,\mathrm{Vis}\,}}_\sigma (X_1, \tau ). \end{aligned}$$
(3.6)

With this preparation, we can already bound the moments of the first-order difference operator. We state the next result in general, although we will apply it below only with \(p=6\).

Lemma 3.3

Fix an integer \(p\ge 1\). There exists a constant \(C=C(K, \Phi , \sigma , p)>0\) such that for large enough n one has

$$\begin{aligned} \mathbb {E}|D_1f(X)|^p \le C \,n^{-\frac{p}{2}} \,(\log n)^{p\frac{d+1}{d-1} }. \end{aligned}$$

In the proof below and subsequent ones, the letter C stands for an arbitrary constant (independent of n), whose value may change from line to line. As we explained above, C is allowed to depend on K, \(\Phi \) and \(\sigma \).

Proof of Lemma 3.3

Fix \(\alpha > 0\), which will be specified later. By Lemma 2.2, there exists a constant \(c(\alpha )> 0\) such that, denoting \(\tau = c(\alpha ) \frac{\log (n-1)}{n-1}\), the event \(A := \{K_\sigma ^{\tau } \subset [X_2, \ldots , X_n] \}\) has \(\mathbb {P}(A^c) \le (n-1)^{-\alpha } \le C n^{-\alpha }\) (where C depends on \(\alpha \), for example one can take \(C=2^\alpha \)). On A, we use our observation (3.6) and the bound (2.1) to obtain

$$\begin{aligned} |D_1\tilde{f}(X)| \le \Phi ({{\,\mathrm{Vis}\,}}_\sigma (X_1, \tau )) \le C \tau ^{\frac{d+1}{d-1}} \le C \left( \frac{\log n}{n} \right) ^{\frac{d+1}{d-1}}. \end{aligned}$$
(3.7)

On \(A^c\) we have the trivial bound

$$\begin{aligned} |D_1 \tilde{f}(X)| \le \Phi (K) =: D. \end{aligned}$$
(3.8)

Combining these estimates with the convexity of the function \(t \mapsto t^p\) we find that

$$\begin{aligned} \mathbb {E}|D_1\tilde{f}(X) |^p&= \mathbb {E}| {\mathbf {1}}_A \cdot D_1\tilde{f}(X) + {\mathbf {1}}_{A^c} \cdot D_1\tilde{f}(X) |^p \\&\le 2^{p-1} \, \mathbb {E}\left[ {\mathbf {1}}_A \cdot C \left( \frac{\log n}{n} \right) ^{p\frac{d+1}{d-1}} + {\mathbf {1}}_{A^c} \cdot D^p \right] \\&\le C \, \left[ \left( \frac{\log n}{n} \right) ^{p\frac{d+1}{d-1}} + n^{-\alpha } \right] \end{aligned}$$

Choosing now \(\alpha > p\frac{d+1}{d-1}\), this yields

$$\begin{aligned} \mathbb {E}|D_1\tilde{f}(X)|^p \le C \left( \frac{\log n}{n} \right) ^{p\frac{d+1}{d-1}}. \end{aligned}$$

Finally, since by Theorem 3.1 we have that \({{\,\mathrm{Var}\,}}\tilde{f}(X) \ge C n^{-\frac{d+3}{d-1}}\), we conclude that

$$\begin{aligned} \mathbb {E}|D_1f(X)|^p&= {{\,\mathrm{Var}\,}}\tilde{f}(X)^{-\frac{p}{2}}\, \mathbb {E}|D_1\tilde{f}(X)|^p \le C \left( \frac{\log n}{n} \right) ^{p\frac{d+1}{d-1}} \, n^{\frac{p}{2}\,\frac{d+3}{d-1}} \\&= C \,n^{-\frac{p}{2}} \,(\log n)^{p\frac{d+1}{d-1} }. \end{aligned}$$

This completes the argument. \(\square \)

With this preparation, we turn to prove the asymptotic normality of the weighted volume of weighted random inscribed polytopes.

Proof

(Proof of Theorem 1.15) Let, as above, \(W(n) = f(X) = \frac{\Phi (K_\sigma (n)) -\mathbb {E}\Phi (K_\sigma (n)) }{\sqrt{{{\,\mathrm{Var}\,}}\Phi (K_\sigma (n))}}\), and observe that \(\mathbb {E}W(n) = 0\) and \({{\,\mathrm{Var}\,}}W(n)=1\). We estimate the five terms in the bound (2.4). We start with the last three terms, which involve only the first-order difference operator. For the term \(B_3(f)\), we apply the Cauchy–Schwarz inequality to deduce

$$\begin{aligned}&B_3(f) = \sup _Y \mathbb {E}|f(X) (D_1f(Y))^3|\nonumber \\&\le \sqrt{{{\,\mathrm{Var}\,}}f(X)} \cdot \sup _Y \sqrt{\mathbb {E}|D_1f(Y)|^6} = \sqrt{\mathbb {E}| D_1f(X)|^6}, \end{aligned}$$
(3.9)

where we have used the facts that \({{\,\mathrm{Var}\,}}f(X) = 1\) and that any recombination Y of \(\{X,X', X''\}\) is equal in distribution to X.

For the term \(B_4(f)\) we use again the Cauchy–Schwarz inequality, and obtain

$$\begin{aligned} B_4(f) = \mathbb {E}|D_1f(X)|^3 \le \sqrt{\mathbb {E}|D_1f(X)|^6}. \end{aligned}$$
(3.10)

Finally, to bound the term \(B_5(f)\) we use Hölder’s inequality which gives

$$\begin{aligned} B_5(f) = \mathbb {E}|D_1f(X)|^4 \le \left( \mathbb {E}|D_1f(X)|^6\right) ^{2/3}. \end{aligned}$$
(3.11)

Now, in view of (3.9), (3.10) and (3.11), Lemma 3.3 implies that

$$\begin{aligned} \max \left\{ nB_3(f), nB_4(f), \sqrt{nB_5(f)} \right\} \le C\, n^{-\frac{1}{2}} \, (\log n)^{3\frac{d+1}{d-1} } . \end{aligned}$$
(3.12)

Next, we turn to the terms \(B_1(f)\) and \(B_2(f)\), involving the second-order difference operator. We note that for \(j=1, 2\) one has \(B_j(f) = B_j(\tilde{f}) \Big / {{\,\mathrm{Var}\,}}\tilde{f}(X)^2\). Therefore we estimate first the terms \(B_j(\tilde{f})\). We start with \(B_1(\tilde{f})\). Observe that \(D_{1,2}\tilde{f}(Y)=0\) whenever the regions \(\Delta (Y_1, [Y_3, \ldots , Y_n])\) and \(\Delta (Y_2, [Y_3, \ldots , Y_n])\) are disjoint. We consider therefore the event

$$\begin{aligned} A' = \left\{ K_\sigma ^{\tau } \subset \bigcap _{W \in \{Y, Y', Z, Z'\} } [W_4, \ldots , W_n] \right\} . \end{aligned}$$

Using Lemma 2.2 (along with the union bound), for a fixed \(\alpha > 0\) (to be specified later), one can find \(c(\alpha )> 0\) such that, for \(\tau = c(\alpha ) \frac{\log n}{n}\), one has \(\mathbb {P}((A')^c) \le C n^{-\alpha }\). On \(A'\), our observation (3.6) implies that

$$\begin{aligned} {\mathbf {1}}\{D_{1,2}\tilde{f}(Y) \ne 0 \} \le {\mathbf {1}}\{ {{\,\mathrm{Vis}\,}}_\sigma (Y_1, \tau ) \cap {{\,\mathrm{Vis}\,}}_\sigma (Y_2, \tau ) \ne \emptyset \}, \end{aligned}$$

and the analogous statement holds for \(D_{1,3}\tilde{f}(Y')\). Combined with the bound (3.7) for the first-order difference operator and the estimate (2.2) (and on \((A')^c\), the trivial bound (3.8)), this yields

$$\begin{aligned} B_1(\tilde{f})&= \mathbb {E}\left[ {\mathbf {1}}\{D_{1,2}\tilde{f}(Y)\ne 0 \} {\mathbf {1}}\{D_{1,3}\tilde{f}(Y')\ne 0\}\, |D_2\tilde{f}(Z)|^2\, |D_3\tilde{f}(Z')|^2 \right] \\&\le \mathbb {E}\left[ {\mathbf {1}}_{A'} {\mathbf {1}}\{ {{\,\mathrm{Vis}\,}}_\sigma (Y_1, \tau ) \cap {{\,\mathrm{Vis}\,}}_\sigma (Y_2, \tau ) \ne \emptyset \} {\mathbf {1}}\{ {{\,\mathrm{Vis}\,}}_\sigma (Y_1', \tau ) \cap {{\,\mathrm{Vis}\,}}_\sigma (Y_3', \tau ) \ne \emptyset \}\, C \left( \frac{\log n}{n}\right) ^{4\frac{d+1}{d-1}} \right] \\&\quad + \mathbb {E}\left[ {\mathbf {1}}_{(A')^c} D^4 \right] \le C \left( \frac{\log n}{n}\right) ^{2} \left( \frac{\log n}{n}\right) ^{4\frac{d+1}{d-1}} + C D^4 n^{-\alpha }, \end{aligned}$$

where in the last step we used the independence of Y and \(Y'\) and the fact that the event \(A'\) depends only on the entries \(W_j\) for \(j \ge 4\). Finally, picking \(\alpha = 14\), so that \(\alpha \ge 4\frac{d+1}{d-1} + 2\) for any \(d\ge 2\), we derive that

$$\begin{aligned} B_1(\tilde{f}) \le C\left( \frac{\log n}{n}\right) ^{4\frac{d+1}{d-1} + 2}. \end{aligned}$$

A very similar computation yields the estimate

$$\begin{aligned} B_2(\tilde{f}) \le C \left( \frac{\log n}{n}\right) ^{4\frac{d+1}{d-1} + 1}. \end{aligned}$$

Using the bound \({{\,\mathrm{Var}\,}}\tilde{f}(X) \ge C n^{-\frac{d+3}{d-1}}\) provided by Theorem 3.1, we deduce that

$$\begin{aligned} B_1(f)&= \frac{B_1(\tilde{f})}{{{\,\mathrm{Var}\,}}\tilde{f}(X)^2} \le C \left( \frac{\log n}{n}\right) ^{4\frac{d+1}{d-1} + 2} \, n^{2\frac{d+3}{d-1}} = C\, n^{-4}\, (\log n)^{4\frac{d+1}{d-1} +2 } \end{aligned}$$

and

$$\begin{aligned} B_2(f)&= \frac{B_2(\tilde{f})}{{{\,\mathrm{Var}\,}}\tilde{f}(X)^2} \le C \left( \frac{\log n}{n}\right) ^{4\frac{d+1}{d-1} + 1} \, n^{2\frac{d+3}{d-1}} = C\, n^{-3}\, (\log n)^{4\frac{d+1}{d-1} + 1 }. \end{aligned}$$

From this we get

$$\begin{aligned} \max \left\{ n\sqrt{nB_1(f)}, n \sqrt{B_2(f)} \right\} \le C \, n^{-\frac{1}{2}} \,(\log n)^{ 2\frac{d+1}{d-1}+1 }. \end{aligned}$$
(3.13)

Finally, we apply Lemma 2.3 and the bounds (3.13) and (3.12) to conclude that

$$\begin{aligned} d_{{{\,\mathrm{Kol}\,}}}(W(n), Z)&\le C \left[ n\sqrt{nB_1(f)}+n\sqrt{B_2(f)}+nB_{3}(f) + n B_4(f) + \sqrt{n B_5(f)}\right] \\&\le C n^{-\frac{1}{2}} (\log n)^{3\frac{d+1}{d-1}}, \end{aligned}$$

where, as before, Z is a standard Gaussian random variable. In particular, since the last expression tends to zero as \(n\rightarrow \infty \), this implies convergence in distribution of W(n) to Z. \(\square \)

4 Proofs of other results

4.1 Random inscribed polytopes in projective Riemannian geometries

Proof of Theorem 1.1

We fix a Euclidean structure on \(\Omega \), with associated d-dimensional Lebesgue measure and \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\). We have to verify that \(\sigma _g\) and \(\Phi _g\) meet the conditions of Theorem 1.15. Indeed, by the uniqueness of the Riemannian volume measure (see Sect. 1.2), the Euclidean Lebesgue measure on K and \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\) can be considered as the Riemannian volume measures on K and \({{\,\mathrm{bd}\,}}K\), respectively, associated with the Euclidean structure. As the local expression (1.1) shows, both \(\sigma _g\) and the \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\) are given by integrating \(C^1\)-volume densities, and hence (as the space of volume densities is one-dimensional) they differ by a positive \(C^1\)-function. A similar reasoning applies to \(\Phi _g\) and the Lebesgue measure on K. Therefore, Theorem 1.15 applies here, and proves the result. \(\square \)

4.2 Random inscribed polytopes in projective Finsler metrics

Proof of Theorem 1.9

We fix a Euclidean structure on \(\Omega \), with an associated d-dimensional Lebesgue measure and \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\). We have to verify that \(\Phi \) and \(\sigma \) satisfy the assumptions of Theorem 1.15. Indeed, by definition \(\sigma \), as well as the \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K\), are given as integrals of continuous volume densities. Since the space of densities on \(T_x{{\,\mathrm{bd}\,}}K\) is one-dimensional for all \(x\in {{\,\mathrm{bd}\,}}K\), the two volume densities differ by multiplication by a positive continuous function. The same applies to \(\Phi \) and the Lebesgue measure on K. Therefore, we can apply Theorem 1.15 and deduce asymptotic normality of \(\Phi (K_F(n))\). \(\square \)

4.3 Dual Brunn–Minkowski theory

In what follows we will require the following adaptation of Theorem 1.15. We keep the assumptions of that theorem, and let T be a fixed convex body strictly contained in K that contains the origin in its interior. We write \( K_{\sigma ,T}(n)\) for the convex hull of \(K_\sigma (n)\) and T. Then we claim that the \(\Phi \)-measure of \(K_{\sigma , T}(n)\) satisfies a central limit theorem, that is,

$$\begin{aligned} \frac{\Phi (K_{\sigma ,T}(n)) - \mathbb {E}\Phi (K_{\sigma ,T}(n)) }{ \sqrt{{{\,\mathrm{Var}\,}}\Phi (K_{\sigma ,T}(n)) }} \overset{d}{\longrightarrow } Z, \end{aligned}$$
(4.1)

as \(n\rightarrow \infty \), where Z is a standard Gaussian random variable. The adaptation of the proof of Theorem 1.15 to this case is rather minor; it suffices to note that for small enough \(t>0\), T is contained in the \(\sigma \)-surface body \(K_\sigma ^t\), and hence, in view of Lemma 2.2, with overwhelming probability, \(K_{\sigma ,T}(n) = K_\sigma (n)\). We use this adaptation to prove Theorem  1.13.

Proof of Theorem 1.13

A simple integration in polar coordinates using formula (1.3) gives

$$\begin{aligned} \widetilde{V} _j(A) = {\left\{ \begin{array}{ll} \displaystyle \frac{j}{d} \int _A \Vert x\Vert ^{j-d}\, dx, &{} j > 0,\\ \displaystyle \frac{|j|}{d} \int _{\mathbb {R}^d {\setminus } A} \Vert x\Vert ^{j-d}\, dx, &{} j < 0, \end{array}\right. } \end{aligned}$$

for a convex body \(A\subset \mathbb {R}^d\) containing the origin, where dx indicates integration with respect to the Lebesgue measure on \(\mathbb {R}^d\).
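As an illustrative sanity check of this radial representation (the test case \(A = B^3\), \(j = 2\) and all function names below are ours, not from the paper): assuming (1.3) is the usual radial-function formula, the unit ball has radial function identically 1, so \(\widetilde{V}_j(B^d) = \kappa _d\) for every j, and a Monte Carlo evaluation of \(\frac{j}{d}\int _A \Vert x\Vert ^{j-d}\,dx\) should recover \(\kappa _3 = \frac{4\pi }{3}\):

```python
import math
import random

def dual_volume_mc(j, d, n=200_000, seed=0):
    """Monte Carlo estimate of (j/d) * int_A ||x||^{j-d} dx
    for A the unit ball in R^d and j > 0, sampling the cube [-1,1]^d."""
    rng = random.Random(seed)
    cube_vol = 2.0 ** d
    total = 0.0
    for _ in range(n):
        x = [rng.uniform(-1.0, 1.0) for _ in range(d)]
        r = math.sqrt(sum(t * t for t in x))
        if 0.0 < r <= 1.0:          # indicator of the unit ball
            total += r ** (j - d)   # radial density ||x||^{j-d}
    return (j / d) * cube_vol * total / n

# For the unit ball, the dual volume equals kappa_d for every j;
# with d = 3, j = 2 the estimate should be close to 4*pi/3 ≈ 4.1888.
est = dual_volume_mc(j=2, d=3)
print(est, 4 * math.pi / 3)
```

The case \(j = d\) reduces to estimating the volume of the ball itself, since the radial density is then constant.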

This formula brings the dual volumes into the framework of Theorem 1.15, with the caveat that for \(j < 0\) the density \(\Vert x\Vert ^{j-d}\) is not integrable at the origin. For \(j > 0\), however, the density \(\phi _j(x)=\frac{j}{d} \Vert x\Vert ^{j-d}\) is integrable on K and continuous near \({{\,\mathrm{bd}\,}}K\), and \( \widetilde{V} _j(K_{\sigma ,T}(n) ) = \Phi _j(K_{\sigma , T} (n))\). For \(j < 0\) we note that by definition \(K_{\sigma ,T}(n)\) contains T, so we can circumvent the problem by choosing a measure \(\Phi _j\) whose Lebesgue density \(\phi _j\) equals \(\frac{|j|}{d} \Vert x\Vert ^{j-d}\) on \( \mathbb {R}^d {\setminus } T\) and is continuous and positive on T. With this definition, \( \widetilde{V} _j(K_{\sigma ,T}(n) ) = \Phi _j(\mathbb {R}^d) -\Phi _j(K_{\sigma , T} (n))\). Therefore, for any \(j \ne 0\), the result follows immediately from the modification (4.1) of Theorem 1.15.

\(\square \)

4.4 Random polyhedral sets

Proof of Theorem 1.14

We may assume without loss of generality that K contains the origin in its interior. First, we note that by [40, Lemma 2], for a convex body L containing the origin in its interior we have \(W(L) = \frac{2}{\kappa _d} \widetilde{V} _{-1}(L^*) \), where \( \widetilde{V} _{-1}\) denotes the dual volume considered in Sect. 1.4 and \(L^*\) denotes the polar body of L, namely

$$\begin{aligned} L^* = \{ y \in \mathbb {R}^d \,:\, \forall x \in L \, \langle x,y \rangle \le 1 \}. \end{aligned}$$
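Two standard instances of this definition may help fix ideas (these examples are not used in the proof): polarity inverts the radius of a centered ball and exchanges the cube with the cross-polytope,

```latex
(r B^d)^* = \tfrac{1}{r}\, B^d \quad (r > 0),
\qquad
\bigl([-1,1]^d\bigr)^* = \{\, y \in \mathbb{R}^d : |y_1| + \cdots + |y_d| \le 1 \,\}.
```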

Therefore, denoting \(C_d := \frac{2}{\kappa _d} \) we find that

$$\begin{aligned} W(P_\sigma (n) \cap L) = C_d \widetilde{V} _{-1}({{\,\mathrm{conv}\,}}(P_\sigma (n)^* \cup L^*) ). \end{aligned}$$
(4.2)
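Identity (4.2) follows by applying [40, Lemma 2] to the body \(P_\sigma (n) \cap L\) together with the standard polarity rule \((A \cap B)^* = {{\,\mathrm{conv}\,}}(A^* \cup B^*)\), valid for convex bodies A, B containing the origin in the interior:

```latex
W\bigl(P_\sigma(n) \cap L\bigr)
  = C_d \, \widetilde{V}_{-1}\bigl( (P_\sigma(n) \cap L)^* \bigr)
  = C_d \, \widetilde{V}_{-1}\bigl( \mathrm{conv}\bigl(P_\sigma(n)^* \cup L^*\bigr) \bigr).
```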

Consider the Legendre transform \(\Lambda : {{\,\mathrm{bd}\,}}K \rightarrow {{\,\mathrm{bd}\,}}K^*\), which assigns to \(x \in {{\,\mathrm{bd}\,}}K\) the unique point \(\Lambda (x) \in {{\,\mathrm{bd}\,}}K^*\) which is proportional to the outer normal to \({{\,\mathrm{bd}\,}}K\) at x. Since by assumption K is of class \(C^2_+\), \(\Lambda \) is a diffeomorphism. Then,

$$\begin{aligned} P_\sigma (n)^* = \left( H^-(X_1) \cap \cdots \cap H^-(X_n) \right) ^* = [\Lambda (X_1), \ldots , \Lambda (X_n)]. \end{aligned}$$
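One standard way to see this equality (a bipolarity argument; the notation \(v_i\) is ours): each \(H^-(X_i)\) is the supporting halfspace \(\{\, y : \langle y, \Lambda (X_i)\rangle \le 1 \,\}\), and for vectors \(v_i = \Lambda (X_i)\) one has

```latex
\Bigl( \bigcap_{i=1}^{n} \{\, y \in \mathbb{R}^d : \langle y, v_i \rangle \le 1 \,\} \Bigr)^{*}
  = \mathrm{conv}\bigl( \{0, v_1, \ldots, v_n\} \bigr),
```

since the intersection on the left is itself the polar of \(\mathrm{conv}(\{0, v_1, \ldots , v_n\})\), and the bipolar theorem applies.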

As the \(X_i\) are independent and distributed according to \(\sigma \) on \({{\,\mathrm{bd}\,}}K\), the points \(\Lambda (X_i)\) are independent and distributed according to the push-forward measure \(\sigma ^* := \Lambda _*\sigma \) on \({{\,\mathrm{bd}\,}}K^*\). Note that \(\sigma ^*\) has a continuous and positive density with respect to the \((d-1)\)-dimensional Hausdorff measure on \({{\,\mathrm{bd}\,}}K^*\). Explicitly, according to [15, equ. 52], if \(\sigma \) has density \(\varsigma \), \(\sigma ^*\) has density \(\kappa (\Lambda ^{-1}(y)) \varsigma (\Lambda ^{-1}(y))\, \frac{\langle y, n_{K^*}(y)\rangle }{\Vert y\Vert ^d}\), where \(\kappa \) is the Gauss–Kronecker curvature of \({{\,\mathrm{bd}\,}}K\) and \(n_{K^*}(y)\) is the unit normal vector to \({{\,\mathrm{bd}\,}}K^*\) at y.

In other words, \(P_\sigma (n)^* = [\Lambda (X_1), \ldots , \Lambda (X_n)]\) is equal in distribution to \(K^*_{\sigma ^*}(n)\), the random polytope inscribed in \(K^*\) generated by n independent random points with distribution \(\sigma ^*\) on \({{\,\mathrm{bd}\,}}K^*\), and therefore, in the notation of Theorem 1.13, \( {{\,\mathrm{conv}\,}}(P_\sigma (n)^* \cup L^*) \) is equal in distribution to \( K^*_{\sigma ^*, L^*}(n)\). Combining this with (4.2), we find that the random variables \(W(P_\sigma (n) \cap L)\) and \(C_d \widetilde{V} _{-1}(K^*_{\sigma ^*, L^*}(n) )\) are equal in distribution. The result now follows from Theorem 1.13. \(\square \)