1 Introduction

We denote by \(\mathscr {K}^d\), \(d \ge 2\), the collection of all compact convex sets in the Euclidean space \(\mathbb {R}^d\) and by \(\mathscr {K}^d_{sm}\) the set of all smooth convex bodies in \(\mathscr {K}^d\), i.e., all \(K \in \mathscr {K}^d\) that have nonempty interior, a boundary of differentiability class \(\mathscr {C}^2\), and positive Gaussian curvature everywhere. Let \(\eta _t\) be a Poisson point process on \(\mathbb {R}^d\) with intensity measure \(\mu = t \Lambda _d\), where \(t > 0\) and \(\Lambda _d\) denotes the d-dimensional Lebesgue measure on \(\mathbb {R}^d\); see, e.g., [18] for more details on Poisson point processes. We fix \(K \in \mathscr {K}^d_{sm}\) and investigate the random polytope \(K_t \subset K\) defined as the convex hull of all points in \(\eta _t \cap K\),

$$\begin{aligned} K_t := {\text {conv}}\left\{ x : x \in \eta _t \cap K\right\} . \end{aligned}$$
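For intuition, the model is easy to simulate. The following minimal sketch (our illustration, not part of the original text) takes K to be the unit disk in \(\mathbb {R}^2\): the number of points of \(\eta _t \cap K\) is Poisson distributed with mean \(t \Lambda _2(K) = t\pi \), the points are i.i.d. uniform in K, and \(K_t\) is their convex hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def sample_K_t(t, rng):
    """Simulate K_t for K the unit disk: Poisson(t * area(K)) uniform points."""
    n = rng.poisson(t * np.pi)                  # eta_t(K) ~ Poisson(t * Lambda_2(K))
    r = np.sqrt(rng.uniform(size=n))            # radius via inverse CDF -> uniform in disk
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pts = np.column_stack((r * np.cos(phi), r * np.sin(phi)))
    return ConvexHull(pts) if n >= 3 else None  # K_t = conv(eta_t cap K)

rng = np.random.default_rng(0)
hull = sample_K_t(t=1000.0, rng=rng)
# In 2D, hull.volume is the area V_2(K_t); it approaches pi as t grows.
print(hull.volume, len(hull.vertices))
```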

The investigation of random convex hulls is one of the classical problems in stochastic geometry; see, for instance, the survey article [12] and the introduction to stochastic geometry [13]. Functionals like the intrinsic volumes \(V_j(K_t)\) and the components \(f_j(K_t)\) of the \(\mathbf {f}\)-vector, see Sect. 3, of the random polytope \(K_t\) have been studied extensively, see [6, 7, 25] and the references therein as well as the remarks and references in [15, Theorem 5.5] for more details.

Central limit theorems for \(V_j(K_t)\) were proven in the special case that K is the d-dimensional Euclidean unit ball, see [6, 28]. Short proofs for the binomial case \(K_n\), where n i.i.d. uniformly distributed points in K are considered instead of a Poisson point process, were derived in [30]. Recently, [15] embedded the problem for both cases, the binomial and the Poisson case, into the theory of stabilizing functionals, deriving central limit theorems for smooth convex bodies and removing the logarithmic factors from the error of approximation, thereby improving the rate of convergence.

We extend the results of [30] on the intrinsic volumes to the more general case of continuous and motion invariant valuations. This includes the total intrinsic volume functional (the Wills functional) \(W(K_t)\) as well as a central limit theorem for the intrinsic volumes in the Poisson case similar to [30, Theorem 1.1]. We also obtain a univariate central limit theorem for the oracle estimator \(\hat{\vartheta }_\mathrm{oracle}\) that was derived in the remarkable work [1] on the estimation of the volume of a convex body. Finally we investigate the components of the \(\mathbf {f}\)-vector \(f_j(K_t)\), \(j \in \left\{ 0,\ldots ,d-1\right\} \), defined as the number of j-dimensional faces of \(K_t\), and derive a multivariate limit theorem for the random vector composed of the intrinsic volumes and the \(\mathbf {f}\)-vector.

For a continuous and motion invariant valuation \(\varphi : \mathscr {K}^d \rightarrow \mathbb {R}\), we define the random variable \(\varphi (K_t)\) and the corresponding standardization

$$\begin{aligned} \tilde{\varphi }(K_t) := \frac{\varphi (K_t)-{\mathbb {E}}\left[ \varphi (K_t)\right] }{\sqrt{{\mathbb {V}}\left[ \varphi (K_t)\right] }}. \end{aligned}$$
(1)

The remarkable theorem of Hadwiger, see Theorem 6 and [8, 10, 14] for more details and different proofs, states that every continuous motion invariant valuation can be decomposed into a linear combination of intrinsic volumes, i.e., for all \(L \in \mathscr {K}^d\) it holds that \(\varphi (L) = \sum _{i = 0}^d c_i V_i(L)\), where \(V_i\) denotes the ith intrinsic volume and \(c_i \in \mathbb {R}\) are constants depending only on \(\varphi \).

We prove a central limit theorem for \(t \rightarrow \infty \) under some additional assumptions on the coefficients \(c_i\) in this linear decomposition, which ensure that our valuation functional does not lose variance compared to the intrinsic volume functionals \(V_j(K_t)\). For instance, the Wills functional below, with \(c_i = 1\) for all i, satisfies these assumptions, whereas \(\varphi = V_d - V_1\) does not, since \(c_1 c_d = -1 < 0\).

We denote by \({\text {d}}_W(X,Y) := \sup _{h \in \mathrm{Lip}\left( 1\right) } \left| {\mathbb {E}}\left[ h(X)\right] - {\mathbb {E}}\left[ h(Y)\right] \right| \) the Wasserstein distance between the laws of X and Y where \(\mathrm{Lip}\left( 1\right) \) denotes the class of all 1-Lipschitz functions \(h:\mathbb {R}\rightarrow \mathbb {R}\), see (9) in Sect. 2.1 for the precise definition.

Theorem 1

(Univariate Limit Theorem) Assume that \(\varphi \) is a continuous and motion invariant valuation, with the linear decomposition (13) given by Hadwiger's theorem, such that \(c_i c_j \ge 0\) for all \(i,j \in \left\{ 0, \ldots , d\right\} \), and suppose that there exists at least one index \(k \in \left\{ 1, \ldots , d\right\} \) such that \(c_k \ne 0\). Then

$$\begin{aligned} {\text {d}}_W(\tilde{\varphi }(K_t), Z) = \mathscr {O}\left( t^{-\frac{1}{2} + \frac{1}{d+1}} \log (t)^{3 + \frac{2}{d+1}}\right) , \end{aligned}$$
(2)

where \(Z \overset{{d}}{\sim }\mathscr {N}(0,1)\).

For \(j \in \left\{ 1, \ldots , d\right\} \), we directly obtain the central limit theorem for the standardized jth intrinsic volume in the Poisson model by setting \(\varphi (K_t) = V_j(K_t)\). Furthermore, setting the coefficients \(c_j = 1\), \(j \in \left\{ 0,\ldots ,d\right\} \), in Theorem 1 yields a central limit theorem with rate of convergence for the total intrinsic volume, also known as the Wills functional, see [11, 19, 32].

Corollary 1  (Wills functional) Let \(W(K_t)\) denote the Wills functional, defined by \(W(K_t) = \sum _{j=0}^d V_j(K_t)\), and denote by \(\tilde{W}(K_t)\) the corresponding standardization. Then

$$\begin{aligned} {\text {d}}_W(\tilde{W}(K_t), Z) = \mathscr {O}\left( t^{-\frac{1}{2} + \frac{1}{d+1}} \log (t)^{3 + \frac{2}{d+1}}\right) , \end{aligned}$$
(3)

where \(Z \overset{{d}}{\sim }\mathscr {N}(0,1)\).

In the remarkable work [1] on the estimation of the volume \(V_d(K)\) of a convex body, given that the intensity \(t > 0\) is known, the oracle estimator

$$\begin{aligned} \hat{\vartheta }_\mathrm{oracle}(K_t) = V_d(K_t) + \frac{f_0(K_t)}{t}, \end{aligned}$$

is derived. This estimator is unbiased,

$$\begin{aligned} {\mathbb {E}}\left[ \hat{\vartheta }_\mathrm{oracle}(K_t)\right] = V_d(K), \end{aligned}$$

and of minimal variance among all unbiased estimators (UMVU), see [1, eq. (3.2), Theorem 3.2]. Additionally, the variance can be obtained by combining [1, Theorem 3.2] with [23, Lemma 1], yielding

$$\begin{aligned} {\mathbb {V}}\left[ \hat{\vartheta }_\mathrm{oracle}(K_t)\right] = \frac{1}{t} {\mathbb {E}}\left[ V_d(K \setminus K_t)\right] = \gamma _d \Omega (K)(1+o(1))t^{-1-\frac{2}{d+1}}, \end{aligned}$$
(4)

for \(t \rightarrow \infty \), where the constant \(\gamma _d\) depends only on the dimension and is known explicitly, and \(\Omega (K)\) denotes the affine surface area of K. This enables us to prove the following univariate limit theorem for the oracle estimator:

Theorem 2

(Oracle estimator) Let \(\hat{\vartheta }_\mathrm{oracle}\) be the oracle estimator for a smooth convex body \(K \in \mathscr {K}^d_{sm}\) and denote by \(\tilde{\vartheta }_\mathrm{oracle}\) the corresponding standardization. Then

$$\begin{aligned} {\text {d}}_W(\tilde{\vartheta }_\mathrm{oracle}(K_t),Z) = \mathscr {O}\left( t^{-\frac{1}{2}+\frac{1}{d+1}} \log (t)^{3d+\frac{2}{d+1}}\right) , \end{aligned}$$
(5)

where \(Z \overset{{d}}{\sim }\mathscr {N}(0,1)\).
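For intuition, the oracle estimator is a one-line addition to the simulation sketch from the introduction (again our illustration for K the unit disk, not the authors' code): in the plane, \(V_2(K_t)\) is the hull area and \(f_0(K_t)\) is the number of hull vertices, so the estimates concentrate around \(V_2(K) = \pi \).

```python
import numpy as np
from scipy.spatial import ConvexHull

def oracle_estimate(t, rng):
    """theta_oracle(K_t) = V_d(K_t) + f_0(K_t) / t, here for K the unit disk."""
    n = rng.poisson(t * np.pi)
    r = np.sqrt(rng.uniform(size=n))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    hull = ConvexHull(np.column_stack((r * np.cos(phi), r * np.sin(phi))))
    return hull.volume + len(hull.vertices) / t  # V_2(K_t) + f_0(K_t)/t

rng = np.random.default_rng(1)
estimates = [oracle_estimate(t=500.0, rng=rng) for _ in range(200)]
print(np.mean(estimates), np.pi)  # the sample mean is close to V_2(K) = pi
```

Consistent with (4), the empirical variance of these estimates decays like \(t^{-1-\frac{2}{d+1}}\), i.e., \(t^{-5/3}\) in the plane.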

Finally we provide a multivariate limit theorem for the intrinsic volumes and the components of the \(\mathbf {f}\)-vector of the random polytope \(K_t\), which were considered in [15, Theorem 5.5] in the univariate case. We note that the result of [15] suggests that it should be possible to remove the logarithmic factors in the speed of convergence of the multivariate limit theorem, since these factors are due to the floating-body approach. Unfortunately, to the best of our knowledge, there is no multivariate version of the method provided in [15] available at the moment. Thus we decided to rely on the floating-body approach for all results.

We denote by \({\text {d}}_3(X,Y) := \sup _{h \in \mathscr {H}_m}\left| {\mathbb {E}}\left[ h(X)\right] - {\mathbb {E}}\left[ h(Y)\right] \right| \) the \({\text {d}}_3\)-distance between the laws of X and Y, where \(\mathscr {H}_m\) denotes the class of all \(\mathscr {C}^3\)-functions \(h:\mathbb {R}^m \rightarrow \mathbb {R}\) such that all absolute values of the second and third partial derivatives are bounded by one. See (10) in Sect. 2.2 for the precise definition.

Theorem 3

(Multivariate Limit Theorem) Let

$$\begin{aligned} F(K_t) : = \left( \tilde{V}_1(K_t), \ldots , \tilde{V}_d(K_t), \tilde{f}_0(K_t), \ldots , \tilde{f}_{d-1}(K_t)\right) \in \mathbb {R}^{2d} \end{aligned}$$
(6)

be the vector of the standardized intrinsic volumes \(\tilde{V}_j\) and standardized number of k-dimensional faces \(\tilde{f}_k(K_t)\), i.e.,

$$\begin{aligned} \tilde{V}_j(K_t) := \frac{V_j(K_t)-{\mathbb {E}}\left[ V_j(K_t)\right] }{\sqrt{{\mathbb {V}}\left[ V_j(K_t)\right] }} \quad \text {and} \quad \tilde{f}_k(K_t) := \frac{f_k(K_t)-{\mathbb {E}}\left[ f_k(K_t)\right] }{\sqrt{{\mathbb {V}}\left[ f_k(K_t)\right] }}. \end{aligned}$$
(7)

We denote by \(F_i := F_i(K_t)\), \(i = 1, \ldots , 2d\), the ith component of the multivariate functional \(F(K_t)\). Define \(\Sigma (t) = (\sigma _{ij}(t))_{i,j \in \left\{ 1, \ldots , 2d\right\} }\) as the covariance matrix of \(F(K_t)\), i.e., \(\sigma _{ij}(t) := {\mathrm {Cov}}\left[ F_i,F_j\right] \); by standardization, \(\sigma _{ii}(t) = 1\) for all \(i \in \left\{ 1, \ldots , 2d\right\} \) and all \(t > 0\). Then

$$\begin{aligned} {\text {d}}_3(F(K_t), N_{\Sigma (t)}) = \mathscr {O}\left( t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{3d+\frac{2}{d+1}}\right) , \end{aligned}$$
(8)

where \(N_{\Sigma (t)} \overset{{d}}{\sim }\mathscr {N}(0, \Sigma (t))\).

Note that \(N_{\Sigma (t)}\) still depends on the intensity t. This gives rise to the following questions:

Open Problems The limits of the variances and covariances are still unknown; thus, we cannot set \(\sigma _{ij}\) to be the limit of the correlations (rescaled covariances) \(\sigma _{ij}(t)\). These limits would give rise to a limit theorem providing a fixed multivariate Gaussian distribution \(\mathscr {N}(0, \Sigma )\) with fixed covariance matrix \(\Sigma \). In this case, the rate of the limit theorem would also contain the rate of convergence of the correlations on the right-hand side of the bound; thus, it would be beneficial to obtain the limit \(\sigma _{ij}\) of \(\sigma _{ij}(t)\) together with an upper bound for \(\left| \sigma _{ij}(t) - \sigma _{ij}\right| \), since the convergence of the correlations could be slower than the rate given in Theorem 3, slowing down the overall convergence. We should mention that Calka, Schreiber and Yukich were able to derive limits for the variances in the case that K is the Euclidean unit ball using the paraboloid growth process, see [6], but to the best of our knowledge there are no results on the limit of the variance in a general (smooth) convex body and also no results on the covariances of \(F(K_t)\).

The Euler–Poincaré equation and, more generally, the Dehn–Sommerville equations, see [33, Chapter 8], imply linear dependencies among the components of the \(\mathbf {f}\)-vector. Thus the covariance matrix \(\Sigma (t)\) is singular and therefore \({\mathrm {rank}}\left( \Sigma (t)\right) < 2d\), which raises the questions of what \({\mathrm {rank}}\left( \Sigma (t)\right) \) actually is and whether this also applies to the limiting covariance matrix \(\Sigma \).

Remark 1

Note that the univariate results can be derived for the \(d_3\)-distance using the multivariate result. Since the additional work needed to prove the univariate results alongside the multivariate limit theorem is small, we decided to prove the univariate results directly in the Wasserstein distance.

The paper is organized as follows: For the convenience of the reader, we review the relevant material on the Malliavin–Stein method for normal approximation of Poisson functionals in Sect. 2. In Sect. 3 we introduce some background material on convex geometry and corresponding estimates using floating bodies, without proofs, to keep our presentation reasonably self-contained. The proofs of our main results are presented in Sect. 4, starting with the central limit theorem for valuations, Theorem 1, in Sect. 4.1, which handles the intrinsic volumes. In Sect. 4.2 we prove the multivariate limit theorem, Theorem 3, by extending our proof to the components of the \(\mathbf {f}\)-vector. Finally we combine the results derived in the previous proofs to obtain the central limit theorem for the oracle estimator, Theorem 2.

2 Stochastic Preliminaries

Let \(\eta \) be a Poisson point process on the Euclidean space \(\mathbb {R}^d\) with Borel \(\sigma \)-field \(\mathscr {B}^d\) and intensity measure \(\mu \). One can think of \(\eta \) as a random element in the space \(\mathrm {N}_\sigma \) of all \(\sigma \)-finite counting measures \(\chi \) on \(\mathbb {R}^d\), i.e., \(\chi (B) \in \mathbb {N}\cup \{ \infty \}\) for all \(B \in \mathscr {B}^d\), where the space \(\mathrm {N}_\sigma \) is equipped with the \(\sigma \)-field \(\mathscr {N}_\sigma \) generated by the mappings \(\chi \mapsto \chi (B)\), \(B \in \mathscr {B}^d\). To simplify our notation, we will often treat \(\eta \) as a random set of points given by

$$\begin{aligned} x \in \eta \Leftrightarrow x \in \left\{ y \in \mathbb {R}^d : \eta (\left\{ y\right\} ) > 0\right\} . \end{aligned}$$

We call a random variable F a Poisson functional if there exists a measurable map \(f:\mathrm {N}_\sigma \rightarrow \mathbb {R}\) such that \(F = f(\eta )\) almost surely. The map f is called the representative of F. We define the difference operator or so-called add-one-cost operator:

Definition 1

Let F be a Poisson functional and f its corresponding representative, then the first-order difference operator is given by

$$\begin{aligned} D_xF := f(\eta + \delta _x) - f(\eta ), \quad x \in \mathbb {R}^d, \end{aligned}$$

where \(\delta _x\) denotes the Dirac measure with mass concentrated at x. We say that F belongs to the domain of the difference operator, i.e., \(F \in {\text {dom}}\left( D\right) \), if \({\mathbb {E}}\left[ F^2\right] < \infty \) and

$$\begin{aligned} \int \limits _{\mathbb {R}^d} \! {\mathbb {E}}\left[ (D_xF)^2\right] \, \mu ({\mathrm {d}}x) < \infty . \end{aligned}$$

The second-order difference operator is obtained by iterating the definition:

$$\begin{aligned} D_{x_1,x_2}^2F&:= D_{x_1}(D_{x_2}F)\\&\;= f(\eta + \delta _{x_1} + \delta _{x_2}) - f(\eta + \delta _{x_1}) - f(\eta + \delta _{x_2}) + f(\eta ), \quad x_1, x_2 \in \mathbb {R}^d. \end{aligned}$$

For a deeper discussion of the underlying theory of Poisson point processes, Malliavin calculus, Wiener-Itô-Chaos expansion and Malliavin–Stein method, see [18, 20].
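The add-one-cost operator is easy to evaluate empirically for hull functionals. The following sketch (our illustration; the functional \(F = V_2(K_t)\) for K the unit disk is chosen for concreteness) computes \(D_xF = f(\eta + \delta _x) - f(\eta )\) for a given point x; note that \(D_xF = 0\) whenever x already lies in the convex hull of \(\eta \).

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(pts):
    """A representative f: the area of the convex hull of the configuration."""
    return ConvexHull(pts).volume if len(pts) >= 3 else 0.0

def add_one_cost(pts, x):
    """First-order difference operator D_x F = f(eta + delta_x) - f(eta)."""
    return hull_area(np.vstack((pts, x))) - hull_area(pts)

rng = np.random.default_rng(2)
r = np.sqrt(rng.uniform(size=200))
phi = rng.uniform(0.0, 2.0 * np.pi, size=200)
eta = np.column_stack((r * np.cos(phi), r * np.sin(phi)))
print(add_one_cost(eta, np.array([0.99, 0.0])))  # near the boundary: typically > 0
print(add_one_cost(eta, np.array([0.0, 0.0])))   # interior point: D_x F = 0 (up to rounding)
```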

2.1 Malliavin–Stein Method for the Univariate Case

We denote by \(\mathrm{Lip}\left( 1\right) \) the class of Lipschitz functions \(h:\mathbb {R}\rightarrow \mathbb {R}\) with Lipschitz constant less than or equal to one. Given two \(\mathbb {R}\)-valued random variables X, Y with \({\mathbb {E}}\left|X \right| < \infty \) and \({\mathbb {E}}\left|Y \right| < \infty \), the Wasserstein distance between the laws of X and Y, written as \({\text {d}}_W(X,Y)\), is defined as

$$\begin{aligned} {\text {d}}_W(X,Y) := \sup \limits _{h \in \mathrm{Lip}\left( 1\right) } \left| {\mathbb {E}}\left[ h(X)\right] - {\mathbb {E}}\left[ h(Y)\right] \right| . \end{aligned}$$
(9)

If a sequence \((X_n)\) of random variables satisfies \(\lim _{n \rightarrow \infty } {\text {d}}_W(X_n, Y) = 0\), then \(X_n\) converges to Y in distribution, see [18, p. 219 and Proposition B.9]. In particular, if Y has a standard Gaussian distribution, we obtain a central limit theorem by showing \({\text {d}}_W(X_n, Y) \rightarrow 0\), which we will achieve by rephrasing the bound given by [17, Theorem 1.1], which is an extension of [21, Theorem 3.1]; see also [18, Chapters 21.1, 21.2] for a slightly different form and proofs.

Theorem 4

Let \(F \in {\text {dom}}\left( D\right) \) be a Poisson functional such that \({\mathbb {E}}\left[ F\right] = 0\) and \({\mathbb {V}}\left[ F\right] = 1\). Define

$$\begin{aligned} \tau _1&:= \int \limits _{K^3} \! \left( {\mathbb {E}}\left[ (D_{x_1,x_3}^2F)^4\right] {\mathbb {E}}\left[ (D_{x_2,x_3}^2 F)^4\right] \right. \\&\quad \times \, \left. {\mathbb {E}}\left[ (D_{x_1}F)^4\right] {\mathbb {E}}\left[ (D_{x_2}F)^4\right] \right) ^{\frac{1}{4}} \mu ^3({\mathrm {d}}(x_1, x_2, x_3))\\ \tau _2&:= \int \limits _{K^3} \! \left( {\mathbb {E}}\left[ (D_{x_1,x_3}^2 F)^4\right] {\mathbb {E}}\left[ (D_{x_2,x_3}^2 F)^4\right] \right) ^{\frac{1}{2}} \mu ^3({\mathrm {d}}(x_1, x_2, x_3))\\ \tau _3&:= \int \limits _{K} {\mathbb {E}}\left|D_xF \right|^3 \mu ({\mathrm {d}}x). \end{aligned}$$

Let Z be a standard Gaussian random variable. Then

$$\begin{aligned} {\text {d}}_W(F,Z) \le 2 \sqrt{\tau _1} + \sqrt{\tau _2} + \tau _3. \end{aligned}$$

2.2 Malliavin–Stein Method for the Multivariate Case

We denote by \(\mathscr {H}_m\) the class of all \(\mathscr {C}^3\)-functions \(h:\mathbb {R}^m \rightarrow \mathbb {R}\) such that all absolute values of the second and third partial derivatives are bounded by one, i.e.,

$$\begin{aligned} \max \limits _{1 \le i_1 \le i_2 \le m} \sup \limits _{x \in \mathbb {R}^m} \left| \frac{\partial ^2}{\partial x_{i_1} \partial x_{i_2}} h(x)\right|&\le 1 \end{aligned}$$

and

$$\begin{aligned} \max \limits _{1 \le i_1 \le i_2 \le i_3 \le m} \sup \limits _{x \in \mathbb {R}^m} \left| \frac{\partial ^3}{\partial x_{i_1} \partial x_{i_2} \partial x_{i_3}} h(x)\right|&\le 1. \end{aligned}$$

Given two \(\mathbb {R}^m\)-valued random vectors X, Y with \({\mathbb {E}}\left\| X\right\| ^2 < \infty \) and \({\mathbb {E}}\left\| Y\right\| ^2 < \infty \), the distance \({\text {d}}_3\) between the laws of X and Y, written as \({\text {d}}_3(X,Y)\), is defined as

$$\begin{aligned} {\text {d}}_3(X,Y) := \sup \limits _{h \in \mathscr {H}_m}\left| {\mathbb {E}}\left[ h(X)\right] - {\mathbb {E}}\left[ h(Y)\right] \right| . \end{aligned}$$
(10)

If a sequence \((X_n)\) of random vectors satisfies \(\lim _{n \rightarrow \infty } {\text {d}}_3(X_n, Y) = 0\), then \(X_n\) converges to Y in distribution, see [22, Remark 2.16]. In particular, if Y is an m-dimensional centered Gaussian vector with covariance matrix \(\Sigma \in \mathbb {R}^{m \times m}\), we obtain a multivariate limit theorem by showing \({\text {d}}_3(X_n, Y) \rightarrow 0\). We will achieve this, similarly to the univariate central limit theorem, by rephrasing the bound given by [29, Theorem 1.1], which extends [22] and provides the multivariate analogue of the univariate result derived in [17], stated here as Theorem 4.

Theorem 5

Let \(F = (F_1, \ldots , F_m)\) with \(m \ge 2\) be a vector of Poisson functionals \(F_1, \ldots , F_m \in {\text {dom}}\left( D\right) \) with \({\mathbb {E}}\left[ F_i\right] = 0\), \(i \in \left\{ 1, \ldots , m\right\} \). Define

$$\begin{aligned} \gamma _1&:= \sum \limits _{i,j=1}^m \int \limits _{K^3} \left( {\mathbb {E}}\left[ (D_{x_1,x_3}^2F_i)^4\right] {\mathbb {E}}\left[ (D_{x_2,x_3}^2 F_i)^4\right] \right. \\&\quad \times \, \left. {\mathbb {E}}\left[ (D_{x_1}F_j)^4\right] {\mathbb {E}}\left[ (D_{x_2}F_j)^4\right] \right) ^{\frac{1}{4}} \mu ^3({\mathrm {d}}(x_1, x_2, x_3))\\ \gamma _2&:= \sum \limits _{i,j=1}^m \int \limits _{K^3} \left( {\mathbb {E}}\left[ (D_{x_1,x_3}^2F_i)^4\right] {\mathbb {E}}\left[ (D_{x_2,x_3}^2 F_i)^4\right] \right. \\&\quad \times \, \left. {\mathbb {E}}\left[ (D_{x_1,x_3}^2F_j)^4\right] {\mathbb {E}}\left[ (D_{x_2,x_3}^2 F_j)^4\right] \right) ^{\frac{1}{4}} \mu ^3({\mathrm {d}}(x_1, x_2, x_3))\\ \gamma _3&:= \sum \limits _{i=1}^m \int \limits _{K} {\mathbb {E}}\left|D_xF_i \right|^3 \mu ({\mathrm {d}}x) \end{aligned}$$

and let \(\Sigma = (\sigma _{ij})_{i,j \in \left\{ 1, \ldots , m\right\} } \in \mathbb {R}^{m \times m}\) be positive semi-definite, then

$$\begin{aligned} {\text {d}}_3(F,N_\Sigma ) \le \frac{m}{2} \sum \limits _{i,j=1}^m \left| \sigma _{ij} - {\mathrm {Cov}}\left[ F_i,F_j\right] \right| + m \sqrt{\gamma _1} + \frac{m}{2} \sqrt{\gamma _2} + \frac{m^2}{4} \gamma _3. \end{aligned}$$

3 Geometric Preliminaries

Fix \(j \in \left\{ 1, \ldots , d\right\} \) and let G(d,j) denote the Grassmannian of all j-dimensional linear subspaces of \(\mathbb {R}^d\) equipped with the uniquely determined Haar probability measure \(\nu _j\), see [26, Section 4.4]. For \(k \in \mathbb {N}\), the k-dimensional volume of the k-dimensional unit ball \(\mathbb {B}^k\) is denoted by \(\kappa _k := \pi ^{\frac{k}{2}}\Gamma \left( 1+\frac{k}{2}\right) ^{-1}\).

For a convex body \(K \in \mathscr {K}^d\), the j-dimensional Lebesgue measure of the orthogonal projection of K onto the linear subspace \(\mathbb {L}\in G(d,j)\) is denoted by \(\Lambda _j(K \vert \mathbb {L})\).

For \(j \in \left\{ 1,\ldots ,d\right\} \), the jth intrinsic volume of K is given by Kubota’s formula, see [27, p. 222]:

$$\begin{aligned} V_j(K) = \left( {\begin{array}{c}d\\ j\end{array}}\right) \frac{\kappa _d}{\kappa _j \kappa _{d-j}} \int \limits _{G(d,j)} \Lambda _j(K \vert \mathbb {L}) \nu _j({\mathrm {d}}\mathbb {L}) \end{aligned}$$
(11)

and for \(j = 0\) the 0th intrinsic volume of K, \(V_0(K)\), is the Euler characteristic of K, and therefore we have \(V_0(K) = {\mathbb {1}}\left\{ K \ne \emptyset \right\} \). It is worth mentioning that \(V_1(K)\) and \(V_{d-1}(K)\) are, up to multiplicative constants, the mean width and the surface area, respectively, and \(V_d(K)\) equals the d-dimensional Lebesgue volume of K. The intrinsic volumes are crucial examples of continuous, motion invariant valuations:

Definition 2

A real function on the space of convex bodies, \(\varphi : \mathscr {K}^d \rightarrow \mathbb {R}\), is called a valuation if and only if

$$\begin{aligned} \varphi (K) + \varphi (L) = \varphi (K \cup L) + \varphi (K \cap L) \end{aligned}$$
(12)

holds, whenever \(K, L, K \cup L \in \mathscr {K}^d\). It is called continuous if it is continuous with respect to the Hausdorff metric on \(\mathscr {K}^d\), and it is called invariant under rigid motions if it is invariant under translations and rotations of \(\mathbb {R}^d\).

The theorem of Hadwiger [8, 10, 14] states that every continuous and motion invariant valuation \(\varphi : \mathscr {K}^d \rightarrow \mathbb {R}\) can be decomposed into a linear combination of intrinsic volumes:

Theorem 6

(Hadwiger) Let \(\varphi \) be a continuous and motion invariant valuation. Then there exist coefficients \(c_i \in \mathbb {R}\), \(i \in \left\{ 0, \ldots , d\right\} \), such that for all convex sets \(L \in \mathscr {K}^d\), it holds that

$$\begin{aligned} \varphi (L) = \sum \limits _{i = 0}^d c_i V_i(L), \end{aligned}$$
(13)

where \(V_i\) denotes the ith intrinsic volume.

For further information on Hadwiger’s theorem, convex geometry and integral geometry, we refer the reader to [9, 26, 27].
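Kubota's formula (11) also lends itself to a quick Monte Carlo sanity check (our illustration, not part of the original text). For \(d = 2\), \(j = 1\) and \(K = [0,1]^2\), the constant is \(c(2,1) = \left( {\begin{array}{c}2\\ 1\end{array}}\right) \kappa _2 / (\kappa _1 \kappa _1) = \pi /2\), the Haar measure \(\nu _1\) is parametrized by a uniform angle, and the estimate should approach \(V_1(K) = 2\), half the perimeter of the square.

```python
import numpy as np

# Monte Carlo check of Kubota's formula for d = 2, j = 1 and K = [0,1]^2:
# V_1(K) = (pi/2) * E[ Lambda_1(K | L) ] over a Haar-random line L in G(2,1).
rng = np.random.default_rng(3)
corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)

thetas = rng.uniform(0.0, np.pi, size=100_000)   # uniform angle = Haar on G(2,1)
dirs = np.column_stack((np.cos(thetas), np.sin(thetas)))
proj = corners @ dirs.T                          # 1D projections of the vertices
lengths = proj.max(axis=0) - proj.min(axis=0)    # Lambda_1(K | L) for each line

print((np.pi / 2.0) * lengths.mean())            # ~ 2.0 = half the perimeter
```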

Let \(P \in \mathscr {K}^d\) be a polytope and \(i \in \left\{ 0, \ldots , d\right\} \). We denote by \(\mathscr {F}_i(P)\) the set of all i-dimensional faces, i-faces for short, of P and by \(\mathscr {F}_i^{{\text {Vis}}}(P,x)\) the restriction to those i-faces that can be seen from x, where we consider a face \(\mathfrak {F}\) of P to be seen from x if every point \(z \in \mathfrak {F}\) can be connected to x by a straight line [z,x] whose intersection with P contains only the starting point z, i.e.,

$$\begin{aligned} \mathscr {F}_i^{{\text {Vis}}}(P,x) := \left\{ \mathfrak {F}\in \mathscr {F}_i(P) : \forall z \in \mathfrak {F}: [x,z] \cap P = \left\{ z\right\} \right\} . \end{aligned}$$

The sets of all faces resp. all visible faces are denoted by \(\mathscr {F}(P) := \bigcup _{i=0}^d \mathscr {F}_i(P)\) resp. \(\mathscr {F}^{{\text {Vis}}}(P,x) := \bigcup _{i=0}^d \mathscr {F}_i^{{\text {Vis}}}(P,x)\).

For a vertex \(v \in \mathscr {F}_0(P)\), the link of v in P is the set of all faces of P that do not contain v but are contained in a (higher dimensional) face that contains v, i.e.,

$$\begin{aligned} {\text {link}}(P, v) := \left\{ \mathfrak {F}\in \mathscr {F}(P) : v \not \in \mathfrak {F}\text { and } \exists \mathfrak {G}\in \mathscr {F}(P) : \mathfrak {F}\subset \mathfrak {G}\ni v\right\} , \end{aligned}$$

see [33, Chapter 8.1] for a recent account of the theory.

The number of i-dimensional faces of P will be denoted by \(f_i(P)\), i.e.,

$$\begin{aligned} f_i(P) = \left| \mathscr {F}_i(P)\right| . \end{aligned}$$

Note that the vector \((f_{-1}(P), f_0(P), \ldots , f_d(P))\) with \(f_{-1}(P) := V_0(P)\) is the \(\mathbf {f}\)-vector of P, see [33, Definition 8.16, p. 245] for more details.
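For simplicial polytopes such as \(K_t\), the components of the \(\mathbf {f}\)-vector can be read off directly from a convex hull computation. The following sketch (our illustration, for points in the unit ball in \(\mathbb {R}^3\)) extracts \(f_0\), \(f_1\), \(f_2\) and verifies the Euler–Poincaré relation \(f_0 - f_1 + f_2 = 2\) underlying the rank discussion after Theorem 3.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # directions on the sphere
pts *= rng.uniform(size=(500, 1)) ** (1.0 / 3.0)   # radii -> uniform in the ball

hull = ConvexHull(pts)            # simplicial almost surely (general position)
f0 = len(hull.vertices)           # number of vertices
f2 = len(hull.simplices)          # number of triangular facets
f1 = 3 * f2 // 2                  # each edge lies in exactly two triangles
print(f0, f1, f2, f0 - f1 + f2)   # Euler-Poincare: f0 - f1 + f2 = 2
```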

3.1 Geometric Estimations

We introduce the notion of the \(\varepsilon \)-floating body, following [25, Section 2.2.3]. For a fixed \(K \in \mathscr {K}^d\) and a closed halfspace H, we call the intersection \(C = H \cap K\) a cap of K. If C has volume \(V_d(C) = \varepsilon \), we call C an \(\varepsilon \)-cap of K. We define the function \(v: K \rightarrow \mathbb {R}\) by

$$\begin{aligned} v(z) = \min \left\{ V_d(K \cap H) : H \text { is a halfspace with } z \in H\right\} , \end{aligned}$$

and the floating body with parameter \(\varepsilon \), \(\varepsilon \)-floating body for short, as the level set

$$\begin{aligned} K(v \ge \varepsilon ) = \left\{ z \in K : v(z) \ge \varepsilon \right\} , \end{aligned}$$

which is convex, since it is the intersection of halfspaces. The wet part of K is defined as \(K(v \le \varepsilon )\), where the name comes from the three-dimensional picture when K is a box containing \(\varepsilon \) units of water. Note that the \(\varepsilon \)-floating body is (up to its boundary) the set that remains of K if all \(\varepsilon \)-caps are removed, and the wet part is the union of these caps. For the convenience of the reader, we will only use the notation \(K(v \ge \varepsilon )\) for the floating body, to prevent confusion with the wet part denoted by \(K(v \le \varepsilon )\). From now on, we will assume that the parameter \(\varepsilon > 0\) is sufficiently small. Thus we can use the following lemmas from [31, Lemmas 6.1–6.3]:

Lemma 1

Let C be an \(\varepsilon \)-cap of K, then there are two constants \(c_1, c_2 \in (0,\infty )\), not depending on \(\varepsilon \), such that the diameter of C, \({{\text {diam}}}(C) = \sup _{x,y \in C}\left\| x-y\right\| \), is bounded by

$$\begin{aligned} c_1 \varepsilon ^{\frac{1}{d+1}} \le {{\text {diam}}}(C) \le c_2 \varepsilon ^{\frac{1}{d+1}}. \end{aligned}$$

Lemma 2

Let x be a point on the boundary \(\partial K\) of K and \(D(x,\varepsilon )\) the set of all points on the boundary which are of distance at most \(\varepsilon \) to x. Then the convex hull of \(D(x,\varepsilon )\) has volume at most \(c_3 \varepsilon ^{d+1}\), where \(c_3 \in (0,\infty )\) is some constant not depending on \(\varepsilon \).

Lemma 3

Let C be an \(\varepsilon \)-cap of K. The union of all \(\varepsilon \)-caps intersecting C has volume at most \(c_4 \varepsilon \), where \(c_4 \in (0,\infty )\) is some constant not depending on \(\varepsilon \).
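To see where the exponent \(\tfrac{1}{d+1}\) in Lemma 1 comes from, consider the model case where K is the unit ball (a sanity check we add for illustration): a cap of height h has volume of order \(h^{\frac{d+1}{2}}\) and base radius of order \(\sqrt{h}\), so for an \(\varepsilon \)-cap,

$$\begin{aligned} \varepsilon = V_d(C) \approx c\, h^{\frac{d+1}{2}} \quad \Longrightarrow \quad h \approx c'\, \varepsilon ^{\frac{2}{d+1}} \quad \Longrightarrow \quad {{\text {diam}}}(C) \approx c''\, \sqrt{h} \approx c'''\, \varepsilon ^{\frac{1}{d+1}}, \end{aligned}$$

with constants depending only on d. With \(\varepsilon = \varepsilon _t = c \log (t)/t\), this explains the scale \((\log (t)/t)^{\frac{1}{d+1}}\) appearing throughout Sect. 4.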

4 Proofs of the Main Results

To shorten our notation, we write \(K^x_t\) resp. \(K^y_t\) for the convex hull of \((\eta _t + \delta _x) \cap K\) resp. \((\eta _t + \delta _y) \cap K\), and \(K^{xy}_t\) for the convex hull of \((\eta _t + \delta _x + \delta _y) \cap K\). Further we will use \(\mathbf {C}\in (0,\infty )\) to denote a constant that can depend on the dimension and the convex set K, but is independent of the intensity t of our Poisson point process. For the sake of brevity, we will not mention these properties of \(\mathbf {C}\) in the following; additionally, the value of \(\mathbf {C}\) may differ from line to line. We will use \(g(t) \ll f(t)\) to indicate that g(t) is of order at most f(t), i.e.,

$$\begin{aligned} g(t) \ll f(t)&:\Leftrightarrow g(t) = \mathscr {O}\left( f(t)\right) \\&\;\;\Leftrightarrow \exists c> 0, t_0> 0 : \forall t > t_0 : g(t) \le c f(t), \end{aligned}$$

where c and \(t_0\) are constants not depending on t. We will use \(g(t) = \Theta (f(t))\) to indicate that g(t) is of the same order as f(t), i.e.,

$$\begin{aligned} g(t) = \Theta (f(t)) :\Leftrightarrow f(t) = \mathscr {O}\left( g(t)\right) \text { and } g(t) = \mathscr {O}\left( f(t)\right) . \end{aligned}$$

For sufficiently large \(t > 0\), we define \(\varepsilon _t := c \tfrac{\log (t)}{t}\) with \(c > 0\) and denote by \(K(v \ge \varepsilon _t)\) the \(\varepsilon _t\)-floating body of K. Let \(A(\varepsilon _t, t) := \left\{ K(v \ge \varepsilon _t) \subseteq K_t\right\} \) be the event that the \(\varepsilon _t\)-floating body is contained in the random polytope \(K_t\). Recall the well-known lemma from [3, 31] and [25, Lemma 2.2] in a slightly modified version for the Poisson case:

Lemma 4

For any \(\beta \in (0,\infty )\) there exists a constant \(c(\beta , d) \in (0,\infty )\), depending only on \(\beta \) and the space dimension d, such that for \(c = c(\beta , d)\) in the definition of \(\varepsilon _t\) the event \(A(\varepsilon _t,t)\), that the \(\varepsilon _t\)-floating body is contained in the random polytope \(K_t\), occurs with high probability. More precisely, the probability of the complementary event \(A^c(\varepsilon _t, t)\) has polynomial decay with exponent \(-\beta \) for \(t \rightarrow \infty \), i.e.,

$$\begin{aligned} {\mathbb {P}}\left( A^c(\varepsilon _t, t)\right) < \mathbf {C}t^{-\beta }, \end{aligned}$$

whenever t is sufficiently large.

Note that the parameter \(\beta \) can be chosen freely in \((0, \infty )\); we fix \(\beta = 16d+1\), which is sufficiently large for all our purposes, take the corresponding constant \(c(\beta , d)\), and define \(\varepsilon _t\) accordingly, so that \(K(v \ge \varepsilon _t) \subseteq K_t\) holds with high probability according to Lemma 4.

We will use the following estimate for subsets of G(d,j) from [4, Lemma 1] to handle the projections arising from Kubota's formula in our proof of Theorem 1:

Lemma 5

For \(z \in \mathbb {S}^{d-1}\) and \(\mathbb {L}\in G(d,j)\), we define the angle \(\sphericalangle (z,\mathbb {L})\) as the minimum of all angles \(\sphericalangle (z,x)\), \(x \in \mathbb {L}\). For sufficiently small \(\alpha > 0\), one has that

$$\begin{aligned} \nu _j\left( \left\{ \mathbb {L}\in G(d,j) : \sphericalangle (z,\mathbb {L}) \le \alpha \right\} \right) = \Theta \left( \alpha ^{d-j}\right) . \end{aligned}$$

4.1 Proof of Theorem 1: Valuation Functional

We first recall that the valuation functional \(\varphi (K_t)\) can be decomposed with Hadwiger (13) into the linear combination of intrinsic volumes, and thus the variance \({\mathbb {V}}\left[ \varphi (K_t)\right] \) can be rewritten as

$$\begin{aligned} {\mathbb {V}}\left[ \varphi (K_t)\right] = \sum \limits _{i = 0}^d c_i^2 {\mathbb {V}}\left[ V_i(K_t)\right] + 2 \sum \limits _{i=0}^d \sum \limits _{j = i+1}^d c_i c_j {\mathrm {Cov}}\left[ V_i(K_t),V_j(K_t)\right] . \end{aligned}$$

For \(V_i\), \(i \in \left\{ 1,\ldots ,d\right\} \), we will use the variance bound from [15, eq. 5.20, eq. 5.22, eq. 5.23], see also [6, Corollary 7.1] and [23]:

$$\begin{aligned} t^{-1-\frac{2}{d+1}} \ll {\mathbb {V}}\left[ V_i(K_t)\right] \ll t^{-1-\frac{2}{d+1}}. \end{aligned}$$
(14)

Since \(V_0(K_t)\) is the Euler characteristic of \(K_t\), we have \(V_0(K_t) = {\mathbb {1}}\left\{ K_t \ne \emptyset \right\} \), and therefore \(V_0(K_t)\) is a Bernoulli random variable with success probability \({\mathbb {P}}\left( V_0(K_t) = 1\right) = 1 - e^{-t \Lambda _d(K)}\). The expectation is given by \({\mathbb {E}}\left[ V_0(K_t)\right] = 1 - e^{-t \Lambda _d(K)}\) and the variance by \({\mathbb {V}}\left[ V_0(K_t)\right] = (1 - e^{-t \Lambda _d(K)})e^{-t \Lambda _d(K)}\), which can be bounded by

$$\begin{aligned} 0 \le {\mathbb {V}}\left[ V_0(K_t)\right] \ll e^{-t \Lambda _d(K)} \ll t^{-1-\frac{2}{d+1}}. \end{aligned}$$
(15)

Lemma 6

For all \(i,j \in \left\{ 0, \ldots , d\right\} \), the intrinsic volumes of \(K_t\) are nonnegatively correlated and their covariances are bounded from above by the same order of magnitude as the variances, i.e.,

$$\begin{aligned} 0 \le {\mathrm {Cov}}\left[ V_i(K_t),V_j(K_t)\right] \ll t^{-1-\frac{2}{d+1}}. \end{aligned}$$
(16)

Proof

Since \(D_xV_j(K_t) \ge 0\) for all \(x \in \mathbb {R}^d\), it follows from the Harris–FKG inequality for Poisson processes, see [16, Theorem 11], that

$$\begin{aligned} {\mathbb {E}}\left[ V_i(K_t) V_j(K_t)\right] \ge {\mathbb {E}}\left[ V_i(K_t)\right] {\mathbb {E}}\left[ V_j(K_t)\right] , \end{aligned}$$

which directly implies the lower bound on the covariances. The Cauchy–Schwarz inequality implies

$$\begin{aligned} {\mathrm {Cov}}\left[ V_i(K_t),V_j(K_t)\right] \le \sqrt{{\mathbb {V}}\left[ V_i(K_t)\right] {\mathbb {V}}\left[ V_j(K_t)\right] }; \end{aligned}$$

thus, using (14) and (15), the upper bound on the covariances is obtained. \(\square \)

We are now in a position to bound the variance of our valuation functional with the following lemma:

Lemma 7

Under the assumptions of Theorem 1, the variance of the valuation functional is bounded by

$$\begin{aligned} t^{-1-\frac{2}{d+1}} \ll {\mathbb {V}}\left[ \varphi (K_t)\right] \ll t^{-1-\frac{2}{d+1}}. \end{aligned}$$
(17)

Proof

We assumed \(c_ic_j \ge 0\) for all \(i,j \in \left\{ 0,\ldots ,d\right\} \) and that there exists at least one index \(k \in \left\{ 1,\ldots ,d\right\} \) such that \(c_k \ne 0\). Thus Lemma 6 implies

$$\begin{aligned} {\mathbb {V}}\left[ \varphi (K_t)\right] \ge c_k^2 {\mathbb {V}}\left[ V_k(K_t)\right] \gg t^{-1-\frac{2}{d+1}}, \end{aligned}$$

and

$$\begin{aligned} {\mathbb {V}}\left[ \varphi (K_t)\right] \ll (d+1) t^{-1-\frac{2}{d+1}} + (d+1)d \sqrt{\left( t^{-1-\frac{2}{d+1}}\right) \left( t^{-1-\frac{2}{d+1}}\right) } \ll t^{-1-\frac{2}{d+1}}, \end{aligned}$$

which completes the proof. \(\square \)

The crucial part in the proof of Theorem 1 is the application of the general bound given by Theorem 4, and thus we need to investigate the moments occurring in \(\tau _1\), \(\tau _2\) and \(\tau _3\). In the first step, we adapt and slightly extend the proof from [30] for the binomial case to work in the Poisson case, yielding upper bounds on the moments of the first- and second-order difference operators applied to the intrinsic volumes \(V_j(K_t)\), which will be used in the second step to derive the bounds for the valuation functional.

First-Order Difference Operator Fix \(x \in K\) and \(j \in \left\{ 1, \ldots , d\right\} \), then conditioned on the event \(A(\varepsilon _t, t)\), it follows that

$$\begin{aligned} D_xV_j(K_t) = {\mathbb {1}}\left\{ x \in K \setminus K_t\right\} D_xV_j(K_t) = {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} D_xV_j(K_t), \end{aligned}$$
(18)

thus we can restrict the following to the case \(x \in K \setminus K(v \ge \varepsilon _t)\).

For \(x \in K \setminus K(v \ge \varepsilon _t)\), we define z to be the closest point to x on the boundary \(\partial K\). Since K is smooth, z is uniquely determined if \(\varepsilon _t\) is sufficiently small.

The visibility region of z is defined as the set of all points that can be connected to z by a straight line in K avoiding \(K(v \ge \varepsilon _t)\), i.e.,

$$\begin{aligned} {\text {Vis}}_z(t) := \left\{ y \in K \setminus K(v \ge \varepsilon _t) : [y,z] \cap K(v \ge \varepsilon _t) = \emptyset \right\} . \end{aligned}$$
(19)

Note that, given the sandwiching \(K(v \ge \varepsilon _t) \subset K_t \subset K\), a random point x can influence the random polytope only within the visibility region.

We construct a full-dimensional spherical cap C such that \(K^x_t \setminus K_t \subseteq C\). The notion of the visibility region, which was first used in [5, 31], is crucial in the following steps:

Let \(y_1,y_2 \in {\text {Vis}}_z(t)\), then there exist two \(\varepsilon _t\)-caps \(C_1\) and \(C_2\) such that the straight line \([y_1, z]\) resp. \([y_2, z]\) is contained in \(C_1\) resp. \(C_2\), thus

$$\begin{aligned} \left\| y_1-y_2\right\| \le \left\| y_1 - z\right\| + \left\| y_2 - z\right\| \le {{\text {diam}}}(C_1) + {{\text {diam}}}(C_2). \end{aligned}$$

Since the diameter of any \(\varepsilon _t\)-cap C of K can be bounded by \(\mathbf {C}\varepsilon _t^\frac{1}{d+1}\), see Lemma 1, it follows directly that the diameter of the visibility region can be bounded by

$$\begin{aligned} \rho := {{\text {diam}}}\left( {\text {Vis}}_z(t)\right) \ll \left( \frac{\log (t)}{t}\right) ^{\frac{1}{d+1}}. \end{aligned}$$

Let \(D(z, \rho )\) be the set of all points on the boundary \(\partial K\) which are of distance at most \(\rho \) to z, i.e.,

$$\begin{aligned} D(z,\rho ) = \left\{ y \in \partial K : \left\| y - z\right\| < \rho \right\} \end{aligned}$$

and denote the cap that is given by the convex hull of \(D(z,\rho )\) by C, i.e.,

$$\begin{aligned} C := {\text {conv}}\left\{ D(z,\rho )\right\} . \end{aligned}$$
(20)

By construction, we have \(K^x_t \setminus K_t \subseteq {\text {Vis}}_z(t) \subseteq C\). It follows from Lemma 2 that the volume of C is of order at most \(\tfrac{\log (t)}{t}\).

Fix a linear subspace \(\mathbb {L}\in G(d,j)\); then the set difference of the projections of \(K^x_t\) and \(K_t\) onto the subspace \(\mathbb {L}\) is contained in the projection of C onto \(\mathbb {L}\):

$$\begin{aligned} (K^x_t|\mathbb {L}) \setminus (K_t|\mathbb {L}) \subseteq C | \mathbb {L}. \end{aligned}$$

The j-dimensional volume of the projected cap \(C | \mathbb {L}\) can be bounded in its order of magnitude by

$$\begin{aligned} \Lambda _j(C| \mathbb {L}) \ll \left( \tfrac{\log (t)}{t}\right) ^{\tfrac{j+1}{d+1}}. \end{aligned}$$
(21)

If the angle \(\sphericalangle (z,\mathbb {L})\) between z and \(\mathbb {L}\) is sufficiently large, the part \(K^x_t \setminus K_t\) is not visible to the orthogonal projection onto \(\mathbb {L}\), since it is hidden behind \(K(v \ge \varepsilon _t)\), i.e.,

$$\begin{aligned} (K^x_t \setminus K_t) | \mathbb {L}\subseteq K(v \ge \varepsilon _t) | \mathbb {L}, \end{aligned}$$

for sufficiently large t. To obtain a bound on the maximal angle \(\sphericalangle (z,\mathbb {L})\) for which the projection does not vanish, we approximate K by a ball \(\mathbb {B}^d(z_c,r)\) with center \(z_c\) and radius r such that \(\mathbb {B}^d(z_c,r) \subseteq K\) and \(\mathbb {B}^d(z_c,r) \cap \partial K = \left\{ z\right\} \). Indeed, we approximate the boundary \(\partial K\) of K from the inside by a ball, which is possible since K is sufficiently smooth.

We repeat the construction of the cap C for \(\mathbb {B}^d(z_c,r)\) with the corresponding \(\varepsilon _t\)-floating body \(\mathbb {B}^d(v \ge \varepsilon _t)\) of the ball to obtain the cap \(C_\mathbb {B}\) and define \(\alpha \) to be the central angle of \(C_\mathbb {B}\) in \(\mathbb {B}^d(z_c,r)\). It follows from Lemma 2 that the volume of \(C_\mathbb {B}\) is of order at most \(\tfrac{\log (t)}{t}\), since \(\rho _\mathbb {B}\ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{1}{d+1}}\), which yields

$$\begin{aligned} \alpha \ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{1}{d+1}}. \end{aligned}$$
(22)

Thus it follows from \(\mathbb {B}^d(v \ge \varepsilon _t) \subseteq K(v \ge \varepsilon _t) \subseteq K_t \subseteq K_t^x\) that \(K_t^x | \mathbb {L}= K_t | \mathbb {L}\) if \(\sphericalangle (z,\mathbb {L})\) is of larger order than \(\alpha \), and therefore we have

$$\begin{aligned} \Lambda _j\left( (K^x_t | \mathbb {L}) \setminus (K_t | \mathbb {L})\right) \ne 0, \quad \text {only if} \quad \sphericalangle (z,\mathbb {L}) \ll \alpha . \end{aligned}$$

Using (22) and (21), it follows that

$$\begin{aligned} \Lambda _j\left( (K^x_t | \mathbb {L}) \setminus (K_t | \mathbb {L})\right)&\le {\mathbb {1}}\left\{ \sphericalangle (z,\mathbb {L}) \ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{1}{d+1}}\right\} \Lambda _j(C | \mathbb {L})\\&\le {\mathbb {1}}\left\{ \sphericalangle (z,\mathbb {L}) \ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{1}{d+1}}\right\} \left( \tfrac{\log (t)}{t}\right) ^{\frac{j+1}{d+1}}. \end{aligned}$$

Finally we use Kubota’s formula (11) together with (18) and Lemma 5 to obtain

$$\begin{aligned}&D_xV_j(K_t) = {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} c(d,j) \int \limits _{G(d,j)} \Lambda _j\left( (K^x_t | \mathbb {L}) \setminus (K_t | \mathbb {L})\right) \nu _j({\mathrm {d}}\mathbb {L})\\&\quad \ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \int \limits _{G(d,j)} {\mathbb {1}}\left\{ \sphericalangle (z,\mathbb {L}) \ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{1}{d+1}}\right\} \left( \tfrac{\log (t)}{t}\right) ^{\frac{j+1}{d+1}} \nu _j({\mathrm {d}}\mathbb {L})\\&\quad \ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \left( \tfrac{\log (t)}{t}\right) ^{\frac{j+1}{d+1}} \nu _j\left( \left\{ \mathbb {L}: \sphericalangle (z,\mathbb {L}) \ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{1}{d+1}}\right\} \right) \\&\quad \ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \left( \tfrac{\log (t)}{t}\right) ^{\frac{j+1}{d+1}}\left( \tfrac{\log (t)}{t}\right) ^{\frac{d-j}{d+1}}\\&\quad = {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \tfrac{\log (t)}{t}. \end{aligned}$$

Here \(c(d,j) = \left( {\begin{array}{c}d\\ j\end{array}}\right) \frac{\kappa _d}{\kappa _j \kappa _{d-j}}\) denotes the constant from Kubota's formula (11), which can be omitted since we are bounding \(D_xV_j(K_t)\) in its order of magnitude with respect to t.

Second-Order Difference Operator

Fix \(x,y \in K\) and \(j \in \left\{ 1,\ldots ,d\right\} \). Similar to (18), conditioned on the event \(A(\varepsilon _t, t)\) we have that

$$\begin{aligned} D_{x,y}^2V_j(K_t) = {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} D_{x,y}^2V_j(K_t). \end{aligned}$$

To further restrict \(x,y \in K \setminus K(v \ge \varepsilon _t)\), we show the following lemma:

Lemma 8

Fix two convex bodies \(P,K \in \mathscr {K}^d\) with \(P \subset K\) and two points \(x,y \in K \setminus P\). Denote by \(P^{xy}\), \(P^{x}\) and \(P^{y}\) the convex hulls of \(P \cup \left\{ x,y\right\} \), \(P \cup \left\{ x\right\} \) resp. \(P \cup \left\{ y\right\} \). We define the visibility region of x with respect to P by

$$\begin{aligned} \mathscr {V}_x(P) := \left\{ z \in K \setminus P : [z,x] \cap P = \emptyset \right\} . \end{aligned}$$

If \(\mathscr {V}_x(P) \cap \mathscr {V}_y(P) = \emptyset \), then

$$\begin{aligned} P^x \cap P^y&= P, \end{aligned}$$
(23)
$$\begin{aligned} P^x \cup P^y&= P^{xy}, \end{aligned}$$
(24)

and further it follows for all valuations \(\psi :\mathscr {K}^d \rightarrow \mathbb {R}\) that the second-order difference operator of \(\psi (P)\) vanishes, i.e.,

$$\begin{aligned} D_{x,y}^2\psi (P) = 0. \end{aligned}$$
(25)

Proof

Using \(P^x \subseteq \mathscr {V}_x(P) \cup P\) and \(P^y \subseteq \mathscr {V}_y(P) \cup P\), it follows directly from \(\mathscr {V}_x(P) \cap \mathscr {V}_y(P) = \emptyset \) that \(P^x \cap P^y \subseteq (\mathscr {V}_x(P) \cap \mathscr {V}_y(P)) \cup P = P\). Additionally the inclusion \(P \subseteq P^x \cap P^y\), which follows directly from the definition of the convex hull, gives (23).

Again, it is immediate that \(P^x \subseteq P^{xy}\) and \(P^y \subseteq P^{xy}\), and thus it remains to prove that \(P^{xy} \subseteq P^x \cup P^y\). Assume \(z \in P^{xy} \setminus (P^x \cup P^y)\), then there exist \(\lambda _1, \lambda _2 \in [0,1]\) and \(u \in P^x\), \(v \in P^y\) such that

$$\begin{aligned} z = \lambda _1 u + (1-\lambda _1)y = \lambda _2 v + (1-\lambda _2) x, \end{aligned}$$

where we can safely assume that u and v are chosen such that \(\lambda _1\) and \(\lambda _2\) are maximized. Note that \(u \in P^y\) implies \(z \in P^y\) resp. \(v \in P^x\) implies \(z \in P^x\), a contradiction, which leaves the remaining case \(u \in P^x \setminus P^y\) and \(v \in P^y \setminus P^x\). By construction it now follows that \([x,z] \cap P = \emptyset \) and \([y,z] \cap P = \emptyset \), which yields \(z \in \mathscr {V}_x(P)\) and \(z \in \mathscr {V}_y(P)\), contradicting \(\mathscr {V}_x(P) \cap \mathscr {V}_y(P) = \emptyset \); this gives (24).

The second-order difference operator of \(\psi (P)\) is given by

$$\begin{aligned} D^2_{x,y}\psi (P) = \psi (P^{xy}) - \psi (P^x) - \psi (P^y) + \psi (P). \end{aligned}$$

Using (23) and (24), we can rewrite the first term according to the valuation property (12) to

$$\begin{aligned} \psi (P^{xy}) = \psi (P^x) + \psi (P^y) - \psi (P^x \cap P^y) = \psi (P^x) + \psi (P^y) - \psi (P), \end{aligned}$$

which gives (25) when substituted in the representation of \(D^2_{x,y}\psi (P)\). \(\square \)

Since \({\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) = \emptyset \) implies the conditions of Lemma 8 for \(P = K_t \supseteq K(v \ge \varepsilon _t)\), it follows that

$$\begin{aligned} D_{x,y}^2V_j(K_t) = {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} D_{x,y}^2V_j(K_t). \end{aligned}$$

Writing \(D_{x,y}^2V_j(K_t) = D_x(D_yV_j(K_t))\), we obtain \(\left| D_{x,y}^2V_j(K_t)\right| \le D_xV_j(K^y_t) + D_xV_j(K_t)\), where we immediately see that the second term \(D_xV_j(K_t)\) is the first-order difference operator that we have bounded before. Using \(K(v \ge \varepsilon _t) \subseteq K_t \subseteq K_t^y\), we can substitute \(K_t\) with \(K_t^y\) in the proof for the first-order difference operator to obtain

$$\begin{aligned}&D_{x,y}^2V_j(K_t)\\&\quad = {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \left( D_xV_j(K^y_t) + D_xV_j(K_t)\right) \\&\quad \ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \left( \tfrac{\log (t)}{t} + \tfrac{\log (t)}{t}\right) \\&\quad \ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \tfrac{\log (t)}{t}. \end{aligned}$$

The results of the prior discussion can be summarized in the following lemma bounding the order of magnitude of the pth absolute moment of the first- and second-order difference operator of the intrinsic volumes \(V_j(K_t)\).

Lemma 9

Let \(p \in \left\{ 1,\ldots ,8\right\} \), \(j \in \left\{ 0,\ldots ,d\right\} \) and \(x,y \in K\), then

$$\begin{aligned} {\mathbb {E}}\left|(D_xV_j(K_t))^p \right|&\ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \left( \tfrac{\log (t)}{t}\right) ^p, \end{aligned}$$
(26)
$$\begin{aligned} {\mathbb {E}}\left|(D_{x,y}^2V_j(K_t))^p \right|&\ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} \nonumber \\&\qquad \times {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \left( \tfrac{\log (t)}{t}\right) ^p. \end{aligned}$$
(27)

Proof

Let \(j \ne 0\). On the event \(A(\varepsilon _t, t)\), we use the bounds derived before; on the complementary event \(A^c(\varepsilon _t, t)\), it is sufficient to use the estimates \(D_xV_j(K_t) \le V_j(K)\) resp. \(\left| D_{x,y}^2V_j(K_t)\right| \le 2 V_j(K)\), thus

$$\begin{aligned} {\mathbb {E}}\left|(D_xV_j(K_t))^p \right|&= {\mathbb {E}}\left[ (D_xV_j(K_t))^p{\mathbb {1}}_{A(\varepsilon _t, t)}\right] + {\mathbb {E}}\left[ (D_xV_j(K_t))^p{\mathbb {1}}_{A^c(\varepsilon _t, t)}\right] \\&\quad \ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \left( \tfrac{\log (t)}{t}\right) ^p {\mathbb {P}}\left( A(\varepsilon _t, t)\right) \\&\qquad + V_j(K)^p {\mathbb {P}}\left( A^c(\varepsilon _t, t)\right) \end{aligned}$$

and

$$\begin{aligned}&{\mathbb {E}}\left|(D_{x,y}^2V_j(K_t))^p \right| \\&\quad \le {\mathbb {E}}\left|(D_{x,y}^2V_j(K_t))^p{\mathbb {1}}_{A(\varepsilon _t, t)} \right| + {\mathbb {E}}\left|(D_{x,y}^2V_j(K_t))^p{\mathbb {1}}_{A^c(\varepsilon _t, t)} \right|\\&\quad \ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \left( \tfrac{\log (t)}{t}\right) ^p {\mathbb {P}}\left( A(\varepsilon _t, t)\right) \\&\qquad + (2V_j(K))^p {\mathbb {P}}\left( A^c(\varepsilon _t, t)\right) . \end{aligned}$$

Since \({\mathbb {P}}\left( A(\varepsilon _t, t)\right) \le 1\) and \({\mathbb {P}}\left( A^c(\varepsilon _t, t)\right) \le t^{-\beta }\), with \(\beta = 16d+1 > p\), see Lemma 4, our claim follows for \(j \in \left\{ 1,\ldots ,d\right\} \).

Let \(j = 0\). We use \(V_0(K_t) = {\mathbb {1}}\left\{ K_t \ne \emptyset \right\} \) to derive

$$\begin{aligned} {\mathbb {E}}\left|(D_xV_0(K_t))^p \right|&= {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {P}}\left( K_t = \emptyset \right) \end{aligned}$$

and

$$\begin{aligned}&{\mathbb {E}}\left|(D_{x,y}^2V_0(K_t))^p \right|\\&\quad = {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} {\mathbb {P}}\left( K_t = \emptyset \right) , \end{aligned}$$

thus using \({\mathbb {P}}\left( K_t = \emptyset \right) = e^{-t\Lambda _d(K)}\), the claim follows by bounding the exponentially decaying term by \((\tfrac{\log (t)}{t})^p\). \(\square \)

Our next objective is to prove corresponding bounds on the moments of the first- and second-order difference operator of the valuation functional we wish to study.

Lemma 10

Let \(p \in \left\{ 1,\ldots ,8\right\} \) and \(x,y \in K\), then

$$\begin{aligned} {\mathbb {E}}\left|(D_x\varphi (K_t))^p \right|&\ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \left( \tfrac{\log (t)}{t}\right) ^p, \end{aligned}$$
(28)
$$\begin{aligned} {\mathbb {E}}\left|(D_{x,y}^2\varphi (K_t))^p \right|&\ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} \nonumber \\&\qquad \times {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \left( \tfrac{\log (t)}{t}\right) ^p. \end{aligned}$$
(29)

Proof

Since \(D_x\) is linear, we obtain

$$\begin{aligned} {\mathbb {E}}\left|(D_x\varphi (K_t))^p \right| \le \sum \limits _{j_1, \ldots , j_p = 0}^d \left( \prod \limits _{k=1}^p \left| c_{j_k}\right| \right) {\mathbb {E}}\left|\prod \limits _{k=1}^p D_xV_{j_k}(K_t) \right|, \end{aligned}$$

and the generalized Hölder inequality, \({\mathbb {E}}\left|X_1 \cdots X_p \right| \le \prod _{k=1}^p \left( {\mathbb {E}}\left|X_k \right|^p\right) ^{\frac{1}{p}}\), yields

$$\begin{aligned} {\mathbb {E}}\left|(D_x\varphi (K_t))^p \right| \le \sum \limits _{j_1, \ldots , j_p = 0}^d \left( \prod \limits _{k=1}^p \left| c_{j_k}\right| \right) \left( \prod \limits _{k=1}^p {\mathbb {E}}\left|D_xV_{j_k}(K_t) \right|^p\right) ^{\frac{1}{p}}. \end{aligned}$$

Therefore we can use (26) to bound all moments on the right-hand side yielding (28). The proof of (29) is similar using (27) instead of (26). \(\square \)

Before we apply the previous lemma to the error bounds \(\tau _1\), \(\tau _2\) and \(\tau _3\), we introduce two estimates for the domain of integration. From [2, Theorem 6.3] it follows directly that

$$\begin{aligned} \Lambda _d(K \setminus K(v \ge \varepsilon _t)) \ll \left( \tfrac{\log (t)}{t}\right) ^{\frac{2}{d+1}}. \end{aligned}$$
(30)

Denote by C(x) resp. C(z) the caps constructed according to (20) for points \(x, z \in K \setminus K(v \ge \varepsilon _t)\), then for every fixed \(x \in K \setminus K(v \ge \varepsilon _t)\) we have

$$\begin{aligned} \left\{ y \in K \setminus K(v \ge \varepsilon _t) : {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \subseteq \bigcup \limits _{z \in {\text {Vis}}_x(t)} {\text {Vis}}_z(t) \subseteq \bigcup \limits _{z \in C(x)} C(z). \end{aligned}$$

Recall that the volumes of C(x) and C(z) are of order at most \(\tfrac{\log (t)}{t}\); thus Lemma 3 yields

$$\begin{aligned} \Lambda _d\left( \left\{ y \in K \setminus K(v \ge \varepsilon _t) : {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \right) \ll \tfrac{\log (t)}{t}, \end{aligned}$$
(31)

for all \(x \in K \setminus K(v \ge \varepsilon _t)\).

Applying the previous results to the error bound \(\tau _1\) yields

$$\begin{aligned} \tau _1&\ll {\mathbb {V}}\left[ \varphi (K_t)\right] ^{-2} \left( \tfrac{\log (t)}{t}\right) ^4 t^3 \int \limits _{K} {\mathbb {1}}\left\{ x_3 \in K \setminus K(v \ge \varepsilon _t)\right\} \\&\quad \times \,\left( \int \limits _K {\mathbb {1}}\left\{ {\text {Vis}}_{x_1}(t) \cap {\text {Vis}}_{x_3}(t) \ne \emptyset \right\} {\mathrm {d}}x_1\right) \\&\quad \times \,\left( \int \limits _K {\mathbb {1}}\left\{ {\text {Vis}}_{x_2}(t) \cap {\text {Vis}}_{x_3}(t) \ne \emptyset \right\} {\mathrm {d}}x_2\right) {\mathrm {d}}x_3\\&\ll \left( t^{-1-\frac{2}{d+1}}\right) ^{-2} \left( \tfrac{\log (t)}{t}\right) ^4 t^3 \left( \tfrac{\log (t)}{t}\right) ^{\frac{2}{d+1}} \left( \tfrac{\log (t)}{t}\right) ^2\\&\ll t^{-1 + \frac{2}{d+1}} \log (t)^{6+\frac{2}{d+1}}. \end{aligned}$$
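For the reader's convenience, we spell out the power counting in the last step (logarithmic factors are tracked separately): Lemma 7 gives \({\mathbb {V}}\left[ \varphi (K_t)\right] ^{-2} \ll t^{2+\frac{4}{d+1}}\), the moment bounds of Lemma 10 contribute \((\log (t)/t)^4\), the intensity measure \(\mu ^3\) contributes \(t^3\), and the wet part (30) resp. the two visibility integrals (31) contribute \((\log (t)/t)^{\frac{2}{d+1}}\) resp. \((\log (t)/t)^{2}\), so that

$$\begin{aligned} t^{2+\frac{4}{d+1}} \cdot t^{-4} \cdot t^{3} \cdot t^{-\frac{2}{d+1}} \cdot t^{-2} = t^{-1+\frac{2}{d+1}} \quad \text {and} \quad \log (t)^{4+\frac{2}{d+1}+2} = \log (t)^{6+\frac{2}{d+1}}. \end{aligned}$$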

In the same manner, we can see that

$$\begin{aligned} \tau _2&\ll t^{-1+\frac{2}{d+1}} \log (t)^{6 + \frac{2}{d+1}},\\ \tau _3&\ll t^{-\frac{1}{2}+\frac{1}{d+1}} \log (t)^{3+\frac{2}{d+1}}. \end{aligned}$$

Combining these three bounds with Theorem 4 leads to

$$\begin{aligned} {\text {d}}_W(\tilde{\varphi }(K_t),Z) \le 2 \sqrt{\tau _1} + \sqrt{\tau _2} + \tau _3 \ll t^{-\frac{1}{2}+\frac{1}{d+1}} \log (t)^{3+\frac{2}{d+1}}, \end{aligned}$$

for \(Z \overset{{d}}{\sim }\mathscr {N}(0,1)\), completing the proof of Theorem 1.

4.2 Proof of Theorem 3: Multivariate Functional

We start by investigating the moments of the first- and second-order difference operators applied to the components of the \(\mathbf {f}\)-vector, combining combinatorial results from [24] with the floating body and economic cap covering approach from [25, 30].

First-Order Difference Operator

Fix \(x \in K\) and \(j \in \left\{ 0,\ldots ,d-1\right\} \); then, conditioned on the event \(A(\varepsilon _t, t)\), it follows similarly to (18) that

$$\begin{aligned} D_xf_j(K_t) = {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} D_xf_j(K_t), \end{aligned}$$

thus we can restrict the following to the case \(x \in K \setminus K(v \ge \varepsilon _t)\).

Let \(K_t\) be fixed and assume \(x \not \in K_t\). Since the polytope \(K_t\) is simplicial and all vertices are in general position almost surely, analysis similar to that in [24, Section 4] allows us to decompose \(D_xf_j(K_t)\) into the number of j-faces gained, denoted by \(f_j^+\), and the number of j-faces lost, denoted by \(f_j^-\):

$$\begin{aligned} \left| D_xf_j(K_t)\right| = \left| f_j^+ - f_j^-\right| \le f_j^+ + f_j^-. \end{aligned}$$

Every j-face gained in \(K_t^x\) is the convex hull of x and a \((j-1)\)-face in \({\text {link}}(K_t^x,x)\). Additionally every \((j-1)\)-face in \({\text {link}}(K_t^x,x)\) is also contained in \(\mathscr {F}_{j-1}^{{\text {Vis}}}(K_t,x)\), thus

$$\begin{aligned} f_j^+ \le f_{j-1}\left( {\text {link}}(K_t^x,x)\right) \le \left| \mathscr {F}_{j-1}^{{\text {Vis}}}(K_t,x)\right| . \end{aligned}$$

On the other hand, the j-faces in \(\mathscr {F}_j(K_t)\) that are lost have to be visible from x, thus

$$\begin{aligned} f_j^- \le \left| \mathscr {F}_{j}^{{\text {Vis}}}(K_t,x)\right| . \end{aligned}$$

Note that not all visible j-faces are removed; to obtain the exact number of lost j-faces, one has to count the j-faces that are visible and not contained in the link of x in the new polytope \(K_t^x\), i.e., \(\left| \mathscr {F}_j^{{\text {Vis}}}(K_t,x) \setminus {\text {link}}(K_t^x,x)\right| \), as we will see later for the second-order difference operator.

Let z be the closest point to x on the boundary \(\partial K\); then it follows immediately that every visible i-face \(\mathfrak {F}\in \mathscr {F}_i^{{\text {Vis}}}(K_t,x)\) has to be a subset of \({\text {Vis}}_z(t)\). Since \(i+1\) pairwise distinct points are needed to form an i-face, the number of visible i-faces can be bounded by the number of points in \({\text {Vis}}_z(t)\), i.e.,

$$\begin{aligned} \left| \mathscr {F}_i^{{\text {Vis}}}(K_t,x)\right| \le \left( {\begin{array}{c}\eta \left( {\text {Vis}}_z(t)\right) \\ i+1\end{array}}\right) \end{aligned}$$

for all \(i \in \left\{ 0, \ldots , d-1\right\} \). Combining these steps, we obtain

$$\begin{aligned} \left| D_xf_j(K_t)\right|&\le \left( {\begin{array}{c}\eta \left( {\text {Vis}}_z(t)\right) \\ j\end{array}}\right) + \left( {\begin{array}{c}\eta \left( {\text {Vis}}_z(t)\right) \\ j+1\end{array}}\right) = \left( {\begin{array}{c}\eta \left( {\text {Vis}}_z(t)\right) +1\\ j+1\end{array}}\right) \\&\le \left( \eta \left( {\text {Vis}}_z(t)\right) +1\right) ^{j+1} \end{aligned}$$

and further for \(p \in \left\{ 1,\ldots ,8\right\} \),

$$\begin{aligned} {\mathbb {E}}\left|D_xf_j(K_t){\mathbb {1}}_{A(\varepsilon _t,t)} \right|^p \le {\mathbb {E}}\left[ \left( \eta \left( {\text {Vis}}_z(t)\right) +1\right) ^{p(j+1)}\right] . \end{aligned}$$
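The last two steps rest on Pascal's rule \(\binom{n}{j} + \binom{n}{j+1} = \binom{n+1}{j+1}\) and the crude bound \(\binom{n+1}{j+1} \le (n+1)^{j+1}\); the following snippet is a small sanity check of both, illustrative only.

```python
# Check Pascal's rule C(n,j) + C(n,j+1) = C(n+1,j+1) and the crude
# polynomial bound C(n+1,j+1) <= (n+1)^(j+1) used above.
from math import comb

for n in range(40):
    for j in range(10):
        assert comb(n, j) + comb(n, j + 1) == comb(n + 1, j + 1)
        assert comb(n + 1, j + 1) <= (n + 1) ** (j + 1)
print("both identities hold on the tested range")
```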

The binomial theorem and the fact that \(\eta \left( {\text {Vis}}_z(t)\right) \) is Poisson distributed with parameter \(\mu ({\text {Vis}}_z(t))\) yield

$$\begin{aligned} {\mathbb {E}}\left[ \left( \eta \left( {\text {Vis}}_z(t)\right) +1\right) ^{p(j+1)}\right] = \sum \limits _{m=0}^{p(j+1)} \left( {\begin{array}{c}p(j+1)\\ m\end{array}}\right) \sum \limits _{k=0}^{m} \left\{ {\begin{array}{c}m\\ k\end{array}}\right\} \mu \left( {\text {Vis}}_z(t)\right) ^k, \end{aligned}$$

where \(\left\{ {\begin{array}{c}m\\ k\end{array}}\right\} \) denotes the Stirling numbers of the second kind. Recall that

$$\begin{aligned} \mu ({\text {Vis}}_z(t)) = t \Lambda _d({\text {Vis}}_z(t)) \ll t \tfrac{\log (t)}{t} = \log (t) \end{aligned}$$

and \(j \in \left\{ 0, \ldots , d-1\right\} \), thus

$$\begin{aligned} {\mathbb {E}}\left|D_xf_j(K_t){\mathbb {1}}_{A(\varepsilon _t,t)} \right|^p \ll \log (t)^{pd}. \end{aligned}$$
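The Poisson moment formula used above (Touchard's formula) can likewise be verified numerically; the sketch below compares the Stirling-number expression for \({\mathbb {E}}[N^m]\) with a Monte Carlo estimate and is, again, an illustration rather than part of the argument.

```python
# For N ~ Poisson(lam): E[N^m] = sum_{k=0}^m S(m,k) * lam^k, where S(m,k)
# are Stirling numbers of the second kind; checked here by Monte Carlo.
import numpy as np

def stirling2(m, k):
    # Recurrence S(i,j) = j*S(i-1,j) + S(i-1,j-1) with S(0,0) = 1.
    S = np.zeros((m + 1, k + 1))
    S[0, 0] = 1.0
    for i in range(1, m + 1):
        for j in range(1, min(i, k) + 1):
            S[i, j] = j * S[i - 1, j] + S[i - 1, j - 1]
    return S[m, k]

lam, m = 3.0, 4
exact = sum(stirling2(m, k) * lam**k for k in range(m + 1))
mc = np.mean(np.random.default_rng(0).poisson(lam, 10**6).astype(float) ** m)
print(exact, mc)  # 309.0 and a Monte Carlo value close to it
```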

Conditioned on the complementary event \(A^c(\varepsilon _t,t)\), we modify the proof slightly, replacing \(\eta ({\text {Vis}}_z(t))\) by \(\eta (K)\), the number of all points in K, and using the Cauchy–Schwarz inequality to separate the expectation of the indicator from the moments of \(\eta (K)\):

$$\begin{aligned} {\mathbb {E}}\left|D_xf_j(K_t){\mathbb {1}}_{A^c(\varepsilon _t,t)} \right|^p \le {\mathbb {P}}\left( A^c(\varepsilon _t,t)\right) ^{\frac{1}{2}} {\mathbb {E}}\left[ \left( \eta (K)+1\right) ^{2p(j+1)}\right] ^{\frac{1}{2}} \ll t^{-\frac{\beta }{2}}\, t^{p(j+1)} \ll 1, \end{aligned}$$

since \(\beta = 16d+1 \ge 2pd\).

Second-Order Difference Operator

Fix \(x,y \in K\) and \(j \in \left\{ 0, \ldots , d-1\right\} \). Similarly to the intrinsic volumes handled before, we have

$$\begin{aligned} D_{x,y}^2f_j(K_t) = {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} D_{x,y}^2 f_j(K_t). \end{aligned}$$

We prove the following lemma to obtain a restriction on the pair x, y for the components of the \(\mathbf {f}\)-vector, similar to the one derived from Lemma 8 for the intrinsic volumes:

Lemma 11

Let \(K \in \mathscr {K}^d\) be a convex body, \(P \subset K\) a d-dimensional polytope, and \(x,y \in K \setminus P\) two points. Let \(\mathscr {V}_x(P)\) and \(\mathscr {V}_y(P)\) be the visibility regions of x and y with respect to P, defined as in Lemma 8. If \(\mathscr {V}_x(P) \cap \mathscr {V}_y(P) =\emptyset \), then

$$\begin{aligned} D_{x,y}^2f_j(P) = 0, \end{aligned}$$

for all \(j \in \left\{ 0, \ldots , d-1\right\} \).

Proof

Denote by \(P^{xy}\), \(P^x\) and \(P^y\) the convex hulls of \(P \cup \left\{ x,y\right\} \), \(P \cup \left\{ x\right\} \) resp. \(P \cup \left\{ y\right\} \). Then we can decompose the number of j-faces of \(P^{xy}\) into the number of j-faces of \(P^x\) plus the j-faces gained and minus the j-faces lost when adding y, i.e.,

$$\begin{aligned} f_j(P^{xy}) = f_j(P^x) + f_{j-1}({\text {link}}(P^{xy},y)) - \left| \mathscr {F}_j^{{\text {Vis}}}(P^x,y) \setminus {\text {link}}(P^{xy},y)\right| . \end{aligned}$$

Since the visibility regions are disjoint, we have \(\mathscr {F}^{{\text {Vis}}}(P^x,y) = \mathscr {F}^{{\text {Vis}}}(P,y)\) and additionally \({\text {link}}(P^{xy},y) = {\text {link}}(P^y,y)\), thus

$$\begin{aligned} D_{x,y}^2f_j(P)&= f_j(P^{xy}) - f_j(P^x) - f_j(P^y) + f_j(P)\\&= f_{j-1}({\text {link}}(P^{y},y)) - \left| \mathscr {F}_j^{{\text {Vis}}}(P,y) \setminus {\text {link}}(P^{y},y)\right| - f_j(P^y) + f_j(P). \end{aligned}$$

Similarly to \(f_j(P^{xy})\), we can decompose \(f_j(P)\) by counting the j-faces of \(P^y\) and reversing the changes that arise from the addition of y to P:

$$\begin{aligned} f_j(P) = f_j(P^y) - f_{j-1}({\text {link}}(P^y, y)) + \left| \mathscr {F}_j^{{\text {Vis}}}(P,y) \setminus {\text {link}}(P^y,y)\right| , \end{aligned}$$

yielding

$$\begin{aligned} D_{x,y}^2f_j(P)&= f_{j-1}({\text {link}}(P^{y},y)) - \left| \mathscr {F}_j^{{\text {Vis}}}(P,y) \setminus {\text {link}}(P^y,y)\right| - f_j(P^y)\\&\quad +\, f_j(P^y) - f_{j-1}({\text {link}}(P^y, y)) + \left| \mathscr {F}_j^{{\text {Vis}}}(P,y) \setminus {\text {link}}(P^y,y)\right| \\&= 0, \end{aligned}$$

which is the desired conclusion. \(\square \)
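Lemma 11 can also be observed empirically. In the hedged sketch below (\(d = 2\), illustrative only), x and y lie on opposite sides of a random polytope P, so their visibility regions are disjoint and the second-order difference of \(f_0\) vanishes.

```python
# Empirical illustration of Lemma 11 in d = 2: disjoint visibility
# regions force D^2_{x,y} f_0(P) = 0.
import numpy as np
from scipy.spatial import ConvexHull

f0 = lambda P: len(ConvexHull(P).vertices)
rng = np.random.default_rng(2)
P = rng.random((50, 2))                                  # points spanning P
x, y = np.array([[1.5, 0.5]]), np.array([[-0.5, 0.5]])  # opposite sides of P

d2 = (f0(np.vstack([P, x, y])) - f0(np.vstack([P, x]))
      - f0(np.vstack([P, y])) + f0(P))
print(d2)  # 0, since the visibility regions of x and y are disjoint
```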

Since \({\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) = \emptyset \) implies the conditions of Lemma 11 for \(P = K_t \supseteq K(v \ge \varepsilon _t)\), it follows that

$$\begin{aligned} D_{x,y}^2f_j(K_t) = {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} D_{x,y}^2f_j(K_t), \end{aligned}$$

and, as for \(D_{x,y}^2V_j(K_t)\), we derive

$$\begin{aligned} {\mathbb {E}}\left|D_{x,y}^2f_j(K_t) \right|^p \ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \log (t)^{pd}. \end{aligned}$$

With these preliminary steps in place, we can now summarize the results in the following lemma, which bounds the order of magnitude of the pth moments of the difference operators applied to the \(\mathbf {f}\)-vector components \(f_j(K_t)\).

Lemma 12

Let \(p \in \left\{ 1,\ldots ,8\right\} \), \(j \in \left\{ 0,\ldots ,d-1\right\} \) and \(x,y \in K\), then

$$\begin{aligned} {\mathbb {E}}\left|(D_xf_j(K_t))^p \right|&\ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \log (t)^{dp},\\ {\mathbb {E}}\left|(D_{x,y}^2f_j(K_t))^p \right|&\ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} \\&\qquad \qquad \times \,{\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} \log (t)^{dp}. \end{aligned}$$

Proof

The proof is similar to that of Lemma 9. \(\square \)

We are left with the task of applying our estimates to the bounds \(\gamma _1\), \(\gamma _2\) and \(\gamma _3\) given by Theorem 5. Since we consider the multivariate functional given by (6), we have to distinguish three cases depending on the combination of functionals \(F_i\) and \(F_j\), using the corresponding variance bounds given by (14) for the intrinsic volumes and by

$$\begin{aligned} t^{1-\frac{2}{d+1}} \ll {\mathbb {V}}\left[ f_k(K_t)\right] \ll t^{1-\frac{2}{d+1}}, \end{aligned}$$
(32)

for the components of the \(\mathbf {f}\)-vector, \(k \in \left\{ 0,\ldots ,d-1\right\} \), see [23]. We denote by \(\gamma _1(i,j)\), \(\gamma _2(i,j)\), resp. \(\gamma _3(i)\) the integrals in \(\gamma _1\), \(\gamma _2\), resp. \(\gamma _3\). It then follows from Lemmas 9 and 12 and the estimates (30) and (31) on the domain of integration that

$$\begin{aligned} \gamma _1(i,j),\, \gamma _2(i,j)&\ll t^{-1+\frac{2}{d+1}} \times {\left\{ \begin{array}{ll} \log (t)^{6+\frac{2}{d+1}}, &{} i,j \in \left\{ 1,\ldots ,d\right\} ,\\ \log (t)^{4+2d+\frac{2}{d+1}}, &{} i \in \left\{ 1,\ldots ,d\right\} ,\, j \in \left\{ d+1, \ldots , 2d\right\} ,\\ \log (t)^{2+4d+\frac{2}{d+1}}, &{} i,j \in \left\{ d+1, \ldots , 2d\right\} , \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \gamma _3(i) \ll t^{-\frac{1}{2}+\frac{1}{d+1}} \times {\left\{ \begin{array}{ll} \log (t)^{3+\frac{2}{d+1}}, &{} i \in \left\{ 1, \ldots , d\right\} ,\\ \log (t)^{3d + \frac{2}{d+1}}, &{} i \in \left\{ d+1, \ldots , 2d\right\} . \end{array}\right. } \end{aligned}$$
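Before comparing the three cases, a trivial arithmetic check (not needed for the proof, illustrative only) confirms which log-exponent dominates:

```python
# Compare the three log-exponents of gamma_1, gamma_2 for d >= 2.
for d in range(2, 11):
    e_vol = 6 + 2 / (d + 1)                # i, j intrinsic volumes
    e_mix = 4 + 2 * d + 2 / (d + 1)        # mixed case
    e_fvec = 2 + 4 * d + 2 / (d + 1)       # i, j f-vector components
    assert e_vol <= e_mix <= e_fvec
print("the f-vector/f-vector case dominates for all tested d")
```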

It can easily be seen that the speed of convergence is dominated by the case \(i,j \in \left\{ d+1, \ldots , 2d\right\} \); thus, using \(\sigma _{ij}(t) = {\mathrm {Cov}}\left[ F_i,F_j\right] \), we can rewrite the bound in Theorem 5 as

$$\begin{aligned} {\text {d}}_3(F,N_\Sigma (t))&\ll d \sum \limits _{i,j=1}^{2d} \left| \sigma _{ij}(t) - {\mathrm {Cov}}\left[ F_i,F_j\right] \right| \\&\qquad +\, 2d \cdot t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{1+2d+\frac{1}{d+1}} + d \cdot t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{1+2d+\frac{1}{d+1}}\\&\qquad +\, d^2 \cdot t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{3d+\frac{2}{d+1}}\\&\ll t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{3d+\frac{2}{d+1}}, \end{aligned}$$

which completes the proof of Theorem 3.

4.3 Proof of Theorem 2: Oracle Functional

Recall that the oracle estimator is given by \(\hat{\vartheta }_\mathrm{oracle}(K_t) = V_d(K_t) + t^{-1} f_0(K_t)\) and that its variance asymptotics are given by (4):

$$\begin{aligned} {\mathbb {V}}\left[ \hat{\vartheta }_\mathrm{oracle}(K_t)\right] = \gamma _d \Omega (K)(1+o(1))t^{-1-\frac{2}{d+1}} = \Theta \left( t^{-1-\frac{2}{d+1}}\right) , \end{aligned}$$

for \(t \rightarrow \infty \), where the constant \(\gamma _d\) depends only on the dimension and is known explicitly, and \(\Omega (K)\) denotes the affine surface area of K. Rescaling \(\hat{\vartheta }_\mathrm{oracle}(K_t)\) yields

$$\begin{aligned} \frac{\hat{\vartheta }_\mathrm{oracle}(K_t)}{\sqrt{{\mathbb {V}}\left[ \hat{\vartheta }_\mathrm{oracle}(K_t)\right] }} = \Theta \left( t^{\frac{1}{2}+\frac{1}{d+1}} V_d(K_t) + t^{-\frac{1}{2}+\frac{1}{d+1}}f_0(K_t)\right) , \end{aligned}$$

where the scaling \(t^{\frac{1}{2}+\frac{1}{d+1}}\) resp. \(t^{-\frac{1}{2}+\frac{1}{d+1}}\) corresponds to the asymptotic variance of \(V_d(K_t)\) resp. \(f_0(K_t)\), see (14) and (32). Therefore we can use the previous results to deduce bounds on the moments of the first- and second-order difference operators of the standardized oracle estimator \(\tilde{\vartheta }_\mathrm{oracle}(K_t)\).
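To make the estimator concrete, the following hedged sketch assumes \(d = 2\) and K the unit disk (so \(V_2(K) = \pi \)) and computes \(\hat{\vartheta }_\mathrm{oracle}(K_t)\) from one realization of the process; it is an illustration, not an implementation from [1].

```python
# One realization of the oracle estimator in d = 2 for K the unit disk:
# theta_hat = V_2(K_t) + f_0(K_t) / t, which should be close to pi.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)
t = 500.0
cand = rng.uniform(-1.0, 1.0, size=(int(8 * t), 2))   # candidates in [-1,1]^2
disk = cand[np.hypot(cand[:, 0], cand[:, 1]) <= 1.0]  # rejection sampling in K
n = rng.poisson(t * np.pi)                            # eta_t(K) ~ Poisson(t * Lambda_2(K))
hull = ConvexHull(disk[:n])                           # K_t = conv(eta_t cap K)
theta = hull.volume + len(hull.vertices) / t          # V_2(K_t) + f_0(K_t)/t
print(theta)                                          # close to pi = V_2(K)
```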

Lemma 13

Let \(p \in \left\{ 1,\ldots , 4\right\} \) and \(x,y \in K\), then

$$\begin{aligned} {\mathbb {E}}\left|(D_x\tilde{\vartheta }_\mathrm{oracle}(K_t))^p \right|&\ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} t^{-\frac{p}{2}+\frac{p}{d+1}} \log (t)^{dp},\\ {\mathbb {E}}\left|(D_{x,y}^2\tilde{\vartheta }_\mathrm{oracle}(K_t))^p \right|&\ll {\mathbb {1}}\left\{ x,y \in K \setminus K(v \ge \varepsilon _t)\right\} \\&\qquad \times \, {\mathbb {1}}\left\{ {\text {Vis}}_x(t) \cap {\text {Vis}}_y(t) \ne \emptyset \right\} t^{-\frac{p}{2} + \frac{p}{d+1}} \log (t)^{dp}. \end{aligned}$$

Proof

Using the binomial theorem and the Cauchy–Schwarz inequality, it follows directly with Lemmas 9 and 12 that

$$\begin{aligned}&{\mathbb {E}}\left|t^{\frac{1}{2}+\frac{1}{d+1}} D_xV_d(K_t) + t^{-\frac{1}{2}+\frac{1}{d+1}}D_xf_0(K_t) \right|^p\\&\quad \le \sum \limits _{j=0}^p \left( {\begin{array}{c}p\\ j\end{array}}\right) \left( {\mathbb {E}}\left|t^{\frac{1}{2}+\frac{1}{d+1}}D_xV_d(K_t) \right|^{2j} {\mathbb {E}}\left|t^{-\frac{1}{2}+\frac{1}{d+1}}D_xf_0(K_t) \right|^{2(p-j)} \right) ^\frac{1}{2}\\&\quad \ll {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} \sum \limits _{j=0}^p \left( {\begin{array}{c}p\\ j\end{array}}\right) \left( t^{\frac{1}{2}+\frac{1}{d+1}} \tfrac{\log (t)}{t}\right) ^{\frac{2j}{2}} \left( t^{-\frac{1}{2}+\frac{1}{d+1}} \log (t)^d\right) ^{\frac{2(p-j)}{2}}\\&\quad = {\mathbb {1}}\left\{ x \in K \setminus K(v \ge \varepsilon _t)\right\} t^{-\frac{p}{2}+\frac{p}{d+1}} \sum \limits _{j=0}^p \left( {\begin{array}{c}p\\ j\end{array}}\right) \log (t)^{j+d(p-j)}. \end{aligned}$$

Since \(j+d(p-j) \le dp\), the desired conclusion follows. The proof for the second-order difference operator is similar. \(\square \)

Applying these estimates to the bounds \(\tau _1\), \(\tau _2\) and \(\tau _3\) in Theorem 4 yields

$$\begin{aligned} \tau _1&\ll t^{-1+\frac{2}{d+1}}\log (t)^{2+4d+\frac{2}{d+1}},\\ \tau _2&\ll t^{-1+\frac{2}{d+1}}\log (t)^{2+4d+\frac{2}{d+1}},\\ \tau _3&\ll t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{3d+\frac{2}{d+1}}, \end{aligned}$$

thus

$$\begin{aligned} {\text {d}}_W(\tilde{\vartheta }_\mathrm{oracle}(K_t),Z) \le 2 \sqrt{\tau _1} + \sqrt{\tau _2} + \tau _3 \ll t^{-\frac{1}{2}+\frac{1}{d+1}}\log (t)^{3d+\frac{2}{d+1}}, \end{aligned}$$

for \(Z \overset{{d}}{\sim }\mathscr {N}(0,1)\), completing the proof of Theorem 2.
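As a closing illustration (again \(d = 2\), K the unit disk; a sanity check of the statement, not a substitute for the proof), one can standardize repeated realizations of \(\hat{\vartheta }_\mathrm{oracle}(K_t)\) empirically and test them for approximate normality:

```python
# Empirical standardization of the oracle estimator over many realizations;
# by Theorem 2 the standardized values should be approximately N(0,1).
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import kstest

rng = np.random.default_rng(5)
t = 300.0

def oracle():
    cand = rng.uniform(-1.0, 1.0, size=(int(8 * t), 2))
    disk = cand[np.hypot(cand[:, 0], cand[:, 1]) <= 1.0]
    hull = ConvexHull(disk[:rng.poisson(np.pi * t)])
    return hull.volume + len(hull.vertices) / t

s = np.array([oracle() for _ in range(500)])
z = (s - s.mean()) / s.std()
print(kstest(z, "norm"))   # a large p-value indicates approximate normality
```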