1 Introduction

Random tessellations in Euclidean space \({\mathbb {R}}^d\) form a class of mathematical objects that have been under intensive investigation in stochastic geometry over the last few decades. In addition to intrinsic mathematical curiosity, a major reason for continuing interest in random tessellations is that they provide highly relevant models for practical applications, for example, in telecommunication or materials science [19, 48, 49, 57]. One of the principal random tessellation models in Euclidean space is induced by a Poisson process of hyperplanes. In \({\mathbb {R}}^d\) with \(d\ge 2\) and in the stationary and isotropic case, the construction of a Poisson hyperplane tessellation can be described as follows. Fix a parameter \(t>0\) and consider a stationary Poisson point process on the real line with intensity t. To each point \(p_i\) of the Poisson process we attach a random vector \(u_i\) which is uniformly distributed on the unit sphere \({\mathbb {S}}^{d-1}\) of \({\mathbb {R}}^d\); the vectors \(u_i\) are chosen independently of each other and of the underlying Poisson process. Then to each pair \((p_i,u_i)\in {\mathbb {R}}\times {\mathbb {S}}^{d-1}\) we associate the hyperplane \(H_i\) which has \(u_i\) as a (Euclidean) unit normal vector and passes through the point \(p_iu_i\) with (signed) distance \(p_i\) from the origin. We call the random collection of all such hyperplanes a (stationary and isotropic) Poisson hyperplane process in \({\mathbb {R}}^d\) with intensity t. The random hyperplanes \(H_i\) almost surely divide the space \({\mathbb {R}}^d\) into countably many random convex polytopes. The collection of all these polytopes is a (stationary and isotropic) Poisson hyperplane tessellation in \({\mathbb {R}}^d\) with intensity t. We remark that the intensity parameter t, roughly speaking, controls the expected surface content of the Poisson hyperplane tessellation per unit volume.
More precisely, \(t={\mathbb {E}}{\mathcal {H}}^{d-1}(Z\cap [0,1]^d)\), where \(Z=\bigcup _{i=1}^\infty H_i\) is the random union set induced by the Poisson hyperplane process and \({\mathcal {H}}^{d-1}\) stands for the \((d-1)\)-dimensional Hausdorff measure.
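The Euclidean construction just described is easy to simulate. The following sketch (ours, not from the paper; the function names are assumptions) generates, for \(d=2\), the lines of a stationary and isotropic Poisson line process of intensity t that can hit a disc of radius R centred at the origin:

```python
import math
import random

def _poisson(lam, rng):
    """Knuth's algorithm: sample a Poisson(lam) random variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_poisson_lines(t, R, rng):
    """Lines of a stationary, isotropic Poisson line process of intensity t
    in the plane that can hit the disc of radius R: the signed distances p_i
    form a Poisson process of intensity t restricted to [-R, R], and each
    line gets an independent uniform unit normal u_i; the line itself is
    {x : <x, u_i> = p_i}."""
    lines = []
    for _ in range(_poisson(2.0 * R * t, rng)):
        p = rng.uniform(-R, R)                    # signed distance from origin
        theta = rng.uniform(0.0, 2.0 * math.pi)   # uniform direction on S^1
        lines.append((p, (math.cos(theta), math.sin(theta))))
    return lines
```

Since every line hitting the disc of radius R has signed distance in \([-R,R]\), the expected number of generated lines is \(2Rt\).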

For Poisson hyperplane tessellations many first- and second-order quantities are explicitly available for a broad class of functionals, and a comprehensive central limit theory has also been developed over the last 15 years, cf. [21, 23, 36, 58, 66] and [64, Chapter 10] as well as the many references cited therein. In the literature, central limit theorems for functionals of Poisson hyperplanes have been considered in two different set-ups. In the first setting the tessellation is restricted to a fixed (usually convex) observation window and the asymptotic behaviour is explored when the intensity t of the underlying Poisson process is increased. Alternatively, the intensity is kept fixed, while the size of the observation window is increased. By a simple scaling relation both set-ups are equivalent when homogeneous functionals (such as intrinsic volumes, positive powers of intrinsic volumes or integrals with respect to support measures) of the tessellation are considered, see [36, Corollary 6.2].

While the spherical analogues of Poisson hyperplane tessellations, namely Poisson great hypersphere tessellations, were investigated, for example, in [2, 24, 25, 26, 44], only a few results seem to be available for such tessellations in standard spaces of constant negative curvature, see [6, 55, 62, 70]. The spherical space of constant positive curvature is distinguished by its compactness, which in turn implies that Poisson great hypersphere tessellations almost surely consist of only finitely many spherical random polytopes. In contrast, Poisson hyperplane tessellations in a standard space of constant negative curvature display a number of striking new phenomena that cannot be observed in their Euclidean or spherical counterparts. It is the purpose of the present paper to initiate a systematic study of intersection processes of Poisson hyperplane tessellations in the d-dimensional hyperbolic space \({\mathbb {H}}^{d}\) and to uncover some of the anticipated and remarkable new phenomena. We confine ourselves to the study of the total volume (in the appropriate dimension) of the intersection processes induced by Poisson hyperplanes in a (hyperbolic convex) test set. We explicitly identify the expectation and the covariance structure of these functionals by appealing to general formulas for and structural properties of Poisson U-statistics and to Crofton-type formulas from hyperbolic integral geometry. In addition and more importantly, we study probabilistic limit theorems for these functionals in the two asymptotic regimes described above for the Euclidean set-up. While the central limit theorems for growing intensity and fixed observation window are a direct consequence of general central limit theorems for Poisson U-statistics [36, 58, 66, 67], it will turn out that the limit theory in the other regime, that is, when the intensity is kept fixed and the size of the observation window is increased, is fundamentally different.
We will prove that in this regime a central limit theorem does in fact hold in space dimensions \(d=2\) and \(d=3\). On the other hand, we will show that a central limit theorem fails for all space dimensions \(d\ge 4\) if the total \((d-1)\)-volume of the union of all hyperplanes is considered. For the total volume of intersection processes of arbitrary order this will be proved for technical reasons only for dimensions \(d\ge 7\). We emphasize that this remarkable and surprising new feature is a consequence of the negative curvature of the underlying space and has no counterpart in the Euclidean or spherical set-up. Another interesting and unexpected feature is observed in this regime for the asymptotic covariance matrix of the vector of k-volumes of the k-skeletons, \(k=0,\ldots ,d-1\). This matrix turns out to have full rank for \(d=2\), but it has rank one in all dimensions \(d\ge 3\). In addition, we will study the situation in which the intensity and the size of the observation window are increased simultaneously. In this case it will turn out that in all situations where the central limit theorem fails for fixed intensity, the Gaussian fluctuations are in fact preserved as soon as the intensity tends to infinity, independently of the behaviour of the size of the observation window (as long as it is bounded from below).

As anticipated above, the proofs of our results concerning first- and second-order properties of the total volume of intersection processes rely on general formulas for U-statistics of Poisson point processes as presented in [35] and on tools from hyperbolic integral geometry as developed in [11, 18, 61, 68]. The central limit theorems we consider will be of a quantitative nature, that is, we will provide explicit bounds on the quality (speed) of normal approximation measured in terms of both the Wasserstein and the Kolmogorov distance. Their proofs are based on general normal approximation bounds that have been derived in [15, 58, 67] using the Malliavin–Stein technique on Poisson spaces (see the collection [51] for a representative overview concerning this method). This directly implies the central limit theorem for fixed windows and growing intensities. On the other hand, for fixed intensity and when the window is a hyperbolic ball \(B_r\) of radius r around a fixed point in \({\mathbb {H}}^{d}\), crucial building blocks of these bounds are Crofton-type integrals of the form

$$\begin{aligned} \int _{A_h(d,k)}{\mathcal {H}}^k(H\cap B_r)^l\,\mu _k(dH), \end{aligned}$$

where \(A_h(d,k)\) denotes the space of k-dimensional totally geodesic subspaces of \({\mathbb {H}}^{d}\) and \(\mu _k\) is the suitably normalized invariant measure on \(A_h(d,k)\) (all terms will be explained in detail below). While in the Euclidean case the asymptotic behaviour of such integrals, as \(r\rightarrow \infty \), is quite straightforward, in the hyperbolic set-up it turns out that their behaviour crucially depends on whether \(l(k-1)\) is less than, greater than or equal to \(d-1\) (see Lemma 8). In essence, the latter is an effect of the negative curvature, which causes exponential growth of the volume of linearly expanding balls in \({\mathbb {H}}^{d}\). To show that a central limit theorem fails in higher space dimensions is arguably the most technical part of this paper. We do this by showing that the fourth cumulant of the centred and normalized total volume of the intersection processes does not converge to 0, the fourth cumulant of a standard Gaussian distribution. However, to bring this into contradiction with a central limit theorem we need to argue that the fourth power of the total volume is uniformly integrable, which will be established by considering fifth moments. This requires a fine analysis of combinatorial moment formulas for U-statistics of Poisson processes. In essence and in contrast to the lower dimensional cases \(d=2\) and \(d=3\), the failure of the central limit theorem for space dimensions \(d \ge 4\) is due to the fact that in these dimensions the contribution of single hyperplanes is asymptotically not negligible anymore.

We emphasize that the present paper contributes to a recent and active line of current mathematical research in stochastic geometry on models in non-Euclidean spaces. As concrete examples we mention here the studies of spherical convex hulls and convex hulls on half-spheres in [5, 28, 38]. Central limit theorems for the volume of random convex hulls in spherical space, hyperbolic spaces and Minkowski geometries were obtained in [7], while asymptotic normality of very general so-called stabilizing functionals of Poisson point processes on manifolds was considered in [54]. More specifically, the papers [9, 17, 47, 50] study various aspects of random geometric graphs in hyperbolic spaces, including central limit theorems for a number of parameters. Random tessellations of the unit sphere by great hyperspheres are the content of [2, 24, 25, 44], while so-called random splitting tessellations in spherical spaces were introduced and investigated in [14, 26]. The paper [12] is concerned with properties of Poisson-Voronoi tessellations on general Riemannian manifolds. Finally, the geometry of random fields on the sphere is studied in the monograph [39] and invariant random fields on spaces with a group action are described in [40]. In a similar vein, it is pointed out in [37] that a systematic study of the invariance properties of probability distributions under a general group action is missing. The book [37] therefore explores Markov processes whose distributions are invariant under the action of a Lie group.

The remaining parts of this paper are structured as follows. In the next section we formally define Poisson hyperplane tessellations in \({\mathbb {H}}^{d}\) and present our main results. We start in Sect. 2.1 with expectations and continue in Sect. 2.2 with second-order characteristics associated with the total volume of intersection processes. Our limit theorems will be discussed in Sect. 2.3. The necessary background material on hyperbolic geometry and hyperbolic integral geometry is collected in Sect. 3.1, the background material on Poisson U-statistics is the content of Sects. 3.2 and 3.3. All remaining sections are devoted to the proofs of our results. In Sect. 4 we present the proofs for first- and second-order parameters and also carry out a detailed covariance analysis, which is needed for our multivariate central limit theory. Our results on generalizations of the K-function and the pair-correlation function are established in Sect. 5. All univariate limit theorems are proved in Sect. 6, while the arguments for the multivariate central limit theorems are provided in the final Sect. 7.

2 Main results

2.1 First-order quantities

We denote by \({\mathbb {H}}^{d}\), for \(d\ge 2\), the d-dimensional hyperbolic space of constant curvature \(-1\), which is endowed with the hyperbolic metric \(d_h(\,\cdot \,,\,\cdot \,)\). We refer to Sect. 3.1 below for further background material on hyperbolic geometry and for a description of the conformal ball model for \({\mathbb {H}}^{d}\). Let \(p\in {\mathbb {H}}^{d}\) be an arbitrary (fixed) point, also referred to as the origin. For \(r\ge 0\) we denote by \(B_r=\{x\in {\mathbb {H}}^{d}:d_h(x,p)\le r\}\) the closed hyperbolic ball around p with radius r. A set \(K\subset {\mathbb {H}}^{d}\) is called a hyperbolic convex body, provided that K is non-empty, compact and if with each pair of points \(x,y\in K\) the (unique) geodesic connecting x and y is contained in K. The space of hyperbolic convex bodies is denoted by \({\mathcal {K}}_h^d\). Recall that for \(k\in \{0,1,\ldots ,d-1\}\) a k-dimensional totally geodesic subspace of \({\mathbb {H}}^{d}\) is called a k-plane and especially \((d-1)\)-planes are called hyperplanes. The space of k-planes in \({\mathbb {H}}^{d}\) is denoted by \(A_h(d,k)\). The space \(A_h(d,k)\) carries a measure \(\mu _k\), which is invariant under isometries of \({\mathbb {H}}^{d}\) (see Sect. 3.1 for the present normalization of this measure). For \(s\ge 0\) we denote by \({\mathcal {H}}^s\) the s-dimensional Hausdorff measure with respect to the intrinsic metric of \({\mathbb {H}}^{d}\) as a Riemannian manifold. Finally, we write \(\omega _k={2\pi ^{k/2}/\varGamma ({k/ 2})}\), \(k\in {\mathbb {N}}\), for the surface area of the k-dimensional unit ball in the Euclidean space \({\mathbb {R}}^k\).
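The constants \(\omega _k\) enter all explicit formulas below. As a minimal sketch (the helper name `omega` is ours), they can be evaluated directly from the definition \(\omega _k = 2\pi ^{k/2}/\varGamma (k/2)\):

```python
import math

def omega(k):
    """Surface area omega_k = 2*pi^(k/2) / Gamma(k/2) of the unit sphere
    S^{k-1}, i.e. of the boundary of the k-dimensional unit ball in R^k."""
    return 2.0 * math.pi ** (k / 2.0) / math.gamma(k / 2.0)
```

For instance, \(\omega _1=2\), \(\omega _2=2\pi \), \(\omega _3=4\pi \) and \(\omega _4=2\pi ^2\).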

Fig. 1

Two realizations of a Poisson hyperplane tessellation in \({\mathbb {H}}^2\) of different intensities represented in the conformal disc model

For \(t>0\), let \(\eta _t\) be a Poisson process on the space \(A_h(d,d-1)\) of hyperplanes in \({\mathbb {H}}^{d}\) with intensity measure \(t\mu _{d-1}\). We refer to \(\eta _t\) as a (hyperbolic) Poisson hyperplane process with intensity t. It induces a Poisson hyperplane tessellation in \({\mathbb {H}}^{d}\), i.e., a subdivision of \({\mathbb {H}}^{d}\) into (possibly unbounded) hyperbolic cells (generalized polyhedra), see Fig. 1. For \(i\in \{0,\ldots ,d-1\}\) we consider the intersection process \(\xi _t^{(i)}\) of order \(d-i\) of the Poisson hyperplane process \(\eta _t\) given by

$$\begin{aligned} \xi _t^{(i)} := \frac{1}{(d-i)!}\sum _{(H_1,\ldots ,H_{d-i})\in \eta _{t,\ne }^{d-i}}\delta _{H_1\cap \ldots \cap H_{d-i}}\,\mathbf{1}\{\dim (H_1\cap \ldots \cap H_{d-i})=i\}, \end{aligned}$$

where \(\eta _{t,\ne }^{d-i}\) is the set of \((d-i)\)-tuples of pairwise distinct hyperplanes supported by \(\eta _t\), \(\delta _{(\,\cdot \,)}\) denotes the Dirac measure and \(\dim (\,\cdot \,)\) stands for the dimension of the set in the argument. In this paper we are interested in random variables of the form

$$\begin{aligned} F_{W,t}^{(i)}&:= \int {\mathcal {H}}^i(E\cap W)\,\xi ^{(i)}_t(dE) \nonumber \\&= \frac{1}{(d-i)!}\sum _{(H_1,\ldots ,H_{d-i})\in \eta _{t,\ne }^{d-i}}{\mathcal {H}}^i(H_1\cap \ldots \cap H_{d-i} \cap W)\,\nonumber \\&\quad \times \mathbf{1}\{\dim (H_1\cap \ldots \cap H_{d-i})=i\}, \end{aligned}$$
(1)

where \(W\subset {\mathbb {H}}^{d}\) is a (fixed) Borel set. In other words, \(F_{W,t}^{(i)}\) measures the total i-volume (i.e., the i-dimensional Hausdorff measure) of the intersection process \(\xi _t^{(i)}\) within W. For example,

$$\begin{aligned} F_{W,t}^{(d-1)} = \sum _{H\in \eta _t}{\mathcal {H}}^{d-1}(H\cap W) = {\mathcal {H}}^{d-1}\Big (\bigcup _{H\in \eta _t}H\cap W\Big ) \end{aligned}$$

is the total surface content of the union of all hyperplanes of \(\eta _t\) within W. On the other hand,

$$\begin{aligned} F_{W,t}^{(0)} = {1\over d!}\sum _{(H_1,\ldots ,H_d)\in \eta _{t,\ne }^d}\mathbf{1}\{H_1\cap \ldots \cap H_d\cap W\ne \emptyset ,\mathrm{dim}(H_1\cap \ldots \cap H_d)=0\} \end{aligned}$$

is the total number of vertices in W of the Poisson hyperplane tessellation, i.e., the total number of intersection points induced by the hyperplanes of \(\eta _t\). In the Euclidean case these random variables have received particular attention in the literature, see e.g. [20, 21, 27, 29, 30, 36, 42, 58, 64] and the references cited therein. As in the Euclidean case, we will start by investigating the expectation of \(F_{W,t}^{(i)}\).

Theorem 1

(Expectation) If \(W\subset {\mathbb {H}}^{d}\) is a Borel set, \(t>0\) and \(i\in \{0,1,\ldots ,d-1\}\), then

$$\begin{aligned} {\mathbb {E}} F_{W,t}^{(i)}= \frac{\omega _{i+1}}{\omega _{d+1}}\left( \frac{\omega _{d+1}}{\omega _d}\right) ^{d-i} \frac{t^{d-i}}{(d-i)!}\, {\mathcal {H}}^d(W). \end{aligned}$$
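The right-hand side of Theorem 1 is straightforward to evaluate numerically. The following sketch (ours; `mean_F` is a hypothetical helper) also illustrates the cancellation for \(i=d-1\), where the constants collapse and \({\mathbb {E}} F_{W,t}^{(d-1)}=t\,{\mathcal {H}}^d(W)\), consistent with the interpretation of t as the expected surface content per unit volume:

```python
import math

def omega(k):
    # surface area of the unit sphere S^{k-1} in R^k
    return 2.0 * math.pi ** (k / 2.0) / math.gamma(k / 2.0)

def mean_F(d, i, t, vol_W):
    """Evaluate the right-hand side of Theorem 1, the expectation of the
    total i-volume F_{W,t}^{(i)} for a Borel set W with H^d(W) = vol_W."""
    return (omega(i + 1) / omega(d + 1)) \
        * (omega(d + 1) / omega(d)) ** (d - i) \
        * t ** (d - i) / math.factorial(d - i) * vol_W
```

For example, in the plane (\(d=2\)) the expected number of intersection points per unit area comes out as \(t^2/\pi \).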

Remark 1

We observe that precisely the same formula holds in the Euclidean and spherical cases. This is not surprising, since the proof of Theorem 1 is based only on the multivariate Mecke formula for Poisson processes and a recursive application of Crofton’s formula from integral geometry, see Sect. 4. Since the latter holds for any standard space of constant curvature \(\kappa \in \{-1,0,1\}\) with the same constant (cf. [11, 61]), the result of Theorem 1 holds simultaneously for all standard spaces of constant curvature \(\kappa \in \{-1,0,1\}\). In other words, the expectation \({\mathbb {E}} F_{W,t}^{(i)}\) is not an appropriate quantity to ‘feel’ or to ‘detect’ the curvature of the underlying space. For this we will use second-order characteristics.

2.2 Second-order quantities

In a next step, we describe the covariance structure of the functionals \(F_{W,t}^{(i)}\), \(i\in \{0,1,\ldots ,d-1\}\), introduced in (1). The following explicit representation for the covariances will be derived from the Fock space representation of Poisson U-statistics.

Theorem 2

(Covariances) Let \(W\subset {\mathbb {H}}^{d}\) be a Borel set, let \(t>0\), and let \(i,j\in \{0,1,\ldots ,d-1\}\). Then

$$\begin{aligned}&{\mathbb {C}}{\mathrm{ov}}(F_{W,t}^{(i)},F_{W,t}^{(j)}) \\&\quad =\sum _{n=1}^{\min \{d-i,d-j\}} c_{i,j,n,d}\,t^{2d-i-j-n}\int _{A_h(d,d-n)}{\mathcal {H}}^{d-n}(E\cap W)^{2}\,\mu _{d-n}(dE) \end{aligned}$$

with

$$\begin{aligned} c_{i,j,n,d} = \frac{1}{n!}\, \frac{1}{\omega _{d+1} \, \omega _{d-n+1}}\frac{\omega _{i+1}}{(d-i-n)!} \frac{\omega _{j+1}}{(d-j-n)!} \left( \frac{\omega _{d+1}}{\omega _d} \right) ^{2d-i-j-n}. \end{aligned}$$
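For concrete dimensions the constants \(c_{i,j,n,d}\) can be computed directly. A small sketch (helper names are ours), which in particular exhibits the symmetry in i and j:

```python
import math

def omega(k):
    # surface area of the unit sphere S^{k-1} in R^k
    return 2.0 * math.pi ** (k / 2.0) / math.gamma(k / 2.0)

def cov_coeff(i, j, n, d):
    """The constant c_{i,j,n,d} from Theorem 2; requires
    n <= min(d - i, d - j)."""
    return (1.0 / math.factorial(n)) \
        / (omega(d + 1) * omega(d - n + 1)) \
        * omega(i + 1) / math.factorial(d - i - n) \
        * omega(j + 1) / math.factorial(d - j - n) \
        * (omega(d + 1) / omega(d)) ** (2 * d - i - j - n)
```

For instance, in the planar case all omega factors cancel and \(c_{1,1,1,2}=1\).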

Remark 2

Since Theorem 2 follows from the general Fock space representation of Poisson U-statistics, the formula for \({\mathbb {C}}{\mathrm{ov}}(F_{W,t}^{(i)},F_{W,t}^{(j)})\) is formally the same for all spaces of constant curvature \(\kappa \in \{-1,0,1\}\). However, the curvature properties of the underlying space are hidden in the integral-geometric expression

$$\begin{aligned} J_k(W):=\int _{A_h(d,k)}{\mathcal {H}}^{k}(E\cap W)^{2}\,\mu _{k}(dE), \end{aligned}$$

for \(k\in \{0,\ldots ,d-1\}\). In fact, if \(\kappa \in \{-1,0\}\) and if we replace W by a ball \(B_r\) of radius r around an arbitrary fixed point, we can consider the asymptotic behaviour of \(J_k(B_r)\), as \(r\rightarrow \infty \), which is quite different in these two cases (note that in spherical spaces with constant curvature \(\kappa =1\) the range of r is bounded). While in the Euclidean case \(\kappa =0\), \(J_k(B_r)\) behaves like a constant multiple of \(r^{d+k}\) for all choices of k, in the hyperbolic case \(\kappa =-1\) we will show that \(J_k(B_r)\) behaves like a constant multiple of \(e^{(d-1)r}\) if \(2k-1<d\), like a constant multiple of \(re^{(d-1)r}\) if \(2k-1=d\) and like a constant multiple of \(e^{2(k-1)r}\) if \(2k-1>d\), see Lemma 8 below. This also means that only in the case where \(2k-1<d\), the value \(J_k(B_r)\) grows with r like a constant multiple of the volume of \(B_r\). In this sense we can say that second-order properties of the functionals \(F_{W,t}^{(i)}\) are sensitive to the curvature of the underlying space. In Euclidean space, cross-sectional measures such as the Crofton-type integrals \(J_k(W)\) have been studied intensively, in particular since they naturally arise in the context of stereological problems or in the investigation of geometric inequalities (see [64, Section 8.6] for further details). In the present study, it follows from Theorem 2 that the quantities \(J_{d-n}(B_r)\), for \(n=1,\ldots ,d-i\), determine the asymptotic behaviour of the variances \({\mathbb {V}}{\mathrm{ar}}(F^{(i)}_{B_r,t})\), as \(r\rightarrow \infty \). We will show in Sect. 4.4 that the dominating contribution comes from \(J_{d-1}(B_r)\). Since the distinction \(2(d-1)-1<d\), \(2(d-1)-1=d\), \(2(d-1)-1>d\) precisely corresponds to the cases \(d=2\), \(d=3\), \(d\ge 4\), we already see one reason for the dependence of our results on the dimension of the hyperbolic space. 
In order to establish the normal approximation bounds of Theorem 5, which are based on the general bound provided in (14), we have to deal with further integrals of Crofton-type, as described in Sect. 6.2.
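The trichotomy for the growth of \(J_k(B_r)\) can be made explicit in a few lines. The following illustrative helper (ours, returning informal order-of-growth strings) encodes the case distinction of Lemma 8 and, for \(k=d-1\), reproduces the split into \(d=2\), \(d=3\) and \(d\ge 4\):

```python
def growth_of_J(d, k):
    """Asymptotic order of J_k(B_r) as r -> infinity in H^d, as stated
    above (Lemma 8): the regime is decided by comparing 2k - 1 with d."""
    if 2 * k - 1 < d:
        return "exp((d-1)r)"      # same order as the volume of B_r
    if 2 * k - 1 == d:
        return "r exp((d-1)r)"
    return "exp(2(k-1)r)"
```

Note that for \(k=d-1\) the comparison \(2(d-1)-1\lessgtr d\) yields the first regime exactly for \(d=2\), the boundary regime for \(d=3\) and the third regime for all \(d\ge 4\).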

Continuing the discussion of second-order properties of Poisson hyperplane tessellations in \({\mathbb {H}}^{d}\), we now introduce and describe the K-function and the pair-correlation function of the i-dimensional Hausdorff measure restricted to the i-skeleton of the tessellation. In the Euclidean case these two functions have turned out to be essential tools in the second-order analysis of stationary random measures (see the original paper [60] and the recent monograph [4] as well as the references cited therein). To be precise, for \(i\in \{0,1,\ldots ,d-1\}\) and fixed \(t>0\), we first consider the i-skeleton of the Poisson hyperplane tessellation in \({\mathbb {H}}^{d}\) with intensity t, which is defined as the random closed set

$$\begin{aligned} \mathrm{skel}_i := \mathop \bigcup \limits _{\begin{array}{c} (H_1,\ldots ,H_{d-i})\in \eta _{t,\ne }^{d-i}\\ \mathrm{dim}(H_1\cap \ldots \cap H_{d-i})=i \end{array}}H_1\cap \ldots \cap H_{d-i}. \end{aligned}$$

The i-dimensional Hausdorff measure on \(\mathrm{skel}_i\) is denoted by \({\mathbf {M}}_i\). It is a stationary random measure on \({\mathbb {H}}^{d}\), that is, its distribution is invariant under isometries of \({\mathbb {H}}^{d}\). Its intensity is defined by \(\lambda _i={\mathbb {E}}F_{B,t}^{(i)}\), where \(B\subset {\mathbb {H}}^{d}\) is an arbitrary Borel set with \({\mathcal {H}}^d(B)=1\). It follows from Theorem 1 that

$$\begin{aligned} \lambda _i=\frac{\omega _{i+1}}{\omega _{d+1}}\left( \frac{\omega _{d+1}}{\omega _d}\right) ^{d-i} \frac{t^{d-i}}{(d-i)!}. \end{aligned}$$
(2)

The K-function of the random measure \({\mathbf {M}}_i\) is defined by

$$\begin{aligned} K_i(r) : = \frac{1}{\lambda _i^2}\,{\mathbb {E}}\int _{{{\mathbb {H}}^{d}}}\int _B\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathbf {M}}_i(dy)\,{\mathbf {M}}_i(dx),\qquad r>0. \end{aligned}$$
(3)

Writing this definition in the form

$$\begin{aligned} \lambda _i K_i(r)=\frac{1}{\lambda _i}\,{\mathbb {E}}\int _B {\mathbf {M}}_i(B(y,r){\setminus }\{y\})\, {\mathbf {M}}_i(dy), \end{aligned}$$

where \(B(y,r)\) is a closed hyperbolic ball with centre y and radius r, justifies the (common) interpretation of the K-function as the mean \({\mathcal {H}}^i\)-measure of the i-skeleton \(\mathrm{skel}_i\) within a ball of radius r centred at the typical point of \({\mathbf {M}}_i\) (see also [45, p. 316] for a similar description in the point process case). In point process theory (which concerns the case \(i=0\)), the K-function is a popular device for the analysis and distinction of spatial correlations in point patterns. Since it includes the radius as a parameter, the K-function provides cumulative information across a range of spatial scales (see [4]). Here we also consider correlations in mass distributions concentrated on lower-dimensional structures.

The condition \(d_h(x,y)>0\) is usually omitted in the definition of the K-function of a diffuse stationary random measure, since in this case it has no effect. For \(i\in \{1,\ldots ,d-1\}\), the proof of the following more general Theorem 3 will show that \(K_i(r)\) indeed remains unchanged if we drop the condition \(d_h(x,y)>0\). For \(i=0\), however, the random measure \({\mathbf {M}}_i\) is a stationary point process in \({\mathbb {H}}^{d}\) and then the restriction \(d_h(x,y)>0\) is common. The term which has to be added if this restriction is removed is just \(\lambda _0^{-1}\), see the comments below. The proof of Theorem 3 will also show that the summands corresponding to indices \(n\in \{0,\ldots ,d-1\}\) in (4) are not affected by the restriction, but the summand with \(n=d\) will be zero.

If we define \(K_i(B,r)\) as in (3), but for a general measurable set \(B\subset {\mathbb {H}}^{d}\), it follows from the stationarity of \(\eta _t\) that the measure \(K_i(\,\cdot \,,r)\) is isometry invariant and hence a constant multiple of \({\mathcal {H}}^d(\, \cdot \, )\), provided it is locally finite. In Theorem 3, this will be shown and the constant will be determined by calculating \(K_i(B,r)\) for a measurable set \(B\subset {\mathbb {H}}^{d}\) with \({\mathcal {H}}^d(B)=1\). We will also see that the map \(r\mapsto K_i(r)\) is differentiable, which allows us to consider the pair-correlation function

$$\begin{aligned} g_i(r) := {1\over \omega _{d}\sinh ^{d-1}(r)}\,\frac{dK_i}{ dr}(r),\qquad r> 0. \end{aligned}$$

Roughly speaking, it describes the probability of finding a point on the i-skeleton at geodesic distance r from another point belonging to \(\mathrm{skel}_i\). In contrast to the cumulative K-function, which contains contributions for all interpoint distances less than or equal to some r, the pair-correlation function basically contains contributions only from interpoint distances equal to some given r. In statistical physics, pair-correlation functions are a common tool, for instance, in the analysis of the spatial distribution of (random) heterogeneous materials, of disordered particle packings [3] or of hyperuniformity in point patterns and random measures [31, 32, 43, 69].

More generally and in analogy to the covariances considered in Theorem 2, we will consider the mixed K-function \(K_{ij}\) for \( i, j \in \{0,\ldots ,d-1\}\). For \(r>0\) and a measurable set \(B\subset {\mathbb {H}}^{d}\) with \({\mathcal {H}}^d(B)=1\) it is defined by

$$\begin{aligned} K_{ij}(r)&= {1\over \lambda _i \lambda _j}\,{\mathbb {E}}\int _{{{\mathbb {H}}^{d}}}\int _B\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathbf {M}}_j(dy)\,{\mathbf {M}}_i(dx)\\&= {1\over \lambda _i \lambda _j}\,{\mathbb {E}}\int _{\mathrm{skel}_i}\int _{\mathrm{skel}_j\cap B}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^j(dy)\,{\mathcal {H}}^i(dx). \end{aligned}$$

Similarly as in the special case of the classical K-function, we can rewrite the definition in the form

$$\begin{aligned} \lambda _i K_{ij}(r)=\frac{1}{\lambda _j}\,{\mathbb {E}}\int _B {\mathbf {M}}_i(B(y,r)\setminus \{y\})\, {\mathbf {M}}_j(dy), \end{aligned}$$

which suggests the interpretation of the mixed K-function as describing the random measure \({\mathbf {M}}_i\) as seen from a typical point of \({\mathbf {M}}_j\), in the sense of Palm distributions. We retrieve the ordinary K-function by the special choice \(j=i\). The new mixed K-function allows one to explore correlations between mass distributions concentrated on structures of different dimensions, including the interaction between a point process (obtained for \(i=0\)) and diffuse random measures (obtained for \(j>0\)).

The mixed pair-correlation function \(g_{ij}\) is then defined in the obvious way by differentiation of \(K_{ij}\), namely,

$$\begin{aligned} g_{ij}(r) := {1\over \omega _{d}\sinh ^{d-1}(r)}\,\frac{dK_{ij}}{ dr}(r),\qquad r> 0. \end{aligned}$$

As in the case of the K-function, the condition that \(0<d_h(x,y)\) can be omitted if \(i\ge 1\) or \(j\ge 1\).

Theorem 3

(Mixed K-function and mixed pair-correlation function) If \(i,j\in \{0,1,\ldots ,d-1\}\), \(t>0\) and \(r>0\), then

$$\begin{aligned} K_{ij}(r)&= \sum _{n=0}^{m(d,i,j)}n!{d-i\atopwithdelims ()n}{d-j\atopwithdelims ()n}{\omega _{d+1}\omega _{d-n}\over \omega _{d-n+1}}\bigg ({\omega _d\over \omega _{d+1}}{1\over t}\bigg )^{n}\int _0^r\sinh ^{d-n-1}(s)\,ds,\nonumber \\ g_{ij}(r)&= 1+\sum _{n=1}^{m(d,i,j)}n!{d-i\atopwithdelims ()n}{d-j\atopwithdelims ()n}{\omega _{d-n}\over \omega _{d-n+1}}\bigg ({\omega _d\over \omega _{d+1}}\bigg )^{n-1}{1\over (t\sinh (r))^{n}}, \end{aligned}$$
(4)

where \(m(d,i,j):=\min \{d-i,d-j,d-1\}\).

In (4) we restrict the summation to \(n\le d-1\) in order to avoid an undefined expression which arises for \(i=j=0\) and \(n=d\). Alternatively, one can note that for \(n=d\) the factor \(\omega _{d-n}=\omega _0=2/\varGamma (0)\) vanishes, and the product with the infinite integral can be defined to be zero. However, if we remove the restriction \(d_h(x,y)>0\), then for \(i=j=0\) and an arbitrary Borel set \(B\subset {\mathbb {H}}^d\) with \({\mathcal {H}}^d(B)=1\) we get the additional contribution

$$\begin{aligned} K_{00}^*(r):&=\frac{1}{\lambda _0^2}{\mathbb {E}}\int _{\mathrm{skel}_0}\int _{\mathrm{skel}_0\cap B}\mathbb {1}\{0=d_h(x,y)\le r\}\,{\mathcal {H}}^0(dy)\,{\mathcal {H}}^0(dx)\\&=\frac{1}{\lambda _0^2}{\mathbb {E}}\left[ {\mathcal {H}}^0(\mathrm{skel}_0\cap B)\right] =\frac{1}{\lambda _0^2}{\mathbb {E}}\left[ F^{(0)}_{B,t}\right] =\lambda _0^{-1}. \end{aligned}$$

This is consistent with (4) if the summation is extended up to \(n=d\) and the product \(\omega _0 \int _0^r\sinh ^{d-d-1}(s)\, ds\) is (properly) interpreted as \({\mathcal {H}}^{d-d}(B_r^{d-d})=1\) (see the proof of Theorem 3).

In the special case \(d=2\) and for \(i=j\) we thus obtain

$$\begin{aligned} g_0(r) = 1+{4\over \pi \,t}\,{1\over \sinh (r)}\qquad \text {and}\qquad g_1(r) = 1+{1\over \pi \,t}\,{1\over \sinh (r)}, \end{aligned}$$

and for \(d=3\) and again \(i=j\) we get

$$\begin{aligned} g_0(r)&= 1 + {9\over 2\,t}\,{1\over \sinh (r)} + {36\over \pi ^2\,t^2}\,{1\over \sinh ^2(r)},\\ g_1(r)&= 1 + {2\over t}\,{1\over \sinh (r)} + {4\over \pi ^2\,t^2}\,{1\over \sinh ^2(r)},\\ g_2(r)&= 1 + {1\over 2\,t}\,{1\over \sinh (r)}, \end{aligned}$$

see Fig. 2.

Fig. 2

Left panel: The pair-correlation functions \(g_{0}\) (solid curve) and \(g_1\) (dashed curve) for \(d=2\) and \(t=1\). Right panel: The pair-correlation functions \(g_{0}\) (solid curve), \(g_1\) (dashed curve) and \(g_2\) (dotted curve) for \(d=3\) and \(t=1\)
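As a numerical cross-check, the general formula (4) for \(g_{ij}\) can be evaluated and compared with the explicit \(d=2\) and \(d=3\) expressions above. A sketch (helper names are ours):

```python
import math

def omega(k):
    # surface area of the unit sphere S^{k-1} in R^k
    return 2.0 * math.pi ** (k / 2.0) / math.gamma(k / 2.0)

def g_mixed(d, i, j, t, r):
    """Mixed pair-correlation function g_{ij}(r) from Theorem 3, with
    m(d, i, j) = min(d - i, d - j, d - 1)."""
    m = min(d - i, d - j, d - 1)
    total = 1.0
    for n in range(1, m + 1):
        total += (math.factorial(n)
                  * math.comb(d - i, n) * math.comb(d - j, n)
                  * omega(d - n) / omega(d - n + 1)
                  * (omega(d) / omega(d + 1)) ** (n - 1)
                  / (t * math.sinh(r)) ** n)
    return total
```

Evaluating `g_mixed` for \(i=j\) reproduces the formulas for \(g_0\), \(g_1\) in dimension \(d=2\) and for \(g_0\), \(g_1\), \(g_2\) in dimension \(d=3\) displayed above.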

Remark 3

An inspection of the proof shows that Theorem 3 is based only on Crofton’s formula and Lemma 4, which in turn is also based on Crofton’s formula. Since the latter holds for any space of constant curvature \(\kappa \in \{-1,0,1\}\) with the same constant (cf. [11, 61]), Theorem 3 remains valid also in spherical and Euclidean spaces of curvature \(\kappa =1\) and \(\kappa =0\), respectively. Namely, defining the modified sine function

$$\begin{aligned} \mathrm{sn}_\kappa (r) := {\left\{ \begin{array}{ll} \sin (r) &{}: \kappa =1,\\ r &{}: \kappa =0,\\ \sinh (r) &{}: \kappa =-1, \end{array}\right. } \end{aligned}$$

we obtain

$$\begin{aligned} K_{ij}(r) = \sum _{n=0}^{m(d,i,j)}n!{d-i\atopwithdelims ()n}{d-j\atopwithdelims ()n}{\omega _{d+1}\omega _{d-n}\over \omega _{d-n+1}}\bigg ({\omega _d\over \omega _{d+1}}{1\over t}\bigg )^{n}\int _0^r\mathrm{sn}_\kappa ^{d-n-1}(s)\,ds \end{aligned}$$

and

$$\begin{aligned} g_{ij}(r) = 1+\sum _{n=1}^{m(d,i,j)}n!{d-i\atopwithdelims ()n}{d-j\atopwithdelims ()n}{\omega _{d-n}\over \omega _{d-n+1}}\bigg ({\omega _d\over \omega _{d+1}}\bigg )^{n-1}{1\over (t\,\mathrm{sn}_\kappa (r))^{n}} \end{aligned}$$

for \(r> 0\) if \(\kappa \in \{-1,0\}\) and \(0< r<\pi \) if \(\kappa =1\). For \(i=j=d-1\) and \(\kappa =1\) these formulas have been proved in [26, Section 6.2] with a different normalization. Moreover, for \(\kappa =0\) the formula for \(g_0(r)\) appears as the identity (3.15) in [22], while \(g_{d-1}(r)\) can be found in [65, Section 7]. As already explained in [23], for general \(i\in \{0,1,\ldots ,d-1\}\) it can in principle be deduced from an explicit formula for the second-order moments of the total volume of intersection processes, see [41, p. 164].

2.3 Limit theorems

Our next result is a central limit theorem for \(F_{W,t}^{(i)}\), for a fixed hyperbolic convex body W, when the intensity parameter t tends to infinity. We will measure the distance between (the laws of) two random variables by the Wasserstein and the Kolmogorov distance. For their definitions we refer to Sect. 3.2 below.

Theorem 4

(CLT, growing intensity) Let \(d\ge 2\), \(i\in \{0,1,\ldots ,d-1\}\) and let \(W\in {\mathcal {K}}_h^d\) be a fixed hyperbolic convex body with non-empty interior. Let N be a standard Gaussian random variable, and let \(d(\,\cdot \,,\,\cdot \,)\) denote either the Wasserstein or the Kolmogorov distance. Then there exists a constant \(c\in (0,\infty )\) such that

$$\begin{aligned} d\left( \frac{F_{W,t}^{(i)}- {\mathbb {E}}F_{W,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{W,t}^{(i)}}},N \right) \le c\,t^{-1/2} \end{aligned}$$

for all \(t\ge 1\).

As already explained in the introduction, the central limit problem for \(F_{W,t}^{(i)}\) can also be approached in another set-up, which in the Euclidean case is equivalent to the one just discussed, but turns out to be fundamentally different in hyperbolic space. More precisely, we now turn to the case where the intensity t is fixed, while the size of the observation window grows. We do this only for spherical windows in \({\mathbb {H}}^{d}\). In other words, we choose for W the hyperbolic ball \(B_r\) (around the origin p) and write \(F_{r,t}^{(i)}\) instead of \(F_{B_r,t}^{(i)}\) in this case. Our next result is a central limit theorem for \(F_{r,t}^{(i)}\) for dimension \(d=2\) in part (a) and for \(d=3\) in part (b). Moreover, it turns out that a central limit theorem for \(F_{r,t}^{(i)}\) fails in every space dimension \(d\ge 4\), see part (c). We emphasize that this surprising phenomenon is in sharp contrast to the Euclidean case [21, 36, 58] and is an effect of the negative curvature.

In the following, it should be understood that whenever we impose an assumption \(r\ge 1\), the lower bound 1 could be replaced by any other fixed positive number.

Theorem 5

(CLT, growing spherical window) Let \(t\ge 1\), let N be a standard Gaussian random variable, and let \(d(\,\cdot \,,\,\cdot \,)\) denote either the Wasserstein or the Kolmogorov distance.

  (a)

    If \(d=2\), then there is a constant \(c_2\in (0,\infty )\) only depending on t such that

    $$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le c_2\,r^{1-i}\,e^{-r/2} \end{aligned}$$

    for \(i\in \{0,1\}\) and \(r\ge 1\).

  (b)

    If \(d=3\), then there is a constant \(c_3\in (0,\infty )\) only depending on t such that

    $$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le {\left\{ \begin{array}{ll} c_3\,r^{-1} &{}: i=2,\\ c_3\,r^{-1/2} &{}: i\in \{0,1\},\\ \end{array}\right. } \end{aligned}$$

    for \(r\ge 1\).

  (c)

    If \(d\ge 4\) and \(i=d-1\) or if \(d\ge 7\) and \(i\in \{0,1,\ldots ,d-1\}\), then the random variable \((F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)})/\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}\) does not satisfy a central limit theorem for \(r\rightarrow \infty \).

Remark 4

  (i)

    The restriction imposed on the parameters \(d\) and \(i\) in Theorem 5 (c) is the result of a number of technical obstacles one needs to overcome in its proof. We strongly believe that a central limit theorem in fact fails for all \(d\ge 4\) and all choices of \(i\in \{0,1,\ldots ,d-1\}\). However, we have to leave this as an open problem for future work. For some remarks about the potential limiting distribution in Theorem 5 (c) we refer to Remark 12.

  (ii)

    It is instructive to rewrite the normal approximation bounds in Theorem 5 (a) and (b) as follows. For \(d=2\) and \(i\in \{0,1\}\) we have that

    $$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le {\hat{c}}_2 \, {\log ^{1-i}{\mathcal {H}}^2(B_r)\over \sqrt{{\mathcal {H}}^2(B_r)}},\qquad r\ge 1, \end{aligned}$$

    and for \(d=3\) we have, again for \(r\ge 1\),

    $$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le {\hat{c}}_3 {\left\{ \begin{array}{ll} {1\over \log {\mathcal {H}}^3(B_r)} &{}: i=2,\\ {1\over \sqrt{\log {\mathcal {H}}^3(B_r)}} &{}: i\in \{0,1\}.\\ \end{array}\right. } \end{aligned}$$

    Here \({\hat{c}}_2,{\hat{c}}_3\in (0,\infty )\) are again constants only depending on t. This means that in dimension \(d=2\) the speed of convergence is the same as in the Euclidean case (up to the logarithmic factor for \(i=0\)). Moreover, it shows that \(d=3\) is the critical dimension: there the central limit theorem still holds, but with a much slower rate of convergence.
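The equivalence of the two ways of writing the bounds for \(d=2\) can be checked numerically. The following Python sketch (with hypothetical helper names) uses \({\mathcal {H}}^2(B_r)=2\pi (\cosh (r)-1)\) from Sect. 3.1 and verifies that the ratio of the radial bound \(r^{1-i}e^{-r/2}\) to the volume-based bound \(\log ^{1-i}{\mathcal {H}}^2(B_r)/\sqrt{{\mathcal {H}}^2(B_r)}\) stays bounded away from 0 and \(\infty \):

```python
import math

def area2(r):
    # Hyperbolic area of a disc of radius r in H^2: 2*pi*(cosh(r) - 1).
    return 2 * math.pi * (math.cosh(r) - 1)

def ratio(r, i):
    # Ratio of the radial bound r^(1-i) * e^(-r/2) to the volume-based bound
    # log^(1-i)(H^2(B_r)) / sqrt(H^2(B_r)); it should stay bounded in r.
    A = area2(r)
    return (r ** (1 - i) * math.exp(-r / 2)) / (math.log(A) ** (1 - i) / math.sqrt(A))
```

As \(r\rightarrow \infty \) the ratio tends to \(\sqrt{\pi }\) for \(i=1\) (since \({\mathcal {H}}^2(B_r)\sim \pi e^r\)), and remains of constant order for \(i=0\) as well.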

Theorem 4 shows that for fixed radius r and increasing intensity t a central limit theorem for \(F_{r,t}^{(i)}\) with \(i\in \{0,1,\ldots ,d-1\}\) holds. On the other hand, according to Theorem 5 (c) the central limit theorem breaks down for dimensions \(d\ge 4\) (if the total surface area is considered) or \(d\ge 7\) (for general \(i\in \{0,1,\ldots ,d-1\}\)) if the intensity t stays fixed and \(r\rightarrow \infty \). Against this background the question arises whether in these cases the central limit behaviour can be preserved if the intensity t and the radius r tend to infinity simultaneously. In fact, the following result states that this is indeed the case. More precisely, it says that, independently of the behaviour of r, the central limit theorem holds as soon as \(t\rightarrow \infty \) (and r is bounded from below by 1).

Theorem 6

(CLT for simultaneous growth of intensity and window) Let \(d\ge 4\) and \(i=d-1\) or \(d\ge 7\) and \(i\in \{0,1,\ldots ,d-1\}\). Also, let N be a standard Gaussian random variable. Then there is a constant \(c\in (0,\infty )\) such that

$$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le {c\over \sqrt{t}} \end{aligned}$$

for all \(r\ge 1\) and \(t\ge 1\), where \(d(\,\cdot \,,\,\cdot \,)\) denotes either the Wasserstein or the Kolmogorov distance.

Remark 5

In dimensions \(d=2\) and \(d=3\) we also have normal approximation bounds that simultaneously involve the two parameters t and r. In fact, for \(d=2\) the bounds (36) and (40) below show that

$$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le c\,t^{-1/2}\,r^{1-i}e^{-r/2} \end{aligned}$$

holds for all \(t\ge 1\), \(r\ge 1\) and \(i\in \{0,1\}\). Similarly, for \(d=3\) the estimates (42), (46) and (47) prove that

$$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right) \le c\cdot {\left\{ \begin{array}{ll} t^{-1/2}r^{-1} &{}: i=2,\\ t^{-1/2}r^{-1/2} &{}: i\in \{0,1\}, \end{array}\right. } \end{aligned}$$

for all \(t\ge 1\) and \(r\ge 1\). In both cases, \(d(\,\cdot \,,\,\cdot \,)\) stands for either the Wasserstein or the Kolmogorov distance. This way we recover Theorem 4 for \(d=2\) and \(d=3\) in the special case where \(W=B_r\) with r fixed and we recover Theorem 5 (a) and (b) by fixing t.

Finally, let us turn to the multivariate set-up. To compare the distributions (the laws) of two random vectors we use what is known as the \(d_2\)- and the \(d_3\)-distance; for their definitions we refer to Sect. 3.3 below. We approach the multivariate central limit theorem by considering, as above, two different settings. To handle the central limit problem for a fixed window \(W\in {\mathcal {K}}_h^d\) and growing intensities we define for \(t>0\) the d-dimensional random vector

$$\begin{aligned} \mathbf{F }_{W,t} := \Bigg ({F_{W,t}^{(0)}-{\mathbb {E}}F_{W,t}^{(0)}\over t^{d-1/2}},\ldots ,{F_{W,t}^{(i)}-{\mathbb {E}}F_{W,t}^{(i)}\over t^{d-i-1/2}},\ldots ,{F_{W,t}^{(d-1)}-{\mathbb {E}}F_{W,t}^{(d-1)}\over t^{1/2}}\Bigg ). \end{aligned}$$

Moreover, for \(i,j\in \{0,1,\ldots ,d-1\}\) we introduce the asymptotic covariances and the asymptotic covariance matrix of the random vector \(\mathbf{F }_{W,t}\), as \(t\rightarrow \infty \), by

$$\begin{aligned} \tau ^{i,j}_W := \lim _{t\rightarrow \infty }{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{W,t}^{(i)}-{\mathbb {E}}F_{W,t}^{(i)}\over t^{d-i-1/2}},{F_{W,t}^{(j)}-{\mathbb {E}}F_{W,t}^{(j)}\over t^{d-j-1/2}}\Bigg ),\qquad T_W:=\left( \tau ^{i,j}_W\right) _{i,j=0}^{d-1}. \end{aligned}$$

The existence of the limit and the precise value of \(\tau ^{i,j}_W\) follows from (18) below. It is easy to see that \(T_W\) has rank one, as in Euclidean space.

In view of Theorem 5, for fixed intensity \(t>0\) and growing spherical windows \(W=B_r\) with \(r>0\), we put

$$\begin{aligned} \mathbf{F }_{r,t} := {\left\{ \begin{array}{ll} \Bigg ({F_{r,t}^{(0)}-{\mathbb {E}}F_{r,t}^{(0)}\over e^{r/2}},{F_{r,t}^{(1)}-{\mathbb {E}}F_{r,t}^{(1)}\over e^{r/2}}\Bigg ) &{}: d=2,\\ \Bigg ({F_{r,t}^{(0)}-{\mathbb {E}}F_{r,t}^{(0)}\over \sqrt{r}\,e^{r}},{F_{r,t}^{(1)}-{\mathbb {E}}F_{r,t}^{(1)}\over \sqrt{r}\,e^{r}},{F_{r,t}^{(2)}-{\mathbb {E}}F_{r,t}^{(2)}\over \sqrt{r}\,e^{r}}\Bigg ) &{}: d=3,\\ \Bigg ({F_{r,t}^{(0)}-{\mathbb {E}}F_{r,t}^{(0)}\over e^{r(d-2)}},\ldots ,{F_{r,t}^{(d-1)}-{\mathbb {E}}F_{r,t}^{(d-1)}\over e^{r(d-2)}}\Bigg ) &{}: d\ge 4, \end{array}\right. } \end{aligned}$$

and define the asymptotic covariance matrix \(\varSigma _d=\left( \sigma ^{i,j}_d\right) _{i,j=0}^{d-1}\) of the random vector \(\mathbf{F }_{r,t}\), as \(r\rightarrow \infty \), for \(d \ge 2\) by

$$\begin{aligned} \sigma ^{i,j}_d: = {\left\{ \begin{array}{ll} \lim \limits _{r\rightarrow \infty }{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r/2}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r/2}}\Bigg ) &{}: d=2,\\ \lim \limits _{r\rightarrow \infty }{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{r}\,e^{r}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over \sqrt{r}\,e^{r}}\Bigg ) &{}: d=3,\\ \lim \limits _{r\rightarrow \infty }{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r(d-2)}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r(d-2)}}\Bigg ) &{}: d\ge 4. \end{array}\right. } \end{aligned}$$

The covariance matrices \(\varSigma _d\) are explicitly given by (20) for \(d=2\), (28) for \(d=3\) and (29) for \(d \ge 4\) below. Moreover, in Sect. 4.5 we determine convergence rates. In particular, we will show that \(\varSigma _2\) has full rank (is positive definite) and \(\varSigma _d\) has rank one for \(d \ge 3\). We remark that this is in sharp contrast to the corresponding result in Euclidean spaces, where the asymptotic covariance matrix has rank one for all \(d\ge 2\), see [21, Theorem 5.1 (ii)]. Note that the dependence of these limits on the fixed intensity \(t>0\) is not made explicit by our notation, but this dependence is shown in Lemmas 20, 21 and 23.

In order to state the multivariate central limit theorem, we use the \(d_2\) and the \(d_3\) distance for random vectors (see Sect. 3.3 for explicit definitions).

Theorem 7

(Multivariate CLT)

  (a)

    Let \(d\ge 2\) and \(W\in {\mathcal {K}}_h^d\). Let \(N_{T_W}\) be a d-dimensional centred Gaussian random vector with covariance matrix \(T_W\). Then there exists a constant \(c\in (0,\infty )\) such that

    $$\begin{aligned} d_3(\mathbf{F }_{W,t},N_{T_W}) \le c\,t^{-1/2} \end{aligned}$$

    for all \(t\ge 1\).

  (b)

    Fix \(t\ge 1\) and let \(d=2\). Let \(N_{\varSigma _2}\) be a 2-dimensional centred Gaussian random vector with covariance matrix \(\varSigma _2\). Then there exists a constant \(c_2\in (0,\infty )\) such that

    $$\begin{aligned} d_j(\mathbf{F }_{r,t},N_{\varSigma _2}) \le c_2\,r\,e^{-r/2} \end{aligned}$$

    for all \(r\ge 1\) and \(j\in \{2,3\}\).

  (c)

    Fix \(t\ge 1\) and let \(d=3\). Let \(N_{\varSigma _3}\) be a 3-dimensional centred Gaussian random vector with covariance matrix \(\varSigma _3\). Then there exists a constant \(c_3\in (0,\infty )\) such that

    $$\begin{aligned} d_3(\mathbf{F }_{r,t},N_{\varSigma _3}) \le c_3\,r^{-1/2} \end{aligned}$$

    for all \(r\ge 1\).

Remark 6

After having seen that in the univariate case the central limit theorem for \(d\ge 4\) can be preserved by a simultaneous growth of the intensity t and the radius r, the question arises whether such a phenomenon also holds in the multivariate set-up. This is in fact the case, but we decided not to present the details for brevity.

3 Background material and preparations

3.1 More hyperbolic geometry

Recall that by \({\mathbb {H}}^{d}\) we denote the hyperbolic space of dimension d. For concreteness we may take as a specific model for \({\mathbb {H}}^{d}\) the open d-dimensional Euclidean unit ball \(B_\mathrm{euc}^{d,\circ }\) together with the Poincaré metric \(d_h\) given by

$$\begin{aligned} \cosh d_h(x,y) := 1+{2\Vert x-y\Vert _{\mathrm{euc}}^2\over (1-\Vert x\Vert _\mathrm{euc}^2)(1-\Vert y\Vert _{\mathrm{euc}}^2)},\qquad x,y\in B_{\mathrm{euc}}^{d,\circ }, \end{aligned}$$

where \(\Vert \,\cdot \,\Vert _{\mathrm{euc}}\) stands for the usual Euclidean norm. This is known as the conformal ball model for \({\mathbb {H}}^{d}\), see [56, Chapter 4.5]. However, it should be emphasized that our arguments are independent of the special choice of a model for a simply connected, geodesically complete space of constant negative curvature \(\kappa =-1\). We write \(B(z,r)=\{x\in {\mathbb {H}}^{d}:d_h(x,z)\le r\}\) for the hyperbolic ball with centre \(z\in {\mathbb {H}}^{d}\) and radius \(r\ge 0\) and put \(B_r=B(p,r)\), where p is a fixed reference point. In this paper the s-dimensional Hausdorff measure \({\mathcal {H}}^s\), \(s\ge 0\), is understood with respect to the metric space \(({\mathbb {H}}^{d},d_h)\).

For later reference we need a formula for the surface area of a hyperbolic ball \(B(z,r)\). It is given by

$$\begin{aligned} {\mathcal {H}}^{d-1}(\partial B(z,r))=\omega _d \sinh ^{d-1}(r), \end{aligned}$$

where \(\omega _d=d \kappa _d ={2 \pi ^{d/2}}/{\varGamma (d/2)}\) is the surface area of a d-dimensional unit ball in the Euclidean space \({\mathbb {R}}^d\) and \(\kappa _d\) is its volume. Moreover, the volume of a hyperbolic ball of radius r is given by

$$\begin{aligned} {\mathcal {H}}^{d}(B(z,r))= \omega _d \int _{0}^{r} \sinh ^{d-1}(s) \ ds. \end{aligned}$$
(5)

We refer to Sections 3.3 and 3.4 and especially to formulas (3.25) and (3.26) in the monograph [13]. For the special case \(d=2\), we get \({\mathcal {H}}^2(B(z,r))= 2 \pi (\cosh (r)-1)\). Here, \(\cosh \) and \(\sinh \) are the hyperbolic cosine and sine, which are given by

$$\begin{aligned} \cosh (x)=\frac{e^x+e^{-x}}{2}\qquad \text {and}\qquad \sinh (x)=\frac{e^x-e^{-x}}{2},\qquad x\in {\mathbb {R}}, \end{aligned}$$

respectively. We will frequently make use of the fact that \(\cosh (x), \sinh (x) \in \varTheta (e^x)\), as \(x\rightarrow \infty \), where \(\varTheta (\,\cdot \,)\) stands for the usual Landau symbol. Additionally we will use the following inequalities.
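As a quick numerical cross-check of (5) and of the special case \(d=2\), the following Python sketch (with hypothetical helper names) evaluates the volume integral by a midpoint rule and compares it with the closed forms:

```python
import math

def hyp_ball_volume(d, r, steps=100000):
    # H^d(B(z, r)) via (5): omega_d * int_0^r sinh^(d-1)(s) ds, midpoint rule,
    # with omega_d = 2*pi^(d/2) / Gamma(d/2).
    omega_d = 2 * math.pi ** (d / 2) / math.gamma(d / 2)
    h = r / steps
    return omega_d * h * sum(math.sinh(h * (k + 0.5)) ** (d - 1) for k in range(steps))
```

For \(d=2\) this reproduces \(2\pi (\cosh (r)-1)\); for \(d=3\), integrating \(\sinh ^2\) in closed form gives \(\pi (\sinh (2r)-2r)\).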

Lemma 1

The function \(\sinh \) satisfies the inequalities

$$\begin{aligned}&\mathrm{(a)}\ \sinh (x)\ge e^{x-3} \quad \text {for } x \ge 0.1,&\mathrm{(b)} \ \sinh (x)\ge x \quad \text {for } x\ge 0. \end{aligned}$$

Proof

  (a)

    By the definition of the hyperbolic sine function, we get

    $$\begin{aligned} \frac{2 \sinh (x)}{e^{x-3}}=e^3-e^{-2x+3}=e^3(1-e^{-2x}) \ge 2 \quad \text {for }x\ge 0.1, \end{aligned}$$

    since \(\exp (2x)\ge (1-2\exp (-3))^{-1}\) for \(x\ge 0.1\).

  (b)

    This follows from the definition of \(\sinh \) by basic calculus.

\(\square \)
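Both inequalities of Lemma 1 are also easy to confirm numerically; the following Python sketch checks them on a grid:

```python
import math

def check_lemma1(xs):
    # Lemma 1: (a) sinh(x) >= e^(x-3) for x >= 0.1, (b) sinh(x) >= x for x >= 0.
    part_a = all(math.sinh(x) >= math.exp(x - 3) for x in xs if x >= 0.1)
    part_b = all(math.sinh(x) >= x for x in xs if x >= 0)
    return part_a and part_b
```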

Let \(I({\mathbb {H}}^{d})\) denote the isometry group of \({\mathbb {H}}^{d}\), and let \(I({\mathbb {H}}^{d},p)\) denote the subgroup of isometries which fix p. The isometry group acts transitively on \({\mathbb {H}}^{d}\), that is, \({\mathbb {H}}^{d}\) is a homogeneous space which additionally satisfies the axiom of free mobility (see [63, III.6 and IV.1] for a discussion from the viewpoint of Riemannian geometry). A detailed description of the isometry group \(I({\mathbb {H}}^{d})\) is provided in [56, Chapters 3–6]. In the hyperboloid model, isometries are obtained as restrictions of positive Lorentz transformations [56, Theorem 3.2.3], in the conformal ball model and in the upper half-space model, isometries are described as Möbius transformations (see [56, Theorems 4.5.2 and 4.6.2]), and in the projective disc model (also Klein–Beltrami model) the description is in terms of projective transformations (see [56, Theorems 6.1.2 and 6.1.3]). A classification of Möbius transformations is given in [56, Section 4.7], the structure of the isometry group as a topological group is analyzed in [56, Section 5.2]. For an alternative approach to hyperbolic space and its isometry group, see also [8]; a more elementary treatment in the special case of the hyperbolic plane is provided in the textbook [1].

We denote by \(G_h(d,k)\) the compact space of k-dimensional totally geodesic subspaces containing the origin p. In the conformal ball model, all elements of \(G_h(d,k)\) arise as follows. If p coincides with the centre o of \(B_{\mathrm{euc}}^{d,\circ }\), then an element of \(G_h(d,k)\) is the intersection of \(B_{\mathrm{euc}}^{d,\circ }\) with a k-dimensional Euclidean linear subspace of \({\mathbb {R}}^d\). If, on the other hand, \(p\ne o\), then an element of \(G_h(d,k)\) is the intersection of \(B_{\mathrm{euc}}^{d,\circ }\) with a k-dimensional Euclidean sphere in \({\mathbb {R}}^d\) through p which is orthogonal to the boundary of \(B_\mathrm{euc}^{d,\circ }\), cf. [56, Theorem 4.5.3]. Up to a scaling factor, \(G_h(d,k)\) carries a regular Borel measure \(\nu _k\) which is invariant under \(I({\mathbb {H}}^{d},p)\). Since \(G_h(d,k)\) is compact we can normalize \(\nu _k\) such that \(\nu _k(G_h(d,k))=1\). Recall that \(A_h(d,k)\) is the space of k-dimensional (hyperbolic) planes in \({\mathbb {H}}^{d}\). In the conformal ball model all elements of \(A_h(d,k)\) can be represented as intersections with \(B_\mathrm{euc}^{d,\circ }\) of either k-dimensional Euclidean linear subspaces of \({\mathbb {R}}^d\) or k-dimensional Euclidean spheres in \({\mathbb {R}}^d\) that are orthogonal to the boundary of \(B_{\mathrm{euc}}^{d,\circ }\). Sometimes it is more convenient to use the projective disc model of hyperbolic space, in which hyperbolic k-planes are precisely the non-empty intersections of the open Euclidean unit ball \(B_\mathrm{euc}^{d,\circ }\) with Euclidean k-planes of \({\mathbb {R}}^d\) (see [56, Theorem 6.1.4]). On \(A_h(d,k)\) there exists a unique (up to scaling) \(I({\mathbb {H}}^{d})\)-invariant measure. In contrast to \(G_h(d,k)\), the larger space \(A_h(d,k)\) is not compact. Each k-plane \(H \in A_h(d,k)\) is uniquely determined by its orthogonal subspace \(L_{d-k}\) passing through the origin p and the intersection point \(\{x\}=H \cap L_{d-k}\).
Using these facts, Santaló [61, Equation (17.41)] (see also [68, Proposition 2.1.6], [18, Equation (9)]) provides a useful representation of an isometry invariant measure on \(A_h(d,k)\), which we use here with a different normalization. For a Borel set \(B \subset A_h(d,k)\), it is given by

$$\begin{aligned} \mu _k(B)= \int _{G_h(d,d-k)} \int _{L}\cosh ^{k}(d_h(x,p)) \, \mathbb {1}\{H(L,x) \in B\} \ {\mathcal {H}}^{d-k}(dx) \ \nu _{d-k}(d L), \end{aligned}$$
(6)

where \(H(L,x)\) is the k-plane orthogonal to L passing through x.

Remark 7

The current normalization of the measure \(\mu _k\) differs from the normalization of the measure \(dL_k\) used in [61] by the constant \( {\omega _d \cdots \omega _{d-k+1}}/({\omega _k \cdots \omega _1})\). This also affects the constants in the formulas from hyperbolic integral geometry taken from [61]. The reason for the present normalization is to simplify a comparison of our results to corresponding results in Euclidean and spherical space.

According to [61, Equation (14.69)] the measure \(\mu _k\) satisfies the following Crofton-type formula. In fact, the discussion in [11, Section 7] allows us to state the result not only for sets bounded by smooth submanifolds (as in [61]), but for much more general sets, which include arbitrary convex sets as a very special case. The following lemma holds for \({\mathcal {H}}^{d+i-k}\)-measurable sets \(W\subset {\mathbb {H}}^{d}\) which are Hausdorff \((d+i-k)\)-rectifiable. Following [11, Definition 5.13], we say that a set \(W\subset {\mathbb {H}}^{d}\) is \(\ell \)-rectifiable if \(\ell \) is an integer with \(0< \ell \le d\) and W is the image of some bounded subset of \({\mathbb {R}}^\ell \) under a Lipschitz map from \({\mathbb {R}}^\ell \) to \({\mathbb {H}}^{d}\). A set \(W\subset {\mathbb {H}}^{d}\) is Hausdorff \(\ell \)-rectifiable provided that \({\mathcal {H}}^\ell (W)<\infty \) and if there exist \(\ell \)-rectifiable subsets \(B_1,B_2,\ldots \) of \({\mathbb {H}}^{d}\) such that \({\mathcal {H}}^\ell (W\setminus \bigcup _{i\ge 1}B_i)=0\). Clearly, any Borel set W which is contained in an \(\ell \)-dimensional plane is Hausdorff \(\ell \)-rectifiable if it satisfies \({\mathcal {H}}^\ell (W)<\infty \). Note that Hausdorff \(\ell \)-rectifiability is an extremely general concept which describes sets having dimension \(\ell \) and satisfying just some very mild regularity condition. In particular, finite unions of \(\ell \)-dimensional \(C^1\)-submanifolds are included as special cases (see [46, Chapter 3] for a gentle introduction to rectifiability). Moreover, by Federer’s structure theorem every set with finite \(\ell \)-dimensional Hausdorff measure can be decomposed into a Hausdorff \(\ell \)-rectifiable part and a purely unrectifiable part. The latter is “invisible from an integral-geometric point of view” (see Frank Morgan’s comment on the structure theorem) and considered to be rather exotic.

Lemma 2

Let \(0 \le i \le k \le d-1\), and let \(W\subset {\mathbb {H}}^{d}\) be a Borel set which is Hausdorff \((d+i-k)\)-rectifiable. Then

$$\begin{aligned} \int _{A_h(d,k)} {\mathcal {H}}^{i}(W \cap E) \ \mu _k(d E) = \frac{\omega _{d+1} \, \omega _{i+1}}{\omega _{k+1} \, \omega _{d-k+i+1}}\,{\mathcal {H}}^{d+i-k}(W). \end{aligned}$$
(7)

Remark 8

Strictly speaking the case \(k=i\) is not covered by [11]. Although the framework in [11] should extend to this marginal case, we prefer to provide an elementary direct argument for the case \(k=i\). In this case, the left side of (7) defines an isometry invariant Borel measure on \({\mathbb {H}}^{d}\). Therefore, in order to confirm (7) in this case, it is sufficient to show that the equality holds for \(W=B_r\), \(r\ge 0\). Since equality holds for \(r=0\) and in view of (5), it is sufficient to show that \(\omega _d\sinh ^{d-1}(r)\) is the derivative with respect to r of the function defined by

$$\begin{aligned} h(r):&= \int _{A_h(d,k)} {\mathcal {H}}^{k}(B_r \cap E) \ \mu _k(d E)\\&=\omega _k\omega _{d-k}\int _0^r\sinh ^{d-k-1}(t)\cosh ^k(t) \int _0^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (t)}\right) }\sinh ^{k-1}(s)\, ds\, dt, \end{aligned}$$

where we used (6) and (19) for the equality. The differential of h can be determined by basic rules of calculus. Using that \({{\,\mathrm{\mathrm{arcosh}}\,}}(\cosh (r)/\cosh (r))=0\), we thus obtain

$$\begin{aligned} h'(r)=\omega _k\omega _{d-k}\int _0^r\sinh ^{d-k-1}(t)\sinh (r)\big (\cosh ^2(r)-\cosh ^2(t)\big )^{\frac{k-2}{2}}\cosh (t)\, dt. \end{aligned}$$

The substitution \(\sinh (t)=\sinh (r)\cdot x\) leads to

$$\begin{aligned} h'(r)&= \omega _k\omega _{d-k}\int _0^1 x^{d-k-1}\big ({1-x^2}\big )^{\frac{k-2}{2}}\, dx \, \sinh ^{d-1}(r)=\omega _d\sinh ^{d-1}(r). \end{aligned}$$

The second equality is obtained by first transforming the integral into a Beta integral (by the substitution \(x^2=s\)) and by expressing the Beta function in terms of a ratio of Gamma function values.
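The final identity \(\omega _k\omega _{d-k}\int _0^1 x^{d-k-1}(1-x^2)^{(k-2)/2}\,dx=\omega _d\) can be verified numerically. The following Python sketch (with hypothetical helper names) checks both the Gamma-function evaluation and, for \(k\ge 2\) (where the integrand is bounded), a direct quadrature of the integral:

```python
import math

def omega(n):
    # Surface area of the unit sphere in R^n.
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

def via_gamma(d, k):
    # Substituting x^2 = s turns the integral into (1/2) * B((d-k)/2, k/2).
    return omega(k) * omega(d - k) * 0.5 * (
        math.gamma((d - k) / 2) * math.gamma(k / 2) / math.gamma(d / 2))

def via_quadrature(d, k, steps=100000):
    # Midpoint rule for omega_k * omega_{d-k} * int_0^1 x^(d-k-1) (1-x^2)^((k-2)/2) dx.
    h = 1.0 / steps
    s = sum((h * (j + 0.5)) ** (d - k - 1) * (1 - (h * (j + 0.5)) ** 2) ** ((k - 2) / 2)
            for j in range(steps))
    return omega(k) * omega(d - k) * h * s
```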

Remark 9

Although both sides of (7) define measures with respect to their dependence on a Borel set \(W\subset {\mathbb {H}}^{d}\), for \(k\ne i\) the equality in (7) in general does not extend from \((d+i-k)\)-rectifiable sets to general Borel sets. This is due to deep classical results in the structure theory of geometric measure theory, see [16, p. 2] or [46, Chapter 3] for an introduction and [16, Theorem 3.3.13] for the general treatment. In fact, in the Euclidean setting, for \(i=0\), \(k\in \{1,\ldots ,d-1\}\) and for a general Borel set \(W\subset {\mathbb {R}}^d\), the right side of (7) is always at least as large as the left side, with equality if and only if W is \((d-k)\)-rectifiable.

In what follows we use the convention that \(\text {dim}(\varnothing )=-1\). The counterpart of the following lemma in Euclidean space is well known and intuitively clear. In hyperbolic space the lemma states that for almost all (with respect to the product measure \(\mu _{d-1}^n\)) n-tuples of hyperplanes (with \(n\in \{1,\ldots ,d\}\)) the intersection of these hyperplanes is a \((d-n)\)-plane or the empty set. Note that the latter cannot occur in Euclidean space.

Lemma 3

Fix \(d\ge 2\) and let \(n\in \{1,\ldots ,d\}\). Then \(\mathrm{dim}(H_1 \cap \ldots \cap H_{n})\in \{-1,d-n\}\) holds for \(\mu _{d-1}^n\)-almost all \((H_1,\ldots ,H_n)\in A_h(d,d-1)^n\).

Proof

We apply induction over \(n\ge 1\). For \(n=1\) there is nothing to show. For \(n \in \{2,\ldots ,d\}\) we have \(\text {dim}(H_1 \cap \ldots \cap H_{n-1})\in \{-1,d-(n-1)\}\) for \(\mu _{d-1}^{n-1}\)-almost all \((H_1,\ldots ,H_{n-1})\in A_h(d,d-1)^{n-1}\) by the induction hypothesis. Let us introduce the abbreviation \(L_{d-k} \mathrel {\mathop :}=H_1 \cap \ldots \cap H_k\) for \(H_1,\ldots ,H_k\in A_h(d,d-1)\) and \(k\in \{1,\ldots ,d\}\). Using this notation, we have

$$\begin{aligned}&\mu _{d-1}^{n}(\{(H_1,\ldots ,H_{n}) \in A_{h}(d,d-1)^{n}: \ \text {dim}(H_1 \cap \ldots \cap H_{n}) \not \in \{-1,d-n\}\}) \\&\quad = \int _{A_{h}(d,d-1)^n} {\mathbf {1}} \{\text {dim}(L_{d-n})\not \in \{-1,d-n\} \} \ \mu _{d-1}^{n}(d(H_1,\ldots ,H_n)). \end{aligned}$$

Clearly,

$$\begin{aligned}&{\mathbf {1}} \{\text {dim}(L_{d-n})\not \in \{-1,d-n\} \}\nonumber \\&\quad \le {\mathbf {1}} \{\text {dim}(L_{d-n})\not \in \{-1,d-n\}, \ \text {dim}(L_{d-(n-1)})=d-(n-1) \} \nonumber \\&\qquad +{\mathbf {1}} \{ \text {dim}(L_{d-(n-1)}) \not \in \{-1,d-(n-1)\} \}. \end{aligned}$$
(8)

By the induction hypothesis and Fubini’s theorem we get

$$\begin{aligned} \int _{A_{h}(d,d-1)^n} {\mathbf {1}} \{\text {dim}(L_{d-(n-1)}) \not \in \{-1,d-(n-1)\} \} \ \mu _{d-1}^{n}(d(H_1,\ldots ,H_n)) = 0, \end{aligned}$$

which covers the case of the second indicator function on the right-hand side of (8). To deal with the contribution from the first indicator function on the right-hand side of (8), we write \(c(H_1,\ldots ,H_{n-1})\) for an arbitrary point chosen on \(H_1 \cap \ldots \cap H_{n-1}\) (in a measurable way). Then, again by Fubini’s theorem,

$$\begin{aligned}&\mu _{d-1}^{n}(\{(H_1,\ldots ,H_{n}) \in A_{h}(d,d-1)^n: \ \text {dim}(L_{d-n}) \notin \{-1,d-n \},\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad \quad \qquad \quad \text {dim}(L_{d-(n-1)})=d-(n-1)\}) \\&\qquad \le \int _{A_{h}(d,d-1)^{n-1}} \int _{A_{h}(d,d-1)} {\mathbf {1}} \{H_1 \cap \ldots \cap H_{n-1} \subseteq H_n, \ H_1 \cap \ldots \cap H_{n-1} \ne \emptyset \} \\&\qquad \qquad \times \,\mu _{d-1}(d H_n) \ \mu _{d-1}^{n-1}(d(H_1,\ldots ,H_{n-1})) \\&\qquad \le \int _{A_{h}(d,d-1)^{n-1}} \int _{A_{h}(d,d-1)}{\mathbf {1}} \{c(H_1, \ldots , H_{n-1}) \in H_n \}\\&\qquad \qquad \times \mu _{d-1}(d H_n) \ \mu _{d-1}^{n-1}(d(H_1,\ldots ,H_{n-1})) \\&\qquad = \int _{A_{h}(d,d-1)^{n-1}} 0 \ \mu _{d-1}^{n-1}(d(H_1,\ldots ,H_{n-1})) = 0. \end{aligned}$$

This completes the proof. \(\square \)

We will frequently make use of the following transformation formula. The corresponding fact is well known in Euclidean integral geometry. The subsequent argument follows the same strategy, but is based on the Crofton formula in hyperbolic space and Lemma 3. In the following, we write \(A_{h}(d,d-1)^{d-k}_*\) to denote the set of all \((H_1 ,\ldots , H_{d-k})\in A_{h}(d,d-1)^{d-k}\) which have non-empty intersection.

Lemma 4

Let \(k \in \{0,\ldots ,d-1\}\), and let \(f:A_h(d,k)\rightarrow {\mathbb {R}}\) be a non-negative measurable function. Then

$$\begin{aligned}&\int _{A_{h}(d,d-1)^{d-k}_*} f(H_1 \cap \ldots \cap H_{d-k}) \,\mu _{d-1}^{d-k}(d(H_1,\ldots ,H_{d-k}))\nonumber \\&\quad =c(d,k) \int _{A_{h}(d,k)} f(E) \ \mu _{k}(d E) \end{aligned}$$
(9)

with

$$\begin{aligned} c(d,k)=\frac{\omega _{k+1}}{\omega _{d+1}}\left( \frac{\omega _{d+1}}{\omega _d}\right) ^{d-k}. \end{aligned}$$

Proof

Note that by Lemma 3, for \(\mu _{d-1}^{d-k}\)-almost all \((H_1 ,\ldots , H_{d-k})\in A_{h}(d,d-1)^{d-k}_*\) we have \(H_1 \cap \ldots \cap H_{d-k}\in A_h(d,k)\), hence the integral on the left side of (9) is well defined.

Let B be a (Borel) measurable subset of \(A_{h}(d,k)\). Then we define a locally finite Borel measure on \(A_{h}(d,k)\) by

$$\begin{aligned} {\overline{\mu }}_k(B):=\mu _{d-1}^{d-k}(\{(H_1,\ldots ,H_{d-k}) \in A_{h}(d,d-1)^{{d-k}}_*: H_1 \cap \ldots \cap H_{d-k} \in B\}). \end{aligned}$$

The isometry invariance of \(\mu _{d-1}\) implies that \({\overline{\mu }}_k\) is isometry invariant and hence a multiple of \(\mu _k\). To determine the constant, let \(W\in {\mathcal {K}}_h^d\) be a fixed convex body. Then, by a \((d-k)\)-fold application of the Crofton formula (7) with the choice \(k=d-1\) and (successively) \(i=k,k+1,\ldots ,d-1\) there, we get

$$\begin{aligned}&\int _{A_h(d,k)} {\mathcal {H}}^{k}( E\cap W) \, {\overline{\mu }}_k(d E)\\&\quad = \int _{A_{h}(d,d-1)^{d-k}_*} {\mathcal {H}}^{k}(H_1 \cap \ldots \cap H_{d-k} \cap W) \ \mu _{d-1}^{d-k}(d(H_1,\ldots ,H_{d-k})) \\&\quad = \left( \prod _{i=k}^{d-1} \frac{\omega _{d+1}}{\omega _d} \frac{\omega _{i+1}}{\omega _{i+2}} \right) {\mathcal {H}}^d(W) = \frac{\omega _{k+1}}{\omega _{d+1}}\left( \frac{\omega _{d+1}}{\omega _d}\right) ^{d-k} {\mathcal {H}}^d(W). \end{aligned}$$

On the other hand, applying directly the Crofton formula with \(i=k\), we get

$$\begin{aligned} \int _{A_h(d,k)} {\mathcal {H}}^{k}(E\cap W) \ \mu _k(d E) = {\mathcal {H}}^{d}(W). \end{aligned}$$

A comparison shows that \({\overline{\mu }}_k=c(d,k)\mu _k\) which proves the assertion of the lemma. \(\square \)
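The telescoping evaluation of the constant in the proof can also be confirmed numerically. The following Python sketch (with hypothetical helper names) compares the \((d-k)\)-fold product of Crofton constants with the closed form \(c(d,k)\):

```python
import math

def omega(n):
    # Surface area of the unit sphere in R^n.
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

def c_product(d, k):
    # (d-k)-fold product prod_{i=k}^{d-1} (omega_{d+1}/omega_d) * (omega_{i+1}/omega_{i+2}).
    p = 1.0
    for i in range(k, d):
        p *= (omega(d + 1) / omega(d)) * (omega(i + 1) / omega(i + 2))
    return p

def c_closed(d, k):
    # Closed form c(d,k) = (omega_{k+1}/omega_{d+1}) * (omega_{d+1}/omega_d)^(d-k).
    return (omega(k + 1) / omega(d + 1)) * (omega(d + 1) / omega(d)) ** (d - k)
```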

3.2 Poisson U-statistics

Let \(({\mathbb {X}},{\mathcal {X}})\) be a measurable space, which is endowed with a \(\sigma \)-finite measure \(\mu \). Let \(\eta \) be a Poisson process on \({\mathbb {X}}\) with intensity measure \(\mu \) (we refer to [35] for a formal construction). Further, fix \(m\in {\mathbb {N}}\) and let \(h:{\mathbb {X}}^m\rightarrow {\mathbb {R}}\) be a non-negative, measurable and symmetric function, which is integrable with respect to \(\mu ^m\), the m-fold product measure of \(\mu \). By a Poisson U-statistic (of order m and with kernel h) we understand a random variable of the form

$$\begin{aligned} {\mathscr {U}} = \sum _{(x_1,\ldots ,x_m)\in \eta _{\ne }^m}h(x_1,\ldots ,x_m), \end{aligned}$$

where \(\eta _{\ne }^m\) is the collection of all m-tuples of distinct points of \(\eta \), see [35]. Functionals of this type have received considerable attention in the literature, especially in connection with applications in stochastic geometry, see, for example, [15, 27, 33, 34, 36, 51, 58, 66, 67]. We will frequently use the following consequence of the multivariate Mecke equation for Poisson functionals [35, Theorem 4.4]. Namely, the expectation \({\mathbb {E}}{\mathscr {U}}\) of the Poisson U-statistic \({\mathscr {U}}\) is given by

$$\begin{aligned} {\mathbb {E}}{\mathscr {U}} = {\mathbb {E}}\sum _{(x_1,\ldots ,x_m)\in \eta _{\ne }^m}h(x_1,\ldots ,x_m)= \int _{{\mathbb {X}}^m}h(x_1,\ldots ,x_m)\,\mu ^m(d(x_1,\ldots ,x_m)). \end{aligned}$$
(10)
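The Mecke formula (10) can be illustrated by a toy example that is independent of the geometric setting of this paper: take \({\mathbb {X}}=[0,1]\), \(\mu \) equal to t times Lebesgue measure, \(m=2\) and the constant kernel \(h\equiv 1\). Then \({\mathscr {U}}=N(N-1)\) with \(N\sim \mathrm {Poisson}(t)\), and (10) predicts \({\mathbb {E}}{\mathscr {U}}=t^2\). The following Python sketch (all function names are ours) checks this numerically via the truncated Poisson distribution.

```python
import math

def poisson_expect(f, lam, n_max=200):
    # E[f(N)] for N ~ Poisson(lam), via the truncated pmf;
    # the pmf is computed iteratively to avoid huge factorials
    p = math.exp(-lam)          # P(N = 0)
    total = f(0) * p
    for n in range(1, n_max):
        p *= lam / n
        total += f(n) * p
    return total

t = 3.0
# the Mecke formula (10) predicts E[N(N-1)] = t^2
print(poisson_expect(lambda n: n * (n - 1), t), t ** 2)
```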

In the present paper we need a formula for the centred moments of the Poisson U-statistics \({\mathscr {U}}\) as well as a bound for the Wasserstein and the Kolmogorov distance of a normalized version of \({\mathscr {U}}\) and a standard Gaussian random variable. To state such results, we need some more notation. Following [35, Chapter 12], for an integer \(n\in {\mathbb {N}}\) we let \(\varPi _n\) and \(\varPi _n^*\) be the set of partitions and sub-partitions of \([n]:=\{1,\ldots ,n\}\), respectively. We recall that by a sub-partition of \(\{1,\ldots ,n\}\) we understand a family of non-empty disjoint subsets (called blocks) of \(\{1,\ldots ,n\}\) and that a sub-partition \(\sigma \) is called a partition if \(\bigcup _{J\in \sigma }J=\{1,\ldots ,n\}\). For \(\sigma \in \varPi _n^*\) we let \(|\sigma |\) be the number of blocks of \(\sigma \) and \(\Vert \sigma \Vert =\big |\bigcup _{J\in \sigma }J\big |\) be the number of elements of \(\bigcup _{J\in \sigma }J\). In particular, a partition \(\sigma \in \varPi _n\) satisfies \(\Vert \sigma \Vert =n\). For \(\ell \in {\mathbb {N}}\) and \(n_1,\ldots ,n_\ell \in {\mathbb {N}}\), let \(n:=n_1+\ldots +n_\ell \) and define

$$\begin{aligned} J_i:=\{j\in {\mathbb {N}}:n_1+\ldots +n_{i-1}<j\le n_1+\ldots +n_i\},\qquad i\in \{1,\ldots ,\ell \}, \end{aligned}$$

and \(\pi :=\{J_i:i\in \{1,\ldots ,\ell \}\}\). Next, we introduce two classes of sub-partitions of [n] by

$$\begin{aligned} \varPi ^*(n_1,\ldots ,n_\ell )&:= \{\sigma \in \varPi _n^*:|J\cap J'|\le 1\text { for all }J\in \sigma \text { and }J'\in \pi \},\\ \varPi _{\ge 2}^*(n_1,\ldots ,n_\ell )&:= \{\sigma \in \varPi ^*(n_1,\ldots ,n_\ell ):|J|\ge 2\text { for all }J\in \sigma \}. \end{aligned}$$

In the same way the two classes of partitions \(\varPi (n_1,\ldots ,n_\ell )\) and \(\varPi _{\ge 2}(n_1,\ldots ,n_\ell )\) of [n] are defined (just by omitting the upper index \(^*\) in the above definition). From now on we assume that \(n_1=\ldots =n_\ell =m\in {\mathbb {N}}\) and define, for \(\sigma \in \varPi ^*(m,\ldots ,m)\) (where here and below m appears \(\ell \) times),

$$\begin{aligned}{}[\sigma ] := \{i\in [\ell ]:\text {there exists a block }J\in \sigma \text { such that }J\cap \{m(i-1)+1,\ldots ,mi\}\ne \emptyset \} \end{aligned}$$

as well as

$$\begin{aligned} \varPi _{\ge 2}^{**}(m,\ldots ,m) := \{\sigma \in \varPi _{\ge 2}^*(m,\ldots ,m):[\sigma ]=[\ell ]\}. \end{aligned}$$

The sub-partitions \(\sigma \in \varPi _{\ge 2}^{**}(m,\ldots ,m)\) of \([m\ell ]\) are easy to visualize as diagrams (cf. [52, Chapter 4]). In such a diagram the \(m\ell \) elements of \([m\ell ]\) are arranged in an array of \(\ell \) rows and m columns, where \(1,\ldots ,m\) form the first row, \(m+1,\ldots ,2m\) the second etc. The blocks of \(\sigma \) are indicated by closed curves, where the elements enclosed by a curve are meant to belong to the same block. Then the condition that \(\sigma \in \varPi _{\ge 2}^{**}(m,\ldots ,m)\) can be expressed by the following three requirements:

  1. (i)

    all blocks of \(\sigma \) have at least two elements,

  2. (ii)

    each block of \(\sigma \) contains at most one element from each row,

  3. (iii)

    in each row there is at least one element that belongs to some block of \(\sigma \).

For an example and a counterexample we refer to Fig. 3.

Fig. 3

Left panel: Sub-partition from \(\varPi _{\ge 2}^{**}(4,4,4)\). Right panel: Example of a sub-partition not belonging to \(\varPi _{\ge 2}^{**}(4,4,4)\). In fact, the block indicated by the dashed curve contradicts condition (i), the block indicated by the dotted curve contradicts condition (ii), and, since no element from the last row is contained in any block, condition (iii) is violated as well
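The defining conditions (i)–(iii) can be checked by brute force for small parameters. The following Python sketch (our own illustration, not part of the argument) enumerates \(\varPi _{\ge 2}^{**}(m,\ldots ,m)\) directly from the three requirements; for instance, \(\varPi _{\ge 2}^{**}(2,2)\) consists of exactly six sub-partitions of [4], while \(\varPi _{\ge 2}^{**}(1,1)\) contains only \(\{\{1,2\}\}\).

```python
from itertools import combinations

def sub_partitions(m, ell):
    """All sigma in Pi**_{>=2}(m,...,m) (ell rows of length m),
    encoded as lists of frozensets."""
    row = lambda x: (x - 1) // m          # row index of element x
    elems = range(1, m * ell + 1)
    # conditions (i) and (ii): >= 2 elements, <= 1 element per row
    cands = [frozenset(b)
             for size in range(2, ell + 1)
             for b in combinations(elems, size)
             if len({row(x) for x in b}) == size]
    result = []

    def extend(start, chosen, used):
        # condition (iii): every row meets some block
        if chosen and len({row(x) for b in chosen for x in b}) == ell:
            result.append(chosen)
        for i in range(start, len(cands)):
            if not (cands[i] & used):                 # blocks are disjoint
                extend(i + 1, chosen + [cands[i]], used | cands[i])

    extend(0, [], frozenset())
    return result

print(len(sub_partitions(2, 2)), len(sub_partitions(1, 2)))  # 6 and 1
```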

For two functions \(g_1:{\mathbb {X}}^{\ell _1}\rightarrow {\mathbb {R}}\) and \(g_2:{\mathbb {X}}^{\ell _2}\rightarrow {\mathbb {R}}\) with \(\ell _1,\ell _2\in {\mathbb {N}}\), we denote by \(g_1\otimes g_2:{\mathbb {X}}^{\ell _1+\ell _2}\rightarrow {\mathbb {R}}\) their usual tensor product. We are now in the position to rephrase the following formula for the centred moments of the Poisson U-statistic \({\mathscr {U}}\) (see [35, Proposition 12.13]):

$$\begin{aligned} {\mathbb {E}}[({\mathscr {U}}-{\mathbb {E}}{\mathscr {U}})^\ell ] = \sum _{\sigma \in \varPi _{\ge 2}^{**}(m,\ldots ,m)}\int _{{\mathbb {X}}^{m\ell +|\sigma |-\Vert \sigma \Vert }}(h^{\otimes \ell })_\sigma \,d\mu ^{m\ell +|\sigma |-\Vert \sigma \Vert }, \end{aligned}$$
(11)

where \(h^{\otimes \ell }\) is the \(\ell \)-fold tensor product of h with itself and \((h^{\otimes \ell })_\sigma :{\mathbb {X}}^{m\ell +|\sigma |-\Vert \sigma \Vert }\rightarrow {\mathbb {R}}\) stands for the function that arises from \(h^{\otimes \ell }\) by replacing all variables that are in the same block of \(\sigma \) by a new, common variable. Here, we implicitly assume that the function h is such that all integrals that appear on the right-hand side are well-defined. This formula will turn out to be a crucial tool in the proof of Theorem 5(c).

3.3 Normal approximation bounds

In this section, we continue to use the notation and the set-up of the preceding section. But since we turn to normal approximation bounds for Poisson U-statistics, some further notation is required. For \(u,v\in \{1,\ldots ,m\}\) we let \(\varPi _{\ge 2}^{\mathrm{con}}(u,u,v,v)\) be the class of partitions in \(\varPi _{\ge 2}(u,u,v,v)\) whose diagram is connected, which means that the rows of the diagram cannot be divided into two subsets, each defining a separate diagram (cf. [52, page 47]). More formally, there are no sets \(A,B\subset [4]\) with \(A\cup B=[4]\), \(A\cap B=\emptyset \) and such that each block either consists of elements from rows in A or of elements from rows in B, see Fig. 4 for an example and a counterexample. We can now introduce the quantities

$$\begin{aligned} M_{u,v}(h) := \sum _{\sigma \in \varPi _{\ge 2}^{\mathrm{con}}(u,u,v,v)} \int _{{\mathbb {X}}^{|\sigma |}}(h_u\otimes h_u\otimes h_v\otimes h_v)_\sigma \,\ d\mu ^{|\sigma |}, \end{aligned}$$
(12)

where

$$\begin{aligned} h_u(x_1,\ldots ,x_u) = {m\atopwithdelims ()u}\int _{{\mathbb {X}}^{m-u}}h(x_1,\ldots ,x_u,{\tilde{x}}_1,\ldots ,{\tilde{x}}_{m-u})\,\mu ^{m-u}(d({\tilde{x}}_1,\ldots ,{\tilde{x}}_{m-u})) \end{aligned}$$
(13)

for \(u\in \{1,\ldots ,m\}\) (again, we implicitly assume that h is such that the integrals appearing in (12) are well-defined). To measure the distance between two real-valued random variables \(X\) and \(Y\) (or, more precisely, their laws), the Kolmogorov distance

$$\begin{aligned} d_K(X,Y) \mathrel {\mathop :}=\sup _{s \in {\mathbb {R}}} |{\mathbb {P}}(X \le s)- {\mathbb {P}}(Y \le s)| \end{aligned}$$

and the Wasserstein distance

$$\begin{aligned} d_W(X,Y) \mathrel {\mathop :}=\sup _{\varphi \in \text {Lip}(1)} | {\mathbb {E}}\varphi (X)-{\mathbb {E}}\varphi (Y)| \end{aligned}$$

are used, where \(\text {Lip}(1)\) denotes the space of Lipschitz functions \(\varphi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) with a Lipschitz constant less than or equal to one. It is well known that convergence with respect to the Kolmogorov or the Wasserstein distance implies convergence in distribution. We are now in the position to rephrase a quantitative central limit theorem for Poisson U-statistics. Namely, [58, Theorem 4.7] and [67, Theorem 4.2] state that there exists a constant \(c_m\in (0,\infty )\), depending only on m (the order of the Poisson U-statistic), such that

$$\begin{aligned} d\left( \frac{{\mathscr {U}}-{\mathbb {E}}{\mathscr {U}}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}({\mathscr {U}})}},N \right) \le c_m \sum _{u,v=1}^{m} \frac{\sqrt{M_{u,v}(h)}}{{\mathbb {V}}{\mathrm{ar}}({\mathscr {U}})}, \end{aligned}$$
(14)

where \(d(\,\cdot \,,\,\cdot \,)\) stands for either the Wasserstein or the Kolmogorov distance. Here, one can choose \(c_m=2 m^{7/2}\) for the Wasserstein distance and \(c_m=19 m^{5}\) for the Kolmogorov distance.
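For \(m=1\) and \(h\equiv 1\) on a space of total measure t, the Poisson U-statistic is simply a Poisson random variable with mean t, so the left-hand side of (14) can be evaluated numerically. The sketch below (helper names are ours) computes the Kolmogorov distance of the standardized Poisson distribution to the standard Gaussian, exploiting that for a discrete random variable the supremum is attained at the atoms; the observed decay in t is consistent with the \(t^{-1/2}\)-type rate that (14) yields in this toy case.

```python
import math

def std_normal_cdf(s):
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

def d_K_poisson(lam):
    """d_K between (N - lam)/sqrt(lam), N ~ Poisson(lam), and N(0,1).
    We compare the cdf just before and just at each atom."""
    n_max = int(lam + 12.0 * math.sqrt(lam)) + 10
    p = math.exp(-lam)          # P(N = 0)
    cdf, d = 0.0, 0.0
    for n in range(n_max):
        if n > 0:
            p *= lam / n        # Poisson pmf, computed iteratively
        z = (n - lam) / math.sqrt(lam)
        d = max(d, abs(cdf - std_normal_cdf(z)))   # left limit at the atom
        cdf += p
        d = max(d, abs(cdf - std_normal_cdf(z)))   # value at the atom
    return d

# the distance decreases as the intensity grows
print(d_K_poisson(4.0), d_K_poisson(64.0))
```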

Fig. 4

Left panel: Partition from \(\varPi _{\ge 2}^{\mathrm{con}}(2,2,3,3)\). Right panel: Example of a partition not belonging to \(\varPi _{\ge 2}^{\mathrm{con}}(2,2,3,3)\). In fact, the diagram is not connected as indicated by the dashed line

Finally, we turn to a multivariate normal approximation for Poisson U-statistics. For integers \(p\in {\mathbb {N}}\) and \(m_1,\ldots ,m_p\in {\mathbb {N}}\), and for each \(i\in \{1,\ldots ,p\}\), let

$$\begin{aligned} {\mathscr {U}}_i= \sum _{(x_1,\ldots ,x_{m_i})\in \eta _{\ne }^{m_i}}h^{(i)}(x_1,\ldots ,x_{m_i}) \end{aligned}$$

be a Poisson U-statistic of order \(m_i\) based on a kernel function \(h^{(i)}:{\mathbb {X}}^{m_i}\rightarrow {\mathbb {R}}\) satisfying the same assumptions as above. We form the p-dimensional random vector \(\mathbf{U }:=({\mathscr {U}}_1,\ldots ,{\mathscr {U}}_p)\) and our goal is to compare \(\mathbf{U }\) with a p-dimensional Gaussian random vector \(\mathbf{N }\). To do this, we use the so-called \(d_2\)- and \(d_3\)-distance, which are defined as

$$\begin{aligned} d_2(\mathbf{U },\mathbf{N })&:= \sup _{\varphi \in C^2}\big |{\mathbb {E}}\varphi (\mathbf{U })-{\mathbb {E}}\varphi (\mathbf{N })\big |,\\ d_3(\mathbf{U },\mathbf{N })&:= \sup _{\varphi \in C^3}\big |{\mathbb {E}}\varphi (\mathbf{U })-{\mathbb {E}}\varphi (\mathbf{N })\big |, \end{aligned}$$

respectively. Here, \(C^2\) is the space of functions \(\varphi :{\mathbb {R}}^p\rightarrow {\mathbb {R}}\) which are twice partially continuously differentiable and satisfy

$$\begin{aligned} \sup _{x\ne y}{|\varphi (x)-\varphi (y)|\over \Vert x-y\Vert }\le 1\qquad \text {and}\qquad \sup _{x\ne y}{\Vert \nabla \varphi (x)-\nabla \varphi (y)\Vert _{\mathrm{op}}\over \Vert x-y\Vert }\le 1, \end{aligned}$$

where \(\Vert \,\cdot \,\Vert \) denotes the Euclidean norm in \({\mathbb {R}}^p\) and \(\Vert \,\cdot \,\Vert _{\mathrm{op}}\) stands for the operator norm. Moreover, \(C^3\) is the space of functions \(\varphi :{\mathbb {R}}^p\rightarrow {\mathbb {R}}\) which are thrice partially continuously differentiable and satisfy

$$\begin{aligned} \max _{1\le i\le j\le p}\sup _{x\in {\mathbb {R}}^p}\Big |{\partial ^2 \varphi (x)\over \partial x_i\partial x_j}\Big | \le 1\qquad \text {and}\qquad \max _{1\le i\le j\le k\le p}\sup _{x\in {\mathbb {R}}^p}\Big |{\partial ^3 \varphi (x)\over \partial x_i\partial x_j\partial x_k}\Big | \le 1. \end{aligned}$$

Moreover, similarly to the quantities \(M_{u,v}(h)\) introduced in (12), for \(i,j\in \{1,\ldots ,p\}\), \(u\in \{1,\ldots ,m_i\}\) and \(v\in \{1,\ldots ,m_j\}\) we define

$$\begin{aligned} M_{u,v}(h^{(i)},h^{(j)}) := \sum _{\pi \in \varPi _{\ge 2}^{\mathrm{con}}(u,u,v,v)}\int _{{\mathbb {X}}^{|\pi |}}(h_u^{(i)}\otimes h_u^{(i)}\otimes h_v^{(j)}\otimes h_v^{(j)})_\pi \,d\mu ^{|\pi |}, \end{aligned}$$

where \(h_u^{(i)}\) and \(h_v^{(j)}\) are given by (13). This allows us to state the following multivariate normal approximation bound from [66, Theorem 6.3] (see also [59, Equation (5.1)]). Namely, if \(\mathbf{N }\) is a centred Gaussian random vector with covariance matrix \(\varSigma =(\sigma _{i,j})_{i,j=1}^p\), then

$$\begin{aligned} d_3(\mathbf{U }-{\mathbb {E}}\mathbf{U },\mathbf{N })\le & {} \frac{1}{2} \sum _{i,j=1}^{p} |\sigma _{i,j}- {\mathbb {C}}{\mathrm{ov}}({\mathscr {U}}_i,{\mathscr {U}}_j)| \nonumber \\&+\frac{p}{2} \left( \sum _{n=1}^{p} \sqrt{{\mathbb {V}}{\mathrm{ar}}({\mathscr {U}}_n)}+1\right) \sum _{i,j=1}^{p} \sum _{u=1}^{m_i} \sum _{v=1}^{m_j} m_i^{7/2}\nonumber \\&\times \sqrt{M_{u,v}(h^{(i)},h^{(j)})}. \end{aligned}$$
(15)

If the covariance matrix \(\varSigma \) is positive definite then also

$$\begin{aligned} d_2(\mathbf{U }-{\mathbb {E}}\mathbf{U },\mathbf{N }) \le&\Vert \varSigma ^{-1}\Vert _{\mathrm{op}}\Vert \varSigma \Vert _{\mathrm{op}}^{1/2}\sum _{i,j=1}^{p} |\sigma _{i,j}- {\mathbb {C}}{\mathrm{ov}}({\mathscr {U}}_i,{\mathscr {U}}_j)| \nonumber \\&+\frac{p\sqrt{2\pi }}{4}\Vert \varSigma ^{-1}\Vert _\mathrm{op}^{3/2}\Vert \varSigma \Vert _{\mathrm{op}}\left( \sum _{i=1}^{p} \sqrt{{\mathbb {V}}{\mathrm{ar}}({\mathscr {U}}_i)}+1\right) \nonumber \\&\times \sum _{i,j=1}^{p} \sum _{u=1}^{m_i} \sum _{v=1}^{m_j} m_i^{7/2}\sqrt{M_{u,v}(h^{(i)},h^{(j)})}, \end{aligned}$$
(16)

where again \(\Vert \,\cdot \,\Vert _{\mathrm{op}}\) stands for the operator norm. We remark that although the bound for \(d_2(\mathbf{U }-{\mathbb {E}}\mathbf{U },\mathbf{N })\) is not explicitly stated in the literature, it directly follows from [53, Theorem 3.3] together with the computations in [66, Chapters 5 and 6] for the \(d_3\)-distance.

4 Proofs I: Expectations and variances

4.1 Representation as a Poisson U-statistic

We recall that \(\eta _t\), for \(t>0\), is a Poisson hyperplane process in \({\mathbb {H}}^{d}\) with intensity measure \(t\mu _{d-1}\). Moreover, for a Borel set \(W\subset {\mathbb {H}}^{d}\) and \(i\in \{0,1,\ldots ,d-1\}\) we recall from (1) the definition of the functional \(F_{W,t}^{(i)}\). To shorten our notation we put

$$\begin{aligned}&f^{(i)}(H_1,\ldots ,H_{d-i}) \nonumber \\&\quad \mathrel {\mathop :}={\left\{ \begin{array}{ll} \frac{1}{(d-i)!} {\mathcal {H}}^{i}(H_1 \cap \ldots \cap H_{d-i} \cap W) &{}: \text {dim}(H_1 \cap \ldots \cap H_{d-i})=i, \\ 0 &{}: \text{ otherwise }, \end{array}\right. } \end{aligned}$$

which allows us to rewrite \(F_{W,t}^{(i)}\) as

$$\begin{aligned} F_{W,t}^{(i)}= \sum _{(H_1,\ldots ,H_{d-i}) \in \eta _{t,\ne }^{d-i}} f^{(i)}(H_1,\ldots ,H_{d-i}). \end{aligned}$$

In other words, \(F_{W,t}^{(i)}\) is a Poisson U-statistic of order \(d-i\) and with kernel \(f^{(i)}\). It is well known (see [33,34,35,36, 58]) that Poisson U-statistics admit a Fock space representation having only a finite number of terms. This leads to the variance and covariance representations

$$\begin{aligned} {\mathbb {V}}{\mathrm{ar}}(F_{W,t}^{(i)})&= \sum _{n=1}^{d-i} t^{2(d-i)-n}n!\Vert f_n^{(i)}\Vert _n^{2}, \end{aligned}$$
(17)

where the functions \(f_n^{(i)}: A_h(d,d-1)^n \rightarrow [0, \infty )\) are given by

$$\begin{aligned} f_n^{(i)}(H_1,\dots ,H_n)&\mathrel {\mathop :}=\left( {\begin{array}{c}d-i\\ n\end{array}}\right) \int _{A_h(d,d-1)^{d-i-n}} f^{(i)}(H_1,\ldots ,H_n,{\tilde{H}}_1,\ldots ,{\tilde{H}}_{d-i-n}) \\&\quad \ \times \mu _{d-1}^{d-i-n}(d({\tilde{H}}_1,\ldots ,{\tilde{H}}_{d-i-n})), \end{aligned}$$

recall (13), and we write \(\Vert \, \cdot \,\Vert _n\) for the norm in the \(L^2\)-space \(L^2(\mu _{d-1}^n)\) with respect to the n-fold product measure of \(\mu _{d-1}\). Similarly, for \(i,j \in \{0,1, \ldots , d-1\}\) the covariance \({\mathbb {C}}{\mathrm{ov}}(F_{W,t}^{(i)},F_{W,t}^{(j)})\) can be represented as

$$\begin{aligned} {\mathbb {C}}{\mathrm{ov}}(F_{W,t}^{(i)},F_{W,t}^{(j)})&= \sum _{n=1}^{\min \{d-i,d-j\}} t^{2d-i-j-n}n!\langle f_n^{(i)},f_n^{(j)} \rangle _{n}, \end{aligned}$$
(18)

where \(\langle \, \cdot \, , \, \cdot \, \rangle _{n}\) denotes the standard scalar product in \(L^2(\mu _{d-1}^n)\).
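A variance representation of the type (17) can be sanity-checked in the same toy situation as before (\({\mathbb {X}}=[0,1]\), intensity-1 measure equal to Lebesgue measure, \(h\equiv 1\), order \(m=2\), so that the U-statistic is \(N(N-1)\) with \(N\sim \mathrm {Poisson}(t)\)): here \(f_1\equiv 2\) and \(f_2\equiv 1\), so the analogue of (17) predicts the variance \(t^3\cdot 1!\cdot 4+t^2\cdot 2!\cdot 1=4t^3+2t^2\). The following Python sketch compares this with a direct computation from the truncated Poisson distribution.

```python
import math

def poisson_expect(f, lam, n_max=400):
    # E[f(N)] for N ~ Poisson(lam), via the truncated pmf
    p = math.exp(-lam)
    total = f(0) * p
    for n in range(1, n_max):
        p *= lam / n
        total += f(n) * p
    return total

t = 2.5
# direct variance of U = N(N-1)
m1 = poisson_expect(lambda n: n * (n - 1), t)
m2 = poisson_expect(lambda n: (n * (n - 1)) ** 2, t)
var_direct = m2 - m1 ** 2
# analogue of (17) with f_1 = 2 and f_2 = 1
var_formula = 4 * t ** 3 + 2 * t ** 2
print(var_direct, var_formula)
```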

4.2 Expectations: Proof of Theorem 1

Theorem 1 is a consequence of the transformation formula in Lemma 4 and the Crofton formula in Lemma 2 with \(k=i\) there. Recall that the constant \(c(d,i)\) used below is defined in Lemma 4. Since the intensity measure of \(\eta _t\) is \(t\mu _{d-1}\) and using (10) with \(m=d-i\) and \(\mu ^{d-i}=t^{d-i}\mu _{d-1}^{d-i}\), we obtain

$$\begin{aligned} {\mathbb {E}}F_{W,t}^{(i)}&= t^{d-i} \int _{A_h(d,d-1)^{d-i}} f^{(i)}(H_1,\ldots ,H_{d-i}) \ \mu _{d-1}^{d-i}(d(H_1,\ldots ,H_{d-i})) \\&= \frac{t^{d-i}}{(d-i)!} \int _{A_h(d,d-1)^{d-i}_*} {\mathcal {H}}^{i}(H_1\cap \ldots \cap H_{d-i} \cap W) \ \mu _{d-1}^{d-i}(d(H_1,\ldots ,H_{d-i})) \\&= c(d,i)\,\frac{t^{d-i}}{(d-i)!} \int _{A_h(d,i)} {\mathcal {H}}^{i}(E \cap W)\,\mu _i(dE)\\&=c(d,i)\,\frac{t^{d-i}}{(d-i)!}\, {\mathcal {H}}^{d}(W)\\&= \frac{\omega _{i+1}}{\omega _{d+1}}\left( \frac{\omega _{d+1}}{\omega _d}\right) ^{d-i} \frac{t^{d-i}}{(d-i)!} \ {\mathcal {H}}^d(W), \end{aligned}$$

and the proof is complete. \(\square \)

Remark 10

The measure \(W\mapsto {\mathbb {E}}F_{W,t}^{(i)}\) is isometry invariant. One could argue that it must be a constant multiple of \({\mathcal {H}}^{d}\), if one knows that it is also locally finite. Theorem 1 shows that this is indeed the case and also yields the constant.
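As a plausibility check of the constant in Theorem 1 (outside the formal proof): for \(i=d-1\) the \(\omega \)-factors cancel and the constant reduces to 1, so \({\mathbb {E}}F_{W,t}^{(d-1)}=t\,{\mathcal {H}}^d(W)\), consistent with the interpretation of t as the mean surface content per unit volume. The Python sketch below assumes the convention \(\omega _j=2\pi ^{j/2}/\Gamma (j/2)\), the surface content of \({\mathbb {S}}^{j-1}\); for \(i=d-1\) the cancellation is algebraic and does not depend on this convention.

```python
import math

def omega(j):
    # surface content of the unit sphere S^{j-1} (assumed convention)
    return 2.0 * math.pi ** (j / 2.0) / math.gamma(j / 2.0)

def mean_constant(d, i):
    # prefactor of t^{d-i} H^d(W) / (d-i)! in Theorem 1
    return (omega(i + 1) / omega(d + 1)) * (omega(d + 1) / omega(d)) ** (d - i)

# for i = d-1 the omega-factors cancel, so E F = t * H^d(W)
for d in range(2, 7):
    print(d, mean_constant(d, d - 1))
```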

4.3 Variances: Proof of Theorem 2

To investigate the variance of \(F_{W,t}^{(i)}\) we use the representation as a Poisson U-statistic, especially (17). We start by simplifying the kernel functions \(f_n^{(i)}\).

Lemma 5

Let \(n\in \{1,\ldots ,d-i\}\). Let \(W\subset {\mathbb {H}}^{d}\) be a bounded Borel set. If \(H_1,\ldots ,H_n \in A_h(d,d-1)\) are n hyperplanes satisfying \(dim (H_1 \cap \ldots \cap H_n)=d-n\), then

$$\begin{aligned} f_n^{(i)}(H_1,\ldots ,H_n)= c(i,n,d)\, {\mathcal {H}}^{d-n}(H_1 \cap \ldots \cap H_n \cap W) \end{aligned}$$

with

$$\begin{aligned} c(i,n,d):={\left( {\begin{array}{c}d-i\\ n\end{array}}\right) \over (d-i)!}\frac{\omega _{i+1} }{\omega _{d-n+1}} \left( \frac{\omega _{d+1}}{\omega _d} \right) ^{d-n-i}. \end{aligned}$$

Proof

We use the definition of \(f_n^{(i)}\), the transformation formula in Lemma 4 and the Crofton formula (7) (in the general form indicated before the statement of Lemma 2). Putting \(L_{d-n}:=H_1\cap \ldots \cap H_{n}\), this gives

$$\begin{aligned}&\left( {\begin{array}{c}d-i\\ n\end{array}}\right) ^{-1}f_n^{(i)}(H_1,\ldots ,H_n) \\&\quad = \frac{1}{(d-i)!} \int _{A_h(d,d-1)^{d-i-n}_*} {\mathcal {H}}^{i}(L_{d-n} \cap {\tilde{H}}_1 \cap \ldots \cap {\tilde{H}}_{d-i-n} \cap W) \\&\qquad \times \mu _{d-1}^{d-i-n}(d({\tilde{H}}_1,\ldots ,{\tilde{H}}_{d-i-n})) \\&\quad =\frac{c(d,i+n)}{(d-i)!} \int _{A_h(d,i+n)}{\mathcal {H}}^i(L_{d-n}\cap W \cap E)\,\mu _{i+n}(dE)\\&\quad = \frac{c(d,i+n)}{(d-i)!} \frac{\omega _{d+1} \, \omega _{i+1}}{\omega _{i+n+1} \, \omega _{d-n+1}}{\mathcal {H}}^{d-n}(L_{d-n} \cap W) \\&\quad = {1\over (d-i)!}\frac{\omega _{i+1} }{\omega _{d-n+1}} \left( \frac{\omega _{d+1}}{\omega _d} \right) ^{d-n-i} {\mathcal {H}}^{d-n}(H_1 \cap \ldots \cap H_n \cap W). \end{aligned}$$

Here we used that since \(L_{d-n}\) is \((d-n)\)-dimensional, the intersection \(L_{d-n}\cap W\) is a Hausdorff \((d-n)\)-rectifiable set. \(\square \)
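The final equality in the proof is a purely algebraic identity in the constants \(\omega _j\): with \(c(d,k)\) from Lemma 4, one has \(c(d,i+n)\,\omega _{d+1}\omega _{i+1}/(\omega _{i+n+1}\omega _{d-n+1})=(\omega _{i+1}/\omega _{d-n+1})(\omega _{d+1}/\omega _d)^{d-n-i}\) for any positive values of the \(\omega _j\). The following sketch (our own check) verifies this with randomly chosen positive numbers in place of the \(\omega _j\), which makes the check independent of the normalization convention.

```python
import random

random.seed(1)
# arbitrary positive stand-ins for omega_1, ..., omega_11: the identity
# below is algebraic and holds for any positive values
omega = {j: random.uniform(0.5, 5.0) for j in range(1, 12)}

def c_lemma4(d, k):
    # c(d,k) = (omega_{k+1}/omega_{d+1}) * (omega_{d+1}/omega_d)^{d-k}
    return (omega[k + 1] / omega[d + 1]) * (omega[d + 1] / omega[d]) ** (d - k)

d = 5
for i in range(d):
    for n in range(1, d - i + 1):
        lhs = (c_lemma4(d, i + n) * omega[d + 1] * omega[i + 1]
               / (omega[i + n + 1] * omega[d - n + 1]))
        rhs = (omega[i + 1] / omega[d - n + 1]) * (omega[d + 1] / omega[d]) ** (d - n - i)
        assert abs(lhs - rhs) < 1e-9 * abs(rhs)
print("identity verified")
```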

For the variances and covariances, we need the \(L^{2}\)-norms and the scalar products of these functions.

Corollary 1

Let \(W\subset {\mathbb {H}}^{d}\) be a bounded Borel set. If \(n\in \{1,\ldots ,\min \{d-i,d-j\}\}\), then

$$\begin{aligned} \langle f_n^{(i)},f_n^{(j)} \rangle _n = c(d,n,i,j) \int _{A_h(d,d-n)}{\mathcal {H}}^{d-n}(E\cap W)^{2}\,\mu _{d-n}(dE). \end{aligned}$$

In particular, the choice \(W=B_r\) for some \(r>0\) yields

$$\begin{aligned} \langle f_n^{(i)},f_n^{(j)} \rangle _n = c(d,n,i,j) \, \omega _{n} \int _{0}^{r} \cosh ^{d-n}(s) \sinh ^{n-1}(s) \,{\mathcal {H}}^{d-n}(L_{d-n}(s) \cap B_r)^2 \ ds, \end{aligned}$$

where \( c(d,n,i,j):= c(d,d-n)\, c(i,n,d)\, c(j,n,d)\) and \(L_{d-n}(s)\) for \(s\in [0,r]\) is an arbitrary \((d-n)\)-dimensional totally geodesic subspace which satisfies \(d_h(L_{d-n}(s),p)=s\).

Proof

The first claim is a direct consequence of the previous lemma and the transformation formula from Lemma 4.

The second claim follows by combining the previous result with (6) and by an application of [13, Proposition 3.1 and Equations (3.15), (3.22)] in an n-dimensional plane \(L_{n}\in G_h(d,n)\) through the fixed origin p. \(\square \)

Proof of Theorem 2

This is now a direct consequence of (17) and Corollary 1. \(\square \)

4.4 Variance: asymptotic behaviour

In this section we look at the variance of \(F_{r,t}^{(i)}=F_{B_r,t}^{(i)}\) in the asymptotic regime, as \(r\rightarrow \infty \). We divide our analysis into the three different cases \(d=2\), \(d=3\) and \(d\ge 4\). Before that, we start with a number of preparations.

4.4.1 Preliminaries

The following lemma will be repeatedly applied below.

Lemma 6

If \(r>0\) and \( 0\le s \le r \), then

$$\begin{aligned} 0\le {{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) -(r-s)\le \log (2). \end{aligned}$$

Proof

We start by proving the lower bound, which is equivalent to \(\cosh (r)-\cosh (s)\cosh (r-s) \ge 0\). By the definitions of \(\cosh \) and \(\sinh \), and since \(0 \le s \le r\), we have

$$\begin{aligned} \cosh (r)-\cosh (s)\cosh (r-s)=\sinh (s)\sinh (r-s) \ge 0. \end{aligned}$$

This yields the lower bound. Next, we turn to the upper bound. We use the logarithmic representation \({{\,\mathrm{\mathrm{arcosh}}\,}}(x)=\log (x+\sqrt{x^{2}-1})\) of the \({{\,\mathrm{\mathrm{arcosh}}\,}}\)-function and the fact that \(\cosh (r)/\cosh (s) \ge 1\) for \(r\ge s\ge 0\). Then we get

$$\begin{aligned}&{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) -(r-s) \nonumber \\&\quad = \log \left( \frac{\cosh (r)}{\cosh (s)}+\sqrt{\frac{\cosh ^2(r)}{\cosh ^2(s)}-1} \right) -(r-s) \\&\quad =\log \left( \frac{e^s \cosh (r)}{e^r \cosh (s)}+\sqrt{\frac{e^{2s} \cosh ^2(r)}{e^{2r} \cosh ^2(s)}-\frac{e^{2s}}{e^{2r}}} \right) \\&\quad =\log \left( \frac{e^s (e^r+e^{-r})}{e^r (e^s+e^{-s})}+\sqrt{\frac{e^{2s} (e^r+e^{-r})^2}{e^{2r} (e^s+e^{-s})^2}-\frac{e^{2s}}{e^{2r}}} \right) \\&\quad =\log \left( \frac{1+e^{-2r}}{1+e^{-2s}}+\sqrt{\frac{e^{2s} (e^{2r}+2+e^{-2r}-e^{2s}-2-e^{-2s})}{e^{2r} (e^{2s}+2+e^{-2s})}} \right) \\&\quad =\log \left( \frac{1+e^{-2r}}{1+e^{-2s}}+\sqrt{\frac{1+e^{-4r}-e^{2s-2r}-e^{-4s}}{1+2e^{-2s}+e^{-2s-2r}}} \right) \\&\quad \le \log (2), \end{aligned}$$

where the last inequality holds because both terms in the argument of the \(\log \) function are bounded from above by 1 for \(s \in [0,r]\). \(\square \)
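The two bounds of Lemma 6 are easy to probe numerically. The sketch below (an illustration only, not part of the proof) scans a grid of pairs \(0\le s\le r\) and confirms that the gap lies in \([0,\log (2)]\), with the upper bound essentially attained for large r and \(s\approx r/2\).

```python
import math

def gap(r, s):
    # arcosh(cosh(r)/cosh(s)) - (r - s), the quantity bounded in Lemma 6
    return math.acosh(math.cosh(r) / math.cosh(s)) - (r - s)

worst = 0.0
for ri in range(1, 41):          # r = 0.5, 1.0, ..., 20.0
    r = 0.5 * ri
    for si in range(ri + 1):     # s = 0, 0.5, ..., r
        g = gap(r, 0.5 * si)
        assert -1e-12 <= g <= math.log(2) + 1e-12
        worst = max(worst, g)
print(worst)   # close to log(2) = 0.693..., attained for large r, s = r/2
```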

Moreover, we frequently apply the following upper and lower bounds for \({\mathcal {H}}^{i}(L_i(s)\cap B_r)\). As before, let \(L_{i}(s)\) denote an arbitrary measurable choice of an i-dimensional totally geodesic subspace satisfying \(d_h(L_{i}(s),p)=s\), \(i\in \{1,\ldots ,d-1\}\). The following lemma concerns the case \(i\in \{2,\ldots ,d-1\}\).

Lemma 7

If \(i \in \{2,\ldots ,d-1\}\) and \(0 \le s \le r\), then

$$\begin{aligned} {\mathcal {H}}^{i}(L_i(s)\cap B_r)\le \frac{\omega _i}{i-1} e^{(r-s)(i-1)}. \end{aligned}$$

If, in addition, \(0\le s \le r-1/2\) then

$$\begin{aligned} \frac{\omega _i}{e^{3(i-1)}(i-1)}e^{(r-s)(i-1)} \le {\mathcal {H}}^{i}(L_i(s)\cap B_r). \end{aligned}$$

Proof

We start by noting that \(L_{i}(s) \cap B_r\) is an i-dimensional hyperbolic ball of radius \({{\,\mathrm{\mathrm{arcosh}}\,}}\big (\frac{\cosh (r)}{\cosh (s)} \big )\) for \(i\in \{1,\ldots ,d-1\}\), see [56, Theorem 3.5.3]. Thus we get

$$\begin{aligned} {\mathcal {H}}^{i}(L_i(s)\cap B_r)&= \omega _i \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } \sinh ^{i-1}(u) \ du \end{aligned}$$
(19)

for \(i\in \{1,\ldots ,d-1\}\). Hence, using Lemma 6 and for \(i \in \{2,\ldots ,d-1\}\) we get

$$\begin{aligned} {\mathcal {H}}^{i}(L_i(s)\cap B_r)&\le \omega _i \int _{0}^{r-s+\log (2)} \sinh ^{i-1}(u) \ du \\&\le \frac{\omega _i}{2^{i-1}} \int _{0}^{r-s+\log (2)} e^{u(i-1)} \ du \\&\le \frac{\omega _i}{2^{i-1}(i-1)} 2^{i-1}e^{(r-s)(i-1)} \\&= \frac{\omega _i}{i-1} e^{(r-s)(i-1)}. \end{aligned}$$

On the other hand, Lemma 6 and Lemma 1 imply that

$$\begin{aligned} {\mathcal {H}}^{i}(L_i(s)\cap B_r)&\ge \omega _i \int _{0}^{r-s} \sinh ^{i-1}(u) \ du \\&=\omega _i \left( \int _{1/2}^{r-s} \sinh ^{i-1}(u) \ du +\int _{0}^{1/2} \sinh ^{i-1}(u) \ du \right) \\&\ge \omega _i \left( \int _{1/2}^{r-s} \left( \frac{e^{u}}{e^3} \right) ^{i-1} \ du +\int _{0}^{1/2} u^{i-1} \ du \right) \\&= \frac{\omega _i}{e^{3(i-1)}(i-1)} \left( e^{(r-s)(i-1)}-e^{(i-1)/2} \right) + \frac{1}{2^{i}} \, \frac{\omega _i}{i } \\&\ge \frac{\omega _i}{e^{3(i-1)}(i-1)} e^{(r-s)(i-1)}, \end{aligned}$$

where we used that \(s \le r-1/2\) to obtain the equality in the third line. The last inequality is due to

$$\begin{aligned} \frac{1}{2^{i}} \, \frac{\omega _i}{i }- \frac{\omega _i}{e^{(5/2)(i-1)}(i-1)}&= \omega _i \left( \frac{1}{i \, 2^{i}} - \frac{1}{e^{(5/2)(i-1)}(i-1)} \right) \ge 0. \end{aligned}$$

The positivity of the last term holds for \(i \ge 2\), since \(2^{i+1} \le e^{(5/2)(i-1)}\) implies that

$$\begin{aligned} 2^{i} \le \frac{i-1}{i} \, e^{(5/2)(i-1)}, \end{aligned}$$

which is equivalent to the desired inequality. \(\square \)
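Since \(\omega _i\) appears as a common factor on both sides, the bounds of Lemma 7 are equivalent to bounds on the integral \(\int _0^R \sinh ^{i-1}(u)\,du\) with \(R={{\,\mathrm {arcosh}\,}}(\cosh (r)/\cosh (s))\) from (19). The following sketch verifies them numerically on a small grid by trapezoidal integration (a plausibility check only; names are ours).

```python
import math

def ball_integral(i, r, s, n=20000):
    # trapezoidal approximation of \int_0^R sinh^{i-1}(u) du with
    # R = arcosh(cosh(r)/cosh(s)); by (19) this equals
    # H^i(L_i(s) ∩ B_r) / omega_i
    R = math.acosh(math.cosh(r) / math.cosh(s))
    h = R / n
    total = 0.5 * (math.sinh(0.0) ** (i - 1) + math.sinh(R) ** (i - 1))
    for k in range(1, n):
        total += math.sinh(k * h) ** (i - 1)
    return total * h

r = 8.0
for i in [2, 3, 5]:
    for s in [0.0, 2.0, 5.0, 7.5]:
        val = ball_integral(i, r, s)
        upper = math.exp((r - s) * (i - 1)) / (i - 1)
        assert val <= upper * (1.0 + 1e-4)
        if s <= r - 0.5:                       # range of the lower bound
            lower = upper / math.exp(3 * (i - 1))
            assert lower * (1.0 - 1e-4) <= val
print("Lemma 7 bounds verified on the grid")
```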

We will need the following lemma later on.

Lemma 8

Let \(r\ge 1 \). For \(k\in \{0,1,\ldots ,d-1\}\) and \( 0 \le s \le r\), let \(L_k(s) \in A_h(d,k)\) be a k-dimensional totally geodesic subspace such that \(d_h(L_k(s),p)=s\). Then for any \( l \in {\mathbb {N}}\) there exist constants \(c,C>0\), depending only on k, l and d, such that

$$\begin{aligned} c \,g(k,l,d,r)&\le \omega _{d-k} \int _{0}^{r} \cosh ^{k}(s) \sinh ^{d-1-k}(s) \, {\mathcal {H}}^{k}(L_k(s) \cap B_r)^{l}\, ds\\&= \int _{A_h(d,k)}{\mathcal {H}}^k (H\cap B_r)^l\, \mu _k(dH) \le C \, g(k,l,d,r) \end{aligned}$$

with

$$\begin{aligned}g(k,l,d,r)= {\left\{ \begin{array}{ll} \exp (r(d-1)) &{}: l(k-1)<d-1, \\ r \, \exp (r(d-1)) &{}: l(k-1)=d-1, \\ \exp (r \, l(k-1)) &{}: l(k-1)>d-1. \end{array}\right. }\end{aligned}$$

Proof

The asserted equality of the two integral expressions is clear from the argument for the second claim in Corollary 1.

For \(k=0\) the integral is just the volume of a geodesic ball of radius r which can be bounded from above and below by a positive constant times \(\exp (r(d-1))\).

In the following, we repeatedly use that the intersection \(L_k(s) \cap B_r\) is a k-dimensional hyperbolic ball of radius \({{\,\mathrm{\mathrm{arcosh}}\,}}(\cosh (r)/\cosh (s))\). The constants c and C used in the calculations below only depend on k, l and d and may vary from line to line. Suppose that \(k\ge 2\). The substitution \(u=r-s\) and an application of Lemma 6 yield

$$\begin{aligned}&\int _{0}^{r} \cosh ^{k}(s) \sinh ^{d-1-k}(s) \, {\mathcal {H}}^{k}(L_k(s) \cap B_r)^{l} \, ds\\&\quad = \int _{0}^{r} \cosh ^{k}(r-u)\, \sinh ^{d-1-k}(r-u) \, {\mathcal {H}}^{k}(L_k(r-u) \cap B_r)^{l} \, du \\&\quad = \int _{0}^{r} \cosh ^{k}(r-u)\,\sinh ^{d-1-k}(r-u) \, \left( \omega _k \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (r-u)}\right) } \sinh ^{k-1}(s) \ ds \right) ^{l} \, du \\&\quad \le C \int _{0}^{r} e^{k(r-u)}\,e^{(d-1-k)(r-u)} \, \left( 2^{-(k-1)}\int _{0}^{u+\log (2)} e^{(k-1)s} \ ds \right) ^{l} \, du \\&\quad \le C \int _{0}^{r} e^{(d-1)(r-u)} \, \left( \frac{1}{k-1} e^{(u +\log (2))(k-1)}\right) ^{l} \, du \\&\quad \le C e^{r(d-1)}\int _{0}^{r} e^{u(l(k-1)-(d-1))} \, du \\&\quad \le C g(k,l,d,r). \end{aligned}$$

To obtain the lower bound, we first show for \(u \ge 0.2\) that

$$\begin{aligned} \int _{0}^{u} \sinh ^{k-1}(s) \ ds&\ge \int _{0.1}^{u} \sinh ^{k-1}(s) \ ds \ge \int _{0.1}^{u} e^{(k-1)(s-3)} \ ds \\&\ge \frac{e^{-3(k-1)}}{k-1}\left( e^{(k-1)u}-e^{0.1(k-1)}\right) \\&\ge \frac{e^{0.1(k-1)}}{k-1} \, e^{-3(k-1)}\left( e^{(k-1)(u-0.1)}-1\right) \\&\ge \frac{e^{0.1(k-1)}}{k-1} \, e^{-3(k-1)} \frac{1}{20}e^{(k-1)(u-0.1)}. \end{aligned}$$

Now we substitute again \(u=r-s\). An application of Lemma 1 and the lower bound from Lemma 6 then yield

$$\begin{aligned}&\int _{0}^{r} \cosh ^{k}(s) \sinh ^{d-1-k}(s) \, {\mathcal {H}}^{k}(L_k(s) \cap B_r)^{l} \, ds\\&\quad = \int _{0}^{r} \cosh ^{k}(r-u)\, \sinh ^{d-1-k}(r-u) \, {\mathcal {H}}^{k}(L_k(r-u) \cap B_r)^{l} \, du \\&\quad = \int _{0}^{r} \cosh ^{k}(r-u)\,\sinh ^{d-1-k}(r-u) \, \left( \omega _k \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (r-u)}\right) } \sinh ^{k-1}(s) \ ds \right) ^{l} \, du \\&\quad \ge c\int _{0}^{r-0.1} e^{k(r-u)}\,e^{(d-1-k)(r-u-3)} \, \left( \int _{0}^{u} \sinh ^{k-1}(s) \ ds \right) ^{l} \, du \\&\quad \ge c e^{r(d-1)}\int _{0.2}^{r-0.1} e^{-u(d-1)} \, e^{l(k-1)(u-0.1)} \, du \\&\quad = c e^{r(d-1)}\int _{0.2}^{r-0.1} e^{u(l(k-1)-(d-1))} \, du \\&\quad \ge c g(k,l,d,r). \end{aligned}$$

For \(k=1\), the proof is almost the same except that we simply use that \(\int _0^a\sinh ^{k-1}(s)\, ds=a\) for \(a\ge 0\). \(\square \)
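A concrete instance of the middle regime \(l(k-1)=d-1\) is \(d=3\), \(k=2\), \(l=2\): by (19), the inner integral has the closed form \(\cosh (R)-1\), so \({\mathcal {H}}^2(L_2(s)\cap B_r)=\omega _2(\cosh (r)/\cosh (s)-1)\) with \(\omega _2=2\pi \). The sketch below (our own numerical illustration) computes the resulting integral and confirms that its ratio to \(re^{2r}\) stays bounded, in line with \(g(2,2,3,r)=r\exp (2r)\).

```python
import math

def mu2_integral(r, n=100000):
    # trapezoidal approximation of
    # \int_0^r cosh^2(s) * (2*pi*(cosh(r)/cosh(s) - 1))^2 ds,
    # the mu_2-integral of H^2(E ∩ B_r)^2 in d = 3 up to omega_{d-k}
    f = lambda s: math.cosh(s) ** 2 * (2.0 * math.pi * (math.cosh(r) / math.cosh(s) - 1.0)) ** 2
    h = r / n
    total = 0.5 * (f(0.0) + f(r))
    for k in range(1, n):
        total += f(k * h)
    return total * h

# l(k-1) = 2 = d-1: middle regime, so the ratio to r*e^{2r} stays bounded
for r in [6.0, 10.0, 14.0]:
    print(r, mu2_integral(r) / (r * math.exp(2.0 * r)))
```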

4.4.2 The planar case \(d=2\)

Although we present a very detailed covariance analysis in Sect. 4.5, we will separately investigate the asymptotic behaviour of the variances in Lemmas 9–11. In fact, while the results of Sect. 4.5 are necessary for the multivariate central limit theorems, the variance analysis we carry out here is already sufficient for the univariate cases. In this and the following two sections, \(c_i\) will denote a positive constant only depending on the dimension and a counting parameter \(i \in {\mathbb {N}}_0\). If it additionally depends on another parameter \(n \in {\mathbb {N}}_0\), we indicate this by writing, for instance, \(c_{i,n}\) or \(c_i(n)\). The value of this constant may change from occasion to occasion.

Lemma 9

Let \(d=2\), \(i\in \{0,1\}\) and \(t\ge t_0>0\). Then there are constants \(c^{(i)}(2,t_0), C^{(i)}(2,t_0)\in (0,\infty )\) such that for all \(r\ge 1\),

$$\begin{aligned} c^{(i)}(2,t_0)\,t^{3-2i}\,e^{r} \le {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)}) \le C^{(i)}(2,t_0)\,t^{3-2i} \,e^{r}. \end{aligned}$$

Proof

For \(i\in \{0,1\}\) and \(n=1\), Corollary 1 and Lemma 8 yield

$$\begin{aligned} c_i\, e^r\le \Vert f_1^{(i)}\Vert _1^{2}= c_i \int _{0}^{r} \cosh (s)\, {\mathcal {H}}^{1}(L_{1}(s) \cap B_r)^2 \ ds \le C_i \, e^{r}. \end{aligned}$$

Similarly, for \(i=0\) and \(n=2\) we obtain

$$\begin{aligned} \Vert f_2^{(0)}\Vert _2^{2}&= c_0 \int _{0}^{r} \sinh (s) {\mathcal {H}}^{0}(L_{1}(s) \cap B_r)^2 \ ds = c_0 \int _{0}^{r} \sinh (s) ds\\&= c_0 \big (\cosh (r)-1\big ). \end{aligned}$$

From (17) we now deduce that

$$\begin{aligned} c(t^2+t^3)e^r\le c_1t^3e^r+c_2t^2e^r\le {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(0)})\le C_1t^3e^r+C_2t^2e^r\le C(t^2+t^3)e^r. \end{aligned}$$

Using that \(t\ge t_0>0\), the assertion follows for \(i=0\). The case \(i=1\) is obtained in the same way, but requires bounds for only one summand in (17). \(\square \)

4.4.3 The spatial case \(d=3\)

Lemma 10

Let \(d=3\), \(i\in \{0,1,2\}\) and \(t\ge t_0>0\). Then there are constants \(c^{(i)}(3,t_0),C^{(i)}(3,t_0)\in (0,\infty )\) such that for all \(r\ge 1\),

$$\begin{aligned} c^{(i)}(3,t_0)\,t^{5-2i} \,re^{2r} \le {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)}) \le C^{(i)}(3,t_0)\,t^{5-2i} \,re^{2r}. \end{aligned}$$

Proof

Corollary 1 and Lemma 8 imply the upper bound

$$\begin{aligned} {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})-\sum _{n=2}^{3-i} t^{6-2i-n} n! \Vert f_n^{(i)}\Vert _n^{2}&= t^{5-2i} \Vert f_1^{(i)}\Vert _1^{2} \le c_i \, t^{5-2i} \, re^{2r}. \end{aligned}$$

It remains to determine the asymptotic behaviour in r of the terms \(\Vert f_{2}^{(i)}\Vert _{2}^2\) and \(\Vert f_{3}^{(i)}\Vert _{3}^2\). Corollary 1 and Lemma 8 yield

$$\begin{aligned} c_i\, e^{2r}\le \Vert f_{2}^{(i)}\Vert _{2}^2\le C_i \, e^{2r}\qquad \text {and}\qquad c_i\, e^{2r}\le \Vert f_{3}^{(i)}\Vert _{3}^2 \le C_i \, e^{2r}. \end{aligned}$$

To deduce the lower bound, it is sufficient to derive a lower bound for the term \(\Vert f_{1}^{(i)}\Vert _{1}^2\). But

$$\begin{aligned} {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})\ge t^{5-2i} \Vert f_1^{(i)}\Vert _1^{2}\ge c_i \, t^{5-2i} \, re^{2r}. \end{aligned}$$

This completes the proof. \(\square \)

4.4.4 The higher dimensional case \(d\ge 4\)

Lemma 11

Let \(d\ge 4\), \(i\in \{0,1,\ldots ,d-1\}\), and \(t\ge t_0>0\). Then there are positive constants \(c^{(i)}(d,t_0)\), \(C^{(i)}(d,t_0)\in (0,\infty )\) such that for all \(r\ge 1\),

$$\begin{aligned} c^{(i)}(d,t_0)\, t^{2(d-i)-1} \, e^{2r(d-2)} \le {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)}) \le C^{(i)}(d,t_0)\,t^{2(d-i)-1} \,e^{2r(d-2)}. \end{aligned}$$

Proof

Combining Corollary 1 with Lemma 8, we obtain

$$\begin{aligned} {\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})- \sum _{n=d-1}^{d-i} t^{2(d-i)-n} n! \Vert f_n^{(i)}\Vert _n^{2} \le \sum _{n=1}^{\min \{d-2, \ d-i\} } c_{i,n} \, t^{2(d-i)-n}\, g(d-n,2,d,r). \end{aligned}$$

For \(n=1\le \min \{d-2, \ d-i\}\), we have \(g(d-1,2,d,r)\le C_i\,\exp (2r(d-2))\), since \(2(d-2)-(d-1)=d-3>0\). If \(2(d-n-1)-(d-1)=d-1-2n>0\), then \(g(d-n,2,d,r)\le g(d-1,2,d,r)\). For the remaining cases, we use that \(\exp (r(d-1))\) is of lower order than \(\exp (2r(d-2))\) for \(d\ge 4\). Moreover, as in the case \(d=3\) it follows that \(\Vert f_{d-1}^{(i)}\Vert _{d-1}^2\) and \(\Vert f_{d}^{(i)}\Vert _{d}^2\) are of order at most \(e^{r(d-1)}\). This yields the upper bound.

The lower bound is again derived by just taking into account \(\Vert f_{1}^{(i)}\Vert _{1}^2\) and, in addition, by applying the lower bound \(g(d-1,2,d,r)\ge c_i\, \exp (r2(d-2))\) from Lemma 8.

\(\square \)

4.5 Covariance analysis

In this section we prepare the proof of Theorem 7 by an asymptotic analysis of the covariance structure of the random vector \({\mathbf{F }}_{r,t}\) in dimensions \(d=2\) and \(d=3\).

4.5.1 The planar case \(d=2\)

The following lemma describes the rate of convergence, as \(r\rightarrow \infty \), of the suitably scaled covariances to the asymptotic covariance matrix \(\varSigma _d=(\sigma ^{i,j}_d)_{i,j=0}^{d-1}\) for \(d=2\).

Lemma 12

Let \(d=2\) and \(t\ge t_0>0\). There is a positive constant \(c_\mathrm{cov}(2,t_0)\in (0,\infty )\) such that if \(r\ge 1\), then

$$\begin{aligned}&\Bigg |\sigma ^{i,j}_2-{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r/2}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r/2}}\Bigg )\Bigg |\\&\quad \le c_{\mathrm{cov}}(2,t_0)\, t^{3-i-j} r^{2}e^{-r},\qquad i,j\in \{0,1\}. \end{aligned}$$

Moreover,

$$\begin{aligned} \varSigma _2= \begin{pmatrix} t^2\left( \left( \frac{4}{\pi }\right) ^2ta+\frac{1}{\pi }\right) &{}\quad \frac{8}{\pi }\,t^2a \\ \frac{8}{\pi }\,t^2a &{}\quad 4ta \end{pmatrix} \end{aligned}$$
(20)

and \(a=4\cdot \mathrm{G}\) with Catalan’s constant \(\mathrm{G}\approx 0.915965594\). In particular, \(\varSigma _2\) is positive definite with \(\det (\varSigma _2)=\frac{4}{\pi }t^3a\).

Proof

Since \({\mathbf{F }}_{r,t}\) is a vector of Poisson U-statistics, the covariance representation (18) shows that, for \(i,j\in \{0,1\}\),

$$\begin{aligned} {\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r/2}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r/2}}\Bigg ) = e^{-r}\sum _{n=1}^{\min \{2-i,2-j\}} t^{4-i-j-n}n!\langle f_n^{(i)},f_n^{(j)} \rangle _{n} \end{aligned}$$

and it remains to compute the scalar products. Using (19) and Corollary 1 we get

$$\begin{aligned} \langle {f}_1^{(i)},{f}_1^{(j)} \rangle _{1}&= c(2,1,i,j)\cdot 2\cdot 4 \int _{0}^{r} \cosh (s) \, {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( \frac{\cosh (r)}{\cosh (s)}\right) \, ds \\&= c(2,1,i,j)\cdot 2\cdot 4 \int _{0}^{r} \cosh (r-s) \, {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( \frac{\cosh (r)}{\cosh (r-s)}\right) \, ds\\&= c_1^{(i,j)} \int _{0}^{r} (e^{r-s}+e^{s-r}) {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \,ds \end{aligned}$$

with \(c_1^{(i,j)}=4\cdot c(i,1,2)c(j,1,2)\). We have \(c(0,1,2)=2/\pi \) and \(c(1,1,2)=1\), and hence

$$\begin{aligned} c_1^{(0,0)}=4\left( \frac{2}{\pi }\right) ^2=\left( \frac{4}{\pi }\right) ^2,\quad c_1^{(1,1)}=4,\quad c_1^{(1,0)}=c_1^{(0,1)}=4\cdot \frac{2}{\pi }=\frac{8}{\pi }. \end{aligned}$$

Furthermore, again by Corollary 1

$$\begin{aligned} \langle {f}_2^{(i)},{f}_2^{(j)} \rangle _{2}&= c(2,2,i,j)\cdot 2 \int _{0}^{r} \sinh (s)\, ds = c_2^{(i,j)} (e^r+e^{-r}-2) \end{aligned}$$

with \(c_2^{(i,j)}=(2/\pi )c(i,2,2)c(j,2,2)\). In particular, \(c_2^{(0,0)}=1/(2\pi )\).

In the following, we use that

$$\begin{aligned} {{\,\mathrm{\mathrm{arcosh}}\,}}\left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \le {{\,\mathrm{\mathrm{arcosh}}\,}}\left( e^{s}\right) \le s+\log (2). \end{aligned}$$
(21)
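As an aside (not part of the argument), inequality (21) follows from \({{\,\mathrm{\mathrm{arcosh}}\,}}(x)=\log (x+\sqrt{x^2-1})\le \log (2x)\) and can be checked numerically on a grid; in the following minimal sketch, the value \(r=5\) and the grid size are arbitrary choices.

```python
import math

# Check of (21): arcosh(e^s (1+e^{-2r})/(1+e^{2(s-r)})) <= arcosh(e^s)
# <= s + log 2, for 0 <= s <= r. The second inequality follows from
# arcosh(x) = log(x + sqrt(x^2 - 1)) <= log(2x).
r = 5.0
for k in range(101):
    s = r * k / 100
    arg = math.exp(s) * (1 + math.exp(-2 * r)) / (1 + math.exp(2 * (s - r)))
    assert math.acosh(arg) <= math.acosh(math.exp(s)) + 1e-12
    assert math.acosh(math.exp(s)) <= s + math.log(2) + 1e-12
```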

For \((i,j) \in \{(0,1),(1,0),(1,1)\}\) we then deduce from the dominated convergence theorem that

$$\begin{aligned} \sigma ^{i,j}_2&= \lim _{r \rightarrow \infty } c_1^{(i,j)}t^{3-i-j} \int _{0}^{r} ({e^{-s}+e^{-2r+s}}){{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \ ds\\&=c_1^{(i,j)} t^{3-i-j}\int _{0}^{\infty } e^{-s} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(e^{s}) \ ds=:c_1^{(i,j)} t^{3-i-j}\cdot a \end{aligned}$$

and, in addition, we have

$$\begin{aligned} \sigma ^{0,0}_2&= c_1^{(0,0)} t^3\cdot a+2t^2\,c_2^{(0,0)}. \end{aligned}$$

Since \(a=4\cdot \text {G}\) by the following Remark 11, we obtain the specific values of \(\sigma ^{i,j}_2\) for \(i,j\in \{0,1\}\), and hence the determinant of the asymptotic covariance matrix \(\varSigma _2\) given in (20).

Next we prove the asserted rates of convergence. For \((i,j) \in \{(0,1),(1,0),(1,1)\}\), we get

$$\begin{aligned}&\Bigg |\sigma ^{i,j}_2-{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r/2}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r/2}}\Bigg )\Bigg |\nonumber \\&\quad = \left| c_1^{(i,j)}t^{3-i-j}\cdot a-c_1^{(i,j)}t^{3-i-j} \int _{0}^{r} (e^{-s}+e^{-2r+s}) {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \ ds \right| \nonumber \\&\quad \le c_1^{(i,j)}t^{3-i-j}\int _{0}^{r} e^{-s} \left( {{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(e^{s})- {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \right) \ ds \end{aligned}$$
(22)
$$\begin{aligned}&\qquad + c_1^{(i,j)}t^{3-i-j}\int _{0}^{r} e^{-2r+s} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \ ds \end{aligned}$$
(23)
$$\begin{aligned}&\qquad + c_1^{(i,j)} t^{3-i-j}\int _{r}^{\infty } e^{-s} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(e^{s}) \ ds . \end{aligned}$$
(24)

Applying (21) to the expression in (24) we get

$$\begin{aligned} \int _{r}^{\infty } {e^{-s}} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(e^{s}) \ ds \le \int _{r}^{\infty } e^{-s} (\log (2)+s)^{2} \ ds \le c \,r^{2} e^{-r}. \end{aligned}$$
(25)

Using (21) for the expression in (23) we obtain

$$\begin{aligned} \int _{0}^{r} e^{-2r+s} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \, ds&\le \int _{0}^{r} e^{-2r+s} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\right) \, ds \nonumber \\&\le \int _{0}^{r} e^{-2r+s} (s+\log (2))^{2} \, ds \nonumber \\&\le c\,r^{2}e^{-r}. \end{aligned}$$
(26)

Finally, we treat the expression in (22). An application of the mean value theorem in the first step and of (21) in the second-to-last step yields

$$\begin{aligned}&\int _{0}^{r} e^{-s} \left( {{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(e^{s})- {{\,\mathrm{\mathrm{arcosh}}\,}}^{2} \left( e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \right) \ ds \nonumber \\&\quad \le \int _{0}^{r} e^{-s} \left( \left( e^{s}- e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) \right) \, \underset{z \in \left[ e^{s}\left( \frac{1+e^{-2r}}{1+e^{2(s-r)}}\right) ,e^{s}\right] }{\max }\frac{d}{d z} ({{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(z)) \right) \ ds \nonumber \\&\quad \le \int _{0}^{r} \left( \frac{e^{2(s-r)}-e^{-2r}}{1+e^{2(s-r)}} \right) \frac{2 {{\,\mathrm{\mathrm{arcosh}}\,}}(e^{s})}{\sqrt{\left( \frac{e^{s}+e^{-2r+s}}{1+e^{2(s-r)}} \right) ^{2}-1}} \ ds \nonumber \\&\quad = \int _{0}^{r} e^{-2r}\left( e^{2s}-1 \right) \frac{2 {{\,\mathrm{\mathrm{arcosh}}\,}}(e^{s})}{\sqrt{e^{2s}-1+e^{2(s-2r)}-e^{-4(r-s)}}} \ ds \nonumber \\&\quad \le \frac{1}{\sqrt{1-e^{-2r}}}\int _{0}^{r} e^{-2r}\left( e^{2s}-1 \right) \frac{2 {{\,\mathrm{\mathrm{arcosh}}\,}}(e^{s})}{\sqrt{e^{2s}-1}} \ ds \nonumber \\&\quad \le c e^{-2r}\int _{0}^{r} \sqrt{e^{2s}-1} \,{{\,\mathrm{\mathrm{arcosh}}\,}}(e^{s}) \ ds \nonumber \\&\quad \le c e^{-2r}\int _{0}^{r} e^{s} (s+\log (2)) \ ds \nonumber \\&\quad \le c\,re^{-r}. \end{aligned}$$
(27)

Thus, a combination of (25), (26) and (27) yields the result for \((i,j)\in \{(0,1),(1,0),(1,1)\}\). Finally, if \((i,j)=(0,0)\) we obtain the result by additionally taking into account that

$$\begin{aligned} |c_2^{(0,0)} (1+e^{-2r}-2e^{-r})-c_2^{(0,0)}| \le c \,e^{-r}. \end{aligned}$$

This completes the proof. \(\square \)
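As a numerical sanity check of the determinant formula \(\det (\varSigma _2)=\frac{4}{\pi }t^3a\) from Lemma 12, one can evaluate the matrix in (20) directly; in the sketch below, the standard numerical value of Catalan's constant is used and the intensity \(t=1.7\) is an arbitrary test value.

```python
import math

# Numerical check of det(Sigma_2) = (4/pi) t^3 a for the matrix in (20),
# with a = 4G and Catalan's constant G.
G = 0.9159655941772190
a = 4 * G
t = 1.7  # arbitrary test intensity

s00 = t ** 2 * ((4 / math.pi) ** 2 * t * a + 1 / math.pi)
s01 = (8 / math.pi) * t ** 2 * a
s11 = 4 * t * a

det = s00 * s11 - s01 ** 2
assert abs(det - (4 / math.pi) * t ** 3 * a) < 1e-8
```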

Remark 11

The relation \(a=4\text {G}\) can be confirmed by Maple. It is not clear to us how Maple verifies this relation. Since we could not find this integral representation of Catalan's constant in any of the lists available to us, we provide a short derivation. We first use the substitution \(t=\exp (-{{\,\mathrm{\mathrm{arcosh}}\,}}(e^s))\), that is, \(e^s=\frac{1}{2}(t^{-1}+t)\), and then expand \((1+t^2)^{-2}\) into a Taylor series under the integral sign. This leads to

$$\begin{aligned} a&=\int _{0}^{\infty } e^{-s} {{\,\mathrm{\mathrm{arcosh}}\,}}^{2}(e^{s})\, ds=2\int _0^1\frac{1-t^2}{(1+t^2)^2}(\ln t)^2\, dt\\&=2\int _0^1\sum _{i=0}^\infty (-1)^i(i+1)t^{2i}(1-t^2)(\ln t)^2\, dt. \end{aligned}$$

By the substitution \(t=e^y\) we obtain

$$\begin{aligned} \int _0^1 t^{2i}(\ln t)^2\, dt=\frac{2}{(2i+1)^3}. \end{aligned}$$

Hence we can interchange summation and integration to get

$$\begin{aligned} a&=4\left( \sum _{i=0}^\infty (-1)^i(i+1)\frac{1}{(2i+1)^3}-\sum _{i=0}^\infty (-1)^i(i+1)\frac{1}{(2i+3)^3}\right) \\&=4\left( \frac{1}{2}\text {G}+\frac{1}{2}\sum _{i=0}^\infty (-1)^i \frac{1}{(2i+1)^3}-\frac{1}{2}(-\text {G}+1)+ \frac{1}{2}\sum _{i=0}^\infty (-1)^i \frac{1}{(2i+3)^3}\right) \\&=4\left( \frac{1}{2}\text {G}+\frac{1}{2}\text {G} +\frac{1}{2}-\frac{1}{2}\right) =4\text {G}. \end{aligned}$$

4.5.2 The spatial case \(d=3\)

Now we turn to the case \(d=3\) and again describe the rate of convergence, as \(r\rightarrow \infty \), of the suitably scaled covariances to the asymptotic covariance matrix \(\varSigma _d=(\sigma ^{i,j}_d)_{i,j=0}^{d-1}\).

Lemma 13

Let \(d=3\) and \(t\ge t_0>0\). There exists a positive constant \(c_{\mathrm{cov}}(3,t_0)\in (0,\infty )\) such that

$$\begin{aligned}&\Bigg |\sigma ^{i,j}_3-{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{r}\,e^{r}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over \sqrt{r}\,e^{r}}\Bigg )\Bigg | \\&\quad \le c_{\mathrm{cov}}(3,t_0)\, t^{5-i-j} \, r^{-1}, \qquad i,j\in \{0,1,2 \}, \end{aligned}$$

for \(r\ge 1\). The matrix \(\varSigma _3\) has rank one and is explicitly given by

$$\begin{aligned} \varSigma _3= 2 \pi ^2\, \begin{pmatrix} \frac{\pi ^{2}}{2^8}t^{5} &{}\quad \frac{\pi ^{2}}{2^6}t^{4} &{}\quad \frac{\pi }{2^4}t^{3} \\ \frac{\pi ^{2}}{2^6}t^{4} &{}\quad \frac{\pi ^{2}}{2^4}t^{3} &{}\quad \frac{\pi }{2^2}t^{2} \\ \frac{\pi }{2^4}t^{3} &{}\quad \frac{\pi }{2^2}t^{2} &{} t \end{pmatrix}. \end{aligned}$$
(28)

Proof

For \(i,j\in \{0,1,2\}\), the covariance formula for Poisson U-statistics yields that

$$\begin{aligned} {\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{r}\,e^{r}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over \sqrt{r}\,e^{r}}\Bigg ) = r^{-1}e^{-2r}\sum _{n=1}^{\min \{3-i,3-j\}} t^{6-i-j-n}n!\langle f_n^{(i)},f_n^{(j)} \rangle _{n}. \end{aligned}$$

As in the planar case \(d=2\) we compute the scalar products. We let \(L_2(s)\) be a 2-dimensional subspace in \({\mathbb {H}}^3\) having distance \(s\ge 0\) from the origin p. For \(n=1\) Corollary 1 and Equation (19) yield

$$\begin{aligned} \langle {f}_1^{(i)},{f}_1^{(j)} \rangle _{1}&= \omega _1 c(3,1,i,j) \int _{0}^{r} \cosh ^{2}(s) \,{\mathcal {H}}^{2}(L_2(s)\cap B_r)^{2} \ ds \\&= \omega _2^{2} \omega _1 c(3,1,i,j) \int _{0}^{r} \cosh ^{2}(s) \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)}\right) }\sinh (u) \ du \right) ^{2} \ ds \\&= \omega _2^{2} \omega _1 c(3,1,i,j) \int _{0}^{r} \cosh ^{2}(s) \left( \frac{\cosh (r)}{\cosh (s)}-1 \right) ^{2} \ ds \\&= \omega _2^{2} \omega _1 c(3,1,i,j) \int _{0}^{r} \ \left( \cosh (r)-\cosh (s)\right) ^{2} \ ds \\&= {\omega _2^{2} \omega _1 c(3,1,i,j)}\frac{1}{2} \,\big (r+2r\cosh ^2(r)-3\sinh (r)\cosh (r)\big ). \end{aligned}$$

In addition, using Lemma 4 and Lemma 8, we obtain

$$\begin{aligned} \langle {f}_2^{(i)},{f}_2^{(j)} \rangle _{2}&\le c \,e^{2r} \qquad \text {and}\qquad \langle {f}_3^{(i)},{f}_3^{(j)} \rangle _{3} \le c \,e^{2r}. \end{aligned}$$

Since \(c(3,2)=1\), \(c(0,1,3)=\pi /16\), \(c(1,1,3)=\pi /4\) and \(c(2,1,3)=1\), we obtain \(c(3,1,0,0)=\pi ^2/2^8\), \(c(3,1,0,1)=\pi ^2/2^6\), \(c(3,1,0,2)=\pi /2^4\), \(c(3,1,1,1)=\pi ^2/2^4\), \(c(3,1,1,2)=\pi /2^2\) and \(c(3,1,2,2)=1\). Moreover, we have

$$\begin{aligned} \sigma ^{i,j}_3&= \lim _{r \rightarrow \infty } t^{5-i-j}\,{\omega _2^{2} \omega _1 c(3,1,i,j)}\frac{1}{2} \,r^{-1}e^{-2r} \big (r+2r\cosh ^2(r)-3\sinh (r)\cosh (r)\big )\\&= t^{5-i-j}\, {\omega _2^{2} \omega _1 c(3,1,i,j)}\frac{1}{4} \\&=t^{5-i-j}\, 2 \pi ^{2} c(3,1,i,j). \end{aligned}$$

Therefore we conclude that the asymptotic covariance matrix \(\varSigma _3\) is given by (28). Clearly, \(\varSigma _3\) has rank one. Moreover, we obtain

$$\begin{aligned}&\Bigg |\sigma ^{i,j}_3-{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{r}\,e^{r}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over \sqrt{r}\,e^{r}}\Bigg )\Bigg | \\&\quad \le t^{5-i-j}\, 4 \pi ^{2}c(3,1,i,j) \left| 1/2-r^{-1}e^{-2r}\big (r+2r\cosh ^2(r)-3\sinh (r)\cosh (r)\big )\right| \\&\qquad + r^{-1}e^{-2r}\sum _{n=2}^{\min \{3-i,3-j \}} t^{6-i-j-n}\,n! \langle {f}_n^{(i)},{f}_n^{(j)} \rangle _{n} \\&\quad \le c_{\mathrm{cov}}(3,t_0)\,t^{5-i-j}\, r^{-1}, \end{aligned}$$

where we used that \(|1/2-r^{-1}e^{-2r}\big (r+2r\cosh ^2(r)-3\sinh (r)\cosh (r)\big )|\) is bounded from above by a constant multiple of \(r^{-1}\) as \(r\rightarrow \infty \). This completes the proof. \(\square \)
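As a sanity check (not part of the proof), both the rank-one structure of \(\varSigma _3\) and the limit \(r^{-1}e^{-2r}\big (r+2r\cosh ^2(r)-3\sinh (r)\cosh (r)\big )\rightarrow 1/2\) can be verified numerically; the intensity \(t=1.3\) below is an arbitrary test value, and the constants \(c(i,1,3)\) are those listed in the proof.

```python
import math

# Sigma_3 from (28): sigma^{i,j} = 2 pi^2 t^{5-i-j} c(i,1,3) c(j,1,3)
t = 1.3
c = {0: math.pi / 16, 1: math.pi / 4, 2: 1.0}
sigma = [[2 * math.pi ** 2 * t ** (5 - i - j) * c[i] * c[j] for j in range(3)]
         for i in range(3)]

# rank one <=> every 2x2 minor vanishes
for i in range(3):
    for k in range(3):
        for j in range(3):
            for l in range(3):
                assert abs(sigma[i][j] * sigma[k][l]
                           - sigma[i][l] * sigma[k][j]) < 1e-9

# the normalized term converges to 1/2, with deviation of order 1/r
for r in (5.0, 10.0, 20.0):
    val = (r + 2 * r * math.cosh(r) ** 2
           - 3 * math.sinh(r) * math.cosh(r)) / (r * math.exp(2 * r))
    assert abs(val - 0.5) < 1.0 / r
```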

4.5.3 The case \(d\ge 4\)

In order to describe explicitly the limit covariance matrix \(\varSigma _d\) for \(d \ge 4\) we need the following lemma.

Lemma 14

For \(\alpha >0\),

$$\begin{aligned} \int _{0}^{\infty } \cosh ^{-\alpha }(x) \ dx=\frac{\sqrt{\pi }}{2} \frac{\varGamma (\frac{\alpha }{2})}{\varGamma (\frac{\alpha +1}{2})}. \end{aligned}$$

Proof

Substituting first \(u=e^{x}\) and then \(\tan (z)=u\), and using \((\tan ^{2}(z)+1)^{-1}=\cos ^{2}(z)\), we get

$$\begin{aligned}&\int _{0}^{\infty } \cosh ^{-\alpha }(x) \ dx = 2^{\alpha } \int _{1}^{\infty } \frac{u^{\alpha -1}}{(u^{2}+1)^{\alpha }} \ du \\&\quad = 2^{\alpha } \int _{\pi /4}^{\pi /2} \sin ^{{\alpha -1}}(z) \cos ^{\alpha -1}(z) \ dz=:I_\alpha . \end{aligned}$$

The trigonometric identity \(2\sin z\cos z=\sin (2z)\) and the substitution \(y=2z\), combined with the symmetry of \(\sin \) around \(\pi /2\), yield

$$\begin{aligned} I_\alpha =2 \int _{\pi /4}^{\pi /2} \sin ^{{\alpha -1}}(2z) \ dz = \int _{0}^{\pi /2} \sin ^{{\alpha -1}}(y) \ dy =\frac{\sqrt{\pi }}{2}\frac{\varGamma (\frac{\alpha }{2})}{\varGamma (\frac{\alpha +1}{2})}. \end{aligned}$$

For the last equality we use the substitution \(\sin (y)=t\) to transform the integral into a Beta integral which can be expressed as a ratio of Gamma functions. This completes the argument. \(\square \)
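Lemma 14 can be confirmed numerically for a few sample values of \(\alpha \); in the following sketch, the truncation point of the improper integral and the chosen values of \(\alpha \) are arbitrary.

```python
import math

def simpson(f, a, b, n=100000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# int_0^infty cosh^{-alpha}(x) dx = (sqrt(pi)/2) Gamma(alpha/2)/Gamma((alpha+1)/2);
# the tail beyond 80 is exponentially small for every alpha tested here
for alpha in (0.5, 1.0, 2.0, 3.7):
    lhs = simpson(lambda x: math.cosh(x) ** (-alpha), 0.0, 80.0)
    rhs = math.sqrt(math.pi) / 2 * math.gamma(alpha / 2) / math.gamma((alpha + 1) / 2)
    assert abs(lhs - rhs) < 1e-5
```

For \(\alpha =1\) and \(\alpha =2\) this reproduces the elementary values \(\pi /2\) and \(1\), respectively.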

Depending on the dimension, we will bound the speed of convergence by means of the function

$$\begin{aligned} h(d,r)={\left\{ \begin{array}{ll} e^{-r} &{}: d=4,\\ re^{-2r} &{}: d=5,\\ e^{-2r} &{}: d\ge 6. \end{array}\right. } \end{aligned}$$

Lemma 15

Let \(d\ge 4\) and \(t\ge t_0>0\). There exists a positive constant \(c_{\mathrm{cov}}(d,t_0)\in (0,\infty )\) such that

$$\begin{aligned}&\Bigg |\sigma ^{i,j}_d-{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r(d-2)}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r(d-2)}}\Bigg )\Bigg | \\&\qquad \le c_{\mathrm{cov}}(d,t_0)\, t^{2d-1-i-j} \, h(d,r), \qquad i,j\in \{0,\ldots ,d-1\}, \end{aligned}$$

for \(r\ge 1\). The matrix \(\varSigma _d\) has rank one and its entries are explicitly given by

$$\begin{aligned} \sigma ^{i,j}_d=t^{2d-1-i-j} \, c(i,1,d) \,c(j,1,d) \, \frac{\omega _{d-1}\omega _d }{ 4^{d-2}(d-3)(d-2) }, \qquad i,j \in \{0,\ldots ,d-1\}, \end{aligned}$$
(29)

where the constants c(i, 1, d), c(j, 1, d) are introduced in Lemma 5.

Proof

Recall that

$$\begin{aligned}&{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{(d-2)r}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{(d-2)r}}\Bigg )\nonumber \\&\quad =e^{-2(d-2)r} \sum _{n=1}^{\min \{d-i,d-j\}} t^{2d-i-j-n}n! \langle f_n^{(i)},f_{n}^{(j)} \rangle _n \end{aligned}$$
(30)

for \(r\ge 1\). In a first step, we bound from above the summands with \(n \in \{2, \ldots , \min \{d-i,d-j\}\}\). Lemma 8 implies that

$$\begin{aligned} e^{-2(d-2)r} \,\langle f_n^{(i)},f_{n}^{(j)} \rangle _n \le c \, e^{-2(d-2)r} \, g(d-n,2,d,r) \end{aligned}$$

with some constant c, not depending on r. For \(2(d-n-1) < d-1\) we obtain from Lemma 8 that

$$\begin{aligned} c \, e^{-2(d-2)r} g(d-n,2,d,r) \le c \, e^{r(-2d+4)}\, e^{r(d-1)}\le c \, e^{r(-d+3)}\le c\, h(d,r). \end{aligned}$$

Note that \(2(d-n-1) = d-1\) implies that d is odd, hence \(d\ge 5\), and therefore

$$\begin{aligned} c \, e^{-2(d-2)r} g(d-n,2,d,r) \le c \, e^{r(-2d+4)}\, r \, e^{r(d-1)}\le c \, r \, e^{r(-d+3)}\le c\, h(d,r). \end{aligned}$$

For \(2(d-n-1)>d-1\) we get

$$\begin{aligned} c \, e^{-2(d-2)r} g(d-n,2,d,r) \le c \, e^{r(-2d+4)} \, e^{2r(d-n-1)}\le c \, e^{r(-2n+2)}\le c\, h(d,r), \end{aligned}$$

since \(n\ge 2\).

Now we examine the remaining term corresponding to \(n=1\) in (30). By Corollary 1 and (19) we get

$$\begin{aligned} \begin{aligned}&e^{-2(d-2)r} \, \langle f_1^{(i)},f_{1}^{(j)} \rangle _1 \\&\quad = \frac{c(d,1,i,j) \, \omega _1}{e^{2(d-2)r}} \int _{0}^{r} \cosh ^{d-1}(s) \, {\mathcal {H}}^{d-1}(L_{d-1}(s)\cap B_r)^{2} \ ds \\&\quad = \frac{c(d,1,i,j) \, \omega _1}{e^{2(d-2)r}} \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \omega _{d-1} \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } \sinh ^{d-2}(u) \ du \right) ^{2} \ ds \\&\quad = \frac{c(d,1,i,j) \, \omega _1 \, \omega _{d-1}^{2}}{e^{2(d-2)r}} \int _{0}^{r} \cosh ^{d-1}(s) \\&\quad \times \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } \sum _{k=0}^{d-2} \frac{(-1)^{k}}{2^{d-2}} \, \left( {\begin{array}{c}d-2\\ k\end{array}}\right) \, e^{u(d-2-2k)} \ du \right) ^{2} \ ds \\&\quad = \frac{c(d,1,i,j) \, \omega _1 \, \omega _{d-1}^{2}}{4^{d-2}e^{2(d-2)r}} \int _{0}^{r} \cosh ^{d-1}(s) \\&\quad \times \left( \sum _{k=0}^{d-2} (-1)^{k} \, \left( {\begin{array}{c}d-2\\ k\end{array}}\right) \, \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u(d-2-2k)} \ du \right) ^{2} \ ds \end{aligned} \end{aligned}$$
(31)

The quadratic term in brackets in (31) is given by

$$\begin{aligned}&\sum _{(k_1,k_2) \in \{0, \ldots ,d-2\}^{2}}(-1)^{k_1+k_2} \, \left( {\begin{array}{c}d-2\\ k_1\end{array}}\right) \, \left( {\begin{array}{c}d-2\\ k_2\end{array}}\right) \, \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u_1(d-2-2 k_1)} \ du_1 \\&\qquad \quad \times \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u_2(d-2-2k_2)} \ du_2. \end{aligned}$$

Next, we provide an upper bound for the summands obtained for \((k_1,k_2) \in \{0, \ldots ,d-2\}^{2} \setminus \{(0,0)\} \). Without loss of generality we assume \(k_2 \ge 1\). Then we get

$$\begin{aligned}&e^{-2(d-2)r} \int _{0}^{r} \cosh ^{d-1}(s) \, \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u_1(d-2-2k_1)} \ du_1\nonumber \\&\qquad \qquad \qquad \times \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u_2(d-2-2k_2)} \ du_2 \ ds\nonumber \\&\quad \le c \, e^{-2r(d-2)} \int _{0}^{r} e^{s(d-1)} \, \int _{0}^{r-s+\log (2)} e^{u_1(d-2-2k_1)} \ du_1 \nonumber \\&\qquad \qquad \qquad \times \int _{0}^{r-s+\log (2)} e^{u_2(d-2-2k_2)} \ du_2 \ ds \nonumber \\&\quad \le c \, e^{-2r(d-2)} \int _{0}^{r} e^{s(d-1)} \, e^{(r-s)(d-2)} \, e^{(r-s)(d-4)} \ ds \nonumber \\&\quad \le c \, e^{-2r} \int _{0}^{r} e^{s(-d+5)} \ ds\le c\, h(d,r) \end{aligned}$$
(32)

for \(d\ge 5\). For \(d=4\) the bound in the third line becomes

$$\begin{aligned}&c \, e^{-4r} \int _{0}^{r} e^{3s} \, e^{2(r-s)} (r-s+\log (2)) \ ds \\&\quad = c \, e^{-2r} \int _{0}^{r} (r-s+\log (2)) \, e^{s} \ ds\le c\, h(4,r). \end{aligned}$$

Therefore we can concentrate on the summand corresponding to \(k=0\) in (31). In the following we will make use of the representation \({{\,\mathrm{\mathrm{arcosh}}\,}}(x)=\log (x+\sqrt{x^{2}-1})\) of the \({{\,\mathrm{\mathrm{arcosh}}\,}}\)-function in order to evaluate the inner integral. Then we get

$$\begin{aligned}&\cosh ^{-2(d-2)}(r) \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u(d-2)} \ du \right) ^{2} \ ds \nonumber \\&\quad = \frac{\cosh ^{-2(d-2)}(r)}{(d-2)^{2}} \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \left( \frac{\cosh (r)}{\cosh (s)}+\sqrt{\frac{\cosh ^{2}(r)}{\cosh ^{2}(s)}-1} \right) ^{d-2}-1 \right) ^{2} \ ds\nonumber \\&\quad = (d-2)^{-2} \, \int _{0}^{r} \cosh ^{-(d-3)}(s) \, \left( \left( 1+\sqrt{1-\frac{\cosh ^{2}(s)}{\cosh ^{2}(r)}} \right) ^{d-2}-\left( \frac{\cosh (s)}{\cosh (r)} \right) ^{d-2} \right) ^{2} \ ds. \end{aligned}$$
(33)

For \(r\rightarrow \infty \) this expression converges to a constant. To get the correct rate stated in the lemma we observe that

$$\begin{aligned}&\Bigg |\sigma ^{i,j}_d-{\mathbb {C}}{\mathrm{ov}}\Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over e^{r(d-2)}},{F_{r,t}^{(j)}-{\mathbb {E}}F_{r,t}^{(j)}\over e^{r(d-2)}}\Bigg )\Bigg | \\&\quad \le \left| \sigma ^{i,j}_d-e^{-2(d-2)r} \, t^{2d-1-i-j}\langle f_1^{(i)},f_{1}^{(j)} \rangle _1\right| \\&\qquad +e^{-2(d-2)r} \sum _{n=2}^{\min \{d-i,d-j\}} t^{2d-i-j-n}n! \langle f_n^{(i)},f_{n}^{(j)} \rangle _n. \end{aligned}$$

We have already shown that the second summand satisfies the asserted upper bound. It follows from (32) that it remains to consider

$$\begin{aligned}&\left| \sigma ^{i,j}_d- \frac{\beta }{e^{2r(d-2)}} \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u(d-2)} \ du \right) ^{2} \ ds\right| \nonumber \\&\quad \le \left| \sigma ^{i,j}_d- \frac{\beta }{4^{d-2}\cosh ^{2(d-2)}(r)} \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u(d-2)} \ du \right) ^{2} \ ds\right| \nonumber \\&\qquad + \beta \left| \frac{1}{4^{d-2} \, \cosh ^{2(d-2)}(r)}- \frac{1}{e^{2r(d-2)}} \right| \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{ \cosh (r)}{\cosh (s)} \right) } e^{u(d-2)} \ du \right) ^{2} \ ds, \end{aligned}$$
(34)

where we set

$$\begin{aligned} \beta \mathrel {\mathop :}=t^{2d-1-i-j} \, \frac{c(d,1,i,j) \, \omega _1 \, \omega _{d-1}^{2}}{4^{d-2}}. \end{aligned}$$

For the second summand, observe that

$$\begin{aligned}&\left| \frac{1}{4^{d-2} \, \cosh ^{2(d-2)}(r)}- \frac{1}{e^{2r(d-2)}} \right| \le e^{-2r(d-2)} \left( 1-(1+e^{-2r})^{-2(d-2)} \right) \\&\quad \le c \, e^{-2r(d-1)}. \end{aligned}$$

Since by (33) the integral in the second summand of (34) is of the order \(e^{2r(d-2)}\), the second summand is at most of the order \(\beta \, e^{-2r}\).

It remains to show the decay of the first summand in (34). This is done by using the same steps as in (33) and by splitting up the limit covariance \(\sigma ^{i,j}_d\). Lemma 14 and basic calculus show that the asserted entries of the asymptotic covariance matrix can be written in the form

$$\begin{aligned} \sigma ^{i,j}_d = \frac{\beta }{(d-2)^{2}} \int _{0}^{\infty } \cosh ^{-(d-3)}(s) \ ds. \end{aligned}$$

Then we get

$$\begin{aligned}&\left| \sigma ^{i,j}_d- \frac{\beta }{4^{d-2}\cosh ^{2(d-2)}(r)} \int _{0}^{r} \cosh ^{d-1}(s) \, \left( \int _{0}^{{{\,\mathrm{\mathrm{arcosh}}\,}}\left( \frac{\cosh (r)}{\cosh (s)} \right) } e^{u(d-2)} \ du \right) ^{2} \ ds\right| \\&\quad \le I_1+I_2, \end{aligned}$$

where

$$\begin{aligned} I_1:= \frac{\beta }{(d-2)^{2}} \int _{r}^{\infty } \cosh ^{-(d-3)}(s) \ ds \le c\, \beta \, e^{-(d-3)r} \end{aligned}$$

and

$$\begin{aligned} I_2:&= \frac{\beta }{(d-2)^{2} \, 4^{d-2}} \int _{0}^{r} \cosh ^{-(d-3)}(s) \\&\quad \times \,\left| 2^{2(d-2)}- \left( \left( 1+\sqrt{1-\frac{\cosh ^{2}(s)}{\cosh ^{2}(r)}} \right) ^{d-2}-\left( \frac{\cosh (s)}{\cosh (r)} \right) ^{d-2} \right) ^{2}\right| \ ds. \end{aligned}$$

It remains to provide an upper bound for \(I_2\). For this we expand the square and use the triangle inequality to get \(I_2\le I_3+I_4\), where

$$\begin{aligned} I_3&\le \beta \int _{0}^{r} \cosh ^{-(d-3)}(s) \,\\&\quad \times \left( 2 \, \left( 1+\sqrt{1-\frac{\cosh ^{2}(s)}{\cosh ^{2}(r)}} \right) ^{d-2} \left( \frac{\cosh (s)}{\cosh (r)} \right) ^{d-2}+ \left( \frac{\cosh (s)}{\cosh (r)} \right) ^{2(d-2)}\right) \ ds \\&\le c \,\beta \, \int _{0}^{r} e^{-s(d-3)}\,\left( e^{(s-r)(d-2)}+ e^{(s-r)(2d-4)}\right) \ ds \\&\le c \,\beta \, e^{-r(d-2)} \int _{0}^{r} e^{s} \ ds +c \,\beta \, e^{-r(2d-4)} \int _{0}^{r} e^{s(d-1)} \ ds \le c \,\beta \, h(d,r), \end{aligned}$$

with some constant c. Here we also used that

$$\begin{aligned} \frac{\cosh (s)}{\cosh (r)}= \frac{e^{s}+e^{-s}}{e^{r}+e^{-r}} \le \frac{2 \,e^{s}}{e^{r}}=2 \, e^{s-r},\qquad 0 \le s \le r. \end{aligned}$$
(35)
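As an aside, the elementary bound (35) can be confirmed numerically in one line; the value \(r=12\) and the grid below are arbitrary choices.

```python
import math

# Check of (35): cosh(s)/cosh(r) <= 2 e^{s-r} for 0 <= s <= r
r = 12.0
for k in range(121):
    s = r * k / 120
    assert math.cosh(s) / math.cosh(r) <= 2 * math.exp(s - r)
```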

In order to provide an upper bound for \(I_4\), we use the mean value theorem and the inequality \(1-\sqrt{1-x} \le x\), for \( x \in [0,1]\), to get

$$\begin{aligned} \left| 2^{2(d-2)}-\left( 1+\sqrt{1-x}\right) ^{2(d-2)}\right| \le 2(d-2)2^{2d-5}x. \end{aligned}$$

Hence we obtain

$$\begin{aligned} I_4&\le \beta \int _{0}^{r}\cosh ^{-(d-3)}(s)\,\left| 2^{2(d-2)}- \left( 1+\sqrt{1-\frac{\cosh ^{2}(s)}{\cosh ^{2}(r)}} \right) ^{2(d-2)}\right| \, ds\\&\le c\, \beta \, \int _0^r \cosh ^{-(d-3)}(s)\frac{\cosh ^{2}(s)}{\cosh ^{2}(r)}\, ds \le c \,\beta e^{-2r} \, \int _{0}^{r} e^{s(-d+5)} \ ds \le c \,\beta \, h(d,r), \end{aligned}$$

where also (35) was used. This concludes the proof. \(\square \)
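The mean value bound \(|2^{2(d-2)}-(1+\sqrt{1-x})^{2(d-2)}|\le 2(d-2)2^{2d-5}x\) used for \(I_4\) can likewise be checked numerically on a grid; the sample dimensions below are arbitrary.

```python
import math

# Check of |2^{2(d-2)} - (1 + sqrt(1-x))^{2(d-2)}| <= 2(d-2) 2^{2d-5} x
# for x in [0,1] and a few dimensions d >= 4.
for d in (4, 5, 6, 8):
    m = 2 * (d - 2)
    for k in range(101):
        x = k / 100
        lhs = abs(2 ** m - (1 + math.sqrt(1 - x)) ** m)
        assert lhs <= 2 * (d - 2) * 2 ** (2 * d - 5) * x + 1e-12
```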

5 Proofs II: Mixed K-function and mixed pair-correlation function

Let \(r>0\), \( i, j \in \{0,\ldots ,d-1\}\) and let \(B\subset {\mathbb {H}}^{d}\) be measurable with \({\mathcal {H}}^d(B)=1\). Then

$$\begin{aligned} K_{ij}(r)&= {1\over \lambda _i \lambda _j}\,{\mathbb {E}}\int _{\mathrm{skel}_i}\int _{\mathrm{skel}_j\cap B}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^j(dy)\,{\mathcal {H}}^i(dx). \end{aligned}$$

Already at this point we see that the condition \(0<d_h(x,y)\) can be omitted if \(i \ge 1\) or \(j\ge 1\). Requiring that \(x\in \mathrm{skel}_i\) and \(y\in \mathrm{skel}_j\) means that there exist

$$\begin{aligned} (H_1,\ldots ,H_{d-i})\in \eta _{t,\ne }^{d-i}\qquad \text {and}\qquad (G_1,\ldots ,G_{d-j})\in \eta _{t,\ne }^{d-j} \end{aligned}$$

such that \(x\in H_1\cap \ldots \cap H_{d-i}\) and \(y\in G_1\cap \ldots \cap G_{d-j}\). However, some of the hyperplanes of the first \((d-i)\)-tuple may coincide with some of the hyperplanes of the second \((d-j)\)-tuple. Let \(n\in \{0,1,\ldots ,\min \{d-i,d-j\}\}\) denote the number of common hyperplanes. Then we obtain the representation

$$\begin{aligned} K_{ij}(r)&= {1\over \lambda _i \lambda _j}\,\sum _{n=0}^{\min \{d-i,d-j\} }\alpha (d,i,j,n)\,{\mathbb {E}}\sum _{(H_1,\ldots ,H_{d-i},G_1,\ldots ,G_{d-j-n})\in \eta _{t,\ne }^{2d-i-j-n}}\int _{H_1\cap \ldots \cap H_{d-i}}\\&\quad \times \int _{H_1\cap \ldots \cap H_n\cap G_1\cap \ldots \cap G_{d-j-n}\cap B}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^j(dy)\,{\mathcal {H}}^i(dx) \end{aligned}$$

with the combinatorial coefficient given by

$$\begin{aligned} \alpha (d,i,j,n) ={1\over n!(d-i-n)!(d-j-n)!} . \end{aligned}$$

Note that if \(n=0\) we interpret the second integral as an integral over the set \(G_1\cap \ldots \cap G_{d-j}\cap B\) and if \(n=d-j\) we understand that the integral ranges over \(H_1\cap \ldots \cap H_{d-j}\cap B\). Moreover, if \(i=j=0\), then the summand obtained for \(n=d\) is zero, since almost surely \(x,y\in H_1\cap \ldots \cap H_d\) and \(d_h(x,y)>0\) cannot be satisfied simultaneously. Hence the summation can be restricted to \(n\le m(d,i,j)\) in the following.

An application of (10) leads to

$$\begin{aligned} K_{ij}(r)&= {1\over \lambda _i \lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)t^{2d-i-j-n}\\&\quad \times \int _{A_h(d,d-1)^{2d-i-j-n}_*}\int _{H_1\cap \ldots \cap H_{d-i}}\int _{H_1\cap \ldots \cap H_n\cap G_1\cap \ldots \cap G_{d-j-n}\cap B}\\&\quad \times \mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^j(dy)\,{\mathcal {H}}^i(dx)\,\mu _{d-1}^{2d-i-j-n}(d(H_1,\ldots ,H_{d-i},G_1,\ldots ,G_{d-j-n}))\\&={1\over \lambda _i \lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)t^{2d-i-j-n}\int _{A_h(d,d-1)^{n}_*}\int _{A_h(d,d-1)^{d-i-n}_*}\int _{A_h(d,d-1)^{d-j-n}_*}\\&\quad \times \int _{H_1\cap \ldots \cap H_{d-i}}\int _{H_1\cap \ldots \cap H_n\cap G_{1}\cap \ldots \cap G_{d-j-n}\cap B}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^j(dy)\,{\mathcal {H}}^i(dx)\\&\quad \times \mu _{d-1}^{d-j-n}(d(G_{1},\ldots ,G_{d-j-n}))\,\mu _{d-1}^{d-i-n}(d(H_{n+1},\ldots ,H_{d-i}))\,\mu _{d-1}^{n}(d(H_1,\ldots ,H_n)), \end{aligned}$$

where we have used Fubini’s theorem to split the integration over \(A_h(d,d-1)^{2d-i-j-n}_*\) into three groups of the form \(A_h(d,d-1)^{n}_*\times A_h(d,d-1)^{d-i-n}_*\times A_h(d,d-1)^{d-j-n}_*\). The first group of hyperplanes comprises the n common hyperplanes \(H_1,\ldots ,H_n\), while the second and the third group are associated with the \((d-i-n)\)-tuple \(H_{n+1},\ldots ,H_{d-i}\) and the \((d-j-n)\)-tuple \(G_{1},\ldots ,G_{d-j-n}\), respectively. We now apply Lemma 4 successively to each of the three outer integrals. Together with Fubini’s theorem this gives

$$\begin{aligned} K_{ij}(r)&= {1\over \lambda _i \lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)\beta (d,i,j,n) t^{2d-i-j-n}\int _{A_h(d,d-n)}\int _{A_h(d,i+n)}\int _{A_h(d,j+n)}\\&\quad \times \int _{E\cap F}\int _{B\cap E\cap G}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^j(dy)\,{\mathcal {H}}^i(dx)\,\mu _{j+n}(dG)\,\mu _{i+n}(dF)\,\mu _{d-n}(dE)\\&= {1\over \lambda _i \lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)\beta (d,i,j,n) t^{2d-i-j-n}\int _{A_h(d,d-n)}\int _{A_h(d,j+n)}\int _{B\cap E\cap G}\\&\quad \times \int _{A_h(d,i+n)}\int _{E\cap F}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^i(dx)\,\mu _{i+n}(dF)\,{\mathcal {H}}^j(dy)\,\mu _{j+n}(dG)\,\mu _{d-n}(dE), \end{aligned}$$

where \(\beta (d,i,j,n) \mathrel {\mathop :}=c(d,d-n)c(d,i+n)c(d,j+n)\).

For the two innermost integrals we get

$$\begin{aligned}&\int _{A_h(d,i+n)}\int _{E\cap F}\mathbb {1}\{0<d_h(x,y)\le r\}\,{\mathcal {H}}^i(dx)\,\mu _{i+n}(dF)\\&\quad =\int _{A_h(d,i+n)}{\mathcal {H}}^i(\{x\in E\cap F:0<d_h(x,y)\le r\})\,\mu _{i+n}(dF)\\&\quad =\int _{A_h(d,i+n)}{\mathcal {H}}^i(E\cap (B(y,r){\setminus }\{y\})\cap F)\,\mu _{i+n}(dF). \end{aligned}$$

Since \(y\in E\), the intersection \(E\cap (B(y,r){\setminus }\{y\})\) has dimension \(d-n\) and we can apply Crofton’s formula to conclude that

$$\begin{aligned} \int _{A_h(d,i+n)}\int _{E\cap F}\mathbb {1}\{0<d_h(x,y)&\le r\}\,{\mathcal {H}}^i(dx)\,\mu _{i+n}(dF)\\&={\omega _{d+1}\omega _{i+1}\over \omega _{i+n+1}\omega _{d-n+1}}{\mathcal {H}}^{d-n}(E\cap B(y,r)). \end{aligned}$$

Here we also used that \({\mathcal {H}}^{d-n}(E\cap (B(y,r){\setminus }\{y\}))={\mathcal {H}}^{d-n}(E\cap B(y,r))\), since \(d-n\ge 1\). Moreover, since \(y\in E\) the value of \({\mathcal {H}}^{d-n}(E\cap B(y,r))\) is independent of the choice of E and y, and is given by the \((d-n)\)-dimensional Hausdorff measure

$$\begin{aligned} {\mathcal {H}}^{d-n}(B_r^{d-n}) = \omega _{d-n}\int _0^r\sinh ^{d-n-1}(s)\,ds \end{aligned}$$

of a \((d-n)\)-dimensional geodesic ball \(B_r^{d-n}\) of radius r. We thus arrive at

$$\begin{aligned} K_{ij}(r)&= {1\over \lambda _i \lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)\beta (d,i,j,n){\omega _{d+1}\omega _{i+1}\over \omega _{i+n+1}\omega _{d-n+1}}{\mathcal {H}}^{d-n}(B_r^{d-n})t^{2d-i-j-n}\\&\quad \times \int _{A_h(d,d-n)}\int _{A_h(d,j+n)}\int _{B\cap E\cap G}1 \ {\mathcal {H}}^j(dy)\,\mu _{j+n}(dG)\,\mu _{d-n}(dE)\\&= {1\over \lambda _i \lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)\beta (d,i,j,n){\omega _{d+1}\omega _{i+1}\over \omega _{i+n+1}\omega _{d-n+1}}{\mathcal {H}}^{d-n}(B_r^{d-n})t^{2d-i-j-n}\\&\quad \times \int _{A_h(d,d-n)}\int _{A_h(d,j+n)}{\mathcal {H}}^j(B\cap E\cap G)\,\mu _{j+n}(dG)\,\mu _{d-n}(dE). \end{aligned}$$

The two remaining integrals can be evaluated by applying the Crofton formula twice. Indeed, noting that for \(\mu _{d-n}\)-almost all \(E\in A_h(d,d-n)\) the set \(B\cap E\) is either empty or has dimension \(d-n\), we find that

$$\begin{aligned}&\int _{A_h(d,d-n)}\int _{A_h(d,j+n)}{\mathcal {H}}^j(B\cap E\cap G)\,\mu _{j+n}(dG)\,\mu _{d-n}(dE)\\&\quad ={\omega _{d+1}\omega _{j+1}\over \omega _{j+n+1}\omega _{d-n+1}}\int _{A_h(d,d-n)}{\mathcal {H}}^{d-n}(B\cap E)\,\mu _{d-n}(dE)\\&\quad ={\omega _{d+1}\omega _{j+1}\over \omega _{j+n+1}\omega _{d-n+1}}{\mathcal {H}}^d(B). \end{aligned}$$

Since \({\mathcal {H}}^d(B)=1\) we finally conclude that

$$\begin{aligned} K_{ij}(r)&={1\over \lambda _i \lambda _j}\, \sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)\beta (d,i,j,n)\\&\quad \times {\omega _{d+1}^{2}\omega _{i+1}\omega _{j+1}\over \omega _{d-n+1}^{2}\omega _{i+n+1}\omega _{j+n+1}} t^{2d-i-j-n}{\mathcal {H}}^{d-n}(B_r^{d-n})\\ \quad&={1\over \lambda _i\lambda _j}\,\sum _{n=0}^{m(d,i,j)}\alpha (d,i,j,n)\beta (d,i,j,n) \\&\quad \times {\omega _{d+1}^{2}\omega _{i+1}\omega _{j+1}\over \omega _{d-n+1}^{2}\omega _{i+n+1}\omega _{j+n+1}} \omega _{d-n}t^{2d-i-j-n} \int _0^r\sinh ^{d-n-1}(s)\,ds. \end{aligned}$$

Simplifying the constant by means of the constants given in (2) and Lemma 4 completes the proof for the mixed K-function \(K_{ij}\). The formula for the mixed pair-correlation function follows by differentiation. This completes the proof of Theorem 3. \(\square \)
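As a consistency check, note that for small r the geodesic ball volume entering the formula for \(K_{ij}\) reduces to its Euclidean counterpart. Writing \(\kappa _{d-n}=\omega _{d-n}/(d-n)\) for the volume of the \((d-n)\)-dimensional Euclidean unit ball (a notation used only in this remark), the expansion \(\sinh (s)=s+O(s^3)\) gives

$$\begin{aligned} {\mathcal {H}}^{d-n}(B_r^{d-n})=\omega _{d-n}\int _0^r\sinh ^{d-n-1}(s)\,ds=\kappa _{d-n}\,r^{d-n}+O(r^{d-n+2}),\qquad r\rightarrow 0, \end{aligned}$$

so that, to leading order as \(r\rightarrow 0\), each summand of \(K_{ij}(r)\) exhibits the same power-law behaviour in r as in the Euclidean case.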

6 Proofs III: Univariate limit theorems

6.1 The case of growing intensity: Proof of Theorem 4

The central limit theorem is in this case a direct consequence of the central limit theorem for general Poisson U-statistics stated as Corollary 4.3 in [67] (see also [15]). \(\square \)

6.2 The case of growing windows: Proof of Theorem 5

Our strategy in the proof of Theorem 5 (a) and (b) can be summarized as follows. The normal approximation bound (14) for general U-statistics of Poisson processes is given by a sum involving terms of the type \(M_{u,v}\), which are defined in (12) and (13) and which in turn are given as sums of integrals over partitions \(\sigma \in \varPi _{\ge 2}^{\mathrm{con}}(u,u,v,v)\). In applying these normal approximation bounds to the Euclidean counterparts of the functionals \(F_{r,t}^{(i)}\) it was possible to extract a common scaling factor from each of the integrals in \(M_{u,v}\) and to treat the number of terms, that is, the number of elements of \(\varPi _{\ge 2}^{\mathrm{con}}(u,u,v,v)\) as a constant, see [36, 58]. In the hyperbolic set-up this is no longer possible and each integral in the definition of \(M_{u,v}\) needs a separate treatment. In fact, it will turn out that these integrals exhibit different asymptotic behaviours as functions of r, as \(r\rightarrow \infty \). For the analysis, we have to determine explicitly the partitions in \(\varPi _{\ge 2}^{\mathrm{con}}(u,u,v,v)\) and for each such partition we have to provide a bound for the resulting integral. Since \(\mu =t\mu _{d-1}\), we can bound the dependence with respect to the intensity \(t\ge 1\) by \(t^a\) with \(a\le 4(d-i)-2(u+v)+|\sigma |\) for each \(\sigma \in \varPi _{\ge 2}^\mathrm{con}(u,u,v,v)\).
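To illustrate this bookkeeping, the exponent bound can be written out for the planar vertex count (\(d=2\), \(i=0\)), with the values of \(|\sigma |\) read off from the partitions in Fig. 5:

$$\begin{aligned} (u,v)=(1,1),\ |\sigma |=1:&\quad a\le 4\cdot 2-2\cdot 2+1=5,\\ (u,v)=(1,2),\ |\sigma |\le 3:&\quad a\le 4\cdot 2-2\cdot 3+3=5,\\ (u,v)=(2,2),\ |\sigma |\le 4:&\quad a\le 4\cdot 2-2\cdot 4+4=4, \end{aligned}$$

which are exactly the powers of t appearing in the bounds for \(M_{1,1}(f^{(0)})\), \(M_{1,2}(f^{(0)})\) and \(M_{2,2}(f^{(0)})\) in Sect. 6.2.1.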

Showing that the central limit theorem fails in higher space dimensions \(d\ge 4\) is the most technical part of the proof of Theorem 5. We do this by proving that the fourth cumulant of the centred and normalized total volume \(F_{r,t}^{(i)}\) is bounded away from 0 by an absolute and strictly positive constant and hence does not converge to 0, the fourth cumulant of a standard Gaussian random variable. However, in view of the well-known expression of the fourth cumulant in terms of the first four centred moments, this approach can only work if we can ensure that the sequence of random variables

$$\begin{aligned} \Bigg ({F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})}}\Bigg )^4 \end{aligned}$$

is uniformly integrable. We will prove that this is indeed the case by showing that their fifth moments are uniformly bounded. This requires a very careful analysis of the combinatorial formula (11) for the centred moments of U-statistics of Poisson processes.
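The reduction from bounded fifth moments to uniform integrability is elementary: writing \(X_r\) for the centred and normalized random variable above (a shorthand used only here), on the event \(\{X_r^4>K\}\) one has \(|X_r|>K^{1/4}\), and therefore

$$\begin{aligned} {\mathbb {E}}\big [X_r^4\,\mathbb {1}\{X_r^4>K\}\big ]\le K^{-1/4}\,{\mathbb {E}}|X_r|^5\le K^{-1/4}\,\sup _{r\ge 1}{\mathbb {E}}|X_r|^5\longrightarrow 0 \end{aligned}$$

as \(K\rightarrow \infty \), uniformly in r.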

The representation of a U-statistic will be as in Sect. 4.1. In the following computations, c denotes a positive constant that depends only on the dimension and whose value may change from occasion to occasion.

6.2.1 The planar case \(d=2\): Proof of Theorem 5 (a)

As indicated above, we will use the bound (14) in combination with (12) and (13). We distinguish the cases \(i=0\) and \(i=1\). In the following, we can assume that \(r,t\ge 1\).

For \(i=1\), which corresponds to the total edge length in \(B_r\), it is enough to bound \(M_{1,1}(f^{(1)})\). For this we note that \(\varPi _{\ge 2}^{\mathrm{con}}(1,1,1,1)\) only consists of the trivial partition \(\sigma _1=\{1,2,3,4\}\), see Fig. 5 (left panel). Thus, using Lemma 8, we have that

$$\begin{aligned} M_{1,1}(f^{(1)}) =c\, t\int _{A_h(2,1)} {\mathcal {H}}^1(H\cap B_r)^4\,\mu _1(d H) \le c\,t \, e^{r}. \end{aligned}$$

Together with the lower variance bound from Lemma 9 this yields

$$\begin{aligned} d\left( \frac{F_{r,t}^{(1)}-{\mathbb {E}}F_{r,t}^{(1)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(1)})}},N \right) \le c\,\frac{\sqrt{te^{r}}}{tc^{(1)}(2,1)\,e^r} \le c\,t^{-1/2}\, e^{-r/2}. \end{aligned}$$
(36)

Here we used that the exponent of t is given by \(4(2-1)-2(1+1)+1=1\).

Next, we deal with the case \(i=0\), which corresponds to the total vertex count in \(B_r\). In this situation, we need to bound the terms \(M_{1,1}(f^{(0)}), M_{1,2}(f^{(0)}), M_{2,2}(f^{(0)})\). For \(M_{1,1}(f^{(0)})\) we can argue as in the case \(i=1\), since \(\varPi _{\ge 2}^{\mathrm{con}}(1,1,1,1)\) only consists of the single partition \(\sigma _1\), see Fig. 5 (left panel). This allows us to conclude that

$$\begin{aligned} M_{1,1}(f^{(0)})&=c \,t^5\int _{A_h(2,1)} {\mathcal {H}}^1(H \cap B_r)^4 \ \mu _1(dH) \le c\,t^5\, e^{r}, \end{aligned}$$

where we used that the exponent of t is given by \(4(2-0)-2(1+1)+1=5\).

Fig. 5
figure 5

Left panel: Illustration of the partition in \(\varPi _{\ge 2}^{\mathrm{con}}(1,1,1,1)\). Middle panel: Illustration of the partitions in \(\varPi _{\ge 2}^{\mathrm{con}}(1,1,2,2)\). Right panel: Illustration of the partitions in \(\varPi _{\ge 2}^{\mathrm{con}}(2,2,2,2)\)

To deal with \(M_{1,2}(f^{(0)})\) we observe that, up to renumbering of the elements, \(\varPi _{\ge 2}^{\mathrm{con}}(1,1,2,2)\) consists of precisely three partitions \(\sigma _1, \ \sigma _2\) and \(\sigma _3\), which are illustrated in Fig. 5 (middle panel). For \(\sigma _1\) we obtain, using Crofton’s formula and Lemma 8,

$$\begin{aligned}&\int _{A_h(2,1)^2}{\mathcal {H}}^1(H_1\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap B_r)^2\,\mu _1^2(d(H_1,H_2))\nonumber \\&\quad =\int _{A_h(2,1)^2}{\mathcal {H}}^1(H_1\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap B_r)\,\mu _1^2(d(H_1,H_2))\nonumber \\&\quad = c\int _{A_h(2,1)}{\mathcal {H}}^1(H_1\cap B_r)^3\,\mu _1(dH_1) \le c\,e^r. \end{aligned}$$
(37)

Moreover, for the partition \(\sigma _2\) we compute, using twice that \({\mathcal {H}}^1(H\cap B_r)\le 2r\) for each \(H\in A_h(2,1)\) and again Crofton’s formula,

$$\begin{aligned}&\int _{A_h(2,1)^2}{\mathcal {H}}^1(H_1\cap B_r)\, {\mathcal {H}}^1(H_2\cap B_r)\, {\mathcal {H}}^0(H_1\cap H_2\cap B_r)^2\,\mu _1^2(d(H_1,H_2))\nonumber \\&\quad \le 4r^{2} \int _{A_h(2,1)^2}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)\,\mu _1^2(d(H_1,H_2))\le c\, r^{2} \, e^r, \end{aligned}$$
(38)

and for partition \(\sigma _3\) we get

$$\begin{aligned}&\int _{A_h(2,1)^3} {\mathcal {H}}^1(H_1\cap B_r)\,{\mathcal {H}}^1(H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_3\cap B_r)\,\nonumber \\&\qquad \qquad \quad \times {\mathcal {H}}^0(H_2\cap H_3\cap B_r)\,\mu _1^3(d(H_1,H_2,H_3))\nonumber \\&\quad \le 2r\int _{A_h(2,1)^2}{\mathcal {H}}^1(H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap B_r)\,{\mathcal {H}}^0(H_2\cap H_3\cap B_r)\,\mu _1^2(d(H_2,H_3))\nonumber \\&\quad \le 4r^2\int _{A_h(2,1)} {\mathcal {H}}^1(H_3\cap B_r)^2\,\mu _1(dH_3) \le c\,r^2\,e^r. \end{aligned}$$
(39)

This yields that \(M_{1,2}(f^{(0)})\le c\,t^5\,(e^r+2r^2e^r)\le c\,t^5\,r^2e^r\) (recall that \(r,t\ge 1\)). Here we used that the exponent of t is given by \(4(2-0)-2 (2+1)+\max \{2,3\}=5\).

Now we deal with the term \(M_{2,2}(f^{(0)})\), which involves a summation over partitions in \(\varPi _{\ge 2}^{\mathrm{con}}(2,2,2,2)\). Up to renumbering of the elements, there are precisely four such partitions \(\sigma _1\), \(\sigma _2\), \(\sigma _3\) and \(\sigma _4\), which are illustrated in Fig. 5 (right panel). For \(\sigma _1\) we compute

$$\begin{aligned}&\int _{A_h(2,1)^2}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)^4\,\mu _1^2(d(H_1,H_2))\\&\quad = \int _{A_h(2,1)^2}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)\,\mu _1^2(d(H_1,H_2))\\&\quad = c\int _{A_h(2,1)}{\mathcal {H}}^1(H_1\cap B_r)\,\mu _1(dH_1) \le c\,e^r, \end{aligned}$$

where we used Crofton’s formula and Lemma 8. Similarly, for \(\sigma _2\) and \(\sigma _3\) we get

$$\begin{aligned}&\int _{A_h(2,1)^3}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_3\cap B_r)^2\,\mu _1^3(d(H_1,H_2,H_3))\\&\quad =\int _{A_h(2,1)^3}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_3\cap B_r)\,\mu _1^3(d(H_1,H_2,H_3))\\&\quad =c\int _{A_h(2,1)} {\mathcal {H}}^1(H_1\cap B_r)^2\,\mu _1(dH_1) \le c\,e^r, \end{aligned}$$

and, additionally using that \({\mathcal {H}}^0(H_1\cap H_2\cap B_r)\le 1\) for \(\mu _1^2\)-almost all \((H_1,H_2)\in A_h(2,1)^2\),

$$\begin{aligned}&\int _{A_h(2,1)^3}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_2\cap H_3\cap B_r)\,\\&\quad \times \mu _1^3(d(H_1,H_2,H_3))\\&\quad \le \int _{A_h(2,1)^3}{\mathcal {H}}^0(H_1\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_2\cap H_3\cap B_r)\,\mu _1^3(d(H_1,H_2,H_3))\\&\quad =c\int _{A_h(2,1)}{\mathcal {H}}^1(H_3\cap B_r)^2\,\mu _1(dH_3) \le c\,e^r. \end{aligned}$$

Finally, we deal with \(\sigma _4\). Using once more that \({\mathcal {H}}^0(H_1\cap H_2\cap B_r)\le 1\) for \(\mu _1^2\)-almost all \((H_1,H_2)\in A_h(2,1)^2\) and also that \({\mathcal {H}}^1(H\cap B_r)\le 2r\) for each \(H\in A_h(2,1)\), and again Crofton’s formula together with Lemma 8, we obtain

$$\begin{aligned}&\int _{A_h(2,1)^4}{\mathcal {H}}^0(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_3\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_2\cap H_4\cap B_r)\,\mu _1^4(d(H_1,H_2,H_3,H_4))\\&\quad \le c\int _{A_h(2,1)^2}{\mathcal {H}}^1(H_3\cap B_r)\,{\mathcal {H}}^0(H_3\cap H_4\cap B_r)\,{\mathcal {H}}^1(H_4\cap B_r)\,\mu _1^2(d(H_3,H_4))\\&\quad \le c\,r\int _{A_h(2,1)}{\mathcal {H}}^1(H_4\cap B_r)^2\,\mu _1(dH_4)\le c\,r\,e^r. \end{aligned}$$

Altogether, this yields that \(M_{2,2}(f^{(0)})\le c\,t^4\,(e^r+e^r+e^r+re^r)\le c\,t^4\,re^r\), where the exponent of t follows from \(4\cdot 2-2\cdot 4+\max \{2,3,4\}=4\).

Combining the bounds for \(M_{1,1}(f^{(0)})\), \(M_{1,2}(f^{(0)})\) and \(M_{2,2}(f^{(0)})\) with the lower variance bound provided by Lemma 9 we deduce from (14) that

$$\begin{aligned} d\left( \frac{F_{r,t}^{(0)}-{\mathbb {E}}F_{r,t}^{(0)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(0)})}},N \right) \le c\,\frac{\sqrt{t^5e^{r}}+\sqrt{t^5r^2 e^{r}}+\sqrt{t^4re^{r}}}{t^3c^{(0)}(2,1)e^r} \le c\,t^{-1/2}\, r e^{-r/2}. \end{aligned}$$
(40)

This completes the proof of Theorem 5(a). \(\square \)

6.2.2 The spatial case \(d=3\): Proof of Theorem 5(b)

The following lemma will be used repeatedly in deriving upper bounds for integrals. For \(H\in A_h(3,2)\) we write \(L_1(H)\) for an arbitrary 1-dimensional subspace in H which satisfies \(d_h(H,p)=d_h(L_1(H),p)\).

Lemma 16

Let \(d=3\) and \(a,b\ge 0\). If \(r\ge 1\), then

$$\begin{aligned} I(a,b):= & {} \int _{A_h(3,2)}{\mathcal {H}}^2(H\cap B_r)^a\,{\mathcal {H}}^1(L_1(H)\cap B_r)^b\, \mu _2(dH)\\\le & {} c\,{\left\{ \begin{array}{ll} \exp (2r)&{}: 0\le a<2,\\ r^{b+1}\exp (2r)&{}: a=2,\\ r^{b}\exp (ar)&{}: a>2,\\ \end{array}\right. } \end{aligned}$$

where \(c=c(a,b)\) is a constant depending only on a and b.

Proof

We use the definition (6) of the measure \(\mu _2\), Lemma 7 and the argument in the proof of Lemma 8 to get

$$\begin{aligned} I(a,b)\le c\int _0^re^{2s}e^{(r-s)a}(r-s+\log 2)^b\, ds. \end{aligned}$$

If \(0\le a<2\), then

$$\begin{aligned} I(a,b)\le c\, e^{2r}\int _0^r e^{s(a-2)}(s+\log 2)^b\, ds\le c\, e^{2r}. \end{aligned}$$

For \(a=2\) the exponential factor in the last integral disappears, and the same computation yields \(I(2,b)\le c\, e^{2r}\int _0^r(s+\log 2)^b\, ds\le c\,e^{2r}r^{b+1}\). For \(a>2\), we get

$$\begin{aligned} I(a,b)\le c\, e^{ar}\int _0^re^{s(2-a)}(r-s+\log 2)^b\, ds\le c\, r^be^{ar}, \end{aligned}$$

which completes the argument. \(\square \)
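The three regimes in Lemma 16 can also be read off at a glance by substituting \(u=r-s\) in the bound from the proof:

$$\begin{aligned} I(a,b)\le c\,e^{2r}\int _0^r e^{(a-2)u}(u+\log 2)^b\,du, \end{aligned}$$

where the remaining integral is bounded by a constant for \(0\le a<2\), grows like \(r^{b+1}\) for \(a=2\), and grows like \(r^be^{(a-2)r}\) for \(a>2\).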

For \(d=3\) we need to distinguish the cases \(i=2\), \(i=1\) and \(i=0\). If \(i=2\) there is only one partition \(\sigma _1\) (compare with the left panel of Fig. 5) and we obtain

$$\begin{aligned} \int _{A_h(3,2)}{\mathcal {H}}^2(H\cap B_r)^4\,\mu _2(dH) \le c\, g(2,4,3,r)\le c\,e^{4r}. \end{aligned}$$
(41)

This proves that \(M_{1,1}(f^{(2)})\le c\,t\,e^{4r}\) and together with the lower variance bound from Lemma 10 and (14) this yields

$$\begin{aligned} d\left( \frac{F_{r,t}^{(2)}-{\mathbb {E}}F_{r,t}^{(2)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(2)})}},N \right) \le c\,\frac{\sqrt{te^{4r}}}{tc^{(2)}(3,1)e^{2r}r} \le c\, t^{-1/2}\,r^{-1}. \end{aligned}$$
(42)

To deal with the case \(i=1\), we need to bound \(M_{1,1}(f^{(1)})\), \(M_{1,2}(f^{(1)})\) and \(M_{2,2}(f^{(1)})\). As in the case \(d=2\), to bound \(M_{1,1}(f^{(1)})\) we can argue as for \(i=2\) to obtain \(M_{1,1}(f^{(1)})\le c\,t^{5}\,e^{4r}\). Next, we consider \(M_{1,2}(f^{(1)})\), which requires an analysis of the integrals resulting from the three partitions \(\sigma _1, \ \sigma _2\) and \(\sigma _3\) shown in the middle panel of Fig. 5. For \(\sigma _1\) we compute

$$\begin{aligned}&\int _{A_h(3,2)^2}{\mathcal {H}}^2(H_1\cap B_r)^2\,{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,\mu _2^2(d(H_1,H_2))\nonumber \\&\quad \le \int _{A_h(3,2)^2}{\mathcal {H}}^2(H_1\cap B_r)^2\,{\mathcal {H}}^1(L_1(H_1)\cap B_r){\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,\mu _2^2(d(H_1,H_2))\nonumber \\&\quad \le c\, I(3,1)\le c \,re^{3r}, \end{aligned}$$
(43)

where we used the Crofton formula and Lemma 16. Arguing similarly for the partition \(\sigma _2\) from the middle panel of Fig. 5 we obtain

$$\begin{aligned}&\int _{A_h(3,2)^2}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^{2}\,\mu _2^2(d(H_1,H_2))\nonumber \\&\quad \le c \int _{A_h(3,2)^2}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)\,\nonumber \\&\qquad \times {\mathcal {H}}^1(L_1(H_2)\cap B_r)\,\mu _2^2(d(H_1,H_2))\nonumber \\&\quad \le c\,I(1,1)^2 \le c\,e^{4r}, \end{aligned}$$
(44)

and for \(\sigma _3\) we get

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)\,\nonumber \\&\qquad \times {\mathcal {H}}^1(H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3))\nonumber \\&\quad \le \int _{A_h(3,2)^3}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)\,\nonumber \\&\qquad \times {\mathcal {H}}^1(H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3))\nonumber \\&\quad \le c\int _{A_h(3,2)^2}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)^{2}\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)\,\mu _2^2(d(H_1,H_2))\nonumber \\&\quad \le c\, I(1,2) \, g(2,2,3,r) \le c\,r\,e^{4r}. \end{aligned}$$
(45)

We thus conclude that \(M_{1,2}(f^{(1)})\le c\,t^5\,(re^{3r}+e^{4r}+re^{4r})\le c\,t^5\,re^{4r}\).

Finally, we deal with \(M_{2,2}(f^{(1)})\) for which an analysis of the four partitions \(\sigma _1\), \(\sigma _2\), \(\sigma _3\) and \(\sigma _4\) shown in the right panel of Fig. 5 is necessary. For \(\sigma _1\) we have

$$\begin{aligned}&\int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^4\,\mu _2^2(d(H_1,H_2))\\&\quad \le \int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)^3\,\mu _2^2(d(H_1,H_2)) \le c\, I(1,3) \le c\,e^{2r}, \end{aligned}$$

where we also used Crofton’s formula. We continue with \(\sigma _2\) and get, by similar arguments,

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)^2\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad \le \int _{A_h(3,2)^3}{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\\&\qquad \times {\mathcal {H}}^1(H_1\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad = c\int _{A_h(3,2)}{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,{\mathcal {H}}^2(H_1\cap B_r)^2\,\mu _2(dH_1)\\&\quad \le c\, I(2,2)\le c\,r^3\,e^{2r}. \end{aligned}$$

Moreover, for \(\sigma _3\) and \(\sigma _4\) we have the bounds

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)\,{\mathcal {H}}^1(H_2\cap H_3\cap B_r)\\&\qquad \times \mu _2^3(d(H_1,H_2,H_3))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(L_1(H_1)\cap B_r)^3\,{\mathcal {H}}^1(H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad \le c\,{\mathcal {H}}^3(B_r)\, I(0,3) \le c\,e^{4r} \end{aligned}$$

and

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^1(H_2\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le \int _{A_h(3,2)^4}{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,{\mathcal {H}}^1(H_2\cap H_4\cap B_r)\,\\&\qquad \times {\mathcal {H}}^1(H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad = c \int _{A_h(3,2)^2}{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,{\mathcal {H}}^2(H_4\cap B_r)^2\,\mu _2^2(d(H_1,H_4))\\&\quad \le c\, I(0,2)\, g(2,2,3,r)\le c\,r\,e^{4r}. \end{aligned}$$

Altogether this gives \(M_{2,2}(f^{(1)})\le c\,t^4\,(e^{2r}+r^3e^{2r}+e^{4r}+re^{4r})\le c\, t^4\,re^{4r}\). The estimates for \(M_{1,1}(f^{(1)})\), \(M_{1,2}(f^{(1)})\) and \(M_{2,2}(f^{(1)})\) together with Lemma 10 and (14) show that

$$\begin{aligned} d\left( \frac{F_{r,t}^{(1)}-{\mathbb {E}}F_{r,t}^{(1)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(1)})}},N \right) \le c\, \frac{\sqrt{t^5e^{4r}}+\sqrt{t^5re^{4r}}+\sqrt{t^4re^{4r}}}{t^3c^{(1)}(3,1)e^{2r}r} \le c\,t^{-1/2}\,r^{-1/2}. \end{aligned}$$
(46)
Fig. 6
figure 6

Illustration of the partitions in \(\varPi _{\ge 2}^\mathrm{con}(1,1,3,3)\)

Finally, we need to treat the case of \(F_{r,t}^{(0)}\), which requires finding upper bounds for the terms \(M_{u,v}(f^{(0)})\) with \((u,v)\in \{(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)\}\). We have \(M_{1,1}(f^{(0)})\le c\,t^9\,e^{4r}\) from (41). To treat \(M_{1,2}(f^{(0)})\) we need to consider the partitions \(\sigma _1,\sigma _2\) and \(\sigma _3\) shown in the middle panel of Fig. 5 and to obtain upper bounds for the three integrals which were already treated in (43), (44) and (45). This implies that \(M_{1,2}(f^{(0)})\le c\,t^9\,re^{4r}\). Next, we deal with \(M_{1,3}(f^{(0)})\), which can be expressed as a sum over the three partitions \(\sigma _1\), \(\sigma _2\) and \(\sigma _3\) shown in Fig. 6. For \(\sigma _1\), using that \({\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\le 1\) for \(\mu _2^3\)-almost all \((H_1,H_2,H_3)\in A_h(3,2)^3\), we have that

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^2(H_1\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^2\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad = \int _{A_h(3,2)^3}{\mathcal {H}}^2(H_1\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad =c \int _{A_h(3,2)}{\mathcal {H}}^2(H_1\cap B_r)^3\,\mu _2(dH_1) \le c \,g(2,3,3,r) \le c\,e^{3r}, \end{aligned}$$

where we also used Crofton’s formula and Lemma 8. Similarly, for \(\sigma _2\) we obtain

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^2\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad = \int _{A_h(3,2)^3}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\\&\qquad \qquad \qquad \,\,\,\, \times \mu _2^3 (d(H_1,H_2,H_3))\\&\quad \le c\,{\mathcal {H}}^3(B_r) \, I(1,1)\le c\,e^{4r}, \end{aligned}$$

and for \(\sigma _3\) we have that

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_3\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_2\cap H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le \int _{A_h(3,2)^4}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^2(H_2\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_3\cap H_4\cap B_r)\\&\qquad \times \mu _2^4 (d(H_1,H_2,H_3,H_4))\\&\quad =c\,{\mathcal {H}}^3(B_r)\int _{A_h(3,2)}{\mathcal {H}}^2(H_1\cap B_r)^2\,\mu _2(dH_1)\le c\, e^{2r}\, g(2,2,3,r)\le c\,re^{4r}. \end{aligned}$$

This proves that \(M_{1,3}(f^{(0)})\le c\,t^8\,(e^{3r}+e^{4r}+re^{4r})\le c\,t^8\,re^{4r}\).

The next term is \(M_{2,2}(f^{(0)})\). However, up to a constant, this term is the same as \(M_{2,2}(f^{(1)})\), which was already bounded above. This yields that \(M_{2,2}(f^{(0)})\le c\,t^8\, re^{4r}\) and it remains to consider \(M_{2,3}(f^{(0)})\) and \(M_{3,3}(f^{(0)})\).

Fig. 7
figure 7

Illustration of the partitions in \(\varPi _{\ge 2}^\mathrm{con}(2,2,3,3)\)

In order to deal with \(M_{2,3}(f^{(0)})\), precisely the 12 partitions \(\sigma _1,\ldots ,\sigma _{12}\) in \(\varPi _{\ge 2}^{\mathrm{con}}(2,2,3,3)\) have to be considered, up to renumbering of the elements; see Fig. 7. Using that \({\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\le 1\) for \(\mu _2^3\)-almost all \((H_1,H_2,H_3)\in A_h(3,2)^3\) we find for \(\sigma _1\) that

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^2\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad = \int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3)). \end{aligned}$$

Applying now Crofton’s formula, we obtain the upper bound

$$\begin{aligned}&c\int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^3\,\mu _2^2(d(H_1,H_2))\le c\, I(1,2)\le c\, e^{2r}. \end{aligned}$$

The same arguments also lead to bounds for the remaining partitions \(\sigma _2,\ldots ,\sigma _{12}\). As for \(\sigma _1\), the first step is always to bound the 0-dimensional Hausdorff measure \({\mathcal {H}}^0(\,\cdot \,)\) of the intersection of the three planes corresponding to the last row of the partition by 1, which is a valid estimate for \(\mu _2^3\)-almost all triples of planes. For this reason we systematically skip this first step in our following computations and only show how to deal with the integral of the three remaining terms

$$\begin{aligned}&{\mathcal {H}}^1(\text {intersection of the 2 planes corresponding to the first row})\\&\quad \times {\mathcal {H}}^1(\text {intersection of the 2 planes corresponding to the second row})\\&\quad \times {\mathcal {H}}^0(\text {intersection of the 3 planes corresponding to the third row}). \end{aligned}$$

For \(\sigma _2\) we get

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r){\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\\&\qquad \times \mu _2^3(d(H_1,H_2,H_3))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3)) \\&\quad \le c\, I(1,2)\le c\,e^{2r}, \end{aligned}$$

for \(\sigma _3\) we get

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)\\&\qquad \times {\mathcal {H}}^0 (H_1\cap H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)\,{\mathcal {H}}^1(L_1(H_3)\cap B_r)\,\\&\qquad \times \mu _2^3(d(H_1,H_2,H_3))\\&\quad \le c\, I(1,1)\, I(0,1)\le c\,e^{4r}, \end{aligned}$$

for \(\sigma _4\) we get

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le c\int _{A_h(3,2)^2}{\mathcal {H}}^1(L_1(H_1)\cap B_r)\,{\mathcal {H}}^1(L_1(H_2)\cap B_r)\,{\mathcal {H}}^2(H_1\cap B_r)\,\mu _2^2(d(H_1,H_2))\\&\quad \le c\, I(1,1)\, I(0,1)\le c\,e^{4r}, \end{aligned}$$

for \(\sigma _5\) we get

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_2\cap H_3\cap H_4\cap B_r)\\&\qquad \quad \times \,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\qquad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_3)\cap B_r)^{2}\,\mu _2^3(d(H_1,H_2,H_3))\\&\qquad \le c\,{\mathcal {H}}^3(B_r)\, I(0,2)\le c\,e^{4r}, \end{aligned}$$

for \(\sigma _6\) we get

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_2\cap H_3\cap B_r) {\mathcal {H}}^0(H_2\cap H_3\cap H_4\cap B_r)\\&\qquad \times \mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_3)\cap B_r)^{2}\,\mu _2^3(d(H_1,H_2,H_3)) , \end{aligned}$$

which is the same as for \(\sigma _5\) and thus bounded by \(e^{4r}\). For \(\sigma _7\) we have

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le c\int _{A_h(3,2)^2}{\mathcal {H}}^1(L_1(H_1)\cap B_r)\,{\mathcal {H}}^1(L_1(H_2)\cap B_r)\,{\mathcal {H}}^2(H_1\cap B_r)\,\mu _2^2(d(H_1,H_2))\\&\quad \le c\, I(1,1)\, I(0,1) \le c\,e^{4r}, \end{aligned}$$

for \(\sigma _8\) we have

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\, {\mathcal {H}}^0(H_2\cap H_3\cap H_4\cap B_r)\\&\qquad \times \mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le \int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad =c\,{\mathcal {H}}^3(B_r)^2 \le c\,e^{4r}, \end{aligned}$$

for \(\sigma _9\) we have

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\\&\qquad \times \mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le \int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad =c\,{\mathcal {H}}^3(B_r)^2 \le c\,e^{4r}. \end{aligned}$$

Next, for \(\sigma _{10}\) we get

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r){\mathcal {H}}^0(H_3\cap H_4\cap H_5\cap B_r)\\&\qquad \times \mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_3)\cap B_r)\,{\mathcal {H}}^2(H_3\cap B_r)\\&\qquad \quad \times \mu _2^3(d(H_1,H_2,H_3))\\&\qquad \le c\,{\mathcal {H}}^3(B_r)\, I(1,1)\le c\,e^{4r}, \end{aligned}$$

for \(\sigma _{11}\) we get

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_3\cap H_4\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad =c\int _{A_h(3,2)^4}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)^2\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le c\,{\mathcal {H}}^3(B_r)\int _{A_h(3,2)^2}{\mathcal {H}}^1(L_1(H_3)\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\,\mu _2^2(d(H_3,H_4))\\&\quad = c\,{\mathcal {H}}^3(B_r)\int _{A_h(3,2)}{\mathcal {H}}^1(L_1(H_3)\cap B_r)\,{\mathcal {H}}^2(H_3\cap B_r)\,\mu _2(d H_3) \\&\quad \le c\,{\mathcal {H}}^3(B_r)\,I(1,1) \le c\,e^{4r} \end{aligned}$$

and for \(\sigma _{12}\) we get

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_3\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_2\cap H_3\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad \le c\,{\mathcal {H}}^3(B_r)\int _{A_h(3,2)}{\mathcal {H}}^2(H_3\cap B_r)\,{\mathcal {H}}^1(L_1(H_3)\cap B_r)\,\mu _2(d H_3)\\&\quad \le c\,e^{2r}\, I(1,1)\le c\, e^{4r}. \end{aligned}$$

Altogether this yields that \(M_{2,3}(f^{(0)})\le c\,t^7\,(2\, e^{2r}+10 \, e^{4r})\le c\,t^7\,e^{4r}\).

Fig. 8
figure 8

Illustration of the partitions in \(\varPi _{\ge 2}^\mathrm{con}(3,3,3,3)\)

Finally, we deal with the term \(M_{3,3}(f^{(0)})\). This requires considering the partitions in \(\varPi _{\ge 2}^{\mathrm{con}}(3,3,3,3)\). Up to renumbering of the elements there are precisely 11 partitions \(\sigma _1,\ldots ,\sigma _{11}\) of this type and they are shown in Fig. 8. The analysis of the resulting integrals works in the same way as above. In particular, we once again systematically use that \({\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\le 1\) for \(\mu _2^3\)-almost all \((H_1,H_2,H_3)\in A_h(3,2)^3\) and apply this to the term corresponding to the last row of each of the partitions. This leaves us with integrals over

$$\begin{aligned}&{\mathcal {H}}^0(\text {intersection of the 3 planes corresponding to the first row})\\&\quad \times {\mathcal {H}}^0(\text {intersection of the 3 planes corresponding to the second row})\\&\quad \times {\mathcal {H}}^0(\text {intersection of the 3 planes corresponding to the third row}), \end{aligned}$$

which in turn can be bounded using Crofton’s formula, Lemma 8 and Lemma 16. For \(\sigma _1\) this yields

$$\begin{aligned}&\int _{A_h(3,2)^3}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^3\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad = \int _{A_h(3,2)^3}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3)) = c\,{\mathcal {H}}^3(B_r) \le c\,e^{2r}, \end{aligned}$$

for \(\sigma _2\) and \(\sigma _3\) we obtain

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \times \mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad = \int _{A_h(3,2)^4}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \qquad \qquad \quad \times \mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad =c\int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^2\,\mu _2^2(d(H_1,H_2)) \le c\, I(1,1)\le c\, e^{2r}, \end{aligned}$$

for \(\sigma _4\) we obtain

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_4\cap H_5\cap B_r)\\&\qquad \times \mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad = \int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_4\cap H_5\cap B_r)\\&\qquad \qquad \,\, \times \mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad =c\int _{A_h(3,2)}{\mathcal {H}}^2(H_1\cap B_r)^2\,\mu _2(dH_1) \le c\, g(2,2,3,r)\le c\,re^{2r}, \end{aligned}$$

for \(\sigma _5\) we have

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_1\cap H_3\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad \le c\int _{A_h(3,2)}{\mathcal {H}}^2(H_1\cap B_r)\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,\mu _2(dH_1)\le c\, I(1,2)\le c\,e^{2r}, \end{aligned}$$

for \(\sigma _6\) we have

$$\begin{aligned}&\int _{A_h(3,2)^4}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_1\cap H_3\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad \le \int _{A_h(3,2)^4}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad =c\int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^{2}\,\mu _2^2(d(H_1,H_2)) \le c\,e^{2r} \end{aligned}$$

by the same argument as for \(\sigma _2\) and \(\sigma _3\). For \(\sigma _7\) we have

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_1\cap H_2\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad = c\int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)^3\,\mu _2^2(d(H_1,H_2))\\&\quad \le c\int _{A_h(3,2)^2}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(L_1(H_1)\cap B_r)^2\,\mu _2^2(d(H_1,H_2))\\&\quad \le c\, I(1,2) \le c\,e^{2r}, \end{aligned}$$

for \(\sigma _8\) we obtain

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)^2\,{\mathcal {H}}^0(H_1\cap H_4\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad = \int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_4\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad =c\int _{A_h(3,2)}{\mathcal {H}}^2(H_1\cap B_r)^2\,\mu _2(dH_1) \le c\, g(2,2,3,r) \le c\,re^{2r}, \end{aligned}$$

for \(\sigma _9\) we get

$$\begin{aligned}&\int _{A_h(3,2)^5}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_1\cap H_3\cap H_5\cap B_r)\,\mu _2^5(d(H_1,H_2,H_3,H_4,H_5))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^1(H_1\cap H_2\cap B_r)\,{\mathcal {H}}^1(H_1\cap H_3\cap B_r)\,\mu _2^3(d(H_1,H_2,H_3))\\&\quad = c\int _{A_h(3,2)}{\mathcal {H}}^2(H_1\cap B_r)^2\,\mu _2(dH_1)\le c\, g(2,2,3,r) \le c\,re^{2r}, \end{aligned}$$

for \(\sigma _{10}\) we obtain

$$\begin{aligned}&\int _{A_h(3,2)^6}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_2\cap H_4\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_4\cap H_5\cap H_6\cap B_r)\,\mu _2^6(d(H_1,\ldots ,H_6))\\&\quad \le c \int _{A_h(3,2)^4}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^2(H_4\cap B_r)\,\mu _2^4(d(H_1,H_2,H_3,H_4))\\&\quad =c\,{\mathcal {H}}^3(B_r)^2\le c\,e^{4r} \end{aligned}$$

and, finally, for \(\sigma _{11}\) we have

$$\begin{aligned}&\int _{A_h(3,2)^6}{\mathcal {H}}^0(H_1\cap H_2\cap H_3\cap B_r)\,{\mathcal {H}}^0(H_1\cap H_4\cap H_5\cap B_r)\\&\qquad \times {\mathcal {H}}^0(H_3\cap H_4\cap H_6\cap B_r)\,\mu _2^6(d(H_1,\ldots ,H_6))\\&\quad = c\int _{A_h(3,2)^3}{\mathcal {H}}^{1}(H_1 \cap H_3 \cap B_r) \,{\mathcal {H}}^{1}(H_1 \cap H_4 \cap B_r) \,{\mathcal {H}}^{1}(H_3 \cap H_4 \cap B_r)\\&\qquad \times \mu _2^3\, (d(H_1,H_3,H_4))\\&\quad \le c\int _{A_h(3,2)^3}{\mathcal {H}}^{1}(L_1(H_1) \cap B_r)^{2} \,{\mathcal {H}}^{1}(H_3 \cap H_4 \cap B_r) \,\mu _2^3(d(H_1,H_3,H_4))\\&\quad =c\,{\mathcal {H}}^3(B_r)\,I(0,2)\le c\,e^{4r}. \end{aligned}$$

We thus conclude that \(M_{3,3}(f^{(0)})\le c\,t^6\,(6e^{2r}+3\,re^{2r}+2e^{4r})\le c\,t^6\,e^{4r}\). An application of the upper bounds for \(M_{u,v}(f^{(0)})\) with \((u,v)\in \{(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)\}\) and the lower bound for the variance from Lemma 10 in (14) shows that

$$\begin{aligned} d\left( \frac{F_{r,t}^{(0)}-{\mathbb {E}}F_{r,t}^{(0)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(0)})}},N \right) \le c\, \frac{3\sqrt{t^9e^{4r}}+3\sqrt{t^9re^{4r}}}{t^5c^{(1)}(3,1)e^{2r}r} \le c\,t^{-1/2}\,r^{-1/2} \end{aligned}$$
(47)

and the proof of Theorem 5 (b) is complete. \(\square \)

6.2.3 The higher dimensional cases \(d\ge 4\): Proof of Theorem 5 (c)

In order to show that for \(d\ge 4\) and \(i=d-1\), and for \(d\ge 7\) and \(i\in \{0,\ldots ,d-1\}\), none of the centred and normalized functionals \(F_{r,t}^{(i)}\) converges in distribution to a Gaussian random variable, as \(r\rightarrow \infty \), we will argue that the fourth cumulant

$$\begin{aligned} {{\,\mathrm{cum}\,}}_4:={\mathbb {E}}\left( \widetilde{F_{r,t}^{(i)}}\right) ^4-3,\qquad \qquad \widetilde{F_{r,t}^{(i)}}:={F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})}} \end{aligned}$$

does not converge to zero, which is the value of the fourth cumulant of a standard Gaussian random variable. We start with the following crucial, but rather technical result, which is based on the formula (11) for the centred moments of a Poisson U-statistic.

Lemma 17

Let \(d\ge 4\), \(i\in \{0,1,\ldots ,d-1\}\) and \(t\ge t_0>0\). If \(d \in \{4,5,6\}\) and \(i=d-1\) or if \(d\ge 7\), then

$$\begin{aligned} \sup _{r\ge 1}{\mathbb {E}}\left( \widetilde{F_{r,t}^{(i)}}\right) ^5<\infty . \end{aligned}$$
Fig. 9

Left panel: The two types of (sub-)partitions in \(\varPi _{\ge 2}^{**}(1,1,1,1,1)\). Right panel: Example of a sub-partition \(\sigma \) from \(\varPi _{\ge 2}^{**}(4,4,4,4,4)\) with \(m(\sigma )=3\)

Proof

We start by explaining our method by considering the case \(i=d-1\). In this situation

$$\begin{aligned} {\mathbb {E}}\left( \widetilde{F_{r,t}^{(d-1)}}\right) ^5 = {{\mathbb {E}}\big (F_{r,t}^{(d-1)}-{\mathbb {E}}F_{r,t}^{(d-1)}\big )^5\over ({\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(d-1)}))^{5/2}} \le c\,{{\mathbb {E}}\big (F_{r,t}^{(d-1)}-{\mathbb {E}}F_{r,t}^{(d-1)}\big )^5\over e^{5r(d-2)}}, \end{aligned}$$

where we used the variance bound from Lemma 11, which is available since \(t\ge t_0\) and \(r\ge 1\). For the centred fifth moment, (11) implies that

$$\begin{aligned}&{\mathbb {E}}\big (F_{r,t}^{(d-1)}-{\mathbb {E}}F_{r,t}^{(d-1)}\big )^5 \\&\quad = \sum _{\sigma \in \varPi _{\ge 2}^{**}(1,1,1,1,1)}t^{5-|\sigma |+\Vert \sigma \Vert }\int _{A_h(d,d-1)^{5-\Vert \sigma \Vert +|\sigma |}}\big ((f^{(d-1)})^{\otimes 5}\big )_\sigma \,d\mu _{d-1}^{5-\Vert \sigma \Vert +|\sigma |}. \end{aligned}$$

The set \(\varPi _{\ge 2}^{**}(1,1,1,1,1)\) consists of only two types of sub-partitions of \(\{1,2,3,4,5\}\), which are in fact partitions, see Fig. 9. The first type consists of a single partition, namely the trivial partition containing only the block \(\{1,2,3,4,5\}\). The second type contains \({5\atopwithdelims ()2}=10\) partitions having precisely two blocks, one of size 2 and the other of size 3. Since the integrals corresponding to these partitions all yield the same contribution, we can restrict our computations to \(\{\{1,2,3\},\{4,5\}\}\), for example. Thus,

$$\begin{aligned}&{\mathbb {E}}\big (F_{r,t}^{(d-1)}-{\mathbb {E}}F_{r,t}^{(d-1)}\big )^5 = t^9\int _{A_h(d,d-1)}{\mathcal {H}}^{d-1}(H\cap B_r)^5\,\mu _{d-1}(dH)\\&\quad +10t^8\int _{A_h(d,d-1)^2}{\mathcal {H}}^{d-1}(H_1\cap B_r)^3\,{\mathcal {H}}^{d-1}(H_2\cap B_r)^2\,\mu _{d-1}^2(d(H_1,H_2)). \end{aligned}$$

By Lemma 8 we have

$$\begin{aligned} \int _{A_h(d,d-1)}{\mathcal {H}}^{d-1}(H\cap B_r)^5\,\mu _{d-1}(dH)\le c\, g(d-1,5,d,r)\le c\, e^{5r(d-2)}, \end{aligned}$$

since \(5(d-2)-(d-1)=4d-9>0\). Again by Lemma 8 we obtain

$$\begin{aligned}&\int _{A_h(d,d-1)^2}{\mathcal {H}}^{d-1}(H_1\cap B_r)^3\,{\mathcal {H}}^{d-1}(H_2\cap B_r)^2\,\mu _{d-1}^2(d(H_1,H_2))\\&\quad \le c\, g(d-1,3,d,r)\, g(d-1,2,d,r)\le c\, e^{3r(d-2)}e^{2r(d-2)} \le c\, e^{5r(d-2)}, \end{aligned}$$

since \(d>3\). Thus we get

$$\begin{aligned} \sup _{r\ge 1}{\mathbb {E}}\left( \widetilde{F_{r,t}^{(i)}}\right) ^5 \le c\,\sup _{r\ge 1}{e^{5r(d-2)}+e^{5r(d-2)}\over e^{5r(d-2)}} = c<\infty . \end{aligned}$$

This proves the claim for \(i=d-1\).

Now we fix \(i\in \{0,1,\ldots ,d-2\}\) arbitrarily and assume that \(d\ge 7\). Furthermore, we fix an arbitrary partition \(\sigma \in \varPi _{\ge 2}^{**}(d-i,d-i,d-i,d-i,d-i)\). We denote by \(m(\sigma )\in \{2,3,4,5\}\) the size of the maximal block of \(\sigma \) and represent \(\sigma \) as a diagram. The elements of this diagram are labelled \(a_{p,q}\). Here, \(p\in \{1,\ldots ,5\}\) represents the row number and \(q\in \{1,\ldots ,d-i\}\) stands for the column number. Without loss of generality we can and will assume that the maximal block of \(\sigma \) sits in the left upper corner of the diagram of \(\sigma \), that is, the maximal block is of the form \(\{a_{1,1},\ldots ,a_{m(\sigma ),1}\}\). To each row \(p\in \{1,\ldots ,5\}\) we associate two numbers b(p) and c(p) in the following way. By b(p) we denote the number of elements of row p in position

$$\begin{aligned} (p,q)\in (\{1,\ldots ,m(\sigma )\}\times \{2,\ldots ,d-i\})\cup (\{m(\sigma )+1,\ldots ,5\}\times \{1,\ldots ,d-i\}) \end{aligned}$$

which are contained in a block of \(\sigma \) that has at least one element in a row below p, and we let c(p) be the number of elements in position \((p,q)\) (with the same restrictions as above) in row p which are not contained in any block of \(\sigma \) that has at least one element in a row below p; see Fig. 9 for an example. Note that \(b(5)=0\) and \(c(5)=d-i\) if \(m(\sigma )<5\), and that \(c(p)=d-i-b(p)-1\) if \(p\in \{1,\ldots ,m(\sigma )\}\). Our task is to show that the integral (in symbolic notation)

$$\begin{aligned} {\mathscr {I}} :&= \int \cdots \int \Big (\big (f^{(i)}\big )^{\otimes 5}\Big )_\sigma \\&=\int \cdots \int f^{(i)}(H_1, G_1,\ldots , G_{b(1)}, K_1,\ldots , K_{c(1)})\\&\qquad \qquad \qquad \times f^{(i)}(\ldots )\,f^{(i)}(\ldots )\,f^{(i)}(\ldots )\,f^{(i)}(\ldots ) \, \mu _{d-1}(d H_1) \ldots \end{aligned}$$

is bounded by a constant multiple of \(e^{5(d-2)r}\), which is the order of \(({\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)}))^{5/2}\). We first integrate with respect to the hyperplanes \(K_1,\ldots ,K_{c(1)}\), which do not appear in any of the arguments of the other four functions \(f^{(i)}(\ldots )\). By Crofton’s formula this gives \(c\,{\mathcal {H}}^{d-1-b(1)}(B_r\cap H_1\cap G_1\cap \ldots \cap G_{b(1)})\). Now we replace \(H_1\cap G_1\cap \ldots \cap G_{b(1)}\) by a \((d-1-b(1))\)-dimensional subspace \(L_{d-1-b(1)}(s_1)\) having distance \(s_1=d_h(H_1,p)\) from p. This leads to

$$\begin{aligned} {\mathcal {H}}^{d-1-b(1)}(B_r\cap H_1\cap G_1\cap \ldots \cap G_{b(1)})\le {\mathcal {H}}^{d-1-b(1)}(B_r\cap L_{d-1-b(1)}(s_1)). \end{aligned}$$
(48)

Then \(G_1,\ldots ,G_{b(1)}\) are active integration variables for rows below the first row. Repeating the same argument for \(p=2,\ldots ,m(\sigma )\), we arrive at (again in symbolic notation)

$$\begin{aligned} {\mathscr {I}}&\le c \int \cdots \int {\mathcal {H}}^{d-1-b(1)}(B_r\cap L_{d-1-b(1)}(s_1))\cdots \\&\quad \times {\mathcal {H}}^{d-1-b(m(\sigma ))} (B_r\cap L_{d-1-b(m(\sigma ))}(s_1))\\&\quad \times f^{(i)}(\ldots )\cdots f^{(i)}(\ldots )\, \mu _{d-1}(dH_1) \ldots , \end{aligned}$$

where \(f^{(i)}(\ldots )\) appears \(5-m(\sigma )\) times. From now on we distinguish the following two cases:

  1. (a)

    there is no block that contains precisely two elements from the rows below \(m(\sigma )\),

  2. (b)

    there exists a block that contains precisely two elements from the rows below \(m(\sigma )\).

We start by treating case (a). If \(m(\sigma )=2\), then all blocks of \(\sigma \) have two elements. In particular, no element of a row \(p\ge 3\) can lie in a (2-element) block together with another element from a row below \(m(\sigma )\). Hence, we have \(c(p)=d-i\) for \(p\ge 3\). If \(m(\sigma )=3\), then an element of row \(p=4\) cannot be in a common block with an element of row 5 due to assumption (a). Hence \(c(4)=c(5)=d-i\). This shows that \(c(p)=d-i\) for \(p\in \{m(\sigma )+1,\ldots ,5\}\). We can thus carry out the \(5-m(\sigma )\) integrals involving the functions \(f^{(i)}(\ldots )\), which by Crofton’s formula and Lemma 8 leads to the upper bound

$$\begin{aligned} {\mathcal {H}}^d(B_r)^{5-m(\sigma )} \le c\,e^{(5-m(\sigma ))(d-1)r}. \end{aligned}$$
(49)

The only remaining integral in \({\mathscr {I}}\) is

$$\begin{aligned}&{\mathscr {J}}:=\int _0^r\cosh ^{d-1}(s)\,{\mathcal {H}}^{d-1-b(1)}(B_r\cap L_{d-1-b(1)}(s))\cdots {\mathcal {H}}^{d-1-b(m(\sigma ))}(B_r\cap L_{d-1-b(m(\sigma ))}(s))\,ds. \end{aligned}$$

To proceed, we define for \(p\in \{1,\ldots ,m(\sigma )\}\) the function

$$\begin{aligned} g_p(s):&=e^{-r(d-2)}\cdot {\left\{ \begin{array}{ll} e^{(r-s)(d-2-b(p))}&{}:d-1-b(p) \ge 2,\\ r-s+\log (2)&{}: d-1-b(p) = 1,\\ 1&{}: d-1-b(p) = 0. \end{array}\right. } \end{aligned}$$

Then, Lemma 6, (19) and Lemma 7 imply that

$$\begin{aligned} {\mathscr {J}} \le c \,e^{m(\sigma )(d-2)r}\,{\mathscr {K}}\qquad \text {with}\qquad {\mathscr {K}}:=\int _0^r \cosh ^{d-1}(s)\,g_1(s)\cdots g_{m(\sigma )}(s)\,ds. \end{aligned}$$
(50)

We let

$$\begin{aligned} Z_{01}&:= \{p\in \{1,\ldots ,m(\sigma )\}:d-1-b(p)\in \{0,1\}\},\\ Z_{1}&:= \{p\in \{1,\ldots ,m(\sigma )\}:d-1-b(p)=1\}. \end{aligned}$$

Then

$$\begin{aligned} {\mathscr {K}}&\le c\,e^{-r(d-2)|Z_{01}|-r\sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p)} \int _0^r (r-s+\log (2))^{|Z_1|}\,e^{sE}\,ds, \end{aligned}$$
(51)

where the exponent E is given by

$$\begin{aligned} E := (d-1)-(d-2)(m(\sigma )-|Z_{01}|)+\sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p). \end{aligned}$$

If \(E<0\) the integral in (51) is bounded by a constant times \(r^{|Z_1|}\). In view of (49) and (50) we conclude that

$$\begin{aligned} {\mathscr {I}} \le c\,e^{(5-m(\sigma ))(d-1)r}\,e^{m(\sigma )(d-2)r}\,e^{-(d-2)|Z_{01}|r-r\sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p)}r^{|Z_1|}. \end{aligned}$$
(52)

In order to bound \({\mathscr {I}}\) from above by a constant times \(e^{5(d-2)r}\), we use the decomposition

$$\begin{aligned} e^{5(d-2)r} = e^{(5-m(\sigma ))(d-1)r}\,e^{m(\sigma )(d-2)r}\,e^{-(5-m(\sigma ))r}. \end{aligned}$$
(53)
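As a quick check of this decomposition (our own verification, not part of the original argument), the exponents on the right-hand side of (53) indeed sum to \(5(d-2)\):

```latex
$$\begin{aligned}
(5-m(\sigma ))(d-1)+m(\sigma )(d-2)-(5-m(\sigma ))
&= \big(5d-5-m(\sigma )\big)-\big(5-m(\sigma )\big)\\
&= 5d-10 = 5(d-2),
\end{aligned}$$
```

where we used that \(m(\sigma )(d-2)-m(\sigma )(d-1)=-m(\sigma )\); multiplying by r in the exponent recovers \(e^{5(d-2)r}\).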

A comparison of the exponents in (52) and (53) shows that if \(E<0\), then it is sufficient to prove that

$$\begin{aligned} (d-2)|Z_{01}|+\sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p) {\left\{ \begin{array}{ll} \ge 5-m(\sigma )&{}: \text {if } |Z_1|=0,\\> 5-m(\sigma )&{}: \text {if } |Z_1|>0. \end{array}\right. } \end{aligned}$$

If \(|Z_{01}|>0\), then \((d-2)|Z_{01}|\ge 4> 5-m(\sigma )\) for \(d\ge 6\). If \(|Z_{01}|=0\), then also \(|Z_1|=0\), and in this case it is sufficient to show that \(\sum _{p=1}^{m(\sigma )}b(p)\ge 5-m(\sigma )\). To see this, note that, for any \(m(\sigma )\in \{2,\ldots ,5\}\), under condition (a) there have to be, for \(5-m(\sigma )\) of the positions \((p,q)\in \{1,\ldots ,m(\sigma )\}\times \{2,\ldots ,d-i\}\), blocks containing the element at \((p,q)\) and exactly one element at \((p',q')\in \{m(\sigma )+1,\ldots ,5\}\times \{1,\ldots ,d-i\}\), since each row has to be visited by some block. But this implies the required inequality.

Next, suppose that \(E=0\). Then the integral in (51) is bounded by a polynomial in r of degree at most \(|Z_1|+1\) and another comparison of exponents in (52) and (53) implies that in this case we need to prove that

$$\begin{aligned} (d-2)|Z_{01}| + \sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p) > 5-m(\sigma ). \end{aligned}$$
(54)

Using the assumption that \(E=0\), we see that in this case

$$\begin{aligned} (d-2)|Z_{01}| + \sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p)&= m(\sigma )(d-2)-(d-1). \end{aligned}$$

This shows that the inequality in (54) is equivalent to \((d-1)(m(\sigma )-1) > 5\), which is always satisfied for \(d\ge 7\).

Finally, we suppose that \(E>0\) in which case a comparison of the exponents in (52) and (53) shows that we have to verify that

$$\begin{aligned}&(d-2)|Z_{01}|+ \sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p) - (d-1) + (d-2)(m(\sigma )-|Z_{01}|) \\&\quad - \sum _{p=1, p\notin Z_{01}}^{m(\sigma )}b(p) \ge 5-m(\sigma ). \end{aligned}$$

After simplification, this is equivalent to \((d-1)(m(\sigma )-1) \ge 5\), which holds for \(d\ge 6\). This completes the argument in case (a) for \(d\ge 7\).
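For completeness, here is the simplification behind the last step (our own computation; the two sums over \(b(p)\) cancel):

```latex
$$\begin{aligned}
(d-2)|Z_{01}|-(d-1)+(d-2)(m(\sigma )-|Z_{01}|)
&= m(\sigma )(d-2)-(d-1),\\
m(\sigma )(d-2)-(d-1)\ge 5-m(\sigma )
\;\Longleftrightarrow\; m(\sigma )(d-1)\ge (d-1)+5
\;&\Longleftrightarrow\; (d-1)(m(\sigma )-1)\ge 5.
\end{aligned}$$
```

Since \(m(\sigma )\ge 2\), the last condition holds as soon as \(d-1\ge 5\), that is, for \(d\ge 6\).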

We turn now to case (b), where we have to distinguish the sub-cases \(m(\sigma )=2\) and \(m(\sigma )=3\). We start with the case \(m(\sigma )=2\). Then, arguing as at the beginning of the proof for case (a), we have

$$\begin{aligned} {\mathscr {I}}\le c\, {\mathscr {J}}_1{\mathscr {J}}_2\,{\mathcal {H}}^d(B_r) \end{aligned}$$

with

$$\begin{aligned}&{\mathscr {J}}_j:= \int _0^r \cosh ^{d-1}(s)\,{\mathcal {H}}^{d-1-\bar{b}(2j-1)}(B_r\cap L_{d-1-{{\bar{b}}}(2j-1)}(s)) {\mathcal {H}}^{d-1-\bar{b}(2j)}(B_r\cap L_{d-1-{{\bar{b}}}(2j)}(s))\,ds \end{aligned}$$

for \(j\in \{1,2\}\), where \({{\bar{b}}}(p)=b(p)\) for \(p\in \{1,2,4\}\) and \({{\bar{b}}}(3)=b(3)-1\ge 0\). Moreover, without loss of generality, we can assume that \(b(1)\ge 1\). Similarly to (50), for \(j \in \{1,2\}\) we get

$$\begin{aligned} {\mathscr {J}}_j \le e^{2(d-2)r}\,{\mathscr {K}}_j \qquad \text {with}\qquad {\mathscr {K}}_j:=\int _0^r \cosh ^{d-1}(s)\,g_{2j-1}(s)\,g_{2j}(s)\,ds. \end{aligned}$$

For \(j \in \{1,2\}\) we let

$$\begin{aligned} Z_{01}^{j}&:= \{p\in \{2j-1,2j\}:d-1-{{\bar{b}}}(p)\in \{0,1\}\},\\ Z_{1}^{j}&:= \{p\in \{2j-1,2j\}:d-1-{{\bar{b}}}(p)=1\}. \end{aligned}$$

Then

$$\begin{aligned} {\mathscr {K}}_j&\le c\,e^{-r(d-2)|Z_{01}^{j}|-r\sum _{p=2j-1, p\notin Z_{01}^{j}}^{2j}{{\bar{b}}}(p)}\int _0^r (r-s+\log (2))^{|Z_1^{j}|}\,e^{sE_j}\,ds, \end{aligned}$$
(55)

where the exponents \(E_j\), \(j\in \{1,2\}\), are given by

$$\begin{aligned} E_j :=&(d-1)-(d-2)(2-|Z_{01}^{j}|)+\sum _{p=2j-1, p\notin Z_{01}^{j}}^{2j}{{\bar{b}}}(p). \end{aligned}$$

We will show that \({\mathscr {K}}_1\) is bounded by a constant multiple of \(e^{-r}\) and \({\mathscr {K}}_2\) by a constant. Then we can conclude that

$$\begin{aligned} {\mathscr {I}}\le c\, e^{(d-1)r}{\mathscr {J}}_1{\mathscr {J}}_2\le c\, e^{(d-1)r}e^{4(d-2)r}e^{-r}\le c\,e^{5(d-2)r}. \end{aligned}$$
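The exponent count in this chain is elementary; we record it for convenience (our own arithmetic):

```latex
$$(d-1)+4(d-2)-1 = 5d-10 = 5(d-2),$$
```

so the product of the three exponential factors is indeed at most a constant multiple of \(e^{5(d-2)r}\).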

We first consider \({\mathscr {K}}_1\). For \(E_1<0\) the integral in (55) is bounded by a constant multiple of \(r^{|Z_1^1|}\). Therefore it is sufficient to compare the exponents and to show that

$$\begin{aligned} (d-2)|Z_{01}^{1}|+\sum _{p=1, p\notin Z_{01}^{1}}^{2}b(p) {\left\{ \begin{array}{ll} \ge 1&{}:|Z^1_1|=0,\\> 1&{}:|Z^1_1|>0. \end{array}\right. } \end{aligned}$$

Since \(b(1)\ge 1\) and \(d\ge 4\), this is satisfied.

Next, suppose that \(E_1=0\). In this case, the integral in (55) is bounded by a polynomial in r and we have to show the inequality

$$\begin{aligned} (d-2)|Z_{01}^{1}|+\sum _{p=1, p\notin Z_{01}^{1}}^{2}b(p) > 1. \end{aligned}$$
(56)

Using the assumption that \(E_1=0\), we get

$$\begin{aligned} (d-2)|Z_{01}^{1}|+\sum _{p=1, p\notin Z_{01}^{1}}^{2}b(p)=-(d-1)+2(d-2)=d-3. \end{aligned}$$

Hence (56) is true for \(d\ge 5\).

Finally, we suppose that \(E_1>0\). Then we have to show that

$$\begin{aligned}&(d-2)|Z_{01}^{1}|+\sum _{p=1, p\notin Z_{01}^{1}}^{2}b(p)-(d-1)+(d-2)(2-|Z_{01}^{1}|)\\&\quad -\sum _{p=1, p\notin Z_{01}^{1}}^{2}b(p)\ge 1. \end{aligned}$$

After simplifications this is equivalent to \(d\ge 4\).
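Explicitly, after the two sums over \(b(p)\) cancel, the left-hand side equals (our own computation)

```latex
$$(d-2)|Z_{01}^{1}|-(d-1)+(d-2)\big(2-|Z_{01}^{1}|\big)=2(d-2)-(d-1)=d-3,$$
```

so the required inequality reads \(d-3\ge 1\), that is, \(d\ge 4\).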

Now we prove that \({\mathscr {K}}_2\) is bounded by a constant. For \(E_2<0\), a comparison of the exponents in (55) shows that it suffices to have

$$\begin{aligned} (d-2)|Z_{01}^{2}|+ \sum _{p=3, p \notin Z_{01}^{2}}^{4}\bar{b}(p){\left\{ \begin{array}{ll} \ge 0&{} :|Z^2_{1}|=0,\\> 0&{} :|Z^2_{1}|>0, \end{array}\right. } \end{aligned}$$

which is trivially satisfied.

For \(E_2=0\) the required inequality is

$$\begin{aligned} (d-2)|Z_{01}^{2}|+ \sum _{p=3, p \notin Z_{01}^{2}}^{4}{{\bar{b}}}(p)> 0, \end{aligned}$$

which is equivalent to \(-(d-1)+2(d-2)>0\), that is, to \(d\ge 4\).

Finally, if \(E_2>0\) then we have to verify that

$$\begin{aligned}&(d-2)|Z_{01}^{2}|+ \sum _{p=3, p \notin Z_{01}^{2}}^{4}\bar{b}(p)-(d-1)+(d-2)(2-|Z_{01}^{2}|)\\&\quad -\sum _{p=3, p \notin Z_{01}^{2}}^{4}{{\bar{b}}}(p)\ge 0. \end{aligned}$$

Again simplification yields that this is equivalent to \(d\ge 3\).

Now we turn to the case \(m(\sigma )=3\). Then we have

$$\begin{aligned} {\mathscr {I}}\le c\, {\mathscr {I}}_3{\mathscr {I}}_4 \end{aligned}$$

with

$$\begin{aligned} {\mathscr {I}}_3&:=\int _0^r\cosh ^{d-1}(s)\prod _{p=1}^3{\mathcal {H}}^{d-1-b(p)}(B_r\cap L_{d-1-b(p)}(s))\, ds,\\ {\mathscr {I}}_4&:= \int _0^r\cosh ^{d-1}(s)\prod _{p=4}^5{\mathcal {H}}^{d-1-\bar{b}(p)}(B_r\cap L_{d-1-{{\bar{b}}}(p)}(s))\, ds, \end{aligned}$$

where \(0\le {{\bar{b}}}(4):=b(4)-1\le d-i-1\le d-1\) and \({{\bar{b}}}(5)=0\). We will prove that \({\mathscr {I}}_3\le c\, e^{3(d-2)r}\) and \({\mathscr {I}}_4\le c\, e^{2(d-2)r}\), which in turn proves that \({\mathscr {I}}\le c\, e^{5(d-2)r}\).

As in the proof of case (a) (and for \(m(\sigma )=3\) there), we obtain

$$\begin{aligned} {\mathscr {I}}_3\le c\, e^{3(d-2)r}{\mathscr {K}}_3\qquad \text {with}\qquad {\mathscr {K}}_3:=\int _0^r\cosh ^{d-1}(s) g_1(s)g_2(s)g_3(s)\, ds. \end{aligned}$$

We show that \({\mathscr {K}}_3\le c\). For this, we proceed as before and obtain

$$\begin{aligned} {\mathscr {K}}_3&\le c\,e^{-r(d-2)|Z_{01}^3|-r\sum _{p=1, p\notin Z_{01}^3}^3 b(p)} \int _0^r (r-s+\log (2))^{|Z_1^3|}\,e^{sE_3}\,ds, \end{aligned}$$

where

$$\begin{aligned} Z_{01}^3:= & {} \{p\in \{1,2,3\}:d-1-b(p)\in \{0,1\}\}, \\ Z_{1}^3:= & {} \{p\in \{1,2,3\}:d-1-b(p)=1\} \end{aligned}$$

and

$$\begin{aligned} E_3:=(d-1)-(d-2)(3-|Z^3_{01}|)+\sum _{p=1,p\notin Z_{01}^3}^3b(p). \end{aligned}$$

If \(E_3\le 0\), then

$$\begin{aligned} r^{|Z^3_{01}|}e^{-r(d-2)|Z^3_{01}|}e^{-r\sum _{p=1,p\notin Z_{01}^3}^3 b(p)}\le c \end{aligned}$$

provided that

$$\begin{aligned} (d-2)|Z^3_{01}|+\sum _{p=1,p\notin Z_{01}^3}^3 b(p) {\left\{ \begin{array}{ll} \ge 0&{}:|Z^3_1|=0,\\>0 &{}:|Z^3_1|>0. \end{array}\right. } \end{aligned}$$

This is obviously true, since \(|Z^3_{01}|\ge |Z^3_1|\) and \(d\ge 4\). Hence, if \(E_3\le 0\), then \({\mathscr {K}}_3\le c\).

If \(E_3>0\), then \({\mathscr {K}}_3\le c\) follows provided that

$$\begin{aligned} (d-2)|Z^3_{01}|+\sum _{p=1,p\notin Z_{01}^3}^3 b(p)-E_3\ge 0. \end{aligned}$$

The latter is equivalent to \(3(d-2)-(d-1)\ge 0\), that is, to \(2d\ge 5\). Thus we have shown that \({\mathscr {I}}_3\le c\, e^{3(d-2)r}\). In order to show that \({\mathscr {I}}_4\le c\, e^{2(d-2)r}\), we distinguish several cases.

If \({{\bar{b}}}(4)<d-3\), then

$$\begin{aligned} {\mathscr {I}}_4&\le c\int _0^re^{s(d-1)}e^{(r-s)(d-2-{{\bar{b}}}(4))}e^{(r-s)(d-2)}\, ds\\&\le c\,e^{(2(d-2)-{{\bar{b}}}(4))r}\int _0^re^{s(-d+3+{{\bar{b}}}(4))}\, ds\le c\, e^{2(d-2)r}. \end{aligned}$$

If \({{\bar{b}}}(4)=d-3\), then

$$\begin{aligned} {\mathscr {I}}_4\le c\, e^{(2(d-2)-d+3)r}r=c\, r e^{r(d-1)}\le c\, e^{2(d-2)r}, \end{aligned}$$

since \(d-1< 2(d-2)\) for \(d\ge 4\).

If \({{\bar{b}}}(4)=d-2\), then

$$\begin{aligned} {\mathscr {I}}_4\le c\int _0^re^{s(d-1)}(r-s+\log (2))e^{(r-s)(d-2)}\, ds\le c\, e^{r(d-1)}. \end{aligned}$$

If \({{\bar{b}}}(4)=d-1\), then

$$\begin{aligned} {\mathscr {I}}_4\le c\int _0^re^{s(d-1)} e^{(r-s)(d-2)}\, ds\le c\, e^{r(d-1)}. \end{aligned}$$

Thus in all cases we have \({\mathscr {I}}_4\le c\, e^{2(d-2)r}\), which completes the proof. \(\square \)

Proof of Theorem 5 (c)

Let d and i be as in the statement of Theorem 5 (c), and suppose to the contrary that \(\widetilde{F_{r,t}^{(i)}}\) converges in distribution, as \(r\rightarrow \infty \), to a standard Gaussian random variable N. As a consequence of Lemma 17, the family of random variables \(\big ((\widetilde{F_{r,t}^{(i)}})^4\big )_{r\ge 1}\) is uniformly integrable, which implies that \({\mathbb {E}}(\widetilde{F_{r,t}^{(i)}})^4\rightarrow {\mathbb {E}}N^4=3\), as \(r\rightarrow \infty \). Thus, we would also have that

$$\begin{aligned} {{\,\mathrm{cum}\,}}_4 = {\mathbb {E}}\left( \widetilde{F_{r,t}^{(i)}}\right) ^4-3 \rightarrow {\mathbb {E}}N^4 - 3 = 0, \end{aligned}$$
(57)

as \(r\rightarrow \infty \). On the other hand, from [67, page 112] we know that

$$\begin{aligned} \frac{M_{1,1}(f^{(i)})}{({\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)}))^2} \le {{\,\mathrm{cum}\,}}_4. \end{aligned}$$

In addition, we have the following lower bound for \(M_{1,1}(f^{(i)})\):

$$\begin{aligned} M_{1,1}(f^{(i)})= & {} c t^{4(d-1-i)+1} \int _{A_h(d,d-1)} {\mathcal {H}}^{d-1}({\tilde{H}}_1\cap B_r)^4 \ \mu _{d-1}(d{\tilde{H}}_1)\\\ge & {} c\, g(d-1,4,d,r) \ge c\, e^{4r(d-2)}, \end{aligned}$$

since \(4(d-2)-(d-1)>0\), which follows from our assumption that \(d\ge 4\), and since \(i\le d-1\) and \(t\ge 1\). In combination with Lemma 11 we thus find that

$$\begin{aligned} {{\,\mathrm{cum}\,}}_4\ge \frac{M_{1,1}(f^{(i)})}{({\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)}))^2} \ge {c\over c^{(i)}(d,1)}{e^{4r(d-2)}\over e^{4r(d-2)}}=c>0, \end{aligned}$$

which is a contradiction to (57). Consequently, the family of random variables \(\big (\widetilde{F_{r,t}^{(i)}}\big )_{r\ge 1}\) cannot satisfy a central limit theorem as \(r\rightarrow \infty \). \(\square \)

Remark 12

Let \(d\ge 4\) and \(i=d-1\), or \(d\ge 7\) and \(i\in \{0,1,\ldots ,d-1\}\). For such d and i the proof of Theorem 5 (c) in combination with [10, Corollary 4.7.19], a corollary of the Eberlein–Šmulian theorem, shows that there exists a subsequence \(\widetilde{F_{r_k,t}^{(i)}}\) which converges in distribution and in \(L^4\) to some limiting random variable X, say. In particular, this implies that \({\mathbb {E}}X=0\), \({\mathbb {E}}X^2=1\) and \({\mathbb {E}}X^m<\infty \) for \(m\in \{3,4\}\). This rules out for X the classical \(\alpha \)-stable distributions for any \(0<\alpha <2\) and, since we have shown that \({{\,\mathrm{cum}\,}}_4(X)>0\), also a Gaussian distribution. We leave the determination of the distribution of the limiting random variable X as a challenging open problem for future research.

6.3 The case of simultaneous growth of intensity and window: Proof of Theorem 6

According to Lemma 17 we have that, for any fixed \(t\ge 1\),

$$\begin{aligned} \sup _{r\ge 1}{\mathbb {E}}\left( \widetilde{F_{r,t}^{(i)}}\right) ^5<\infty ,\qquad \text {where}\qquad \widetilde{F_{r,t}^{(i)}}={F_{r,t}^{(i)}-{\mathbb {E}}F_{r,t}^{(i)}\over \sqrt{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})}} \end{aligned}$$

and where d and i are as in the statement of Theorem 6. Then, taking \(t=1\), by Hölder’s inequality it follows that

$$\begin{aligned} \sup _{r\ge 1}{\mathbb {E}}\left( \widetilde{F_{r,1}^{(i)}}\right) ^4 \le \sup _{r\ge 1}\left( {\mathbb {E}}\left( \widetilde{F_{r,1}^{(i)}}\right) ^5\right) ^{4/5} < \infty . \end{aligned}$$
(58)

Next, we recall the definition of the integrals \(M_{u,v}(h)\), \(u,v\in \{1,\ldots ,m\}\), from (12) that are associated with a general Poisson U-statistic of order \(m\in {\mathbb {N}}\) with kernel function h. In order to emphasize the role of the measure these integrals are taken with, we will write \(M_{u,v}(h\,;\,\mu )\) in what follows. By definition of the integrated kernels in (13) we have that

$$\begin{aligned} M_{u,v}(f^{(i)}\,;\,t \mu _{d-1}) \le t^{4(d-i-1)+1} M_{u,v}(f^{(i)}\,;\,\mu _{d-1}) \end{aligned}$$
(59)

for any \(t\ge 1\) and any fixed \(r\ge 1\). In fact, \(f_u^{(i)}\) and \(f_v^{(i)}\) contribute twice the factor \(t^{d-i-u}\) and twice the factor \(t^{d-i-v}\) by (13), respectively, and the integral in (12) leads to an additional factor \(t^{|\sigma |}\). By the choice \(u=v=1\) we maximize the resulting exponent and see that their product is bounded by \(t^{4(d-i-1)+1}\). Indeed, if \(u=v=1\) we necessarily have that \(|\sigma |=1\) since \(\sigma \) has to be connected. On the other hand, if \(u+v\ge 3\) then \(|\sigma |\le u+v\) and hence

$$\begin{aligned} 2(d-i-u)+2(d-i-v)+|\sigma |&\le 2(d-i-u)+2(d-i-v)+u+v\\&=4(d-i-1)-(u+v)+4\\&\le 4(d-i-1)+1. \end{aligned}$$

Now, we apply the normal approximation bound (14) to the Poisson U-statistic \(F_{r,t}^{(i)}\). Together with (59) and the lower and the upper variance bound from Lemma 11 this yields

$$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right)&\le c \sum _{u,v=1}^{d-i} \frac{\sqrt{M_{u,v}(f^{(i)}\,;\, t \mu _{d-1})}}{{\mathbb {V}}{\mathrm{ar}}(F_{r,t}^{(i)})} \\&\le c \sum _{u,v=1}^{d-i} {t^{2(d-i-1)+1/2}\over t^{2(d-i)-1}}\frac{\sqrt{M_{u,v}(f^{(i)}\,;\, \mu _{d-1})}}{{\mathbb {V}}{\mathrm{ar}}(F_{r,1}^{(i)})}\\&= {c\over \sqrt{t}} \sum _{u,v=1}^{d-i} \frac{\sqrt{M_{u,v}(f^{(i)}\,;\, \mu _{d-1})}}{{\mathbb {V}}{\mathrm{ar}}(F_{r,1}^{(i)})} \end{aligned}$$

for any \(t\ge 1\) and \(r\ge 1\). Note that the expression in the sum has now become a function of the parameter r only. We can now apply for any \(u,v\in \{1,\ldots ,d-i\}\) the estimate

$$\begin{aligned} \frac{\sqrt{M_{u,v}(f^{(i)}\,;\, \mu _{d-1})}}{{\mathbb {V}}{\mathrm{ar}}(F_{r,1}^{(i)})} \le \sqrt{{\mathbb {E}}\left( \widetilde{F_{r,1}^{(i)}}\right) ^4 - 3} \end{aligned}$$

from the discussion after [67, Corollary 4.3] (see also [34, Proposition 3.8]). This leads to the bound

$$\begin{aligned} d\left( \frac{F_{r,t}^{(i)}- {\mathbb {E}}F_{r,t}^{(i)}}{\sqrt{{\mathbb {V}}{\mathrm{ar}}F_{r,t}^{(i)}}},N \right)&\le \frac{c}{\sqrt{t}} \sqrt{{\mathbb {E}}\left( \widetilde{F_{r,1}^{(i)}}\right) ^4 - 3} \le \frac{c}{\sqrt{t}} \sqrt{{\mathbb {E}}\left( \widetilde{F_{r,1}^{(i)}}\right) ^4}. \end{aligned}$$

However, in view of (58) the last expression is bounded by \(c/\sqrt{t}\) for all \(t\ge 1\) and \(r\ge 1\). This completes the proof of Theorem 6. \(\square \)

7 Proofs IV: Multivariate limit theorems

7.1 The case of growing intensity: Proof of Theorem 7 (a)

This is a direct consequence of [36, Theorem 5.2]. \(\square \)

7.2 The case of growing windows: Proof of Theorem 7 (b) and (c)

7.2.1 The planar case \(d=2\): Proof of Theorem 7 (b)

Our goal is to use (15). The first term in (15) is bounded by a constant multiple of \(r^2e^{-r}\) by Lemma 12. To evaluate the second term we have to combine the lower variance bound from Lemma 9 with upper bounds for the terms \(M_{1,1}\), \(M_{1,2}\) and \(M_{2,2}\). In the proof of Theorem 5 (a) we have already shown that \(M_{1,1}(f^{(i)},f^{(i)})\le ce^r\) for \(i\in \{0,1\}\) and \(M_{2,2}(f^{(0)},f^{(0)})\le cre^r\), which implies that

$$\begin{aligned} M_{1,1}(e^{-r/2}{f}^{(i)},e^{-r/2}f^{(i)})&\le c\,e^{-2r}\,e^{r} = c\,e^{-r},\\ M_{2,2}(e^{-r/2}{f}^{(0)},e^{-r/2}f^{(0)})&\le c\,r\,e^{-2r}\,e^{r} = c\,r\,e^{-r}. \end{aligned}$$

Finally, up to a constant factor, an upper bound for \(M_{1,2}(e^{-r/2}{f}^{(i)},e^{-r/2}f^{(0)})\), for \(i\in \{0,1\}\), is given by \(M_{1,2}(e^{-r/2}{f}^{(0)},e^{-r/2}f^{(0)})\), which is equal to

$$\begin{aligned} e^{-2r}M_{1,2}( {f}^{(0)} ) \le c\,e^{-2r}\,(e^r+2 r^2\,e^r) \le c\,r^2\,e^{-r}. \end{aligned}$$

Thus we conclude from (15) that

$$\begin{aligned} d_3(\mathbf{F }_{r,t},N_{\varSigma _2}) \le c\,(r^2\,e^{-r}+e^{-r/2}+r^{1/2}\,e^{-r/2}+r\,e^{-r/2}) \le c\, r \,e^{-r/2}. \end{aligned}$$

Since the covariance matrix \(\varSigma _2\) is invertible, \(\Vert \varSigma _2^{-1}\Vert _{\mathrm{op}}\Vert \varSigma _2\Vert _{\mathrm{op}}^{1/2}\) and \(\Vert \varSigma _2^{-1}\Vert _{\mathrm{op}}^{3/2}\Vert \varSigma _2\Vert _{\mathrm{op}}\) are positive and finite constants only depending on t. Together with (16) this also implies that

$$\begin{aligned} d_2(\mathbf{F }_{r,t},N_{\varSigma _2}) \le c\, r \,e^{-r/2}. \end{aligned}$$

This completes the proof of Theorem 7 (b). \(\square \)

7.2.2 The spatial case \(d=3\): Proof of Theorem 7 (c)

Our goal is again to use the normal approximation bound (15). By Lemma 13 the first term in (15) is bounded from above by a constant multiple of \(r^{-1}\). It remains to provide upper bounds for the terms

$$\begin{aligned} M_{u,v} \qquad \text {for}\qquad (u,v)\in \{(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)\}. \end{aligned}$$

As in the planar case \(d=2\) all integrals which are involved have already been treated in the proof of the univariate limit theorem. Thus, using the bounds derived in the proof of Theorem 5 (b) we can complete the proof in dimension \(d=3\). \(\square \)