Does a central limit theorem hold for the $k$-skeleton of Poisson hyperplanes in hyperbolic space?

Poisson processes in the space of $(d-1)$-dimensional totally geodesic subspaces (hyperplanes) of a $d$-dimensional hyperbolic space of constant curvature $-1$ are studied, together with the $k$-dimensional Hausdorff measure of their $k$-skeleton restricted to bounded observation windows, for which explicit first- and second-order formulas are obtained. The central limit problem for the $k$-dimensional Hausdorff measure of the $k$-skeleton is approached in two different set-ups: (i) for a fixed window and growing intensities, and (ii) for fixed intensity and growing spherical windows. While in case (i) the central limit theorem is valid for all $d\geq 2$, it is shown that in case (ii) the central limit theorem holds for $d\in\{2,3\}$ and fails for $d\geq 4$ if $k=d-1$, and for $d\geq 7$ for general $k$. Rates of convergence are also studied and multivariate central limit theorems are obtained. Moreover, the situation in which the intensity and the spherical window grow simultaneously is discussed. In the background are the Malliavin-Stein method for normal approximation, the combinatorial moment structure of Poisson U-statistics and tools from hyperbolic integral geometry.


Introduction
Random tessellations in $\mathbb{R}^d$ form a class of mathematical objects that have been under intensive investigation in stochastic geometry during the last decades. In addition to intrinsic mathematical curiosity, a major reason for the continuing interest in random tessellations is that they provide highly relevant models for practical applications, for example, in telecommunication or materials science [16,41,42,50]. One of the principal random tessellation models in Euclidean space is induced by a Poisson process of hyperplanes. In $\mathbb{R}^d$ with $d \geq 2$ and in the stationary and isotropic case, the construction of a Poisson hyperplane tessellation can be described as follows. Fix a parameter $t > 0$ and consider a stationary Poisson point process on the real line with intensity $t$. To each point $p_i$ of the Poisson process we attach, independently of each other and independently of the underlying Poisson process, a random vector $u_i$ which is uniformly distributed on the unit sphere $\mathbb{S}^{d-1}$ of $\mathbb{R}^d$. Then to each pair $(p_i, u_i) \in \mathbb{R} \times \mathbb{S}^{d-1}$ we associate the hyperplane $H_i := \{x \in \mathbb{R}^d : \langle x, u_i \rangle = p_i\}$ and call the random collection of all such hyperplanes a (stationary and isotropic) Poisson hyperplane process in $\mathbb{R}^d$ with intensity $t$. The random hyperplanes $H_i$ almost surely divide the space $\mathbb{R}^d$ into countably many random convex polytopes. The collection of all these polytopes is a (stationary and isotropic) Poisson hyperplane tessellation in $\mathbb{R}^d$ with intensity $t$. We remark that the intensity parameter $t$, roughly speaking, controls the expected surface content of the Poisson hyperplane tessellation per unit volume. More precisely, $t = \mathbb{E}\mathcal{H}^{d-1}(Z \cap [0,1]^d)$, where $Z = \bigcup_{i=1}^{\infty} H_i$ is the random union set induced by the Poisson hyperplane process and $\mathcal{H}^{d-1}$ stands for the $(d-1)$-dimensional Hausdorff measure.
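The construction just described is straightforward to simulate. The following sketch (our own illustration, not taken from the paper; all function names are ours) samples the hyperplanes of a stationary and isotropic Poisson hyperplane process that hit a centred Euclidean ball of radius $R$: the signed distances of these hyperplanes form a Poisson process on $[-R, R]$ with intensity $t$, so their number is Poisson distributed with mean $2Rt$.

```python
import math
import random

def poisson(mean, rng):
    """Sample a Poisson random variable via Knuth's method:
    count uniforms until their product drops below e^{-mean}."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_hyperplanes(d, t, R, rng):
    """Hyperplanes {x : <x,u> = p} of a stationary isotropic Poisson
    hyperplane process with intensity t that hit the ball of radius R."""
    n = poisson(2 * R * t, rng)  # signed distances form a Poisson process on [-R, R]
    planes = []
    for _ in range(n):
        p = rng.uniform(-R, R)                       # signed distance to the origin
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]  # normalized Gaussian vector is
        norm = math.sqrt(sum(x * x for x in g))      # uniform on the sphere S^{d-1}
        u = [x / norm for x in g]
        planes.append((p, u))
    return planes

planes = sample_hyperplanes(3, 2.0, 1.0, random.Random(42))
```

Averaging the number of sampled hyperplanes over many repetitions recovers the mean $2Rt$.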
For Poisson hyperplane tessellations many first- and second-order quantities are explicitly available for a broad class of functionals, and a comprehensive central limit theory has also been developed over the last 15 years, cf. [18,20,31,51] and [56, Chapter 10] as well as the many references cited therein. In the literature, central limit theorems for functionals of Poisson hyperplanes have been considered in two different set-ups. In the first setting the tessellation is restricted to a fixed (usually convex) observation window and the asymptotic behaviour is explored as the intensity $t$ of the underlying Poisson process is increased. Alternatively, the intensity is kept fixed, while the size of the observation window is increased. By a simple scaling relation both set-ups are equivalent when homogeneous functionals (such as intrinsic volumes, positive powers of intrinsic volumes or integrals with respect to support measures) of the tessellation are considered, see [31, Corollary 6.2].
While the spherical analogues of Poisson hyperplane tessellations, namely Poisson great hypersphere tessellations, have been investigated, for example, in [1,23,21,22,38], only a few results seem to be available for such tessellations in standard spaces of constant negative curvature, see [4,48,55,61]. The spherical space of constant positive curvature is special due to its compactness, which implies that Poisson great hypersphere tessellations almost surely consist of only finitely many spherical random polytopes. In contrast, Poisson hyperplane tessellations in a standard space of constant negative curvature display a number of striking new phenomena that cannot be observed in their Euclidean or spherical counterparts. It is the purpose of the present paper to initiate a systematic study of intersection processes of Poisson hyperplane tessellations in the $d$-dimensional hyperbolic space $\mathbb{H}^d$ and to uncover some of these anticipated and remarkable new phenomena. We confine ourselves to the study of the total volume (in the appropriate dimension) of the intersection processes induced by Poisson hyperplanes in a (hyperbolic convex) test set. We explicitly identify the expectation and the covariance structure of these functionals by making recourse to general formulas for, and structural properties of, Poisson U-statistics and to Crofton-type formulas from hyperbolic integral geometry. In addition, and more importantly, we study probabilistic limit theorems for these functionals in the two asymptotic regimes described above for the Euclidean set-up. While the central limit theorems for growing intensity and fixed observation window are a direct consequence of general central limit theorems for Poisson U-statistics [31,51,58,59], it will turn out that the limit theory in the other regime, that is, when the intensity is kept fixed and the size of the observation window is increased, is fundamentally different.
We will prove that in this second regime a central limit theorem does hold in space dimensions $d = 2$ and $d = 3$. On the other hand, we will show that a central limit theorem fails for all space dimensions $d \geq 4$ if the total $(d-1)$-volume of the union of all hyperplanes is considered. For the total volume of intersection processes of arbitrary order this will be proved, for technical reasons, only for dimensions $d \geq 7$. We emphasize that this remarkable and surprising new feature is a consequence of the negative curvature of the underlying space and has no counterpart in the Euclidean or spherical set-up. Another interesting and unexpected feature is observed in this regime for the asymptotic covariance matrix of the vector of $k$-volumes of the $k$-skeletons, $k = 0, \ldots, d-1$. This matrix turns out to have full rank for $d = 2$, but rank one in dimensions $d \geq 3$. In addition, we will study the situation in which the intensity and the size of the observation window are increased simultaneously. In this case it will turn out that in all situations where the central limit theorem fails for fixed intensity, the Gaussian fluctuations are in fact preserved as soon as the intensity tends to infinity, independently of the behaviour of the size of the observation window (as long as it is bounded from below).
As anticipated above, the proofs of our results concerning first- and second-order properties of the total volume of intersection processes rely on general formulas for U-statistics of Poisson point processes as presented in [30] and on tools from hyperbolic integral geometry as developed in [8,15,54,60]. The central limit theorems we consider will be of quantitative nature, that is, we will provide explicit bounds on the quality (speed) of normal approximation measured in terms of both the Wasserstein and the Kolmogorov distance. Their proofs are based on general normal approximation bounds that have been derived in [12,51,59] using the Malliavin-Stein technique on Poisson spaces (see the collection [44] for a representative overview concerning this method). This directly implies the central limit theorem for fixed windows and growing intensities. On the other hand, for fixed intensity and when the window is a hyperbolic ball $B_r$ of radius $r$ around a fixed point in $\mathbb{H}^d$, crucial building blocks of these bounds are Crofton-type integrals of the form
$$\int_{A_h(d,k)} \mathcal{H}^k(H \cap B_r)^l \, \mu_k(dH),$$
where $A_h(d,k)$ denotes the space of $k$-dimensional totally geodesic subspaces of $\mathbb{H}^d$ and $\mu_k$ is the suitably normalized invariant measure on $A_h(d,k)$ (all terms will be explained in detail below). While in the Euclidean case the asymptotic behaviour of such integrals, as $r \to \infty$, is quite straightforward, this is not the case in the hyperbolic set-up. In contrast to the Euclidean case, it will turn out that their behaviour crucially depends on whether $l(k-1)$ is less than, greater than or equal to $d-1$ (see Lemma 16). In essence, the latter is an effect of the negative curvature, which in turn causes an exponential growth of volume of linearly expanding balls in $\mathbb{H}^d$. To show that a central limit theorem fails in higher space dimensions is arguably the most technical part of this paper.
We do this by showing that the fourth cumulant of the centred and normalized total volume of the intersection processes does not converge to $0$, the fourth cumulant of a standard Gaussian distribution. However, to bring this into contradiction with a central limit theorem we need to argue that the fourth power of the total volume is uniformly integrable, which in turn will be established by a consideration of fifth moments. This requires a fine analysis of combinatorial moment formulas for U-statistics of Poisson processes. In essence, and in contrast to the lower-dimensional cases $d = 2$ and $d = 3$, the failure of the central limit theorem for space dimensions $d \geq 4$ is due to the fact that in these dimensions the contribution of single hyperplanes is asymptotically no longer negligible.
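The cumulant criterion used here can be illustrated numerically (a toy example of our own, not part of the paper's argument): the empirical fourth cumulant $\kappa_4 = \mathbb{E}(X-\mu)^4 - 3\sigma^4$ of Gaussian samples is close to $0$, while for an exponential distribution it stays near its true value $\kappa_4 = 3! = 6$; a sequence of normalized random variables whose fourth cumulant stays bounded away from $0$ cannot be asymptotically Gaussian, given the uniform integrability mentioned above.

```python
import math
import random

def fourth_cumulant(xs):
    """Empirical fourth cumulant k4 = m4 - 3*m2^2 of a sample."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 - 3.0 * m2 * m2

rng = random.Random(1)
gauss = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
expo = [-math.log(1.0 - rng.random()) for _ in range(200_000)]  # Exp(1); cumulants k_n = (n-1)!

k4_gauss = fourth_cumulant(gauss)  # near 0 for a Gaussian
k4_expo = fourth_cumulant(expo)    # near 6 for Exp(1)
```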
We emphasize that the present paper contributes to an active line of current mathematical research in stochastic geometry on models in non-Euclidean spaces. As concrete examples we mention the studies of spherical convex hulls and convex hulls on half-spheres in [3,25,33]. Central limit theorems for the volume of random convex hulls in spherical spaces, hyperbolic spaces and Minkowski geometries were obtained in [5], and asymptotic normality of very general so-called stabilizing functionals of Poisson point processes on manifolds was established in [47]. More specifically, the papers [6,14,40,43] study various aspects of random geometric graphs in hyperbolic spaces, including central limit theorems for a number of parameters. Random tessellations of the unit sphere by great hyperspheres are the content of [1,21,22,38], while so-called random splitting tessellations in spherical spaces were introduced and investigated in [11,23]. The paper [9] is concerned with properties of Poisson-Voronoi tessellations on general Riemannian manifolds. Finally, the geometry of random fields on the sphere is studied in the monograph [34] and invariant random fields on spaces with a group action are described in [35]. In a similar vein, it is pointed out in [32] that a systematic study of the invariance properties of probability distributions under a general group action is missing; the book [32] therefore explores Markov processes whose distributions are invariant under the action of a Lie group.
The remaining parts of this paper are structured as follows. In the next section we formally define Poisson hyperplane tessellations in $\mathbb{H}^d$ and present our main results. We start in Section 2.1 with expectations and continue in Section 2.2 with second-order characteristics associated with the total volume of intersection processes. Our limit theorems will be discussed in Section 2.3. The necessary background material on hyperbolic geometry and hyperbolic integral geometry is collected in Section 3.1, the background material on Poisson U-statistics is the content of Sections 3.2 and 3.3. All remaining sections are devoted to the proofs of our results. In Section 4 we present the proofs for first- and second-order parameters and also carry out a detailed covariance analysis, which is needed for our multivariate central limit theory. Our results on generalizations of the K-function and the pair-correlation function are established in Section 5. All univariate limit theorems are proved in Section 6, while the arguments for the multivariate central limit theorems are provided in the final Section 7.

First-order quantities
We denote by $\mathbb{H}^d$, for $d \geq 2$, the $d$-dimensional hyperbolic space of constant curvature $-1$, which is supplied with the hyperbolic metric $d_h(\,\cdot\,,\,\cdot\,)$. We refer to Section 3.1 below for further background material on hyperbolic geometry and for a description of the conformal ball model for $\mathbb{H}^d$. Let $p \in \mathbb{H}^d$ be an arbitrary (fixed) point, also referred to as the origin. For $r \geq 0$ we denote by $B_r = \{x \in \mathbb{H}^d : d_h(x, p) \leq r\}$ the hyperbolic ball around $p$ with radius $r$. A set $K \subset \mathbb{H}^d$ is called a hyperbolic convex body, provided that $K$ is non-empty, compact and if with each pair of points $x, y \in K$ the (unique) geodesic connecting $x$ and $y$ is contained in $K$. The space of hyperbolic convex bodies is denoted by $\mathcal{K}^d_h$. For $k \in \{0, 1, \ldots, d-1\}$, the space $A_h(d,k)$ of $k$-dimensional totally geodesic subspaces of $\mathbb{H}^d$ carries a measure $\mu_k$, which is invariant under isometries of $\mathbb{H}^d$ (see Section 3.1 for the present normalization of this measure). For $s \geq 0$ we denote by $\mathcal{H}^s$ the $s$-dimensional Hausdorff measure with respect to the intrinsic metric of $\mathbb{H}^d$ as a Riemannian manifold. Finally, we write $\omega_k = 2\pi^{k/2}/\Gamma(k/2)$, $k \in \mathbb{N}$, for the surface area of the $k$-dimensional unit ball in the Euclidean space $\mathbb{R}^k$.
For $t > 0$, let $\eta_t$ be a Poisson process on the space $A_h(d, d-1)$ of hyperplanes in $\mathbb{H}^d$ with intensity measure $t\mu_{d-1}$. We refer to $\eta_t$ as a (hyperbolic) Poisson hyperplane process with intensity $t$. It induces a Poisson hyperplane tessellation in $\mathbb{H}^d$, i.e., a subdivision of $\mathbb{H}^d$ into (possibly unbounded) hyperbolic cells (generalized polyhedra), see Figure 1. For $i \in \{0, \ldots, d-1\}$ we consider the intersection process $\xi^{(i)}_t$ of order $d-i$ of the Poisson hyperplane process $\eta_t$ given by
$$\xi^{(i)}_t := \frac{1}{(d-i)!} \sum_{(H_1, \ldots, H_{d-i}) \in \eta^{d-i}_{t,\neq}} \mathbf{1}\{\dim(H_1 \cap \ldots \cap H_{d-i}) = i\}\, \delta_{H_1 \cap \ldots \cap H_{d-i}},$$
where $\eta^{d-i}_{t,\neq}$ is the set of $(d-i)$-tuples of different hyperplanes supported by $\eta_t$, $\delta_{(\,\cdot\,)}$ denotes the Dirac measure and $\dim(\,\cdot\,)$ stands for the dimension of the set in the argument. In this paper we are interested in random variables of the form
$$F^{(i)}_{W,t} := \int \mathcal{H}^i(E \cap W)\, \xi^{(i)}_t(dE), \qquad (1)$$
where $W \subset \mathbb{H}^d$ is a Borel set. In particular, $F^{(0)}_{W,t}$ is the total number of vertices in $W$ of the Poisson hyperplane tessellation, i.e., the total number of intersection points induced by the hyperplanes of $\eta_t$. In the Euclidean case these random variables have received particular attention in the literature, see e.g. [17,18,24,26,27,31,37,51,56] and the references cited therein. As in the Euclidean case, we will start by investigating the expectation of $F^{(i)}_{W,t}$.

Remark 1. In comparison with the Euclidean and spherical case we observe that precisely the same formula holds in these spaces. This is not surprising, since the proof of Theorem 1 is based only on the multivariate Mecke formula for Poisson processes and a recursive application of Crofton's formula from integral geometry, see Section 4. Since the latter holds for any standard space of constant curvature $\kappa \in \{-1, 0, 1\}$ with the same constant (cf. [8,54]), independently of the curvature $\kappa$, the result of Theorem 1 holds simultaneously for all standard spaces of constant curvature $\kappa \in \{-1, 0, 1\}$. In other words, this means that the expectation $\mathbb{E}F^{(i)}_{W,t}$ is not an appropriate quantity to 'feel' or to 'detect' the curvature of the underlying space. For this we will use second-order characteristics.

Second-order quantities
In a next step, we describe the covariance structure of the functionals $F^{(i)}_{W,t}$ defined in (1). The following explicit representation of the covariances will be derived from the Fock space representation of Poisson U-statistics.
Theorem 2 (Covariances). Let $W \subset \mathbb{H}^d$ be a Borel set, let $t > 0$, and let $i, j \in \{0, 1, \ldots, d-1\}$. Then

Remark 2. Since Theorem 2 follows from the general Fock space representation of Poisson U-statistics, the formula for $\mathrm{Cov}(F^{(i)}_{W,t}, F^{(j)}_{W,t})$ is formally the same for all spaces of constant curvature $\kappa \in \{-1, 0, 1\}$. However, the curvature properties of the underlying space are hidden in the integral-geometric expression
$$J_k(W) := \int_{A_h(d,k)} \mathcal{H}^k(H \cap W)^2\, \mu_k(dH).$$
In fact, if $\kappa \in \{-1, 0\}$ and if we replace $W$ by a ball $B_r$ of radius $r$ around an arbitrary fixed point, we can consider the asymptotic behaviour of $J_k(B_r)$, as $r \to \infty$, which is quite different in these two cases (note that in spherical spaces with constant curvature $\kappa = 1$ the range of $r$ is bounded). While in the Euclidean case $\kappa = 0$ the quantity $J_k(B_r)$ behaves like a constant multiple of $r^{d+k}$ for all choices of $k$, in the hyperbolic case $\kappa = -1$ we will show that $J_k(B_r)$ behaves like a constant multiple of $e^{(d-1)r}$ if $2k-1 < d$, like a constant multiple of $r e^{(d-1)r}$ if $2k-1 = d$ and like a constant multiple of $e^{2(k-1)r}$ if $2k-1 > d$, see Lemma 16 below. In this sense we can say that second-order properties of the functionals $F^{(i)}_{W,t}$ do reflect the curvature of the underlying space.

Continuing the discussion of second-order properties of Poisson hyperplane tessellations in $\mathbb{H}^d$, we now introduce and describe the K-function and the pair-correlation function of the $i$-dimensional Hausdorff measure restricted to the $i$-skeleton of the tessellation. In the Euclidean case these two functions have turned out to be essential tools in the second-order analysis of stationary random measures (see the original paper [53] and the recent monograph [2] as well as the references cited therein). To be precise, for $i \in \{0, 1, \ldots, d-1\}$ and fixed $t > 0$, we first consider the $i$-skeleton $\mathrm{skel}_i$ of the Poisson hyperplane tessellation in $\mathbb{H}^d$ with intensity $t$, which is defined as the random closed set given by the union of all $i$-dimensional intersections $H_1 \cap \ldots \cap H_{d-i}$ of hyperplanes from $\eta_t$. The $i$-dimensional Hausdorff measure on $\mathrm{skel}_i$ is denoted by $M_i$.
It is a stationary random measure on $\mathbb{H}^d$ with intensity $\lambda_i := \mathbb{E}M_i(B)$, where $B \subset \mathbb{H}^d$ is an arbitrary Borel set with $\mathcal{H}^d(B) = 1$. The value of $\lambda_i$ follows from Theorem 1. The K-function of the random measure $M_i$ is defined by
$$K_i(r) := \frac{1}{\lambda_i^2}\, \mathbb{E} \int_B \int_{\mathbb{H}^d} \mathbf{1}\{0 < d_h(x,y) \leq r\}\, M_i(dy)\, M_i(dx), \qquad r > 0. \qquad (3)$$
The condition $d_h(x,y) > 0$ is usually omitted in the definition of the K-function of a diffuse stationary random measure. For $i \in \{1, \ldots, d-1\}$, the proof of the following more general Theorem 3 will show that $K_i(r)$ indeed remains unchanged if we drop the condition $d_h(x,y) > 0$. For $i = 0$, however, the random measure $M_0$ is a stationary point process in $\mathbb{H}^d$ and then the restriction $d_h(x,y) > 0$ is common. The proof of Theorem 3 will also show that the summands corresponding to indices $n \in \{0, \ldots, d-1\}$ in (4) are not affected by the restriction, but the summand with $n = d$ will be zero. If we define $K_i(B, r)$ as in (3), but for a general measurable set $B \subset \mathbb{H}^d$, it follows from the stationarity of $\eta_t$ that the measure $K_i(\,\cdot\,, r)$ is isometry invariant and hence a constant multiple of $\mathcal{H}^d(\,\cdot\,)$, provided it is locally finite. In Theorem 3, this will be shown and the constant will be determined. We will also see that $K_i(r)$ is differentiable, which allows us to consider the pair-correlation function
$$g_i(r) := \frac{K_i'(r)}{\omega_d \sinh^{d-1}(r)}, \qquad r > 0.$$
Roughly speaking, it describes the probability of finding a point on the $i$-skeleton at geodesic distance $r$ from another point belonging to $\mathrm{skel}_i$. More generally, and in analogy to the covariances considered in Theorem 2, we will consider the mixed K-function $K_{ij}$ for $i, j \in \{0, \ldots, d-1\}$. For $r > 0$ and a measurable set $B \subset \mathbb{H}^d$ with $\mathcal{H}^d(B) = 1$ it is defined by
$$K_{ij}(r) := \frac{1}{\lambda_i \lambda_j}\, \mathbb{E} \int_B \int_{\mathbb{H}^d} \mathbf{1}\{0 < d_h(x,y) \leq r\}\, M_i(dy)\, M_j(dx)$$
and describes the random measure $M_i$ as seen from a typical point of $M_j$, in the sense of Palm distributions. In particular, we retrieve the ordinary K-function by the special choice $j = i$. The mixed pair-correlation function $g_{ij}$ is then defined in the obvious way by differentiation of $K_{ij}$, namely
$$g_{ij}(r) := \frac{K_{ij}'(r)}{\omega_d \sinh^{d-1}(r)}, \qquad r > 0.$$
As in the case of the K-function, the condition that $0 < d_h(x,y)$ can be omitted if $i \geq 1$ or $j \geq 1$.
In (4) we restrict the summation to $n \leq d-1$ in order to avoid an undefined expression which arises for $i = j = 0$ and $n = d$. Alternatively, for $n = d$ the factor $\omega_{d-n} = \omega_0 = 2/\Gamma(0) = 0$, and the product with the infinite integral can be defined to be zero.

Remark 3.
An inspection of the proof shows that Theorem 3 is based only on Crofton's formula and Lemma 10, which in turn also rests on Crofton's formula. However, since the latter holds for any space of constant curvature $\kappa \in \{-1, 0, 1\}$ with the same constant (cf. [8,54]), independently of the curvature $\kappa$, Theorem 3 remains valid also in spherical and Euclidean spaces of curvature $\kappa = 1$ and $\kappa = 0$, respectively. Namely, defining the modified sine function by
$$\mathrm{sn}_\kappa(r) := \begin{cases} \sinh(r), & \kappa = -1, \\ r, & \kappa = 0, \\ \sin(r), & \kappa = 1, \end{cases}$$
the formulas remain valid with $\sinh$ replaced by $\mathrm{sn}_\kappa$, for $r > 0$ if $\kappa \in \{-1, 0\}$ and $0 < r < \pi$ if $\kappa = 1$. For $i = j = d-1$ and $\kappa = 1$ these formulas have been proved in [23, Section 6.2] based on a different normalization. Moreover, for $\kappa = 0$ the formula for $g_0(r)$ appears as identity (3.15) in [19], while $g_{d-1}(r)$ can be found in [57, Section 7]. As already explained in [20], for general $i \in \{0, 1, \ldots, d-1\}$ it can in principle be deduced from an explicit formula for the second-order moments of the total volume of intersection processes, see [36, p. 164].
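The growth trichotomy for $J_k(B_r)$ described in Remark 2 can be encoded in a few lines (our own illustration; the function names are not from the paper). It also makes visible why the second-order behaviour changes in high dimensions: for $k = d-1$ the regime $e^{2(k-1)r}$ dominates $e^{(d-1)r}$ exactly when $d \geq 4$, since $2(d-2) > d-1$ if and only if $d > 3$.

```python
def jk_growth_order(d, k):
    """Asymptotic order of J_k(B_r) as r -> infinity in hyperbolic space H^d,
    following the trichotomy 2k-1 < d, = d, > d (cf. Lemma 16)."""
    if 2 * k - 1 < d:
        return "e^((d-1)r)"
    if 2 * k - 1 == d:
        return "r e^((d-1)r)"
    return "e^(2(k-1)r)"

# For k = d-1 (the (d-1)-volume of the union of the hyperplanes), the
# regime switches at d = 4: 2(d-1)-1 > d iff d > 3.
switch = [(d, jk_growth_order(d, d - 1)) for d in range(2, 6)]
```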

Limit theorems
Our next result is a central limit theorem for $F^{(i)}_{W,t}$, for a fixed hyperbolic convex body $W$, when the intensity parameter $t$ tends to infinity. We will measure the distance between (the laws of) two random variables by the Wasserstein and the Kolmogorov distance. For their definitions we refer to Section 3.2 below.
Theorem 4 (CLT, growing intensity). Let $d \geq 2$, $i \in \{0, 1, \ldots, d-1\}$ and let $W \in \mathcal{K}^d_h$ be a fixed hyperbolic convex body with non-empty interior. Let $N$ be a standard Gaussian random variable, and let $d(\,\cdot\,,\,\cdot\,)$ denote either the Wasserstein or the Kolmogorov distance. Then there exists a constant $c \in (0, \infty)$ such that
for all $t \geq 1$.
As already explained in the introduction, the central limit problem for $F^{(i)}_{W,t}$ can also be approached in another set-up, which in the Euclidean case is equivalent to the one just discussed, but turns out to be fundamentally different in hyperbolic space. More precisely, we now turn to the case where the intensity $t$ is fixed, while the size of the observation window is increased. We do this only for spherical windows in $\mathbb{H}^d$. In other words, we choose for $W$ the hyperbolic ball $B_r$ (around the origin $p$) and write $F^{(i)}_{r,t}$ in this case. Our next result is a central limit theorem for $F^{(i)}_{r,t}$ for dimension $d = 2$ in part (a) and for $d = 3$ in part (b). Moreover, it turns out that a central limit theorem for $F^{(i)}_{r,t}$ is no longer valid in any space dimension $d \geq 4$, see part (c). We emphasize that this surprising phenomenon is in sharp contrast to the Euclidean case [18,31,51] and is an effect of the negative curvature.
Theorem 5 (CLT, growing spherical window). Let t ≥ 1, let N be a standard Gaussian random variable, and let d( · , · ) denote either the Wasserstein or the Kolmogorov distance.
for $r \geq 1$.

(ii) It is instructive to rewrite the normal approximation bounds in Theorem 5 (a) and (b) as follows. For $d = 2$ and $i \in \{0, 1\}$ we have that
and for $d = 3$ we have, again for $r \geq 1$,
Here $\hat{c}_2, \hat{c}_3 \in (0, \infty)$ are again constants depending only on $t$. This means that in dimension $d = 2$ and for $i = 0$ the speed of convergence is the same as in the Euclidean case, up to the logarithmic factor. Moreover, it shows that $d = 3$ is the critical dimension for the central limit theorem, which in this case only holds with a very much slowed-down rate of convergence.
Theorem 4 shows that for fixed radius $r$ and increasing intensity $t$ a central limit theorem for $F^{(i)}_{r,t}$ with $i \in \{0, 1, \ldots, d-1\}$ holds. On the other hand, according to Theorem 5 (c) the central limit theorem breaks down for dimensions $d \geq 4$ (if the total surface area is considered) or $d \geq 7$ (for general $i \in \{0, 1, \ldots, d-1\}$) if the intensity $t$ stays fixed and $r \to \infty$. Against this background the question arises whether in these cases the central limit behaviour can be preserved if the intensity $t$ and the radius $r$ tend to infinity simultaneously. In fact, the following result states that this is indeed the case. More precisely, it says that, independently of the behaviour of $r$, the central limit theorem holds as soon as $t \to \infty$ (and $r$ is bounded from below by 1).
Theorem 6 (CLT for simultaneous growth of intensity and window). Let $d \geq 4$ and $i = d-1$, or $d \geq 7$ and $i \in \{0, 1, \ldots, d-1\}$. Also, let $N$ be a standard Gaussian random variable. Then there is a constant $c \in (0, \infty)$ such that
for all $r \geq 1$ and $t \geq 1$, where $d(\,\cdot\,,\,\cdot\,)$ denotes either the Wasserstein or the Kolmogorov distance.
Remark 5. In dimensions $d = 2$ and $d = 3$ we also have normal approximation bounds that simultaneously involve the two parameters $t$ and $r$. In fact, for $d = 2$ the bounds (35) and (39) below show that
holds for all $t \geq 1$, $r \geq 1$ and $i \in \{0, 1\}$. Similarly, for $d = 3$ the estimates (41), (45) and (46) prove that
for all $t \geq 1$ and $r \geq 1$. In both cases, $d(\,\cdot\,,\,\cdot\,)$ stands for either the Wasserstein or the Kolmogorov distance. In this way we recover Theorem 4 for $d = 2$ and $d = 3$ in the special case where $W = B_r$ with $r$ fixed, and we recover Theorem 5 (a) and (b) by fixing $t$.
Finally, let us turn to the multivariate set-up. To compare the distance between (the laws of) two random vectors we use what are known as the $d_2$- and the $d_3$-distance; for their definition we refer to Section 3.3 below. We approach the multivariate central limit theorem by considering, as above, two different settings. To handle the central limit problem for a fixed window $W \in \mathcal{K}^d_h$ and growing intensities we define for $t > 0$ the $d$-dimensional random vector
Moreover, for $i, j \in \{0, 1, \ldots, d-1\}$ we introduce the asymptotic covariances $\tau^{i,j}_W$ and the asymptotic covariance matrix $T_W$ of the random vector $\mathbf{F}_{W,t}$, as $t \to \infty$, by
The existence of the limit and the precise value of $\tau^{i,j}_W$ follow from (17) below. It is easy to see that $T_W$ has rank one, as in Euclidean space.
In view of Theorem 5, for fixed intensity $t > 0$ and a sequence of growing spherical windows, taking $W = B_r$ for $r > 0$ we put
and define the asymptotic covariance matrix $\Sigma_d$ of the random vector $\mathbf{F}_{r,t}$, as $r \to \infty$. The covariance matrices $\Sigma_d$ are explicitly given by (19) for $d = 2$, (27) for $d = 3$ and (28) for $d \geq 4$ below. Moreover, in Section 4.5 we determine convergence rates. In particular, we will show that $\Sigma_2$ has full rank (is positive definite), while $\Sigma_d$ has rank one for $d \geq 3$. We remark that this is in sharp contrast to the corresponding result in Euclidean spaces, where the asymptotic covariance matrix has rank one for all $d \geq 2$, see [18, Theorem 5.1 (ii)]. Note that the dependence of these limits on the fixed intensity $t > 0$ is not made explicit by our notation, but this dependence is shown in Lemmas 20, 21 and 23.
In order to state the multivariate central limit theorem, we use the $d_2$- and the $d_3$-distance for random vectors (see Section 3.3 for explicit definitions).
for all r ≥ 1.

Remark 6.
After having seen that in the univariate case the central limit theorem for $d \geq 4$ can be preserved by a simultaneous growth of the intensity $t$ and the radius $r$, the question arises whether such a phenomenon also holds in the multivariate set-up. This is in fact the case, but for the sake of brevity we decided not to present the details.
Background material and preparations

where $\|\cdot\|_{\mathrm{euc}}$ stands for the usual Euclidean norm. This is known as the conformal ball model for $\mathbb{H}^d$, see [49, Chapter 4.5]. However, it should be emphasized that our arguments are independent of the special choice of a model for a simply connected, geodesically complete space of constant negative curvature. We write $B(z,r) = \{x \in \mathbb{H}^d : d_h(x,z) \leq r\}$ for the hyperbolic ball with centre $z \in \mathbb{H}^d$ and radius $r \geq 0$ and put $B_r = B(p, r)$, where $p$ is a fixed reference point. In this paper, the $s$-dimensional Hausdorff measure $\mathcal{H}^s$ is always understood with respect to the intrinsic metric of $\mathbb{H}^d$. For later reference we need a formula for the surface area of a hyperbolic ball $B(z,r)$. It is given by
$$\mathcal{H}^{d-1}(\partial B(z,r)) = \omega_d \sinh^{d-1}(r),$$
where $\omega_d = d\kappa_d = 2\pi^{d/2}/\Gamma(d/2)$ is the surface area of a $d$-dimensional unit ball in the Euclidean space $\mathbb{R}^d$ and $\kappa_d$ is its volume. Moreover, the volume of a hyperbolic ball of radius $r$ is given by
$$\mathcal{H}^d(B(z,r)) = \omega_d \int_0^r \sinh^{d-1}(s)\, ds.$$
We refer to Sections 3.3 and 3.4 and especially to formulas (3.25) and (3.26) in the monograph [10]. For the special case $d = 2$, we thus get $\mathcal{H}^2(B(z,r)) = 2\pi(\cosh(r) - 1)$. Here, $\cosh$ and $\sinh$ are the hyperbolic cosine and sine, which are given by
$$\cosh(x) = \frac{e^x + e^{-x}}{2} \qquad \text{and} \qquad \sinh(x) = \frac{e^x - e^{-x}}{2},$$
respectively. We will frequently make use of the fact that $\cosh(x), \sinh(x) \in \Theta(e^x)$, as $x \to \infty$, where $\Theta(\,\cdot\,)$ stands for the usual Landau symbol. Additionally we will use the following inequalities.
Lemma 8. The function $\sinh$ satisfies the inequalities

Proof. (a) By the definition of the hyperbolic sine function, we get

Every $k$-plane $H \in A_h(d,k)$ is uniquely determined by its orthogonal subspace $L_{d-k}$ passing through the origin $p$ and the intersection point $\{x\} = H \cap L_{d-k}$. Using these facts, Santaló [54, Equation (17.41)] (see also [60, Proposition 2.1.6], [15, Equation (9)]) provides a useful representation of an isometry invariant measure on $A_h(d,k)$, which we use here with a different normalization. For a Borel set $B \subset A_h(d,k)$, it is given by
where $H(L,x)$ is the $k$-plane orthogonal to $L$ passing through $x$.
Remark 7. The current normalization of the measure µ k differs from the normalization of the measure dL k used in [54] by the constant ω d · · · ω d−k+1 /(ω k · · · ω 1 ). This also affects the constants in the formulas from hyperbolic integral geometry taken from [54]. The reason for the present normalization is to simplify a comparison of our results to corresponding results in Euclidean and spherical space.
According to [54, Equation (14.69)] the measure $\mu_k$ satisfies the following Crofton-type formula. In fact, the discussion in [8, Section 7] allows us to state the result not only for sets bounded by smooth submanifolds (as in [54]), but for much more general sets, which include arbitrary convex sets as a very special case. The following lemma holds for $\mathcal{H}^{d+i-k}$ measurable sets $W \subset \mathbb{H}^d$ which are Hausdorff $(d+i-k)$-rectifiable.

Remark 8. Strictly speaking, the case $k = i$ is not covered by [8]. Although the framework in [8] should extend to this marginal case, we prefer to provide an elementary direct argument for the case $k = i$. In this case, the left-hand side of (7) defines an isometry invariant Borel measure on $\mathbb{H}^d$. Therefore, in order to confirm (7) in this case, it is sufficient to show that the equality holds for $W = B_r$, $r \geq 0$. Since equality holds for $r = 0$ and in view of (5), it is sufficient to show that $\omega_d \sinh^{d-1}(r)$ is the derivative with respect to $r$ of the function defined by
where we used (6) and (18) for the equality. The derivative of $h$ can be determined by basic rules of calculus. Using that $\operatorname{arcosh}(\cosh(r)/\cosh(r)) = \operatorname{arcosh}(1) = 0$, we thus obtain
The substitution $\sinh(t) = \sinh(r) \cdot x$ leads to
as was to be shown.
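The identity used in Remark 8, namely that $\omega_d \sinh^{d-1}(r)$ is the derivative of the hyperbolic ball volume $\omega_d \int_0^r \sinh^{d-1}(s)\,ds$, is easy to check numerically (a sanity check of our own, not part of the proof; the function names are ours):

```python
import math

def omega(d):
    """Surface area of the d-dimensional unit ball in R^d: 2*pi^(d/2)/Gamma(d/2)."""
    return 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)

def hyperbolic_ball_volume(d, r, steps=20000):
    """H^d-volume of a hyperbolic ball of radius r via the trapezoid rule
    applied to omega_d * integral_0^r sinh^{d-1}(s) ds."""
    h = r / steps
    s = 0.5 * (math.sinh(0.0) ** (d - 1) + math.sinh(r) ** (d - 1))
    s += sum(math.sinh(i * h) ** (d - 1) for i in range(1, steps))
    return omega(d) * h * s

r = 1.5
# d = 2: compare with the closed form 2*pi*(cosh(r) - 1)
v2 = hyperbolic_ball_volume(2, r)
closed = 2.0 * math.pi * (math.cosh(r) - 1.0)

# d = 3: the central finite difference of the volume should match the
# surface area omega_d * sinh^{d-1}(r)
d, eps = 3, 1e-4
deriv = (hyperbolic_ball_volume(d, r + eps) - hyperbolic_ball_volume(d, r - eps)) / (2 * eps)
surface = omega(d) * math.sinh(r) ** (d - 1)
```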
Remark 9. Although both sides of (9) define measures with respect to their dependence on a Borel set $W \subset \mathbb{H}^d$, for $k = i$ the equality in (9) in general does not extend from $(d+i-k)$-rectifiable sets to general Borel sets. This is due to deep classical results in the structure theory of geometric measure theory, see [13, p. 2].

We will frequently make use of the following transformation formula.
Proof. Let $\varphi \in I(\mathbb{H}^d)$ be an arbitrary isometry, and let $B$ be a measurable subset of $A_h(d,k)$. Then we have
by the isometry invariance of $\mu_{d-1}$. Since, up to a multiplicative constant, $\mu_k$ is the only isometry invariant measure on $A_h(d,k)$, the formula follows up to the determination of the constant, which is independent of the function $f$. We determine the constant by choosing
On the other hand, applying the Crofton formula directly with $i = k$, we get
A comparison yields the constant and proves the assertion of the lemma.
In what follows we use the convention that dim(∅) = −1.
We apply induction on $n$ and start by observing that for $n = 1$ there is nothing to show. For $n \geq 2$ we have
We decompose the indicator function as follows:
Since the second indicator function on the right-hand side is identically equal to zero, we arrive at

By the induction hypothesis and Fubini's theorem we get
which covers the case of the third indicator function on the right-hand side of (8). Finally, we write c(H 1 , . . . , H n−1 ) for an arbitrary point chosen on H 1 ∩ . . . ∩ H n−1 (in a measurable way). Then, again by Fubini's theorem, we conclude for the first indicator function on the right-hand side of (8) that This completes the proof.

Poisson U-statistics
Let (X, X) be a measurable space, which is supplied with a σ-finite measure µ. Let η be a Poisson process on X with intensity measure µ (we refer to [30] for a formal construction). Further, fix m ∈ N and let h : X^m → R be a non-negative, measurable and symmetric function, which is integrable with respect to µ^m, the m-fold product measure of µ. By a Poisson U-statistic (of order m and with kernel h) we understand a random variable of the form U = Σ_{(x_1,...,x_m) ∈ η_m} h(x_1, . . . , x_m), where η_m denotes the collection of all m-tuples of distinct points of η, see [30]. Functionals of this type have received considerable attention in the literature, especially in connection with applications in stochastic geometry, see, for example, [12,24,28,29,31,44,51,58,59]. In the following, we will frequently use the following consequence of the multivariate Mecke equation for Poisson functionals [30, Theorem 4.4]. Namely, the expectation EU of the Poisson U-statistic U is given by the integral of h with respect to µ^m. In the present paper we need a formula for the centred moments of the Poisson U-statistic U as well as a bound for the Wasserstein and the Kolmogorov distance of a normalized version of U and a standard Gaussian random variable. To state such results, we need some more notation. Following [30, Chapter 12], for an integer n ∈ N we let Π_n and Π*_n be the set of partitions and sub-partitions of [n] := {1, . . . , n}, respectively. We recall that by a sub-partition of {1, . . . , n} we understand a family of non-empty disjoint subsets (called blocks) of {1, . . . , n}, and that a sub-partition σ is called a partition if the union of its blocks is {1, . . . , n}. For σ ∈ Π*_n we let |σ| be the number of blocks of σ and ‖σ‖ the number of elements in the union of the blocks of σ. In particular, a partition σ ∈ Π_n satisfies ‖σ‖ = n. For ℓ ∈ N and n_1, . . . , n_ℓ ∈ N, let n := n_1 + . . . + n_ℓ and arrange the elements of [n] in a diagram of ℓ rows, where row p contains n_p elements. We define Π**_{≥2}(n_1, . . . , n_ℓ) as the class of all sub-partitions σ ∈ Π*_n such that

(i) all blocks of σ have at least two elements,
(ii) each block of σ contains at most one element from each row,
(iii) in each row there is at least one element that belongs to some block of σ.

Figure 3 (caption): Left panel: example of a sub-partition belonging to Π**_{≥2}(4,4,4). Right panel: example of a sub-partition not belonging to Π**_{≥2}(4,4,4); the block indicated by the dashed curve contradicts condition (i), the block indicated by the dotted curve contradicts condition (ii), and since no element from the last row is contained in any block, condition (iii) is violated as well.
For an example and a counterexample we refer to Figure 3.
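The three defining conditions of Π**_{≥2}(n_1, . . . , n_ℓ) are easy to check mechanically. The following sketch (the helper name `in_class` is ours, not from the paper) encodes a sub-partition as a list of blocks of (row, column) pairs and tests conditions (i)-(iii):

```python
from itertools import chain

def in_class(blocks, row_lengths):
    """Check conditions (i)-(iii) for a sub-partition of a diagram whose
    elements are pairs (row, column); `blocks` is a list of sets."""
    # (i) every block has at least two elements
    if any(len(b) < 2 for b in blocks):
        return False
    # (ii) each block meets each row at most once
    for b in blocks:
        rows = [p for (p, q) in b]
        if len(rows) != len(set(rows)):
            return False
    # (iii) every row contributes at least one element to some block
    covered_rows = {p for (p, q) in chain.from_iterable(blocks)}
    return covered_rows == set(range(len(row_lengths)))

# diagram with three rows of length 4, as in the Figure 3 setting
rows = (4, 4, 4)
good = [{(0, 0), (1, 0)}, {(1, 1), (2, 1)}]   # satisfies (i)-(iii)
bad = [{(0, 0)}, {(1, 0), (1, 1)}]            # violates (i), (ii) and (iii)
print(in_class(good, rows), in_class(bad, rows))
```

The second example fails all three conditions at once: a singleton block, a block with two elements in one row, and an uncovered last row.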
For two functions g_1 : X^{ℓ_1} → R and g_2 : X^{ℓ_2} → R (with ℓ_1, ℓ_2 ∈ N), we denote by g_1 ⊗ g_2 : X^{ℓ_1+ℓ_2} → R their usual tensor product. We are now in the position to rephrase the following formula for the centred moments of the Poisson U-statistic U (see [30, Proposition 12.13]): the ℓ-th centred moment E(U − EU)^ℓ is the sum, over all σ ∈ Π**_{≥2}(m, . . . , m) with ℓ rows, of the integrals of (h^{⊗ℓ})_σ, where h^{⊗ℓ} is the ℓ-fold tensor product of h with itself and (h^{⊗ℓ})_σ : X^{mℓ+|σ|−‖σ‖} → R stands for the function that arises from h^{⊗ℓ} by replacing all variables that are in the same block of σ by a new, common variable. Here, we implicitly assume that the function h is such that all integrals that appear on the right-hand side are well-defined. This formula will turn out to be a crucial tool in the proof of Theorem 5 (c).

Normal approximation bounds
In this section, we continue to use the notation and the set-up of the preceding section. But since we turn to normal approximation bounds for Poisson U-statistics, some further notation is required. For u, v ∈ {1, . . . , m} we let Π^con_{≥2}(u, u, v, v) be the class of partitions in Π_{≥2}(u, u, v, v) whose diagram is connected, which means that the rows of the diagram cannot be divided into two subsets, each defining a separate diagram (cf. [45, page 47]). More formally, there are no sets A, B ⊂ [4] with A ∪ B = [4], A ∩ B = ∅ and such that each block either consists of elements from rows in A or of elements from rows in B, see Figure 4 for an example and a counterexample. We can now introduce the quantities M_{u,v}(h) for u, v ∈ {1, . . . , m} (again, we implicitly assume that h is such that the integrals appearing in (11) are well-defined). To measure the distance between two real-valued random variables X, Y (or, more precisely, their laws), we use the Wasserstein and the Kolmogorov distance. The normal approximation bound then takes the form of an upper bound on d( · , · ), where d( · , · ) stands for either the Wasserstein or the Kolmogorov distance. Here, one can choose c_m = 2m^{7/2} for the Wasserstein distance and c_m = 19m^5 for the Kolmogorov distance.
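Connectedness of the diagram of a partition in Π^con_{≥2}(u, u, v, v) can be tested with a union-find structure over the four rows: each block merges the rows it meets, and the diagram is connected exactly when a single component remains. A sketch (the helper name `diagram_connected` is ours):

```python
def diagram_connected(blocks, num_rows=4):
    """Rows are connected if they cannot be split into two groups such that
    every block stays within one group (the defining property of Pi^con)."""
    parent = list(range(num_rows))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for b in blocks:
        rows = sorted({p for (p, q) in b})
        for r in rows[1:]:
            parent[find(r)] = find(rows[0])
    return len({find(r) for r in range(num_rows)}) == 1

# (u, u, v, v) diagram with u = v = 2, rows numbered 0..3
connected = [{(0, 0), (1, 0)}, {(1, 1), (2, 1)}, {(2, 0), (3, 0)}]
split = [{(0, 0), (1, 0)}, {(2, 0), (3, 0)}]   # rows {0,1} and {2,3} separate
print(diagram_connected(connected), diagram_connected(split))
```

The second example is exactly the forbidden situation A = {0, 1}, B = {2, 3} from the formal definition.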
Finally, we turn to a multivariate normal approximation for Poisson U-statistics. For integers p ∈ N and m_1, . . . , m_p ∈ N, and for each i ∈ {1, . . . , p}, let U_i be a Poisson U-statistic of order m_i based on a kernel function h^(i) : X^{m_i} → R satisfying the same assumptions as above. We form the p-dimensional random vector U := (U_1, . . . , U_p) and our goal is to compare U with a p-dimensional Gaussian random vector N. To do this, we use the so-called d_2- and d_3-distance. Here, C^2 is the space of functions ϕ : R^p → R which are twice partially continuously differentiable and satisfy a suitable growth condition, where ‖ · ‖ denotes the Euclidean norm in R^p and ‖ · ‖_op stands for the operator norm. Moreover, C^3 is the space of functions ϕ : R^p → R which are thrice partially continuously differentiable and satisfy an analogous condition. Moreover, similarly to the quantities M_{u,v}(h) introduced in (11), for i, j ∈ {1, . . . , p}, u ∈ {1, . . . , m_i} and v ∈ {1, . . . , m_j}, we define quantities M^{i,j}_{u,v}, where the integrated kernels h^(i)_u and h^(j)_v are given by (12). This allows us to state the following multivariate normal approximation bound from [58, Theorem 6.3] (see also [52, Equation (5.1)]). Namely, if N is a centred Gaussian random vector with covariance matrix Σ = (σ_{i,j})_{i,j=1}^p, then the d_3-distance between U and N admits the stated bound. If the covariance matrix Σ is positive definite, then a corresponding bound holds for the d_2-distance as well.

We now return to the functionals F^(i)_{W,t}. To shorten our notation we introduce the kernel f^(i), which allows us to rewrite F^(i)_{W,t} accordingly. In other words, F^(i)_{W,t} is a Poisson U-statistic of order d − i and with kernel f^(i). It is well known (see [30,28,29,31,51]) that Poisson U-statistics admit a Fock space representation having only a finite number of terms. This leads to the variance and covariance representations in which the functions f^(i)_n are given by (12), and we write ‖ · ‖_n for the norm in the L^2-space L^2(µ^n_{d−1}) with respect to the n-fold product measure of µ_{d−1}. Similarly, for i, j ∈ {0, 1, . . . , d − 1} the covariance Cov(F^(i)_{W,t}, F^(j)_{W,t}) admits an analogous representation, where ⟨ · , · ⟩_n denotes the standard scalar product in L^2(µ^n_{d−1}).

Expectations: Proof of Theorem 1
Theorem 1 is a consequence of the transformation formula in Lemma 10 and the Crofton formula in Lemma 9 with k = i there. In fact, using (9) we obtain the asserted expectation and the proof is complete. □

As a function of the window W, the expectation E F^(i)_{W,t} defines an isometry invariant measure. One could argue that it must be a constant multiple of H^d, if one knows that it is also locally finite. Theorem 1 shows that this is indeed the case and also yields the constant.

Variances: Proof of Theorem 2
To investigate the variance of F^(i)_{W,t} we use the representation as a Poisson U-statistic, especially (16). We start by simplifying the kernel functions f^(i)_n.

Proof. We use the definition of f^(i)_n.

Here we used that since
For the variances and covariances, we need the L 2 -norms and the scalar products of these functions.
In particular, the choice W = B_r for some r > 0 yields the corresponding formulas for balls.

Proof of Theorem 2. This is now a direct consequence of (16) and Corollary 13.

Variance: Asymptotic behaviour
In this section we look at the variance of F^(i)_{B_r,t} in the asymptotic regime, as r → ∞. We divide our analysis into the three different cases d = 2, d = 3 and d ≥ 4. Before that, we start with a number of preparations.

Preliminaries
The following lemma will be repeatedly applied below.
If, in addition, 0 ≤ s ≤ r − 1/2, then a refined estimate holds.

Proof. We start by noting that L_i(s) ∩ B_r is an i-dimensional hyperbolic ball of radius arcosh(cosh(r)/cosh(s)). On the other hand, Lemma 14 and Lemma 8 imply the asserted bounds, where we used that s ≤ r − 1/2 to obtain the equality in the third line. The last inequality follows from the positivity of the last term, which holds for i ≥ 2, since 2^{i+1} ≤ e^{(5/2)(i−1)} implies an estimate which is equivalent to the desired inequality.
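The radius formula arcosh(cosh(r)/cosh(s)) used in the proof follows from the hyperbolic Pythagorean theorem: a point of a totally geodesic subspace at distance s from p, lying at intrinsic distance ρ from the foot of the perpendicular, has distance arcosh(cosh(s) cosh(ρ)) from p. A quick numerical confirmation in the hyperboloid model of H^2 (the specific coordinates and parameter values are our illustrative choices):

```python
import math

def lorentz(x, y):
    # Minkowski bilinear form for the hyperboloid model of H^2
    return x[0] * y[0] - x[1] * y[1] - x[2] * y[2]

def dist(x, y):
    # hyperbolic distance: cosh d(x, y) = <x, y>_Lorentz
    return math.acosh(lorentz(x, y))

s, rho = 0.7, 1.3
p = (1.0, 0.0, 0.0)                        # origin of H^2
q = (math.cosh(s), math.sinh(s), 0.0)      # foot point at distance s from p
# move distance rho along the geodesic through q orthogonal to [p, q]
x = tuple(math.cosh(rho) * qi + math.sinh(rho) * ei
          for qi, ei in zip(q, (0.0, 0.0, 1.0)))
# hyperbolic Pythagoras: cosh d(p, x) = cosh(s) cosh(rho)
print(dist(p, x), math.acosh(math.cosh(s) * math.cosh(rho)))
```

Hence x lies in B_r exactly when cosh(s) cosh(ρ) ≤ cosh(r), i.e. when ρ ≤ arcosh(cosh(r)/cosh(s)).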
We will need later the following lemma.
Let L_k(s) be a k-dimensional totally geodesic subspace such that d_h(L_k(s), p) = s. Then for any l ∈ N there exist constants c, C > 0, depending only on k, l and d, such that the stated bounds hold.

Proof. The asserted equality of the two integral expressions is clear from the argument for the second claim in Corollary 13.
For k = 0 the integral is just the volume of a geodesic ball of radius r which can be bounded from above and below by a positive constant times exp(r(d − 1)).
In the following, we repeatedly use that the intersection L_k(s) ∩ B_r is a k-dimensional hyperbolic ball of radius arcosh(cosh(r)/cosh(s)). The constants c and C used in the calculations below only depend on k, l, d and may vary from line to line. Suppose that k ≥ 2. The substitution u = r − s and an application of Lemma 14 yield the upper bound. To obtain the lower bound, we first establish a suitable estimate for u ≥ 0.2. Now we substitute again u = r − s. An application of Lemma 8 and the lower bound from Lemma 14 then yield the claim. For k = 1, the proof is almost the same, except that we simply use that ∫_0^a sinh^{k−1}(s) ds = a for a ≥ 0 (since sinh^0 ≡ 1).
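For k = 0 the claim reduces to the exponential volume growth of hyperbolic balls: ω_d ∫_0^r sinh^{d−1}(t) dt is of exact order e^{r(d−1)}. A short numerical illustration (midpoint-rule quadrature; the function name is ours):

```python
import math

def ball_volume(d, r, n=20000):
    """Volume of a hyperbolic ball: omega_d * int_0^r sinh^{d-1}(t) dt,
    where omega_d is the surface area of the Euclidean unit sphere S^{d-1}."""
    omega_d = 2 * math.pi**(d / 2) / math.gamma(d / 2)
    h = r / n
    integral = h * sum(math.sinh((i + 0.5) * h)**(d - 1) for i in range(n))
    return omega_d * integral

d = 3
for r in (6.0, 8.0, 10.0):
    # the ratio stabilizes at a positive constant (pi/2 for d = 3)
    print(r, ball_volume(d, r) / math.exp((d - 1) * r))
```

For d = 3 the exact volume is 4π(sinh(2r)/4 − r/2), so the ratio tends to π/2, in line with the two-sided bound c e^{r(d−1)} ≤ vol ≤ C e^{r(d−1)}.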

The planar case d = 2
Although we present a very detailed covariance analysis in Section 4.5, we will separately investigate the asymptotic behaviour of the variances in Lemmas 17–19. In fact, while the results of Section 4.5 are necessary for the multivariate central limit theorems, the variance analysis we carry out here is already sufficient for the univariate cases. In this and the following two sections, c_i will denote a positive constant only depending on the dimension and a counting parameter i ∈ N_0. If it additionally depends on another parameter n ∈ N_0, we indicate this by writing, for instance, c_{i,n} or c_i(n). The value of this constant may change from occasion to occasion.
Proof. For i ∈ {0, 1} and n = 1, Corollary 13 and Lemma 16 yield the required estimates. Similarly, for i = 0 and n = 2 we obtain the corresponding bound. From (16) and using that t ≥ t_0 > 0, the assertion follows for i = 0. The case i = 1 is obtained in the same way, but requires bounds for only one summand in (16).

Proof. Corollary 13 and Lemma 16 imply the upper bound
It remains to determine the asymptotic behaviour in r of the individual terms. To deduce the lower bound, it is sufficient to derive a lower bound for a single summand. This completes the proof.
For the remaining cases, we use that exp(r(d − 1)) is of lower order than exp(2r(d − 2)) for d ≥ 4. Moreover, as in the case d = 3, the remaining terms are handled in the same way.

Covariance analysis
In this section we prepare the proof of Theorem 7 by an asymptotic analysis of the covariance structure of the random vector F r,t in dimensions d = 2 and d = 3.
Next we prove the asserted rates of convergence. For (i, j) ∈ {(0, 1), (1, 0), (1, 1)}, we use (20) for the expression in (22). Finally, we treat the expression in (21). An application of the mean value theorem in the first step and of (20) in the second-to-last step yields the required estimate. Thus, a combination of (24), (25) and (26) completes the proof.
Remark 11. The relation a = 4G can be confirmed by Maple. It is not clear to us how Maple verifies this relation. Since we could not find the present integral representation of the Catalan constant in any of the lists available to us, we provide a short derivation. We first use the substitution t = exp(−arcosh(e^s)), that is, e^s = (t^{−1} + t)/2, and then expand (1 + t^2)^{−2} into a Taylor series under the integral sign. This leads to a series of integrals of the form ∫_0^1 t^{2i} (ln t)^2 dt. By the substitution t = e^y we obtain ∫_0^1 t^{2i} (ln t)^2 dt = 2/(2i + 1)^3.
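The elementary identity ∫_0^1 t^{2i} (ln t)^2 dt = 2/(2i + 1)^3 used above can be checked numerically after the substitution t = e^{−y}, which turns it into the Gamma-type integral ∫_0^∞ y^2 e^{−(2i+1)y} dy. A minimal sketch (the truncation point y = 60 and the midpoint rule are our choices):

```python
import math

def moment_integral(i, ymax=60.0, n=200000):
    """int_0^1 t^(2i) (ln t)^2 dt, rewritten via t = e^(-y) as
    int_0^inf y^2 e^(-(2i+1)y) dy, evaluated by the midpoint rule."""
    h = ymax / n
    return h * sum(((k + 0.5) * h)**2 * math.exp(-(2 * i + 1) * (k + 0.5) * h)
                   for k in range(n))

for i in range(4):
    print(i, moment_integral(i), 2 / (2 * i + 1)**3)
```

The two columns agree to high precision, confirming the coefficients 2/(2i + 1)^3 of the series in the derivation.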

The spatial case d = 3
Now we turn to the case d = 3 and again describe the rate of convergence, as r → ∞, of the suitably scaled covariances to the asymptotic covariance matrix.

Lemma 21. Let d = 3 and t ≥ t_0 > 0. There exists a positive constant c_cov(3, t_0) ∈ (0, ∞) such that the corresponding bound holds for r ≥ 1. The matrix Σ_3 has rank one and is explicitly given by (27).

Proof. For i, j ∈ {0, 1, 2}, the covariance formula for Poisson U-statistics yields the basic representation. As in the planar case d = 2 we compute the scalar products. We let L_2(s) be a 2-dimensional subspace in H^3 having distance s ≥ 0 from the origin p. For n = 1, Corollary 13 and Equation (18) yield the first term. In addition, using Lemma 10 and Lemma 16, we obtain the limit value 2π^2 c(3, 1, i, j).
Therefore we conclude that the asymptotic covariance matrix Σ_3 is given by (27). Clearly, Σ_3 has rank one. Moreover, we obtain the asserted rate, where we used that |1/2 − r^{−1} e^{−2r} (r + 2r cosh^2(r) − 3 sinh(r) cosh(r))| is bounded from above by a constant multiple of r^{−1}, as r → ∞. This completes the proof.

The case d ≥ 4
In order to describe explicitly the limit covariance matrix Σ_d for d ≥ 4 we need the following lemma.
Depending on the dimension, we will bound the speed of convergence by means of a function β = β(d, r), which is defined case by case for d ∈ {4, 5} and for d ≥ 6.
Lemma 23. Let d ≥ 4 and t ≥ t 0 > 0. There exists a positive constant c cov (d, t 0 ) ∈ (0, ∞) such that for r ≥ 1. The matrix Σ d has rank one and its entries are explicitly given by where the constants c(i, 1, d), c(j, 1, d) are introduced in Lemma 12.
Therefore we can concentrate on the summand corresponding to k = 0 in (30). In the following we will make use of the logarithmic representation arcosh(x) = log(x + √(x^2 − 1)) of the arcosh function in order to evaluate the inner integral. For r → ∞ this expression converges to a constant. To get the correct rate stated in the lemma we decompose the difference into two summands. We have already shown that the second summand satisfies the asserted upper bound. It follows from (31) that it remains to consider the first summand, which is split again into two summands in (33). Since by (32) the integral in the second summand of (33) is of the order e^{2r(d−2)}, the second summand is at most of the order β e^{−2r}.
It remains to show the decay of the first summand in (33). This is done by using the same steps as in (32) and by splitting up the limit covariance σ^{i,j}_d. Lemma 22 and basic calculus show that the asserted entries of the asymptotic covariance matrix can be written in the required form, which leads to a decomposition into two terms I_1 and I_2. It remains to provide an upper bound for I_2. For this we expand the square and use the triangle inequality to get I_2 ≤ I_3 + I_4. In order to provide an upper bound for I_4, we use the mean value theorem and the inequality 1 − √(1 − x) ≤ x, for x ∈ [0, 1]. Hence we obtain the asserted bound, where also (34) was used. This concludes the proof.
Proofs II - Mixed K-function and mixed pair-correlation function

Already at this point we see that the condition 0 < d_h(x, y) can be omitted if i ≥ 1 or j ≥ 1.
Requiring that x ∈ skel_i and y ∈ skel_j means that there exist hyperplanes of the process such that x lies in the intersection of some (d − i)-tuple and y in the intersection of some (d − j)-tuple. However, some of the hyperplanes of the first (d − i)-tuple may coincide with some of the hyperplanes of the second (d − j)-tuple. We denote by n ∈ {0, 1, . . . , d − i} the number of common hyperplanes. Then we obtain a representation with a combinatorial coefficient. Note that if n = 0 we interpret the second integral as an integral over the set G_1 ∩ . . . ∩ G_{d−j} ∩ B, and if n = d − j we understand that the integral ranges over H_1 ∩ . . . ∩ H_{d−j} ∩ B. Moreover, if i = j = 0, then the summand obtained for n = d is zero, since almost surely x, y ∈ H_1 ∩ . . . ∩ H_d and d_h(x, y) > 0 cannot be satisfied simultaneously. Hence the summation can be restricted to n ≤ m(d, i, j) in the following. An application of (9) then applies, where we have used Fubini's theorem to split the integration over three groups of hyperplanes. The first group comprises the n common hyperplanes H_1, . . . , H_n, while the second and the third group are associated with the (d−i−n)-tuple H_{n+1}, . . . , H_{d−i} and the (d−j−n)-tuple G_1, . . . , G_{d−j−n}, respectively. We now apply Lemma 10 successively to each of the three outer integrals. Together with Fubini's theorem this gives the stated formula. For the two innermost integrals we argue as follows. Since y ∈ E, the intersection E ∩ (B(y, r) \ {y}) has dimension d − n and we can apply Crofton's formula to evaluate H^{d−n}(E ∩ B(y, r)).
Here we also used that H^{d−n}(E ∩ (B(y, r) \ {y})) = H^{d−n}(E ∩ B(y, r)), since d − n ≥ 1. Moreover, since y ∈ E, the value of H^{d−n}(E ∩ B(y, r)) is independent of the choice of E and y, and is given by the H^{d−n}-measure of a (d − n)-dimensional hyperbolic ball of radius r. The two remaining integrals can be evaluated by using the Crofton formula twice. Indeed, we note that for µ_{d−n}-almost all E ∈ A_h(d, d − n) the set B ∩ E is either empty or has dimension d − n. Since H^d(B) = 1 we finally conclude the asserted expression. Simplification of the constant by means of the constants given in (2) and Lemma 10 completes the proof for the mixed K-function K_ij. The formula for the mixed pair-correlation function follows by differentiation. This completes the proof of Theorem 3. □

Proofs III - Univariate limit theorems

The case of growing intensity: Proof of Theorem 4

The central limit theorem is in this case a direct consequence of the central limit theorem for general Poisson U-statistics stated as Corollary 4.3 in [59] (see also [12]). □

The case of growing windows: Proof of Theorem 5
Our strategy in the proof of Theorem 5 (a) and (b) can be summarized as follows. The normal approximation bound (13) for general U-statistics of Poisson processes is given by a sum involving terms of the type M_{u,v}, which are defined in (11) and (12) and which in turn are given as sums of integrals over partitions σ ∈ Π^con_{≥2}(u, u, v, v). In applying these normal approximation bounds to the Euclidean counterparts of the functionals F^(i)_{r,t}, it was possible to extract a common scaling factor from each of the integrals in M_{u,v} and to treat the number of terms, that is, the number of elements of Π^con_{≥2}(u, u, v, v), as a constant, see [31,51]. In the hyperbolic set-up this is no longer possible and each integral in the definition of M_{u,v} needs a separate treatment. In fact, it will turn out that these integrals exhibit different asymptotic behaviours as functions of r, as r → ∞. For the analysis, we have to determine explicitly the partitions in Π^con_{≥2}(u, u, v, v) and for each such partition we have to provide a bound for the resulting integral. Since µ = tµ_{d−1}, we can bound the dependence with respect to the intensity t ≥ 1 by the factor t^{4(d−i)−2(u+v)+|σ|} for each σ ∈ Π^con_{≥2}(u, u, v, v). To show that a central limit theorem fails in higher space dimensions d ≥ 4 is the most technical part in the proof of Theorem 5. We do this by showing that the fourth cumulant of the centred and normalized functional F^(i)_{r,t} does not converge to zero. However, this approach can only work if we can ensure that the sequence of random variables is uniformly integrable. We will prove that this is indeed the case by showing that their fifth moments are uniformly bounded. This requires a very careful analysis of the combinatorial formula (10) for the centred moments of U-statistics of Poisson processes. The representation as a U-statistic will be as in Section 4.1. In the following computations, c will be a positive constant only depending on the dimension and whose value may change from occasion to occasion.

The planar case d = 2: Proof of Theorem 5 (a)
As indicated above, we will use the bound (13) in combination with (11) and (12). We distinguish the cases i = 0 and i = 1. In the following, we can assume that r, t ≥ 1.
and, finally, for σ_11 we obtain the corresponding bound. We thus conclude that M_{3,3}(f^(0)) ≤ c t^6 (6e^{2r} + 3re^{2r} + 2e^{4r}) ≤ c t^6 e^{4r}. To show that F^(i)_{r,t} does not converge in distribution to a Gaussian random variable, as r → ∞, we will argue that the fourth cumulant does not converge to zero, which is the value of the fourth cumulant of a standard Gaussian random variable. We start with the following crucial, but rather technical result, which is based on the formula (10) for the centred moments of a Poisson U-statistic.

Proof. We start by explaining our method by considering the case i = d − 1. In this situation we can use the variance bound from Lemma 19, which is available since t ≥ t_0 and r ≥ 1. For the centred fifth moment, (10) applies. The set Π**_{≥2}(1, 1, 1, 1, 1) consists only of two types of sub-partitions of {1, 2, 3, 4, 5}, which are actually partitions, see Figure 9. The first type only consists of one partition, namely the trivial partition, containing just the single block {1, 2, 3, 4, 5}. The second type contains the C(5, 2) = 10 partitions having precisely two blocks, one of size 2 and the other of size 3. Since the integrals corresponding to these partitions all yield the same contribution, we can restrict our computations to {{1, 2, 3}, {4, 5}}, for example. This proves the claim for i = d − 1.
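The enumeration of Π**_{≥2}(1, 1, 1, 1, 1) can be verified by brute force: for rows of length one, conditions (ii) and (iii) force a genuine partition of {1, . . . , 5}, and condition (i) then leaves exactly the trivial partition plus the C(5, 2) = 10 partitions with block sizes {3, 2}. A small enumeration script (helper names are ours):

```python
from itertools import combinations

def set_partitions(elements):
    """Generate all partitions of a list of distinct elements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    # the block containing `first` is determined by a subset of the rest
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            block = [first, *others]
            remaining = [e for e in rest if e not in others]
            for p in set_partitions(remaining):
                yield [block] + p

# keep only partitions all of whose blocks have at least two elements
parts = [p for p in set_partitions([1, 2, 3, 4, 5])
         if all(len(b) >= 2 for b in p)]
by_blocks = {}
for p in parts:
    by_blocks.setdefault(len(p), []).append(p)
print(len(parts), {k: len(v) for k, v in sorted(by_blocks.items())})
```

The script finds 11 partitions in total: one with a single block and ten with two blocks, exactly as used in the proof.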

By Lemma 16 we have
Now we fix i ∈ {0, 1, . . . , d − 2} arbitrarily and assume that d ≥ 7. Furthermore, we fix an arbitrary sub-partition σ ∈ Π**_{≥2}(d − i, . . . , d − i) with five rows. We denote by m(σ) ∈ {2, 3, 4, 5} the size of the maximal block of σ and represent σ as a diagram. The elements of this diagram are labelled a_{p,q}. Here, p ∈ {1, . . . , 5} represents the row number and q ∈ {1, . . . , d − i} stands for the column number. Without loss of generality we can and will assume that the maximal block of σ sits in the left upper corner of the diagram of σ, that is, the maximal block is of the form {a_{1,1}, . . . , a_{m(σ),1}}. To each row p ∈ {1, . . . , 5} we associate two numbers b(p) and c(p) in the following way. By b(p) we denote the number of elements of row p (in positions (p, q) with q ≥ 2) which are contained in a block of σ that has at least one element in a row below p, and we let c(p) be the number of elements (with the same restrictions as above) in row p not contained in any block of σ that has at least one element in a row below p, see Figure 9 for an example. Note that b(5) = 0, c(5) = d − i if m(σ) < 5, and c(p) = d − i − b(p) − 1 if p ∈ {1, . . . , m(σ)}. Our task is to show that the integral (in symbolic notation) I := ∫ · · · ∫ (f^(i))^{⊗5}_σ, whose first factor reads f^(i)(H_1, G_1, . . . , G_{b(1)}, K_1, . . . , K_{c(1)}), is bounded by a constant multiple of e^{5(d−2)r}, which is the order of (Var(F^(i)_{r,t}))^{5/2}. We first integrate with respect to the hyperplanes K_1, . . . , K_{c(1)}, which do not appear in any of the arguments of the other four functions f^(i)(. . .). By Crofton's formula this gives c H^{d−1−b(1)}(B_r ∩ H_1 ∩ G_1 ∩ . . . ∩ G_{b(1)}). Now we replace H_1 ∩ G_1 ∩ . . . ∩ G_{b(1)} by a (d−1−b(1))-dimensional subspace L_{d−1−b(1)}(s_1) having distance s_1 = d_h(H_1, p) from p. Then G_1, . . . , G_{b(1)} are active integration variables for rows below the first row. Repeating the same argument for p = 2, . . . , m(σ), we arrive at an expression (again in symbolic notation) in which f^(i)(. . .) appears 5 − m(σ) times.
From now on we distinguish the following two cases: (a) there is no block that contains precisely two elements from the rows below m(σ), (b) there exists a block that contains precisely two elements from the rows below m(σ).
We start by treating case (a). If m(σ) = 2, then all blocks of σ have two elements. In particular, no element of a row p ≥ 3 can be in a (2-element) block with another element in a row below. Hence the integrations over these rows decouple, and only one integral remains in I. To proceed, we define for p ∈ {1, . . . , m(σ)} an auxiliary function with exponent E. If E < 0, the integral in (50) is bounded by a constant times r^{|Z_1|}, in view of (48) and (49). In order to bound I from above by a constant times e^{5(d−2)r}, we use the decomposition e^{5(d−2)r} = e^{(5−m(σ))(d−1)r} e^{m(σ)(d−2)r} e^{−(5−m(σ))r}.
Next, suppose that E = 0. Then the integral in (50) is bounded by a polynomial in r of degree at most |Z_1| + 1, and another comparison of the exponents in (51) and (52) implies that in this case we need to prove the inequality (53). Using the assumption that E = 0, we see that the inequality in (53) is equivalent to (d − 1)(m(σ) − 1) > 5, which is always satisfied for d ≥ 7, since m(σ) ≥ 2 implies (d − 1)(m(σ) − 1) ≥ d − 1 ≥ 6 > 5.
Finally, we suppose that E > 0, in which case a comparison of the exponents in (51) and (52) shows that we have to verify a corresponding inequality. In addition, we have the following lower bound for M_{1,1}(f^(i)): ∫ H^{d−1}(H_1 ∩ B_r)^4 µ_{d−1}(dH_1) ≥ c g(d − 1, 4, d, r) ≥ c e^{4r(d−2)}, since 4(d − 2) − (d − 1) > 0, which follows from our assumption that d ≥ 4, and since i ≤ d − 1 and t ≥ 1. In combination with Lemma 19 we thus obtain a contradiction to (56). Consequently, the family of random variables F^(i)_{r,t}, r ≥ 1, is uniformly integrable, and a standard argument shows that there exists a subsequence F^(i)_{r_k,t} such that F^(i)_{r_k,t} converges in distribution and in L^4 to some limiting random variable X, say. This implies that EX = 0, EX^2 = 1 and EX^m < ∞ for m ∈ {3, 4}. In particular, this rules out for X the classical α-stable distributions for any 0 < α < 2 and, since we have shown that cum_4(X) > 0, also a Gaussian distribution. We leave the determination of the distribution of the limiting random variable X as a challenging open problem for future research.

The case of simultaneous growth of intensity and window: Proof of Theorem 6
According to Lemma 25 we have, for any fixed t ≥ 1, the corresponding convergence for F^(i)_{r,t}. Next, we recall the definition of the integrals M_{u,v}(h), u, v ∈ {1, . . . , m}, from (11) that are associated with a general Poisson U-statistic of order m ∈ N with kernel function h. In order to emphasize the role of the measure these integrals are taken with, we will write M_{u,v}(h ; µ) in what follows. By definition of the integrated kernels in (12) we obtain the corresponding scaling in t, for any t ≥ 1 and any fixed r ≥ 1. In fact, f^(i)_u and f^(i)_v contribute twice the factor t^{d−i−u} and twice the factor t^{d−i−v} by (12), respectively, and the integral in (11) leads to an additional factor t^{|σ|}. By the choice u = v = 1 we maximize the resulting exponent and see that their product is bounded by t^{4(d−i−1)+1}. Indeed, if u = v = 1 we necessarily have that |σ| = 1, since σ has to be connected. On the other hand, if u + v ≥ 3 then |σ| ≤ u + v, and hence the resulting exponent is at most 4(d − i) − (u + v) ≤ 4(d − i) − 3 as well.