1 Introduction

This paper is primarily concerned with the following random coverage problem. Given a specified compact region A in a d-dimensional Euclidean space, what is the probability that A is fully covered by a union of Euclidean balls of radius r centred on n points placed independently uniformly at random in A, in the large-n limit with \(r =r(n)\) becoming small in an appropriate manner?

This is a very natural type of question with a long history; see for example [3, 7, 11,12,13, 16, 20]. Potential applications include wireless communications [3, 18], ballistics [13], genomics [2], statistics [8], immunology [20], stochastic optimization [23], and topological data analysis [4, 9].

In an alternative version of this question, one considers a smaller compact region \(B \subset A^o\) (\(A^o\) denotes the interior of A), and asks whether B (rather than all of A) is covered. This version is simpler because boundary effects are avoided. This alternative version of our question was answered independently in the 1980s by Hall [12] and Janson [16]. Another way to avoid boundary effects would be to consider coverage of a smooth manifold such as a sphere (as in [20]), and this was also addressed in [16].

However, the original question does not appear to have been addressed systematically until now (at least, not when \(d>1\)). Janson [16, p. 108] makes some remarks about the case where \(A= [0,1]^d\) and one uses balls of the \(\ell _\infty \) norm, but does not consider more general classes of A or Euclidean balls.

This question seems well worth addressing. In many of the application areas it is natural to consider only the influence of the random points placed within the region A, rather than also that of hypothetical points placed outside A; this is the case, for example, in the problem of statistical set estimation which we shall discuss below.

We shall express our results in terms of the coverage threshold \(R_n\), which we define to be the smallest radius of balls, centred on a set \(\mathscr {X}_n\) of n independent uniform random points in A, required to cover A. Note that \(R_n\) is equal to the Hausdorff distance between the sets \(\mathscr {X}_n\) and A. More generally, for \(k \in \mathbb {N}\) the k-coverage threshold \(R_{n,k}\) is the smallest radius required to cover A k times. These thresholds are random variables, because the locations of the centres are random. We investigate their probabilistic behaviour as n becomes large.
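To make the definition concrete, \(R_{n,k}\) can be approximated numerically by maximising the distance to the k-th nearest sample point over a fine grid of A. The following is a minimal sketch for \(A=[0,1]^2\) (the grid resolution, sample size and random seed are arbitrary choices, and the grid maximum slightly underestimates the true supremum):

```python
import math
import random

def coverage_threshold(points, k=1, grid=40):
    """Grid approximation to R_{n,k} for A = [0,1]^2: the largest, over grid
    locations x in A, of the distance from x to its k-th nearest sample point.
    With k = 1 this approximates the Hausdorff distance from the sample to A."""
    worst = 0.0
    for i in range(grid + 1):
        for j in range(grid + 1):
            x, y = i / grid, j / grid
            dists = sorted(math.hypot(px - x, py - y) for px, py in points)
            worst = max(worst, dists[k - 1])
    return worst

random.seed(1)
sample = [(random.random(), random.random()) for _ in range(200)]
r1 = coverage_threshold(sample, k=1)
r2 = coverage_threshold(sample, k=2)
print(r1, r2)  # covering twice needs a radius at least as large, so r1 <= r2
```

The grid maximum converges to \(R_{n,k}\) as the grid is refined, since the k-th nearest-point distance is a Lipschitz function of the location x.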

We shall determine the limiting behaviour of \(\mathbb {P}[R_{n,k} \le r_n]\) for any fixed k and any sequence of numbers \((r_n)\), for the case where A is smoothly bounded (for general \(d \ge 2\)) or where A is a polytope (for \(d=2 \) or \(d= 3\)). We also obtain similar results for a high-intensity Poisson sample in A, which may be more relevant in some applications, as argued in [12].

We also derive strong laws of large numbers showing that \(nR_{n,k}^d/\log n\) converges almost surely to a finite positive limit, and establishing the value of the limit. These strong laws carry over to more general cases where k may vary with n, and the distribution of points may be non-uniform. We give results of this type for A smoothly bounded, or for A a convex polytope.

We emphasise that in all of these results, the limiting behaviour depends on the geometry of \(\partial A\), the topological boundary of A. For example, we shall show that when \(d=3\) and the points are uniformly distributed over a polyhedron, the limiting behaviour of \(R_n\) is determined by the angle of the sharpest edge if this angle is less than \(\pi /2\). If this angle exceeds \(\pi /2\) then the location in A furthest from the sample \(\mathscr {X}_n\) is asymptotically uniformly distributed over \(\partial A\), but if this angle is less than \(\pi /2\) the location in A furthest from \(\mathscr {X}_n\) is asymptotically uniformly distributed over the union of those edges which are sharpest, i.e. those edges which achieve the minimum subtended angle.

We restrict attention here to coverage by Euclidean balls of equal radius. The work of [12, 16] allowed for generalizations such as other shapes or variable radii, in their versions of our problem. We do not attempt to address these generalizations here; in principle it may be possible, but it would add considerably to the complication of the formulation of results.

One application lies in statistical set estimation. One may wish to estimate the set A from the sample \(\mathscr {X}_n\). One possible estimator in the literature is the union of balls of radius \(r_n\) centred on the points of \(\mathscr {X}_n\), for some suitable sequence \((r_n)_{n \ge 1}\) decreasing to zero. In particular, when estimating the perimeter of A one may well wish to take \(r_n\) large enough so that these balls fully cover A, that is, \(r_n \ge R_n\). For further discussion see Cuevas and Rodríguez-Casal [8].

We briefly discuss some related concepts. One of these is the maximal spacing of the sample \(\mathscr {X}_n\). This is defined to be the volume of the largest Euclidean ball that can be fitted into the set \(A {\setminus } \mathscr {X}_n\) (the reason for the terminology becomes apparent from considering the case with \(d=1\)). More generally, the maximal k-spacing of the sample is defined to be the volume of the largest Euclidean ball that can be fitted inside the set A while containing fewer than k points of \(\mathscr {X}_n\).
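The case \(d=1\) behind the terminology is easy to compute explicitly: for A an interval, a ball is a sub-interval, so the maximal spacing is just the largest gap left by the sample. A small illustrative sketch (the particular sample is arbitrary):

```python
import random

def maximal_spacing_1d(points, a=0.0, b=1.0):
    """Length (= 1-dimensional volume) of the largest interval inside [a, b]
    containing no sample point: the largest gap between consecutive sorted
    points, where the two end gaps count in full because a ball may touch
    the boundary of A while staying inside it."""
    xs = sorted(points)
    gaps = [xs[0] - a] + [y - x for x, y in zip(xs, xs[1:])] + [b - xs[-1]]
    return max(gaps)

random.seed(7)
pts = [random.random() for _ in range(10)]
print(maximal_spacing_1d(pts))  # the largest of the 11 spacings
```

By the pigeonhole principle, with 10 points the 11 gaps sum to 1, so the maximal spacing is at least 1/11.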

The maximal spacing also has a long history; see for example [1, 10, 14, 16]. As described in [1], there are many statistical applications. Essentially, it differs from the coverage threshold \(R_{n}\) only because of boundary effects (we shall elaborate on this in Sect. 2), but these effects are often important in determining the asymptotic behaviour of the threshold.

Another interpretation of the coverage threshold is via Voronoi cells. Calka and Chenavier [6] have considered, among other things, extremes of circumscribed radii of a Poisson–Voronoi tessellation on all of \(\mathbb {R}^d\) (the circumscribed radius of a cell is the radius of the smallest ball centred on the nucleus that contains the cell). To get a finite maximum they consider the maximum restricted to those cells having non-empty intersection with some bounded window \(A \subset \mathbb {R}^d\). This construction avoids dealing with delicate boundary effects, and the limit distribution, for large intensity, is determined in [6] using results from [16].

It seems at least as natural to consider Voronoi cells with respect to the Poisson sample restricted to A. A little thought (similar to arguments given in [6]) shows that the largest circumradius of the Voronoi cells inside A (i.e., the intersections with A of the Voronoi cells), with respect to the sample \(\mathscr {X}_n\), is equal to \(R_n\), and likewise for a Poisson sample in A; thus, our results add to those given in [6].

A somewhat related topic is the convex hull of the random sample \(\mathscr {X}_n\). For \(d=2\) with A convex, the limiting behaviour of the Hausdorff distance from this convex hull to A is obtained in [5]. The limiting behaviour of the Hausdorff distance from \(\mathscr {X}_n\) itself to A (which is our \(R_n\)) is not the same as for the convex hull.

2 Definitions and notation

Throughout this paper, we work within the following mathematical framework. Let \(d \in \mathbb {N}\). Suppose we have the following ingredients:

  • A compact, Riemann measurable set \(A \subset \mathbb {R}^d\) (Riemann measurability of a bounded set in \(\mathbb {R}^d\) amounts to its boundary having zero Lebesgue measure).

  • A Borel probability measure \(\mu \) on A with probability density function f.

  • A specified set \(B \subset A\) (possibly \(B=A\)).

  • On a common probability space \((\mathbb {S},{{\mathscr {F}}},\mathbb {P})\), a sequence \(X_1,X_2,\ldots \) of independent identically distributed random d-vectors with common probability distribution \(\mu \), and also a unit rate Poisson counting process \((Z_t,t\ge 0)\), independent of \((X_1,X_2,\ldots )\) (so \(Z_t\) is Poisson distributed with mean t for each \(t >0\)).

For \(n \in \mathbb {N}\), \(t >0\), let \(\mathscr {X}_n:= \{X_1,\ldots ,X_n\}\), and let \({{\mathscr {P}}}_t:= \{X_1,\ldots ,X_{Z_t}\}\). These are the point processes that concern us here. Observe that \({{\mathscr {P}}}_t\) is a Poisson point process in \(\mathbb {R}^d \) with intensity measure \(t \mu \) (see e.g. [19]).

For \(x \in \mathbb {R}^d\) and \(r>0\) set \(B(x,r):= \{y \in \mathbb {R}^d:\Vert y-x\Vert \le r\}\) where \(\Vert \cdot \Vert \) denotes the Euclidean norm. (We write \(B_{(d)} (x,r)\) for this if we wish to emphasise the dimension.) For \(r>0\), let \(A^{(r)}:=\{ x \in A: B(x,r) \subset A^o\}\), the ‘r-interior’ of A.

Also, define the set \(A^{[r]}\) to be the interior of the union of all hypercubes of the form \(\prod _{i=1}^d[n_ir,(n_i+1)r] \), with \(n_1,\ldots ,n_d \in \mathbb {Z}\), that are contained in \(A^o\) (the set \(A^{[r]}\) resembles \(A^{(r)}\) but is guaranteed to be Riemann measurable).
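For convex A, a closed cube lies in \(A^o\) precisely when all of its vertices do, which makes \(A^{[r]}\) straightforward to compute. A sketch for A the closed unit disk (convexity is used only to justify the four-corner test; the grid scale is arbitrary):

```python
import math

def cells_in_disk(r):
    """Indices (n1, n2) of squares [n1*r, (n1+1)*r] x [n2*r, (n2+1)*r] that are
    contained in the open unit disk; for convex A it suffices to check that
    all four corners lie in the interior."""
    cells = []
    m = int(1 / r) + 1
    for n1 in range(-m, m):
        for n2 in range(-m, m):
            corners = [((n1 + i) * r, (n2 + j) * r) for i in (0, 1) for j in (0, 1)]
            if all(x * x + y * y < 1 for x, y in corners):
                cells.append((n1, n2))
    return cells

# |A^{[r]}| approaches |A| = pi as r decreases to 0
area = len(cells_in_disk(0.02)) * 0.02 ** 2
print(area)
```

The deficit \(\pi - |A^{[r]}|\) is of order \(|\partial A| \, r\), coming from the band of cells crossed by the boundary circle.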

For any point set \(\mathscr {X}\subset \mathbb {R}^d\) and any \(D \subset \mathbb {R}^d\) we write \(\mathscr {X}(D)\) for the number of points of \(\mathscr {X}\) in D, and we use below the convention \(\inf \{\} := +\infty \).

Given \(n, k \in \mathbb {N}\), and \(t \in (0,\infty )\), define the k-coverage thresholds \(R_{n,k}\) and \(R'_{t,k}\) by

$$\begin{aligned} R_{n,k} := \inf \left\{ r >0: \mathscr {X}_n (B(x,r)) \ge k ~~~~ \forall x \in B \right\} ; \end{aligned}$$
(2.1)
$$\begin{aligned} R'_{t,k} := R_{Z_t,k} := \inf \left\{ r >0: {{\mathscr {P}}}_t (B(x,r)) \ge k ~~~~ \forall x \in B \right\} , \end{aligned}$$
(2.2)

and define also the interior k-coverage thresholds

$$\begin{aligned} \tilde{R}_{n,k} := \inf \left\{ r >0: \mathscr {X}_n (B(x,r)) \ge k ~~~ \forall x \in B \cap A^{(r)} \right\} ; \end{aligned}$$
(2.3)
$$\begin{aligned} \tilde{R}_{Z_t,k} := \inf \left\{ r >0: {{\mathscr {P}}}_t (B(x,r)) \ge k ~~~ \forall x \in B \cap A^{(r)} \right\} . \end{aligned}$$
(2.4)

Set \(R_n : = R_{n,1}\), and \(R'_t:= R'_{t,1}\), and \({\tilde{R}}_n: = {\tilde{R}}_{n,1}\). Then \(R_n\) is the coverage threshold. Observe that \(R_n = \inf \{ r >0: B \subset \cup _{i=1}^n B(X_i,r) \}\). In the case \(B=A\), this agrees with our earlier definition of \(R_n\).

We are chiefly interested in the asymptotic behaviour of \(R_n\) for large n. More generally, we consider \(R_{n,k}\) where k may vary with n. We are especially interested in the case with \(B=A\).

Observe that \(\tilde{R}_{n,k} \) is the smallest r such that \( B \cap A^{(r)} \) is covered k times by the balls of radius r centred on the points of \(\mathscr {X}_n\). It can be seen that when \(B=A\), the maximal k-spacing of the sample \(\mathscr {X}_n\) (defined earlier) is equal to \(\theta _d {\tilde{R}}_n^d\), where \(\theta _d := \pi ^{d/2}/\varGamma (1 + d/2)\), the volume of the unit ball in \(\mathbb {R}^d\).
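The constant \(\theta _d\) is elementary to evaluate; a quick numerical check of the first few values (nothing here goes beyond the stated formula):

```python
import math

def theta(d):
    """theta_d = pi^(d/2) / Gamma(1 + d/2), the volume of the unit ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(1 + d / 2)

print(theta(1), theta(2), theta(3))  # 2.0, pi, 4*pi/3
```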

We use the Poissonized k-coverage threshold \(R'_{t,k}\), and the interior k-coverage thresholds \({\tilde{R}}_{n,k}\) and \({\tilde{R}}_{Z_t,k}\), mainly as stepping stones towards deriving results for \(R_{n,k}\) and \({\tilde{R}}_{n,k}\) respectively, but they are also of interest in their own right. Indeed, some of the literature [3, 12, 13, 18] is concerned more with \(R'_t\) than with \(R_n\), and we have already mentioned the literature on the maximal spacing.

We now give some further notation used throughout. For \(D \subset \mathbb {R}^d\), let \({\overline{D}}\) and \(D^o\) denote the closure of D and interior of D, respectively. Let |D| denote the Lebesgue measure (volume) of D, and \(|\partial D|\) the perimeter of D, i.e. the \((d-1)\)-dimensional Hausdorff measure of \(\partial D\), when these are defined. Write \(\log \log t\) for \(\log (\log t)\), \(t >1\). Let o denote the origin in \(\mathbb {R}^d\). Set \(\mathbb {H}:= \mathbb {R}^{d-1}\times [0,\infty )\) and \(\partial \mathbb {H}:= \mathbb {R}^{d-1}\times \{0\}\).

Given two sets \(\mathscr {X},{{\mathscr {Y}}}\subset \mathbb {R}^d\), we set \( \mathscr {X}\triangle {{\mathscr {Y}}}:= (\mathscr {X}{\setminus } {{\mathscr {Y}}}) \cup ({{\mathscr {Y}}}{\setminus } \mathscr {X})\), the symmetric difference between \(\mathscr {X}\) and \({{\mathscr {Y}}}\). Also, we write \(\mathscr {X}\oplus {{\mathscr {Y}}}\) for the set \(\{x+y: x \in \mathscr {X}, y \in {{\mathscr {Y}}}\}\). Given also \(x \in \mathbb {R}^d\) we write \(x+{{\mathscr {Y}}}\) for \(\{x\} \oplus {{\mathscr {Y}}}\).

Given \(x,y \in \mathbb {R}^d\), we denote by \([x,y]\) the line segment from x to y, that is, the convex hull of the set \(\{x,y\}\). We write \(a \wedge b\) (respectively \(a \vee b\)) for the minimum (resp. maximum) of any two numbers \(a,b \in \mathbb {R}\).

Given \(m \in \mathbb {N}\) and functions \(f: \mathbb {N}\cap [m,\infty ) \rightarrow \mathbb {R}\) and \(g: \mathbb {N}\cap [m,\infty ) \rightarrow (0,\infty )\), we write \(f(n) = O(g(n))\) as \(n \rightarrow \infty \) if \(\limsup _{n \rightarrow \infty }|f(n)|/g(n) < \infty \). We write \(f(n)= o(g(n))\) as \(n \rightarrow \infty \) if \(\lim _{n \rightarrow \infty } f(n)/g(n) =0\). We write \(f(n)= \varTheta (g(n))\) as \(n \rightarrow \infty \) if both \(f(n)= O(g(n))\) and \(g(n)= O(f(n))\) (and \(f > 0\)). Given \(s >0\) and functions \(f: (0,s) \rightarrow \mathbb {R}\) and \(g:(0,s) \rightarrow (0,\infty )\), we write \(f(r) = O(g(r))\) as \(r \downarrow 0\), or \(g(r) = \varOmega (f(r))\) as \( r \downarrow 0\), if \(\limsup _{r \downarrow 0} |f(r) |/g(r) < \infty \). We write \(f(r)= o(g(r))\) as \(r \downarrow 0\) if \(\lim _{r \downarrow 0 } f(r)/g(r) =0\), and \(f(r) \sim g(r)\) as \(r \downarrow 0\) if this limit is 1.

3 Convergence in distribution

The main results of this section are concerned with weak convergence for \(R_{n,k}\) (defined at (2.1)) as \(n \rightarrow \infty \) with k fixed, in cases where f is uniform on A and \(B=A\).

Our first result concerns the case where A has a smooth boundary in the following sense. We say that A has \(C^2\) boundary if for each \(x \in \partial A\) there exist a neighbourhood U of x and a real-valued function g, defined on an open set in \(\mathbb {R}^{d-1}\) and twice continuously differentiable, such that \(\partial A \cap U\), after a rotation, is the graph of g.

Given \(d \in \mathbb {N}\), define the constant

$$\begin{aligned} c_d := \frac{1}{d!} \left( \frac{\sqrt{\pi } \; \varGamma ((d/2)+1) }{ \varGamma ((d+1)/2) } \right) ^{d-1}. \end{aligned}$$
(3.1)

Note that \(c_1=c_2=1\), and \(c_3 =3 \pi ^2/32\). Moreover, using Stirling’s formula one can show that \(c_d^{1/d} \sim e (\pi /(2d))^{1/2}\) as \(d \rightarrow \infty \). Given also \(k \in \mathbb {N}\), for \(d \ge 2\) set

$$\begin{aligned} c_{d,k} := \left( \frac{c_{d-1}}{(k-1)!} \right) \theta _d^{2-d-1/d} \theta _{d-1}^{2d-3} \theta _{d-2}^{1-d} (1- 1/d)^{d+k-3 + 1/d} 2^{-1+1/d}. \end{aligned}$$
(3.2)

By some tedious algebra, one can simplify this to

$$\begin{aligned} c_{d,1} = ((d-1)!)^{-1}2^{1-d} \pi ^{(d/2)-1} (d-1)^{d+(1/d)-2}\varGamma \left( \frac{d+1}{2} \right) ^{1-d} \varGamma \left( \frac{d}{2} \right) ^{d+ 1/d -1}, \end{aligned}$$

with \(c_{d,k} = c_{d,1}(1-1/d)^{k-1}/(k-1)!\) for \(k >1\). Note that \(c_{2,k} = 2^{1-k} \pi ^{-1/2}/(k-1)! \) and \(c_{3,k}= 2^{k-5} 3^{1-k} \pi ^{5/3}/(k-1)!\), and \(c_{d,1}^{1/d} \sim e/(2d)^{1/2}\) as \(d \rightarrow \infty \).
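These closed forms can be cross-checked numerically against the defining formulas (3.1) and (3.2); the following sketch is merely a consistency check, not part of the development:

```python
import math

def theta(d):
    # Volume of the unit ball in R^d.
    return math.pi ** (d / 2) / math.gamma(1 + d / 2)

def c(d):
    # The constant c_d of (3.1).
    return (math.sqrt(math.pi) * math.gamma(d / 2 + 1)
            / math.gamma((d + 1) / 2)) ** (d - 1) / math.factorial(d)

def c_dk(d, k):
    # The constant c_{d,k} of (3.2), for d >= 2.
    return (c(d - 1) / math.factorial(k - 1)
            * theta(d) ** (2 - d - 1 / d)
            * theta(d - 1) ** (2 * d - 3)
            * theta(d - 2) ** (1 - d)
            * (1 - 1 / d) ** (d + k - 3 + 1 / d)
            * 2 ** (-1 + 1 / d))

# special values of c_d quoted above
assert math.isclose(c(1), 1) and math.isclose(c(2), 1)
assert math.isclose(c(3), 3 * math.pi ** 2 / 32)

for k in range(1, 5):
    # quoted closed forms for d = 2 and d = 3
    assert math.isclose(c_dk(2, k), 2 ** (1 - k) * math.pi ** -0.5 / math.factorial(k - 1))
    assert math.isclose(c_dk(3, k), 2 ** (k - 5) * 3 ** (1 - k) * math.pi ** (5 / 3) / math.factorial(k - 1))
    # relation c_{d,k} = c_{d,1} (1 - 1/d)^{k-1} / (k-1)!
    for d in range(2, 8):
        assert math.isclose(c_dk(d, k), c_dk(d, 1) * (1 - 1 / d) ** (k - 1) / math.factorial(k - 1))
print("all constants consistent")
```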

In all limiting statements in the sequel, n takes values in \(\mathbb {N}\) while t takes values in \((0,\infty )\).

Theorem 3.1

Suppose that \(d\ge 2\) and \(f=f_0\textbf{1}_A\), where \(A \subset \mathbb {R}^d\) is compact, and has \(C^2\) boundary, and \(f_0 := |A|^{-1}\). Assume \(B=A\). Let \(k \in \mathbb {N}\), \(\zeta \in \mathbb {R}\).

Then

$$\begin{aligned} & \lim _{n \rightarrow \infty } \mathbb {P}\left[ \left( \frac{n \theta _d f_0 R_{n,k}^d}{2} \right) - \left( \frac{d-1}{d}\right) \log (n f_0) - \left( d+k-3+1/d \right) \log \log n \le \zeta \right] \\ & \quad = \lim _{t \rightarrow \infty } \mathbb {P}\left[ \left( \frac{t \theta _d f_0 (R'_{t,k})^d}{2} \right) - \left( \frac{d-1}{d}\right) \log (t f_0) - \left( d+k-3+1/d \right) \log \log t \le \zeta \right] \\ & \quad = {\left\{ \begin{array}{ll} \exp \left( - |A| e^{-2 \zeta } - c_{2,1} | \partial A| e^{- \zeta } \right) & {\text { if }} ~ d=2, k =1 \\ \exp \left( - c_{d,k} | \partial A| e^{- \zeta } \right) & {\text { otherwise.}} \end{array}\right. } \end{aligned}$$
(3.3)

When \(d=2, k=1 \) the exponent in (3.3) has two terms. This is because the location in A furthest from the sample \(\mathscr {X}_n\) might lie either in the interior of A, or on the boundary.

When \(d \ge 3\) or \(k \ge 2\), the exponent of (3.3) has only one term, because the location in A with furthest k-th nearest point of \(\mathscr {X}_n\) is located, with probability tending to 1, on the boundary of A, and likewise for \({{\mathscr {P}}}_t\). Increasing either d or k makes it more likely that this location lies on the boundary, and the exceptional nature of the limit in the case \((d,k)=(2,1)\) reflects this.

We also consider the case where A is a polytope, but only for \(d=2\) or \(d=3\). All polytopes in this paper are assumed to be bounded, connected, and finite (i.e. to have finitely many faces). We do not require the polytope to be convex here.

Theorem 3.2

Suppose \(d=2\), \(B=A\), and \(f = f_0 \textbf{1}_A\), where \(f_0 : = |A|^{-1}\). Assume that A is compact and polygonal.

Let \(|\partial A|\) denote the length of the boundary of A. Let \(k \in \mathbb {N}\), \(\zeta \in \mathbb {R}\). Then

$$\begin{aligned} & \lim _{n \rightarrow \infty } \mathbb {P}[ n (\pi /2) f_0 R_{n,k}^2 -(1/2) \log (n f_0) - (k-1/2) \log \log n \le \zeta ] \\ & \quad = \lim _{t \rightarrow \infty } \mathbb {P}[ t (\pi /2) f_0 (R'_{t,k})^2 - (1/2) \log (t f_0) - (k-1/2) \log \log t \le \zeta ] \\ & \quad = {\left\{ \begin{array}{ll} \exp ( - |A| e^{-2 \zeta } - |\partial A| \pi ^{-1/2} e^{-\zeta } ) & {\text { if }} k=1, \\ \exp ( - c_{2,k} |\partial A| e^{-\zeta } ) & {\text { if }} k\ge 2. \end{array}\right. } \end{aligned}$$
(3.4)

One might seek to extend Theorem 3.2 to a more general class of sets A including both polygons (covered by Theorem 3.2) and sets with \(C^2\) boundary (covered by Theorem 3.1). One could take sets A having piecewise \(C^2\) boundary, with the extra condition that the corners of A are not too pointy, in the sense that for each corner q, there exists a triangle with vertex at q that is contained in A. We would expect that it is possible to extend the result to this more general class.

When \(d=3\) and A is polyhedral, there are several cases to consider, depending on the value of the angle \(\alpha _1\) subtended by the ‘sharpest edge’ of \(\partial A\). The angle subtended by an edge e is defined as follows. Denote the two faces meeting at e by \(F_1,F_2\). Let p be a point in the interior of the edge, and for \(i=1,2\) let \(\ell _i\) be a line segment starting from p that lies within \(F_i\) and is perpendicular to the edge e. Let \(\theta \) denote the angle between the line segments \(\ell _1\) and \(\ell _2\) (so \(0< \theta < \pi \)). The angle subtended by the edge e is \(\theta \) if there is a neighbourhood U of p such that \(A \cap U\) is convex, and is \(2 \pi - \theta \) if there is no such neighbourhood of p.

If \(\alpha _1 < \pi /2\) then the location in A furthest from the sample \(\mathscr {X}_n\) is likely to be on a 1-dimensional edge of \(\partial A\), while if \(\alpha _1 > \pi /2\) the furthest location from the sample is likely to be on a 2-dimensional face of \(\partial A\), in the limit \(n \rightarrow \infty \). If \(\alpha _1 = \pi /2\) (for example, for a cube), both of these possibilities have non-vanishing probability in the limit.

Since there are several cases to consider, to make the statement of the result more compact we put it in terms of \(\mathbb {P}[R_{n,k} \le r_n]\) for a sequence of constants \((r_n)\).

Theorem 3.3

Suppose \(d=3\), A is polyhedral, \(B=A\) and \(f = f_0 \textbf{1}_A\), where \(f_0 := |A|^{-1}\). Denote the 1-dimensional edges of A by \(e_1,\ldots ,e_\kappa \). For each \(i \in \{1,\ldots , \kappa \}\), let \(\alpha _i\) denote the angle that A subtends at edge \(e_i\) (with \(0< \alpha _i < 2 \pi \)), and write \(|e_i|\) for the length of \(e_i\). Assume the edges are listed in order so that \(\alpha _1 \le \alpha _2 \le \cdots \le \alpha _\kappa \). Let \(|\partial _1 A|\) denote the total area (i.e., 2-dimensional Hausdorff measure) of all faces of A, and let \(|\partial _2 A| \) denote the total length of those edges \(e_i\) for which \(\alpha _i= \alpha _1\). Let \(\beta \in \mathbb {R}\) and \(k \in \mathbb {N}\). Let \((r_t)_{t >0}\) be a family of real numbers satisfying (as \(t \rightarrow \infty \))

$$\begin{aligned} (2 \alpha _1 \wedge \pi ) f_0 t r_t^3 - \log (t f_0) - \beta = {\left\{ \begin{array}{ll} (3k -1) \log \log t + o(1) & \text{ if } \alpha _1 \le \pi /2 \\ (\frac{1+ 3k}{2}) \log \log t + o(1) & \text{ if } \alpha _1 > \pi /2. \end{array}\right. } \end{aligned}$$

Then

$$\begin{aligned} & \lim _{n \rightarrow \infty } \mathbb {P}[ R_{n,k} \le r_n ] = \lim _{t \rightarrow \infty } \mathbb {P}[ R'_{t,k} \le r_t ] \\ & \quad = {\left\{ \begin{array}{ll} \exp \left( - \frac{3^{1-k} \alpha _1^{1/3}|\partial _2 A| e^{-\beta /3} }{ (k-1)! (32)^{1/3}} \right) & {\text { if }} \alpha _1 < \pi /2, ~ {\text { or }} (\alpha _1 = \pi /2, k>1)\\ \exp \left( - \frac{ 3 \pi ^{5/3} 2^k |\partial _1 A| e^{-2 \beta /3} }{(k-1)! 3^k 32 } \right) & {\text { if }} \alpha _1 > \pi /2\\ \exp \left( - \frac{ \pi ^{1/3} |\partial _2 A| e^{-\beta /3} }{4} - \frac{ \pi ^{5/3} |\partial _1 A|e^{- 2 \beta /3}}{16} \right) & {\text { if }} \alpha _1 = \pi /2, k=1. \end{array}\right. } \end{aligned}$$
(3.5)

We now give a result in general d for \({\tilde{R}}_{n,k}\), and for \(R_{n,k}\) in the case with \({\overline{B}} \subset A^o\) (now we no longer require \(B=A\)). These cases are simpler because boundary effects are avoided. In fact, the result stated below has some overlap with already known results; it is convenient to state it here too for comparison with the results just given, and because we shall be using it to prove those results.

Proposition 3.4

Suppose A is compact with \( |A| > 0\) and \(f = f_0 \textbf{1}_A\), where \(f_0 := |A|^{-1}\), and \(B \subset A\) is Riemann measurable. Let \(k \in \mathbb {N}\) and \(\beta \in \mathbb {R}\). Then

$$\begin{aligned} & \lim _{n \rightarrow \infty } \mathbb {P}[ n \theta _d f_0 {\tilde{R}}_{n,k}^d - \log (n f_0) - (d+k-2) \log \log n \le \beta ] \\ & \quad = \lim _{t \rightarrow \infty } \mathbb {P}[ t \theta _d f_0 ({\tilde{R}}_{Z_t,k})^d - \log (t f_0) - (d+k -2) \log \log t \le \beta ] \\ & \quad = \exp (-(c_d / (k-1)!) |B|e^{-\beta } ). \end{aligned}$$
(3.6)

If moreover B is closed with \(B \subset A^o\), then

$$\begin{aligned} & \lim _{n \rightarrow \infty } \mathbb {P}[ n \theta _d f_0 R_{n,k}^d - \log (n f_0) - (d+k-2) \log \log n \le \beta ] \\ & \quad = \lim _{t \rightarrow \infty } \mathbb {P}[ t \theta _d f_0 (R'_{t,k})^d - \log (t f_0) - (d+k -2) \log \log t \le \beta ] \\ & \quad = \exp (-(c_d / (k-1)!) |B|e^{-\beta } ). \end{aligned}$$
(3.7)

The case \(k=1\) of the second equality of (3.7) can be found in [6]. It provides a stronger asymptotic result than the one in [18]. A similar statement to the case \(k=1\) of (3.6) can be found in [17].

Remark 3.5

Let \(k \in \mathbb {N}\). The definition (2.1) of the coverage threshold \(R_{n,k}\) suggests we think of the number and (random) locations of points as being given, and consider the smallest radius of balls around those points needed to cover B k times.

Alternatively, as in [16], one may think of the radius of the balls as being given, and consider the smallest number of balls (with locations generated sequentially at random) needed to cover B k times. That is, given \(r>0\), define the random variable

$$\begin{aligned} N(r,k) := \inf \{ n \in \mathbb {N}: \mathscr {X}_n (B(x,r)) \ge k ~~~ \forall x \in B\}, \end{aligned}$$

and note that \(N(r,k) \le n\) if and only if \(R_{n,k} \le r\). In the setting of Theorems 3.1, 3.2 or 3.3, one may obtain a limiting distribution for \(N(r,k)\) (suitably scaled and centred) as \(r \downarrow 0\) by using those results together with the following (we write \({\mathop {\longrightarrow }\limits ^{{{\mathscr {D}}}}}\) for convergence in distribution):

Proposition 3.6

Let \(k \in \mathbb {N}\). Suppose Z is a random variable with a continuous cumulative distribution function, and \(a, b, c, c' \in \mathbb {R}\) with \(a>0, b >0\) are such that \( a n R_{n,k}^d - b \log n -c \log \log n {- c' } {\mathop {\longrightarrow }\limits ^{{{\mathscr {D}}}}}Z \) as \(n \rightarrow \infty \). Then as \(r \downarrow 0\),

$$\begin{aligned} a r^d N(r,k) - b \log \left( (b/a) r^{-d} \right) - (c+ b) \log \log (r^{-d}) {- c'} {\mathop {\longrightarrow }\limits ^{{{\mathscr {D}}}}}Z. \end{aligned}$$

For example, using Theorem 3.1, and applying Proposition 3.6 with \(a = \theta _d f_0/2\), \(b= (d-1)/d\), \(c = d+ k -3 +1/d\), and \(c' = ((d-1)/d) \log f_0\), we obtain the following:

Corollary 3.7

Suppose that \(d\ge 2\) and \(f=f_0\textbf{1}_A\), where \(A \subset \mathbb {R}^d\) is compact, and has \(C^2\) boundary, and \(f_0 := |A|^{-1}\). Assume \(B=A\). Let \(k \in \mathbb {N}\), \(\zeta \in \mathbb {R}\). Then

$$\begin{aligned} & \lim _{r \downarrow 0} \mathbb {P}\Big [ \left( \frac{ \theta _d f_0 r^d N(r,k)}{2} \right) - \left( \frac{d-1}{d}\right) \log \left( \left( \frac{2 (d-1)}{d \theta _d} \right) r^{-d} \right) \\ & \qquad - \left( d+k-2 \right) \log \log r^{-d} \le \zeta \Big ] \\ & \quad = {\left\{ \begin{array}{ll} \exp \left( - |A| e^{-2 \zeta } - c_{2,1} | \partial A| e^{- \zeta } \right) & {\text { if }} ~ d=2, k =1 \\ \exp \left( - c_{d,k} | \partial A| e^{- \zeta } \right) & {\text { otherwise.}} \end{array}\right. } \end{aligned}$$
(3.8)

4 Strong laws of large numbers

The results in this section provide strong laws of large numbers (SLLNs) for \(R_{n}\). For these results we relax the condition that f be uniform on A. We give strong laws for \(R_n\) when \(A=B\) and A is either smoothly bounded or a polytope. Also for general A we give strong laws for \(R_n\) when \({\overline{B}} \subset A^o\), and for \({\tilde{R}}_n\) for general B.

More generally, we consider \(R_{n,k}\), allowing k to vary with n. Throughout this section, assume we are given a constant \(\beta \in [0,\infty ]\) and a sequence \(k: \mathbb {N}\rightarrow \mathbb {N}\) with

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( k(n)/\log n \right) = \beta ; ~~~~~ \lim _{n \rightarrow \infty } \left( k(n)/n \right) = 0. \end{aligned}$$
(4.1)

We make use of the following notation throughout:

$$\begin{aligned} f_0 := \mathrm{ess~inf}_{x \in B} f(x); ~~~~~~~ f_1:= \inf _{x \in \partial A} f(x); \end{aligned}$$
(4.2)
$$\begin{aligned} H(t) := {\left\{ \begin{array}{ll} 1-t + t \log t, ~~~ & {\text { if }} ~ t >0 \\ 1 , & {\text { if }} ~ t =0. \end{array}\right. } \end{aligned}$$
(4.3)

Observe that \(-H(\cdot )\) is unimodal with a maximum value of 0 at \(t=1\). Given \(a \in [0,\infty )\), we define the function \(\hat{H}_a: [0,\infty ) \rightarrow [a,\infty )\) by

$$\begin{aligned} y = \hat{H}_a(x) \Longleftrightarrow y H(a/y) =x,~ y \ge a, \end{aligned}$$

with \(\hat{H}_0(0):=0\). Note that \(\hat{H}_a(x)\) is increasing in x, and that \(\hat{H}_0(x)=x\) and \(\hat{H}_a(0)=a\).
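Since \(y \mapsto y H(a/y)\) has derivative \(1 - a/y \ge 0\) on \([a,\infty )\) and increases to infinity, the inverse \(\hat{H}_a\) can be computed by bisection. A small sketch (the tolerance and test values are arbitrary):

```python
import math

def H(t):
    # H(t) = 1 - t + t*log(t) for t > 0, and H(0) = 1, as in (4.3)
    return 1.0 if t == 0 else 1 - t + t * math.log(t)

def H_hat(a, x, tol=1e-12):
    """hat{H}_a(x): the solution y >= a of y * H(a/y) = x, found by bisection."""
    if a == 0:
        return x  # hat{H}_0(x) = x
    if x == 0:
        return a  # hat{H}_a(0) = a
    lo, hi = a, 2 * a + 1
    while hi * H(a / hi) < x:  # y H(a/y) increases to infinity, so this terminates
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * H(a / mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = H_hat(1.0, 0.5)
print(y, y * H(1 / y))  # the second value is (approximately) 0.5
```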

Throughout this paper, the phrase ‘almost surely’ or ‘a.s.’ means ‘except on a set of \(\mathbb {P}\)-measure zero’. We write \(f|_A\) for the restriction of f to A. If \(f_0=0\) and \(b>0\), we interpret \(b/f_0\) as \(+\infty \) in the following limiting statements, and likewise for \(f_1\).

Theorem 4.1

Suppose that \(d \ge 2\) and \(A \subset \mathbb {R}^d\) is compact with \(C^2\) boundary, and that \(f|_A\) is continuous at x for all \(x \in \partial A\). Assume also that \(B=A\) and (4.1) holds. Then as \(n \rightarrow \infty \), almost surely

$$\begin{aligned} (n \theta _d R_{n,k(n)}^d)/k(n) \rightarrow \max \left( 1/f_0, 2 /f_1 \right) ~~~~~{\text { if }}~ \beta =\infty ; \end{aligned}$$
(4.4)
$$\begin{aligned} (n \theta _d R_{n,k(n)}^d)/\log n \rightarrow \max \left( \hat{H}_\beta (1)/f_0, 2 \hat{H}_{\beta }(1-1/d)/f_1 \right) , ~~~{\text { if }}~ \beta < \infty . \end{aligned}$$
(4.5)

In particular, if \(k \in \mathbb {N}\) is a constant, then as \(n \rightarrow \infty \), almost surely

$$\begin{aligned} (n \theta _d R_{n,k}^d)/\log n \rightarrow \max \left( 1/f_0, (2-2/d)/f_1 \right) . \end{aligned}$$
(4.6)

We now consider the case where A is a polytope. Assume the polytope A is compact and finite, that is, has finitely many faces. Let \(\varPhi (A)\) denote the set of all faces of A (of all dimensions). Given a face \(\varphi \in \varPhi (A)\), denote the dimension of this face by \(D(\varphi )\). Then \(0 \le D(\varphi ) \le d-1\), and \(\varphi \) is a \(D(\varphi )\)-dimensional polytope embedded in \(\mathbb {R}^d\). Set \(f_\varphi := \inf _{x \in \varphi }f(x)\), let \(\varphi ^o\) denote the relative interior of \(\varphi \), and set \(\partial \varphi := \varphi {\setminus } \varphi ^o\). Then there is a cone \({{\mathscr {K}}}_\varphi \) in \(\mathbb {R}^d\) such that every \(x \in \varphi ^o\) has a neighbourhood \(U_x\) such that \(A \cap U_x = (x+ {{\mathscr {K}}}_\varphi ) \cap U_x\). Define the angular volume \(\rho _\varphi \) of \(\varphi \) to be the d-dimensional Lebesgue measure of \({{\mathscr {K}}}_\varphi \cap B(o,1)\).

For example, if \(D(\varphi )=d-1\) then \(\rho _\varphi = \theta _d/2\). If \(D(\varphi ) = 0\) then \(\varphi = \{v\}\) for some vertex \(v \in \partial A\), and \(\rho _\varphi \) equals the volume of \(B(v,r) \cap A\), divided by \(r^d\), for all sufficiently small r. If \(d=2\), \(D(\varphi )=0\) and \(\omega _\varphi \) denotes the angle subtended by A at the vertex \(\varphi \), then \(\rho _{\varphi } = \omega _\varphi /2\). If \(d=3\) and \(D(\varphi )=1\), and \(\alpha _\varphi \) denotes the angle subtended by A at the edge \(\varphi \) (which is either the angle between the two boundary planes of A meeting at \(\varphi \), or \(2 \pi \) minus this angle), then \(\rho _{\varphi } = 2 \alpha _\varphi /3\).
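For the unit cube these angular volumes are easy to verify by Monte Carlo: a vertex cone is an octant, a (right-angled) edge cone is a quarter-space, and a face gives a half-space. A rough sketch (sample size and seed are arbitrary, so the estimates carry sampling error):

```python
import math
import random

def angular_volume(keep, trials=200_000, seed=0):
    """Monte Carlo estimate of |K ∩ B(o,1)| for the cone K = {p : keep(p)},
    sampling uniformly from the box [-1, 1]^3 (which has volume 8)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        p = [rng.uniform(-1, 1) for _ in range(3)]
        if sum(c * c for c in p) <= 1 and keep(p):
            hits += 1
    return 8 * hits / trials

theta3 = 4 * math.pi / 3
vertex = angular_volume(lambda p: min(p) >= 0)            # octant: theta_3 / 8
edge = angular_volume(lambda p: p[0] >= 0 and p[1] >= 0)  # quarter-space: 2*(pi/2)/3
face = angular_volume(lambda p: p[0] >= 0)                # half-space: theta_3 / 2
print(vertex, edge, face)
```

The edge estimate illustrates the stated rule \(\rho _\varphi = 2\alpha _\varphi /3\) with \(\alpha _\varphi = \pi /2\), giving \(\pi /3\).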

Our result for A a polytope includes, when \(d \ge 4\), a condition that the polytope be convex; we conjecture that this condition is not needed. We include connectivity in the definition of a polytope, so for \(d=1\) a polytope is defined to be an interval.

Theorem 4.2

Suppose A is a compact finite polytope in \(\mathbb {R}^d\). If \(d \ge 4\), assume moreover that A is convex. Assume that \(f|_A\) is continuous at x for all \(x \in \partial A\), and set \(B=A\). Assume \(k(\cdot )\) satisfies (4.1). Then, almost surely,

$$\begin{aligned} \lim _{n \rightarrow \infty } nR_{n,k(n)}^d/ k(n) = \max \left( \frac{1}{f_0 \theta _d}, \max _{\varphi \in \varPhi (A)} \left( \frac{1}{f_\varphi \rho _\varphi } \right) \right) , ~~~~~ ~~ {\text { if }} ~\beta = \infty ; \end{aligned}$$
(4.7)
$$\begin{aligned} \lim _{n \rightarrow \infty } nR_{n,k(n)}^d/ \log n = \max \left( \frac{ \hat{H}_\beta (1) }{f_0 \theta _d} , \max _{\varphi \in \varPhi (A)} \left( \frac{ \hat{H}_\beta ( D(\varphi )/d) }{ f_\varphi \rho _\varphi } \right) \right) , ~~~{\text { if }} ~\beta < \infty . \end{aligned}$$
(4.8)

In the next three results, we spell out some special cases of Theorem 4.2.

Corollary 4.3

Suppose that \(d=2\), A is a convex polygon with \(B=A \), and \(f|_A\) is continuous at x for all \(x \in \partial A\). Let V denote the set of vertices of A, and for \(v \in V\) let \(\omega _v\) denote the angle subtended by A at vertex v. Assume (4.1) with \(\beta < \infty \). Then, almost surely,

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( \frac{ n R_{n,k(n)}^2}{\log n}\right) = \max \left( \frac{\hat{H}_\beta (1)}{\pi f_0}, \frac{2 \hat{H}_\beta (1/2)}{\pi f_1}, \max _{v \in V} \left( \frac{2 \beta }{ \omega _{v} f(v) } \right) \right) . \end{aligned}$$
(4.9)

In particular, for any constant \(k \in \mathbb {N}\), \( \lim _{n \rightarrow \infty } \left( \frac{ n \pi R_{n,k}^2}{\log n}\right) = \max \left( \frac{1}{f_0}, \frac{1}{ f_1} \right) = \frac{1}{f_0} \), since \(f_0 \le f_1\).
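The last display can be illustrated by simulation (a sketch of our own; the sample size, grid resolution and seed below are arbitrary choices, and the \(\log n\) normalisation makes convergence very slow). For uniform points on the unit square, \(f_0=f_1=1\) and the prediction for constant k is \(n \pi R_{n,k}^2/\log n \rightarrow 1\); we take \(k=1\).

```python
import math
import random

# Simulation sketch of the coverage threshold R_n for A = B = [0,1]^2
# with uniform points (f_0 = f_1 = 1) and k = 1.  R_n equals the
# Hausdorff distance from the sample to A; here it is approximated by
# the largest distance from a point of a fine grid to the sample, so
# the estimate carries an error of order the grid spacing.
random.seed(7)
n, m = 500, 80  # sample size and grid resolution (arbitrary choices)
pts = [(random.random(), random.random()) for _ in range(n)]

max_d2 = 0.0
for i in range(m + 1):
    for j in range(m + 1):
        gx, gy = i / m, j / m
        d2 = min((gx - x) ** 2 + (gy - y) ** 2 for x, y in pts)
        max_d2 = max(max_d2, d2)
R_est = math.sqrt(max_d2)

ratio = n * math.pi * R_est ** 2 / math.log(n)
print(R_est, ratio)  # the limit theorem predicts ratio -> 1, slowly
```

At sample sizes of this order the ratio typically lands within a factor of two or three of its limit.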

Corollary 4.4

Suppose \(d=3\) (so \(\theta _d = 4\pi /3\)), A is a convex polyhedron with \(B=A \), and \(f|_A\) is continuous at x for all \(x \in \partial A\). Let V denote the set of vertices of A, and E the set of edges of A. For \(e\in E\), let \(\alpha _e\) denote the angle subtended by A at edge e, and \(f_e\) the infimum of f over e. For \(v \in V\) let \(\rho _v\) denote the angular volume of vertex v. Suppose (4.1) holds with \(\beta < \infty \). Then, almost surely,

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( \frac{ n R_{n,k(n)}^3}{\log n}\right) = \max \left( \frac{\hat{H}_\beta (1)}{\theta _3 f_0}, \frac{2 \hat{H}_\beta (2/3)}{\theta _3 f_1}, \frac{3 \hat{H}_\beta (1/3) }{ 2 \min _{e \in E} (\alpha _e f_e ) } , \max _{v \in V} \left( \frac{\beta }{ \rho _{v} f(v) } \right) \right) . \end{aligned}$$

In particular, if \(\beta =0\) the above limit comes to \(\max \left( \frac{3}{ 4 \pi f_0}, \frac{1}{\pi f_1}, \max _{e \in E} \left( \frac{1 }{2 \alpha _e f_e } \right) \right) \).

Corollary 4.5

Suppose \(A=B = [0,1]^d\), and \(f|_A\) is continuous at x for all \(x \in \partial A\). For \(1 \le j \le d\) let \(\partial _j\) denote the union of all \((d-j)\)-dimensional faces of A, and let \(f_j\) denote the infimum of f over \(\partial _j\). Assume (4.1) with \(\beta < \infty \). Then

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( \frac{ n R_{n,k(n)}^d}{\log n}\right) = \max _{0 \le j \le d} \left( \frac{2^j \hat{H}_\beta (1-j/d) }{ \theta _d f_j } \right) , ~~~~~ a.s. \end{aligned}$$
(4.10)

It is perhaps worth spelling out what the preceding results mean in the special case where \(\beta =0\) (for example, if k(n) is a constant) and also \(\mu \) is the uniform distribution on A (i.e. \(f(x) \equiv f_0\) on A). In this case, the right hand side of (4.6) comes to \((2-2/d)/f_0\), while the right hand side of (4.8) comes to \(f_0^{-1}\max (1/ \theta _d, \max _{\varphi \in \varPhi (A)} \frac{D(\varphi )}{d \rho _\varphi })\). The limit in (4.9) comes to \(1/(\pi f_0)\), while the limit in Corollary 4.4 comes to \(f_0^{-1} \max [ 1/\pi , \max _e (1 /(2 \alpha _e))]\), and the right hand side of (4.10) comes to \(2^{d-1}/(\theta _d d)\) (here \(f_0=1\), since \(\mu \) is uniform on \(A=[0,1]^d\)).
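The maximisation in (4.10) for the uniform case can be checked mechanically: with \(\beta =0\) one has \(\hat{H}_0(x)=x\), so the right hand side is \(\max _{0 \le j \le d} 2^j(1-j/d)/\theta _d\), whose maximum value is \(2^{d-1}/(\theta _d d)\), attained at \(j=d-1\) (and also at \(j=d-2\), where the term takes the same value). A short check of our own:

```python
import math

# Check of the uniform-density, beta = 0 case of (4.10): with f_j = 1
# and hat{H}_0(x) = x, the right-hand side is
# max_j 2^j (1 - j/d) / theta_d.  The maximum value is
# 2^(d-1) / (theta_d * d), attained at j = d-1 (the j = d-2 term is
# equal, and the j = d term vanishes).
def theta(d):
    """Volume of the unit ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

for d in range(2, 8):
    terms = [2 ** j * (1 - j / d) / theta(d) for j in range(d + 1)]
    assert abs(max(terms) - 2 ** (d - 1) / (theta(d) * d)) < 1e-12
    assert max(range(d + 1), key=terms.__getitem__) in (d - 2, d - 1)
print("maximum is 2^(d-1)/(theta_d d) for d = 2,...,7")
```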

Remark 4.6

The notion of coverage threshold is analogous to that of the connectivity threshold in the theory of random geometric graphs [21]. Our results show that the threshold for full coverage by the balls \(B(X_i,r)\) is asymptotically twice the threshold for the union of these balls to be connected (provided \(A^o\) is connected), at least when A has a smooth boundary or A is a convex polytope. This can be seen by comparing Theorem 4.1 above with [21, Theorem 13.7], and Corollary 4.5 above with [22, Theorem 2.5].

Remark 4.7

We compare our results with [8]. In the setting of our Theorem 4.1, [8, Theorem 3] and [8, Remark 1] give an upper bound on \(\limsup _{n \rightarrow \infty }(n \theta _d R_n^d/\log n)\) of \(\max (f_0^{-1}, 2f_1^{-1})\) in probability or \(\max (2f^{-1}_0, 4f_1^{-1})\) almost surely. In the setting of our Theorem 4.2, they give an upper bound of \(f_0^{-1} \vee \max _{\varphi } (\theta _d/(f_\varphi \rho _\varphi ))\) in probability, or twice this almost surely. Our (4.6) and (4.8) improve significantly on those results.

Remark 4.8

In Theorem 4.2 we do not consider the case where A is a non-convex polytope. To generalize the proof of Lemma 6.12 to non-convex polytopes for general d would seem to require considerably more work.

Our final result is a law of large numbers for \({\tilde{R}}_{n,k}\), no longer requiring \(B=A\). In the case where B is contained in the interior of A, this easily yields a law of large numbers for the k-coverage threshold \(R_{n,k}\). Recall from Sect. 2 that \(\mu \) denotes the probability measure on A with density f.

Proposition 4.9

Suppose that either (i) B is compact and Riemann measurable with \(\mu (B) >0\), and \(B \subset A^o\), and f is continuous on A; or (ii) \(B=A\). Assume (4.1) holds. Then, almost surely,

$$\begin{aligned}{} & {} \lim _{n \rightarrow \infty } ( n \theta _d {\tilde{R}}_{n,k(n)}^d/k(n)) = f_0^{-1} ~~~~~{\text { if }}~ \beta = \infty ; \end{aligned}$$
(4.11)
$$\begin{aligned}{} & {} \lim _{n \rightarrow \infty } (n \theta _d {\tilde{R}}_{n,k(n)}^d /\log n) = \hat{H}_\beta (1)/f_0, ~~~~~{\text { if }}~ \beta < \infty . \end{aligned}$$
(4.12)

In particular, if \(k \in \mathbb {N}\) is a constant then

$$\begin{aligned} \mathbb {P}[ \lim _{n \rightarrow \infty } ( n \theta _d f_0 {\tilde{R}}_{n,k}^d /\log n) = 1 ] =1. \end{aligned}$$
(4.13)

In Case (i), all of the above almost sure limiting statements hold for \(R_{n,k(n)}\) as well as for \({\tilde{R}}_{n,k(n)}\).

Proposition 4.9 has some overlap with known results; the uniform case with \(A=B = [0,1]^d\) and \(f \equiv 1\) on A is covered by [10, Theorem 1]. Taking \(\textbf{C}\) of that paper to be the class of Euclidean balls centred on the origin, we see that the quantity denoted \(M_{1,m}\) in [10] equals \(\tilde{R}_n\). In [10, Example 3] it is stated that the Euclidean balls satisfy the conditions of [10, Theorem 3]. See also [17]. Note also that [15] has a result similar to the case of Proposition 4.9 where \(d=2\), \(A=B= [0,1]^2\) and f is uniform over A.

5 Strategy of proofs

We first give an overview of the strategy for the proofs, in Sect. 6, of the strong laws of large numbers that were stated in Sect. 4.

For \(n \in \mathbb {N}\) and \(p \in [0,1]\) let \(\textrm{Bin}(n,p)\) denote a binomial random variable with parameters n and p. Recall that \(H(\cdot )\) was defined at (4.3), and that \(Z_t\) denotes a Poisson(t) variable for \(t>0\). The proofs in Sect. 6 rely heavily on the following lemma.

Lemma 5.1

(Chernoff bounds) Suppose \(n \in \mathbb {N}\), \(p \in (0,1)\), \(t >0\) and \(0< k < n\).

(a) If \(k \ge np\) then \(\mathbb {P}[ \textrm{Bin}(n,p) \ge k ] \le \exp \left( - np H(k/(np) ) \right) \).

(b) If \(k \le np\) then \(\mathbb {P}[ \textrm{Bin}(n,p) \le k ] \le \exp \left( - np H(k/(np)) \right) \).

(c) If \(k \ge e^2 np\) then \(\mathbb {P}[ \textrm{Bin}(n,p) \ge k ] \le \exp \left( - (k/2) \log (k/(np))\right) \le e^{-k}\).

(d) If \(k < t\) then \(\mathbb {P}[Z_t \le k ] \le \exp (- t H(k/t))\).

(e) If \(k \in \mathbb {N}\) then \(\mathbb {P}[Z_t = k ] \ge (2 \pi k)^{-1/2}e^{-1/(12k)}\exp (- t H(k/t))\).

Proof

See e.g. [21, Lemmas 1.1, 1.2 and 1.3]. \(\square \)
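Assuming \(H(x)=1-x+x\log x\) (with \(H(0)=1\)) for the function defined at (4.3), as in [21], parts (a) and (b) are easy to check against exact binomial tail probabilities; a short numerical sketch of our own:

```python
import math

# Numerical check of Lemma 5.1(a),(b), assuming the large-deviations
# function of [21]: H(x) = 1 - x + x*log(x), with H(0) = 1.
def H(x):
    return 1.0 if x == 0 else 1 - x + x * math.log(x)

def binom_tail_ge(n, p, k):
    """Exact P[Bin(n,p) >= k], summed from the pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n, p = 100, 0.3          # so np = 30
for k in (40, 50, 60):   # part (a): upper tail, k >= np
    assert binom_tail_ge(n, p, k) <= math.exp(-n * p * H(k / (n * p)))
for k in (10, 20, 25):   # part (b): lower tail, k <= np
    # P[Bin <= k] = 1 - P[Bin >= k+1]
    assert 1 - binom_tail_ge(n, p, k + 1) <= math.exp(-n * p * H(k / (n * p)))
print("Chernoff bounds verified for n = 100, p = 0.3")
```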

Recall that \(R_{n,k}\) is defined at (2.1), that we assume \((k(n))_{n \ge 1}\) satisfies (4.1) for some \(\beta \in [0,\infty ]\), and that \(\hat{H}_\beta (x)\) is defined to be the \( y \ge \beta \) such that \(y H(\beta /y) =x\), where \(H(\cdot )\) was defined at (4.3).

If \(f \equiv f_0\) on A for a constant \(f_0\), and \(r>0\), then for \(x \in A^{(r)}\), \(\mathscr {X}_n(B(x,r))\) is binomial with mean \(n f_0 \theta _d r^d =: M\), say. Hence if \(M > k(n)\), then parts (b) and (e) of Lemma 5.1 suggest that we should have

$$\begin{aligned} \mathbb {P}[\mathscr {X}_n(B(x,r)) < k(n) ] \approx \exp (- M H(k(n)/M)) . \end{aligned}$$

If \(\beta < \infty \), and we choose \(r=r_n\) so that \(M = a \log n\) with \(a = \hat{H}_\beta (1)\) this probability approximates to \(\exp ( - a (\log n) H( (\beta \log n) / (a \log n)))\), which comes to \(n^{-1}\). Since we can find \(n^{1+o(1)}\) disjoint balls of radius \(r_n\) where this might happen, this suggests \(a = \hat{H}_\beta (1)\) is the critical value of \(a := M/\log n\), below which the interior region \(A^{(r_n)}\) is not covered (k(n) times) with high probability, and above which it is covered. We can then improve the ‘with high probability’ statement to an ‘almost surely for large enough n’ statement using Lemma 6.1. If f is continuous but not constant on A, the value of \(f_0\) defined at (4.2) still determines the critical choice of a. If \(\beta = \infty \) instead, taking \(M= a' k(n)\) the critical value of \(a'\) is now 1. These considerations lead to Proposition 4.9.
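Since \(yH(\beta /y) = y-\beta -\beta \log (y/\beta )\) is nondecreasing and unbounded in \(y \ge \beta \) (for \(\beta >0\)), the value \(\hat{H}_\beta (x)\) can be computed by bisection; a sketch of our own, again assuming \(H(x)=1-x+x\log x\) with \(H(0)=1\), so that \(\hat{H}_0(x)=x\):

```python
import math

# Bisection solver for hat{H}_beta(x): the y >= beta with
# y * H(beta/y) = x, assuming H(x) = 1 - x + x*log(x) as in [21].
# For beta = 0 this reduces to hat{H}_0(x) = x, since H(0) = 1.
def Hhat(beta, x):
    if beta == 0:
        return x
    g = lambda y: y - beta - beta * math.log(y / beta)  # = y * H(beta/y)
    lo, hi = beta, beta + x + 1.0
    while g(hi) < x:      # g is increasing and unbounded, so this ends
        hi *= 2.0
    for _ in range(80):   # bisection to roughly machine precision
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

print(Hhat(1.0, 1.0))  # ~3.1462, the root of y - log(y) = 2
```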

Now consider the boundary region \(A {\setminus } A^{(r)}\), in the case where \(\partial A\) is smooth. We can argue similarly to before, except that for \(x \in \partial A\) the approximate mean of \(\mathscr {X}_n(B(x,r))\) changes to \(nf_1 \theta _d r^d/2 =: M'\). If \(\beta < \infty \), and we now choose \(r = r_n\) so \(M' = a' \log n\) with \(a' = \hat{H}_\beta (1-1/d)\), then \(\mathbb {P}[\mathscr {X}_n(B(x,r_n) ) < k(n) ] \approx n^{-(1-1/d)}\). Since we can find \(n^{1-1/d + o(1)}\) disjoint balls of radius \(r_n\) centred in \(A {\setminus } A^{(r_n)}\), this suggests \(a' := \hat{H}_\beta (1-1/d)\) is the critical choice of \(a'\) for covering \(A {\setminus } A^{(r_n)}\).

For polytopal A we consider covering the regions near each of the lower-dimensional faces of \(\partial A\) in an analogous way; the dimension of a face affects both the \(\mu \)-content of a ball centred on that face, and the number of disjoint balls that can be packed into the region near the face.

Next, we describe the strategy for the proof, in Sect. 7, of the weak convergence results that were stated in Sect. 3.

First, we shall provide a general ‘De-Poissonization’ lemma (Lemma 7.1), as a result of which for each of the Theorems in Sect. 3 it suffices to prove the results for a Poisson process rather than a binomial point process (i.e., for \(R'_{t,k}\) rather than for \(R_{n,k}\)).

Next, we shall provide a general lemma (Lemma 7.2) giving the limiting probability of covering (k times, with k now fixed) a bounded region of \(\mathbb {R}^d\) by a spherical Poisson Boolean model (SPBM) on the whole of \(\mathbb {R}^d\), in the limit of high intensity and small balls. This is based on results from [16] (or [12] if \(k=1\)). Applying Lemma 7.2 for an SPBM with all balls of the same radius yields a proof of Proposition 3.4.

Next, we shall consider the SPBM with balls of equal radius \(r_t\), centred on a homogeneous Poisson process of intensity \(t f_0 \) in a d-dimensional half-space. In the large-t limit, with \(r_t\) shrinking in an appropriate way, in Lemma 7.4 we determine the limiting probability that a given bounded set within the surface of the half-space is covered, by applying Lemma 7.2 in \(d-1\) dimensions, now with balls of varying radii. Moreover, we will show that the region in the half-space within distance \(r_t\) of that set is covered with the same limiting probability.

We can then complete the proof of Theorem 3.2 by applying Lemma 7.4 to determine the limiting probability that the region near the boundaries of a polygonal set A is covered, and Proposition 3.4 to determine the limiting probability that the interior region is covered, along with a separate argument to show the regions near the corners of polygonal A are covered with high probability.

For Theorem 3.3 we again use Proposition 3.4 to determine the limiting probability that the interior region is covered, and Lemma 7.4 to determine the limiting probability that the region near the faces (but not the edges) of polyhedral A are covered. To deal with the region near the edges of A, we also require a separate lemma (Lemma 7.10) determining the limiting probability that a bounded region near the edge of an infinite wedge-shaped region in \(\mathbb {R}^3\) is covered by an SPBM restricted to that wedge-shaped region.

The proof of Theorem 3.1 requires further ideas. Let \(\gamma \in (1/(2d),1/d)\). We shall work simultaneously on two length scales, namely the radius \(r_t\) satisfying (7.15) (and hence satisfying \(r_t = \varTheta (((\log t)/t)^{1/d})\)), and a coarser length-scale given by \(t^{-\gamma }\). If \(d=2\), we approximate to \(\partial A\) by a polygon of side-length that is \(\varTheta (t^{-\gamma })\). We approximate to \({{\mathscr {P}}}_t\) by a Poisson process inside the polygon, and can determine the asymptotic probability of complete coverage of this approximating polygon by considering a lower-dimensional Boolean model on each of the edges. In fact we shall line up these edges, by means of rigid motions, into a single line segment embedded in the plane; in the limit we obtain the limiting probability of covering this line segment with balls centred on a Poisson process in the half-space to one side of this line segment, which we know about by Lemma 7.4. By separate estimates we can show that the error terms either from the corners of the polygon, or from the approximation of Poisson processes, are negligible in the large-t limit.

For \(d \ge 3\) we would like to approximate to A by a polyhedral set \(A_t\) obtained by taking its surface to be a triangulation of the surface of A with side-lengths \(\varTheta (t^{-\gamma })\). However, two obstacles to this strategy present themselves.

The first obstacle is that in 3 or more dimensions, it is harder to be globally explicit about the set \(A_t\) and the set difference \(A \triangle A_t\). We deal with this by triangulating \(\partial A\) locally rather than globally; we break \(\partial A\) into finitely many pieces, each of which lies within a single chart within which \(\partial A\), after a rotation, can be expressed as the graph of a \(C^2\) function on a region U in \(\mathbb {R}^{d-1}\). Then we triangulate U (in the sense of tessellating it into simplices) explicitly and use this to determine an explicit local triangulation (in the sense of approximating the curved surface by a union of \((d-1)\)-dimensional simplices) of \(\partial A\).

The second obstacle is that the simplices in the triangulation cannot in general be reassembled into a \((d-1)\)-dimensional cube. To get around this, we shall pick \(\gamma ' \in (\gamma ,1/d)\) and subdivide these simplices into smaller \((d-1)\)-dimensional cubes of side \(t^{-\gamma '}\); we can reassemble these smaller \((d-1)\)-dimensional cubes into a cube in \(\mathbb {R}^{d-1}\), and control the boundary region near the boundaries of the smaller \((d-1)\)-dimensional cubes, or near the boundary of the simplices, by separate estimates.

6 Proof of strong laws of large numbers

In this section we prove the results stated in Sect. 4. Throughout this section we are assuming we are given a constant \(\beta \in [0,\infty ]\) and a sequence \((k(n))_{n \in \mathbb {N}}\) satisfying (4.1). Recall that \(\mu \) denotes the distribution of \(X_1\), and this has a density f with support A, and that \(B \subset A\) is fixed, and \(R_{n,k}\) is defined at (2.1).

We shall repeatedly use the following lemma. It is based on what in [21] was called the ‘subsequence trick’. This result says that if an array of random variables \(U_{n,k}\) is monotone in n and k, and \(U_{n,k(n)}\), properly scaled, converges in probability to a constant at rate \(n^{-\varepsilon }\), one may be able to improve this to almost sure convergence.

Lemma 6.1

(Subsequence trick) Suppose \((U_{n,k},(n,k) \in \mathbb {N}\times \mathbb {N})\) is an array of random variables on a common probability space such that \(U_{n,k} \) is nonincreasing in n and nondecreasing in k, that is, \(U_{n+1,k} \le U_{n,k} \le U_{n,k+1} \) almost surely, for all \((n,k) \in \mathbb {N}\times \mathbb {N}\). Let \(\beta \in [0,\infty ), \varepsilon>0, c >0\), and suppose \((k(n),n \in \mathbb {N})\) is an \(\mathbb {N}\)-valued sequence such that \(k(n)/\log n \rightarrow \beta \) as \(n \rightarrow \infty \).

(a) Suppose \( \mathbb {P}[n U_{n, \lfloor (\beta + \varepsilon ) \log n \rfloor } > \log n] \le c n^{-\varepsilon }, \) for all but finitely many n. Then \(\mathbb {P}[ \limsup _{n \rightarrow \infty } n U_{n,k(n)} /\log n \le 1] =1\).

(b) Suppose \(\varepsilon < \beta \) and \( \mathbb {P}[n U_{n, \lceil (\beta - \varepsilon ) \log n \rceil } \le \log n] \le c n^{-\varepsilon }, \) for all but finitely many n. Then \(\mathbb {P}[ \liminf _{n \rightarrow \infty } n U_{n,k(n)} /\log n \ge 1] =1\).

Proof

(a) For each n set \(k'(n) := \lfloor (\beta + \varepsilon ) \log n \rfloor \). Pick \(K \in \mathbb {N}\) with \(K \varepsilon >1\). Then by our assumption, we have for all large enough n that \(\mathbb {P}[n^K U_{n^K, k'(n^K)} > \log (n^K) ]\le c n^{-K \varepsilon }\), which is summable in n. Therefore by the Borel-Cantelli lemma, there exists a random but almost surely finite N such that for all \(n \ge N\) we have

$$\begin{aligned} n^K U_{n^K,k'(n^K)} \le \log (n^K), \end{aligned}$$

and also \(k(m) \le (\beta + \varepsilon /2) \log m\) for all \(m \ge N^K\), and moreover \((\beta + \varepsilon /2) \log ((n+1)^K) \le (\beta + \varepsilon ) \log (n^K) \) for all \(n \ge N\). Now for \(m \in \mathbb {N}\) with \(m \ge N^K\), choose \(n \in \mathbb {N}\) such that \(n^K \le m < (n+1)^K\). Then

$$\begin{aligned} k(m) \le (\beta +\varepsilon /2)\log ((n+1)^K) \le (\beta +\varepsilon )\log (n^K), \end{aligned}$$

so \(k(m) \le k'(n^K)\). Since \(U_{m,k}\) is nonincreasing in m and nondecreasing in k,

$$\begin{aligned} m U_{m,k(m)}/\log m \le (n+1)^K U_{n^K,k'(n^K)}/\log (n^K) \le (1+ n^{-1})^K , \end{aligned}$$

which gives us the result asserted.

(b) The proof is similar to that of (a), and is omitted. \(\square \)

6.1 General lower and upper bounds

In this subsection we present asymptotic lower and upper bounds on \(R_{n,k(n)}\), not requiring any extra assumptions on A and B. In fact, A here can be any metric space endowed with a probability measure \(\mu \), and B can be any subset of A. The definition of \(R_{n,k}\) at (2.1) carries over in an obvious way to this general setting.

Later, we shall derive the results stated in Sect. 4 by applying the results of this subsection to the different regions within A (namely interior, boundary, and lower-dimensional faces).

Given \(r>0, a>0\), define the ‘packing number’ \( \nu (B,r,a) \) to be the largest number m such that there exists a collection of m disjoint closed balls of radius r centred on points of B, each with \(\mu \)-measure at most a. The proof of the following lemma implements, for a general metric space, the strategy outlined in Sect. 5.
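For intuition (our illustration, not needed for the proofs): when \(B = [0,1]^2\) and \(\mu \) is uniform, balls of radius r centred on a grid of spacing 3r, kept at distance r from the boundary, are pairwise disjoint and each has \(\mu \)-measure exactly \(\theta _2 r^2\). Hence \(\nu (B,r,ar^2) = \varOmega (r^{-2})\) for any \(a \ge \theta _2\), which is the hypothesis of Lemma 6.2 with \(b=d=2\). A sketch:

```python
import math

# Packing-number illustration for B = [0,1]^2 with mu uniform: balls
# of radius r centred on a grid of spacing 3r inside [r, 1-r]^2 are
# pairwise disjoint, and each has mu-measure exactly pi * r^2.  So
# nu(B, r, a*r^2) >= k(r)^2 for any a >= pi, and k(r)^2 * r^2 stays
# bounded away from 0, i.e. nu(B, r, a*r^2) = Omega(r^{-2}).
def packing_lower_bound(r):
    k = int((1 - 2 * r) // (3 * r)) + 1  # grid points per axis in [r, 1-r]
    return k * k

for r in (0.1, 0.01, 0.001):
    m = packing_lower_bound(r)
    print(r, m, m * r ** 2)  # third column approaches 1/9 as r -> 0
```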

Lemma 6.2

(General lower bound) Let \(a >0, b \ge 0\). Suppose \(\nu (B,r,a r^d) = \varOmega (r^{-b})\) as \(r \downarrow 0\). Then, almost surely, if \(\beta = \infty \) then \(\liminf _{n \rightarrow \infty } \left( n R_{n,k(n)}^d/k(n) \right) \ge 1/a\). If \(\beta < \infty \) then \(\liminf _{n \rightarrow \infty } \left( n R_{n,k(n)}^d/\log n \right) \ge a^{-1} \hat{H}_\beta (b/d)\), almost surely.

Proof

First suppose \(\beta = \infty \). Let \(u\in (0,1/a )\). Set \(r_n := \left( uk(n)/n \right) ^{1/d}\), \(n \in \mathbb {N}\). By (4.1), \(r_n \rightarrow 0\) as \(n \rightarrow \infty \). Then, given n sufficiently large, we have \(\nu (B,r_n,ar_n^d ) >0\) so we can find \(x \in B\) such that \( \mu (B(x,r_n)) \le a r_n^d\), and hence \( n \mu (B(x,r_n)) \le a uk(n) . \) If \(k(n) \le e^2 n \mu (B(x,r_n))\) (and hence \(n \mu (B(x,r_n)) \ge e^{-2} k(n)\)), then since \(\mathscr {X}_n(B(x,r_n))\) is binomial with parameters n and \(\mu (B(x,r_n))\), by Lemma 5.1(a) we have that

$$\begin{aligned}{} & {} \mathbb {P}[ R_{n,k(n)} \le r_n] \le \mathbb {P}[ \mathscr {X}_n(B(x,r_n) ) \ge k(n) ] \le { \exp \left( - n \mu (B(x,r_n)) H \left( \frac{k(n)}{n \mu ( B(x,r_n))} \right) \right) }\\{} & {} \quad \le \exp \left( - e^{-2} k(n) H \left( (a u)^{-1} \right) \right) , \end{aligned}$$

while if \(k (n) > e^2 n \mu (B(x,r_n))\) then by Lemma 5.1(c), \(\mathbb {P}[ R_{n,k(n)} \le r_n] \le e^{-k(n)}\). Therefore \(\mathbb {P}[ R_{n,k(n)} \le r_n]\) is summable in n because we assume here that \(k(n)/\log n \rightarrow \infty \) as \(n \rightarrow \infty \). Thus by the Borel-Cantelli lemma, almost surely \(R_{n,k(n)} > r_n\) for all but finitely many n, and hence \( \liminf n R_{n,k(n)}^d /k(n) \ge u\). This gives the result for \(\beta = \infty \).

Now suppose instead that \(\beta < \infty \) and \(b=0\), so that \(\hat{H}_\beta (b/d) = \beta \). Assume that \(\beta >0\) (otherwise the result is trivial). Let \(\beta ' \in (0, \beta )\). Let \(\delta > 0\) with \(\beta ' < \beta - \delta \) and with \( \beta ' H \left( \frac{ \beta - \delta }{\beta ' } \right) > \delta . \) For \(n \in \mathbb {N}\), set \(r_n := ( (\beta ' \log n)/ (a n) )^{1/d}\) and set \(k'(n)= \lceil (\beta - \delta ) \log n \rceil \). By assumption \(\nu (B,r_n,a r_n^d) = \varOmega (1)\), so for all n large enough, we can (and do) choose \(x_n \in B\) such that \( n \mu (B(x_{n},r_n)) \le n a r_n^d = \beta ' \log n. \) Then by a simple coupling, and Lemma 5.1(a),

$$\begin{aligned} \mathbb {P}[ \mathscr {X}_{n} ( B(x_{n},r_n) ) \ge k'(n) ]\le & {} { \mathbb {P}\left[ \textrm{Bin}\left( n, (\beta ' \log n)/n) \right) \ge k'(n) \right] }\\\le & {} \exp \left( - \left( \beta ' \log n \right) H \left( \frac{\beta - \delta }{\beta ' } \right) \right) \le n^{- \delta }. \end{aligned}$$

Hence \(\mathbb {P}[ n R_{n,k'(n)}^d \le (\beta '/a) \log n ] = \mathbb {P}[ R_{n,k'(n)} \le r_n] \le n^{-\delta }\). Then by the subsequence trick (Lemma 6.1(b)), we may deduce that \( \liminf ( n R_{n,k(n)}^d /\log n) \ge \beta '/a , \) almost surely, which gives us the result for this case.

Now suppose instead that \(\beta < \infty \) and \(b > 0\). Let \(u\in ( 0, a^{-1} \hat{H}_\beta (b/d))\), so that \(ua H(\beta /(ua)) < b/d\). Choose \(\varepsilon >0\) such that \((1+ \varepsilon ) ua H(\beta /(ua)) < (b/d)-9 \varepsilon \).

For each \(n \in \mathbb {N}\) set \(r_n= (u(\log n)/n)^{1/d}\). Let \(m_n := \nu (B,r_n,a r_n^d)\), and choose \(x_{n,1},\ldots ,\) \(x_{n,m_n} \in B\) such that the balls \(B(x_{n,1},r_n),\ldots ,B(x_{n,m_n},r_n)\) are pairwise disjoint and each have \(\mu \)-measure at most \(ar_n^d\). Set \(\lambda (n):= n+ n^{3/4}\). For \(1 \le i \le m_n\), if \(k(n) > 1\) then by a simple coupling, and Lemma 5.1(e),

$$\begin{aligned}{} & {} \mathbb {P}[ {{\mathscr {P}}}_{\lambda (n)}( B(x_{n,i},r_n) ) \le k(n) -1] \ge \mathbb {P}[ Z_{\lambda (n) ar_n^d} \le k(n)-1]\\{} & {} \quad \ge \left( \frac{e^{-1/(12(k(n)-1))}}{ \sqrt{2 \pi (k(n)-1)}} \right) \exp \left( - \lambda (n) a r_n^d H \left( \frac{k(n)-1}{\lambda (n) a r_n^d } \right) \right) . \end{aligned}$$

Now \(\lambda (n)r_n^d/\log n \rightarrow u\) so by (4.1), \((k(n)-1)/(\lambda (n) a r_n^d) \rightarrow \beta /(ua)\) as \(n \rightarrow \infty \). Thus by the continuity of \(H(\cdot )\), provided n is large enough and \(k(n) > 1\), for \(1 \le i \le m_{n}\),

$$\begin{aligned}{} & {} \mathbb {P}[{{\mathscr {P}}}_{\lambda (n)}(B(x_{n,i},r_n)) \le k(n)-1]\nonumber \\{} & {} \quad \ge \left( \frac{e^{-1/12} }{\sqrt{2 \pi (\beta +1 ) \log n }} \right) \exp \left( - (1+ \varepsilon ) a uH \left( \frac{\beta }{a u} \right) \log n \right) . \end{aligned}$$
(6.1)

If \(k(n)=1\) for infinitely many n, then \(\beta =0\) and (6.1) still holds for large enough n.

By (6.1) and our choice of \(\varepsilon \), there is a constant \(c >0 \) such that for all large enough n and all \(i \in \{1,\ldots ,m_n\}\) we have

$$\begin{aligned} \mathbb {P}[{{\mathscr {P}}}_{\lambda (n)}(B(x_{n,i},r_n)) \le k(n)-1] \ge c (\log n)^{-1/2} n^{ 9 \varepsilon - b/d } \ge n^{8\varepsilon -b/d}. \end{aligned}$$

Hence, setting \(E_n:= \cap _{i=1}^{m_n} \{ {{\mathscr {P}}}_{\lambda (n)}(B(x_{n,i},r_n)) \ge k(n) \}\), for all large enough n we have

$$\begin{aligned} \mathbb {P}[E_n] \le ( 1- n^{8\varepsilon -b/d})^{m_n} \le \exp ( - m_n n^{8\varepsilon -b/d} ). \end{aligned}$$

By assumption \(m_n = \nu (B,r_n , a r_n^d) = \varOmega (r_n^{-b})\) so that for large enough n we have \(m_n \ge n^{(b/d) - \varepsilon }\), and therefore \(\mathbb {P}[E_n]\) is summable in n.

By Lemma 5.1(d), and Taylor expansion of H(x) about \(x=1\) (see the print version of [21, Lemma 1.4] for details; there may be a typo in the electronic version), for all n large enough \( \mathbb {P}[Z_{\lambda (n)} < n] \le \exp ( - \frac{1}{9} n^{1/2}), \) which is summable in n. Since \(R_{m,k}\) is nonincreasing in m, by the union bound

$$\begin{aligned} \mathbb {P}[R_{n,k(n)} \le r_n] \le \mathbb {P}[R_{Z_{\lambda (n)},k(n)} \le r_n ] + \mathbb {P}[ Z_{\lambda (n) }< n ] \le \mathbb {P}[ E_n]+ \mathbb {P}[ Z_{\lambda (n)} < n], \end{aligned}$$

which is summable in n by the preceding estimates. Therefore by the Borel-Cantelli lemma,

$$\begin{aligned} \mathbb {P}[ \liminf ( n R_{n,k(n)}^d /\log n) \ge u] =1, ~~~~~ u< a^{-1} \hat{H}_\beta (b/d), \end{aligned}$$

so the result follows for this case too. \(\square \)

Given \(r>0\), and \(D \subset A\), define the ‘covering number’

$$\begin{aligned} \kappa (D,r): = \min \{m \in \mathbb {N}: \exists x_1,\ldots ,x_m \in D ~{\text { with }} ~ D \subset \cup _{i=1}^m B(x_i,r) \}. \end{aligned}$$
(6.2)
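For example (our illustration): for \(D = [0,1]^2\), centres on a square grid of spacing \(r\sqrt{2}\) leave every point of the square within distance r of some centre, so \(\kappa ([0,1]^2,r) = O(r^{-2})\); this is the form of hypothesis used below with \(b=d=2\). A sketch, ignoring the minor adjustment needed to keep all centres inside D:

```python
import math

# Covering-number illustration for D = [0,1]^2: centres on a square
# grid of spacing r*sqrt(2) leave every point of the square within
# distance r of some centre (each grid cell has circumradius r), so
# kappa([0,1]^2, r) <= ceil(1/(r*sqrt(2)))^2 = O(r^{-2}).
def covering_upper_bound(r):
    k = math.ceil(1.0 / (r * math.sqrt(2.0)))
    return k * k

for r in (0.1, 0.01, 0.001):
    m = covering_upper_bound(r)
    print(r, m, m * r ** 2)  # third column stays bounded (tends to 1/2)
```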

We need a complementary upper bound to go with the preceding asymptotic lower bound on \(R_{n,k(n)}\). For this, we shall require a condition on the ‘covering number’ that is roughly dual to the condition on ‘packing number’ used in Lemma 6.2. Also, instead of stating the lemma in terms of \(R_{n,k}\) directly, it is more convenient to state it in terms of the ‘fully covered’ region \(F_{n,k,r}\), defined for \(n,k \in \mathbb {N}\) and \(r>0\) by

$$\begin{aligned} F_{n,k,r}:= \{x \in A: \mathscr {X}_n(B(x,r)) \ge k\}. \end{aligned}$$
(6.3)

We can characterise the event \(\{{\tilde{R}}_{n,k} \le r\} \) in terms of the set \(F_{n,k,r}\) as follows:

$$\begin{aligned} {\tilde{R}}_{n,k } \le r {\text { if and only if }} (B \cap A^{(r)} ) \subset F_{n,k,r}. \end{aligned}$$
(6.4)

Indeed, the ‘if’ part of this statement is clear from (2.3). For the ‘only if’ part, note that if there exists \(x \in (B \cap A^{(r)}) {\setminus } F_{n,k,r}\), then there exists \(s >r\) with \(x \in B \cap A^{(s)} {\setminus } F_{n,k,s}\). Then for all \(s' <s\) we have \(x \in B \cap A^{(s')} {\setminus } F_{n,k,s'}\) (since \(F_{n,k,s'} \subset F_{n,k,s}\)), and therefore \({\tilde{R}}_{n,k} \ge s > r\).

Lemma 6.3

(General upper bound) Suppose \(r_0, a, b \in (0,\infty )\), and a family of sets \(A_r \subset A\), defined for \(0< r < r_0\), is such that for all \(r \in (0,r_0)\), \(x \in A_r\) and \(s \in (0,r)\) we have \(\mu (B(x,s)) \ge a s^d\), and moreover \(\kappa (A_r,r) = O(r^{-b})\) as \(r \downarrow 0\).

If \(\beta = \infty \) then let \(u> 1/a\) and set \(r_n = (uk(n)/n)^{1/d}\), \( n \in \mathbb {N}\). Then with probability one, \(A_{r_n} \subset F_{n,k(n),r_n}\) for all large enough n.

If \(\beta < \infty \), let \(u>a^{-1} \hat{H}_\beta (b/d)\) and set \(r_n = (u(\log n)/n)^{1/d}\). Then there exists \(\varepsilon >0\) such that \( \mathbb {P}[\{A_{r_n} \subset F_{n,\lfloor (\beta + \varepsilon ) \log n\rfloor ,r_n} \}^c ] = O(n^{-\varepsilon }) \) as \(n \rightarrow \infty \).

Proof

Let \(\varepsilon \in (0,1)\); if \(\beta = \infty \), assume \(a (1-\varepsilon )^d u> 1+ \varepsilon \). If \(\beta < \infty \), assume

$$\begin{aligned} a(1-\varepsilon )^d uH((\beta + \varepsilon )/(a (1-\varepsilon )^d u)) > ( b/d ) + \varepsilon . \end{aligned}$$

This can be achieved because \(a uH (\beta /(a u)) > b/d\) in this case. Set \(m_n = \kappa (A_{r_n},\varepsilon r_n)\). Then \(m_n= O(r_n^{-b}) = O(n^{b/d})\) (in either case). Let \(x_{n,1},\ldots ,x_{n,m_n} \in A_{r_n}\) with \(A_{r_n} \subset \cup _{i=1}^{m_n} B(x_{n,i}, \varepsilon r_n)\). Then for \(1 \le i \le m_n\), if \(\mathscr {X}_n(B(x_{n,i}, (1-\varepsilon )r_n)) \ge k(n)\) then \(B(x_{n,i},\varepsilon r_n) \subset F_{n,k(n),r_n}\). Therefore

$$\begin{aligned} \mathbb {P}[ \{A_{r_n} \subset F_{n,k(n),r_n} \}^c ] \le \mathbb {P}[ \cup _{i=1}^{m_n} \{ \mathscr {X}_n(B(x_{n,i},(1-\varepsilon )r_n)) < k(n) \}]. \end{aligned}$$
(6.5)

Suppose \(\beta = \infty \). Then for \(1 \le i \le m_n\),

$$\begin{aligned} n \mu (B(x_{n,i},(1-\varepsilon )r_n)) \ge n a ((1-\varepsilon )r_n)^d \ge (1+ \varepsilon ) k(n), \end{aligned}$$

so that by (6.5), the union bound, and Lemma 5.1(b),

$$\begin{aligned} \mathbb {P}[ \{A_{r_n} \subset F_{n,k(n),r_n} \}^c ]\le & {} m_n \mathbb {P}[\textrm{Bin}(n, (1+\varepsilon ) k(n)/n) <k(n) ]\\\le & {} m_n \exp ( - (1+\varepsilon ) k(n) H((1+\varepsilon )^{-1}) ). \end{aligned}$$

This is summable in n, since \(m_n = O(n^{b/d})\) and \(k(n)/\log n \rightarrow \infty \). Therefore by the first Borel-Cantelli lemma, we obtain the desired conclusion for this case.

Now suppose instead that \(\beta < \infty \). Then

$$\begin{aligned} n \mu (B(x_{n,i},(1-\varepsilon )r_n)) \ge n a ((1-\varepsilon )r_n)^d \ge a (1- \varepsilon )^d u\log n . \end{aligned}$$

Therefore setting \(k'(n) := \lfloor (\beta + \varepsilon ) \log n \rfloor \), by (6.5) we have

$$\begin{aligned} \mathbb {P}[ \{A_{r_n} \subset F_{n,k'(n),r_n} \}^c ]\le & {} m_n \exp \left( -a (1-\varepsilon )^d uH \left( \frac{(\beta +\varepsilon ) }{a (1- \varepsilon )^d u} \right) \log n \right) \\\le & {} m_n n^{-(b/d) -\varepsilon }, \end{aligned}$$

which yields the desired conclusion for this case. \(\square \)

6.2 Proof of Proposition 4.9

Throughout this subsection we assume that either: (i) B is compact and Riemann measurable with \(\mu (B)>0\) and \(B \subset A^o\), or (ii) \(B=A\). We do not (yet) assume f is continuous on A. Recall from (4.2) that \(f_0: = \mathrm{ess~inf}_{x \in B} f(x)\).

We shall prove Proposition 4.9 by applying (6.4) and Lemma 6.3 to derive an upper bound on \({\tilde{R}}_{n,k(n)}\) (Lemma 6.6), and Lemma 6.2 to derive a lower bound (Lemma 6.5). For the lower bound we also require the following lemma (recall that \(\nu (B,r,a)\) was defined just before Lemma 6.2).

Lemma 6.4

Let \(\alpha > f_0\). Then \(\liminf _{r \downarrow 0} r^d \nu (B,r, \alpha \theta _d r^d) > 0\).

Proof

Let \(\kappa = 3^{-d} \theta _d\). Note that \(0< \kappa <1\). Set

$$\begin{aligned} \alpha ' := \alpha \kappa /4 + f_0(1- \kappa /4). \end{aligned}$$

By the Riemann measurability assumption, \(\mathrm{ess~inf}_{B^{(\varepsilon )}} (f) \downarrow f_0\) as \(\varepsilon \downarrow 0\). Therefore we can and do choose \(\delta >0\) with \(\mu (B^{(\delta )} ) >0\) and with \(\mathrm{ess~inf}_{B^{(\delta )}}(f) < \alpha '\).

Note that \(f_0< \alpha ' < \alpha \). Let \(x_0 \in B^{(\delta )}\) with \(f(x_0) < \alpha '\) and \(x_0\) a Lebesgue point of f. Then take \(r_0 \in (0,\delta )\) such that \(\mu (B(x_0,r_0)) < \alpha ' \theta _d r_0^d\).

For \(r>0\) sufficiently small, we can and do take a collection of disjoint balls \(B(x_{r,1},r),\ldots ,B(x_{r, \sigma '(r)},r)\), all contained in \(B(x_0,r_0)\), with \(\sigma '(r) > (\kappa /2) (r_0/r)^d\). Indeed, we can take \((1+ o(1))\theta _d r_0^d (3r)^{-d}\) [as \(r \downarrow 0\)] disjoint cubes of side 3r inside \(B(x_0,r_0)\) and can take a closed ball of radius r inside the interior of each of these cubes.

Given small r, write \(B_i\) for \(B(x_{r,i},r)\) and \(B_0\) for \(B(x_0,r_0)\). Suppose fewer than half of the balls \(B_i, 1 \le i \le \sigma '(r)\) satisfy \(\mu (B_i) \le \alpha \theta _d r^d\). Then more than half of them satisfy \(\mu (B_i) > \alpha \theta _d r^d\). Let D denote the union of the latter collection of balls. Denoting volume by \(|\cdot |\), we have \(\mu (D) \ge \alpha |D|\) and \(\mu (B_0 {\setminus } D) \ge f_0 |B_0 {\setminus } D|\), and

$$\begin{aligned} |D| \ge (1/2) \sigma '(r) \theta _d r^d \ge (\kappa /4) \theta _d r_0^d = (\kappa /4) |B_0|. \end{aligned}$$

Then

$$\begin{aligned} \frac{\mu (B_0)}{|B_0|} \ge \alpha \frac{|D|}{|B_0|} + f_0 ( 1- \frac{|D|}{|B_0|}) \ge \alpha ', \end{aligned}$$

which contradicts our original assumption about \(r_0\). Therefore at least half of the balls \(B_i, 1 \le i \le \sigma '(r)\) satisfy \(\mu (B_i) \le \alpha \theta _d r^d\). Thus \(\nu (B,r,\alpha \theta _dr^d) \ge \sigma '(r)/2\). \(\square \)
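For completeness, the final inequality in the last display of the above proof can be unpacked as follows: writing \(t := |D|/|B_0|\), we have \(t \ge \kappa /4\), and since \(\alpha > f_0\) the map \(t \mapsto \alpha t + f_0(1-t)\) is nondecreasing, so

$$\begin{aligned} \alpha t + f_0 (1-t) = f_0 + (\alpha - f_0) t \ge f_0 + (\alpha - f_0) \kappa /4 = \alpha \kappa /4 + f_0 (1- \kappa /4) = \alpha ', \end{aligned}$$

by the definition of \(\alpha '\).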

Lemma 6.5

It is the case that

$$\begin{aligned}{} & {} \mathbb {P}[ \liminf (n \theta _d {\tilde{R}}_{n,k(n)}^d/k(n) ) \ge 1/f_0]=1 ~~~~~{\text { if }} ~\beta = \infty ; \end{aligned}$$
(6.6)
$$\begin{aligned}{} & {} \mathbb {P}[ \liminf ( n \theta _d {\tilde{R}}_{n,k(n)}^d /\log n) \ge \hat{H}_\beta (1)/f_0 ] =1 ~~~~~{\text { if }} ~\beta < \infty . \end{aligned}$$
(6.7)

Proof

Let \(\alpha '> \alpha > f_0\). Set \(r_n := (k(n)/(n \theta _d \alpha '))^{1/d}\) if \(\beta = \infty \), and set \(r_n:= (\hat{H}_\beta (1) (\log n)/(n \theta _d \alpha '))^{1/d}\) if \(\beta < \infty \).

Assume for now that B is compact with \(B \subset A^o\), so that there exists \(\delta >0\) such that \(B \subset A^{(\delta )}\). Then, even if f is not continuous on A, we can find \(x_0 \in B\) with \(f(x_0) < \alpha \), such that \(x_0\) is a Lebesgue point of f. Then for all small enough \(r>0\) we have \(\mu (B(x_0,r)) < \alpha \theta _d r^d\), so that \(\nu (B,r,\alpha \theta _d r^d) = \varOmega (1)\) as \(r \downarrow 0\).

If \(\beta = \infty \), then by Lemma 6.2 (taking \(b=0\)), \(\liminf _{n \rightarrow \infty } nR_{n,k(n)}^d/k(n) \ge (\theta _d \alpha )^{-1}\), almost surely. Hence for all large enough n we have \(R_{n,k(n)} > r_n\); provided n is also large enough so that \(r_n < \delta \) we also have \({\tilde{R}}_{n,k(n)} > r_n\), and (6.6) follows.
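The comparison with \(r_n\) in the last step is simply the observation that, by the choice of \(r_n\) and since \(\alpha < \alpha '\),

$$\begin{aligned} \frac{n r_n^d}{k(n)} = \frac{1}{\theta _d \alpha '} < \frac{1}{\theta _d \alpha } \le \liminf _{n \rightarrow \infty } \left( \frac{n R_{n,k(n)}^d}{k(n)} \right) ~~~ {\text {almost surely}}, \end{aligned}$$

so that almost surely \(R_{n,k(n)} > r_n\) for all large enough n.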

Suppose instead that \(\beta < \infty \). Then by Lemma 6.4, \(\nu (B,r, \alpha \theta _d r^{d}) = \varOmega (r^{-d})\) as \(r \downarrow 0\). Hence by Lemma 6.2, almost surely \(\liminf _{n \rightarrow \infty } \left( n R_{n,k(n)}^d /\log n \right) \ge (\alpha \theta _d)^{-1} \hat{H}_\beta (1)\), and hence for large enough n we have \(R_{n,k(n)} > r_n \) and also \({\tilde{R}}_{n,k(n)} > r_n \), which yields (6.7).

Finally, suppose instead that \(B=A\). Then by using e.g. [21, Lemma 11.12] we can find compact, Riemann measurable \(B' \subset A^o\) with \(\mu (B') >0\) and \(\mathrm{ess~inf}_{x \in B'} f(x) < \alpha \). Define \(S_{n,k}\) to be the smallest \(r \ge 0\) such that every point in \(B'\) is covered at least k times by balls of radius r centred on points of \(\mathscr {X}_n\). By the argument already given we have almost surely for all large enough n that \(S_{n,k(n)} > r_n \) and also \(B' \subset A^{(r_n)}\). For such n, there exists \(x \in B' \subset B \cap A^{(r_n)}\) with \(\mathscr {X}_n(B(x,r_n)) < k(n)\), and hence by (6.4), \({\tilde{R}}_{n,k(n)} > r_n\), which gives us (6.6) and (6.7) in this case too. \(\square \)

Now and for the rest of this subsection, we do assume in case (i) (with \(B \subset A^o\)) that f is continuous on A.

Lemma 6.6

Suppose that \(f_0 >0\). Then almost surely

$$\begin{aligned}{} & {} \limsup ( n \theta _d {\tilde{R}}_{n,k(n)}^d /k(n)) \le 1/f_0, ~~~~~ {\text { if }}\beta = \infty ; \end{aligned}$$
(6.8)
$$\begin{aligned}{} & {} \limsup ( n \theta _d {\tilde{R}}_{n,k(n)}^d /\log n) \le \hat{H}_\beta (1)/f_0 , ~~~~~ {\text { if }}\beta < \infty . \end{aligned}$$
(6.9)

Proof

We shall apply Lemma 6.3, here taking \(A_r = B \cap A^{(r)}\). To start, we claim that

$$\begin{aligned} \liminf _{r \downarrow 0} \inf _{x \in B \cap A^{(r)} , s \in (0,r)} \left( \frac{\mu (B(x,s))}{\theta _d s^d } \right) \ge f_0. \end{aligned}$$
(6.10)

This follows from the definition (4.2) of \(f_0\) when \(B=A\). In the other case (with \(B \subset A^o\)) it follows from (4.2) and the assumed continuity of f on A.

Suppose \(f_0 < \infty \) and let \(\delta \in (0,f_0)\). It is easy to see that \(\kappa (B \cap A^{(r)},r) = O(r^{-d})\) as \(r \downarrow 0\). Together with (6.10), this shows that the hypotheses of Lemma 6.3 (taking \(A_r = B \cap A^{(r)}\)) apply with \(a= \theta _d(f_0- \delta )\) and \(b= d\). Hence by (6.4) and Lemma 6.3, if \(\beta = \infty \) then for any \( u > (\theta _d(f_0-\delta ))^{-1}\), we have almost surely for large enough n that \({\tilde{R}}_{n,k(n)} \le (u k(n)/n)^{1/d}\), and (6.8) follows.

If \(\beta < \infty \), then by (6.4) and Lemma 6.3, given \(u > \hat{H}_\beta (1)/ (\theta _d (f_0-\delta )) \), there exists \(\varepsilon >0\) such that, setting \(k'(n):= \lfloor (\beta + \varepsilon ) \log n \rfloor \) and \(r_n:= (u (\log n)/n)^{1/d}\), we have

$$\begin{aligned} \mathbb {P}[n {\tilde{R}}_{n, k'(n)}^d> u \log n] = \mathbb {P}[ {\tilde{R}}_{n,k'(n)} > r_n ] = \mathbb {P}[\{B \cap A^{(r_n)} \subset F_{n, k'(n),r_n}\}^c] = O( n^{-\varepsilon }). \end{aligned}$$

Therefore by Lemma 6.1, which is applicable since \({\tilde{R}}_{n,k}^d/u\) is nonincreasing in n and nondecreasing in k, we obtain that \( \limsup ( n {\tilde{R}}_{n,k(n)}^d /\log n) \le u\), almost surely. Since \(u > \hat{H}_\beta (1)/(\theta _d ( f_0 - \delta ))\) and \(\delta \in (0,f_0)\) are arbitrary, we therefore obtain (6.9). \(\square \)

Proof of Proposition 4.9

Under either hypothesis ((i) or (ii)), it is immediate from Lemmas 6.5 and 6.6 that (4.11) holds if \(\beta = \infty \) and (4.12) holds if \(\beta < \infty \).

It follows that almost surely \({\tilde{R}}_{n,k(n)} \rightarrow 0\) as \(n \rightarrow \infty \), and therefore if we are in Case (i) (with \(B \subset A^o\)) we have \({\tilde{R}}_{n,k(n)} = R_{n,k(n)}\) for all large enough n. Therefore in this case (4.11) (if \(\beta =\infty \)) or (4.12) (if \(\beta < \infty )\) still holds with \({\tilde{R}}_{n,k(n)}\) replaced by \(R_{n,k(n)}\). \(\square \)

6.3 Proof of Theorem 4.1

In this section and again later on, we shall use certain results from [21], which rely on an alternative characterization of A having a \(C^2\) boundary, given in the next lemma. Recall that \(S \subset \mathbb {R}^d\) is called a \((d-1)\)-dimensional \(C^2\) submanifold of \(\mathbb {R}^d\) if there exists a collection of pairs \((U_i,\phi _i)\), where \(\{U_i\}\) is a collection of open sets in \(\mathbb {R}^d\) whose union contains S, and each \(\phi _i\) is a \(C^2\) diffeomorphism of \(U_i\) onto an open set in \(\mathbb {R}^d\) with the property that \(\phi _i(U_i \cap S) = \phi _i(U_i) \cap (\mathbb {R}^{d-1} \times \{0\})\). The pairs \((U_i,\phi _i)\) are called charts. We shall sometimes also refer to the sets \(U_i\) as charts here.


Lemma 6.7

Suppose \(A \subset \mathbb {R}^d\) has a \(C^2\) boundary. Then \(\partial A\) is a \((d-1)\)-dimensional \(C^2\) submanifold of \(\mathbb {R}^d\).

Proof

Let \(x \in \partial A\). Let U be an open neighbourhood of x, \(V \subset \mathbb {R}^{d-1} \) an open set, and \(\rho \) a rotation on \(\mathbb {R}^d\) about x such that \( \rho ( \partial A \cap U) = \{(w,f(w)): w \in V\}\), and moreover \(\rho (U) \subset V \times \mathbb {R}\). Then for \((w,z) \in \rho (U)\) (with \(w \in V \) and \(z \in \mathbb {R}\)), take \(\psi (w,z) = (w,z-f(w))\). Then \(\psi \circ \rho \) is a \(C^2\) diffeomorphism from U to \(\psi \circ \rho (U)\), with the property that \(\psi \circ \rho (U \cap \partial A) = \psi \circ \rho (U) \cap (\mathbb {R}^{d-1} \times \{0\})\), as required. \(\square \)

Remark 6.8

The converse to Lemma 6.7 also holds: if \(\partial A\) is a \((d-1)\)-dimensional \(C^2\) submanifold of \(\mathbb {R}^d\) then A has a \(C^2\) boundary in the sense that we have defined it. The proof of this implication is more involved, and not needed in the sequel, so we omit the argument.

We shall use the following lemma here and again later on.

Lemma 6.9

Suppose \(A \subset \mathbb {R}^d\) is compact, and has \(C^2\) boundary. Given \(\varepsilon >0\), there exists \(\delta >0\) such that for all \(x \in A\) and \(s \in (0,\delta )\), we have \(|B(x,s) \cap A| > (1- \varepsilon ) \theta _d s^d/2\).

Proof

Immediate from applying first Lemma 6.7, and then [21, Lemma 5.7]. \(\square \)

Recall that \(F_{n,k,r}\) was defined at (6.3). We introduce a new variable \(R_{n,k,1}\), which is the smallest radius r of balls required to cover k times the boundary region \(A {\setminus } A^{(r)} \):

$$\begin{aligned} R_{n,k,1} := \inf \{r >0 : A {\setminus } A^{(r)} \subset F_{n,k,r} \}, ~~~~n,k \in \mathbb {N}. \end{aligned}$$
(6.11)

Loosely speaking, the 1 in the subscript refers to the fact that this boundary region is in some sense \((d-1)\)-dimensional. For all \(n, k \in \mathbb {N}\), we claim that

$$\begin{aligned} R_{n,k} = \max ( {\tilde{R}}_{n,k},R_{n,k,1}), ~~~~{\text { if }}~ B=A. \end{aligned}$$
(6.12)

Indeed, if \(r > R_{n,k}\) then \(A \subset F_{n,k,r}\) so that \(A^{(r)} \subset F_{n,k,r}\) and \(A {\setminus } A^{(r)} \subset F_{n,k,r}\), and hence \(r \ge \max ({\tilde{R}}_{n,k},R_{n,k,1})\); hence, \(R_{n,k} \ge \max ({\tilde{R}}_{n,k},R_{n,k,1})\). For an inequality the other way, suppose \(r > \max ({\tilde{R}}_{n,k},R_{n,k,1})\); then by (2.3), there exists \(r' <r\) with \(A^{(r')} \subset F_{n,k,r'}\), and hence also \(A^{(r)} \subset F_{n,k,r}\). Moreover by (6.11) there exists \(s < r\) with \(A {\setminus } A^{(s)} \subset F_{n,k,s}\). Now suppose \(x \in A^{(s)} {\setminus } A^{(r)}\). Let \(z \in \partial A\) with \(\Vert z-x\Vert = \,\textrm{dist}(x,\partial A) \in (s,r]\). Let \(y \in [x,z]\) with \(\Vert y-z\Vert = s\). Then \(y \in A {\setminus } A^{(s)}\), so that \(y \in F_{n,k,s}\), and also \(\Vert x-y\Vert \le r-s\). This implies that \(x \in F_{n,k,r}\). Therefore \(A^{(s)} {\setminus } A^{(r)} \subset F_{n,k,r}\). Combined with the earlier set inclusions this shows that \(A \subset F_{n,k,r}\) and hence \(R_{n,k} \le r\). Thus \(R_{n,k} \le \max ({\tilde{R}}_{n,k},R_{n,k,1})\), and hence (6.12) as claimed.
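The assertion above that \(x \in F_{n,k,r}\) is a triangle inequality step worth making explicit: if \(X \in \mathscr {X}_n\) is any of the (at least k) points with \(\Vert X - y \Vert \le s\), then

$$\begin{aligned} \Vert X - x \Vert \le \Vert X - y \Vert + \Vert y - x \Vert \le s + (r-s) = r, \end{aligned}$$

so every centre covering y at radius s also covers x at radius r, whence \(\mathscr {X}_n(B(x,r)) \ge k\).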

Recall that we are assuming that \(k(n)/\log n \rightarrow \beta \in [0,\infty ]\) and \(k(n) /n \rightarrow 0\), as \(n \rightarrow \infty \), and \(f_1 := \inf _{\partial A} f\). We shall derive Theorem 4.1 using the next two lemmas.

Lemma 6.10

Suppose the assumptions of Theorem 4.1 apply. Then

$$\begin{aligned}{} & {} \mathbb {P}[\liminf _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n)}^d /k( n) \right) \ge 2/ f_1 ] =1, ~~~~{\text { if }}~ \beta = \infty ; \end{aligned}$$
(6.13)
$$\begin{aligned}{} & {} \mathbb {P}[\liminf _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n)}^d /\log n \right) \ge 2 \hat{H}_\beta (1-1/d)/ f_1 ] =1, ~~~~~ {\text { if }} ~ \beta < \infty . \qquad \end{aligned}$$
(6.14)

Proof

Let \(\varepsilon >0\). By [21, Lemma 5.8], for each \(r >0 \) we can and do take \(\ell _r \in \mathbb {N}\cup \{0\}\) and points \(y_{r,1},\ldots ,y_{r,\ell _r} \in \partial A\) such that the balls \(B(y_{r,i},r), 1 \le i \le \ell _r\), are disjoint, each satisfying \(\mu (B(y_{r,i}, r)) \le (f_1+\varepsilon ) \theta _d r^d/2\), and with \(\liminf _{r \downarrow 0} (r^{d-1} \ell _r) >0\). In other words, \( \liminf _{r \downarrow 0} r^{d-1} \nu (B,r,(f_1+\varepsilon )\theta _d r^d/2) >0\).

Hence, if \(\beta = \infty \) then by Lemma 6.2 we have that \(\liminf _{n \rightarrow \infty } \left( n R_{n,k(n)}^d/k(n) \right) \ge 2/(\theta _d (f_1+ \varepsilon ))\), almost surely, and this yields (6.13).

Now suppose \(\beta < \infty \); recall that we are assuming \(d \ge 2\). By taking \(a= (f_1+ \varepsilon ) \theta _d/2\) in Lemma 6.2, we obtain that, almost surely,

$$\begin{aligned} \liminf _{n \rightarrow \infty } \left( nR_{n,k(n)}^d / \log n \right) \ge \left( 2/((f_1+ \varepsilon ) \theta _d) \right) \hat{H}_\beta \left( 1 - 1/d \right) , \end{aligned}$$

and hence (6.14). \(\square \)

Lemma 6.11

Under the assumptions of Theorem 4.1,

$$\begin{aligned}{} & {} \mathbb {P}[\limsup _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n),1}^d /k( n) \right) \le 2/ f_1 ] =1, ~~~~{\text { if }} \beta = \infty ; \end{aligned}$$
(6.15)
$$\begin{aligned}{} & {} \mathbb {P}[\limsup _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n),1}^d /\log n \right) \le 2 \hat{H}_\beta (1- 1/d) / f_1 ] =1, ~~~~~ {\text { if }} ~ \beta < \infty . \qquad \qquad \end{aligned}$$
(6.16)

Proof

We shall apply Lemma 6.3, here taking \(A_r = A {\setminus } A^{(r)}\). Observe that by (6.11), the event \(\{ A {\setminus } A^{(r)} \subset F_{n,k,r} \}\) implies \(R_{n,k,1} \le r\), for all \(r>0\) and \(n, k \in \mathbb {N}\).

We claim that

$$\begin{aligned} \kappa (A {\setminus } A^{(r)},r) = O(r^{1-d}) ~~{\text { as }} ~~ r \downarrow 0. \end{aligned}$$
(6.17)

To see this, let \(r>0\), and let \(x_1,\ldots ,x_m \in \partial A\) with \(\partial A \subset \cup _{i=1}^m B(x_i,r)\), and with \(m = \kappa (\partial A,r)\). Then \(A {\setminus } A^{(r)} \subset \cup _{i=1}^{m} B(x_{i}, 2r)\). Setting \(c := \kappa (B(o,4),1)\), we can cover each ball \(B(x_i,2r)\) by c balls of radius r/2, and therefore can cover \(A {\setminus } A^{(r)}\) by cm balls of radius r/2, denoted \(B_1,\ldots , B_{cm}\) say. Set \( {{\mathscr {I}}}:= \{i \in \{1,\ldots ,cm\}: (A{\setminus } A^{(r)}) \cap B_i \ne \varnothing \}\). For each \(i \in {{\mathscr {I}}}\), select a point \(y_i \in (A {\setminus } A^{(r)}) \cap B_i\). Then \(A {\setminus } A^{(r)} \subset \cup _{i \in {{\mathscr {I}}}} B(y_i,r) \), and hence \(\kappa (A {\setminus } A^{(r)}, r) \le c \kappa ( \partial A,r)\). By [21, Lemma 5.4], \(\kappa (\partial A,r) = O(r^{1-d})\), and (6.17) follows.
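The replacement of the balls \(B_i\) by balls centred in \(A {\setminus } A^{(r)}\) works because each \(B_i\) has radius r/2: for any \(z \in B_i\),

$$\begin{aligned} \Vert z - y_i \Vert \le \textrm{diam}(B_i) = r, \end{aligned}$$

so that \(B_i \subset B(y_i,r)\).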

Let \(\varepsilon _1 \in (0,1)\). Since we assume \(f|_A\) is continuous at x for all \(x \in \partial A\), there exists \(\delta >0\) such that \(f(x) > (1 - \varepsilon _1)f_1\) for all \(x \in A\) distant less than \(\delta \) from \(\partial A\). Then, under the hypotheses of Theorem 4.1, by Lemma 6.9, there is a further constant \(\delta ' \in (0,\delta /2)\) such that for all \(r \in (0, \delta ')\) and all \(x \in A {\setminus } A^{(r)}\), \(s \in (0,r]\) we have \( \mu (B(x,s)) \ge (1- \varepsilon _1 )^{2} f_1 (\theta _d/2) s^d. \)
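To spell out how the two estimates combine (taking \(\varepsilon = \varepsilon _1\) in Lemma 6.9): every \(y \in B(x,s) \cap A\) satisfies \(\,\textrm{dist}(y,\partial A) \le r + s \le 2r < \delta \), so \(f(y) > (1- \varepsilon _1) f_1\), and hence

$$\begin{aligned} \mu (B(x,s)) = \int _{B(x,s) \cap A} f(y) \, \textrm{d}y \ge (1- \varepsilon _1) f_1 |B(x,s) \cap A| \ge (1- \varepsilon _1)^2 f_1 (\theta _d/2) s^d . \end{aligned}$$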

Therefore taking \(A_r = A {\setminus } A^{(r)}\), the hypotheses of Lemma 6.3 hold with \(a = (1-\varepsilon _1)^2 f_1 \theta _d/2\) and \(b= d-1\).

Thus if \(\beta = \infty \), then taking \(r_n = (u k(n)/n)^{1/d}\) with \(u > 2 (1-\varepsilon _1)^{-2}/(f_1 \theta _d)\), by Lemma 6.3 we have almost surely that for all n large enough, \(A {\setminus } A^{(r_n)} \subset F_{n,k(n),r_n}\) and hence \(R_{n,k(n),1} \le r_n\). That is, \(n R_{n,k(n),1}^d/k(n) \le u\), and (6.15) follows.

If \(\beta < \infty \), take \(u > (2 (1-\varepsilon _1)^{-2}/(f_1 \theta _d)) \hat{H}_\beta (1-1/d)\). By Lemma 6.3, if we set \(r_n= (u (\log n)/n)^{1/d}\), then there exists \(\varepsilon >0\) such that if we take \(k'(n)= \lfloor (\beta + \varepsilon ) \log n \rfloor \), then

$$\begin{aligned} \mathbb {P}[n R_{n,k'(n),1}^d> u \log n] = \mathbb {P}[ R_{n,k'(n),1} > r_n] \le \mathbb {P}[ \{ (A {\setminus } A^{(r_n)} ) \subset F_{n,k'(n),r_n} \}^c] = O(n^{-\varepsilon }). \end{aligned}$$

Also \(R_{n,k,1}\) is nonincreasing in n and nondecreasing in k, so by Lemma 6.1, we have almost surely that \(\limsup _{n \rightarrow \infty } n R^d_{n,k(n),1} /\log n \le u\), and hence (6.16). \(\square \)

Proof of Theorem 4.1

By (6.12), Proposition 4.9 and Lemma 6.11,

$$\begin{aligned}{} & {} \limsup _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n)}^d/k(n) \right) \le \max (1/f_0, 2/f_1) ~~~~~{\text { if }}~ \beta = \infty ; \\{} & {} \limsup _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n)}^d/\log n \right) \le \max (\hat{H}_\beta (1)/f_0, 2 \hat{H}_\beta (1-1/d)/f_1) ~~~~~{\text { if }}~ \beta < \infty , \end{aligned}$$

almost surely. Moreover by (6.12), Proposition 4.9 and Lemma 6.10,

$$\begin{aligned}{} & {} \liminf _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n)}^d/k(n) \right) \ge \max (1/f_0, 2/f_1) ~~~~~{\text { if }}~ \beta = \infty ; \\{} & {} \liminf _{n \rightarrow \infty } \left( n \theta _d R_{n,k(n)}^d/\log n \right) \ge \max (\hat{H}_\beta (1)/f_0, 2 \hat{H}_\beta (1-1/d)/f_1) ~~~~~{\text { if }}~ \beta < \infty , \end{aligned}$$

almost surely, and the result follows. \(\square \)

6.4 Polytopes: Proof of Theorem 4.2

Throughout this subsection we assume, as in Theorem 4.2, that A is a compact convex finite polytope in \(\mathbb {R}^d\). We also assume that \(B=A\), and \(f|_A\) is continuous at x for all \(x \in \partial A\), and (4.1) holds for some \(\beta \in [0,\infty ]\).

Given any \(x \in \mathbb {R}^d\) and nonempty \(S \subset \mathbb {R}^d\) we set \(\,\textrm{dist}(x,S):= \inf _{y \in S} \Vert x-y\Vert \).

Lemma 6.12

Suppose \(\varphi , \varphi '\) are faces of A with \(D(\varphi )>0\) and \(D(\varphi ') = d-1\), and with \(\varphi {\setminus } \varphi ' \ne \varnothing \). Then \(\varphi ^o \cap \varphi ' = \varnothing \), and \(K(\varphi ,\varphi ') < \infty \), where we set

$$\begin{aligned} K(\varphi ,\varphi ') := \sup _{x \in \varphi ^o} \left( \frac{\,\textrm{dist}(x,\partial \varphi )}{\,\textrm{dist}(x,\varphi ')}\right) . \end{aligned}$$
(6.18)

Proof

If \(\varphi \cap \varphi ' = \varnothing \) then \(K(\varphi ,\varphi ') < \infty \) by an easy compactness argument, so assume \(\varphi \cap \varphi ' \ne \varnothing \). Without loss of generality we may then assume \(o \in \varphi \cap \varphi '\).

If \(d=3\), A is convex and \(D(\varphi )= D(\varphi ')=2\), \(D(\varphi \cap \varphi ') = 1\) and moreover \(\varphi , \varphi '\) are rectangular with angle \(\alpha \) between them and \(0< \alpha < \pi \), then \(K(\varphi ,\varphi ') = \sec \alpha \), as illustrated in Fig. 1. However to generalize to all d takes some care.

Fig. 1

Illustration of Lemma 6.12 in \(d=3\) with \(D(\varphi ) = D(\varphi ')= 2\). The dot represents a point in \(\varphi ^o\)

Let \(\langle \varphi \rangle \), respectively \(\langle \varphi ' \rangle \), be the linear subspace of \(\mathbb {R}^d\) generated by \(\varphi \), respectively by \(\varphi '\). Set \(\psi := \langle \varphi \rangle \cap \langle \varphi ' \rangle \).

Since we assume A is convex, \(A \cap \langle \varphi ' \rangle = \varphi '\), and \(\langle \varphi ' \rangle \) is a supporting hyperplane of A.

We claim that \(\varphi \cap \langle \varphi ' \rangle \subset \partial \varphi \). Indeed, let \(z \in \varphi \cap \langle \varphi '\rangle \) and \(y \in \varphi {\setminus } \varphi ' \). Then \(y \in \varphi {\setminus } \langle \varphi ' \rangle \), and for all \(\varepsilon >0\) the vector \(y + (1+ \varepsilon ) (z- y) \) lies in the affine hull of \(\varphi \) but not in A, since it is on the wrong side of the supporting hyperplane \(\langle \varphi ' \rangle \), and therefore not in \(\varphi \). This shows that \(z \in \partial \varphi \), and hence the claim.

Since \(\varphi \cap \psi \subset \varphi \cap \langle \varphi ' \rangle \), it follows from the preceding claim that \(\varphi \cap \psi \subset \partial \varphi \).

Now let \(x \in \varphi ^o\). Then \(x \in \langle \varphi \rangle {\setminus } \psi \). Let \(\pi _\psi (x)\) denote the point in \(\psi \) closest to x, and set \(a:= \Vert x - \pi _\psi (x) \Vert = \,\textrm{dist}(x, \psi )\). Then \(a >0\).

Set \(w:= a^{-1}(x - \pi _\psi (x))\). Then \(\Vert w \Vert =1,\) \( w \in \langle \varphi \rangle \) and \(w \perp \psi \) (i.e., the Euclidean inner product of w and z is zero for all \(z \in \psi \)), so \(\,\textrm{dist}(w,\langle \varphi ' \rangle ) \ge \delta \), where we set

$$\begin{aligned} \delta := \inf \{ \,\textrm{dist}(y,\langle \varphi ' \rangle ): y \in \langle \varphi \rangle , \Vert y \Vert =1, y \perp \psi \}. \end{aligned}$$

If \(y \in \langle \varphi \rangle {\setminus } \psi \) then \(y \notin \langle \varphi ' \rangle \) so \(\,\textrm{dist}( y, \langle \varphi ' \rangle ) >0\). Therefore \(\delta \) is the infimum of a continuous, strictly positive function defined on a non-empty compact set of vectors y, and hence \(0< \delta < \infty \). Thus for \(x \in \varphi ^o\), with \(w, a\) as given above, we have

$$\begin{aligned} \,\textrm{dist}(x, \varphi ')\ge & {} \,\textrm{dist}(x,\langle \varphi ' \rangle ) = \,\textrm{dist}(\pi _\psi (x) + a w, \langle \varphi ' \rangle ) \nonumber \\= & {} \,\textrm{dist}(aw, \langle \varphi ' \rangle ) \nonumber \\\ge & {} \delta a = \delta \,\textrm{dist}(x,\psi ). \end{aligned}$$
(6.19)

If \(\pi _\psi (x) \notin \varphi \), then there is a point in \([x, \pi _\psi (x)] \cap \partial \varphi \), while if \(\pi _\psi (x) \in \varphi \), then \(\pi _\psi (x) \in \partial \varphi \). Either way \(\,\textrm{dist}(x,\psi ) \ge \,\textrm{dist}(x, \partial \varphi )\), and hence by (6.19), \(\,\textrm{dist}(x,\varphi ') \ge \delta \,\textrm{dist}(x, \partial \varphi )\). Therefore \(K(\varphi ,\varphi ')\le \delta ^{-1} < \infty \) as required. \(\square \)

Recall that we are assuming (4.1). Also, recall that for each face \(\varphi \) of A we denote the angular volume of A at \(\varphi \) by \(\rho _{\varphi }\), and set \(f_\varphi := \inf _{\varphi } f(\cdot )\).

Lemma 6.13

Let \(\varphi \) be a face of A. Then, almost surely:

$$\begin{aligned}{} & {} \liminf _{n \rightarrow \infty } \left( n R^d_{n,k(n)}/ k(n) \right) \ge ( \rho _\varphi f_\varphi )^{-1} ~~~ {\text { if }} ~ \beta = \infty ; \end{aligned}$$
(6.20)
$$\begin{aligned}{} & {} \liminf _{n \rightarrow \infty } \left( n R^d_{n,k(n)}/ \log n \right) \ge (\rho _\varphi f_{\varphi } )^{-1} \hat{H}_\beta (D(\varphi )/d) ~~~ {\text { if }} ~ \beta < \infty . \end{aligned}$$
(6.21)

Proof

Let \(a > f_{\varphi }\). Take \(x_0 \in \varphi \) such that \(f (x_0) <a\). If \(D(\varphi ) >0\), assume also that \(x_0 \in \varphi ^o\). By the assumed continuity of f at \(x_0\), for all small enough \(r >0\) we have \(\mu (B(x_0,r)) \le a \rho _\varphi r^d\), so that \(\nu (B,r,a \rho _\varphi r^d) = \varOmega (1)\) as \(r \downarrow 0\). Hence by Lemma 6.2 (taking \(b=0\)), if \(\beta = \infty \) then almost surely \(\liminf _{n \rightarrow \infty } n R_{n,k(n)}^d/k(n) \ge 1/(a \rho _\varphi )\), and (6.20) follows. Also, if \(\beta < \infty \) and \(D(\varphi ) =0\), then by Lemma 6.2 (again with \(b=0\)), almost surely \(\liminf _{n \rightarrow \infty } (n R_{n,k(n)}^d/\log n) \ge \hat{H}_\beta (0)/(a \rho _\varphi )\), and hence (6.21) in this case.

Now suppose \(\beta < \infty \) and \(D(\varphi )>0\). Take \(\delta >0 \) such that \(f(x) < a\) for all \(x \in B(x_0, 2 \delta ) \cap A\), and such that moreover \(B(x_0,2 \delta ) \cap A = B(x_0,2 \delta ) \cap (x_0+ {{\mathscr {K}}}_\varphi )\) (the cone \({{\mathscr {K}}}_\varphi \) was defined in Sect. 4). Then for all \(x \in B(x_0,\delta ) \cap \varphi \) and all \(r \in (0,\delta )\), we have \(\mu (B(x,r)) \le a \rho _\varphi r^d\).

There is a constant \(c >0\) such that for small enough \(r >0\) we can find at least \(cr^{-D(\varphi )}\) points \(x_i \in B(x_0,\delta ) \cap \varphi \) that are all at a distance more than 2r from each other, and therefore \(\nu (B, r,a \rho _\varphi r^d ) = \varOmega ( r^{-D(\varphi )})\) as \(r \downarrow 0 \). Thus by Lemma 6.2 we have

$$\begin{aligned} \liminf _{n \rightarrow \infty } \left( nR_{n,k(n)}^d/k(n) \right) \ge ( a \rho _\varphi )^{-1} \hat{H}_{\beta }(D(\varphi )/d), \end{aligned}$$

almost surely, and (6.21) follows. \(\square \)

We now define a sequence of positive constants \(K_1,K_2,\ldots \) depending on A as follows. With \(K(\varphi ,\varphi ') \) defined at (6.18), set

$$\begin{aligned} K_A : = \max \{ K(\varphi ,\varphi '): \varphi , \varphi ' \in \varPhi (A), D(\varphi ') = d-1, \varphi {\setminus } \varphi ' \ne \varnothing \}, \end{aligned}$$
(6.22)

which is finite by Lemma 6.12. Then for \(j=1,2,\ldots \) set \(K_j := j(K_A+1)^{j-1}\). Then \(K_1=1\) and for each \(j \ge 1 \) we have \(K_{j+1} \ge (K_A+1)(K_j+1)\).
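The inequality \(K_{j+1} \ge (K_A+1)(K_j+1)\) can be checked directly from the definition: for \(j \ge 1\),

$$\begin{aligned} K_{j+1} = (j+1)(K_A+1)^{j} = (K_A+1) K_j + (K_A+1)^{j} \ge (K_A+1) K_j + (K_A+1) = (K_A+1)(K_j+1), \end{aligned}$$

using \((K_A+1)^j \ge K_A+1\), which holds since \(K_A \ge 0\).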

For each face \(\varphi \) of A and each \(r >0\), define the sets \(\varphi _{r} : = \cup _{x \in \varphi } B(x,r) \cap A\), and also \((\partial \varphi )_{r} := \cup _{x \in \partial \varphi } B(x,r) \cap A\) (so if \(D(\varphi )=0\) then \((\partial \varphi )_{r} = \partial \varphi = \varnothing \)). Define also, for each \(n,k \in \mathbb {N}\) and \(r>0\), the event \(G_{n,k,r,\varphi }\) as follows:

If \(D(\varphi ) = d-j\) with \(1 \le j \le d\), let \(G_{n,k,r,\varphi } := \{ (\varphi _{K_j r} {\setminus } (\partial \varphi )_{K_{j+1} r}) {\setminus } F_{n,k,r} \ne \varnothing \}\), the event that there exists \(x \in \varphi _{K_j r} {\setminus } (\partial \varphi )_{K_{j+1} r}\) such that \(\mathscr {X}_n ( B(x,r)) < k\).

Let \(R_{n,k,1}\) be the smallest radius r of balls required to cover k times the boundary region \(A {\setminus } A^{(r)} \), as defined at (6.11).

Lemma 6.14

Given \(r>0\) and \(n, k \in \mathbb {N}\), \( \{R_{n,k,1} > r\} \subset \cup _{\varphi \in \varPhi (A)} G_{n,k,r,\varphi }. \)

Proof

Suppose \(R_{n,k,1} > r\). Then we can and do choose a point \(x \in (A {\setminus } A^{(r)} ) {\setminus } F_{n,k,r}\), and a face \(\varphi _1 \in \varPhi (A) \) with \(D(\varphi _1) = d-1\), and with \(x \in (\varphi _1)_{r} = (\varphi _1)_{K_1 r}\).

If \(x \notin (\partial \varphi _1)_{K_2 r}\) then \(G_{n,k,r,\varphi _1}\) occurs. Otherwise, we can and do choose \(\varphi _2 \in \varPhi (A)\) with \(D(\varphi _2) = d-2\) and \(x \in (\varphi _2)_{K_2 r}\).

If \(x \notin (\partial \varphi _2)_{K_3 r}\) then \(G_{n,k,r,\varphi _2}\) occurs. Otherwise, we can choose \(\varphi _3 \in \varPhi (A)\) with \(D(\varphi _3) = d-3\) and \(x \in (\varphi _3)_{K_3 r}\).

Continuing in this way, we obtain a terminating sequence of faces \(\varphi _1 \supset \varphi _2 \supset \cdots \supset \varphi _m\), with \(m \le d\), such that for \(j =1,2,\ldots ,m\) we have \(D(\varphi _j) = d-j\) and \(x \in (\varphi _j)_{K_j r}\), and \(x \notin (\partial \varphi _m)_{K_{m+1} r}\) (the sequence must terminate because if \(D(\varphi )=0\) then \((\partial \varphi )_s =\varnothing \) for all \(s>0\) by definition). But then \(G_{n,k,r,\varphi _m}\) occurs, completing the proof. \(\square \)

Lemma 6.15

Let \(r>0\) and \(j \in \mathbb {N}\). Suppose \(\varphi \in \varPhi (A)\) and \(x \in \varphi _{K_{j} r} {\setminus } (\partial \varphi )_{K_{j+1} r}\). Then for all \(\varphi ' \in \varPhi (A)\) with \(D(\varphi ') = d-1\) and \(\varphi {\setminus } \varphi ' \ne \varnothing \), we have \(\,\textrm{dist}(x, \varphi ') \ge r\).

Proof

Suppose some such \(\varphi '\) exists with \(\,\textrm{dist}(x,\varphi ') < r\). Then there exist points \(z \in \varphi \) and \(z' \in \varphi '\) with \(\Vert z-x\Vert \le K_j r\), and \(\Vert z'-x\Vert < r\), so that by the triangle inequality \(\Vert z'-z\Vert < (K_j+1)r\).

By (6.18) and (6.22), this implies that \(\,\textrm{dist}(z, \partial \varphi ) < K_A(K_j+1) r\). On the other hand, we also have that

$$\begin{aligned} \,\textrm{dist}(z, \partial \varphi ) \ge \,\textrm{dist}(x,\partial \varphi ) - \Vert z-x\Vert \ge (K_{j+1} - K_j)r, \end{aligned}$$

and combining these inequalities shows that \(K_{j+1} - K_j < K_A(K_j+1)\), that is, \(K_{j+1}< K_j(K_A+1) +K_A < (K_j+1)(K_A+1)\). However, we earlier defined the sequence \((K_j)\) in such a way that \(K_{j+1} \ge (K_j+1)(K_A+1)\), so we have a contradiction. \(\square \)

Lemma 6.16

Let \(\varphi \) be a face of A. If \(\beta = \infty \) then let \(u > 1/(f_{\varphi } \rho _\varphi )\). If \(\beta < \infty \), let \(u > \hat{H}_\beta (D(\varphi )/d) / (f_{\varphi } \rho _\varphi ) \). For each \(n \in \mathbb {N}\), set \(r_n = (u k(n)/n )^{1/d}\) if \(\beta = \infty \), and set \(r_n = (u (\log n)/n )^{1/d}\) if \(\beta < \infty \). Then:

(i) if \(\beta = \infty \) then a.s. the events \(G_{n,k(n),r_n,\varphi }\) occur for only finitely many n;

(ii) if \(\beta < \infty \) then there exists \(\varepsilon >0\) such that, setting \(k'(n):= \lfloor (\beta + \varepsilon ) \log n\rfloor \), we have that \(\mathbb {P}[G_{n,k'(n),r_n,\varphi }] = O(n^{- \varepsilon })\) as \(n \rightarrow \infty \).

Proof

Set \(j = d - D(\varphi )\). We shall apply Lemma 6.3, now taking \(A_r = \varphi _{K_j r} {\setminus } (\partial \varphi )_{K_{j+1} r}\). With this choice of \(A_r\), observe first that \(\kappa (A_r,r) = O(r^{-D(\varphi )})\) as \(r \downarrow 0\).

Let \(\delta \in (0,1)\) be chosen small enough that \(u > 1/(f_\varphi \rho _\varphi (1-\delta ))\) if \(\beta = \infty \), and that \(u > \hat{H}_\beta (D(\varphi )/d)/(f_\varphi \rho _\varphi (1-\delta ))\) if \(\beta < \infty \).

By Lemma 6.15, for all small enough r, and for all \(x \in \varphi _{K_j r} {\setminus } (\partial \varphi )_{K_{j+1} r}\), and all \(s \in (0,r]\), the ball \(B(x,s)\) does not intersect any of the faces of dimension \(d-1\), other than those which meet at \(\varphi \) (i.e., which contain \(\varphi \)). Also \(f(y) \ge (1- \delta ) f_\varphi \) for all \(y \in A \) sufficiently close to \(\varphi \). Also we are assuming A is convex, so \({{\mathscr {K}}}_\varphi \) is a convex cone. Hence

$$\begin{aligned} \mu (B(x,s)) \ge (1- \delta ) f_\varphi \rho _\varphi s^d. \end{aligned}$$

Therefore we can apply Lemma 6.3, now with \(A_r = \varphi _{K_j r} {\setminus } (\partial \varphi )_{K_{j+1} r}\), taking \(a = (1-\delta ) f_\varphi \rho _\varphi \) and \(b = D(\varphi )\).

If \(\beta = \infty \), taking \(u > 1/(f_\varphi \rho _\varphi (1-\delta ))\) and \(r_n = (u k(n)/n)^{1/d}\), by Lemma 6.3 we have with probability 1 that \( \varphi _{K_j r_n} {\setminus } (\partial \varphi )_{K_{j+1} r_n} \subset F_{n,k(n),r_n}\) for all large enough n, which gives part (i).

If \(\beta < \infty \), taking \(u > \hat{H}_\beta (D(\varphi )/d)/(f_\varphi \rho _\varphi (1-\delta ))\), and setting \(r_n = (u (\log n)/n)^{1/d}\), we have from Lemma 6.3 that there exists \(\varepsilon >0 \) such that setting \(k'(n) := \lfloor (\beta + \varepsilon ) \log n \rfloor \), we have \(\mathbb {P}[ \{ \varphi _{K_j r_n} {\setminus } (\partial \varphi )_{K_{j+1} r_n} \subset F_{n,k'(n),r_n} \}^c] = O(n^{-\varepsilon })\), which gives part (ii). \(\square \)

Proof of Theorem 4.2

Suppose \(\beta = \infty \). Let \(u > \max \left( \frac{1}{\theta _d f_0}, \max _{\varphi \in \varPhi (A)} \frac{1}{f_\varphi \rho _\varphi } \right) \). Setting \(r_n = (u k(n)/n)^{1/d}\), we have from Lemma 6.16 that \(G_{n,k(n),r_n,\varphi }\) occurs only finitely often, a.s., for each \(\varphi \in \varPhi (A)\). Hence by Lemma 6.14, \(R_{n,k(n),1} \le r_n\) for all large enough n, a.s. Hence, almost surely \( \limsup _{n \rightarrow \infty } \left( n R_{n,k(n),1}^d/k(n) \right) \le u. \)

Since \(u > 1/(\theta _d f_0)\), by Proposition 4.9 we also have \(\limsup _{n \rightarrow \infty } \left( n {\tilde{R}}_{n,k(n)}^d/k(n) \right) \le u\), almost surely, and hence by (6.12), almost surely

$$\begin{aligned} \limsup _{n\rightarrow \infty } \left( n R_{n,k(n)}^d/k(n) \right) \le \max \left( (\theta _d f_0)^{-1}, \max _{\varphi \in \varPhi (A)} 1/(f_\varphi \rho _\varphi ) \right) . \end{aligned}$$

Moreover, by Lemma 6.13, Proposition 4.9 and (6.12) we also have that

$$\begin{aligned} \liminf _{n\rightarrow \infty } \left( n R_{n,k(n)}^d/k(n) \right) \ge \max \left( (\theta _d f_0)^{-1}, \max _{\varphi \in \varPhi (A)} 1/(f_\varphi \rho _\varphi ) \right) , \end{aligned}$$

and thus (4.7).

Now suppose \(\beta < \infty \). Let \(u > \max \left( \hat{H}_\beta (1)/ (f_0\theta _d), \max _{\varphi \in \varPhi (A)} \hat{H}_\beta (D(\varphi )/d)/(f_\varphi \rho _\varphi ) \right) \). Set \(r_n := (u (\log n)/n)^{1/d}\). Given \(\varphi \in \varPhi (A)\), by Lemma 6.16 there exists \(\varepsilon >0\) such that, setting \(k'(n):= \lfloor (\beta + \varepsilon ) \log n \rfloor \), we have \(\mathbb {P}[G_{n,k'(n),r_n,\varphi } ] = O(n^{-\varepsilon })\). Hence by Lemma 6.14 and the union bound,

$$\begin{aligned} \mathbb {P}[ n R_{n,k'(n),1}^d/ \log n> u] = \mathbb {P}[R_{n,k'(n),1} > r_n] = O(n^{- \varepsilon }). \end{aligned}$$

Thus by the subsequence trick (Lemma 6.1 (a)), \( \limsup _{n \rightarrow \infty } \left( n R_{n,k(n),1}^d/ \log n \right) \le u, \) almost surely. Since \(u > \hat{H}_\beta (1)/ (f_0\theta _d)\), and we take \(B=A\) here, by Proposition 4.9 we also have a.s. that \(\limsup _{n \rightarrow \infty } \left( n {\tilde{R}}_{n,k(n)}^d/\log n \right) \le u\), and hence by (6.12), almost surely

$$\begin{aligned} \limsup _{n\rightarrow \infty } \left( n R_{n,k(n)}^d/\log n \right) \le \max \left( \frac{ \hat{H}_\beta (1) }{ f_0\theta _d } , \max _{\varphi \in \varPhi (A)} \left( \frac{ \hat{H}_\beta (D(\varphi )/d)}{ f_\varphi \rho _\varphi } \right) \right) . \end{aligned}$$

Moreover, by Lemma 6.13, Proposition 4.9 and (6.12), we also have a.s. that

$$\begin{aligned} \liminf _{n\rightarrow \infty } \left( n R_{n,k(n)}^d/\log n \right) \ge \max \left( \frac{\hat{H}_\beta (1)}{\theta _d f_0}, \max _{\varphi \in \varPhi (A)} \left( \frac{ \hat{H}_\beta (D(\varphi )/d) }{ f_\varphi \rho _\varphi } \right) \right) , \end{aligned}$$

and thus (4.8). \(\square \)

7 Proof of results from Sect. 3

Throughout this section, we assume \(f = f_0 \textbf{1}_A\), where \(A \subset \mathbb {R}^d\) is compact and Riemann measurable with \(|A| >0\), and \(f_0 := |A|^{-1}\).

7.1 Preliminaries, and proof of Propositions 3.4 and 3.6

We start by showing that any weak convergence result for \(R'_{t,k}\) (in the large-t limit) of the type we seek to prove, implies the corresponding weak convergence result for \(R_{n,k}\) in the large-n limit. This is needed because all of the results in Sect. 3 are stated both for \(R_{n,k}\) and for \(R'_{t,k}\) (these quantities were defined at (2.1) and (2.2)).
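To make the two thresholds concrete, here is a small Monte Carlo sketch (an illustration only: it assumes \(A = [0,1]^2\), approximates coverage on a finite grid, and the helper names `kcov_threshold` and `poisson` are ours, not from the paper):

```python
import math
import random

random.seed(1)

def kcov_threshold(pts, k, grid):
    # Grid approximation to the k-coverage threshold of the unit square:
    # the largest, over grid locations, distance to the k-th nearest centre.
    return max(sorted(math.dist(g, p) for p in pts)[k - 1] for g in grid)

def poisson(lam):
    # Knuth's product-of-uniforms sampler; adequate for small lam.
    L, p, z = math.exp(-lam), 1.0, -1
    while p > L:
        p *= random.random()
        z += 1
    return z

m = 25
grid = [((i + 0.5) / m, (j + 0.5) / m) for i in range(m) for j in range(m)]
unif = lambda: (random.random(), random.random())
n, k = 500, 1

# R_{n,k}: exactly n uniform centres (the binomial model).
R_n = kcov_threshold([unif() for _ in range(n)], k, grid)
# R'_{t,k}: a Poisson(t) number of uniform centres, here with t = n.
Z_t = sum(poisson(1.0) for _ in range(n))
R_t = kcov_threshold([unif() for _ in range(Z_t)], k, grid)
print(R_n, R_t)  # both of order ((log n)/(pi n))^{1/2}, up to boundary effects
```

De-Poissonization formalises the intuition behind this sketch: since \(Z_t\) concentrates around t, the two thresholds share the same limit law.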

Lemma 7.1

(de-Poissonization) Suppose \(\mu \) is uniform over A. Let \(k \in \mathbb {N}\), and \(a,b,c \in \mathbb {R}\) with \(a >0\) and \(b >0\). Let F be a continuous cumulative distribution function. Suppose that

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[a t (R'_{t,k})^d - b \log t - c \log \log t \le \gamma ] = F(\gamma ), ~~~~ \forall \gamma \in \mathbb {R}. \end{aligned}$$
(7.1)

Then

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {P}[a n R_{n,k}^d - b \log n - c \log \log n \le \gamma ] = F(\gamma ), ~~~~ \forall \gamma \in \mathbb {R}. \end{aligned}$$
(7.2)

Proof

For each \(n \in \mathbb {N}\), set \(t(n) := n- n^{3/4}\). Let \(\gamma \in \mathbb {R}\). Given \(n \in \mathbb {N}\cap (1,\infty )\), set

$$\begin{aligned} r_n := \left( \frac{b \log n + c \log \log n + \gamma }{ a n } \right) ^{1/d}. \end{aligned}$$

Then

$$\begin{aligned}{} & {} t(n) a r_n^d - b \log (t(n) ) - c \log \log (t(n) )\nonumber \\{} & {} \quad = (b \log n + c \log \log n + \gamma ) \frac{t(n)}{n} - b \log (n- n^{3/4}) - c \log \log (n-n^{3/4}) \nonumber \\{} & {} \quad \rightarrow \gamma . \end{aligned}$$
(7.3)

Then by (7.1), and the continuity of F, we obtain that

$$\begin{aligned} \mathbb {P}[R'_{t(n),k} \le r_n] \rightarrow F(\gamma ). \end{aligned}$$
(7.4)

Moreover, since adding further points reduces the k-coverage threshold,

$$\begin{aligned} \mathbb {P}[R'_{t(n),k} \le r_n< R_{n,k}] \le \mathbb {P}[R'_{t(n),k} < R_{n,k}] \le \mathbb {P}[Z_{t(n)} > n ], \end{aligned}$$
(7.5)

which tends to zero by Chebyshev’s inequality.
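In detail, since \(Z_{t(n)}\) is Poisson with mean and variance \(t(n) = n - n^{3/4}\), the Chebyshev step reads:

```latex
\mathbb{P}[Z_{t(n)} > n]
  \le \frac{\operatorname{Var}(Z_{t(n)})}{(n - t(n))^{2}}
  = \frac{n - n^{3/4}}{n^{3/2}}
  \le n^{-1/2} \to 0
  \quad (n \to \infty).
```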

Now suppose \(R'_{t(n),k} > r_n\). Pick a point X of B that is covered by fewer than k of the closed balls of radius \(r_n\) centred on points in \({{\mathscr {P}}}_{t(n)}\) (this can be done in a measurable way). If, additionally, \(R_{n,k} \le r_n\), then we must have \(Z_{t(n)} < n\), and at least one of the points \(X_{Z_{t(n)}+1}, X_{Z_{t(n)}+2},\ldots ,X_n \) must lie in \(B(X,r_n) \). Therefore

$$\begin{aligned} \mathbb {P}[ R'_{t(n),k} > r_n \ge R_{n,k}] \le \mathbb {P}[ \{ n-2 n^{3/4} \le Z_{t(n) } \le n\}^c] + 2 n^{3/4} \theta _d f_0 r_n^d, \end{aligned}$$

which tends to zero by Chebyshev’s inequality together with the fact that \(n^{3/4} r_n^d = O( n^{-1/4} \log n ) \rightarrow 0\). Combined with (7.5) and (7.4) this shows that \( \mathbb {P}[ R_{n,k} \le r_n ] \rightarrow F(\gamma ) \) as \(n \rightarrow \infty \), which gives us (7.2) as required. \(\square \)

The spherical Poisson Boolean model (SPBM) is defined to be a collection of Euclidean balls (referred to as grains) of i.i.d. random radii, centred on the points of a homogeneous Poisson process in the whole of \(\mathbb {R}^d\). Often in the literature the SPBM is taken to be the union of these balls (see e.g. [19]) but here, following [13], we take the SPBM to be the collection of these balls, rather than their union. This enables us to consider multiple coverage: given \(k \in \mathbb {N}\) we say a point \(x \in \mathbb {R}^d\) is covered k times by the SPBM if it lies in at least k of the balls in this collection. The SPBM is parametrised by the intensity of the Poisson process and the distribution of the radii.

We shall repeatedly use the following result, which comes from results in Janson [16] or (when \(k=1\)) Hall [12]. Recall that \(c_d\) was defined at (3.1).

Lemma 7.2

Let \(d,k \in \mathbb {N}\). Suppose Y is a bounded nonnegative random variable, and \(\alpha = \theta _d {\mathbb {E}} \,[Y^d] \) is the expected volume of a ball of radius Y. Let \(\beta \in \mathbb {R}\). Suppose \( \delta (\lambda ) \in (0,\infty )\) is defined for all \(\lambda >0\), and satisfies

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \left( \alpha \delta (\lambda )^d \lambda - \log \lambda - (d+k-2) \log \log \lambda \right) = \beta . \end{aligned}$$
(7.6)

Let \(B \subset \mathbb {R}^d\) be compact and Riemann measurable, and for each \(\lambda >0\) let \(B_\lambda \subset B\) be Riemann measurable with the properties that \(B_\lambda \subset B_{\lambda '}\) whenever \(\lambda \le \lambda '\), and that \(\cup _{\lambda >0} B_\lambda \supset B^o\).

Let \(E_\lambda \) be the event that every point in \(B_\lambda \) is covered at least k times by a spherical Poisson Boolean model with intensity \(\lambda \) and radii having the distribution of \(\delta (\lambda )Y\). Then

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \mathbb {P}[E_\lambda ] = \exp \left( - \left( \frac{c_d ({\mathbb {E}} \,[Y^{d-1}] )^d}{ (k-1)! ({\mathbb {E}} \,[Y^d ])^{d-1} } \right) |B| e^{-\beta } \right) . \end{aligned}$$
(7.7)

In the proof, and elsewhere, we use the fact that asymptotics of iterated logarithms are unaffected by multiplicative constants: if \(a >0\) then as \(t \rightarrow \infty \) we have

$$\begin{aligned} \log \log (ta ) = \log ( \log t ( 1+ (\log a)/\log t)) = \log \log t + o(1). \end{aligned}$$
(7.8)

Proof of Lemma 7.2

For \(k=1\), when \(B_\lambda = B\) for all \(\lambda >0\) the result can be obtained from [12, Theorem 2]. Since [12] does not address multiple coverage, we use [16] instead to prove the result for general k. Because of the way the result is stated in [16], we need to express \(\log (1/(\alpha \delta ^d))\) and \(\log \log (1/(\alpha \delta ^d))\) asymptotically in terms of \(\lambda \) (we are now writing just \(\delta \) for \(\delta (\lambda )\)). By (7.6),

$$\begin{aligned} \alpha \delta ^d = \lambda ^{-1}(\log \lambda ) (1 + o(1)) \end{aligned}$$
(7.9)

so that \( \log (1/( \alpha \delta ^d)) = \log \lambda - \log \log \lambda + o(1) \) and \( \log \log (1/(\alpha \delta ^d)) = \log \log \lambda + o(1). \) Therefore we have as \(\lambda \rightarrow \infty \) that

$$\begin{aligned}{} & {} \lambda \delta ^d \alpha - \log (1/(\alpha \delta ^d)) - (d+ k -1) \log \log (1/(\alpha \delta ^d)) \nonumber \\{} & {} \quad = \lambda \delta ^d \alpha - \log \lambda - (d+ k -2) \log \log \lambda +o(1), \end{aligned}$$
(7.10)

which tends to \(\beta \) by (7.6).

Let \(\alpha _J\) be the quantity denoted \(\alpha \) by Janson [16] (our \(\alpha \) is the quantity so denoted by Hall [12]). In the present setting, as described in [16, Example 4],

$$\begin{aligned} \alpha _J = \frac{1}{d!} \left( \frac{\sqrt{\pi } \varGamma (1 +d/2) }{ \varGamma ((1+d)/2) } \right) ^{d-1} \frac{({\mathbb {E}} \,[Y^{d-1}] )^d}{( {\mathbb {E}} \,[Y^d])^{d-1} } = \frac{c_d ({\mathbb {E}} \,[Y^{d-1}])^d}{({\mathbb {E}} \,[Y^{d} ])^{d-1} }. \end{aligned}$$

Let \({\tilde{A}} = B(o,r_0)\) with \(r_0\) chosen large enough so that B is contained in the interior of \({\tilde{A}}\). Let \((X_1,Y_1),(X_2,Y_2),\ldots \) be independent identically distributed random \((d+1)\)-vectors with \(X_1 \) uniformly distributed over \({\tilde{A}}\) and \(Y_1 \) having the distribution of Y, independent of \(X_1\).

Set \(n(\lambda ) := \lceil \lambda |{\tilde{A}} | - \lambda ^{3/4} \rceil \). Let \({\tilde{E}}_\lambda \) be the event that every point of B is covered at least k times by the balls \(B(X_1,\delta Y_1),\ldots ,B(X_{n(\lambda )},\delta Y_{n(\lambda )})\). By (7.9) we have \(\lambda ^{3/4} \delta ^d \rightarrow 0\), so that \(n(\lambda ) \delta ^d \alpha /|{\tilde{A}}| = \lambda \delta ^d \alpha + o(1)\), and hence by (7.10), and (7.8), we have as \(\lambda \rightarrow \infty \) that

$$\begin{aligned}{} & {} \frac{n(\lambda ) \delta ^d \alpha }{|{\tilde{A}}|} - \log \left( \frac{|B|}{\alpha \delta ^d} \right) - (d + k-1) \log \log \left( \frac{|B|}{\alpha \delta ^d} \right) + \log \left( \frac{(k-1)!}{\alpha _J} \right) \\{} & {} \quad \rightarrow \beta + \log \left( \frac{(k-1)!}{\alpha _J |B|} \right) . \end{aligned}$$

Then by [16, Theorem 1.1]

$$\begin{aligned} \mathbb {P}[ {\tilde{E}}_\lambda ] \rightarrow \exp \left( - \left( \frac{\alpha _J|B|}{(k-1)!} \right) e^{-\beta } \right) . \end{aligned}$$
(7.11)

We can and do assume that our Poisson Boolean model, restricted to grains centred in \({\tilde{A}}\), is coupled to the sequence \((X_n,Y_n)_{n \ge 1}\) as follows. Taking \(Z_{\lambda |{\tilde{A}}|}\) to be Poisson distributed with mean \( \lambda |{\tilde{A}}|, \) independent of \( (X_1,Y_1), (X_2,Y_2),\ldots \), assume the restricted Boolean model consists of the balls \(B(X_i,\delta Y_i), 1 \le i \le Z_{\lambda |{\tilde{A}}|}\).

Then \(\mathbb {P}[{\tilde{E}}_\lambda {\setminus } E_\lambda ] \le \mathbb {P}[Z_{\lambda |{\tilde{A}}|} < n(\lambda )]\), which tends to zero by Chebyshev’s inequality. Also, if \({\tilde{E}}_\lambda \) fails to occur, we can and do choose (in a measurable way) a point \(V \in B\) which is covered by fewer than k of the balls \(B(X_i,\delta Y_i), 1 \le i \le n(\lambda )\). Then for large \(\lambda \), grains centred outside \({\tilde{A}}\) cannot intersect B, and

$$\begin{aligned} \mathbb {P}[E_\lambda {\setminus } {\tilde{E}}_\lambda ] \le \mathbb {P}[ Z_{\lambda |{\tilde{A}}|} - n(\lambda ) > 2 \lambda ^{3/4} ] + 2 \lambda ^{3/4} \mathbb {P}[X_1 \in B(V,\delta ) ] \end{aligned}$$

which tends to zero by Chebyshev’s inequality and (7.9). These estimates, together with (7.11), give us the asserted result (7.7) for general k in the case with \(B_\lambda =B\) for all \(\lambda \). It is then straightforward to obtain (7.7) for general \((B_\lambda ) \) satisfying the stated conditions. \(\square \)

Proof of Proposition 3.4

Suppose for some \(\beta \in \mathbb {R}\) that \((r_t)_{t >0}\) satisfies

$$\begin{aligned} \lim _{t \rightarrow \infty } \left( \theta _d t f_0 r_t^d - \log ( t f_0) - (d+k -2) \log \log t \right) = \beta . \end{aligned}$$
(7.12)

The point process \({{\mathscr {P}}}_t = \{X_1,\ldots ,X_{Z_t}\}\) is a homogeneous Poisson process of intensity \(t f_0\) in A. Let \({{\mathscr {Q}}}_t\) be a homogeneous Poisson process of intensity \(t f_0\) in \(\mathbb {R}^d {\setminus } A\), independent of \({{\mathscr {P}}}_t\). Then \({{\mathscr {P}}}_t \cup {{\mathscr {Q}}}_t\) is a homogeneous Poisson process of intensity \(t f_0\) in all of \(\mathbb {R}^d\).

The balls of radius \(r_t\) centred on the points of \({{\mathscr {P}}}_t \cup {{\mathscr {Q}}}_t\) form a Boolean model in \(\mathbb {R}^d\), and in the notation of Lemma 7.2, here we have \(\delta (\lambda ) = r_t\), and \(\mathbb {P}[Y=1]=1\), so that \(\alpha =\theta _d\), and \(\lambda = tf_0\), so that \(\log \log \lambda = \log \log t + o(1)\) by (7.8).

Also, by (7.12) we have the condition (7.6) from Lemma 7.2.

First assume B is compact and Riemann measurable with \(B \subset A^o\). Then for all large enough t we have \(B \subset A^{(r_t)}\), in which case \(R'_{t,k} \le r_t\) if and only if all locations in B are covered at least k times by the balls of radius \(r_t\) centred on points of \({{\mathscr {P}}}_t \cup {{\mathscr {Q}}}_t\), which is precisely the event denoted \(E_\lambda \) in Lemma 7.2. Therefore by that result, we obtain that \(\mathbb {P}[R'_{t,k} \le r_t] \rightarrow \exp (-(c_d/(k-1)!) |B|e^{-\beta })\). This yields the second equality of (3.7). We then obtain the first equality of (3.7) using Lemma 7.1.

Now consider general Riemann measurable \(B \subset A\) (dropping the previous stronger assumption on B). Given \(\varepsilon >0\), by using [21, Lemma 11.12] we can find a Riemann measurable compact set \(B' \subset A^o\) with \(|A {\setminus } B'|< \varepsilon \). Then \(B \cap B'\) is also Riemann measurable. Let \(S_{Z_t,k}\) be the smallest radius of balls centred on \({{\mathscr {P}}}_t\) needed to cover k times the set \(B \cap B'\). Then \(\mathbb {P}[S_{Z_t,k} \le r_t] \rightarrow \exp ( -(c_d/(k-1)!) |B \cap B'|e^{-\beta })\). For sufficiently large t we have \(\mathbb {P}[{\tilde{R}}_{Z_t,k} \le r_t] \le \mathbb {P}[S_{Z_t,k} \le r_t ] \), but also \(\mathbb {P}[\{S_{Z_t,k} \le r_t\} {\setminus } \{ {\tilde{R}}_{Z_t,k} \le r_t\}] \) is bounded by the probability that \(A {\setminus } B'\) is not covered k times by a SPBM of intensity \(tf_0\) with radii \(r_t\), which converges to \(1 - \exp (-(c_d /(k-1)!) |A {\setminus } B'|e^{-\beta })\). Using these estimates we may deduce that

$$\begin{aligned} \limsup _{t \rightarrow \infty } \mathbb {P}[{\tilde{R}}_{Z_t,k}\le & {} r_t] \le \exp \left[ -\left( c_d/(k-1)!\right) (|B| -\varepsilon ) e^{-\beta }\right] ;\\ \liminf _{t \rightarrow \infty } \mathbb {P}[{\tilde{R}}_{Z_t,k} \le r_t]\ge & {} \exp \left[ -\left( \frac{c_d}{(k-1)!}\right) |B| e^{-\beta }\right] \\{} & {} - \left( 1- \exp \left( - \left( \frac{c_d}{(k-1)!}\right) \varepsilon e^{-\beta } \right) \right) , \end{aligned}$$

and since \(\varepsilon \) can be arbitrarily small, that \(\mathbb {P}[{\tilde{R}}_{Z_t,k} \le r_t] \rightarrow \exp (-(c_d /(k-1)!) |B |e^{-\beta })\). This yields the second equality of (3.6), and then we can obtain the first equality of (3.6) by a similar argument to the proof of Lemma 7.1. \(\square \)

Proof of Proposition 3.6

It suffices to prove this result in the special case with \(c'=0\) (we leave it to the reader to verify this). Recall that (in this special case) we assume \(a n R_{n,k}^d - b \log n -c \log \log n {\mathop {\longrightarrow }\limits ^{{{\mathscr {D}}}}}Z\). Let \((r_m)_{m \ge 1}\) be an arbitrary real-valued sequence satisfying \(r_m \downarrow 0\) as \(m \rightarrow \infty \). Let \(t \in \mathbb {R}\). Then for all but finitely many \(m \in \mathbb {N}\) we can and do define \(n_m \in \mathbb {N}\) by

$$\begin{aligned} n_m:= \lfloor a^{-1} r_m^{-d} \left( b \log ((b/a) r_m^{-d} ) + (c+b) \log \log (r_m^{-d}) +t \right) \rfloor , \end{aligned}$$

and set \(t_m := a r_m^d n_m - b \log ( (b/a) r_m^{-d}) - (c+b) \log \log (r_m^{-d}) \). As \(m \rightarrow \infty \), we have \(t_m \rightarrow t\) and also \(\log n_m= \log [ (b/a) r_m^{-d} \log ((b/a) r_m^{-d})] + o(1)\), and hence \(\log \log n_m = \log \log (r_m^{-d}) + o(1)\). Therefore

$$\begin{aligned} a n_m r_m^d - b \log n_m - c \log \log n_m= & {} a n_m r_m^d - b \log ( (b/a)r_m^{-d}) \\{} & {} - b \log \log ((b/a) r_m^{-d})- c \log \log ( r_m^{-d}) + o(1), \end{aligned}$$

which converges to t (using (7.8)).

Also as \(m \rightarrow \infty \) we have \(n_m \rightarrow \infty \) and

$$\begin{aligned}{} & {} \mathbb {P}[a r_m^d N(r_m,k) - b \log \left( (b/a) r_m^{-d} \right) - (c+b) \log \log ( r_m^{-d})\le t_m] \\{} & {} \quad = \mathbb {P}[N(r_m ,k) \le n_m ]= \mathbb {P}[ R_{n_m,k} \le r_m], \end{aligned}$$

and by the convergence in distribution assumption, this converges to \( \mathbb {P}[Z \le t]\). \(\square \)
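The final display rests on the duality \(\{N(r,k) \le n\} = \{R_{n,k} \le r\}\), which holds because \(R_{n,k}\) is nonincreasing in n. A minimal numerical sketch of this duality (assuming \(A = [0,1]^2\) and a grid discretisation of coverage; the helper names are ours):

```python
import math
import random

random.seed(0)

def kth_nn_dist(x, centres, k):
    # Distance from x to its k-th nearest centre.
    return sorted(math.dist(x, c) for c in centres)[k - 1]

def R(n, k, pts, grid):
    # Grid-discretised k-coverage threshold using the first n centres.
    return max(kth_nn_dist(g, pts[:n], k) for g in grid)

pts = [(random.random(), random.random()) for _ in range(80)]
m = 15
grid = [((i + 0.5) / m, (j + 0.5) / m) for i in range(m) for j in range(m)]
k, r = 2, 0.4

# N(r,k): the fewest centres whose r-balls cover every grid point k times.
N = next(n for n in range(k, len(pts) + 1) if R(n, k, pts, grid) <= r)
for n in range(k, len(pts) + 1):
    # the duality {N(r,k) <= n} = {R_{n,k} <= r}
    assert (N <= n) == (R(n, k, pts, grid) <= r)
```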

We shall use the following notation throughout the sequel. Fix \(d,k \in \mathbb {N}\). Suppose that \(r_t >0\) is defined for each \(t >0\).

Given any point process \(\mathscr {X}\) in \(\mathbb {R}^d\), and any \(t >0\), define the ‘vacant’ region

$$\begin{aligned} V_t(\mathscr {X}) : =\{ x \in \mathbb {R}^d:\mathscr {X}( B(x,r_t) ) < k \}, \end{aligned}$$
(7.13)

which is the set of locations in \(\mathbb {R}^d\) covered fewer than k times by the balls of radius \(r_t\) centred on the points of \(\mathscr {X}\). Given also \(D \subset \mathbb {R}^d\), define the event

$$\begin{aligned} F_t(D,\mathscr {X}) := \{V_t (\mathscr {X}) \cap D = \varnothing \} , \end{aligned}$$
(7.14)

which is the event that every location in D is covered at least k times by the collection of balls of radius \(r_t\) centred on the points of \(\mathscr {X}\). Again the F stands for ‘fully covered’, but we no longer need the notation \(F_{n,k,r}\) from (6.3). However, we do use the notation \(\kappa (D,r)\) from (6.2) in the next result, which will be used to show various ‘exceptional’ regions are covered with high probability in the proofs that follow.

Lemma 7.3

Let \(t_0 \ge 0\), and suppose \((r_t)_{t >t_0}\) satisfies \(t r_t^d \sim c \log t\) as \(t \rightarrow \infty \), for some constant \(c >0\). Suppose \((\mu _t)_{t \ge t_0}\) is a family of Borel measures on \(\mathbb {R}^d\), and for each \(t \ge t_0\) let \({{\mathscr {R}}}_{t}\) be a Poisson process with intensity measure \(t \mu _t\) on \(\mathbb {R}^d\). Suppose \((W_t)_{t \ge t_0}\) is a family of Borel sets in \(\mathbb {R}^d\), and \(a > 0\), \(b \ge 0\) are constants, such that (i) \(\kappa (W_t, r_t) = O(r_t^{-b})\) as \(t \rightarrow \infty \), and (ii) \(\mu _t(B(x,s)) > a s^d\) for all \(t \ge t_0\), \(x \in W_t\), \(s \in [r_t/2,r_t]\). Let \(\varepsilon >0\). Then \(\mathbb {P}[(F_t(W_t, {{\mathscr {R}}}_t))^c ]= O(t^{(b/d)-a c + \varepsilon })\) as \(t \rightarrow \infty \).

Proof

This proof is similar to that of Lemma 6.3. Let \(\delta \in (0,1/2)\). Since \(\kappa (W_t,\delta r_t)\) is at most a constant times \(\kappa (W_t,r_t)\), we can and do cover \(W_t\) by \(m_t\) balls of radius \(\delta r_t\), with \(m_t= O(r_t^{-b}) = O(t^{b/d})\) as \(t \rightarrow \infty \). Let \(B_{1,t},\ldots ,B_{m_t,t}\) be balls with radius \((1-\delta )r_t\) and with the same centres as the balls in the covering. Then for \(t > t_0\) and \(1 \le i \le m_t\) we have \(t \mu _t (B_{i,t} ) \ge a t (1-\delta )^d r_t^d\) and so by Lemma 5.1(d), provided \( k < \delta a t (1-\delta )^d r_t^d \) and \(tr_t^d > (1-\delta ) c \log t\) (which is true for large t) we have

$$\begin{aligned} \mathbb {P}[F_t(W_t,{{\mathscr {R}}}_t)^c ] \le \mathbb {P}[ \cup _{i=1}^{m_t} \{ {{\mathscr {R}}}_t(B_{i,t}) < k \} ]\le & {} m_t \mathbb {P}[ Z_{at(1-\delta )^d r_t^d} \le \delta (1-\delta )^d a t r_t^d]\\\le & {} m_t \exp ( - (1-\delta )^d a t r_t^d H(\delta ) )\\= & {} O(t^{b/d} t^{-a(1-\delta )^{d+1} H(\delta ) c}) ~~~{\text { as }}~ t \rightarrow \infty . \end{aligned}$$

Since we can choose \(\delta \) so that \(a(1-\delta )^{d+1} H(\delta ) c > ac - \varepsilon \), the result follows. \(\square \)
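The estimate invoked here via Lemma 5.1(d) is the standard lower-tail Chernoff bound for a Poisson variable, \(\mathbb {P}[Z_\mu \le a\mu ] \le e^{-\mu H(a)}\). Assuming H is the rate function \(H(a) = 1 - a + a \log a\) (the form consistent with the exponents above), the bound can be checked numerically:

```python
import math

def H(a):
    # Poisson large-deviations rate function, with H(0) = 1.
    return 1.0 if a == 0 else 1 - a + a * math.log(a)

def poisson_cdf(m, mu):
    # P[Z <= m] for Z Poisson with mean mu, summed directly.
    term = total = math.exp(-mu)
    for j in range(1, m + 1):
        term *= mu / j
        total += term
    return total

# Lower-tail Chernoff bound: P[Z <= a*mu] <= exp(-mu * H(a)) for a in (0, 1).
for mu in (10.0, 50.0, 200.0):
    for a in (0.1, 0.5, 0.9):
        assert poisson_cdf(int(a * mu), mu) <= math.exp(-mu * H(a))
```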

7.2 Coverage of a region in a hyperplane by a Boolean model in a half-space

In this subsection, assume that \(d \ge 2\). Let \(\zeta \in \mathbb {R}\), \(k \in \mathbb {N}\), and assume that \((r_t)_{t >0}\) satisfies

$$\begin{aligned} \frac{f_0 t \theta _d r_t^d}{2} - \left( \frac{d - 1}{d}\right) \log (t f_0) - \left( d+k-3+1/d \right) \log \log t \rightarrow \zeta ~~ {\text { as }} ~ t \rightarrow \infty , ~~~ \nonumber \\ \end{aligned}$$
(7.15)

so that for some function h(t) tending to zero as \(t \rightarrow \infty \),

$$\begin{aligned} \exp (-\theta _d f_0 t r_t^d/2) = (t f_0)^{-(d-1)/d} (\log t)^{-d-k +3- 1/d} e^{- \zeta + h(t)}. \end{aligned}$$
(7.16)

For \(t >0\) let \({{\mathscr {U}}}_t\) denote a homogeneous Poisson process on the half-space \(\mathbb {H}:= \mathbb {R}^{d-1} \times [0,\infty )\) of intensity \(t f_0\). Recall from (3.2) the definition of \(c_{d,k}\). The next result determines the limiting probability of covering a bounded region \(\varOmega \times \{0\} \) in the hyperplane \(\partial \mathbb {H}:= \mathbb {R}^{d-1} \times \{0\}\) by balls of radius \(r_t\) centred on \({{\mathscr {U}}}_t\), or of covering the \(r_t\)-neighbourhood of \(\varOmega \times \{0\}\) in \(\mathbb {H}\). It is crucial for dealing with boundary regions in the proofs of Theorems 3.1, 3.2 and 3.3. In it, \(|\varOmega |\) denotes the \((d-1)\)-dimensional Lebesgue measure of \(\varOmega \).

Lemma 7.4

Let \(\varOmega \subset \mathbb {R}^{d-1}\) and (for each \(t >0\)) \(\varOmega _t \subset \mathbb {R}^{d-1}\) be closed and Riemann measurable, with \(\varOmega _t \subset \varOmega \) for each \(t >0\). Assume (7.15) holds for some \(\zeta \in (-\infty ,\infty ]\) and also \( \limsup _{t \rightarrow \infty } ( tr_t^d/(\log t)) < \infty \). Then

$$\begin{aligned} \lim _{t \rightarrow \infty } (\mathbb {P}[F_t(\varOmega \times \{0\},{{\mathscr {U}}}_t ) ] ) = \exp \left( - c_{d,k} |\varOmega | e^{- \zeta } \right) . \end{aligned}$$
(7.17)

Also, given \(a \in (0,\infty )\) and \(\delta _t >0\) for each \(t >0\),

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}\left[ F_t(\varOmega _t \times \{0\},{{\mathscr {U}}}_t ) \cap F_t\left( ((\partial \varOmega _t) \oplus B_{(d-1)}(o, \delta _t)) \times [0,a r_t], {{\mathscr {U}}}_t \right) {\setminus } F_t( \varOmega _t \times [0,a r_t], {{\mathscr {U}}}_t ) \right] = 0. \end{aligned}$$
(7.18)

Remark 7.5

Usually we shall use Lemma 7.4 in the case where \(\zeta < \infty \). In this case the extra condition \( \limsup _{t \rightarrow \infty } ( tr_t^d/(\log t)) < \infty \) is automatic. When \(\zeta =\infty \), in (7.17) we use the convention \(e^{-\infty } := 0\).

The following terminology and notation will be used in the proof of Lemma 7.4, and again later on. We use bold face for vectors in \(\mathbb {R}^d \) here. Given \(\textbf{x}\in \mathbb {R}^d\), we let \(\pi _d(\textbf{x})\) denote the d-th co-ordinate of \(\textbf{x}\), and refer to \(\pi _d(\textbf{x})\) as the height of \(\textbf{x}\). Given \(\textbf{x}_1 \in \mathbb {R}^d, \ldots ,\textbf{x}_d \in \mathbb {R}^d\), and \(r >0\), if \(\cap _{i=1}^d \partial B(\textbf{x}_i,r)\) consists of exactly two points, we refer to these as \(\textbf{p}_r(\textbf{x}_1,\ldots ,\textbf{x}_d)\) and \(\textbf{q}_r(\textbf{x}_1,\ldots ,\textbf{x}_d)\) with \(\textbf{p}_r(\textbf{x}_1,\ldots ,\textbf{x}_d)\) at a smaller height than \(\textbf{q}_r(\textbf{x}_1,\ldots ,\textbf{x}_d)\) (or if they are at the same height, take \(\textbf{p}_r(\textbf{x}_1,\ldots ,\textbf{x}_d) < \textbf{q}_r(\textbf{x}_1,\ldots ,\textbf{x}_d)\) in the lexicographic ordering). Define the indicator function

$$\begin{aligned}{} & {} h_r(\textbf{x}_1,\ldots ,\textbf{x}_d) := \textbf{1} \{ \pi _d(\textbf{x}_1) \le \min (\pi _d(\textbf{x}_2), \ldots , \pi _d(\textbf{x}_d)) \}\nonumber \\{} & {} \quad \times \textbf{1} \{ \#( \cap _{i=1}^d \partial B(\textbf{x}_i,r) ) = 2 \} \textbf{1} \{ \pi _d(\textbf{x}_1) < \pi _d(\textbf{q}_r(\textbf{x}_1,\ldots ,\textbf{x}_d))\} . \end{aligned}$$
(7.19)

Proof of Lemma 7.4

Assume for now that \(\zeta < \infty \). Considering the slices of balls of radius \(r_t\) centred on points of \({{\mathscr {U}}}_t\) that intersect the hyperplane \(\mathbb {R}^{d-1} \times \{0\}\), we have a \((d-1)\)-dimensional Boolean model with (in the notation of Lemma 7.2)

$$\begin{aligned} \delta = r_t, ~~~ \lambda = t f_0 r_t, ~~~~ \alpha = \frac{\theta _d}{2} , ~~~~~ {\mathbb {E}} \,[ Y^{d-1}] = \frac{\theta _d}{2 \theta _{d-1}} , ~~~~~ {\mathbb {E}} \,[Y^{d-2}]= \frac{\theta _{d-1}}{2 \theta _{d-2}}. \end{aligned}$$

To see the moment assertions here, note that here \(Y= (1-U^2)^{1/2}\) with U uniformly distributed over [0, 1], so that \(\theta _{d-1}Y^{d-1}\) is the \((d-1)\)-dimensional Lebesgue measure of a \((d-1)\)-dimensional affine slice through the unit ball at distance U from its centre, and an application of Fubini’s theorem gives the above assertions for \(\alpha := \theta _{d-1} {\mathbb {E}} \,[Y^{d-1}]\) and hence for \({\mathbb {E}} \,[Y^{d-1}]\); the same argument in a lower dimension gives the assertion regarding \({\mathbb {E}} \,[Y^{d-2}]\). (We take \(\theta _0=1\) so the assertion for \({\mathbb {E}} \,[Y^{d-2}]\) is valid for \(d=2\) as well.)
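These moment identities are easy to sanity-check numerically: with \(Y = (1-U^2)^{1/2}\) and \(\theta _d = \pi ^{d/2}/\varGamma (1+d/2)\), a midpoint-rule computation of \({\mathbb {E}} \,[Y^p]\) (the helper names below are ours) recovers \(\theta _d/(2\theta _{d-1})\) and \(\theta _{d-1}/(2\theta _{d-2})\):

```python
import math

def theta(d):
    # Volume of the d-dimensional unit ball (theta_0 = 1).
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def moment(p, n=100000):
    # E[Y^p] for Y = sqrt(1 - U^2), U uniform on [0, 1], via the midpoint rule.
    return sum((1 - ((i + 0.5) / n) ** 2) ** (p / 2) for i in range(n)) / n

for d in (2, 3, 4, 5):
    assert abs(moment(d - 1) - theta(d) / (2 * theta(d - 1))) < 1e-4
    assert abs(moment(d - 2) - theta(d - 1) / (2 * theta(d - 2))) < 1e-4
```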

By (7.15), as \(t \rightarrow \infty \) we have \(r_t \sim (2 -2/d)^{1/d} (\theta _d t f_0)^{-1/d} (\log t)^{1/d}\), and therefore \( \lambda \sim (2 - 2/d)^{1/d} \theta _d^{-1/d} (t f_0)^{1-1/d} (\log t )^{1/d} \) so that

$$\begin{aligned} \log \lambda = (1-1/d) \log ( t f_0) + d^{-1} \log \log t + d^{-1} \log \left( \frac{2-2/d}{\theta _d} \right) + o(1). \end{aligned}$$

Hence, \(\log \lambda = (\log t)(1- 1/d)(1+ g(t))\) for some function g(t) tending to zero.

Therefore \(\log \log \lambda = \log \log t + \log (1-1/d) + o(1)\). Checking (7.6) here, we have

$$\begin{aligned}{} & {} \alpha \delta ^{d-1} \lambda - \log \lambda - (d+k -3) \log \log \lambda \nonumber \\{} & {} \quad = (\theta _d/2) f_0 tr_t^d - (1-1/d) \log (t f_0) - (d+k -3 +1/d) \log \log t \nonumber \\{} & {} \qquad - d^{-1} \log ((2-2/d)/\theta _d) - (d+k-3 ) \log (1-1/d) + o(1), \end{aligned}$$
(7.20)

so by using (7.15) again we obtain that

$$\begin{aligned}{} & {} \lim _{t \rightarrow \infty } ( \alpha \delta ^{d-1} \lambda - \log \lambda - (d+k -3) \log \log \lambda ) \\{} & {} \quad = \zeta - d^{-1} \log (2/\theta _d) - (d+k -3 + d^{-1}) \log (1-1/d). \end{aligned}$$

Also \(({\mathbb {E}} \,[Y^{d-2}])^{d-1}/({\mathbb {E}} \,[Y^{d-1}])^{d-2} = \theta _d^{2-d} \theta _{d-1}^{2d-3} \theta _{d-2}^{1-d}/2\), so (7.17) follows by Lemma 7.2 and (3.2).

Having now verified (7.17) in the case where \(\zeta < \infty \), we can easily deduce (7.17) in the other case too.

It remains to prove (7.18); we now consider general \(\zeta \in (-\infty ,\infty ]\). Let \(E_t\) be the (exceptional) event that there exist d distinct points \(\textbf{x}_1,\ldots ,\textbf{x}_d\) of \({{\mathscr {U}}}_t\) such that \( \cap _{i=1}^d \partial B(\textbf{x}_i,r_t) \) has non-empty intersection with the hyperplane \(\mathbb {R}^{d-1} \times \{0\}\). Then \(\mathbb {P}[E_t]=0\).

Suppose that the event displayed in (7.18) occurs, and that \(E_t\) does not. Let \(\textbf{w}\) be a location of minimal height (i.e., d-coordinate) in the closure of \(V_t({{\mathscr {U}}}_t) \cap ( \varOmega _t \times [0,a r_t])\). Since we assume \(F_t(((\partial \varOmega _t) \oplus B_{(d-1)}(o, \delta _t)) \times [0,a r_t], {{\mathscr {U}}}_t)\) occurs, \(\textbf{w}\) must lie in \(\varOmega _t^o \times [0,ar_t]\). Also we claim that \(\textbf{w}\) must be a ‘corner’ given by the meeting point of the boundaries of exactly d balls of radius \(r_t\) centred at points of \({{\mathscr {U}}}_t \), located at \(\textbf{x}_1,\ldots ,\textbf{x}_d\) say, with \(\textbf{x}_1\) the lowest of these d points, and with \(\#(\cap _{i=1}^d \partial B(\textbf{x}_i,r_t) ) =2\), and \(\textbf{w}\in V_t( {{\mathscr {U}}}_t {\setminus } \{\textbf{x}_1,\ldots ,\textbf{x}_d\})\).

Indeed, if \(\textbf{w}\) is not at the boundary of any such ball, then for some \(\delta >0\) we have \(B(\textbf{w}, \delta ) \subset V_t({{\mathscr {U}}}_t)\), and then we could find a location in \(V_t({{\mathscr {U}}}_t) \cap (\varOmega _t \times [0,ar_t])\) lower than \(\textbf{w}\), a contradiction. Next, suppose instead that \(\textbf{w}\) lies at the boundary of fewer than d such balls. Then denoting by L the intersection of the supporting hyperplanes at \(\textbf{w}\) of each of these balls, we have that L is an affine subspace of \(\mathbb {R}^d\), of dimension at least 1. Take \(\delta >0\) small enough so that \(B(\textbf{w},\delta )\) does not intersect any of the boundaries of balls of radius \(r_t\) centred at points of \({{\mathscr {U}}}_t\), other than those which meet at \(\textbf{w}\). Taking \(\textbf{w}' \in L \cap B(\textbf{w}, \delta ) {\setminus } \{\textbf{w}\}\) such that \(\textbf{w}'\) is at least as low as \(\textbf{w}\), we have that \(\textbf{w}'\) lies in the interior of \(V_t({{\mathscr {U}}}_t)\). Hence for some \(\delta ' >0\), \(B(\textbf{w}',\delta ') \subset V_t({{\mathscr {U}}}_t)\) and we can find a location in \(B(\textbf{w}',\delta ')\) that is lower than \(\textbf{w}\), yielding a contradiction for this case too. Finally, with probability 1 there is no set of more than d points of \({{\mathscr {U}}}_t\) such that the boundaries of balls of radius \(r_t\) centred on these points have non-empty intersection, so \(\textbf{w}\) is not at the boundary of more than d such balls. Thus we have justified the claim.

Moreover \(\textbf{w}\) must be the point \( \textbf{q}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d)\) rather than \(\textbf{p}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d)\), because otherwise by extending the line segment from \( \textbf{q}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d)\) to \(\textbf{p}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d)\) slightly beyond \(\textbf{p}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d)\) we could find a point in \(V_t({{\mathscr {U}}}_t) \cap (\varOmega _t \times [0,a r_t])\) lower than \(\textbf{w}\), contradicting the statement that \(\textbf{w}\) is a location of minimal height in the closure of \(V_t({{\mathscr {U}}}_t) \cap (\varOmega _t \times [0,a r_t])\). Moreover, \(\textbf{w}\) must be strictly higher than \(\textbf{x}_1\), since if \(\pi _d(\textbf{w}) \le \min (\pi _d(\textbf{x}_1), \ldots ,\pi _d(\textbf{x}_d))\), then locations just below \(\textbf{w}\) would lie in \(V_t({{\mathscr {U}}}_t) \cap (\varOmega _t \times [0,a r_t])\), contradicting the statement that \(\textbf{w}\) is a point of minimal height in the closure of \(V_t({{\mathscr {U}}}_t \cap (\varOmega _t \times [0,ar_t]))\). Hence, \(h_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d) = 1\), where \(h_r(\cdot )\) was defined at (7.19).

Note that there is a constant \(c' := c'(d,a) >0\) such that for any \(\textbf{x}\in \mathbb {H}\) with \(0 \le \pi _d(\textbf{x}) \le a\) we have that \(|B(\textbf{x},1) \cap \mathbb {H}| \ge (\theta _d/2) + c' \pi _d(\textbf{x})\). Hence for any \(r>0\), and any \(\textbf{x}\in \mathbb {H}\) with \(0 \le \pi _d (\textbf{x}) \le a r\),

$$\begin{aligned} |B(\textbf{x},r) \cap \mathbb {H}| = r^d |B(r^{-1}\textbf{x},1) \cap \mathbb {H}| \ge (\theta _d/2) r^d + c' r^{d-1} \pi _d(\textbf{x}). \end{aligned}$$

Thus if the event displayed in (7.18) holds, then almost surely there exists at least one d-tuple of points \(\textbf{x}_1,\ldots ,\textbf{x}_d \in {{\mathscr {U}}}_t\), such that \(h_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d)=1\), and moreover \(\textbf{q}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d) \in V_t( {{\mathscr {U}}}_t {\setminus } \{\textbf{x}_1,\ldots ,\textbf{x}_d\}) \cap ( \varOmega _t \times [0,a r_t])\). By the Mecke formula (see e.g. [19]), there is a constant c such that the expected number of such d-tuples is bounded above by

$$\begin{aligned}{} & {} c t^d (tr_t^d)^{k-1} \int _{\mathbb {H}} d\textbf{x}_1 \cdots \int _{\mathbb {H}} d\textbf{x}_d h_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d) \textbf{1}\{ \textbf{q}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d) \in \varOmega _t \times [0,ar_t] \} \nonumber \\{} & {} \qquad \qquad \qquad \times \exp (- (\theta _d/2) f_0 t r_t^d - c' f_0 t r_t^{d-1} \pi _d( \textbf{q}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d) )). ~~~ \end{aligned}$$
(7.21)

Now we change variables to \(\textbf{y}_i= r_t^{-1}(\textbf{x}_i-\textbf{x}_1)\) for \(2 \le i \le d\), noting that

$$\begin{aligned} \pi _{d}(\textbf{q}_{r_t}(\textbf{x}_1,\ldots ,\textbf{x}_d))= & {} \pi _d(\textbf{x}_1) + \pi _d( \textbf{q}_{r_t}(o,\textbf{x}_2 -\textbf{x}_1, \ldots , \textbf{x}_d-\textbf{x}_1) ) \\= & {} \pi _d(\textbf{x}_1) + r_t \pi _d( \textbf{q}_1(o,\textbf{y}_2, \ldots , \textbf{y}_d)). \end{aligned}$$

Hence by (7.16), there is a constant \(c''\) such that the expression in (7.21) is at most

$$\begin{aligned}{} & {} c'' t^d (\log t)^{k-1} t^{-1 + 1/d} (\log t)^{3-(1/d) -d - k} r_t^{d(d-1)} \int _0^{a r_t} e^{-c' f_0 t r^{d-1}_t u} du \nonumber \\{} & {} \quad \times \int _\mathbb {H}\cdots \int _\mathbb {H}h_{1}(o,\textbf{y}_2,\ldots ,\textbf{y}_d) \exp ( - c' f_0 t r_t^d \pi _d( \textbf{q}_1(o,\textbf{y}_2,\ldots ,\textbf{y}_d)) ) d\textbf{y}_2 \ldots d\textbf{y}_d.\nonumber \\ \end{aligned}$$
(7.22)

In the last expression the first line is bounded by a constant times the expression

$$\begin{aligned} t^{d-1 + 1/d} r_t^{d(d-1)} (\log t)^{2-d -1/d} (t r_t^{d-1})^{-1} = (t r_t^d)^{d-2 + 1/d} (\log t)^{2-d -1/d}, \end{aligned}$$

which is bounded because we assume \(\limsup _{t \rightarrow \infty } (tr_t^d/(\log t))< \infty \). The second line of (7.22) tends to zero by dominated convergence because \(tr_t^d \rightarrow \infty \) by (7.15) and because the indicator function \( (\textbf{z}_2,\ldots ,\textbf{z}_d) \mapsto h_1(o,\textbf{z}_2,\ldots , \textbf{z}_d) \) has bounded support and is zero when \(\pi _d(\textbf{q}_1(o,\textbf{z}_2,\ldots ,\textbf{z}_d)) \le 0\). Therefore by Markov’s inequality we obtain (7.18). \(\square \)

7.3 Polygons: proof of Theorem 3.2

Of the three theorems in Sect. 3, Theorem 3.2 has the simplest proof, so we give this one first. Thus, in this subsection, we set \(d=2\), take A to be polygonal with \(B=A\), and take \(f \equiv f_0 \textbf{1}_A\), with \(f_0 := |A|^{-1}\). Denote the vertices of A by \(q_1,\ldots ,q_\kappa \), and the angles subtended at these vertices by \(\alpha _1,\ldots ,\alpha _\kappa \) respectively. Choose \(K \in (2, \infty )\) such that \(K \sin \alpha _i > 9\) for \(i=1,\ldots , \kappa \).
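Purely as an illustration (not part of the proof; the function name and sample angles below are hypothetical), such a constant K can be computed directly from the vertex angles, assuming each angle lies in \((0,\pi )\) so that \(\sin \alpha _i > 0\):

```python
import math

def choose_K(angles):
    """Return a K > 2 with K * sin(alpha_i) > 9 for every vertex angle alpha_i.

    Assumes each angle lies strictly between 0 and pi, so sin(alpha_i) > 0;
    the smallest admissible threshold is enlarged slightly to make the
    inequalities strict.
    """
    K = max(2.0, max(9.0 / math.sin(a) for a in angles))
    return 1.01 * K

angles = [math.pi / 6, math.pi / 2, 2 * math.pi / 3]  # hypothetical polygon angles
K = choose_K(angles)
assert K > 2 and all(K * math.sin(a) > 9 for a in angles)
```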

For the duration of this subsection (and the next) we fix \(k \in \mathbb {N}\) and \(\beta \in \mathbb {R}\). Assume we are given real numbers \((r_t)_{t >0}\) satisfying

$$\begin{aligned} \lim _{t \rightarrow \infty } (\pi t f_0 r_t^2 - \log ( t f_0) + (1 - 2k ) \log \log t ) = \beta . \end{aligned}$$
(7.23)

Setting \(\zeta = \beta /2\), this is the same as the condition (7.15) for \(d=2\).
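Condition (7.23) pins down the scaling of \(r_t\) exactly. As a numerical illustration only (the values of \(t, f_0, k, \beta \) below are hypothetical), one can solve (7.23) for \(r_t\) at each finite t with the o(1) term taken to be identically zero:

```python
import math

def r_t(t, f0, k, beta):
    """Radius with pi*t*f0*r^2 - log(t*f0) + (1-2k)*log(log t) = beta exactly,
    i.e. condition (7.23) with the o(1) error term identically zero."""
    val = beta + math.log(t * f0) - (1 - 2 * k) * math.log(math.log(t))
    return math.sqrt(val / (math.pi * t * f0))

t, f0, k, beta = 1e6, 0.25, 2, 0.7  # hypothetical parameter values
r = r_t(t, f0, k, beta)
# Check that the expression appearing in (7.23) really equals beta:
expr = math.pi * t * f0 * r**2 - math.log(t * f0) + (1 - 2 * k) * math.log(math.log(t))
assert abs(expr - beta) < 1e-9
```

Note that under this scaling \(r_t \rightarrow 0\) while \(t r_t^2\) grows like \(\log t/(\pi f_0)\), as used repeatedly below.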

Given \(D \subset \mathbb {R}^2\), a point process \(\mathscr {X}\) in \(\mathbb {R}^2\), and \(t >0\), define the ‘vacant region’ \(V_t(\mathscr {X})\) and the event \(F_t(D,\mathscr {X})\) as at (7.13) and (7.14).

For \(t >0\), in this subsection we define the ‘corner regions’ \(Q_t\) and \(Q_t^-\) by

$$\begin{aligned} Q_t:= \cup _{j=1}^\kappa B(q_j,(K+9)r_t); ~~~~~ Q^-_t:= \cup _{j=1}^\kappa B(q_j,Kr_t). \end{aligned}$$

Lemma 7.6

It is the case that \(\mathbb {P}[F_t(Q_t \cap A,{{\mathscr {P}}}_t)] \rightarrow 1\) as \(t \rightarrow \infty \).

Proof

We have \(\kappa (Q_t \cap A,r_t) = O(1)\) as \(t \rightarrow \infty \). Also there exists \(a>0\) such that for all large enough t, all \(s \in [r_t/2,r_t]\) and \(x \in Q_t \cap A\), we have \(f_0 |B(x,s) \cap A| \ge a r_t^d\). Thus we can apply Lemma 7.3, taking \(W_t = Q_t \cap A\), and \(\mu _t (\cdot ) = f_0 |\cdot \cap A|\), with \(b = 0\), to deduce that \(\mathbb {P}[(F_t(W_t, {{\mathscr {P}}}_t))^c ] \) is \( O(t^{-\delta })\) for some \(\delta >0\), and hence tends to 0. \(\square \)

Lemma 7.7

It is the case that

$$\begin{aligned} \lim _{t \rightarrow \infty } ( \mathbb {P}[F_t(\partial A {\setminus } Q^-_t,{{\mathscr {P}}}_t) ]) = \exp (- c_{2,k} |\partial A| e^{-\beta /2}). \end{aligned}$$
(7.24)

Also,

$$\begin{aligned} \lim _{t \rightarrow \infty } ( \mathbb {P}[F_t (A^{(3r_t)} \cup \partial A \cup (Q_t \cap A),{{\mathscr {P}}}_t) {\setminus } F_t(A,{{\mathscr {P}}}_t)] ) = 0. \end{aligned}$$
(7.25)

Proof

Denote the line segments making up \(\partial A\) by \(I_1,\ldots ,I_{\kappa }\), and for \( t >0\) and \(1 \le i \le \kappa \) set \(I_{t,i} := I_i {\setminus } Q^-_t\).

Let \(i,j,k \in \{1,\ldots ,\kappa \}\) be such that \(i\ne j\) and the edges \(I_i\) and \(I_j\) are both incident to \(q_k\). If \(x \in I_{t,i}\) and \(y \in I_{t,j}\), then \(\Vert x-y \Vert \ge (Kr_t) \sin \alpha _k \ge 9r_t\). Hence the events \(F_t(I_{t,1}, {{\mathscr {P}}}_t),\ldots ,F_t(I_{t,\kappa }, {{\mathscr {P}}}_t)\) are mutually independent. Therefore

$$\begin{aligned} \mathbb {P}[ F_t(\partial A {\setminus } Q^-_t, {{\mathscr {P}}}_t) ] = \prod _{i=1}^\kappa \mathbb {P}[ F_t(I_{t,i}, {{\mathscr {P}}}_t) ], \end{aligned}$$

and by Lemma 7.4 this converges to the right hand side of (7.24).

Now we prove (7.25). For \(t >0\), and \(i \in \{1,2,\ldots , \kappa \} \), let \(S_{t,i}\) denote the rectangular block of dimensions \(|I_{t,i} | \times 3 r_t\), consisting of all points in A at perpendicular distance at most \(3 r_t\) from \(I_{t,i}\). Let \(\partial _{\textrm{side}}S_{t,i}\) denote the union of the two ‘short’ edges of the boundary of \(S_{t,i}\), i.e. the two edges bounding \(S_{t,i}\) that are perpendicular to \(I_{t,i}\).

Then \(A {\setminus } (A^{(3r_t)} \cup Q_t) \subset \cup _{i=1}^\kappa S_{t,i}\), and also \((\partial _{\textrm{side}} S_{t,i} \oplus B(o,r_t)) \subset Q_t\) for \(1 \le i \le \kappa \), so that

$$\begin{aligned}{} & {} F_t (A^{(3r_t)} \cup \partial A \cup (Q_t \cap A),{{\mathscr {P}}}_t) {\setminus } F_t(A,{{\mathscr {P}}}_t)\\{} & {} \subset \cup _{i=1}^\kappa [F_t(I_{t,i} \cup (\partial _{\textrm{side}} S_{t,i} \oplus B(o,r_t)), {{\mathscr {P}}}_t ) {\setminus } F_t(S_{t,i},{{\mathscr {P}}}_t)]. \end{aligned}$$

For \(i \in \{1,\ldots ,\kappa \}\), let \(I'_{t,i}\) denote an interval of length \(|I_{t,i}|\) contained in the x-axis. By a rotation one sees that \(\mathbb {P}[F_t(I_{t,i} \cup (\partial _{\textrm{side}} S_{t,i} \oplus B(o,r_t)), {{\mathscr {P}}}_t ) {\setminus } F_t(S_{t,i}, {{\mathscr {P}}}_t)]\) is at most

$$\begin{aligned} \mathbb {P}[ F_t((I'_{t,i} \times \{0\}) \cup (((\partial I'_{t,i}) \oplus [-r_t,r_t]) \times [0,3 r_t] ),{{\mathscr {U}}}_t ) {\setminus } F_t(I'_{t,i} \times [0,3r_t],{{\mathscr {U}}}_t) ] , \end{aligned}$$

where \({{\mathscr {U}}}_t\) is as in Lemma 7.4. By (7.18) from that result, this probability tends to zero, which gives us (7.25). \(\square \)

Proof of Theorem 3.2

Let \(\beta \in \mathbb {R}\) and suppose \((r_t)_{t >0}\) satisfies (7.23). Then

$$\begin{aligned} \lim _{t \rightarrow \infty } ( \pi f_0 t r_t^2 - \log (t f_0) - k \log \log t ) = {\left\{ \begin{array}{ll} \beta &{} {\text { if }} k=1 \\ + \infty &{} {\text { if }} k \ge 2. \end{array}\right. } \end{aligned}$$

Hence by (3.6) from Proposition 3.4 (taking \(B=A\) there and using (6.4)),

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A^{(r_t)},{{\mathscr {P}}}_t ) ] = \lim _{t \rightarrow \infty } \mathbb {P}[ {\tilde{R}}_{Z_t,k} \le r_t] = {\left\{ \begin{array}{ll} \exp ( - |A| e^{-\beta }) &{} {\text { if }} k=1 \\ 1 &{} {\text { if }} k \ge 2. \end{array}\right. }\nonumber \\ \end{aligned}$$
(7.26)

Observe that \(F_t(A^{(r_t)},{{\mathscr {P}}}_t) \subset F_t(A^{(3r_t)},{{\mathscr {P}}}_t) \). We claim that

$$\begin{aligned} \mathbb {P}[ F_t(A^{(3r_t)},{{\mathscr {P}}}_t) {\setminus } F_t(A^{(r_t)},{{\mathscr {P}}}_t)] \rightarrow 0 ~~~ {\text { as }} ~ t \rightarrow \infty . \end{aligned}$$
(7.27)

Indeed, given \(\varepsilon >0\), for large t the probability on the left side of (7.27) is bounded by \(\mathbb {P}[ F_t(A^{(r_t)}{\setminus } A^{[\varepsilon ]},{{\mathscr {P}}}_t)^c] \) (the set \(A^{[\varepsilon ]}\) was defined in Sect. 2), and by (3.6) from Proposition 3.4 (as in (7.26) but now taking \(B=A {\setminus } A^{[\varepsilon ]}\) in Proposition 3.4), the latter probability tends to a limit which can be made arbitrarily small by the choice of \(\varepsilon \).

Also, by (7.24) and Lemma 7.6,

$$\begin{aligned} \lim _{t \rightarrow \infty } ( \mathbb {P}[F_t(\partial A,{{\mathscr {P}}}_t ) ]) = \exp (- c_{2,k} |\partial A| e^{-\beta /2}). \end{aligned}$$
(7.28)

Using (7.25) followed by Lemma 7.6, we obtain that

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A,{{\mathscr {P}}}_t) ]{} & {} = \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A^{(3r_t)} \cup \partial A \cup (Q_t \cap A),{{\mathscr {P}}}_t) ] \nonumber \\{} & {} = \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A^{(3r_t)} \cup \partial A,{{\mathscr {P}}}_t) ], \end{aligned}$$
(7.29)

provided these limits exist.

By (7.27), (7.26) still holds with \(A^{(r_t)}\) replaced by \(A^{(3r_t)}\) on the left. Also, the events \(F_t(\partial A,{{\mathscr {P}}}_t) \) and \(F_t(A^{(3r_t)},{{\mathscr {P}}}_t) \) are independent, since the first of these events is determined by the configuration of Poisson points at distance at most \(r_t\) from \(\partial A\), while the second is determined by the Poisson points at distance more than \(2r_t\) from \(\partial A\). Therefore the limit in (7.29) does indeed exist, and is the product of the limits arising in (7.26) and (7.28).

Since \(F_t(A,{{\mathscr {P}}}_t) = \{R'_{t,k} \le r_t\} \), this gives us the second equality of (3.4), and we then obtain the first equality of (3.4) by using Lemma 7.1. \(\square \)

7.4 Polyhedra: proof of Theorem 3.3

Now suppose that \(d=3\) and \(A=B\) is a polyhedron with \(f \equiv f_0 \textbf{1}_A\), where \(f_0 = 1/|A|\). Fix \(k \in \mathbb {N}\) and \(\beta \in \mathbb {R}\). Recall that \(\alpha _1\) denotes the smallest edge angle of A. Throughout this subsection, we assume that we are given \((r_t)_{ t >0}\) with the following limiting behaviour as \(t \rightarrow \infty \):

$$\begin{aligned}{} & {} ( \pi \wedge 2\alpha _1 ) f_0 t r_t^3 - \log (t f_0) - \beta = {\left\{ \begin{array}{ll} (3k -1) \log \log t + o(1) &{} \text{ if } \alpha _1 \le \pi /2\\ \left( \frac{1+ 3k}{2}\right) \log \log t + o(1) &{} \text{ if } \alpha _1 > \pi /2. ~~~~ \end{array}\right. }\nonumber \\ \end{aligned}$$
(7.30)

For \(D \subset \mathbb {R}^3\) and \(\mathscr {X}\) a point set in \(\mathbb {R}^3\), define the region \(V_t(\mathscr {X})\) and event \(F_t(D,\mathscr {X})\) by (7.13) and (7.14) as before, now with \((r_t)_{t >0}\) satisfying (7.30).

To prove Theorem 3.3, our main task is to determine \(\lim _{t \rightarrow \infty } \mathbb {P}[F_t(A,{{\mathscr {P}}}_t)]\). Our strategy for this goes as follows. We shall consider reduced faces and reduced edges of A, obtained by removing a region within distance \(Kr_t\) of the boundary of each face/edge, for a suitable constant K. Then the events that different reduced faces and reduced edges are covered are mutually independent. Observing that the intersection of \({{\mathscr {P}}}_t \) with a face or edge is a lower-dimensional SPBM, and using Lemma 7.2, we shall determine the limiting probability that the reduced faces and edges are covered.

If \(F_t(A,{{\mathscr {P}}}_t)\) does not occur but all reduced faces and all reduced edges are covered, then the uncovered region must either meet the region \(A^{(3r_t)}\), or go near (but not intersect) one of the reduced faces, or go near (but not intersect) one of the reduced edges, or go near one of the vertices. We shall show in turn that each of these possibilities has vanishing probability.

The following lemma will help us deal with the regions near the faces of A.

Lemma 7.8

Let \(\varOmega \subset \mathbb {R}^2\) and (for each \(t >0\)) \(\varOmega _t \subset \mathbb {R}^2\) be closed and Riemann measurable, with \(\varOmega _t \subset \varOmega ^o\) for each \(t >0\). Let \({{\mathscr {U}}}_t\) be a homogeneous Poisson process of intensity \(tf_0\) in \(\mathbb {H}\). Assume (7.30) holds. Then

$$\begin{aligned}{} & {} \lim _{t \rightarrow \infty } (\mathbb {P}[F_t(\varOmega \times \{0\},{{\mathscr {U}}}_t ) ] ) = {\left\{ \begin{array}{ll} \exp \left( - c_{3,k} |\varOmega | e^{- 2 \beta /3} \right) , &{} \text { if } \, \alpha _1 > \pi /2\\ &{} \text{ or } \, \alpha _1= \pi /2, k=1\\ 1 &{} \text{ otherwise. } \end{array}\right. } ~~~ \nonumber \\ \end{aligned}$$
(7.31)

Also in all cases, given \(a \in (0,\infty )\) and \(\delta _t >0\) for each \(t >0\),

(7.32)

Proof

In the case where \(\alpha _1 > \pi /2\) or \(\alpha _1= \pi /2,k=1\) the condition (7.30) implies that (7.15) holds (for \(d=3\)) with \(\zeta := 2 \beta /3\). In the other case (with \(\alpha _1 < \pi /2 \) or \(\alpha _1 = \pi /2, k>1\)) the condition (7.30) implies that (7.15) holds (for \(d=3\)) with \(\zeta := + \infty \), and also \(\limsup _{t \rightarrow \infty }(tr_t^3 / \log t) < \infty \). Therefore we can deduce the result (in both cases) from Lemma 7.4. \(\square \)

In this subsection we use bold face to indicate vectors in \(\mathbb {R}^3\). For \(i=1,2,3\), we let \(\pi _i: \mathbb {R}^3 \rightarrow \mathbb {R}\) denote projection onto the \(i\)th coordinate. For \(\textbf{x}\in \mathbb {R}^3\) we shall sometimes refer to \(\pi _2(\textbf{x})\) and \(\pi _3(\textbf{x})\) as the ‘depth’ and ‘height’ of \(\textbf{x}\), respectively.

For \(r>0\) and for triples \((\textbf{x},\textbf{y},\textbf{z}) \in (\mathbb {R}^3)^3\), we define the function \(h_r(\textbf{x},\textbf{y},\textbf{z})\) and (when \(h_r(\textbf{x},\textbf{y},\textbf{z})=1\)) the points \(\textbf{p}_r(\textbf{x}, \textbf{y},\textbf{z})\) and \(\textbf{q}_r(\textbf{x}, \textbf{y},\textbf{z})\), as at (7.19), now specialising to \(d=3\).

For \(\alpha \in (0,2 \pi )\), let \(\mathbb {W}_\alpha \subset \mathbb {R}^3 \) be the wedge-shaped region

$$\begin{aligned} \mathbb {W}_\alpha := \{(x,r\cos \theta , r\sin \theta ) : x \in \mathbb {R}, r \ge 0, \theta \in [0,\alpha ] \} . \end{aligned}$$

For \(\alpha \in (0, 2 \pi )\) and \(t >0\), let \({{\mathscr {W}}}_{\alpha ,t}\) be a homogeneous Poisson process in \(\mathbb {W}_\alpha \) of intensity \(t f_0\).

Lemma 7.9

(Lower bound for the volume of a ball within a wedge) Let \(\alpha \in (0, \pi )\), and \(u >0\). There is a constant \(c' = c'(\alpha ,u) \in (0,\infty )\) such that for any \(r >0\) and any \(\textbf{x}\in \mathbb {W}_{\alpha } \cap B(o, ur) \),

$$\begin{aligned} |B(\textbf{x},r) \cap \mathbb {W}_\alpha | \ge (2 \alpha /3) r^3 + c' r^2 \pi _3(\textbf{x}) + c' r^2 (\pi _2(\textbf{x}) - \pi _3(\textbf{x}) \cot \alpha ) . \end{aligned}$$
(7.33)

Proof

As illustrated in Fig. 2, we can find a constant \(c' >0 \) such that for any \(\textbf{x}\in \mathbb {W}_\alpha \cap B(o,u) \), we have that

$$\begin{aligned} |B(\textbf{x},1) \cap \mathbb {W}_\alpha | \ge (2 \alpha /3) + c' (\pi _3(\textbf{x})+ (\pi _2(\textbf{x}) - (\pi _3(\textbf{x})/ \tan \alpha )) ). \end{aligned}$$
Fig. 2

We show only the 2nd and 3rd co-ordinates. In the proof of Lemma 7.9, we lower bound the volume of the intersection of the ball of unit radius (represented by the circle) with the leftmost wedge shown by the total volume of the three shaded regions, each intersected with the ball

Also \( |B(\textbf{x},r) \cap \mathbb {W}_\alpha | = r^{3} |B(r^{-1} \textbf{x},1) \cap \mathbb {W}_\alpha | \) by scaling, and it is then straightforward to deduce (7.33). \(\square \)
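The leading term \((2\alpha /3) r^3\) in (7.33) is simply the volume of the wedge-sector of the ball: the wedge \(\mathbb {W}_\alpha \) occupies an angular fraction \(\alpha /(2\pi )\) of \(\mathbb {R}^3\), so \(|B(o,1) \cap \mathbb {W}_\alpha | = (4\pi /3) \cdot \alpha /(2\pi ) = 2\alpha /3\). A quick Monte Carlo sketch of this base case (an illustration only, not part of the argument; sample size and seed are arbitrary):

```python
import math
import random

def wedge_ball_volume(alpha, n=200_000, seed=1):
    """Monte Carlo estimate of |B(o,1) ∩ W_alpha| for the wedge
    W_alpha = {(x, r cos t, r sin t) : x real, r >= 0, t in [0, alpha]}."""
    rng = random.Random(seed)
    hits = total = 0
    while total < n:
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x * x + y * y + z * z > 1:
            continue  # rejection sampling: keep only points of the unit ball
        total += 1
        theta = math.atan2(z, y) % (2 * math.pi)  # angle of (y, z), in [0, 2*pi)
        if theta <= alpha:
            hits += 1
    return (4 * math.pi / 3) * hits / total  # ball volume times hit fraction

alpha = math.pi / 3
est = wedge_ball_volume(alpha)
exact = 2 * alpha / 3  # = (4*pi/3) * alpha/(2*pi)
assert abs(est - exact) < 0.05
```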

In the next three lemmas, we take I to be an arbitrary fixed compact interval in \(\mathbb {R}\), and then, given \(\alpha \in (0,2\pi )\) and \(b \ge 0\), set

$$\begin{aligned} \mathbb {W}_{\alpha , b} := \{ (x_1,x_2,x_3) \in \mathbb {W}_\alpha : x_1 \in I, x_2^2 + x_3^2 \le b^2\}. \end{aligned}$$

Also, writing \(I = [a_1,a_2]\), let \(I'_t:= [a_1 - r_t,a_2+ r_t] \) (a slight extension of the interval I). Let \(\mathbb {W}'_{\alpha ,b,t} := \{ (x_1,x_2,x_3) \in \mathbb {W}_\alpha : x_1 \in I'_t, x_2^2 + x_3^2 \le b^2\}. \)

The next lemma will help us to show that if the part of \(\partial A\) near to a given edge is covered, then the interior of A near that edge is also likely to be covered.

Lemma 7.10

Let \(\alpha \in [\alpha _1,\pi )\), \(a \in (1,\infty )\). Then

$$\begin{aligned} \lim _{t \rightarrow \infty } (\mathbb {P}[F_t((\partial \mathbb {W}_\alpha ) \cap \mathbb {W}_{\alpha , a r_t}, {{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t( \mathbb {W}_{\alpha , a r_t}, {{\mathscr {W}}}_{\alpha , t} ) ] ) = 0. \end{aligned}$$
(7.34)

Proof

First, note that \(\kappa (\mathbb {W}'_{\alpha ,ar_t,t} {\setminus } \mathbb {W}_{\alpha ,ar_t},r_t) = O(1)\) as \(t \rightarrow \infty \), and hence by taking the measure \(\mu _t\) to be \(f_0 | \cdot \cap \mathbb {W}_{\alpha }|\) in Lemma 7.3,

$$\begin{aligned} { \lim _{t \rightarrow \infty } \mathbb {P}[F_t ( \mathbb {W}'_{\alpha ,ar_t,t} {\setminus } \mathbb {W}_{\alpha ,ar_t},{{\mathscr {W}}}_{\alpha ,t})] =1.} \end{aligned}$$
(7.35)

Let \(E_t\) here be the (exceptional) event that there exist three distinct points \(\textbf{x},\textbf{y},\textbf{z}\) of \({{\mathscr {W}}}_{\alpha ,t}\) such that \(\partial B(\textbf{x},r_t) \cap \partial B(\textbf{y},r_t) \cap \partial B(\textbf{z},r_t)\) has non-empty intersection with the plane \(\mathbb {R}^2 \times \{0\} \). Then \(\mathbb {P}[E_t]=0\).

Suppose that event \(F_t((\partial \mathbb {W}_\alpha ) \cap \mathbb {W}_{\alpha , a r_t}, {{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t( \mathbb {W}_{\alpha ,a r_t}, {{\mathscr {W}}}_{\alpha ,t} ) \) occurs, and that \(F_t ( \mathbb {W}'_{\alpha ,ar_t,t} {\setminus } \mathbb {W}_{\alpha ,ar_t}, {{\mathscr {W}}}_{\alpha ,t})\) occurs, and that \(E_t\) does not. Let \(\textbf{w}\) be a point of minimal height in the closure of \(V_t({{\mathscr {W}}}_{\alpha ,t}) \cap \mathbb {W}_{\alpha ,ar_t}\). Then \(\textbf{w}\in \mathbb {W}_{\alpha ,a r_t} {\setminus } \partial \mathbb {W}_\alpha \), and \(\textbf{w}=\textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z})\) for some triple \((\textbf{x},\textbf{y},\textbf{z})\) of points of \({{\mathscr {W}}}_{\alpha ,t} \), satisfying \(h_{r_t}(\textbf{x},\textbf{y},\textbf{z})=1\), where \(h_{r_t}(\cdot )\) was defined at (7.19). Also \(\textbf{w}\) is covered by fewer than k of the balls centred on the other points of \({{\mathscr {W}}}_{\alpha ,t}\). Hence by Markov’s inequality,

$$\begin{aligned} \mathbb {P}[F_t((\partial \mathbb {W}_\alpha )\!{} & {} \cap \mathbb {W}_{\alpha , a r_t}, {{\mathscr {W}}}_{\alpha ,t}) \cap F_t(\mathbb {W}'_{\alpha ,ar_t,t}{\setminus } \mathbb {W}_{\alpha ,ar_t},{{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t( \mathbb {W}_{\alpha ,a r_t}, {{\mathscr {W}}}_{\alpha ,t} ) ] \nonumber \\{} & {} \le \mathbb {P}[N_t \ge 1] \le {\mathbb {E}} \,[N_t], ~~~~~~ \end{aligned}$$
(7.36)

where we set

$$\begin{aligned} N_t := \sum _{\textbf{x},\textbf{y},\textbf{z}\in {{\mathscr {W}}}_{\alpha ,t}}^{\ne } h_{r_t}(\textbf{x},\textbf{y},\textbf{z}) \textbf{1} \{ \textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z}) \in \mathbb {W}_{\alpha ,ar_t} \} \textbf{1} \{ {{\mathscr {W}}}_{\alpha ,t} (B(\textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z}),r_t) ) < k \}, \end{aligned}$$

and \(\sum ^{\ne }\) means we are summing over triples of distinct points in \({{\mathscr {W}}}_{\alpha ,t}\). By the Mecke formula, \({\mathbb {E}} \,[N_t]\) is bounded by a constant times

$$\begin{aligned}{} & {} t^3 (tr_t^3)^{k-1} \int _{\mathbb {W}_\alpha } d\textbf{x}\int _{\mathbb {W}_\alpha } d\textbf{y}\int _{\mathbb {W}_\alpha } d\textbf{z}h_{r_t}(\textbf{x},\textbf{y},\textbf{z}) \textbf{1}\{ \textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z}) \in \mathbb {W}_{\alpha ,ar_t} \}\\{} & {} \qquad \qquad \times \exp (- t f_0 |B(\textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z}),r_t) \cap \mathbb {W}_{\alpha }|). \end{aligned}$$

Hence by (7.33), there is a constant c such that \({\mathbb {E}} \,[N_t]\) is bounded by

$$\begin{aligned}{} & {} c t^3 (tr_t^3)^{k-1} \int _{\mathbb {W}_\alpha } d\textbf{x}\int _{\mathbb {W}_\alpha } d\textbf{y}\int _{\mathbb {W}_\alpha } d\textbf{z}h_{r_t}(\textbf{x},\textbf{y},\textbf{z}) \textbf{1}\{ \textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z}) \in \mathbb {W}_{\alpha ,ar_t} \} \nonumber \\{} & {} \quad \times \exp \left\{ - (2\alpha /3) f_0 t r_t^3 - c' f_0 t r_t^2 \pi _3( \textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z}) ) \right. \nonumber \\{} & {} \quad \left. - c' f_0 tr_t^2 [\pi _2( \textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z})) - \pi _3(\textbf{q}_{r_t}(\textbf{x},\textbf{y},\textbf{z})) \cot \alpha ] \right\} . \end{aligned}$$
(7.37)

We claim that (7.30) implies

$$\begin{aligned} \exp (-(2 \alpha /3) f_0 t r_t^3 ) = O(t^{-1/3} (\log t)^{(1/3)-k}). \end{aligned}$$
(7.38)

Indeed, if \(\alpha = \alpha _1 \le \pi /2\), then the expression on the left side of (7.38), divided by the one on the right, tends to a finite constant; otherwise, this ratio tends to zero.
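In more detail (a routine expansion of (7.30), spelled out here for convenience): when \(\alpha = \alpha _1 \le \pi /2\), condition (7.30) gives \(2\alpha _1 f_0 t r_t^3 = \log (t f_0) + (3k-1) \log \log t + \beta + o(1)\), so that

$$\begin{aligned} \exp (-(2 \alpha _1/3) f_0 t r_t^3 ) = (t f_0)^{-1/3} (\log t)^{-(3k-1)/3} e^{-\beta /3 + o(1)} = O( t^{-1/3} (\log t)^{(1/3)-k}), \end{aligned}$$

since \(-(3k-1)/3 = (1/3)-k\). In the remaining cases \(2\alpha \) strictly exceeds \(\pi \wedge 2\alpha _1\), so the left side of (7.38) carries an extra factor \(\exp (-c f_0 t r_t^3)\) for some \(c >0\), which tends to zero because \(t r_t^3 \rightarrow \infty \).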

Now we write \(c''\) for \(c' f_0\). By (7.30), (7.38) and the assumption that I is a bounded interval, we obtain that \({\mathbb {E}} \,[N_t]\) is bounded by a constant times

$$\begin{aligned}{} & {} t^3 (\log t)^{k-1} t^{-1/3}(\log t)^{(1/3)-k } \int _{\mathbb {W}_\alpha } d\textbf{z}\int _{\mathbb {W}_\alpha } d\textbf{y}\int _0^{2 ar_t} du \int _0^{2 ar_t} dv h_{r_t}( (0,u,v),\textbf{y},\textbf{z})\\{} & {} \quad \times \textbf{1}\{ \textbf{q}_{r_t}((0,u,v),\textbf{y},\textbf{z}) \in \mathbb {W}_\alpha \}\\{} & {} \quad \times \exp \left\{ -c'' t r_t^2 [v + (\pi _3( \textbf{q}_{r_t}((0,u,v) ,\textbf{y},\textbf{z}) ) -v) ] \right\} \\{} & {} \quad \times \exp \left\{ - c'' t r_t^2 [\pi _2( \textbf{q}_{r_t}((0,u,v),\textbf{y},\textbf{z})) - \pi _3(\textbf{q}_{r_t}((0,u,v),\textbf{y},\textbf{z})) \cot \alpha ] \right\} . \end{aligned}$$

Writing \(\textbf{u}:= (0,u,v)\), and changing variables to \(\textbf{y}'= r_t^{-1}(\textbf{y}-\textbf{u})\), and \(\textbf{z}'= r_t^{-1}(\textbf{z}-\textbf{u})\), we obtain that the previous expression is bounded by a constant times the quantity

$$\begin{aligned} b_t:= & {} t^{8/3} (\log t)^{-2/3 } r_t^6 \int _0^{2 ar_t} dv \int _{\mathbb {R}^3} d\textbf{z}' \int _{\mathbb {R}^3} d\textbf{y}' \\{} & {} \quad \times \int _0^{2 ar_t} du f_t(u,v,\textbf{y}',\textbf{z}') g_t(u,v,\textbf{y}',\textbf{z}'), \end{aligned}$$

where we set

$$\begin{aligned} f_t(u,v,\textbf{y}',\textbf{z}'):= & {} e^{- c''t r_t^2 v} \exp ( - c'' t r_t^2 [ \pi _3( \textbf{q}_{r_t}(\textbf{u}, \textbf{u}+ r_t \textbf{y}', \textbf{u}+ r_t \textbf{z}') ) -v ] )\\{} & {} \quad \times h_{r_t}(\textbf{u}, \textbf{u}+ r_t \textbf{y}', \textbf{u}+ r_t \textbf{z}') , \end{aligned}$$

and

$$\begin{aligned} g_t(u,v,\textbf{y}',\textbf{z}'):= & {} \textbf{1} \{ \pi _2 (\textbf{q}_{r_t} (\textbf{u}, \textbf{u}+ r_t \textbf{y}', \textbf{u}+ r_t \textbf{z}') ) > \pi _3( \textbf{q}_{r_t} (\textbf{u}, \textbf{u}+ r_t \textbf{y}', \textbf{u}+ r_t \textbf{z}') ) \cot \alpha \}\\{} & {} \times \exp \{ - c'' t r_t^2 [ \pi _2 (\textbf{q}_{r_t} (\textbf{u}, \textbf{u}+ r_t \textbf{y}', \textbf{u}+ r_t \textbf{z}') ) - \pi _3( \textbf{q}_{r_t} (\textbf{u}, \textbf{u}+ r_t \textbf{y}', \textbf{u}+ r_t \textbf{z}') ) \cot \alpha ] \}. \end{aligned}$$

Observe that \(f_t(u,v,\textbf{y}',\textbf{z}')\) does not depend on u, so it is equal to \(f_t(0,v,\textbf{y}',\textbf{z}')\). Therefore we can take this factor outside the innermost integral. Also, setting \(\textbf{v}:= (0,0,v)\), we have

$$\begin{aligned} \textbf{q}_{r_t}(\textbf{u},\textbf{u}+ r_t \textbf{y}',\textbf{u}+ r_t \textbf{z}') = \textbf{q}_{r_t}(\textbf{v},\textbf{v}+ r_t \textbf{y}',\textbf{v}+ r_t \textbf{z}') + (0,u,0). \end{aligned}$$

Therefore, setting \(\textbf{w}:= \textbf{w}(t,v, \textbf{y}',\textbf{z}') := \textbf{q}_{r_t}(\textbf{v}, \textbf{v}+ r_t \textbf{y}', \textbf{v}+ r_t \textbf{z}') \), we have

$$\begin{aligned} g_t(u,v,\textbf{y}',\textbf{z}')= & {} \textbf{1}\{ \pi _2( \textbf{w}) + u > \pi _3(\textbf{w}) \cot \alpha \} \exp ( - c'' t r_t^2 ( \pi _2( \textbf{w}) \\{} & {} + u - \pi _3(\textbf{w}) \cot \alpha ) ). \end{aligned}$$

Defining \( u_0 := u_0 (t,v, \textbf{y}',\textbf{z}') := \pi _3( \textbf{w}) \cot \alpha - \pi _2( \textbf{w}) , \) we obtain for the innermost integral that

$$\begin{aligned} \int _0^{2 a r_t} g_t (u,v,\textbf{y}',\textbf{z}') du = \int _{u_0}^{2 ar_t} \exp (- c'' t r_t^2 (u- u_0) ) du = O((tr_t^2)^{-1} ). \end{aligned}$$

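The final identity here is the elementary exponential tail bound \(\int _{u_0}^{b} e^{-\lambda (u-u_0)} du \le 1/\lambda \), applied with \(\lambda = c'' t r_t^2\); the same step recurs for the \(u'\)-integral in the proof of Lemma 7.11. A numerical sanity check of this step, with hypothetical values of \(\lambda \) and the integration range:

```python
import math

def tail_integral(lam, length, n=100_000):
    """Midpoint-rule approximation of the integral of exp(-lam*s) ds over
    [0, length]; the substitution s = u - u0 removes the lower endpoint u0."""
    h = length / n
    return sum(math.exp(-lam * (i + 0.5) * h) for i in range(n)) * h

lam, length = 50.0, 1.7  # hypothetical stand-ins for c''*t*r_t^2 and 2*a*r_t - u0
closed_form = (1 - math.exp(-lam * length)) / lam
approx = tail_integral(lam, length)
assert abs(approx - closed_form) < 1e-6
assert closed_form <= 1 / lam  # the O((t*r_t^2)^{-1}) bound used in the text
```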
Also, we have

$$\begin{aligned} e^{c'' t r_t^2 v} f_t(0,v, \textbf{y}', \textbf{z}')= & {} \exp [- c'' t r_t^2 \pi _3(\textbf{q}_{r_t} (o,r_t \textbf{y}',r_t \textbf{z}') ) ] h_{r_t}(o, r_t \textbf{y}', r_t \textbf{z}')\\= & {} \exp ( - c'' t r_t^3 \pi _3(\textbf{q}_1(o,\textbf{y}',\textbf{z}')) ) h_1(o,\textbf{y}',\textbf{z}')\\=: & {} {\tilde{f}}_t(\textbf{y}',\textbf{z}'). \end{aligned}$$

Combining all this, we obtain that \(b_t\) is bounded by a constant times

$$\begin{aligned} t^{8/3} (\log t)^{-2/3} r_t^6 (t r_t^2)^{-1}\int _0^{2 ar_t} e^{-c'' t r_t^2 v} dv \int _{\mathbb {R}^3} \int _{\mathbb {R}^3} d\textbf{z}' d\textbf{y}' {\tilde{f}}_t(\textbf{y}',\textbf{z}') , \end{aligned}$$

and since the first integral is \(O((tr_t^2)^{-1})\), this is bounded by a constant times

$$\begin{aligned} (t r_t^3)^{2/3} (\log t)^{-2/3} \int _{\mathbb {R}^3} \int _{\mathbb {R}^3} {\tilde{f}}_t(\textbf{y}',\textbf{z}') d\textbf{y}' d\textbf{z}'. \end{aligned}$$

By dominated convergence the last integral tends to zero, while by (7.30) the prefactor tends to a constant. Thus \(b_t \rightarrow 0\) as \(t \rightarrow \infty \). Therefore also \({\mathbb {E}} \,[N_t] \rightarrow 0\), and combined with (7.36) and (7.35), this yields (7.34). \(\square \)

The next lemma helps us to show that if a given edge of A is covered, then the part of \(\partial A\) near that edge is also likely to be covered.

Lemma 7.11

Suppose \(\alpha _1 \le \alpha \le \pi /2\). Let \(a >1\). Let \(\mathbb {F}:= \partial \mathbb {H}:= \mathbb {R}^2 \times \{0\}\). Then \(\mathbb {P}[F_t(\mathbb {W}_{\alpha ,0}, {{\mathscr {W}}}_{\alpha ,t} ) {\setminus } F_t(\mathbb {F}\cap \mathbb {W}_{\alpha ,a r_t},{{\mathscr {W}}}_{\alpha ,t}) ] \rightarrow 0\) as \(t \rightarrow \infty \).

Proof

Given \(\textbf{x},\textbf{y}\in \mathbb {R}^3\), and \(r>0\), if \(\#( \partial B(\textbf{x},r) \cap \partial B(\textbf{y},r) \cap \mathbb {F})=2 \), then denote the two points of \( \partial B(\textbf{x},r) \cap \partial B(\textbf{y},r) \cap \mathbb {F}\), taken in increasing lexicographic order, by \({\tilde{\textbf{p}}}_{r}(\textbf{x},\textbf{y})\) and \({\tilde{\textbf{q}}}_{r}(\textbf{x},\textbf{y})\). Define the indicator functions

$$\begin{aligned}{} & {} {\tilde{h}}_r^{(1)}(\textbf{x},\textbf{y}): = \textbf{1} \{ \pi _2(\textbf{y}) \ge \pi _2(\textbf{x}) \} \textbf{1} \{ \# ( \partial B(\textbf{x},r) \cap \partial B(\textbf{y},r) \cap \mathbb {F}) =2 \}\\{} & {} \quad \times \textbf{1} \{ \pi _2({\tilde{\textbf{p}}}_{r}(\textbf{x},\textbf{y}))> \pi _2(\textbf{x}) \} ; \\{} & {} {\tilde{h}}_r^{(2)}(\textbf{x},\textbf{y}): = \textbf{1} \{ \pi _2(\textbf{y}) \ge \pi _2(\textbf{x}) \} \textbf{1} \{ \# ( \partial B(\textbf{x},r) \cap \partial B(\textbf{y},r) \cap \mathbb {F}) =2 \}\\{} & {} \quad \times \textbf{1} \{ \pi _2({\tilde{\textbf{q}}}_{r}(\textbf{x},\textbf{y})) > \pi _2(\textbf{x}) \}. \end{aligned}$$

Let \(E_t\) here denote the (exceptional) event that there exist two distinct points \(\textbf{x},\textbf{y}\in {{\mathscr {W}}}_{\alpha ,t}\), such that \(\partial B(\textbf{x},r_t) \cap \partial B(\textbf{y},r_t) \cap \mathbb {W}_{\alpha ,0} \ne \varnothing \). Then \(\mathbb {P}[E_t]=0\).

Suppose \(F_t(\mathbb {W}_{\alpha ,0}, {{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t(\mathbb {F}\cap \mathbb {W}_{\alpha ,a r_t},{{\mathscr {W}}}_{\alpha ,t}) \) occurs, and \(E_t\) does not. Let \(\textbf{w}\) be a location in \(\overline{V_t({{\mathscr {W}}}_{\alpha ,t})} \cap \mathbb {F}\) of minimal depth (i.e., minimal second coordinate). Then \(\textbf{w}\) must lie at the intersection of the boundaries of balls of radius \(r_t\) centred on points \(\textbf{x},\textbf{y}\in {{\mathscr {W}}}_{\alpha ,t}\), that is, at \({\tilde{\textbf{p}}}_{r_t}(\textbf{x},\textbf{y})\) or at \({\tilde{\textbf{q}}}_{r_t}(\textbf{x},\textbf{y})\). Moreover \(\textbf{w}\) must be covered by at most \(k-1\) other balls centred on points of \({{\mathscr {W}}}_{\alpha ,t}\). Also \(\pi _2(\textbf{w}) > \min (\pi _2(\textbf{x}),\pi _2(\textbf{y})) \); otherwise, for sufficiently small \(\delta \) the location \(\textbf{w}+ (0,-\delta ,0)\) would be in \(\overline{ V_t ( {{\mathscr {W}}}_{\alpha ,t})} \cap \mathbb {F}\) and have a smaller depth than \(\textbf{w}\). Finally, to have \(\textbf{w}\in \mathbb {W}_{\alpha , ar_t}\) we need \(\pi _2(\textbf{w}) \le a r_t\) and hence \(\min (\pi _2(\textbf{x}),\pi _2(\textbf{y})) \le a r_t\). Therefore

$$\begin{aligned} \mathbb {P}[ F_t(\mathbb {W}_{\alpha ,0}, {{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t(\mathbb {F}\cap \mathbb {W}_{\alpha ,a r_t},{{\mathscr {W}}}_{\alpha ,t}) ] \le \mathbb {P}[N_t^{(1)} \ge 1 ] + \mathbb {P}[N_t^{(2)} \ge 1 ],\nonumber \\ \end{aligned}$$
(7.39)

where, with \(\sum _{\textbf{x},\textbf{y}\in {{\mathscr {W}}}_{\alpha ,t}}^{\ne }\) denoting summation over ordered pairs of distinct points of \({{\mathscr {W}}}_{\alpha ,t}\), we set

$$\begin{aligned}{} & {} N_t^{(1)} := \sum _{\textbf{x},\textbf{y}\in {{\mathscr {W}}}_{\alpha ,t}}^{\ne } {\tilde{h}}_{r_t}^{(1)}(\textbf{x},\textbf{y}) \textbf{1} \{ {{\mathscr {W}}}_{\alpha ,t}(B({\tilde{\textbf{p}}}_{r_t}(\textbf{x},\textbf{y}), r_t ) )< k+2 \} \textbf{1}\{\pi _2(\textbf{x}) \le ar_t\};\\{} & {} N_t^{(2)} := \sum _{\textbf{x},\textbf{y}\in {{\mathscr {W}}}_{\alpha ,t}}^{\ne } {\tilde{h}}_{r_t}^{(2)}(\textbf{x},\textbf{y}) \textbf{1} \{ {{\mathscr {W}}}_{\alpha ,t}(B({\tilde{\textbf{q}}}_{r_t}(\textbf{x},\textbf{y}), r_t ) ) < k+2 \} \textbf{1}\{\pi _2(\textbf{x}) \le ar_t\}. \end{aligned}$$

By the Mecke formula, and (7.33) from Lemma 7.9, with \(c':= c'(\alpha ,a+1)\) we have that \({\mathbb {E}} \,[N_t^{(1)}]\) is bounded by a constant times

$$\begin{aligned}{} & {} t^2 (t r_t^3)^{k-1} \int _0^{a r_t} du \int _0^{r_t} dv \int _{\mathbb {R}^3} d \textbf{y}{\tilde{h}}^{(1)}_{r_t} ((0,u,v),\textbf{y}) \exp ( - (2 \alpha /3) f_0t r_t^3 \nonumber \\{} & {} \qquad \qquad - c' f_0 t r_t^2 \pi _{2} ({\tilde{\textbf{p}}}_{r_t}((0,u,v),\textbf{y}) ) ), \end{aligned}$$

and there is a similar bound for \({\mathbb {E}} \,[N_t^{(2)}]\), involving \({\tilde{h}}_{r_t}^{(2)}\) and \({\tilde{\textbf{q}}}_{r_t}(\textbf{x},\textbf{y})\).

Consider \({\mathbb {E}} \,[ N_t^{(1)}]\) (we can treat \({\mathbb {E}} \,[N_t^{(2)} ]\) similarly), and write \(c''\) for \(c'f_0\). By (7.38), \({\mathbb {E}} \,[N_t^{(1)}]\) is bounded by a constant times

$$\begin{aligned}{} & {} t^2 (tr_t^3)^{k-1} t^{-1/3} (\log t)^{(1/3)-k} \int _0^{a r_t} du \int _0^{r_t} dv \int _{\mathbb {R}^3} d\textbf{y}{\tilde{h}}^{(1)}_{r_t} ((0,u,v),\textbf{y}) \nonumber \\{} & {} \qquad \qquad \quad \times \exp ( - c'' t r_t^2 \pi _{2} ({\tilde{\textbf{p}}}_{r_t}((0,u,v),\textbf{y}) ) ). \end{aligned}$$
(7.40)

Set \(\textbf{u}:= (0,u,v)\) and \(\textbf{v}:= (0,0,v)\). We shall now make the changes of variable \(u'= r_t^{-1} u\), \(v'= r_t^{-1} v\) and \(\textbf{y}' = r_t^{-1} (\textbf{y}- \textbf{u})\). Also write \(\textbf{u}':= (0,u',v') = r_t^{-1} \textbf{u}\) and \(\textbf{v}' := (0,0,v')\). Then

$$\begin{aligned} {\tilde{\textbf{p}}}_{r_t}(\textbf{u},\textbf{u}+ r_t \textbf{y}') = r_t {\tilde{\textbf{p}}}_1(\textbf{u}',\textbf{u}' + \textbf{y}') = r_t {\tilde{\textbf{p}}}_1(\textbf{v}',\textbf{v}' + \textbf{y}') + r_t(0,u',0). \end{aligned}$$

Also

$$\begin{aligned} {\tilde{h}}_{r_t}^{(1)} (\textbf{u},\textbf{u}+ r_t \textbf{y}') = {\tilde{h}}_{1}^{(1)} (\textbf{u}' ,\textbf{u}' + \textbf{y}') = {\tilde{h}}_{1}^{(1)} (\textbf{v}' ,\textbf{v}' + \textbf{y}'). \end{aligned}$$

By our changes of variable, the integral in (7.40) comes to

$$\begin{aligned} r_t^5 \int _0^{a } du' \int _0^{1} dv' \int _{\mathbb {R}^3} d \textbf{y}' {\tilde{h}}^{(1)}_{1} (\textbf{v}',\textbf{v}' + \textbf{y}') \exp ( - c'' t r_t^3 u' - c'' t r_t^3 \pi _{2} ({\tilde{\textbf{p}}}_{1}(\textbf{v}',\textbf{v}' + \textbf{y}') ) ). \end{aligned}$$

Therefore since \(\int _0^a e^{- c'' t r_t^3 u'} du' = O((tr_t^3)^{-1})\), and \(tr_t^3 = O(\log t)\) by (7.30), the expression in (7.40) is bounded by a constant times

$$\begin{aligned} t^{5/3} (\log t)^{-2/3} r_t^5 (t r_t^3)^{-1} \int _0^1 dv' \int _{\mathbb {R}^3} d\textbf{y}' {\tilde{h}}^{(1)}_{1} (\textbf{v}',\textbf{v}' +\textbf{y}') \exp ( - c'' tr_t^3 \pi _{2} ({\tilde{\textbf{p}}}_{1}(\textbf{v}',\textbf{v}' + \textbf{y}') ) ) . \end{aligned}$$

For each \(v' \in [0,1]\) the function \(\textbf{y}' \mapsto {\tilde{h}}_1^{(1)} (\textbf{v}',\textbf{v}' + \textbf{y}')\) has bounded support and is zero whenever \(\pi _2({\tilde{\textbf{p}}}_1(\textbf{v}',\textbf{v}' + \textbf{y}')) \le 0\) (because \(\pi _2(\textbf{v}')=0\)). Therefore in the last displayed expression the integral tends to zero by dominated convergence, while the prefactor tends to a finite constant by (7.30).

Thus \({\mathbb {E}} \,[N_t^{(1)}] \rightarrow 0\), and one can show similarly that \({\mathbb {E}} \,[N_t^{(2)}] \rightarrow 0\). Hence by Markov’s inequality, the expression on the left hand side of (7.39) tends to zero. \(\square \)

The next lemma enables us to reduce the question of the limiting probability that a region near a given edge of A is covered to the corresponding question for the edge itself.

Lemma 7.12

Let \(a_0 >1\). If \(\alpha \in [\alpha _1,\pi /2]\), then \(\mathbb {P}[F_t(\mathbb {W}_{\alpha ,0},{{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t(\mathbb {W}_{\alpha , a_0 r_t},{{\mathscr {W}}}_{\alpha ,t})] \rightarrow 0\) as \(t \rightarrow \infty \). Moreover, if \(\alpha \in (\pi /2,2\pi )\) then \(\mathbb {P}[F_t(\mathbb {W}_{\alpha ,a_0 r_t}, {{\mathscr {W}}}_{\alpha ,t})] \rightarrow 1\) as \(t \rightarrow \infty \).

Proof

First suppose \(\alpha \le \pi /2\). It follows from Lemma 7.11 that \(\mathbb {P}[F_t(\mathbb {W}_{\alpha ,0},{{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t ( (\partial \mathbb {W}_\alpha ) \cap \mathbb {W}_{\alpha , a_0 r_t},{{\mathscr {W}}}_{\alpha ,t})] \rightarrow 0\). Combined with Lemma 7.10, this shows that \(\mathbb {P}[F_t(\mathbb {W}_{\alpha ,0},{{\mathscr {W}}}_{\alpha ,t}) {\setminus } F_t(\mathbb {W}_{\alpha , a_0 r_t},{{\mathscr {W}}}_{\alpha ,t})] \rightarrow 0\).

Now suppose \(\alpha > \pi /2\). In this case some of the geometrical arguments used in proving Lemmas 7.10 and 7.11 do not work. Instead we can use Lemma 7.3, taking \(\mu _t = f_0 | \cdot \cap \mathbb {W}_{\alpha }|\), taking \(W_t \) of that result to be \( \mathbb {W}_{\alpha , a_0 r_t}\), with \(a = 2f_0 (\alpha \wedge \pi )/3\) and \(b=1\). By (7.30), we have \(tr_t^d \sim c \log t\) with \(c = 1/(f_0 \pi )\). Hence \(a c = 2 (\alpha \wedge \pi ) / (3 \pi ) > 1/3 = b/d\), so application of Lemma 7.3 yields the claim. \(\square \)

Using Lemma 6.12, choose \(K >4\) such that for any edge or face \(\varphi \) of A, and any other face \(\varphi '\) of A with \(\varphi {\setminus } \varphi ' \ne \varnothing \), and all \(x \in \varphi ^o\), we have \(\,\textrm{dist}(x, \partial \varphi ) \le (K/3) \,\textrm{dist}(x, \varphi ')\). Denote the vertices of A by \(q_1,\ldots ,q_\nu \), and the faces of A by \(H_1,\ldots ,H_m\). Recall that we denote the edges of A by \(e_1,\ldots ,e_\kappa \) with the angle subtended by A at \(e_i\) denoted \(\alpha _i\) for each i. For \( 1 \le i \le \kappa \), denote the length of the edge \(e_i\) by \(|e_i|\).

Define the ‘corner regions’

$$\begin{aligned} Q_t := \cup _{i=1}^\nu B(q_i,K(K+7)r_t) \cap A; ~~~~~ Q_t^+ := \cup _{i=1}^\nu B(q_i,K (K+9)r_t) \cap A. \end{aligned}$$
(7.41)

Lemma 7.13

It is the case that \(\mathbb {P}[F_t(Q_t^+,{{\mathscr {P}}}_t) ] \rightarrow 1\) as \(t \rightarrow \infty \).

Proof

As in the proof of Lemma 7.6, we can apply Lemma 7.3, taking \(W_t= Q_t^+\), and \(\mu _t= f_0 | \cdot \cap A|\), with \(b=0\). \(\square \)

We now determine the limiting probability of coverage for any edge of A.

Lemma 7.14

Let \(i \in \{1,\ldots ,\kappa \}\). If \(\alpha _i= \alpha _1 \le \pi /2\) then

$$\begin{aligned} \lim _{t \rightarrow \infty }( \mathbb {P}[F_t(e_i{\setminus } Q_t,{{\mathscr {P}}}_t) ] ) = \exp ( - (|e_i|/(k-1)!) ( \alpha _1/32)^{1/3}3^{1-k} e^{- \beta /3} ), \end{aligned}$$

while if \(\alpha _i > \min (\alpha _1,\pi /2)\) then \(\lim _{t \rightarrow \infty } (\mathbb {P}[F_t(e_i {\setminus } Q_t, {{\mathscr {P}}}_t) ] ) = 1\).

Proof

The portions of balls \(B(x,r_t), x \in {{\mathscr {P}}}_t\) that intersect the edge \(e_i\) form a spherical Poisson Boolean model in 1 dimension with (in the notation of Lemma 7.2)

$$\begin{aligned} \lambda = f_0 (\alpha _i/2) t r_t^2; ~~~ \delta = r_t; ~~~ \alpha = \frac{\theta _3 \alpha _i/(2 \pi )}{(\alpha _i/(2\pi )) \pi } = 4/3. \end{aligned}$$

By (7.30), as \(t \rightarrow \infty \) we have \( t f_0 r_t^2 = (2 \alpha _1 \wedge \pi )^{-2/3} (t f_0)^{1/3} (\log t)^{2/3}(1+o(1)), \) so that

$$\begin{aligned} \log \lambda = (1/3) \log ( \alpha _i^3/(8(2\alpha _1 \wedge \pi )^2) ) + (1/3) \log (t f_0) + (2/3) \log \log t + o(1), \end{aligned}$$

and \(\log \log \lambda = \log \log t - \log 3 + o(1)\). Therefore

$$\begin{aligned}{} & {} \alpha \delta \lambda - \log \lambda - (k-1) \log \log \lambda = (2/3) f_0 \alpha _i t r_t^3 - (1/3) \log ( \alpha _i^3/(8(2 \alpha _1 \wedge \pi )^2))\\{} & {} \quad - (1/3) \log ( t f_0) - (2/3) \log \log t - (k-1) \log \log t + (k-1) \log 3 + o(1), \end{aligned}$$

so that by (7.30) again,

$$\begin{aligned}{} & {} \lim _{t \rightarrow \infty } ( \alpha \delta \lambda - \log \lambda - (k-1) \log \log \lambda )\\{} & {} \quad = {\left\{ \begin{array}{ll} \beta /3 - (1/3) \log ( \alpha _1/32) + (k-1) \log 3 &{} \text{ if } \alpha _i = \alpha _1 \le \pi /2\\ + \infty &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$

We then obtain the stated results by application of Lemma 7.2. \(\square \)
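The one-dimensional k-coverage check underlying the application of Lemma 7.2 here is easy to make concrete. The following is an illustrative sketch only (the function name and parameters are ours, not the paper's): a sweep over sorted interval endpoints testing whether every point of \([0,L]\) lies in at least k of the intervals \([x-\delta ,x+\delta ]\) for centres x in a given list.

```python
def covered_k_times(centres, delta, L, k):
    """Return True if every point of [0, L] lies in at least k of the
    closed intervals [x - delta, x + delta], x in `centres`.
    Centres are assumed to lie in [0, L]; a sweep over sorted interval
    endpoints tracks the coverage depth."""
    events = []
    for x in centres:
        events.append((max(x - delta, 0.0), +1))   # interval start
        events.append((min(x + delta, L), -1))     # interval end
    events.sort(key=lambda e: (e[0], -e[1]))       # starts before ends at ties
    depth, prev = 0, 0.0
    for pos, step in events:
        if pos > prev and depth < k:               # (prev, pos) has depth < k
            return False
        depth += step
        prev = pos
    return prev >= L                               # right end reached with no gap
```

Running this test on Poisson centres with intensity \(\lambda \) and the \(\delta \), k of Lemma 7.14 gives a Monte Carlo estimate of the coverage probability whose limit is displayed above.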

Recall that \(H_1,\ldots ,H_m\) are the faces of \(\partial A\). For \(t >0\) we define the ‘edge regions’

$$\begin{aligned} W_t:= & {} \cup _{i=1}^\kappa (e_i \oplus B(\textbf{o},(K+1)r_t) )\cap A ; ~~~~~ W_t^- := \cup _{i=1}^\kappa (e_i \oplus B(\textbf{o},Kr_t) ) \cap A; \nonumber \\ W_t^+:= & {} \cup _{i=1}^\kappa (e_i \oplus B(\textbf{o},(K+4)r_t) ) \cap A. \end{aligned}$$
(7.42)

The next lemma provides the limiting probability that the ‘interior regions’ of all of the faces of A are covered. Recall that \(|\partial _1 A|\) denotes the total area of all faces of A.

Lemma 7.15

Define event \(G_t:= F_t(\partial A {\setminus } W_t^+,{{\mathscr {P}}}_t).\) It is the case that

$$\begin{aligned}{} & {} \lim _{t \rightarrow \infty } ( \mathbb {P}[G_t] ) = {\left\{ \begin{array}{ll} \exp \left( - c_{3,k} |\partial _1 A| e^{- 2 \beta /3} \right) &{} {\text { if }} \alpha _1 > \pi /2, ~ {\text { or }} \alpha _1 = \pi /2, k=1\\ 1 &{} {\text { otherwise.}} \end{array}\right. }\nonumber \\ \end{aligned}$$
(7.43)

Proof

We claim that the events \(F_t(H_1 {\setminus } W_t^+ ,{{\mathscr {P}}}_t),\ldots ,F_t(H_m {\setminus } W_t^+, {{\mathscr {P}}}_t)\) are mutually independent. Indeed, for \(i,j \in \{1,\ldots ,m\}\) with \(i \ne j\), if \(x \in H_i {\setminus } W_t^+\) then \(\,\textrm{dist}(x, \partial H_i) \ge Kr_t\) so by our choice of K, \(\,\textrm{dist}(x, H_j) \ge 3 r_t\), so the \(r_t\)-neighbourhoods of \(H_1 {\setminus } W_t^+, \ldots ,\) \( H_m {\setminus } W_t^+\) are disjoint, and the independence follows. Therefore

$$\begin{aligned} \mathbb {P}[ G_t ] = \prod _{i=1}^m \mathbb {P}[ F_t(H_i {\setminus } W_t^+, {{\mathscr {P}}}_t) ]. \end{aligned}$$

By (7.31) from Lemma 7.8 and an obvious rotation, for \(1 \le i \le m\) we have

$$\begin{aligned} \lim _{t \rightarrow \infty } (\mathbb {P}[ F_t(H_i {\setminus } W_t^+,{{\mathscr {P}}}_t) ] ) = {\left\{ \begin{array}{ll} \exp \left( - c_{3,k} |H_i| e^{-2 \beta /3} \right) &{} \text{ if } \alpha _1 > \pi /2, ~ \\ &{} \text{ or } \alpha _1 = \pi /2, k=1 \\ 1 &{} \text{ otherwise, } \end{array}\right. } \end{aligned}$$

and the result follows. \(\square \)

Next we estimate the probability that there is an uncovered region near to a face of A but not touching that face, and not close to any edge of A.

Lemma 7.16

Define event \(E_t^{(2)} = F_t(\partial A \cup W^+_t ,{{\mathscr {P}}}_t) {\setminus } F_t(A {\setminus } A^{(3r_t)}, {{\mathscr {P}}}_t)\). Then \(\mathbb {P}[E_t^{(2)}] \rightarrow 0\) as \(t \rightarrow \infty \).

Proof

For \(1 \le i \le m\), let \(S_{t,i}\), respectively \(S_{t,i}^+\), denote the slab of thickness \(3 r_t\) consisting of all locations in A lying at a perpendicular distance at most \(3 r_t\) from the reduced face \(H_i {\setminus } W_t\), respectively from \(H_i {\setminus } W_t^-\).

We claim that \(S_{t,i}^+ {\setminus } S_{t,i} \subset W_t^+\), for each \(i \in \{1,\ldots ,m\}\). Indeed, given \(w \in S_{t,i}^+ {\setminus } S_{t,i}\), let z be the nearest point in \(H_i\) to w. Then \(\Vert w-z\Vert \le 3r_t\). Also \(z \notin H_i {\setminus } W_t\), so \(z \in W_t\). Therefore for some \(j \le \kappa \) we have \(\,\textrm{dist}(z,e_j) \le (K+1) r_t\), so \(\,\textrm{dist}(w,e_j) \le (K+4) r_t\), so \(w \in W_t^+\), justifying the claim.

Suppose \(E_t^{(2)} \) occurs. Let \(x \in V_t ({{\mathscr {P}}}_t) \cap A {\setminus } A^{(3 r_t)}\). Choose \(i \in \{1,\ldots ,m \}\) and \(y \in H_i\) such that \(\Vert x-y \Vert = \,\textrm{dist}(x,H_i) \le 3 r_t\). Then since \(x \notin W_t^+\), for all j we have

$$\begin{aligned} \,\textrm{dist}(y, \partial H_j) \ge \,\textrm{dist}(x,\partial H_j) - 3r_t > (K+1) r_t, \end{aligned}$$

so \(y \in H_i {\setminus } W_t\) and \(x \in S_{t,i}\). Thus \(V_t ({{\mathscr {P}}}_t) \) intersects the slab \(S_{t,i}\). However, it does not intersect the face \(H_i\), and nor does it intersect the set \(S_{t,i}^+ {\setminus } S_{t,i}\) (since by the earlier claim this set is contained in \(W_t^+\), which is fully covered). Hence by the union bound

$$\begin{aligned} \mathbb {P}[ E_t^{(2)} ] \le \sum _{i=1}^m \mathbb {P}[ F_t((H_i {\setminus } W_t^{-}) \cup (S_{t,i}^+ {\setminus } S_{t,i}), {{\mathscr {P}}}_t ) {\setminus } F_t( S_{t,i}, {{\mathscr {P}}}_t ) ]. \end{aligned}$$

By (7.32) from Lemma 7.8, along with an obvious rotation, each term in the above sum tends to zero. This gives us the result. \(\square \)

Next we estimate the probability that there is an uncovered region within distance \(Kr_t\) of a 1-dimensional edge of \(\partial A\) but not including any of that edge itself.

Lemma 7.17

It is the case that \(\mathbb {P}[F_t( \cup _{i=1}^\kappa e_i \cup Q_t^+,{{\mathscr {P}}}_t) {\setminus } F_t (W_t^+, {{\mathscr {P}}}_t) ] \rightarrow 0\) as \(t \rightarrow \infty \).

Proof

Let \(i \in \{1,\ldots , \kappa \}\).

For \(t >0\), let \(W^*_{i,t}\) denote the set of locations in A at perpendicular distance at most \((K+4)r_t\) from the reduced edge \(e_i {\setminus } Q_t\). We claim that

$$\begin{aligned} (e_i \oplus B(o,(K+4)r_t)) \cap A {\setminus } Q_t^+ \subset W_{i,t}^*. \end{aligned}$$
(7.44)

Indeed, suppose \(x \in (e_i \oplus B(o,(K+4)r_t)) \cap A {\setminus } Q_t^+\). Let \(y \in e_i\) with \(\Vert x-y\Vert = \,\textrm{dist}(x, e_i)\). Then \(\Vert x-y\Vert \le (K+4)r_t\), and since \(x \notin Q_t^+\), for all \(j \in \{1,\ldots ,\nu \}\)

$$\begin{aligned} \,\textrm{dist}(y,q_j)\ge & {} \,\textrm{dist}(x,q_j) - (K+4)r_t\\> & {} [K(K+9) - (K+4)] r_t \ge K(K+7) r_t. \end{aligned}$$

Therefore \(y \notin Q_t\), so \(x \in W^*_{i,t}\), demonstrating the claim. Hence

$$\begin{aligned}{} & {} \mathbb {P}[F_t (e_i \cup Q_t^+,{{\mathscr {P}}}_t) {\setminus } F_t( (e_i \oplus B(o,(K+4)r_t) ) \cap A ,{{\mathscr {P}}}_t) ]\\{} & {} \qquad \qquad \qquad \le \mathbb {P}[F_t(e_i,{{\mathscr {P}}}_t) {\setminus } F_t(W^*_{i,t} {\setminus } Q_t^+ ,{{\mathscr {P}}}_t)], \end{aligned}$$

and we claim that this probability tends to zero. Indeed, if \(x \in W_{i,t}^* {\setminus } Q_t^+\), then taking \(x' \in e_i\) with \(\Vert x-x'\Vert = \,\textrm{dist}(x,e_i)\), we have \(\,\textrm{dist}(x',\partial e_i) \ge \,\textrm{dist}(x,\partial e_i) - (K+4)r_t \ge K(K+7)r_t\), and so by the choice of K, for any face \(\varphi ' \) of A, other than the two faces meeting at \(e_i\), we have \(\,\textrm{dist}(x',\varphi ') > 3(K+7)r_t\), and hence \(\,\textrm{dist}(x,\varphi ') \ge (2K+17) r_t\). Then the claim follows by Lemma 7.12 and an obvious rotation. Since

$$\begin{aligned}{} & {} F_t( \cup _{i=1}^\kappa e_i \cup Q_t^+,{{\mathscr {P}}}_t) {\setminus } F_t ( W_t^+, {{\mathscr {P}}}_t) \\{} & {} \subset \cup _{i=1}^{\kappa } [F_t (e_i \cup Q_t^+,{{\mathscr {P}}}_t) {\setminus } F_t( (e_i \oplus B(o,(K+4)r_t)) \cap A ,{{\mathscr {P}}}_t) ], \end{aligned}$$

the result follows by the union bound. \(\square \)

Proof of Theorem 3.3

First we estimate the probability of event \(F_t(A^{(3r_t)},{{\mathscr {P}}}_t)^c \), that there is an uncovered region in A distant more than \(3r_t\) from \(\partial A\). We apply Lemma 7.3, with \(\mu _t = \mu \), \(W_t = A {\setminus } A^{(3r_t)}\), \(b=d\), \(a= f_0 \theta _3\), and \(c= 1/(f_0 \min (\pi , 2 \alpha _1))\). Then \((b/d) - ac = 1 - (4 \pi /(3 \min (\pi ,2 \alpha _1))) <0\), so by Lemma 7.3,

$$\begin{aligned} \lim _{t \rightarrow \infty } ( \mathbb {P}[F_t(A^{(3r_t)},{{\mathscr {P}}}_t)] ) = 1. \end{aligned}$$
(7.45)

Let \(\partial ^*A := ([(\partial A) {\setminus } W^+_t] \cup \cup _{i=1}^\kappa e_i ) {\setminus } Q_t\) (note that the definition depends on t but we omit this from the notation). This is the union of reduced faces and reduced edges of \(\partial A\).

We have the event inclusion \( F_t(A,{{\mathscr {P}}}_t) \subset F_t(\partial ^* A,{{\mathscr {P}}}_t) \), and by the union bound

$$\begin{aligned}{} & {} \mathbb {P}[ F_t(\partial ^* A,{{\mathscr {P}}}_t) {\setminus } F_t(A,{{\mathscr {P}}}_t) ] \le \mathbb {P}[F_t(Q_t^+,{{\mathscr {P}}}_t)^c] + \mathbb {P}[ F_t( \cup _{i=1}^\kappa e_i \cup Q_t^+,{{\mathscr {P}}}_t) {\setminus } F_t(W^+_t,{{\mathscr {P}}}_t) ]\nonumber \\{} & {} \quad + \mathbb {P}[ F_t(\partial A \cup W_t^+,{{\mathscr {P}}}_t) {\setminus } F_t(A {\setminus } A^{(3r_t)}, {{\mathscr {P}}}_t) ] + \mathbb {P}[ F_t(A^{(3r_t)},{{\mathscr {P}}}_t)^c ], \end{aligned}$$

and the four probabilities on the right tend to zero by Lemma 7.13, Lemma 7.17, Lemma 7.16 and (7.45) respectively. Therefore

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A,{{\mathscr {P}}}_t) ] = \lim _{t \rightarrow \infty } \mathbb {P}[F_t(\partial ^* A, {{\mathscr {P}}}_t)], \end{aligned}$$
(7.46)

provided the limit on the right exists.

Next we claim that the events \(F_t(e_i {\setminus } Q_t,{{\mathscr {P}}}_t)\), \(1 \le i \le \kappa \), are mutually independent. Indeed, for distinct \(i,j \in \{1,\ldots ,\kappa \}\), if \(x \in e_i {\setminus } Q_t\) then \(\,\textrm{dist}(x, \partial e_i) \ge K(K+7)r_t\) by (7.41), so by our choice of K, \(\,\textrm{dist}(x, e_j) \ge 3 r_t\) for t sufficiently large. Therefore the \(r_t\)-neighbourhoods of \(e_1 {\setminus } Q_t, \ldots , e_\kappa {\setminus } Q_t\) are disjoint, and the independence follows. Hence by Lemma 7.14,

$$\begin{aligned}{} & {} \lim _{t \rightarrow \infty } (\mathbb {P}[ F_t(\cup _{i=1}^\kappa e_i {\setminus } Q_t, {{\mathscr {P}}}_t) ] ) \nonumber \\{} & {} \quad = {\left\{ \begin{array}{ll} \exp \left( - {\sum }_{\{i: \alpha _i = \alpha _1\}} (|e_i|/(k-1)!) (\alpha _1/32)^{1/3} 3^{1-k} e^{- \beta /3} \right) &{}~ {\text { if }} ~ \alpha _1 \le \pi /2\\ 1 &{} ~ {\text { otherwise}}. \end{array}\right. }\nonumber \\ ~~~ \end{aligned}$$
(7.47)

Observe next that by the definition of \(W_t^+\) at (7.42), the set \(\partial A {\setminus } W_t^+\) is at Euclidean distance at least \(K r_t\) from all of the edges \(e_i\). Therefore the events \(F_t( \cup _{i=1}^\kappa e_i {\setminus } Q_t, {{\mathscr {P}}}_t) \) and \( F_t (\partial A {\setminus } W_t^+,{{\mathscr {P}}}_t) \) are independent. Hence, using (7.46), we have

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A,{{\mathscr {P}}}_t) ] = \lim _{t \rightarrow \infty } \mathbb {P}[F_t( \cup _{i=1}^\kappa e_i {\setminus } Q_t, {{\mathscr {P}}}_t) ] \times \lim _{t \rightarrow \infty } \mathbb {P}[ F_t (\partial A {\setminus } W_t^+,{{\mathscr {P}}}_t) ], \end{aligned}$$

provided the two limits on the right exist. But we know from (7.47) and Lemma 7.15 that these two limits do indeed exist, and what their values are. Substituting these two values, we obtain the result stated for \(R'_{t,k}\) in (3.5). Then we obtain the result stated for \(R_{n,k}\) in (3.5) by applying Lemma 7.1. \(\square \)
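To illustrate the object whose limit distribution Theorem 3.3 describes, here is a numerical sketch (not part of the proof; all names are ours): a grid approximation to the coverage threshold \(R_n\) for \(A = [0,1]^2\), computed as the Hausdorff distance from the sample to A, as noted in the introduction.

```python
import numpy as np

def coverage_threshold(points, grid_n=200):
    """Smallest r such that the balls B(x, r), x in `points`, cover [0,1]^2,
    approximated on a grid_n x grid_n grid of evaluation points.  Up to the
    grid resolution this is the Hausdorff distance from the point set to
    the unit square."""
    g = (np.arange(grid_n) + 0.5) / grid_n          # grid cell centres
    gx, gy = np.meshgrid(g, g)
    targets = np.column_stack([gx.ravel(), gy.ravel()])
    best = np.full(len(targets), np.inf)
    for p in points:                                # running nearest-point distance
        d = np.hypot(targets[:, 0] - p[0], targets[:, 1] - p[1])
        np.minimum(best, d, out=best)
    return float(best.max())

# e.g. n = 500 uniform points; R_n is small but boundary effects inflate it
rng = np.random.default_rng(0)
r500 = coverage_threshold(rng.random((500, 2)))
```

For a single point at the centre of the square the threshold is the half-diagonal \(\sqrt{1/2}\), which the grid approximation recovers to within the grid resolution.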

7.5 Proof of Theorem 3.1: first steps

In this subsection we assume that \(d \ge 2\) and A has \(C^2\) boundary. Let \(\zeta \in \mathbb {R}\), and assume that \((r_t)_{t >0}\) satisfies (7.15). Let \(k \in \mathbb {N}\). Given any point process \(\mathscr {X}\) in \(\mathbb {R}^d\), and any \(t >0\), define the ‘vacant’ region \(V_t(\mathscr {X})\) by (7.13), and given also \(D \subset \mathbb {R}^d\), define \(F_t(D,\mathscr {X})\), the event that D is ‘fully covered’ k times, by (7.14).

Given \(x \in \partial A\) we can express \(\partial A\) locally in a neighbourhood of x, after a rotation, as the graph of a \(C^2\) function with zero derivative at x. As outlined earlier in Sect. 5, we shall approximate to that function by the graph of a piecewise affine function (in \(d=2\), a piecewise linear function).

For each \(x \in \partial A\), we can find an open neighbourhood \({{\mathscr {N}}}_x\) of x, a number \(r(x) >0\) such that \(B(x, 3r(x)) \subset {{\mathscr {N}}}_x\) and a rotation \(\rho _x\) about x such that \(\rho _x(\partial A \cap {{\mathscr {N}}}_x)\) is the graph of a real-valued \(C^2\) function f defined on an open ball \(D \subset \mathbb {R}^{d-1}\), with \(\langle f'(u),e\rangle \le 1/9\) for all \(u \in D\) and all unit vectors e in \(\mathbb {R}^{d-1}\), where \(\langle \cdot ,\cdot \rangle \) denotes the Euclidean inner product in \(\mathbb {R}^{d-1}\) and \(f'(u):= (\partial _1 f(u),\partial _2f(u), \ldots , \partial _{d-1} f(u) )\) is the derivative of f at u. Moreover, by taking a smaller neighbourhood if necessary, we can also assume that there exists \(\varepsilon >0\) and \(a \in \mathbb {R}\) such that \(f(u) \in [a+ \varepsilon ,a+ 2 \varepsilon ]\) for all \(u \in D\) and also \(\rho _x(A) \cap (D \times [a,a + 3 \varepsilon ]) = \{(u,z): u \in D, a \le z \le f(u)\}\).

By a compactness argument, we can and do take a finite collection of points \(x_1,\ldots , x_J \in \partial A\) such that

$$\begin{aligned} \partial A \subset \cup _{j=1}^J B(x_j,r(x_j)). \end{aligned}$$
(7.48)

Then there are constants \(\varepsilon _j >0\), and rigid motions \(\rho _j, 1 \le j \le J\), such that for each j the set \(\rho _j(\partial A \cap {{\mathscr {N}}}_{x_j}) \) is the graph of a \(C^2 \) function \(f_j\) defined on a ball \(I_j\) in \(\mathbb {R}^{d-1}\), with \(\langle f'_j(u), e \rangle \le 1/9\) for all \(u \in I_j\) and all unit vectors \(e \in \mathbb {R}^{d-1}\), and also with \(\varepsilon _j \le f_j(u) \le 2 \varepsilon _j \) for all \(u \in I_j\) and \(\rho _j(A) \cap (I_j \times [0,3\varepsilon _j]) = \{(u,z):u \in I_j, 0 \le z \le f_j(u)\}\).

Let \(\varGamma \subset \partial A\) be a closed set such that \(\varGamma \subset B(x_j,r(x_j))\) for some \(j \in \{1,\ldots ,J\}\), and such that \(\kappa (\partial \varGamma ,r) = O(r^{2-d})\) as \(r \downarrow 0\), where in this section we set \(\partial \varGamma : = \varGamma \cap \overline{\partial A {\setminus } \varGamma } \), the boundary of \(\varGamma \) relative to \(\partial A\) (the \(\kappa \) notation was given at (6.2)). To simplify notation we shall assume that \(\varGamma \subset B(x_1,r(x_1))\), and moreover that \(\rho _1\) is the identity map. Then \(\varGamma = \{(u,f_1(u)): u \in U\}\) for some bounded set \(U \subset \mathbb {R}^{d-1}\). Also, writing \(\phi (\cdot )\) for \(f_1(\cdot )\) from now on, we assume

$$\begin{aligned} \phi (U) \subset [\varepsilon _1,2 \varepsilon _1] \end{aligned}$$
(7.49)

and

$$\begin{aligned} A \cap (U \times [0,3 \varepsilon _1]) = \{(u,z): u \in U, 0 \le z \le \phi (u) \}. \end{aligned}$$
(7.50)

Note that for any \(u,v \in U\), by the mean value theorem we have for some \(w \in [ u,v]\) that

$$\begin{aligned} |\phi (v) - \phi (u) | = | \langle v-u, \phi '(w)\rangle | \le (1/9) \Vert v-u\Vert . \end{aligned}$$
(7.51)

Choose (and keep fixed for the rest of this paper) constants \( \gamma _0, \gamma , \gamma '\) with

$$\begin{aligned} 1/(2d)< \gamma _0< \gamma< \gamma ' < 1/d. \end{aligned}$$
(7.52)

These exponents will be used to give length scales \(t^{-\gamma }\), \(t^{-\gamma '}\) and \(t^{-\gamma _0}\) that differ from each other and from the scale of \(r_t\).

When \(d=2\), we approximate to \(\varGamma \) by a polygonal line \(\varGamma _t\) with edge-lengths that are \(\varTheta (t^{-\gamma })\). When \(d=3\), we approximate to \(\varGamma \) by a polyhedral surface \(\varGamma _t\) with all of its vertices in \(\partial A \), and face diameters that are \(\varTheta (t^{-\gamma })\), taking all the faces of \(\varGamma _t\) to be triangles. For general d, we wish to approximate to \(\varGamma \) by a set \(\varGamma _t\) given by a union of the \((d-1)\)-faces in a certain simplicial complex of dimension \(d-1\) embedded in \(\mathbb {R}^d\).

To do this, divide \(\mathbb {R}^{d-1}\) into cubes of dimension \(d-1\) and side \(t^{-\gamma }\), and divide each of these cubes into \((d-1)!\) simplices (we take these simplices to be closed). Let \(U_t\) be the union of all those simplices in the resulting tessellation of \(\mathbb {R}^{d-1}\) that are contained within U, and let \(U_t^-\) be the union of those simplices in the tessellation that are contained within \(U^{(3dt^{-\gamma })}\), where for \(r>0\) we set \(U^{(r)}\) to be the set of \(x \in U\) at a Euclidean distance more than r from \(\mathbb {R}^{d-1} {\setminus } U\). If \(d=2\), the simplices are just intervals. See Fig. 3.
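The subdivision of each cube into \((d-1)!\) simplices can be realised by the standard Kuhn (Freudenthal) triangulation, in which the simplex associated with a permutation p of the coordinates is the set where the coordinates are ordered according to p. A dependency-free illustrative sketch (function names ours; m plays the role of \(d-1\)):

```python
import itertools
import math

def kuhn_simplices(m):
    """Split [0,1]^m into m! closed Kuhn simplices, one per permutation p:
    {x : x_{p(0)} >= x_{p(1)} >= ... >= x_{p(m-1)}}.  Each simplex is
    returned as its m+1 vertices: start at the origin and switch the
    coordinates p(0), p(1), ... to 1 one at a time."""
    simplices = []
    for p in itertools.permutations(range(m)):
        v = [0.0] * m
        verts = [tuple(v)]
        for i in p:
            v[i] = 1.0
            verts.append(tuple(v))
        simplices.append(verts)
    return simplices

def simplex_volume(verts):
    """Volume of a simplex from its m+1 vertices: |det(edge matrix)| / m!,
    with the determinant computed by Gaussian elimination."""
    m = len(verts) - 1
    a = [[verts[k + 1][j] - verts[0][j] for j in range(m)] for k in range(m)]
    det = 1.0
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        if a[piv][col] == 0:
            return 0.0
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
    return abs(det) / math.factorial(m)
```

Each of the m! simplices has volume 1/m!, so the volumes sum to that of the cube.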

Let \(\sigma ^-(t)\), respectively \(\sigma (t)\), denote the number of simplices making up \(U_t^-\), respectively \(U_t\). Choose \(t_0 >0\) such that \(\sigma ^-(t) >0\) for all \(t \ge t_0\).

Fig. 3

Example in \(d=3\). The outer crescent-shaped region is U, while the inner crescent is \(U^{(3dt^{-\gamma })}\) (the annular region is not to scale: its thickness should be 9 times the length of the shortest edges of the polygons). The outer polygon is \(U_t\), while the inner polygon is \(U_t^-\)

Let \(\psi _t: U_t \rightarrow \mathbb {R}\) be the function that is affine on each of the simplices making up \(U_t\), and agrees with the function \(\phi \) on each of the vertices of these simplices. Our approximating surface (or polygonal line if \(d=2\)) will be defined by \(\varGamma _t := \{(x, \psi _t(x)-K t^{-2 \gamma }): x \in U^-_t\}\), with the constant K given by the following lemma. This lemma uses Taylor expansion to show that \(\psi _t\) is a good approximation to \(\phi \).

Lemma 7.18

(Polytopal approximation of \(\partial A\)) Set \(K := \sup _{t \ge t_0, u \in U_t} ( t^{2 \gamma } |\phi (u) - \psi _t(u) |)\). Then \(K < \infty \).

Proof

Set \(K_0 := d^3 \sup \{ |\phi ''_{\ell m}(v)| : \ell , m \in \{1,\ldots , d-1\}, v \in U\}\), i.e. \(d^3\) times the supremum over all \(v \in U\) of the max-norm of the Hessian of \(\phi \) at v. Then \(K_0 < \infty \).

Given \(t \ge t_0\), denote the simplices making up \(U_t\) by \(T_{t,1},\ldots ,T_{t,\sigma (t)}\). Let \(i \in \{1,\ldots ,\sigma (t)\}\). Let \(u_0,u_1,\ldots , u_{d-1} \in \mathbb {R}^{d-1}\) be the vertices of \(T_{t,i}\).

Let \(u \in T_{t,i}\). Then u is a convex combination of \(u_0,\ldots , u_{d-1}\); write \(u= u_0 + \sum _{j=1}^{d-1} \alpha _j(u_j - u_0) \) with \(\alpha _j \ge 0\) for each j and \(\sum _{j=1}^{d-1} \alpha _j \le 1\). Set \(v_0 = u_0\), and for \(1 \le k \le d-1\), set \(v_k := u_0 + \sum _{j=1}^k \alpha _j(u_j-u_0)\). Then by the mean value theorem, for \(1 \le k \le d-1\), there exists \(w_k \in T_{t,i}\) (in fact, \(w_k \in [v_{k-1}, v_k]\)), such that

$$\begin{aligned} \phi (v_k) - \phi (v_{k-1}) = \langle v_k-v_{k-1} , \phi '(w_k) \rangle = \alpha _k \langle u_k-u_0 , \phi '(w_k) \rangle . \end{aligned}$$

Also, since \(\psi _t\) is affine on \(T_{t,i}\) and agrees with \(\phi \) on \(u_0, u_1, \ldots , u_{d-1}\), there exists \(\tilde{w}_k \in T_{t,i}\) (in fact, \(\tilde{w}_k \in [u_0,u_k]\)) such that

$$\begin{aligned} \psi _t (v_k) - \psi _t (v_{k-1})= & {} \alpha _k (\psi _t(u_k) - \psi _t(u_0) )\\= & {} \alpha _k (\phi (u_k) - \phi (u_0) ) = \alpha _k \langle u_k-u_0 , \phi '(\tilde{w}_k) \rangle . \end{aligned}$$

Hence,

$$\begin{aligned} \phi (v_k) - \phi (v_{k-1}) - (\psi _t(v_k) - \psi _t(v_{k-1}) ) = \alpha _k \langle u_k-u_0, \phi '(w_k) - \phi '(\tilde{w}_k) \rangle . \end{aligned}$$

By the mean value theorem again, each component of \(\phi '(w_k) -\phi '(\tilde{w}_k)\) is bounded by \( K_0 d^{-2}\Vert \tilde{w}_k- w_k\Vert . \) Therefore \(\Vert \phi '(w_k) - \phi '(\tilde{w}_k)\Vert \le K_0 d^{-1} \Vert \tilde{w}_k - w_k\Vert \). Since \(\textrm{diam}(T_{t,i}) = (d-1)^{1/2} t^{-\gamma }\), and since \(u = v_{d-1}\) and \(\phi (u_0) = \psi _t(u_0)\),

$$\begin{aligned} |\phi (u)- \psi _t(u)|{} & {} = \left| \sum _{k=1}^{d-1} [\phi (v_k) - \phi (v_{k-1} ) - (\psi _t(v_k) - \psi _t(v_{k-1} ))] \right| \\{} & {} \le \sum _{k=1}^{d-1} \alpha _k \Vert u_k - u_0\Vert K_0 d^{-1} \Vert \tilde{w}_k - w_k\Vert \\{} & {} \le \textrm{diam}(T_{t,i})^2 K_0 d^{-1} \le K_0 t^{-2 \gamma }, \end{aligned}$$

and therefore \(K \le K_0 < \infty \), as required. \(\square \)
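Lemma 7.18 is a multidimensional version of the familiar fact that piecewise-linear interpolation of a \(C^2\) function on a mesh of spacing h has uniform error \(O(h^2)\); with \(h = t^{-\gamma }\) this gives the \(t^{-2\gamma }\) bound. A one-dimensional numerical sketch of this second-order behaviour (illustrative only; names ours):

```python
import math

def pl_interpolation_error(phi, a, b, h, fine=2000):
    """Max error of the piecewise-linear interpolant of phi on a mesh of
    spacing h over [a, b], measured on a fine evaluation grid."""
    n = int(round((b - a) / h))
    knots = [a + i * h for i in range(n + 1)]
    vals = [phi(x) for x in knots]
    worst = 0.0
    for j in range(fine + 1):
        x = a + (b - a) * j / fine
        i = min(int((x - a) / h), n - 1)        # mesh cell containing x
        t = (x - knots[i]) / h
        interp = (1 - t) * vals[i] + t * vals[i + 1]
        worst = max(worst, abs(phi(x) - interp))
    return worst

# halving h should divide the error by about 4 (second-order accuracy)
e1 = pl_interpolation_error(math.sin, 0.0, 1.0, 0.1)
e2 = pl_interpolation_error(math.sin, 0.0, 1.0, 0.05)
```

The observed ratio e1/e2 is close to 4, the signature of an \(O(h^2)\) error, just as in the lemma the error constant K is controlled by the Hessian of \(\phi \).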

We now subtract a constant from \(\psi _t\) to obtain a piecewise affine function \(\phi _t\) that approximates \(\phi \) from below. For \(t \ge t_0\) and \(u \in U_t\), define \(\phi _t(u) := \psi _t(u) - K t^{-2 \gamma }\), with K given by Lemma 7.18. Then for all \(t \ge t_0, u \in U_t\) we have \(|\psi _t(u)-\phi (u)|\le Kt^{-2\gamma }\) so that

$$\begin{aligned} \phi _t(u) \le \phi (u) \le \phi _t(u) + 2K t^{-2 \gamma }. \end{aligned}$$
(7.53)

Define the set \(\varGamma _t: = \{(u,\phi _t(u)): u \in U_t^-\}\). We refer to each \((d-1)\)-dimensional face of \(\varGamma _t\) (given by the graph of \(\phi _t\) restricted to one of the simplices in our triangulation of \(\mathbb {R}^{d-1}\)) as simply a face of \(\varGamma _t\). Denote these faces of \(\varGamma _t\) by \(H_{t,1}, \ldots , H_{t,\sigma ^-(t)}\). The number of faces, \(\sigma ^-(t)\), is \(\varTheta (t^{(d-1)\gamma })\) as \(t \rightarrow \infty \). The perimeter (i.e., the \((d-2)\)-dimensional Hausdorff measure of the boundary) of each individual face is \( \varTheta (t^{-(d-2)\gamma })\).

For \(t \ge t_0\), define subsets \(A_t,A_t^-, {\tilde{A}}_t, A_t^{*}, A_t^{**}\) of \( \mathbb {R}^d\) and Poisson processes \({{\mathscr {P}}}'_t\) and \(\tilde{{{\mathscr {P}}}}_t\) in \(\mathbb {R}^d\) by

$$\begin{aligned}{} & {} A_t := \{(u,z): u \in U_t, 0 \le z \le \phi (u)\}, ~~~ {\tilde{A}}_t := \{(u,z): u \in U_t, 0 \le z \le \phi _t(u)\}, \nonumber \\{} & {} A_t^- := \{(u,z): u \in U_t^-, 0 \le z \le \phi (u)\}, ~~~~~ ~~~~~ \nonumber \\{} & {} A_t^* := \{(u,z): u \in U_t^-, \phi _t(u) - (3/2) r_t \le z \le \phi (u)\}, \nonumber \\{} & {} A_t^{**} := \{(u,z): u \in U_t^-, \phi _t(u) - (3/2) r_t \le z \le \phi _t(u)\}, \nonumber \\{} & {} {{\mathscr {P}}}'_t := {{\mathscr {P}}}_t \cap A_t, ~~~ \tilde{{{\mathscr {P}}}}_t := {{\mathscr {P}}}_t \cap {\tilde{A}}_t. \end{aligned}$$
(7.54)

Thus \(A_t\) is a ‘thick slice’ of A near the boundary region \(\varGamma \), \({\tilde{A}}_t\) is an approximating region having \(\varGamma _t\) as its upper boundary, and \(A_t^{*}\), \(A_t^{**}\) are ‘thin slices’ of A also having \(\varGamma \), respectively \(\varGamma _t\), as upper boundary. By (7.53), (7.49) and (7.50), \(A_t^{**} \subset A_t^* \subset A_t^- \subset A_t \subset A\), and \(A_t^{**} \subset {\tilde{A}}_t \subset A_t\). The rest of this subsection, and the next subsection, are devoted to proving the following intermediate step towards a proof of Theorem 3.1.

Proposition 7.19

(Limiting coverage probability for approximating polytopal surface) It is the case that \( \lim _{t \rightarrow \infty } \mathbb {P}[F_t(A_t^{**},\tilde{{{\mathscr {P}}}}_t) ] = \exp (- c_{d,k} |\varGamma | e^{- \zeta } ), \) where \(|\varGamma |\) denotes the \((d-1)\)-dimensional Hausdorff measure of \(\varGamma \).

The following corollary of Lemma 7.18 is a first step towards proving this.

Lemma 7.20

(a) It is the case that \(|A_t {\setminus } {\tilde{A}}_t| = O(t^{-2 \gamma })\) as \(t \rightarrow \infty \).

(b) Let K be as given in Lemma 7.18. Then for all \(t \ge t_0\) and \(x \in U_t^{(r_t)} \times \mathbb {R}\), \(|B(x,r_t) \cap A_t {\setminus } {\tilde{A}}_t| \le 2K \theta _{d-1} r^{d-1}_t t^{- 2 \gamma }\).

Proof

Since \(|A_t {\setminus } {\tilde{A}}_t | = \int _{U_t} (\phi (u) - \phi _t(u)) du\), where this is a \((d-1)\)-dimensional Lebesgue integral, part (a) comes from (7.53).

For (b), let \(x \in U_t^{(r_t)} \times \mathbb {R}\), and let \(u \in U_t^{(r_t)}\) be the projection of x onto the first \(d-1\) coordinates. Then if \(y \in B(x,r_t) \cap A_t {\setminus } {\tilde{A}}_t\), we have \(y = (v,s)\) with \(\Vert v-u\Vert \le r_t\) and \(\phi _t(v) < s \le \phi (v)\). Therefore using (7.53) yields

$$\begin{aligned} |B(x,r_t) \cap A_t {\setminus } {\tilde{A}}_t| \le \int _{B_{(d-1)}(u,r_t)} (\phi (v) - \phi _t(v)) dv \le 2K \theta _{d-1} t^{-2\gamma } r_t^{d-1}, \end{aligned}$$

where the integral is a \((d-1)\)-dimensional Lebesgue integral. This gives part (b). \(\square \)

The next lemma says that small balls centred in \(A_t^*\), i.e. in or very near to \({\tilde{A}}_t\), have almost half of their volume in \({\tilde{A}}_t\).

Lemma 7.21

Let \(\varepsilon >0\). Then for all large enough t, all \(x \in A_t^*\), and all \(s \in [r_t/2,r_t]\), we have \(|B(x,s) \cap {\tilde{A}}_t|> (1- \varepsilon ) (\theta _d/2) s^d\).

Proof

For all large enough t, all \(x \in A_t^*\) and \(s \in [r_t/2,r_t]\), we have \(B(x,s )\cap A \subset A_t\), so \(B(x,s) \cap A_t = B(x,s) \cap A\), and hence by Lemma 6.9 and Lemma 7.20(b),

$$\begin{aligned} |B(x,s) \cap {\tilde{A}}_t|= & {} |B(x,s) \cap A_t | - |B(x,s ) \cap A_t {\setminus } {\tilde{A}}_t | \\\ge & {} (1 - \varepsilon /2) (\theta _d /2) s^d - O( r_t^{d-1} t^{-2 \gamma }). \end{aligned}$$

Since \(t^{-2 \gamma } = o(r_t)\), this gives us the result. \(\square \)
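In \(d=3\) the half-volume assertion of Lemma 7.21 can be checked against the exact spherical-cap formula. The following sketch (illustrative, not from the paper; names ours) computes the volume of \(B(x,s)\) inside a closed half-space when the centre sits at signed height h above the bounding plane:

```python
import math

def ball_halfspace_volume(s, h):
    """Volume of B(x, s) ∩ {z <= 0} in R^3 when the centre x is at signed
    height h above the plane z = 0 (h < 0: centre inside the half-space).
    The part below the plane is a spherical cap of height s - h, with
    volume pi * c^2 * (3s - c) / 3 for cap height c."""
    if h <= -s:
        return (4.0 / 3.0) * math.pi * s ** 3      # ball entirely inside
    if h >= s:
        return 0.0                                  # ball entirely outside
    c = s - h                                       # cap height
    return math.pi * c * c * (3 * s - c) / 3.0
```

At \(h=0\) this is exactly half the ball, and for h of order \(t^{-2\gamma } = o(r_t)\), as for points of \(A_t^*\) lying just above \(\varGamma _t\), the deficit below half is of relative order h/s, matching the \((1-\varepsilon )\) factor in the lemma.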

Recall that \(\mathbb {H}\) and \({{\mathscr {U}}}_t\) were defined just before Lemma 7.4. The next lemma provides a bound on the probability that a region of diameter \(O(r_t)\) within A or \(A_t^*\) is not fully covered. The bound is relatively crude, but very useful for dealing with ‘exceptional’ regions such as those near the boundaries of faces in the polytopal approximation.

Lemma 7.22

Let \(\delta \in (0,1)\), \(K_1 >0\). Then as \(t \rightarrow \infty \),

$$\begin{aligned}{} & {} \sup _{z \in \mathbb {R}^d} \mathbb {P}[ F_t(B(z,K_1r_t) \cap A_t^* , \tilde{{{\mathscr {P}}}}_t)^c ] = O(t^{\delta - (d-1)/d}), \end{aligned}$$
(7.55)
$$\begin{aligned}{} & {} \sup _{z \in \mathbb {R}^d} \mathbb {P}[ F_t(B(z,K_1r_t) \cap \mathbb {H}, {{\mathscr {U}}}_t )^c ] = O(t^{\delta - (d-1)/d}), \end{aligned}$$
(7.56)

and

$$\begin{aligned} \sup _{z \in \mathbb {R}^d} \mathbb {P}[ F_t(B(z,K_1r_t) \cap A, {{\mathscr {P}}}_t )^c ] = O(t^{\delta - (d-1)/d}). \end{aligned}$$
(7.57)

Proof

For (7.55), it suffices to prove, for any \((z_t)_{t >0}\) with \(z_t \in \mathbb {R}^d \) for each t, that

$$\begin{aligned} \mathbb {P}[ F_t(B(z_t,K_1r_t) \cap A_t^*, \tilde{{{\mathscr {P}}}}_t )^c ] = O(t^{\delta - (d-1)/d}) ~~~ {\text { as }} ~ t \rightarrow \infty . \end{aligned}$$
(7.58)

To see this, we apply Lemma 7.3, taking \(\mu _t= f_0 | \cdot \cap {\tilde{A}}_t|\), taking \(W_t= B(z_t,K_1 r_t) \cap A_t^*\), with \(c= 2 (d-1)/(d \theta _d f_0)\) (using (7.15)), \(b=0\), and \(a= (\theta _d f_0/2)(1-\delta )\) (using Lemma 7.21). Then \((b/d)- ac = - (1- \delta )(d-1)/d\). Application of Lemma 7.3, taking \(\varepsilon = \delta /d\), shows that (7.58) holds, and hence (7.55).

The proofs of (7.56) and (7.57) are similar; for (7.57) we have to use (6.9) directly, rather than using Lemma 7.21. We omit the details. \(\square \)

Let \(\partial _{d-2} \varGamma _t := \cup _{i=1}^{\sigma ^-(t)} \partial _{d-2} H_{t,i}\), the union of all \((d-2)\)-dimensional faces in the boundaries of the faces making up \(\varGamma _t\) (the \(H_{t,i}\) were defined just after (7.53)). Recall from (7.52) that \(\gamma ' \in (\gamma ,1/d)\). For each \(t >0\) define the sets \(Q_t, Q^+_t \subset \mathbb {R}^d\) by

$$\begin{aligned} Q_t := ( \partial _{d-2} \varGamma _t \oplus B(o,7r_t)) ; ~~~~~~~ Q^+_t := (\partial _{d-2} \varGamma _t \oplus B(o,8dt^{- \gamma '})) \cap A_t^{*}. \qquad \end{aligned}$$
(7.59)

Thus \(Q_t^+\) is a region near the corners of our polygon approximating \(\partial A\) (if \(d=2\)) or near the boundaries of the faces of our polytopal surface approximating \(\partial A\) (if \(d \ge 3\)). In the next lemma we show that \(Q_t^+\) is fully covered with high probability.

Lemma 7.23

It is the case that \(\mathbb {P}[F_t( Q^+_t, \tilde{{{\mathscr {P}}}}_t)] \rightarrow 1\) as \(t \rightarrow \infty \).

Proof

Let \(\varepsilon \in (0, \gamma '- \gamma )\). For each face \(H_{t,i}\) of \(\varGamma _t\), \(1 \le i \le \sigma ^-(t)\), we claim that we can take \(x_{i,1},\ldots , x_{i,k_{t,i}} \in H_{t,i}\) with \(\max _{1 \le i \le \sigma ^-(t)} k_{t,i} = O(t^{-\gamma '} t^{-(d-2)\gamma } /r_t^{d-1} )\), such that

$$\begin{aligned} ( (\partial _{d-2} H_{t,i}) \oplus B(o, 9dt^{-\gamma '}) ) \cap H_{t,i} \subset \cup _{j=1}^{k_{t,i}} B(x_{i,j},r_t). \end{aligned}$$
(7.60)

Indeed, we can cover \(\partial _{d-2} H_{t,i}\) by \(O(t^{(d-2)(\gamma '-\gamma )})\) balls of radius \(t^{-\gamma '}\), denoted \(B_{i,\ell }^{(0)}\) say. Replace each ball \(B_{i,\ell }^{(0)}\) with a ball \(B'_{i,\ell }\) with the same centre as \(B_{i,\ell }^{(0)}\) and with radius \( 10 d t^{-\gamma '}\). Then cover \(B'_{i,\ell } \cap H_{t,i}\) by \(O((t^{-\gamma '} /r_t)^{d-1})\) balls of radius \(r_t\). Every point in the set on the left side of (7.60) lies in one of these balls of radius \(r_t\), and the claim follows.
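Multiplying the two covering counts gives the claimed bound on \(k_{t,i}\), uniformly in i:

$$\begin{aligned} k_{t,i} = O\big( t^{(d-2)(\gamma ' - \gamma )} \cdot (t^{-\gamma '}/r_t)^{d-1} \big) = O\big( t^{-\gamma ' - (d-2)\gamma } r_t^{1-d} \big), \end{aligned}$$

since the exponent of t is \((d-2)\gamma ' - (d-2)\gamma - (d-1)\gamma ' = -\gamma ' - (d-2)\gamma \).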

Given \(x \in Q_t^+\), write \(x=(u,h)\) with \(u \in U_t^-\), \(h \in \mathbb {R}\), and set \(y := (u, \phi _t(u))\). Then \(\Vert y-x\Vert \le 2 r_t\) and there exists i with \(y \in H_{t,i}\). Take such an i. Then \(\,\textrm{dist}(y, \partial _{d-2}H_{t,i})= \,\textrm{dist}(y, \partial _{d-2}\varGamma _t) \le 9d t^{- \gamma '}\), so \(\Vert y- x_{i,j} \Vert \le r_t\) for some \(j \le k_{t,i}\) by (7.60). Therefore \(Q_t^+ \subset \cup _{i=1}^{\sigma ^-(t)} \cup _{j=1}^{k_{t,i}} B(x_{i,j},3r_t)\), so by the union bound,

$$\begin{aligned} \mathbb {P}[ F_t( Q^+_t , \tilde{{{\mathscr {P}}}}_t) ^c] \le \sum _{i=1}^{\sigma ^-(t)} \sum _{j=1}^{k_{t,i}} \mathbb {P}[ F_t( B(x_{i,j}, 3 r_t) \cap A_t^* , \tilde{{{\mathscr {P}}}}_t )^c]. \end{aligned}$$

By Lemma 7.22 and the fact that \(\sigma ^-(t) = O(t^{(d-1) \gamma })\), the estimate in the last display is \( O(t^{\gamma - \gamma '} r_t^{1-d} t^{\varepsilon - (d-1)/d}) = O(t^{\varepsilon + \gamma -\gamma '}), \) where we use that \(r_t^{1-d} t^{-(d-1)/d} = (t^{1/d} r_t)^{1-d} = O(1)\) by (7.15). Since \(\varepsilon < \gamma ' - \gamma \), this tends to zero. \(\square \)

7.6 Induced coverage process and proof of Proposition 7.19

We shall conclude the proof of Proposition 7.19 by means of a device we refer to as the induced coverage process. This is obtained by taking the parts of \({\tilde{A}}_t\) near the flat parts of \(\varGamma _t\), along with any Poisson points therein, and rearranging them into a flat region of macroscopic size. The definition of this is somewhat simpler for \(d=2\), so for presentational purposes, we shall consider this case first.

Suppose that \(d=2\). The closure of the set \(\varGamma _t {\setminus } Q_t\) is a union of closed line segments (‘intervals’) with total length \(|\varGamma _t| - 14 r_t \sigma ^-(t)\), which tends to \(| \varGamma |\) as \(t \rightarrow \infty \) because \( \sigma ^-(t) = O(t^{\gamma })\) and \(\gamma < 1/2\). Denote these intervals by \(I_{t,1},\ldots ,I_{t,\sigma ^-(t)}\), taking \(I_{t,i}\) to be a sub-interval of \(H_{t,i}\) for \(1 \le i \le \sigma ^-(t)\).
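To spell out the stated convergence: since \(r_t = O(((\log t)/t)^{1/2})\) for \(d=2\) by (7.15), and \(\sigma ^-(t) = O(t^{\gamma })\) with \(\gamma < 1/2\),

$$\begin{aligned} r_t \sigma ^-(t) = O\big( t^{\gamma - (1/2)} (\log t)^{1/2} \big) \rightarrow 0, \end{aligned}$$

while \(|\varGamma _t| \rightarrow |\varGamma |\), so that \(|\varGamma _t| - 14 r_t \sigma ^-(t) \rightarrow |\varGamma |\).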

For \(1 \le i \le \sigma ^-(t)\) define the closed rectangular strip (which we call a ‘block’) \(S_{t,i}\) (respectively \(S^+_{t,i}\)) of dimensions \(|I_{t,i} | \times 2 r_t\) (resp. \(|I_{t,i}| \times 4 r_t\)) to consist of those locations inside \(A_t\) lying within perpendicular distance at most \(2r_t\) (resp., at most \(4 r_t\)) of \(I_{t,i}\). That is,

$$\begin{aligned} S_{t,i} := I_{t,i} \oplus [o,2 r_t e_{t,i}], ~~~~ S^+_{t,i} := I_{t,i} \oplus [o,4 r_t e_{t,i}], \end{aligned}$$

where \(e_{t,i}\) is a unit vector perpendicular to \(I_{t,i}\) pointing inwards into \({\tilde{A}}_t\) from \(I_{t,i}\). Define the long interval \(D_t := [0, |\varGamma _t| - 14 r_t \sigma ^-(t) ]\), and the long horizontal rectangular strips

$$\begin{aligned} S_t:= D_t \times [0,2 r_t]; ~~~~~~ S^+_t:= D_t \times [0, 4r_t]. \end{aligned}$$

Denote the lower boundary of \(S_t\) (that is, the set \(D_t \times \{0\}\)) by \(L_t\).

We shall verify in Lemma 7.25 that \(S_{t,1}^+,\ldots , S_{t,\sigma ^-(t)}^+\) are disjoint. Now choose rigid motions \(\rho _{t,i}\) of the plane, \(1 \le i \le \sigma ^-(t)\), such that after applications of these rigid motions the blocks \(S_{t,i}\) are lined up end to end to form the strip \(S_t\), with the long edge \(I_{t,i}\) of the block transported to part of the lower boundary \(L_t\) of \(S_t\). In other words, choose the rigid motions so that the sets \( \rho _{t,i}(S_{t,i})\), \( 1 \le i \le \sigma ^-(t)\), have pairwise disjoint interiors and their union is \(S_t\), and also \(\rho _{t,i}(I_{t,i}) \subset L_t\) for \(1 \le i \le \sigma ^-(t)\) (see Fig. 4). This also implies that \(\cup _{i=1}^{\sigma ^-(t)} \rho _{t,i}(S^+_{t,i}) = S_t^+\).

Fig. 4

The induced coverage process in 2 dimensions. The upper thick line is part of the set \(\varGamma _t\), while the lower thick line is part of the line \(L_t\). We show four of the blocks \(S_{t,i}\) next to the upper thick line, and four of the blocks making up \(S_t\) next to the lower thick line. The arrows represent the rigid motions \(\rho _{t,i}\). The disks shown are part of \(Q_t\)

By the restriction, mapping and superposition theorems for Poisson processes (see e.g. [19]), the point process \({{\mathscr {P}}}''_t:= \cup _{i=1}^{\sigma ^-(t)} \rho _{t,i} ({{\mathscr {P}}}'_t \cap S^+_{t,i}) \) is a homogeneous Poisson process of intensity \(t f_0\) in the long strip \(S_t^+\).

Now suppose \(d \ge 3\). To describe the induced coverage process in this case, we first define a ‘tartan’ (plaid) region \(T_t\) as follows. Recall that \(\gamma ' \in (\gamma ,1/d)\). Partition each face \(H_{t,i}\), \(1 \le i \le \sigma ^-(t)\) into a collection of \((d-1)\)-dimensional cubes of side \(t^{- \gamma '}\) contained in \(H_{t,i}\), together with a boundary region contained within \(\partial _{d-2} H_{t,i} \oplus B(o,d t^{- \gamma '})\). Let \(T_t\) be the union (over all faces) of the boundaries of the \((d-1)\)-dimensional cubes in this partition (see Fig. 5). Set \(T^+_t := [ T_t \oplus B(o,11r_t) ] \cap A_t^{**}\).

Enumerate the \((d-1)\)-dimensional cubes in the above subdivision of the faces \(H_{t,i}, 1 \le i \le \sigma ^-(t)\), as \(I^+_{t,1},\ldots , I^+_{t,\lambda (t)}\). For \(1 \le i \le \lambda (t)\) let \(I_{t,i}:= I^+_{t,i} {\setminus } (T_t \oplus B(o, 7r_t))\), which is a \((d-1)\)-dimensional cube of side length \(t^{-\gamma '} -14r_t\) with the same centre and orientation as \(I_{t,i}^+\). We claim that the total \((d-1)\)-dimensional Lebesgue measure of these \((d-1)\)-dimensional cubes satisfies

$$\begin{aligned} \lim _{t \rightarrow \infty }( | \cup _{i=1}^{\lambda (t)} I_{t,i} | ) = |\varGamma |. \end{aligned}$$
(7.61)

Indeed, for \(1 \le i \le \lambda (t)\) we have \(|I_{t,i}|/|I_{t,i}^+| = ((t^{-\gamma '} - 14r_t)/t^{-\gamma '})^{d-1}\), which tends to one since \(r_t = O( ( (\log t)/t )^{1/d })\) by (7.15), and \(\gamma ' < 1/d\), so the proportionate amount removed near the boundaries of the \((d-1)\)-dimensional cubes \(I^+_{t,i}\) to give \(I_{t,i}\) vanishes. Also the ‘boundary part’ of a face \(H_{t,i}\) that is not contained in any of the \(I^+_{t,j}\)s has \((d-1)\)-dimensional Lebesgue measure that is \(O(t^{-(d-2)\gamma } t^{-\gamma '})\), so that the total \((d-1)\)-dimensional measure of the removed regions near the boundaries of the faces is \(O(t^{(d-1)\gamma } \times t^{-(d-2)\gamma } t^{-\gamma '})= O(t^{\gamma -\gamma '} )\), which tends to zero. Thus the claim (7.61) is justified.

For \(1 \le i \le \lambda (t)\) define closed, rectilinear (but not aligned with the axes), d-dimensional cuboids (which we call ‘blocks’) \(S_{t,i}\) (respectively \(S^+_{t,i}\)), as follows. Let \(S_{t,i}\) (respectively \(S^+_{t,i}\)) be the closure of the set of those locations \(x \in {\tilde{A}}_t\) such that \(\,\textrm{dist}(x,I_{t,i}) \le 2 r_t \) (resp., \(\,\textrm{dist}(x,I_{t,i}) \le 4 r_t \)) and such that there exists \(y \in I_{t,i}^o\) (the relative interior of \(I_{t,i}\)) satisfying \(\Vert y-x\Vert = \,\textrm{dist}(x, I_{t,i})\). For example, if \(d=3\) then \(S_{t,i}\) (resp. \(S^+_{t,i}\)) is a cuboid of dimensions \((t^{- \gamma '} - 14r_t) \times (t^{-\gamma '} -14r_t) \times 2 r_t\) (resp. \( (t^{- \gamma '} - 14r_t) \times (t^{-\gamma '} -14r_t) \times 4 r_t \)) with \(I_{t,i}\) as its base. We shall verify in Lemma 7.25 that \(S_{t,1}^+,\ldots , S_{t,\lambda (t)}^+\) are disjoint.

Define a region \(D_t \subset \mathbb {R}^{d-1}\) that is approximately a rectilinear hypercube with lower left corner at the origin, and obtained as the union of \(\lambda (t)\) disjoint \((d-1)\)-dimensional cubes of side \(t^{-\gamma '} - 14 r_t\). We can and do arrange that \(D_t \subset [0,|\varGamma _t|^{1/(d-1)}+ t^{- \gamma }]^{d-1}\) for each t, and \(|D_t| \rightarrow |\varGamma |\) as \(t \rightarrow \infty \). Define the flat slabs

$$\begin{aligned} S_t:= D_t \times [0, 2 r_t]; ~~~~~~ S^+_t:= D_t \times [0, 4r_t], \end{aligned}$$

and denote the lower boundary of \(S_t\) (that is, the set \(D_t \times \{0\}\)) by \(L_t\).

Fig. 5

Part of the ‘tartan’ region \(T_t\) when \(d=3\). The outer triangle represents one face \(H_{t,i}\), and the part of \(T_t\) within \(H_{t,i}\) is given by the union of the boundaries of the squares. The triangle has sides of length \(\varTheta (t^{-\gamma })\), while the squares have sides of length \(t^{-\gamma '}\). The region between the two triangles is part of \(Q_t^+\). It has thickness \(8dt^{-\gamma '}\) (the constant 8d is not drawn to scale), and covers the whole boundary region not covered by the squares

Now choose rigid motions \(\rho _{t,i}\) of \(\mathbb {R}^d\), \(1 \le i \le \lambda (t)\), such that under applications of these rigid motions the blocks \(S_{t,i}\) are reassembled to form the slab \(S_t\), with the square face \(I_{t,i}\) of the i-th block transported to part of the lower boundary \(L_t\) of \(S_t\). In other words, choose the rigid motions so that the sets \( \rho _{t,i}(S_{t,i})\), \( 1 \le i \le \lambda (t)\), have pairwise disjoint interiors and their union is \(S_t\), and also \(\rho _{t,i}(I_{t,i}) \subset L_t\) for \(1 \le i \le \lambda (t)\). This also implies that \(\cup _{i=1}^{\lambda (t)} \rho _{t,i}(S^+_{t,i}) = S_t^+\).

By the restriction, mapping and superposition theorems, the point process \({{\mathscr {P}}}''_t:= \cup _{i=1}^{\lambda (t)} \rho _{t,i} (\tilde{{{\mathscr {P}}}}_t \cap S^+_{t,i}) \) is a homogeneous Poisson process of intensity \(t f_0\) in the flat slab \(S_t^+\).

In both cases (\(d=2\) or \(d \ge 3\)), we extend \({{\mathscr {P}}}''_t\) to a Poisson process \({{\mathscr {U}}}'_t\) in \( \mathbb {H}:= \mathbb {R}^{d-1} \times [0,\infty )\) as follows. Let \({{\mathscr {P}}}'''_t\) be a Poisson process of intensity \(t f_0\) in \(\mathbb {H}{\setminus } S^+_t\), independent of \(\tilde{{{\mathscr {P}}}}_t\), and set

$$\begin{aligned} {{\mathscr {U}}}'_t:= {{\mathscr {P}}}''_t \cup {{\mathscr {P}}}'''_t. \end{aligned}$$
(7.62)

Then \({{\mathscr {U}}}'_t\) is a homogeneous Poisson process of intensity \(tf_0\) in \(\mathbb {H}\). We call the collection of balls of radius \(r_t\) centred on the points of this point process the induced coverage process.

The next lemma says that in \(d \ge 3\), the ‘tartan’ region \(T_t^+\) is covered with high probability. It is needed because locations in \(T_t^+\) lie near the boundary of blocks \(S_{t,i}\), so that coverage of these locations by \(\tilde{{{\mathscr {P}}}}_t \) does not necessarily correspond to coverage of their images in the induced coverage process.

Lemma 7.24

Suppose \(d \ge 3\). Then \( \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(T^+_t, \tilde{{{\mathscr {P}}}}_t) ] =1.\)

Proof

The total number of the \((d-1)\)-dimensional cubes (in the whole of \(\varGamma _t\)) in the partition described above is \(O(t^{(d-1) \gamma '})\), and for each of these \((d-1)\)-dimensional cubes the number of balls of radius \(r_t\) required to cover the boundary of the cube is \(O((t^{-\gamma '} r_t^{-1})^{d-2})\). Thus we can take \(x_{t,1}, \ldots , x_{t,k_t} \in \mathbb {R}^d\), with \(k_t = O(t^{\gamma '} r_t^{2-d})\), such that \(T_t \subset \cup _{i=1}^{k_t} B(x_{t,i},r_t)\).
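Combining the two counts, the total number of balls of radius \(r_t\) needed to cover \(T_t\) is

$$\begin{aligned} k_t = O\big( t^{(d-1)\gamma '} \cdot (t^{-\gamma '} r_t^{-1})^{d-2} \big) = O\big( t^{(d-1)\gamma ' - (d-2)\gamma '} r_t^{2-d} \big) = O( t^{\gamma '} r_t^{2-d} ). \end{aligned}$$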

Then \(T_t^+ \subset \cup _{i=1}^{k_t} B(x_{t,i},12r_t) \cap A_t^*\). Let \(\varepsilon \in (0,(1/d)- \gamma ')\). By Lemma 7.22,

$$\begin{aligned} \mathbb {P}[ F_t(T^+_t, \tilde{{{\mathscr {P}}}}_t) ^c] & \le \sum _{i=1}^{k_t} \mathbb {P}[F_t(B(x_{t,i},12r_t) \cap A_t^*, \tilde{{{\mathscr {P}}}}_t )^c] \\ & = O(t^{ \gamma '} r_t^{2-d} t^{\varepsilon - (d-1)/d}) = O(t^{\varepsilon + \gamma ' -1/d}), \end{aligned}$$

which tends to zero. \(\square \)

So as to be able to treat the cases \(d=2\) and \(d \ge 3\) together, for \(d=2\) we define \(\lambda (t):= \sigma ^-(t)\) and \(T_t^+ := T_t := \varnothing \) (these were previously defined only for \(d \ge 3\)). We verify next that the blocks \(S_{t,i}^+, 1 \le i \le \lambda (t)\), are pairwise disjoint. This is needed to ensure that the Poisson processes \(\tilde{{{\mathscr {P}}}}_t \cap S_{t,i}^+, 1 \le i \le \lambda (t)\), are mutually independent.

Lemma 7.25

Suppose \(d \ge 2\), \(t \ge t_0\) and \(i,j \in \{1,2,\ldots ,\lambda (t)\}\) with \(i <j\). Then \(S_i^+ \cap S_j^+ = \varnothing \).

Proof

Suppose \(S_i^+ \cap S_j^+ \ne \varnothing \); we shall obtain a contradiction. Let \(x \in S_i^+ \cap S_j^+ \). Let y be the closest point in \(I_{t,i}\) to x, and \(y'\) the closest point in \(I_{t,j}\) to x. Choose \(k,\ell \) such that \(I_{t,i} \subset H_{t,k}\) and \(I_{t,j} \subset H_{t,\ell }\). Then \(k\ne \ell \): if \(d=2\) we have \(k=i\) and \(\ell =j\) with \(i \ne j\), while if \(d \ge 3\) and \(k = \ell \), then \(I_{t,i}\) and \(I_{t,j}\) would be separated by a distance of at least \(14 r_t\) within \(H_{t,k}\), so that \(S_i^+ \cap S_j^+ = \varnothing \), contradicting our assumption.

Let \(J_{t,k}\) be the projection of \(H_{t,k}\) onto \(\mathbb {R}^{d-1}\), and write \(y= (u,\phi _t(u))\) with \(u \in J_{t,k}\). Let \(v \in \partial J_{t,k}\). By (7.51) and (7.53),

$$\begin{aligned} |\phi _t(v)- \phi _t(u) | \le |\phi (v) -\phi (u)| + 4 K t^{-2 \gamma } \le (1/9)\Vert v-u\Vert + 4 K t^{-2 \gamma }. \end{aligned}$$

Since \(y \in I_{t,i}\) we have \(\Vert y - (v,\phi _t(v))\Vert \ge \,\textrm{dist}(y,\partial H_{t,k}) \ge 7 r_t\), so that

$$\begin{aligned} 7r_t \le \Vert u-v\Vert + |\phi _t(u) - \phi _t(v)| \le (10/9)\Vert u-v\Vert + 4Kt^{-2 \gamma }, \end{aligned}$$

and hence \(\Vert u-v\Vert \ge (63/10) r_t -4Kt^{-2 \gamma }\), and hence provided t is large enough, \( \,\textrm{dist}(u, \partial J_{t,k} ) \ge 6 r_t. \) Similarly, writing \(y' := (u',\phi _t(u'))\) we have \(\,\textrm{dist}(u', \partial J_{t,\ell }) \ge 6r_t\). Therefore

$$\begin{aligned} \Vert y- y' \Vert \ge \Vert u- u'\Vert \ge 12 r_t. \end{aligned}$$

However, also \(\Vert y-y'\Vert \le \Vert y-x\Vert + \Vert y'-x\Vert \le 8 r_t\), and we have our contradiction. \(\square \)

Denote the union of the boundaries (relative to \(\mathbb {R}^{d-1} \times \{0\}\)) of the lower faces of the blocks making up the strip/slab \(S_t\), by \(C_t^0\), and the \((9r_t)\)-neighbourhood of this region by \(C_t\) (the C can be viewed as standing for ‘corner region’, at least when \(d=2\)), i.e.

$$\begin{aligned} C_t^0 := \cup _{i=1}^{\lambda (t)} \rho _{t,i} (\partial I_{t,i}), ~~~~~~~ C_t := ( C_t^0 \oplus B(o,9r_t) ) \cap \mathbb {H}. \end{aligned}$$
(7.63)

Here \(\partial I_{t,i}\) denotes the relative boundary of \(I_{t,i}\).

The next lemma says that the corner region \(C_t\) is covered with high probability. It is needed because locations in \(C_t\) lie near the boundaries of the blocks assembled to make the induced coverage process, so that coverage of these locations in the induced coverage process does not necessarily correspond to coverage of their pre-images in the original coverage process.

Lemma 7.26

Suppose \(d \ge 2\). Then \(\lim _{t \rightarrow \infty } \mathbb {P}[ F_t(C_t ,{{\mathscr {U}}}'_t) ] =1\).

Proof

Set \({\tilde{\gamma }}:= \gamma \) if \(d=2\), or if \(d \ge 3\) set \({\tilde{\gamma }}:= \gamma ' \). Let \(\varepsilon \in (0, (1/d)- {\tilde{\gamma }})\). The number \(\lambda (t)\) of \((d-1)\)-dimensional cubes making up \(L_t\) is \(O(t^{(d-1) {\tilde{\gamma }}})\), and for each of these \((d-1)\)-dimensional cubes, the number of balls of radius \(r_t\) required to cover the boundary is \(O((t^{-{\tilde{\gamma }}} r_t^{-1})^{d-2})\). Hence we can take points \(x_{t,1}, \ldots , x_{t, m_t} \in L_t\), with \(m_t = O(t^{{\tilde{\gamma }}} r_t^{2-d})\), such that \(C_t^0 \subset \cup _{i=1}^{m_t} B(x_{t,i},r_t)\). Then \(C_t \subset \cup _{i=1}^{m_t} B(x_{t,i},10r_t)\). Hence by (7.56) from Lemma 7.22, we obtain the estimate

$$\begin{aligned} \mathbb {P}[ F_t(C_t \cap \mathbb {H}, {{\mathscr {U}}}'_t) ^c] = O(t^{{\tilde{\gamma }}} r_t^{2-d} t^{\varepsilon - (d-1)/d}) = O(t^{\varepsilon + {\tilde{\gamma }}-1/d}), \end{aligned}$$

which tends to zero. \(\square \)

Lemma 7.27

(Limiting coverage probabilities for the induced coverage process) Suppose \(d \ge 2\). Then

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[F_t(S_t, {{\mathscr {U}}}'_t)] = \lim _{t \rightarrow \infty } \mathbb {P}[F_t(L_t, {{\mathscr {U}}}'_t)] = \exp (- c_{d,k} |\varGamma | e^{- \zeta } ) . \end{aligned}$$
(7.64)

Proof

The second equality of (7.64) is easily obtained using (7.17) from Lemma 7.4.

Recall that \(L_t = D_t \times \{0\}\). Also \(\partial L_t \subset C_t^0\), so that \((\partial D_t \oplus B_{(d-1)}(o,r_t)) \times [0,2 r_t] \subset C_t\), and therefore by (7.18) from Lemma 7.4,

$$\begin{aligned} \mathbb {P}[( F_t(L_t, {{\mathscr {U}}}'_t) {\setminus } F_t(S_t, {{\mathscr {U}}}'_t) ) \cap F_t(C_t, {{\mathscr {U}}}'_t) ] \rightarrow 0. \end{aligned}$$

Therefore using also Lemma 7.26 shows that \(\mathbb {P}[ F_t(L_t, {{\mathscr {U}}}'_t) {\setminus } F_t(S_t, {{\mathscr {U}}}'_t) ] \rightarrow 0\), and this gives us the rest of (7.64). \(\square \)

We are now ready to complete the proof of Proposition 7.19.

Proof of Proposition 7.19

We shall approximate the event \(F_t (A^{**}_t , \tilde{{{\mathscr {P}}}}_t)\) by the events \( F_t(L_t, {{\mathscr {U}}}'_t)\) and \(F_t(S_t,{{\mathscr {U}}}'_t)\), and apply Lemma 7.27.

Suppose \(F_t(Q^+_t \cup T_t^+, \tilde{{{\mathscr {P}}}}_t) {\setminus } F_t(A_t^{**}, \tilde{{{\mathscr {P}}}}_t) \) occurs, and choose \(x \in V_t(\tilde{{{\mathscr {P}}}}_t) \cap A_t^{**} {\setminus } (Q_t^+ \cup T_t^+)\). Let \(y \in \varGamma _t \) with \(\Vert y-x\Vert = \,\textrm{dist}(x,\varGamma _t)\). Then \(\Vert y-x\Vert \le 2r_t\), and since \( x \notin Q_t^+ \), by (7.59) we have \(\,\textrm{dist}(x, \partial _{d-2} \varGamma _t) \ge 8d t^{-\gamma '}\), and hence \(\,\textrm{dist}(y,\partial _{d-2}\varGamma _t) \ge 8dt^{-\gamma '} - 2 r_t \ge 7d t^{-\gamma '}\), provided t is large enough. Therefore y lies in the interior of the face \(H_{t,i}\) for some i and \(x-y\) is perpendicular to \(H_{t,i}\) (if \(y \ne x\)). Also (if \(d \ge 3\)), since \(x \notin T_t^+\), \(\,\textrm{dist}(x,T_t) \ge 11 r_t\), so \(\,\textrm{dist}(y,T_t) \ge 9 r_t\). Therefore \(y \in I_{t,j}\) for some j, and x lies in the block \(S_{t,j}\), and moreover \(\,\textrm{dist}(x, (\partial S^+_{t,j}) {\setminus } I_{t,j} ) \ge 2r_t\). Hence \(B(\rho _{t,j}(x),r_t) \cap \mathbb {H}\subset \rho _{t,j}(S^+_{t,j})\), and hence by (7.62), \( {{\mathscr {U}}}'_t(B(\rho _{t,j}(x),r_t)) = \tilde{{{\mathscr {P}}}}_t(B(x,r_t))< k, \) so event \(F_t(S_t, {{\mathscr {U}}}'_t) \) does not occur. Hence

$$\begin{aligned} F_t(S_t, {{\mathscr {U}}}'_t) {\setminus } F_t(A_t^{**}, \tilde{{{\mathscr {P}}}}_t) \subset F_t(Q^+_t \cup T^+_t, \tilde{{{\mathscr {P}}}}_t)^c, \end{aligned}$$

so by Lemmas 7.23 and  7.24, \(\mathbb {P}[ F_t(S_t,{{\mathscr {U}}}'_t) {\setminus } F_t (A^{**}_t , \tilde{{{\mathscr {P}}}}_t) ] \rightarrow 0\), and hence using (7.64) we have

$$\begin{aligned} \liminf _{t \rightarrow \infty } \mathbb {P}[F_t(A_t^{**},\tilde{{{\mathscr {P}}}}_t)] \ge \exp (-c_{d,k} |\varGamma |e^{-\zeta }). \end{aligned}$$
(7.65)

Suppose \(F_t (A^{**}_t , \tilde{{{\mathscr {P}}}}_t) {\setminus } F_t(L_t,{{\mathscr {U}}}'_t)\) occurs, and choose \( y \in L_t \cap V_t({{\mathscr {U}}}'_t)\). Take \(i \in \{1,\ldots ,\lambda (t)\}\) such that \(y \in \rho _{t,i}(I_{t,i})\). Then \( \,\textrm{dist}( y, \rho _{t,i} (\partial I_{t,i})) \le r_t \), since otherwise \(\rho _{t,i}^{-1}(y)\) would be a location in \(A^{**}_t \cap V_t( \tilde{{{\mathscr {P}}}}_t)\). Thus \(y \in C_t\) by (7.63), and therefore using Lemma 7.26 yields that

$$\begin{aligned} \mathbb {P}[ F_t(A^{**}_t, \tilde{{{\mathscr {P}}}}_t) {\setminus } F_t(L_t,{{\mathscr {U}}}'_t) ] \le \mathbb {P}[ F_t(C_t ,{{\mathscr {U}}}'_t)^c ] \rightarrow 0 . \end{aligned}$$

Combining this with (7.64) and (7.65) completes the proof. \(\square \)

7.7 Proof of Theorem 3.1: conclusion

Proposition 7.19 gives the limiting probability of coverage of a polytopal approximation to a region near part of \(\partial A\). The next two lemmas show that \(\mathbb {P}[F_t(A^{**}_t, \tilde{{{\mathscr {P}}}}_t)]\) approximates \(\mathbb {P}[F_t(A^*_t,{{\mathscr {P}}}'_t)]\) (recall the definitions at (7.54)). From this we can deduce that we obtain the same limiting probability even after dispensing with the polytopal approximation.

Lemma 7.28

Let \(E^{(1)}_t:= F_t( A_t^{**} , \tilde{{{\mathscr {P}}}}_t) {\setminus } F_t(A^*_t,{{\mathscr {P}}}'_t)\). Then \(\mathbb {P}[E^{(1)}_t] \rightarrow 0\) as \( t \rightarrow \infty \).

Proof

Let \(\varepsilon \in (0, 2\gamma - 1/d)\). Suppose \(E^{(1)}_t \cap F_t(Q_t^+ , {{\mathscr {P}}}'_t)\) occurs. Then since \(\tilde{{{\mathscr {P}}}}_t \subset {{\mathscr {P}}}'_t\), \(V_t({{\mathscr {P}}}'_t)\) intersects with \(A^*_t {\setminus } A^{**}_t\), and therefore by (7.53), \(V_t({{\mathscr {P}}}'_t)\) includes locations distant at most \(2K t^{-2 \gamma }\) from \(\varGamma _t\). Also \(\varGamma _t \cap V_t ({{\mathscr {P}}}'_t) = \varnothing \), since \(\varGamma _t \subset A^{**}_t\).

Pick a location \(x \in \overline{V_t({{\mathscr {P}}}'_t) \cap A_t^*} \) of minimal distance from \(\varGamma _t\). Then \(x \notin Q_t^+\), so the nearest point in \(\varGamma _t\) to x lies in the interior of \(H_{t,i}\) for some i. We claim that x lies at the intersection of the boundaries of d balls of radius \(r_t\) centred on points of \({{\mathscr {P}}}'_t\); this is proved in the same way as the analogous claim concerning \(\textbf{w}\) in the proof of Lemma 7.4. Moreover, x is covered by at most \(k-1 \) of the other balls centred in \({{\mathscr {P}}}'_t\) (in fact, by exactly \(k-1\) such balls, but we do not need this below). Also x does not lie in the interior of \(A_t^{**}\).

Thus if \(E^{(1)}_t \cap F_t(Q_t^+ , {{\mathscr {P}}}'_t)\) occurs, there must exist d points \(x_1,x_2,\ldots , x_d\) of \({{\mathscr {P}}}'_t \) such that \(\cap _{i=1}^d \partial B(x_i,r_t)\) includes a point in \(A_t^*\) but outside the interior of \(A_t^{**}\), within distance \(2Kt^{-2 \gamma }\) of \(\varGamma _t\) and further than \(r_t\) from all but at most \(k-1\) other points of \({{\mathscr {P}}}'_t\). Hence by the Mecke formula, integrating first over the positions of \(x_2-x_1, x_3-x_1, \ldots ,x_d - x_1\) and then over the location of \(x_1\), and using Lemma 7.20(a), Lemma 7.21 and (7.16), we obtain for suitable constants \(c, c'\) that

$$\begin{aligned} \mathbb {P}[E^{(1)}_t \cap F_t(Q_t^+ , {{\mathscr {P}}}'_t)] & \le c t^d r_t^{d(d-1)} t^{-2 \gamma } (t r_t^d)^{k-1} \exp (- (1-\varepsilon ) ( \theta _d/2 ) f_0 t r_t^d) \\ & \le c' (tr_t^d)^{k-1} t^{d + \varepsilon - 2 \gamma - (d-1)/d } r_t^{d(d-1)} \\ & = O( (tr_t^d)^{d+ k-2} t^{(1/d)- 2 \gamma + \varepsilon }), \end{aligned}$$

which tends to zero by (7.15). Also \(\mathbb {P}[ F_t(Q_t^+ , {{\mathscr {P}}}'_t)] \rightarrow 1\) by Lemma 7.23, so \(\mathbb {P}[E^{(1)}_t] \rightarrow 0\). \(\square \)
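To verify the final equality in the last display of the proof above, note that

$$\begin{aligned} t^{d + \varepsilon - 2 \gamma - (d-1)/d} r_t^{d(d-1)} = (t r_t^d)^{d-1} \, t^{d - (d-1) - (d-1)/d + \varepsilon - 2 \gamma } = (t r_t^d)^{d-1} \, t^{(1/d) - 2 \gamma + \varepsilon }, \end{aligned}$$

which, combined with the factor \((tr_t^d)^{k-1}\), gives \((tr_t^d)^{d+k-2} t^{(1/d) - 2 \gamma + \varepsilon }\).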

Lemma 7.29

Let \(G_t := F_t(A_t^*, {{\mathscr {P}}}'_t) {\setminus } F_t(A_t^{**}, \tilde{{{\mathscr {P}}}}_t)\). Then \(\lim _{t \rightarrow \infty } \mathbb {P}[ G_t]=0\).

Proof

If event \(G_t\) occurs, then since \(A^{**}_t \subset A_t^*\), there exists \(y \in A^{**}_t \cap V_t(\tilde{{{\mathscr {P}}}}_t)\) such that y is covered by at least one of the balls of radius \(r_t\) centred on \({{\mathscr {P}}}'_t {\setminus } \tilde{{{\mathscr {P}}}}_t\). Hence there exists \(x \in {{\mathscr {P}}}'_t {\setminus } \tilde{{{\mathscr {P}}}}_t\) with \(B(x,r_t) \cap V_t(\tilde{{{\mathscr {P}}}}_t) \cap A^{**}_t \ne \varnothing \). Therefore

$$\begin{aligned} G_t \subset F_t( \cup _{x \in {{\mathscr {P}}}'_t {\setminus } \tilde{{{\mathscr {P}}}}_t} B(x,r_t) \cap A_t^{**}, \tilde{{{\mathscr {P}}}}_t)^c . \end{aligned}$$
(7.66)

Let \(\varepsilon \in (0,2 \gamma - 1/d)\). Let \( {{\mathscr {Q}}}_t: = {{\mathscr {P}}}'_t {\setminus } \tilde{{{\mathscr {P}}}}_t\). Then \(\tilde{{{\mathscr {P}}}}_t \) and \({{\mathscr {Q}}}_t\) are independent homogeneous Poisson processes of intensity \(t f_0\) in \({\tilde{A}}_t\), \(A_t {\setminus } {\tilde{A}}_t\) respectively. By Lemma 7.22 and the union bound, there is a constant c such that for any \(m \in \mathbb {N}\) and any set of m points \(x_1,\ldots ,x_m\) in \(\mathbb {R}^d\), we have

$$\begin{aligned} \mathbb {P}\left[ F_t( \cup _{i=1}^m B({x_i},r_t) \cap A_t^{**} ,\tilde{{{\mathscr {P}}}}_t )^c \right] \le c m t^{\varepsilon - (d-1)/d}. \end{aligned}$$

Let \(N_t := {{\mathscr {Q}}}_t(\mathbb {R}^d)\). By Lemma 7.20(a), \({\mathbb {E}} \,[ N_t] = O(t^{1- 2 \gamma })\), so that by conditioning on \({{\mathscr {Q}}}_t\) we have for some constant \(c'\) that

$$\begin{aligned} \mathbb {P}[ F_t( \cup _{x \in {{\mathscr {Q}}}_t } B(x,r_t) \cap A_t^{**} , \tilde{{{\mathscr {P}}}}_t )^c] \le c t^{\varepsilon - (d-1)/d} {\mathbb {E}} \,[N_t] \le c' t^{(1/d)- 2 \gamma + \varepsilon }, \end{aligned}$$

which tends to zero by the choice of \(\varepsilon \). Hence by (7.66), \(\mathbb {P}[G_t ] \rightarrow 0\). \(\square \)

To complete the proof of Theorem 3.1, we shall break \(\partial A\) into finitely many pieces, with each piece contained in a single chart. We would like to write the probability that all of \(\partial A\) is covered as the product of probabilities for each piece, but to achieve the independence needed for this, we need to remove a region near the boundary of each piece. By separate estimates we can show the removed regions are covered with high probability, and this is the content of the next lemma.

Recall from (7.52) that \(\gamma _0 \in (1/(2d),\gamma )\). Define the sets \( {\varDelta }_t := \partial \varGamma \oplus B(o, t^{-\gamma _0}) \) and \( {\varDelta }_t^{+} := \partial \varGamma \oplus B(o, 2 t^{-\gamma _0}). \)

Lemma 7.30

It is the case that \(\lim _{t \rightarrow \infty } \mathbb {P}[ F_t( {\varDelta }_t^{+} \cap A, {{\mathscr {P}}}_t) ] =1\).

Proof

Let \(\varepsilon \in (0,2 \gamma _0 - 1/d)\). Since we assume \(\kappa (\partial \varGamma ,r)= O(r^{2-d})\) as \(r \downarrow 0\), for each t we can take \(x_{t,1},\ldots ,x_{t,k(t)} \in \mathbb {R}^d\) with \(\partial \varGamma \subset \cup _{i=1}^{k(t)} B(x_{t,i},t^{-\gamma _0})\), and with \(k(t) = O(t^{(d-2)\gamma _0})\). Then \({\varDelta }_t^{+} \subset \cup _{i=1}^{k(t)} B(x_{t,i},3t^{-\gamma _0})\). For each \(i \in \{1,\ldots , k(t)\}\), we can cover the ball \(B(x_{t,i},3 t^{-\gamma _0})\) with \(O((t^{-\gamma _0}/r_t)^d)\) smaller balls of radius \(r_t\). Then we end up with balls of radius \(r_t\), denoted \(B_{t,1},\ldots ,B_{t,m(t)}\) say, such that \({\varDelta }_t^{+} \subset \cup _{i=1}^{m(t)} B_{t,i}\) and \(m(t) = O(r_t^{-d} t^{-2 \gamma _0})\). By (7.57) from Lemma 7.22, and the union bound,

$$\begin{aligned} \mathbb {P}[ \cup _{i=1}^{m(t)} ( F_t(B_{t,i} \cap A,{{\mathscr {P}}}_t )^c)] = O( r_t^{-d} t^{- 2 \gamma _0} t^{\varepsilon - (d-1)/d}) = O( t^{ (1/d) + \varepsilon - 2 \gamma _0} ), \end{aligned}$$

which tends to zero. \(\square \)

Given \(t >0\), define the sets \( \varGamma ^{(t^{-\gamma _0})}:= \varGamma {\setminus } {\varDelta }_t \) and

$$\begin{aligned} \varGamma ^{(t^{-\gamma _0})}_{r_t} := (\varGamma ^{(t^{-\gamma _0})} \oplus B(o, r_t)) \cap A; ~~~~~ \varGamma _{r_t} := (\varGamma \oplus B(o,r_t)) \cap A, \end{aligned}$$

and define the event \(F_t^\varGamma := F_t(\varGamma ^{(t^{- \gamma _0})}_{r_t} , {{\mathscr {P}}}_t )\).

Note that the definition of \(F_t^\varGamma \) does not depend on the choice of chart. This is needed for the last stage of the proof of Theorem 3.1. Lemma 7.33 below shows that \(\mathbb {P}[F_t^\varGamma ]\) is well approximated by \(\mathbb {P}[F_t(A_t^*, {{\mathscr {P}}}'_t)]\) and we have already determined the limiting behaviour of the latter. We prepare for the proof of Lemma 7.33 with two geometrical lemmas.

Lemma 7.31

For all large enough t, it is the case that \(\varGamma _{r_t}^{(t^{-\gamma _0})} \subset A_t^*\).

Proof

Let \(x \in \varGamma ^{(t^{-\gamma _0})}_{r_t} \), and take \(y \in \varGamma ^{(t^{-\gamma _0})}\) with \(\Vert x-y\Vert \le r_t\). Writing \(y = (u,\phi (u))\) with \(u \in U_t\), we claim that \(\,\textrm{dist}(u, \partial U) \ge (1/2)t^{-\gamma _0}\). Indeed, if we had \(\,\textrm{dist}(u, \partial U) < (1/2)t^{-\gamma _0}\), then we could take \(w \in \partial U\) with \(\Vert u -w \Vert < (1/2)t^{-\gamma _0}\). Then \((w,\phi (w)) \in \partial \varGamma \) and by (7.51), \(|\phi (w) - \phi (u)| \le (1/4) t^{-\gamma _0}\), so

$$\begin{aligned} \Vert (u,\phi (u)) - (w,\phi (w)) \Vert \le \Vert u-w\Vert + |\phi (u)-\phi (w)| \le (3/4)t^{-\gamma _0}, \end{aligned}$$

contradicting the assumption that \(y \in \varGamma ^{(t^{-\gamma _0})}\), so the claim is justified.

Writing \(x = (v,s)\) with \(v \in \mathbb {R}^{d-1}\), and \(s \in \mathbb {R}\), we have \(\Vert v-u\Vert \le \Vert x-y\Vert \le r_t\), so \(\,\textrm{dist}(v, \partial U) \ge (1/2)t^{-\gamma _0} - r_t\), and hence \(v \in U_t^-\), provided t is big enough (\(U_t^-\) was defined shortly after (7.52).) Also \(|\phi (v) - \phi (u)| \le r_t/4\) by (7.51), so \(|\phi _t(v) - \phi (u)| \le r_t/2\), provided t is big enough, by (7.53). Also \(|s - \phi (u)| \le \Vert x-y\Vert \le r_t\), so \( |s - \phi _t(v)| \le (3/2) r_t. \) Therefore \(x \in A^*_t\) by (7.54). \(\square \)

Lemma 7.32

For all large enough t, we have (a) \([A_t^* \oplus B(o,4 r_t)] \cap A \subset A_t\), and (b) \([ A_t^* \oplus B(o,4r_t)] \cap \partial A \subset \varGamma \), and (c) \([ \varGamma _{r_t}^{(t^{-\gamma _0})} \oplus B(o,4r_t)] \cap \partial A \subset \varGamma \).

Proof

Let \(x \in A_t^*\). Write \(x = (u,z)\) with \(u \in U_t^-\) and \(\phi _t(u) - 3r_t/2 \le z \le \phi (u)\).

Let \(y \in B(x,4 r_t) \cap A\), and write \(y = (v,s)\) with \(v \in \mathbb {R}^{d-1}\) and \(s \in \mathbb {R}\). Then \(\Vert v-u\Vert \le 4r_t\) so provided t is big enough, \(v \in U_t\). Also \(|s-z| \le 4 r_t\), and \(|\phi (v) - \phi (u)| \le r_t\) by (7.51), so

$$\begin{aligned} |s - \phi (v)| \le |s - z| + |z - \phi (u)| + |\phi (u)- \phi (v)| \le 4r_t + 2r_t + r_t, \end{aligned}$$

and since \(y \in A\), by (7.49) and (7.50) we must have \( 0 \le s \le \phi (v) \), provided t is big enough. Therefore \(y = (v,s) \in A_t\), which gives us (a).

If also \(y \in \partial A\), then \(\phi (v)=s\), so \(y \in \varGamma \). Hence we have part (b). Then by Lemma 7.31 we also have part (c). \(\square \)

Lemma 7.33

It is the case that \(\mathbb {P}[F_t^\varGamma \triangle F_t(A_t^*, {{\mathscr {P}}}'_t) ] \rightarrow 0\) as \(t \rightarrow \infty \).

Proof

Since \(\varGamma _{r_t}^{(t^{- \gamma _0})} \subset A_t^*\) by Lemma 7.31, and moreover \({{\mathscr {P}}}'_t \subset {{\mathscr {P}}}_t\), it follows that \(F_t(A_t^*,{{\mathscr {P}}}'_t) \subset F_t(\varGamma ^{(t^{-\gamma _0})}_{r_t} , {{\mathscr {P}}}_t) = F_t^\varGamma \). Therefore it suffices to prove that

$$\begin{aligned} \mathbb {P}[F_t(\varGamma ^{(t^{-\gamma _0})}_{r_t} , {{\mathscr {P}}}_t) {\setminus } F_t(A_t^*,{{\mathscr {P}}}'_t) ] \rightarrow 0. \end{aligned}$$
(7.67)

Let \(\varepsilon >0\). Suppose the event \(F_t(\varGamma ^{(t^{-\gamma _0})}_{r_t} , {{\mathscr {P}}}_t) \cap F_t( {\varDelta }_t^{+} \cap A, {{\mathscr {P}}}_t) {\setminus } F_t(A_t^*,{{\mathscr {P}}}'_t)\) occurs. Since \(F_t(A_t^*,{{\mathscr {P}}}'_t)\) fails to occur, we can choose \(x \in A_t^* \cap V_t({{\mathscr {P}}}'_t)\). Then by Lemma 7.32(a), \(B(x,r_t) \cap A \subset A_t \). Hence \({{\mathscr {P}}}_t \cap B(x,r_t) \subset {{\mathscr {P}}}'_t\), and therefore \(x \in V_t({{\mathscr {P}}}_t)\).

Since we are assuming \( F_t(\varGamma ^{(t^{-\gamma _0})}_{r_t} , {{\mathscr {P}}}_t) \) occurs, we therefore have \(\,\textrm{dist}(x, \varGamma ^{(t^{-\gamma _0})} ) > r_t \). Since we are also assuming \(F_t({\varDelta }_t^{+} \cap A,{{\mathscr {P}}}_t)\) occurs, we have \(\,\textrm{dist}(x, \partial \varGamma ) \ge 2 t^{-\gamma _0}\), and therefore \(\,\textrm{dist}(x,{\varDelta }_t) = \,\textrm{dist}(x, (\partial \varGamma ) \oplus B(o,t^{-\gamma _0})) \ge t^{-\gamma _0} > r_t\). Hence

$$\begin{aligned} { \,\textrm{dist}(x,\varGamma ) \ge \min (\,\textrm{dist}(x, \varGamma ^{(t^{-\gamma _0})}), \,\textrm{dist}(x, (\partial \varGamma ) \oplus B(o,t^{-\gamma _0}))) > r_t.} \end{aligned}$$

Moreover, by Lemma 7.32(b), \(\,\textrm{dist}(x, (\partial A) {\setminus } \varGamma ) \ge 4 r_t\). Thus \(\,\textrm{dist}(x, \partial A) > r_t\).

On the other hand, \(\,\textrm{dist}(x,\partial A) \le \,\textrm{dist}(x,\varGamma ) \le 2r_t \) because \(x \in A_t^*\), and therefore \(x \notin A^{[\varepsilon ]}\) (provided t is large enough), since \(\overline{A^{[\varepsilon ]}}\) is compact and contained in \(A^o\) (the set \(A^{[\varepsilon ]}\) was defined in Sect. 2). Therefore the event \(F_t(A^{(r_t)} {\setminus } A^{[\varepsilon ]},{{\mathscr {P}}}_t)^c\) occurs. Thus, for large enough t we have the event inclusion

$$\begin{aligned} F_t(\varGamma ^{(t^{-\gamma _0})}_{r_t} , {{\mathscr {P}}}_t) \cap F_t( {\varDelta }_t^{+} \cap A, {{\mathscr {P}}}_t) {\setminus } F_t(A_t^*,{{\mathscr {P}}}'_t) \subset F_t( A^{(r_t)} {\setminus } A^{[\varepsilon ]} ,{{\mathscr {P}}}_t)^c . \end{aligned}$$
(7.68)

By (7.15),

$$\begin{aligned} \lim _{t \rightarrow \infty } ( \theta _d t f_0 r_t^d - \log (t f_0) - (d+k-2) \log \log t ) = {\left\{ \begin{array}{ll} 2 \zeta &{} {\text { if }} d=2,k=1 \\ +\infty &{} {\text { otherwise.}} \end{array}\right. } \end{aligned}$$
(7.69)

Hence by Proposition 3.4 (taking \(B= A {\setminus } A^{[\varepsilon ]}\), and using (6.4)),

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F_t( A^{(r_t)} {\setminus } A^{[\varepsilon ]} ,{{\mathscr {P}}}_t) ] = {\left\{ \begin{array}{ll} \exp (- |A {\setminus } A^{[\varepsilon ]} |e^{-2 \zeta }) &{} {\text { if }} d=2, k=1 \\ 1 &{} {\text { otherwise.}} \end{array}\right. } \end{aligned}$$
(7.70)

Therefore since \(\varepsilon \) can be arbitrarily small and \(|A {\setminus } A^{[\varepsilon ]} | \rightarrow 0\) as \(\varepsilon \downarrow 0\), the event displayed on the left hand side of (7.68) has probability tending to zero. Then using Lemma 7.30, we have (7.67), which completes the proof. \(\square \)
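To spell out the final limiting step of the above proof: writing \(p_\varepsilon := \exp (- |A {\setminus } A^{[\varepsilon ]} | e^{-2 \zeta })\) in the case \(d=2, k=1\) (and \(p_\varepsilon := 1\) otherwise), the inclusion (7.68) and the limit (7.70) give

$$\begin{aligned} \limsup _{t \rightarrow \infty } \mathbb {P}[ F_t(\varGamma ^{(t^{-\gamma _0})}_{r_t} , {{\mathscr {P}}}_t) \cap F_t( {\varDelta }_t^{+} \cap A, {{\mathscr {P}}}_t) {\setminus } F_t(A_t^*,{{\mathscr {P}}}'_t) ] \le 1 - p_\varepsilon \le |A {\setminus } A^{[\varepsilon ]} | e^{-2 \zeta }, \end{aligned}$$

using the elementary bound \(1 - e^{-u} \le u\); the right hand side tends to zero as \(\varepsilon \downarrow 0\), as asserted.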

Corollary 7.34

It is the case that \( \lim _{t \rightarrow \infty } \mathbb {P}[F_t^\varGamma ] = \exp (- c_{d,k} |\varGamma | e^{- \zeta } ). \)

Proof

By Lemmas 7.28 and 7.29, \(\mathbb {P}[ F_t(A_t^*, {{\mathscr {P}}}'_t) \triangle F_t(A^{**}_t, \tilde{{{\mathscr {P}}}}_t) ] \rightarrow 0\). Then by Lemma 7.33, \(\mathbb {P}[ F_t^\varGamma \triangle F_t(A^{**}_t, \tilde{{{\mathscr {P}}}}_t) ] \rightarrow 0\), and now the result follows by Proposition 7.19. \(\square \)

Proof of Theorem 3.1

Let \(x_1,\ldots ,x_J \) and \(r(x_1),\ldots ,r(x_J)\) be as described at (7.48). Set \(\varGamma _1 := B(x_1,r(x_1) ) \cap \partial A\), and for \(j =2,\ldots , J\), let

$$\begin{aligned} { \varGamma _j := \overline{ (B(x_j,r(x_j) ) \cap \partial A) {\setminus } \cup _{i=1}^{j-1} B(x_i,r(x_i))}. } \end{aligned}$$

Then \(\varGamma _1,\ldots ,\varGamma _J\) form a finite collection of closed sets in \(\partial A\) with disjoint interiors and with union \(\partial A\), such that each \(\varGamma _i\) satisfies \(\kappa (\varGamma _i,r) = O(r^{2-d})\) as \(r \downarrow 0\) and is contained in a single chart \(B(x_i,r(x_i))\). For \(1 \le i \le J\), define \(F_t^{\varGamma _i}\) similarly to \(F_t^\varGamma \), that is, \(F_t^{\varGamma _i} := F_t(\varGamma _{i,r_t}^{(t^{-\gamma _0})} , {{\mathscr {P}}}_t)\) with

$$\begin{aligned} \varGamma _{i,r_t}^{(t^{-\gamma _0})} := \left( \left[ \varGamma _i {\setminus } ((\partial \varGamma _i) \oplus B(o,t^{-\gamma _0})) \right] \oplus B(o, r_t) \right) \cap A, \end{aligned}$$

and \(\partial \varGamma _i := \varGamma _i \cap \overline{\partial A {\setminus } \varGamma _i}\). First we claim that the following event inclusion holds:

$$\begin{aligned} \cap _{i=1}^J F_t^{\varGamma _i} \cap F_t(A^{(r_t)}, {{\mathscr {P}}}_t) {\setminus } F_t(A,{{\mathscr {P}}}_t) \subset \left( \cap _{i=1}^J F_t([(\partial \varGamma _i) \oplus B(o,2t^{-\gamma _0})] \cap A, {{\mathscr {P}}}_t) \right) ^c. \end{aligned}$$

Indeed, suppose \( \cap _{i=1}^J F_t^{\varGamma _i} \cap F_t(A^{(r_t)}, {{\mathscr {P}}}_t) {\setminus } F_t(A,{{\mathscr {P}}}_t) \) occurs, and choose \(x \in A \cap V_t({{\mathscr {P}}}_t)\). Then \(\,\textrm{dist}(x,\partial A) \le r_t\) since we assume \(F_t(A^{(r_t)},{{\mathscr {P}}}_t)\) occurs. Then for some \(i \in \{1,\ldots ,J\}\) and some \(y \in \varGamma _i\) we have \(\Vert x-y \Vert \le r_t\). Since we assume \(F_t^{\varGamma _i}\) occurs, we have \(x \notin \varGamma _{i,r_t}^{(t^{-\gamma _0})}\), and hence \(\,\textrm{dist}(y, \partial \varGamma _i) \le t^{- \gamma _0}\), so \(\,\textrm{dist}(x, \partial \varGamma _i) < 2t^{- \gamma _0}\). Therefore \(F_t([(\partial \varGamma _i) \oplus B(o, 2t^{-\gamma _0} )] \cap A, {{\mathscr {P}}}_t)\) fails to occur, justifying the claim.
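The distance estimate at the end of this argument is the triangle inequality combined with the bound \(t^{-\gamma _0} > r_t\) for all large enough t (as already used in the proof of Lemma 7.33):

$$\begin{aligned} \,\textrm{dist}(x, \partial \varGamma _i) \le \Vert x - y \Vert + \,\textrm{dist}(y, \partial \varGamma _i) \le r_t + t^{-\gamma _0} < 2 t^{-\gamma _0}, \end{aligned}$$

so that x is a point of \([(\partial \varGamma _i) \oplus B(o, 2t^{-\gamma _0})] \cap A\) lying in \(V_t({{\mathscr {P}}}_t)\), which is precisely the failure of \(F_t([(\partial \varGamma _i) \oplus B(o, 2t^{-\gamma _0})] \cap A, {{\mathscr {P}}}_t)\).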

By the preceding claim and the union bound,

$$\begin{aligned} \mathbb {P}[F_t(A,{{\mathscr {P}}}_t) ]&\le \mathbb {P}[ \cap _{i=1}^J F_t^{\varGamma _i} \cap F_t(A^{(r_t)},{{\mathscr {P}}}_t)] \nonumber \\&\le \mathbb {P}[F_t(A,{{\mathscr {P}}}_t) ] + \sum _{i=1}^J \mathbb {P}[ F_t([(\partial \varGamma _i) \oplus B(o,2t^{-\gamma _0})] \cap A, {{\mathscr {P}}}_t)^c]. \end{aligned}$$
(7.71)

By Lemma 7.30, \(\mathbb {P}[ F_t([(\partial \varGamma _i) \oplus B(o,2t^{-\gamma _0})] \cap A, {{\mathscr {P}}}_t)] \rightarrow 1 \) for each i. Therefore by (2.2) and (7.71),

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[R'_{t,k} \le r_t] = \lim _{t \rightarrow \infty } \mathbb {P}[F_{t} (A,{{\mathscr {P}}}_t) ] = \lim _{t \rightarrow \infty } \mathbb {P}[ \cap _{i=1}^J F_t^{\varGamma _i} \cap F_t(A^{(r_t)}, {{\mathscr {P}}}_t) ], \end{aligned}$$
(7.72)

provided the last limit exists. By Corollary 7.34, we have for each i that

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F^{\varGamma _i}_t] = \exp (- c_{d,k}|\varGamma _i| e^{-\zeta } ). \end{aligned}$$
(7.73)

Also, we claim that for large enough t the events \(F_t^{\varGamma _1}, \ldots , F_t^{\varGamma _J}\) are mutually independent. Indeed, given distinct \(i,j \in \{1,\ldots ,J\}\), if \(x \in \varGamma _{i,r_t}^{(t^{-\gamma _0})}\) and \(y \in \varGamma _{j,r_t}^{(t^{-\gamma _0})}\), then we can take \(y' \in \varGamma _j {\setminus } ( (\partial \varGamma _j) \oplus B(o,t^{-\gamma _0}))\) with \(\Vert y'-y \Vert \le r_t\). If \(\Vert x -y \Vert \le 2r_t\), then by the triangle inequality \(\Vert x - y'\Vert \le 3r_t\); but \(y' \notin \varGamma _i\) (since \(y'\) lies in \(\varGamma _j\) at distance more than \(t^{-\gamma _0}\) from \(\partial \varGamma _j\), and the sets \(\varGamma _i, \varGamma _j\) have disjoint interiors), so this would contradict Lemma 7.32(c). Therefore \(\Vert x - y\Vert > 2r_t\), and hence the \(r_t\)-neighbourhoods of \( \varGamma _{i,r_t}^{(t^{-\gamma _0})}\) and of \( \varGamma _{j,r_t}^{(t^{-\gamma _0})}\) are disjoint. Since each \(F_t^{\varGamma _i}\) is determined by the points of \({{\mathscr {P}}}_t\) lying within distance \(r_t\) of \(\varGamma _{i,r_t}^{(t^{-\gamma _0})}\), the spatial independence property of the Poisson process gives us the independence claimed.

Now observe that \(F_t(A^{(r_t)},{{\mathscr {P}}}_t) \subset F_t(A^{(4r_t)},{{\mathscr {P}}}_t)\), and we claim that

$$\begin{aligned} \mathbb {P}[ F_t(A^{(4r_t)},{{\mathscr {P}}}_t) {\setminus } F_t(A^{(r_t)},{{\mathscr {P}}}_t)] \rightarrow 0 ~~~ {\text { as }} ~ t \rightarrow \infty . \end{aligned}$$
(7.74)

Indeed, given \(\varepsilon >0\), for large t the probability on the left side of (7.74) is bounded by \(\mathbb {P}[ F_t(A^{(r_t)}{\setminus } A^{[\varepsilon ]},{{\mathscr {P}}}_t)^c] \), and by (7.70) the latter probability tends to a limit which can be made arbitrarily small by the choice of \(\varepsilon \). Hence by Proposition 3.4 (using (6.4)) and (7.69),

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A^{(4r_t)},{{\mathscr {P}}}_t) ]&= \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A^{(r_t)},{{\mathscr {P}}}_t)] \nonumber \\&= {\left\{ \begin{array}{ll} \exp ( - |A| e^{-2 \zeta } ) &{} {\text { if }} ~ d=2,k=1 \\ 1 &{} {\text { otherwise}}. \end{array}\right. } \end{aligned}$$
(7.75)

Moreover, by (7.72) and (7.74),

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[R'_{t,k} \le r_t] = \lim _{t \rightarrow \infty } \mathbb {P}[ \cap _{i=1}^J F_t^{\varGamma _i} \cap F_t(A^{(4r_t)}, {{\mathscr {P}}}_t) ], \end{aligned}$$
(7.76)

provided the last limit exists. The events on the right hand side of (7.76) are mutually independent for large t, so using (7.73), (7.75) and (7.15), we obtain the second equality of (3.3). We then obtain the rest of (3.3) using Lemma 7.1. \(\square \)
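To make the last step explicit: since the sets \(\varGamma _i\) have disjoint interiors and union \(\partial A\), so that \(\sum _{i=1}^J |\varGamma _i| = |\partial A|\), the independence just used, together with (7.73) and (7.75), yields the limit in the form

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {P}[R'_{t,k} \le r_t] = \prod _{i=1}^J \exp ( - c_{d,k} |\varGamma _i| e^{-\zeta } ) \times \lim _{t \rightarrow \infty } \mathbb {P}[ F_t(A^{(4r_t)},{{\mathscr {P}}}_t) ] = {\left\{ \begin{array}{ll} \exp ( - |A| e^{-2\zeta } - c_{2,1} |\partial A| e^{-\zeta } ) &{} {\text { if }} d=2,k=1 \\ \exp ( - c_{d,k} |\partial A| e^{-\zeta } ) &{} {\text { otherwise,}} \end{array}\right. } \end{aligned}$$

which is the second equality of (3.3) referred to above.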