Random Euclidean coverage from within

Let X_1, X_2, … be independent random uniform points in a bounded domain A ⊂ R^d with smooth boundary. Define the coverage threshold R_n to be the smallest r such that A is covered by the balls of radius r centred on X_1, …, X_n. We obtain the limiting distribution of R_n and also a strong law of large numbers for R_n in the large-n limit.
For example, if A has volume 1 and perimeter |∂A|: if d = 3 then P[nπR_n^3 − log n − 2 log(log n) ≤ x] converges to exp(−2^{−4} π^{5/3} |∂A| e^{−2x/3}) and (nπR_n^3)/(log n) → 1 almost surely, while if d = 2 then P[nπR_n^2 − log n − log(log n) ≤ x] converges to exp(−e^{−x} − |∂A| π^{−1/2} e^{−x/2}). We give similar results for general d, and also for the case where A is a polytope. We also generalize to allow for multiple coverage. The analysis relies on classical results by Hall and by Janson, along with a careful treatment of boundary effects. For the strong laws of large numbers, we can relax the requirement that the underlying density on A be uniform.


Introduction
This paper is primarily concerned with the following random coverage problem. Given a specified compact region B in a d-dimensional Euclidean space, what is the probability that B is fully covered by a union of Euclidean balls of radius r centred on n points placed independently uniformly at random in B, in the large-n limit with r = r(n) becoming small in an appropriate manner? This is a very natural type of question with a long history; see for example [3,7,10,11,12,15,20]. Potential applications include wireless communications [3,18], ballistics [12], genomics [2], statistics [8], immunology [20], and topological data analysis [4,17].
In a simpler alternative version of this question, the points are placed uniformly not in B, but in a larger region A with B ⊂ A o (A o denotes the interior of A). This version is simpler because boundary effects are avoided. This alternative version of our question was answered independently in the 1980s by Hall [11] and Janson [15]. Another way to avoid boundary effects would be to consider coverage of a smooth manifold such as a sphere (as in [20]), and this was also addressed in [15].
However, the original question does not appear to have been addressed systematically until now (at least, not when d > 1). Janson [15, p. 108] makes some remarks about the case where B = [0, 1] d and balls in the ℓ∞ norm are used, but does not consider more general classes of B or Euclidean balls.
This question seems well worth addressing. In many of the application areas, it is natural to consider only the influence of the random points placed within the region B, rather than also that of hypothetical points placed outside B, for example in the problem of statistical set estimation which we shall discuss below.
We shall express our results in terms of the coverage threshold R n , which we define to be the smallest radius of balls, centred on a set X n of n independent uniform random points in B, required to cover B. Note that R n is equal to the Hausdorff distance between the sets X n and B. More generally, for k ∈ N the k-coverage threshold R n,k is the smallest radius required to cover B k times. These thresholds are random variables, because the locations of the centres are random. We investigate their probabilistic behaviour as n becomes large.
We shall determine the limiting behaviour of P[R n,k ≤ r n ] for any fixed k and any sequence of numbers (r n ), for the case where B is smoothly bounded (for general d ≥ 2) or where B is a polytope (for d = 2 or d = 3). We also obtain similar results for a high-intensity Poisson sample in B, which may be more relevant in some applications, as argued in [11].
We also derive strong laws of large numbers showing that nR d n,k / log n converges almost surely to a finite positive limit, and establishing the value of the limit. These strong laws carry over to more general cases where k may vary with n, and the distribution of points may be non-uniform. We give results of this type for B smoothly bounded (for general d), or for B a polytope (for d ≤ 3) or for B a convex polytope (for d ≥ 4). We emphasise that in all of these results, the limiting behaviour depends on the geometry of ∂B, the topological boundary of B.
We restrict attention here to coverage by Euclidean balls of equal radius. The work of [11,15] allowed for generalizations such as other shapes or variable radii, in their versions of our problem. We do not attempt to address these generalizations here; in principle it may be possible, but it would add considerably to the complication of the formulation of results.
One application lies in statistical set estimation. One may wish to estimate the set B from the sample X n . One possible estimator in the literature is the union of balls of radius r n centred on the points of X n , for some suitable sequence (r n ) n≥1 decreasing to zero. In particular, when estimating the perimeter of B one may well wish to take r n large enough so that these balls fully cover B, that is, r n ≥ R n . For further discussion see Cuevas and Rodríguez-Casal [8].
We briefly discuss some related concepts which have appeared in the literature. One of these is the maximal spacing. For r > 0 let B (r) be the set of points in B at a Euclidean distance more than r from the complement of B, and let R̃ n be the smallest r such that B (r) is contained in the union of balls of radius r centred on points in X n . The maximal spacing is defined to be θ d R̃ d n , where θ d := π d/2 /Γ(1 + d/2), the volume of the unit ball in R d . Since R̃ n is defined in terms of coverage of B (r) rather than B, by considering the maximal spacing one avoids dealing with boundary effects.
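For intuition, R̃ n can be computed numerically for a concrete B. The sketch below is illustrative only (it assumes B = [0, 1]^2, so that B (r) = [r, 1 − r]^2, approximates the supremum over B (r) on a finite grid, and the helper name is ours): since the furthest-point distance over B (r) is nonincreasing in r, the smallest r with B (r) covered at radius r can be found by bisection.

```python
import numpy as np
from scipy.spatial import cKDTree

def restricted_threshold(points, res=200):
    """Bisection for the restricted threshold when B = [0,1]^2.
    worst(r) - r is strictly decreasing in r, so there is one sign change."""
    tree = cKDTree(points)
    g = np.linspace(0.0, 1.0, res)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

    def worst(r):  # furthest-point distance over a grid approximation of B^(r)
        inner = grid[np.all((grid >= r) & (grid <= 1.0 - r), axis=1)]
        return tree.query(inner)[0].max() if len(inner) else 0.0

    lo, hi = 0.0, 0.5
    for _ in range(40):  # bisect: the eroded square must be covered at radius r
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if worst(mid) > mid else (lo, mid)
    return hi

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))          # uniform sample in [0,1]^2
R_tilde = restricted_threshold(pts)
```

Because the eroded set shrinks as r grows, R̃ n never exceeds the full coverage threshold R n computed over all of B.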
Another interpretation of the coverage threshold is via Voronoi cells. Calka and Chenavier [6] have considered, among other things, extremes of circumscribed radii of a Poisson-Voronoi tessellation on all of R d (the circumscribed radius of a cell is the radius of the smallest ball centred on the nucleus that contains the cell). To get a finite maximum they consider the maximum restricted to those cells having nonempty intersection with some bounded window B ⊂ R d . This construction avoids dealing with delicate boundary effects, and the limit distribution, for large intensity, is determined in [6] using results from [15].
It seems at least as natural to consider Voronoi cells inside B with respect to the Poisson sample restricted to B. A little thought (similar to arguments given in [6]) shows that the largest circumradius of Voronoi cells inside B with respect to the sample X n is equal to R n , and likewise for a Poisson sample in B; thus, our results add to those given in [6].
A somewhat related topic is that of the convex hull of the random sample X n . For d = 2 with B convex, the limiting behaviour of the Hausdorff distance from this convex hull to B is obtained in [5]. The limiting behaviour of the Hausdorff distance from X n itself to B (which is what concerns us here) is not the same as for the convex hull.
We work within the following mathematical framework. Let d ∈ N. Suppose on some probability space (S, F, P) that X 1 , X 2 , . . . are independent identically distributed random d-vectors with common probability distribution µ having density function f with compact, Riemann measurable support A ⊂ R d (Riemann measurability of a compact set in R d amounts to its boundary having zero Lebesgue measure). Let B ⊂ A be a specified set (possibly the set A itself). For x ∈ R d and r > 0 set B(x, r) := {y ∈ R d : ‖y − x‖ ≤ r}, where ‖·‖ denotes the Euclidean norm. For n ∈ N, let X n := {X 1 , . . . , X n }. Given also k ∈ N, we define the k-coverage threshold R n,k by R n,k := inf {r > 0 : X n (B(x, r)) ≥ k ∀x ∈ B} , n, k ∈ N, (1.1) where for any point set X ⊂ R d and any D ⊂ R d we write X (D) for the number of points of X in D, and we use the convention inf ∅ := +∞. In particular R n := R n,1 is the coverage threshold. Observe that R n = inf{r > 0 : B ⊂ ∪ n i=1 B(X i , r)}. In the case B = A, R n is the Hausdorff distance between the sample X n and the region A.
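As a concrete illustration of definition (1.1), the thresholds can be estimated by Monte Carlo: R n,k is the largest, over x ∈ B, distance to the k-th nearest sample point, and the supremum over B can be approximated on a finite grid. The sketch below assumes B = [0, 1]^2 and uniform f; the function name and grid resolution are illustrative, not part of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def k_coverage_threshold(points, grid, k=1):
    """Grid approximation of R_{n,k} from (1.1): the worst, over grid
    points of B, distance to the k-th nearest sample point."""
    d = cKDTree(points).query(grid, k=k)[0]
    d = d if d.ndim == 2 else d[:, None]   # query returns 1-D when k == 1
    return d[:, -1].max()

rng = np.random.default_rng(0)
pts = rng.random((1500, 2))                # uniform sample in B = [0,1]^2
g = np.linspace(0.0, 1.0, 200)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
R_n1 = k_coverage_threshold(pts, grid, k=1)   # coverage threshold R_n
R_n3 = k_coverage_threshold(pts, grid, k=3)   # 3-coverage threshold
```

By construction R n,k is nondecreasing in k, since the k-th nearest-neighbour distance dominates the first.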
We are interested in the asymptotic behaviour of R n for large n. More generally, we consider R n,k where k may vary with n. We also consider analogous quantities denoted R t and R t,k respectively, defined similarly using a Poisson sample of points (and formally defined later on). We are mainly interested in the case with B = A.
Let R̃ n,k be the smallest r such that B ∩ A (r) is covered k times by the balls of radius r centred on the points of X n , i.e. set R̃ n,k := inf {r > 0 : X n (B(x, r)) ≥ k ∀x ∈ B ∩ A (r) } , n, k ∈ N. (1.2)
In the case where B = A, we define the maximal k-spacing to be θ d R̃ d n,k , and the maximal 1-spacing θ d R̃ d n is also called the maximal spacing; the reason for the terminology becomes apparent from considering the case with d = 1. The maximal spacing also has a long history; see for example [1,9,13,15]. As described in [1], there are many statistical applications.
When B = A, clearly R̃ n,k differs from R n,k only because of boundary effects. However, these boundary effects are often important in determining the asymptotic behaviour of the threshold. For example, we shall show that when d = 3 and the points are uniformly distributed over a polyhedron, the limiting behaviour is determined by the angle of the sharpest edge if this angle is less than π/2. If this angle exceeds π/2 then the location in A furthest from the sample X n is asymptotically uniformly distributed over ∂A, but if this angle is less than π/2 that location is asymptotically uniformly distributed over the union of those edges which are sharpest.
Let (Z t , t ≥ 0) be a unit rate Poisson counting process, independent of (X 1 , X 2 , . . .) and on the same probability space (S, F, P) (so Z t is Poisson distributed with mean t for each t > 0). The point process P t := {X 1 , . . . , X Zt } is a Poisson point process in R d with intensity measure tµ, where µ is the distribution of X 1 (see e.g. [19]). For t ∈ (0, ∞), k ∈ N we define a secondary k-coverage threshold R t,k := R Zt,k := inf {r > 0 : P t (B(x, r)) ≥ k ∀x ∈ B} , t > 0, (1.3) with R t := R t,1 . Also define R̃ Zt,k := inf {r > 0 : P t (B(x, r)) ≥ k ∀x ∈ B ∩ A (r) } , t > 0, k ∈ N. (1.4) We use R t,k and R̃ Zt,k mainly as stepping stones towards deriving results for R n,k and R̃ n,k respectively, but they are also of interest in their own right. Indeed, some of the literature [12,11,3,18] is concerned more with R t than with R n .

We mention some notation used throughout. For D ⊂ R d , let D̄ denote the closure of D. Let |D| denote the Lebesgue measure (volume) of D, and |∂D| the perimeter of D, i.e. the (d − 1)-dimensional Hausdorff measure of ∂D, when these are defined. Given t > 1, we write log 2 t for log(log t). Let o denote the origin in R d .
Given two sets X , Y ⊂ R d , we set X Y := (X \ Y) ∪ (Y \ X ), the symmetric difference between X and Y. Also, we write X ⊕Y for the set {x+y : x ∈ X , y ∈ Y}.
Given also x ∈ R d we write x + Y for {x} + Y.
Given x, y ∈ R d , we denote by [x, y] the line segment from x to y, that is, the convex hull of the set {x, y}. We write a ∧ b (respectively a ∨ b) for the minimum (resp. maximum) of any two numbers a, b ∈ R.
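The Poisson sampling scheme behind (1.3) is straightforward to simulate: draw Z t ~ Poisson(t), then Z t independent uniform points. A minimal sketch for B = A = [0, 1]^2 (grid approximation; variable names ours):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
t = 1500.0
Zt = rng.poisson(t)                  # Z_t ~ Poisson(t)
P_t = rng.random((Zt, 2))            # Poisson process of intensity t on [0,1]^2
g = np.linspace(0.0, 1.0, 150)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
k = 2
# threshold of (1.3): worst grid-point distance to its k-th nearest Poisson point
d = cKDTree(P_t).query(grid, k=k)[0]
R_tk = d[:, -1].max()
```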

Convergence in distribution
The main results of this section concern weak convergence of R n,k (defined at (1.1)) as n → ∞ with k fixed, in cases where f is uniform on A and B = A.
Our first result concerns the case where A has a smooth boundary in the following sense. We say that A has C 2 boundary if for each x ∈ ∂A there exists a neighbourhood U of x and a real-valued function f that is defined on an open set in R d−1 and twice continuously differentiable, such that ∂A ∩ U , after a rotation, is the graph of the function f .
When d = 2, k = 1 the exponent in (2.3) has two terms. This is because the location in A furthest from the sample X n might lie either in the interior of A, or on the boundary. When d ≥ 3 or k ≥ 2, the exponent of (2.3) has only one term, because the location in A whose k-th nearest point of X n is furthest away lies, with probability tending to 1, on the boundary of A, and likewise for P t . Increasing either d or k makes it more likely that this location lies on the boundary, and the exceptional nature of the limit in the case (d, k) = (2, 1) reflects this.
We also consider the case where A is a polytope, only for d = 2 or d = 3. All polytopes in this paper are assumed to be bounded, connected, and finite (i.e. have finitely many faces).
Assume that A is compact, and either (a) A is polygonal or (b) A has C 2 boundary. Let |∂A| denote the length of the boundary of A. Let k ∈ N, ζ ∈ R.
Case (b) of Theorem 2.2 is in fact covered by Theorem 2.1. One might seek to extend Theorem 2.2 to a more general class of sets A including both of the cases (a) and (b). One could take sets A having piecewise C 2 boundary, with the extra condition that the corners of A are not too pointy, in the sense that for each corner q there exists a triangle with a vertex at q that is contained in A. We would expect that it is possible to extend the result to this more general class.
When d = 3 and A is polyhedral, there are several cases to consider, depending on the value of the angle α 1 subtended by the 'sharpest edge' of ∂A. If α 1 < π/2 then the location in A furthest from the sample X n is likely to be on a 1-dimensional edge of ∂A, while if α 1 > π/2 the furthest location from the sample is likely to be on a 2-dimensional face of ∂A, in the limit n → ∞. If α 1 = π/2 (for example, for a cube), both of these possibilities have non-vanishing probability in the limit.
Since there are several cases to consider, to make the statement of the result more compact we put it in terms of P[R n,k ≤ r n ] for a sequence of constants (r n ).
Denote the 1-dimensional edges of A by e 1 , . . . , e κ . For each i ∈ {1, . . . , κ}, let α i denote the angle that A subtends at edge e i (with 0 < α i < 2π), and write |e i | for the length of e i . Assume the edges are listed in order so that α 1 ≤ α 2 ≤ · · · ≤ α κ . Let |∂ 1 A| denote the total area (i.e., 2-dimensional Hausdorff measure) of all faces of A, and let |∂ 2 A| denote the total length of those edges e i for which α i = α 1 . Let β ∈ R and k ∈ N. Let (r t ) t>0 be a family of real numbers satisfying (as t → ∞)
We now give a result in general d for R̃ n,k , and for R n,k in the case with B ⊂ A o (now we no longer require B = A). These cases are simpler because boundary effects are avoided. In fact, the result stated below has some overlap with already known results; it is convenient to state it here too for comparison with the results just given, and because we shall be using it to prove those results. (2.7) The case k = 1 of the second equality of (2.7) can be found in [6]. It provides a stronger asymptotic result than the one in [18]. A similar statement to the case k = 1 of (2.6) can be found in [16].
The definition (1.1) of the coverage threshold R n,k suggests we think of the number and (random) locations of points as being given, and consider the smallest radius of balls around those points needed to cover B k times. Alternatively, as in [15], one may think of the radius of the balls as being given, and consider the smallest number of balls (with locations generated sequentially at random) needed to cover B k times. That is, given r > 0, define the random variable N (r, k) := inf{n ∈ N : X n (B(x, r)) ≥ k ∀x ∈ B}, and note that N (r, k) ≤ n if and only if R n,k ≤ r. In the setting of Theorem 2.1, 2.2 or 2.3, one may obtain a limiting distribution for N (r, k) (suitably scaled and centred) as r ↓ 0 by using those results together with the following (we write D−→ for convergence in distribution): Proposition 2.6. Let k ∈ N. Suppose Z is a random variable with a continuous cumulative distribution function, and a, b, c ∈ R with a > 0, b > 0 are such that anR d n,k − b log n − c log 2 n D−→ Z as n → ∞. Then as r ↓ 0,
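The duality N (r, k) ≤ n if and only if R n,k ≤ r is easy to check numerically. In the following sketch (illustrative names; B = [0, 1]^2 approximated by a grid), N (r, k) is computed by adding points one at a time until the grid is k-covered at radius r:

```python
import numpy as np
from scipy.spatial import cKDTree

def R_nk(points, grid, k):
    # grid approximation of R_{n,k}: worst distance to the k-th nearest point
    d = cKDTree(points).query(grid, k=k)[0]
    d = d if d.ndim == 2 else d[:, None]
    return d[:, -1].max()

def N_rk(all_points, grid, r, k):
    # smallest n such that the first n points k-cover every grid point at radius r
    for n in range(1, len(all_points) + 1):
        if R_nk(all_points[:n], grid, k) <= r:
            return n
    return None

rng = np.random.default_rng(0)
pts = rng.random((80, 2))
g = np.linspace(0.0, 1.0, 12)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
r = R_nk(pts[:40], grid, 1)      # radius that the first 40 points just achieve
N = N_rk(pts, grid, r, 1)        # then N(r, 1) <= 40, by the duality
```

Since R n,k is nonincreasing in n for nested samples, N is the first index at which the threshold drops below r, so N ≤ 40 here by construction.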

Strong laws of large numbers
The results in this section provide strong laws of large numbers (SLLNs) for R n . For these results we relax the condition that f be uniform on A. We give strong laws for R n when A = B and A is either smoothly bounded or a polytope. Also, for general A we give strong laws for R n when B ⊂ A o , and for R̃ n for general B.
More generally, we consider R n,k , allowing k to vary with n. Throughout this section, assume we are given a constant β ∈ [0, ∞] and a sequence k : N → N with lim n→∞ (k(n)/ log n) = β and lim n→∞ (k(n)/n) = 0. (3.1)
We make use of the following notation throughout: for t > 0 let H(t) := 1 − t + t log t, with H(0) := 1. (3.3) Observe that −H(·) is unimodal with a maximum value of 0 at t = 1. Given a ∈ [0, ∞) and x ≥ 0, define Ĥ a (x) to be the y ≥ a such that yH(a/y) = x, with Ĥ 0 (0) := 0. Note that Ĥ a (x) is increasing in x, and that Ĥ 0 (x) = x. Throughout this paper, the phrase 'almost surely' or 'a.s.' means 'except on a set of P-measure zero'. We write f | A for the restriction of f to A.
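Numerically, Ĥ a (x) can be evaluated by root-finding, since yH(a/y) is increasing in y on [a, ∞) (its derivative is 1 − a/y ≥ 0) and grows from 0 at y = a. The sketch below assumes the form H(t) = 1 − t + t log t (t > 0), H(0) = 1, which is consistent with −H being unimodal with maximum 0 at t = 1; the function names are ours.

```python
import math
from scipy.optimize import brentq

def H(t):
    # assumed form of H from (3.3): H(t) = 1 - t + t*log t, H(0) := 1;
    # H(1) = 0 and -H attains its maximum 0 at t = 1
    return 1.0 if t == 0.0 else 1.0 - t + t * math.log(t)

def H_hat(a, x):
    # H_hat_a(x): the y >= a solving y * H(a/y) = x; H_hat_0(x) = x
    if a == 0.0:
        return x
    g = lambda y: y * H(a / y) - x
    hi = a + 1.0
    while g(hi) < 0.0:          # y*H(a/y) increases from 0 at y = a to infinity
        hi *= 2.0
    return brentq(g, a, hi, xtol=1e-12)
```

For a = 1 one has yH(1/y) = y − 1 − log y, so for example Ĥ 1 (e − 2) = e.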
Theorem 3.1. Suppose that d ≥ 2 and A ⊂ R d is compact with C 2 boundary, that f 0 > 0, and that f | A is continuous at x for all x ∈ ∂A. Assume also that B = A and (3.1) holds. Then as n → ∞, almost surely In particular, if k ∈ N is a constant, then as n → ∞, almost surely
We now consider the case where A is a polytope. Assume the polytope A is compact and finite, that is, has finitely many faces. Let Φ(A) denote the set of all faces of A (of all dimensions). Given a face ϕ ∈ Φ(A), denote the dimension of this face by D(ϕ). Then 0 ≤ D(ϕ) ≤ d − 1, and ϕ is a D(ϕ)-dimensional polytope embedded in R d . Set f ϕ := inf x∈ϕ f (x), let ϕ o denote the relative interior of ϕ, and set ∂ϕ := ϕ \ ϕ o . If D(ϕ) = 0 then ϕ = {v} for some vertex v ∈ ∂A, and ρ ϕ equals the volume of B(v, r) ∩ A, divided by r d , for all sufficiently small r. If d = 2, D(ϕ) = 0 and ω ϕ denotes the angle subtended by A at the vertex ϕ, then ρ ϕ = ω ϕ /2. If d = 3 and D(ϕ) = 1, and α ϕ denotes the angle subtended by A at the edge ϕ (which is either the angle between the two boundary planes of A meeting at ϕ, or 2π minus this angle), then ρ ϕ = 2α ϕ /3.
For d ≥ 4, our result for A a polytope includes a condition that the polytope be convex; we conjecture that this condition is not needed. We include connectivity in the definition of a polytope, so for d = 1 a polytope is defined to be an interval.
Theorem 3.2. Suppose A is a compact finite polytope in R d . If d ≥ 4, assume moreover that A is convex. Assume that f | A is continuous at x for all x ∈ ∂A, and set B = A. Assume k(·) satisfies (3.1). Then, almost surely,
In the next three results, we spell out some special cases of Theorem 3.2.
Let V denote the set of vertices of A, and for v ∈ V let ω v denote the angle subtended by A at vertex v. Assume (3.1) with β < ∞. Then, almost surely,
Remark 3.6. The notion of coverage threshold is analogous to that of connectivity threshold in the theory of random geometric graphs [21]. Our results show that the threshold for full coverage by the balls B(X i , r) is asymptotically twice the threshold for the union of these balls to be connected, if A o is connected, at least when A has a smooth boundary or A = [0, 1] d . This can be seen from comparison of Theorem 3.1 above with [21, Theorem 13.7], and comparison of Corollary 3.5 above with [21, Theorem 13.2]. More general polytopes were not considered in [21].
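The factor of two can be probed in simulation: the union of balls of radius r around the sample is connected exactly when 2r is at least the longest edge of the Euclidean minimum spanning tree, so the connectivity threshold is half that edge length. A rough sketch for A = [0, 1]^2 (names and sample size illustrative; at moderate n the ratio need not yet be close to its limit):

```python
import numpy as np
from scipy.spatial import cKDTree, distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)
pts = rng.random((800, 2))
# union of balls of radius r is connected iff 2r >= longest MST edge
mst = minimum_spanning_tree(distance_matrix(pts, pts))
r_conn = mst.toarray().max() / 2.0       # connectivity threshold
g = np.linspace(0.0, 1.0, 200)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
R_n = cKDTree(pts).query(grid)[0].max()  # grid estimate of coverage threshold
```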
Remark 3.7. We compare our results with [8]. In the setting of our Theorem 3.2, they give an upper bound of f 0 −1 ∨ max ϕ (θ d /(f ϕ ρ ϕ )) in probability, or twice this almost surely. Our (3.6) and (3.8) improve significantly on those results.
Our final result is a law of large numbers for R̃ n,k , no longer requiring B = A. In the case where B is contained in the interior of A, this easily yields a law of large numbers for the k-coverage threshold R n,k . (3.11)

(3.12)
In particular, if k ∈ N is a constant then
In Case (i), all of the above almost sure limiting statements hold for R n,k(n) as well as for R̃ n,k(n) .
Proposition 3.8 has some overlap with known results; the uniform case with A = B = [0, 1] d and f ≡ 1 on A is covered by [9, Theorem 1]. Taking C of that paper to be the class of Euclidean balls centred on the origin, we see that the quantity denoted M 1,m in [9] equals R̃ n . In [9, Example 3] it is stated that the Euclidean balls satisfy the conditions of [9, Theorem 3]. See also [16]. Note also that [14] has a result similar to the case of Proposition 3.8 where d = 2, A = B = [0, 1] 2 and f is uniform over A.

Proof of strong laws of large numbers
In this section we prove the results stated in Section 3. Throughout this section we are assuming we are given a constant β ∈ [0, ∞] and a sequence (k(n)) n∈N satisfying (3.1). Recall that µ denotes the distribution of X 1 , and this has a density f with support A, and that B ⊂ A is fixed, and R n,k is defined at (1.1).
We shall repeatedly use the following two lemmas. For n ∈ N and p ∈ [0, 1] let Bin(n, p) denote a binomial random variable with parameters n, p. Recall that H(·) was defined at (3.3).
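Lemma 4.1 itself is not reproduced in this excerpt, but Chernoff-type bounds of this shape express binomial tail estimates through the function H of (3.3). As a numerical sanity check, assuming H(t) = 1 − t + t log t and the standard lower-tail bound P[Bin(n, p) ≤ k] ≤ exp(−npH(k/(np))) for k ≤ np:

```python
import math
from scipy.stats import binom

def H(t):
    # assumed form of H from (3.3): H(t) = 1 - t + t*log t, H(0) := 1
    return 1.0 if t == 0.0 else 1.0 - t + t * math.log(t)

def chernoff_left_tail(n, p, k):
    # standard Chernoff-type bound: for k <= n*p,
    #   P[Bin(n, p) <= k] <= exp(-n*p*H(k/(n*p)))
    return math.exp(-n * p * H(k / (n * p)))

n, p, k = 1000, 0.05, 30          # mean n*p = 50, left tail evaluated at k = 30
exact = binom.cdf(k, n, p)        # exact binomial tail probability
bound = chernoff_left_tail(n, p, k)
```

The exact tail is comfortably below the bound in this example, as it must be.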
The next lemma is based on what in [21] was called the 'subsequence trick'.
Lemma 4.2 (Subsequence trick). Suppose (U n,k , (n, k) ∈ N × N) is an array of random variables on a common probability space such that U n,k is nonincreasing in n and nondecreasing in k, that is, U n+1,k ≤ U n,k ≤ U n,k+1 almost surely, for all (n, k) ∈ N × N. Let β ∈ [0, ∞), ε > 0, c > 0, and suppose (k(n), n ∈ N) is an N-valued sequence such that k(n)/ log n → β as n → ∞.
Proof. (a) For each n set k′(n) := ⌊(β + ε) log n⌋. Pick K ∈ N with Kε > 1. Then by our assumption, we have for all large enough n that P[n K U n K ,k′(n K ) > log(n K )] ≤ cn −Kε , which is summable in n. Therefore by the Borel–Cantelli lemma, there exists a random but almost surely finite N such that for all n ≥ N we have n K U n K ,k′(n K ) ≤ log(n K ), and also k(m) ≤ (β + ε/2) log m for all m ≥ N K , and moreover (β + ε/2) log((n + 1) K ) ≤ (β + ε) log(n K ) for all n ≥ N . Now for m ∈ N with m ≥ N K , choose n ∈ N such that n K ≤ m < (n + 1) K . Then k(m) ≤ (β + ε/2) log((n + 1) K ) ≤ (β + ε) log(n K ), so k(m) ≤ k′(n K ). Since U m,k is nonincreasing in m and nondecreasing in k, this gives us the result asserted.
(b) The proof is similar to that of (a), and is omitted.
Proof. Denote the right hand side of (4.1) by L 0 . Assume L 0 > 0 (otherwise there is nothing to prove). Let α ∈ (0, L 0 ) and let ε > 0 be such that α + ε < L 0 . Then for r > 0 sufficiently small, r d > (α + ε) inf x∈B µ(B(x, r)). Set r n := (αk(n)/n) 1/d , n ∈ N. Then, given n sufficiently large, we can find x ∈ B such that (α + ε)µ(B(x, r n )) < r d n , and hence nµ(B(x, r n )) < αk(n)/(α + ε), so that if k(n) ≤ e 2 nµ(B(x, r n )) then by Lemma 4.1(a), while if k(n) > e 2 nµ(B(x, r n )) then by Lemma 4.1(c), P[R n,k(n) ≤ r n ] ≤ e −k(n) . Therefore P[R n,k(n) ≤ r n ] is summable in n, because we assume here that k(n)/ log n → ∞ as n → ∞. Thus by the Borel–Cantelli lemma, almost surely R n,k(n) > r n for all but finitely many n, and hence lim inf nR d n,k(n) /k(n) ≥ α. This gives the result.
Given r > 0, b > 0, let ν(r, b) be the largest number m such that there exists a collection of m disjoint closed balls of radius r centred on points of B, each with µ-measure at most b. Given a > 0, and taking log(0) := −∞ here, define γ(a). Also recall that Ĥ β (x) is defined to be the y ≥ β such that yH(β/y) = x.
For each n ∈ N set r n = (α(log n)/n) 1/d . Let m n := ν(r n , ar d n ), and choose x n,1 , . . . , x n,mn ∈ B such that the balls B(x n,1 , r n ), . . . , B(x n,mn , r n ) are pairwise disjoint and each have µ-measure at most ar d n . Set λ(n) := n + n 3/4 . For 1 ≤ i ≤ m n , if k(n) > 1 then by [21, Lemma 1.3], and hence, provided n is large enough, If k(n) = 1 for infinitely many n, then β = 0 and (4.3) still holds for large enough n. By (4.3) and our choice of ε, there is a constant c > 0 such that for all large enough n and all i ∈ {1, . . . , m n } we have Hence, setting E n := ∩ mn i=1 {P λ(n) (B(x n,i , r n )) ≥ k(n)}, for all large enough n we have By (4.2), for large enough n we have log ν(r n , ar d n ) ≥ (γ(a) − ε) log(1/r n ), so that and therefore P[E n ] is summable in n. By [21, Lemma 1.4], for all n large enough P[Z λ(n) < n] ≤ exp(−(1/9)n 1/2 ), which is summable in n. Since R m,k is nonincreasing in m, by the union bound P[R n,k(n) ≤ r n ] ≤ P[R Z λ(n) ,k(n) ≤ r n ] + P[Z λ(n) < n] ≤ P[E n ] + P[Z λ(n) < n], which is summable in n by the preceding estimates. Therefore by the Borel–Cantelli lemma, and the result follows.
The next result can be viewed as extending the preceding lemma to the case with γ(a) = 0, although its proof is closer to that of Lemma 4.3.

Proof of Proposition 3.8
Throughout this subsection we assume, as in Proposition 3.8, that either (i) B is compact and Riemann measurable with µ(B) > 0 and B ⊂ A o , and f is continuous on B; or (ii) B = A.
Proof. By the measurability assumption, ess inf B (ε) (f ) ↓ f 0 as ε ↓ 0. Therefore we can and do choose δ > 0 with µ(B (δ) ) > 0 and with ess inf B (δ) (f ) < α. For r > 0, let σ(r) be the maximum number of disjoint closed balls B i of radius r that can be found such that 0 < µ(B i ∩ B (δ) ) ≤ αθ d r d . Then by applying [21, Lemma 5.2], taking the measure F there to be the restriction of µ to B (δ) , we have that lim inf r↓0 (r d σ(r)) > 0. Each of the balls B i satisfies B i ∩ B (δ) ≠ ∅, so if r < δ the centre of B i lies in B. Hence σ(r) ≤ ν(r, αθ d r d ) for r < δ, and the result follows.
If β = ∞, then by Lemma 4.3, lim inf n→∞ nR d n,k(n) /k(n) ≥ (θ d α) −1 almost surely. Hence for all large enough n we have R n,k(n) ≥ r n ; provided n is also large enough so that r n < δ, we also have R̃ n,k(n) ≥ r n , and (4.5) follows.
Suppose instead that β < ∞. Then by Lemma 4.6, there is a constant c > 0 such that ν(r, αθ d r d ) > cr −d for all small enough r > 0. Using (4.2), it follows that and hence for large enough n we have R n,k(n) ≥ r n and also R̃ n,k(n) ≥ r n , which yields (4.6).
Finally, suppose instead that B = A. Then by using e.g. [21, Lemma 11.12] we can find a compact, Riemann measurable B′ ⊂ A o with µ(B′) > 0 and ess inf x∈B′ f (x) < α. Define S n,k to be the smallest r ≥ 0 such that every point in B′ is covered at least k times by balls of radius r centred on points of X n . By the argument already given we have almost surely that S n,k(n) ≥ r n for all large enough n, but this implies that also R̃ n,k(n) ≥ r n for all large enough n, and this gives us (4.5) and (4.6) in this case too.
Proof. Let L denote the right hand side of (4.7), and let α > L. Choose ε > 0 such that (1 − 2ε)α > L. Then for all sufficiently small r > 0 the required bound holds. Let r n := (αk(n)/n) 1/d . Subdivide R d into cubes of side δr n , and let those cubes in the subdivision which intersect B ∩ A (rn) be denoted Q n,1 , . . . , Q n,mn . Then m n = O(n). For 1 ≤ i ≤ m n , let q n,i denote the first element, in the lexicographic ordering, of Q n,i ∩ B ∩ A (rn) . Then for all large enough n and for 1 ≤ i ≤ m n , we have that
Let n ∈ N. If R̃ n,k(n) > r n , then there exists y ∈ B ∩ A (rn) with X n (B(y, r n )) < k(n). Then for some i ∈ {1, . . . , m n } we have that y ∈ Q n,i and hence, by the triangle inequality, B(q n,i , r n (1 − δd)) ⊂ B(y, r n ), so that X n (B(q n,i , (1 − δd)r n )) < k(n). Thus by (4.8), Lemma 4.1(b) and the union bound, for all large enough n we have that which is summable in n since k(n)/ log n → ∞ and m n = O(n). Hence by the Borel–Cantelli lemma, almost surely, for all large enough n we have R̃ n,k(n) ≤ r n and hence lim sup n→∞ (nR̃ d n,k(n) /k(n)) ≤ α. This gives us the result.
Lemma 4.9. Suppose that f 0 > 0. Then almost surely
Proof. To start, we claim that This follows from the definition (3.2) of f 0 when B = A. In the other case it follows from (3.2) and the assumed continuity of f on A.
For n ∈ N, set r n = (α(log n)/(nθ d )) 1/d . Divide R d into a collection of boxes (i.e., rectilinear hypercubes) of side εr n . Let the boxes in this collection which intersect B ∩ A (rn) be denoted Q n,1 , . . . , Q n,mn , and for 1 ≤ i ≤ m n let q n,i denote the first element, in the lexicographic ordering, of the set Q n,i ∩ B ∩ A (rn) . For each n ∈ N set k′(n) := ⌊(β + ε) log n⌋. Define the event By (4.11), for all large enough n, and all j ∈ {1, . . . , m n }, the binomial random variable X n (B(q n,j , (1 − dε)r n )) satisfies so that by Lemma 4.1(b) and the union bound, Therefore by (4.12), since k′(n) ≤ (β + ε) log n, and since m n = O(n/ log n), for all large enough n we have This shows that P[nθ d R̃ d n,k′(n) > α log n] ≤ n −ε for all large enough n. Therefore by Lemma 4.2, which is applicable since θ d R̃ d n,k /α is nonincreasing in n and nondecreasing in k, we obtain that lim sup n→∞ (nθ d R̃ d n,k(n) / log n) ≤ α almost surely. Since α > Ĥ β (1)/f 0 was arbitrary, we therefore obtain (4.10).
It follows that almost surely R̃ n,k(n) → 0 as n → ∞, and therefore if we are in Case (i) (with B ⊂ A o ) we have R̃ n,k(n) = R n,k(n) for all large enough n. Therefore in this case (3.11)
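As a purely illustrative aside (not part of the proof), the k-coverage threshold studied above can be approximated numerically by discretizing A: A is covered by balls of radius r centred on the sample iff every probe location lies within r of some sample point. A minimal Monte Carlo sketch for d = 2, k = 1 and A the unit square, with the caveat that the grid maximin distance only approximates R_n from below; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
pts = rng.random((n, 2))                  # n uniform points in A = [0,1]^2

# Probe A on a fine grid; the coverage threshold is approximated by the
# largest distance from a grid point to its nearest sample point.
g = (np.arange(50) + 0.5) / 50.0
grid = np.array([(x, y) for x in g for y in g])
d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
R_n = float(np.sqrt(d2.min(axis=1).max()))

# The strong law suggests n * pi * R_n^2 / log n is of order 1 for large n
# (boundary effects and the grid approximation both perturb this).
ratio = n * np.pi * R_n ** 2 / np.log(n)
print(R_n, ratio)
```

Replacing the brute-force distance matrix by a spatial index (e.g. a k-d tree) would be needed for large n, but is beside the point here.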

Proof of Theorem 3.1
In this section and again later on, we shall use certain results from [21], which rely on an alternative characterization of A having a C 2 boundary, given in the next lemma. The pairs (U i , φ i ) are called charts. We shall sometimes also refer to the sets U i as charts here.
Remark 4.11. The converse to Lemma 4.10 also holds: if ∂A is a (d−1)-dimensional submanifold of R d then A has a C 2 boundary in the sense that we have defined it. The proof of this implication is more involved, and not needed in the sequel, so we omit the argument.
We shall use the following lemma here and again later on.
For the proof of Theorem 3.1, we introduce for each n, k ∈ N a new variable R n,k,1 , which is the smallest radius r of balls required to cover k times the boundary region A \ A (r) : (4.13) Loosely speaking, the 1 in the subscript refers to the fact that this boundary region is in some sense (d − 1)-dimensional. It is not hard to see that for all n, k, we have (4.14) Recall that we are assuming that k(n)/ log n → β ∈ [0, ∞] and k(n)/n → 0, as n → ∞, and f 1 := inf ∂A f . Theorem 3.1 is immediate from the next two lemmas.
Lemma 4.14. Under the assumptions of Theorem 3.1, Proof. If β = ∞, then let α > 2/f 1 and for n ∈ N let r n := (αk(n)/(nθ)) 1/d . If β < ∞, then let α > 2Ĥ β (1 − 1/d)/f 1 and for n ∈ N let r n := (α(log n)/(nθ)) 1/d . Note that r n → 0 in all cases, since we assume k(n) = o(n). If β = ∞ then let ε ∈ (0, 1/(2d)) be chosen so that Given n ∈ N, divide R d into a collection of boxes (i.e., rectilinear hypercubes) of side εr n . Let the boxes in this collection which intersect A \ A (rn) be denoted Q n,1 , . . . , Q n, n . For 1 ≤ j ≤ n , let q n,j denote the first element of the set Q n,j ∩ A \ A (rn) , in the lexicographic order. We claim that To see this, note that by [21,Lemma 5.4], we can cover ∂A by m n balls of radius r n , where m n = O(r 1−d n ). Denote the centres of these balls by x n,1 , . . . , x n,mn . Then Hence for each j ≤ n we have Q n,j ⊂ B(x n,i , 3r n ) for some i ≤ m n . Then n ≤ K 0 m n , where K 0 is the largest number of disjoint cubes of side ε that can be packed inside a Euclidean ball of radius 3.
If R n,k (n),1 > r n , then there exists x ∈ A \ A (rn) such that X n (B(x, r n )) < k (n). For some j ≤ n we then have x ∈ Q n,j , so that by the triangle inequality B(q n,j , (1 − dε)r n ) ⊂ B(x, r n ), and hence X n (B(q n,j , (1 − dε)r n )) < k (n). Therefore Then, under the hypotheses of Theorem 3.1, by Lemma 4.12, for all large enough n we have Suppose β = ∞. Then by (4.23) and (4.19), for large enough n we have Hence by Lemma 4.1(b) and the union bound there is a constant δ > 0 such that for all large enough n we have P[E n ] ≤ n exp(−δ k(n)), and therefore since k(n)/ log n → ∞ in this case, P[E n ] is summable in n. Hence by the Borel–Cantelli lemma there is an almost surely finite random variable N such that E n does not occur for any n ≥ N . Then by (4.22), we have for all n ≥ N that R n,k(n),1 ≤ r n , that is, nθR d n,k(n),1 ≤ αk(n). Since α > 2/f 1 is arbitrary, this gives (4.17). Suppose β < ∞. Then by (4.23) and Lemma 4.1(b), for n large enough and hence by (4.20) and (4.21), Hence by (4.22) we have P[nθR d n,k (n),1 > α log n] = O(n −ε ). Therefore by Lemma 4.2, almost surely lim sup n→∞ (nθR d n,k(n),1 / log n) ≤ α. Since α is arbitrary subject to α > 2Ĥ β (1 − 1/d)/f 1 , this shows that (4.18) holds.

Polytopes: Proof of Theorem 3.2
Throughout this subsection we assume, as in Theorem 3.2, that A is a compact finite polytope in R d , and if d ≥ 4 then also A is convex. We also assume that B = A, and f | A is continuous at x for all x ∈ ∂A, and (3.1) holds for some β ∈ [0, ∞].
Assume first that A is convex. Then A ∩ ϕ = ϕ , and ϕ is a supporting hyperplane of A.
We claim that ϕ ∩ ϕ ′ ⊂ ∂ϕ. Indeed, let z ∈ ϕ ∩ ϕ ′ and y ∈ ϕ \ ϕ ′ . Then y ∈ ϕ \ ϕ ′ , and for all ε > 0 the vector y + (1 + ε)(z − y) lies in the affine hull of ϕ but not in A, since it is on the wrong side of the supporting hyperplane ϕ ′ , and therefore not in ϕ. This shows that z ∈ ∂ϕ, and hence the claim.
If y ∈ ϕ \ ψ then y ∉ ϕ ′ so dist(y, ϕ ′ ) > 0. Therefore δ is the infimum of a continuous, strictly positive function defined on a non-empty compact set of vectors y, and hence 0 < δ < ∞. Thus for x ∈ ϕ o , with w, a as given above, we have Either way dist(x, ψ) ≥ dist(x, ∂ϕ), and hence by (4.26), dist(x, ϕ ′ ) ≥ δ dist(x, ∂ϕ). Therefore K(ϕ, ϕ ′ ) ≤ δ −1 < ∞ as required. Now we drop the assumption that A is convex. Suppose first that the dimension of the face ϕ ∩ ϕ ′ is zero. Without loss of generality we may assume ϕ ∩ ϕ ′ = {o}. Then there exists a neighbourhood U of o, and cones K, K ′ , such that ϕ ∩ U = K ∩ U and ϕ ′ ∩ U = K ′ ∩ U . Then we have by a compactness argument that δ > 0. Choose Moreover, by a compactness argument dist(x, ϕ ′ ) is bounded away from 0 on x ∈ ϕ \ B(o, r), while dist(x, ∂ϕ) is bounded away from infinity. Combining this with the preceding bound we obtain that K(ϕ, ϕ ′ ) < ∞ whenever D(ϕ ∩ ϕ ′ ) = 0.
If d = 2 then the preceding case with D(ϕ ∩ ϕ ′ ) = 0 is the only non-trivial case to consider, so we have the result in this case. Now suppose d = 3. Then we also need to consider the case with D(ϕ ∩ ϕ ′ ) = 1. In this case D(ϕ) = D(ϕ ′ ) = 2. Then we may write ϕ ∩ ϕ ′ = ∪ k j=1 e j , where e 1 , . . . , e k are edges of A, all contained in a single line (for example k could be 2 if A is a cube with a small polyhedral notch removed in a neighbourhood of the middle of one of its edges). Without loss of generality we assume that this line is the x-axis. Pick j ∈ {1, . . . , k} and assume without loss of generality that the endpoints of the line segment e j are at o and at (a, 0, 0) for some a > 0, and that ϕ ⊂ R 2 × {0}.
Let α be the angle subtended by A at the edge e j . Then 0 ≤ α < 2π with α ≠ π. Also, pick b > 0 such that none of the edges e j , j ∈ {1, . . . , k}, intersect with the open line segment from (−b, 0, 0) to o.
For x ∈ ϕ o with x close to e j we need to find a lower bound for dist(x, ϕ ′ )/ dist(x, ∂ϕ). We argue separately depending on whether x lies in the 'left' region L : Then setting Similarly, we can find ε j > 0 and Define ϕ, ϕ ′ as before. The planes ϕ and ϕ ′ are at an angle α ′ := min(α, 2π − α) to each other. Given for all x ∈ ϕ ∩ M . Combined with the preceding estimates for x ∈ L and for x ∈ R, we have By a compactness argument, for is also bounded away from zero. This shows that K(ϕ, ϕ ′ ) is finite in this case too.
Recall that we are assuming (3.1). Also, recall that for each face ϕ of A we denote the angular volume of A at ϕ by ρ ϕ , and set f ϕ := inf ϕ f (·). Lemma 4.16. Let ϕ be a face of A. Then, almost surely: , and (4.27) follows. Also, if β < ∞ and D(ϕ) = 0, then by Lemma 4.5, almost surely lim inf n→∞ (nR d n,k(n) / log n) ≥ β/(aρ ϕ ), and (4.28) in this case follows. There is a constant c > 0 such that for small enough r > 0 we can find at least cr −D(ϕ) points x i ∈ B(x 0 , δ) ∩ ϕ that are all at a distance more than 2r from each other, and therefore lim inf r↓0 r D(ϕ) ν(r, aρ ϕ r d ) ≥ c. Thus by (4.2) we have γ(aρ ϕ ) ≥ D(ϕ), and so by Lemma 4.4 we have almost surely that and (4.28) follows.
Set j = d − D(ϕ). Divide R d into a collection of boxes of side εr n . Let the boxes in this collection which intersect with ϕ K j rn \ (∂ϕ) K j+1 rn be denoted Q n,1 , . . . , Q n,mn . Note that m n = O(r −D(ϕ) n ). For 1 ≤ ≤ m n let q n, denote the first element (in the lexicographic ordering) of the set Q n, ∩ ϕ K j rn \ (∂ϕ) K j+1 rn .
Given n ∈ N, set k (n) := (β + ε) log n if β < ∞ and k (n) := k(n) if β = ∞. Suppose F n,k (n),rn,ϕ occurs. Then there exists x ∈ ϕ K j rn \ (∂ϕ) K j+1 rn with X n (B(x, r n )) < k (n). Then there exists ∈ {1, . . . , m n } such that x ∈ Q n, , and hence by the triangle inequality, B(q n, , (1 − dε)r n ) ⊂ B(x, r n ). Hence By Lemma 4.18, for all n, and for 1 ≤ ≤ m n , the ball B(q n, , (1 − dε)r n ) does not intersect any of the faces of dimension d − 1, other than those which meet at ϕ (i.e., which contain ϕ). Also f (y) ≥ (1 − dε)f ϕ for all y ∈ A sufficiently close to ϕ. Therefore, for all n sufficiently large and all ∈ {1, . . . , m n }, which is summable in n because in this case k(n)/ log n → ∞. Therefore by the Borel-Cantelli lemma E n , and hence F n,k(n),rn,ϕ , occurs for only finitely many n, almost surely. This yields part (i).

Proof of results from Section 2
Throughout this section, we assume f = f 0 1 A , where A ⊂ R d is compact and Riemann measurable with |A| > 0, and f 0 := |A| −1 .

Preliminaries, and proof of Propositions 2.4 and 2.6
We start by showing that any weak convergence result for R t (in the large-t limit) of the type we seek to prove implies the corresponding weak convergence result for R n in the large-n limit.
Lemma 5.1 (de-Poissonization). Suppose µ is uniform over A. Let k ∈ N, and a, b, c ∈ R with a > 0 and b > 0. Let F be a continuous cumulative distribution function. Suppose Proof. For each n ∈ N, set t(n) := n − n 3/4 . Let γ ∈ R. Given n ∈ N ∩ (1, ∞), set r n := ((b log n + c log 2 n + γ)/(an)) 1/d . Then by (5.1), and the continuity of F , we obtain that Moreover, since adding further points reduces the k-coverage threshold, which tends to zero by Chebyshev's inequality. Now suppose R t(n),k > r n . Pick a point X of B that is covered by fewer than k of the closed balls of radius r n centred on points in P t(n) (this can be done in a measurable way). If, additionally, R n,k ≤ r n , then we must have Z t(n) < n, and at least one of the points X Z t(n) +1 , X Z t(n) +2 , . . . , X n must lie in B(X, r n ). Therefore which tends to zero by Chebyshev's inequality. Combined with (5.5) and (5.4) this shows that P[R n,k ≤ r n ] → F (γ) as n → ∞, which gives us (5.2) as required.
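The Chebyshev steps in the proof can be spelled out. Since Z t(n) is Poisson with mean and variance t(n) = n − n 3/4 , a sketch of the bound used above is:

```latex
\mathbb{P}\bigl[Z_{t(n)} \ge n\bigr]
  = \mathbb{P}\bigl[Z_{t(n)} - t(n) \ge n^{3/4}\bigr]
  \le \frac{\operatorname{Var} Z_{t(n)}}{n^{3/2}}
  = \frac{n - n^{3/4}}{n^{3/2}}
  \le n^{-1/2} \longrightarrow 0 \quad (n \to \infty),
```

and the bound in the other direction (that Z t(n) falls short of n by more than the order-n 3/4 slack only with vanishing probability) is entirely analogous.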
The spherical Poisson Boolean model (SPBM) is defined to be a collection of Euclidean balls (referred to as grains) of i.i.d. random radii, centred on the points of a homogeneous Poisson process in R d . Often in the literature the SPBM is taken to be the union of these balls (see e.g. [19]) but here, following [12], we take the SPBM to be the collection of these balls, rather than their union. This enables us to consider multiple coverage: given k ∈ N we say a point x ∈ R d is covered k times by the SPBM if it lies in k of the balls in this collection. The SPBM is parametrised by the intensity of the Poisson process and the distribution of the radii.
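A minimal sketch of k-fold coverage checking in the SPBM with deterministic grain radii (parameter names are illustrative; padding the window by the radius is one simple way to avoid edge effects in the simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, r, k = 200.0, 0.1, 2     # Poisson intensity, grain radius, coverage multiplicity

# Homogeneous Poisson process on a window padded by r, so that every grain
# that can cover a point of [0,1]^2 has its centre inside the window.
side = 1.0 + 2 * r
N = rng.poisson(lam * side * side)
centres = rng.random((N, 2)) * side - r

def cover_count(x):
    """Number of grains B(centre, r) of the Boolean model containing the point x."""
    return int((((centres - x) ** 2).sum(axis=1) <= r * r).sum())

g = np.linspace(0.0, 1.0, 30)
grid = np.array([(x, y) for x in g for y in g])
counts = np.array([cover_count(p) for p in grid])
k_covered = bool((counts >= k).all())   # is [0,1]^2 (on this grid) covered k times?
print(counts.mean(), k_covered)
```

The mean of `counts` should be close to λπr², the expected number of grains covering a fixed point.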
We shall repeatedly use the following result, which comes from results in Janson [15] or (when k = 1) Hall [11]. Recall that c d was defined at (2.1).
is the expected volume of a ball of radius Y . Let β ∈ R. Suppose δ(λ) ∈ (0, ∞) is defined for all λ > 0, and satisfies Let B ⊂ R d be compact and Riemann measurable, and for each λ > 0 let B λ ⊂ R d be Riemann measurable with the property that B λ ↑ B as λ → ∞. Let E λ be the event that every point in B λ is covered at least k times by a spherical Poisson Boolean model with intensity λ and radii having the distribution of δ(λ)Y . Then Proof. For k = 1, when B λ = B for all λ > 0 the result can be obtained from [11, Theorem 2]. Since [11] does not address multiple coverage, we use [15] instead to prove the result for general k. Because of the way the result is stated in [15], we need to express log(1/(αδ d )) and log 2 (1/(αδ d )) asymptotically in terms of λ (we are now writing just δ for δ(λ)). By (5.6), so that log(1/(αδ d )) = log λ − log 2 λ + o(1) and log 2 (1/(αδ d )) = log 2 λ + o(1). Therefore we have as λ → ∞ that which tends to β by (5.6). Let α J be the quantity denoted α by Janson [15] (our α is the quantity so denoted by Hall [11]). In the present setting, as described in [15, Example 4], Let Ã = B(o, r 0 ) with r 0 chosen large enough so that B is contained in the interior of Ã. Let (X 1 , Y 1 ), (X 2 , Y 2 ), . . . be independent identically distributed random (d + 1)-vectors with X 1 uniformly distributed over Ã and Y 1 having the distribution of Y , independent of X 1 .
Then P[Ẽ λ \ E λ ] ≤ P[Z λ|Ã| < n(λ)], which tends to zero by Chebyshev's inequality. Also, if Ẽ λ fails to occur, we can and do choose (in a measurable way) a point V ∈ B which is covered by fewer than k of the balls B(X i , δY i ), 1 ≤ i ≤ n(λ). Then which tends to zero by Chebyshev's inequality and (5.8). These estimates, together with (5.10), give us the asserted result (5.7) for general k in the case with B λ = B for all λ. It is then straightforward to obtain (5.7) for general (B λ ) satisfying the stated condition.
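The logarithmic expansion invoked in the proof above can be spelled out. Assuming, as (5.6) entails, that λαδ(λ) d = (1 + o(1)) log λ, we get

```latex
\log\frac{1}{\alpha\delta^d}
  = \log\lambda - \log\bigl((1+o(1))\log\lambda\bigr)
  = \log\lambda - \log_2\lambda + o(1),
\qquad
\log_2\frac{1}{\alpha\delta^d} = \log_2\lambda + o(1),
```

with log 2 := log log, matching the two asymptotic statements used above.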

Proof of Proposition 2.4. Suppose for some
The point process P t = {X 1 , . . . , X Zt } is a homogeneous Poisson process of intensity tf 0 in A. Let Q t be a homogeneous Poisson process of intensity tf 0 in R d \ A, independent of P t . Then P t ∪ Q t is a homogeneous Poisson process of intensity tf 0 in all of R d . The balls of radius r t centred on the points of P t ∪ Q t form a Boolean model in R d , and in the notation of Lemma 5.2, here we have δ = r t , λ = tf 0 so that log 2 λ = log 2 t + o(1), and P[Y = 1] = 1, so that α = θ d . Also, by (5.11) we have the condition (5.6) from Lemma 5.2.
First assume B is compact and Riemann measurable with B ⊂ A o . Then for all large enough t we have B ⊂ A (rt) , in which case R t,k ≤ r t if and only if all locations in B are covered at least k times by the balls of radius r t centred on points of P t ∪ Q t , which is precisely the event denoted E λ in Lemma 5.2. Therefore by that result, we obtain that P[R t,k ≤ r t ] → exp(−(c d /(k − 1)!)|B|e −β ). This yields the second equality of (2.7). We then obtain the first equality of (2.7) using Lemma 5.1. Now consider general Riemann measurable B ⊂ A (dropping the previous stronger assumption on B). Given ε > 0, by using [21, Lemma 11.12] we can find a Riemann measurable compact set B ′ ⊂ A o with |A \ B ′ | < ε. Then B ∩ B ′ is also Riemann measurable. Let S Zt,k be the smallest radius of balls centred on P t needed to cover k times the set B ∩ B ′ . Then P[S Zt,k ≤ r t ] → exp(−(c d /(k − 1)!)|B ∩ B ′ |e −β ). For sufficiently large t we have P[R Zt,k ≤ r t ] ≤ P[S Zt,k ≤ r t ], but also P[{S Zt,k ≤ r t } \ {R Zt,k ≤ r t }] is bounded by the probability that A \ B ′ is not covered k times by a SPBM of intensity tf 0 with radii r t , which converges to 1 − exp(−(c d /(k − 1)!)|A \ B ′ |e −β ). Using these estimates we may deduce that and since ε can be arbitrarily small, that This yields the second equality of (2.6), and then we can obtain the first equality of (2.6) by a similar argument to Lemma 5.1.
Proof of Proposition 2.6. Recall that we assume a n R d n,k − b log n − c log 2 n D −→ Z. Let (r m ) m≥1 be an arbitrary real-valued sequence satisfying r m ↓ 0 as m → ∞. Let t ∈ R. Then for all but finitely many m ∈ N we can and do define n m ∈ N by

Proof of Theorem 2.1: first steps
In this subsection we assume that d ≥ 2 and A has C 2 boundary. Let ζ ∈ R, and assume that (r t ) t>0 satisfies Let k ∈ N. Given any point process X in R d , and any t > 0, define the 'vacant' region V t (X ) := {x ∈ R d : X (B(x, r t )) < k}, (5.13) which is the set of locations in R d covered fewer than k times by the balls of radius r t centred on the points of X . Given also D ⊂ R d , define the event which is the event that every location in D is covered at least k times by the union of balls of radius r t centred on the points of X (the F stands for 'fully covered'). Given x ∈ R d , we let π d (x) denote the d-th co-ordinate of x, and refer to π d (x) as the height of x. Given consists of exactly two points, we refer to these as p r (x 1 , . . . , x d ) and q r (x 1 , . . . , x d ) with p r (x 1 , . . . , x d ) at a smaller height than q r (x 1 , . . . , x d ) (or if they are at the same height, take p r (x 1 , . . . , x d ) < q r (x 1 , . . . , x d ) in the lexicographic ordering). Define the indicator function  Also, given a ∈ (0, ∞) and δ t > 0 for each t > 0, Remark. Usually we use Lemma 5.3 in the case where ζ < ∞. In this case the extra condition lim sup t→∞ (tr d t /(log t)) < ∞ is automatic. When ζ = ∞, in (5.16) we use the convention e −∞ := 0.
Proof of Lemma 5.3. Assume for now that ζ < ∞. Considering the slices of balls of radius r t centred on points of U t that intersect the hyperplane R d−1 × {0}, we have a (d − 1)-dimensional Boolean model with (in the notation of Lemma 5.2) To see the moment assertions here, note that here Y = (1 − U 2 ) 1/2 with U uniformly distributed over [0, 1], so that θ d−1 Y d−1 is the (d − 1)-dimensional Lebesgue measure of a (d − 1)-dimensional affine slice through the unit ball at distance U from its centre, and an application of Fubini's theorem gives the above assertions for α := and log 2 λ = log 2 t + log(1 − 1/d) + o(1). Checking (5.6) here, we have  It remains to prove (5.17); we now consider general ζ ∈ (0, ∞]. Let E t be the (exceptional) event that there exist d distinct points Suppose that the event displayed in (5.17) occurs, and that E t does not. Let w be a location of minimal height (i.e., d-coordinate) in the closure ar t ], and be a 'corner' given by the meeting point of the boundaries of d balls of radius r t centred at points of U t , located at x 1 , . . . , x d say, with x 1 the lowest of these d points, and with #(∩ d i=1 ∂B(x i , r t )) = 2, and w ∈ V t (U t \ {x 1 , . . . , x d }). Moreover w must be the point p rt (x 1 , . . . , x d ) rather than q rt (x 1 , . . . , x d ), because otherwise by extending the line segment from p rt (x 1 , . . . , x d ) to q rt (x 1 , . . . , x d ) slightly beyond q rt (x 1 , . . . , x d ) we could find a point in V t (U t ) ∩ (Ω t × [0, ar t ]) lower than w, contradicting the statement that w is a location of minimal height in the closure of V t (U t ) ∩ (Ω t × [0, ar t ]). Moreover, w must be strictly higher than x 1 , since if π d (w) ≤ min(π d (x 1 ), . . . , π d (x d )), then locations just below w would lie in V t (U t ) ∩ (Ω t × [0, ar t ]), contradicting the statement that w is a point of minimal height in the closure of V t (U t ∩ (Ω t × [0, ar t ])). Hence, h rt (x 1 , . . . 
, x d ) = 1, where h r (·) was defined at (5.15).
Note that there is a constant c := c (d, a) > 0 such that for any x ∈ H with 0 ≤ π d (x) ≤ a we have that |B(x, 1) ∩ H| ≥ (θ d /2) + c π d (x). Hence for any r > 0, and any x ∈ H with 0 ≤ π d (x) ≤ ar, Thus if the event displayed in (5.17) holds, then almost surely there exists at least one d-tuple of points x 1 , . . . , x d ∈ U t , such that h rt (x 1 , . . . , x d ) = 1, and moreover . By the Mecke formula (see e.g. [19]), there is a constant c such that the expected number of such d-tuples is bounded by It follows from (5.12) that Now we change variables to Hence by (5.20), there is a constant c such that the expression in (5.19) is at most In the last expression the first line is bounded by a constant times the expression is bounded because we assume lim sup t→∞ (tr d t /(log t)) < ∞. The second line of (5.21) tends to zero by dominated convergence because tr d t → ∞ by (5.12) and because the indicator function (z 2 , . . . , z d ) → h 1 (o, z 2 , . . . , z d ) has bounded support and is zero when π d (p 1 (o, z 2 , . . . , z d )) ≤ 0. Therefore by Markov's inequality we obtain (5.17).
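For reference, the multivariate Mecke formula used above (see e.g. [19]) states that for a Poisson process η with intensity measure Λ, and measurable f ≥ 0,

```latex
\mathbb{E} \sum_{(x_1,\ldots,x_d) \in \eta^{d}_{\neq}} f(x_1,\ldots,x_d,\eta)
  = \int \cdots \int
    \mathbb{E}\, f\bigl(x_1,\ldots,x_d,\; \eta \cup \{x_1,\ldots,x_d\}\bigr)
    \,\Lambda(\mathrm{d}x_1)\cdots\Lambda(\mathrm{d}x_d),
```

where η d ≠ denotes the set of d-tuples of distinct points of η; here it is applied to bound the expected number of d-tuples of points of U t forming a 'corner'.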
The idea of proof for Theorem 2.1 goes as follows. Let γ ∈ (1/(2d), 1/d). We work simultaneously on two length scales, namely the radius r t satisfying (5.12) (and hence satisfying r t = Θ(((log t)/t) 1/d )), and a coarser length-scale given by t −γ . If d = 2, we approximate to ∂A by a polygon of side-length that is Θ(t −γ ). We approximate to P t by a Poisson process inside the polygon, and can determine the asymptotic probability of complete coverage of this approximating polygon by considering a lower-dimensional Boolean model on each of the edges. In fact we shall line up these edges, by means of rigid motions, into a line segment embedded in the plane; in the limit we obtain the limiting probability of covering this line segment with balls centred on a Poisson process in the half-space to one side of this line segment. By separate estimates we can show that the error terms, either from the corners of the polygon or from the approximation of Poisson processes, are negligible in the large-t limit.
For d ≥ 3 we would like to approximate to A by a polyhedral set A t obtained by taking its surface to be a triangulation of the surface of A with side-lengths Θ(t −γ ). However, two obstacles to this strategy present themselves.
The first obstacle is that in 3 or more dimensions, it is harder to be globally explicit about the set A t and the set difference A △ A t . We deal with this by triangulating ∂A locally rather than globally; we break ∂A into finitely many pieces, each of which lies within a single chart within which ∂A, after a rotation, can be expressed as the graph of a C 2 function on a region U in R d−1 . Then we triangulate U (in the sense of tessellating it into simplices) explicitly and use this to determine an explicit local triangulation (in the sense of approximating the curved surface by a union of (d − 1)-dimensional simplices) of ∂A.
The second obstacle is that the simplices in the triangulation cannot in general be reassembled into a (d − 1)-dimensional cube. To get around this, we shall pick γ ′ ∈ (γ, 1/d) and subdivide these simplices into smaller (d − 1)-dimensional cubes of side t −γ ′ ; we can reassemble these smaller (d − 1)-dimensional cubes into a cube in R d−1 , and control the boundary region near the boundaries of the smaller (d − 1)-dimensional cubes, or near the boundary of the simplices, by separate estimates.
For each x ∈ ∂A, we can find an open neighbourhood N x of x, a number r(x) > 0 such that B(x, 3r(x)) ⊂ N x and a rotation ρ x about x such that ρ x (∂A ∩ N x ) is the graph of a real-valued C 2 function f defined on an open disk D ⊂ R d−1 , with ⟨f (u), e⟩ ≤ 1/9 for all u ∈ D and all unit vectors e in R d−1 , where ⟨·, ·⟩ denotes the Euclidean inner product in R d−1 and f (u) := (∂ 1 f (u), ∂ 2 f (u), . . . , ∂ d−1 f (u)) is the derivative of f at u. Moreover, by taking a smaller neighbourhood if necessary, we can also assume that there exists ε > 0 and a ∈ R such that f (u) ∈ [a + ε, a + 2ε] for all u ∈ D and also ρ By a compactness argument, we can and do take a finite collection of points Then there are constants ε j > 0, and rigid motions ρ j , 1 ≤ j ≤ J, such that for each j the set ρ j (∂A ∩ N x j ) is the graph of a C 2 function f j defined on a ball I j in R d−1 , with ⟨f j (u), e⟩ ≤ 1/9 for all u ∈ I j and all unit vectors e ∈ R d−1 , and also with ε j ≤ f j (u) ≤ 2ε j for all u ∈ I j and A∩( Let Γ ⊂ ∂A be a closed set such that Γ ⊂ B(x j , r(x j )) for some j ∈ {1, . . . , J}, and such that ∂Γ has finite (d − 2)-dimensional Hausdorff measure, where in this section we set ∂Γ := Γ ∩ ∂A \ Γ, the boundary of Γ relative to ∂A. To simplify notation we shall assume that Γ ⊂ B(x 1 , r(x 1 )), and moreover that ρ 1 is the identity map. Then Γ = {(u, f 1 (u)) : u ∈ U } for some bounded set U ⊂ R d−1 . Also, writing φ(·) for f 1 (·) from now on, we assume Note that for any u, v ∈ U , by the intermediate value theorem we have for some (5.25) Let γ ∈ (1/(2d), 1/d). When d = 2, we approximate to Γ by a polygonal line Γ t with edge-lengths that are Θ(t −γ ). When d = 3, we approximate to Γ by a polyhedral surface Γ t with all of its vertices in ∂A, and face diameters that are Θ(t −γ ), taking all the faces of Γ t to be triangles.
For general d, we wish to approximate to Γ by a set Γ t given by a union of the (d − 1)-faces in a certain simplicial complex of dimension To be explicit about this, divide R d−1 into cubes of dimension d − 1 and side t −γ , and divide each of these cubes into (d − 1)! simplices (we take these simplices to be closed). Let U t be the union of all those simplices in the resulting tessellation of R d−1 into simplices, that are contained within U , and let U − t be the union of those simplices in the tessellation which are contained within U (3dt −γ ) , where for r > 0 we set U (r) to be the set of x ∈ U at a Euclidean distance more than r from R d−1 \ U . If d = 2, the simplices are just intervals. See Figure 1.
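The subdivision of each cube into (d − 1)! simplices is the standard one in which the simplex containing a point is determined by the ordering of its coordinates (the Kuhn triangulation). A small illustrative sketch for a 3-dimensional cube, checking that each of the 3! = 6 closed simplices carries volume 1/3!; all names are illustrative:

```python
import numpy as np
from itertools import permutations
from collections import Counter

m = 3                                     # dimension of the cube (d - 1 in the text)
simplices = list(permutations(range(m)))  # one closed simplex per coordinate ordering

rng = np.random.default_rng(2)
pts = rng.random((30000, m))
# A point x of the unit cube lies in the simplex
#   {0 <= x_{s[0]} <= x_{s[1]} <= ... <= x_{s[m-1]} <= 1}
# indexed by the permutation s = argsort(x).
freq = Counter(tuple(int(i) for i in np.argsort(p)) for p in pts)

# Each of the m! simplices has volume 1/m!, so each label gets ~1/6 of the mass.
fracs = [freq[s] / len(pts) for s in simplices]
print(fracs)
```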
where the function φ t is affine on each of the simplices making up U − t , and agrees with the function φ on each of the vertices of these simplices. For t > 0, let R t be a homogeneous Poisson process on R d \ A of intensity tf 0 , independent of P t , and let H t := P t ∪ R t . Then H t is a homogeneous Poisson process in R d of intensity tf 0 . Define subsets A t , A − t , Ã t , Ã − t of R d and Poisson processes P t and P̃ t in R d by Thus A t is a 'thick slice' of A near the boundary region Γ, P t is the restriction of P t to this 'thick slice', while P̃ t is a coupled Poisson process on an approximating region Ã t with flat boundary sections. The rest of this subsection is devoted to proving the following intermediate step towards a proof of Theorem 2.1.
The proof of Proposition 5.4 proceeds in stages. The first step involves Taylor expansion to show that φ t is a good approximation to φ.
Lemma 5.5. There is a constant K ∈ (0, ∞) such that: Also, since φ t is affine on T t,i and agrees with φ on u 0 , u 1 , . . . , u d−1 , there exists Hence,

By the intermediate value theorem again, each component of
This gives us part (a).
For part (c), let x ∈ U (rt) t × R, and let u ∈ U (rt) t be the projection of x onto the first d − 1 coordinates. Then if y ∈ B(x, r t ) ∩ (A t △ Ã t ), we have y = (v, s) with v − u ≤ r t and either φ t (v) < s ≤ φ(v) or φ(v) < s ≤ φ t (v). Therefore using part (a) yields where the integral is a (d − 1)-dimensional Lebesgue integral and B d−1 (u, r t ) denotes a (d − 1)-dimensional ball of radius r t centred on u. This gives us part (c).
Lemma 5.6. Let ε > 0, a > 0, c > 0. Then for all large enough t and all Proof. By Lemma 5.5(a), given Then by the triangle inequality, provided (c + K)t −2γ ≤ ar t we have B(x, ar t ) ⊃ B(y, ar t − K ′ t −2γ ), where we set K ′ := c + K. Thus by Lemma 4.12 and Lemma 5.5(c), provided t is large enough, we can apply Lemma 4.12 in the last line above because if y ∈ A − t is near the upper boundary of A t then B(y, ar t − K ′ t −2γ ) ∩ A t = B(y, ar t − K ′ t −2γ ) ∩ A, while the lower boundary is flat. Since t −2γ = o(r t ), (5.27) gives us the result.
Recall that H and U t were defined just before Lemma 5.3. Let K 2 be the minimum number of balls of radius δ/2 required to cover B(o, K 1 ). For each z ∈ R d , we can and do cover B(z, K 1 r t ) by K 2 balls of radius δr t /2, denoted B t,z,1 , . . . , B t,z,K 2 . Assume these are enumerated in such a way that the balls in the covering which intersect with A − t ∪ Ã − t are B t,z,1 , . . . , B t,z,K (t,z) for some K (t, z) ≤ K 2 . Replacing each ball B t,z,j , 1 ≤ j ≤ K (t, z), by a ball B t,z,j , of radius δr t centred on a point of B t,z,j ∩ (A − t ∪ Ã − t ), we obtain a covering of B(z, K 1 r t ) ∩ (A − t ∪ Ã − t ) by at most K 2 balls of radius δr t centred on points in A − t ∪ Ã − t . For 1 ≤ j ≤ K (t, z), let B t,z,j denote the ball of radius (1 − δ)r t having the same centre as B t,z,j .
Define the measure µ t (·) := tf 0 | · ∩ A t ∩ Ã t |, which is the intensity measure of the Poisson point process P t ∩ P̃ t . By Lemma 5.6, for large t each ball B t,z,j , and hence for some constant c > 0 independent of t, z, j, ). Therefore using (5.12) we obtain that for some further constant c > 0, , we can find j ∈ {1, . . . , K (t, z)} such that the centre of B t,z,j is distant at most δr t from x, and hence by the triangle inequality B t,z,j ⊂ B(x, r t ) so that (P t ∩ P̃ t )(B(x, r t )) ≥ k. Therefore F t (B(z, K 1 r t ) ∩ (A − t ∪ Ã − t ), P t ∩ P̃ t ) occurs, so that (5.28) follows from (5.31) and the union bound.
The proofs of (5.29) and (5.30) are similar but easier.
, the union of all (d − 2)-dimensional faces in the boundaries of the faces making up Γ t . For this section we pick γ ′ ∈ (γ, 1/d) and define the sets Q t , Q + t for all t > 0 by (5.32) Lemma 5.8. It is the case that P[F t (Q + By Lemma 5.7, the union bound, (5.12) and the fact that , which tends to zero.
The proof for P[F t (Q t , P t )] is similar; we need fewer balls of radius r t to cover We shall conclude the proof of Proposition 5.4 by means of a device we refer to as the induced coverage process. The definition of this is somewhat simpler for d = 2, so we shall first consider this case for presentational purposes.
Suppose that d = 2. The closure of the set Γ t \ Q t is a union of closed intervals (i.e., line segments) with total length |Γ t | − 14r t κ(t), which tends to |Γ| as t → ∞ because κ(t) = O(t γ ) and γ < 1/2. Denote these intervals by I t,1 , . . . , I t,κ(t) , taking I t,i to be a sub-interval of H t,i for 1 ≤ i ≤ κ(t).
For 1 ≤ i ≤ κ(t) define the closed rectangular strips (which we call 'blocks') S t,i (respectively S + t,i ), of dimensions |I t,i | × 3r t (resp. |I t,i | × 4r t ), to consist of those locations inside A t lying within perpendicular distance at most 3r t (resp., at most 4r t ) of I t,i . That is, where e t,i is a unit vector perpendicular to I t,i pointing inwards into Ã t from I t,i . Define the long horizontal rectangular strips 4r t ], and denote the lower boundary of S t (that is, its intersection with the x-axis) by L t . Now choose rigid motions ρ t,i of the plane, 1 ≤ i ≤ κ(t), such that after applications of these rigid motions the blocks S t,i are lined up end to end to form the strip S t , with the long edge I t,i of the block transported to part of the lower boundary L t of S t . In other words, choose the rigid motions so that the sets ρ t,i (S t,i ), 1 ≤ i ≤ κ(t), have pairwise disjoint interiors and their union is S t , and also ρ t,i (I t,i ) ⊂ L t for 1 ≤ i ≤ κ(t) (see Figure 2). This also implies that ∪ In lining up these blocks, let us carry the points of P t within each block to the new location; that is, we apply ρ t,i both to the block S + t,i and to the points of P t ∩ S + t,i . By the restriction, mapping and superposition theorems for Poisson processes (see e.g. [19]), the resulting point process P t := ∪ κ(t) i=1 ρ t,i (P t ∩ S + t,i ) is a homogeneous Poisson process of intensity tf 0 on the long strip S + t . We extend P t to a Poisson process U t on R × [0, ∞) as follows. Let P t be a Poisson process of intensity tf 0 on (R × [0, ∞)) \ S t , independent of P t , and set U t := P t ∪ P t . Then U t is a homogeneous Poisson process of intensity tf 0 in the upper half-plane. We call the collection of disks of radius r t centred on the points of this point process the induced coverage process. We denote the lower corners of the blocks making up the strip S t , taken from left to right, by q t,0 , q t,1 , . . . , q t,κ(t) .
That is, q t,0 = (0, 0), and q t,κ(t) = (|∂A t | − 2r t κ(t), 0), while q t,1 , . . . , q t,κ(t)−1 are the points on the line segment L t at which the successive blocks making up S t are joined together. We then define the set The set Q̃ t ∩ S t will play a similar role in S t to that of Q t ∩ A t in A t . Lemma 5.9. Suppose d = 2. Then lim t→∞ P[F t (Q̃ t ∩ S t , U t )] = 1.
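The block-realignment device works because rigid motions preserve all interpoint distances, so the coverage pattern of the balls inside each block is carried over unchanged. A minimal check of this invariance (the rotation angle and translation are arbitrary, and the point set is just a stand-in for the points of one block):

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((40, 2))               # stand-in for the points of P_t in one block

theta = 0.7                             # arbitrary rotation angle
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shift = np.array([2.0, -1.0])           # arbitrary translation
moved = pts @ rot.T + shift             # a rigid motion applied to the block's points

def pairwise(a):
    """Matrix of Euclidean distances between the rows of a."""
    return np.sqrt(((a[:, None, :] - a[None, :, :]) ** 2).sum(axis=2))

# Distances (hence which disks of radius r_t cover which locations) are preserved.
err = float(np.abs(pairwise(pts) - pairwise(moved)).max())
print(err)
```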
Proof of Proposition 5.4 for d = 2. We shall approximate the event F t (∂A t , P t ) by F t (L t , U t ), the event that every location in the long lower edge of S t is covered at least k times by the disks of the induced coverage process just described. By Lemma 5.3, If F t (L t , U t ) occurs but F t (Γ t , P t ) does not, choose x ∈ Γ t ∩ V t (P t ) (recalling that V t (·) was defined at (5.13)). Then x must lie within distance r t of an endpoint of one of the intervals I t,1 , . . . , I t,κ(t) , and hence within distance 8r t of one of the corners of Γ t . Therefore it lies in Q + t ∩ Γ t , so does not, then in the induced coverage process, there is some location in the lower edge L t that is covered fewer than k times, and this location must lie within distance r t of the endpoint of one of the blocks that are put together to make the strip S t . Therefore To describe the induced coverage process in this case, we first define a 'tartan' (plaid) region T t as follows. Recall that we took γ ′ ∈ (γ, 1/d). Partition each face H t,i , 1 ≤ i ≤ κ(t), into a collection of (d − 1)-dimensional cubes of side t −γ ′ contained in H t,i , together with a boundary region contained within ∂ d−2 H t,i ⊕ B(o, dt −γ ′ ), Let T t be the union (over all faces) of the boundaries of the (d − 1)-dimensional cubes in this partition (see Figure 3). Set Enumerate the (d − 1)-dimensional cubes in the above subdivision of the faces H t,i , 1 ≤ i ≤ κ(t), as I + t,1 , . . . , I + t,λ(t) . For 1 ≤ i ≤ λ(t) let I t,i := I + t,i \ (T t ⊕ B(o, 7r t )), which is a (d − 1)-dimensional cube of side length t −γ ′ − 14r t with the same centre and orientation as I + t,i . We claim that the total (d − 1)-dimensional Lebesgue measure of these (d − 1)-dimensional cubes satisfies , which tends to one since r t = O(((log t)/t) 1/d ) by (5.12), and γ ′ < 1/d, so the proportionate amount removed near the boundaries of the (d − 1)-dimensional cubes I + t,i to give I t,i vanishes.
Also, the 'boundary part' of a face H_{t,i} that is not contained in any of the I^+_{t,j} has (d − 1)-dimensional measure O(t^{−(d−2)γ} t^{−γ′}), so that the total (d − 1)-dimensional measure of these removed regions near the boundaries of the faces, summed over the faces, tends to zero. Thus the claim (5.33) is justified.
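The rate claim here can be sanity-checked numerically. In the sketch below (our own illustration; the constants d = 3, γ′ = 0.2 and the choice r_t = ((log t)/t)^{1/3} are assumptions made purely for the demonstration), the kept fraction of (d − 1)-dimensional measure tends to one, albeit slowly:

```python
import math

def trimmed_fraction(t, d=3, gamma_p=0.2):
    # Fraction of (d-1)-dimensional measure kept when a margin of width 7*r_t
    # is trimmed from each side of a cube of side t**(-gamma_p).
    r_t = (math.log(t) / t) ** (1.0 / d)
    side = t ** (-gamma_p)
    kept = max(side - 14.0 * r_t, 0.0) / side
    return kept ** (d - 1)

fracs = [trimmed_fraction(10.0 ** e) for e in (15, 20, 30)]
print(fracs)  # increases towards 1, since gamma_p < 1/d
```

The slow convergence reflects the factor (log t)^{1/d} in r_t; any γ′ < 1/d eventually wins.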
For 1 ≤ i ≤ λ(t) define closed, rectilinear (but not axis-aligned), d-dimensional cuboids (which we call 'blocks') S_{t,i} (respectively S^+_{t,i}) as follows. Let S_{t,i} (respectively S^+_{t,i}) be the closure of the set of those locations x ∈ Ã_t such that dist(x, I_{t,i}) ≤ 2r_t (resp. dist(x, I_{t,i}) ≤ 4r_t) and such that there exists y ∈ I^o_{t,i} (the relative interior of I_{t,i}) satisfying ‖y − x‖ = dist(x, I_{t,i}). For example, if d = 3 then S_{t,i} (resp. S^+_{t,i}) is a cuboid with square base I_{t,i} and height 2r_t (resp. 4r_t).

Figure 3: Part of the 'tartan' region T_t when d = 3. The outer triangle represents one face H_{t,i}, and the part of T_t within H_{t,i} is given by the union of the boundaries of the squares. The triangle has sides of length Θ(t^{−γ}), while the squares have sides of length t^{−γ′}. The region between the two triangles is part of Q^+_t. It has thickness 8d t^{−γ′} (the constant 8d is not drawn to scale), and covers the whole boundary region not covered by the squares.

Define a region D_t ⊂ R^{d−1} that is approximately a rectilinear hypercube with lower left corner at the origin, obtained as the union of λ(t) disjoint (d − 1)-dimensional cubes of side t^{−γ′} − 14r_t. We can and do arrange that D_t ⊂ [0, |Γ_t|^{1/(d−1)} + t^{−γ′}]^{d−1} for each t, and |D_t| → |Γ| as t → ∞. Define the flat slabs S_t and S^+_t over D_t, and denote the lower boundary of S_t (that is, the set D_t × {0}) by L_t. Now choose rigid motions ρ_{t,i} of R^d, 1 ≤ i ≤ λ(t), such that under applications of these rigid motions the blocks S_{t,i} are reassembled to form the slab S_t, with the square face I_{t,i} of the i-th block transported to part of the lower boundary L_t of S_t. In other words, choose the rigid motions so that the sets ρ_{t,i}(S_{t,i}), 1 ≤ i ≤ λ(t), have pairwise disjoint interiors and their union is S_t, and also ρ_{t,i}(I_{t,i}) ⊂ L_t for 1 ≤ i ≤ λ(t); in particular, the images ρ_{t,i}(I_{t,i}), 1 ≤ i ≤ λ(t), tile L_t. In lining up these blocks, let us carry the points of P̃_t within each block to the new location; that is, we apply ρ_{t,i} both to the block S^+_{t,i} and to the points of P̃_t ∩ S^+_{t,i}. The resulting point process P′_t := ∪_{i=1}^{λ(t)} ρ_{t,i}(P̃_t ∩ S^+_{t,i}) is a homogeneous Poisson process of intensity tf_0 in the flat slab S^+_t. We extend P′_t to a Poisson process U_t on H := R^{d−1} × [0, ∞) as follows. Let P″_t be a Poisson process of intensity tf_0 on (R^{d−1} × [0, ∞)) \ S^+_t, independent of P̃_t, and set U_t := P′_t ∪ P″_t. Then U_t is a homogeneous Poisson process of intensity tf_0 in the upper half-space H. We call the collection of balls of radius r_t centred on the points of this point process the induced coverage process, for d ≥ 3.
The total number of the (d − 1)-dimensional cubes (in the whole of Γ_t) in the partition described above is O(t^{(d−1)γ′}), and for each of these (d − 1)-dimensional cubes the number of balls of radius r_t required to cover the boundary of the cube is O((t^{−γ′}/r_t)^{d−2}). Using the union bound and Lemma 5.7 yields an estimate which tends to zero.
Denote by Q̃^0_t the union of the boundaries (relative to R^{d−1} × {0}) of the lower faces of the blocks making up the strip S_t, and denote by Q̃_t the (9r_t)-neighbourhood of this region, i.e. Q̃_t := Q̃^0_t ⊕ B(o, 9r_t).
Here ∂I t,i denotes the relative boundary of I t,i . The next lemma is the higher-dimensional version of Lemma 5.9.
Proof of Proposition 5.4 for d ≥ 3. We shall approximate the event F_t(Γ_t, P̃_t) by F_t(L_t, U_t), the event that every location in the lower face L_t of S_t is covered at least k times in the induced coverage process just described. By Lemma 5.3, we obtain that the limit of P[F_t(L_t, U_t)] exists and can be identified; moreover, by Lemmas 5.8 and 5.10, P[F_t(L_t, U_t) \ F_t(Γ_t, P̃_t)] → 0. Conversely, if F_t(Γ_t, P̃_t) occurs but F_t(L_t, U_t) does not, then we can and do pick y ∈ L_t ∩ V_t(U_t) and i ∈ {1, . . . , λ(t)} such that y ∈ ρ_{t,i}(I_{t,i}). Then we must have dist(y, ρ_{t,i}(∂I_{t,i})) ≤ r_t, where ∂I_{t,i} denotes the (d − 2)-dimensional boundary of the (d − 1)-dimensional cube I_{t,i}, since otherwise ρ^{−1}_{t,i}(y) would be a location in Γ_t ∩ V_t(P̃_t). Therefore y ∈ Q̃_t, so by Lemma 5.11, P[F_t(Γ_t, P̃_t) \ F_t(L_t, U_t)] → 0, which completes the proof.

Proof of Theorem 2.1: conclusion
In this subsection we return to considering the cases d = 2 and d ≥ 3 together. We retain the notation used in the preceding subsection. Define the sets A^*_t and A^{**}_t in R^d.

Therefore event F_t(S_t, U_t) does not occur. If also F_t(Q̃_t ∩ H, U_t) occurs, then since we also assume F_t(Γ_t, P̃_t), we claim that F_t(L_t, U_t) occurs. This is because for all w ∈ L_t \ Q̃_t, we have for some j that w ∈ ρ_{t,j}(I_{t,j}) and dist(w, ρ_{t,j}(∂I_{t,j})) > r_t, so that B(w, r_t) ∩ H ⊂ ρ_{t,j}(S_{t,j}); hence the coverage of w by U_t agrees with the coverage of ρ^{−1}_{t,j}(w) ∈ Γ_t by P̃_t, so w is covered at least k times. The probability of the event on the right tends to zero by (5.17) from Lemma 5.3. Therefore also P[E^{(1)}_t] → 0, by using Lemmas 5.12, 5.10, and 5.11. The next lemma will help us to show that the error due to approximating the Poisson process P_t with P̃_t (both defined at (5.26)) vanishes as t → ∞.
Lemma 5.14. It is the case that:

Proof. Let ε ∈ (0, 2γ − 1/d). Let Q_t := P_t ∩ P̃_t and Q′_t := P_t \ P̃_t. Then Q_t and Q′_t are independent homogeneous Poisson processes of intensity tf_0 in A_t ∩ Ã_t and A_t \ Ã_t respectively. By Lemma 5.7 and the union bound, there is a constant c such that the corresponding estimate holds for any m ∈ N and any set of m points x_1, . . . , x_m in R^d.

Pick a location x ∈ V_t(P_t) ∩ A^*_t of minimal distance from Γ_t. Since F_t(Q_t) is assumed to occur, the nearest point in Γ_t to x lies in the interior of H_{t,i} for some i. Then x lies at the intersection of the boundaries of d balls of radius r_t centred on points of P_t, and is covered by k − 1 of the other balls centred in P_t. Also x does not lie in the interior of A^{**}_t.
Hence there exist points x_1, . . . , x_d of P_t such that the intersection of the spheres ∂B(x_j, r_t) includes a point outside the interior of A^{**}_t, within distance Kt^{−2γ} of Γ_t and further than r_t from all but at most k − 1 other points of P_t. Hence by the Mecke formula, integrating first over the positions of x_2, . . . , x_d and then over the location of x_1, and using Lemma 5.5(b), Lemma 5.6 and (5.12), we obtain the required bound for suitable constants c, c′.

Recall that G_t := F_t(Γ_t, P_t) \ F_t(Γ_t, P̃_t). If this event occurs, then there exists y ∈ Γ_t ∩ V_t(P̃_t) such that y is covered by at least one of the balls of radius r_t centred on P_t \ P̃_t. Hence there exists a point of P_t \ P̃_t within distance r_t of Γ_t, so that by Lemma 5.14, P[G_t] → 0.

Proof. Let ε ∈ (0, 2γ_0 − 1/d). Since we assume ∂Γ has finite (d − 2)-dimensional Hausdorff measure, for each t we can take x_{t,1}, . . . , x_{t,m(t)} so that we end up with balls of radius r_t, denoted B_{t,1}, . . . , B_{t,m(t)} say, that cover ∂Γ. By (5.12) and Proposition 2.4, the relevant probability vanishes. Therefore, since ε can be arbitrarily small and |A \ A^{(ε)}| ↓ 0 as ε ↓ 0, the event displayed on the left hand side of (5.35) has probability tending to zero. Then using Lemma 5.17, we have (5.34), which completes the proof.
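For reference, the multivariate Mecke formula invoked here can be stated as follows: for a Poisson process η on R^d with intensity measure μ and a measurable function f ≥ 0,

```latex
\mathbb{E}\sum_{x_1,\dots,x_m \in \eta}^{\ne} f(x_1,\dots,x_m;\eta)
  = \int\!\cdots\!\int \mathbb{E}\, f\Bigl(x_1,\dots,x_m;\,\eta+\sum_{i=1}^{m}\delta_{x_i}\Bigr)\,
    \mu(\mathrm{d}x_1)\cdots\mu(\mathrm{d}x_m),
```

where the sum on the left is over m-tuples of distinct points of η. Taking m = d and letting f be an indicator of the qualifying configuration yields the first-moment bounds used above.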
Then Γ_1, . . . , Γ_J comprise a finite collection of closed sets in ∂A with disjoint interiors, each of which has boundary with finite 1-dimensional Hausdorff measure and is contained in a single chart B(x_j, r(x_j)), and whose union is ∂A.
and ∂Γ_i := Γ_i ∩ (∂A \ Γ_i). First we claim that the following event inclusion holds. Suppose the events on the right-hand side occur, and let x ∈ A ∩ V_t(P_t). Then dist(x, ∂A) ≤ r_t, since we assume F_t(A^{(r_t)}, P_t) occurs. Then for some i ∈ {1, . . . , J} and some y ∈ Γ_i we have ‖x − y‖ ≤ r_t. Since we assume F^{Γ_i}_t occurs, we have dist(y, ∂Γ_i) ≤ t^{−γ_0}; then for some j we can take y′ ∈ Γ_j \ (∂Γ_j ⊕ B(o, t^{−γ_0})) with ‖y − y′‖ ≤ r_t. If ‖x − y′‖ ≤ 2r_t then by the triangle inequality ‖x − y′‖ ≤ 3r_t, but since y′ ∉ Γ_i, this would contradict Lemma 5.19(c). Therefore ‖x − y′‖ > 2r_t, and hence the r_t-neighbourhoods of the reduced sets Γ_j \ (∂Γ_j ⊕ B(o, t^{−γ_0})), 1 ≤ j ≤ J, are disjoint. This gives us the independence claimed. Now observe that F_t(A^{(r_t)}, P_t) ⊂ F_t(A^{(4r_t)}, P_t), and we claim the corresponding limit identity (5.43), provided the last limit exists. However, the events in the right hand side of (5.43) are mutually independent, so using (5.40) and (5.42), we obtain the second equality of (2.3). We then obtain the rest of (2.3) using Lemma 5.1.

Polygons: proof of Theorem 2.2
In this subsection we shall prove Theorem 2.2 in the case where A is polygonal (the other case is covered by Theorem 2.1). Set d = 2, take A to be polygonal with B = A, and take f ≡ f 0 1 A , with f 0 := |A| −1 . Denote the vertices of A by q 1 , . . . , q κ , and the angles subtended at these vertices by α 1 , . . . , α κ respectively. Choose K ∈ (2, ∞) such that K sin α i > 9 for i = 1, . . . , κ.
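The requirement K sin α_i > 9 is a purely arithmetic condition on the vertex angles, and a valid K can always be computed directly when all angles lie in (0, π), as for a convex polygon. A minimal sketch (the particular angles are hypothetical, since the α_i depend on the polygon A):

```python
import math

# Hypothetical vertex angles (radians), all in (0, pi) so that sin > 0.
# The condition asks for K in (2, infinity) with K * sin(alpha_i) > 9 for all i.
angles = [math.pi / 3, math.pi / 2, 2 * math.pi / 3]
K = max(9.0 / math.sin(a) for a in angles) + 1.0
print(K, all(K * math.sin(a) > 9.0 for a in angles))
```

Here K is taken strictly above the smallest admissible value, so K > 2 holds automatically for these angles.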
For the duration of this subsection (and the next) we fix k ∈ N and β ∈ R. Assume we are given real numbers (r_t)_{t>0} satisfying the condition (5.44); setting ζ = β/2, this is the same as the condition (5.12) for d = 2. Given any D ⊂ R², any point process X in R², and t > 0, define the 'vacant region' V_t(X) and the event F_t(D, X) as at (5.13) and (5.14).
For t > 0, in this subsection we define the 'corner regions' Q_t and Q^−_t.

Lemma 5.22. It is the case that lim_{t→∞} P[F_t(Q_t, P_t)] = 1.

Proof. There is a constant m, independent of t, such that for all t > 0 there exist points y_{t,1}, . . . , y_{t,m} ∈ A such that Q_t ⊂ ∪^m_{j=1} B(y_{t,j}, r_t/2). Also there is a constant a > 0 such that for all t > 0 and all j ∈ {1, . . . , m} we have |B(y_{t,j}, r_t/2) ∩ A| ≥ ar²_t. Then the union bound applies, and since tr²_t → ∞ by (5.44), the probability on the right tends to zero.

Lemma 5.23. It is the case that:

Proof. Denote the line segments making up ∂A by I_1, . . . , I_κ, and for t > 0 and 1 ≤ i ≤ κ set I_{t,i} := I_i \ Q^−_t. Let i, j, k ∈ {1, . . . , κ} be such that i ≠ j and edges I_i and I_j are both incident to q_k. If x ∈ I_{t,i} and y ∈ I_{t,j}, then ‖x − y‖ ≥ (Kr_t) sin α_k ≥ 9r_t. Hence for all large enough t the events F_t(I_{t,1}, P_t), . . . , F_t(I_{t,κ}, P_t) are mutually independent. Therefore the probability factorizes, and by Lemma 5.3 this converges to the right hand side of (5.45).

Now we prove (5.46). For t > 0 and i ∈ {1, 2, . . . , κ}, let S_{t,i} denote the rectangular block of dimensions |I_{t,i}| × 3r_t consisting of all points in A at perpendicular distance at most 3r_t from I_{t,i}. Let ∂_side S_{t,i} denote the union of the two 'short' edges of the boundary of S_{t,i}, i.e. the two edges bounding S_{t,i} which are perpendicular to I_{t,i}. For i ∈ {1, . . . , κ}, let I′_{t,i} denote an interval of length |I_{t,i}| contained in the x-axis. By a rotation one sees that the probability in question can be expressed in terms of U_t, where U_t is as in Lemma 5.3. By (5.17) from that result, this probability tends to zero, which gives us (5.46).
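The union-bound mechanism in the proof above is elementary enough to tabulate: a fixed number m of balls each fails to contain k Poisson points with probability P[Po(a t r_t²) < k], which vanishes once t r_t² → ∞. A numeric sketch (the constants a = 1, m = 12, k = 2 and the stand-in t r_t² = log t are our own illustrative assumptions, not the paper's):

```python
import math

def poisson_cdf_below(k, lam):
    # P[Poisson(lam) < k] = sum_{j=0}^{k-1} e^{-lam} lam^j / j!
    return sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(k))

bounds = []
for t in (1e3, 1e5, 1e7):
    lam = math.log(t)            # stands in for a * t * r_t^2
    bounds.append(12 * poisson_cdf_below(2, lam))
print(bounds)  # decreasing towards 0
```

Because lam grows like log t, each bound term is polynomially small in t, which is exactly what drives these lemmas.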
Proof of Theorem 2.2 for polygonal A. Let β ∈ R and suppose (r_t)_{t>0} satisfies (5.44). Then, using (5.46) followed by Lemma 5.22, we obtain (5.49), provided the limits appearing there exist.
The events F_t(∂A, P_t) and F_t(A^{(3r_t)}, P_t) are independent, since the first of these events is determined by the configuration of Poisson points at distance at most r_t from ∂A, while the second is determined by the Poisson points at distance more than 2r_t from ∂A. Therefore the limit in (5.49) does indeed exist, and is the product of the limits arising in (5.47) and (5.48).
Since F t (A, P t ) = {R t,k ≤ r t }, this gives us the second equality of (2.4), and we then obtain the first equality of (2.4) by using Lemma 5.1.
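Since the coverage threshold is the sup–min quantity R_{t,k} appearing in {R_{t,k} ≤ r_t}, it is easy to probe by simulation. A rough Monte Carlo sketch of our own for k = 1 on the unit square (which is polygonal, so this matches the setting of Theorem 2.2 rather than a smooth domain), using a grid approximation of the supremum:

```python
import math
import random

def coverage_threshold(points, grid_n=60):
    # Approximate R_n = sup_{x in [0,1]^2} dist(x, nearest sample point)
    # by evaluating the distance on a regular (grid_n + 1)^2 grid.
    worst = 0.0
    for i in range(grid_n + 1):
        for j in range(grid_n + 1):
            gx, gy = i / grid_n, j / grid_n
            worst = max(worst, min(math.hypot(gx - px, gy - py)
                                   for px, py in points))
    return worst

random.seed(1)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]
R = coverage_threshold(pts)
print(R, n * math.pi * R * R / math.log(n))  # second value is of order 1
```

For d = 2 the theory predicts n π R_n² ≈ log n plus lower-order boundary terms, so the printed ratio should hover near 1 for moderate n.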

Polyhedra: Proof of Theorem 2.3
Now suppose that d = 3 and A = B is a polyhedron with f ≡ f_0 1_A, where f_0 = 1/|A|. Fix k ∈ N and β ∈ R. Recall that α_1 denotes the smallest edge angle of A. Throughout this subsection, we assume we are given (r_t)_{t>0} with the following limiting behaviour as t → ∞.
For D ⊂ R³ and X a point set in R³, define the event F_t(D, X) by (5.14) as before, but now in 3 dimensions. Let H := R² × [0, ∞). To prove Theorem 2.3, our main task is to determine lim_{t→∞} P[F_t(A, P_t)]. Our strategy for this goes as follows. We shall consider reduced faces and reduced edges of A, obtained by removing a region within distance Kr_t of the boundary of each face/edge, for a suitable constant K. Then the events that different reduced faces and reduced edges are covered are mutually independent. Observing that the intersection of P_t with a face or edge is a lower-dimensional SPBM, and using Lemma 5.2, we shall determine the limiting probability that the reduced faces and edges are covered.
If F t (A, P t ) does not occur but all reduced faces and all reduced edges are covered, then the uncovered region must either meet the region A (3rt) , or go near (but not touch) one of the reduced faces, or go near (but not touch) one of the reduced edges, or go near one of the vertices. We shall show in turn that each of these possibilities has vanishing probability.
In this subsection we use bold face to indicate vectors in R 3 . For i = 1, 2, 3, we let π i : R 3 → R denote projection onto the ith coordinate. For x ∈ R 3 we shall sometimes refer to π 2 (x) and π 3 (x) as the 'depth' and 'height' of x, respectively.
Also |B(x, r) ∩ W| = r³ |B(r^{−1}x, 1) ∩ W| by scaling, and it is then straightforward to deduce (5.54). In the next three lemmas, we take I to be an arbitrary fixed compact interval in R, and then, given α ∈ (0, 2π) and b ≥ 0, set W_{α,b} := {(x_1, x_2, x_3) ∈ W_α : x_1 ∈ I, x_2² + x_3² ≤ b²}. The next lemma will help us to show that if the part of ∂A near to a given edge is covered, then the interior of A near that edge is also likely to be covered.

Proof. Let E_t here be the (exceptional) event that there exist three distinct points x, y, z of W_{α,t} such that ∂B(x, r_t) ∩ ∂B(y, r_t) ∩ ∂B(z, r_t) has non-empty intersection with the plane R² × {0}. Then P[E_t] = 0. Suppose that F_t((∂W_α) ∩ W_{α,ar_t}, W_{α,t}) \ F_t(W_{α,ar_t}, W_{α,t}) occurs, and that E_t does not. Let w be a point of minimal height in the closure of V_t(W_{α,t}) ∩ W_{α,ar_t}. Then w ∈ W_{α,ar_t} \ ∂W_α, and w = p_{r_t}(x, y, z) for some triple (x, y, z) of points of W_{α,t} satisfying h_{r_t}(x, y, z) = 1, where h_{r_t}(·) was defined at (5.51). Also w is covered by fewer than k of the balls centred on the other points of W_{α,t}. Hence by Markov's inequality, the probability of this event is at most E[N_t], where we set N_t := Σ^{≠}_{x,y,z ∈ W_{α,t}} h_{r_t}(x, y, z) 1{p_{r_t}(x, y, z) ∈ W_{α,ar_t}} 1{W_{α,t}(B(p_{r_t}(x, y, z), r_t)) < k}, and Σ^{≠} means we are summing over triples of distinct points in W_{α,t}. By the Mecke formula, E[N_t] is bounded by a constant times t³(tr³_t)^{k−1} times an integral involving the function g_t(u, v, y′, z′) := 1{π_2(p_{r_t}(u, u + r_t y′, u + r_t z′)) > π_3(p_{r_t}(u, u + r_t y′, u + r_t z′)) cot α} × exp{−c tr²_t [π_2(p_{r_t}(u, u + r_t y′, u + r_t z′)) − π_3(p_{r_t}(u, u + r_t y′, u + r_t z′)) cot α]}.
Let E t here denote the (exceptional) event that there exist two distinct points x, y ∈ W α,t , such that ∂B(x, r t ) ∩ ∂B(y, r t ) ∩ W α,0 = ∅. Then P[E t ] = 0.
We then obtain the stated results by application of Lemma 5.2.
Recall that H_1, . . . , H_m are the faces of ∂A. For t > 0 we define the 'edge regions' W_t and W^+_t. The next lemma provides the limiting probability that the 'interior regions' of all of the faces of A are covered.
Lemma 5.31. Define the event G_t := F_t(∂A \ W_t, P_t). It is the case that:

Proof. We claim that the events F_t(H_1 \ W_t, P_t), . . . , F_t(H_m \ W_t, P_t) are mutually independent. Indeed, for i, j ∈ {1, . . . , m} with i ≠ j, if x ∈ H_i \ W_t then dist(x, ∂H_i) ≥ Kr_t, so by our choice of K, dist(x, H_j) ≥ 3r_t; hence the r_t-neighbourhoods of H_1 \ W_t, . . . , H_m \ W_t are disjoint, and the independence follows. Therefore the probability factorizes over the faces, and the limit equals the stated quantity if α_1 > π/2, or if α_1 = π/2 and k = 1, and equals 1 otherwise, and the result follows.

Next we estimate the probability that there is an uncovered region near to a face of A but not touching that face, and not close to any edge of A. For i, j ∈ {1, . . . , κ} with i ≠ j, if x ∈ e_i \ Q_t then dist(x, ∂e_i) ≥ K(K + 7)r_t, so by our choice of K, dist(x, e_j) ≥ 3r_t for t sufficiently large. Therefore the r_t-neighbourhoods of e_1 \ Q_t, . . . , e_κ \ Q_t are disjoint, and the independence follows. Hence by Lemma 5.30, we obtain (5.67), the limiting probability that all of the reduced edges are covered. Observe next that, by the definition of W^+_t, the set ∂A \ W^+_t is at Euclidean distance at least Kr_t from all of the edges e_i. Therefore the events F_t(∪^κ_{i=1} e_i \ Q_t, P_t) and F_t(∂A \ W^+_t, P_t) are independent. Using (5.66), we therefore have a product formula, provided the two limits on the right exist. But we know from (5.67) and Lemma 5.31 that these two limits do indeed exist, and what their values are. Substituting these two values, we obtain the result stated for R_{t,k} in (2.5). Then we obtain the result stated for R_{n,k} in (2.5) by applying Lemma 5.1.