1 Introduction

The study of the equidistribution properties of sequences and point sets on the hypersphere has attracted considerable interest in recent times, both in the deterministic case (see, e.g., [2, 9, 12, 19, 21, 22, 33, 38, 41]) and in the stochastic one (see, e.g., [13, 15, 18, 20, 26, 35, 36, 39] for independent and identically distributed random points and [1, 4,5,6] for determinantal point processes). The aim of this paper is to prove, under the assumption that the points are uniformly and independently distributed on the hypersphere, asymptotic distributional results for some of the measures of equidistribution introduced in the literature. We focus here only on measures that can be expressed through maxima and minima of distances between points of the hypersphere. This implies that we do not provide asymptotic results about the Riesz energy (see, e.g., [33, 37, 38, 41]) or other Sobolev statistics on the hypersphere (see [18, 20, 27, 35, 36]); these will be studied in companion papers.

The measures that we consider can be connected to two problems on the hypersphere. Covering measures concern covering problems, i.e., problems in which the hypersphere is completely covered by the union of a collection of spherical caps. The question is generally to guarantee covering while minimizing either the size of the caps when their number is fixed, or the number of caps when their size is fixed. Separation measures concern packing problems, i.e., problems in which a collection of spherical caps is placed on the hypersphere without any overlap between caps. The aim is to maximize the number of caps while keeping their size fixed, or the size of the caps while keeping their number fixed.

For some measures related to covering and packing, we provide results concerning their asymptotic distribution for a sample of points uniformly and independently distributed on the hypersphere. Some of the results that follow build on earlier theorems from probability theory. We also give the asymptotic distribution of two quantities, namely the geodesic radius and the hole radius, associated with each facet of the convex hull of the points distributed on the hypersphere. In doing so, we provide a proof of Conjecture 2.3 formulated in [13, p. 67]. Moreover, by comparing our results with the lower bounds available in the literature for covering and separation measures, we provide further confirmation that it is easier to achieve a good covering than a good packing. In particular, random points perform rather well as far as covering is concerned, but their separation properties are not very good. Our results bear a striking resemblance to some recent results in stochastic geometry (see [7, 16, 17, 42]).

The paper is structured as follows. In Sect. 2 we provide some definitions that will be used in the rest of the paper. The results are stated in Sect. 3, while the corresponding proofs are gathered in Sect. 5. Section 4 contains some reflections on the covering and separation properties of random uniform points.

2 Definitions

Consider the hypersphere \(\mathbb {S}^{d}\subset \mathbb {R}^{d+1}\) of radius one, endowed with the geodesic distance \(\mu \) and the Euclidean distance m. If \(\mathbf {x}\cdot \mathbf {y}\) denotes the inner product between two vectors, for \(\mathbf {x},\mathbf {y}\in \mathbb {S}^{d}\) we have \(\mu (\mathbf {x},\mathbf {y})=\arccos (\mathbf {x}\cdot \mathbf {y})\) and \(m (\mathbf {x},\mathbf {y})=\sqrt{2(1-\mathbf {x}\cdot \mathbf {y})}\). Let \(B (\mathbf {x},r):=\{\mathbf {y}\in \mathbb {S}^{d}:\mu (\mathbf {x},\mathbf {y})<r\} \) be the geodesic ball (or spherical cap) with center \(\mathbf {x}\) and radius r.

Consider N points \(X_{N}=\{ \mathbf {x}_{1},\dots ,\mathbf {x}_{N}\} \) independently and uniformly distributed on \(\mathbb {S}^{d}\subset \mathbb {R}^{d+1}\). The convex hull of the point configuration \(X_{N}\) is a polytope that is sometimes called (see, e.g., [40]) the “inscribing polytope”. The number of facets of the polytope is itself a random variable \(f_{d}\). As the inscribing polytope is simplicial with probability one, from the Lower Bound Theorem (see [3]) we have \(f_{d}\ge dN-(d+2)(d-1)\) (the same value stated in [13, p. 63] for \(d=2\)). Therefore, the number of facets grows at least linearly as N increases.
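As a purely illustrative aside (not part of the original construction), this setting is easy to simulate: uniform points on \(\mathbb {S}^{d}\) can be obtained by normalizing standard Gaussian vectors, and the facet count of the inscribing polytope can be checked against the Lower Bound Theorem. A minimal Python sketch, assuming `numpy` and `scipy` are available (the helper name `uniform_sphere` is ours):

```python
import numpy as np
from scipy.spatial import ConvexHull

def uniform_sphere(N, d, rng):
    """N points uniform on S^d in R^(d+1): normalized Gaussian vectors."""
    x = rng.standard_normal((N, d + 1))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
d, N = 2, 500
X = uniform_sphere(N, d, rng)
hull = ConvexHull(X)       # the inscribing polytope (simplicial a.s.)
f_d = len(hull.simplices)  # number of facets, a random variable
print(f_d, d * N - (d + 2) * (d - 1))  # check f_d >= dN - (d+2)(d-1)
```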

To each facet of the polytope a hole is associated, i.e., the maximal spherical cap determined by that facet which contains points of \(X_{N}\) only on its boundary. We suppose that the facets, as well as the quantities associated with them, are indexed by a subscript k ranging from 1 to \(f_d\) and are arranged in no special order. Let \(\alpha _{k}=\alpha _{k}(X_{N})\) be the geodesic radius of the k-th cap. Following [13], we define the k-th hole radius \(\rho _{k}=\rho _{k}(X_{N})\) as the Euclidean distance in \(\mathbb {R}^{d+1}\) from the boundary of the k-th cap to its center, i.e., the point of the sphere lying above the k-th facet, so that \(\rho _{k}=2\sin ({\alpha _{k}}/{2})\).

An interesting measure of uniformity is the geodesic covering radius (also called mesh norm or fill radius, see [13, p. 62]) defined as

$$\begin{aligned} \alpha =\alpha (X_{N},\mathbb {S}^{d})&:=\max _{\mathbf {y}\in \mathbb {S}^{d}}\min _{\mathbf {x}_{j}\in X_{N}}\mu (\mathbf {y},\mathbf {x}_{j})\\&=\sup \bigl \{ r>0:\text {there exists } \mathbf {x}\in \mathbb {S}^{d} \text { with } B (\mathbf {x},r)\subset \mathbb {S}^{d}\setminus X_{N}\bigr \} , \end{aligned}$$

i.e., the largest geodesic distance from a point in \(\mathbb {S}^{d}\) to the nearest point in \(X_{N}\), or equivalently the geodesic radius of the largest spherical cap containing no points from \(X_{N}\). It is clear that \(\alpha =\max _{1\le k\le f_{d}}\alpha _{k}\). A related quantity is the Euclidean covering radius (although the name mesh norm is also used for it, see [23])

$$\begin{aligned} \rho =\rho (X_{N},\mathbb {S}^d):=\max _{\mathbf {y}\in \mathbb {S}^{d}}\min _{\mathbf {x}_{j}\in X_{N}}m (\mathbf {y},\mathbf {x}_{j}), \end{aligned}$$

whose properties have been studied in [39]. In this case too, \(\rho =\max _{1\le k\le f_{d}}\rho _{k}\).
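Numerically, both covering radii can be approximated by brute force, replacing the maximum over \(\mathbb {S}^{d}\) with a maximum over a large number of uniform test points. The following sketch (our helper, and a slight underestimate by construction) returns \(\rho \) and \(\alpha \) for a configuration `X` such as the one generated above:

```python
import numpy as np

def covering_radii(X, M=200_000, seed=1):
    """Approximate the Euclidean (rho) and geodesic (alpha) covering radii
    of X on S^d by a max-min over M uniform random test points."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((M, X.shape[1]))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    # for each test point, the largest inner product picks the nearest sample
    cmax = np.clip((Y @ X.T).max(axis=1), -1.0, 1.0)
    c = cmax.min()                      # worst-covered test point
    return np.sqrt(2 * (1 - c)), np.arccos(c)
```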

We now pass to the measures linked to packing. The first is the separation distance (see, e.g., [13, p. 62]), also called the minimum angle (see, e.g., [15, p. 1838]),

$$\begin{aligned} \theta =\theta (X_{N},\mathbb {S}^{d}):=\min _{\begin{array}{c} \mathbf {x}_{i},\mathbf {x}_{j}\in X_{N}\\ i\ne j \end{array}}\mu (\mathbf {x}_{i},\mathbf {x}_{j})=\min _{\mathbf {x}_{j}\in X_{N}}\left\{ \min _{\begin{array}{c} \mathbf {x}_{i}\in X_{N}\\ i\ne j \end{array}}\mu (\mathbf {x}_{i},\mathbf {x}_{j})\right\} , \end{aligned}$$

the largest nearest neighbor distance (see, e.g., [29,30,31])

$$\begin{aligned} \theta ^{\prime }=\theta ^{\prime }(X_{N},\mathbb {S}^{d}):=\max _{\mathbf {x}_{j}\in X_{N}}\left\{ \min _{\begin{array}{c} \mathbf {x}_{i}\in X_{N}\\ i\ne j \end{array}}\mu (\mathbf {x}_{i},\mathbf {x}_{j})\right\} , \end{aligned}$$

and the maximum angle (see, e.g., [15])

$$\begin{aligned} \theta ^{\prime \prime }=\theta ^{\prime \prime }(X_{N},\mathbb {S}^{d}):=\max _{\begin{array}{c} \mathbf {x}_{i},\mathbf {x}_{j}\in X_{N}\\ i\ne j \end{array}}\mu (\mathbf {x}_{i},\mathbf {x}_{j}). \end{aligned}$$

It is also possible to define the same quantities through the Euclidean distance. Indeed, if we replace \(\mu \) with m, we get the minimum spacing (see, e.g., [8, p. 274]) or minimum distance (see, e.g., [11, p. 654]) or separation radius (see, e.g., [23])

$$\begin{aligned} \varTheta =\varTheta (X_{N}):=\min _{\begin{array}{c} \mathbf {x}_{i},\mathbf {x}_{j}\in X_{N}\\ i\ne j \end{array}}m (\mathbf {x}_{i},\mathbf {x}_{j})=\min _{\mathbf {x}_{j}\in X_{N}}\left\{ \min _{\begin{array}{c} \mathbf {x}_{i}\in X_{N}\\ i\ne j \end{array}}m (\mathbf {x}_{i},\mathbf {x}_{j})\right\} . \end{aligned}$$

The other corresponding quantities

$$\begin{aligned} \varTheta ^{\prime }=\varTheta ^{\prime }(X_{N})&:=\max _{\mathbf {x}_{j}\in X_{N}}\left\{ \min _{\begin{array}{c} \mathbf {x}_{i}\in X_{N}\\ i\ne j \end{array}}m (\mathbf {x}_{i},\mathbf {x}_{j})\right\} , \\ \varTheta ^{\prime \prime }=\varTheta ^{\prime \prime }(X_{N})&:=\max _{\begin{array}{c} \mathbf {x}_{i},\mathbf {x}_{j}\in X_{N}\\ i\ne j \end{array}}m (\mathbf {x}_{i},\mathbf {x}_{j}) \end{aligned}$$

seem to lack a name.
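All six packing quantities can be computed directly from the Gram matrix of the configuration; a short sketch (our helper, assuming the definitions above) also makes explicit the relations \(\varTheta =2\sin (\theta /2)\), \(\varTheta ^{\prime }=2\sin (\theta ^{\prime }/2)\), and \(\varTheta ^{\prime \prime }=2\sin (\theta ^{\prime \prime }/2)\) used later:

```python
import numpy as np

def packing_measures(X):
    """Return theta, theta', theta'' and Theta, Theta', Theta'' for X on S^d."""
    G = np.clip(X @ X.T, -1.0, 1.0)     # pairwise inner products
    mu = np.arccos(G)                   # geodesic distances (0 on the diagonal)
    off = ~np.eye(len(X), dtype=bool)   # mask selecting the pairs with i != j
    nn = np.where(off, mu, np.inf).min(axis=1)  # nearest-neighbor distances
    theta, theta1, theta2 = nn.min(), nn.max(), mu[off].max()
    # Euclidean counterparts: m = 2 sin(mu/2) is increasing on [0, pi]
    return (theta, theta1, theta2,
            2 * np.sin(theta / 2), 2 * np.sin(theta1 / 2), 2 * np.sin(theta2 / 2))
```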

We will need the following probabilistic definitions. The symbol \({\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\) (\({\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\)) denotes convergence in probability (in distribution) when \(N\rightarrow \infty \). A generalized gamma random variable \(GG (\alpha ,\beta ,\gamma )\) is characterized by the probability density function (pdf)

$$\begin{aligned} f_{GG(\alpha ,\beta ,\gamma )}(x)=\frac{\alpha x^{\beta -1}}{\gamma ^{{\beta }/{\alpha }}\varGamma (\beta /\alpha )}\exp \frac{-x^{\alpha }}{\gamma },\qquad x\ge 0, \end{aligned}$$

and the cumulative distribution function (cdf)

$$\begin{aligned} F_{GG(\alpha ,\beta ,\gamma )}(x)=\frac{\gamma (\beta /\alpha ,x^{\alpha }/\gamma )}{\varGamma (\beta /\alpha )},\qquad x\ge 0, \end{aligned}$$

where \(\gamma (\,{\cdot }\,,\,{\cdot }\,)\) is the lower incomplete gamma function. The gamma random variable \(G (\beta ,\gamma )\) corresponds to the previous case with \(\alpha =1\), and the exponential random variable \(\mathcal {E}(\gamma )\) to the case \(\alpha =\beta =1\). A Gumbel random variable \({\text {Gumbel}}(\mu ,\beta )\) is defined by the pdf and the cdf

$$\begin{aligned} f_{{\text {Gumbel}}(\mu ,\beta )}(x)&=\frac{1}{\beta }\exp {\left\{ -\frac{x-\mu }{\beta }-\exp {\frac{-(x-\mu )}{\beta }}\right\} },\\ F_{{\text {Gumbel}}(\mu ,\beta )}(x)&=\exp {\left\{ -\exp \frac{-(x-\mu )}{\beta }\right\} }. \end{aligned}$$

A Weibull random variable \({\text {Weibull}}(\lambda ,k)\) is defined by the pdf and the cdf

$$\begin{aligned} f_{{\text {Weibull}}(\lambda ,k)}(x)&= \frac{k}{\lambda }\biggl (\frac{x}{\lambda }\biggr )^{\!k-1}\exp {\biggl \{-\biggl (\frac{x}{\lambda }\biggr )^{\!k}\biggr \}} ,&\quad x&\ge 0,\\ F_{{\text {Weibull}}(\lambda ,k)}(x)&= 1-\exp {\biggl \{-\biggl (\frac{x}{\lambda }\biggr )^{\!k}\biggr \}},&\quad x&\ge 0. \end{aligned}$$
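For simulation purposes it may help to note (a parameterization check, not a statement from the paper) that \(GG(\alpha ,\beta ,\gamma )\) as defined above matches SciPy's `gengamma` distribution, whose unscaled pdf is \(c\,x^{ca-1}e^{-x^{c}}/\varGamma (a)\), with \(a=\beta /\alpha \), \(c=\alpha \), and scale \(\gamma ^{1/\alpha }\):

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as Gamma

def gg(alpha, beta, gam):
    """GG(alpha, beta, gamma) of the text as a frozen scipy distribution."""
    return stats.gengamma(a=beta / alpha, c=alpha, scale=gam ** (1.0 / alpha))

# spot-check the stated pdf at one point, with arbitrary parameters
a, b, g, x = 2.0, 4.0, 3.0, 1.3
pdf_text = a * x**(b - 1) / (g**(b / a) * Gamma(b / a)) * np.exp(-x**a / g)
assert np.isclose(gg(a, b, g).pdf(x), pdf_text)
```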

We will also need the constant

$$\begin{aligned} \kappa _{d}:=\frac{1}{d}\cdot \frac{\varGamma ((d+1)/2)}{\sqrt{\pi }\,\varGamma (d/2)}=\frac{\varGamma ((d+1)/2)}{2\sqrt{\pi }\,\varGamma ((d+2)/2)} \end{aligned}$$

and the regularized incomplete beta function

$$\begin{aligned} I_{x}(a,b):=\frac{\varGamma (a+b)}{\varGamma (a)\varGamma (b)}\int _{0}^{x}t^{a-1}(1-t)^{b-1}\mathrm {d}t \end{aligned}$$

(see, e.g., [34, 8.17.2]). For the reader’s convenience, we restate Conjecture 2.3 from [13, p. 67].

Conjecture 2.1

The scaled hole radii \(N^{{1}/{d}}\rho _{1},\dots ,N^{{1}/{d}}\rho _{f_{d}}\) associated with the facets of the convex hull of N uniformly and independently distributed random points on \(\mathbb {S}^{d}\) are (dependent) random variables from a distribution which converges, as \(N\rightarrow \infty \), to the limiting distribution \(GG(d,d^{2},\kappa _{d}^{-1})\).

3 Results

We start with the covering measures. The following theorem proves Conjecture 2.1.

Theorem 3.1

The hole radius is characterized by the asymptotic distribution

$$\begin{aligned} N^{{1}/{d}}\rho _{k}{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}GG\bigl (d,d^{2},\kappa _{d}^{-1}\bigr ) \end{aligned}$$

for any \(k\in \mathbb {N}\). The same result holds replacing \(\rho _k\) with the geodesic radius of the cap \(\alpha _{k}\).
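Theorem 3.1 lends itself to a direct Monte Carlo check (an illustration, not part of the proof). Since the cap sitting above a facet has its center at the facet's outward unit normal \(\mathbf {n}\), and every facet vertex \(\mathbf {v}\) lies on the cap boundary, one has \(\rho _{k}=\Vert \mathbf {v}-\mathbf {n}\Vert \). The sketch below (our helper `hole_radii`) compares the empirical mean of the scaled radii with the mean \(\kappa _{d}^{-1/d}\,\varGamma (d+1/d)/\varGamma (d)\) of \(GG(d,d^{2},\kappa _{d}^{-1})\):

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.special import gamma as Gamma

def hole_radii(X):
    """Euclidean hole radii: hull.equations stores outward unit normals n,
    and every vertex v of a facet satisfies ||v - n||^2 = 2(1 - v.n)."""
    hull = ConvexHull(X)
    n = hull.equations[:, :-1]        # outward unit normals (cap centers)
    v = X[hull.simplices[:, 0]]       # one vertex per facet
    return np.linalg.norm(v - n, axis=1)

d, N = 2, 2000
kappa = Gamma((d + 1) / 2) / (2 * np.sqrt(np.pi) * Gamma((d + 2) / 2))
rng = np.random.default_rng(2)
X = rng.standard_normal((N, d + 1))
X /= np.linalg.norm(X, axis=1, keepdims=True)
r = N ** (1 / d) * hole_radii(X)      # scaled hole radii
print(r.mean(), Gamma(d + 1 / d) / (Gamma(d) * kappa ** (1 / d)))
```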

Theorem 3.2

The Euclidean covering radius is characterized by the asymptotic distribution

$$\begin{aligned} N^{{1}/{d}} (\ln N)^{({d-1})/{d}} d\kappa _{d}^{{1}/{d}}\rho -d\ln N-(d-1)\ln \ln N+\ln {(d!(2\kappa _{d})^{d-1})}\,{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\,{\text {Gumbel}} (0,1). \end{aligned}$$

The same result holds replacing \(\rho \) with the geodesic covering radius \(\alpha \).

Remark 3.3

(i) The previous result generalizes [39, Cor. 3.4], where it is proved that

$$\begin{aligned} \biggl (\frac{N}{\ln N}\biggr )^{\!1/d}\rho {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\kappa _{d}^{-1/d}. \end{aligned}$$

The same result holds for \(\alpha \). This is consistent with [8, pp. 276, 280–281], where it is shown that \(\rho \) is of order \(N^{-1/d+o(1)}\).

(ii) In the case \(d=1\), the previous result becomes

$$\begin{aligned} \frac{N}{\pi }\rho -\ln N{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}{\text {Gumbel}} (0,1) \end{aligned}$$

and holds in the same form for \(\alpha \). This is consistent with the fact that the expectation \(\mathbb {E}\alpha \) can be written as \(\mathbb {E}\alpha =(\pi \ln N)/N+\pi \gamma /N+o(N^{-1})\) (see [13, p. 63]), where \(\gamma \) is the Euler–Mascheroni constant.
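For \(d=1\) this is easy to verify numerically, since the geodesic covering radius of points on the circle is half the largest gap between consecutive points. A quick sketch (ours; convergence is logarithmically slow, so only approximate agreement should be expected), comparing the empirical mean of the normalized statistic with the Gumbel(0,1) mean \(\gamma \approx 0.5772\):

```python
import numpy as np

rng = np.random.default_rng(3)
N, reps, vals = 10_000, 2_000, []
for _ in range(reps):
    t = np.sort(rng.uniform(0, 2 * np.pi, N))
    gaps = np.diff(t, append=t[0] + 2 * np.pi)  # arc gaps, with wrap-around
    alpha = gaps.max() / 2                      # geodesic covering radius
    rho = 2 * np.sin(alpha / 2)                 # Euclidean covering radius
    vals.append(N * rho / np.pi - np.log(N))
print(np.mean(vals), np.euler_gamma)            # Gumbel(0,1) has mean ~ 0.5772
```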

(iii) A heuristic justification for this theorem can be obtained as follows. From Theorem 3.1 and the continuous mapping theorem (see, e.g., [43, p. 288]), \(N\rho _{k}^{d}{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}G(d,\kappa _{d}^{-1})\). The asymptotic distribution of the maximum \(M_{n}\) from a sample of n independent gamma random variables \(G (\beta ,\gamma )\) can be found, e.g., in [25, pp. 128, 156]:

$$\begin{aligned} \gamma ^{-1}M_{n}-\ln n-(\beta -1)\ln \ln n+\ln \varGamma (\beta )\,{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\,{\text {Gumbel}}(0,1). \end{aligned}$$

In our case, \(M_{n}\) would correspond to \(N\rho ^{d}=\max _{1\le j\le f_{d}}N\rho _{j}^{d}\), \(\beta \) to d, and \(\gamma \) to \(\kappa _{d}^{-1}\). This should be compared with the result (3) below. The two formulas differ in several respects: first, each \(N\rho _{j}^{d}\) is not exactly distributed like a gamma random variable; second, the random variables \(N\rho _{j}^{d}\) are not independent; third, the maximum is taken over a random number of elements. For these reasons, it is hard to find an exact correspondence between the elements of the two formulas. Nevertheless, their common structure is clear.
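The classical statement invoked above can itself be simulated (a sketch under the stated normalization; the parameters are purely illustrative):

```python
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(4)
n, reps, beta, scale = 50_000, 2_000, 3.0, 1.0      # gamma law G(beta, scale)
M = rng.gamma(shape=beta, scale=scale, size=(reps, n)).max(axis=1)
Z = M / scale - np.log(n) - (beta - 1) * np.log(np.log(n)) + np.log(Gamma(beta))
print(Z.mean(), np.euler_gamma)                     # compare with Gumbel(0,1) mean
```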

Now we turn to the separation measures. Some of these results are already known, but we present them here for completeness.

Theorem 3.4

The separation distance, the largest nearest neighbor distance and the maximum angle are characterized by the asymptotic distributions

$$\begin{aligned} N^{{2}/{d}}\theta&\,{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\,{\text {Weibull}}\left( \biggl (\frac{2}{\kappa _{d}}\biggr )^{\!1/d},d\right) ,\\ N^{1/d}(\ln N)^{(d-1)/d}d\kappa _{d}^{1/d}\theta ^{\prime }-d\ln N&\,{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\,{\text {Gumbel}}(0,1),\\ N^{2/d}(\pi -\theta ^{\prime \prime })&\,{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\,{\text {Weibull}}\left( \biggl (\frac{2}{\kappa _{d}}\biggr )^{\!1/d},d\right) . \end{aligned}$$

The asymptotic behavior of \(\varTheta \) and \(\varTheta ^{\prime }\) is the same as that of \(\theta \) and \(\theta ^{\prime }\), respectively. For \(\varTheta ^{\prime \prime }\), instead, we have

$$\begin{aligned} 4N^{4/d}(2-\varTheta ^{\prime \prime })\,{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\,{\text {Weibull}}\left( \biggl (\frac{2}{\kappa _{d}}\biggr )^{\!2/d},\frac{d}{2}\right) . \end{aligned}$$
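As an illustration (not needed for the proof in Sect. 5), the first limit can be checked by simulation, comparing the empirical mean of \(N^{2/d}\theta \) with the mean \((2/\kappa _{d})^{1/d}\,\varGamma (1+1/d)\) of the limiting Weibull distribution:

```python
import numpy as np
from scipy.special import gamma as Gamma

def sep(X):
    """Separation distance: smallest geodesic distance among distinct points."""
    G = np.clip(X @ X.T, -1.0, 1.0)
    np.fill_diagonal(G, -1.0)          # push i == j pairs to distance pi
    return np.arccos(G.max())

d, N, reps = 2, 400, 2_000
kappa = Gamma((d + 1) / 2) / (2 * np.sqrt(np.pi) * Gamma((d + 2) / 2))
rng = np.random.default_rng(5)
vals = []
for _ in range(reps):
    X = rng.standard_normal((N, d + 1))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    vals.append(N ** (2 / d) * sep(X))
print(np.mean(vals), (2 / kappa) ** (1 / d) * Gamma(1 + 1 / d))
```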

Remark 3.5

(i) The result for \(\varTheta \) is consistent with the one in [8, p. 280], stating that \(\varTheta =N^{-2/d+o(1)}\).

(ii) The equality of the asymptotic distributions of \(\theta \) and \(\theta ^{\prime \prime }\) comes as no surprise, as the pdf of \(\mu (\mathbf {x}_{i},\mathbf {x}_{j})\) for \(i\ne j\) is symmetric around \(\pi /2\) (see [15, Thm. 1] or [13, Thm. 3.1]). We provide a heuristic justification of the asymptotic distribution of \(\theta ^{\prime \prime }\); that of \(\theta \) can be obtained through a symmetry argument. Suppose for a moment that the \(N (N-1)/2\) random variables \(\mu (\mathbf {x}_{i},\mathbf {x}_{j})\) with \(i<j\) are independent. As the distribution of these random variables has a finite right endpoint \(\pi \), the asymptotic distribution should be a Weibull distribution, provided the von Mises condition in [25, Cor. 3.3.13] is satisfied. This is indeed the case, with \(\alpha =d\). Moreover, the scaling parameter \(c_{n}\) appearing in [25, Thm. 3.3.12] is approximately equal to \((N^{2}\kappa _{d}/2)^{-1/d}\). The asymptotic distribution of \((N^{2}\kappa _{d}/2)^{1/d}( \theta ^{\prime \prime } - \pi )\) therefore has cdf \(\exp {\{ -(-x)^d\}}\) (see [25, p. 121]). The asymptotic result for \(\theta ^{\prime \prime }\) is confirmed after an adequate rescaling. What is remarkable is that the result remains valid even though the random variables \(\mu (\mathbf {x}_{i},\mathbf {x}_{j})\) are dependent.

(iii) The difference between the results for \(\varTheta \) and \(\varTheta ^{\prime \prime }\) is due to the asymmetry of the pdf of \(m(\mathbf {x}_{i},\mathbf {x}_{j})\) for \(i\ne j\) around 1. Indeed, the pdf is

$$\begin{aligned} f_{m(\mathbf {x}_{i},\mathbf {x}_{j})}(x)=2d\kappa _{d}\biggl (\frac{x}{2}\biggr )^{\!d-1}(4-x^{2})^{{d}/{2}-1}. \end{aligned}$$

While the lower tail is similar to that of \(\mu (\mathbf {x}_{i},\mathbf {x}_{j})\), thus justifying the asymptotic equivalence of \(\theta \) and \(\varTheta \), the upper tail of the pdf behaves like \(2^{d-1}d\kappa _{d}(2-x)^{{d}/{2}-1}\). This means that it satisfies the von Mises condition in [25, Cor. 3.3.13] with limit \(\alpha ={d}/{2}\). The scaling parameter \(c_{n}\) of [25, Thm. 3.3.12] is approximately equal to \((2^{d-1}N^{2}\kappa _{d})^{-{2}/{d}}\), and this confirms the asymptotic behavior of \(\varTheta ^{\prime \prime }\).

(iv) In the previous remarks, we justified the asymptotic distributions of some measures of covering and separation under the unwarranted assumption that the elements on which the extrema were taken were independent. Now, we show that this heuristic reasoning can be misleading. The distribution of \(\theta _{j}:=\min _{\mathbf {x}_{i}\in X_{N},i\ne j}\mu (\mathbf {x}_{i},\mathbf {x}_{j})\) can be explicitly obtained as

$$\begin{aligned} \mathbb {P}\ \{ \theta _{j}\le x\}&=1-\mathbb {P} \ \{ \theta _{j}>x\} =1-\mathbb {P} \ \{ \mathbf {x}_{i}\notin B(\mathbf {x}_{j},x),\,1\le i\le N,\,i\ne j\} \\&=1-\biggl [1-I_{\sin ^{2}(x/2)}\biggl (\frac{d}{2},\frac{d}{2}\biggr )\biggr ]^{N-1} \end{aligned}$$

from (1). From (2) this implies

$$\begin{aligned} \mathbb {P}\ \{ N\theta _{j}^{d}\le y\}=1-\left[ 1-I_{\sin ^{2}({y^{1/d}N^{-1/d}}/{2})}\biggl (\frac{d}{2},\frac{d}{2}\biggr )\right] ^{N-1}\!\rightarrow \,1-e^{-\kappa _{d}y} \end{aligned}$$

or \(N\theta _{j}^{d}{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}\mathcal {E}(\kappa _{d}^{-1})\). In the following, we reason as if all the \(N\theta _{j}^{d}\)’s were independent copies of \(\mathcal {E}(\kappa _{d}^{-1})\). From the definition of the largest nearest neighbor distance, it is clear that \(N(\theta ^{\prime })^{d}=\max _{1\le j\le N}N\theta _{j}^{d}\). This suggests (see [25, pp. 128, 155]) the incorrect formula \(N\kappa _{d}(\theta ^{\prime })^{d}-\ln N{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}{\text {Gumbel}}(0,1)\), while the correct one (see [30, p. 259]) is

$$\begin{aligned} \frac{N\kappa _{d}(\theta ^{\prime })^{d}}{2}-\ln N{\mathop {\longrightarrow }\limits ^{\mathcal {D}}}{\text {Gumbel}}(0,1). \end{aligned}$$

In the same way, the definition of the separation distance yields \(N\theta ^{d}=\min _{1\le j\le N}N\theta _{j}^{d}\). The minimum of N independent copies of \(\mathcal {E}(\kappa _{d}^{-1})\) is an \(\mathcal {E}((N\kappa _{d})^{-1})\) random variable. Therefore,

$$\begin{aligned} \mathbb {P} \ \{ N^{2}\theta ^{d}\le y\} =\mathbb {P} \ \{ N\theta ^{d}\le N^{-1}y\} \simeq 1-e^{-\kappa _{d}y} \end{aligned}$$

and \(\mathbb {P}\ \{ N^{{2}/{d}}\theta \le x\} \simeq 1-e^{-\kappa _{d}x^{d}}\). This suggests, at odds with the result of the theorem, that \(N^{{2}/{d}}\theta {\mathop {\longrightarrow }\limits ^{\mathcal {D}}}{\text {Weibull}}(\kappa _{d}^{-{1}/{d}},d)\). The reason for the missing factor of 2 in both formulas is that \(\theta \) and \(\theta ^{\prime }\) each involve \(N(N-1)\) ordered pairs of points, of which only \(N(N-1)/2\) distances are really distinct.
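The factor of 2 in the correct normalization of \(\theta ^{\prime }\) can be seen in a simulation (a rough sketch: the convergence is slow, so only approximate agreement with the Gumbel(0,1) mean should be expected):

```python
import numpy as np
from scipy.special import gamma as Gamma

d, N, reps = 2, 500, 500
kappa = Gamma((d + 1) / 2) / (2 * np.sqrt(np.pi) * Gamma((d + 2) / 2))
rng = np.random.default_rng(6)
z = []
for _ in range(reps):
    X = rng.standard_normal((N, d + 1))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    mu = np.arccos(np.clip(X @ X.T, -1.0, 1.0))
    np.fill_diagonal(mu, np.inf)       # exclude i == j pairs
    t1 = mu.min(axis=1).max()          # largest nearest neighbor distance
    z.append(N * kappa * t1 ** d / 2 - np.log(N))
print(np.mean(z), np.euler_gamma)      # compare with the Gumbel(0,1) mean
```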

4 Conclusions

The present paper provides some asymptotic distributional results for several measures of covering and separation. In these conclusions, we compare our results with the lower bounds available in the literature for deterministic point sets.

When applied to a point set of uniformly and independently distributed random points, the geodesic covering radius \(\alpha \) and its Euclidean counterpart \(\rho \) have an asymptotic order of \((({\ln N})/{N})^{1/d}\) in probability. A lower bound on \(\alpha \), from which a similar inequality can be obtained for \(\rho \), is given by \(\alpha \ge c_{d}N^{-{1}/{d}}\) for a constant \(c_{d}>0\); it is achieved by spherical designs (see [10, p. 784]), which thus have the optimal covering property. See also [8, p. 280] for the Euclidean covering radius. This shows that random points achieve nearly optimal covering, up to a logarithmic factor.

For the same point set, the separation distance \(\theta \) and the minimum distance \(\varTheta \) have an asymptotic order of \(N^{-{2}/{d}}\) in probability. It can be shown that the rate \(\theta \ge c_{d}^{\prime }N^{-{1}/{d}}\), for a constant \(c_{d}^{\prime }>0\), is best possible; point sets achieving it are called well separated (see [11, p. 654] for a review of the literature on the topic). This shows that random points are not, in general, well separated.

As a result of these considerations, we can confirm that, as stated in [8, p. 274], “the covering radius [\(\rho \)] is much more forgiving than the minimal spacing [\(\varTheta \)] in that the placement of a few bad points does not affect [\(\rho \)] drastically.”

5 Proofs

In the proofs, we will use the following notation. We say that \(X_{n}=O_{\mathbb {P}}(a_{n})\) as \(n\rightarrow \infty \) if, for any \(\varepsilon >0\), there exist a finite \(M>0\) and a finite \(N>0\) such that \(\mathbb {P}\ \{ |a_{n}^{-1}X_{n}|>M\} <\varepsilon \) for any \(n>N\). It is clear that \(a_{n}^{-1}X_{n}{\mathop {\longrightarrow }\limits ^{\mathcal {D}}} X\) and \(a_{n}^{-1}X_{n}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}1\) both imply that \(X_{n}=O_{\mathbb {P}}(a_n)\) as \(n\rightarrow \infty \). We will also need the following lemma.

Lemma 5.1

Suppose \(r_{n}(T_{n}-\theta _{n})\rightarrow _{\mathcal {D}}W\), where \(T_{n}\ge 0\), \(\theta _{n}\ge 0\), and \(\theta _{n}r_{n}\rightarrow \infty \). Then, for \(m\in \mathbb {N}\),

$$\begin{aligned} mr_{n}\theta _{n}^{1-{1}/{m}}(T_{n}^{{1}/{m}}-\theta _{n}^{{1}/{m}})\rightarrow _{\mathcal {D}}W. \end{aligned}$$

Proof

We start by writing \(r_{n}(T_{n}-\theta _{n})\) as

$$\begin{aligned} r_{n}(T_{n}-\theta _{n})&=r_{n}\theta _{n}\biggl (\frac{T_{n}}{\theta _{n}}-1\biggr )=r_{n}\theta _{n}\left( \biggl (\frac{T_{n}}{\theta _{n}}\biggr )^{\!1/m}-1\right) \sum _{j=0}^{m-1}\biggl (\frac{T_{n}}{\theta _{n}}\biggr )^{\!j/m}\\&=r_{n}\theta _{n}^{1-1/m}(T_{n}^{1/m}-\theta _{n}^{1/m})\sum _{j=0}^{m-1}\biggl (\frac{T_{n}}{\theta _{n}}\biggr )^{\!{j}/{m}}. \end{aligned}$$

From \(r_{n}(T_{n}-\theta _{n})\rightarrow _{\mathcal {D}}W\), we can state that \({T_{n}}/{\theta _{n}}=1+O_{\mathbb {P}}(1/(\theta _{n}r_{n}))\) and, provided \(\theta _{n}r_{n}\rightarrow \infty \), \({T_{n}}/{\theta _{n}}\rightarrow _{\mathbb {P}}1\) and \(\sum _{j=0}^{m-1}({T_{n}}/{\theta _{n}})^{j/m}\rightarrow _{\mathbb {P}}m\). Using Slutsky’s theorem (see, e.g., [43, p. 34]), we finally get

$$\begin{aligned} mr_{n}\theta _{n}^{1-{1}/{m}}(T_{n}^{{1}/{m}}-\theta _{n}^{{1}/{m}})\rightarrow _{\mathcal {D}}W. \end{aligned}$$

\(\square \)

Proof of Theorem 3.1

We first establish a property of the vector \(\mathbf {r}(X_{N})=(\rho _{1},\rho _{2},\dots ,\rho _{f_{d}})\) that will be used below. As its elements are arranged in no special order, \(\mathbf {r}(X_{N})\) is composed of \(f_{d}\) identically distributed and dependent variables. Conditionally on \(f_d\), the distribution of the vector \(\mathbf {r}(X_N)\) is invariant under permutations of the indices, and \(\mathbf {r}(X_N)\) is a finite exchangeable sequence (see [24]).

From [13, Thm. 2.2], for \(p\ge 0\) we have

$$\begin{aligned} \mathbb {E}\sum _{k=1}^{f_{d}}\rho _{k}^{p}=c_{d,p}N^{1-{p/d}}(1+O(N^{-{2}/{d}})) \end{aligned}$$

for a constant \(c_{d,p} := {2 \kappa _{d^2} \varGamma (d+p/d)}/((d+1)\varGamma (d)(\kappa _d)^{d+p/d})\) defined in [13, p. 65] and, from [14, Sect. 2.5] and [13, p. 65],

$$\begin{aligned} \mathbb {E}f_{d}=B_{d}N\cdot (1+O(N^{-2/d})) \end{aligned}$$

for a constant \(B_{d}:={2 \kappa _{d^2}}/((d+1)(\kappa _d)^d)\) defined in [13, p. 63]. We have

$$\begin{aligned} \mathbb {E}\sum _{k=1}^{f_{d}}\rho _{k}^{p}=\mathbb {E}\left\{ \mathbb {E}\left[ \,\sum _{k=1}^{f_{d}}\rho _{k}^{p}\,\Big |\,f_{d}\right] \right\} =\mathbb {E}\left\{ \,\sum _{k=1}^{f_{d}}\mathbb {E}[\rho _{k}^{p}| f_{d}]\right\} . \end{aligned}$$

Now, conditionally on the value of \(f_{d}\), the vector \((\rho _{1},\dots ,\rho _{f_{d}})\) is a finite exchangeable sequence. Then, because of exchangeability, \(\mathbb {E}[\rho _{k}^{p}\,|\,f_{d}]\) is independent of the index k and we have

$$\begin{aligned} \mathbb {E}\sum _{k=1}^{f_{d}}\rho _{k}^{p}=\mathbb {E}\left\{ \,\sum _{k=1}^{f_{d}}\mathbb {E}[\rho _{k}^{p}| f_{d}]\right\} =\mathbb {E}\{ f_{d}\mathbb {E}[\rho _{k}^{p}| f_{d}]\} =\mathbb {E}f_{d}\rho _{k}^{p}. \end{aligned}$$

Now we consider the covariance \({\text {Cov}}(f_{d},\rho _{k}^{p})\),

$$\begin{aligned} {\text {Cov}}(f_{d},\rho _{k}^{p})&=\mathbb {E}f_{d}\rho _{k}^{p}-\mathbb {E}f_{d}\mathbb {E}\rho _{k}^{p},\\ \frac{{\text {Cov}}(f_{d},\rho _{k}^{p})}{\mathbb {E}f_{d}}&=\frac{\mathbb {E}f_{d}\rho _{k}^{p}}{\mathbb {E}f_{d}}-\mathbb {E}\rho _{k}^{p},\\ \mathbb {E}\rho _{k}^{p}&=\frac{\mathbb {E}f_{d}\rho _{k}^{p}}{\mathbb {E}f_{d}}-\frac{{\text {Cov}}(f_{d},\rho _{k}^{p})}{\mathbb {E}f_{d}}. \end{aligned}$$

From the Cauchy–Schwarz inequality we have

$$\begin{aligned} \biggl |\frac{{\text {Cov}}(f_{d},\rho _{k}^{p})}{\mathbb {E}f_{d}}\biggr |\le \frac{\sqrt{\mathbb {V}(f_{d})\mathbb {V}(\rho _{k}^{p})}}{\mathbb {E}f_{d}}\le \frac{\sqrt{\mathbb {V}(f_{d})\mathbb {E}\rho _{k}^{2p}}}{\mathbb {E}f_{d}}\le \frac{\sqrt{\mathbb {V}(f_{d})\mathbb {E}\rho ^{2p}}}{\mathbb {E}f_{d}} \end{aligned}$$

where \(\rho =\rho (X_{N},\mathbb {S}^{d})\) is the covering radius of \(X_{N}\) on the sphere.

Now, we majorize \(\mathbb {V}(f_{d})\) as in [44, Thm. 4.2.1] and \(\mathbb {E}\rho ^{2p}\) as in [39, Cor. 3.3], to get

$$\begin{aligned} \biggl |\frac{{\text {Cov}}(f_{d},\rho _{k}^{p})}{\mathbb {E}f_{d}}\biggr |&=O\biggl (\frac{\sqrt{\mathbb {V}(f_{d})\mathbb {E}\rho ^{2p}}}{\mathbb {E}f_{d}}\biggr )=O\left( \frac{1}{N}\sqrt{N\biggl (\frac{\ln N}{N}\biggr )^{\!{2p}/{d}}}\right) \\&=O\biggl (\frac{(\ln N)^{p/d}}{N^{(2p+d)/(2d)}}\biggr ). \end{aligned}$$

Therefore,

$$\begin{aligned} \mathbb {E}\rho _{k}^{p}&=\frac{\mathbb {E}f_{d}\rho _{k}^{p}}{\mathbb {E}f_{d}}+O\biggl (\frac{(\ln N)^{\!{p}/{d}}}{N^{(2p+d)/({2d})}}\biggr )\\&=\frac{c_{d,p}}{B_{d}}N^{-{p/d}}+O\biggl (\frac{1}{N^{({p+2})/{d}}}\biggr )+O\biggl (\frac{(\ln N)^{p/d}}{N^{(2p+d)/(2d)}}\biggr )\qquad \text {and}\\ \mathbb {E}(N^{1/d}\rho _{k})^{p}\rightarrow \frac{c_{d,p}}{B_{d}}&=\frac{\varGamma (d+p/d)}{\varGamma (d)}(\kappa _{d})^{-p/d}\\&= \frac{\varGamma (d+p/d)}{\varGamma (d)}\biggl (2\sqrt{\pi }\,\frac{\varGamma ((d+2)/2)}{\varGamma ((d+1)/2)}\biggr )^{\!p/d}. \end{aligned}$$

This means that the raw moments of \(N^{1/d}\rho _{k}\) converge to the raw moments of a generalized gamma random variable \(GG(d,d^{2},\kappa _{d}^{-1})\). Convergence of moments is not in itself sufficient to guarantee convergence in distribution; however, the results in [28, Sect. 5.2] show that the generalized gamma distribution is determined by its moments whenever \(d\ge 1/2\), so that convergence in distribution follows. To get the distribution of \(\alpha _{k}\) we use the delta method (see, e.g., [43, p. 279]), i.e., the fact that \(c_n(W_n-a)\rightarrow _{\mathcal {D}}X\), with \(c_{n}\) diverging, implies that \(c_{n}(g(W_{n})-g(a))\rightarrow _{\mathcal {D}}g^{\prime }(a)\cdot X\) for \(g(\,{\cdot }\,)\) differentiable at a. The result is then evident from the relation \(\alpha _{k}=2\arcsin (\rho _{k}/2)\). \(\square \)

Proof of Theorem 3.2

Let V be the normalized Riemannian volume of the spherical cap with Euclidean (hole) radius \(\rho \):

$$\begin{aligned} V=I_{\rho ^{2}/4}\biggl (\frac{d}{2},\frac{d}{2}\biggr ) \end{aligned}$$
(1)

(see [13, (2)–(6)]). From [32, Remark on p. 276] and [31, p. 664], we have

$$\begin{aligned} NV-\ln N-(d-1)\ln \ln N+\ln \bigl (d! (2\kappa _{d})^{d-1}\bigr ){\mathop {\longrightarrow }\limits ^{\mathcal {D}}}{\text {Gumbel}}(0,1). \end{aligned}$$

According to [39, Cor. 3.4], \({N\rho ^d}/({\ln N}){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\kappa _{d}^{-1}\) and \(\rho =O_{\mathbb {P}}((({\ln N})/{N})^{1/d})\). By expanding V around \(\rho =0\) (see, e.g., [34, 8.17.22]),

$$\begin{aligned} V=\frac{\rho ^{d}2^{1-d}}{dB({d}/{2},{d}/{2})}(1+O(\rho ^{2}))=\kappa _{d}\rho ^{d}+O_{\mathbb {P}}\biggl (\biggl (\frac{\ln N}{N}\biggr )^{\!(d+2)/d}\biggr ), \end{aligned}$$
(2)

we get

$$\begin{aligned} N\kappa _{d}\rho ^{d}-\ln N-(d-1)\ln \ln N+\ln \bigl (d! (2\kappa _{d})^{d-1}\bigr ){\mathop {\longrightarrow }\limits ^{\mathcal {D}}}{\text {Gumbel}}(0,1). \end{aligned}$$
(3)

In order to obtain the asymptotic distribution of \(\rho \) from that of \(\rho ^{d}\), we cannot use the delta method (see, e.g., [43, p. 279]) as the centering of \(\rho \) depends on N. The uniform delta method (see, e.g., [45, Sect. 3.4]) does not seem to work either. Therefore, we use Lemma 5.1, where we identify

$$\begin{aligned} r_{n}=N,\quad m=d,\quad T_{n}=\kappa _{d}\rho ^{d},\\ \theta _{n}=\frac{\ln N+(d-1)\ln \ln N-\ln \bigl (d! (2\kappa _{d})^{d-1}\bigr )}{N}. \end{aligned}$$

We have

$$\begin{aligned} \theta _{n}^{1-{1}/{m}}&=\biggl (\frac{\ln N}{N}\biggr )^{\!1-1/d}\biggl (1+O\biggl (\frac{\ln \ln N}{\ln N}\biggr )\biggr ),\\ \theta _{n}^{{1}/{m}}&=\biggl (\frac{\ln N}{N}\biggr )^{\!1/d}\biggl [1+\frac{d-1}{d}\frac{\ln \ln N}{\ln N}-\frac{\ln \bigl (d! (2\kappa _{d})^{d-1}\bigr )}{d\ln N}+O\biggl (\biggl (\frac{\ln \ln N}{\ln N}\biggr )^{\!2}\biggr )\biggr ]. \end{aligned}$$

At last, \(r_{n}(T_{n}-\theta _{n})\) behaves asymptotically like

$$\begin{aligned} N^{{1}/{d}}(\ln N)^{({d-1})/{d}} d\kappa _{d}^{{1}/{d}}\rho -d\ln N-(d-1)\ln \ln N+\ln \bigl (d! (2\kappa _{d})^{d-1}\bigr ). \end{aligned}$$

Now we show that the same result holds for \(\alpha \). Indeed,

$$\begin{aligned} \rho =2\sin \frac{\alpha }{2}=\alpha +O(\alpha ^{3})=\alpha +O(\rho ^{3})=\alpha +O_{\mathbb {P}}\biggl (\biggl (\frac{\ln N}{N}\biggr )^{\!3/d}\biggr ),\\ N^{1/d}(\ln N)^{(d-1)/{d}}d\kappa _{d}^{1/d}\rho =N^{1/d}(\ln N)^{(d-1)/d} d\kappa _{d}^{1/d}\alpha +O_{\mathbb {P}}\biggl (\frac{( \ln N)^{(d+2)/d}}{N^{2/d}}\biggr ). \end{aligned}$$

\(\square \)

Proof of Theorem 3.4

The results for \(\theta \) and \(\theta ^{\prime \prime }\) are in [15, Thm. 2]. The result for \(\theta ^{\prime }\) is a consequence of [30, p. 259] (see also [29]). It is not immediate, but can be obtained by applying Lemma 5.1 as in the proof of Theorem 3.2. The results for \(\varTheta \) and \(\varTheta ^{\prime }\) come from those for \(\theta \) and \(\theta ^{\prime }\), using the fact that \(\varTheta =2\sin (\theta /{2})\) and \(\varTheta ^{\prime }=2\sin (\theta ^{\prime }/2)\).

As far as \(\varTheta ^{\prime \prime }\) is concerned, in this case too, one has \(\varTheta ^{\prime \prime }=2\sin (\theta ^{\prime \prime }/{2})\) and \(\theta ^{\prime \prime }=2\arcsin (\varTheta ^{\prime \prime }/2)\). Now, using \(\sin x =\cos {({\pi }/{2}-x)}\) and the double-angle formula \(1-2\sin ^2x = \cos 2x\), we get

$$\begin{aligned} 2-\varTheta ^{\prime \prime }&=2\biggl (1-\sin \frac{\theta ^{\prime \prime }}{2}\biggr )=4\sin ^{2}\frac{\pi -\theta ^{\prime \prime }}{4}\\&=\frac{(\pi -\theta ^{\prime \prime })^{2}}{4}+O((\pi -\theta ^{\prime \prime })^{4})=\frac{(\pi -\theta ^{\prime \prime })^{2}}{4} +O_{\mathbb {P}}(N^{-{8}/{d}}) \end{aligned}$$

from \(\pi -\theta ^{\prime \prime }=O_{\mathbb {P}}(N^{-2/{d}})\). This implies that \(2-\varTheta ^{\prime \prime }=O_{\mathbb {P}}(N^{-4/d})\) and that it behaves asymptotically like \((\pi -\theta ^{\prime \prime })^2/4\). Thus,

$$\begin{aligned} 4N^{4/d}(2-\varTheta ^{\prime \prime })=\bigl [N^{2/d}(\pi -\theta ^{\prime \prime })\bigr ]^{2}+O_{\mathbb {P}}(N^{-4/d}). \end{aligned}$$

From the continuous mapping theorem (see, e.g., [43, p. 288]), the asymptotic distribution of \(4N^{4/d}(2-\varTheta ^{\prime \prime })\) is the square of a \({\text {Weibull}}{(({2}/{\kappa _{d}})^{{1}/{d}},d)}\). That is the distribution of a \({\text {Weibull}}{((2/{\kappa _{d}})^{{2}/{d}},{d}/{2})}\):

$$\begin{aligned} F_{[{\text {Weibull}}(\lambda ,k)]^{2}}(x)&=F_{{\text {Weibull}}(\lambda ,k)}(x^{1/2})=1-\exp {\left\{ -\biggl (\frac{x^{{1}/{2}}}{\lambda }\biggr )^{\!k}\right\} }\\&=1-\exp {\left\{ -\biggl (\frac{x}{\lambda ^2}\biggr )^{\!k/2}\right\} }=F_{{\text {Weibull}}(\lambda ^{2},k/2)}(x). \end{aligned}$$

\(\square \)