1 Introduction and results

Wendel’s equality [10] is one of the classical results in geometric probability: it states that if \(x_1, \ldots , x_n\) are i.i.d. random points in \(\mathbb {R}^d\) whose distribution is (centrally) symmetric with respect to the origin o and assigns measure 0 to every hyperplane, then the probability that o is not contained in the convex hull \([x_1,\ldots , x_n]\) is

$$\begin{aligned} \mathbb {P}(o\notin [x_1,\ldots , x_n])=\frac{1}{2^{n-1}}\sum _{i=0}^{d-1}{n-1\atopwithdelims ()i}. \end{aligned}$$
(1.1)

One can find a simple proof of (1.1) in Bárány [1, pp. 94–95]; the probability is the same for every distribution satisfying the above conditions.
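For later reference, the right-hand side of (1.1) is straightforward to evaluate; the short sketch below (Python, standard library only) tabulates the complementary probability \(\mathbb {P}(o\in [x_1,\ldots , x_n])\) for a few values of d and n.

```python
from math import comb

def wendel_miss(d: int, n: int) -> float:
    """Right-hand side of (1.1): the probability that o is NOT in [x_1, ..., x_n]."""
    return sum(comb(n - 1, i) for i in range(d)) / 2 ** (n - 1)

# probability that the origin IS contained in the convex hull
for d in (2, 3):
    for n in (3, 4, 5):
        print(d, n, 1 - wendel_miss(d, n))
# for d = 2, n = 3 this gives 1 - 3/4 = 1/4, the value quoted at the end of Sect. 3
```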

It was proved by Wagner and Welzl [9] that o-symmetric distributions are extremal in this sense. For more information, see also [8, Section 8.1.2].

Recently, Kabluchko and Zaporozhets [3] investigated the related problem of finding the probability that the convex hull of n i.i.d. normally distributed random points in \(\mathbb {R}^d\) contains a fixed point of the space; they called these absorption probabilities. For a general introduction to random polytopes we refer to the recent survey paper by Schneider [7] and the book by Schneider and Weil [8].

We denote the closed unit ball of \(\mathbb {R}^d\) centered at the origin by \(B^d\) and its boundary by \(S^{d-1}\). The symbol \(\kappa _d\) denotes the volume (Lebesgue measure) of \(B^d\), and \(\omega _d\) denotes the surface area of \(S^{d-1}\). For general information on convex sets, see the monograph [6] by Schneider.

In this paper we study the following spindle convex variant of the above problems. Let \(x,y\in \mathbb {R}^d\) be two points and \(\varrho >0\). If \(|x-y|\le 2\varrho \), then let the spindle \([x,y]_\varrho \) determined by x and y be the intersection of all radius \(\varrho \) closed balls that contain both x and y. If \(|x-y|>2\varrho \), then let \([x,y]_\varrho =\mathbb {R}^d\). We say that a convex body \(K\subset \mathbb {R}^d\) (compact convex set with non-empty interior) is spindle convex with radius \(\varrho \), or \(\varrho \)-spindle convex, if, together with any two points \(x,y\in K\), it contains the spindle \([x,y]_\varrho \). It is known ([2]) that if a convex body \(K\subset \mathbb {R}^d\) is spindle convex with radius \(\varrho \), then K is the intersection of all radius \(\varrho \) closed balls that contain K. This latter property is called radius \(\varrho \) ball-convexity.

Let \(X\subset \mathbb {R}^d\). If \(X\subset \varrho B^d+v\) for some \(v\in \mathbb {R}^d\), then the radius \(\varrho \) spindle convex hull \([X]_\varrho \) of X is defined as the intersection of all radius \(\varrho \) closed balls containing X. If \(X\not \subset \varrho B^d+v\) for any \(v\in \mathbb {R}^d\), then let \([X]_\varrho =\mathbb {R}^d\). If \(K\subset \mathbb {R}^d\) is spindle convex with radius \(\varrho \), and \(X\subset K\), then \([X]_\varrho \subset K\). For more information on spindle convexity, see, for example, the paper [2] by Bezdek et al., the book [4] by Martini, Montejano and Oliveros, and the references therein.

First, we describe the \(\varrho \)-spindle convex uniform model. Let \(\varrho >0\), and let \(K\subset \mathbb {R}^d\) be an o-symmetric convex body that is \(\varrho \)-spindle convex. Let \(x_1,\ldots , x_n\) be i.i.d. uniform random points from K. We denote the radius \(\varrho \) spindle convex hull of \(x_1,\ldots , x_n\) by \(K_{(n)}^\varrho =[x_1,\ldots , x_n]_\varrho \). By the \(\varrho \)-spindle convexity of K, the random ball-polytope \(K_{(n)}^\varrho \) is contained in K. We ask the same question as in the classical convex case: what is the probability that \(o\in K_{(n)}^\varrho \)? We note that in this model we may always achieve, by scaling K and the radius \(\varrho \) simultaneously, that \(\varrho =1\). Accordingly, in the following two theorems we assume that \(\varrho =1\).

We study the special case when \(K=rB^d\) with \(0<r\le 1\). Then K is clearly spindle convex with radius \(\varrho =1\). We wish to determine the probability

$$\begin{aligned} P(d,r,n):=\mathbb {P}(o\in [x_1,\ldots , x_n]_1). \end{aligned}$$

In Sect. 2 we prove the following theorem:

Theorem 1.1

Let \(K=r B^d\). Then

$$\begin{aligned} P(d,r,2)=\frac{\omega _{d-1}\omega _d}{(r^d\kappa _d)^2}\int _0^r \int _{0}^r\int _{0}^{\varphi (r_1, r_2)} r_1^{d-1}r_2^{d-1} \sin ^{d-2} \varphi \, d\varphi dr_2 d r_1, \end{aligned}$$

where \(\varphi (r_1, r_2)=\arcsin (r_1/2)+\arcsin (r_2/2)\). In particular,

$$\begin{aligned} P(2,1,2)&=\frac{\sqrt{3}}{\pi }-\frac{1}{3}= 0.2179\ldots ,\\ P(3,1,2)&=\frac{1}{64}(23+12\sqrt{3}\pi -8\pi ^2)= 0.1459\ldots . \end{aligned}$$

Furthermore, for the case of three points, we prove the following statement in Sect. 3.

Theorem 1.2

Let \(K=B^2\). Then

$$\begin{aligned} P(2,1,3)=\frac{-84\pi ^2-477+360\sqrt{3}\pi }{144\pi ^2}= 0.4594\ldots . \end{aligned}$$

Finally, in Sect. 4, we study the Gaussian \(\varrho \)-spindle convex model. Let \(x_1, \ldots , x_n\) be i.i.d. random points from \(\mathbb {R}^d\) distributed according to the standard normal distribution. The question is the same: what is the probability that \(o\in [x_1,\ldots , x_n]_\varrho \)? We note that in this second case it may, and does, happen that \([x_1,\ldots , x_n]_\varrho =\mathbb {R}^d\). We give an integral formula for the probability that the unit radius spindle of two Gaussian random points contains the origin, and we evaluate it numerically in the plane.

2 Proof of Theorem 1.1

The simplest case of the model is when \(n=2\) and \(K=r B^d\), where \(0<r\le 1\) is a fixed number. This case, of course, is of no interest in the classical version of Wendel’s problem, as \(\mathbb {P}(o\in [x_1, x_2])=0\) since \([x_1, x_2]\) is a segment.

Let us examine what it means geometrically that \(o\in [x_1, x_2]_1\). Let \(M( x_1)\) denote the union of all open unit balls that contain o and \(x_1\) on their boundary. Let \(K(d, r, x_1)\) be the part of \(rB^d{\setminus }M(x_1)\) that is in the closed half-space bounded by the hyperplane through o and orthogonal to \(x_1\) which does not contain \(x_1\). This region is depicted in Fig. 1 for \(d=2\). We will only use \(K(2,r,x_1)\) in our calculations, so, in order to simplify notation, we write \(K(r, x_1)=K(2,r,x_1)\).

Fig. 1 The region \(K(r,x_1)\)

In order to evaluate \(P(d,r,2)\), we use the linear Blaschke–Petkantschin formula. Let G(d, 2) denote the Grassmannian manifold of 2-dimensional linear subspaces of \(\mathbb {R}^d\), and \(\nu _2\) be the unique rotation invariant Haar probability measure on G(d, 2). The 2-dimensional special case of the linear Blaschke–Petkantschin formula (see, for example, [8, Theorem 7.2.1 on p. 271]) says the following: if \(f:(\mathbb {R}^d)^2\rightarrow \mathbb {R}\) is a non-negative measurable function, then

$$\begin{aligned} \int _{(\mathbb {R}^d)^2} f\, d\lambda ^2=\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{G(d,2)}\int _{L^2} f(x_1, x_2)\nabla ^{d-2}_2(x_1, x_2)\, d\lambda _L^2\, \nu _2 (dL), \end{aligned}$$
(2.1)

where \(\nabla _2\) denotes the area of the parallelogram spanned by the vectors \(x_1, x_2\) in L. The symbol \(\lambda \) denotes the Lebesgue measure in \(\mathbb {R}^d\), and \(\lambda _L\) the (2-dimensional) Lebesgue measure in L.

Next, using polar coordinates for \(x_1, x_2\in L\), that is, \(x_1=r_1 u_1\), \(x_2=r_2 u_2\), where \(u_1, u_2\in S^{1}\), \(r_1, r_2\in \mathbb {R}_+\), we may write the right-hand side of (2.1) as follows.

$$\begin{aligned}&\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{G(d,2)}\int _{L^2} f(x_1,x_2)\nabla ^{d-2}_2(x_1,x_2)\, d\lambda _L^2\, \nu _2 (dL)\nonumber \\&\quad =\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{G(d,2)}\int _{(S^1\times \mathbb {R})^2} f(r_1u_1, r_2u_2)\nonumber \\&\qquad \times \nabla ^{d-2}_2(r_1u_1, r_2u_2)\, r_1 r_2 dr_1 du_1 d r_2 du_2\, \nu _2 (dL)\nonumber \\&\quad =\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{G(d,2)}\int _{(S^1\times \mathbb {R})^2} f(r_1u_1, r_2u_2)r_1^{d-1} r_2^{d-1}\times \nonumber \\&\qquad \times |u_1\times u_2|^{d-2} dr_1 du_1 d r_2 du_2\, \nu _2 (dL). \end{aligned}$$
(2.2)

Now, from (2.2) we obtain that

$$\begin{aligned} P(d,r,2)&=\frac{1}{(r^d\kappa _d)^2}\int _{rB^d}\int _{rB^d} {\mathbf {1}}(o\in [x_1, x_2]_1)\, dx_1 d x_2\\&=\frac{1}{(r^d\kappa _d)^2}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{G(d,2)}\int _{S^1}\int _0^r\int _{S^1}\int _{0}^r {\mathbf {1}}(o\in [r_1u_1, r_2u_2]_1) r_1^{d-1} r_2^{d-1}\\&\qquad \times |u_1\times u_2|^{d-2} dr_1 du_1 d r_2 du_2\, \nu _2 (dL)\\&=\frac{1}{(r^d\kappa _d)^2}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{S^1}\int _0^r\int _{S^1}\int _{0}^r {\mathbf {1}}(o\in [r_1u_1, r_2u_2]_1) r_1^{d-1} r_2^{d-1}\\&\qquad \times |u_1\times u_2|^{d-2} dr_1 du_1 d r_2 du_2\\&=\frac{1}{(r^d\kappa _d)^2}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{S^1}\int _0^r\int _{S^1}\int _{0}^r {\mathbf {1}}(x_2\in K(r, x_1)) r_1^{d-1} r_2^{d-1}\\&\qquad \times |u_1\times u_2|^{d-2} dr_2 du_2 d r_1 du_1. \end{aligned}$$

By the rotational symmetry of \(rB^d\), integration with respect to \(u_1\) amounts to multiplication by \(2\pi \). Hence, from now on, we fix \(u_1=(0,1)\). Let \(\varphi \) be the angle of \(u_2\) and \(-u_1\), as shown in Fig. 1, and let

$$\begin{aligned} \varphi (r_1, r_2)=\arcsin (r_1/2)+\arcsin (r_2/2). \end{aligned}$$

Then

$$\begin{aligned} P(d,r,2)&=\frac{2\pi }{(r^d\kappa _d)^2}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _0^r\int _{0}^r \int _{-\varphi (r_1,r_2)}^{\varphi (r_1,r_2)} r_1^{d-1}r_2^{d-1}|\sin \varphi |^{d-2}\, d\varphi dr_2 d r_1\\&=\frac{4\pi }{(r^d\kappa _d)^2}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _0^r \int _{0}^r\int _{0}^{\varphi (r_1, r_2)} r_1^{d-1}r_2^{d-1} \sin ^{d-2} \varphi \, d\varphi dr_2 d r_1\\&=\frac{\omega _{d-1}\omega _d}{(r^d\kappa _d)^2}\int _0^r \int _{0}^r\int _{0}^{\varphi (r_1, r_2)} r_1^{d-1}r_2^{d-1} \sin ^{d-2} \varphi \, d\varphi dr_2 d r_1. \end{aligned}$$
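As a numerical cross-check (not part of the proof), the triple integral above can be evaluated directly for specific values of d and r; a minimal sketch, assuming SciPy is available:

```python
from math import asin, gamma, pi, sin
from scipy.integrate import tplquad

def omega(d):   # surface area of S^{d-1}
    return 2 * pi ** (d / 2) / gamma(d / 2)

def kappa(d):   # volume of B^d
    return pi ** (d / 2) / gamma(d / 2 + 1)

def P_two(d, r=1.0):
    """Numerical value of P(d, r, 2) from the triple integral above."""
    pref = omega(d - 1) * omega(d) / (r ** d * kappa(d)) ** 2
    val, _ = tplquad(
        lambda phi, r2, r1: r1 ** (d - 1) * r2 ** (d - 1) * sin(phi) ** (d - 2),
        0, r,                         # r1
        lambda r1: 0, lambda r1: r,   # r2
        lambda r1, r2: 0,             # lower phi limit
        lambda r1, r2: asin(r1 / 2) + asin(r2 / 2),
    )
    return pref * val

print(P_two(2))   # ~0.2179..., in agreement with Theorem 1.1
print(P_two(3))   # ~0.1459...
```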

The above integral can be evaluated for any specific value of d using repeated integration by parts. In particular,

$$\begin{aligned} P(2,r,2)&=\frac{4}{\pi r^4}\int _0^r\int _{0}^r \int _{0}^{\varphi (r_1, r_2)} r_2r_1 d\varphi dr_2 d r_1\nonumber \\&=\frac{4}{\pi r^4}\int _0^r\int _{0}^r r_2r_1 (\arcsin (r_1/2)+\arcsin (r_2/2))\, dr_2 d r_1\nonumber \\&=\frac{4}{\pi r^4}\left( \frac{r^2}{4}(r\sqrt{4-r^2}+2(r^2-2)\arcsin (r/2))\right) \nonumber \\&=\frac{1}{\pi r^2}\left( r\sqrt{4-r^2}+2(r^2-2)\arcsin (r/2)\right) , \end{aligned}$$
(2.3)

and

$$\begin{aligned} P(3,r,2)&=\frac{9}{2r^6}\int _0^r\int _{0}^r \int _{0}^{\varphi (r_1, r_2)} r_2^2 r_1^2 \sin \varphi \, d\varphi dr_2 d r_1\\&=\frac{9}{2r^6}\left( \frac{r^2}{288}(-72+90r^2-4r^4+9r^6)\right. \\&\quad \left. -\frac{1}{4}\arcsin (r/2)\left( r\sqrt{4-r^2}(r^2-2)+4\arcsin (r/2)\right) \right) . \end{aligned}$$

In particular,

$$\begin{aligned} P(2,1,2)&=\frac{\sqrt{3}}{\pi }-\frac{1}{3}= 0.2179\ldots ,\\ P(3,1,2)&=\frac{1}{64}(23+12\sqrt{3}\pi -8\pi ^2)= 0.1459\ldots . \end{aligned}$$

This finishes the proof of Theorem 1.1. \(\square \)
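The particular values can also be checked by simulation. The sketch below, assuming NumPy, samples uniform points from \(B^d\) and tests \(o\in [x_1,x_2]_1\) with the angle criterion underlying the indicator in the proof (the angle between \(-x_1\) and \(x_2\) is at most \(\arcsin (|x_1|/2)+\arcsin (|x_2|/2)\)), applied in the plane spanned by the two sample points; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_ball(k, d):
    """k i.i.d. uniform random points in B^d (uniform direction, radial power trick)."""
    u = rng.standard_normal((k, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return u * rng.random((k, 1)) ** (1.0 / d)

def origin_in_spindle(x1, x2):
    """Test o in [x1, x2]_1 in the plane spanned by x1 and x2."""
    r1, r2 = np.linalg.norm(x1), np.linalg.norm(x2)
    phi = np.arccos(np.clip(-np.dot(x1, x2) / (r1 * r2), -1.0, 1.0))
    return phi <= np.arcsin(r1 / 2) + np.arcsin(r2 / 2)

def estimate(d, trials=100_000):
    return np.mean([origin_in_spindle(*uniform_ball(2, d)) for _ in range(trials)])

print(estimate(2))   # close to 0.2179...
print(estimate(3))   # close to 0.1459...
```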

We conclude this section with the following statements.

Corollary 2.1

For any fixed \(d\ge 2\), it holds that

$$\begin{aligned} \lim _{r\rightarrow 0^+} P(d,r,2)=0. \end{aligned}$$

Furthermore, for any fixed \(0<r\le 1\), it holds that

$$\begin{aligned} \lim _{d\rightarrow \infty } P(d,r,2)=0. \end{aligned}$$

Proof

Note that, using \(\arcsin x\le \pi x/2\) for \(x\in [0,1]\) and \(\sin x\le x\) for \(x\in [0, \pi /2]\), we get that

$$\begin{aligned} P(d,r,2)&\le \frac{C(d)}{r^{2d}}\int _0^r \int _{0}^r\int _{0}^{r_1+r_2} r_1^{d-1}r_2^{d-1} (r_1+r_2)^{d-2}\, d\varphi dr_2 d r_1\\&\le \frac{2^{d-1}C(d)}{r^{2d}}\int _0^r \int _{r_1}^r\int _{0}^{2r_2} r_2^{3d-4}\, d\varphi dr_2 d r_1\\&=\frac{2^{d}C(d)}{r^{2d}}\int _0^r \int _{0}^r r_2^{3d-3}\, dr_2 dr_1\\&=\frac{2^{d}C(d)}{r^{2d}}\frac{r^{3d-1}}{3d-2}, \end{aligned}$$

where the constant C(d) depends only on the dimension d. From this it follows that

$$\begin{aligned} \lim _{r\rightarrow 0^+}P(d,r,2)=0 \end{aligned}$$

for \(d\ge 2\), as claimed.

In the proof of the second statement we use the facts that \(\varphi (r_1, r_2)\le \pi /3\) and that \(\sin \varphi \le \sqrt{3}/2\) for \(\varphi \in [0,\pi /3]\). Thus

$$\begin{aligned} P(d,r,2)&\le \frac{\omega _{d-1}\omega _d}{r^{2d}\kappa _d^2}\int _0^r\int _0^r r_1^{d-1}r_2^{d-1}\,\frac{\pi }{3}\left( \frac{\sqrt{3}}{2}\right) ^{d-2} \, dr_2dr_1\\&=\frac{\pi }{3}\,\frac{\omega _{d-1}\omega _d}{d^2\kappa _d^2}\left( \frac{\sqrt{3}}{2}\right) ^{d-2}=\frac{\pi }{3}\,\frac{d-1}{d}\,\frac{\kappa _{d-1}}{\kappa _d}\left( \frac{\sqrt{3}}{2}\right) ^{d-2}. \end{aligned}$$

From \(\kappa _{d-1}/\kappa _d\sim c\cdot \sqrt{d}\) as \(d\rightarrow \infty \), it follows that \(P(d,r,2)\rightarrow 0\) as \(d\rightarrow \infty \). \(\square \)
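To illustrate the second statement, the final upper bound can be tabulated for a few dimensions; a minimal sketch, assuming only the standard library:

```python
from math import gamma, pi, sqrt

def kappa(d):
    return pi ** (d / 2) / gamma(d / 2 + 1)

def upper_bound(d):
    """Bound on P(d, r, 2) from the end of the proof of Corollary 2.1 (independent of r)."""
    return (pi / 3) * (d - 1) / d * kappa(d - 1) / kappa(d) * (sqrt(3) / 2) ** (d - 2)

for d in (2, 5, 10, 25, 50):
    print(d, upper_bound(d))
# the bound tends to 0 roughly like sqrt(d) * (sqrt(3)/2)^d
```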

3 Proof of Theorem 1.2

The case \(n=3\) can be treated, at least in the plane, as follows. We only consider the case when \(r=1\), that is, \(K=B^2\). Let \(x_1, x_2, x_3\) be i.i.d. uniform random points from \(B^2\). Let

$$\begin{aligned} P(2,1,3):&=\mathbb {P}(o\in [x_1, x_2, x_3]_1)\\&=\mathbb {P}(o\in [x_1, x_2]_1)+\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1)\\&=P(2,1,2)+\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1). \end{aligned}$$

Let

$$\begin{aligned} \overline{P}(2,1,3):=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1). \end{aligned}$$

Due to the rotational symmetry of \(B^2\), we may assume that \(x_1=(0,r_1)\). Let \(x_2=r_2u_2\), and let \(\varphi \) be the angle between \(u_2\) and the negative half of the y-axis. Making use of the previously introduced notation, we write \(K(x_1)=K(1,x_1)\) and, similarly, \(K(x_2)=K(1,x_2)\). The line \(ox_i\) divides \(K(x_i)\) into two congruent parts. The part that lies on the positive side of \(ox_i\) is denoted by \(K^+(x_i)\), and the negative part is \(K^-(x_i)\), as shown in Fig. 2.

Fig. 2

Let \(V^+(x_i)=V_2(K^+(x_i))\) and \(V^-(x_i)=V_2(K^-(x_i))\) for \(i=1,2\). Then it holds that

$$\begin{aligned} V^+(x_i)&=V^-(x_i)=\int _0^1\int _{0}^{\varphi (r_i, r)} r\, d\varphi dr =\int _0^1 (\arcsin (r_i/2)+\arcsin (r/2))r\, dr\\&= \frac{1}{12} \left( 3\sqrt{3}-\pi +6\arcsin (r_i/2)\right) . \end{aligned}$$
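The closed form for \(V^\pm (x_i)\) is easy to confirm numerically; a small sketch, assuming SciPy:

```python
from math import asin, pi, sqrt
from scipy.integrate import quad

def V_plus_numeric(r_i):
    """Area of K^+(x_i) for |x_i| = r_i, by integrating the polar formula above."""
    val, _ = quad(lambda r: (asin(r_i / 2) + asin(r / 2)) * r, 0, 1)
    return val

def V_plus_closed(r_i):
    return (3 * sqrt(3) - pi + 6 * asin(r_i / 2)) / 12

for r_i in (0.25, 0.5, 1.0):
    print(V_plus_numeric(r_i), V_plus_closed(r_i))   # the two columns agree
```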

We distinguish four cases according to the relative position of \(x_1\) and \(x_2\).

Case 1. \(r_2\le r_1\) and \(x_2\notin [x_1, o]_1\).

In this case, \(\varphi \in [\varphi (r_1, r_2), \pi -\arcsin (r_1/2)+\arcsin (r_2/2)]\). Then

$$\begin{aligned} P_1&:=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1\text { and }x_2\notin [x_1, o]_1\text { and } r_1\ge r_2)\\&=\frac{2\pi }{\pi ^3}\int _0^1\int _{0}^{r_1}\int _{\varphi (r_1,r_2)}^{\pi -\arcsin (r_1/2)+\arcsin (r_2/2)}\\&\quad \times \left( V^+(x_1)+V^-(x_2)+\frac{\pi -\varphi }{2}\right) r_1r_2 d\varphi dr_2 dr_1\\&=\frac{1}{\pi ^2}\int _0^1\int _{0}^{r_1}\int _{\varphi (r_1,r_2)}^{\pi -\arcsin (r_1/2)+\arcsin (r_2/2)} \left( \sqrt{3}-\frac{\pi }{3}+\arcsin (r_1/2)\right. \\&\qquad \left. +\arcsin (r_2/2)+\frac{\pi -\varphi }{2}\right) r_1r_2 d\varphi dr_2 dr_1\\&=-\frac{5}{72}-\frac{1}{\pi ^2}+\frac{5}{4\sqrt{3} \pi }. \end{aligned}$$

Case 2.  \(r_2\ge r_1\) and \(x_1\notin [x_2, o]_1\). By the symmetry of \(x_1\) and \(x_2\),

$$\begin{aligned} P_2&:=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1\text { and }x_1\notin [x_2, o]_1\text { and }r_1\le r_2)\\&=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1\text { and }x_2\notin [x_1, o]_1\text { and } r_1\ge r_2)\\&=-\frac{5}{72}-\frac{1}{\pi ^2}+\frac{5}{4\sqrt{3} \pi }. \end{aligned}$$

Case 3.  \(x_2\in [x_1, o]_1\).

In this case \(r_1\ge r_2\) and \(\varphi \in [\pi -\arcsin (r_1/2)+\arcsin (r_2/2),\pi ]\). Then \(K(x_2)\subset K(x_1)\), thus

$$\begin{aligned} P_3&:=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1\text { and }x_2\in [x_1, o]_1)\\&=\frac{2\pi }{\pi ^3}\int _0^1\int _{0}^{r_1}\int _{\pi -\arcsin (r_1/2)+\arcsin (r_2/2)}^\pi V(x_1) r_1r_2 d\varphi dr_2 dr_1\\&=\frac{1}{\pi ^2}\int _0^1\int _{0}^{r_1}\int _{\pi -\arcsin (r_1/2)+\arcsin (r_2/2)}^\pi \left( \frac{\sqrt{3}}{2}-\frac{\pi }{6}+\arcsin (r_1/2)\right) r_1r_2 d\varphi dr_2 dr_1\\&=\frac{99-24\sqrt{3}\pi +4\pi ^2}{576 \pi ^2}. \end{aligned}$$

Case 4.  \(x_1\in [x_2, o]_1\). Again, by the symmetry of \(x_1\) and \(x_2\),

$$\begin{aligned} P_4&=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1\text { and }x_1\in [x_2, o]_1)\\&=\mathbb {P}(o\notin [x_1, x_2]_1\text { and }o\in [x_1, x_2, x_3]_1\text { and }x_2\in [x_1, o]_1)\\&=\frac{99-24\sqrt{3}\pi +4\pi ^2}{576 \pi ^2}. \end{aligned}$$

Thus, considering the symmetry with respect to the line \(ox_1\), we obtain that

$$\begin{aligned} \overline{P}(2,1,3)&=2(P_1+P_2+P_3+P_4)=\frac{-36\pi ^2-477+216\sqrt{3}\pi }{144\pi ^2}. \end{aligned}$$

Thus,

$$\begin{aligned} P(2,1,3)=P(2,1,2)+\overline{P}(2,1,3)=\frac{-84\pi ^2-477+360\sqrt{3}\pi }{144\pi ^2}= 0.4594\ldots . \end{aligned}$$

We note that the actual calculation can be carried out, at least numerically, for any \(0<r\le 1\). Furthermore, the cases of \(n=4, 5,\ldots \) are essentially similar, although the case analysis grows significantly more complicated as n increases.

Finally, we note that according to Wendel’s equality (1.1),

$$\begin{aligned} \mathbb {P}(o\in [x_1,x_2,x_3])=\frac{1}{4}<P(2,1,3). \end{aligned}$$

4 The case of normally distributed random points

In this section we consider the model in which \(\varrho =1\) and \(x_1,\ldots , x_n\) are i.i.d. random points in \(\mathbb {R}^d\) that are distributed according to the standard normal distribution with density function

$$\begin{aligned} f(x)=\frac{1}{(2\pi )^\frac{d}{2}}e^{-\frac{|x|^2}{2}}, \; x\in \mathbb {R}^d. \end{aligned}$$

Here we need to use the part of the definition of the spindle convex hull that normally does not come into play when the random points are chosen from a convex body that is spindle convex with radius less than or equal to 1. Namely, if \(x,y\in \mathbb {R}^d\) are such that \(|x-y|>2\), then \([x,y]_1:=\mathbb {R}^d\).

We are interested in the following probability

$$\begin{aligned} P_N(d,1,n):=\mathbb {P}(o\in [x_1,\ldots , x_n]_1). \end{aligned}$$

It is clear that

$$\begin{aligned} \mathbb {P}(o\in [x_1,\ldots , x_n])\le \mathbb {P}(o\in [x_1,\ldots , x_n]_1) \end{aligned}$$

as \([X]\subset [X]_1\) for any \(X\subset \mathbb {R}^d\).

Let E be the event that \(|x_1-x_2|\le 2\). Then

$$\begin{aligned} P_N(d,1,2)=\mathbb {P}(o\in [x_1, x_2]_1 \text { and } E)+\mathbb {P}(E^c), \end{aligned}$$

where \(E^c\) is the complement of E, as \(E^c\) automatically implies that \(o\in [x_1, x_2]_1\).

Let l denote the length of the random segment \([x_1x_2]\). It is known (see [5, p. 438] and the historical references therein) that the density of \(s:=l^2/4\) is

$$\begin{aligned} g(s)=\frac{s^{\frac{d}{2}-1}e^{-s}}{\Gamma (d/2)}, \; 0< s<\infty . \end{aligned}$$
(4.1)

Thus,

$$\begin{aligned} \mathbb {P}(E^c)=\int _1^\infty g(s)\, ds&=\frac{\Gamma (d/2, 1)}{\Gamma (d/2)}, \end{aligned}$$

where \(\Gamma (\cdot )\) is Euler’s gamma function, and \(\Gamma (d/2, x)\) denotes the upper incomplete gamma function.
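Both the density (4.1) and the value of \(\mathbb {P}(E^c)\) are easy to check numerically; the sketch below assumes NumPy and SciPy (gammaincc is the regularized upper incomplete gamma function, so it equals \(\Gamma (d/2,1)/\Gamma (d/2)\) at the arguments used).

```python
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma

rng = np.random.default_rng(1)

def p_Ec_exact(d):
    """P(E^c) = P(|x1 - x2| > 2) = Gamma(d/2, 1) / Gamma(d/2)."""
    return gammaincc(d / 2, 1.0)

def p_Ec_mc(d, n=500_000):
    x1 = rng.standard_normal((n, d))
    x2 = rng.standard_normal((n, d))
    s = np.sum((x1 - x2) ** 2, axis=1) / 4.0   # s = l^2 / 4, with density (4.1)
    return np.mean(s > 1.0)

for d in (2, 3):
    print(d, p_Ec_exact(d), p_Ec_mc(d))
# for d = 2 both columns are close to 1/e = 0.3678...
```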

Using the linear Blaschke–Petkantschin formula (2.2) and the rotational invariance of the standard normal distribution we obtain that

$$\begin{aligned} \mathbb {P}&(o\in [x_1, x_2]_1 \text { and }E)\\&=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\int _{\mathbb {R}^d} {\mathbf {1}}(o\in [x_1, x_2]_1 \text { and }E)\,e^{-\frac{|x_1|^2+|x_2|^2}{2}}\, dx_1 d x_2\\&=\frac{1}{(2\pi )^d}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{G(d,2)}\int _{L^2} {\mathbf {1}}(o\in [x_1, x_2]_1\,\text { and }E)\\&\quad \times \nabla ^{d-2}_2(x_1, x_2)\,e^{-\frac{|x_1|^2+|x_2|^2}{2}} dx_1 d x_2 \nu _2 (dL)\\&=\frac{1}{(2\pi )^d}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _{L^2} {\mathbf {1}}(o\in [x_1, x_2]_1\text { and }E)\nabla ^{d-2}_2(x_1, x_2)\,e^{-\frac{|x_1|^2+|x_2|^2}{2}} dx_1 d x_2.\\ \end{aligned}$$

In order to evaluate the above integral, we use polar coordinates \(x_1=r_1 u_1\) and \(x_2=r_2u_2\), \(r_1, r_2\ge 0\), \(u_1, u_2\in S^1\). Let \(\varphi \) be the angle of \(-u_1\) and \(u_2\), as before. For \(2-r_1\le r_2\le \sqrt{4-r_1^2}\), let

$$\begin{aligned} \psi (r_1, r_2)=\pi -\arccos \left( \frac{r_1^2+r_2^2-4}{2r_1r_2}\right) . \end{aligned}$$

We distinguish two cases according to \(r_2\). When \(0\le r_2\le 2-r_1\), then \(-\varphi (r_1, r_2)\le \varphi \le \varphi (r_1, r_2)\), and when \(2-r_1\le r_2\le \sqrt{4-r_1^2}\), then \(-\varphi (r_1, r_2)\le \varphi \le -\psi (r_1, r_2)\) or \(\psi (r_1, r_2)\le \varphi \le \varphi (r_1, r_2)\), see Fig. 3.

Fig. 3 Integration bounds in \(\varphi \) according to \(r_2\)

By the rotational symmetry of the normal distribution, integration with respect to \(u_1\) amounts to multiplication by \(2\pi \). Then we obtain that

$$\begin{aligned}&\mathbb {P}(o\in [x_1, x_2]_1\text { and } E)\\&\quad =\frac{2}{(2\pi )^{d-1}}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _0^2\int _0^{2-r_1} \int _{0}^{\varphi (r_1, r_2)}r_1^{d-1} r_2^{d-1}\sin ^{d-2}(\varphi )\,e^{-\frac{r_1^2+r_2^2}{2}} d\varphi dr_2 dr_1 \\&\qquad +\frac{2}{(2\pi )^{d-1}}\frac{\omega _{d-1}\omega _d}{\omega _1\omega _2}\int _0^2\int _{2-r_1}^{\sqrt{4-r_1^2}} \int _{\psi (r_1, r_2)}^{\varphi (r_1, r_2)}r_1^{d-1} r_2^{d-1}\\&\qquad \times \sin ^{d-2}(\varphi )\,e^{-\frac{r_1^2+r_2^2}{2}} d\varphi dr_2 dr_1 . \end{aligned}$$

The above integrals can be evaluated, at least numerically, for any specific value of d. In particular, for \(d=2\), we obtain for the first integral

$$\begin{aligned}&\frac{1}{\pi }\int _0^2 \int _{0}^{2-r_1} \int _{0}^{\varphi (r_1, r_2)} r_1r_2 \,e^{-\frac{r_1^2+r_2^2}{2}}\, d\varphi dr_2 d r_1\\&\quad =\frac{1}{\pi }\int _0^2 \int _{0}^{2-r_1} (\arcsin (r_1/2)+\arcsin (r_2/2)) r_1r_2 \,e^{-\frac{r_1^2+r_2^2}{2}}\, dr_2 d r_1\\&\quad = 0.079214\ldots . \end{aligned}$$

The second integral is

$$\begin{aligned}&\frac{1}{\pi }\int _0^2 \int _{2-r_1}^{\sqrt{4-r_1^2}} \int _{\psi (r_1, r_2)}^{\varphi (r_1, r_2)} r_1r_2 \,e^{-\frac{r_1^2+r_2^2}{2}}\, d\varphi dr_2 d r_1\\&\quad =\frac{1}{\pi }\int _0^2 \int _{2-r_1}^{\sqrt{4-r_1^2}} \left( \arcsin (r_1/2)+\arcsin (r_2/2)-\pi +\arccos \left( \frac{r_1^2+r_2^2-4}{2r_1r_2}\right) \right) \\&\qquad \times r_1r_2 \,e^{-\frac{r_1^2+r_2^2}{2}}\, dr_2 d r_1\\&\quad = 0.01866\ldots . \end{aligned}$$

For \(d=2\),

$$\begin{aligned} \mathbb {P}(E^c)=\frac{\Gamma (1, 1)}{\Gamma (1)}=\frac{1}{e}= 0.367879\ldots , \end{aligned}$$

thus, in summary,

$$\begin{aligned} P_N(2,1,2)= 0.465753\ldots . \end{aligned}$$
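The numerical values above can be reproduced with standard quadrature; a sketch for \(d=2\), assuming SciPy (the inner \(\varphi \)-integrals are already carried out in closed form, as in the displays above):

```python
from math import acos, asin, exp, pi, sqrt
from scipy.integrate import dblquad

def clamp(t):
    return max(-1.0, min(1.0, t))

def f1(r2, r1):
    return (asin(r1 / 2) + asin(r2 / 2)) * r1 * r2 * exp(-(r1**2 + r2**2) / 2)

def f2(r2, r1):
    ang = (asin(r1 / 2) + asin(r2 / 2) - pi
           + acos(clamp((r1**2 + r2**2 - 4) / (2 * r1 * r2))))
    return ang * r1 * r2 * exp(-(r1**2 + r2**2) / 2)

I1, _ = dblquad(f1, 0, 2, lambda r1: 0, lambda r1: 2 - r1)
I2, _ = dblquad(f2, 0, 2, lambda r1: 2 - r1, lambda r1: sqrt(4 - r1**2))

print(I1 / pi)                      # ~0.079214
print(I2 / pi)                      # ~0.01866
print(I1 / pi + I2 / pi + exp(-1))  # ~0.465753, the value of P_N(2, 1, 2)
```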