1 Introduction

In this note, random variate generators that are uniformly fast in the starting location are derived for a family of distributions of hits of symmetric stable processes. This work is motivated by [6], where these methods are used to estimate the Riesz \(\alpha \)-capacity of general sets. More precisely, let \(\{X(t); t \ge 0\}\) be the symmetric stable process of index \(\alpha \), \(0 < \alpha \le 2\), in \(\mathbb {R}^d\) with \(d \ge 2\). When \(0< \alpha < 2\), it is a process with stationary independent increments whose continuous transition density, relative to Lebesgue measure in \(\mathbb {R}^d\), is

$$\begin{aligned} p(t,x) = (2\pi )^{-d} \int e^{i (x, \xi ) - t |\xi |^\alpha } \, d\xi , \end{aligned}$$

where \(x, \xi \in \mathbb {R}^d\), \(t > 0\), \(d\xi \) is Lebesgue measure, \((x,\xi )\) is the inner product in \(\mathbb {R}^d\) and \(|\xi |^2 = (\xi , \xi )\). We have \(X(0)=x\). Define

$$\begin{aligned} T&= \inf \{ t \ge 0: |X(t)| > 1 \}, \\T^*&= \inf \{ t \ge 0: |X(t)| < 1 \}. \end{aligned}$$

Thus, T and \(T^*\) are the first passage times to the exterior and interior of the unit ball, respectively. Define

$$\begin{aligned} \mu (dy)= & {} P\{ X(T) \in dy, T< \infty \}, \quad |y| \ge 1, \\ \mu ^* (dy)= & {} P\{ X(T^*) \in dy, T^* < \infty \}, \quad |y| \le 1. \end{aligned}$$

These describe the hitting distributions of the exterior and the interior of the unit ball, respectively, when \(X(0)=x\). Both measures are well known, and both are given by

$$\begin{aligned} f_x(y)\, dy {\mathop {=}\limits ^{\text {def}}}\frac{\varphi (x)}{ \left| 1 - |y|^2 \right| ^{\alpha /2} \times |x-y|^d } \, dy, \end{aligned}$$

where

$$\begin{aligned} \varphi (x) = { \frac{\Gamma (d/2) \sin (\pi \alpha /2) \left| 1 - |x|^2 \right| ^{\alpha /2}}{\pi ^{1+ d/2}} }. \end{aligned}$$

More precisely,

$$\begin{aligned} \mu (dy) = f_x( y) dy, \quad |y|\ge 1, \end{aligned}$$

if \(0< \alpha < 2\), \(|x| < 1\), and

$$\begin{aligned} \mu ^* (dy) = f_x( y) dy, \quad |y| \le 1, \end{aligned}$$

if \(\alpha < d\), \(|x| > 1\), or if \(\alpha = d = 1\), \(|x| > 1\). Special cases of these results are due to [9] and [11]. The full result, including a more detailed description of the case \(d=1<\alpha <2, |x|>1\), is given by [1]. For a survey and more recent results, see [5].

When \(\alpha =2\), \(|x| > 1\), we set \(T^* = \inf \{t > 0: |X(t)|=1\}\), and note that \(X(T^*)\) is supported on the surface of the unit ball.

In this paper, we are interested in generating a random vector Y in the unit ball \(B = \{ y: |y| \le 1 \}\) of \(\mathbb {R}^d\) with density proportional to \(f_x(y)\) when \(|x| > 1\). Figure 1 shows an example of simulated hitting points of the unit ball in \(\mathbb {R}^3\) generated by the methods described below. Throughout the paper, \(S_{d-1}=\{x \in \mathbb {R}^d: |x|=1 \}\) denotes the surface of B, and \(Z_d\) is a random variable uniformly distributed on \(S_{d-1}\). We only deal with the case \(d > 1\). We drop the dependence upon x in the notation and extend the family of distributions to include the cases \(\alpha = 0\) and \(\alpha =2\). For \(\alpha \in [0,2)\), we define

$$\begin{aligned} f(y) {\mathop {=}\limits ^{\text {def}}}\frac{1}{ \left( 1 - |y|^2 \right) ^{\alpha /2} \times |x-y|^d}, \end{aligned}$$

which is proportional to a density on B. For \(\alpha = 2\), we define the measure on the surface \(S_{d-1}\) of B that is given by the Poisson kernel; it is proportional to \(|x-y|^{-d}\). This corresponds to the hitting position on \(S_{d-1}\) for standard Brownian motion started at x with \(|x|>1\). While f is, up to normalization, a density for all values \(\alpha \in (-\infty ,2)\), we will not be concerned with negative values of \(\alpha \) here.

By rotational invariance, we may assume throughout that \(x= (\lambda ,0,0,\ldots ,0)\), where \(\lambda > 1\).

Fig. 1: A sample of \(n=5000\) hitting points of the unit ball in dimension 3 for \(\alpha =1.5\), with the starting point \(x=(1.5,0,0)\) marked in red. Points are spread throughout the ball, but are more concentrated near the starting point.

Finally, we will name our algorithms for easy reference later. For the Brownian case (\(\alpha =2\)), we have B0, B2, B3 and Bd, while for general \(\alpha \in (0,2)\), they are called R0, R1 and R2.

2 Hitting Distribution for Exiting the Unit Ball When Starting at \(|x| < 1\)

Before focusing on simulating the hitting of a ball, we discuss how the related problem of exiting a ball can be solved. When the starting point is \(x=0\), we can simulate the exit distribution directly. Recall that the exit position also has density \(f_x(y)\), now supported on \(|y| \ge 1\), and that when \(x = 0\),

$$\begin{aligned} f(y)= \frac{\pi ^{-(d/2 + 1)} \Gamma (d/2) \sin (\pi \alpha /2) }{(|y|^2-1)^{\alpha /2} |y|^d}, \quad |y| > 1. \end{aligned}$$

Since this density is radially symmetric, the exit position can be simulated as \(X=R Z_d\), where \(R=|X|\) is the magnitude of X and \(Z_d\) is uniform on the unit sphere \(S_{d-1}\). By radial symmetry, the density of R is

$$\begin{aligned} h(r)=f((r,0,...,0)) \cdot \hbox {Area}(S_{d-1}) \cdot r^{d-1} = \frac{2 \sin (\pi \alpha /2) }{ \pi r (r^2-1)^{\alpha /2}}, \quad r > 1. \end{aligned}$$

A change of variables shows that \(R {\mathop {=}\limits ^{\mathcal {L}}}1/\sqrt{T}\), where \(T {\mathop {=}\limits ^{\mathcal {L}}}\hbox {Beta}(\alpha /2,1-\alpha /2)\), has density h. Surprisingly, the distribution of R does not depend on the dimension d.
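As a concrete illustration, here is a minimal R sketch of this exit sampler from \(x=0\); the helper names are ours, not the paper's:

```r
# Uniform point on the unit sphere S_{d-1}: normalize a standard Gaussian vector.
runif_sphere <- function(d) {
  z <- rnorm(d)
  z / sqrt(sum(z^2))
}

# Exit position of the unit ball for the symmetric alpha-stable process started at 0:
# radius R = 1/sqrt(Tv) with Tv ~ Beta(alpha/2, 1 - alpha/2), independent of direction.
rexit_center <- function(d, alpha) {
  Tv <- rbeta(1, alpha / 2, 1 - alpha / 2)
  (1 / sqrt(Tv)) * runif_sphere(d)
}

# Example: one exit point in dimension 3 for alpha = 1.5.
rexit_center(3, 1.5)
```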

We can also simulate the exit distribution when the starting point is \(x \ne 0\). The duality property in [8], also described in Section 3 of [1], states the following: if \(0< |x| < 1\), if \(x^* = x/|x|^2\) is the spherical inverse of x outside the unit ball, and if \(Y^* \in B\) has the hitting distribution of the ball started from \(x^*\), then the spherical inverse \(Y=Y^*/|Y^*|^2\) has the hitting distribution of the complement of B when the process is started at \(x \in B\).
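In code, the duality is a two-line inversion. The sketch below assumes a user-supplied sampler `rhit_ball(x, alpha)` for the hitting distribution of B started outside the ball (such as the ones developed in later sections); the function names are ours:

```r
# Exit position of the unit ball when started at x with 0 < |x| < 1, by duality:
# invert x outside the ball, sample a hit of the ball from x*, and invert back.
rexit_from <- function(x, alpha, rhit_ball) {
  xstar <- x / sum(x^2)              # spherical inverse of x; |xstar| > 1
  ystar <- rhit_ball(xstar, alpha)   # hit of the unit ball started at xstar
  ystar / sum(ystar^2)               # invert back: exit position, with |result| > 1
}
```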

3 Warm-Up: The Case \(\alpha = 2\)—Brownian Motion

Recall that \(Y=(Y_1,\ldots ,Y_d)=X(T^*) \in S_{d-1}\) is the point of entry of the unit ball B for Brownian motion started at \(x= (\lambda ,0,0,\ldots ,0)\), \(\lambda > 1\), given that Brownian motion hits B. The density of Y with respect to the uniform measure on \(S_{d-1}\) is proportional to \(1/|x-y|^d\), where we recall that \(x=(\lambda , 0,\ldots ,0)\) and \(y \in S_{d-1}\). As \(|x-y| \ge \lambda -1\), we can apply this simple rejection method:

figure a

In this algorithm, we tacitly used the fact that

$$\begin{aligned} \frac{\lambda -1}{|x-Y| } = \sqrt{\frac{ (\lambda - 1)^2 }{ \lambda ^2 +1 -2 \lambda Y_1 }}. \end{aligned}$$

The expected number of iterations grows as \(((\lambda +1)/(\lambda -1))^d\), which makes it clear that for \(\lambda \) near one, a more efficient algorithm is needed. The algorithms presented below all take expected time uniformly bounded over all values of \(\lambda \).
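A minimal R sketch of this simple rejection method (presumably the algorithm named B0 in Section 1); the function name is ours:

```r
# Brownian case (alpha = 2): propose Y uniform on the sphere S_{d-1} and accept
# with probability ((lambda - 1)/|x - Y|)^d.
rhit_brownian_simple <- function(d, lambda) {
  repeat {
    z <- rnorm(d)
    Y <- z / sqrt(sum(z^2))                     # uniform on S_{d-1}
    # ((lambda - 1)/|x - Y|)^2 = (lambda - 1)^2 / (lambda^2 + 1 - 2*lambda*Y[1])
    if (runif(1) <= ((lambda - 1)^2 / (lambda^2 + 1 - 2 * lambda * Y[1]))^(d / 2))
      return(Y)
  }
}
```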

We write \(W=Y_1\). A simple geometric argument shows that W has density proportional to

$$\begin{aligned} f(w) {\mathop {=}\limits ^{\text {def}}}\frac{ ( 1 - w^2 )^{(d-3)/2} }{ \left( 1 - w^2 + (\lambda - w )^2 \right) ^{d/2} }, \quad |w| \le 1. \end{aligned}$$

If \(Z_{d-1}\) denotes a uniform point on \(S_{d-2}\), i.e., on the surface of the unit ball of \(\mathbb {R}^{d-1}\), then we note that

$$\begin{aligned} Y {\mathop {=}\limits ^{\mathcal {L}}}\left( W,\sqrt{1 - W^{2} }\, Z_{d - 1} \right) , \end{aligned}$$

where W and \(Z_{d-1}\) are independent. The generation of \(Z_{d-1}\) is easily achieved by taking \(d-1\) independent standard normal random variates and normalizing them to be of total Euclidean length one; see [2] for general notions of random variate generation. We now describe how to generate W.

Fig. 2: The unnormalized functions f are shown for \(d=2\) (top) to \(d=6\) (bottom) for \(\lambda =1.5\)

An inspection of the density (see Fig. 2) shows three regimes: for \(d=2\), it is U-shaped; for \(d=3\), it is monotonically increasing on \([-1,1]\); and for \(d>3\), the density is unimodal and vanishes at both endpoints of the interval. The cases \(d=2\) and \(d=3\) have simple explicit solutions. After presenting these, we will propose a method for \(d \ge 3\) that is uniformly fast over all values of \(\lambda \).

3.1 The Planar Case: \(d=2\)

The starting density on \([-1,1]\) is proportional to

$$\begin{aligned} f(w) {\mathop {=}\limits ^{\text {def}}}\frac{1 }{ 1+\lambda ^2 -2\lambda w } \times \frac{ 1}{ \sqrt{1-w^2} }. \end{aligned}$$

Set \(\gamma = \frac{ 2\lambda }{ 1+ \lambda ^2}\), and note that \(\gamma \in (0,1)\). Observe that \(f(w) + f(-w)\) is proportional to

$$\begin{aligned} g(w) = \frac{ 1}{ 1 - (\gamma w)^2 } \times \frac{ 1 }{ \sqrt{1-w^2} }, \end{aligned}$$

where our first goal is to generate a random variate W with density proportional to g on [0, 1]. Given such a W, it then suffices to replace W by \(-W\) with probability \(f(-W)/(f(W)+f(-W))\), i.e., with probability

$$\begin{aligned} \frac{ (1+\lambda ^2)^2 -(2\lambda W)^2 }{ 2 (1+\lambda ^2) ( 1+\lambda ^2 +2\lambda W ) } = \frac{ 1+\lambda ^2 - 2\lambda W }{ 2 (1+\lambda ^2) } = \frac{ 1 - \gamma W}{ 2 }. \end{aligned}$$

Note that \(g(w) \le h(w)\), where

$$\begin{aligned} h(w) = \frac{ 1 }{ 1 - \gamma w } \times \frac{ 1 }{ \sqrt{1-w} }. \end{aligned}$$

If W has density proportional to h on [0, 1], then the density of \(Y = 1/ \sqrt{1-W}\) is proportional to

$$\begin{aligned} \frac{1 }{ 1 + \delta y^2 }, \quad y \ge 1, \end{aligned}$$

where \(\delta = (1-\gamma )/ \gamma = (\lambda -1)^2 / 2\lambda \). Thus, \(R = \sqrt{\delta } Y\) has density proportional to \(1/(1+r^2)\) on \([\sqrt{\delta }, \infty )\). If U denotes a uniform [0, 1] random variable, then by the inversion method,

$$\begin{aligned} Y {\mathop {=}\limits ^{\mathcal {L}}}\frac{ \tan \left( \arctan ( \sqrt{\delta } ) + U \left( \frac{\pi }{ 2} - \arctan ( \sqrt{\delta } ) \right) \right) }{ \sqrt{\delta } }. \end{aligned}$$

As \(W= 1 - 1/Y^2\), we can obtain a random variate from g by the rejection method by accepting W with probability

$$\begin{aligned} \frac{g(W)}{ h(W)} = \frac{ 1- \gamma W }{1 - (\gamma W)^2 } \times \frac{ \sqrt{1-W} }{ \sqrt{1-W^2} } = \frac{1}{ (1+\gamma W) \sqrt{1+W} }. \end{aligned}$$

Observe that this acceptance probability is at least \(1/(\sqrt{2}(1+\gamma )) \ge 1/\sqrt{8}\). Therefore, this method is uniformly fast over all choices of \(\lambda > 1\). The algorithm:

figure b
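A minimal R sketch of this planar algorithm (B2 in the naming of the introduction); the function name and the assembly of the final two-dimensional hit point are ours:

```r
# Hitting point of the unit disk for planar Brownian motion started at (lambda, 0),
# lambda > 1, given that the disk is hit.
rhit_brownian_d2 <- function(lambda) {
  gam <- 2 * lambda / (1 + lambda^2)
  delta <- (1 - gam) / gam                      # = (lambda - 1)^2 / (2 * lambda)
  repeat {
    U <- runif(1)
    Y <- tan(atan(sqrt(delta)) + U * (pi / 2 - atan(sqrt(delta)))) / sqrt(delta)
    W <- 1 - 1 / Y^2                            # proposal with density prop. to h
    if (runif(1) <= 1 / ((1 + gam * W) * sqrt(1 + W))) break   # accept for g
  }
  if (runif(1) <= (1 - gam * W) / 2) W <- -W    # recover the asymmetric density f
  c(W, sample(c(-1, 1), 1) * sqrt(1 - W^2))     # Z_1 is a random sign when d = 2
}
```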

3.2 The Cubic Case: \(d=3\)

For \(d=3\) exactly, the density of W simplifies dramatically, and we can find a direct solution by the inversion method. We obtain that if U is uniformly distributed on [0, 1], then

$$\begin{aligned} W {\mathop {=}\limits ^{\mathcal {L}}}\frac{\lambda }{ 2} + \frac{1}{2 \lambda } \left( 1 - \frac{ 1}{ \left( \frac{1 }{ \lambda + 1 } + \frac{ 2 U }{ \lambda ^2 -1 } \right) ^2 } \right) \end{aligned}$$

has density proportional to

$$\begin{aligned} \frac{ 1 }{ \left( 1 - w^2 + (\lambda - w )^2 \right) ^{3/2} }, \quad |w| \le 1. \end{aligned}$$

This will be called algorithm B3. Exact one-liners have been known for over two decades; see, e.g., [3] and [4]. These are basically equivalent to the method suggested above. As \(\lambda \rightarrow \infty \), we obtain \(W {\mathop {=}\limits ^{\mathcal {L}}}2U-1\), which is uniformly distributed on \([-1,1]\). This confirms Archimedes's theorem, which states that a uniform point on \(S_{2}\) has uniform marginals.
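As a sketch, algorithm B3 in R (the one-line inversion for W, plus our own assembly of the full three-dimensional hit point):

```r
# d = 3, alpha = 2: exact inversion for the first coordinate W of the hit point.
rhit_brownian_d3 <- function(lambda) {
  U <- runif(1)
  W <- lambda / 2 +
    (1 / (2 * lambda)) * (1 - 1 / (1 / (lambda + 1) + 2 * U / (lambda^2 - 1))^2)
  z <- rnorm(2)
  Z2 <- z / sqrt(sum(z^2))                      # uniform point on the circle S_1
  c(W, sqrt(1 - W^2) * Z2)
}
```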

3.3 The General Case: \(d\ge 3\)

For \(d>2\), we proceed by simple rejection. We continue to write f for the density of W on \([-1,1]\). We define \(g(w) = f(|w|)\), and observe that \(f(w) \le g(w)\) for all \(w \in [-1,1]\), yet \(\int g \le 2\), so rejection from g is entirely feasible. As g is symmetric about zero, it suffices to find an efficient way of generating a random variable Z with density proportional to g on [0, 1]; then SZ has density proportional to g on \([-1,1]\), where S is an equiprobable random sign. Define

$$\begin{aligned} \gamma = \frac{(\lambda - 1 )^2 }{ 2 \lambda }. \end{aligned}$$

We observe that, for \(w \in [0,1]\), g(w) is proportional to

$$\begin{aligned} \frac{ (1-w^2)^{(d-3)/ 2}}{( \gamma + (1-w) )^{d/2} } \le h(w) {\mathop {=}\limits ^{\text {def}}}\frac{ (2 (1-w))^{(d-3)/ 2 } }{( \gamma + (1-w) )^{d / 2} }. \end{aligned}$$

If H has density proportional to h on [0, 1], then \(T = \gamma / (1-H)\) has a density that is proportional to

$$\begin{aligned} \phi (t) = \frac{1 }{ \sqrt{t} (1+t)^{d/2} }, \quad t \ge \gamma . \end{aligned}$$

We will give a generator for T that has uniformly bounded expected time over all values of \(\gamma \) (and thus \(\lambda \)). This can be used in a simple rejection algorithm that inherits the uniform expected complexity:

figure c
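The following R sketch assembles these steps into a sampler along the lines of algorithm Bd. It takes as an argument a generator `rT(gamma, d)` for the density proportional to \(\phi \) (an R sketch of that generator appears at the end of the next subsection); the function names and the exact organization of the loops are ours:

```r
# General d >= 3, alpha = 2: sample W with density f on [-1, 1], then the hit point.
# Relies on rT(gamma, d), a generator for phi(t) = t^(-1/2) (1+t)^(-d/2), t >= gamma.
rhit_brownian_gen <- function(d, lambda, rT) {
  gam <- (lambda - 1)^2 / (2 * lambda)
  repeat {
    # Z with density proportional to g on [0, 1], by rejection from h via T.
    repeat {
      Tv <- rT(gam, d)
      Z <- 1 - gam / Tv
      if (runif(1) <= ((1 + Z) / 2)^((d - 3) / 2)) break   # g/h = ((1+z)/2)^((d-3)/2)
    }
    W <- sample(c(-1, 1), 1) * Z                           # symmetric proposal from g
    # Accept with probability f(W)/g(W), which equals 1 when W >= 0.
    if (runif(1) <= ((gam + 1 - abs(W)) / (gam + 1 - W))^(d / 2)) break
  }
  z <- rnorm(d - 1)
  Zd1 <- z / sqrt(sum(z^2))                                # uniform on S_{d-2}
  c(W, sqrt(1 - W^2) * Zd1)
}
```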

3.4 A Generator for T

There are two cases, according to whether \(\gamma \ge 2/d\) or \(\gamma < 2/d\). If \(\gamma \ge 2/d\), we bound \(\phi (t) \le 1/( \sqrt{\gamma } (1+t)^{d/2} )\). A random variate with density proportional to the dominating function is given by

$$\begin{aligned} T = (1+\gamma ) U^{- 2/( d-2)} - 1, \end{aligned}$$

where U is uniform on [0, 1]. Thus, one can repeatedly generate uniform \([0,1]^2\) pairs (U, V) until \(V \le \sqrt{\gamma /T}\), and return T. The expected complexity is bounded from above by a function of d times \(\sqrt{1+1/\gamma }\), and is therefore uniformly bounded over all \(\gamma \ge 2/d\). Now assume that \(\gamma < 2/d\). We bound

$$\begin{aligned} \phi (t) \le {\left\{ \begin{array}{ll} \phi _1 (t) = \frac{1}{\sqrt{t} (1+\gamma )^{d/2}} &{} \text {if}\;\frac{2 }{ d} > t \ge \gamma , \\ \phi _2 (t) = \frac{1 }{\sqrt{\frac{2}{d}} (1+t)^\frac{d }{ 2} } &{} \text {if}\;t \ge \frac{2}{d}. \end{array}\right. } \end{aligned}$$

Random variates \(T_1\) and \(T_2\) with densities \(\phi _1\) and \(\phi _2\) can be obtained as \(\left( \sqrt{\gamma } + U \left( \sqrt{\frac{2}{d}} - \sqrt{\gamma } \right) \right) ^2\) and \(\left( 1+\frac{2}{d} \right) U^{-2/(d-2)} -1\), respectively, where U is uniform on [0, 1]. We summarize the rejection algorithm, where \(p= \int _\gamma ^{2/d} \phi _1 (t)\, dt\) and \(q = \int _{2/d}^\infty \phi _2 (t)\, dt\):

figure d

The probability of accepting \(T_1\) is \(E\left\{ \left( \frac{ 1+\gamma }{1+T_1 } \right) ^{d / 2} \right\} \), which is greater than \(1/(1+2/d)^{d/2}\). The latter tends to 1/e as \(d \rightarrow \infty \). The probability of accepting \(T_2\) is \(E\left\{ \sqrt{\frac{2 }{d T_2}} \right\} \), which is bounded from below by a strictly positive constant uniformly over all \(d > 2\). Thus, the expected time taken by the rejection algorithm for T is uniformly bounded from above over all values of \(\gamma > 0\) and \(d > 2\).
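A direct R transcription of this generator follows; the function name is ours, and the closed forms used for p and q are our evaluation of the integrals that define them:

```r
# Generator for T with density proportional to phi(t) = t^(-1/2) (1+t)^(-d/2), t >= gamma.
rT <- function(gamma, d) {
  if (gamma >= 2 / d) {
    repeat {                                    # dominate by 1/(sqrt(gamma) (1+t)^(d/2))
      Tv <- (1 + gamma) * runif(1)^(-2 / (d - 2)) - 1
      if (runif(1) <= sqrt(gamma / Tv)) return(Tv)
    }
  }
  # Case gamma < 2/d: mixture of the two dominating pieces phi_1 and phi_2.
  p <- 2 * (sqrt(2 / d) - sqrt(gamma)) / (1 + gamma)^(d / 2)     # integral of phi_1
  q <- 2 * (1 + 2 / d)^(-(d - 2) / 2) / ((d - 2) * sqrt(2 / d))  # integral of phi_2
  repeat {
    if (runif(1) <= p / (p + q)) {
      T1 <- (sqrt(gamma) + runif(1) * (sqrt(2 / d) - sqrt(gamma)))^2
      if (runif(1) <= ((1 + gamma) / (1 + T1))^(d / 2)) return(T1)   # phi/phi_1
    } else {
      T2 <- (1 + 2 / d) * runif(1)^(-2 / (d - 2)) - 1
      if (runif(1) <= sqrt(2 / (d * T2))) return(T2)                 # phi/phi_2
    }
  }
}
```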

4 A Simple Rejection Algorithm When \(0< \alpha < 2\)

Recalling

$$\begin{aligned} f(y) {\mathop {=}\limits ^{\text {def}}}\frac{1}{ \left( 1 - |y|^2 \right) ^{\alpha /2} \times |x-y|^d }, \end{aligned}$$

we see that

$$\begin{aligned} f(y) \le \frac{1}{ \left( 1 - |y|^2 \right) ^{\alpha /2} } (\lambda -1)^{-d}. \end{aligned}$$

This leads to a simple rejection algorithm, as a random variable with density proportional to \(\left( 1 - |y|^2 \right) ^{-\alpha /2}\) on B can be obtained as \(R Z_{d}\), where R is distributed as

$$\begin{aligned} \sqrt{\hbox {Beta} \left( \frac{d}{ 2}, 1- \frac{\alpha }{ 2} \right) }. \end{aligned}$$

Here is the rejection algorithm:

figure e
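For completeness, a minimal R sketch of this simple rejection algorithm (R0); the function name is ours:

```r
# Algorithm R0: hit point of the unit ball for the symmetric alpha-stable process
# started at x = (lambda, 0, ..., 0), lambda > 1, given that the ball is hit.
rhit_R0 <- function(d, lambda, alpha) {
  repeat {
    z <- rnorm(d)
    Zd <- z / sqrt(sum(z^2))                    # uniform direction
    R <- sqrt(rbeta(1, d / 2, 1 - alpha / 2))   # radius for (1 - |y|^2)^(-alpha/2)
    Y <- R * Zd
    dist2 <- (lambda - Y[1])^2 + sum(Y[-1]^2)   # |x - Y|^2
    if (runif(1) <= ((lambda - 1)^2 / dist2)^(d / 2)) return(Y)
  }
}
```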

Since \(|x-Y| \le (\lambda + 1)\), we can conservatively upper bound the expected number of iterations of this algorithm by

$$\begin{aligned} \left( \frac{ \lambda + 1 }{ \lambda - 1 } \right) ^d. \end{aligned}$$

This performance deteriorates quickly when \(\lambda \) approaches 1. In the next section, we construct an algorithm with uniformly bounded expected time.

5 A Uniformly Fast Algorithm for \(\alpha \in [0,2)\)

Again, we let \(Y=(Y_1,\ldots ,Y_d)=X(T^*) \in B\) be the point of entry of the unit ball B of \(\mathbb {R}^d\) when the symmetric stable process of parameter \(\alpha \in (0,2)\) starts at \(X(0)= (\lambda ,0,0,\ldots ,0)\), \(\lambda > 1\), given that the process enters the ball (i.e., \(T^* < \infty \)). We write \(W=Y_1\) and \(H = \sqrt{ \sum _{i=2}^d Y_i^2}\); see Fig. 3. A simple geometric argument shows that (W, H) has density proportional to

$$\begin{aligned} \frac{ ( 1 - (h^2 + w^2) )^{- \alpha /2} h^{d-2} }{ \left( h^2 + (\lambda - w )^2 \right) ^{d/2} }, \quad |w| \le 1, h^2+w^2 \le 1, h \ge 0. \end{aligned}$$

Given (W, H), note that

$$\begin{aligned} Y {\mathop {=}\limits ^{\mathcal {L}}}(W, H Z_{d-1}), \end{aligned}$$

where (W, H) and \(Z_{d-1}\) are independent. Therefore, we have reduced our problem to a two-dimensional one. For \(d=2\), in particular, note that \(Z_{d-1}\) is merely a random sign.

Fig. 3: Definition of the (W, H) coordinates

Instead of working with (W, H), it is helpful to use coordinates (Q, R), where

$$\begin{aligned} Q= & {} H^2 + W^2,\\ R= & {} 1 - W/\sqrt{H^2 + W^2}, \end{aligned}$$

and \((Q,R) \in [0,1] \times [0,2]\). Conversely,

$$\begin{aligned} W= & {} (1-R) \sqrt{Q}, \\ H= & {} \sqrt{2R-R^2} \sqrt{Q}. \end{aligned}$$

The joint density of (Q, R) (in terms of (q, r)) is proportional to

$$\begin{aligned} \frac{ ( 1 - q )^{- \alpha / 2} q^{(d-2)/ 2} (2r-r^2)^{(d-3)/ 2}}{\left( q(2r-r^2) + (\lambda - (1-r)\sqrt{q} )^2 \right) ^{d/2} }, \quad 0 \le q \le 1, 0 \le r \le 2. \end{aligned}$$

We introduce the function \(\gamma = \gamma (q,r)\) for the denominator without the exponent:

$$\begin{aligned} \gamma = q(2r-r^2) + (\lambda - (1-r)\sqrt{q} )^2. \end{aligned}$$

Observe that \((\lambda - 1)^2 \le \gamma \le (1 + \lambda )^2\). Thus, for \(\lambda \ge 5/4\), the ratio of the upper to the lower bound for \(\gamma \) is at most \(((\lambda +1)/(\lambda -1))^2 \le 81\), the maximum being reached at \(\lambda = 5/4\). For that case, we use rejection from a density proportional to

$$\begin{aligned} ( 1 - q )^{- \frac{\alpha }{2}} q^{(d-2)/2} (2r-r^2)^{(d-3)/ 2}, \end{aligned}$$

where the first part is proportional to a beta \(( d/2, 1-\alpha /2 )\) density, and the second part is proportional to the density of two times a beta \(( (d-1)/2, (d-1)/2 )\) random variable. Thus, the following algorithm, which can be used for all values of the parameters, uses an expected number of iterations not exceeding \(81^{d/2}\) for all choices of \(\alpha \in [0,2), \lambda \ge 5/4\):

figure f
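A minimal R sketch of this algorithm (R1); the function name and the final assembly of Y are ours:

```r
# Algorithm R1: proposal Q ~ Beta(d/2, 1 - alpha/2), R ~ 2 * Beta((d-1)/2, (d-1)/2);
# accept with probability ((lambda - 1)^2 / gamma(Q, R))^(d/2).
rhit_R1 <- function(d, lambda, alpha) {
  repeat {
    Q <- rbeta(1, d / 2, 1 - alpha / 2)
    R <- 2 * rbeta(1, (d - 1) / 2, (d - 1) / 2)
    gam <- Q * (2 * R - R^2) + (lambda - (1 - R) * sqrt(Q))^2
    if (runif(1) <= ((lambda - 1)^2 / gam)^(d / 2)) break
  }
  W <- (1 - R) * sqrt(Q)
  H <- sqrt(2 * R - R^2) * sqrt(Q)
  z <- rnorm(d - 1)
  Zd1 <- z / sqrt(sum(z^2))                     # uniform on S_{d-2}; a sign when d = 2
  c(W, H * Zd1)
}
```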

This leaves us with the case \(\lambda \in (1,5/4]\). To ensure uniform speed over all these choices of \(\lambda \) and \(\alpha \), we will employ a rejection method over a partition of the space. Assume that a generic density f is bounded from above on \(A_k\) by a function \(g_k\), where \(\{ A_k, k \ge 1 \}\) is a partition of the space. Let \(p_k = \int _{A_k} g_k\), \(p= \sum _k p_k\). Assume furthermore that there is a constant \(c > 0\) such that \(\int _{A_k} f \ge c \int _{A_k} g_k\) for every k. Then the following general rejection method requires an expected number of iterations that does not exceed 1/c:

figure g

Remark 1

Straightforward evaluation of \(U g \le f\) is numerically unstable in certain cases, so it is better to test if \(U (g/f) \le 1\), where g/f is algebraically simplified on each of the regions \(A_j\).
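A generic R sketch of this partition-based rejection scheme (the argument names are ours; Remark 1's stabilized test would replace the acceptance line in a careful implementation):

```r
# Generic rejection over a partition {A_k}: pick region k with probability p_k / p,
# propose from the normalized dominating density g_k on A_k, accept with prob. f/g_k.
rejection_partition <- function(p, rprop, gdens, fdens) {
  # p:     vector of weights p_k = integral of g_k over A_k
  # rprop: list of functions; rprop[[k]]() draws a proposal from g_k / p_k on A_k
  # gdens: list of functions; gdens[[k]](x) evaluates g_k at x
  # fdens: function evaluating the (possibly unnormalized) target f at x
  repeat {
    k <- sample(seq_along(p), 1, prob = p)
    x <- rprop[[k]]()
    if (runif(1) * gdens[[k]](x) <= fdens(x)) return(x)
  }
}
```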

To verify the claim, observe that since \(\int f = 1\), the expected number of iterations is \(\sum _k \int _{A_k} g_k \le (1/c) \sum _k \int _{A_k} f = 1/c\). We use a partition into five sets. The basic function of interest is

$$\begin{aligned} f(q,r) = \frac{\zeta (q) \rho (r) }{ (\gamma (q,r))^{d/2} }, \end{aligned}$$

where

$$\begin{aligned} \zeta (q)= & {} ( 1 - q )^{-\alpha / 2} q^{(d-2)/ 2}, \\ \rho (r)= & {} (2r-r^2)^{(d-3)/2}, \\ \gamma (q,r)= & {} q(2r-r^2) + (\lambda - (1-r)\sqrt{q} )^2. \end{aligned}$$

The regions are defined as follows (see Fig. 4):

$$\begin{aligned} A_1&: r \ge 1/16, q \ge 1/2. \\ A_2&: q \le 1/2. \\ A_3&: r \le (\lambda -1)^2, q \ge 3 - 2 \lambda . \\ A_4&: (\lambda -1)^2 \le r \le 1/16, 4r \ge (1-q)^2. \\ A_5&: 1/2 \le q \le 3 -2 \lambda , 4r \le (1-q)^2. \end{aligned}$$
Fig. 4: Partition of the region for method R2 when \(d=2\). The left plot shows the partition into \(A_1,\ldots ,A_5\) in the (r, q) coordinates; the right plot shows the preimage of these sets in the \((x_1,x_2)\) coordinates

Since we employ the rejection method, it suffices to bound all three factors of f(q, r) from above and below on each of the five regions. We begin with \(\gamma (q,r)\):

$$\begin{aligned} \gamma (q,r)= & {} q(2r-r^2) + ((\lambda -1) +(1-\sqrt{q}) + r\sqrt{q} )^2 \\\ge & {} q(2r-r^2) + (\lambda -1)^2 +\left( \frac{1-q }{ 2} \right) ^2 + r^2 q \\= & {} (\lambda -1)^2 +\left( \frac{1-q}{ 2} \right) ^2 + 2rq \\\ge & {} \max \left( (\lambda -1)^2,\left( \frac{1-q}{ 2} \right) ^2, 2rq \right) \\\ge & {} {\left\{ \begin{array}{ll} 1/16 &{} \text {on}\;A_1 \cup A_2 \\ (\lambda -1)^2 &{} \text {on}\;A_3 \\ r &{} \text {on}\;A_4 \\ \left( \frac{1-q}{ 2} \right) ^2 &{} \text {on}\;A_5. \end{array}\right. } \end{aligned}$$

and similarly,

$$\begin{aligned} \gamma (q,r)\le & {} q(2r-r^2) + ((\lambda -1) +(1-\sqrt{q}) + r\sqrt{q} )^2 \\\le & {} 3q(2r-r^2) + 3(\lambda -1)^2 +3\left( 1-q \right) ^2 + 3r^2 q \\= & {} 3(\lambda -1)^2 +3\left( 1-q \right) ^2 + 6rq, \\= & {} 3(\lambda -1)^2 +12\left( \frac{1-q}{ 2} \right) ^2 + 6rq, \\\le & {} 18 \max \left( (\lambda -1)^2,\left( \frac{1-q}{ 2} \right) ^2, 2rq \right) \end{aligned}$$

and thus,

$$\begin{aligned} \gamma (q,r) \le {\left\{ \begin{array}{ll} 12 &{} \text {on}\;A_1 \\ 8.3 &{} \text {on}\;A_2 \\ 36 (\lambda -1)^2 &{} \text {on}\;A_3 \\ 36 r &{} \text {on}\;A_4 \\ 36 \left( \frac{1-q}{2} \right) ^2 &{} \text {on}\;A_5. \end{array}\right. } \end{aligned}$$

We define the upper bound used for rejection in each of the five regions as \(\zeta (q) \rho (r)\) times the upper bound on \(\gamma (q,r)^{-d/2}\) derived above. In a few cases, we use an even larger upper bound that increases the bound at most by a multiplicative factor that does not depend upon \(\alpha \) or \(\lambda \), and thus does not affect the claim that the method is uniformly fast over all \(\alpha \in (0,2)\), \(\lambda \in (1,5/4]\). The bounds are all of the form

$$\begin{aligned} f(q,r) \le g(q,r) \end{aligned}$$

where we observe that for \(d\ge 3\),

$$\begin{aligned} f(q,r)\le & {} {\left\{ \begin{array}{ll} {4^d} \,( 1 - q )^{- \alpha /2} q^{(d-2)/2} (2r-r^2)^{(d-3)/ 2} &{} \text {on}\;A_1 \cup A_2 \\ \frac{1}{ (\lambda -1)^d} \, ( 1 - q )^{- \alpha /2} q^{(d-2)/ 2} (2r-r^2)^{(d-3)/ 2} &{} \text {on}\;A_3 \\ ( 1 - q )^{- \alpha / 2} q^{(d-2)/2} r^{-3/2} (2-r)^{(d-3)/ 2} &{} \text {on}\;A_4 \\ 2^d ( 1 - q )^{- d - (\alpha / 2)} q^{(d-2)/2} (2r-r^2)^{(d-3)/2} &{} \text {on}\;A_5 \end{array}\right. } \\\le & {} g(q,r) {\mathop {=}\limits ^{\text {def}}}{\left\{ \begin{array}{ll} {4^d} \,( 1 - q )^{- \alpha /2} q^{(d-2)/ 2} (2r-r^2)^{(d-3)/2} &{} \text {on}\;A_1 \cup A_2 \\ \frac{2^{(d-3)/ 2} }{(\lambda -1)^d} \, ( 1 - q )^{-\alpha /2} r^{(d-3 )/ 2} &{} \text {on}\;A_3 \\ 2^{(d-3)/ 2} \, ( 1 - q )^{- \alpha /2} r^{-3/2} &{} \text {on}\;A_4 \\ 2^d \,2^{(d-3)/2} \,( 1 - q )^{- d - (\alpha /2)} r^{(d-3)/ 2} &{} \text {on}\;A_5. \end{array}\right. } \end{aligned}$$

For \(d=2\), the factor \(2^{(d-3)/ 2}\) in the expressions dealing with \(A_3\), \(A_4\) and \(A_5\) in the definition of g(q, r) should be replaced by \(4/\sqrt{31}\). By inspection of each of these sets of inequalities, it is clear that in each region, the compound upper bound on f(q, r) used for rejection, divided by f(q, r), is bounded by a universal constant that depends upon d but not on \(\lambda \) or \(\alpha \). Thus, the rejection method based on the bounds given here is uniformly fast:

Fig. 5: First hitting locations on the unit ball starting from \(x=(1.2,0)\) for varying \(\alpha \) in dimension \(d=2\). When \(\alpha =2\), the locations are on the surface. When \(\alpha < 2\), the points are in the interior and become more uniform as \(\alpha \) decreases to 0

Fig. 6: First hitting locations on the unit ball in dimension \(d=2\), starting from \(x=(\lambda ,0)\), where \(\alpha =1.5\) is fixed and \(\lambda \) varies as shown

Proposition 1

(a) (Speed) For fixed d, the expected number of iterations performed by algorithm R2 below is uniformly bounded over \(\lambda \in (1,5/4]\), \(\alpha \in (0,2)\). Algorithm R0 is uniformly fast over all \(\lambda \ge \lambda ^* > 1\) and \(\alpha \in (0,2)\), while algorithm R1 is uniformly fast over all \(\lambda \ge 5/4\), \(\alpha \in (0,2)\).

(b) (Validity) Algorithms R0 and R1 can be used for all values of the parameters. Algorithm R2 is valid for \(\lambda \in (1,5/4]\), \(\alpha \in (0,2)\).

6 Putting Things Together

There are two tasks left to do. First we need to compute

$$\begin{aligned} p_k = \int _{A_k} g(q,r) \, dq dr. \end{aligned}$$

To facilitate computations, we set \(A_0 = A_1 \cup A_2\), define \(p_0 = \int _{[0,1]\times [0,2]} g(q,r)\), where g is the upper bound for \(A_0\) extended to the entire space, and reject all random vectors that do not fall in \(A_0\). This does not affect the validity of Proposition 1. Define

$$\begin{aligned} p = p_0 + p_3 + p_4 + p_5. \end{aligned}$$

The values shown below include expressions that involve the beta function \(B(a,b) = \Gamma (a) \Gamma (b) / \Gamma (a+b)\), and were obtained using the identity \(\int _0^2 (2r-r^2)^{(d-3)/ 2} \,dr = 2^{d-2} B\left( {(d-1)/2}, { (d-1)/ 2} \right) \).

$$\begin{aligned} p_0= & {} 4^d \, B\left( \frac{d }{ 2}, {1- \frac{\alpha }{ 2}} \right) \times 2^{d-2} B\left( \frac{d-1}{ 2}, \frac{ d-1}{ 2} \right) \\ p_3= & {} \frac{2^{(d-3)/2} }{(\lambda -1)^{\alpha / 2}} \, \frac{ 2^{3 - (\alpha / 2)}}{(2-\alpha ) (d-1) } \\ p_4= & {} 2^{(d-3)/ 2} \, \frac{2^{4-\alpha / 2 } }{ \alpha (2-\alpha )} \, \left( {(\lambda -1)^{-\alpha /2} - (1/4)^{-\alpha /2}} \right) \\ p_5= & {} 2^{(d-3)/ 2} \, \frac{8}{ \alpha (d-1)} \, \left( {(2(\lambda -1))^{-\alpha /2} - (1/2)^{-\alpha /2}} \right) . \end{aligned}$$

For \(d=2\), the factor \(2^{(d-3 )/2}\) in the expressions for \(p_3, p_4\) and \(p_5\) should be replaced by \(4/\sqrt{31}\).
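In R, these weights can be transcribed directly (the function name is ours):

```r
# Weights for the partition-based algorithm R2 (lambda in (1, 5/4], alpha in (0, 2)),
# transcribed from the displayed formulas, including the d = 2 correction.
r2_weights <- function(d, lambda, alpha) {
  cd <- if (d == 2) 4 / sqrt(31) else 2^((d - 3) / 2)
  p0 <- 4^d * beta(d / 2, 1 - alpha / 2) * 2^(d - 2) * beta((d - 1) / 2, (d - 1) / 2)
  p3 <- cd / (lambda - 1)^(alpha / 2) * 2^(3 - alpha / 2) / ((2 - alpha) * (d - 1))
  p4 <- cd * 2^(4 - alpha / 2) / (alpha * (2 - alpha)) *
        ((lambda - 1)^(-alpha / 2) - (1 / 4)^(-alpha / 2))
  p5 <- cd * 8 / (alpha * (d - 1)) *
        ((2 * (lambda - 1))^(-alpha / 2) - (1 / 2)^(-alpha / 2))
  c(p0 = p0, p3 = p3, p4 = p4, p5 = p5)
}
```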

On each \(A_k\), we need to show how to generate a random pair (Q, R) with density proportional to g. Except for \(A_4\) and \(A_5\), this is quite straightforward, as we will see below.

The full algorithm:

figure h

The individual generators for g are as follows, where \(V_1\) and \(V_2\) denote independent uniform [0, 1] random variables:

figure i
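Since the figure with the individual generators is not reproduced here, the sketch below gives generators with the stated densities, obtained by inverse-CDF sampling; these are our own derivations and may differ in detail from the paper's:

```r
# (Q, R) with density proportional to g on region A_k (k in {0, 3, 4, 5}).
# V1 and V2 play the role of the independent uniforms in the text.
rg_region <- function(k, d, lambda, alpha) {
  if (k == 0) {                     # proposal on all of [0,1] x [0,2]; membership in
    Q <- rbeta(1, d / 2, 1 - alpha / 2)               # A_0 is checked by the caller
    R <- 2 * rbeta(1, (d - 1) / 2, (d - 1) / 2)
    return(c(Q = Q, R = R))
  }
  V1 <- runif(1)
  V2 <- runif(1)
  if (k == 3) {                     # density prop. to (1-q)^(-alpha/2) r^((d-3)/2)
    Q <- 1 - 2 * (lambda - 1) * V1^(2 / (2 - alpha))
    R <- (lambda - 1)^2 * V2^(2 / (d - 1))
  } else if (k == 4) {              # density prop. to (1-q)^(-alpha/2) r^(-3/2)
    a <- ((lambda - 1)^2)^(-alpha / 4)
    b <- (1 / 16)^(-alpha / 4)
    R <- (a - V1 * (a - b))^(-4 / alpha)              # marginal prop. to r^(-1-alpha/4)
    Q <- 1 - 2 * sqrt(R) * V2^(2 / (2 - alpha))       # conditional on R
  } else {                          # k == 5: prop. to (1-q)^(-d-alpha/2) r^((d-3)/2)
    a <- (2 * (lambda - 1))^(-alpha / 2)
    b <- (1 / 2)^(-alpha / 2)
    u <- (a - V1 * (a - b))^(-2 / alpha)              # u = 1 - Q, prop. to u^(-1-alpha/2)
    Q <- 1 - u
    R <- (u / 2)^2 * V2^(2 / (d - 1))
  }
  c(Q = Q, R = R)
}
```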

7 Practical Considerations

These algorithms have been coded in the open-source R language; see [7]. Figures 5 and 6 show the hitting locations of the unit ball in the plane for varying values of \(\alpha \) and \(\lambda \).

We compared the simple rejection algorithm R0 with the uniformly fast algorithms R1 and R2. The timings in Table 1 show that the performance of R0 deteriorates quickly as \(\lambda \) approaches one. Furthermore, method R1 worsens with the dimension. We should point out that none of these methods is uniformly bounded in the dimension d. For one thing, any algorithm must take time that grows at least linearly in d.

Table 1 Timing in milliseconds per random vector for the two methods with \(\alpha =1.1\)

The methods described above assume a starting point on the first axis. For a general starting point x, first rotate it to the \(x_1\) axis, i.e., map \(x \rightarrow x^* {\mathop {=}\limits ^{\text {def}}}(|x|,0,\ldots ,0)\). Then apply the algorithms given above with starting point \(x^*\) to produce an output \(Y^*\), and reverse the rotation to obtain the final Y. The rotation back to the original direction can be accomplished using d Givens rotations.
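A possible R sketch of this reduction; the rotation bookkeeping and the function names are ours, and `rhit_axis` stands for any sampler for the axis-aligned case (e.g., R1 or R2), assumed to take arguments (d, lambda, alpha):

```r
# Hit point for a general starting point x with |x| > 1: rotate x onto the first
# axis with plane (Givens) rotations, sample there, then undo the rotations.
rhit_general <- function(x, alpha, rhit_axis) {
  d <- length(x)
  v <- x
  thetas <- numeric(d - 1)
  for (k in d:2) {                       # zero out coordinates d, d-1, ..., 2 of x
    theta <- atan2(v[k], v[1])
    c1 <- cos(theta); s1 <- sin(theta)
    v1 <- c1 * v[1] + s1 * v[k]
    v[k] <- -s1 * v[1] + c1 * v[k]       # becomes 0 up to rounding
    v[1] <- v1
    thetas[d - k + 1] <- theta
  }
  y <- rhit_axis(d = d, lambda = v[1], alpha = alpha)   # v is now (|x|, 0, ..., 0)
  for (i in (d - 1):1) {                 # apply the inverse rotations in reverse order
    k <- d - i + 1
    c1 <- cos(thetas[i]); s1 <- sin(thetas[i])
    y1 <- c1 * y[1] - s1 * y[k]
    y[k] <- s1 * y[1] + c1 * y[k]
    y[1] <- y1
  }
  y
}
```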

8 The Work Ahead

While the algorithm above is uniformly fast over all \(\lambda > 1\), \(\alpha \in [0,2)\), it is not uniformly fast over all dimensions d. Thus an improvement in that respect is desirable.

It would be quite interesting to develop an algorithm that can efficiently generate the pair (X, T), where X is the location of entry into the unit ball and T is the time of entry. For the Brownian case (\(\alpha =2\)), the joint distribution is given, e.g., in [10].