Abstract
We provide uniformly efficient random variate generators for a collection of distributions of the hits of the symmetric stable process in \(\mathbb {R}^d\).
1 Introduction
In this note, random variate generators that are uniformly fast in the starting location are derived for a family of distributions of hits of symmetric stable processes. This work is motivated by [6], where these methods are used to estimate Riesz \(\alpha \)-capacity for general sets. More precisely, let \(\{X(t); t \ge 0\}\) (\(d \ge 2\)) be the symmetric stable process in \(\mathbb {R}^d\) of index \(\alpha \) with \(0 < \alpha \le 2\). When \(0< \alpha < 2\), it is a process with stationary independent increments whose continuous transition density, relative to Lebesgue measure in \(\mathbb {R}^d\), is
\[ p(t, x, \xi ) = (2\pi )^{-d} \int _{\mathbb {R}^d} e^{i(x-\xi , z)}\, e^{-t |z|^{\alpha }} \, dz, \]
where \(x, \xi \in \mathbb {R}^d\), \(t > 0\), \(d\xi \) is Lebesgue measure, \((x,\xi )\) is the inner product in \(\mathbb {R}^d\) and \(|\xi |^2 = (\xi , \xi )\). We have \(X(0)=x\). Define
\[ T = \inf \{ t > 0: |X(t)| > 1 \}, \qquad T^* = \inf \{ t > 0: |X(t)| < 1 \}. \]
Thus, T and \(T^*\) are the first passage times to the exterior and interior of the unit ball, respectively. The distributions of \(X(T)\) (for \(|x| < 1\)) and of \(X(T^*)\) (for \(|x| > 1\), on the event \(T^* < \infty \)) describe the hits of the unit ball when \(X(0)=x\). These measures are well known, and are both given by the density
\[ f_x(y) = C(\alpha , d)\, \frac{ \bigl | 1-|x|^2 \bigr | ^{\alpha /2} }{ \bigl | 1-|y|^2 \bigr | ^{\alpha /2} \, |x-y|^{d} }, \]
where
\[ C(\alpha , d) = \Gamma (d/2)\, \pi ^{-d/2-1} \sin (\pi \alpha /2). \]
More precisely, \(f_x\) is the density of \(X(T)\) on \(\{ |y| > 1 \}\) if \(0< \alpha < 2\), \(|x| < 1\), and the density of \(X(T^*)\) on \(\{ |y| < 1 \}\) if \(\alpha < d\), \(|x| > 1\), or if \(\alpha = d = 1\), \(|x| > 1\). Special cases of these results are due to [9] and [11]. The full result, including a more detailed description of the case \(d=1<\alpha <2, |x|>1\), is given by [1]. For a survey and more recent results, see [5].
When \(\alpha =2\), \(|x| > 1\), we set \(T^* = \inf \{t > 0: |X(t)|=1\}\), and note that \(X(T^*)\) is supported on the surface of the unit ball.
In this paper, we are interested in generating a random vector Y in the unit ball \(B = \{ y: |y| \le 1 \}\) of \(\mathbb {R}^d\) with density proportional to \(f_x(y)\) when \(|x| > 1\). Figure 1 shows an example of simulated hitting points of the unit ball in \(\mathbb {R}^3\) generated by the methods described below. Throughout the paper, \(S_{d-1}=\{x \in \mathbb {R}^d: |x|=1 \}\) denotes the surface of B, and \(Z_d\) is a random variable uniformly distributed on \(S_{d-1}\). We only deal with the case \(d > 1\). We drop the dependence upon x in the notation and extend the family of distributions to include the cases \(\alpha = 0\) and \(\alpha =2\). For \(\alpha \in [0,2)\), we define
\[ f(y) = \bigl ( 1-|y|^2 \bigr ) ^{-\alpha /2}\, |x-y|^{-d}, \qquad y \in B, \]
which is proportional to a density on B. For \(\alpha = 2\), we define the measure on the surface \(S_{d-1}\) of B that is given by the Poisson kernel; it is proportional to \(|x-y|^{-d}\). This corresponds to the hitting position on \(S_{d-1}\) for standard Brownian motion started at x, where \(|x|>1\). While, formally, f is a density for all values \(\alpha \in (-\infty ,2)\), we will not be concerned here with negative values of \(\alpha \).
For the sake of normalization, and without loss of generality by rotational symmetry, we take \(x= (\lambda ,0,0,\ldots ,0)\), where \(\lambda > 1\).
Finally, we will name our algorithms for easy reference later. For the Brownian case (\(\alpha =2\)), we have B0, B2, B3 and Bd, while for general \(\alpha \in (0,2)\), they are called R0, R1 and R2.
2 Hitting Distribution for Exiting the Unit Ball When Starting at \(|x| < 1\)
Before focusing on simulating the hitting of a ball, we discuss how the related problem of exiting a ball can be solved. When the starting point is \(x=0\), we can simulate the hitting distribution for the exit problem directly. Recall that it also uses the density f(y) and that when \(x = 0\),
\[ f(y) = \bigl ( |y|^2 - 1 \bigr ) ^{-\alpha /2}\, |y|^{-d}, \qquad |y| > 1. \]
Since this is radially symmetric, it can be simulated by \(X=R Z_d\), where \(R=|X|\) is the magnitude of X and \(Z_d\) is uniform on the unit sphere \(S_{d-1}\). Using radial symmetry, the density of R is proportional to
\[ \frac{ \bigl ( r^2 - 1 \bigr ) ^{-\alpha /2} }{ r }, \qquad r > 1. \]
A change of variable shows that \(R {\mathop {=}\limits ^{\mathcal {L}}}1/\sqrt{T}\), where \(T {\mathop {=}\limits ^{\mathcal {L}}}\hbox {Beta}(\alpha /2,1-\alpha /2)\) has density \(h(t) \propto t^{\alpha /2 - 1} (1-t)^{-\alpha /2}\) on (0, 1). Surprisingly, there is no dependence on the dimension d in the distribution of R.
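As a concrete illustration, the representation \(X = R Z_d\) with \(R = 1/\sqrt{T}\), \(T \sim \hbox {Beta}(\alpha /2, 1-\alpha /2)\), can be coded in a few lines. This is a sketch in Python with NumPy (the paper's own implementation is in R, and the function name here is ours):

```python
import numpy as np

def sample_exit_from_center(d, alpha, rng):
    # Exit point of the unit ball for the symmetric stable process
    # started at the origin: X = R * Z_d with R = 1/sqrt(T),
    # T ~ Beta(alpha/2, 1 - alpha/2); note |X| > 1.
    z = rng.standard_normal(d)
    z /= np.linalg.norm(z)                     # Z_d uniform on S_{d-1}
    t = rng.beta(alpha / 2.0, 1.0 - alpha / 2.0)
    return z / np.sqrt(t)
```

Note that the dimension d only enters through the uniform direction \(Z_d\), never through the radius.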
We can also simulate the hitting distribution for the complement of the unit ball when we start at \(x \ne 0\). The duality property in [8], which is also described in Section 3 of [1], states that if \(0< |x| < 1\), and if \(x^* = x/|x|^2\) is its spherical inverse outside the unit ball, and if \(Y^* \in B\) has the hitting distribution for the ball starting from \(x^*\), its spherical inverse \(Y=Y^*/|Y^*|^2\) has the hitting distribution outside B when started at \(x \in B\).
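The spherical inversion used in this duality is a one-liner; the following Python sketch (function name ours) also serves as a reminder that the map is an involution:

```python
import numpy as np

def spherical_inverse(x):
    # x* = x / |x|^2 swaps the punctured unit ball with the exterior
    # of the closed unit ball; applying it twice returns x.
    x = np.asarray(x, dtype=float)
    return x / np.dot(x, x)
```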
3 Warm-Up: The Case \(\alpha = 2\)—Brownian Motion
Recall that \(Y=(Y_1,\ldots ,Y_d)=X(T^*) \in S_{d-1}\) is the point of entry of the unit ball B for Brownian motion started at \(x= (\lambda ,0,0,\ldots ,0)\), \(\lambda > 1\), given that Brownian motion hits B. The density of Y with respect to the uniform measure on \(S_{d-1}\) is proportional to \(1/|x-y|^d\), where we recall that \(x=(\lambda , 0,\ldots ,0)\) and \(y \in S_{d-1}\). As \(|x-y| \ge \lambda -1\), we can apply this simple rejection method:
In this algorithm, we tacitly used the fact that
The expected number of iterations grows as \(((\lambda +1)/(\lambda -1))^d\), which makes it clear that for \(\lambda \) near one, a more efficient algorithm is needed. The algorithms presented below all take expected time uniformly bounded over all values of \(\lambda \).
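In Python, this simple rejection scheme (which we read as the algorithm named B0 above; the function name is ours) can be sketched as:

```python
import numpy as np

def sample_hit_brownian_B0(lam, d, rng):
    # Propose Y uniform on S_{d-1}; accept with probability
    # ((lam - 1) / |x - Y|)^d, where x = (lam, 0, ..., 0).
    x = np.zeros(d)
    x[0] = lam
    while True:
        y = rng.standard_normal(d)
        y /= np.linalg.norm(y)                 # uniform on the sphere
        if rng.random() <= ((lam - 1.0) / np.linalg.norm(x - y)) ** d:
            return y
```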
We write \(W=Y_1\). A simple geometric argument shows that W has density proportional to
\[ f(w) = \frac{ \bigl ( 1-w^2 \bigr ) ^{(d-3)/2} }{ \bigl ( 1 + \lambda ^2 - 2 \lambda w \bigr ) ^{d/2} }, \qquad w \in [-1,1]. \]
If \(Z_{d-1}\) denotes a uniform point on \(S_{d-2}\), i.e., on the surface of the unit ball of \(\mathbb {R}^{d-1}\), then we note that
\[ Y {\mathop {=}\limits ^{\mathcal {L}}}\Bigl ( W,\, \sqrt{1-W^2}\, Z_{d-1} \Bigr ), \]
where W and \(Z_{d-1}\) are independent. The generation of \(Z_{d-1}\) is easily achieved by taking \(d-1\) independent standard normal random variates and normalizing them to have total Euclidean length one; see [2] for general notions of random variate generation. We now describe how to generate W.
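The normalization step for \(Z_{d-1}\) can be sketched as follows (Python/NumPy, function name ours):

```python
import numpy as np

def uniform_on_sphere(m, rng):
    # Uniform point on S_{m-1}: normalize m iid standard normals.
    z = rng.standard_normal(m)
    return z / np.linalg.norm(z)
```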
An inspection of the density, e.g., Fig. 2, shows three regimes: for \(d=2\), it is U-shaped; for \(d=3\), it is monotonically increasing on \([-1,1]\); and for \(d>3\), the density is unimodal, and zero at both endpoints of the interval. The cases \(d=2\) and \(d=3\) have simple explicit solutions. After presenting these, we will propose a method for \(d \ge 3\) that is uniformly fast over all values of \(\lambda \).
3.1 The Planar Case: \(d=2\)
The starting density on \([-1,1]\) is proportional to
\[ f(w) = \frac{1}{ \sqrt{1-w^2}\, \bigl ( 1 + \lambda ^2 - 2 \lambda w \bigr ) }. \]
Set \(\gamma = \frac{ 2\lambda }{ 1+ \lambda ^2}\), and note that \(\gamma \in [0,1]\). Observe that \(f(w) + f(-w)\) is proportional to
\[ g(w) = \frac{1}{ \sqrt{1-w^2}\, \bigl ( 1 - \gamma ^2 w^2 \bigr ) }, \]
where we initially will try to generate a random variate W with density proportional to g on [0, 1]. Given such a W, it suffices then to replace W by \(-W\) with probability \(f(-W)/(f(W)+f(-W))\), i.e., with probability
\[ \frac{ 1 - \gamma W }{ 2 }. \]
Note that \(g(w) \le h(w)\) on [0, 1], where
\[ h(w) = \frac{1}{ \sqrt{1-w}\, \bigl ( 1 - \gamma w \bigr ) }. \]
If W has density proportional to h, then the density of \(Y = 1/ \sqrt{1-W}\) is proportional to
\[ \frac{1}{ 1 + \delta y^2 }, \qquad y \ge 1, \]
where \(\delta = (1-\gamma )/ \gamma = (\lambda -1)^2 / 2\lambda \). Thus, \(R = \sqrt{\delta } Y\) has density proportional to \(1/(1+r^2)\) on \([\sqrt{\delta }, \infty )\). If U denotes a uniform [0, 1] random variable, then by the inversion method,
\[ R = \tan \left( \arctan \sqrt{\delta } + U \left( \frac{\pi }{2} - \arctan \sqrt{\delta } \right) \right) . \]
As \(W= 1 - 1/Y^2\), we can obtain a random variate from g by the rejection method by accepting W with probability
\[ \frac{ g(W) }{ h(W) } = \frac{1}{ \sqrt{1+W}\, \bigl ( 1 + \gamma W \bigr ) }. \]
Observe that this acceptance probability is at least \(1/(\sqrt{2}(1+\gamma )) \ge 1/\sqrt{8}\). Therefore, this method is uniformly fast over all choices of \(\lambda > 1\). The algorithm:
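Putting the pieces of this subsection together, here is a Python sketch of the \(d=2\) marginal sampler as we reconstruct algorithm B2 (the inversion, acceptance, and sign-flip steps follow the derivation above; the function name is ours):

```python
import math
import random

def sample_w_d2(lam, rng=random):
    # Marginal W = Y_1 of the Brownian hit point on the unit circle,
    # started at (lam, 0), lam > 1 (our reconstruction of algorithm B2).
    gamma = 2.0 * lam / (1.0 + lam * lam)
    delta = (1.0 - gamma) / gamma                  # = (lam - 1)^2 / (2 lam)
    a = math.atan(math.sqrt(delta))
    while True:
        u = rng.random()
        r = math.tan(a + u * (math.pi / 2.0 - a))  # density ~ 1/(1+r^2) on [sqrt(delta), inf)
        y = r / math.sqrt(delta)                   # Y >= 1, density ~ 1/(1 + delta y^2)
        w = 1.0 - 1.0 / (y * y)                    # W in [0, 1), density ~ h
        if rng.random() <= 1.0 / (math.sqrt(1.0 + w) * (1.0 + gamma * w)):
            break                                  # accept: W ~ g on [0, 1]
    if rng.random() <= (1.0 - gamma * w) / 2.0:    # fold back to [-1, 1]
        w = -w
    return w
```

The acceptance probability is at least \(1/\sqrt{8}\), so the loop terminates after a uniformly bounded expected number of iterations.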
3.2 The Cubic Case: \(d=3\)
Just for \(d=3\), the density of W simplifies dramatically, so that we can find a direct solution by the inversion method. We obtain that if U is uniformly distributed on [0, 1], then
\[ W = \frac{ 1 + \lambda ^2 - \left( \frac{ \lambda ^2 - 1 }{ \lambda - 1 + 2U } \right) ^2 }{ 2 \lambda } \]
has density proportional to
\[ \frac{1}{ \bigl ( 1 + \lambda ^2 - 2 \lambda w \bigr ) ^{3/2} }, \qquad w \in [-1,1]. \]
This will be called algorithm B3. Exact one-liners have been known for over two decades; see, e.g., [3] and [4]. These are basically equivalent to the method suggested above. As \(\lambda \rightarrow \infty \), we obtain \(W {\mathop {=}\limits ^{\mathcal {L}}}2U-1\), which is uniformly distributed on \([-1,1]\). This confirms Archimedes' theorem, which states that a uniform point on \(S_{2}\) has uniform marginals.
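As a sketch, the one-line inversion for \(d=3\) (our reconstruction of B3; it maps \(U=0\) to \(W=-1\) and \(U=1\) to \(W=1\), and the function name is ours):

```python
import random

def sample_w_d3(lam, u=None):
    # Inversion for d = 3: W has density proportional to
    # (1 + lam^2 - 2 lam w)^(-3/2) on [-1, 1].
    if u is None:
        u = random.random()
    s = (lam * lam - 1.0) / (lam - 1.0 + 2.0 * u)
    return (1.0 + lam * lam - s * s) / (2.0 * lam)
```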
3.3 The General Case: \(d\ge 3\)
For \(d>2\), we proceed by simple rejection. Using the notation for W from above, we still use the notation f for the density of W on \([-1,1]\) (see above). We define \(g(w) = f(|w|)\), and observe that \(f(w) \le g(w)\) for all \(w \in [-1,1]\), yet \(\int g \le 2\), so rejection from g is entirely feasible. As g is symmetric about zero, it suffices to find an efficient way of generating a random variable Z with density proportional to g on [0, 1], and then note that SZ has density g on \([-1,1]\), where S is an equiprobable random sign. Define
\[ \gamma = \frac{ (\lambda - 1)^2 }{ 2 \lambda } \quad \hbox {and} \quad h(w) = \frac{ (1-w)^{(d-3)/2} }{ \bigl ( \gamma + 1 - w \bigr ) ^{d/2} }, \qquad w \in [0,1]. \]
We observe that g(w) is proportional to
\[ \frac{ \bigl ( 1-w^2 \bigr ) ^{(d-3)/2} }{ \bigl ( \gamma + 1 - w \bigr ) ^{d/2} } \le 2^{(d-3)/2}\, h(w). \]
If H has density proportional to h on [0, 1], then \(T = \gamma / (1-H)\) has a density that is proportional to
\[ \phi (t) = \frac{1}{ \sqrt{t}\, (1+t)^{d/2} }, \qquad t \ge \gamma . \]
We will give a generator for T that has uniformly bounded expected time over all values of \(\gamma \) (and thus \(\lambda \)). This can be used in a simple rejection algorithm that inherits the uniform expected complexity:
3.4 A Generator for T
There are two cases, according to whether \(\gamma \ge 2/d\) or \(\gamma < 2/d\). If \(\gamma \ge 2/d\), we bound \(\phi (t) \le 1/( \sqrt{\gamma } (1+t)^{d/2} )\). A random variate with density proportional to the dominating function is given by
\[ T = (1+\gamma )\, U^{-2/(d-2)} - 1, \]
where U is uniform on [0, 1]. Thus, one can repeat generating uniform \([0,1]^2\) pairs (U, V) until \(V \le \sqrt{\gamma /T}\), and return T. The expected complexity is bounded from above by a function of d times \(\sqrt{1+1/\gamma }\), and is therefore uniformly bounded over all \(\gamma \ge 2/d\). So assume that \(\gamma < 2/d\). We bound
\[ \phi (t) \le \phi _1(t)\, \mathbb {1}_{[\gamma , 2/d]}(t) + \phi _2(t)\, \mathbb {1}_{(2/d, \infty )}(t), \quad \hbox {where} \quad \phi _1(t) = \frac{1}{ (1+\gamma )^{d/2} \sqrt{t} }, \qquad \phi _2(t) = \frac{ (1+t)^{-d/2} }{ \sqrt{2/d} }. \]
Random variates \(T_1\) and \(T_2\) with densities \(\phi _1\) and \(\phi _2\) can be obtained as \(\left( \sqrt{\gamma } + U \left( \sqrt{\frac{2}{d}} - \sqrt{\gamma } \right) \right) ^2\) and \(\left( 1+\frac{2}{d} \right) U^{-2/(d-2)} -1\), respectively, where U is uniform on [0, 1]. We summarize the rejection algorithm, where \(p= \int _\gamma ^{2/d} \phi _1 (t)\, dt\) and \(q = \int _{2/d}^\infty \phi _2 (t)\, dt\):
The probability of accepting \(T_1\) is \(E\left\{ \left( \frac{ 1+\gamma }{1+T_1 } \right) ^{d / 2} \right\} \), which is greater than \(1/(1+2/d)^{d/2}\). The latter tends to 1/e as \(d \rightarrow \infty \). The probability of accepting \(T_2\) is \(E\left\{ \sqrt{\frac{2 }{d T_2}} \right\} \), which is bounded from below by a strictly positive constant uniformly over all \(d > 2\). Thus, the expected time taken by the rejection algorithm for T is uniformly bounded from above over all values of \(\gamma > 0\) and \(d > 2\).
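A Python sketch of the two-regime generator for T follows. This is our reconstruction: we take \(\phi (t) \propto t^{-1/2} (1+t)^{-d/2}\) on \([\gamma , \infty )\), which is consistent with the bounds and acceptance probabilities quoted in this subsection, and the mixture weights p and q are our evaluation of the stated integrals:

```python
import math
import random

def sample_T(gamma, d, rng=random):
    # T with density proportional to phi(t) = 1/(sqrt(t) (1+t)^(d/2))
    # on [gamma, infinity), for d > 2 (our reconstruction; see the text).
    if gamma >= 2.0 / d:
        # Dominate phi by 1/(sqrt(gamma) (1+t)^(d/2)); invert that bound.
        while True:
            u = 1.0 - rng.random()                 # u in (0, 1]
            t = (1.0 + gamma) * u ** (-2.0 / (d - 2.0)) - 1.0
            if rng.random() <= math.sqrt(gamma / t):
                return t
    # gamma < 2/d: dominate by phi_1 on [gamma, 2/d] and phi_2 on (2/d, inf),
    # with mixture weights p = integral of phi_1 and q = integral of phi_2.
    p = 2.0 * (math.sqrt(2.0 / d) - math.sqrt(gamma)) / (1.0 + gamma) ** (d / 2.0)
    q = (1.0 + 2.0 / d) ** (1.0 - d / 2.0) / (math.sqrt(2.0 / d) * (d / 2.0 - 1.0))
    while True:
        u = 1.0 - rng.random()
        if rng.random() <= p / (p + q):
            s = math.sqrt(gamma) + u * (math.sqrt(2.0 / d) - math.sqrt(gamma))
            t1 = s * s                             # density ~ 1/sqrt(t) on [gamma, 2/d]
            if rng.random() <= ((1.0 + gamma) / (1.0 + t1)) ** (d / 2.0):
                return t1
        else:
            t2 = (1.0 + 2.0 / d) * u ** (-2.0 / (d - 2.0)) - 1.0
            if rng.random() <= math.sqrt(2.0 / (d * t2)):
                return t2
```

Both acceptance probabilities are bounded away from zero uniformly in \(\gamma \) and \(d > 2\), so the expected number of loop iterations is uniformly bounded, as claimed above.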
4 A Simple Rejection Algorithm When \(0< \alpha < 2\)
Recalling
\[ f(y) = \bigl ( 1-|y|^2 \bigr ) ^{-\alpha /2}\, |x-y|^{-d}, \]
we see that
\[ f(y) \le \frac{ \bigl ( 1-|y|^2 \bigr ) ^{-\alpha /2} }{ (\lambda - 1)^d }. \]
This leads to a simple rejection algorithm, as a random variable with density proportional to \(\left( 1 - |y|^2 \right) ^{-\alpha /2}\) on B can be obtained as \(R Z_{d}\), where R is distributed as
\[ \sqrt{ \hbox {Beta} \bigl ( d/2,\, 1 - \alpha /2 \bigr ) }. \]
Here is the rejection algorithm:
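In Python/NumPy, this rejection algorithm (the paper's R0, as we read it; the function name is ours) can be sketched as:

```python
import numpy as np

def sample_hit_R0(lam, alpha, d, rng):
    # Simple rejection for 0 < alpha < 2: propose Y with density
    # proportional to (1 - |y|^2)^(-alpha/2) on the unit ball, then
    # accept with probability ((lam - 1) / |x - Y|)^d.
    x = np.zeros(d)
    x[0] = lam
    while True:
        z = rng.standard_normal(d)
        z /= np.linalg.norm(z)                 # uniform direction Z_d
        r = np.sqrt(rng.beta(d / 2.0, 1.0 - alpha / 2.0))
        y = r * z
        if rng.random() <= ((lam - 1.0) / np.linalg.norm(x - y)) ** d:
            return y
```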
Since \(|x-Y| \le \lambda + 1\), we can conservatively upper bound the expected number of iterations of this algorithm by
\[ \left( \frac{ \lambda + 1 }{ \lambda - 1 } \right) ^{d}. \]
This performance deteriorates quickly when \(\lambda \) approaches 1. In the next section, we construct an algorithm with uniformly bounded expected time.
5 A Uniformly Fast Algorithm for \(\alpha \in [0,2)\)
Again, we let \(Y=(Y_1,\ldots ,Y_d)=X(T^*) \in B\) be the point of entry of the unit ball B of \(\mathbb {R}^d\) when the symmetric stable process of parameter \(\alpha \in (0,2)\) starts at \(X(0)= (\lambda ,0,0,\ldots ,0)\), \(\lambda > 1\), given that the process enters the ball (i.e., \(T^* < \infty \)). We write \(W=Y_1\), and \(H = \sqrt{ \sum _{i=2}^d Y_i^2}\), see Fig. 3. A simple geometric argument shows that (W, H) has density proportional to
\[ h^{d-2} \bigl ( 1 - w^2 - h^2 \bigr ) ^{-\alpha /2} \bigl ( (\lambda - w)^2 + h^2 \bigr ) ^{-d/2}, \qquad w^2 + h^2 \le 1,\ h \ge 0. \]
Given (W, H), note that
\[ Y {\mathop {=}\limits ^{\mathcal {L}}}\bigl ( W,\, H Z_{d-1} \bigr ), \]
where (W, H) and \(Z_{d-1}\) are independent. Therefore, we have reduced our problem to a two-dimensional one. For \(d=2\), in particular, note that \(Z_{d-1}\) is merely a random sign.
Instead of working with (W, H), it is helpful to use coordinates (Q, R), where
and \((Q,R) \in [0,1] \times [0,2]\). Vice versa,
The joint density of (Q, R) (in terms of (q, r)) is proportional to
We introduce the function \(\gamma = \gamma (q,r)\) for the denominator without the exponent:
Observe that \((\lambda - 1)^2 \le \gamma \le 1 + \lambda ^2\). Thus, for \(\lambda \ge 5/4\), the ratio of upper to lower bound for \(\gamma \) is \(\le 41\), the maximum being reached at \(\lambda = 5/4\). For that case, we use rejection from a density proportional to
where the first part is a beta \(( d/2, 1-\alpha /2 )\) density, and the second part is proportional to the density of two times a beta \(( (d-1)/2, (d-1)/2 )\) random variable. Thus, the following algorithm, which can be used for all values of the parameters, uses an expected number of iterations not exceeding \(41^{d/2}\) for all choices of \(\alpha \in [0,2), \lambda \ge 5/4\):
This leaves us with the case \(\lambda \in (1,5/4]\). To ensure uniform speed over all these choices of \(\lambda \) and \(\alpha \), we will employ a rejection method over a partition of the space. Assume that a generic density f is bounded by a function \(g_k\), where \(\{ A_k, k \ge 1 \}\) is a partition of the space. Let \(p_k = \int _{A_k} g_k\), \(p= \sum _k p_k\). Assume furthermore that there is a constant \(c > 0\) such that \(\int _{A_k} f \ge c \int _{A_k} g_k\). Then the following general rejection method requires an expected number of iterations that does not exceed 1/c:
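The general scheme can be sketched as follows (a toy Python skeleton with our function names, where `weights[k]` plays the role of \(p_k\), `samplers[k]` draws from the density proportional to \(g_k\) on \(A_k\), and `accept_prob[k]` evaluates \(f/g_k\)):

```python
import random

def partition_rejection(samplers, weights, accept_prob, rng=random):
    # General rejection over a partition {A_k}: choose region k with
    # probability p_k / p, propose X from the density proportional to
    # g_k on A_k, and accept with probability f(X) / g_k(X).
    p = sum(weights)
    while True:
        u = rng.random() * p
        k = 0
        while u > weights[k] and k < len(weights) - 1:
            u -= weights[k]
            k += 1
        x = samplers[k]()
        if rng.random() <= accept_prob[k](x):
            return x
```

With \(\int _{A_k} f \ge c \int _{A_k} g_k\), each outer iteration accepts with probability at least c, giving the stated \(1/c\) bound on the expected number of iterations.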
Remark 1
Straightforward evaluation of \(U g \le f\) is numerically unstable in certain cases, so it is better to test if \(U (g/f) \le 1\), where g/f is algebraically simplified on each of the regions \(A_j\).
To verify the claim, observe that \(\int f = 1\), and \(\sum _k \int _{A_k} g_k \le 1/c\). We use a partition into five sets. The basic function of interest is
where
The regions are defined as follows, see Fig. 4:
Since we employ the rejection method, it suffices to bound all three factors of f(q, r) from above and below on each of the five regions. We begin with \(\gamma (q,r)\):
and similarly,
and thus,
We define the upper bound used for rejection in each of the five regions as \(\zeta (q) \rho (r)\) times the upper bound on \(\gamma (q,r)^{-d/2}\) derived above. In a few cases, we use an even larger upper bound that increases the bound at most by a multiplicative factor that does not depend upon \(\alpha \) or \(\lambda \), and thus will not affect the claim that the method is universally fast over all \(\alpha \in (0,2)\), \(\lambda \in (1,5/4]\). The bounds are all of the form
where we observe that for \(d\ge 3\),
For \(d=2\), the factor \(2^{(d-3)/ 2}\) in the expressions dealing with \(A_3\), \(A_4\) and \(A_5\) in the definition of g(q, r) should be replaced by \(4/\sqrt{31}\). By inspection of each of these sets of inequalities, it is clear that in each region, the compound upper bound on f(q, r) used for rejection, divided by f(q, r) is bounded by a universal constant that depends upon d but not on \(\lambda \) or \(\alpha \). Thus, the rejection method that is based on the bounds given here is uniformly fast:
Proposition 1
(a) (Speed) For fixed d, the expected number of iterations performed by algorithm R2 below is uniformly bounded over \(\lambda \in (1,5/4]\), \(\alpha \in (0,2)\). Algorithm R0 is uniformly fast over all \(\lambda \ge \lambda ^* > 1\) and \(\alpha \in (0,2)\), while algorithm R1 is uniformly fast over all \(\lambda \ge 5/4\), \(\alpha \in (0,2)\).
(b) (Validity) Algorithms R0 and R1 can be used for all values of the parameters. Algorithm R2 is valid for \(\lambda \in (1,5/4]\), \(\alpha \in (0,2)\).
6 Putting Things Together
There are two tasks left to do. First we need to compute
To facilitate computations, we let \(A_0 = A_1 \cup A_2\), define \(p_0 = \int _{[0,1]\times [0,2]} g(q,r)\), where g is the upper bound for \(A_0\) extended to the entire space, and reject all random vectors that do not fall in \(A_0\). This does not affect the validity of Proposition 1. Define
The values shown below include expressions that involve the beta function \(B(a,b) = \Gamma (a) \Gamma (b) / \Gamma (a+b)\), and were obtained using the identity \(\int _0^2 (2r-r^2)^{(d-3)/ 2} \,dr = 2^{d-2} B\left( {(d-1)/2}, { (d-1)/ 2} \right) \).
For \(d=2\), the factor \(2^{(d-3 )/2}\) in the expressions for \(p_3, p_4\) and \(p_5\) should be replaced by \(4/\sqrt{31}\).
On each \(A_k\), we need to show how to generate a random pair (Q, R) with density proportional to g. Except for \(A_4\) and \(A_5\), this is quite straightforward, as we will see below.
The full algorithm:
The individual generators for g are as follows, where \(V_1\) and \(V_2\) denote independent uniform [0, 1] random variables:
7 Practical Considerations
These algorithms have been coded using the open source R language, see [7]. Figures 5 and 6 show the hitting locations of the unit ball in the plane for varying values of \(\alpha \) and \(\lambda \).
We compared the simple rejection algorithm R0 with the uniformly fast algorithms R1 and R2. The timings in Table 1 show that the performance of R0 deteriorates quickly as \(\lambda \) gets close to one. Furthermore, method R1 worsens with the dimension. We should point out that none of these methods is uniformly bounded in the dimension d. For one thing, any algorithm must take time at least linear in d, since it must produce d output coordinates.
The methods described above assume a starting point on the first axis. For a general starting point x, first rotate this point to the \(x_1\) axis, i.e., \(x \rightarrow x^* {\mathop {=}\limits ^{\text {def}}}(|x|,0,\ldots ,0)\). Then apply the algorithms given above with starting point \(x^*\) to produce an output \(Y^*\), and reverse the rotation to get the final Y. This rotation back to the original direction is accomplished by using d Givens rotations.
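Any orthogonal map taking x to \((|x|,0,\ldots ,0)\) works for this reduction. The paper uses d Givens rotations; as an alternative sketch, a single Householder reflection (which is its own inverse) achieves the same and is easy to code:

```python
import numpy as np

def householder_to_axis(x):
    # Orthogonal (reflection) matrix H with H @ x = (|x|, 0, ..., 0).
    # H is symmetric and its own inverse, so the same matrix maps the
    # sampled Y* back to the original frame; being orthogonal, it
    # preserves the rotation-invariant ingredients of the algorithms.
    x = np.asarray(x, dtype=float)
    n = len(x)
    e1 = np.zeros(n)
    e1[0] = np.linalg.norm(x)
    v = x - e1
    if np.linalg.norm(v) < 1e-14 * (1.0 + e1[0]):  # x already on the axis
        return np.eye(n)
    return np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)
```

Note that a reflection has determinant \(-1\) rather than \(+1\); since only orthogonality matters for mapping the distributions, this is harmless.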
8 The Work Ahead
While the algorithm above is uniformly fast over all \(\lambda > 1\), \(\alpha \in [0,2)\), it is not uniformly fast over all dimensions d. Thus an improvement in that respect is desirable.
It would be quite interesting to develop an algorithm that can efficiently generate the pair (X, T), where X is the location of entry in the unit ball and T is the time of entry. For the Brownian case (\(\alpha =2\)), the joint distribution is, e.g., given in [10].
References
Blumenthal RM, Getoor RK, Ray DB (1961) On the distribution of first hits for the symmetric stable processes. Trans Am Math Soc 99(3):540–554
Devroye L (1986) Non-uniform random variate generation. Springer Verlag, New York
Given JA, Hubbard JB, Douglas JF (1997) A first-passage algorithm for the hydrodynamic friction and diffusion-limited reaction rate of macromolecules. J Chem Phys 106(9):3761–3771
Kang EH, Mansfield ML, Douglas JF (2004) Numerical path integration technique for the calculation of transport properties of proteins. Phys Rev E 69:031918
Kyprianou AE (2018) Stable processes, self-similarity and the unit ball. ALEA Lat Am J Probab Math Stat 15:617–690
Nolan JP, Audus D, Douglas J (2023) Computation of \(\alpha \)-capacity \(C_\alpha \) of general sets in \({\mathbb{R}}^d\) using stable random walks. SIAM J Appl Math (To appear)
R Core Team (2023) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
Riesz M (1938) Intégrales de Riemann-Liouville et potentiels. Acta Sci Math Szeged 9:1–42
Spitzer F (1958) Some theorems concerning 2-dimensional Brownian motion. Trans Am Math Soc 87:187–197
Uchiyama K (2016) Density of space-time distribution of Brownian first hitting of a disc and a ball. Potential Anal 44:497–541
Widom H (1961) Stable processes and integral equations. Trans Am Math Soc 98(3):430–449
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
This article is part of Special Issue: International Conference on Statistical Distributions and Applications (ICOSDA 2022) guest edited by Narayanaswamy Balakrishnan, Indranil Ghosh, Hon Keung Ng, Kalimuthu Krishnamoorthy, and Helton Saulo.
Devroye, L., Nolan, J.P. Random Variate Generation for the First Hit of a Ball for the Symmetric Stable Process in \(\mathbb {R}^d\). J Stat Theory Pract 18, 11 (2024). https://doi.org/10.1007/s42519-023-00364-1
Keywords
- Random variate generation
- Simulation
- Monte Carlo method
- Expected time analysis
- Stable processes
- Hitting times