First Hitting Time of Brownian Motion on Simple Graph with Skew Semiaxes

Consider a stochastic process that lives on n semiaxes emanating from a common origin. On each semiaxis it behaves as a Brownian motion, and at the origin it chooses a semiaxis randomly. In this paper we study the first hitting time of the process. We derive the Laplace transform of the first hitting time, and provide explicit expressions for its density and distribution functions. Numerical examples are presented to illustrate the application of our results.


Introduction
Suppose we have a system of semiaxes emanating from a common origin (a simple graph) and a particle moving on this system. On each semiaxis, the particle behaves as a Brownian motion; and once at the origin, it instantaneously chooses a semiaxis for its next voyage randomly according to a given transition probability. We set an upper boundary on each semiaxis (see Fig. 1), and study the first time that the boundary is hit by the particle.
The study of the first hitting time of Brownian motion with a linear boundary goes back to Doob (1949). Other types of boundary have also been considered. The second-order boundary was studied by Salminen (1988) using the infinitesimal generator method. The square-root boundary was considered by Breiman (1967) via Doob's transform approach. Wang and Pötzelberger (1997) obtained the crossing probability for Brownian motion with a piecewise linear boundary using the Brownian bridge. Scheike (1992) derived an exact formula for the broken linear boundary. Peskir (2002) provided a general result for the continuous boundary using the Chapman-Kolmogorov formula, and gave the probability density function of the first hitting time in terms of a Volterra integral system.
For the first hitting time of Brownian motion with a two-sided boundary, the Laplace transform and density are well known, see Borodin and Salminen (1996) Section II.1.3. Escribá (1987) studied the crossing problem with two sloping line boundaries. Che and Dassios (2013) used a martingale method to derive the crossing probability for a two-sided boundary involving random jumps. Sacerdote et al. (2014) constructed a Volterra integral system for the probability density function of the first hitting time of Brownian motion with a general two-sided boundary.
In this paper, we are interested in Brownian motion moving on a simple graph. Constructions of Brownian motion on general metric graphs can be found in Georgakopoulos and Kolesko (2014), Kostrykin et al. (2012) and Fitzsimmons and Kuter (2015). This process contains the Walsh Brownian motion as a special case, which was first considered in the epilogue of Walsh (1978), and further studied by Rogers (1983), Baxter and Chacon (1982), Salisbury (1986) and Barlow et al. (1989). More recently, Karatzas and Yan (2019) introduced a class of planar processes called 'semimartingales on rays', which can be viewed as a generalization of the Walsh Brownian motion.
We will study the first hitting time of the Brownian motion on a simple graph. We derive the Laplace transform of the first hitting time, and provide explicit inversion methods for its density and distribution functions. In particular, our results reduce to those for the first hitting time of Walsh Brownian motion. Papanicolaou et al. (2012) and Yor (1997) Section 17.2.3 derived the Laplace transform of the first hitting time of Walsh Brownian motion when the entering probabilities of the semiaxes are uniformly distributed; Fitzsimmons and Kuter (2015) generalized this result to an arbitrary entering probability.

This paper is motivated by the real-time gross settlement system (RTGS, known as CHAPS in the UK; see McDonough 1997 and Padoa-Schioppa 2005). The participating banks in an RTGS system are concerned about liquidity risk and wish to prevent considerable liquidity exposure between two banks. There is evidence suggesting that in CHAPS, banks usually set bilateral or multilateral limits on their exposed positions with others; this mechanism was studied by Che and Dassios (2013) using a Markov model. For a single bank, say bank A, let a reflected Brownian motion model the net balance between bank A and bank i, and let u_i be the bilateral limit set up by bank A for bank i; Che and Dassios (2013) calculated the probability that the limit is exceeded in finite time. We generalize this model by considering an individual bank A and n counterparties. Assume that bank A uses an internal queue to manage its outgoing payments, and once the current payment to bank i is settled, it makes another payment to bank j with probability P_ij, where i, j ∈ {1, …, n}. Let a reflected Brownian motion model the net balance between bank A and bank i.
To avoid considerable exposure to liquidity risk, a limit b_i is set for the net balance between bank A and bank i (this limit might be set either by the participating banks or by the central bank, see Padoa-Schioppa 2005), and the banks are interested in the first time that such a limit is exceeded. In practice, an individual bank could set multiple limits, or even remove the limit for certain types of counterparties. This problem reduces to the calculation of the first hitting time of Brownian motion on a simple graph. For more details about RTGS systems, see Che (2011) and Soramäki et al. (2007). Applications of the current paper also include communication in a network, see Deng and Li (2009).
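The banking model above can be explored with a small Monte Carlo sketch: the radial part of the process is a discretized reflected Brownian motion (the net balance with the current counterparty), and each return to the origin draws the next semiaxis from the row of the transition matrix. All function names, parameter values, and the discretization scheme here are illustrative, not from the paper.

```python
import numpy as np

def first_hitting_time_mc(P, b, n_paths=4000, dt=1e-3, t_max=2.0, seed=0):
    """Monte Carlo sketch of the first time a limit b[i] is exceeded on
    semiaxis i.  The radial part is a reflected Brownian motion; each
    (discrete) visit to the origin resamples the semiaxis from the row
    of the transition matrix P indexed by the previous semiaxis."""
    rng = np.random.default_rng(seed)
    P, b = np.asarray(P, float), np.asarray(b, float)
    x = np.zeros(n_paths)                 # radial part ||X(t)||
    ray = np.zeros(n_paths, dtype=int)    # index of the current semiaxis
    hit = np.full(n_paths, np.inf)        # first time a limit is exceeded
    alive = np.ones(n_paths, dtype=bool)
    t, sqdt = 0.0, np.sqrt(dt)
    while t < t_max and alive.any():
        t += dt
        x[alive] += sqdt * rng.standard_normal(alive.sum())
        refl = alive & (x < 0.0)
        x[refl] = -x[refl]                # reflect at the origin ...
        if refl.any():                    # ... and resample the semiaxis
            u = rng.random(refl.sum())
            ray[refl] = (np.cumsum(P[ray[refl]], axis=1) > u[:, None]).argmax(axis=1)
        crossed = alive & (x >= b[ray])
        hit[crossed] = t
        alive &= ~crossed
    return hit
```

With equal limits b = (1, 1), the hitting time has the same law as that of a reflected Brownian motion at level 1, whatever P is, which gives a convenient sanity check against the classical one-boundary results discussed later.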
We construct the Brownian motion on the simple graph in Sect. 2, then calculate the Laplace transform of the first hitting time and present some special cases of the result in Sect. 3. Two inverse methods for the Laplace transform are provided in Sect. 4. Numerical examples are presented in Sect. 5. Section 6 proposes a continuous extension of our results. The proofs of the main results are attached in Appendix 1.

Construction of the Underlying Process and the First Hitting Time
In this section, we construct the Brownian motion on a simple graph and define the first hitting time we are interested in. Let n be a finite positive integer. We denote by S a simple graph consisting of n semiaxes emanating from a common origin, i.e., S := {S_1, …, S_n}, and fix a transition probability matrix ℙ := (P_ij)_{n×n}, so that ∑_{j=1}^n P_ij = 1, for i = 1, …, n. We require the Markov chain induced by ℙ to have only one closed communicating class, so that there exists a unique stationary distribution (see Norris 1998).
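The unique stationary distribution required here can be computed directly. A minimal sketch (the function name is ours): solve πℙ = π together with the normalization ∑π_i = 1 as a consistent overdetermined linear system.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary row vector pi with pi @ P = pi and sum(pi) = 1,
    obtained by least squares on the (consistent) overdetermined
    system.  Unlike plain power iteration, this also works when the
    chain is periodic."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # pi P = pi  <=>  (P^T - I) pi^T = 0, plus the normalisation row
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pi
```

For example, for P = [[0.3, 0.7], [0.6, 0.4]] the balance equation 0.7 π_1 = 0.6 π_2 gives π = (6/13, 7/13).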
Consider a planar process X(t) on the simple graph S. We represent the position of X(t) by (||X(t)||, Θ(t)), where ||X(t)|| denotes the distance between X(t) and the origin, and Θ(t) ∈ {S_1, …, S_n} indicates the current semiaxis of the process. Let ||X(t)|| have the same distribution as a reflected Brownian motion. We require Θ(t) to be constant during each excursion of X(t) away from the origin, and to be resampled according to ℙ each time X(t) returns to the origin. Then, X(t) behaves as a Brownian motion on each semiaxis, as long as it does not hit the origin; once at the origin, it instantaneously chooses a new semiaxis according to ℙ.
Next, we define the first hitting time of X(t). On each semiaxis S_i, there is a unique upper boundary b_i > 0, and our target is to study the first hitting time

τ := inf{t ≥ 0 : ||X(t)|| = b_i and Θ(t) = S_i, for some i ∈ {1, …, n}}.

We need to calculate the excursion lengths of X(t), but the problem is that there is no first excursion from zero: before any t > 0, the process has already made an infinite number of small excursions away from the origin. To approximate the dynamics of a Brownian motion, Dassios and Wu (2010) introduced the "perturbed Brownian motion", and we extend this idea here. For every 0 < ε < min_{i=1,…,n} b_i, we define a perturbed process X^ε(t) = (||X^ε(t)||, Θ^ε(t)) on the simple graph S, where ||X^ε(t)|| has the same distribution as a reflected Brownian motion starting from ε, as long as X^ε(t) does not hit the origin. Once at the origin, X^ε(t) not only chooses a new semiaxis according to ℙ, but also jumps to ε on the new semiaxis. We define the first hitting time τ^ε similarly as before; then τ^ε → τ in a pathwise sense, and hence in distribution. We will therefore first study the behaviour of X^ε(t), and then take the limit ε → 0 to calculate the Laplace transform of the first hitting time τ.
When X(t) starts from a point other than the origin, we denote by X(0) = (b*_p, S_p) the initial state of X(t), where p ∈ {1, …, n} is arbitrary but fixed, and 0 < b*_p < b_p (see Fig. 1). We also define the first hitting time of X(t) at the origin as τ_0 := inf{t ≥ 0 : ||X(t)|| = 0}. Then, X(t) behaves as a Brownian motion starting from b*_p during [0, τ_0), i.e., before hitting the origin. Once at the origin, X(t) chooses a new semiaxis according to ℙ, and behaves as a Brownian motion on a simple graph starting from the origin.
We write 𝔼_{b*_p}(·) for the expectation with respect to the probability measure under which X(t) starts from (b*_p, S_p), and in particular 𝔼_0(·) when X(t) starts from the origin.

Laplace Transform of τ
We derive the Laplace transform of the first hitting time τ in this section.

Proof (of Corollary 1) By (25), the perturbed process X^ε(t) will only hit the upper boundary b_i. We then follow the rest of the proof of Theorem 1 to derive the Laplace transform, and take the limit ε → 0 to calculate the probability. ◻

As in Sect. 2, X(t) reduces to several special cases when the parameters are chosen accordingly, so we can compare Corollary 1 to results in the existing literature.

Example 1 (reflected Brownian motion)
When n = 1 and P_11 = 1, X(t) reduces to a reflected Brownian motion. The stationary distribution of ℙ = (P_11)_{1×1} is π_1 = 1. Let the upper boundary be b_1 > 0; then the first hitting time τ has the Laplace transform

𝔼(e^{−λτ}) = 1 / cosh(b_1 √(2λ)).

This is the Laplace transform of the first hitting time of a reflected Brownian motion at b_1, see Borodin and Salminen (1996) Section II.3.2.
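The reflected-Brownian-motion transform 1/cosh(b_1√(2λ)) (with λ as our name for the transform variable) can be cross-checked against the alternating exponential expansion used later in Example 4; a minimal sketch:

```python
import math

def lt_reflected(lam, b1):
    """E[exp(-lam * tau)] = 1 / cosh(b1 * sqrt(2 lam)): the classical
    Laplace transform of the first hitting time of level b1 by a
    reflected Brownian motion started at the origin."""
    return 1.0 / math.cosh(b1 * math.sqrt(2.0 * lam))

def lt_reflected_series(lam, b1, n_terms=50):
    """Expansion of the same transform in powers of exp(-sqrt(2 lam)):
    1/cosh(z) = 2 * sum_{k>=1} (-1)^(k+1) * exp(-(2k-1) z)."""
    z = b1 * math.sqrt(2.0 * lam)
    return 2.0 * sum((-1) ** (k + 1) * math.exp(-(2 * k - 1) * z)
                     for k in range(1, n_terms + 1))
```

The two evaluations agree to machine precision, since the expansion is a convergent geometric series for any λ > 0.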
Example 2 (skew Brownian motion) When n = 2, P_11 = P_21 = α and P_12 = P_22 = 1 − α, for α ∈ (0, 1), X(t) reduces to a skew Brownian motion (see Lejay 2006). The stationary distribution of ℙ = (P_ij)_{2×2} is (α, 1 − α). Let the upper boundaries be b_1 > 0 and b_2 > 0; then the first hitting time τ has the Laplace transform

𝔼(e^{−λτ}) = [α / sinh(b_1 √(2λ)) + (1 − α) / sinh(b_2 √(2λ))] / [α coth(b_1 √(2λ)) + (1 − α) coth(b_2 √(2λ))].

Moreover, when α = 1/2, X(t) becomes a standard Brownian motion. In this case, the stationary distribution of ℙ is (1/2, 1/2), and the first hitting time τ has the Laplace transform

𝔼(e^{−λτ}) = cosh(((b_1 − b_2)/2) √(2λ)) / cosh(((b_1 + b_2)/2) √(2λ)).

This is the Laplace transform of the first exit time of a standard Brownian motion from the two-sided barrier (−b_2, b_1), see Borodin and Salminen (1996) Section II.1.3.

Example 3 (Walsh Brownian motion)
When n is a finite positive integer and P_1j = P_2j = … = P_nj =: P_j, for j = 1, …, n, X(t) becomes a Walsh Brownian motion. The stationary distribution of ℙ = (P_ij)_{n×n} is (P_1, …, P_n). Let the upper boundaries be b_1 > 0, …, b_n > 0; then the first hitting time τ has the Laplace transform

𝔼(e^{−λτ}) = [∑_{j=1}^n P_j / sinh(b_j √(2λ))] / [∑_{j=1}^n P_j coth(b_j √(2λ))].

This is the Laplace transform of the first hitting time of a Walsh Brownian motion, see Fitzsimmons and Kuter (2015). In particular, when P_1 = ⋯ = P_n = 1/n, we recover the main result of Papanicolaou et al. (2012) and Yor (1997) Section 17.2.3.
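Examples 1–3 share one ratio structure. As a sketch (our reconstruction from the special cases: λ and u = √(2λ) are our symbols, and the ratio form below is assumed rather than quoted), the general transform can be coded once and checked against the reflected and two-sided cases:

```python
import math

def lt_first_hitting(lam, pi, b):
    """Ratio form of E[exp(-lam * tau)] for the process started at the
    origin, with stationary distribution pi and upper boundaries b:
        (sum_k pi_k / sinh(b_k u)) / (sum_k pi_k coth(b_k u)),
    where u = sqrt(2 lam).  For n = 1 this collapses to
    1 / cosh(b_1 u), the reflected Brownian motion result."""
    u = math.sqrt(2.0 * lam)
    num = sum(p / math.sinh(bk * u) for p, bk in zip(pi, b))
    den = sum(p / math.tanh(bk * u) for p, bk in zip(pi, b))
    return num / den
```

With pi = (1/2, 1/2) and b = (b_1, b_2), hyperbolic sum-to-product identities reduce the ratio to cosh(((b_1 − b_2)/2)u) / cosh(((b_1 + b_2)/2)u), the standard two-sided exit formula.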

Inverse Laplace Transform
In this section we provide two methods to invert the Laplace transform (2). For simplicity, we denote by g(x, t) and Ψ(x, t) the density and distribution functions of an inverse Gaussian random variable with parameter x:

g(x, t) = x / √(2πt³) · exp(−x² / (2t)),    Ψ(x, t) = 2Φ(−x / √t),

where Φ(⋅) is the standard normal distribution function. We first present an auxiliary result concerning the poles of the Laplace transform (2); this result will enable us to apply the inversion methods.
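The pair g and Ψ can be implemented with the standard library alone, using the identity 2Φ(−a) = erfc(a/√2); a minimal sketch (function names ours):

```python
import math

def g(x, t):
    """Inverse Gaussian density with parameter x: the first passage
    density of a standard Brownian motion to level x > 0,
    g(x, t) = x / sqrt(2 pi t^3) * exp(-x^2 / (2 t))."""
    return x / math.sqrt(2.0 * math.pi * t ** 3) * math.exp(-x * x / (2.0 * t))

def Psi(x, t):
    """Matching distribution function Psi(x, t) = 2 * Phi(-x / sqrt(t)),
    evaluated via the complementary error function."""
    return math.erfc(x / math.sqrt(2.0 * t))
```

Integrating g(x, ·) over (0, t) numerically recovers Ψ(x, t), which is a quick consistency check on both formulas.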
Lemma 1 Denote by −λ* the poles of the Laplace transform (2); then the −λ* are negative real numbers. Moreover, when n > 1 and the upper boundaries {b_i}_{i=1,…,n} are rational numbers, we can find the poles by solving a polynomial equation with respect to y, and looking for λ* > 0 which satisfies y = tan(√(2λ*)). From now on, we denote by −λ* the poles of (2). We sort all the poles in descending order as −λ*_1 > −λ*_2 > …, and denote the set of all poles by {−λ*_i}_{i∈ℕ}. We also adopt the convention that the expressions ∑_{−λ*} f(−λ*) and ∑_{i∈ℕ} f(−λ*_i) denote summation over all the poles in descending order.
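Lemma 1 can also be exploited purely numerically. Writing v = √(2λ*), the denominator of the (reconstructed) ratio transform vanishes when ∑_k π_k cot(b_k v) = 0; cancelled roots show up as singularities of this function rather than zeros, so plain sign-change bisection finds exactly the genuine poles. A sketch (the helper names, grid sizes, and thresholds are our choices):

```python
import math

def pole_locations(pi, b, v_max=10.0, grid=200001):
    """Locate lam* > 0 such that -lam* is a pole, by scanning
    h(v) = sum_k pi_k * cot(b_k * v) for sign changes on a fine grid
    and refining each bracket by bisection.  Points too close to a
    singularity of cot are masked out first."""
    def h(v):
        return sum(p / math.tan(bk * v) for p, bk in zip(pi, b))
    roots = []
    prev_v = prev_h = None
    for i in range(1, grid):
        v = v_max * i / (grid - 1)
        hv = h(v)
        if abs(hv) > 1e3:            # too close to a singularity of cot
            prev_v = prev_h = None
            continue
        if prev_h is not None and hv * prev_h < 0.0:
            lo, hi = prev_v, v       # refine by bisection
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if h(mid) * h(lo) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            root = 0.5 * (lo + hi)
            if abs(h(root)) < 1e-6:  # reject jumps across a singularity
                roots.append(root)
        prev_v, prev_h = v, hv
    return [0.5 * v * v for v in roots]   # lam* = v^2 / 2
```

For π = (1/2, 1/2) and b = (1, 2), the first two poles found this way are π²/18 and 2π²/9, matching the tan(v) = ±√3 condition of Example 5 below.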
Next, we apply the convolution method to invert the Laplace transform (2).
Theorem 2 Assume that the b_k are rational numbers; then there exist positive integers c_k such that the density of the first hitting time τ, and likewise its distribution function, can be written as convolutions involving the inverse Gaussian density g and the Dirac delta function, where * denotes convolution and δ(t) the Dirac delta function. In practice, the difficulty of evaluating these convolutions restricts the usefulness of Theorem 2, so we provide a more explicit way to invert the Laplace transform (2).
Theorem 3 Assume that the upper boundaries {b_i}_{i=1,…,n} are rational numbers; then the density of the first hitting time τ is

f(t) = ∑_{i∈ℕ} Res(e^{λt} 𝔼(e^{−λτ}), −λ*_i),

and the distribution of τ is obtained by integrating this series term by term over (0, t).

Remark 2 Since the poles −λ* are negative real numbers, the series (7) and (8) converge fast when t is large, but slowly when t is small. Inspired by the general theta-function transformation (see Bellman 1961 Section 19), we provide alternative expressions for the density and distribution functions of τ that converge fast for small t.
When the Laplace transform admits a series expansion in powers of e^{−√(2λ)}, we can invert (9) term by term to derive the density of τ, and then integrate the density over (0, t) to obtain the distribution of τ. We provide some examples to illustrate the use of Theorem 3 and Remark 2.

Example 4 (reflected Brownian motion) Consider the Laplace transform (3). To find the poles of the Laplace transform, we need to solve the equation coth(b_1 √(2λ)) = 0. Setting λ = −λ*, the condition coth(b_1 √(−2λ*)) = 0 reduces to cos(b_1 √(2λ*)) = 0, i.e., b_1 √(2λ*) = (2k − 1)π/2, k ∈ ℤ+. Therefore, the Laplace transform (3) has the poles −λ*_k = −(2k − 1)²π² / (8 b_1²). Using Theorem 3, we calculate the density and distribution functions of τ; these expressions converge fast when t is large, but slowly when t is small.

On the other hand, denoting x := e^{−b_1 √(2λ)}, the negative binomial expansion implies 𝔼(e^{−λτ}) = 2 ∑_{k≥1} (−1)^{k+1} x^{2k−1}; since e^{−(2k−1) b_1 √(2λ)} is the Laplace transform of an inverse Gaussian random variable with parameter (2k − 1) b_1, Remark 2 gives the corresponding small-t expressions. These converge fast for small t, but slowly for large t.
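The two regimes can be checked against each other numerically. A sketch for the reflected case, under the assumption that the poles are λ*_k = (2k−1)²π²/(8b_1²) (from cos(b_1√(2λ*)) = 0) and that the residue series takes the alternating-exponential form coded below (our derivation of the coefficients):

```python
import math

def density_large_t(t, b1=1.0, n_terms=50):
    """Residue series for the first hitting density (fast for large t):
    f(t) = pi/(2 b1^2) * sum_k (-1)^(k+1) (2k-1) exp(-lam*_k t),
    with lam*_k = (2k-1)^2 pi^2 / (8 b1^2)."""
    s = sum((-1) ** (k + 1) * (2 * k - 1)
            * math.exp(-((2 * k - 1) ** 2) * math.pi ** 2 * t / (8.0 * b1 ** 2))
            for k in range(1, n_terms + 1))
    return math.pi / (2.0 * b1 ** 2) * s

def density_small_t(t, b1=1.0, n_terms=50):
    """Theta-transformed series (fast for small t):
    f(t) = 2 * sum_k (-1)^(k+1) g((2k-1) b1, t), where g is the
    inverse Gaussian density g(x,t) = x/sqrt(2 pi t^3) e^{-x^2/(2t)}."""
    def g(x):
        return x / math.sqrt(2.0 * math.pi * t ** 3) * math.exp(-x * x / (2.0 * t))
    return 2.0 * sum((-1) ** (k + 1) * g((2 * k - 1) * b1)
                     for k in range(1, n_terms + 1))
```

Both series represent the same density exactly, so with enough terms they agree at any fixed t; which one converges faster depends on whether t is large or small, as the text describes.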
Example 5 (standard Brownian motion) Let b_1 = 1, b_2 = 2 in the Laplace transform (5). Using Lemma 1, we can derive the poles of the Laplace transform by solving y² − 3 = 0 together with y = tan(√(2λ*)). Thus, the poles are −λ*_k = −v_k²/2, where v_k runs through the positive solutions of tan(v) = ±√3, i.e., v ∈ {π/3, 2π/3, 4π/3, 5π/3, …}. Using Theorem 3, we calculate the density and distribution functions of τ; these expressions converge fast when t is large, but slowly when t is small. On the other hand, denoting x := e^{−√(2λ)}, the negative binomial expansion yields a power series in x; for every k ∈ ℤ+, we invert x^{3k−1} and x^{3k−2} using the inverse Gaussian density, and then integrate for the distribution function. These expressions converge fast for small t, but slowly for large t.
Example 6
Using Theorem 3, we calculate the density and distribution functions of τ; these expressions converge fast when t is large, but slowly when t is small. On the other hand, denoting x := e^{−√(2λ)}, the series expansion can be inverted term by term to derive the density function, and integrating the density gives the distribution function; these expressions converge fast for small t, but slowly for large t.


Example 7 (Walsh Brownian motion)
Consider the Laplace transform (6); the stationary distribution of ℙ = (P_ij)_{3×3} is (1/3, 1/3, 1/3). Using Lemma 1, we can derive the poles of the Laplace transform by solving y⁴ − 12y² + 11 = 0 together with y = tan(√(2λ*)). Thus, tan(√(2λ*)) = ±1 or tan(√(2λ*)) = ±√11, and the poles follow. Using Theorem 3, we calculate the density and distribution functions of τ; these expressions converge fast when t is large, but slowly when t is small. On the other hand, denoting x := e^{−√(2λ)}, the series expansion can be inverted term by term to derive the density function, and integrating the density gives the distribution function; these expressions converge fast for small t, but slowly for large t.
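The quartic condition can be verified directly. The check below assumes boundaries b = (1, 2, 3) with uniform weights; this is our inference, chosen because exactly these boundaries reproduce the quoted polynomial y⁴ − 12y² + 11 after clearing denominators in ∑_k cot(k v) = 0.

```python
import math

def denominator(v):
    """Reconstructed pole condition at lam = -v^2/2: the sum
    cot(v) + cot(2v) + cot(3v), for assumed boundaries b = (1, 2, 3)
    and uniform weights (the common factor 1/3 is dropped)."""
    return sum(1.0 / math.tan(k * v) for k in (1, 2, 3))

def max_residual():
    """Check that tan(v) = +-1 and tan(v) = +-sqrt(11), i.e. the roots
    of y^4 - 12 y^2 + 11 = 0, annihilate the denominator."""
    residuals = []
    for y in (1.0, -1.0, math.sqrt(11.0), -math.sqrt(11.0)):
        v = math.atan(y) % math.pi    # a representative v with tan(v) = y
        residuals.append(abs(denominator(v)))
    return max(residuals)
```

For instance y = 1 gives v = π/4, where cot(π/4) + cot(π/2) + cot(3π/4) = 1 + 0 − 1 = 0.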

Numerical Implementation
In this section, we present numerical illustrations for Examples 5, 6 and 7. We plot the density and distribution functions in each example, and study the accuracy of these functions. For Example 5, we first consider the density function when t is large. Since (10) converges fast for large t, we truncate it at a fixed level n and define the truncated function f_n(t) in (22). We plot f_2(t), f_4(t) and f_6(t) in Fig. 2a. To demonstrate the accuracy of the truncated functions, we also invert the Laplace transform 𝔼(e^{−λτ}) numerically using the Gaver-Stehfest method (see Cohen 2007), and use the resulting curve f(t) as the benchmark in Fig. 2a.
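The Gaver-Stehfest benchmark can be sketched in a few lines; this is a standard implementation of the method discussed in Cohen (2007), and the function name and the choice N = 14 are ours:

```python
import math

def gaver_stehfest(F, t, N=14):
    """Gaver-Stehfest inversion of a Laplace transform F at time t > 0.
    N must be even; the method only evaluates F at the real points
    k * ln(2) / t, which makes it convenient as a benchmark."""
    M = N // 2
    ln2 = math.log(2.0)
    result = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (classical combinatorial formula)
        V = sum(
            j ** M * math.factorial(2 * j)
            / (math.factorial(M - j) * math.factorial(j)
               * math.factorial(j - 1) * math.factorial(k - j)
               * math.factorial(2 * j - k))
            for j in range((k + 1) // 2, min(k, M) + 1)
        )
        result += (-1) ** (M + k) * V * F(k * ln2 / t)
    return ln2 / t * result
```

For instance, inverting F(s) = 1/(s + 1) at t = 1 returns a value close to e^{−1}; accuracy degrades for larger N only through double-precision roundoff in the weights.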
We see from Fig. 2a that, when t is small, f 2 (t) , f 4 (t) and f 6 (t) are not accurate because they are far from the benchmark. As t increases, f 6 (t) converges to f (t) earlier than f 4 (t) and f 2 (t) . When t is large enough, all the curves converge to f (t).
The difference between f_n(t) and f(t) is recorded in Table 1. We denote by d_n := |f(t) − f_n(t)| the truncation error of f_n(t), for n = 2, 4, 6. We also set the error tolerance level to 0.0001: if d_n < 0.0001, we say f_n(t) is sufficiently accurate; otherwise, it is not. From Table 1, we see that d_6 < 0.0001 for t ≥ 0.054, so f_6(t) is a sufficiently accurate approximation of the density function of τ when t ≥ 0.054.
For the distribution function (11), we define the truncated function F_n(t) and plot F_2(t), F_4(t) and F_6(t) in Fig. 2b. We also invert the Laplace transform 𝔼(e^{−λτ}) numerically, and use the resulting curve F(t) as the benchmark in Fig. 2b. We see from Fig. 2b that, when t is small, the truncated functions are not parallel to F(t); as t increases, they become parallel to the benchmark. Since the gradient of the distribution curve is the density, when the distribution curve is parallel to the benchmark, we know the approximation is relatively accurate.

Fig. 2 Density and distribution functions in Example 5

Next, we consider the density function when t is small. Since (12) converges fast for small t, we truncate it at a fixed level n and define the truncated function f_n(t) in (23). We plot f_2(t), f_4(t) and f_6(t) in Fig. 2c, using the same benchmark as before, i.e., f(t) obtained by inverting the Laplace transform 𝔼(e^{−λτ}) numerically.
We see from Fig. 2c that, when t is small, f 2 (t) , f 4 (t) and f 6 (t) are accurate. As t increases, f 2 (t) diverges from the benchmark earlier than f 4 (t) and f 6 (t) . When t is large enough, all the curves diverge from the benchmark.
The difference between f n (t) and f (t) is recorded in Table 1. We denote by e n ∶= |f (t) − f n (t)| the truncation error of f n (t) , for n = 2, 4, 6 . From Table 1 we know, with the error tolerance level 0.0001, f 6 (t) is sufficiently accurate when t ≤ 26.945.
For the distribution function (13), we define the truncated function and plot F 2 (t) , F 4 (t) and F 6 (t) in Fig. 2d. We also invert the Laplace transform (e − ) numerically, and use the resulting curve F (t) as the benchmark in Fig. 2d. We see from the figure that, when t is small, the truncated functions are accurate. As t increases, the curves diverge from the benchmark. Hence the approximation is relatively accurate for small t.
In conclusion, with the truncation level n = 6 and the error tolerance level 0.0001, the truncated density function (22) is sufficiently accurate for t ≥ 0.054 ; while the truncated density function (23) is sufficiently accurate for t ≤ 26.945.
A similar analysis is conducted for Example 6, with the results recorded in Fig. 3 and Table 2. In conclusion, with the truncation level n = 6 and the error tolerance level 0.0001, the truncated function of (14) is sufficiently accurate for t ≥ 0.055 ; while the truncated function of (16) is sufficiently accurate for t ≤ 3.181.
For Example 7, the numerical results are recorded in Fig. 4 and Table 3. In conclusion, with the truncation level n = 6 and the error tolerance level 0.0001, the truncated function of (18) is sufficiently accurate for t ≥ 0.261 ; while the truncated function of (20) is sufficiently accurate for t ≤ 2.995.

Continuous Extension
In this section, we extend our results to a graph with countably infinitely many semiaxes. Assume that there are n semiaxes, that the stationary distribution of the transition probability matrix is uniform, i.e., π_k = 1/n for k = 1, …, n, and let the upper boundaries be b_k = (k/n)(a_2 − a_1) + a_1, for 0 < a_1 < a_2 and k = 1, …, n. Then, Corollary 1 yields the Laplace transform of the first hitting time τ, and taking n → ∞ gives the limiting Laplace transform (24), in which the sums over the semiaxes become integrals over the boundary range [a_1, a_2]. Since the poles of the Laplace transform come from both the numerator and the denominator, we derive these poles by solving the corresponding equations. Denoting by −λ* the poles of (24), we use the residue theorem to calculate the density of the first hitting time as the sum of the residues Res(e^{λt} 𝔼(e^{−λτ}), −λ*). On the other hand, writing x := e^{−√(2λ)}, (24) can be expanded as a series in x and inverted term by term.

Proof of Theorem 1

If X^ε(t) reaches the boundary b_i before returning to the origin, then τ^ε is the hitting time of b_i; otherwise, X^ε(t) returns to the origin without hitting b_i, then jumps to (ε, S_j) with probability P_ij and restarts from semiaxis S_j. Using the strong Markov property, we obtain the system of equations (25), for i = 1, …, n. There are n equations in this system, hence it is possible to solve for 𝔼(e^{−λτ^ε} 1_{Θ^ε(0)=S_i}). However, since we are only interested in the first hitting time τ, there is no need to solve the whole system. Instead, we apply the series expansion with respect to ε on both sides of (25), where K denotes the constant term and A_ij(ε) the j-th coefficient of the expansion; c_ij and l_ij denote the j-th coefficients of the series expansions of the other terms. Then (25) can be expanded in powers of ε. Note that every term in this expansion contains ε, and terms with the same power of ε must coincide. For the terms containing ε¹, since ε > 0, we may cancel ε on both sides; this gives the system of Eq. (26), which we write in matrix form (27), where ℙ = (P_ij)_{n×n} is the transition probability matrix.
We denote by Π := (π_1, …, π_n) the stationary distribution of ℙ, so that ∑_{i=1}^n π_i = 1 and Πℙ = Π, and multiply both sides of (27) by Π, which gives (28), where we already know c_i1. But K is the constant term of the series expansion of 𝔼(e^{−λτ^ε} 1_{Θ^ε(0)=S_i}). Equation (28) then implies that, as ε → 0, the Laplace transform does not depend on the initial semiaxis S_i. In fact, as ε → 0, the initial position of X^ε(t) converges to the origin from semiaxis S_i. On the other hand, we know that a Brownian motion makes an infinite number of very small excursions around the origin before any t > 0. Hence, when the transition probability matrix ℙ is aperiodic, the limiting process lim_{ε→0} X^ε(t) will eventually choose the semiaxis according to the limiting distribution of ℙ, which is identical to its stationary distribution Π (see Privault 2018). For a periodic ℙ, after an infinite number of transitions, the limiting process has a uniform probability of being in any semiaxis belonging to the closed communicating class of ℙ; hence the choice of semiaxis is again given by the stationary distribution Π. The same argument also applies to the original process X(t).
Meanwhile, the limiting process lim_{ε→0} X^ε(t) behaves as a Brownian motion starting from 0 on each semiaxis, which is identical to X(t). Together with the argument above, we conclude that lim_{ε→0} X^ε(t) → X(t) in a pathwise sense, which gives (29). It then follows from (28) and (29) that the Laplace transform of τ has the stated form. When b*_p ≠ 0, the initial state of X(t) is X(0) = (b*_p, S_p), for some p = 1, …, n. Then X(t) either hits the upper boundary b_p before returning to the origin, or arrives at the origin without hitting b_p. Using a similar argument, we obtain the stated expression in this case as well. The expectation of τ can be calculated from the moment generating function, and the theorem is proved. ◻

Proof of Lemma 1
Proof The poles of (2) coincide with the roots of its denominator. We note that the Laplace transform (2) has the limit lim_{λ→0} 𝔼(e^{−λτ}) = 1, hence λ = 0 is not a pole of (2).

Proof of Theorems 2 and 3

We invert the first factors term by term, where δ(t) denotes the Dirac delta function and g(x, t) represents the inverse Gaussian density with parameter x. Then the inverse of ∏_{k=1}^n (1 − z^{m_k}) can be written as the convolution of these results.
For the third Laplace transform, since the m_k are integers, the expression can be interpreted as a polynomial in the variable z. We notice that all the poles of the Laplace transform come from the roots of this polynomial; on the other hand, these poles have been derived in Lemma 1. Hence we can rewrite the polynomial in factored form as L(z), and invert 1/L(z) term by term. Finally, the density of τ can be written as the convolution of the results above. For the distribution function of τ, we need to invert 𝔼(e^{−λτ})/λ, which is equivalent to replacing the term 1/√(2λ) by 1/√(2λ³) in (35); we only need to change the resulting function accordingly, and the theorem is proved. ◻