1 Introduction

To determine the distribution of the time that a stochastic process spends in a set (up to a given time) is a classical, much studied and well understood problem. For diffusions, the Feynman–Kac formula is the basic tool to attack the problem. If the generator of the diffusion has smooth coefficients, this approach calls for solving a parabolic differential equation with a boundary condition. See, e.g., for Brownian motion Durrett [7, Sect. 4], and for diffusions Karatzas and Shreve [15, Sect. 5.7]. We refer also to Borodin and Salminen [6] for explicit examples of various one-dimensional diffusions.

Undoubtedly, the most famous occupation time distribution is the arcsine law, which dates back to Lévy [21]. According to this, for a standard Brownian motion W, letting \(A^W_1\) denote the time W is positive up to time 1, it holds that

$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}_0\left( A_1^W \le x\right) = \frac{2}{\pi } \arcsin (\sqrt{x}), \quad x \in [0,1], \end{aligned}$$
(1)

where \({{\,\mathrm{{\mathbf {P}}}\,}}_0\) is the probability measure associated with W (when initiated at 0). For proofs of (1) based on the Feynman–Kac formula, see [15, p. 273] and Mörters and Peres [23, p. 214]. In the latter one, instead of inverting the double Laplace transform, the moments of the distribution are calculated, and that approach is, in a sense, closer to the one discussed in this paper.
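As a quick numerical illustration (ours, not part of the original argument), (1) can be probed by Monte Carlo simulation of discretized Brownian paths; the grid sizes, the seed and the evaluation point \(x=1/4\) below are arbitrary choices.

```python
import numpy as np

# Monte Carlo illustration of the arcsine law (1): simulate discretized
# Brownian paths on [0, 1], record the fraction of time spent in [0, infinity),
# and compare the empirical CDF at x with (2/pi) * arcsin(sqrt(x)).
rng = np.random.default_rng(1)
n_paths, n_steps = 10_000, 500
dt = 1.0 / n_steps
paths = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
occupation = (paths >= 0).mean(axis=1)            # approximates A_1^W per path

x = 0.25
empirical = (occupation <= x).mean()
theoretical = 2 / np.pi * np.arcsin(np.sqrt(x))   # exactly 1/3 at x = 1/4
```

The empirical value agrees with the theoretical one up to Monte Carlo noise and the discretization bias of order \(n^{-1/2}\).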

We are interested in the occupation times for a regular one-dimensional diffusion. More precisely, let \((X_t)_{t\ge 0}\) be such a diffusion and introduce, for \(t\ge 0\),

$$\begin{aligned} A_t^X := {{\,\mathrm{Leb}\,}}\{s\in [0,t]:X_s\ge 0\}, \end{aligned}$$

where \({{\,\mathrm{Leb}\,}}\) denotes the Lebesgue measure. Recall that in [28] Truman and Williams derived, applying the Feynman–Kac method, a formula for the moment generating function of the occupation time up to an independent exponential time for a fairly general (positively recurrent) diffusion. Watanabe [31] extended the result to general regular (gap) diffusions by exploiting random time change techniques. Earlier, however, Barlow, Pitman and Yor [4] had derived the formula in the case of a skew two-sided Bessel process using excursion theory. This result was connected in [31] with a distribution found by Lamperti in [18]. In [24], Pitman and Yor proved, and also extended, the general formula presented in [31] using excursion theory (but they also discuss an approach via the Feynman–Kac method). We refer also to Watanabe, Yano and Yano [32], where the inversion of the Laplace transform is discussed, and to Kasahara and Yano [16] for results on the asymptotic behavior of the density at 0.

Our main contributions are, firstly, a new expression for the moment generating function of the occupation time up to an exponential time, as well as a recursive equation for the Laplace transforms of the moments of the occupation time. A novel feature of our analysis is, perhaps, that it is based explicitly on Kac’s moment formula rather than on the Feynman–Kac formula. Although both formulas have been known for decades, we have not been able to find precisely these results in the literature. The general formula in [31] can be obtained from our formula via some straightforward calculations. Secondly, the recursive formula for the moments is solved for skew two-sided Bessel processes. Somewhat surprisingly, the moments turn out to be polynomials in the dimension and skewness parameters. Although the density of the occupation time is known in this case, it does not seem possible to derive the general formula for the moments via integration; numerical integration can, of course, be performed to check the formula for given values of the parameters. Skew Brownian motion is the special case of a skew two-sided Bessel process obtained when the dimension parameter is \(-1/2\), and in this case the formula for the moments is simpler.

The paper is organized as follows. In the next section, some basic ingredients from the theory of diffusions are recalled. We also introduce the diffusions for which the occupation times are studied later in the paper. In Sect. 3, we discuss Kac’s moment formula for integral functionals. Although this formula is well known, a short proof is included for completeness of the presentation. Section 4 contains the formula for the moment generating function, see (19), and it is also proved that this coincides with the formula in [31]. In Sect. 5, we derive the recursive equation for the Laplace transforms of the moments of the occupation time, see Theorem 2. In the proof, a technical result, Lemma 1, is needed, the proof of which is given in “Appendix”. In Sect. 6, we apply the results to a number of diffusions and present formulas for the skew two-sided Bessel process, skew Brownian motion, oscillating Brownian motion, Brownian spider and sticky Brownian motion.

2 Preliminaries on Diffusions

Let \(X=(X_t)_{t\ge 0}\) be a regular diffusion taking values on an interval \(I\subseteq {{\,\mathrm{{\mathbb {R}}}\,}}\). For simplicity, it is assumed that X is conservative, i.e., \({{\,\mathrm{{\mathbf {P}}}\,}}_x(X_t\in I)=1\) for all \(x\in I\) and \(t\ge 0,\) where \({{\,\mathrm{{\mathbf {P}}}\,}}_x\) stands for the probability measure associated with X when initiated at x. To fix ideas, we suppose that \(0\in I\). In this section, we briefly describe the setup for the diffusion X.

The notations m and S are used for the speed measure and the scale function, respectively. It is assumed that S is normalized to satisfy \(S(0)=0\). Recall that X has a transition density p with respect to m, that is, for any Borel subset B of I it holds that

$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}_x\left( X_t\in B\right) = \int _B p(t;x,y)\,m(dy). \end{aligned}$$

Moreover, for \(\lambda > 0\) let \(\varphi _\lambda \) and \(\psi _\lambda \) denote the decreasing and increasing, respectively, positive and continuous solutions of the generalized ODE (see [6, p. 18])

$$\begin{aligned} \frac{d}{dm}\frac{d}{dS} u=\lambda u. \end{aligned}$$

The solutions \(\psi _\lambda \) and \(\varphi _\lambda \) are unique up to a multiplicative constant when appropriate boundary conditions are imposed. The Wronskian constant \(w_{\lambda }\), which does not depend on x, is defined as

$$\begin{aligned} w_{\lambda }&:=\psi ^{+}_{\lambda }(x)\varphi _\lambda (x)-\psi _\lambda (x)\varphi ^{+}_\lambda (x) \nonumber \\&\,=\psi ^{-}_{\lambda }(x)\varphi _\lambda (x)-\psi _\lambda (x)\varphi ^{-}_{\lambda }(x), \end{aligned}$$
(2)

where the superscripts \(^+\) and \(^-\) denote the right and left derivatives with respect to the scale function S. We remark that in case x is not a sticky point it holds that \(\psi ^{+}_{\lambda }(x) = \psi ^{-}_{\lambda }(x)\) and \(\varphi ^{+}_{\lambda }(x) = \varphi ^{-}_{\lambda }(x)\) [13, p. 129] (cf. [25, Thm. 3.12, p. 308]). The Green kernel (also called the resolvent kernel) with respect to the speed measure m [13, p. 150] is given by

$$\begin{aligned} G_\lambda (x,y):= {\left\{ \begin{array}{ll} w^{-1}_\lambda \psi _\lambda (x)\varphi _\lambda (y), &{} x\le y,\\ w^{-1}_\lambda \psi _\lambda (y)\varphi _\lambda (x), &{} x\ge y, \end{array}\right. } \end{aligned}$$
(3)

and satisfies

$$\begin{aligned} G_\lambda (x,y)=\int _0^\infty {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t}p(t;x,y)\, \mathop {}\!\mathrm {d}t. \end{aligned}$$

It is well known that the first hitting time \(H_y:=\inf \{t\ge 0 : X_t=y\}\) has a density with respect to the Lebesgue measure. In particular, for \(H_0\) we use the notation

$$\begin{aligned} f(x;t) := {{\,\mathrm{{\mathbf {P}}}\,}}_x(H_0\in \mathop {}\!\mathrm {d}t)/\mathop {}\!\mathrm {d}t \end{aligned}$$

for the \({{\,\mathrm{{\mathbf {P}}}\,}}_x\)-density of \(H_0\) and

$$\begin{aligned} {\widehat{f}}(x;\lambda ) := \int _0^\infty {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t}{{\,\mathrm{{\mathbf {P}}}\,}}_x(H_0\in \mathop {}\!\mathrm {d}t) = {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-\lambda H_0}) = {\left\{ \begin{array}{ll} \dfrac{\varphi _\lambda (x)}{\varphi _\lambda (0)}, &{} x\ge 0, \\ \dfrac{\psi _\lambda (x)}{\psi _\lambda (0)}, &{} x\le 0, \end{array}\right. } \end{aligned}$$
(4)

for its Laplace transform. Moreover, let

$$\begin{aligned} {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) := \frac{\mathop {}\!\mathrm {d}^k}{\mathop {}\!\mathrm {d}\lambda ^k}{\widehat{f}}(x;\lambda ) = (-1)^k {{\,\mathrm{{\mathbf {E}}}\,}}_x(H_0^k {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda H_0}). \end{aligned}$$

Using the continuity of \(\varphi _\lambda \), it follows from (4) that \(\lim _{x\downarrow 0}{\widehat{f}}(x;\lambda )=1\). We also need the following result regarding the kth derivative of \({\widehat{f}}(x;\lambda )\), which is perhaps known, although we have not found a reference for it.

Lemma 1

For any \(\lambda >0\) and \(k\ge 1\),

$$\begin{aligned} \lim _{x\downarrow 0} {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) = 0. \end{aligned}$$
(5)

Proof

A proof based on spectral representations is given in “Appendix”. \(\square \)

Let \(t>0\) and consider the occupation time on \({{\,\mathrm{{\mathbb {R}}}\,}}_+:=[0,+\infty )\) up to time t:

$$\begin{aligned} A_t^X := {{\,\mathrm{Leb}\,}}\{s\in [0,t]:X_s\ge 0\}=\int _0^t {{\,\mathrm{\mathbb {1}}\,}}_{[0,\infty )}(X_s)\,\mathop {}\!\mathrm {d}s. \end{aligned}$$
(6)

If X is a self-similar process (see [27, p. 70]), that is, for any \(a\ge 0\) there exists \(b\ge 0\) such that

$$\begin{aligned} (X_{at})_{t\ge 0} \overset{(d)}{=} (b X_t)_{t\ge 0}, \end{aligned}$$
(7)

then, as is easily seen, for any fixed \(t\ge 0\),

$$\begin{aligned} A_t^X \overset{(d)}{=} t A_1^X. \end{aligned}$$
(8)

Example 1

Skew two-sided Bessel processes were introduced in [4]; see also [1, 5, 31]. A skew two-sided Bessel process \((X^{(\nu ,\beta )}_t)_{t\ge 0}\) with dimension parameter \(\nu \in (-1,0)\) and skewness parameter \(\beta \in (0,1)\) is a diffusion on \({{\,\mathrm{{\mathbb {R}}}\,}}\) with the speed measure

$$\begin{aligned} m_\nu (\mathop {}\!\mathrm {d}x) = {\left\{ \begin{array}{ll} 4\beta x^{2\nu +1} \mathop {}\!\mathrm {d}x, &{} x>0, \\ 4(1-\beta ) |x|^{2\nu +1} \mathop {}\!\mathrm {d}x, &{} x<0, \end{array}\right. } \end{aligned}$$

and the scale function

$$\begin{aligned} S_\nu (x) = {\left\{ \begin{array}{ll} -\frac{1}{4\beta \nu } x^{-2\nu }, &{} x\ge 0, \\ \frac{1}{4(1-\beta )\nu } |x|^{-2\nu }, &{} x\le 0. \end{array}\right. } \end{aligned}$$

The generator is given by

$$\begin{aligned} {{\,\mathrm{{\mathcal {G}}\!}\,}}f(x) = \frac{1}{2} f''(x) + \frac{2\nu +1}{2x} f'(x), \; x\ne 0, \quad {{\,\mathrm{{\mathcal {G}}\!}\,}}f(0) = {{\,\mathrm{{\mathcal {G}}\!}\,}}f(0+) = {{\,\mathrm{{\mathcal {G}}\!}\,}}f(0-), \end{aligned}$$

with the domain

$$\begin{aligned} {\mathcal {D}} = \left\{ f : f, {{\,\mathrm{{\mathcal {G}}\!}\,}}f \in {\mathcal {C}}_b({{\,\mathrm{{\mathbb {R}}}\,}}) , \frac{\mathop {}\!\mathrm {d}f}{\mathop {}\!\mathrm {d}S}(0+) = \frac{\mathop {}\!\mathrm {d}f}{\mathop {}\!\mathrm {d}S}(0-) \right\} . \end{aligned}$$

Recall that for a “one-sided” Bessel diffusion on \({{\,\mathrm{{\mathbb {R}}}\,}}_+\) reflected at zero [6, p. 137] we have the fundamental solutions

$$\begin{aligned} {\widehat{\psi }}_\lambda (x) = x^{-\nu } I_\nu (x\sqrt{2\lambda }), \quad {\widehat{\varphi }}_\lambda (x) = x^{-\nu } K_\nu (x\sqrt{2\lambda }), \end{aligned}$$

for \(x>0\). The limits

$$\begin{aligned} {\widehat{\psi }}_\lambda (0) = \biggl ( \frac{\sqrt{2\lambda }}{2} \biggr )^\nu \frac{1}{\varGamma (1+\nu )}, \quad {\widehat{\varphi }}_\lambda (0) = \biggl ( \frac{\sqrt{2\lambda }}{2} \biggr )^\nu \frac{\varGamma (-\nu )}{2} \end{aligned}$$

follow from the fact that, for \(\nu \in (-1,0)\) and when \(x\rightarrow 0\),

$$\begin{aligned} I_\nu (x) \simeq \frac{1}{\varGamma (\nu +1)} \biggl ( \frac{x}{2} \biggr )^\nu , \quad K_\nu (x) \simeq \frac{\varGamma (-\nu )}{2} \biggl ( \frac{x}{2} \biggr )^\nu . \end{aligned}$$
(9)
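The asymptotics (9) are easy to confirm numerically; the following sketch compares scipy's Bessel functions with the right-hand sides of (9) at a small argument, with an arbitrarily chosen \(\nu \in (-1,0)\).

```python
import numpy as np
from scipy.special import iv, kv, gamma

# Numerical check of the small-argument asymptotics (9) for nu in (-1, 0):
# I_nu(x) ~ (x/2)^nu / Gamma(1 + nu),  K_nu(x) ~ (Gamma(-nu)/2) * (x/2)^nu.
nu, x = -0.3, 1e-6                          # nu chosen arbitrarily in (-1, 0)
i_asym = (x / 2) ** nu / gamma(1 + nu)
k_asym = gamma(-nu) / 2 * (x / 2) ** nu

rel_err_i = abs(iv(nu, x) / i_asym - 1)
rel_err_k = abs(kv(nu, x) / k_asym - 1)     # correction is of order (x/2)^(-2*nu)
```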

Let

$$\begin{aligned} \psi _\lambda (x) := {\left\{ \begin{array}{ll} \dfrac{\pi }{2\beta \sin (-\pi \nu )}\,{\widehat{\psi }}_\lambda (x) - \dfrac{1-\beta }{\beta } \,{\widehat{\varphi }}_\lambda (x), &{} x\ge 0, \\ {\widehat{\varphi }}_\lambda (|x|), &{} x\le 0, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \varphi _\lambda (x) := {\left\{ \begin{array}{ll} {\widehat{\varphi }}_\lambda (x), &{} x\ge 0, \\ \dfrac{\pi }{2(1-\beta )\sin (-\pi \nu )}\,{\widehat{\psi }}_\lambda (|x|) - \dfrac{\beta }{1-\beta } \,{\widehat{\varphi }}_\lambda (|x|), &{} x\le 0. \end{array}\right. } \end{aligned}$$

Since \({\widehat{\psi }}_\lambda \) and \({\widehat{\varphi }}_\lambda \) are solutions of \({{\,\mathrm{{\mathcal {G}}\!}\,}}f(x) = \lambda f(x)\) on \({{\,\mathrm{{\mathbb {R}}}\,}}_+\), we immediately see that \(\psi _\lambda \) is also a solution when \(x>0\), while for \(x<0\)

$$\begin{aligned} \psi _\lambda ^{\,\prime }(x) = -{\widehat{\varphi }}_\lambda ^{\,\prime }(|x|), \quad \psi _\lambda ^{\,\prime \prime }(x) = {\widehat{\varphi }}_\lambda ^{\,\prime \prime \!}(|x|), \end{aligned}$$

and, hence,

$$\begin{aligned} {{\,\mathrm{{\mathcal {G}}\!}\,}}\psi _\lambda (x) = \frac{1}{2} {\widehat{\varphi }}_\lambda ^{\,\prime \prime \!}(|x|) + \frac{2\nu +1}{2|x|} {\widehat{\varphi }}_\lambda ^{\,\prime }(|x|) = \lambda {\widehat{\varphi }}_\lambda (|x|) = \lambda \psi _\lambda (x). \end{aligned}$$

Thus, \(\psi _\lambda \) is a solution of the equation

$$\begin{aligned} {{\,\mathrm{{\mathcal {G}}\!}\,}}f(x) = \lambda f(x), \quad x\ne 0. \end{aligned}$$
(10)

Furthermore, \(\psi _\lambda \) is continuous since

$$\begin{aligned} \lim _{x\downarrow 0} \psi _\lambda (x)&= \frac{\varGamma (-\nu )\varGamma (1+\nu )}{2\beta } {\widehat{\psi }}_\lambda (0) - \frac{1-\beta }{\beta } {\widehat{\varphi }}_\lambda (0) \\&= \biggl ( \frac{\sqrt{2\lambda }}{2} \biggr )^\nu \frac{\varGamma (-\nu )}{2} = {\widehat{\varphi }}_\lambda (0) = \lim _{x\uparrow 0} \psi _\lambda (x), \end{aligned}$$

using Euler’s reflection formula. Notice that from

$$\begin{aligned} {\widehat{\psi }}_\lambda ^{\,\prime }(x) = \sqrt{2\lambda }\, x^{-\nu } I_{\nu +1}(x\sqrt{2\lambda }), \quad {\widehat{\varphi }}_\lambda ^{\,\prime }(x) = - \sqrt{2\lambda }\, x^{-\nu } K_{\nu +1}(x\sqrt{2\lambda }), \end{aligned}$$

and (9) it follows that

$$\begin{aligned} \lim _{x\downarrow 0} \; x^{2\nu +1} {\widehat{\psi }}_\lambda ^{\,\prime }(x) = 0, \qquad \lim _{x\downarrow 0} \; x^{2\nu +1}{\widehat{\varphi }}_\lambda ^{\,\prime }(x) = - \biggl ( \frac{2}{\sqrt{2\lambda }} \biggr )^\nu \varGamma (1+\nu ), \end{aligned}$$

and, therefore,

$$\begin{aligned} \lim _{x\downarrow 0} \frac{\mathop {}\!\mathrm {d}\psi _\lambda (x)}{\mathop {}\!\mathrm {d}S}&= \frac{\pi }{\sin (-\pi \nu )} \lim _{x\downarrow 0} x^{2\nu +1}{\widehat{\psi }}_\lambda ^{\,\prime }(x) - 2(1-\beta ) \lim _{x\downarrow 0}x^{2\nu +1}{\widehat{\varphi }}_\lambda ^{\,\prime }(x) \\&= - 2(1-\beta ) \lim _{x\uparrow 0} |x|^{2\nu +1}{\widehat{\varphi }}_\lambda ^{\,\prime }(|x|) \\&= \lim _{x\uparrow 0} \frac{\mathop {}\!\mathrm {d}\psi _\lambda (x)}{\mathop {}\!\mathrm {d}S}, \end{aligned}$$

so the scale derivative of \(\psi _\lambda \) is also continuous. Finally, from the continuity of \(\psi _\lambda \) and the fact that \({\widehat{\psi }}_\lambda \) is an increasing and \({\widehat{\varphi }}_\lambda \) a positive and decreasing function on \({{\,\mathrm{{\mathbb {R}}}\,}}_+\) (also note that \(\sin (-\pi \nu )>0\)), it follows that \(\psi _\lambda \) is positive and increasing on \({{\,\mathrm{{\mathbb {R}}}\,}}\). After using a similar procedure for \(\varphi _\lambda \), we conclude that the functions \(\psi _\lambda \) and \(\varphi _\lambda \) are increasing and decreasing, respectively, positive and continuous solutions of (10). These functions are also given in the recent paper [1], albeit there with a different normalization.

Using the functions \(\psi _\lambda \) and \(\varphi _\lambda \) as defined above, the corresponding Wronskian is given by

$$\begin{aligned} w_\lambda = \frac{\mathop {}\!\mathrm {d}\psi _\lambda }{\mathop {}\!\mathrm {d}S} \varphi _\lambda - \frac{\mathop {}\!\mathrm {d}\varphi _\lambda }{\mathop {}\!\mathrm {d}S} \psi _\lambda = \frac{\pi }{\sin (-\pi \nu )}, \end{aligned}$$

and the Green kernel can be obtained using (3). In particular, we get, for \(y>0\), that

$$\begin{aligned} G_\lambda (0,y)&= w_\lambda ^{-1} \psi _\lambda (0)\varphi _\lambda (y) \nonumber \\&= \frac{\sin (-\pi \nu )}{\pi } {\widehat{\varphi }}_\lambda (0) {\widehat{\varphi }}_\lambda (y) \nonumber \\&= \frac{1}{2 \varGamma (\nu + 1)} \biggl ( \frac{\sqrt{2\lambda }}{2} \biggr )^\nu y^{-\nu } K_\nu (y\sqrt{2\lambda }). \end{aligned}$$
(11)

Note that \(G_\lambda (0,y) = \frac{1}{2} {\widehat{G}}_\lambda (0,y)\), where \({\widehat{G}}_\lambda \) is the Green kernel for the one-sided Bessel process reflected at zero [6, p. 137] and also that the skewness parameter \(\beta \) is not present in the expression for \(G_\lambda (0,y)\).

A skew two-sided Bessel process has the scaling property; more precisely, (7) holds with \(b=\sqrt{a}\). This can be verified through a straightforward calculation using the Green kernel.

For a skew two-sided Bessel process, the skewness parameter \(\beta \) has an interpretation similar to that for skew Brownian motion: it is the probability that the process, when initiated at 0, is positive at any given time. This follows from

$$\begin{aligned} \int _0^\infty {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t} {{\,\mathrm{{\mathbf {P}}}\,}}_0(X^{(\nu ,\beta )}_t\ge 0) \mathop {}\!\mathrm {d}t&= \int _0^\infty G_\lambda (0,y) m_\nu (\mathop {}\!\mathrm {d}y) \nonumber \\&= \int _0^\infty \frac{4\beta }{2 \varGamma (\nu + 1)} \biggl ( \frac{\sqrt{2\lambda }}{2} \biggr )^\nu y^{\nu +1} K_\nu (y\sqrt{2\lambda }) \mathop {}\!\mathrm {d}y \nonumber \\&= \frac{\beta }{2^{\nu }\lambda \, \varGamma (1+\nu )} \int _0^\infty z^{1+\nu } K_\nu (z) \mathop {}\!\mathrm {d}z \nonumber \\&= \frac{\beta }{\lambda }, \end{aligned}$$
(12)

where in the last step an integral formula for modified Bessel functions of the second kind [11, Eq. (6.561.16)] has been applied. Inverting the Laplace transform gives that, for any \(t>0\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}_0(X^{(\nu ,\beta )}_t\ge 0) = \beta . \end{aligned}$$
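The last step of (12) rests on the tabulated integral [11, Eq. (6.561.16)], which here reduces to \(\int _0^\infty z^{1+\nu }K_\nu (z)\,\mathop {}\!\mathrm {d}z = 2^\nu \varGamma (1+\nu )\); a quick quadrature (our check, with an arbitrary test value of \(\nu \)) confirms it.

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

# The integral behind the last step of (12):
#   int_0^infty z^(1+nu) K_nu(z) dz = 2^nu * Gamma(1+nu),
# which is what collapses the Laplace transform to beta/lambda.
nu = -0.4                                   # arbitrary value in (-1, 0)
integral, _ = quad(lambda z: z ** (1 + nu) * kv(nu, z), 0, np.inf)
closed_form = 2 ** nu * gamma(1 + nu)
```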

Example 2

Skew Brownian motion with skewness parameter \(\beta \in (0,1)\) is a diffusion which behaves like a standard Brownian motion when away from a certain skew point, here taken to be 0, while the sign of every excursion from the skew point is chosen by an independent Bernoulli trial with parameter \(\beta \), see [2, 4, 19, 30, 31]. This corresponds to a skew two-sided Bessel process with \(\nu =-1/2\), but for the convenience of the reader we write down explicit expressions. In this case, we have the speed measure

$$\begin{aligned} m(\mathop {}\!\mathrm {d}x) = {\left\{ \begin{array}{ll} 4(1-\beta ) \mathop {}\!\mathrm {d}x, &{} x<0, \\ 4\beta \mathop {}\!\mathrm {d}x, &{} x>0, \end{array}\right. } \end{aligned}$$

and the scale function

$$\begin{aligned} S(x) = {\left\{ \begin{array}{ll} \frac{x}{2(1-\beta )}, &{} x\le 0, \\ \frac{x}{2\beta }, &{} x\ge 0. \end{array}\right. } \end{aligned}$$

The fundamental solutions to (10) are obtained by inserting \(\nu =-1/2\) into the expressions for \(\psi _\lambda \) and \(\varphi _\lambda \) in Example 1, and after a suitable scaling we get

$$\begin{aligned} \psi _\lambda (x) = {\left\{ \begin{array}{ll} {{\,\mathrm{\mathrm {e}}\,}}^{x\sqrt{2\lambda }} + \frac{1-2\beta }{\beta } \sinh (x\sqrt{2\lambda }), &{} x\ge 0, \\ {{\,\mathrm{\mathrm {e}}\,}}^{x\sqrt{2\lambda }}, &{} x\le 0, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \varphi _\lambda (x) = {\left\{ \begin{array}{ll} {{\,\mathrm{\mathrm {e}}\,}}^{-x\sqrt{2\lambda }}, &{} x\ge 0, \\ {{\,\mathrm{\mathrm {e}}\,}}^{-x\sqrt{2\lambda }} + \frac{1-2\beta }{1-\beta } \sinh (x\sqrt{2\lambda }), &{} x\le 0. \end{array}\right. } \end{aligned}$$

The Wronskian is in this case \(w_\lambda = 2\sqrt{2\lambda }\). We also get that

$$\begin{aligned} G_\lambda (0,y) = \frac{1}{2 \sqrt{2\lambda }} {{\,\mathrm{\mathrm {e}}\,}}^{-y\sqrt{2\lambda }}, \quad y\ge 0. \end{aligned}$$
(13)
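As a sanity check (ours, not in the source), one can verify numerically that the pair above has the constant Wronskian \(w_\lambda = 2\sqrt{2\lambda }\), approximating the scale derivatives by difference quotients; the parameter values and test points are arbitrary.

```python
import numpy as np

# Check that psi_lambda, phi_lambda for skew Brownian motion give the constant
# Wronskian w_lambda = 2*sqrt(2*lambda).  Derivatives are taken with respect to
# the scale S(x) = x/(2*beta) for x > 0 and x/(2*(1-beta)) for x < 0.
beta, lam = 0.3, 1.7                        # arbitrary parameter values
s = np.sqrt(2 * lam)

def psi(x):
    return np.exp(x * s) + (1 - 2 * beta) / beta * np.sinh(x * s) if x >= 0 \
        else np.exp(x * s)

def phi(x):
    return np.exp(-x * s) if x >= 0 \
        else np.exp(-x * s) + (1 - 2 * beta) / (1 - beta) * np.sinh(x * s)

def scale_derivative(f, x, h=1e-6):
    slope = 1 / (2 * beta) if x > 0 else 1 / (2 * (1 - beta))   # S'(x)
    return (f(x + h) - f(x - h)) / (2 * h) / slope

wronskians = [scale_derivative(psi, x) * phi(x) - scale_derivative(phi, x) * psi(x)
              for x in (-0.7, 0.7)]
```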

Example 3

Oscillating Brownian motion (see, e.g., [20]) is a diffusion \(({\widetilde{X}}_t)_{t\ge 0}\) which is characterized by the speed measure

$$\begin{aligned} {\widetilde{m}}(\mathop {}\!\mathrm {d}x) = {\left\{ \begin{array}{ll} (2/\sigma _{-}^2) \mathop {}\!\mathrm {d}x, &{} x<0, \\ (2/\sigma _{+}^2) \mathop {}\!\mathrm {d}x, &{} x>0, \end{array}\right. } \end{aligned}$$

with \(\sigma _{-}>0, \sigma _{+}>0\), and the scale function \({\widetilde{S}}(x) = x\). The fundamental solutions associated with oscillating Brownian motion are (cf. [22])

$$\begin{aligned} {\widetilde{\psi }}_\lambda (x) = {\left\{ \begin{array}{ll} \cosh \Big (\frac{x\sqrt{2\lambda }}{\sigma _+}\Big ) + \frac{\sigma _+}{\sigma _-} \sinh \Big (\frac{x\sqrt{2\lambda }}{\sigma _+}\Big ), &{} x\ge 0, \\ \exp \Big (\frac{x\sqrt{2\lambda }}{\sigma _-}\Big ), &{} x\le 0, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} {\widetilde{\varphi }}_\lambda (x) = {\left\{ \begin{array}{ll} \exp \Big (-\frac{x\sqrt{2\lambda }}{\sigma _+}\Big ), &{} x\ge 0, \\ \cosh \Big (\frac{x\sqrt{2\lambda }}{\sigma _-}\Big ) - \frac{\sigma _-}{\sigma _+} \sinh \Big (\frac{x\sqrt{2\lambda }}{\sigma _-}\Big ), &{} x\le 0, \end{array}\right. } \end{aligned}$$

and the corresponding Wronskian is given by \({\widetilde{w}}_\lambda = \sqrt{2\lambda }\big ( \frac{1}{\sigma _+} + \frac{1}{\sigma _-} \big )\). This gives that

$$\begin{aligned} {\widetilde{G}}_\lambda (0,y) = \frac{\sigma _+ \sigma _-}{(\sigma _+ + \sigma _-)\sqrt{2\lambda }} \exp \biggl (-\frac{y\sqrt{2\lambda }}{\sigma _+}\biggr ), \quad y\ge 0. \end{aligned}$$
(14)

It can be checked that the oscillating Brownian motion has the scaling property. In fact, it holds that

$$\begin{aligned} ({\widetilde{X}}_t)_{t\ge 0} \overset{(d)}{=}(S(X_t))_{t\ge 0}, \end{aligned}$$
(15)

where \((X_t)_{t\ge 0}\) denotes the skew Brownian motion with \(\beta =\sigma _-/(\sigma _++\sigma _-)\) and S is its scale function.
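Consistent with (15) and the identity \({{\,\mathrm{{\mathbf {P}}}\,}}_0(X_t\ge 0)=\beta \) from Example 1, integrating the Green kernel (14) against the speed measure should give \(\sigma _-/((\sigma _++\sigma _-)\lambda )\). A quadrature sketch (our check, with arbitrary parameter values):

```python
import numpy as np
from scipy.integrate import quad

# Integrating the Green kernel (14) against m(dy) = (2/sigma_+^2) dy on (0, inf)
# should give sigma_- / ((sigma_+ + sigma_-) * lam), i.e. the Laplace transform
# of t -> P_0(X_t >= 0) with constant value beta = sigma_- / (sigma_+ + sigma_-).
sp, sm, lam = 1.5, 0.5, 2.0                 # arbitrary parameter values
G0 = lambda y: sp * sm / ((sp + sm) * np.sqrt(2 * lam)) \
    * np.exp(-y * np.sqrt(2 * lam) / sp)
integral, _ = quad(lambda y: (2 / sp ** 2) * G0(y), 0, np.inf)
expected = sm / ((sp + sm) * lam)
```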

Example 4

Sticky Brownian motion is a diffusion which behaves like a standard Brownian motion when on excursions away from a certain given point, which we take to be 0. The crucial property which distinguishes sticky Brownian motion from standard Brownian motion is that the occupation time at 0 for sticky Brownian motion is positive up to any given time \(t>0\) a.s. if the process starts from 0, i.e.,

$$\begin{aligned} {{\,\mathrm{Leb}\,}}\{ s\in [0,t]\, :\, X_s=0\}>0 \quad {{\,\mathrm{{\mathbf {P}}}\,}}_0\ \text {-a.s.}, \end{aligned}$$

where \((X_t)_{t\ge 0}\) denotes the sticky Brownian motion. However, a sticky Brownian motion does not remain at 0 for any time interval (such behavior would violate the strong Markov property). The scale function and the speed measure are

$$\begin{aligned}&S(x)=x,\quad m(\mathop {}\!\mathrm {d}x) = 2 \mathop {}\!\mathrm {d}x +2\gamma \varepsilon _{\{0\}}(dx), \end{aligned}$$

respectively, where \(\gamma >0\) and \(\varepsilon _{\{0\}}\) denotes the Dirac measure at 0. The fundamental solutions are (see [6, p. 127] and references therein)

$$\begin{aligned} \psi _\lambda (x) = {\left\{ \begin{array}{ll} {{\,\mathrm{\mathrm {e}}\,}}^{x\sqrt{2\lambda }} + \gamma \sqrt{2\lambda } \sinh (x\sqrt{2\lambda }), &{} x\ge 0, \\ {{\,\mathrm{\mathrm {e}}\,}}^{x\sqrt{2\lambda }}, &{} x\le 0, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \varphi _\lambda (x) = {\left\{ \begin{array}{ll} {{\,\mathrm{\mathrm {e}}\,}}^{-x\sqrt{2\lambda }}, &{} x\ge 0, \\ {{\,\mathrm{\mathrm {e}}\,}}^{-x\sqrt{2\lambda }} - \gamma \sqrt{2\lambda } \sinh (x\sqrt{2\lambda }), &{} x\le 0. \end{array}\right. } \end{aligned}$$

respectively. The corresponding Wronskian is \(w_\lambda = 2\sqrt{2\lambda } + 2\lambda \gamma \). In particular, this gives that

$$\begin{aligned} G_\lambda (0,y) = \frac{1}{2 \sqrt{2\lambda }+2\lambda \gamma } {{\,\mathrm{\mathrm {e}}\,}}^{-y\sqrt{2\lambda }}, \quad y\ge 0. \end{aligned}$$
(16)
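A quick numerical check (ours) of the stickiness at 0: the generalized ODE \(\frac{d}{dm}\frac{d}{dS}u=\lambda u\) with the atom \(2\gamma \) of the speed measure at 0 forces the scale derivative of \(\psi _\lambda \) to jump there by \(\lambda \, m(\{0\})\, \psi _\lambda (0) = 2\lambda \gamma \), which connects to the remark after (2) that left and right scale derivatives differ precisely at sticky points.

```python
import numpy as np

# Jump of the scale derivative of psi_lambda at the sticky point 0:
# psi^+(0) - psi^-(0) = lambda * m({0}) * psi(0) = 2 * lambda * gamma.
gam, lam = 0.8, 1.3                         # arbitrary parameter values
s, h = np.sqrt(2 * lam), 1e-7

psi_right = lambda x: np.exp(x * s) + gam * s * np.sinh(x * s)   # x >= 0
psi_left = lambda x: np.exp(x * s)                               # x <= 0
# S(x) = x, so scale derivatives are ordinary one-sided derivatives
right_deriv = (psi_right(h) - psi_right(0.0)) / h
left_deriv = (psi_left(0.0) - psi_left(-h)) / h
jump = right_deriv - left_deriv
```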

3 Kac’s Moment Formula

In this section, we recall the classical moment formula for integral functionals due to Kac [14]. See [9] for formulas for additive functionals in the framework of a general strong Markov process, and for further references. Our aim here is to present the formula in a form directly applicable to the case at hand.

Let X be a regular diffusion taking values on an interval \(I\subseteq {{\,\mathrm{{\mathbb {R}}}\,}}\), as defined above. For a measurable and bounded function V define for \(t>0\)

$$\begin{aligned} A_t(V) := \int _0^t V(X_s) \mathop {}\!\mathrm {d}s. \end{aligned}$$

Proposition 1

(Kac’s moment formula) For \(t>0\), \(x\in I\) and \(n=1,2,\dotsc \),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x\bigl ((A_t(V))^n\bigr ) = n \int _I m(\mathop {}\!\mathrm {d}y) \int _0^t p(s;x,y) V(y) {{\,\mathrm{{\mathbf {E}}}\,}}_y\bigl ((A_{t-s}(V))^{n-1}\bigr ) \mathop {}\!\mathrm {d}s. \end{aligned}$$
(17)

Proof

The formula clearly holds for \(n=1\). For \(n\ge 2\) consider

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x\bigl ((A_t(V))^n\bigr )&= {{\,\mathrm{{\mathbf {E}}}\,}}_x \biggl ( \Bigl ( \int _0^t V(X_s) \mathop {}\!\mathrm {d}s \Bigr )^n \biggr ) \\&= {{\,\mathrm{{\mathbf {E}}}\,}}_x \biggl ( \int _0^t \cdots \int _0^t V(X_{s_1}) \cdot \dotsc \cdot V(X_{s_n}) \mathop {}\!\mathrm {d}s_1 \cdot \dotsc \cdot \mathop {}\!\mathrm {d}s_n \biggr ) \\&= n! {{\,\mathrm{{\mathbf {E}}}\,}}_x \biggl ( \int _0^t \mathop {}\!\mathrm {d}s_1 \int _{s_1}^t \mathop {}\!\mathrm {d}s_2 \, \cdots \int _{s_{n-1}}^t \mathop {}\!\mathrm {d}s_n V(X_{s_1}) \cdot \dotsc \cdot V(X_{s_n}) \biggr ), \end{aligned}$$

where the last step holds due to the symmetry of the function

$$\begin{aligned} (s_1,\dotsc ,s_n) \rightarrow V(X_{s_1}) \cdot \dotsc \cdot V(X_{s_n}). \end{aligned}$$

Consequently, using the Markov property and the induction assumption,

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x\bigl ((A_t(V))^n\bigr )&= n! \int _0^t \mathop {}\!\mathrm {d}s_1 \int _I m(\mathop {}\!\mathrm {d}y) p(s_1;x,y)V(y) \\&\quad \cdot {{\,\mathrm{{\mathbf {E}}}\,}}_y \biggl ( \int _0^{t-s_1} \mathop {}\!\mathrm {d}s_2 \, \cdots \int _{s_{n-1}}^{t-s_1} \mathop {}\!\mathrm {d}s_n V(X_{s_2}) \cdot \dotsc \cdot V(X_{s_n}) \biggr ) \\&= n \int _0^t \mathop {}\!\mathrm {d}s_1 \int _I m(\mathop {}\!\mathrm {d}y) p(s_1;x,y)V(y) {{\,\mathrm{{\mathbf {E}}}\,}}_y\bigl ((A_{t-s_1}(V))^{n-1}\bigr ), \end{aligned}$$

which proves the claim. \(\square \)
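Formula (17) can be tested numerically in the simplest setting: standard Brownian motion started at \(x=0\), with \(V={{\,\mathrm{\mathbb {1}}\,}}_{[0,\infty )}\), \(n=2\) and \(t=1\). Then \(p(s;0,y)\,m(\mathop {}\!\mathrm {d}y)\) is the \(N(0,s)\) law, \({{\,\mathrm{{\mathbf {E}}}\,}}_y(A_u)=\int _0^u \varPhi (y/\sqrt{v})\,\mathop {}\!\mathrm {d}v\), and the left-hand side \({{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^2)=3/8\) by a direct computation from the arcsine law (1). The sketch below (ours; the grid sizes are arbitrary) evaluates the right-hand side by iterated quadrature.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

# Numerical check of Kac's moment formula (17) for standard Brownian motion,
# x = 0, n = 2, t = 1, V the indicator of [0, infinity).
z = np.linspace(0.0, 8.0, 201)              # substitution y = sqrt(s) * z
w = np.linspace(1e-9, 1.0, 201)             # substitution v = u * w

def mean_occupation(y, u):
    """E_y(A_u) = int_0^u Phi(y/sqrt(v)) dv for Brownian motion, y >= 0."""
    v = u * w
    return trapezoid(norm.cdf(y[:, None] / np.sqrt(v)[None, :]), v, axis=1)

def inner(s):
    # int_0^inf p(s;0,y) E_y(A_{1-s}) m(dy), written as an integral over z
    return trapezoid(norm.pdf(z) * mean_occupation(np.sqrt(s) * z, 1.0 - s), z)

s = np.linspace(1e-6, 1.0 - 1e-6, 201)
rhs = 2.0 * trapezoid([inner(si) for si in s], s)
lhs = 3.0 / 8.0                             # E_0(A_1^2), second arcsine moment
```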

4 Moment Generating Function

In this section, as in the previous one, it is assumed that X is a regular diffusion taking values in the interval \(I\subseteq {{\,\mathrm{{\mathbb {R}}}\,}}\), as introduced in Sect. 2. Recall that \(A_t:=A_t^X\) is the occupation time on \({{\,\mathrm{{\mathbb {R}}}\,}}_+\) up to time t, as defined in (6); for brevity, the superscript X is omitted in what follows. Let \(T\sim {\text {Exp}}(\lambda )\) be an exponentially distributed random variable independent of X. We here derive an expression for the moment generating function of \(A_T\), which always exists, since \(A_t\le t\) for all t.

Theorem 1

Let \(I^+:=I \cap [0,\infty )\). For \(x>0\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) = \frac{\lambda }{\lambda +r} - \left( \frac{\lambda }{\lambda +r} - {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) \right) {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +r)H_0}), \end{aligned}$$
(18)

and

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) =\frac{1}{\lambda +r}\left( \lambda + r\, \frac{\displaystyle 1-\lambda \int _{I^+} G_\lambda (0,y) \,m(\mathop {}\!\mathrm {d}y)}{\displaystyle 1+r \int _{I^+} G_\lambda (0,y) {\widehat{f}}(y;\lambda +r) \,m(\mathop {}\!\mathrm {d}y)}\right) . \end{aligned}$$
(19)

Proof

Equation (18) follows from

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T} ; H_0>T)&= {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rT} ; H_0>T) \nonumber \\&= \frac{\lambda }{\lambda +r} \left( 1 - {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +r)H_0}) \right) , \end{aligned}$$

together with

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T} ; H_0<T)&= {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rH_0} ; H_0<T) \nonumber \\&= {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}){{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +r)H_0}). \end{aligned}$$

From Kac’s moment formula (17) with \(V(x)={{\,\mathrm{\mathbb {1}}\,}}_{[0,\infty )}(x)\), it follows that

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rA_t})&= \sum _{n=0}^\infty {{\,\mathrm{{\mathbf {E}}}\,}}_x(A_t^n) \frac{(-r)^n}{n!} \\&= 1 + \sum _{n=1}^\infty (-r) \frac{(-r)^{n-1}}{(n-1)!} \int _{I^+} m(\mathop {}\!\mathrm {d}y) \int _0^t p(s;x,y) {{\,\mathrm{{\mathbf {E}}}\,}}_y(A_{t-s}^{n-1}) \mathop {}\!\mathrm {d}s\\&= 1 - r \int _{I^+} m(\mathop {}\!\mathrm {d}y) \int _0^t p(s;x,y) {{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-rA_{t-s}}) \mathop {}\!\mathrm {d}s. \end{aligned}$$

From this, we obtain the formula

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T})&= \int _0^\infty {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-rA_t})\lambda {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t} \mathop {}\!\mathrm {d}t \nonumber \\&= 1 - r\lambda \int _{I^+} {{\,\mathrm{{\mathcal {L}}}\,}}\left\{ \int _0^t p(s;x,y) {{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-rA_{t-s}}) \mathop {}\!\mathrm {d}s \right\} m(\mathop {}\!\mathrm {d}y) \nonumber \\&= 1 - r \int _{I^+} G_\lambda (x,y) {{\,\mathrm{{\mathcal {L}}}\,}}\{ \lambda {{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-rA_t}) \} m(\mathop {}\!\mathrm {d}y) \nonumber \\&= 1 - r \int _{I^+} G_\lambda (x,y) {{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) m(\mathop {}\!\mathrm {d}y), \end{aligned}$$
(20)

using the convolution formula for the Laplace transform \((\mathcal {L})\). Inserting (18) into the right-hand side of (20) and putting \(x=0\), we can solve the resulting expression for \({{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T})\), which gives

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T})&= \frac{1-\frac{r\lambda }{\lambda +r} \int _{I^+} G_\lambda (0,y) (1-{{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +r)H_0})) \,m(\mathop {}\!\mathrm {d}y)}{1+r \int _{I^+} G_\lambda (0,y) {{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +r)H_0}) \,m(\mathop {}\!\mathrm {d}y)} \\&= \frac{\lambda }{\lambda +r}+ \frac{r}{\lambda +r} \cdot \frac{1-\lambda \int _{I^+} G_\lambda (0,y) \,m(\mathop {}\!\mathrm {d}y)}{1+r \int _{I^+} G_\lambda (0,y) {\widehat{f}}(y;\lambda +r) \,m(\mathop {}\!\mathrm {d}y)}, \end{aligned}$$

proving the result. \(\square \)
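For standard Brownian motion the ingredients of (19) are explicit (Example 2 with \(\beta =1/2\)), and (19) should collapse to the classical double Laplace transform \(\sqrt{\lambda /(\lambda +r)}\) of the arcsine law. A quadrature sketch (our check, with arbitrary positive \(\lambda \) and r):

```python
import numpy as np
from scipy.integrate import quad

# Formula (19) for standard Brownian motion: m(dy) = 2 dy on (0, inf),
# G_lam(0,y) = exp(-y*sqrt(2*lam)) / (2*sqrt(2*lam)) (cf. (13) with beta = 1/2),
# f_hat(y; lam + r) = exp(-y*sqrt(2*(lam + r))).
lam, r = 1.3, 2.6                           # arbitrary positive values
G = lambda y: np.exp(-y * np.sqrt(2 * lam)) / (2 * np.sqrt(2 * lam))
fhat = lambda y: np.exp(-y * np.sqrt(2 * (lam + r)))

num_int, _ = quad(lambda y: 2 * G(y), 0, np.inf)           # int G_lam(0,y) m(dy)
den_int, _ = quad(lambda y: 2 * G(y) * fhat(y), 0, np.inf)
mgf = (lam + r * (1 - lam * num_int) / (1 + r * den_int)) / (lam + r)
# mgf should equal sqrt(lam / (lam + r))
```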

In the next corollary, we connect formula (19) with the result in [31, Cor. 2], which is a special case of [24, Eq. (68)] and also corresponds to [28, Eq. (110)]. Here we consider the occupation times on both \({{\,\mathrm{{\mathbb {R}}}\,}}_+\) and \({{\,\mathrm{{\mathbb {R}}}\,}}_-:=(-\infty ,0)\):

$$\begin{aligned} A_t^+ := \int _0^t {{\,\mathrm{\mathbb {1}}\,}}_{[0,\infty )}(X_s)\,\mathop {}\!\mathrm {d}s \quad \text {and} \quad A_t^- := \int _0^t {{\,\mathrm{\mathbb {1}}\,}}_{(-\infty ,0)}(X_s)\,\mathop {}\!\mathrm {d}s, \end{aligned}$$

respectively. Formula (21) coincides, when multiplied by \(\lambda ^{-1}\), with [24, Eq. (68)] (without the local time term).

Corollary 1

For \(r,q\ge 0\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T^{+} - qA_T^{-}}) = \frac{\frac{\lambda }{\lambda +q} \varphi _{\lambda +r}(0)\psi _{\lambda +q}^- (0) - \frac{\lambda }{\lambda +r} \psi _{\lambda +q}(0)\varphi _{\lambda +r}^- (0)}{\varphi _{\lambda +r}(0)\psi _{\lambda +q}^- (0) - \psi _{\lambda +q}(0)\varphi _{\lambda +r}^- (0)}, \end{aligned}$$
(21)

where the superscript \(^-\) denotes the left derivative with respect to the scale function S.
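To illustrate (21), consider standard Brownian motion, for which \(\psi _\lambda (x)={{\,\mathrm{\mathrm {e}}\,}}^{x\sqrt{2\lambda }}\), \(\varphi _\lambda (x)={{\,\mathrm{\mathrm {e}}\,}}^{-x\sqrt{2\lambda }}\) and \(S(x)=x\); the right-hand side of (21) then simplifies to \(\lambda /\sqrt{(\lambda +q)(\lambda +r)}\), Kac's classical formula. The following numerical sketch (not part of the argument; the helper name is ours) confirms the simplification:

```python
import math

def occupation_lt_bm(lam, r, q):
    """Right-hand side of (21) for standard Brownian motion, where
    psi_a(0) = phi_a(0) = 1 and the scale derivatives at 0 are
    (psi_a)'(0) = sqrt(2a) and (phi_a)'(0) = -sqrt(2a)."""
    dpsi = math.sqrt(2 * (lam + q))    # psi_{lam+q}^-(0)
    dphi = -math.sqrt(2 * (lam + r))   # phi_{lam+r}^-(0)
    num = lam / (lam + q) * dpsi - lam / (lam + r) * dphi
    den = dpsi - dphi
    return num / den

lam, r, q = 1.3, 0.7, 0.4
assert abs(occupation_lt_bm(lam, r, q)
           - lam / math.sqrt((lam + q) * (lam + r))) < 1e-12
```

Setting \(q=0\) recovers the one-sided transform \(\sqrt{\lambda /(\lambda +r)}\) behind the arcsine law (1).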

Remark 1

If the point 0 is included in \(A_t^-\) rather than in \(A_t^+\), then the left derivatives in (21) should be replaced by right derivatives. Note, however, that this makes a difference only if 0 is a sticky point, since otherwise the left and right scale derivatives are equal.

Proof of Corollary 1

We first prove (21) when \(q=0\). Since for \(y>0\) we have that

$$\begin{aligned} {\widehat{f}}(y;\lambda +r) = {{\,\mathrm{{\mathbf {E}}}\,}}_y({{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +r)H_0}) = \frac{\varphi _{\lambda +r}(y)}{\varphi _{\lambda +r}(0)}, \end{aligned}$$

we can rewrite (19) as

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T})&= \frac{1}{\lambda +r} \left( \lambda + r\, \frac{w_\lambda \varphi _{\lambda +r}(0)-\psi _\lambda (0) \varphi _{\lambda +r}(0) \int _{I^+} \lambda \varphi _\lambda (y) \,m(\mathop {}\!\mathrm {d}y)}{w_\lambda \varphi _{\lambda +r}(0)+r\psi _\lambda (0) \int _{I^+} \varphi _\lambda (y) \varphi _{\lambda +r}(y) \,m(\mathop {}\!\mathrm {d}y)} \right) . \end{aligned}$$
(22)

For the integral in the numerator, it holds that

$$\begin{aligned} \int \limits _{[a,b)} \lambda \varphi _\lambda (y) \,m(\mathop {}\!\mathrm {d}y) = \int \limits _{[a,b)} \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}m}\frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}S}\varphi _\lambda (y) \,m(\mathop {}\!\mathrm {d}y) = \varphi _\lambda ^- (b)-\varphi _\lambda ^- (a). \end{aligned}$$
(23)

Recall the following integration by parts formula for a Lebesgue–Stieltjes integral, where the functions U and V are of finite variation and at least one of them is continuous on \((a,b)\):

$$\begin{aligned} \int \limits _{[a,b)} U \mathop {}\!\mathrm {d}V + \int \limits _{[a,b)} V \mathop {}\!\mathrm {d}U = U(b-)V(b-) - U(a-)V(a-). \end{aligned}$$
(24)

Since \(\varphi \) is continuous and of finite variation, we get, applying (23) and (24), that

$$\begin{aligned}&\lambda \int \limits _{[a,b)} \varphi _\lambda (y) \varphi _{\lambda +r}(y) \,m(\mathop {}\!\mathrm {d}y)= \int \limits _{[a,b)} \varphi _{\lambda +r}(y) \cdot \lambda \varphi _{\lambda }(y) \,m(\mathop {}\!\mathrm {d}y) \nonumber \\&\quad = \varphi _{\lambda +r}(b) \varphi _{\lambda }^- (b) - \varphi _{\lambda +r}(a) \varphi _{\lambda }^- (a) -\int \limits _{[a,b)}\left( \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}S}\varphi _{\lambda }(y) \right) \left( \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}S}\varphi _{\lambda +r}(y) \right) \mathop {}\!\mathrm {d}S, \end{aligned}$$
(25)

and likewise

$$\begin{aligned}&(\lambda +r) \int \limits _{[a,b)} \varphi _\lambda (y) \varphi _{\lambda +r}(y) \,m(\mathop {}\!\mathrm {d}y) = \int \limits _{[a,b)} \varphi _{\lambda }(y) \cdot (\lambda +r)\varphi _{\lambda +r}(y) \,m(\mathop {}\!\mathrm {d}y) \nonumber \\&\quad = \varphi _{\lambda }(b) \varphi _{\lambda +r}^- (b) -\varphi _{\lambda }(a) \varphi _{\lambda +r}^- (a) - \int \limits _{[a,b)}\left( \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}S}\varphi _{\lambda }(y) \right) \left( \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}S}\varphi _{\lambda +r}(y) \right) \mathop {}\!\mathrm {d}S. \end{aligned}$$
(26)

Subtracting (25) from (26) yields

$$\begin{aligned} r \int \limits _{[a,b)} \varphi _\lambda (y) \varphi _{\lambda +r}(y) \,m(\mathop {}\!\mathrm {d}y)= & {} \varphi _{\lambda }(b) \varphi _{\lambda +r}^- (b) - \varphi _{\lambda +r}(b) \varphi _{\lambda }^- (b) \\&+\, \varphi _{\lambda +r}(a) \varphi _{\lambda }^- (a) - \varphi _{\lambda }(a) \varphi _{\lambda +r}^- (a). \end{aligned}$$

Furthermore, if b is an upper boundary point of the diffusion which is either non-exit or exit-and-entrance with reflection, then it holds that \(\varphi _{\lambda }^- (b) = 0\) (see [13, p. 130, Table 1]). Since the lower boundary point of \(I^+\) is 0, we thereby get that

$$\begin{aligned} \int _{I^+} \lambda \varphi _\lambda (y) \,m(\mathop {}\!\mathrm {d}y)&= -\varphi _\lambda ^- (0), \\ r \int _{I^+} \varphi _\lambda (y) \varphi _{\lambda +r}(y) \,m(\mathop {}\!\mathrm {d}y)&= \varphi _{\lambda +r}(0) \varphi _{\lambda }^- (0) - \varphi _{\lambda }(0) \varphi _{\lambda +r}^- (0). \end{aligned}$$

Inserting this into (22) yields

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T^{+}})&= \frac{\lambda }{\lambda +r} + \frac{\frac{r}{\lambda +r} ( w_\lambda \varphi _{\lambda +r}(0) + \psi _\lambda (0) \varphi _{\lambda +r}(0) \varphi _\lambda ^- (0) )}{w_\lambda \varphi _{\lambda +r}(0) + \psi _\lambda (0) ( \varphi _{\lambda +r}(0) \varphi _{\lambda }^- (0) - \varphi _{\lambda }(0) \varphi _{\lambda +r}^- (0))} \nonumber \\&= \frac{\varphi _{\lambda +r}(0)\psi _{\lambda }^- (0) - \frac{\lambda }{\lambda +r} \psi _{\lambda }(0)\varphi _{\lambda +r}^- (0)}{\varphi _{\lambda +r}(0)\psi _{\lambda }^- (0) - \psi _{\lambda }(0)\varphi _{\lambda +r}^- (0)}, \end{aligned}$$
(27)

where \(w_\lambda = \psi ^{-}_{\lambda }(0)\varphi _\lambda (0)-\psi _\lambda (0)\varphi ^{-}_{\lambda }(0)\) has been inserted, see (2). We now extend this result. Since \(A_T^{+}+A_T^{-}=T\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-pA_T^{+}-qT}) = {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-(p+q)A_T^{+}-qA_T^{-}}) = {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T^{+}-qA_T^{-}}), \end{aligned}$$

where \(r=p+q\). On the other hand,

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-pA_T^{+}-qT})&= \int _0^\infty \lambda {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-pA_t^{+}-qt}) \mathop {}\!\mathrm {d}t \\&= \int _0^\infty \lambda {{\,\mathrm{\mathrm {e}}\,}}^{-(\lambda +q)t} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-pA_t^{+}}) \mathop {}\!\mathrm {d}t \\&= \frac{\lambda }{\lambda +q} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-pA_{{\widehat{T}}}^{+}}), \end{aligned}$$

where \({\widehat{T}}\sim {\text {Exp}}(\lambda +q)\). Applying (27) gives

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T^{+} - qA_T^{-}})&= \frac{\lambda }{\lambda +q} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-(r-q) A_{{\widehat{T}}}^{+}}) \\&= \frac{\lambda }{\lambda +q} \cdot \frac{\varphi _{\lambda +r}(0)\psi _{\lambda +q}^- (0) - \frac{\lambda +q}{\lambda +r} \psi _{\lambda +q}(0)\varphi _{\lambda +r}^- (0)}{\varphi _{\lambda +r}(0)\psi _{\lambda +q}^- (0) - \psi _{\lambda +q}(0)\varphi _{\lambda +r}^- (0)} \\&= \frac{\frac{\lambda }{\lambda +q}\varphi _{\lambda +r}(0)\psi _{\lambda +q}^- (0) - \frac{\lambda }{\lambda +r} \psi _{\lambda +q}(0)\varphi _{\lambda +r}^- (0)}{\varphi _{\lambda +r}(0)\psi _{\lambda +q}^- (0) - \psi _{\lambda +q}(0)\varphi _{\lambda +r}^- (0)}, \end{aligned}$$

which proves (21). \(\square \)

Remark 2

If we instead consider the occupation times on the intervals \([\alpha ,+\infty )\) and \((-\infty ,\alpha )\) for some \(\alpha \in {{\,\mathrm{{\mathbb {R}}}\,}}\), then (21) holds when 0 is replaced with \(\alpha \) everywhere.

5 Recursive Formula for the Moments

In this section, we use Kac’s moment formula to derive our main result, namely a recursive equation for the Laplace transforms of the moments of \(A_t\) for fixed \(t>0\). When the diffusion is a self-similar process, the expression becomes a recursion for the moments of \(A_1\) instead (which is easily transformed into a recursion for the moments of \(A_t\), since in this case \(A_t=tA_1\) in distribution). It is assumed that X is a regular diffusion taking values in the interval \(I\subseteq {{\,\mathrm{{\mathbb {R}}}\,}}\) as defined in Sect. 2. We introduce the Laplace transforms of the moments of \(A_t\) via

$$\begin{aligned} {\widehat{A}}_x(\lambda ;n) := {{\,\mathrm{{\mathcal {L}}}\,}}_t\{{{\,\mathrm{{\mathbf {E}}}\,}}_x(A_t^n)\}(\lambda ) := \int _0^\infty {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t} {{\,\mathrm{{\mathbf {E}}}\,}}_x(A_t^n) \mathop {}\!\mathrm {d}t. \end{aligned}$$

If there is no ambiguity, the variables t and \(\lambda \) in the notation of the Laplace transforms are omitted; for instance, we shall write \({{\,\mathrm{{\mathcal {L}}}\,}}\{{{\,\mathrm{{\mathbf {E}}}\,}}_x(A_t^n)\}\) instead of \({{\,\mathrm{{\mathcal {L}}}\,}}_t\{{{\,\mathrm{{\mathbf {E}}}\,}}_x(A_t^n)\}(\lambda )\).

Theorem 2

Let \(I^+:=I \cap [0,\infty )\). The Laplace transforms of the moments of \(A_t\) for X starting from 0 are given for \(n=1\) by

$$\begin{aligned} {\widehat{A}}_0(\lambda ;1)= \frac{1}{\lambda } \int _{I^+} G_\lambda (0,y) m(\mathop {}\!\mathrm {d}y), \end{aligned}$$
(28)

and for \(n=2,3,\dotsc \) by

$$\begin{aligned} {\widehat{A}}_0(\lambda ;n) = \frac{n!}{\lambda ^{n-1}} {\widehat{A}}_0(\lambda ;1) + \frac{n!}{\lambda ^{n+1}}\sum _{k=1}^{n-1} \left( 1 - \frac{\lambda ^{n-k+1}}{(n-k)!} {\widehat{A}}_0(\lambda ;n-k) \right) D_{k}(\lambda ),\quad \end{aligned}$$
(29)

where

$$\begin{aligned} D_k(\lambda ) := \frac{(-\lambda )^k}{(k-1)!} \int _{I^+} G_\lambda (0,y) {\widehat{f}}_\lambda ^{(k-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y). \end{aligned}$$
(30)

Moreover, under the scaling property (7), for all \(\lambda >0\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1) = \lambda \int _{I^+} G_\lambda (0,y) m(\mathop {}\!\mathrm {d}y), \end{aligned}$$
(31)

and

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n) = {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1) + \sum _{k=1}^{n-1} \left( 1 - {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{n-k}) \right) D_k(\lambda ). \end{aligned}$$
(32)

Remark 3

Equation (29) can be rewritten as

$$\begin{aligned} U_n(\lambda ) = U_1(\lambda ) + \sum _{k=1}^{n-1} \left( 1 - U_{n-k}(\lambda ) \right) D_k(\lambda ), \end{aligned}$$

where \(U_n(\lambda ):=\frac{\lambda ^{n+1}}{n!} {\widehat{A}}_0(\lambda ;n)\).

Proof of Theorem 2

Taking the Laplace transform on both sides of Kac’s moment formula (17) with \(V(x)={{\,\mathrm{\mathbb {1}}\,}}_{[0,\infty )}(x)\) yields

$$\begin{aligned} {\widehat{A}}_x(\lambda ;n) = n \int _{I^+} G_\lambda (x,y) {\widehat{A}}_y(\lambda ;n-1) m(\mathop {}\!\mathrm {d}y). \end{aligned}$$
(33)

Substituting \(n=1\) and \(x=0\) in (33) gives (28). To derive (29), we first find an expression for \({\widehat{A}}_x(\lambda ;n)\) in terms of \({\widehat{A}}_0(\lambda ;k)\) for different k. For any starting point \(x>0\),

$$\begin{aligned} A_t = {\left\{ \begin{array}{ll} H_0 + A_{t-H_0} \circ \theta _{H_0}, &{}H_0<t,\\ t, &{}H_0>t. \end{array}\right. } \end{aligned}$$

where \(\theta _t\) is the usual shift operator. By the strong Markov property,

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_x(A_t^n) = t^n(1-{{\,\mathrm{{\mathbf {P}}}\,}}_x(H_0<t)) + \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) {{\,\mathrm{{\mathbf {E}}}\,}}_x(H_0^k (A_{t-H_0} \circ \theta _{H_0})^{n-k};H_0<t).\nonumber \\ \end{aligned}$$
(34)

We have the following Laplace transforms with respect to t:

$$\begin{aligned} {{\,\mathrm{{\mathcal {L}}}\,}}\{{{\,\mathrm{{\mathbf {E}}}\,}}_x(H_0^k (A_{t-H_0} \circ \theta _{H_0})^{n-k};H_0<t)\}&= {{\,\mathrm{{\mathcal {L}}}\,}}\left\{ \int _0^t {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_{t-s}^{n-k})s^k f(x;s) \mathop {}\!\mathrm {d}s \right\} \\&= {{\,\mathrm{{\mathcal {L}}}\,}}\{{{\,\mathrm{{\mathbf {E}}}\,}}_0(A_{t}^{n-k})\} {{\,\mathrm{{\mathcal {L}}}\,}}\{t^k f(x;t)\} \\&= (-1)^k {\widehat{A}}_0(\lambda ;n-k) {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) \end{aligned}$$

and

$$\begin{aligned} {{\,\mathrm{{\mathcal {L}}}\,}}\{t^n {{\,\mathrm{{\mathbf {P}}}\,}}_x(H_0<t)\}&= {{\,\mathrm{{\mathcal {L}}}\,}}\left\{ t^n \int _0^t f(x;s) \mathop {}\!\mathrm {d}s\right\} \\&= (-1)^n \frac{\mathop {}\!\mathrm {d}^n}{\mathop {}\!\mathrm {d}\lambda ^n} \left( \frac{1}{\lambda }{\widehat{f}}(x;\lambda )\right) \\&= \frac{n!}{\lambda ^{n+1}} \sum _{k=0}^n \frac{(-\lambda )^k}{k!} {\widehat{f}}_\lambda ^{(k)}(x;\lambda ). \end{aligned}$$

Hence, taking the Laplace transform on both sides of (34) gives

$$\begin{aligned} {\widehat{A}}_x(\lambda ;n)&= \frac{n!}{\lambda ^{n+1}} - \frac{n!}{\lambda ^{n+1}} \sum _{k=0}^n \frac{(-\lambda )^k}{k!} {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) \nonumber \\&\quad + \sum _{k=0}^n (-1)^k\left( {\begin{array}{c}n\\ k\end{array}}\right) {\widehat{A}}_0(\lambda ;n-k) {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) \nonumber \\&= \frac{n!}{\lambda ^{n+1}} + \frac{n!}{\lambda ^{n+1}} \sum _{k=0}^{n-1} \frac{(-\lambda )^{k}}{k!} {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) \left( \frac{\lambda ^{n-k+1}}{(n-k)!}{\widehat{A}}_0(\lambda ;n-k) - 1 \right) . \end{aligned}$$
(35)

Note that in the summation the term with \(k=n\) disappears, since \({\widehat{A}}_0(\lambda ;0) = 1/\lambda \). Inserting the expression in (35) into both sides of (33) and solving for \({\widehat{A}}_0(\lambda ;n)\) gives

$$\begin{aligned} {\widehat{A}}_0(\lambda ;n)= & {} \frac{n!}{\lambda ^{n+1} {\widehat{f}}(x;\lambda )} \biggl ( \lambda \int _{I^+} G_\lambda (x,y) m(\mathop {}\!\mathrm {d}y) + {\widehat{f}}(x;\lambda ) - 1 \nonumber \\&+ \sum _{k=1}^{n-1} \left( 1 - \frac{\lambda ^{n-k+1}}{(n-k)!}{\widehat{A}}_0(\lambda ;n-k) \right) D_k(x;\lambda ) \biggr ), \end{aligned}$$
(36)

where

$$\begin{aligned} D_k(x;\lambda ) = \frac{(-\lambda )^k}{k! {\widehat{f}}(x;\lambda )} \left( {\widehat{f}}_\lambda ^{(k)}(x;\lambda ) + k\int _{I^+} G_\lambda (x,y) {\widehat{f}}_\lambda ^{(k-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y) \right) . \end{aligned}$$
(37)

Note, however, that x is not present on the left-hand side of (36), so the right-hand side cannot depend on x either. Thus, we may choose any \(x>0\). We show now that the limit of the right-hand side of (36) exists when \(x\downarrow 0\). Since

$$\begin{aligned}&\lim _{x\downarrow 0} \int _{I^+} G_\lambda (x,y) m(\mathop {}\!\mathrm {d}y) \\&\quad = \lim _{x\downarrow 0} \biggl ( \frac{\varphi _\lambda (x)}{w_\lambda } \int \limits _{[0,x)} \psi _\lambda (y) m(\mathop {}\!\mathrm {d}y) + \frac{\psi _\lambda (x)}{w_\lambda } \int \limits _{[x,\infty )\cap I^+} \varphi _\lambda (y) m(\mathop {}\!\mathrm {d}y) \biggr ) \\&\quad = \int _{I^+} G_\lambda (0,y) m(\mathop {}\!\mathrm {d}y) \\&\quad = \lambda {\widehat{A}}_0(\lambda ;1), \end{aligned}$$

it is seen by induction that \(\lim _{x\downarrow 0} D_{k}(x;\lambda )\) exists for all values of k. Consequently, recalling that \(\lim _{x\downarrow 0}{\widehat{f}}(x;\lambda )=1\), we may write

$$\begin{aligned} {\widehat{A}}_0(\lambda ;n) = \frac{n!}{\lambda ^{n-1}} {\widehat{A}}_0(\lambda ;1) + \frac{n!}{\lambda ^{n+1}}\sum _{k=1}^{n-1} \left( 1 - \frac{\lambda ^{n-k+1}}{(n-k)!} {\widehat{A}}_0(\lambda ;n-k) \right) \lim _{x\downarrow 0} D_{k}(x;\lambda ). \end{aligned}$$

We calculate the limit of \(D_{k}(x;\lambda )\) from the explicit expression (37). Using the result in Lemma 1,

$$\begin{aligned} \lim _{x\downarrow 0} D_k(x;\lambda )&= \lim _{x\downarrow 0} \frac{(-\lambda )^k}{(k-1)!} \int _{I^+} G_\lambda (x,y) {\widehat{f}}_\lambda ^{(k-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y) \\&= \frac{(-\lambda )^k}{(k-1)!} \lim _{x\downarrow 0} \biggl ( \frac{\varphi _\lambda (x)}{w_\lambda } \int \limits _{[0,x)} \psi _\lambda (y) {\widehat{f}}_\lambda ^{(k-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y) \\&\quad + \frac{\psi _\lambda (x)}{w_\lambda } \int \limits _{[x,\infty )\cap I^+} \quad \varphi _\lambda (y) {\widehat{f}}_\lambda ^{(k-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y) \biggr ) \\&=\frac{(-\lambda )^k}{(k-1)!} \int _{I^+} G_\lambda (0,y) {\widehat{f}}_\lambda ^{(k-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y), \end{aligned}$$

which is the right-hand side of (30). The proof of the recursive Eq. (29) is now complete.

Assume finally that the process X is self-similar so that (8) holds. Then \({{\,\mathrm{{\mathbf {E}}}\,}}_0(A_t^n) = t^n {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n)\) and thus

$$\begin{aligned} {\widehat{A}}_0(\lambda ;n) = {{\,\mathrm{{\mathcal {L}}}\,}}\{t^n{{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n)\} = \frac{n!}{\lambda ^{n+1}} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n). \end{aligned}$$
(38)

It is clear that in this case (31) follows immediately from (28). After inserting (38) into (29), it is seen that \(\lambda \) remains only in the coefficients \(D_k(\lambda ),\, k=1,2,\dotsc ,n-1\). Putting \(n=2\), we find that \(D_1(\lambda )\) does not depend on \(\lambda \), and, by induction, we conclude that \(D_k(\lambda )\) does not depend on \(\lambda \) for any k. \(\square \)
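The elementary transform \({{\,\mathrm{{\mathcal {L}}}\,}}\{t^n c\}(\lambda )=n!\,c/\lambda ^{n+1}\) behind (38) can be confirmed numerically; the following sketch (the quadrature routine and its parameters are ours, chosen only for illustration) compares a truncated trapezoidal approximation with the closed form:

```python
import math

def laplace_tn(lam, n, upper=60.0, steps=100000):
    """Trapezoidal approximation of int_0^infty e^(-lam*t) t^n dt;
    the truncation point `upper` makes the neglected tail negligible."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-lam * t) * t**n
    return total * h

# Compare with n!/lam^(n+1), as used in (38).
for n in range(4):
    assert abs(laplace_tn(1.5, n) - math.factorial(n) / 1.5**(n + 1)) < 1e-6
```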

6 Examples

6.1 Skew Two-Sided Bessel Processes

We now apply the results in Theorems 1 and 2 to the skew two-sided Bessel process described in Example 1. In this case, the function \({\widehat{f}}\) is

$$\begin{aligned} {\widehat{f}}(x;\lambda ) = \frac{{\widehat{\varphi }}(x)}{{\widehat{\varphi }}(0)} = \frac{2^{\nu +1}}{\varGamma (-\nu )} (x\sqrt{2\lambda })^{-\nu } K_\nu (x\sqrt{2\lambda }), \end{aligned}$$
(39)

and the Green kernel is given in (11). Note that here \(I^+ = [0,\infty )\). We derive the next result from (19) in Theorem 1. Alternatively, formula (21) in Corollary 1 could have been used.

Proposition 2

Let \(T\sim \text {Exp}(\lambda )\) be independent of X. For any \(r\ge 0\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) = \frac{\beta \lambda ^{\nu +1} + (1-\beta ) (\lambda +r)^{\nu +1}}{\beta (\lambda +r) \lambda ^{\nu } + (1-\beta ) (\lambda +r)^{\nu +1}}. \end{aligned}$$
(40)

Proof

The identity is trivial when \(r=0\). For \(r>0\), it follows from (19) that

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T})&= \frac{1}{\lambda +r} \left( \lambda + r \, \frac{1-\lambda \int _0^\infty G_\lambda (0,y) \,m_\nu (\mathop {}\!\mathrm {d}y)}{1+r \int _0^\infty G_\lambda (0,y) {\widehat{f}}(y;\lambda +r) \,m_\nu (\mathop {}\!\mathrm {d}y)} \right) \nonumber \\&= \frac{1}{\lambda +r} \left( \lambda + r \, \frac{1-\lambda \varDelta _1}{1+r \varDelta _2} \right) , \end{aligned}$$
(41)

where we need to calculate the integrals \(\varDelta _1\) and \(\varDelta _2\). From (12), we already have that

$$\begin{aligned} \varDelta _1 = \int _0^\infty G_\lambda (0,y) m_\nu (\mathop {}\!\mathrm {d}y) = \frac{\beta }{\lambda }. \end{aligned}$$

Recalling (11) and (39), the second integral becomes

$$\begin{aligned} \varDelta _2&= \int _0^\infty G_\lambda (0,y) {\widehat{f}}(y;\lambda +r) \, m_\nu (\mathop {}\!\mathrm {d}y) \nonumber \\&= \frac{4\beta }{\varGamma (\nu +1)\varGamma (-\nu )} \left( \frac{\lambda }{\lambda +r} \right) ^{\frac{\nu }{2}} \int _0^\infty y K_\nu \big (y\sqrt{2(\lambda +r)}\big ) K_\nu \big (y\sqrt{2\lambda }\big ) \mathop {}\!\mathrm {d}y \nonumber \\&= \frac{-\beta \nu }{\lambda +r} \left( \frac{\lambda }{\lambda +r} \right) ^{\nu } {}_2F_1 \left( 1,1+\nu \,;2\,;\frac{r}{\lambda +r}\right) , \end{aligned}$$
(42)

after applying an integral formula for modified Bessel functions of the second kind [11, Eq. (6.576.4)]. The hypergeometric function can in this case be rewritten using an incomplete beta function [11, Eq. (8.391)] as

$$\begin{aligned} {}_2F_1 \left( 1,1+\nu \,;2\,;\frac{r}{\lambda +r}\right)&= \frac{\lambda +r}{r} \int _0^\frac{r}{\lambda +r} (1-t)^{-\nu -1} \mathop {}\!\mathrm {d}t \\&= \frac{\lambda +r}{r\nu } \left( \left( \frac{\lambda }{\lambda +r} \right) ^{-\nu } -1 \right) . \end{aligned}$$
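The series representation \({}_2F_1(1,1+\nu ;2;z)=\sum _{n\ge 0}(1+\nu )_n z^n/(n+1)!\) gives a direct numerical check of this reduction (the function name is ours):

```python
def hyp2f1_1(nu, z, terms=120):
    """2F1(1, 1+nu; 2; z) summed term by term; the ratio of
    consecutive terms is (1 + nu + n) * z / (n + 2)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (1 + nu + n) * z / (n + 2)
    return total

nu, lam, r = -0.3, 1.0, 0.6
z = r / (lam + r)
closed = ((1 - z)**(-nu) - 1) / (nu * z)   # the incomplete-beta reduction above
assert abs(hyp2f1_1(nu, z) - closed) < 1e-10
```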

Inserting this into (42) yields

$$\begin{aligned} \varDelta _2 = \frac{\beta }{r} \left( \left( \frac{\lambda }{\lambda +r} \right) ^{\nu } -1 \right) . \end{aligned}$$

When inserting \(\varDelta _1\) and \(\varDelta _2\) into (41), we obtain

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T})&= \frac{1}{\lambda +r} \left( \lambda + r \, \frac{1-\beta }{1+\beta \Big ( \big ( \frac{\lambda }{\lambda +r} \big )^{\nu } -1 \Big )} \right) \\&= \frac{1}{\lambda +r} \left( \lambda + \frac{r(1-\beta )(\lambda +r)^\nu }{(1-\beta )(\lambda +r)^\nu +\beta \lambda ^\nu } \right) \\&= \frac{(1-\beta )\lambda (\lambda +r)^\nu + \beta \lambda ^{\nu +1} + (1-\beta )r(\lambda +r)^\nu }{(\lambda +r)((1-\beta )(\lambda +r)^\nu + \beta \lambda ^\nu )} \\&= \frac{\beta \lambda ^{\nu +1} + (1-\beta ) (\lambda +r)^{\nu +1}}{\beta (\lambda +r) \lambda ^{\nu } + (1-\beta ) (\lambda +r)^{\nu +1}}, \end{aligned}$$

which proves the claim. \(\square \)

Note that the expression in (40) can be rewritten as

$$\begin{aligned} \lambda \int _0^\infty {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda t} {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-r A_t}) \mathop {}\!\mathrm {d}t = {{\,\mathrm{{\mathbf {E}}}\,}}_0({{\,\mathrm{\mathrm {e}}\,}}^{-rA_T}) = \lambda \cdot \frac{\beta (\lambda + r)^{-\nu -1} + (1-\beta ) \lambda ^{-\nu -1}}{\beta (\lambda + r)^{-\nu } + (1-\beta ) \lambda ^{-\nu }}, \end{aligned}$$

which is equivalent to (4.a) in [4]; see also [18, 31].
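As a numerical cross-check (not part of the proof), one can verify both the equivalence of (40) with the rewritten form above and its reduction in the Brownian case \(\beta =1/2\), \(\nu =-1/2\), where the transform equals \(\sqrt{\lambda /(\lambda +r)}\):

```python
import math

def lt_occupation(lam, r, beta, nu):
    """Formula (40) for the skew two-sided Bessel process."""
    num = beta * lam**(nu + 1) + (1 - beta) * (lam + r)**(nu + 1)
    den = beta * (lam + r) * lam**nu + (1 - beta) * (lam + r)**(nu + 1)
    return num / den

lam, r, beta, nu = 1.3, 0.7, 0.3, -0.4
# The rewritten form equivalent to (4.a) in Barlow-Pitman-Yor:
rewritten = lam * (beta * (lam + r)**(-nu - 1) + (1 - beta) * lam**(-nu - 1)) \
            / (beta * (lam + r)**(-nu) + (1 - beta) * lam**(-nu))
assert abs(lt_occupation(lam, r, beta, nu) - rewritten) < 1e-12
# beta = 1/2, nu = -1/2 corresponds to standard Brownian motion:
assert abs(lt_occupation(lam, r, 0.5, -0.5) - math.sqrt(lam / (lam + r))) < 1e-12
```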

Next we apply Theorem 2 to find a recursive formula for the moments of \(A_1\).

Theorem 3

(Skew two-sided Bessel process) For \(n\ge 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n) = \beta \left( {\begin{array}{c}\nu +n-1\\ n-1\end{array}}\right) - \beta \sum _{k=1}^{n-1} \left( {\begin{array}{c}\nu +k-1\\ k\end{array}}\right) {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{n-k}). \end{aligned}$$
(43)

Proof

Equations (30)–(32) hold, since the skew two-sided Bessel process is self-similar. From (31) and (12), we obtain the first moment,

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1) = \lambda \int _0^\infty G_\lambda (0,y) m_\nu (\mathop {}\!\mathrm {d}y) = \beta . \end{aligned}$$
(44)

Next we calculate the coefficients \(D_k(\lambda )\) as given in (30). Setting \(k=1\) gives

$$\begin{aligned} D_{1}(\lambda )&= -\lambda \int _0^\infty G_\lambda (0,y) {\widehat{f}}(y;\lambda ) m_\nu (\mathop {}\!\mathrm {d}y) \\&= \beta \nu , \end{aligned}$$

where the integral is given by (42) with \(r=0\) (note that the hypergeometric function is equal to 1 in this case). In order to find \(D_{k}(\lambda )\) for \(k>1\), we need to differentiate \({\widehat{f}}(x;\lambda )\) with respect to \(\lambda \). Let

$$\begin{aligned} g(\nu ; x) := x^{-\nu } K_\nu (x), \end{aligned}$$

for which it can be shown by induction that

$$\begin{aligned} g^{(n)}(\nu ;x)&:= \frac{\mathop {}\!\mathrm {d}^n}{\mathop {}\!\mathrm {d}x^n}g(\nu ; x) = \sum _{i=0}^{\lfloor n/2 \rfloor } \frac{(-1)^{n+i} n!}{i! (n-2i)! 2^{i}}x^{-(\nu +i)} K_{\nu +n-i} (x) \\&= \sum _{i=\lceil n/2 \rceil }^{n} \frac{(-1)^{i} n!}{(n-i)! (2i-n)! 2^{n-i}}x^{-(\nu +n-i)} K_{\nu +i} (x). \end{aligned}$$

Writing

$$\begin{aligned} {\widehat{f}}(x;\lambda ) = \frac{2^{\nu +1}}{\varGamma (-\nu )} g(\nu ;x\sqrt{2\lambda }), \end{aligned}$$

and differentiating (see, e.g., [6, Appx. 5] for general formulas) gives

$$\begin{aligned} {\widehat{f}}_\lambda ^{(k)}(x;\lambda )&= \frac{2^{\nu +1}}{\varGamma (-\nu )} \sum _{i=1}^{k} \frac{(-1)^{k-i} (2k-1-i)!}{(i-1)! (k-i)!} \frac{(x\sqrt{2})^{i}}{(2\sqrt{\lambda })^{2k-i}} g^{(i)}(\nu ;x\sqrt{2\lambda }) \\&= \frac{2^{\nu +1}}{\lambda ^{k}\varGamma (-\nu )} \sum _{i=1}^{k} \sum _{j=\lceil i/2 \rceil }^{i} \frac{(-1)^{k+i-j} (2k-1-i)! \,i}{(k-i)! (i-j)! (2j-i)! 2^{2k-j}} \\&\quad \cdot (x\sqrt{2\lambda })^{-(\nu -j)} K_{\nu +j} (x\sqrt{2\lambda }). \end{aligned}$$

Combining this with (11) yields, for \(k=1,2,\dotsc \),

$$\begin{aligned} D_{k+1}(\lambda )&= \frac{(-\lambda )^{k+1}}{k!} \int _0^\infty G_\lambda (0,y) {\widehat{f}}_\lambda ^{(k)}(y;\lambda ) m_\nu (\mathop {}\!\mathrm {d}y)\\&=\frac{-2\beta }{k! \,\varGamma (\nu +1)\varGamma (-\nu )} \sum _{i=1}^{k} \sum _{j=\lceil i/2 \rceil }^{i} \frac{(-1)^{i-j} (2k-1-i)! \,i}{(k-i)! (i-j)! (2j-i)! 2^{2k-j}} \cdot \varDelta _{\nu ,j}, \end{aligned}$$

where

$$\begin{aligned} \varDelta _{\nu ,j}&:= \sqrt{2\lambda } \int _0^\infty (y\sqrt{2\lambda })^{j+1} K_\nu (y\sqrt{2\lambda }) K_{\nu +j} (y\sqrt{2\lambda }) \mathop {}\!\mathrm {d}y \\&= \int _0^\infty z^{j+1} K_\nu (z) K_{\nu +j}(z) \mathop {}\!\mathrm {d}z \\&= \frac{2^{j-1}}{j+1}\varGamma (1-\nu )\varGamma (\nu +1+j), \end{aligned}$$

by an integration formula for modified Bessel functions [11, Eq. (6.576.4)]. Inserting this and changing the order of summation,

$$\begin{aligned} D_{k+1}(\lambda )&=\frac{-\beta \,\varGamma (1-\nu )}{k! \,\varGamma (\nu +1)\varGamma (-\nu )} \sum _{i=1}^{k} \sum _{j=\lceil i/2 \rceil }^{i} \frac{(-1)^{i-j} (2k-1-i)! \,i\, \varGamma (\nu +1+j)}{(k-i)! (i-j)! (2j-i)! (j+1) 2^{2k-2j}} \\&=\frac{\beta \nu }{k! \,\varGamma (\nu +1)} \sum _{j=1}^{k} \frac{\varGamma (\nu +1+j)}{(j+1) 2^{2k-2j}} \sum _{i=j}^{\min (k,2j)} \frac{(-1)^{i-j} (2k-1-i)!\,i }{(k-i)! (i-j)! (2j-i)!} \\&=\frac{\beta }{k \,\varGamma (\nu )} \sum _{j=1}^{k} \frac{\varGamma (\nu +1+j)}{(j+1)! \, 2^{2k-2j}} \sum _{i=j}^{\min (k,2j)} i (-1)^{i-j} \left( {\begin{array}{c}j\\ i-j\end{array}}\right) \left( {\begin{array}{c}2k-1-i\\ k-1\end{array}}\right) . \end{aligned}$$

In the case \(j=k\), the inner sum has only one term, namely

$$\begin{aligned} \sum _{i=k}^{k} i (-1)^{i-k} \left( {\begin{array}{c}k\\ i-k\end{array}}\right) \left( {\begin{array}{c}2k-1-i\\ k-1\end{array}}\right) = k. \end{aligned}$$

When \(j<k\), we show that the inner sum is zero. Note that for any \(k<i<2k\) the summand is zero, since \(0 \le 2k-1-i < k-1\). Thus, when \(j<k\) we can always choose 2j as the upper limit for the summation index, and the inner sum becomes

$$\begin{aligned}&\sum _{i=j}^{2j} i (-1)^{i-j} \left( {\begin{array}{c}j\\ i-j\end{array}}\right) \left( {\begin{array}{c}2k-1-i\\ k-1\end{array}}\right) \\&\quad = \sum _{i=0}^{j} (i+j) (-1)^{i} \left( {\begin{array}{c}j\\ i\end{array}}\right) \left( {\begin{array}{c}2k-1-j-i\\ k-1\end{array}}\right) \\&\quad = j \Biggl [ \sum _{i=0}^{j} (-1)^{i} \left( {\begin{array}{c}j\\ i\end{array}}\right) \left( {\begin{array}{c}2k-1-j-i\\ k-1\end{array}}\right) \\&\qquad + \sum _{i=1}^{j} (-1)^{i} \left( {\begin{array}{c}j-1\\ i-1\end{array}}\right) \left( {\begin{array}{c}2k-1-j-i\\ k-1\end{array}}\right) \Biggr ] \\&\quad = j \left( \left( {\begin{array}{c}2k-2j-1\\ k-j-1\end{array}}\right) - \left( {\begin{array}{c}2k-2j-1\\ k-j\end{array}}\right) \right) \\&\quad = 0, \end{aligned}$$

using, in the third step, the identity [10, Eq. (3.49)]

$$\begin{aligned} \sum _{k=0}^{n} (-1)^{k} \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( {\begin{array}{c}x-k\\ m\end{array}}\right) = \left( {\begin{array}{c}x-n\\ m-n\end{array}}\right) , \end{aligned}$$

valid for \(n,m\in {{\,\mathrm{{\mathbb {N}}}\,}}\) and, in the case \(m < n\), for \(x\notin \{m, \dotsc , n-1\}\). From this, we conclude that only the term corresponding to \(j=k\) remains in the expression for \(D_{k+1}(\lambda )\), which thus simplifies to

$$\begin{aligned} D_{k+1}(\lambda ) = \frac{\beta \,\varGamma (\nu +1+k)}{(k+1)! \,\varGamma (\nu )} = \beta \left( {\begin{array}{c}\nu +k\\ k+1\end{array}}\right) . \end{aligned}$$
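The two combinatorial facts used here, namely that the inner sum equals k when \(j=k\) and vanishes when \(j<k\), can be confirmed for small parameters with a short stdlib sketch (the function name is ours):

```python
import math

def inner_sum(j, k):
    """Inner sum in the expression for D_{k+1}(lambda) above."""
    return sum(i * (-1)**(i - j) * math.comb(j, i - j)
               * math.comb(2 * k - 1 - i, k - 1)
               for i in range(j, min(k, 2 * j) + 1))

for k in range(1, 9):
    assert inner_sum(k, k) == k            # only the j = k term survives
    for j in range(1, k):
        assert inner_sum(j, k) == 0        # all j < k terms cancel
```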

Reducing the index k by 1 and recalling that \(D_1(\lambda )=\beta \nu \), we conclude that

$$\begin{aligned} D_{k}(\lambda ) = \beta \left( {\begin{array}{c}\nu +k-1\\ k\end{array}}\right) , \end{aligned}$$
(45)

for all \(k\ge 1\). Inserting this and (44) into (32) results in the recursion

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n)&= \beta + \beta \sum _{k=1}^{n-1} \left( 1 - {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{n-k}) \right) \left( {\begin{array}{c}\nu +k-1\\ k\end{array}}\right) \\&= \beta \sum _{k=0}^{n-1} \left( {\begin{array}{c}\nu +k-1\\ k\end{array}}\right) - \beta \sum _{k=1}^{n-1} \left( {\begin{array}{c}\nu +k-1\\ k\end{array}}\right) {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{n-k}) \\&= \beta \left( {\begin{array}{c}\nu +n-1\\ n-1\end{array}}\right) - \beta \sum _{k=1}^{n-1} \left( {\begin{array}{c}\nu +k-1\\ k\end{array}}\right) {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{n-k}), \end{aligned}$$

where the last step follows by the identity [10, Eq. (1.49)]

$$\begin{aligned} \sum _{k=0}^n \left( {\begin{array}{c}x+k\\ k\end{array}}\right) = \left( {\begin{array}{c}x+n+1\\ n\end{array}}\right) . \end{aligned}$$
(46)

This proves the theorem. \(\square \)
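The recursion (43) is straightforward to implement. For \(\beta =1/2\) and \(\nu =-1/2\), corresponding to standard Brownian motion, it must reproduce the arcsine-law moments \(\left( {\begin{array}{c}2n\\ n\end{array}}\right) 4^{-n}\); a sketch using only the standard library (the helper names are ours):

```python
import math

def gbinom(x, k):
    """Generalized binomial coefficient C(x, k) via the gamma function."""
    return math.gamma(x + 1) / (math.factorial(k) * math.gamma(x - k + 1))

def bessel_moments(beta, nu, n_max):
    """Moments E_0[A_1^n], n = 0..n_max, from the recursion (43)."""
    m = [1.0]
    for n in range(1, n_max + 1):
        val = beta * gbinom(nu + n - 1, n - 1)
        val -= beta * sum(gbinom(nu + k - 1, k) * m[n - k] for k in range(1, n))
        m.append(val)
    return m

m = bessel_moments(0.5, -0.5, 6)
for n in range(1, 7):
    assert abs(m[n] - math.comb(2 * n, n) / 4**n) < 1e-12
```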

Corollary 2

The mapping \(\beta \mapsto {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n)\) is continuous and increasing.

Proof

Since \(\nu \in (-1,0)\), it follows from the properties of the gamma function that

$$\begin{aligned} \left( {\begin{array}{c}\nu +n-1\\ n-1\end{array}}\right) = \frac{\varGamma (\nu +n)}{\varGamma (n)\varGamma (\nu +1)}>0, \quad \left( {\begin{array}{c}\nu +n-1\\ n\end{array}}\right) = \frac{\varGamma (\nu +n)}{\varGamma (n+1)\varGamma (\nu )}<0, \end{aligned}$$

for all \(n\in {{\,\mathrm{{\mathbb {Z}}}\,}}_+\). The result then immediately follows from (43) by induction. \(\square \)

In the following, we use \(\left[ \begin{array}{c}{n}\\ {k}\end{array}\right] \) for unsigned Stirling numbers of the first kind and \(\left\{ \begin{array}{c}{n}\\ {k}\end{array}\right\} \) for Stirling numbers of the second kind. They are defined recursively through

$$\begin{aligned} \left[ \begin{array}{c}{n+1}\\ {k}\end{array}\right] = n \left[ \begin{array}{c}{n}\\ {k}\end{array}\right] + \left[ \begin{array}{c}{n}\\ {k-1}\end{array}\right] \quad \text {and} \quad \left\{ \begin{array}{c}{n+1}\\ {k}\end{array}\right\} = k \left\{ \begin{array}{c}{n}\\ {k}\end{array}\right\} + \left\{ \begin{array}{c}{n}\\ {k-1}\end{array}\right\} , \end{aligned}$$
(47)

for \(n,k\in {{\,\mathrm{{\mathbb {Z}}}\,}}\), with initial conditions

$$\begin{aligned} \left[ \begin{array}{c}{0}\\ {0}\end{array}\right] = \left\{ \begin{array}{c}{0}\\ {0}\end{array}\right\} = 1, \,\quad \left[ \begin{array}{c}{n}\\ {0}\end{array}\right] = \left[ \begin{array}{c}{0}\\ {n}\end{array}\right] = \left\{ \begin{array}{c}{n}\\ {0}\end{array}\right\} = \left\{ \begin{array}{c}{0}\\ {n}\end{array}\right\} = 0, \quad n\ne 0. \end{aligned}$$

The combinatorial interpretation of these numbers is that \(\left[ \begin{array}{c}{n}\\ {k}\end{array}\right] \) counts the number of permutations of n elements with k disjoint cycles, whereas \(\left\{ \begin{array}{c}{n}\\ {k}\end{array}\right\} \) is the number of ways to partition a set of n elements into k nonempty subsets. The notation for Stirling numbers varies between different authors; we use the notation recommended in [17].
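The recursions (47) translate directly into code; the following sketch (function names ours) also checks the combinatorial interpretations against small known values:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1(n, k):
    """Unsigned Stirling numbers of the first kind, via (47)."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return (n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind, via (47)."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

assert stirling1(4, 2) == 11   # permutations of 4 elements with 2 cycles
assert stirling2(4, 2) == 7    # partitions of a 4-set into 2 nonempty blocks
# The cycle counts over all k sum to the number of permutations, n!:
assert sum(stirling1(5, k) for k in range(6)) == math.factorial(5)
```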

From the recursion in (43), we derive the following explicit expression for the moments of \(A_1\).

Theorem 4

(Skew two-sided Bessel process) For any \(n\ge 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n) = \sum _{k=0}^{n-1}\sum _{j=0}^{k} \frac{ (-1)^{j} j!}{(n-1)!} \left[ \begin{array}{c}{n}\\ {k+1}\end{array}\right] \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} \nu ^k \beta ^{j+1}. \end{aligned}$$
(48)
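Before giving the proof, we note that (48) can be compared numerically with the recursion (43); the following sketch (helper names ours) checks that the two expressions agree for sample values of \(\beta \) and \(\nu \):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, k):
    """Unsigned Stirling numbers of the first kind, recursion (47)."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

@lru_cache(maxsize=None)
def s2(n, k):
    """Stirling numbers of the second kind, recursion (47)."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return k * s2(n - 1, k) + s2(n - 1, k - 1)

def gbinom(x, k):
    """Generalized binomial coefficient via the gamma function."""
    return math.gamma(x + 1) / (math.factorial(k) * math.gamma(x - k + 1))

def moment_recursive(beta, nu, n):
    """E_0[A_1^n] from the recursion (43)."""
    m = [1.0]
    for j in range(1, n + 1):
        v = beta * gbinom(nu + j - 1, j - 1)
        v -= beta * sum(gbinom(nu + i - 1, i) * m[j - i] for i in range(1, j))
        m.append(v)
    return m[n]

def moment_explicit(beta, nu, n):
    """E_0[A_1^n] from the closed form (48)."""
    return sum((-1)**j * math.factorial(j) / math.factorial(n - 1)
               * s1(n, k + 1) * s2(k + 1, j + 1) * nu**k * beta**(j + 1)
               for k in range(n) for j in range(k + 1))

for n in range(1, 7):
    assert abs(moment_recursive(0.3, -0.4, n)
               - moment_explicit(0.3, -0.4, n)) < 1e-10
```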

In the proof of Theorem 4, we need the following result.

Lemma 2

For any \(n, m, l \in {{\,\mathrm{{\mathbb {N}}}\,}}\),

$$\begin{aligned} \left[ \begin{array}{c}{n+1}\\ {l+m+1}\end{array}\right] \left( {\begin{array}{c}l+m\\ l\end{array}}\right) = \sum _{k=l}^{n-m} \left[ \begin{array}{c}{k+1}\\ {l+1}\end{array}\right] \left[ \begin{array}{c}{n-k}\\ {m}\end{array}\right] \left( {\begin{array}{c}n\\ k\end{array}}\right) . \end{aligned}$$
(49)

Proof

Using the well-known and closely related identity [12, Eq. (6.29)]

$$\begin{aligned} \left[ \begin{array}{c}{n}\\ {l+m}\end{array}\right] \left( {\begin{array}{c}l+m\\ l\end{array}}\right) = \sum _{k=l}^{n-m} \left[ \begin{array}{c}{k}\\ {l}\end{array}\right] \left[ \begin{array}{c}{n-k}\\ {m}\end{array}\right] \left( {\begin{array}{c}n\\ k\end{array}}\right) , \end{aligned}$$
(50)

we prove (49) by induction. It is easy to verify that (49) holds for any \(m,l\in {{\,\mathrm{{\mathbb {N}}}\,}}\) when \(n=0\) (both sides are zero except when \(m=l=0\), in which case both sides are equal to 1). Assume that (49) holds for \(n=N\) and all \(m,l\in {{\,\mathrm{{\mathbb {N}}}\,}}\). Now let \(n=N+1\) and let m and l be arbitrary numbers in \({{\,\mathrm{{\mathbb {N}}}\,}}\). Using the recursion in (47), we get that

$$\begin{aligned}&\left[ \begin{array}{c}{N+2}\\ {l+m+1}\end{array}\right] \left( {\begin{array}{c}l+m\\ l\end{array}}\right) \\&\quad = (N+1)\left[ \begin{array}{c}{N+1}\\ {l+m+1}\end{array}\right] \left( {\begin{array}{c}l+m\\ l\end{array}}\right) + \left[ \begin{array}{c}{N+1}\\ {l+m}\end{array}\right] \left( {\begin{array}{c}l+m\\ l\end{array}}\right) \\&\quad = (N+1)\sum _{k=l}^{N-m} \left[ \begin{array}{c}{k+1}\\ {l+1}\end{array}\right] \left[ \begin{array}{c}{N-k}\\ {m}\end{array}\right] \left( {\begin{array}{c}N\\ k\end{array}}\right) + \!\sum _{k=l}^{N+1-m} \left[ \begin{array}{c}{k}\\ {l}\end{array}\right] \left[ \begin{array}{c}{N+1-k}\\ {m}\end{array}\right] \left( {\begin{array}{c}N+1\\ k\end{array}}\right) \\&\quad = \left[ \begin{array}{c}{N+1-l}\\ {m}\end{array}\right] \left( {\begin{array}{c}N+1\\ l\end{array}}\right) + \!\sum _{k=l+1}^{N+1-m} \left( k\left[ \begin{array}{c}{k}\\ {l+1}\end{array}\right] + \left[ \begin{array}{c}{k}\\ {l}\end{array}\right] \right) \left[ \begin{array}{c}{N+1-k}\\ {m}\end{array}\right] \left( {\begin{array}{c}N+1\\ k\end{array}}\right) \\&\quad = \sum _{k=l}^{N+1-m} \left[ \begin{array}{c}{k+1}\\ {l+1}\end{array}\right] \left[ \begin{array}{c}{N+1-k}\\ {m}\end{array}\right] \left( {\begin{array}{c}N+1\\ k\end{array}}\right) , \end{aligned}$$

where in the second step we have used both the induction assumption and (50). Thus, (49) holds also for \(n=N+1\) and all \(m,l\in {{\,\mathrm{{\mathbb {N}}}\,}}\). The result follows by induction. \(\square \)
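Before putting (49) to use, it is reassuring to confirm it numerically for small parameter values; a self-contained sketch (with the hypothetical helper `s1` computing the unsigned first-kind numbers from the recursion in (47)):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def s1(n, k):
    # unsigned Stirling numbers of the first kind [n, k]
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

def lemma2_holds(n, m, l):
    """Check the identity (49) for a single triple (n, m, l)."""
    lhs = s1(n + 1, l + m + 1) * comb(l + m, l)
    rhs = sum(s1(k + 1, l + 1) * s1(n - k, m) * comb(n, k)
              for k in range(l, n - m + 1))
    return lhs == rhs
```

Note that for \(n < l + m\) the sum on the right-hand side is empty and both sides vanish, matching the base case of the induction.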

Proof of Theorem 4

The result is proved by induction from Theorem 3. The identity (48) holds for \(n=1\), since \({{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1)=\beta \). Assume that (48) holds for all \(n\in \{1,2,\dotsc ,N\}\). We wish to prove that it then also holds for \(n=N+1\), that is,

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{N+1})&= \sum _{k=0}^{N}\sum _{j=0}^{k} \frac{ (-1)^{j} j!}{N!} \left[ \begin{array}{c}{N+1}\\ {k+1}\end{array}\right] \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} \nu ^k \beta ^{j+1} \nonumber \\&= \frac{\beta }{N!} \sum _{k=0}^{N} \left[ \begin{array}{c}{N+1}\\ {k+1}\end{array}\right] \nu ^k + \sum _{k=1}^{N}\sum _{j=1}^{k} \frac{ (-1)^{j} j!}{N!} \left[ \begin{array}{c}{N+1}\\ {k+1}\end{array}\right] \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} \nu ^k \beta ^{j+1} \nonumber \\&= \beta \left( {\begin{array}{c}\nu +N\\ N\end{array}}\right) - \frac{1}{N!} \sum _{k=1}^{N}\sum _{j=1}^{k} j! \left[ \begin{array}{c}{N+1}\\ {k+1}\end{array}\right] \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} \nu ^k (-\beta )^{j+1}. \end{aligned}$$
(51)

Here the last step follows from the identity [12, Eq. (6.11)]

$$\begin{aligned} \sum _{k=0}^{n} \left[ \begin{array}{c}{n}\\ {k}\end{array}\right] x^k = x(x+1)\cdots (x+n-1) = \frac{\varGamma (x+n)}{\varGamma (x)}, \end{aligned}$$

which gives that

$$\begin{aligned} \left( {\begin{array}{c}\nu +n\\ n\end{array}}\right) = \frac{1}{n!} \frac{\varGamma (\nu +n+1)}{\varGamma (\nu +1)} = \frac{1}{n!} \cdot \frac{1}{\nu } \sum _{k=1}^{n+1} \left[ \begin{array}{c}{n+1}\\ {k}\end{array}\right] \nu ^k = \frac{1}{n!} \sum _{k=0}^{n} \left[ \begin{array}{c}{n+1}\\ {k+1}\end{array}\right] \nu ^k.\qquad \end{aligned}$$
(52)

On the other hand, by the recursive formula (43) we have

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{N+1}) = \beta \left( {\begin{array}{c}\nu +N\\ N\end{array}}\right) - \beta \sum _{m=1}^{N} \left( {\begin{array}{c}\nu +N-m\\ N-m+1\end{array}}\right) {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{m}). \end{aligned}$$
(53)

Applying (52), the binomial coefficient in the summand can be rewritten as

$$\begin{aligned} \left( {\begin{array}{c}\nu +N-m\\ N-m+1\end{array}}\right)&= \frac{\nu }{N-m+1} \left( {\begin{array}{c}\nu +N-m\\ N-m\end{array}}\right) \\&= \frac{1}{(N-m+1)!} \sum _{k=m}^{N} \left[ \begin{array}{c}{N-m+1}\\ {N-k+1}\end{array}\right] \nu ^{N-k+1}, \end{aligned}$$

and using the induction assumption it follows that

$$\begin{aligned}&\beta \sum _{m=1}^{N} \left( {\begin{array}{c}\nu +N-m\\ N-m+1\end{array}}\right) {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{m}) \\&\quad = \sum _{m=1}^{N} \left( {\begin{array}{c}\nu +N-m\\ N-m+1\end{array}}\right) \sum _{i=1}^{m}\sum _{j=1}^{i} \frac{(j-1)!}{(m-1)!} \left[ \begin{array}{c}{m}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \nu ^{i-1} (-\beta )^{j+1} \\&\quad = \sum _{m=1}^{N}\sum _{i=1}^{m}\sum _{j=1}^{i} \sum _{k=m}^N \frac{(j-1)!}{N!} \left( {\begin{array}{c}N\\ m-1\end{array}}\right) \left[ \begin{array}{c}{N-m+1}\\ {N-k+1}\end{array}\right] \left[ \begin{array}{c}{m}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \nu ^{N-k+i} (-\beta )^{j+1} \\&\quad = \frac{1}{N!} \sum _{k=1}^{N}\sum _{j=1}^{k} \nu ^{k} (-\beta )^{j+1} (j-1)! \sum _{i=j}^{k} \sum _{m=i}^{N-k+i} \left( {\begin{array}{c}N\\ m-1\end{array}}\right) \left[ \begin{array}{c}{N-m+1}\\ {k-i+1}\end{array}\right] \left[ \begin{array}{c}{m}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} . \end{aligned}$$

When this is inserted into (53), the resulting expression must be shown to equal (51). After subtracting the identical first terms, we are left with polynomials in \(\nu \) and \(\beta \) on both sides, and thus it suffices to show that the coefficients are equal. Indeed,

$$\begin{aligned}&(j-1)! \sum _{i=j}^{k} \sum _{m=i}^{N-k+i} \left( {\begin{array}{c}N\\ m-1\end{array}}\right) \left[ \begin{array}{c}{N-m+1}\\ {k-i+1}\end{array}\right] \left[ \begin{array}{c}{m}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \\&\quad = (j-1)! \sum _{i=j}^{k} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \sum _{m=i-1}^{N-(k-i+1)} \left( {\begin{array}{c}N\\ m\end{array}}\right) \left[ \begin{array}{c}{N-m}\\ {k-i+1}\end{array}\right] \left[ \begin{array}{c}{m+1}\\ {i}\end{array}\right] \\&\quad = (j-1)! \sum _{i=j}^{k} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \left[ \begin{array}{c}{N+1}\\ {k+1}\end{array}\right] \left( {\begin{array}{c}k\\ i-1\end{array}}\right) \\&\quad = j! \left[ \begin{array}{c}{N+1}\\ {k+1}\end{array}\right] \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} , \end{aligned}$$

where the second step follows from Lemma 2. The last step holds since

$$\begin{aligned} \sum _{i=j}^{k} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \left( {\begin{array}{c}k\\ i-1\end{array}}\right)&= \sum _{i=j}^{k} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \left( \left( {\begin{array}{c}k+1\\ i\end{array}}\right) - \left( {\begin{array}{c}k\\ i\end{array}}\right) \right) \\&= \sum _{i=j}^{k+1} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \left( {\begin{array}{c}k+1\\ i\end{array}}\right) - \left\{ \begin{array}{c}{k+1}\\ {j}\end{array}\right\} - \sum _{i=j}^{k} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \left( {\begin{array}{c}k\\ i\end{array}}\right) \\&= \left\{ \begin{array}{c}{k+2}\\ {j+1}\end{array}\right\} - \left\{ \begin{array}{c}{k+1}\\ {j}\end{array}\right\} - \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} \\&= (j+1) \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} - \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} \\&= j \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} , \end{aligned}$$

using both (47) and the identity [12, Eq. (6.15)]

$$\begin{aligned} \sum _{i=j}^{k} \left\{ \begin{array}{c}{i}\\ {j}\end{array}\right\} \left( {\begin{array}{c}k\\ i\end{array}}\right) = \left\{ \begin{array}{c}{k+1}\\ {j+1}\end{array}\right\} . \end{aligned}$$

This completes the proof for \(n=N+1\). By induction, (48) holds for all \(n\ge 1\). \(\square \)
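Both (48) and the recursion (53) are polynomial in \(\nu \) and \(\beta \), so the theorem can be sanity-checked with exact rational arithmetic at arbitrary parameter values; a sketch (all helper names are ours):

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def s1(n, k):
    # unsigned Stirling numbers of the first kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

@lru_cache(maxsize=None)
def s2(n, k):
    # Stirling numbers of the second kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * s2(n - 1, k) + s2(n - 1, k - 1)

def gbinom(x, n):
    """Generalized binomial coefficient binom(x, n) for rational x."""
    out = Fraction(1)
    for i in range(n):
        out *= x - i
    return out / factorial(n)

def moment_explicit(n, nu, beta):
    """E_0(A_1^n) from the explicit formula (48)."""
    return sum(Fraction((-1) ** j * factorial(j), factorial(n - 1))
               * s1(n, k + 1) * s2(k + 1, j + 1) * nu ** k * beta ** (j + 1)
               for k in range(n) for j in range(k + 1))

def moment_recursive(n, nu, beta):
    """E_0(A_1^n) from the recursion in the form (53); for n = 1 the
    empty sum gives E_0(A_1) = beta."""
    N = n - 1
    return beta * gbinom(nu + N, N) - beta * sum(
        gbinom(nu + N - m, N - m + 1) * moment_recursive(m, nu, beta)
        for m in range(1, N + 1))
```

Since the two expressions agree as polynomials, testing them at, say, \(\nu =-1/3\), \(\beta =2/5\) exercises the full identity.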

Remark 4

The distribution of \(A_1\) for a skew two-sided Bessel process starting from 0 has been characterized in [4]. The moments of \(A_1\) can also be calculated numerically for particular values of \(\beta \) and \(\nu \) by integrating the known density [31], for \(x\in (0,1)\),

$$\begin{aligned} f(x) = \frac{\frac{1}{\pi }\sin (-\nu \pi ) \beta (1-\beta )(x(1-x))^{-\nu -1}}{\beta ^2 (1-x)^{-2\nu }+(1-\beta )^2 x^{-2\nu } + 2\beta (1-\beta ) (x(1-x))^{-\nu } \cos (-\nu \pi )}. \end{aligned}$$
(54)

We call the distribution induced by this density a Lamperti distribution, as it was first found in [18]. The result in (48) does not, however, seem to be easily obtainable through analytic integration. For the special case of skew Brownian motion, see Remark 6.

6.2 Skew Brownian Motion

Let now \((X_t)_{t\ge 0}\) be a skew Brownian motion, see Example 2. This corresponds to a skew two-sided Bessel process with \(\nu =-1/2\), and thus it follows from (45) that

$$\begin{aligned} D_k(\lambda ) = \beta \left( {\begin{array}{c}k-\frac{3}{2}\\ k\end{array}}\right) = \frac{-\beta }{2^{2k}(2k-1)} \left( {\begin{array}{c}2k\\ k\end{array}}\right) , \end{aligned}$$
(55)

and the recursion in (43) becomes

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n)&= \frac{\beta }{2^{2n-2}}\left( {\begin{array}{c}2n-2\\ n-1\end{array}}\right) + \beta \sum _{k=1}^{n-1} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^{n-k}) \frac{1}{2^{2k}(2k-1)} \left( {\begin{array}{c}2k\\ k\end{array}}\right) . \end{aligned}$$
(56)

Since the recursion has already been solved for skew Bessel processes, the moments of \(A_1\) for skew Brownian motion can be obtained from Theorem 4.

Theorem 5

(Skew Brownian motion) For any \(n\ge 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n) = \sum _{k=0}^{n-1} \left( {\begin{array}{c}n-1+k\\ k\end{array}}\right) \frac{\beta ^{n-k}}{2^{n+k-1}}. \end{aligned}$$
(57)

Proof

Substituting \(\nu =-1/2\) in (48) yields

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n)&= \sum _{i=1}^{n}\sum _{k=1}^{i} \frac{ (-1)^{k-1} (k-1)!}{(n-1)!} \left[ \begin{array}{c}{n}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {k}\end{array}\right\} \left( -\frac{1}{2}\right) ^{i-1} \beta ^{k} \\&= \sum _{k=1}^{n} \frac{ (-1)^{k-n} \beta ^{k} (k-1)!}{2^{n-1}(n-1)!} \sum _{i=k}^{n} \left[ \begin{array}{c}{n}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {k}\end{array}\right\} (-2)^{n-i} \\&= \sum _{k=1}^{n} \frac{ \beta ^{k}}{2^{2n-k-1}}\left( {\begin{array}{c}2n-k-1\\ n-k\end{array}}\right) , \end{aligned}$$

where in the last step the identity [33, Eq. (18)]

$$\begin{aligned} \sum _{i=k}^{n} \left[ \begin{array}{c}{n}\\ {i}\end{array}\right] \left\{ \begin{array}{c}{i}\\ {k}\end{array}\right\} (-2)^{n-i} = (-1)^{n-k}\frac{(2n-k-1)!}{2^{n-k} (k-1)!(n-k)!} \end{aligned}$$
(58)

has been applied. Reversing the order of summation now gives the result. \(\square \)
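Both the reduction of (48) at \(\nu =-1/2\) and the identity (58) can be confirmed with exact arithmetic; a sketch (helper names are ours):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def s1(n, k):
    # unsigned Stirling numbers of the first kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return (n - 1) * s1(n - 1, k) + s1(n - 1, k - 1)

@lru_cache(maxsize=None)
def s2(n, k):
    # Stirling numbers of the second kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * s2(n - 1, k) + s2(n - 1, k - 1)

def moment_48(n, nu, beta):
    """E_0(A_1^n) for the skew two-sided Bessel process, formula (48)."""
    return sum(Fraction((-1) ** j * factorial(j), factorial(n - 1))
               * s1(n, k + 1) * s2(k + 1, j + 1) * nu ** k * beta ** (j + 1)
               for k in range(n) for j in range(k + 1))

def moment_57(n, beta):
    """E_0(A_1^n) for skew Brownian motion, formula (57)."""
    return sum(comb(n - 1 + k, k) * beta ** (n - k) * Fraction(1, 2 ** (n + k - 1))
               for k in range(n))

def identity_58(n, k):
    """Check the Stirling-number identity (58) for one pair (n, k)."""
    lhs = sum(s1(n, i) * s2(i, k) * (-2) ** (n - i) for i in range(k, n + 1))
    rhs = (-1) ** (n - k) * Fraction(
        factorial(2 * n - k - 1),
        2 ** (n - k) * factorial(k - 1) * factorial(n - k))
    return lhs == rhs
```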

Remark 5

The result in Theorem 5 was first proved by solving the recursive formula (56) in a different way than in the proof of Theorem 4. However, the authors later became aware that the identity (58) already appears in [33], and thus the result follows from (48), as shown above. The earlier, alternative method will be presented, with additional comments, in a future publication.

Corollary 3

(Standard Brownian motion) For \(\beta =1/2\) and \(n\ge 1\), it holds that

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n) = \frac{1}{2^{2n}} \left( {\begin{array}{c}2n\\ n\end{array}}\right) , \end{aligned}$$

and, hence, \(A_1\) has the arcsine law.

Proof

The result follows from (57) when inserting \(\beta =1/2\) and applying (46). Since \(A_1\) is bounded, its moments determine the distribution uniquely, and we recover Lévy’s arcsine law for \(A_1\). \(\square \)

Remark 6

The distribution function of \(A_1\) for skew Brownian motion is given in [31, Eq. (2.5), p. 158] as

$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}_0(A_1\le x) = \frac{2}{\pi } \arcsin \left( \sqrt{\frac{x}{x+(\beta /(1-\beta ))^2(1-x)}} \right) , \quad x\in [0,1]. \end{aligned}$$

This yields the density (see [32, p. 782] and [2, p. 196])

$$\begin{aligned} f_{A_1}(x) = \frac{\beta (1-\beta )}{\pi \sqrt{x(1-x)} (\beta ^2 +x(1-2\beta ))}, \quad x\in (0,1), \end{aligned}$$

which corresponds to (54) with \(\nu =-1/2\). From this density, it is possible to calculate the moments of \(A_1\) by integration. However, using this approach we have not been able to obtain the expression in (57).
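Although we have not obtained (57) by analytic integration, the moments can be compared with this density numerically. The sketch below (helper names and the step count are ours) substitutes \(x=\sin ^2\theta \), which removes the endpoint singularities of the density, and then applies a plain midpoint rule:

```python
import math
from math import comb

def moment_formula(n, beta):
    """E_0(A_1^n) for skew Brownian motion, formula (57)."""
    return sum(comb(n - 1 + k, k) * beta ** (n - k) / 2 ** (n + k - 1)
               for k in range(n))

def moment_by_integration(n, beta, steps=100_000):
    """Integrate x^n f_{A_1}(x) over (0, 1).

    Substituting x = sin^2(theta) gives
    (2 beta (1 - beta) / pi) * int_0^{pi/2} sin^{2n}(theta)
        / (beta^2 + (1 - 2 beta) sin^2(theta)) d(theta),
    whose integrand is smooth, so a midpoint rule suffices."""
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        s2 = math.sin((i + 0.5) * h) ** 2
        total += s2 ** n / (beta ** 2 + (1 - 2 * beta) * s2)
    return 2 * beta * (1 - beta) / math.pi * total * h
```

For \(\beta =1/2\) the denominator is constant and one recovers the arcsine moments, e.g. \(3/8\) for \(n=2\).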

6.3 Oscillating Brownian Motion

Let \(({{\widetilde{X}}}_t)_{t\ge 0}\) be an oscillating Brownian motion, see Example 3. From (15), it is easily seen that the law of \(A_1\) is the same as the corresponding law for a skew Brownian motion with skewness parameter \(\beta =\sigma _{-}/(\sigma _{+}+\sigma _{-})\). However, here we wish to demonstrate the use of Theorem 2 and therefore give a proof of the following result based on formula (32) therein.

Theorem 6

(Oscillating Brownian motion) For any \(n\ge 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1^n) = \sum _{k=0}^{n-1} \left( {\begin{array}{c}n-1+k\\ k\end{array}}\right) \frac{1}{2^{n+k-1}} \left( \frac{\sigma _{-}}{\sigma _{+}+\sigma _{-}} \right) ^{n-k}. \end{aligned}$$

Proof

For oscillating Brownian motion,

$$\begin{aligned} {\widehat{f}}(x;\lambda ) = {{\,\mathrm{{\mathbf {E}}}\,}}_x({{\,\mathrm{\mathrm {e}}\,}}^{-\lambda H_0}) = {{\,\mathrm{\mathrm {e}}\,}}^{-\frac{x\sqrt{2\lambda }}{\sigma _+}}, \quad x\ge 0. \end{aligned}$$

Recalling (14), Eq. (30) then becomes

$$\begin{aligned} {\widetilde{D}}_k(\lambda )&= \frac{(-\lambda )^k}{(k-1)!} \int _0^\infty \frac{\sigma _+ \sigma _-}{(\sigma _+ + \sigma _-)\sqrt{2\lambda }} {{\,\mathrm{\mathrm {e}}\,}}^{-\frac{y\sqrt{2\lambda }}{\sigma _+}} \frac{\mathop {}\!\mathrm {d}^{k-1}}{\mathop {}\!\mathrm {d}\lambda ^{k-1}}\biggl ( {{\,\mathrm{\mathrm {e}}\,}}^{-\frac{y\sqrt{2\lambda }}{\sigma _+}} \biggr ) \frac{2}{\sigma _{+}^2} \mathop {}\!\mathrm {d}y \\&= \frac{2 \sigma _-}{\sigma _+ + \sigma _-} \frac{(-\lambda )^k}{(k-1)!} \int _0^\infty \frac{1}{2\sqrt{2\lambda }} {{\,\mathrm{\mathrm {e}}\,}}^{-z\sqrt{2\lambda }} \frac{\mathop {}\!\mathrm {d}^{k-1}}{\mathop {}\!\mathrm {d}\lambda ^{k-1}} \bigl ( {{\,\mathrm{\mathrm {e}}\,}}^{-z\sqrt{2\lambda }} \bigr ) 2 \mathop {}\!\mathrm {d}z \\&= \frac{\sigma _-}{\sigma _+ + \sigma _-} \frac{1}{\beta } D_k(\lambda ), \end{aligned}$$

where \(D_k(\lambda )\) is as for skew Brownian motion. In other words, by (55),

$$\begin{aligned} {\widetilde{D}}_k(\lambda ) = \left( \frac{\sigma _-}{\sigma _+ + \sigma _-}\right) \frac{-1}{2^{2k}(2k-1)} \left( {\begin{array}{c}2k\\ k\end{array}}\right) . \end{aligned}$$

Since

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0(A_1) = \lambda \int _0^\infty {\widetilde{G}}_\lambda (0,y) m_\nu (\mathop {}\!\mathrm {d}y) = \frac{\sigma _-}{\sigma _+ + \sigma _-}, \end{aligned}$$

we thus arrive at the same recursion as in (56), except with \(\sigma _{-}/(\sigma _{+}+\sigma _{-})\) instead of \(\beta \). Thus, the result follows from Theorem 5. \(\square \)

6.4 Brownian Spider

For a positive integer n, let \(I_1,\dotsc ,I_n\) be half-lines in \({{\,\mathrm{{\mathbb {R}}}\,}}^2\) meeting at the origin. Such a configuration can be seen as a graph G, say, with one vertex and n infinite edges. A Brownian spider, also called Walsh’s Brownian motion on a finite number of rays, see [3, 4, 19, 29, 30], is a diffusion on G such that when away from the origin on a ray and before hitting the origin, it behaves on that ray like an ordinary one-dimensional Brownian motion, but when the process reaches the origin, one of the half-lines is chosen randomly for its “next” excursion. A rigorous construction of the probability measure governing a Brownian spider can be carried out using excursion theory; see [4].

Let \((X_t)_{t\ge 0}\) denote a Brownian spider living on G, and for every \(i\in \{1,2,\dotsc ,n\}\) let \(p_i\) be the probability of choosing half-line \(I_i\) when at the origin. For any subset \({\mathcal {I}}\subseteq \{1,2,\dotsc ,n\}\), consider the occupation time of X on \(\{I_i:\, i\in {\mathcal {I}}\},\) i.e.,

$$\begin{aligned} A_t^{\mathcal {I}} = {{\,\mathrm{Leb}\,}}\left\{ s\in [0,t]: X_s\in \bigcup _{i\in {\mathcal {I}}} I_i\right\} . \end{aligned}$$

From the excursion-theoretic construction of the Brownian spider, it can be deduced that \(A_t^{\mathcal {I}}\) has the same law as the occupation time on \({{\,\mathrm{{\mathbb {R}}}\,}}_+\) for a skew Brownian motion with the skewness parameter \(\beta := \sum _{i\in {\mathcal {I}}} p_i\). The following result is thus immediate from (57).

Theorem 7

(Brownian spider) For any \(n\ge 1\),

$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}_0((A_1^{\mathcal {I}})^n) = \sum _{k=0}^{n-1} \left( {\begin{array}{c}n-1+k\\ k\end{array}}\right) \frac{1}{2^{n+k-1}} \bigg ( \sum _{i\in {\mathcal {I}}} p_i \bigg )^{n-k}. \end{aligned}$$

We refer also to the recent paper [34] for results concerning distributions of occupation times for diffusions on multiple rays.

6.5 Sticky Brownian Motion

In this section, we highlight the use of the moment formula for a process that does not have the scaling property. Moreover, we wish to understand how the presence of a sticky point affects the formula. To this end, let \(X=(X_t)_{t\ge 0}\) be a Brownian motion sticky at 0 with stickiness parameter \(\gamma >0\), see Example 4. Since X behaves like a standard Brownian motion on excursions from 0, the Laplace transform of the first hitting time of 0 when \(X_0=x>0\) is the same as for standard Brownian motion, i.e.,

$$\begin{aligned} {\widehat{f}}(x;\lambda ) :={{\,\mathrm{{\mathbf {E}}}\,}}_x\left( {{\,\mathrm{\mathrm {e}}\,}}^{-\lambda H_0}\right) = {{\,\mathrm{\mathrm {e}}\,}}^{-x\sqrt{2\lambda }}. \end{aligned}$$

In this section, besides \(A^X_t\) defined in (6), we also consider

$$\begin{aligned} B_t^X := {{\,\mathrm{Leb}\,}}\{s\in [0,t]:X_s>0\}. \end{aligned}$$

Clearly,

$$\begin{aligned} A_t =B_t + {{\,\mathrm{Leb}\,}}\{s\in [0,t]:X_s=0\}, \end{aligned}$$
(59)

where \(A_t\) and \(B_t\) are used as notation for \(A^X_t\) and \(B^X_t,\) respectively. Since 0 is a sticky point, the second term on the right-hand side of (59) is strictly positive for all \(t>0\) a.s. (if X starts at 0). We use here the notation \(D^A_n(\lambda )\) instead of \(D_n(\lambda )\) as defined in (30):

$$\begin{aligned} D^A_n(\lambda ) := \frac{(-\lambda )^n}{(n-1)!} \int \limits _{[0,\infty )} G_\lambda (0,y) {\widehat{f}}_\lambda ^{(n-1)}(y;\lambda ) m(\mathop {}\!\mathrm {d}y). \end{aligned}$$

Using the explicit form (16) for the Green kernel yields for \(n=1\)

$$\begin{aligned} D^A_1(\lambda )&= -2\lambda \left( \int _0^\infty G_\lambda (0,y) {\widehat{f}}_\lambda (y;\lambda ) \mathop {}\!\mathrm {d}y + \gamma \, G_\lambda (0,0) {\widehat{f}}_\lambda (0;\lambda ) \right) \\&=\frac{-2\lambda }{2 \sqrt{2\lambda }+2\lambda \gamma } \left( \frac{1}{2\sqrt{2\lambda }}+\gamma \right) \\&= - H(\lambda )\left( \frac{1}{2}+\gamma \sqrt{2\lambda }\right) , \end{aligned}$$

where

$$\begin{aligned} H(\lambda ):=\left( 2 + \gamma \sqrt{2\lambda }\right) ^{-1}. \end{aligned}$$
(60)

Comparing the expressions for the Green kernels in (13) and (16) and recalling (55), as well as Lemma 1, it is seen that for \(n\ge 2\)

$$\begin{aligned} D^A_n(\lambda )&= \frac{2(-\lambda )^n}{(n-1)!} \int _{(0,\infty )} G_\lambda (0,y) {\widehat{f}}_\lambda ^{(n-1)}(y;\lambda ) \mathop {}\!\mathrm {d}y\\&= \frac{-1}{2^{2n}(2n-1)} \left( {\begin{array}{c}2n\\ n\end{array}}\right) \, \frac{\sqrt{2\lambda }}{2\sqrt{2\lambda } +2\gamma \lambda }\\&=- H(\lambda )\, T_n, \end{aligned}$$

where

$$\begin{aligned} T_n:= \frac{1}{2^{2n}(2n-1)} \left( {\begin{array}{c}2n\\ n\end{array}}\right) . \end{aligned}$$
(61)

The values for \(D^B_n(\lambda )\) are obtained in the same way, and we summarize the discussion above in the following result.

Proposition 3

It holds that

$$\begin{aligned} D^A_1(\lambda ) = - \frac{H(\lambda )}{2}\left( 1+2\gamma \sqrt{2\lambda }\right) ,\qquad D^B_1(\lambda ) = - \frac{H(\lambda )}{2} = - H(\lambda )\, T_1, \end{aligned}$$

and, for \(n\ge 2\),

$$\begin{aligned} D^A_n(\lambda ) = D^B_n(\lambda ) = - H(\lambda )\, T_n, \end{aligned}$$

with \(H(\lambda )\) and \(T_n\) given in (60) and (61), respectively.

Recall the recursive Eq. (29) for the Laplace transforms of the moments of \(B_t\) (an analogous formula holds for \(A_t\)):

$$\begin{aligned}&{{\widehat{B}}}_n(\lambda )\nonumber \\&\quad := {\widehat{B}}_0(\lambda ;n) = \frac{n!}{\lambda ^{n-1}} {{\widehat{B}}}_1(\lambda ) + \frac{n!}{\lambda ^{n+1}}\sum _{k=1}^{n-1} \left( 1 - \frac{\lambda ^{n-k+1}}{(n-k)!} {{\widehat{B}}}_{n-k}(\lambda ) \right) D^B_{k}(\lambda ).\qquad \qquad \end{aligned}$$
(62)

Next, we show that this recursive equation can be solved in the same way as for skew Brownian motion. The corresponding equation for the Laplace transforms of \(A_t\) does not seem to admit such a simple solution.

Proposition 4

For \(n=1,2,\dotsc \),

$$\begin{aligned} {{\widehat{B}}}_n(\lambda ) = \frac{n!}{\lambda ^{n+1}} \sum _{k=0}^{n-1} \left( {\begin{array}{c}n-1+k\\ k\end{array}}\right) \frac{1}{2^{n+k-1}} \left( \frac{1}{2+\gamma \sqrt{2\lambda }} \right) ^{n-k}. \end{aligned}$$
(63)

Proof

Introducing \(U_k(\lambda ) := \lambda ^{k+1}{\widehat{B}}_k(\lambda )/k!\) as in Remark 3, Eq. (62) can be rewritten as

$$\begin{aligned} U_n(\lambda )&= U_1(\lambda ) + \sum _{k=1}^{n-1} (1 - U_{n-k}(\lambda )) (- H(\lambda )T_k) \nonumber \\&= H(\lambda ) \Biggl ( 1 - \sum _{k=1}^{n-1} (1 - U_{n-k}(\lambda )) T_k \Biggr ), \end{aligned}$$
(64)

since

$$\begin{aligned} U_1(\lambda ) = \lambda ^2 {\widehat{B}}_1(\lambda ) = \lambda \int _0^\infty G_\lambda (0,y) m(\mathop {}\!\mathrm {d}y) = H(\lambda ). \end{aligned}$$

Clearly, (64) is of the same form as (56) and, consequently, by Theorem 5,

$$\begin{aligned} U_n(\lambda ) = \sum _{k=0}^{n-1} \left( {\begin{array}{c}n-1+k\\ k\end{array}}\right) \frac{(H(\lambda ))^{n-k}}{2^{n+k-1}}, \end{aligned}$$
(65)

which is the same as

$$\begin{aligned} {{\widehat{B}}}_n(\lambda ) = \frac{n!}{\lambda ^{n+1}} \sum _{k=0}^{n-1} \left( {\begin{array}{c}n-1+k\\ k\end{array}}\right) \frac{(H(\lambda ))^{n-k}}{2^{n+k-1}}, \end{aligned}$$

and the claimed formula (63) follows. \(\square \)
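The step from (64) to (65) can also be checked directly: treating \(H=H(\lambda )\) as a fixed rational number, the closed form (65) satisfies the recursion (64) exactly; a sketch (helper names are ours):

```python
from fractions import Fraction
from math import comb

def T(k):
    """T_k = binom(2k, k) / (2^{2k} (2k - 1)), cf. (61)."""
    return Fraction(comb(2 * k, k), 2 ** (2 * k) * (2 * k - 1))

def U_closed(n, H):
    """The closed form (65) for U_n(lambda), with H = H(lambda)."""
    return sum(comb(n - 1 + k, k) * H ** (n - k) * Fraction(1, 2 ** (n + k - 1))
               for k in range(n))

def U_recursive(n, H):
    """The recursion (64): U_n = H(1 - sum_{k=1}^{n-1} (1 - U_{n-k}) T_k),
    with U_1 = H."""
    if n == 1:
        return H
    return H * (1 - sum((1 - U_recursive(n - k, H)) * T(k) for k in range(1, n)))
```

At \(H=1/2\), corresponding to the limit \(\lambda \rightarrow 0\), the closed form reduces to the arcsine moments \(\left( {\begin{array}{c}2n\\ n\end{array}}\right) /2^{2n}\), anticipating Proposition 5.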

Consider now a regular diffusion \(X=(X_t)_{t\ge 0}\) as introduced in Sect. 2. In [31, p. 161], necessary and sufficient conditions are given, in terms of the speed measure of X, for \(A^X_t/t\) to converge in distribution to a random variable \(\xi \) with the Lamperti distribution, i.e., with density given by (54). It is a fairly simple matter to check these conditions for a sticky Brownian motion, the limiting random variable then being arcsine-distributed. We conclude the paper by showing that this result is also easily obtained for both \(A_t/t\) and \(B_t/t\) using the recursive equation. Notice that in this case the convergence of the moments is equivalent to the convergence in distribution.

Proposition 5

For \(n=1,2,\dotsc \),

$$\begin{aligned} \lim _{\lambda \rightarrow 0} \lambda ^{n+1}{{\widehat{A}}}_n(\lambda ) = \lim _{\lambda \rightarrow 0} \lambda ^{n+1}{{\widehat{B}}}_n(\lambda ) = \frac{n!}{2^{2n}} \left( {\begin{array}{c}2n\\ n\end{array}}\right) \end{aligned}$$
(66)

and

$$\begin{aligned} \lim _{t\rightarrow \infty } t^{-n}{{\,\mathrm{{\mathbf {E}}}\,}}_0(A_t^n) = \lim _{t\rightarrow \infty } t^{-n}{{\,\mathrm{{\mathbf {E}}}\,}}_0(B_t^n) = \frac{1}{2^{2n}} \left( {\begin{array}{c}2n\\ n\end{array}}\right) . \end{aligned}$$
(67)

Proof

We prove (66) for \(B_t\) by induction from (62). Analogous reasoning is valid for \(A_t\) and, hence, the details are omitted. The claim for \(B_t\) holds for \(n=1\), as is easily seen from (65). Multiplying (64) by \(n!\) yields

$$\begin{aligned} \lambda ^{n+1}{{\widehat{B}}}_n(\lambda ) = {n!}\left( H(\lambda ) - H(\lambda ) \sum _{k=1}^{n-1} \left( 1 - \frac{\lambda ^{n-k+1}}{(n-k)!} {{\widehat{B}}}_{n-k}(\lambda ) \right) T_{k}\right) . \end{aligned}$$

By the induction assumption, the limit of the right-hand side exists as \(\lambda \rightarrow 0\), implying

$$\begin{aligned} \lim _{\lambda \rightarrow 0} \lambda ^{n+1}{{\widehat{B}}}_n(\lambda ) = {n!}\left( \frac{1}{2} - \frac{1}{2} \sum _{k=1}^{n-1} \left( 1 - \frac{1}{2^{2k}} \left( {\begin{array}{c}2k\\ k\end{array}}\right) \right) T_{n-k}\right) . \end{aligned}$$
(68)

From Eq. (56) and Corollary 3, it is seen that the expression in the outer parentheses of (68) is as claimed in (66). The statement (67) then follows by invoking the Tauberian theorem presented in [8, p. 423], which is applicable since \(t\mapsto {{\,\mathrm{{\mathbf {E}}}\,}}_0(B_t^n)\) is increasing. \(\square \)
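The key step, namely that the expression in the outer parentheses of (68) collapses to the arcsine moment \(\left( {\begin{array}{c}2n\\ n\end{array}}\right) /2^{2n}\), can also be verified directly with exact arithmetic; a sketch (helper names are ours):

```python
from fractions import Fraction
from math import comb

def T(k):
    """T_k = binom(2k, k) / (2^{2k} (2k - 1)), cf. (61)."""
    return Fraction(comb(2 * k, k), 2 ** (2 * k) * (2 * k - 1))

def arcsine_moment(n):
    """The n-th moment binom(2n, n) / 4^n of the arcsine law."""
    return Fraction(comb(2 * n, n), 4 ** n)

def limit_from_68(n):
    """The expression in the outer parentheses of (68), with the
    induction-step limits inserted."""
    return Fraction(1, 2) - Fraction(1, 2) * sum(
        (1 - arcsine_moment(k)) * T(n - k) for k in range(1, n))
```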