1 Introduction and statement of the results

Let \(X =\{ X_t, \, t\ge 0\}\) be a real-valued process starting at zero and \(T_x = \inf \{ t > 0, \; X_t > x\}\) be its first-passage time above a positive level \(x\). Studying the law of \(T_x\) is a classic problem in probability theory. In general, it is difficult to obtain an explicit expression of this law. However, it has been observed that in many interesting cases the survival function has a polynomial decay:

$$\begin{aligned} {\mathbb {P}}[T_x > t] = t^{-\theta + o(1)}, \qquad t \rightarrow +\infty , \end{aligned}$$
(1.1)

where \(\theta \) is a positive constant which is called the persistence exponent and usually does not depend on \(x\). The computation of persistence exponents turns out to have connections with various problems in probability and mathematical physics.

Physicists view the persistence exponent as a parameter providing crucial insight into the whole history of a process, one which is more informative than its correlation structure. The persistence exponent has been measured experimentally in several situations (fluctuating interfaces, breath figures, nematic systems) and we refer to the recent survey paper [3] for a list of observations, simulations and rigorous results in this field. The question is also attractive for mathematicians since, up to now, very few rigorous computations have actually been performed, especially in the non-Markovian framework.

A central result in this topic is Goldman and Sinai's evaluation of the persistence exponent \(\theta = 1/4\) for integrated Brownian motion [10, 25]. There are three natural generalizations of this result, all still in search of a proof. The first is the persistence exponent for twice integrated, or more generally \(n\)th time integrated, Brownian motion. This simply stated open problem on Brownian motion is believed to be very challenging. Some numerical evaluations have been performed by physicists—see again [3]—but they have not led to a precise conjecture on \(\theta .\) The second is the persistence exponent for integrated fractional Brownian motion with Hurst parameter \(H\), for which Molchan and Khokhlov [17] have conjectured that \(\theta \) should be \(H(1-H)\). The third is the persistence exponent for integrated stable Lévy processes, which is the subject of the present paper. It is important to mention that the first question above has tight connections with the structure of the real roots of random polynomials with Gaussian coefficients and large degree, whereas the second and third ones appear naturally when studying the shock structure of the inviscid Burgers equation with fractional Brownian, resp. Lévy stable, initial data. We refer to [2] and the bibliography therein for complete details on these three open problems, and their respective connections.

In this paper we investigate this question for the process

$$\begin{aligned} X_t = \int _0^t L_s\, ds, \end{aligned}$$

where \(L = \{L_t, \, t\ge 0\}\) is a strictly \(\alpha \)-stable Lévy process starting from zero, with law \({\mathbb {P}}.\) This process solves the differential equation

$$\begin{aligned} \frac{\mathrm{d}^2 X_t}{\mathrm{d}t^2} = {\dot{L}}_t, \end{aligned}$$

which describes the dynamics of a particle subjected to a force given by a stable noise. This is a natural generalization of the so-called acceleration process studied by physicists—see Section 3.2 in [3]. A non-rigorous analysis of the survival function of this particle in the presence of a potential is performed in Section III.B of [7]. The present paper computes rigorously the persistence exponent of the free particle of equation (1) in [7].
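Before turning to precise statements, the persistence exponent can be observed numerically. The following minimal Monte Carlo sketch (not part of the paper's arguments) simulates the free particle by an Euler scheme in the symmetric case \(\rho = 1/2,\) where scipy's levy_stable with \(\beta = 0\) matches the normalization (1.2) below; all grid and sample sizes are illustrative.

```python
# Monte Carlo sketch: estimate P_{(-1,0)}[T_0 > t] for X_t = int_0^t L_s ds,
# with L a symmetric alpha-stable process (rho = 1/2). Illustrative sizes only.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, rho = 1.5, 0.5
theta = rho / (1 + alpha * (1 - rho))        # exponent from Theorem A below

T, n_steps, n_paths = 100.0, 2000, 1000
dt = T / n_steps

# Stable increments scale like dt^(1/alpha); two cumulative sums give L and X.
dL = levy_stable.rvs(alpha, 0.0, size=(n_paths, n_steps),
                     random_state=rng) * dt ** (1.0 / alpha)
L = np.cumsum(dL, axis=1)
X = -1.0 + np.cumsum(L, axis=1) * dt         # start at (x, y) = (-1, 0)

# Survival function: fraction of paths whose running maximum is still below 0.
times = dt * np.arange(1, n_steps + 1)
survival = 1.0 - (np.maximum.accumulate(X, axis=1) >= 0.0).mean(axis=0)

mask = (times > 10.0) & (survival > 0.0)
slope = np.polyfit(np.log(times[mask]), np.log(survival[mask]), 1)[0]
print(f"empirical slope {slope:.3f} vs -theta = {-theta:.3f}")
```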

To state our main result, we need some notation. Our process \(L\) is normalized to have characteristic exponent

$$\begin{aligned} \varPsi (\lambda ) = \log ({\mathbb {E}}[e^{\mathrm{i}\lambda L_1}]) = -(\mathrm{i}\lambda )^\alpha e^{-\mathrm{i}\pi \alpha \rho \, \mathrm{sgn}(\lambda )}, \qquad \lambda \in {\mathbb {R}}, \end{aligned}$$
(1.2)

where \(\alpha \in (0,2]\) is the self-similarity parameter and \(\rho = {\mathbb {P}}[L_1 \ge 0]\) is the positivity parameter. We refer to [27] and [19] for classic accounts on stable laws and processes. The strict stability implies the \((1/\alpha )\)-self-similarity of \(L\) and the \((1+1/\alpha )\)-self-similarity of \(X\), in other words that

$$\begin{aligned} \{L_{kt}, \; t\ge 0\}\; \mathop {=}\limits ^{d}\; \{k^{1/\alpha }L_{t}, \; t\ge 0\}\qquad \text{ and }\qquad \{X_{kt}, \; t\ge 0\}\; \mathop {=}\limits ^{d}\; \{k^{1+1/\alpha }X_{t}, \; t\ge 0\} \end{aligned}$$

for all \(k > 0.\) When \(\alpha = 2,\) one has \(\rho = 1/2\) and \(\varPsi (\lambda ) = -\lambda ^2,\) so that \(L = \sqrt{2} B\) is a rescaled Brownian motion. When \(\alpha = 1,\) one has \(\rho \in (0,1)\) and \(L\) is a Cauchy process with a linear drift. When \(\alpha \in (0,1)\cup (1,2)\) the characteristic exponent takes the more familiar form

$$\begin{aligned} \varPsi (\lambda ) = -\kappa _{\alpha ,\rho }\vert \lambda \vert ^\alpha (1 - \mathrm{i}\beta \tan (\pi \alpha /2)\,\mathrm{sgn}(\lambda )), \end{aligned}$$

where \(\beta \in [-1,1]\) is an asymmetry parameter, whose connection with the positivity parameter is given by Zolotarev’s formula:

$$\begin{aligned} \rho = \frac{1}{2} + \frac{1}{\pi \alpha } \arctan (\beta \tan (\pi \alpha /2)), \end{aligned}$$

and \(\kappa _{\alpha ,\rho }= \cos (\pi \alpha (\rho -1/2)) > 0\) is a scaling constant. The latter could have taken any positive value, changing the normalization (1.2) accordingly, with no bearing on our purposes below. One has \(\rho \in [0,1]\) if \(\alpha < 1\) and \(\rho \in [1-1/\alpha , 1/\alpha ]\) if \(\alpha > 1.\) When \(\alpha > 1\) and \(\rho = 1/\alpha \) the process \(L\) has no positive jumps, whereas it has no negative jumps when \(\alpha > 1\) and \(\rho = 1-1/\alpha \). When \(\alpha < 1\) and \(\rho = 0\) or \(\rho = 1,\) the process \(\vert L\vert \) is a stable subordinator and has increasing sample paths, a situation which will be implicitly excluded throughout this paper. In this case, the process \(X\) is indeed also monotone and the survival function in (1.1) either is one or decays towards zero at an exponential speed—see [2] p. 4 for details.
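For illustration, Zolotarev's formula is easy to evaluate numerically; the following small helper (a purely illustrative aside) recovers the boundary values of \(\rho \) listed above at \(\beta = \pm 1.\)

```python
# Zolotarev's formula: map the asymmetry parameter beta to the positivity
# parameter rho, and check the endpoints of the admissible range.
import numpy as np

def rho_from_beta(alpha: float, beta: float) -> float:
    """Zolotarev's formula, for alpha in (0,1) or (1,2)."""
    return 0.5 + np.arctan(beta * np.tan(np.pi * alpha / 2)) / (np.pi * alpha)

for alpha in (0.5, 1.5):
    print(alpha, rho_from_beta(alpha, -1.0), rho_from_beta(alpha, 1.0))
# alpha = 0.5: beta = -1, +1 give rho = 0, 1 (the excluded subordinator cases);
# alpha = 1.5: beta = -1 gives rho = 1/alpha = 2/3 (no positive jumps) and
#              beta = +1 gives rho = 1 - 1/alpha = 1/3 (no negative jumps).
```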

When \(\alpha =2,\) the bivariate process \((X,L)\) is Gaussian with explicit covariance function and transition density, providing also the basic example of a degenerate diffusion process—see [14] for details and references. When \(\alpha < 2,\) the process \((X,L)\) is non-Gaussian \(\alpha \)-stable in the broad sense of [19]. The process \((X,L)\) is a strong Markov process, which is sometimes called the Kolmogorov process in the literature. In the following we will write \({\mathbb {P}}_{(x,y)}\) for the law of \((X,L)\) starting at \((x,y)\in {\mathbb {R}}^2.\) Our main concern in this paper is the hitting time of zero for \(X\):

$$\begin{aligned} T_0 = \inf \{ t> 0, \; X_t = 0\}. \end{aligned}$$

Since \(\vert L\vert \) is not a subordinator, a simple argument using self-similarity and the zero-one law for Markov processes—see Lemma 3 below for details—shows that \({\mathbb {P}}_{(0,0)}[T_0 = 0] = 1,\) in other words that the origin is regular for the vertical axis. If \(x < 0\) or \(x=0\) and \(y <0,\) the continuity of the sample paths of \(X\) shows that a.s. \(T_0 = \inf \{ t> 0, \; X_t > 0\},\) and it will be checked in Lemma 3 below that \(T_0\) is also a.s. finite. If \(x > 0\) or \(x=0\) and \(y >0,\) the law of \(T_0\) is deduced from that of the latter situation by considering the dual Lévy process \(-L.\)

When \((x,y)\ne (0,0),\) the difficulty in obtaining concrete information on the law of \(T_0\) under \({\mathbb {P}}_{(x,y)}\) comes from the fact that \(X\) itself is not a Markov process. In the Brownian case for example, the density function of \(T_0\) is expressed through quite intricate integral formulæ—see [2] pp. 15–16 and the references therein. On the other hand, some universal estimates can be obtained for the behaviour of the distribution function \({\mathbb {P}}_{(x,y)}[T_0\le t]\) as \(t\rightarrow 0,\) using self-similarity and Gaussian or stable upper tails for the supremum process—see e.g. Section 10.4 in [19]. But it is well-known that the study of \({\mathbb {P}}_{(x,y)}[T_0 > t]\) as \(t\rightarrow +\infty \) is a harder problem, where a more exotic behaviour is expected.

Throughout the paper, for any real functions \(f\) and \(g\) we will use the standard notation \(f(t)\asymp g(t)\) as \(t\rightarrow +\infty \) to express the fact that there exist two positive finite constants \(\kappa _1, \kappa _2\) such that \(\kappa _1 f(t)\le g(t) \le \kappa _2 f(t)\) for every \(t\) large enough. Our main result is the following.

Theorem A

Assume that \(x < 0\) or \(x=0\) and \(y < 0.\) One has

$$\begin{aligned} {\mathbb {P}}_{(x,y)} [T_0 > t] \asymp t^{-\theta },\qquad t\rightarrow +\infty , \end{aligned}$$

with \(\theta = \rho /(1+ \alpha (1-\rho )).\)

In the Brownian case \(\alpha = 2,\) one has \(\theta = 1/4=\rho /2,\) and as mentioned before this estimate has been known since the works of Goldman—see Proposition 2 in [10], with a more precise formulation on the density function of \(T_0\), following the seminal article of McKean [16]. This result was then partially rediscovered by Sinai in [25]. The universality of the persistence exponent 1/4 for integrals of real-valued Lévy processes having exponential moments on both sides has been shown in [1], with the help of strong approximation arguments. Recently, it was proved in [5] that all integrated real random walks with finite variance also have 1/4 as persistence exponent, extending [25], in which the particular case of the integrated simple random walk was studied. Let us also mention that the survival function of the \(n\)th hitting time of zero for integrated Brownian motion exhibits the same power decay up to a logarithmic term, namely \(ct^{-1/4}(\ln (t))^{n-1}\) with an explicit constant \(c\), as shown by the first author in [18].

In the case \(1 < \alpha < 2\) with no negative jumps, that is \(\rho = 1 -1/\alpha ,\) one obtains \(\theta = (\alpha - 1)/(2\alpha ) = \rho /2,\) an estimate which was proved by the second author in [23] with different techniques and a less precise formulation of the lower bound than in Theorem A, involving a logarithmic correction term. It is worth mentioning that the same persistence exponent \((\alpha - 1)/(2\alpha )\) appears for the integrals of random walks which are attracted towards this spectrally positive Lévy process—see Remark 1.2 in [5] and the main result of [26]. Our result leads therefore to a natural conjecture on the persistence exponent of general integrated random walks in a stable domain of attraction.

It was conjectured in [2]—see Conjecture 4 therein—that the persistence exponent of \(X\) might be \(\rho /2\) in general. This expected value should be compared with a classic result of Bingham stating that the persistence exponent of the stable process \(L\) itself is \(\rho \)—see (2.16) in [2] and Theorem 3A in [4]. The admissible set of \((\alpha ,\rho )\) and Theorem A entail that \(\theta > \rho /2\) as soon as \(L\) has negative jumps, hence providing a negative answer to this conjecture. The fact that \(\theta \) is an increasing function of the positivity parameter \(\rho \) matches the intuition; however, it is harder to explain heuristically why it is also a decreasing function of \(\alpha .\)

Specifying \(x = -1\) and \(y = 0\) in Theorem A entails by self-similarity the following lower tail probability estimate

$$\begin{aligned} {\mathbb {P}}[ X_1^*\le \varepsilon ] \asymp \varepsilon ^{\frac{\theta \alpha }{\alpha +1}}, \qquad \varepsilon \rightarrow 0, \end{aligned}$$

with the notation \(X_1^* = \sup \{X_t, \, t\le 1\}.\) Some heuristics on the subordination of \(X\) by the inverse local time of \(L\) when \(\alpha > 1\) had led to the conjecture, formulated in Part 2 of [21], that in the symmetric case \(\rho =1/2\) one should have \({\mathbb {P}}[ X_1^*\le \varepsilon ] = \varepsilon ^{(\alpha -1)^+/(2(\alpha +1)) +o(1)}\) as \(\varepsilon \rightarrow 0.\) A positive answer to this conjecture was announced in [6], but the argument contained a mistake. The invalidity of this conjecture as soon as \(\alpha \) is close enough to 1 was observed in [24]. Theorem A shows that Shi's exponent is the right one only for integrated Brownian motion: in the symmetric case one has \(\theta \alpha /(\alpha +1) = \alpha /((\alpha +1)(\alpha +2)) \ge (\alpha -1)^+/(2(\alpha +1)),\) with equality only if \(\alpha =2.\) Let us mention in passing that lower tail probabilities offer some challenging problems for Gaussian processes—see [15, 20].
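For concreteness, the exponents at stake here can be tabulated in a few lines (a purely illustrative aside, in the symmetric case \(\rho = 1/2\)).

```python
# Exponents in play: theta from Theorem A, the lower-tail exponent
# theta*alpha/(alpha+1), and Shi's conjectured exponent, all at rho = 1/2.
def theta(alpha, rho):
    return rho / (1 + alpha * (1 - rho))

for alpha in (0.5, 1.0, 1.5, 2.0):
    rho = 0.5
    lower_tail = theta(alpha, rho) * alpha / (alpha + 1)  # = alpha/((alpha+1)(alpha+2))
    shi = max(alpha - 1.0, 0.0) / (2 * (alpha + 1))
    print(f"alpha={alpha}: theta={theta(alpha, rho):.4f}, "
          f"lower-tail exponent={lower_tail:.4f}, Shi={shi:.4f}")
# The two last columns agree only at alpha = 2, where theta = 1/4.
```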

Our method to prove Theorem A hinges upon the random variable \(L_{T_0},\) the so-called hitting place of \((X,L)\) on the vertical axis, which has been extensively studied in the Brownian case—see [10, 13, 14, 16]. Notice that this random variable is positive under \({\mathbb {P}}_{(x,y)}\) if \(x<0\) or \(x=0\) and \(y <0\). The reason why it is connected to the persistence exponent comes from the following heuristic equivalence for fractional moments

$$\begin{aligned} {\mathbb {E}}_{(x,y)}[T_0^s] < +\infty \quad \Leftrightarrow \quad {\mathbb {E}}_{(x,y)}[L_{T_0}^{\alpha s}] < +\infty \end{aligned}$$

for all \(s >0,\) which had been conjectured in [23] p. 176, and turns out to be true as a consequence of Theorem A and Lemma 5 below. The precise relationship between the upper tails of \(T_0\) and that of \(L_{T_0}\) follows from a series of probabilistic estimates which are the matter of Sect. 4.

In this paper we also provide a rather complete description of the law of the random variable \(L_{T_0}\) when \((X,L)\) starts from a coordinate axis. To express our second main result, we need some further notation. For every \(\mu \in (0,1),\) introduce the \(\mu \)-Cauchy random variable \(\mathbf{C}_\mu ,\) with density

$$\begin{aligned} \frac{\sin (\pi \mu )}{\pi \mu (x^2 + 2 \cos (\pi \mu ) x + 1)}\,\mathbf{1}_{\{x\ge 0\}}. \end{aligned}$$

Our above denomination comes from the case \(\mu = 1/2,\) where \(\mathbf{C}_{1/2}\) is the half-Cauchy distribution. If \(X\) is a positive random variable and \(\nu \in {\mathbb {R}}\) is such that \({\mathbb {E}}[X^\nu ] < \infty ,\) the positive random variable \(X^{(\nu )}\) defined by

$$\begin{aligned} {\mathbb {E}}[f(X^{(\nu )})] = \frac{{\mathbb {E}}[X^\nu f(X)]}{{\mathbb {E}}[X^\nu ]} \end{aligned}$$

for all bounded and continuous functions \(f : {\mathbb {R}}^+ \rightarrow {\mathbb {R}}\), is known as the size bias of order \(\nu \) of \(X.\) Observe that when \(X\) is absolutely continuous, the density of \(X^{(\nu )}\) is obtained by multiplying that of \(X\) by \(x^\nu \) and renormalizing. We finally introduce the parameters

$$\begin{aligned} {\gamma }= \frac{\rho \alpha }{1+\alpha }\qquad \text{ and }\qquad \chi = \frac{\rho \alpha }{1+\alpha (1-\rho )}=\alpha \theta . \end{aligned}$$

Notice that from the admissible set for \((\alpha , \rho ),\) we have \({\gamma }\in (0,1/2)\) and \(\chi \in (0,1).\)
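As an aside, the distribution function of \(\mathbf{C}_\mu \) inverts in closed form: a direct computation from the density above gives \(\mathbf{C}_\mu \, \mathop {=}\limits ^{d}\, \sin (\pi \mu )\cot (\pi \mu U) - \cos (\pi \mu )\) for \(U\) uniform on \((0,1).\) The following sketch (not part of the paper's arguments) uses this to sample \(\mathbf{C}_\mu \) and checks the Mellin transform \({\mathbb {E}}[\mathbf{C}_\mu ^s] = \sin (\pi \mu s)/(\mu \sin (\pi s)),\) which will be used in Sect. 3.

```python
# Sample the mu-Cauchy variable C_mu by inverting its distribution function,
# then compare a Mellin moment with the closed form used in Sect. 3.1.
import numpy as np

def sample_C(mu, size, rng):
    u = rng.uniform(size=size)
    return np.sin(np.pi * mu) / np.tan(np.pi * mu * u) - np.cos(np.pi * mu)

rng = np.random.default_rng(1)
mu, s = 0.4, 0.3                     # illustrative admissible values
samples = sample_C(mu, 10**6, rng)
mc = np.mean(samples ** s)
exact = np.sin(np.pi * mu * s) / (mu * np.sin(np.pi * s))
print(f"Monte Carlo {mc:.4f} vs exact {exact:.4f}")
```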

Theorem B

  1. (i)

    For every \(y < 0,\) under \({\mathbb {P}}_{(0,y)}\) one has

    $$\begin{aligned} L_{T_0}\, \mathop {=}\limits ^{d}\, \vert y\vert (\mathbf{C}_\chi ^{1-{\gamma }})^{(1)}. \end{aligned}$$
  2. (ii)

    For every \(x < 0,\) under \({\mathbb {P}}_{(x,0)}\) the positive random variable \(L_{T_0}\) has Mellin transform

    $$\begin{aligned}&{\mathbb {E}}_{(x,0)}[L_{T_0}^{s-1}] = \frac{(1+\alpha )^{\frac{1-s}{1+\alpha }}{\varGamma }(\frac{\alpha +2}{\alpha +1}){\varGamma }(\frac{1-s}{\alpha +1})\sin (\pi {\gamma })}{{\varGamma }(\frac{s}{\alpha +1}){\varGamma }(1-s)\sin (\pi s(1-{\gamma }))}\,\vert x\vert ^{\frac{s-1}{\alpha +1}}, \quad \vert s\vert < 1/(1-{\gamma }). \end{aligned}$$

The proof of this result is given in Sect. 3, following some preliminary computations involving oscillating integrals and the Fourier transform of \(X_t,\) performed in Sect. 2. Observe that the density in (i) above is explicit and reads for example

$$\begin{aligned} \frac{3}{2\pi } \frac{\vert y\vert ^{1/2} z^{3/2}}{\vert y\vert ^3 + z^3}\mathbf{1}_{\{z\ge 0\}} \end{aligned}$$

in the Brownian case, a formula originally proved by McKean in [16]—see also formulæ  (1) and (2) in [13]. As is well-known, the Cauchy random variable appears in exit or winding problems for two-dimensional Brownian motion. The fact that it is also connected with similar problems for general integrated stable processes is perhaps more surprising.
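This Brownian special case lends itself to a quick numerical sanity check. The sketch below samples \(\mathbf{C}_{1/2}\) by inverting its distribution function (here simply \(\cot (\pi u/2)\)), implements the size bias of order 1 through importance weights, and compares an expectation computed this way against the same expectation under McKean's density for \(y=-1\); the weights are heavy-tailed, so convergence is slow.

```python
# Check Theorem B (i) at alpha = 2, y = -1: chi = 1/2, gamma = 1/3, and
# (C_{1/2}^{2/3})^{(1)} should follow McKean's density (3/(2pi)) z^{3/2}/(1+z^3).
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)
chi, gamma = 0.5, 1.0 / 3.0
u = rng.uniform(size=10**6)
c = np.sin(np.pi * chi) / np.tan(np.pi * chi * u) - np.cos(np.pi * chi)
w = c ** (1.0 - gamma)               # samples of C_chi^{1-gamma}
weights = w / w.sum()                # size-bias-of-order-1 weights (heavy-tailed)

mc = np.sum(weights * np.exp(-w))    # E[exp(-L_{T_0})] under P_{(0,-1)}
exact, _ = quad(lambda z: np.exp(-z) * 1.5 / np.pi * z**1.5 / (1 + z**3),
                0, np.inf)
print(f"Monte Carlo {mc:.4f} vs McKean density {exact:.4f}")
```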

An interesting consequence of (ii) is that the Mellin transform can be inverted in the Cauchy case \(\alpha = 1\) and exhibits the same type of law as in (i): one obtains

$$\begin{aligned} L_{T_0}\, \mathop {=}\limits ^{d}\, \sqrt{2\vert x\vert }\, (\mathbf{C}_\delta ^{1-{\gamma }})^{(1)} \end{aligned}$$

with the notation \(\delta = (1+\chi )/2.\) The Mellin transform of (ii) can also be inverted simply in the Brownian case, in terms of Beta and Gamma random variables—shedding new light on a formula of Gor'kov [11], which was of a purely analytical nature—and in the case \(\alpha < 1\) in terms of positive stable random variables. The Mellin inversion is however more complicated when \(\alpha \in (1,2),\) and involves no classical random variables in general—see Sect. 3.3 below for details.

2 Preliminary computations

The following lemma, which we could not locate in the literature, will be useful in the sequel.

Lemma 1

Let \(\nu \in (0,1)\) and \(X\) be a real random variable such that \({\mathbb {E}}[\vert X\vert ^{-\nu }] < \infty .\) One has

$$\begin{aligned} \int _0^{\infty } \lambda ^{\nu -1} \,{\mathbb {E}}[\cos (\lambda X)] \, d\lambda = {\varGamma }(\nu ) \cos (\pi \nu /2)\,{\mathbb {E}}[ \vert X\vert ^{-\nu }] \end{aligned}$$

and

$$\begin{aligned} \int _0^{\infty } \lambda ^{\nu -1} {\mathbb {E}}[\sin (\lambda X)]\, d\lambda = {\varGamma }(\nu ) \sin (\pi \nu /2)\,{\mathbb {E}}[ \vert X\vert ^{-\nu }\mathrm{sgn}(X)]. \end{aligned}$$

Proof

The generalized Fresnel integral which is computed e.g. in formula (37) p. 13 of [8] shows that for all \(u\ne 0, \nu \in (0, 1),\) one has

$$\begin{aligned} \int _0^{\infty } \lambda ^{\nu -1} \cos (\lambda u) \, d\lambda = {\varGamma }(\nu ) \cos (\pi \nu /2)\, \vert u\vert ^{-\nu }. \end{aligned}$$
(2.1)

The first statement of the lemma hence simply amounts to switching the expectation and the integral. However, we cannot apply Fubini's theorem directly. Write \(\mu \) for the probability distribution of \(X.\) From (2.1) and an integration by parts, we get

$$\begin{aligned} {\varGamma }(\nu ) \cos (\pi \nu /2)\,{\mathbb {E}}[ \vert X\vert ^{-\nu }]&= \int _{\mathbb {R}}\mu (du)\;\left( \int _0^{\infty } \lambda ^{\nu -1} \cos (\lambda u) \,d\lambda \right) \\&= (1-\nu ) \int _{\mathbb {R}}\mu (du)\;\left( \int _0^{\infty } \frac{\sin (\lambda u)}{u}\, \lambda ^{\nu -2} \,d\lambda \right) . \end{aligned}$$

Since

$$\begin{aligned} \int _{\mathbb {R}}\mu (du)\;\left( \int _0^{\infty } \left| \frac{\sin (\lambda u)}{u}\right| \, \lambda ^{\nu -2} \,d\lambda \right)&\le \int _{\mathbb {R}}\mu (du)\;\left( \int _0^{\infty } \left( \lambda \wedge \frac{1}{\vert u\vert }\right) \, \lambda ^{\nu -2} \,d\lambda \right) \\&\le \frac{{\mathbb {E}}[ \vert X\vert ^{-\nu }]}{\nu (1-\nu )}\; <\; +\infty , \end{aligned}$$

we may now apply Fubini’s theorem and obtain

$$\begin{aligned} {\varGamma }(\nu ) \cos (\pi \nu /2)\,{\mathbb {E}}[ \vert X\vert ^{-\nu }] = (1-\nu ) \int _0^{\infty } \lambda ^{\nu -2} \, d\lambda \; \left( \int _{\mathbb {R}}\frac{\sin (\lambda u)}{u}\, \mu (du) \right) . \end{aligned}$$

The dominated convergence theorem entails that the function

$$\begin{aligned} \psi (\lambda )= \int _{\mathbb {R}}\frac{\sin (\lambda u)}{u}\, \mu (du) \end{aligned}$$

is differentiable, with derivative

$$\begin{aligned} \psi ^\prime (\lambda ) = \int _{\mathbb {R}}\cos (\lambda u)\, \mu (du) = {\mathbb {E}}[\cos (\lambda X)]. \end{aligned}$$

Thus, another integration by parts yields

$$\begin{aligned} {\varGamma }(\nu ) \cos (\pi \nu /2)\,{\mathbb {E}}[ \vert X\vert ^{-\nu }] = \int _0^{\infty } \lambda ^{\nu -1}\, {\mathbb {E}}[\cos (\lambda X)] \, d\lambda - \bigg [\lambda ^{\nu -1}\psi (\lambda )\bigg ]_0^{+\infty } \end{aligned}$$

and it remains to prove that the last term on the right-hand side is zero. On the one hand, one has

$$\begin{aligned} \lambda ^{\nu -1}\vert \psi (\lambda )\vert \le \lambda ^\nu \;\rightarrow \; 0\quad \hbox {as }\lambda \rightarrow 0. \end{aligned}$$

On the other hand, using

$$\begin{aligned} \lambda ^{\nu -1}\left| \frac{\sin (\lambda u)}{u}\right| \le \lambda ^{\nu -1}\frac{\vert \sin (\lambda u)\vert ^{1-\nu }}{\vert u\vert } \le |u|^{-\nu }, \end{aligned}$$

and the dominated convergence theorem, we see that \(\lambda ^{\nu -1}\vert \psi (\lambda )\vert \rightarrow 0\) as \(\lambda \rightarrow +\infty .\) This completes the proof of the first statement of the lemma. The second statement may be handled similarly with the help of the formula

$$\begin{aligned} \int _0^{\infty } \lambda ^{\nu -1} \sin (\lambda x) \, d\lambda = {\varGamma }(\nu ) \sin (\pi \nu /2)\mathrm{sgn}(x) \vert x\vert ^{-\nu }, \qquad \vert \nu \vert < 1, \end{aligned}$$
(2.2)

which is given e.g. in (38) p. 13 in [8]. \(\square \)
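As a numerical aside, (2.1) can be verified directly with mpmath's quadosc routine for oscillatory integrals; the values of \(u\) and \(\nu \) below are arbitrary.

```python
# Sanity check of the generalized Fresnel integral (2.1) via mpmath.quadosc.
import mpmath as mp

u, nu = mp.mpf(2), mp.mpf('0.6')
lhs = mp.quadosc(lambda lam: lam**(nu - 1) * mp.cos(lam * u),
                 [0, mp.inf], period=2 * mp.pi / u)
rhs = mp.gamma(nu) * mp.cos(mp.pi * nu / 2) * abs(u)**(-nu)
print(lhs, rhs)
```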

Lemma 2

For all \(x, y\in {\mathbb {R}}\) and \(t \ge 0\) one has

$$\begin{aligned} \log ({\mathbb {E}}_{(x,y)}[e^{\mathrm{i}\lambda X_t}]) = \mathrm{i}\lambda (x+ yt)\;-\;\frac{t^{\alpha +1}}{\alpha +1}\,(\mathrm{i}\lambda )^\alpha e^{-\mathrm{i}\pi \alpha \rho \, \mathrm{sgn}(\lambda )}, \qquad \lambda \in {\mathbb {R}}. \end{aligned}$$

Proof

It is clearly enough to consider the case \(x = y =0.\) Integrating by parts yields the following representation of \(X_t\) as a stable integral:

$$\begin{aligned} X_t = \int _0^\infty (t-s)^+ \,dL_s = \int _0^\infty (t-x)^+ \, M(dx), \end{aligned}$$

where \(M\) is an \(\alpha \)-stable random measure on \({\mathbb {R}}^+\) with Lebesgue control measure and constant skewness intensity \(\beta (x) =\beta \)—see Example 3.3.3 in [19]. In the case \(\alpha \ne 1,\) the statement of the lemma is a direct consequence of Proposition 3.4.1 (i) in [19], reformulated with the \((\alpha ,\rho )\) parametrization. In the case \(\alpha =1,\rho =1/2\) we use Proposition 3.4.1 (ii) in [19] (with \(\beta =0\)). The case \(\alpha =1,\rho \ne 1/2\) follows from the symmetric case by adding a drift coefficient \(\mu t\) for some \(\mu \ne 0,\) which integrates into \(\mu t^2/2.\) \(\square \)
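A crude simulation check of Lemma 2 is possible in the symmetric case \(\rho =1/2,\) where the normalization (1.2) coincides with that of scipy's levy_stable for \(\beta =0\) (characteristic function \(e^{-\vert \lambda \vert ^\alpha }\)); the Riemann-sum discretization below is a sketch and makes no claim about rates.

```python
# Monte Carlo check of Lemma 2 at (x, y) = (0, 0) in the symmetric case:
# E[exp(i*lam*X_t)] should equal exp(-lam^alpha * t^(alpha+1)/(alpha+1)).
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(3)
alpha, t, lam = 1.5, 1.0, 0.7
n_steps, n_paths = 500, 5000
dt = t / n_steps

dL = levy_stable.rvs(alpha, 0.0, size=(n_paths, n_steps),
                     random_state=rng) * dt ** (1.0 / alpha)
X_t = np.sum(np.cumsum(dL, axis=1), axis=1) * dt    # Riemann sum for X_t
mc = np.mean(np.exp(1j * lam * X_t))                # imaginary part ~ 0 by symmetry
exact = np.exp(-lam**alpha * t**(alpha + 1) / (alpha + 1))
print(f"Monte Carlo {mc.real:.4f} vs Lemma 2 {exact:.4f}")
```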

We now set

$$\begin{aligned} s_{\alpha ,\rho }= \frac{\sin (\pi \alpha (\rho -1/2))}{\alpha +1} \in (-1,1)\quad \text{ and }\quad c_{\alpha ,\rho }= \frac{\cos (\pi \alpha (\rho -1/2))}{\alpha +1} \in (0,1). \end{aligned}$$

The next proposition gives a representation for the Mellin transform of \(X_t\) restricted on the event \(\{X_t>0\}\). For the sake of simplicity, we shall denote this variable by \(X_t^+\) in the following, with the abuse of notation:

$$\begin{aligned} {\mathbb {E}}_{(x,y)}\left[ (X_t^+)^{-\nu }\right] = {\mathbb {E}}_{(x,y)}\left[ X_t^{-\nu } \mathbf{1}_{\{X_t > 0\}}\right] . \end{aligned}$$

Proposition 1

For all \(x, y\in {\mathbb {R}}, t > 0\) and \(\nu \in (0,1)\) one has

$$\begin{aligned} {\mathbb {E}}_{(x,y)}[(X^+_t)^{-\nu }] = \frac{\varGamma (1-\nu )}{\pi } \int _0^{\infty } \! \lambda ^{\nu -1} e^{-c_{\alpha ,\rho }\lambda ^\alpha t^{\alpha +1}} \sin (\lambda (x+yt) + s_{\alpha ,\rho }\lambda ^\alpha t^{\alpha +1} + \pi \nu /2)\, d\lambda . \end{aligned}$$
(2.3)

Proof

Since \(X_t\) is a stable random variable, it has a bounded density and \({\mathbb {E}}_{(x,y)}[(X^+_t)^{-\nu }]\) is hence finite for all \(\nu \in (0,1).\) By Lemma 2 we have

$$\begin{aligned} \log \left( {\mathbb {E}}_{(x,y)}\left[ e^{i\lambda X_t}\right] \right) = \mathrm{i}\lambda (x + yt) - \lambda ^\alpha t^{1+\alpha }(c_{\alpha ,\rho }-\mathrm{i}s_{\alpha ,\rho }), \quad \lambda \ge 0. \end{aligned}$$

Taking the real part and integrating against \(\lambda ^{\nu -1}\,d\lambda \) on \((0,+\infty )\), we deduce

$$\begin{aligned}&\int _0^{\infty }\! \lambda ^{\nu -1} e^{- c_{\alpha ,\rho }\lambda ^\alpha t^{1+\alpha }} \cos (\lambda (x+yt) + s_{\alpha ,\rho }\lambda ^\alpha t^{1+\alpha })\, d\lambda \\&\quad = \int _0^{\infty } \!\lambda ^{\nu -1} {\mathbb {E}}_{(x,y)}\left[ \cos (\lambda X_t)\right] \,d\lambda \\&\quad = \varGamma (\nu ) \cos \left( \frac{\pi \nu }{2}\right) \, {\mathbb {E}}_{(x,y)}\left[ |X_t|^{-\nu }\right] , \end{aligned}$$

where the second equality comes from Lemma 1. Similarly, taking the imaginary part entails

$$\begin{aligned}&\int _0^{\infty }\! \lambda ^{\nu -1} e^{- c_{\alpha ,\rho }\lambda ^\alpha t^{1+\alpha }} \sin (\lambda (x+yt) + s_{\alpha ,\rho }\lambda ^\alpha t^{1+\alpha })\, d\lambda \\&\quad = \varGamma (\nu ) \sin \left( \frac{\pi \nu }{2}\right) \, {\mathbb {E}}_{(x,y)}\left[ |X_t|^{-\nu }\text {sgn}(X_t)\right] . \end{aligned}$$

Multiplying the first relation by \(\sin (\pi \nu /2)\), the second by \(\cos (\pi \nu /2),\) and summing, we finally obtain

$$\begin{aligned} \varGamma (\nu ) \sin (\pi \nu )\,{\mathbb {E}}_{(x,y)}[(X^+_t)^{-\nu }] = \int _0^{\infty } \! \lambda ^{\nu -1} e^{-c_{\alpha ,\rho }\lambda ^\alpha t^{\alpha +1}} \sin (\lambda (x+yt) + s_{\alpha ,\rho }\lambda ^\alpha t^{\alpha +1} + \pi \nu /2)\, d\lambda , \end{aligned}$$

which yields the required expression. \(\square \)
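Formula (2.3) can be tested numerically. The following is a rough sketch comparing the right-hand side with a crude Euler-scheme Monte Carlo estimate of the left-hand side, in the symmetric case \(\rho =1/2\) where \(s_{\alpha ,\rho }=0\) and \(c_{\alpha ,\rho }=1/(\alpha +1)\); taking \(\nu <1/2\) keeps the Monte Carlo variance finite, and all numerical values are illustrative.

```python
# Compare formula (2.3) with a Monte Carlo estimate of E_{(x,y)}[(X_t^+)^{-nu}].
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma
from scipy.stats import levy_stable

rng = np.random.default_rng(4)
alpha, nu, x, y, t = 1.5, 0.4, -0.5, 0.3, 1.0
c = 1.0 / (alpha + 1)                # c_{alpha,rho}; s_{alpha,rho} = 0 here

integrand = lambda lam: (lam**(nu - 1) * np.exp(-c * lam**alpha * t**(alpha + 1))
                         * np.sin(lam * (x + y * t) + np.pi * nu / 2))
formula = Gamma(1 - nu) / np.pi * quad(integrand, 0, np.inf, limit=200)[0]

n_steps, n_paths = 300, 20000
dt = t / n_steps
dL = levy_stable.rvs(alpha, 0.0, size=(n_paths, n_steps),
                     random_state=rng) * dt ** (1.0 / alpha)
X_t = x + y * t + np.sum(np.cumsum(dL, axis=1), axis=1) * dt
mc = np.mean(np.where(X_t > 0, X_t, np.inf) ** (-nu))   # inf**(-nu) = 0
print(f"formula {formula:.4f} vs Monte Carlo {mc:.4f}")
```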

Our last proposition provides some crucial computations for the proof of Theorem B.

Proposition 2

Set \(\nu \in (\alpha /(\alpha +1),1)\) and \(s = (1-\nu )(\alpha +1)\in (0,1).\)

  1. (i)

    For every \(y > 0,\) one has

    $$\begin{aligned} \int _0^\infty {\mathbb {E}}_{(0,y)} [(X^+_t)^{-\nu }]\, dt = (\alpha +1)^{1-\nu }{\varGamma }(1-s) \sin (\pi s (1-{\gamma }))\frac{{\varGamma }(1-\nu )^2 }{\pi }\,y^{s-1}\cdot \end{aligned}$$
  2. (ii)

    For every \(y < 0,\) one has

    $$\begin{aligned} \int _0^\infty {\mathbb {E}}_{(0,y)} [(X^+_t)^{-\nu }]\, dt = (\alpha +1)^{1-\nu }{\varGamma }(1-s)\sin (\pi {\gamma }s) \frac{{\varGamma }(1-\nu )^2 }{\pi }\, \vert y\vert ^{s-1}\cdot \end{aligned}$$
  3. (iii)

    For every \(x<0,\) one has

    $$\begin{aligned}&\int _0^\infty {\mathbb {E}}_{(x,0)} [(X^+_t)^{-\nu }]\, dt\\&\qquad =(\alpha +1)^{-\frac{\alpha }{\alpha +1}} \,\varGamma \left( \frac{1-s}{\alpha +1}\right) \sin (\pi {\gamma }){\varGamma }\left( \frac{1}{\alpha +1}\right) \frac{{\varGamma }(1-\nu ) }{\pi }\, \vert x\vert ^{\frac{s-1}{\alpha +1}}. \end{aligned}$$

Proof

Suppose first that \(x = 0\) and \(y\in {\mathbb {R}}.\) Integrating the expression on the right-hand side of Proposition 1 yields a double integral of the form

$$\begin{aligned} I_\nu&= \int _0^{\infty }\left( \int _0^{\infty } \lambda ^{\nu -1} e^{- c_{\alpha ,\rho }\lambda ^\alpha t^{1+\alpha }} \sin (\lambda yt + s_{\alpha ,\rho }\lambda ^\alpha t^{1+\alpha } +\nu \pi /2)\, d\lambda \right) dt\\&= \int _0^{\infty }\left( \int _0^{\infty } r^{\nu -1} e^{-c_{\alpha ,\rho }r^\alpha } \sin (ryt^{-1/\alpha } + s_{\alpha ,\rho }r^\alpha +\nu \pi /2)\, dr\right) t^{-\nu (1+1/\alpha )}\, dt\\&= \int _0^{\infty }\left( \int _0^{\infty } r^{\nu -1} e^{-c_{\alpha ,\rho }r^\alpha } \sin (ryu + s_{\alpha ,\rho }r^\alpha +\nu \pi /2)\, dr\right) \alpha u^{-s}\, du\\&= \int _0^{\infty }\left( \int _0^{\infty } \alpha u^{-s} \sin (ryu + s_{\alpha ,\rho }r^\alpha +\nu \pi /2)\, du\right) r^{\nu -1} e^{-c_{\alpha ,\rho }r^\alpha } \, dr, \end{aligned}$$

where the first, resp. second, equality comes from the change of variable \(\lambda t^{1+1/\alpha }=r\), resp. \(u= t^{-1/\alpha }\), and the switching of the integrals in the third equality is made exactly as in Lemma 1, using the fact that \(s\in (0,1)\) and \(s+\nu > 1.\)

Suppose first \(y > 0.\) We start by computing the integral in \(u\) with the help of formulæ  (2.1) and (2.2) and some trigonometry:

$$\begin{aligned}&\alpha \int _0^{\infty } u^{-s} \sin (ryu + s_{\alpha ,\rho }r^\alpha +\nu \pi /2)\, du\\&\quad = \alpha \varGamma (1-s) \cos ((s-\nu )\pi /2 - s_{\alpha ,\rho }r^\alpha ) (yr)^{s-1}. \end{aligned}$$

We then compute the integral in \(r\) with the change of variable \(z=r^\alpha ,\) using the notation \(Z = e^{\mathrm{i}\pi \alpha (\rho - 1/2)}\):

$$\begin{aligned} I_\nu&= \alpha \varGamma (1-s) \,y^{s-1}\int _0^{\infty } r^{\alpha (1-\nu )-1} e^{-c_{\alpha ,\rho }r^\alpha } \cos ((s-\nu )\pi /2 - s_{\alpha ,\rho }r^\alpha ) \, dr\\&= \varGamma (1-s)\,y^{s-1} \int _0^{\infty } z^{-\nu } e^{-c_{\alpha ,\rho }z} \cos ((s-\nu )\pi /2 - s_{\alpha ,\rho }z ) \, dz\\&= (1+\alpha )^{1-\nu }{\varGamma }(1-s){\varGamma }(1-\nu )\mathfrak {R}( e^{\mathrm{i}\pi (s-\nu )/2} Z^{\nu -1})\,y^{s-1}\\&= (1+\alpha )^{1-\nu }{\varGamma }(1-s){\varGamma }(1-\nu )\sin (\pi s (1-{\gamma }))\,y^{s-1}, \end{aligned}$$

where the third line follows after some algebraic simplifications. By Proposition 1, this completes the proof of (i).

Suppose now \(y < 0.\) An analogous computation to the above one shows that

$$\begin{aligned} \alpha \int _0^{\infty } u^{-s} \sin (ryu + s_{\alpha ,\rho }r^\alpha +\nu \pi /2)\, du = \alpha \varGamma (1-s) \sin ((s+\nu -1)\pi /2 + s_{\alpha ,\rho }r^\alpha ) \vert yr\vert ^{s-1}. \end{aligned}$$

The integral in \(r\) is then computed in the same way as before and yields the formula

$$\begin{aligned} I_\nu&= (1+\alpha )^{1-\nu }{\varGamma }(1-s){\varGamma }(1-\nu )\,\mathfrak {I}( e^{\mathrm{i}\pi (s+\nu -1)/2} {\bar{Z}}^{\nu -1})\,\vert y\vert ^{s-1}\\&= (1+\alpha )^{1-\nu }{\varGamma }(1-s){\varGamma }(1-\nu )\sin (\pi {\gamma }s)\,\vert y\vert ^{s-1}, \end{aligned}$$

which completes the proof of (ii) by Proposition 1.

We last suppose \(x < 0\) and \(y =0.\) We again integrate the expression on the right-hand side of Eq. (2.3), making the changes of variable \(\lambda t^{1+1/\alpha }=r\) and \(u= t^{-(1+1/\alpha )}.\) This yields a double integral of the form

$$\begin{aligned} \frac{\alpha }{\alpha +1}\int _0^{\infty }\left( \int _0^{\infty } r^{\nu -1} e^{-c_{\alpha ,\rho }r^\alpha } \sin (rxu + s_{\alpha ,\rho }r^\alpha +\nu \pi /2)\, dr\right) u^{-(s+\alpha )/(1+\alpha )}\, du, \end{aligned}$$

where we can switch the orders of integration as in Lemma 1 because \((s+\alpha )/(1+\alpha )\in (0,1)\) and \((s+\alpha )/(1+\alpha ) + \nu > 1.\) We then compute the integral in \(u\) similarly as above and find

$$\begin{aligned} \frac{\alpha }{\alpha +1} \,\varGamma \left( \frac{1-s}{\alpha +1}\right) \,\sin \left( \frac{\pi \alpha }{2(\alpha +1)} + s_{\alpha ,\rho }r^\alpha \right) \, \vert xr\vert ^{\frac{s-1}{\alpha +1}}. \end{aligned}$$

We finally compute the integral in \(r\) with the change of variable \(r = z^{1/\alpha },\) and get after some algebraic manipulations

$$\begin{aligned} I_\nu = (\alpha +1)^{-\frac{\alpha }{\alpha +1}}\sin (\pi {\gamma }){\varGamma }\left( \frac{1}{\alpha +1}\right) \,\varGamma \left( \frac{1-s}{\alpha +1}\right) \vert x\vert ^{\frac{s-1}{\alpha +1}}, \end{aligned}$$

which completes the proof of (iii) by Proposition 1. \(\square \)
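The closed forms of Proposition 2 can likewise be tested against direct numerical integration of (2.3). The sketch below treats case (ii) in the symmetric situation \(\rho =1/2,\) where \({\gamma }=\alpha /(2(\alpha +1));\) the nested quadrature is slow and rough, but adequate as a sanity check.

```python
# Numerical check of Proposition 2 (ii), symmetric case: integrate (2.3) in t.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

alpha, nu, y = 1.5, 0.9, -1.0
s = (1 - nu) * (alpha + 1)           # nu > alpha/(alpha+1), so s is in (0,1)
g = alpha / (2 * (alpha + 1))        # gamma in the symmetric case
c = 1.0 / (alpha + 1)                # c_{alpha,rho}; s_{alpha,rho} = 0

def inner(t):
    f = lambda lam: (lam**(nu - 1) * np.exp(-c * lam**alpha * t**(alpha + 1))
                     * np.sin(lam * y * t + np.pi * nu / 2))
    return Gamma(1 - nu) / np.pi * quad(f, 0, np.inf, limit=200)[0]

lhs = quad(inner, 0, np.inf, limit=200)[0]
rhs = ((alpha + 1)**(1 - nu) * Gamma(1 - s) * np.sin(np.pi * g * s)
       * Gamma(1 - nu)**2 / np.pi * abs(y)**(s - 1))
print(f"numerical {lhs:.4f} vs closed form {rhs:.4f}")
```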

Remark 1

It seems hard to find an explicit formula in general for

$$\begin{aligned} \int _0^\infty {\mathbb {E}}_{(x,y)} [(X^+_t)^{-\nu }]\, dt \end{aligned}$$

when \((x,y)\) is not on a coordinate axis. In the symmetric Cauchy case, some further computations show that the integral equals

$$\begin{aligned} \frac{1}{\sin (\pi \nu )}\,\mathfrak {I}\left( \int _0^\infty (-(x+yt +\mathrm{i}t^2/2))^{-\nu }\,dt\right) . \end{aligned}$$

This can be rewritten with the hypergeometric function, but apparently not in a tractable manner when \(x y \ne 0.\)

3 Proof of Theorem B

The following lemma shows the aforementioned and intuitively obvious fact that \(T_0\) is a proper random variable for any starting point.

Lemma 3

For all \(x, y\in {\mathbb {R}}\) one has \({\mathbb {P}}_{(x,y)}[ T_0 < +\infty ] = 1.\)

Proof

Suppose first \(x =-1\) and \(y=0.\) Then

$$\begin{aligned} {\mathbb {P}}_{(-1,0)}[ T_0 = +\infty ] = {\mathbb {P}}_{(0,0)}[ X_\infty ^* < 1] \,= \,{\mathbb {P}}_{(0,0)}[ X_\infty ^* =0] \,\le \, {\mathbb {P}}_{(0,0)}[ X_1 \le 0] \,<\, 1, \end{aligned}$$

where the second equality comes from the self-similarity of \(X\) and the strict inequality from the fact that \(X_1\) is a two-sided stable random variable—see Lemma 2. On the other hand, setting \(T = \inf \{ t > 0, X_t > 0\},\) it is clear by self-similarity that under the probability measure \({\mathbb {P}}_{(0,0)}\) one has

$$\begin{aligned} T\, \mathop {=}\limits ^{d}\, k T \end{aligned}$$

for all \(k > 0.\) In particular, \({\mathbb {P}}_{(0,0)}[T\in \{0, +\infty \}] = 1.\) Moreover, the zero-one law for the Markov process \((X,L)\) entails that \({\mathbb {P}}_{(0,0)}[T =0]\) is \(0\) or \(1.\) Since \({\mathbb {P}}_{(0,0)}[T=+\infty ] = {\mathbb {P}}_{(0,0)}[ X_\infty ^* =0] < 1,\) we get \({\mathbb {P}}_{(0,0)}[T=+\infty ] = 0\) whence \({\mathbb {P}}_{(-1,0)}[ T_0 = +\infty ] = 0\) as desired. Notice that it also entails \({\mathbb {P}}_{(0,0)}[T=0] = 1,\) as mentioned in the introduction.

Using again self-similarity, this entails \({\mathbb {P}}_{(x,0)}[ T_0 <+\infty ] = 1\) for all \(x\le 0,\) and also for all \(x \ge 0\) by considering the dual process \(-L.\) The fact that \({\mathbb {P}}_{(x,y)}[ T_0 <+\infty ] = 1\) for all \(x,y\) such that \(xy < 0\) follows then by a comparison of the sample paths.

Suppose now that \(x \le 0\) and \( y <0.\) Introduce the stopping time \(S = \inf \{t > 0, L_t > 0\},\) which is a.s. finite under \({\mathbb {P}}_{(x,y)}\) because \(\vert L\vert \) is not a subordinator. It is clear that \(L_S \ge 0\) and \(X_S < 0\) a.s. Applying the strong Markov property, we see from the above cases that

$$\begin{aligned} {\mathbb {P}}_{(x,y)}[ T_0 = +\infty ] \, \le \, {\mathbb {E}}_{(x,y)}[ {\mathbb {P}}_{(X_S,L_S)}[ T_0 = +\infty ]] = 0. \end{aligned}$$

The same argument holds for \(x\ge 0\) and \(y > 0.\) \(\square \)

Assume now \(x < 0\) or \(x=0\) and \(y < 0.\) It is clear that at \(T_0\) the process \(X\) has a non-negative speed, which entails by right-continuity that \(L_{T_0} \ge 0\) a.s. Applying the Markov property at \(T_0\) entails

$$\begin{aligned} {\mathbb {P}}_{(x,y)}[X_t \in du] = \int _0^\infty \int _0^t {\mathbb {P}}_{(0,z)}[ X_{t-s} \in du]\,{\mathbb {P}}_{(x,y)}[T_0\in ds, L_{T_0}\in dz] \end{aligned}$$
(3.1)

for all \(t, u > 0.\) Integrating in time yields after a change of variable and Fubini’s theorem

$$\begin{aligned} \int _0^\infty {\mathbb {P}}_{(x,y)}[X_t \in du]\, dt = \int _0^\infty \left( \int _0^\infty {\mathbb {P}}_{(0,z)}[ X_t\in du]\,dt\right) {\mathbb {P}}_{(x,y)}[L_{T_0}\in dz] \end{aligned}$$

for all \(u > 0.\) Integrating in space along \(u^{-\nu }\) and applying again Fubini’s theorem finally shows the general formula

$$\begin{aligned} \int _0^\infty {\mathbb {E}}_{(x,y)} [(X^+_t)^{-\nu }] dt&= \int _0^\infty {\mathbb {P}}_{(x,y)}[L_{T_0} \in dz] \left( \int _0^\infty {\mathbb {E}}_{(0,z)} [(X^+_t)^{-\nu }]\, dt\right) \end{aligned}$$
(3.2)

which is valid for all \(\nu \in {\mathbb {R}},\) with possibly infinite values on both sides.

3.1 Proof of (i)

Assume \(x =0\) and \(y < 0.\) Fixing \(\nu \in (\alpha /(\alpha +1), 1),\) a straightforward application of Proposition 2 (i) and (ii) shows that both sides of (3.2) are finite, which leads to

$$\begin{aligned} {\mathbb {E}}_{(0,y)} [L_{T_0}^{s-1}] = \vert y\vert ^{s-1}\left( \frac{\sin (\pi {\gamma }s)}{\sin (\pi (1-{\gamma })s)}\right) \end{aligned}$$

for all \(s\in (0,1).\) The formula extends then to \(\{\vert s \vert < 1/(1-{\gamma })\}\) by analytic continuation. On the other hand, for all \(\mu \in (0,1)\) and \(s\in (-1,1),\) the formula

$$\begin{aligned} \int _0^\infty \frac{\sin (\pi \mu ) x^s}{\pi \mu (x^2 + 2 \cos (\pi \mu ) x + 1)}\, dx = \frac{\sin (\pi \mu s)}{\mu \sin (\pi s)} \end{aligned}$$

is a simple and well-known consequence of the residue theorem. Recalling that

$$\begin{aligned} \chi = \frac{{\gamma }}{1-{\gamma }} \in (0,1), \end{aligned}$$

so that \(\chi (1-{\gamma }) = {\gamma }\) and, by the above formula, \({\mathbb {E}}[\mathbf{C}_\chi ^{1-{\gamma }}] = \sin (\pi {\gamma })/(\chi \sin (\pi (1-{\gamma }))) = 1/\chi ,\) we deduce from the definition of \(\mathbf{C}_\mu \) that

$$\begin{aligned} {\mathbb {E}}_{(0,y)} [L_{T_0}^{s-1}] = \vert y\vert ^{s-1}\,\chi \, {\mathbb {E}}[\mathbf{C}_\chi ^{(1-{\gamma })s}] = \vert y\vert ^{s-1}\,\frac{{\mathbb {E}}[\mathbf{C}_\chi ^{(1-{\gamma })s}]}{{\mathbb {E}}[\mathbf{C}_\chi ^{1-{\gamma }}]} \end{aligned}$$

for all \(\vert s \vert < 1/(1-{\gamma }).\) The right-hand side being the Mellin transform of \(\vert y\vert (\mathbf{C}_\chi ^{1-{\gamma }})^{(1)}\) evaluated at \(s-1,\) this concludes the proof of (i) by Mellin inversion. \(\square \)
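The identity just established lends itself to a quick simulation check, reusing the inverse-CDF sampler for \(\mathbf{C}_\mu \) sketched in Sect. 1 together with size-bias importance weights; this is a hedged numerical aside (the parameter values are arbitrary, and the heavy-tailed weights make the estimate noisy).

```python
# Monte Carlo check of E_{(0,-1)}[L_{T_0}^{s-1}] = sin(pi*g*s)/sin(pi*(1-g)*s),
# sampling (C_chi^{1-gamma})^{(1)} via inverse CDF plus size-bias weights.
import numpy as np

rng = np.random.default_rng(5)
alpha, rho = 1.5, 0.6                          # illustrative values
g = rho * alpha / (1 + alpha)                  # gamma
chi = rho * alpha / (1 + alpha * (1 - rho))
s = 0.5

u = rng.uniform(size=10**6)
c = np.sin(np.pi * chi) / np.tan(np.pi * chi * u) - np.cos(np.pi * chi)
w = c ** (1.0 - g)                             # samples of C_chi^{1-gamma}
mc = np.sum((w / w.sum()) * w ** (s - 1.0))    # size bias of order 1
exact = np.sin(np.pi * g * s) / np.sin(np.pi * (1 - g) * s)
print(f"Monte Carlo {mc:.4f} vs formula {exact:.4f}")
```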

3.2 Proof of (ii)

Assume \(x < 0\) and \(y =0.\) Another application of (3.2) combined with Proposition 2 (i) and (iii) shows that

$$\begin{aligned} {\mathbb {E}}_{(x,0)}[L_{T_0}^{s-1}] = \frac{(1+\alpha )^{\frac{1-s}{1+\alpha }}{\varGamma }\left( \frac{\alpha +2}{\alpha +1}\right) {\varGamma }\left( \frac{1-s}{\alpha +1}\right) \sin (\pi {\gamma })}{{\varGamma }\left( \frac{s}{\alpha +1}\right) {\varGamma }(1-s)\sin (\pi s(1-{\gamma }))}\,\vert x\vert ^{\frac{s-1}{\alpha +1}} \end{aligned}$$
(3.3)

for all \(s\in (0,1).\) A simple analysis of the Gamma factors shows that the right-hand side remains finite for all \(\vert s \vert < 1/(1-{\gamma }),\) so that the identity extends to this strip by analytic continuation. \(\square \)

3.3 Some further Mellin inversions

In this paragraph we would like to invert (3.3) for certain values of the parametrization \((\alpha , \rho ).\) Without loss of generality we set \(x = -1, y =0.\) Applying the complement formula for the Gamma function, we first deduce from (3.3) the identity:

$$\begin{aligned} {\mathbb {E}}_{(-1,0)}[L_{T_0}^{s-1}] = (1+\alpha )^{\frac{1-s}{1+\alpha }}\;\frac{{\varGamma }\left( \frac{\alpha +2}{\alpha +1}\right) {\varGamma }\left( \frac{1-s}{\alpha +1}\right) {\varGamma }(s(1-{\gamma })){\varGamma }(1-s(1-{\gamma }))}{{\varGamma }\left( \frac{s}{\alpha +1}\right) {\varGamma }(1-s){\varGamma }({\gamma }){\varGamma }(1-{\gamma })} \end{aligned}$$
(3.4)

for \(\vert s \vert < 1/(1-{\gamma }).\) In particular, this shows that the random variable \(L_{T_0}\) has moments of Gamma type (see [12] for a recent survey).

3.3.1 The Cauchy case

We have \(\alpha = 1\) and \(\rho \in (0,1),\) whence \({\gamma }=\rho /2\in (0,1/2).\) As mentioned in the introduction, set

$$\begin{aligned} \delta = \frac{1}{2(1-{\gamma })} = \frac{1+\chi }{2}\; \in (1/2, 1). \end{aligned}$$

Applying the Legendre-Gauss multiplication formula transforms (3.4) into

$$\begin{aligned} {\mathbb {E}}_{(-1,0)}[L_{T_0}^{s-1}] = 2^{\frac{s-1}{2}}\;\times \; \frac{\sin (\pi {\gamma })\sin (\pi s/2)}{\sin (\pi s(1-{\gamma }))}\cdot \end{aligned}$$

As above, this entails that under \({\mathbb {P}}_{(x,0)}\) one has

$$\begin{aligned} L_{T_0}\, \mathop {=}\limits ^{d}\, \sqrt{2\vert x\vert }\, (\mathbf{C}_\delta ^{1-{\gamma }})^{(1)}, \end{aligned}$$

which provides a striking similarity with the law of \(L_{T_0}\) under \({\mathbb {P}}_{(0,y)}\) for \(y < 0.\) Notice that these two laws are however never the same, because \(\delta \ne \chi .\)

3.3.2 The Brownian case

We have \(\alpha =2, \rho = \chi = 1/2\) and \({\gamma }= 1/3.\) Applying three times the Legendre-Gauss multiplication formula and simplifying the quotients shows

$$\begin{aligned} {\mathbb {E}}_{(-1,0)}[L_{T_0}^{s-1}] = 9^{\frac{s-1}{3}}\;\times \;\frac{{\varGamma }(1/2 +s/3)}{{\varGamma }(5/6)}\;\times \;\frac{{\varGamma }(1/2 -s/3){\varGamma }(1/3)}{{\varGamma }(2/3 -s/3){\varGamma }(1/6)} \end{aligned}$$

for all \(s\in (0,1).\) Inverting the Mellin transform, this entails that under \({\mathbb {P}}_{(x,0)}\) one has

$$\begin{aligned} L_{T_0}\, \mathop {=}\limits ^{d}\, \vert 9x\vert ^{1/3}\left( \frac{{\varvec{\Gamma }}_{5/6}}{\mathbf{B}_{1/6,1/6}}\right) ^{1/3}, \end{aligned}$$

where \({\varvec{\Gamma }}_c\) resp. \(\mathbf{B}_{a,b}\) stands for the standard Gamma resp. Beta random variable, the two factors in the quotient being assumed independent. Gor'kov [11] provides an expression of the density of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)}\) in terms of the confluent hypergeometric function—see also formula (3) in [13]. It seems however that the above simple identity in law has passed unnoticed in the literature on integrated Brownian motion.
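This identity in law is straightforward to test by simulation, since Gamma and Beta samplers are standard; the following is a quick sanity check with an arbitrary admissible \(s.\)

```python
# Monte Carlo check of the Brownian identity (x = -1): compare E[L_{T_0}^{s-1}]
# for L = 9^{1/3} (Gamma_{5/6}/Beta_{1/6,1/6})^{1/3} with the Mellin transform.
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(6)
s, n = 0.5, 10**6
G = rng.gamma(5.0 / 6.0, size=n)               # Gamma_{5/6}
B = rng.beta(1.0 / 6.0, 1.0 / 6.0, size=n)     # Beta_{1/6,1/6}, independent
L = 9.0 ** (1.0 / 3.0) * (G / B) ** (1.0 / 3.0)
mc = np.mean(L ** (s - 1.0))
exact = (9.0 ** ((s - 1) / 3) * Gamma(0.5 + s / 3) / Gamma(5.0 / 6.0)
         * Gamma(0.5 - s / 3) * Gamma(1.0 / 3.0)
         / (Gamma(2.0 / 3.0 - s / 3) * Gamma(1.0 / 6.0)))
print(f"Monte Carlo {mc:.4f} vs Mellin transform {exact:.4f}")
```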

Remark 2

It is well-known that \(\log ({\varvec{\Gamma }}_c)\) and \(\log (\mathbf{B}_{a,b})\) are infinitely divisible random variables, and this property is hence also shared by \(\log (L_{T_0})\) under \({\mathbb {P}}_{(x,0)}.\) The question whether \(L_{T_0}\) itself is infinitely divisible is an interesting open problem for Brownian motion.

3.3.3 The other cases

When \(\alpha \in (0,1)\cup (1,2),\) the law of \(L_{T_0}\) can be expressed as a more complicated product involving the standard positive \(\mu \)-stable random variable \(\mathbf{Z}_\mu , \mu \in (0,1).\) Recall that the latter is characterized through its Mellin transform by

$$\begin{aligned} {\mathbb {E}}[\mathbf{Z}_\mu ^{s}] = \frac{{\varGamma }\left( 1-\frac{s}{\mu }\right) }{{\varGamma }(1-s)}, \qquad s < \mu . \end{aligned}$$

Suppose first that \(\alpha \in (0,1).\) We then have \(\rho \in (0,1),{\gamma }\in (0,1/2)\) and \(\chi \in (0,1).\) Introducing the further parameters

$$\begin{aligned} \eta = \frac{1}{(\alpha +1)(1-{\gamma })} = \frac{1}{1+\alpha (1-\rho )}\; \in (1/2, 1)\qquad \text{ and }\qquad \sigma = \frac{\alpha +1}{2}\in (1/2,1), \end{aligned}$$

another application of the Legendre-Gauss formula shows that under \({\mathbb {P}}_{(x,0)}\) one has

$$\begin{aligned} L_{T_0}\, \mathop {=}\limits ^{d}\, 2\,\left| \frac{x}{\alpha +1}\right| ^{\frac{1}{\alpha +1}}\,\mathbf{Z}_\sigma ^{\frac{1}{2}}\,\times \left( \frac{\mathbf{Z}_\delta ^{\frac{1}{2}}}{\mathbf{Z}_\eta ^{\frac{1}{\alpha +1}}}\right) ^{(1)}\!, \end{aligned}$$

which is an extension of the Cauchy case since when \(\alpha =1\) the first multiplicand is \(\mathbf{Z}_1^{\frac{1}{2}} = \mathbf{1}.\)

Suppose next that \(\alpha \in (1,2)\) and assume that \({\gamma }\le 1/3.\) Notice that this assumption is fulfilled in the spectrally negative case, where \(\rho \!=\! 1-1/\alpha \) viz. \({\gamma }\!= \!(\alpha -1)/(\alpha +1) < 1/3.\) Analogous computations lead to the identity in law

$$\begin{aligned} L_{T_0}\; \mathop {=}\limits ^{d}\; 3\cdot 2^{-2/3}\left| \frac{x}{\alpha +1}\right| ^{\frac{1}{\alpha +1}}\,\times \,\left( \frac{\mathbf{Z}_{\frac{\alpha +1}{3}}}{\mathbf{B}_{1/6,1/6}}\right) ^{\frac{1}{3}}\times \;\left( \frac{\mathbf{Z}_{\frac{2}{3(1-{\gamma })}}^{2/3}}{\mathbf{Z}_{\frac{1}{1+\alpha (1-\rho )}}^{\frac{1}{\alpha +1}}}\right) ^{(1)}\!, \end{aligned}$$

which is an extension of the Brownian case since when \(\alpha =2\) the first multiplicand is \(\mathbf{B}_{1/6,1/6}^{-1/3},\) whereas the second one reads

$$\begin{aligned} \left( \mathbf{Z}_{1/2}^{-1/3}\right) ^{(1)} \mathop {=}\limits ^{d}\; 2^{2/3} \left( {\varvec{\Gamma }}_{1/2}^{1/3}\right) ^{(1)}\mathop {=}\limits ^{d}\; 2^{2/3} {\varvec{\Gamma }}_{5/6}^{1/3}. \end{aligned}$$

The case \({\gamma }> 1/3\) is however more mysterious and the factorization of \(L_{T_0}\) seems then to require less classical random variables than the Beta, Gamma, and positive stable ones.

4 Proof of Theorem A

We first reduce the problem to the situation where the bivariate process \((X,L)\) starts from a coordinate axis.

Lemma 4

Assume that \(x < 0.\) For all \(y\in {\mathbb {R}}\) one has

$$\begin{aligned} {\mathbb {P}}_{(x,y)}[T_0 > t] \asymp {\mathbb {P}}_{(x, 0)}[T_0 > t], \qquad t\rightarrow +\infty . \end{aligned}$$

Proof

Fix \(t > 1\) and suppose first that \(y > 0.\) One has \({\mathbb {P}}_{(x,y)}[T_0 > t]\le {\mathbb {P}}_{(x,0)}[T_0 > t]\) by a direct comparison of the sample paths. On the other hand,

$$\begin{aligned} {\mathbb {P}}_{(x,y)}[T_0 > t]&\ge {\mathbb {P}}_{(x,y)}[X_1 < x, L_1 < 0, T_0 > t]\\&= {\mathbb {E}}_{(x,y)} \left[ \mathbf{1}_{\{X_1 < x, X_1^* < 0, L_1 < 0\}} {\mathbb {P}}_{(X_1, L_1)}[T_0 > t-1]\right] \\&\ge {\mathbb {E}}_{(x,y)} \left[ \mathbf{1}_{\{X_1 < x, X_1^* < 0, L_1 < 0\}}{\mathbb {P}}_{(x, 0)}[T_0 > t-1]\right] \; \ge c\, {\mathbb {P}}_{(x, 0)}[T_0 > t] \end{aligned}$$

for some \(c >0,\) where the equality follows from the Markov property, the second inequality from a comparison of the sample paths, and the third inequality from a support theorem in uniform norm for the Lévy process \(L\). More precisely, since the Lévy measure of \(L\) has full support, it follows from Corollary 1 in [22] that \({\mathbb {P}}_{(x,y)}[ \sup _{t\le 1} \vert L_t - f(t)\vert \le \varepsilon ] > 0\) for every continuous function \( f : [0,1]\rightarrow {\mathbb {R}}\) such that \(f(0) =y\) and every \(\varepsilon > 0.\) In particular, choosing an appropriate function \(f\) shows that \({\mathbb {P}}_{(x,y)}[X_1 < x, X_1^* < 0, L_1 < 0] > 0.\)

Fix again \(t > 1\) and suppose now that \(y < 0.\) Then \({\mathbb {P}}_{(x,y)}[T_0 > t]\ge {\mathbb {P}}_{(x,0)}[T_0 > t],\) and similarly as above one has

$$\begin{aligned} {\mathbb {P}}_{(x, 0)}[T_0 > t]&\ge {\mathbb {E}}_{(x, 0)} \left[ \mathbf{1}_{\{X_1 < x, X_1^* < 0, L_1 < y\}} {\mathbb {P}}_{(X_1, L_1)}[T_0 > t-1]\right] \\&\ge {\mathbb {P}}_{(x, 0)} [X_1 < x, X_1^* < 0, L_1 < y]\,\times \,{\mathbb {P}}_{(x,y)}[T_0 > t-1]\\&\ge c\, {\mathbb {P}}_{(x,y)}[T_0 > t] \end{aligned}$$

for some \(c > 0.\) This completes the proof. \(\square \)

In the remainder of this section, we will implicitly assume, without loss of generality, that

$$\begin{aligned} \{x=0, y <0\}\qquad \text{ or }\qquad \{x < 0, y =0\}. \end{aligned}$$

We start by studying the asymptotics at infinity of the density function of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)},\) which we denote by \(f^0_{x,y}.\)

Lemma 5

There exists \(c > 0\) such that

$$\begin{aligned} f^0_{x,y}(z) \sim c z^{-1/(1-{\gamma })}, \qquad z \rightarrow +\infty . \end{aligned}$$

Proof

If \(x =0,\) the asymptotic behaviour is a direct consequence of the explicit expression of \(f^0_{0,y}\) which is given in Theorem B (i). If \(y =0,\) Theorem B (ii) shows that the first positive pole of the Mellin transform of \(L_{T_0}\) under \({\mathbb {P}}_{(x,0)}\) is located at \(1/(1-{\gamma })\) and is simple. The required asymptotics for \(f^0_{x,0}\) is then a consequence of a converse mapping theorem for Mellin transforms (see Theorem 4 in [9] or Theorem 6.4 in [12]). \(\square \)

Remark 3

  1. (a)

    The converse mapping theorem for Mellin transforms yields also an explicit expression for the constant \(c\), but we shall not need this information in the sequel.

  2. (b)

    We believe that the above asymptotic remains true for \(x < 0\) and all \(y\ne 0.\) However, the Mellin transform of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)}\) is then expressed with the help of a double integral which is absolutely divergent, and whose singularities are difficult to study at first sight.

  3. (c)

    This lemma entails by integration that

    $$\begin{aligned} {\mathbb {P}}_{(x,y)}[L_{T_0} > z] \sim c\chi ^{-1}\, z^{-\chi }, \qquad z \rightarrow +\infty . \end{aligned}$$

    Heuristically, it is tempting to write \(L_{T_0} = T_0^{1/\alpha } \vert L_1\vert \) by scaling and, since \({\mathbb {P}}_{(x,y)}[ \vert L_1\vert > z] \sim c z^{-\alpha } \ll z^{-\chi }\) at infinity, to infer that

    $$\begin{aligned} {\mathbb {P}}_{(x,y)}[T_0 > t] \asymp t^{-\chi /\alpha } = t^{-\theta }, \qquad t \rightarrow +\infty . \end{aligned}$$

    This explains the equivalence between finite moments stated in the introduction. We will prove in the remainder of this section that these heuristics are actually correct.

The following lemma provides our key-estimate.

Lemma 6

For all \(\nu \in (\alpha (1-\theta )/(\alpha +1), 1)\) there exists \(c > 0\) such that

$$\begin{aligned} {\mathbb {E}}_{(x,y)}\left[ \int _0^t \mathbf{1}_{\{T_0>t-u\}}\,{\mathbb {E}}_{(0,L_{T_0})}\left[ (X_u^+)^{-\nu }\right] du \right] \sim c\, t^{1-(1+1/\alpha )\nu - \theta },\qquad t\rightarrow +\infty . \end{aligned}$$

Proof

We first assume \(\nu \in (\alpha /(\alpha +1), 1)\) and transform the expression on the left-hand side. From (3.1), Fubini’s theorem, and the Markov property, we have

$$\begin{aligned}&\int _0^{\infty } e^{-\lambda t}\,{\mathbb {E}}_{(x,y)}\left[ (X_t^+)^{-\nu }\right] dt\\&\quad = {\mathbb {E}}_{(x,y)}\left[ e^{-\lambda T_0} \int _0^{\infty } e^{-\lambda t}\,{\mathbb {E}}_{(0,L_{T_0})}\left[ (X_t^+)^{-\nu }\right] dt \right] , \quad \lambda \ge 0, \end{aligned}$$

both sides being finite thanks to Proposition 2. Integrating by parts shows then, with the help of (3.2) and Proposition 2, that

$$\begin{aligned}&\lambda \int _0^{\infty } e^{-\lambda t}\int _t^{\infty } \left( {\mathbb {E}}_{(x,y)}\left[ (X_u^+)^{-\nu }\right] - {\mathbb {E}}_{(x,y)}\left[ {\mathbb {E}}_{(0,L_{T_0})}\left[ (X_u^+)^{-\nu }\right] \right] \right) du\, dt\\&\qquad ={\mathbb {E}}_{(x,y)}\left[ (1-e^{-\lambda T_0})\int _0^{\infty } e^{-\lambda t}{\mathbb {E}}_{(0,L_{T_0})}\left[ (X_t^+)^{-\nu }\right] dt \right] \\&\qquad ={\mathbb {E}}_{(x,y)}\left[ \int _0^{\infty } \lambda \,e^{-\lambda t}\left( \int _0^t \mathbf{1}_{\{T_0>t-u\}} {\mathbb {E}}_{(0,L_{T_0})}\left[ (X_t^+)^{-\nu }\right] du\right) dt \right] . \end{aligned}$$

Inverting the Laplace transforms yields that

$$\begin{aligned} {\mathbb {E}}_{(x,y)}\left[ \int _0^t \mathbf{1}_{\{T_0>t-u\}}{\mathbb {E}}_{(0,L_{T_0})}\left[ (X_u^+)^{-\nu }\right] du \right] \; = \; H_{(x,y)}(t), \end{aligned}$$
(4.1)

with the notation

$$\begin{aligned} H_{(x,y)}(t) = \int _t^{+\infty } \left( {\mathbb {E}}_{(x,y)}\left[ (X_u^+)^{-\nu }\right] - {\mathbb {E}}_{(x,y)}\left[ {\mathbb {E}}_{(0,L_{T_0})}\left[ (X_u^+)^{-\nu }\right] \right] \right) du, \quad t >0. \end{aligned}$$
(4.2)

It remains therefore to compute the asymptotics of the function \(H_{(x,y)}\), which only depends on the law of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)}\). To this end, we shall compute the Mellin transform of \(H^\prime _{(x,y)}\) and apply a converse mapping theorem. Recall that \(s=(1-\nu )(1+\alpha )\) and let \(z\in {\mathbb {R}}\) such that \(s+\alpha z \in (0,1)\). Following the same computations as in Proposition 2, we deduce that:

  1. (i)

    for \(y>0\)

    $$\begin{aligned} \pi \int _0^{+\infty } t^z {\mathbb {E}}_{(0,y)}\left[ (X_t^+)^{-\nu }\right] dt&= (1+\alpha )^{1-\nu +z}\varGamma (1-\nu ) \varGamma (1-\nu +z)\varGamma (1-s-\alpha z) \\&\quad \times \, \sin (\pi (s+\alpha z) - \pi s \gamma - z\alpha \pi \rho ) \, y^{s+\alpha z -1} \end{aligned}$$
  2. (ii)

    for \(y<0\)

    $$\begin{aligned} \pi \int _0^{+\infty } t^z {\mathbb {E}}_{(0,y)}\left[ (X_t^+)^{-\nu }\right] dt&= (1+\alpha )^{1-\nu +z}\varGamma (1-\nu ) \varGamma (1-\nu +z)\varGamma (1-s -\alpha z)\\&\quad \times \, \sin (\pi s \gamma + z\alpha \pi \rho )\, |y|^{s+\alpha z -1}. \end{aligned}$$

Hence, the Mellin transform of \(H_{(0,y)}^\prime \) is explicitly given by

$$\begin{aligned}&\pi \int _0^{+\infty } t^z H_{(0,y)}^\prime (t) dt \\&\quad = (1+\alpha )^{1-\nu +z}\varGamma (1-\nu )\varGamma (1-\nu +z)\varGamma (1-s-\alpha z)\\&\quad \quad \times \left( \sin (\pi (s+\alpha z) - \pi s \gamma - z\alpha \pi \rho )\frac{\sin (\pi \gamma (s+\alpha z))}{\sin (\pi (1-\gamma ) (s+\alpha z))}\right. \\&\qquad \left. -\sin (\pi s \gamma + z\alpha \pi \rho ) \right) |y|^{s+\alpha z -1}. \end{aligned}$$

Observe that this formula extends by analytic continuation to \(\{|s+\alpha z|< 1/(1-\gamma )\}\). The pole at

$$\begin{aligned} z=\frac{1}{1-\gamma } - \frac{s}{\alpha } = \frac{1+\alpha }{1+\alpha (1-\rho )} - \frac{(1-\nu )(1+\alpha )}{\alpha } = \theta + \nu \left( 1+\frac{1}{\alpha }\right) -1 \end{aligned}$$

is simple and we may therefore apply the converse mapping theorem for Mellin transforms to obtain the asymptotics:

$$\begin{aligned} -H_{(0,y)}^\prime (t)&= {\mathbb {E}}_{(0,y)}\left[ (X_t^+)^{-\nu }\right] - {\mathbb {E}}_{(0,y)}\left[ {\mathbb {E}}_{(0,L_{T_0})}\left[ (X_t^+)^{-\nu }\right] \right] \nonumber \\&\quad {\mathop {\sim }\limits _{t \rightarrow +\infty }} c_1\, t^{-((1+1/\alpha )\nu +\theta )} \end{aligned}$$
(4.3)

for some \(c_1>0\). The announced result then follows by integration. In the case \(x<0\) and \(y=0\), we have similarly:

$$\begin{aligned} \pi \int _0^{+\infty } t^z {\mathbb {E}}_{(x,0)}\left[ (X_t^+)^{-\nu }\right] dt&= (1+\alpha )^{\frac{z-\alpha }{1+\alpha }}\varGamma (1-\nu )\varGamma \left( \frac{z+1}{\alpha +1}\right) \varGamma \left( \frac{1-s-\alpha z}{1+\alpha }\right) \\&\quad \times \,\sin \left( \pi \alpha \rho \frac{1+z}{1+\alpha }\right) \, |x|^{\frac{s+\alpha z+\alpha }{1+\alpha }-1} \end{aligned}$$

and we may proceed as before, by extending the Mellin transform of \(H_{(x,0)}^\prime \) to \(\{|s+\alpha z|< 1/(1-\gamma )\}\) and applying the converse mapping theorem to obtain once again:

$$\begin{aligned} {\mathbb {E}}_{(x,0)}\left[ (X_t^+)^{-\nu }\right] - {\mathbb {E}}_{(x,0)}\left[ {\mathbb {E}}_{(0,L_{T_0})}\left[ (X_t^+)^{-\nu }\right] \right] {\mathop {\sim }\limits _{t \rightarrow +\infty }} c_2\, t^{-((1+1/\alpha )\nu +\theta )}, \end{aligned}$$
(4.4)

for some \(c_2>0\). Suppose now \(\nu \in (\alpha (1-\theta )/(\alpha +1), \alpha /(\alpha +1)).\) The left-hand side of (4.1) is well-defined and the estimates (4.3) and (4.4) entail that the integral in (4.2) is absolutely convergent, because \((1+1/\alpha )\nu +\theta > 1\). By analytic continuation this shows that (4.1) remains valid for \(\nu \in (\alpha (1-\theta )/(\alpha +1), \alpha /(\alpha +1)),\) and the estimates (4.3) and (4.4) hold as well. This completes the proof, again by integration. \(\square \)

4.1 Proof of the upper bound

Fix \(A>0\) and \(\nu \in (\alpha /(\alpha +1), 1).\) By continuity and positivity there exists \(\varepsilon >0\) such that for all \(z\in [0,A]\),

$$\begin{aligned} \int _0^1 {\mathbb {E}}_{(0,z)}\left[ (X_u^+)^{-\nu } \right] \,du \;\ge \; \varepsilon . \end{aligned}$$

For all \(t > 0,\) we get from (4.1), a change of variable and the self-similarity of \(X\) and \(L\):

$$\begin{aligned} t^{(1+1/\alpha )\nu +\theta -1} H_{(x,y)}(t)&\ge t^{(1+1/\alpha )\nu +\theta -1}\,{\mathbb {E}}_{(x,y)}\left[ \mathbf{1}_{\{T_0>t\}}\int _0^t {\mathbb {E}}_{(0,L_{T_0})}\left[ (X_u^+)^{-\nu }\right] du\right] \\&= t^{\theta }\, {\mathbb {E}}_{(x,y)}\left[ \mathbf{1}_{\{T_0>t\}}\int _0^1 {\mathbb {E}}_{(0,\frac{1}{t^{1/\alpha }}L_{T_0})}\left[ (X_u^+)^{-\nu }\right] du\right] \\&\ge \varepsilon t^{\theta }\, {\mathbb {P}}_{(x,y)}[ T_0>t, L_{T_0}\le A t^{1/\alpha }]\\&\ge \varepsilon t^{\theta }\, \left( {\mathbb {P}}_{(x,y)}[T_0>t] - {\mathbb {P}}_{(x,y)}[T_0>t, L_{T_0}\ge A t^{1/\alpha }] \right) \\&\ge \varepsilon t^{\theta }\, \left( {\mathbb {P}}_{(x,y)}[T_0>t] - {\mathbb {P}}_{(x,y)} [L_{T_0}\ge A t^{1/\alpha }] \right) . \end{aligned}$$

On the one hand, Lemma 5 entails

$$\begin{aligned} \limsup _{t\rightarrow +\infty }\; t^{\theta }\,{\mathbb {P}}_{(x,y)}[L_{T_0} \ge At^{1/\alpha }] = K < +\infty . \end{aligned}$$

On the other hand, Lemma 6 shows that

$$\begin{aligned} t^{(1+1/\alpha )\nu +\theta -1} H_{(x,y)}(t)\; \rightarrow \; c > 0\qquad \text{ as } t\rightarrow +\infty . \end{aligned}$$

Putting everything together entails

$$\begin{aligned} t^\theta {\mathbb {P}}_{(x,y)}[T_0>t] \le {\widetilde{K}} \end{aligned}$$

for some finite \({\widetilde{K}}\) as soon as \(t\) is large enough. \(\square \)

4.2 Proof of the lower bound

We start with the following lemma:

Lemma 7

One has

$$\begin{aligned} \int _0^t {\mathbb {P}}_{(x,y)}[T_0>u]\, du\; \asymp \; t^{1-\theta }\qquad \text{ as } t\rightarrow +\infty . \end{aligned}$$

Proof

Firstly, integrating the above upper bound for \({\mathbb {P}}_{(x,y)}[T_0>t]\) entails the existence of a finite \(\kappa _2\) such that

$$\begin{aligned} \int _0^t {\mathbb {P}}_{(x,y)}[T_0>u]\, du\;\le \; \kappa _2 \,t^{1-\theta }\qquad \text{ as } t\rightarrow +\infty . \end{aligned}$$

To prove the lower inequality, we fix \(\nu \in (\alpha (1-\theta )/(1+\alpha ),\alpha /(1+\alpha ))\) and deduce from Proposition 1 the uniform bound

$$\begin{aligned} {\mathbb {E}}_{(0,y)}[(X^+_u)^{-\nu }] \le \frac{\varGamma (1-\nu )}{\pi } \int _0^{\infty } \! \lambda ^{\nu -1} e^{-c_{\alpha ,\rho }\lambda ^\alpha u^{\alpha +1}} \, d\lambda \le Ku^{-\nu (1+1/\alpha )}, \quad u >0, \end{aligned}$$
(4.5)

for some finite constant \(K.\) Set \(\eta =\nu (1+1/\alpha ) \in (0,1)\) and fix \(\varepsilon \in (0,1).\) Using (4.1) and (4.5) we decompose

$$\begin{aligned}&t^{\eta +\theta -1} H_{(x,y)}(t)\\&\quad \le K t^{\eta +\theta -1}\left( \int _0^{t(1-\varepsilon )} \frac{{\mathbb {P}}_{(x,y)}[T_0>u] }{(t-u)^{\eta }}\,du + \int _{t(1-\varepsilon )}^t \frac{{\mathbb {P}}_{(x,y)}[T_0>u] }{(t-u)^{\eta }}\,du\right) \\&\quad \le K \varepsilon ^{-\eta } t^{\theta -1}\int _0^{t} {\mathbb {P}}_{(x,y)}[T_0>u] \,du + \frac{K t^\theta \varepsilon ^{1-\eta }}{1-\eta } \,{\mathbb {P}}_{(x,y)}[T_0>t(1-\varepsilon )]\\&\quad \le {\widetilde{K}} \varepsilon ^{-\eta } \left( t^{\theta -1}\int _0^{t} {\mathbb {P}}_{(x,y)}[T_0>u] \,du + \varepsilon \right) \end{aligned}$$

for some finite \({\widetilde{K}}\), where the third inequality follows from the upper bound. Applying Lemma 6 and taking \(\varepsilon \) small enough shows finally that there exists \(\kappa _1> 0\) such that

$$\begin{aligned} \int _0^t {\mathbb {P}}_{(x,y)}[T_0>u]\, du\;\ge \; \kappa _1\, t^{1-\theta }\qquad \text{ as } t\rightarrow +\infty . \end{aligned}$$

\(\square \)

We can now finish the proof of the lower bound for \({\mathbb {P}}_{(x,y)}[T_0 > t]\). Fixing \(A > 0\) and applying the mean value theorem entails

$$\begin{aligned} A\, t^\theta \, {\mathbb {P}}_{(x,y)}[T_0>t] \ge t^{\theta -1} \int _t^{t+tA} {\mathbb {P}}_{(x,y)}[T_0>u] \,du \ge \kappa _1 (1+A)^{1-\theta } - \kappa _2 \end{aligned}$$

as \(t\rightarrow +\infty \), for some constants \(0 < \kappa _1 < \kappa _2 < \infty \) given by Lemma 7. Since \(\theta <1,\) the lower bound follows by choosing \(A\) large enough. \(\square \)