Abstract
We compute the persistence exponent of the integral of a stable Lévy process in terms of its self-similarity and positivity parameters. This solves a problem raised by Shi (Lower tails of some integrated processes. In: Small deviations and related topics, problem panel, 2003). Along the way, we investigate the law of the stable process \(L\) evaluated at the first time its integral \(X\) hits zero, when the bivariate process \((X,L)\) starts from a coordinate axis. This extends classical formulæ by McKean (J Math Kyoto Univ 2:227–235, 1963) and Gor’kov (Soviet Math Dokl 16:904–908, 1975) for integrated Brownian motion.
1 Introduction and statement of the results
Let \(X =\{ X_t, \, t\ge 0\}\) be a real-valued process starting at zero and \(T_x = \inf \{ t > 0, \; X_t > x\}\) be its first-passage time above a positive level \(x\). Studying the law of \(T_x\) is a classic problem in probability theory. In general, it is difficult to obtain an explicit expression of this law. However, it has been observed that in many interesting cases the survival function has a polynomial decay:
where \(\theta \) is a positive constant which is called the persistence exponent and usually does not depend on \(x\). The computation of persistence exponents turns out to have connections with various problems in probability and mathematical physics.
Physicists consider the persistence exponent to be a parameter providing crucial insight into the whole history of a process, more informative than its correlation structure. The persistence exponent has been measured experimentally in several situations (fluctuating interfaces, breath figures, nematic systems), and we refer to the recent survey paper [3] for a list of observations, simulations and rigorous results in this field. The question is also attractive for mathematicians since, up to now, very few rigorous computations have actually been performed, especially in the non-Markovian framework.
A central result in this topic is Goldman-Sinai’s evaluation of the persistence exponent \(\theta = 1/4\) for the integrated Brownian motion [10, 25]. There are three natural generalizations of this result, all still in search of a proof. The first one is the persistence exponent for twice integrated, or more generally \(n\)th time integrated, Brownian motion. This simply stated open problem on Brownian motion is believed to be very challenging. Some numerical evaluations have been performed by physicists—see again [3]—but they have not led to a precise conjecture on \(\theta .\) The second one is the persistence exponent for integrated fractional Brownian motion with Hurst parameter \(H\), for which Molchan and Khokhlov [17] conjectured that \(\theta \) should be \(H(1-H)\). The third one is the persistence exponent for integrated stable Lévy processes, which is the matter of the present paper. It is important to mention that the first question above has tight connections with the structure of the real roots of random polynomials with Gaussian coefficients and large degree, whereas the second and third ones appear naturally when studying the shock structure of the inviscid Burgers equation with fractional Brownian, resp. Lévy stable, initial data. We refer to [2] and the bibliography therein for complete details on these three open problems and their respective connections.
In this paper we investigate this question for the process
where \(L = \{L_t, \, t\ge 0\}\) is a strictly \(\alpha \)-stable Lévy process starting from zero, with law \({\mathbb {P}}.\) This process solves the differential equation
which describes the dynamics of a particle subjected to a force given by a stable noise. This is a natural generalization of the so-called acceleration process studied by physicists—see Section 3.2 in [3]. A non-rigorous analysis of the survival function of this particle in the presence of a potential is performed in Section III.B of [7]. The present paper computes rigorously the persistence exponent of the free particle of equation (1) in [7].
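Before turning to the rigorous analysis, the survival probability of this free particle can be explored by direct simulation in the Brownian case \(\alpha = 2\), where the persistence exponent is known to be 1/4. The following minimal Monte Carlo sketch uses an Euler scheme with illustrative parameters (step size, horizon and sample size are arbitrary choices, and the velocity is taken to be a standard Brownian motion, which does not affect the exponent):

```python
import math
import random

def survival_probabilities(n_paths=2000, dt=0.05, checkpoints=(5.0, 50.0)):
    """Estimate P[T_0 > t] for integrated Brownian motion started from
    X_0 = -1 with zero initial velocity: Euler scheme for dX_t = V_t dt,
    with V a standard Brownian motion.  A path "survives" at time t if X
    has not yet reached zero.  All numerical parameters are illustrative."""
    random.seed(12345)
    n_steps = int(checkpoints[-1] / dt)
    alive = [0] * len(checkpoints)
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_paths):
        x, v = -1.0, 0.0
        t_hit = float("inf")
        for k in range(n_steps):
            x += v * dt
            v += sqrt_dt * random.gauss(0.0, 1.0)
            if x >= 0.0:
                t_hit = (k + 1) * dt
                break
        for i, t in enumerate(checkpoints):
            if t_hit > t:
                alive[i] += 1
    return [a / n_paths for a in alive]

p1, p2 = survival_probabilities()
# crude exponent estimate from two checkpoints; the true value is 1/4
theta_hat = math.log(p1 / p2) / math.log(50.0 / 5.0)
```

With these parameters the crude estimate \(\hat{\theta }\) lands in a broad neighbourhood of 1/4; a serious estimate would require extrapolation in both the time step and the horizon.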
To state our main result, we need some notation. Our process \(L\) is normalized to have characteristic exponent
where \(\alpha \in (0,2]\) is the self-similarity parameter and \(\rho = {\mathbb {P}}[L_1 \ge 0]\) is the positivity parameter. We refer to [27] and [19] for classic accounts on stable laws and processes. The strict stability implies the \((1/\alpha )\)-self-similarity of \(L\) and the \((1+1/\alpha )\)-self-similarity of \(X\), in other words that
for all \(k > 0.\) When \(\alpha = 2,\) one has \(\rho = 1/2\) and \(\varPsi (\lambda ) = -\lambda ^2,\) so that \(L = \sqrt{2} B\) is a rescaled Brownian motion. When \(\alpha = 1,\) one has \(\rho \in (0,1)\) and \(L\) is a Cauchy process with a linear drift. When \(\alpha \in (0,1)\cup (1,2)\) the characteristic exponent takes the more familiar form
where \(\beta \in [-1,1]\) is an asymmetry parameter, whose connection with the positivity parameter is given by Zolotarev’s formula:
and \(\kappa _{\alpha ,\rho } = \cos (\pi \alpha (\rho -1/2)) > 0\) is a scaling constant. The latter could have taken any positive value, changing the normalization (1.2) accordingly, without affecting our purposes below. One has \(\rho \in [0,1]\) if \(\alpha < 1\) and \(\rho \in [1-1/\alpha , 1/\alpha ]\) if \(\alpha > 1.\) When \(\alpha > 1\) and \(\rho = 1/\alpha \) the process \(L\) has no positive jumps, whereas it has no negative jumps when \(\alpha > 1\) and \(\rho = 1-1/\alpha \). When \(\alpha < 1\) and \(\rho = 0\) or \(\rho = 1,\) the process \(\vert L\vert \) is a stable subordinator and has increasing sample paths, a situation which will be implicitly excluded throughout this paper. In this case, the process \(X\) is indeed also monotone and the survival function in (1.1) is either one or decays to zero at exponential speed—see [2] p. 4 for details.
When \(\alpha =2,\) the bivariate process \((X,L)\) is Gaussian with explicit covariance function and transition density, providing also the basic example of a degenerate diffusion process—see [14] for details and references. When \(\alpha < 2,\) the process \((X,L)\) is non-Gaussian \(\alpha \)-stable in the broad sense of [19]. The process \((X,L)\) is a strong Markov process, which is sometimes called the Kolmogorov process in the literature. In the following we will write \({\mathbb {P}}_{(x,y)}\) for the law of \((X,L)\) starting at \((x,y)\in {\mathbb {R}}^2.\) Our main concern in this paper is the hitting time of zero for \(X\):
Since \(\vert L\vert \) is not a subordinator, a simple argument using self-similarity and the zero-one law for Markov processes—see Lemma 3 below for details—shows that \({\mathbb {P}}_{(0,0)}[T_0 = 0] = 1,\) in other words that the origin is regular for the vertical axis. If \(x < 0\) or \(x=0\) and \(y <0,\) the continuity of the sample paths of \(X\) shows that a.s. \(T_0 = \inf \{ t> 0, \; X_t > 0\},\) and it will be checked in Lemma 3 below that \(T_0\) is also a.s. finite. If \(x > 0\) or \(x=0\) and \(y >0,\) the law of \(T_0\) is obviously deduced from that of the latter situation in considering the dual Lévy process \(-L.\)
When \((x,y)\ne (0,0),\) the difficulty in obtaining concrete information on the law of \(T_0\) under \({\mathbb {P}}_{(x,y)}\) comes from the fact that \(X\) itself is not a Markov process. In the Brownian case for example, the density function of \(T_0\) is expressed through quite intricate integral formulæ—see [2] pp. 15–16 and the references therein. On the other hand, some universal estimates can be obtained for the behaviour of the distribution function \({\mathbb {P}}_{(x,y)}[T_0\le t]\) as \(t\rightarrow 0,\) using self-similarity and Gaussian or stable upper tails for the supremum process—see e.g. Section 10.4 in [19]. But it is well-known that the study of \({\mathbb {P}}_{(x,y)}[T_0 > t]\) as \(t\rightarrow +\infty \) is a harder problem, where a more exotic behaviour is expected.
Throughout the paper, for any real functions \(f\) and \(g\) we will use the standard notation \(f(t)\asymp g(t)\) as \(t\rightarrow +\infty \) to express the fact that there exist two positive and finite constants \(\kappa _1, \kappa _2\) such that \(\kappa _1 f(t)\le g(t) \le \kappa _2 f(t)\) as \(t\rightarrow +\infty .\) Our main result is the following.
Theorem A
Assume that \(x < 0\) or \(x=0\) and \(y < 0.\) One has
with \(\theta = \rho /(1+ \alpha (1-\rho )).\)
In the Brownian case \(\alpha = 2,\) one has \(\theta = 1/4=\rho /2\) and as mentioned before this estimate has been known since the works of Goldman—see Proposition 2 in [10], with a more precise formulation on the density function of \(T_0\), following the seminal article of McKean [16]. This result was later partially rediscovered by Sinai in [25]. The universality of the persistence exponent 1/4 for integrals of real-valued Lévy processes having exponential moments on both sides has been shown in [1], with the help of strong approximation arguments. Recently, it was proved in [5] that all integrated real random walks with finite variance also have 1/4 as persistence exponent, extending [25] in which the particular case of the integrated simple random walk was studied. Let us also mention that the survival function of the \(n\)th hitting time of zero for the integrated Brownian motion exhibits the same power decay up to a logarithmic term, namely \(ct^{-1/4}(\ln t)^{n-1}\) with an explicit constant \(c\), as shown by the first author in [18].
In the case \(1 < \alpha < 2\) and with no negative jumps, that is \(\rho = 1 -1/\alpha ,\) one obtains \(\theta = (\alpha - 1)/(2\alpha ) = \rho /2,\) an estimate which was proved by the second author in [23] with different techniques, and with a formulation of the lower bound less precise than in Theorem A, involving a logarithmic correction term. It is worth mentioning that the same persistence exponent \((\alpha - 1)/(2\alpha )\) appears for the integrals of random walks which are attracted towards this spectrally positive Lévy process—see Remark 1.2 in [5] and the main result of [26]. Our result therefore leads to a natural conjecture on the persistence exponent of general integrated random walks in a stable domain of attraction.
It was conjectured in [2]—see Conjecture 4 therein—that the persistence exponent of \(X\) might be \(\rho /2\) in general. This expected value should be compared with a classic result of Bingham stating that the persistence exponent of the stable process \(L\) itself is \(\rho \)—see (2.16) in [2] and Theorem 3A in [4]. The admissible set of \((\alpha ,\rho )\) and Theorem A entail that \(\theta > \rho /2\) as soon as \(L\) has negative jumps, hence providing a negative answer to this conjecture. The fact that \(\theta \) is an increasing function of the positivity parameter \(\rho \) matches intuition; however, it is harder to explain heuristically why it is also a decreasing function of \(\alpha .\)
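The exponent of Theorem A is elementary to evaluate, and the special cases discussed above, together with the strict inequality \(\theta > \rho /2\) in the presence of negative jumps (i.e. \(\rho > 1-1/\alpha \) when \(\alpha > 1\)), can be checked in exact arithmetic:

```python
from fractions import Fraction

def theta(alpha, rho):
    """Persistence exponent of Theorem A: theta = rho / (1 + alpha(1 - rho))."""
    return rho / (1 + alpha * (1 - rho))

# Brownian case: alpha = 2, rho = 1/2 gives Goldman-Sinai's exponent 1/4.
assert theta(Fraction(2), Fraction(1, 2)) == Fraction(1, 4)

# Spectrally positive case rho = 1 - 1/alpha gives (alpha - 1)/(2 alpha).
for alpha in [Fraction(11, 10), Fraction(3, 2), Fraction(9, 5)]:
    rho = 1 - 1 / alpha
    assert theta(alpha, rho) == (alpha - 1) / (2 * alpha)

# theta > rho/2 exactly when alpha(1 - rho) < 1, i.e. when L has negative jumps.
for alpha in [Fraction(3, 2), Fraction(9, 5)]:
    lo, hi = 1 - 1 / alpha, 1 / alpha  # admissible interval for rho
    for k in range(1, 10):
        rho = lo + Fraction(k, 10) * (hi - lo)  # strictly above lo
        assert theta(alpha, rho) > rho / 2
```

The use of `Fraction` makes all three checks exact rather than approximate.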
Specifying \(x = -1\) and \(y = 0\) in Theorem A entails by self-similarity the following lower tail probability estimate
with the notation \(X_1^* = \sup \{X_t, \, t\le 1\}.\) Some heuristics on the subordination of \(X\) by the inverse local time of \(L\) when \(\alpha > 1\) had led to the conjecture, formulated in Part 2 of [21], that in the symmetric case \(\rho =1/2\) one should have \({\mathbb {P}}[ X_1^*\le \varepsilon ] = \varepsilon ^{(\alpha -1)^+/2(\alpha +1) +o(1)}\) as \(\varepsilon \rightarrow 0.\) A positive answer to this conjecture was announced in [6], but the argument contained a mistake. The invalidity of the conjecture as soon as \(\alpha \) is close enough to 1 was then observed in [24]. Theorem A shows that Shi’s exponent is the right one only for integrated Brownian motion: in the symmetric case one has \(\theta \alpha /(\alpha +1) = \alpha /((\alpha +1)(\alpha +2)) \ge (\alpha -1)^+/(2(\alpha +1)),\) with equality only if \(\alpha =2.\) Let us mention in passing that lower tail probabilities offer some challenging problems for Gaussian processes—see [15, 20].
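This comparison can be made fully explicit: in the symmetric case Theorem A gives \(\theta = 1/(\alpha +2)\), so the lower tail exponent \(\theta \alpha /(\alpha +1)\) equals \(\alpha /((\alpha +1)(\alpha +2))\). A small numerical check of the gap with Shi's conjectured exponent:

```python
def shi_gap(alpha):
    """Difference between the symmetric lower-tail exponent of Theorem A,
    alpha/((alpha+1)(alpha+2)), and Shi's conjectured exponent
    (alpha-1)^+ / (2(alpha+1)); nonnegative, vanishing only at alpha = 2."""
    true_exp = alpha / ((alpha + 1) * (alpha + 2))
    conjectured = max(alpha - 1, 0.0) / (2 * (alpha + 1))
    return true_exp - conjectured

# The gap is strictly positive for alpha in (0, 2) and zero at alpha = 2.
gaps = [shi_gap(0.5 + 0.1 * k) for k in range(15)]  # alpha in [0.5, 1.9]
assert all(g > 0 for g in gaps)
assert abs(shi_gap(2.0)) < 1e-12
```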
Our method to prove Theorem A hinges upon the random variable \(L_{T_0},\) the so-called hitting place of \((X,L)\) on the vertical axis, which has been extensively studied in the Brownian case—see [10, 13, 14, 16]. Notice that this random variable is positive under \({\mathbb {P}}_{(x,y)}\) if \(x<0\) or \(x=0\) and \(y <0\). The reason why it is connected to the persistence exponent comes from the following heuristic equivalence for fractional moments
for all \(s >0,\) which had been conjectured in [23] p. 176, and turns out to be true as a consequence of Theorem A and Lemma 5 below. The precise relationship between the upper tails of \(T_0\) and that of \(L_{T_0}\) follows from a series of probabilistic estimates which are the matter of Sect. 4.
In this paper we also provide a rather complete description of the law of the random variable \(L_{T_0}\) when \((X,L)\) starts from a coordinate axis. To express our second main result, we need some further notation. For every \(\mu \in (0,1),\) introduce the \(\mu \)-Cauchy random variable \(\mathbf{C}_\mu ,\) with density
Our above denomination comes from the case \(\mu = 1/2,\) where \(\mathbf{C}_{1/2}\) is the half-Cauchy distribution. If \(X\) is a positive random variable and \(\nu \in {\mathbb {R}}\) is such that \({\mathbb {E}}[X^\nu ] < \infty ,\) the positive random variable \(X^{(\nu )}\) defined by
for all bounded and continuous functions \(f : {\mathbb {R}}^+ \rightarrow {\mathbb {R}}\), is known as the size bias of order \(\nu \) of \(X.\) Observe that when \(X\) is absolutely continuous, the density of \(X^{(\nu )}\) is obtained by multiplying that of \(X\) by \(x^\nu \) and renormalizing. We finally introduce the parameters
Notice that from the admissible set for \((\alpha , \rho ),\) we have \({\gamma }\in (0,1/2)\) and \(\chi \in (0,1).\)
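The size-biasing operation entering Theorem B below can be illustrated on a toy example. Assuming nothing beyond the definition above (multiply the density by \(x^\nu \) and renormalize), the order-1 size bias of a standard exponential variable is a Gamma(2) variable; the following sketch (with arbitrary grid parameters) confirms this on a discretized density:

```python
import math

def size_bias_density(density, nu, xs, dx):
    """Density of X^(nu): multiply the density of X by x**nu and renormalize."""
    weighted = [x ** nu * density(x) for x in xs]
    norm = sum(weighted) * dx  # approximates E[X^nu]
    return [w / norm for w in weighted]

dx = 1e-3
xs = [dx * (k + 0.5) for k in range(40000)]  # midpoint grid on (0, 40)

exp_density = lambda x: math.exp(-x)          # Exp(1)
gamma2_density = lambda x: x * math.exp(-x)   # Gamma(2, 1)

# Order-1 size bias of Exp(1) should coincide with Gamma(2, 1).
biased = size_bias_density(exp_density, 1, xs, dx)
max_err = max(abs(b - gamma2_density(x)) for b, x in zip(biased, xs))
```

The same check also recovers the identity \({\mathbb {E}}[X^{(1)}] = {\mathbb {E}}[X^2]/{\mathbb {E}}[X]\), which equals 2 here.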
Theorem B
-
(i)
For every \(y < 0,\) under \({\mathbb {P}}_{(0,y)}\) one has
$$\begin{aligned} L_{T_0}\, \mathop {=}\limits ^{d}\, \vert y\vert (\mathbf{C}_\chi ^{1-{\gamma }})^{(1)}. \end{aligned}$$ -
(ii)
For every \(x < 0,\) under \({\mathbb {P}}_{(x,0)}\) the positive random variable \(L_{T_0}\) has Mellin transform
$$\begin{aligned}&{\mathbb {E}}_{(x,0)}[L_{T_0}^{s-1}] = \frac{(1+\alpha )^{\frac{1-s}{1+\alpha }}{\varGamma }(\frac{\alpha +2}{\alpha +1}){\varGamma }(\frac{1-s}{\alpha +1})\sin (\pi {\gamma })}{{\varGamma }(\frac{s}{\alpha +1}){\varGamma }(1-s)\sin (\pi s(1-{\gamma }))}\,\vert x\vert ^{\frac{s-1}{\alpha +1}}, \quad \vert s\vert < 1/(1-{\gamma }). \end{aligned}$$
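As a sanity check, the right-hand side of (ii) must tend to 1 as \(s\rightarrow 1\), since \({\mathbb {E}}_{(x,0)}[L_{T_0}^{0}] = 1\). The following script evaluates only the displayed formula, here in the Brownian case \(\alpha = 2\) where \({\gamma }= 1/3\) (see Sect. 3.3.2):

```python
import math

def mellin_theorem_B(s, alpha, gamma_, x):
    """Right-hand side of Theorem B (ii) for E_{(x,0)}[L_{T_0}^{s-1}],
    valid for |s| < 1/(1 - gamma_)."""
    a1 = alpha + 1
    num = ((a1 ** ((1 - s) / a1))
           * math.gamma((alpha + 2) / a1)
           * math.gamma((1 - s) / a1)
           * math.sin(math.pi * gamma_))
    den = (math.gamma(s / a1)
           * math.gamma(1 - s)
           * math.sin(math.pi * s * (1 - gamma_)))
    return (num / den) * abs(x) ** ((s - 1) / a1)

# Brownian case alpha = 2, gamma = 1/3, x = -1: the limit as s -> 1 is 1,
# since Gamma((1-s)/3)/Gamma(1-s) -> 3 and Gamma(4/3) = Gamma(1/3)/3.
value_near_one = mellin_theorem_B(1 - 1e-6, 2.0, 1.0 / 3.0, -1.0)
```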
The proof of this result is given in Sect. 3, following some preliminary computations involving oscillating integrals and the Fourier transform of \(X_t,\) performed in Sect. 2. Observe that the density in (i) above is explicit and reads for example
in the Brownian case, a formula originally proved by McKean in [16]—see also formulæ (1) and (2) in [13]. As is well-known, the Cauchy random variable appears in exit or winding problems for two-dimensional Brownian motion. The fact that it is also connected with similar problems for general integrated stable processes is perhaps more surprising.
An interesting consequence of (ii) is that the Mellin transform can be inverted in the Cauchy case \(\alpha = 1\) and exhibits the same type of law as in (i): one obtains
with the notation \(\delta = (1+\chi )/2.\) The Mellin transform of (ii) can also be simply inverted in the Brownian case in terms of Beta and Gamma random variables, shedding some new light on a formula of an analytical nature by Gor’kov [11], and in the case \(\alpha < 1\) in terms of positive stable random variables. The Mellin inversion is however more complicated when \(\alpha \in (1,2),\) and involves no classical random variables in general—see Sect. 3.3 below for details.
2 Preliminary computations
The following lemma, which we could not locate in the literature, will be useful in the sequel.
Lemma 1
Let \(\nu \in (0,1)\) and \(X\) be a real random variable such that \({\mathbb {E}}[\vert X\vert ^{-\nu }] < \infty .\) One has
and
Proof
The generalized Fresnel integral which is computed e.g. in formula (37) p. 13 of [8] shows that for all \(u\ne 0, \nu \in (0, 1),\) one has
The first statement of the lemma would hence simply follow from switching the expectation and the integral. However, we cannot apply Fubini’s theorem directly. Set \(\mu \) for the probability distribution of \(X.\) From (2.1) and an integration by parts, we get
Since
we may now apply Fubini’s theorem and obtain
The dominated convergence theorem entails that the function
is differentiable, with derivative
Thus, another integration by parts yields
and it remains to prove that the last term on the right-hand side is zero. On the one hand, one has
On the other hand, using
and the dominated convergence theorem, we see that \(\lambda ^{\nu -1}\vert \psi (\lambda )\vert \rightarrow 0\) as \(\lambda \rightarrow +\infty .\) This completes the proof of the first statement of the lemma. The second statement may be handled similarly with the help of the formula
which is given e.g. in (38) p. 13 in [8]. \(\square \)
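The cosine half of the Fresnel-type formula (2.1) quoted from [8] can be checked numerically. We assume here its classical form \(\int _0^\infty \lambda ^{\nu -1}\cos (u\lambda )\, d\lambda = {\varGamma }(\nu )\cos (\pi \nu /2)\, u^{-\nu }\) for \(\nu \in (0,1)\) and \(u>0\), the integral being improper; the exponential damping \(e^{-\varepsilon \lambda }\) (Abel regularization) and the substitution \(\lambda = t^{1/\nu }\) below are purely numerical devices, not part of the proof:

```python
import math

def fresnel_cos_numeric(nu, eps, t_max=25.0, h=1e-4):
    """Abel-regularized quadrature of int_0^inf lam^(nu-1) cos(lam) dlam
    (case u = 1).  The substitution lam = t**(1/nu) removes the
    singularity at 0: the integrand becomes
    (1/nu) * exp(-eps * lam) * cos(lam) with lam = t**(1/nu)."""
    n = int(t_max / h)
    if n % 2:  # Simpson's rule needs an even number of panels
        n += 1
    def f(t):
        lam = t ** (1.0 / nu)
        return (1.0 / nu) * math.exp(-eps * lam) * math.cos(lam)
    s = f(0.0) + f(n * h)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

nu = 0.4
numeric = fresnel_cos_numeric(nu, eps=0.01)  # small damping parameter
exact = math.gamma(nu) * math.cos(math.pi * nu / 2.0)
```

The residual discrepancy is of order \(\varepsilon \) and disappears as the damping is removed.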
Lemma 2
For all \(x, y\in {\mathbb {R}}\) and \(t \ge 0\) one has
Proof
It is clearly enough to consider the case \(x = y =0.\) Integrating by parts yields the following representation of \(X_t\) as a stable integral:
where \(M\) is an \(\alpha \)-stable random measure on \({\mathbb {R}}^+\) with Lebesgue control measure and constant skewness intensity \(\beta (x) =\beta \)—see Example 3.3.3 in [19]. In the case \(\alpha \ne 1,\) the statement of the lemma is a direct consequence of Proposition 3.4.1 (i) in [19], reformulated with the \((\alpha ,\rho )\) parametrization. In the case \(\alpha =1,\rho =1/2\) we use Proposition 3.4.1 (ii) in [19] (with \(\beta =0\)). The case \(\alpha =1,\rho \ne 1/2\) follows from the symmetric case in adding a drift coefficient \(\mu t\) for some \(\mu \ne 0,\) which integrates in \(\mu t^2/2.\) \(\square \)
We now set
The next proposition gives a representation for the Mellin transform of \(X_t\) restricted on the event \(\{X_t>0\}\). For the sake of simplicity, we shall denote this variable by \(X_t^+\) in the following, with the abuse of notation:
Proposition 1
For all \(x, y\in {\mathbb {R}}, t > 0\) and \(\nu \in (0,1)\) one has
Proof
Since \(X_t\) is a stable random variable, it has a bounded density and \({\mathbb {E}}_{(x,y)}[(X^+_t)^{-\nu }]\) is hence finite for all \(\nu \in (0,1).\) By Lemma 2 we have
Taking the real part and integrating against \(\lambda ^{\nu -1}\, d\lambda \) on \((0,+\infty )\), we deduce
where the second equality comes from Lemma 1. Similarly, taking the imaginary part entails
Multiplying the first relation by \(\sin (\pi \nu /2)\), the second by \(\cos (\pi \nu /2),\) and summing, we finally obtain
which yields the required expression. \(\square \)
Our last proposition provides some crucial computations for the proof of Theorem B.
Proposition 2
Set \(\nu \in (\alpha /(\alpha +1),1)\) and \(s = (1-\nu )(\alpha +1)\in (0,1).\)
-
(i)
For every \(y > 0,\) one has
$$\begin{aligned} \int _0^\infty {\mathbb {E}}_{(0,y)} [(X^+_t)^{-\nu }]\, dt = (\alpha +1)^{1-\nu }{\varGamma }(1-s) \sin (\pi s (1-{\gamma }))\frac{{\varGamma }(1-\nu )^2 }{\pi }\,y^{s-1}\cdot \end{aligned}$$ -
(ii)
For every \(y < 0,\) one has
$$\begin{aligned} \int _0^\infty {\mathbb {E}}_{(0,y)} [(X^+_t)^{-\nu }]\, dt = (\alpha +1)^{1-\nu }{\varGamma }(1-s)\sin (\pi {\gamma }s) \frac{{\varGamma }(1-\nu )^2 }{\pi }\, \vert y\vert ^{s-1}\cdot \end{aligned}$$ -
(iii)
For every \(x<0,\) one has
$$\begin{aligned}&\int _0^\infty {\mathbb {E}}_{(x,0)} [(X^+_t)^{-\nu }]\, dt\\&\qquad =(\alpha +1)^{-\frac{\alpha }{\alpha +1}} \,\varGamma \left( \frac{1-s}{\alpha +1}\right) \sin (\pi {\gamma }){\varGamma }\left( \frac{1}{\alpha +1}\right) \frac{{\varGamma }(1-\nu ) }{\pi }\, \vert x\vert ^{\frac{s-1}{\alpha +1}}. \end{aligned}$$
Proof
Suppose first that \(x = 0\) and \(y\in {\mathbb {R}}.\) Integrating the expression on the right-hand side of Proposition 1 yields a double integral of the form
where the first, resp. second, equality comes from the change of variable \(\lambda t^{1+1/\alpha }=r\), resp. \(u= t^{-1/\alpha }\), and the switching of the integrals in the third equality is made exactly as in Lemma 1, using the fact that \(s\in (0,1)\) and \(s+\nu > 1.\)
Suppose first \(y > 0.\) We start by computing the integral in \(u\) with the help of formulæ (2.1) and (2.2) and some trigonometry:
We then compute the integral in \(r\) with the change of variable \(z=r^\alpha ,\) using the notation \(Z = e^{\mathrm{i}\pi \alpha (\rho - 1/2)}\):
where the third line follows after some algebraic simplifications. By Proposition 1, this completes the proof of (i).
Suppose now \(y < 0.\) An analogous computation to the above one shows that
The integral in \(r\) is then computed in the same way as before and yields the formula
which completes the proof of (ii) by Proposition 1.
We last suppose \(x < 0\) and \(y =0.\) We again integrate the expression on the right-hand side of Eq. (2.3), making the changes of variable \(\lambda t^{1+1/\alpha }=r\) and \(u= t^{-(1+1/\alpha )}.\) This yields a double integral of the form
where we can switch the orders of integration as in Lemma 1 because \((s+\alpha )/(1+\alpha )\in (0,1)\) and \((s+\alpha )/(1+\alpha ) + \nu > 1.\) We then compute the integral in \(u\) similarly as above and find
We finally compute the integral in \(r\) with the change of variable \(r = z^{1/\alpha },\) and get after some algebraic manipulations
which completes the proof of (iii) by Proposition 1. \(\square \)
Remark 1
It seems hard to find an explicit formula in general for
when \((x,y)\) is not on a coordinate axis. In the symmetric Cauchy case, some further computations show that the integral equals
This can be rewritten with the hypergeometric function, but apparently not in a tractable manner when \(x y \ne 0.\)
3 Proof of Theorem B
The following lemma shows the aforementioned and intuitively obvious fact that \(T_0\) is a proper random variable for any starting point.
Lemma 3
For all \(x, y\in {\mathbb {R}}\) one has \({\mathbb {P}}_{(x,y)}[ T_0 < +\infty ] = 1.\)
Proof
Suppose first \(x =-1\) and \(y=0.\) Then
where the second equality comes from the self-similarity of \(X\) and the strict inequality from the fact that \(X_1\) is a two-sided stable random variable—see Lemma 2. On the other hand, setting \(T = \inf \{ t > 0, X_t > 0\},\) it is clear by self-similarity that under the probability measure \({\mathbb {P}}_{(0,0)}\) one has
for all \(k > 0.\) In particular, \({\mathbb {P}}_{(0,0)}[T\in \{0, +\infty \}] = 1.\) Moreover, the zero-one law for the Markov process \((X,L)\) entails that \({\mathbb {P}}_{(0,0)}[T =0]\) is \(0\) or \(1.\) Since \({\mathbb {P}}_{(0,0)}[T=+\infty ] = {\mathbb {P}}_{(0,0)}[ X_\infty ^* =0] < 1,\) we get \({\mathbb {P}}_{(0,0)}[T=+\infty ] = 0\) whence \({\mathbb {P}}_{(-1,0)}[ T_0 = +\infty ] = 0\) as desired. Notice that it also entails \({\mathbb {P}}_{(0,0)}[T=0] = 1,\) as mentioned in the introduction.
Using again self-similarity, this entails \({\mathbb {P}}_{(x,0)}[ T_0 <+\infty ] = 1\) for all \(x\le 0,\) and also for all \(x \ge 0\) by considering the dual process \(-L.\) The fact that \({\mathbb {P}}_{(x,y)}[ T_0 <+\infty ] = 1\) for all \(x,y\) such that \(xy < 0\) follows then by a comparison of the sample paths.
Suppose now that \(x \le 0\) and \( y <0.\) Introduce the stopping time \(S = \inf \{t > 0, L_t > 0\},\) which is a.s. finite under \({\mathbb {P}}_{(x,y)}\) because \(\vert L\vert \) is not a subordinator. It is clear that \(L_S \ge 0\) and \(X_S < 0\) a.s. Applying the strong Markov property, we see from the above cases that
The same argument holds for \(x\ge 0\) and \(y > 0.\) \(\square \)
Assume now \(x < 0\) or \(x=0\) and \(y < 0.\) It is clear that at \(T_0\) the process \(X\) has a non-negative speed, which entails by right-continuity that \(L_{T_0} \ge 0\) a.s. Applying the Markov property at \(T_0\) entails
for all \(t, u > 0.\) Integrating in time yields after a change of variable and Fubini’s theorem
for all \(u > 0.\) Integrating in space against \(u^{-\nu }\, du\) and applying again Fubini’s theorem finally shows the general formula
which is valid for all \(\nu \in {\mathbb {R}},\) with possibly infinite values on both sides.
3.1 Proof of (i)
Assume \(x =0\) and \(y < 0.\) Setting \(\nu \in (\alpha /(\alpha +1), 1),\) a straightforward application of Proposition 2 (i) and (ii) shows that both sides of (3.2) are finite, which leads to
for all \(s\in (0,1).\) The formula then extends to \(\{\vert s \vert < 1/(1-{\gamma })\}\) by analytic continuation. On the other hand, for all \(\mu \in (0,1)\) and \(s\in (-1,1),\) the formula
is a simple and well-known consequence of the residue theorem. Recalling that
and the definition of \(\mathbf{C}_\mu ,\) we deduce
for all \(\vert s \vert < 1/(1-{\gamma }),\) which concludes the proof of (i) by Mellin inversion. \(\square \)
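The residue-theorem identity for \(\mathbf{C}_\mu \) used above can be verified by quadrature. The sketch below assumes that \(\mathbf{C}_\mu \) has density \(\sin (\pi \mu )/(\pi \mu )\,(x^2+2x\cos (\pi \mu )+1)^{-1}\) on \((0,\infty )\) — an assumption consistent with the half-Cauchy case \(\mu =1/2\) mentioned in the introduction — and that its Mellin transform is \(\sin (\pi \mu s)/(\mu \sin (\pi s))\) for \(\vert s\vert < 1\):

```python
import math

def cauchy_mu_mellin_numeric(mu, s, u_min=-30.0, u_max=30.0, h=1e-3):
    """Quadrature of E[C_mu^s] = int_0^inf x^s f_mu(x) dx, with the
    substitution x = exp(u), for the assumed density
    f_mu(x) = sin(pi mu) / (pi mu (x^2 + 2 x cos(pi mu) + 1))."""
    c = math.sin(math.pi * mu) / (math.pi * mu)
    cos_mu = math.cos(math.pi * mu)
    total = 0.0
    n = int((u_max - u_min) / h)
    for k in range(n):
        u = u_min + (k + 0.5) * h  # midpoint rule
        x = math.exp(u)
        total += x ** (s + 1) / (x * x + 2.0 * cos_mu * x + 1.0)
    return c * total * h

mu, s = 0.3, 0.5
numeric = cauchy_mu_mellin_numeric(mu, s)
exact = math.sin(math.pi * mu * s) / (mu * math.sin(math.pi * s))
```

At \(s = 0\) the same quadrature returns 1, confirming that the assumed density is correctly normalized.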
3.2 Proof of (ii)
Assume \(x < 0\) and \(y =0.\) Another application of (3.2) combined with Proposition 2 (i) and (iii) shows that
for all \(s\in (0,1).\) A simple analysis of the Gamma factors shows that the above expression remains finite for all \(\vert s \vert < 1/(1-{\gamma }).\) \(\square \)
3.3 Some further Mellin inversions
In this paragraph we would like to invert (3.3) for certain values of the parametrization \((\alpha , \rho ).\) Without loss of generality we set \(x = -1, y =0.\) Applying the complement formula for the Gamma function, we first deduce from (3.3) the identity:
for \(\vert s \vert < 1/(1-{\gamma }).\) In particular, this shows that the random variable \(L_{T_0}\) has moments of Gamma type (see [12] for a recent survey).
3.3.1 The Cauchy case
We have \(\alpha = 1\) and \(\rho \in (0,1),\) whence \({\gamma }=\rho /2\in (0,1/2).\) As mentioned in the introduction, set
Applying the Legendre-Gauss multiplication formula transforms (3.4) into
As above, this entails that under \({\mathbb {P}}_{(x,0)}\) one has
which bears a striking similarity to the law of \(L_{T_0}\) under \({\mathbb {P}}_{(0,y)}\) for \(y < 0.\) Notice that these two laws are however never the same, because \(\delta \ne \chi .\)
3.3.2 The Brownian case
We have \(\alpha =2, \rho = \chi = 1/2\) and \({\gamma }= 1/3.\) Applying three times the Legendre-Gauss multiplication formula and simplifying the quotients shows
for all \(s\in (0,1).\) Inverting the Mellin transform, this entails that under \({\mathbb {P}}_{(x,0)}\) one has
where \({\varvec{\Gamma }}_c\), resp. \(\mathbf{B}_{a,b}\), stands for the standard Gamma, resp. Beta, random variable, the two factors in the quotient being assumed independent. Gor’kov [11] provides an expression of the density of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)}\) in terms of the confluent hypergeometric function—see also formula (3) in [13]. It seems however that the above simple identity in law has passed unnoticed in the literature on integrated Brownian motion.
Remark 2
It is well-known that \(\log ({\varvec{\Gamma }}_c)\) and \(\log (\mathbf{B}_{a,b})\) are infinitely divisible random variables, and this property is hence also shared by \(\log (L_{T_0})\) under \({\mathbb {P}}_{(x,0)}.\) The question whether \(L_{T_0}\) itself is infinitely divisible is an interesting open problem for Brownian motion.
3.3.3 The other cases
When \(\alpha \in (0,1)\cup (1,2),\) the law of \(L_{T_0}\) can be expressed as a more complicated product involving the standard positive \(\mu \)-stable random variable \(\mathbf{Z}_\mu , \mu \in (0,1).\) Recall that the latter is characterized through its Mellin transform by
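The display characterizing \(\mathbf{Z}_\mu \) is not reproduced in this extract; classically, \({\mathbb {E}}[\mathbf{Z}_\mu ^s] = {\varGamma }(1-s/\mu )/{\varGamma }(1-s)\) for \(s < \mu .\) In the case \(\mu = 1/2\), where \(\mathbf{Z}_{1/2}\) has the explicit density \((2\sqrt{\pi })^{-1} x^{-3/2} e^{-1/(4x)}\), this can be confirmed by quadrature (the grid parameters are arbitrary):

```python
import math

def levy_half_stable_moment(s, u_min=-10.0, u_max=60.0, h=1e-3):
    """Quadrature of E[Z_{1/2}^s] using the explicit density of the standard
    positive 1/2-stable variable (Laplace transform exp(-sqrt(lambda))):
    f(x) = x**(-1.5) * exp(-1/(4x)) / (2 sqrt(pi)),  x > 0,
    after the substitution x = exp(u)."""
    c = 1.0 / (2.0 * math.sqrt(math.pi))
    total = 0.0
    n = int((u_max - u_min) / h)
    for k in range(n):
        u = u_min + (k + 0.5) * h  # midpoint rule
        # x^s * f(x) * x du = c * exp((s - 1/2) u - exp(-u)/4) du
        total += math.exp((s - 0.5) * u - 0.25 * math.exp(-u))
    return c * total * h

# Classical Mellin transform: E[Z_mu^s] = Gamma(1 - s/mu)/Gamma(1 - s), s < mu.
s, mu = 0.2, 0.5
numeric = levy_half_stable_moment(s)
exact = math.gamma(1.0 - s / mu) / math.gamma(1.0 - s)
```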
Suppose first that \(\alpha \in (0,1).\) We then have \(\rho \in (0,1),{\gamma }\in (0,1/2)\) and \(\chi \in (0,1).\) Introducing the further parameters
another application of the Legendre-Gauss formula shows that under \({\mathbb {P}}_{(x,0)}\) one has
which is an extension of the Cauchy case since when \(\alpha =1\) the first multiplicand is \(\mathbf{Z}_1^{\frac{1}{2}} = \mathbf{1}.\)
Suppose next that \(\alpha \in (1,2)\) and assume that \({\gamma }\le 1/3.\) Notice that this assumption is fulfilled in the spectrally positive case, where \(\rho = 1-1/\alpha \) viz. \({\gamma } = (\alpha -1)/(\alpha +1) < 1/3.\) Analogous computations lead to the identity in law
which is an extension of the Brownian case since when \(\alpha =2\) the first multiplicand is \(\mathbf{B}_{1/6,1/6}^{-1/3},\) whereas the second one reads
The case \({\gamma }> 1/3\) is however more mysterious and the factorization of \(L_{T_0}\) seems then to require less classical random variables than the Beta, Gamma, and positive stable ones.
4 Proof of Theorem A
We first reduce the problem to the situation where the bivariate process \((X,L)\) starts from a coordinate axis.
Lemma 4
Assume that \(x < 0.\) For all \(y\in {\mathbb {R}}\) one has
Proof
Fix \(t > 1\) and suppose first that \(y > 0.\) One has \({\mathbb {P}}_{(x,y)}[T_0 > t]\le {\mathbb {P}}_{(x,0)}[T_0 > t]\) by a direct comparison of the sample paths. On the other hand,
for some \(c >0,\) where the equality follows from the Markov property, the second inequality from a comparison of the sample paths, and the third inequality from a support theorem in uniform norm for the Lévy process \(L\). More precisely, since the Lévy measure of \(L\) has full support, it follows from Corollary 1 in [22] that \({\mathbb {P}}_{(x,y)}[ \sup _{t\le 1} \vert L_t - f(t)\vert \le \varepsilon ] > 0\) for every continuous function \( f : [0,1]\rightarrow {\mathbb {R}}\) such that \(f(0) =y\) and every \(\varepsilon > 0.\) In particular, choosing an appropriate function \(f\) shows that \({\mathbb {P}}_{(x,y)}[X_1 < x, X_1^* < 0, L_1 < 0] > 0.\)
Fix again \(t > 1\) and suppose now that \(y < 0.\) Then \({\mathbb {P}}_{(x,y)}[T_0 > t]\ge {\mathbb {P}}_{(x,0)}[T_0 > t],\) and similarly as above one has
for some \(c > 0.\) This completes the proof. \(\square \)
In the remainder of this section, we will implicitly assume, without loss of generality, that
We start by studying the asymptotics at infinity of the density function of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)},\) which we denote by \(f^0_{(x,y)}.\)
Lemma 5
There exists \(c > 0\) such that
Proof
If \(x =0,\) the asymptotics is a direct consequence of the explicit expression of \(f^0_{(0,y)}\) which is given in Theorem B (i). If \(y =0,\) Theorem B (ii) shows that the first positive pole of the Mellin transform of \(L_{T_0}\) under \({\mathbb {P}}_{(x,0)}\) is at \(1/(1-{\gamma })\), and is simple. The required asymptotic for \(f^0_{(x,0)}\) is then a consequence of a converse mapping theorem for Mellin transforms (see Theorem 4 in [9] or Theorem 6.4 in [12]). \(\square \)
Remark 3
-
(a)
The converse mapping theorem for Mellin transforms yields also an explicit expression for the constant \(c\), but we shall not need this information in the sequel.
-
(b)
We believe that the above asymptotic remains true for \(x < 0\) and all \(y\ne 0.\) However, the Mellin transform of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)}\) is then expressed with the help of a double integral which is absolutely divergent, and whose singularities are difficult to study at first sight.
-
(c)
This lemma entails by integration that
$$\begin{aligned} {\mathbb {P}}_{(x,y)}[L_{T_0} > z] \sim c\chi ^{-1}\, z^{-\chi }, \qquad z \rightarrow +\infty . \end{aligned}$$Heuristically, it is tempting to write \(L_{T_0} = T_0^{1/\alpha } \vert L_1\vert \) by scaling and, since \({\mathbb {P}}_{(x,y)}[ \vert L_1\vert > z] \sim c z^{-\alpha } \ll z^{-\chi }\) at infinity, to infer that
$$\begin{aligned} {\mathbb {P}}_{(x,y)}[T_0 > t] \asymp t^{-\chi /\alpha } = t^{-\theta }, \qquad t \rightarrow +\infty . \end{aligned}$$This explains the equivalence between finite moments stated in the introduction. We will prove in the remainder of this section that these heuristics are actually correct.
The following lemma provides our key estimate.
Lemma 6
For all \(\nu \in (\alpha (1-\theta )/(\alpha +1), 1)\) there exists \(c > 0\) such that
Proof
We first assume \(\nu \in (\alpha /(\alpha +1), 1)\) and transform the expression on the left-hand side. From (3.1), Fubini’s theorem, and the Markov property, we have
both sides being finite thanks to Proposition 2. Integrating by parts shows then, with the help of (3.2) and Proposition 2, that
Inverting the Laplace transforms yields that
with the notation
It remains therefore to compute the asymptotics of the function \(H_{(x,y)}\), which only depends on the law of \(L_{T_0}\) under \({\mathbb {P}}_{(x,y)}\). To this end, we shall compute the Mellin transform of \(H^\prime _{(x,y)}\) and apply a converse mapping theorem. Recall that \(s=(1-\nu )(1+\alpha )\) and let \(z\in {\mathbb {R}}\) such that \(s+\alpha z \in (0,1)\). Following the same computations as in Proposition 2, we deduce that:
(i) for \(y>0\)
$$\begin{aligned} \pi \int _0^{+\infty } t^z {\mathbb {E}}_{(0,y)}\left[ (X_t^+)^{-\nu }\right] dt&= (1+\alpha )^{1-\nu +z}\varGamma (1-\nu ) \varGamma (1-\nu +z)\varGamma (1-s-\alpha z) \\&\quad \times \, \sin (\pi (s+\alpha z) - \pi s \gamma - z\alpha \pi \rho ) \, y^{s+\alpha z -1} \end{aligned}$$
(ii) for \(y<0\)
$$\begin{aligned} \pi \int _0^{+\infty } t^z {\mathbb {E}}_{(0,y)}\left[ (X_t^+)^{-\nu }\right] dt&= (1+\alpha )^{1-\nu +z}\varGamma (1-\nu ) \varGamma (1-\nu +z)\varGamma (1-s -\alpha z)\\&\quad \times \, \sin (\pi s \gamma + z\alpha \pi \rho )\, |y|^{s+\alpha z -1}. \end{aligned}$$
Hence, the Mellin transform of \(H_{(0,y)}^\prime \) is explicitly given by
Observe that this formula extends by analytic continuation to \(\{|s+\alpha z|< 1/(1-\gamma )\}\). The pole at
is simple and we may therefore apply the converse mapping theorem for Mellin transforms to obtain the asymptotics:
for some \(c_1>0\). The announced result then follows by integration. In the case \(x<0\) and \(y=0\), we have similarly:
and we may proceed as before, by extending the Mellin transform of \(H_{(x,0)}^\prime \) to \(\{|s+\alpha z|< 1/(1-\gamma )\}\) and applying the converse mapping theorem to obtain once again:
for some \(c_2>0\). Suppose now \(\nu \in (\alpha (1-\theta )/(\alpha +1), \alpha /(\alpha +1)).\) The left-hand side of (4.1) is well-defined and the estimates (4.3) and (4.4) entail that the integral in (4.2) is absolutely convergent, because \((1+1/\alpha )\nu +\theta > 1\). By analytic continuation this shows that (4.1) remains valid for \(\nu \in (\alpha (1-\theta )/(\alpha +1), \alpha /(\alpha +1)),\) and the estimates (4.3) and (4.4) hold as well. This completes the proof, again by integration. \(\square \)
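For completeness, the convergence criterion invoked above is a one-line consequence of the assumed range of \(\nu \):
$$\begin{aligned} \nu \,>\, \frac{\alpha (1-\theta )}{\alpha +1} \;\Longleftrightarrow \; \Big (1+\frac{1}{\alpha }\Big )\nu \,=\, \frac{\alpha +1}{\alpha }\,\nu \,>\, 1-\theta \;\Longleftrightarrow \; \Big (1+\frac{1}{\alpha }\Big )\nu +\theta \,>\, 1. \end{aligned}$$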
4.1 Proof of the upper bound
Fix \(A>0\) and \(\nu \in (\alpha /(\alpha +1), 1).\) By continuity and positivity there exists \(\varepsilon >0\) such that for all \(z\in [0,A]\),
For all \(t > 0,\) we get from (4.1), a change of variable and the self-similarity of \(X\) and \(L\):
On the one hand, Lemma 5 entails
On the other hand, Lemma 6 shows that
Putting everything together entails
for some finite \({\widetilde{K}}\) as soon as \(t\) is large enough. \(\square \)
4.2 Proof of the lower bound
We start with the following lemma:
Lemma 7
One has
Proof
Firstly, integrating the above upper bound for \({\mathbb {P}}_{(x,y)}[T_0>t]\) entails the existence of a finite \(\kappa _2\) such that
To prove the lower inequality, we fix \(\nu \in (\alpha (1-\theta )/(1+\alpha ),\alpha /(1+\alpha ))\) and deduce from Proposition 1 the uniform bound
for some finite constant \(K.\) Set \(\eta =\nu (1+1/\alpha ) \in (0,1)\) and fix \(\varepsilon \in (0,1).\) Using (4.1) and (4.5) we decompose
for some finite \({\widetilde{K}}\), where the third inequality follows from the upper bound. Applying Lemma 6 and taking \(\varepsilon \) small enough shows finally that there exists \(\kappa _1> 0\) such that
\(\square \)
We can now finish the proof of the lower bound for \({\mathbb {P}}_{(x,y)}(T_0 > t)\). Fixing \(A > 0\) and applying the mean value theorem entails
as \(t\rightarrow +\infty \), for some constants \(0 < \kappa _1 < \kappa _2 < \infty \) given by Lemma 7. Since \(\theta <1,\) the lower bound follows by choosing \(A\) large enough. \(\square \)
References
Aurzada, F., Dereich, S.: Universality of the asymptotics of the one-sided exit problem for integrated processes. Ann. Inst. H. Poincaré Probab. Stat. 49(1), 236–251 (2013)
Aurzada, F., Simon, T.: Persistence probabilities and exponents. To appear in Lévy Matters IV, Springer, Berlin. Available at arXiv:1203.6554
Bray, A.J., Majumdar, S.N., Schehr, G.: Persistence and first-passage properties in non-equilibrium systems. Adv. Phys. 62(3), 225–361 (2013)
Bingham, N.H.: Maxima of sums of random variables and suprema of stable processes. Z. Wahr. Verw. Geb. 26, 273–296 (1973)
Dembo, A., Ding, J., Gao, F.: Persistence of iterated partial sums. Ann. Inst. H. Poincaré Probab. Stat. 49(3), 873–884 (2013)
Devulder, A., Shi, Z., Simon, T.: The lower tail problem for integrated stable processes. Unpublished manuscript (2005)
Dybiec, B., Gudowska-Nowak, E., Hänggi, P.: Escape driven by \(\alpha \)-stable white noises. Phys. Rev. E 75, 021109 (2007)
Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Higher transcendental functions, vol. I. McGraw-Hill, New York (1954)
Flajolet, P., Gourdon, X., Dumas, P.: Mellin transforms and asymptotics: Harmonic sums. Theoret. Comput. Sci. 144, 3–58 (1995)
Goldman, M.: On the first passage of the integrated Wiener process. Ann. Math. Stat. 42(6), 2150–2155 (1971)
Gor’kov, Yu.P.: A formula for the solution of a certain boundary value problem for the stationary equation of Brownian motion. Sov. Math. Dokl. 16, 904–908 (1975)
Janson, S.: Moments of Gamma type and the Brownian supremum process area. Probab. Surv. 7, 1–52 (2010)
Lachal, A.: Sur le premier instant de passage de l’intégrale du mouvement brownien. Ann. Inst. Henri Poincaré Probab. Stat. 27(3), 385–405 (1991)
Lachal, A.: L’intégrale du mouvement brownien. J. Appl. Probab. 30(1), 17–27 (1993)
Marcus, M.B.: Probability estimates for lower levels of certain Gaussian processes with stationary increments. In: High dimensional probability II, Progress in Probability, vol. 47, pp. 173–179. Birkhäuser, Boston (2000)
McKean, H.P.: A winding problem for a resonator driven by a white noise. J. Math. Kyoto Univ. 2, 227–235 (1963)
Molchan, G.M., Khokhlov, A.: Small values of the maximum for the integral of fractional Brownian motion. J. Stat. Phys. 114(3–4), 923–946 (2004)
Profeta, C.: Some limiting laws associated with the integrated Brownian motion. To appear in ESAIM Probab. Statist. Available at arXiv:1307.1395 (2014)
Samorodnitsky, G., Taqqu, M.S.: Stable non-Gaussian random processes. Chapman & Hall, New York (1994)
Shao, Q.-M.: Lower tail probabilities and related processes. Lecture Notes. Abstract available at http://www.proba.jussieu.fr/pageperso/smalldev/lecturefile/qiman (2003)
Shi, Z.: Lower tails of some integrated processes. In: Small deviations and related topics (problem panel). Available at http://www.proba.jussieu.fr/pageperso/smalldev/pbfile/pb4 (2003)
Simon, T.: Sur les petites déviations d’un processus de Lévy. Potential. Anal. 14(2), 155–173 (2001)
Simon, T.: The lower tail problem for homogeneous functionals of stable processes with no negative jumps. ALEA Lat. Am. J. Probab. Math. Stat. 3, 165–179 (2007)
Simon, T.: On the Hausdorff dimension of regular points of inviscid Burgers equation with stable initial data. J. Stat. Phys. 131(4), 733–747 (2008)
Sinai, Ya.G.: Distribution of some functionals of the integral of a random walk. Teoret. Mat. Fiz. 90(3), 219–241 (1992)
Vysotsky, V.: Positivity of integrated random walks. Ann. Inst. H. Poincaré Probab. Stat. 50(1), 195–213 (2014)
Zolotarev, V.M.: One-dimensional stable distributions. Nauka, Moskva (1983)
Acknowledgments
The authors are grateful to an anonymous referee for suggesting the present proof of Lemma 6, which is much simpler than the original one, and to Grégory Schehr for some useful comments. This research benefited from the support of the “Chaire Marchés en Mutation”, Fédération Bancaire Française.
Profeta, C., Simon, T. Persistence of integrated stable processes. Probab. Theory Relat. Fields 162, 463–485 (2015). https://doi.org/10.1007/s00440-014-0577-5
Keywords
- Integrated process
- Half-Cauchy distribution
- Hitting place
- Lower tail probability
- Mellin transform
- Persistence
- Stable Lévy process