## 1 Introduction: Persistence Probabilities for Fractional Brownian Motion

We study local times of stochastic processes from the point of view of persistence probabilities, i.e. the probabilities that a stochastic process remains inside a relatively small subset of its state space for a long time. The problem of calculating persistence probabilities is a very active field of mathematical research, see, for example, the recent articles [2, 4, 6, 23, 28, 31]; an overview of the developments over the last decades is given by Aurzada and Simon in [7]. The main motivation to study the persistence behaviour of stochastic systems is its great relevance for certain areas of statistical physics, see, for example, [9, 11, 24] and the survey by Bray et al. [8].

The starting point of our investigation is Molchan’s celebrated and by now classical results [27] concerning the persistence of fractional Brownian motion on the real line, which we briefly summarise now. Let $$B=(B_t)_{t\in \mathbb {R}}$$ denote a 1-dimensional fractional Brownian motion (FBM) of Hurst index $$H\in (0,1)$$. B can be characterised as the unique (up to multiplication by a constant) Gaussian process which is H-self-similar with stationary increments (H-sssi). In [27], it is shown that the maximum process $${\bar{B}}_t=\max _{0\le s \le t}B_s, t\ge 0,$$ of B satisfies

\begin{aligned} \mathbb {P}({\bar{B}}_T \le 1)= T^{-(1-H)+o(1)}. \end{aligned}
(1)

Subsequently, improved bounds on the error estimate implicit in (1) have been derived by Aurzada [1] and by Aurzada et al. [4]. Note that, using self-similarity, we may replace the boundary 1 in (1) by any fixed value $$x>0$$ without changing the order of decay. The probability in (1) is called the persistence probability and the corresponding exponent $${\bar{\kappa }}=1-H$$ the persistence exponent of B. In [27], Molchan also considered the lower tail probabilities of several other path functionals, namely

• $$\ell (0,T]:=\lim _{\epsilon \rightarrow 0}(2\epsilon )^{-1}\int _0^T \mathbb {1}\{B_t\in (-\epsilon ,\epsilon )\}\text {d}t$$, the local time at 0,

• $$\tau ^{+}:= \inf \{t\ge 1: B_t=0\}$$, the first zero after time 1,

• $$\sigma ^{+}_T:= \int _0^T \mathbb {1}\{B_t>0\}\text {d}t$$, the time spent in the positive half-axis,

• $$\tau ^{\text {max}}_T:= \arg \max \{B_t, t\in [0,T]\}$$, the time at which the maximum is achieved;

and his results imply that

\begin{aligned} \lim _{T\rightarrow \infty }\frac{\log \mathbb {P}(\tau ^{+}\ge T)}{\log T}=\lim _{T\rightarrow \infty }\frac{\log \mathbb {P}(\sigma ^{+}_T\le 1)}{\log T} = \lim _{T\rightarrow \infty }\frac{\log \mathbb {P}(\tau ^{\text {max}}_T\le 1)}{\log T}= H-1. \end{aligned}
(2)

These asymptotics can be viewed as a general (and very weak) form of Lévy’s arcsine laws for Brownian motion. Intuitively, the agreement of exponents can be explained by observing that the dominating events contributing to each of the probabilities in (1) and (2) are long (negative) excursions of B from $$B_0=0$$. This type of event also entails a small local time at 0, and one is inclined to believe that the probability of the local time being small is of the same order. However, the result in [27] for the local time is only a lower bound, namely that there is a constant $$b\in (0,\infty )$$ such that

\begin{aligned} \mathbb {P}(\ell (0,T]\le 1) \ge T^{-(1-H)}\,b \text {e}^{-\sqrt{\log T}}, \end{aligned}
(3)

for sufficiently large T. Hence, the local time persistence exponent

\begin{aligned} \kappa = -\lim _{T\rightarrow \infty }\frac{\log \mathbb {P}(\ell (0,T]\le 1)}{\log T} \end{aligned}

of B satisfies

\begin{aligned} \kappa \le 1-H, \end{aligned}

and the lower bound given in (3) is still the best lower tail estimate for the local time of FBM with index $$H\in (0,1)\setminus \{\nicefrac {1}{2}\}$$ available in the literature.
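Molchan’s asymptotic (1) lends itself to a quick numerical illustration. The following sketch, which is only an illustration under ad hoc choices of time grid, sample size and Hurst index, simulates FBM by Cholesky factorisation of its covariance and estimates $$\mathbb {P}({\bar{B}}_T\le 1)$$ for growing T; the estimates decay roughly like $$T^{-(1-H)}$$.

```python
import numpy as np

def fbm_paths(H, times, n_paths, rng):
    """Sample FBM at the given time points via Cholesky factorisation of
    the covariance R(s, t) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2."""
    t = np.asarray(times, dtype=float)
    R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-10 * np.eye(len(t)))  # tiny jitter
    return rng.standard_normal((n_paths, len(t))) @ L.T

rng = np.random.default_rng(0)
H = 0.75
probs = []
for T in (4.0, 16.0, 64.0):
    times = np.linspace(T / 400, T, 400)      # avoid the degenerate t = 0
    paths = fbm_paths(H, times, 2000, rng)
    probs.append(np.mean(paths.max(axis=1) <= 1.0))
print(probs)  # decreasing, roughly like T^{-(1-H)} = T^{-1/4}
```

For Hurst indices close to 0 or 1 the covariance matrix becomes ill-conditioned, so this naive factorisation is only suitable for moderate H and grid sizes.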

Molchan’s proofs require some technical tools, namely Slepian’s lemma and reproducing kernel Hilbert spaces, which are specific to the Gaussian setting. Moreover, his argument crucially relies on the connection of the persistence probability to a certain path integral functional; this relation is in fact also useful outside the FBM context, see, for example, [3]. However, there is no straightforward link between the functional and the properties of $$\ell$$, which is the reason why Molchan’s result regarding the local times is incomplete.

The goal of the present paper is to show how to circumvent these obstacles and establish the equality

\begin{aligned} \kappa = 1-H \end{aligned}
(4)

for FBM directly by studying the local time. In fact, we prove a significantly stronger result, namely that there is a constant $$C\in (0,\infty )$$, such that

\begin{aligned} \mathbb {P}(\ell (0,T]\le 1)\sim C T^{-(1-H)}, \end{aligned}
(5)

where here and in what follows we use the notation $$f(T)\sim g(T)$$ to indicate that the ratio $$\nicefrac {f}{g}$$ of the two functions converges to 1 as the argument T approaches $$\infty$$. Our approach to show (5) does not use that FBM is a Gaussian process. Consequently, (5) not only holds for FBM, but for any H-sssi process which admits sufficiently regular local time measures.

In fact, (4) relates the time B spends at 0 to the box-counting dimension of its zero set, which equals $$1-H$$. Indeed, there is a well-known non-rigorous box-counting argument, see, for example, [13], which suggests that the probability of observing an excursion from 0 of length greater than T is of order $$T^{-{(1-H)}}$$. One way of looking at our result is that it makes this connection rigorous; our method indeed enables us to prove (4) using only the invariance properties of the underlying processes, without recourse to specific distributional structures such as Gaussianity, the Markov or martingale property, etc.
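The box-counting heuristic can be probed numerically in the Brownian case $$H=\nicefrac {1}{2}$$, where the zero set has box-counting dimension $$\nicefrac {1}{2}$$. In the sketch below (grid resolution, box sizes and path count are arbitrary choices), a dyadic interval is deemed to meet the zero set when the simulated path changes sign inside it; the fitted slope estimates $$1-H$$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1 << 16                      # time-grid resolution on [0, 1]
n_paths = 20
ks = np.arange(5, 11)            # dyadic box sizes delta = 2^{-k}

counts = np.zeros(len(ks))
for _ in range(n_paths):
    W = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)  # Brownian path on [0, 1]
    for i, k in enumerate(ks):
        blocks = W.reshape(1 << k, -1)
        # a dyadic block meets the zero set iff the path changes sign in it
        counts[i] += np.sum((blocks.min(axis=1) < 0) & (blocks.max(axis=1) > 0))

# N(delta) ~ delta^{-(1-H)}: the slope of log2 N against k estimates 1 - H
slope = np.polyfit(ks, np.log2(counts / n_paths), 1)[0]
print(slope)
```

The slope comes out close to $$\nicefrac {1}{2}$$; boundary crossings between adjacent blocks are slightly undercounted, which does not affect the scaling.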

It is immediate from (4) that we have $${\bar{\kappa }}=\kappa$$ for B and the author believes that this is also true in a more general context. In particular, one should be able to combine the arguments in this paper with the methods developed by Aurzada et al. in [4, 5] to show that both persistence exponents coincide for all H-sssi processes which are positively associated.

The technique for establishing $$\kappa =1-H$$ proposed below is entirely novel in the context of persistence probabilities. It combines three principal ingredients:

• a distributional representation of the local times using Palm theory;

• an invariance property of Palm distributions known as mass stationarity;

• a simple bi-variate scaling relation for an associated point process, which encodes the H-sssi property.

The first two items are covered by a very general and deep result of Last, Mörters and Thorisson [21], cf. Sect. 4; however, only the first item actually requires abstract results from the theory of random measures that are not based on elementary calculations.

The remainder of this text is organised as follows. In the next section, we fix our notation and present our main result for the local time persistence probabilities in a general setting, Theorem 1, and in two special cases, namely for FBM and the Rosenblatt process. The main argument to prove Theorem 1 is given in Sect. 3, subject to some auxiliary results which require a more extensive discussion in Sect. 4. In the appendix, we provide some useful results from the literature as well as some auxiliary calculations for the convenience of the reader.

## 2 Notation and Main Results

We assume throughout the remainder of the article that $$(X_t)_{t\in \mathbb {R}}$$ is a real-valued stochastic process defined on a complete probability space $$(\varOmega ,{\mathscr {F}},\mathbb {P})$$, which satisfies $$X_0=0$$ almost surely, and is continuous in probability at 0, i.e.

\begin{aligned} \lim _{h\rightarrow 0}\mathbb {P}(|X_{h}|>\epsilon )=0, \quad \text {for every }\epsilon >0. \end{aligned}
(6)

Most importantly, we require X to be H-sssi, i.e. to satisfy the invariance relations

\begin{aligned} (X_{t+s}-X_t)_{s\in \mathbb {R}}\overset{d}{=}(X_{s})_{s\in \mathbb {R}}, \;\text { for any }t\in \mathbb {R},\quad \quad \text {(stationarity of increments),} \end{aligned}

and

\begin{aligned} (X_{rs})_{s\in \mathbb {R}}\overset{d}{=}(r^H X_s)_{s\in \mathbb {R}}, \;\text { for every } r>0, \quad \quad \text {(}H\text {-self-similarity)}, \end{aligned}

where $$\overset{d}{=}$$ denotes equality of finite dimensional distributions. We extend the definition of H-sssi to processes indexed by $$[0,\infty )$$ by restricting the stationarity of increments to positive shifts only.

### Remark 1

It is not hard to see, cf. [15, Lemma 1.1.1], that self-similarity and (6) in fact imply $$\mathbb {P}(X_0=0)=1$$. Note further that stationarity of increments allows us to conclude continuity in probability at all times from (6).

Our main object of interest is the local time measure of X at 0. We use the following notational conventions related to measures: $${\mathfrak {B}}(\cdot )$$ denotes the Borel-$$\sigma$$-field of the space in brackets. If $$\nu$$ is a measure on $${\mathfrak {B}}(\mathbb {R})$$, the Borel-sets of the real line, and (ab) is an interval, we use the notation $$\nu (a,b)$$ instead of $$\nu ((a,b))$$ and an analogous shorthand for closed and half open intervals. We frequently associate a measure $$\nu$$ with its additive functional

\begin{aligned} \nu _t={\left\{ \begin{array}{ll} \nu (0,t], &\text { if }t>0,\\ -\nu (-t,0], &\text { if }t\le 0, \end{array}\right. } \end{aligned}

and vice versa. If $$\nu$$ is a random measure, then $$(\nu _t)_{t\in \mathbb {R}}$$ is a non-decreasing stochastic process. We define the occupation measure of X on $${\mathfrak {B}}(\mathbb {R}\times \mathbb {R})$$ by setting

\begin{aligned} \psi (A \times B)=\int _A \mathbb {1}\{X_r \in B\} \text {d}r,\quad A,B\in {\mathfrak {B}}(\mathbb {R}), \end{aligned}
(7)

and recall that (7) yields a well-defined Borel measure as long as the trajectories of X are Borel-functions. We say that X admits local times, or, for short, that X is LT, if for each $$n=1,2,\dots$$, $$\mathbb {P}$$-a.s.,

\begin{aligned} \psi \big ((-n,n)\times \cdot \big ) \text { is absolutely continuous w.r.t. Lebesgue measure}. \end{aligned}

Since X has stationary increments, it is in fact sufficient for X to be LT that the Radon–Nikodym density $$\nicefrac {\text {d}\psi (I,\text {d}y)}{\text {d}y}$$ exists a.s. for a single (arbitrarily chosen) open set I. Disintegration yields, for every y outside some Lebesgue-negligible set $${\mathscr {R}}$$, a locally finite measure $$\ell ^y$$ on $${\mathfrak {B}}(\mathbb {R})$$ such that

\begin{aligned} \psi (A \times B)=\int _B \ell ^y(A)\text {d}y,\quad A,B\in {\mathfrak {B}}(\mathbb {R}), \end{aligned}
(8)

and we call $$\ell ^y$$ the local time of X at level y. Moreover, it can be shown, see, for example, [18, Lemma (3)], that for every $$y\notin {\mathscr {R}}$$, we can choose a version of $$\ell ^{y}(0,t]$$ which is right-continuous in the time variable. A similar statement holds for $$\ell ^{y}(-t,0]$$. Recall that we have $$X(0)=0$$ a.s., i.e. $$\ell ^0=\ell ^{X(0)}$$, if $$0\notin {\mathscr {R}}$$, but a priori the existence of $$\ell ^0$$ cannot be guaranteed using the above construction of local times. This technical issue is addressed in the Appendix; for the time being, let us assume that $$0\notin {\mathscr {R}}$$ and that $$\ell ^0$$ is well defined.

We are chiefly interested in $$\ell ^0$$ and therefore just abbreviate $$\ell =\ell ^0$$ and call it local time, without reference to the level 0. From the construction of $$\ell$$, we can straightforwardly derive a path-wise representation. Let

\begin{aligned} \ell _\epsilon ^y(A):=\frac{1}{2\epsilon }\psi \left( A\times (y-\epsilon ,y+\epsilon )\right) , \end{aligned}

then we have that for all $$y\notin {\mathscr {R}}$$

\begin{aligned} \lim _{\epsilon \rightarrow 0}\ell _\epsilon ^{y}(A)= \ell ^{y}(A),\; A\in {\mathfrak {B}}(\mathbb {R}), \end{aligned}
(9)

which shows that our definition agrees with the formula for $$\ell$$ given in the introduction.
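The approximation (9) suggests a direct numerical check in the Brownian case $$H=\nicefrac {1}{2}$$. The sketch below (all discretisation parameters are ad hoc) compares the Monte Carlo mean of $$\ell ^0_\epsilon (0,1]$$ with the known value $$\mathbb {E}\,\ell (0,1]=\int _0^1 (2\pi t)^{-\nicefrac {1}{2}}\text {d}t=\sqrt{\nicefrac {2}{\pi }}\approx 0.798$$ for standard Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, n_paths = 10_000, 0.05, 400
dt = 1.0 / n

# ell_eps(0, 1] = (2*eps)^{-1} * Leb{t <= 1 : |W_t| < eps} for Brownian W
W = np.cumsum(rng.standard_normal((n_paths, n)) * np.sqrt(dt), axis=1)
ell_eps = (np.abs(W) < eps).sum(axis=1) * dt / (2 * eps)

# For Brownian motion, E[ell(0, 1]] = int_0^1 (2 pi t)^{-1/2} dt = sqrt(2/pi)
print(ell_eps.mean(), np.sqrt(2 / np.pi))
```

Shrinking eps reduces the (small) bias of the window approximation, at the price of a finer time grid to keep the increments resolved within the window.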

We now introduce two further structural conditions on $$\ell$$ which are necessary for our derivation of the lower tail probabilities of $$\ell (0,T]$$. Let

\begin{aligned} \mathsf {supp}(\nu )=\{t: \nu (t-\epsilon ,t+\epsilon )>0 \text { for all }\epsilon >0 \} \end{aligned}

denote the support of a measure $$\nu$$ on $${\mathfrak {B}}(\mathbb {R})$$ and recall that a nowhere dense set of real numbers is a set whose (topological) closure does not contain any interval.

### Assumptions on the local time

With probability 1,

\begin{aligned}&\ell \text { has no atoms}, \qquad \textsf {AL}\\&\mathsf {supp}(\ell ) \text { is nowhere dense.} \qquad \textsf {ND} \end{aligned}

Condition AL is equivalent to demanding that $$(\ell _t)_{t\in \mathbb {R}}$$ be continuous a.s. Both conditions entail a rather erratic behaviour of the trajectories of X, which is not surprising in view of our main example, fractional Brownian motion.

### Remark 2

The validity of AL and ND is indispensable for the approach to local times taken in this paper. However, the author strongly believes that the conditions listed are not minimal; in particular, condition ND should be a consequence of condition AL for any H-sssi process which is continuous in probability, but the author is not aware of any proof of this implication.

To exclude pathologies, we also restrict ourselves to situations where $$\ell$$ is a.s. not the zero measure; we then say that $$\ell$$ is non-zero. We are now prepared to state our main result.

### Theorem 1

(Persistence of local time for H-sssi processes) Let X be continuous in probability, H-sssi and LT and denote by $$\ell$$ its local time at 0. If $$\ell$$ is non-zero and satisfies AL and ND, then there exists a constant $$c_X\in (0,\infty )$$ such that, as $$T\rightarrow \infty$$,

\begin{aligned} \mathbb {P}(\ell (0,T]\le 1) \sim c_X T^{-(1-H)}. \end{aligned}

There are two important observations needed for the proof of Theorem 1. The first one is a result stating that the lengths of the excursions of X from 0 follow, in a certain sense, a hyperbolic distribution. The precise formulation is given in Sect. 3 as Proposition 1. The second one is that, under AL and ND, $$\ell$$ is entirely encoded in the excursions of X from 0, which is manifested in the fact that the right-continuous inverse of $$(\ell _t)_{t\in \mathbb {R}}$$ is a.s. a pure jump process. Before we develop the details, we devote the remainder of this section to some of the implications of Theorem 1.

To this end, we provide two example processes to which Theorem 1 can be applied. The first one, naturally, is fractional Brownian motion. The local times and level sets of FBM have been studied by several authors, and the pioneering work was done by Kahane in the late 1960s, see in particular [19, Chapter 18].

### Theorem 2

(Persistence of local time for FBM) Let B denote fractional Brownian motion with Hurst index $$H\in (0,1)$$ and denote by $$\ell$$ its local time at 0. Then, there is a constant $$c_B\in (0,\infty )$$ such that, as $$T\rightarrow \infty$$,

\begin{aligned} \mathbb {P}(\ell (0,T]\le 1)\sim c_B T^{-(1-H)}. \end{aligned}

### Proof

We only need to verify conditions AL and ND. The continuity in time of fractional Brownian local time is well known and follows, for example, from the criterion proposed by Geman [17] for Gaussian processes. Let $${\mathscr {Z}}_B$$ denote the set of zeroes of the FBM trajectory. Kahane [19] showed that the Hausdorff dimension $$\dim {\mathscr {Z}}_B$$ of the zero set equals $$1-H<1$$ a.s. Since B is a.s. continuous, $${\mathscr {Z}}_B$$ is closed; as its Hausdorff dimension is strictly smaller than 1, it cannot contain an interval. Hence $${\mathscr {Z}}_B$$ is nowhere dense and therefore $$\mathsf {supp}(\ell )$$ is nowhere dense, since $$\mathsf {supp}(\ell )\subset {\mathscr {Z}}_B$$. $$\square$$

To illustrate the power of our approach, we now discuss a non-Gaussian example, namely the Rosenblatt process $$R=(R_t)_{t\in \mathbb {R}}$$. This process was introduced by Taqqu [33], see also [14], and arises as a limiting process in so-called (functional) non-central limit theorems, analogously to FBM appearing in central limit theorems for correlated random walks. We will not give a formal definition of the Rosenblatt process here; a definition can be given using an iterated Wiener–Itô integral, see [34]. Instead, we restrict ourselves to listing the properties of R which are relevant to verify the local time persistence result. A comprehensive source for all stated facts is Taqqu’s survey article [35]. Unlike FBM, the Rosenblatt process can only be defined for $$H\in (\nicefrac 12, 1)$$. For any such H, R is uniquely defined (up to multiplication by a constant) and satisfies

• R has Hölder continuous paths a.s. for any Hölder exponent $$\gamma < H$$,

• R is H-sssi.

### Theorem 3

(Persistence of local time for the Rosenblatt process) Let R denote the H-sssi Rosenblatt process, $$H\in (\nicefrac 12,1)$$ and denote by $$\ell$$ its local time at 0. Then, there is a constant $$c_R\in (0,\infty )$$ such that, as $$T\rightarrow \infty$$,

\begin{aligned} \mathbb {P}(\ell (0,T]\le 1)\sim c_R T^{-(1-H)}. \end{aligned}

### Proof

In principle, we can apply the same arguments as for FBM, but the corresponding preliminary results for the Rosenblatt process needed to verify conditions AL and ND are less well known. We thus give a slightly more explicit version of the argument. Existence of square integrable (in space) local times for R has been shown in [32]. Let us show continuity of the cumulative local time process $$(\ell _t)_{t\ge 0}$$. Geman’s sufficient criterion [17, Theorem B (I)] for the continuity of the local time can be restated as follows for two-sided stationary increment processes:

\begin{aligned} \int _{-1}^1 \sup _{\epsilon >0}\frac{1}{\epsilon }\mathbb {P}(|R_s|<\epsilon ) \text {d}s <\infty . \end{aligned}
(10)

To show that (10) holds, we use results of Veillette and Taqqu [38], who studied the distribution of $$R_1$$ extensively. In particular, they show that $$R_1$$ has a smooth density and the same holds, by self-similarity, for $$R_s, s\in {\mathbb {R}\setminus \{0\}}$$. Let $$f_s$$ denote the density of $$R_s$$. Then, (10) is satisfied, if

\begin{aligned} g(s):=\limsup _{\epsilon \downarrow 0}\frac{1}{2\epsilon }\int _{-\epsilon }^\epsilon f_s(u)\text {d}u \end{aligned}

is integrable around the origin. But by smoothness of $$f_s$$, we have $$g(s)=f_s(0)<\infty$$ and, by H-self-similarity, $$f_s(0)=s^{-H}f_1(0)$$, i.e. $$g(s)=s^{-H}g(1)$$ for any $$s>0$$. Consequently, g is integrable around 0 and (10) is satisfied.

Turning to ND, we argue using the well-known fact, see, for example, [19], about Hölder continuity and Hausdorff dimension of the level sets of a real function f: If f is $$\gamma$$-Hölder continuous with $$\gamma \in (0,1)$$, then the Hausdorff dimension of its level sets is at most $$1-\gamma .$$ This is sufficient to complete the argument in the same fashion as for FBM. $$\square$$
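The integrability step in the proof above reduces to $$\int _0^1 s^{-H}\text {d}s=\nicefrac {1}{1-H}<\infty$$ for $$H<1$$; the following trivial numerical check (the cutoff a near the integrable singularity at 0 is an arbitrary choice) confirms this for a Hurst index in the Rosenblatt range.

```python
import numpy as np

H = 0.75                         # a Hurst index in the Rosenblatt range (1/2, 1)
a = 1e-6                         # cutoff near the integrable singularity at 0
s = np.linspace(a, 1.0, 1_000_000)
y = s ** -H                      # g(s)/g(1) = s^{-H} by self-similarity
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s))   # trapezoidal rule
exact = (1.0 - a ** (1.0 - H)) / (1.0 - H)               # int_a^1 s^{-H} ds
print(integral, exact, 1.0 / (1.0 - H))
```

As the cutoff a shrinks, the truncated integral approaches the finite limit $$\nicefrac {1}{1-H}=4$$, while for $$H\ge 1$$ it would diverge.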

### Remark 3

The following general recipe may be used to verify the conditions of Theorem 1: if an H-sssi LT process has sufficiently high moments, then Hölder continuity of the paths can be inferred from the Kolmogorov–Chentsov continuity theorem [10], and ND follows as in the proofs above. If, additionally, the transition density at 0 exists, then AL is satisfied.

## 3 Proof of Theorem 1

Let X be as in Theorem 1 and let $$\ell$$ be its local time at 0. Recall that the corresponding additive functional $$(\ell _t)_{t\in \mathbb {R}}$$ is given by

\begin{aligned} \ell _t={\left\{ \begin{array}{ll} \ell (0,t], &\text { if } t>0, \\ -\ell (t,0], &\text { if } t\le 0, \end{array}\right. } \end{aligned}

and denote its right-continuous inverse by $$L=(L_{x})_{x\in \mathbb {R}}$$. It is straightforward from (9) that $$(\ell _t)_{t\in \mathbb {R}}$$ is $$(1-H)$$-self-similar; hence, L is $$\nicefrac {1}{1-H}$$-self-similar. By AL, $$\ell$$ has no atoms, and hence L is strictly increasing; by ND, $$\mathsf {supp}(\ell )$$ is nowhere dense, so L is a monotone pure jump process. Consequently, it induces a locally finite, purely atomic random measure $${\hat{\ell }}$$ on $${\mathfrak {B}}(\mathbb {R})$$. We say a random measure is $$\beta$$-scale-invariant, if its additive functional is a $$\beta$$-self-similar process. It thus follows from self-similarity of L that $${{\hat{\ell }}}$$ is $$\nicefrac {1}{1-H}$$-scale-invariant. Because $${\hat{\ell }}$$ is purely atomic, we may identify it with a point process on $$\mathbb {R}\times (0,\infty )$$, see Lemma 2. This point process is denoted by $${\hat{N}}$$ and its intensity measure by $${\hat{\varLambda }}.$$ The key observation of our argument is that $${\hat{\varLambda }}$$ is entirely determined (up to a multiplicative constant) by the invariance properties of $${\hat{\ell }}$$.

### Proposition 1

Let $${\hat{N}}$$ denote the point process representation of the inverse local time measure $${\hat{\ell }}$$. Then, the corresponding intensity measure $${\hat{\varLambda }}$$ is given by

\begin{aligned} {\hat{\varLambda }} (\text {d}x \times \text {d}m)= cm^{-1-(1-H)}\text {d}x\text {d}m, \end{aligned}
(11)

for some finite constant $$c>0$$.

We postpone the proof of Proposition 1 to the next section, but note that subject to the validity of Proposition 1, all that remains to establish Theorem 1 is to relate the tail behaviour of $${{\hat{\varLambda }}}$$ to the tail behaviour of $$\ell$$.
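For $$H=\nicefrac {1}{2}$$, (11) is the jump measure of the $$\nicefrac {1}{2}$$-stable subordinator arising as the inverse local time of Brownian motion. The sketch below (with $$c=1$$ and an ad hoc truncation $$m_0$$ of the infinitely many small jumps, whose total contribution is negligible here) samples a point process with this intensity directly and compares the empirical tail of $$L_1=\int _0^1\int _0^\infty m\, {\hat{N}}(\text {d}x\times \text {d}m)$$ with $${\hat{\varLambda }}([0,1]\times (T,\infty ))$$.

```python
import numpy as np

rng = np.random.default_rng(2)
H, c, m0 = 0.5, 1.0, 1e-4        # m0 truncates the small jumps (ad hoc choice)
n_paths = 2000

# Poisson number of atoms in [0,1] x (m0, inf) under (11); the sizes then
# follow a Pareto law P(M > m) = (m / m0)^{-(1-H)}.
lam = c / (1 - H) * m0 ** -(1 - H)
counts = rng.poisson(lam, size=n_paths)
L1 = np.array([
    (m0 * (1 - rng.uniform(size=k)) ** (-1.0 / (1 - H))).sum()
    for k in counts
])

results = []
for T in (40.0, 160.0):
    emp = np.mean(L1 > T)
    theory = c / (1 - H) * T ** -(1 - H)   # Lambda-hat([0,1] x (T, inf))
    results.append((T, emp, theory))
print(results)
```

The empirical tail of $$L_1$$ tracks $$\nicefrac {c}{1-H}\,T^{-(1-H)}$$, which is exactly the relation exploited in the proof below.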

### Proof

(Proof of Theorem 1) We observe that

\begin{aligned} \mathbb {P}(\ell (0,T]\le 1)= \mathbb {P}(\ell (0,T]< 1) =\mathbb {P}(L_1>T),\quad T>0, \end{aligned}

i.e. we obtain lower tail bounds for $$\ell _t$$ from upper tail bounds for $$L_1$$. Let $${\hat{N}}$$ denote the point process representation of $${\hat{\ell }}$$ and note that $$L_1=\int _0^1\int _0^\infty m {\hat{N}}(\text {d}x \times \text {d}m)$$. Fix any $$r>0$$. Since $${\hat{N}}$$ is a simple point process and $${\hat{\ell }}$$ is purely atomic, we have by standard results from random measure theory, e.g. [12, Prop. 9.1.III(v)],

\begin{aligned} {\hat{N}}([0,1]\times (r,\infty ))=\lim _{n\rightarrow \infty }\sum _{k=1}^{n} \mathbb {1}\left\{ {\hat{\ell }}\left( \frac{k-1}{n},\frac{k}{n}\right] >r\right\} , \text { a.s}. \end{aligned}
(12)

Set

\begin{aligned} P_{k,n}:=\mathbb {P}\left( {\hat{\ell }}\left( \frac{k-1}{n},\frac{k}{n}\right] > r\right) , \quad 1\le k\le n, n=1,2,\dots , \end{aligned}

and take expectations in (12) to obtain

\begin{aligned} \limsup _{n\rightarrow \infty }\sum _{k=1}^n P_{k,n}\le \mathbb {E}{\hat{N}}([0,1]\times (r,\infty ))\le \liminf _{n\rightarrow \infty }\sum _{k=1}^nP_{k,n}, \end{aligned}

having applied Fatou’s lemma and the reverse Fatou lemma. Since the limit superior always dominates the limit inferior, the chain of inequalities collapses into equalities, i.e.

\begin{aligned}\sum _{k=1}^n P_{k,n}\overset{n\rightarrow \infty }{\longrightarrow } \mathbb {E}{\hat{N}}([0,1]\times (r,\infty )). \end{aligned}

For any $$\delta \in (0,1)$$ we may thus fix $$1\le N_\delta <\infty$$ such that

\begin{aligned} (1-\delta )\int _{r}^\infty c m^{-1-(1-H)}\text {d}m \le \sum _{k=1}^{N_\delta } P_{k,N_{\delta }} \le (1+\delta )\int _{r}^\infty c m^{-1-(1-H)}\text {d}m, \end{aligned}
(13)

where we have used that

\begin{aligned} \mathbb {E}{\hat{N}}([0,1]\times (r,\infty ))= {\hat{\varLambda }}([0,1]\times (r,\infty ))= \int _{r}^\infty c m^{-1-(1-H)}\text {d}m, \end{aligned}

according to Proposition 1. Using that $$P_{k,N_{\delta }}=\mathbb {P}(L_{\nicefrac {1}{N_\delta }}>r)$$ by the stationarity of the increments of L, we can rewrite (13) as

\begin{aligned} (1-\delta )\frac{c}{1-H}r^{-(1-H)} \le N_{\delta }\mathbb {P}(L_{\nicefrac {1}{N_\delta }}>r) \le (1+\delta )\frac{c}{1-H}r^{-(1-H)}, \end{aligned}

and applying the $$\nicefrac {1}{1-H}$$-self-similarity of L and rearranging terms yields

\begin{aligned} (1-\delta )\frac{c}{1-H}\left( {r}{N_\delta ^{\nicefrac {1}{1-H}}}\right) ^{-(1-H)} \le \mathbb {P}\left( L_1>rN_{\delta }^{\nicefrac {1}{1-H}}\right) \le (1+\delta )\frac{c}{1-H}\left( {r}{N_\delta ^{\nicefrac {1}{1-H}}}\right) ^{-(1-H)}, \end{aligned}

i.e.

\begin{aligned} \mathbb {P}(L_1>T)= \frac{c}{1-H}(1+o(1))T^{-(1-H)}, \quad \text { as }T\rightarrow \infty , \end{aligned}

and Theorem 1 is proved, subject to Proposition 1.

$$\square$$
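The partition identity (12) used at the start of the proof can be sanity-checked on a toy purely atomic measure. In the sketch below (atom positions and the heavy-tailed masses are arbitrary), the positions are well separated, so the finest partition counts exactly the atoms of mass above r, while coarse partitions may merge atoms.

```python
import numpy as np

rng = np.random.default_rng(5)
# toy purely atomic measure on [0, 1]: fixed, well-separated atom positions
pos = np.modf(np.sqrt(2.0) * np.arange(1, 201))[0]   # 200 distinct positions
mass = 0.01 * (1 - rng.uniform(size=200)) ** -0.5    # heavy-tailed masses
r = 0.05
exact = int(np.sum(mass > r))

for n in (10, 100, 10_000):
    # indicator sum over the partition into n bins, as in (12)
    sums, _ = np.histogram(pos, bins=n, range=(0.0, 1.0), weights=mass)
    print(n, int(np.sum(sums > r)), "atoms above r:", exact)
```

The positions come from an irrational rotation, whose minimal gap (about 0.002 for 200 points) exceeds the finest bin width, so no two atoms share a bin at n = 10,000.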

## 4 Auxiliary Results

We now provide the background needed to complete the proof of Theorem 1, starting with some basics of random measure theory: we introduce Palm distributions, discuss some of their key properties, and state a few results about scale-invariant random measures.

### 4.1 Palm Distributions and Mass-Stationarity

Here and in the following subsection, we take a general point of view on $$\ell$$ and its distributional properties as a random measure, in particular we can forget about the process X from which it is derived. We instead consider some complete measurable space $$(\varOmega ,{\mathfrak {F}})$$, equipped with a $$\sigma$$-finite measure Q. We denote by $$E_Q(\cdot )$$ integration with respect to Q. A measurable map $$\xi$$ from $$\varOmega$$ into the space $$({\mathscr {M}},{\mathfrak {B}}({\mathscr {M}}))$$ of locally finite measures on $$\mathbb {R}$$, equipped with its Borel-$$\sigma$$-field, is called a random measure; we stress that Q need not be a probability distribution. We call Q a quasi-distribution, to distinguish it from the measures which are elements of $${\mathscr {M}}$$, and set $$Q_\xi =Q\circ \xi ^{-1},$$ i.e.

\begin{aligned} Q_{\xi }(G)=Q(\xi \in G), \quad G\in {\mathfrak {B}}({\mathscr {M}}). \end{aligned}

The intensity measure of $$\xi$$ under Q (or $$Q_\xi$$) is given by

\begin{aligned} \varLambda _\xi (A):=E_{Q}\xi (A)=\int \nu (A)Q_{\xi }(\text {d}\nu ), \; A\in {\mathfrak {B}}(\mathbb {R}). \end{aligned}

In what follows, we frequently consider Q directly as a quasi-distribution on $${\mathfrak {B}}({\mathscr {M}})$$ without explicit reference to a (canonical) random measure $$\xi$$ with distribution Q; consequently, we denote the associated intensity measure by $$\varLambda _Q$$. Whenever we discuss a probability measure, then we indicate this by using blackboard-face symbols, e.g. $$\mathbb {P},\mathbb {P}_\xi ,\mathbb {E}_\xi ,$$ etc. Furthermore, to formalise our discussion of stationarity properties, we use the shift group $$(\theta _t)_{t\in \mathbb {R}}$$ on $$\mathbb {R}$$. Note that the $$\theta _t, t\in \mathbb {R},$$ act measurably on $$({\mathscr {M}},{\mathfrak {B}}({\mathscr {M}})),$$ and in particular we have that

\begin{aligned} \theta _{-t}\nu (A)=\nu (A+t)=\nu \circ \theta _t(A), \quad A \in {\mathfrak {B}}(\mathbb {R}), \end{aligned}

where $$A+t:=\{a+t, a\in A\}$$. A quasi-distribution Q on $${\mathfrak {B}}({\mathscr {M}})$$ is invariant under the shifts $$(\theta _t)_{t\in \mathbb {R}}$$, i.e. stationary, if

\begin{aligned} Q(G)=Q(\{ \theta _t\nu , \nu \in G \}), \; t\in \mathbb {R}. \end{aligned}

If Q is stationary and satisfies

\begin{aligned} \lambda _Q:= \varLambda _Q\left( (0,1]\right) \in (0,\infty ) \end{aligned}

then it follows immediately that $$\varLambda _Q(\text {d}s)=\lambda _Q \text {d}s$$, i.e. $$\varLambda _Q$$ is a constant multiple of Lebesgue measure. We call $$\lambda _Q\in [0,\infty ]$$ the intensity of Q. Note that the quasi-distribution Q is a placeholder for a stationarised version of the distribution of the local time measure $$\ell$$; the corresponding construction is discussed below. For the moment, it is more convenient to stay in the general setting. However, the following assumptions on the support of Q

\begin{aligned} {\mathscr {S}}_{Q}:=\mathsf {supp}(Q)={\mathscr {M}}\setminus \bigcup _{N\in {\mathfrak {B}}({\mathscr {M}}): Q(N)=0} N \end{aligned}

are justified in view of our applications. Let o denote the zero measure; then

\begin{aligned} o \notin {\mathscr {S}}_{Q},\;\text { i.e. } Q \text { is non-zero,} \end{aligned}

and

\begin{aligned} \{\nu _t, t\in \mathbb {R}\}=\mathbb {R}, \; \text { for all } \nu \in {\mathscr {S}}_{Q}. \qquad \textsf {R} \end{aligned}

We have $$\lambda _Q>0$$ whenever Q is non-zero and stationary. Note that R mirrors condition AL; the details are given in Lemma 4.

We now turn to the subject of Palm distributions of a stationary, nonzero quasi-distribution Q. Fix any $$A\in {\mathfrak {B}}(\mathbb {R})$$ with finite and positive Lebesgue measure, then the quasi-distribution defined by

\begin{aligned} P_Q(G)=\frac{1}{\int _A \text {d}s }\int \int _A \mathbb {1}\{\theta _{-t}\nu \in G \} \nu (\text {d}t) Q(\text {d}\nu ), \quad G\in {\mathfrak {B}}({\mathscr {M}}), \end{aligned}

is independent of the choice of A, see Lemma 5. It is referred to as the Palm measure of Q. If Q has finite intensity $$\lambda _Q$$, then a probability distribution is defined by

\begin{aligned} \mathbb {P}_Q(G)= \frac{P_Q (G)}{\lambda _Q},\quad G\in {\mathfrak {B}}({\mathscr {M}}), \end{aligned}

and is called the Palm distribution of Q. Conversely, we say that a random measure $$\xi$$ is Palm-distributed, if its distribution $$\mathbb {P}_{\xi }$$ is the Palm distribution of some stationary quasi-distribution Q. It is well known, see, for example, [39, Lemma 3.3], that the almost sure properties of Q and $$\mathbb {P}_Q$$ agree (up to shifts). We may thus assume that any $$\nu \in {\mathscr {S}}_{\mathbb {P}_Q}$$ has the property indicated in R. The latter entails that the right-continuous inverse $$(\nu ^{-1}_x)_{x\in \mathbb {R}}$$ of the additive functional $$(\nu _t)_{t\in \mathbb {R}}$$ of $$\nu$$ is strictly increasing and that we have

\begin{aligned} \nu _{\nu ^{-1}_x}=x,\quad \text { for all } x\in \mathbb {R}. \end{aligned}

This allows us to define the measurable group of random time shifts

\begin{aligned} {\hat{\theta }}_x:={\hat{\theta }}_x(\nu ):=\theta _{\nu ^{-1}_x},\; x\in \mathbb {R}. \end{aligned}

Note that we may pick $$A=[0,1]$$ in the definition of $$P_Q$$ and change variables according to the random time change to obtain the alternative representation

\begin{aligned} \mathbb {P}_Q(G)=\frac{1}{\lambda _Q}\int \int _0^{\nu _1}\mathbb {1}\{ {\hat{\theta }}_{-x}\nu \in G\} \text {d}x Q(\text {d}\nu ), \; G\in {\mathfrak {B}}({\mathscr {M}}). \end{aligned}
(14)

Let us very briefly discuss the intuition behind Palm distributions in general and the use of the shifts $${\hat{\theta }}_x$$ in particular. Palm distributions originated in queuing theory [30]. The concept is most easily understood for simple stationary point processes, which corresponds to Q being the distribution of a random counting measure in our setting. In this case, the Palm distribution may be interpreted as a description of the distribution of the point process seen from a ‘typical point’, an intuition which can be made precise using ergodic theory, see, for example, the discussion in [37, Chapter 8]. At the heart of Palm theory lies a duality principle [36], which can be paraphrased as

“A point process is stationary, if and only if its Palm version is stationary under point-shifts.”

Since we are not working with point processes, point-stationarity needs to be replaced by ‘mass-stationarity’, which corresponds to invariance under the shifts $${\hat{\theta }}_x$$. Also, for our purpose, the backward implication in this statement is not needed. We only rely on the observation that a Palm-distributed random measure is stationary w.r.t. intrinsic shifts, i.e. shifts by mass points of its realisation. The following characterisation theorem due to Last, Mörters and Thorisson provides a general formulation of this principle.
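The ‘typical point’ intuition is most easily seen for a stationary Poisson process on the line, the simplest instance of the Palm formalism. In the sketch below (rate, horizon and sample size are arbitrary choices), the gap to the right of a typical point has mean 1/rate, whereas the gap straddling a fixed deterministic time is length-biased with mean 2/rate; this inspection paradox is exactly the kind of discrepancy that Palm calculus formalises.

```python
import numpy as np

rng = np.random.default_rng(3)
rate, n_reps, u = 2.0, 2000, 50.0     # u = a fixed deterministic time

typical, straddling = [], []
for _ in range(n_reps):
    pts = np.cumsum(rng.exponential(1 / rate, size=600))  # Poisson points
    j = np.searchsorted(pts, u)            # first point after the fixed time u
    straddling.append(pts[j] - pts[j - 1]) # interval covering u (length-biased)
    typical.append(pts[1] - pts[0])        # gap seen from a (typical) point

print(np.mean(typical), np.mean(straddling))
```

The two empirical means come out near 1/rate and 2/rate respectively: conditioning on covering a fixed time favours long intervals, while the Palm view weights every point equally.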

### Theorem 4

([21, Theorem 3.1]) Assume that $$\nu$$ is a diffuse random measure on a probability space $$(\varOmega ,{\mathfrak {F}},\mathbb {P})$$ with $$\mathbb {P}(\nu (-\infty ,0)<\infty )=\mathbb {P}(\nu (0,\infty )<\infty )=0.$$ Then, the following statements are equivalent:

1. (i)

$$\nu$$ is Palm-distributed;

2. (ii)

$$\nu$$ is mass-stationary, i.e. for all bounded Borel sets C such that

\begin{aligned} \int _C \text {d}x>0=\int _{\partial C} \text {d}x \end{aligned}

and all measurable functions $$f:\varOmega \times \mathbb {R}\rightarrow [0,\infty )$$

\begin{aligned} \mathbb {E}_{\mathbb {P}}\int \int \mathbb {1}_C(u)\frac{\mathbb {1}_{C-u}(s)}{\nu (C-u)}f(\theta _s,s+u)\nu (\text {d}s)\text {d}u = \mathbb {E}_{\mathbb {P}}\int \mathbb {1}_C(u) f(\theta _0,u)\text {d}u; \end{aligned}
3. (iii)

$$\mathbb {P}\circ {\hat{\theta }}_x^{-1} = \mathbb {P}$$ for all $$x\in \mathbb {R}$$.

For the local time $$\ell$$, the corresponding stationary quasi-distribution $$Q_\ell$$ hidden in statement (i) of Theorem 4 has the following interpretation: Let us set

\begin{aligned} Q_{\ell }(G)=\mathbb {E}\int \mathbb {1}\{\ell ^y(\cdot )\in G \}\text {d}y, \quad G\in {\mathfrak {B}}({\mathscr {M}}), o\notin G, \end{aligned}

recalling that o is the trivial measure. We can think of $$Q_{\ell }$$ as the law of the ‘local time at the origin’ of a trajectory in the flow of X, i.e. under the quasi-distribution $$Q_X$$ on trajectories obtained via

\begin{aligned} Q_X({H})=\int \mathbb {1}\{ X+y\in {H} \} \text {d}y,\; {H}\in {\mathfrak {Z}}({\mathscr {C}}), \end{aligned}

where $${\mathscr {C}}$$ is a suitable path space equipped with the $$\sigma$$-field $${\mathfrak {Z}}(\cdot )$$ of cylinder sets. $$Q_X$$ can be thought of as a mixture of the law of X w.r.t. Lebesgue measure in the ‘origin’ X(0), yielding a stationary measure on paths. From $$Q_X$$, we can derive a stationary version of the occupation measure and then disintegrate to obtain $$Q_\ell$$. Note, however, that $$Q_{\ell }$$ and $$\mathbb {P}_{\ell }$$ need not be interpreted in this way; it is only necessary that the distribution of local time as a random measure is a Palm distribution.

In fact, we do not require the original definition (ii) of mass-stationarity but only its characterisation through random shifts (iii). The crucial result for our approach is the implication (i)$$\Rightarrow$$(iii).

### Lemma 1

Let Q be a stationary nonzero measure on $${\mathfrak {B}}({\mathscr {M}})$$ with $$\lambda _Q<\infty$$, then its Palm distribution $$\mathbb {P}_Q$$ is stationary with respect to the random shifts $$({\hat{\theta }}_x)_{x\in \mathbb {R}}$$.

In the appendix, we provide a short, direct proof of Lemma 1 without reliance on Theorem 4. To actually apply Lemma 1, we need that $$\ell$$ is indeed Palm distributed, which is well known. Before we formulate the corresponding statement as Proposition 3, we briefly discuss scale-invariance of the local time which reflects the self-similar nature of the underlying process X.

### 4.2 Bi-Scale-Invariance

So far, we have focussed our discussion of random measures on invariance with respect to time shifts only. Now we additionally consider scale-invariance of measures, which is the counterpart to self-similarity of processes. The essential observation of this section is that stationarity combined with scale-invariance of a quasi-distribution or Palm distribution determines the corresponding intensity measures entirely up to a multiplicative constant.

We illustrate this by means of marked point processes on the real line. We consider $$(\varOmega ,{\mathfrak {F}},\mathbb {P})$$, i.e. we work under a probability measure. An extended marked point process with positive marks (EMPP) is a point process on $$\mathbb {R}\times (0,\infty )$$ which is a.s. finite on all sets of the form $$A\times M$$ for bounded $$A\in {\mathfrak {B}}(\mathbb {R})$$ and Borel sets

\begin{aligned} M\subset \left( \epsilon ,\frac{1}{\epsilon }\right) , \; \text { for some } \epsilon >0. \end{aligned}

Fix $$\beta >0$$ and $$r\in (0,1)$$. We define a rescaled point process by

\begin{aligned} S^\beta _r N (A\times M):= N(rA \times r^\beta M), \end{aligned}

where $$c M:=\{cm,\, m\in M\}$$ for any $$c\in \mathbb {R}\setminus \{0\}$$; thus, the operator $$S_r^\beta$$ combines a contraction by the factor r in the time domain with a contraction by $$r^\beta$$ in the mark space. An EMPP N on $$\mathbb {R}\times (0,\infty )$$ is called $$\beta$$-bi-scale-invariant if, for any $$r\in (0,1)$$,

\begin{aligned} \mathbb {P}\circ (S^\beta _rN)^{-1} = \mathbb {P}\circ N^{-1}. \end{aligned}

Similarly, the EMPP is stationary if its distribution is invariant under the shifts $$(\theta _t)_{t\in \mathbb {R}}$$ applied to the time domain only. We now recall a well-known result about point processes: if the intensity measure of a stationary, bi-scale-invariant EMPP is finite, then it must be the product of a multiple of Lebesgue measure in time and a hyperbolic law on the marks. To see this, note that stationarity implies homogeneity in time of the intensity measure, i.e. its time component must be a multiple of Lebesgue measure. Moreover, in logarithmic coordinates, bi-scale-invariance is turned into stationarity on the mark space; hence, the intensity on the marks is a multiple of Lebesgue measure in logarithmic coordinates, which becomes a hyperbolic law when the coordinate transform is reversed.
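The invariance of the product intensity can be checked directly on the density $$c\,m^{-1-\nicefrac {1}{\beta }}\,\text {d}t\,\text {d}m$$; a minimal numerical sketch (the integration window and parameter values are arbitrary illustrative choices, not taken from the paper):

```python
# Sketch: numerical check that the intensity Lambda(dt dm) = c * m^(-1-1/beta) dt dm
# assigns the same mass to a box A x M and its rescaled image rA x r^beta M.
# The window [t0,t1] x [m0,m1] and the parameters are arbitrary illustrative choices.

def intensity_mass(beta, t0, t1, m0, m1, c=1.0):
    """Integrate c * m^(-1-1/beta) dt dm over the box [t0,t1] x [m0,m1]."""
    time_mass = t1 - t0
    # antiderivative of m^(-1-1/beta) is -beta * m^(-1/beta)
    mark_mass = beta * (m0 ** (-1.0 / beta) - m1 ** (-1.0 / beta))
    return c * time_mass * mark_mass

beta, r = 0.5, 0.3
original = intensity_mass(beta, 1.0, 2.0, 0.5, 4.0)
rescaled = intensity_mass(beta, r * 1.0, r * 2.0, r ** beta * 0.5, r ** beta * 4.0)
assert abs(original - rescaled) < 1e-9  # Lambda(rA x r^beta M) = Lambda(A x M)
```

The exponent $$-1-\nicefrac {1}{\beta }$$ is exactly the one for which the factor $$r$$ gained in time is cancelled by the factor $$r^{-1}$$ from the marks.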

### Proposition 2

Let N be a bi-scale-invariant, stationary, extendedFootnote 3 marked point process on $$\mathbb {R}\times (0,\infty )$$ with positive and locally finite intensity measure $$\varLambda _N$$. Then, $$\varLambda _N$$ is necessarily of the form:

\begin{aligned} \varLambda _N (\text {d}t \times \text {d}m)= cm^{-1-\nicefrac {1}{\beta }}\text {d}t\text {d}m. \end{aligned}
(15)

### Proof

We give only an outline of the argument. A similar derivation, in more detail, can be found in [12, Chapter 12] for the case of an extended marked Poisson process. Let $$\varLambda _N$$ denote the intensity measure of N. We assume that

\begin{aligned} \varLambda _N(\{0\}\times (0,\infty ))=0, \end{aligned}

which holds if N almost surely has no points at time 0, and that $$\varLambda _N$$ is absolutely continuous with respect to 2-dimensional Lebesgue measure. We use a logarithmic change of coordinates, which makes it necessary to decompose N into a marked point process $$N_+$$ on $$(0,\infty )$$ and a marked point process $$N_-$$ on $$(-\infty ,0)$$, with associated intensity measures $$\varLambda _\pm$$.

Let us first consider $$\varLambda _+$$ only. By the logarithmic change of coordinates

\begin{aligned} (t,m)\mapsto (\log t,\beta \log t-\log m),\quad t\in (0,\infty ), m\in (0,\infty ), \end{aligned}

bi-scale-invariance is turned into shift-invariance and consequently under the new coordinates, $$\varLambda _+$$ must be a product of Lebesgue measure and some absolutely continuous measure $$\rho _+$$ on the (coordinate transformed) mark space, whenever the intensity in time is finite. In particular, reversing the coordinate transform, we obtain

\begin{aligned} \varLambda _+(\text {d}t\times \text {d}m) =\frac{ \phi _+({t^\beta }/{m})}{tm}\text {d}t\text {d}m \end{aligned}

for some locally integrable density $$\phi _+$$ of $$\rho _+$$ on $$(0,\infty )$$, see [12, p. 258]Footnote 4. A similar representation holds for $$\varLambda _{-}$$ with a density $$\phi _{-}$$. The additional assumption of stationarity in the time domain now implies that we must have

\begin{aligned} \phi _+(m)=\phi _{-}(m)=cm^{-\nicefrac {1}{\beta }}, \end{aligned}

for some $$c>0$$ and thus (15) must be satisfied. $$\square$$
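For completeness, the Jacobian computation behind this representation can be sketched as follows, writing $$u=\log t$$ and $$v=\log (t^\beta /m)$$ to match the coordinate change above:

```latex
t = e^{u}, \qquad m = e^{\beta u - v},
\qquad
\left|\det \frac{\partial(t,m)}{\partial(u,v)}\right|
 = \left|\det \begin{pmatrix} e^{u} & 0 \\ \beta e^{\beta u - v} & -e^{\beta u - v} \end{pmatrix}\right|
 = e^{u}\, e^{\beta u - v} = t\,m .
```

Hence $$\text {d}u\,\text {d}v=\frac{\text {d}t\,\text {d}m}{tm}$$, and a measure which is Lebesgue in u with a density $$\psi _+(v)$$ in v pulls back to $$\psi _+\big (\log (t^\beta /m)\big )\frac{\text {d}t\,\text {d}m}{tm}$$; setting $$\phi _+(s):=\psi _+(\log s)$$ recovers the representation of $$\varLambda _+$$.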

To apply this representation to random measures, we recall that atomic random measures can be bijectively mapped to EMPPs. Let $${\mathscr {N}}$$ be the space of locally finite extended marked point processes with positive marks N on $$(\mathbb {R}\times (0,\infty ))$$ satisfying

\begin{aligned} \int (m\wedge 1) \varLambda _N (A\times \text {d}m)<\infty \end{aligned}
(16)

for any bounded Borel set A, and let $${\mathscr {M}}_a\subset {\mathscr {M}}$$ denote the locally finite, purely atomic random measures on $${\mathfrak {B}}(\mathbb {R})$$.

### Lemma 2

([12, Lemma 9.1.VII]) There is a bijection mapping $${\mathscr {N}}$$ onto $${\mathscr {M}}_a$$.

In principle, the bijection of Lemma 2 simply consists of interpreting, for given $$\xi \in {\mathscr {M}}_a$$, a point $$x\in \mathsf {supp}(\xi )$$ with $$\xi (\{x\})=m>0$$ as a pair $$(x,m)\in \mathbb {R}\times (0,\infty )$$; the collection of all such pairs forms a marked point process $$N_{\xi }$$. To deal with accumulation points of $$\mathsf {supp}(\xi )$$, one needs to consider extended MPPs. Note that if the intensity measure $$\varLambda _\xi$$ of some random measure $$\xi$$ under this bijection assigns infinite mass to a bounded open interval A, then by local finiteness of $$\xi$$ this must be the consequence of infinitely many ever smaller atoms. Thus, $$N_{\xi }$$ puts finite mass on any open set $$A\times (\delta ,\infty )$$ for $$\delta >0$$ and is therefore locally finite in the extended sense. Since we wish to apply Lemma 2 to measures without fixed atoms, we observe that the bijection is preserved when restricted to the subspace of measures without fixed atoms and the subspace of marked point processes without fixed points, respectively.
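The correspondence of Lemma 2 can be made concrete in a few lines; a minimal sketch for finitely many atoms (the dictionary representation of an atomic measure is an illustrative assumption, not a construction from [12]):

```python
# An atomic measure is represented by its atoms as {location: mass};
# the bijection of Lemma 2 reads an atom at x with mass m as the marked point (x, m).

def measure_to_empp(atoms):
    """Forward direction: atoms of a purely atomic measure -> marked point set."""
    return sorted((x, m) for x, m in atoms.items())

def empp_to_measure(points):
    """Inverse direction: rebuild the atomic measure from its marked points."""
    return {x: m for x, m in points}

atoms = {-3.0: 7.5, 0.5: 2.0, 1.25: 0.1}
points = measure_to_empp(atoms)
assert points == [(-3.0, 7.5), (0.5, 2.0), (1.25, 0.1)]
assert empp_to_measure(points) == atoms  # the round trip recovers the measure
```

The subtleties addressed in the text (accumulation points, infinitely many small atoms) are exactly what this finite sketch omits and what the extended framework handles.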

The version of $$\beta$$-bi-scale-invariance for quasi-distributions is just called $$\beta$$-scale-invariance, as defined in Sect. 3 for probability distributions. Let us quickly recall this definition in the notation of this section. Let $$\mathbb {P}$$ be a probability distribution on $${\mathfrak {B}}({\mathscr {M}})$$. $$\mathbb {P}$$ is called $$\beta$$-scale-invariant if, for any $$r\in (0,1)$$, we have

\begin{aligned} \mathbb {P}(G) = \mathbb {P}\left( \{R^{\beta }_r\nu , \nu \in G \}\right) ,\; G\in {\mathfrak {B}}({\mathscr {M}}) \end{aligned}

where

\begin{aligned} \left( R^\beta _r \nu \right) (A):= r^{-\beta }\nu (rA), \end{aligned}

or, in short, $$\mathbb {P}\circ (R^\beta _r)^{-1}=\mathbb {P}$$. When considering a non-finite quasi-distribution, one needs an additional factor rescaling the total mass: a stationary non-finite quasi-distribution Q with finite intensity $$\lambda _Q$$ is called $$\beta$$-scale-invariant if

\begin{aligned} Q\circ \big (R^\beta _r\big )^{-1}=r^{\beta -1} Q. \end{aligned}

That this is the correct notion of scale-invariance for quasi-distributions can be seen by looking at their Palm distributions:

### Lemma 3

([39, Statement 2.3]) A stationary, nonzero, non-finite quasi-distribution Q is $$\beta$$-scale-invariant if and only if its Palm distribution $$\mathbb {P}_Q$$ is $$\beta$$-scale-invariant.

### 4.3 Proof of Proposition 1

The last ingredient needed to prove Proposition 1 is the following result, which establishes that $$\ell$$ is indeed distributed according to a scale-invariant Palm distribution.

### Proposition 3

([40, Proposition 6.9]) If X is H-sssi and LT, then $$\ell$$ is a $$(1-H)$$-scale-invariant, Palm-distributed random measure.

We may rephrase the statement of Proposition 3 as a statement about $$\mathbb {P}_{\ell }$$, the distribution of the local time as a random measure: there exists some quasi-distribution $$Q_{\ell }$$, such that $$\mathbb {P}_{\ell }$$ is the Palm-distribution of $$Q_{\ell }$$.

Finally, we are in the position to prove Proposition 1 and thus conclude the proof of Theorem 1.

### Proof

(Proof of Proposition 1) The representation (11) follows immediately from Proposition 2 upon showing $$\nicefrac {1}{1-H}$$-bi-scale-invariance of $${\hat{N}}$$. This is, in turn, equivalent to $$\nicefrac {1}{1-H}$$-scale-invariance and stationarity of $${\hat{\ell }}$$. The scale-invariance has already been established in the opening paragraph of Sect. 3. We now show that L has stationary increments and thus that $${\hat{\ell }}$$ is a stationary random measure. By Proposition 3, $$\ell$$ is Palm-distributed. For $$x\in \mathbb {R}$$, set $$t(x):=\inf \{t: \ell (t)>x\}=L_x$$. Fix $$x_0\in \mathbb {R}$$ and consider a finite family of points $$x_1,\dots ,x_n$$ with $$x_0<x_1<\dots <x_n$$ and the corresponding random times $$t(x_i), i=1,\dots ,n.$$ Almost surely, $$\{t(x_i), i=1,\dots ,n\}\subset \mathsf {supp}(\ell )$$ and, because $$\ell$$ is Palm-distributed, we may apply Lemma 1 to obtain

\begin{aligned} \big (t(x_i)-t(x_0)\big )_{i=1}^n\overset{d}{=} \big (t(x_i-x_0)-t(0)\big )_{i=1}^n=\big (L_{x_i-x_0}\big )_{i=1}^n \end{aligned}

and thus L has stationary increments. Consequently, $${\hat{\ell }}$$ is $$\nicefrac {1}{1-H}$$-scale-invariant and stationary, which concludes the proof of Proposition 1. $$\square$$
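The generalised inverse $$t(x)=L_x$$ used in the proof can be illustrated in a toy discrete setting; a sketch in which the number of visits of a simple random walk to 0 stands in for the local time (an illustrative assumption, since the paper works with H-sssi processes in continuous time):

```python
import random

random.seed(1)

# Simulate a simple random walk and record ell(t): visits to 0 up to time t.
n, pos, ell = 10_000, 0, 0
visits = []
for _ in range(n):
    if pos == 0:
        ell += 1
    visits.append(ell)
    pos += random.choice((-1, 1))

def L(x):
    """Inverse local time: first time t with ell(t) > x."""
    return next(t for t, v in enumerate(visits) if v > x)

# L is non-decreasing and ell(L(x)) > x, mirroring t(x) = inf{t : ell(t) > x}.
assert all(L(x) <= L(x + 1) for x in range(max(visits[-1] - 2, 0)))
assert all(visits[L(x)] > x for x in range(max(visits[-1] - 1, 0)))
```

In the proof, the increments $$t(x_i)-t(x_0)$$ of this inverse are shown to be stationary via the Palm property, a distributional statement which this pathwise sketch does not attempt to verify.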