
Asymptotic Behavior of Local Times of Compound Poisson Processes with Drift in the Infinite Variance Case

Published in: Journal of Theoretical Probability

Abstract

Consider compound Poisson processes with negative drift and no negative jumps, which converge to some spectrally positive Lévy process with nonzero Lévy measure. In this paper, we study the asymptotic behavior of the local time process, in the spatial variable, of these processes killed at two different random times: either at the time of the first visit of the Lévy process to 0, in which case we prove results at the excursion level under suitable conditionings; or at the time when the local time at 0 exceeds some fixed level. We prove that finite-dimensional distributions converge under general assumptions, even if the limiting process is not càdlàg. Making an assumption on the distribution of the jumps of the compound Poisson processes, we strengthen this to get weak convergence. Our assumption allows for the limiting process to be a stable Lévy process with drift. These results have implications on branching processes and in queueing theory, namely, on the scaling limit of binary, homogeneous Crump–Mode–Jagers processes and on the scaling limit of the Processor-Sharing queue length process.


References

  1. Barlow, M.T.: Necessary and sufficient conditions for the continuity of local time of Lévy processes. Ann. Probab. 16(4), 1389–1427 (1988)


  2. Bertoin, J.: Lévy Processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge (1996)


  3. Bertoin, J.: Exponential decay and ergodicity of completely asymmetric Lévy processes in a finite interval. Ann. Appl. Probab. 7(1), 156–169 (1997)


  4. Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics: Probability and Statistics, 2nd edn. Wiley, New York (1999)

  5. Borodin, A.N.: The asymptotic behavior of local times of recurrent random walks with finite variance. Teor. Veroyatnost. i Primenen. 26(4), 769–783 (1981)


  6. Borodin, A.N.: Asymptotic behavior of local times of recurrent random walks with infinite variance. Teor. Veroyatnost. i Primenen. 29(2), 312–326 (1984)


  7. Borodin, A.N.: On the character of convergence to Brownian local time. I, II. Probab. Theory Relat. Fields 72(2), 231–250, 251–277 (1986)


  8. Caballero, M.E., Lambert, A., Uribe Bravo, G.: Proof(s) of the Lamperti representation of continuous-state branching processes. Probab. Surv. 6, 62–89 (2009)

  9. Chan, T., Kyprianou, A., Savov, M.: Smoothness of scale functions for spectrally negative Lévy processes. Probab. Theory Relat. Fields, 1–18 (2010). doi:10.1007/s00440-010-0289-4

  10. Csáki, E., Révész, P.: Strong invariance for local times. Z. Wahrsch. Verw. Gebiete 62(2), 263–278 (1983)


  11. Csörgő, M., Révész, P.: On strong invariance for local time of partial sums. Stoch. Process. Appl. 20(1), 59–84 (1985)


  12. Duquesne, T., Le Gall, J.-F.: Random trees, Lévy processes and spatial branching processes. Astérisque 281, vi+147 (2002)

  13. Eisenbaum, N., Kaspi, H.: A necessary and sufficient condition for the Markov property of the local time process. Ann. Probab. 21(3), 1591–1598 (1993)


  14. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II, 2nd edn. Wiley, New York (1971)

  15. Grimvall, A.: On the convergence of sequences of branching processes. Ann. Probab. 2(6), 1027–1045 (1974)


  16. Haccou, P., Jagers, P., Vatutin, V.A.: Branching Processes: Variation, Growth, and Extinction of Populations. Cambridge Studies in Adaptive Dynamics. Cambridge University Press, Cambridge (2007)


  17. Helland, I.S.: Continuity of a class of random time transformations. Stoch. Process. Appl. 7(1), 79–99 (1978)


  18. Jacod, J., Shiryaev, A.N.: Limit theorems for Stochastic Processes, volume 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 2nd edn. Springer, Berlin (2003)


  19. Jain, N.C., Pruitt, W.E.: An invariance principle for the local time of a recurrent random walk. Z. Wahrsch. Verw. Gebiete 66(1), 141–156 (1984)


  20. Kang, J.-S., Wee, I.-S.: A note on the weak invariance principle for local times. Stat. Probab. Lett. 32(2), 147–159 (1997)


  21. Kella, O., Zwart, B., Boxma, O.: Some time-dependent properties of symmetric \(M/G/1\) queues. J. Appl. Probab. 42(1), 223–234 (2005)


  22. Kesten, H.: An iterated logarithm law for local time. Duke Math. J. 32, 447–456 (1965)


  23. Khoshnevisan, D.: An embedding of compensated compound Poisson processes with applications to local times. Ann. Probab. 21(1), 340–361 (1993)


  24. Knight, F.B.: Random walks and a sojourn density process of Brownian motion. Trans. Am. Math. Soc. 109, 56–86 (1963)


  25. Kuznetsov, A., Kyprianou, A.E., Rivero, V.: The theory of scale functions for spectrally negative Lévy processes. In: Lévy Matters II. Lecture Notes in Mathematics, pp. 97–186. Springer, Berlin, Heidelberg (2013)

  26. Kyprianou, A., Rivero, V., Song, R.: Convexity and smoothness of scale functions and de Finetti’s control problem. J. Theor. Probab. 23, 547–564 (2010). doi:10.1007/s10959-009-0220-z


  27. Lambert, A.: The contour of splitting trees is a Lévy process. Ann. Probab. 38(1), 348–395 (2010)


  28. Lambert, A., Simatos, F.: The weak convergence of regenerative processes using some excursion path decompositions. Ann. Inst. Henri Poincaré B: Probab. Stat. (accepted)

  29. Lambert, A., Simatos, F., Zwart, B.: Scaling limits via excursion theory: interplay between Crump–Mode–Jagers branching processes and processor-sharing queues. Ann. Appl. Probab. (accepted)

  30. Lamperti, J.: Continuous-state branching processes. Bull. Am. Math. Soc. 73, 382–386 (1967)


  31. Lamperti, J.: The limit of a sequence of branching processes. Probab. Theory Relat. Fields 7(4), 271–288 (1967)


  32. Limic, V.: A LIFO queue in heavy traffic. Ann. Appl. Probab. 11(2), 301–331 (2001)


  33. Perkins, E.: Weak invariance principles for local time. Z. Wahrsch. Verw. Gebiete 60(4), 437–451 (1982)


  34. Révész, P.: Local time and invariance. In: Analytical methods in probability theory (Oberwolfach, 1980), volume 861 of Lecture Notes in Mathematics, pp. 128–145. Springer, Berlin (1981)

  35. Révész, P.: A strong invariance principle of the local time of RVs with continuous distribution. Studia Sci. Math. Hungar. 16(1–2), 219–228 (1981)


  36. Robert, P.: Stochastic Networks and Queues. Stochastic Modelling and Applied Probability Series. Springer, New York, xvii+398 pp. (2003)

  37. Sagitov, S.M.: General branching processes: convergence to Irzhina processes. J. Math. Sci. 69(4), 1199–1206 (1994). Stability problems for stochastic models (Kirillov, 1989)


  38. Sagitov, S.: A key limit theorem for critical branching processes. Stoch. Process. Appl. 56(1), 87–100 (1995)


  39. Stone, C.: Limit theorems for random walks, birth and death processes, and diffusion processes. Ill. J. Math. 7, 638–660 (1963)



Acknowledgments

F. Simatos would like to thank Bert Zwart for initiating this project and pointing out the reference [21].

Author information


Correspondence to Florian Simatos.

Additional information

The research of Amaury Lambert is funded by project ‘MANEGE’ 09-BLAN-0215 from ANR (French national research agency). While most of this research was carried out, Florian Simatos was affiliated with CWI and sponsored by an NWO-VIDI grant.

Appendix: Proof of Proposition 5.1


In the rest of this section, we fix some \(a_0 > 0\) and we assume that the tightness assumption stated in Sect. 2 holds: In particular, \(\Lambda _n = \Lambda \) with \(\mathbb{P }(\Lambda \ge s) = (1+s)^{-\alpha }\) for some \(1 < \alpha < 2,\,n = s_n^{\alpha }\) and \(r_n = s_n^{\alpha -1}\). The goal of this section is to prove that the sequence \(L^0(a_0 + \, \cdot \,)\) under \(\mathbf{P }_n^*(\, \cdot \, | \, T(a_0) < T(0))\) is tight.
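As a concrete illustration (not part of the proof), the law of \(\Lambda \) can be sampled by inverse transform: since \(\mathbb{P }(\Lambda \ge s) = (1+s)^{-\alpha }\), taking \(\Lambda = U^{-1/\alpha } - 1\) with \(U\) uniform on \((0,1]\) gives the right distribution, with \(\mathbb{E }(\Lambda ) = 1/(\alpha -1)\) finite and infinite variance. A minimal sketch, with the illustrative choice \(\alpha = 1.5\):

```python
import random

def sample_jump(alpha, rng):
    """Inverse-transform sample of Lambda, where P(Lambda >= s) = (1+s)^(-alpha)."""
    u = 1.0 - rng.random()           # uniform on (0, 1]
    return u ** (-1.0 / alpha) - 1.0

alpha = 1.5                          # any 1 < alpha < 2: finite mean, infinite variance
rng = random.Random(0)
samples = [sample_jump(alpha, rng) for _ in range(100_000)]

# Empirical tail at s = 1 against the exact value P(Lambda >= 1) = 2^(-alpha).
tail = sum(s >= 1.0 for s in samples) / len(samples)
print(tail, 2.0 ** -alpha)
```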

Note that this will prove Proposition 5.1: indeed, by Proposition 4.1, \(L^0(a_0 + \, \cdot \,)\) under \(\mathbf{P }_n^*(\, \cdot \, | \, T(a_0) < T(0))\) converges in the sense of finite-dimensional distributions to \(L^0(a_0 + \, \cdot \,)\) under \({\fancyscript{N}}(\, \cdot \, | \, T(a_0) < T(0))\). Moreover, the jumps of \(L^0(a_0 + \, \cdot \,)\) are of deterministic size \(1/r_n\). Since \(1/r_n \rightarrow 0\), any limit point must be continuous, see for instance [4]. Note that this reasoning could also be used to show that \(L\) under \(\mathbf{P }^0\) is continuous (in the space variable), a result that is difficult to prove in general (see for instance [1]).

Under the tightness assumption, the scale function \(w_n\) enjoys the following useful properties. The convexity and smoothness properties constitute one of the main reasons for making the tightness assumption.

Lemma 7.1

For each \(n \ge 1,\,w_n\) is twice continuously differentiable and concave, its derivative \(w_n^{\prime }\) is convex, \(w_n(0) = 1\) and \(w_n^{\prime }(0) = \kappa _n\). Moreover,

$$\begin{aligned} \sup \left\{ \frac{w_n(t)}{(1+t)^{\alpha }}: n \ge 1, t > 0 \right\} < +\infty \end{aligned}$$

and finally, there exist \(n_0 \ge 1\) and \(t_0 > 0\) such that \(w_n(t_0) \ge 2\) for all \(n \ge n_0\).

Proof

The smoothness of \(w_n\) follows from Theorem 3 in [9] since \(f(s) = \mathbb{P }(\Lambda \ge s)\) is continuously differentiable with \(|f^{\prime }(0)| < +\infty \). The convexity properties follow from Theorem 2.1 in [26] since \(f\) is log-convex and \(\psi ^{\prime }_n(0) \ge 0\). The formulas for \(w_n(0)\) and \(w^{\prime }_n(0)\) are well known, see for instance [25]. We now prove the last two assertions.

First of all, note that

$$\begin{aligned} \sup \left\{ \frac{w_n(t)}{(1+t)^{\alpha }}: n \ge 1, t > 0 \right\} \le \max \left( \sup \left\{ w_n(1): n \ge 1 \right\} , \sup \left\{ \frac{w_n(t)}{t^{\alpha -1}}: n \ge 1, t \ge 1 \right\} \right) . \end{aligned}$$

Let \(\overline{\psi }\) be the Lévy exponent given by \(\overline{\psi }(\lambda ) = \lambda - (\alpha -1) \mathbb{E }( 1 - e^{-\lambda \Lambda })\) with corresponding scale function \(\overline{w}\). Since \(\kappa _n \rightarrow \alpha -1,\,\mathbb{P }_n^0\) converges in distribution to the law of the Lévy process with Lévy exponent \(\overline{\psi }\), and so it can be shown similarly as in the proof of Lemma 3.4 that \(w_n(1) \rightarrow \overline{w}(1)\). The first term \(\sup \left\{ w_n(1): n \ge 1 \right\} \) appearing in the above maximum is therefore finite. As for the second term, since for any \(n \ge 1\) we have \(\kappa _n \mathbb{E }(\Lambda ) = \kappa _n / (\alpha -1) \le 1\) by assumption, we get \(\overline{\psi }\le \psi _n\) and by monotonicity it follows that \(w_n \le \overline{w}\). Moreover, it is known that there exists a finite constant \(C > 0\) such that \(\overline{w}(t) \le C / (t \overline{\psi }(1/t))\) for all \(t > 0\), see Proposition III.1 or the proof of Proposition VII.10 in [2]. In particular,

$$\begin{aligned} \sup \left\{ \frac{w_n(t)}{t^{\alpha -1}}: n \ge 1, t \ge 1 \right\} \le \sup \left\{ \frac{\overline{w}(t)}{t^{\alpha -1}}: t \ge 1 \right\} \le C \sup \left\{ \frac{t^\alpha }{\overline{\psi }(t)}: 0 < t \le 1 \right\} . \end{aligned}$$

Since \(\mathbb{P }(\Lambda \ge s) = (1+s)^{-\alpha }\) one can check that there exists some constant \(\beta > 0\) such that \(\overline{\psi }(t) \sim \beta t^{\alpha }\) as \(t \rightarrow 0\), which shows that the last upper bound is finite and proves the desired result.
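This asymptotic can also be checked numerically. Combining \(\mathbb{E }(1-e^{-\lambda \Lambda }) = \lambda \int _0^\infty e^{-\lambda s} \mathbb{P }(\Lambda \ge s) \mathrm{d}s\) with \((\alpha -1) \int _0^\infty (1+s)^{-\alpha } \mathrm{d}s = 1\) gives \(\overline{\psi }(\lambda ) = (\alpha -1) \lambda \int _0^\infty (1-e^{-\lambda s})(1+s)^{-\alpha } \mathrm{d}s\), and comparison with the pure power tail suggests \(\beta = \Gamma (2-\alpha )\). The following sketch evaluates \(\overline{\psi }(\lambda )/\lambda ^{\alpha }\) for small \(\lambda \); the quadrature scheme and the value \(\alpha = 1.5\) (for which \(\Gamma (2-\alpha ) = \Gamma (1/2) = \sqrt{\pi }\)) are our illustrative choices:

```python
import math

def psi_bar(lam, alpha, n_steps=200_000, x_max=80.0):
    """(alpha-1)*lam * integral_0^infty (1 - e^(-lam*s)) (1+s)^(-alpha) ds,
    by trapezoidal quadrature after the substitution s = e^x - 1."""
    h = x_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        x = i * h
        s = math.expm1(x)                       # s = e^x - 1
        g = -math.expm1(-lam * s)               # 1 - e^(-lam*s), accurate for small lam*s
        val = g * math.exp((1.0 - alpha) * x)   # (1+s)^(-alpha) ds = e^((1-alpha)x) dx
        total += (0.5 if i in (0, n_steps) else 1.0) * val
    return (alpha - 1.0) * lam * total * h

alpha = 1.5
# psi_bar(lam)/lam^alpha should approach Gamma(2 - alpha) = sqrt(pi) as lam -> 0
ratios = [psi_bar(lam, alpha) / lam ** alpha for lam in (1e-3, 1e-4)]
print(ratios, math.sqrt(math.pi))
```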

To prove the last assertion of the lemma, consider \(n_0\) large enough such that \({\underline{\kappa }} = \inf _{n\ge n_0}\kappa _n>(\alpha -1)/2\) (remember that \(\kappa _n \rightarrow \alpha -1\)). Let \({\underline{\psi }}\) be the Lévy exponent given by \({\underline{\psi }}(\lambda ) = \lambda - {\underline{\kappa }} \mathbb{E }(1-e^{-\lambda \Lambda })\) with corresponding scale function \({\underline{w}}\). By monotonicity, we get \(w_n \ge {\underline{w}}\) for any \(n \ge n_0\), and one easily checks that \({\underline{w}}(\infty ) = 1 / (1- {\underline{\kappa }} / (\alpha -1))\). Since by choice of \({\underline{\kappa }}\) this last limit is strictly larger than 2, there exists \(t_0 > 0\) such that \({\underline{w}}(t_0) \ge 2\). This proves the result. \(\square \)
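For completeness, the value \({\underline{w}}(\infty ) = 1/(1 - {\underline{\kappa }}/(\alpha -1))\) used above can be recovered from the standard Laplace transform characterization of scale functions, \(\int _0^\infty e^{-\lambda t}\, {\underline{w}}(t)\, \mathrm{d}t = 1/{\underline{\psi }}(\lambda )\) (see [2, Chapter VII]); since \({\underline{w}}\) is nondecreasing, an Abelian argument gives, when \({\underline{\psi }}^{\prime }(0+) > 0\),

```latex
$$\begin{aligned}
{\underline{w}}(\infty )
  = \lim _{\lambda \downarrow 0} \lambda \int _0^\infty e^{-\lambda t}\, {\underline{w}}(t)\, \mathrm{d}t
  = \lim _{\lambda \downarrow 0} \frac{\lambda }{{\underline{\psi }}(\lambda )}
  = \frac{1}{{\underline{\psi }}^{\prime }(0+)}
  = \frac{1}{1 - {\underline{\kappa }}\, \mathbb{E }(\Lambda )}
  = \frac{1}{1 - {\underline{\kappa }}/(\alpha -1)},
\end{aligned}$$
```

using \(\mathbb{E }(\Lambda ) = 1/(\alpha -1)\) in the last step.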

Since we are interested in limit theorems, we will assume in the sequel, without loss of generality, that there exists \(t_0 > 0\) such that \(w_n(t_0) \ge 2\) for all \(n \ge 1\), and we henceforth fix such a \(t_0\). We first give a short proof of Proposition 5.1 based on the following two technical results and on the tightness criterion of Theorem 13.5 in [4].

Proposition 7.2

(Case \((b-a) \vee (c-b) \le t_0 / s_n\)) For any \(A > a_0\), there exist finite constants \(C, \gamma \ge 0\) such that for all \(n\ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\),

$$\begin{aligned} \mathbf{P }_n^* \left( \left| L^0(b) - L^0(a) \right| \wedge \left| L^0(c) - L^0(b) \right| \ge \lambda \, | \, T(a) < T(0) \right) \le C \frac{(c-a)^{3/2}}{\lambda ^\gamma }. \end{aligned}$$

Proposition 7.3

(Case \(b-a \ge t_0 / s_n\)) For any \(A > a_0\), there exist finite constants \(C, \gamma \ge 0\) such that for all \(n \ge 1,\, \lambda > 0\) and \(a_0 \le a < b \le A\) with \(b-a \ge t_0 / s_n\),

$$\begin{aligned} \mathbf{P }_n^* \left( \left| L^0(b) - L^0(a) \right| \ge \lambda \, | \, T(a) < T(0) \right) \le C \frac{(b-a)^{3/2}}{\lambda ^\gamma }. \end{aligned}$$

Moreover, the constant \(\gamma \) can be taken equal to the constant \(\gamma \) of Proposition 7.2.

At this point, it must be said that the case \((b-a) \vee (c-b) \le t_0 / s_n\) is much harder than the case \(b-a \ge t_0 / s_n\). The reason is that in the former case, the bound \((c-a)^{3/2}\) cannot be achieved without taking the minimum between \(|L^0(b) - L^0(a)|\) and \(|L^0(c) - L^0(b)|\): considering only one of these two terms gives a bound which can be shown to decay only linearly in \(c-a\), which is not sufficient to establish tightness. This technical difficulty mirrors the well-studied context of random walks, where tightness is harder to establish in the non-lattice case than in the lattice case; indeed, small oscillations, i.e., precisely the case \((b-a) \vee (c-b) \le t_0 / s_n\), are significantly easier to control in the lattice setting.

Proof of Proposition 5.1 based on Propositions 7.2 and 7.3

According to Theorem 13.5 in [4], it is enough to show that for each \(A > a_0\), there exist finite constants \(C, \gamma \ge 0\) and \(\beta > 1\) such that for all \(n \ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c \le A\),

$$\begin{aligned} \mathbf{P }_n^*\left( \left| L^0(b)-L^0(a)\right| \wedge \left| L^0(c)- L^0(b)\right| \ge \lambda \,|\,T(a_0)<T(0)\right) \le C \frac{(c-a)^\beta }{\lambda ^\gamma }. \end{aligned}$$
(13)

Fix \(n \ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c\) and let \(E = \{ |L^0(b) - L^0(a)| \wedge |L^0(c) - L^0(b)| \ge \lambda \}\). Since \(X\) under \(\mathbf{P }_n^*\) is spectrally positive, we have \(E \subset \{ L^0(a) > 0 \}\) and so

$$\begin{aligned} \mathbf{P }_n^*(E, T(a_0) < T(0)) = \mathbf{P }_n^*(E) = \mathbf{P }_n^*(E, T(a) < T(0)). \end{aligned}$$

Thus, Bayes' formula entails

$$\begin{aligned} \mathbf{P }_n^* \left( E \, | \, T(a_0) < T(0) \right) = \frac{\mathbf{P }_n^* \left( T(a) < T(0) \right) }{\mathbf{P }_n^* \left( T(a_0) < T(0) \right) } \mathbf{P }_n^* \left( E \, | \, T(a) < T(0) \right) \end{aligned}$$

and since \(\mathbf{P }_n^* \left( T(a) < T(0) \right) \le \mathbf{P }_n^* \left( T(a_0) < T(0) \right) \), we get

$$\begin{aligned}&\mathbf{P }_n^* \left( \left| L^0(b) - L^0(a) \right| \wedge \left| L^0(c) - L^0(b) \right| \ge \lambda \, | \, T(a_0) < T(0) \right) \\&\quad \le \mathbf{P }_n^* \left( \left| L^0(b) - L^0(a) \right| \wedge \left| L^0(c) - L^0(b) \right| \ge \lambda \, | \, T(a) < T(0) \right) . \end{aligned}$$

Thus (13) follows from the previous inequality together with either Proposition 7.2 when \((b-a) \vee (c-b) \le t_0 / s_n\), or Proposition 7.3 when \(b-a \ge t_0 / s_n\). In the last remaining case where \(c-b \ge t_0 / s_n\), we derive similarly the following upper bound:

$$\begin{aligned}&\mathbf{P }_n^* \left( \left| L^0(b) - L^0(a) \right| \wedge \left| L^0(c) - L^0(b) \right| \ge \lambda \, | \, T(a_0) < T(0) \right) \\&\quad \le \mathbf{P }_n^* \left( \left| L^0(c) - L^0(b) \right| \ge \lambda \, | \, T(a_0) < T(0) \right) \\&\quad \le \mathbf{P }_n^* \left( \left| L^0(c) - L^0(b) \right| \ge \lambda \, | \, T(b) < T(0) \right) . \end{aligned}$$

Proposition 7.3 then concludes the proof. \(\square \)

The rest of this section is devoted to the proof of Propositions 7.2 and 7.3. Our analysis relies on an explicit expression of the law of \((L^0(b) - L^0(a), L^0(c) - L^0(b))\) under \(\mathbf{P }_n^{x_0}(\, \cdot \, | \, T(a) < T(0))\). For \(0 < a < b < c\) and \(x_0 > 0\), we define

$$\begin{aligned} p_n^{x_0}(a) = \mathbf{P }_n^{x_0} \left( T(a) < T(0) \right) , \quad p_{n,\xi }^{x_0}(a,b) = \mathbf{P }_n^{x_0} \left( T(b) < T(a) \, | \, T(a) < T(0) \right) \end{aligned}$$

as well as

$$\begin{aligned} p_{n,\theta }^{x_0}(a,b,c) = \mathbf{P }_n^{x_0} \left( T(c) < T(b) \, | \, T(b) < T(a) < T(0) \right) . \end{aligned}$$

Remember that \(G_n(a)\) denotes a geometric random variable with parameter \(p_n^a(a)\), and from now on we adopt the convention \(\sum _1^{-1} = \sum _1^0 = 0\).

Lemma 7.4

For any \(0 < a < b < c\) and \(x_0 > 0\), the random variable

$$\begin{aligned} \left( r_n L^0(b) - r_n L^0(a), r_n L^0(c) - r_n L^0(b) \right) \end{aligned}$$

under \(\mathbf{P }_n^{x_0}( \, \cdot \, | \, T(a) < T(0))\) is equal in distribution to

$$\begin{aligned} \left( \xi _n^{x_0} + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a, \, \theta _n^{x_0} \mathbb{1 }_{\{\xi _n^{x_0} \ge 0\}} + \sum _{k=1}^{N_{n,a}} \theta _{n,k}^a + \sum _{k=1}^{N_{n,b}^{x_0}} \theta _{n,k}^b \right) \end{aligned}$$
(14)

where

$$\begin{aligned} N_{n,a} = \sum _{k=1}^{G_n(a)} \mathbb{1 }_{\{\xi _{n,k}^a \ge 0\}} \text{ and } N_{n,b}^{x_0} = (\xi _n^{x_0})^+ + \sum _{k=1}^{G_n(a)} (\xi _{n,k}^a)^+. \end{aligned}$$

All the random variables \(\xi _n^{x_0},\,\xi _{n,k}^a,\,\theta _n^{x_0},\,\theta _{n,k}^a,\,\theta _{n,k}^b\) and \(G_n(a)\) are independent. For any \(u > 0\) and \(k \ge 1,\,\xi _{n,k}^u\) is equal in distribution to \(\xi _n^u\) and \(\theta _{n,k}^u\) to \(\theta _n^u\), where the laws of \(\xi _n^u\) and \(\theta _n^u\) are described as follows: for any function \(f\),

$$\begin{aligned} \mathbb{E }\left[ f(\xi _n^u) \right] = (1-p_{n,\xi }^u(a,b))f(-1) + p_{n,\xi }^u(a,b) \mathbb{E }\left[ f(G_n(b-a)) \right] \end{aligned}$$
(15)

and

$$\begin{aligned} \mathbb{E }\left[ f(\theta _n^u) \right] = (1-p_{n,\theta }^u(a,b,c))f(-1) + p_{n,\theta }^u(a,b,c) \mathbb{E }\left[ f(G_n(c-b)) \right] . \end{aligned}$$
(16)

Proof

In the rest of the proof, we work under \(\mathbf{P }_n^{x_0}(\, \cdot \, | \, T(a) < T(0))\). By definition, \(r_n L^0(a)\) and \(r_n L^0(b)\) are the numbers of visits of \(X\) to \(a\) and to \(b\) in \([0, T(0)]\), respectively. Thus, if \(\beta \) is the number of visits of \(X\) to \(b\) in \([0, T(a)]\) and \(\beta _{k}^{a}\) the number of visits of \(X\) to \(b\) between the \(k\)th and \((k+1)\)st visits to \(a\), we have

$$\begin{aligned} r_n L^0(b) - r_n L^0(a) = (\beta - 1) + \sum _{k=1}^{r_n L^0(a) - 1} (\beta _{k}^a - 1). \end{aligned}$$

Decomposing the path of \(X\) between successive visits to \(a\) and using the strong Markov property, one easily checks that all the random variables of the right-hand side are independent and that \(r_n L^0(a),\,\beta _k^a\) and \(\beta \) are, respectively, equal in distribution to \(1 + G_n(a),\,\xi _{n}^{a}+1\) and \(\xi _{n}^{x_0}+1\). This shows that the random variable \(r_n L^0(b) - r_n L^0(a)\) is equal in distribution to \(\xi _n^{x_0} + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\).

To describe the law of \((r_n L^0(b) - r_n L^0(a), r_n L^0(c) - r_n L^0(b))\), one also needs to count the number of visits of \(X\) to \(c\): if \(X\) visits \(b\) before \(a\), then \(X\) may visit \(c\) before its first visit to \(b\); it may also visit \(c\) each time it goes from \(a\) to \(b\); finally, it may also visit \(c\) between two successive visits to \(b\). These three different ways of visiting \(c\) are, respectively, taken into account by the terms \(\theta _n^{x_0},\,\theta _{n,k}^a\) and \(\theta _{n,k}^b\). \(\square \)
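The geometric picture behind Lemma 7.4 can be illustrated on a simple lattice analogue (our simplification, not the processes studied here): for a nearest-neighbour random walk with downward drift started at \(a\), the strong Markov property makes the number of visits to \(a\) before \(T(0)\) geometric, with return probability computable by gambler's ruin. A hedged Monte Carlo sketch, where the parameters \(p = 0.4\) and \(a = 3\) are arbitrary illustrative choices:

```python
import random

def visits_to_a_before_0(a, p, rng):
    """Nearest-neighbour walk from a (up w.p. p, down w.p. 1-p, p < 1/2);
    count its visits to level a before it first hits 0 (start included)."""
    x, visits = a, 1
    while x > 0:
        x += 1 if rng.random() < p else -1
        if x == a:
            visits += 1
    return visits

p, a = 0.4, 3
r = (1.0 - p) / p   # gambler's-ruin ratio, r > 1 under downward drift
# Probability of returning to a before hitting 0: after an up-step the walk
# must cross a again to reach 0, so it returns almost surely; after a
# down-step it returns with probability (r^(a-1) - 1) / (r^a - 1).
rho = p + (1.0 - p) * (r ** (a - 1) - 1.0) / (r ** a - 1.0)

rng = random.Random(1)
counts = [visits_to_a_before_0(a, p, rng) for _ in range(20_000)]
mean = sum(counts) / len(counts)
print(mean, 1.0 / (1.0 - rho))   # geometric visit count has mean 1/(1-rho)
```

In the notation of the lemma, this corresponds to \(r_n L^0(a)\) being distributed as \(1 + G_n(a)\) with \(G_n(a)\) geometric of parameter \(p_n^a(a)\).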

The previous result readily gives the law of \((r_n L^0(b) - r_n L^0(a), r_n L^0(c) - r_n L^0(b))\) under \(\mathbf{P }_n^*( \, \cdot \, | \, T(a) < T(0))\): This law can be written as

$$\begin{aligned} \left( \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a, \, \widetilde{\theta }_n^a \mathbb{1 }_{\{\widetilde{\xi }_n^a \ge 0\}} + \sum _{k=1}^{N_{n,a}} \theta _{n,k}^a + \sum _{k=1}^{\widetilde{N}_{n,b}^a} \theta _{n,k}^b \right) \end{aligned}$$
(17)

where \(\widetilde{N}_{n,b}^a = (\widetilde{\xi }_n^a)^+ + \sum _{k=1}^{G_n(a)} (\xi _{n,k}^a)^+\) and the random variables \(\xi _{n,k}^a,\,\theta _{n,k}^a,\,\theta _{n,k}^b,\,G_n(a)\) and \(N_{n,a}\) are as described in Lemma 7.4. Moreover, these random variables are also independent from the pair \((\widetilde{\xi }_n^a, \widetilde{\theta }_n^a)\) whose distribution is given by

$$\begin{aligned} \mathbb{E }\left[ f(\widetilde{\xi }_n^a, \widetilde{\theta }_n^a) \right] = \int \limits \mathbb{E }\left[ f(\xi _n^{x_0}, \theta _n^{x_0}) \right] \mathbb{P }(\chi _n^a \in \mathrm{d}x_0) \end{aligned}$$
(18)

where from now on \(\chi _n^a\) denotes a random variable equal in distribution to \(X(0)\) under \(\mathbf{P }_n^*(\, \cdot \, | \, T(a) < T(0))\). For convenience, we will sometimes consider that \(\chi _n^a\) lives on the same probability space and is independent of all the other random variables. This will for instance allow us to say that the random vector (17) conditional on \(\{\chi _n^a=x_0\}\) is equal in distribution to the random vector (14).

In order to exploit (17) and prove Propositions 7.2 and 7.3, we will use a method based on controlling moments, along lines similar to [5, 6]. As Propositions 7.2 and 7.3 suggest, we need to distinguish the two cases \((b-a) \vee (c-b) \le t_0 / s_n\) and \(b-a \ge t_0 / s_n\) (remember that \(t_0\) is a fixed number such that \(w_n(t_0) \ge 2\) for each \(n \ge 1\), see the discussion after Lemma 7.1).

In the sequel, we need to derive numerous upper bounds. The letter \(C\) then denotes constants which may change from line to line (and even within one line) but never depend on \(n,\,a,\,b,\,c,\,x_0\) or \(\lambda \). They may however depend on other variables, such as typically \(a_0,\,A\) or \(t_0\).

Before starting, let us gather a few relations and properties that will be used repeatedly (and sometimes without comments) in the sequel. First, it stems from (1) that

$$\begin{aligned} \mathbf{P }_n^{x_0}(T(a) < T(0)) = \frac{W_n(a) - W_n(a-x_0)}{W_n(a)} = \frac{\int _0^{x_0} W_n^{\prime }(a-u) \mathrm{d}u}{W_n(a)}, \ 0 < x_0 \le a. \end{aligned}$$

Moreover: (i) \(\kappa _n \le 1\); (ii) \(W^{\prime }_n(0) = \kappa _n s_n / r_n = \kappa _n s_n^{2-\alpha }\), \(W_n \ge 0\) is increasing, \(W^{\prime }_n \ge 0\) is decreasing and \(-W_n^{\prime \prime } \ge 0\) is decreasing (as a consequence of Lemma 7.1 together with the identity \(W_n(a) = w_n(a s_n) / r_n\)); (iii) for every \(a > 0\), the sequences \((W_n(a), n \ge 1)\) and \((W_n^{\prime }(a), n \ge 1)\) are bounded away from 0 and infinity (by Lemma 3.4).
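For the record, the identity \(W^{\prime }_n(0) = \kappa _n s_n^{2-\alpha }\) stated above is obtained by differentiating the scaling relation between \(W_n\) and \(w_n\) (with \(r_n = s_n^{\alpha -1}\)):

```latex
$$\begin{aligned}
W_n(a) = \frac{w_n(a s_n)}{r_n}
\quad \Longrightarrow \quad
W_n^{\prime }(a) = \frac{s_n}{r_n}\, w_n^{\prime }(a s_n) = s_n^{2-\alpha }\, w_n^{\prime }(a s_n),
\end{aligned}$$
```

so that \(W_n^{\prime }(0) = \kappa _n s_n^{2-\alpha }\) by Lemma 7.1 (\(w_n^{\prime }(0) = \kappa _n\)), and the monotonicity and convexity properties of \(w_n\) transfer directly to \(W_n\).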

1.1 Case \((b-a) \vee (c-b) \le t_0 / s_n\)

The goal of this subsection is to prove Proposition 7.2 through a series of lemmas.

Lemma 7.5

For any \(A > a_0\), there exists a finite constant \(C\) such that for any \(n \ge 1\), any \(a_0 \le a \le A\) and any \(0 \le x_0 \le y \le a\),

$$\begin{aligned} j_{n,a}(x_0, y) \le \left\{ \begin{array}{ll} C x_0 (a-y) s_n^{2-\alpha } &{} \text{ if } x_0 \ge a/4,\\ C x_0 (a-y) &{} \text{ if } x_0 \le a/4 \text{ and } a/2 \le y,\\ C x_0 W_n^{\prime }(y-x_0) &{} \text{ if } x_0 \le a/4 \text{ and } y \le a/2, \end{array}\right. \end{aligned}$$
(19)

where \(j_{n,a}(x_0, y) = W_n(a-x_0) W_n(y) - W_n(y-x_0) W_n(a)\).

Proof

The derivation of these three bounds is based on the following identity:

$$\begin{aligned} j_{n,a}(x_0, y)&= \int \limits _0^{a-y} W_n^{\prime }(u+y) \mathrm{d}u \int \limits _0^{x_0} W_n^{\prime }(u+y-x_0) \mathrm{d}u \\&\quad + W_n(y) \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+y-x_0) \mathrm{d}u \mathrm{d}v. \end{aligned}$$

The term \(W_n(y)\) appearing in the second term of the right-hand side is upper bounded by the finite constant \(\sup _{n \ge 1} W_n(A)\) and so does not play a role as far as (19) is concerned. Assume that \(x_0 \ge a/4\): then \(y \ge a_0/4\) and so

$$\begin{aligned} \int \limits _0^{a-y} W_n^{\prime }(u+y) \mathrm{d}u \int \limits _0^{x_0} W_n^{\prime }(u+y-x_0) \mathrm{d}u&\le (a-y) W^{\prime }_n(a_0/4) x_0 W^{\prime }_n(0)\\&\le C (a-y) x_0 s_n^{2-\alpha }. \end{aligned}$$

On the other hand,

$$\begin{aligned} \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+y-x_0) \mathrm{d}u \mathrm{d}v \le (a-y) \int \limits _0^{x_0} (-W_n^{\prime \prime }) \le (a-y) W^{\prime }_n(0). \end{aligned}$$

This gives the desired upper bound of the form \(C x_0 (a-y) s_n^{2-\alpha }\), since \(W^{\prime }_n(0) \le s_n^{2-\alpha }\) and \(1 \le C x_0\) when \(x_0 \ge a/4\) (writing \(1 = x_0 / x_0 \le 4 x_0 / a_0\)). This proves (19) in the case \(x_0 \ge a/4\). Assume now that \(x_0 \le a/4 \le a/2 \le y\), so that \(y \ge a_0/2\) and \(y-x_0 \ge a_0/4\). Then

$$\begin{aligned} \int \limits _0^{a-y} W_n^{\prime }(u+y) \mathrm{d}u \int \limits _0^{x_0} W_n^{\prime }(u+y-x_0) \mathrm{d}u&\le (a-y) W_n^{\prime }(a_0/2) x_0 W_n^{\prime }(a_0/4) \\&\le C x_0(a-y) \end{aligned}$$

and

$$\begin{aligned} \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+y-x_0) \mathrm{d}u \mathrm{d}v \le \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+a_0/4) \mathrm{d}u \mathrm{d}v. \end{aligned}$$

Since \(-W_n^{\prime \prime }\) is decreasing, so is the function \(\varphi (u) = \int _0^{a-y} (-W_n^{\prime \prime }) (u+v+a_0/4) \mathrm{d}v\). Differentiating, one immediately checks that the function \(z \mapsto z^{-1} \int _0^z \varphi \) is decreasing, and since \(x_0 \ge a_0\), this gives

$$\begin{aligned} \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+a_0/4) \mathrm{d}u \mathrm{d}v \le x_0 a_0^{-1} \int \limits _0^{a-y} \int \limits _0^{a_0} (-W_n^{\prime \prime }) (u+v+a_0/4) \mathrm{d}u \mathrm{d}v. \end{aligned}$$

Further, exploiting that \(-W_n^{\prime \prime }\) is decreasing, we obtain

$$\begin{aligned} \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+a_0/4) \mathrm{d}u \mathrm{d}v&\le x_0 a_0^{-1} (a-y) \int \limits _0^{a_0} (-W_n^{\prime \prime }) (u+a_0/4) \mathrm{d}u \\&\le C x_0 (a-y). \end{aligned}$$

This proves (19) in the case \(x_0 \le a/4 \le a/2 \le y\). Assume now that \(x_0 \le a/4\) and that \(y \le a/2\): then

$$\begin{aligned} \int \limits _0^{a-y} W_n^{\prime }(u+y) \mathrm{d}u \int \limits _0^{x_0} W_n^{\prime }(u+y-x_0) \mathrm{d}u \le C x_0 W_n^{\prime }(y-x_0) \end{aligned}$$

and

$$\begin{aligned} \int \limits _0^{x_0} \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+v+y-x_0) \mathrm{d}u \mathrm{d}v&\le x_0 \int \limits _0^{a-y} (-W_n^{\prime \prime }) (u+y-x_0) \mathrm{d}u \\&\le x_0 W^{\prime }_n(y-x_0). \end{aligned}$$

This concludes the proof of (19). \(\square \)

Lemma 7.6

For any \(A > a_0\), there exists a finite constant \(C\) such that for any \(n \ge 1\), any \(a_0 \le a < b \le A\) and any \(x_0 \le a\),

$$\begin{aligned} 1-p_{n,\xi }^{x_0}(a,b) \le C (b-a) s_n \end{aligned}$$
(20)

and for any \(n \ge 1\), any \(a_0 \le a < b < c \le A\) with \(b-a \le t_0 / s_n\) and any \(x_0 \le b\),

$$\begin{aligned} 1 - p_{n, \theta }^{x_0}(a, b, c) \le C (c-b) s_n. \end{aligned}$$
(21)

Proof

Let us first prove (20), so assume until further notice that \(x_0 \le a\). In the rest of the proof, let \(\tau _a = \inf \{ t \ge 0: X(t) \notin (0,a) \}\); then the inclusion

$$\begin{aligned} \{ T(a) < T(b), T(a) < T(0) \} \subset \{ a \le X(\tau _a) < b \} \end{aligned}$$

holds \(\mathbf{P }_n^{x_0}\)-almost surely and leads to

$$\begin{aligned} 1-p_{n,\xi }^{x_0}(a,b) = \frac{\mathbf{P }_n^{x_0} \left( T(a) < T(b), T(a) < T(0) \right) }{\mathbf{P }_n^{x_0} \left( T(a) < T(0) \right) }&\le \mathbf{P }_n^{x_0}\left( a \le X(\tau _a) < b \right) \frac{W_n(a)}{\int \limits _0^{x_0} W^{\prime }_n(a-u) \mathrm{d}u} \\&\le C x_0^{-1} \mathbf{P }_n^{x_0}\left( a \le X(\tau _a) < b \right) . \end{aligned}$$

Thus (20) will be proved if we can show that \(\mathbf{P }_n^{x_0}\left( a \le X(\tau _a) < b \right) \le C x_0 (b-a) s_n\). Let \(h_n(z) = \alpha \kappa _n s_n^{\alpha +1} (1+zs_n)^{-\alpha -1}\) be the density of the measure \(\Pi _n\) with respect to Lebesgue measure; note that it is decreasing and that, for any \(z > 0\), the sequence \((h_n(z), n \ge 1)\) is bounded. Corollary 2 in [3] gives

$$\begin{aligned} \mathbf{P }_n^{x_0} \left( X(\tau _a-) \in \mathrm{d}y, \Delta X(\tau _a) \in \mathrm{d}z \right) = u_{n,a}(x_0, y) h_n(z) \mathrm{d}z \mathrm{d}y, \ y \le a \le y + z, \end{aligned}$$
(22)

where \(\Delta X(t) = X(t) - X(t-)\) for any \(t \ge 0\) and

$$\begin{aligned} u_{n,a}(x_0, y) = \frac{W_n(a-x_0) W_n(y)}{W_n(a)} - W_n(y-x_0) \mathbb{1 }_{\{y \ge x_0\}}, \end{aligned}$$

so it follows that

$$\begin{aligned} \mathbf{P }_n^{x_0}\left( a \le X(\tau _a) < b \right)&= \int \limits \mathbb{1 }_{\{y < a, a-y \le z < b-y\}} u_{n,a}(x_0, y) h_n(z) \mathrm{d}z \mathrm{d}y \\&\le (b-a) \int \limits _0^a u_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y. \end{aligned}$$

Hence (20) will be proved if we can show that \(\int _0^a u_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y \le C x_0 s_n\). We have by definition of \(u_{n,a}\)

$$\begin{aligned} \int \limits _0^a u_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y&= \frac{W_n(a-x_0)}{W_n(a)} \int \limits _0^{x_0} W_n(y) h_n(a-y) \mathrm{d}y \\&\quad + \int \limits _{x_0}^a \left( \frac{W_n(a-x_0) W_n(y)}{W_n(a)} - W_n(y-x_0) \right) h_n(a\!-\!y) \mathrm{d}y \end{aligned}$$

which readily implies

$$\begin{aligned} \int \limits _0^a u_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y&\le C W_n(a-x_0) \int \limits _0^{x_0} h_n(a-y) \mathrm{d}y \nonumber \\&\quad + C \int \limits _{x_0}^a j_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y\nonumber \\ \end{aligned}$$
(23)

with \(j_{n,a}(x_0, y) = W_n(a-x_0) W_n(y) - W_n(y-x_0) W_n(a)\) as in Lemma 7.5. We will show that each term of the above right-hand side is upper bounded by a term of the form \(C x_0 s_n\). Let us focus on the first term, so we want to show that

$$\begin{aligned} W_n(a-x_0) \int \limits _0^{x_0} h_n(a-y) \mathrm{d}y \le C x_0 s_n. \end{aligned}$$

Assume first that \(x_0 \le a/2\); then \(a-y \ge a-x_0 \ge a/2 \ge a_0 / 2\) for any \(y \le x_0\), which gives

$$\begin{aligned} W_n(a-x_0) \int \limits _0^{x_0} h_n(a-y) \mathrm{d}y \le W_n(A) x_0 h_n(a_0/2) \le C x_0 \le C x_0 s_n. \end{aligned}$$

Assume now \(x_0 \ge a/2\): Since \(h_n\) is the density of \(\Pi _n\), we have

$$\begin{aligned} W_n(a-x_0) \int \limits _0^{x_0} h_n(a-y) \mathrm{d}y \le W_n(a-x_0) \Pi _n((a-x_0, \infty )) = \kappa _n s_n \frac{w_n((a-x_0)s_n)}{(1 + (a-x_0)s_n)^{\alpha }}. \end{aligned}$$

Since \(1 \le C x_0\) (because \(x_0 \ge a_0/2\)), the desired upper bound of the form \(C x_0 s_n\) follows from Lemma 7.1. We now control the second term of the right-hand side in (23), i.e., we have to show that

$$\begin{aligned} \int \limits _{x_0}^a j_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y \le C x_0 s_n. \end{aligned}$$

In the case \(x_0 \ge a/4\), the first bound in (19) gives

$$\begin{aligned} \int \limits _{x_0}^a j_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y&\le C x_0 s_n^{2-\alpha } \int \limits _{x_0}^a \frac{(a-y) s_n^{\alpha +1}}{(1+(a-y)s_n)^{\alpha +1}} \mathrm{d}y \\&= C x_0 s_n \int \limits _0^{(a-x_0) s_n} \frac{y}{(1+y)^{\alpha +1}} \mathrm{d}y \le C x_0 s_n. \end{aligned}$$

Assume from now on that \(x_0 \le a/4\) and decompose the interval \([x_0, a]\) into the union \([x_0, a/2] \cup [a/2,a]\). For \([a/2,a]\), (19) gives

$$\begin{aligned} \int \limits _{a/2}^a j_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y \le C x_0 \int \limits _{a/2}^a \frac{(a-y) s_n^{\alpha +1}}{(1+(a-y)s_n)^{\alpha +1}} \mathrm{d}y \le C x_0 s_n^{\alpha -1} \le C x_0 s_n \end{aligned}$$

since \(\alpha - 1 < 1\). For \([x_0, a/2]\), (19) gives, using \(a-y \ge a_0/2\) when \(y \le a/2\),

$$\begin{aligned} \int \limits _{x_0}^{a/2} j_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y \le C x_0 h_n(a_0/2) \int \limits _{x_0}^a W_n^{\prime }(y-x_0) \mathrm{d}y \le C x_0 \le C x_0 s_n. \end{aligned}$$

This finally concludes the proof of (20), which we now use to derive (21). Assume from now on that \(x_0\le b\); by definition, we have

$$\begin{aligned} 1-p_{n,\theta }^{x_0}(a,b,c)&= \frac{\mathbf{P }_n^{x_0} \left( T(b) < T(c), T(b) < T(a) < T(0) \right) }{\mathbf{P }_n^{x_0} \left( T(b) < T(a) < T(0) \right) } \\&\le \frac{\mathbf{P }_n^{x_0} \left( T(b) < T(c), T(b) < T(0) \right) }{\mathbf{P }_n^{x_0} \left( T(b) < T(a) < T(0) \right) }\\&= \frac{\mathbf{P }_n^{x_0} \left( T(b) < T(0) \right) }{\mathbf{P }_n^{x_0} \left( T(b) < T(a) < T(0) \right) } \left( 1 - p_{n,\xi }^{x_0}(b,c) \right) . \end{aligned}$$

Since \(\mathbf{P }_n^z(T(b) < T(0)) = \int _0^z \varphi \) where \(\varphi (u) = W^{\prime }_n(b-u) / W_n(b)\) is increasing, it is readily shown by differentiating that \(z \in [0,b] \mapsto z^{-1} \mathbf{P }_n^z(T(b) < T(0))\) is also increasing. Thus, for \(x_0 \le b\), we obtain

$$\begin{aligned} \mathbf{P }_n^{x_0} \left( T(b) < T(0) \right) \le x_0 b^{-1} \mathbf{P }_n^b \left( T(b) < T(0) \right) \le C x_0. \end{aligned}$$
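The monotonicity claim above is a one-line computation: writing \(F(z) = z^{-1} \int _0^z \varphi \), differentiation gives, since \(\varphi \) is increasing,

```latex
F'(z) = \frac{z\varphi(z) - \int_0^z \varphi(u)\,\mathrm{d}u}{z^2}
      = \frac{1}{z^2}\int_0^z \bigl(\varphi(z) - \varphi(u)\bigr)\,\mathrm{d}u \ \ge \ 0,
```

so \(F\) is indeed increasing on \((0,b]\).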

In combination with (20) and the fact that \(\mathbf{P }_n^{x_0}(X(\tau _a) \ge b) \le \mathbf{P }_n^{x_0}(T(b) < T(a) < T(0))\), this entails

$$\begin{aligned} 1-p_{n,\theta }^{x_0}(a,b,c) \le C (c-b) s_n \frac{x_0}{\mathbf{P }_n^{x_0}\left( X(\tau _a) \ge b \right) }. \end{aligned}$$

Hence (21) will be proved if we show that \(x_0 \le C \mathbf{P }_n^{x_0}\left( X(\tau _a) \ge b \right) \). It follows from (22) that

$$\begin{aligned} \mathbf{P }_n^{x_0} \left( X(\tau _a) \ge b \right) = \int \limits _0^a u_{n,a}(x_0, y) \kappa _n s_n^{\alpha } \mathbb{P }(\Lambda \ge (b-y) s_n) \mathrm{d}y \end{aligned}$$

and because \(\mathbb{P }(\Lambda \ge u) = (1+u)^{-\alpha }\) and \((1+u)(1+v) = 1+u+v+uv \ge 1+u+v\), we have \(\mathbb{P }(\Lambda \ge u+v) \ge \mathbb{P }(\Lambda \ge u) \mathbb{P }(\Lambda \ge v)\) for any \(u, v > 0\), so that

$$\begin{aligned} \mathbf{P }_n^{x_0} \left( X(\tau _a) \ge b \right) \ge \mathbb{P }(\Lambda \ge (b-a) s_n) \int \limits _0^a u_{n,a}(x_0, y) \kappa _n s_n^{\alpha } \mathbb{P }(\Lambda \ge (a-y) s_n) \mathrm{d}y. \end{aligned}$$

In view of (22), this last integral is equal to \(\mathbf{P }_n^{x_0}(X(\tau _a) \ge a) = \mathbf{P }_n^{x_0}(T(a) < T(0))\) so finally, using \((b-a) s_n \le t_0\) we get

$$\begin{aligned} \frac{x_0}{\mathbf{P }_n^{x_0} \left( X(\tau _a) \ge b \right) } \le \frac{x_0 W_n(a)}{\mathbb{P }(\Lambda \ge t_0) (W_n(a) - W_n(a-x_0))} \le C \end{aligned}$$

which proves (21). \(\square \)

Lemma 7.7

There exists a finite constant \(C\) such that for all \(a_0 \le a < b\) and all \(n \ge 1\),

$$\begin{aligned} \left| \mathbb{E }\left( \xi _n^a \right) \right| \le C (b-a) s_n \big / (r_n W_n(a)). \end{aligned}$$

Proof

Started from \(a\), the process \(X\) (under \(\mathbf{P }_n^a\)) makes \(1+G_n(a)\) visits to \(a\). Decomposing the path \((X(t), 0 \le t \le T(0))\) between successive visits to \(a\), one gets

$$\begin{aligned} \mathbf{P }_n^{a}\left( T(b) > T(0) \right)&= \mathbb{E }\left[ \left\{ \mathbf{P }_n^{a}(T(a) < T(b) \, | \, T(a) < T(0)) \right\} ^{G_n(a)} \right] \\&= \mathbb{E }\left[ (1 - p_{n,\xi }^a(a,b))^{G_n(a)} \right] . \end{aligned}$$

By definition, the left-hand side is equal to \(1-p_n^a(b)\), so integrating on \(G_n(a)\) gives

$$\begin{aligned} \mathbb{E }\left[ (1 - p_{n,\xi }^a(a,b))^{G_n(a)} \right] = \frac{1-p_n^a(a)}{1 - (1 - p_{n,\xi }^a(a,b))p_n^a(a)} = 1 - p_n^a(b) \end{aligned}$$

which gives

$$\begin{aligned} 1 - p_{n,\xi }^{a}(a,b) = \frac{W_n(b-a)W_n(a) - W_n(b) W_n(0)}{W_n(b-a)(W_n(a) - W_n(0))}. \end{aligned}$$
(24)

Let \(p_n = p_n^{b-a}(b-a)\) and \(p_{n,\xi } = p_{n,\xi }^a(a,b)\): We have

$$\begin{aligned} \mathbb{E }\left( \xi _n^a \right) = p_{n,\xi } - 1 + p_{n,\xi } \mathbb{E }(G_n(b-a)) = p_{n,\xi } - 1 + p_{n,\xi } \frac{p_n}{1-p_n} = \frac{p_{n,\xi }}{1-p_n}-1. \end{aligned}$$

Plugging in (1) and (24) gives after some computation

$$\begin{aligned} \mathbb{E }\left( \xi _n^a \right) = \frac{W_n(b) - W_n(a) - (W_n(b-a) - W_n(0))}{W_n(a) - W_n(0)} = \frac{\int _0^a \int _0^{b-a} W_n^{\prime \prime }(u+v) \mathrm{d}u \mathrm{d}v}{\int _0^a W_n^{\prime }}. \end{aligned}$$
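The computation behind the last identity amounts to two applications of the fundamental theorem of calculus:

```latex
\int_0^a \int_0^{b-a} W_n''(u+v)\,\mathrm{d}u\,\mathrm{d}v
  = \int_0^a \bigl( W_n'(v+b-a) - W_n'(v) \bigr)\,\mathrm{d}v
  = \bigl( W_n(b) - W_n(b-a) \bigr) - \bigl( W_n(a) - W_n(0) \bigr),
```

which is exactly the numerator \(W_n(b) - W_n(a) - (W_n(b-a) - W_n(0))\).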

Since \(W^{\prime }_n \ge 0\) and \(W_n^{\prime \prime } \le 0\), this gives

$$\begin{aligned} \left| \mathbb{E }\left( \xi _n^a \right) \right| = \frac{\int _0^a \int _0^{b-a} (-W_n^{\prime \prime })(u+v) \mathrm{d}u \mathrm{d}v}{\int _0^a W_n^{\prime }} \end{aligned}$$
(25)

and since \(W_n^{\prime }\) is convex, we get

$$\begin{aligned} \left| \mathbb{E }\left( \xi _n^a \right) \right| \le (b-a) \frac{\int _0^a (-W_n^{\prime \prime })}{\int _0^a W_n^{\prime }} = \frac{(b-a) W_n^{\prime }(0)}{W_n(a)} \frac{W_n(a) (W_n^{\prime }(0) - W_n^{\prime }(a))}{W_n^{\prime }(0) (W_n(a) - W_n(0))}. \end{aligned}$$
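In more detail: convexity of \(W_n^{\prime }\) means that \(W_n^{\prime \prime }\) is nondecreasing, so \(-W_n^{\prime \prime }\) is nonincreasing and \((-W_n^{\prime \prime })(u+v) \le (-W_n^{\prime \prime })(v)\) for \(u \ge 0\); hence

```latex
\int_0^a \int_0^{b-a} (-W_n'')(u+v)\,\mathrm{d}u\,\mathrm{d}v
  \le (b-a) \int_0^a (-W_n'')(v)\,\mathrm{d}v
  = (b-a)\bigl( W_n'(0) - W_n'(a) \bigr),
```

and dividing by \(\int _0^a W_n^{\prime } = W_n(a) - W_n(0)\), then multiplying and dividing by \(W_n^{\prime }(0) W_n(a)\), yields the displayed bound.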

Since \(W_n^{\prime }(0) \le s_n / r_n\) and

$$\begin{aligned} \frac{W_n(a) (W_n^{\prime }(0) - W_n^{\prime }(a))}{W_n^{\prime }(0) (W_n(a) - W_n(0))} \le \frac{W_n(a_0)}{W_n(a_0) - W_n(0)} \le C, \end{aligned}$$

the proof is complete. \(\square \)

To control the higher moments of \(\xi _n^a\) and also the moments of the \(\theta \)’s, we introduce the following constants:

$$\begin{aligned} C_i = \sup \left\{ \frac{\mathbb{E }\left( (G_n(\delta ))^i \right) }{\delta s_n} : n \ge 1, 0 \le \delta \le t_0 / s_n \right\} , \ i \ge 1. \end{aligned}$$
(26)

Lemma 7.8

For any integer \(i \ge 1\), the constant \(C_i\) is finite.

Proof

Using the concavity of \(w_n\), one gets \(w_n(\delta s_n) \le w_n(0) + \delta s_n w_n^{\prime }(0) \le 1 + \delta s_n\) since \(w_n(0) = 1\) and \(w_n^{\prime }(0) = \kappa _n \le 1\) by Lemma 7.1. Hence

$$\begin{aligned} p_n^\delta (\delta ) = \frac{w_n(\delta s_n) - w_n(0)}{w_n(\delta s_n)} \le \frac{\delta s_n}{1 + \delta s_n} \le \min \left( \delta s_n, \frac{t_0}{1 + t_0} \right) \end{aligned}$$

where the last inequality holds for \(\delta s_n \le t_0\). In particular, for any \(i \ge 1\), we have

$$\begin{aligned} \frac{\mathbb{E }\left( (G_n(\delta ))^i \right) }{\delta s_n} = \frac{\mathbb{E }\left( (G_n(\delta ))^i \right) }{p_n^\delta (\delta )} \frac{p_n^\delta (\delta )}{\delta s_n} \le \sup _{0 \le p \le \varepsilon } \left( p^{-1} \mathbb{E }\left( G_p^i \right) \right) \end{aligned}$$

where \(\varepsilon = t_0 / (1 + t_0) < 1\) and \(G_p\) is a geometric random variable with parameter \(p\). It is well known that

$$\begin{aligned} \mathbb{E }(G_p^i) = \frac{p P_{i-1}(p)}{(1-p)^i} \end{aligned}$$

where \(P_i\) is the polynomial \(P_i(p) = \sum _{k=0}^i T_{k,i} p^k\) with \(T_{k,i} \ge 1\) the Eulerian numbers. Since \(P_i\) satisfies \(P_i(0) = 1\), one easily sees that for any \(\varepsilon < 1\),

$$\begin{aligned} \sup _{0 \le p \le \varepsilon } \left( p^{-1} \mathbb{E }\left( G_p^i \right) \right) < +\infty \end{aligned}$$

which completes the proof. \(\square \)
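As a quick illustrative aside (not part of the proof), the geometric-moment identity and the resulting uniform bound are easy to sanity-check numerically; the polynomials \(P_1(p) = 1+p\) and \(P_2(p) = 1+4p+p^2\) below are the first Eulerian polynomials:

```python
# Numerical sanity check of E(G_p^i) = p * P_{i-1}(p) / (1-p)^i for a
# geometric random variable with P(G_p = k) = (1-p) p^k, k >= 0.
# Illustrative only; P_1(p) = 1 + p and P_2(p) = 1 + 4p + p^2.

def geometric_moment(p, i, terms=20000):
    """E(G_p^i) computed by direct (truncated) series summation."""
    return sum((k ** i) * (1 - p) * p ** k for k in range(terms))

for p in (0.1, 0.3, 0.5, 0.8):
    assert abs(geometric_moment(p, 2) - p * (1 + p) / (1 - p) ** 2) < 1e-6
    assert abs(geometric_moment(p, 3) - p * (1 + 4 * p + p ** 2) / (1 - p) ** 3) < 1e-6

# p^{-1} E(G_p^i) stays bounded on [0, eps] for eps < 1, as in the proof:
eps = 0.5
sup_estimate = max(geometric_moment(p, 3) / p for p in (0.01, 0.1, 0.25, eps))
assert sup_estimate <= 2 * (1 + 4 * eps + eps ** 2) / (1 - eps) ** 3
```

The truncation at 20000 terms is harmless since the summands decay geometrically.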

Lemma 7.9

For any \(A > a_0\) and \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\)

$$\begin{aligned} \max \left( \mathbb{E }\left( |\xi _n^a|^i \right) , \ \mathbb{E }\left( |\theta _n^a|^i \right) , \mathbb{E }\left( |\theta _n^b|^i \right) \right) \le C (c-a) s_n. \end{aligned}$$
(27)

Proof

The results for \(\xi _n^a\) and \(\theta _n^a\) are direct consequences of (20) and (21) and the finiteness of \(C_i\): Indeed, using these two results, we have for instance for \(\xi _n^a\)

$$\begin{aligned} \mathbb{E }\left( |\xi _n^a|^i \right) \!=\! 1-p_{n,\xi }^a(a,b) \!+\! p_{n,\xi }^a(a,b) \mathbb{E }\left( (G_n(b\!-\!a))^i \right) \le C (b-a) s_n \!+\! C_i (b\!-\!a) s_n \end{aligned}$$

and similarly for \(\theta _n^a\). The result for \(\theta _n^b\) is also straightforward because

$$\begin{aligned} p_{n,\theta }^{b}(a,b,c)&= \mathbf{P }_n^{b} \left( T(c) < T(b) \, | \, T(b) < T(a) < T(0) \right) \\&= \mathbf{P }_n^b\left( T(c) < T(b) \, | \, T(b) < T(a) \right) \\&= p_{n,\xi }^{b-a}(c-a, b-a). \end{aligned}$$

so the result follows as for \(\xi _n^a\). \(\square \)

Recall the random variables \(\widetilde{\xi }_n^a\) and \(\widetilde{\theta }_n^a\) defined in (18).

Lemma 7.10

For any \(A > a_0\) and any integer \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\),

$$\begin{aligned} \mathbb{E }\left( \left| \widetilde{\xi }_n^a \right| ^i \right) \le C (b-a) s_n \ \text{ and } \ \mathbb{E }\left( \left| \widetilde{\theta }_n^a \right| ^i \right) \le C (c-a) s_n. \end{aligned}$$
(28)

Proof

Combining the two definitions (15) and (18), we obtain

$$\begin{aligned} \mathbb{E }\left( \left| \widetilde{\xi }_n^a \right| ^i \right)&= \int \limits _0^\infty \left( 1-p_{n,\xi }^{x_0}(a,b) + p_{n,\xi }^{x_0}(a,b) \mathbb{E }\left( (G_n(b-a))^i \right) \right) \mathbb{P }(\chi _n^a \in \mathrm{d}x_0) \\&\le C (b-a) s_n + \mathbb{P }(a \le \chi _n^a \le b) + C_i (b-a) s_n \end{aligned}$$

using (20) to obtain the inequality. We obtain similarly for \(\widetilde{\theta }_n^a\), using (21) instead of (20),

$$\begin{aligned} \mathbb{E }\left( \left| \widetilde{\theta }_n^a \right| ^i \right) \le C (c-b) s_n + \mathbb{P }(b \le \chi _n^a \le c) + C_i (c-b) s_n. \end{aligned}$$

Thus, the result will be proved if we can show that \(\mathbb{P }(a \le \chi _n^a \le c) \le C (c-a) s_n\); remember that \(\chi _n^a\) is by definition equal in distribution to \(X(0)\) under \(\mathbf{P }_n^*(\, \cdot \, | \, T(a) < T(0))\). Because \(X\) is spectrally positive and \(a \le A\), it holds that

$$\begin{aligned} \mathbf{P }_n^* \left( a < X(0) \le c \, | \, T(a) < T(0) \right) = \frac{\mathbf{P }_n^* \left( a < X(0) \le c \right) }{\mathbf{P }_n^* \left( T(a) < T(0) \right) } \le \frac{\mathbf{P }_n^* \left( a < X(0) \le c \right) }{\mathbf{P }_n^* \left( T(A) < T(0) \right) }. \end{aligned}$$

Since \(\mathbf{P }_n^* \left( T(A) < T(0) \right) = \mathbf{P }_n^0 \left( T(A) < T(0) \right) \) by Lemma 2.1, Lemma 3.6 implies that

$$\begin{aligned} \inf _{n \ge 1} \left( r_n \mathbf{P }_n^* \left( T(A) < T(0) \right) \right) > 0 \end{aligned}$$

which leads to \(\mathbf{P }_n^* \left( a < X(0) \le c \, | \, T(a) < T(0) \right) \le C r_n \mathbf{P }_n^* \left( a < X(0) \le c \right) \). We have by definition

$$\begin{aligned} r_n \mathbf{P }_n^* \left( a < X(0) \le c \right) = (\alpha -1) r_n \int \limits _{a s_n}^{c s_n} \frac{\mathrm{d}u}{(1+u)^\alpha } \le C (c-a) \le C (c-a) s_n \end{aligned}$$

which completes the proof. \(\square \)

To control sums of i.i.d. random variables, we will repeatedly use the following simple combinatorial lemma. In the sequel, for \(I \in \mathbb{N }\) and \(\beta \in \mathbb{N }^I\), we write \(|\beta | = \sum \beta _i\) and \(||\beta || = \sum i \beta _i\).

Lemma 7.11

Let \((Y_{k})\) be i.i.d. random variables distributed as \(Y\). Then for any even integer \(I \ge 0\) and any \(K \ge 0\),

$$\begin{aligned} \mathbb{E }\left[ \left( \sum _{k=1}^K Y_k \right) ^I \right] \le I^I \sum _{\beta \in \mathbb{N }^I: ||\beta || = I} K^{|\beta |} \prod _{i=1}^I \left| \mathbb{E }\left( Y^i \right) \right| ^{\beta _i}. \end{aligned}$$

Proof

We have \(\mathbb{E }\left[ \left( Y_1 + \cdots + Y_K \right) ^I \right] = \sum _{1 \le k_1, \ldots , k_I \le K} \mathbb{E }(Y_{k_1} \ldots Y_{k_I})\). Since the \((Y_{k})\) are i.i.d., we have \(\mathbb{E }\left( Y_{k_1} \ldots Y_{k_I} \right) = m_1^{\beta _1} \ldots m_I^{\beta _I}\) with \(m_i = \mathbb{E }(Y^i)\) and \(\beta _i\) the number of indices appearing exactly \(i\) times in \((k_1, \ldots , k_I)\): \(\beta _1\) is the number of singletons, \(\beta _2\) the number of pairs, and so on. Since \(I\) is even, this leads to

$$\begin{aligned} \mathbb{E }\left[ \left( Y_{1} + \cdots + Y_K \right) ^I \right] \le \sum _{0 \le \beta _1, \ldots , \beta _I \le K: ||\beta || = I } A_{I,K}(\beta ) \, |m_1|^{\beta _1} \ldots |m_I|^{\beta _I} \end{aligned}$$

with \(A_{I,K}(\beta )\) the number of \(I\)-tuples \(k \in \{1, \ldots , K\}^I\) with exactly \(\beta _i\,i\)-tuples for each \(i = 1, \ldots , I\). There are \(K (K-1) \ldots (K - (|\beta |-1))\) different ways of choosing the \(|\beta |\) different values taken by \(k\), thus \(A_{I,K}(\beta ) = K (K-1) \ldots (K - (|\beta |-1)) \times B(I, |\beta |)\) with \(B(i, a)\) the number of ways of distributing \(i\) objects among \(a\) distinct boxes in such a way that no box is empty, so that \(A_{I,K}(\beta ) \le K^{|\beta |} I^{|\beta |} \le K^{|\beta |} I^I\) since \(|\beta | \le ||\beta || = I\). \(\square \)
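As an illustration of the lemma (not needed for the sequel), the bound can be checked exactly in the smallest nontrivial case \(I = 2\), where expanding over pairs of indices gives \(\mathbb{E }[(Y_1+\cdots +Y_K)^2] = K m_2 + K(K-1) m_1^2\), while the right-hand side involves the two compositions \(\beta = (2,0)\) and \(\beta = (0,1)\):

```python
# Exact check of the combinatorial bound for I = 2: the left-hand side
# expands as K*m2 + K*(K-1)*m1^2, the bound as I^I * (K^2*|m1|^2 + K*|m2|)
# (compositions beta = (2,0) and (0,1) of ||beta|| = 2).
# Illustrative sketch; m1, m2 denote the first two moments of Y.

def lhs(m1, m2, K):
    # E[(Y_1 + ... + Y_K)^2] for i.i.d. Y_k, expanded over index pairs
    return K * m2 + K * (K - 1) * m1 ** 2

def bound(m1, m2, K, I=2):
    return I ** I * (K ** 2 * abs(m1) ** 2 + K * abs(m2))

# Y Bernoulli(q): m1 = m2 = q
for q in (0.2, 0.5, 0.9):
    for K in (1, 5, 50):
        assert lhs(q, q, K) <= bound(q, q, K)
```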

In the sequel, we will use the inequality

$$\begin{aligned} \mathbb{E }\left( (G_n(a))^i \right) \le i! (r_n W_n(a))^i, \quad \ i \ge 1, a > 0, \end{aligned}$$

which comes from the fact that \(G_n(a)\) is stochastically dominated by an exponential random variable with parameter \(1-p_n^a(a) = 1/(r_n W_n(a))\): indeed, \(\mathbb{P }(G_n(a) \ge k) = (p_n^a(a))^k \le e^{-k(1-p_n^a(a))}\). We now use the previous bounds on the moments to control the probability

$$\begin{aligned} \mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda r_n \right) . \end{aligned}$$

We will see that this bound is linear in \(b-a\), which justifies the need for the minimum later on.

Lemma 7.12

For any \(A > a_0\) and any even integer \(I \ge 2\), there exists a finite constant \(C\) such that for all \(n \ge 1\), all \(\lambda > 0\) and all \(a_0 \le a < b \le A\) with \(b-a \le t_0 / s_n\),

$$\begin{aligned} \mathbb{P }\left( \left| Y + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda r_n \right) \le C \frac{b-a}{\lambda ^I} s_n r_n^{-I/2} \end{aligned}$$

where \(Y\) is any random variable equal in distribution either to \(\widetilde{\xi }_n^a\) or to \(G_n(b-a)\).

Proof

Using first the triangle inequality and then Markov's inequality gives

$$\begin{aligned} \mathbb{P }\left( \left| Y + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda r_n \right) \le (2/\lambda r_n)^{I} \left( \mathbb{E }\left( Y^I \right) + \mathbb{E }\left[ \left( \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right) ^I \right] \right) . \end{aligned}$$

Using the independence between \(G_n(a)\) and \((\xi _{n,k}^a, k \ge 1)\) together with Lemma 7.11 gives

$$\begin{aligned} \mathbb{E }\left[ \left( \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right) ^I \right]&\le C \sum _{\beta \in \mathbb{N }^I: ||\beta || = I} \mathbb{E }\left( (G_n(a))^{|\beta |}\right) \prod _{i=1}^I \left| \mathbb{E }\left( (\xi _n^a)^i \right) \right| ^{\beta _i} \\&\le C \sum _{\beta \in \mathbb{N }^I: ||\beta || = I} (r_n W_n(a))^{|\beta |} \prod _{i=1}^I \left| \mathbb{E }\left( (\xi _n^a)^i \right) \right| ^{\beta _i}. \end{aligned}$$

Lemma 7.7 gives the bound

$$\begin{aligned} \prod _{i=1}^I \left| \mathbb{E }\left( (\xi _n^a)^i \right) \right| ^{\beta _i}&\le C \left( (b-a) s_n / (r_n W_n(a)) \right) ^{\beta _1} \prod _{i=2}^I ((b-a)s_n)^{\beta _i} \\&= C ((b-a) s_n)^{|\beta |} \left( r_n W_n(a) \right) ^{-\beta _1} \le C (b-a) s_n \left( r_n W_n(a) \right) ^{-\beta _1} \end{aligned}$$

where \(((b-a) s_n)^{|\beta |} \le C (b-a) s_n\) follows from the fact that \((b-a) s_n \le t_0\) while \(|\beta | \ge 1\). Using (28) for the case \(Y = \widetilde{\xi }_n^a\) and the finiteness of \(C_I\) for the case \(Y = G_n(b-a)\), one can write \(\mathbb{E }(Y^I) \le C (b-a) s_n\), which gives

$$\begin{aligned} \mathbb{P }\left( \left| Y + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda r_n \right) \le C (\lambda r_n)^{-I} (b-a) s_n \left( 1 + \sum _{\beta \in \mathbb{N }^I: ||\beta || = I} (r_n W_n(a))^{|\beta | - \beta _1} \right) . \end{aligned}$$

But

$$\begin{aligned} |\beta | - \beta _1 = \sum _{i=2}^I \beta _i \le \sum _{i=2}^I (i/2) \beta _i = \frac{1}{2} \left( ||\beta ||- \beta _1 \right) \le I/2 \end{aligned}$$
(29)

and \(r_n W_n(a) \ge 1\) so \((r_n W_n(a))^{|\beta | - \beta _1} \le (r_n W_n(a))^{I/2}\), which proves the result. \(\square \)

Lemma 7.13

For any \(i \ge 1\) and \(A > 0\), it holds that

$$\begin{aligned} \sup \left\{ r_n^{-i} \mathbb{E }\left( (N_{n,a})^i \right) : n \ge 1, 0 < a < b \le A \right\} < +\infty \end{aligned}$$

and

$$\begin{aligned} \sup \left\{ r_n^{-i} \mathbb{E }\left( (\widetilde{N}_{n,b}^{a})^i \right) : n \ge 1, 0 < a < b < c \le A, b-a \le t_0 / s_n \right\} < +\infty . \end{aligned}$$

Proof

The result on \(N_{n,a}\) comes from the inequality \(\mathbb{E }((N_{n,a})^i) \le \mathbb{E }((G_n(a))^i)\). For \(\widetilde{N}_{n,b}^{a}\), we use the fact that \(|\widetilde{\xi }_n^a|\) is stochastically dominated by \(1 + G_n(b-a)\) (since \(|\xi _n^{x_0}|\) is, for any \(x_0 > 0\)); thus

$$\begin{aligned} \mathbb{E }\left( ( \widetilde{N}_{n,b}^{a})^i \right) \le \mathbb{E }\left( \left( \sum _{k=1}^{G_n(a)+1} (1+G_{n,k}(b-a)) \right) ^i \right) \end{aligned}$$

with \((G_{n,k}(b-a), k \ge 1)\) i.i.d. with common distribution \(G_n(b-a)\), independent of \(G_n(a)\). Thus, Lemma 7.11 gives

$$\begin{aligned} \mathbb{E }\left( ( \widetilde{N}_{n,b}^{a})^i \right) \le C \sum _{\beta \in \mathbb{N }^i: ||\beta ||= i} \mathbb{E }\left( (1+G_n(a))^{|\beta |} \right) \prod _{k=1}^i \left[ \mathbb{E }\left( (1+G_{n}(b-a))^k \right) \right] ^{\beta _k}. \end{aligned}$$

Since \(G_n(a)\) is stochastically dominated by an exponential random variable with parameter \(1-p_n^a(a) = 1/(r_n W_n(a))\) and \(G_n(b-a)\) is integer valued, so that \((1+G_n(b-a))^k \le (1+G_n(b-a))^i\) for any \(1 \le k \le i\), we get, using that \(|\beta | \le i\) and that all quantities are greater than 1,

$$\begin{aligned} \mathbb{E }\left( ( \widetilde{N}_{n,b}^{a})^i \right) \le C E \left( (1+E r_n W_n(a))^{i} \right) \left[ \mathbb{E }\left( (1+G_{n}(b-a))^i \right) \right] ^{i} \end{aligned}$$

where \(E\) is a mean-1 exponential random variable. Using that for each \(1 \le k \le i\)

$$\begin{aligned} \mathbb{E }\left( (G_n(b-a))^k\right) \le \mathbb{E }\left( (G_n(b-a))^i\right) \le C_i (b-a) s_n \le C_i t_0, \end{aligned}$$

one gets

$$\begin{aligned} \sup \left\{ \left[ \mathbb{E }\left( (1+G_{n}(b-a))^i \right) \right] ^{i} : n \ge 1, b-a \le t_0 / s_n \right\} < +\infty . \end{aligned}$$

Together with the inequality

$$\begin{aligned} \mathbb{E }\left( (1+E r_n W_n(a))^{i} \right) \le \mathbb{E }\left( (1+E r_n W_n(A))^{i} \right) \le C r_n^{i} \end{aligned}$$

this concludes the proof. \(\square \)

We can now prove Proposition 7.2. Remember that we must find constants \(C\) and \(\gamma > 0\) such that

$$\begin{aligned} \mathbf{P }_n^* \left( \left| L^0(c) - L^0(b) \right| \wedge \left| L^0(b) - L^0(a) \right| \ge \lambda \, | \, T(a) < T(0) \right) \le C \frac{(c-a)^{3/2}}{\lambda ^\gamma } \end{aligned}$$

uniformly in \(n \ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\).

Proof of Proposition 7.2

Fix four even integers \(I_1, I_2, I_3, I_4\). By (17),

$$\begin{aligned}&\mathbf{P }_n^* \left( \left| L^0(c) - L^0(b) \right| \wedge \left| L^0(b) - L^0(a) \right| \ge \lambda \, | \, T(a) < T(0) \right) \\&\quad = \mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \wedge \left| \widetilde{\theta }_n^a \mathbb{1 }_{\{\widetilde{\xi }_n^a \ge 0\}} + \sum _{k=1}^{N_{n,a}} \theta _{n,k}^a + \sum _{k=1}^{\widetilde{N}_{n,b}^{a}} \theta _{n,k}^b \right| \ge \lambda _n \right) \end{aligned}$$

with \(\lambda _n \!=\! \lambda r_n\). Let \({\fancyscript{F}}\) be the \(\sigma \)-algebra generated by \(\chi _n^a,\,G_n(a),\widetilde{\xi }_n^a\) and the \((\xi _{n,k}^a, k \!\ge \! 1)\). Then, the above probability is equal to

$$\begin{aligned} \mathbb{E }\left\{ \pi \, ; \, \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right\} \end{aligned}$$

with \(\pi \) the random variable

$$\begin{aligned} \pi = \mathbb{P }\left( \left| \widetilde{\theta }_n^a \mathbb{1 }_{\{\widetilde{\xi }_n^a \ge 0\}} + \sum _{k=1}^{N_{n,a}} \theta _{n,k}^a + \sum _{k=1}^{\widetilde{N}_{n,b}^{a}} \theta _{n,k}^b \right| \ge \lambda _n \, \Big | \, {\fancyscript{F}}\right) \le \widetilde{\pi }+ \pi _a + \pi _b \end{aligned}$$

where

$$\begin{aligned} \widetilde{\pi }&= \mathbb{P }\left( \left| \widetilde{\theta }_n^a \mathbb{1 }_{\{\widetilde{\xi }_n^a \ge 0\}} \right| \ge \lambda _n / 3 \, \Big | \, {\fancyscript{F}}\right) = \mathbb{1 }_{\{\widetilde{\xi }_n^a \ge 0\}} \mathbb{P }\left( \left| \widetilde{\theta }_n^a \right| \ge \lambda _n / 3 \, \Big | \, \chi _n^a \right) , \\ \pi _a&= \mathbb{P }\left( \left| \sum _{k=1}^{N_{n,a}} \theta _{n,k}^a \right| \ge \lambda _n / 3 \, \Big | \, N_{n,a} \right) \text{ and } \pi _b = \mathbb{P }\left( \left| \sum _{k=1}^{\widetilde{N}_{n,b}^{a}} \theta _{n,k}^b \right| \ge \lambda _n / 3 \, \Big | \, \widetilde{N}_{n,b}^{a} \right) . \end{aligned}$$

The two terms \(\pi _a\) and \(\pi _b\) can be dealt with very similarly. Fix \(u = a\) or \(b\), and denote by \(N_u\) the random variable \(N_{n,a}\) if \(u = a\) or \(\widetilde{N}_{n,b}^{a}\) if \(u = b\). With this notation, the \((\theta _{n,k}^u, k \ge 1)\) are i.i.d. and independent of \(N_u\), so that Markov's inequality and Lemma 7.11 give

$$\begin{aligned} \pi _u \le (3I_1 / \lambda _n)^{I_1} \sum _{\beta \in \mathbb{N }^{I_1}: ||\beta || = I_1} N_u^{|\beta |} \prod _{i=1}^{I_1} \left| \mathbb{E }\left( \left( \theta _{n}^u \right) ^i \right) \right| ^{\beta _i}. \end{aligned}$$

By (27),

$$\begin{aligned} \prod _{i=1}^{I_1} \left| \mathbb{E }\left( \left( \theta _n^u \right) ^i \right) \right| ^{\beta _i} \le C ((c-a) s_n)^{|\beta |} \le C (c-a) s_n \end{aligned}$$

since \(1 \le |\beta | \le I_1\) and \((c-a) s_n \le t_0\). Since \(N_u\) is integer valued, it holds that \(N_u^{|\beta |} \le N_u^{I_1}\) and finally this gives

$$\begin{aligned} \pi _u \le C \lambda ^{-I_1} (N_u / r_n)^{I_1} (c-a) s_n. \end{aligned}$$

Applying the Cauchy–Schwarz inequality yields

$$\begin{aligned} \mathbb{E }\left\{ \pi _u \, ; \, \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right\} \le C \lambda ^{-I_1} (c-a) s_n \sqrt{ \mathbb{E }\left( (N_u / r_n)^{2I_1} \right) \mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right) } \end{aligned}$$

and finally, Lemma 7.12 with \(Y = \widetilde{\xi }_n^a\) gives, together with Lemma 7.13,

$$\begin{aligned} \mathbb{E }\left\{ \pi _u \, ; \, \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right\} \le C \frac{(c-a)^{3/2}}{\lambda ^{I_1+I_2/2}} s_n^{3/2} r_n^{-I_2/4}. \end{aligned}$$

It remains to control the term \(\widetilde{\pi }\): on the event \(\{ \widetilde{\xi }_n^a \ge 0 \}\), \(\widetilde{\xi }_n^a\) is equal in distribution to \(G_n(b-a)\) and is independent of everything else; thus, we have

$$\begin{aligned} \mathbb{E }\left\{ \widetilde{\pi }\, ; \, \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right\}&= \mathbb{E }\left\{ \mathbb{P }\left( \left| \widetilde{\theta }_n^a \right| \ge \lambda _n / 3 \, \big | \, \chi _n^a \right) \, ; \, \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n , \widetilde{\xi }_n^a \ge 0 \right\} \\&\le C \lambda _n^{-I_3} \mathbb{E }\left\{ \mathbb{E }\left( \left| \widetilde{\theta }_n^a \right| ^{I_3} \, \big | \, \chi _n^a \right) \, ; \, \left| G_n(b-a) + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right\} . \end{aligned}$$

Since \(\mathbb{E }( | \widetilde{\theta }_n^a|^{I_3} \, | \, \chi _n^a )\) is independent of \(G_n(b-a) + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\), we get

$$\begin{aligned} \mathbb{E }\left\{ \widetilde{\pi }\, ; \, \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right\}&\le C \lambda _n^{-I_3} \mathbb{E }\left( \left| \widetilde{\theta }_n^a \right| ^{I_3} \right) \mathbb{P }\left( \left| G_n(b-a) + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a \right| \ge \lambda _n \right) \\&\le C \lambda _n^{-I_3-I_4} (c-a)^2 s_n^2 r_n^{-I_4/2} \end{aligned}$$

where the second inequality follows using (28) and Lemma 7.12 with \(Y = G_n(b-a)\). Since \((c-a) s_n \le t_0\), we have \(((c-a)s_n)^2 \le C ((c-a)s_n)^{3/2}\) and finally, gathering the previous inequalities, one sees that we have derived the bound

$$\begin{aligned}&\mathbf{P }_n^* \left( \left| L^0(c) - L^0(b) \right| \wedge \left| L^0(b) - L^0(a) \right| \ge \lambda \, | \, T(a) < T(0) \right) \\&\quad \le C (c-a)^{3/2} s_n^{3/2} \left( \lambda ^{-I_1 - I_2/2} r_n^{-I_2/4} + \lambda ^{-I_3 - I_4} r_n^{-I_4/2} \right) . \end{aligned}$$

Now choose \(I_2\) and \(I_4\) large enough such that both sequences \((s_n^{3/2} r_n^{-I_2/4})\) and \((s_n^{3/2} r_n^{-I_4/2})\) are bounded: This is possible since for any \(\beta \in \mathbb{R },\,s_n r_n^{-\beta } = s_n^{1-\beta (\alpha -1)}\). Moreover, choose \(I_2\) not only even but a multiple of 4. Then, once \(I_2\) and \(I_4\) are fixed, choosing \(I_1\) and \(I_3\) in such a way that \(I_1 + I_2 / 2 = I_3 + I_4\) concludes the proof. \(\square \)

1.2 Case \(b-a \ge t_0 / s_n\)

We now consider the simpler case \(b-a \ge t_0 / s_n\) and prove Proposition 7.3.

Lemma 7.14

For any \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(0 < a < b\) such that \(b-a \ge t_0 / s_n\),

$$\begin{aligned} \mathbb{E }\left( |\widetilde{\xi }_n^a|^i \right) \le C (r_n W_n(b-a))^i. \end{aligned}$$

Proof

In view of (18), it is enough to show that \(\mathbb{E }\left( |\xi _n^{x_0}|^i \right) \le C (r_n W_n(b-a))^i\) for every \(x_0 > 0\). Since \(b-a \ge t_0 / s_n\), exploiting the monotonicity of \(w_n\) gives

$$\begin{aligned} p_n^{b-a}(b-a) = 1 - \frac{w_n(0)}{w_n((b-a)s_n)} \ge 1 - \frac{1}{w_n(t_0)} \ge \frac{1}{2} \end{aligned}$$

since \(t_0\) has been chosen such that \(w_n(t_0) \ge 2\). Since \(G_n(b-a)\) is a geometric random variable with parameter \(p_n^{b-a}(b-a)\), we have

$$\begin{aligned} \mathbb{E }\left( (G_n(b-a))^i \right) \ge \mathbb{E }(G_n(b-a)) = \frac{p_n^{b-a}(b-a)}{1-p_n^{b-a}(b-a)} \ge 1, \end{aligned}$$

using \(p_n^{b-a}(b-a) \ge 1/2\). Thus for any \(x_0 > 0\),

$$\begin{aligned} \mathbb{E }\left( |\xi _n^{x_0}|^i \right)&= 1-p_{n,\xi }^{x_0}(a,b) + p_{n,\xi }^{x_0}(a,b) \mathbb{E }\left( (G_n(b-a))^i \right) \\&\le (1-p_{n,\xi }^{x_0}(a,b)) \mathbb{E }\left( (G_n(b-a))^i \right) + p_{n,\xi }^{x_0}(a,b) \mathbb{E }\left( (G_n(b-a))^i \right) . \end{aligned}$$

This last quantity is equal to \(\mathbb{E }\left( (G_n(b-a))^i\right) \), and so the inequality \(\mathbb{E }\left( (G_n(b-a))^i\right) \le i! (r_n W_n(b-a))^i\) completes the proof. \(\square \)

Lemma 7.15

For any \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(0 < a < b\) with \(b-a \ge t_0 / s_n\),

$$\begin{aligned} \mathbb{E }\left( \left| \xi _n^a \right| ^i \right) \le C (r_n W_n(b-a))^{i-1}. \end{aligned}$$

Moreover, for any \(n \ge 1\) and \(0 < a < b\),

$$\begin{aligned} \left| \mathbb{E }(\xi _n^a) \right| \le \frac{W_n(b-a)}{W_n(a) - W_n(0)}. \end{aligned}$$

Proof

By definition (15) of \(\xi _n\), we have \(\mathbb{E }\left( \left| \xi _n^a \right| ^i \right) = 1-p_{n,\xi }^a(a,b) + p_{n,\xi }^a(a,b) \mathbb{E }\left( (G_n(b-a))^i \right) \), and so plugging in (24) gives

$$\begin{aligned} \mathbb{E }\left( \left| \xi _n^a \right| ^i \right)&\le 1 + i! \left( \frac{W_n(b-a)}{W_n(0)} \right) ^i \frac{W_n(0)(W_n(b) - W_n(b-a))}{W_n(b-a) (W_n(a) - W_n(0))} \\&\le 2 i! (r_n W_n(b-a))^{i-1} \end{aligned}$$

using \(W_n(b) - W_n(b-a) \le W_n(a) - W_n(0)\) and \(1 \le i! (r_n W_n(b-a))^{i-1}\). The second inequality is a direct consequence of (25), which can be rewritten as

$$\begin{aligned} \left| \mathbb{E }\left( \xi _n^a \right) \right| = \frac{W_n(b-a) - W_n(0) - (W_n(b) - W_n(a))}{W_n(a) - W_n(0)}. \end{aligned}$$

The result is proved. \(\square \)

Proof of Proposition 7.3

By (17), we have

$$\begin{aligned} \mathbf{P }_n^* \left( \left| L^0(b) - L^0(a) \right| \ge \lambda \, | \, T(a) < T(0) \right) = \mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\right| \ge \lambda r_n \right) . \end{aligned}$$

We have

$$\begin{aligned}&\mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\right| \ge \lambda r_n \right) \\&\quad \le C(\lambda r_n)^{-I} \left( \mathbb{E }\left( |\widetilde{\xi }_n^a|^I \right) + \sum _{\beta \in \mathbb{N }^I: ||\beta ||= I} \mathbb{E }\left( (G_n(a))^{|\beta |} \right) \prod _{i=1}^I \left| \mathbb{E }\left( (\xi _n^a)^i \right) \right| ^{\beta _i} \right) \\&\quad \le C\lambda ^{-I} \left( (W_n(b-a))^I + \sum _{\beta \in \mathbb{N }^I: ||\beta ||= I} r_n^{|\beta | - I} (W_n(a))^{|\beta |} \prod _{i=1}^I \left| \mathbb{E }\left( (\xi _n^a)^i \right) \right| ^{\beta _i} \right) \end{aligned}$$

where the first inequality comes from the triangle inequality, Markov's inequality and Lemma 7.11, and the second inequality is a consequence of Lemma 7.14 and the fact that \(G_n(a)\) is stochastically dominated by an exponential random variable with parameter \(1-p_n^a(a)\). Using Lemma 7.15 and the identity \(\sum _{i=2}^I (i-1) \beta _i = I - |\beta |\) gives

$$\begin{aligned}&r_n^{|\beta |-I} (W_n(a))^{|\beta |} \prod _{i=1}^I \left| \mathbb{E }\left( (\xi _n^a)^i \right) \right| ^{\beta _i} \\&\quad \le C r_n^{|\beta |-I} (W_n(a))^{|\beta |} \left( \frac{W_n(b-a)}{W_n(a) - W_n(0)} \right) ^{\beta _1} (r_n W_n(b-a))^{I-|\beta |} \\&\quad \le C (W_n(b-a))^{I + \beta _1-|\beta |}. \end{aligned}$$

Thus

$$\begin{aligned} \mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\right| \ge \lambda r_n \right) \le C\lambda ^{-I} \left( (W_n(b-a))^I + \sum _{\beta \in \mathbb{N }^I: ||\beta ||= I} (W_n(b-a))^{I-|\beta |+\beta _1} \right) . \end{aligned}$$

Since \(W_n(t) = w_n(t s_n) / s_n^{\alpha -1}\), it holds that

$$\begin{aligned} \sup \left\{ \frac{W_n(t)}{t^{\alpha -1}} : n \ge 1, t \ge t_0/s_n \right\} = \sup \left\{ \frac{w_n(t)}{t^{\alpha -1}} : n \ge 1, t \ge t_0 \right\} \end{aligned}$$

which has been shown to be finite in the proof of Lemma 7.1. Hence, the last upper bound yields

$$\begin{aligned} \mathbb{P }\left( \left| \widetilde{\xi }_n^a + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\right| \ge \lambda r_n \right) \le C\lambda ^{-I} \left( (b-a)^{I(\alpha -1)} + \sum _{\beta \in \mathbb{N }^I: ||\beta ||= I} (b-a)^{(I-|\beta |+\beta _1)(\alpha -1)} \right) . \end{aligned}$$

By (29), \(I-|\beta | + \beta _1 \ge I / 2\) and since we consider \(b-a \le A\), this gives

$$\begin{aligned} (b-a)^{(I-|\beta |+\beta _1)(\alpha -1)} \le C (b-a)^{(\alpha -1) I / 2} \end{aligned}$$

and we finally get the desired bound for \(I\) large enough, i.e., such that \(I(\alpha -1) \ge 3\). Inspecting the proof of Proposition 7.2, one can check that the two constants \(\gamma \) can be chosen to be equal. \(\square \)


Lambert, A., Simatos, F. Asymptotic Behavior of Local Times of Compound Poisson Processes with Drift in the Infinite Variance Case. J Theor Probab 28, 41–91 (2015). https://doi.org/10.1007/s10959-013-0492-1
