Abstract
Consider compound Poisson processes with negative drift and no negative jumps, which converge to some spectrally positive Lévy process with nonzero Lévy measure. In this paper, we study the asymptotic behavior of the local time process, in the spatial variable, of these processes killed at two different random times: either at the time of the first visit of the Lévy process to 0, in which case we prove results at the excursion level under suitable conditionings; or at the time when the local time at 0 exceeds some fixed level. We prove that finite-dimensional distributions converge under general assumptions, even if the limiting process is not càdlàg. Making an assumption on the distribution of the jumps of the compound Poisson processes, we strengthen this to weak convergence. Our assumption allows the limiting process to be a stable Lévy process with drift. These results have implications for branching processes and for queueing theory, namely for the scaling limit of binary, homogeneous Crump–Mode–Jagers processes and for the scaling limit of the Processor-Sharing queue length process.
References
Barlow, M.T.: Necessary and sufficient conditions for the continuity of local time of Lévy processes. Ann. Probab. 16(4), 1389–1427 (1988)
Bertoin, J.: Lévy Processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge (1996)
Bertoin, J.: Exponential decay and ergodicity of completely asymmetric Lévy processes in a finite interval. Ann. Appl. Probab. 7(1), 156–169 (1997)
Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics: Probability and Statistics, 2nd edn. Wiley, New York (1999)
Borodin, A.N.: The asymptotic behavior of local times of recurrent random walks with finite variance. Teor. Veroyatnost. i Primenen. 26(4), 769–783 (1981)
Borodin, A.N.: Asymptotic behavior of local times of recurrent random walks with infinite variance. Teor. Veroyatnost. i Primenen. 29(2), 312–326 (1984)
Borodin, A.N.: On the character of convergence to Brownian local time. I, II. Probab. Theory Relat. Fields 72(2), 231–250, 251–277 (1986)
Caballero, M.E., Lambert, A., Uribe Bravo, G.: Proof(s) of the Lamperti representation of continuous-state branching processes. Probab. Surv. 6, 62–89 (2009)
Chan, T., Kyprianou, A., Savov, M.: Smoothness of scale functions for spectrally negative Lévy processes. Probab. Theory Relat. Fields, 1–18 (2010). doi:10.1007/s00440-010-0289-4
Csáki, E., Révész, P.: Strong invariance for local times. Z. Wahrsch. Verw. Gebiete 62(2), 263–278 (1983)
Csörgő, M., Révész, P.: On strong invariance for local time of partial sums. Stoch. Process. Appl. 20(1), 59–84 (1985)
Duquesne, T., Le Gall, J.-F.: Random trees, Lévy processes and spatial branching processes. Astérisque 281, vi+147 (2002)
Eisenbaum, N., Kaspi, H.: A necessary and sufficient condition for the Markov property of the local time process. Ann. Probab. 21(3), 1591–1598 (1993)
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II, 2nd edn. Wiley, New York (1971)
Grimvall, A.: On the convergence of sequences of branching processes. Ann. Probab. 2(6), 1027–1045 (1974)
Haccou, P., Jagers, P., Vatutin, V.A.: Branching Processes: Variation, Growth, and Extinction of Populations. Cambridge Studies in Adaptive Dynamics. Cambridge University Press, Cambridge (2007)
Helland, I.S.: Continuity of a class of random time transformations. Stoch. Process. Appl. 7(1), 79–99 (1978)
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, volume 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 2nd edn. Springer, Berlin (2003)
Jain, N.C., Pruitt, W.E.: An invariance principle for the local time of a recurrent random walk. Z. Wahrsch. Verw. Gebiete 66(1), 141–156 (1984)
Kang, J.-S., Wee, I.-S.: A note on the weak invariance principle for local times. Stat. Probab. Lett. 32(2), 147–159 (1997)
Kella, O., Zwart, B., Boxma, O.: Some time-dependent properties of symmetric \(M/G/1\) queues. J. Appl. Probab. 42(1), 223–234 (2005)
Kesten, H.: An iterated logarithm law for local time. Duke Math. J. 32, 447–456 (1965)
Khoshnevisan, D.: An embedding of compensated compound Poisson processes with applications to local times. Ann. Probab. 21(1), 340–361 (1993)
Knight, F.B.: Random walks and a sojourn density process of Brownian motion. Trans. Am. Math. Soc. 109, 56–86 (1963)
Kuznetsov, A., Kyprianou, A.E., Rivero, V.: The theory of scale functions for spectrally negative Lévy processes. In: Lévy Matters II. Lecture Notes in Mathematics, pp. 97–186. Springer, Berlin, Heidelberg (2013)
Kyprianou, A., Rivero, V., Song, R.: Convexity and smoothness of scale functions and de Finetti’s control problem. J. Theor. Probab. 23, 547–564 (2010). doi:10.1007/s10959-009-0220-z
Lambert, A.: The contour of splitting trees is a Lévy process. Ann. Probab. 38(1), 348–395 (2010)
Lambert, A., Simatos, F.: The weak convergence of regenerative processes using some excursion path decompositions. Ann. Inst. Henri Poincaré B: Probab. Stat. (accepted)
Lambert, A., Simatos, F., Zwart, B.: Scaling limits via excursion theory: interplay between Crump–Mode–Jagers branching processes and processor-sharing queues. Ann. Appl. Probab. (accepted)
Lamperti, J.: Continuous-state branching processes. Bull. Am. Math. Soc. 73, 382–386 (1967)
Lamperti, J.: The limit of a sequence of branching processes. Probab. Theory Relat. Fields 7(4), 271–288 (1967)
Limic, V.: A LIFO queue in heavy traffic. Ann. Appl. Probab. 11(2), 301–331 (2001)
Perkins, E.: Weak invariance principles for local time. Z. Wahrsch. Verw. Gebiete 60(4), 437–451 (1982)
Révész, P.: Local time and invariance. In: Analytical methods in probability theory (Oberwolfach, 1980), volume 861 of Lecture Notes in Mathematics, pp. 128–145. Springer, Berlin (1981)
Révész, P.: A strong invariance principle of the local time of RVs with continuous distribution. Studia Sci. Math. Hungar. 16(1–2), 219–228 (1981)
Robert, P.: Stochastic Networks and Queues. Stochastic Modelling and Applied Probability Series. Springer, New York, xvii+398 pp. (2003)
Sagitov, S.M.: General branching processes: convergence to Irzhina processes. J. Math. Sci. 69(4), 1199–1206 (1994). Stability problems for stochastic models (Kirillov, 1989)
Sagitov, S.: A key limit theorem for critical branching processes. Stoch. Process. Appl. 56(1), 87–100 (1995)
Stone, C.: Limit theorems for random walks, birth and death processes, and diffusion processes. Ill. J. Math. 7, 638–660 (1963)
Acknowledgments
F. Simatos would like to thank Bert Zwart for initiating this project and pointing out the reference [21].
The research of Amaury Lambert is funded by project ‘MANEGE’ 09-BLAN-0215 from ANR (French national research agency). While most of this research was being carried out, Florian Simatos was affiliated with CWI and sponsored by an NWO-VIDI grant.
Appendix: Proof of Proposition 5.1
In the rest of this section, we fix some \(a_0 > 0\) and we assume that the tightness assumption stated in Sect. 2 holds: In particular, \(\Lambda_n = \Lambda\) with \(\mathbb{P}(\Lambda \ge s) = (1+s)^{-\alpha}\) for some \(1 < \alpha < 2\), \(n = s_n^{\alpha}\) and \(r_n = s_n^{\alpha-1}\). The goal of this section is to prove that the sequence \(L^0(a_0 + \cdot)\) under \(\mathbf{P}_n^*(\,\cdot \mid T(a_0) < T(0))\) is tight.
Note that this will prove Proposition 5.1: indeed, by Proposition 4.1, \(L^0(a_0 + \cdot)\) under \(\mathbf{P}_n^*(\,\cdot \mid T(a_0) < T(0))\) converges in the sense of finite-dimensional distributions to \(L^0(a_0 + \cdot)\) under \({\fancyscript{N}}(\,\cdot \mid T(a_0) < T(0))\). Moreover, the jumps of \(L^0(\,\cdot\, + a_0)\) are of deterministic size \(1/r_n\). Since \(1/r_n \rightarrow 0\), any limit point must be continuous, see for instance [4]. Note that this reasoning could therefore be used to show that \(L\) under \(\mathbf{P}^0\) is continuous (in the space variable), a result that is difficult to prove in general (see for instance [1]).
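For the reader's convenience, let us record the standard fact invoked here (a consequence of the results on the Skorohod topology in [4]; the formulation below is ours): if \(x_n \to x\) in the Skorohod \(J_1\) topology and the maximal jump sizes vanish, that is,
\[
\sup_{t \le T} \left| x_n(t) - x_n(t-) \right| \xrightarrow[n \to \infty]{} 0 \quad \text{for every } T > 0,
\]
then the limit \(x\) is continuous. In the present setting, all the jumps of \(L^0(a_0 + \cdot)\) have size \(1/r_n \to 0\), so any weak limit point is supported by continuous paths.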
Under the tightness assumption, the scale function \(w_n\) enjoys the following useful properties. The convexity and smoothness properties constitute one of the main reasons for making the tightness assumption.
Lemma 7.1
For each \(n \ge 1,\,w_n\) is twice continuously differentiable and concave, its derivative \(w_n^{\prime }\) is convex, \(w_n(0) = 1\) and \(w_n^{\prime }(0) = \kappa _n\). Moreover,
and finally, there exist \(n_0 \ge 1\) and \(t_0 > 0\) such that \(w_n(t_0) \ge 2\) for all \(n \ge n_0\).
Proof
The smoothness of \(w_n\) follows from Theorem 3 in [9] since \(f(s) = \mathbb{P}(\Lambda \ge s)\) is continuously differentiable with \(|f'(0)| < +\infty\). The convexity properties follow from Theorem 2.1 in [26] since \(f\) is log-convex and \(\psi'_n(0) \ge 0\). The formulas for \(w_n(0)\) and \(w'_n(0)\) are well known, see for instance [25]. We now prove the last two assertions.
First of all, note that
Let \(\overline{\psi}\) be the Lévy exponent given by \(\overline{\psi}(\lambda) = \lambda - (\alpha-1)\, \mathbb{E}(1 - e^{-\lambda \Lambda})\) with corresponding scale function \(\overline{w}\). Since \(\kappa_n \rightarrow \alpha-1\), \(\mathbb{P}_n^0\) converges in distribution to the law of the Lévy process with Lévy exponent \(\overline{\psi}\), and so it can be shown, as in the proof of Lemma 3.4, that \(w_n(1) \rightarrow \overline{w}(1)\). The first term \(\sup\left\{ w_n(1): n \ge 1 \right\}\) appearing in the above maximum is therefore finite. As for the second term, since for any \(n \ge 1\) we have \(\kappa_n \mathbb{E}(\Lambda) = \kappa_n / (\alpha-1) \le 1\) by assumption, we get \(\overline{\psi} \le \psi_n\) and by monotonicity it follows that \(w_n \le \overline{w}\). Moreover, it is known that there exists a finite constant \(C > 0\) such that \(\overline{w}(t) \le C / (t\, \overline{\psi}(1/t))\) for all \(t > 0\), see Proposition III.1 or the proof of Proposition VII.10 in [2]. In particular,
Since \(\mathbb{P }(\Lambda \ge s) = (1+s)^{-\alpha }\) one can check that there exists some constant \(\beta > 0\) such that \(\overline{\psi }(t) \sim \beta t^{\alpha }\) as \(t \rightarrow 0\), which shows that the last upper bound is finite and proves the desired result.
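Although the value of the constant plays no role in the sequel, let us sketch this computation under the above normalizations (the identification \(\beta = \Gamma(2-\alpha)\) is ours). Writing \(1 - e^{-\lambda\Lambda} = \int_0^\Lambda \lambda e^{-\lambda s}\, \mathrm{d}s\) and using Fubini's theorem together with \(\int_0^\infty (1+s)^{-\alpha}\, \mathrm{d}s = 1/(\alpha-1)\), we get
\[
\overline{\psi}(\lambda) = (\alpha-1)\, \lambda \int_0^\infty \left( 1 - e^{-\lambda s} \right) (1+s)^{-\alpha}\, \mathrm{d}s .
\]
The change of variables \(u = \lambda s\) and monotone convergence then give, as \(\lambda \rightarrow 0\),
\[
\int_0^\infty \left( 1 - e^{-\lambda s} \right) (1+s)^{-\alpha}\, \mathrm{d}s \sim \lambda^{\alpha-1} \int_0^\infty \left( 1 - e^{-u} \right) u^{-\alpha}\, \mathrm{d}u = \lambda^{\alpha-1}\, \frac{\Gamma(2-\alpha)}{\alpha-1},
\]
the last equality following from an integration by parts. Hence \(\overline{\psi}(t) \sim \beta t^{\alpha}\) with \(\beta = \Gamma(2-\alpha)\).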
To prove the last assertion of the lemma, consider \(n_0\) large enough such that \({\underline{\kappa}} = \inf_{n \ge n_0} \kappa_n > (\alpha-1)/2\) (remember that \(\kappa_n \rightarrow \alpha-1\)). Let \({\underline{\psi}}\) be the Lévy exponent given by \({\underline{\psi}}(\lambda) = \lambda - {\underline{\kappa}}\, \mathbb{E}(1-e^{-\lambda \Lambda})\) with corresponding scale function \({\underline{w}}\). By monotonicity, we get \(w_n \ge {\underline{w}}\) for any \(n \ge n_0\), and one easily checks that \({\underline{w}}(\infty) = 1/(1 - {\underline{\kappa}}/(\alpha-1))\). Since by choice of \({\underline{\kappa}}\) this last limit is strictly larger than 2, there exists \(t_0 > 0\) such that \({\underline{w}}(t_0) \ge 2\). This proves the result. \(\square\)
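For completeness, the identity \({\underline{w}}(\infty) = 1/(1 - {\underline{\kappa}}/(\alpha-1))\) used above can be checked as follows. A scale function satisfies \(w(\infty) = 1/\psi'(0+)\) whenever \(\psi'(0+) > 0\) (see, e.g., [2, Chap. VII]), and here \(\mathbb{E}(\Lambda) = \int_0^\infty (1+s)^{-\alpha}\, \mathrm{d}s = 1/(\alpha-1)\), so that
\[
{\underline{w}}(\infty) = \frac{1}{{\underline{\psi}}'(0+)} = \frac{1}{1 - {\underline{\kappa}}\, \mathbb{E}(\Lambda)} = \frac{1}{1 - {\underline{\kappa}}/(\alpha-1)},
\]
which is indeed strictly larger than 2 as soon as \({\underline{\kappa}} > (\alpha-1)/2\).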
Since we are interested in limit theorems, we will assume in the sequel, without loss of generality, that there exists \(t_0 > 0\) such that \(w_n(t_0) \ge 2\) for all \(n \ge 1\), and we henceforth fix such a \(t_0\). We first give a short proof of Proposition 5.1 based on the following two technical results, see Theorem 13.5 in [4].
Proposition 7.2
(Case \((b-a) \vee (c-b) \le t_0 / s_n\)) For any \(A > a_0\), there exist finite constants \(C, \gamma \ge 0\) such that for all \(n\ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\),
Proposition 7.3
(Case \(b-a \ge t_0 / s_n\)) For any \(A > a_0\), there exist finite constants \(C, \gamma \ge 0\) such that for all \(n \ge 1,\, \lambda > 0\) and \(a_0 \le a < b \le A\) with \(b-a \ge t_0 / s_n\),
Moreover, the constant \(\gamma \) can be taken equal to the constant \(\gamma \) of Proposition 7.2.
At this point, it must be said that the case \((b-a) \vee (c-b) \le t_0 / s_n\) is much harder than the case \(b-a \ge t_0 / s_n\). The reason is that in the former case, the bound \((c-a)^{3/2}\) cannot be achieved without taking the minimum between \(|L^0(b) - L^0(a)|\) and \(|L^0(c) - L^0(b)|\): considering only one of these two terms gives a bound which can be shown to decay only linearly in \(c-a\), and this is not sufficient to establish tightness. This technical problem reflects the fact that, in the well-studied context of random walks, tightness is harder to establish in the non-lattice case than in the lattice one, where small oscillations, i.e., precisely the case \((b-a) \vee (c-b) \le t_0 / s_n\), are significantly easier to control.
Proof of Proposition 5.1 based on Propositions 7.2 and 7.3
According to Theorem 13.5 in [4], it is enough to show that for each \(A > a_0\), there exist finite constants \(C, \gamma \ge 0\) and \(\beta > 1\) such that for all \(n \ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c \le A\),
Fix \(n \ge 1\), \(\lambda > 0\) and \(a_0 \le a < b < c\) and let \(E = \{ |L^0(b) - L^0(a)| \wedge |L^0(c) - L^0(b)| \ge \lambda \}\). Since \(X\) under \(\mathbf{P}_n^*\) is spectrally positive, we have \(E \subset \{ L^0(a) > 0 \}\) and so
Thus, Bayes' formula entails
and since \(\mathbf{P }_n^* \left( T(a) < T(0) \right) \le \mathbf{P }_n^* \left( T(a_0) < T(0) \right) \), we get
Thus (13) follows from the previous inequality together with either Proposition 7.2 when \((b-a) \vee (c-b) \le t_0 / s_n\), or Proposition 7.3 when \(b-a \ge t_0 / s_n\). In the last remaining case where \(c-b \ge t_0 / s_n\), we derive similarly the following upper bound:
Proposition 7.3 then concludes the proof. \(\square \)
The rest of this section is devoted to the proof of Propositions 7.2 and 7.3. Our analysis relies on an explicit expression of the law of \((L^0(b) - L^0(a), L^0(c) - L^0(b))\) under \(\mathbf{P }_n^{x_0}(\, \cdot \, | \, T(a) < T(0))\). For \(0 < a < b < c\) and \(x_0 > 0\), we define
as well as
Remember that \(G_n(a)\) denotes a geometric random variable with parameter \(p_n^a(a)\), and from now on we adopt the convention \(\sum _1^{-1} = \sum _1^0 = 0\).
Lemma 7.4
For any \(0 < a < b < c\) and \(x_0 > 0\), the random variable
under \(\mathbf{P }_n^{x_0}( \, \cdot \, | \, T(a) < T(0))\) is equal in distribution to
where
All the random variables \(\xi_n^{x_0}\), \(\xi_{n,k}^a\), \(\theta_n^{x_0}\), \(\theta_{n,k}^a\), \(\theta_{n,k}^b\) and \(G_n(a)\) are independent. For any \(u > 0\) and \(k \ge 1\), \(\xi_{n,k}^u\) is equal in distribution to \(\xi_n^u\) and \(\theta_{n,k}^u\) to \(\theta_n^u\), where the laws of \(\xi_n^u\) and \(\theta_n^u\) are described as follows: for any function \(f\),
and
Proof
In the rest of the proof, we work under \(\mathbf{P}_n^{x_0}(\,\cdot \mid T(a) < T(0))\). By definition, \(r_n L^0(a)\) and \(r_n L^0(b)\) are the numbers of visits of \(X\) to \(a\) and \(b\) in \([0, T(0)]\), respectively. Thus, if \(\beta\) is the number of visits of \(X\) to \(b\) in \([0, T(a)]\) and \(\beta_{k}^{a}\) is the number of visits of \(X\) to \(b\) between the \(k\)th and \((k+1)\)st visits to \(a\), we have
Decomposing the path of \(X\) between successive visits to \(a\) and using the strong Markov property, one easily checks that all the random variables of the right-hand side are independent and that \(r_n L^0(a),\,\beta _k^a\) and \(\beta \) are, respectively, equal in distribution to \(1 + G_n(a),\,\xi _{n}^{a}+1\) and \(\xi _{n}^{x_0}+1\). This shows that the random variable \(r_n L^0(b) - r_n L^0(a)\) is equal in distribution to \(\xi _n^{x_0} + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\).
To describe the law of \((r_n L^0(b) - r_n L^0(a), r_n L^0(c) - r_n L^0(b))\), one also needs to count the number of visits of \(X\) to \(c\): if \(X\) visits \(b\) before \(a\), it may visit \(c\) before the first visit to \(b\); it may also visit \(c\) each time it goes from \(a\) to \(b\); finally, it may also visit \(c\) between two successive visits to \(b\). These three different ways of visiting \(c\) are, respectively, taken into account by the terms \(\theta_n^{x_0}\), \(\theta_{n,k}^a\) and \(\theta_{n,k}^b\). \(\square\)
The previous result readily gives the law of \((r_n L^0(b) - r_n L^0(a), r_n L^0(c) - r_n L^0(b))\) under \(\mathbf{P }_n^*( \, \cdot \, | \, T(a) < T(0))\): This law can be written as
where \(\widetilde{N}_{n,b}^a = (\widetilde{\xi }_n^a)^+ + \sum _{k=1}^{G_n(a)} (\xi _{n,k}^a)^+\) and the random variables \(\xi _{n,k}^a,\,\theta _{n,k}^a,\,\theta _{n,k}^b,\,G_n(a)\) and \(N_{n,a}\) are as described in Lemma 7.4. Moreover, these random variables are also independent from the pair \((\widetilde{\xi }_n^a, \widetilde{\theta }_n^a)\) whose distribution is given by
where from now on \(\chi _n^a\) denotes a random variable equal in distribution to \(X(0)\) under \(\mathbf{P }_n^*(\, \cdot \, | \, T(a) < T(0))\). For convenience, we will sometimes consider that \(\chi _n^a\) lives on the same probability space and is independent from all the other random variables. This will for instance allow us to say that the random vector (17) conditional on \(\{\chi _n^a=x_0\}\) is equal in distribution to the random vector (14).
In order to exploit (17) and prove Propositions 7.2 and 7.3, we will use a method based on controlling the moments, following lines similar to those of [5, 6]. As Propositions 7.2 and 7.3 suggest, we need to distinguish the two cases \((b-a) \vee (c-b) \le t_0 / s_n\) and \(b-a \ge t_0 / s_n\) (remember that \(t_0\) is a fixed number such that \(w_n(t_0) \ge 2\) for each \(n \ge 1\), see the discussion after Lemma 7.1).
In the sequel, we need to derive numerous upper bounds. The letter \(C\) then denotes constants which may change from line to line (and even within one line) but never depend on \(n,\,a,\,b,\,c,\,x_0\) or \(\lambda \). They may however depend on other variables, such as typically \(a_0,\,A\) or \(t_0\).
Before starting, let us gather a few relations and properties that will be used repeatedly (and sometimes without comments) in the sequel. First, it stems from (1) that
Moreover: (i) \(\kappa_n \le 1\); (ii) \(W'_n(0) = \kappa_n s_n / r_n = \kappa_n s_n^{2-\alpha}\), \(W_n \ge 0\) is increasing, \(W'_n \ge 0\) is decreasing and \(-W''_n \ge 0\) is decreasing (as a consequence of Lemma 7.1 together with the identity \(W_n(a) = w_n(a s_n) / r_n\)); (iii) for every \(a > 0\), the sequences \((W_n(a), n \ge 1)\) and \((W'_n(a), n \ge 1)\) are bounded away from 0 and infinity (by Lemma 3.4).
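Explicitly (a direct computation from the facts just stated, recorded here for ease of reference), differentiating the identity \(W_n(a) = w_n(a s_n)/r_n\) and using \(r_n = s_n^{\alpha-1}\) together with Lemma 7.1 gives
\[
W'_n(a) = \frac{s_n\, w'_n(a s_n)}{r_n}, \qquad W'_n(0) = \frac{\kappa_n s_n}{r_n} = \kappa_n s_n^{2-\alpha}, \qquad W''_n(a) = \frac{s_n^2\, w''_n(a s_n)}{r_n},
\]
so that the monotonicity, concavity and convexity properties of \(W_n\) and its derivatives are inherited from those of \(w_n\).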
1.1 Case \((b-a) \vee (c-b) \le t_0 / s_n\)
The goal of this subsection is to prove Proposition 7.2 through a series of lemmas.
Lemma 7.5
For any \(A > a_0\), there exists a finite constant \(C\) such that for any \(n \ge 1\), any \(a_0 \le a \le A\) and any \(0 \le x_0 \le y \le a\),
where \(j_{n,a}(x_0, y) = W_n(a-x_0) W_n(y) - W_n(y-x_0) W_n(a)\).
Proof
The derivation of these three bounds is based on the following identity:
The term \(W_n(y)\) appearing in the second term of the right-hand side is upper bounded by the finite constant \(\sup _{n \ge 1} W_n(A)\) and so does not play a role as far as (19) is concerned. Assume that \(x_0 \ge a/4\): then \(y \ge a_0/4\) and so
On the other hand,
This gives the desired upper bound of the form \(C x_0 s_n^{2-\alpha }\), since \(W^{\prime }_n(0) \le s_n^{2-\alpha }\) and \(1 \le C x_0\) when \(x_0 \ge a/4\) (writing \(1 = x_0 / x_0 \le 4 x_0 / a_0\)). This proves (19) in the case \(x_0 \ge a/4\). Assume now that \(x_0 \le a/4 \le a/2 \le y\), so that \(y \ge a_0/2\) and \(y-x_0 \ge a_0/4\). Then
and
Since \(-W_n^{\prime \prime }\) is decreasing, so is the function \(\varphi (u) = \int _0^{a-y} (-W_n^{\prime \prime }) (u+v+a_0/4) \mathrm{d}v\). Differentiating, this immediately shows that the function \(z \mapsto z^{-1} \int _0^z \varphi \) is decreasing and since \(x_0 \ge a_0\), this gives
Further, exploiting that \(-W_n^{\prime \prime }\) is decreasing, we obtain
This proves (19) in the case \(x_0 \le a/4 \le a/2 \le y\). Assume now that \(x_0 \le a/4\) and that \(y \le a/2\): then
and
This concludes the proof of (19). \(\square\)
Lemma 7.6
For any \(A > a_0\), there exists a finite constant \(C\) such that for any \(n \ge 1\), any \(a_0 \le a < b \le A\) and any \(x_0 \le a\),
and for any \(n \ge 1\), any \(a_0 \le a < b < c \le A\) with \(b-a \le t_0 / s_n\) and any \(x_0 \le b\),
Proof
Let us first prove (20), so assume until further notice that \(x_0 \le a\). In the rest of the proof, let \(\tau_a = \inf\{ t \ge 0: X(t) \notin (0,a) \}\); then the inclusion
holds \(\mathbf{P }_n^{x_0}\)-almost surely and leads to
Thus (20) will be proved if we can show that \(\mathbf{P }_n^{x_0}\left( a \le X(\tau _a) < b \right) \le C x_0 (b-a) s_n\). Let \(h_n(z) = \alpha \kappa _n s_n^{\alpha +1} (1+zs_n)^{-\alpha -1}\) be the density of the measure \(\Pi _n\) with respect to Lebesgue measure; note that it is decreasing and that the sequence \((h_n(z))\) is bounded for any \(z > 0\). Corollary 2 in [3] gives
where \(\Delta X(t) = X(t) - X(t-)\) for any \(t \ge 0\) and
so it follows that
Hence (20) will be proved if we can show that \(\int _0^a u_{n,a}(x_0, y) h_n(a-y) \mathrm{d}y \le C x_0 s_n\). We have by definition of \(u_{n,a}\)
which readily implies
with \(j_{n,a}(x_0, y) = W_n(a-x_0) W_n(y) - W_n(y-x_0) W_n(a)\) as in Lemma 7.5. We will show that each term of the above right-hand side is upper bounded by a term of the form \(C x_0 s_n\). Let us focus on the first term, so we want to show that
Assume first that \(x_0 \le a/2\); then \(a-y \ge a-x_0 \ge a/2 \ge a_0/2\) for any \(y \le x_0\), which gives
Assume now \(x_0 \ge a/2\): Since \(h_n\) is the density of \(\Pi _n\), we have
Since \(1 \le C x_0\) (because \(x_0 \ge a_0/2\)), the desired upper bound of the form \(C x_0 s_n\) follows from Lemma 7.1. We now control the second term of the right-hand side in (23), i.e., we have to show that
In the case \(x_0 \ge a/4\), the first bound in (19) gives
Assume from now on that \(x_0 \le a/4\) and decompose the interval \([x_0, a]\) into the union \([x_0, a/2] \cup [a/2,a]\). For \([a/2,a]\), (19) gives
since \(\alpha - 1 < 1\). For \([x_0, a/2]\), (19) gives, using \(a-y \ge a_0/2\) when \(y \le a/2\),
This finally concludes the proof of (20), which we now use to derive (21). Assume from now on that \(x_0 \le b\); we have by definition
Since \(\mathbf{P}_n^z(T(b) < T(0)) = \int_0^z \varphi\) where \(\varphi(u) = W'_n(b-u) / W_n(b)\) is increasing, differentiating shows that \(z \in [0,b] \mapsto z^{-1} \mathbf{P}_n^z(T(b) < T(0))\) is also increasing (its derivative has the sign of \(z \varphi(z) - \int_0^z \varphi\), which is nonnegative since \(\varphi\) is increasing). Thus, for \(x_0 \le b\), we obtain
In combination with (20) and the fact that \(\mathbf{P }_n^{x_0}(X(\tau _a) \ge b) \le \mathbf{P }_n^{x_0}(T(b) < T(a) < T(0))\), this entails
Hence (21) will be proved if we show that \(x_0 \le C\, \mathbf{P}_n^{x_0}\left( X(\tau_a) \ge b \right)\). It follows from (22) that
and because \(\mathbb{P}(\Lambda \ge u) = (1+u)^{-\alpha}\) and \((1+u)(1+v) \ge 1+u+v\), we have \(\mathbb{P}(\Lambda \ge u+v) \ge \mathbb{P}(\Lambda \ge u)\, \mathbb{P}(\Lambda \ge v)\) for any \(u, v > 0\), so that
In view of (22), this last integral is equal to \(\mathbf{P }_n^{x_0}(X(\tau _a) \ge a) = \mathbf{P }_n^{x_0}(T(a) < T(0))\) so finally, using \((b-a) s_n \le t_0\) we get
which proves (21). \(\square \)
Lemma 7.7
There exists a finite constant \(C\) such that for all \(a_0 \le a < b\) and all \(n \ge 1\),
Proof
Starting from \(a\), the process \(X\) (under \(\mathbf{P}_n^a\)) makes \(1+G_n(a)\) visits to \(a\). Decomposing the path \((X(t), 0 \le t \le T(0))\) between successive visits to \(a\), one gets
By definition, the left-hand side is equal to \(1-p_n^a(b)\), so averaging over \(G_n(a)\) gives
which gives
Let \(p_n = p_n^{b-a}(b-a)\) and \(p_{n,\xi } = p_{n,\xi }^a(a,b)\): We have
Plugging in (1) and (24) gives after some computation
Since \(W^{\prime }_n \ge 0\) and \(W_n^{\prime \prime } \le 0\), this gives
and since \(W_n^{\prime }\) is convex, we get
Since \(W_n^{\prime }(0) \le s_n / r_n\) and
the proof is complete. \(\square \)
To control the higher moments of \(\xi _n^a\) and also the moments of the \(\theta \)’s, we introduce the following constants:
Lemma 7.8
For any integer \(i \ge 1\), the constant \(C_i\) is finite.
Proof
Using the concavity of \(w_n\), one gets \(w_n(\delta s_n) \le w_n(0) + \delta s_n w_n^{\prime }(0) \le 1 + \delta s_n\) since \(w_n(0) = 1\) and \(w_n^{\prime }(0) = \kappa _n \le 1\) by Lemma 7.1. Hence
where the last inequality holds for \(\delta s_n \le t_0\). In particular, for any \(i \ge 1\), we have
where \(\varepsilon = t_0 / (1 + t_0) < 1\) and \(G_p\) is a geometric random variable with parameter \(p\). It is well known that
where \(P_i\) is the polynomial \(P_i(p) = \sum_{k=0}^i T_{k,i}\, p^k\) with \(T_{k,i} \ge 1\) the Eulerian numbers. Since \(P_i\) satisfies \(P_i(0) = 1\), one easily sees that for any \(\varepsilon < 1\),
which completes the proof. \(\square\)
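As a concrete illustration (under the convention \(\mathbb{P}(G_p = k) = (1-p)\,p^k\) for \(k \ge 0\), which we assume here, and which is consistent with the stochastic dominations used below), the first two moments are
\[
\mathbb{E}(G_p) = \frac{p}{1-p}, \qquad \mathbb{E}(G_p^2) = \frac{p(1+p)}{(1-p)^2},
\]
so that \(\mathbb{E}\left( (G_p)^i \right)\) grows like \((1-p)^{-i}\) as \(p \uparrow 1\); it is precisely this growth that the restriction to parameters \(p \le \varepsilon < 1\) keeps under control.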
Lemma 7.9
For any \(A > a_0\) and \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\)
Proof
The results for \(\xi _n^a\) and \(\theta _n^a\) are direct consequences of (20) and (21) and the finiteness of \(C_i\): Indeed, using these two results, we have for instance for \(\xi _n^a\)
and similarly for \(\theta _n^a\). The result for \(\theta _n^b\) is also straightforward because
and so the result follows in the same way as for \(\xi_n^a\). \(\square\)
Recall the random variables \(\widetilde{\xi }_n^a\) and \(\widetilde{\theta }_n^a\) defined in (18).
Lemma 7.10
For any \(A > a_0\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\),
Proof
Combining the two definitions (15) and (18), we obtain
using (20) to obtain the inequality. We obtain similarly for \(\widetilde{\theta }_n^a\), using (21) instead of (20),
Thus, the result will be proved if we can show that \(\mathbb{P }(a \le \chi _n^a \le c) \le C (c-a) s_n\); remember that \(\chi _n^a\) is by definition equal in distribution to \(X(0)\) under \(\mathbf{P }_n^*(\, \cdot \, | \, T(a) < T(0))\). Because \(X\) is spectrally positive and \(a \le A\), it holds that
Since \(\mathbf{P }_n^* \left( T(A) < T(0) \right) = \mathbf{P }_n^0 \left( T(A) < T(0) \right) \) by Lemma 2.1, Lemma 3.6 implies that
which leads to \(\mathbf{P }_n^* \left( a < X(0) \le c \, | \, T(a) < T(0) \right) \le C r_n \mathbf{P }_n^* \left( a < X(0) \le c \right) \). We have by definition
which completes the proof. \(\square\)
To control the sum of i.i.d. random variables, we will repeatedly use the following simple combinatorial lemma. In the sequel, for \(I \in \mathbb{N}\) and \(\beta \in \mathbb{N}^I\) we write \(|\beta| = \sum_i \beta_i\) and \(\Vert\beta\Vert = \sum_i i \beta_i\) (for instance, \(\beta = (2,1,0,0)\), encoding two singletons and one pair, has \(|\beta| = 3\) and \(\Vert\beta\Vert = 4\)).
Lemma 7.11
Let \((Y_{k})\) be i.i.d. random variables with common distribution \(Y\). Then for any even integer \(I \ge 0\) and any \(K \ge 0\),
Proof
We have \(\mathbb{E}\left[ \left( Y_1 + \cdots + Y_K \right)^I \right] = \sum_{1 \le k_1, \ldots, k_I \le K} \mathbb{E}(Y_{k_1} \cdots Y_{k_I})\). Since the \(Y_k\)'s are i.i.d., we have \(\mathbb{E}\left( Y_{k_1} \cdots Y_{k_I} \right) = m_1^{\beta_1} \cdots m_I^{\beta_I}\) with \(m_i = \mathbb{E}(Y^i)\) and \(\beta_i\) the number of \(i\)-tuples in \((k_1, \ldots, k_I)\), i.e., \(\beta_1\) is the number of singletons, \(\beta_2\) the number of pairs, etc. Since \(I\) is even, this leads to
with \(A_{I,K}(\beta )\) the number of \(I\)-tuples \(k \in \{1, \ldots , K\}^I\) with exactly \(\beta _i\,i\)-tuples for each \(i = 1, \ldots , I\). There are \(K (K-1) \ldots (K - (|\beta |-1))\) different ways of choosing the \(|\beta |\) different values taken by \(k\), thus \(A_{I,K}(\beta ) = K (K-1) \ldots (K - (|\beta |-1)) \times B(I, |\beta |)\) with \(B(i, a)\) the number of ways of assigning \(i\) objects into \(a\) different boxes in such a way that no box is empty, so that \(A_{I,K}(\beta ) \le K^{|\beta |} I^{|\beta |} \le K^{|\beta |} I^I\) since \(|\beta | \le ||\beta || = I\). \(\square \)
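To make the combinatorics concrete, consider the simplest case \(I = 2\): the pairs \((k_1, k_2)\) split into \(K\) diagonal ones (\(\beta = (0,1)\), \(|\beta| = 1\)) and \(K(K-1)\) off-diagonal ones (\(\beta = (2,0)\), \(|\beta| = 2\)), whence the exact identity
\[
\mathbb{E}\left[ (Y_1 + \cdots + Y_K)^2 \right] = K\, \mathbb{E}(Y^2) + K(K-1) \left( \mathbb{E}(Y) \right)^2 \le K m_2 + K^2 m_1^2,
\]
which is, up to the combinatorial factor, of the form \(\sum_{\beta} K^{|\beta|}\, m_1^{\beta_1} \cdots m_I^{\beta_I}\) appearing in the proof above.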
In the sequel, we will use the inequality
which comes from the fact that \(G_n(a)\) is stochastically dominated by an exponential random variable with parameter \(1-p_n^a(a) = 1/(r_n W_n(a))\) (indeed, with \(p = p_n^a(a)\) we have \(\mathbb{P}(G_n(a) \ge x) \le p^x \le e^{-(1-p)x}\), since \(\log p \le -(1-p)\)). We now use the previous bounds on the moments to control the probability
We will see that this only yields a bound which is linear in \(b-a\), which justifies the need for the min later on.
Lemma 7.12
For any \(A > a_0\) and any even integer \(I \ge 2\), there exists a finite constant \(C\) such that for all \(n \ge 1\), all \(\lambda > 0\) and all \(a_0 \le a < b \le A\) with \(b-a \le t_0 / s_n\),
where \(Y\) is any random variable equal in distribution either to \(\widetilde{\xi }_n^a\) or to \(G_n(b-a)\).
Proof
Using first the triangle inequality and then Markov's inequality gives
Using the independence between \(G_n(a)\) and \((\xi _{n,k}^a, k \ge 1)\) together with Lemma 7.11 gives
Lemma 7.7 gives the bound
where \(((b-a) s_n)^{|\beta |} \le C (b-a) s_n\) follows from the fact that \((b-a) s_n \le t_0\) while \(|\beta | \ge 1\). Using (28) for the case \(Y = \widetilde{\xi }_n^a\) and the finiteness of \(C_I\) for the case \(Y = G_n(b-a)\), one can write \(\mathbb{E }(Y^I) \le C (b-a) s_n\), which gives
But
and \(r_n W_n(a) \ge 1\) so \((r_n W_n(a))^{|\beta | - \beta _1} \le (r_n W_n(a))^{I/2}\), which proves the result. \(\square \)
Lemma 7.13
For any \(i \ge 1\) and \(A > 0\), it holds that
and
Proof
The result on \(N_{n,a}\) comes from the inequality \(\mathbb{E}((N_{n,a})^i) \le \mathbb{E}((G_n(a))^i)\). For \(\widetilde{N}_{n,b}^{a}\), we use the fact that \(|\widetilde{\xi}_n^a|\) is stochastically dominated by \(1 + G_n(b-a)\) (since for any \(x_0 > 0\), \(|\xi_n^{x_0}|\) is), thus
with \((G_{n,k}(b-a), k \ge 1)\) i.i.d. with common distribution \(G_n(b-a)\), independent of \(G_n(a)\). Thus, Lemma 7.11 gives
Since \(G_n(a)\) is stochastically dominated by an exponential random variable with parameter \(1-p_n^a(a) = 1/(r_n W_n(a))\) and \(G_n(b-a)\) is integer valued, so that \((1+G_n(b-a))^k \le (1+G_n(b-a))^i\) for any \(1 \le k \le i\), we get, using that \(|\beta | \le i\) and that all quantities are greater than 1,
where \(E\) is a mean-1 exponential random variable. Using that for each \(1 \le k \le i\)
one gets
Together with the inequality
this concludes the proof. \(\square \)
We can now prove Proposition 7.2. Remember that we must find constants \(C\) and \(\gamma > 0\) such that
uniformly in \(n \ge 1,\,\lambda > 0\) and \(a_0 \le a < b < c \le A\) with \((b-a) \vee (c-b) \le t_0 / s_n\).
Proof of Proposition 7.2
Fix four even integers \(I_1, I_2, I_3, I_4\). By (17),
with \(\lambda_n = \lambda r_n\). Let \({\fancyscript{F}}\) be the \(\sigma\)-algebra generated by \(\chi_n^a\), \(G_n(a)\), \(\widetilde{\xi}_n^a\) and the \((\xi_{n,k}^a, k \ge 1)\). Then, the above probability is equal to
with \(\pi \) the random variable
where
The two terms \(\pi_a\) and \(\pi_b\) can be dealt with very similarly. Fix \(u = a\) or \(b\), and denote by \(N_u\) the random variable \(N_{n,a}\) if \(u = a\) or \(\widetilde{N}_{n,b}^{a}\) if \(u = b\). With this notation, the \((\theta_{n,k}^u, k \ge 1)\) are i.i.d. and independent of \(N_u\), so that Markov's inequality and Lemma 7.11 give
By (27),
since \(1 \le |\beta | \le I_1\) and \((c-a) s_n \le t_0\). Since \(N_u\) is integer valued, it holds that \(N_u^{|\beta |} \le N_u^{I_1}\) and finally this gives
Applying the Cauchy–Schwarz inequality yields
and finally, Lemma 7.12 with \(Y = \widetilde{\xi }_n^a\) gives, together with Lemma 7.13,
It remains to control the term \(\widetilde{\pi}\): on the event \(\{ \widetilde{\xi}_n^a \ge 0 \}\), \(\widetilde{\xi}_n^a\) is equal in distribution to \(G_n(b-a)\) and is independent of everything else; thus, we have
Since \(\mathbb{E }( | \widetilde{\theta }_n^a|^{I_3} \, | \, \chi _n^a )\) is independent of \(G_n(b-a) + \sum _{k=1}^{G_n(a)} \xi _{n,k}^a\), we get
where the second inequality follows using (28) and Lemma 7.12 with \(Y = G_n(b-a)\). Since \((c-a) s_n \le t_0\), we have \(((c-a)s_n)^2 \le C ((c-a)s_n)^{3/2}\) and finally, gathering the previous inequalities, one sees that we have derived the bound
Now choose \(I_2\) and \(I_4\) large enough such that both sequences \((s_n^{3/2} r_n^{-I_2/4})\) and \((s_n^{3/2} r_n^{-I_4/2})\) are bounded: This is possible since for any \(\beta \in \mathbb{R },\,s_n r_n^{-\beta } = s_n^{1-\beta (\alpha -1)}\). Moreover, choose \(I_2\) not only even but a multiple of 4. Then, once \(I_2\) and \(I_4\) are fixed, choosing \(I_1\) and \(I_3\) in such a way that \(I_1 + I_2 / 2 = I_3 + I_4\) concludes the proof. \(\square \)
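To spell out this choice of exponents (a routine computation under the normalizations \(n = s_n^{\alpha}\) and \(r_n = s_n^{\alpha-1}\), so that \(s_n \to \infty\)):
\[
s_n^{3/2}\, r_n^{-I_2/4} = s_n^{3/2 - (\alpha-1) I_2/4}, \qquad s_n^{3/2}\, r_n^{-I_4/2} = s_n^{3/2 - (\alpha-1) I_4/2},
\]
so that both sequences are bounded as soon as \(I_2 \ge 6/(\alpha-1)\) and \(I_4 \ge 3/(\alpha-1)\); any sufficiently large multiple of 4 for \(I_2\) and even integer for \(I_4\) will do.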
1.2 Case \(b-a \ge t_0 / s_n\)
We now consider the simpler case \(b-a \ge t_0 / s_n\) and prove Proposition 7.3.
Lemma 7.14
For any \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(0 < a < b\) such that \(b-a \ge t_0 / s_n\),
Proof
In view of (18), it is enough to show that \(\mathbb{E }\left( |\xi _n^{x_0}|^i \right) \le C (r_n W_n(b-a))^i\) for every \(x_0 > 0\). Since \(b-a \ge t_0 / s_n\), exploiting the monotonicity of \(w_n\) gives
since \(t_0\) has been chosen such that \(w_n(t_0) \ge 2\). Since \(G_n(b-a)\) is a geometric random variable with parameter \(p_n^{b-a}(b-a)\), we have
using \(p_n^{b-a}(b-a) \ge 1/2\). Thus for any \(x_0 > 0\),
This last quantity is equal to \(\mathbb{E}\left( (G_n(b-a))^i \right)\), and so the inequality \(\mathbb{E}\left( (G_n(b-a))^i \right) \le i!\, (r_n W_n(b-a))^i\) (which follows from the stochastic domination of \(G_n(b-a)\) by an exponential random variable with parameter \(1-p_n^{b-a}(b-a) = 1/(r_n W_n(b-a))\), whose \(i\)th moment is \(i!\,(r_n W_n(b-a))^i\)) completes the proof. \(\square\)
Lemma 7.15
For any \(i \ge 1\), there exists a finite constant \(C\) such that for all \(n \ge 1\) and all \(0 < a < b\) with \(b-a \ge t_0 / s_n\),
Moreover, for any \(n \ge 1\) and \(0 < a < b\),
Proof
By definition (15) of \(\xi _n\), we have \(\mathbb{E }\left( \left| \xi _n^a \right| ^i \right) = 1-p_{n,\xi }^a(a,b) + p_{n,\xi }^a(a,b) \mathbb{E }\) \( \left( (G_n(b-a))^i \right) \) and so plugging in (24) gives
using \(W_n(b) - W_n(b-a) \le W_n(a) - W_n(0)\) and \(1 \le i! (r_n W_n(b-a))^{i-1}\). The second inequality is a direct consequence of (25) which can be expanded to
The result is proved. \(\square \)
Proof of Proposition 7.3
By (17), we have
We have
where the first inequality comes from the triangle inequality, Markov's inequality and Lemma 7.11, and the second inequality is a consequence of Lemma 7.14 and the fact that \(G_n(a)\) is stochastically dominated by an exponential random variable with parameter \(1-p_n^a(a)\). Using Lemma 7.15 and the identity \(\sum_{i=2}^I (i-1) \beta_i = I - |\beta|\) gives
Thus
Since \(W_n(t) = w_n(t s_n) / s_n^{\alpha -1}\), it holds that
which has been shown to be finite in the proof of Lemma 7.1. Hence, the last upper bound yields
By (29), \(I-|\beta | + \beta _1 \ge I / 2\) and since we consider \(b-a \le A\), this gives
and we finally get the desired bound for \(I\) large enough, i.e., such that \(I(\alpha-1) \ge 3\). Inspecting the proof of Proposition 7.2, one can check that the two constants \(\gamma\) can be chosen to be equal. \(\square\)
Keywords
- Local times
- Lévy processes
- Infinite variance
- Weak convergence
- Crump–Mode–Jagers branching processes
- Processor-Sharing queue