The Tail of the Length of an Excursion in a Trap of Random Size

Consider a random walk with a drift to the right on $\{0,\ldots,k\}$, where $k$ is random and geometrically distributed. We show that the tail $\mathbb{P}[T>t]$ of the length $T$ of an excursion from 0 decreases, up to constants, like $t^{-\varrho}$ for some $\varrho>0$, but is not regularly varying. We compute the oscillations of $t^{\varrho}\,\mathbb{P}[T>t]$ as $t\rightarrow\infty$ explicitly.


Introduction and Main Result

Introduction
In this paper, we study a simple object: the tail of the time a biased random walk spends in a trap of random size. Our result is very explicit and may serve as a building block in the study of trapping models. Trapping phenomena for biased walks have been investigated intensively over the last decade; we refer to [4] for a survey. As a model for transport in an inhomogeneous medium, one can study biased random walk on a supercritical percolation cluster on Z^d for d ≥ 2. It turns out that for small values of the bias, the walk moves at a positive linear speed, whereas for large values of the bias, the speed vanishes. The critical value of the bias separating the two regimes is the value where the expectation of the time spent in a trap changes from being finite to being infinite. This model goes back to [3] and was investigated in [7] and [22]. Finally, Alexander Fribergh and Alan Hammond proved a sharp transition for the positivity of the speed in [12]. Concerning limit laws for the distribution of the walker, a central limit theorem for small bias was proved in [22]. The law of the walker in the subballistic case was addressed in [12]: the authors find the polynomial order of the distance of the walker from the origin. It is conjectured that whether there is a limit law for the distance of the walker from the origin depends on the spatial direction of the bias.
Replacing the integer lattice with a tree yields a biased random walk on a supercritical Galton-Watson tree. In this case, the phase transition for the bias is easier to understand and was established in [20]. It turns out that the distance of the walker from the origin does not satisfy a limit law, but there are subsequences converging to certain infinitely divisible laws, see [5]. The crucial object is the time T spent in traps (averaged over the size of the trap): since the tail of this random variable is not in the domain of attraction of a stable law, there is no limit law for the time the walker needs to reach a given distance from the origin. We refer to the introduction of [5] for more explanation. If one randomizes the bias, the situation changes, see [6] and [16]. For one-dimensional random walk in random environment, limit laws for the distance of the walker from the origin have been proved in [17] under a non-lattice assumption. If the non-lattice assumption is violated, one would expect convergence along subsequences, as for the aforementioned biased random walk on a Galton-Watson tree. The result of this paper can be used to confirm this in the simple case of an environment which has either a drift to the left or a reflection to the right, treated in [21] and [13].
As a toy model for the supercritical percolation cluster, one may consider percolation on a ladder graph, conditioned to survive. This model was introduced in [2] and further investigated in [14, 15, 19]. Again, our result may be applied to show that there is no limit law for the distance of the walker from the origin, as conjectured in [19].
There is a well-known connection between hitting times of a random walk (or random walk in random environment) and the total population size in a branching process (or branching process in random environment) with geometric offspring laws. For subcritical branching processes in random environment (BPRE), precise asymptotics for the tail of the total population size under a non-lattice assumption were given in [1]. See also [10] for an upper bound on the same tail without the non-lattice assumption. Again, our result can serve as an example that the precise asymptotics fail in the lattice case, at least in a particular case of a degenerate environment. More precisely, consider a subcritical BPRE where in each generation the law of the offspring is either geometric with expectation > 1 or the Dirac measure at 0. Denote by T the total population size in this BPRE. Then, while the probability P[T > t] satisfies, for positive constants c_1 and c_2 and a certain exponent ϱ,

c_1 t^{-ϱ} ≤ P[T > t] ≤ c_2 t^{-ϱ}, t ≥ 1, (1.1)

it is not regularly varying. More precisely, we show that t^ϱ P[T > t] is asymptotically equivalent to a nonconstant, multiplicatively periodic function, see (1.11). In our setup, with T denoting the time spent in a trap of random size, (1.1) was proved in [19], and it was conjectured there that the tail is not regularly varying. This is confirmed by our result. Similar tail asymptotics for various quantities are known in the context of branching processes, see for instance [23], [8], [9].

Main result
Let us now give precise definitions and state our main result, Theorem 1.1. Let β > 1 be a fixed parameter. Let k ∈ N_0 and consider a discrete-time random walk X on {0, …, k} with edge weight C(l, l+1) = β^l along the edge {l, l+1}, started in X_0 = 0. That is, if X_n = l ∈ {1, …, k−1}, then it jumps to l+1 with probability β/(1+β) and to l−1 with probability 1/(1+β). There is reflection at the boundaries: if X_n = 0, then it jumps to 1; if X_n = k, then it jumps to k−1. Of course, for k = 0, the random walk is trivial. Let P_k denote the probabilities with respect to fixed k and let P denote the probabilities with respect to a random geometrically distributed k with parameter 1−α, that is, P[k = n] = (1−α)α^n for n ∈ N_0. Also let E_k and E be the corresponding expectations, respectively. Here α ∈ (0, 1) is a fixed parameter. Let

T := inf{t > 0 : X_t = 0},

and T = 0 if k = 0, be the length of an excursion from 0. Let ϱ := −log(α)/log(β). Our random walk X is a special case of a random walk in an irreducible electrical network (see, e.g., [18, Chapter 19]) on a finite graph (V, E) with edge weights C(e), e ∈ E. Denote by C(x) the sum of C(e) over all edges e incident to the vertex x ∈ V, and let C := Σ_x C(x). The transition probabilities are given by p(x, y) = C({x, y})/C(x), x, y ∈ V. It is easy to check that π(x) := C(x)/C defines the unique invariant measure. By [18, Theorem 17.52], the expected time to return to x (when started in x) equals 1/π(x) = C/C(x).
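The return-time identity 1/π(0) = C/C(0) can be checked numerically. The following sketch (ours, not from the paper; the function name is an illustration) computes E_k[T] for the walk on {0, …, k} by first-step analysis and compares it with C/C(0) = 2(β^k − 1)/(β − 1).

```python
def expected_return_time(beta: float, k: int) -> float:
    """E_k[T] for the walk on {0,...,k}, via the hitting-time equations:
    h_l = expected time to hit 0 from l, with h_0 = 0, h_k = 1 + h_{k-1}
    (reflection) and h_l = 1 + p*h_{l+1} + q*h_{l-1} in the interior."""
    if k == 0:
        return 0.0
    if k == 1:
        return 2.0  # 0 -> 1 -> 0, deterministically
    p = beta / (1.0 + beta)   # probability to jump right
    q = 1.0 / (1.0 + beta)    # probability to jump left
    # forward sweep: write h_l = a[l] + b[l] * h_{l+1} for l = 1..k-1
    a = {1: 1.0}
    b = {1: p}
    for l in range(2, k):
        denom = 1.0 - q * b[l - 1]
        a[l] = (1.0 + q * a[l - 1]) / denom
        b[l] = p / denom
    # reflection at k gives h_k = (1 + a[k-1]) / (1 - b[k-1])
    h = (1.0 + a[k - 1]) / (1.0 - b[k - 1])
    for l in range(k - 1, 0, -1):
        h = a[l] + b[l] * h   # back-substitute down to h_1
    return 1.0 + h            # the first step 0 -> 1 is forced

# agrees with C/C(0) = 2*(beta**k - 1)/(beta - 1)
beta = 2.0
for k in range(1, 8):
    closed_form = 2.0 * (beta**k - 1.0) / (beta - 1.0)
    assert abs(expected_return_time(beta, k) - closed_form) < 1e-9
```

The sweep is just Gaussian elimination specialized to the tridiagonal hitting-time system, so the agreement is a consistency check of the electrical-network formula rather than a new computation.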
We use this fact to compute, for fixed k ≥ 1, the expectation of T:

E_k[T] = C/C(0) = 2 Σ_{l=0}^{k−1} β^l = 2(β^k − 1)/(β − 1).

Hence

E[T] = (1 − α) Σ_{k=0}^∞ α^k · 2(β^k − 1)/(β − 1). (1.9)

Here, Γ is Euler's Gamma function and arg(a + bi) ∈ (−π/2, π/2) denotes the angle of a + bi for a > 0 and b ∈ R. Note that the c_ℓ decrease quickly with ℓ, and hence the constant and the ℓ = 1 mode are dominant.
Note that g is a nonconstant multiplicatively periodic function, that is,

g(βt) = g(t) for all t > 0. (1.10)

In particular, g is not slowly varying.
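To make the oscillation tangible, here is a small numerical sketch (ours, not part of the paper's argument). The function f(t) = Σ_k (1−α) α^k e^{−t β^{−k}}, which appears in Section 2 as the tail of an exponential variable with random rate, has the property that t^ϱ f(t) approaches a nonconstant function with multiplicative period β. For α = 1/16 and β = 16 (so ϱ = 1), the script checks that t f(t) is numerically invariant under t ↦ βt but far from constant within one multiplicative period.

```python
import math

def f(t, alpha=1 / 16, beta=16.0, kmax=80):
    # tail of an exponential variable whose rate is beta**-k, k geometric
    return sum((1 - alpha) * alpha**k * math.exp(-t * beta**(-k))
               for k in range(kmax))

rho = 1.0  # rho = -log(alpha)/log(beta) = 1 for alpha = 1/16, beta = 16

def h(t):
    return t**rho * f(t)

t0 = 16.0**5
# multiplicative periodicity in the limit: h(beta*t) is numerically h(t) ...
assert abs(h(t0) - h(16.0 * t0)) < 1e-9
# ... but h is not constant: it visibly oscillates over one period
values = [h(t0 * 16.0**(j / 8)) for j in range(8)]
assert max(values) - min(values) > 0.05
```

The oscillation amplitude shrinks rapidly as β ↓ 1 (the coefficients c_ℓ involve the Gamma function at ϱ + 2πiℓ/log β, which is tiny when the imaginary part is large), which is why a fairly large β is used in this illustration.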

Outline
The strategy of the proof is as follows: We first consider the event A where X reaches k before returning to 0. On the complement of this event, T is very small, and hence this case can be neglected for the tail of T (Lemma 2.3). On the event A, we split the time T into three parts: (1) the time T_in needed to reach k, (2) the time T_exc spent in excursions from k to k that do not reach 0, and (3) the length T_out of the last excursion from k to 0.
We will show that the contributions from (1) and (3) can be neglected (Lemma 2.4 and Lemma 2.5). Finally, we consider (2). The number of excursions is geometrically distributed, and the length of a single excursion has exponential moments. We infer that the tail of T is governed by the number of excursions multiplied by their expected length (Proposition 2.17). The number of excursions is geometrically distributed with a parameter that depends on k. We use a very detailed analysis to determine the tail averaged over k.
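The decomposition described above can be illustrated by a short simulation (our sketch; all names are ours, and the walk follows the definition from Section 1). It also checks the formula P_k[A] = (β−1)/(β − β^{1−k}) recalled later in this section.

```python
import random

def excursion(beta, k, rng):
    """One excursion of the walk from 0; returns (T, T_in, T_exc, T_out),
    with the last three set to None on the event A^c (k never reached)."""
    p = beta / (1.0 + beta)
    x, t = 0, 0
    t_in = t_last = None
    while True:
        if x == 0:
            x = 1                         # reflection at 0
        elif x == k:
            x = k - 1                     # reflection at k
        else:
            x += 1 if rng.random() < p else -1
        t += 1
        if x == k:
            if t_in is None:
                t_in = t                  # first hit of k
            t_last = t                    # last visit of k so far
        if x == 0:
            break
    if t_in is None:
        return t, None, None, None
    return t, t_in, t_last - t_in, t - t_last

rng = random.Random(7)
hits, n = 0, 4000
for _ in range(n):
    T, T_in, T_exc, T_out = excursion(2.0, 4, rng)
    if T_in is not None:
        hits += 1
        assert T == T_in + T_exc + T_out  # the decomposition on A
        assert T_in >= 4 and T_out >= 4   # at least k steps each way
        assert T_exc % 2 == 0             # k-to-k excursions have even length

p_A = hits / n  # should be close to (beta-1)/(beta - beta**(1-k)) = 1/1.875
assert abs(p_A - 1.0 / 1.875) < 0.06
```

The equality T = T_in + T_exc + T_out holds by construction; the simulation mainly illustrates the bookkeeping and gives a Monte Carlo check of P_k[A].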

The time to get in and out
Let T_in := inf{t > 0 : X_t = k} be the time it takes to hit the right end of the interval. Let T_last := sup{t < T : X_t = k} be the last visit (if any) of the right end of the interval before returning to 0. Let T_out := T − T_last denote the time it takes for this last excursion from k to hit 0. Finally, let T_exc := T_last − T_in denote the time the random walk spends in excursions from k before the last excursion from k starts. The random times T_exc, T_last and T_out are well-defined on the event A := {T_in < T}, that is, the event that X reaches k before returning to 0. In fact, on A, we have T_last < ∞.
Proof. Considering {0, …, k} as an electrical network with resistances R(l, l+1) = β^{−l}, we get the effective resistance R_eff(0, 1) = 1 and the analogous expression for the rest of the interval. Now the claim follows; compare, e.g., [18, (19.9)]. ∎

On A^c, until time T, X is a random walk conditioned to return to 0 before hitting k. Now let U be such a random walk started in U_0 = 0. Let T_U := inf{t > 0 : U_t = 0}. Then

P[T > t | A^c] = P[T_U > t]. (2.7)

The transition probabilities of U can be computed via Doob's h-transform. Let h_k(l) = β^{−l} − β^{−k}, a function for X which is harmonic on {1, …, k−1} and satisfies h_k(k) = 0 and h_k(0) > 0. Then for l = 1, …, k−1, the transition probabilities of U are given in (2.8). We compare U to the random walk Y on Z with conductances β^{−l} along the edges {l, l+1}.
That is, Y makes a jump to the right with probability 1/(1+β) and to the left with probability β/(1+β). Also, let Ȳ be the random walk on Z with conductances β^l along the edges {l, l+1}; that is, −Ȳ has the same jump probabilities as Y. Let T_Y and T_Ȳ denote the respective first return times to 0. Clearly, if Y_0 = Ȳ_0 = 0, then T_Y and T_Ȳ have the same distribution. By (2.8), we see that T_U is stochastically bounded by T_Y; more precisely, (2.9) holds. By symmetry, the statements also hold for Ȳ instead of Y, conditioned on Ȳ_1 = −1.
Define ψ(λ) := E[e^{λ T_Y}]. Decomposing according to the position of Y at time 1 and using the strong Markov property at time τ (in the fourth line) yields a quadratic equation for ψ. This quadratic equation has two solutions, which at λ = 0 take the values 1 and β, respectively. The relevant one takes the value 1 and is given in (2.11). Taking the derivatives at λ = 0 gives (2.10). ∎

Lemma 2.3 There exists an ε > 0 such that

P[T > t | A^c] ≤ e^{−εt}, t ≥ 1. (2.12)

Proof. This is a direct consequence of (2.7), (2.9) and the existence of exponential moments (Lemma 2.2). ∎

Furthermore, for each k ∈ N, the analogous exponential bound holds for the time the last excursion needs to hit 0.

Proof. Let V be a random walk on {0, …, k} with the same transition probabilities as U (see (2.8)) but started at k. Define T_V := inf{t > 0 : V_t = 0}.
Note that V can be coupled with Ȳ (started in Ȳ_0 = k) such that V_t ≤ Ȳ_t for t ≤ T_V. Arguing as in the proof of Lemma 2.4, we get an ε > 0 such that the corresponding exponential bound holds. Let T_Ȳ^0 := inf{t > 0 : Ȳ_t = 0}. Note that Ȳ has a drift (β−1)/(β+1) to the left. Hence, the average time it takes to visit the point to the left of the starting point is (β+1)/(β−1). Now T_Ȳ^0 is the time it takes to visit the kth point to the left of the starting point. Hence the claim follows, again by stochastic domination.

The time spent in excursions
Recall that T = T_in + T_exc + T_out. We have dealt with T_in and T_out. Now we turn to the time T_exc the random walk X spends in excursions from k before it hits 0. These excursions are pieces of the random walk conditioned not to hit 0. Let N denote the number of these excursions if A occurs, and N = 0 on A^c. Note that N is geometrically distributed with respect to the conditional probability P_k[· | A].

Our strategy is
• to compute the parameter of N (depending on k),
• to estimate the expectation and exponential moments of the lengths of the excursions, and
• to show that, for the tail of T, it is good enough to replace the lengths of the excursions by their expected value.
Hence, the tail of N rules the game, see Proposition 2.17.
Finally, we will compute the tail of N with an involved analysis using Mellin transforms.

Let

B := {T_k < T_0} = {X returns to k before hitting 0},

where, for the walk started at k, T_k denotes the first return time to k and T_0 the first hitting time of 0.
Lemma 2.6 We have P_k[B^c] = (β − 1)/(β^k − 1).

Proof. This is similar to the proof of Lemma 2.1. ∎

Let X̃ be the random walk on {0, …, k} started at X̃_0 = k and conditioned to return to k before hitting 0. This means the transition probabilities of X̃ are given by Doob's h-transform with the harmonic function h_0(l) = 1 − β^{−l}; the explicit formulas are given in (2.17). Let N, T^(1), T^(2), … be independent random variables with respect to P_k such that N is geometrically distributed with parameter P_k[B^c] = (β − 1)/(β^k − 1) and

P_k[T^(i) = l] = P_k[T_k = l | B], l ∈ N_0, i = 1, 2, ….
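The parameter P_k[B^c] = (β−1)/(β^k−1) is a gambler's-ruin probability: from k, the walk is reflected to k−1 and must then hit 0 before k. A quick exact check (our sketch, using rational arithmetic for integer β):

```python
from fractions import Fraction

def ruin_from_k_minus_1(beta: int, k: int) -> Fraction:
    """P[hit 0 before k | start at k-1] for the walk that jumps right with
    probability beta/(1+beta); solved by a forward sweep of the harmonic
    equations h(l) = p*h(l+1) + q*h(l-1), h(0) = 1, h(k) = 0."""
    p = Fraction(beta, 1 + beta)
    q = Fraction(1, 1 + beta)
    a, b = q, p               # h(1) = a + b*h(2)
    for l in range(2, k):
        denom = 1 - q * b
        a, b = q * a / denom, p / denom
    return a                  # = h(k-1), since h(k) = 0

for beta in (2, 3, 5):
    for k in range(2, 8):
        assert ruin_from_k_minus_1(beta, k) == Fraction(beta - 1, beta**k - 1)
```

Using `Fraction` makes the comparison exact, so the identity is verified without floating-point tolerance.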

Also let

T̃ := T^(1) + … + T^(N).

Lemma 2.7 Under P_k[· | A], T_exc has the same distribution as T̃.

Proof. This is a simple application of the strong Markov property. ∎

Lemma 2.8 The expectation E_k[T^(1)] can be computed explicitly; in particular, E_k[T^(1)] → 2β/(β−1) as k → ∞.

Proof. This is a direct computation. ∎

While T̃ is the quantity we have to study, it is more convenient to get rid of the randomness inherent in the lengths of the excursions and to replace them by their expected value. Hence, as a substitute for T̃, we introduce

T̄ := N E_k[T^(1)].

In order to show that T̃ and T̄ are in fact close, we estimate the exponential moments of T^(1) and use Markov's inequality. As a direct computation of the exponential moments is a bit tricky, we make a little detour and use a comparison argument for branching processes. We prepare for this comparison argument with some considerations on the convex ordering of geometric distributions. Note that for the case ϱ < 2, a simpler estimate based on variances would be good enough for our purposes. In fact, the variances exist for any fixed k and give estimates of order t^{−2}, which is good enough compared with the leading-order term t^{−ϱ} if ϱ < 2.
Lemma 2.9 We can define a family (W_r)_{r∈(0,1]} of geometrically distributed random variables with parameters r such that W_r and W_q − W_r are independent whenever 0 < q ≤ r ≤ 1.

In particular, we have W_q ≥ W_r almost surely for 0 < q ≤ r ≤ 1.

Proof. Let (U_n)_{n∈N_0} be i.i.d. random variables uniformly distributed on [0, 1]. Let

W_r := inf{n : U_n ≤ r}.

It is easy to check that the (W_r) have the desired properties. ∎

Lemma 2.10 Let 0 < q ≤ r ≤ 1 and let W_q and W_r be geometrically distributed with parameters q and r, respectively. Let φ : R → [0, ∞) be a convex function. Then

E[φ(W_r − E[W_r])] ≤ E[φ(W_q − E[W_q])]. (2.24)

Proof. By Lemma 2.9, we may and will assume that W_r and W_q − W_r are independent. Hence

W_r − E[W_r] = E[W_r − E[W_r] | W_r] = E[W_q − E[W_q] | W_r].

By Jensen's inequality, we get

E[φ(W_r − E[W_r])] = E[φ(E[W_q − E[W_q] | W_r])] ≤ E[E[φ(W_q − E[W_q]) | W_r]] = E[φ(W_q − E[W_q])]. ∎

Corollary 2.11 For λ ∈ R, κ ≥ 1 and 0 < q ≤ r ≤ 1, with φ(x) := e^{λx} κ^x, we have

E[φ(W_r − E[W_r])] κ^{E[W_r]} ≤ E[φ(W_q − E[W_q])] κ^{E[W_q]}. (2.25)

Proof. Since E[W_q] ≥ E[W_r] and κ ≥ 1, this follows from Lemma 2.10 applied to the convex function φ. ∎

Lemma 2.12 Let Z^(1) and Z^(2) be Galton-Watson processes and let Ž^(i), i = 1, 2, be the corresponding total population sizes. The offspring law of Z^(i) in generation n is assumed to be geometric with parameter p_{i,n}, i = 1, 2, n ∈ N_0. We also assume that p_{1,n} ≤ p_{2,n} for all n ∈ N_0 and E[(Ž^(1))^2] < ∞.
Then we have the corresponding comparison of moments.

Proof. First assume that

p_{1,n} = 1 for n ≥ n_0 for some n_0. (2.30)

Hence Ž^(i) = Z^(i)_0 + … + Z^(i)_{n_0}, i = 1, 2. For n_0 = 1, the statement follows from the expectation and variance formulas for the geometric distribution. The induction step from n_0 − 1 to n_0 is a simple application of Wald's identity and the Blackwell-Girshick formula. In order to get rid of assumption (2.30), take monotone limits.
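The Blackwell-Girshick formula invoked here states that for i.i.d. summands X_1, X_2, … independent of an integer random index N, Var(X_1 + … + X_N) = E[N] Var(X_1) + Var(N) E[X_1]^2. A tiny exact check by enumeration (our sketch, with toy distributions chosen only for illustration):

```python
from fractions import Fraction
from itertools import product

# toy laws: N uniform on {0,1,2}; X_i i.i.d. uniform on {1,2}, independent of N
pN = {0: Fraction(1, 3), 1: Fraction(1, 3), 2: Fraction(1, 3)}
pX = {1: Fraction(1, 2), 2: Fraction(1, 2)}

def mean_var(dist):
    m = sum(v * p for v, p in dist.items())
    return m, sum((v - m) ** 2 * p for v, p in dist.items())

# exact law of S = X_1 + ... + X_N by enumeration
pS = {}
for n, pn in pN.items():
    for xs in product(pX, repeat=n):
        w = pn
        for x in xs:
            w *= pX[x]
        s = sum(xs)
        pS[s] = pS.get(s, Fraction(0)) + w

EN, VN = mean_var(pN)
EX, VX = mean_var(pX)
ES, VS = mean_var(pS)
assert ES == EN * EX                  # Wald's identity
assert VS == EN * VX + VN * EX ** 2   # Blackwell-Girshick formula
```

Both identities hold exactly here because all probabilities are rational; in the proof above they are applied generation by generation to geometric offspring numbers.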
For the exponential inequalities we proceed similarly. Consider first the case (2.30) with n_0 = 1. In this case the assertion is a direct consequence of Corollary 2.11. For the induction step, we assume that the statement is true for n_0 − 1 and we show it for n_0, comparing the quantities κ^(i) := E[exp(λ Z^(i)_{n_0})]. ∎

This comparison yields the moment estimates for T^(1) (Lemma 2.13).

Proof. Let Y be the random walk on Z that jumps to the right with probability β/(1+β) and to the left with probability 1/(1+β), starting in k−1. Recall T_k from (2.18). By the basic connection between the occupation times of excursions of random walks and Galton-Watson processes with geometric offspring distributions, we see that (T^Y_k + 1)/2 has the same distribution as Ž^(1) from Lemma 2.12 with p_{1,n} = β/(β+1). Similarly, using (2.17), we see that T^X̃_k/2 has the same distribution as Ž^(2) with the corresponding parameters. By Lemma 2.12 and Lemma 2.2, we infer the analogous moment bounds. On the other hand, by Lemma 2.5, and, using the Markov property and arguing as in Lemma 2.5, we obtain the remaining estimates. Summing up and using Lemma 2.6 yields the first claim. Now we turn to the proof of (2.34). Again by Lemma 2.12 and Lemma 2.2, we get exponential moments for λ small enough (2.37). The first and second derivatives at zero can be computed explicitly. Hence, by Taylor's theorem, there exists a δ > 0 such that a quadratic bound on the cumulant generating function holds.

Lemma 2.14 There is a constant c > 0 such that for all t > 0, we have the bound (2.39).

Proof. By Markov's inequality and Lemma 2.13, there are δ > 0 and C < ∞ such that for λ ∈ [0, δ], the exponential bound (2.40) holds. We need to make a good choice of λ to make this inequality effective. Recall that N is geometric with parameter r_k := (β−1)/(β^k − 1) under the conditional probability P_k[· | A]. Define λ_k accordingly. Then the corresponding identities hold, with an analogous bound for l > k. Note that (β−1)/(β^k − β) < 1/2 for all k ≥ 2. Hence, using the fact that log(1+x) ≥ x/2 for x ∈ [0, 1/2], we obtain the required estimate. Note that λ_k ↓ 0, and let k_0 ∈ N be large enough such that λ_k < δ for all k ≥ k_0.
Then (using Lemma 2.18 with √β instead of β, and hence 2ϱ instead of ϱ, in the last step) there is a constant C < ∞ such that the bound (2.45) holds. Since λ_{k_0} > 0 is a constant, the claim follows. ∎

It is still a bit inconvenient to work with T̄, as the expectation of T^(1) depends on k, though only slightly. The next step is to replace E_k[T^(1)] in the definition of T̄ by its limit lim_{k→∞} E_k[T^(1)] = 2β/(β−1).
Lemma 2.15 There is a constant c > 0 such that for all t > 0, we have the bound (2.46).

Proof. By Lemma 2.13, and by the fact that T̄ = 2N if k = 1, we can bound |T̄ − N · 2β/(β−1)|. Hence, for any k_0 ∈ N, we obtain the corresponding estimate; now choose k_0 = √t to get the result. ∎

In order to see that the error terms are smaller than the main term, that is, the tail of N, we need a lower bound for the tail of N. Since we give a more detailed analysis later, here we only make a very rough assertion.
Lemma 2.16 There exists a constant c > 0 such that P[N > t] ≥ c t^{−ϱ} for all t ≥ 1.
Proof. For t ∈ [1, β^2], the statement holds with c = P[N > β^2]. Now assume t ≥ β^2 and let c = ((β−1)/β)(1−α)e^{−2β^2}. Let k ∈ N, k ≥ 2, be such that β^k ≤ t ≤ β^{k+1}. Then the claim follows (recall Lemma 2.1 and note that 1 − x ≥ e^{−2x} for x ∈ [0, 1/2]). ∎

We summarize the above discussion in the following proposition.
Proposition 2.17 The tail of T is governed by that of (2β/(β−1))N.

Proof. By Lemma 2.15, the tails of (2β/(β−1))N and T̄ coincide on our scale, which is given by Lemma 2.16. By Lemma 2.14, the tails of T̃ and T̄ coincide. Finally, by Lemmas 2.4, 2.5 and 2.7, the tails of T and T̃ coincide. ∎

The tail of a geometric random variable with random parameter
In order to compute the tail of N, it is convenient to replace the geometrically distributed random variable with parameter (β−1)/(β^k−1) by an exponentially distributed random variable N′ with parameter β^{−k}. Note that we neglected the factor β−1; we will re-introduce it by a scaling of t. The tail of N′ is given by

P[N′ > t] = f(t) := Σ_{k=0}^∞ (1−α) α^k e^{−t β^{−k}}.

Lemma 2.18 There are constants 0 < c_1 ≤ c_2 < ∞ such that c_1 t^{−ϱ} ≤ f(t) ≤ c_2 t^{−ϱ} for all t ≥ 1.

Proof. Let k ∈ N_0 be chosen such that β^{k−1} ≤ t < β^k. Recall that ϱ = −log(α)/log(β). Then

f(t) ≥ f(β^k) ≥ (1−α) α^k e^{−1} = (1−α) e^{−1} (β^k)^{−ϱ} ≥ (1−α) e^{−1} β^{−ϱ} t^{−ϱ}. (2.48)

Note that f is decreasing, and hence for l ∈ Z and β^{l+1} > t ≥ β^l, the corresponding upper bound holds. ∎

Lemma 2.19 We have a comparison of the tails of N and N′ on the scale t^{−ϱ}.

Proof. By Lemma 2.16 and Lemma 2.18, all error terms of order o(t^{−ϱ}) can be neglected. We use this first to show that the summands of f(t) with β^k ≤ t^{2/3} are negligible. Recall from Lemma 2.1 that P_k[A] = (β−1)/(β − β^{1−k}). Let ε > 0 and choose t large enough such that P_k[A] ≤ (1+ε)(β−1)/β for all k such that β^k > t^{2/3}; this yields the upper bound. Now we come to the complementary estimate for the lim inf. Note that log(1−x) ≥ −x − x^2 for x ∈ [0, 1/2]. For the summands of P[N > t] with β^k > t^{2/3}, and for t ≥ β^3, we have k ≥ 2 (thus (β−1)/(β^k−1) ≤ 1/2). We infer the claimed lower bound for C = C(β) large enough and all t ≥ 2. ∎

Remark 2.20 The comparison of the tails of N′ and T in Lemma 2.19 and Proposition 2.17 allows us to recover a result of Solomon [21], which we briefly sketch here. Solomon considered a model of random walk in a random environment on Z with a drift to the left except for geometrically placed reflection points. His asymptotics is the same as ours except for an obvious factor, due to the fact that (i) Solomon's "traps" have size at least one while ours start at zero, and (ii) our random walk has a positive chance to exit the trap without reaching the bottom. In fact, assume we have two probability measures μ_1 and μ_2 on [0, ∞), and denote by L_1 and L_2 the Laplace transforms of μ_1 and μ_2, respectively; then their tails can be compared as in [11]. Now let φ be the Laplace transform of T, that is, φ(λ) = E[e^{−λT}], λ ≥ 0. Using Lemma 2.19 and Proposition 2.17, if αβ > 1, it is easy to show the asymptotic equivalence of φ to the Laplace transform of the approximating variable. The usual Tauberian theorems that would help to infer the tail behaviour of T from the behaviour of its Laplace transform near zero assume regular variation of the tails (and of the Laplace transforms), which is not the case here. Solomon's proof uses asymptotic equivalence of the Laplace transform ψ to the Laplace transform φ he is interested in, just as we did above. However, this is possible only in the case αβ > 1, with which Solomon is mainly concerned. Our approach of comparing the tails of the approximating random variable N′, instead of its Laplace transform, allows us to deal also with the case αβ ≤ 1. ✸

Now we come to determining the asymptotic behavior of f(t) as t → ∞. The following proposition completes the proof of Theorem 1.1.

Proposition 2.21 For all γ > ϱ, as t → ∞, we have the expansion (2.61).

Proof. The proof of (2.61) uses Mellin transforms and follows the strategy outlined in [11, Example 12]. We define the Mellin transform of f by

f*(z) := ∫_0^∞ f(t) t^{z−1} dt.

An explicit computation shows that the integral converges for z in the strip Re(z) ∈ (0, ϱ) and equals the expression in (2.67). That is, f* is holomorphic for Re(z) ∈ (0, ϱ) and can be uniquely extended to a meromorphic function on C with poles at the nonpositive integers and at

χ_ℓ := ϱ + 2πiℓ/log(β), ℓ ∈ Z,

see Figure 2.1. Fix some γ > ϱ. We can approximate the inversion integral by the finite integrals over [η − R_ℓ i, η + R_ℓ i], where R_ℓ = (2ℓ+1)π/log(β). We compute this integral using residue calculus for the path consisting of the four pieces [η − R_ℓ i, η + R_ℓ i], [η + R_ℓ i, γ + R_ℓ i], [γ + R_ℓ i, γ − R_ℓ i] and [γ − R_ℓ i, η − R_ℓ i]. Note that the horizontal paths do not hit the poles, and hence the denominator of f* is bounded away from 0 while the modulus of the Γ function decreases very quickly with ℓ; thus these integrals can be neglected. The integral along the second vertical piece can be estimated as in (2.69). As we integrate clockwise, f(t) is minus the sum of the residues at (χ_ℓ)_{ℓ∈Z}, plus the O(t^{−γ}) term. According to (2.66), these residues are

t^{−χ_ℓ} a_{ℓ,−1} = −t^{−χ_ℓ} Γ(χ_ℓ) (1−α)/log(β).

Concluding, we get (2.61). ∎

Note that while (2.61) is true for all values of γ, the constant in the O(t^{−γ}) term in (2.61) is of order Γ(γ), see (2.69), and thus increases quickly with γ.

Figure 2.1: Complex plane with the singularities of f* and the integration path.
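The replacement of the geometric variable by an exponential one at the start of this subsection can be sanity-checked numerically (our sketch; the precise comparison is the content of Lemma 2.19, this is only an order-of-magnitude illustration). For r_k = (β−1)/(β^k−1), the geometric tail (1−r_k)^t is close to exp(−(β−1)β^{−k} t), i.e., to the tail of N′ evaluated at (β−1)t.

```python
import math

beta, k = 2.0, 10
r = (beta - 1.0) / (beta**k - 1.0)      # parameter of the geometric N
rate = (beta - 1.0) * beta**(-k)        # rate of N' after scaling t -> (beta-1)*t

for t in (10.0, 100.0, 1000.0):
    geom_tail = (1.0 - r) ** t
    exp_tail = math.exp(-rate * t)
    # the ratio stays close to 1 for t up to the order of beta**k
    assert abs(geom_tail / exp_tail - 1.0) < 0.01
```

The two exponents differ by t·(−log(1−r) − rate) = O(t β^{−2k}), which is why the approximation is accurate uniformly on the relevant time scale.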