Heavy-Tailed Random Walks on Complexes of Half-Lines

We study a random walk on a complex of finitely many half-lines joined at a common origin; jumps are heavy-tailed and of two types, either one-sided (towards the origin) or two-sided (symmetric). Transmission between half-lines via the origin is governed by an irreducible Markov transition matrix, with associated stationary distribution $\mu_k$. If $\chi_k$ is $1$ for one-sided half-lines $k$ and $1/2$ for two-sided half-lines, and $\alpha_k$ is the tail exponent of the jumps on half-line $k$, we show that the recurrence classification for the case where all $\alpha_k \chi_k \in (0,1)$ is determined by the sign of $\sum_k \mu_k \cot(\chi_k \pi \alpha_k)$. In the case of two half-lines, the model fits naturally on $\mathbb{R}$ and is a version of the oscillating random walk of Kemperman. In that case, the cotangent criterion for recurrence becomes linear in $\alpha_1$ and $\alpha_2$; our general setting exhibits the essential nonlinearity in the cotangent criterion. For the general model, we also show existence and non-existence of polynomial moments of return times.
Our moments results are sharp (and new) for several cases of the oscillating random walk; they are apparently even new for the case of a homogeneous random walk on $\mathbb{R}$ with symmetric increments of tail exponent $\alpha \in (1,2)$.


Introduction
We study Markov processes on a complex of half-lines $\mathbb{R}_+ \times S$, where $S$ is finite, and all half-lines are connected at a common origin. On a given half-line, a particle performs a random walk with a heavy-tailed increment distribution, until it would exit the half-line, at which point it switches (in general, at random) to another half-line to complete its jump.
To motivate the development of the general model, we first discuss informally some examples; we give formal statements later.
The one-sided oscillating random walk takes place on two half-lines, which we may map onto $\mathbb{R}$. From the positive half-line, the increments are negative, with density proportional to $y^{-1-\alpha}$; from the negative half-line, the increments are positive, with density proportional to $y^{-1-\beta}$, where $\alpha, \beta \in (0,1)$. The walk is transient if and only if $\alpha + \beta < 1$; this is essentially a result of Kemperman [15].
The oscillating random walk has several variations and has been well studied over the years (see, e.g., [17–19, 21]). This previous work, as we describe in more detail below, is restricted to the case of two half-lines. We generalize this model to an arbitrary number of half-lines, labelled by a finite set $S$, by assigning a rule for travelling from half-line to half-line.
First, we describe a deterministic rule. Let $T$ be a positive integer. Define a routing schedule of length $T$ to be a sequence $\sigma = (i_1, \ldots, i_T)$ of $T$ elements of $S$, dictating the sequence in which half-lines are visited, as follows. The walk starts from line $i_1$ and, on departure from line $i_1$, jumps over the origin to $i_2$, and so on, until, departing $i_T$, it returns to $i_1$; on line $k \in S$, the walk jumps towards the origin with density proportional to $y^{-1-\alpha_k}$, where $\alpha_k \in (0,1)$. One simple example takes a cyclic schedule in which $\sigma$ is a permutation of the elements of $S$. In any case, a consequence of our results is that now the walk is transient if and only if
$$\sum_{k \in S} \mu_k \cot(\pi \alpha_k) > 0, \qquad (1.1)$$
where $\mu_k$ is the number of times $k$ appears in the sequence $\sigma$. In particular, in the cyclic case the transience criterion is $\sum_{k \in S} \cot(\pi \alpha_k) > 0$. It is easy to see that, if $S$ contains two elements, the cotangent criterion (1.1) is equivalent to the previous one for the one-sided oscillating walk ($\alpha_1 + \alpha_2 < 1$). For more than two half-lines, the criterion is nonlinear, and it was necessary to extend the model to more than two lines in order to see the essence of the behaviour.
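For two half-lines, this equivalence is a one-line trigonometric computation:
$$\cot(\pi\alpha_1) + \cot(\pi\alpha_2) = \frac{\sin(\pi(\alpha_1+\alpha_2))}{\sin(\pi\alpha_1)\,\sin(\pi\alpha_2)},$$
and since $\sin(\pi\alpha_1)\sin(\pi\alpha_2) > 0$ for $\alpha_1, \alpha_2 \in (0,1)$, the left-hand side is positive if and only if $\alpha_1 + \alpha_2 < 1$.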
More generally, we may choose a random routing rule between lines: on departure from half-line $i \in S$, the walk jumps to half-line $j \in S$ with probability $p(i,j)$. The deterministic cyclic routing schedule is a special case in which $p(i, i') = 1$ for $i'$ the successor of $i$ in the cycle. In fact, this set-up generalizes the arbitrary deterministic routing schedule described above, as follows. Given the schedule sequence $\sigma$ of length $T$, we may convert this to a cyclic schedule on an extended state space consisting of $\mu_k$ copies of line $k$ and then reading $\sigma$ as a permutation. So the deterministic routing model is a special case of the model with Markov routing, which will be the focus of the rest of the paper.
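As an illustration of this reduction (a sketch of ours, with a hypothetical helper name, not code from the paper), a routing schedule $\sigma$ can be expanded into a deterministic cyclic routing matrix with one copy of a half-line per schedule position:

```python
# Sketch (our illustration): expand a routing schedule sigma into a cyclic
# 0/1 routing matrix on an extended state space with one copy per position.
def schedule_to_cyclic_routing(sigma):
    """Return (lines, P): lines[t] is the half-line visited at position t,
    and P routes copy t deterministically to copy (t + 1) mod T."""
    T = len(sigma)
    P = [[1.0 if s == (t + 1) % T else 0.0 for s in range(T)] for t in range(T)]
    return list(sigma), P

# Example: the schedule (1, 2, 1) visits line 1 twice per cycle and line 2 once,
# so line 1 carries twice the weight of line 2 in the criterion (1.1).
lines, P = schedule_to_cyclic_routing([1, 2, 1])
```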
Our result again will say that (1.1) is the criterion for transience, where $\mu_k$ is now the stationary distribution associated with the stochastic matrix $p(i,j)$. Our general model also permits two-sided increments for the walk from some of the lines, which contribute terms involving $\cot(\pi \alpha_k / 2)$ to the cotangent criterion (1.1). These two-sided models also generalize previously studied classical models (see, e.g., [15, 17, 23]). Again, it is only in our general setting that the essential nature of the cotangent criterion (1.1) becomes apparent.
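As a numerical illustration of the criterion (the function names below are ours; this is a sketch, not code from the paper), one can compute the stationary distribution of $P$ and evaluate the sign of the sum in (1.1):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution mu of an irreducible stochastic matrix P (mu P = mu)."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # mu (P - I) = 0, sum(mu) = 1
    b = np.concatenate([np.zeros(n), [1.0]])
    mu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu

def cotangent_sum(P, alpha, chi):
    """Evaluate sum_k mu_k cot(chi_k pi alpha_k); for max_k chi_k alpha_k < 1,
    a positive value corresponds to transience and a negative one to recurrence."""
    mu = stationary_distribution(P)
    return float(np.sum(mu / np.tan(np.asarray(chi) * np.pi * np.asarray(alpha))))

# Cyclic routing on three one-sided half-lines (chi_k = 1):
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(cotangent_sum(P, alpha=[0.2, 0.4, 0.45], chi=[1.0, 1.0, 1.0]))
```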
Rather than $\mathbb{R}_+ \times S$, one could work on $\mathbb{Z}_+ \times S$ instead, with probability mass functions replacing probability densities; the results would be unchanged.
The paper is organized as follows. In Sect. 2, we formally define our model and describe our main results, which, as well as a recurrence classification, include results on the existence of moments of return times in the recurrent cases. In Sect. 3, we explain how our general model relates to the special case of the oscillating random walk when $S$ has two elements, and state our results for that model; in this context, the recurrence classification results are already known, but the existence-of-moments results are new even here, and are in several important cases sharp. The present work was also motivated by some problems concerning many-dimensional, partially homogeneous random walks similar to models studied in [5, 6, 13]: we describe this connection in Sect. 4. The main proofs are presented in Sects. 5, 6, and 7, the last of which deals with the critical boundary case, which is more delicate and requires additional work. We collect various technical results in the Appendix.

Model and Results
Consider $(X_n, \xi_n;\, n \in \mathbb{Z}_+)$, a discrete-time, time-homogeneous Markov process with state space $\mathbb{R}_+ \times S$, where $S$ is a finite non-empty set. The state space is equipped with the appropriate Borel sets, namely sets of the form $B \times A$, where $B \in \mathcal{B}(\mathbb{R}_+)$ is a Borel set in $\mathbb{R}_+$ and $A \subseteq S$. The process will be described by:
- an irreducible stochastic matrix labelled by $S$, $P = (p(i,j);\, i,j \in S)$; and
- a collection $(w_i;\, i \in S)$ of probability density functions, so $w_i : \mathbb{R} \to \mathbb{R}_+$ is a Borel function with $\int_{\mathbb{R}} w_i(y)\,dy = 1$.
We view $\mathbb{R}_+ \times S$ as a complex of half-lines $\mathbb{R}_+ \times \{k\}$, or branches, connected at a central origin $O := \{0\} \times S$; at time $n$, the coordinate $\xi_n$ describes which branch the process is on, and $X_n$ describes the distance along that branch at which the process sits. We will call $X_n$ a random walk on this complex of branches.
To simplify notation, throughout we write $\mathbb{P}_{x,i}[\,\cdot\,]$ for $\mathbb{P}[\,\cdot \mid (X_0,\xi_0) = (x,i)]$, the conditional probability starting from $(x,i) \in \mathbb{R}_+ \times S$; similarly, we use $\mathbb{E}_{x,i}$ for the corresponding expectation. The transition kernel of the process is given, for $(x,i) \in \mathbb{R}_+ \times S$, for all Borel sets $B \subseteq \mathbb{R}_+$ and all $j \in S$, by
$$\mathbb{P}_{x,i}\bigl[(X_1,\xi_1) \in B \times \{j\}\bigr] = \mathbf{1}\{j=i\} \int_{\mathbb{R}} \mathbf{1}\{x+y \in B,\, x+y \ge 0\}\, w_i(y)\,dy + p(i,j) \int_{\mathbb{R}} \mathbf{1}\{-(x+y) \in B,\, x+y < 0\}\, w_i(y)\,dy. \qquad (2.1)$$
The dynamics of the process represented by (2.1) can be described algorithmically as follows. Given $(X_n,\xi_n) = (x,i) \in \mathbb{R}_+ \times S$, generate (independently) a spatial increment $\varphi_{n+1}$ from the distribution given by $w_i$ and a random index $\eta_{n+1} \in S$ according to the distribution $p(i,\,\cdot\,)$. Then,
$$(X_{n+1}, \xi_{n+1}) = \begin{cases} (x + \varphi_{n+1},\, \xi_n) & \text{if } x + \varphi_{n+1} \ge 0, \\ \bigl(|x + \varphi_{n+1}|,\, \eta_{n+1}\bigr) & \text{if } x + \varphi_{n+1} < 0. \end{cases}$$
In words, the walk takes a $w_{\xi_n}$-distributed step. If this step would bring the walk beyond the origin, it passes through the origin and switches onto branch $\eta_{n+1}$ (or, if $\eta_{n+1}$ happens to be equal to $\xi_n$, it reflects back along the same branch). The finite irreducible stochastic matrix $P$ is associated with a (unique) positive invariant probability distribution $(\mu_k;\, k \in S)$ satisfying $\mu_k = \sum_{i \in S} \mu_i\, p(i,k)$ for all $k \in S$. For future reference, we state the following.
(A0) Let $P = (p(i,j);\, i,j \in S)$ be an irreducible stochastic matrix, and let $(\mu_k;\, k \in S)$ denote the corresponding invariant distribution.
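The dynamics described by (2.1) translate directly into a simulation sketch. The Pareto-type increment below (density proportional to $y^{-1-\alpha}$ for magnitudes at least $1$) is a concrete choice of ours, made purely for illustration; it is not the paper's exact assumption on the $w_i$.

```python
import random

def pareto_magnitude(alpha):
    """Jump magnitude with P[J > y] = y**(-alpha) for y >= 1 (illustrative choice)."""
    return (1.0 - random.random()) ** (-1.0 / alpha)

def step(x, i, p, alpha, one_sided):
    """One step of the walk on the complex from (x, i): draw an increment from
    branch i's law and, if it crosses the origin, route to a branch eta ~ p(i, .)."""
    magnitude = pareto_magnitude(alpha[i])
    phi = -magnitude if one_sided[i] else random.choice([-1.0, 1.0]) * magnitude
    if x + phi >= 0:
        return x + phi, i                      # stays on the same branch
    eta = random.choices(list(p[i]), weights=list(p[i].values()))[0]
    return -(x + phi), eta                     # passes through the origin

# Example: two one-sided branches with pure transmission between them.
p = {0: {1: 1.0}, 1: {0: 1.0}}
alpha = {0: 0.3, 1: 0.6}
one_sided = {0: True, 1: True}
x, i = 10.0, 0
for _ in range(10_000):
    x, i = step(x, i, p, alpha, one_sided)
```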
Our interest here is in the case where the $w_i$ are heavy-tailed. We allow two classes of distribution for the $w_i$: one-sided or symmetric. It is convenient, then, to partition $S$ as $S = S_{\mathrm{one}} \cup S_{\mathrm{sym}}$, where $S_{\mathrm{one}}$ and $S_{\mathrm{sym}}$ are disjoint sets representing those branches on which the walk takes, respectively, one-sided and symmetric jumps. The $w_k$ are then described by a collection of positive parameters $(\alpha_k;\, k \in S)$.
For a probability density function $v : \mathbb{R} \to \mathbb{R}_+$, an exponent $\alpha \in (0,\infty)$, and a constant $c \in (0,\infty)$, we write $v \in \mathcal{D}_{\alpha,c}$ to mean that there exists a function $c : \mathbb{R}_+ \to (0,\infty)$ with $\sup_y c(y) < \infty$ and $\lim_{y\to\infty} c(y) = c$ for which (2.3) holds. If $v \in \mathcal{D}_{\alpha,c}$ is such that (2.3) holds and $c(y)$ satisfies the stronger condition $c(y) = c + O(y^{-\delta})$ for some $\delta > 0$, then we write $v \in \mathcal{D}^+_{\alpha,c}$. Our assumption on the increment distributions $w_i$ is as follows.
(A1) Suppose that, for each $k \in S$, we have an exponent $\alpha_k \in (0,\infty)$, a constant $c_k \in (0,\infty)$, and a density function $v_k \in \mathcal{D}_{\alpha_k, c_k}$. Then suppose that, for all $y \in \mathbb{R}$, $w_k$ is given in terms of $v_k$: one-sided (towards the origin) for $k \in S_{\mathrm{one}}$, and symmetric for $k \in S_{\mathrm{sym}}$.

We say that $X_n$ is recurrent if $\liminf_{n\to\infty} X_n = 0$, a.s., and transient if $\lim_{n\to\infty} X_n = \infty$, a.s. An irreducibility argument shows that our Markov chain $(X_n,\xi_n)$ displays the usual recurrence/transience dichotomy, and exactly one of these two situations holds; however, our proofs establish this behaviour directly using semimartingale arguments, and so we may avoid a discussion of irreducibility here.
Throughout we define, for $k \in S$, $\chi_k := 1$ if $k \in S_{\mathrm{one}}$ and $\chi_k := 1/2$ if $k \in S_{\mathrm{sym}}$. Our first main result gives a recurrence classification for the process.

Theorem 1 Suppose that (A0) and (A1) hold.
(a) Suppose that $\max_{k \in S} \chi_k \alpha_k \ge 1$. Then, $X_n$ is recurrent.
(b) Suppose that $\max_{k \in S} \chi_k \alpha_k < 1$. Then:
(i) if $\sum_{k \in S} \mu_k \cot(\chi_k \pi \alpha_k) < 0$, then $X_n$ is recurrent;
(ii) if $\sum_{k \in S} \mu_k \cot(\chi_k \pi \alpha_k) > 0$, then $X_n$ is transient;
(iii) if $\sum_{k \in S} \mu_k \cot(\chi_k \pi \alpha_k) = 0$, then $X_n$ is recurrent.
In the recurrent cases, it is of interest to quantify recurrence via the existence or non-existence of passage-time moments. For $a > 0$, let $\tau_a := \min\{n \ge 0 : X_n \le a\}$, where throughout the paper we adopt the usual convention that $\min \emptyset := +\infty$. The next result shows that in all the recurrent cases, excluding the boundary case in Theorem 1(b)(iii), the tails of $\tau_a$ are polynomial.

Theorem 2 Suppose that (A0) and (A1) hold. In cases (a) and (b)(i) of Theorem 1, there exist $0 < q_0 \le q_1 < \infty$ such that for all $x > a$ and all $k \in S$, $\mathbb{E}_{x,k}[\tau_a^q] < \infty$ for $q < q_0$, and $\mathbb{E}_{x,k}[\tau_a^q] = \infty$ for $q > q_1$.
Remark 1 Our general results have $q_0 < q_1$, so that Theorem 2 does not give sharp estimates; these remain an open problem.
We do have sharp results in several particular cases for two half-lines, in which case our model reduces to the oscillating random walk considered by Kemperman [15] and others. We present these sharp moments results (Theorems 4 and 6) in the next section, which discusses in detail the case of the oscillating random walk, and also describes how our recurrence results relate to the known results for this classical model.

Two Half-Lines Become One Line
In the case of our general model in which $S$ consists of two elements, say $S = \{-1, +1\}$, it is natural and convenient to represent our random walk on the whole real line $\mathbb{R}$. Namely, if $\omega(x,k) := kx$ for $x \in \mathbb{R}_+$ and $k = \pm 1$, we let $Z_n = \omega(X_n, \xi_n)$.
The simplest case has no reflection at the origin, only transmission, i.e. $p(i,j) = \mathbf{1}\{j = -i\}$, so that $\mu = (\tfrac12, \tfrac12)$. Then, for $B \subseteq \mathbb{R}$ a Borel set, the transition law of $Z_n$ may be written down for $x \in \mathbb{R}_+$ and, similarly, writing $w_-(y)$ for $w_{-1}(-y)$, for $x < 0$; hence we have, for $x \in \mathbb{R} \setminus \{0\}$ and Borel $B \subseteq \mathbb{R}$, a transition law of the form (3.1). We may make an arbitrary non-trivial choice for the transition law at $Z_n = 0$ without affecting the behaviour of the process, and then (3.1) shows that $Z_n$ is a time-homogeneous Markov process on $\mathbb{R}$. Now, $Z_n$ is recurrent if $\liminf_{n\to\infty} |Z_n| = 0$, a.s., or transient if $\lim_{n\to\infty} |Z_n| = \infty$, a.s. The one-dimensional case described at (3.1) has received significant attention over the years. We describe several of the classical models that have been considered.

Homogeneous Symmetric Random Walk
The most classical case is as follows.

(Sym) Suppose that $S = S_{\mathrm{sym}} = \{-1,+1\}$, that $\alpha_{-1} = \alpha_{+1} = \alpha \in (0,\infty)$, and that the (symmetric) increment density is the same on both half-lines.
In this case, $Z_n$ describes a random walk with i.i.d. symmetric increments.

Theorem 3 Suppose that (Sym) holds. Then, $Z_n$ is transient if $\alpha < 1$ and recurrent if $\alpha \ge 1$.

Theorem 3 follows from our Theorem 1, since in this case $\sum_{k \in S} \mu_k \cot(\chi_k \pi \alpha_k) = \cot(\pi\alpha/2)$.
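Explicitly, with $\mu_{-1} = \mu_{+1} = \tfrac12$ and $\chi_{-1} = \chi_{+1} = \tfrac12$,
$$\sum_{k \in S} \mu_k \cot(\chi_k \pi \alpha_k) = \cot\Bigl(\frac{\pi\alpha}{2}\Bigr), \qquad \text{which is } > 0 \iff \alpha < 1 \quad (\text{for } \alpha \in (0,2)),$$
while for $\alpha \ge 2$ we have $\max_k \chi_k \alpha_k \ge 1$, so Theorem 1(a) gives recurrence directly.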
Since it deals with a sum of i.i.d. random variables, Theorem 3 may be deduced from the classical theorem of Chung and Fuchs [7], via, e.g. the formulation of Shepp [23]. The method of the present paper provides an alternative to the classical (Fourier analytic) approach that generalizes beyond the i.i.d. setting (note that Theorem 3 is not, formally, a consequence of Shepp's most accessible result, Theorem 5 of [23], since v does not necessarily correspond to a unimodal distribution in Shepp's sense).
With $\tau_a$ as defined previously, in the setting of the present section we have $\tau_a = \min\{n \ge 0 : |Z_n| \le a\}$. We have the following result on the existence of passage-time moments, whose proof is in Sect. 6; while part (i) is well known, we could find no reference for part (ii).

Theorem 4 Suppose that (Sym) holds, and that
Our main interest concerns spatially inhomogeneous models, i.e. those in which $w_x$ depends on $x$, typically only through $\operatorname{sgn} x$, the sign of $x$. Such models are known as oscillating random walks and were studied by Kemperman [15], to whom the model was suggested in 1960 by Anatole Joffe and Peter Ney (see [15, p. 29]).

One-Sided Oscillating Random Walk
The next example, following [15], is a one-sided oscillating random walk, (Osc1). In other words, the walk always jumps in the direction of (and possibly over) the origin, with tail exponent $\alpha$ from the positive half-line and exponent $\beta$ from the negative half-line. The following recurrence classification applies.
Theorem 5 Suppose that (Osc1) holds. Then, $Z_n$ is transient if $\alpha + \beta < 1$ and recurrent if $\alpha + \beta \ge 1$.

Theorem 5 was obtained in the discrete-space case by Kemperman [15, p. 21]; it follows from our Theorem 1, since in this case
$$\sum_{k \in S} \mu_k \cot(\chi_k \pi \alpha_k) = \tfrac12\bigl(\cot(\pi\alpha) + \cot(\pi\beta)\bigr) = \frac{\sin(\pi(\alpha+\beta))}{2\sin(\pi\alpha)\sin(\pi\beta)}.$$
The special case of (Osc1) in which $\alpha = \beta$ was called antisymmetric by Kemperman; here, Theorem 5 shows that the walk is transient for $\alpha < 1/2$ and recurrent for $\alpha > 1/2$. We also have a moments result for this model (Theorem 6), proved in Sect. 6.

Two-Sided Oscillating Random Walk
Another model in the vein of [15] is a two-sided oscillating random walk, (Osc2). Now, the jumps of the walk are symmetric, as under (Sym), but with a tail exponent depending upon which side of the origin the walk is currently on, as under (Osc1). The most general recurrence classification result for the model (Osc2) is due to Sandrić [21]. A somewhat less general, discrete-space version was obtained by Rogozin and Foss (Theorem 2 of [17, p. 159]), building on [15]. Analogous results in continuous time were given in [4, 11]. In our notation, with exponent $\alpha$ on the positive half-line and $\beta$ on the negative half-line, Theorem 1 gives transience if $\alpha + \beta < 2$ and recurrence if $\alpha + \beta \ge 2$, since in this case $\sum_k \mu_k \cot(\chi_k \pi \alpha_k) = \tfrac12\bigl(\cot(\pi\alpha/2) + \cot(\pi\beta/2)\bigr)$, whose sign is that of $\sin(\pi(\alpha+\beta)/2)$.

Mixed Oscillating Random Walk
A final model is another oscillating walk that mixes the one- and two-sided models: one-sided jumps from one half-line and symmetric jumps from the other. In the discrete-space case, Theorem 2 of Rogozin and Foss [17, p. 159] gives the recurrence classification.

Additional Remarks
It is possible to generalize the model further by permitting the local transition density to vary within each half-line. This leads to a transition kernel of the form (3.2), defined for all Borel sets $B \subseteq \mathbb{R}$, where the local transition densities $w_x : \mathbb{R} \to \mathbb{R}_+$ are Borel functions. Variations of the oscillating random walk, within the general setting of (3.2), have also been studied in the literature. Sandrić [19, 21] supposes that the $w_x$ satisfy, for each $x \in \mathbb{R}$, $w_x(y) \sim c(x)|y|^{-1-\alpha(x)}$ as $|y| \to \infty$, for some measurable functions $c$ and $\alpha$; he refers to this as a stable-like Markov chain. Under a uniformity condition on the $w_x$, and other mild technical conditions, Sandrić [19] obtained, via Foster–Lyapunov methods similar in spirit to those of the present paper, sufficient conditions for recurrence and transience: essentially, $\liminf_{x\to\infty} \alpha(x) > 1$ is sufficient for recurrence and $\limsup_{x\to\infty} \alpha(x) < 1$ is sufficient for transience. These results can be seen as a generalization of Theorem 3. Some related results for models in continuous time (Lévy processes) are given in [20, 22, 24]. Further results and an overview of the literature are provided in Sandrić's Ph.D. thesis [18].

Many-Dimensional Random Walks
The next two examples show how versions of the oscillating random walk of Sect. 3 arise as embedded Markov chains in certain two-dimensional random walks.

Example 2
We present two variations on the previous example, which are superficially similar but turn out to be less delicate. First, modify the random walk of the previous example by supposing that (4.1) holds but with modified behaviour; see the left-hand part of Fig. 2 for an illustration. The embedded process $X_n$ now has, for all $x \in \mathbb{Z}$ and $r \ge 0$, jump probabilities governed by $u(r) = (c/2)(1 + o(1))\, r^{-1/2}$. Thus, $X_n$ is a random walk with symmetric increments, and the discrete version of our Theorem 3 (and also a result of [23]) implies that the walk is transient. This walk was studied by Campanino and Petritis [5, 6], who proved transience via different methods. Next, modify the random walk of Example 1 by supposing that (4.1) holds but with a different modification; see the right-hand part of Fig. 2 for an illustration. This time, the walk takes a symmetric increment as at (4.3) when $x \ge 0$ but a one-sided increment as at (4.2) when $x < 0$. In this case, the discrete version of our Theorem 8 (and also a result of [17]) shows that the walk is transient.
One may obtain the general model on $\mathbb{Z}_+ \times S$ as an embedded process for a random walk on complexes of half-spaces, generalizing the examples considered here; we leave this to the interested reader.

Lyapunov Functions
Our proofs are based on demonstrating appropriate Lyapunov functions; that is, for suitable $\varphi : \mathbb{R}_+ \times S \to \mathbb{R}_+$ we study $Y_n = \varphi(X_n,\xi_n)$ such that $Y_n$ has appropriate local supermartingale or submartingale properties, expressed via the one-step mean increments $D\varphi(x,i) := \mathbb{E}_{x,i}[\varphi(X_1,\xi_1)] - \varphi(x,i)$. First, we note some consequences of the transition law (2.1). Let $\varphi : \mathbb{R}_+ \times S \to \mathbb{R}_+$ be measurable. Then, (2.1) yields an expression (5.1) for $D\varphi(x,i)$, valid for $(x,i) \in \mathbb{R}_+ \times S$. Our primary Lyapunov function is roughly of the form $x \mapsto |x|^\nu$, $\nu \in \mathbb{R}$, but weighted according to an $S$-dependent component (realized by a collection of multiplicative weights $\lambda_k$); these weights provide a crucial technical tool.
For $\nu \in \mathbb{R}$ and $x \in \mathbb{R}$, we write $f_\nu(x)$ for the basic power function; then, for parameters $\lambda_k > 0$ for each $k \in S$, we define the weighted version $f_\nu(x,k)$ for $x \in \mathbb{R}_+$ and $k \in S$. For this Lyapunov function, (5.1) expresses $D f_\nu(x,i)$ as a sum of integrals; depending on whether $i \in S_{\mathrm{sym}}$ or $i \in S_{\mathrm{one}}$, these integrals can be expressed in terms of $v_i$.
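One concrete form consistent with this description, and with the bounds used in the proof of Theorem 2 below (the exact normalization in the paper may differ), is
$$f_\nu(x,k) := \lambda_k\,(1+x)^\nu, \qquad (x,k) \in \mathbb{R}_+ \times S,\ \lambda_k > 0,$$
so that $f_\nu(\,\cdot\,,k)$ is bounded above and below by positive constants times $(1+x)^\nu$, which is the property invoked in Sect. 6.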

Estimates of Functional Increments
In the course of our proofs, we need various integral estimates that can be expressed in terms of classical transcendental functions. For the convenience of the reader, we gather all necessary integrals in Lemmas 1 and 2; the proofs of these results are deferred until the Appendix. Recall that the Euler gamma function $\Gamma$ satisfies the functional equation $z\Gamma(z) = \Gamma(z+1)$, and that the hypergeometric function ${}_mF_n$ is defined via a power series (see [1]).

Remark 2
The j integrals can be obtained as derivatives with respect to ν of the i integrals, evaluated at ν = 0.
The next result collects estimates for our integrals in the expected functional increments (5.4) and (5.5) in terms of the integrals in Lemma 1.
For $\alpha \in (0,1)$ and $\nu > -1$, we have the following estimates.

Proof These estimates are mostly quite straightforward, so we do not give all the details. We spell out the estimate in (5.6); the others are similar. With the substitution $u = \frac{1+y}{x}$, the relevant integral can be rewritten in a form involving $c(ux-1)$. Let $\varepsilon \in (0,c)$. Then, there exists $y_0 \in \mathbb{R}_+$ such that $|c(y) - c| < \varepsilon$ for all $y \ge y_0$, so that $|c(ux-1) - c| < \varepsilon$ for all $u$ in the range of integration, provided $x \ge y_0$. Considering separately the range $u \ge \frac{1+x}{x}$ for $x \ge y_0$, and letting $x \to \infty$, it follows that for any $\varepsilon > 0$ we may choose $x$ sufficiently large so that the error terms are at most $\varepsilon$; since $\varepsilon > 0$ was arbitrary, we obtain (5.6) from (5.11).
We also need the following simple estimates for ranges of α when the asymptotics for the final two integrals in Lemma 3 are not valid.
We conclude this subsection with two algebraic results.

Lemma 6
Suppose that (A0) holds. Given $(b_k;\, k \in S)$ with $b_k \in \mathbb{R}$ for all $k$, there exists a solution $(\theta_k;\, k \in S)$, with $\theta_k \in \mathbb{R}$ for all $k$, to the system of equations
$$\sum_{j \in S} p(i,j)\,\theta_j - \theta_i = b_i, \quad \text{for all } i \in S, \qquad (5.13)$$
if and only if $\sum_{k \in S} \mu_k b_k = 0$. Moreover, if a solution to (5.13) exists, we may take $\theta_k > 0$ for all $k \in S$.
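A quick numerical check of Lemma 6 (our own illustrative sketch, not code from the paper) solves the linear system $(P - I)\theta = b$ for a centred vector $b$ and shifts the solution to make it positive:

```python
import numpy as np

def solve_for_theta(P, b):
    """Solve sum_j p(i,j) theta_j - theta_i = b_i, i.e. (P - I) theta = b.

    A solution exists iff b is centred under the stationary distribution of P,
    and it is unique up to adding a constant to all components (since P 1 = 1),
    so we may shift it to make every component strictly positive."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    theta, *_ = np.linalg.lstsq(P - np.eye(n), np.asarray(b, dtype=float), rcond=None)
    return theta - theta.min() + 1.0

P = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])   # 3-cycle, mu uniform
b = np.array([1.0, -2.0, 1.0])                                      # mu . b = 0
theta = solve_for_theta(P, b)
print(theta, np.allclose(P @ theta - theta, b))
```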

Supermartingale Conditions and Recurrence
We start with the case $\max_{k \in S} \chi_k \alpha_k < 1$. We will obtain a local supermartingale by choosing the $\lambda_k$ carefully. Lemma 6, which shows how the stationary probabilities $\mu_k$ enter, is crucial; a similar idea was used for random walks on strips in Section 3.1 of [10]. Next is our key local supermartingale result in this case.
Proof (of Proposition 1) Since $i^{\nu,\alpha}_{2,1} > 0$ for all $\alpha > 0$ and all $\nu \in (0,\alpha)$, Lemma 5 shows that $D f_\nu(x,i)$ will be negative for all $i$ and all $x$ sufficiently large provided that we can find $\nu$ such that $(P\lambda)_i - \lambda_i + R(\alpha_i, \nu) < 0$ for all $i$. By (5.12), writing $\theta = (\theta_k;\, k \in S)$, this condition can be arranged using (5.15). Therefore, by Lemma 5, we can always find sufficiently small $\nu > 0$ and a vector $\lambda = \lambda(\nu)$ with strictly positive elements for which $D f_\nu(x,i) \le -\varepsilon x^{\nu - \alpha_i}$ for some $\varepsilon > 0$, all $i$, and all $x$ sufficiently large. Maximizing over $i$ gives part (i).

The argument for part (ii) is similar. Suppose $\nu \in (-1,0)$. This time, $\sum_{k \in S} \mu_k a_k = \delta$ for some $\delta > 0$, and we set $b_k = -a_k + \delta$, so that $\sum_{k \in S} \mu_k b_k = 0$ once more. Lemma 6 now shows that we can find $\theta_k$ so that
$$\sum_{j \in S} p(i,j)\,\theta_j - \theta_i + a_i = \delta, \quad \text{for all } i \in S. \qquad (5.16)$$
With this choice of $\theta_k$, we again set $\lambda_k = 1 + \theta_k \nu$; note that we may assume $\lambda_k > 0$ for all $k$ for $\nu$ sufficiently small.
Again, $D f_\nu(x,i)$ will be non-positive for all $i$ and all $x$ sufficiently large provided that we can find $\lambda$ and $\nu$ such that $(P\lambda)_i - \lambda_i + R(\alpha_i, \nu) < 0$ for all $i$. Following a similar argument to before, using (5.16), we find that for $\nu < 0$ close enough to $0$ there is a vector $\lambda = \lambda(\nu)$ with strictly positive elements for which $D f_\nu(x,i) \le -\varepsilon x^{\nu - \alpha_i}$ for all $i$ and all $x$ sufficiently large.
Now we examine the case $\max_{k \in S} \chi_k \alpha_k \ge 1$.

Proposition 2
Suppose that (A0) and (A1) hold, and that $\max_{k \in S} \chi_k \alpha_k \ge 1$. Then, there exist $\nu \in (0,\alpha)$, $\lambda_k > 0$ ($k \in S$), $\varepsilon > 0$, and $x_0 \in \mathbb{R}_+$ such that, for all $i \in S$ and all $x \ge x_0$, $D f_\nu(x,i) \le -\varepsilon x^{\nu - \alpha_i}$.

Before starting the proof of this proposition, we introduce the following notation. For $k \in S$, denote $a_k = \pi \cot(\pi \chi_k \alpha_k)$ and define the vector $a = (a_k;\, k \in S)$. For $i \in S$ and $A \subseteq S$, write $P(i,A) = \sum_{j \in A} p(i,j)$. Define $S_0 = \{i \in S : \chi_i \alpha_i \ge 1\}$ and, recursively for $m \ge 1$, define the sets $S_m$. The rest of the proof is devoted to establishing this fact.
In the case $i \in S_M$, the condition in (5.20) reads, by (5.21), as follows. Using the fact that for $\ell < m$ we have $\rho_{\ell,m} = \exp(-\nu L_{m,\ell})$, and for $\ell > m$ we have $\rho_{\ell,m} = \exp(\nu L_{\ell,m})$, the above expression can be simplified, up to $o(\nu)$ terms, on introducing a suitable constant.

Hence, by Corollary 1, there exist non-trivial solutions for the lower triangular matrix $L$ within an algorithmically determined region $\mathcal{L}$. The positivity of the lower triangular part of $L$ implies that the components of $\lambda$ are ordered: $\lambda^{(m)} < \lambda^{(m+1)}$ for $0 \le m < M-1$. Given $L$, the ratios of the $\lambda^{(k)}$ are determined, and, by construction, the $\lambda^{(k)}$ are positive.
We are almost ready to complete the proof of Theorem 1, excluding part (b)(iii); first, we need one more technical result concerning non-confinement.
Proof We claim that for each $x \in \mathbb{R}_+$ there exists $\varepsilon_x > 0$ such that (5.23) holds. Indeed, given $(x,i) \in \mathbb{R}_+ \times S$, we may choose $j \in S$ so that $p(i,j) > 0$, and we may choose $z_0 \ge x$ sufficiently large so that the corresponding transition probability is uniformly positive, which gives (5.23). The local escape property (5.23) implies the lim sup result by a standard argument: see, e.g., [16, Proposition 3.3.4].

Proof of Theorem 1
We are not yet ready to prove part (b)(iii): we defer that part of the proof until Sect. 7.
The other parts of the theorem follow from the supermartingale estimates in this section, together with the technical results from the Appendix. Indeed, under the conditions of part (a) or (b)(i) of the theorem, we have from Proposition 2 or 1(i), respectively, that for suitable $\nu > 0$ and $\lambda_k$ the function $f_\nu$ satisfies the hypotheses of Lemma 16. Thus, we may apply Lemma 16, which together with Lemma 8 shows that $\liminf_{n\to\infty} X_n \le x_0$, a.s. Thus, there exists an interval $I \subseteq [0, x_0+1]$ such that $(X_n, \xi_n) \in I \times \{i\}$ i.o., where $i$ is some fixed element of $S$. Let $\tau_0 := 0$ and, for $k \in \mathbb{N}$, define $\tau_k = \min\{n > \tau_{k-1} + 1 : (X_n, \xi_n) \in I \times \{i\}\}$. Given $i \in S$, we may choose $j, k \in S$ such that $p(i,j) > \delta_1$ and $p(j,k) > \delta_1$ for some $\delta_1 > 0$; let $\gamma = \alpha_i \vee \alpha_j$. Then, we may choose $\delta_2 \in (0,1)$ and $z_0 \in \mathbb{R}_+$ such that $v_i(z) > \delta_2 z^{-1-\gamma}$ and $v_j(z) > \delta_2 z^{-1-\gamma}$ for all $z \ge z_0$. Then, for any $\varepsilon \in (0,1)$, the probability that $X_{\tau_k + 2} < \varepsilon$ is bounded below, uniformly in $k$, by a positive constant depending on $\varepsilon$. Thus, Lévy's extension of the Borel–Cantelli lemma shows that $X_n < \varepsilon$ infinitely often. Since $\varepsilon \in (0,1)$ was arbitrary, $\liminf_{n\to\infty} X_n = 0$, a.s.

On the other hand, under the conditions of part (b)(ii) of the theorem, we have from Proposition 1(ii) that, for suitable $\nu < 0$ and $\lambda_k$, the hypotheses of Lemma 17 hold for any $x_1$ sufficiently large. Thus, we may apply Lemma 17, which shows that for any $\varepsilon > 0$ there exists $x \in (x_1, \infty)$ for which, for all $n \ge 0$, on $\{X_n \ge x\}$ the conditional probability of ever returning below $x_1$ after time $n$ is less than $\varepsilon$; and $X_n \ge x$ occurs for some $n$, a.s., by Lemma 8. Since $\varepsilon > 0$ was arbitrary, we get $\liminf_{m\to\infty} X_m \ge x_1$, a.s., and since $x_1$ was arbitrary we get $\lim_{m\to\infty} X_m = \infty$, a.s.

Technical Tools
The following result is a straightforward reformulation of Theorem 1 of [2].

Lemma 9
Let $Y_n$ be an integrable $\mathcal{F}_n$-adapted stochastic process, taking values in an unbounded subset of $\mathbb{R}_+$, with $Y_0 = x_0$ fixed. For $x > 0$, let $\sigma_x := \inf\{n \ge 0 : Y_n \le x\}$. Suppose that there exist $\delta > 0$, $x > 0$, and $\gamma < 1$ such that, for any $n \ge 0$, on $\{n < \sigma_x\}$,
$$\mathbb{E}[Y_{n+1} - Y_n \mid \mathcal{F}_n] \le -\delta Y_n^{\gamma}. \qquad (6.1)$$
Then, for any $p \in (0, 1/(1-\gamma))$, $\mathbb{E}[\sigma_x^p] < \infty$.

The following companion result on non-existence of moments is a reformulation of Corollary 1 of [2].

Lemma 10 Let $Y_n$ be an integrable $\mathcal{F}_n$-adapted stochastic process, taking values in an unbounded subset of $\mathbb{R}_+$, with $Y_0 = x_0$ fixed. Suppose that there exist $C_1, C_2 > 0$, $x > 0$, $p > 0$, and $r > 1$ such that, for any $n \ge 0$, on $\{n < \sigma_x\}$ the conditions of Corollary 1 of [2] hold. Then, for any $q > p$, $\mathbb{E}[\sigma_x^q] = \infty$ for $x_0 > x$.

Proof of Theorem 2
Proof of Theorem 2 Under conditions (a) or (b)(i) of Theorem 1, we have from Proposition 2 or 1, respectively, that there exist positive $\lambda_k$ and constants $\varepsilon > 0$, $\beta > 0$, and $\nu \in (0,\beta)$ such that $D f_\nu(x,i) \le -\varepsilon x^{\nu - \beta}$ for all $i$ and all $x$ sufficiently large. Let $Y_n = f_\nu(X_n, \xi_n)$. Then, $Y_n$ is bounded above and below by positive constants times $(1 + X_n)^\nu$, so we have that (6.1) holds for $x$ sufficiently large with $\gamma = 1 - (\beta/\nu)$. It follows from Lemma 9 that $\mathbb{E}[\sigma_x^p] < \infty$ for $p \in (0, \nu/\beta)$, which gives the claimed existence-of-moments result.
It is not hard to see that some moments of the return time fail to exist, due to the heavy-tailed nature of the model; an argument is easily constructed using the 'one big jump' idea, and a similar idea is used in [14]. We sketch the argument. For any $x, i$, and all $y$ sufficiently large, we have $\mathbb{P}_{x,i}[X_1 \ge y - x] \ge \varepsilon y^{-\bar\alpha}$. Given such a first jump, with uniformly positive probability the process takes time at least of order $y^{\beta}$ to return to a neighbourhood of zero (where $\beta$ can be bounded in terms of $\bar\alpha$); this can be proved using a suitable maximal inequality, as in the proof of Theorem 2.10 of [14]. Combining these two facts shows that, with probability of order $y^{-\bar\alpha}$, the return time to a neighbourhood of the origin exceeds order $y^{\beta}$. This polynomial tail bound yields non-existence of sufficiently high moments.
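As a purely illustrative Monte Carlo sketch (ours, with an arbitrary Pareto-type increment), one can see the polynomial decay of the return-time tail for a symmetric homogeneous walk:

```python
import random

def symmetric_pareto_step(alpha):
    """Symmetric heavy-tailed increment with tail exponent alpha (magnitude >= 1)."""
    return random.choice([-1.0, 1.0]) * (1.0 - random.random()) ** (-1.0 / alpha)

def return_time(x0, a, alpha, max_steps=10**5):
    """Number of steps until |Z_n| <= a, started from Z_0 = x0 (capped at max_steps)."""
    z, n = x0, 0
    while abs(z) > a and n < max_steps:
        z += symmetric_pareto_step(alpha)
        n += 1
    return n

samples = [return_time(10.0, 1.0, alpha=1.5) for _ in range(500)]
for t in (10, 100, 1000):
    print(t, sum(s > t for s in samples) / len(samples))   # empirical P[tau_a > t]
```

The empirical tail decays polynomially in $t$, in line with Theorem 2; the sharp exponents for this homogeneous symmetric case are the subject of Theorem 4.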

Lemma 12
The positive zero of $C_\star(\,\cdot\,,\alpha)$ described in Lemma 11 is identified as follows; in particular, for $\star = \mathrm{sym}$ and $\alpha \in (1,2)$ it is located at $\nu = \alpha - 1$.

Proof First suppose $\star = \mathrm{one}$. Then, from Lemma 1, we verify the claimed formula for $\alpha \in (\tfrac12, 1)$. Now suppose that $\star = \mathrm{sym}$ and $\alpha \in (1,2)$. To verify that $C_{\mathrm{sym}}(\alpha-1, \alpha) = 0$, it is simpler to work with the integral representations directly, rather than the hypergeometric functions. After the substitution $z = 1/u$, the first integral can be rewritten; similarly, after the same substitution, the second integral can be rewritten and evaluated in closed form. Finally, combining these with the expression for $i^{\alpha-1,\alpha}$ completes the verification.

We can now complete the proofs of Theorems 4 and 6.
We sketch the argument for $\mathbb{E}[\sigma^p] = \infty$ when $p > 1/2$. For $\nu \in (1,\alpha)$, it is not hard to show that $D f_\nu(x,i) \ge 0$ for all $x$ sufficiently large, and we may verify the other conditions of Lemma 10 to show that $\mathbb{E}[\sigma^p] = \infty$ for $p > 1/2$.

Proof of Theorem 6
Most of the proof is similar to that of Theorem 4, so we omit the details. The case where a different argument is required is the non-existence part of the case $\alpha \ge 1$. We have that, for some $\varepsilon > 0$ and all $y$ sufficiently large, $\mathbb{P}_{x,i}[X_1 \ge y] \ge \varepsilon (x+y)^{-\alpha}$. A similar argument to Lemma 4 shows that, for any $\nu \in (0,1)$, a corresponding drift bound holds. Then, a suitable maximal inequality implies that, with probability at least $1/2$, started from $X_1 \ge y$ it takes at least $c y^{\nu}$ steps for $X_n$ to return to a neighbourhood of $0$, for some $c > 0$. Combining the two estimates gives a polynomial lower bound on the tail of $\sigma$, which implies $\mathbb{E}[\sigma^p] = \infty$ for $p \ge \alpha/\nu$, and since $\nu \in (0,1)$ was arbitrary, we can achieve any $p > \alpha$.

Logarithmic Lyapunov Functions
In this section, we prove Theorem 1(b)(iii). Throughout this section, we write $a_k := \cot(\chi_k \pi \alpha_k)$ and suppose that $\max_{k \in S} \chi_k \alpha_k < 1$, that $\sum_{k \in S} \mu_k a_k = 0$, and that $v_i \in \mathcal{D}^+_{\alpha_i, c_i}$ for each $i \in S$, where the exponent $\delta > 0$ in the definition of $\mathcal{D}^+_{\alpha_i,c_i}$ may be chosen so as not to depend upon $i$.
Lemma 6 shows that there exist $\lambda_k > 0$ ($k \in S$) such that (7.2) holds; we fix such a choice of the $\lambda_k$ from now on. We prove recurrence by establishing the following result.

Lemma 13
Suppose that the conditions of Theorem 1(b)(iii) hold, and that $(\lambda_k;\, k \in S)$ are such that (7.2) holds. Then, there exists $x_0 \in \mathbb{R}_+$ such that $Dh(x,i) \le 0$ for all $i \in S$ and all $x \ge x_0$.

It is not easy to perform the integrals required to estimate $Dh(x,i)$ (and hence establish Lemma 13) directly. However, the integrals for $Dg(x,i)$ are computable (they appear in Lemma 2), and we can use some analysis to relate $Dh(x,i)$ to $Dg(x,i)$. Thus, the first step in our proof of Lemma 13 is to estimate $Dg(x,i)$.
For the Lyapunov function $g$, (5.1) gives, for $x \in \mathbb{R}_+$ and $i \in S$, the expression (7.4), where we have used the fact that $g(x)$ is defined for all $x \in \mathbb{R}$ and is symmetric about $0$, and we have introduced some additional notation. The next lemma, proved in the next subsection, estimates the integrals in (7.4).

Lemma 14
Suppose that $v_i \in \mathcal{D}^+_{\alpha_i, c_i}$. Then, for $\star \in \{\mathrm{one}, \mathrm{sym}\}$ and $\alpha \in (0, 1/\chi_i)$, the stated asymptotics hold, for some $\eta > 0$, as $x \to \infty$.

Note that Lemma 14 together with (7.3) shows that $Dg(x,i) = O(x^{-\alpha_i - \eta})$, by (7.2). This is not enough by itself to establish recurrence, since the sign of the $O(x^{-\alpha_i - \eta})$ term is unknown. This is why we need the function $h(x,i)$.

Proof of Recurrence in the Critical Case
Proof of Lemma 14 To ease notation, we drop the subscripts $i$ everywhere for the duration of this proof. From (7.4) we obtain an expression valid for $x > 0$. Let $\alpha \in (0,2)$. The claim in the lemma for $\star = \mathrm{sym}$ will follow from the estimates in (7.7), since then we obtain from (7.6), with (7.7) and Lemma 2, an expression which yields the stated result via the digamma reflection formula (equation 6.3.7 of [1, p. 259]). We present here in detail the proof of only the final estimate in (7.7); the others are similar. Some algebra followed by the substitution $u = y/(1+x)$ shows that the third integral in (7.7) can be rewritten accordingly. There is a constant $C \in \mathbb{R}_+$ such that the first part of the resulting expression is suitably bounded for all $x$ sufficiently large, using Taylor's theorem for $\log$ and the fact that $c(y)$ is uniformly bounded. On the other hand, the remaining range of $u$ can be handled directly. Combining these estimates, and using the fact that $\alpha \in (0,2)$, we obtain the final estimate in (7.7). The claim in the lemma for $\star = \mathrm{one}$ follows after some analogous computations, which we omit.
Finally, there exists $\varepsilon > 0$ such that, for all $i, j \in S$ and all $x$ sufficiently large, a further inequality holds.

Proof The proof is based on the observation that $(h(x,i))^2 = g(x,i)$. Thus, for $y \ge 0$, a pointwise comparison at $(x,i)$ holds. This gives the second inequality, and also yields the third inequality once combined with the $y \in [0,x]$ case of (7.8).
Finally, for $y \ge x$, note the following. The sign of the expression in (7.9) is non-positive for $y \le \psi(x) := x - 1 + (1+x)e^{\lambda_i - \lambda_j}$ and non-negative for $y \ge \psi(x)$. By the monotonicity in $y$ of the denominator, the expression in (7.9) satisfies the corresponding bounds both for $y \in [0, \psi(x)]$ and for $y \in [\psi(x), \infty)$. Here $h(\psi(x) - x, j) = h(x,i)$, by (7.10). Hence, we obtain a first bound. To improve on this estimate, suppose that $y \ge Kx$, where $K \in \mathbb{N}$ is such that $Kx > \psi(x)$. Then, using the fact that the numerator in (7.9) is positive, we may choose $K \in \mathbb{N}$, sufficiently large (depending on $\max_j \lambda_j$), such that a sharper bound holds for all $j$, all $y \ge Kx$, and all $x$ sufficiently large. It follows that, for some $\varepsilon > 0$, for all $x$ sufficiently large and all $y \ge Kx$, the required estimate holds. The final inequality in the lemma now follows since, for all $x$ sufficiently large, the relevant integral can be bounded using the substitution $u = \frac{2+y}{1+x}$. For $K$ sufficiently large, the term inside the logarithm is uniformly positive, and the claimed bound follows.

Now, we may complete the proofs of Lemma 13 and then Theorem 1(b)(iii).

Proof of Lemma 13
Lemma 15 together with (7.5) shows that $Dh(x,i) \le 0$ for all $i \in S$ and all $x$ sufficiently large, which is the claim of Lemma 13.

Proof of Theorem 1(b)(iii)
Lemma 13 with Lemma 16 shows that $\liminf_{n\to\infty} X_n \le x_0$, a.s., and then a similar argument to that in the proof of parts (a) and (b)(i) of Theorem 1 shows that $\liminf_{n\to\infty} X_n = 0$, a.s.
Proof (of Lemma 16) First note that, by hypothesis, $\mathbb{E} f(X_1,\xi_1) \le \mathbb{E} f(X_0,\xi_0) + C < \infty$, and, iterating this argument, it follows that $\mathbb{E} f(X_n,\xi_n) < \infty$ for all $n \ge 0$. Fix $n \in \mathbb{Z}_+$. For $x_0 \in \mathbb{R}_+$ in the hypothesis of the lemma, write $\lambda = \min\{m \ge n : X_m \le x_0\}$. Let $Y_m = f(X_{m\wedge\lambda}, \xi_{m\wedge\lambda})$. Then, $(Y_m,\, m \ge n)$ is an $(\mathcal{F}_m,\, m \ge n)$-adapted non-negative supermartingale. Hence, by the supermartingale convergence theorem, there exists $Y_\infty \in \mathbb{R}_+$ such that $\lim_{m\to\infty} Y_m = Y_\infty$, a.s. Since $n \in \mathbb{Z}_+$ was arbitrary, the result follows. This completes the proof.

Lemma 17
Let $(X_n, \xi_n)$ be an $\mathcal{F}_n$-adapted process taking values in $\mathbb{R}_+ \times S$. Let $f : \mathbb{R}_+ \times S \to \mathbb{R}_+$ be such that $\sup_{x,i} f(x,i) < \infty$ and $\lim_{x\to\infty} f(x,i) = 0$ for all $i \in S$. Suppose that there exists $x_1 \in \mathbb{R}_+$ for which $\inf_{y \le x_1} f(y,i) > 0$ for all $i$, and for which $D f(x,i) \le 0$ for all $i \in S$ and all $x > x_1$. Then, for any $\varepsilon > 0$, there exists $x \in (x_1, \infty)$ for which, for all $n \ge 0$, on $\{X_n \ge x\}$,
$$\mathbb{P}\bigl[X_m \le x_1 \text{ for some } m \ge n \mid \mathcal{F}_n\bigr] < \varepsilon.$$

Proof Fix $n \in \mathbb{Z}_+$. For $x_1 \in \mathbb{R}_+$ in the hypothesis of the lemma, write $\lambda = \min\{m \ge n : X_m \le x_1\}$ and set $Y_m = f(X_{m\wedge\lambda}, \xi_{m\wedge\lambda})$. Then, $(Y_m,\, m \ge n)$ is an $(\mathcal{F}_m,\, m \ge n)$-adapted non-negative supermartingale, and so converges a.s. as $m \to \infty$ to some $Y_\infty \in \mathbb{R}_+$. Moreover, by the optional stopping theorem for supermartingales, $\mathbb{E}[Y_\infty \mid \mathcal{F}_n] \le Y_n$. Here we have that, a.s., $Y_\infty = Y_\lambda \ge \min_i \inf_{y \le x_1} f(y,i) > 0$ on $\{\lambda < \infty\}$. Combining these inequalities, we obtain
$$\mathbb{P}[\lambda < \infty \mid \mathcal{F}_n] \le \frac{Y_n}{\min_i \inf_{y \le x_1} f(y,i)}.$$
In particular, on $\{X_n \ge x > x_1\}$, we have $Y_n = f(X_n, \xi_n)$, and so the right-hand side is at most $\max_i \sup_{y \ge x} f(y,i) / \min_i \inf_{y \le x_1} f(y,i)$. Since $\lim_{y\to\infty} f(y,i) = 0$ and $\inf_{y \le x_1} f(y,i) > 0$, given $\varepsilon > 0$ we can choose $x > x_1$ large enough so that $\max_i \sup_{y \ge x} f(y,i) < \varepsilon \min_i \inf_{y \le x_1} f(y,i)$; the choice of $x$ depends only on $f$, $x_1$, and $\varepsilon$, and, in particular, does not depend on $n$. Then, on $\{X_n \ge x\}$, $\mathbb{P}[\lambda < \infty \mid \mathcal{F}_n] < \varepsilon$, as claimed.
Finally, we need the following elementary fact.

Lemma 18
Let $A \subseteq \mathbb{R}_+$ be a Borel set, and let $f$ and $g$ be measurable functions from $A$ to $\mathbb{R}$. Suppose that there exist constants $g_-$ and $g_+$ with $0 < g_- < g_+ < \infty$ such that $g_- \le g(u) \le g_+$ for all $u \in A$. Then, $\int_A f(u) g(u)\,du$ can be bounded above and below in terms of $g_\pm$ and $\int_A f(u)\,du$.

Proof It suffices to suppose that $\int_A |f(u)|\,du < \infty$. Then, $\int_A |f(u) g(u)|\,du \le g_+ \int_A |f(u)|\,du < \infty$, and the claimed bounds follow. This completes the proof.