Long term behaviour of a reversible system of interacting random walks

This paper concerns the long-term behaviour of a system of interacting random walks labelled by the vertices of a finite graph. The model is reversible, which allows us to use the method of electric networks in its study. In addition, examples of alternative proofs not requiring reversibility are provided.


Introduction
Let G be a finite non-oriented graph with n ≥ 1 vertices labelled by 1, 2, . . . , n. Somewhat abusing notation, we will use G also for the set of vertices of this graph. Let A = (a_{ij}) be the adjacency matrix of the graph, that is, a_{ij} = a_{ji} = 1 or a_{ij} = 0 according to whether vertices i and j are adjacent (connected by an edge) or not. If vertices i, j ∈ G are connected by an edge, i.e. a_{ij} = 1, we call them neighbours and write i ∼ j. By definition, a vertex is not a neighbour of itself, i.e. a_{ii} = 0 for all i = 1, . . . , n (there are no self-loops).
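For concreteness (an illustration not taken from the paper), the adjacency matrix and the neighbour relation can be set up as follows; the path graph on three vertices is an arbitrary example, and the helper name is ours.

```python
import numpy as np

# Adjacency matrix of the path graph 1 - 2 - 3 (vertices indexed 0, 1, 2 here).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Basic properties required of an adjacency matrix in the paper's setting:
assert (A == A.T).all()          # symmetric: a_ij = a_ji
assert (np.diag(A) == 0).all()   # no self-loops: a_ii = 0

def neighbours(A, i):
    """Return the list of neighbours j ~ i of vertex i."""
    return [j for j in range(A.shape[0]) if A[i, j] == 1]

print(neighbours(A, 1))  # the middle vertex is adjacent to both endpoints: [0, 2]
```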
In other words, denoting the rates of the CTMC ξ(t) by q_{ξ,η}, for ξ, η ∈ Z^n_+, we have

    q_{ξ,η} =
        e^{αξ_i + β(Aξ)_i} = e^{αξ_i + β ∑_{j: j∼i} ξ_j},   η = ξ + e_i,
        1,                                                  η = ξ − e_i, ξ_i ≥ 1,
        0,                                                  ‖η − ξ‖ > 1,            (1.2)

where e_i ∈ Z^n_+ is the i-th unit vector, and ‖·‖ denotes the usual Euclidean norm. It is easy to see that if β = 0, then the CTMC ξ(t) is a collection of n independent reflected continuous-time random walks on Z_+ (symmetric if also α = 0). In general, the Markov chain can be regarded as an inhomogeneous random walk on the infinite graph Z^G_+. Alternatively, it can be interpreted as a system of n random walks on Z_+ labelled by the vertices of the graph G and evolving subject to a nearest-neighbour interaction.
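A direct transcription of the rates (1.2) into code (a sketch; the uniform death rate 1 matches the reversibility computation of Section 3, and the function name and small example are ours):

```python
import numpy as np

def rates(xi, A, alpha, beta):
    """All non-zero jump rates q_{xi,eta} out of configuration xi
    (a sketch: birth rate exp(alpha*xi_i + beta*(A xi)_i), death rate 1)."""
    xi = np.asarray(xi)
    Axi = A @ xi
    out = {}
    for i in range(len(xi)):
        up = xi.copy(); up[i] += 1
        out[tuple(up)] = np.exp(alpha * xi[i] + beta * Axi[i])   # birth at i
        if xi[i] >= 1:
            down = xi.copy(); down[i] -= 1
            out[tuple(down)] = 1.0                               # death at i
    return out

A = np.array([[0, 1], [1, 0]])          # the single-edge graph K_2
q = rates([2, 1], A, alpha=0.0, beta=0.0)
# With alpha = beta = 0 every allowed jump has rate 1 (simple random walk):
assert all(v == 1.0 for v in q.values())
```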
The purpose of the present paper is to study how the long term behaviour of the CTMC ξ(t) depends on the parameters α and β together with properties of the graph G. In our main result (Theorem 2.3), we give a complete classification saying whether the Markov chain is recurrent or transient, and in the recurrent case whether it is positive recurrent or null recurrent. We find phase transitions, with different behaviour in various regions depending on the parameters α, β and properties of the graph G. Furthermore, we give results (Theorem 6.1) on whether the Markov chain is explosive or not. (This is relevant for the transient case only, since a recurrent CTMC is always non-explosive.) These results are less complete and leave one case open.
It is obvious that CTMC ξ(t) is irreducible; hence the initial distribution is irrelevant for our results. (We may if we like assume that we start at 0 = (0, . . . , 0) ∈ Z n + .) CTMC ξ(t) was introduced in [13], where its long term behaviour was studied in several cases. In particular, conditions for positive or null recurrence and transience were obtained in some special cases; these results are extended in the present paper. In addition, the typical asymptotic behaviour of the Markov chain was studied in some transient cases.
One example of our results is the case α < 0 and β > 0, which is of particular interest because of the following phenomenon observed in [13] in some special cases. If α < 0 and β = 0, then, as said above, the CTMC ξ(t) is formed by a collection of independent positive recurrent reflected random walks on Z_+, and is thus positive recurrent. If α < 0 and β < 0, then the Markov chain is still positive recurrent (as shown below). The interaction in this case is, in a sense, competitive, as neighbours obstruct each other's growth. Now keep α < 0 fixed and let β > 0. If β is positive but not large, then one could intuitively expect that the Markov chain is still positive recurrent ("stable"), as the interaction (cooperative in this case) is not strong enough. On the other hand, if β > 0 is sufficiently large, then intuition suggests that the Markov chain becomes transient ("unstable"). It turns out that this is correct, and that the phase transition in the model behaviour occurs at the critical value β = |α|/λ_1(G), where λ_1(G) is the largest eigenvalue of the graph G. Namely, if β < |α|/λ_1(G), then the Markov chain is positive recurrent, and if β ≥ |α|/λ_1(G), then the Markov chain is transient. Moreover, it turns out that exactly in the critical regime, i.e., for β = |α|/λ_1(G), the Markov chain is non-explosive transient. We conjecture that if β > |α|/λ_1(G), then it is explosive transient. This remains an open problem in the general case (see Remark 6.2 below).

Another important contribution of this paper to the previous study of the Markov chain is a recurrence/transience classification in the case α = 0 and β < 0. This case was discussed in [13] only for the simplest graph with two vertices. We show that in general there are only two possible long term behaviours of the Markov chain if α = 0 and β < 0. Namely, the CTMC ξ(t) is either non-explosive transient or null recurrent, and this depends only on the independence number of the graph G.
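The critical value |α|/λ_1(G) is easy to compute numerically for a given graph; a minimal sketch (the helper names are ours, and the cycle C_4 is an arbitrary example):

```python
import numpy as np

def lambda1(A):
    """Largest eigenvalue of the (symmetric) adjacency matrix."""
    return float(np.linalg.eigvalsh(A)[-1])

def critical_beta(alpha, A):
    """Phase boundary beta = |alpha| / lambda_1(G) for alpha < 0."""
    assert alpha < 0
    return abs(alpha) / lambda1(A)

# Cycle C_4: every vertex has degree 2, and lambda_1 = 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(lambda1(A))               # 2.0 (up to rounding)
print(critical_beta(-1.0, A))   # 0.5: beta < 0.5 gives positive recurrence
```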
We also consider some variations of the Markov chain defined above. First, we include in our results the Markov chain above with dynamics obtained by setting β = −∞ (with convention 0 · ∞ = 0). In other words, a component cannot jump up (only down, when possible), if at least one of its neighbours is non-zero; this can thus be interpreted as hard-core interaction. See Section 3.3 for more details on this hard-core case.
In Section 5 we consider the discrete time Markov chain (DTMC) ζ(t) ∈ Z n + that corresponds to CTMC ξ(t), i.e. the corresponding embedded DTMC. We show that our main results also apply to this DTMC.
Finally, in Section 7, we study the CTMC with the rates given by

    q̃_{ξ,η} =
        e^{αξ_i},        η = ξ + e_i,
        e^{−β(Aξ)_i},    η = ξ − e_i, ξ_i ≥ 1,
        0,               otherwise.            (1.3)

We show that similar results hold for this chain, although there is a minor difference.

We use essentially the method of electric networks in our proofs; this is possible since the CTMC ξ(t) is reversible (see Section 3.1). The use of reversibility was rather limited in [13], where the Lyapunov function method and direct probabilistic arguments were the main research techniques. In addition, we provide examples of alternative proofs of some of our results based on the Lyapunov function method and renewal theory for random walks. The advantage of these alternative methods is that they do not require reversibility and can be applied in more general situations. Therefore, the alternative proofs are of interest in their own right.

Remark 1.1. In the case α = β = 0, all rates (1.1) equal 1, and the Markov chain is a continuous-time simple random walk on Z^n_+. It is known that a simple random walk on the octant Z^n_+ is null recurrent for n ≤ 2 and transient for n ≥ 3; this is a variant of the corresponding well-known result for simple random walk on Z^n, and can rather easily be shown using electric network theory, see Example 3.2 below.

Remark 1.2. We allow the graph G to be disconnected. However, there is no interaction between different components of G, and the CTMC ξ(t) consists of independent Markov chains defined by the connected components of G. Hence, the case of main interest is when G is connected.

Remark 1.3. The case when G has no edges is somewhat exceptional but also rather trivial, since then the value of β is irrelevant, and ξ(t) consists of n independent continuous-time random walks on Z_+; in fact, ξ(t) is then as in the case β = 0 for any other G with n vertices. In particular, if G has no edges, we may assume β = 0.

Remark 1.4. The CTMC ξ(t) is a model of interacting spins and, as such, is related to models of statistical physics.
The stationary distribution of a finite Markov chain with bounded components and the same transition rates is of interest in statistical physics. In particular, if the components take only the values 0 and 1, then the stationary distribution of the corresponding Markov chain is equivalent to a special case of the famous Ising model. One of the main problems in statistical physics is to determine whether such a probability distribution is subject to a phase transition as the underlying graph expands indefinitely. In the present paper, we keep the finite graph G fixed, but allow arbitrarily large components ξ_i. We then study phase transitions of this model, in the sense discussed above.

The main results
In order to state our results, we need two definitions from graph theory. We also let e(G) denote the number of edges in G.
Definition 2.1. The eigenvalues of a finite graph G are the eigenvalues of its adjacency matrix A. These are real, since A is symmetric, and we denote them by λ 1 (G) ≥ λ 2 (G) ≥ · · · ≥ λ n (G), so that λ 1 := λ 1 (G) is the largest eigenvalue.
Note that λ_1(G) > 0 except in the rather trivial case e(G) = 0 (see Remark 1.3).

Definition 2.2. (i) A set of vertices in G is independent if no two of its vertices are adjacent. (ii) The independence number κ = κ(G) of a graph G is the cardinality of a largest independent set of vertices.
For example, if G is the cycle graph C_n with n vertices, then κ = ⌊n/2⌋.

The main results of the paper are collected in the following theorem, which generalises the results concerning positive recurrence of the Markov chain obtained in [13].

Theorem 2.3. Let −∞ < α < ∞ and −∞ ≤ β < ∞, and consider the CTMC ξ(t).
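Both graph quantities entering the classification are computable by brute force on small graphs; here we confirm κ(C_n) = ⌊n/2⌋ for small cycles (illustration only; helper names are ours):

```python
import itertools
import numpy as np

def independence_number(A):
    """Brute-force independence number: size of a largest set of
    pairwise non-adjacent vertices."""
    n = A.shape[0]
    for k in range(n, 0, -1):
        for S in itertools.combinations(range(n), k):
            if all(A[i, j] == 0 for i, j in itertools.combinations(S, 2)):
                return k
    return 0

def cycle(n):
    """Adjacency matrix of the cycle C_n."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

for n in range(3, 8):
    assert independence_number(cycle(n)) == n // 2   # kappa(C_n) = floor(n/2)
```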

Reversibility of the Markov chain
Define the following function on Z^n_+:

    W(ξ) := (α/2) ∑_{i=1}^n ξ_i(ξ_i − 1) + β ∑_{i∼j} ξ_i ξ_j = (1/2)⟨(αE + βA)ξ, ξ⟩ − (α/2)S(ξ),   (3.1)

where the second sum is interpreted as a sum over unordered pairs {i, j} (i.e., a sum over the edges of G), ⟨·,·⟩ is the Euclidean scalar product, E is the unit n × n matrix, A is the adjacency matrix of the graph G, and

    S(ξ) := ∑_{i=1}^n ξ_i.   (3.2)

A direct computation gives the detailed balance equation

    e^{W(ξ+e_i)} = e^{W(ξ)} e^{αξ_i + β(Aξ)_i}   (3.3)

for i = 1, . . . , n and ξ ∈ Z^n_+. Note that, recalling (1.2), (3.3) is equivalent to the standard form of the balance equation

    μ(ξ) q_{ξ,η} = μ(η) q_{η,ξ},   ξ, η ∈ Z^n_+.   (3.4)

Hence, (3.3) means that the Markov chain is reversible with invariant measure μ(ξ) := e^{W(ξ)}, ξ ∈ Z^n_+. The explicit formula for the invariant measure μ enables us to easily see when μ is summable, and thus can be normalised to an invariant distribution (i.e., a probability measure); we return to this in Lemma 4.13.
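The detailed balance relation can be spot-checked numerically on a box of small configurations (a sketch; the star graph, the parameter values and the helper names are arbitrary choices of ours, and the rates are those of (1.2)):

```python
import itertools
import numpy as np

def W(xi, A, alpha, beta):
    """W(xi) = (alpha/2) sum_i xi_i(xi_i - 1) + beta sum_{i~j} xi_i xi_j."""
    xi = np.asarray(xi, dtype=float)
    return alpha / 2 * np.sum(xi * (xi - 1)) + beta / 2 * xi @ A @ xi

def birth_rate(xi, A, alpha, beta, i):
    """Rate of the jump xi -> xi + e_i."""
    return np.exp(alpha * xi[i] + beta * (A @ xi)[i])

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # a star with centre 0 (example)
alpha, beta = -0.7, 0.3
for xi in itertools.product(range(4), repeat=3):
    xi = np.array(xi)
    for i in range(3):
        eta = xi.copy(); eta[i] += 1
        # mu(xi) q_{xi,eta} = mu(eta) q_{eta,xi}, with death rate 1:
        lhs = np.exp(W(xi, A, alpha, beta)) * birth_rate(xi, A, alpha, beta, i)
        rhs = np.exp(W(eta, A, alpha, beta)) * 1.0
        assert abs(lhs - rhs) < 1e-9 * max(lhs, rhs)
print("detailed balance verified")
```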
Remark 3.1. Recall that a recurrent CTMC has an invariant measure that is unique up to a multiplicative constant, while a transient CTMC in general may have several linearly independent invariant measures (or none). We do not investigate whether the invariant measure µ is unique (up to constant factors) for our Markov chain also in transient cases.

Electric network corresponding to the Markov chain
Let us define the electric network on the graph Z^n_+ corresponding to the Markov chain of interest. According to the general method (e.g., see [2] or [6]) the construction goes as follows. First, suppose that β > −∞. Given ξ = (ξ_1, . . . , ξ_n) ∈ Z^n_+, replace each edge {ξ − e_i, ξ} (assuming ξ_i ≥ 1) by a resistor with conductance (resistance^{−1}) equal to

    C_{ξ−e_i,ξ} := μ(ξ) q_{ξ,ξ−e_i} = e^{W(ξ)}.   (3.5)

Note that C_{ξ−e_i,ξ} does not depend on i in our case. Also, C_{0,e_i} = e^{W(e_i)} = 1, i.e., the edges connecting the origin 0 with e_i have conductance 1, and thus resistance 1 (Ohm, say). We denote the network consisting of Z^n_+ with the conductances (3.5) by Γ_{α,β,G}. More generally, we will for convenience sometimes denote an electric network by the same symbol as the underlying graph when it is clear from the context what the conductances are.
Let N(Γ) be an electric network on an infinite graph Γ. The effective resistance R ∞ (Γ) = R ∞ (N(Γ)) of the network is defined, loosely speaking, as the resistance between some fixed point of Γ, which in our case we choose as 0, and infinity (see e.g. [2], [6] or [8] for more details). Recall that a reversible Markov chain is transient if and only if the effective resistance of the corresponding electric network is finite. Equivalently, a reversible Markov chain is recurrent if and only if the effective resistance of the corresponding electric network is infinite.
A common approach to showing either recurrence or transience of a reversible Markov chain is based on Rayleigh's monotonicity law. In particular, if N(Γ′) is a subnetwork of N(Γ), obtained by deleting some edges, then R_∞(Γ′) ≥ R_∞(Γ); hence, if R_∞(Γ′) < ∞, then R_∞(Γ) < ∞ and the corresponding Markov chain on Γ is transient. Similarly, if N(Γ′′) is obtained from N(Γ) by short-circuiting some sets of vertices, then R_∞(Γ′′) ≤ R_∞(Γ); hence, if R_∞(Γ′′) = ∞, then R_∞(Γ) = ∞ and the corresponding Markov chain on Γ is recurrent.
Example 3.2. We illustrate these methods, and give a flavour of later proofs, by showing how they work for a simple random walk (SRW) on Z n + , which as said in Remark 1.1 is the special case α = β = 0 of our model. The corresponding electric network has all resistances equal to 1.
First, we obtain a lower bound on R_∞(Z^n_+) by some short-circuiting. (See [2, page 76], or the Nash–Williams criterion and Remark 2.10 in [8, pages 37–38].) Let, recalling (3.2),

    V_L := {ξ ∈ Z^n_+ : S(ξ) = L},   L ≥ 0,   (3.6)

and let Γ′′ be the network obtained from Z^n_+ by short-circuiting each set V_L of vertices; we can regard each V_L as a vertex in Γ′′. Then we have ≍ L^{n−1} resistors in parallel connecting V_{L−1} and V_L. As a result, their conductances (i.e., the inverses of the resistances) sum up; hence the effective resistance R_L between V_{L−1} and V_L is ≍ L^{−(n−1)}. Now Γ′′ consists of a sequence of resistors R_L in series, so we must sum them; consequently, the resistance of the modified network is

    R_∞(Γ′′) = ∑_{L=1}^∞ R_L ≍ ∑_{L=1}^∞ L^{−(n−1)}.   (3.7)

If n = 1 or n = 2, this sum is infinite, and thus R_∞(Z^n_+) ≥ R_∞(Γ′′) = ∞; hence the SRW is recurrent.

On the other hand, if n ≥ 3, one can show that the random walk is transient. See, for example, the description of the tree NT_{2.5849} in [2, Section 2.2.9], or the construction of a flow with finite energy in [8, page 41] (there done for Z^n, but it works for Z^n_+ too), for a direct proof that R_∞(Z^n_+) < ∞. An alternative argument uses the well-known transience of SRW on Z^n (n ≥ 3) as follows. Consider a unit current flow from 0 to infinity on Z^n. By symmetry, for every vertex (x_1, . . . , x_n) ∈ Z^n, the potential is the same at all points (±x_1, . . . , ±x_n). Hence we may short-circuit each such set without changing the effective resistance R_∞. The short-circuited network, Γ′ say, is thus also transient. However, Γ′ can be regarded as a network on Z^n_+ where each edge has a conductance between 2 and 2^n (depending only on the number of non-zero coordinates). Hence, by Rayleigh's monotonicity law, R_∞(Z^n_+) ≤ 2^n R_∞(Γ′) < ∞, and thus the SRW is transient.
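As a numerical illustration of (3.6)-(3.7) (not part of the proof): counting the edges between consecutive shells of Z^n_+ exactly and summing their reciprocal counts reproduces the dichotomy, with the caveat that the short-circuit bound only proves recurrence when it diverges.

```python
from math import comb

def shell_edges(n, L):
    """Number of edges of Z^n_+ between V_{L-1} and V_L: each of the
    C(L-1+n-1, n-1) points of V_{L-1} has n upward neighbours."""
    return n * comb(L - 1 + n - 1, n - 1)

def lower_bound_resistance(n, M):
    """Nash-Williams style lower bound: sum of 1/E_L over L = 1..M."""
    return sum(1 / shell_edges(n, L) for L in range(1, M + 1))

# n = 2: partial sums grow without bound (logarithmically) -> recurrence.
# n = 3: partial sums converge (here to 2/3) -> no conclusion from this bound.
print(lower_bound_resistance(2, 10**5))   # ~ (1/2) H_{10^5} ~ 6.05
print(lower_bound_resistance(3, 10**5))   # ~ 2/3
```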

The hard-core interaction
Let us discuss in more detail the model with hard-core interaction, i.e. β = −∞. Then a component ξ_i can increase only when ξ_j = 0 for every j ∼ i, and it follows that the set

    Γ_0 := {ξ ∈ Z^n_+ : ξ_i ξ_j = 0 whenever i ∼ j}   (3.8)

is absorbing, i.e., if the Markov chain ξ(t) reaches Γ_0, then it will stay there forever. In particular, if the chain starts at 0, then it will stay in Γ_0. Moreover, it is easy to see that given any initial state, the process will a.s. reach Γ_0 at some time (and then thus stay in Γ_0). Hence, any state ξ ∈ Z^n_+ \ Γ_0 (i.e., with at least two neighbouring non-zero components) is a non-essential state, and the long-term behaviour of ξ(t) depends only on its behaviour on Γ_0. Therefore, in the hard-core case we consider the Markov chain with the state space Γ_0. This chain on Γ_0 is easily seen to be irreducible.
Note that Γ_0 is the set of configurations such that ⟨Aξ, ξ⟩ = 0, where A is the adjacency matrix of the graph G. Equivalently, a configuration ξ belongs to Γ_0 if and only if the set {i : ξ_i > 0} is an independent set of vertices in G (see Definition 2.2).

Remark 3.3. In the special case α = 0, the Markov chain with the hard-core interaction β = −∞ can be regarded as a simple symmetric random walk on the subgraph Γ_0 ⊆ Z^n_+. In this special case, (3.1) yields W(ξ) = 0 for every ξ ∈ Γ_0, so by (3.5), the conductance of every edge in Γ_0 is 1. We may also regard this network as a network on Z^n_+ with the conductance of the edge {ξ − e_k, ξ} defined by

    C_{ξ−e_k,ξ} := 1 if ξ ∈ Γ_0,   C_{ξ−e_k,ξ} := 0 if ξ ∉ Γ_0,

where the second case simply means that the edge is not wired.
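Both characterisations of Γ_0 (vanishing quadratic form, independent support) can be cross-checked by brute force on a small example (helper names are ours):

```python
import itertools
import numpy as np

def in_gamma0_quadratic(xi, A):
    """xi in Gamma_0 iff <A xi, xi> = 0."""
    xi = np.asarray(xi)
    return bool(xi @ A @ xi == 0)

def in_gamma0_support(xi, A):
    """xi in Gamma_0 iff the support {i : xi_i > 0} is independent."""
    support = [i for i, x in enumerate(xi) if x > 0]
    return all(A[i, j] == 0 for i, j in itertools.combinations(support, 2))

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path 0 - 1 - 2 (example)
for xi in itertools.product(range(3), repeat=3):
    assert in_gamma0_quadratic(xi, A) == in_gamma0_support(xi, A)
print("both characterisations agree")
```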

Proof of Theorem 2.3
In this section we prove Theorem 2.3 by proving a long series of lemmas treating different cases. Note that we include the hard-core case β = −∞. (For emphasis we say this explicitly each time it may occur.) Recall that Γ α,β,G denotes Z n + regarded as an electrical network with conductances (3.5) corresponding to the CTMC ξ(t).
As a first application of the method of electric networks we treat the case α > 0.
We give similar arguments for the other transient cases. Recall that A is a non-negative symmetric matrix with eigenvalues λ_1, . . . , λ_n. Thus there exists an orthonormal basis of eigenvectors v_i with Av_i = λ_i v_i, i = 1, . . . , n. By the Perron–Frobenius theorem, v_1 can be chosen non-negative, i.e. v_1 ∈ R^n_+. (If G is connected, then v_1 is unique and strictly positive.)

Proof. For each t ≥ 0, define x(t) := t v_1 and y(t) := (⌊x_1(t)⌋, . . . , ⌊x_n(t)⌋). By construction, y(t) is piecewise constant. Let y_0 = 0, y_1, y_2, . . . be the sequence of distinct values of y(t), where at each t such that two or more coordinates of y(t) jump simultaneously, we insert intermediate vectors, so that only one coordinate changes at a time and ‖y_{k+1} − y_k‖ = 1 for all k. Then S(y_k), the sum of the coordinates of y_k, is equal to k, and thus k/n ≤ ‖y_k‖ ≤ k. Furthermore, for each k there is a t_k such that ‖y_k − y(t_k)‖ ≤ n, and thus (4.4) holds, since α + βλ_1 ≥ 0 by assumption. Therefore, by (3.1), W(y_k) is bounded below as in (4.5). Consider the subnetwork Γ′ ⊂ Γ_{α,β,G} formed by the vertices {y_k}. The resistance of the edge connecting y_{k−1} and y_k is equal to R_k = e^{−W(y_k)}, so that the effective resistance of the subnetwork satisfies R_∞(Γ′) = ∑_k R_k < ∞. Hence R_∞(Γ_{α,β,G}) < ∞, and the Markov chain is transient.

Proof. We do exactly as in the proof of Lemma 4.2 up to (4.4). Now α = 0, so (4.5) is no longer good enough. Instead we note that (4.3) implies a lower bound ⟨Ay_k, y_k⟩ ≥ ck² for some c > 0. Furthermore, λ_1 > 0 since e(G) > 0, and thus (3.1), (4.4) and (4.7) yield, recalling α = 0, W(y_k) ≥ c_1 k² for some c_1 > 0 and all large k. It follows again that the subnetwork Γ′ := {y_k} has finite effective resistance, and thus the Markov chain is transient.
Proof. As said in Remark 1.1 and Example 3.2, in this case, the Markov chain is just simple random walk on Z n + , which is transient for n ≥ 3.
Proof. When e(G) = 0, the parameter β is irrelevant and may be changed to 0. The result thus follows from Lemma 4.4.
Proof. Since κ ≥ 3, there are three vertices of the graph G not adjacent to each other; w.l.o.g. let them be 1, 2 and 3. Consider the subnetwork

    Γ′ := {ξ ∈ Z^n_+ : ξ_i = 0 for i > 3} ≅ Z^3_+.

By (3.1), we have in this case W(ξ) = 0 for every ξ ∈ Γ′, and thus (3.5) implies that in the corresponding electrical network all edges in Γ′ have conductance 1, and thus resistance 1. Hence, the Markov chain corresponding to the network Γ′ is simple random walk on Γ′ ≅ Z^3_+. By Remark 1.1 and Example 3.2, a simple random walk on the octant Z^3_+ is transient; hence R_∞(Γ′) < ∞, and thus the Markov chain is transient.
We turn to proving recurrence in the remaining cases.
For L ≥ 1, there are O(L^{n−1}) vertices in V_L, and thus O(L^{n−1}) edges between V_{L−1} and V_L. When short-circuiting each V_L, we can regard each V_L as a single vertex in Γ′′; the edges between V_{L−1} and V_L then become parallel, and can be combined into a single edge between V_{L−1} and V_L. The conductance, C_L say, of this edge is obtained by summing the conductances of all edges between V_{L−1} and V_L (since they are in parallel); thus

    C_L = O(1).   (4.12)

Consequently, the resistances C_L^{−1} are bounded below, and since Γ′′ is just a path with these resistances in series, R_∞(Γ′′) = ∑_{L=1}^∞ C_L^{−1} = ∞. As explained in Section 3.2, this implies that R_∞(Γ_{α,β,G}) = ∞ and that the Markov chain ξ(t) is recurrent.
The assumption κ ≤ 2 implies that amongst any three vertices of the graph there are at least two which are connected by an edge.
Let b := −β > 0. Then, since α = 0, (3.1) yields

    W(ξ) = −b ∑_{i∼j} ξ_i ξ_j.   (4.14)

Let again V_L be defined by (3.6), short-circuit all the vertices within each V_L, and denote the resulting network by Γ′′. We can regard each V_L as a vertex of Γ′′. Fix L ∈ Z_+ and consider x = (x_1, . . . , x_n) ∈ V_L. Order the components of x in decreasing order x_(1) ≥ x_(2) ≥ · · · ≥ x_(n), and let u := x_(3); then, by construction, u ∈ {0, 1, . . . , ⌊L/3⌋}. Among the three vertices corresponding to x_(1), x_(2), x_(3), at least two are connected, so that we can bound

    W(x) ≤ −b x_(3)² = −b u².   (4.15)

Hence, by (3.5), the conductance of each of the resistors coming to x from V_{L−1} is bounded above by e^{−bu²}. Next, the number of such x ∈ V_L with x_(3) = u is bounded by n!(u + 1)^{n−3} L, as there are at most u + 1 possibilities for each of x_(4), x_(5), . . . , x_(n), at most L possibilities for x_(2), and then x_(1) = L − ∑_{i≥2} x_(i) is determined; moreover, there are at most n! different orderings of the x_i for given x_(1), . . . , x_(n).
All these resistors are in parallel, so we sum their conductances to get the effective conductance between V_{L−1} and V_L, which is thus bounded above by

    ∑_{u=0}^{⌊L/3⌋} n · n!(u + 1)^{n−3} L e^{−bu²} ≤ C(n, b) L

for some C(n, b) < ∞. (Thus, the conductance between V_{L−1} and V_L is of the same order as in the case Z^2_+ in Example 3.2.) Hence, the effective resistance R_L between V_{L−1} and V_L is bounded below by cL^{−1}, and thus R_∞(Γ′′) = ∑_{L=1}^∞ R_L ≥ c ∑_{L=1}^∞ L^{−1} = ∞. Finally, R_∞(Γ_{α,β,G}) ≥ R_∞(Γ′′) = ∞, and the chain is therefore recurrent.

Proof. Since e(G) = 0, we may replace β by 0; the result then follows from Lemma 4.9.
This completes the classification of transient and recurrent cases. We proceed to distinguish between positive recurrent and null recurrent cases; we do this by analysing the invariant measure μ(ξ) = e^{W(ξ)}, and in particular its total mass

    Z_{α,β,G} := ∑_{ξ∈Z^n_+} μ(ξ) = ∑_{ξ∈Z^n_+} e^{W(ξ)}.

Lemma 4.13. Let −∞ < α < ∞ and −∞ ≤ β < ∞. Then Z_{α,β,G} < ∞ if and only if α < 0 and α + βλ_1 < 0.

Proof. We consider four different cases.
Proof. In all four cases, the Markov chain is recurrent, by Lemmas 4.7, 4.8, 4.9, 4.10, 4.11. Hence the chain is non-explosive, and the invariant measure is unique up to a constant factor; furthermore, the chain is positive recurrent if and only if this measure has finite total mass so that there exists an invariant distribution. In other words, in these recurrent cases, the chain is positive recurrent if and only if Z α,β,G < ∞. By Lemma 4.13, this holds in case (i), but not in (ii)-(iv).
Proof of Theorem 2.3. The theorem follows by collecting Lemmas 4.1-4.6 and 4.14.

The corresponding discrete time Markov chain
In this section we consider the discrete time Markov chain (DTMC) ζ(t) ∈ Z n + that corresponds to the CTMC ξ(t), i.e. the corresponding embedded DTMC. Note that we use t to denote both the continuous and the discrete time, although the two chains are related by a random change of time.
Recall that the transition probabilities of the DTMC ζ(t) are proportional to the corresponding transition rates of the CTMC ξ(t). Thus, if the rates of ξ(t) are q_{ξ,η}, given by (1.2), and C_{ξ,η} = C_{η,ξ} are the conductances given by (3.5) (with C_{ξ,η} = 0 unless ‖ξ − η‖ = 1), and further q_ξ := ∑_{η} q_{ξ,η} and C_ξ := ∑_{η} C_{ξ,η}, then the transition probabilities of ζ(t) are

    p_{ξ,η} := q_{ξ,η}/q_ξ = C_{ξ,η}/C_ξ.   (5.1)

It is obvious that a CTMC is irreducible if and only if the corresponding DTMC is, and it is easy to see that the same holds for reversibility. Similarly, since a CTMC and the corresponding DTMC pass through the same states (with a random change of time parameter), if one is recurrent [or transient], then so is the other. However, in general, since the two chains pass through the states at different speeds, one of the chains may be positive recurrent and the other null recurrent. (Recall that many different CTMCs have the same embedded DTMC, and that some of them may be positive recurrent and others not.) In our case, there is no such complication.
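The embedded chain can be sketched directly from the rates; the check below confirms numerically that p_{ξ,η} = q_{ξ,η}/q_ξ is reversible with respect to μ(ξ)q_ξ = C_ξ (a sketch for K_2 with arbitrary parameter values; helper names are ours, and the rates are those of (1.2)):

```python
import itertools
import numpy as np

def q_rates(xi, A, alpha, beta):
    """CTMC rates out of xi: birth exp(alpha*xi_i + beta*(A xi)_i), death 1."""
    out = {}
    Axi = A @ np.asarray(xi)
    for i in range(len(xi)):
        up = list(xi); up[i] += 1
        out[tuple(up)] = np.exp(alpha * xi[i] + beta * Axi[i])
        if xi[i] >= 1:
            down = list(xi); down[i] -= 1
            out[tuple(down)] = 1.0
    return out

def dtmc_probs(xi, A, alpha, beta):
    """Embedded DTMC: p_{xi,eta} = q_{xi,eta} / q_xi; also return q_xi."""
    q = q_rates(xi, A, alpha, beta)
    total = sum(q.values())
    return {eta: r / total for eta, r in q.items()}, total

A = np.array([[0, 1], [1, 0]])
alpha, beta = -0.5, 0.2

def mu(xi):
    """mu(xi) = exp(W(xi)) with W as in (3.1)."""
    x = np.asarray(xi, float)
    return np.exp(alpha / 2 * np.sum(x * (x - 1)) + beta / 2 * x @ A @ x)

# Check mu(xi) q_xi p_{xi,eta} = mu(eta) q_eta p_{eta,xi} on a box of states.
for xi in itertools.product(range(4), repeat=2):
    p, qx = dtmc_probs(xi, A, alpha, beta)
    for eta, pxe in p.items():
        if max(eta) <= 3:              # stay inside the enumerated box
            p2, qe = dtmc_probs(eta, A, alpha, beta)
            lhs = mu(xi) * qx * pxe
            rhs = mu(eta) * qe * p2[tuple(xi)]
            assert abs(lhs - rhs) < 1e-9 * max(lhs, rhs)
```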

Theorem 5.1. The conclusions in Theorem 2.3 hold also for the DTMC ζ(t).
Before proving the theorem, we note that it follows from (5.1) that the DTMC ζ(t) is reversible with an invariant measure

    μ̃(ξ) := C_ξ.   (5.2)

We denote the total mass of this invariant measure by

    Z̃_{α,β,G} := ∑_{ξ∈Z^n_+} C_ξ.   (5.3)

Since each edge conductance C_{ξ−e_i,ξ} = e^{W(ξ)} = μ(ξ), and each ξ ≠ 0 has at least one and at most 2n incident edges, summing over ξ gives

    Z_{α,β,G} − 1 ≤ Z̃_{α,β,G} ≤ 2n Z_{α,β,G}.   (5.4)

Lemma 5.2. Z̃_{α,β,G} < ∞ if and only if α < 0 and α + βλ_1 < 0.

Proof. Immediate by (5.4) and Lemma 4.13.

Proof of Theorem 5.1. As said above, ζ(t) is transient precisely when ξ(t) is.
A DTMC is positive recurrent if and only if it has an invariant distribution, and then every invariant measure is a multiple of the stationary distribution. Hence, ζ(t) is positive recurrent if and only if the invariant measure μ̃(ξ) has finite mass, i.e., if Z̃_{α,β,G} < ∞. Lemma 5.2 shows that this holds precisely in case (i) of Theorem 2.3, i.e., when ξ(t) is positive recurrent.

Remark 5.3. We can use the DTMC ζ(t) to give an alternative proof of Lemma 4.14(i) without Lemmas 4.7-4.8. Assume α < 0 and α + βλ_1 < 0. Then, by Lemma 5.2, Z̃_{α,β,G} < ∞. Hence, the DTMC ζ(t) has a stationary distribution and is thus positive recurrent. (Recall that this implication holds in general for a DTMC, but not for a CTMC, see Remark 4.12.) Hence ξ(t) is recurrent, and thus non-explosive. Furthermore, Lemma 4.13 shows that also Z_{α,β,G} < ∞, and thus ξ(t) too has a stationary distribution. Since ξ(t) is non-explosive, this implies that ξ(t) is positive recurrent.

Explosions
It was shown in [13] that in most of the transient cases in Theorem 2.3, the CTMC ξ(t) is explosive. (Recall that a recurrent CTMC is non-explosive.) We complement this by exhibiting in Lemma 6.3 one non-trivial transient case where ξ(t) is non-explosive.
Recall also the standard fact that if, as above, q_ξ := ∑_η q_{ξ,η} is the total rate of leaving ξ, and ζ(t) is the DTMC in Section 5, then ξ(t) is explosive if and only if ∑_{t=1}^∞ q_{ζ(t)}^{−1} < ∞ with positive probability. In particular, ξ(t) is non-explosive when the rates q_ξ are bounded.
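The criterion can be illustrated on a toy pure birth chain with rates q_k = e^{αk}, α > 0 (not the paper's chain): the expected total lifetime is ∑_k e^{−αk} < ∞, and simulated lifetimes concentrate near this value.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
K = 200   # truncation; the remaining holding times are negligibly small

def explosion_time():
    """Total lifetime of a pure birth chain with rates q_k = exp(alpha * k):
    the holding time in state k is Exponential with mean exp(-alpha * k)."""
    return rng.exponential(np.exp(-alpha * np.arange(K))).sum()

times = np.array([explosion_time() for _ in range(2000)])
expected = 1 / (1 - np.exp(-alpha))   # sum of the means: sum_k e^{-alpha k}
print(times.mean(), expected)         # both close to 1.582
```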
Combining these results, we obtain the following partial classification, proved later in this section. Let ν_i denote the degree of vertex i ∈ G, and note that

    min_i ν_i ≤ λ_1(G) ≤ max_i ν_i.   (6.1)

The case left open is α + β min_i ν_i ≤ 0 < α + βλ_1(G) (and, as a consequence, β > 0). We conjecture that ξ(t) always is explosive in this case, but leave this as an open problem. (Our intuition is that in this case, which is transient by Theorem 2.3, ξ(t) will tend to infinity along a path that stays rather close to the line {sv_1 : s ∈ R} in R^n, and that the rates q_ξ are exponentially large close to this line.)

Lemma 6.3. If α < 0 and α + βλ_1(G) = 0, then the CTMC ξ(t) is transient and non-explosive.
We prove first an elementary lemma.
For explosion, we may assume that G is connected, since we otherwise may consider the components of G separately, see Remark 1.2. Then [13, Theorem 1(3) and its proof] shows that if α + β min_i ν_i > 0 and β ≥ 0, then ξ(t) explodes a.s.; this includes the cases (ii)(b) and (ii)(c) above, and the case α > 0, β ≥ 0. Furthermore, [13, Theorem 2] shows that if α > 0 and β ≤ 0, then ξ(t) a.s. explodes; together with the result just mentioned, this shows explosion whenever α > 0.

Remark 6.6. It is shown in [13] that explosion may occur in several different ways, depending on both the parameters α, β and the graph G. For example, if G is a star, then there are (at least) three possibilities, each occurring with probability 1 when (α, β) is in some region: a single component ξ_i explodes (tends to infinity in finite time); two adjacent components explode simultaneously; or all components explode simultaneously.
Furthermore, the results in [13] show that in the explosive cases of Theorem 6.1(ii), the Markov chain asymptotically evolves as a pure birth process, in the sense that, with probability one, there is a random finite time after which none of the components decreases, i.e. there are no "death" events after this time. Consequently, the corresponding discrete time Markov chain can be regarded as a growth process on a graph, similar to interacting urn models (e.g., see the models in [1], [11] and [12]). One of the main problems for such growth processes is the same as for urn models. Namely, it is of interest to understand how exactly the process escapes to infinity, i.e. whether all components grow indefinitely, or the growth localises in a particular subset of the underlying graph.
We do not discuss this sort of problem here and hope to address it elsewhere.

A modified model
In this section, we study the CTMC ξ̃(t) with the rates q̃_{ξ,η} in (1.3), and the corresponding DTMC ζ̃(t). This model is interesting since we have "decoupled" α and β, with birth rates depending on α and death rates depending on β.
Since q̃_{ξ,ξ±e_i} differ from q_{ξ,ξ±e_i} by the same factor e^{−β ∑_{j:j∼i} ξ_j}, which furthermore does not depend on ξ_i, the balance equation (3.4) holds for q̃_{ξ,η} too, and thus ξ̃(t) has the same invariant measure μ(ξ) = e^{W(ξ)} as ξ(t).
Proof. The lemmas in Section 4 all hold for ξ̃(t) too by the same proofs with no or minor modifications, except Lemma 4.2 in the case α < 0, α + βλ_1 = 0; we omit the details. This exceptional case is treated in Lemmas 7.4 and 7.5 below.
A few cases alternatively follow by Remark 7.1 and the Rayleigh monotonicity law. Before treating the exceptional case, we give a simple combinatorial lemma.

Lemma 7.3. Suppose that G is a connected graph with e(G) ≥ 2, and let as above v_1 = (v_{11}, . . . , v_{1n}) be a positive eigenvector of A with eigenvalue λ_1. Then, for each i,

    2v_{1i} < ∑_{j=1}^n v_{1j}.   (7.2)

Proof. First, e.g. by (6.1), λ_1 ≥ 1. Hence, for every i,

    v_{1i} ≤ λ_1 v_{1i} = ∑_{j: j∼i} v_{1j} ≤ ∑_{j≠i} v_{1j}.   (7.3)

If one of the inequalities in (7.3) is strict, then (7.2) holds. In the remaining case, λ_1 = 1 and every j ≠ i is a neighbour of i; then ν_i = n − 1, so e(G) ≥ n − 1 and, since λ_1 is at least the average degree 2e(G)/n, we get 1 ≥ 2(n − 1)/n, i.e. n ≤ 2 and e(G) ≤ 1, contradicting the assumption e(G) ≥ 2.
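The inequality (7.2), in the reconstructed form 2v_{1i} < ∑_j v_{1j}, can be spot-checked by exhausting all connected graphs on four vertices with at least two edges (an illustration; helper names are ours):

```python
import itertools
import numpy as np

def perron_vector(A):
    """Positive eigenvector for the largest eigenvalue (connected graph)."""
    _, V = np.linalg.eigh(A)
    return np.abs(V[:, -1])   # Perron-Frobenius: can be taken positive

def connected(A):
    """Breadth-first search from vertex 0."""
    n = A.shape[0]
    reach, frontier = {0}, {0}
    while frontier:
        frontier = {j for i in frontier for j in range(n)
                    if A[i, j] and j not in reach}
        reach |= frontier
    return len(reach) == n

n = 4
pairs = list(itertools.combinations(range(n), 2))
for mask in range(1, 2 ** len(pairs)):
    A = np.zeros((n, n), dtype=int)
    edges = [pairs[k] for k in range(len(pairs)) if mask >> k & 1]
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    if len(edges) >= 2 and connected(A):
        v = perron_vector(A)
        assert all(2 * v[i] < v.sum() + 1e-12 for i in range(n))
print("inequality (7.2) holds on all tested graphs")
```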
Proof. If G is connected, then v 1 satisfies (7.2) by Lemma 7.3.
On the other hand, if G is disconnected and has a component with at least two edges, it suffices to consider that component.
In the remaining case, G consists only of isolated edges and vertices. There are at least two edges, which w.l.o.g. we may assume are {1, 2} and {3, 4}. Then λ_1 = 1 and v_1 := (1/2)(e_1 + e_2 + e_3 + e_4) = (1/2)(1, 1, 1, 1, 0, . . . , 0) is an eigenvector satisfying (7.2). Hence we may assume that v_1 satisfies (7.2), and thus there exists δ > 0 such that for every i = 1, . . . , n,

    2v_{1i} ≤ (1 − δ) ∑_{j=1}^n v_{1j}.

We follow the proof of Lemma 4.2, and note that there is equality in (4.4) and (4.5). Hence, writing a′ := −α/2 > 0 and using (4.3), the resistance of the edge connecting y_k and y_{k+1} satisfies, for the appropriate i,

    R_k ≤ c e^{−δa′k}

for some c > 0. Hence ∑_{k=1}^∞ R_k < ∞, and the network is transient by the same argument as before.
The invariant measure e^{W(ξ)} is the same as for ξ(t) and has total mass Z_{α,β,G} = ∞ by Lemma 4.13; hence ξ̃(t) is not positive recurrent.
This completes the proof when G is connected. If G is disconnected, then G consists of one edge and one or several isolated vertices. By Remark 1.2, ξ̃(t) then consists of n − 1 independent parts: one part is the CTMC in Z^2_+ defined by the graph K_2, which is null recurrent by the first part of the proof; the other parts are independent copies of the CTMC in Z_+ defined by a single vertex, and these are positive recurrent since α < 0. It is now easy to see that the combined ξ̃(t) is null recurrent.
The corresponding DTMC ζ̃(t) has invariant measure

    μ̂(ξ) := e^{W(ξ)} q̃_ξ,   where q̃_ξ := ∑_η q̃_{ξ,η}.   (7.10)

Note that this (in general) differs from the invariant measure C_ξ for ζ(t), see (5.2). Denote the total mass of this invariant measure by

    Ẑ_{α,β,G} := ∑_{ξ∈Z^n_+} μ̂(ξ).

There is no obvious analogue of the relation (5.4), but we can nevertheless prove the following analogue of Lemma 5.2.

Lemma 7.6. Let −∞ < α < ∞ and −∞ ≤ β < ∞. Then Ẑ_{α,β,G} < ∞ if and only if α < 0 and α + βλ_1 < 0.
Proof. By the proof of Lemma 4.13 with minor modifications. In particular, in the case α < 0 and α + βλ 1 = 0, we argue also as in (7.5)-(7.7) in the proof of Lemma 7.4 (but now allowing δ = 0). We omit the details.
Proof. By Theorem 7.2 for recurrence vs transience, and by Theorem 7.7 for positive recurrence vs null recurrence.
We are not going to analyse the modified model any further.

Alternative proofs using Lyapunov functions
In this section we give alternative proofs of some parts of Theorem 2.3. These proofs do not use reversibility, and therefore have potential extensions to cases where electric networks are not applicable. They are based on the following recurrence criterion for countable Markov chains using Lyapunov functions, see e.g. [4]. Note that the Lyapunov function f(ξ) is far from unique. The idea of the method is to find some explicit function f for which the conditions can be verified. There is also a related criterion for transience [4, Theorem 2.2.2], but we will not use it here.
We give only some examples. (See also [13] for further examples.) It might be possible to give a complete proof of Theorem 2.3 using these methods, but this seems rather challenging. Note that (since our Markov chains have bounded steps), the Lyapunov function f can be changed arbitrarily on a finite set; hence it suffices to define f (ξ) (and verify its properties) for ξ large. We do so, usually without comment, in the examples below.
Example 8.2. (Proof of the hard-core case of Theorem 2.3(ii)(a) by the recurrence criterion 8.1.) Assume that α = 0, β = −∞ and κ(G) ≤ 2. As said in Section 3.3, we may assume that the Markov chain lives on Γ_0 defined in (3.8); since κ ≤ 2, this implies that no more than two components of the process can be non-zero. Therefore, the Markov chain evolves as a simple random walk on a certain finite union of quadrants Z^2_+ and half-lines Z_+ glued along the axes. Each of these random walks is null recurrent, and, hence, the whole process should be null recurrent as well. We provide a rigorous justification of this heuristic argument by using the recurrence criterion 8.1.
The generator L of the Markov chain in the case α = 0, β = −∞ is given by Lf(ξ) = Σ_η q_{ξ,η}(f(η) − f(ξ)). We define a Lyapunov function on Z^n_+ by (8.1), where e = (1, . . . , 1) ∈ Z^n_+ is the vector all of whose coordinates equal 1, and C_1 > 0 is sufficiently large so that the expression inside the logarithm is greater than 1. Note that the function is defined for any state in Z^n_+, but we consider it only on the subset Γ_0. Let ξ ∈ Γ_0 with ‖ξ‖ > C_1 + 1. First, assume that ξ has two non-zero components, say x > 0 and y > 0; then both x and y can increase as well as decrease, and a direct computation shows that Lf(ξ) ≤ 0. Next, assume that ξ has only one non-zero component, say x = a + 1 > 0. Then f(ξ) = log(a^2 + 1/2), and this component can both increase and decrease. Note that some of the other components may also increase by 1, and assume there are m ≥ 0 such components. Again a direct computation shows that Lf(ξ) ≤ 0. Hence, Lf(ξ) ≤ 0 whenever ξ ∈ Γ_0 with ‖ξ‖ > C_1 + 1, and it follows from the recurrence criterion 8.1 that the CTMC ξ(t) is recurrent. Now consider the case α = 0 and −∞ < β < 0. The generator of the Markov chain with parameter α = 0 is given in (8.2). We consider for simplicity only some small graphs G, using modifications of the Lyapunov function (8.1) used in the hard-core case.
Recurrence in the case α = 0, b := −β > 0 and G = K_2, the graph with just two vertices and a single edge, was shown in [13] by applying the recurrence criterion 8.1 with the Lyapunov function f(ξ) = log(ξ_1 + ξ_2 + 1). Alternatively, one could use e.g. f(ξ) = log(ξ_1 + ξ_2) or log(ξ_1^2 + ξ_2^2). We extend this to the case G = K_n, the complete graph with n vertices, for any n ≥ 2. Regard f(ξ) as a function on R^n \ {0} and write r = ‖ξ‖. The partial derivatives of f are ∂f(ξ)/∂ξ_i = ξ_i/r^2, and all second derivatives are O(r^{−2}). Hence, a Taylor expansion of each of the differences in (8.2) yields an expansion (8.4) of Lf(ξ), valid for ξ ∈ Z^n_+. Suppose first that at least two components ξ_i are positive. Then (Aξ)_i = Σ_{j≠i} ξ_j ≥ 1 for every i, and thus (8.4) implies, since r ≤ Σ_i ξ_i, that Lf(ξ) ≤ 0. Suppose next that only one component is positive, say ξ_i = x; then a similar computation gives an expression which is negative when x is large. Hence, in both cases, Lf(ξ) ≤ 0 when ‖ξ‖ is large, and recurrence follows by the recurrence criterion 8.1.
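The Taylor-expansion step can be checked numerically. The following sketch (an illustration only, not part of the proof) assumes f(ξ) = log ‖ξ‖ and compares central finite differences against the gradient formula ξ_i/r^2 and the O(r^{−2}) bound on the second derivatives.

```python
import math

# Finite-difference check of the derivative formulas, assuming f(xi) = log ||xi||:
# gradient: df/dxi_i = xi_i / r^2; every second derivative is O(r^{-2}).

def f(xi):
    return 0.5 * math.log(sum(c * c for c in xi))  # log r = (1/2) log r^2

def partial(i, xi, h=1e-6):
    """Central finite difference for df/dxi_i."""
    up = list(xi); up[i] += h
    dn = list(xi); dn[i] -= h
    return (f(up) - f(dn)) / (2 * h)

def second(i, j, xi, h=1e-4):
    """Central finite difference for the (i, j) second derivative."""
    up = list(xi); up[j] += h
    dn = list(xi); dn[j] -= h
    return (partial(i, up, h) - partial(i, dn, h)) / (2 * h)

xi = [3.0, 4.0, 12.0]        # an arbitrary test point with r = 13
r2 = sum(c * c for c in xi)  # r^2 = 169

# gradient matches xi_i / r^2
for i in range(3):
    assert abs(partial(i, xi) - xi[i] / r2) < 1e-8

# all second derivatives are within 2 / r^2 at this point
assert all(abs(second(i, j, xi)) <= 2.0 / r2 for i in range(3) for j in range(3))
print("derivative formulas confirmed at", xi)
```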
The argument in Example 8.3 used the fact that G is a complete graph, so that (Aξ)_i ≥ 1 unless only ξ_i is non-zero. Similar arguments work for some other graphs.

A An alternative argument in the hard-core case
We present here yet another argument, which seems to be able to give an alternative proof of Theorem 2.3(iii)(e). However, the argument is not completely rigorous, so in its present form it should be regarded only as a heuristic argument. It is included here in order to suggest future developments.
The Markov chain is simple random walk on this state space.
If the random walk is in Γ^3_j, then it will sooner or later reach a point where two of the three allowed coordinates are 0, and thus only one is non-zero, say x_j. The random walk then may either go back into Γ^3_j, or it may move into Γ^2_j. In the latter case, it might either return again to the intersection line L_j = Γ^2_j ∩ Γ^3_j, or it might cross Γ^2_j and reach L_{j+3}, in which case it may go on to other parts of Γ. We want to show that with positive probability, the latter case will not happen. Consequently, the random walk will a.s. eventually be confined to Γ^3_j ∪ Γ^2_j ∪ Γ^2_{j+2} ∪ Γ^2_{j+4} for j = 1 or 2; in particular, the random walk is transient. If the random walk is in Γ^3_j, it may escape through Γ^2_j, Γ^2_{j+2} or Γ^2_{j+4}. Allowing three routes of escape does not seem to be significantly different from just one, so we consider for simplicity instead a random walk on Γ′_j := Γ^3_j ∪ Γ^2_j. (This is one of the non-rigorous steps.) The rest of the argument is thus devoted to showing the following claim, which implies that there is a positive probability of not escaping.
Claim A.1. A random walk on Γ′_j a.s. hits the line L_{j+3} ⊂ Γ^2_j only a finite number of times.
We drop the index j. Note that Γ′ is a product Γ′ = U × (V ∪ W), where U ≅ Z_+, V ≅ Z_+ and W ≅ Z^2_+, and V and W intersect in the single point 0 (the 0 in both V and W).
We consider continuous time, with jumps with rate 1 along any edge. Then the random walk consists of two independent components, a continuous-time random walk in U and a continuous-time random walk in V ∪ W .
Consider the latter, i.e. the continuous-time random walk in V ∪ W. It returns infinitely often to 0, making excursions into either V or W; the excursions are independent. (Recall that random walk in V or in W is recurrent, thus every excursion is finite and eventually returns to 0.) Consider first excursions into V ≅ Z_+. Let T_V be the time until the first return, f_V(s) := E e^{−sT_V} its Laplace transform, and g_V(s) the Laplace transform of the corresponding renewal measure µ_V. By symmetry, we can consider a random walk on Z instead of Z_+, and then the intensity dµ_V(x)/dx of a return at x is ≈ x^{−1/2}, where ≈ means 'of the same order as'. Hence, as s → 0, g_V(s) ≈ s^{−1/2}, and thus f_V(s) ≈ 1 − s^{1/2}. For excursions into W ≅ Z^2_+, we similarly have dµ_W(x)/dx ≈ x^{−1} and hence, for small s, g_W(s) ≈ log(1/s). Combining these estimates, and using the fact that, for t ≥ 1 say, t^{−0.49} ≈ ∫_0^1 s^{−0.51} e^{−st} ds, one finds that the expected number of excursions into Γ^2 that hit the diagonal is finite. Consequently, a.s. only a finite number of excursions into Γ^2 will hit the diagonal. Any excursion hitting the line L_{j+3} = {0} × V has to hit the diagonal first, and thus there is a.s. only a finite number of such excursions, each hitting the line a finite number of times.
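The asymptotic relation t^{−0.49} ≈ ∫_0^1 s^{−0.51} e^{−st} ds can be confirmed numerically: substituting u = st gives I(t) = t^{−0.49} ∫_0^t u^{−0.51} e^{−u} du, and the remaining integral converges to Γ(0.49) as t → ∞. The sketch below compares the integral at two values of t with the predicted power-law scaling.

```python
import math

# Check that I(t) = integral_0^1 s^{-0.51} e^{-st} ds scales like t^{-0.49}:
# substituting u = st gives I(t) = t^{-0.49} * integral_0^t u^{-0.51} e^{-u} du,
# and the latter integral tends to Gamma(0.49) as t grows.

def integral(t, n=100_000):
    """Trapezoidal rule for I(t) on a log-spaced grid (the integrand has an
    integrable singularity at s = 0)."""
    lo, hi = 1e-12, 1.0
    grid = [lo * (hi / lo) ** (k / n) for k in range(n + 1)]
    vals = [s ** -0.51 * math.exp(-s * t) for s in grid]
    area = sum((grid[k + 1] - grid[k]) * (vals[k] + vals[k + 1]) / 2
               for k in range(n))
    # add the tiny missed mass on [0, lo], where e^{-st} is essentially 1
    return area + lo ** 0.49 / 0.49

# quadrupling t should multiply I(t) by about 4^{-0.49}
ratio = integral(400.0) / integral(100.0)
print(ratio, 4.0 ** -0.49)  # the two numbers nearly agree
```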
This completes our (partly heuristic) argument for Claim A.1, and thus for transience of the Markov chain.
Lemma A.2. Let (X_n, Y_n) be a discrete-time symmetric simple random walk on Z^2_+. Then, for every x > 0,
1/x ≤ P((X_n, Y_n) hits the diagonal before it hits the line y = 0 | (X_0, Y_0) = (x, 1)) ≤ 2/(1 + x). (A.6)
Proof. Define τ := min{n : Y_n = 0 or X_n = Y_n}, Z_n := Y_n/(X_n + Y_n) and Z̄_n := Z_{n∧τ}. A direct computation gives that E(Z_{n+1} − Z_n | (X_n, Y_n)) < 0 if 0 < Y_n < X_n. Hence, the stopped process Z̄_n is a bounded supermartingale. Furthermore, τ < ∞ a.s., and it follows from the Optional Stopping Theorem that E(Z̄_τ | (X_0, Y_0) = (x, 1)) ≤ Z̄_0 = 1/(x + 1). On the other hand, Z̄_τ takes only the values 0 and 1/2, with the latter value if the diagonal is hit first. Thus E(Z̄_τ) = (1/2) P(Z̄_τ = 1/2). Therefore, given (X_0, Y_0) = (x, 1), the probability that (X_n, Y_n) hits the diagonal before the line y = 0 is no larger than 2/(1 + x). Consider now the process W̄_n := W_{τ∧n}, where W_n := Y_n/X_n. A direct computation gives that E(W_{n+1} − W_n | (X_n, Y_n)) ≥ 0. Thus W̄_n is a bounded submartingale, and by the optional stopping theorem E(W̄_τ) ≥ W̄_0 = 1/x. Furthermore, E(W̄_τ) = P(W̄_τ = 1), the probability of hitting the diagonal. Therefore, given (X_0, Y_0) = (x, 1), the probability that (X_n, Y_n) hits the diagonal before it hits the line y = 0 is at least 1/x.
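The two drift computations in the proof can be verified in exact rational arithmetic over a range of states; the following sketch does this for the simple random walk with the four moves (±1, 0), (0, ±1), each taken with probability 1/4.

```python
from fractions import Fraction as F

# Exact check of the two drift inequalities in the proof of Lemma A.2 for a
# simple random walk on Z^2_+: each of the four moves (+-1, 0), (0, +-1) has
# probability 1/4.

def drift(g, x, y):
    """E[g(X_1, Y_1) - g(X_0, Y_0) | (X_0, Y_0) = (x, y)] over the four moves."""
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return sum(g(u, v) for u, v in moves) / 4 - g(x, y)

Z = lambda x, y: F(y, x + y)  # supermartingale ingredient Z = Y/(X+Y)
W = lambda x, y: F(y, x)      # submartingale ingredient W = Y/X

for x in range(2, 30):
    for y in range(1, x):         # interior states with 0 < y < x
        assert drift(Z, x, y) < 0   # E(Z_{n+1} - Z_n | .) < 0
        assert drift(W, x, y) >= 0  # E(W_{n+1} - W_n | .) >= 0
print("drift inequalities verified for 0 < y < x, x < 30")
```

In fact the drifts can be computed in closed form, drift(Z) = (y − x)/(2(x+y)((x+y)^2 − 1)) and drift(W) = y/(2x(x^2 − 1)), which makes the signs evident; the exact-arithmetic loop confirms them state by state.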