1 Introduction

We study a model of mass redistribution on a graph. A vertex \(x\) of the graph holds mass \(M^x_t\ge 0\) at time \(t\). When a “meteor hits” \(x\) at time \(t\), the mass \(M^x_t\) of the soil present at \(x\) is distributed equally among all neighbors of \(x\) (added to their masses). There is no soil (mass) left at \(x\) just after a meteor hit. Meteor hits are modeled as independent Poisson processes, one for each vertex of the graph. This model was studied in [3] in the case of finite graphs. The existence and uniqueness of the stationary distribution were proved for all connected simple graphs. The rate of convergence to the stationary distribution was estimated for some graphs. Various properties of the stationary distribution were proved.

This paper is mostly devoted to the meteor process on \({\mathbb {Z}}^d\). The existence of the process is proved for arbitrary (infinite) graphs with bounded degree in Sect. 2. In Sect. 3, a stationary distribution is found for the process on \({\mathbb {Z}}^d\), for every \(d\ge 1\). In the same section, the first two moments of the mass distribution at a vertex in the stationary regime are determined. Convergence to this stationary distribution is proved for a large family of initial distributions in Sect. 4. In Sect. 5, for the one-dimensional lattice \({\mathbb {Z}}\), the net flow of mass between adjacent vertices is shown to have bounded variance as time goes to infinity. The same section contains an alternative representation of the process on \({\mathbb {Z}}\) as a collection of non-crossing paths. The distributions of a “tracer particle” in this system of non-crossing paths are shown to be tight as time goes to infinity.

Section 6 is the only part of the paper devoted to finite graphs. It is shown that, for finite graphs, the support of the stationary distribution is equal to the “largest possible” candidate for this set.

We presented a review of related models and articles in [3]. The following is a shortened version of that discussion with some new references.

A model of mass redistribution similar to ours appeared in [13] but that paper went in a completely different direction. It was mostly focused on the limit model when the graph approximates the real line. There is a considerable literature on a mass redistribution model called “chip-firing”; we mention here only [4, 18]. Mass redistribution is a part of every sandpile model, including a “continuous” version studied in [9]. See also “divisible sandpile” in [15]. Sandpile models have considerably different structures and associated questions from ours. In a different direction, the reader may want to consult a paper [16] on “overhang”. The introduction to [3] explains how our model on a finite graph can be represented as a product of random matrices. This is similar to the product of random matrices that appeared in [14]. So far, no technically useful connection between our model and random matrices has been found but such a connection seems to be an intriguing possibility. A more recent line of investigation related to our work is on Markov chains on the space of partitions—see [5, 6].

An important technical tool in [3] and this paper is a pair of “weakly interacting” continuous time symmetric random walks on the graph. They are called WIMPs for “weakly interacting mathematical particles.” If the two random walks are at different vertices, they move independently. However, if they are at the same vertex, their next jumps occur at the same time, after an exponential waiting time, common to both processes. The dependence ends here—the two processes jump to vertices chosen independently, even though they jump at the same time. One can think about each of the random walks as a grain of sand. The mass present at every vertex can be thought of as a large number of very small grains of sand. WIMPs played an important role in [8]. A similar process (“associate Markov chain”) appeared in Section 2.1 of [1].

2 Construction and basic properties

This section contains definitions and results from [3]. Only Proposition 2.1 is new.

The following setup and notation will be used in most of the paper. All constants will be assumed to be strictly positive, finite, real numbers, unless stated otherwise. The notation \(|S|\) will be used for the cardinality of a finite set, \(S\). We will write \(\mathbf{0}=(0,0,\ldots , 0)\).

We will consider only connected graphs with no loops and no multiple edges. We will often denote the chosen graph by \(G\) and its vertex set by \(V\). In particular, we often use \(k\) for \(|V|\). We let \(d_v\) stand for the degree of a vertex \(v\), and write \(v\leftrightarrow x\) if vertices \(v\) and \(x\) are connected by an edge.

We will write \({\mathcal {C}}_k\) to denote the circular graph with \(k\) vertices, \(k\ge 2\). In other words, the vertex set of \({\mathcal {C}}_k\) is \(\{1,2,\ldots , k\}\) and the only pairs of vertices joined by edges are of the form \((j, j+1)\) for \(j=1,2,\ldots , k-1\), and \((k,1)\). For \({\mathcal {C}}_k\), all arguments will apply “mod \(k\)”. For example, we will refer to \(k\) as a vertex “to the left of 1,” and interpret \(j-1\) as \(k\) in the case when \(j=1\).

Every vertex \(v\) is associated with a Poisson process \(N^v\) representing “arrival times of meteors” with intensity 1. We assume that all processes \(N^v\) are jointly independent. A vertex \(v\) holds some “soil” with mass equal to \(M^v_t\ge 0\) at time \(t\ge 0\). The processes \(M^v\) evolve according to the following scheme.

We assume that \(M^v_0 \in [0, \infty )\) for every \(v\), a.s. At the time \(t\) of a jump of \(N^v\), \(M^v\) jumps to 0 and the mass \(M^v_{t-}\) is “distributed” equally among all adjacent sites; that is, for every vertex \(x\leftrightarrow v\), the process \(M^x\) increases by \(M^v_{t-}/d_v\), so \(M^x_t = M^x_{t-} + M^v_{t-}/d_v\). Thus \(M^v\) changes only when \(N^v\) jumps and there is positive mass at \(v\) just prior to the jump, or when \(N^x\) jumps for some \(x\leftrightarrow v\) and there is positive mass at \(x\) just prior to the jump. We will denote the meteor process \({\mathcal {M}}_t = \{M^{v}_t, v\in V\}\).
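
The dynamics above can be sketched in a few lines of code. The following simulation is our own illustration, not taken from [3]; the names `simulate_meteor`, `adj` and `mass` are ours. It uses the standard fact that the superposition of \(|V|\) independent rate-1 Poisson clocks is a rate-\(|V|\) clock whose events land on uniformly chosen vertices.

```python
import random

def simulate_meteor(adj, mass, t_max, rng):
    """Run the meteor process on a finite graph up to time t_max.

    adj: dict mapping each vertex to its list of neighbors.
    mass: dict mapping each vertex to its initial (nonnegative) mass.
    Each vertex carries an independent rate-1 Poisson clock; equivalently,
    the next hit lands on a uniformly chosen vertex after an Exp(|V|) wait.
    """
    mass = dict(mass)
    vertices = list(adj)
    t = rng.expovariate(len(vertices))
    while t <= t_max:
        v = rng.choice(vertices)          # vertex struck by the meteor
        share = mass[v] / len(adj[v])     # equal share for each neighbor
        mass[v] = 0.0
        for x in adj[v]:
            mass[x] += share
        t += rng.expovariate(len(vertices))
    return mass

# Example: the cycle C_4 with unit mass per vertex; total mass is conserved.
adj = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
final = simulate_meteor(adj, {i: 1.0 for i in range(4)}, 10.0, random.Random(0))
assert abs(sum(final.values()) - 4.0) < 1e-9
```

Note that a meteor hit moves mass but never creates or destroys it, which is why the total mass check holds exactly (up to floating-point error).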

The informal definition of the meteor process \({\mathcal {M}}_t\) is clearly rigorous if meteor hits, i.e., jump times of the processes \(N^v\), do not have accumulation points. Hence, the definition does not require any more attention in the case when \(V\) is finite. The case of an infinite graph requires a more formal argument, presented in the following proposition.

Proposition 2.1

Suppose that \(G\) is a \((\)not necessarily finite\()\) graph and assume that \(d_G:=\sup _{v\in V} d_v < \infty .\) Assume that \(M^v_0 \in [0, \infty )\) for every \(v,\) a.s. Then there exists a unique process \(\{{\mathcal {M}}_t, t\ge 0\}\) evolving in the manner described above.

Proof

The proof is an implementation of the graphical construction method first proposed in [12]. Heuristically speaking, this method works because on short enough time intervals, we have domination by subcritical percolation.

It will be convenient to use independent Poisson processes \(N^v_t\) defined for all \(t \in {\mathbb {R}}\), not only for \(t\ge 0\). For a set \(A\subset V\), let \(U(A) = \{v \in V: \exists y \in A \text { such that } v \leftrightarrow y\}\).

Consider any \(x\in V\) and \(T>0\). Let \(\Delta N^v_t = N^v_t - N^v_{t-}\). Let \(A_0=\{x\}\), \(t_0 = T\) and for \(j\ge 1\), let

$$\begin{aligned} t_j&= \sup \{t \le t_{j-1}: \Delta N^y_{t} \ne 0 \text { for some } y \in U(A_{j-1})\},\\ y_j&= y \text { such that } \Delta N^y_{t_j} \ne 0,\\ A_j&= A_{j-1} \cup \{y_j\},\\ S_j&= t_{j-1} - t_j. \end{aligned}$$

We have \(|A_j| \le 1 + j \) and \(|U(A_j)| \le (1+j) d_G\). Given \(A_{j-1}\), the distribution of \(S_j\) is exponential with mean \(1 / |U(A_{j-1})|\). Let \(S^*_j\) be independent exponential random variables with \({\mathbb {E}}S^*_j = 1/((1+j) d_G)\). One can couple (construct on the same probability space) \(S_j\)’s and \(S^*_j\)’s so that \(S_j \ge S^*_j\) for all \(j\ge 1\), a.s. Since \(\sum _{j\ge 1} {\mathbb {E}}S^*_j = d_G^{-1} \sum _{j\ge 1} (1+j)^{-1} = \infty \), a straightforward application of Kolmogorov’s three series theorem shows that \(\sum _{j\ge 1} S^*_j = \infty \), a.s. Hence \(\sum _{j\ge 1} S_j = \infty \), a.s. Let \(I\) be the largest \(i\) such that \(\sum _{j= 1}^i S_j < T\) and note that \(I < \infty \), a.s.

Recall that \(x\in V\) is fixed. We will say that a function \(\{\Lambda _t, t\in [0,T]\}\) with values in \(V\) is an acceptable path if: \(\Lambda _T =x\); \(\Lambda \) jumps at a time \(t\) if and only if \(\Lambda _{t-} = v\) and \(N^v\) has a jump at time \(t\); and every jump takes \(\Lambda \) to one of the neighbors of \(v\), i.e., if \(t\) is a jump time then \(\Lambda _t \leftrightarrow \Lambda _{t-}\).

It is easy to see that if \(\Lambda \) is an acceptable path then \(\Lambda _t = x \in A_0\) for \(t\in [t_1, t_0]\). By induction, \(\Lambda _t \in A_j\) for all \(t\in [t_{j+1}, t_{j}) \cap [0,T]\). Hence, \(\Lambda _0 \in A_I\). It follows that the number of acceptable paths is finite, a.s.

Suppose that \(\Lambda \) is an acceptable path with exactly \(j\) jumps on the interval \([0,T]\) and let \(u_1 < u_2 < \cdots < u_j\) be the jump times of \(\Lambda \). We will write \(d(x) \) in place of \(d_x\) for typographical reasons. Let \(\widetilde{M}^\Lambda _T = M^{\Lambda _0}_0 \prod _{i=1}^j 1/d(\Lambda _{u_i-})\) and let \( M^x_T = \sum _\Lambda \widetilde{M}^\Lambda _T\), where the sum is over all acceptable paths \(\Lambda \). Note that \( M^x_T\) is well defined and finite, a.s.

It is easy to check that \((x,t) \rightarrow M^x_t\) has the properties described in the definition of \(M^x_t\) and that it is the unique process with these properties. \(\square \)
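
The path-sum formula at the heart of the proof can be checked numerically on a small graph. The sketch below is our own illustration (the names `mass_forward`, `mass_via_paths` and the event list `hits` are not from the paper): it enumerates acceptable paths backward from \((x,T)\) and compares the resulting weighted sum with a direct forward simulation of the hits.

```python
def mass_forward(adj, mass0, hits):
    """Evolve the meteor masses through a given list of hits (time, vertex)."""
    m = dict(mass0)
    for _, v in hits:
        share = m[v] / len(adj[v])
        m[v] = 0.0
        for x in adj[v]:
            m[x] += share
    return m

def mass_via_paths(adj, mass0, hits, x):
    """Sum M_0^{Lambda_0} * prod 1/d(Lambda_{u_i-}) over acceptable paths.

    hits is sorted by increasing time; we walk the events backward from
    (x, T).  A path sitting at v just before a hit on v must jump away, so
    no acceptable path occupies v just after that hit.
    """
    def back(pos, i):
        if i < 0:
            return mass0[pos]              # Lambda_0 = pos
        v = hits[i][1]
        if pos == v:                       # cannot be at v right after a hit
            return 0.0                     # on v
        total = back(pos, i - 1)           # path did not move at this hit
        if pos in adj[v]:                  # or it jumped v -> pos, weight 1/d_v
            total += back(v, i - 1) / len(adj[v])
        return total
    return back(x, len(hits) - 1)

# Triangle graph, four hits: the two computations agree at every vertex.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
mass0 = {0: 1.0, 1: 2.0, 2: 0.0}
hits = [(0.1, 0), (0.5, 2), (0.9, 1), (1.3, 0)]
fwd = mass_forward(adj, mass0, hits)
assert all(abs(fwd[x] - mass_via_paths(adj, mass0, hits, x)) < 1e-12
           for x in adj)
```

The agreement is exact (not merely approximate) because the backward recursion is precisely the adjoint of the linear forward update performed at each hit.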

We will now define WIMPs (“weakly interacting mathematical particles”).

Definition 2.2

Consider a finite graph. Suppose that a meteor process \({\mathcal {M}}\) is given and let \(a= \sum _{v\in V} M^v_0\). For each \(j\ge 1\), let \(\{Y^j_n, n\ge 0\}\) be a discrete time symmetric random walk on \(G\) with the initial distribution \({\mathbb {P}}(Y^j_0 = x) = M^x_0/a\) for \(x\in V\). We assume that conditional on \( {\mathcal {M}}_0\), all processes \(\{Y^j_n, n\ge 0\}\), \(j\ge 1\), are independent.

Recall Poisson processes \(N^v\) defined earlier in this section and assume that they are independent of \(\{Y^j_n, n\ge 0\}\), \(j\ge 1\). For every \(j\ge 1\), we define a continuous time Markov process \(\{Z^j_t, t\ge 0\}\) by requiring that the embedded discrete Markov chain for \(Z^j\) is \(Y^j\) and \(Z^j\) jumps at a time \(t\) if and only if \(N^v\) jumps at time \(t\), where \(v=Z^j_{t-}\). Note that the jump times of all \(Z^j\)’s are defined by the same family of Poisson processes \(\{N^v\}_{v\in V}\).

Processes \(Z^j\) are continuous time nearest neighbor symmetric random walks on \(G\) with exponential holding time with mean 1. The joint distribution of \((Z^1, Z^2)\) is the following. The state space for the process \((Z^1, Z^2)\) is \(V^2\). If \((Z^1_t, Z^2_t)=(x,y)\) with \(x\ne y\) then the process will stay in this state for an exponential amount of time with mean \(1/2\) and at the end of this time interval, one of the two processes (chosen uniformly) will jump to one of the nearest neighbors (also chosen uniformly). This behavior is the same as that of two independent random walks. However, if \((Z^1_t, Z^2_t)=(x,x)\) then the pair of processes behave in a way that is different from that of a pair of independent random walks. Namely, after an exponential waiting time with mean 1 (not \(1/2\)), both processes will jump at the same time; each one will jump to one of the nearest neighbors of \(x\) chosen uniformly and independently of the direction of the jump of the other process.
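
As a sanity check on the rates, the joint dynamics can be simulated directly. The sketch below is our own illustration (the name `wimp_pair` is ours, not from the paper). Apart, the pair's first event arrives at rate 2 and each walker receives half of the jumps; together, the shared rate-1 clock moves both walkers at once, so marginally each walker jumps at total rate 1 in either case.

```python
import random

def wimp_pair(adj, z1, z2, t_max, rng):
    """Simulate a pair of WIMPs on a finite graph up to time t_max.

    Apart: independent rate-1 walks (first event after an Exp(2) wait; a
    uniformly chosen walker jumps to a uniform neighbor).  Together: one
    shared Exp(1) clock; both jump simultaneously, each to an independently
    chosen uniform neighbor.  Returns the jump counts (n1, n2).
    """
    t, n1, n2 = 0.0, 0, 0
    while True:
        if z1 == z2:
            t += rng.expovariate(1.0)
            if t > t_max:
                break
            v = z1
            z1 = rng.choice(adj[v])       # simultaneous jumps, independent
            z2 = rng.choice(adj[v])       # choices of direction
            n1 += 1
            n2 += 1
        else:
            t += rng.expovariate(2.0)
            if t > t_max:
                break
            if rng.random() < 0.5:
                z1 = rng.choice(adj[z1])
                n1 += 1
            else:
                z2 = rng.choice(adj[z2])
                n2 += 1
    return n1, n2

# On the cycle C_6, each walker's empirical jump rate is close to 1.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
n1, n2 = wimp_pair(adj, 0, 3, 5000.0, random.Random(0))
assert abs(n1 / 5000.0 - 1.0) < 0.1 and abs(n2 / 5000.0 - 1.0) < 0.1
```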

Remark 2.3

The meteor process \(\{{\mathcal {M}}_t, t\ge 0\}\) is a somewhat unusual stochastic process in that its state space can be split into an uncountable number of disjoint communicating classes. For example, consider the following two initial distributions. Suppose that \(M^v_0 = 1\) for all \(v\). Fix some \(x\in V\), and let \(\widetilde{M}^v_0 = 1/\pi \) for all \(v\ne x\) and \(\widetilde{M}^x_0 = |V| - (|V|-1)/\pi \). If \(\{{\mathcal {M}}_t, t\ge 0\}\) and \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\) are meteor processes with these initial distributions then for every \(t>0\), the distributions of \({\mathcal {M}}_t\) and \(\widetilde{\mathcal {M}}_t\) will be mutually singular.

It follows from these observations that proving convergence of \(\{{\mathcal {M}}_t, t\ge 0\}\) to the stationary distribution cannot proceed along the most classical lines.

Theorem 2.4

[3] Consider the process \(\{{\mathcal {M}}_t, t\ge 0\}\) on a finite graph \(G.\) Assume that \(|V| =k\) and \(\sum _{v\in V} M^v_0 = k.\) When \(t\rightarrow \infty ,\) the distribution of \({\mathcal {M}}_t\) converges to a distribution \(Q\) on \([0,k]^k.\) The distribution \(Q\) is the unique stationary distribution for the process \(\{{\mathcal {M}}_t, t\ge 0\}.\) In particular\(,\) \(Q\) is independent of the initial distribution of \({\mathcal {M}}.\)

Remark 2.5

It is easy to see that, for a finite graph \(G\), there exists a stationary version of the process \({\mathcal {M}}_t\) on the whole real line, i.e., there exists a process \(\{{\mathcal {M}}_t, t\in {\mathbb {R}}\}\), such that the distribution of \({\mathcal {M}}_t\) is the stationary measure \(Q\) for each \(t\in {\mathbb {R}}\). Moreover, one can construct independent Poisson processes \(\{N^v_t,t\in {\mathbb {R}}\} \), \(v\in V\), on the same probability space, such that \(\{{\mathcal {M}}_t, t\in {\mathbb {R}}\}\) jumps according to the algorithm described above, relative to these Poisson processes. We set \(N^v_0=0\) for all \(v\) for definiteness.

The following result has been proved in [3] for finite graphs.

Proposition 2.6

Suppose \(G\) is finite. Let \(T^v_t\) denote the time of the last jump of \(N^v\) on the interval \([0,t],\) with the convention that \(T^v_t=-1\) if there were no jumps on this interval. Let \(U(v)=\{v\} \cup \{x\in V: x\leftrightarrow v\}\).

  1. (i)

    Assume that \(M^v_0 + M^x_0 >0\) for a pair of adjacent vertices \(v\) and \(x.\) Then\(,\) almost surely\(,\) for all \(t\ge 0,\) \(M^v_t + M^x_t >0.\)

  2. (ii)

    Let \(R_t\) be the number of pairs \((x,v)\) such that \(x\leftrightarrow v\) and \(M^v_t + M^x_t =0.\) The process \(R_t\) is non-increasing\(,\) a.s.

  3. (iii)

    Assume that \(M^x_0 >0\) for \(x\in U(v){\setminus }\{v\}.\) Then \(M^v_t = 0\) if and only if one of the following conditions holds: (a) \(T^v_t = \max \{T^x_t: x\in U(v)\}>-1\) or (b) \(M^v_0 = 0\) and \(\max \{T^x_t: x\in U(v){\setminus }\{v\}\}=-1.\)

  4. (iv)

    Suppose that the process \(\{{\mathcal {M}}_t, t\ge 0\}\) is in the stationary regime\(,\) that is\(,\) its distribution at time \(0\) is the stationary distribution \(Q.\) Then \(M^v_t + M^{x}_t >0\) for all \(t\ge 0\) and all pairs of adjacent vertices \(v\) and \(x,\) a.s.

  5. (v)

    Recall from Remark 2.5 the stationary meteor process \(\{{\mathcal {M}}_t, t\in {\mathbb {R}}\}\) and the corresponding Poisson processes \(\{N^v_t,t\in {\mathbb {R}}\},\) \(v\in V.\) Let \(T^v\) denote the time of the last jump of \(N^v\) on the interval \((-\infty , 0]\) and note that \(T^v\) is well defined for every \(v\) because such a jump exists\(,\) a.s. Then \(M^v_0 =0\) if and only if \(T^v = \max \{T^x: x\in U(v)\}.\)

3 Stationary distribution on \({\mathbb {Z}}^d\)

Recall that \({\mathcal {C}}_k\) denotes the circular graph with \(k\) vertices, \(k\ge 2\). In other words, the vertex set of \({\mathcal {C}}_k\) is \(\{1,2,\ldots , k\}\) and the only pairs of vertices joined by edges are of the form \((j, j+1)\) for \(j=1,2,\ldots , k-1\), and \((k,1)\).

We will show that for any \(d\ge 1\), stationary distributions for meteor processes on tori \({\mathcal {C}}_k^d\) converge, as \(k\rightarrow \infty \), to a stationary distribution for the meteor process on \({\mathbb {Z}}^d\), in an appropriate sense. We need the following notation to state the theorem.

We equip the space \({\mathbb {R}}^{{\mathbb {Z}}^d}\) with a metric \(\rho \) defined by

$$\begin{aligned} \rho (f,g) = \sum _{x\in {\mathbb {Z}}^d}(|f(x) - g(x)|\wedge 1) 2^{-|x|}, \quad f,g \in {\mathbb {R}}^{{\mathbb {Z}}^d}. \end{aligned}$$

Note that \(\lim _{n\rightarrow \infty } \rho (f_n, g_n) = 0\) if and only if \(\lim _{n\rightarrow \infty } |f_n(x) - g_n(x)| =0\) for every \(x\in {\mathbb {Z}}^d\). We define the Skorokhod space \(D([0,\infty ), {\mathbb {R}}^{{\mathbb {Z}}^d})\) of RCLL functions and its topology in the usual way relative to the metric \(\rho \).
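
To make the coordinate-wise nature of \(\rho \) concrete, here is a small numerical illustration (our own sketch, with \(d=1\) and the sum truncated at a finite radius; the function name `rho` is ours):

```python
def rho(f, g, radius=40):
    """The metric above for d = 1, truncated to |x| <= radius; the discarded
    tail is at most 2 * 2**(-radius)."""
    return sum(min(abs(f(x) - g(x)), 1.0) * 2.0 ** (-abs(x))
               for x in range(-radius, radius + 1))

zero = lambda x: 0.0
# A unit discrepancy at a single distant site contributes only 2**(-|x|),
# so configurations that agree near the origin are close in rho.
bump = lambda x: 1.0 if x == 5 else 0.0
assert abs(rho(bump, zero) - 2.0 ** -5) < 1e-12
```

In particular, if the sole discrepancy escapes to infinity (say \(f_n = \mathbf{1}_{\{n\}}\)), then \(\rho (f_n, 0) = 2^{-n} \rightarrow 0\) even though \(\sup _x |f_n(x)|\) stays equal to 1, which is exactly the pointwise-convergence behavior noted above.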

For \(n\ge 1\), let \(K_n =\{1, 2, \ldots , n\}^d\subset {\mathbb {Z}}^d\) and \(K'_n = K_n - (\lfloor n/2 \rfloor ,\ldots , \lfloor n/2 \rfloor )\). In other words, \(K'_n\) is \(K_n\) shifted so that it is (almost) centered at the origin.

Consider any \(d\ge 1\). Let \(V_k\) be the vertex set of \({\mathcal {C}}_k^d\) and let \({\mathcal {M}}^k_t = \{M^{k,x}_t, x\in V_k\}\) be the meteor process on \({\mathcal {C}}_k^d\), with the average mass 1 per vertex. Let \(Q_k\) be the stationary distribution for \({\mathcal {M}}^k\).

Consider the graph with the vertex set \(K'_k\) and edges connecting vertices at distance 1 (according to the Euclidean distance). This graph can be embedded in the obvious way into \({\mathcal {C}}_k^d\). The vertex sets \(K'_k\) and \(V_k\) of the two graphs are in one to one correspondence so we will consider the process \({\mathcal {M}}^k\) as a process on \(K'_k\), although its transitions do not respect the edge structure of \(K'_k\).

For each \(k\ge 2\), \(t\ge 0\) and \(x\in {\mathbb {Z}}^d\), let \( M^{k,x}_t = M^{k,y}_t \), where \(y\in K'_k\) is the unique vertex such that \(x-y = k v\) for some \(v\in {\mathbb {Z}}^d\). By abuse of notation, we will write \({\mathcal {M}}^k_t = \{M^{k,x}_t, x\in {\mathbb {Z}}^d\}\) and we will use \(Q_k\) to denote the stationary distribution for the process \(\{M^{k,x}_t, x\in {\mathbb {Z}}^d\}\).
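
The periodic identification can be made explicit in a few lines (an illustrative sketch with our own names; the coordinate range follows the definition \(K'_k = K_k - (\lfloor k/2 \rfloor ,\ldots ,\lfloor k/2 \rfloor )\)):

```python
def torus_representative(x, k):
    """Map x in Z^d to the unique y in K'_k with x - y in k * Z^d.

    K'_k = {1,...,k}^d shifted by (-floor(k/2), ..., -floor(k/2)), so each
    coordinate of a point of K'_k ranges over {1 - k//2, ..., k - k//2}.
    """
    lo = 1 - k // 2
    return tuple(lo + (xi - lo) % k for xi in x)

# k = 5: coordinates of K'_5 range over {-1, 0, 1, 2, 3}.
y = torus_representative((7, -4), 5)
assert y == (2, 1)
assert all((a - b) % 5 == 0 for a, b in zip((7, -4), y))
```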

Theorem 3.1

  1. (i)

    The distributions \(Q_k\) converge to a distribution \(Q_\infty \) on \({\mathbb {R}}^{{\mathbb {Z}}^d}\) as \(k\rightarrow \infty .\)

  2. (ii)

    If the distribution of \({\mathcal {M}}^k_0\) is \(Q_k\) for every \(k,\) then processes \(\{{\mathcal {M}}^{k}_t, t\ge 0\}\) converge weakly in the Skorokhod space \(D([0,\infty ), {\mathbb {R}}^{{\mathbb {Z}}^d})\) to a process \(\{{\mathcal {M}}^{\infty }_t, t\ge 0\}\) with the initial distribution equal to \(Q_\infty ,\) when \(k\rightarrow \infty .\)

We will give the proof only in the case \(d\ge 2\). The case \(d=1\) can be treated using a simplified version of the argument given below.

The idea of the proof of Theorem 3.1 is the following. If the distributions \(Q_k\) do not converge then for any sequence of processes with these distributions constructed on the same probability space, there will be some pairs of these processes with large indices which will have different values of the mass at some finite set of vertices, with non-vanishing probability. This will be proved to be false by representing masses using “grains of sand”, that is, random walks (a single mass can be thought of as consisting of a very large number of grains of sand). We will couple pairs of random walks with jump times and locations determined by the same system of Poisson processes, and we will show that they will meet sufficiently fast for our purposes. The precise formulation of the coupling is given in the next lemma. Using this coupling, we can construct a sequence of meteor processes on the same probability space, with distributions \(Q_k\), with masses very close to each other for large indices, thus contradicting the original assumption.

For any (discrete time or continuous time) stochastic process \(R\), any set \(A\) and an element \(a\) of the state space of \(R\) (for example, \(a\) can be a number), let \(T(R, A) = \inf \{t\ge 0: R_t \in A\}\) and \(T(R, a) = \inf \{t\ge 0: R_t =a\}\). For real \(a\), we will write \(T^+(R, a) = \inf \{t\ge 0: R_t \ge a\}\) and \(T^-(R, a) = \inf \{t\ge 0: R_t \le a\}\).

Lemma 3.2

Suppose that \(\{N^x_t, t\in {\mathbb {R}}\}, x\in {\mathbb {Z}}^d,\) are independent Poisson processes. For any \(\beta \in (0,1)\) and \(\delta _1 >0\) there exist \(\delta >0, a_0>1, m_1\) and \(\lambda >1\) such that

$$\begin{aligned} (1/\beta ) (1-\delta )&> 1,\end{aligned}$$
(3.1)
$$\begin{aligned} (1+\lambda + 3 \delta )/2&< (1/\beta ) (1-\delta ), \end{aligned}$$
(3.2)

and for all \(a\ge a_0, m\ge m_1,\) and \( z_0, \widetilde{z}_0 \in {\mathbb {Z}}^d\) such that \(|z_0 - \widetilde{z}_0| \le a^{\lambda ^m },\) one can construct a coupling of continuous time random walks \(Z\) and \(\widetilde{Z}\) on \({\mathbb {Z}}^d,\) starting from \(Z_0 = z_0\) and \(\widetilde{Z}_0 = \widetilde{z}_0,\) with jumps determined by \(\{N^x_t, t\in {\mathbb {R}}\},x\in {\mathbb {Z}}^d,\) \((\)the same family for both \(Z\) and \(\widetilde{Z}),\) in the sense of Sect. 2, and with the following properties.

  1. (i)

    Let \(t_* = 2 a^{(1+\lambda + 2\delta )\lambda ^{m}}.\) Then

    $$\begin{aligned}&\mathbb {P}\left( \sup _{0\le t \le t_*}|Z_{t} - Z_0| \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\right) \le \delta _1,\end{aligned}$$
    (3.3)
    $$\begin{aligned}&\mathbb {P}\left( \sup _{0\le t \le t_*}|\widetilde{Z}_{t} -\widetilde{Z}_0| \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\right) \le \delta _1. \end{aligned}$$
    (3.4)
  2. (ii)

    Let

    $$\begin{aligned} T_* = t_* \wedge T^+(|Z_\cdot - Z_0|, a^{(1+\lambda + 3\delta )\lambda ^{m}/2}) \wedge T^+(|\widetilde{Z}_\cdot - \widetilde{Z}_0|, a^{(1+\lambda + 3\delta )\lambda ^{m}/2}). \end{aligned}$$
    (3.5)

    Then

    $$\begin{aligned} \mathbb {P}(T(Z- \widetilde{Z}, \mathbf{0}) < T_*) > 1-\delta _1. \end{aligned}$$
    (3.6)

The proof of the lemma is quite technical so it will be presented at the end of Sect. 3.

Proof of Theorem 3.1

Step 1 We will first prove part (i) of the theorem. Since \({\mathbb {E}}_{Q_k} M^{k,x}_0 = 1\) for every \(x\) and \(k\), it follows that for every fixed \(x\), the family of distributions of \(\{M^{k,x}_0, k\ge 1\}\) is tight. Hence, for every fixed \(x_1,\ldots , x_j\), the family of \(j\)-dimensional distributions of \(\{(M^{k,x_1}_{0},\ldots , M^{k,x_j}_{0}), k\ge 1\}\) is tight. Using the diagonal method, we can find a subsequence \(k_m\) such that for any \(x_1,\ldots , x_j\), the distributions of \(\{(M^{k_m,x_1}_{0},\ldots , M^{k_m,x_j}_{0}), m\ge 1\}\) converge. The limiting distributions are consistent by construction so there exists a distribution \(Q\) on \({\mathbb {R}}^{{\mathbb {Z}}^d}\) whose restriction to any \(x_1,\ldots , x_j\) is equal to the limit of the distributions of \(\{(M^{k_m,x_1}_{0},\ldots , M^{k_m,x_j}_{0}), m\ge 1\}\).

Assume that part (i) of the theorem is false, i.e., \(Q_k\) do not converge to \(Q\). Then there exist \(x_1,\ldots , x_{i_1}\in {\mathbb {Z}}^d\) and \(\ell _m\) such that \(\ell _m\rightarrow \infty \) as \(m\rightarrow \infty \), and the vectors \((M^{\ell _m,x_1}_{0},\ldots , M^{\ell _m,x_{i_1}}_{0})\) do not have the same limiting distribution as \((M^{k_m,x_1}_{0},\ldots , M^{k_m,x_{i_1}}_{0})\) when \(m\rightarrow \infty \).

Suppose that \(\{{\mathcal {M}}^{\ell _m}_{t}, t\ge 0\}\), \( m\ge 1\), and \(\{{\mathcal {M}}^{k_m}_{t}, t\ge 0\}\), \( m\ge 1\), are constructed on the same probability space, with initial distributions \(Q_{\ell _m}\) and \(Q_{k_m}\), respectively. By stationarity, the distribution of \({\mathcal {M}}^{\ell _m}_{t}\) is \(Q_{\ell _m}\) and the distribution of \({\mathcal {M}}^{k_m}_{t}\) is \(Q_{k_m}\) for all \(m\ge 1\) and \(t\ge 0\). But we do not assume anything about the relationship between the two families of processes; in particular, we do not assume that the family \(\{{\mathcal {M}}^{\ell _m}_{t}, t\ge 0\}\), \( m\ge 1\), is independent of \(\{{\mathcal {M}}^{k_m}_{t}, t\ge 0\}\), \( m\ge 1\).

The assumption that \((M^{\ell _m,x_1}_{0},\ldots , M^{\ell _m,x_{i_1}}_{0})\) and \((M^{k_m,x_1}_{0},\ldots , M^{k_m,x_{i_1}}_{0})\) do not have the same limiting distribution implies that there exist \(c_1, p_1 >0\) such that for every \(m_0\) there exists \(m> m_0\) such that,

$$\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{i_1} |M^{k_m,x_i}_{0}- M^{\ell _m,x_i}_{0}| > c_1\right) > p_1. \end{aligned}$$
(3.7)

Note that \(c_1\) and \(p_1\) can depend on the (“marginal”) distributions of \((M^{\ell _m,x_1}_{0},\ldots , M^{\ell _m,x_{i_1}}_{0})\) and \((M^{k_m,x_1}_{0},\ldots , M^{k_m,x_{i_1}}_{0})\) for \(m\ge 1\), but they can be chosen so that they do not depend on the (“joint”) distributions of \((M^{\ell _m,x_1}_{0},\ldots , M^{\ell _m,x_{i_1}}_{0},M^{k_m,x_1}_{0},\ldots , M^{k_m,x_{i_1}}_{0})\) for \(m\ge 1\).

Let \(i_2 = 2\left\lceil \max _{1\le i,j \le i_1} |x_i-x_j|\right\rceil \). Let \(\Gamma _n^1 = \{x_1,\ldots , x_{i_1}\}\) and let \(\{\Gamma ^j_n, j=1,\ldots , i_3\}\) be the family of all sets of the form \(\Gamma ^1_n + i_2 v\) for some \(v\in {\mathbb {Z}}^d\), such that \(\Gamma ^1_n + i_2 v \subset K'_n \). If \(j\ne i\) then \(\Gamma ^j_n \cap \Gamma ^i_n = \emptyset \). Note that \(i_3 = i_3(n) \ge \lfloor n/(2 i_2) \rfloor ^ d \ge c_2 n^d \) for \(n \ge 2 i_2\). We obtain from (3.7) that for every \(m_0\) there exists \(m> m_0\) such that,

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{k_m,x}_{0}- M^{\ell _m,x}_{0}|&= {\mathbb {E}}\sum _{x\in K'_n} |M^{k_m,x}_{0}- M^{\ell _m,x}_{0}| \ge {\mathbb {E}}\sum _{j=1}^{i_3} \sum _{x\in \Gamma ^j_n} |M^{k_m,x}_{0}-M^{\ell _m,x}_{0}|\nonumber \\&= i_3 \sum _{x\in \Gamma ^1_n} {\mathbb {E}}|M^{k_m,x}_{0}- M^{\ell _m,x}_{0} | \ge i_3 c_1 p_1 \ge c_3 n^d. \end{aligned}$$
(3.8)

By stationarity, for every \(t\ge 0\), the distributions of \((M^{\ell _m,x_1}_{t},\ldots , M^{\ell _m,x_{i_1}}_{t})\) and \((M^{k_m,x_1}_{t},\ldots , M^{k_m,x_{i_1}}_{t})\) are the same as those of the vectors \((M^{\ell _m,x_1}_{0},\ldots , M^{\ell _m,x_{i_1}}_{0})\) and \((M^{k_m,x_1}_{0},\ldots , M^{k_m,x_{i_1}}_{0})\). In view of the remark following (3.7), that formula applies also at time \(t\). Thus (3.8) also applies at any \(t\ge 0\), i.e.,

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{k_m,x}_{t}- M^{\ell _m,x}_{t}| \ge c_3 n^d. \end{aligned}$$
(3.9)

We will construct \(\{{\mathcal {M}}^k_t, t\ge 0\}\) on a common probability space in such a way that the last inequality is false for large \(n\) and hence part (i) of the theorem is true.

Step 2 Consider \(k_0,\ell _0\) and \(n\) such that for all \(k\ge k_0\) and \(\ell \ge \ell _0\), \(K_n \subset K'_k \cap K'_\ell \). Fix some \(\beta \in \left( 0 , 1\right) \). We will consider pairs of positive integers \(n\) and \(n_\beta \) such that \(n\) is the smallest integer greater than or equal to \(n_\beta ^{1/\beta }\) which is divisible by \(n_\beta \). Let \(K_n^1 = \{1,\ldots , n_\beta \}^d\) and let \(\{K^j_n, j=1,\ldots , j_n\}\) be the family of all sets of the form \(K^1_n + n_\beta v\) for some \(v\in {\mathbb {Z}}^d\), such that \((K^1_n + n_\beta v) \cap K_n \ne \emptyset \). We will write \({\mathcal {J}}= \{1,\ldots , j_n\}\).

We have

$$\begin{aligned} {\mathbb {E}}_{Q_k} \left( \sum _{x\in K^j_n} M^{k,x}_0\right) = |K^j_n| = n_\beta ^d. \end{aligned}$$
(3.10)

Let \(\partial K^j_n\) be the set of (nearest neighbor) edges in \({\mathbb {Z}}^d\) such that exactly one of the endpoints is in \(K^j_n\). We have \(|\partial K^j_n| = 2d n_\beta ^{d-1}\) so, by Theorem 5.1 and Remark 5.2 (v) of [3],

$$\begin{aligned} \lim _{k\rightarrow \infty } {{\mathrm{Var}}}_{Q_k} \left( \sum _{x\in K^j_n} M^{k,x}_0\right) = |\partial K^j_n|/(2d) = n_\beta ^{d-1}. \end{aligned}$$
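
The count \(|\partial K^j_n| = 2d n_\beta ^{d-1}\) is elementary: each of the \(2d\) faces of the cube contributes one outward edge per boundary vertex. It can be confirmed by brute force on small boxes (illustrative code with our own names):

```python
from itertools import product

def boundary_edge_count(n, d):
    """Count nearest-neighbor edges of Z^d with exactly one endpoint in the
    box K = {1,...,n}^d, by checking all 2d unit steps from each box point."""
    box = set(product(range(1, n + 1), repeat=d))
    count = 0
    for x in box:
        for i in range(d):
            for s in (-1, 1):
                y = x[:i] + (x[i] + s,) + x[i + 1:]
                if y not in box:
                    count += 1
    return count

assert boundary_edge_count(4, 2) == 2 * 2 * 4      # 2d * n^(d-1), d=2, n=4
assert boundary_edge_count(3, 3) == 2 * 3 * 3 ** 2  # d=3, n=3: 54 edges
```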

We will assume from now on that \(k\) and \(\ell \) are so large that

$$\begin{aligned} {{\mathrm{Var}}}_{Q_k} \left( \sum _{x\in K^j_n} M^{k,x}_0\right) \le 2 n_\beta ^{d-1}, \quad {{\mathrm{Var}}}_{Q_\ell } \left( \sum _{x\in K^j_n} M^{\ell ,x}_0\right) \le 2 n_\beta ^{d-1}. \end{aligned}$$
(3.11)

By (3.10), (3.11) and Hölder’s inequality,

$$\begin{aligned} {\mathbb {E}}_{Q_k} \left| \sum _{x\in K^j_n} M^{k,x}_0 - \sum _{x\in K^j_n} M^{\ell ,x}_0\right|&\le {\mathbb {E}}_{Q_k} \left| \sum _{x\in K^j_n} M^{k,x}_0 - n_\beta ^d\right| + {\mathbb {E}}_{Q_k} \left| \sum _{x\in K^j_n} M^{\ell ,x}_0 - n_\beta ^d\right| \nonumber \\&\le \sqrt{2} n_\beta ^{(d-1)/2} + \sqrt{2} n_\beta ^{(d-1)/2} = 2 \sqrt{2} n_\beta ^{(d-1)/2}. \end{aligned}$$
(3.12)

Fix \(K^j_n\) and suppose that \(\sum _{x\in K^j_n} M^{k,x}_0 \le \sum _{x\in K^j_n} M^{\ell ,x}_0\). Then let

$$\begin{aligned} a (k,\ell ,j,n)&= \frac{\sum _{x\in K^j_n} M^{k,x}_0 }{ \sum _{x\in K^j_n} M^{\ell ,x}_0} \le 1,\end{aligned}$$
(3.13)
$$\begin{aligned} M^{*,k,x}_0&= M^{k,x}_0, \quad x\in K^j_n, \end{aligned}$$
(3.14)
$$\begin{aligned} M^{*,\ell ,x}_0&= a (k,\ell ,j,n) M^{\ell ,x}_0, \quad x\in K^j_n, \end{aligned}$$
(3.15)
$$\begin{aligned} \Lambda ^j_n&= \sum _{x\in K^j_n} M^{*,k,x}_0 = \sum _{x\in K^j_n} M^{*,\ell ,x}_0. \end{aligned}$$
(3.16)

If \(\sum _{x\in K^j_n} M^{k,x}_0 \ge \sum _{x\in K^j_n} M^{\ell ,x}_0\) then we interchange the roles of \(k\) and \(\ell \) in the definitions (3.13)–(3.15) so that (3.16) still holds.

We obtain from (3.16),

$$\begin{aligned} \sum _{x\in K_n} M^{*,k,x}_0 = \sum _{x\in K_n} M^{*,\ell ,x}_0. \end{aligned}$$

It follows from (3.12) that

$$\begin{aligned}&{\mathbb {E}}\left( \left( \sum _{x\in K_n} M^{k,x}_0 - \sum _{x\in K_n} M^{*,k,x}_0\right) +\left( \sum _{x\in K_n} M^{\ell ,x}_0 - \sum _{x\in K_n} M^{*,\ell ,x}_0\right) \right) \nonumber \\&\quad \le 4 \sqrt{2} n_\beta ^{(d-1)/2} c_4 n^{d(1-\beta )} \le c_5 n^{d - (d+1)\beta /2}. \end{aligned}$$
(3.17)
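
Spelling out the arithmetic behind (3.17) (our elaboration): there are \(j_n \le c_4 n^{d(1-\beta )}\) cubes \(K^j_n\), since \(n/n_\beta \) is of order \(n_\beta ^{1/\beta - 1} \approx n^{1-\beta }\); each cube contributes, in expectation, at most \(2\sqrt{2}\, n_\beta ^{(d-1)/2}\) from each of the two parenthesized terms by (3.12); and \(n_\beta ^{(d-1)/2} \le n^{\beta (d-1)/2}\) because \(n \ge n_\beta ^{1/\beta }\). The final exponent is then

$$\begin{aligned} d(1-\beta ) + \frac{\beta (d-1)}{2} = d - \frac{\beta (d+1)}{2}. \end{aligned}$$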

Step 3 First we will choose values of parameters used in this step. Recall that we have fixed a \(\beta \in (0,1)\). We now fix \(\delta _1>0\) so small that \(6\delta _1 < c_3/2\), where \(c_3\) is the constant in (3.9). Then we choose \(a_0,m_1, \delta \) and \(\lambda \) corresponding to \(\beta \) and \(\delta _1\) as in Lemma 3.2. Consider \(a\ge a_0\), \(m\ge m_1\) and let \(n_\beta \) be such that \(d n_\beta < a^{\lambda ^m} \le 2 d n_\beta \). Recall \(c_5\) from (3.17). We make \(n\) and \(m\) larger, if necessary, so that

$$\begin{aligned}&\displaystyle n^d - 2 d n^{d-1} ((2d) ^{(1/\beta )(1-\delta )} n^{1-\delta } + n^\beta ) > |K_n| (1-\delta _1), \end{aligned}$$
(3.18)
$$\begin{aligned}&\displaystyle c_5 n^{d - (d+1)\beta /2} \le \delta _1 n^d. \end{aligned}$$
(3.19)

Assume that \(k_0\) and \(\ell _0\) are so large that for all \(k\ge k_0\) and \(\ell \ge \ell _0\), we have \({{\mathrm{dist}}}(K_n, (K'_k)^c) \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2} \) and \({{\mathrm{dist}}}(K_n, (K'_\ell )^c) \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2} \).

Suppose that \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), are independent Poisson processes. Consider \(k,\ell \ge 2\). Recall that the stationary distribution \(Q_k\) was extended from \(K'_k \) (identified with \( {\mathcal {C}}_k^d\)) to \({\mathbb {Z}}^d\) in a periodic way. Suppose that \({\mathcal {M}}^k_0\) has the distribution \(Q_k\) and similarly we assume that \({\mathcal {M}}^\ell _0\) has the distribution \(Q_\ell \). We do not need any assumptions about the relationship of \({\mathcal {M}}^k_0\) and \({\mathcal {M}}^\ell _0\) but, for definiteness, we assume that \({\mathcal {M}}^k_0\), \({\mathcal {M}}^\ell _0\) and \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), are independent.

Recall definitions (3.13)–(3.16). Let \(\mu ^{j,k}\) and \(\mu ^{j,\ell }\) be (random) probability measures on \(K^j_n\) defined by

$$\begin{aligned} \mu ^{j,k}(x) = {M^{*,k,x}_0}/{\Lambda ^j_n}, \quad \mu ^{j,\ell }(x) = {M^{*,\ell ,x}_0}/{\Lambda ^j_n}, \qquad x \in K^j_n. \end{aligned}$$

Let \(Z_t\) and \(\widetilde{Z}_t\) be a coupling of two continuous time nearest neighbor random walks constructed as in Lemma 3.2, with the following initial distributions,

$$\begin{aligned} {\mathbb {P}}(Z_0 = x ) = \mu ^{j,k}(x), \quad {\mathbb {P}}(\widetilde{Z}_0 = x ) = \mu ^{j,\ell }(x), \qquad x\in K^j_n. \end{aligned}$$

The joint distribution of \(Z_0\) and \(\widetilde{Z}_0\) is irrelevant to our argument but for the sake of definiteness we assume that these random variables are independent given \({\mathcal {M}}^k_0\) and \({\mathcal {M}}^\ell _0\).

We will now define processes \(X\) and \(\widetilde{X}\) closely related to \(Z\) and \(\widetilde{Z}\). We will consider a new family of Poisson processes. For each \(t\ge 0\) and \(x\in K'_k\), let \(\widehat{N}^{k,x}_t = N^x_t \). For each \(t\ge 0\) and \(x\in {\mathbb {Z}}^d\), let \(\widehat{N}^{k,x}_t =\widehat{N}^{k,y}_t \), where \(y\in K'_k\) is the unique point such that \(x-y = k v\) for some \(v\in {\mathbb {Z}}^d\). Similarly, let \(\widehat{N}^{\ell ,x}_t = N^x_t\) for \( t\ge 0\) and \(x\in K'_\ell \), and for each \(t\ge 0\) and \(x\in {\mathbb {Z}}^d\), let \(\widehat{N}^{\ell ,x}_t = \widehat{N}^{\ell ,y}_t \), where \(y\in K'_\ell \) is the unique point such that \(x-y = \ell v\) for some \(v\in {\mathbb {Z}}^d\).

Recall that we can identify the torus \({\mathcal {C}}_k^d\) with \(K'_k\). We assume that \(X_0=Z_0\in K_n \subset {\mathcal {C}}_k^d\) and \(\widetilde{X}_0 =\widetilde{Z}_0\in K_n \subset {\mathcal {C}}_\ell ^d\). We define \(X\) as a continuous time random walk on the torus \({\mathcal {C}}_k^d\) with jump times defined by \(\{\widehat{N}^{k,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_k^d\). Let \(S^Z_j\) be the time of the \(j\)-th jump of \(Z\) and let \(S^X_j\) be the time of the \(j\)-th jump of \(X\). We require that \(X_{S^X_j} - X_{S^X_j-} = Z_{S^Z_j} - Z_{S^Z_j-}\) for all \(j\), where addition is in the sense of operation on the torus for \(X\) and operation on \({\mathbb {Z}}^d\) for \(Z\). The formula defining the directions of the jumps of \(X\) is informal but its meaning should be unambiguous to the reader. The conditions listed above define \(X\) uniquely.

The definition of \(\widetilde{X}\) is analogous, relative to \({\mathcal {C}}_\ell ^d\) and \(\{\widehat{N}^{\ell ,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_\ell ^d\).

The definitions of \(k_0\) and \(\ell _0\) at the beginning of this step and the definition of \(T_*\) in (3.5) show that \(X\) and \(\widetilde{X}\) have the same trajectories as \(Z\) and \(\widetilde{Z}\), up to time \(T_*\). Recall that \(d n_\beta < a^{\lambda ^m} \le 2 d n_\beta \). Then \(|X_0 - \widetilde{X}_0| \le a^{\lambda ^m }\) and Lemma 3.2 implies that,

$$\begin{aligned} {\mathbb {P}}(T(X- \widetilde{X}, \mathbf{0}) < T_*) > 1-\delta _1. \end{aligned}$$
(3.20)

Since \(a^{\lambda ^{m}} \le 2d n_\beta \), and in view of (3.2),

$$\begin{aligned} a^{(1+\lambda + 3\delta )\lambda ^{m}/2}&= (a^{\lambda ^{m}})^{(1+\lambda + 3\delta )/2} \le (2dn_\beta )^{(1+\lambda + 3\delta )/2}\nonumber \\&\le (2d n_\beta )^{(1/\beta )(1-\delta )} \le (2d)^{(1/\beta ) (1-\delta )} n^{1-\delta }. \end{aligned}$$
(3.21)

Recall \({\mathcal {J}}\) from Step 2. Let \({\mathcal {A}}\) be the family of all \(j \in {\mathcal {J}}\) such that \({{\mathrm{dist}}}(K^j_n, K_n^c) \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\). We will estimate the volume of \(K^*_n := \bigcup _{j \in {\mathcal {A}}} K^j_n\). In view of (3.18) and (3.21),

$$\begin{aligned} \nonumber |K^*_n|&= \left| \bigcup _{j \in {\mathcal {A}}} K^j_n\right| = |K_n| - \left| \bigcup _{j \in {\mathcal {J}}{\setminus }{\mathcal {A}}} K^j_n\right| \ge n^d - 2 d n^{d-1} (a^{(1+\lambda + 3\delta )\lambda ^{m}/2} + n_\beta )\\&\ge n^d - 2 d n^{d-1} ((2d)^{(1/\beta )(1-\delta )} n^{1-\delta } + n^\beta ) > |K_n| (1-\delta _1). \end{aligned}$$
(3.22)

Let \(\{(M^{1,k,x}_t)_{x\in {\mathcal {C}}_k^d}, t\ge 0\}\) be the meteor process with the initial distribution defined by \(M^{1,k,x}_0 = M^{*,k,x}_0\) if \(x\in K_n\). For all other \(x \in {\mathcal {C}}_k^d {\setminus } K_n\), we let \( M^{1,k,x}_0 = 0\). The jump times of \({\mathcal {M}}^{1,k,x}\) are defined by \(\{\widehat{N}^{k,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_k^d\), in the usual way. The process \(\{(M^{1,\ell ,x}_t)_{x\in {\mathcal {C}}_\ell ^d}, t\ge 0\}\) is defined in an analogous way relative to the family \(\{\widehat{N}^{\ell ,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_\ell ^d\), of Poisson processes, with the initial distribution \(M^{1,\ell ,x}_0 = M^{*,\ell ,x}_0\) for \(x\in K_n\).

Recall from Lemma 3.2 that \(t_* = 2 a^{(1+\lambda + 2\delta )\lambda ^{m}}\). Let \({\mathcal {G}}_0\) be the \(\sigma \)-field generated by \({\mathcal {M}}^{1,k}_0\) and \({\mathcal {M}}^{1,\ell }_0\). Let \({\mathcal {F}}_*\) be the \(\sigma \)-field generated by \({\mathcal {M}}^{1,k}_0\), \({\mathcal {M}}^{1,\ell }_0\), \(\{\widehat{N}^{k,x}_t, 0\le t\le t_*\}\), \(x\in {\mathcal {C}}_k^d\), and \(\{\widehat{N}^{\ell ,x}_t, 0\le t\le t_*\}\), \(x\in {\mathcal {C}}_\ell ^d\). It is easy to see that, a.s.,

$$\begin{aligned} M^{1,k,x}_{t_*} = \sum _{j\in {\mathcal {J}}} \Lambda ^j_n {\mathbb {P}}_{\mu ^{j,k}}(X_{t_*} =x \mid {\mathcal {F}}_*), \quad M^{1,\ell ,x}_{t_*} = \sum _{j\in {\mathcal {J}}} \Lambda ^j_n {\mathbb {P}}_{\mu ^{j,\ell }}(\widetilde{X}_{t_*} =x \mid {\mathcal {F}}_*). \end{aligned}$$

This implies that, a.s.,

$$\begin{aligned} \sum _{x\in K_n}|M^{1,k,x}_{t_*} - M^{1,\ell ,x}_{t_*}| \le \sum _{j\in {\mathcal {J}}} \Lambda ^j_n {\mathbb {P}}(X_{t_*} \ne \widetilde{X}_{t_*} \mid {\mathcal {F}}_*). \end{aligned}$$

By (3.20),

$$\begin{aligned}&{\mathbb {E}}\sum _{x\in K_n} |M^{1,k,x}_{t_*} - M^{1,\ell ,x}_{t_*}| = {\mathbb {E}}{\mathbb {E}}\left( \sum _{x\in K_n} |M^{1,k,x}_{t_*} - M^{1,\ell ,x}_{t_*}| \mid {\mathcal {F}}_*\right) \\&\quad \le {\mathbb {E}}{\mathbb {E}}\left( \sum _{j\in {\mathcal {J}}}\Lambda ^j_n \mathbf{1}(X^{j,i}_{t_*} \ne \widetilde{X}^{j,i}_{t_*}) \mid {\mathcal {F}}_*\right) = {\mathbb {E}}{\mathbb {E}}\left( \sum _{j\in {\mathcal {J}}}\Lambda ^j_n \mathbf{1}(X^{j,i}_{t_*} \ne \widetilde{X}^{j,i}_{t_*}) \mid {\mathcal {G}}_0\right) \\&\quad \le \delta _1 {\mathbb {E}}\sum _{j\in {\mathcal {J}}} \Lambda ^j_n. \end{aligned}$$

Since

$$\begin{aligned} {\mathbb {E}}\Lambda ^j_n = {\mathbb {E}}_{Q_k} \sum _{x\in K^j_n} M^{*,k,x}_0 \le {\mathbb {E}}_{Q_k} \sum _{x\in K^j_n} M^{k,x}_0 = |K^j_n| = n_\beta ^d, \end{aligned}$$

it follows that

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{1,k,x}_{t_*} - M^{1,\ell ,x}_{t_*}| \le \delta _1 |{\mathcal {J}}| n_\beta ^d = \delta _1 (n/n_\beta )^d n_\beta ^d = \delta _1 n^d. \end{aligned}$$
(3.23)

Let \(\{(M^{2,k,x}_t)_{x\in {\mathcal {C}}_k^d}, t\ge 0\}\) be the meteor process with the initial distribution defined by \(M^{2,k,x}_0 = M^{k,x}_0 - M^{1,k,x}_0\) if \(x\in K_n\). For all other \(x \in {\mathcal {C}}_k^d {\setminus }K_n\), we let \( M^{2,k,x}_0 = 0\). The jump times of \({\mathcal {M}}^{2,k,x}\) are defined by \(\{\widehat{N}^{k,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_k^d\), in the usual way. The process \(\{(M^{2,\ell ,x}_t)_{x\in {\mathcal {C}}_\ell ^d}, t\ge 0\}\) is defined in an analogous way relative to the family \(\{\widehat{N}^{\ell ,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_\ell ^d\), of Poisson processes. It follows from (3.17) and (3.19) that

$$\begin{aligned}&{\mathbb {E}}\sum _{x\in K_n} |M^{2,k,x}_{t_*} - M^{2,\ell ,x}_{t_*}| \le {\mathbb {E}}\sum _{x\in K_n} M^{2,k,x}_{t_*} + {\mathbb {E}}\sum _{x\in K_n} M^{2,\ell ,x}_{t_*}\nonumber \\&\quad \le {\mathbb {E}}\sum _{x\in {\mathcal {C}}_k^d} M^{2,k,x}_{t_*} + {\mathbb {E}}\sum _{x\in {\mathcal {C}}_\ell ^d} M^{2,\ell ,x}_{t_*} \nonumber \\&\quad = {\mathbb {E}}\sum _{x\in {\mathcal {C}}_k^d} M^{2,k,x}_{0} + {\mathbb {E}}\sum _{x\in {\mathcal {C}}_\ell ^d} M^{2,\ell ,x}_{0} \nonumber \\&\quad = {\mathbb {E}}\left( \left( \sum _{x\in K_n} M^{k,x}_{0} - \sum _{x\in K_n} M^{1,k,x}_{0}\right) + \left( \sum _{x\in K_n} M^{\ell ,x}_{0} - \sum _{x\in K_n} M^{1,\ell ,x}_{0}\right) \right) \nonumber \\&\quad = {\mathbb {E}}\left( \left( \sum _{x\in K_n} M^{k,x}_{0} - \sum _{x\in K_n} M^{*,k,x}_{0}\right) + \left( \sum _{x\in K_n} M^{\ell ,x}_{0} - \sum _{x\in K_n} M^{*,\ell ,x}_{0}\right) \right) \nonumber \\&\quad \le c_5 n^{d - (d+1)\beta /2} \le \delta _1 n^d. \end{aligned}$$
(3.24)

Let \(\{(M^{3,k,x}_t)_{x\in {\mathcal {C}}_k^d}, t\ge 0\}\) be the meteor process with the initial distribution defined by \(M^{3,k,x}_0 = M^{k,x}_0\) if \(x\in {\mathcal {C}}_k^d {\setminus }K_n\). For \(x \in K_n\), we let \( M^{3,k,x}_0 = 0\). The jump times of \({\mathcal {M}}^{3,k,x}\) are defined by \(\{\widehat{N}^{k,x}_t, t\in {\mathbb {R}}\}, x\in {\mathcal {C}}_k^d\), in the usual way. The process \(\{(M^{3,\ell ,x}_t)_{x \in {\mathcal {C}}_\ell ^d}, t\ge 0\}\) is defined in an analogous way relative to the family \(\{\widehat{N}^{\ell ,x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathcal {C}}_\ell ^d\), of Poisson processes. Note that for all \(x \in K_n\) and \(t\ge 0\),

$$\begin{aligned} M^{k,x}_t = M^{1,k,x}_t + M^{2,k,x}_t + M^{3,k,x}_t, \end{aligned}$$
(3.25)

and the analogous formula holds for \(M^{\ell ,x}_t\). We have by (3.22),

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n {\setminus }K^*_n} |M^{3,k,x}_{t_*} - M^{3,\ell ,x}_{t_*}|&\le {\mathbb {E}}\sum _{x\in K_n {\setminus }K^*_n} (M^{3,k,x}_{t_*} + M^{3,\ell ,x}_{t_*})\nonumber \\&\le {\mathbb {E}}\sum _{x\in K_n {\setminus }K^*_n} (M^{k,x}_{t_*} + M^{\ell ,x}_{t_*})\nonumber \\&= \sum _{x\in K_n {\setminus }K^*_n} ({\mathbb {E}}M^{k,x}_{t_*} + {\mathbb {E}}M^{\ell ,x}_{t_*})\nonumber \\&= 2 |K_n {\setminus }K^*_n| < 2 \delta _1 |K_n| = 2 \delta _1 n^d. \end{aligned}$$
(3.26)

Since \({\mathbb {E}}M^{3,k,x}_0 = \mathbf{1}_{K'_k {\setminus }K_n} (x)\), one can easily show that,

$$\begin{aligned} {\mathbb {E}}M^{3,k,x}_{t_*}&= \sum _{y \in K'_k {\setminus }K_n} {\mathbb {P}}(X_{t_*} =x \mid X_0 =y)\nonumber \\&= \sum _{y \in K'_k {\setminus }K_n} {\mathbb {P}}(X_{t_*} =y \mid X_0 =x) = {\mathbb {P}}(X_{t_*} \in K'_k {\setminus }K_n \mid X_0 = x). \end{aligned}$$
(3.27)

Recall that the distance between \(K^*_n\) and \({\mathbb {Z}}^d {\setminus }K_n\) is at least \(a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\). We also recall that \(X\) has the same trajectory as \(Z\) up to time \(T_*\). These observations, (3.27) and Lemma 3.2(i) imply that \({\mathbb {E}}M^{3,k,x}_{t_*} \le \delta _1\) for \(x\in K^*_n\). For the same reason, \({\mathbb {E}}M^{3,\ell ,x}_{t_*} \le \delta _1\). It follows that

$$\begin{aligned} \nonumber {\mathbb {E}}\sum _{x\in K^*_n} |M^{3,k,x}_{t_*} - M^{3,\ell ,x}_{t_*}|&\le {\mathbb {E}}\sum _{x\in K^*_n} (M^{3,k,x}_{t_*} + M^{3,\ell ,x}_{t_*}) = \sum _{x\in K^*_n} ({\mathbb {E}}M^{3,k,x}_{t_*} + {\mathbb {E}}M^{3,\ell ,x}_{t_*}) \\&\le 2 \delta _1 | K^*_n| < 2 \delta _1 |K_n| = 2 \delta _1 n^d. \end{aligned}$$
(3.28)

Recall that we have chosen \(\delta _1>0\) so that \(6\delta _1 < c_3/2\). In view of (3.25), the estimates (3.23), (3.24), (3.26) and (3.28) imply that,

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{k,x}_{t_*} - M^{\ell ,x}_{t_*}|&\le 6 \delta _1 n^d < (c_3/2) n^d. \end{aligned}$$

This contradicts (3.9) so the proof of part (i) of the theorem is complete.

Step 4 We will now prove part (ii) of the theorem. This is the second time in this paper that we will apply the method of graphical construction of [12]. Since \(Q_k\) converge to \(Q_\infty \), we can construct random vectors \(\widetilde{\mathcal {M}}^k_0\), \(k\ge 1\), and \(\widetilde{\mathcal {M}}_0\) on the same space, such that \(\widetilde{\mathcal {M}}^k_0\) has distribution \(Q_k\) for each \(k\ge 1\), \(\widetilde{\mathcal {M}}_0\) has distribution \(Q_\infty \), and \(\widetilde{\mathcal {M}}^k_0 \rightarrow \widetilde{\mathcal {M}}_0\), a.s. Let \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), be independent Poisson processes, also independent of \(\widetilde{\mathcal {M}}^k_0\), \(k\ge 1\), and \(\widetilde{\mathcal {M}}_0\). For each \(k\), let \(\{\widetilde{\mathcal {M}}^k_t, t\ge 0\}\) be the meteor process with jumps determined by \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), and the initial value \(\widetilde{\mathcal {M}}^k_0\). Similarly, let \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\) be the meteor process with jumps determined by \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), and the initial value \(\widetilde{\mathcal {M}}_0\).

Write \(\widetilde{\mathcal {M}}^k_t = (\widetilde{M}^{k,x}_t)_{x\in {\mathbb {Z}}^d}\) and \(\widetilde{\mathcal {M}}_t = (\widetilde{M}^{x}_t)_{x\in {\mathbb {Z}}^d}\). The proof of Proposition 2.1 shows that, for almost every family of trajectories of \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), and for every \(y \in {\mathbb {Z}}^d\) and \(s_1>0\), there exists a finite set \(A\subset {\mathbb {Z}}^d\) such that the values of \(\widetilde{M}^{k,y}_s\), \(k\ge 1\), and \(\widetilde{M}^y_s\), \(s\in [0,s_1]\), are uniquely determined by the values of \(\widetilde{M}^{k,z}_0\), \(k\ge 1\), and \(\widetilde{M}^z_0\) for \(z \in A\). Moreover, for each \(k\ge 1\) and \(s\in [0,s_1]\), the value of \(\widetilde{M}^{k,y}_s\) is a continuous function of \(\widetilde{M}^{k,z}_0\), \(z\in A\). A similar remark applies to \(\widetilde{M}^y_s\). This and the fact that \(\widetilde{M}^{k,z}_0 \rightarrow \widetilde{M}^z_0\) a.s., as \(k\rightarrow \infty \), for each \(z\in A\), imply that the processes \(\{\widetilde{M}^{k,y}_s, s\in [0,s_1]\}\) converge in the Skorokhod topology to \(\{\widetilde{M}^{y}_s, s\in [0,s_1]\}\) as \(k\rightarrow \infty \). Since this holds for all \(y\in {\mathbb {Z}}^d\), \(s_1>0\) and almost all trajectories of \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), we conclude that \(\{\widetilde{\mathcal {M}}^k_t, t\ge 0\}\) converge in the Skorokhod topology to \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\), a.s.

Recall the definition of \(\{{\mathcal {M}}^k_t, t\ge 0\}\) from the statement of the theorem. Fix any \(K'_n\), \(t_1>0\) and \(p_1 >0\). Then there exists \(k_1\) so large that for \(k\ge k_1\), with probability greater than \(1-p_1\), there is no trajectory of any continuous time random walk with jumps determined by \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), with starting point outside \(K'_k\) and visiting \(K'_n\) at some time in the interval \([0, t_1]\). This implies that we could construct \(\{\widetilde{\mathcal {M}}^k_t, t\ge 0\}\) and \(\{{\mathcal {M}}^k_t, t\ge 0\}\) on the same probability space so that with probability greater than \(1-p_1\), \(\widetilde{M}^{k,y}_t = M^{k,y}_t\) for all \(t\in [0,t_1]\) and \(y \in K'_n\). This and the fact that \(\{\widetilde{\mathcal {M}}^k_t, t\ge 0\}\) converge in the Skorokhod topology to \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\), a.s., easily imply that \(\{{\mathcal {M}}^k_t, t\ge 0\}\) converge to \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\) weakly in the Skorokhod topology. Finally, note that the process that we call \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\) in this proof is the same as the process called \(\{{\mathcal {M}}^\infty _t, t\ge 0\}\) in the theorem.

Remark 3.3

Suppose that \(\{{\mathcal {M}}_t, t\ge 0\}\) is the meteor process on \({\mathbb {Z}}^d\) with \({\mathcal {M}}_0\) distributed as \(Q_\infty \) defined in Theorem 3.1. Then for every constant \(c\in (0,\infty )\), the process \(\{c{\mathcal {M}}_t, t\ge 0\}\) is stationary. For later reference, we call the distribution of this process \(Q^{(c)}_\infty \).

According to Theorem 3.4 below, the mean amount of mass at a vertex for the process \(c{\mathcal {M}}\) is equal to \(c\). From the purely formal point of view, this shows that there are infinitely many stationary distributions for the meteor process on \({\mathbb {Z}}^d\) but this is a rather uninteresting observation.

Let \(u(t,x) = {\mathbb {E}}M^x_t\) for \(t\ge 0\) and \(x\in {\mathbb {Z}}^d\). It is easy to see that \(u(t,x)\) satisfies the heat equation on \({\mathbb {Z}}^d\). If the process \({\mathcal {M}}\) is in the stationary regime then \(u(t,x)\) does not depend on \(t\), so \(x\mapsto u(t,x)\) is harmonic. A non-negative harmonic function on \({\mathbb {Z}}^d\) is constant. There exist many stationary meteor processes on \({\mathbb {Z}}^d\) with \(u(t,x) \equiv 1\). One can construct them as in Remark 4.3(i) below, as mixtures of processes with distributions \(Q^{(c)}_\infty \). It would be interesting to know whether there exist any stationary distributions with average mass 1 at a vertex which are not mixtures of the distributions \(Q^{(c)}_\infty \).
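For completeness, here is a sketch of the computation behind the claim that \(u\) satisfies the heat equation. Over a time interval of length \(dt\), a meteor hits \(x\) with probability \(dt\) (removing all mass at \(x\)), and hits each neighbor \(y\) of \(x\) with probability \(dt\) (sending mass \(M^y_t/(2d)\) to \(x\)). Taking expectations,

$$\begin{aligned} \frac{\partial }{\partial t}\, u(t,x) = -u(t,x) + \frac{1}{2d} \sum _{y\leftrightarrow x} u(t,y), \end{aligned}$$

which is the discrete heat equation associated with the rate-1 continuous time nearest neighbor random walk. In the stationary regime the left hand side vanishes, so \(x\mapsto u(t,x)\) is harmonic.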

Theorem 3.4

Let \(Q_\infty \) be defined as in Theorem 3.1. Suppose that \(d\ge 1\) and let \(\{{\mathcal {M}}_t, t\ge 0\}\) be the meteor process under the stationary measure \(Q_\infty .\) We have

$$\begin{aligned}&{\mathbb {E}}_{Q_\infty } M^x_0 = 1, \quad x\in {\mathbb {Z}}^d, \end{aligned}$$
(3.29)
$$\begin{aligned}&{{\mathrm{Var}}}_{Q_\infty } M^x_0 = 1, \quad x\in {\mathbb {Z}}^d,\end{aligned}$$
(3.30)
$$\begin{aligned}&{{\mathrm{Cov}}}_{Q_\infty }(M^x_0, M^y_0) = -\frac{1}{2d},\quad x\leftrightarrow y,\end{aligned}$$
(3.31)
$$\begin{aligned}&{{\mathrm{Cov}}}_{Q_\infty }(M^x_0, M^y_0) = 0, \quad x\ne y \text { and } x\not \leftrightarrow y. \end{aligned}$$
(3.32)

Proof

The following has been proved in Theorem 5.1 of [3]. Suppose that \(d\ge 1\) and let \(\{{\mathcal {M}}_t, t\ge 0\} = \{(M^1_t,M^2_t,\ldots , M^{k}_t), t\ge 0\}\) be the meteor process on \(G={\mathcal {C}}_n^d\) (the product of \(d\) copies of the cycle \({\mathcal {C}}_n\)), under the stationary measure \(Q_k\) (here \(k=n^d\)). Let \(V\) denote the vertex set of \({\mathcal {C}}_n^d\) and assume that \(\sum _{x\in V} M^x_0 = k\) under \(Q_k\). Then

$$\begin{aligned}&{\mathbb {E}}_{Q_k} M^x_0 = 1, \quad x\in V, \end{aligned}$$
(3.33)
$$\begin{aligned}&\lim _{k\rightarrow \infty } {{\mathrm{Var}}}_{Q_k} M^x_0 = 1, \quad x\in V, \end{aligned}$$
(3.34)
$$\begin{aligned}&\lim _{k\rightarrow \infty } {{\mathrm{Cov}}}_{Q_k}(M^x_0, M^y_0) = -\frac{1}{2d},\quad x\leftrightarrow y,\end{aligned}$$
(3.35)
$$\begin{aligned}&\lim _{k\rightarrow \infty } {{\mathrm{Cov}}}_{Q_k}(M^x_0, M^y_0) = 0,\quad x\ne y \text { and } x\not \leftrightarrow y. \end{aligned}$$
(3.36)

In view of Theorem 3.1(i), formulas (3.29)–(3.32) follow from (3.33)–(3.36) provided \(M^x_0\) and \((M^x_0)^2\) are uniformly integrable under \(Q_k\), \(k\ge 1\).

Uniform integrability of \(M^x_0\) under \(Q_k\), \(k\ge 1\), follows from (3.34). It remains to prove uniform integrability of \((M^x_0)^2\) under \(Q_k\), \(k\ge 1\). It will suffice to show that

$$\begin{aligned} \limsup _{k\rightarrow \infty } {\mathbb {E}}_{Q_k} (M^x_0)^3 <\infty , \quad x\in V. \end{aligned}$$
(3.37)

We will define some subsets of the sets of vertices of \({\mathcal {C}}_n^d\) and \({\mathcal {C}}_n^{3d}\). We will suppress the dependence on \(n\) in this notation. Let \( \mathbf{0}=(0,\ldots ,0)\in {\mathcal {C}}_n^d\). Let \(\mathbf{a}\) be the set of all vertices \((a_1, \ldots , a_d)\in {\mathcal {C}}_n^d \) such that \(|a_i| = |a_j| = 1\) for some \(i\ne j\), and \(a_m = 0\) for all \(m\ne i,j\). Let \(\mathbf{b}\) be the set of all vertices \((b_1,\ldots , b_d) \in {\mathcal {C}}_n^d\) such that \(|b_i| =2\) for some \(i\), and \(b_m = 0\) for all \(m\ne i\). Let \(\mathbf{h}\) be the set of all vertices \((h_1,\ldots , h_d)\in {\mathcal {C}}_n^d \) such that \(|h_i| =1\) for some \(i\), and \(h_m = 0\) for all \(m\ne i\). Let \(\mathbf{g}= V {\setminus }(\{\mathbf{0}\}\cup \mathbf{a}\cup \mathbf{b}\cup \mathbf{h})\).

Let \(V_3\) be the set of all vertices of \({\mathcal {C}}_n^{3d}\). We will use the following notation for the elements of \(V_3\),

$$\begin{aligned} x= (x^1, x^2, x^3)=\left( \left( x^1_1, x^1_2,\ldots , x^1_d\right) ,\left( x^2_1, x^2_2,\ldots , x^2_d\right) ,\left( x^3_1, x^3_2,\ldots , x^3_d\right) \right) . \end{aligned}$$

For \(y=(y_1,\ldots ,y_d)\in {\mathcal {C}}_n^d\) let \(\Vert y \Vert _1 = \sum _{1\le i\le d} |y_i|\). Let \(V_3^*\) be the set of all \(x\in V_3\) such that

$$\begin{aligned} \max \left( \Vert x^1 - x^2 \Vert _1, \Vert x^2 - x^3 \Vert _1, \Vert x^3 - x^1 \Vert _1\right) \ge 5, \end{aligned}$$

and let \(V_3^\circ \) be the interior of \(V_3^*\), that is, the set of all vertices in \(V_3^*\) which are not connected by an edge with a vertex in \(V_3 {\setminus }V_3^*\).

Let \(\mathbf{0}'\) be the set of all \(x= (x^1, x^2, x^3)\in V_3\) such that \(x^j-x^i = \mathbf{0}\) for some \(1\le i,j \le 3\), \(j\ne i\). Let \(\mathbf{a}'\) be the set of all \(x= (x^1, x^2, x^3)\in V_3\) such that \(x^j-x^i \in \mathbf{a}\) for some \(1\le i,j \le 3, j\ne i\). We define \(\mathbf{b}'\) and \( \mathbf{h}'\) in an analogous manner. Let \(\mathbf{g}' = V_3 {\setminus }(\mathbf{0}'\cup \mathbf{a}' \cup \mathbf{b}' \cup \mathbf{h}')\).

We will base our estimates for \({\mathbb {E}}_{Q_k} (M^x_0)^3 \) on a representation of \(M^x_0\) using WIMPs. Let \(Z^1, Z^2\) and \( Z^3\) be as in Definition 2.2. In particular, \({\mathbb {P}}(Z^j_0 = x) = M^x_0/n^d\) for \(j=1,2,3\) and \(x\in V\).

Since the state space \({\mathcal {C}}_n^{3d}\) for the process \((Z^1, Z^2,Z^3)\) is finite, the process has a stationary distribution. The stationary distribution is unique because all states communicate. We will estimate the probability that \(Z^1_t = Z^2_t=Z^3_t\) under the stationary distribution. Let \(\{\pi _x, x\in V_3\}\) be the set of stationary probabilities for the discrete time Markov chain (the skeleton process) embedded in \((Z^1, Z^2,Z^3)\).

Let \(\pi '_x = 1\) for \(x\in \mathbf{0}'\), \(\pi '_x = \frac{6d-3}{8d}\) for \(x\in \mathbf{h}'\) and \(\pi '_x = \frac{3}{4}\) for all other \(x\in V_3\). It was verified using computer algebra (Mathematica) that the function \(\pi '\) satisfies the following equations.

$$\begin{aligned} \pi '_x&= \frac{1+2d}{4d} \pi '_x + \frac{2}{3} \pi '_y, \quad x\in \mathbf{0}',\ y \in \mathbf{h}',\end{aligned}$$
(3.38)
$$\begin{aligned} \pi '_x&= \frac{1}{3d} \pi '_y + \frac{2d -2}{3d} \pi '_z + \frac{1}{3} \pi '_x,\quad x\in \mathbf{h}',\ y \in \mathbf{b}',\ z\in \mathbf{a}',\end{aligned}$$
(3.39)
$$\begin{aligned} \pi '_x&= \frac{1}{4d^2} \pi '_v + \frac{2}{3d} \pi '_y + \frac{2d-2}{3d} \pi '_z + \frac{1}{3} \pi '_x, \quad x\in \mathbf{a}',\ v\in \mathbf{0}',\ y \in \mathbf{h}',\ z\in \mathbf{g}',\nonumber \\ \end{aligned}$$
(3.40)
$$\begin{aligned} \pi '_x&= \frac{1}{8d^2} \pi '_v + \frac{1}{3d} \pi '_y + \frac{2d-1}{3d} \pi '_z + \frac{1}{3} \pi '_x, \quad x\in \mathbf{b}',\ v\in \mathbf{0}',\ y \in \mathbf{h}',\ z\in \mathbf{g}',\nonumber \\ \end{aligned}$$
(3.41)
$$\begin{aligned} \pi '_x&= \frac{2}{3} \pi '_y + \frac{1}{3} \pi '_x, \quad x\in \mathbf{g}',\ y \in \mathbf{a}' \cup \mathbf{b}' \cup \mathbf{g}'. \end{aligned}$$
(3.42)
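The computer-algebra verification can be reproduced independently with exact rational arithmetic. The sketch below (our own check, not taken from the paper) substitutes \(\pi '_x = 1\) for \(x\in \mathbf{0}'\), \(\pi '_x = (6d-3)/(8d)\) for \(x\in \mathbf{h}'\) and \(\pi '_x = 3/4\) otherwise into (3.38)–(3.42) and confirms the identities for \(d = 1,\ldots ,10\):

```python
from fractions import Fraction as F

def check(d):
    """Verify (3.38)-(3.42) with exact rationals for a given dimension d."""
    p0 = F(1)              # pi' on 0'
    ph = F(6*d - 3, 8*d)   # pi' on h'
    pa = pb = pg = F(3, 4) # pi' on a', b', g'
    eqs = [
        p0 == F(1 + 2*d, 4*d)*p0 + F(2, 3)*ph,                                # (3.38)
        ph == F(1, 3*d)*pb + F(2*d - 2, 3*d)*pa + ph/3,                       # (3.39)
        pa == F(1, 4*d*d)*p0 + F(2, 3*d)*ph + F(2*d - 2, 3*d)*pg + pa/3,      # (3.40)
        pb == F(1, 8*d*d)*p0 + F(1, 3*d)*ph + F(2*d - 1, 3*d)*pg + pb/3,      # (3.41)
        pg == F(2, 3)*pa + pg/3,                                              # (3.42)
    ]
    return all(eqs)

assert all(check(d) for d in range(1, 11))
print("equations (3.38)-(3.42) hold for d = 1..10")
```

This is a purely algebraic check (for small \(d\) some vertex classes are empty, but the identities still hold); since both sides of each equation are rational functions of \(d\) of low degree, verifying them at sufficiently many integer values of \(d\) in fact proves them for all \(d\).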

We will show that the stationary distribution \(\pi \) satisfies equations corresponding to (3.38)–(3.42) in an appropriate sense that will be made precise below. To save space, we will discuss the counterpart of only one of the above equations, namely (3.40). For any finite set \(A\) let \(\# A\) denote its cardinality. We will prove that for \(x\in \mathbf{a}' \cap V_3^\circ \), the stationary distribution must satisfy

$$\begin{aligned} \pi _x&= \frac{1}{4d^2} \frac{\sum _{v\in \mathbf{0}', v\leftrightarrow x} \pi _v}{ \# \{v\in \mathbf{0}', v\leftrightarrow x\}} + \frac{2}{3d} \frac{\sum _{y\in \mathbf{h}', y\leftrightarrow x} \pi _y}{ \# \{y\in \mathbf{h}', y\leftrightarrow x\}}\nonumber \\&\quad +\, \frac{2d-2}{3d} \frac{\sum _{z\in \mathbf{g}', z\leftrightarrow x} \pi _z}{ \# \{z\in \mathbf{g}', z\leftrightarrow x\}} + \frac{1}{3} \frac{\sum _{u\in \mathbf{a}', u\leftrightarrow x} \pi _u}{ \# \{u\in \mathbf{a}', u\leftrightarrow x\}}. \end{aligned}$$
(3.43)

Consider \(x\in \mathbf{a}'\cap V_3^\circ \). Suppose without loss of generality that \(x^1 - x^2 \in \mathbf{a}\). There exists a unique \(v=(v^1,v^2,v^3)\in \mathbf{0}'\) which can be a state of \((Z^1, Z^2,Z^3)\) from which the process can jump to \(x\). We have \(v^1-v^2= \mathbf{0}\). When \(v^1\) is hit by a meteor, \(v^1\) will jump to \(x^1\) and \(v^2\) will jump to \(x^2\) with probability \(1/(4d^2)\). With probability \(1/(4d^2)\), \(v^1\) will jump to \(x^2\) and \(v^2\) will jump to \(x^1\). The sum of the two probabilities is \(2/(4d^2)\). We multiply this quantity by \(1/2\) because the first meteor hit which moves the process \((Z^1, Z^2,Z^3)\) to a new location may hit either \(v^1\) (equal to \(v^2\)) or \(v^3\), with equal probabilities. Hence, we have the factor \(\frac{1}{4d^2}\) in the first term on the right hand side of (3.43).

There exist exactly 4 elements of \(\mathbf{h}'\) which can be states of \((Z^1, Z^2,Z^3)\) from which the process can jump to \(x\). This transition requires that the specific process, either \(Z^1\) or \(Z^2\), jumps, and each process has probability \(1/3\) of jumping first. The probability that the jump will go in the desirable direction is \(1/(2d)\). The product of these probabilities and 4 is equal to \(2/(3d)\) so this justifies the presence of the factor \(\frac{2}{3d} \) in the second term on the right hand side of (3.43).

There exist exactly \(4d-4\) elements of \(\mathbf{g}'\) which can be states of \((Z^1, Z^2,Z^3)\) from which the process can jump to \(x\). This transition requires that the specific process, either \(Z^1\) or \(Z^2\), jumps, and each process has probability \(1/3\) of jumping first. The probability that the jump will go in the desirable direction is \(1/(2d)\). The product of these probabilities and \(4d-4\) is equal to \((2d-2)/(3d)\) so this justifies the presence of the factor \(\frac{2d-2}{3d} \) in the third term on the right hand side of (3.43).

Finally, the process \(Z^3\) will jump first with probability \(1/3\). There exist exactly \(2d\) elements of \(\mathbf{a}'\) which can be states of \((Z^1, Z^2,Z^3)\) from which the process can jump to \(x\). The probability that \(Z^3\) will jump in the desirable direction is \(1/(2d)\). We have \( (1/3) (2d) (1/(2d)) = 1/3\) so this explains the factor \( \frac{1}{3} \) in the last term on the right hand side of (3.43).

If all occurrences of the function \(\pi \) are replaced by \(\pi '\) in (3.43) then this equation reduces to (3.40). A similar argument applies to the appropriate counterparts of other equations in the system (3.38)–(3.42) so we conclude that \(\pi \) and \(\pi '\) satisfy the same system of equilibrium equations on \(V_3^\circ \).

Let \(q_{x,y}\) denote the one step transition probabilities for the skeleton process (discrete time Markov chain) embedded in \((Z^1, Z^2,Z^3)\). Let \(O=\{(x^1,x^2,x^3) \in V_3: x^1=x^2=x^3\}\). It is easy to see that there exist \(p_1>0\) and \(k_1\) (independent of \(n\)) such that for every \(x\in V_3 {\setminus }V_3^\circ \) there exist \(y \in O\) and a sequence \(z_1, z_2,\ldots , z_j\) such that \(j \le k_1\), \(z_1 = y, z_j= x, z_m \leftrightarrow z_{m+1}\) for all \(m=1,\ldots , j-1\), and

$$\begin{aligned} \prod _{1\le m \le j-1} q_{z_m, z_{m+1}} > p_1. \end{aligned}$$
(3.44)

By symmetry, \(\pi _x = \pi _y\) for all \(x,y \in O\). Let \(\pi _o\) denote this common value. It follows from (3.44) that \(\pi _x \ge p_1 \pi _o\) for all \(x\in V_3 {\setminus }V_3^\circ \). Since \(\pi '_x \le 1\) for all \(x\), we obtain \(\pi _x/\pi '_x \ge p_1 \pi _o\) for all \(x\in V_3 {\setminus }V_3^\circ \). Let \(\alpha = \min _{x\in V_3} \pi _x/\pi '_x\). If this minimum is attained in \(V_3^\circ \) then it is easy to check, using the fact that \(\pi \) and \(\pi '\) satisfy the same equilibrium equations on \(V_3^\circ \), that \(\pi _x/\pi '_x\) also attains the minimum \(\alpha \) on \(V_3 {\setminus }V_3^\circ \), and, therefore, \(\min _{x\in V_3} \pi _x/\pi '_x\ge p_1 \pi _o\). Since \(\pi '_x \ge (6d-3)/(8d)\) for all \(x\), we obtain \(\pi _x \ge p_1 \pi _o(6d-3)/(8d)\) for \(x\in V_3^*\). This, and the previously derived estimate for \(x\in V_3 {\setminus }V_3^\circ \), show that if we let \(c_1 = p_1 (6d-3)/(8d)>0\) then for all \(x\in V_3\),

$$\begin{aligned} \pi _x \ge p_1 \pi _o(6d-3)/(8d) = c_1 \pi _o. \end{aligned}$$
(3.45)

Let \(\{\pi ^*_x, x\in V_3\}\) be the set of stationary probabilities for the process \((Z^1, Z^2,Z^3)\). The holding time for \((Z^1, Z^2,Z^3)\) has mean 1 for all states in \(O\). The mean is \(1/2\) or \(1/3\) for all other states, depending on whether any two components of \((Z^1, Z^2,Z^3)\) are equal or not. Hence, if we take \(c_2 = c_1/3>0\) then (3.45) yields for \(x\in V_3\),

$$\begin{aligned} \pi ^*_x \ge c_1 \pi ^*_o/3 = c_2 \pi ^*_o. \end{aligned}$$
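The passage from \(\pi \) to \(\pi ^*\) uses the standard relation \(\pi ^*_x \propto \pi _x m_x\), where \(m_x\) is the mean holding time at \(x\). A minimal self-contained illustration of this relation on a toy three-state chain (the transition matrix and rates below are illustrative and unrelated to the meteor process):

```python
def ctmc_stationary_from_skeleton(P, rates, iters=3000):
    """Return (pi, pi_star): the stationary vector pi of the embedded
    (skeleton) chain with transition matrix P, found by power iteration,
    and the continuous-time stationary vector pi*_x proportional to
    pi_x * m_x, where m_x = 1 / rates[x] is the mean holding time at x."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]
    w = [pi[x] / rates[x] for x in range(n)]
    s = sum(w)
    return pi, [v / s for v in w]

# Toy example: 3 states, P and rates chosen arbitrarily.
P = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [1/3, 2/3, 0.0]]
rates = [1.0, 2.0, 3.0]
pi, pi_star = ctmc_stationary_from_skeleton(P, rates)
# pi* solves the continuous-time balance equations for the generator
# Q[x][y] = rates[x] * P[x][y] (x != y), Q[x][x] = -rates[x].
for y in range(3):
    flow_in = sum(pi_star[x] * rates[x] * P[x][y] for x in range(3))
    assert abs(flow_in - pi_star[y] * rates[y]) < 1e-9
```

In the proof above the same identity is applied with \(m_x \in \{1, 1/2, 1/3\}\) depending on how many components of \((Z^1,Z^2,Z^3)\) coincide at \(x\).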

Recall that \(k=n^d\), so that \(|V_3| = k^3\) and \(|O| = k\). Since \(\sum _x \pi ^*_x = 1\), the last formula gives \(\pi ^*_o \le (c_2 k^3)^{-1}\); as \((Z^1, Z^2,Z^3)\) is in the stationary regime, \({\mathbb {P}}_{Q_k} (Z^1= Z^2=Z^3) = \sum _{x\in O} \pi ^*_x = k \pi ^*_o\), so

$$\begin{aligned} \limsup _{n\rightarrow \infty }k^2 {\mathbb {P}}_{Q_k} (Z^1= Z^2=Z^3) \le c_2^{-1}. \end{aligned}$$
(3.46)

Let \({\mathcal {G}}_t = \sigma ({\mathcal {M}}_s, 0\le s \le t)\). It is easy to see that, for \(x\in V\) and \(j=1,2,3\),

$$\begin{aligned} {\mathbb {P}}_{Q_k}(Z^j_0 =x \mid {\mathcal {G}}_0) = M^x_0/k. \end{aligned}$$

The random variables \(Z^1_0, Z^2_0\) and \( Z^3_0\) are conditionally independent given \({\mathcal {G}}_0\), so

$$\begin{aligned} {\mathbb {P}}_{Q_k}(Z^1= Z^2=Z^3=x \mid {\mathcal {G}}_0) = (M^x_0 /k)^3. \end{aligned}$$

Thus, using invariance of the process \((Z^1, Z^2,Z^3)\) under shifts of \({\mathcal {C}}_n^d\),

$$\begin{aligned} {\mathbb {E}}_{Q_k} (M^x_0 )^3&= k^3 {\mathbb {E}}_{Q_k} {\mathbb {P}}_{Q_k}(Z^1= Z^2=Z^3=x \mid {\mathcal {G}}_0) =k^3{\mathbb {P}}_{Q_k}(Z^1= Z^2=Z^3=x) \\&= k^2{\mathbb {P}}_{Q_k}(Z^1= Z^2=Z^3). \end{aligned}$$

This and (3.46) yield

$$\begin{aligned} \limsup _{k\rightarrow \infty } {\mathbb {E}}_{Q_k} (M^x_0 )^3 \le c_2^{-1}. \end{aligned}$$

This proves (3.37) and thus completes the proof of the theorem. \(\square \)
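Although it plays no role in the proof, the moment formulas (3.29)–(3.32) can be probed numerically for \(d=1\) by simulating the meteor process on a large cycle. The sketch below (function name and parameter values are ours, purely illustrative) exploits the fact that all Poisson clocks have rate 1, so the site of the next meteor hit is uniform over the vertices and, since the total jump rate does not depend on the state, sampling the embedded chain is equivalent to sampling the continuous time process:

```python
import random

def meteor_moments(n=30, burn_in=20000, samples=2000, gap=30, seed=0):
    """Monte Carlo estimates of mean, variance and nearest-neighbour
    covariance of the vertex mass for the meteor process on the cycle C_n.
    Each event hits a uniformly chosen vertex and moves half of its mass
    to each of its two neighbours.  Total mass is n, so mean mass is 1."""
    rng = random.Random(seed)
    M = [1.0] * n
    def hit():
        v = rng.randrange(n)
        half = M[v] / 2.0
        M[v] = 0.0
        M[(v - 1) % n] += half
        M[(v + 1) % n] += half
    for _ in range(burn_in):            # approach the stationary regime
        hit()
    s1 = s2 = s11 = 0.0
    count = 0
    for _ in range(samples):
        for _ in range(gap):            # decorrelate consecutive samples
            hit()
        for x in range(n):
            s1 += M[x]
            s2 += M[x] ** 2
            s11 += M[x] * M[(x + 1) % n]
            count += 1
    mean = s1 / count
    return mean, s2 / count - mean ** 2, s11 / count - mean ** 2

mean, var, cov = meteor_moments()
print(mean, var, cov)
```

The mean is 1 up to floating point rounding (mass conservation), while the variance and neighbor covariance estimates typically land within a few percent of the limiting values \(1\) and \(-1/(2d) = -1/2\).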

Remark 3.5

The derivation of (3.33)–(3.36) in [3] is based on an explicit solution to a set of equations similar to (3.38)–(3.42). The method breaks down if one wants to generalize Theorem 3.4 to the third or higher moments because the solution to a similar set of equations needed for a similar argument does not seem to have a tractable form.

Corollary 3.6

We have for all \(n\ge 1,\)

$$\begin{aligned} \begin{array}{ll} &{}\displaystyle {\mathbb {E}}_{Q_\infty }\left( \sum _{x\in K_n} M^x_0\right) = n^d, \\ &{}\displaystyle {{\mathrm{Var}}}_{Q_\infty } \left( \sum _{x\in K_n} M^x_0\right) = n^{d-1}. \end{array} \end{aligned}$$
(3.47)

Proof

The first formula follows directly from (3.29).

We have from (3.30)–(3.32),

$$\begin{aligned} {{\mathrm{Var}}}_{Q_\infty } \left( \sum _{x\in K_n} M^x_0\right)&= \sum _{x\in K_n} {{\mathrm{Var}}}_{Q_\infty } M^x_0 + \sum _{x,y\in K_n, x\leftrightarrow y} {{\mathrm{Cov}}}_{Q_\infty }(M^x_0, M^y_0)\\&= \sum _{x\in K_n} 1 - \sum _{x,y\in K_n, x\leftrightarrow y} \frac{1}{2d}. \end{aligned}$$

Note that each edge contributes twice to the second sum on the right hand side, since that sum extends over ordered pairs of neighbors. Formula (3.47) now follows by counting the vertices in \(K_n\) and the edges connecting neighbors in \(K_n\). \(\square \)
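The vertex and edge count can be checked by brute force for small \(n\) and \(d\). The sketch below (our own check) assumes \(K_n\) is a cube with \(n^d\) vertices, say \(\{0,\ldots ,n-1\}^d\), and evaluates the variance formula term by term:

```python
from fractions import Fraction
from itertools import product

def box_mass_variance(n, d):
    """Var(sum_{x in K_n} M^x_0) computed from (3.30)-(3.32): each vertex
    contributes variance 1, and each ordered pair of neighbours contributes
    covariance -1/(2d).  Here K_n is taken to be the cube {0,...,n-1}^d."""
    verts = set(product(range(n), repeat=d))
    ordered_pairs = 0
    for x in verts:
        for i in range(d):
            for s in (-1, 1):
                y = x[:i] + (x[i] + s,) + x[i + 1:]
                if y in verts:
                    ordered_pairs += 1
    return len(verts) - Fraction(ordered_pairs, 2 * d)

# Agrees with the closed form n^(d-1) in (3.47).
for n in range(1, 6):
    for d in range(1, 4):
        assert box_mass_variance(n, d) == n ** (d - 1)
print("ok")
```

Indeed, the cube has \(d\,n^{d-1}(n-1)\) internal edges, hence \(2d\,n^{d-1}(n-1)\) ordered pairs, and \(n^d - n^{d-1}(n-1) = n^{d-1}\).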

Lemma 3.2 is a crucial step in the proof of Theorem 3.1. The lemma shows that given a system of independent Poisson processes, one can construct two random walks with jumps determined by this family of Poisson processes (the same for both random walks), such that the random walks meet after a relatively short time. The basic idea of the proof is to use the usual mirror coupling for each coordinate separately. This would be quite straightforward if we could couple the Poisson processes determining the jump times of the two random walks. The fact that a single family of Poisson processes is used causes a problem: mirror-coupled coordinates do not meet because they typically arrive at the meeting location at different times. The time difference can be estimated and turns out to be manageable, but the mirror coupling has to be restarted and the whole procedure requires multiple induction arguments. On the technical side, it is worth noting that similar arguments often work on the “exponential” scale, that is, one divides space and/or time into “boxes” of diameter \(2^k\), \(k\in {\mathbb {Z}}\). This does not seem to work in our case and we have to use the “doubly exponential” scale \(2^{2^k}\). More precisely, the argument requires that the doubly exponential scale be \(a^{\lambda ^m}\) for carefully chosen values of \(a\) and \(\lambda \).

Proof of Lemma 3.2

Step 1 Suppose that an i.i.d. family of Poisson processes \(N^x, x\in {\mathbb {Z}}^d\), is given. We will construct processes \(X\) and \(\widetilde{X}\) so that \(X\) (\(\widetilde{X}\)) jumps at a time \(t\) if and only if \(X_{t-} = v\) (resp., \(\widetilde{X}_{t-} = v\)) and \(N^v\) has a jump at time \(t\).

Suppose that \(X\) is given and let \(\{Y_j, j\ge 0\}\) be the discrete time random walk embedded in \(X\). We will define \(\widetilde{Y}\), the discrete time random walk embedded in \(\widetilde{X}\), below. We will write

$$\begin{aligned}&X_t = (X^1_t, X^2_t,\ldots , X^d_t), \quad \widetilde{X}_t = (\widetilde{X}^1_t, \widetilde{X}^2_t,\ldots , \widetilde{X}^d_t),\\&Y_j = (Y^1_j, Y^2_j,\ldots , Y^d_j),\quad \widetilde{Y}_j = (\widetilde{Y}^1_j, \widetilde{Y}^2_j,\ldots , \widetilde{Y}^d_j). \end{aligned}$$

For \(i=1,\ldots , d\), let

$$\begin{aligned} \widetilde{Y}^i_j = {\left\{ \begin{array}{ll} - Y^i_j + X^i_0 + \widetilde{X}^i_0 &{} \text {for } 0\le j \le \tau := T(|Y^i_\cdot - \widetilde{Y}^i_\cdot |, \{0,1\}), \\ Y^i_j - Y^i_\tau + \widetilde{Y}^i_\tau &{} \text {for } j > \tau . \end{array}\right. } \end{aligned}$$

Let \(\bar{Y}^i_j = |Y^i_j - \widetilde{Y}^i_j|\) and note that \(\bar{Y}^i\) is a discrete time lazy symmetric random walk, with step size 2, starting at a non-negative integer and stopped at the hitting time of \(\{0,1\}\). “Lazy” means here that \({\mathbb {P}}(\bar{Y}^i_{j+1}= \bar{Y}^i_j) =(d-1)/d\).
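The laziness probability can be checked by direct simulation: each jump of the shared skeleton walk picks one coordinate uniformly, so the gap in a fixed coordinate moves (by \(\pm 2\), under the mirror) only with probability \(1/d\). A minimal Monte Carlo sketch with hypothetical dimension, step count, and starting gap, ignoring the stopping at \(\{0,1\}\) for simplicity:

```python
import random

random.seed(0)
d = 3
steps = 200_000
y = [0] * d        # the coordinate walk Y
ybar = 6           # |Y^1 - Ytilde^1|; the mirror doubles moves in coordinate 1
stays = moves = 0
for _ in range(steps):
    i = random.randrange(d)          # the shared jump picks one coordinate
    s = random.choice((-1, 1))
    y[i] += s
    if i == 0:
        # while coordinate 1 is mirrored, its gap moves by exactly 2
        ybar = abs(ybar + 2 * s)
        moves += 1
    else:
        stays += 1

# Empirical laziness ~ (d-1)/d, and the step size 2 preserves parity.
assert abs(stays / steps - (d - 1) / d) < 0.01
assert ybar % 2 == 0
```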

Fix any \(\beta \in \left( 0, 1\right) \) and \(\delta _1 >0\). We will argue that one can choose \(\delta >0, a_0>1\) and \(\lambda >1\) satisfying conditions (3.48)–(3.51), (3.54)–(3.57), (3.58), (3.62) and (3.63) stated below, for all \(a\ge a_0\) and \(m\ge 1\). Note that conditions (3.48), (3.49) are the same as (3.1), (3.2).

We fix \(\delta >0\) and \(\lambda >1\) satisfying

$$\begin{aligned}&\displaystyle (1/\beta ) (1-\delta ) > 1,\end{aligned}$$
(3.48)
$$\begin{aligned}&\displaystyle (1+\lambda + 3 \delta )/2 < (1/\beta ) (1-\delta ), \end{aligned}$$
(3.49)
$$\begin{aligned}&\displaystyle (1+\lambda + 4\delta )/4 < 1/\lambda , \end{aligned}$$
(3.50)
$$\begin{aligned}&\displaystyle 1+\lambda + 2\delta > (1+\lambda + 3 \delta )/2. \end{aligned}$$
(3.51)

Let

$$\begin{aligned} k_{m,1}&= \lceil a^{(1+\lambda + 2\delta )\lambda ^{m}} - a^{(1+\lambda + 3 \delta )\lambda ^{m}/2} \rceil ,\end{aligned}$$
(3.52)
$$\begin{aligned} k_{m,2}&= \lfloor a^{(1+\lambda + 2\delta )\lambda ^{m}} + a^{(1+\lambda + 3 \delta )\lambda ^{m}/2} \rfloor . \end{aligned}$$
(3.53)

There exists \(1<a_1<\infty \) so large that for all \(a\ge a_1\) and \(m\ge 1\), we have, in view of (3.51),

$$\begin{aligned}&\displaystyle k_{m,1} > a^{(1+\lambda + \delta )\lambda ^{m}}, \end{aligned}$$
(3.54)
$$\begin{aligned}&\displaystyle k_{m,2} < 2 a^{(1+\lambda + 2\delta )\lambda ^{m}}, \end{aligned}$$
(3.55)
$$\begin{aligned}&\displaystyle (k_{m,1} - a^{(1+\lambda + 2\delta )\lambda ^{m}})^2 \ge (1/2) a^{(1+\lambda + 3 \delta )\lambda ^{m}}, \end{aligned}$$
(3.56)
$$\begin{aligned}&\displaystyle (k_{m,2} - a^{(1+\lambda + 2\delta )\lambda ^{m}})^2 \ge (1/2) a^{(1+\lambda + 3 \delta )\lambda ^{m}}. \end{aligned}$$
(3.57)

We can find \(a_2 \ge a_1\) such that, because of (3.50), for \(a\ge a_2\) and \(m\ge 1\),

$$\begin{aligned} 1+ a^{(1+\lambda + 4\delta )\lambda ^{m}/4} < a^{\lambda ^m/\lambda }/d = a^{\lambda ^{m-1}}/d. \end{aligned}$$
(3.58)

Let

$$\begin{aligned} A^i_1 = \left\{ \sup \{|\widetilde{Y}^i_j - \widetilde{Y}^i_k |: k_{m,1} \le j,k \le k_{m,2}\} \ge a^{(1+\lambda + 4 \delta )\lambda ^{m}/4}\right\} . \end{aligned}$$

An application of Kolmogorov’s inequality shows that

$$\begin{aligned} {\mathbb {P}}\left( A^i_1\right) \le c_1 a^{-\delta \lambda ^{m}/2}. \end{aligned}$$
(3.59)

Let

$$\begin{aligned} A_2^i = \{T(\bar{Y}^i, \{0,1\}) < T^+(\bar{Y}^i, a^{\lambda ^{m+1}})\}. \end{aligned}$$

By gambler’s ruin formula, for \(m\ge 1\),

$$\begin{aligned} {\mathbb {P}}\left( (A_2^i)^c \mid \bar{Y}^i_0 \le a^{\lambda ^m}\right) \le 2 \frac{a^{\lambda ^m}}{a^{\lambda ^{m+1}}} = 2\left( a^{1-\lambda }\right) ^{\lambda ^m}. \end{aligned}$$
(3.60)
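The gambler's ruin formula invoked here is the identity \({\mathbb {P}}_k(\text {hit } N \text { before } 0) = k/N\) for a symmetric walk; laziness does not affect it, and the factor 2 in (3.60) leaves room for the stopping set \(\{0,1\}\) and the step size 2 of \(\bar{Y}^i\). An exact check of the identity, with hypothetical small levels standing in for \(a^{\lambda ^m}\) and \(a^{\lambda ^{m+1}}\):

```python
from fractions import Fraction

def ruin_probability(k, n):
    """Exact P(simple symmetric walk started at k hits n before 0):
    the unique function harmonic on {1,...,n-1} with h(0)=0, h(n)=1
    is linear, h(j) = j/n."""
    h = [Fraction(j, n) for j in range(n + 1)]
    for j in range(1, n):                     # verify harmonicity
        assert h[j] == (h[j - 1] + h[j + 1]) / 2
    return h[k]

# Start 9, upper barrier 81 (stand-ins for a^{lambda^m}, a^{lambda^{m+1}}):
# the escape probability is 9/81 = (a^{1-lambda})^{lambda^m} in this scaling.
assert ruin_probability(9, 81) == Fraction(1, 9)
```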

Let

$$\begin{aligned} A_3^i =\left\{ T(\bar{Y}^i, \{0,1\} \cup [a^{\lambda ^{m+1}},\infty )) \le a^{(1+\lambda + \delta )\lambda ^{m}}\right\} . \end{aligned}$$

We have the following standard estimate,

$$\begin{aligned} {\mathbb {E}}\left( T(\bar{Y}^i, \{0,1\} \cup [a^{\lambda ^{m+1}},\infty )) \mid \bar{Y}^i_0\le a^{\lambda ^m}\right) \le c_2a^{(1+\lambda )\lambda ^{m}}. \end{aligned}$$

Hence,

$$\begin{aligned} {\mathbb {P}}\left( (A_3^i)^c \mid \bar{Y}^i_0 \le a^{\lambda ^m}\right) \le c_2 a^{(1+\lambda )\lambda ^{m}}/a^{(1+\lambda + \delta )\lambda ^{m}} = c_2 a^{-\delta \lambda ^{m}}. \end{aligned}$$
(3.61)
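The standard estimate preceding (3.61) rests on the exit-time identity \({\mathbb {E}}_k T = k(N-k)\) for a simple symmetric walk on \(\{0,\ldots ,N\}\) (laziness and the step size change it only by a constant factor, absorbed into \(c_2\)), followed by Markov's inequality. A check of the identity with hypothetical small levels:

```python
def expected_exit_time(k, n):
    """E_k[number of steps for a simple symmetric walk to exit (0, n)]:
    the unique solution of f(j) = 1 + (f(j-1) + f(j+1))/2 with
    f(0) = f(n) = 0 is f(j) = j*(n - j)."""
    f = [j * (n - j) for j in range(n + 1)]
    for j in range(1, n):                     # verify the recursion
        assert f[j] == 1 + (f[j - 1] + f[j + 1]) / 2
    return f[k]

# Start at most 9, barrier 81 (stand-ins for a^{lambda^m}, a^{lambda^{m+1}}):
# the mean exit time is at most 9 * 81, i.e. of order a^{(1+lambda)lambda^m}.
assert expected_exit_time(9, 81) <= 9 * 81
```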

Let \(c_3 = c_1 + c_2 + 14\) so that for \(a\ge a_2\) and \(m\ge 1\),

$$\begin{aligned} 2\left( a ^{1-\lambda }\right) ^{\lambda ^m} + c_1 a^{- \delta \lambda ^{m}/2} + c_2 a^{-\delta \lambda ^{m}} + 14 a^{-\delta \lambda ^{m}} \le 2\left( a ^{1-\lambda }\right) ^{\lambda ^m} + c_3 a^{-\delta \lambda ^{m}/2}. \end{aligned}$$
(3.62)

Recall that \(\delta _1>0\) has been fixed. Given \(\lambda , \delta \) and \(a_2\) chosen so far, we can find \(a_0 \ge a_2\) so that, for \(a\ge a_0\), we have

$$\begin{aligned} \sum _{m=1}^\infty d \left( 2\left( a^{1-\lambda }\right) ^{\lambda ^m} + c_3 a^{- \delta \lambda ^{m}/2}\right) < \delta _1/4 \wedge 1/2. \end{aligned}$$
(3.63)

This completes the specification of \(\delta , \lambda \) and \(a_0\).

Let \(\widehat{X}_t = \widetilde{X}_t\) for \(t\le T(X- \widetilde{X}, \mathbf{0})\) and let \(\{\widehat{X}_t, t \ge T(X- \widetilde{X}, \mathbf{0})\}\) be a continuous time random walk on \({\mathbb {Z}}^d\) with the same skeleton process as that of \(\{\widetilde{X}_t, t \ge T(X- \widetilde{X}, \mathbf{0})\}\), but with holding times independent of \(X\) and \(\{\widetilde{X}_t, t \le T(X- \widetilde{X}, \mathbf{0})\}\). Let \(S(j)\) be the time of the \(j\)-th jump of \(X\), let \(\widetilde{S}(j)\) be the time of the \(j\)-th jump of \(\widetilde{X}\), and let \(\widehat{S}(j)\) be the time of the \(j\)-th jump of \(\widehat{X}\).

Let \(t_1=a^{(1+\lambda + 2\delta )\lambda ^{m}}.\) Let \(M\) be the number of jumps of \(X\) before \(t_1\) and let \(\widehat{M}\) be the number of jumps of \(\widehat{X}\) before \(t_1\). Recall \(k_{m,1}\) from (3.52). The random variable \(S(k_{m,1}) \) is the sum of \(k_{m,1}\) independent exponential random variables. By the Chebyshev inequality, using (3.56), for \(m\ge 1\),

$$\begin{aligned} {\mathbb {P}}(M \le k_{m,1})&= {\mathbb {P}}(S(k_{m,1}) \ge t_1) \le \frac{k_{m,1}}{(k_{m,1} - t_1)^2}\nonumber \\&\le 2\frac{\lceil a^{(1+\lambda + 2\delta )\lambda ^{m}} - a^{(1+\lambda + 3 \delta )\lambda ^{m}/2} \rceil }{(a^{(1+\lambda + 3 \delta )\lambda ^{m}/2})^2 } \nonumber \\&\le 2 \frac{ a^{(1+\lambda + 2\delta )\lambda ^{m}}}{a^{(1+\lambda + 3\delta )\lambda ^{m}} } = 2 a^{-\delta \lambda ^{m}}. \end{aligned}$$
(3.64)

For the same reason, we have,

$$\begin{aligned} {\mathbb {P}}(\widehat{M} \le k_{m,1}) \le 2 a^{-\delta \lambda ^{m}}. \end{aligned}$$
(3.65)

A similar calculation using (3.55) and (3.57) gives

$$\begin{aligned} {\mathbb {P}}(M \ge k_{m,2})&\le 4 a^{-\delta \lambda ^{m}},\end{aligned}$$
(3.66)
$$\begin{aligned} {\mathbb {P}}(\widehat{M} \ge k_{m,2})&\le 4 a^{-\delta \lambda ^{m}}. \end{aligned}$$
(3.67)
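The Chebyshev step in (3.64) can be illustrated numerically: for \(S(k)\) a sum of \(k\) i.i.d. mean-one exponentials, \({\mathbb {P}}(S(k) \ge t) \le k/(t-k)^2\) whenever \(t > k\). A Monte Carlo sketch with hypothetical small values of \(k\) and \(t\) in place of \(k_{m,1}\) and \(t_1\):

```python
import random

random.seed(1)
k, t1, trials = 100, 150, 10_000
# S(k): sum of k i.i.d. exponential(1) holding times; mean k, variance k.
exceed = sum(
    1 for _ in range(trials)
    if sum(random.expovariate(1.0) for _ in range(k)) >= t1
)
# Chebyshev: P(S >= t1) <= Var / (t1 - mean)^2 = k / (t1 - k)^2.
chebyshev = k / (t1 - k) ** 2
assert exceed / trials <= chebyshev
```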

Let

$$\begin{aligned} A_4 = \{k_{m,1} \le M \le k_{m,2}, k_{m,1} \le \widehat{M} \le k_{m,2}\}. \end{aligned}$$

We combine (3.64), (3.65), (3.66) and (3.67) to see that

$$\begin{aligned} {\mathbb {P}}\left( A_4\right) \ge 1- 12 a^{-\delta \lambda ^{m}}. \end{aligned}$$
(3.68)

Let

$$\begin{aligned} A_5 = \{M \le a^{(1+\lambda + \delta )\lambda ^{m}}\}. \end{aligned}$$

It follows from (3.54) and (3.64) that

$$\begin{aligned} {\mathbb {P}}(A_5) \le 2 a^{-\delta \lambda ^{m}}. \end{aligned}$$
(3.69)

Suppose that \((A^i_1)^c \cap A_2^i \cap A^i_3 \cap A_4 \cap A_5^c\) holds. Then \(X^i_{t_1} = Y^i_M \) (by the definition of \(M\)) and \(|Y^i_M - \widetilde{Y}^i_M| \le 1\) (because \(A_2^i \cap A^i_3 \cap A_5^c\) holds). We also have \(\widehat{X}^i_{t_1} = \widetilde{Y}^i_{\widehat{M}} \) (by the definition of \(\widehat{M}\)) and \(|\widetilde{Y}^i_{\widehat{M}} - \widetilde{Y}^i_M| \le a^{(1+\lambda + 4 \delta )\lambda ^{m}/4}\) (because \((A^i_1)^c \cap A_4\) holds). It follows that, using condition (3.58),

$$\begin{aligned} |X^i_{t_1} - \widehat{X}^i_{t_1}|&\le |X^i_{t_1} - \widetilde{Y}^i_M| + |\widehat{X}^i_{t_1} - \widetilde{Y}^i_M| = |Y^i_M - \widetilde{Y}^i_M| + |\widetilde{Y}^i_{\widehat{M}} - \widetilde{Y}^i_M|\\&\le 1 + a^{(1+\lambda + 4 \delta )\lambda ^{m}/4} \\&\le a^{\lambda ^{m-1}}/d. \end{aligned}$$

Let

$$\begin{aligned} U_m = t_1 \wedge T(X- \widetilde{X}, \mathbf{0}) = a^{(1+\lambda + 2\delta )\lambda ^{m}} \wedge T(X- \widetilde{X}, \mathbf{0}). \end{aligned}$$
(3.70)

Then

$$\begin{aligned} |X^i_{U_m} - \widetilde{X}^i_{U_m}| = |X^i_{U_m} - \widehat{X}^i_{U_m}| \le |X^i_{t_1} - \widehat{X}^i_{t_1}| \le a^{\lambda ^{m-1}}/d. \end{aligned}$$

It follows from (3.59), (3.60), (3.61), (3.68), (3.69) and (3.62) that

$$\begin{aligned} {\mathbb {P}}((A^i_1)^c \cap A_2^i \cap A^i_3 \cap A_4 \cap A_5^c)&\ge 1 - 2\left( a ^{1-\lambda }\right) ^{\lambda ^ m} - c_1 a^{- \delta \lambda ^{m}/2} - c_2 a^{- \delta \lambda ^{m}} - 14 a^{- \delta \lambda ^{m}} \\&\ge 1- 2\left( a ^{1-\lambda }\right) ^{\lambda ^ m} - c_3 a^{- \delta \lambda ^{m}/2}. \end{aligned}$$

This implies that \(U_m\) has the following property,

$$\begin{aligned} {\mathbb {P}}\left( |X^i_{U_m} - \widetilde{X}^i_{U_m}| \le a^{\lambda ^{m-1}}/d \mid |X_{0} - \widetilde{X}_{0}| \le a^{\lambda ^{m}}\right) \ge 1- 2\left( a^{1- \lambda }\right) ^{\lambda ^ m}- c_3 a^{- \delta \lambda ^{m}/2}. \end{aligned}$$

It follows that

$$\begin{aligned} {\mathbb {P}}\left( |X_{U_m} - \widetilde{X}_{U_m}| \le a^{\lambda ^{m-1}} \mid |X_{0} - \widetilde{X}_{0}| \le a^{\lambda ^{m}}\right) \ge 1- d\left( 2 \left( a^{1-\lambda }\right) ^{\lambda ^m} + c_3 a^{-\delta \lambda ^{m}/2}\right) . \end{aligned}$$
(3.71)

Step 2 Next we will argue that we can define \(Z\) and \(\widetilde{Z}\) so that \(T(Z- \widetilde{Z}, \mathbf{0})< \infty \), a.s., for any initial distributions of \(Z_0\) and \(\widetilde{Z}_0\).

Suppose that \(Z^*\) and \( Z^{**}\) are independent continuous time simple random walks. Fix any \(t_2\in (0,\infty )\). It is easy to see that there exists \(p_1>0\) such that

$$\begin{aligned} {\mathbb {P}}(T(Z^*- Z^{**}, \mathbf{0}) < t_2 \mid |Z^*_0 - Z^{**}_0| \le a^{\lambda }) > p_1. \end{aligned}$$
(3.72)

We will define two continuous time simple random walks \(Z\) and \(\widetilde{Z}\) starting from arbitrary deterministic points \(z_0\) and \(\widetilde{z}_0\). The construction will be based on a family of intermediate processes \(Z^j\) and \(\widetilde{Z}^j\) defined as follows. Let \(m\ge 1\) be the smallest integer such that \(|z_0 - \widetilde{z}_0| \le a^{\lambda ^m}\). Then we let \(Z^m\) and \(\widetilde{Z}^m \) be continuous time simple random walks defined as \(X\) and \(\widetilde{X}\) at the beginning of Step 1, with \(Z^m_0 = z_0\) and \(\widetilde{Z}^m_0 = \widetilde{z}_0\). If \(m>1\) then let \(U_m\) be defined as in (3.70) but relative to \(Z^m\) and \(\widetilde{Z}^m\). If \(m=1\) then let \(U_1 = T(Z^1- \widetilde{Z}^1, \mathbf{0}) \wedge t_2\).

We continue the construction using two-stage induction. In one of the stages, the index will decrease. Suppose that we have defined \(Z^j\), \(\widetilde{Z}^j\) and \(U_j\) for \(j=m, m-1, m-2,\ldots , n\). If \(Z^n_{U_n} = \widetilde{Z}^n_{U_n} \) or \(n=1\) or \(|Z^n_{U_n} - \widetilde{Z}^n_{U_n} | > a ^{\lambda ^{n-1}}\) then we stop the induction, that is, we do not define \(Z^{n-1}\), \(\widetilde{Z}^{n-1}\) and \(U_{n-1}\). Suppose that one of these events occurred. Let \(R_1 = \sum _{n\le k\le m} U_k\). We define \(W^1_t\) and \(\widetilde{W}^1_t\) for \(t\in [0, R_1]\) by setting \(W^1_0 = z_0\), \(\widetilde{W}^1_0 = \widetilde{z}_0\),

$$\begin{aligned} W^1_t = Z^j\left( t- \sum _{j+1\le k\le m} U_k\right) , \quad \widetilde{W}^1_t = \widetilde{Z}^j\left( t- \sum _{j+1\le k\le m} U_k\right) , \end{aligned}$$

for \(t \in \left( \sum _{j+1\le k\le m} U_k, \sum _{j\le k\le m} U_k\right] \) and \(j = m,m-1,\ldots , n\).

Next suppose that \(n>1, Z^n_{U_n} \ne \widetilde{Z}^n_{U_n}\) and \(|Z^n_{U_n} - \widetilde{Z}^n_{U_n}|\le a ^{\lambda ^{n-1}}\). Then we construct \(Z^{n-1}\) and \(\widetilde{Z}^{n-1}\) in the same way as \(X\) and \(\widetilde{X}\) were constructed at the beginning of Step 1, with \(Z^{n-1}_0 = Z^n_{U_n}\) and \(\widetilde{Z}^{n-1}_0 = \widetilde{Z}^n_{U_n}\). We require that jumps of these processes are determined by the family \(\{N^x_t, t > \sum _{n\le k\le m} U_k\}\) in the sense that \(Z^{n-1}_t\) jumps at a time \(s > 0\) if and only if the Poisson process \(N^x\) jumps at time \(s - \sum _{n\le k\le m} U_k\), where \(x = Z^{n-1}_{s-}\). Similarly, \(\widetilde{Z}^{n-1}_t\) jumps at a time \(s > 0\) if and only if the Poisson process \(N^x\) jumps at time \(s - \sum _{n\le k\le m} U_k\), where \(x = \widetilde{Z}^{n-1}_{s-}\). We construct the skeleton processes \(Y\) and \(\widetilde{Y}\) for \(Z^{n-1}\) and \(\widetilde{Z}^{n-1}\) so that they start from \(Y_0 = Z^n_{U_n}\) and \(\widetilde{Y}_0 = \widetilde{Z}^n_{U_n}\) but otherwise they are independent of \(\{Z^j_t, t\le U_j\}\) and \(\{\widetilde{Z}^j_t, t\le U_j\}\) for \(j=m,m-1,\ldots , n\).

Note that the inductive procedure necessarily ends because the parameter \(n\) cannot decrease below 1.

By the strong Markov property, (3.63), (3.71) and (3.72),

$$\begin{aligned}&{\mathbb {P}}\left( W^1_{R_1} = \widetilde{W}^1_{R_1}\right) \nonumber \\&\quad \ge {\mathbb {P}}\left( \left( \bigcap _{2 \le j \le m} \left\{ |Z^j_{U_j} - \widetilde{Z}^j_{U_j} | \le a ^{\lambda ^{j-1}}\right\} \cap \left\{ Z^1_{U_1} = \widetilde{Z}^1_{U_1}\right\} \right) \cup \bigcup _{2 \le j \le m} \left\{ Z^j_{U_j} = \widetilde{Z}^j_{U_j}\right\} \right) \nonumber \\&\quad \ge \left( 1 - \sum _{2 \le j \le m} d \left( 2\left( a ^{1-\lambda }\right) ^{\lambda ^ j} + c_3 a^{- \delta \lambda ^{j}/2} \right) \right) p_1 > p_1/2. \end{aligned}$$
(3.73)

Note that the above estimate does not depend on \(m\).

We proceed with the second induction argument. Suppose that \(W^j\), \(\widetilde{W}^j\) and \(R_j\) have been defined for \(j = 1, 2,\ldots , \ell \). If \(W^\ell _{R_\ell } = \widetilde{W}^\ell _{R_\ell } \) then we let \(\zeta = \sum _{i=1}^\ell R_i\). We define \(Z_t\) and \(\widetilde{Z}_t\) for \(t\in [0, \zeta ]\) by setting \(Z_0 = z_0\), \(\widetilde{Z}_0 = \widetilde{z}_0\), and

$$\begin{aligned} Z_t = W^i\left( t- \sum _{1\le k\le i-1} R_k\right) , \quad \widetilde{Z}_t = \widetilde{W}^i\left( t- \sum _{1\le k\le i-1} R_k\right) , \end{aligned}$$

for \(t \in \left( \sum _{1\le k\le i-1} R_k, \sum _{1\le k\le i} R_k\right] \) and \(i = 1,2,\ldots , \ell \). We let \(\{Z_t , t> \zeta \}\) be a continuous time random walk with \(Z_{\zeta +} = Z_{\zeta }\) but otherwise independent of \(\{Z_t, t \le \zeta \}\). We require, as usual, that \(Z_t\) jumps at a time \(s > \zeta \) if and only if the Poisson process \(N^x\) jumps at time \(s\), where \(x = Z_{s-}\). We also let \(\widetilde{Z}_t = Z_t\) for \(t > \zeta \).

If \(W^\ell _{R_\ell } \ne \widetilde{W}^\ell _{R_\ell } \) then we construct \(W^{\ell +1}\) and \(\widetilde{W}^{\ell +1}\) in the same way as \(W^1\) and \(\widetilde{W}^1\) were constructed, with \(W^{\ell +1}_0 = W^\ell _{R_\ell }\) and \(\widetilde{W}^{\ell +1}_0 = \widetilde{W}^\ell _{R_\ell }\). We require that jumps of these processes are determined by the family \(\{N^x_t, t > \sum _{1\le k\le \ell } R_k\}\) in the sense that \(W^{\ell +1}_t\) jumps at a time \(s > 0\) if and only if the Poisson process \(N^x\) jumps at time \(s - \sum _{1\le k\le \ell } R_k\), where \(x = W^{\ell +1}_{s-}\). Similarly, \(\widetilde{W}^{\ell +1}_t\) jumps at a time \(s > 0\) if and only if the Poisson process \(N^x\) jumps at time \(s - \sum _{1\le k\le \ell } R_k\), where \(x = \widetilde{W}^{\ell +1}_{s-}\).

By (3.73),

$$\begin{aligned} {\mathbb {P}}\left( W^j_{R_j} \ne \widetilde{W}^j_{R_j}, j=1,2,\ldots , \ell \right) \le (1-p_1/2)^\ell . \end{aligned}$$

Letting \(\ell \rightarrow \infty \), we conclude that \(\zeta < \infty \), a.s.
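The concluding step is elementary: the probability that the first \(\ell \) coupling attempts all fail decays geometrically, so the first success comes a.s. after finitely many rounds. A quick numeric illustration with a hypothetical value of \(p_1\):

```python
p1 = 0.1          # a hypothetical stand-in for the success bound from (3.73)
fail = [(1 - p1 / 2) ** l for l in (1, 10, 100, 1000)]
# The probability that l consecutive attempts all miss decays geometrically,
# so the number of attempts before success is a.s. finite.
assert all(b < a for a, b in zip(fail, fail[1:]))
assert fail[-1] < 1e-20
```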

Step 3 Recall that an arbitrarily small \(\delta _1>0\) has been fixed in Step 1 and fix some \(a\ge a_0\), \(\lambda >1\) and \(\delta >0\) satisfying conditions (3.48)–(3.51), (3.54)–(3.57), (3.58), (3.62) and (3.63). Since the coupling time \(\zeta \) for \(Z\) and \(\widetilde{Z}\) constructed in Step 2 is finite, a.s., we can find \(r\) so large that

$$\begin{aligned} {\mathbb {P}}(T(Z- \widetilde{Z}, \mathbf{0}) < T^+(|Z- \widetilde{Z}|, r) \wedge r \mid |Z_0 - \widetilde{Z}_0| \le a^{\lambda }) > 1-\delta _1/4. \end{aligned}$$
(3.74)

We will strengthen the claim proved in Step 2. Recall \(t_1=a^{(1+\lambda + 2\delta )\lambda ^{m}}\) and \(U_m \) defined in (3.70). For fixed \(\lambda \), we may make \(a_0\) larger, if necessary, so that for all \(a\ge a_0\) and \(m\ge 1\) we have \(\sum _{j=1}^m a^{(1+\lambda + 2\delta )\lambda ^{j}} + r \le 2 a^{(1+\lambda + 2\delta )\lambda ^{m}} \). Let \( t_2 = 2 a^{(1+\lambda + 2\delta )\lambda ^{m}}\). Then the same argument which was used in (3.73) and the inequality (3.63) yield

$$\begin{aligned}&{\mathbb {P}}\left( \left( \bigcap _{2 \le j \le m} \left\{ |Z^j_{U_j} - \widetilde{Z}^j_{U_j} | \le a ^{\lambda ^{j-1}}\right\} \cap \left\{ U_j \le a^{(1+\lambda + 2\delta )\lambda ^{j}}\right\} \right) \cup \bigcup _{2 \le j \le m} \left\{ Z^j_{U_j} = \widetilde{Z}^j_{U_j}\right\} \right) \nonumber \\&\quad \ge 1 - \sum _{2 \le j \le m} d \left( 2\left( a ^{1-\lambda }\right) ^{\lambda ^ j} + c_3 a^{- \delta \lambda ^{j}/2}\right) > 1- \delta _1/4. \end{aligned}$$

Hence

$$\begin{aligned}&{\mathbb {P}}(T^-(|Z- \widetilde{Z}|, a^\lambda ) < T^+(|Z- \widetilde{Z}|, a^{\lambda ^{m+1}}) \wedge 2 a^{(1+\lambda + 2\delta )\lambda ^{m}} \mid |Z_0 - \widetilde{Z}_0| \le a^{\lambda ^m}) \nonumber \\&\quad > 1-\delta _1/4. \end{aligned}$$
(3.75)

Since \(Z\) and \(\widetilde{Z}\) are continuous time random walks, standard estimates show that for some \(c_4\) and all sufficiently large \(m\),

$$\begin{aligned}&{\mathbb {P}}\left( \sup _{0\le t \le t_2}|Z_{t} - Z_0| \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\right) \le c_4 a^{- \delta \lambda ^{m}/2} \le \delta _1/4,\end{aligned}$$
(3.76)
$$\begin{aligned}&{\mathbb {P}}\left( \sup _{0\le t \le t_2}|\widetilde{Z}_{t} -\widetilde{Z}_0| \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\right) \le c_4 a^{- \delta \lambda ^{m}/2} \le \delta _1/4. \end{aligned}$$
(3.77)

These estimates agree with those in part (i) of the lemma.

Let

$$\begin{aligned} T_* = t_2 \wedge T^+(|Z_\cdot - Z_0|, a^{(1+\lambda +3\delta ) \lambda ^{m}/2}) \wedge T^+(|\widetilde{Z}_\cdot -\widetilde{Z}_0|, a^{(1+\lambda + 3\delta )\lambda ^{m}/2}). \end{aligned}$$

The strong Markov property applied at \(T^-(|Z- \widetilde{Z}|, a^\lambda )\) and (3.74), (3.75), (3.76) and (3.77) imply for large \(m\),

$$\begin{aligned} {\mathbb {P}}(T(Z- \widetilde{Z}, \mathbf{0}) < T_* \mid |Z_0 - \widetilde{Z}_0|\le a^{\lambda ^m}) > 1-\delta _1. \end{aligned}$$
(3.78)

The proof of part (ii) of the lemma is complete.\(\square \)

4 Convergence to stationary distribution

Theorem 4.1

Let \(Q_\infty \) be defined as in Theorem 3.1. Suppose that the initial distribution of \(\{{\mathcal {M}}_t, t\ge 0\}\) is shift invariant, i.e., for every \(y\in {\mathbb {Z}}^d\), the distributions of \(\{M^x_0, x\in {\mathbb {Z}}^d\}\) and \(\{M^{x+y}_0, x\in {\mathbb {Z}}^d\}\) are identical. Assume that for some constants \(c_1\) and \(\alpha < 2d\), we have

$$\begin{aligned} {\mathbb {P}}(M^\mathbf{0}_0 \ge 0)&= 1, \end{aligned}$$
(4.1)
$$\begin{aligned} {\mathbb {E}}M^\mathbf{0}_0&= 1, \end{aligned}$$
(4.2)
$$\begin{aligned} {{\mathrm{Var}}}\left( \sum _{x\in K_n} M^x_0\right)&\le c_1 n^ \alpha , \quad \text { for all } n\ge 1. \end{aligned}$$
(4.3)

Then the distributions of \({\mathcal {M}}_t\) converge to \(Q_\infty \) as \(t\rightarrow \infty .\)

Proof

We will adapt the proof of Theorem 3.1.

Step 1 Let \(Q_*\) denote the distribution of \({\mathcal {M}}_0\). Assume that the theorem is false, i.e., the distributions of \({\mathcal {M}}_t\) do not converge to \(Q_\infty \) as \(t\rightarrow \infty \). Then there exist \(x_1,\ldots , x_{i_1}\in {\mathbb {Z}}^d\) and \(t_m\) such that \(t_m\rightarrow \infty \) as \(m\rightarrow \infty \), and the distributions of \(\{(M^{x_1}_{t_m},\ldots , M^{x_{i_1}}_{t_m}), m\ge 1\}\) do not converge to the restriction of \(Q_\infty \) to \(\{x_1,\ldots , x_{i_1}\}\). Let \(\widetilde{\mathcal {M}}_t\) denote the process with the initial distribution \(Q_\infty \). Suppose that \(\{{\mathcal {M}}_t, t\ge 0\}\) and \(\{\widetilde{\mathcal {M}}_t, t\ge 0\}\) are constructed on the same space in such a way that the joint distribution of \(({\mathcal {M}}, \widetilde{\mathcal {M}})\) is invariant under shifts by vectors in \({\mathbb {Z}}^d\). Then there exist \(c_2, p_1 >0\) such that for every \(m_0\) there exists \(m> m_0\) such that,

$$\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{i_1} |M^{x_i}_{t_m}- \widetilde{M}^{x_i}_{t_m}| > c_2\right) > p_1. \end{aligned}$$
(4.4)

Let \(i_2 = \left\lceil \max _{1\le i \le i_1} |x_i|\right\rceil \). Let \(\Gamma _n^1 = \{x_1,\ldots , x_{i_1}\}\) and let \(\{\Gamma ^j_n, j=1,\ldots , i_3\}\) be the family of all sets of the form \(\Gamma ^1_n + i_2 v\) for some \(v\in {\mathbb {Z}}^d\), such that \(\Gamma ^1_n + i_2 v \subset K'_n \). If \(j\ne i\) then \(\Gamma ^j_n \cap \Gamma ^i_n = \emptyset \). Note that \(i_3 = i_3(n) \ge \lfloor n/(2 i_2) \rfloor ^ d \ge c_3 n^d \) for \(n \ge 2 i_2\). We obtain from (4.4) that for every \(m_0\) there exists \(m> m_0\) such that,

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{x}_{t_m}- \widetilde{M}^{x}_{t_m} |&= {\mathbb {E}}\sum _{x\in K'_n} |M^{x}_{t_m}- \widetilde{M}^{x}_{t_m} | = {\mathbb {E}}\sum _{j=1}^{i_3} \sum _{x\in \Gamma ^j_n} |M^{x}_{t_m}- \widetilde{M}^{x}_{t_m}| \nonumber \\&= i_3 \sum _{x\in \Gamma ^1_n} {\mathbb {E}}|M^{x}_{t_m}- \widetilde{M}^{x}_{t_m} | \ge i_3 c_2 p_1 \ge c_4 n^d. \end{aligned}$$
(4.5)

We will show that the last inequality is false for some \(n\) and large \(m\) and hence the theorem is true.

Step 2 Fix some \(\beta \in \left( 0, 1\right) \). We will consider pairs of positive integers \(n\) and \(n_\beta \) such that \(n\) is the smallest integer greater than or equal to \(n_\beta ^{1/\beta }\) which is divisible by \(n_\beta \). Let \(K_n^1 = \{1,\ldots , n_\beta \}^d\) and let \(\{K^j_n, j=1,\ldots , j_n\}\) be the family of all sets of the form \(K^1_n + n_\beta v\) for some \(v\in {\mathbb {Z}}^d\), such that \((K^1_n + n_\beta v) \cap K_n \ne \emptyset \). We will write \({\mathcal {J}}= \{1,\ldots , j_n\}\).

We have

$$\begin{aligned} {\mathbb {E}}_{Q_*} \left( \sum _{x\in K^j_n} M^{x}_0\right) = {\mathbb {E}}_{Q_\infty } \left( \sum _{x\in K^j_n} \widetilde{M}^{x}_0\right) = |K^j_n| = n_\beta ^d. \end{aligned}$$
(4.6)

By Corollary 3.6,

$$\begin{aligned} {{\mathrm{Var}}}_{Q_\infty } \left( \sum _{x\in K^j_n} \widetilde{M}^{x}_0\right) = n_\beta ^{d-1}. \end{aligned}$$

It follows from this, (4.2), (4.3), (4.6) and Hölder’s inequality that,

$$\begin{aligned} {\mathbb {E}}\left| \sum _{x\in K^j_n} M^{x}_0 - \sum _{x\in K^j_n} \widetilde{M}^{x}_0\right|&\le {\mathbb {E}}\left| \sum _{x\in K^j_n} M^{x}_0 - n_\beta ^d\right| + {\mathbb {E}}\left| \sum _{x\in K^j_n} \widetilde{M}^{x}_0 - n_\beta ^d\right| \nonumber \\&\le \sqrt{c_1} n_\beta ^{\alpha /2} + n_\beta ^{(d-1)/2}. \end{aligned}$$
(4.7)
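The step behind (4.7) is the inequality \({\mathbb {E}}|S - {\mathbb {E}}S| \le ({{\mathrm{Var}}}S)^{1/2}\), an instance of Hölder's inequality with \(p=q=2\). It holds deterministically for any empirical distribution, as the following check on synthetic (hypothetical) data shows:

```python
import math
import random

random.seed(2)
samples = [random.gauss(0, 1) + random.random() for _ in range(50_000)]
mean = sum(samples) / len(samples)
abs_dev = sum(abs(s - mean) for s in samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
# E|S - ES| <= sqrt(Var S), the step combining (4.3) and Corollary 3.6.
assert abs_dev <= std
```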

Fix \(K^j_n\) and suppose that \(\sum _{x\in K^j_n} M^{x}_0 \le \sum _{x\in K^j_n}\widetilde{M}^{x}_0\). Then let

$$\begin{aligned} a (j,n)&= \frac{\sum _{x\in K^j_n} M^{x}_0}{\sum _{x\in K^j_n} \widetilde{M}^{x}_0} \le 1,\end{aligned}$$
(4.8)
$$\begin{aligned} M^{*,x}_0&= M^{x}_0, \quad x\in K^j_n, \end{aligned}$$
(4.9)
$$\begin{aligned} \widetilde{M}^{*,x}_0&= a (j,n) \widetilde{M}^{x}_0, \quad x\in K^j_n. \end{aligned}$$
(4.10)

Note that

$$\begin{aligned} \Lambda ^j_n := \sum _{x\in K^j_n} M^{*,x}_0 = \sum _{x\in K^j_n} \widetilde{M}^{*,x}_0. \end{aligned}$$
(4.11)

If \(\sum _{x\in K^j_n} M^{x}_0 > \sum _{x\in K^j_n}\widetilde{M}^{x}_0\) then we interchange the roles of processes \({\mathcal {M}}\) and \(\widetilde{\mathcal {M}}\) in the definitions (4.8)–(4.10) so that (4.11) still holds.

We obtain from (4.11)

$$\begin{aligned} \sum _{x\in K_n} M^{*,x}_0 = \sum _{x\in K_n} \widetilde{M}^{*,x}_0. \end{aligned}$$
(4.12)

It follows from (4.7) that for some \(c_5, c_6\) and \(\gamma > 0\),

$$\begin{aligned}&{\mathbb {E}}\left( \left( \sum _{x\in K_n} M^{x}_0 - \sum _{x\in K_n} M^{*,x}_0\right) + \left( \sum _{x\in K_n}\widetilde{M}^{x}_0 - \sum _{x\in K_n} \widetilde{M}^{*,x}_0\right) \right) \\&\quad \le (\sqrt{c_1} n_\beta ^{\alpha /2} + n_\beta ^{(d-1)/2}) c_5 n^{d(1-\beta )} \le c_6 n^{d-\gamma }. \nonumber \end{aligned}$$
(4.13)

Step 3 We will now choose values for parameters used in this step. Recall that we have fixed a \(\beta \in (0,1)\). We now fix \(\delta _1>0\) so small that \(6\delta _1 < c_4/4\), where \(c_4\) is the constant in (4.5). Then we choose \(a_0,m_1, \delta \) and \(\lambda \) corresponding to \(\beta \) and \(\delta _1\) as in Lemma 3.2. Consider \(a\ge a_0\), \(m\ge m_1\) and let \(n_\beta \) be such that \(d n_\beta < a^{\lambda ^m} \le 2 d n_\beta \). Recall \(c_6\) and \(\gamma \) from (4.13). We make \(n\) and \(m\) larger, if necessary, so that

$$\begin{aligned} \begin{array}{c} \displaystyle n^d - 2 d n^{d-1} ((2d) ^{(1/\beta )(1-\delta )} n^{1-\delta } + n^\beta ) > |K_n| (1-\delta _1),\\ \displaystyle \quad c_6 n^{d-\gamma } \le \delta _1 n^d. \end{array} \end{aligned}$$
(4.14)

Suppose that \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), are independent Poisson processes. We assume that \({\mathcal {M}}_0\), \(\widetilde{\mathcal {M}}_0\) and \(\{N^x_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), are independent.

Recall definitions (4.8)–(4.10). Let \(\mu ^{j}\) and \(\widetilde{\mu }^{j}\) be the probability measures on \(K^j_n\) defined by

$$\begin{aligned} \mu ^{j}(x) = {M^{*,x}_0}/{\Lambda ^j_n},\quad \widetilde{\mu }^{j}(x) = {\widetilde{M}^{*,x}_0}/{ \Lambda ^j_n }, \qquad x \in K^j_n. \end{aligned}$$

Let \(Z_t\) and \(\widetilde{Z}_t\) be a coupling of two continuous time nearest neighbor random walks constructed as in Lemma 3.2, with the following initial distribution,

$$\begin{aligned} {\mathbb {P}}(Z_0 = x ) = \mu ^{j}(x), \quad {\mathbb {P}}(\widetilde{Z}_0 = x ) = \widetilde{\mu }^{j}(x), \qquad x\in K^j_n. \end{aligned}$$

The joint distribution of \(Z_0\) and \(\widetilde{Z}_0\) is irrelevant to our argument but for the sake of definiteness we assume that these random variables are independent.

Recall that \(d n_\beta < a^{\lambda ^m} \le 2 d n_\beta \). Then \(|Z_0 - \widetilde{Z}_0| \le a^{\lambda ^m }\) and Lemma 3.2 implies that,

$$\begin{aligned} {\mathbb {P}}(T(Z- \widetilde{Z}, \mathbf{0}) < T_*) > 1-\delta _1. \end{aligned}$$
(4.15)

Recall the following estimate from (3.21),

$$\begin{aligned} a^{(1+\lambda + 3\delta )\lambda ^{m}/2} \le (2d)^{(1/\beta ) (1-\delta )} n^{1-\delta }. \end{aligned}$$
(4.16)

Recall \({\mathcal {J}}\) from Step 2. Let \({\mathcal {A}}\) be the family of all \(j \in {\mathcal {J}}\) such that \({{\mathrm{dist}}}(K^j_n, K_n^c) \ge a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\). Let \(K^*_n = \bigcup _{j \in {\mathcal {A}}} K^j_n\). We have shown in (3.22) that,

$$\begin{aligned} |K^*_n| > |K_n| (1-\delta _1). \end{aligned}$$
(4.17)

Let \(\left\{ {\mathcal {M}}^{1}_t, t\ge 0\right\} \) be the meteor process with the initial distribution defined by \(M^{1,x}_0 = M^{*,x}_0\) for \(x\in K_n\) and \( M^{1,x}_0 = 0\) for \(x \in {\mathbb {Z}}^d {\setminus }K_n\). The jump times of \({\mathcal {M}}^{1}\) are defined by \(\{N^{x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), in the usual way. The process \(\left\{ \widetilde{\mathcal {M}}^{1}_t, t\ge 0\right\} \) is defined in an analogous way, with the initial distribution \(\widetilde{M}^{1,x}_0 = \widetilde{M}^{*,x}_0\) for \(x\in K_n\).

Recall from Lemma 3.2 that \(t_* = 2 a^{(1+\lambda + 2\delta )\lambda ^{m}}\). Let \({\mathcal {F}}_*\) be the \(\sigma \)-field generated by \({\mathcal {M}}^{1}_0\), \(\widetilde{\mathcal {M}}^{1}_0\), and \(\{N^{x}_t, 0\le t\le t_*\}\), \(x\in {\mathbb {Z}}^d\). Let \({\mathcal {G}}_0\) be the \(\sigma \)-field generated by \({\mathcal {M}}^{1}_0\) and \(\widetilde{\mathcal {M}}^{1}_0\). We have, a.s., for all \(x\in {\mathbb {Z}}^d\),

$$\begin{aligned} M^{1,x}_{t_*} = \sum _{j\in {\mathcal {J}}}\Lambda ^j_n {\mathbb {P}}_{\mu _j}(Z_{t_*} =x \mid {\mathcal {F}}_*), \quad \widetilde{M}^{1,x}_{t_*} = \sum _{j\in {\mathcal {J}}} \Lambda ^j_n {\mathbb {P}}_{\widetilde{\mu }_j}(\widetilde{Z}_{t_*} =x \mid {\mathcal {F}}_*). \end{aligned}$$

This implies that, a.s.,

$$\begin{aligned} \sum _{x\in K_n} |M^{1,x}_{t_*} - \widetilde{M}^{1,x}_{t_*}| \le \sum _{j\in {\mathcal {J}}} \Lambda ^j_n {\mathbb {P}}_{\widetilde{\mu }_j}(Z_{t_*} \ne \widetilde{Z}_{t_*} \mid {\mathcal {F}}_*). \end{aligned}$$

By (4.15),

$$\begin{aligned}&{\mathbb {E}}\sum _{x\in K_n} |M^{1,x}_{t_*} - \widetilde{M}^{1,x}_{t_*}| = {\mathbb {E}}{\mathbb {E}}\left( \sum _{x\in K_n} |M^{1,x}_{t_*} - \widetilde{M}^{1,x}_{t_*}| \mid {\mathcal {F}}_*\right) \\&\quad \le {\mathbb {E}}{\mathbb {E}}\left( \sum _{j\in {\mathcal {J}}}\Lambda ^j_n \mathbf{1}(Z_{t_*} \ne \widetilde{Z}_{t_*} ) \mid {\mathcal {F}}_*\right) ={\mathbb {E}}{\mathbb {E}}\left( \sum _{j\in {\mathcal {J}}}\Lambda ^j_n \mathbf{1}(Z_{t_*} \ne \widetilde{Z}_{t_*}) \mid {\mathcal {G}}_0\right) \\&\quad \le \delta _1 {\mathbb {E}}\sum _{j\in {\mathcal {J}}} \Lambda ^j_n. \end{aligned}$$

Since

$$\begin{aligned} {\mathbb {E}}\Lambda ^j_n = {\mathbb {E}}\sum _{x\in K^j_n} M^{*,x}_0 \le {\mathbb {E}}_{Q_*} \sum _{x\in K^j_n} M^{x}_0 = |K^j_n| = n_\beta ^d, \end{aligned}$$

it follows that

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{1,x}_{t_*} - \widetilde{M}^{1,x}_{t_*}| \le \delta _1 |{\mathcal {J}}| n_\beta ^d = \delta _1 (n/n_\beta )^d n_\beta ^d = \delta _1 n^d. \end{aligned}$$
(4.18)

Let \(\big \{{\mathcal {M}}^{2}_t, t\ge 0\big \}\) be the meteor process with the initial distribution defined by \(M^{2,x}_0 = M^{x}_0 - M^{1,x}_0\) for \(x\in K_n\) and \(M^{2,x}_0 = 0\) for \(x \in {\mathbb {Z}}^d {\setminus }K_n\). The jump times of \({\mathcal {M}}^{2}\) are defined by \(\{N^{x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), in the usual way. The process \(\big \{\widetilde{\mathcal {M}}^{2}_t, t\ge 0\big \}\) is defined in an analogous way. It follows from (4.13) and (4.14) that

$$\begin{aligned}&{\mathbb {E}}\sum _{x\in K_n} |M^{2,x}_{t_*} -\widetilde{M}^{2,x}_{t_*}| \le {\mathbb {E}}\sum _{x\in K_n} M^{2,x}_{t_*} + {\mathbb {E}}\sum _{x\in K_n} \widetilde{M}^{2,x}_{t_*}\nonumber \\&\quad \le {\mathbb {E}}\left( \left( \sum _{x\in K_n} M^{x}_{0} - \sum _{x\in K_n} M^{*,x}_{0}\right) + \left( \sum _{x\in K_n} \widetilde{M}^{x}_{0} - \sum _{x\in K_n} \widetilde{M}^{*,x}_{0}\right) \right) \nonumber \\&\quad \le c_6 n^{d-\gamma } \le \delta _1 n^d. \end{aligned}$$
(4.19)

Let \(\big \{{\mathcal {M}}^{3}_t, t\ge 0\big \}\) be the meteor process with the initial distribution defined by \(M^{3,x}_0 = M^{x}_0\) for \(x\in {\mathbb {Z}}^d {\setminus }K_n\) and \(M^{3,x}_0 = 0\) for \(x \in K_n\). The jump times of \({\mathcal {M}}^{3}\) are defined by \(\{N^{x}_t, t\in {\mathbb {R}}\}\), \(x\in {\mathbb {Z}}^d\), in the usual way. The process \(\big \{\widetilde{\mathcal {M}}^{3}_t, t\ge 0\big \}\) is defined in an analogous way. Note that for all \(x \in K_n\) and \(t\ge 0\),

$$\begin{aligned} M^{x}_t = M^{1,x}_t + M^{2,x}_t + M^{3,x}_t, \end{aligned}$$
(4.20)

and the analogous formula holds for \(\widetilde{M}^{x}_t\). We have by (4.17),

$$\begin{aligned}&\nonumber {\mathbb {E}}\sum _{x\in K_n {\setminus }K^*_n} |M^{3,x}_{t_*} -\widetilde{M}^{3,x}_{t_*}| \le {\mathbb {E}}\sum _{x\in K_n {\setminus }K^*_n} (M^{3,x}_{t_*} + \widetilde{M}^{3,x}_{t_*}) \le {\mathbb {E}}\sum _{x\in K_n {\setminus }K^*_n} (M^{x}_{t_*} + \widetilde{M}^{x}_{t_*})\\&\quad = \sum _{x\in K_n {\setminus }K^*_n} ({\mathbb {E}}M^{x}_{t_*} + {\mathbb {E}}\widetilde{M}^{x}_{t_*}) = 2 |K_n {\setminus }K^*_n| < 2 \delta _1 |K_n| = 2 \delta _1 n^d. \end{aligned}$$
(4.21)

Since \({\mathbb {E}}M^{3,x}_0 = \mathbf{1}_{{\mathbb {Z}}^d {\setminus }K_n} (x)\), one can easily show that

$$\begin{aligned} {\mathbb {E}}M^{3,x}_{t_*}&= \sum _{y \in {\mathbb {Z}}^d {\setminus }K_n} {\mathbb {P}}(Z_{t_*} =x \mid Z_0 =y)\nonumber \\&= \sum _{y \in {\mathbb {Z}}^d {\setminus }K_n} {\mathbb {P}}(Z_{t_*} =y \mid Z_0 =x) = {\mathbb {P}}(Z_{t_*} \in {\mathbb {Z}}^d {\setminus }K_n \mid Z_0 = x). \end{aligned}$$
(4.22)

Recall that the Hausdorff distance between \(K^*_n\) and \({\mathbb {Z}}^d {\setminus }K_n\) is greater than \(a^{(1+\lambda + 3\delta )\lambda ^{m}/2}\). These observations, (4.22) and Lemma 3.2(i) imply that for \(x\in K^*_n\), \({\mathbb {E}}M^{3,x}_{t_*} \le \delta _1\). For the same reason, \({\mathbb {E}}\widetilde{M}^{3,x}_{t_*} \le \delta _1\). It follows that

$$\begin{aligned} \nonumber {\mathbb {E}}\sum _{x\in K^*_n} |M^{3,x}_{t_*} - \widetilde{M}^{3,x}_{t_*}|&\le {\mathbb {E}}\sum _{x\in K^*_n} (M^{3,x}_{t_*} + \widetilde{M}^{3,x}_{t_*}) = \sum _{x\in K^*_n} ({\mathbb {E}}M^{3,x}_{t_*} + {\mathbb {E}}\widetilde{M}^{3,x}_{t_*}) \\&\le 2 \delta _1 | K^*_n| < 2 \delta _1 |K_n| = 2 \delta _1 n^d. \end{aligned}$$
(4.23)

Recall that we have chosen \(\delta _1>0\) so that \(6\delta _1 < c_4/4\). In view of (4.20), the estimates (4.18), (4.19), (4.21) and (4.23) imply that, for large \(n\),

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{x}_{t_*} - \widetilde{M}^{x}_{t_*}| \le 6 \delta _1 n^d < (c_4/4)\delta _1 n^d. \end{aligned}$$

Recall that the joint distribution of \(({\mathcal {M}}, \widetilde{\mathcal {M}})\) is invariant under shifts by vectors in \({\mathbb {Z}}^d\). This and the last estimate imply that for every \(x\in {\mathbb {Z}}^d\), \({\mathbb {E}}|M^{x}_{t_*} - \widetilde{M}^{x}_{t_*}| < (c_4/4)\delta _1\). It follows that \({\mathbb {E}}(M^{x}_{t_*} - \widetilde{M}^{x}_{t_*})^+ < (c_4/4)\delta _1\) and \({\mathbb {E}}(\widetilde{M}^{x}_{t_*} - M^{x}_{t_*})^+ < (c_4/4)\delta _1\) for all \(x\in {\mathbb {Z}}^d\), where \(a^+ = \max (a,0)\). This easily implies that \({\mathbb {E}}(M^{x}_{t} - \widetilde{M}^{x}_{t})^+ < (c_4/4)\delta _1\) and \({\mathbb {E}}(\widetilde{M}^{x}_{t} - M^{x}_{t})^+ < (c_4/4)\delta _1\) for all \(x\in {\mathbb {Z}}^d\) and \(t\ge t_*\). Hence, for \(t\ge t_*\),

$$\begin{aligned} {\mathbb {E}}\sum _{x\in K_n} |M^{x}_{t} - \widetilde{M}^{x}_{t}| < (c_4/2)\delta _1 n^d. \end{aligned}$$

This contradicts (4.5) and, therefore, completes the proof. \(\square \)
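The decomposition (4.20) rests on the observation that, for a fixed realization of the Poisson clocks, the meteor dynamics are linear in the initial mass profile. The following minimal sketch (our own illustration, on a finite cycle rather than \({\mathbb {Z}}^d\), with hits applied in a fixed order; the helper names are not from the paper) checks this superposition property and the conservation of total mass numerically.

```python
import random

def meteor_step(mass, v):
    """Meteor hits vertex v of a cycle: its mass is split equally
    between the two neighbors and v is left empty."""
    n = len(mass)
    out = list(mass)
    m = out[v]
    out[v] = 0.0
    out[(v - 1) % n] += m / 2
    out[(v + 1) % n] += m / 2
    return out

def run(mass, hits):
    """Apply a fixed sequence of meteor hits to an initial profile."""
    for v in hits:
        mass = meteor_step(mass, v)
    return mass

random.seed(0)
n = 10
hits = [random.randrange(n) for _ in range(200)]   # shared clocks
m1 = [random.random() for _ in range(n)]
m2 = [random.random() for _ in range(n)]

# Linearity: evolving m1 + m2 equals evolving m1 and m2 separately and adding.
lhs = run([a + b for a, b in zip(m1, m2)], hits)
rhs = [a + b for a, b in zip(run(m1, hits), run(m2, hits))]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))

# Conservation of total mass, used throughout the proof.
assert abs(sum(lhs) - sum(m1) - sum(m2)) < 1e-9
```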

Corollary 4.2

Let \(Q_\infty \) be defined as in Theorem 3.1. Suppose that \(M^x_0,x\in {\mathbb {Z}}^d,\) are i.i.d. non-negative random variables with \({\mathbb {E}}M^x_0 =1.\) Then the distributions of \({\mathcal {M}}_t\) converge to \(Q_\infty \) as \(t\rightarrow \infty .\)

Proof

Fix an arbitrarily small \(\delta >0\). For \(0< a< \infty \), let \( M^{x,a}_0 = M^x_0 \mathbf{1}_{\{M^x_0\le a\}}\), and let \(\{{\mathcal {M}}^a_t, t\ge 0\}\) be the meteor process with the initial distribution \(\{M^{x,a}_0, x\in {\mathbb {Z}}^d\}\). Let \(\widetilde{M}^{x,a}_0 = M^x_0 \mathbf{1}_{\{M^x_0> a\}}\) and let \(\{\widetilde{\mathcal {M}}^a_t, t\ge 0\}\) be the meteor process with the initial distribution \(\{\widetilde{M}^{x,a}_0, x\in {\mathbb {Z}}^d\}\). Suppose that \(a\) is so large that \(\mu (a) :={\mathbb {E}}M^{\mathbf{0},a}_0 > 1-\delta \) and \({\mathbb {E}}\widetilde{M}^{\mathbf{0},a}_0 < \delta \). Since \(M^{x,a}_0\), \(x\in {\mathbb {Z}}^d\), are i.i.d. and bounded, we have for some finite \(c_1\),

$$\begin{aligned}&\displaystyle {\mathbb {P}}(M^{\mathbf{0},a}_0 \ge 0) =1, \\&\displaystyle {\mathbb {E}}M^{\mathbf{0},a}_0 = \mu (a) \in (1-\delta , 1), \\&\displaystyle {{\mathrm{Var}}}\left( \sum _{x\in K_n} M^{x,a}_0\right) \le c_1 n^ d, \quad \text {for all } n\ge 1. \end{aligned}$$

Comparing these formulas to (4.1)–(4.3), we see that Theorem 4.1 implies that the distributions of \({\mathcal {M}}^a_t\) converge to \( Q^{\mu (a)}_\infty \) as \(t\rightarrow \infty \), where \( Q^{\mu (a)}_\infty \) is as in Remark 3.3. We have \({\mathbb {E}}\widetilde{M}^{x,a}_t < \delta \) for all \(t >0\) and \(x\in {\mathbb {Z}}^d\) by the conservation of mass and shift invariance. Since \({\mathcal {M}}_t = {\mathcal {M}}^a_t + \widetilde{\mathcal {M}}^a_t\) and \(\delta \) is arbitrarily small, the last two claims easily imply the corollary. \(\square \)

Remark 4.3

(i) The condition \(\alpha < 2 d\) in Theorem 4.1 cannot be relaxed. To see this, consider the following initial distribution of the process \({\mathcal {M}}\). With probability \(1/2\), \(M^x_0 = 0\) for all \(x\in {\mathbb {Z}}^d\). With probability \(1/2\), \(M^x_0 = 2\) for all \(x\in {\mathbb {Z}}^d\). It is elementary to check that this distribution is shift invariant and satisfies (4.1)–(4.3) with \(\alpha =2d\). Recall distributions \( Q^{c}_\infty \) from Remark 3.3. It follows easily from Theorem 4.1 that the distributions of \({\mathcal {M}}_t\) converge, as \(t\rightarrow \infty \), to \((1/2)Q^{0}_\infty + (1/2)Q^{2}_\infty \ne Q_\infty \).

(ii) We conjecture that Theorem 4.1 remains true even if we drop the assumption that \({\mathcal {M}}_0\) has shift invariant distribution.

5 Flows and reflected paths

We will prove a theorem about the flow of mass between adjacent sites in \({\mathbb {Z}}\). We will write \(F^x_t\) to denote the net flow of mass from \(x\) to \(x+1\) on the time interval \([0,t]\), for \(x\in {\mathbb {Z}}\) and \(t\ge 0\). More formally,

$$\begin{aligned} F^x_t = \frac{1}{2} \sum _{0\le s\le t} \left( (N^x_s - N^x_{s-}) M^x_{s-} - (N^{x+1}_s - N^{x+1}_{s-}) M^{x+1}_{s-}\right) . \end{aligned}$$
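Before stating the theorem, here is a minimal numerical sketch of this definition (our own illustration, on a finite cycle standing in for \({\mathbb {Z}}\), with hits applied in discrete order rather than at Poisson times): the net flows across the two boundary edges account exactly for the change of mass in a block of vertices, which is the bookkeeping identity used in the proof below.

```python
import random

def simulate_flows(n, hits, mass):
    """Meteor process on the cycle Z/nZ, tracking the net flow F[x]
    across each edge (x, x+1), following the definition of F^x_t:
    a hit at v sends mass[v]/2 across edge (v, v+1) in the positive
    direction and mass[v]/2 across edge (v-1, v) in the negative one."""
    F = [0.0] * n
    mass = list(mass)
    for v in hits:
        m = mass[v]
        mass[v] = 0.0
        mass[(v + 1) % n] += m / 2
        F[v] += m / 2
        mass[(v - 1) % n] += m / 2
        F[(v - 1) % n] -= m / 2
    return mass, F

random.seed(1)
n = 12
mass0 = [1.0] * n
hits = [random.randrange(n) for _ in range(500)]
mass_t, F = simulate_flows(n, hits, mass0)

# The mass in the block {1,...,x} changes only through the boundary
# edges (0,1) and (x,x+1), so F[0] - F[x] = sum_{1<=y<=x} (M^y_t - M^y_0).
x = 7
assert abs((F[0] - F[x]) - (sum(mass_t[1:x+1]) - sum(mass0[1:x+1]))) < 1e-9
assert abs(sum(mass_t) - sum(mass0)) < 1e-9  # total mass is conserved
```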

Theorem 5.1

Consider the meteor process \({\mathcal {M}}_t\) on \({\mathbb {Z}}\) in the stationary regime\(,\) i.e.\(,\) suppose that the distribution of \({\mathcal {M}}_0\) is \(Q_\infty .\) Then for every \(t\ge 0,\)

$$\begin{aligned} {{\mathrm{Var}}}F^\mathbf{0}_t \le 2. \end{aligned}$$

Proof

Fix any \(t\ge 0\) and consider an odd integer \(x> 6\). We will eventually let \(x\rightarrow \infty \), so \(x\) should be thought of as a large integer. Note that

$$\begin{aligned} F^\mathbf{0}_t - F^x_t = \sum _{1\le y \le x} M^y_t - \sum _{1\le y \le x} M^y_0. \end{aligned}$$

By stationarity of the process \({\mathcal {M}}\), the distribution of \(\sum _{1\le y \le x} M^y_s\) does not depend on \(s\), so

$$\begin{aligned} {\mathbb {E}}\left( F^\mathbf{0}_t - F^x_t\right) = {\mathbb {E}}\left( \sum _{1\le y \le x} M^y_t\right) - {\mathbb {E}}\left( \sum _{1\le y \le x} M^y_0\right) =0. \end{aligned}$$
(5.1)

We obtain from (3.47),

$$\begin{aligned}&{{\mathrm{Var}}}\left( F^\mathbf{0}_t - F^x_t\right) = {{\mathrm{Var}}}\left( \sum _{1\le y \le x} M^y_t - \sum _{1\le y \le x} M^y_0\right) \nonumber \\&\quad \le {{\mathrm{Var}}}\left( \sum _{1\le y \le x} M^y_t\right) + 2 \left( {{\mathrm{Var}}}\left( \sum _{1\le y \le x} M^y_t\right) {{\mathrm{Var}}}\left( \sum _{1\le y \le x} M^y_0\right) \right) ^{1/2} + {{\mathrm{Var}}}\left( \sum _{1\le y \le x} M^y_0\right) \nonumber \\&\quad \le 4. \end{aligned}$$
(5.2)

Given \(\{N^y, y\in {\mathbb {Z}}\}\), \(y_1,y_2\in {\mathbb {Z}}\) and \(t_1 < t_2\), we will say that there is a path between \((y_1, t_1)\) and \((y_2, t_2)\) if some mass could pass from the first point to the second according to the rules of the meteor process evolution (see the definition of “acceptable path” in the proof of Proposition 2.1). Let \(A^-_x\) be the event that there is no path from any point \(((x-3)/2, s), s\in [0,t]\), to \((\mathbf{0}, t)\). Let \(A^+_x\) be the event that there is no path from any point \(((x+3)/2, s), s\in [0,t]\), to \((x, t)\). Let \(A_x = A^-_x \cap A^+_x\). It is easy to see that \({\mathbb {P}}_{Q_\infty }(A_x) \rightarrow 1\) as \(x\rightarrow \infty \). We obtain using (5.1) and (5.2),

$$\begin{aligned} {{\mathrm{Var}}}\left( \left( F^\mathbf{0}_t - F^x_t\right) \mathbf{1}_{A_x}\right)&\le {\mathbb {E}}\left( \left( F^\mathbf{0}_t - F^x_t\right) \mathbf{1}_{A_x} \right) ^2\le {\mathbb {E}}\left( F^\mathbf{0}_t - F^x_t\right) ^2\nonumber \\&= {{\mathrm{Var}}}\left( F^\mathbf{0}_t - F^x_t\right) \le 4. \end{aligned}$$
(5.3)

If a random variable \(\xi \) has finite variance or \(\xi \mathbf{1}_{A_x}\) has finite variance then

$$\begin{aligned}&{{\mathrm{Var}}}(\xi \mid A_x) \\&\quad = {\mathbb {E}}((\xi - {\mathbb {E}}(\xi \mid A_x))^2 \mid A_x)\nonumber \\&\quad = {\mathbb {E}}\left( \left( \xi - {\mathbb {E}}(\xi \mathbf{1}_{A_x}) \frac{1}{{\mathbb {P}}(A_x)}\right) ^2 \mathbf{1}_{A_x}\right) \frac{1}{{\mathbb {P}}(A_x)}\nonumber \end{aligned}$$
(5.4)
$$\begin{aligned}&\quad = \frac{1}{{\mathbb {P}}(A_x)}{\mathbb {E}}\left( \xi ^2 \mathbf{1}_{A_x} - 2 \xi {\mathbb {E}}(\xi \mathbf{1}_{A_x}) \frac{\mathbf{1}_{A_x}}{{\mathbb {P}}(A_x)} + \frac{1}{{\mathbb {P}}(A_x)^2} \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})\right) ^2 \mathbf{1}_{A_x}\right) \nonumber \\&\quad = \frac{1}{{\mathbb {P}}(A_x)} \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})^2 - \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})\right) ^2\right) + \frac{{\mathbb {P}}(A_x) -1}{{\mathbb {P}}(A_x)^2} \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})\right) ^2\end{aligned}$$
(5.5)
$$\begin{aligned}&\quad = \frac{1}{{\mathbb {P}}(A_x)} {{\mathrm{Var}}}(\xi \mathbf{1}_{A_x})+ \frac{{\mathbb {P}}(A_x) -1}{{\mathbb {P}}(A_x)^2} \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})\right) ^2\nonumber \\&\quad \le \frac{1}{{\mathbb {P}}(A_x)} {{\mathrm{Var}}}(\xi \mathbf{1}_{A_x}) . \end{aligned}$$
(5.6)
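The identity (5.4)–(5.6) is elementary but easy to misremember. The following check (our own, over a small finite probability space, exact up to floating point) confirms it, together with the related expansion of \({\mathbb {E}}(\xi \mathbf{1}_{A_x})^2\) used later in the proof.

```python
# Finite probability space: outcomes with probabilities p, values xi,
# and an event A given as a subset of outcomes.
p  = [0.2, 0.3, 0.1, 0.4]
xi = [1.0, -2.0, 3.0, 0.5]
A  = [True, True, False, True]

PA      = sum(pi for pi, a in zip(p, A) if a)
E_ind   = sum(pi * x for pi, x, a in zip(p, xi, A) if a)       # E(xi 1_A)
E_ind2  = sum(pi * x * x for pi, x, a in zip(p, xi, A) if a)   # E(xi^2 1_A)
var_ind = E_ind2 - E_ind ** 2                                  # Var(xi 1_A)

# Conditional mean and variance of xi given A, computed directly.
cond_mean = E_ind / PA
cond_var  = sum(pi * (x - cond_mean) ** 2
                for pi, x, a in zip(p, xi, A) if a) / PA

# Identity (5.4)-(5.6) and the final bound Var(xi | A) <= Var(xi 1_A)/P(A).
identity = var_ind / PA + (PA - 1) / PA ** 2 * E_ind ** 2
assert abs(cond_var - identity) < 1e-12
assert cond_var <= var_ind / PA + 1e-12

# The expansion of E((xi 1_A)^2) used in the derivation of (5.9).
assert abs(E_ind2 - (PA * cond_var + (1 - (PA - 1) / PA) * E_ind ** 2)) < 1e-12
```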

Consider an arbitrary \(\delta \in (0,1/2)\) and fix \(x_1\) so large that for \(x\ge x_1\) we have \({\mathbb {P}}(A_x) \ge 1-\delta \). We apply (5.4)–(5.6) to \(\xi = F^\mathbf{0}_t - F^x_t\) and we use (5.3) to see that for \(x\ge x_1\),

$$\begin{aligned} {{\mathrm{Var}}}(F^\mathbf{0}_t - F^x_t \mid A_x) \le 4/(1-\delta ). \end{aligned}$$
(5.7)

We will now show that \(F^\mathbf{0}_t\) and \(F^x_t\) are conditionally uncorrelated given \(A_x\). Let \({\mathcal {G}}_t =\sigma \{N^y_s, s\in [0,t], y \in {\mathbb {Z}}\}\). It is easy to see that there exist random variables \(\alpha (v,z)\ge 0, v,z \in {\mathbb {Z}}\), which are measurable with respect to \({\mathcal {G}}_t\) and such that, a.s., for all \(z \in {\mathbb {Z}}\), we have \(F^z_t = \sum _{v\in {\mathbb {Z}}} \alpha (v,z) M^v_0\). The random variables \(\alpha (v,z)\) encode the transport of the mass from \(v\) to \(z\) and then to \(z+1\), and from \(v\) to \(z+1\) and then to \(z\), along the paths “opened” by \(N\)’s.

For \(y\in {\mathbb {Z}}\), let \(G_y\) be the event that there was no meteor hit at \(y\) between times 0 and \(t\). We have \({\mathbb {P}}(G_y) = e^{-t}\) for all \(y\). Suppose that \(v, z \in {\mathbb {Z}}\) and \(v< z\). A part of the mass that was present at \(v \) at time 0 could have moved between vertices \(z\) and \(z+1\) during the time interval \([0,t]\) only if the event \(C(v,z):=\bigcup _{v\le w \le z-1} G_w\) did not occur. By the independence of \(G_y\)’s, for \(v<z, {\mathbb {P}}\left( C(v,z)^c\right) = (1- e^{-t})^ {z-v}\). Hence, \({\mathbb {P}}(\alpha (v,z) \ne 0) \le (1- e^{-t})^ {z-v}\). A similar argument yields \({\mathbb {P}}(\alpha (v,z) \ne 0) \le (1- e^{-t})^ {v-z-1}\) for \(v>z+1\). Note that \(\alpha (v,z) \le 1\) for all \(v\) and \(z\). These observations and (3.30) imply that, for all \(z_1, z_2 \in {\mathbb {Z}}\),

$$\begin{aligned}&{\mathbb {E}}\sum _{v\in {\mathbb {Z}}} \sum _{w\in {\mathbb {Z}}}| \alpha (v,z_1) M^v_0\alpha (w,z_2) M^w_0| = \sum _{v\in {\mathbb {Z}}} \sum _{w\in {\mathbb {Z}}} {\mathbb {E}}| \alpha (v,z_1) \alpha (w,z_2)| {\mathbb {E}}| M^v_0 M^w_0|\\&\quad \le \sum _{v\in {\mathbb {Z}}} \sum _{w\in {\mathbb {Z}}} ({\mathbb {E}}\alpha (v,z_1)^2)^{1/2} ({\mathbb {E}}\alpha (w,z_2)^2)^{1/2} ({\mathbb {E}}(M^v_0)^2)^{1/2} ({\mathbb {E}}(M^w_0)^2)^{1/2}\\&\quad \le \sum _{v\in {\mathbb {Z}}} (1- e^{-t})^ {|z_1-v|-1} < \infty . \end{aligned}$$

The above bound allows us to change the order of summation in the calculation of \({{\mathrm{Cov}}}(F^\mathbf{0}_t ,F^x_t \mid A_x)\) below. Recall that \(\mathbf{1}_{A_x} = \mathbf{1}_{A^-_x}\mathbf{1}_{A^+_x}\). Since

$$\begin{aligned}&A^-_x \in \sigma \{N^y_s, s\in [0,t], y \le (x-3)/2\}\quad \text {and}\\&\quad A^+_x \in \sigma \{N^y_s, s\in [0,t], y \ge (x+3)/2\}, \end{aligned}$$

the events \(A^-_x \) and \(A^+_x \) are independent. This implies independence of the following pairs of random variables for \(v\le (x-3)/2\) and \(w\ge (x+3)/2\): \( \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x}\) and \(\mathbf{1}_{A^+_x}\); \(\alpha (w,x) M^w_0 \mathbf{1}_{A^-_x}\) and \(\mathbf{1}_{A^+_x}\). All these remarks imply that

$$\begin{aligned}&{{\mathrm{Cov}}}(F^\mathbf{0}_t ,F^x_t \mid A_x) = {\mathbb {E}}(F^\mathbf{0}_t F^x_t\mid A_x) - {\mathbb {E}}(F^\mathbf{0}_t \mid A_x) {\mathbb {E}}(F^x_t\mid A_x)\\&\quad = {\mathbb {E}}\left( \sum _{v\in {\mathbb {Z}}} \alpha (v,\mathbf{0}) M^v_0 \sum _{w\in {\mathbb {Z}}} \alpha (w,x) M^w_0 \mathbf{1}_{A_x}\right) /{\mathbb {P}}(A_x) \\&\qquad - \left[ {\mathbb {E}}\left( \sum _{v\in {\mathbb {Z}}} \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A_x}\right) /{\mathbb {P}}(A_x)\right] \cdot \left[ {\mathbb {E}}\left( \sum _{w\in {\mathbb {Z}}} \alpha (w,x) M^w_0 \mathbf{1}_{A_x}\right) /{\mathbb {P}}(A_x)\right] \\&\quad = {\mathbb {E}}\left( \sum _{v\le (x-3)/2} \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x} \sum _{w\ge (x+3)/2} \alpha (w,x) M^w_0 \mathbf{1}_{A^+_x}\right) /{\mathbb {P}}(A_x) \\&\qquad - \left[ {\mathbb {E}}\left( \sum _{v\le (x-3)/2} \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x}\mathbf{1}_{A^+_x}\right) /{\mathbb {P}}(A_x)\right] \times \\&\qquad \times \left[ {\mathbb {E}}\left( \sum _{w\ge (x+3)/2} \alpha (w,x) M^w_0 \mathbf{1}_{A^-_x}\mathbf{1}_{A^+_x}\right) /{\mathbb {P}}(A_x)\right] \\&\quad = \sum _{v\le (x-3)/2} \sum _{w\ge (x+3)/2} \Big [{\mathbb {E}}\left( \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x} \alpha (w,x) M^w_0 \mathbf{1}_{A^+_x}\right) /{\mathbb {P}}(A_x) \\&\qquad - {\mathbb {E}}\left( \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x}\right) \left( {\mathbb {P}}(A^+_x) /{\mathbb {P}}(A_x)\right) {\mathbb {E}}\left( \alpha (w,x) M^w_0 \mathbf{1}_{A^+_x}\right) \left( {\mathbb {P}}(A^-_x) /{\mathbb {P}}(A_x)\right) \Big ]\\&\quad = \left( 1/{\mathbb {P}}(A_x)\right) \sum _{v\le (x-3)/2} \sum _{w\ge (x+3)/2} \Big [{\mathbb {E}}\left( \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x} \alpha (w,x) M^w_0 \mathbf{1}_{A^+_x}\right) \\&\qquad - {\mathbb {E}}\left( \alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x}\right) {\mathbb {E}}\left( \alpha (w,x) M^w_0 \mathbf{1}_{A^+_x}\right) \Big ]. \end{aligned}$$

Each term in the last sum is equal to 0 because for any \(v\le (x-3)/2\) and \(w\ge (x+3)/2\), the random variables \(M^v_0 \) and \( M^w_0\) are uncorrelated [see (3.32)], the random variables \(\alpha (v,\mathbf{0}) \mathbf{1}_{A^-_x}\) and \(\alpha (w,x) M^w_0 \mathbf{1}_{A^+_x}\) are independent, and so are the random variables \(\alpha (v,\mathbf{0}) M^v_0 \mathbf{1}_{A^-_x}\) and \( \alpha (w,x) \mathbf{1}_{A^+_x}\). We conclude that \({{\mathrm{Cov}}}(F^\mathbf{0}_t ,F^x_t \mid A_x)=0\), i.e., \(F^\mathbf{0}_t\) and \(F^x_t\) are conditionally uncorrelated given \(A_x\). This and (5.7) imply that

$$\begin{aligned} {{\mathrm{Var}}}(F^\mathbf{0}_t \mid A_x) + {{\mathrm{Var}}}(F^x_t \mid A_x) = {{\mathrm{Var}}}(F^\mathbf{0}_t - F^x_t \mid A_x) \le 4/(1-\delta ). \end{aligned}$$

By symmetry, \({{\mathrm{Var}}}(F^\mathbf{0}_t \mid A_x)= {{\mathrm{Var}}}(F^x_t \mid A_x)\) so

$$\begin{aligned} {{\mathrm{Var}}}(F^\mathbf{0}_t \mid A_x) \le 2/(1-\delta ). \end{aligned}$$
(5.8)

Recall events \(G_y\) and let \(H_y = \bigcup _{1\le v \le y} (G_v \cap G_{-v})\) for \(y\ge 1\). By independence of \(G_y\)’s, \({\mathbb {P}}(H_y^c) = (1- e^{-2t})^ y\) for \(y\ge 1\). A part of the mass that was present at \(-y \) and \(y\) at time 0 could have moved between vertices \(\mathbf{0}\) and \(1\) during the time interval \([0,t]\) only if \(H_y\) failed. Hence,

$$\begin{aligned} a:= {\mathbb {E}}|F^\mathbf{0}_t| \le {\mathbb {E}}M^\mathbf{0}_0 + \sum _{y \ge 1} {\mathbb {P}}(H_y^c) {\mathbb {E}}(M^y_0+ M^{-y}_0) \le 1 + \sum _{y \ge 1} 2 (1- e^{-2t})^ y < \infty . \end{aligned}$$

Recall that \({\mathbb {P}}(A_x) \ge 1-\delta >1/2\) for \(x\ge x_1\). We obtain from (5.4) and (5.5),

$$\begin{aligned} {\mathbb {E}}(\xi \mathbf{1}_{A_x})^2&= {\mathbb {P}}(A_x) {{\mathrm{Var}}}(\xi \mid A_x) + \left( 1-\frac{ {\mathbb {P}}(A_x) -1}{ {\mathbb {P}}(A_x)}\right) \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})\right) ^2\\&\le {{\mathrm{Var}}}(\xi \mid A_x) + 2 \left( {\mathbb {E}}(\xi \mathbf{1}_{A_x})\right) ^2. \end{aligned}$$

We apply this formula to \(\xi = F^\mathbf{0}_t \) and use (5.8) to see that for \(x\ge x_1\),

$$\begin{aligned} {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x})^2 \le {{\mathrm{Var}}}(F^\mathbf{0}_t \mid A_x) + 2 \left( {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x})\right) ^2 \le 2/(1-\delta ) + 2a^2. \end{aligned}$$
(5.9)

It is easy to see that \(A_x^- \subset A_{x+2}^-\) for all odd \(x > 6\) and \({\mathbb {P}}(A_x^-) \rightarrow 1\) as \(x\rightarrow \infty \). Given \(A_x^-\), the event \(A_x^+\) and the random variable \(F^\mathbf{0}_t\) are independent so the conditional distribution of \(F^\mathbf{0}_t\) given \(A_x^-\) is the same as the conditional distribution of \(F^\mathbf{0}_t\) given \(A_x\). This implies that

$$\begin{aligned}&{\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x^-})^2 = {\mathbb {E}}((F^\mathbf{0}_t \mathbf{1}_{A_x^-})^2 \mid A_x^-) {\mathbb {P}}(A_x^-) = {\mathbb {E}}((F^\mathbf{0}_t )^2 \mid A_x^-) {\mathbb {P}}(A_x^-)\nonumber \\&\quad = {\mathbb {E}}((F^\mathbf{0}_t )^2 \mid A_x) {\mathbb {P}}(A_x^-) = \frac{{\mathbb {P}}(A_x^-) }{{\mathbb {P}}(A_x) } {\mathbb {E}}((F^\mathbf{0}_t \mathbf{1}_{A_x})^2 \mid A_x) {\mathbb {P}}(A_x)= \frac{{\mathbb {P}}(A_x^-) }{{\mathbb {P}}(A_x) }{\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x})^2. \end{aligned}$$
(5.10)

Since \(\lim _{x\rightarrow \infty } {\mathbb {P}}(A_x) = \lim _{x\rightarrow \infty } {\mathbb {P}}(A_x^-)=1\), the last formula and (5.9) imply that for some \(x_2\) and all \(x\ge x_2\), \({\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x^-})^2 \le 2/(1-\delta ) + 2a^2 + \delta \). By Fatou’s lemma, \({\mathbb {E}}(F^\mathbf{0}_t )^2 \le 2/(1-\delta ) + 2a^2+\delta \), so

$$\begin{aligned} {{\mathrm{Var}}}F^\mathbf{0}_t \le {\mathbb {E}}(F^\mathbf{0}_t )^2 \le 2/(1-\delta ) + 2a^2+\delta . \end{aligned}$$
(5.11)

Recall that \({\mathbb {E}}|F^\mathbf{0}_t| < \infty \). By symmetry, \({\mathbb {E}}F^\mathbf{0}_t=0\). By dominated convergence, \(\lim _{x\rightarrow \infty } {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x^-}) = 0\). A calculation similar to that in (5.10) shows that \({\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x}) =\frac{{\mathbb {P}}(A_x) }{{\mathbb {P}}(A_x^-) } {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x^-})\). This, the previous observation and the fact that \(\lim _{x\rightarrow \infty } {\mathbb {P}}(A_x) = \lim _{x\rightarrow \infty } {\mathbb {P}}(A_x^-)=1\) imply that \(\lim _{x\rightarrow \infty } {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x}) = 0\). Hence, we can strengthen (5.9) to see that for any \(\delta >0\), some \(x_3\) and all \(x\ge x_3\),

$$\begin{aligned} {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x})^2 \le {{\mathrm{Var}}}(F^\mathbf{0}_t \mid A_x) + 2 \left( {\mathbb {E}}(F^\mathbf{0}_t \mathbf{1}_{A_x})\right) ^2 \le 2/(1-\delta ) +\delta . \end{aligned}$$

This allows us to strengthen (5.11) as follows,

$$\begin{aligned} {{\mathrm{Var}}}F^\mathbf{0}_t \le {\mathbb {E}}(F^\mathbf{0}_t )^2 \le 2/(1-\delta ) +2\delta . \end{aligned}$$

Since \(\delta >0\) is arbitrarily small, this completes the proof. \(\square \)

We will now introduce an alternative representation of the meteor process on \({\mathbb {Z}}\). The mass at each vertex will be represented by ordered particles. Any two particles will always be ordered in the same way, no matter at which vertex they reside. Let

$$\begin{aligned} \Gamma ^0_0&= 0,\\ \Gamma ^k_0&= \sum _{j=0}^{k-1} M^j_0, \quad k\ge 1,\\ \Gamma ^k_0&= -\sum _{j=k}^{-1} M^j_0, \quad k\le -1. \end{aligned}$$

We define \(\Gamma ^k_t\) to be a piecewise constant RCLL function with values in \({\mathbb {R}}\) as follows. The function \(\Gamma ^k_\cdot \) jumps at time \(t\) only if \(N^{k-1}_t = N^{k-1}_{t-} +1\) or \(N^{k}_t = N^{k}_{t-} +1\). If \(N^{k-1}_t = N^{k-1}_{t-} +1\) then \(\Gamma ^k_\cdot \) jumps at time \(t\) to \((\Gamma ^{k-1}_{t-} + \Gamma ^k_{t-})/2\). If \(N^{k}_t = N^{k}_{t-} +1\) then \(\Gamma ^k_\cdot \) jumps at time \(t\) to \((\Gamma ^{k}_{t-} + \Gamma ^{k+1}_{t-})/2\).

Heuristically speaking, functions \(\Gamma ^k\) play a role similar to that of the cumulative distribution function for probability distributions on the real line. A further heuristic interpretation of these functions is that they determine the positions of infinitely many (uncountably many) particles moving along non-crossing trajectories in the following sense. The particle with label \(y\in {\mathbb {R}}\) is located at vertex \(k \in {\mathbb {Z}}\) at time \(t\) if and only if \(\Gamma ^k_t \le y < \Gamma ^{k+1}_t\). We formalize this by defining \(H^y_t\), the position of \(y\) at time \(t\), to be the unique \(k\) such that \(\Gamma ^k_t \le y < \Gamma ^{k+1}_t\). Note that for all \(x,y\in {\mathbb {R}}\) and \(s,t\ge 0\), \((H^x_s - H^y_s)(H^x_t - H^y_t) \ge 0\), i.e., \(x\) and \(y\) are always ordered in the same way.
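A finite-window sketch of these dynamics (our own illustration; only interior sites are hit, so the boundary values of \(\Gamma \) never move) confirms that the jump rules for \(\Gamma ^k\) reproduce the meteor mass dynamics, i.e., \(M^k_t = \Gamma ^{k+1}_t - \Gamma ^k_t\), and that the functions \(\Gamma ^k\) stay ordered, so the trajectories do not cross.

```python
import random

def hit(mass, gamma, k):
    """Meteor hit at interior site k: update the masses directly, and the
    cumulative functions Gamma via the jump rules above. Both Gamma^k and
    Gamma^{k+1} jump to the midpoint of their pre-jump values."""
    m = mass[k]
    mass[k] = 0.0
    mass[k - 1] += m / 2
    mass[k + 1] += m / 2
    mid = (gamma[k] + gamma[k + 1]) / 2
    gamma[k] = mid       # N^k jumped: Gamma^k -> (Gamma^k + Gamma^{k+1})/2
    gamma[k + 1] = mid   # N^k jumped: Gamma^{k+1} -> the same midpoint

random.seed(2)
n = 20
mass = [random.random() for _ in range(n)]
# Gamma^k = cumulative mass to the left of site k (finite window, Gamma^0 = 0).
gamma = [0.0]
for m in mass:
    gamma.append(gamma[-1] + m)

for _ in range(300):
    hit(mass, gamma, random.randrange(1, n - 1))  # interior hits only

# The increments of Gamma recover the masses, and Gamma stays non-decreasing.
assert all(abs((gamma[k + 1] - gamma[k]) - mass[k]) < 1e-9 for k in range(n))
assert all(gamma[k] <= gamma[k + 1] + 1e-12 for k in range(n))
```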

Proposition 5.2

Suppose that \({\mathcal {M}}_0\) has the distribution \(Q_\infty .\) Then for every \(\alpha \in (0,2)\) there exists \(c_1 < \infty \) such that for all \(y\in {\mathbb {R}}\) and \(t\ge 0,\) \({\mathbb {E}}(|H^y_t - H^y_0|^\alpha ) < c_1.\)

Proof

Recall the notation \(F^x_t\) from Theorem 5.1. Let \(k_0 = H^y_0\). Consider any integer \(m\ge 1\) and suppose that \(|H^y_t - H^y_0|> m\). By symmetry, it will suffice to analyze the case \(H^y_t - H^y_0> m\). If this event occurred then at least one of the following events occurred,

$$\begin{aligned} A_1&= \{F^{k_0}_t \ge m/2\},\\ A_2&= \left\{ \sum _{j=k_0+1}^{k_0+m} M^j_t \le m/2\right\} . \end{aligned}$$

By Theorem 5.1,

$$\begin{aligned} {\mathbb {P}}(A_1) \le \frac{{{\mathrm{Var}}}F^{k_0}_t}{(m/2)^2} \le 8 m^{-2}. \end{aligned}$$
(5.12)

Corollary 3.6 implies that

$$\begin{aligned} {\mathbb {P}}(A_2) \le {\mathbb {P}}\left( \left| \sum _{j=k_0+1}^{k_0+m} M^j_t - m\right| \ge m/2\right) \le \frac{{{\mathrm{Var}}}\sum _{j=k_0+1}^{k_0+m} M^j_t}{(m/2)^2} \le 4 m^{-2}. \end{aligned}$$

This and (5.12) imply that, for \(m \ge 1\),

$$\begin{aligned} {\mathbb {P}}(|H^y_t - H^y_0|> m) = 2 {\mathbb {P}}(H^y_t - H^y_0> m) \le 2 ({\mathbb {P}}(A_1) + {\mathbb {P}}(A_2)) \le 24 m^{-2}. \end{aligned}$$
(5.13)

Note that the inequality also holds for real \(m \ge 1\). It is well known that for a non-negative random variable \(\xi \) and \(\alpha >0\),

$$\begin{aligned} {\mathbb {E}}\xi ^\alpha = \alpha \int _0^\infty a^{\alpha -1} {\mathbb {P}}(\xi >a) da. \end{aligned}$$

Hence, (5.13) yields for every \(\alpha \in (0,2)\) and some \(c_1=c_1(\alpha )<\infty \),

$$\begin{aligned} {\mathbb {E}}(|H^y_t - H^y_0|^\alpha )&= \alpha \int _0^\infty a^{\alpha -1} {\mathbb {P}}(|H^y_t - H^y_0|>a) da \\&\le \alpha \int _0^1 a^{\alpha -1} da + \alpha \int _1^\infty a^{\alpha -1} 24 a^{-2} da < c_1. \end{aligned}$$

\(\square \)
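The moment formula invoked at the end of the proof can be checked exactly for a discrete random variable, whose survival function is piecewise constant, so the layer-cake integral can be evaluated in closed form piece by piece (our own sanity check, not part of the argument).

```python
# xi takes the value vals[i] with probability probs[i] (vals sorted).
# The formula asserts E xi^alpha = alpha * int_0^inf a^(alpha-1) P(xi > a) da.
# Since P(xi > a) is constant on each interval between consecutive values,
# each piece [a, b) contributes P(xi > a) * (b^alpha - a^alpha).
vals  = [0.5, 1.0, 2.5, 4.0]
probs = [0.1, 0.4, 0.3, 0.2]
alpha = 1.7

moment_direct = sum(p * v ** alpha for p, v in zip(probs, vals))

cuts = [0.0] + vals
layer_cake = 0.0
for a, b in zip(cuts, cuts[1:]):
    surv = sum(p for p, v in zip(probs, vals) if v > a)  # P(xi > a) on [a, b)
    layer_cake += surv * (b ** alpha - a ** alpha)

assert abs(moment_direct - layer_cake) < 1e-12
```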

Remark 5.3

(i) One may ask whether the condition \(\alpha < 2\) in Proposition 5.2 is sharp. We believe that it is not. We conjecture that for every \(\alpha < \infty \) there exists \(c_1 < \infty \) such that for all \(y\in {\mathbb {R}}\) and \(t\ge 0\), \({\mathbb {E}}(|H^y_t - H^y_0|^\alpha ) < c_1\). Similarly, we believe that the uniform bound in Theorem 5.1 can be extended to every moment of \(F^\mathbf{0}_t\).

(ii) We present an informal but easy to formalize argument showing that on a circular graph, a particle in an ordered system of particles cannot move more than two full circles around the graph. Consider the circular graph \({\mathcal {C}}_n\) and identify its vertices with points \(e^{i2\pi k/n}\) on the unit circle \(S\). Let \(\widetilde{H}^\theta _0\) denote the position of the particle with label \(\theta \) at time 0, where \(e^{i\theta }\in S\). Processes \(\widetilde{H}^\theta _t\) are defined in a way analogous to that for \(H^y_t\); we leave the details of the construction to the reader. Let \(a = \int _0^{2\pi } \widetilde{H}^\theta _0 d\theta \). Note that when a meteor hits at time \(t\), half of the mass is moved \(2\pi /n\) radians in the clockwise direction and half of the mass is moved \(2\pi /n\) radians in the opposite direction. Hence, for every meteor hit time \(t\ge 0\), \(\int _0^{2\pi } \widetilde{H}^\theta _t d\theta = \int _0^{2\pi } \widetilde{H}^\theta _{t-} d\theta \). It follows that \(\int _0^{2\pi } \widetilde{H}^\theta _t d\theta = a\) for all \(t\ge 0\). Suppose that a particle moved more than two full circles around the graph, say, in the clockwise direction, between times 0 and \(t\). Then, because all particles are ordered, all other particles must have moved at least one full circle in the clockwise direction. Thus \(\int _0^{2\pi } \widetilde{H}^\theta _t d\theta \ge a + (2\pi )^2 > a\). This is a contradiction. It shows that a particle in an ordered system of particles can never move more than two full circles around the graph. A slight improvement of the argument shows that a particle can never move by the angle \(2\pi (1+1/n)\).

(iii) The representation of the meteor model using non-crossing functions \(H^y\) is similar in spirit to some other models known in the literature. One of them is the motion of the tracer particle in the exclusion process, see, e.g., [2]. Another one is the trajectory of a particle in one of several models for reflecting paths proposed by Harris [11] and Spitzer [17], and later generalized and carefully analyzed in [7]. In all the cited models, the variance of the reflecting particle location grows with time (as a power of time). It is rather surprising that the variance of \(H^y_t\) does not grow with time. This may be related to the fact that in the model of Howitt and Warren [13], the mass redistribution function has to be rescaled as in [13, (1.4)] for the limit in their theorems to be non-degenerate.

(iv) An intriguing problem of “number variance saturation” was studied in [10]. At this point it is not clear whether the resemblance between that phenomenon and our Proposition 5.2 is more than superficial.

6 Support of the stationary measure

Consider a connected simple graph \(G\) with a finite vertex set \(V\) and let \(k=|V|\). Note that \({\mathcal {M}}_t \in [0,\infty )^V\). Recall from Sect. 2 that there exists a unique stationary measure \(Q\) for the meteor process with \(\sum _{x\in V} M^x_0 = k\). Let \(U_Q\) be the closure of the support of \(Q\) in \([0,\infty )^V\). We define \(U\) to be the (closed) subset of \([0,\infty )^V\) which consists of all \(\{a_x, x\in V\}\) such that \(\sum _{x\in V} a_x = k\) and \(a_x =0\) for at least one \(x\in V\).

Theorem 6.1

We have \(U_Q = U.\)

Proof

The inclusion \(U_Q \subset U\) is obvious. We will prove the opposite inclusion.

Step 1 Recall that \(d_x\) denotes the degree of vertex \(x \in V\). For \(\mathbf{a}= \{a_x, x\in V\} \in [0,\infty )^V\) and a vertex \(y\in V\), we define \({\mathcal {T}}(\mathbf{a}, y) = \{b_x, x\in V\}\) by setting \(b_y = 0\), and \(b_x = a_x + a_y/d_y\) for all \(x \leftrightarrow y\). We let \(b_x = a_x\) if \(x \not \leftrightarrow y\) and \(x\ne y\). Note that the operation \({\mathcal {T}}\) encodes the jump of the meteor process when a meteor hits vertex \(y\).

We will now define an “inverse” operation to \({\mathcal {T}}\). For a vertex \(y\in V\) and \(\mathbf{a}= \{a_x, x\in V\} \in [0,\infty )^V\), let \(a_y^{\min } =\min _{x\leftrightarrow y} a_x \). We let \({\mathcal {R}}(\mathbf{a}, y) = \{b_x, x\in V\}\), where \(b_y = a_y + d_y a_y^{\min }\), and \(b_x = a_x - a_y^{\min }\) for all \(x \leftrightarrow y\). We let \(b_x = a_x\) if \(x \not \leftrightarrow y\) and \(x\ne y\). We will typically apply \({\mathcal {R}}\) to \(\mathbf{a}\) and \(y\) such that \(\mathbf{a}\in U\), \(a_y =0\) and \(a_y^{\min } >0\). It is easy to see that if \(\mathbf{a}\) and \(y\) satisfy these conditions then

$$\begin{aligned} {\mathcal {T}}({\mathcal {R}}(\mathbf{a},y),y) =\mathbf{a}, \quad {\mathcal {R}}(\mathbf{a},y) \in U. \end{aligned}$$
(6.1)
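The operations \({\mathcal {T}}\) and \({\mathcal {R}}\) and the inversion property (6.1) are straightforward to test numerically. The sketch below (our own illustration on a small graph, with hypothetical helper names) verifies, for a configuration \(\mathbf{a}\in U\) with \(a_y = 0\) and \(a_y^{\min } > 0\), that \({\mathcal {R}}\) maps \(U\) into \(U\) and that \({\mathcal {T}}({\mathcal {R}}(\mathbf{a},y),y) =\mathbf{a}\).

```python
def T(a, y, adj):
    """Meteor hit at y: distribute the mass at y equally among neighbors."""
    b = list(a)
    share = b[y] / len(adj[y])
    b[y] = 0.0
    for x in adj[y]:
        b[x] += share
    return b

def R(a, y, adj):
    """'Inverse' hit at y: pull min_{x~y} a[x] back from each neighbor."""
    b = list(a)
    amin = min(a[x] for x in adj[y])
    b[y] += len(adj[y]) * amin
    for x in adj[y]:
        b[x] -= amin
    return b

# A small connected graph (a 4-cycle with the chord {1,3}), adjacency lists.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
a = [0.0, 1.5, 0.5, 2.0]   # a in U: total mass 4 = |V|, and a[0] = 0

b = R(a, 0, adj)
# R maps U into U: mass is conserved, entries stay non-negative, and the
# argmin neighbor is left with zero mass.  T then inverts R, as in (6.1).
assert abs(sum(b) - sum(a)) < 1e-12
assert min(b) >= -1e-12 and any(abs(x) < 1e-12 for x in b)
assert all(abs(p - q) < 1e-12 for p, q in zip(T(b, 0, adj), a))
```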

For \(\mathbf{a}, {\mathbf {b}}\in [0,\infty )^V, \mathbf{a}=\{a_x, x\in V\}, {\mathbf {b}}=\{b_x, x\in V\}\), let \(|\mathbf{a}- {\mathbf {b}}| = \sum _{x\in V} |a_x-b_x|\). In other words, \(|\mathbf{a}- {\mathbf {b}}|\) is the \(L^1\) norm of \(\mathbf{a}- {\mathbf {b}}\). This implies that we have the usual triangle inequality \(|\mathbf{a}- {\mathbf {c}}|\le |\mathbf{a}- {\mathbf {b}}|+|{\mathbf {b}}- {\mathbf {c}}|\) for \(\mathbf{a}, {\mathbf {b}},{\mathbf {c}}\in [0,\infty )^V\).

Let \(U^*\) be the set of all \(\mathbf{a}=\{a_x, x\in V\}\in U\) such that \(a_x + a_y >0\) for all \(x\leftrightarrow y\). Fix any \(\mathbf{a}\in U^*\). Let \(a^1_{\min } \) be the minimum of non-zero \(a_x\)’s, and fix any \(\varepsilon _1\in (0, a^1_{\min }/2)\). We will define inductively infinite sequences of real numbers \(\varepsilon _1, \varepsilon _2, \ldots \), vertices \(x_1, x_2, \ldots \) and elements \(\mathbf{a}^1, \mathbf{a}^2, \ldots \) of \(U\).

We let \(\mathbf{a}^1 = \{a^1_x, x\in V\} =\{a_x, x\in V\}=\mathbf{a}\). Let \(z_1,\ldots , z_m\) be all vertices such that \(a^1_{z_r} = 0\) for \(r=1,\ldots , m\). We find \(\delta >0\) so small that \(\delta < \varepsilon _1 / 2\) and we find \(y\) such that \(\delta < a^1_y\). We define \(\widetilde{\mathbf{a}}^1 = \{\widetilde{a}^1_x, x\in V\} \in U \) by setting \(\widetilde{a}^1_{z_r} = \delta /2^r\) for \(r=2,\ldots , m\), \(\widetilde{a}^1_y = a^1_y - \sum _{r=2}^m \delta /2^r\), and \(\widetilde{a}^1_x = a^1_x\) for all \(x \ne y,z_2, \ldots , z_m\). Note that \(\widetilde{a}^1_x=0\) if and only if \(x= z_1\). We let \(\mathbf{a}^2 = {\mathcal {R}}(\widetilde{\mathbf{a}}^1, z_1)\) and \(x_1 = z_1\). We let \(a^2_{\min } \) be the minimum of non-zero \(a^2_x\)’s, and choose \(\varepsilon _2\in (0, (\varepsilon _1 \wedge a^2_{\min })/2)\).

For the induction step, we assume that \(\varepsilon _1,\ldots , \varepsilon _j\), \(x_1,\ldots , x_{j-1}\) and \(\mathbf{a}^1,\ldots , \mathbf{a}^j\) have been defined for some integer \(j\ge 2\). Write \(\mathbf{a}^j = \{a^j_x, x\in V\}\) and suppose that \(z^j_1,\ldots , z^j_{m_j}\) are all vertices with \(a^j_{z^j_r} = 0\) for \(r=1,\ldots , m_j\). For \(x\in \{z^j_1,\ldots , z^j_{m_j}\}\), let \(\alpha (x)\) be the smallest \(\ell \le j\) such that \(x\) belongs to every sequence \(z^r_1,\ldots , z^r_{m_r}\) for \(r=\ell ,\ldots , j\). We can and will assume that the sequence \(z^j_1,\ldots , z^j_{m_j}\) is ordered in such a way that \(\alpha (z^j_1) \le \alpha (z^j_r)\) for all \(r=2,\ldots , m_j\). We find \(\delta _j >0\) so small that \(\delta _j < \varepsilon _j / 2^j\) and we find \(v\) such that \(\delta _j < a^j_v\). We define \(\widetilde{\mathbf{a}}^j =\{\widetilde{a}^j_x, x\in V\} \in U \) by setting \(\widetilde{a}^j_{z^j_r} = \delta _j/2^r\) for \(r=2,\ldots , m_j\), \(\widetilde{a}^j_v = a^j_v - \sum _{r=2}^{m_j} \delta _j/2^r\), and \(\widetilde{a}^j_x = a^j_x\) for all \(x \ne v, z^j_2, \ldots , z^j_{m_j}\). Note that

$$\begin{aligned} |\widetilde{\mathbf{a}}^j - \mathbf{a}^j| = 2\sum _{r=2}^{m_j} \delta _j/2^r \le \delta _j < \varepsilon _j/2^j. \end{aligned}$$
(6.2)

We have \(\widetilde{a}^j_x=0\) if and only if \(x= z^j_1\). We let \(\mathbf{a}^{j+1} = {\mathcal {R}}(\widetilde{\mathbf{a}}^j, z^j_1)\) and \(x_j = z^j_1\). We let \(a^{j+1}_{\min } \) be the minimum of non-zero \(a^{j+1}_x\)’s, and choose \(\varepsilon _{j+1}\in (0, (\varepsilon _j \wedge a^{j+1}_{\min })/2)\). This completes the inductive definition of the sequences \(\varepsilon _1, \varepsilon _2, \ldots \), \(x_1, x_2, \ldots \) and \(\mathbf{a}^1, \mathbf{a}^2, \ldots \).

Note that \(a^{j+1}_{z^j_1} >0\) and recall how we have used the function \(\alpha (\,\cdot \,)\) to choose an element of \(z^j_1,\ldots , z^j_{m_j}\) to be in the first position, i.e., \(z^j_1\). It follows easily that for every vertex \(x\in V\), there exist infinitely many \(j\) such that \(a^j_x >0\).

In view of (6.1), we have

$$\begin{aligned} {\mathcal {T}}(\mathbf{a}^n, x_{n-1}) = \widetilde{\mathbf{a}}^{n-1}, \quad n\ge 2. \end{aligned}$$
(6.3)

Step 2 Consider an integer \(n_0\ge 1\). Let \({\mathbf {b}}^{n_0} = \mathbf{a}^{n_0}\) and define \({\mathbf {b}}^n\) for \(n=n_0-1, n_0-2, \ldots , 1\) by \({\mathbf {b}}^n = {\mathcal {T}}({\mathbf {b}}^{n+1}, x_{n}) \). We will show that

$$\begin{aligned} |{\mathbf {b}}^n - \mathbf{a}^n| \le \sum _{m=n}^{n_0-1} \varepsilon _n/2^m\le \varepsilon _n, \quad n=1,\ldots , n_0. \end{aligned}$$
(6.4)

By the definition of \({\mathbf {b}}^{n_0}\), the estimate holds for \(n=n_0\). We will prove the formula for other \(n\) by induction. Suppose that the formula holds for some \(n\in [2,n_0]\). We will show that it holds for \(n-1\).

It follows from the proof of Theorem 3.2 in [3] (see the first displayed formula in that proof) that for any \({\mathbf {c}}^1, {\mathbf {c}}^2 \in [0,\infty )^k\) and \(x\in V\),

$$\begin{aligned} |{\mathcal {T}}({\mathbf {c}}^1, x) - {\mathcal {T}}({\mathbf {c}}^2, x)| \le |{\mathbf {c}}^1 - {\mathbf {c}}^2|. \end{aligned}$$
(6.5)
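Inequality (6.5) states that a single meteor hit is a contraction in the \(\ell ^1\) norm. As an aside, this is easy to check numerically; the sketch below is not part of the proof: it implements a hit as described in the introduction (the mass at \(x\) is split equally among the neighbors of \(x\), and \(x\) is left empty), with an illustrative four-cycle as the graph.

```python
# A meteor hit at x: the mass at x is split equally among the
# neighbors of x, and no mass is left at x (the operator T(c, x)).
def hit(c, x, graph):
    c = dict(c)
    share = c[x] / len(graph[x])
    for y in graph[x]:
        c[y] += share
    c[x] = 0.0
    return c

def l1(c1, c2):
    """The l^1 distance between two mass configurations."""
    return sum(abs(c1[v] - c2[v]) for v in c1)

# Check the contraction property (6.5) on a four-cycle.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
c1 = {0: 1.0, 1: 2.0, 2: 0.5, 3: 0.5}
c2 = {0: 0.0, 1: 1.0, 2: 2.0, 3: 1.0}
assert l1(hit(c1, 0, cycle), hit(c2, 0, cycle)) <= l1(c1, c2)
```

The inequality holds because the difference \(|c^1_x - c^2_x|\) is split into \(d_x\) equal pieces which are absorbed, via the triangle inequality, into the differences at the neighbors of \(x\).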

By the definition of \({\mathbf {b}}^{n-1}\), (6.3), (6.5), the induction assumption (6.4), and (6.2), we have

$$\begin{aligned} |{\mathbf {b}}^{n-1} - \mathbf{a}^{n-1}|&= |{\mathcal {T}}({\mathbf {b}}^{n}, x_{n-1}) - \mathbf{a}^{n-1}|\\&\le |{\mathcal {T}}({\mathbf {b}}^{n}, x_{n-1}) - \widetilde{\mathbf{a}}^{n-1}| + |\widetilde{\mathbf{a}}^{n-1} - \mathbf{a}^{n-1}| \\&= |{\mathcal {T}}({\mathbf {b}}^{n}, x_{n-1}) - {\mathcal {T}}(\mathbf{a}^{n}, x_{n-1})| + |\widetilde{\mathbf{a}}^{n-1} - \mathbf{a}^{n-1}| \\&\le |{\mathbf {b}}^{n} - \mathbf{a}^{n}| + |\widetilde{\mathbf{a}}^{n-1} - \mathbf{a}^{n-1}|\\&\le \sum _{m=n}^{n_0-1} \varepsilon _n/2^m + \varepsilon _{n-1}/2^{n-1}\\&\le \sum _{m=n}^{n_0-1} \varepsilon _{n-1}/2^m + \varepsilon _{n-1}/2^{n-1} \\&= \sum _{m=n-1}^{n_0-1} \varepsilon _{n-1}/2^m. \end{aligned}$$

This completes the induction step and thus completes the proof of (6.4).

Step 3 We will next prove that for every \(y\in V\), the sequence \(x_1, x_2, \ldots \) contains infinitely many \(y\)’s. Suppose otherwise. Let \(V_1\) be the set of all \(y\in V\) such that the sequence \(x_1, x_2, \ldots \) contains infinitely many \(y\)’s and let \(k_1 = |V_1|\). By assumption, \(k_1 < k\). Since \(V\) is finite and the sequence \(x_1, x_2, \ldots \) is infinite, some vertex must appear in it infinitely often, so \(k_1>0\).

Recall that \(\widetilde{a}^j_x=0\) if and only if \(x= z^j_1\), \(\mathbf{a}^{j+1} = {\mathcal {R}}(\widetilde{\mathbf{a}}^j, z^j_1)\) and \(x_j = z^j_1\). It follows that \(a^{j+1}_x=0\) only if (but not necessarily if) \(x\leftrightarrow x_j\). In particular, if \(a^{j+1}_x=0\) then \(x\ne x_j\). Since \(x_{j+1}\) satisfies \(a^{j+1}_{x_{j+1}}=0\), we have \(x_{j+1} \ne x_j\) for every \(j\), so at least two distinct vertices occur infinitely often in the sequence; that is, \(k_1 \ge 2\). Another consequence of the fact that \(a^{j+1}_x=0\) only if \(x\leftrightarrow x_j\) is that consecutive elements \(x_j\) and \(x_{j+1}\) are neighbors, and hence \(V_1\) is a connected subset of \(V\).

By assumption, \(V_1^c := V{\setminus }V_1 \ne \emptyset \). Let \(n_1\) be so large that \(x_j \in V_1\) for all \(j\ge n_1\). We have noted earlier in the proof that for every vertex \(x\in V\), there exist infinitely many \(j\) such that \(a^j_x >0\). Let \(n_2 \ge n_1\) be such that for some \(y\in V_1\), we have \(a^{n_2}_y >0\). By the definition of \(\varepsilon _{n_2}\), there exists \(y\in V_1\) such that \(a^{n_2}_y > 2 \varepsilon _{n_2}\). It follows from (6.4) applied with \(n=n_2\) that for any \(n_0>n_2\) and \({\mathbf {b}}^{n_0} = \mathbf{a}^{n_0}\), there exists \(y\in V_1\) with \(b^{n_2}_y > \varepsilon _{n_2}\).

Let \(\{\widetilde{X}_n, n\ge 1\}\) be a discrete time symmetric random walk on \(V\) and let \(\{X_n, n\ge 1\}\) be the process \(\widetilde{X} \) killed upon exiting \(V_1\). Let \(p_n(x,y)\) be the \(n\)-step transition probabilities for \(X\). Since \(V_1^c \ne \emptyset \) and \(V\) is connected, it follows that no matter what \(X_1\) is, the process \(X\) will be killed at a finite (random) time, a.s., and, therefore, for any fixed \(x,y\in V_1\), \(\lim _{n\rightarrow \infty } p_n(x,y) = 0\). Since \(V_1\) is a finite set, \(\lim _{n\rightarrow \infty } \sup _{x,y \in V_1} p_n(x,y) = 0\). We choose \(n_3\) so large that

$$\begin{aligned} \sup _{n\ge n_3} \sup _{x \in V_1} \sum _{y \in V_1} p_n(x,y) \le \varepsilon _{n_2}/(2k). \end{aligned}$$
(6.6)
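The decay behind (6.6) can be illustrated by a toy computation that is not part of the proof: take the path \(0\leftrightarrow 1\leftrightarrow 2\leftrightarrow 3\) with \(V_1=\{1,2\}\) (an illustrative choice), form the substochastic transition matrix of the symmetric walk killed outside \(V_1\), and observe that the surviving probability mass shrinks geometrically with the number of steps.

```python
# Killed walk on V_1 = {1, 2} inside the path 0-1-2-3: each step of the
# symmetric walk moves to either neighbor with probability 1/2, and the
# walk is killed on leaving V_1.  Rows/columns are indexed by V_1.
P = [[0.0, 0.5],
     [0.5, 0.0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(20):
    Pn = matmul(Pn, P)  # Pn = P^21 after the loop

# sup_x sum_y p_n(x, y): the largest total surviving probability.
surviving = max(sum(row) for row in Pn)
assert surviving < 1e-6  # the mass decays like (1/2)^n
```

Here the surviving mass after \(n\) steps is exactly \(2^{-n}\) from either starting point, so a bound of the form (6.6) holds once \(n_3\) is large enough.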

We will say that \(y_1, y_2,\ldots , y_n\) is a nearest neighbor path in \(V_1\) if \(y_j \in V_1\) for all \(j\) and \(y_j \leftrightarrow y_{j+1}\) for all \(1\le j \le n-1\). It is easy to see that we can choose \(n_0\) so large that the following condition holds:

(A1) For each nearest neighbor path \(y_1, y_2,\ldots , y_{n_3}\) of length \(n_3\) in \(V_1\), there exist \(j_1, j_2, \ldots , j_{n_3}\) such that \(n_0>j_1 > j_2 > \cdots > j_{n_3}>n_2\) and \(x_{j_m} = y_m\) for \(1 \le m \le n_3\).

Let

$$\begin{aligned} \gamma&= \sum _{x\in V_1} b^{n_0}_x, \\ {\mathbf {p}}_x&= b^{n_0}_x/\gamma , \quad x\in V_1, \end{aligned}$$

and note that \(\{{\mathbf {p}}_x, x\in V_1\}\) is a probability distribution on \(V_1\).

Suppose that the initial distribution of \(X \) is given by \({\mathbb {P}}(X_1 =x) ={\mathbf {p}}_x\) for \(x\in V_1\). We will define a process \(\{Y_n, 1\le n\le n_0-n_2+1\}\) which, heuristically speaking, represents the process \(X \) slowed down so that it moves to the next step along its trajectory only when the current \(x_j\) agrees with its location. The rigorous definition is the following. We set \(Y_1 = X_1\) and \(\kappa _1 = 1\). Suppose that \(Y_n\) and \(\kappa _n\) have been defined for some \(n < n_0-n_2+1\). If \(Y_n = x_{n_0-n} \) then we let \(\kappa _{n+1} = \kappa _n +1\) and \(Y_{n+1} = X_{\kappa _{n+1}}\). Otherwise we let \(Y_{n+1} = Y_n\) and \(\kappa _{n+1} = \kappa _n\).
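The slow-down construction can be stated compactly in code. The sketch below is only illustrative and not part of the proof: the trajectory of \(X\) and the meteor sites are supplied as plain lists, killing is ignored, and the sites are listed in the order \(x_{n_0-1}, x_{n_0-2}, \ldots \) in which \(Y\) reads them.

```python
def slowed_walk(X, sites):
    """Y advances along the trajectory X only when the current meteor
    site coincides with Y's position; otherwise Y stands still."""
    Y = [X[0]]          # Y_1 = X_1
    k = 0               # current index into the trajectory of X
    for s in sites:     # sites in the order x_{n_0-1}, x_{n_0-2}, ...
        if Y[-1] == s:
            k += 1
            Y.append(X[k])
        else:
            Y.append(Y[-1])
    return Y

# A deterministic example: the walk 0 -> 1 -> 2 -> 3, slowed down.
print(slowed_walk([0, 1, 2, 3], [0, 5, 1, 1, 2]))
# -> [0, 1, 1, 2, 2, 3]
```

Each entry of the site list that misses \(Y\)'s current position leaves \(Y\) in place, which is why \(Y\) takes at least \(n_3\) genuine steps once every nearest neighbor path of length \(n_3\) appears among the sites, as guaranteed by (A1).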

Let \(\zeta \) be the time \(n\le n_0-n_2+1\) when \(Y_n\) is killed (upon exiting \(V_1\)); we let \(\zeta = n_0-n_2+2\) if there is no such time. It follows from (A1) that \(Y_n\) makes at least \(n_3\) steps on the interval \([1,n_0-n_2+1]\) or it is killed at \(\zeta \le n_0-n_2+1\). Hence, in view of (6.6), \({\mathbb {P}}(\zeta > n_0-n_2+1) \le \varepsilon _{n_2}/(2k)\). It is elementary to check that \({\mathbb {P}}(Y_n = x) = b^{n_0 -n +1}_x/\gamma \) for \(x\in V_1\) and \(1\le n \le n_0-n_2+1\). In particular, \({\mathbb {P}}(Y_{n_0-n_2+1} = x) = b^{n_2}_x/\gamma \) for \(x\in V_1\). We obtain, using (6.6), for all \(x\in V_1\),

$$\begin{aligned} b^{n_2}_x&= \gamma {\mathbb {P}}(Y_{n_0-n_2+1} = x) \le k {\mathbb {P}}(Y_{n_0-n_2+1} = x)\\&\le k {\mathbb {P}}(\zeta > n_0-n_2+1) \le k \varepsilon _{n_2}/(2k)\\&= \varepsilon _{n_2}/2. \end{aligned}$$

This contradicts the fact that there exists \(y\in V_1\) with \(b^{n_2}_y > \varepsilon _{n_2}\). This completes the proof that for every \(y\in V\), the sequence \(x_1, x_2, \ldots \) contains infinitely many \(y\)’s.

Step 4 Recall the following: (i) We fixed an arbitrary \(\mathbf{a}\in U^*\); (ii) \(a^1_{\min } \) is the minimum of non-zero \(a_x\)’s; (iii) \(\varepsilon _1\in (0, a^1_{\min }/2)\) is a fixed, arbitrarily small number; (iv) the sequence \(x_1, x_2, \ldots \) was constructed from \(\mathbf{a}\). For a given \({\mathbf {c}}\in U\) and \(n\), we let \({\mathbf {c}}^{n} = {\mathbf {c}}\) and we define \({\mathbf {c}}^j\) for \(j=n-1, n-2, \ldots , 1\) by \({\mathbf {c}}^j = {\mathcal {T}}({\mathbf {c}}^{j+1}, x_{j}) \). We will show that there exists \(n_4\) so large that for all \(n\ge n_4\) and all \({\mathbf {c}}\in U\),

$$\begin{aligned} |{\mathbf {c}}^1 - \mathbf{a}| \le 2\varepsilon _1. \end{aligned}$$
(6.7)

Let \(\{X^1_i, i\ge 1\}\) and \(\{X^2_i, i\ge 1\}\) be independent discrete time symmetric random walks on \(G\). Their initial distributions will be specified below.

Consider a large \(n\) whose value will be specified later. We define random walks \(\{Y^1_i, 1\le i\le n\}\) and \(\{Y^2_i, 1 \le i\le n\}\) with “time delay” as follows. For \(m=1,2\), we let \(Y^m_1 = X^m_1\) and \(\beta ^m_1 = 1\). Consider any \(2\le j\le n\) and suppose that \(Y^m_{j-1}\) and \(\beta ^m_{j-1}\) have been defined. Let

$$\begin{aligned} \beta ^m_j&= {\left\{ \begin{array}{ll} \beta ^m_{j-1}+1 &{} \text {if } Y^m_{j-1} = x_{n-j+1},\\ \beta ^m_{j-1} &{} \text {otherwise}, \end{array}\right. } \\ Y^m_j&= X^m_{\beta ^m_j}. \end{aligned}$$

In other words, \(Y^m\) visits the same vertices as \(X^m\) does, in the same order, but it changes the location between times \(j-1\) and \(j\) if and only if \(Y^m_{j-1} = x_{n-j+1}\).
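The delayed pair \((Y^1,\beta ^1), (Y^2,\beta ^2)\) and the meeting time \(\tau \) can be sketched in the same style; this is an illustration only, with the two trajectories and the site sequence supplied as deterministic lists (the sites in the order \(x_{n-1}, x_{n-2}, \ldots \) in which they are read).

```python
def delayed_pair(X1, X2, sites):
    """Run the two delayed walks Y^1, Y^2 on a common meteor site
    sequence; return their trajectories and the meeting time tau
    (None if they never meet)."""
    Y = [[X1[0]], [X2[0]]]
    beta = [1, 1]
    X = [X1, X2]
    tau = 1 if X1[0] == X2[0] else None
    for j, s in enumerate(sites, start=2):
        for m in (0, 1):
            if Y[m][-1] == s:
                beta[m] += 1       # Y^m moves iff it sits on the site
            Y[m].append(X[m][beta[m] - 1])  # Y^m_j = X^m_{beta^m_j}
        if tau is None and Y[0][-1] == Y[1][-1]:
            tau = j
    return Y[0], Y[1], tau

Y1, Y2, tau = delayed_pair([0, 1, 2], [2, 1, 2], [0, 2, 1])
```

In this toy run the walks meet at \(\tau = 3\); in the proof, (6.8) is used to show that \(\tau \) is finite with high probability once the delay counters have advanced far enough.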

Let \(d_G = \max \{d_x: x\in V\}\) be the maximal vertex degree of the graph \(G\). Let \({{\mathrm{dist}}}(x,y)\) be the graph distance between \(x,y\in V\). For \(0 \le j \le n-1\),

$$\begin{aligned}&{\mathbb {P}}\left( {{\mathrm{dist}}}(Y^1_{j+1},Y^2_{j+1}) = {{\mathrm{dist}}}(Y^1_{j},Y^2_{j})-1 \mid Y^1_j \ne Y^2_j, (Y^1_{j+1},Y^2_{j+1}) \ne (Y^1_j, Y^2_j)\right) \nonumber \\&\quad \ge 1/d_G. \end{aligned}$$
(6.8)

Let \(\tau =\min \{j: 1\le j \le n, Y^1_j = Y^2_j\}\) with the convention that \(\min \emptyset = \infty \). We obtain from (6.8), for \(\ell \le n-1\), and any \(x,y\in V\),

$$\begin{aligned} {\mathbb {P}}\left( \tau \le \ell \mid \beta ^1_{\ell -1} + \beta ^2_{\ell -1} \ge k+1, Y^1_1 =x, Y^2_1=y\right) \ge 1/d_G^k. \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathbb {P}}\left( \tau > \ell \mid \beta ^1_{\ell -1} + \beta ^2_{\ell -1} \ge k+1, Y^1_1 =x, Y^2_1=y\right) \le 1- 1/d_G^k. \end{aligned}$$

This and the Markov property imply that

$$\begin{aligned} {\mathbb {P}}\left( \tau = \infty \mid \beta ^1_{n-1} + \beta ^2_{n-1} \ge (k+1) m, Y^1_1 =x, Y^2_1=y\right) \le (1- 1/d_G^k)^m. \end{aligned}$$
(6.9)

We now fix \(m_0\) such that \((1-1/d_G^{k})^{m_0} < \varepsilon _1/(2k)\).
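Such an \(m_0\) exists because \(1 - 1/d_G^k \in (0,1)\); explicitly, taking logarithms, any integer

$$\begin{aligned} m_0 > \frac{\log \big ( \varepsilon _1/(2k) \big )}{\log \big ( 1 - 1/d_G^{k} \big )} \end{aligned}$$

satisfies \((1-1/d_G^{k})^{m_0} < \varepsilon _1/(2k)\), since both logarithms are negative.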

Recall that for every \(y\in V\), the sequence \(x_1, x_2, \ldots \) contains infinitely many \(y\)’s. This implies that there exists \(n_4\) so large that for any sequence \(y_1, y_2,\ldots , y_{km_0}\) of elements of \(V\) of length \(km_0\), there exists a subsequence \(x_{j_1}, x_{j_2},\ldots , x_{j_{km_0}}\) of \(x_1, x_2,\ldots , x_{n_4}\) such that \(x_{j_m} = y_m\) for all \(1\le m \le km_0\). Recall the integer \(n\) used in the definition of \(Y^m\)’s and assume that \(n\ge n_4\). It follows from the definition of \(n_4\) that \(\beta ^1_{n-1} + \beta ^2_{n-1} \ge (k+1) m_0\) with probability 1. Hence, with this choice of \(n\), (6.9) implies that for any \(x,y \in V\),

$$\begin{aligned} {\mathbb {P}}\left( \tau = \infty \mid Y^1_1 =x, Y^2_1=y\right) \le (1- 1/d_G^k)^{m_0} < \varepsilon _1/(2k). \end{aligned}$$
(6.10)

Recall that the sequence \({\mathbf {c}}^j\) for \(j=n,n-1, \ldots , 1\) was defined relative to \(n\). We let \({\mathbf {b}}^n = \mathbf{a}^n\) and we define \({\mathbf {b}}^j\) for \(j=n-1, n-2, \ldots , 1\) by \({\mathbf {b}}^j = {\mathcal {T}}({\mathbf {b}}^{j+1}, x_{j}) \). Let \({\mathbf {b}}^j = \{b^j_x, x\in V\}\) and \({\mathbf {c}}^j = \{c^j_x, x\in V\}\) for \(1\le j \le n\), and \(p^j_x = b^j_x/k\) and \(q^j_x = c^j_x/k\) for \(x\in V\). Note that \({\mathbf {p}}^j:=\{p^j_x, x\in V\}\) and \({\mathbf {q}}^j:=\{q^j_x, x\in V\}\) are probability distributions on \(V\), for all \(j\).

Let \(Y^3_j = Y^2_j\) for \(j \le \tau \) and \(Y^3_j = Y^1_j\) for \(j > \tau \). Assume that the (initial) distribution of \(Y^1_1\) is \({\mathbf {p}}^n\) and the distribution of \(Y^3_1= Y^2_1\) is \({\mathbf {q}}^n\). It is easy to see that the distribution of \(Y^1_j\) is \({\mathbf {p}}^{n-j+1}\) and that of \( Y^3_j\) is the same as that of \(Y^2_j\), and it is equal to \({\mathbf {q}}^{n-j+1}\), for all \(1\le j\le n\). It follows from (6.10) that

$$\begin{aligned} |{\mathbf {p}}^1 - {\mathbf {q}}^1 | = \sum _{x\in V} |p^1_x - q^1_x| \le 2{\mathbb {P}}(Y^1_n \ne Y^3_n) \le \varepsilon _1/k. \end{aligned}$$
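The first inequality above is the standard coupling bound; for completeness, it follows from

$$\begin{aligned} |p^1_x - q^1_x| = |{\mathbb {P}}(Y^1_n = x) - {\mathbb {P}}(Y^3_n = x)| \le {\mathbb {P}}(Y^1_n = x, Y^1_n \ne Y^3_n) + {\mathbb {P}}(Y^3_n = x, Y^1_n \ne Y^3_n) \end{aligned}$$

by summing over \(x\in V\); the event \(\{Y^1_n \ne Y^3_n\}\) is contained in \(\{\tau = \infty \}\), which is where (6.10) enters.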

Hence, \(|{\mathbf {b}}^1 - {\mathbf {c}}^1| \le \varepsilon _1\). This and (6.4) applied with \(n=1\) show that \(|\mathbf{a}^1 - {\mathbf {c}}^1| \le 2\varepsilon _1\). Thus, (6.7) is proved.

Step 5 Consider any time \(t\ge 0\) and suppose that the first \(n_4\) meteor hits after time \(t\) occur at the sites \(x_{n_4}, x_{n_4-1},\ldots , x_1\), in this order, and the last hit occurs at time \(s >t\). It follows from (6.7) that \(|{\mathcal {M}}_s - \mathbf{a}| \le 2\varepsilon _1\). A standard argument shows that, with probability 1, there exists \(t\ge 0\) such that the first \(n_4\) meteor hits after time \(t\) occur at the sites \(x_{n_4}, x_{n_4-1},\ldots , x_1\). Hence, \({\mathcal {M}}\) will come within distance \(2\varepsilon _1\) of \(\mathbf{a}\) at some time, a.s., for any initial distribution of \({\mathcal {M}}_0\). Since \(\mathbf{a}\) is an arbitrary element of \(U^*\), \(\varepsilon _1\) is an arbitrarily small positive number and \(U_Q\) is closed (by definition), we have \(U^* \subset U_Q\). It is easy to see that \(U^*\) is dense in \(U\). We conclude that \(U\subset U_Q\), thus finishing the proof. \(\square \)