1 Introduction

In this paper we consider a one-dimensional growing particle system with a finite range of interaction. A configuration is specified by assigning to each site \(x\in {\mathbb {Z}}\) a number of particles \({\eta (x) \in \{0,1,\ldots ,N\}}\) occupying x, where \(N \in {\mathbb {N}}\) is fixed. The state space of the process is thus \(\{0,1,\ldots ,N\}^{{\mathbb {Z}}}\). Under additional assumptions such as non-degeneracy and translation invariance, we show that the system spreads linearly in time and that the speed can be expressed as the average of a certain functional with respect to a certain measure. A corresponding shape theorem and a fluctuation result are given.

The first shape theorem was proven in [31] for a discrete-space growth model. A general shape theorem for discrete-space attractive growth models can be found in [18, Chapter 11]. In the continuous-space setting, shape results have been obtained in [15] for a model of growing sets and in [7] for a continuous-space particle birth process.

The asymptotic behavior of the position of the rightmost particle of a branching random walk under various assumptions is studied in [16, 17, 20]; see also references therein. A sharp condition for a shape theorem for a random walk with restriction is given in [11]. The speed of propagation for a one-dimensional discrete-space supercritical branching random walk under an exponential moment condition can be found in [8]. More refined limiting properties have been obtained recently, such as the limiting law of the minimum or the limiting process seen from the tip; see [1,2,3, 5]. Blondel [9] proves a shape result for the East model, which is a non-attractive particle system. A law of large numbers and a central limit theorem for the position of the tip were established in [14] for a stochastic combustion process with a bounded number of particles per site.

In many cases the underlying stochastic model is attractive, which enables the application of a subadditive ergodic theorem. Typically, shape results have been obtained using the subadditivity property in one form or another. This is the case not only for the systems of motionless particles listed above (see, among others, [7, 15, 18]) but also for those with moving particles; see, e.g., the shape theorem for the frog model [6]. A certain kind of subadditivity was also used in [27], where a shape theorem is given for a non-attractive model involving two types of moving particles. In the present paper our model is not attractive, and we do not rely on subadditivity (see also Remark 2.6). We work with motionless particles.

In addition to the shape theorem, we also provide a sub-Gaussian limit estimate on the deviation of the position of the rightmost particle from the mean. Various sub-exponential and sub-Gaussian estimates on the convergence rate for first passage percolation under different assumptions can be found in, e.g., [4, 26]. We also derive an exponential non-asymptotic bound valid for all times.

On Page 4 we describe a particular model with the birth rate declining in crowded locations. This is achieved by augmenting the free branching rate with multipliers describing the effects of competition on the parent's ability to procreate and the offspring's ability to survive in a dense location. This process is in general non-attractive.

The paper is organized as follows. In Sect. 2 we describe our model in detail, give our assumptions, and formulate the main results. In Sect. 3 we outline the construction of the process as a unique solution to a stochastic equation driven by a Poisson point process. We note here that this very much resembles the construction via graphical representation. In Sect. 4 we prove the main results, Theorems 2.4, 2.7, and 2.8. Some numerical simulations are discussed in Sect. 5.

2 Model and the Main Results

We consider here a one-dimensional continuous-time discrete-space birth process with multiple particles per site allowed. The state space of our process is \(\mathcal {X}: = \{0,1,\ldots , N\} ^{{\mathbb {Z}}}\). For \(\eta \in \mathcal {X}\) and \(x \in {\mathbb {Z}}\), \(\eta (x)\) is interpreted as the number of particles, or individuals, at x.

The evolution of the process can be described as follows. If the system is in the state \(\eta \in \mathcal {X}\), a single particle is added at \(x \in {\mathbb {Z}}\) (that is, \(\eta (x)\) is increased by 1) at rate \(b(x, \eta )\) provided that \(\eta (x) < N\); the number of particles at x does not grow anymore once it reaches N. Here \(b: {\mathbb {Z}}\times \mathcal {X}\rightarrow {\mathbb {R}}_+\) is a map called the birth rate. The heuristic generator of the model is given by

$$\begin{aligned} L F (\eta ) = \sum \limits _{x \in {\mathbb {Z}}} b(x, \eta ) [F(\eta ^{+x} ) - F(\eta )], \end{aligned}$$
(1)

where \(\eta ^{+x} (y) = \eta (y)\), \(y \ne x\), and

$$\begin{aligned} \eta ^{+x} (x) = {\left\{ \begin{array}{ll} \eta (x) + 1, &{} \text { if } \eta (x ) < N, \\ \eta (x), &{} \text { if } \eta (x ) = N. \end{array}\right. } \end{aligned}$$
(2)
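Although our treatment is theoretical, the dynamics just described are straightforward to simulate by a standard Gillespie (continuous-time Monte Carlo) scheme: wait an exponential time with the total rate, then pick a birth site proportionally to its rate. The Python sketch below is illustrative only: it truncates \({\mathbb {Z}}\) to a finite window and encodes a configuration as a dictionary mapping sites to particle counts; the function name, window size, and encoding are our own choices, not part of the model.

```python
import random

def simulate_birth_process(b, N, T, width=50, seed=0):
    """Gillespie simulation of the birth process on the finite window
    {-width, ..., width} (a truncation of Z).  `b(x, eta)` is the birth
    rate; `eta` is a dict mapping sites to particle counts in {0,...,N}.
    Starts from a single particle at the origin, runs until time T."""
    rng = random.Random(seed)
    eta = {0: 1}                      # initial condition: one particle at 0
    t = 0.0
    sites = range(-width, width + 1)
    while True:
        # rates of all admissible births (sites not yet saturated)
        rates = {x: b(x, eta) for x in sites if eta.get(x, 0) < N}
        total = sum(rates.values())
        if total == 0.0:
            break
        t += rng.expovariate(total)   # exponential waiting time
        if t >= T:
            break
        # choose the birth site with probability proportional to its rate
        u = rng.uniform(0.0, total)
        acc = 0.0
        for x, r in rates.items():
            acc += r
            if u <= acc:
                eta[x] = eta.get(x, 0) + 1
                break
    return eta
```

Any rate function satisfying Conditions 2.1–2.3 (restricted to the window) can be passed as `b`; the returned dictionary is the configuration \(\eta _T\) on the window.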

We make the following assumptions about b. For \(y \in {\mathbb {Z}}\), \(\eta \in \mathcal {X}\), let \( \eta \odot y \in \mathcal {X}\) be the shift of \(\eta \) by y, so that \([\eta \odot y] (x) = \eta (x - y)\).

Condition 2.1

(Translation invariance) For any \(x, y \in {\mathbb {Z}}\) and \(\eta \in \mathcal {X}\),

$$\begin{aligned} b(x+y, \eta \odot y) = b(x,\eta ). \end{aligned}$$

Condition 2.2

(Finite range of interaction) For some \(R \in {\mathbb {N}}\),

$$\begin{aligned} b(x, \eta ) = b(x, \xi ), \ \ \ \ x \in {\mathbb {Z}}, \ \eta , \xi \in \mathcal {X}\end{aligned}$$
(3)

whenever \(\eta (z) = \xi (z)\) for all \(z \in {\mathbb {Z}}\) with \(|x - z| \le R\).

Put differently, Condition 2.2 means that the interaction in the model has finite range R. Since the number of particles at a given site cannot exceed N, with no loss of generality we can also assume that

$$\begin{aligned} b(x, \eta ) = 0, \ \ \ \ \text {if } \eta (x) = N. \end{aligned}$$
(4)

For \(\eta \in \mathcal {X}\) we define the set of occupied sites

$$\begin{aligned} \text {occ}(\eta ) = \{z\in {\mathbb {Z}}: \eta (z) >0 \}. \end{aligned}$$

Condition 2.3

(Non-degeneracy) For every \(x \in {\mathbb {Z}}\) and \(\eta \in \mathcal {X}\), \(b(x, \eta ) > 0\) if and only if there exists \(y \in \text {occ}(\eta )\) with \(|x-y|\le R\).

Note that by translation invariance and Condition 2.2, \(\sup \nolimits _{x \in {\mathbb {Z}}, \eta \in \mathcal {X}} b(x, \eta )\) is finite, because this supremum is equal to

$$\begin{aligned} \overline{\mathbf { b}} := \max \{b(0, \eta ) \mid \eta \in \mathcal {X}, \eta (y) = 0 \text { for all } y \text { with } |y| > R \}. \end{aligned}$$
(5)
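Since the maximum in (5) runs over the finitely many configurations supported on \(\{-R,\ldots ,R\}\) (there are \((N+1)^{2R+1}\) of them), \(\overline{\mathbf {b}}\) can be computed by direct enumeration for any concrete rate. A brute-force Python sketch, with configurations encoded as dictionaries (an illustrative choice on our part):

```python
import itertools

def sup_rate(b, N, R):
    """b_bar of (5): maximum of b(0, eta) over the configurations eta
    supported on {-R, ..., R}.  By translation invariance and the finite
    range of interaction this equals the global supremum of b."""
    best = 0.0
    for counts in itertools.product(range(N + 1), repeat=2 * R + 1):
        eta = {x: c for x, c in zip(range(-R, R + 1), counts)}
        best = max(best, b(0, eta))
    return best
```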

Similarly, it follows from translation invariance and non-degeneracy that

$$\begin{aligned} \underline{\mathbf { b}} := \inf \limits _{\begin{array}{c} x \in {\mathbb {Z}}, \eta \in \mathcal {X}, \\ \mathrm{dist}(x, \text {occ}(\eta ) ) \le R \end{array} } b(x, \eta ) > 0. \end{aligned}$$
(6)

The construction of the birth process is outlined in Sect. 3. Let \((\eta _t)_{t \ge 0} = (\eta _t)\) be the birth process with birth rate b and initial condition \(\eta _0 (k) = \mathbb {1} \{k = 0\}\), \(k \in {\mathbb {Z}}\). For an interval \([a,b]\subset {\mathbb {R}}\) and \(c > 0\), \(c[a,b]\) denotes the interval \([ca,cb]\). The following theorem characterizes the growth of the set of occupied sites.

Theorem 2.4

There exist \(\lambda _r, \lambda _l >0\) such that for every \(\varepsilon >0\) a.s. for sufficiently large t,

$$\begin{aligned} \bigg ( (1 - \varepsilon )[-\lambda _l t, \lambda _r t] \cap {\mathbb {Z}}\bigg ) \subset \text {occ}(\eta _t) \subset (1 + \varepsilon )[-\lambda _l t, \lambda _r t]. \end{aligned}$$
(7)

Remark 2.5

If we assume additionally that the birth rate is symmetric, that is, if for all \(x \in {\mathbb {Z}}\), \(\eta \in \mathcal {X}\),

$$\begin{aligned} b(x, \eta ) = b(-x, {\tilde{\eta }}), \end{aligned}$$

where \({\tilde{\eta }} (y) = \eta (-y)\), then, as can be seen from the proof, \(\lambda _l = \lambda _r\) holds true in Theorem 2.4.

Remark 2.6

Note that under our assumptions the following attractiveness property does not have to hold: if two initial configurations satisfy \(\eta ^1 _0 \le \eta ^2 _0\), then \(\eta ^1 _t \le \eta ^2 _t\) for all \(t\ge 0\). This renders inapplicable the techniques based on a subadditive ergodic theorem (e.g., [28]), which are usually used in proofs of shape theorems (see, e.g., [7, 15, 18]). On the other hand, our technique relies heavily on the dimension being one, as the analysis is based on viewing the process from its tip. It would be of interest to extend the result to dimensions \(\mathrm {d}\ge 2\). To the best of our knowledge, even for the following modification of Eden's model the shape theorem has not been proven. Take \(\mathrm {d}= 2\), \(N = 1\) (only one particle per site is allowed, so configurations are identified with subsets of \({\mathbb {Z}}^2\)), and for \(x \in {\mathbb {Z}}^2\) and \(\eta \subset {\mathbb {Z}}^2\) with \(\eta (x) = 0\) let

$$\begin{aligned} b(x, \eta ) = \mathbb {1} \left\{ \sum \limits _{y \in {\mathbb {Z}}^2: |y - x| = 1}\eta (y) \in \{1,2\} \right\} . \end{aligned}$$

Define \(\xi _t = \eta _t \cup \{y \in {\mathbb {Z}}^2: y \text { is surrounded by particles of } \eta _t \}\), \(t \ge 0\). It is reasonable to expect the shape theorem to hold for \(\xi _t\). Note that the classical Eden model can be seen as a birth process with rate

$$\begin{aligned} b(x, \eta ) = \mathbb {1}\left\{ \sum \limits _{y \in {\mathbb {Z}}^2: |y - x| = 1}\eta (y) > 0 \right\} \end{aligned}$$

started from a single particle at the origin.

For \(\eta \in \mathcal {X}\) with \(\sum \limits _{x \in {\mathbb {Z}}} \eta (x) < \infty \), let

$$\begin{aligned} \text {tip}(\eta ) := \max \{m \in {\mathbb {Z}}: \eta (m) > 0\} \end{aligned}$$

be the position of the rightmost occupied site. Let

$$\begin{aligned} X_t = \text {tip} (\eta _t). \end{aligned}$$
(8)

By Theorem 2.4 a.s.

$$\begin{aligned} \frac{X_t}{t} \rightarrow \lambda _r, \ \ \ t \rightarrow \infty . \end{aligned}$$
(9)

We now give two results on the deviations of \(X_t\) from the mean. The first theorem gives a sub-Gaussian limit estimate on the fluctuations around the mean, while the second provides an exponential estimate for all \(t \ge 0\). Let \(\lambda _r\) be as in (9).

Theorem 2.7

There exist \(\mathrm {C}_1, \vartheta >0\) such that

$$\begin{aligned} \limsup \limits _{t \rightarrow \infty } {\mathbb {P}}\left\{ |X_t - \lambda _r t| \ge q \sqrt{t} \right\} \le \mathrm {C}_1 e^{- \vartheta q ^2 }, \ \ \ q > 0 . \end{aligned}$$
(10)

Theorem 2.8

There exist \(\mathrm {C}_2, \theta >0\) such that

$$\begin{aligned} {\mathbb {P}}\left\{ |\frac{X_t}{t} - \lambda _r | \ge q \right\} \le \mathrm {C}_2 e^{-\theta q t}, \ \ \ q> 0, \ t > 0 . \end{aligned}$$
(11)

Of course, Theorems 2.7 and 2.8 also apply to the position of the leftmost occupied site provided that \(\lambda _r\) is replaced with \(\lambda _l\).

Birth rate with regulation via fecundity and establishment. As an example of a non-trivial model satisfying our assumptions, consider the birth process in \(\mathcal {X}\) with birth rate

$$\begin{aligned} \begin{aligned} b(x, \eta )&= \exp \left\{ - \sum \limits _{u \in {\mathbb {Z}}} \phi (u-x) \eta (u) \right\} \\&\quad \sum \limits _{y \in {\mathbb {Z}}} \left[ a(x-y) \eta (y) \exp \left\{ -\sum \limits _{v \in {\mathbb {Z}}} \psi (v-y) \eta (v) \right\} \right] , \\&\qquad x\in {\mathbb {Z}}, \eta \in \mathcal {X}, \eta (x) < N, \end{aligned} \end{aligned}$$
(12)

where \(a, \phi , \psi : {\mathbb {Z}}\rightarrow {\mathbb {R}}_+\) have finite range and \(\sum \limits _{x \in {\mathbb {Z}}} a(x) > 0\). Birth rate (12) is a modification of the free branching rate

$$\begin{aligned} \begin{aligned} b(x, \eta ) =\sum \limits _{y \in {\mathbb {Z}}} a(x-y) \eta (y) , \ \ \ x\in {\mathbb {Z}}, \eta \in \mathcal {X}, \eta (x) < N. \end{aligned} \end{aligned}$$

The purpose of the modification is to include damping mechanisms reducing the birth rate in dense regions. The first exponential factor in (12), \( \exp \left\{ - \sum \nolimits _{u \in {\mathbb {Z}}} \phi (u-x) \eta (u) \right\} \), represents the reduction in establishment at location x when \(\eta \) has many individuals around x. The second exponential factor, \(\exp \left\{ -\sum \nolimits _{v \in {\mathbb {Z}}} \psi (v-y) \eta (v) \right\} \), represents the diminished fecundity of an individual at y surrounded by many other individuals. Further description and motivation for an equivalent continuous-space model can be found in [10, 19]. We note here that the birth process with birth rate (12) does not in general possess the attractiveness property mentioned in Remark 2.6. Some numerical observations on this model are collected in Sect. 5.
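For concreteness, rate (12) can be evaluated directly once the finitely supported kernels \(a, \phi , \psi \) are given. In the Python sketch below, kernels and configurations are encoded as dictionaries with finite support; this encoding, and the function name, are our own illustrative choices.

```python
import math

def regulated_rate(x, eta, a, phi, psi, N):
    """Birth rate (12): the free branching rate sum_y a(x-y) eta(y),
    damped by an establishment factor (kernel phi, around the target
    site x) and a fecundity factor (kernel psi, around each parent y).
    Kernels are dicts u -> value; eta is a dict site -> count."""
    if eta.get(x, 0) >= N:
        return 0.0                       # saturated site, cf. (4)
    # establishment penalty at the target location x
    establishment = math.exp(-sum(phi.get(u - x, 0.0) * n
                                  for u, n in eta.items()))
    total = 0.0
    for y, n in eta.items():
        if n == 0:
            continue
        # fecundity penalty of a parent at y
        fecundity = math.exp(-sum(psi.get(v - y, 0.0) * m
                                  for v, m in eta.items()))
        total += a.get(x - y, 0.0) * n * fecundity
    return establishment * total
```

With \(\phi \equiv \psi \equiv 0\) the function reduces to the free branching rate, and any nonzero kernels can only decrease it.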

3 Construction of the Process

Similarly to [7, 21, 22], we construct the process as a solution to the stochastic equation

$$\begin{aligned} \begin{aligned} \eta _t (k) = \int \limits _{(0,t] \times \{k\} \times [0, \infty ) } \mathbb {1} _{ [0,b(i, \eta _{s-} )] } (u) P(\mathrm{d}s,\mathrm{d}i,\mathrm{d}u) + \eta _0 (k), \quad t \ge 0, \ k \in {\mathbb {Z}}, \end{aligned} \end{aligned}$$
(13)

where \((\eta _t)_{t \ge 0}\) is a càdlàg \(\mathcal {X}\)-valued solution process and P is a Poisson point process on \({\mathbb {R}}_+ \times {\mathbb {Z}}\times {\mathbb {R}}_+ \) with mean measure \(\mathrm{d}s \times \# \times \mathrm{d}u\) (\(\#\) is the counting measure on \({\mathbb {Z}}\)). We require the processes P and \(\eta _0\) to be independent of each other. Equation (13) is understood in the sense that the equality holds a.s. for every \(k \in {\mathbb {Z}}\) and \(t \ge 0\). In the integral on the right-hand side of (13), \(i = k\) is the location and s is the time of birth of a new particle. Thus, the integral represents the number of births at \(k \in {\mathbb {Z}}\) which occurred up to time t.
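The driving mechanism of (13) can be mimicked numerically by thinning: on each site, candidate birth times form a Poisson process of constant intensity dominating b (for instance \(\overline{\mathbf {b}}\) of (5)), and a candidate point (s, i, u) is accepted when \(u \le b(i, \eta _{s-})\). The Python sketch below truncates \({\mathbb {Z}}\) to a finite window; the function name and parameters are illustrative assumptions, not part of the construction.

```python
import random

def thinning_solution(b, b_bar, N, T, width=30, seed=0):
    """Simulates (13) by thinning on the window {-width, ..., width}.
    On each site, candidate points (s, i, u) are generated with s a
    Poisson process of intensity b_bar and u uniform on [0, b_bar];
    a candidate is accepted when u <= b(i, eta_{s-}).  b_bar must
    dominate b everywhere."""
    rng = random.Random(seed)
    eta = {0: 1}                       # one particle at the origin
    events = []
    for i in range(-width, width + 1):
        s = 0.0
        while True:
            s += rng.expovariate(b_bar)
            if s > T:
                break
            events.append((s, i, rng.uniform(0.0, b_bar)))
    events.sort()                      # process candidate points in time order
    for s, i, u in events:
        if eta.get(i, 0) < N and u <= b(i, eta):
            eta[i] = eta.get(i, 0) + 1
    return eta
```

Because all candidate points are drawn before the sweep, this is essentially the graphical representation: the same Poisson "noise" could be reused to couple processes with different rates.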

This section follows closely Section 5 in [7]. The only difference from Theorem 5.1 in [7] is that the 'geographic' space is discrete (\({\mathbb {Z}}\)) rather than continuous (\({\mathbb {R}}^\mathrm {d}\) in [7]). This change requires no new arguments, ideas, or techniques.

We will make the following assumption on the initial condition:

$$\begin{aligned} {\mathbb {E}}\sum \limits _{i \in {\mathbb {Z}}}\eta _0 (i) < \infty . \end{aligned}$$
(14)

Let P be defined on a probability space \((\Omega , \mathscr {F} , {\mathbb {P}}) \). We say that the process P is compatible with an increasing, right-continuous and complete filtration of \(\sigma \)-algebras \((\mathscr {F}_t, t \ge 0)\), \(\mathscr {F}_t \subset \mathscr {F}\), if P is adapted, that is, all random variables of the type \(P({\bar{T}} _1 \times U)\), \({\bar{T}} _1 \in \mathscr {B}([0;t])\), \(U \in \mathscr {B}({\mathbb {Z}}\times {\mathbb {R}}_+)\), are \(\mathscr {F}_t\)-measurable, and all random variables of the type \(P( (t , t + h] \times U) \), \( h \ge 0\), \(U \in \mathscr {B}({\mathbb {Z}}\times {\mathbb {R}}_+)\), are independent of \(\mathscr {F}_t\) (here we consider the Borel \(\sigma \)-algebra for \({\mathbb {Z}}\) to be the collection \(2 ^{{\mathbb {Z}}}\) of all subsets of \({\mathbb {Z}}\)).

We equip \(\mathcal {X}\) with the product set topology and \(\sigma \)-algebra generated by the open sets in this topology.

Definition 3.1

A (weak) solution of Eq. (13) is a triple \((( \eta _t )_{t\ge 0} , P )\), \((\Omega , \mathscr {F} , {\mathbb {P}}) \), \((\{ \mathscr {F} _t \} _ {t\ge 0}) \), where

  1. (i)

    \((\Omega , \mathscr {F} , {\mathbb {P}})\) is a probability space, and \(\{ \mathscr {F} _t \} _ {t\ge 0}\) is an increasing, right-continuous and complete filtration of sub-\(\sigma \)-algebras of \(\mathscr {F}\),

  2. (ii)

    P is a Poisson point process on \({\mathbb {R}}_+ \times {\mathbb {Z}}\times {\mathbb {R}}_+ \) with intensity \(ds \times \# \times du \),

  3. (iii)

    \( \eta _0 \) is a random \(\mathscr {F} _0\)-measurable element in \(\mathcal {X}\) satisfying (14),

  4. (iv)

    the processes P and \(\eta _0\) are independent, and P is compatible with \(\{ \mathscr {F} _t \} _ {t\ge 0} \),

  5. (v)

    \(( \eta _t )_{t\ge 0} \) is a càdlàg \(\mathcal {X}\)-valued process adapted to \(\{ \mathscr {F} _t \} _ {t\ge 0} \), \(\eta _t \big | _{t=0} = \eta _0\),

  6. (vi)

    all integrals in (13) are well-defined,

    $$\begin{aligned} {\mathbb {E}}\int \limits _0 ^t \mathrm{d}s \sum \limits _{i \in {\mathbb {Z}}} b(i, \eta _{s-}) < \infty , \ \ \ t > 0, \end{aligned}$$
  7. (vii)

equality (13) holds a.s. for all \(t\in [0,\infty )\) and all \(k \in {\mathbb {Z}}\).

Let

$$\begin{aligned} \mathscr {S} ^{0} _t = \sigma \bigl \{&\eta _0 , P([0,q] \times \{k\} \times C ) , \\&q \in [0,t], k \in {\mathbb {Z}}, C \in \mathscr {B} ({\mathbb {R}}_+) \bigr \}, \nonumber \end{aligned}$$
(15)

and let \(\mathscr {S} _t\) be the completion of \(\mathscr {S} ^{0} _t\) under \({\mathbb {P}}\). Note that \(\{ \mathscr {S} _t \}_{t\ge 0} \) is a right-continuous filtration.

Definition 3.2

A solution of (13) is called strong if \(( \eta _t )_{t\ge 0} \) is adapted to \((\mathscr {S} _t, t\ge 0)\).

Definition 3.3

We say that pathwise uniqueness holds for (13) if for any two (weak) solutions \((( \eta _t )_{t\ge 0} , P )\), \((\Omega , \mathscr {F} , {\mathbb {P}}) \), \((\{ \mathscr {F} _t \} _ {t\ge 0}) \) and \((( \eta _t ^{\prime } )_{t\ge 0} , P )\), \((\Omega , \mathscr {F} , {\mathbb {P}}) \), \((\{ \mathscr {F} _t \} _ {t\ge 0}) \) with \(\eta _0 = \eta _0 ^{\prime }\) we have

$$\begin{aligned} {\mathbb {P}}\left\{ \eta _t = \eta _t ^{\prime } \text { for all } t\ge 0\right\} = 1. \end{aligned}$$
(16)

Definition 3.4

We say that joint uniqueness in law holds for Eq. (13) with an initial distribution \(\nu \) if any two (weak) solutions \(((\eta _t) , P)\) and \(((\eta _t ^{ \prime }) , P ^{\prime } )\) of (13), \(Law(\eta _0)= Law( \eta _0 ^{\prime })=\nu \), have the same joint distribution:

$$\begin{aligned} Law ((\eta _t) , P) = Law ((\eta _t ^{\prime } ), P ^{\prime } ). \end{aligned}$$

Theorem 3.5

Pathwise uniqueness, strong existence and joint uniqueness in law hold for Eq. (13). The unique solution is a Markov process with respect to the filtration \((\mathscr {S} _t, t\ge 0)\).

The proof follows exactly the proof of Theorem 5.1 in [7] and is therefore omitted. We also note here that the unique solution of (13) satisfies a.s. \(\eta _t(x) \le N\) for \(x \in {\mathbb {Z}}\) and \(t \ge 0\) by (4).

4 Proofs

Let \({\mathbb {Z}}_+ = {\mathbb {N}}\cup \{0\}\), \({\mathbb {Z}}_- = -{\mathbb {Z}}_+\), and let \(\beta _t: {\mathbb {Z}}_- \rightarrow \{0,1,\ldots ,N\}\) be \(\eta _t\) seen from its tip, defined by

$$\begin{aligned} \beta _t (-n) = \eta _t (\text {tip}(\eta _t) - n), \ \ \ n = 0,1,2,\ldots \end{aligned}$$

Let \(h_t\) be the position of the first block of R sites occupied by N particles, seen from the tip,

$$\begin{aligned}&h_t := \max \{m \in {\mathbb {Z}}_-: \beta _t(m-1) = \beta _t(m-2) \\&\quad = \cdots = \beta _t(m-R) = N \} \vee \min \{ m \in {\mathbb {Z}}_-: \beta _t (m) >0 \}. \end{aligned}$$

We adopt here the convention \(\max \varnothing = -\infty \), so that if there is no block of R consecutive sites occupied by N particles, \(h_t\) equals the occupied site of \(\beta _t\) furthest from the origin. Finally, define \(\alpha _t: {\mathbb {Z}}_- \rightarrow \{0,1,\ldots ,N\} \) by

$$\begin{aligned} \alpha _t (m)= \beta _t (m) \mathbb {1} \{m \ge h_t\}. \end{aligned}$$

Thus, \(\alpha _t\) can be interpreted as the part of \(\eta _t\) seen from its tip until the first block of R sites occupied by N particles. The process \((\alpha _t, t \ge 0)\) takes values in a countable space

$$\begin{aligned} \Upsilon := \left\{ \gamma \bigg | \gamma : {\mathbb {Z}}_- \rightarrow \{0,1,\ldots ,N\}, \sum \limits _{x \in {\mathbb {Z}}_-}\gamma (x) < \infty \right\} . \end{aligned}$$

Let us underline that \(\alpha _t\) is a function of \(\eta _t\); we denote by \(\mathcal {A}\) the respective mapping \(\mathcal {A} : \mathcal {X}\rightarrow \Upsilon \), so that \(\alpha _t = \mathcal {A}(\eta _t)\).
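The map \(\mathcal {A}\) is purely combinatorial and can be computed directly for any finite configuration. The Python sketch below encodes configurations as dictionaries mapping sites to counts (an illustrative choice of ours) and returns \(\alpha \) as a dictionary on the relevant part of \({\mathbb {Z}}_-\).

```python
def alpha_of(eta, N, R):
    """The map A: eta -> alpha of Sect. 4 for a finite configuration eta
    (dict site -> count with at least one occupied site): eta seen from
    its tip, truncated at the first block of R consecutive sites
    carrying N particles."""
    t = max(x for x, n in eta.items() if n > 0)      # tip(eta)
    left = min(x for x, n in eta.items() if n > 0)   # leftmost occupied site
    # beta: eta seen from the tip, on {left - t, ..., 0}
    beta = {m: eta.get(t + m, 0) for m in range(left - t, 1)}
    # positions m whose R sites m-1, ..., m-R are all fully occupied
    blocks = [m for m in beta
              if all(beta.get(m - j, 0) == N for j in range(1, R + 1))]
    # h_t: first such block, or the furthest occupied site if none exists
    h = max(blocks) if blocks else left - t
    return {m: (beta[m] if m >= h else 0) for m in beta}
```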

Lemma 4.1

The process \((\alpha _t, t \ge 0)\) is a continuous-time positive recurrent Markov process with a countable state space. Furthermore, \((\alpha _t, t \ge 0)\) is strongly ergodic.

Proof

We start from a key observation: for \(t\ge 0\), conditionally on the event

$$\begin{aligned} \eta _t(m) = \eta _t(m+1) = \cdots = \eta _t(m+R) = N \end{aligned}$$

for some \(m \in {\mathbb {Z}}\), the families \(\{\eta _s(y), s\ge t, y < m \}\) and \(\{\eta _s (y), s\ge t, y > m +R \}\) are independent. Consequently, \((\alpha _t, t \ge 0)\) is an irreducible continuous-time Markov chain. The definitions and properties of continuous-time Markov chains used here can be found, e.g., in [13, Section 4.4]. Translation invariance Condition 2.1 ensures that \((\alpha _t, t \ge 0)\) is time-homogeneous. Define \(\mathbf {0} _{\Upsilon } \in \Upsilon \) by \(\mathbf {0} _{\Upsilon } (m) = 0 \), \(m = 0,-1,-2,\ldots \) Now, let us note that

$$\begin{aligned} \inf \limits _{\gamma \in \Upsilon } {\mathbb {P}}\left[ \alpha _{t+1} = \mathbf {0} _{\Upsilon } | \alpha _t = \gamma \right] = \inf \limits _{\gamma \in \Upsilon } {\mathbb {P}}\left[ \alpha _{1} = \mathbf {0} _{\Upsilon } | \alpha _0 = \gamma \right] >0. \end{aligned}$$
(17)

Indeed, for \(\eta _0\) such that \(\mathcal {A}(\eta _0) = \alpha _0 = \gamma \),

$$\begin{aligned} {\mathbb {P}}\left\{ \alpha _{1} = \mathbf {0}_{\Upsilon }\right\}\ge & {} {\mathbb {P}}\left\{ \text {tip} (\eta _1) = \text {tip} (\eta _0)\right\} \\&\times {\mathbb {P}}\left\{ \eta _1(\text {tip} (\eta _1)) = \eta _1(\text {tip} (\eta _1)-1) = \cdots = \eta _1(\text {tip} (\eta _1)-R+1) = N\right\} >0, \end{aligned}$$

and the last expression is separated from 0 uniformly in \(\gamma \). It follows from (17) that the state \(\mathbf {0} _{\Upsilon }\) is positive recurrent. Since \((\alpha _t, t \ge 0)\) is irreducible, it follows that it is also positive recurrent.

The strong ergodicity follows from (17). \(\square \)

Denote by \(\pi \) the ergodic measure for \((\alpha _t, t \ge 0)\). For \(\gamma \in \Upsilon \), let \(\eta ^\gamma \in \mathcal {X}\) be

$$\begin{aligned} \eta ^\gamma (m) = {\left\{ \begin{array}{ll} \gamma (m), &{} \ \ \ m \le 0 \\ 0, &{} \ \ \ m >0 \end{array}\right. } \end{aligned}$$
(18)

Note that \(\mathcal {A}(\eta ^\gamma ) = \gamma \). Define \(f:\Upsilon \rightarrow {\mathbb {R}}_+\) by

$$\begin{aligned} f(\gamma ) = \sum \limits _{m = 1} ^R m \, b(m, \eta ^\gamma ). \end{aligned}$$

Note that

$$\begin{aligned} \sup \limits _{\gamma \in \Upsilon } f(\gamma ) \le \frac{R(R + 1)}{2} \overline{\mathbf { b}} . \end{aligned}$$
(19)

Since f is bounded, by the ergodic theorem for continuous-time Markov chains, a.s.

$$\begin{aligned} \frac{1}{t} \int \limits _{0} ^t f(\alpha _s)\mathrm{d}s \rightarrow \langle f \rangle _{\pi }, \end{aligned}$$
(20)

where \(\langle f \rangle _{\pi } := \sum \limits _{\gamma \in \Upsilon } \pi (\gamma ) f(\gamma )\) (here for convenience \(\pi (\{\gamma \})\) is denoted by \(\pi (\gamma )\), \(\gamma \in \Upsilon \)).
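The functional f aggregates the jump rates of the tip weighted by jump size, so it can be evaluated directly once a rate function is given: extend \(\gamma \) by zeros to the right of the origin and sum \(m\, b(m, \eta ^\gamma )\) over the possible jump sizes. A Python sketch, with \(\gamma \) encoded as a dictionary on \({\mathbb {Z}}_-\) (an illustrative choice):

```python
def drift_f(gamma, b, R):
    """f(gamma): size-weighted total birth rate to the right of the tip
    when the configuration seen from the tip is gamma (dict m <= 0 ->
    count).  eta^gamma, as in (18), carries gamma on the non-positive
    sites and zeros on the positive half-line."""
    eta_gamma = {m: n for m, n in gamma.items()}     # sites m <= 0 only
    # jump of size m (m = 1, ..., R) occurs at rate b(m, eta^gamma)
    return sum(m * b(m, eta_gamma) for m in range(1, R + 1))
```

Averaging `drift_f` along a long trajectory of \((\alpha _t)\) would give a Monte Carlo estimate of \(\langle f \rangle _{\pi }\), i.e., of the speed \(\lambda _r\).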

Recall that \(X_t = \text {tip}(\eta _t)\). The process \((X_t, t \ge 0)\) is an increasing pure jump type Markov process, and the rate of jump of size \(m \in \{1,2,\ldots ,R\}\) at time t is \(b(X_t + m, \eta _t)\). Indeed, note that

$$\begin{aligned} \begin{aligned} X_t&= \sum \limits _{k = 1} ^R k \int \limits _{(0,t] \times {\mathbb {Z}}\times [0, \infty ) } \mathbb {1} \{ i = k +X_{s-} \} \mathbb {1} _{ [0,b(k + X _{s-}, \eta _{s-} )] } (u) P (\mathrm{d}s,\mathrm{d}i,\mathrm{d}u) \\&= \sum \limits _{k = 1} ^R k \int \limits _{(0,t] \times [0, \infty ) } \mathbb {1} _{ [0,b(k + X _{s-}, \eta _{s-} )] } (u) P^{(k)} (\mathrm{d}s,\mathrm{d}u), \end{aligned} \end{aligned}$$
(21)

where the integrator is defined by

$$\begin{aligned} \begin{aligned} P^{(k)} (A \times B) = P\big (\{(t, i, u) \in {\mathbb {R}}_+ \times {\mathbb {Z}}\times {\mathbb {R}}_+ \big | X_{t-} + k = i, (t,u) \in A \times B \} \big ), \\ \ \ A, B \in \mathscr {B}( {\mathbb {R}}_+), \ \ k \in \{1,\ldots ,R\}. \end{aligned} \end{aligned}$$

In other words, \(P^{(k)}(A \times B) \) is \( P(A \times \{i\} \times B)\) if \( A \subset \{t: X_{t-} + k = i\}\) and \(B \in \mathscr {B}({\mathbb {R}}_+)\). Note that \(P^{(k)}\) is a Poisson point process on \({\mathbb {R}}_+ \times {\mathbb {R}}_+\) with mean measure \(\mathrm{d}s \times \mathrm{d}u\) (this follows, for example, from the strong Markov property of a Poisson point process, as formulated in the appendix of [7], applied to the jump times of \((X_t, t\ge 0)\)). The indicators in (21) are

$$\begin{aligned} \begin{aligned}&\mathbb {1} \{ i = k +X_{s-} \} = {\left\{ \begin{array}{ll} 1, &{} \text { if } i = k +X_{s-}, \\ 0, &{} \text { otherwise},\\ \end{array}\right. }\\&\mathbb {1} _{ [0,b(k + X _{s-}, \eta _{s-} )] } (u) = {\left\{ \begin{array}{ll} 1, &{} \text { if } u \in [0,b(k + X _{s-}, \eta _{s-} )], \\ 0, &{} \text { otherwise}.\\ \end{array}\right. } \end{aligned} \end{aligned}$$
(22)

Therefore, by, e.g., (3.8) in Section 3, Chapter 2 of [25], the process

$$\begin{aligned} M _t : = X_t - \int \limits _0 ^t \sum \limits _{k=1} ^R k \, b(X_s + k, \eta _s) \mathrm{d}s = X_t - \int \limits _0 ^t f(\alpha _s)\mathrm{d}s \end{aligned}$$
(23)

is a martingale with respect to the filtration \((\mathscr {S} _t, t\ge 0)\) defined below (15).

We now formulate a strong law of large numbers for martingales. The following theorem is an abridged version of [24, Theorem 2.18].

Theorem 4.2

Let \(\{S_n = \sum \nolimits _{i = 1} ^n x _i , n \in {\mathbb {N}}\}\) be a martingale with respect to a filtration \(\{\mathscr {F}_n\}\) and let \(\{U_n\}_{n \in {\mathbb {N}}}\) be a non-decreasing sequence of positive real numbers with \(\lim \nolimits _{n \rightarrow \infty } U_n = \infty \). Then for \(p \in [1,2]\) we have

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } U _n ^{-1} S _n = 0 \end{aligned}$$

a.s. on the set \(\left\{ \sum \nolimits _{i = 1} ^\infty U _i ^{-p} {\mathbb {E}}\big \{|x_i|^p \big | \mathscr {F}_{i-1}\big \} < \infty \right\} \).

Lemma 4.3

The strong law of large numbers applies to \((M_t, t \ge 0)\):

$$\begin{aligned} {\mathbb {P}}\left\{ \frac{M_t}{t} \rightarrow 0\right\} = 1. \end{aligned}$$
(24)

Proof

Let \(\Delta M _n = M_{n+1} - M_n\). Then \(\Delta M_n\) is stochastically dominated by \(\mathrm {V}_1 + 2 \mathrm {V}_2 + \cdots + R\mathrm {V}_R + \frac{R(R + 1)}{2} \overline{\mathbf { b}} \), where \(\mathrm {V}_1,\ldots , \mathrm {V}_R\) are independent Poisson random variables with mean \(\overline{\mathbf { b}}\), independent of \(\mathscr {S}_n\). Applying the strong law of large numbers for martingales from Theorem 4.2 with \(p = \frac{3}{2}\) and \(U_n = n\), we get a.s.

$$\begin{aligned} \frac{1}{n} \sum \limits _{i = 0} ^{n-1} \Delta M_i \rightarrow 0. \end{aligned}$$
(25)

Since for every \(\varepsilon >0\),

$$\begin{aligned} {\mathbb {P}}\left\{ \sup \limits _{s \in [0,1] }|M_{n+s} - M_n| \ge \varepsilon n \text { for infinitely many } n\right\} = 0, \end{aligned}$$

(24) follows. \(\square \)

Proof of Theorem 2.4

Let \(\lambda _r = \langle f \rangle _{\pi }\). From (20) and Lemma 4.3 we get a.s.

$$\begin{aligned} \frac{1}{t} X_t - \langle f \rangle _{\pi } \rightarrow 0, \end{aligned}$$
(26)

or

$$\begin{aligned} \frac{1}{t} \text {tip}(\eta _t) -\lambda _r \rightarrow 0. \end{aligned}$$
(27)

In the same way (the argument is symmetric) we can show the analogue of (27) for the position \(Y_t\) of the leftmost occupied site of \(\eta _t\): there exists \(\lambda _l >0\) such that a.s.

$$\begin{aligned} \frac{ |Y_t|}{t} - \lambda _l \rightarrow 0. \end{aligned}$$
(28)

Hence the second inclusion in (7) holds. (As an aside we point out here that \( \lambda _l\) can be expressed as an average value in the same way as \(\lambda _r = \langle f \rangle _{\pi }\). To do so, we would need to define the opposite direction counterparts to \((\beta _t, t \ge 0)\), \((\alpha _t, t \ge 0)\), f, and other related objects.)

To show the first inclusion in (7), we fix \(\varepsilon >0\). By (27) and (28), a.s. for large t

$$\begin{aligned} \left[ -\lambda _l t \left( 1 + \frac{\varepsilon }{4} \right) ^{-1}, \lambda _r t \left( 1 + \frac{\varepsilon }{4}\right) ^{-1}\right] \subset [Y_t, X_t]. \end{aligned}$$
(29)

For \(x \in {\mathbb {Z}}\), let

$$\begin{aligned} \begin{aligned} \sigma (x)&:= \inf \{t\ge 0: \mathrm{dist}(x, \text {occ}(\eta _t)) \le R \}, \\ \tau (x)&:= \inf \{t\ge 0: \eta _t (x) \ge 1 \} = \inf \{t\ge 0: x \in \text {occ}(\eta _t)\}. \end{aligned} \end{aligned}$$
(30)

Clearly, for any \(x \in {\mathbb {Z}}\), \(0 \le \sigma (x) \le \tau (x)\). Because of the finite range assumption, by (29) a.s. for \(x \in {\mathbb {N}}\) with large |x|

$$\begin{aligned} \sigma (x) \le \frac{(1 + \frac{\varepsilon }{4}) |x|}{\lambda _r}. \end{aligned}$$
(31)

By (6), the random variable \(\tau (x) - \sigma (x)\) is stochastically dominated by an exponential random variable with mean \(\underline{\mathbf {b} } ^{-1}\). In particular

$$\begin{aligned} {\mathbb {P}}\left\{ \tau (x) - \sigma (x) \ge \frac{\varepsilon |x|}{4 \lambda _r}\right\} \le \exp \{ - \frac{\varepsilon |x|\underline{\mathbf {b} }}{4\lambda _r} \}. \end{aligned}$$

Since \(\sum \nolimits _{x \in {\mathbb {N}}} \exp \{ - \frac{\varepsilon |x|\underline{\mathbf {b} }}{4\lambda _r} \} < \infty \), a.s. for all but finitely many \(x \in {\mathbb {N}}\) we have

$$\begin{aligned} \tau (x) - \sigma (x) \le \frac{\varepsilon |x|}{4\lambda _r}. \end{aligned}$$

Hence from (31) a.s. for all but finitely many \(x \in {\mathbb {N}}\),

$$\begin{aligned} \tau (x ) \le \frac{(1 + \frac{\varepsilon }{2}) |x|}{\lambda _r}. \end{aligned}$$
(32)

From (32) it follows that a.s. if t is large and \(x \in {\mathbb {N}}\), \(|x| \le \frac{\lambda _r t}{1 + \frac{\varepsilon }{2}} \), then \(\tau (x) \le t\). Note that \(\text {occ}(\eta _t ) = \{ x\in {\mathbb {Z}}: \tau (x) \le t\}\). Thus for large t,

$$\begin{aligned} \left[ 0, \frac{\lambda _r t}{1 + \frac{\varepsilon }{2}}\right] \cap {\mathbb {Z}}\subset \text {occ}(\eta _t). \end{aligned}$$
(33)

Repeating this argument verbatim for \(-x \in {\mathbb {N}}\) and \(\lambda _l\) in place of \(x \in {\mathbb {N}}\) and \(\lambda _r\), respectively, we find that

$$\begin{aligned} \left[ - \frac{\lambda _l t}{1 + \frac{\varepsilon }{2}}, 0\right] \cap {\mathbb {Z}}\subset \text {occ}(\eta _t). \end{aligned}$$
(34)

Since \( 1 - \varepsilon \le \frac{1}{1 + \frac{\varepsilon }{2}}\) for \(\varepsilon >0\), the first inclusion in (7) follows from (33) and (34). \(\square \)

Lemma 4.4

For some \(\varrho ^2 \in (0, +\infty )\) a.s.

$$\begin{aligned} \frac{1}{t} [M]_t \rightarrow \varrho ^2, \ \ \ t\rightarrow \infty . \end{aligned}$$
(35)

Proof

Let \(\theta _0 = 0\) and denote by \(\theta _n\), \(n \in {\mathbb {N}}\), the time of the n-th hitting of \(\mathbf {0} _\Upsilon \) by the Markov chain \((\alpha _t, t \ge 0)\). For \(n \in {\mathbb {N}}\), define a random piecewise constant function \(Z_n \) by

$$\begin{aligned} Z_n (t)= \alpha _{ (t + \theta _n) \wedge \theta _{n+1} }, \ \ \ t\ge 0. \end{aligned}$$
(36)

The sequence \(\{Z_n\}_{n \in {\mathbb {N}}}\) can be seen as a sequence of independent identically distributed random elements in the Skorokhod space \(\mathcal {D}: = D([0,\infty ), \Upsilon )\) endowed with the usual Skorokhod topology.

Let \(G: \mathcal {D} \rightarrow {\mathbb {Z}}_+\) be the functional such that \(G(Z_n) = [M]_{\theta _{n+1}} - [M]_{\theta _{n}}\) is the change of \(( [M] _t)\) between \(\theta _n\) and \(\theta _{n+1}\). The function G can be written down explicitly, but this is not necessary for our purposes. Now, since the number of jumps of \(Z_1\) has exponential tails, for any \(m \in {\mathbb {N}}\)

$$\begin{aligned} {\mathbb {E}}\left\{ G^m(Z_1)\right\} < \infty . \end{aligned}$$
(37)

By the strong law of large numbers, a.s.

$$\begin{aligned} \frac{1}{n} \sum \limits _{i = 1} ^n G(Z_i) \rightarrow {\mathbb {E}}\left\{ G(Z_1)\right\} >0. \end{aligned}$$
(38)

Since \(\theta _2 - \theta _1\), \(\theta _3 - \theta _2\), ..., are i.i.d. random variables, a.s.

$$\begin{aligned} \frac{\theta _n}{n} \rightarrow {\mathbb {E}}\left\{ \theta _2 - \theta _1\right\} >0. \end{aligned}$$
(39)

For \(t>0\), let \(n = n(t) \in {\mathbb {N}}\) be such that \(t \in [\theta _n, \theta _{n+1})\). Then by (38) and (39) a.s.

$$\begin{aligned} \lim \limits _{t \rightarrow \infty } \frac{[M]_t}{t}= & {} \lim \limits _{t \rightarrow \infty }\frac{[M]_{\theta _1} + \sum \nolimits _{i = 1} ^{n(t)} G(Z_i)}{t} = \lim \limits _{t \rightarrow \infty }\frac{\sum \nolimits _{i = 1} ^{n(t)} G(Z_i)}{n(t)} \frac{n(t)}{t}\nonumber \\= & {} \frac{{\mathbb {E}}\left\{ G(Z_1)\right\} }{{\mathbb {E}}\left\{ \theta _2 - \theta _1\right\} } >0. \end{aligned}$$
(40)

Here the first equality in (40) holds up to an error of at most \(G(Z_{n(t)})\), which is \(o(t)\) a.s. because \({\mathbb {E}}\left\{ G(Z_1)\right\} < \infty \); hence the limit is unaffected. \(\square \)
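The renewal-reward limit in (40) can be illustrated with a short simulation. The cycle lengths and rewards below are toy choices (exponential cycles, Poisson rewards), not the quantities \(\theta _n\), \(G(Z_n)\) of the proof; the point is only that the reward accumulated per unit time converges to the ratio of expectations, as in (40).

```python
import random

random.seed(7)

# Renewal-reward sketch of the argument behind (40): i.i.d. cycles of
# length tau_i ~ Exp(1) each carry a reward G_i, here the number of
# points of an independent rate-2 Poisson process during the cycle.
# The renewal-reward limit predicts
#     (G_1 + ... + G_n) / theta_n  ->  E[G_1] / E[tau_1] = 2 / 1 = 2.
n = 100_000
total_time, total_reward = 0.0, 0
for _ in range(n):
    tau = random.expovariate(1.0)          # cycle length, mean 1
    g, s = 0, random.expovariate(2.0)      # rate-2 reward points in [0, tau]
    while s <= tau:
        g += 1
        s += random.expovariate(2.0)
    total_time += tau
    total_reward += g

rate = total_reward / total_time
print(rate)   # close to E[G_1] / E[tau_1] = 2
```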

Before proceeding with the final part of the paper, we formulate a central limit theorem for martingales used in the proof of Theorem 2.7. The statement below is a corollary of [23, Theorem 5.1].

Proposition 4.5

Assume that (35) holds and that for some \(K >0\) a.s.

$$\begin{aligned} \sup \limits _{t \ge 0} |M_t - M_{t-}| \le K. \end{aligned}$$

Then

$$\begin{aligned} \frac{1}{\sqrt{t}} M _t \overset{d}{ \rightarrow } \mathcal {N}(0, \varrho ^2 ), \ \ \ t\rightarrow \infty . \end{aligned}$$
(41)

Proposition 4.5 follows from [23, Theorem 5.1, (b)] by taking \(M_n\) in the notation of [23] to be \(\frac{M_{nt}}{\sqrt{n}}\) in our notation.
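A minimal numerical sanity check of Proposition 4.5 can be carried out on the simplest martingale with bounded jumps, the compensated Poisson process. This toy example is ours, not a process from the paper; it merely shows the normalization \(M_t/\sqrt{t}\) producing a centered Gaussian with variance \(\varrho ^2 = \lim _t [M]_t / t\).

```python
import random
import statistics

random.seed(1)

# Illustration of Proposition 4.5 on a toy martingale with bounded jumps:
# the compensated Poisson process M_t = N_t - lam * t, where N is a
# rate-lam Poisson process.  Its jumps are bounded by 1 and [M]_t = N_t,
# so [M]_t / t -> lam a.s., i.e. rho^2 = lam; the proposition then
# predicts M_T / sqrt(T) ~ N(0, lam) for large T.
LAM, T, REPS = 1.0, 200.0, 3000

def poisson_count(rate, horizon):
    """Number of points of a rate-`rate` Poisson process in [0, horizon]."""
    count, s = 0, random.expovariate(rate)
    while s <= horizon:
        count += 1
        s += random.expovariate(rate)
    return count

samples = [(poisson_count(LAM, T) - LAM * T) / T ** 0.5 for _ in range(REPS)]
m, v = statistics.mean(samples), statistics.pvariance(samples)
print(m, v)   # mean near 0, variance near rho^2 = LAM = 1
```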

Proof of Theorem 2.7

By Lemma 4.1, the continuous-time Markov chain \((\alpha _t, t \ge 0)\) is strongly ergodic. Since the function f is bounded, the central limit theorem holds for \((\alpha _t, t \ge 0)\) by [29, Theorem 3.1]. That is, we have the convergence in distribution

$$\begin{aligned} \frac{1}{\sqrt{t}} \left[ \int \limits _0 ^t f(\alpha _s )\mathrm{d}s - t \langle f \rangle _{\pi } \right] \overset{d}{ \rightarrow } \mathcal {N}(0, \sigma ^2 _{f}), \ \ \ t\rightarrow \infty , \end{aligned}$$
(42)

where \(\sigma ^2 _{f} \ge 0\) is a constant depending on f, and \(\mathcal {N}(c, \sigma ^2 )\) is the normal distribution with mean \(c\in {\mathbb {R}}\) and variance \(\sigma ^2 \ge 0\). Recall that \(\langle f \rangle _{\pi } = \lambda _r\). By (42), for some \(C_1, \vartheta _1 >0\)

$$\begin{aligned} \limsup \limits _{t \rightarrow \infty } {\mathbb {P}}\left\{ \left| \int \limits _0 ^t f(\alpha _s )\mathrm{d}s - \lambda _r t \right| \ge q \sqrt{t} \right\} \le C_1 e^{- \vartheta _1 q ^2 }, \ \ \ q > 0 . \end{aligned}$$
(43)

Recall that \(M_t\) was defined in (23). By Lemma 4.4 for some \(\varrho ^2 \in (0, +\infty )\) a.s.

$$\begin{aligned} \frac{1}{t} [M]_t \rightarrow \varrho ^2, \ \ \ t\rightarrow \infty . \end{aligned}$$
(44)

By the martingale central limit theorem (Proposition 4.5)

$$\begin{aligned} \frac{1}{\sqrt{t}} M _t \overset{d}{ \rightarrow } \mathcal {N}(0, \varrho ^2 ), \ \ \ t\rightarrow \infty . \end{aligned}$$
(45)

Hence for some \(C_2, \vartheta _2 >0\)

$$\begin{aligned} \limsup \limits _{t \rightarrow \infty } {\mathbb {P}}\left\{ \left| X_t - \int \limits _0 ^t f(\alpha _s )\mathrm{d}s \right| \ge q \sqrt{t} \right\} \le C_2 e^{- \vartheta _2 q ^2 }, \ \ \ q > 0 . \end{aligned}$$
(46)

By (43) and (46),

$$\begin{aligned} \limsup \limits _{t \rightarrow \infty } {\mathbb {P}}\left\{ \left| X_t - \lambda _r t \right| \ge q \sqrt{t} \right\} \le C_3 e^{- \vartheta _3 q ^2 }, \ \ \ q > 0, \end{aligned}$$
(47)

for some \(C_3, \vartheta _3 >0\). \(\square \)

Remark 4.6

Since \(\varrho ^2 > 0\) in the proof of Theorem 2.7, the constant \(\vartheta \) in Theorem 2.7 cannot be taken arbitrarily large; that is, the limiting fluctuations are genuinely of order \(\sqrt{t}\).

Proof of Theorem 2.8

By (21), (23), and [25, (3.9), p. 62, Section 3, Chapter 2], the predictable quadratic variation

$$\begin{aligned} \langle M \rangle _t = \langle X \rangle _t = \int \limits _{0} ^t \sum \limits _{k=1} ^R k ^2 b(k + X_{s-} , \eta _{s-} ) \mathrm{d}s = \int \limits _{0} ^t g(\alpha _{s-})\mathrm{d}s, \end{aligned}$$
(48)

where \(g: \Upsilon \rightarrow {\mathbb {R}}_+ \) is such that \(g(\alpha ) = \sum _{k=1} ^R k ^2 b(k + \text {tip}(\eta ) , \eta )\) whenever \(\mathcal {A}(\eta ) = \alpha \). Recall that the mapping \(\mathcal {A}\) was defined on Page 7; \(g(\alpha )\) is well defined because it does not depend on the choice of \(\eta \in \mathcal {A}^{-1}(\alpha )\).

By [12, Theorem 1.1] (see also [34, Theorem 1 and Remark 3a])

$$\begin{aligned} {\mathbb {P}}\left\{ \left| \frac{1}{t} \int \limits _{0} ^t g(\alpha _{s-}) \mathrm{d}s - \langle g \rangle _{\pi } \right| \ge q \right\} \le C_1 e ^{-\delta _q t}, \ \ \ q >0, t\ge 0, \end{aligned}$$
(49)

where \(C_1 >0\) is a constant and \( \delta _q >0\) depends on q but not on t; moreover, \(\delta _q\) grows at least linearly in q. Note that the jumps of \((M_t, t \ge 0)\) do not exceed R. By an exponential inequality for martingales with bounded jumps, [32, Lemma 2.1], for any \(a,b >0\)

$$\begin{aligned} {\mathbb {P}}\left\{ |M_t| \ge a, \langle M \rangle _t \le b \text { for some } t \ge 0 \right\} \le \exp \left\{ -\frac{a^2}{2(aR + b)} \right\} . \end{aligned}$$
(50)

Taking \(a = rt\) and \(b = \langle g \rangle _\pi (1 + \varepsilon )t \) for \(r>0\) and \( \varepsilon \in (0,1)\), we obtain for \(t \ge 0\)

$$\begin{aligned} {\mathbb {P}}\left\{ |M_t| \ge rt, \langle M \rangle _t \le \langle g \rangle _\pi (1 + \varepsilon )t \right\} \le \exp \left\{ -\frac{r^2}{2(rR + \langle g \rangle _\pi (1 + \varepsilon ) )} t \right\} . \end{aligned}$$
(51)

By (48) and (49),

$$\begin{aligned} {\mathbb {P}}\left\{ \langle M \rangle _t > \langle g \rangle _\pi (1 + \varepsilon )t \right\} \le {\mathbb {P}}\left\{ \left| \frac{1}{t} \langle M \rangle _t - \langle g \rangle _\pi \right| \ge \varepsilon \langle g \rangle _\pi \right\} \le C_1 e ^{-\delta _{\varepsilon \langle g \rangle _\pi } t}. \end{aligned}$$
(52)

By (51) and (52) we get

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\left\{ |M_t| \ge rt\right\} \le {\mathbb {P}}\left\{ |M_t| \ge rt, \langle M \rangle _t \le \langle g \rangle _\pi (1 + \varepsilon )t \right\} + {\mathbb {P}}\left\{ \langle M \rangle _t > \langle g \rangle _\pi (1 + \varepsilon )t \right\} \\&\quad \le \exp \left\{ -\frac{r^2}{2(rR + \langle g \rangle _\pi (1 + \varepsilon ) )} t \right\} + C_1 e ^{-\delta _{\varepsilon \langle g \rangle _\pi } t} \le C e^{-\delta _1 r t}, \ \ \ t \ge 0, \end{aligned} \end{aligned}$$
(53)

where \(C, \delta _1 >0\) do not depend on r or t.

Recalling the definition of M in (23), we rewrite (53) as

$$\begin{aligned} {\mathbb {P}}\left\{ \left| X_t - \int \limits _0 ^t f(\alpha _s )\mathrm{d}s \right| \ge r t \right\} \le C_2 e ^{-\delta _2 r t} , \ \ \ t \ge 0, \end{aligned}$$
(54)

where \(C_2, \delta _2 >0\). By [12, Theorem 1.1] (or [34, Theorem 1, Remark 3a]),

$$\begin{aligned} {\mathbb {P}}\left\{ \left| \frac{1}{t} \int \limits _{0} ^t f(\alpha _{s-}) \mathrm{d}s - \langle f \rangle _{\pi } \right| \ge r \right\} \le C_3 e ^{-\delta _3 r t}, \ \ \ t\ge 0, \end{aligned}$$
(55)

where the constant \(\delta _3\) does not depend on r. Combining (54) and (55) yields (11) and completes the proof. \(\square \)
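As an aside, the exponential inequality (50) used in this proof is easy to test numerically on a toy martingale with bounded jumps. The compensated Poisson example below is ours; it only checks consistency of the bound at a fixed horizon, which is a sub-event of the event in (50).

```python
import math
import random

random.seed(3)

# Sanity check of the exponential inequality (50) on the compensated
# Poisson martingale M_t = N_t - t (rate 1): the jumps are bounded by
# R = 1 and <M>_t = t deterministically.  At the fixed horizon T the
# event {|M_T| >= A} (with <M>_T = T <= B) is contained in the event in
# (50), so its empirical probability must stay below the bound
# exp(-A^2 / (2 (A R + B))).
T, A, R = 100.0, 30.0, 1.0
B = T                                   # <M>_T = T here
bound = math.exp(-A * A / (2.0 * (A * R + B)))

reps, hits = 2000, 0
for _ in range(reps):
    count, s = 0, random.expovariate(1.0)
    while s <= T:
        count += 1
        s += random.expovariate(1.0)
    if abs(count - T) >= A:
        hits += 1

print(hits / reps, bound)   # empirical probability below the bound
```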

Remark 4.7

We see from the proofs that the non-degeneracy condition can be weakened. In particular, (6) can be removed if we instead require \((\alpha _t, t \ge 0)\) to be strongly ergodic with the ergodic measure satisfying \(\langle f \rangle _{\pi } > 0\). Of course, \(\text {occ} (\eta )\) in Theorem 2.4 would need to be replaced with the set of sites surrounded by \(\text {occ} (\eta )\). Some changes in the proof would have to be made. In particular, if \(\pi (\mathbf {0} _{\Upsilon }) = 0\), the moments of time \(\theta _n\) in the proof of Lemma 4.4 would need to be redefined as the hitting moments of a state \(\gamma \in \Upsilon \) with \(\pi (\gamma ) > 0\).

Remark 4.8

It would be of interest to see whether the finite range condition can be weakened to include interactions decaying exponentially or polynomially fast with the distance. If the interaction range is infinite, there is no reason for \((\alpha _t, t \ge 0)\) to be a recurrent Markov chain, let alone a strongly ergodic one. It may be the case, however, that the process seen from the tip \((\beta _t, t \ge 0)\) possesses some kind of mixing property, which would enable the application of limit theorems.

5 Numerical Simulations and Monotonicity of the Speed

We start this section with a question: is the speed a monotone functional of the birth rate? Consider two birth processes \((\eta ^{(1)}_t, t\ge 0)\) and \((\eta ^{(2)} _t, t\ge 0)\) with birth rates \(b_1\) and \(b_2\), respectively, both satisfying the conditions of Theorem 2.4. Denote by \( \lambda _r ^{(j)}\) the speed at which \((\eta ^{(j)}_t, t\ge 0)\) spreads to the right in the sense of Theorem 2.4, \(j = 1,2\).

Fig. 1

The speed of the tip for various \(c_{\text {fec}} , c_{\text {est}}\). For each pair, the speed is computed as the average speed of the tip \(X_t\) between \(t_1 = 100\) and \(t_2 = 1000\), that is, as \(\frac{X_{t_2} - X_{t_1}}{t_2 - t_1}\). Early evolution is excluded to reduce bias

Fig. 2

The speed as a function of \( c_{\text {est}}\). The other parameter is fixed at \(c_{\text {fec}} = 1 \). Each estimate is computed as \(\frac{X_{t_2} - X_{t_1}}{t_2 - t_1}\) with \(t_1 = 100\) and \(t_2 = 10{,}000\)

Fig. 3

Ten trajectories of the tip. The parameters are \(c_{\text {fec}} =c_{\text {est}} = 0.5\)

Question 5.1

Assume that for all \(x \in {\mathbb {Z}}\) and \(\eta \in \mathcal {X}\)

$$\begin{aligned} b_1(x, \eta ) \le b_2(x, \eta ). \end{aligned}$$
(56)

Is it always true that \(\lambda _r ^{(1)} \le \lambda _r ^{(2)}\)?

The answer to Question 5.1 is positive if \(b_2\) is additionally assumed to be monotone in the second argument, that is, if

$$\begin{aligned} b_2(x, \eta ) \le b_2(x, \zeta ), \ \ \ x \in {\mathbb {Z}}\end{aligned}$$

whenever \(\eta \le \zeta \). Indeed, in this case the two birth processes \((\eta ^{(1)}_t, t\ge 0)\) and \((\eta ^{(2)} _t, t\ge 0)\) with rates \(b_1\) and \(b_2\) can be coupled in such a way that a.s.

$$\begin{aligned} \eta ^{(1)}_t \le \eta ^{(2)}_t, \ \ \ t \ge 0 \end{aligned}$$

(see Lemma 5.1 in [7]). One might think that the answer is positive in the general case, too.
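The coupling behind this argument can be visualized with a small discrete-time (thinning) simulation: both processes read the same uniform random number at each site, so the order is preserved step by step. The birth rate used below is an illustrative monotone toy rate, not the rate b of the paper.

```python
import random

random.seed(11)

# Discrete-time thinning sketch of the coupling: at each step both
# processes read the same uniform at each site, so if b1 <= b2 pointwise
# and b2 is monotone in the configuration, the order eta1 <= eta2 is
# preserved.  The rate below is an illustrative monotone toy example:
# a site x gains one particle at rate c * (# particles within distance 1),
# capped at N per site.  (The cap breaks literal monotonicity at a full
# site, but the order is still preserved: a site of the higher process
# already at N dominates regardless.)
N, SITES, DT, STEPS = 3, 60, 0.02, 1500

def rate(c, x, eta):
    if eta[x] >= N:
        return 0.0
    return c * sum(eta[y] for y in range(max(0, x - 1), min(SITES, x + 2)))

eta1 = [0] * SITES; eta1[SITES // 2] = 1
eta2 = list(eta1)
C1, C2 = 0.5, 1.0                       # C1 <= C2, hence b1 <= b2 pointwise

for _ in range(STEPS):
    us = [random.random() for _ in range(SITES)]
    new1, new2 = list(eta1), list(eta2)
    for x in range(SITES):
        if us[x] < rate(C1, x, eta1) * DT:
            new1[x] += 1
        if us[x] < rate(C2, x, eta2) * DT:
            new2[x] += 1
    eta1, eta2 = new1, new2

print(sum(eta1), sum(eta2))   # the lower-rate process stays below
```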

It turns out that the birth rate with fecundity and establishment regulation discussed on Page 4 links up naturally with Question 5.1. Let the birth rate b be as in (12) with \(R = N = 3\), \(a(x) = \mathbb {1} \{ |x| \le 3 \}\),

$$\begin{aligned}&\psi (x) = c_{\text {fec}} \left[ \mathbb {1} \{ |x| = 0 \} + \frac{1}{2} \mathbb {1} \{ |x| = 1 \} \right] , \\&\varphi (x) = c_{\text {est}} \left[ \mathbb {1} \{ |x| = 0 \} + \frac{1}{2} \mathbb {1} \{ |x| = 1 \} \right] . \end{aligned}$$

Note that b decreases as either of the parameters \(c_{\text {fec}}\) and \(c_{\text {est}}\) increases. Figure 1 shows the speed of the model with birth rate (12) for a thousand pairs of parameters \((c_{\text {fec}} , c_{\text {est}})\) chosen at random from \([0,1]^2\).

Interestingly, we observe that for values of \(c_{\text {fec}} \) close to one, the speed increases as a function of \(c_{\text {est}} \). This phenomenon is more apparent in Fig. 2, where the speed is plotted as a function of \(c_{\text {est}} \) with \(c_{\text {fec}} = 1 \). This example demonstrates that the answer to Question 5.1 is negative without additional assumptions on \(b_1\) and \(b_2\).

In Fig. 3 ten different trajectories with \(c_{\text {fec}} =c_{\text {est}} = 0.5\) are shown. Numerical analysis was conducted in R [30], and figures were produced using the package ggplot2 [33].
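Readers who wish to reproduce an experiment in the spirit of Figs. 1 and 3 may start from the sketch below (in Python; the paper's own simulations were done in R). The birth rate here is a simplified stand-in for (12), with a finite interaction range and exponential "fecundity"/"establishment" penalties, but its exact functional form is our own illustrative choice, not the rate from the paper.

```python
import math
import random

random.seed(2024)

# Illustrative growth-model simulation in the spirit of Sect. 5.  The
# birth rate is a simplified stand-in for (12): a site x within distance
# R of an occupied site gains a particle at a rate proportional to the
# number of particles within distance R, damped by exponential
# "fecundity" and "establishment" penalties.  The speed is then
# estimated as in Fig. 1, as (X_{t2} - X_{t1}) / (t2 - t1), discarding
# the early evolution.
N, R = 3, 3
C_FEC, C_EST = 0.5, 0.5
DT, T1, T2 = 0.05, 10.0, 40.0

eta = {0: 1}                            # occupied sites -> particle counts

def birth_rate(cfg, x):
    """Illustrative rate of adding one particle at x in configuration cfg."""
    if cfg.get(x, 0) >= N:              # at most N particles per site
        return 0.0
    support = sum(cfg.get(x + k, 0) for k in range(-R, R + 1) if k != 0)
    if support == 0:                    # finite range of interaction
        return 0.0
    crowd = cfg.get(x, 0) + 0.5 * (cfg.get(x - 1, 0) + cfg.get(x + 1, 0))
    return support * math.exp(-C_FEC * support - C_EST * crowd)

t, x1 = 0.0, None
while t < T2:
    snap = dict(eta)                    # rates read from a frozen snapshot
    for x in range(min(snap) - R, max(snap) + R + 1):
        if random.random() < birth_rate(snap, x) * DT:
            eta[x] = eta.get(x, 0) + 1
    t += DT
    if x1 is None and t >= T1:
        x1 = max(eta)                   # tip position at time T1

speed = (max(eta) - x1) / (T2 - T1)
print(speed)   # strictly positive for these parameters
```

Replacing the discrete-time Euler step by an exact Gillespie scheme, and the stand-in rate by (12), yields the experiment of Fig. 1.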