1 Introduction

This paper concerns a probabilistic model describing a sequential allocation of particles on a finite cycle graph. The model is motivated by cooperative sequential adsorption (CSA) (see [7, 8] and references therein). CSA models are widely applied in physical chemistry for modelling adsorption processes on a material surface onto which particles are deposited at random. The main peculiarity of adsorption processes is that deposited particles change the adsorption properties of the material. This motivates the growth rates defined in Eq. (1), which model a particular situation where subsequent particles are more likely to be adsorbed around previously deposited ones.

There is typically a hard-core constraint associated with CSA. That is, the adsorption (growth) rate is zero at any location with more than a certain number of particles. The asymptotic shape of the spatial configuration of deposited particles is of primary interest in such models. Many probabilistic models of spatial growth by monolayer deposition, diffusion and aggregation dynamics share this characteristic, for instance the Eden model [6], the diffusion-limited aggregation process [22], first-passage percolation models [17] and contact interaction processes [18].

In contrast, in our model (defined in Sect. 2) we allow any number of particles to be deposited at each site. This is motivated by growing interfaces (Fig. 1) associated with multilayer adsorption processes (see [2, 10, 15]). Even though the random nature of these processes is usually emphasized in the physical literature, there is a limited number of rigorous formulations and published results in this field (most of them in [14, 16]). Our model is closely related to a variant of random deposition models, but as we do not apply any of the techniques from this field, we refer the reader to the survey on surface growth [1].

Fig. 1 Multilayer adsorption/random deposition model

Our model can be naturally interpreted in terms of interacting urn models. In the case of no interaction, in which the growth rate at site i is given by \(\varGamma (x_i),\) where \(x_i\) is the number of existing particles at site i and \(\varGamma : \mathbb {Z}_+ \rightarrow (0,\, \infty )\) is a given function (called the reinforcement rule [4] or feedback function [12]), our model coincides with a generalised Pólya urn (GPU) model with a particular reinforcement rule \(\varGamma .\) Each site (with no underlying graph structure) corresponds to a different colour of ball. The growth rule corresponds to choosing an existing ball of colour i,  with probability proportional to \(\varGamma (x_i),\) and adding a new ball of that colour. The case \(\varGamma (x) = x\) is the classical Pólya urn.
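The GPU growth rule described above can be sketched as a short simulation. This is an illustration only, not a construction used in the paper; the feedback strength \(\lambda = 1.5\) in \(\varGamma (x)=e^{1.5 x}\), the three colours and all function names are our own choices. Weights are shifted by the maximal count before exponentiating, which leaves the selection probabilities unchanged but avoids floating-point overflow.

```python
import math
import random

def gpu_step(counts, lam, rng):
    """One draw of a generalised Polya urn: colour i is reinforced with
    probability proportional to Gamma(x_i) = exp(lam * x_i).
    Exponents are shifted by the maximal count to avoid overflow; this
    does not change the (normalised) selection probabilities."""
    m = max(counts)
    weights = [math.exp(lam * (c - m)) for c in counts]
    return rng.choices(range(len(counts)), weights=weights, k=1)[0]

rng = random.Random(3)
counts = [1, 1, 1]          # one initial ball of each of three colours
for _ in range(500):
    counts[gpu_step(counts, 1.5, rng)] += 1
# with a superlinear rule such as exp(1.5 x), Rubin's exponential embedding
# predicts that a single colour receives all but finitely many balls
```

Running this with a superlinear \(\varGamma \) typically produces a near-monopoly of one colour, in line with the first limiting behaviour discussed below.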

The so-called Rubin exponential embedding (first appearing in [5]) classifies the two possible limiting behaviours in the above class of GPU models: either there almost surely exists a site i that receives all but finitely many particles, or the number of particles at every site almost surely grows to infinity. For a comprehensive survey on urn models and their applications, see [13] and references therein.

In contrast, we consider growth rules with graph-based interactions (as in [19]) where the underlying graph is a cycle with N sites. In our growth model the rate of growth at site i is given by a site-dependent reinforcement rule \(\varGamma _i = \exp (\lambda _i u_i),\) where \(\lambda _i >0\) and \(u_i\) is the number of existing particles in a neighbourhood of site i. This allows one to take into account the case where different sites may have different reinforcement schemes (Fig. 2); in other words, each site has its own intrinsic ‘capacity’ parameter, as would be expected in many real-life situations. Although the model can easily be defined for a general graph, the results will heavily depend on its topological properties. In this paper we only address the case of a cycle graph. See [3, 9] for results on general graphs but different growth rules.

Fig. 2 An interpolated graph of a particular parameter set \((\lambda _i)_{i=1}^{20}\)

The main result of the present paper classifies, in terms of the set of parameters \(\varLambda = (\lambda _i)_{i=1}^N,\) the two possible behaviours of the model. The first behaviour is localisation of growth at a single site. This means that from a random moment of time onwards, all subsequent particles are allocated at a particular site. The second is localisation of growth at a pair of neighbouring sites with equal \(\lambda \) parameter. As in the first case, this means that from a random moment of time onwards, all subsequent particles are allocated at a particular pair of neighbouring sites. In particular, if \(\lambda _i\ne \lambda _{i+1}\) for all i,  then, with probability one, the growth will eventually localise at a single site. On the other hand, if \(\lambda _i\equiv \lambda ,\) then, with probability one, the growth will eventually localise at a pair of neighbouring sites. In the general case of a fixed and arbitrary parameter set \(\varLambda ,\) only the above two types of limiting behaviour are possible. Theorem 1 below provides a complete characterisation of the parameter set \(\varLambda \) and associated subsets where only one of the regimes, or both, may occur.

The model with \(\varGamma _i = \exp (\lambda u_i),\) i.e., \(\lambda _i\equiv \lambda \in \mathbb {R},\) was first considered in [19], and an analogue of Theorem 1 (Theorem 3 in [19]) was proved for this particular case of site-independent parameter \(\lambda .\)

The paper is organised as follows. In Sect. 2, we formally define the model, fix some terminology and state Theorem 1, which is our main result. The proof of the theorem appears in Sect. 6 and relies essentially on Lemmas 1–8, stated in Sect. 3 and proved in Sect. 5. Section 4 contains results concerning sums of random geometric progressions, which are of interest in their own right. These results, combined with stochastic domination techniques, are used throughout the proofs of Lemmas 5–8.

2 The Model and Main Result

Consider a cycle graph with \(N \ge 4\) vertices (sites) enumerated by the first N natural numbers such that \(1\sim 2 \sim \cdots \sim N-1 \sim N\sim 1,\) where \(i\sim j\) indicates that sites i and j are adjacent. Let \(\mathbb {Z}_{+}\) be the set of non-negative integers and \(\varLambda =\{\lambda _1,\ldots ,\lambda _N\}\) be an arbitrary set of positive real numbers. Given \(\mathbf{x}=(x_1, \ldots , x_N)\in \mathbb {Z}_{+}^N,\) define the growth rates as

$$\begin{aligned} \varGamma _i(\mathbf{x})= e^{\lambda _i\left( x_i+\sum _{j\sim i}x_j\right) },\quad i=1,\ldots , N. \end{aligned}$$
(1)

Consider a discrete-time Markov chain \(X(n)=(X_1(n),\ldots , X_N(n))\in \mathbb {Z}_{+}^N\) with the following transition probabilities

$$\begin{aligned} \mathsf {P}\left( X_i(n+1)=X_i(n)+1|X(n)=\mathbf{x}\right) =\frac{\varGamma _{i}(\mathbf{x})}{\sum _{k=1}^N\varGamma _k(\mathbf{x})},\quad i=1,\ldots , N,\quad \mathbf{x}\in \mathbb {Z}_{+}^N. \end{aligned}$$

The Markov chain describes the evolution of the number of particles sequentially allocated at each site of the graph. Given the configuration of particles \(X(n)=\mathbf{x}\in \mathbb {Z}^N_+\) at time n,  the next incoming particle is placed at site i with probability proportional to \(\varGamma _i(\mathbf{x}).\)
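The dynamics above can be sketched in a few lines of Python. This is an illustrative simulation only; the parameter values and function names are our own, and the exponents are shifted by their maximum purely for numerical stability (this leaves the transition probabilities unchanged).

```python
import math
import random

def step(x, lam, rng):
    """One transition of the chain: site i receives the next particle with
    probability proportional to Gamma_i(x) = exp(lam_i * (x_{i-1} + x_i + x_{i+1})),
    indices taken modulo N on the cycle.  Exponents are shifted by their
    maximum before exponentiating, which leaves the probabilities unchanged."""
    n = len(x)
    exps = [lam[i] * (x[(i - 1) % n] + x[i] + x[(i + 1) % n]) for i in range(n)]
    m = max(exps)
    weights = [math.exp(e - m) for e in exps]
    return rng.choices(range(n), weights=weights, k=1)[0]

rng = random.Random(0)
lam = [0.5, 1.0, 0.7, 0.6, 0.9, 0.4]   # a hypothetical parameter set, N = 6
x = [0] * len(lam)                      # start from the empty configuration
for _ in range(300):
    x[step(x, lam, rng)] += 1
```

Since all \(\lambda _i\) are distinct here, Theorem 1 below predicts that such runs eventually localise at a single site.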

Definition 1

For \(i\in \{1,\ldots , N\}\) (indices taken modulo N)

  1. (1)

    a site \(\{i\}\) is a local minimum, if \(\lambda _{i}<\min (\lambda _{i-1},\, \lambda _{i+1});\)

  2. (2)

    a pair of sites \(\{i,\, i+1\}\) is a local minimum of size 2,  if \(\lambda _i =\lambda _{i+1}<\min (\lambda _{i-1},\, \lambda _{i+2});\)

  3. (3)

    a site \(\{i\}\) is a local maximum, if \(\lambda _{i}>\max (\lambda _{i-1},\, \lambda _{i+1});\)

  4. (4)

    a pair of sites \(\{i,\, i+1\}\) is a saddle point, if

    $$\begin{aligned} \min \left( \lambda _{i-1},\, \lambda _{i+2}\right)<\lambda _i=\lambda _{i+1}<\max \left( \lambda _{i-1},\, \lambda _{i+2}\right) ; \end{aligned}$$
  5. (5)

    a site \(\{i\}\) is a growth point, if either \(\lambda _{i-1}<\lambda _{i}<\lambda _{i+1},\) or \(\lambda _{i-1}>\lambda _{i}>\lambda _{i+1}.\)
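For illustration, the categories of Definition 1 can be encoded directly. The function below is our own sketch (0-based indices modulo N, pairs reported at their left site), not part of the paper.

```python
def classify(lam, i):
    """Classify site i according to Definition 1, indices modulo N = len(lam).
    Returns 'local_min', 'local_min_2', 'local_max', 'saddle', 'growth_point',
    or None when site i fits none of the listed categories."""
    n = len(lam)
    lm1, l0, l1, l2 = lam[(i - 1) % n], lam[i], lam[(i + 1) % n], lam[(i + 2) % n]
    if l0 < min(lm1, l1):
        return 'local_min'
    if l0 > max(lm1, l1):
        return 'local_max'
    if l0 == l1:
        if l0 < min(lm1, l2):
            return 'local_min_2'
        if min(lm1, l2) < l0 < max(lm1, l2):
            return 'saddle'
    if lm1 < l0 < l1 or lm1 > l0 > l1:
        return 'growth_point'
    return None
```

For example, with lam = [2, 1, 3, 3, 5, 4] on a cycle of six sites, site 1 is a local minimum, the pair {2, 3} is a saddle point, and site 4 is a local maximum.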

Definition 2

Let \(\{i,\, i+1\}\) be a local minimum of size 2. We say that it is a local minimum of size 2 and

  1. (1)

    type 1,  if \(\lambda _i=\lambda _{i+1}> \frac{\lambda _{i-1}\lambda _{i+2}}{\lambda _{i-1}+\lambda _{i+2}},\)

  2. (2)

    type 2,  if \(\lambda _i=\lambda _{i+1}\le \frac{\lambda _{i-1}\lambda _{i+2}}{\lambda _{i-1}+\lambda _{i+2}}.\)

The following theorem is the main result of the paper.

Theorem 1

For every \(X(0)=\mathbf{x}\in \mathbb {Z}_+^N\) and

  1. (i)

    for every local maximum \(\{k\},\) with positive probability,

    $$\begin{aligned} \lim _{n\rightarrow \infty } X_i(n)=\infty \quad \text {if and only if } i=k; \end{aligned}$$
  2. (ii)

for every pair \(\{k,\, k+1\}\) with \(\lambda _k = \lambda _{k+1}=:\lambda \) that is not a local minimum of size 2 and type 2, with positive probability,

    $$\begin{aligned}&\lim _{n\rightarrow \infty } X_i(n)=\infty ,\quad \text {if and only if }i\in \{k,\, k+1\},\quad \text {and} \\&\lim _{n\rightarrow \infty } \frac{X_{k+1}(n)}{X_{k}(n)}=e^{\lambda R}, \end{aligned}$$

    where \(R=\lim _{n\rightarrow \infty } [X_{k+2}(n)-X_{k-1}(n)]\in \mathbb {Z}.\)

No other limiting behaviour is possible. That is, with probability 1, exactly one of the above events occurs in a random location \(\{k\}\) or \(\{k,\,k+1\}\) as described in (i) and (ii), respectively.

3 Lemmas

We start with notation that will be used throughout the proofs. Given \(i = 1,\ldots , N,\) define the following events

$$\begin{aligned}&A_n^i := \{\text {at time } n \text { a particle is placed at site } i\}, \quad n \in \mathbb {Z}_+, \\&A_n^{i, i+1}:= A_n^i \cup A_n^{i+1}, \quad n \in \mathbb {Z}_+. \end{aligned}$$

Define also the following events

$$\begin{aligned} A_{[n_1,n_2]}^i&:= \bigcap _{n= n_1}^{n_2} A_n^i,\\ A^{i, i+1}_{[n_1, n_2]}&:=\bigcap _{n=n_1}^{n_2}A^{i, i+1}_n, \end{aligned}$$

indicating that from time \(n_1\) to \(n_2\) all particles are placed at site i,  and at sites i or \(i+1,\) respectively. Further, events \(A_{[n, \infty )}^i\) and \(A_{[n, \infty )}^{i, i+1}\) denote the corresponding limiting cases as \(n_2\) goes to infinity.

Let \(\mathbf{e}_i\in \mathbb {Z}_{+}^N\) be a vector, whose ith coordinate is 1,  and all other coordinates are zero. Given \(\mathbf{x}\in \mathbb {Z}_+^N,\) define the following probability measure \(\mathsf {P}_{\mathbf{x}}(\cdot ) = \mathsf {P}(\, \cdot \, | \, X(0)=\mathbf{x}).\)

Remark 1

In the lemmas and proofs below we denote by \(\epsilon \) and \(\varepsilon ,\) possibly with subscripts, various positive constants whose values do not depend on the starting configuration \(\mathbf{x}\) and may vary from line to line. This is essential for the proof of Theorem 1. Also, the results are stated only for the essentially different cases; whenever there are trivially symmetric situations (e.g., \(\lambda _{k-1}< \lambda _k < \lambda _{k+1}\) and \(\lambda _{k-1}> \lambda _k > \lambda _{k+1}\)), we state and prove only one of them in order to avoid unnecessary repetition.

Lemma 1

Suppose that \(\{k\}\) is a local maximum, and \(\mathbf{x}\in \mathbb {Z}_{+}^N\) is such that \(\varGamma _k(\mathbf{x})=\max _{i}\varGamma _i(\mathbf{x}).\) Then, with positive probability, all subsequent particles are allocated at k,  i.e., \(\mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^k\right) \ge \epsilon \) for some \(\epsilon >0.\)

Lemma 1 describes the only case where localisation of growth at a single site can occur, namely, at a local maximum.

Lemma 2

Suppose that \(\{k\}\) is a growth point, and \(\mathbf{x}\in \mathbb {Z}_{+}^N\) is such that \(\varGamma _k(\mathbf{x})=\max _{i}\varGamma _i(\mathbf{x}).\) If \(\lambda _{k-1}<\lambda _k<\lambda _{k+1},\) then there exist \(n = n(\mathbf{x},\, \varLambda )\in \mathbb {Z}_{+}\) and \(\epsilon > 0,\) such that \(\mathsf {P}_{\mathbf{x}}(A_{[1,n]}^k) \ge \epsilon \) and \(\varGamma _{k+1}(\mathbf{x}+n\mathbf{e}_k)=\max _{i}\varGamma _i(\mathbf{x}+n\mathbf{e}_k).\)

Lemma 3

Suppose that \(\{k\}\) is a local minimum, and \(\mathbf{x}\in \mathbb {Z}_{+}^N\) is such that \(\varGamma _k(\mathbf{x})= \max _{i}\varGamma _i(\mathbf{x}).\) Then there exist \(n=n(\mathbf{x},\, \varLambda )\in \mathbb {Z}_{+}\) and \(\epsilon >0,\) such that \(\mathsf {P}_{\mathbf{x}}(A_{[1,n]}^k) \ge \epsilon \) and \(\max (\varGamma _{k-1}(\mathbf{x}+n\mathbf{e}_k),\, \varGamma _{k+1}(\mathbf{x}+n\mathbf{e}_k))=\max _{i}\varGamma _i(\mathbf{x}+n\mathbf{e}_k).\)

Lemmas 2–3 describe the following effect. If the maximal rate is attained at a site which is either a growth point or a local minimum, then, with positive probability, allocating \(n=n(\mathbf{x},\, \varLambda )\) particles at that site results in relocation of the maximal rate to a nearest neighbour with larger parameter \(\lambda .\) The number of particles required for relocation (the relocation time) is deterministic and depends only on the starting configuration \(\mathbf{x}\) and parameter set \(\varLambda .\)

Lemma 4

Suppose that \(\varGamma _{k}(\mathbf{x})=\max _{i}\varGamma _i(\mathbf{x}),\) and that either

  1. (1)

    \(\lambda _{k-1}<\lambda _k=\lambda _{k+1} \ge \lambda _{k+2};\) or

  2. (2)

    \(\lambda _{k-1}=\lambda _k=\lambda _{k+1}\ge \lambda _{k+2},\) and \(\varGamma _{k+1}(\mathbf{x})\ge \varGamma _{k-1}(\mathbf{x}),\)

then, with positive probability, all subsequent particles are allocated at sites \(\{k,\, k+1\},\) i.e., \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1}) \ge \epsilon \) for some \(\epsilon >0.\)

Lemma 4 describes a particular case that implies the second possible limiting behaviour of the model, i.e., localisation of growth at a pair of neighbouring sites.

Definition 3

Define the following stopping times

$$\begin{aligned}&\tau _k=\inf \left( n: X_k(n)=X_k(0)+1\right) ,\\&w_{k}^+=\min \left( \tau _i : \, i\ne k,\, k+1,\, k+2\right) , \quad \text {for } k=1, \ldots , N, \end{aligned}$$

where the usual convention is that

$$\begin{aligned} \inf (\emptyset )=\infty \quad \text {and } \min (a,\, \infty )=a,\quad \text {for } a\in \mathbb {R}_{+}\cup \{\infty \}. \end{aligned}$$

The above stopping times and the quantities \(r,\, z_1\) and \(z_2\) below will appear throughout Lemmas 4–8 and their proofs.

Definition 4

Given \(\mathbf{x}\in \mathbb {Z}_+^N\) define

$$\begin{aligned} r:=r(\mathbf{x})=x_{k+2}-x_{k-1}. \end{aligned}$$
(2)

In addition, if a pair of sites \(\{k,\, k+1\}\) is such that \(\lambda _k=\lambda _{k+1}=:\lambda \) and

$$\begin{aligned}&\lambda _{k-1}>\lambda ,\quad \text {define } z_1=\frac{1}{\lambda }\log \left( \frac{\lambda _{k-1}-\lambda }{\lambda }\right) ,\end{aligned}$$
(3)
$$\begin{aligned}&\lambda _{k+2}>\lambda ,\quad \text {define } z_2=\frac{1}{\lambda }\log \left( \frac{\lambda }{\lambda _{k+2}-\lambda }\right) . \end{aligned}$$
(4)

Before stating Lemma 5, let us denote by \(B_k\) the event that a particle arrives at site \(k+2\) in finite time before any particle is placed outside \(\{k,\,k+1,\,k+2\}.\) That is,

$$\begin{aligned} B_k:=\left\{ \tau _{k+2}<w_{k}^+\right\} . \end{aligned}$$
(5)

Lemma 5

Suppose that a pair of sites \(\{k,\, k+1\}\) is a saddle point with \(\lambda _{k-1}< \lambda _k=\lambda _{k+1}=:\lambda <\lambda _{k+2},\) and \(\mathbf{x}\in \mathbb {Z}_{+}^N\) is such that

$$\begin{aligned} \max \left( \varGamma _{k}(\mathbf{x}),\, \varGamma _{k+1}(\mathbf{x})\right) =\max _{i}\varGamma _i(\mathbf{x}). \end{aligned}$$
(6)
  1. (1)

    Then there exists \(\epsilon >0\) such that

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^{k, k+1}\bigcup B_k\right) =\mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^{k, k+1}\right) +\mathsf {P}_{\mathbf{x}}\left( B_k\right) \ge \epsilon . \end{aligned}$$
  2. (2)

    If \(r<z_2,\) then, with positive probability, all subsequent particles are allocated at sites \(\{k,\, k+1\},\) i.e., \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1})\ge \varepsilon \) for some \(\varepsilon >0.\)

  3. (3)

    If \(r\ge z_2,\) then \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1})=0,\) and, hence, \(\mathsf {P}_{\mathbf{x}}(B_k)\ge \epsilon .\)

  4. (4)

If the inequality in Part (3) is strict, i.e., \(r>z_2,\) then, with positive probability, the maximal rate relocates as follows. There exists \( \varepsilon > 0\) such that

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( B_k,\, \max \limits _{i=k+2, k+3}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) = \max \limits _{i}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) \right) \ge \varepsilon , \end{aligned}$$
    (7)

    where \(\max _{i}\varGamma _i(X(\tau _{k+2}))\) may be attained at \(k+3\) only if \(\lambda _{k+3} > \lambda .\)

Part (4) of Lemma 5 is similar to Lemmas 2–3 in that it also describes relocation of the maximal rate to a site with larger parameter \(\lambda .\) The main difference is that in Lemma 5 the relocation time is random. This is in contrast to Lemmas 2–3, where the relocation time is deterministic.

The proposition and definition below are intended to clarify some assumptions and simplify notation in Lemmas 6–8.

Proposition 1

Let \(\{k,\, k+1\}\) be a local minimum of size 2 with \(\lambda =\lambda _{k}=\lambda _{k+1},\) and let \(r=r(\mathbf{x}),\) \(z_1\) and \(z_2\) be the quantities of Definition 4. Then \(z_1<z_2\) if and only if the local minimum \(\{k,\, k+1\}\) is of type 1, in which case there might exist \(\mathbf{x}\) such that \(z_1<r<z_2.\) Otherwise, if the local minimum \(\{k,\, k+1\}\) is of type 2, then \(z_2\le z_1,\) in which case \(r \ge z_2\) or \(r \le z_1\) for all \(\mathbf{x}.\)
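Proposition 1 amounts to the elementary equivalence \(z_1<z_2 \Leftrightarrow (\lambda _{k-1}-\lambda )(\lambda _{k+2}-\lambda )<\lambda ^2 \Leftrightarrow \lambda > \lambda _{k-1}\lambda _{k+2}/(\lambda _{k-1}+\lambda _{k+2}).\) A quick numerical sanity check of this equivalence, over a grid of hypothetical parameter values of our own choosing:

```python
import itertools
import math

def z1_z2(lam_prev, lam, lam_next):
    """z1 and z2 from Eqs. (3)-(4); requires lam_prev > lam and lam_next > lam,
    as holds at a local minimum of size 2."""
    z1 = math.log((lam_prev - lam) / lam) / lam
    z2 = math.log(lam / (lam_next - lam)) / lam
    return z1, z2

def is_type_1(lam_prev, lam, lam_next):
    """Type 1 condition of Definition 2 (strict inequality)."""
    return lam > lam_prev * lam_next / (lam_prev + lam_next)

# verify z1 < z2  <=>  type 1 over a grid of admissible parameters
checked = 0
for lp, ln in itertools.product([1.5, 2.0, 3.0, 5.0], repeat=2):
    for lam in [0.5, 0.9, 1.25, 1.45]:
        if lam < lp and lam < ln:   # local-minimum-of-size-2 regime
            z1, z2 = z1_z2(lp, lam, ln)
            assert (z1 < z2) == is_type_1(lp, lam, ln)
            checked += 1
```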

Definition 5

Recall that \(\tau _k:=\inf (n: X_k(n)=X_k(0)+1)\) and let us further define the following stopping times

$$\begin{aligned} \sigma _k&=\min \left( \tau _{k-1},\, \tau _{k+2}\right) ,\\ w_k&=\min \left( \tau _i \, : \, i\ne k\pm 1,\, k,\, k+2\right) , \end{aligned}$$

and following events

$$\begin{aligned} D_k&=\left\{ \sigma _k<w_k\right\} ,\\ D_k'&=\left\{ \tau _{k-1}<\min \left( \tau _{k+2},\, w_k\right) \right\} ,\\ D_k''&=\left\{ \tau _{k+2}<\min \left( \tau _{k-1},\, w_k\right) \right\} . \end{aligned}$$

Note that \(D_k'\cap D_k''=\emptyset ,\) \(D_k=D_k'\cup D_k''\) and \(A^{k, k+1}_{[1, \infty )}\cap D_k=\emptyset .\)

Lemma 6

Suppose that \( \{k,\, k+1\}\) is a local minimum of size 2,  and \( \mathbf{x}\in \mathbb {Z}_{+}^N\) is such that \(\max (\varGamma _{k}(\mathbf{x}), \varGamma _{k+1}(\mathbf{x}))=\max _{i}\varGamma _i(\mathbf{x}).\)

  1. (1)

    There exists \(\epsilon >0\) such that

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^{k, k+1}\bigcup D_k\right) =\mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^{k, k+1}\right) +\mathsf {P}_{\mathbf{x}}\left( D_k\right) \ge \epsilon . \end{aligned}$$
  2. (2)

    If \( z_1<r< z_2\) (only possible if \(\{k,\,k+1\}\) is of type 1), then, with positive probability, all subsequent particles are allocated at sites \(\{k,\, k+1\},\) i.e., \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1})>\varepsilon \) for some \(\varepsilon >0.\)

  3. (3)

    If \(r\le z_1\) or \(r\ge z_2\) (always the case if \(\{k,\,k+1\}\) is of type 2),

    then \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1})=0\) and, hence, \(\mathsf {P}_{\mathbf{x}}(D_k)\ge \epsilon .\)

Lemma 6 is analogous to Parts (1)–(3) of Lemma 5 for the case of a local minimum of size 2. An analogue of Part (4) of Lemma 5 in the same situation is provided by Lemma 7.

Lemma 7

Suppose that \(\{k,\,k+1\}\) is a local minimum of size 2 with \(\lambda _k=\lambda _{k+1}=:\lambda ,\) and \( \mathbf{x}\in \mathbb {Z}_{+}^N\) is such that \(\max (\varGamma _{k}(\mathbf{x}),\, \varGamma _{k+1}(\mathbf{x}))=\max _{i}\varGamma _i(\mathbf{x}).\)

  1. (1)

    If \(\{k,\,k+1\}\) is of type 1 and \(r<z_1,\) or \(\{k,\,k+1\}\) is of type 2 and \(r<z_2\) then

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( D_k', \,\max \limits _{i=k-2,k-1}\varGamma _{i}\left( X\left( \tau _{k-1}\right) \right) =\max \limits _{i=1,\ldots ,N}\varGamma _{i}\left( X\left( \tau _{k-1}\right) \right) \right) \ge \varepsilon , \end{aligned}$$

    for some \(\varepsilon >0,\) where \(\max \nolimits _{i}\varGamma _i(X(\tau _{k-1}))\) may be attained at \(k-2\) only if \(\lambda _{k-2} > \lambda .\)

  2. (2)

    If \(\{k,\,k+1\}\) is of type 1 and \(r>z_2,\) or \(\{k,\,k+1\}\) is of type 2 and \(r>z_1\) then

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( D_k'', \,\max \limits _{i=k+2, k+3}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) =\max \limits _{i=1,\ldots ,N}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) \right) \ge \varepsilon , \end{aligned}$$

    for some \(\varepsilon >0,\) where \(\max \nolimits _{i}\varGamma _i(X(\tau _{k+2}))\) may be attained at \(k+3\) only if \(\lambda _{k+3} > \lambda .\)

  3. (3)

    If \(\{k,\,k+1\}\) is of type 2 and \(z_2<r<z_1,\) then

    $$\begin{aligned}&\mathsf {P}_{\mathbf{x}}\left( D_k',\, \max \limits _{i=k-2, k-1}\varGamma _{i}\left( X\left( \tau _{k-1}\right) \right) =\max \limits _{i=1,\ldots ,N}\varGamma _{i}\left( X\left( \tau _{k-1}\right) \right) \right) \\&\quad + \mathsf {P}_{\mathbf{x}}\left( D_k'',\, \max \limits _{i=k+2, k+3}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) =\max \limits _{i=1,\ldots ,N}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) \right) \ge \varepsilon , \end{aligned}$$

    for some \(\varepsilon >0,\) where \(\max \varGamma _i\) follows the corresponding prescriptions as above.

Remark 2

The next lemma concerns the borderline cases between a local minimum \(\{k,\, k+1\}\) of size 2 and type 1 and a saddle point. For example, in the notation of Lemma 7 these cases are formally obtained by setting either \(\lambda _{k-1}=\lambda \) (where \(-\infty =z_1<z_2\)), or \(\lambda _{k+2}=\lambda \) (where \(z_1<z_2=\infty \)). As both cases can be addressed in similar ways, the lemma below deals only with the case \(\lambda _{k-1}=\lambda .\)

Lemma 8

Suppose that sites \(\{k-1,\, k,\, k+1,\, k+2\}\) are such that

$$\begin{aligned} \lambda _{k-1}=\lambda _k=\lambda _{k+1}=:\lambda <\lambda _{k+2}, \end{aligned}$$

\(\mathbf{x}\in \mathbb {Z}_{+}^N\) is such that \(\max (\varGamma _{k}(\mathbf{x}),\, \varGamma _{k+1}(\mathbf{x}))=\max _{i}\varGamma _i(\mathbf{x}) \) and, additionally, \(\varGamma _{k-1}(\mathbf{x})\le \varGamma _{k+1}(\mathbf{x}).\)

  1. (1)

    There exists \(\epsilon >0\) such that

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^{k, k+1}\bigcup D_k\right) =\mathsf {P}_{\mathbf{x}}\left( A_{[1,\infty )}^{k, k+1}\right) +\mathsf {P}_{\mathbf{x}}\left( D_k\right) \ge \epsilon . \end{aligned}$$
  2. (2)

If \(r<z_2,\) then, with positive probability, all subsequent particles are allocated at sites \(\{k,\, k+1\},\) i.e., \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1})\ge \epsilon \) for some \(\epsilon >0.\)

  3. (3)

    If \(r\ge z_2,\) then \(\mathsf {P}_{\mathbf{x}}(A_{[1,\infty )}^{k, k+1})=0\) and, hence, \(\mathsf {P}_{\mathbf{x}}(D_k)\ge \epsilon .\)

  4. (4)

    If \(r>z_2,\) then there exists \( \varepsilon > 0\) such that

    $$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( B_k,\, \max \limits _{i=k+2, k+3}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) = \max \limits _{i}\varGamma _{i}\left( X\left( \tau _{k+2}\right) \right) \right) \ge \varepsilon , \end{aligned}$$

    where \(\max \nolimits _{i}\varGamma _i(X(\tau _{k+2}))\) may be attained at \(k+3\) only if \(\lambda _{k+3} > \lambda .\)

The following corollary concerns those cases covered by Part (3) of Lemmas 5, 6 and 8, where the configuration parameter r is equal to one of the model parameters \(z_1\) and \(z_2.\) In what follows we call them critical cases.

Corollary 1

For the critical cases, relocation of the maximal rate to a site with larger parameter \(\lambda \) also occurs, with positive probability, in finite time.

Remark 3

Let us remark the following.

  1. (1)

    It is important to emphasize that in all the above cases where the maximal rate \(\max _i\varGamma _i(\mathbf{x})\) eventually relocates with positive probability, it always relocates to a site with strictly larger parameter \(\lambda .\)

  2. (2)

    Note that Lemmas 2, 3, 5 and 7 can be appropriately reformulated in order to cover the symmetric cases by simply re-labelling the graph sites in reverse order as the graph is a cycle. For example, if \(\{k,\, k+1\}\) is a saddle point as in Lemma 5, then the corresponding symmetric case would be \(\lambda _{k-1}>\lambda _k=\lambda _{k+1}>\lambda _{k+2},\) etc.

4 Random Geometric Progressions and Bernoulli Measures

The statements and propositions in this section are essential building blocks for the proofs of the lemmas that follow. The reason is that in the proofs of Lemmas 4–8 we need to analyse the limiting behaviour of random variables of the form \(\sum _{i=0}^{n} \prod _{j=1}^i\zeta _j,\) as \(n \rightarrow \infty ,\) where \(\{\zeta _j,\, \> j \ge 1\}\) is an i.i.d. sequence of positive random variables. It will also be necessary to compare such variables and introduce some stochastic domination concepts to enable us to carry out uniform estimates not depending on the starting configuration \(X(0) = \mathbf{x}.\) We refer to [21] for standard definitions and basic properties of stochastic domination. The following notation is used throughout. Given random variables X and Y (or sequences X and Y), we write \(X\ge _{st}Y\) if X stochastically dominates Y. Similarly, given two probability measures \(\nu \) and \(\mu ,\) we write \(\mu \ge _{st}\nu \) if \(\mu \) stochastically dominates \(\nu .\)

Random geometric progressions In this subsection we consider random variables realised on a certain probability space \((\varOmega ,\, \mathcal{F},\, \mathsf {P}).\) We denote by \(\mathsf {E}\) the expectation with respect to the probability measure \(\mathsf {P}.\) If X and Y are random variables or sequences such that \(X\ge _{st}Y,\) then we may assume that \(\mathsf {P}\) is a coupling of the probability distributions of X and Y such that \(\mathsf {P}(X\ge Y)=1.\) Such a coupling exists by Strassen’s theorem [20].

Given a random sequence \(\mathbf{\zeta }=\{\zeta _i,\, i\ge 1\},\) define

$$\begin{aligned} Y_i(\zeta )=\prod _{j=1}^i\zeta _j,\quad i\ge 1,\quad Y_0(\zeta )=1,\quad \text{ and }\quad Z_n(\zeta )=\sum _{i=0}^{n}Y_i(\zeta ),\quad n\ge 1, \end{aligned}$$
(8)

and

$$\begin{aligned} Z(\mathbf{\zeta })=\sum _{i=0}^{\infty }Y_i(\zeta ). \end{aligned}$$

Proposition 2

  1. (1)

    Let \(\mathbf{\zeta }=\{\zeta _{ i},\, i\ge 1\}\) be an i.i.d. sequence of positive random variables such that \(\mathsf {E}(\log (\zeta _{i}))<0.\) Then \(\mathsf {P}(Z(\zeta )<\infty )=1\) and, consequently, \(\mathsf {E}(e^{-Z(\zeta )})>0.\)

  2. (2)

    Let \(\mathbf{\theta }=\{\theta _{i},\, i\ge 1\}\) be another i.i.d. sequence of positive random variables such that \(\mathsf {E}(\log (\theta _{i}))<0\) and \(\mathbf{\theta }\ge _{st}\zeta .\) Then \(\mathsf {E}(e^{-Z(\zeta )})\ge \mathsf {E}(e^{-Z(\theta )}).\)

Proof of Proposition 2

The first statement of the proposition is a well-known fact in the theory of random walks in random environment. Indeed, denote \(\mathsf {E}(\log (\zeta _{i}))=a<0.\) Given \(\delta >0\) such that \(a+\delta <0,\) it follows from the strong law of large numbers that \(Y_n<e^{(a+\delta )n}\) for all but finitely many n almost surely. Therefore, the tail of \(Z(\zeta )\) is eventually majorised by the corresponding tail of a convergent geometric progression. In turn, finiteness of \(Z(\zeta )\) implies positivity of the expectation. For the second statement, note that \(Z(\cdot )\) is increasing in each coordinate, so \(\theta \ge _{st}\zeta \) implies \(Z(\theta )\ge _{st}Z(\zeta ).\) Therefore, \(e^{-Z(\zeta )}\ge _{st}e^{-Z(\theta )}\) and hence, \(\mathsf {E}(e^{-Z(\zeta )})\ge \mathsf {E}(e^{-Z(\theta )})\) as claimed. \(\square \)
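The convergence in Proposition 2 can be illustrated numerically. The uniform distribution below is an arbitrary choice of ours; for \(\zeta _1\sim \mathrm {Unif}(0.2,\,1.1)\) one has \(\mathsf {E}\log \zeta _1\approx -0.53<0,\) so the products \(Y_n\) decay geometrically and the partial sums \(Z_n(\zeta )\) stabilise.

```python
import math
import random

def truncated_Z(rng, n_terms=2000):
    """Partial sum Z_n(zeta) = sum_{i<=n} prod_{j<=i} zeta_j for i.i.d.
    zeta_j ~ Uniform(0.2, 1.1), a distribution with E[log zeta_1] < 0.
    Returns (Z_n, Y_n); by the strong law of large numbers Y_n decays
    geometrically, so Z_n has effectively converged to Z(zeta)."""
    y, z = 1.0, 1.0          # Y_0 = 1 and the i = 0 term of the sum
    for _ in range(n_terms):
        y *= rng.uniform(0.2, 1.1)
        z += y
    return z, y

rng = random.Random(1)
samples = [truncated_Z(rng) for _ in range(200)]
# every sampled Z_n is finite and the remaining product Y_n is negligible,
# consistent with P(Z(zeta) < infinity) = 1
```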

Definition 6

Let \(\zeta =\{\zeta _i,\, i\ge 1\}\) and \(\eta =\{\eta _j,\, j\ge 1\}\) be i.i.d. sequences of positive random variables. Sequence \(\eta \) is said to be reciprocal to \(\zeta \) if \(\eta _1\) has the same distribution as \(1/\zeta _{1}.\)

The following proposition follows from basic properties of stochastic domination.

Proposition 3

Let X and Y be two i.i.d. sequences of positive random variables, and let \(\eta _{\scriptscriptstyle X}\) and \(\eta _{\scriptscriptstyle Y}\) be their corresponding reciprocal sequences. If \(X\ge _{st}Y\) then \(\eta _{\scriptscriptstyle X}\le _{st}\eta _{\scriptscriptstyle Y}.\)

Proposition 4

Let \(\zeta =\{\zeta _{ i},\, i\ge 1\}\) be an i.i.d. sequence of positive random variables such that \(\mathsf {E}(\log (\zeta _{i}))>0.\) Let \(\{Y_i,\, i\ge 0\} \) and \(\{Z_{n}(\zeta ),\, n\ge 1\}\) be the random variables as in (8). Define the following random sequence

$$\begin{aligned} F_{n}(\zeta )=Z_{n}(\zeta )/Y_{n}(\zeta ),\quad n\ge 1. \end{aligned}$$

Then, \(F_{n}(\zeta )\) converges in distribution to

$$\begin{aligned} Z(\eta )=1+\sum \limits _{i=1}^{\infty }\prod _{j=1}^i\eta _j, \quad \text {as} \quad n\rightarrow \infty , \end{aligned}$$

where \(\eta \) is the sequence reciprocal to \(\zeta .\) Moreover, \(Z(\eta )\) is almost surely finite and \(Z(\eta )\ge _{st}F_n(\zeta )\) for any \(n\ge 1.\)

Proof of Proposition 4

First, note that for every \(n\ge 1,\)

$$\begin{aligned} F_{n}(\zeta )=1+\sum \limits _{i=1}^{n}\prod _{j=1}^i\zeta _{n-j+1}^{-1}= 1+\sum \limits _{i=1}^{n}\prod _{j=1}^i\eta _{j}^{(n)}, \end{aligned}$$

where \(\eta _{j}^{(n)}=\zeta _{n-j+1}^{-1}.\) This means that \(F_{n}(\zeta )\) has the same distribution as \(Z_{n}(\eta )\) defined for the sequence \(\eta =\{\eta _i,\, i\ge 1\}\) reciprocal to \(\zeta .\) Therefore, \(F_n(\zeta )\) converges in distribution to \(Z(\eta ).\) In addition, \(\mathsf {E}(\log (\eta _1))=-\mathsf {E}(\log (\zeta _1))<0.\) Therefore, by Proposition 2, \(Z(\eta )\) is almost surely finite. Finally, it follows by construction that \(Z(\eta )\ge _{st}F_n(\zeta ),\) \(n\ge 1.\) \(\square \)
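The identity at the start of the proof is pathwise, not merely distributional: for any fixed positive sequence, \(Z_n(\zeta )/Y_n(\zeta )\) coincides with the partial sum built from the reversed reciprocals. A direct numerical check (the test sequence is arbitrary; function names are ours):

```python
def forward_ratio(zeta):
    """F_n(zeta) = Z_n(zeta) / Y_n(zeta), computed from the definitions in (8)."""
    y, z = 1.0, 1.0          # Y_0 = 1 and the i = 0 term of Z_n
    for v in zeta:
        y *= v
        z += y
    return z / y

def reversed_reciprocal(zeta):
    """1 + sum_{i<=n} prod_{j<=i} 1/zeta_{n-j+1}: the time-reversed form
    appearing in the proof of Proposition 4."""
    y, z = 1.0, 1.0
    for v in reversed(zeta):
        y /= v
        z += y
    return z

zeta = [2.0, 0.5, 3.0, 1.5, 0.25, 4.0]
# the two expressions agree pathwise, up to floating-point rounding
```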

Proposition 5

Let \(\zeta =\{\zeta _{ i},\, i\ge 1\}\) be an i.i.d. sequence of positive random variables such that \(\mathsf {E}(\log (\zeta _{i}))=a>0,\) and \(\eta =\{\eta _i,\, i\ge 1\}\) be its reciprocal sequence. Given \(0<\gamma <1,\) define the following stopping time

$$\begin{aligned} {\widehat{m}}=\min \left( n: \gamma Y_n(\zeta )\ge 1\right) . \end{aligned}$$
(9)

Then both \(Z(\eta )<\infty \) and \(Z_{{\widehat{m}}-1}(\zeta )<\infty \) almost surely, \(\gamma Z_{{\widehat{m}}-1}(\zeta ) \le _{st} Z(\eta ),\) and, hence,

$$\begin{aligned} \mathsf {E}\left( e^{-\gamma Z_{{\widehat{m}}-1}(\zeta )}\right) \ge \mathsf {E}\left( e^{-Z(\eta )}\right) >0. \end{aligned}$$
(10)

Proof of Proposition 5

By Proposition 4, \(Z(\eta )\) is almost surely finite and \(F_n(\zeta )\le _{st} Z(\eta )\) for all \(n\ge 1.\) Therefore, \(F_{\widehat{m}-1}(\zeta )\le _{st}Z(\eta ).\) Since \(\gamma Y_{\widehat{m}-1}(\zeta )<1\) we obtain that

$$\begin{aligned} \gamma Z_{\widehat{m}-1}(\zeta )<Z_{{\widehat{m}}-1}(\zeta )/Y_{{\widehat{m}}-1}(\zeta ) = F_{{\widehat{m}}-1}(\zeta ). \end{aligned}$$

Consequently, \(\gamma Z_{\widehat{m}-1}(\zeta ) \le _{st} Z(\eta ),\) which implies (10) as claimed. \(\square \)

Proposition 6

Let \(\zeta =(\zeta _{i},\, i\ge 1)\) and \(\theta =(\theta _{i},\, i\ge 1)\) be i.i.d. sequences of positive random variables such that \(\mathsf {E}(\log (\theta _{1}))>0\) and \(\zeta _1\ge _{st}\theta _1.\) Let \(\eta _{\scriptscriptstyle \zeta }\) and \(\eta _{\theta }\) be sequences reciprocal to \(\zeta \) and \(\theta ,\) respectively. Given \(0<\gamma <1,\) let \({\widehat{m}}\) be the stopping time for sequence \(\zeta \) as in (9). Then

$$\begin{aligned} \mathsf {E}\left( e^{-\gamma Z_{{\widehat{m}}-1}(\zeta )}\right) \ge \mathsf {E}\left( e^{-Z(\eta _{\theta })}\right) . \end{aligned}$$

Proof of Proposition 6

Note that \(\zeta _1\ge _{st}\theta _1\) implies \(\mathsf {E}(\log (\zeta _{1}))>0.\) By Proposition 4, both \(Z(\eta _{\zeta })\) and \(Z(\eta _{\theta })\) are almost surely finite. Further, by Proposition 3, \(\eta _{\zeta }\le _{st}\eta _{\theta }.\) Therefore

$$\begin{aligned} \mathsf {E}\left( e^{-Z(\eta _{\zeta })}\right) \ge \mathsf {E}\left( e^{-Z(\eta _{\theta })}\right) . \end{aligned}$$

By Proposition 5, it follows that

$$\begin{aligned} \mathsf {E}\left( e^{-\gamma Z_{{\widehat{m}}-1}(\zeta )}\right) \ge \mathsf {E}\left( e^{-Z(\eta _{\zeta })}\right) \ge \mathsf {E}\left( e^{-Z(\eta _{\theta })}\right) \end{aligned}$$

as claimed. \(\square \)

Bernoulli measures. Now, we introduce a family of Bernoulli measures and some notations that will be used throughout proofs of Lemmas 4–8.

Let \(\xi =(\xi _i,\, i\ge 1)\) be a sequence of independent Bernoulli random variables with success probability p. Let \(\mu _p\) be the distribution of \(\xi ,\) that is, the product Bernoulli measure defined on the set of infinite binary sequences, and denote by \(\mathbb {E}_p\) the expectation with respect to the Bernoulli measure \(\mu _p.\)

Define

$$\begin{aligned} U_i=\xi _1+\cdots + \xi _i,\quad i\ge 1, \end{aligned}$$
(11)

the binomial random variables corresponding to a Bernoulli sequence \(\xi .\)

Let \(\lambda _{k-1},\) \(\lambda _k,\) \(\lambda _{k+1}\) and \(\lambda _{k+2}\) be the \(\lambda \)-parameters corresponding to the quadruple \(\{k-1,\, k,\, k+1,\, k+2\}\) of graph sites such that \(\lambda =\lambda _k=\lambda _{k+1},\) as in Lemmas 4–8. Let us define the following i.i.d. sequences

$$\begin{aligned} \begin{aligned} \zeta _1&=\left( \zeta _{1, i}=e^{\lambda _{k-1}(1-\xi _i)-\lambda },\, i\ge 1\right) ,\\ \zeta _2&=\left( \zeta _{2, i}=e^{\lambda _{k+2}\xi _i-\lambda },\, i\ge 1\right) . \end{aligned} \end{aligned}$$
(12)

It is a well-known fact that if \( 0<p'\le p''<1,\) then \(\mu _{p'}\le _{st}\mu _{p''}.\) This fact yields the following proposition.

Proposition 7

Let \(\zeta _1',\, \zeta _2'\) and \(\zeta _1'',\, \zeta _2''\) be sequences defined by (12) for Bernoulli sequences with success probabilities \(p'\) and \(p'',\) respectively. If \(0<p'\le p''<1,\) then \(\zeta _1'\ge _{st}\zeta _1''\) and \(\zeta _2'\le _{st}\zeta ''_2.\)

Note that the variables \(Z_n\) [defined in (8)] corresponding to the sequences \(\zeta _1\) and \(\zeta _2\) can be expressed in terms of the binomial random variables (11) as follows

$$\begin{aligned} Z_n\left( \zeta _1\right) =\sum \limits _{i=0}^{n}e^{\lambda _{k-1}(i-U_i)-\lambda i}\quad \text {and}\quad Z_n\left( \zeta _2\right) =\sum \limits _{i=0}^{n}e^{\lambda _{k+2}U_i-\lambda i}. \end{aligned}$$
(13)

It is useful to note that if \(\lambda _{k-1}=\lambda _{k+2}=\lambda ,\) then the above expressions are

$$\begin{aligned} Z_n\left( \zeta _1\right) =\sum \limits _{i=0}^{n}e^{-\lambda U_i} \quad \text {and}\quad Z_n\left( \zeta _2\right) =\sum \limits _{i=0}^{n}e^{\lambda (U_i-i)}. \end{aligned}$$
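As a quick consistency check of (13), the running products of the sequences (12) telescope into the binomial exponents. A sketch under the same perpetuity convention \(Z_n(\zeta )=\sum _{i=0}^n\prod _{j\le i}\zeta _j\) assumed from (8), with arbitrary illustrative parameter values:

```python
import math
import random

random.seed(0)
lam, lam_km1, lam_kp2, p, n = 1.2, 0.7, 0.9, 0.4, 30   # illustrative values
xi = [1 if random.random() < p else 0 for _ in range(n)]
U = [sum(xi[:i]) for i in range(n + 1)]                 # U_0 = 0, U_i = xi_1+...+xi_i

def Z_n(zeta):
    """Z_n = sum_{i=0}^{n} prod_{j<=i} zeta_j (empty product = 1)."""
    total, prod = 1.0, 1.0
    for z in zeta:
        prod *= z
        total += prod
    return total

zeta1 = [math.exp(lam_km1 * (1 - x) - lam) for x in xi]       # as in (12)
zeta2 = [math.exp(lam_kp2 * x - lam) for x in xi]
closed1 = sum(math.exp(lam_km1 * (i - U[i]) - lam * i) for i in range(n + 1))
closed2 = sum(math.exp(lam_kp2 * U[i] - lam * i) for i in range(n + 1))
assert abs(Z_n(zeta1) - closed1) < 1e-9                       # matches (13)
assert abs(Z_n(zeta2) - closed2) < 1e-9
```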

5 Proofs of Lemmas

In the following proofs we show the existence of positive real constants C,  c,  \(\epsilon \) and \(\varepsilon ,\) whose exact values are immaterial and may vary from line to line, but which do not depend on the starting configuration \(X(0) = \mathbf{x}.\) In order to avoid notational clutter we shall denote the initial allocation rates \(\varGamma _i(\mathbf{x})\) simply by \(\varGamma _i\) for all i. Moreover, whenever we fix an index \(k \in \{1,\ldots ,N\}\) and consider indices in the neighbourhood of k,  those indices should be interpreted modulo N.

5.1 Proofs of Lemmas 1–3

For short, denote \(B=\sum _{i \ne k, k \pm 1}\varGamma _i\) and \(Z=\sum _{i=1}^N\varGamma _i.\) Since, by assumption, \(\varGamma _k=\max \nolimits _{i=1, \ldots , N}\varGamma _i,\) we have

$$\begin{aligned} \frac{\varGamma _{k-1}}{\varGamma _k}\le 1,\quad \frac{\varGamma _{k+1}}{\varGamma _k}\le 1, \quad \varGamma _k\ge \frac{Z}{N}\quad \text {and}\quad \frac{Z-\varGamma _k}{Z}\le \frac{(N-1)}{N}. \end{aligned}$$
(14)

It follows from the last two inequalities that

$$\begin{aligned} \frac{B}{\varGamma _k}\le N-1. \end{aligned}$$
(15)

Proof of Lemma 1

Recall that \(\lambda _k>\max (\lambda _{k-1},\, \lambda _{k+1}).\) We need to prove the existence of a positive number \(\epsilon \) such that

$$\begin{aligned} \mathbb {P}_{\mathbf{x}}\left( A_{[1, \infty )}^k\right) = \prod \limits _{n=0}^{\infty } \frac{\varGamma _ke^{\lambda _kn}}{\varGamma _{k-1}e^{\lambda _{k-1}n}+\varGamma _ke^{\lambda _kn}+\varGamma _{k+1}e^{\lambda _{k+1}n}+ B} >\epsilon , \end{aligned}$$
(16)

where \(\epsilon > 0\) depends only on \(\lambda _{k-1},\, \lambda _k,\, \lambda _{k+1}\) and N.

Indeed, rewriting the identity in (16) and applying bounds (14) and (15),

$$\begin{aligned}&\mathbb {P}_{\mathbf{x}}\left( A_{[1, \infty )}^k\right) \\&\quad =\exp \left( -\sum _{n=0}^{\infty }\log \left( 1+\frac{\varGamma _{k-1}}{\varGamma _k}e^{(\lambda _{k-1}-\lambda _k)n}+ \frac{\varGamma _{k+1}}{\varGamma _k}e^{(\lambda _{k+1}-\lambda _k)n}+\frac{B}{\varGamma _k}e^{-\lambda _kn}\right) \right) \\&\quad \ge \exp \left( -\sum _{n=0}^{\infty }\log \left( 1+e^{(\lambda _{k-1}-\lambda _k)n}+e^{(\lambda _{k+1}-\lambda _k)n}+(N-1)e^{-\lambda _kn}\right) \right) \\&\quad \ge \exp \left( -C\sum _{n=0}^{\infty }\left( e^{(\lambda _{k-1}-\lambda _k)n}+e^{(\lambda _{k+1}-\lambda _k)n}+(N-1)e^{-\lambda _kn}\right) \right)>\epsilon >0, \end{aligned}$$

since the series in the exponent above converges. It is not hard to see that in the last inequality \(\epsilon \) depends only on \(\lambda _{k-1},\, \lambda _k,\, \lambda _{k+1}\) and N. \(\square \)

Proof of Lemma 2

Recall that \(\lambda _{k-1}<\lambda _k<\lambda _{k+1}.\) We need to prove the existence of a finite positive integer \({\hat{n}}\) and a positive number \(\epsilon \) such that

$$\begin{aligned} \varGamma _{k+1}e^{\lambda _{k+1}{\hat{n}}} \ge \varGamma _ke^{\lambda _k{\hat{n}}} > \max \left( \varGamma _{k-1}e^{\lambda _{k-1}{\hat{n}}},\, \max _{i \ne k, k \pm 1}\varGamma _i\right) \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}_{\mathbf{x}}\left( A_{[1, {\hat{n}}]}^k\right) = \prod \limits _{n=0}^{{\hat{n}}-1} \frac{\varGamma _ke^{\lambda _kn}}{\varGamma _{k-1}e^{\lambda _{k-1}n}+\varGamma _ke^{\lambda _kn}+\varGamma _{k+1}e^{\lambda _{k+1}n}+ B} >\epsilon , \end{aligned}$$
(17)

where \(\epsilon > 0\) depends only on \(\lambda _{k-1},\, \lambda _k,\, \lambda _{k+1}\) and N. Note that the sequence \(e^{(\lambda _{k+1}-\lambda _k)n},\, n \ge 0,\) is exponentially increasing, so there exists a minimal integer \({\hat{n}}\) such that

$$\begin{aligned} e^{(\lambda _{k+1}-\lambda _k){\hat{n}}} \ge \frac{\varGamma _k}{\varGamma _{k+1}}, \quad \text {that is,} \quad \frac{\varGamma _{k+1}(\mathbf{x}+ {\hat{n}} \mathbf{e}_k)}{\varGamma _k(\mathbf{x}+ {\hat{n}} \mathbf{e}_k)} \ge 1. \end{aligned}$$

Then, it is easy to see that

$$\begin{aligned} \frac{\varGamma _{k+1}}{\varGamma _k}\sum \limits _{n=0}^{\hat{n}-1} e^{(\lambda _{k+1}-\lambda _k)n}\le C_1<\infty , \end{aligned}$$
(18)

where \(C_1\) depends only on \(\lambda _k\) and \(\lambda _{k+1}.\) Further, rewriting the identity in (17) and using bounds (14), (15) and (18), gives that

$$\begin{aligned}&\mathbb {P}_{\mathbf{x}}\left( A_{[1, {\hat{n}}]}^k\right) \\&\quad =\exp \left( -\sum _{n=0}^{{\hat{n}}-1}\log \left( 1+\frac{\varGamma _{k-1}}{\varGamma _k}e^{(\lambda _{k-1}-\lambda _k)n} + \frac{\varGamma _{k+1}}{\varGamma _k}e^{(\lambda _{k+1}-\lambda _k)n}+\frac{B}{\varGamma _k}e^{-\lambda _k n}\right) \right) \\&\quad \ge \exp \left( -\sum _{n=0}^{{\hat{n}}-1}\log \left( 1+e^{(\lambda _{k-1}-\lambda _k)n}+\frac{\varGamma _{k+1}}{\varGamma _k}e^{(\lambda _{k+1}-\lambda _k)n}+(N-1)e^{-\lambda _kn}\right) \right) \\&\quad \ge \exp \left( -C_2\sum _{n=0}^{\hat{n}-1}\left( e^{(\lambda _{k-1}-\lambda _k)n}+ \frac{\varGamma _{k+1}}{\varGamma _k}e^{(\lambda _{k+1}-\lambda _k)n}+(N-1)e^{-\lambda _k n}\right) \right) >\epsilon , \end{aligned}$$

for some \(\epsilon >0.\) \(\square \)

Proof of Lemma 3

Recall that \(\lambda _k < \min (\lambda _{k-1},\, \lambda _{k+1}).\) As in the proof of Lemma 2, we need to show the existence of a finite positive integer \({\hat{n}}\) and a positive \(\epsilon \) such that

$$\begin{aligned} \max \left( \varGamma _{k-1}e^{\lambda _{k-1}{\hat{n}}},\, \varGamma _{k+1}e^{\lambda _{k+1}{\hat{n}}}\right) \ge \varGamma _ke^{\lambda _k\hat{n}} \ge \max _{i \ne k, k \pm 1}\varGamma _i \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}_{\mathbf{x}}\left( A_{[1, {\hat{n}}]}^k\right) = \prod \limits _{n=0}^{{\hat{n}}-1} \frac{\varGamma _ke^{\lambda _kn}}{\varGamma _{k-1}e^{\lambda _{k-1}n}+\varGamma _ke^{\lambda _kn}+\varGamma _{k+1}e^{\lambda _{k+1}n}+ B} >\epsilon , \end{aligned}$$

where \(\epsilon > 0\) depends only on \(\lambda _{k-1}, \,\lambda _k,\, \lambda _{k+1}\) and N. This can be shown similarly to the proof of Lemma 2, and we skip the details. \(\square \)

5.2 Proofs of Lemmas 4–8

5.2.1 Notations

We start with some preliminary considerations and notations that will be used throughout the proofs of Lemmas 4–8.

Let \(\{k,\, k+1\}\) be a pair of sites such that \(\lambda _k=\lambda _{k+1}=\lambda .\) If, as defined in Definition 2, \(r=r(\mathbf{x})=x_{k+2}-x_{k-1},\) then \(\frac{\varGamma _{k+1}(\mathbf{x})}{\varGamma _{k}(\mathbf{x})}=e^{\lambda r}.\) Therefore, given that the next particle is allocated at either k or \(k+1,\) the conditional \(\mathsf {P}_{\mathbf{x}}\)-probability to choose \(k+1\) is equal to

$$\begin{aligned} p:=p(r)=\frac{\varGamma _{k+1}(\mathbf{x})}{\varGamma _{k}(\mathbf{x})+\varGamma _{k+1}(\mathbf{x})}=\frac{e^{\lambda r}}{1+e^{\lambda r}}. \end{aligned}$$
(19)

We henceforth denote \(q=1-p.\) Furthermore, the probability p does not change when particles are added at sites k and \(k+1,\) since the configuration parameter r remains constant.

Note that p(z),  considered as a function of \(z\in \mathbb {R},\) is monotonically increasing. A direct computation shows that the unique solutions of the equations \(\lambda _{k-1}-\lambda =p(z)\lambda _{k-1}\) and \(\lambda _{k+2}p(z)=\lambda \) are the quantities \(z_1\) and \(z_2\) [defined in (3)], respectively.
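Since \(p(z)=e^{\lambda z}/(1+e^{\lambda z})\) is a strictly increasing logistic function, each of these equations has a unique root, which can also be located numerically. A sketch for the equation \(\lambda _{k+2}p(z)=\lambda \) with illustrative values of \(\lambda \) and \(\lambda _{k+2}\) (the closed forms in (3) are not restated here; the algebraic solution below is derived directly from the equation itself):

```python
import math

lam, lam_kp2 = 0.8, 1.5          # illustrative values with lam_kp2 > lam

def p(z):
    """p(z) = e^{lam z} / (1 + e^{lam z}), monotonically increasing in z."""
    return math.exp(lam * z) / (1.0 + math.exp(lam * z))

# bisection for the unique root of lam_kp2 * p(z) = lam
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lam_kp2 * p(mid) < lam:
        lo = mid
    else:
        hi = mid
z2 = 0.5 * (lo + hi)

# algebraically, lam_kp2 * p(z) = lam  <=>  e^{lam z} = lam / (lam_kp2 - lam)
assert abs(z2 - math.log(lam / (lam_kp2 - lam)) / lam) < 1e-8
assert p(z2 - 1) < p(z2) < p(z2 + 1)   # monotonicity
```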

Let \(S_n\) be the number of additional particles at site \(k+1\) at time \(n \ge 1.\) Let \(S_0=0\) and \(s(n)=(s_0,\, s_1, \ldots , s_n)\) be a fixed trajectory of a finite random sequence \(S(n)=(S_0,\, S_1, \ldots , S_n).\) Note that, by construction, any trajectory s(n) is a sequence of non-negative integers such that \(s_0=0\) and \(s_{i}-s_{i-1}\in \{0,\, 1\},\) \(i=1, \ldots , n.\)

For short, denote

$$\begin{aligned} \begin{aligned}&\varGamma _i=\varGamma _i(\mathbf{x}),\quad \widetilde{\varGamma }_k=\sum _{i\ne k, k\pm 1, k+2}\varGamma _i,\\&\gamma _{k,1}=\frac{\varGamma _{k-1}}{\varGamma _{k}+\varGamma _{k+1}},\quad \gamma _{k,2}=\frac{\varGamma _{k+2}}{\varGamma _{k}+\varGamma _{k+1}}, \quad \widetilde{\gamma }_k=\frac{\widetilde{\varGamma }_k}{\varGamma _{k}+\varGamma _{k+1}}. \end{aligned} \end{aligned}$$
(20)

In the rest of this section we are going to derive expressions for probabilities \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, n+1]}),\, n \ge 1,\) in terms of expectations with respect to a Bernoulli product measure on \(\{0,\,1\}^{\infty }\) with parameter p defined in (19). These expressions allow one to obtain lower and upper bounds for the above probabilities. We start with the case of fixed n and then extend it to the case where n is a stopping time.

In the above notations

$$\begin{aligned}&\mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{i+1},\, S_{i+1}\bigg . \right. \left. =s_{i+1}\bigg |A^{k, k+1}_{[1, i]},\, S_i=s_i\right) \\&\quad = \frac{p^{s_{i+1}-s_i}q^{1-(s_{i+1}-s_i)}(\varGamma _{k}+\varGamma _{k+1})e^{\lambda i}}{(\varGamma _{k}+\varGamma _{k+1})e^{\lambda i} +\varGamma _{k-1}e^{\lambda _{k-1}(i-s_i)}+\varGamma _{k+2}e^{\lambda _{k+2}s_i}+\widetilde{\varGamma }_k}\\&\quad =\frac{p^{s_{i+1}-s_i}q^{1-(s_{i+1}-s_i)}}{1+\gamma _{k,1}e^{\lambda _{k-1}(i-s_i)-\lambda i}+\gamma _{k,2}e^{\lambda _{k+2}s_i-\lambda i}+ \widetilde{\gamma }_ke^{-\lambda i}}. \end{aligned}$$

Then, given n we obtain by repeated conditioning that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]},\, S_{n+1}=s_{n+1}, \ldots , S_1=s_1\right) =p^{s_{n+1}}q^{n+1-s_{n+1}}W_n\left( s_1, \ldots , s_n\right) , \end{aligned}$$

where

$$\begin{aligned} W_n\left( s_1, \ldots , s_n\right) = \prod \limits _{i=0}^n \frac{1}{1+\gamma _{k,1}e^{\lambda _{k-1}(i-s_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}s_i-\lambda i}+\widetilde{\gamma }_ke^{-\lambda i}}. \end{aligned}$$
(21)

Consequently, we get that

$$\begin{aligned} \begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right)&=\sum \limits _{s(n+1)}p^{s_{n+1}}q^{n+1-s_{n+1}}W_n\left( s_1, \ldots , s_n\right) ,\\&=\sum \limits _{s(n)}(p+q)p^{s_{n}}q^{n-s_{n}}W_n\left( s_1, \ldots , s_n\right) ,\\&=\sum \limits _{s(n)}p^{s_{n}}q^{n-s_{n}}W_n\left( s_1, \ldots , s_n\right) , \end{aligned} \end{aligned}$$
(22)

where the sum in the first line is over all possible trajectories \(s(n+1)=(s_1, \ldots , s_{n+1})\) of \(S(n+1)=(S_1, \ldots , S_{n+1})\) and the other two are over all possible trajectories \(s(n)=(s_1, \ldots , s_n)\) of \(S(n)=(S_1, \ldots , S_n).\) Therefore, we arrive at the following equation

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right) =\mathbb {E}_p\left( W_n\left( U_1, \ldots , U_n\right) \right) , \end{aligned}$$
(23)

where \(\mathbb {E}_p\) is the expectation with respect to the Bernoulli measure \(\mu _p\) defined in Sect. 4 and \(U_i,\) \(i\ge 1,\) are Binomial random variables defined in (11).
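The marginalisation step in (22) can be checked by exact enumeration over a small horizon: summing the weights \(p^{s_{n+1}}q^{n+1-s_{n+1}}W_n\) over paths of length \(n+1\) gives the same value as summing \(p^{s_n}q^{n-s_n}W_n\) over paths of length n. A sketch with arbitrary illustrative values for the \(\gamma \)-constants and \(\lambda \)-parameters:

```python
import itertools
import math

lam, lam_km1, lam_kp2 = 1.0, 0.6, 0.8       # illustrative lambda-parameters
g1, g2, gt, p, n = 0.4, 0.3, 2.0, 0.35, 8   # illustrative gamma-constants, p, horizon
q = 1 - p

def W(s):
    """W_n(s_1, ..., s_n) as in (21); s holds s_0 = 0, ..., s_n (extra entries ignored)."""
    return math.prod(
        1.0 / (1.0
               + g1 * math.exp(lam_km1 * (i - s[i]) - lam * i)
               + g2 * math.exp(lam_kp2 * s[i] - lam * i)
               + gt * math.exp(-lam * i))
        for i in range(n + 1))

def path_sum(length):
    """Sum of p^{s_m} q^{m - s_m} W_n over all monotone paths of the given length m."""
    total = 0.0
    for inc in itertools.product((0, 1), repeat=length):  # increments s_i - s_{i-1}
        s = [0]
        for d in inc:
            s.append(s[-1] + d)
        total += p ** s[-1] * q ** (length - s[-1]) * W(s)
    return total

# marginalising the last step (p + q = 1) leaves the sum unchanged, cf. (22)
assert abs(path_sum(n + 1) - path_sum(n)) < 1e-12
```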

Further, the assumptions of Lemmas 4–8 imply that \(\frac{\varGamma _i}{\varGamma _k+\varGamma _{k+1}}\le 1,\) \(i=1, \ldots , N.\) Therefore, the quantity \(\widetilde{\gamma }_k\) defined in (20) can be bounded as follows

$$\begin{aligned} \widetilde{\gamma }_k\le (N-4). \end{aligned}$$
(24)

Using bound (24) and inequality \(\log (1+z)\le z\) for all \(z\ge 0\) we obtain that

$$\begin{aligned} W_n\left( s_1, \ldots , s_n\right)&\ge \prod _{i=0}^n \frac{1}{1+ \gamma _{k,1}e^{\lambda _{k-1}(i-s_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}s_i-\lambda i}+(N-4)e^{-\lambda i}}\\&=e^{-\sum _{i=0}^n\log (1+\gamma _{k,1}e^{\lambda _{k-1}(i-s_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}s_i-\lambda i}+(N-4)e^{-\lambda i})}\\&\ge e^{-\left( \sum _{i=0}^n \gamma _{k,1}e^{\lambda _{k-1}(i-s_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}s_i-\lambda i}+c_1e^{-c_2i}\right) }\\&\ge \delta e^{-\gamma _{k,1}\sum _{i=0}^n e^{\lambda _{k-1}(i-s_i)-\lambda i}} e^{-\gamma _{k,2}\sum _{i=0}^n e^{\lambda _{k+2}s_i-\lambda i}}, \end{aligned}$$

for some \(\delta >0\) not depending on the configuration \(\mathbf{x}.\) On the other hand, note that

$$\begin{aligned} W_n\left( s_1, \ldots , s_n\right) \le \prod \limits _{i=0}^n \frac{1}{1+ \gamma _{k,1}e^{\lambda _{k-1}(i-s_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}s_i-\lambda i}}. \end{aligned}$$
(25)

The above inequalities yield the following lower and upper bounds

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right)\ge & {} \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}\sum _{i=0}^n e^{\lambda _{k-1}(i-U_i)-\lambda i}} e^{-\gamma _{k,2}\sum _{i=0}^n e^{\lambda _{k+2}U_i-\lambda i}} \right) , \end{aligned}$$
(26)
$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right)\le & {} \mathbb {E}_p\left( \prod \limits _{i=0}^n \frac{1}{1+ \gamma _{k,1}e^{\lambda _{k-1}(i-U_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}U_i-\lambda i}}\right) . \end{aligned}$$
(27)

We will also need a generalisation of lower bound (26) for probabilities \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \tau ]}),\) where \(\tau \) is one of the following stopping times: \(\min (n: S_n-c_1n\ge c_2),\) \(\min (n: n-S_n\ge c_3),\) or the minimum of two such stopping times. For the moment we do not specify the stopping time further, as it will be clear later which one is meant. Arguing as in Eq. (22), one can obtain that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \tau ]}\right) = \sum \limits _{n=0}^{\infty } \sum \limits _{s(n)}p^{s_n}q^{n-s_n}W_n(s_1, \ldots , s_n)1_{\{\mathsf M_n\}}, \end{aligned}$$

where \(\mathsf M_n\) is the set of paths \(s(n)=(s_1, \ldots , s_n)\) for which \(\tau =n+1.\) Furthermore, similarly to Eq. (23), we can rewrite the equation above as

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \tau ]}\right) = \mathbb {E}_p\left( W_{\tilde{\tau }}\left( U_1, \ldots , U_{\tilde{\tau }}\right) \right) , \end{aligned}$$

where \(\tilde{\tau }\) is the stopping time defined in the same way as \(\tau ,\) but with \(S_n\) replaced by \(U_n.\) Proceeding as in the derivation of lower bound (26), we obtain the following lower bound

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \tau ]}\right) \ge \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}\sum _{i=0}^{\tilde{\tau }-1} e^{\lambda _{k-1}(i-U_i)-\lambda i}} e^{-\gamma _{k,2}\sum _{i=0}^{\tilde{\tau }-1} e^{\lambda _{k+2}U_i-\lambda i}} \right) . \end{aligned}$$
(28)

Let us rewrite the lower bounds in terms of random sequences \(\zeta _1,\) \(\zeta _2\) and \(Z_n\) as defined in (12) and (13). In these notations, lower bounds (26) and (28) take the following form

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right) \ge \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}Z_{n}(\zeta _1)} e^{-\gamma _{k,2}Z_{n}(\zeta _2)} \right) \end{aligned}$$
(29)

and

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \tau ]}\right) \ge \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}Z_{\tilde{\tau }-1}(\zeta _1)} e^{-\gamma _{k,2}Z_{\tilde{\tau }-1}(\zeta _2)} \right) \end{aligned}$$
(30)

respectively.

Finally, letting \(n \rightarrow \infty \) in (26) and (29) we obtain the following bound

$$\begin{aligned} \begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty )}\right)&\ge \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}\sum _{i=0}^{\infty } e^{\lambda _{k-1}(i-U_i)-\lambda i}} e^{-\gamma _{k,2}\sum _{i=0}^{\infty }e^{\lambda _{k+2}U_i-\lambda i}} \right) \\&= \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}Z(\zeta _1)} e^{-\gamma _{k,2}Z(\zeta _2)} \right) . \end{aligned} \end{aligned}$$
(31)

5.2.2 Proof of Lemma 4

We start with the following proposition.

Proposition 8

Let \(\mu _p\) be the Bernoulli measure defined in Sect. 4, and let \(U_n,\, n\ge 1,\) be the corresponding Binomial random variables [defined in (11)]. Then

(1)

    given \(\varepsilon \in (0,\,1)\) and \(\kappa > 0,\) there exist positive constants \(c_1 \) and \( c_2\) such that

    $$\begin{aligned} \inf _{p \in (0,\,1)} \mu _p\left( \bigcap _{n=M}^{\infty }\left\{ \frac{n}{2}p(1-\kappa )-c_1\le U_n\le np(1+\kappa )+c_2\right\} \right) \ge \varepsilon , \end{aligned}$$
    (32)

    where \(M=[p^{-1}]\) is the integer part of \(p^{-1};\)

(2)

    given \(\lambda >0,\) there exists \(\varepsilon _1>0\) such that

    $$\begin{aligned} \inf \limits _{p\in (0,\, 1)}\mathbb {E}_p\left( e^{-p\sum _{i=0}^{\infty }e^{-\lambda U_i}}\right) \ge \varepsilon _1. \end{aligned}$$

Proof of Proposition 8

Set \(U_0=0\) and define the following random variables

$$\begin{aligned}&V_j=U_{jM}-U_{(j-1)M}=\sum _{i=(j-1)M+1}^{jM}\xi _{i},\quad j\ge 1,\\&Y_{j}=V_1+\cdots +V_{j},\quad j\ge 1,\\&Y_0=0. \end{aligned}$$

First, denote \(a(p) := \mathbb {E}_p(V_i)=pM=p[p^{-1}]\) and note that \(a(p) \in [1/2,\, 1]\) for all \(p \in (0,\,1).\) Moreover, \(\mathrm{Var}(V_i) = \mathbb {E}_p(V_i^2)-(\mathbb {E}_p(V_i))^2= p(1-p)[p^{-1}]\le 1.\) Now, consider the auxiliary process \(\chi _n := Y_n - n(1-\kappa )/2 + c',\) with \(\chi _0 = c'.\) Note that \( \mathbb {E}_p(\chi _{n+1} - \chi _n \> | \> \chi _n=\chi ) = a(p) - (1-\kappa )/2 > 0.\) Moreover, if we define the stopping time \(t_x = \min \{n\ge 0: \chi _n < x\},\) it follows from [11, Theorem 2.5.18] that there exist \(x_1\) and \(\alpha > 0\) such that

$$\begin{aligned} \mathbb {P}\left( \bigcap _{n=1}^{\infty }\left\{ Y_n \ge \frac{n}{2}(1-\kappa )-\left( c'-x_1\right) \right\} \right) = \mathbb {P}\left( t_{x_1} = \infty \right) \ge 1-\left( \frac{1+x_1}{1 +\chi _0} \right) ^{\alpha }. \end{aligned}$$

So, for every \(\varepsilon \in (0,\,1)\) and \(\kappa > 0,\) we can appropriately choose \(\alpha \) and \(\chi _0 = c' > x_1\) such that the probability in the above display is greater than \(\varepsilon /2.\) Analogously, if we define \(\chi _n = -Y_n + n(1+\kappa ),\) the upper bound can be found exactly as above, yielding

$$\begin{aligned} \mu _p\left( \bigcap _{n=1}^{\infty }\left\{ \frac{n}{2}(1-\kappa )-c\le Y_n\le n(1+\kappa )+c\right\} \right) \ge \varepsilon . \end{aligned}$$
(33)

Further, fix \(n\ge M.\) Let \(m_n\) and \(l_n\) be non-negative integers such that \(n=m_nM+l_n,\) where \(l_n<M.\) Then, on the event \(\bigcap _{n=1}^{\infty }\left\{ \frac{n}{2}(1-\kappa )-c\le Y_n\le n(1+\kappa )+c\right\} ,\) the following bounds hold

$$\begin{aligned} U_n\ge Y_{m_n}\ge \frac{1}{2}\left( \frac{n}{M}-\frac{l_n}{M}\right) (1-\kappa )-c\ge \frac{1}{2}np(1-\kappa )-c_1, \end{aligned}$$
(34)

and

$$\begin{aligned} U_n\le Y_{m_n+1}\le \left( \frac{n}{M}+\frac{M-l_n}{M}\right) (1+\kappa )+c\le np(1+\kappa )+c_2. \end{aligned}$$
(35)

Inequalities (33)–(35) yield bound (32).

Recall that \(M=[p^{-1}],\) and so,

$$\begin{aligned} p\sum _{i=0}^{M-1}e^{-\lambda U_i}\le pM\le 1. \end{aligned}$$

By combining this bound with bound (32), it follows that given \(\varepsilon \in (0,\,1)\) and \(\kappa >0\) we can find \(c_1>0\) such that with \(\mu _p\)-probability at least \(\varepsilon \)

$$\begin{aligned} p\sum _{i=0}^{\infty } e^{-\lambda U_i}\le 1+p\sum _{i=M}^{\infty } e^{-\lambda \left( \frac{1}{2}pi(1-\kappa )-c_1\right) }\le C \end{aligned}$$
(36)

for some deterministic constant \(C=C(\varepsilon ,\, \lambda )\) and all \(p\in (0,\,1).\) Therefore

$$\begin{aligned} \inf \limits _{p\in (0,\, 1)}\mathbb {E}_p\left( e^{-p\sum _{i=0}^{\infty }e^{-\lambda U_i}}\right) \ge \varepsilon e^{-C}=\varepsilon _1>0, \end{aligned}$$

as required. \(\square \)
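Part (2) of Proposition 8 can also be probed by simulation. For any p, \(\mathbb {E}_p\big (p\sum _{i\ge 0}e^{-\lambda U_i}\big )=p\sum _{i\ge 0}(1-p+pe^{-\lambda })^i=1/(1-e^{-\lambda }),\) so by Jensen's inequality the expectation in question is at least \(e^{-1/(1-e^{-\lambda })},\) uniformly in p. A Monte Carlo sketch with an illustrative \(\lambda \) and truncation level:

```python
import math
import random

random.seed(42)
lam, n_trunc, n_samples = 0.8, 2000, 400    # illustrative truncation and sample size

def estimate(p):
    """Monte Carlo estimate of E_p[exp(-p * sum_{i=0}^{n_trunc} e^{-lam * U_i})]."""
    acc = 0.0
    for _ in range(n_samples):
        u, s = 0, 1.0                       # the i = 0 term: U_0 = 0 gives e^0 = 1
        for _ in range(n_trunc):
            u += random.random() < p
            s += math.exp(-lam * u)
        acc += math.exp(-p * s)
    return acc / n_samples

# E_p[p * sum e^{-lam U_i}] = 1/(1 - e^{-lam}) for every p, so by Jensen the
# expectation is bounded below by exp(-1/(1 - e^{-lam})) uniformly in p
for prob in (0.05, 0.3, 0.7, 0.95):
    assert estimate(prob) > 0.05
```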

We are now ready to proceed with the proof of the lemma. Recall that \(\lambda _k=\lambda _{k+1}=:\lambda .\)

Proof of Part (1) of Lemma 4

Recall that in this case \(\lambda _{k-1}<\lambda _{k}=\lambda _{k+1}=\lambda ,\) \(\lambda \ge \lambda _{k+2}\) and \(\varGamma _k=\max _i\varGamma _i.\) Then,

$$\begin{aligned} \gamma _{k,1}Z\left( \zeta _1\right) = \gamma _{k,1}\sum _{i=0}^{\infty } e^{\lambda _{k-1}(i-U_i)-\lambda i}\le \sum _{i=0}^{\infty }e^{-(\lambda -\lambda _{k-1})i}\le C_1<\infty , \end{aligned}$$
(37)

where \(C_1>0\) is a deterministic constant and we used that \(\gamma _{k, 1}\le 1.\)

Further, if \(\lambda >\lambda _{k+2},\) then

$$\begin{aligned} \gamma _{k,2}Z\left( \zeta _2\right) =\gamma _{k,2}\sum _{i=0}^{\infty }e^{\lambda _{k+2}U_i-\lambda i} \le \sum _{i=0}^{\infty }e^{-(\lambda -\lambda _{k+2})i}\le C_2<\infty , \end{aligned}$$
(38)

where \(C_2>0\) is a deterministic constant and we used that \(\gamma _{k, 2}\le 1.\) Then, using bounds (37) and (38) in lower bound (31) gives that \( \mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \infty )})\ge \varepsilon \) for some \(\varepsilon >0,\) as claimed.

If \(\lambda =\lambda _{k+2},\) then bound (38) cannot be used, and we proceed as follows. Note that in this case

$$\begin{aligned} \gamma _{k,2}Z\left( \zeta _2\right) = \gamma _{k,2}\sum _{i=0}^{\infty }e^{\lambda (U_i-i)} \le q\sum _{i=0}^{\infty }e^{\lambda (U_i-i)}, \end{aligned}$$
(39)

as

$$\begin{aligned} \gamma _{k, 2}=\frac{\varGamma _{k+2}}{\varGamma _{k}+\varGamma _{k+1}}\le \frac{\varGamma _{k}}{\varGamma _{k}+\varGamma _{k+1}}=q=1-p, \end{aligned}$$
(40)

where p is defined in (19). Further, combining bounds (37) and (39) in (31) we get that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty )}\right) \ge \varepsilon _1 \mathbb {E}_p\left( e^{-q\sum _{i=0}^{\infty }e^{\lambda (U_i-i)}}\right) =\varepsilon _1\mathbb {E}_q\left( e^{-q\sum _{i=0}^{\infty }e^{-\lambda U_i}}\right) , \end{aligned}$$

where the equality holds by symmetry: replacing each \(\xi _i\) by \(1-\xi _i\) maps \(\mu _p\) to \(\mu _q\) and \(U_i-i\) to \(-U_i.\) It is left to note that the expectation on the right side of the last equation is bounded below uniformly over \(q\in (0,\, 1)\) by Part (2) of Proposition 8. \(\square \)
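The symmetry step replaces each \(\xi _i\) by \(1-\xi _i,\) which maps \(\mu _p\) to \(\mu _q\) and \(U_i-i\) to \(-U_i.\) For truncated sums the resulting identity can be verified by exact enumeration over all binary paths (illustrative parameter values):

```python
import itertools
import math

def U_path(xi):
    """Partial sums U_0 = 0, U_1, ..., U_n of a binary path."""
    out, s = [0], 0
    for x in xi:
        s += x
        out.append(s)
    return out

def E(p, f, n=10):
    """Exact expectation of f over {0,1}^n under the Bernoulli(p) product measure."""
    return sum(
        math.prod(p if x else 1 - p for x in xi) * f(xi)
        for xi in itertools.product((0, 1), repeat=n)
    )

lam, p = 0.8, 0.3                 # illustrative values
q = 1 - p
lhs = E(p, lambda xi: math.exp(-q * sum(math.exp(lam * (u - i))
                                        for i, u in enumerate(U_path(xi)))))
rhs = E(q, lambda xi: math.exp(-q * sum(math.exp(-lam * u) for u in U_path(xi))))
assert abs(lhs - rhs) < 1e-12     # mu_p-expectation maps to mu_q-expectation
```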

Proof of Part (2) of Lemma 4

Recall that in this case \(\lambda _{k-1}=\lambda _{k}=\lambda _{k+1}=\lambda \ge \lambda _{k+2},\) \(\varGamma _k=\max _i\varGamma _i\) and \(\varGamma _{k-1}\le \varGamma _{k+1}.\) These conditions give that \(e^{\lambda _{k-1}(i-U_i)-\lambda i}=e^{-\lambda U_i},\) \(e^{\lambda _{k+2}U_i-\lambda i}\le e^{\lambda (U_i-i)},\) and

$$\begin{aligned} \gamma _{k, 1}=\frac{\varGamma _{k-1}}{\varGamma _k+\varGamma _{k+1}}\le \frac{\varGamma _{k+1}}{\varGamma _k+\varGamma _{k+1}}=p. \end{aligned}$$
(41)

Recall also that \(\gamma _{k, 2}\le q=1-p\) [see (40)]. Using all these inequalities in lower bound (31) gives the following lower bound

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty )}\right) \ge \delta \mathbb {E}_p\left( e^{-p\sum _{i=0}^{\infty } e^{-\lambda U_i}} e^{-q\sum _{i=0}^{\infty } e^{\lambda (U_i-i)}} \right) . \end{aligned}$$
(42)

We have already shown in (36) that for any \(\varepsilon \in (0,\, 1)\) there exists constant \(C=C(\varepsilon )>0\) such that

$$\begin{aligned} \mu _p\left( p\sum _{i=0}^{\infty } e^{-\lambda U_i}\le C\right) \ge \varepsilon \quad \text {and} \quad \mu _p\left( q\sum _{i=0}^{\infty } e^{\lambda (U_i-i)}\le C\right) \ge \varepsilon \end{aligned}$$
(43)

for all p,  where the second bound holds by symmetry. Choosing \(\varepsilon >0.5\) we get that

$$\begin{aligned} \mu _p\left( p\sum _{i=0}^{\infty } e^{-\lambda U_i}\le C,\, q\sum _{i=0}^{\infty } e^{\lambda (U_i-i)}\le C\right) \ge 2\varepsilon -1>0, \end{aligned}$$

for all p. Combining this bound with Eq. (42) we finally obtain that \( \mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \infty )}) \ge \varepsilon _2 \) for some \(\varepsilon _2>0,\) as claimed. \(\square \)

5.2.3 Proof of Lemma 5

Proof of Part (1) of Lemma 5

Note that every time a particle is added to site k or \(k+1,\) the allocation rates at these sites are multiplied by \(e^{\lambda }.\) In particular, if a particle is added to site k,  then the allocation rate at \(k-1\) is multiplied by \(e^{\lambda _{k-1}}.\) Otherwise, if a particle is added to site \(k+1,\) then the allocation rate at \(k+2\) is multiplied by \(e^{\lambda _{k+2}}.\) All other rates remain unchanged. Thus, each allocation of a particle at k or \(k+1\) increases the ratio of the sum of the rates at \(k,\, k+1\) and \(k+2\) to the sum of the rates at all other sites by a multiplicative constant. This yields the following exponential bound

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( \bigcup _{i\ne k, k+1, k+2}A^i_{n+1}\bigg |A^{k, k+1}_{[1, n]}\right) \le C_1e^{-C_2n}, \end{aligned}$$
(44)

for some \(C_1,\, C_2>0.\) In turn, bound (44) implies that, with a positive probability not depending on \(\mathbf{x},\) the event \(A_{[1,\infty )}^{k, k+1}\cup \{\tau _{k+2}<w_k^{+}\}\) occurs, as claimed. Note also that the events \(A_{[1,\infty )}^{k, k+1}\) and \(\{\tau _{k+2}<w_k^{+}\}\) are mutually exclusive. Thus, with a positive probability, either all subsequent particles are allocated at k and \(k+1,\) or a particle is eventually placed at \(k+2\) before anywhere else outside \(k,\, k+1\) and \(k+2.\) Placing a particle at \(k+2\) can violate condition (6), because the maximal allocation probability can now be attained at sites \(k+2\) and \(k+3\) as well. Part (1) of Lemma 5 is proved. \(\square \)

Proof of Part (2) of Lemma 5

Note that \(e^{\lambda _{k-1}(i-U_i)-\lambda i}<e^{-(\lambda -\lambda _{k-1})i}\) and \(\lambda _{k-1}<\lambda .\) Consequently, for any n

$$\begin{aligned} Z_n\left( \zeta _1\right) =\sum _{i=0}^n e^{\lambda _{k-1}(i-U_i)-\lambda i}\le \sum _{i=0}^{\infty } e^{-(\lambda -\lambda _{k-1})i}<C<\infty . \end{aligned}$$
(45)

Note also that \(\gamma _{k, 1}\le 1\) and \(\gamma _{k, 2}\le 1.\) Combining these inequalities with Eq. (45) and letting \(n\rightarrow \infty \) in (29) gives that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty ]}\right) \ge \varepsilon _1\mathbb {E}_p\left( e^{-Z(\zeta _2)}\right) , \end{aligned}$$

for some \(\varepsilon _1>0.\) Further, the assumption \(r<z_2\) implies that \(\lambda _{k+2}p - \lambda < 0.\) Recall that the parameter \(r=x_{k+2}-x_{k-1}\) takes integer values, and \(p=p(r)\) is a monotonically increasing function of r. Let \(r_{0}\) be the maximal integer such that \(r_0<z_2,\) and let \(p_0=p(r_0),\) so that \(\lambda _{k+2}p_0-\lambda <0.\) It follows from Propositions 2 and 7 that for all \(0<p\le p_0\)

$$\begin{aligned} \mathbb {E}_{p}\left( e^{-Z(\zeta _2)}\right) \ge \mathbb {E}_{p_0}\left( e^{-Z(\zeta _2)}\right) >0, \end{aligned}$$

and, hence, \( \mathsf {P}_{\mathbf{x}}(A_{[1, \infty ]}^{k, k+1})\ge \varepsilon \) for some uniform \(\varepsilon >0\) over configurations \(\mathbf{x}\) satisfying \(r<z_2.\) Part (2) of Lemma 5 is proved. \(\square \)

Proof of Part (3) of Lemma 5

We are going to use the following relaxation of upper bound (27)

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right) \le \mathbb {E}_p\left( \prod \limits _{i=0}^n \frac{1}{1+ \gamma _{k,2}e^{\lambda _{k+2}U_i-\lambda i}}\right) . \end{aligned}$$
(46)

Next, the assumption \(r\ge z_2\) implies that \(\lambda _{k+2}p- \lambda \ge 0.\) Therefore, by the strong law of large numbers, \( \mu _p\text {-} a.s.\) \(\lambda _{k+2}U_i-\lambda i\ge 0\) for infinitely many i and, hence, \(\prod _{i=0}^n \frac{1}{1+\gamma _{k,2}e^{\lambda _{k+2}U_i-\lambda i}}\rightarrow 0.\) Since the product is bounded by 1,  Lebesgue's dominated convergence theorem implies that the expectation on the right side of (46) tends to 0 as \(n\rightarrow \infty ,\) so that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A_{[1, \infty ]}^{k, k+1}\right) =\lim \limits _{n\rightarrow \infty }\mathsf {P}_{\mathbf{x}}\left( A_{[1, n+1]}^{k, k+1}\right) =0, \end{aligned}$$
(47)

as claimed. Note that Eq. (47) combined with Part (1) of the lemma further yields that \(\mathsf {P}_{\mathbf{x}}(\tau _{k+2}<w_{k}^+)>\epsilon \) for some \(\epsilon >0.\) \(\square \)
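When the drift \(\lambda _{k+2}p-\lambda \) is strictly positive, the collapse of the product in (46) is already visible numerically. A simulation sketch with illustrative values (\(\gamma _{k,2}\) is replaced by an arbitrary positive constant):

```python
import math
import random

random.seed(7)
lam, lam_kp2, p, gamma, n = 0.5, 1.0, 0.8, 0.3, 500   # drift lam_kp2*p - lam = 0.3 > 0

u, prod = 0, 1.0
for i in range(n + 1):
    # one factor of the product in (46), using the current U_i
    prod *= 1.0 / (1.0 + gamma * math.exp(lam_kp2 * u - lam * i))
    u += random.random() < p                          # U_{i+1} = U_i + xi_{i+1}
assert prod < 1e-6   # the product vanishes as n grows
```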

Proof of Part (4) of Lemma 5

Define

$$\begin{aligned} \widehat{n}=\min \left( n:\gamma _{k,2}e^{\lambda _{k+2}S_n-\lambda n}\ge 1\right) . \end{aligned}$$
(48)

In other words, \({\widehat{n}}\) is the first time at which the allocation rate at site \(k+2\) exceeds the sum of the allocation rates at sites k and \(k+1,\) and therefore becomes the maximal rate.

Applying lower bound (30) gives that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}]}\right) \ge \delta \mathbb {E}_p\left( e^{- \gamma _{k,1}Z_{{\widehat{m}} -1}(\zeta _1)} e^{- \gamma _{k,2}Z_{{\widehat{m}} -1}(\zeta _2)}\right) , \end{aligned}$$
(49)

where

$$\begin{aligned} {\widehat{m}}=\min \left( m:\gamma _{k, 2}e^{\lambda _{k+2}U_m-\lambda m}\ge 1\right) . \end{aligned}$$
(50)

Equation (45) yields that \( \gamma _{k,1}Z_{{\widehat{m}} -1}(\zeta _1)<Z(\zeta _1)<C<\infty .\) This allows us to rewrite bound (49) as follows: \( \mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \widehat{n}]})\ge \varepsilon _2\mathbb {E}_p(e^{- \gamma _{k,2}Z_{\widehat{m}-1}(\zeta _2)} ),\) for some \(\varepsilon _2>0.\) By assumption, \(r > z_2.\) Let now \(r_0\) be the minimal integer such that \(r_0>z_2,\) and let \(p_0=p(r_0).\) Then \(\lambda _{k+2}p- \lambda \ge \lambda _{k+2}p_0 - \lambda > 0\) for any \(p\ge p_0.\) It follows from Propositions 6 and 7 that for all \(p\ge p_0\)

$$\begin{aligned} \mathbb {E}_{p}\left( e^{-\gamma _{k,2} Z_{{\widehat{m}}-1}(\zeta _2)}\right) \ge \mathbb {E}_{p_0}\left( e^{-Z(\eta _2)}\right) >0, \end{aligned}$$

where \(\eta _2\) is the sequence reciprocal to \(\zeta _2,\) and \(\epsilon _2:=\mathbb {E}_{p_0}(e^{-Z(\eta _2)})>0.\) Hence \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, {\widehat{n}}]}) \ge \varepsilon _2\epsilon _2>0.\)

Next, recall event \(B_k\) defined in (5). Note that \(A^{k, k+1}_{[1, {\widehat{n}}]} \cap A_{{\widehat{n}} + 1}^{k+2}\subseteq B_k,\) so that \(\mathsf {P}_{\mathbf{x}}(B_k)\ge \varepsilon _2\epsilon _2/N>0\) as well.

It is left to show that the maximal rate \(\max _i\varGamma _i\) relocates as described in (7). Clearly, this is always the case if \(\lambda <\min (\lambda _{k+2},\, \lambda _{k+3}).\) However, relocation can fail in the following particular situation. Namely, suppose that \(\lambda _{k+3} \le \lambda \) and the initial configuration \(\mathbf{x}\) is such that \(\varGamma _k = \max _i \varGamma _i\) and \(\varGamma _{k+3}e^{\lambda _{k+3}} \ge \varGamma _k.\) In this case, if \(\tau _{k+2} = 1,\) then the maximal rate might move to \(k+3.\) However, note that \(\tau _{k+2}\ge 2\) on the event \(A^{k, k+1}_{[1, {\widehat{n}}]}.\) Indeed, by definition (48) \({\widehat{n}}\ge 1,\) and, hence, on this event \(\tau _{k+2}\ge 2\) as \(\tau _{k+2}>{\widehat{n}},\) so that at least one particle is deposited at \(\{k,\, k+1\}\) by time \(\tau _{k+2}.\) It is not hard to check that placing one particle at \(\{k,\, k+1\}\) makes the relocation of \(\max _i\varGamma _i\) to \(k+3\) impossible when \(\lambda _{k+3}\le \lambda .\) \(\square \)

5.2.4 Proof of Lemma 6

First, note that the proof of Part (1) of Lemma 6 is analogous to that of Part (1) of Lemma 5, so we omit the technical details. For simplicity of notation, we write \(\lambda =\lambda _{k}=\lambda _{k+1}\) in the rest of the proof.

Proof of Part (2) of Lemma 6

Recall lower bound (31)

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty ]}\right) \ge \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}Z(\zeta _1)} e^{- \gamma _{k,2}Z(\zeta _2)}\right) . \end{aligned}$$

Note that \(z_1< r < z_2\) if and only if both \(\lambda _{k-1}(1-p)-\lambda <0\) and \(\lambda _{k+2}p-\lambda <0.\) Therefore, it follows from Proposition 2 that \( \mu _p\text {-} a.s.\) both \(Z(\zeta _1)<\infty \) and \(Z(\zeta _2)<\infty .\) Consequently,

$$\begin{aligned} \mathbb {E}_p\left( e^{-\gamma _{k,1}Z(\zeta _1)} e^{- \gamma _{k,2}Z(\zeta _2)}\right) \ge \mathbb {E}_p\left( e^{-Z(\zeta _1)} e^{- Z(\zeta _2)}\right) \ge \varepsilon (p)>0, \end{aligned}$$

as \(\gamma _{k, i}\le 1,\) \(i=1,\,2,\) so that \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \infty ]})\ge \delta \varepsilon (p).\) It is left to note that there are only finitely many values (depending only on the \(\lambda \)'s) of the integer-valued parameter r satisfying \(z_1<r<z_2,\) and, hence, only finitely many possible values of the probability p. Therefore, the constant \(\varepsilon (p)\) can be chosen as the minimum over those values of p. This concludes the proof of the second part of the lemma. \(\square \)
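In other words, the uniform constant in Part (2) can be written explicitly as

$$\begin{aligned} \varepsilon :=\min \left\{ \varepsilon \left( p(r)\right) : r\in \mathbb {Z},\ z_1<r<z_2\right\} >0, \qquad \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty ]}\right) \ge \delta \varepsilon , \end{aligned}$$

where the minimum is taken over a finite (and, under the assumptions of Part (2), non-empty) set of values of r.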

Proof of Part (3) of Lemma 6

Let us start by noting the following. Assumption \(r \le z_1\) implies that \(\lambda _{k-1}(1-p)-\lambda \ge 0,\) and assumption \(r \ge z_2\) implies that \(\lambda _{k+2}p - \lambda \ge 0.\) Therefore, the law of large numbers yields that \(\mu _p\text {-}a.s.\) at least one of the events \(\{\lambda _{k-1}(i-U_i)-\lambda i\ge 0\}\) and \(\{\lambda _{k+2}U_i-\lambda i \ge 0\}\) occurs for infinitely many i. Consequently, \( \mu _p\text {-} a.s.\) \(\prod \nolimits _{i=0}^n \frac{1}{1+ \gamma _{k,1}e^{\lambda _{k-1}(i-U_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}U_i-\lambda i}}\rightarrow 0,\) as \(n\rightarrow \infty .\) Using bound (27) and Lebesgue's dominated convergence theorem, we obtain that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, n+1]}\right) \le \mathbb {E}_p\left( \prod \limits _{i=0}^n \frac{1}{1+ \gamma _{k,1}e^{\lambda _{k-1}(i-U_i)-\lambda i}+ \gamma _{k,2}e^{\lambda _{k+2}U_i-\lambda i}}\right) \rightarrow 0, \end{aligned}$$

as \(n\rightarrow \infty .\) Hence, \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \infty ]})=0,\) and, therefore, \(\mathsf {P}_{\mathbf{x}}(D_k)\ge \varepsilon ,\) as claimed. \(\square \)

5.2.5 Proof of Lemma 7

The proof here is similar to the proof of Part (4) of Lemma 5. The common starting point is the lower bound (30) where \(\tau \) and \(\tilde{\tau }\) are appropriately chosen stopping times.

Proof of Parts (1) and (2) of Lemma 7

First, note that the random variables \(Z(\zeta _1)\) and \(Z(\zeta _2)\) are finite if \(\lambda _{k-1}(1-p)-\lambda <0\) and \(\lambda _{k+2}p-\lambda <0,\) respectively. In fact, by our assumptions, precisely one of these conditions is necessarily satisfied so that one of \(Z(\zeta _1)\) and \(Z(\zeta _2)\) is almost surely finite. Then we apply bound (30) with the corresponding pair of stopping times \((\tau , \,\tilde{\tau })=({\widehat{n}}_2,\, {\widehat{m}}_2)\) or \((\tau , \,\tilde{\tau })=({\widehat{n}}_1,\, {\widehat{m}}_1)\) respectively, where

$$\begin{aligned} {\widehat{n}}_1&=\min \left( n:\gamma _{k, 1}e^{\lambda _{k-1}(n-S_n)-\lambda n}\ge 1\right) ,\\ {\widehat{n}}_2&=\min \left( n:\gamma _{k, 2}e^{\lambda _{k+2}S_n-\lambda n}\ge 1\right) ,\\ {\widehat{m}}_{1}&=\min \left( m: \gamma _{k, 1}e^{\lambda _{k-1}(m-U_m)-\lambda m}\ge 1\right) ,\\ {\widehat{m}}_2&=\min \left( m: \gamma _{k, 2}e^{\lambda _{k+2}U_m-\lambda m}\ge 1\right) . \end{aligned}$$

For concreteness, consider the case where \(\{k,\, k+1\}\) is of type 2 and \(r>z_1\ge z_2,\) in which case \(\lambda _{k-1}(1-p)-\lambda <0\) and \(\lambda _{k+2}p-\lambda >0.\) Applying bound (30) with \((\tau ,\, \tilde{\tau })=({\widehat{n}}_2,\, {\widehat{m}}_2)\) yields that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}_2]}\right) \ge \delta \mathbb {E}_p\left( e^{- \gamma _{k, 1}Z_{{\widehat{m}}_2-1}(\zeta _1)} e^{- \gamma _{k, 2}Z_{{\widehat{m}}_2-1}(\zeta _2)}\right) . \end{aligned}$$

Condition \(\lambda _{k-1}(1-p)-\lambda <0\) and Proposition 2 imply that \(Z(\zeta _1)<\infty \>\> \mu _p \text {-a.s.}\) Therefore, we can bound \(\gamma _{k, 1}Z_{{\widehat{m}}_2-1}(\zeta _1)\le Z(\zeta _1),\) as \(\gamma _{k, 1}\le 1.\) Also, condition \(\lambda _{k+2}p-\lambda >0\) and Proposition 5 imply that \(\gamma _{k, 2}Z_{{\widehat{m}}_2-1}(\zeta _2)<\infty \> \mu _p\text {-a.s.}\) Combining the above, we arrive at the following lower bound

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}_2]}\right) \ge \delta \mathbb {E}_p\left( e^{-Z(\zeta _1)} e^{- \gamma _{k, 2}Z_{{\widehat{m}}_2-1}(\zeta _2)}\right) . \end{aligned}$$

Moreover, let \(\eta _2\) be the sequence reciprocal to \(\zeta _2.\) Then, applying Proposition 5 again, we get that \(Z(\eta _2)<\infty \) \(\mu _p\)-a.s., \( Z(\eta _2)\ge _{st}\gamma _{k, 2}Z_{{\widehat{m}}_2-1}(\zeta _2)\) and

$$\begin{aligned} \mathbb {E}_{p}\left( e^{-Z(\zeta _1)} e^{- \gamma _{k, 2}Z_{{\widehat{m}}_2-1}(\zeta _2)}\right) \ge \mathbb {E}_{p}\left( e^{-Z(\zeta _1)} e^{- Z(\eta _2)}\right) >0. \end{aligned}$$

Let us show that, when \(r>z_1,\) the expectation on the right-hand side of the preceding display is uniformly bounded below over \(p=p(r).\) To this end, take the minimal integer \(r_0\) such that \(r_0>z_1,\) so that condition \(r>z_1\) implies \(p>p_0=p(r_0),\) and, hence, \(\lambda _{k-1}(1-p)-\lambda<\lambda _{k-1}(1-p_0)-\lambda <0\) and \(\lambda _{k+2}p-\lambda>\lambda _{k+2}p_0-\lambda >0.\) This implies the following. First, consider the random variable \(Z(\zeta _1)\) with distribution determined by parameter \(p_0.\) By Propositions 2 and 7, it follows that \(Z(\zeta _1)\) is almost surely finite, and, moreover, it stochastically dominates any other random variable \(Z(\zeta _1)\) with distribution determined by \(p>p_0.\) Second, consider the random variable \(Z(\eta _2),\) where \(\eta _2\) is the sequence reciprocal to a sequence \(\zeta _2\) whose distribution is determined by parameter \(p_0.\) By Propositions 2, 3 and 7, it follows that \(Z(\eta _2)\) is almost surely finite and, moreover, it stochastically dominates any other random variable \(Z(\eta _2),\) where \(\eta _2\) is reciprocal to \(\zeta _2\) whose distribution is determined by \(p>p_0.\)
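The stochastic domination enters through the elementary fact that \(x\mapsto e^{-x}\) is decreasing: for random variables X and Y,

$$\begin{aligned} X\ge _{st}Y \quad \Longrightarrow \quad \mathbb {E}\left( e^{-X}\right) \le \mathbb {E}\left( e^{-Y}\right) , \end{aligned}$$

applied here with X being \(Z(\zeta _1)\) (respectively, \(Z(\eta _2)\)) under parameter \(p_0,\) and Y the corresponding random variable under parameter \(p>p_0.\)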

Therefore, \(\mathbb {E}_{p}( e^{-Z(\zeta _1)} e^{- Z(\eta _2)})\ge \mathbb {E}_{p_0}( e^{-Z(\zeta _1)} e^{- Z(\eta _2)}).\) Summarizing the above, we finally obtain that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}_2]}\right) \ge \delta \mathbb {E}_{p_0}\left( e^{-Z(\zeta _1)}e^{-Z(\eta _2)}\right) >0. \end{aligned}$$

We have considered here only the case where \(\{k,\,k+1\}\) is of type 2 and \(r>z_1,\) but, after rearranging the stopping times above, the reasoning for all the remaining cases stated in Parts (1) and (2) of Lemma 7 is exactly the same. \(\square \)

Proof of Part (3) of Lemma 7

Let us obtain the lower bound in Part (3) of Lemma 7. In this case \(\{k,\, k+1\}\) is a local minimum of type 2 and \(z_2<r<z_1.\) The double inequality implies that both \(\lambda _{k-1}(1-p)-\lambda >0\) and \(\lambda _{k+2}p-\lambda >0.\) As a result, both \(Z(\zeta _1)\) and \(Z(\zeta _2)\) are infinite. Accordingly, we modify bound (30), with stopping times \(\tau ={\widehat{n}}=\min ({\widehat{n}}_1,\, {\widehat{n}}_2)\) and \(\tilde{\tau }={\widehat{m}}= \min ({\widehat{m}}_1,\, {\widehat{m}}_2),\) as follows:

$$\begin{aligned} \begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}]}\right)&\ge \delta \mathbb {E}_p\left( e^{- \gamma _{k,1}Z_{{\widehat{m}}-1}(\zeta _1)} e^{- \gamma _{k,2}Z_{{\widehat{m}}-1}(\zeta _2)} \right) \\&\ge \delta \mathbb {E}_p\left( e^{-\gamma _{k,1}Z_{\widehat{m}_1-1}(\zeta _1)} e^{-\gamma _{k,2}Z_{{\widehat{m}}_2-1}(\zeta _2)} \right) , \end{aligned} \end{aligned}$$

where in the last inequality we bounded \({\widehat{m}}=\min ({\widehat{m}}_1,\, {\widehat{m}}_2)\) by \({\widehat{m}}_1\) and \({\widehat{m}}_2,\) respectively. By Proposition 5, \(\mu _p\text {-}a.s.\) both \( \gamma _{k,1}Z_{\widehat{m}_1-1}(\zeta _1)<\infty \) and \( \gamma _{k,2}Z_{\widehat{m}_2-1}(\zeta _2)<\infty .\) Therefore, \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, {\widehat{n}}]})\ge \varepsilon (p)>0.\) Further, there are only finitely many integers r such that \(z_2<r<z_1.\) Consequently, there are only finitely many corresponding values of the probability p,  and \(\mathsf {P}_{\mathbf{x}}(A^{k, k+1}_{[1, \widehat{n}]})\ge \varepsilon \) for some \(\varepsilon >0\) uniformly over all values of p in this finite set.

Finally, relocation of the maximal rate in all cases covered by Lemma 7 can be shown by modifying the argument used in the proof of Part (4) of Lemma 5. \(\square \)

5.2.6 Proof of Lemma 8

We skip proofs of Parts (1) and (3) as they are analogous to the proofs of Parts (1) and (3) of Lemma 5. Proofs of Parts (2) and (4) can be obtained by appropriately modifying proofs of Parts (2) and (4) of Lemma 5 and combining them with the ideas in the proof of Lemma 4. Modifications are due to condition \(\lambda _{k-1}=\lambda \) implying that \(z_1=-\infty <z_2\) (see Remark 2).

Proof of Part (2) of Lemma 8

Recall that in this case \(r<z_2,\) so that \(\lambda _{k+2}p-\lambda <0\) and \(p<p_0,\) where \(p_0\) is defined in Part (2) of Lemma 5. Repeating the proof of Part (2) of Lemma 5 and using that \(\gamma _{k, 1}\le p\) and \(\gamma _{k,2}\le 1\) [see (41) and (40)] we obtain the following lower bound

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, \infty ]}\right) \ge \mathbb {E}_p\left( e^{-pZ(\zeta _1)}e^{-Z(\zeta _2)}\right) . \end{aligned}$$
(51)

Our assumptions imply that both \(Z(\zeta _1)\) and \(Z(\zeta _2)\) are almost surely finite by Proposition 2. Fix \(\varepsilon >0.5,\) let \(C_1=C_1(\varepsilon )>0\) be such that

$$\begin{aligned} \mu _p\left( pZ\left( \zeta _1\right) \le C_1\right) =\mu _p\left( p\sum _{i=0}^{\infty }e^{-\lambda U_i}\le C_1\right) \ge \varepsilon \end{aligned}$$
(52)

for all \(p\in (0,\, 1)\) [see (43)], and let \(C_2=C_2(\varepsilon )\) be such that \(\mu _{p_0}(Z(\zeta _2)\le C_2)\ge \varepsilon .\) The last inequality yields that \(\mu _{p}(Z(\zeta _2)\le C_2)\ge \mu _{p_0}(Z(\zeta _2)\le C_2)\ge \varepsilon ,\) as \(Z(\zeta _2)\) with distribution determined by parameter \(p_0\) stochastically dominates \(Z(\zeta _2)\) with distribution determined by any parameter \(p<p_0.\) Finally, by the same elementary argument as in the proof of Lemma 4, we get that \(\mu _p(pZ(\zeta _1)\le C_1,\, Z(\zeta _2)\le C_2)\ge 2\varepsilon -1>0,\) which implies that the expectation on the right-hand side of (51) is bounded away from zero, so that \( \mathsf {P}_{\mathbf{x}}(A_{[1, \infty ]}^{k, k+1})\ge \varepsilon _1 \) for some \(\varepsilon _1>0\) uniform over configurations \(\mathbf{x}\) satisfying \(r<z_2.\) \(\square \)
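The elementary argument mentioned above is simply a union bound: writing \(A=\{pZ(\zeta _1)\le C_1\}\) and \(B=\{Z(\zeta _2)\le C_2\},\) we have

$$\begin{aligned} \mu _p\left( A\cap B\right) \ge \mu _p(A)+\mu _p(B)-1\ge 2\varepsilon -1>0, \end{aligned}$$

so that the expectation in (51) admits the explicit bound

$$\begin{aligned} \mathbb {E}_p\left( e^{-pZ(\zeta _1)}e^{-Z(\zeta _2)}\right) \ge e^{-C_1-C_2}\left( 2\varepsilon -1\right) >0. \end{aligned}$$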

Proof of Part (4) of Lemma 8

Recall that in this case \(r>z_2,\) so that \(\lambda _{k+2}p-\lambda >0\) and \(p>p_0,\) where \(p_0\) is now defined in Part (4) of Lemma 5. Repeating the proof of Part (4) of Lemma 5 and using again that \(\gamma _{k, 1}\le p\) we obtain the following lower bound

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}]}\right) \ge \delta \mathbb {E}_p\left( e^{-pZ(\zeta _1)}e^{- \gamma _{k,2}Z_{{\widehat{m}} -1}(\zeta _2)}\right) , \end{aligned}$$

where \( {\widehat{n}}\) and \({\widehat{m}}\) are defined in (48) and (50), respectively. Our assumptions imply that both \(Z(\zeta _1)\) and \(Z_{{\widehat{m}} -1}(\zeta _2)\) are almost surely finite by Propositions 2 and 5. Further, Proposition 5 yields that

$$\begin{aligned} \mathsf {P}_{\mathbf{x}}\left( A^{k, k+1}_{[1, {\widehat{n}}]}\right) \ge \delta \mathbb {E}_p\left( e^{-pZ(\zeta _1)}e^{-Z(\eta _2)}\right) , \end{aligned}$$
(53)

where \(\eta _2\) is the random sequence reciprocal to \(\zeta _2.\)

Let \(\varepsilon >0.5\) and \(C_1=C_1(\varepsilon )>0\) be such that (52) holds, and let \(C_2=C_2(\varepsilon )\) be such that \(\mu _{p_0}(Z(\eta _2)\le C_2)\ge \varepsilon .\) The last inequality yields that

$$\begin{aligned} \mu _{p}\left( Z\left( \eta _2\right) \le C_2\right) \ge \mu _{p_0}\left( Z\left( \eta _2\right) \le C_2\right) \ge \varepsilon , \end{aligned}$$

as \(Z(\eta _2)\) with distribution determined by parameter \(p_0\) stochastically dominates \(Z(\eta _2)\) with distribution determined by any parameter \(p>p_0.\)

As at the same stage of the proof of Part (2), we can now conclude that the expectation on the right-hand side of (53) is bounded away from zero, which implies that \( \mathsf {P}_{\mathbf{x}}(A_{[1, {\widehat{n}}]}^{k, k+1})\ge \varepsilon _2 \) for some \(\varepsilon _2>0\) uniform over configurations \(\mathbf{x}\) satisfying \(r>z_2.\) \(\square \)

5.2.7 Proof of Corollary 1

The critical cases where \(r = z_1\) or \(r = z_2\) need to be treated separately, since they cannot be handled directly by the above arguments. However, a slight modification of the proof of each lemma encompasses these critical cases.

The modification is the same for all lemmas, but for concreteness let us consider the critical case described in Part (3) of Lemma 5, assuming that \(r=z_2.\) We start by commenting on the effect already discussed in the proof of Part (4) of Lemma 5. Namely, recall that if \(\lambda _{k+3}<\lambda _k=\lambda _{k+1},\) \(\varGamma _{k+3}e^{\lambda _{k+3}}\ge \varGamma _k,\) and \(\varGamma _k=\max _i\varGamma _i,\) then \(\tau _{k+2} = 1\) can make the maximal rate move to \(k+3.\) One can check that this is the only situation that can possibly relocate the maximal rate to a site with a smaller \(\lambda .\) To avoid it, it suffices to place a particle at k at the first step, which happens with probability at least 1/N. Therefore, without loss of generality we can exclude this case.

Next, if at time \(\tau _{k+2}\) the maximal rate relocates either to \(k+2\) or to \(k+3\) (provided \(\lambda _{k+3}>\lambda _k=\lambda _{k+1}\)), then we are done. Suppose the opposite, namely, that at time \(\tau _{k+2}\) the maximal allocation rate remains where it was, that is, at k or at \(k+1.\) It is left to note that, given event \(A_{[1, \tau _{k+2}-1]}^{k, k+1},\) placing a particle at site \(k+2\) at moment \(\tau _{k+2}\) increases the configuration parameter \(r=x_{k+2}-x_{k-1}\) by 1,  so that the resulting configuration satisfies \(r>z_2.\) By Part (4) of Lemma 5, the subsequent particles allocated at \(\{k,\, k+1\}\) will eventually relocate the maximal rate as prescribed.
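Explicitly, if \(r=z_2\) before time \(\tau _{k+2},\) then placing a particle at site \(k+2\) yields the new configuration parameter

$$\begin{aligned} r'=\left( x_{k+2}+1\right) -x_{k-1}=r+1=z_2+1>z_2, \end{aligned}$$

which puts the resulting configuration in the regime covered by Part (4) of Lemma 5.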

The other critical cases can be handled similarly, and we skip the straightforward technical details.

6 Proof of Theorem 1

The idea of the proof is briefly as follows. Given any initial state \(X(0) = \mathbf{x},\) the site k where \(\varGamma _k(\mathbf{x})=\max _{i=1,\ldots , N}(\varGamma _i(\mathbf{x}))\) is identified. Then a particle allocation strategy is drawn so that it always results in localisation of growth as described in Theorem 1. Lemmas 1–8 enable us to identify the corresponding strategy for each particular case and to bound its probability from below uniformly over initial configurations (see Remark 1). Should a particular strategy fail, which means that at a certain step n a particle is not allocated according to that strategy but somewhere else, a new one is drawn and the procedure restarts from X(n). Since there are finitely many possible strategies, it follows from the renewal argument below that almost surely one of them eventually succeeds.

In what follows, when referring to Lemma 2 or one of Lemmas 4–8, this automatically includes the symmetric cases obtained by re-labelling the graph in reverse order (as explained in Remark 3). Also, local minima of size 2 and type 1 automatically include the limiting case described in Remark 2.

Let \(X(n) = \mathbf{x}\) be a fixed and arbitrary configuration, and:

  1. (1)

    Assume that \( \varGamma _k(\mathbf{x})=\max _{i=1,\ldots , N}(\varGamma _i(\mathbf{x}))\) and \(\lambda _{k-1}\ne \lambda _k \ne \lambda _{k+1}.\)

    1. (1.1)

      Let k be a local maximum. By Lemma  1, with positive probability, all subsequent particles are allocated at k.

    2. (1.2)

      Let k be either a growth point, or a local minimum. By Lemmas 2 and 3, with positive probability, the maximal rate relocates in finite time to one of its nearest neighbours having parameter \(\lambda > \lambda _k.\)

  2. (2)

    Assume that \(\varGamma _k(\mathbf{x})=\max _{i=1,\ldots , N}(\varGamma _i(\mathbf{x}))\) and that additional assumptions of Lemma 4 are satisfied. Lemma 4 yields that, with positive probability, all subsequent particles are allocated at sites \(\{k,\, k+1\}.\)

  3. (3)

    Assume that \(\max (\varGamma _{k}(\mathbf{x}),\, \varGamma _{k+1}(\mathbf{x}))=\max _{i}\varGamma _i(\mathbf{x}),\) where \(\{k,\, k+1\}\) is either a saddle point, or a local minimum of size 2 and type 1. Additional assumptions on \(\mathbf{x},\) as described in Part (2) of Lemmas 5, 6 and 8, guarantee that, with positive probability, all subsequent particles are allocated at sites \(\{k,\, k+1\}.\)

  4. (4)

    Assume that \(\max (\varGamma _{k}(\mathbf{x}),\, \varGamma _{k+1}(\mathbf{x}))=\max _{i}\varGamma _i(\mathbf{x}),\) where \(\{k,\, k+1\}\) is either a saddle point of size 2,  or a local minimum of size 2 of either type. Assume also that configuration \(\mathbf{x}\) is such that the assumptions of the preceding item do not hold. Such cases are covered by Parts (3) and (4) of Lemma 5, Part (3) of Lemma 6, Lemma 7, and, finally, Parts (3) and (4) of Lemma 8 complemented by Corollary 1. In all those cases, with positive probability, the maximal rate relocates in a random but finite time to a site with a larger parameter \(\lambda .\)

  5. (5)

    Finally, for the remaining cases of local minima, maxima or saddle points of size greater than 2,  it is not hard to check that such cases can be reduced to one, or a combination, of the above items.

Thus, for every configuration \(\mathbf{x}\) and every set of positive real parameters \(\varLambda = (\lambda _k)_{k=1}^N,\) we have identified two types of events. First, there are events resulting in localisation of growth at either a single site or a pair of neighbouring sites [as described in Theorem 1 Parts (1) and (2), respectively]. Call such events \(\mathsf {L}\)-events. Second, there are events resulting in relocation of the maximal rate. Call such events \(\mathsf {R}\)-events.

The next step of the proof is to define a sequence of random moments of time \((T_j)_{j \ge 0}\) called renewal moments. First, set \(T_0=0.\) Now, given \(T_j,\) let us define \(T_{j+1}.\) Suppose that at time \(T_j\) the process is at state \(\mathbf{x}.\) We identify an event \(R_1,\ldots , R_mL\) (strategy) formed by a sequence of m \(\mathsf {R}\)-events (possibly none) ending with an \(\mathsf {L}\)-event. At the first moment of time \(t>T_j\) at which a particle is not allocated according to \(R_1,\ldots , R_mL,\) we set \(T_{j+1}=t.\)

Note that \(\mathsf {R}\)-events are defined so that the maximal rate always relocates to a site with a strictly larger parameter \(\lambda .\) It follows that the number of \(\mathsf {R}\)-events preceding any \(\mathsf {L}\)-event is bounded by the number of different values of \(\lambda _i, \> i=1,\ldots , N.\) Then, by Lemmas 1–8, the probabilities of events \(R_1,\ldots , R_mL\) are bounded below uniformly over configurations, where \(m \le N.\)

Further, let \(j_{max}:= \max \{ j \ge 0 : T_j < \infty \}.\) Lemmas 1–8 imply the existence of a uniform bound \(\epsilon >0\) such that \(\mathsf {P}(T_j = \infty )\ge \epsilon \) on \(\{T_{j-1} < \infty \}.\) Therefore, \(\mathsf {P}(T_j < \infty ) \le 1 - \epsilon \) on \(\{T_{j-1} < \infty \},\) or, equivalently, \(\mathsf {P}(j_{max} \ge j \,|\, j_{max} \ge j-1) \le 1-\epsilon .\) Thus, \(\mathsf {P}(j_{max}< \infty ) = 1.\) This implies that \(T_j = \infty \) for some j,  so that, with probability one, a certain allocation strategy \(R_1,\ldots , R_mL\) eventually succeeds, that is, the growth process localises as claimed.
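Indeed, iterating the conditional bound yields the geometric estimate

$$\begin{aligned} \mathsf {P}\left( j_{max}\ge j\right) \le \left( 1-\epsilon \right) ^{j}\rightarrow 0, \qquad \text {as } j\rightarrow \infty , \end{aligned}$$

so that \(j_{max}\) is almost surely finite.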

Finally, the long-term behaviour of the ratio \(X_{k+1}(n)/X_k(n)\) described in item (ii) of the theorem is implied by the law of large numbers for the binomial distribution. This follows straightforwardly from the proofs of Lemma 4 and of Part (2) of Lemmas 5, 6 and 8. The theorem is proved. \(\square \)