Abstract
We consider a stochastic aggregation model on \(\mathbb {Z}^d\). Start with particles distributed according to the product Bernoulli measure with parameter \(\mu \). In addition, start with an aggregate at the origin. Non-aggregated particles move as continuous-time simple random walks obeying the exclusion rule, whereas aggregated particles do not move. The aggregate grows by attaching particles to its surface whenever a particle attempts to jump onto it. This evolution is called multi-particle diffusion limited aggregation. Our main result states that if \(d\ge 2\) and the initial density of particles is large enough, then with positive probability the aggregate has linearly growing arms; that is, there exists a constant \(c>0\) so that at time t the aggregate contains a point of distance at least ct from the origin, for all t. The key conceptual element of our analysis is the introduction and study of a new growth process. Consider a first passage percolation process, called type 1, starting from the origin. Whenever type 1 is about to occupy a new vertex, with positive probability, instead of doing so, it gives rise to another first passage percolation process, called type 2, which starts to spread from that vertex. Each vertex gets occupied only by the process that arrives at it first. This process may have three phases: extinction (type 1 eventually gets surrounded by type 2), coexistence (infinite clusters of both types emerge), and strong survival (type 1 produces an infinite cluster which entraps all type 2 clusters). Understanding the various phases of this process is of mathematical interest in its own right. We establish the existence of a strong survival phase, and use this to show our main result.
Introduction
In this work we consider one of the classical aggregation processes, introduced in [25] (see also [29]) with the goal of providing “a simple and tractable” mathematical model of dendritic growth, on which theoretical and mathematical concepts and tools could be designed and tested. Almost four decades later, we still encounter tremendous mathematical challenges in studying its geometric and dynamic properties, and in understanding the driving mechanism behind the formation of fractal-like structures.
Multi-particle diffusion limited aggregation (MDLA) We consider the following stochastic aggregation model on \(\mathbb {Z}^d, \; d\ge 1\). Start with an infinite collection of particles located at the vertices of the lattice, with at most one particle per vertex, initially distributed according to the product Bernoulli measure with parameter \(\mu \in (0,1)\). In addition, there is an aggregate, which initially consists of only one special particle placed at the origin. The system evolves in continuous time. Non-aggregated particles move as simple symmetric random walks obeying the exclusion rule; i.e., particles jump at rate 2d to a uniformly random neighbor, but if the chosen neighbor already contains another non-aggregated particle, the jump is suppressed and the particle waits for the next attempt to jump. Aggregated particles do not move. Whenever a non-aggregated particle attempts to jump onto a vertex occupied by the aggregate, the jump of this particle is suppressed, the particle becomes part of the aggregate, and it never moves from that moment onwards. Thus the aggregate grows by attaching particles to its surface whenever a particle attempts to jump onto it. This evolution will be referred to as multi-particle diffusion limited aggregation (MDLA); examples for different values of \(\mu \) are shown in Fig. 1.
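The dynamics just described can be sketched as a discrete-event simulation. The code below is a minimal illustration of our own, not part of the model's formal definition: the finite box, the function name `simulate_mdla`, and all parameter values are assumptions, and jumps leaving the box are simply suppressed. It runs MDLA on a finite window of \(\mathbb {Z}^2\):

```python
import heapq
import random

def simulate_mdla(n=21, mu=0.95, t_max=20.0, seed=0):
    """Event-driven MDLA sketch on the box {0,...,n-1}^2 (d = 2): particles
    jump at rate 4 = 2d to a uniformly chosen neighbor, jumps onto occupied
    vertices are suppressed (exclusion), and a jump attempt onto the
    aggregate attaches the jumping particle where it stands."""
    rng = random.Random(seed)
    origin = (n // 2, n // 2)
    aggregate = {origin}
    particles = {(x, y) for x in range(n) for y in range(n)
                 if (x, y) != origin and rng.random() < mu}
    # one pending jump event per particle, kept in a time-ordered queue
    events = [(rng.expovariate(4.0), p) for p in particles]
    heapq.heapify(events)
    while events:
        t, p = heapq.heappop(events)
        if t > t_max:
            break
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        q = (p[0] + dx, p[1] + dy)
        if 0 <= q[0] < n and 0 <= q[1] < n:
            if q in aggregate:
                # attempted jump onto the aggregate: the particle is absorbed
                particles.discard(p)
                aggregate.add(p)
                continue
            if q not in particles:
                # exclusion rule: move only if the target vertex is empty
                particles.discard(p)
                particles.add(q)
                p = q
        heapq.heappush(events, (t + rng.expovariate(4.0), p))
    return aggregate
```

Each particle carries a single pending jump event; by memorylessness of the exponential clocks this is equivalent to attaching a rate-\(2d\) Poisson clock to every particle.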
Characterizing the behavior of MDLA is a wide open and challenging problem. Existing mathematical results are limited to one dimension [8, 20]. In this case, it is known that the aggregate almost surely has sublinear growth for any \(\mu \in (0,1)\), having size of order \(\sqrt{t}\) by time t. The main obstacle preventing the aggregate from growing with positive speed is that, from the point of view of the front (i.e., the rightmost point) of the aggregate, the density of particles decreases, since the aggregate grows by forming a region of density 1, which is larger than the initial density of particles.
In dimensions two and higher, MDLA seems to present a much richer and more complex behavior, which changes substantially depending on the value of \(\mu \); refer to Fig. 1. For small values of \(\mu \), the low density of particles affects the rate of growth of the aggregate, as it needs to wait for particles, which move diffusively, to find their way to its boundary. This suggests that the growth of the aggregate at small scales is governed by the evolution of the “local” harmonic measure of its boundary. This causes the aggregate to grow by protruding long fractal-like arms, similar to dendrites. On the other hand, when \(\mu \) is large enough, the situation appears to be different. In this case, the aggregate is immersed in a very dense cloud of particles, and its growth follows random, dynamically evolving geodesics that deviate from occasional regions without particles. Instead of showing dendritic growth, the aggregate forms a dense region characterized by the appearance of a limiting shape, similar to a first passage percolation process [9, 24]. These two regimes do not seem to be mutually exclusive. For intermediate values of \(\mu \), the process shows the appearance of a limiting shape at macroscopic scales, while zooming in to mesoscopic and microscopic scales reveals rather complex ramified structures similar to dendritic growth, as in Fig. 2.
The main result of this paper establishes that, unlike in dimension one, in dimensions \(d\ge 2\) MDLA has a phase of linear growth. We actually prove a stronger result, showing that the aggregate grows with positive speed in all directions. For \(t\ge 0\), let \(\mathcal {A}_t\subset \mathbb {Z}^d\) be the set of vertices occupied by the aggregate by time t, and let \(\bar{\mathcal {A}}_t\supseteq \mathcal {A}_t\) be the set of vertices of \(\mathbb {Z}^d\) that are not contained in the infinite component of \(\mathbb {Z}^d{\setminus } \mathcal {A}_t\). Note that \(\bar{\mathcal {A}}_t\) comprises all vertices of \(\mathbb {Z}^d\) that either belong to the aggregate or are separated from infinity by the aggregate. For \(x\in \mathbb {Z}^d\) and \(r\in \mathbb {R}_+\), we denote by B(x, r) the ball of radius r centered at x.
Theorem 1.1
There exists \(\mu _0\in (0,1)\) such that, for all \(\mu >\mu _0\), there are positive constants \(c_1=c_1(\mu ,d)\) and \(c_2=c_2(\mu ,d)\) for which

$$\begin{aligned} \mathbb {P}\big (\bar{\mathcal {A}}_t \supset B(0,c_1 t) \text { for all } t\ge 0\big )\ge c_2. \end{aligned}$$
Remark 1.2
It is not difficult to see that the aggregate cannot grow faster than linearly. That is, there exists a constant \(c_3\) such that the probability that \(\bar{\mathcal {A}}_t \subset B(0,c_3 t)\) for all \(t\ge t_0\) goes to 1 with \(t_0\). This is because the aggregate grows more slowly than a first passage percolation process with exponential passage times of rate 1, which grows linearly; see, for example, [1, 21].
We believe that Theorem 1.1 holds in a stronger form, with \(\mathbb {P}\big (\bar{\mathcal {A}}_t \supset B(0,c_1 t) \text { for all } t\ge t_0\big )\) going to 1 with \(t_0\). However, with positive probability, there is no particle within a large distance of the origin at time 0. In this case, in the initial stages of the process, the aggregate will grow very slowly, as if in a system with a small density of particles. We expect that the density of particles near the boundary of the aggregate will become close to \(\mu \) after particles have moved for a large enough time, allowing the aggregate to start growing with positive speed. However, particles perform a non-equilibrium dynamics due to their interaction with the aggregate, and the behavior and effect of this initial stage of low density are not yet understood mathematically. This is related to the problem of describing the behavior of MDLA for small values of \(\mu \), which is still far from reach, and raises the challenging question of whether the aggregate has positive speed of growth for any \(\mu >0\). Even at a heuristic level, it is not at all clear what the behavior of the aggregate should be for small \(\mu \). On the one hand, the low density of particles causes the aggregate to grow slowly, since particles move diffusively until they are aggregated. On the other hand, since the aggregate is immersed in a dense cloud of particles, this effect of slow growth could be restricted to small scales only, because at very large scales the aggregate could simultaneously grow in many different directions.
We now describe the ideas of the proof of Theorem 1.1. For this we use the language of the dual representation of the exclusion process, where vertices without particles are regarded as hosting another type of particles, called holes, which perform among themselves simple symmetric random walks obeying the exclusion rule. When \(\mu \) is large enough, at the initial stages of the process, the aggregate grows without encountering any hole. The growth of the aggregate is then equivalent to a first passage percolation process with independent exponential passage times. This stage is well understood: it is known that first passage percolation not only grows with positive speed, but also has a limiting shape [9, 24]. However, at some moment the aggregate will start encountering holes. We can regard the aggregate as a solid wall for holes, as they can neither jump onto the aggregate nor be attached to it. In one dimension, holes end up accumulating at the boundary of the aggregate, and this is enough to prevent positive speed of growth. The situation is different in dimensions \(d\ge 2\), since the aggregate is able to deviate from any hole it encounters, advancing through the particles that lie in the neighborhood of the hole until it completely surrounds and entraps the hole. The problem is that the aggregate will find regions of holes of arbitrarily large sizes, which take a long time for the aggregate to go around. When \(\mu \) is large enough, the regions of holes will typically be well spaced out, giving sufficient room for the aggregate to grow in between them. One needs to show that the delays caused by deviating around holes are not large enough to prevent positive speed. A challenge is that, as holes cannot jump onto the aggregate, their motion acquires a drift whenever they neighbor the aggregate. Hence, holes move according to a non-equilibrium dynamics, which creates difficulties in controlling their location.
In order to overcome this problem, we introduce a new process to model the interplay between the aggregate and holes.
First passage percolation in a hostile environment (FPPHE) This is a two-type first passage percolation process. At any time \(t\ge 0\), let \(\eta ^1(t)\) and \(\eta ^2(t)\) denote the vertices of \(\mathbb {Z}^d\) occupied by type 1 and type 2, respectively. We start with \(\eta ^1(0)\) containing only the origin of \(\mathbb {Z}^d\), and \(\eta ^2(0)\) being a random set obtained by selecting each vertex of \(\mathbb {Z}^d{\setminus }\{0\}\) with probability \(p\in (0,1)\), independently of one another. Both type 1 and type 2 are growing processes; i.e., for any times \(t<t'\) we have \(\eta ^1(t)\subseteq \eta ^1(t')\) and \(\eta ^2(t)\subseteq \eta ^2(t')\). Type 1 spreads from time 0 throughout \(\mathbb {Z}^d\) at rate 1. Type 2 does not spread at first, and we refer to the elements of \(\eta ^2(0)\) as type 2 seeds. Whenever the type 1 process attempts to occupy a vertex hosting a type 2 seed, the occupation is suppressed and that type 2 seed is activated, starting to spread throughout \(\mathbb {Z}^d\) at rate \(\lambda \in (0,1)\). The other type 2 seeds remain inactive until type 1 or already activated type 2 attempts to occupy their location. A vertex of the lattice is only occupied by the type that arrives at it first, so \(\eta ^1(t)\) and \(\eta ^2(t)\) are disjoint sets for all t; this causes the two types to compete with each other for space. Note that type 2 spreads at a smaller rate than type 1, but type 2 starts with a density of seeds while type 1 starts only from a single location.
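The evolution of FPPHE can likewise be sketched as an event-driven simulation. The code below is illustrative and our own (function name, box truncation and parameter values are assumptions): each occupied vertex schedules exponential spreading attempts towards its neighbors, and a dormant seed activates as type 2 the first time any attempt reaches it.

```python
import heapq
import random

def simulate_fpphe(n=31, p=0.05, lam=0.5, seed=1):
    """Event-driven FPPHE sketch on the box {0,...,n-1}^2: type 1 spreads
    from the centre at rate 1; dormant type-2 seeds (density p) activate
    when either type attempts to occupy them, then spread at rate lam.
    Each vertex belongs to whichever type reaches it first."""
    rng = random.Random(seed)
    origin = (n // 2, n // 2)
    seeds = {(x, y) for x in range(n) for y in range(n)
             if (x, y) != origin and rng.random() < p}
    owner = {origin: 1}                  # vertex -> occupying type
    events = []                          # (time, type, target vertex)

    def spread(t, site, typ):
        rate = 1.0 if typ == 1 else lam
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (site[0] + dx, site[1] + dy)
            if 0 <= q[0] < n and 0 <= q[1] < n:
                heapq.heappush(events, (t + rng.expovariate(rate), typ, q))

    spread(0.0, origin, 1)
    while events:
        t, typ, q = heapq.heappop(events)
        if q in owner:
            continue                     # first arrival wins the vertex
        if q in seeds:
            # the attempt is suppressed and the seed activates as type 2
            seeds.discard(q)
            owner[q] = 2
            spread(t, q, 2)
        else:
            owner[q] = typ
            spread(t, q, typ)
    return owner
```

With exponential attempt times, this event-driven construction agrees in distribution with the rate-1/rate-\(\lambda \) spreading described above, by memorylessness.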
We show that it is possible to analyze MDLA via a coupling with this process, by showing that a hole that has been in contact with the aggregate will remain contained inside a cluster of type 2. Since the aggregate grows in the same way as type 1, establishing that the type 1 process grows with positive speed allows us to show that MDLA has linear growth. Besides its application to the study of MDLA, we believe that FPPHE is an interesting process to analyze in its own right, as it displays fascinating and distinct phases of behavior depending on the choice of p and \(\lambda \). An illustration of the behavior of this process is shown in Fig. 3.
The first phase is the extinction phase, where type 1 stops growing in finite time with probability 1. This occurs, for example, when \(p> 1-p_{\mathrm {c}}\), with \(p_{\mathrm {c}}=p_{\mathrm {c}}(d)\) being the critical probability for independent site percolation on \(\mathbb {Z}^d\). In this case, with probability 1, the origin is contained in a finite cluster of vertices not occupied by type 2 seeds, and hence type 1 will eventually stop growing. This extinction phase for type 1 also arises when \(p\le 1-p_{\mathrm {c}}\) but \(\lambda \) is close enough to 1 so that type 2 clusters grow quickly enough to surround type 1 and confine it to a finite set.
We show in this work that another phase exists, called the strong survival phase, and which is characterized by a positive probability of appearance of an infinite cluster of type 1, while type 2 is confined to form only finite clusters. Note that type 1 cannot form an infinite cluster with probability 1, since with positive probability all neighbors of the origin contain seeds of type 2. Unlike the extinction phase, whose existence is quite trivial to show, the existence of a strong survival phase for some value of p and \(\lambda \) is far from obvious. Here we not only establish the existence of this phase, but we show that such a phase exists for any \(\lambda <1\) provided that p is small enough. We also show that type 1 has positive speed of growth. For any t, we define \({\bar{\eta }}^1(t)\) as the set of vertices of \(\mathbb {Z}^d\) that are not contained in the infinite component of \(\mathbb {Z}^d{\setminus } \eta ^1(t)\), which comprises \(\eta ^1(t)\) and all vertices of \(\mathbb {Z}^d{\setminus } \eta ^1(t)\) that are separated from infinity by \(\eta ^1(t)\). The theorem below will be proved in Sect. 5, as a consequence of a more general theorem, Theorem 5.1.
Theorem 1.3
For any \(\lambda <1\), there exists a value \(p_0\in (0,1)\) such that, for all \(p\in (0,p_0)\), there are positive constants \(c_1=c_1(p,d)\) and \(c_2=c_2(p,d)\) for which

$$\begin{aligned} \mathbb {P}\big ({\bar{\eta }}^1(t) \supset \mathcal {B}\left( c_1 t\right) \text { for all } t\ge 0\big )\ge c_2. \end{aligned}$$
There is a third possible regime, which we call the coexistence phase, characterized by type 1 and type 2 simultaneously forming infinite clusters with positive probability. (We regard the coexistence phase as a regime of weak survival for type 1, in the sense that type 1 survives but leaves enough space for type 2 to produce at least one infinite cluster.) Whether this regime actually occurs for some value of p and \(\lambda \) is an open problem, and even simulations do not seem to give good evidence of its existence. For example, in the rightmost picture of Fig. 3, we observe a regime where \(\eta ^1\) survives, while \(\eta ^2\) seems to produce only finite clusters, though some of them are quite large. This also seems to be the behavior of the central picture in Fig. 3, though it is not as clear whether each cluster of \(\eta ^2\) will eventually be confined to a finite set. However, the behavior in the leftmost picture of Fig. 3 is not at all clear. The cluster of \(\eta ^1\) has survived until the simulation was stopped, but produced a very thin set. It is not clear whether coexistence will happen in this situation, whether \(\eta ^1\) will eventually stop growing, or even whether, after a much longer time, the “arms” produced by \(\eta ^1\) will eventually find one another, constraining \(\eta ^2\) to produce only finite clusters.
Establishing whether a coexistence phase exists for some value of p and \(\lambda \) is an interesting open problem. We can establish that a coexistence phase occurs in a particular example of FPPHE, where type 1 and type 2 have deterministic passage times, with all randomness coming from the locations of the seeds. In this example, all three phases occur. We discuss this in Sect. 2. See also the recent paper [6], where coexistence is established when \(\mathbb {Z}^d\) is replaced by a hyperbolic non-amenable graph.
Historical remarks and related works MDLA belongs to a class of models, introduced first in the physics and chemistry literature (see [15] and references therein), and later in the mathematics literature as well, with the goal of studying geometric and dynamic properties of static formations produced by aggregating randomly moving colloidal particles. Some numerically established quantities, such as fractal dimension, showed striking similarities between clusters produced by aggregating particles and clusters produced in other growth processes of entirely different nature, such as dielectric breakdown cascades and Laplacian growth models (in particular, the Hele-Shaw cell [26]). These similarities were further investigated by the introduction of the Hastings–Levitov growth model [13], which is represented as a sequence of conformal mappings. Nonetheless, it is still debated in the physics literature whether some of these models belong to the same universality class or not [4].
In the mathematics literature, the diffusion limited aggregation model (DLA), introduced in [14] following the introduction of MDLA in [25], became a paradigm object of study among aggregation models driven by diffusive particles. However, progress on understanding DLA and MDLA mathematically has been relatively modest. The main results known about DLA are bounds on its rate of growth, derived by Kesten [16, 17] (see also [2]), but several variants have been introduced and studied [3, 5, 7, 10, 23, 27]. Regarding MDLA, it has been rigorously studied only in the one-dimensional case [8, 19, 20], for which sublinear growth has been proved for all densities \(\mu \in (0,1)\) in [20].
Structure of the paper We start in Sect. 2 with a discussion of an example of FPPHE where the passage times are deterministic, and show that this process has a coexistence phase. Then, in preparation for the proof of strong survival of FPPHE (Theorem 1.3), we state in Sect. 3 existing results on first passage percolation, and discuss in Sect. 4 a result due to Häggström and Pemantle regarding non-coexistence of a two-type first passage percolation process. This result plays a fundamental role in our analysis of FPPHE. Then, in Sect. 5, we state and prove Theorem 5.1, which is a more general version of Theorem 1.3. In Sect. 6 we relate FPPHE with MDLA, giving the proof of Theorem 1.1.
Example of coexistence in FPPHE
In this section we consider FPPHE with deterministic passage times. That is, whenever type 1 (resp., type 2) occupies a vertex \(x\in \mathbb {Z}^d\), then after time 1 (resp., \(1/\lambda \)) type 1 (resp., type 2) will occupy all unoccupied neighbors of x. If both type 1 and type 2 try to occupy a vertex at the same time, we choose one of them uniformly at random. Recall that we denote by \(\eta ^i(t)\), \(i\in \{1,2\}\), the set of vertices occupied by type i by time t. For simplicity, we restrict this discussion to dimension \(d=2\).
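This deterministic-time variant can also be sketched in code. The implementation below is our own illustration (finite box, names and parameters are assumptions); the uniform tie-breaking of simultaneous claims is approximated by attaching an independent uniform key to each spreading attempt.

```python
import heapq
import random

def deterministic_fpphe(n=21, p=0.2, lam=0.5, seed=2):
    """FPPHE with deterministic passage times on the box {0,...,n-1}^2:
    type 1 crosses an edge in time 1, type 2 in time 1/lam.  Simultaneous
    claims on a vertex are resolved by an independent uniform key, which
    approximates the uniformly random tie-breaking rule of the text."""
    rng = random.Random(seed)
    origin = (n // 2, n // 2)
    seeds = {(x, y) for x in range(n) for y in range(n)
             if (x, y) != origin and rng.random() < p}
    owner = {origin: 1}
    events = []                      # (time, tie-break key, type, target)

    def spread(t, site, typ):
        step = 1.0 if typ == 1 else 1.0 / lam
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (site[0] + dx, site[1] + dy)
            if 0 <= q[0] < n and 0 <= q[1] < n:
                heapq.heappush(events, (t + step, rng.random(), typ, q))

    spread(0.0, origin, 1)
    while events:
        t, _, typ, q = heapq.heappop(events)
        if q in owner:
            continue                 # vertex already claimed
        if q in seeds:               # a dormant seed is hit: it activates
            seeds.discard(q)
            owner[q] = 2
            spread(t, q, 2)
        else:
            owner[q] = typ
            spread(t, q, typ)
    return owner
```

Note that the only randomness left in the model itself is the seed locations (and the tie-breaks); all passage times are deterministic, as in this section.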
Figure 4 shows a simulation of this process for \(p=0.2\) and different values of \(\lambda \). In all the three pictures in Fig. 4, \(\eta ^1\) seems to survive. However, note that the leftmost picture in Fig. 4 differs from the other two since \(\eta ^2\) also seems to give rise to an infinite cluster, characterizing a regime of coexistence. See Fig. 5 for more details.
Our theorem below establishes the existence of a coexistence phase. We note that here the survival phase for \(\eta ^1\) is stronger than that shown in Theorem 1.3: here we show that, for small enough p, \(\eta ^1\) survives for any \(\lambda <1\). The actual value of \(\lambda \) plays a role only in determining whether coexistence happens. In the theorem below and its proof, a directed path in \(\mathbb {Z}^d\) is defined to be a path whose jumps are only along the positive direction of the coordinates.
Theorem 2.1
For any \(\lambda \in (0,1)\) and any \(p\in (0,1-p_{\mathrm {c}}^{\mathrm {dir}})\), where \(p_{\mathrm {c}}^{\mathrm {dir}}=p_{\mathrm {c}}^{\mathrm {dir}}(\mathbb {Z}^d)\) denotes the critical probability for directed site percolation in \(\mathbb {Z}^d\), we have

$$\begin{aligned} \mathbb {P}\left( \eta ^1 \text { produces an infinite cluster}\right) >0. \end{aligned}$$
(1)
Furthermore, for any \(\lambda \in (0,1)\), there exists a positive \(p_0<1-p_{\mathrm {c}}^{\mathrm {dir}}\) such that for any \(p\in (p_0,1-p_{\mathrm {c}}^{\mathrm {dir}})\) we have

$$\begin{aligned} \mathbb {P}\left( \text {both } \eta ^1 \text { and } \eta ^2 \text { produce infinite clusters}\right) >0. \end{aligned}$$
(2)
Proof
Consider a directed percolation process on \(\mathbb {Z}^d\) where a vertex is declared to be open if it is not in \(\eta ^2(0)\), otherwise the vertex is closed. For any \(t\ge 0\), let \(C_t\) be the set of vertices reachable from the origin by a directed path of length at most t in which all vertices are open. We will prove (1) by showing that

$$\begin{aligned} C_t \subseteq \eta ^1(t) \quad \text {for all } t\ge 0. \end{aligned}$$
Let \(x\in \eta ^2(0)\) be the vertex of \(\eta ^2(0)\) that is the closest to the origin, in \(\ell _1\) norm. Clearly, for any time \(t<\Vert x\Vert _1\), we have that \(\eta ^1(t)\) has not yet interacted with \(\eta ^2(0)\), giving that \(\eta ^1(t) = \{y\in \mathbb {Z}^d :\Vert y\Vert _1 \le t\}= C_t\). See Fig. 6a for an illustration. Then, at time \(\Vert x\Vert _1\), \(\eta ^1\) tries to occupy all vertices at distance \(\Vert x\Vert _1\) from the origin, leading to the configuration in Fig. 6b and activating the seed x of \(\eta ^2(0)\), illustrated in pink in the picture. Since \(\eta ^1\) is faster than \(\eta ^2\), \(\eta ^1\) is able to “go around” x, traversing the same path as in a directed percolation process. This leads to the configuration in Fig. 6c. Note that the same behavior occurs when \(\eta ^1\) finds a larger set of consecutive seeds of \(\eta ^2\) at the same \(\ell _1\) distance from the origin. For example, see what happens with the three red seeds in Fig. 6d–f. In this case, a directed percolation process does not reach any vertex inside the red triangle in Fig. 6f, as those vertices are shaded by the three red seeds. Since \(\eta ^2\) is slower than \(\eta ^1\), the cluster of \(\eta ^2\) that starts to grow when the three red seeds are activated cannot occupy any vertex outside of the red triangle.
A different situation occurs when \(\eta ^1\) finds a vertex of \(\eta ^2(0)\) on the axis, as with the yellow vertex of Fig. 6c. Note that, in a directed percolation process, all vertices below the yellow seed will not be reachable from the origin. In our two-type process, something similar occurs, but only for a finite number of steps. When \(\eta ^1\) activates the yellow seed at \(x\in \eta ^2(0)\), \(\eta ^1\) cannot immediately go around x as explained above. For \(\lambda \) close enough to 1, \(\eta ^2\) occupies the successive vertex on the axis before \(\eta ^1\) can go around x. This continues for some steps, with \(\eta ^2\) being able to grow along the axis; see Fig. 6d, e. However, at each step \(\eta ^1\) gains \(1-\lambda \) over \(\eta ^2\). This gain accumulates over roughly \(\frac{1}{1-\lambda }\) steps, after which \(\eta ^1\) is finally able to go around \(\eta ^2\), as in Fig. 6f. This happens unless \(\eta ^2(0)\) happens to have a seed at a vertex neighboring one of the vertices on the axis occupied by the growth of \(\eta ^2\). This is illustrated by the green vertices of Fig. 6f–h. When the first green vertex off the axis is activated, \(\eta ^1\) will not be able to occupy the vertex to the right of the green vertex, and will encounter the next green seed before it can go around the first green seed found on the axis. The crucial fact to observe is that the clusters of \(\eta ^2\) that start to grow after the activation of each green seed can only occupy vertices located to the right of the seeds, at the same vertical coordinate. This is a subset of the vertices that are shaded by the green seeds in a directed percolation process. Therefore, (1) follows since the vertices occupied by \(\eta ^2\) are a subset of the following set: take the union of all triangles obtained from sets of consecutive seeds away from the axis (as with the pink, red and blue seeds in Fig. 6), and take the union of semi-lines starting at seeds located on the axis or at seeds neighboring semi-lines started from seeds of smaller \(\ell _1\) distance to the origin (as with the yellow and green seeds in Fig. 6). This set is exactly the set of vertices not reached by a directed path from the origin.
Now we turn to (2). First notice that, from the first part, we have that \(\eta ^1(t)\supseteq C_t\) for all p and \(\lambda \). Since \(C_t\) does not depend on \(\lambda \), once we fix \(p\in (0,1-p_{\mathrm {c}}^{\mathrm {dir}})\), we can take \(\lambda \) as close to 1 as we want, and \(\eta ^1\) will still produce an infinite component. Now we consider one of the axes; for example, the one containing the green vertices in Fig. 6. Let (x, 0) be the first vertex occupied by \(\eta ^2\) on that axis. For each integer k, we define \(X_k\) as the smallest nonnegative integer such that \((k,X_k)\) will be occupied by \(\eta ^1\). Similarly, \(Y_k\) is the smallest nonnegative integer such that \((k,-Y_k)\) will be occupied by \(\eta ^1\). Now we analyze the evolution of \(X_k\); that of \(Y_k\) is analogous. Assume that \(X_1=X_2=\cdots =X_{k-1}=0\). Then, with probability at least p we have that \(X_{k+1}\ge 1\). When this happens, \(\eta ^1\) will need at least \(\frac{1}{1-\lambda }\) steps before being able to occupy the axis again. However, for each \(s\ge 2\), the probability that \(X_{k+s}> X_{k+1}\) is at least p. This gives that the probability that the random variable X reaches a value above 1 before going back to zero is at least \(1-(1-p)^{\frac{1}{1-\lambda }}\). Once we have fixed p, by setting \(\lambda \) close enough to 1 we can make this probability very close to 1. This gives that \(X_k\) has an upward drift. Since the downward jumps of \(X_k\) are of size at most 1, this implies that at some time \(X_k\) will depart from 0 and never return to it. A similar behavior holds for \(Y_k\), establishing (2). \(\square \)
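The directed cluster at the heart of the proof is easy to compute: a vertex at \(\ell _1\) distance s from the origin is reachable iff it is open and one of its two directed predecessors is reachable. A small sketch of our own (hypothetical helper name, restricted to the positive quadrant of a finite box, in dimension 2):

```python
import random

def directed_cluster(n=10, p=0.2, seed=3):
    """Vertices of {0,...,n-1}^2 reachable from the origin by a directed
    (up/right) path of open vertices; each non-origin vertex is closed
    (i.e. carries a seed of eta^2(0)) independently with probability p."""
    rng = random.Random(seed)
    closed = {(x, y) for x in range(n) for y in range(n)
              if (x, y) != (0, 0) and rng.random() < p}
    cluster = set()
    # sweep diagonals of increasing l1 distance s = x + y: (x, y) is
    # reachable iff it is open and (x-1, y) or (x, y-1) is already reachable
    for s in range(2 * n - 1):
        for x in range(max(0, s - n + 1), min(s, n - 1) + 1):
            y = s - x
            if (x, y) in closed:
                continue
            if (x, y) == (0, 0) or (x - 1, y) in cluster or (x, y - 1) in cluster:
                cluster.add((x, y))
    return cluster
```

By the first part of the proof, this set is a lower bound for \(\eta ^1\): every vertex it contains is eventually occupied by type 1, for any \(\lambda <1\).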
Preliminaries on first passage percolation
Let \(\upsilon \) be a probability distribution on \((0,\infty )\) with no atoms and with a finite exponential moment. Consider a first passage percolation process \(\{\xi (t)\}_t\), which starts from the origin and spreads according to \(\upsilon \). More precisely, for each pair of neighboring vertices \(x,y\in \mathbb {Z}^d\), let \(\zeta _{x,y}\) be independent random variables with distribution \(\upsilon \). The value \(\zeta _{x,y}\) is regarded as the time that \(\xi \) needs to spread across the edge (x, y). Note that \(\zeta \) defines a random metric on \(\mathbb {Z}^d\), where the distance between two vertices is the length of the shortest path between them, and the length of a path is the sum of the values of \(\zeta \) over the edges of the path. Hence, given any initial configuration \(\xi (0)\subset \mathbb {Z}^d\), the set \(\xi (t)\) comprises all vertices of \(\mathbb {Z}^d\) that are within distance t from \(\xi (0)\) according to the metric \(\zeta \). We assume throughout the paper that \(d\ge 2\).
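The random metric defined by \(\zeta \) can be computed with Dijkstra's algorithm on the edge-weighted lattice. The sketch below is our own illustration (finite box, function name and parameters are assumptions); it computes the \(\zeta \)-distance from the origin to every vertex, with i.i.d. exponential passage times as an example of a distribution with no atoms and finite exponential moments.

```python
import heapq
import random

def fpp_metric(n=10, seed=4):
    """Dijkstra computation of the random-metric distance from the origin
    to every vertex of the box {0,...,n-1}^2, with an independent rate-1
    exponential passage time attached to each edge."""
    rng = random.Random(seed)
    edge_time = {}                    # lazily sampled passage time per edge

    def passage(x, y):
        e = (min(x, y), max(x, y))    # undirected edge, keyed canonically
        if e not in edge_time:
            edge_time[e] = rng.expovariate(1.0)
        return edge_time[e]

    dist = {(0, 0): 0.0}
    heap = [(0.0, (0, 0))]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue                  # stale queue entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if 0 <= y[0] < n and 0 <= y[1] < n:
                nd = d + passage(x, y)
                if nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(heap, (nd, y))
    return dist
```

The growing set \(\xi (t)\) of the text is then the sub-level set of the computed distances at level t, which is monotone in t.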
For \(X\subset \mathbb {Z}^d\), let \(\mathbb {Q}_X^\upsilon \) be the probability measure induced by the process \(\xi \) when \(\xi (0)=X\). When the value of \(\xi (0)\) is not important, we will simply write \(\mathbb {Q}^\upsilon \), and when \(\upsilon \) is the exponential distribution of rate 1, we write \(\mathbb {Q}\).
Let \({\tilde{\xi }}(t)\subset \mathbb {R}^d\) be defined by \({\tilde{\xi }}(t)=\bigcup _{x\in \xi (t)}\left( x+[-1/2,1/2]^d\right) \); that is, \({\tilde{\xi }}(t)\) is obtained by adding a unit cube centered at each point of \(\xi (t)\). A celebrated theorem of Richardson [24], extended by Cox and Durrett [9], establishes that the rescaled set \({\tilde{\xi }}(t)/t\) converges almost surely, as \(t\rightarrow \infty \), to a deterministic, convex and compact set \(\mathcal {B}_\upsilon \subset \mathbb {R}^d\).
Such a result is now widely referred to as a shape theorem, and \(\mathcal {B}_\upsilon \) is referred to as the limit set. The set \(\mathcal {B}_\upsilon \) defines a norm \(\Vert \cdot \Vert _\upsilon \) on \(\mathbb {R}^d\) via

$$\begin{aligned} \Vert x\Vert _\upsilon = \inf \left\{ s>0 :x/s \in \mathcal {B}_\upsilon \right\} . \end{aligned}$$
We abuse notation and define, for any \(t\ge 0\), \(\mathcal {B}_\upsilon \left( t\right) \) as the ball of radius t according to the norm above: \(\mathcal {B}_\upsilon \left( t\right) =\left\{ x\in \mathbb {Z}^d :\Vert x\Vert _\upsilon \le t\right\} \). As before, we drop the subscript \(\upsilon \) when \(\upsilon \) is the exponential distribution of rate 1.
In [18, Theorem 2], Kesten derived upper bounds on the fluctuations of \(\xi (t)\) around \(\mathcal {B}_\upsilon \left( t\right) \). We state Kesten’s result in Proposition 3.1 below, in a form that is more suitable for our use later. First, we need to introduce some notation. Given any set of positive values \(\{\zeta '_{x,y}\}_{x,y}\) assigned to the edges of the lattice, which we from now on refer to as passage times, and given any two vertices \(x,y \in \mathbb {Z}^d\), let

$$\begin{aligned} D(x,y;\zeta ') = \inf _{\pi :x\rightarrow y}\sum _{e\in \pi }\zeta '_e, \end{aligned}$$

where the infimum is taken over all lattice paths \(\pi \) from x to y.
We extend this notion to subsets \(X,Y\subset \mathbb {Z}^d\) by writing

$$\begin{aligned} D(X,Y;\zeta ') = \inf _{x\in X,\,y\in Y} D(x,y;\zeta '). \end{aligned}$$
For two vertices \(x,y\in \mathbb {Z}^d\) we use the notation \(x\sim y\) to indicate that x and y are nearest neighbors in \(\mathbb {Z}^d\).
Furthermore, for any set \(A\subset \mathbb {Z}^d\), define the inner and outer vertex boundaries

$$\begin{aligned} \partial ^\mathrm {i} A=\left\{ x\in A :\exists y\in \mathbb {Z}^d{\setminus } A \text { with } x\sim y\right\} \quad \text {and}\quad \partial ^\mathrm {o} A=\left\{ x\in \mathbb {Z}^d{\setminus } A :\exists y\in A \text { with } x\sim y\right\} . \end{aligned}$$
Given a set \(A\subset \mathbb {Z}^d\), we say that an event is measurable with respect to passage times inside A if the event is measurable with respect to the passage times of the edges whose both endpoints are in A.
For any \(t>0\), any \(\delta \in (0,1)\), and any set of passage times \(\zeta '\), define the event

$$\begin{aligned} S_t^\delta (\zeta ') = \left\{ \inf _{x\in \partial ^\mathrm {i}\mathcal {B}_\upsilon \left( (1+\delta )t\right) }D(0,x;\zeta ')\le t\right\} \cup \left\{ \sup _{x\in \partial ^\mathrm {o}\mathcal {B}_\upsilon \left( (1-\delta )t\right) }D(0,x;\zeta ')> t\right\} . \end{aligned}$$
Disregarding some discrepancies in the choice of the boundary, \(S_t^\delta (\zeta ')\) is the event that \(\xi (t)\) is either not contained in \(\mathcal {B}_\upsilon \left( (1+\delta )t\right) \) or does not contain \(\mathcal {B}_\upsilon \left( (1\delta )t\right) \).
Proposition 3.1
Let \(\upsilon \) be a probability distribution on \((0,\infty )\), with no atoms, and with a finite exponential moment. There exist constants \(c_1,c_2,c_3>0\) depending on d and \(\upsilon \) such that, for all \(t\ge 1\) and all \(\delta > c_1 t^{-\frac{1}{2d+4}}(\log t)^\frac{1}{d+2}\),
Moreover, we have that the event \(S_t^\delta (\zeta )\) is measurable with respect to the passage times inside \(\mathcal {B}_\upsilon \left( (1+\delta )t\right) \).
Proof
First we establish (7). Note that the event \(\big \{\inf _{x\in \partial ^\mathrm {i}\mathcal {B}_\upsilon \left( (1+\delta )t\right) }D(0,x; \zeta )\le t\big \}\) is measurable with respect to the passage times inside \(\mathcal {B}_\upsilon \left( (1+\delta )t\right) \). Then, if this event does not hold, that is under \(\left\{ \inf _{x\in \partial ^\mathrm {i}\mathcal {B}_\upsilon \left( (1+\delta )t\right) }D(0,x; \zeta )>t\right\} \), the event \(\left\{ \sup _{x\in \partial ^\mathrm {o}\mathcal {B}_\upsilon \left( (1-\delta )t\right) }D(0,x; \zeta )\le t\right\} \) is also measurable with respect to the passage times inside \(\mathcal {B}_\upsilon \left( (1+\delta )t\right) \), establishing (7). The bound in (6) follows directly from Kesten’s result [18, Theorem 2]. \(\square \)
Encapsulation of competing first passage percolation
Here we consider two first passage percolation processes that compete for space as they grow through \(\mathbb {Z}^d\). One of the processes spreads throughout \(\mathbb {Z}^d\) at rate 1, while the other spreads according to a distribution \(\upsilon \) such that its limit shape is contained in \(\mathcal {B}\left( \lambda \right) \) for some \(\lambda <1\), with \(\lambda \) being a parameter of the system. We will say that \(\lambda \) is the rate of spread of the second process. We assume that the starting configuration of each process comprises only a finite set of vertices. In this case, one expects that both processes cannot simultaneously grow indefinitely; that is, one of the processes will eventually surround the other, confining it to a finite subset of \(\mathbb {Z}^d\). This was studied by Häggström and Pemantle [12]. In the proof of our main result, we will employ a refined version of a result in their paper. In particular, we will give a lower bound on the probability that the faster process surrounds the slower one within some fixed time.
First we define the processes precisely. Let \(\xi ^1\) denote the faster process so that, for each time \(t\ge 0\), \(\xi ^1(t)\) gives the set of vertices occupied by the faster process at time t. Similarly, let \(\xi ^2\) denote the slower process. For each pair of neighbors \(x,y \in \mathbb {Z}^d\), let \(\zeta ^1_{x,y}\) be an independent exponential random variable of rate 1, and let \(\zeta ^2_{x,y}\) be an independent random variable with distribution \(\upsilon \). For \(i\in \{1,2\}\), \(\zeta ^i_{x,y}\) represents the passage time of process \(\xi ^i\) through the edge (x, y).
The processes start at disjoint sets \(\xi ^1(0),\xi ^2(0)\subset \mathbb {Z}^d\). Then they spread throughout \(\mathbb {Z}^d\) according to the passage times \(\zeta ^1\) and \(\zeta ^2\), with the constraint that, whenever a vertex is occupied by either \(\xi ^1\) or \(\xi ^2\), the other process cannot occupy that vertex afterwards. Therefore, for any \(t\ge 0\), we obtain that \(\xi ^1(t)\) and \(\xi ^2(t)\) are disjoint sets. To define \(\xi ^1,\xi ^2\) more precisely, we will iteratively set \(s^k(x)\), for each \(x\in \mathbb {Z}^d\) and \(k\in \{1,2\}\), so that at the end \(s^k(x)\) is the time x is occupied by process k, or \(s^k(x)=\infty \) if x is never occupied by process k. Start by setting \(s^1(x)=0\) for all \(x\in \xi ^1(0)\), \(s^2(x)=0\) for all \(x\in \xi ^2(0)\), and \(s^k(x)=\infty \) for all \(k\in \{1,2\}\) and \(x\not \in \xi ^k(0)\). Then, choose the value of \(k\in \{1,2\}\) and the pair of neighboring vertices x, y with \(s^k(x)<\infty \) and \(s^k(y)=\infty \) that minimize \(s^k(x)+\zeta ^k_{x,y}\), and set \(s^k(y)=s^k(x)+\zeta ^k_{x,y}\). Then,
Given two sets \(X_1,X_2\subset \mathbb {Z}^d\), let \(\mathbb {Q}_{X_1,X_2}^\upsilon \) denote the probability measure induced by the processes \(\xi ^1,\xi ^2\) with initial configurations \(\xi ^1(0)=X_1\) and \(\xi ^2(0)=X_2\).
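The iterative construction of \(s^1,s^2\) above is a Dijkstra-type exploration and can be simulated directly. Below is a minimal sketch on a finite grid; the grid size, the function names, and the deterministic passage-time samplers are illustrative assumptions, not part of the model.

```python
import heapq

def neighbors(x, n):
    """4-neighbours of vertex x inside the n x n grid."""
    i, j = x
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield (i + di, j + dj)

def two_type_fpp(n, seeds1, seeds2, time1, time2):
    """Competing first passage percolation on an n x n grid.

    time1(x, y), time2(x, y) return the passage times zeta^1_{x,y},
    zeta^2_{x,y}; each vertex is kept by whichever type reaches it first.
    Returns the occupation-time maps s^1, s^2 from the text (vertices
    never reached by a type are simply absent, i.e. s^k = infinity).
    """
    s = {1: {x: 0.0 for x in seeds1}, 2: {x: 0.0 for x in seeds2}}
    samplers = {1: time1, 2: time2}
    occupied = set(s[1]) | set(s[2])
    heap = []  # entries: (candidate occupation time, type k, target vertex y)
    for k in (1, 2):
        for x in s[k]:
            for y in neighbors(x, n):
                heapq.heappush(heap, (s[k][x] + samplers[k](x, y), k, y))
    while heap:
        t, k, y = heapq.heappop(heap)
        if y in occupied:          # y already taken by either type: exclusion rule
            continue
        occupied.add(y)
        s[k][y] = t
        for z in neighbors(y, n):
            if z not in occupied:
                heapq.heappush(heap, (t + samplers[k](y, z), k, z))
    return s[1], s[2]
```

With rate-like deterministic times (cost 1 per edge for type 1, cost 3 for type 2), the faster process captures most of the grid while the slower one keeps a region around its seed, mirroring the exclusion rule in the definition.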
The proposition below is a more refined version of a result of Häggström and Pemantle [12, Proposition 2.2]. It establishes that if \(\xi ^2\) starts from inside \(\mathcal {B}\left( r\right) \) for some \(r\in \mathbb {R}_+\), and \(\xi ^1\) starts from a single vertex outside of a larger ball \(\mathcal {B}\left( \alpha r\right) \), for some \(\alpha >1\), then there is initially a large separation between \(\xi ^1\) and \(\xi ^2\), allowing \(\xi ^1\) to surround \(\xi ^2\) with high probability. Moreover, we obtain that \(\xi ^1\) will eventually confine \(\xi ^2\) to a ball \(\mathcal {B}\left( R\right) \) for some deterministic R, and the probability that this happens goes to 1 with \(\alpha \). We need to state this result in considerable detail, as we will apply it at various scales later in our proofs. We say that an event is increasing (resp., decreasing) with respect to some passage times \(\zeta \) if whenever the event holds for \(\zeta \) it also holds for any passage times \(\zeta '\) that satisfy \(\zeta '_{x,y}\ge \zeta _{x,y}\) (resp., \(\zeta '_{x,y}\le \zeta _{x,y}\)) for all neighboring \(x,y\in \mathbb {Z}^d\).
Proposition 4.1
There exist positive constants \(c_1,c_2\) depending only on d so that, for any \(\lambda \in (0,1)\), any \(r>1\), and any \(\alpha >\left( \frac{1}{\lambda (1-\lambda )}\right) ^{c_1}\), if \(\upsilon \) is such that \(\mathcal {B}_\upsilon \subset \mathcal {B}\left( \lambda \right) \), we can define deterministic values R and \(T=T(R)\) satisfying \(R\le \alpha r \exp \left( \frac{c_1}{1-\lambda }\right) \) and \(T\le R\left( \frac{11-\lambda }{10}\right) ^2\) such that the following holds. For any \(x\in \partial ^\mathrm {o}\mathcal {B}\left( \alpha r\right) \), there is an event F that is measurable with respect to the passage times \(\zeta ^1,\zeta ^2\) inside \(\mathcal {B}\left( R \left( \frac{11-\lambda }{10}\right) ^2\right) \) and is increasing with respect to \(\zeta ^2\) and decreasing with respect to \(\zeta ^1\), whose occurrence implies
and whose probability of occurrence satisfies
In particular, when F occurs, within time T, \(\xi ^1\) encapsulates \(\xi ^2\) inside \(\mathcal {B}\left( R\right) \).
We defer the proof of the proposition above to “Appendix A”. The proof will follow along the lines of [12, Proposition 2.2], but we need to perform some steps with more care, as we need to obtain bounds on the probability that F occurs, to establish bounds on R and T, to derive that F is increasing with respect to \(\zeta ^2\) and decreasing with respect to \(\zeta ^1\), and to obtain the measurability constraints on F.
We will need to apply the above proposition in a more complex setting. For this, it is important to keep in mind the process FPPHE defined in Sect. 1, where a cluster of type 2 starts spreading from each type 2 seed when that seed is activated, and type 2 seeds are initially distributed in \(\mathbb {Z}^d\) according to a product measure. We will apply the encapsulation procedure of Proposition 4.1 to each different cluster of type 2 growing out of its seed. This means that we will apply Proposition 4.1 at several scales (that is, with different values of r) and at several places in \(\mathbb {Z}^d\). The encapsulation happening in one place may end up interfering with the spread of type 1 and type 2 in other places.
In order to have a version of Proposition 4.1 that can handle this situation, we will focus on one such encapsulation. For that encapsulation, we represent type 1 as \(\xi ^1\), and assume that \(\xi ^1(0)\) contains at least one vertex from \(\partial ^\mathrm {o}\mathcal {B}\left( \alpha r\right) \). For the cluster of type 2 whose encapsulation we are considering, we let it start from \(\xi ^2(0)\subseteq \mathcal {B}\left( r\right) \). Here \(\xi ^2\) will only represent the cluster of type 2 that spreads from \(\xi ^2(0)\). For the other clusters of type 2, we will not refer to them as \(\xi ^2\) but simply as type 2.
To model the spread of the other clusters of type 2, we introduce a positive number \(\gamma \) and two sequences of simply connected subsets \((\Pi _\iota )_\iota \) and \((\Pi '_\iota )_\iota \) of \(\mathbb {Z}^d\), such that
and, for each \(\iota \ge 1\), we have
The sets \(\Pi _\iota \) represent the other regions of \(\mathbb {Z}^d\) (of smaller scale) where the encapsulation of a cluster of type 2 of FPPHE may be happening while \(\xi ^1,\xi ^2\) spread from \(\xi ^1(0),\xi ^2(0)\), whereas the sets \(\Pi _\iota '\) represent the regions inside which each such type 2 cluster gets confined. (The value of \(\gamma \) will be quite small, so that all \(\Pi _\iota \) are of scale smaller than r, because clusters of scale larger than r will be treated afterwards: in the proof we will consider the clusters essentially in order of their sizes.) Outside of the set \(\bigcup _\iota \Pi _\iota \), the spread of \(\xi ^1,\xi ^2\) will follow the passage times \(\zeta ^1,\zeta ^2\), respectively. However, the spread of \(\xi ^1,\xi ^2\) inside \(\bigcup _\iota \Pi _\iota \) may be different and quite complicated. We will not need to specify this precisely; we will only require the following properties:

(P1)
For each \(\iota \), if \(\xi ^2\) does not enter \(\Pi _\iota \) from \(\xi ^2(0)\), then \(\Pi _\iota {\setminus } \Pi _\iota '\) becomes entirely occupied by \(\xi ^1\).

(P2)
For any \(\iota ,x\in \partial ^\mathrm {i}\Pi _\iota ,\) and \(y\in \Pi _\iota \), either the time that \(\xi ^1\) takes to spread from x to y within \(\Pi _\iota \) is smaller than that given by the passage times \(\zeta ^1\), or y is occupied by type 2.

(P3)
For any \(\iota ,x\in \partial ^\mathrm {i}\Pi _\iota ,\) and \(y\in \Pi _\iota \), either the time that \(\xi ^2\) takes to spread from x to y within \(\Pi _\iota \) is larger than that given by the passage times \(\zeta ^2\), or y is occupied by type 1.
Above, when we refer to the time given by the passage times \(\zeta ^k\), \(k\in \{1,2\}\), we mean the time given by the passage times \(\zeta ^k\) when we completely ignore the presence of the cluster of type 2 that grows from within \(\Pi _\iota \). Regarding properties (P2) and (P3) above, in our application of the proposition below we will do some scaling of the passage times so that, within each \(\Pi _\iota \), type 1 will actually spread at a rate faster than 1 while type 2 will spread at a rate slower than \(\lambda \). Since within \(\Pi _\iota \) type 1 needs to make a detour around the growing cluster of type 2, we will use a coupling argument to say that, even with this detour, type 1 will spread inside \(\Pi _\iota \) faster than the passage times given by \(\zeta ^1\). Similarly, within \(\Pi _\iota \) type 2 may benefit from \(\xi ^2\) entering from outside and blocking type 1 as it attempts to encapsulate type 2. We will use a coupling argument to say that, even with the help from \(\xi ^2\), type 2 will spread inside \(\Pi _\iota \) slower than the passage times given by \(\zeta ^2\). This will become clearer in Sect. 5.1, where we present a high-level description of the whole proof. At this point, we do not need much detail about how the spread of type 1 and type 2 happens inside each \(\Pi _\iota \).
The goal of the proposition below, which is a refinement of Proposition 4.1, is to argue that with high probability the passage times \(\zeta ^1,\zeta ^2\) are such that \(\xi ^1\) encapsulates \(\xi ^2\) inside a ball surrounding \(\mathcal {B}\left( r\right) \) unless there exists a set \(\Pi _\iota \) that does not satisfy one of the properties (P1)–(P3). Given three sets \(S_1,S_2,S_3\subset \mathbb {Z}^d\), we say that \(S_1\) separates \(S_2\) from \(S_3\) if any path in \(\mathbb {Z}^d\) from \(S_2\) to \(S_3\) intersects \(S_1\).
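The separation predicate just defined can be checked mechanically by breadth-first search: \(S_1\) separates \(S_2\) from \(S_3\) exactly when no vertex of \(S_3\) is reachable from \(S_2\) after deleting \(S_1\). A minimal sketch on a finite window of \(\mathbb {Z}^2\) (the window size and the 4-neighbour lattice are the only assumptions):

```python
from collections import deque

def separates(s1, s2, s3, n):
    """Return True iff every nearest-neighbour path from s2 to s3 inside
    the n x n window meets s1 (the separation notion in the text)."""
    s1, s3 = set(s1), set(s3)
    frontier = deque(x for x in set(s2) if x not in s1)
    seen = set(frontier)
    while frontier:
        i, j = frontier.popleft()
        if (i, j) in s3:
            return False          # found a path from s2 to s3 avoiding s1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (i + di, j + dj)
            if (0 <= y[0] < n and 0 <= y[1] < n
                    and y not in s1 and y not in seen):
                seen.add(y)
                frontier.append(y)
    return True
```

For instance, the ring of vertices around a centre separates the centre from the window's corner, but removing a single vertex of the ring destroys the separation.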
Proposition 4.2
There exist positive constants \(c_1,c_2,c_3\) depending only on d so that, for any \(\lambda \in (0,1)\), any \(r>1\), and any \(\alpha >\left( \frac{1}{\lambda (1-\lambda )}\right) ^{c_1}\), if \(\upsilon \) satisfies \(\mathcal {B}_\upsilon \subset \mathcal {B}(\lambda )\), we can define deterministic values R and \(T=T(R)\) satisfying \(R\le \alpha r \exp \left( \frac{c_1}{1-\lambda }\right) \) and \(T\le R\left( \frac{11-\lambda }{10}\right) ^2\) such that the following holds. For any \(\gamma \le c_3 \lambda (1-\lambda )\alpha \) and any \(x\in \partial ^\mathrm {o}\mathcal {B}\left( \alpha r\right) \), there is an event F that is measurable with respect to the passage times \(\zeta ^1,\zeta ^2\) inside \(\mathcal {B}\left( R \left( \frac{11-\lambda }{10}\right) ^2\right) \) and is increasing with respect to \(\zeta ^2\) and decreasing with respect to \(\zeta ^1\), such that its occurrence implies that either
or there exist sets \(\{\Pi _\iota \}_\iota ,\{\Pi _\iota '\}_\iota \) satisfying (8) and (9), such that there exists \(\iota \) for which \(\Pi _\iota \cap \mathcal {B}\left( R\left( \frac{11-\lambda }{10}\right) ^2\right) \ne \emptyset \) and \(\Pi _\iota \) does not satisfy at least one of the properties (P1)–(P3). Moreover, we obtain that
Proof of Theorem 1.3
Theorem 1.3 will follow directly from Theorem 5.1 below, which we will prove in this section. The proof is quite long, so we start with an overview. For clarity’s sake, we discuss the proof overview under the setting of Theorem 1.3, and only state Theorem 5.1 in Sect. 5.2.
Proof overview
We start with a high-level overview of the proof. Below we refer to Fig. 7. Since p is small enough, initially \(\eta ^1\) will grow without finding any seed of \(\eta ^2(0)\), as in Fig. 7a. When \(\eta ^1\) activates a seed of \(\eta ^2\), we will apply Proposition 4.1 to establish that \(\eta ^1\) will go around \(\eta ^2\), encapsulating it inside a small ball (according to the norm \(\Vert \cdot \Vert \)). This is illustrated by the encapsulation of \(C_1\) in Fig. 7b. The yellow ball in the picture marks the region inside which the cluster of \(\eta ^2\) will be trapped; in Proposition 4.1 this corresponds to the ball \(\mathcal {B}\left( \frac{10R}{11-\lambda }\right) \). To ensure the encapsulation of a cluster of \(\eta ^2\), we will need to observe the passage times inside a larger ball; for example, just to ensure the measurability requirement of the event in Proposition 4.1 we need to observe the passage times in \(\mathcal {B}\left( R\left( \frac{11-\lambda }{10}\right) ^2\right) \). This larger ball is represented by the red circles in Fig. 7. As long as different red circles do not intersect one another, the encapsulation of different clusters of \(\eta ^2\) will happen independently. However, when \(\eta ^1\) encounters a large cluster of \(\eta ^2\) seeds, as happens with the cluster \(C_3\) in Fig. 7d, the encapsulation procedure will require a larger region to succeed. We will carry this out by developing a multiscale analysis of the encapsulation procedure, where the size of the region will depend, among other things, on the size of the clusters of \(\eta ^2(0)\). After the encapsulation takes place, as in Fig. 7e, we are left with a larger yellow ball and a larger red circle. Also, whenever two clusters of \(\eta ^2(0)\) are close enough that their corresponding red circles intersect, as happens with \(C_2\) in Fig. 7c, the encapsulation cannot be guaranteed to succeed.
In this case, we see these clusters as if they were a larger cluster, and perform the encapsulation procedure over a slightly larger region, as in Fig. 7c, d.
There is one caveat in the above description. Suppose \(\eta ^1\) encounters a very large cluster of \(\eta ^2\), for example \(C_3\) in Fig. 7d. It is likely that during the encapsulation of \(C_3\), inside the red circle of this encapsulation, we will find smaller clusters of \(\eta ^2\). This happens in Fig. 7d with \(C_4\). This does not pose a big problem: as long as the red circles of the encapsulations of the smaller clusters do not intersect one another and do not intersect the yellow ball produced by the encapsulation of \(C_3\), the encapsulation of \(C_3\) will succeed. This is illustrated in Fig. 7e, where the encapsulation of \(C_4\) happened inside the encapsulation of \(C_3\). There is yet a further subtlety. During the encapsulation of \(C_4\), the advance of \(\eta ^1\) is slowed down, as it needs to make a detour around the growing cluster of \(C_4\). This slowing down could cause the encapsulation of \(C_3\) to fail. Similarly, as \(\eta ^2\) spreads from \(C_3\), \(\eta ^2\) may find vertices that have already been occupied by \(\eta ^2\) due to the spread of \(\eta ^2\) from other non-encapsulated seeds. This would happen, for example, if the yellow ball that grows from \(C_3\) were to intersect the yellow ball that grows from \(C_4\). If this happens before the encapsulation of \(C_4\) ends, then the spread of \(C_3\) gets a small advantage. The area occupied by the spread of \(\eta ^2\) from \(C_4\) can in this case be regarded as being absorbed by the spread of \(\eta ^2\) from \(C_3\), causing \(C_3\) to spread faster than if \(C_4\) were not present. We will need to show that \(\eta ^1\) is not slowed down too much by possible detours around smaller clusters, and that \(\eta ^2\) is not sped up too much by the absorption of smaller clusters.
To do this, we will define a sequence of scales \(R_1,R_2,\ldots \), with \(R_k\) increasing with k. The value of \(R_k\) represents the radius of the region inside which encapsulation takes place at scale k. (Later, when making this argument rigorous, we will need to introduce several radii for each scale k, but to simplify the discussion here we can think for the moment that \(R_k\) gives the radius of the red circles in Fig. 7, and that the radius of the yellow circles at scale k is just a constant times \(R_k\).) The larger the cluster of seeds of \(\eta ^2\), the larger k must be. We will treat the scales in order, starting from scale 1. This procedure is illustrated in Fig. 8 for the encapsulation of the configuration in Fig. 7a. Once all clusters of scale \(k-1\) or below have been treated, we look at all remaining (untreated) clusters that are not too big to be encapsulated at scale k. If two clusters of scale k are too close to each other, so that their corresponding red circles intersect, we will not carry out the encapsulation and will treat these clusters as if they were one cluster from a larger scale, as illustrated in Fig. 8a. After disregarding these, all remaining clusters of scale k are disjoint and can be treated independently using the more refined Proposition 4.2. The \(\Pi _\iota \) will be the clusters of scale smaller than k that happen to fall inside the red circle of the cluster of scale k. Although the probability that the encapsulation procedure fails is small, and goes to zero very fast with k, it is still positive. So some encapsulations will fail, as illustrated by the vertex at the top of Fig. 8a.
If this happens for some cluster of scale k, which is an event measurable with respect to the passage times inside a red circle of scale k containing the \(\eta ^2\) seeds of that cluster, we then take the whole area inside this red circle and consider it as a larger cluster of \(\eta ^2(0)\) seeds, leaving it to be treated at a larger scale, as in Fig. 8b. Then we turn to the next scale, as in Fig. 8c, d.
In order to handle the slowdown of \(\eta ^1\) due to detours imposed by smaller scales, and the speed-up of \(\eta ^2\) due to absorption of smaller scales, we will introduce a decreasing sequence of positive numbers \(\epsilon _1,\epsilon _2,\ldots \), as follows. In the encapsulation of a cluster C of scale k, we will show not only that \(\eta ^1\) is able to encapsulate C, but also that \(\eta ^1\) does so sufficiently fast. We do this by coupling the spread of \(\eta ^1\) inside the red circle of C with a slower first passage percolation process of rate \(\prod _{i=1}^ke^{-\epsilon _i}\) that evolves independently of \(\eta ^2\). In other words, this slower first passage percolation process does not need to do a detour around C, but pays the price of having slower passage times. We show that the spread of \(\eta ^1\) around C is faster than that of this slower first passage percolation process. Similarly, we show that, even after absorbing smaller scales, \(\eta ^2\) still spreads slowly enough inside the red circle of C that we can couple it with a faster first passage percolation process of rate \(\lambda \prod _{i=1}^ke^{\epsilon _i}\), which evolves independently of everything else. We show using this coupling that the spread of \(\eta ^2\) is slower than that of the faster first passage percolation process. Thus at scale k, \(\eta ^1\) is spreading at rate at least \(\prod _{i=1}^ke^{-\epsilon _i}\) while \(\eta ^2\) is spreading at rate at most \(\lambda \prod _{i=1}^ke^{\epsilon _i}\), regardless of what happened at smaller scales. By adequately setting \(\epsilon _k\), we can ensure that \(\prod _{i=1}^ke^{-\epsilon _i}> \lambda \prod _{i=1}^ke^{\epsilon _i}\) for all k, allowing us to apply Proposition 4.2 at all scales.
The final ingredient is to develop a systematic way to argue that \(\eta ^1\) produces an infinite cluster. For this we introduce two types of regions, which we call contagious and infected. We start at scale 1, where all vertices of \(\eta ^2(0)\) are contagious. Using the configuration in Fig. 7a as an example, all white balls there are contagious. The contagious vertices that do not belong to large clusters and are not close to other contagious vertices are treated at scale 1. The other contagious vertices remain contagious for scale 2. Then, for each cluster treated at scale 1, the encapsulation procedure is either successful or not. If it is successful, then the yellow balls produced by the encapsulation of these clusters are declared infected, and the vertices in these clusters are removed from the set of contagious vertices. In Fig. 8b, the yellow area represents the infected vertices after clusters of scale 1 have been treated. Recall that when an encapsulation is successful, all vertices reached by \(\eta ^2\) from that cluster must be contained inside the yellow area. On the other hand, if the encapsulation is not successful, then all vertices inside the red circle become contagious and go to scale 2, together with the other vertices that remained contagious. An example of this situation is given by the cluster at the top-right corner of Fig. 8b. We carry out this procedure iteratively until there are no more contagious vertices or the origin has been disconnected from infinity by infected vertices. The proof is concluded by showing that \(\eta ^2\) is confined to the set of infected vertices, and that with positive probability the infected vertices do not disconnect the origin from infinity.
Roadmap of the proof We now proceed to the details of the proof. We split the proof into a few sections. In Sect. 5.2, we state Theorem 5.1, the more general version of Theorem 1.3. Then in Sect. 5.3 we set up the multiscale analysis, specifying the sizes of the scales and some parameters. This will define boxes of multiple scales, and we will classify boxes as being either good or bad. Roughly speaking, a box will be good if the encapsulation procedure inside the box is successful. The precise definition of good boxes is given in Sect. 5.4. In Sect. 5.5 we estimate the probability that a box is good, independently of what happens outside the box. We then introduce contagious and infected sets in Sect. 5.6, and show that \(\eta ^2\) is confined to the set of infected vertices. At this point, it remains to show that the set of infected vertices does not disconnect the origin from infinity. For this, we need to control the set of contagious vertices, which can actually grow as we move to larger scales (for example, this happens when some encapsulation procedure fails). The event that a vertex is contagious at some scale k depends on what happens at previous scales. We estimate the probability of this event by establishing a recursion over scales, which we carry out in Sect. 5.7. This gives us a way to control whether a vertex is infected. In order to show that the origin is not disconnected from infinity by infected vertices, we apply the first moment method. We sum, over all contours around the origin, the probability that the contour contains only infected vertices. Since infected vertices can arise at any scale, we need to look at multiscale paths and contours of infected vertices, which we do in Sect. 5.8. We then put all the ingredients together and complete the proof of Theorem 1.3 in Sect. 5.9.
General version of Theorem 1.3
In this section we will consider a generalization of FPPHE, where the passage times of \(\eta ^2\) can be given by any distribution, while the passage times of \(\eta ^1\) are exponential random variables of rate 1.
Let \(\upsilon \) be a probability distribution on \((0,\infty )\), with no atoms, and such that it has a finite exponential moment. It holds by [1, Theorem 2.16] that a first passage percolation process with passage times given by i.i.d. random variables with distribution \(\upsilon \) has a limit shape \(\mathcal {B}_\upsilon \), as in (4). Recall that \(\mathcal {B}\left( r\right) =r \mathcal {B}\) denotes the ball of radius r according to the norm induced by the shape theorem of first passage percolation with passage times that are exponential random variables of rate 1.
For any edge (x, y) of the lattice, let \(\zeta ^1_{x,y}\) be an independent exponential random variable of rate 1, and let \(\zeta ^2_{x,y}\) be an independent random variable distributed according to \(\upsilon \). For \(i\in \{1,2\}\), \(\zeta ^i_{x,y}\) is regarded as the passage time of \(\eta ^i\) through (x, y); that is, when \(\eta ^i\) occupies x, then after time \(\zeta ^i_{x,y}\) we have that \(\eta ^i\) will occupy y provided that y has not been occupied by the other type.
Recall that, for any t, we define \({\bar{\eta }}^1(t)\) as the set of vertices of \(\mathbb {Z}^d\) that are not contained in the infinite component of \(\mathbb {Z}^d{\setminus } \eta ^1(t)\), which comprises \(\eta ^1(t)\) and all vertices of \(\mathbb {Z}^d{\setminus } \eta ^1(t)\) that are separated from infinity by \(\eta ^1(t)\). Theorem 1.3 follows immediately from the theorem below by taking \(\upsilon \) to be the exponential distribution of rate \(\lambda \).
Theorem 5.1
For any \(\lambda <1\), there exists a value \(p_0\in (0,1)\) such that the following holds. For all \(p\in (0,p_0)\) and all \(\upsilon \) satisfying \(\mathcal {B}_\upsilon \subseteq \mathcal {B}\left( \lambda \right) \), there are positive constants \(c_1=c_1(p,d,\upsilon )\) and \(c_2=c_2(p,d,\upsilon )\) so that
Multiscale setup
Let \(\epsilon \in (0,1/2)\) be fixed and small enough so that all inequalities below hold:
We can define positive constants \(C_\mathrm {FPP}<C_\mathrm {FPP}'\), depending only on d, such that for all \(r>0\) we have
Set \(C_\mathrm {FPP}\) to be the largest constant and \(C_\mathrm {FPP}'\) to be the smallest constant satisfying (11). Since \(\mathcal {B}\left( r\right) \) is convex and has all the symmetries of the lattice \(\mathbb {Z}^d\), we obtain not only that \(\mathcal {B}\left( r\right) \) is contained in the \(\ell _\infty \)-ball of radius \(C'_\mathrm {FPP}r\), but also that it contains the \(\ell _1\)-ball of radius \(C'_\mathrm {FPP}r\). Using that the latter contains the \(\ell _\infty \)-ball of radius \(\frac{C'_\mathrm {FPP}r}{d}\), we obtain that
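The last containment is the elementary inequality \(\Vert x\Vert _1\le d\,\Vert x\Vert _\infty \): the \(\ell _1\)-ball of radius \(\rho \) contains the \(\ell _\infty \)-ball of radius \(\rho /d\), and the factor d cannot be improved. A brute-force check over lattice points (the radii and dimension below are arbitrary test values):

```python
from itertools import product

def linf_in_l1(rho_inner, rho_outer, d):
    """Is every lattice point of the l_inf-ball of radius rho_inner
    contained in the l_1-ball of radius rho_outer?"""
    r = int(rho_outer) + 1
    return all(sum(map(abs, x)) <= rho_outer
               for x in product(range(-r, r + 1), repeat=d)
               if max(map(abs, x)) <= rho_inner)
```

For \(d=3\) and \(\rho =9\), the \(\ell _\infty \)-ball of radius \(\rho /d=3\) fits inside the \(\ell _1\)-ball of radius 9, while radius 4 already sticks out (the point \((4,4,4)\) has \(\ell _1\)-norm 12).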
Given \(\upsilon \), we can define \(\Delta _\upsilon \ge 1\) as the smallest number such that
Equivalently, we have \(\Delta _\upsilon = \sup _{x \in \mathcal {B}\left( \lambda \right) } \Vert x \Vert _\upsilon \). If \(\upsilon \) is an exponential distribution of rate \(\lambda \), we have \(\Delta _\upsilon =1\).
Let \(L_1\) be a large number, and fix \(\alpha >1\) so that it satisfies the conditions in Proposition 4.2. We let k be an index for the scales. For \(k\ge 1\), once \(L_k\) has been defined, we set
Also, for \(k\ge 1\), define
where \(c_1\) is the constant in Proposition 4.2. Since \(1-\lambda > 2\epsilon \), we have that \(\mathcal {B}\left( R_k^\mathrm {enc}\right) \) contains all the passage times according to which the event in Proposition 4.2 with \(r=R_k\) is measurable. For \(k\ge 2\), let
We then obtain the following bounds for \(L_k\):
and
The first bound follows from (16) and (11), and the fact that in (16) \(L_k\) is obtained via an infimum, so any cube containing \(\mathcal {B}\left( 100 k^d R_{k-1}^{\mathrm {outer}}\right) \) must have side length at least \(L_k\). The second bound follows from similar considerations, but applying (14) and (11).
The intuition is that \(L_k\) is the size of scale k, and \(R_k\) is the radius of the clusters of \(\eta ^2(0)\) to be treated at scale k. The value of \(R_k^\mathrm {enc}\) gives the radius inside which the encapsulation takes place; in the overview in Sect. 5.1 and in Figs. 7 and 8, \(R_k^\mathrm {enc}\) will be larger than the radius of each yellow ball so that each \(\eta ^2\) cluster treated at scale k will be contained inside a ball of radius \(R_k^\mathrm {enc}\). Regarding \(R_k^{\mathrm {outer}}\), it represents a larger radius, which will be needed for the development of some couplings between scales; in the overview in Sect. 5.1 and in Figs. 7 and 8, \(R_k^{\mathrm {outer}}\) gives the radius of the red circles.
With the definitions above we obtain
for some constant \(c=c(d,\epsilon ,\alpha ,\upsilon )>\frac{288000 d^2 \alpha }{\epsilon }\exp \left( \frac{1+c_1}{2\epsilon }\right) \Delta _\upsilon \). Iterating the above bound, we obtain
By similar reasoning we can see that
which allows us to conclude that
where \({\tilde{c}}\) is a positive constant depending on \(\alpha ,\epsilon ,d\) and \(\upsilon \), and the last step follows for all \(k\ge 1\) by setting \(L_1\) large enough.
At each scale \(k\ge 1\), tessellate \(\mathbb {Z}^d\) into cubes of side length \(L_k\), producing a collection of disjoint cubes
Whenever we refer to a cube in \(\mathbb {Z}^d\), we will only consider cubes of the form \(\prod _{i=1}^d[a_i,b_i]\) for reals \(a_i<b_i\), \(i\in \{1,2,\ldots ,d\}\). We will need cubes at each scale to overlap. We then define the following collection of cubes
We refer to each such cube \(Q_k(i)\) of scale k as a k-box, and note that \(Q_k(i)\supset Q_k^\mathrm {core}(i)\). One important property is that
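One concrete way to realize such an overlapping tessellation can be sketched as follows; the enlargement of each core by \(L_k/2\) on every side is an illustrative assumption standing in for the precise definition above.

```python
def core_box(i, L):
    """Core cube of side L indexed by i in Z^d; the cores tile Z^d."""
    return tuple((L * c, L * c + L) for c in i)

def k_box(i, L):
    """Overlapping k-box: the core cube enlarged by L // 2 on every side
    (the enlargement factor is an illustrative assumption)."""
    return tuple((L * c - L // 2, L * c + L + L // 2) for c in i)

def inside(x, box):
    """Membership of a lattice point in a half-open product of intervals."""
    return all(lo <= xi < hi for xi, (lo, hi) in zip(x, box))
```

Each lattice point then belongs to exactly one core, while the enlarged boxes of neighbouring indices overlap, so every point is well inside at least one k-box.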
As described in the proof overview (see Sect. 5.1), when going from scale k to scale \(k+1\), we will need to consider a slowed-down version of \(\eta ^1\) and a sped-up version of \(\eta ^2\). For this reason we set \(\epsilon _1=0\) and define for \(k\ge 2\)
Set \(\lambda ^1_1 = 1\) and \(\lambda _1^2=\lambda \), and let \(\zeta ^1_1=\zeta ^1\) and \(\zeta ^2_1=\zeta ^2\) be the passage times used by \(\eta ^1\) and \(\eta ^2\), respectively. For \(k\ge 2\), define
We have that \(\lambda ^1_k > \lambda ^1_{k+1}\) and \(\lambda ^2_k < \lambda ^2_{k+1}\) for all \(k\ge 1\). Also, note that
which gives
where the third inequality follows from the bound on \(\epsilon \) via (10).
For each \(k\ge 2\), consider two collections of passage times \(\zeta _k^1\) and \(\zeta _k^2\) on the edges of \(\mathbb {Z}^d\), which are given by \(\frac{\zeta ^1}{\lambda ^1_k}\) and \(\frac{\zeta ^2\lambda }{\lambda ^2_k}\), respectively. These will be the passage times we will use in the analysis at scale k. Note that, for any given k, the passage times of \(\zeta _k^1\) are independent exponential random variables of parameter \(\lambda _k^1\), while for the passage times of \(\zeta _k^2\) we obtain that its limit shape is contained in \(\mathcal {B}\left( \lambda _k^2\right) \).
Moreover, up to a time scaling, having passage times \(\zeta _k^1,\zeta _k^2\) is equivalent to having type 1 spreading at rate 1, while type 2 spreads according to a random variable whose limit shape is contained in \(\mathcal {B}\left( \frac{\lambda _k^2}{\lambda _k^1}\right) \). Therefore, let \(\lambda _k^{\mathrm {eff}}=\frac{\lambda _k^2}{\lambda _k^1}\) be the effective rate of type 2 in comparison with that of type 1 at scale k. From now on, we will refer to \(\lambda ^2_k\) as the rate of spread of type 2 at scale k, even though type 2 may not have exponential passage times.
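If, as the discussion in Sect. 5.1 suggests, the rates compound multiplicatively across scales, say \(\lambda ^1_k=\prod _{i=1}^ke^{-\epsilon _i}\) and \(\lambda ^2_k=\lambda \prod _{i=1}^ke^{\epsilon _i}\) (an assumption about the definitions above, made here only to illustrate the computation), then the effective rate takes the explicit form

```latex
\lambda_k^{\mathrm{eff}}
  \;=\; \frac{\lambda_k^2}{\lambda_k^1}
  \;=\; \frac{\lambda\prod_{i=1}^{k}e^{\epsilon_i}}{\prod_{i=1}^{k}e^{-\epsilon_i}}
  \;=\; \lambda\,\exp\!\Bigl(2\sum_{i=1}^{k}\epsilon_i\Bigr),
```

so \(\lambda _k^{\mathrm {eff}}\) is increasing in k, and it stays below 1 at every scale precisely when \(2\sum _{i\ge 1}\epsilon _i<\log (1/\lambda )\), which is how the \(\epsilon _i\) are to be chosen.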
We obtain that
Thus the effective rate of spread of type 2 is smaller than 1 at all scales. We can also define the effective passage time of type 2 at scale k as
in this way, at scale k, when employing Proposition 4.2, we will take the passage times \(\zeta ^1_k,\zeta ^2_k\) and scale time by a factor of \(\lambda _k^1\), so that type 1 spreads according to the passage times \(\zeta ^1\) and type 2 spreads according to the passage times \(\zeta _k^{\mathrm {eff}}\). Finally, for \(k\ge 1\), define
Note that, using the passage times \(\zeta ^1_k,\zeta ^2_k\), we have that \(T_k^1\) represents the time required to run each encapsulation procedure at scale k (before time is scaled by a factor of \(\lambda _k^1\) as mentioned above).
Definition of good boxes
For each \(Q_k(i)\), we will apply Proposition 4.2 to handle the situation where \(Q_k(i)\) entirely contains a cluster of \(\eta ^2(0)\). At scale k we will only handle the clusters that have not already been handled at a scale smaller than k. By the relation between \(L_k\) and \(R_k\) in (18), the cluster of \(\eta ^2\) inside \(Q_k(i)\) will not start growing before \(\eta ^1\) reaches the boundary of \(L_ki+\mathcal {B}\left( R_k\right) \). By the time \(\eta ^1\) reaches the boundary of \(L_ki+\mathcal {B}\left( R_k\right) \), \(\eta ^1\) must have crossed the boundary of \(L_ki+\mathcal {B}\left( \alpha R_k\right) \). (For the moment we assume that \(L_ki+\mathcal {B}\left( \alpha R_k\right) \) does not contain the origin, otherwise we will later consider that the origin has already been disconnected from infinity by \(\eta ^2\).) At this point we apply Proposition 4.2 with \(r=R_k\) and \(\lambda =\lambda _k^{\mathrm {eff}}\), obtaining values R and T such that
where the second inequality follows from (15) and the last inequality follows from (23), and
where the last two inequalities follow from (26) and (25), respectively. Note that in our application of Proposition 4.2 above, time has been scaled by \(\lambda _k^1\), since we apply it with type 1 (resp., type 2) spreading at rate 1 (resp., \(\lambda _k^{\mathrm {eff}}\)) instead of the actual rate \(\lambda _k^1\) (resp., \(\lambda _k^2\)). This is the reason why the term \(\frac{1}{\lambda _k^1}\) appears in the definition of \(T_k^1\) in (25). With this we get \(\lambda _{k}^1 T_k^1\) on the right-hand side of (27), and the actual time that the encapsulation procedure takes is \(\frac{1}{\lambda _k^1}T\le T_k^1\). We have not yet defined the sets \(\{\Pi _\iota \}_\iota ,\{\Pi '_\iota \}_\iota \); they will only be defined precisely in Sect. 5.6.
Now let \(E_k(i,x)\), with \(x\in L_ki+\mathcal {B}\left( \alpha R_k\right) {\setminus } \mathcal {B}\left( \alpha R_k/2\right) \), be the event in the application of Proposition 4.2 with the origin at \(L_ki\), \(r=R_k\), passage times given by \(\zeta ^1,\zeta ^{\mathrm {eff}}_k\), and \(\eta ^1\) starting from x. Here x represents the first vertex of \(\partial ^\mathrm {o}\left( L_ki+ \mathcal {B}\left( \alpha R_k\right) \right) \) occupied by \(\eta ^1\), from which the encapsulation of the cluster of \(\eta ^2\) inside \(L_ki+\mathcal {B}\left( R_k^\mathrm {enc}\right) \) will start. Letting \(B_k(i) = \left( L_ki+\mathcal {B}\left( \alpha R_k\right) {\setminus } \mathcal {B}\left( \alpha R_k/2\right) \right) \cup \partial ^\mathrm {o}\left( L_ki+\mathcal {B}\left( \alpha R_k\right) \right) \), define
The event \(G_k^\mathrm {enc}(i)\) implies that \(\eta ^1\) encapsulates \(\eta ^2\) inside \(L_ki+\mathcal {B}\left( R_k^\mathrm {enc}\right) \) during a time interval of length \(T_k^1\), unless \(\eta ^2\) “invades” \(L_ki+\mathcal {B}\left( R_k^\mathrm {enc}\right) \) from outside; that is, unless another cluster of \(\eta ^2\) starts growing and reaches the boundary of \(L_ki+\mathcal {B}\left( R_k^\mathrm {enc}\right) \) before \(\eta ^1\) manages to encapsulate \(\eta ^2\) inside \(L_ki+\mathcal {B}\left( R_k^\mathrm {enc}\right) \). (When we apply the above argument later in the proof, we will only try to encapsulate a cluster of \(\eta ^2\) at scale k if the ball \(L_ki+\mathcal {B}\left( R_k^{\mathrm {outer}}\right) \supset L_ki+ \mathcal {B}\left( R_k^\mathrm {enc}\right) \) does not intersect other balls being treated at the same scale. If there is another ball being treated at the same scale k and intersecting \(L_ki+\mathcal {B}\left( R_k^{\mathrm {outer}}\right) \), then these balls will only be treated at a larger scale, which prevents different clusters of \(\eta ^2\) of the same scale from interfering with each other's encapsulation.)
For each \(i\in \mathbb {Z}^d\), define
We will also define two other events \(G_k^{1}(i)\) and \(G_k^{2}(i)\), which will be measurable with respect to \(\zeta ^1_k,\zeta ^2_k\) inside \(Q_k^{\mathrm {outer}}(i)\). For any \(X\subset \mathbb {Z}^d\), let \(\zeta ^1_{k,X}\) be the passage times that are equal to \(\zeta ^1_k\) inside X and are equal to infinity everywhere else; define \(\zeta ^2_{k,X}\) analogously. Define the event \(G_k^{1}(i)\) as
The main intuition behind this event is that, during the encapsulation of a \((k+1)\)-box, \(\eta ^1\) will need to perform some small local detours when encapsulating clusters of scale k or smaller. We can capture this by using the slower passage times \(\zeta _{k+1}^1\). If \(G_k^{1}\) holds for the k-boxes that are traversed during the encapsulation of a \((k+1)\)-box, then using the slower passage times \(\zeta _{k+1}^1\) but ignoring the actual detours around k-boxes will only slow down \(\eta ^1\).
We also need to handle the case where the growth of \(\eta ^2\) is sped up by absorption of smaller scales. For \(i\in \mathbb {Z}^d\), define
Note that the event \(G_k^2(i)\) implies the following. Let \(x\in \partial ^\mathrm {i}Q_k^{{\mathrm {outer}}/3}(i)\) be the first vertex of \(Q_k^{{\mathrm {outer}}/3}(i)\) reached by \(\eta ^2\) from outside \(Q_k^{{\mathrm {outer}}/3}(i)\). While \(\eta ^2\) travels from x to \(Q_k^\mathrm {enc}(i)\), the encapsulation of \(Q_k^\mathrm {enc}(i)\) may start taking place. Then, \(\eta ^2\) can only be sped up inside \(Q_k^\mathrm {enc}(i)\) if \(\eta ^2\) enters \(Q_k^\mathrm {enc}(i)\) before the encapsulation of \(Q_k^\mathrm {enc}(i)\) is completed. However, under \(G_k^2(i)\) and the passage times \(\zeta _k^2\), the time that \(\eta ^2\) takes to go from x to \(Q_k^\mathrm {enc}(i)\) is larger than the time, under \(\zeta _{k+1}^2\), that \(\eta ^2\) takes to go from x to all vertices in \(Q_k^\mathrm {enc}(i)\). Therefore, under \(G_k^2(i)\), we can use the faster passage times \(\zeta _{k+1}^2\) to absorb the possible speed-up that \(\eta ^2\) may get from the cluster growing inside \(Q_k^\mathrm {enc}(i)\).
For \(i\in \mathbb {Z}^d\) and \(k\ge 1\), we define
and say that
Hence, intuitively, \(Q_k(i)\) being good means that \(\eta ^1\) successfully encapsulates the growing cluster of \(\eta ^2\) inside \(Q_k(i)\), and this happens in such a way that the detour of \(\eta ^1\) during this encapsulation is faster than letting \(\eta ^1\) use the passage times \(\zeta _{k+1}^1\), and also the possible speed-up that \(\eta ^2\) may get from clusters of \(\eta ^2\) coming from outside \(Q_k(i)\) is slower than letting \(\eta ^2\) use the passage times \(\zeta _{k+1}^2\).
We now explain why in the definition of \(G_k^1(i)\) and \(G_k^2(i)\) we calculate passage times from \(\partial Q_k^{{\mathrm {outer}}/3}(i)\) instead of from \(\partial Q_k^{\mathrm {outer}}(i)\). The reason is that we had to define \(G_k^1(i)\) and \(G_k^2(i)\) in such a way that they are measurable with respect to the passage times inside \(Q_k^{\mathrm {outer}}(i)\). We achieve this by only using passage times inside \(Q_k^{\mathrm {outer}}(i)\). The distance between \(\partial Q_k^{\mathrm {outer}}(i)\) and \(\partial Q_k^{{\mathrm {outer}}/3}(i)\) then ensures that this constraint does not change the probability of the corresponding events by much.
Probability of good boxes
In this section we show that the events \(G_k^\mathrm {enc}(i)\), \(G_k^1(i)\) and \(G_k^2(i)\), defined in Sect. 5.4, are likely to occur.
Lemma 5.2
There exist positive constants \(L_0=L_0(d,\epsilon )\) and \(c=c(d,\upsilon )\) such that if \(L_1\ge L_0\), then for any \(k\ge 1\) and any \(i\in \mathbb {Z}^d\) we have
Moreover, the event \(G_k(i)\) is measurable with respect to the passage times inside \(Q_k^{\mathrm {outer}}(i)\).
Before proving the lemma above, we state and prove two lemmas regarding the probability of the events \(G_k^1(i)\) and \(G_k^2(i)\).
Lemma 5.3
There exist positive constants \(L_0=L_0(d,\epsilon )\) and \(c=c(d)\) such that if \(L_1\ge L_0\), then for any \(k\ge 1\) and any \(i\in \mathbb {Z}^d\) we have
Proof
Set \(\delta =\frac{\epsilon }{120 k^{2}}\). Define
and
We will show that there exists a constant \(c=c(d)>0\) such that
and
Using (28) and (29), it remains to show that
Note that
Thus we need to show that the last term on the right-hand side above is at least \((1+\delta )\tau _2\), which is equivalent to showing that
Rearranging the terms, the inequality above translates to
Using that \(\exp \left( \epsilon (k+1)^{-2}\right) \ge 1+\epsilon (k+1)^{-2}\) and then applying the value of \(\delta \), we obtain that the left-hand side above is at least
Hence, it now suffices to show that
which is true since the right-hand side above is at most \(\left( \frac{11\lambda }{10}\right) ^2 + \exp \left( \epsilon /4\right) \le \left( \frac{11}{10}\right) ^2 + \exp \left( 1/4\right) \le 3\).
Now we turn to establishing (28) and (29). We start with (28). First note that
Recall the notation \(S_t^\delta \) from Proposition 3.1, which is the (unlikely) event that at time t first passage percolation of rate 1 does not contain \(\mathcal {B}\left( (1-\delta )t\right) \) or is not contained in \(\mathcal {B}\left( (1+\delta )t\right) \). Then, using time scaling to go from passage times of rate \(\lambda _{k+1}^1\) to passage times of rate 1, and using the union bound on x, we obtain
where in the first inequality we used that \((1+\delta )(1-\delta )\tau _1\lambda _{k+1}^1\le \tau _1\lambda _{k+1}^1 = R_k^{\mathrm {outer}}/3-R_k^\mathrm {enc}\), and in the second inequality we applied Proposition 3.1.
Now we turn to (29). We again use time scaling and the fact that \(\tau _2\lambda _k^1=R^{\mathrm {outer}}_k/3\) to write
where the second inequality follows since \((1-\delta /2)(1+\delta )\tau _2\lambda _k^1\ge R_k^{\mathrm {outer}}/3\) for all \(\delta \in [0,1]\). Moreover, \((1+\delta /2)(1+\delta )\tau _2\lambda _k^1< \frac{2 R_k^{\mathrm {outer}}}{3}\), implying that \(S_{(1+\delta )\tau _2\lambda _k^1}^{\delta /2}\) is measurable with respect to the passage times inside \(Q_k^{\mathrm {outer}}(i)\). Finally, the last step of the derivation above follows from Proposition 3.1. \(\square \)
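Two elementary numerical steps in the proof above can be spot-checked directly. The snippet below is only a sanity check of the quoted bounds, under the assumptions used in the proof (\(\lambda \le 1\) and \(\epsilon \le 1\)):

```python
import math

# Final bound of the proof: (11*lam/10)**2 + exp(eps/4) is maximized over
# lam <= 1 and eps <= 1 at lam = eps = 1, and that value is below 3.
assert (11 / 10) ** 2 + math.exp(1 / 4) <= 3  # 1.21 + 1.284... < 3

# (1 - d/2)*(1 + d) = 1 + d/2 - d**2/2 >= 1 whenever 0 <= d <= 1,
# which is the inequality justifying the second step above.
deltas = [i / 1000 for i in range(1001)]
assert all((1 - d / 2) * (1 + d) >= 1 for d in deltas)
```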
The next lemma shows that \(G_k^2(i)\) occurs with high probability.
Lemma 5.4
There exist positive constants \(L_0=L_0(d,\epsilon )\) and \(c=c(d,\nu )\) such that if \(L_1\ge L_0\), then for any \(k\ge 1\) and any \(i\in \mathbb {Z}^d\) we have
Proof
Set \(\delta =\frac{\epsilon }{20 k^{2}}\) and fix an arbitrary \(x\in \partial ^\mathrm {i}Q_k^{{\mathrm {outer}}/3}(i)\). Define the smallest distance between x and \(Q_k^\mathrm {enc}(i)\) with respect to the norm \(\upsilon \) as
Since \(\mathcal {B}_\upsilon \subseteq \mathcal {B}\left( \lambda \right) \), we have that
Under the passage times \(\zeta _k^2\), the time it takes to reach \(Q_k^\mathrm {enc}(i)\) from x is roughly \(m \frac{\lambda }{\lambda _k^2}\). Therefore, we define
and will show later that there exists a constant \(c'>0\) such that, uniformly over x,
Now, under the faster passage times \(\zeta _{k+1}^2\), the time it takes to reach \(Q_k^\mathrm {enc}(i)\) from x is roughly \(m \frac{\lambda }{\lambda _{k+1}^2}\). Let \(x'\in \partial ^\mathrm {i}Q_k^\mathrm {enc}(i)\) be the first vertex of \(Q_k^\mathrm {enc}(i)\) reached from x. Note that
Under the passage times \(\zeta _{k+1}^2\), which are a scaling of \(\zeta ^2\) by a factor of \(\frac{\lambda }{\lambda _{k+1}^2}\), the time until \(x'+\mathcal {B}_\upsilon \left( \Delta _\upsilon \frac{2 R_k^\mathrm {enc}}{\lambda }\right) \) is fully occupied, starting from y, is roughly \(\Delta _\upsilon \frac{2 R_k^\mathrm {enc}}{\lambda _{k+1}^2}\). Therefore, we set
and will show that there exists a constant \(c''>0\) such that
Assuming (30) and (31) for the moment, it remains to show that
Replacing \(\lambda _{k+1}^2\) with \(\lambda _k^2 \exp (\epsilon (k+1)^{-2})\) in the definition of \(\tau _2\), (32) follows if we show that
First note that
where the first inequality follows by the definition of \(R_k^{\mathrm {outer}}\) in (15). So now it suffices to show that
Rearranging gives that \(\frac{1-\delta }{(1+\delta )(1+5\delta /2)}\ge \exp \left( -\epsilon (k+1)^{-2}\right) \). The left-hand side is at least \((1-\delta )^2(1-5\delta /2)\ge 1-\frac{9\delta }{2}\). Using that \(e^{-a}\le 1-a+a^2/2\) for all \(a\ge 0\), (32) holds if the following is true
Using the value of \(\delta \), we are left with showing
which is true since the right-hand side is at least \(\frac{1}{4} \cdot \left( 1-\frac{\epsilon }{8}\right) \ge \frac{1}{4} \cdot \frac{15}{16}\). This establishes (32).
Now we turn to establishing (30) and (31), which essentially follow from Proposition 3.1. We start with (30). Scaling the passage times \(\zeta _k^2\) by \(\frac{\lambda _k^2}{\lambda }\) we obtain passage times distributed as \(\upsilon \). Hence,
The same reasoning holds for (31), which gives
Then the lemma follows by taking the union bound over x, and using the fact that \(R_k^{\mathrm {outer}}\) is very large at all scales, so that the extra term obtained from the union bound can be absorbed into the constant c. \(\square \)
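The arithmetic at the end of the proof above can likewise be spot-checked. The snippet below verifies our reconstruction of the final inequality (with \(\delta =\epsilon /(20k^2)\) and \(\epsilon \le 1/2\)); the reduction rests on \(k^2(k+1)^{-2}\ge 1/4\) and \(1-\epsilon (k+1)^{-2}/2\ge 1-\epsilon /8\ge 15/16\):

```python
# Spot-check (our reconstruction of the last step of the proof): with
# delta = eps / (20 * k**2), verify
#   9*delta/2 <= eps*(k+1)**-2 - eps**2*(k+1)**-4 / 2
# over a grid of scales k and values eps <= 1/2.
for k in range(1, 500):
    for eps in (0.01, 0.1, 0.25, 0.5):
        delta = eps / (20 * k ** 2)
        lhs = 9 * delta / 2
        rhs = eps * (k + 1) ** -2 - eps ** 2 * (k + 1) ** -4 / 2
        assert lhs <= rhs

# The two constants quoted in the text:
assert 9 / 40 <= (1 / 4) * (15 / 16)
```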
Proof of Lemma 5.2
Proposition 4.2 gives that \(G_k^\mathrm {enc}(i)\) can be defined so that it is measurable with respect to the passage times inside
Moreover, Proposition 4.1 gives a constant \(c_2>0\) so that, for all large enough \(L_1\), we have
where the last step follows by applying the bounds in (23), and c is a positive constant. By definition, the events \(G_k^1(i)\) and \(G_k^2(i)\) are measurable with respect to the passage times inside \(L_ki+\mathcal {B}\left( R_k^{\mathrm {outer}}\right) \). So the proof is completed by using the bounds in Lemmas 5.3 and 5.4. \(\square \)
Contagious and infected sets
As discussed in the proof overview in Sect. 5.1, for each scale k, we will define a set \(C_k\subset \mathbb {Z}^d\) as the set of contagious vertices at scale k, and also define a set \(I_k\subset \mathbb {Z}^d\) as the set of infected vertices at scale k. The main intuition behind such sets is that \(C_k\) represents the vertices of \(\mathbb {Z}^d\) that need to be handled at scale k or larger, whereas \(I_k\) represents the vertices of \(\mathbb {Z}^d\) that may be taken by \(\eta ^2\) at scale k. In particular, we will show that the vertices of \(\mathbb {Z}^d\) that will be occupied by \(\eta ^2\) are contained in \(\bigcup _{k\ge 1}I_k\).
At scale 1 we set the contagious vertices as those initially taken by \(\eta ^2\); that is,
All clusters of \(C_1\) that belong to good 1-boxes and that are not too close to contagious clusters from other 1-boxes will be “cured” by the encapsulation process described in the previous section. The other vertices of \(C_1\) will become contagious vertices for scale 2, together with the vertices belonging to bad 1-boxes. Using this, define \(C_k^{\mathrm {bad}}\) as the following subset of the contagious vertices:
Intuitively, \(C_k^{\mathrm {bad}}\) is the set of contagious vertices that cannot be cured at scale k since they are not far enough from contagious vertices in other k-boxes. Now, for the vertices in \(C_k{\setminus } C_k^{\mathrm {bad}}\), the definition of \(C_k^{\mathrm {bad}}\) gives that we can select a set \(\mathcal {I}_k\subset \mathbb {Z}^d\) representing k-boxes such that for each \(x\in C_k{\setminus } C_k^{\mathrm {bad}}\) there exists a unique \(i\in \mathcal {I}_k\) for which \(x\in Q_k(i)\), and for each pair \(i,j\in \mathcal {I}_k\), we have \(Q_k^{\mathrm {outer}}(i)\cap Q_k^{\mathrm {outer}}(j)=\emptyset \). Then, given \(C_k\), we define \(I_k\) as the set of vertices that can be taken by \(\eta ^2\) during the encapsulation of good k-boxes, which is more precisely given by
We then define inductively
The lemma below gives that if the contagious sets of scales larger than k are all empty, then \(\eta ^2\) must be contained inside \(\bigcup _{j=1}^{k-1}I_j\).
Lemma 5.5
Let \(A\subset \mathbb {Z}^d\) be arbitrary. Then, for any \(k\ge 1\), either we have that
or
Proof
We will assume that (35) does not occur; that is,
The lemma will follow by showing that the above implies (36).
We start with scale 1. Recall that \(C_1\) contains all elements of \(\eta ^2(0)\). Then, all elements of \(C_1{\setminus } C_1^{\mathrm {bad}}\) are handled at scale 1. Let \(i\in \mathcal {I}_1\), so \(Q_1(i)\) intersects \(C_1{\setminus } C_1^{\mathrm {bad}}\). If \(Q_1(i)\) is a good box, the passage times inside \(Q_1^\mathrm {enc}(i)\) are such that \(\eta ^1\) encapsulates \(\eta ^2\) within \(Q_1^\mathrm {enc}(i)\) unless another cluster of \(\eta ^2\) enters \(Q_1^\mathrm {enc}(i)\) from outside. When the encapsulation succeeds, we have that the cluster of \(\eta ^2\) growing inside \(Q_1^\mathrm {enc}(i)\) never exits \(Q_1^\mathrm {enc}(i)\subset I_1\).
Before proceeding to the proof for scales larger than 1, we discuss the possibility that the encapsulation above does not succeed because another cluster of \(\eta ^2\) (say, from \(Q_1(j)\)) enters \(Q_1^\mathrm {enc}(i)\) from outside. Note that if \(Q_1^{\mathrm {outer}}(j)\cap Q_1^{\mathrm {outer}}(i) \ne \emptyset \), then the two clusters are not handled at scale 1: they will be handled together at a higher scale. Now assume that \(Q_1^{\mathrm {outer}}(j)\) and \(Q_1^{\mathrm {outer}}(i)\) are disjoint and do not intersect any other region \(Q_1^{\mathrm {outer}}\) from a contagious site. Thus both \(Q_1(i)\) and \(Q_1(j)\) are handled at scale 1. If they are both good, the encapsulations succeed within \(Q_1^\mathrm {enc}(i)\) and \(Q_1^\mathrm {enc}(j)\), and do not interfere with each other. Assume that \(Q_1(i)\) is good, but \(Q_1(j)\) is bad. In this case, we will make \(Q_1^{\mathrm {outer}}(j)\) contagious for scale 2, but up to scale 1 this does not interfere with the encapsulation within \(Q_1^\mathrm {enc}(i)\) because these two regions are disjoint. The encapsulation of \(Q_1^{\mathrm {outer}}(j)\) will be treated at scale 2 or higher, and the fact that \(Q_1^{\mathrm {outer}}(j)\cap Q_1^{\mathrm {outer}}(i)=\emptyset \) will be used to allow a coupling argument between scales.
We now explain the analysis for a scale \(j\in \{2,3,\ldots ,k\}\), assuming that we have carried out the analysis up to scale \(j-1\). Thus, we have shown that all contagious vertices successfully handled at scales smaller than j are contained inside \(I_1\cup I_2 \cup \cdots \cup I_{j-1}\). Consider a cell \(Q_j(i)\) of scale j with \(i\in \mathcal {I}_j\). During the encapsulation of \(\eta ^2\) inside \(Q_j^\mathrm {enc}(i)\), it may happen that \(\eta ^1\) advances through a cell \(Q_{j-1}(i')\) that was treated at scale \(j-1\); that is, \(i'\in \mathcal {I}_{j-1}\). (For simplicity of the discussion, we assume here that this cell is of scale \(j-1\), but it could be of any scale \(j'\le j-1\).) Note that \(Q_{j-1}(i')\) must be good for scale \(j-1\), because otherwise cell i would not be treated at scale j. The fact that \(Q_{j-1}(i')\) is good implies that the time \(\eta ^1\) takes to go from \(\partial ^\mathrm {i}Q_{j-1}^{{\mathrm {outer}}/3}(i')\) to all points in \(\partial ^\mathrm {i}Q_{j-1}^\mathrm {enc}(i')\), thereby encapsulating \(Q_{j-1}(i')\), is smaller than the time given by the passage times \(\zeta _j^1\). Moreover, \(Q_{j-1}(i')\) being good implies that the time \(\eta ^2\) takes to go from \(\partial ^\mathrm {i}Q_{j-1}^{{\mathrm {outer}}/3}(i')\) to any point in \(\partial ^\mathrm {o}Q_{j-1}^\mathrm {enc}(i')\) is larger than the time given by the passage times \(\zeta _j^2\). This puts us in the context of Proposition 4.2, where the sets \(\{\Pi _\iota \}_\iota \) are given by the clusters of \(\bigcup _{j''=1}^{j-1} \bigcup _{i'' \in \mathcal {I}_{j''}} Q_{j''}^{\mathrm {outer}}(i'')\), and for each \(\iota \), the set \(\Pi _\iota '\subset \Pi _\iota \) is given by the union of \(Q_{j''}^\mathrm {enc}(i'')\) over all \(j'',i''\) for which \(Q_{j''}^{\mathrm {outer}}(i'')\subset \Pi _\iota \).
Therefore, under the event that all the cells involved in the definition of \(\{\Pi _\iota \}_\iota \) are good, Proposition 4.2 gives that \(\eta ^2\) cannot escape the set \(\bigcup _{\iota =1}^j I_\iota \) after all contagious vertices of scale at most j have been analyzed. Hence, inductively we obtain that \(\eta ^2(t)\subset \bigcup _{\iota =1}^\infty I_\iota \).
For scales larger than k, we will use that (37) holds. Since for any scale j and any \(i\in \mathbb {Z}^d\) we have that
we obtain
This and (37) give that \(\bigcup \nolimits _{j>k}I_j\) does not intersect A, hence \(\eta ^2(t)\cap A\subset \bigcup _{\iota =1}^k I_\iota \). \(\square \)
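The scale-by-scale bookkeeping used in the proof above can be summarized schematically. In the sketch below all geometry is stubbed out: `is_isolated`, `enc_region` and `bad_box_vertices` are placeholders standing in for the conditions defining \(C_k^{\mathrm {bad}}\), the regions \(Q_k^\mathrm {enc}\) entering \(I_k\), and the vertices of bad k-boxes; none of them is the paper's actual definition.

```python
def multiscale_step(C_k, is_isolated, enc_region, bad_box_vertices):
    """One scale of the contagious/infected bookkeeping: from the
    contagious set C_k, produce the infected set I_k and the contagious
    set C_{k+1} passed on to the next scale."""
    # Contagious vertices too close to other clusters cannot be cured now:
    C_bad = {x for x in C_k if not is_isolated(x)}
    # Cured vertices contribute their encapsulation regions to I_k:
    I_k = set()
    for x in C_k - C_bad:
        I_k |= enc_region(x)
    # Uncured vertices and vertices of bad boxes become contagious at k+1:
    C_next = C_bad | bad_box_vertices
    return I_k, C_next

# Toy run with stub geometry:
I1, C2 = multiscale_step(
    C_k={1, 2, 3},
    is_isolated=lambda x: x != 3,      # vertex 3 is too close to another cluster
    enc_region=lambda x: {x, x + 10},  # fake encapsulation region
    bad_box_vertices={99},             # fake vertices of bad boxes
)
assert I1 == {1, 2, 11, 12}
assert C2 == {3, 99}
```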
Recursion
Define
Recall that the sets \(Q_k^\mathrm {core}(i)\), defined in (22), are disjoint for different \(i\in \mathbb {Z}^d\). Define also
where c is the constant in Lemma 5.2 so that for any \(k\in \mathbb {N}\) and \(i\in \mathbb {Z}^d\), we have \(\mathbb {P}\left( G_k(i)\right) \ge 1q_k\).
By the definition of \(C_k\) from (34), in order to have \(Q_k^\mathrm {core}(i)\cap C_k\ne \emptyset \) it must happen that either
or
where \(\iota \) is the unique number such that \(x\in Q_{k-1}^\mathrm {core}(\iota )\). The condition above holds for the following reason. If (38) does not happen, then there must exist an \(x\in C_{k-1}\cap Q_k^\mathrm {core}(i)\) that was not treated at scale \(k-1\); that is, \(x\in C_{k-1}^{\mathrm {bad}}\). Then, by the definition of \(C^{\mathrm {bad}}_{k-1}\), it must be the case that there exists a y satisfying the conditions in (39). The values x, y as in (39) must satisfy
Lemma 5.6
For any \(k\ge 2\) and any \(i\in \mathbb {Z}^d\), define the super cell
for \(k=1\), set \(Q_1^\mathrm {super}(i) = Q_1^\mathrm {core}(i)\). Then the event \(\left\{ Q_k^\mathrm {core}(i)\cap C_k\ne \emptyset \right\} \) is measurable with respect to the passage times inside \(Q_k^\mathrm {super}(i)\).
Proof
The lemma is true for \(k=1\) since \(\left\{ Q_1^\mathrm {core}(i)\cap C_1\ne \emptyset \right\} \) is equivalent to \(\left\{ Q_1^\mathrm {core}(i)\cap \eta ^2(0)\ne \emptyset \right\} \). Our goal is to apply an induction argument to establish the lemma for \(k>1\). First note that, since the event that a box of scale \(k-1\) is good is measurable with respect to the passage times inside a ball of diameter \(R_{k-1}^{\mathrm {outer}}\), we have that condition (38) is measurable with respect to the passage times inside
It remains to establish the measurability result for condition (39). Note that condition (39) gives the existence of a point y in \(\bigcup _{x\in Q_k^\mathrm {core}(i)} \left( x+\mathcal {B}\left( 3R_{k-1}^{\mathrm {outer}}\right) \right) \) such that \(y\in C_{k-1}\). Let j be the integer such that \(y\in Q_{k-1}^\mathrm {core}(j)\). Then the induction hypothesis gives that \(\{Q_{k-1}^\mathrm {core}(j) \cap C_{k-1}\ne \emptyset \}\) is measurable with respect to the passage times inside
Therefore, condition (39) is measurable with respect to the passage times inside \(Q_k^\mathrm {super}(i)\). \(\square \)
Lemma 5.7
There exists a constant \(c=c(d,\epsilon ,\alpha ,\upsilon )>0\) such that, for all \(k\in \mathbb {N}\) and all \(i\in \mathbb {Z}^d\), we have
Proof
From the discussion above, we have that \(\rho _k(i)\) is bounded above by the probability that condition (38) occurs plus the probability that condition (39) occurs. We start with condition (38). Note that \(Q_{k-1}^\mathrm {core}(j)\), for j defined as in (38), must be contained inside
Therefore, there is a constant \(c_0\) depending only on d such that the number of options for the value of j is at most
for some constant \(c_1\), where the first inequality comes from (21) and the last inequality follows from (19). Then, taking the union bound on the value of j, we obtain that the probability that condition (38) occurs is at most
Now we bound the probability that condition (39) happens. For any \(z\in \mathbb {Z}^d\), let \(\varphi (z)\in \mathbb {Z}^d\) be such that \(z\in Q_{k-1}^\mathrm {core}(\varphi (z))\). We will need to estimate the number of different values that \(\varphi (x)\) and \(\varphi (y)\) can assume. Since \(x\in Q_k^\mathrm {core}(i)\), we have that \(\varphi (x)\) can assume at most \(\left( \frac{L_k+2L_{k-1}}{L_{k-1}}\right) ^d\) values. For \(\varphi (y)\), first note that any point in \(Q_{k-1}^\mathrm {core}(\varphi (y))\) must be contained inside \(\varphi (x)L_{k-1} + [-3C_\mathrm {FPP}' R_{k-1}^{\mathrm {outer}}- L_{k-1},3C_\mathrm {FPP}' R_{k-1}^{\mathrm {outer}}+ L_{k-1}]^d\). Therefore, \(Q_{k-1}^\mathrm {core}(\varphi (y))\) must be contained inside a cube of side length
and consequently there are at most
possible values for \(\varphi (y)\). Letting \(A_k\) be the number of ways of choosing the \(Q_{k-1}^\mathrm {core}\) boxes containing x, y according to condition (39), we obtain
for some constant \(c_2=c_2(d,\epsilon ,\alpha ,\upsilon )>0\), where the inequality follows from (19). Now, given x, y, we want to give an upper bound for
From Lemma 5.6 we have that the event \(\left\{ Q_{k-1}^\mathrm {core}(\varphi (x)) \cap C_{k-1}\ne \emptyset \right\} \) is measurable with respect to the passage times inside \(Q_{k-1}^\mathrm {super}(\varphi (x))\). By the definition of \(Q_{k-1}^\mathrm {super}\) in (41), for any \(z\in Q_{k-1}^\mathrm {super}(\varphi (x))\) we have
where we related \(R_{k-2}^{\mathrm {outer}}\) and \(R_{k-1}\) via (21). Since by (40) and (18) we have
where the last inequality follows from (12), we then obtain \(Q_{k-1}^\mathrm {super}(\varphi (x)) \cap Q_{k-1}^\mathrm {super}(\varphi (y))= \emptyset \). This gives that the events \(\left\{ Q_{k-1}^\mathrm {core}(\varphi (x)) \cap C_{k-1}\ne \emptyset \right\} \) and \(\left\{ Q_{k-1}^\mathrm {core}(\varphi (y)) \cap C_{k-1}\ne \emptyset \right\} \) are independent, yielding
\(\square \)
In the lemma below, recall that \(\eta ^2(0)\) is given by adding each vertex of \(\mathbb {Z}^d\) with probability p, independently of one another. Also let \({\bar{\rho }}\) be such that \({\bar{\rho }}\ge \sup _{j}\rho _{1}(j)\).
Lemma 5.8
Fix any positive constant a. We can set \(L_1\) large enough and then p small enough, both depending on \(a,\alpha ,\epsilon , d\) and \(\upsilon \), such that for all \(k\in \mathbb {N}\) and all \(i\in \mathbb {Z}^d\), we have
Proof
For \(k=1\), \(\rho _k(i)\) is bounded above by the probability that \(\eta ^2(0)\) intersects \(Q_k^\mathrm {core}(i)\). Once \(L_1\) has been fixed, this probability can be made arbitrarily small by setting p small enough.
Now we assume that \(k\ge 2\). We will expand the recursion in Lemma 5.7. Using the same constant c as in Lemma 5.7, define
Now fix k, set \(A_{-1}=1\), and define for \(\ell =0,1,\ldots ,k-1\)
With this, the recursion in Lemma 5.7 can be written as
where in the second inequality we used that \((x+y)^m \le 2^{m-1}\left( x^m+y^m\right) \) for all \(x,y\ge 0\) and \(m\in \mathbb {N}\). Iterating the above inequality, we obtain
We now claim that
We can prove (44) by induction on \(\ell \). Note that \(A_0\) does satisfy the above inequality. Then, using the induction hypothesis and the recursive definition of \(A_\ell \) in (42), we have
Now we use that \((x+1)^{5/2}\le 6x^{3}\) for all \(x\ge 1\), which yields
establishing (44). Plugging (44) into (43), we obtain
Given a value of \(L_1\), for all small enough p we obtain that \({\bar{\rho }}\) is sufficiently small to yield
Now we turn to the second term in (45). Note that for small enough \(\epsilon \), we have \(\epsilon \lambda R_k^\mathrm {enc}\ge \epsilon \lambda \exp \left( \frac{1+c_1}{2\epsilon }\right) R_k> R_k\). Thus, from Lemma 5.2, we have that \(q_{k-\ell }\le \exp \left( -c R_{k-\ell }^\frac{d+1}{2d+4}\right) \) for some constant \(c=c(\alpha ,\epsilon ,d,\upsilon )>0\). We have from the relations (19) and (20) that \(R_j\le c_1 c_2^j (j!)^{d}L_1\) for positive constants \(c_1,c_2\). Therefore, for any \(k\ge \ell \), we have that
where in the last step we use that \(c_2^\frac{d+1}{2d+4}\ge c_2^{1/3}\ge 2\). Hence, for sufficiently large \(L_1\) we obtain
\(\square \)
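Three elementary facts invoked in the proof above can be verified numerically; the snippet below is only a sanity check of the quoted inequalities, not part of the argument.

```python
import random
from fractions import Fraction

random.seed(0)

# (x + y)**m <= 2**(m-1) * (x**m + y**m) for nonnegative x, y (convexity):
for _ in range(1000):
    x, y = 10 * random.random(), 10 * random.random()
    for m in range(1, 7):
        assert (x + y) ** m <= 2 ** (m - 1) * (x ** m + y ** m) + 1e-9

# (x + 1)**(5/2) <= 6 * x**3 for x >= 1, checked on a grid:
assert all((t / 100 + 1) ** 2.5 <= 6 * (t / 100) ** 3 for t in range(100, 10000))

# The exponent (d+1)/(2d+4) is at least 1/3 in every dimension d >= 1:
assert all(Fraction(d + 1, 2 * d + 4) >= Fraction(1, 3) for d in range(1, 100))
```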
Multiscale paths of infected sets
Let \(x\in \mathbb {Z}^d\) be a fixed vertex. We say that \(\Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\) is a multiscale path from x if \(x\in Q_{k_1}^\mathrm {enc}(i_1)\), and for each \(j\in \{2,3,\ldots ,\ell \}\) we have \(Q_{k_j}^\mathrm {enc}(i_j)\cap Q_{k_{j-1}}^\mathrm {enc}(i_{j-1})\ne \emptyset \). Hence,
Given such a path, we say that the reach of \(\Gamma \) is given by \(\sup \big \{\Vert z-x\Vert :z \in \bigcup \nolimits _{(k,i)\in \Gamma } Q_{k}^\mathrm {enc}(i)\big \}\); that is, the distance between x and the point of \(\Gamma \) that is furthest away from x. We will only consider paths such that \(Q_{k_j}^\mathrm {enc}(i_j)\subset I_{k_j}\). Recall the way the sets \(I_\kappa \) are constructed from \(C_\kappa {\setminus } C_\kappa ^{\mathrm {bad}}\), which is defined in (33). Then for any two \((k,i),(k',i')\in \Gamma \) with \(k=k'\) we have \(Q_{k}^\mathrm {enc}(i)\cap Q_{k'}^\mathrm {enc}(i')=\emptyset \). Therefore, we impose the additional restriction that on any multiscale path \(\Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\) we have \(k_j\ne k_{j-1}\) for all \(j\in \{2,3,\ldots ,\ell \}\).
Now we introduce a subset \({\tilde{\Gamma }}\) of \(\Gamma \) as follows. For each \(k\in \mathbb {N}\) and \(i\in \mathbb {Z}^d\), define
Note that \(Q_k^{\mathrm {outer}}(i)\subset Q_k^\mathrm {neigh}(i)\subset Q_k^\mathrm {neigh2}(i)\). Let \(\kappa _1> \kappa _2>\cdots \) be an ordered list of the scales that appear in cells of \(\Gamma \). The set \({\tilde{\Gamma }}\) will be constructed in steps, one step for each scale. First, add to \({\tilde{\Gamma }}\) all cells of \(\Gamma \) of scale \(\kappa _1\). Then, for each \(j\ge 2\), after having decided which cells of \(\Gamma \) of scale at least \(\kappa _{j-1}\) we add to \({\tilde{\Gamma }}\), we add to \({\tilde{\Gamma }}\) all cells \((k,i)\in \Gamma \) of scale \(k=\kappa _j\) such that \(Q_k^\mathrm {neigh}(i)\) does not intersect \(Q_{k'}^\mathrm {neigh}(i')\) for any \((k',i')\) already added to \({\tilde{\Gamma }}\). Recall that, from the definition of \(C_k^{\mathrm {bad}}\) in (33), two cells \((k,j),(k,j')\) of the same scale that are part of \(I_k\) must be such that
This gives that \(\Vert jL_k-j'L_k\Vert \ge 3 R_k^{\mathrm {outer}}-R_k\), which implies that \(Q_k^\mathrm {neigh2}(j)\) and \(Q_k^\mathrm {neigh2}(j')\) do not intersect.
The idea behind the definitions above is that we will look at “paths” of multiscale cells such that two neighboring cells in the path are such that their \(Q^\mathrm {neigh2}\) regions intersect, and any two cells in the path have disjoint \(Q^\mathrm {neigh}\) regions. The first property limits the number of cells that can be a neighbor of a given cell, allowing us to control the number of such paths, while the second property allows us to argue that the encapsulation procedure behaves more or less independently for different cells of the path.
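The construction of \({\tilde{\Gamma }}\) is a greedy scan over scales, from largest to smallest, keeping a cell only when its neigh-region is disjoint from those already kept. The sketch below stubs the geometric test with 1-dimensional intervals; `neigh_disjoint` is a placeholder standing in for the actual condition on the \(Q^\mathrm {neigh}\) regions.

```python
def select_subpath(cells, neigh_disjoint):
    """cells: (scale, index) pairs of a multiscale path Gamma.
    Keep a cell iff its neigh-region is disjoint from all kept cells,
    scanning scales from largest to smallest (the construction of
    tilde-Gamma in the text)."""
    kept = []
    for cell in sorted(cells, key=lambda c: c[0], reverse=True):
        if all(neigh_disjoint(cell, other) for other in kept):
            kept.append(cell)
    return kept

# Toy 1-d stand-in: a cell (s, c) has neigh-region [c - s, c + s].
def neigh_disjoint(a, b):
    (s1, c1), (s2, c2) = a, b
    return abs(c1 - c2) > s1 + s2

result = select_subpath([(3, 0), (1, 2), (1, 10), (2, 5)], neigh_disjoint)
assert result == [(3, 0), (1, 10)]  # (2,5) and (1,2) overlap the kept (3,0)
```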
Lemma 5.9
Let \(\Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\) be a multiscale path starting from x. Then, the subset \({\tilde{\Gamma }}\) defined above is such that
Furthermore, any point \(y\in \bigcup _{(k,i)\in \Gamma }Q_k^\mathrm {enc}(i)\) must belong to \(\bigcup _{(k,i)\in {\tilde{\Gamma }}}Q_k^\mathrm {neigh2}(i)\).
Proof
Let \(\Upsilon \) be an arbitrary subset of \({\tilde{\Gamma }}\) with \(\Upsilon \ne {\tilde{\Gamma }}\). The first part of the lemma follows by showing that
Define
Clearly, \(\Upsilon ^\mathrm {neigh}\supset \Upsilon \), and since \(\{Q_k^\mathrm {neigh}(i) :(k,i)\in {\tilde{\Gamma }}\}\) is by definition a collection of disjoint sets, we have that
Recall that \({\tilde{\Gamma }}\ne \Upsilon \), and since no element of \({\tilde{\Gamma }}{\setminus }\Upsilon \) was added to \(\Upsilon ^\mathrm {neigh}\), we have that \(\Upsilon ^\mathrm {neigh}\ne \Gamma \). Using that \(\bigcup _{(k,i)\in \Gamma } Q_{k}^\mathrm {enc}(i)\) is a connected set, we obtain a value
Refer to Fig. 9 for a schematic view of the definitions in this proof.
Let \((k',i')\) be the cell of \(\Upsilon ^\mathrm {neigh}\) for which \(Q_k^\mathrm {enc}(i)\) intersects \(Q_{k'}^\mathrm {enc}(i')\). Since \((k',i')\in \Upsilon ^\mathrm {neigh}\), let \((k'',i'')\) be the element of \(\Upsilon \) of largest scale for which \(Q_{k'}^\mathrm {neigh}(i')\) intersects \(Q_{k''}^\mathrm {neigh}(i'')\); if \((k',i')\in \Upsilon \), then \((k'',i'')=(k',i')\). We obtain that
By the construction of \({\tilde{\Gamma }}\), and the fact that \((k'',i'')\) was set as the element of largest scale satisfying \(Q_{k''}^\mathrm {neigh}(i'')\cap Q_{k'}^\mathrm {neigh}(i')\ne \emptyset \), we must have that
In the former case, the distance in (47) is bounded above by \(2R_{k''-1}^{\mathrm {outer}}\), while in the latter case the distance is zero. So we assume that the distance between \(Q_{k''}^\mathrm {neigh}(i'')\) and \(Q_k^\mathrm {enc}(i)\) is at most \(2 R_{k''-1}^{\mathrm {outer}}\), which yields that \(Q_k^\mathrm {enc}(i)\) intersects \(Q_{k''}^\mathrm {neigh2}(i'')\). Therefore, if \((k,i)\in {\tilde{\Gamma }}\), we have (46) and we are done. When \((k,i)\not \in {\tilde{\Gamma }}\), take the cell \((k''',i''')\in {\tilde{\Gamma }}\) of largest scale such that \(Q_k^\mathrm {neigh}(i)\) intersects \(Q_{k'''}^\mathrm {neigh}(i''')\); then, by the construction of \({\tilde{\Gamma }}\), we have
We obtain that \((k''',i''')\not \in \Upsilon \), since otherwise we would have \((k,i)\in \Upsilon ^\mathrm {neigh}\), violating the definition of (k, i). The distance between \(Q_{k'''}^\mathrm {neigh}(i''')\) and \(Q_{k''}^\mathrm {neigh}(i'')\) is at most
Therefore, we have that \(Q_{k''}^\mathrm {neigh2}(i'')\) intersects \(Q_{k'''}^\mathrm {neigh2}(i''')\), establishing (46) and concluding the first part of the proof.
For the second part, take y to be a point of \(Q_{k}^\mathrm {enc}(i)\) with \((k,i)\in \Gamma \). If \((k,i)\in {\tilde{\Gamma }}\), then the lemma follows. Otherwise, let \((\kappa ,\iota )\) be the cell of largest scale in \({\tilde{\Gamma }}\) such that \(Q_{\kappa }^\mathrm {neigh}(\iota )\cap Q_{k}^\mathrm {neigh}(i)\ne \emptyset \). By the construction of \({\tilde{\Gamma }}\), we have that \(\kappa >k\). The distance between y and \(Q_{\kappa }^\mathrm {neigh}(\iota )\) is at most
which gives that \(y\in Q_\kappa ^\mathrm {neigh2}(\iota )\). \(\square \)
Now we define the type of multiscale paths we will consider.
Definition 5.10
Given \(x\in \mathbb {Z}^d\) and \(m>0\), we say that \(\Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\) is a well separated path of reach m starting from x if all the following hold:
 (i):

\(x \in Q_{k_1}^\mathrm {neigh2}(i_1),\)
 (ii):

\(\text {for any } j\in \{2,3,\ldots ,\ell \}\text { we have that } Q_{k_j}^\mathrm {neigh2}(i_j)\text { intersects } Q_{k_{j-1}}^\mathrm {neigh2}(i_{j-1}),\)
 (iii):

\(\text {for any } j,\iota \in \{1,2,\ldots ,\ell \}\text { with } |j-\iota |\ge 2\text { we have } Q_{k_j}^\mathrm {neigh2}(i_j)\text { does not intersect } Q_{k_\iota }^\mathrm {neigh2}(i_\iota ),\)
 (iv):

\(\text {for any distinct } j,\iota \in \{1,2,\ldots ,\ell \}\text { we have that } Q_{k_j}^\mathrm {neigh}(i_j)\text { does not intersect } Q_{k_\iota }^\mathrm {neigh}(i_\iota ),\)
 (v):

\(\text {for any } j\in \{2,3,\ldots ,\ell \},\text { we have }k_j\ne k_{j-1},\)
 (vi):

\(\text {and the point of } Q_{k_\ell }^\mathrm {neigh2}(i_\ell )\text { that is furthest away from } x\text { is of distance } m\text { from } x.\)
We say that a well separated path \(\Gamma \) is infected if for all \((k,i)\in \Gamma \) we have \(Q_{k}^\mathrm {enc}(i)\subset I_k\). If the origin is separated from infinity by \(\eta ^2\), then there must exist a multiscale path for which the union of the \(Q_k^\mathrm {enc}(i)\) over the cells (k, i) in the path contains the set occupied by \(\eta ^2\) that separates the origin from infinity. Then Lemma 5.9 gives the existence of a well separated path for which the union of the \(Q_k^\mathrm {neigh2}(i)\) over (k, i) in the path separates the origin from infinity.
Lemma 5.11
Fix any positive constant c. We can set \(L_1\) large enough and then p small enough, both depending only on \(c,\alpha ,d, \epsilon \) and \(\upsilon \), so that the following holds. For any integer \(\ell \ge 1\), any given collection of (not necessarily distinct) integer numbers \(k_1,k_2,\ldots ,k_\ell \), and any vertex \(x\in \mathbb {Z}^d\), we have \(\mathbb {P}\big (\exists \text { a well separated path } \Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\text { from } x\text { that is infected}\big ) \le \exp \left( -c \sum \nolimits _{j=1}^\ell 2^{k_j}\right) \).
Proof
For any j, since the path is infected we have \(Q_{k_j}^\mathrm {enc}(i_j)\subset I_{k_j}\). This gives that there exists \({\tilde{i}}_j\) such that \(Q_{k_j}^\mathrm {core}({\tilde{i}}_j)\cap Q_{k_j}(i_j)\cap C_{k_j}\ne \emptyset \). From Lemma 5.6, we have that the event \(\left\{ Q_{k_j}^\mathrm {core}({\tilde{i}}_j)\cap C_{k_j}\ne \emptyset \right\} \) is measurable with respect to the passage times inside \(Q_{k_j}^\mathrm {super}({\tilde{i}}_j)\subset Q_{k_j}^\mathrm {neigh}(i_j)\). Also, the number of choices for \({\tilde{i}}_j\) is at most some constant \(c_1\), depending only on d. Since \(\{Q_{k_j}^\mathrm {neigh}(i_j)\}_{j=1,\ldots ,\ell }\) is a collection of disjoint sets, if we fix the path \(\Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\), and take the union bound over the choices of \({\tilde{i}}_1, \tilde{i}_2,\ldots ,{\tilde{i}}_\ell \), we have from Lemma 5.8 that
where a can be made as large as we want by properly setting \(L_1\) and p. It remains to bound the number of well separated paths that exist starting from x. Since \(x \in Q_{k_1}^\mathrm {neigh2}(i_1)\), the number of ways to choose the first cell is at most \(\left( \frac{12}{5}C_\mathrm {FPP}' R_{k_1}^{\mathrm {outer}}\right) ^d\). Consider a j such that \(k_j>k_{j+1}\). We have that \(Q_{k_{j}}^\mathrm {neigh2}(i_{j})\) must intersect \(Q_{k_{j+1}}^\mathrm {neigh2}(i_{j+1})\), which gives that
Hence, the number of ways to choose \(i_{j+1}\) given \((k_j,i_j)\) and \(k_{j+1}\) is at most
Therefore, we have that \(\mathbb {P}\big (\exists \text { a well separated path } \Gamma =(k_1,i_1),(k_2,i_2),\ldots ,(k_\ell ,i_\ell )\text { from } x\text { that is infected}\big )\) is at most
where the second inequality follows for some \(c_2=c_2(d,\alpha ,\epsilon ,\upsilon )\) by the value of \(R_k^{\mathrm {outer}}\) from (20), and the last inequality follows by setting a large enough so that \(a\ge 2c\). \(\square \)
For the lemma below, define the event
Let \(E_{\infty ,r}\) be the above event without the restriction that all scales must be smaller than \(\kappa \). Below we restrict to \(r>3\) just to ensure that \(\log \log r >0\).
Proposition 5.12
Fix any positive constant c, any \(r>3\) and any time \(t\ge 0\). We can set \(L_1\) large enough and then p small enough, both depending only on \(c,\alpha ,d,\epsilon \) and \(\upsilon \), so that there exists a positive constant \(c'\) depending only on d for which
Proof
Let \(A_r\) be the set of vertices of \(\mathbb {Z}^d\) of distance at most \(C_\mathrm {FPP}r\) from the origin. Set \(\delta _r = \frac{1}{(d+3)\log \log r}\) and \(\kappa = \delta _r \log r\). For any large enough a depending on \(L_1\) and p, we have
where in the last inequality we use Lemma 5.8. Since a above can be chosen as large as needed (by requiring that \(L_1\) is large enough and p is small enough), we can choose a large enough a so that \(\left( \frac{2C_\mathrm {FPP}' \left( r+\frac{6}{5}R_k^{\mathrm {outer}}+R_k\right) }{L_k}\right) ^d\le \exp \left( a2^{k-1}\right) \) for all k, yielding
If the event above does not happen, then Lemma 5.5 gives that \(\eta ^2(t)\cap A_r \subset \bigcup _{j=1}^{\kappa -1}I_j\). Hence,
Let \(\Gamma \) be a well separated path from the origin, with all cells of scale smaller than \(\kappa \), and which has reach at least r. Define \(m_k(\Gamma )\) to be the number of cells of scale k in \(\Gamma \). Since \(\Gamma \) must contain at least one cell for which its \(Q^\mathrm {neigh2}\) region is not contained in \(A_r\), we have
Because of the type of bounds derived in Lemma 5.11, it will be convenient to rewrite the inequality above so that the term \(\sum _{k=1}^{\kappa -1}2^k\) appears. Note that using (20) we can set a constant \(c_0\ge 2\) such that \(R_j^{\mathrm {outer}}\le c_0^j (j!)^{d+2}L_1\) for all \(j\ge 1\), which gives
For any \(\Gamma \), define \(\varphi (\Gamma )=\sum _{k=1}^{\kappa -1} m_k(\Gamma ) 2^k\). We can then split the sum over all paths according to the value of \(\varphi (\Gamma )\) of the path. Using this, Lemma 5.11, and the fact that \(\varphi (\Gamma )\ge \frac{5 C_\mathrm {FPP}r}{12 c' (\kappa !)^{d+2}L_1}\), we have
where \(A_m\) is the number of ways to fix \(\ell \) and set \(k_1,k_2,\ldots ,k_\ell \) such that \(\varphi (\Gamma )=\sum _{j=1}^\ell 2^{k_j}=m\), and \(c''\) is the constant in Lemma 5.11. For each choice of \(\ell ,k_1,k_2,\ldots ,k_\ell \), we can define a string from \(\{0,1\}^m\) by taking \(2^{k_1}\) consecutive 0s, \(2^{k_2}\) consecutive 1s, \(2^{k_3}\) consecutive 0s, and so on and so forth. Note that each string is mapped to at most one choice of \(\ell ,k_1,k_2,\ldots ,k_\ell \). Therefore, \(A_m\le 2^m\), the number of strings in \(\{0,1\}^m\). The proof is completed since \(c''\) can be made arbitrarily large by setting \(L_1\) large enough and then p small enough, and \(\frac{5 C_\mathrm {FPP}r}{12 c' (\kappa !)^{d+2}L_1}\ge \frac{5 C_\mathrm {FPP}r}{12 c' \kappa ^{(d+2)\kappa }L_1}\ge \frac{5 C_\mathrm {FPP}r^\frac{1}{d+3}}{12 c' L_1}\). \(\square \)
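The encoding bound \(A_m\le 2^m\) can be checked mechanically for small m. The sketch below is our illustration (the function names are not from the paper): it enumerates every tuple \((k_1,\ldots ,k_\ell )\) with \(k_j\ge 1\) and \(\sum _j 2^{k_j}=m\), builds the alternating binary strings described above, and verifies both the injectivity of the encoding and the bound.

```python
def compositions(m):
    """All ordered tuples (k_1, ..., k_l), k_j >= 1, with sum_j 2^{k_j} = m.
    This ignores the extra constraint k_j != k_{j-1} from condition (v),
    so it can only over-count A_m, which is harmless for an upper bound."""
    if m == 0:
        return [()]
    out = []
    k = 1
    while 2 ** k <= m:
        out.extend((k,) + rest for rest in compositions(m - 2 ** k))
        k += 1
    return out

def encode(ks):
    """The paper's encoding: 2^{k_1} zeros, then 2^{k_2} ones, alternating."""
    return "".join(str(j % 2) * (2 ** k) for j, k in enumerate(ks))

for m in range(1, 13):
    tuples = compositions(m)
    codes = {encode(t) for t in tuples}
    assert len(codes) == len(tuples)   # each string determines the tuple
    assert len(tuples) <= 2 ** m       # hence A_m <= 2^m
```

Since condition (v) of Definition 5.10 also forbids \(k_j=k_{j-1}\), the true \(A_m\) is smaller still; the over-count only strengthens the upper bound.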
Completing the proof of Theorem 5.1
Proof of Theorem 5.1
We start by showing that \(\eta ^1\) grows indefinitely with positive probability. Let \(e_1=(1,0,0,\ldots ,0)\in \mathbb {Z}^d\). Any set of vertices that separates the origin from infinity must contain a vertex of the form \(b e_1\) for some nonnegative integer b. For any b and \(t\ge 0\), let
For the moment, we assume that b is larger than some fixed, large enough value \(b_0\). Recall that \(\mathcal {B}\left( r\right) \subseteq [ -C_\mathrm {FPP}' r, C_\mathrm {FPP}' r]^d\), which gives that \(b e_1 + \mathcal {B}\left( \frac{b}{2C'_\mathrm {FPP}}\right) \) does not contain the origin. Hence, in order for \(\eta ^2(t)\) to contain \(b e_1\) and separate the origin from infinity, \(\eta ^2(t)\) must contain at least one vertex of distance (according to the norm \(|\cdot |\)) greater than \(\frac{b}{2C'_\mathrm {FPP}}\) from \(b e_1\). When \(\eta ^2(t)\) separates the origin from infinity, it must contain a set of sites that form a connected component according to the \(\ell _\infty \) norm, which itself separates the origin from infinity and contains a vertex of distance (again according to the norm \(|\cdot |\)) greater than \(\frac{b}{2C'_\mathrm {FPP}}\) from \(b e_1\). This connected component implies the occurrence of the event in Proposition 5.12, hence
Note that, as needed, the bound above does not depend on t; this will allow us to derive a bound for the survival of \(\eta ^1\) that is uniformly bounded away from 0 as t grows to infinity. Note also that the constant c from Proposition 5.12 can be made arbitrarily large by setting \(L_1\) and p properly. Therefore, \(\sum _{b=b_0}^\infty f_b(t)\) can be made smaller than 1, and in fact goes to zero with \(b_0\). Regarding the case \(b\le b_0\), for each \(k\ge 1\) let \(\mathcal {K}_k\) be the set of (k, i) such that \(R_k^{\mathrm {outer}}(i)\cap \{e_1,2e_1,\ldots ,b_0 e_1\}\ne \emptyset \). Note that there exists a constant \(c_b\) depending on \(b_0\) such that the cardinality of \(\mathcal {K}_k\) is at most \(c_b\) for all k. Then, using Lemma 5.8, we have
which can be made arbitrarily small since a can be made large enough by choosing \(L_1\) large and p small. This concludes this part of the proof, since
Now we turn to the proof of positive speed of growth for \(\eta ^1\). Note that \(\eta ^1\cup \eta ^2\) is stochastically dominated by a first passage percolation process whose passage times are i.i.d. exponential random variables of rate 2, because \(\eta ^2\) is slower than a first passage percolation with exponential passage times of rate 1. Then, by the shape theorem we have that there exists a constant \(c>0\) large enough such that
Now fix any t, take c as above, and set \(\kappa = 1+\frac{\log t}{\left( \log \log t\right) ^2}\). For any large enough a depending on \(L_1\) and p, we have
where in the second inequality we use Lemma 5.8, and the third inequality follows because a can be chosen large enough in Lemma 5.8. The above derivation allows us to restrict to cells of scale smaller than \(\kappa \). Note that since there is no contagious set of scale \(\kappa \) or larger intersecting \([-ct,ct]^d\), the spread of \(\eta ^1(t)\) inside \([-ct,ct]^d\) stochastically dominates a first passage percolation process of rate \(\lambda ^1_\kappa \). Thus, disregarding regions taken by \(\eta ^2\), we can set a sufficiently small constant \(c'>0\) so that, at time t, \({\bar{\eta }}^1\) will contain a ball of radius \(2c't\) around the origin with probability at least \(1-\exp \left( -c'' t^\frac{d+1}{2d+4}\right) \) for some constant \(c''\), by Proposition 3.1. The only caveat is that, at time t, there may be regions of scale smaller than \(\kappa \) that are taken by \(\eta ^2\) and intersect the boundary of \(\mathcal {B}\left( 2c't\right) \). If we show that such regions cannot intersect \(\partial ^\mathrm {i}\mathcal {B}\left( c't\right) \), then we have that the probability that \(\eta ^1\) survives up to time t but \({\bar{\eta }}^1(t)\) does not contain a ball of radius \(c't\) around the origin is at most \(2\exp \left( -a 2^{\kappa -1}\right) +\exp \left( -c'' t^\frac{d+1}{2d+4}\right) \). This is indeed the case, since we can take a constant \(c'''\) such that any cell of scale smaller than \(\kappa \) has diameter at most
where the inequalities above hold for all large enough t, completing the proof. \(\square \)
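The rate-2 domination invoked at the start of this proof rests on a standard one-line computation, which we sketch here (our addition, with \(\lambda \le 1\) denoting the spreading rate of \(\eta ^2\)): the time for \(\eta ^1\cup \eta ^2\) to cross a given edge is at best the minimum of the two attempt times, and for independent \(X\sim \mathrm {Exp}(1)\), \(Y\sim \mathrm {Exp}(\lambda )\),

```latex
\mathbb{P}\bigl(\min\{X,Y\}>t\bigr)
  = e^{-t}\,e^{-\lambda t}
  = e^{-(1+\lambda)t}
  \;\ge\; e^{-2t}, \qquad t\ge 0,
```

so each edge-crossing time stochastically dominates an exponential random variable of rate 2, and hence \(\eta ^1\cup \eta ^2\) is dominated by first passage percolation with i.i.d. rate-2 exponential passage times.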
From MDLA to FPPHE
Here we show how to use the proof scheme for FPPHE from Sect. 5 to establish Theorem 1.1. The relation between FPPHE and MDLA is very delicate, and we will need to introduce another process, which we call the h-process. For the sake of clarity, this section is split into a few subsections.
Dual representation and Poisson clocks
We start by recalling the dual representation of the exclusion process. In this dual representation, vertices without particles are regarded as hosting another type of particle, called holes, while vertices hosting an original particle are seen as unoccupied. Using the terminology of the dual representation, in MDLA, holes perform a simple exclusion process among themselves, where they move as simple symmetric random walks obeying the exclusion rule (jumps to vertices already occupied by a hole or by the aggregate are suppressed). Then the growth of the aggregate is equivalent to a first passage percolation process which expands along its boundary edges at rate 1, but with the additional feature that the aggregate does not occupy vertices that are occupied by holes.
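The particle/hole duality can be illustrated on a toy configuration (a minimal sketch; the state names are ours): exchanging the roles of particles and empty sites turns a particle jump \(x\rightarrow y\) into a hole jump \(y\rightarrow x\), while the aggregate is left fixed.

```python
PARTICLE, EMPTY, AGG = "particle", "empty", "aggregate"

def jump(cfg, x, y):
    """One exclusion move: the particle at x hops to y if y is empty;
    jumps onto occupied sites or onto the aggregate are suppressed."""
    if cfg[x] == PARTICLE and cfg[y] == EMPTY:
        cfg = dict(cfg)
        cfg[x], cfg[y] = EMPTY, PARTICLE
    return cfg

def dual(cfg):
    """Dual picture: empty sites host 'particles' (the holes), particle
    sites become empty, and the aggregate is unchanged."""
    swap = {PARTICLE: EMPTY, EMPTY: PARTICLE, AGG: AGG}
    return {v: swap[s] for v, s in cfg.items()}

cfg = {0: AGG, 1: PARTICLE, 2: EMPTY}
# A particle jump 1 -> 2 in the primal picture...
primal = jump(cfg, 1, 2)
# ...is the hole jump 2 -> 1 in the dual picture.
assert dual(primal) == jump(dual(cfg), 2, 1)
# Exclusion: a jump onto the aggregate is suppressed in both pictures.
assert jump(cfg, 1, 0) == cfg and jump(dual(cfg), 2, 0) == dual(cfg)
```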
To be more precise, we now define MDLA in terms of Poisson clocks. A Poisson clock of rate \(\nu \) is a clock that rings infinitely many times, and such that the time until the first ring, as well as the time between any two consecutive rings, are given by independent exponential random variables of rate \(\nu \). Even though edges of \(\mathbb {Z}^d\) have so far always been considered as undirected, we will need to assign an independent Poisson clock of rate 1 to each oriented edge \((x \rightarrow y)\). Then the evolution of MDLA is as follows. When the clock of \((x \rightarrow y)\) rings, if x is occupied by a hole and y is unoccupied, the hole jumps from x to y. If x is occupied by the aggregate and y is unoccupied, then the aggregate occupies y. In any other case, nothing is done. Henceforth, the Poisson clocks used to construct MDLA will be referred to as the MDLA-clocks.
MDLA with discovery of holes
We give a different representation of MDLA, which we refer to as MDLA with discovery of holes. Each vertex of \(\mathbb {Z}^d\) will either be occupied by the aggregate, be occupied by a hole, or be unoccupied. As before, the aggregate starts from the origin. However, unlike before, each vertex of \(\mathbb {Z}^d{\setminus }\{0\}\) is initially unoccupied, and is assigned a nonnegative integer value, which is given by an independent random variable having value \(i\ge 0\) with probability \((1-\mu )^i\mu \). This value represents the number of holes that can be born at that vertex.
More precisely, when the MDLA-clock of an edge \((x \rightarrow y)\) rings, a few things may happen.

If x hosts a hole and y is unoccupied, the hole jumps from x to y.

If x belongs to the aggregate, y is unoccupied and the value of y is 0, then the aggregate occupies y.

If x belongs to the aggregate, y is unoccupied and the value of y is \(i\ge 1\), then the value of y is changed to \(i-1\), a hole is born at y (so y becomes occupied), and the aggregate does not occupy y.

In any other case, nothing happens.
Note that holes move independently of the values of the vertices, and perform continuous-time, simple symmetric random walks (jumping at the rings of the MDLA-clocks) obeying the exclusion rule; that is, whenever a hole attempts to jump onto a vertex already occupied by a hole or by the aggregate, the jump is suppressed. Note that this process is equivalent to the description of MDLA with the dual representation of the exclusion process. The only difference is that, instead of placing all holes at time 0, holes are added one by one as the process evolves. More precisely, holes are born as the aggregate tries to occupy unoccupied vertices of value at least 1.
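The clock-ring rule of the discovery representation can be written as a small update function (a sketch under our own state encoding, not the paper's notation):

```python
def ring(occ, value, x, y):
    """Apply one ring of the MDLA-clock of the directed edge (x -> y).
    occ maps vertices to 'aggregate', 'hole' or 'empty'; value[y] counts
    the holes still to be born at y.  Both dicts are updated in place."""
    if occ[x] == "hole" and occ[y] == "empty":
        occ[x], occ[y] = "empty", "hole"           # the hole jumps
    elif occ[x] == "aggregate" and occ[y] == "empty":
        if value[y] == 0:
            occ[y] = "aggregate"                   # the aggregate grows
        else:
            value[y] -= 1                          # a hole is born at y,
            occ[y] = "hole"                        # blocking the aggregate
    # in any other case, nothing happens

# A vertex of value 1 next to the aggregate: the first attempt births a
# hole, and only after the hole moves away can the aggregate occupy it.
occ = {0: "aggregate", 1: "empty", 2: "empty"}
value = {0: 0, 1: 1, 2: 0}
ring(occ, value, 0, 1)
assert occ[1] == "hole" and value[1] == 0
ring(occ, value, 1, 2)                             # the hole jumps 1 -> 2
ring(occ, value, 0, 1)                             # now the aggregate takes 1
assert occ == {0: "aggregate", 1: "aggregate", 2: "hole"}
```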
Backtracking jumps and overall strategy
The main idea we will use to compare MDLA and FPPHE is the following. Regardless of the location of a hole, if the hole jumps from a vertex x to a vertex y, with positive probability the MDLA-clock of \((y \rightarrow x)\) rings before the other \(2(2d-1)\) MDLA-clocks involving edges of the form \((\cdot \rightarrow x)\) or \((y \rightarrow \cdot )\). This causes the hole to jump back to x before the hole can jump to any other vertex adjacent to y or before the aggregate or another hole can occupy x. We call this a backtracking jump. This type of jump intuitively gives that the rate at which a hole leaves a set of vertices is strictly smaller than 1. In other words, holes are slower than the aggregate. A natural approach is to set \(\lambda <1\) in FPPHE to represent the rate at which holes move (taking into consideration backtracking jumps), and then couple MDLA and FPPHE so that the following properties hold.

1.
The seeds of \(\eta _2\) are the vertices of value at least 1 in MDLA.

2.
The aggregate contains \(\eta _1\) at all times.

3.
For all \(t\ge 0\), the holes that have been discovered by time t are contained inside \(\eta _2(t)\).
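The positive probability of a backtracking jump described above can be made explicit; the following is our computation, not stated in the text. The clock of \((y \rightarrow x)\) competes with the \(2(2d-1)\) other clocks listed, and all \(2(2d-1)+1\) clocks are independent rate-1 Poisson clocks, so by symmetry of i.i.d. exponential waiting times,

```latex
\mathbb{P}\bigl((y \to x) \text{ rings first}\bigr)
  = \frac{1}{2(2d-1)+1}
  = \frac{1}{4d-1},
```

which equals \(\frac{1}{7}\) in dimension \(d=2\). This is consistent with \(\frac{1}{M}\) later serving as a lower bound on the probability of a backtracking jump, for a suitable constant M.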
Despite the above idea being relatively simple, the following delicate issue prevents it from being made into a rigorous argument. Suppose that the above three properties hold up to time t, and assume that at time t the aggregate of MDLA contains vertices that do not belong to \(\eta _1(t)\). Hence, at a later time the aggregate may discover a hole at a vertex x which is at the boundary of the aggregate but is not at the boundary of \(\eta _1\). In other words, the aggregate may discover a hole at a vertex x whose seed cannot be activated since x is not yet reachable by \(\eta _1\). At this time, property 3 would cease to hold.
To get around the above issue, we will employ a coupling argument to show that MDLA stochastically dominates FPPHE locally. In particular, we will use that coupling to show that the encapsulation procedure used for FPPHE (via Proposition 4.2) works as well for MDLA. Then, the multiscale machinery developed in Sect. 5 can be used to obtain that each cluster of holes gets encapsulated by the aggregate at some finite (possibly large) scale. For this, we will use the fact that the encapsulation procedure we carried out for FPPHE in Proposition 4.2 is implied by the occurrence of a monotone event F, which is increasing with respect to the passage times of type 2 and decreasing with respect to the passage times of type 1.
Coupling of the initial configuration
We now formalize the coupling of the initial configurations of MDLA and FPPHE, as suggested in the previous section. For each vertex \(x\in \mathbb {Z}^d{\setminus }\{0\}\), note that the probability that x is assigned a value at least 1 in MDLA with discovery of holes is \(\sum _{i=1}^{\infty }(1-\mu )^i \mu = 1-\mu \). Then, we set \(p=1-\mu \) so that we can couple the vertices with value at least 1 with the location of the type-2 seeds of FPPHE. From now on, for each vertex of value at least 1, we will refer to it as a seed, regardless of whether we are talking about MDLA or FPPHE.
The h-process
We will not actually couple MDLA with FPPHE, but we will couple MDLA with another process \(\{h_t\}_t\), which will be a growing subset of \(\mathbb {Z}^d\). We call this process the h-process. The h-process will be constructed using the MDLA-clocks and the seeds, where the seeds have been coupled with MDLA as described in Sect. 6.4.
When a vertex x belongs to \(h_t\), we will say that x is infected. To avoid confusion, we will not say that x is occupied by \(h_t\) since, as we explain later, a vertex that is occupied by the aggregate can also be infected. Our goal with the h-process is to obtain that the holes that have already been discovered at time t are contained in \(h_t \cup \partial ^\mathrm {o}h_t\), and the ones in \(\partial ^\mathrm {o}h_t\) are the holes that will jump back to \(h_t\) (in a backtracking jump).
At time 0 we set \(h_0=\emptyset \), and let the aggregate spread using the MDLA-clocks and the representation with discovery of holes. The h-process will evolve according to three operations: birth, expansion and halting upon encapsulation.

Birth. If at time t a hole is discovered by the aggregate inside a cluster \(C\subset \mathbb {Z}^d\) of seeds,^{Footnote 2} then we infect C; that is, we add to \(h_t\) the whole cluster C of seeds.

Expansion. For each unoriented edge (x, y), we will define a passage time \(\tau _{x,y}\) (which we will specify later, and which will depend on the evolution of MDLA). So if x gets infected at time t, then x infects y at time \(t+\tau _{x,y}\); note that y could get infected before \(t+\tau _{x,y}\) if a neighbor of y different from x infects y.

Halting upon encapsulation. The h-process is allowed to infect vertices that are occupied by the aggregate. However, if at some moment a cluster C of \(h_t\) is separated from infinity by the aggregate, which means that any path from C to infinity intersects \(\mathcal {A}_t\), then \(h_t\) will not infect any vertex of \(\partial ^\mathrm {o}C\) that already belongs to the aggregate.^{Footnote 3} This is to guarantee that a cluster of the h-process is confined to a finite set when it gets encapsulated by the aggregate.
Now we introduce some notation. For each vertex x, let \(\mathcal {E}_x\) be the set of (unoriented) edges incident to x, and let \(\mathcal {E}_x^\rightarrow \) (resp., \(\mathcal {E}_x^\leftarrow \)) be the set of oriented edges going out of (resp., coming into) x. For each edge \((x \rightarrow y)\), let
which includes \((y \rightarrow x)\) but not \((x\rightarrow y)\). Let
Later, \(\frac{1}{M}\) will be a lower bound on the probability that a hole performs a backtracking jump.
We will use the convention that if we write \((x \rightarrow y)\in \partial ^\mathrm {e}h_t\), we mean that \(x \in h_t\) and \(y\not \in h_t\). We will update a set of edges \(\mathcal {H}(t)\) as the h-process evolves, starting from \(\mathcal {H}(0)=\emptyset \). Moreover, for each \((x\rightarrow y)\in \mathcal {H}(t)\), we will associate an independent Bernoulli random variable \(\mathfrak {B}_{x\rightarrow y}\) of parameter 1/M. If needed, the random variable \(\mathfrak {B}_{x\rightarrow y}\) will be redrawn independently during the evolution of the h-process. Once \(\mathcal {H}(t)\) is specified, we define
Evolution of the h-process
Here we will describe how the h-process uses the MDLA-clocks to evolve. Our description here will be precise, but will not enter into the details needed to define the passage times of the h-process. This will be carried out in Sect. 6.7.
Start from time 0, where we have \(h_0=\emptyset \) and \(\mathcal {H}(0)=\emptyset \). From this time, we let MDLA evolve using its MDLA-clocks. If at some time t MDLA tries to occupy a seed (that is, MDLA discovers a hole) inside some cluster \(C\subset \mathbb {Z}^d\) of seeds, the h-process undergoes a birth operation and we set \(h_t=C\). At this moment, we continue to let MDLA evolve using its MDLA-clocks. If new holes are discovered, new births take place and clusters are added to the h-process (that is, new clusters are infected).
We will now discuss all possibilities that could happen for the expansion of the h-process. In all cases below, we will assume that the expansion of the h-process is happening in an infected cluster that is not disconnected from infinity by the aggregate. Otherwise, the halting upon encapsulation would already have happened to that cluster, which would prevent it from expanding.
The h-process only expands when an edge at the boundary of the h-process or an edge from \(\mathcal {H}_B\) rings. (The edges in \(\mathcal {H}_B\) are needed to verify backtracking jumps, and “B” in the subscript actually stands for backtracking.) If an edge that is internal to the h-process (that is, both of its endpoints are already infected) rings, then the h-process does not change, even if that causes new holes to be discovered. Similarly, if an edge that is external to the h-process (that is, both endpoints are not infected) and does not belong to \(\mathcal {H}_B\) rings, and no new hole gets discovered by this operation, then the h-process does not change.
Now we assume that an edge \((x\rightarrow y)\) from the boundary of the h-process or from \(\mathcal {H}_B\) rings at time t, and discuss what occurs with the h-process at this time. We split our discussion into three cases, and at the end explain two particular situations. Let \(s<t\) be the last time before t that the h-process changed.
Case 1: A hole jumps out of the h-process
This corresponds to \((x\rightarrow y)\in \partial ^\mathrm {e}h_s\) with a hole at \(x\in h_s\) and \(y\not \in h_s\) unoccupied at time \(t^-\). Thus, the ring of \((x\rightarrow y)\) causes the hole to jump from x to y.
It could be the case that there is already an edge \((y'\rightarrow y)\in \mathcal {H}(t)\) with \(y'\ne x\). If this is the case, we simply do nothing; this case is discussed further in Sect. 6.6.5. Otherwise, if there is no such edge, we add \((x\rightarrow y)\) to \(\mathcal {H}(t)\) to verify whether the hole will do a backtracking jump to x. Moreover, we draw (independently of previous values that this random variable could have assumed) the Bernoulli random variable \(\mathfrak {B}_{x\rightarrow y}\) of parameter 1/M. If \(\mathfrak {B}_{x\rightarrow y}=0\), which means that the hole will not backtrack to x, then we infect y at time t.
Case 2: Verification of backtracking jumps
This corresponds to \((x\rightarrow y)\in \mathcal {H}_B(s)\). Let \((u\rightarrow v)\) be the edge from \(\mathcal {H}(s)\) such that \((x\rightarrow y)\in \mathcal {E}_{u\rightarrow v}\subset \mathcal {H}_B(s)\), and assume that \((u\rightarrow v)\) was added to \(\mathcal {H}\) at time \(t'\le s\). Note that, from Sect. 6.6.1, this was done because a hole jumped from u to v at time \(t'\). Then, as long as no clock from \(\mathcal {E}_{u \rightarrow v}\) rings, u remains unoccupied and the hole remains at v.
When the clock of an edge \((x\rightarrow y)\) from \(\mathcal {E}_{u \rightarrow v}\) rings at time t, then the first thing we do is to remove \((u\rightarrow v)\) from \(\mathcal {H}(t)\). We will say that the possibility of a backtracking jump through \((u\rightarrow v)\) has been verified. We then couple the value of \(\mathfrak {B}_{u\rightarrow v}\) with the MDLA-clocks so that if \(\mathfrak {B}_{u\rightarrow v}=1\), we have that the first clock to ring among the ones from \(\mathcal {E}_{u \rightarrow v}\) is \((v \rightarrow u)\). If this happens, then \((x\rightarrow y)=(v\rightarrow u)\) so, at time t, the hole backtracks to u. Note that both u and v are already infected. In this case, nothing else needs to be done.
On the other hand, if \(\mathfrak {B}_{u\rightarrow v}=0\), the backtracking will not happen and \((x\rightarrow y)\in \mathcal {E}_{u\rightarrow v}{\setminus } (v \rightarrow u)\). Note that the probability that \((x\rightarrow y)\) is equal to a given \((w\rightarrow z)\in \mathcal {E}_{u\rightarrow v}{\setminus } (v\rightarrow u)\) is exactly
Note also that v is already infected. If \((x\rightarrow y)\in \mathcal {E}_u^\leftarrow \), then u could get occupied by the aggregate or by another hole, which could prevent the hole at v from jumping back to u when the clock \((v\rightarrow u)\) rings. In this case, we do not need to do anything else.
Finally, if \((x \rightarrow y)\in \mathcal {E}_v^\rightarrow \), then the hole at \(v=x\) may jump to y, if y is unoccupied. If, in addition, we have that \(y\not \in h_s\), then the hole jumped out of the h-process, so we perform the steps described in Sect. 6.6.1 for \((x\rightarrow y)\) so that we can later verify the possibility of a backtracking jump through \((x\rightarrow y)\). In particular, we add \((x \rightarrow y)\) to \(\mathcal {H}(t)\), sample \(\mathfrak {B}_{x\rightarrow y}\), and infect y if \(\mathfrak {B}_{x\rightarrow y}=0\).
The purpose of the set \(\mathcal {H}\) is to keep track of the edges over which a backtracking jump can happen. It will hold that
This is quite straightforward from the description above, but we will actually prove it in Lemma 6.1, after we describe precisely how the passage times are constructed. We remark that in (53) we do not require z to host a hole. This is because of a corner case that we need to handle carefully, and which we will explain in Sect. 6.6.4.
Case 3: Expansion without jump of holes
Here we assume that \((x\rightarrow y)\in \partial ^\mathrm {e}h_t{\setminus } \mathcal {H}_B(s)\) is such that one of the following conditions holds.

\(x\in h_{s}\) is unoccupied at time \(t^-\).

x is occupied by the aggregate at time \(t^-\).

x is occupied by a hole, but y is occupied by either a hole or the aggregate at time \(t^-\) (preventing the hole at x from jumping to y).
(The case of x being occupied by a hole and y unoccupied is covered by Sect. 6.6.1.) The above three situations bring little trouble to us, since none of them causes any hole to jump, so we will simply choose to infect y with probability \(\frac{M-1}{M}<1\), and otherwise do nothing. This choice is to guarantee that the passage times \(\tau _{\cdot ,\cdot }\) stochastically dominate (but are not equal to) exponential random variables of rate 1 in this case.
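The claimed domination is the standard thinning property of Poisson clocks; we sketch the justification (our addition). Accepting each ring of a rate-1 clock independently with probability \(\frac{M-1}{M}\) produces a Poisson process of rate \(\frac{M-1}{M}\), so the induced passage time \(\tau \) satisfies

```latex
\mathbb{P}(\tau > t)
  = e^{-\frac{M-1}{M}\,t}
  \;\ge\; e^{-t}
  = \mathbb{P}(E > t), \qquad E \sim \mathrm{Exp}(1),\ t \ge 0,
```

with strict inequality for \(t>0\); that is, \(\tau \) stochastically dominates, but is not equal in law to, a rate-1 exponential random variable.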
Edges in \(\mathcal {H}^*_B\)
We need to give special attention to the set \(\mathcal {H}^*_B\). Suppose now that we have carried out the above process up to a time t, at which it happens that \(\mathcal {H}^*_B(t)\ne \emptyset \). Note that
Note that if t is the first time that \(\mathcal {H}^*_B(t)\ne \emptyset \), then both x and \(y'\) host a hole. The above gives that \((x\rightarrow y)\) is involved in the backtracking jumps of both \((x' \rightarrow x)\) and \((y \rightarrow y')\), which is in conflict with the fact that \(\mathfrak {B}_{x' \rightarrow x}\) and \(\mathfrak {B}_{y \rightarrow y'}\) are independent.
To solve this, for each \((x\rightarrow y)\in \mathcal {H}_B^*(t)\), we will consider two Poisson clocks: the actual MDLA-clock, which will be associated to the backtracking jump of \((y \rightarrow y')\), and a fake-clock, which will be associated to the backtracking jump of \((x' \rightarrow x)\). So, if \(\mathfrak {B}_{x' \rightarrow x}=1\), it means that the clock of \((x \rightarrow x')\) will ring before the MDLA-clocks of \(\mathcal {E}_{x'\rightarrow x}{\setminus } (x\rightarrow y)\) and before the fake-clock of \((x \rightarrow y)\). Similarly, if \(\mathfrak {B}_{y \rightarrow y'}=1\), it means that the clock of \((y' \rightarrow y)\) will ring before the MDLA-clocks of \(\mathcal {E}_{y \rightarrow y'}\). The evolution of MDLA simply ignores the fake-clocks. Since the fake-clocks and the MDLA-clocks are independent, there is no conflict with the independence of \(\mathfrak {B}_{x' \rightarrow x}\) and \(\mathfrak {B}_{y \rightarrow y'}\). Now we explain why this does not create other problems.
If the MDLA-clock of \((x \rightarrow y)\) rings, we say that a clock from \(\mathcal {E}_{y\rightarrow y'}\) rings, whereas when the fake-clock of \((x \rightarrow y)\) rings, we say that a clock from \(\mathcal {E}_{x'\rightarrow x}\) rings. Assume that the first clock to ring among the MDLA-clocks and fake-clocks of \(\mathcal {E}_{x\rightarrow y}\) is the MDLA-clock of \((x \rightarrow y)\). Let \(s>t\) be the time at which that clock rings, and assume that this is the first clock to ring among the clocks of \(\mathcal {E}_{x'\rightarrow x}\) and \(\mathcal {E}_{y\rightarrow y'}\). Note that in this case we have \(\mathfrak {B}_{y \rightarrow y'}=0\). Then, the hole that is in x jumps to y, and we perform the steps described in Sect. 6.6.2 for the backtracking jump of \((y \rightarrow y')\) when \(\mathfrak {B}_{y \rightarrow y'}=0\). No action is taken with regards to the backtracking jump of \((x'\rightarrow x)\). In particular, we have that \((y\rightarrow y') \not \in \mathcal {H}(s)\) and, more crucially, we have that \((x'\rightarrow x) \in \mathcal {H}(s)\) even if there is no hole at x. (This is the reason why in (53) we have not required y to host a hole.)
The fact that \((x'\rightarrow x)\) remained in \(\mathcal {H}(s)\) will not cause problems because the hole that was in x jumped inside \(h_s\) (because \(y\in h_t \subseteq h_s\)). So, in some sense, that hole did backtrack into the h-process. We will later still process the backtracking jump of \((x'\rightarrow x)\) even if there may not be a hole at x (which just means that no hole will jump, but the h-process may still be updated according to the decision of a backtracking jump). For example, if \(\mathfrak {B}_{x' \rightarrow x}=1\), we will assume that there is a backtracking jump over \((x'\rightarrow x)\) causing x not to be added to the h-process, which remains true even if x does not host a hole.
The fake-clock of \((x\rightarrow y)\) will exist while \((x\rightarrow y)\in \mathcal {H}_B^*\), that is, while both \((x'\rightarrow x)\) and \((y \rightarrow y')\) belong to \(\mathcal {H}\). When this ceases to be true, the fake-clock of \((x \rightarrow y)\) is simply deleted and we will only keep track of its MDLA-clock.
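Discarding fake-clocks is harmless because superposing an independent Poisson clock and then thinning away its rings does not change the law of the real rings. The following Monte Carlo sketch (purely illustrative, not part of the construction) checks this for the first real ring:

```python
import random

def first_real_ring(rate_real=1.0, rate_fake=1.0, rng=random):
    """Run the merged stream of a real and a fake Poisson clock and
    return the first ring of the *real* clock, ignoring fake rings."""
    total = rate_real + rate_fake
    t = 0.0
    while True:
        t += rng.expovariate(total)           # next ring of the merged stream
        if rng.random() < rate_real / total:  # the ring belongs to the real clock
            return t

random.seed(0)
n = 200_000
mean = sum(first_real_ring() for _ in range(n)) / n
# By the thinning property the first real ring is Exp(1), so the mean is close to 1
print(round(mean, 2))
```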
Holes revisiting uninfected vertices
Consider the setting in the previous section, where \((x'\rightarrow x)\in \mathcal {H}(s)\) but there is no hole at x. Note that if \(\mathfrak {B}_{x'\rightarrow x}=1\) (so that \(x\not \in h_{s}\)), before the backtracking jump of \((x'\rightarrow x)\) is processed (that is, before the MDLA-clocks of \(\mathcal {E}_{x'\rightarrow x}{\setminus } (x\rightarrow y)\) and the fake-clock of \((x\rightarrow y)\) ring), it could happen that a hole jumps from a vertex \(x''\) to x. Note that \(x'' \ne x'\), because \(x'\) remained unoccupied from the time the hole jumped from \(x'\) to x, since \((x'\rightarrow x)\) is still in \(\mathcal {H}\). Since \((x''\rightarrow x)\not \in \mathcal {E}_{x'\rightarrow x}\), this jump does not affect the backtracking jump of \((x'\rightarrow x)\). But since x is not infected, the hole just jumped out of the h-process. Suppose that this happens at some time \(s''\). This is the situation explained in Sect. 6.6.1, when nothing needs to be done to the h-process. The reason is that this hole just occupied the place of the yet-to-be-verified backtracking jump of \((x'\rightarrow x)\). Thus we only need to wait for that backtracking jump to be processed.
Note that it can also happen that much later x is still not infected and a hole jumps from \(x'\) to x again. But, as we explained above, this can only happen after the initial backtracking jump of \((x'\rightarrow x)\) has been processed. Since we had that \(\mathfrak {B}_{x'\rightarrow x}=1\) (so that x remained uninfected), the second jump of a hole from \(x'\) to x can only happen after the clock of the edge \((x\rightarrow x')\) rings (because this happens before any edge from \(\mathcal {E}_{x'}^{\leftarrow }\) rings, so \(x'\) remained unoccupied). When the edge \((x\rightarrow x')\) rings, the memoryless property of exponential random variables guarantees that the rings of the clocks in \(\mathcal {E}_{x'\rightarrow x}\) are from this moment independent of the past. Note that at this time the value of \(\mathfrak {B}_{x'\rightarrow x}\) is also redrawn independently, so this new hole jumping to x will have no correlation with the previous backtracking jump through \((x'\rightarrow x)\).
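The memoryless property used here, \(\mathbb {P}(X>s+t\mid X>s)=\mathbb {P}(X>t)\) for exponential X, can be checked numerically; a small illustrative sketch (the values of s and t are arbitrary):

```python
import random

random.seed(1)
s, t, n = 0.7, 1.3, 400_000
samples = [random.expovariate(1.0) for _ in range(n)]

alive_at_s = [x for x in samples if x > s]
cond = sum(1 for x in alive_at_s if x > s + t) / len(alive_at_s)  # P(X > s+t | X > s)
uncond = sum(1 for x in samples if x > t) / n                     # P(X > t)

# Both estimates agree; the common value is exp(-t) ~ 0.27 here
print(round(cond, 2), round(uncond, 2))
```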
Construction of the passage times of the h-process
In the previous section we described how the h-process uses the MDLA-clocks to evolve. Here we build on that discussion to construct the passage times of the h-process.
For each (unoriented) edge (x, y), we will construct an ordered list \(\Pi _{x,y}=\big (\Pi _{x,y}^{(1)},\Pi _{x,y}^{(2)},\ldots ,\Pi _{x,y}^{(\kappa _{x,y})}\big )\), where the \(\Pi _{x,y}^{(i)}\) will be independent random variables, and the number of elements \(\kappa _{x,y}\) in \(\Pi _{x,y}\) will also be a random variable. Recalling that \(\tau _{x,y}\) is the passage time of the h-process through the edge (x, y), we will construct the h-process so that \(\tau _{x,y}\) is the sum of the elements of the list; that is, \(\tau _{x,y}=\sum _{i=1}^{\kappa _{x,y}}\Pi _{x,y}^{(i)}\).
Suppose the h-process has been constructed up to time t, and assume that
Assume also that
The last property above means that if the hole jumped from x to y at some time \(s\le t\), then during (s, t] the clocks of \(\mathcal {E}_{x \rightarrow y}\) have not rung (but it could be the case that a fake-clock from \(\mathcal {E}_{x\rightarrow y}\) has rung, see the discussions in Sects. 6.6.4 and 6.6.5). We will use the convention that \(\partial ^\mathrm {o}h_t\) gives the outer boundary of \(h_t\) that can be infected by the h-process. (Recall that a vertex at the boundary of \(h_t\) that is occupied by the aggregate does not get infected if the corresponding cluster of the h-process is separated from infinity by the aggregate of MDLA.)
We let MDLA evolve according to its clocks until the first time \(t+W\) at which either of two events happens:

1.
A birth operation takes place (see the definition in Sect. 6.5). Note that in this case new vertices are added to the h-process, so \(h_{t+W}\ne h_t\).

2.
A clock (regardless of whether it is an MDLA-clock or a fake-clock) from \(\partial ^\mathrm {e}h_t \cup \mathcal {H}_B(t)\) rings. In this case, we will not yet observe which of these clocks rang. We will call this case a potential expansion operation.
Birth operation or addition of vertices to the h-process
Suppose that at time \(t+W\) an (unoriented) edge (x, y) becomes part of \(\partial ^\mathrm {e}h_{t+W}\); assume without loss of generality that this is because x got infected at time \(t+W\). Then we create a variable \(\Lambda _{x,y}\) whose initial value is 0. The value of \(\Lambda _{x,y}\) will be updated and will be used to add elements to the list \(\Pi _{x,y}\). Then we perform the following step:
Potential expansion operation of the h-process
Suppose that at time \(t+W\) an edge from \(\partial ^\mathrm {e}h_t \cup \mathcal {H}_B(t)\) rings. The h-process will expand according to the description in Sect. 6.6, but we elaborate a bit more here. We do not immediately observe which edge rings; this edge is still random and could also correspond to a fake-clock of an edge of \(\mathcal {H}_B^*(t)\).
Now we want to sample which clock from \(\partial ^\mathrm {e}h_t\cup \mathcal {H}_B(t)\) rings first. We will denote by
However, we need to be a bit careful in the sampling of \((x\rightarrow y)\). First, let
where each edge in \(\mathcal {H}_B^*(t)\) is counted with multiplicity 2 (to account for its fake-clock and MDLA-clock) while the other edges have multiplicity 1. We would like to set \((x\rightarrow y)=(x'\rightarrow y')\). The problem is that, when a hole jumps over an edge \((u \rightarrow v)\) at some time s, where \(v\not \in h_s\), we need to decide right away whether that hole will backtrack to u later, and this impacts which edge from \(\mathcal {E}_{u\rightarrow v}\) rings first. This decision is a function of the Bernoulli variable \(\mathfrak {B}_{u\rightarrow v}\). As explained in Sect. 6.6.2, if \(\mathfrak {B}_{u\rightarrow v}=1\), we know that the next clock to ring will be that of \((v\rightarrow u)\), whereas if \(\mathfrak {B}_{u\rightarrow v}=0\) we have that the next clock to ring is chosen uniformly at random from the clocks of \(\mathcal {E}_{u\rightarrow v}{\setminus } (v \rightarrow u)\).
We can now state precisely how \((x\rightarrow y)\) is chosen. Recall that if \((x' \rightarrow y')\in \mathcal {H}_B^*(t)\) with \((x'\rightarrow y')\in \mathcal {E}_{x''\rightarrow x'}\cap \mathcal {E}_{y' \rightarrow y''}\), then we say that \((x' \rightarrow y')\in \mathcal {E}_{x'' \rightarrow x'}\) only if it was the fake-clock of \((x'\rightarrow y')\) that rang, and say that \((x' \rightarrow y')\in \mathcal {E}_{y' \rightarrow y''}\) if it was the MDLA-clock of \((x'\rightarrow y')\) that rang. Then, if \((x' \rightarrow y')\in \mathcal {E}_{u\rightarrow v}\) for some \((u\rightarrow v)\in \mathcal {H}(t)\) with \(\mathfrak {B}_{u\rightarrow v}=1\), set \((x\rightarrow y)=(v \rightarrow u)\). If \((x' \rightarrow y')=(v\rightarrow u)\) for some \((u\rightarrow v)\in \mathcal {H}(t)\) with \(\mathfrak {B}_{u\rightarrow v}=0\), then \((x\rightarrow y)\) is chosen uniformly at random from \(\mathcal {E}_{u\rightarrow v}{\setminus } (v\rightarrow u)\). In any other case, \((x\rightarrow y)=(x'\rightarrow y')\).
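Schematically, this case analysis can be written as a small routine. In the sketch below, `H`, `B` and `E` are hypothetical stand-ins for \(\mathcal {H}(t)\), the Bernoulli variables \(\mathfrak {B}\) and the sets \(\mathcal {E}_{u\rightarrow v}\), with edges represented as plain vertex pairs; it illustrates the selection rule only, not the actual construction:

```python
import random

def resolve_ring(xp_yp, H, B, E, rng=random):
    """Map the edge (x', y') whose clock rang to the edge (x, y) that is
    treated as ringing, following the three cases in the text."""
    for (u, v) in H:
        if xp_yp in E[(u, v)]:
            if B[(u, v)] == 1:
                return (v, u)                 # forced backtracking jump
            if xp_yp == (v, u):
                # B = 0: redraw uniformly among the other M - 1 clocks
                return rng.choice([e for e in E[(u, v)] if e != (v, u)])
    return xp_yp                              # any other case: keep (x', y')

# Tiny example with made-up vertex labels
E = {('u', 'v'): [('v', 'u'), ('v', 'a'), ('b', 'v')]}
H = [('u', 'v')]
print(resolve_ring(('v', 'a'), H, {('u', 'v'): 1}, E))  # ('v', 'u')
print(resolve_ring(('p', 'q'), H, {('u', 'v'): 0}, E))  # ('p', 'q')
```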
Now we describe the actions we need to take to compute the passage times as the h-process evolves according to the description in Sect. 6.6. If an edge enters \(\partial ^\mathrm {e}h_{t+W}\) due to the ringing of \((x\rightarrow y)\), we perform the steps described in Sect. 6.7.1 for that edge. Moreover, in any case, we
After the above, we do the following.
Note that if we always have \((x' \rightarrow y') =(x\rightarrow y)\), then the variables \(\Lambda _{x,y}\) would be i.i.d. exponential random variables of rate 1, since they are constructed exactly as described in Lemma B.3. However, we will have cases when \((x'\rightarrow y' ) \ne (x\rightarrow y)\), which will still give rise to the variables \(\Lambda _{x,y}\) being independent exponential random variables, but of possibly different rates. This will be explained in Sect. 6.8, after the passage times have been constructed.
Finally, if we have that \((x\rightarrow y) \in \partial ^\mathrm {e}h_t\) and y gets infected at time \(t+W\), we close the list \(\Pi _{x,y}\), and define \(\tau _{x,y}\) as in (56) (thereby concluding the construction of \(\tau _{x,y}\)). This means that nothing else will be added to the list \(\Pi _{x,y}\). This can happen in the following cases:

\((x\rightarrow y) \in \partial ^\mathrm {e}h_t {\setminus } \mathcal {H}_B(t)\), with x not hosting a hole in MDLA or y being occupied in MDLA. This was described in Sect. 6.6.3, where y gets infected with probability \(\frac{M-1}{M}\).

\((x\rightarrow y) \in \partial ^\mathrm {e}h_t {\setminus } \mathcal {H}_B(t)\), with x hosting a hole in MDLA, y unoccupied, there exists no other edge \((\cdot \rightarrow y)\in \mathcal {H}(t)\), and \(\mathfrak {B}_{x\rightarrow y}=0\). This is the case that a hole jumps out of \(h_t\) (from x to y) and does not backtrack to x. This was described in Sects. 6.6.1 and 6.6.2.

\((x\rightarrow y) \in \mathcal {E}_{u\rightarrow v}\) for some \((u\rightarrow v)\in \mathcal {H}(t)\) with \(\mathfrak {B}_{u\rightarrow v}=0\) (so \(v\in h_t\) and \((x\rightarrow y)\ne (v\rightarrow u)\)). As mentioned in (52), the probability that \((x\rightarrow y)\) is equal to a given \((w\rightarrow z)\in \mathcal {E}_{u\rightarrow v}{\setminus } (v\rightarrow u)\) is \(\frac{1}{M-1}\). If, in addition, we have that \((x\rightarrow y) \in \mathcal {E}_v^\rightarrow \), so \(x=v\), we may infect y if there is still a hole at \(x=v\) and \(\mathfrak {B}_{x\rightarrow y}=0\). (Note that this actually falls into the setting of the previous case, but we chose to highlight it here since a given edge from \(\mathcal {E}_v^\rightarrow \) rings at rate \(\frac{M}{M-1}\) instead of rate 1, due to the conditioning on \(\mathfrak {B}_{x\rightarrow y}=0\).)
Iterating the above construction will produce the lists \(\Pi _{x,y}\) for some edges (x, y). For each edge that was not visited during this procedure, we sample an independent, exponential random variable of rate \(\frac{M-1}{M}\) to be its passage time. For each edge (u, v) whose construction was not completed, we add to \(\Lambda _{u,v}\) an independent exponential random variable of rate \(\frac{M-1}{M}\), add \(\Lambda _{u,v}\) as a new element to \(\Pi _{u,v}\), and complete the construction of \(\tau _{u,v}\). Regardless of the value of these random variables, the evolution of the h-process will not change. Also, the final passage times of those edges stochastically dominate exponential random variables of rate \(\frac{M-1}{M}\).
Properties of the passage times
Before establishing properties of the passage times, we first prove (53).
Lemma 6.1
For any \(t\ge 0\), (53) holds.
Proof
Clearly (53) holds at time 0 since \(\mathcal {H}(0)=\emptyset \). Now assume that it holds during [0, t), and that at time t a hole jumps out of \(h_{t}\), from x to y; so \((x\rightarrow y)\) is added to \(\mathcal {H}(t)\) via Case 1 (cf. Sect. 6.6.1) and \(x\in h_t\). Then, at time t, x is not occupied by a hole or the aggregate in MDLA and belongs to \(h_t\), while y hosts a hole in MDLA. If \(\mathfrak {B}_{x\rightarrow y}=0\), y gets infected at time t and (53) continues to hold. If \(\mathfrak {B}_{x\rightarrow y}=1\), y remains uninfected, but x remains unoccupied until at a time \(s>t\) the clock of an edge from \(\mathcal {E}_{x}^\leftarrow \) rings, but that edge must be \((y\rightarrow x)\) since \(\mathfrak {B}_{x\rightarrow y}=1\). So the hole gets back to an infected vertex.
It remains to show that the edges in \(\mathcal {H}(t)\) are disjoint. Assume that this is not the case; that is, there are edges \((x\rightarrow y),(u\rightarrow v)\in \mathcal {H}(t)\) with \(\{u,v\}\cap \{x,y\}\ne \emptyset \). Assume that \((x\rightarrow y)\) and \((u\rightarrow v)\) were added to \(\mathcal {H}\) at times \(t_x\) and \(t_u\), respectively, with \(t_x>t_u\). If \(u=x\), then at some time during \((t_u,t_x)\) a hole jumped into \(u=x\) in order to go to y at time \(t_x\). But this would cause \((u\rightarrow v)\) to be removed from \(\mathcal {H}\) by Case 2, which would imply that \((u\rightarrow v)\not \in \mathcal {H}(t_x)\). If \(v=y\), then at time \(t_x\) we have \(y\not \in h_{t_x}\); otherwise we would not add \((x\rightarrow y)\) to \(\mathcal {H}(t_x)\). In this case, since y does not host a hole at time \(t_x\), Case 1 gives that \((x\rightarrow y)\) is not added to \(\mathcal {H}(t_x)\) because there is already an edge \((\cdot \rightarrow y) \in \mathcal {H}(t_x)\). The case \(u=y\) cannot happen, because \((x\rightarrow y)\in \mathcal {E}_{u}^\leftarrow \), so Case 2 would remove \((u\rightarrow v)\) from \(\mathcal {H}(t_x)\). The case \(v=x\) is similar, since \((x\rightarrow y)\in \mathcal {E}_{v}^\rightarrow \), so Case 2 would remove \((u\rightarrow v)\) from \(\mathcal {H}(t_x)\). \(\square \)
Let \(Z^{(M)}\) be the following random variable. Take \(Z'\) to be an exponential random variable of rate M, \(Z''\) be an exponential random variable of rate \(\frac{M-1}{M}\), and Q be a Bernoulli random variable of parameter \(\frac{M-1}{M}\), where \(Z',Z''\) and Q are independent of one another. Define \(Z^{(M)}=Z' + {{\mathfrak {1}}}\left( Q=1\right) Z''\).
Lemma 6.2
For any \(M> 1\), \(Z^{(M)}\) stochastically dominates (strictly) an exponential random variable of rate 1. Moreover, \(Z^{(M)}\) is stochastically dominated by an exponential random variable of rate \(\frac{M-1}{M}\).
Proof
For the first part, note that if we take (62) and replace \(Z''\) with \({\widehat{Z}}\), where \({\widehat{Z}}\) is an exponential random variable of rate 1, then Lemma B.3 with \(k=M\), \(W=Z'\) and \(X_1={\widehat{Z}}\) (the values of \(X_2,X_3,\ldots ,X_M\) being irrelevant) gives that \(Z' + {{\mathfrak {1}}}\left( Q=1\right) {\widehat{Z}}\) is an exponential random variable of rate 1. Since \(Z''\) stochastically dominates \({\widehat{Z}}\), we obtain the first part of the lemma.
For the second statement, we know from Lemma B.3 and the first part that \(Z' + {{\mathfrak {1}}}\left( Q=1\right) {\widehat{Z}}\) is an exponential random variable of rate 1. Then, using Lemma B.2, we have that \(\frac{M}{M-1}Z' + {{\mathfrak {1}}}\left( Q=1\right) \frac{M}{M-1}{\widehat{Z}}\) is an exponential random variable of rate \(\frac{M-1}{M}\). But that variable has the same distribution as \(\frac{M}{M-1}Z' + {{\mathfrak {1}}}\left( Q=1\right) Z''\ge Z^{(M)}\). \(\square \)
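A quick simulation of the sandwich in Lemma 6.2, checking only the implied ordering of means, \(1<\mathbb {E}[Z^{(M)}]=\frac{M+1}{M}<\frac{M}{M-1}\) (necessary for, though of course weaker than, stochastic domination); the choice M = 4 is arbitrary:

```python
import random

def sample_Z(M, rng=random):
    """One sample of Z^(M): Z' ~ Exp(M), plus an independent
    Z'' ~ Exp((M-1)/M) added when the Bernoulli variable Q equals 1."""
    z = rng.expovariate(M)                 # Z'
    if rng.random() < (M - 1) / M:         # Q = 1 with probability (M-1)/M
        z += rng.expovariate((M - 1) / M)  # Z''
    return z

random.seed(2)
M, n = 4, 300_000
mean_z = sum(sample_Z(M) for _ in range(n)) / n
# E[Z^(M)] = 1/M + ((M-1)/M) * (M/(M-1)) = (M+1)/M, strictly between
# 1 (mean of Exp(1)) and M/(M-1) (mean of Exp((M-1)/M))
print(round(mean_z, 2))
```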
Now we show that the passage times \(\tau _{x,y}\) stochastically dominate i.i.d. random variables distributed as \(Z^{(M)}\).
Lemma 6.3
The collection of passage times \(\tau _{x,y}\), over all edges (x, y), stochastically dominates an i.i.d. collection of random variables distributed as \(Z^{(M)}\).
Proof
If an edge (w, z) is such that its passage time was not completed during the procedure above, then we know that its passage time stochastically dominates an independent, exponential random variable of rate \(\frac{M-1}{M}\), which in turn stochastically dominates \(Z^{(M)}\) by the second part of Lemma 6.2.
So now we consider a given edge (w, z), whose passage time was completely constructed during the procedure described in Sects. 6.6 and 6.7. Let \(t_w<t_z\) be the times such that w is infected at time \(t_w\) and z is infected at time \(t_z\). The passage time of (w, z) is then completed at time \(t_z\) and is \(t_zt_w\).
Now we split the proof into two parts. In the first part, assume that w does not host a hole at time \(t_w\) (for example, because it was infected according to Case 3, Sect. 6.6.3). The crucial property we will use in this case is that \((w\rightarrow z)\) cannot belong to \(\mathcal {H}_B\) during \([t_w,t_z)\), since an edge \((z\rightarrow \cdot )\) cannot be in \(\mathcal {H}\) as z is not infected, and an edge \((\cdot \rightarrow w)\) cannot be in \(\mathcal {H}\) since w was infected without a hole. As the h-process evolves from \(t_w\), at each step, we add W to \(\Lambda _{w,z}\), until we find a step where the clock that rings is the one of \((w\rightarrow z)\). This is the procedure described in Lemma B.3 for the construction of independent exponential random variables of rate 1. Hence, at the time the clock of \((w \rightarrow z)\) rings, call this time s, we have that \(\Lambda _{w,z}\) is an exponential random variable of rate 1, which is then added to \(\Pi _{w,z}\). If at time s we fall into the setting of Case 3 (Sect. 6.6.3), then we only infect z with probability \(\frac{M-1}{M}\); otherwise we wait for the next time the clock of \((w\rightarrow z)\) rings, adding another exponential random variable of rate 1 to the list \(\Pi _{w,z}\), and iterating this procedure. If at time s we fall into the setting of Case 1 (Sect. 6.6.1), we only infect z if \(\mathfrak {B}_{w \rightarrow z}=0\), which occurs with probability \(\frac{M-1}{M}\); otherwise we iterate this procedure since the hole will jump back to w (or to the infected set before \((z\rightarrow w)\) rings). Therefore, each element of the list \(\Pi _{w,z}\) is an independent exponential random variable of rate 1, and the number of elements is given by a geometric random variable of success probability \(\frac{M-1}{M}\). Lemma B.1 gives that \(\tau _{w,z}\) is in this case an exponential random variable of rate \(\frac{M-1}{M}\).
Since this stochastically dominates \(Z^{(M)}\) by Lemma 6.2, this first part is completed.
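The invocation of Lemma B.1 above is the classical fact that a Geometric(p)-distributed number of independent rate-1 exponentials sums to an exponential of rate p. An illustrative simulation with \(p=\frac{M-1}{M}\) and M = 4 (an arbitrary choice):

```python
import random

def geometric_sum_exp(p, rng=random):
    """Sum Exp(1) variables until a success of probability p occurs;
    the total is then Exp(p) (the analogue of Lemma B.1)."""
    total = 0.0
    while True:
        total += rng.expovariate(1.0)   # one more rate-1 passage time
        if rng.random() < p:            # the vertex finally gets infected
            return total

random.seed(3)
M, n = 4, 300_000
p = (M - 1) / M
mean = sum(geometric_sum_exp(p) for _ in range(n)) / n
print(round(mean, 2))  # close to 1/p = M/(M-1), i.e. about 1.33
```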
Now, for the second part, assume that w hosts a hole at time \(t_w\), and that this hole jumped to w from a vertex \(w'\). The first situation to imagine is that the hole jumped from \(w'\) to w at time \(t_w\), which causes \((w' \rightarrow w)\) to be added to \(\mathcal {H}(t_w)\); thus \(\mathfrak {B}_{w'\rightarrow w}=0\). But it could also be the case that \(\mathfrak {B}_{w'\rightarrow w}=1\). For this to happen, the hole must have jumped from \(w'\) to w at a time \(t_w'<t_w\), which caused \((w' \rightarrow w)\) to be added to \(\mathcal {H}(t_w')\) and w not to be added to \(h_{t_w'}\). Then, at time \(t_w\), the clock of an edge \((w'' \rightarrow w)\) (which is not part of \(\mathcal {E}_{w'\rightarrow w}\)) rings while w is still occupied by a hole, triggering Case 3, which decides to infect w. Regardless of which of the two situations above occurs, we know that \((w \rightarrow z)\in \mathcal {E}_{w'\rightarrow w}\).
If \(\mathfrak {B}_{w'\rightarrow w}=1\), then we know the hole will do a backtracking jump from w to \(w'\) before the clock of \((w\rightarrow z)\) rings, which will cause \((w'\rightarrow w)\) to be removed from \(\mathcal {H}\). At this point, w will not have a hole anymore and we may proceed as in the first part of the proof, which implies that the passage time \(\tau _{w,z}\) stochastically dominates an exponential random variable of rate \(\frac{M-1}{M}\).
The most delicate case is when \(\mathfrak {B}_{w'\rightarrow w}=0\). Suppose that at time s a clock from \(\mathcal {E}_{w'\rightarrow w}\) rings. Note that, since \(|\mathcal {E}_{w'\rightarrow w}|=M\), we will have that \(\Lambda _{w,z}=s-t_w\) is distributed as an exponential random variable of rate M. Let \((x\rightarrow y)\) be the edge whose clock rang at time s. Then a few cases may happen.
Case A: \((x \rightarrow y)=(w\rightarrow z)\). This happens with probability \(\frac{1}{M-1}\). When this is the case, we will have to do the steps of Case 2 plus Case 1, which causes \((w\rightarrow z)\) to be added to \(\mathcal {H}(s)\). Two subcases can then happen.
Case A.1: With probability \(\frac{M-1}{M}\) we have \(\mathfrak {B}_{w\rightarrow z}=0\), so the hole at z will not do a backtracking jump to w; hence we infect z at time s. Therefore, with overall probability \(\frac{1}{M-1} \times \frac{M-1}{M}= \frac{1}{M}\) we obtain that the passage time of (w, z) is completed at time s, which implies that \(\tau _{w,z}\) is an exponential random variable of rate M.
Case A.2: With probability \(\frac{1}{M}\) we have \(\mathfrak {B}_{w\rightarrow z}=1\), so the hole will do a backtracking jump to w and we will not infect z. From this time onwards, the passage time of (w, z) is computed exactly as in the first part of the lemma. Therefore, from s, we will wait a time that is distributed according to an exponential random variable of rate \(\frac{M-1}{M}\) to complete the passage time of (w, z). This gives that \(\tau _{w,z}\) is the sum of an exponential random variable of rate M plus an exponential random variable of rate \(\frac{M-1}{M}\).
Case B: \((x\rightarrow y)\ne (w\rightarrow z)\). This happens with probability \(\frac{M-2}{M-1}\) and concludes the processing of the backtracking jump of \((w' \rightarrow w)\). From this time onwards, the passage time of (w, z) is computed exactly as in the first part of the proof. This gives that \(\tau _{w,z}\) is the sum of an exponential random variable of rate M plus an exponential random variable of rate \(\frac{M-1}{M}\).
This concludes all cases in this second part. Then, the probability that \(\tau _{w,z}\) is given by the sum of an exponential random variable of rate M and an exponential random variable of rate \(\frac{M-1}{M}\) is the probability that case A.1 does not happen, which is \(1-\frac{1}{M}\). Therefore, \(\tau _{w,z}\) is distributed as \(Z^{(M)}\) from (62).
The independence of \(\tau \) across different edges follows from the fact that elements are added to each list \(\Pi _{x,y}\) only when the edge (x, y) rings, and edges ring independently of one another. \(\square \)
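As a consistency check on the second part of the proof: the expectation of the mixture described there (an exponential of rate M with probability \(\frac{1}{M}\), and the sum of exponentials of rates M and \(\frac{M-1}{M}\) with probability \(1-\frac{1}{M}\)) matches that of \(Z^{(M)}\), since \(\mathbb {P}(Q=1)=\frac{M-1}{M}\):

```latex
\mathbb{E}[\tau_{w,z}]
  \;=\; \frac{1}{M}\cdot\frac{1}{M}
  \;+\; \Big(1-\frac{1}{M}\Big)\Big(\frac{1}{M}+\frac{M}{M-1}\Big)
  \;=\; \frac{1}{M}+1
  \;=\; \mathbb{E}[Z'] + \mathbb{P}(Q=1)\,\mathbb{E}[Z'']
  \;=\; \mathbb{E}\big[Z^{(M)}\big].
```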
Concluding the proof of Theorem 1.1
The proof will rely on a result by van den Berg and Kesten [28] about strict inequalities for first passage percolation. We will state here a version that is adapted to our needs. On the one hand, it is a special case of the main result in [28], as we explain in Remark 6.5 below. On the other hand, the result we need does not follow from the statements of the theorems in [28], but it does follow from the proof there, without changing a single word. We will not repeat the full proof of [28] here, but will give a description of the main steps.
Recall that \(\mathcal {B}_\upsilon =\mathcal {B}_\upsilon \left( 1\right) \) denotes the ball of radius 1 according to the norm given by a first passage percolation with passage times given by i.i.d. random variables of distribution \(\upsilon \). Recall also that we drop the subscript \(\upsilon \) when \(\upsilon \) is the exponential distribution of rate 1.
Proposition 6.4
If \(\upsilon \) stochastically dominates and is not equal to the exponential distribution of rate 1, then there exists \(\epsilon >0\) such that \(\mathcal {B}_\upsilon \subseteq \mathcal {B}(1-\epsilon )\).
Proof
Let \(\mathrm {FPP}_1\) stand for a first passage percolation process with i.i.d. exponential passage times of rate 1, and let \(\mathrm {FPP}_\upsilon \) stand for a first passage percolation with i.i.d. passage times of distribution \(\upsilon \). This proposition is implicitly proven in [28], but the theorem in [28] states the above result only in the axial direction. To set up notation, let \(x_n=(n,0,0,\ldots ,0)\in \mathbb {Z}^d\), and for any \(x\in \mathbb {Z}^d\), define T(x) and \(T_\upsilon (x)\) to be the times that \(\mathrm {FPP}_1\) and \(\mathrm {FPP}_\upsilon \), respectively, take to occupy x. By Kingman’s subadditive ergodic theorem [21] we have that the following limits exist almost surely: \(\nu = \lim _{n\rightarrow \infty } \frac{T(x_n)}{n}\) and \(\nu _\upsilon = \lim _{n\rightarrow \infty } \frac{T_\upsilon (x_n)}{n}\).
By monotonicity and stochastic domination, we obtain that \(\nu \le \nu _\upsilon \), and the main result of [28] is to establish the strict inequality \(\nu < \nu _\upsilon \).
The proof in [28] goes via a renormalization argument. First fix a large value \(\ell \) and partition \(\mathbb {Z}^d\) into cubes of side length \(\ell \). Then one says that a cube R is good if a certain “good” event happens. The good event is such that, for any given path P of \(\mathrm {FPP}_\upsilon \) inside R, there is a positive probability (uniformly over P and R) that one can find an alternative path \(P'\) which differs very little from P and such that the time \(\mathrm {FPP}_1\) takes to traverse \(P'\) is at most the time that \(\mathrm {FPP}_\upsilon \) takes to traverse P minus a fixed value \(\delta >0\). Then a percolation argument (which is by now quite standard) gives that the set of good cubes percolates on \(\mathbb {Z}^d\). This means that any long enough path on \(\mathbb {Z}^d\), say of size n, must pass through a number of good cubes of order n. (Here \(\ell \) and \(\delta \) are fixed, while n can grow.) Then, we can consider the geodesic path P that \(\mathrm {FPP}_\upsilon \) takes to go from 0 to \(x_n\), and using the above reasoning we obtain an order of n good cubes R such that P has a long piece inside R. Then, for each good cube, with positive probability we can replace the long piece of P within the good cube by the alternative path provided by the definition of good cubes, which produces a path from 0 to \(x_n\) whose passage time in \(\mathrm {FPP}_1\) is smaller than that of \(\mathrm {FPP}_\upsilon \) by an amount of order n. This implies that one obtains a value \(\delta '>0\), depending only on \(\upsilon \), for which \(\nu \le (1-\delta ') \nu _\upsilon \). This is the argument in [28].
An important feature of the proof in [28] is that it does not depend on the direction; this fact was already observed in [22]. In other words, instead of only considering the sequence of vertices \((x_n)_n\) as defined above, we can consider any rational point \(q\in \mathbb {Q}^d\) and the sequence \(x_n = q\, n\). Then, for each \(x\in \mathbb {R}^d\), associate x to the integer point \(y\in \mathbb {Z}^d\) such that \(x\in y+[-1/2,1/2)^d\), and generalize T(x) and \(T_\upsilon (x)\) to be the times that \(\mathrm {FPP}_1\) and \(\mathrm {FPP}_\upsilon \), respectively, take to occupy the integer point that is associated to x. Then we can define \(\nu (q)\) and \(\nu _\upsilon (q)\) as in (63):
The very same proof in [28] gives that one can find \(\delta ''>0\) depending only on \(\upsilon \) such that, uniformly over all \(q\in \mathbb {Q}^d\), one has \(\nu (q) \le (1\delta '')\nu _\upsilon (q)\).
Then one can obtain two continuous functions \({\bar{\nu }},{\bar{\nu }}_\upsilon \) from \(\mathbb {R}^d\) to \(\mathbb {R}_+\) by taking the unique continuous extension of \(\nu ,\nu _\upsilon \). Since \(\delta ''>0\) uniformly over the choice of q, we obtain the existence of \(\epsilon >0\) for which \(\mathcal {B}_\upsilon \subset \mathcal {B}(1-\epsilon )\). \(\square \)
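While the strict inequality requires the renormalization argument above, the weak inequality \(\nu \le \nu _\upsilon \) can be made concrete by a coupling: using the decomposition from the proof of Lemma 6.2, draw both passage-time families from the same randomness so that every \(Z^{(M)}\)-weight dominates its \(\mathrm {Exp}(1)\) partner edge by edge, and compare first passage times via Dijkstra's algorithm on a finite grid. A sketch (grid size and M are arbitrary choices; the finite box only illustrates the monotonicity, not the time constants themselves):

```python
import heapq, random

def fpp_times(width, weights):
    """First passage times from the origin on a width x width grid (Dijkstra)."""
    dist = {(0, 0): 0.0}
    heap = [(0.0, (0, 0))]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        x, y = v
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= w[0] < width and 0 <= w[1] < width:
                nd = d + weights[(min(v, w), max(v, w))]
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    heapq.heappush(heap, (nd, w))
    return dist

random.seed(4)
M, width = 4, 40
w_exp, w_Z = {}, {}
for x in range(width):
    for y in range(width):
        for w in ((x + 1, y), (x, y + 1)):
            if w[0] < width and w[1] < width:
                e = (min((x, y), w), max((x, y), w))
                zp = random.expovariate(M)             # Z' ~ Exp(M)
                q = random.random() < (M - 1) / M      # the Bernoulli Q
                zhat = random.expovariate(1.0)         # Zhat ~ Exp(1)
                w_exp[e] = zp + (zhat if q else 0.0)   # distributed Exp(1)
                w_Z[e] = zp + (M / (M - 1) * zhat if q else 0.0)  # Z^(M), >= w_exp

target = (width - 1, width - 1)
t_exp = fpp_times(width, w_exp)[target]
t_Z = fpp_times(width, w_Z)[target]
print(t_exp <= t_Z)  # True: pointwise domination of the weights forces T_1 <= T_v
```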
Remark 6.5
The result in [28] is in some sense more general than stated above since instead of requiring stochastic domination, it just requires that \(\upsilon \) is less variable than an exponential random variable of rate 1; but we will not require such level of generality.
Proof of Theorem 1.1
Using Lemma 6.3, we have that the h-process grows more slowly than if each red seed grew a cluster of first passage percolation with passage times distributed as \(Z^{(M)}\). Let \(\upsilon \) be the distribution of \(Z^{(M)}\). Then, by Proposition 6.4, there exists \(\lambda <1\) such that \(\mathcal {B}_\upsilon \subseteq \mathcal {B}(\lambda )\). Then, performing the whole multiscale procedure described in Sect. 5 and using the encapsulation procedure of Sect. 4, we obtain that with positive probability the set of sites that are not infected by the h-process and that are occupied by the aggregate grows indefinitely and contains a ball (up to regions separated from infinity by this set of sites).
Note that, in the h-process, whole clusters of seeds are added when the aggregate tries to occupy a seed, while in FPPHE seeds are activated one by one. This is not a problem. In fact, the same proof works if in FPPHE the activation of a seed implies the activation of its whole cluster of seeds. The reason is that, in the encapsulation procedure of Proposition 4.2, we already assumed that \(\xi ^2(0)\) starts from time 0 from any subset of a ball \(\mathcal {B}\left( r\right) \) of radius r. \(\square \)
Notes
For the sake of clarity, given any set of vertices \(X\subset \mathbb {Z}^d\), when we say that C is a cluster of X, we mean that \(C\subset X\) and \(\partial ^\mathrm {o}C \cap X = \emptyset \).
If the aggregate decides to occupy a site x at the same time as the h-process decides to infect x, then we assume that the aggregate occupies x immediately before the h-process infects x. This is just a convenience to take care of the following situation. Assume that x is a neighbor of a cluster of infected sites which is disconnected from infinity by the aggregate but x itself is not disconnected from infinity by the aggregate. This implies that the aggregate occupies the neighbors of x that belong to the infected cluster. If at this time the aggregate and the h-process both decide to occupy x, we obtain that the aggregate does so first, and then the h-process does not infect x due to the halting upon encapsulation operation.
References
Auffinger, A., Damron, M., Hanson, J.: 50 Years of First-Passage Percolation. American Mathematical Society, Providence (2017)
Barlow, M.T.: Fractals, and diffusion-limited aggregation. Bull. Sci. Math. 117(1), 161–169 (1993)
Barlow, M.T., Pemantle, R., Perkins, E.A.: Diffusion-limited aggregation on a tree. Probab. Theory Relat. Fields 107(1), 1–60 (1997)
Barra, F., Davidovitch, B., Levermann, A., Procaccia, I.: Laplacian growth and diffusion limited aggregation: different universality classes. Phys. Rev. Lett. 87(13), 134501 (2001)
Benjamini, I., Yadin, A.: Diffusion limited aggregation on a cylinder. Commun. Math. Phys. 279(1), 187–223 (2008)
Candellero, E., Stauffer, A.: Coexistence of competing first passage percolation on hyperbolic graphs (2018). arXiv:1810.04593
Carleson, L., Makarov, N.: Aggregation in the plane and Loewner’s equation. Commun. Math. Phys. 216(3), 583–607 (2001)
Chayes, L., Swindle, G.: Hydrodynamic limits for one-dimensional particle systems with moving boundaries. Ann. Probab. 24(2), 559–598 (1996)
Cox, J.T., Durrett, R.: Some limit theorems for percolation processes with necessary and sufficient conditions. Ann. Probab. 9(4), 583–603 (1981)
Eldan, R.: Diffusion-limited aggregation on the hyperbolic plane. Ann. Probab. 43(4), 2084–2118 (2015)
Grimmett, G., Kesten, H.: First-passage percolation, network flows and electrical resistances. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 66(3), 335–366 (1984)
Häggström, O., Pemantle, R.: Absence of mutual unbounded growth for almost all parameter values in the two-type Richardson model. Stoch. Process. Appl. 90(2), 207–222 (2000)
Hastings, M.B., Levitov, L.S.: Laplacian growth as one-dimensional turbulence. Phys. D 116, 244–252 (1998)
Witten Jr., T.A., Sander, L.M.: Diffusion-limited aggregation, a kinetic critical phenomenon. Phys. Rev. Lett. 47(19), 1400–1403 (1981)
Kassner, K.: Pattern Formation in Diffusion-Limited Crystal Growth. World Scientific, Singapore (1996)
Kesten, H.: How long are the arms in DLA? J. Phys. A 20(1), L29–L33 (1987)
Kesten, H.: Upper bounds for the growth rate of DLA. Phys. A 168(1), 529–535 (1990)
Kesten, H.: On the speed of convergence in first-passage percolation. Ann. Appl. Probab. 3(2), 296–338 (1993)
Kesten, H., Sidoravicius, V.: Positive recurrence of a one-dimensional variant of diffusion limited aggregation. In and Out of Equilibrium 2. Progress in Probability, vol. 60, pp. 429–461. Birkhäuser, Basel (2008)
Kesten, H., Sidoravicius, V.: A problem in one-dimensional diffusion-limited aggregation (DLA) and positive recurrence of Markov chains. Ann. Probab. 36(5), 1838–1879 (2008)
Kingman, J.F.C.: Subadditive ergodic theory. Ann. Probab. 1(6), 883–899 (1973)
Marchand, R.: Strict inequalities for the time constant in first passage percolation. Ann. Appl. Probab. 12(3), 1001–1038 (2002)
Martineau, S.: Directed diffusion-limited aggregation. ALEA Lat. Am. J. Probab. Math. Stat. 14(1), 249–270 (2017)
Richardson, D.: Random growth in a tessellation. Proc. Camb. Philos. Soc. 74, 515–528 (1973)
Rosenstock, H., Marquardt, C.: Cluster formation in twodimensional random walks: application to photolysis of silver halides. Phys. Rev. B 22(12), 5797–5809 (1980)
Saffman, P.G., Taylor, G.I.: The penetration of a fluid into a porous medium or Hele-Shaw cell containing a more viscous liquid. Proc. R. Soc. Lond. A 245, 312–329 (1958)
Silvestri, V.: Fluctuation results for Hastings–Levitov planar growth. Probab. Theory Relat. Fields 167(1), 417–460 (2017)
van den Berg, J., Kesten, H.: Inequalities for the time constant in first-passage percolation. Ann. Appl. Probab. 3(1), 56–80 (1993)
Voss, R.: Multiparticle fractal aggregation. J. Stat. Phys. 36(5/6), 861–872 (1984)
Vladas Sidoravicius: Supported in part by CNPq Grants 308787/2011-0 and 476756/2012-0 and FAPERJ Grant E-26/102.878/2012-BBP.
Alexandre Stauffer: Supported by a Marie Curie Career Integration Grant PCIG13-GA-2013-618588 DSRELIS, and EPSRC Early Career Fellowship EP/N004566/1.
Appendices
Appendix: Proof of Propositions 4.1 and 4.2
We first establish Proposition 4.1 and at the end, in Sect. A.4, we discuss how the proof can be changed to establish Proposition 4.2. We start by describing the overall strategy of the proof and setting up the notation. The main intuition behind the proof is that since \(\xi ^2(0)\) is initially inside \(\mathcal {B}\left( r\right) \) and \(\xi ^1(0)\) is outside \(\mathcal {B}\left( \alpha r\right) \), with \(\alpha >1\) being large enough, there is enough space for \(\xi ^1\) to start growing before noticing the presence of \(\xi ^2\). This gives enough time for \(\xi ^1\) and \(\xi ^2\) to get closer to the set predicted by the shape theorem. Then we can guarantee that \(\xi ^1\) can encapsulate \(\xi ^2\) by letting \(\xi ^1\) occupy a sequence of growing annulus sectors centered at the origin. This is illustrated in Fig. 10.
We now turn to the details of the construction of the annulus sectors. Set
Note that \(\delta \le \left( \frac{1}{\lambda ^{1/10}}-1\right) \wedge \frac{1}{10}\). Define
The value of \(C_n\) is related to the angle of the annulus sector at step n, which starts from the angle related to position x and increases until \(C_n\) is the full unit circle, according to the norm \(\Vert \cdot \Vert \). Let N be the step where we obtain the unit circle; i.e., N is the smallest integer so that \(C_N=C_{N+1}\). Note that
where \(d_H\) stands for the Hausdorff distance. Let
The goal is to show that, for each \(n=1,2,\ldots ,N\), \(\xi ^1\) completely occupies \(A_n^1\) after step n. Hence \(\xi ^1\) will encapsulate \(\xi ^2\) when it occupies \(A_N^1\).
As in (11), we let \(C_\mathrm {FPP}\) be a constant such that \(\mathcal {B}\left( r\right) \supset [-C_\mathrm {FPP}r, C_\mathrm {FPP}r]^d\) for all \(r>0\), which gives that \(\mathcal {B}\left( \frac{3}{2C_\mathrm {FPP}}\right) \supset [-3/2,3/2]^d\). Since for any point \(w\in \mathbb {R}^d\) we have that \(w+[-1/2,+1/2]^d\) contains a vertex \({\bar{w}} \in \mathbb {Z}^d\), and at least one vertex in \({\bar{w}} + [-1,1]^d\) must have norm \(\Vert \cdot \Vert \) smaller than that of \({\bar{w}}\), we obtain that
In order to show that \(\xi ^1\) occupies \(A_n^1\) for all n, we need to bound the distance between \(A_n^1\) and \(A_{n-1}^1\). Given \(y\in A_{n}^1\), let \({\widehat{y}}\) be the closest vertex of \(\mathbb {Z}^d\) to the point \(\frac{y}{\Vert y\Vert }(1+\delta )^{n-1}\alpha r\). Using the triangle inequality we have
Since the ball in \(\mathbb {R}^d\) centered at the point \(\frac{y}{\Vert y\Vert }(1+\delta )^{n-1}\alpha r\) and of radius \((1+\delta )^{n-1}\alpha r\,d_H(C_{n-1},C_n)\) must contain a point \(w\in \mathbb {R}^d\) with \(\frac{w}{\Vert w\Vert }\in C_{n-1}\) and \(\Vert w\Vert =(1+\delta )^{n-1}\alpha r\), we obtain that
Thus, using that \(\Vert y-{\widehat{y}}\Vert \le \left( (1+\delta )^{n}-(1+\delta )^{n-1}\right) \alpha r+\frac{3}{2 C_\mathrm {FPP}}\), we obtain
where the last step holds by choosing \(c_1\) large enough in the condition of \(\alpha \). This leads us to define
The value \(t_n\) represents the time we will wait so that \(\xi ^1\) grows from \(A^1_{n-1}\) until it contains \(A^1_{n}\). For all \(n\ge 1\), define also the sets
and
where
Note that \(A_n^2 = \mathcal {B}\left( r+\lambda (1+\delta ) T_n\right) \) and the distance between \(A_{n-1}^1\) and \(\partial ^\mathrm {i}{\widehat{A}}_n^1\) is at least \((1+\delta )t_n\); we have chosen to define \(A_n^2\) and \({\widehat{A}}_n^1\) independently of \(t_n\) because later we will apply Proposition 4.1 with a time scaling, which will only cause a change in the definition of \(t_n\) in this proof.
The idea is that after occupying \(A_{n-1}^1\), \(\xi ^1\) will occupy \(A_n^1\) after a time interval of length \(t_n\), and this is achieved only using passage times inside \({\widehat{A}}_{n}^1\). At the same time, for each n, after time \(T_n\), \(\xi ^2\) will be contained inside \(A_n^2\). The crucial part of the construction is that \({\widehat{A}}_n^1 \cap A_n^2=\emptyset \). This means that the passage times inside \({\widehat{A}}_n^1\) that we use to guarantee that \(\xi ^1\) grows from \(A^1_{n-1}\) to \(A^1_{n}\) do not intersect \(\xi ^2\).
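As an aside, the underlying competition mechanism (each vertex is claimed by whichever type's passage times reach it first, with type 2 spreading at rate \(\lambda \)) can be simulated on a finite box with a Dijkstra-style frontier. The sketch below is purely illustrative and is not the construction used in the proof; the grid size, seed positions, and parameter `lam` are arbitrary choices.

```python
import heapq
import random

def compete(n, lam, seeds1, seeds2, rng=random):
    """Two-type first passage percolation on an n x n grid.

    Type 1 crosses each edge after an Exp(1) time, type 2 after an
    Exp(lam) time (so type 2 is slower when lam < 1); each vertex is
    claimed by whichever type reaches it first.  Illustrative sketch
    only, not the paper's construction.
    """
    owner = {}   # vertex -> 1 or 2
    heap = []    # (arrival time, type, vertex)
    for v in seeds1:
        heapq.heappush(heap, (0.0, 1, v))
    for v in seeds2:
        heapq.heappush(heap, (0.0, 2, v))
    while heap:
        t, typ, (x, y) = heapq.heappop(heap)
        if (x, y) in owner:
            continue  # already claimed by the first arrival
        owner[(x, y)] = typ
        rate = 1.0 if typ == 1 else lam
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u = (x + dx, y + dy)
            if 0 <= u[0] < n and 0 <= u[1] < n and u not in owner:
                heapq.heappush(heap, (t + rng.expovariate(rate), typ, u))
    return owner
```

For instance, `compete(20, 0.5, [(0, 0)], [(10, 10)])` grows a rate-1 cluster from a corner against a rate-1/2 cluster from the center; with `lam` well below 1 the type 1 cluster typically surrounds the type 2 cluster, in the spirit of the encapsulation argument above.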
The spread of \(\xi ^2\)
In this part we show that the growth of \(\xi ^2\) is not too fast, so that \(\xi ^2\) is contained inside \(A_n^2\). Recall that, by the conditions in Proposition 4.1, \(\alpha \) is assumed to be large enough; in particular, there exists a large \(c_1\) such that \(\alpha > \left( \frac{1}{\lambda (1-\lambda )}\right) ^{c_1}\).
Lemma A.1
Assume that \(\alpha \) is large enough, as described above. Then there exists a constant \(c>0\) such that, for any \(n\ge 1\), we have
Proof
Let \(\lambda \upsilon \) denote the distribution of \(\lambda \zeta ^2_{x,y}\). By time scaling, we have that
By choosing \(c_1\) large enough in the condition of \(\alpha \), we can guarantee that \(\delta \ge (\lambda T_1)^{-\frac{1}{2d+4}} \ge (\lambda T_n)^{-\frac{1}{2d+4}}\), which allows us to apply Proposition 3.1. Then using that
we obtain
for positive constants \(c',c''\). Since the number of vertices in \(\mathcal {B}\left( r\right) \) is of order \(r^d\), and \(T_n\ge T_1\) is large enough by the condition on \(\alpha \), the lemma follows. \(\square \)
The spread of \(\xi ^1\)
Here we show that the growth of \(\xi ^1\) is fast enough, so that \(\xi ^1\) occupies \(A_n^1\) at time \(T_n\).
Lemma A.2
Assume that \(\alpha \) is large enough, as in the statement of Proposition 4.1. There exists a positive constant c such that, for any \(n\ge 1\), we have
Proof
We start writing
From (65) we have that
Applying Proposition 3.1 and using that the number of vertices in \(A_n^1\) is bounded above by \(c' (1+\delta )^{n-1}\delta \alpha r\), we have
The lemma then follows since \(\alpha > \left( \frac{1}{\lambda (1-\lambda )}\right) ^{c_1}\) for some large enough \(c_1\). \(\square \)
The lemma below gives that it is unlikely that \(\xi ^1\) leaves the set \({\widehat{A}}_n^1\) in the nth step. It may sound a bit counterintuitive that we need to guarantee that \(\xi ^1\) is not too fast, but this lemma is required to ensure that \(\xi ^1\) occupies \(A_n^1\) regardless of the passage times outside \({\widehat{A}}_n^1\).
Lemma A.3
Assume that \(\alpha \) is large enough, as in the statement of Proposition 4.1. There is a positive constant c such that, for any \(n\ge 1\), we have
Proof
Recall that
Then, since the number of vertices in \(A_{n-1}^1\) can be bounded above by \(c' (1+\delta )^{n-2}\delta \alpha r\) for some constant \(c'>0\), applying Proposition 3.1 we obtain
The lemma then follows since \(\alpha > \left( \frac{1}{\lambda (1-\lambda )}\right) ^{c_1}\) for some large enough \(c_1\). \(\square \)
Completing the proof of Proposition 4.1
Proof of Proposition 4.1
For any \(X\subset \mathbb {Z}^d\), let \(\zeta ^1_X\) be a set of passage times on the edges of \(\mathbb {Z}^d\) that are equal to \(\zeta ^1\) for any edge with both endpoints in X, and equal to infinity everywhere else. For each integer n, define the events
We define the event F in the proposition by \(F= \bigcap _{n=1}^N E_n^\text {c}\). We also define \(R=(1+\delta )^{N}\alpha r\) and \(T=T_N\). By Lemmas A.1, A.2 and A.3, we have
where the last inequality follows since \(\lambda t_1 = (1+\delta )^2\lambda \delta \alpha r\). This establishes the bound on the probability appearing in Proposition 4.1.
Since \(E_n^{(1)}\) is measurable with respect to \(A_n^2\) and \(E_n^{(2)}\) is measurable with respect to \({\widehat{A}}_n^1\), we have that F is measurable with respect to the passage times inside
Note that F is increasing with respect to \(\zeta ^2\) and decreasing with respect to \(\zeta ^1_{\mathcal {B}\left( \left( \frac{11-\lambda }{10}\right) ^2 R\right) }\).
To conclude the proof of the proposition, it suffices to show that F implies that \(\xi ^1\) occupies all vertices in \(\bigcup _{n=1}^N A_n^1\), since
We will use induction on n to establish a stronger result by showing that, for each n, given that \(\xi ^1(T_{n-1})\supset A_{n-1}^1\), the event \(E_n^\text {c}\) implies that \(\xi ^1(T_{n})\supset A_{n}^1\). First, for \(n=0\) we have from the initial condition that \(\xi ^1(0) \supset A_0^1\). Now assume that \(\xi ^1(T_{n-1})\supset A_{n-1}^1\). Since \(E_n^{(1)}\) does not hold, we have that
where the second-to-last step follows since \(1+\delta =\frac{11-\lambda }{10}\) and the function \(\left( \frac{11-x}{10}\right) ^{10}x\le 1\) for all \(x\in [0,1]\). The last step follows since \(\alpha \) is large enough. Now we note that \(\widehat{A}_n^1\) does not intersect
Since \(E_n^{(2)}\) does not happen, and \(\xi ^2(T_n)\cap \widehat{A}_n^1=\emptyset \), the passage times inside \({\widehat{A}}_n^1\) guarantee that \(\xi ^1(T_n)\supset A_n^1\), concluding the proof. \(\square \)
Proof of Proposition 4.2
Proof of Proposition 4.2
The proof of Proposition 4.1 proceeds by showing that, for each \(n\ge 1\), given that \(\xi ^1\) occupies \(A_{n-1}^1\), \(\xi ^1\) will occupy \(A_{n}^1\) after time \(t_n\), \(\xi ^2\) will be confined to \(A_n^2\), and these events are measurable with respect to \(\widehat{A}_n^1\cup A_n^2\). Moreover, we have by (70) that \(A_n^2\) does not intersect \({\widehat{A}}_n^1\), which guarantees that for each n the events for \(\xi ^1\) and \(\xi ^2\) at the nth step are independent. In the setting of Proposition 4.2, the main difference is that, in the presence of the sets \(\Pi _i\), some parts of \(A_n^1,A_n^2,{\widehat{A}}_n^1\) may intersect \(\bigcup _i\Pi _i\). Below, when we refer to the properties of \(\Pi _i\), we mean the enumerated properties (P1)–(P3) described for \(\Pi _i\) right before Proposition 4.2.
First we consider the effect of the sets \(\Pi _i\) for the spread of \(\xi ^2\). Note that, for each i, if \(\xi ^2(0)\) does not intersect \(\Pi _i\), then due to properties (P1) and (P3), the spread of \(\xi ^2\) inside \(\Pi _i\) is slower than that given by \(\zeta ^2\), just as in Proposition 4.1. On the other hand, if \(\xi ^2(0)\) intersects some \(\Pi _i\), then \(\xi ^2\) may benefit from the set of vertices in \(\Pi _i\) that is occupied by type 2. This decreases the distance between \(\mathcal {B}\left( r\right) \) and \(A_n^2\) by at most \(\gamma r\). Moreover, to ensure that the event for \(\xi ^2\) at the nth step is measurable with respect to \(A_n^2\), we will consider the event that \(\xi ^2\) is confined to \(A_n^2{\setminus } \partial ^\mathrm {i}_{\gamma r} A_n^2\), where \(\partial ^\mathrm {i}_{\gamma r} A_n^2\) stands for the vertices of \(A_n^2\) within distance \(\gamma r\) from \(\partial ^\mathrm {i}A_n^2\). Therefore, we define
For Proposition 4.1, the probability of the corresponding event is given in Lemma A.1, and follows from inequality (66). Here, (66) translates to
and Lemma A.1 follows by adjusting the constant c.
Now we turn to the effect of the \(\Pi _i\) on the spread of \(\xi ^1\). There are two aspects regarding the spread of \(\xi ^1\): the time to spread from \(A_{n1}^1\) to \(A_n^1\) (handled in Lemma A.2) and the measurability of this event (handled in Lemma A.3). Regarding Lemma A.2, the crucial inequality is (68). But since \(A_{n1}^1\) and \(A_n^1\) may intersect \(\bigcup _i\Pi _i\), to ensure that \(\xi ^1\) can spread in the nth step, we need to change \(E_n^{(2)}\) to
where \(A_n^1(\Pi )\) is the set \(A_n^1\) plus all sets \(\Pi _i\) that intersect \(A_n^1\); that is, \(A_n^1(\Pi )= A_n^1 \cup \left( \bigcup _{i:\Pi _i\cap A_n^1\ne \emptyset }\Pi _i \right) \). Then, using property (P2), (68) translates to
and Lemma A.2 follows by adjusting c.
Regarding Lemma A.3, \({\widehat{A}}_n^1\) will not need to be changed, and (69) is replaced with
where \(\frac{2}{C_\mathrm {FPP}}\) accounts for the fact that the definition of \({\widehat{A}}_n^2\) includes neighborhoods in \(\mathbb {Z}^d\) and \(\mathcal {B}\left( 2/C_\mathrm {FPP}\right) \) contains all vertices in \([-2,2]^d\). Then Lemma A.3 holds as it is. Finally, we just need to ensure that the events \(E_n^{(1)}\) and \(E_n^{(2)}\) are independent. In other words, that \({\widehat{A}}_n^1\) and \(A_n^2\) do not intersect. But this is true since their definition did not change; hence (70) holds as it is. \(\square \)
Appendix: Standard properties of exponential random variables
Here we state some properties of exponential random variables that we will use in the paper.
Lemma B.1
(Random sum) Fix any \(q\in (0,1)\). Let L be a geometric random variable of success probability q. Let \(X_1,X_2,\ldots \) be an i.i.d. sequence of exponential random variables of rate 1. Hence,
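The display stating the conclusion of Lemma B.1 is omitted above; the standard form of this fact is that \(\sum _{i=1}^L X_i\) is exponentially distributed with rate q. As an illustrative aside (not part of the paper), a short Monte Carlo sketch checks this via the mean \(1/q\):

```python
import random

def geometric_exp_sum(q, rng=random):
    """Sample sum_{i=1}^L X_i, where L ~ Geometric(q) (support 1,2,...)
    and the X_i are i.i.d. Exp(1); by Lemma B.1 the result is Exp(q)."""
    total = rng.expovariate(1.0)     # the first summand always occurs
    while rng.random() >= q:         # failure: one more summand
        total += rng.expovariate(1.0)
    return total
```

Averaging many samples of `geometric_exp_sum(q)` gives approximately \(1/q\), the mean of an exponential of rate q.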
Lemma B.2
(Scaling and minimum) Let X be an exponential random variable of rate \(\theta \). Then, for any \(M>0\), we have
Moreover, for integer M, \(\frac{X}{M}\) has the same distribution as the minimum of M independent exponential random variables of rate \(\theta \).
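Both \(X/M\) and the minimum of M independent rate-\(\theta \) exponentials are exponential of rate \(M\theta \), hence have mean \(\frac{1}{M\theta }\). The following sketch (illustrative only, not from the paper) compares the two empirical means:

```python
import random

def check_min_vs_scaled(theta, m, n, rng=random):
    """Return the empirical means of X/m (X ~ Exp(theta)) and of the
    minimum of m i.i.d. Exp(theta), over n samples each; both targets
    equal 1/(m*theta) by Lemma B.2."""
    scaled = sum(rng.expovariate(theta) / m for _ in range(n)) / n
    minima = sum(min(rng.expovariate(theta) for _ in range(m))
                 for _ in range(n)) / n
    return scaled, minima
```

With, say, `theta=2.0` and `m=4`, both empirical means concentrate around \(1/8\).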
In the lemma below we show that a collection of exponential random variables \(Z_1,Z_2,\ldots ,Z_{k}\) can be sampled by first sampling the minimum value among all of them, which is the variable \(Z_I\) whose value is W, and then using the memoryless property of exponential random variables to say that the other ones are equal to W plus an exponential random variable of the same rate.
Lemma B.3
(Decomposition on the minimum) Fix any integer \(k\ge 1\). Let \(X_i\), \(i=1,2,\ldots , k\), be independent exponential random variables of rate \(\theta _i\). Let I be a random variable in \(\{1,2,\ldots ,k\}\) which takes value i with probability \(\frac{\theta _i}{\sum _{j=1}^k\theta _j}\). Let W be an independent, exponential random variable of rate \(\sum _{j=1}^k\theta _j\). Thus, if we set \(Z_i\), \(i=1,2,\ldots ,k\), as
we obtain that the \(Z_i\) are independent exponential random variables of rate \(\theta _i\).
Note that the above lemma can be iterated. That is, after we see that \(Z_I=W\) is the minimum among the \(Z_i\), then the value of \(Z_i\) for \(i\ne I\) can be sampled by first sampling the minimum among the \(X_i\) with \(i\ne I\). Thus we obtain new random variables \(I'\) and \(W'\) so that \(X_{I'}=W'\) and \(Z_{I'}=W+W'\), while the other values of \(Z_i\) with \(i\ne I,I'\) are equal to \(W+W'\) plus an independent exponential random variable of the same rate. Then, after having sampled the values of \(Z_I\) and \(Z_{I'}\), we can iterate the above procedure with the \(Z_i\) that were not yet sampled, for \(k-2\) iterations, until all the \(Z_i\)’s have been obtained.
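This iteration is, in effect, the standard way competing exponential clocks are simulated: repeatedly draw the minimum of the remaining clocks and assign it to an index chosen proportionally to its rate. The sketch below (an illustrative aside, not code from the paper) produces a vector distributed as independent \(\mathrm{Exp}(\theta _i)\) variables by exactly this procedure:

```python
import random

def sample_by_minima(thetas, rng=random):
    """Sample independent Exp(theta_i) variables by iterating the
    decomposition of Lemma B.3: at each round, the minimum of the
    remaining clocks is Exp(sum of remaining rates), and it belongs
    to index i with probability theta_i / (sum of remaining rates)."""
    remaining = list(range(len(thetas)))
    elapsed = 0.0
    z = [None] * len(thetas)
    while remaining:
        total = sum(thetas[i] for i in remaining)
        elapsed += rng.expovariate(total)   # W ~ Exp(sum of rates)
        # choose the index of the minimum, proportionally to its rate
        r = rng.random() * total
        for j, i in enumerate(remaining):
            r -= thetas[i]
            if r <= 0:
                break
        z[i] = elapsed
        remaining.pop(j)
    return z
```

Averaging many samples, the i-th coordinate has mean \(1/\theta _i\), matching an exponential of rate \(\theta _i\).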
Sidoravicius, V., Stauffer, A. Multi-particle diffusion limited aggregation. Invent. math. 218, 491–571 (2019). https://doi.org/10.1007/s00222-019-00890-5