Higher order corrections for anisotropic bootstrap percolation
Abstract
Keywords
Bootstrap percolation · Finite-size effects · Metastability · Sharp threshold
Mathematics Subject Classification
60K35 · 82B43 · 82C43
1 Introduction
1.1 Motivation and statement of the main result
Bootstrap percolation is a general name for the dynamics of monotone, two-state cellular automata on a graph G. Since their invention by Chalupa et al. [20], bootstrap percolation models with different rules and on different graphs have been applied in various contexts, and the mathematical properties of bootstrap percolation are an active area of research at the intersection of probability theory and combinatorics. See for instance [1, 2, 4, 5, 8, 23, 33] and the references therein.
Motivated by applications to statistical (solid-state) physics, such as the Glauber dynamics of the Ising model [26, 36] and kinetically constrained spin models [17], the underlying graph is often taken to be a d-dimensional lattice, and the initial state is usually chosen randomly.
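To fix ideas, these dynamics are easy to simulate. The sketch below (Python) assumes the standard anisotropic rule studied in this paper, namely that the neighbourhood of a site v is \(v + \{(0,\pm 1), (\pm 1, 0), (\pm 2, 0)\}\) and that a healthy site becomes infected once at least three of its neighbours are infected, while infected sites stay infected; the rule is stated here as an assumption from the literature on the anisotropic model.

```python
from collections import deque

# Anisotropic neighbourhood N_(1,2): vertical range 1, horizontal range 2.
NEIGHBOURHOOD = [(0, 1), (0, -1), (1, 0), (-1, 0), (2, 0), (-2, 0)]
THRESHOLD = 3

def closure(initially_infected, domain):
    """Bootstrap closure <S> restricted to `domain`: repeatedly infect every
    site of `domain` that has at least THRESHOLD infected neighbours."""
    domain = set(domain)
    infected = set(initially_infected) & domain
    queue = deque(infected)
    while queue:
        x, y = queue.popleft()
        # The neighbourhood is symmetric, so only the neighbours of a newly
        # infected site can newly cross the threshold.
        for dx, dy in NEIGHBOURHOOD:
            v = (x + dx, y + dy)
            if v in domain and v not in infected:
                count = sum((v[0] + ex, v[1] + ey) in infected
                            for ex, ey in NEIGHBOURHOOD)
                if count >= THRESHOLD:
                    infected.add(v)
                    queue.append(v)
    return infected

# A fully infected 5 x 3 rectangle grows into the adjacent column as soon as
# that column contains a single infected site, but no further on its own.
rect = [(x, y) for x in range(5) for y in range(3)]
domain = [(x, y) for x in range(8) for y in range(3)]
grown = closure(rect + [(5, 1)], domain)   # column 5 fills completely
stuck = closure(rect, domain)              # no growth without outside help
```

The two examples at the end illustrate the anisotropy of the rule: growing sideways by one column requires only a single infected site nearby, which is why horizontal growth is "easy" for this model.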
Although some progress has recently been made in the study of very general cellular automata on lattices [12, 14, 25], attention so far has mainly focused on obtaining a very precise understanding of the metastable transition for specific simple models [4, 8, 9, 18, 23, 30, 33].
The main result of this paper is the following theorem:^{1}
Theorem 1.1
Finally, we remark that much weaker bounds (differing by a large constant factor) have recently been obtained for an extremely general class of two-dimensional models by Bollobás et al. [12], see Sect. 1.3 below. Moreover, stronger bounds (differing by a factor of \(1 + o(1)\)) were proved for a certain subclass of these models (including the two-neighbour model, but not the anisotropic model) by Duminil-Copin and Holroyd [25].
Although various other specific models have been studied (see e.g. [15, 16, 34]), in each case the bounds obtained fell very far short of determining the second term.
1.2 The bootstrap percolation paradox
In [33], Holroyd was the first to determine sharp first-order bounds on \(p_c\) for the standard model, and he observed that they were very far removed from numerical estimates: \(\pi ^2/18 \approx 0.55\), whereas the same constant was numerically estimated to be \(0.245 \pm 0.015\) on the basis of simulations of lattices up to \(L = 28800\) [3]. This phenomenon became known in the literature as the bootstrap percolation paradox, see e.g. [2, 21, 28, 31].
An attempt to explain this phenomenon goes as follows: if the convergence of \(p_c\) to its firstorder asymptotic value is extremely slow, while for any fixed L the transition around \(p_c\) is very sharp, then it may appear that \(p_c\) converges to a fixed value long before it actually does.
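The sharpness of the finite-size transition can be glimpsed even in a toy Monte Carlo experiment. The sketch below (Python) assumes the anisotropic rule with neighbourhood \(\{(0,\pm 1),(\pm 1,0),(\pm 2,0)\}\) and threshold 3, with arbitrarily chosen small values of L, p, and the number of trials, so it is illustrative only.

```python
import random
from collections import deque

NEIGHBOURHOOD = [(0, 1), (0, -1), (1, 0), (-1, 0), (2, 0), (-2, 0)]

def lattice_fills(L, p, rng):
    """Sample an initial infection on [L]^2 with density p and report whether
    the anisotropic bootstrap dynamics eventually infect every site."""
    infected = {(x, y) for x in range(L) for y in range(L) if rng.random() < p}
    queue = deque(infected)
    while queue:
        x, y = queue.popleft()
        for dx, dy in NEIGHBOURHOOD:
            v = (x + dx, y + dy)
            if 0 <= v[0] < L and 0 <= v[1] < L and v not in infected:
                count = sum((v[0] + ex, v[1] + ey) in infected
                            for ex, ey in NEIGHBOURHOOD)
                if count >= 3:
                    infected.add(v)
                    queue.append(v)
    return len(infected) == L * L

rng = random.Random(0)
L, trials = 30, 20
fill_fraction = {p: sum(lattice_fills(L, p, rng) for _ in range(trials)) / trials
                 for p in (0.02, 0.35)}
```

Already at \(L = 30\) the filling probability jumps from essentially 0 to essentially 1 over a short interval of p; locating the midpoint of this jump at moderate L, however, gives a poor prediction of the first-order asymptotics, which is precisely the paradox described above.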
1.3 Universality

\({\mathcal {U}}\) is “supercritical” and has polynomial critical probability.

\({\mathcal {U}}\) is “critical” and has polylogarithmic critical probability.

\({\mathcal {U}}\) is “subcritical” and has critical probability bounded away from zero.
Theorem 1.2
 (a)If \({\mathcal {U}}\) is balanced, then^{5}$$\begin{aligned} p_c\big ( {\mathbb {Z}}_L^2,{\mathcal {U}}\big ) = \Theta \bigg ( \frac{1}{(\log L)^{1/\alpha }} \bigg ). \end{aligned}$$
 (b)If \({\mathcal {U}}\) is unbalanced, then$$\begin{aligned} p_c\big ( {\mathbb {Z}}_L^2,{\mathcal {U}}\big ) = \Theta \bigg ( \frac{(\log \log L)^2}{(\log L)^{1/\alpha }} \bigg ). \end{aligned}$$
Theorem 1.2 thus justifies our view of the anisotropic model as a canonical example of an unbalanced model.
1.4 Internally filling a critical droplet
As usual in (critical) bootstrap percolation, the key step in the proof of Theorem 1.1 will be to obtain very precise bounds on the probability that a “critical droplet” R is internally filled ^{6} (IF), i.e., that \(R \subset \langle {\mathcal {S}}\cap R \rangle \). We will prove the following bounds:
Theorem 1.3
The alert reader may have noticed the following surprising fact: we obtain the first three terms of \(p_c( [L]^2, {\mathcal {N}}_{\scriptscriptstyle (1,2)}, 3 )\) in Theorem 1.1, despite only determining the first two terms of \(\log {\mathbb {P}}_p(R \text { is IF})\) in Theorem 1.3. We will show how to formally deduce Theorem 1.1 from Theorem 1.3 in Sect. 7, but let us begin by giving a brief outline of the argument.
1.5 A generalisation of the anisotropic model
Theorem 1.4
We will not prove Theorem 1.4, since the proof is conceptually the same as that in the case \(b = 2\), but requires several straightforward but lengthy calculations that might obscure the key ideas of the proof. It is, however, not too hard to see where the numerical factors come from:
1.6 Comparison with simulations
1.7 Comparison with the two-neighbour model
Comparing Theorem 1.1 with the analogous result for the two-neighbour model, (1.4), it may seem remarkable how much sharper the former is than the latter. We believe the following heuristic discussion goes some way towards explaining this difference.
Both approximations of \(p_c\) are proved using essentially the same critical droplet heuristic described above. Once a critical droplet has formed, the entire lattice will easily fill up. But filling a droplet-sized area is exponentially unlikely: it is essentially a large deviations event. The theory of large deviations tells us that if a rare event occurs, it will occur in the most probable way that it can. For filling a droplet, this means that one should find an optimal "growth trajectory": a sequence of dimensions through which a very small infected area (a "seed") steadily grows to fill up the entire droplet. For the anisotropic model, in [23], the first and second authors determined this trajectory to be close to \(x = \frac{\mathrm {e}^{3py}}{3p}\), where x and y denote the horizontal and vertical dimensions of the seed as it grows. This approximation was enough to yield the first term of \(p_c\). In the current paper we establish tighter bounds on the optimal trajectory around \(x = \frac{\mathrm {e}^{3py}}{3p}\), allowing us to give the sharper estimate for the probability of filling a droplet in Theorem 1.3. As we showed in Sect. 1.4 above, this correction is enough to obtain the first three terms of \(p_c\) for the anisotropic model.
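The shape of this trajectory can also be recovered from a back-of-the-envelope variational computation (a heuristic only; the rigorous version is the variational principle of Sect. 6, and the costs below are borrowed from Lemmas 2.2 and 2.3): growing vertically by one row at width x succeeds with probability \(\Theta (p^2 x)\), while growing horizontally by one column at height y succeeds with probability \(1 - f(p,y) \approx 1-\mathrm {e}^{-3py}\). The cost of a trajectory \(x(\cdot )\) reaching height Y is then approximately$$\begin{aligned} T(x) = \int _0^Y \Big ( -x'(y) \log \big ( 1-\mathrm {e}^{-3py} \big ) - \log \Theta \big ( p^2 x(y) \big ) \Big ) \, \mathrm {d}y, \end{aligned}$$and the associated Euler–Lagrange equation$$\begin{aligned} \frac{3p \, \mathrm {e}^{-3py}}{1-\mathrm {e}^{-3py}} = \frac{1}{x(y)} \qquad \text {gives} \qquad x(y) = \frac{\mathrm {e}^{3py}-1}{3p} \approx \frac{\mathrm {e}^{3py}}{3p}. \end{aligned}$$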
For the twoneighbor model, however, finding this optimal growth trajectory is not at all the challenge: by symmetry it is trivially \(x=y\). The correction to \(p_c\) that Gravner, Holroyd, and Morris determined in [28, 30, 37], is instead due to the much smaller entropic effect of random fluctuations around this trajectory (see also the introduction of [29] for a more detailed explanation of this effect). We believe that such fluctuations also influence \(p_c\) for the anisotropic model, but that their effect will be much smaller than the improvements that can still be made in controlling the precise shape of the optimal growth trajectory.
1.8 About the proofs
The proof of Theorem 1.1 uses a rigorous version of the iterative determination of \(p_c\) sketched in Sect. 1.4 above, combined with Theorem 1.3 and the classical argument of Aizenman and Lebowitz [4].
The lower bound in Theorem 1.3 is a refinement of the computation in [23].
Most of the work of this paper goes into the proof of the upper bound of Theorem 1.3. Like many recent entries in the bootstrap percolation literature, our proof centres on the "hierarchies" argument of Holroyd [33]. In particular, we sharpen the argument of [23] by incorporating the idea of "good" and "bad" hierarchies from [30], and by using very precise bounds on the horizontal and vertical growth of infected rectangular regions.
The main new contributions of this paper (besides the iterative determination of \(p_c\)) can be found in Sects. 3 and 6.
In Sect. 3, we introduce the notion of spanning time (Definition 3.3), which characterises to a large extent the structure of configurations of vertical growth. We show that if the spanning time is 0, then such structures have a simple description in terms of paths of infected sites, whereas if the spanning time is not 0, then this description can still be given in terms of paths, but these paths now also involve more complex arrangements of infected sites. We call such arrangements infectors (Definition 3.7), and show that they are sufficiently rare that their contribution does not dominate the probability of vertical growth.
In Sect. 6 we generalise the variational principle of Holroyd [33] to a more general class of growth trajectories. This part of the proof is intended to be more widely applicable than the current anisotropic case, and is set up to allow for precise estimates.
1.9 Notation and definitions
A rectangle \([a,b] \times [c,d]\) is the set of sites in \({\mathbb {Z}}^2\) contained in the Euclidean rectangle \([a,b] \times [c,d]\). For a finite set \({\mathcal {Q}}\subset {\mathbb {Z}}^2\), we denote its dimensions by \(({\mathbf {x}}({\mathcal {Q}}), {\mathbf {y}}({\mathcal {Q}}))\), where \({\mathbf {x}}({\mathcal {Q}}) = \max \{a_1 - b_1 +1 \, : \, \{(a_1,a_2), (b_1, b_2)\} \in {\mathcal {Q}}\times {\mathcal {Q}}\}\), and similarly, \({\mathbf {y}}({\mathcal {Q}}) = \max \{a_2 - b_2 +1\, : \, \{(a_1,a_2), (b_1, b_2)\} \in {\mathcal {Q}}\times {\mathcal {Q}}\}\). So in particular, a rectangle \(R = [a,b] \times [c,d]\) has dimensions \(({\mathbf {x}}(R), {\mathbf {y}}(R)) = (|[a,b]\, \cap \, {\mathbb {Z}}|, |[c,d] \,\cap \,{\mathbb {Z}}|)\). Oftentimes, the quantities that we calculate will only depend on the size of R, and be invariant with respect to the position of R. In such cases, when there is no possible confusion, we will write R with \({\mathbf {x}}(R)=x\) and \({\mathbf {y}}(R)=y\) as \([x] \times [y]\). A row of R is a set \(\{(m,n) \in R \,:\, n=n_0\}\) for some fixed \(n_0\). A column is similarly defined as a set \(\{(m,n) \in R \,:\, m =m_0\}\). We sometimes write \([a,b] \times \{c\}\) for the row \(\{ (m,c) \in {\mathbb {Z}}^2 \, :\, m \in [a,b] \cap {\mathbb {Z}}\}\), and use similar notation for columns.
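For concreteness, these dimensions can be computed directly from the definition (a trivial helper, included only to make the convention unambiguous):

```python
def dimensions(Q):
    """Dimensions (x(Q), y(Q)) of a finite nonempty set Q of sites in Z^2:
    the maximal horizontal and vertical extents, each plus one."""
    x_dim = max(a[0] - b[0] for a in Q for b in Q) + 1
    y_dim = max(a[1] - b[1] for a in Q for b in Q) + 1
    return (x_dim, y_dim)
```

So, for example, the rectangle \([0,3] \times [0,2]\), viewed as a set of sites, has dimensions (4, 3), and a single site has dimensions (1, 1).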
Given rectangles \(R \subset R'\) we write \(\left\{ R \Rightarrow R' \right\} \) for the event that the dynamics restricted to \(R'\) eventually infect all sites of \(R'\) if all sites in R are infected, i.e., for the event that \(R' = \langle ({\mathcal {S}}\cap R') \cup R\rangle \).
We will frequently make use of two standard correlation inequalities: The first is the Fortuin–Kasteleyn–Ginibre inequality (FKG inequality), which states that for increasing events A and B, \({\mathbb {P}}_p(A \cap B) \geqslant {\mathbb {P}}_p(A) {\mathbb {P}}_p(B)\). The second is the van den Berg–Kesten inequality (BK inequality), which states that for increasing events A and B, \({\mathbb {P}}_p(A \circ B) \leqslant {\mathbb {P}}_p(A) {\mathbb {P}}_p(B)\), where \(A \circ B\) means that A and B occur disjointly (see [32, Chapter 2] for a more in-depth discussion).
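Both inequalities can be sanity-checked by brute force on a small product space. The sketch below (Python) uses four independent sites and two arbitrarily chosen increasing events, each represented by its minimal generating sets; for increasing events, \(\omega \in A \circ B\) exactly when \(\omega \) contains disjoint generators of A and of B.

```python
from itertools import chain, combinations

N, P = 4, 0.3  # four independent sites, each open with probability P

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def upset(generators):
    """Increasing event generated by the given minimal sets of open sites."""
    return {frozenset(w) for w in subsets(range(N))
            if any(g <= set(w) for g in generators)}

def prob(event):
    return sum(P ** len(w) * (1 - P) ** (N - len(w)) for w in event)

def disjoint_occurrence(gens_a, gens_b):
    """A o B: configurations containing disjoint witnesses for A and for B."""
    return {frozenset(w) for w in subsets(range(N))
            if any(i <= set(w) and j <= set(w) and not (i & j)
                   for i in gens_a for j in gens_b)}

gens_a, gens_b = [{0, 1}, {2}], [{1, 2}, {3}]
A, B = upset(gens_a), upset(gens_b)
AoB = disjoint_occurrence(gens_a, gens_b)

assert AoB <= (A & B)                            # disjoint occurrence is stronger
assert prob(A & B) >= prob(A) * prob(B) - 1e-12  # FKG
assert prob(AoB) <= prob(A) * prob(B) + 1e-12    # BK
```

The events and the value of P here are arbitrary; the point is only that the two inequalities pull in opposite directions around the product \({\mathbb {P}}_p(A){\mathbb {P}}_p(B)\).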
1.10 The structure of this paper
In Sect. 2 we state two key bounds, Lemmas 2.2 and 2.3, giving primarily lower bounds on the probabilities of horizontal and vertical growth of an infected rectangular region, and we use them to prove the lower bound of Theorem 1.3. In Sect. 3 we prove a complementary upper bound on the vertical growth of infected rectangles, Lemma 3.1. In Sect. 4 we prove Lemma 4.1, which combines the upper bounds on horizontal and vertical growth from Lemmas 2.2 and 3.1. This lemma is crucial for the upper bound of Theorem 1.3. We prove the upper bound of Theorem 1.3 in Sect. 5, subject to a variational principle, Lemma 5.9, that we prove in Sect. 6. Finally, in Sect. 7 we use Theorem 1.3 to prove Theorem 1.1.
2 The lower bound of Theorem 1.3
Recall that \(C_1 = \frac{1}{12}\) and \(C_2 = \frac{1}{6} \log \frac{8}{3 \mathrm {e}}\).
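A quick numerical check of these constants (note that \(C_2\) is slightly negative, since \(8 < 3\mathrm {e}\)):

```python
import math

C1 = 1 / 12                           # = 0.08333...
C2 = math.log(8 / (3 * math.e)) / 6   # = (1/6) log(8/(3e)) = -0.0032...

print(C1, C2)
```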
Proposition 2.1
Note that the upper bound on y is different from the bound in Theorem 1.3.
For the proof it suffices to show that there exists a subset of configurations that has the desired probability. We choose a subset of configurations that follow a typical “growth trajectory”: configurations that contain a small area that is locally densely infected (a seed). We bound the probability that such a seed will grow a bit (which is likely), and then a lot more (which is exponentially unlikely), until the infected region reaches a size where the growth is again very likely, because the boundary of the infected region is large and the dynamics depend only on the existence of infected sites on the boundary, not on their number.
To prove this proposition we will need bounds on the probability that a rectangle becomes infected in the presence of a large infected cluster on its boundary. We state two lemmas that achieve this, which are improvements upon [23, Lemmas 2.1 and 2.2].
Lemma 2.2
 (a)when \(p\rightarrow 0\) and \(py\rightarrow \infty \),$$\begin{aligned} f(p,y)=\mathrm {e}^{-3py}+\Theta (\mathrm {e}^{-4py}), \end{aligned}$$
 (b)when \(y \geqslant \frac{2}{p} \log \log \frac{1}{p}\),$$\begin{aligned} f(p,y)=\mathrm {e}^{-3py}\left( 1+\Theta \left( \log ^{-2} (1/p)\right) \right) , \end{aligned}$$
 (c)when \(p \rightarrow 0\), \(y \rightarrow \infty \), and \((1-p)^y \rightarrow 1\),$$\begin{aligned} f(p,y) \geqslant \tfrac{1}{2} p y - 3 p^2 y^2. \end{aligned}$$
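Heuristically, the exponent 3py can be understood as follows (a sketch, assuming the anisotropic neighbourhood \(v + \{(0,\pm 1),(\pm 1,0),(\pm 2,0)\}\) with threshold 3): the column adjacent to a fully infected rectangle of height y becomes fully infected as soon as at least one of the 3y sites in the three columns within horizontal reach is infected, so growth by one column fails only when all of those sites are healthy, an event of probability$$\begin{aligned} (1-p)^{3y} = \mathrm {e}^{3y \log (1-p)} = \mathrm {e}^{-3py(1+O(p))}. \end{aligned}$$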
Proof
Lemma 2.3
 (a)If \(p^2x\) is sufficiently small, then we have, for any rectangle \([x] \times [y]\),$$\begin{aligned} {\mathbb {P}}_p\left( [x] \times [y] \text { is up-trav} \right) \; \geqslant \; \exp \Big ( y \log (8p^2x)\big (1+O(p^2x + p)\big ) \Big ). \end{aligned}$$
 (b)As long as \( \frac{8 p^2 x}{5} \leqslant 1\) we have$$\begin{aligned} {\mathbb {P}}_p([x] \times [y] \text { is up-trav}) \; \geqslant \; \left( \frac{8 p^2 x}{5 \mathrm {e}} \right) ^y. \end{aligned}$$
Proof
Proof of Proposition 2.1
We start by constructing a seed. Let \(r\, {:=} \,\lfloor \frac{2}{p} \log \log \frac{1}{p} \rfloor \) and infect sites (1, 2i) and \((2,2i+1)\) for \(2i\le r\). The probability that a rectangle \([2] \times [r]\) is a seed is \(p^r\). Note that the infected sites internally fill \([2]\times [r]\).
The growth of the seed to a rectangle of arbitrary size can be divided into three stages:
Now, by the FKG inequality, we can multiply the bounds from the three stages (i.e., (2.2), (2.9), and (2.10)) to complete the proof of Proposition 2.1.\(\square \)
3 An upper bound on the probability of up-traversability
The following bound is crucial for the proof of the upper bound of Theorem 1.3. Recall from (1.1) the definition of the bootstrap operator \({\mathcal {B}}\), and recall that \({\mathcal {B}}^{(t)}({\mathcal {S}})\) is the t-th iterate of \({\mathcal {B}}\) with initial set \({\mathcal {S}}\), and that \(\langle {\mathcal {S}}\rangle = \lim _{t \rightarrow \infty } {\mathcal {B}}^{(t)}({\mathcal {S}})\). Recall that a rectangle \(R = [1,x] \times [1,y]\) is said to be up-traversable by a set \({\mathcal {S}}\) if \(R \subset \langle ({\mathcal {S}}\cap R) \cup ([1,x] \times \{0\}) \rangle \), and that we write \({\mathbb {P}}_p\) to indicate that the elements of \({\mathcal {S}}\) are chosen independently at random with probability p.
Lemma 3.1
We will apply this lemma with \(\frac{1}{p} \ll y \ll \frac{1}{p} \log ^6 \frac{1}{p} \leqslant x\) and \(k = \log ^2 \frac{1}{p}\). Note that in this case the upper bound given by the lemma is not much larger than the lower bound given by Lemma 2.3. In particular, for these choices of x, y and k, the bound given by the lemma is of the form \(\big ( ( 8 + o(1)) p^2 x \big )^y\).
Lemma 3.2
Let R be a rectangle with \({\mathbf {x}}(R) \geqslant 2\) and \({\mathbf {y}}(R) \geqslant 1\), and let \({\mathcal {S}}\subset R\). Then R is up-traversable by \({\mathcal {S}}\) if and only if \(\langle {\mathcal {S}}\rangle \) contains a spanning pair for every row of R.
Proof
Suppose that \(R = [a,b] \times [c,d]\) with \(b-a \geqslant 1\) and \(d-c \geqslant 0\). It is easy to see that if \(\langle {\mathcal {S}}\rangle \) contains a spanning pair for every row of R, then R is up-traversable by \({\mathcal {S}}\): if \(\langle {\mathcal {S}}\rangle \) contains a spanning pair for the bottom row of R, then the whole row becomes infected, i.e., \([a,b] \times \{c\} \subset \langle {\mathcal {S}}\cup [a,b] \times \{c\}\rangle \). And given that the bottom row is infected, the row above the bottom row must also become infected, since \(\langle {\mathcal {S}}\rangle \) also contains a spanning pair for it, i.e., \([a,b] \times \{c+1\} \subset \langle {\mathcal {S}}\cup [a,b] \times \{c\}\rangle \). This argument can be repeated for all rows.
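This row-by-row mechanism can be checked mechanically. The sketch below (Python) assumes the neighbourhood \(\{(0,\pm 1),(\pm 1,0),(\pm 2,0)\}\) with threshold 3; since the definition (3.1) of spanning pairs is not restated at this point, the pair used here, two infected sites at horizontal distance 3 in the same row, is simply one configuration that spans its row once the row beneath is fully infected.

```python
NEIGHBOURHOOD = [(0, 1), (0, -1), (1, 0), (-1, 0), (2, 0), (-2, 0)]

def closure(initially_infected, domain):
    """Bootstrap closure restricted to `domain` (threshold 3), computed by
    sweeping until no further site can be infected."""
    domain = set(domain)
    infected = set(initially_infected) & domain
    changed = True
    while changed:
        changed = False
        for v in domain - infected:
            count = sum((v[0] + dx, v[1] + dy) in infected
                        for dx, dy in NEIGHBOURHOOD)
            if count >= 3:
                infected.add(v)
                changed = True
    return infected

# Rectangle [0,7] x [0,1]: bottom row fully infected, plus the pair
# {(2,1), (5,1)} in the top row.  The top row fills completely.
domain = [(x, y) for x in range(8) for y in range(2)]
bottom = [(x, 0) for x in range(8)]
final = closure(bottom + [(2, 1), (5, 1)], domain)
```

Here the pair first infects the sites between its two elements (each such site sees two infected sites in its own row plus one below), after which the infection spreads to the ends of the row.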
We now make another important definition.
Definition 3.3
The central idea in the proof of Lemma 3.1 is to consider the cases \(\tau = 0\) and \(\tau > 0\) separately. When \(\tau = 0\), the structure is significantly simpler than when \(\tau > 0\), which allows for a very sharp estimate. When \(\tau > 0\) more complex structures are possible, but more infected sites are required, and this allows us to use a less precise analysis.
3.1 The case \(\tau = 0\)
Given a rectangle R, let \({\mathcal {F}}_0(R)\) and \({\mathcal {F}}_+(R)\) denote the families of all minimal sets \({\mathcal {A}}\subset R\) such that R is uptraversable by \({\mathcal {A}}\) and \(\tau (R,{\mathcal {A}}) = 0\) and \(\tau (R,{\mathcal {A}})>0\), respectively. Let us write \({\mathcal {U}}_0(R)\) and \({\mathcal {U}}_+(R)\) for the upsets generated by \({\mathcal {F}}_0(R)\) and \({\mathcal {F}}_+(R)\), respectively, i.e., the collections of subsets of R that contain a set \({\mathcal {A}}\in {\mathcal {F}}_0(R)\) or \({\mathcal {A}}\in {\mathcal {F}}_+(R)\), respectively.
The following lemma gives a precise estimate of the probability that a rectangle is uptraversable and \(\tau =0\).
Lemma 3.4
We will prove Lemma 3.4 using the first moment method. To be precise, we will show that the expected number of members of \({\mathcal {F}}_0(R)\) that are contained in \({\mathcal {S}}\) is at most the righthand side of (3.2). This will follow easily from the following lemma.
Lemma 3.5
To count the sets in \({\mathcal {F}}_0(R)\), we will need to understand their structure. We will show that each set \({\mathcal {A}}\in {\mathcal {F}}_0(R)\) can be partitioned into “paths” as follows:
Lemma 3.6
Proof
Since \({\mathcal {A}}\) is a minimal subset of R such that R is up-traversable by \({\mathcal {A}}\), and \(\tau (R,{\mathcal {A}}) = 0\), it follows from Definition 3.3 that \({\mathcal {A}}\) contains a spanning pair for each row of R, and hence (by minimality of \({\mathcal {A}}\)) it follows that \({\mathcal {A}}\) consists exactly of a union of spanning pairs (one pair for each row) and no other sites. Let these pairs be \({\mathcal {P}}_1,\ldots ,{\mathcal {P}}_y\), and define a graph on [y] by placing an edge between i and j if \({\mathcal {P}}_i \cap {\mathcal {P}}_j\) is nonempty. The sets \(A_1,\ldots ,A_r\) are simply (the elements of \({\mathcal {A}}\) corresponding to) the components of this graph.
Let the components of the graph be \(C_1,\ldots ,C_r\), and note first that each component is a path, since a spanning pair for row \([a,b] \times \{c\}\) is contained in \([a,b] \times [c,c+1]\). Moreover, it follows immediately from this simple fact that if \({\mathcal {P}}_i \cap {\mathcal {P}}_j\) is nonempty then \({\mathcal {P}}_i\) and \({\mathcal {P}}_j\) must be spanning pairs for adjacent rows (say, \([a,b] \times \{c\}\) and \([a,b] \times \{c+1\}\)), and that their common element must lie in \([a,b] \times \{c+1\}\).
\(\square \)
Proof of Lemma 3.5
Lemma 3.4 now follows by Markov’s inequality:
Proof of Lemma 3.4
3.2 The case \(\tau > 0\)
In this section we analyse the event \({\mathcal {S}}\cap R \in {\mathcal {U}}_+(R)\). If R is up-traversable by \({\mathcal {S}}\), then let \({\mathcal {A}}\) again denote a subset of \({\mathcal {S}}\) of minimal cardinality such that R is up-traversable by \({\mathcal {A}}\). By Lemma 3.2 above we know that if R is up-traversable by \({\mathcal {A}}\), then there must exist a time t at which there is a spanning pair in \({\mathcal {B}}^{(t)}({\mathcal {A}})\) for each row of R. The following definition isolates the sites that are responsible for the creation of such spanning pairs.
Definition 3.7

there exists a \(t \geqslant 0\) such that \({\mathcal {B}}^{(t)}({\mathcal {M}})\) contains a spanning pair for the row \(\ell \), and

there does not exist a subset \({\mathcal {M}}' \subset {\mathcal {M}}\) and a \(t' \geqslant 0\) such that \({\mathcal {B}}^{(t')}({\mathcal {M}}')\) contains a spanning pair for the row \(\ell \).
Note that spanning pairs are infectors, but that many other configurations are possible: see Fig. 3 for a few examples.
Lemma 3.8
Proof
Recall that for any set \({\mathcal {Q}}\subset {\mathbb {Z}}^2\) we write \({\mathbf {x}}({\mathcal {Q}})\) and \({\mathbf {y}}({\mathcal {Q}})\) for the horizontal and vertical dimensions of that set. We split the event \(\{{\mathcal {S}}\cap R \in {\mathcal {U}}_+(R)\}\) according to whether there exists an infector \({\mathcal {M}}_\ell \) with \({\mathbf {x}}({\mathcal {M}}_\ell ) \geqslant 6k^2\) or not.
Lemma 3.9
Proof
Lemma 3.10
(Small infectors) Every infector that intersects precisely one row is a single spanning pair, and, up to translation, there exist precisely two infectors that are not a single spanning pair and intersect precisely two rows. These two infectors have cardinality 4, and they span both rows they intersect.
Proof
Let \({\mathcal {M}}_j\) be the infector for some row j. Write v for an element of the spanning pair for row j that becomes infected due to the bootstrap dynamics on \({\mathcal {M}}_j\). (It is easy to see that only one element of a spanning pair can arise after time \(t=0\), but we do not use this fact.) Suppose t is the first time such that \({\mathcal {B}}^{(t)}({\mathcal {M}}_j)\) contains a spanning pair. Because \({\mathcal {M}}_j\) is not a spanning pair, \(t \geqslant 1\). Since v becomes infected at time t, it must be the case that \(|{\mathcal {N}}_{\scriptscriptstyle (1,2)}(v) \cap {\mathcal {B}}^{(t-1)} ({\mathcal {M}}_j)| \geqslant 3\). Any configuration of three sites in \({\mathcal {N}}_{\scriptscriptstyle (1,2)}(v)\) contains a spanning pair for the row that v is in, so v cannot be in row j. By the definition of spanning pairs, (3.1), a site can either span the row that it is in, or the row below it, so v is in row \(j+1\). We conclude that there are no infectors that are not a spanning pair that intersect precisely one row.
By the same argument, if \(t \geqslant 2\), then \({\mathcal {M}}_j\) must contain a site in row \(j+2\), so only infectors that intersect two rows can have \(t=1\).
One can easily verify that the only infectors with \(t = 1\) that intersect two rows are translations of the configurations \(\{(0,0), (0,1), (3,1), (4,1)\}\) and \(\{(0,0), (0,1), (-3,1), (-4,1)\}\) (see the configuration in the bottom-left corner of Fig. 3). These infectors both have cardinality 4, and span both rows they intersect. \(\square \)
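The first of these configurations can also be checked mechanically. The sketch below (assuming the neighbourhood \(\{(0,\pm 1),(\pm 1,0),(\pm 2,0)\}\) and threshold 3) applies one step of the bootstrap operator to \(\{(0,0), (0,1), (3,1), (4,1)\}\) and finds that exactly the site (2, 1) becomes infected, creating a new infected site adjacent to (3, 1) in row 1.

```python
NEIGHBOURHOOD = [(0, 1), (0, -1), (1, 0), (-1, 0), (2, 0), (-2, 0)]

def bootstrap_step(infected):
    """One application of the bootstrap operator B on Z^2 (threshold 3)."""
    infected = set(infected)
    # Only sites neighbouring an infected site can become infected.
    candidates = {(x + dx, y + dy)
                  for x, y in infected for dx, dy in NEIGHBOURHOOD}
    new = {v for v in candidates - infected
           if sum((v[0] + dx, v[1] + dy) in infected
                  for dx, dy in NEIGHBOURHOOD) >= 3}
    return infected | new

M = {(0, 0), (0, 1), (3, 1), (4, 1)}
new_sites = bootstrap_step(M) - M   # the site (2, 1) is the only new infection
```

Indeed, the site (2, 1) sees the three infected sites (0, 1), (3, 1), and (4, 1) within its neighbourhood, while no other site of \({\mathbb {Z}}^2\) sees three.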

\(1 = a_1 \leqslant b_1 \leqslant a_2 \leqslant b_2 \leqslant \cdots \leqslant a_r \leqslant b_r =k\), and
 the event$$\begin{aligned} \{[1,x] \times [a_1, b_1] \text { is up-trav by }B_1\} \circ \cdots \circ \{[1,x] \times [a_r, b_r] \text { is up-trav by } B_r\} \end{aligned}$$occurs.
Lemma 3.11
 (a)
For any row \(\ell \in \{1, \dots , y\}\) there exists a unique \(i \in \{1,\dots , r\}\) such that \({\mathcal {M}}_\ell \subseteq B_i\).
 (b)
If \(B_i\) spans rows \(\ell , \dots , \ell +m\), then \(B_i = \cup _{j=\ell }^{\ell +m} {\mathcal {M}}_j\).
 (c)
If \({\mathcal {M}}_j \subseteq B_i\) and \(j < b_i\), then at least one of the following holds: \({\mathcal {M}}_j = B_i\); or there exists a \(j' < j\) such that \({\mathcal {M}}_j \subset {\mathcal {M}}_{j'} \subseteq B_i\); or \({\mathcal {M}}_j \cap {\mathcal {M}}_{j+1} \ne \varnothing \).
 (d)
If \({\mathcal {M}}_j \subseteq B_i\) and \(j = b_i\), then at least one of the following holds: \({\mathcal {M}}_j = B_i\); or there exists a \(j' < j\) such that \({\mathcal {M}}_j \subset {\mathcal {M}}_{j'} \subseteq B_i\); or \({\mathcal {M}}_{j1} \cap {\mathcal {M}}_{j} \ne \varnothing \).
Proof
 (a)
By construction, \({\mathcal {A}}= \cup _{i=1}^r B_i\), and \(B_i \circ B_j\) occurs if \(i \ne j\). By Lemma 3.8, \({\mathcal {A}}= \cup _{\ell =1}^k {\mathcal {M}}_\ell \). Suppose that there exists an \(\ell \) such that \({\mathcal {M}}_\ell \cap B_i\ne \varnothing \) and \({\mathcal {M}}_\ell \cap B_j\ne \varnothing \) for some \(i\ne j\). Without loss of generality, we can further assume that \(a_i\leqslant \ell \leqslant b_i\). Since \({\mathcal {M}}_\ell \) is a minimal set creating a spanning pair for row \(\ell \), and since \({\mathcal {M}}_\ell \cap B_i\) is a strict subset of \({\mathcal {M}}_\ell \) (because the latter intersects \(B_j\), which is disjoint from \(B_i\) by assumption), we deduce that \(\langle {\mathcal {M}}_\ell \cap B_i\rangle \) cannot contain a spanning pair for row \(\ell \). By Lemma 3.2, this means that \([1,x]\times [a_i,b_i]\) is not up-traversable by \(B_i\), which is a contradiction.
 (b)
By Lemma 3.8, \({\mathcal {A}}= \cup _{i=1}^k {\mathcal {M}}_i\). Combined with (a) this gives (b).
 (c)Suppose that \(B_i\) spans rows \(\ell , \dots , \ell +m\) and suppose that there exists a \(j < b_i\) such that neither \({\mathcal {M}}_j = B_i\) nor \({\mathcal {M}}_j \subset {\mathcal {M}}_{j'}\) for some \(j' < j\), and such that \({\mathcal {M}}_{j} \cap {\mathcal {M}}_{j+1} = \varnothing \). Then we can partition$$\begin{aligned} B_i = \left( \bigcup _{s=\ell }^j {\mathcal {M}}_{s} \right) \sqcup \left( \bigcup _{t=j+1}^{\ell +m} {\mathcal {M}}_{t} \right) \,{=:}\, B_{i,1} \, \sqcup \, B_{i,2}. \end{aligned}$$It then follows that the event$$\begin{aligned} \{[1,x] \times [\ell , j] \text { is up-trav by }B_{i,1}\} \circ \{[1,x] \times [j+1, \ell +m] \text { is up-trav by } B_{i,2}\} \end{aligned}$$occurs. This gives a contradiction, since by construction the sets \(B_1, \dots , B_r\) form the maximal partition of \({\mathcal {A}}\) with this property, so such a j does not exist. So we conclude that if \({\mathcal {M}}_j \subset B_i\) but \({\mathcal {M}}_j \ne B_i\) and \({\mathcal {M}}_j \nsubseteq {\mathcal {M}}_{j'}\) for all \(j'<j\), then \({\mathcal {M}}_{j} \cap {\mathcal {M}}_{j+1} \ne \varnothing \).
 (d)
The proof is identical to that of (c), mutatis mutandis.

\({\mathcal {S}}\cap ([1,x] \times [\ell +1, \ell + m]) \in {\mathcal {U}}_+([1,x] \times [\ell +1, \ell + m])\),

the minimal subset \({\mathcal {A}}\) of \({\mathcal {S}}\) such that \([1,x] \times [\ell +1, \ell + m]\) is up-traversable by \({\mathcal {A}}\) cannot be divided into two or more disjointly occurring pieces, i.e., \({\mathcal {A}}= B_1\) in the construction described above.

\(\max _{j=\ell +1}^{\ell +m} {\mathbf {x}}({\mathcal {M}}_j) < 6k^2\).
Lemma 3.12
Proof
There is at least one infected site in row \(\ell +1\), and it can be in any of x positions.
Lemma 3.13
Proof
3.3 The proof of Lemma 3.1
The case \(x < \frac{3 k^2}{p}\) is now easy. Note that if \([1,x] \times [1,y]\) is up-traversable by \({\mathcal {S}}\), then \([1,x+a] \times [1,y]\) is also up-traversable by \({\mathcal {S}}\) for any \(a \geqslant 1\) (i.e., up-traversability is a monotone increasing event in the width of the rectangle). Hence, \({\mathbb {P}}_p([x] \times [y]\) is up-trav) is a monotone increasing function of x. The bound thus follows by choosing \(x = \frac{3k^2}{p}\) and applying the bound for the case \(\frac{3 k^2}{p} \leqslant x \leqslant \frac{1}{p^2}\). \(\square \)
4 The probability of simultaneous horizontal and vertical growth
The lemma below states an upper bound on the probability of an infected rectangle growing both vertically and horizontally, i.e., an upper bound on \({\mathbb {P}}_p(R \Rightarrow R')\) for certain \(R \subset R'\).
Lemma 4.1
The proof uses a strategy similar to that of [23, Proof of Proposition 3.3]. Roughly speaking, we "decorrelate" the horizontal and vertical growth events needed for \(\{R \Rightarrow R'\}\).
Proof
If \(y+t \leqslant \frac{4}{p} \log \log \frac{1}{p}\) and \(x+s > 1/p^2\), then we use the trivial bound \({\mathbb {P}}_p(R \Rightarrow R') \leqslant 1,\) corresponding to \(U^p(R,R')=0\), as required.
If \(y+t \leqslant \frac{4}{p} \log \log \frac{1}{p}\) and \(x+s \leqslant 1/p^2\), then we apply Lemma 3.1 (with \(k = \xi \)), again giving the required bound.
Therefore, we assume henceforth that \(y+t > \frac{4}{p} \log \log \frac{1}{p}\) and \(x+s \leqslant \frac{1}{p^2}\).
Applying the bounds for the two cases to (4.6) completes the proof (using the crude upper bound \(p^{\xi } + 1 \leqslant 2 p^{ \xi }\) for p sufficiently small). \(\square \)
5 The upper bound of Theorem 1.3
Proposition 5.1
5.1 Notation and definitions
Before we proceed with the proof, we must introduce some more notation and a few definitions. Our proof uses hierarchies. The notion of hierarchies is due to Holroyd [33], and it has been common in the bootstrap percolation literature since. Here we use a definition of a hierarchy that is similar to the one in [23]:
Definition 5.2
 (a)
Hierarchy, seed, normal vertex, and splitter: A hierarchy \({\mathcal {H}}\) is a rooted tree with out-degrees at most three^{11} and with each vertex v labelled by a nonempty rectangle \(R_v\) such that \(R_v\) contains all the rectangles that label the descendants of v. If the number of descendants of a vertex is 0, we call the vertex a seed.^{12} If the vertex has one descendant, we call it a normal vertex, and we write \(u\mapsto v\) to indicate that u is a normal vertex with (unique) descendant v. If the vertex has two or more descendants, we call it a splitter vertex. We write \(N({\mathcal {H}})\) for the number of vertices in the tree \({\mathcal {H}}\).
 (b)Precision: A hierarchy of precision Z (with \(Z \geqslant 1\)) is a hierarchy that satisfies the following conditions:
 (1)
If w is a seed, then \({\mathbf {x}}(R_w) \geqslant 2\) and \({\mathbf {y}}(R_w)<2Z\), while if u is a normal vertex or a splitter, then \({\mathbf {y}}(R_u)\geqslant 2Z\).
 (2)
If u is a normal vertex with descendant v, then \({\mathbf {y}}(R_u)-{\mathbf {y}}(R_v) \leqslant 2Z\).
 (3)
If u is a normal vertex with descendant v and v is either a seed or a normal vertex, then \({\mathbf {y}}(R_u)-{\mathbf {y}}(R_v) >Z\).
 (4)
If u is a splitter with descendants \(v_1,\dots ,v_i\) and \(i \in \{2,3\}\), then there exists \(j\in \{1,\dots ,i\}\) such that \({\mathbf {y}}(R_{u})-{\mathbf {y}}(R_{v_j})> Z.\)
 (c)Presence: Given a set of infected sites \({\mathcal {S}}\) we say that a hierarchy \({\mathcal {H}}\) is present in \({\mathcal {S}}\) if all of the following events occur disjointly:
 (1)
For each seed w, \(R_w = \langle R_w \cap {\mathcal {S}}\rangle \) (i.e., \(R_w\) is internally filled by \({\mathcal {S}}\)).
 (2)
For each normal u and every v such that \(u \mapsto v\), \(R_u = \langle (R_v \cup {\mathcal {S}}) \cap R_u \rangle \) (i.e., the event \(\{R_v \Rightarrow R_u\}\) occurs on \({\mathcal {S}}\)).
 (d)
Goodness: Similar to [30], we say that a seed w is large if \(Z/3 \leqslant {\mathbf {y}}(R_w) \leqslant Z\). We call a hierarchy good if it has at most \(\log ^{11} \frac{1}{p}\) large seeds, and we call it bad otherwise.
5.2 Outline of the proof of Proposition 5.1
In this section we give the proof of Proposition 5.1 subject to Lemma 5.9 below. We prove Lemma 5.9 in Sect. 6.
Lemma 5.3
Let R be a rectangle with \({\mathbf {x}}(R) \geqslant 2\) and let \(Z \geqslant 3\). If R is internally filled, then there exists a hierarchy \({\mathcal {H}}_{Z,R} \in {\mathbb {H}}_{Z,R}\) that is present, i.e., \({\mathcal {X}}(R; {\mathbb {H}}_{Z,R})\) occurs.
The proof of this lemma is the same as the proof of [23, Proposition 3.8], so we do not repeat it here. (But note that it does not matter that our definition of hierarchies uses "internally filled" rather than "\(k\)-occurs".)
Lemma 5.4
Proof
Lemma 5.5
Proof
The following lemma is used to bound the product over the seeds:
Lemma 5.6
Proof
Any seed of a hierarchy must have dimensions at least (2, 1) by definition, so an iterated application of (5.5) completes the proof.\(\square \)
To bound the second product of (5.4), we use the following lemma:
Lemma 5.7

\({\hat{R}}_0 = {{\tilde{R}}}_{N_{\mathrm{seed}}}\) (with \({{\tilde{R}}}_{N_{\mathrm{seed}}}\) as defined in Lemma 5.6 above),

\({\hat{R}}_{{\hat{N}}}\) has dimensions larger than R,

\({\mathbf {y}}({\hat{R}}_{n+1})-{\mathbf {y}}({\hat{R}}_n)\leqslant \frac{1}{p} \log ^{8} \frac{1}{p}\) for every \(0\leqslant n\leqslant {\hat{N}}-1\),
 for p sufficiently small,$$\begin{aligned} \prod _{v\mapsto w}{\mathbb {P}}_p(R_w \Rightarrow R_v)\leqslant 2p^{\xi N_{\mathrm{splitter}}}\prod _{n=0}^{{\hat{N}}-1}\exp \left( U^p({\hat{R}}_n, {\hat{R}}_{n+1})\right) . \end{aligned}$$
The proof of this lemma goes by induction, using Lemma 4.1, and it is essentially the same as the proof of [23, Lemma 3.11], so we omit it here.
Lemma 5.8
Proof
The final ingredient of the proof is the following lemma:
Lemma 5.9

\(R_{0} = [1]\times [1]\),

\(y_1 \leqslant \frac{2}{p} \log ^{8} \frac{1}{p}\) and \(\frac{1}{3p} \log \frac{1}{p} \leqslant y_N \leqslant \frac{1}{p} \log \frac{1}{p}\),

\(x_1 \leqslant \frac{1}{3p} \log ^{12} \frac{1}{p}\) and \(x_N \geqslant \frac{1}{p^{2}}\).
The proof involves a longer computation, so we defer it to Sect. 6.
Proof of Proposition 5.1 subject to Lemma 5.9
It remains to prove Lemma 5.9. We do this in the upcoming section.
6 Variational principles: proof of Lemma 5.9
To prove Lemma 5.9 we will start by setting up some variational principles, similar to [33, Sect. 6]. We start with a few general lemmas.
Lemma 6.1
If \({\underline{a}} \leqslant {\underline{b}} \leqslant {\underline{c}}\), then \(W_{f,g} ({\underline{a}}, {\underline{b}}) + W_{f,g}({\underline{b}}, {\underline{c}}) \geqslant W_{f,g} ({\underline{a}}, {\underline{c}})\).
The proof is easy (see [33, Sect. 6]).
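For completeness, here is the one-line concatenation argument (the notation \(\gamma _{ab}\), \(\oplus \) and the error term \(\varepsilon \) are ours, not from [33]): if \(\gamma _{ab}\) and \(\gamma _{bc}\) are paths attaining the two infima on the left-hand side up to an error \(\varepsilon \), then their concatenation \(\gamma _{ab} \oplus \gamma _{bc}\) is an admissible path from \({\underline{a}}\) to \({\underline{c}}\), so that

```latex
W_{f,g}({\underline{a}}, {\underline{c}})
  \;\leqslant\; w_{f,g}\bigl(\gamma_{ab} \oplus \gamma_{bc}\bigr)
  \;=\; w_{f,g}(\gamma_{ab}) + w_{f,g}(\gamma_{bc})
  \;\leqslant\; W_{f,g}({\underline{a}}, {\underline{b}})
      + W_{f,g}({\underline{b}}, {\underline{c}}) + 2\varepsilon,
```

and letting \(\varepsilon \rightarrow 0\) gives the claim.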
For sets \(A, B \subseteq {\mathbb {R}}_+^2\) we say that A lies Northwest of B and we write \(A \succcurlyeq B\) if for any \({\underline{a}} \in A\) and any \({\underline{b}} \in B\) that satisfy \(a_1 + a_2 = b_1 + b_2\) we have \(a_2 \geqslant b_2\).
Lemma 6.2
If \(\gamma _1\) and \(\gamma _2\) are paths from \({\underline{a}}\) to \({\underline{b}}\), and we have either \(\gamma _1 \succcurlyeq \gamma _2 \succcurlyeq \Delta _{f,g}\) or \(\Delta _{f,g} \succcurlyeq \gamma _2 \succcurlyeq \gamma _1\), then \(w_{f,g} (\gamma _1) \geqslant w_{f,g}(\gamma _2)\).
Proof
By the same reasoning we have \(w_{f,g}(\gamma _1)  w_{f,g}(\gamma _2) \geqslant 0\) when \(\Delta _{f,g} \succcurlyeq \gamma _2 \succcurlyeq ~\gamma _1\).
\(\square \)
Lemma 6.3
For \({\underline{a}}, {\underline{b}} \in \Delta _{f,g}\) with \({\underline{a}} \leqslant {\underline{b}}\), let \(\gamma _0 \,{:=}\,\Delta _{f,g} \cap ([a_1, b_1] \times [a_2, b_2])\); then \(W_{f,g}({\underline{a}}, {\underline{b}}) = w_{f,g}(\gamma _0)\).
Proof
Suppose, for the sake of contradiction, that some \(\gamma _1 \ne \gamma _0\) is a minimiser of \(W_{f,g}({\underline{a}}, {\underline{b}})\) while \(\gamma _0\) is not. Then \(\gamma _1\) must intersect \(\gamma _0\) in at least two points (counting \({\underline{a}}\) and \({\underline{b}}\) as intersection points as well). So we can find a set of disjoint curves \(\{\eta _i^1\}\) with \(\eta _i^1 \subset \gamma _1\) and a set of disjoint curves \(\{\eta _i^0\}\) with \(\eta _i^0 \subset \gamma _0\) so that \(\gamma _1 {\setminus } \cup _i \eta _i^1 = \gamma _0 {\setminus } \cup _i \eta _i^0\) and \(\eta _i^1 \succcurlyeq \eta _i^0 \succcurlyeq \Delta _{f,g}\) or \(\Delta _{f,g} \succcurlyeq \eta _i^0 \succcurlyeq \eta _i^1\) for each i, and so that \(\eta _i^0\) and \(\eta _i^1\) have the same endpoints. By Lemma 6.2, replacing the curve \(\eta _i^1\) by \(\eta _i^0\) in \(\gamma _1\) does not increase the value of the line integral. Repeating this procedure for each such pair of curves, we end up replacing the minimiser \(\gamma _1\) by \(\gamma _0\) without increasing the value of the integral, contradicting the assumption that \(\gamma _0\) was not a minimiser. \(\square \)
Given a set of points \(\{{\underline{a}}_{\scriptscriptstyle (i)}\}\), with \({\underline{a}}_{\scriptscriptstyle (i)} \in {\mathbb {R}}_+^2\), we write \({\underline{a}}_{\scriptscriptstyle (1)} \rightarrow {\underline{a}}_{\scriptscriptstyle (2)} \rightarrow \cdots \rightarrow {\underline{a}}_{\scriptscriptstyle (n)}\) for the path that linearly interpolates between successive points \({\underline{a}}_{\scriptscriptstyle (i)}\) and \({\underline{a}}_{\scriptscriptstyle (i+1)}\). Given a path \(\gamma \) and two points \({\underline{a}}, {\underline{b}} \in \gamma \), we write \({\underline{a}} {\mathop {\rightarrow }\limits ^{\gamma }} {\underline{b}}\) for the part of \(\gamma \) between \({\underline{a}}\) and \({\underline{b}}\).
Lemma 6.4
Proof
This follows directly from the definition of \(W_{f,g}\) and the assumptions on f and g.\(\square \)
Lemma 6.5
Proof
By Lemma 6.1, the righthand side is an upper bound on \(W_{\psi ,\phi }({\underline{a}}, {\underline{b}})\). It remains to prove that it is also a lower bound.
 (a) \(\gamma \cap \Delta _{\psi ,\phi } \ne \varnothing ,\) or
 (b) \(\gamma \cap \Delta _{\psi ,\phi } = \varnothing .\)
The following lemma now states the crucial bound:
Lemma 6.6
Proof
Proof of Lemma 5.9
7 The critical probability: proof of Theorem 1.1
Footnotes
 1. Throughout this paper we will use the standard Landau order notation: either for all x sufficiently large or sufficiently small, depending on the context,

\(f(x) = O(g(x))\) if there exists \(C>0\) such that \(f(x) \leqslant C g(x)\),

\(f(x) = \Omega (g(x))\) if there exists \(c>0\) such that \(f(x) \geqslant c g(x)\),

\(f(x) = \Theta (g(x))\) if \(f(x) =O(g(x))\) and \(f(x) = \Omega (g(x))\),

\(f(x) = o(g(x))\) if \( f(x) / g(x) \rightarrow 0\).
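As a simple illustration of these definitions (an example of ours, not from the paper), take \(f(x) = 3x^2 + x\) and \(g(x) = x^2\); then, as \(x \rightarrow \infty \),

```latex
3x^2 + x = O(x^2), \qquad
3x^2 + x = \Omega(x^2), \qquad
3x^2 + x = \Theta(x^2), \qquad
x = o(x^2).
```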

 2.
The \(\varepsilon \)-window denotes the difference between the value of \(p_{\varepsilon }\) where \([L]^2\) is internally filled with probability \(\varepsilon \), and the value \(p_{1-\varepsilon }\) where this probability equals \(1-\varepsilon \). In other words, the \(\varepsilon \)-window tells us how sharp the metastable transition is.
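In symbols (in notation of ours, not used elsewhere in the paper), writing \(p_{\delta }(L)\) for the value of p at which the filling probability equals \(\delta \),

```latex
p_{\delta}(L) \,:=\, \inf\bigl\{\, p :
  {\mathbb{P}}_p\bigl([L]^2 \text{ is internally filled}\bigr) \geqslant \delta \,\bigr\},
\qquad
\varepsilon\text{-window} \,=\, p_{1-\varepsilon}(L) - p_{\varepsilon}(L),
```

which is non-negative since the filling probability is increasing in p.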
 3.
The partition is as follows: we say a direction \(u \in S^1\) is stable if \({\mathbb {H}}_{u}\), the discrete half-plane whose boundary is orthogonal to u, satisfies \(\langle {\mathbb {H}}_{u} \rangle = {\mathbb {H}}_{u}\). A family is supercritical if there exists an open semicircle in \(S^1\) containing no stable direction, and it is subcritical if every open semicircle contains infinitely many stable directions. It is critical otherwise.
 4.
In other words, in a balanced model the critical droplet is a polygon, all of whose sides have the same length up to a constant factor. For the precise definition, which is somewhat more technical, see [12].
 5.
Here \({\mathbb {Z}}_L^2\) denotes the discrete two-dimensional \(L~\times ~L\) torus, and \(p_c\big ( {\mathbb {Z}}_L^2,{\mathcal {U}}\big )\) is defined as in (1.2). We consider the torus since in general undesirable complications may arise due to boundary effects or strongly asymmetrical growth.
 6.
This notion is often referred to as “internally spanned” (especially in the older literature).
 7.
The value of \(\alpha \) follows from [12, Definition 1.2]. Furthermore, if \(r \leqslant b\) then the model is supercritical, so \(p_c\big ( {\mathbb {Z}}_L^2, {\mathcal {N}}_{\scriptscriptstyle (a,b)},r) \leqslant L^{-c}\) for some \(c > 0\), and if \(r > a + b\) then the model is subcritical, so \(p_c\big ( {\mathbb {Z}}_L^2, {\mathcal {N}}_{\scriptscriptstyle (a,b)},r) > c'\) for some \(c' > 0\).
 8.
 9.
Note that in the proof of [23, Lemma 2.1] there are a number of (unimportant) sign errors.
 10.
For two positive sequences \(a_n\) and \(b_n\) we write \(a_n \gg b_n\) when \(a_n/b_n \rightarrow \infty \) and \(a_n \ll b_n\) when \(a_n/b_n \rightarrow 0\).
 11.
In the original construction of a hierarchy by Holroyd [33] for the standard model, hierarchies have outdegree at most two. The fact that we need outdegree three corresponds to the fact that the anisotropic model requires three infected sites in a neighbourhood. As a result, it is possible that a rectangle is internally filled by a set of three but not two disjoint internally filled smaller rectangles. Our definition of hierarchies reflects this. See also [23].
 12.
Note that although similar, this definition of a seed is different from the one used in the previous section.
 13.
This follows if we stop the algorithm in the proof of [23, Lemma 3.7] when a set \({\mathcal {S}}\) such that \(\langle {\mathcal {S}}\rangle \subset {\mathcal {R}}\) is first constructed, and by observing that if \([L]^2\) is internally filled, then we must construct such a set \({\mathcal {S}}\).
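As an illustration of the dynamics discussed in the footnotes above, the bootstrap closure \(\langle {\mathcal {S}}\rangle \) on the torus \({\mathbb {Z}}_L^2\) can be computed by brute force. The sketch below uses the anisotropic rule of [23] (a site becomes infected once at least \(r = 3\) of the sites \((0,\pm 1), (\pm 1,0), (\pm 2,0)\) relative to it are infected); it is a naive small-L illustration, not part of the proofs.

```python
from itertools import product

# Anisotropic neighbourhood and threshold of the model studied in [23]:
# a healthy site becomes infected once at least r = 3 of the sites
# (0, +-1), (+-1, 0), (+-2, 0) relative to it are infected.
NEIGHBOURHOOD = [(0, 1), (0, -1), (1, 0), (-1, 0), (2, 0), (-2, 0)]
THRESHOLD = 3

def closure(L, initially_infected):
    """Return <S>, the bootstrap closure of S on the L x L torus Z_L^2."""
    infected = set(initially_infected)
    changed = True
    while changed:  # sweep until no site changes state
        changed = False
        for x, y in product(range(L), repeat=2):
            if (x, y) in infected:
                continue
            count = sum(((x + dx) % L, (y + dy) % L) in infected
                        for dx, dy in NEIGHBOURHOOD)
            if count >= THRESHOLD:
                infected.add((x, y))
                changed = True
    return infected
```

For example, a \(2 \times 3\) rectangle of infected sites together with a single extra "seed" in the adjacent column fills that column and then stops: vertical growth within a seeded column is easy, while each further column of horizontal growth requires a fresh seed, which is the anisotropy exploited throughout the paper.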
Notes
Acknowledgements
The authors would like to thank Robert Morris for his involvement in the earlier stages of the project, and for the many crucial insights he provided. We thank the anonymous referee for their careful reading and comments. The third author would like to thank Robert Fitzner for useful discussions about computer simulations. The first author was supported by the IDEX Chair funded by ParisSaclay and the NCCR SwissMap funded by the Swiss NSF. The third author was supported by the Netherlands Organisation for Scientific Research (NWO) through Gravitation—Grant networks024.002.003.
References
 1. Adler, J.: Bootstrap percolation. Physica A 171(3), 453–470 (1991)
 2. Adler, J., Lev, U.: Bootstrap percolation: visualizations and applications. Braz. J. Phys. 33(3), 641–644 (2003)
 3. Adler, J., Stauffer, D., Aharony, A.: Comparison of bootstrap percolation models. J. Phys. A Math. Gen. 22(7), L297 (1989)
 4. Aizenman, M., Lebowitz, J.L.: Metastability effects in bootstrap percolation. J. Phys. A 21(19), 3801–3813 (1988)
 5. Amini, H.: Bootstrap percolation in living neural networks. J. Stat. Phys. 141(3), 459–475 (2010)
 6. Balister, P., Bollobás, B., Przykucki, M., Smith, P.: Subcritical \({\cal{U}}\)-bootstrap percolation models have non-trivial phase transitions. Trans. Am. Math. Soc. 368(10), 7385–7411 (2016)
 7. Balogh, J., Bollobás, B.: Sharp thresholds in bootstrap percolation. Physica A 326(3), 305–312 (2003)
 8. Balogh, J., Bollobás, B., Duminil-Copin, H., Morris, R.: The sharp threshold for bootstrap percolation in all dimensions. Trans. Am. Math. Soc. 364(5), 2667–2701 (2012)
 9. Balogh, J., Bollobás, B., Morris, R.: Bootstrap percolation in three dimensions. Ann. Probab. 37(4), 1329–1380 (2009)
 10. Balogh, J., Bollobás, B., Morris, R.: Bootstrap percolation in high dimensions. Comb. Probab. Comput. 19(5–6), 643–692 (2010)
 11. Boerma-Klooster, S.: A sharp threshold for an anisotropic bootstrap percolation model. Bachelor thesis, University of Groningen (2011)
 12. Bollobás, B., Duminil-Copin, H., Morris, R., Smith, P.: Universality of two-dimensional critical cellular automata. To appear in Proc. Lond. Math. Soc. Preprint arXiv:1406.6680 (2014)
 13. Bollobás, B., Duminil-Copin, H., Morris, R., Smith, P.: The sharp threshold for the Duarte model. To appear in Ann. Probab. Preprint arXiv:1603.05237 (2016)
 14. Bollobás, B., Smith, P., Uzzell, A.J.: Monotone cellular automata in a random environment. Comb. Probab. Comput. 24, 687–722 (2015)
 15. Bringmann, K., Mahlburg, K.: Improved bounds on metastability thresholds and probabilities for generalized bootstrap percolation. Trans. Am. Math. Soc. 364(7), 3829–3859 (2012)
 16. Bringmann, K., Mahlburg, K., Mellit, A.: Convolution bootstrap percolation models, Markov-type stochastic processes, and mock theta functions. Int. Math. Res. Not. IMRN 5, 971–1013 (2013)
 17. Cancrini, N., Martinelli, F., Roberto, C., Toninelli, C.: Kinetically constrained spin models. Probab. Theory Relat. Fields 140(3–4), 459–504 (2008)
 18. Cerf, R., Cirillo, E.N.M.: Finite size scaling in three-dimensional bootstrap percolation. Ann. Probab. 27(4), 1837–1850 (1999)
 19. Cerf, R., Manzo, F.: The threshold regime of finite volume bootstrap percolation. Stoch. Process. Appl. 101(1), 69–82 (2002)
 20. Chalupa, J., Leath, P.L., Reich, G.R.: Bootstrap percolation on a Bethe lattice. J. Phys. C 12(1), L31 (1979)
 21. De Gregorio, P., Lawlor, A., Bradley, P., Dawson, K.A.: Clarification of the bootstrap percolation paradox. Phys. Rev. Lett. 93(2), 025501 (2004)
 22. Duarte, J.: Simulation of a cellular automat with an oriented bootstrap rule. Physica A 157(3), 1075–1079 (1989)
 23. Duminil-Copin, H., van Enter, A.C.D.: Sharp metastability threshold for an anisotropic bootstrap percolation model. Ann. Probab. 41(3A), 1218–1242 (2013)
 24. Duminil-Copin, H., van Enter, A.C.D.: Erratum to “Sharp metastability threshold for an anisotropic bootstrap percolation model”. Ann. Probab. 44(2), 1599 (2016)
 25. Duminil-Copin, H., Holroyd, A.E.: Finite volume bootstrap percolation with threshold rules on \({\mathbb{Z}}^2\). Preprint at http://www.ihes.fr/~duminil/publi.html (2012)
 26. Fontes, L.R., Schonmann, R.H., Sidoravicius, V.: Stretched exponential fixation in stochastic Ising models at zero temperature. Commun. Math. Phys. 228(3), 495–518 (2002)
 27. Gravner, J., Griffeath, D.: First passage times for threshold growth dynamics on \({\mathbb{Z}}^2\). Ann. Probab. 24(4), 1752–1778 (1996)
 28. Gravner, J., Holroyd, A.E.: Slow convergence in bootstrap percolation. Ann. Appl. Probab. 18(3), 909–928 (2008)
 29. Gravner, J., Holroyd, A.E.: Local bootstrap percolation. Electron. J. Probab. 14(14), 385–399 (2009)
 30. Gravner, J., Holroyd, A.E., Morris, R.: A sharper threshold for bootstrap percolation in two dimensions. Probab. Theory Relat. Fields 153(1–2), 1–23 (2012)
 31. Gray, L.: A mathematician looks at Wolfram’s new kind of science. Not. AMS 50, 200–211 (2003)
 32. Grimmett, G.: Percolation, 2nd edn. Springer, Berlin (1999)
 33. Holroyd, A.E.: Sharp metastability threshold for two-dimensional bootstrap percolation. Probab. Theory Relat. Fields 125(2), 195–224 (2003)
 34. Holroyd, A.E., Liggett, T.M., Romik, D.: Integrals, partitions, and cellular automata. Trans. Am. Math. Soc. 356(8), 3349–3368 (2004)
 35. Janson, S.: Poisson approximation for large deviations. Random Struct. Algorithms 1(2), 221–229 (1990)
 36. Morris, R.: Zero-temperature Glauber dynamics on \({\mathbb{Z}}^d\). Probab. Theory Relat. Fields 149(3–4), 417–434 (2011)
 37. Morris, R.: The second term for bootstrap percolation in two dimensions. Manuscript in preparation. Available at http://w3.impa.br/~rob/ (2014)
 38. Mountford, T.S.: Critical length for semi-oriented bootstrap percolation. Stoch. Process. Appl. 56(2), 185–205 (1995)
 39. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill, New York (1987)
 40. Schonmann, R.H.: Critical points of two-dimensional bootstrap percolation-like cellular automata. J. Stat. Phys. 58(5–6), 1239–1244 (1990)
 41. van Enter, A.C.D., Hulshof, T.: Finite-size effects for anisotropic bootstrap percolation: logarithmic corrections. J. Stat. Phys. 128(6), 1383–1389 (2007)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.