In this section we prove Theorems 1.3, 1.6 and Corollary 1.7. From now on \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\) and \(\frac{1}{\mu } >(\log n)^{d+2}\).
Good environments and growth of the evolving set
The first step is to obtain the growth of the Doob transform of the evolving set at excellent times. We will use the following theorem of Pete [10], which shows that the isoperimetric profile of the giant cluster essentially coincides with that of the original lattice.
For a subset \(S\subseteq {\mathbb {Z}}_n^d\) we write \(S\subseteq {\mathcal {G}}\) to denote \(S\subseteq V({\mathcal {G}})\) and we also write \(|{\mathcal {G}}| = |V({\mathcal {G}})|\).
Theorem 6.1
[10, Corollary 1.4] For all \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\), \(\delta \in (0,1)\) and \(c'>0\) there exist \(c>0\) and \(\alpha >0\) so that for all n sufficiently large
$$\begin{aligned}&{\mathbb {P}}\big (\forall \, S\subseteq {\mathcal {G}}: \, S \text { connected and } c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|, \\&\quad \text { we have } |\partial _{{\mathcal {G}}} S|\ge \alpha |S|^{1-\frac{1}{d}}\big ) \ge 1-\frac{1}{n^{c'}}. \end{aligned}$$
Remark 6.2
Pete [10] only states that the probability appearing above tends to 1 as \(n\rightarrow \infty \), but a close inspection of the proof actually gives the polynomial decay. Mathieu and Remy [6] have obtained similar results.
Corollary 6.3
For all \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\), \(c'>0\) and \(\delta \in (0,1)\) there exist \(c>0\) and \(\alpha >0\) so that for all n sufficiently large
$$\begin{aligned}&{\mathbb {P}}\left( \forall \, S\subseteq {\mathcal {G}}: c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|, \text { we have } |\partial _{{\mathcal {G}}} S|\ge \frac{\alpha |S|^{1-\frac{1}{d}}}{\log n}\right) \\&\quad \ge 1-\frac{1}{n^{c'}}. \end{aligned}$$
Proof
We only need to prove the statement for all S that are disconnected, since the other case is covered by Theorem 6.1. Let A be the event appearing in the probability of Theorem 6.1.
Let S be a disconnected set satisfying \(S\subseteq {\mathcal {G}}\) and \(c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|\). Let \(S=S_1\cup \cdots \cup S_k\) be the decomposition of S into its connected components. Then we claim that on the event A we have for all \(i\le k\)
$$\begin{aligned} |\partial _{{\mathcal {G}}} S_i| \ge \alpha \frac{|S_i|^{1-\frac{1}{d}}}{\log n}. \end{aligned}$$
Indeed, there are two cases to consider: (i) \(|S_i|\ge c(\log n)^{d/(d-1)}\), in which case the inequality follows from the definition of the event A; (ii) \(|S_i|<c(\log n)^{d/(d-1)}\), in which case \(|S_i|^{1-\frac{1}{d}}/\log n < c^{1-\frac{1}{d}}\), so the inequality is trivially true after taking \(\alpha \) in Theorem 6.1 smaller if necessary, since the boundary contains at least one vertex. Therefore we deduce,
$$\begin{aligned} |\partial _{{\mathcal {G}}} S| =\sum _{i=1}^{k} |\partial _{{\mathcal {G}}} S_i| \ge \alpha \sum _{i=1}^{k} \frac{|S_i|^{1-\frac{1}{d}}}{\log n} \ge \alpha \frac{\left( \sum _{i=1}^{k}|S_i|\right) ^{1-\frac{1}{d}}}{\log n} = \alpha \frac{|S|^{1-\frac{1}{d}}}{\log n} \end{aligned}$$
and this completes the proof. \(\square \)
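The middle inequality in the last display uses the subadditivity of \(x\mapsto x^{1-\frac{1}{d}}\) on \([0,\infty )\), which can be verified in one line: for \(a,b>0\) (the case where one of them vanishes being trivial),
$$\begin{aligned} (a+b)^{1-\frac{1}{d}} = \frac{a}{(a+b)^{1/d}} + \frac{b}{(a+b)^{1/d}} \le \frac{a}{a^{1/d}} + \frac{b}{b^{1/d}} = a^{1-\frac{1}{d}} + b^{1-\frac{1}{d}}, \end{aligned}$$
and iterating over the components gives \(\sum _{i=1}^{k}|S_i|^{1-\frac{1}{d}}\ge \big ( \sum _{i=1}^{k}|S_i|\big ) ^{1-\frac{1}{d}}\).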
Recall that for a fixed environment \(\eta \) we write \((S_t)\) for the Doob transform of the evolving set process associated to X, that \(\tau _1, \tau _2, \ldots \) are the excellent times as in Definition 5.1, and that we set \(\tau _0=0\).
Definition 6.4
Let \(c_1,c_2\) be two positive constants and \(\delta \in (0,1)\). Given \(n\ge 1\), define
$$\begin{aligned} t(n)=(\log n)^{16d+47}\cdot (n^2+1/\mu )\quad \text { and } \quad N=(\log n)^{8d+20}\cdot n^2. \end{aligned}$$
We call \(\eta \) a \(\delta \)-good environment if the following conditions hold:
- (1)
for all \(\frac{11d\log n}{\mu }\le t\le t(n)\log n\) the giant cluster \({\mathcal {G}}_t\) has size \(|{\mathcal {G}}_t|\in ((1-\delta )\theta (p) n^d, (1+\delta )\theta (p) n^d)\),
- (2)
for all \(\frac{11d\log n}{\mu }\le t\le t(n)\log n, \, \forall \, S\subseteq {\mathcal {G}}_t\) with
$$\begin{aligned} c_1(\log n)^{\frac{d}{d-1}}\le |S| \le (1-\delta )|{\mathcal {G}}_t| \quad \text { we have } \quad |\partial _{\eta _{t}}S|\ge \frac{c_2 |S|^{1-1/d}}{(\log n)}, \end{aligned}$$
- (3)
\({\mathbb {P}}_{x,\eta }\left( \tau _N\le t(n)\right) \ge 1-\frac{1}{n^{10d}}\) for all x,
- (4)
\({\mathbb {P}}_{x,\eta }\left( \tau _N<\infty \right) =1\) for all x.
To be more precise we should have defined a \((\delta , c_1, c_2)\)-good environment. But we drop the dependence on \(c_1\) and \(c_2\) to simplify the notation.
Lemma 6.5
For all \(\delta \in (0,1)\) there exist \(c_1, c_2, c_3\) positive constants and \(n_0\in {\mathbb {N}}\) such that for all \(n\ge n_0\) and all \(\eta _0\) we have
$$\begin{aligned} {\mathbb {P}}_{\eta _0}\left( \eta \text { is }\delta \text {-good}\right) \ge 1 - \frac{c_3}{n^{10d}}. \end{aligned}$$
Proof
We first prove that for all n sufficiently large and all \(\eta _0\)
$$\begin{aligned} {\mathbb {P}}_{\eta _0}\left( \eta \text { satisfies (1) and (2)}\right) \ge 1 -\frac{1}{n^{10d}}. \end{aligned}$$
(6.1)
The number of times that the Poisson clocks on the edges ring between times \(11d\log n/\mu \) and \(t(n) \log n\) is a Poisson random variable of parameter at most \(d(n^d \mu ) \cdot t(n) \log n\). Note that all edges update by time \(\frac{11d \log n}{\mu }\) with probability at least \(1-\frac{d}{n^{10d}}\). Combining large deviations for the Poisson random variable with Lemma 3.1 and Corollary 6.3 (applied with suitable constants c and \(\alpha \)) proves (6.1). Corollary 5.4, Markov’s inequality and a union bound over all x immediately imply
$$\begin{aligned} {\mathbb {P}}_{\eta _0}\left( \eta \text { satisfies (3)}\right) \ge 1 - \frac{d}{n^{10d}}. \end{aligned}$$
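The claim above that all edges update by time \(\frac{11d\log n}{\mu }\) with probability at least \(1-\frac{d}{n^{10d}}\) is a union bound: each edge refreshes at rate \(\mu \), so a fixed edge fails to update by that time with probability \(e^{-\mu \cdot \frac{11d\log n}{\mu }}=n^{-11d}\), and the torus has at most \(dn^d\) edges, whence
$$\begin{aligned} {\mathbb {P}}\left( \text {some edge has no update by time } \tfrac{11d\log n}{\mu }\right) \le dn^d\cdot n^{-11d} = \frac{d}{n^{10d}}. \end{aligned}$$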
Finally, to prove that \(\eta \) satisfies (4) with probability 1, we note that for almost every environment there will be infinitely many times at which all edges will be open for unit time and so at these times the intersection of the giant component with the evolving set will be large. Therefore such times are necessarily excellent. \(\square \)
For all \(\delta \in (0,1)\) we now define
$$\begin{aligned} \tau _\delta = \inf \{t\in {\mathbb {N}}: |S_t\cap {\mathcal {G}}_t| \ge (1-\delta ) |{\mathcal {G}}_t|\}. \end{aligned}$$
(6.2)
The goal of this section is to prove the following:
Proposition 6.6
Let \(\delta \in (0,1)\). There exists a positive constant c so that the following holds: for all n, if \(\eta \) is a \(\delta \)-good environment, then for all starting points x we have
$$\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau _\delta \le t(n)\right) \ge 1 -\frac{c}{n^{10d}}. \end{aligned}$$
Recall from Sect. 2 the definition of \((Z_k)\) for a fixed environment \(\eta \) via
$$\begin{aligned} Z_k = \frac{\sqrt{\pi (S_k^{\#})}}{\pi (S_k)}. \end{aligned}$$
Note that we have suppressed the dependence on \(\eta \) for ease of notation. The following lemma, which controls the drift of Z via the isoperimetric profile, will be crucial in the proof of Proposition 6.6.
Lemma 6.7
Let \(\eta \) be a \(\delta \)-good environment with \(\delta \in (0,1)\). Then for all n sufficiently large and for all \(1\le i\le N\) (recall Definition 6.4) we have almost surely
$$\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}{\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i)\;\vert \;{\mathcal {F}}_{\tau _i}\right] \le Z_{\tau _i}\left( 1-\left( \varphi \left( \frac{1}{Z_{\tau _i}^{2}}\right) \right) ^{2}\right) {\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i), \end{aligned}$$
where \({\mathcal {F}}_t\) is the \(\sigma \)-algebra generated by the evolving set up to time t, \((\tau _i)\) is the sequence of excellent times associated to the environment \(\eta \) and \(\varphi \) is defined as
$$\begin{aligned} \varphi (r) = {\left\{ \begin{array}{ll} c \cdot (\log n)^{-\beta }\cdot n^{-1}\cdot r^{-1/d} \quad &{}\quad \text{ if } \quad \frac{(\log n)^{\alpha }}{n^d}\le r \le \frac{1}{2} \\ c \cdot n^{-d}\cdot r^{-1} &{}\quad \text{ if } \quad r< \frac{(\log n)^{\alpha }}{n^d} \\ c \cdot 2^{1/d}\cdot (\log n)^{-\beta }\cdot n^{-1} &{}\quad \text{ if } \quad r\in \left[ \frac{1}{2},\infty \right) \end{array}\right. } \end{aligned}$$
with \(\alpha =4d+12+d/(d-1)\), \(\beta = 4d+9-12/d\) and c a positive constant.
Proof
Since \(\tau _\delta \) is a stopping time, it follows that \(\{\tau _\delta \wedge t(n)>\tau _i\}\in {\mathcal {F}}_{\tau _i}\), and hence we obtain
$$\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}{\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i)\;\vert \;{\mathcal {F}}_{\tau _i}\right] = {\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i)\cdot \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}\;\vert \;{\mathcal {F}}_{\tau _i}\right] . \end{aligned}$$
(6.3)
Lemma 2.3 implies that Z is a positive supermartingale and since \(\eta \) is a \(\delta \)-good environment, we have \(\tau _N<\infty \)\({\mathbb {P}}_\eta \)-almost surely. We thus get for all \(0\le i\le N-1\)
$$\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}\;\vert \;{\mathcal {F}}_{\tau _i}\right] \le \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i}+1}\;\vert \;{\mathcal {F}}_{\tau _i}\right] . \end{aligned}$$
Using the Markov property gives
$$\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i}+1}\;\vert \;{\mathcal {F}}_{\tau _i}\right] = \sum _{t\ge 0} {\mathbf {1}}(\tau _i=t)\cdot \mathbb {\widehat{E}}_{\eta }\left[ Z_{t+1}\;\vert \;\tau _i=t, S_t\right] . \end{aligned}$$
(6.4)
Since \(\tau _i\) is a stopping time, the event \(\{\tau _i=t\}\) only depends on \((S_{u})_{u\le t}\). The distribution of \(S_{t+1}\) only depends on \(S_t\) and the outcome of the independent uniform random variable \(U_{t+1}\). Therefore we obtain
$$\begin{aligned} \begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{t+1}\;\vert \;\tau _i=t, S_t=S\right]&=\frac{\sqrt{\pi (S^{\#})}}{\pi (S)} \cdot \mathbb {\widehat{E}}_{\eta }\left[ \frac{Z_{t+1}}{Z_t}\;\Big \vert \;S_t=S\right] \\&= \frac{\sqrt{\pi (S^{\#})}}{\pi (S)} \cdot {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^{\#})}{\pi (S_t^{\#})}}\;\Big \vert \;S_t=S\right] , \end{aligned} \end{aligned}$$
(6.5)
where for the last equality we used the transition probability of the Doob transform of the evolving set. If \(1\le |S|\le n^d/2\), then for all n sufficiently large
$$\begin{aligned} \begin{aligned} {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^{\#})}{\pi (S_t^{\#})}}\;\Big \vert \;S_t=S\right]&\le {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1})}{\pi (S_t)}}\;\Big \vert \;S_t=S\right] =1-\psi _{t+1}(S) \\&\le 1 - \frac{1}{8}\cdot \left( \varphi _{t+1}(S)\right) ^2, \end{aligned} \end{aligned}$$
(6.6)
where the equality is simply the definition of \(\psi _{t+1}\) and the last inequality follows from Lemma 2.1, since
$$\begin{aligned} {\mathbb {P}}_{\eta }\left( X_{t+1}=x\;\vert \;X_t=x\right) \ge e^{-1}. \end{aligned}$$
Similarly, if \(n^d>|S|>n^d/2\), then, using the fact that the complement of an evolving set process is also an evolving set process, we get
$$\begin{aligned} \begin{aligned} {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^{\#})}{\pi (S_t^{\#})}}\;\Big \vert \;S_t=S\right]&\le {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^c)}{\pi (S_t^c)}}\;\Big \vert \;S_t=S\right] =1-\psi _{t+1}(S^c) \\&\le 1 - \frac{1}{8}\cdot \left( \varphi _{t+1}(S^c)\right) ^2. \end{aligned} \end{aligned}$$
(6.7)
Plugging in the definition of \(\varphi _{t+1}\) we deduce for all \(1\le |S|<n^d\)
$$\begin{aligned} \varphi _{t+1}(S)&= \frac{1}{|S|}\sum _{x\in S}\sum _{y\in S^c} {\mathbb {P}}_{\eta }\left( X_{t+1}=y\;\vert \;X_t=x\right) \ge \frac{1}{2de|S|} \sum _{x\in S} \sum _{y\in S^c} \int _{t}^{t+1}\eta _s(x,y)\,ds\\ \varphi _{t+1}(S^c)&= \frac{1}{|S^c|}\sum _{x\in S}\sum _{y\in S^c} {\mathbb {P}}_{\eta }\left( X_{t+1}=y\;\vert \;X_t=x\right) \ge \frac{1}{2de|S^c|} \sum _{x\in S} \sum _{y\in S^c} \int _{t}^{t+1}\eta _s(x,y)\,ds. \end{aligned}$$
Since in (6.4) we multiply by the indicator \({\mathbf {1}}(\tau _i=t)\), from now on we take t to be an excellent time, and hence we get from Definition 5.1
$$\begin{aligned} \varphi _{t+1}(S_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{\eta _{t}}S_t|}{|S_t|}, \quad \varphi _{t+1}(S^c_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{\eta _{t}}S_t|}{|S^c_t|} \quad \text { and } \quad |S_t\cap {\mathcal {G}}_t|\ge \frac{|S_t|}{(\log n)^{4d+12}}. \end{aligned}$$
(6.8)
Since \(|\partial _{\eta _t}S_t| \ge |\partial _{{\mathcal {G}}_t}S_t| = |\partial _{{\mathcal {G}}_t}({\mathcal {G}}_t\cap S_t)|\) we have
$$\begin{aligned} \varphi _{t+1}(S_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{{\mathcal {G}}_{t}}({\mathcal {G}}_t \cap S_t)|}{|S_t|} \quad \text { and } \quad \varphi _{t+1}(S^c_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{{\mathcal {G}}_{t}}({\mathcal {G}}_t \cap S_t)|}{|S^c_t|}. \end{aligned}$$
If \(|S_t|\le c_1(\log n)^{4d+12+d/(d-1)}\), then, since \(\eta \) is a \(\delta \)-good environment, \(|{\mathcal {G}}_t| \ge (1-\delta )\theta (p)n^d\), and hence we use the obvious bound
$$\begin{aligned} \varphi _{t+1}(S_t) \ge \frac{1}{4de}\cdot \frac{1}{|S_t|}. \end{aligned}$$
(6.9)
Next, if \(\frac{n^d}{2}>|S_t|>c_1(\log n)^{4d+12+d/(d-1)}\), then using (6.8) and the fact that we are on the event \(\{\tau _\delta \wedge t(n)>t\}\) we get that
$$\begin{aligned} c_1(\log n)^{d/(d-1)}\le |{\mathcal {G}}_t\cap S_t|\le (1-\delta )|{\mathcal {G}}_t|. \end{aligned}$$
Therefore, since \(\eta \) is a \(\delta \)-good environment and \(t\le t(n)\), (2) of Definition 6.4 gives that in this case
$$\begin{aligned} \varphi _{t+1}(S_t) \ge \frac{c_2}{4de} \cdot \frac{|{\mathcal {G}}_t\cap S_t|^{1-\frac{1}{d}}}{(\log n)|S_t|} \ge \frac{c}{(\log n)^{4d+9-12/d}}\cdot \frac{|S_t|^{1-\frac{1}{d}}}{|S_t|} \end{aligned}$$
(6.10)
$$\begin{aligned} = \frac{c}{(\log n)^{4d+9-12/d}}\cdot \frac{1}{|S_t|^{1/d}}, \end{aligned}$$
(6.11)
where c is a positive constant and for the second inequality we used (6.8) again.
Finally when \(|S_t|\ge \frac{n^d}{2}\), on the event \(\{\tau _\delta \wedge t(n)>t\}\) we have from (6.8) and using again (2) of Definition 6.4
$$\begin{aligned} \varphi _{t+1}(S_t^c) \ge \frac{c}{(\log n)^{4d+9-12/d}} \cdot \frac{|S_t|^{1-\frac{1}{d}}}{n^d - |S_t|}\ge \frac{c \cdot 2^{1/d}}{(\log n)^{4d+9-12/d}} \cdot n^{-1}. \end{aligned}$$
(6.12)
Substituting (6.9), (6.10) and (6.12) into (6.6) and (6.7) and then into (6.3), (6.4) and (6.5) we deduce
$$\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}{\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i)\;\vert \;{\mathcal {F}}_{\tau _i}\right] \le Z_{\tau _i}\left( 1-\left( \varphi \left( \pi (S_{\tau _i})\right) \right) ^{2}\right) {\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i), \end{aligned}$$
where the function \(\varphi \) is given by
$$\begin{aligned} \varphi (r) = {\left\{ \begin{array}{ll} c \cdot (\log n)^{-\beta }\cdot n^{-1}\cdot r^{-1/d} &{}\quad \text{ if } \quad \frac{(\log n)^{\alpha }}{n^d}\le r \le \frac{1}{2} \\ c\cdot n^{-d}\cdot r^{-1} &{}\quad \text{ if } \quad r< \frac{(\log n)^{\alpha }}{n^d} \\ c \cdot 2^{1/d}\cdot (\log n)^{-\beta }\cdot n^{-1} &{}\quad \text{ if } \quad r\in \left[ \frac{1}{2},\infty \right) \end{array}\right. } \end{aligned}$$
with c a positive constant and \(\beta = 4d+9-12/d\). We now note that if \(\pi (S_t)\le 1/2\), then \(Z_t = (\pi (S_t))^{-1/2}\). If \(\pi (S_t) >1/2\), then \(Z_t = \sqrt{\pi (S_t^c)}/\pi (S_t)\le \sqrt{2}\). Since \(\varphi (r)=\varphi (1/2)\) for all \(r>1/2\), we get that in all cases
$$\begin{aligned} \varphi (\pi (S_{\tau _i})) = \varphi \left( \frac{1}{Z_{\tau _i}^{2}}\right) \end{aligned}$$
and this concludes the proof. \(\square \)
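As a consistency check (a routine substitution, not part of the original argument), the three cases of \(\varphi \) are exactly the bounds (6.9), (6.11) and (6.12) rewritten in the variable \(r=\pi (S_t)=|S_t|/n^d\):
$$\begin{aligned} \frac{1}{4de\,|S_t|}&= \frac{1}{4de}\cdot n^{-d}\cdot r^{-1} \quad \text { if } \quad r<\frac{c_1(\log n)^{\alpha }}{n^d}, \\ \frac{c}{(\log n)^{\beta }}\cdot \frac{1}{|S_t|^{1/d}}&= \frac{c}{(\log n)^{\beta }}\cdot n^{-1}\cdot r^{-1/d} \quad \text { if } \quad \frac{c_1(\log n)^{\alpha }}{n^d}\le r\le \frac{1}{2}, \end{aligned}$$
while for \(r\ge 1/2\) the bound (6.12) gives the constant value \(c\cdot 2^{1/d}(\log n)^{-\beta }n^{-1}\); here \(\alpha =4d+12+d/(d-1)\), \(\beta =4d+9-12/d\) as above, and the constant \(c_1\) is absorbed into c in the statement of the lemma.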
Proof of Proposition 6.6
We define
$$\begin{aligned} Y_i = Z_{\tau _i}\cdot {\mathbf {1}}(\tau _\delta \wedge t(n)>\tau _i) \end{aligned}$$
and
$$\begin{aligned} f(y)={\left\{ \begin{array}{ll} \left( \varphi \left( \frac{1}{y^2} \right) \right) ^2 &{}\quad \text{ if } y>0 \\ 0 &{}\quad \text{ if } y=0 \end{array}\right. }, \end{aligned}$$
where \(\varphi \) is defined in Lemma 6.7. With these definitions Lemma 6.7 gives for all \(1\le i\le N\)
$$\begin{aligned} \mathbb {\widehat{E}}_{x,\eta }\left[ Y_{i+1}\;\vert \;Y_i\right] \le Y_i(1-f(Y_i)) \end{aligned}$$
with \(Y_1 \le n^{d/2}\) for all \(n\ge 3\).
Since \(\varphi \) is decreasing, we get that f is increasing, and hence we can apply [7, Lemma 11 (iii)] to deduce that for all \(\varepsilon >0\) if
$$\begin{aligned} k\ge \int _{\varepsilon }^{n^{d/2}} \frac{1}{zf(z)} \, dz, \end{aligned}$$
then we have that
$$\begin{aligned} \mathbb {\widehat{E}}_{x,\eta }\left[ Y_{k}\right] \le \varepsilon . \end{aligned}$$
We now evaluate the integral
$$\begin{aligned} \int _{\varepsilon }^{n^{d/2}} \frac{1}{zf(z)} \, dz = \int _{\varepsilon }^{n^{d/2}} \frac{1}{z(\varphi (1/z^2))^2}\,dz = \frac{1}{2}\cdot \int _{\frac{1}{n^d}}^{\frac{1}{\varepsilon ^2}} \frac{1}{u(\varphi (u))^2}\,du. \end{aligned}$$
Splitting the integral according to the different regions where \(\varphi \) is defined and substituting the function we obtain
$$\begin{aligned} \int _{\frac{1}{n^d}}^{\frac{1}{\varepsilon ^2}} \frac{1}{u(\varphi (u))^2}\,du \le c' \cdot n^2 \cdot (\log n)^{2\beta }\cdot \log \frac{1}{\varepsilon }, \end{aligned}$$
where \(c'\) is a positive constant. Therefore, taking \(\varepsilon =\frac{1}{n^{10d}}\), this gives that for all \(k\ge c''\cdot n^2 (\log n)^{2\beta +1}\) with \(c''=2c'd\) we have that
$$\begin{aligned} \mathbb {\widehat{E}}_{x,\eta }\left[ Y_{k}\right] \le \frac{1}{n^{10d}} \end{aligned}$$
and hence, since \(N=(\log n)^\gamma \cdot n^2\) with \(\gamma = 8d+20>2\beta +1\), we deduce
$$\begin{aligned} \mathbb {\widehat{E}}_{x,\eta }\left[ Y_{N}\right] \le \frac{1}{n^{10d}}. \end{aligned}$$
(6.13)
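For completeness, the bound on the integral above can be checked by splitting at \(u=(\log n)^{\alpha }/n^d\) and \(u=1/2\) and substituting the three cases of \(\varphi \) (a routine computation; constants are not optimized):
$$\begin{aligned} \int _{\frac{1}{n^d}}^{\frac{(\log n)^{\alpha }}{n^d}} \frac{n^{2d}\,u}{c^2}\,du \le \frac{(\log n)^{2\alpha }}{2c^2} \quad \text { and } \quad \int _{\frac{(\log n)^{\alpha }}{n^d}}^{\frac{1}{2}} \frac{(\log n)^{2\beta }\,n^2}{c^2}\,u^{\frac{2}{d}-1}\,du \le \frac{d\,(\log n)^{2\beta }\,n^2}{2c^2}, \end{aligned}$$
while on \([1/2, \varepsilon ^{-2}]\) the integrand equals \((\log n)^{2\beta }n^2/(c^2 2^{2/d} u)\) and contributes at most \(c^{-2}(\log n)^{2\beta }n^2\log (2/\varepsilon ^2)\). The last two terms dominate and yield the stated bound \(c'\cdot n^2(\log n)^{2\beta }\log \frac{1}{\varepsilon }\).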
Clearly we have
$$\begin{aligned} \{\tau _\delta \wedge t(n)>\tau _N\} = \{ \pi (S_{\tau _N})\ge 1/2,\, \tau _\delta \wedge t(n)>\tau _N\} \cup \{ \pi (S_{\tau _N})< 1/2,\, \tau _\delta \wedge t(n)>\tau _N\}. \end{aligned}$$
(6.14)
For the second event appearing on the right hand side above using the definition of the process Z we get
$$\begin{aligned} \{ \pi (S_{\tau _N})< 1/2,\, \tau _\delta \wedge t(n)>\tau _N\} \subseteq \left\{ Y_N > \sqrt{2}\right\} . \end{aligned}$$
The first event appearing on the right hand side of (6.14) implies that \(|S_{\tau _N}^c|\ge |S_{\tau _N}^c\cap {\mathcal {G}}_{\tau _N}| \ge \delta |{\mathcal {G}}_{\tau _N}|\). Since \(\eta \) is a \(\delta \)-good environment, by (1) of Definition 6.4 we have that \(|{\mathcal {G}}_{\tau _N}|\ge (1-\delta )\theta (p) n^d\). Therefore we obtain
$$\begin{aligned} \{ \pi (S_{\tau _N})\ge 1/2,\, \tau _\delta \wedge t(n)>\tau _N\} \subseteq \left\{ Y_N \ge \sqrt{\delta (1-\delta )\theta (p)}\right\} . \end{aligned}$$
By Markov’s inequality and the two inclusions above we now conclude
$$\begin{aligned} \mathbb {\widehat{P}}_{x,\eta }\left( \tau _\delta \wedge t(n)>\tau _N\right) \le \frac{\mathbb {\widehat{E}}_{x,\eta }\left[ Y_{N}\right] }{\sqrt{\delta (1-\delta )\theta (p)}\wedge \sqrt{2}} \le \frac{c}{n^{10d}}, \end{aligned}$$
where c is a positive constant and in the last inequality we used (6.13). Since \(\eta \) is a \(\delta \)-good environment, this now implies that
$$\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau _\delta \le t(n)\right) \ge 1 - \frac{c}{n^{10d}} \end{aligned}$$
and this finishes the proof. \(\square \)
Proof of Theorem 1.3
In this section we prove Theorem 1.3. First recall the definition of the stopping time \(\tau _\delta \) as the first time t that \(|S_t\cap {\mathcal {G}}_t|\ge (1-\delta )|{\mathcal {G}}_t|\).
Lemma 6.8
Let p be such that \(\theta (p)>1/2\). There exists \(n_0\) and \(\delta >0\) so that for all \(n\ge n_0\), if \(\eta \) is a \(\delta \)-good environment, then for all x
$$\begin{aligned} \Vert {\mathbb {P}}_{x,\eta }\left( X_{t(n)} \in \cdot \right) - \pi \Vert _{\mathrm{TV}} \le \frac{1 -\delta }{2}. \end{aligned}$$
Proof
Since \(\theta (p)>1/2\), there exist \(\varepsilon> 2\delta >0\) so that
$$\begin{aligned} \theta (p)>\frac{1}{2}+2\varepsilon \quad \text { and } \quad (1-\delta )^2 \theta (p)>\frac{1}{2}+\varepsilon . \end{aligned}$$
(6.15)
Summing over all possible values of \(\tau =\tau _\delta \) we obtain
$$\begin{aligned} \begin{aligned}&\Vert {\mathbb {P}}_{x,\eta }\left( X_{t(n)} \in \cdot \right) - \pi \Vert _{\mathrm{TV}} = \frac{1}{2} \sum _{z} \left| {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\right) - \frac{1}{n^d} \right| \\&\quad \le \frac{1}{2} \sum _z \left| \sum _{s\le t(n)} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s\right) -\sum _{s\le t(n)}\frac{{\mathbb {P}}_{x,\eta }\left( \tau =s\right) }{n^d} \right| + {\mathbb {P}}_{x,\eta }\left( \tau >t(n)\right) . \end{aligned} \end{aligned}$$
(6.16)
By the strong Markov property at time \(\tau \) we have
$$\begin{aligned} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s\right)&=\sum _{y} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s, X_s =y\right) \\&= \sum _{y} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_{s}=y\right) {\mathbb {P}}_{x,\eta }\left( \tau =s, X_s=y\right) . \end{aligned}$$
Since \(\tau \) is a stopping time for the evolving set process, we can use the coupling of the walk and the Doob transform of the evolving set, Theorem 2.2, to get
$$\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau =s, X_s=y\right) = {\mathbb {E}}_{x,\eta }\left[ \frac{{\mathbf {1}}(y\in S_s)}{|S_s|}\,{\mathbf {1}}(\tau =s)\right] . \end{aligned}$$
For all \(s\le t(n)\) we call \(\nu _s\) the probability measure defined by
$$\begin{aligned} \nu _s(y) = {\mathbb {P}}_{x,\eta }\left( X_s=y\;\vert \;\tau =s\right) . \end{aligned}$$
We claim that
$$\begin{aligned} \Vert \nu _s - \pi \Vert _{\mathrm{TV}} \le \frac{1}{2}-\varepsilon . \end{aligned}$$
(6.17)
Indeed, since conditionally on the evolving set trajectory \(X_s\) is uniform on \(S_s\), the convexity of the total variation distance gives
$$\begin{aligned} \Vert \nu _s - \pi \Vert _{\mathrm{TV}} \le {\mathbb {E}}_{x,\eta }\left[ \left\| \frac{{\mathbf {1}}(\cdot \in S_s)}{|S_s|} - \pi \right\| _{\mathrm{TV}}\;\Big \vert \;\tau =s\right] = {\mathbb {E}}_{x,\eta }\left[ 1 - \frac{|S_s|}{n^d}\;\Big \vert \;\tau =s\right] . \end{aligned}$$
Since \(s\le t(n)\) and \(\eta \) is a \(\delta \)-good environment, we have \(|{\mathcal {G}}_s|\ge (1-\delta )\theta (p) n^d\), and hence on the event \(\{\tau =s\}\) we get
$$\begin{aligned} |S_s|\ge (1-\delta )^2 \theta (p)n^d > \left( \frac{1}{2} + \varepsilon \right) n^d. \end{aligned}$$
This now implies that
$$\begin{aligned} {\mathbb {E}}_{x,\eta }\left[ 1 - \frac{|S_s|}{n^d}\;\Big \vert \;\tau =s\right] \le \frac{1}{2} - \varepsilon \end{aligned}$$
and completes the proof of (6.17). By the definition of \(\nu _s\) we have
$$\begin{aligned}&\frac{1}{2} \sum _z \left| \sum _{s\le t(n)} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s\right) -\sum _{s\le t(n)}\frac{{\mathbb {P}}_{x,\eta }\left( \tau =s\right) }{n^d} \right| \\&\quad =\frac{1}{2}\sum _z\left| \sum _{s\le {t(n)}} \sum _y{\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_s=y\right) \nu _s(y){\mathbb {P}}_{x,\eta }\left( \tau =s\right) - \sum _{s\le {t(n)}} \frac{{\mathbb {P}}_{x,\eta }\left( \tau =s\right) }{n^d} \right| \\&\quad \le \sum _{s\le {t(n)}} {\mathbb {P}}_{x,\eta }\left( \tau =s\right) \frac{1}{2}\sum _z\left| \sum _y\nu _s(y) {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_s=y\right) - \frac{1}{n^d} \right| . \end{aligned}$$
But since \(\pi \) is stationary for X when the environment is \(\eta \), we obtain
$$\begin{aligned} \frac{1}{2}\sum _z\left| \sum _y\nu _s(y) {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_s=y\right) - \frac{1}{n^d} \right| \le \Vert \nu _s - \pi \Vert _{\mathrm{TV}}\le \frac{1}{2}-\varepsilon , \end{aligned}$$
where the last inequality follows from (6.17). Substituting this bound into (6.16) gives
$$\begin{aligned} \Vert {\mathbb {P}}_{x,\eta }\left( X_{t(n)} \in \cdot \right) - \pi \Vert _{\mathrm{TV}} \le \frac{1}{2}-\varepsilon + {\mathbb {P}}_{x,\eta }\left( \tau >t(n)\right) . \end{aligned}$$
From Proposition 6.6 we have
$$\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau \le t(n)\right) \ge 1-\frac{c}{n^{2d}}. \end{aligned}$$
This together with the fact that we took \(2\delta <\varepsilon \) finishes the proof. \(\square \)
Corollary 6.9
Let p be such that \(\theta (p)>1/2\). Then there exist \(\delta \in (0,1)\) and \(n_0\) such that for all \(n\ge n_0\) and all starting environments \(\eta _0\) we have
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( (\eta _t)_{t\le t(n)}: \,\forall x,y,\, \left\| P_\eta ^{t(n)}(x,\cdot ) - P_\eta ^{t(n)}(y,\cdot ) \right\| _{\mathrm{TV}} \le 1-\delta \right) \ge 1-\delta . \end{aligned}$$
Proof
Let \(\delta \) and \(n_0\) be as in the statement of Lemma 6.8. Then Lemma 6.8 gives that for all \(n\ge n_0\), if \(\eta \) is a \(\delta \)-good environment, then for all x and y we have
$$\begin{aligned} \left\| P_\eta ^{t(n)}(x,\cdot ) - \pi \right\| _{\mathrm{TV}}\le \frac{1-\delta }{2} \quad \text { and } \quad \left\| P_\eta ^{t(n)}(y,\cdot ) - \pi \right\| _{\mathrm{TV}}\le \frac{1-\delta }{2}. \end{aligned}$$
Using this and the triangle inequality we obtain that on the event that \(\eta \) is a \(\delta \)-good environment for all x and y
$$\begin{aligned} \left\| P_\eta ^{t(n)}(x,\cdot ) -P_\eta ^{t(n)}(y,\cdot ) \right\| _{\mathrm{TV}} \le 1-\delta . \end{aligned}$$
Therefore for all \(n\ge n_0\) we get for all \(\eta _0\)
$$\begin{aligned}&{\mathcal {P}}_{\eta _0}\left( (\eta _t)_{t\le t(n)}: \,\exists \,x,y,\, \left\| P_\eta ^{t(n)}(x,\cdot ) - P_\eta ^{t(n)}(y,\cdot ) \right\| _{\mathrm{TV}} >1-\delta \right) \\&\quad \le {\mathcal {P}}_{\eta _0}\left( \eta \text { is not a }\delta \text {-good environment}\right) . \end{aligned}$$
Taking \(n_0\) even larger we get from Lemma 6.5 that for all \(n\ge n_0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta \text { is not a }\delta \text {-good environment}\right) \le \delta \end{aligned}$$
and this concludes the proof. \(\square \)
The following lemma will be applied later in the case where R is a constant or a uniform random variable.
Lemma 6.10
Let R be a random time independent of X and such that the following holds: there exists \(\delta \in (0,1)\) such that for all starting environments \(\eta _0\) we have
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \,\forall x,y,\, \left\| {\mathbb {P}}_{x,\eta }\left( X_R=\cdot \right) - {\mathbb {P}}_{y,\eta }\left( X_R=\cdot \right) \right\| _{\mathrm{TV}} \le 1-\delta \right) \ge 1-\delta . \end{aligned}$$
Then there exists a positive constant \(c=c(\delta )\) and \(n_0=n_0(\delta )\in {\mathbb {N}}\) so that if \(k=c\log n\) and \(R(k)=R_1+\cdots +R_k\), where \(R_i\) are i.i.d. distributed as R, then for all \(n\ge n_0\), all x, y and \(\eta _0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta :\left\| {\mathbb {P}}_{x,\eta }\left( X_{R(k)}=\cdot \right) - {\mathbb {P}}_{y,\eta }\left( X_{R(k)}=\cdot \right) \right\| _{\mathrm{TV}} \le \frac{1}{n^{3d}}\right) \ge 1-\frac{1}{n^{3d}}. \end{aligned}$$
Proof
We fix \(x_0, y_0\) and let X, Y be two walks moving in the same environment \(\eta \) and started from \(x_0\) and \(y_0\) respectively. We now present a coupling of X and Y. We divide time into rounds of length \(R_1, R_2,\ldots \) and we describe the coupling for every round.
For the first round, i.e. for times between 0 and \(R_1\) we use the optimal coupling given by
$$\begin{aligned} {\mathbb {P}}_{x_0,y_0,\eta }\left( X_{R_1}\ne Y_{R_1}\right) = \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R_1}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{R_1}=\cdot \right) \Vert _{\mathrm{TV}}, \end{aligned}$$
where the environment \(\eta \) is restricted between time 0 and \(R_1\). We now adapt the notion of a good environment: we call \(\eta \) a good environment during \([0,R_1]\) if the total variation distance appearing above is at most \(1-\delta \).
If X and Y did not couple after \(R_1\) steps, then they have reached some locations \(X_{R_1}=x_1\) and \(Y_{R_1} = y_1\). In the second round we couple them using again the corresponding optimal coupling, i.e.
$$\begin{aligned} {\mathbb {P}}_{x_1,y_1,\eta }\left( X_{R_2}\ne Y_{R_2}\right) = \Vert {\mathbb {P}}_{x_1,\eta }\left( X_{R_2}=\cdot \right) - {\mathbb {P}}_{y_1,\eta }\left( Y_{R_2}=\cdot \right) \Vert _{\mathrm{TV}}. \end{aligned}$$
Similarly we call \(\eta \) a good environment for the second round if the total variation distance above is at most \(1-\delta \). We continue in the same way for all later rounds. By the assumption on R, namely that the bound on the probability given in the statement of the lemma is uniform over all starting points x and y and over the initial environment, we get that for all \(\eta _0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta \text { is good for the }i\text {-th round}\right) \ge 1-\delta \end{aligned}$$
and the same bound is true even after conditioning on the previous \(i-1\) rounds. Let \(k=c\log n\) for a constant c to be determined. Let E denote the number of good environments in the first k rounds. We now get
$$\begin{aligned}&{\mathbb {P}}_{x_0,y_0, \eta _0}\left( X_{R(k)}\ne Y_{R(k)}\right) \le {\mathbb {P}}_{x_0,y_0, \eta _0}\left( E\le \frac{(1-\delta )k}{2}\right) \\&\quad + {\mathbb {P}}_{x_0,y_0, \eta _0}\left( E > \frac{(1-\delta )k}{2}, X_{R(k)}\ne Y_{R(k)}\right) . \end{aligned}$$
By concentration, since we can stochastically dominate E from below by \({\mathrm{Bin}}(k,1-\delta )\), the first probability decays exponentially in k. For the second probability, on the event that there are enough good environments, since the probability of not coupling in each round is at most \(1-\delta \), by successive conditioning we get
$$\begin{aligned} {\mathbb {P}}_{x_0,y_0, \eta _0}\left( E > \frac{(1-\delta )k}{2}, X_{R(k)}\ne Y_{R(k)}\right) \le (1-\delta )^{(1-\delta )k/2}. \end{aligned}$$
Therefore, taking \(c=c(\delta )\) sufficiently large we get overall for all n sufficiently large
$$\begin{aligned} {\mathbb {P}}_{x_0,y_0, \eta _0}\left( X_{R(k)}\ne Y_{R(k)}\right) \le \frac{1}{n^{6d}}. \end{aligned}$$
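The required size of c can be made explicit (a routine estimate; the constant is not optimized): with \(k=c\log n\), the non-coupling term satisfies, for all \(n\ge 2\),
$$\begin{aligned} (1-\delta )^{(1-\delta )k/2} = \exp \left( -\frac{(1-\delta )\,c\log n}{2}\,\log \frac{1}{1-\delta }\right) \le \frac{1}{2n^{6d}} \quad \text { once } \quad c\ge \frac{12d+2}{(1-\delta )\log \frac{1}{1-\delta }}, \end{aligned}$$
and the binomial large deviations term is also at most \(\frac{1}{2n^{6d}}\) for all c large enough depending only on \(\delta \), since it decays exponentially in \(k=c\log n\).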
So by Markov’s inequality again we obtain for all n sufficiently large
$$\begin{aligned}&{\mathcal {P}}_{\eta _0}\left( \eta : \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R(k)}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{R(k)}=\cdot \right) \Vert _{\mathrm{TV}} > \frac{1}{n^{3d}}\right) \\&\quad \le n^{3d}\cdot {\mathcal {E}}_{\eta _0}\left[ \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R(k)}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{R(k)}=\cdot \right) \Vert _{\mathrm{TV}}\right] \\&\quad \le n^{3d}\cdot {\mathbb {P}}_{x_0,y_0,\eta _0}\left( X_{R(k)}\ne Y_{R(k)}\right) \le \frac{1}{n^{3d}}, \end{aligned}$$
where \({\mathcal {E}}\) is expectation over the random environment. This finishes the proof. \(\square \)
Proof of Theorem 1.3
Let \(R=t(n)\). Then by Corollary 6.9 there exists \(n_0\) such that R satisfies the condition of Lemma 6.10 for \(n\ge n_0\). So applying Lemma 6.10 we get for all n sufficiently large and all \(x_0,y_0\) and \(\eta _0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{kt(n)}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{kt(n)}=\cdot \right) \Vert _{\mathrm{TV}} > \frac{1}{n^{3d}}\right) \le \frac{1}{n^{3d}}, \end{aligned}$$
where \(k=c\log n\). By a union bound over all starting states \(x_0,y_0\) we deduce
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \max _{x_0,y_0}\Vert P_\eta ^{kt(n)}(x_0,\cdot ) - P_\eta ^{kt(n)}(y_0,\cdot ) \Vert _{\mathrm{TV}} > \frac{1}{n^{3d}}\right) \le n^{2d} \cdot \frac{1}{n^{3d}} = \frac{1}{n^d}. \end{aligned}$$
This proves that for all n sufficiently large
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : t_{\mathrm {mix}}(n^{-3d}, \eta ) \ge kt(n)\right) \le n^{-d} \end{aligned}$$
and thus completes the proof of the theorem. \(\square \)
Proof of Theorem 1.6
Proof of Theorem 1.6
Let \(\delta =\varepsilon /100\) and \(k=[2(1-\delta )/(\delta \theta (p))]+1\). For every starting point \(x_0\) we define a sequence of stopping times. First let \(\xi _1\) be the first time at which all the edges have refreshed at least once. Set \(\widetilde{\delta }=\delta /k\). Then we define \(\tau _1=\tau _1(x_0)\) by
$$\begin{aligned} \tau _1 = \inf \left\{ t\ge \xi _1: |S_t\cap {\mathcal {G}}_t| \ge (1-\widetilde{\delta })|{\mathcal {G}}_t|\right\} \wedge (\xi _1+t(n)), \end{aligned}$$
where \((S_t)\) is the evolving set process starting at time \(\xi _1\) from \(\{X_{\xi _1}\}\) and coupled with X using the Diaconis–Fill coupling. We define inductively \(\xi _{i+1}\) as the first time after \(\xi _i+t(n)\) that all edges refresh at least once. In order to define \(\tau _{i+1}\), we start a new evolving set process which at time \(\xi _{i+1}\) is the singleton \(\{X_{\xi _{i+1}}\}\). (This restart does not affect the definition of the earlier \(\tau _j\)’s.) To simplify notation, we again call this process \(S_t\) and couple it with the walk X using the Diaconis–Fill coupling. Next we define
$$\begin{aligned} \tau _{i+1}=\inf \left\{ t\ge \xi _{i +1}: |S_t\cap {\mathcal {G}}_t| \ge (1-\widetilde{\delta })|{\mathcal {G}}_t|\right\} \wedge (\xi _{i+1}+t(n)). \end{aligned}$$
From now on we call \(\eta \) a good environment if \(\eta \) is a \(\delta \)-good environment and \(\xi _k\le 2kt(n)\). Lemma 6.5 and the definition of the \(\xi _i\)’s give for all \(\eta _0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}(\eta \text { is good}) \ge 1-\frac{c_4}{n^{10d}}, \end{aligned}$$
(6.18)
where \(c_4\) is a positive constant. By Proposition 6.6 there exists a positive constant c so that if \(\eta \) is a good environment, then for all \(x_0\) and for all \(1\le i\le k\) we have
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( \tau _{i}-\xi _{i}\le t(n)\right) \ge 1 -\frac{c}{n^{10d}}. \end{aligned}$$
(6.19)
We will now prove that there exists a positive constant \(c'\) so that for all \(x_0\)
$$\begin{aligned} {\mathbb {P}}_{x_0, \eta _0}\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|<(1-\delta )n^d\right) \le \frac{c'}{n^{2d}}. \end{aligned}$$
(6.20)
Writing again \({\mathcal {E}}\) for expectation over the random environment and using (6.18) and (6.19) for all \(i\le k\), we obtain that there exists a positive constant \(c''\) so that for all n sufficiently large and for all \(x_0, \eta _0\)
$$\begin{aligned} {\mathcal {E}}_{\eta _0}\left[ {\mathbb {P}}_{x_0,\eta }\left( \tau _k> (\log n)\, t(n)\right) \right] \le \frac{c''}{n^{10d}}. \end{aligned}$$
This and Markov’s inequality now give that for all n sufficiently large
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \forall \,x_0, \, {\mathbb {P}}_{x_0,\eta }\left( \tau _k\le (\log n)t(n)\right) \ge 1-\frac{1}{n^{{2d}}}\right) \ge 1-\frac{c''}{n}. \end{aligned}$$
(6.21)
Since every edge refreshes after an exponential time of parameter \(\mu \), the number of distinct percolation configurations that appear in a time interval of length t is stochastically dominated by one plus a Poisson random variable of parameter \(\mu \cdot t\cdot dn^d\). In particular, the number of percolation configurations appearing in the interval \([\xi _i, \xi _i+t(n)]\) is dominated by one plus a Poisson variable \(N_i\) of parameter \(\mu \cdot t(n)\cdot dn^d\). By the concentration of the Poisson distribution, we obtain
$$\begin{aligned} {\mathbb {P}}\left( \exists \, i\le k: N_i\ge n^{d+4}\right) \le \exp \left( -c_1n \right) , \end{aligned}$$
where \(c_1\) is another positive constant. Let \({\mathcal {G}}^1, \ldots , {\mathcal {G}}^k\) be the giant components of independent supercritical percolation configurations. Since the percolation clusters obtained at the times \(\xi _i\) are independent, using Corollary 3.2 in the third inequality below we deduce that for all n sufficiently large
$$\begin{aligned}&{\mathbb {P}}_{x_0, \eta _0}\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|<(1-\delta )n^d\right) \\&\quad \le {\mathbb {P}}_{x_0,\eta _0}\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|<(1-\delta )n^d, \{\tau _{i}-\xi _{i}\le t(n)\} \cap \{N_i\le n^{d+4}\}, \forall \, i\le k\right) \\&\qquad + e^{-c_1n} + k\frac{c}{n^{10d}}\\&\quad \le n^{(d+4)k} {\mathbb {P}}\left( |{\mathcal {G}}^1\cup \cdots \cup {\mathcal {G}}^k|<(1-\delta )n^{d}\right) + k \frac{2c}{n^{10d}}\\&\quad \le \frac{n^{(d+4)k}}{c}\exp \left( -cn^{\frac{d}{d+1}} \right) + k\frac{2c}{n^{10d}}\le \frac{c''}{n^{10d}}, \end{aligned}$$
where \(c''\) is a positive constant, uniform over all \(x_0\) and \(\eta _0\). This proves (6.20). Summing this error over all starting points \(x_0\) and using Markov’s inequality, we get that for all n sufficiently large and all \(\eta _0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \forall x_0, \,{\mathbb {P}}_{x_0,\eta }\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|\ge (1-\delta )n^d\right) \ge 1-\frac{1}{n}\right) \ge 1-\frac{c'}{n}. \end{aligned}$$
(6.22)
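The Poisson tail estimate above is a standard Chernoff bound: for a Poisson variable \(N\) of mean \(\lambda \) and \(x>\lambda \) one has \({\mathbb {P}}\left( N\ge x\right) \le e^{-\lambda }(e\lambda /x)^{x}\). Writing \(\lambda _i=\mu \, t(n)\, d\, n^d\) for the mean of \(N_i\) and assuming \(\lambda _i\le n^{d+3}\) for all n large (a sketch, not the precise computation), this gives
$$\begin{aligned} {\mathbb {P}}\left( N_i\ge n^{d+4}\right) \le e^{-\lambda _i}\left( \frac{e\lambda _i}{n^{d+4}}\right) ^{n^{d+4}} \le \left( \frac{e}{n}\right) ^{n^{d+4}} = \exp \left( -n^{d+4}\left( \log n-1\right) \right) , \end{aligned}$$
and a union bound over \(i\le k\) is then easily absorbed into \(\exp \left( -c_1n\right) \).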
The definition of the stopping times \(\tau _i\) immediately yields
$$\begin{aligned} \{|{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}| \ge (1-\delta )n^d\} \subseteq \{|S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\ge (1-\delta )^2n^d\}. \end{aligned}$$
This together with (6.22) now give
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \forall x_0, \,{\mathbb {P}}_{x_0,\eta }\left( |S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\ge (1-\delta )^2 n^d\right) \ge 1-\frac{1}{n}\right) \ge 1-\frac{c'}{n}. \end{aligned}$$
(6.23)
Recall that the stopping times \(\tau _i\) depend on \(x_0\), a dependence we have suppressed from the notation. We now change the definition of a good environment and call \(\eta \) good if it satisfies the following for all \(x_0\):
$$\begin{aligned}&{\mathbb {P}}_{x_0,\eta }\left( |S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\ge (1-\delta )^2 n^d\right) \ge 1-\frac{1}{n} \quad \text { and } \end{aligned}$$
(6.24)
$$\begin{aligned}&{\mathbb {P}}_{x_0,\eta }\left( \tau _k\le (\log n)t(n)\right) \ge 1-\frac{1}{n^{2d}}. \end{aligned}$$
(6.25)
From (6.21) and (6.23) we get that for all \(\eta _0\)
$$\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta \text { is good}\right) \ge 1-\frac{c'+c''}{n}. \end{aligned}$$
(6.26)
We now define a stopping time \(\tau (x_0)\) by selecting \(i\in \{1,\ldots , k\}\) uniformly at random and setting \(\tau (x_0)=\tau _i(x_0)\). Then at this time we have for all x
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{\tau (x_0)}=x\right) = \frac{1}{k}\sum _{i=1}^{k}{\mathbb {P}}_{x_0,\eta }\left( X_{\tau _i}=x\right) \ge \frac{1}{kn^d}\, {\mathbb {P}}_{x_0,\eta }\left( x\in S_{\tau _1}\cup \cdots \cup S_{\tau _k}\right) , \end{aligned}$$
where the last inequality uses the coupling of X with the Doob transform of the evolving set, under which \({\mathbb {P}}_{x_0,\eta }\left( X_{t}=x\;\vert \;(S_u)_{u\le t}\right) ={\mathbb {1}}(x\in S_t)/|S_t|\ge {\mathbb {1}}(x\in S_t)\, n^{-d}\).
We now set \(f_1(x) = {\mathbb {P}}_{x_0,\eta }\left( x\in S_{\tau _1}\cup \cdots \cup S_{\tau _k}\right) \) for all x. Since \(\eta \) is a good environment, for some \(\delta '<\varepsilon /50\) we have for all n sufficiently large
$$\begin{aligned} \sum _x f_1(x) = {\mathbb {E}}_{x_0,\eta }\left[ |S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\right] \ge (1-\delta )^2n^d \left( 1-\frac{1}{n}\right) \ge (1-\delta ') n^d. \end{aligned}$$
(6.27)
First let \(c=c(\varepsilon )\in {\mathbb {N}}\) be a constant to be fixed later. In order to define the stopping rule, we first repeat the above construction ck times. More specifically, when \(X_0=x_0\), we let \(\sigma _1= \tau (x_0)\wedge (\log n)t(n)\). Then, since \(\eta \) is a good environment, we obtain
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _1}=x\right) \ge \frac{1}{kn^d} f_1(x) - \frac{1}{n^{2d}}. \end{aligned}$$
Let \(X_{\sigma _1} = x_1\). Then we define in the same way as above a stopping time \(\tau (x_1)\) with the evolving set process starting from \(\{x_1\}\) and the environment considered after time \(\sigma _1\). Then we set
$$\begin{aligned} \sigma _2 = \sigma _1+(\tau (x_1)-\sigma _1)\wedge (\log n)t(n). \end{aligned}$$
We continue in this way and define a sequence of stopping times \(\sigma _i\) for all \(i\le ck\). In the same way as for the first round, for all \(i\le ck\) we have
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _i}=x\right) \ge \frac{1}{kn^d} f_i(x) - \frac{1}{n^{2d}} \end{aligned}$$
and the function \(f_i\), defined in the same way as \(f_1\) but for the i-th round, satisfies (6.27).
We next define the stopping rule. To do so, we specify the probability of stopping in each round. We define the set \(A_1\) of good points for the first round as follows:
$$\begin{aligned} A_1 = \left\{ x: \, {\mathbb {P}}_{x_0, \eta }\left( X_{\sigma _1}=x\right) \ge \frac{1}{2kn^d}\right\} . \end{aligned}$$
We now sample X at time \(\sigma _1\). If \(X_{\sigma _1}=x\in A_1\), then at this time we stop with probability
$$\begin{aligned} \frac{1}{2kn^d {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _1}=x\right) }. \end{aligned}$$
If we stop at the first round, then we set \(T=\sigma _1\). So if \(x\in A_1\), we have
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{T}=x, T=\sigma _1\right) = \frac{1}{2kn^d}. \end{aligned}$$
From (6.27) we get that \(|A_1|\ge (1-3\delta ')n^d\) for all n sufficiently large. Therefore, summing over all \(x\in A_1\) we get that
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _1\right) \ge \frac{1-3\delta '}{2k}. \end{aligned}$$
Therefore, this now gives for \(x\in A_1\)
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_T=x\;\vert \;T=\sigma _1\right) \le \frac{1}{(1-3\delta ')n^d}. \end{aligned}$$
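To see why (6.27) implies \(|A_1|\ge (1-3\delta ')n^d\), note that if \(x\notin A_1\), then by the lower bound \({\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _1}=x\right) \ge \frac{1}{kn^d}f_1(x)-\frac{1}{n^{2d}}\) we get \(f_1(x)<\frac{1}{2}+\frac{k}{n^{d}}\le \frac{2}{3}\) for all n sufficiently large. Since \(0\le f_1\le 1\), we obtain
$$\begin{aligned} (1-\delta ')n^d\le \sum _x f_1(x)\le |A_1|+\frac{2}{3}\left( n^d-|A_1|\right) , \end{aligned}$$
and rearranging gives \(|A_1|\ge (1-3\delta ')n^d\).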
We now define inductively the probability of stopping in the i-th round. Suppose we have not stopped up to the \((i-1)\)-st round. We define the set of good points for the i-th round via
$$\begin{aligned} A_i=\left\{ x: \, {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _i}=x\right) \ge \frac{1}{2kn^d} \right\} . \end{aligned}$$
If \(X_{\sigma _i}=x\in A_i\), then the probability we stop at the i-th round is
$$\begin{aligned} \frac{1}{2kn^d {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _i}=x\right) } \end{aligned}$$
and as above we obtain by summing over all \(x\in A_i\) and using that \(|A_i|\ge (1-3\delta ')n^d\)
$$\begin{aligned}&{\mathbb {P}}_{x_0,\eta }\left( T=\sigma _i\;\vert \;T>\sigma _{i-1}\right) \ge \frac{1-3\delta '}{2k} \quad \text { and }\\&{\mathbb {P}}_{x_0,\eta }\left( X_T=x\;\vert \;T=\sigma _i\right) \le \frac{1}{(1-3\delta ')n^d}, \,\, \forall \, x\in A_i. \end{aligned}$$
If we have not stopped in the first ck rounds, then we set \(T=\sigma _{ck+1}\). Notice, however, that
$$\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _{ck+1}\right) \le \left( 1-\frac{1-3\delta '}{2k} \right) ^{ck}\le \exp \left( -\frac{c(1-3\delta ')}{2} \right) . \end{aligned}$$
For every round \(i\le ck\), we now have that
$$\begin{aligned}&\left\| {\mathbb {P}}_{x_0,\eta }\left( X_T=\cdot \;\vert \;T=\sigma _i\right) - \pi \right\| _{\mathrm{TV}} \\&\quad \le \sum _{x\in A_i} \left( {\mathbb {P}}_{x_0,\eta }\left( X_T=x\;\vert \;T=\sigma _i\right) - \frac{1}{n^d} \right) _+ + \frac{|A_i^c|}{n^d} \\&\quad \le \sum _{x\in A_i}\left( \frac{1}{(1-3\delta ')n^d} - \frac{1}{n^d} \right) + 3\delta ' \le \frac{3\delta '}{1-3\delta '}+3\delta ' \le 10\delta ', \end{aligned}$$
since \(\varepsilon <1/4\). So we now get overall
$$\begin{aligned}&\left\| {\mathbb {P}}_{x_0,\eta }\left( X_T=\cdot \right) - \pi \right\| _{\mathrm{TV}} \\&\quad \le \sum _{i\le ck} {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _i\right) \left\| {\mathbb {P}}_{x_0,\eta }\left( X_{T}=\cdot \;\vert \;T=\sigma _i\right) -\pi \right\| _{\mathrm{TV}} + {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _{ck+1}\right) \\&\quad \le 10\delta ' + \exp \left( -\frac{c(1-3\delta ')}{2} \right) . \end{aligned}$$
We now take \(c=c(\varepsilon )\) so that the above bound is smaller than \(\varepsilon \). Finally, by the definition of the stopping times \(\sigma _i\), we also get that \({\mathbb {E}}_{x_0,\eta }\left[ T\right] \le c k (\log n)t(n)\) and this concludes the proof. \(\square \)
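The stopping rule above is an instance of a general unbiasing device: stopping at a good point x with probability inversely proportional to its hitting probability makes the stopped law exactly uniform on the good set. A minimal numerical sketch of this one-round mechanism (with hypothetical parameters N and p standing in for \(n^d\) and the law of \(X_{\sigma _1}\); not taken from the proof itself):

```python
import random

# One round of the unbiasing stopping rule, in the abstract: X has an
# arbitrary law p on N states.  The good set A collects states with
# p[x] >= 1/(2N) (playing the role of 1/(2k n^d) above); upon seeing
# X = x in A we stop with probability 1/(2N p[x]).  Then
# P(stop, X = x) = 1/(2N) for every x in A, so the law of X conditioned
# on stopping is exactly uniform on A.
random.seed(0)
N = 50
w = [random.random() for _ in range(N)]
total = sum(w)
p = [wi / total for wi in w]                      # an arbitrary law for X
A = [x for x in range(N) if p[x] >= 1 / (2 * N)]  # good states
stop = {x: 1 / (2 * N * p[x]) for x in A}         # stopping probabilities
assert all(q <= 1 for q in stop.values())         # valid on A by definition of A
joint = {x: p[x] * stop[x] for x in A}            # P(stop and X = x)
Z = sum(joint.values())                           # overall stopping prob = |A|/(2N)
cond = {x: joint[x] / Z for x in A}               # law of X given that we stopped
assert all(abs(v - 1 / (2 * N)) < 1e-12 for v in joint.values())
assert all(abs(v - 1 / len(A)) < 1e-12 for v in cond.values())
```

Iterating this round and collecting the leftover probability is exactly what drives the geometric bound on \({\mathbb {P}}_{x_0,\eta }\left( T=\sigma _{ck+1}\right) \) above.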
Proof of Corollary 1.7
Let \(n=10r\). It suffices to prove the statement of the corollary for X being a random walk on dynamical percolation on \({\mathbb {Z}}_n^d\). By Theorem 1.6 there exists a constant \(a>0\) so that for all n large enough and all x and \(\eta _0\)
$$\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \exists \, t\le \left( n^2+\frac{1}{\mu } \right) (\log n)^a: \, \left\| X_t \right\| \ge r\right) \ge \frac{1}{2}. \end{aligned}$$
The statement of the corollary follows by iteration. \(\square \)