## 1 Introduction

This paper studies random walk on dynamical percolation on the torus $${\mathbb {Z}}^d_n$$. The edges refresh at rate $$\mu \le 1/2$$ and switch to open with probability p and closed with probability $$1-p$$ where $$p>p_c({\mathbb {Z}}^d)$$ with $$p_c({\mathbb {Z}}^d)$$ being the critical probability for bond percolation on $${\mathbb {Z}}^d$$. The random walk moves at rate 1. When its exponential clock rings, the walk chooses one of the 2d adjacent edges with equal probability. If the bond is open, then it makes the jump, otherwise it stays in place.
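To fix ideas, the dynamics just described can be simulated directly. The following is a minimal illustrative sketch (our own, not code from the paper): since the walk's clock rings at rate 1 and each of the $$|E|$$ edges refreshes at rate $$\mu$$, events arrive at total rate $$1+\mu |E|$$ and each event is the walk's clock with probability $$1/(1+\mu |E|)$$, otherwise a uniformly chosen edge refreshes.

```python
import random

def torus_sites(n, d):
    """All n^d sites of the torus Z_n^d as coordinate tuples."""
    sites = [()]
    for _ in range(d):
        sites = [s + (i,) for s in sites for i in range(n)]
    return sites

def simulate(n, d, p, mu, t_max, seed=0):
    """Gillespie-style simulation of the walk on dynamical percolation.
    Each edge is keyed by (site, axis) and joins site to site + e_axis.
    Events occur at total rate 1 + mu*|E|; each event is the walk's clock
    with probability 1/(1 + mu*|E|), otherwise a uniform edge refreshes."""
    rng = random.Random(seed)
    sites = torus_sites(n, d)
    edges = [(s, a) for s in sites for a in range(d)]
    state = {e: rng.random() < p for e in edges}   # start from pi_p
    x, t = sites[0], 0.0
    total_rate = 1.0 + mu * len(edges)
    while True:
        t += rng.expovariate(total_rate)
        if t > t_max:
            return x, state
        if rng.random() < 1.0 / total_rate:        # the walk's clock rings
            a, sign = rng.randrange(d), rng.choice((-1, 1))
            y = tuple((x[i] + sign * (i == a)) % n for i in range(d))
            e = (x, a) if sign == 1 else (y, a)    # the edge between x and y
            if state[e]:                           # jump only if it is open
                x = y
        else:                                      # a uniform edge refreshes
            state[rng.choice(edges)] = rng.random() < p
```

Starting the edge configuration from $$\pi _p$$ as above makes the environment stationary; any deterministic $$\eta _0$$ could be used instead.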

We represent the state of the system by $$(X_t,\eta _t)$$, where $$X_t\in {\mathbb {Z}}_n^d$$ is the location of the walk at time t and $$\eta _t\in \{0,1\}^{E({\mathbb {Z}}_n^d)}$$ is the configuration of edges at time t, where $$E({\mathbb {Z}}_n^d)$$ stands for the edges of the torus. We emphasise at this point that $$(X_t,\eta _t)$$ is Markovian, while the location of the walker $$(X_t)$$ is not.

One easily checks that $$\pi \times \pi _p$$ is the unique stationary distribution and that the process is reversible; here $$\pi$$ is uniform distribution and $$\pi _p$$ is product measure with density p on the edges. Moreover, if the environment $$\{\eta _t\}$$ is fixed, then $$\pi$$ is a stationary distribution for the resulting time inhomogeneous Markov process.

This model was introduced by Peres et al. We emphasise that d and p are considered fixed, while n and $$\mu =\mu _n$$ are the two parameters which vary. The focus there was to study the total variation mixing time of $$(X,\eta )$$, i.e.

\begin{aligned} t_{\mathrm{mix}}(\varepsilon ) = \min \left\{ t\ge 0: \max _{x,\eta _0}\left\| {\mathbb {P}}_{x,\eta _0}\left( (X_t,\eta _t) = (\cdot , \cdot )\right) -\pi \times \pi _p \right\| _{\mathrm{TV}}\le \varepsilon \right\} . \end{aligned}

They focused on the subcritical regime, i.e. when $$p<p_c$$, and proved the following:

### Theorem 1.1

 For all $$d\ge 1$$ and $$p<p_c$$ the mixing time of $$(X,\eta )$$ satisfies

\begin{aligned} t_{\mathrm{mix}}(1/4) \asymp \frac{n^2}{\mu }. \end{aligned}

They also established the same mixing time when one looks at the walk and averages over the environment.

In the present paper we focus on the supercritical regime. We study both the full system and the quenched mixing times. We start by defining the different notions of mixing that we will be using. First of all we write $${\mathbb {P}}_{x,\eta }\left( \cdot \right)$$ for the probability measure of the walk, when the environment process is conditioned to be $$\eta = (\eta _t)_{t\ge 0}$$ and the walk starts from x. We write $${\mathcal {P}}$$ for the distribution of the environment which is dynamical percolation on the torus, a measure on càdlàg paths $$[0,\infty ) \mapsto \{0,1\}^{E({\mathbb {Z}}^d_{n})}$$. We write $${\mathcal {P}}_{\eta _0}$$ to denote the measure $${\mathcal {P}}$$ when the starting environment is $$\eta _0$$. Abusing notation we write $${\mathbb {P}}_{x,\eta _0}\left( \cdot \right)$$ to mean the law of the full system when the walk starts from x and the initial configuration of the environment is $$\eta _0$$. To distinguish it from the quenched law, we always write $$\eta _0$$ in the subscript as opposed to $$\eta$$.

For $$\varepsilon \in (0,1)$$, $$x\in {\mathbb {Z}}_n^d$$ and a fixed environment $$\eta =(\eta _t)_{t\ge 0}$$ we write

\begin{aligned} t_{\mathrm {mix}}(\varepsilon , x, \eta ) = \min \left\{ t\ge 0:\left\| {\mathbb {P}}_{x,\eta }\left( X_t=\cdot \right) - \pi \right\| _{\mathrm{TV}} \le \varepsilon \right\} . \end{aligned}

We also write

\begin{aligned} t_{\mathrm {mix}}(\varepsilon , \eta ) = \max _x \,t_{\mathrm {mix}}(\varepsilon , x, \eta ) \end{aligned}

for the quenched $$\varepsilon$$-mixing time. We remark that $$t_{\mathrm {mix}}(\varepsilon , \eta )$$ could be infinite. With the obvious definitions, the standard inequality $$t_{\mathrm{mix}}(\varepsilon )\le \log _2(\tfrac{1}{\varepsilon }) t_{\mathrm{mix}}(\tfrac{1}{4})$$ fails for time-inhomogeneous Markov chains, and hence also for quenched mixing times. In such situations it is therefore more natural, when describing the rate of convergence to stationarity, to bound $$t_{\mathrm{mix}}(\varepsilon , \eta )$$ for all $$\varepsilon$$ rather than only for $$\varepsilon =1/4$$ as is usually done.
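For concreteness, here is a small sketch (our illustration, not from the paper) of how $$t_{\mathrm {mix}}(\varepsilon , x, \eta )$$ can be evaluated once the environment is fixed, i.e. once the walk is reduced to a finite sequence of transition kernels: push the point mass at x through the kernels and record the first time the total variation distance to $$\pi$$ drops below $$\varepsilon$$.

```python
def tv_distance(mu1, mu2):
    """Total variation distance between two distributions given as lists."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu1, mu2))

def quenched_mixing_time(kernels, x, pi, eps):
    """First t with ||P_{x,eta}(X_t = .) - pi||_TV <= eps, where kernels[t]
    drives the step t -> t+1; returns None if stationarity is not reached
    within the supplied kernels (the quenched mixing time may be infinite)."""
    n = len(pi)
    dist = [float(y == x) for y in range(n)]       # point mass at x
    if tv_distance(dist, pi) <= eps:
        return 0
    for t, P in enumerate(kernels, start=1):
        dist = [sum(dist[z] * P[z][y] for z in range(n)) for y in range(n)]
        if tv_distance(dist, pi) <= eps:
            return t
    return None
```

Feeding a genuinely time-varying sequence of kernels models a fixed environment $$\eta$$; a constant sequence recovers the time homogeneous case.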

We first mention the result from the companion paper, which is an upper bound on the quenched mixing time and the hitting time of large sets for all values of p. We write $$\tau _A$$ for the first time X hits the set A.

### Theorem 1.2

 For all $$d\ge 1$$ and $$\delta >0$$, there exists $$C=C(d,\delta )<\infty$$ so that for all $$p\in [\delta ,1-\delta ]$$, for all $$\mu \le 1/2$$, for all n and for all $$\varepsilon$$, random walk in dynamical percolation on $${\mathbb {Z}}_n^d$$ with parameters p and $$\mu$$ satisfies for all x

\begin{aligned} \max _{\eta _0}{\mathcal {P}}_{\eta _0}\left( \eta = (\eta _t)_{t\ge 0}: \,t_{\mathrm {mix}}(\varepsilon , x, \eta )\ge \frac{Cn^2\log (1/\varepsilon )}{\mu ^4}\right) \le \varepsilon . \end{aligned}
(1.1)

Moreover, there exists a constant $$c=c(d,\delta )<1$$, so that for all $$A\subseteq {\mathbb {Z}}_n^d$$ with $$|A|\ge n^d/2$$ and all k we have

\begin{aligned}&\max _{\eta _0}\,{\mathcal {P}}_{\eta _0}\left( \eta =(\eta _t)_{t\ge 0}: \, \max _x{\mathbb {E}}_{x,\eta }\left[ \tau _A\right] \ge k\cdot \frac{Cn^2\log n}{\mu }\right) \le c^k \text { and} \nonumber \\&\quad \max _{x,\eta _0} {\mathbb {E}}_{x,\eta _0}\left[ \tau _A\right] \le \frac{Cn^2}{\mu }. \end{aligned}
(1.2)

Our first result concerns the quenched mixing time in the case when $$\theta (p)>1/2$$.

### Theorem 1.3

Let $$p\in (p_c({\mathbb {Z}}^d), 1)$$ with $$\theta (p)>1/2$$ and $$\mu \le 1/2$$. Then there exists $$a>0$$ (depending only on d and p) so that for all n sufficiently large we have

\begin{aligned} \sup _{\eta _0}{\mathcal {P}}_{\eta _0}\left( \eta = (\eta _t)_{t\ge 0}: \, t_{\mathrm{mix}}\left( n^{-3d}, \eta \right) \ge (\log n)^a \left( n^2 + \frac{1}{\mu }\right) \right) \le \frac{1}{n^d}. \end{aligned}

### Remark 1.4

We note that when $$1/\mu <(\log n)^b$$ for some $$b>0$$, then Theorem 1.3 follows from Theorem 1.2. (Take $$\varepsilon =n^{-3d}$$ in (1.1) and do a union bound over x.) So we are going to prove Theorem 1.3 in the regime when $$1/\mu > (\log n)^{d+2}$$.

Our second result concerns the mixing time at a stopping time in the quenched regime for all values of $$p>p_c({\mathbb {Z}}^d)$$. We first give the definition of this notion of mixing time that we are using.

### Definition 1.5

For $$\varepsilon \in (0,1)$$ and a fixed environment $$\eta =(\eta _t)_{t\ge 0}$$ we define

\begin{aligned} t_{\mathrm {stop}}(\varepsilon ,\eta )&= \max _{x} \inf \big \{ {\mathbb {E}}_{x,\eta }\left[ T\right] : \, T \text { randomised stopping time s.t.} \\&\quad \left\| {\mathbb {P}}_{x,\eta }\left( X_T=\cdot \right) - \pi \right\| _{\mathrm{TV}} \le \varepsilon \big \}. \end{aligned}

### Theorem 1.6

Let $$p\in (p_c({\mathbb {Z}}^d),1)$$, $$\varepsilon >0$$ and $$\mu \le 1/2$$. Then there exists $$a>0$$ (depending only on d and p) so that for all n sufficiently large we have

\begin{aligned} \inf _{\eta _0} {\mathcal {P}}_{\eta _0}\left( \eta =(\eta _t)_{t\ge 0}: \, t_{\mathrm {stop}}(\varepsilon ,\eta ) \le (\log n)^a \left( n^2+\frac{1}{\mu } \right) \right) = 1-o(1). \end{aligned}

Finally we give a consequence for random walk on dynamical percolation on all of $${\mathbb {Z}}^d$$. This is defined analogously to the process on the torus $${\mathbb {Z}}_n^d$$ above, where the edges refresh at rate $$\mu$$.

### Corollary 1.7

Let $$p\in (p_c({\mathbb {Z}}^d),1)$$ and $$\mu \le 1/2$$. Let X be the random walk on dynamical percolation on $${\mathbb {Z}}^d$$ and for $$r>0$$ let

\begin{aligned} \tau _r = \inf \{t\ge 0: \, \left\| X_t \right\| \ge r\}. \end{aligned}

Then there exists $$a>0$$ (depending only on d and p) so that for all r

\begin{aligned} \sup _{\eta _0}{\mathbb {E}}_{0,\eta _0}\left[ \tau _r\right] \le \left( r^2+\frac{1}{\mu } \right) (\log r)^a. \end{aligned}

Notation For positive functions f, g we write $$f\sim g$$ if $$f(n)/g(n)\rightarrow 1$$ as $$n\rightarrow \infty$$. We also write $$f(n) \lesssim g(n)$$ if there exists a constant $$c \in (0,\infty )$$ such that $$f(n) \le c g(n)$$ for all n, and $$f(n) \gtrsim g(n)$$ if $$g(n) \lesssim f(n)$$. Finally, we use the notation $$f(n) \asymp g(n)$$ if both $$f(n) \lesssim g(n)$$ and $$f(n) \gtrsim g(n)$$.

Related work Various references to random walk on dynamical percolation have been provided in . In a different direction from the one we pursue, Andres et al. have recently obtained a quenched invariance principle for random walks with time-dependent ergodic degenerate weights that are assumed to take strictly positive values. More recently, Biskup and Rodriguez were able to handle the case where the weights can be zero, and hence the dynamical percolation case.

### 1.1 Overview of the proof

In this subsection we explain the high level idea of the proof and also give the structure of the paper. First we note that when we fix the environment to be $$\eta$$, we obtain a time inhomogeneous Markov chain. To study its mixing time, we use the theory of evolving sets developed by Morris and Peres adapted to the inhomogeneous setting, which was done in . We recall this in Sect. 2. In particular we state a theorem by Diaconis and Fill that gives a coupling of the chain with the Doob transform of the evolving set. (Diaconis and Fill proved it in the time homogeneous setting, but the adaptation to the time inhomogeneous setting is straightforward.) The importance of the coupling is that conditional on the Doob transform of the evolving set up to time t, the random walk at time t is uniform on the Doob transform at the same time. This property of the coupling is going to be crucial for us in the proofs of Theorems 1.3 and 1.6.

The size of the Doob transform of the evolving set in the inhomogeneous setting is again a submartingale, as in the homogeneous case. The crucial quantity we want to control is the amount by which its size increases. This increase will be large only at good times, i.e. when the intersection of the Doob transform of the evolving set with the giant cluster is a substantial proportion of the evolving set. Hence we want to ensure that there are enough good times. We would like to emphasise that in this case we are using the random walk to infer properties of the evolving set. More specifically, in Sect. 4 we give an upper bound on the time it takes the random walk to hit the giant component. Using this and the coupling of the walk with the evolving set, in Sect. 5 we establish that there are enough good times. We then employ a result of Gábor Pete which states that the isoperimetric profile of a set in a supercritical percolation cluster coincides with its lattice profile. We apply this result to the sequence of good times, and hence obtain a good drift for the size of the evolving set at these times.

We conclude Sect. 5 by constructing a stopping time upper bounded by $$(n^2+1/\mu )(\log n)^a$$ with high probability so that at this time the Doob transform of the evolving set has size at least $$(1-\delta ) (\theta (p)-\delta ) n^d$$. In the case when $$\theta (p)>1/2$$ we can take $$\delta >0$$ sufficiently small so that $$(1-\delta ) (\theta (p)-\delta )>1/2$$. Using the uniformity of the walk on the Doob transform of the evolving set again, we deduce that at this stopping time the walk is close to the uniform distribution in total variation with high probability. This yields Theorem 1.3.

To finish the proof of Theorem 1.6 the idea is to repeat the above procedure to obtain $$k=k(\varepsilon )$$ sets whose union covers at least $$1-\delta$$ of the whole space for a sufficiently small $$\delta$$. Then we define $$\tau$$ by choosing one of these times uniformly at random. At time $$\tau$$ the random walk will be uniform on a set with measure at least $$1-\delta$$, and hence this means that the total variation from the uniform distribution at this time is going to be small. Since this time is with high probability smaller than k times the mixing time, this finishes the proof.

## 2 Evolving sets for inhomogeneous Markov chains

In this section we give the definition of the evolving set process for a discrete time inhomogeneous Markov chain.

Given a general transition matrix $$p(\cdot , \cdot )$$ with state space $$\Omega$$ and stationary distribution $$\pi$$ we let for $$A,B\subseteq \Omega$$

\begin{aligned} Q_{p}(A,B):=\sum _{x\in A,y\in B} \pi (x)p(x,y). \end{aligned}
(2.1)

When $$B=\{y\}$$ we simply write $$Q_p(A,y)$$ instead of $$Q_p(A,\{y\})$$.

We first recall the definition of evolving sets in the context of a finite state discrete time Markov chain with state space $$\Omega$$, transition matrix p(x,y) and stationary distribution $$\pi$$. The evolving-set process $$\{S_n\}_{n\ge 0}$$ is a Markov chain on subsets of $$\Omega$$ whose transitions are described as follows. Let U be a uniform random variable on [0, 1]. If $$S\subseteq \Omega$$ is the present state, we let the next state $$\widetilde{S}$$ be defined by

\begin{aligned} \widetilde{S}:=\left\{ y\in \Omega : \frac{Q_p(S,y)}{\pi (y)}\ge U\right\} . \end{aligned}

We remark that $$Q_p(S,y)/\pi (y)$$ is the probability that the reversed chain starting at y is in S after one step. Note that $$\Omega$$ and $$\varnothing$$ are absorbing states and it is immediate to check that

\begin{aligned} {\mathbb {P}}_{}\left( y\in S_{k+1}\;\vert \;S_k\right) =\frac{Q_p(S_k,y)}{\pi (y)}. \end{aligned}
(2.2)

Moreover, one can describe the evolving set process as that process on subsets which satisfies the “one-dimensional marginal” condition (2.2) and where these different events, as we vary y, are maximally coupled.
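Since U is a single uniform threshold shared by all states, the one-step distribution of the evolving set can be enumerated exactly by sweeping U over the distinct values of $$Q_p(S,y)/\pi (y)$$. The sketch below (an illustration, not code from any reference) does this for a small chain; the marginal condition (2.2) can then be checked directly.

```python
def evolving_set_step_dist(P, pi, S):
    """Exact one-step distribution of the evolving set started from S.
    With h(y) = Q_p(S,y)/pi(y), the next set is {y : h(y) >= U} for a
    single uniform U on [0,1], so sweeping U over the distinct levels of
    h enumerates every possible next set together with its probability."""
    n = len(pi)
    h = [sum(pi[x] * P[x][y] for x in S) / pi[y] for y in range(n)]
    levels = sorted(set(h) | {0.0, 1.0})
    # for U in (lo, hi], the next set is constant, namely {y : h(y) >= hi}
    return [(frozenset(y for y in range(n) if h[y] >= hi), hi - lo)
            for lo, hi in zip(levels, levels[1:])]
```

For each y, summing the probabilities of the sets containing y recovers $$Q_p(S,y)/\pi (y)$$, which is exactly (2.2).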

For a transition matrix p with stationary distribution $$\pi$$ we define for S with $$\pi (S)> 0$$

\begin{aligned} \varphi _{p}(S):=\frac{Q_{p}(S,S^c)}{\pi (S)} \quad \text { and } \quad \psi _{p}(S):=1-{\mathbb {E}}\left[ \sqrt{\frac{\pi (\widetilde{S})}{\pi (S)}}\right] , \end{aligned}

where $$\widetilde{S}$$ is the first step of the evolving set process started from S when the transition probability for the Markov chain is p and as always the stationary distribution is $$\pi$$.

For $$r\in [\min _x\pi (x), 1/2]$$ we define $$\psi _p(r) :=\inf \{\psi _p(S): \pi (S)\le r\}$$ and $$\psi _p(r)=\psi _p(1/2)$$ for $$r\ge 1/2$$. We define $$\varphi _p(r)$$ analogously. We now recall a lemma from Morris and Peres  that will be useful later.

### Lemma 2.1

[7, Lemma 10] Let $$0<\gamma \le 1/2$$ and let p be a transition matrix on the finite state space $$\Omega$$ with $$p(x,x)\ge \gamma$$ for all x. Let $$\pi$$ be a stationary distribution. Then for all sets $$S\subseteq \Omega$$ with $$\pi (S)>0$$ we have

\begin{aligned} 1-\psi _{p}(S)\le 1-\frac{\gamma ^2}{2(1-\gamma )^2}\cdot (\varphi _{p}(S))^2. \end{aligned}

We next define, completely analogously to the time homogeneous case, the evolving set process in the context of a time inhomogeneous Markov chain with a stationary distribution $$\pi$$. Consider a time inhomogeneous Markov chain with state space $${\mathcal {S}}$$ whose transition matrix for moving from time k to time $$k+1$$ is given by $$p_{k+1}(x,y)$$ where we assume that the probability measure $$\pi$$ is a stationary distribution for each $$p_k$$. In this case, we say that $$\pi$$ is a stationary distribution for the inhomogeneous Markov chain. Let $$Q_k= Q_{p_k}$$ be as defined in (2.1). We then obtain a time inhomogeneous Markov chain $$S_0,S_1,\ldots$$ on subsets of $${\mathcal {S}}$$ generated by

\begin{aligned} {S}_{k+1}:=\left\{ y\in {\mathcal {S}}: \frac{Q_{k+1}(S_k,y)}{\pi (y)}\ge U\right\} \end{aligned}

where U is as before a uniform random variable on [0, 1]. We call this the evolving set process with respect to $$p_1,p_2,\ldots$$ and stationary distribution $$\pi$$.

We now define the Doob transform of the evolving set process associated to a time inhomogeneous Markov chain. If $$K_p$$ is the transition probability for the evolving set process when the transition matrix for the Markov chain is p, then we define the Doob transform with respect to being absorbed at $$\Omega$$ via

\begin{aligned} {\widehat{K}}_p(S,S') = \frac{\pi (S')}{\pi (S)} K_p(S,S'). \end{aligned}
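Because $$\pi (S_n)$$ is a martingale under the evolving set dynamics, each row of $${\widehat{K}}_p$$ sums to 1, so the Doob transform is again a transition kernel, and the empty set receives weight 0, reflecting the conditioning on absorption at $$\Omega$$. A small illustrative check (our sketch, enumerating the evolving-set step exactly by sweeping the uniform threshold):

```python
def doob_row(P, pi, S):
    """One row of the Doob transform K_hat(S, .) = (pi(.)/pi(S)) K(S, .),
    with K(S, .) enumerated exactly: for U in an interval between two
    consecutive levels of h(y) = Q_p(S,y)/pi(y), the next set is constant."""
    n = len(pi)
    h = [sum(pi[x] * P[x][y] for x in S) / pi[y] for y in range(n)]
    pi_S = sum(pi[x] for x in S)
    levels = sorted(set(h) | {0.0, 1.0})
    row = {}
    for lo, hi in zip(levels, levels[1:]):
        T = frozenset(y for y in range(n) if h[y] >= hi)
        w = (hi - lo) * sum(pi[y] for y in T) / pi_S   # reweighted by pi(T)/pi(S)
        if w > 0:
            row[T] = row.get(T, 0.0) + w
    return row
```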

The following coupling of the time inhomogeneous Markov chain with the Doob transform of the evolving set will be crucial in the rest of the paper. The proof is identical to the proof in the homogeneous setting by Diaconis and Fill; see for instance [5, Theorem 17.23].

### Theorem 2.2

Let X be a time inhomogeneous Markov chain. Then there exists a Markovian coupling of X and the Doob transform $$(S_t)$$ of the associated evolving sets so that for all starting points x and all times t we have $$X_0=x$$, $$S_0=\{x\}$$ and for all w

\begin{aligned} {\mathbb {P}}\left( X_t=w\;\vert \;S_0,\ldots ,S_t\right) = \frac{\pi (w)\mathbf{1}(w\in S_t)}{\pi (S_t)}. \end{aligned}

We write $$\varphi _n = \varphi _{p_n}$$ and $$\psi _n=\psi _{p_n}$$, where $$p_n$$ is the transition matrix at time n.

As in  we let

\begin{aligned} S^{\#}:= \left\{ \begin{array}{ccl} S &{} \quad \text{ if } \pi (S)\le \frac{1}{2} \\ S^c &{} \quad \text{ otherwise } \end{array} \right. \end{aligned}

and

\begin{aligned} Z_n:=\frac{\sqrt{\pi (S^{\#}_n)}}{\pi (S_n)}. \end{aligned}
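Note that $$Z_n$$ depends on $$S_n$$ only through its measure: writing $$s=\pi (S_n)$$ we have $$\pi (S_n^{\#})=\min (s,1-s)$$ and hence $$Z_n=\sqrt{\min (s,1-s)}/s$$. A two-line numerical check (our sketch) of the two facts used later in the proof of Lemma 2.3, namely $$Z_n=s^{-1/2}$$ for $$s\le 1/2$$ and $$Z_n\le \sqrt{2}$$ for $$s>1/2$$:

```python
import math

def Z(s):
    """Z = sqrt(pi(S^#)) / pi(S) as a function of s = pi(S) in (0, 1),
    using pi(S^#) = min(s, 1 - s)."""
    return math.sqrt(min(s, 1.0 - s)) / s

# the two facts used in the proof of Lemma 2.3, checked on a grid:
for s in (i / 1000 for i in range(1, 1000)):
    if s <= 0.5:
        assert abs(Z(s) - s ** -0.5) < 1e-9
    else:
        assert Z(s) <= math.sqrt(2) + 1e-12
```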

The following lemma follows in the same way as in the homogeneous setting of , but we include the proof for the reader’s convenience.

### Lemma 2.3

Let S be the Doob transform with respect to absorption at $$\Omega$$ of the evolving set process associated to a time inhomogeneous Markov chain X with $${\mathbb {P}}_{}\left( X_{n+1}=x\;\vert \;X_n=x\right) \ge \gamma$$ for all n and x, where $$0<\gamma \le 1/2$$. Then for all n and all $$S_0\ne \varnothing$$ we have

\begin{aligned} \mathbb {\widehat{E}}_{}\left[ Z_{n+1}\;\vert \;{\mathcal {F}}_n\right] \le Z_n \left( 1- \frac{\gamma ^2}{2(1-\gamma )^2}\left( \varphi _{n+1}\left( \frac{1}{Z_n^{2}}\right) \right) ^2 \right) , \end{aligned}

where $${\mathcal {F}}_n$$ stands for the filtration generated by $$(S_i)_{i\le n}$$.

### Proof

Using the transition probability of the Doob transform of the evolving set, we almost surely have

\begin{aligned} \mathbb {\widehat{E}}_{}\left[ \frac{Z_{n+1}}{Z_n}\;\Big \vert \;{\mathcal {F}}_n\right] = {\mathbb {E}}_{}\left[ \frac{\pi (S_{n+1})}{\pi (S_n)} \cdot \frac{Z_{n+1}}{Z_n}\;\Big \vert \;{\mathcal {F}}_n\right] = {\mathbb {E}}_{}\left[ \sqrt{\frac{\pi (S_{n+1}^{\#})}{\pi (S_n^{\#})}}\;\Big \vert \;{\mathcal {F}}_n\right] . \end{aligned}

If $$\pi (S_n)\le 1/2$$, then

\begin{aligned} {\mathbb {E}}_{}\left[ \sqrt{\frac{\pi (S_{n+1}^{\#})}{\pi (S_n^{\#})}}\;\Big \vert \;{\mathcal {F}}_n\right] \le {\mathbb {E}}_{}\left[ \sqrt{\frac{\pi (S_{n+1}^{})}{\pi (S_n^{})}}\;\Big \vert \;{\mathcal {F}}_n\right] \le 1- \psi _{n+1}(\pi (S_n)). \end{aligned}

Suppose next that $$\pi (S_n)>1/2$$. Then

\begin{aligned} {\mathbb {E}}_{}\left[ \sqrt{\frac{\pi (S_{n+1}^{\#})}{\pi (S_n^{\#})}}\;\Big \vert \;{\mathcal {F}}_n\right] \le {\mathbb {E}}_{}\left[ \sqrt{\frac{\pi (S_{n+1}^{c})}{\pi (S_n^{c})}}\;\Big \vert \;{\mathcal {F}}_n\right] \le 1- \psi _{n+1}(\pi (S_n^c)). \end{aligned}

Lemma 2.1 and the fact that $$\varphi _{n+1}$$ is decreasing now give that

\begin{aligned} \mathbb {\widehat{E}}_{}\left[ \frac{Z_{n+1}}{Z_n}\;\Big \vert \;{\mathcal {F}}_n\right] \le 1- \frac{\gamma ^2}{2(1-\gamma )^2}\cdot (\varphi _{n+1}(\pi (S_n)))^2. \end{aligned}

Now note that if $$\pi (S_n)\le 1/2$$, then $$Z_n = (\pi (S_n))^{-1/2}$$. If $$\pi (S_n) >1/2$$, then $$Z_n = \sqrt{\pi (S_n^c)}/\pi (S_n)\le \sqrt{2}$$. Since $$\varphi _{n+1}(r)=\varphi _{n+1}(1/2)$$ for all $$r>1/2$$, we get that we always have

\begin{aligned} \varphi _{n+1}(\pi (S_n)) = \varphi _{n+1}\left( \frac{1}{Z_n^{2}}\right) \end{aligned}

and this concludes the proof. $$\square$$

## 3 Preliminaries on supercritical percolation

In this section we collect some standard results for supercritical percolation on $${\mathbb {Z}}_n^d$$ that will be used throughout the paper. We write $${\mathcal {B}}(x,r)$$ for the box in $${\mathbb {Z}}^d$$ centred at x of side length r. We also use $${\mathcal {B}}(x,r)$$ to denote the obvious subset of $${\mathbb {Z}}_n^d$$ whenever $$r<n$$. We denote by $$\partial {\mathcal {B}}(x,r)$$ the inner vertex boundary of the ball.

The following lemma might follow from known results but as we could not find a reference, we include its proof.

### Lemma 3.1

Let $$A\subseteq {\mathbb {Z}}_n^d$$ be a deterministic set with $$|A| = \alpha n^d$$, where $$\alpha \in (0,1]$$. Let $${\mathcal {G}}$$ be the giant cluster of supercritical percolation in $${\mathbb {Z}}_n^d$$ with parameter $$p>p_c$$. Then for all $$\varepsilon \in (0,\theta (p))$$ there exists a positive constant c depending on $$\varepsilon , d, p, \alpha$$ so that for all n

\begin{aligned} {\mathbb {P}}\left( |A\cap {\mathcal {G}}| \notin \left( \alpha (\theta (p)-\varepsilon )n^d, \alpha (\theta (p)+\varepsilon )n^d \right) \right) \le \frac{1}{c} \exp \left( -cn^{\frac{d}{d+1}}\right) . \end{aligned}

### Proof

Let $$\beta \in (0,1)$$ to be determined later. We start the proof by showing that with high probability a certain fraction of the points in A percolate to distance $$n^\beta /2$$. More precisely, we let $$A(x) = \{x \leftrightarrow \partial {\mathcal {B}}(x,n^\beta )\}$$. We will first show that for all sets $$D\subseteq {\mathbb {Z}}_n^d$$ with $$|D|=\gamma n^d$$, where $$\gamma \in (0,1]$$, and for all $$\varepsilon \in (0,\theta (p))$$ there exists $$c>0$$ depending on $$\varepsilon , d, p, \gamma$$ so that for all n (3.1)

Let $${\mathcal {L}}$$ be a lattice of points contained in $${\mathbb {Z}}_n^d$$ that are at distance $$n^\beta$$ apart. Then $${\mathcal {L}}$$ contains $$n^{d(1-\beta )}$$ points, and hence there exist $$n^{d\beta }$$ such lattices. By a union bound over all such lattices $${\mathcal {L}}$$ we now have (3.2)

Using the standard coupling between bond percolation on the torus and the whole lattice and [4, Theorems 8.18 and 8.21] we get

\begin{aligned} {\mathbb {P}}\left( A(x)\right)&= {\mathbb {P}}\left( x\in {{\mathcal {C}}}_\infty \right) + {\mathbb {P}}\left( x\notin {{\mathcal {C}}}_\infty , x\leftrightarrow {\mathcal {B}}(x,n^\beta )\right) \ge \theta (p) \quad \text { and } \nonumber \\ {\mathbb {P}}\left( A(x)\right)&\le \theta (p)+ e^{-cn^\beta } \end{aligned}
(3.3)

for some constant c depending only on d and p. We now fix a lattice $${\mathcal {L}}$$. So for all n large enough we can upper bound the probability appearing in (3.2) by We now note that for points $$x\in {\mathcal {L}}\cap D$$ the events A(x) are independent. Using a concentration inequality for sums of i.i.d. random variables and the fact that $$|{\mathcal {L}}\cap D|\le n^{d(1-\beta )}$$ we obtain where c is a positive constant depending on $$\gamma$$ and $$\varepsilon$$. Plugging this back into (3.2) gives (3.4)

for a possibly different constant c.

We next turn to prove that

\begin{aligned} {\mathbb {P}}\left( |A\cap {\mathcal {G}}|\le \alpha (\theta (p)-\varepsilon )n^d\right) \lesssim \exp \left( -cn^{\frac{d}{d+1}} \right) . \end{aligned}
(3.5)

From (3.3) and using a union bound we now get (3.6)

Using [4, Theorem 8.21] we deduce that for all $$\delta \in (0,1)$$, there exists a constant c (depending on $$\delta$$, d and p) so that for large n and for all $$x,y\in {\mathcal {B}}(0,n(1-\delta ))$$ Using this and a union bound we now get (3.7)

Take $$\widetilde{\varepsilon }>0$$ and $$\delta \in (0,1)$$ such that $$(1+\theta (p)-\widetilde{\varepsilon })(1-\delta )^d>1$$. It follows that if there are at least $$(\theta (p)-\widetilde{\varepsilon })(1-\delta )^d n^d$$ points connected to each other in $${\mathcal {B}}(0,n(1-\delta ))$$, then the giant cannot be contained in $${\mathcal {B}}(0,n){\setminus } {\mathcal {B}}(0,n(1-\delta ))$$. This observation and (3.4) (with $$D={\mathcal {B}}(0,n(1-\delta ))$$ and so $$\gamma = (1-\delta )^d$$ and $$\varepsilon =\widetilde{\varepsilon }$$) together with (3.6) and (3.7) give

\begin{aligned} {\mathbb {P}}\left( \exists \, x\in {\mathcal {B}}(0,(1-\delta )n): \, A(x)\cap \{x\notin {\mathcal {G}}\}\right) \lesssim e^{-cn}+e^{-cn^{\beta }} + e^{-cn^{d(1-\beta )}}. \end{aligned}

Taking $$\beta =d/(d+1)$$ so that $$\beta =d(1-\beta )$$ we obtain

\begin{aligned} {\mathbb {P}}\left( \exists \, x\in {\mathcal {B}}(0,(1-\delta )n): \, A(x)\cap \{x\notin {\mathcal {G}}\}\right) \lesssim e^{-cn^{d/(d+1)}}. \end{aligned}
(3.8)

Let now $$\widetilde{A} = A\cap {\mathcal {B}}(0,n(1-\delta ))$$. Let $${\varepsilon '}$$ be such that $$(\alpha -{\varepsilon '})(\theta (p)-{\varepsilon '}) = \alpha (\theta (p) -\varepsilon )$$. By decreasing $$\delta$$ if necessary we get that $$|\widetilde{A}| \ge (\alpha -{\varepsilon '})n^d$$. So applying (3.1) we obtain This together with (3.8) finally gives By the choice of $${\varepsilon '}$$ this proves (3.5). To finish the proof of the lemma it only remains to show that Using (3.1) we can upper bound this probability by where the last inequality follows from (3.5) by taking $$A={\mathbb {Z}}_n^d$$. $$\square$$
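The statement of Lemma 3.1 can be probed numerically on a small two-dimensional torus. Below is a minimal union-find sketch (our illustration; the parameters $$n=60$$ and $$p=0.7>p_c({\mathbb {Z}}^2)=1/2$$ are arbitrary choices) that samples a percolation configuration, extracts a largest cluster, and lets one check that a deterministic set A covering half the sites receives roughly half of the giant.

```python
import random

class DSU:
    """Union-find over the n*n torus sites, used to extract open clusters."""
    def __init__(self, size):
        self.parent = list(range(size))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def largest_cluster(n, p, seed=0):
    """Bond percolation with parameter p on the torus Z_n^2: each of the
    2n^2 edges is open independently with probability p; returns the
    vertex set of a largest open cluster (sites encoded as i*n + j)."""
    rng = random.Random(seed)
    dsu = DSU(n * n)
    idx = lambda i, j: (i % n) * n + (j % n)
    for i in range(n):
        for j in range(n):
            if rng.random() < p:                  # edge in the first axis
                dsu.union(idx(i, j), idx(i + 1, j))
            if rng.random() < p:                  # edge in the second axis
                dsu.union(idx(i, j), idx(i, j + 1))
    clusters = {}
    for v in range(n * n):
        clusters.setdefault(dsu.find(v), []).append(v)
    return set(max(clusters.values(), key=len))
```

On such samples the density of the giant concentrates near $$\theta (p)$$, and for a fixed half-torus A the fraction $$|A\cap {\mathcal {G}}|/|{\mathcal {G}}|$$ concentrates near 1/2, in line with the lemma.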

### Corollary 3.2

Let $${\mathcal {G}}_1,{\mathcal {G}}_2, \ldots$$ be the giant components of i.i.d. percolation configurations with $$p>p_c$$ in $${\mathbb {Z}}_n^d$$. Fix $$\delta \in (0,1/4)$$ and let $$k=[2(1-\delta )/(\delta \theta (p))]+1$$. Then there exists a positive constant c so that

\begin{aligned} {\mathbb {P}}\left( |{\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_k|<(1-\delta )n^d\right) \le \frac{1}{c}\exp \left( -cn^{\frac{d}{d+1}}\right) . \end{aligned}

### Proof

We start by noting that

\begin{aligned} |{\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_k| = \sum _{i=1}^{k}|{\mathcal {G}}_i{\setminus } ({\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_{i-1})|, \end{aligned}

where we set $${\mathcal {G}}_0=\varnothing$$. Therefore, by the choice of k we obtain

\begin{aligned}&{\mathbb {P}}\left( |{\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_k|<(1-\delta )n^d\right) \\&\quad \le {\mathbb {P}}\left( \exists \, i\le k: |{\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_{i-1}|<(1-\delta )n^d, |{\mathcal {G}}_i{\setminus }({\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_{i-1})|< \frac{\delta \theta (p)}{2} n^d\right) . \end{aligned}

For any i, since the percolation clusters are independent, by conditioning on $${\mathcal {G}}_1,\ldots , {\mathcal {G}}_{i-1}$$ and using Lemma 3.1 we get

\begin{aligned}&{\mathbb {P}}\left( |{\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_{i-1}|<(1-\delta )n^d, |{\mathcal {G}}_i{\setminus }({\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_{i-1})|< \frac{1}{2}\delta \theta (p)n^d\right) \\&\quad \le \frac{1}{c} \exp \left( -cn^{\frac{d}{d+1}}\right) . \end{aligned}

Thus by the union bound we obtain

\begin{aligned} {\mathbb {P}}\left( |{\mathcal {G}}_1\cup \cdots \cup {\mathcal {G}}_k|<(1-\delta )n^d\right) \le \frac{k}{c} \exp \left( -cn^{\frac{d}{d+1}}\right) \le \frac{1}{c'}\exp \left( -c'n^{\frac{d}{d+1}}\right) , \end{aligned}

where $$c'$$ is a positive constant and this concludes the proof. $$\square$$

We perform percolation in $${\mathbb {Z}}_n^d$$ with parameter $$p>p_c$$. Let $${{\mathcal {C}}}_1, {{\mathcal {C}}}_2, \ldots$$ be the clusters in decreasing order of their size. We write $${{\mathcal {C}}}(x)$$ for the cluster containing the vertex $$x\in {\mathbb {Z}}_n^d$$. For any $$A\subseteq {\mathbb {Z}}_n^d$$, we denote by $${\mathrm{diam}}(A)$$ the diameter of A.

### Proposition 3.3

There exists a constant c so that for all r and for all n we have

\begin{aligned} {\mathbb {P}}\left( \exists i\ge 2: \, {\mathrm{diam}}({{\mathcal {C}}}_i)\ge r\right) \le n^d e^{-cr} + \exp \left( -cn^{\frac{d}{d+1}} \right) . \end{aligned}

### Proof

We write $${\mathcal {B}}_r={\mathcal {B}}(0,r)$$, where as before $${\mathcal {B}}(0,r)$$ denotes the box of side length r centred at 0. Then we have Using the standard coupling between bond percolation on $${\mathbb {Z}}_n^d$$ and bond percolation on $${\mathbb {Z}}^d$$ and [4, Theorems 8.18 and 8.21] we obtain Lemma 3.1 now gives us that

\begin{aligned} {\mathbb {P}}\left( \{{{\mathcal {C}}}_1\cap {\mathcal {B}}_{n/4}= \varnothing \}\cup \{{{\mathcal {C}}}_1\cap ({\mathcal {B}}_{n}{\setminus } {\mathcal {B}}_{3n/4})=\varnothing \}\right) \lesssim \exp \left( -cn^{\frac{d}{d+1}} \right) . \end{aligned}

So this now implies But using [4, Lemma 7.89] we obtain Taking a union bound over all the points of the torus concludes the proof. $$\square$$

### Corollary 3.4

Consider now dynamical percolation on $${\mathbb {Z}}_n^d$$ with $$p>p_c$$, where the edges refresh at rate $$\mu$$, started from stationarity. Let $${{\mathcal {C}}}_1(t)$$ denote the giant cluster at time t. Then for all $$k\in {\mathbb {N}}$$, there exists a positive constant c so that for all $$\varepsilon <\theta (p)$$ we have as $$n\rightarrow \infty$$

\begin{aligned}&{\mathbb {P}}\big (|{{\mathcal {C}}}_1(t)|\in ((\theta (p)-\varepsilon ) n^d, (\theta (p)+\varepsilon ) n^d) \text { and } {\mathrm{diam}}({{\mathcal {C}}}_i(t))\le c\log n, \\&\quad \forall t\le n^k/\mu , \, \forall i\ge 2\big )\rightarrow 1. \end{aligned}

### Remark 3.5

Let $$\partial A$$ denote the edge boundary of a set $$A\subseteq {\mathbb {Z}}^d$$. This is how $$\partial A$$ will be used from now on. Using then the obvious bound that $$|\partial A| \le (2d)|A|\le 2d({\mathrm{diam}}(A))^d$$ on the event of Corollary 3.4 we get that for all $$i\ge 2$$

\begin{aligned} |\partial {{\mathcal {C}}}_i|\le 2d|{{\mathcal {C}}}_i|\le 2d(c\log n)^d. \end{aligned}

## 4 Hitting the giant component

In this section we give an upper bound on the time it takes the random walk to hit the giant component. From now on we fix $$d\ge 2$$ and $$p>p_c({\mathbb {Z}}^d)$$, and as before X is the random walk on the dynamical percolation process where the edges refresh at rate $$\mu$$.

Notation For every $$t>0$$ we denote by $${\mathcal {G}}_t$$ the giant component of the dynamical percolation process $$(\eta _t)$$ breaking ties randomly. (As we saw in Corollary 3.4 with high probability there are no ties in the time interval that we consider.)

### Proposition 4.1

(Annealed estimates) There exists a stopping time $$\sigma$$ and $$\alpha >0$$ such that:

1. (i)

$$\min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( \frac{11d\log n}{\mu }\le \sigma \le \frac{(\log n)^{3d+8}}{\mu }\right) =1-o(1)$$ as $$n\rightarrow \infty$$ and

2. (ii)

$$\min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( X_{\sigma } \in {\mathcal {G}}_\sigma \right) \ge \alpha$$.

### Proof

We let $$\tau$$ be the first time after $$11d\log n/\mu$$ that X hits the giant component, i.e.

\begin{aligned} \tau = \inf \left\{ t\ge 11d\frac{\log n}{\mu }: \, X_t \in {\mathcal {G}}_t\right\} . \end{aligned}

We now define a sequence of stopping times by setting $$r=2(c\log n)^{d+2}$$ for a constant c to be determined, $$T_0=0$$ and inductively for all $$i\ge 0$$

\begin{aligned} T_{i+1} = \inf \left\{ t\ge T_i+11d\frac{\log n}{\mu }: \, X_t \notin {\mathcal {B}}\left( X_{T_i+11d\log n/\mu },r\right) \right\} . \end{aligned}

Finally we set $$\sigma = \tau \wedge T_{(\log n)^{d+2}}$$. We will now prove that $$\sigma$$ satisfies (i) and (ii) of the statement of the proposition.

Proof of (i). By the strong Markov property we obtain for all n large enough and all $$x,\eta _1$$

\begin{aligned}&{\mathbb {P}}_{x,\eta _1}\left( T_{(\log n)^{d+2}} \le \frac{(\log n)^{3d+8}}{\mu }\right) \nonumber \\&\quad \ge {\mathbb {P}}_{x,\eta _1}\left( T_{i} - T_{i-1}< \log n \cdot \frac{r^2}{\mu }, \, \forall \,1\le i\le (\log n)^{d+2}\right) \nonumber \\&\quad \ge \left( \min _{x_0,\eta _0}{\mathbb {P}}_{}\left( T_1 -T_0 < \log n \cdot \frac{r^2}{\mu }\;\Big \vert \;X_{T_0}=x_0,\eta _{T_0}=\eta _0\right) \right) ^{(\log n)^{d+2}}. \end{aligned}
(4.1)

By (1.2) of Theorem 1.2 applied to the torus $${\mathbb {Z}}_{5r}^d$$ we get that if $$t=c'\cdot r^2/\mu$$, where $$c'$$ is a positive constant, then starting from any $$x_0\in {\mathcal {B}}(x,r)$$ and any bond configuration, the walk exits the ball $${\mathcal {B}}(x,r)$$ by time t with probability at least a positive constant $$c_1$$. Hence the same is true for the process X on $${\mathbb {Z}}_n^d$$ for all starting states $$x_0$$ and configurations $$\eta _0$$.

Using this uniform bound over all $$\eta _0$$ and all $$x_0\in {\mathcal {B}}(x,r)$$, we can perform $$\log n/c'$$ independent experiments to deduce

\begin{aligned} {\mathbb {P}}_{}\left( T_1-T_0<\log n\cdot \frac{r^2}{\mu }\;\Big \vert \;X_{T_0}=x_0,\eta _{T_0}=\eta _0\right) \ge 1- (1-c_1)^{\log n/c'}, \end{aligned}

and hence substituting this into (4.1) we finally get

\begin{aligned} {\mathbb {P}}_{x,\eta _1}\left( T_{(\log n)^{d+2}} \le \frac{(\log n)^{3d+8}}{\mu }\right) = 1-o(1) \text { as } n\rightarrow \infty \end{aligned}

and this completes the proof of (i).
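To make the last step explicit, here is a sketch of the substitution into (4.1); the shorthand $$q$$ and $$c''$$ below are ours, introduced only for this computation.

```latex
% Sketch: substituting the single-step bound into (4.1).
% Write q = (1-c_1)^{\log n/c'} = n^{-c''} with c'' = \log(1/(1-c_1))/c' > 0. Then
\begin{aligned}
{\mathbb {P}}_{x,\eta _1}\left( T_{(\log n)^{d+2}} \le \frac{(\log n)^{3d+8}}{\mu }\right)
&\ge \left( 1-q\right) ^{(\log n)^{d+2}}
\ge 1-(\log n)^{d+2}\, q \\
&= 1-(\log n)^{d+2}\cdot n^{-c''} = 1-o(1),
\end{aligned}
% using Bernoulli's inequality (1-q)^m \ge 1-mq for q \in [0,1].
```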

Proof of (ii). We fix $$x,\eta _0$$ and we consider two cases:

1. (1)

$${\mathbb {P}}_{x,\eta _0}\left( \tau <T_1\right) >\frac{1}{(\log n)^{d+2}}$$ or

2. (2)

$${\mathbb {P}}_{x,\eta _0}\left( \tau < T_1\right) \le \frac{1}{(\log n)^{d+2}}$$.

It suffices to prove that under condition (2), there is a constant $$\beta >0$$ so that $${\mathbb {P}}_{x,\eta _0}\left( X_{T_1}\in {\mathcal {G}}_{T_1}\right) \ge \beta$$. Indeed, this will then imply that

\begin{aligned} \min _{y,\eta _1}{\mathbb {P}}_{y,\eta _1}\left( \tau \le T_1\right) \ge \frac{1}{(\log n)^{d+2}}. \end{aligned}
(4.2)

Therefore, in both cases [(1) and (2)] we get that (4.2) is satisfied, and hence by the strong Markov property

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \tau> T_{(\log n)^{d+2}}\right)&\le \left( \max _{y,\eta _1}{\mathbb {P}}_{y,\eta _1}\left( \tau >T_1\right) \right) ^{(\log n)^{d+2}} \\&= \left( 1-\min _{y,\eta _1}{\mathbb {P}}_{y,\eta _1}\left( \tau \le T_1\right) \right) ^{(\log n)^{d+2}} \le \frac{1}{e}, \end{aligned}

which immediately implies that $$\min _{y,\eta _1}{\mathbb {P}}_{y,\eta _1}\left( X_\sigma \in {\mathcal {G}}_{\sigma }\right) \ge 1-e^{-1}$$ as claimed. So we now turn to prove that under (2) there exists a positive constant $$\beta$$ so that

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( X_{T_1}\in {\mathcal {G}}_{T_1}\right) \ge \beta . \end{aligned}
(4.3)

Taking c in the definition of r satisfying $$c>50d^2$$ we have

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( X_{T_1}\in {\mathcal {G}}_{T_1}\right) \ge {\mathbb {P}}_{x,\eta _0}\left( X_{T_1}\in {\mathcal {G}}_{T_1}\;\Big \vert \;T_1\ge \frac{c\log n}{4d\mu }\right) {\mathbb {P}}_{x,\eta _0}\left( T_1\ge \frac{c\log n}{4d\mu }\right) . \end{aligned}

Since the critical probability for a half-space equals $$p_c({\mathbb {Z}}^d)$$ (as explained right before Theorem 7.35 in ), and since for $$c>50d^2$$ with high probability all edges in the torus refresh during the time interval $$\left( T_0,T_0+\frac{c\log n}{4d\mu }\right]$$, we infer that, given $$T_1\ge \frac{c\log n}{4d\mu }$$, with probability bounded away from 0 the component of $$X_{T_1}$$ at time $$T_1$$ has diameter at least n / 3. It then follows from Corollary 3.4 that the first term on the right-hand side of the last display is bounded below by a positive constant.

So it now suffices to prove

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \tau \ge T_1, T_1\ge \frac{c\log n}{4d\mu }\right) \ge \beta '>0. \end{aligned}

We denote by $${{\mathcal {C}}}_t$$ the cluster of the walk at time t, i.e. it is the connected component of the percolation configuration such that $$X_t\in {{\mathcal {C}}}_t$$. Next we define inductively a sequence of stopping times $$S_i$$ as follows: $$S_0=11d\log n/\mu$$ and for $$i\ge 0$$ we let $$S_{i+1}$$ be the first time after time $$S_i$$ that an edge opens on the boundary of $${{\mathcal {C}}}_{S_i}$$. For all $$i\ge 0$$ we define

\begin{aligned} A_i=\left\{ {\mathrm{diam}}({{\mathcal {C}}}_{S_i})\le c\log n\right\} \quad \text { and } \quad A = \bigcap _{0\le i\le (c\log n)^{d+1} -1}A_i. \end{aligned}

On the event A we have $$T_1\ge S_{(c\log n)^{d+1}}$$, since $$r=2(c\log n)^{d+2}$$ and by the triangle inequality we have for all $$i\le (c\log n)^{d+1} -1$$

\begin{aligned} d\left( X_{\frac{11d\log n}{\mu }}, X_{S_{i}}\right) \le i (c\log n). \end{aligned}
(4.4)

We now have

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \tau \ge T_1, A^c\right) = \sum _{0\le i\le (c\log n)^{d+1}-1}{\mathbb {P}}_{x,\eta _0}\left( \tau \ge T_1, \cap _{j<i}A_j, A_i^c\right) . \end{aligned}
(4.5)

Note that on the event $$\cap _{j<i}A_j\cap \{\tau \ge T_1\}$$, we have that $${{\mathcal {C}}}_{S_{i}}$$ cannot be the giant component, since by time $$S_i$$ using (4.4) the random walk has only moved distance at most $$i c \log n$$ from $$X_{11d\log n/\mu }$$, and hence cannot have reached the boundary of the box $${\mathcal {B}}(X_{11d\log n/\mu },r)$$. Therefore, choosing c sufficiently large by Proposition 3.3 and large deviations for a Poisson random variable we get

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \tau \ge T_1, \cap _{j<i}A_j, A_i^c\right) \le \frac{1}{n}, \end{aligned}

and hence plugging this upper bound into (4.5) gives

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \tau \ge T_1, A^c\right) \le \frac{(c\log n)^{d+1}}{n}. \end{aligned}
(4.6)

So under assumption (2), i.e. $${\mathbb {P}}_{x,\eta _0}\left( \tau <T_1\right) \le 1/(\log n)^{d+2}$$, and using (4.6), we have for all n sufficiently large

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( A^c\right) = {\mathbb {P}}_{x,\eta _0}\left( A^c, \tau \ge T_1\right) + {\mathbb {P}}_{x,\eta _0}\left( A^c, \tau <T_1\right) \le \frac{2}{(\log n)^{d+2}}. \end{aligned}
(4.7)

Setting $$Y_i = S_{i}-S_{i-1}$$ we now get

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( T_1\ge \frac{c\log n}{4d\mu }\right)&\ge {\mathbb {P}}_{x,\eta _0}\left( A, S_{(c\log n)^{d+1}}\ge \frac{c\log n}{4d\mu }\right) \\&\ge {\mathbb {P}}_{x,\eta _0}\left( \sum _{i=1}^{(c\log n)^{d+1}}Y_i\ge \frac{c\log n}{4d\mu }, A\right) . \end{aligned}

One can define an exponential random variable $$E_{(c\log n)^{d+1}}$$ with parameter $$2d(c\log n)^d\mu$$ such that

1. (1)

$$Y_{(c\log n)^{d+1}} \ge E_{(c\log n)^{d+1}}$$ on $$A_{(c\log n)^{d+1}-1}$$ and

2. (2)

$$E_{(c\log n)^{d+1}}$$ is independent of $$\{A_0,\ldots ,A_{(c\log n)^{d+1}-1},Y_1,\ldots ,Y_{(c\log n)^{d+1}-1}\}$$. Therefore we deduce

\begin{aligned}&{\mathbb {P}}_{x,\eta _0}\bigg (\sum _{i=1}^{(c\log n)^{d+1}}Y_i\ge \frac{c\log n}{4d\mu }, \,A\bigg )\\&\quad \ge {\mathbb {P}}_{x,\eta _0}\bigg (E_{(c\log n)^{d+1}}+\sum _{i=1}^{(c\log n)^{d+1}-1}Y_i\ge \frac{c\log n}{4d\mu }, \bigcap _{0\le i<(c\log n)^{d+1}-1}A_i\bigg ) \\&\qquad -{\mathbb {P}}_{x,\eta _0}\left( A_{(c\log n)^{d+1}-1}^c\right) \\&\quad \ge {\mathbb {P}}_{x,\eta _0}\left( E_{(c\log n)^{d+1}}+\sum _{i=1}^{(c\log n)^{d+1}-1}Y_i\ge \frac{c\log n}{4d\mu }, \bigcap _{0\le i<(c\log n)^{d+1}-1}A_i\right) \\&\qquad - \frac{2}{(\log n)^{d+2}}, \end{aligned}

where for the last inequality we used (4.7). Continuing in the same way, for each i, one can define an exponential random variable $$E_i$$ with parameter $$2d(c\log n)^d\mu$$ such that (1) $$Y_i \ge E_i$$ on $$A_{i-1}$$ and (2) $$E_i$$ is independent of $$\{A_0,\ldots ,A_{i-1},Y_1,\ldots , Y_{i-1},E_{i+1}, \ldots , E_{(c\log n)^{d+1}}\}$$. We therefore obtain

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \sum _{i=1}^{(c\log n)^{d+1}}Y_i\ge \frac{c\log n}{4d\mu }, A\right) \ge {\mathbb {P}}\left( \sum _{i=1}^{(c\log n)^{d+1}} E_i\ge \frac{c\log n}{4d\mu }\right) - \frac{c'}{\log n}, \end{aligned}

where the $$E_i$$’s are i.i.d. exponential random variables of parameter $$2d(c\log n)^d\mu$$. By Chebyshev’s inequality, we finally conclude that

\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{(c\log n)^{d+1}} E_i\ge \frac{c\log n}{4d\mu }\right) = 1-o(1) \quad \text { as } n\rightarrow \infty \end{aligned}

and this finishes the proof. $$\square$$
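For completeness, the Chebyshev computation behind the last display can be sketched as follows; the shorthand $$N$$ and $$\lambda$$ are ours.

```latex
% Sketch: with N = (c\log n)^{d+1} and \lambda = 2d(c\log n)^{d}\mu,
\begin{aligned}
{\mathbb {E}}\left[ \sum _{i=1}^{N}E_i\right] = \frac{N}{\lambda } = \frac{c\log n}{2d\mu }
\qquad \text{and} \qquad
{\mathrm{Var}}\left( \sum _{i=1}^{N}E_i\right) = \frac{N}{\lambda ^{2}},
\end{aligned}
% so the target c\log n/(4d\mu) is half the mean, and Chebyshev's inequality gives
\begin{aligned}
{\mathbb {P}}\left( \sum _{i=1}^{N}E_i < \frac{c\log n}{4d\mu }\right)
\le \frac{{\mathrm{Var}}\left( \sum _{i}E_i\right) }{\left( {\mathbb {E}}\left[ \sum _{i}E_i\right] /2\right) ^{2}}
= \frac{4}{N} = o(1).
\end{aligned}
```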

We now state and prove a lemma that will be used later on in the paper.

### Lemma 4.2

Let $$\sigma$$ and $$\alpha$$ be as in the statement of Proposition 4.1. Then as $$n\rightarrow \infty$$

\begin{aligned} \min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( X_t\in {\mathcal {G}}_t, \,\, \forall \, t\in \left[ \sigma , \sigma +\frac{1}{(\log n)^{d+1}\mu }\right] \right) \ge \alpha ( 1-o(1)). \end{aligned}

### Proof

We fix $$x,\eta _0$$. From Proposition 4.1 we have

\begin{aligned}&{\mathbb {P}}_{x,\eta _0}\left( \exists \, t\in \left[ \sigma , \sigma +\frac{1}{(\log n)^{d+1}\mu }\right] :\, X_t\notin {\mathcal {G}}_t\right) \\&\quad ={\mathbb {P}}_{x,\eta _0}\left( X_\sigma \notin {\mathcal {G}}_\sigma \right) + {\mathbb {P}}_{x,\eta _0}\left( X_\sigma \in {\mathcal {G}}_\sigma , \exists \, t \in \left( \sigma ,\sigma +\frac{1}{(\log n)^{d+1}\mu }\right] :\, X_t\notin {\mathcal {G}}_t\right) \\&\quad \le 1-\alpha + {\mathbb {P}}_{x,\eta _0}\left( X_\sigma \in {\mathcal {G}}_\sigma , \exists \, t \in \left( \sigma ,\sigma +\frac{1}{(\log n)^{d+1}\mu }\right] :\, X_t\notin {\mathcal {G}}_t\right) . \end{aligned}

Let $$\tau$$ be the first time that all edges refresh at least once. Thus after time $$\tau$$ the percolation configuration is sampled according to $$\pi _p$$. We then have $${\mathbb {P}}_{x,\eta _0}\left( \tau \le (d+1)\log n/\mu \right) = 1-o(1)$$, and hence from Proposition 4.1 we get

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \sigma \ge \tau \right) = 1-o(1). \end{aligned}

This together with Corollary 3.4 now gives as $$n\rightarrow \infty$$

\begin{aligned} \begin{aligned}&{\mathbb {P}}_{x,\eta _0}\big (\forall \, t \in \left[ \sigma ,\sigma +\frac{1}{(\log n)^{d+1}\mu }\right] : \, |{\mathcal {G}}_t|\in (\theta (p) n^d/2, 3\theta (p) n^d/2), \,\\&\quad {\mathrm{diam}}({{\mathcal {C}}}_i(t))\le c \log n, \forall \, i\ge 2\big ) \rightarrow 1, \end{aligned} \end{aligned}
(4.8)

where c comes from Corollary 3.4. We now define an event A as follows

\begin{aligned} A&=\big \{ \exists \, t\in \left[ \sigma , \sigma +\frac{1}{(\log n)^{d+1}\mu }\right] \text { and an edge } e:\, d(X_t, e) \le c\log n \\&\quad \text { and } e \text { refreshes at time } t \big \}. \end{aligned}

We also define B to be the event that there exists a time $$t\in [\sigma , \sigma + 1/((\log n)^{d+1}\mu )]$$ and an edge e such that $$d(X_t,e)>c\log n$$, the edge e updates at time t and this update disconnects $$X_t$$ from $${\mathcal {G}}_t$$. Then we have

\begin{aligned}&{\mathbb {P}}_{x,\eta _0}\left( X_\sigma \in {\mathcal {G}}_\sigma , \exists \, t\in \left( \sigma , \sigma +\frac{1}{(\log n)^{d+1}\mu }\right] : \, X_t\notin {\mathcal {G}}_t\right) \\&\quad \le {\mathbb {P}}_{x,\eta _0}\left( X_\sigma \in {\mathcal {G}}_\sigma , A\right) + {\mathbb {P}}_{x,\eta _0}\left( X_\sigma \in {\mathcal {G}}_\sigma , B\right) . \end{aligned}

We start by bounding the second probability above. From (4.8) we obtain as $$n\rightarrow \infty$$

\begin{aligned}&{\mathbb {P}}_{x,\eta _0}\left( X_\sigma \in {\mathcal {G}}_\sigma , B\right) \\&\quad \le {\mathbb {P}}\left( \exists \, t\in \left[ \sigma , \sigma +\frac{1}{(\log n)^{d+1}\mu }\right] , \exists \, i\ge 2: \, {\mathrm{diam}}({{\mathcal {C}}}_i(t))\ge c\log n\right) = o(1). \end{aligned}

It now remains to show that $${\mathbb {P}}_{x,\eta _0}\left( A\right) =o(1)$$ as $$n\rightarrow \infty$$. We now let $$\tau _0=\sigma$$ and for all $$i\ge 1$$ we define $$\tau _i$$ to be the time increment between the $$(i-1)$$-st time and the i-th time after time $$\sigma$$ that either X attempts a jump or an edge within distance $$c\log n$$ from X refreshes. Then $$\tau _i \sim \text {Exp}(1+c_1(\log n)^d\mu )$$ for a positive constant $$c_1$$ and they are independent. These times define a Poisson process of rate $$1+c_1(\log n)^d \mu$$. Using basic properties of exponential variables, the probability that at a point of this Poisson process an edge is refreshed is

\begin{aligned} \frac{c_1(\log n)^d\mu }{1+c_1(\log n)^d\mu }. \end{aligned}

Therefore, by the thinning property of Poisson processes, the times at which edges within $$c\log n$$ from X refresh constitute a Poisson process $${\mathcal {N}}$$ of rate $$c_1 (\log n)^d\mu$$. So we now obtain as $$n\rightarrow \infty$$

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( A\right) ={\mathbb {P}}\left( {\mathcal {N}}\left[ 0,\frac{1}{(\log n)^{d+1}\mu }\right] \ge 1\right) = 1 - \exp \left( -\frac{c_1}{\log n} \right) = o(1) \end{aligned}

and this concludes the proof. $$\square$$

## 5 Good and excellent times

As we already noted in Remark 1.4 we are going to consider the case where $$1/\mu > (\log n)^{d+2}$$.

We will discretise time by observing the walk X at integer times. When we fix the environment at all times to be $$\eta$$, then we obtain a discrete time Markov chain with time inhomogeneous transition probabilities

\begin{aligned} p_t^\eta (x,y)={\mathbb {P}}_{\eta }\left( X_{t+1}=y\;\vert \;X_t=x\right) \quad \forall \, x,y\in {\mathbb {Z}}_n^d, \, t\in {\mathbb {N}}. \end{aligned}

Let $$(S_t)_{t\in {\mathbb {N}}}$$ be the Doob transform of the evolving sets associated to this time inhomogeneous Markov chain as defined in Sect. 2. Since from now on we will mainly work with the Doob transform of the evolving sets, we will write $${\mathbb {P}}$$ instead of $$\widehat{{\mathbb {P}}}$$ unless there is risk of confusion.

If G is a subgraph of $${\mathbb {Z}}_n^d$$ and $$S\subseteq V(G)$$, we write $$\partial _G S$$ for the edge boundary of S in G, i.e. the set of edges of G with one endpoint in S and the other one in $$V(G){\setminus } S$$.

We note that for every t, $$\eta _t$$ is a subgraph of $${\mathbb {Z}}_n^d$$ with vertex set $${\mathbb {Z}}_n^d$$.

### Definition 5.1

We call an integer time t good if $$|S_t\cap {\mathcal {G}}_t| \ge \tfrac{|S_t|}{(\log n)^{4d+12}}$$. We call a good time t excellent if

\begin{aligned} \int _t^{t+1}\left| \partial _{\eta _s}S_t \right| \,ds \equiv \sum _{x\in S_t}\sum _{y\in S_t^c} \int _{t}^{t+1} \eta _s(x,y)\,ds \ge \frac{|\partial _{\eta _t}S_t|}{2}, \end{aligned}

where $$\eta _s(x,y)=0$$ if $$(x,y)\notin E({\mathbb {Z}}_n^d)$$. For all $$a \in {\mathbb {N}}$$ we let G(a) and $$G_e(a)$$ be the set of good and excellent times t respectively with $$0\le t\le (\log n)^{a}\left( n^2+\tfrac{1}{\mu }\right)$$.

As we already explained in the Introduction, we will obtain a strong drift for the size of the evolving set at excellent times. So we need to ensure that there are enough excellent times. We start by showing that there is a large number of good times. More formally we have the following:

### Lemma 5.2

For all $$\gamma \in {\mathbb {N}}$$ and $$\alpha >0$$, there exists $$n_0$$ so that for all $$n\ge n_0$$, all starting points and configurations $$x, \eta _0$$ we have

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( |G(8d+26+\gamma )|\ge (\log n)^{\gamma }\cdot \left( n^2+\frac{1}{\mu }\right) \right) \ge 1 - \frac{1}{n^\alpha }. \end{aligned}

### Proof

Fix $$\gamma \in {\mathbb {N}}$$ and $$\alpha >0$$. To simplify notation we write $$G=G(8d+26+\gamma )$$. By definition we have

\begin{aligned} G=\left\{ t\in {\mathbb {N}}: \, 0\le t\le (\log n)^{8d+26+\gamma }\cdot \left( n^2+\frac{1}{\mu }\right) , \,\, |S_t\cap {\mathcal {G}}_t|\ge \frac{|S_t|}{(\log n)^{4d+12}}\right\} . \end{aligned}

For every $$i\ge 0$$ we define

\begin{aligned} J_i=\left[ i\cdot (\log n)^{4d+\gamma +12}\cdot \left( n^2+\frac{1}{\mu }\right) ,(i+1)\cdot (\log n)^{4d+\gamma +12}\cdot \left( n^2+\frac{1}{\mu }\right) \right) \cap {\mathbb {N}}. \end{aligned}

We write $$t_i$$ for the left endpoint of the interval above. For integer t we let $${\mathcal {F}}_t$$ be the $$\sigma$$-algebra generated by the evolving set and the environment at integer times up to time t.

First of all we explain that for all $$x,\eta _0$$ and for all $$i\ge 0$$ we have almost surely

\begin{aligned} {\mathbb {E}}_{x,\eta _0}\left[ \sum _{t\in J_i}{{\mathbf {1}}}\left( X_t\in {\mathcal {G}}_t\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right] \ge (\log n)^{\gamma +2}\cdot \left( n^2+\frac{1}{\mu }\right) . \end{aligned}
(5.1)

Indeed, in every interval of length $$2(\log n)^{3d+8}/\mu$$ we have from Proposition 4.1 and Lemma 4.2 that with constant probability there exists an interval of length $$1/((\log n)^{d+1}\mu )$$ such that for all t in this interval $$X_t\in {\mathcal {G}}_t$$. Note that since $$1/\mu > (\log n)^{d+2}$$, this interval has length larger than 1. This establishes (5.1).

Using the coupling of the Doob transform of the evolving set and the random walk given in Theorem 2.2 we get that

\begin{aligned} {\mathbb {P}}_{}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t, {\mathcal {G}}_t\right) = \frac{|S_t\cap {\mathcal {G}}_t|}{|S_t|}, \end{aligned}

and hence an integer time t is good if and only if $${\mathbb {P}}_{}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t, {\mathcal {G}}_t\right) \ge \tfrac{1}{(\log n)^{4d+12}}$$. For any $$x,\eta _0$$ we set

\begin{aligned} A_i(x,\eta _0) := \left\{ t\in J_i: \, {\mathbb {P}}_{x,\eta _0}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t, {\mathcal {G}}_t\right) \ge \frac{1}{(\log n)^{4d+12}} \right\} . \end{aligned}

We now claim that for any $$x,\eta _0$$ we have almost surely

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( |A_i(x,\eta _0)| \ge (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right) \ge \frac{1}{(\log n)^{4d+12}}. \end{aligned}
(5.2)

Indeed, if not, then there exists a set $$\Omega _0\in {\mathcal {F}}_{t_i}$$ with $${\mathbb {P}}\left( \Omega _0\right) >0$$ such that on $$\Omega _0$$

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( |A_i(x,\eta _0)| \ge (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right) <\frac{1}{(\log n)^{4d+12}}. \end{aligned}

We now define

\begin{aligned} Y=\sum _{t\in J_i} {\mathbb {P}}_{x,\eta _0}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t, {\mathcal {G}}_t\right) \end{aligned}

and writing $$A_i=A_i(x,\eta _0)$$ to simplify notation, we would get on the event $$\Omega _0$$ that

\begin{aligned}&{\mathbb {E}}_{x,\eta _0}\left[ Y\;\vert \;{\mathcal {F}}_{t_i}\right] \\&\quad = {\mathbb {E}}_{x,\eta _0}\left[ \sum _{t\in A_i} {\mathbb {P}}_{x,\eta _0}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t,{\mathcal {G}}_t\right) + \sum _{t\in A_i^c} {\mathbb {P}}_{x,\eta _0}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t,{\mathcal {G}}_t\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right] \\&\quad \le {\mathbb {E}}_{x,\eta _0}\left[ |A_i|\;\vert \;{\mathcal {F}}_{t_i}\right] + \frac{(\log n)^{4d+\gamma +12}}{(\log n)^{4d+12}} \cdot \left( n^2+\frac{1}{\mu }\right) \\&\quad \le \frac{(\log n)^{4d+\gamma +12}}{(\log n)^{4d+12}} \cdot \left( n^2+\frac{1}{\mu }\right) + (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) + (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) \\&\quad =3(\log n)^{\gamma }\cdot \left( n^2+ \frac{1}{\mu }\right) . \end{aligned}

But this gives a contradiction for $$n\ge e^{\sqrt{3}}$$, since we have almost surely

\begin{aligned} {\mathbb {E}}_{x,\eta _0}\left[ Y\;\vert \;{\mathcal {F}}_{t_i}\right]&= {\mathbb {E}}_{x,\eta _0}\left[ \sum _{t\in J_i} {\mathbb {P}}_{x,\eta _0}\left( X_t\in {\mathcal {G}}_t\;\vert \;S_t,{\mathcal {G}}_t\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right] = {\mathbb {E}}_{x,\eta _0}\left[ \sum _{t\in J_i} {\mathbb {E}}_{x,\eta _0}\left[ {{\mathbf {1}}}\left( X_t\in {\mathcal {G}}_t\right) \;\vert \;S_t,{\mathcal {G}}_t\right] \;\Big \vert \;{\mathcal {F}}_{t_i}\right] \\&= {\mathbb {E}}_{x,\eta _0}\left[ \sum _{t\in J_i} {{\mathbf {1}}}\left( X_t\in {\mathcal {G}}_t\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right] \ge (\log n)^{\gamma +2}\cdot \left( n^2+\frac{1}{\mu }\right) , \end{aligned}

where the second equality follows from the Diaconis–Fill coupling, the third one from the tower property for conditional expectation and the inequality follows from (5.1).

Therefore, setting $$T_i=|A_i(x,\eta _0)|$$ and since (5.2) holds for all starting points and configurations $$x,\eta _0$$, we finally conclude that for all $$n\ge e^{\sqrt{3}}$$, all $$x,\eta _0$$ and for all i almost surely

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( T_i\ge (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) \;\Big \vert \;{\mathcal {F}}_{t_i}\right) \ge \frac{1}{(\log n)^{4d+12}}. \end{aligned}

Using the uniformity of this lower bound over all starting points and configurations yields for all n sufficiently large and all $$x,\eta _0$$

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( |G|\ge (\log n)^{\gamma }\cdot \left( n^2+\frac{1}{\mu }\right) \right)&\ge {\mathbb {P}}_{x,\eta _0}\left( \exists \, i: \, T_i\ge (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) \right) \\&= 1 - {\mathbb {P}}_{x,\eta _0}\left( \forall \, i: \, T_i < (\log n)^\gamma \cdot \left( n^2+\frac{1}{\mu }\right) \right) \\&\ge 1 - \left( 1 - \frac{1}{(\log n)^{4d+12}} \right) ^{(\log n)^{4d+14}} \ge 1 - \frac{1}{n^\alpha }. \end{aligned}

This now finishes the proof. $$\square$$

Next we show that there are enough excellent times.

### Lemma 5.3

For all $$\gamma \in {\mathbb {N}}$$ and $$\alpha >0$$, there exists $$n_0$$ so that for all $$n\ge n_0$$ and all $$x,\eta _0$$

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( |G_e(8d+26+\gamma )|\ge (\log n)^{\gamma -1} \cdot n^2\right) \ge 1 - \frac{1}{n^\alpha }. \end{aligned}

### Proof

For almost every environment, there is an infinite number of good times that we denote by $$t_1,t_2,\ldots$$. For every good time t we define $$I_t$$ to be the indicator that t is excellent.

Again to simplify notation we write $$G=G(8d+26+\gamma )$$ and $$G_e= G_e(8d+26+\gamma )$$. Note that if t is good and at least half of the edges of $$\partial _{\eta _t}S_t$$ do not refresh during $$[t,t+1]$$, then t is an excellent time (note that if $$\partial _{\eta _t}S_t=\varnothing$$, then t is automatically excellent). Let $$E_1,\ldots , E_{|\partial _{\eta _t}S_t|}$$ be the amounts of time after t until the edges on the boundary $$\partial _{\eta _t}S_t$$ first refresh. They are independent exponential random variables with parameter $$\mu$$.

Let $${\mathcal {F}}_s$$ be the $$\sigma$$-algebra generated by the process (walk, environment and evolving set) up to time s. Then for all t, on the event $$\{t\in G\}$$ we have

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( I_t=1\;\vert \;{\mathcal {F}}_t\right) \ge {\mathbb {P}}_{x,\eta _0}\left( \left| \left\{ i\le |\partial _{\eta _t}S_t|: \, E_i>1\right\} \right| \ge \frac{|\partial _{\eta _t}S_t|}{2}\;\Big \vert \;{\mathcal {F}}_t\right) . \end{aligned}

Since $${\mathbb {P}}_{x,\eta _0}\left( E_i>1\right) = e^{-\mu }$$ and $$\mu \le 1/2$$, there exists $$n_0$$ so that for all $$n\ge n_0$$ we have for all $$x, \eta _0$$, on the event $$\{t\in G\}$$,

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( I_t=1\;\vert \;{\mathcal {F}}_t\right) \ge \frac{1}{2}. \end{aligned}

Let $$A=\{ |G|\ge (\log n)^{\gamma } \cdot n^2\}$$. By Lemma 5.2 we get $${\mathbb {P}}_{x,\eta _0}\left( A^c\right) \le 1/n^\alpha$$ for all $$n\ge n_0$$ and all $$x,\eta _0$$. Let $$G=\{t_1,\ldots , t_{|G|}\}$$. On the event A we have

\begin{aligned} |G_e| \ge \sum _{i=1}^{(\log n)^{\gamma }\cdot n^2} I_{t_i}. \end{aligned}

We thus get for all $$x,\eta _0$$ and all $$n\ge n_0$$

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( |G_e|< (\log n)^{\gamma -1} \cdot n^2\right)&\le {\mathbb {P}}_{x,\eta _0}\left( |G_e|< (\log n)^{\gamma -1} \cdot n^2, A\right) + \frac{1}{n^\alpha } \\&\le {\mathbb {P}}_{x,\eta _0}\left( \sum _{i=1}^{(\log n)^{\gamma }\cdot n^2} I_{t_i} <(\log n)^{\gamma -1} \cdot n^2\right) + \frac{1}{n^\alpha }. \end{aligned}

Since, conditionally on the past, the variables $$(I_{t_i})_i$$ dominate independent Bernoulli random variables with parameter 1/2, a standard concentration inequality shows that this last probability decays exponentially in n, and this concludes the proof. $$\square$$
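The standard concentration inequality invoked here can, for instance, be taken to be Hoeffding's inequality; a sketch, with the shorthand $$m$$ introduced by us:

```latex
% Sketch: with m = (\log n)^{\gamma} n^{2} and B_1,\ldots,B_m i.i.d. Bernoulli(1/2)
% dominated by the I_{t_i}, note that (\log n)^{\gamma-1} n^{2} = m/\log n, and
\begin{aligned}
{\mathbb {P}}\left( \sum _{i=1}^{m} B_i < \frac{m}{\log n}\right)
\le \exp \left( -2m\left( \frac{1}{2}-\frac{1}{\log n}\right) ^{2}\right)
\le e^{-m/8} \le e^{-n^{2}/8}
\end{aligned}
% for all n with \log n \ge 4, which decays (much) faster than exponentially in n.
```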

Let $$\tau _1, \tau _2,\ldots$$ be the sequence of excellent times. Then the previous lemma immediately gives

### Corollary 5.4

Let $$\gamma \in {\mathbb {N}}$$, $$\alpha >0$$ and $$N= (\log n)^{\gamma } \cdot n^2$$. Then there exists $$n_0$$ so that for all $$n\ge n_0$$ and all $$x,\eta _0$$ we have

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \tau _N\le (\log n)^{8d+27+\gamma } \cdot \left( n^2+\frac{1}{\mu }\right) \right) \ge 1-\frac{1}{n^\alpha }. \end{aligned}

## 6 Mixing times

In this section we prove Theorems 1.3, 1.6 and Corollary 1.7. From now on $$d\ge 2$$, $$p>p_c({\mathbb {Z}}^d)$$ and $$\frac{1}{\mu } >(\log n)^{d+2}$$.

### 6.1 Good environments and growth of the evolving set

The first step is to obtain the growth of the Doob transform of the evolving set at excellent times. We will use the following theorem by Pete [10], which shows that the isoperimetric profile of the giant cluster essentially coincides with the profile of the original lattice.

For a subset $$S\subseteq {\mathbb {Z}}_n^d$$ we write $$S\subseteq {\mathcal {G}}$$ to denote $$S\subseteq V({\mathcal {G}})$$ and we also write $$|{\mathcal {G}}| = |V({\mathcal {G}})|$$.

### Theorem 6.1

[10, Corollary 1.4] For all $$d\ge 2$$, $$p>p_c({\mathbb {Z}}^d)$$, $$\delta \in (0,1)$$ and $$c'>0$$ there exist $$c>0$$ and $$\alpha >0$$ so that for all n sufficiently large

\begin{aligned}&{\mathbb {P}}\big (\forall \, S\subseteq {\mathcal {G}}: \, S \text { connected and } c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|, \\&\quad \text { we have } |\partial _{{\mathcal {G}}} S|\ge \alpha |S|^{1-\frac{1}{d}}\big ) \ge 1-\frac{1}{n^{c'}}. \end{aligned}

### Remark 6.2

Pete [10] only states that the probability appearing above tends to 1 as $$n\rightarrow \infty$$, but a close inspection of the proof actually gives the polynomial decay. Mathieu and Remy  have obtained similar results.

### Corollary 6.3

For all $$d\ge 2$$, $$p>p_c({\mathbb {Z}}^d)$$, $$c'>0$$ and $$\delta \in (0,1)$$ there exist $$c>0$$ and $$\alpha >0$$ so that for all n sufficiently large

\begin{aligned}&{\mathbb {P}}\left( \forall \, S\subseteq {\mathcal {G}}: c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|, \text { we have } |\partial _{{\mathcal {G}}} S|\ge \frac{\alpha |S|^{1-\frac{1}{d}}}{\log n}\right) \\&\quad \ge 1-\frac{1}{n^{c'}}. \end{aligned}

### Proof

We only need to prove the statement for all S that are disconnected, since the other case is covered by Theorem 6.1. Let A be the event appearing in the probability of Theorem 6.1.

Let S be a disconnected set satisfying $$S\subseteq {\mathcal {G}}$$ and $$c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|$$. Let $$S=S_1\cup \cdots \cup S_k$$ be the decomposition of S into its connected components. Then we claim that on the event A we have for all $$i\le k$$

\begin{aligned} |\partial _{{\mathcal {G}}} S_i| \ge \alpha \frac{|S_i|^{1-\frac{1}{d}}}{\log n}. \end{aligned}

Indeed, there are two cases to consider: (i) $$|S_i|\ge c(\log n)^{d/(d-1)}$$, in which case the inequality follows from the definition of the event A; (ii) $$|S_i|<c(\log n)^{d/(d-1)}$$, in which case the inequality is trivially true by taking $$\alpha$$ small in Theorem 6.1, since the boundary contains at least one vertex. Therefore we deduce,

\begin{aligned} |\partial _{{\mathcal {G}}} S| =\sum _{i=1}^{k} |\partial _{{\mathcal {G}}} S_i| \ge \alpha \sum _{i=1}^{k} \frac{|S_i|^{1-\frac{1}{d}}}{\log n} \ge \alpha \frac{\left( \sum _{i=1}^{k}|S_i|\right) ^{1-\frac{1}{d}}}{\log n} = \alpha \frac{|S|^{1-\frac{1}{d}}}{\log n} \end{aligned}

and this completes the proof. $$\square$$
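The middle inequality in the last display is the elementary subadditivity of $$r\mapsto r^{1-1/d}$$; a one-line justification:

```latex
% For \theta = 1 - \frac{1}{d} \in (0,1) and a,b \ge 0,
\begin{aligned}
a^{\theta }+b^{\theta } \ge a\cdot (a+b)^{\theta -1}+b\cdot (a+b)^{\theta -1} = (a+b)^{\theta },
\end{aligned}
% since a^{\theta } = a\cdot a^{\theta -1} \ge a\cdot (a+b)^{\theta -1} (as \theta - 1 < 0);
% iterating over the k components gives \sum_{i} |S_i|^{1-1/d} \ge |S|^{1-1/d}.
```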

Recall that for a fixed environment $$\eta$$ we write S for the Doob transform of the evolving set process associated to X and $$\tau _1, \tau _2, \ldots$$ are the excellent times as in Definition 5.1 and we take $$\tau _0=0$$.

### Definition 6.4

Let $$c_1,c_2$$ be two positive constants and $$\delta \in (0,1)$$. Given $$n\ge 1$$, define

\begin{aligned} t(n)=(\log n)^{16d+47}\cdot (n^2+1/\mu )\quad \text { and } \quad N=(\log n)^{8d+20}\cdot n^2. \end{aligned}

We call $$\eta$$ a $$\delta$$-good environment if the following conditions hold:

1. (1)

for all $$\frac{11d\log n}{\mu }\le t\le t(n)\log n$$ the giant cluster $${\mathcal {G}}_t$$ has size $$|{\mathcal {G}}_t|\in ((1-\delta )\theta (p) n^d, (1+\delta )\theta (p) n^d)$$,

2. (2)

for all $$\frac{11d\log n}{\mu }\le t\le t(n)\log n, \, \forall \, S\subseteq {\mathcal {G}}_t$$ with

\begin{aligned} c_1(\log n)^{\frac{d}{d-1}}\le |S| \le (1-\delta )|{\mathcal {G}}_t| \quad \text { we have } \quad |\partial _{\eta _{t}}S|\ge \frac{c_2 |S|^{1-1/d}}{(\log n)}, \end{aligned}
3. (3)

$${\mathbb {P}}_{x,\eta }\left( \tau _N\le t(n)\right) \ge 1-\frac{1}{n^{10d}}$$ for all x,

4. (4)

$${\mathbb {P}}_{x,\eta }\left( \tau _N<\infty \right) =1$$ for all x.

To be more precise, we should have defined a $$(\delta , c_1, c_2)$$-good environment, but we drop the dependence on $$c_1$$ and $$c_2$$ to simplify the notation.

### Lemma 6.5

For all $$\delta \in (0,1)$$ there exist $$c_1, c_2, c_3$$ positive constants and $$n_0\in {\mathbb {N}}$$ such that for all $$n\ge n_0$$ and all $$\eta _0$$ we have

\begin{aligned} {\mathbb {P}}_{\eta _0}\left( \eta \text { is } \delta \text {-good}\right) \ge 1 - \frac{c_3}{n^{10d}}. \end{aligned}

### Proof

We first prove that for all n sufficiently large and all $$\eta _0$$

\begin{aligned} {\mathbb {P}}_{\eta _0}\left( \eta \text { satisfies (1) and (2)}\right) \ge 1 -\frac{1}{n^{10d}}. \end{aligned}
(6.1)

The number of times that the Poisson clocks on the edges ring between times $$11d\log n/\mu$$ and $$t(n) \log n$$ is a Poisson random variable of parameter at most $$d(n^d \mu ) \cdot t(n) \log n$$. Note that all edges update by time $$\frac{11d \log n}{\mu }$$ with probability at least $$1-\frac{d}{n^{10d}}$$. Using large deviations for this Poisson random variable together with Lemma 3.1 and Corollary 6.3 (applied with suitable constants c and $$\alpha$$) proves (6.1). Corollary 5.4, Markov’s inequality and a union bound over all x immediately imply

\begin{aligned} {\mathbb {P}}_{\eta _0}\left( \eta \text { satisfies (3)}\right) \ge 1 - \frac{d}{n^{10d}}. \end{aligned}

Finally, to prove that $$\eta$$ satisfies (4) with probability 1, we note that for almost every environment there will be infinitely many times at which all edges will be open for unit time and so at these times the intersection of the giant component with the evolving set will be large. Therefore such times are necessarily excellent. $$\square$$

For all $$\delta \in (0,1)$$ we now define

\begin{aligned} \tau _\delta = \inf \{t\in {\mathbb {N}}: |S_t\cap {\mathcal {G}}_t| \ge (1-\delta ) |{\mathcal {G}}_t|\}. \end{aligned}
(6.2)

The goal of this section is to prove the following:

### Proposition 6.6

Let $$\delta \in (0,1)$$. There exists a positive constant c so that the following holds: for all n, if $$\eta$$ is a $$\delta$$-good environment, then for all starting points x we have

\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau _\delta \le t(n)\right) \ge 1 -\frac{c}{n^{10d}}. \end{aligned}

Recall from Sect. 2 the definition of $$(Z_k)$$ for a fixed environment $$\eta$$ via

\begin{aligned} Z_k = \frac{\sqrt{\pi (S_k^{\#})}}{\pi (S_k)}. \end{aligned}

Note that we have suppressed the dependence on $$\eta$$ for ease of notation. The following lemma on the drift of Z using the isoperimetric profile will be crucial in the proof of Proposition 6.6.

### Lemma 6.7

Let $$\eta$$ be a $$\delta$$-good environment with $$\delta \in (0,1)$$. Then for all n sufficiently large and for all $$1\le i\le N$$ (recall Definition 6.4) we have almost surely

\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}{{\mathbf {1}}}\left( \tau _\delta \wedge t(n)>\tau _i\right) \;\Big \vert \;{\mathcal {F}}_{\tau _i}\right] \le \left( 1-\left( \varphi \left( \pi (S_{\tau _i})\right) \right) ^2\right) \cdot Z_{\tau _i}{{\mathbf {1}}}\left( \tau _\delta \wedge t(n)>\tau _i\right) , \end{aligned}

where $${\mathcal {F}}_t$$ is the $$\sigma$$-algebra generated by the evolving set up to time t and $$(\tau _i)$$ is the sequence of excellent times associated to the environment $$\eta$$ and $$\varphi$$ is defined as

\begin{aligned} \varphi (r) = {\left\{ \begin{array}{ll} c \cdot (\log n)^{-\beta }\cdot n^{-1}\cdot r^{-1/d} \quad &{}\quad \text{ if } \quad \frac{(\log n)^{\alpha }}{n^d}\le r \le \frac{1}{2} \\ c \cdot n^{-d}\cdot r^{-1} &{}\quad \text{ if } \quad r< \frac{(\log n)^{\alpha }}{n^d} \\ c \cdot 2^{1/d}\cdot (\log n)^{-\beta }\cdot n^{-1} &{}\quad \text{ if } \quad r\in \left[ \frac{1}{2},\infty \right) \end{array}\right. } \end{aligned}

with $$\alpha =4d+12+d/(d-1)$$, $$\beta = 4d+9-12/d$$ and c a positive constant.
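As a consistency check (ours, not part of the original argument), the three branches of $$\varphi$$ match at the two crossover values of r:

```latex
% At r = (\log n)^{\alpha}/n^{d}: the middle branch gives c\, n^{-d} r^{-1} = c\,(\log n)^{-\alpha},
% while the first branch gives c\,(\log n)^{-\beta} n^{-1} r^{-1/d} = c\,(\log n)^{-\beta-\alpha/d};
% these agree exactly when \beta = \alpha(1-1/d), and indeed
\begin{aligned}
\alpha \left( 1-\frac{1}{d}\right)
= \left( 4d+12+\frac{d}{d-1}\right) \cdot \frac{d-1}{d}
= \frac{(4d+12)(d-1)}{d}+1
= 4d+9-\frac{12}{d} = \beta .
\end{aligned}
% At r = 1/2 the first branch equals c\,2^{1/d}(\log n)^{-\beta} n^{-1}, matching the third branch.
```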

### Proof

Since $$\tau _\delta$$ is a stopping time, it follows that $$\{\tau _\delta \wedge t(n)>\tau _i\}\in {\mathcal {F}}_{\tau _i}$$, and hence we obtain

\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}{{\mathbf {1}}}\left( \tau _\delta \wedge t(n)>\tau _i\right) \;\Big \vert \;{\mathcal {F}}_{\tau _i}\right] = {{\mathbf {1}}}\left( \tau _\delta \wedge t(n)>\tau _i\right) \cdot \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}\;\Big \vert \;{\mathcal {F}}_{\tau _i}\right] . \end{aligned}
(6.3)

Lemma 2.3 implies that Z is a positive supermartingale and since $$\eta$$ is a $$\delta$$-good environment, we have $$\tau _N<\infty$$$${\mathbb {P}}_\eta$$-almost surely. We thus get for all $$0\le i\le N-1$$

\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}\;\vert \;{\mathcal {F}}_{\tau _i}\right] \le \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i}+1}\;\vert \;{\mathcal {F}}_{\tau _i}\right] . \end{aligned}

Using the Markov property gives

\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i}+1}\;\Big \vert \;{\mathcal {F}}_{\tau _i}\right] = \sum _{t} {{\mathbf {1}}}\left( \tau _i=t\right) \cdot \mathbb {\widehat{E}}_{\eta }\left[ Z_{t+1}\;\vert \;\tau _i=t, S_t\right] . \end{aligned}
(6.4)

Since $$\tau _i$$ is a stopping time, the event $$\{\tau _i=t\}$$ only depends on $$(S_{u})_{u\le t}$$. The distribution of $$S_{t+1}$$ only depends on $$S_t$$ and the outcome of the independent uniform random variable $$U_{t+1}$$. Therefore we obtain

\begin{aligned} \begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{t+1}\;\vert \;\tau _i=t, S_t=S\right]&=\frac{\sqrt{\pi (S^{\#})}}{\pi (S)} \cdot \mathbb {\widehat{E}}_{\eta }\left[ \frac{Z_{t+1}}{Z_t}\;\Big \vert \;S_t=S\right] \\&= \frac{\sqrt{\pi (S^{\#})}}{\pi (S)} \cdot {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^{\#})}{\pi (S_t^{\#})}}\;\Big \vert \;S_t=S\right] , \end{aligned} \end{aligned}
(6.5)

where for the last equality we used the transition probability of the Doob transform of the evolving set. If $$1\le |S|\le n^d/2$$, then for all n sufficiently large

\begin{aligned} \begin{aligned} {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^{\#})}{\pi (S_t^{\#})}}\;\Big \vert \;S_t=S\right]&\le {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1})}{\pi (S_t)}}\;\Big \vert \;S_t=S\right] =1-\psi _{t+1}(S) \\&\le 1 - \frac{1}{8}\cdot \left( \varphi _{t+1}(S)\right) ^2, \end{aligned} \end{aligned}
(6.6)

where the equality is simply the definition of $$\psi _{t+1}$$ and the last inequality follows from Lemma 2.1, since

\begin{aligned} {\mathbb {P}}_{\eta }\left( X_{t+1}=x\;\vert \;X_t=x\right) \ge e^{-1}. \end{aligned}

Similarly, if $$n^d>|S|>n^d/2$$, then, using the fact that the complement of an evolving set process is also an evolving set process, we get

\begin{aligned} \begin{aligned} {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^{\#})}{\pi (S_t^{\#})}}\;\Big \vert \;S_t=S\right]&\le {\mathbb {E}}_{\eta }\left[ \sqrt{\frac{\pi (S_{t+1}^c)}{\pi (S_t^c)}}\;\Big \vert \;S_t=S\right] =1-\psi _{t+1}(S^c) \\&\le 1 - \frac{1}{8}\cdot \left( \varphi _{t+1}(S^c)\right) ^2. \end{aligned} \end{aligned}
(6.7)

Plugging in the definition of $$\varphi _{t+1}$$ we deduce for all $$1\le |S|<n^d$$

\begin{aligned} \varphi _{t+1}(S)&= \frac{1}{|S|}\sum _{x\in S}\sum _{y\in S^c} {\mathbb {P}}_{\eta }\left( X_{t+1}=y\;\vert \;X_t=x\right) \ge \frac{1}{2de|S|} \sum _{x\in S} \sum _{y\in S^c} \int _{t}^{t+1}\eta _s(x,y)\,ds\\ \varphi _{t+1}(S^c)&= \frac{1}{|S^c|}\sum _{x\in S}\sum _{y\in S^c} {\mathbb {P}}_{\eta }\left( X_{t+1}=y\;\vert \;X_t=x\right) \ge \frac{1}{2de|S^c|} \sum _{x\in S} \sum _{y\in S^c} \int _{t}^{t+1}\eta _s(x,y)\,ds. \end{aligned}
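The factor $$1/(2de)$$ in the last two inequalities comes from a one-jump estimate: the walk's first clock ring in $$[t,t+1]$$ occurs at time s with density $$e^{-(s-t)}$$, the edge (x,y) is then chosen with probability $$1/(2d)$$ and is open precisely when $$\eta _s(x,y)=1$$, and no further ring occurs in $$(s,t+1]$$ with probability $$e^{-(t+1-s)}$$, so that

\begin{aligned} {\mathbb {P}}_{\eta }\left( X_{t+1}=y\;\vert \;X_t=x\right) \ge \int _{t}^{t+1} e^{-(s-t)}\cdot \frac{\eta _s(x,y)}{2d}\cdot e^{-(t+1-s)}\,ds = \frac{1}{2de}\int _{t}^{t+1}\eta _s(x,y)\,ds. \end{aligned}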

Since in (6.4) we multiply by $${{\mathbb {1}}}(\tau _i=t)$$ and the times $$\tau _i$$ are excellent, from now on we take t to be an excellent time, and hence we get from Definition 5.1

\begin{aligned} \varphi _{t+1}(S_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{\eta _{t}}S_t|}{|S_t|}, \quad \varphi _{t+1}(S^c_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{\eta _{t}}S_t|}{|S^c_t|} \quad \text { and } \quad |S_t\cap {\mathcal {G}}_t|\ge \frac{|S_t|}{(\log n)^{4d+12}}. \end{aligned}
(6.8)

Since $$|\partial _{\eta _t}S_t| \ge |\partial _{{\mathcal {G}}_t}S_t| = |\partial _{{\mathcal {G}}_t}({\mathcal {G}}_t\cap S_t)|$$ we have

\begin{aligned} \varphi _{t+1}(S_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{{\mathcal {G}}_{t}}({\mathcal {G}}_t \cap S_t)|}{|S_t|} \quad \text { and } \quad \varphi _{t+1}(S^c_t)\ge \frac{1}{4de}\cdot \frac{|\partial _{{\mathcal {G}}_{t}}({\mathcal {G}}_t \cap S_t)|}{|S^c_t|}. \end{aligned}

If $$|S_t|\le c_1(\log n)^{4d+12+d/(d-1)}$$, then, since $$\eta$$ is a $$\delta$$-good environment, $$|{\mathcal {G}}_t| \ge (1-\delta )\theta (p)n^d>|S_t|$$, so that $${\mathcal {G}}_t\cap S_t$$ is a proper subset of $${\mathcal {G}}_t$$ (and non-empty by (6.8)), giving $$|\partial _{{\mathcal {G}}_t}({\mathcal {G}}_t\cap S_t)|\ge 1$$. Hence we can use the obvious bound

\begin{aligned} \varphi _{t+1}(S_t) \ge \frac{1}{4de}\cdot \frac{1}{|S_t|}. \end{aligned}
(6.9)

Next, if $$\frac{n^d}{2}>|S_t|>c_1(\log n)^{4d+12+d/(d-1)}$$, then using (6.8) and the fact that we are on the event $$\{\tau _\delta \wedge t(n)>t\}$$ we get that

\begin{aligned} c_1(\log n)^{d/(d-1)}\le |{\mathcal {G}}_t\cap S_t|\le (1-\delta )|{\mathcal {G}}_t|. \end{aligned}

Therefore, since $$\eta$$ is a $$\delta$$-good environment and $$t\le t(n)$$, (2) of Definition 6.4 gives that in this case

\begin{aligned} \varphi _{t+1}(S_t) \ge \frac{c_2}{4de} \cdot \frac{|{\mathcal {G}}_t\cap S_t|^{1-\frac{1}{d}}}{(\log n)|S_t|} \ge \frac{c}{(\log n)^{4d+9-12/d}}\cdot \frac{|S_t|^{1-\frac{1}{d}}}{|S_t|} = \frac{c}{(\log n)^{4d+9-12/d}}\cdot \frac{1}{|S_t|^{1/d}}, \end{aligned}
(6.10)

where c is a positive constant and for the second inequality we used (6.8) again.
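To make the powers of $$\log n$$ explicit in the second inequality above: (6.8) gives $$|{\mathcal {G}}_t\cap S_t|\ge |S_t|/(\log n)^{4d+12}$$, and hence

\begin{aligned} \frac{|{\mathcal {G}}_t\cap S_t|^{1-\frac{1}{d}}}{(\log n)|S_t|} \ge \frac{|S_t|^{1-\frac{1}{d}}}{(\log n)^{(4d+12)\left( 1-\frac{1}{d}\right) +1}\cdot |S_t|}, \quad \text { with } (4d+12)\left( 1-\frac{1}{d}\right) +1 = 4d+9-\frac{12}{d}. \end{aligned}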

Finally when $$|S_t|\ge \frac{n^d}{2}$$, on the event $$\{\tau _\delta \wedge t(n)>t\}$$ we have from (6.8) and using again (2) of Definition 6.4

\begin{aligned} \varphi _{t+1}(S_t^c) \ge \frac{c}{(\log n)^{4d+9-12/d}} \cdot \frac{|S_t|^{1-\frac{1}{d}}}{n^d - |S_t|}\ge \frac{c \cdot 2^{1/d}}{(\log n)^{4d+9-12/d}} \cdot n^{-1}. \end{aligned}
(6.12)
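The last inequality in (6.12) is simply the estimate $$|S_t|\ge n^d/2$$ applied to both numerator and denominator:

\begin{aligned} \frac{|S_t|^{1-\frac{1}{d}}}{n^d-|S_t|} \ge \frac{(n^d/2)^{1-\frac{1}{d}}}{n^d/2} = \left( \frac{n^d}{2}\right) ^{-\frac{1}{d}} = 2^{1/d}\cdot n^{-1}. \end{aligned}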

Substituting (6.9), (6.10) and (6.12) into (6.6) and (6.7) and then into (6.3), (6.4) and (6.5) we deduce

\begin{aligned} \mathbb {\widehat{E}}_{\eta }\left[ Z_{\tau _{i+1}}{{\mathbb {1}}}(\tau _\delta \wedge t(n)>\tau _i)\;\vert \;{\mathcal {F}}_{\tau _i}\right] \le Z_{\tau _i}\left( 1-\left( \varphi \left( \pi (S_{\tau _i})\right) \right) ^2\right) \cdot {{\mathbb {1}}}(\tau _\delta \wedge t(n)>\tau _i), \end{aligned}

where the function $$\varphi$$ is given by

\begin{aligned} \varphi (r) = {\left\{ \begin{array}{ll} c \cdot (\log n)^{-\beta }\cdot n^{-1}\cdot r^{-1/d} &{}\quad \text{ if } \quad \frac{(\log n)^{\alpha }}{n^d}\le r \le \frac{1}{2} \\ c\cdot n^{-d}\cdot r^{-1} &{}\quad \text{ if } \quad r< \frac{(\log n)^{\alpha }}{n^d} \\ c \cdot 2^{1/d}\cdot (\log n)^{-\beta }\cdot n^{-1} &{}\quad \text{ if } \quad r\in \left[ \frac{1}{2},\infty \right) \end{array}\right. } \end{aligned}

with c a positive constant and $$\beta = 4d+9-12/d$$. We now note that if $$\pi (S_t)\le 1/2$$, then $$Z_t = (\pi (S_t))^{-1/2}$$. If $$\pi (S_t) >1/2$$, then $$Z_t = \sqrt{\pi (S_t^c)}/\pi (S_t)\le \sqrt{2}$$. Since $$\varphi (r)=\varphi (1/2)$$ for all $$r>1/2$$, we get that in all cases

\begin{aligned} \varphi (\pi (S_{\tau _i})) = \varphi \left( \frac{1}{Z_{\tau _i}^{2}}\right) \end{aligned}

and this concludes the proof. $$\square$$

### Proof of Proposition 6.6

We define for all $$1\le i\le N$$

\begin{aligned} Y_i = Z_{\tau _i}\cdot {{\mathbb {1}}}(\tau _\delta \wedge t(n)>\tau _i) \end{aligned}

and

\begin{aligned} f(y)={\left\{ \begin{array}{ll} \left( \varphi \left( \frac{1}{y^2} \right) \right) ^2 &{}\quad \text{ if } y>0 \\ 0 &{}\quad \text{ if } y=0 \end{array}\right. }, \end{aligned}

where $$\varphi$$ is defined in Lemma 6.7. With these definitions Lemma 6.7 gives for all $$1\le i\le N$$

\begin{aligned} \mathbb {\widehat{E}}_{x,\eta }\left[ Y_{i+1}\;\vert \;Y_i\right] \le Y_i(1-f(Y_i)) \end{aligned}

with $$Y_1 \le n^{d/2}$$ for all $$n\ge 3$$.
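The bound $$Y_1\le n^{d/2}$$ holds because for every non-empty set $$S\subseteq {\mathbb {Z}}_n^d$$ we have $$\pi (S)\ge n^{-d}$$ and $$\pi (S^{\#})\le \pi (S)$$, whence

\begin{aligned} Z = \frac{\sqrt{\pi (S^{\#})}}{\pi (S)} \le \frac{1}{\sqrt{\pi (S)}} \le n^{d/2}. \end{aligned}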

Since $$\varphi$$ is decreasing, we get that f is increasing, and hence we can apply [7, Lemma 11 (iii)] to deduce that for all $$\varepsilon >0$$ if

\begin{aligned} k\ge \int _{\varepsilon }^{n^{d/2}} \frac{1}{zf(z)} \, dz, \end{aligned}

then we have that $$\mathbb {\widehat{E}}_{x,\eta }\left[ Y_{k+1}\right] \le \varepsilon$$. We now evaluate the integral

\begin{aligned} \int _{\varepsilon }^{n^{d/2}} \frac{1}{zf(z)} \, dz = \int _{\varepsilon }^{n^{d/2}} \frac{1}{z(\varphi (1/z^2))^2}\,dz = \frac{1}{2}\cdot \int _{\frac{1}{n^d}}^{\frac{1}{\varepsilon ^2}} \frac{1}{u(\varphi (u))^2}\,du. \end{aligned}

Splitting the integral according to the different regions where $$\varphi$$ is defined and substituting the function we obtain

\begin{aligned} \int _{\frac{1}{n^d}}^{\frac{1}{\varepsilon ^2}} \frac{1}{u(\varphi (u))^2}\,du \le c' \cdot n^2 \cdot (\log n)^{2\beta }\cdot \log \frac{1}{\varepsilon }, \end{aligned}

where $$c'$$ is a positive constant. Therefore, taking $$\varepsilon =\frac{1}{n^{10d}}$$, this gives that for all $$k\ge c''\cdot n^2 (\log n)^{2\beta +1}$$ with $$c''=2c'd$$ we have that $$\mathbb {\widehat{E}}_{x,\eta }\left[ Y_{k+1}\right] \le n^{-10d}$$, and hence, since $$N=(\log n)^\gamma \cdot n^2$$ with $$\gamma = 8d+20>2\beta +1$$, we deduce

\begin{aligned} \mathbb {\widehat{E}}_{x,\eta }\left[ Y_{N}\right] \le \frac{1}{n^{10d}}. \end{aligned}
(6.13)
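In more detail, splitting the integral at $$(\log n)^{\alpha }/n^d$$ and at 1/2 and inserting the corresponding expression for $$\varphi$$ in each region gives

\begin{aligned} \int _{\frac{1}{n^d}}^{\frac{(\log n)^{\alpha }}{n^d}} \frac{n^{2d}\,u}{c^2}\,du \le \frac{(\log n)^{2\alpha }}{2c^2} \quad \text { and } \quad \int _{\frac{(\log n)^{\alpha }}{n^d}}^{\frac{1}{2}} \frac{n^2(\log n)^{2\beta }}{c^2}\,u^{\frac{2}{d}-1}\,du \le \frac{d}{2c^2}\cdot n^2(\log n)^{2\beta }, \end{aligned}

while on $$[1/2, 1/\varepsilon ^2]$$ the integrand is $$n^2(\log n)^{2\beta }/(2^{2/d}c^2 u)$$, contributing at most $$3c^{-2}n^2(\log n)^{2\beta }\log (1/\varepsilon )$$ for $$\varepsilon \le 1/2$$; the last two contributions dominate and yield the bound above.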

Clearly we have

\begin{aligned} \{\tau _\delta \wedge t(n)>\tau _N\}&= \{ \pi (S_{\tau _N})\ge 1/2, \tau _\delta \wedge t(n)>\tau _N\} \cup \{ \pi (S_{\tau _N})\nonumber \\&<1/2, \tau _\delta \wedge t(n)>\tau _N\}. \end{aligned}
(6.14)

For the second event appearing on the right hand side above, using the definition of the process Z we get

\begin{aligned} \{ \pi (S_{\tau _N})<1/2, \tau _\delta \wedge t(n)>\tau _N\} \subseteq \left\{ Y_N\ge \sqrt{2}\right\} . \end{aligned}

The first event appearing on the right hand side of (6.14) implies that $$|S_{\tau _N}^c|\ge |S_{\tau _N}^c\cap {\mathcal {G}}_{\tau _N}| \ge \delta |{\mathcal {G}}_{\tau _N}|$$. Since $$\eta$$ is a $$\delta$$-good environment, by (1) of Definition 6.4 we have that $$|{\mathcal {G}}_{\tau _N}|\ge (1-\delta )\theta (p) n^d$$. Therefore we obtain

\begin{aligned} \{ \pi (S_{\tau _N})\ge 1/2, \tau _\delta \wedge t(n)>\tau _N\} \subseteq \left\{ Y_N\ge \sqrt{\delta (1-\delta )\theta (p)}\right\} . \end{aligned}

By Markov’s inequality and the two inclusions above we now conclude

\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau _\delta \wedge t(n)>\tau _N\right) \le \frac{\mathbb {\widehat{E}}_{x,\eta }\left[ Y_N\right] }{\sqrt{\delta (1-\delta )\theta (p)}} \le \frac{c}{n^{10d}}, \end{aligned}

where c is a positive constant and in the last inequality we used (6.13). Since $$\eta$$ is a $$\delta$$-good environment, this now implies that

\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau _\delta \le t(n)\right) \ge 1 - \frac{c}{n^{10d}} \end{aligned}

and this finishes the proof. $$\square$$

### 6.2 Proof of Theorem 1.3

In this section we prove Theorem 1.3. First recall the definition of the stopping time $$\tau _\delta$$ as the first time t that $$|S_t\cap {\mathcal {G}}_t|\ge (1-\delta )|{\mathcal {G}}_t|$$.

### Lemma 6.8

Let p be such that $$\theta (p)>1/2$$. There exists $$n_0$$ and $$\delta >0$$ so that for all $$n\ge n_0$$, if $$\eta$$ is a $$\delta$$-good environment, then for all x

\begin{aligned} \Vert {\mathbb {P}}_{x,\eta }\left( X_{t(n)} \in \cdot \right) - \pi \Vert _{\mathrm{TV}} \le \frac{1 -\delta }{2}. \end{aligned}

### Proof

Since $$\theta (p)>1/2$$, there exist $$\varepsilon> 2\delta >0$$ so that

\begin{aligned} \theta (p)>\frac{1}{2}+2\varepsilon \quad \text { and } \quad (1-\delta )^2 \theta (p)>\frac{1}{2}+\varepsilon . \end{aligned}
(6.15)
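Such a choice is indeed possible: taking for instance $$\varepsilon =(\theta (p)-1/2)/3>0$$ we get

\begin{aligned} \theta (p) = \frac{1}{2}+3\varepsilon > \frac{1}{2}+2\varepsilon , \end{aligned}

and since $$(1-\delta )^2\theta (p)\rightarrow \theta (p)=\frac{1}{2}+3\varepsilon$$ as $$\delta \rightarrow 0$$, any sufficiently small $$\delta <\varepsilon /2$$ also satisfies the second inequality.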

Summing over all possible values of $$\tau =\tau _\delta$$ we obtain

\begin{aligned} \begin{aligned}&\Vert {\mathbb {P}}_{x,\eta }\left( X_{t(n)} \in \cdot \right) - \pi \Vert _{\mathrm{TV}} = \frac{1}{2} \sum _{z} \left| {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\right) - \frac{1}{n^d} \right| \\&\quad \le \frac{1}{2} \sum _z \left| \sum _{s\le t(n)} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s\right) -\sum _{s\le t(n)}\frac{{\mathbb {P}}_{x,\eta }\left( \tau =s\right) }{n^d} \right| + {\mathbb {P}}_{x,\eta }\left( \tau >t(n)\right) . \end{aligned} \end{aligned}
(6.16)

By the strong Markov property at time $$\tau$$ we have

\begin{aligned} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s\right)&=\sum _{y} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s, X_s =y\right) \\&= \sum _{y} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_{s}=y\right) {\mathbb {P}}_{x,\eta }\left( \tau =s, X_s=y\right) . \end{aligned}

Since $$\tau$$ is a stopping time for the evolving set process, we can use the coupling of the walk and the Doob transform of the evolving set, Theorem 2.2, to get

\begin{aligned} {\mathbb {P}}_{x,\eta }\left( X_s=y\;\vert \;\tau =s\right) = \mathbb {\widehat{E}}_{x,\eta }\left[ \frac{{{\mathbb {1}}}(y\in S_s)}{|S_s|}\;\Big \vert \;\tau =s\right] . \end{aligned}

For all $$s\le t(n)$$ we call $$\nu _s$$ the probability measure defined by

\begin{aligned} \nu _s(y) = {\mathbb {P}}_{x,\eta }\left( X_s=y\;\vert \;\tau =s\right) . \end{aligned}

We claim that

\begin{aligned} \Vert \nu _s - \pi \Vert _{\mathrm{TV}} \le \frac{1}{2}-\varepsilon . \end{aligned}
(6.17)

Indeed, we have

\begin{aligned} \Vert \nu _s - \pi \Vert _{\mathrm{TV}} \le {\mathbb {E}}_{x,\eta }\left[ \left\| \frac{{{\mathbb {1}}}(\cdot \in S_s)}{|S_s|} - \pi \right\| _{\mathrm{TV}}\;\Big \vert \;\tau =s\right] = {\mathbb {E}}_{x,\eta }\left[ 1 - \frac{|S_s|}{n^d}\;\Big \vert \;\tau =s\right] . \end{aligned}

Since $$s\le t(n)$$ and $$\eta$$ is a $$\delta$$-good environment, we have $$|{\mathcal {G}}_s|\ge (1-\delta )\theta (p) n^d$$, and hence on the event $$\{\tau =s\}$$ we get

\begin{aligned} |S_s|\ge (1-\delta )^2 \theta (p)n^d > \left( \frac{1}{2} + \varepsilon \right) n^d. \end{aligned}

This now implies that

\begin{aligned} {\mathbb {E}}_{x,\eta }\left[ 1 - \frac{|S_s|}{n^d}\;\Big \vert \;\tau =s\right] \le \frac{1}{2} - \varepsilon \end{aligned}

and completes the proof of (6.17). By the definition of $$\nu _s$$ we have

\begin{aligned}&\frac{1}{2} \sum _z \left| \sum _{s\le t(n)} {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z, \tau =s\right) -\sum _{s\le t(n)}\frac{{\mathbb {P}}_{x,\eta }\left( \tau =s\right) }{n^d} \right| \\&\quad =\frac{1}{2}\sum _z\left| \sum _{s\le {t(n)}} \sum _y{\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_s=y\right) \nu _s(y){\mathbb {P}}_{x,\eta }\left( \tau =s\right) - \sum _{s\le {t(n)}} \frac{{\mathbb {P}}_{x,\eta }\left( \tau =s\right) }{n^d} \right| \\&\quad \le \sum _{s\le {t(n)}} {\mathbb {P}}_{x,\eta }\left( \tau =s\right) \frac{1}{2}\sum _z\left| \sum _y\nu _s(y) {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_s=y\right) - \frac{1}{n^d} \right| . \end{aligned}

But since $$\pi$$ is stationary for X when the environment is $$\eta$$, we obtain

\begin{aligned} \frac{1}{2}\sum _z\left| \sum _y\nu _s(y) {\mathbb {P}}_{x,\eta }\left( X_{t(n)}=z\;\vert \;X_s=y\right) - \frac{1}{n^d} \right| \le \Vert \nu _s - \pi \Vert _{\mathrm{TV}}\le \frac{1}{2}-\varepsilon , \end{aligned}

where the last inequality follows from (6.17). Substituting this bound into (6.16) gives

\begin{aligned} \Vert {\mathbb {P}}_{x,\eta }\left( X_{t(n)} \in \cdot \right) - \pi \Vert _{\mathrm{TV}} \le \frac{1}{2}-\varepsilon + {\mathbb {P}}_{x,\eta }\left( \tau >t(n)\right) . \end{aligned}

From Proposition 6.6 we have

\begin{aligned} {\mathbb {P}}_{x,\eta }\left( \tau \le t(n)\right) \ge 1-\frac{c}{n^{2d}}. \end{aligned}

This together with the fact that we took $$2\delta <\varepsilon$$ finishes the proof. $$\square$$

### Corollary 6.9

Let p be such that $$\theta (p)>1/2$$. Then there exist $$\delta \in (0,1)$$ and $$n_0$$ such that for all $$n\ge n_0$$ and all starting environments $$\eta _0$$ we have

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( (\eta _t)_{t\le t(n)}: \,\forall x,y,\, \left\| P_\eta ^{t(n)}(x,\cdot ) - P_\eta ^{t(n)}(y,\cdot ) \right\| _{\mathrm{TV}} \le 1-\delta \right) \ge 1-\delta . \end{aligned}

### Proof

Let $$\delta$$ and $$n_0$$ be as in the statement of Lemma 6.8. Then Lemma 6.8 gives that for all $$n\ge n_0$$, if $$\eta$$ is a $$\delta$$-good environment, then for all x and y we have

\begin{aligned} \left\| P_\eta ^{t(n)}(x,\cdot ) - \pi \right\| _{\mathrm{TV}}\le \frac{1-\delta }{2} \quad \text { and } \quad \left\| P_\eta ^{t(n)}(y,\cdot ) - \pi \right\| _{\mathrm{TV}}\le \frac{1-\delta }{2}. \end{aligned}

Using this and the triangle inequality we obtain that on the event that $$\eta$$ is a $$\delta$$-good environment for all x and y

\begin{aligned} \left\| P_\eta ^{t(n)}(x,\cdot ) -P_\eta ^{t(n)}(y,\cdot ) \right\| _{\mathrm{TV}} \le 1-\delta . \end{aligned}

Therefore for all $$n\ge n_0$$ we get for all $$\eta _0$$

\begin{aligned}&{\mathcal {P}}_{\eta _0}\left( (\eta _t)_{t\le t(n)}: \,\exists \,x,y,\, \left\| P_\eta ^{t(n)}(x,\cdot ) - P_\eta ^{t(n)}(y,\cdot ) \right\| _{\mathrm{TV}} >1-\delta \right) \\&\quad \le {\mathcal {P}}_{\eta _0}\left( \eta \text { is not a }\delta \text {-good environment}\right) . \end{aligned}

Taking $$n_0$$ even larger we get from Lemma 6.5 that for all $$n\ge n_0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta \text { is not a }\delta \text {-good environment}\right) \le \delta \end{aligned}

and this concludes the proof. $$\square$$

The following lemma will be applied later in the case where R is a constant or a uniform random variable.

### Lemma 6.10

Let R be a random time independent of X and such that the following holds: there exists $$\delta \in (0,1)$$ such that for all starting environments $$\eta _0$$ we have

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \,\forall x,y,\, \left\| {\mathbb {P}}_{x,\eta }\left( X_R=\cdot \right) - {\mathbb {P}}_{y,\eta }\left( X_R=\cdot \right) \right\| _{\mathrm{TV}} \le 1-\delta \right) \ge 1-\delta . \end{aligned}

Then there exists a positive constant $$c=c(\delta )$$ and $$n_0=n_0(\delta )\in {\mathbb {N}}$$ so that if $$k=c\log n$$ and $$R(k)=R_1+\cdots +R_k$$, where the $$R_i$$ are i.i.d. distributed as R, then for all $$n\ge n_0$$, all x, y and $$\eta _0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta :\left\| {\mathbb {P}}_{x,\eta }\left( X_{R(k)}=\cdot \right) - {\mathbb {P}}_{y,\eta }\left( X_{R(k)}=\cdot \right) \right\| _{\mathrm{TV}} \le \frac{1}{n^{3d}}\right) \ge 1-\frac{1}{n^{3d}}. \end{aligned}

### Proof

We fix $$x_0, y_0$$ and let X, Y be two walks moving in the same environment $$\eta$$ and started from $$x_0$$ and $$y_0$$ respectively. We now present a coupling of X and Y. We divide time into rounds of length $$R_1, R_2,\ldots$$ and we describe the coupling for every round.

For the first round, i.e. for times between 0 and $$R_1$$ we use the optimal coupling given by

\begin{aligned} {\mathbb {P}}_{x_0,y_0,\eta }\left( X_{R_1}\ne Y_{R_1}\right) = \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R_1}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{R_1}=\cdot \right) \Vert _{\mathrm{TV}}, \end{aligned}

where the environment $$\eta$$ is restricted between time 0 and $$R_1$$. We now change the definition of a good environment. We call $$\eta$$ a good environment during $$[0,R_1]$$ if the total variation distance appearing above is smaller than $$1-\delta$$.

If X and Y did not couple after $$R_1$$ steps, then they have reached some locations $$X_{R_1}=x_1$$ and $$Y_{R_1} = y_1$$. In the second round we couple them using again the corresponding optimal coupling, i.e.

\begin{aligned} {\mathbb {P}}_{x_1,y_1,\eta }\left( X_{R_2}\ne Y_{R_2}\right) = \Vert {\mathbb {P}}_{x_1,\eta }\left( X_{R_2}=\cdot \right) - {\mathbb {P}}_{y_1,\eta }\left( Y_{R_2}=\cdot \right) \Vert _{\mathrm{TV}}. \end{aligned}

Similarly we call $$\eta$$ a good environment for the second round if the total variation distance above is smaller than $$1-\delta$$. We continue in the same way for all later rounds. Since the bound on the probability given in the statement of the lemma is uniform over all starting points x and y and over the initial environment, we get that for all $$\eta _0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta \text { is good for the }i\text {-th round}\right) \ge 1-\delta \end{aligned}

and the same bound is true even after conditioning on the previous $$i-1$$ rounds. Let $$k=c\log n$$ for a constant c to be determined. Let E denote the number of good environments in the first k rounds. We now get

\begin{aligned}&{\mathbb {P}}_{x_0,y_0, \eta _0}\left( X_{R(k)}\ne Y_{R(k)}\right) \le {\mathbb {P}}_{x_0,y_0, \eta _0}\left( E\le \frac{(1-\delta )k}{2}\right) \\&\quad + {\mathbb {P}}_{x_0,y_0, \eta _0}\left( E > \frac{(1-\delta )k}{2}, X_{R(k)}\ne Y_{R(k)}\right) . \end{aligned}

By concentration, since we can stochastically dominate E from below by $${\mathrm{Bin}}(k,1-\delta )$$, the first probability decays exponentially in k. For the second probability, on the event that there are enough good environments, since the probability of not coupling in each round is at most $$1-\delta$$, by successive conditioning we get

\begin{aligned} {\mathbb {P}}_{x_0,y_0, \eta _0}\left( E > \frac{(1-\delta )k}{2}, X_{R(k)}\ne Y_{R(k)}\right) \le (1-\delta )^{(1-\delta )k/2}. \end{aligned}
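Both error terms can be made smaller than $$\frac{1}{2}n^{-6d}$$ by an explicit choice of c. A standard Chernoff bound for the stochastic lower bound $${\mathrm{Bin}}(k,1-\delta )$$ gives $${\mathbb {P}}_{x_0,y_0,\eta _0}\left( E\le \frac{(1-\delta )k}{2}\right) \le e^{-(1-\delta )k/8}$$, while

\begin{aligned} (1-\delta )^{(1-\delta )k/2} = \exp \left( -\frac{(1-\delta )k}{2}\cdot \log \frac{1}{1-\delta }\right) \le n^{-6d} \quad \text { for } k=c\log n \text { and } c\ge \frac{12d}{(1-\delta )\log \frac{1}{1-\delta }}. \end{aligned}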

Therefore, taking $$c=c(\delta )$$ sufficiently large we get overall for all n sufficiently large

\begin{aligned} {\mathbb {P}}_{x_0,y_0, \eta _0}\left( X_{R(k)}\ne Y_{R(k)}\right) \le \frac{1}{n^{6d}}. \end{aligned}

So by Markov’s inequality again we obtain for all n sufficiently large

\begin{aligned}&{\mathcal {P}}_{\eta _0}\left( \eta : \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R(k)}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{R(k)}=\cdot \right) \Vert _{\mathrm{TV}} > \frac{1}{n^{3d}}\right) \\&\quad \le n^{3d}\cdot {\mathcal {E}}_{\eta _0}\left[ \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R(k)}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{R(k)}=\cdot \right) \Vert _{\mathrm{TV}}\right] \\&\quad \le n^{3d}\cdot {\mathbb {P}}_{x_0,y_0,\eta _0}\left( X_{R(k)}\ne Y_{R(k)}\right) \le \frac{1}{n^{3d}}, \end{aligned}

where $${\mathcal {E}}$$ is expectation over the random environment. This finishes the proof. $$\square$$

### Proof of Theorem 1.3

Let $$R=t(n)$$. Then by Corollary 6.9 there exists $$n_0$$ such that R satisfies the condition of Lemma 6.10 for $$n\ge n_0$$. So applying Lemma 6.10 we get for all n sufficiently large and all $$x_0,y_0$$ and $$\eta _0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{kt(n)}=\cdot \right) - {\mathbb {P}}_{y_0,\eta }\left( Y_{kt(n)}=\cdot \right) \Vert _{\mathrm{TV}} > \frac{1}{n^{3d}}\right) \le \frac{1}{n^{3d}}, \end{aligned}

where $$k=c\log n$$. By a union bound over all starting states $$x_0,y_0$$ we deduce

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \max _{x_0,y_0}\Vert P_\eta ^{kt(n)}(x_0,\cdot ) - P_\eta ^{kt(n)}(y_0,\cdot ) \Vert _{\mathrm{TV}} > \frac{1}{n^{3d}}\right) \le n^{2d} \cdot \frac{1}{n^{3d}} = \frac{1}{n^d}. \end{aligned}

This proves that for all n sufficiently large

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : t_{\mathrm {mix}}(n^{-3d}, \eta ) \ge kt(n)\right) \le n^{-d} \end{aligned}

and thus completes the proof of the theorem. $$\square$$

### Proof of Theorem 1.6

Let $$\delta =\varepsilon /100$$ and $$k=[2(1-\delta )/(\delta \theta (p))]+1$$. For every starting point $$x_0$$ we are going to define a sequence of stopping times. First let $$\xi _1$$ be the first time that all the edges refresh at least once. Let $$\widetilde{\delta }=\delta /k$$. Then we define $$\tau _1=\tau _1(x_0)$$ by

\begin{aligned} \tau _1 = \inf \left\{ t\ge \xi _1: |S_t\cap {\mathcal {G}}_t| \ge (1-\widetilde{\delta })|{\mathcal {G}}_t|\right\} \wedge (\xi _1+t(n)), \end{aligned}

where $$(S_t)$$ is the evolving set process starting at time $$\xi _1$$ from $$\{X_{\xi _1}\}$$ and coupled with X using the Diaconis–Fill coupling. We define inductively $$\xi _{i+1}$$ as the first time after $$\xi _i+t(n)$$ that all edges refresh at least once. In order to now define $$\tau _{i+1}$$, we start a new evolving set process which at time $$\xi _{i+1}$$ is the singleton $$\{X_{\xi _{i+1}}\}$$. (This new restart does not affect the definition of the earlier $$\tau _j$$’s.) To simplify notation, we call this process again $$S_t$$ and we couple it with the walk X using the Diaconis–Fill coupling. Next we define

\begin{aligned} \tau _{i+1}=\inf \left\{ t\ge \xi _{i +1}: |S_t\cap {\mathcal {G}}_t| \ge (1-\widetilde{\delta })|{\mathcal {G}}_t|\right\} \wedge (\xi _{i+1}+t(n)). \end{aligned}

From now on we call $$\eta$$ a good environment if $$\eta$$ is a $$\delta$$-good environment and $$\xi _k\le 2kt(n)$$. Lemma 6.5 and the definition of the $$\xi _i$$’s give for all $$\eta _0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}(\eta \text { is good}) \ge 1-\frac{c_4}{n^{10d}}, \end{aligned}
(6.18)

where $$c_4$$ is a positive constant. By Proposition 6.6 there exists a positive constant c so that if $$\eta$$ is a good environment, then for all $$x_0$$ and for all $$1\le i\le k$$ we have

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( \tau _{i}-\xi _{i}\le t(n)\right) \ge 1 -\frac{c}{n^{10d}}. \end{aligned}
(6.19)

We will now prove that there exists a positive constant $$c'$$ so that for all $$x_0$$

\begin{aligned} {\mathbb {P}}_{x_0, \eta _0}\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|<(1-\delta )n^d\right) \le \frac{c'}{n^{2d}}. \end{aligned}
(6.20)

Writing again $${\mathcal {E}}$$ for expectation over the random environment and using (6.18) and (6.19) for all $$i\le k$$, we obtain that there exists a positive constant $$c''$$ so that for all n sufficiently large and for all $$x_0, \eta _0$$

\begin{aligned} {\mathcal {E}}_{\eta _0}\left[ {\mathbb {P}}_{x_0,\eta }\left( \tau _k>(\log n)t(n)\right) \right] \le \frac{c''}{n^{3d+1}}. \end{aligned}

This and Markov’s inequality now give that for all n sufficiently large

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \forall \,x_0, \, {\mathbb {P}}_{x_0,\eta }\left( \tau _k\le (\log n)t(n)\right) \ge 1-\frac{1}{n^{{2d}}}\right) \ge 1-\frac{c''}{n}. \end{aligned}
(6.21)

Since every edge refreshes after an exponential time of parameter $$\mu$$, it follows that the number of different percolation clusters that appear in an interval of length t is stochastically bounded by a Poisson random variable of parameter $$\mu \cdot t\cdot dn^d$$. Therefore, the number of possible percolation configurations in the interval $$[\xi _i, \xi _i+t(n)]$$ is dominated by a Poisson variable $$N_i$$ of parameter $$\mu \cdot t(n)\cdot dn^d$$. By the concentration of the Poisson distribution, we obtain

\begin{aligned} {\mathbb {P}}\left( \exists \, i\le k: N_i\ge n^{d+4}\right) \le \exp \left( -c_1n \right) , \end{aligned}

where $$c_1$$ is another positive constant. Let $${\mathcal {G}}^1, \ldots , {\mathcal {G}}^k$$ be the giant components of independent supercritical percolation configurations. Since the percolation clusters obtained at the times $$\xi _i$$ are independent, using Corollary 3.2 in the third inequality below we deduce that for all n sufficiently large

\begin{aligned}&{\mathbb {P}}_{x_0, \eta _0}\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|<(1-\delta )n^d\right) \\&\quad \le {\mathbb {P}}_{x_0,\eta _0}\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|<(1-\delta )n^d, \{\tau _{i}-\xi _{i}\le t(n)\} \cap \{N_i\le n^{d+4}\}, \forall \, i\le k\right) \\&\qquad + e^{-c_1n} + k\frac{c}{n^{10d}}\\&\quad \le n^{(d+4)k} {\mathbb {P}}\left( |{\mathcal {G}}^1\cup \cdots \cup {\mathcal {G}}^k|<(1-\delta )n^{d}\right) + k \frac{2c}{n^{10d}}\\&\quad \le \frac{n^{(d+4)k}}{c}\exp \left( -cn^{\frac{d}{d+1}} \right) + k\frac{2c}{n^{10d}}\le \frac{c''}{n^{10d}}, \end{aligned}

where $$c''$$ is a positive constant uniform for all $$x_0$$ and $$\eta _0$$. This proves (6.20). So we can sum this error over all starting points $$x_0$$ and get using Markov’s inequality that for all n sufficiently large and all $$\eta _0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \forall x_0, \,{\mathbb {P}}_{x_0,\eta }\left( |{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}|\ge (1-\delta )n^d\right) \ge 1-\frac{1}{n}\right) \ge 1-\frac{c'}{n}. \end{aligned}
(6.22)

The definition of the stopping times $$\tau _i$$ immediately yields

\begin{aligned} \{|{\mathcal {G}}_{\tau _1}\cup \cdots \cup {\mathcal {G}}_{\tau _k}| \ge (1-\delta )n^d\} \subseteq \{|S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\ge (1-\delta )^2n^d\}. \end{aligned}

This together with (6.22) now give

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta : \forall x_0, \,{\mathbb {P}}_{x_0,\eta }\left( |S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\ge (1-\delta )^2 n^d\right) \ge 1-\frac{1}{n}\right) \ge 1-\frac{c'}{n}. \end{aligned}
(6.23)

Recall the dependence on $$x_0$$ of the stopping times $$\tau _i$$, which we have suppressed from the notation. We now change the definition of a good environment and call $$\eta$$ good if it satisfies the following for all $$x_0$$:

\begin{aligned}&{\mathbb {P}}_{x_0,\eta }\left( |S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\ge (1-\delta )^2 n^d\right) \ge 1-\frac{1}{n} \quad \text { and } \end{aligned}
(6.24)
\begin{aligned}&{\mathbb {P}}_{x_0,\eta }\left( \tau _k\le (\log n)t(n)\right) \ge 1-\frac{1}{n^{2d}} \end{aligned}
(6.25)

From (6.21) and (6.23) we get that for all $$\eta _0$$

\begin{aligned} {\mathcal {P}}_{\eta _0}\left( \eta \text { is good}\right) \ge 1-\frac{c'+c''}{n}. \end{aligned}
(6.26)

We now define a stopping time $$\tau (x_0)$$ by selecting $$i\in \{1,\ldots , k\}$$ uniformly at random and setting $$\tau (x_0)=\tau _i(x_0)$$. Then at this time we have for all x

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{\tau (x_0)}=x\right) \ge \frac{1}{kn^d}\cdot {\mathbb {P}}_{x_0,\eta }\left( x\in S_{\tau _1}\cup \cdots \cup S_{\tau _k}\right) . \end{aligned}

We now set $$f_1(x) = {\mathbb {P}}_{x_0,\eta }\left( x\in S_{\tau _1}\cup \cdots \cup S_{\tau _k}\right)$$ for all x. Since $$\eta$$ is a good environment, for some $$\delta '<\varepsilon /50$$ we have for all n sufficiently large

\begin{aligned} \sum _x f_1(x) = {\mathbb {E}}_{x_0,\eta }\left[ |S_{\tau _1}\cup \cdots \cup S_{\tau _k}|\right] \ge (1-\delta )^2n^d \left( 1-\frac{1}{n}\right) = (1-\delta ') n^d.\nonumber \\ \end{aligned}
(6.27)

First let $$c=c(\varepsilon )\in {\mathbb {N}}$$ be a constant to be fixed later. In order to define the stopping rule, we first repeat the above construction ck times. More specifically, when $$X_0=x_0$$, we let $$\sigma _1= \tau (x_0)\wedge (\log n)t(n)$$. Then, since $$\eta$$ is a good environment, we obtain

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _1}=x\right) \ge \frac{1}{kn^d} f_1(x) - \frac{1}{n^{2d}}. \end{aligned}

Let $$X_{\sigma _1} = x_1$$. Then we define in the same way as above a stopping time $$\tau (x_1)$$ with the evolving set process starting from $$\{x_1\}$$ and the environment considered after time $$\sigma _1$$. Then we set

\begin{aligned} \sigma _2 = \sigma _1+(\tau (x_1)-\sigma _1)\wedge (\log n)t(n). \end{aligned}

We continue in this way and define a sequence of stopping times $$\sigma _i$$ for all $$i< ck$$. In the same way as for the first round for all $$i< ck$$ we have

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _i}=x\right) \ge \frac{1}{kn^d} f_i(x) - \frac{1}{n^{2d}} \end{aligned}

and the function $$f_i$$ satisfies (6.27).

We next define the stopping rule. To do so, we specify the probability of stopping in each round. We define the set $$A_1$$ of good points for the first round as follows:

\begin{aligned} A_1 = \left\{ x: \, {\mathbb {P}}_{x_0, \eta }\left( X_{\sigma _1}=x\right) \ge \frac{1}{2kn^d}\right\} . \end{aligned}

We now sample X at time $$\sigma _1$$. If $$X_{\sigma _1}=x\in A_1$$, then at this time we stop with probability

\begin{aligned} \frac{1}{2kn^d {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _1}=x\right) }. \end{aligned}

If we stop after the first round, then we set $$T=\sigma _1$$. So if $$x\in A_1$$, we have

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_{T}=x, T=\sigma _1\right) = \frac{1}{2kn^d}. \end{aligned}

From (6.27) we get that $$|A_1|\ge (1-3\delta ')n^d$$ for all n sufficiently large. Therefore, summing over all $$x\in A_1$$ we get that

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _1\right) \ge \frac{1-3\delta '}{2k}. \end{aligned}
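The bound $$|A_1|\ge (1-3\delta ')n^d$$ used here can be seen by counting: (6.27) gives $$\sum _x(1-f_1(x))\le \delta ' n^d$$, and since every x with $$f_1(x)<2/3$$ contributes more than 1/3 to this sum,

\begin{aligned} \#\left\{ x:\, f_1(x)<\frac{2}{3}\right\} \le 3\delta ' n^d. \end{aligned}

Moreover, every x with $$f_1(x)\ge 2/3$$ satisfies $${\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _1}=x\right) \ge \frac{2}{3kn^d}-\frac{1}{n^{2d}}\ge \frac{1}{2kn^d}$$ for all n sufficiently large, and hence belongs to $$A_1$$.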

Therefore, this now gives for $$x\in A_1$$

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( X_T=x\;\vert \;T=\sigma _1\right) \le \frac{1}{(1-3\delta ')n^d}. \end{aligned}

We now define inductively the probability of stopping in the i-th round. Suppose we have not stopped up to the $$(i-1)$$-st round. We define the set of good points for the i-th round via

\begin{aligned} A_i=\left\{ x: \, {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _i}=x\right) \ge \frac{1}{2kn^d} \right\} \end{aligned}

If $$X_{\sigma _i}=x\in A_i$$, then the probability we stop at the i-th round is

\begin{aligned} \frac{1}{2kn^d {\mathbb {P}}_{x_0,\eta }\left( X_{\sigma _i}=x\right) } \end{aligned}

and as above we obtain by summing over all $$x\in A_i$$ and using that $$|A_i|\ge (1-3\delta ')n^d$$

\begin{aligned}&{\mathbb {P}}_{x_0,\eta }\left( T=\sigma _i\;\vert \;T>\sigma _{i-1}\right) \ge \frac{1-3\delta '}{2k} \quad \text { and }\\&{\mathbb {P}}_{x_0,\eta }\left( X_T=x\;\vert \;T=\sigma _i\right) \le \frac{1}{(1-3\delta ')n^d}, \,\, \forall \, x\in A_i. \end{aligned}

If we have not stopped in any of the first ck rounds, then we set $$T=\sigma _{ck+1}$$. Notice, however, that

\begin{aligned} {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _{ck+1}\right) \le \left( 1-\frac{1-3\delta '}{2k} \right) ^{ck}\le \exp \left( -c(1-3\delta ') \right) . \end{aligned}

For every round $$i\le ck$$, we now have that

\begin{aligned}&\left\| {\mathbb {P}}_{x_0,\eta }\left( X_T=\cdot \;\vert \;T=\sigma _i\right) - \pi \right\| _{\mathrm{TV}} \\&\quad \le \sum _{x\in A_i} \left( {\mathbb {P}}_{x_0,\eta }\left( X_T=x\;\vert \;T=\sigma _i\right) - \frac{1}{n^d} \right) _+ + \frac{|A_i^c|}{n^d} \\&\quad \le \sum _{x\in A_i}\left( \frac{1}{(1-3\delta ')n^d} - \frac{1}{n^d} \right) + 3\delta ' \le \frac{3\delta '}{1-3\delta '}+3\delta ' \le 10\delta ', \end{aligned}

since $$\varepsilon <1/4$$. So we now get overall

\begin{aligned}&\left\| {\mathbb {P}}_{x_0,\eta }\left( X_T=\cdot \right) - \pi \right\| _{\mathrm{TV}} \\&\quad \le \sum _{i\le ck} {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _i\right) \left\| {\mathbb {P}}_{x_0,\eta }\left( X_{T}=\cdot \;\vert \;T=\sigma _i\right) -\pi \right\| _{\mathrm{TV}} + {\mathbb {P}}_{x_0,\eta }\left( T=\sigma _{ck+1}\right) \\&\quad \le 10\delta ' + \exp \left( -c(1-3\delta ') \right) . \end{aligned}

We now take $$c=c(\varepsilon )$$ so that the above bound is smaller than $$\varepsilon$$. Finally, by the definition of the stopping times $$\sigma _i$$, we also get that $${\mathbb {E}}_{x_0,\eta }\left[ T\right] \le c k (\log n)t(n)$$ and this concludes the proof. $$\square$$

### Proof of Corollary 1.7

Let $$n=10r$$. It suffices to prove the statement of the corollary for X being a random walk on dynamical percolation on $${\mathbb {Z}}_n^d$$. From Theorem 1.6 there exists a constant $$a>0$$ so that for all n large enough and all x and $$\eta _0$$

\begin{aligned} {\mathbb {P}}_{x,\eta _0}\left( \exists \, t\le \left( n^2+\frac{1}{\mu } \right) (\log n)^a: \, \left\| X_t \right\| \ge r\right) \ge \frac{1}{2}. \end{aligned}

The statement of the corollary follows by iteration. $$\square$$