Abstract
We consider dynamical percolation on the d-dimensional discrete torus \({\mathbb {Z}}_n^d\) of side length n, where each edge refreshes its status at rate \(\mu =\mu _n\le 1/2\) to be open with probability p. We study random walk on the torus, where the walker moves at rate \(1/(2d)\) along each open edge. In earlier work of two of the authors with A. Stauffer, it was shown that in the subcritical case \(p<p_c({\mathbb {Z}}^d)\), the (annealed) mixing time of the walk is \(\Theta (n^2/\mu )\), and it was conjectured that in the supercritical case \(p>p_c({\mathbb {Z}}^d)\), the mixing time is \(\Theta (n^2+1/\mu )\); here the implied constants depend only on d and p. We prove a quenched (and hence annealed) version of this conjecture up to a poly-logarithmic factor under the assumption \(\theta (p)>1/2\). When \(\theta (p)>0\), we prove a version of this conjecture for an alternative notion of mixing time involving randomised stopping times. The latter implies sharp (up to poly-logarithmic factors) upper bounds on exit times of large balls throughout the supercritical regime. Our proofs are based on percolation results (e.g., the Grimmett–Marstrand Theorem) and an analysis of the volume-biased evolving set process; the key point is that typically, the evolving set has a substantial intersection with the giant percolation cluster at many times. This allows us to use precise isoperimetric properties of the cluster (due to G. Pete) to infer rapid growth of the evolving set, which in turn yields the upper bound on the mixing time.
1 Introduction
This paper studies random walk on dynamical percolation on the torus \({\mathbb {Z}}^d_n\). The edges refresh at rate \(\mu \le 1/2\) and switch to open with probability p and closed with probability \(1-p\) where \(p>p_c({\mathbb {Z}}^d)\) with \(p_c({\mathbb {Z}}^d)\) being the critical probability for bond percolation on \({\mathbb {Z}}^d\). The random walk moves at rate 1. When its exponential clock rings, the walk chooses one of the 2d adjacent edges with equal probability. If the bond is open, then it makes the jump, otherwise it stays in place.
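For concreteness, the dynamics just described can be summarised by a formal generator (this compact form is our paraphrase of the verbal description above; here \(\eta ^{e,1}\) and \(\eta ^{e,0}\) denote the configuration \(\eta \) with the edge e set to open and closed respectively):
$$\begin{aligned} ({\mathcal {L}}f)(x,\eta )&= \frac{1}{2d}\sum _{y\sim x}\eta (\{x,y\})\left( f(y,\eta )-f(x,\eta )\right) \\&\quad +\mu \sum _{e\in E({\mathbb {Z}}_n^d)}\left( p\, f(x,\eta ^{e,1})+(1-p)\, f(x,\eta ^{e,0})-f(x,\eta )\right) . \end{aligned}$$
In particular the walker attempts jumps at total rate 1 and crosses each open incident edge at rate \(1/(2d)\).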
We represent the state of the system by \((X_t,\eta _t)\), where \(X_t\in {\mathbb {Z}}_n^d\) is the location of the walk at time t and \(\eta _t\in \{0,1\}^{E({\mathbb {Z}}_n^d)}\) is the configuration of edges at time t, where \(E({\mathbb {Z}}_n^d)\) stands for the edges of the torus. We emphasise at this point that \((X_t,\eta _t)\) is Markovian, while the location of the walker \((X_t)\) is not.
One easily checks that \(\pi \times \pi _p\) is the unique stationary distribution and that the process is reversible; here \(\pi \) is the uniform distribution on \({\mathbb {Z}}_n^d\) and \(\pi _p\) is the product measure with density p on the edges. Moreover, if the environment \(\{\eta _t\}\) is fixed, then \(\pi \) is a stationary distribution for the resulting time inhomogeneous Markov process.
This model was introduced by Peres et al. [9]. We emphasise that d and p are considered fixed, while n and \(\mu =\mu _n\) are the two parameters which are varying. The focus of [9] was to study the total variation mixing time of \((X,\eta )\), i.e.
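We record a standard formulation of this quantity for the reader's orientation (our formulation; [9] may phrase the worst case over initial states slightly differently):
$$\begin{aligned} t_{\mathrm {mix}}(\varepsilon ) := \min \Big \{t\ge 0: \max _{x,\eta _0}\big \Vert {\mathbb {P}}_{x,\eta _0}\left( (X_t,\eta _t)\in \cdot \right) -\pi \times \pi _p\big \Vert _{\mathrm {TV}}\le \varepsilon \Big \}, \qquad t_{\mathrm {mix}}:=t_{\mathrm {mix}}\left( \tfrac{1}{4}\right) . \end{aligned}$$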
They focused on the subcritical regime, i.e. when \(p<p_c\), and they proved the following:
Theorem 1.1
[9] For all \(d\ge 1\) and \(p<p_c\) the mixing time of \((X,\eta )\) satisfies
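As summarised in the abstract, the conclusion is
$$\begin{aligned} t_{\mathrm {mix}}\left( \tfrac{1}{4}\right) \asymp \frac{n^2}{\mu }, \end{aligned}$$
with implied constants depending only on d and p.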
They also established the same mixing time when one looks at the walk and averages over the environment.
In the present paper we focus on the supercritical regime. We study both the full system and the quenched mixing times. We start by defining the different notions of mixing that we will be using. First of all we write \({\mathbb {P}}_{x,\eta }\left( \cdot \right) \) for the probability measure of the walk, when the environment process is conditioned to be \(\eta = (\eta _t)_{t\ge 0}\) and the walk starts from x. We write \({\mathcal {P}}\) for the distribution of the environment which is dynamical percolation on the torus, a measure on càdlàg paths \([0,\infty ) \rightarrow \{0,1\}^{E({\mathbb {Z}}^d_{n})}\). We write \({\mathcal {P}}_{\eta _0}\) to denote the measure \({\mathcal {P}}\) when the starting environment is \(\eta _0\). Abusing notation we write \({\mathbb {P}}_{x,\eta _0}\left( \cdot \right) \) to mean the law of the full system when the walk starts from x and the initial configuration of the environment is \(\eta _0\). To distinguish it from the quenched law, we always write \(\eta _0\) in the subscript as opposed to \(\eta \).
For \(\varepsilon \in (0,1)\), \(x\in {\mathbb {Z}}_n^d\) and a fixed environment \(\eta =(\eta _t)_{t\ge 0}\) we write
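(a natural reading of the elided display, with the superscript notation ours):
$$\begin{aligned} t_{\mathrm {mix}}^x(\varepsilon ,\eta ) := \min \big \{t\ge 0: \big \Vert {\mathbb {P}}_{x,\eta }\left( X_t\in \cdot \right) -\pi \big \Vert _{\mathrm {TV}}\le \varepsilon \big \}. \end{aligned}$$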
We also write
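(maximising over the starting point, in the reading above):
$$\begin{aligned} t_{\mathrm {mix}}(\varepsilon ,\eta ) := \max _x t_{\mathrm {mix}}^x(\varepsilon ,\eta ) \end{aligned}$$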
for the quenched \(\varepsilon \)-mixing time. We remark that \(t_{\mathrm {mix}}(\varepsilon , \eta )\) could be infinite. Using the obvious definitions, the standard inequality \(t_{\mathrm{mix}}(\varepsilon )\le \log _2(\tfrac{1}{\varepsilon }) t_{\mathrm{mix}}(\tfrac{1}{4})\) does not hold for time-inhomogeneous Markov chains and therefore also not for quenched mixing times. Therefore, in such situations, to describe the rate of convergence to stationarity, it is more natural to give bounds on \(t_{\mathrm{mix}}(\varepsilon , \eta )\) for all \(\varepsilon \) rather than just considering \(\varepsilon =1/4\) as is usually done.
We first mention the result from the companion paper [8] which is an upper bound on the quenched mixing time and the hitting time of large sets for all values of p. We write \(\tau _A\) for the first time X hits the set A.
Theorem 1.2
[8] For all \(d\ge 1\) and \(\delta >0\), there exists \(C=C(d,\delta )<\infty \) so that for all \(p\in [\delta ,1-\delta ]\), for all \(\mu \le 1/2\), for all n and for all \(\varepsilon \), random walk in dynamical percolation on \({\mathbb {Z}}_n^d\) with parameters p and \(\mu \) satisfies for all x
Moreover, there exists a constant \(c=c(d,\delta )<1\), so that for all \(A\subseteq {\mathbb {Z}}_n^d\) with \(|A|\ge n^d/2\) and all k we have
Our first result concerns the quenched mixing time in the case when \(\theta (p)>1/2\).
Theorem 1.3
Let \(p\in (p_c({\mathbb {Z}}^d), 1)\) with \(\theta (p)>1/2\) and \(\mu \le 1/2\). Then there exists \(a>0\) (depending only on d and p) so that for all n sufficiently large we have
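In words, the elided conclusion is the quenched form of the conjecture described in the abstract: with high probability over the environment \(\eta \), the quenched mixing time satisfies \(t_{\mathrm {mix}}\left( \tfrac{1}{4},\eta \right) \le \left( n^2+\tfrac{1}{\mu }\right) (\log n)^{a}\).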
Remark 1.4
We note that when \(1/\mu <(\log n)^b\) for some \(b>0\), then Theorem 1.3 follows from Theorem 1.2. (Take \(\varepsilon =n^{-3d}\) in (1.1) and do a union bound over x.) So we are going to prove Theorem 1.3 in the regime when \(1/\mu > (\log n)^{d+2}\).
Our second result concerns the mixing time at a stopping time in the quenched regime for all values of \(p>p_c({\mathbb {Z}}^d)\). We first give the definition of this notion of mixing time that we are using.
Definition 1.5
For \(\varepsilon \in (0,1)\) and a fixed environment \(\eta =(\eta _t)_{t\ge 0}\) we define
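A formulation consistent with how this notion is used in the proof of Theorem 1.6 (the name \(t_{\mathrm {stop}}\) is ours, for orientation) is
$$\begin{aligned} t_{\mathrm {stop}}(\varepsilon ,\eta ) := \max _x\,\inf \Big \{{\mathbb {E}}_{x,\eta }\left[ T\right] : T \text { a randomised stopping time with } \big \Vert {\mathbb {P}}_{x,\eta }\left( X_T\in \cdot \right) -\pi \big \Vert _{\mathrm {TV}}\le \varepsilon \Big \}. \end{aligned}$$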
Theorem 1.6
Let \(p\in (p_c({\mathbb {Z}}^d),1)\), \(\varepsilon >0\) and \(\mu \le 1/2\). Then there exists \(a>0\) (depending only on d and p) so that for all n sufficiently large we have
Finally we give a consequence for random walk on dynamical percolation on all of \({\mathbb {Z}}^d\). This is defined analogously to the process on the torus \({\mathbb {Z}}_n^d\) above, where the edges refresh at rate \(\mu \).
Corollary 1.7
Let \(p\in (p_c({\mathbb {Z}}^d),1)\) and \(\mu \le 1/2\). Let X be the random walk on dynamical percolation on \({\mathbb {Z}}^d\) and for \(r>0\) let
Then there exists \(a>0\) (depending only on d and p) so that for all r
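In the natural reading, the first elided display defines the exit time \(\tau _r:=\inf \{t\ge 0: X_t\notin {\mathcal {B}}(0,r)\}\) (say with \(X_0=0\)), and the conclusion, as summarised in the abstract, bounds this exit time above by \(r^2+\tfrac{1}{\mu }\) times a poly-logarithmic factor, with high probability.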
Notation For positive functions f, g we write \(f\sim g\) if \(f(n)/g(n)\rightarrow 1\) as \(n\rightarrow \infty \). We also write \(f(n) \lesssim g(n)\) if there exists a constant \(c \in (0,\infty )\) such that \(f(n) \le c g(n)\) for all n, and \(f(n) \gtrsim g(n)\) if \(g(n) \lesssim f(n)\). Finally, we use the notation \(f(n) \asymp g(n)\) if both \(f(n) \lesssim g(n)\) and \(f(n) \gtrsim g(n)\).
Related work Various references to random walk on dynamical percolation have been provided in [9]. In a different direction from the one pursued here, Andres et al. [1] have obtained a quenched invariance principle for random walks with time-dependent ergodic degenerate weights that are assumed to take strictly positive values. More recently, Biskup and Rodriguez [2] were able to handle the case where the weights can be zero, and hence the dynamical percolation case.
1.1 Overview of the proof
In this subsection we explain the high level idea of the proof and also give the structure of the paper. First we note that when we fix the environment to be \(\eta \), we obtain a time inhomogeneous Markov chain. To study its mixing time, we use the theory of evolving sets developed by Morris and Peres adapted to the inhomogeneous setting, which was done in [8]. We recall this in Sect. 2. In particular we state a theorem by Diaconis and Fill that gives a coupling of the chain with the Doob transform of the evolving set. (Diaconis and Fill proved it in the time homogeneous setting, but the adaptation to the time inhomogeneous setting is straightforward.) The importance of the coupling is that conditional on the Doob transform of the evolving set up to time t, the random walk at time t is uniform on the Doob transform at the same time. This property of the coupling is going to be crucial for us in the proofs of Theorems 1.3 and 1.6.
The size of the Doob transform of the evolving set in the inhomogeneous setting is again a submartingale, as in the homogeneous case. The crucial quantity we want to control is the amount by which its size increases. This increase will be large only at good times, i.e. when the intersection of the Doob transform of the evolving set with the giant cluster is a substantial proportion of the evolving set. Hence we want to ensure that there are enough good times. We would like to emphasise that in this case we are using the random walk to infer properties of the evolving set. More specifically, in Sect. 4 we give an upper bound on the time it takes the random walk to hit the giant component. Using this and the coupling of the walk with the evolving set, in Sect. 5 we establish that there are enough good times. We then employ a result of Gábor Pete which states that the isoperimetric profile of a set in a supercritical percolation cluster coincides with its lattice profile. We apply this result to the sequence of good times, and hence obtain a good drift for the size of the evolving set at these times.
We conclude Sect. 5 by constructing a stopping time upper bounded by \((n^2+1/\mu )(\log n)^a\) with high probability so that at this time the Doob transform of the evolving set has size at least \((1-\delta ) (\theta (p)-\delta ) n^d\). In the case when \(\theta (p)>1/2\) we can take \(\delta >0\) sufficiently small so that \((1-\delta ) (\theta (p)-\delta )>1/2\). Using the uniformity of the walk on the Doob transform of the evolving set again, we deduce that at this stopping time the walk is close to the uniform distribution in total variation with high probability. This yields Theorem 1.3.
To finish the proof of Theorem 1.6 the idea is to repeat the above procedure to obtain \(k=k(\varepsilon )\) sets whose union covers at least \(1-\delta \) of the whole space for a sufficiently small \(\delta \). Then we define \(\tau \) by choosing one of these times uniformly at random. At time \(\tau \) the random walk will be uniform on a set with measure at least \(1-\delta \), and hence this means that the total variation from the uniform distribution at this time is going to be small. Since this time is with high probability smaller than k times the mixing time, this finishes the proof.
2 Evolving sets for inhomogeneous Markov chains
In this section we give the definition of the evolving set process for a discrete time inhomogeneous Markov chain.
Given a general transition matrix \(p(\cdot , \cdot )\) with state space \(\Omega \) and stationary distribution \(\pi \) we let for \(A,B\subseteq \Omega \)
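(this is the standard definition (2.1) from [7]):
$$\begin{aligned} Q_p(A,B) := \sum _{x\in A,\, y\in B}\pi (x)p(x,y). \end{aligned}$$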
When \(B=\{y\}\) we simply write \(Q_p(A,y)\) instead of \(Q_p(A,\{y\})\).
We first recall the definition of evolving sets in the context of a finite state discrete time Markov chain with state space \(\Omega \), transition matrix p(x, y) and stationary distribution \(\pi \). The evolving-set process \(\{S_n\}_{n\ge 0}\) is a Markov chain on subsets of \(\Omega \) whose transitions are described as follows. Let U be a uniform random variable on [0, 1]. If \(S\subseteq \Omega \) is the present state, we let the next state \(\widetilde{S}\) be defined by
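(the elided display is the standard rule from [7]):
$$\begin{aligned} \widetilde{S} := \Big \{y\in \Omega : \frac{Q_p(S,y)}{\pi (y)}\ge U\Big \}. \end{aligned}$$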
We remark that \(Q_p(S,y)/\pi (y)\) is the probability that the reversed chain starting at y is in S after one step. Note that \(\Omega \) and \(\varnothing \) are absorbing states and it is immediate to check that
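The elided identity — the “one-dimensional marginal” condition (2.2) referred to below — is immediate from the transition rule, since \(Q_p(S,y)/\pi (y)\in [0,1]\):
$$\begin{aligned} {\mathbb {P}}\left( y\in \widetilde{S}\;\vert \;S\right) = \frac{Q_p(S,y)}{\pi (y)}\quad \text {for all } y\in \Omega . \end{aligned}$$
Summing over y gives \({\mathbb {E}}\left[ \pi (\widetilde{S})\;\vert \;S\right] =Q_p(S,\Omega )=\pi (S)\), so \(\pi (S_n)\) is a martingale.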
Moreover, one can describe the evolving set process as the process on subsets which satisfies the “one-dimensional marginal” condition (2.2) and in which the events \(\{y\in \widetilde{S}\}\), as y varies, are maximally coupled (they are all driven by the same uniform variable U).
For a transition matrix p with stationary distribution \(\pi \) we define for S with \(\pi (S)> 0\)
where \(\widetilde{S}\) is the first step of the evolving set process started from S when the transition probability for the Markov chain is p and as always the stationary distribution is \(\pi \).
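Following [7], these quantities are (up to the exact normalisation used there) the growth gauge and the conductance:
$$\begin{aligned} \psi _p(S) := 1-{\mathbb {E}}\left[ \sqrt{\frac{\pi (\widetilde{S})}{\pi (S)}}\right] \qquad \text {and}\qquad \varphi _p(S) := \frac{Q_p(S,S^c)}{\pi (S)}. \end{aligned}$$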
For \(r\in [\min _x\pi (x), 1/2]\) we define \(\psi _p(r) :=\inf \{\psi _p(S): \pi (S)\le r\}\) and \(\psi _p(r)=\psi _p(1/2)\) for \(r\ge 1/2\). We define \(\varphi _p(r)\) analogously. We now recall a lemma from Morris and Peres [7] that will be useful later.
Lemma 2.1
[7, Lemma 10] Let \(0<\gamma \le 1/2\) and let p be a transition matrix on the finite state space \(\Omega \) with \(p(x,x)\ge \gamma \) for all x. Let \(\pi \) be a stationary distribution. Then for all sets \(S\subseteq \Omega \) with \(\pi (S)>0\) we have
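The elided conclusion is a lower bound of the form
$$\begin{aligned} \psi _p(S)\ge c\,\gamma ^2\,\varphi _p(S)^2 \end{aligned}$$
for a universal constant \(c>0\) (we refer to [7, Lemma 10] for the sharp constant); it allows the growth gauge \(\psi \) to be bounded below by the conductance \(\varphi \).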
We next define completely analogously to the time homogeneous case the evolving set process in the context of a time inhomogeneous Markov chain with a stationary distribution \(\pi \). Consider a time inhomogeneous Markov chain with state space \({\mathcal {S}}\) whose transition matrix for moving from time k to time \(k+1\) is given by \(p_{k+1}(x,y)\) where we assume that the probability measure \(\pi \) is a stationary distribution for each \(p_k\). In this case, we say that \(\pi \) is a stationary distribution for the inhomogeneous Markov chain. Let \(Q_k= Q_{p_k}\) be as defined in (2.1). We then obtain a time inhomogeneous Markov chain \(S_0,S_1,\ldots \) on subsets of \({\mathcal {S}}\) generated by
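(in analogy with the homogeneous rule above):
$$\begin{aligned} S_{k+1} := \Big \{y\in {\mathcal {S}}: \frac{Q_{k+1}(S_k,y)}{\pi (y)}\ge U\Big \}, \end{aligned}$$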
where U is as before a uniform random variable on [0, 1]. We call this the evolving set process with respect to \(p_1,p_2,\ldots \) and stationary distribution \(\pi \).
We now define the Doob transform of the evolving set process associated to a time inhomogeneous Markov chain. If \(K_p\) is the transition probability for the evolving set process when the transition matrix for the Markov chain is p, then we define the Doob transform with respect to being absorbed at \(\Omega \) via
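Explicitly, since \(\pi (S)\) is precisely the probability that the evolving set started from S is eventually absorbed at \(\Omega \), the transform reads
$$\begin{aligned} \widehat{K}_p(S,S') := \frac{\pi (S')}{\pi (S)}\, K_p(S,S'). \end{aligned}$$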
The following coupling of the time inhomogeneous Markov chain with the Doob transform of the evolving set will be crucial in the rest of the paper. The proof is identical to the proof of the homogeneous setting by Diaconis and Fill [3]. For the proof see for instance [5, Theorem 17.23].
Theorem 2.2
Let X be a time inhomogeneous Markov chain. Then there exists a Markovian coupling of X and the Doob transform \((S_t)\) of the associated evolving sets so that for all starting points x and all times t we have \(X_0=x\), \(S_0=\{x\}\) and for all w
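(this is the uniformity property described in the overview):
$$\begin{aligned} {\mathbb {P}}\left( X_t=w\;\vert \;S_0,S_1,\ldots ,S_t\right) = \frac{\pi (w)}{\pi (S_t)}\,\mathbf {1}(w\in S_t). \end{aligned}$$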

We write \(\varphi _n = \varphi _{p_n}\) and \(\psi _n=\psi _{p_n}\), where \(p_n\) is the transition matrix at time n.
As in [7] we let
and
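Explicitly (both elided displays reconstructed from the case analysis in the proof of Lemma 2.3 below):
$$\begin{aligned} S_n^{\#} := {\left\{ \begin{array}{ll} S_n &{}\text {if } \pi (S_n)\le 1/2,\\ S_n^c &{}\text {otherwise,} \end{array}\right. } \qquad \text {and}\qquad Z_n := \frac{\sqrt{\pi (S_n^{\#})}}{\pi (S_n)}. \end{aligned}$$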
The following lemma follows in the same way as in the homogeneous setting of [7], but we include the proof for the reader’s convenience.
Lemma 2.3
Let S be the Doob transform with respect to absorption at \(\Omega \) of the evolving set process associated to a time inhomogeneous Markov chain X with \({\mathbb {P}}_{}\left( X_{n+1}=x\;\vert \;X_n=x\right) \ge \gamma \) for all n and x, where \(0<\gamma \le 1/2\). Then for all n and all \(S_0\ne \varnothing \) we have
where \({\mathcal {F}}_n\) stands for the filtration generated by \((S_i)_{i\le n}\).
Proof
Using the transition probability of the Doob transform of the evolving set, we almost surely have
If \(\pi (S_n)\le 1/2\), then
Suppose next that \(\pi (S_n)>1/2\). Then
Lemma 2.1 and the fact that \(\varphi _{n+1}\) is decreasing now give that
Now note that if \(\pi (S_n)\le 1/2\), then \(Z_n = (\pi (S_n))^{-1/2}\). If \(\pi (S_n) >1/2\), then \(Z_n = \sqrt{\pi (S_n^c)}/\pi (S_n)\le \sqrt{2}\). Since \(\varphi _{n+1}(r)=\varphi _{n+1}(1/2)\) for all \(r>1/2\), we get that we always have
and this concludes the proof. \(\square \)
3 Preliminaries on supercritical percolation
In this section we collect some standard results for supercritical percolation on \({\mathbb {Z}}_n^d\) that will be used throughout the paper. We write \({\mathcal {B}}(x,r)\) for the box in \({\mathbb {Z}}^d\) centred at x of side length r. We also use \({\mathcal {B}}(x,r)\) to denote the obvious subset of \({\mathbb {Z}}_n^d\) whenever \(r<n\). We denote by \(\partial {\mathcal {B}}(x,r)\) the inner vertex boundary of the ball.
The following lemma might follow from known results but as we could not find a reference, we include its proof.
Lemma 3.1
Let \(A\subseteq {\mathbb {Z}}_n^d\) be a deterministic set with \(|A| = \alpha n^d\), where \(\alpha \in (0,1]\). Let \({\mathcal {G}}\) be the giant cluster of supercritical percolation in \({\mathbb {Z}}_n^d\) with parameter \(p>p_c\). Then for all \(\varepsilon \in (0,\theta (p))\) there exists a positive constant c depending on \(\varepsilon , d, p, \alpha \) so that for all n
Proof
Let \(\beta \in (0,1)\) be a constant to be determined later. We start the proof by showing that with high probability a certain fraction of the points in A percolate to distance \(n^\beta /2\). More precisely, we let \(A(x) = \{x \leftrightarrow \partial {\mathcal {B}}(x,n^\beta )\}\). We will first show that for all sets \(D\subseteq {\mathbb {Z}}_n^d\) with \(|D|=\gamma n^d\), where \(\gamma \in (0,1]\), and for all \(\varepsilon \in (0,\theta (p))\) there exists \(c>0\) depending on \(\varepsilon , d, p, \gamma \) so that for all n

Let \({\mathcal {L}}\) be a lattice of points contained in \({\mathbb {Z}}_n^d\) that are at distance \(n^\beta \) apart. Then \({\mathcal {L}}\) contains \(n^{d(1-\beta )}\) points, and hence there exist \(n^{d\beta }\) such lattices. By a union bound over all such lattices \({\mathcal {L}}\) we now have

Using the standard coupling between bond percolation on the torus and the whole lattice and [4, Theorems 8.18 and 8.21] we get
for some constant c depending only on d and p. We now fix a lattice \({\mathcal {L}}\). So for all n large enough we can upper bound the probability appearing in (3.2) by

We now note that for points \(x\in {\mathcal {L}}\cap D\) the events A(x) are independent. Using a concentration inequality for sums of i.i.d. random variables and the fact that \(|{\mathcal {L}}\cap D|\le n^{d(1-\beta )}\) we obtain

where c is a positive constant depending on \(\gamma \) and \(\varepsilon \). Plugging this back into (3.2) gives

for a possibly different constant c.
We next turn to prove that
From (3.3) and using a union bound we now get

Using [4, Theorem 8.21] we deduce that for all \(\delta \in (0,1)\), there exists a constant c (depending on \(\delta \), d and p) so that for large n and for all \(x,y\in {\mathcal {B}}(0,n(1-\delta ))\)

Using this and a union bound we now get

Take \(\widetilde{\varepsilon }>0\) and \(\delta \in (0,1)\) such that \((1+\theta (p)-\widetilde{\varepsilon })(1-\delta )^d>1\). It follows that if there are at least \((\theta (p)-\widetilde{\varepsilon })(1-\delta )^d n^d\) points connected to each other in \({\mathcal {B}}(0,n(1-\delta ))\), then the giant cannot be contained in \({\mathcal {B}}(0,n){\setminus } {\mathcal {B}}(0,n(1-\delta ))\). This observation and (3.4) (with \(D={\mathcal {B}}(0,n(1-\delta ))\) and so \(\gamma = (1-\delta )^d\) and \(\varepsilon =\widetilde{\varepsilon }\)) together with (3.6) and (3.7) give
Taking \(\beta =d/(d+1)\) so that \(\beta =d(1-\beta )\) we obtain
Let now \(\widetilde{A} = A\cap {\mathcal {B}}(0,n(1-\delta ))\). Let \({\varepsilon '}\) be such that \((\alpha -{\varepsilon '})(\theta (p)-{\varepsilon '}) = \alpha (\theta (p) -\varepsilon )\). By decreasing \(\delta \) if necessary we get that \(|\widetilde{A}| \ge (\alpha -{\varepsilon '})n^d\). So applying (3.1) we obtain

This together with (3.8) finally gives

By the choice of \({\varepsilon '}\) this proves (3.5). To finish the proof of the lemma it only remains to show that

Using (3.1) we can upper bound this probability by

where the last inequality follows from (3.5) by taking \(A={\mathbb {Z}}_n^d\). \(\square \)
Corollary 3.2
Let \({\mathcal {G}}_1,{\mathcal {G}}_2, \ldots \) be the giant components of i.i.d. percolation configurations with \(p>p_c\) in \({\mathbb {Z}}_n^d\). Fix \(\delta \in (0,1/4)\) and let \(k=[2(1-\delta )/(\delta \theta (p))]+1\). Then there exists a positive constant c so that
Proof
We start by noting that
where we set \({\mathcal {G}}_0=\varnothing \). Therefore, by the choice of k we obtain
For any i, since the percolation clusters are independent, by conditioning on \({\mathcal {G}}_1,\ldots , {\mathcal {G}}_{i-1}\) and using Lemma 3.1 we get
Thus by the union bound we obtain
where \(c'\) is a positive constant and this concludes the proof. \(\square \)
We perform percolation in \({\mathbb {Z}}_n^d\) with parameter \(p>p_c\). Let \({{\mathcal {C}}}_1, {{\mathcal {C}}}_2, \ldots \) be the clusters in decreasing order of their size. We write \({{\mathcal {C}}}(x)\) for the cluster containing the vertex \(x\in {\mathbb {Z}}_n^d\). For any \(A\subseteq {\mathbb {Z}}_n^d\), we denote by \({\mathrm{diam}}(A)\) the diameter of A.
Proposition 3.3
There exists a constant c so that for all r and for all n we have
Proof
We write \({\mathcal {B}}_r={\mathcal {B}}(0,r)\), where as before \({\mathcal {B}}(0,r)\) denotes the box of side length r centred at 0. Then we have

Using the standard coupling between bond percolation on \({\mathbb {Z}}_n^d\) and bond percolation on \({\mathbb {Z}}^d\) and [4, Theorems 8.18 and 8.21] we obtain

Lemma 3.1 now gives us that
So this now implies

But using [4, Lemma 7.89] we obtain

Taking a union bound over all the points of the torus concludes the proof. \(\square \)
Corollary 3.4
Consider now dynamical percolation on \({\mathbb {Z}}_n^d\) with \(p>p_c\), where the edges refresh at rate \(\mu \), started from stationarity. Let \({{\mathcal {C}}}_1(t)\) denote the giant cluster at time t. Then for all \(k\in {\mathbb {N}}\), there exists a positive constant c so that for all \(\varepsilon <\theta (p)\) we have as \(n\rightarrow \infty \)
Remark 3.5
Let \(\partial A\) denote the edge boundary of a set \(A\subseteq {\mathbb {Z}}^d\). This is how \(\partial A\) will be used from now on. Using then the obvious bound that \(|\partial A| \le (2d)|A|\le 2d({\mathrm{diam}}(A))^d\) on the event of Corollary 3.4 we get that for all \(i\ge 2\)
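(our reading of the elided bound, using that on the event of Corollary 3.4 all non-giant clusters have diameter at most \(c\log n\)):
$$\begin{aligned} |\partial {\mathcal {C}}_i(t)| \le 2d\left( {\mathrm{diam}}({\mathcal {C}}_i(t))\right) ^d \le 2d\,(c\log n)^d, \end{aligned}$$
uniformly over the relevant time window.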
4 Hitting the giant component
In this section we give an upper bound on the time it takes the random walk to hit the giant component. From now on we fix \(d\ge 2\) and \(p>p_c({\mathbb {Z}}^d)\), and as before X is the random walk on the dynamical percolation process where the edges refresh at rate \(\mu \).
Notation For every \(t>0\) we denote by \({\mathcal {G}}_t\) the giant component of the dynamical percolation process \((\eta _t)\) breaking ties randomly. (As we saw in Corollary 3.4 with high probability there are no ties in the time interval that we consider.)
Proposition 4.1
(Annealed estimates) There exists a stopping time \(\sigma \) and \(\alpha >0\) such that:
- (i)
\(\min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( \frac{11d\log n}{\mu }\le \sigma \le \frac{(\log n)^{3d+8}}{\mu }\right) =1-o(1)\) as \(n\rightarrow \infty \) and
- (ii)
\(\min _{x,\eta _0}{\mathbb {P}}_{x,\eta _0}\left( X_{\sigma } \in {\mathcal {G}}_\sigma \right) \ge \alpha \).
Proof
We let \(\tau \) be the first time after \(11d\log n/\mu \) that X hits the giant component, i.e.
We now define a sequence of stopping times by setting \(r=2(c\log n)^{d+2}\) for a constant c to be determined, \(T_0=0\) and inductively for all \(i\ge 0\)
Finally we set \(\sigma = \tau \wedge T_{(\log n)^{d+2}}\). We will now prove that \(\sigma \) satisfies (i) and (ii) of the statement of the proposition.
Proof of (i). By the strong Markov property we obtain for all n large enough and all \(x,\eta _1\)
By (1.2) of Theorem 1.2 applied to the torus \({\mathbb {Z}}_{5r}^d\) we get that if \(t=c'\cdot r^2/\mu \), where \(c'\) is a positive constant, then starting from any \(x_0\in {\mathcal {B}}(x,r)\) and any bond configuration, the walk exits the ball \({\mathcal {B}}(x,r)\) by time t with constant probability \(c_1\). Hence the same is true for the process X on \({\mathbb {Z}}_n^d\) for all starting states \(x_0\) and configurations \(\eta _0\).
Using this uniform bound over all \(\eta _0\) and all \(x_0\in {\mathcal {B}}(x,r)\), we can perform \(\log n/c'\) independent experiments to deduce
and hence substituting this into (4.1) we finally get
and this completes the proof of (i).
Proof of (ii). We fix \(x,\eta _0\) and we consider two cases:
- (1)
\({\mathbb {P}}_{x,\eta _0}\left( \tau <T_1\right) >\frac{1}{(\log n)^{d+2}}\) or
- (2)
\({\mathbb {P}}_{x,\eta _0}\left( \tau < T_1\right) \le \frac{1}{(\log n)^{d+2}}\).
It suffices to prove that under condition (2), there is a constant \(\beta >0\) so that \({\mathbb {P}}_{x,\eta _0}\left( X_{T_1}\in {\mathcal {G}}_{T_1}\right) \ge \beta \). Indeed, this will then imply that
Therefore, in both cases [(1) and (2)] we get that (4.2) is satisfied, and hence by the strong Markov property
which immediately implies that \(\min _{y,\eta _1}{\mathbb {P}}_{y,\eta _1}\left( X_\sigma \in {\mathcal {G}}_{\sigma }\right) \ge 1-e^{-1}\) as claimed. So we now turn to prove that under (2) there exists a positive constant \(\beta \) so that
Taking c in the definition of r satisfying \(c>50d^2\) we have
Since the critical probability for a half-space equals \(p_c({\mathbb {Z}}^d)\) (as explained right before Theorem 7.35 in [4]) and by time \(\frac{c\log n}{4d\mu }\) all edges in the torus have refreshed after time \(T_0\) (with high probability for \(c>50d^2\)), we infer that, given \( T_1\ge \frac{c\log n}{4d\mu }\), with probability bounded away from 0, the component of \(X_{T_1}\) at time \(T_1\) has diameter at least n/3. It then follows from Corollary 3.4 that the first term on the right-hand side of the last display is bounded below by a positive constant.
So it now suffices to prove
We denote by \({{\mathcal {C}}}_t\) the cluster of the walk at time t, i.e. it is the connected component of the percolation configuration such that \(X_t\in {{\mathcal {C}}}_t\). Next we define inductively a sequence of stopping times \(S_i\) as follows: \(S_0=11d\log n/\mu \) and for \(i\ge 0\) we let \(S_{i+1}\) be the first time after time \(S_i\) that an edge opens on the boundary of \({{\mathcal {C}}}_{S_i}\). For all \(i\ge 0\) we define
On the event A we have \(T_1\ge S_{(c\log n)^{d+1}}\), since \(r=2(c\log n)^{d+2}\) and by the triangle inequality we have for all \(i\le (c\log n)^{d+1} -1\)
We now have
Note that on the event \(\cap _{j<i}A_j\cap \{\tau \ge T_1\}\), we have that \({{\mathcal {C}}}_{S_{i}}\) cannot be the giant component, since, using (4.4), by time \(S_i\) the random walk has only moved distance at most \(i c \log n\) from \(X_{11d\log n/\mu }\), and hence cannot have reached the boundary of the box \({\mathcal {B}}(X_{11d\log n/\mu },r)\). Therefore, choosing c sufficiently large, by Proposition 3.3 and large deviations for a Poisson random variable we get
and hence plugging this upper bound into (4.5) gives
So under the assumption that \({\mathbb {P}}_{x,\eta _0}\left( \tau <T_1\right) \le 1/(\log n)^{d+2}\) and (4.6) we have for all n sufficiently large
Setting \(Y_i = S_{i}-S_{i-1}\) we now get
One can define an exponential random variable \(E_{(c\log n)^{d+1}}\) with parameter \(2d(c\log n)^d\mu \) such that
- (1)
\(Y_{(c\log n)^{d+1}} \ge E_{(c\log n)^{d+1}}\) on \(A_{(c\log n)^{d+1}-1}\) and
- (2)
\(E_{(c\log n)^{d+1}}\) is independent of \(\{A_0,\ldots ,A_{(c\log n)^{d+1}-1},Y_1,\ldots ,Y_{(c\log n)^{d+1}-1}\}\). Therefore we deduce
where for the last inequality we used (4.7). Continuing in the same way, for each i, one can define an exponential random variable \(E_i\) with parameter \(2d(c\log n)^d\mu \) such that (1) \(Y_i \ge E_i\) on \(A_{i-1}\) and (2) \(E_i\) is independent of \(\{A_0,\ldots ,A_{i-1},Y_1,\ldots , Y_{i-1},E_{i+1}, \ldots , E_{(c\log n)^{d+1}}\}\). We therefore obtain
where the \(E_i\)’s are i.i.d. exponential random variables of parameter \(2d(c\log n)^d\mu \). By Chebyshev’s inequality, we finally conclude that
and this finishes the proof. \(\square \)
We now state and prove a lemma that will be used later on in the paper.
Lemma 4.2
Let \(\sigma \) and \(\alpha \) be as in the statement of Proposition 4.1. Then as \(n\rightarrow \infty \)
Proof
We fix \(x,\eta _0\). From Proposition 4.1 we have
Let \(\tau \) be the first time that all edges refresh at least once. Thus after time \(\tau \) the percolation configuration is sampled according to \(\pi _p\). We then have \({\mathbb {P}}_{x,\eta _0}\left( \tau \le (d+1)\log n/\mu \right) = 1-o(1)\), and hence from Proposition 4.1 we get
This together with Corollary 3.4 now gives as \(n\rightarrow \infty \)
where c comes from Corollary 3.4. We now define an event A as follows
We also define B to be the event that there exists a time \(t\in [\sigma , \sigma + 1/((\log n)^{d+1}\mu )]\) and an edge e such that \(d(X_t,e)>c\log n\), the edge e updates at time t and this update disconnects \(X_t\) from \({\mathcal {G}}_t\). Then we have
We start by bounding the second probability above. From (4.8) we obtain as \(n\rightarrow \infty \)
It now remains to show that \({\mathbb {P}}_{x,\eta _0}\left( A\right) =o(1)\) as \(n\rightarrow \infty \). We now let \(\tau _0=\sigma \) and for all \(i\ge 1\) we define \(\tau _i\) to be the time increment between the \((i-1)\)-st time and the i-th time after time \(\sigma \) that either X attempts a jump or an edge within distance \(c\log n\) from X refreshes. Then \(\tau _i \sim \text {Exp}(1+c_1(\log n)^d\mu )\) for a positive constant \(c_1\) and they are independent. These times define a Poisson process of rate \(1+c_1(\log n)^d \mu \). Using basic properties of exponential variables, the probability that at a point of this Poisson process an edge is refreshed is
Therefore, by the thinning property of Poisson processes, the times at which edges within \(c\log n\) from X refresh constitute a Poisson process \({\mathcal {N}}\) of rate \(c_1 (\log n)^d\mu \). So we now obtain as \(n\rightarrow \infty \)
and this concludes the proof. \(\square \)
5 Good and excellent times
As we already noted in Remark 1.4 we are going to consider the case where \(1/\mu > (\log n)^{d+2}\).
We will discretise time by observing the walk X at integer times. When we fix the environment at all times to be \(\eta \), then we obtain a discrete time Markov chain with time inhomogeneous transition probabilities
Let \((S_t)_{t\in {\mathbb {N}}}\) be the Doob transform of the evolving sets associated to this time inhomogeneous Markov chain as defined in Sect. 2. Since from now on we will mainly work with the Doob transform of the evolving sets, unless there is confusion, we will write \({\mathbb {P}}\) instead of \(\widehat{{\mathbb {P}}}\).
If G is a subgraph of \({\mathbb {Z}}_n^d\) and \(S\subseteq V(G)\), we write \(\partial _G S\) for the edge boundary of S in G, i.e. the set of edges of G with one endpoint of S and the other one in \(V(G){\setminus } S\).
We note that for every t, \(\eta _t\) is a subgraph of \({\mathbb {Z}}_n^d\) with vertex set \({\mathbb {Z}}_n^d\).
Definition 5.1
We call an integer time t good if \(|S_t\cap {\mathcal {G}}_t| \ge \tfrac{|S_t|}{(\log n)^{4d+12}}\). We call a good time t excellent if
where \(\eta _s(x,y)=0\) if \((x,y)\notin E({\mathbb {Z}}_n^d)\). For all \(a \in {\mathbb {N}}\) we let G(a) and \(G_e(a)\) be the set of good and excellent times t respectively with \(0\le t\le (\log n)^{a}\left( n^2+\tfrac{1}{\mu }\right) \).
As we already explained in the Introduction, we will obtain a strong drift for the size of the evolving set at excellent times. So we need to ensure that there are enough excellent times. We start by showing that there is a large number of good times. More formally we have the following:
Lemma 5.2
For all \(\gamma \in {\mathbb {N}}\) and \(\alpha >0\), there exists \(n_0\) so that for all \(n\ge n_0\), all starting points and configurations \(x, \eta _0\) we have
Proof
Fix \(\gamma \in {\mathbb {N}}\) and \(\alpha >0\). To simplify notation we write \(G=G(8d+26+\gamma )\). By definition we have

For every \(i\ge 0\) we define
We write \(t_i\) for the left endpoint of the interval above. For integer t we let \({\mathcal {F}}_t\) be the \(\sigma \)-algebra generated by the evolving set and the environment at integer times up to time t.
First of all we explain that for all \(x,\eta _0\) and for all \(i\ge 0\) we have almost surely

Indeed, in every interval of length \(2(\log n)^{3d+8}/\mu \) we have from Proposition 4.1 and Lemma 4.2 that with constant probability there exists an interval of length \(1/((\log n)^{d+1}\mu )\) such that for all t in this interval \(X_t\in {\mathcal {G}}_t\). Note that since \(1/\mu > (\log n)^{d+2}\), this interval has length larger than 1. This establishes (5.1).
Using the coupling of the Doob transform of the evolving set and the random walk given in Theorem 2.2 we get that
and hence

For any \(x,\eta _0\) we set
We now claim that for any \(x,\eta _0\) we have almost surely
Indeed, if not, then there exists a set \(\Omega _0\in {\mathcal {F}}_{t_i}\) with \({\mathbb {P}}\left( \Omega _0\right) >0\) such that on \(\Omega _0\)
We now define
and writing \(A_i=A_i(x,\eta _0)\) to simplify notation, we would get on the event \(\Omega _0\) that
But this gives a contradiction for \(n\ge e^{\sqrt{3}}\), since we have almost surely

where the second equality follows from the Diaconis–Fill coupling, the third one from the tower property for conditional expectation and the inequality follows from (5.1).
Therefore, since (5.2) holds for all starting points and configurations \(x,\eta _0\), we finally conclude that for all \(n\ge e^{\sqrt{3}}\), all \(x,\eta _0\) and for all i almost surely
Using the uniformity of this lower bound over all starting points and configurations yields for all n sufficiently large and all \(x,\eta _0\)
This now finishes the proof. \(\square \)
Next we show that there are enough excellent times.
Lemma 5.3
For all \(\gamma \in {\mathbb {N}}\) and \(\alpha >0\), there exists \(n_0\) so that for all \(n\ge n_0\) and all \(x,\eta _0\)
Proof
For almost every environment, there is an infinite number of good times that we denote by \(t_1,t_2,\ldots \). For every good time t we define \(I_t\) to be the indicator that t is excellent.
Again to simplify notation we write \(G=G(8d+26+\gamma )\) and \(G_e= G_e(8d+26+\gamma )\). Note that if t is good and at least half of the edges of \(\partial _{\eta _t}S_t\) do not refresh during \([t,t+1]\), then t is an excellent time (note that if \(\partial _{\eta _t}S_t=\varnothing \), then t is automatically excellent). Let \(E_1,\ldots , E_{|\partial _{\eta _t}S_t|}\) be the first times at which the edges on the boundary \(\partial _{\eta _t}S_t\) refresh. They are independent exponential random variables with parameter \(\mu \).
Let \({\mathcal {F}}_s\) be the \(\sigma \)-algebra generated by the process (walk, environment and evolving set) up to time s. Then for all t, on the event \(\{t\in G\}\) we have

Since \({\mathbb {P}}_{x,\eta _0}\left( E_i>1\right) = e^{-\mu }\) and \(\mu \le 1/2\), there exists \(n_0\) so that for all \(n\ge n_0\) we have for all \(x, \eta _0\)

Let \(A=\{ |G|\ge (\log n)^{\gamma } \cdot n^2\}\). By Lemma 5.2 we get \({\mathbb {P}}_{x,\eta _0}\left( A^c\right) \le 1/n^\alpha \) for all \(n\ge n_0\) and all \(x,\eta _0\). Let \(G=\{t_1,\ldots , t_{|G|}\}\). On the event A we have
We thus get for all \(x,\eta _0\) and all \(n\ge n_0\)
Since, conditional on the past, the variables \((I_{t_i})_i\) dominate independent Bernoulli random variables with parameter 1/2, using a standard concentration inequality we get that this last probability decays exponentially in n, and this concludes the proof. \(\square \)
Let \(\tau _1, \tau _2,\ldots \) be the sequence of excellent times. Then the previous lemma immediately gives
Corollary 5.4
Let \(\gamma \in {\mathbb {N}}\), \(\alpha >0\) and \(N= (\log n)^{\gamma } \cdot n^2\). Then there exists \(n_0\) so that for all \(n\ge n_0\) and all \(x,\eta _0\) we have
6 Mixing times
In this section we prove Theorems 1.3, 1.6 and Corollary 1.7. From now on \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\) and \(\frac{1}{\mu } >(\log n)^{d+2}\).
6.1 Good environments and growth of the evolving set
The first step is to obtain the growth of the Doob transform of the evolving set at excellent times. We will use the following theorem by Pete [10] which shows that the isoperimetric profile of the giant cluster basically coincides with the profile of the original lattice.
For a subset \(S\subseteq {\mathbb {Z}}_n^d\) we write \(S\subseteq {\mathcal {G}}\) to denote \(S\subseteq V({\mathcal {G}})\) and we also write \(|{\mathcal {G}}| = |V({\mathcal {G}})|\).
Theorem 6.1
[10, Corollary 1.4] For all \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\), \(\delta \in (0,1)\) and \(c'>0\) there exist \(c>0\) and \(\alpha >0\) so that for all n sufficiently large
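The elided event asserts, in our notation (a hedged rendering of [10, Corollary 1.4]; the constants and exact formulation there may differ), that the giant cluster enjoys the lattice isoperimetric profile:
$$\begin{aligned} {\mathbb {P}}\left( \forall \, S\subseteq {\mathcal {G}} \text { connected with } c(\log n)^{\frac{d}{d-1}}\le |S|\le (1-\delta )|{\mathcal {G}}| \, : \; |\partial _{{\mathcal {G}}}S|\ge c'\,|S|^{1-\frac{1}{d}}\right) \ge 1-n^{-\alpha }. \end{aligned}$$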
Remark 6.2
Pete [10] only states that the probability appearing above tends to 1 as \(n\rightarrow \infty \), but a close inspection of the proof actually gives the polynomial decay. Mathieu and Remy [6] have obtained similar results.
Corollary 6.3
For all \(d\ge 2\), \(p>p_c({\mathbb {Z}}^d)\), \(c'>0\) and \(\delta \in (0,1)\) there exist \(c>0\) and \(\alpha >0\) so that for all n sufficiently large
Proof
We only need to prove the statement for all S that are disconnected, since the other case is covered by Theorem 6.1. Let A be the event appearing in the probability of Theorem 6.1.
Let S be a disconnected set satisfying \(S\subseteq {\mathcal {G}}\) and \(c (\log n)^{\frac{d}{d-1}} \le |S|\le (1-\delta )|{\mathcal {G}}|\). Let \(S=S_1\cup \cdots \cup S_k\) be the decomposition of S into its connected components. Then we claim that on the event A we have for all \(i\le k\)
Indeed, there are two cases to consider: (i) \(|S_i|\ge c(\log n)^{d/(d-1)}\), in which case the inequality follows from the definition of the event A; (ii) \(|S_i|<c(\log n)^{d/(d-1)}\), in which case the inequality is trivially true by taking \(\alpha \) small in Theorem 6.1, since the boundary contains at least one vertex. Therefore we deduce,
and this completes the proof. \(\square \)
Recall that, for a fixed environment \(\eta \), we write S for the Doob transform of the evolving set process associated to X, that \(\tau _1, \tau _2, \ldots \) are the excellent times as in Definition 5.1, and that we take \(\tau _0=0\).
Definition 6.4
Let \(c_1,c_2\) be two positive constants and \(\delta \in (0,1)\). Given \(n\ge 1\), define
We call \(\eta \) a \(\delta \)-good environment if the following conditions hold:
- (1)
for all \(\frac{11d\log n}{\mu }\le t\le t(n)\log n\) the giant cluster \({\mathcal {G}}_t\) has size \(|{\mathcal {G}}_t|\in ((1-\delta )\theta (p) n^d, (1+\delta )\theta (p) n^d)\),
- (2)
for all \(\frac{11d\log n}{\mu }\le t\le t(n)\log n, \, \forall \, S\subseteq {\mathcal {G}}_t\) with
$$\begin{aligned} c_1(\log n)^{\frac{d}{d-1}}\le |S| \le (1-\delta )|{\mathcal {G}}_t| \quad \text { we have } \quad |\partial _{\eta _{t}}S|\ge \frac{c_2 |S|^{1-1/d}}{(\log n)}, \end{aligned}$$
- (3)
\({\mathbb {P}}_{x,\eta }\left( \tau _N\le t(n)\right) \ge 1-\frac{1}{n^{10d}}\) for all x,
- (4)
\({\mathbb {P}}_{x,\eta }\left( \tau _N<\infty \right) =1\) for all x.
To be more precise we should have defined a \((\delta , c_1, c_2)\)-good environment. But we drop the dependence on \(c_1\) and \(c_2\) to simplify the notation.
Lemma 6.5
For all \(\delta \in (0,1)\) there exist \(c_1, c_2, c_3\) positive constants and \(n_0\in {\mathbb {N}}\) such that for all \(n\ge n_0\) and all \(\eta _0\) we have
Proof
We first prove that for all n sufficiently large and all \(\eta _0\)
The number of times that the Poisson clocks on the edges ring between times \(11d\log n/\mu \) and \(t(n) \log n\) is a Poisson random variable of parameter at most \(d(n^d \mu ) \cdot t(n) \log n\). Note that all edges update by time \(\frac{11d \log n}{\mu }\) with probability at least \(1-\frac{d}{n^{10d}}\). Large deviations for this Poisson random variable, together with Lemma 3.1 and Corollary 6.3 applied with suitable constants c and \(\alpha \), then prove (6.1). Corollary 5.4, Markov's inequality and a union bound over all x immediately imply
Finally, to prove that \(\eta \) satisfies (4) with probability 1, we note that for almost every environment there will be infinitely many times at which all edges will be open for unit time and so at these times the intersection of the giant component with the evolving set will be large. Therefore such times are necessarily excellent. \(\square \)
For all \(\delta \in (0,1)\) we now define
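(as recalled at the beginning of Sect. 6.2 below):
$$\begin{aligned} \tau _\delta := \min \big \{t\ge 0: |S_t\cap {\mathcal {G}}_t|\ge (1-\delta )|{\mathcal {G}}_t|\big \}. \end{aligned}$$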
The goal of this section is to prove the following:
Proposition 6.6
Let \(\delta \in (0,1)\). There exists a positive constant c so that the following holds: for all n, if \(\eta \) is a \(\delta \)-good environment, then for all starting points x we have
Recall from Sect. 2 the definition of \((Z_k)\) for a fixed environment \(\eta \) via
Note that we have suppressed the dependence on \(\eta \) for ease of notation. The following lemma on the drift of Z using the isoperimetric profile will be crucial in the proof of Proposition 6.6.
Lemma 6.7
Let \(\eta \) be a \(\delta \)-good environment with \(\delta \in (0,1)\). Then for all n sufficiently large and for all \(1\le i\le N\) (recall Definition 6.4) we have almost surely

where \({\mathcal {F}}_t\) is the \(\sigma \)-algebra generated by the evolving set up to time t and \((\tau _i)\) is the sequence of excellent times associated to the environment \(\eta \) and \(\varphi \) is defined as
with \(\alpha =4d+12+d/(d-1)\), \(\beta = 4d+9-12/d\) and c a positive constant.
Proof
Since \(\tau _\delta \) is a stopping time, it follows that \(\{\tau _\delta \wedge t(n)>\tau _i\}\in {\mathcal {F}}_{\tau _i}\), and hence we obtain

Lemma 2.3 implies that Z is a positive supermartingale and since \(\eta \) is a \(\delta \)-good environment, we have \(\tau _N<\infty \) \({\mathbb {P}}_\eta \)-almost surely. We thus get for all \(0\le i\le N-1\)
Using the Markov property gives

Since \(\tau _i\) is a stopping time, the event \(\{\tau _i=t\}\) only depends on \((S_{u})_{u\le t}\). The distribution of \(S_{t+1}\) only depends on \(S_t\) and the outcome of the independent uniform random variable \(U_{t+1}\). Therefore we obtain
where for the last equality we used the transition probability of the Doob transform of the evolving set. If \(1\le |S|\le n^d/2\), then for all n sufficiently large
where the equality is simply the definition of \(\psi _{t+1}\) and the last inequality follows from Lemma 2.1, since
Similarly, if \(n^d>|S|>n^d/2\), then, using the fact that the complement of an evolving set process is also an evolving set process, we get
Plugging in the definition of \(\varphi _{t+1}\) we deduce for all \(1\le |S|<n^d\)
Since in (6.4) we multiply by the indicator of the event \(\{\tau _\delta \wedge t(n)>t\}\), from now on we take t to be an excellent time, and hence we get from Definition 5.1
Since \(|\partial _{\eta _t}S_t| \ge |\partial _{{\mathcal {G}}_t}S_t| = |\partial _{{\mathcal {G}}_t}({\mathcal {G}}_t\cap S_t)|\) we have
If \(|S_t|\le c_1(\log n)^{4d+12+d/(d-1)}\), then, since \(\eta \) is a \(\delta \)-good environment, \(|{\mathcal {G}}_t| \ge (1-\delta )\theta (p)n^d\), and hence, we use the obvious bound
Next, if \(\frac{n^d}{2}>|S_t|>c_1(\log n)^{4d+12+d/(d-1)}\), then using (6.8) and the fact that we are on the event \(\{\tau _\delta \wedge t(n)>t\}\) we get that
Therefore, since \(\eta \) is a \(\delta \)-good environment and \(t\le t(n)\), (2) of Definition 6.4 gives that in this case
where c is a positive constant and for the second inequality we used (6.8) again.
Finally when \(|S_t|\ge \frac{n^d}{2}\), on the event \(\{\tau _\delta \wedge t(n)>t\}\) we have from (6.8) and using again (2) of Definition 6.4
Substituting (6.9), (6.10) and (6.12) into (6.6) and (6.7) and then into (6.3), (6.4) and (6.5) we deduce

where the function \(\varphi \) is given by
with c a positive constant and \(\beta = 4d+9-12/d\). We now note that if \(\pi (S_t)\le 1/2\), then \(Z_t = (\pi (S_t))^{-1/2}\). If \(\pi (S_t) >1/2\), then \(Z_t = \sqrt{\pi (S_t^c)}/\pi (S_t)\le \sqrt{2}\). Since \(\varphi (r)=\varphi (1/2)\) for all \(r>1/2\), we get that in all cases
and this concludes the proof. \(\square \)
Proof of Proposition 6.6
We define the process \((Y_i)\) in terms of Z evaluated at the excellent times and the function f in terms of \(\varphi \),
where \(\varphi \) is defined in Lemma 6.7. With these definitions Lemma 6.7 gives for all \(1\le i\le N\)
with \(Y_1 \le n^{d/2}\) for all \(n\ge 3\).
Since \(\varphi \) is decreasing, we get that f is increasing, and hence we can apply [7, Lemma 11 (iii)] to deduce that for all \(\varepsilon >0\) if
then we have that

We now evaluate the integral
Splitting the integral according to the regions on which \(\varphi \) takes different forms and substituting these expressions, we obtain
where \(c'\) is a positive constant. Therefore, taking \(\varepsilon =\frac{1}{n^{10d}}\), this gives that for all \(k\ge c''\cdot n^2 (\log n)^{2\beta +1}\) with \(c''=2c'd\) we have that

and hence, since \(N=(\log n)^\gamma \cdot n^2\) with \(\gamma = 8d+20>2\beta +1\), we deduce

Clearly we have
For the second event appearing on the right-hand side above, using the definition of the process Z we get

The first event appearing on the right-hand side of (6.14) implies that \(|S_{\tau _N}^c|\ge |S_{\tau _N}^c\cap {\mathcal {G}}_{\tau _N}| \ge \delta |{\mathcal {G}}_{\tau _N}|\). Since \(\eta \) is a \(\delta \)-good environment, by (1) of Definition 6.4 we have that \(|{\mathcal {G}}_{\tau _N}|\ge (1-\delta )\theta (p) n^d\). Therefore we obtain

By Markov’s inequality and the two inclusions above we now conclude

where c is a positive constant and in the last inequality we used (6.13). Since \(\eta \) is a \(\delta \)-good environment, this now implies that
and this finishes the proof. \(\square \)
6.2 Proof of Theorem 1.3
In this section we prove Theorem 1.3. First recall the definition of the stopping time \(\tau _\delta \) as the first time t that \(|S_t\cap {\mathcal {G}}_t|\ge (1-\delta )|{\mathcal {G}}_t|\).
Lemma 6.8
Let p be such that \(\theta (p)>1/2\). There exists \(n_0\) and \(\delta >0\) so that for all \(n\ge n_0\), if \(\eta \) is a \(\delta \)-good environment, then for all x
Proof
Since \(\theta (p)>1/2\), there exist \(\varepsilon > 2\delta > 0\) so that
Summing over all possible values of \(\tau =\tau _\delta \) we obtain
By the strong Markov property at time \(\tau \) we have
Since \(\tau \) is a stopping time for the evolving set process, we can use the coupling of the walk and the Doob transform of the evolving set, Theorem 2.2, to get

For all \(s\le t(n)\) we call \(\nu _s\) the probability measure defined by

We claim that
Indeed, we have

Since \(s\le t(n)\) and \(\eta \) is a \(\delta \)-good environment, we have \(|{\mathcal {G}}_s|\ge (1-\delta )\theta (p) n^d\), and hence on the event \(\{\tau =s\}\) we get
This now implies that
and completes the proof of (6.17). By the definition of \(\nu _s\) we have
But since \(\pi \) is stationary for X when the environment is \(\eta \), we obtain
where the last inequality follows from (6.17). Substituting this bound into (6.16) gives
From Proposition 6.6 we have
This together with the fact that we took \(2\delta <\varepsilon \) finishes the proof. \(\square \)
Corollary 6.9
Let p be such that \(\theta (p)>1/2\). Then there exist \(\delta \in (0,1)\) and \(n_0\) such that for all \(n\ge n_0\) and all starting environments \(\eta _0\) we have
Proof
Let \(\delta \) and \(n_0\) be as in the statement of Lemma 6.8. Then Lemma 6.8 gives that for all \(n\ge n_0\), if \(\eta \) is a \(\delta \)-good environment, then for all x and y we have
Using this and the triangle inequality we obtain that on the event that \(\eta \) is a \(\delta \)-good environment for all x and y
Therefore for all \(n\ge n_0\) we get for all \(\eta _0\)
Taking \(n_0\) even larger we get from Lemma 6.5 that for all \(n\ge n_0\)
and this concludes the proof. \(\square \)
The following lemma will be applied later in the case where R is a constant or a uniform random variable.
Lemma 6.10
Let R be a random time independent of X and such that the following holds: there exists \(\delta \in (0,1)\) such that for all starting environments \(\eta _0\) we have
Then there exists a positive constant \(c=c(\delta )\) and \(n_0=n_0(\delta )\in {\mathbb {N}}\) so that if \(k=c\log n\) and \(R(k)=R_1+\cdots +R_k\), where \(R_i\) are i.i.d. distributed as R, then for all \(n\ge n_0\), all x, y and \(\eta _0\)
Proof
We fix \(x_0, y_0\) and let X, Y be two walks moving in the same environment \(\eta \) and started from \(x_0\) and \(y_0\) respectively. We now present a coupling of X and Y. We divide time into rounds of length \(R_1, R_2,\ldots \) and we describe the coupling for every round.
For the first round, i.e. for times between 0 and \(R_1\) we use the optimal coupling given by
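(our rendering of the elided display: the coupling of the two quenched laws attaining the total variation distance):
$$\begin{aligned} {\mathbb {P}}\left( X_{R_1}\ne Y_{R_1}\right) = \big \Vert {\mathbb {P}}_{x_0,\eta }\left( X_{R_1}\in \cdot \right) -{\mathbb {P}}_{y_0,\eta }\left( Y_{R_1}\in \cdot \right) \big \Vert _{\mathrm {TV}}, \end{aligned}$$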
where the environment \(\eta \) is restricted between time 0 and \(R_1\). We now change the definition of a good environment. We call \(\eta \) a good environment during \([0,R_1]\) if the total variation distance appearing above is smaller than \(1-\delta \).
If X and Y did not couple after \(R_1\) steps, then they have reached some locations \(X_{R_1}=x_1\) and \(Y_{R_1} = y_1\). In the second round we couple them using again the corresponding optimal coupling, i.e.
Similarly we call \(\eta \) a good environment for the second round if the total variation distance above is smaller than \(1-\delta \). We continue in the same way for all later rounds. Since the bound on the probability in the statement of the lemma is uniform over all starting points x and y and over the initial environment, we get that for all \(\eta _0\)
and the same bound is true even after conditioning on the previous \(i-1\) rounds. Let \(k=c\log n\) for a constant c to be determined. Let E denote the number of good environments in the first k rounds. We now get
By concentration, since we can stochastically dominate E from below by \({\mathrm{Bin}}(k,1-\delta )\), the first probability decays exponentially in k. For the second probability, on the event that there are enough good environments, since the probability of not coupling in each round is at most \(1-\delta \), by successive conditioning we get
Therefore, taking \(c=c(\delta )\) sufficiently large we get overall for all n sufficiently large
So by Markov’s inequality again we obtain for all n sufficiently large
where \({\mathcal {E}}\) is expectation over the random environment. This finishes the proof. \(\square \)
Proof of Theorem 1.3
Let \(R=t(n)\). Then by Corollary 6.9 there exists \(n_0\) such that R satisfies the condition of Lemma 6.10 for \(n\ge n_0\). So applying Lemma 6.10 we get for all n sufficiently large and all \(x_0,y_0\) and \(\eta _0\)
where \(k=c\log n\). By a union bound over all starting states \(x_0,y_0\) we deduce
This proves that for all n sufficiently large
and thus completes the proof of the theorem. \(\square \)
6.3 Proof of Theorem 1.6
Proof of Theorem 1.6
Let \(\delta =\varepsilon /100\) and \(k=[2(1-\delta )/(\delta \theta (p))]+1\). For every starting point \(x_0\) we are going to define a sequence of stopping times. First let \(\xi _1\) be the first time that all the edges refresh at least once. Let \(\widetilde{\delta }=\delta /k\). Then we define \(\tau _1=\tau _1(x_0)\)
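in analogy with \(\tau _\delta \) (our hedged reading of the elided display, with \(\widetilde{\delta }\) in place of \(\delta \)):
$$\begin{aligned} \tau _1 := \min \big \{t\ge \xi _1: |S_t\cap {\mathcal {G}}_t|\ge (1-\widetilde{\delta })|{\mathcal {G}}_t|\big \}, \end{aligned}$$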
where \((S_t)\) is the evolving set process starting at time \(\xi _1\) from \(\{X_{\xi _1}\}\) and coupled with X using the Diaconis–Fill coupling. We define inductively \(\xi _{i+1}\) as the first time after \(\xi _i+t(n)\) that all edges refresh at least once. In order to now define \(\tau _{i+1}\), we start a new evolving set process which at time \(\xi _{i+1}\) is the singleton \(\{X_{\xi _{i+1}}\}\). (This new restart does not affect the definition of the earlier \(\tau _j\)'s.) To simplify notation, we call this process again \(S_t\) and we couple it with the walk X using the Diaconis–Fill coupling. Next we define
From now on we call \(\eta \) a good environment if \(\eta \) is a \(\delta \)-good environment and \(\xi _k\le 2kt(n)\). Lemma 6.5 and the definition of the \(\xi _i\)’s give for all \(\eta _0\)
where \(c_4\) is a positive constant. By Proposition 6.6 there exists a positive constant c so that if \(\eta \) is a good environment, then for all \(x_0\) and for all \(1\le i\le k\) we have
We will now prove that there exists a positive constant \(c'\) so that for all \(x_0\)
Writing again \({\mathcal {E}}\) for expectation over the random environment and using (6.18) and (6.19) we obtain for all \(i\le k\) that there exists a positive constant \(c''\) so that for all n sufficiently large and for all \(x_0, \eta _0\)

This and Markov’s inequality now give that for all n sufficiently large
Since every edge refreshes after an exponential time of parameter \(\mu \), it follows that the number of different percolation configurations that appear in an interval of length t is stochastically bounded by a Poisson random variable of parameter \(\mu \cdot t\cdot dn^d\). Therefore, the number of possible percolation configurations in the interval \([\xi _i, \xi _i+t(n)]\) is dominated by a Poisson variable \(N_i\) of parameter \(\mu \cdot t(n)\cdot dn^d\). By the concentration of the Poisson distribution, we obtain
where \(c_1\) is another positive constant. Let \({\mathcal {G}}^1, \ldots , {\mathcal {G}}^k\) be the giant components of independent supercritical percolation configurations. Since the percolation clusters obtained at the times \(\xi _i\) are independent, using Corollary 3.2 in the third inequality below we deduce that for all n sufficiently large
where \(c''\) is a positive constant uniform for all \(x_0\) and \(\eta _0\). This proves (6.20). So we can sum this error over all starting points \(x_0\) and get using Markov’s inequality that for all n sufficiently large and all \(\eta _0\)
The definition of the stopping times \(\tau _i\) immediately yields
This together with (6.22) now give
Remember the dependence on \(x_0\) of the stopping times \(\tau _i\) that we have suppressed. We now change the definition of a good environment and call \(\eta \) good if it satisfies the following for all \(x_0\)
From (6.21) and (6.23) we get that for all \(\eta _0\)
We now define a stopping time \(\tau (x_0)\) by selecting \(i\in \{1,\ldots , k\}\) uniformly at random and setting \(\tau (x_0)=\tau _i(x_0)\). Then at this time we have

We now set \(f_1(x) = {\mathbb {P}}_{x_0,\eta }\left( x\in S_{\tau _1}\cup \cdots \cup S_{\tau _k}\right) \) for all x. Since \(\eta \) is a good environment, for some \(\delta '<\varepsilon /50\) we have for all n sufficiently large
First let \(c=c(\varepsilon )\in {\mathbb {N}}\) be a constant to be fixed later. In order to define the stopping rule, we first repeat the above construction ck times. More specifically, when \(X_0=x_0\), we let \(\sigma _1= \tau (x_0)\wedge (\log n)t(n)\). Then, since \(\eta \) is a good environment, we obtain
Let \(X_{\sigma _1} = x_1\). Then we define in the same way as above a stopping time \(\tau (x_1)\) with the evolving set process starting from \(\{x_1\}\) and the environment considered after time \(\sigma _1\). Then we set
We continue in this way and define a sequence of stopping times \(\sigma _i\) for all \(i< ck\). In the same way as for the first round for all \(i< ck\) we have
and the function \(f_i\) satisfies (6.27).
We next define the stopping rule. To do so, we specify the probability of stopping in each round. We define the set \(A_1\) of good points for the first round as follows:
We now sample X at time \(\sigma _1\). If \(X_{\sigma _1}=x\in A_1\), then at this time we stop with probability
If we stop after the first round, then we set \(T=\sigma _1\). So if \(x\in A_1\), we have
From (6.27) we get that \(|A_1|\ge (1-3\delta ')n^d\) for all n sufficiently large. Therefore, summing over all \(x\in A_1\) we get that
Therefore, this now gives for \(x\in A_1\)
We now define inductively the probability of stopping in the i-th round. Suppose we have not stopped up to the \((i-1)\)-st round. We define the set of good points for the i-th round via
If \(X_{\sigma _i}=x\in A_i\), then the probability we stop at the i-th round is
and as above we obtain by summing over all \(x\in A_i\) and using that \(|A_i|\ge (1-3\delta ')n^d\)
If we have not stopped before the ck-th round, then we set \(T=\sigma _{ck+1}\). Notice, however, that
For every round \(i\le ck\), we now have that
since \(\varepsilon <1/4\). So we now get overall
We now take \(c=c(\varepsilon )\) so that the above bound is smaller than \(\varepsilon \). Finally, by the definition of the stopping times \(\sigma _i\), we also get that \({\mathbb {E}}_{x_0,\eta }\left[ T\right] \le c k (\log n)t(n)\) and this concludes the proof. \(\square \)
Proof of Corollary 1.7
Let \(n=10r\). It suffices to prove the statement of the corollary for X being a random walk on dynamical percolation on \({\mathbb {Z}}_n^d\). From Theorem 1.6 there exists \(a>0\) so that for all n large enough and all x and \(\eta _0\)
The statement of the corollary follows by iteration. \(\square \)
References
Andres, S., Chiarini, A., Deuschel, J.-D., Slowik, M.: Quenched invariance principle for random walks with time-dependent ergodic degenerate weights. Ann. Probab. 46(1), 302–336 (2018)
Biskup, M., Rodriguez, P.-F.: Limit theory for random walks in degenerate time-dependent random environments. J. Funct. Anal. 274(4), 985–1046 (2018)
Diaconis, P., Fill, J.A.: Strong stationary times via a new form of duality. Ann. Probab. 18(4), 1483–1522 (1990)
Grimmett, G.: Percolation, Volume 321 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 2nd edn. Springer, Berlin (1999)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence, RI (2009). (With a chapter by James G. Propp and David B. Wilson)
Mathieu, P., Remy, E.: Isoperimetry and heat kernel decay on percolation clusters. Ann. Probab. 32(1A), 100–128 (2004)
Morris, B., Peres, Y.: Evolving sets, mixing and heat kernel bounds. Probab. Theory Relat. Fields 133(2), 245–266 (2005)
Peres, Y., Sousi, P., Steif, J.E.: Quenched exit times for random walk on dynamical percolation. Markov Process. Relat. Fields 24, 715–731 (2018)
Peres, Y., Stauffer, A., Steif, J.E.: Random walks on dynamical percolation: mixing times, mean squared displacement and hitting times. Probab. Theory Relat. Fields 162(3–4), 487–530 (2015)
Pete, G.: A note on percolation on \(\mathbb{Z}^d\): isoperimetric profile via exponential cluster repulsion. Electron. Commun. Probab. 13, 377–392 (2008)
Acknowledgements
We thank Sam Thomas and the referee for a careful reading of the manuscript and providing a number of useful comments. We also thank Microsoft Research for its hospitality where parts of this work were completed. The third author also acknowledges the support of the Swedish Research Council and the Knut and Alice Wallenberg Foundation.
Keywords
- Dynamical percolation
- Random walk
- Mixing times
- Stopping times
Mathematics Subject Classification
- Primary 60K35
- 60K37