How long does it take for Internal DLA to forget its initial profile?

Internal DLA is a discrete model of a moving interface. On the cylinder graph Z_N × Z, a particle starts uniformly on Z_N × {0} and performs simple random walk on the cylinder until reaching an unoccupied site in Z_N × Z_{≥0}, which it occupies forever. This operation defines a Markov chain on subsets of the cylinder. We first show that a typical subset is rectangular with at most logarithmic fluctuations. We use this to prove that two Internal DLA chains started from different typical subsets can be coupled with high probability by adding order N^2 log N particles. For a lower bound, we show that at least order N^2 particles are required to forget which of two independent typical subsets the process started from.


Introduction
Internal diffusion-limited aggregation (IDLA) models a random subset of Z^d that grows in proportion to the harmonic measure on its boundary, seen from an internal point. More precisely, let A(0) = {0} denote the subset ("cluster") at time t = 0. For integer times t ≥ 1, inductively define A(t) = A(t − 1) ∪ {Z_t},  (1) where Z_t denotes the exit location from A(t − 1) of a simple random walk on Z^d starting from 0, independent of the past. The process (A(t))_{t≥0} is a Markov chain on the space of connected subsets of Z^d. As t → ∞, the asymptotic shape of A(t) is an origin-centered Euclidean ball [11], and its fluctuations from the ball are at most logarithmic in dimensions d ≥ 2 [2,4,7,10]; see also [3] for a lower bound on the fluctuations. Space-time averages of the fluctuations converge to a logarithmically correlated Gaussian field [8,9]. In the setting of the cylinder graph [9] it was asked what happens if the process is initiated with a cluster A(0) other than a singleton: How long does it take for IDLA to forget the shape of A(0)? Our main results, Theorems 1.3 and 1.4 below, give upper and lower bounds that match up to a log factor.
Fix an integer N ≥ 3, and let Z N × Z denote the cylinder graph with the cycle on N vertices as base graph. We refer to x ∈ Z N as the horizontal coordinate, and to y ∈ Z as the vertical coordinate. For k ∈ Z, we call {y = k} := Z N × {k} the kth level of the cylinder, and R k := {(x, y) : y ≤ k} the infinite rectangle of height k.
It is sometimes convenient to formulate the growth in terms of aggregation of particles. Let A(0) be the union of the lower half-cylinder R_0 with a finite (possibly empty) set of sites in the upper half-cylinder. At time zero, each site in A(0) is occupied by a particle. At each discrete time step, a new particle is released from a uniform point on level 0, and performs a simple random walk until it reaches an unoccupied site, which it occupies forever. Motion of particles is instantaneous: we do not take into account how many random walk steps are required for a particle to reach an unoccupied site, but rather increment the time by 1 after it does so. Formally, we define A(t) inductively according to (1), where Z_t is the exit location from A(t − 1) of a simple random walk on the cylinder graph starting from a uniform random site on level 0, independent of the past. Note that this is equivalent to adding a new site to the cluster according to the harmonic measure on its exterior boundary seen from level −∞. When A(0) = R_0 we say that the process is "starting from flat".
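The particle dynamics just described are straightforward to simulate. The following Python sketch is ours, for illustration only (it is not taken from the paper); the lower half-cylinder R_0 is kept implicit by treating every site with y ≤ 0 as occupied, so only the sites strictly above level 0 are stored.

```python
import random

def run_idla(N, t, seed=0):
    """Grow an IDLA cluster on the cylinder Z_N x Z by releasing t
    particles from uniform positions on level 0.  The initial cluster
    is the flat configuration R_0: every site with y <= 0 counts as
    occupied, so we only store the occupied sites with y >= 1."""
    rng = random.Random(seed)
    above = set()  # occupied sites strictly above level 0
    for _ in range(t):
        x, y = rng.randrange(N), 0  # uniform start on level 0
        # Walk until reaching an unoccupied site of the upper half-cylinder.
        while y <= 0 or (x, y) in above:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % N, y + dy
        above.add((x, y))  # the particle settles here forever
    return above
```

For t of order N^2 one expects the occupied region to look like a filled rectangle of height about t/N with small fluctuations, in line with Theorem 1.1.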
In the cylinder setting there are two parameters, the size N of the base graph and the time t. Just as large IDLA clusters on Z^d are logarithmically close to Euclidean balls, it is natural to expect large IDLA clusters on the cylinder to be logarithmically close to filled rectangles. When t = N^2 this was stated in [9] (but not proved there, as the proof method is the same as in [7]). Our first result extends it to large times t ≤ N^m, which we will later use to control the fluctuations of stationary clusters.

Theorem 1.1 Let (A(t))_{t≥0} be an IDLA process on Z_N × Z starting from the flat configuration A(0) = R_0. For any γ > 0, m ∈ N there exists a constant b_{γ,m}, depending only on γ and m, such that (2) holds for N large enough.
We prove the above result in two steps. To start with, we argue that (2) holds for T ≤ (N log N)^2, which can be shown by adapting the Jerison, Levine and Sheffield arguments [7] to the cylinder setting (cf. Theorem 5.1). We then invoke the Abelian property of IDLA (cf. Sect. 2) to build large clusters by piling up nearly rectangular blocks of O(N^2 log N) particles each.
Suppose now that the IDLA process on Z_N × Z is not initiated from the flat configuration R_0, but rather from an arbitrary connected cluster A(0) ⊃ R_0. How long does it take for the process A(t) to forget that it did not start from flat? Clearly, the answer to this question very much depends on A(0). For example, it will take an arbitrarily large time to forget an arbitrarily tall initial profile. On the other hand, most profiles are unlikely under the IDLA dynamics. This leads us to the following related question: How long does it take for IDLA to forget a typical initial profile? To define "typical", let Ω := {A ⊆ Z_N × Z : A = R_0 ∪ F for some finite F} denote the set of clusters which are completely filled up to level 0. This is the state space of IDLA. On Ω we introduce the following shift procedure: each time the cluster is completely filled up to level k > 0, we shift the cluster down by k (see Fig. 1).
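The shift procedure is easy to make concrete with a cluster stored as its set of occupied sites strictly above level 0 (the lower half-cylinder R_0 is implicit). The following Python sketch is ours, for illustration only:

```python
def full_levels(above, N):
    """Largest k >= 0 such that levels 1, ..., k are completely occupied."""
    k = 0
    while sum(1 for (_, y) in above if y == k + 1) == N:
        k += 1
    return k

def shift(above, N):
    """One application of the shift map S: if the cluster is completely
    filled up to level k > 0, move every site down by k.  Sites pushed
    to height <= 0 merge into the (implicit) lower half-cylinder R_0."""
    k = full_levels(above, N)
    return {(x, y - k) for (x, y) in above if y - k > 0}
```

Shifted IDLA alternates IDLA releases with applications of shift; it is this re-centering that makes a stationary distribution μ_N possible.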

Fig. 1 An IDLA cluster A and its shifted version S(A)
The shifted process defines a new Markov chain on the same configuration space. While the original IDLA chain is transient, we will see that Shifted IDLA is a positive recurrent Markov chain (cf. Remark 6.1), and it thus has a stationary distribution, which we denote by μ_N.
Recall from [1] that a probability distribution is called a warm start for a given Markov chain if it is upper bounded by a constant factor times the stationary distribution of the chain. We relax this definition below.

Definition 1.2 (Lukewarm start) For k ∈ N, a probability measure ν_N on the state space is said to be a k-lukewarm start (for Shifted IDLA) if ν_N(A) ≤ N^k μ_N(A) for every cluster A. We say that ν_N is a lukewarm start if it is a k-lukewarm start for some k ∈ N.

Definition 1.3 (Typical clusters)
A random cluster A is said to be a typical cluster if it is distributed according to some lukewarm start ν_N for Shifted IDLA. If ν_N = μ_N then A is said to be a stationary cluster.
For a cluster A, let |A| denote the number of occupied sites above level 0, that is |A| := #{(x, y) ∈ A : y > 0}.
Let further h(A) denote the height of A, that is h(A) = max{y : (x, y) ∈ A for some x ∈ Z_N}, so that h(A) ≥ 0 for all clusters A. Our next result is a bound on the height of a typical cluster: it is at most logarithmic in N with high probability.

Theorem 1.2 (Height of typical clusters) For any γ > 0, k ∈ N there exists a constant c_{γ,k}, depending only on γ and k, such that for all k-lukewarm starts ν_N it holds that ν_N(h(A) > c_{γ,k} log N) ≤ N^{−γ} for N large enough.
In particular, taking k = 0 we see that stationary IDLA clusters have logarithmic height with high probability. This gives nontrivial information on the stationary measure of a nonreversible Markov chain on an infinite state space. Remark 1.1 We believe that this bound is tight, in the sense that for A_N ∼ μ_N the ratio h(A_N)/log N converges in probability to a positive constant as N → ∞. See [6, Figure 9] for some numerical evidence.
We use Theorem 1.2 to bound from above the time it takes for IDLA to forget a typical initial profile.

Theorem 1.3 (The upper bound)
For any γ > 0, k ∈ N there exist a constant d_{γ,k} and a set of clusters Ω_{γ,k}, depending only on γ and k, such that the following hold for all sufficiently large N. For any k-lukewarm start ν_N, a cluster sampled from ν_N belongs to Ω_{γ,k} with high probability. Moreover, for any A_0, A'_0 ∈ Ω_{γ,k} with |A_0| = |A'_0|, writing t_{γ,k} = d_{γ,k} N^2 log N for brevity, the laws P(t_{γ,k}) and P'(t_{γ,k}) are close in total variation, where (A(t))_{t≥0} and (A'(t))_{t≥0} are IDLA processes starting from A_0 and A'_0 respectively, and P(t) and P'(t) denote the laws of A(t) and A'(t) respectively.
Thus IDLA forgets a typical initial state, and hence in particular a stationary initial state, in O(N^2 log N) steps with high probability.

Remark 1.2
The assumption |A_0| = |A'_0| could be relaxed to |A_0| ≡ |A'_0| (mod N), by shifting the smaller cluster to include some full rows of N sites each. To remove the cardinality assumption entirely, one could consider "lazy" IDLA processes: at each discrete time step, a particle is added with probability 1/2, and otherwise nothing happens. Then the difference |A(t)| − |A'(t)| (mod N) is a lazy simple random walk on the N-cycle, so with high probability it vanishes after a further O(N^2) steps.

When the initial profiles are stationary, we can complement the upper bound in Theorem 1.3 with the following lower bound.

Theorem 1.4 (The lower bound) For any δ, ε > 0 there exist disjoint sets of clusters Ω_δ, Ω'_δ and a constant α = α(δ, ε) > 0 such that the following holds. Let (A(t))_{t≥0} and (A'(t))_{t≥0} be two IDLA processes on Z_N × Z starting from A_0 ∈ Ω_δ, A'_0 ∈ Ω'_δ, and denote by P(t), P'(t) the laws of A(t), A'(t) respectively. Then for all t ≤ αN^2 the laws P(t) and P'(t) remain far apart in total variation, for N large enough.
The above theorem tells us that two independently sampled stationary profiles A, A' are, with probability arbitrarily close to 1/2, different enough for IDLA to need order N^2 steps to forget from which one it started. To prove this, we identify a slow-mixing statistic based on the second eigenvector of simple random walk on Z_N (see Definition 8.1 in Sect. 8.3).
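The spectral ingredient here is elementary and easy to check numerically: the second eigenvector of simple random walk on Z_N is x ↦ cos(2πx/N), with eigenvalue cos(2π/N), so any statistic built from it relaxes at rate 1 − cos(2π/N) ≍ N^{−2}, which is where the N^2 time scale comes from. The Python snippet below is our illustration of this eigenpair only; it is not the statistic of Definition 8.1 itself.

```python
import math

def cycle_eigenpair(N):
    """Second eigenpair of SRW on Z_N: v(x) = cos(2*pi*x/N) satisfies
    (P v)(x) = (v(x-1) + v(x+1)) / 2 = cos(2*pi/N) * v(x),
    by the identity cos(a-b) + cos(a+b) = 2 cos(a) cos(b)."""
    v = [math.cos(2 * math.pi * x / N) for x in range(N)]
    lam = math.cos(2 * math.pi / N)
    Pv = [(v[(x - 1) % N] + v[(x + 1) % N]) / 2 for x in range(N)]
    return v, lam, Pv
```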

Outline of the proofs
We start by showing that IDLA forgets polynomially high profiles in polynomial time, as stated in the following theorem.
Theorem 1.5 Let A_0, A'_0 be any two clusters in the state space with |A_0| = |A'_0|.

Theorem 1.5 is proved via a coupling argument that we spell out below. We then combine it with Theorem 1.1 to show that stationary clusters have at most logarithmic height with high probability (cf. Theorem 1.2). The upper bound (cf. Theorem 1.3) is a simple corollary of these two results. We sketch here the main ideas behind these proofs.

Theorem 1.5: the water level coupling
Let A_0 and A'_0 be two clusters in the state space with n_0 := |A_0| = |A'_0|. We are going to build large IDLA clusters starting from A_0, A'_0 in a convenient way. Let t_{γ,m} = h_0 N + d_{γ,m} N^2 log N be as in Theorem 1.5. Introduce a new IDLA cluster W_0, independent of everything else, built by adding t_{γ,m} particles to the flat configuration R_0. Note that since W_0 contains polynomially many particles, by Theorem 1.1 it will be completely filled up to height h_0 + b_{γ,m} N log N for a suitable constant b_{γ,m}, with high probability.
We take W_0 to be the initial configuration of two auxiliary processes (W(t))_{t≤n_0} and (W'(t))_{t≤n_0}, which we think of as water flooding the clusters. Water falling in A_0 (resp. A'_0) freezes, and is only released at a later time. Frozen water particles are released in pairs, and their trajectories are coupled so as to make the particles meet with high probability before exiting the respective clusters. Clearly, by taking the initial water cluster W_0 large enough we can ensure that all pairs of frozen particles meet with high probability before exiting their respective clusters, in which case we have W(s) = W'(s) for all s ≤ n_0. The theorem then follows by invoking the Abelian property (cf. Sect. 2) for the equalities in law. See Fig. 2 for an illustration of this argument, and Sect. 4 for the details.

Fig. 2 The "water level coupling" in the proof of Theorem 1.5. Top row: initial clusters. Middle row: the water cluster W(0), with frozen particles in light blue. Frozen particles are released in pairs, and their trajectories (in black) are coupled so that they meet with high probability before exiting W(0). After meeting, they follow the same trajectory (in red). As a result, they exit at the same location. Bottom row: identical clusters W(n_0) = W'(n_0) resulting from the release of all frozen particles (color figure online)

Fig. 3 Starting from a high density configuration (left), the associated Shifted IDLA process reaches a polynomially high profile (centre), and then a logarithmically high one (right). Both transitions take at most polynomially many steps

Theorem 1.2: typical clusters are shallow
It suffices to prove the result for ν_N = μ_N. Assume Theorem 1.1, according to which an IDLA process starting from flat has logarithmic fluctuations for polynomially many steps, with high probability. It then suffices to show that such a process reaches stationarity in polynomial time. This would follow from Theorem 1.5, if we knew that stationary clusters have at most polynomial height in N, with high probability. To see this, we first observe that stationary clusters are dense, since an IDLA process spends only a small amount of time at low density configurations. Here the density of a cluster A is measured via its excess height E(A) = h(A) − |A|/N, that is, the difference between the actual height and the ideal height of A. We show that if the excess height is too large, then it has a negative drift under the IDLA dynamics. The advantage of measuring the clusters' density via their excess height lies in the fact that a bound on the latter easily translates into a bound on the number of empty sites below the top level. Once we have such a bound, we can fill the holes below the top level by releasing enough (but at most polynomially many) additional particles. This leaves us with clusters of at most polynomial height. Take any deterministic time of the form t = N^k for large enough k. Then the cluster A(t) is both stationary and of at most polynomial height with high probability, which implies that typical clusters have at most polynomial height with high probability, as claimed (Fig. 3).

The Abelian property
The Abelian property of Internal DLA was first observed by Diaconis and Fulton [5]. More recently, it has been used to generate exact samples of Internal DLA clusters in less time than it takes to run the constituent random walks [6]. For the related model of activated random walkers on Z, the Abelian property was used to prove existence of a phase transition [13].
To state the version of the Abelian property that will be used in our arguments, let us start by defining the Diaconis-Fulton smash sum in our setting. Given a cluster A in the state space and a vertex z ∈ Z_N × Z, define the set A ⊕ {z} as follows: if z ∉ A then A ⊕ {z} := A ∪ {z}, while if z ∈ A then A ⊕ {z} := A ∪ {Z}, where Z is the exit location from A of a simple random walk started at z. The Abelian property, stated below, gives some freedom in how to build IDLA clusters without changing their laws.
In light of the above result, given a cluster A in the state space and a finite subset B = {z_1, ..., z_n} of Z_N × Z, we may define A ⊕ B := (((A ⊕ {z_1}) ⊕ {z_2}) ⊕ ···) ⊕ {z_n}, where z_1, ..., z_n is an arbitrary enumeration of the elements of B. Remark 2.2 We point out that Proposition 2.1 is stated and proved in [5] for finite sets, while in our setting the set A is infinite. Nevertheless, we can easily reduce to working with finite sets by changing the jump rates of our random walks at level zero to mimic the hitting distribution of a simple random walk after an excursion below level zero. This has the effect of contracting all excursions below level zero to a single step, and it clearly does not change the law of the model.

A mixing bound
For x ∈ Z_N, let P^t_x denote the law of a lazy simple random walk on Z_N starting from x, and denote its stationary measure, uniform on Z_N, by π_N. For ε > 0 define τ_N(ε) := min{t ≥ 0 : max_{x ∈ Z_N} ‖P^t_x − π_N‖_TV ≤ ε}. The Total Variation (TV) mixing time of this walk is defined to be τ_N(1/4), which we simply denote by τ_N. It is well known that τ_N is of order N^2 and, moreover, that τ_N(N^{−γ}) = O(γ N^2 log N) for any γ > 0 (see e.g. [14]). Let ω = (x(t), y(t))_{t≥0} be a simple random walk on Z_N × Z and define τ^y_n := inf{t ≥ 0 : y(t) = n} to be the first time it reaches level n. The next lemma tells us that by the time the walker has travelled about N log N levels in the vertical coordinate, it has mixed well in the horizontal one.

Proof Let (x̂(t))_{t≥0} be the jump process associated to the motion of the horizontal coordinate, obtained by only looking at (x(t))_{t≥0} when the random walk makes a horizontal step. Similarly, let (ŷ(t))_{t≥0} be the jump process associated to the motion of the vertical coordinate. We estimate the two terms separately: the first one is controlled by large deviations estimates, while the second one by using the explicit expression for the moment generating function of τ̂^y_n. More precisely, we have, where we have used that E(2^{−G_1}) = 1/3 for the second inequality. For the second term, it is simple to check that, for z ∈ (0, 1), the moment generating function admits an explicit bound. This gives, for any z ∈ (0, 1), a corresponding bound on the probability in question. Choose z ∈ (0, 1) of the form z^2 = 1 − 1/α^2 for some α ≫ 1 as N → ∞. To make the far right-hand side smaller than, say, N^{−5γ/4}, it suffices to take n large enough; optimizing over α suggests taking α = 2N, to get n ≥ 10γ N log N.
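The claim that the lazy walk on Z_N mixes in order N^2 steps is easy to check exactly for small N by evolving the distribution vector; the sketch below is ours, for illustration, and computes the total variation distance from the uniform measure π_N.

```python
def lazy_cycle_distribution(N, t, start=0):
    """Exact law at time t of the lazy walk on Z_N: hold with
    probability 1/2, step to either neighbour with probability 1/4."""
    p = [0.0] * N
    p[start] = 1.0
    for _ in range(t):
        p = [0.5 * p[i] + 0.25 * p[(i - 1) % N] + 0.25 * p[(i + 1) % N]
             for i in range(N)]
    return p

def tv_from_uniform(p):
    """Total variation distance from the uniform measure on len(p) points."""
    N = len(p)
    return 0.5 * sum(abs(q - 1.0 / N) for q in p)
```

For N = 8, the distance starts at 1 − 1/N and falls well below 1/4 by time N^2, consistent with τ_N being of order N^2.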

The role of starting locations
The following result tells us that if the walkers have time to mix in the horizontal coordinate before exiting the cluster, the resulting IDLA configuration does not depend too much on their initial positions.
Remark 3.1 In particular, this tells us that, as long as the IDLA cluster is filled up to level C_{γ,m} N log N, releasing the next T walkers from fixed initial locations below level 0 or from uniform locations at level 0 results in the same final cluster with high probability.
Proof This is an easy consequence of Lemma 3.1. For i ≤ T, let ω_i = ((x_i(k), y_i(k)))_{k≥0} and ω'_i = ((x'_i(k), y'_i(k)))_{k≥0} denote the simple random walk trajectories of the ith walkers, starting from (x_i, y_i) and (x'_i, y'_i) respectively. These are coupled as follows. If, say, y_i < y'_i, then ω'_i stays in place until ω_i reaches level y'_i (and vice versa if y_i > y'_i). We can therefore assume that y_i = y'_i without loss of generality. If, moreover, N is even and |x_i − x'_i| is odd, then the first time that ω_i moves in the horizontal coordinate we keep ω'_i in place. Since the probability of ω_i reaching level N before making a horizontal step is o(2^{−N}) for large N, we can assume that |x_i − x'_i| is even without loss of generality. The walks then move as follows. If ω_i moves in the y coordinate, then so does ω'_i, and the two walks take the same step. Thus y_i(0) = y'_i(0) implies that y_i(k) = y'_i(k) for all k ≥ 0. If, on the other hand, ω_i moves in the x coordinate, then so does ω'_i, and the two walks move according to the reflection coupling on the N-cycle (cf. Sect. 1.1). Finally, the walkers ω_i and ω'_i stick together upon meeting; that is, if A(i − 1) = A'(i − 1) and the ith walkers meet before exiting the identical clusters, then A(i) = A'(i).
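The reflection coupling used above is simple to simulate: whenever one horizontal jump process takes a step, its partner takes the mirrored step, and the pair sticks together upon meeting. The hedged Python sketch below is ours; it tracks only the horizontal jump chains and assumes the parity reductions from the proof have already been made.

```python
import random

def reflection_meeting_time(N, x0, x1, seed=0, max_steps=10**6):
    """Run two walkers on the N-cycle under the reflection coupling
    (one steps +1 exactly when the other steps -1) and return the
    number of joint steps until they meet.  As in the proof, the
    initial separation x1 - x0 should be even when N is even."""
    rng = random.Random(seed)
    a, b = x0 % N, x1 % N
    for step in range(max_steps):
        if a == b:
            return step  # coupled: from now on they move together
        s = rng.choice([1, -1])
        a, b = (a + s) % N, (b - s) % N
    return None  # did not meet within the step budget
```

Note that the difference b − a changes by ±2 at each step, which is why the even-separation reduction is needed when N is even.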
Let, consistently with the notation introduced in the proof of the previous result, (x̂_i(k))_{k≥0} and (x̂'_i(k))_{k≥0} be the jump processes associated to (x_i(k))_{k≥0} and (x'_i(k))_{k≥0}. Then, if τ̂ denotes the corresponding jump time, we have, by comparison with a simple random walk, the required estimate for all γ > 0 and N large enough. Moreover, this hitting time can be controlled in terms of Geometric random variables of mean 2. Let γ' = 6(γ + m) and γ'' = γ'/3. Recall that R_n denotes the infinite rectangle of height n, and assume that A(0) = A'(0) ⊇ R_n for n = 10γ' N log N. Denote by τ_n the first time both walkers reach level n. Then by Lemma 3.1 and (12) we have a chain of inequalities, where the third inequality follows from (12). Finally, taking λ = log(3/2) > 0 makes the term in the square brackets equal to 8/9, from which we conclude that P(ω_i and ω'_i exit R_n before meeting) ≤ N^{−3γ''} for N large enough. In all, we have found that the probability that the coupling fails is at most P(∃ i ≤ T such that ω_i and ω'_i exit R_n before meeting). This shows that we can take C_{γ,m} = 60(γ + m) in (10) to obtain (11), thus concluding the proof.

The water level coupling
In this section we prove Theorem 1.5.
Proof of Theorem 1.5 Let A_0, A'_0 be any two clusters in the state space with |A_0| = |A'_0| = n_0 and heights at most h_0. Let C_{γ+1,m+1} and b_{γ+1,m+2} be defined as in Proposition 3.1 and Theorem 1.1 respectively, and define d_{γ,m} so that d_{γ,m} ≥ C_{γ+1,m+1} + b_{γ+1,m+2}. We build an auxiliary water cluster W_0 by adding t_{γ,m} = h_0 N + d_{γ,m} N^2 log N particles to the flat configuration R_0 according to IDLA rules. Then, since t_{γ,m} ≤ N^{m+2}, Theorem 1.1 applies for N large enough. In particular, since d_{γ,m} ≥ C_{γ+1,m+1} + b_{γ+1,m+2}, the water cluster is completely filled up to height h_0 + C_{γ+1,m+1} N log N with high probability. Write {z_1, ..., z_{n_0}} and {z'_1, ..., z'_{n_0}} for arbitrary enumerations of the sites of A_0, A'_0 above level 0. We define two auxiliary processes (W(t))_{t≤n_0}, (W'(t))_{t≤n_0} by setting W(0) = W'(0) = W_0 and, inductively for t ≤ n_0, adding to the clusters the exit locations Z_t, Z'_t from W(t − 1), W'(t − 1) of simple random walks on Z_N × Z starting from z_t, z'_t. These walks are coupled as follows. If z_t is at a lower level than z'_t, then the walk starting from z_t moves freely (independently of everything else) until it reaches the level of z'_t, while the other walk stays in place. Once at the same level, the walks move together in the vertical coordinate, whereas the horizontal coordinates evolve according to the reflection coupling: if one steps to the left, the other one steps to the right, and vice versa. Once the walks meet, they move together in both coordinates. Since n_0 ≤ h_0 N ≤ N^{m+1}, and all the walks start at distance at least C_{γ+1,m+1} N log N from the boundary of the cluster, Proposition 3.1 gives that, for N large enough, with high probability W(s) = W'(s) for all s ≤ n_0. The result then follows by observing that if (A(t))_{t≥0} and (A'(t))_{t≥0} denote two IDLA processes starting from A_0 and A'_0 respectively, then the required equalities in law hold by the Abelian property. Indeed, if we denote by w_1, w_2, ..., w_{t_{γ,m}} the starting locations of the t_{γ,m} walkers used to grow W_0, then, with the notation introduced in Sect. 2, the claimed equality in law is given by Proposition 2.1.

Logarithmic fluctuations for large clusters
In this section we bound the fluctuations of an IDLA cluster with polynomially many particles, thus proving Theorem 1.1. To start with, we claim that the following holds.
Then for any γ > 0 there exists a finite constant a_γ, depending only on γ, such that the following holds. The above theorem is stated for T = N^2 in [9], where the authors use it to show convergence of space-time averages of IDLA fluctuations to the Gaussian free field. It can be proved by the same arguments used in [7] for planar IDLA, which are in fact rather simplified by the structure of the cylinder graph. Since this proof does not appear anywhere, and since we need to extend it to larger values of T, we give it in "Appendix C". Assuming Theorem 5.1, we can proceed with the proof of Theorem 1.1.

Proof of Theorem 1.1
It suffices to show that (2) holds for any fixed T ≤ N^m. The result will then follow by replacing γ with γ + m and using a union bound over all T ≤ N^m. If T = O(N^2 log N) then we are done by Theorem 5.1, so assume T ≫ N^2 log N. We are going to iteratively reduce the size of the cluster until it becomes O(N^2 log N). To this end, let a_{γ+2+m} and C_{γ+2,m} denote the constants in Theorem 5.1 and Proposition 3.1 respectively, and set a := a_{γ+2+m}, c := C_{γ+2,m} for brevity. At the cost of increasing these constants, we can assume that 2c ≥ 1 and that both a log N and c log N are integers. Define n := aN log N + 2cN^2 log N.
We build a large cluster, with the same law as A(T), by using the water processes mentioned in the introduction. To this end, let W_1 denote the cluster obtained by adding n particles to A(0) = R_0 according to IDLA rules. Then by Theorem 5.1 we have (13) for N large enough. Let W^f_1 denote the region which is filled with high probability after n releases, and write F_1 for the fluctuation region. Water particles in the fluctuation region are declared frozen, and they will be released at a later time. On top of the water-filled region W^f_1 we are going to build a second cluster with again n particles, so as to fill a rectangle of height 4cN log N with high probability. Let W_2 denote such a cluster, obtained by adding n particles to W^f_1 according to IDLA rules (thus, new water particles can, and will, settle inside F_1). Write W^f_2 and F_2 for the filled region and the fluctuation region of this cluster. Then the bound analogous to (13) holds. We again declare particles in F_2 frozen, and treat their locations as empty for subsequent walkers. This procedure is iterated for k = T/n − 1 rounds (Fig. 4).
Consider the event that the fluctuations bound (13) holds in all k rounds; by (13) this good event has high probability. Let us restrict to this good event. It remains to release the akN log N frozen particles from their locations, plus T − kn new ones uniformly from level 0. Denote by W(T) the cluster obtained after all the released particles have settled, so that |W(T)| = T. By the Abelian property, W(T) has the same distribution as A(T). Write T' = T − kn + akN log N for brevity, and let W'(T') denote the cluster obtained by adding T' particles to the filled rectangle R_{2kcN log N} according to IDLA rules (more precisely, we start T' random walks uniformly from level zero, independently of everything else, and add their exit locations to the cluster). We argue that we can couple W(T) and W'(T') so that they coincide with high probability. To see this, we proceed as follows. First release T − kn new particles uniformly from level 0, and note that, since T − kn ≥ n, on the good event these will fill a rectangle of height 2cN log N with probability at least 1 − N^{−(γ+2+m)}. If this happens, then all the frozen particles are at distance at least cN log N from the boundary of the cluster. Thus by Proposition 3.1 we can couple W(T) and W'(T') so that they agree with high probability. In all, we have reduced the problem of bounding the fluctuations of W(T), which has the law of A(T), to the same problem for the smaller cluster, at the price of a small probability of failure. Since T' ≤ 2 max{2n, aT/N}, this either makes the number of particles O((N log N)^2), in which case we stop, or it decreases it by a multiplicative factor 2a/N. Thus after at most m iterations of the above procedure we are back to clusters with O((N log N)^2) particles, which we know to have logarithmic fluctuations. This shows that (2) holds with b_{γ,m} = a, as wanted.

Typical profiles are shallow
In this section we show that typical IDLA profiles have at most logarithmic height, thus proving Theorem 1.2. This is achieved by combining Theorem 1.5 with a control on the density of stationary clusters.

Decay of the excess height
We distinguish between high and low density clusters by looking at their excess height, which we now define. Recall that the state space consists of the clusters completely filled up to level 0, so that h(A) ≥ 0 for every cluster A, while |A| denotes the total number of sites in A strictly above level 0.
The excess height E(A) of a cluster A is defined as the difference between the height of A and the minimum possible height given its size, i.e., E(A) = h(A) − |A|/N. Note that E(A) ≥ 0. We say that a cluster A has high density if E(A) ≤ E*, where E* is a constant to be chosen later depending only on N. If instead E(A) > E*, then A is said to have low density (Fig. 5).
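With a cluster stored as its set of occupied sites strictly above level 0 (R_0 implicit), the excess height is one line of Python; the sketch below is ours, for illustration only.

```python
def excess_height(above, N):
    """E(A) = h(A) - |A|/N: the height of the cluster minus the height
    that the same number of sites would reach if stacked in full rows."""
    if not above:
        return 0.0
    h = max(y for (_, y) in above)
    return h - len(above) / N

# A completely filled rectangle has excess height 0, while a single
# column of the same height wastes almost all of it:
rect = {(x, y) for x in range(4) for y in range(1, 4)}   # h = 3, |A| = 12
column = {(0, y) for y in range(1, 4)}                   # h = 3, |A| = 3
```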
To start with, we prove that, for a suitable choice of E * , the excess height drops below E * quickly under the IDLA dynamics.
Lemma 6.1 Let E(t) := E(A(t)) denote the excess height of A(t). Then for any η ∈ (0, 1) there exists a constant E* = E*(N, η), depending only on N and η, such that, if T_{E*} denotes the first time that the excess height drops below E*, then T_{E*} has exponential tails. Lemma 6.1 is an easy consequence of the following result, which tells us that if a cluster has low density then its excess height has a negative drift under the IDLA dynamics.

Fig. 6 An example of the trajectory of a particle surviving all the bad levels, which are at distance at least n* from one another

and write h(t) in place of h(A(t)) for brevity. If h(t) ≥ |A(t)|/N + E*, then there are at least E* bad levels between 0 and the top level h(t). Indeed, since h(t) ≥ |A(t)|/N + E*, there are at least |A(t)|/N + E* levels above level 0 in the cluster, and at most |A(t)|/N of them can be completely filled. Recall from (6) the definition of τ_N(N^{−γ}), and note that (8) applies as long as n ≥ n* := 20N log N. Then, if ω = (x(k), y(k))_{k≥0} is a simple random walk on Z_N × Z, the above implies that for N large enough. In words, a simple random walk on Z_N × Z starting at level zero has probability at least 1/(2N) to reach level n* for the first time at any given vertex, uniformly over the starting location. Now, since there are at least E* bad levels, we can find at least E*/n* bad levels at distance at least n* from each other. We treat these as traps. More precisely, for a new particle to increase the height of the cluster A(t), the particle must travel through all the bad levels without exiting the cluster, until it reaches the top. Since we are taking the bad levels sufficiently far apart, the particle has time to mix in between, so whenever it reaches a bad level for the first time it has probability at least 1/(2N) to fall outside the cluster. By taking enough bad levels, then, we can make the probability for the particle to survive all of them arbitrarily small (Fig. 6).
Formally, since the walker has probability at least 1/(2N) to exit the cluster upon reaching a bad level for the first time, the probability that it survives all the well-separated bad levels is at most (1 − 1/(2N))^{E*/n*}. To make this smaller than (1 − η)/N it suffices to take E* large enough, which concludes the proof.
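The arithmetic behind the choice of E* can be checked directly: with m well-separated bad levels, each passed with probability at most 1 − 1/(2N), the survival probability is at most (1 − 1/(2N))^m ≤ exp(−m/(2N)). The Python sketch below is ours; it verifies that m ≥ 2N log(N/(1 − η)) traps suffice to push the bound below (1 − η)/N.

```python
import math

def survival_bound(N, traps):
    """Upper bound on the probability that a particle passes every one
    of `traps` well-separated bad levels without settling outside."""
    return (1 - 1 / (2 * N)) ** traps

def traps_needed(N, eta):
    """Number of traps sufficient for survival_bound < (1 - eta)/N,
    using (1 - 1/(2N))**m <= exp(-m/(2N))."""
    return math.ceil(2 * N * math.log(N / (1 - eta)))
```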
We note in particular a consequence of this choice of E* that will be useful later on. We can now prove that low density clusters tend to decrease their excess height under the IDLA dynamics.
Proof of Lemma 6.1 Let n_0 = |A(0)| denote the number of sites in A(0) of positive height, and assume that (15) holds. It follows from Lemma 6.2 that the drift bound holds for all t ≥ 0. Now, since |A(t)| = n_0 + t, for t < T_{E*} we have a supermartingale bound. It thus follows from Azuma's inequality that the claim holds for t > (N/η)(E(0) − E*), as claimed.
Remark 6.1 Lemma 6.1 implies that Shifted IDLA is positive recurrent, thus proving the existence of the stationary distribution μ_N. To see this it suffices to show, for example, that the flat configuration R_0 is positive recurrent. Let (A(t))_{t≥0} be an IDLA process with A(0) = R_0, and define T_0 to be the first time t such that A(t) = R_k for some k ≥ 1. Here is a wasteful way to show that E_{R_0}(T_0) < ∞. Fix any η ∈ (0, 1) and let E* = E*(N, η) be the integer constant in Lemma 6.1. Starting from R_0, release E*N particles: with probability at least N^{−E*N} the final configuration will be the filled rectangle R_{E*}. If not, then the excess height has increased by at most E*N. We keep releasing particles until the excess height falls again below E*: by Lemma 6.1 this takes a random time with exponential tails, and hence finite expectation. Once the excess height is at most E*, there are at most E*N empty sites below the top level in the cluster. We release as many particles as there are empty sites below the top level: with probability at least N^{−E*N} the final configuration will be a filled rectangle R_k for some k ≥ 1. If not, the excess height is at most E* + E*N: we again wait for it to fall below E*, and iterate. After at most a Geometric(N^{−E*N}) number of attempts, the final configuration will be a filled rectangle. Each attempt takes at most E*N releases, plus the time it takes for the excess height to fall below E* starting from E* + E*N, which has exponential tails. In all, we conclude that the total time it takes to go from R_0 back to a flat configuration R_k for some k ≥ 1 has finite expectation.

Typical clusters are dense
In this section we show that, for N large enough, μ N gives high probability to high density clusters. This shows that stationary clusters are dense, and hence that typical clusters are dense, with high probability. Proof Recall that, for an IDLA process (A(t)) t≥0 on Z N × Z, we denote by (A * (t)) t≥0 the associated shifted process. Let us denote by * := {A * : A ∈ } its state space. Define the sets We seek to bound μ N (B c ). By the ergodic theorem for positive recurrent Markov chains, Define the following stopping times: where ∂B = {A ∈ * : E(A) ∈ [2E * , 2E * + 1)}. Note that, since the excess height always changes by either 1 − 1/N or 1/N , We divide the interval [0, t] into excursions from A to ∂B and then from ∂B to A. Since during excursions from A to ∂B the process is in B, only excursions from ∂B to A contribute to the expectation in (17). Let us say that the concatenation of an excursion from A to ∂B and from ∂B to A is a complete excursion. In the next lemma we bound the number of complete excursions by time t.
denote the number of complete excursions by time t. Then for γ = 6/(N e N ) and N large enough it holds Proof Let K̃(t) be defined as follows. Then clearly K̃(t) ≥ K (t), and so P(K (t) ≥ γ t) ≤ P(K̃(t) ≥ γ t). Moreover, since the excess height changes by at most 1 at each step, at the start of each excursion from A to ∂B the excess height must lie between E * − 1 and E * . We know (cf. Lemma 6.2) that when the excess height is greater than E * it has a negative drift, but we do not have any information on its drift when the process is in the set A. To overcome this problem, we simply ignore the time spent in A. Indeed, we will see that the time spent in B\A is large enough to give us what we want.
We seek to stochastically bound τ i A,∂B from below. To this end, define the following auxiliary random walk on (1/N )N = {n/N : n ∈ N}, with a reflecting barrier at zero: Note that, while the Shifted IDLA process (A * (t)) t≥0 is in B\A, the walk X , away from 0, stochastically dominates the associated excess height process by Lemma 6.2. In particular, let N 0 denote the total number of visits of X to 0 before reaching [E * , ∞). Then since each time the walk reaches 0 it jumps deterministically to 1 − 1/N , after which it takes at least N steps to reach 0 again. Now, N 0 is a geometric random variable with success probability P 1−1/N (X reaches [E * , ∞) before 0). The next result tells us that this probability is very small, and so N 0 is typically large.
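Before stating it, the smallness of this hitting probability can be illustrated by a toy Monte Carlo. The step law below is an illustrative assumption: up-steps of 1 − 1/N with probability q = (1 − η)/N and down-steps of 1/N otherwise, which matches the drift −η/N under which X t + ηt/N is a martingale (cf. Appendix A); the toy threshold e_star = 5 stands in for the much larger E * .

```python
import random

def hits_top_first(N, eta, e_star, rng):
    """One run of the reflected walk X from X_0 = 1 - 1/N: up-steps of
    1 - 1/N with probability q = (1 - eta)/N (illustrative choice making
    X_t + eta*t/N a martingale), down-steps of 1/N otherwise. Returns True
    if X reaches [e_star, oo) before 0."""
    q = (1.0 - eta) / N
    x = 1.0 - 1.0 / N
    while 1e-9 < x < e_star:
        x += (1.0 - 1.0 / N) if rng.random() < q else -1.0 / N
    return x >= e_star

rng = random.Random(1)
N, eta, e_star = 20, 0.5, 5      # toy threshold; the real E* is far larger
trials = 5000
p_hat = sum(hits_top_first(N, eta, e_star, rng) for _ in range(trials)) / trials
print(p_hat)   # already small at this toy scale; exponentially small in the real regime
```

Because the drift is negative, the walk almost always returns to 0 before climbing to the threshold, so N 0 is typically very large.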

Lemma 6.4
For E * as in (15) and N large enough, it holds Let us postpone the proof of the above lemma to "Appendix A", and explain how from this one can deduce the bound of Lemma 6.3. Note that (20) and (21) together imply that for N large enough. Let now T X E * := inf{i ≥ 0 : X i ≥ E * } denote the first hitting time of [E * , ∞) for the walk X , and notice that T X E * ≥ N 0 N . Take (T i ) i≥1 to be i.i.d. copies of T X E * , independent of everything else. Recall the definition of K̃(t), and define further K̂(t). Then we have This lemma is an easy consequence of the fact that T X E * ≥ N 0 N and Lemma 6.4, so we leave the proof for "Appendix B". This finishes the proof of Lemma 6.3.
Using (18) we can conclude the proof of Proposition 6.1. Indeed, writing E(t) in place of E(A * (t)) for brevity, and taking γ as above, we find for N large enough, since E * > (N log N ) 2 by (16). Thus we conclude that for N large enough, as claimed.

From high density to low height
We have shown in the previous section that typical clusters are dense. While this does not give any information on the height of A, it provides an upper bound on the number of empty sites below the top level h(A), which we will call holes. Indeed, if E(A) ≤ E * then there can be at most N E * holes in the cluster. We now obtain a bound on the time it takes to fill these holes (cf. Proposition 6.2), showing that it is at most polynomial in N , and use this to prove that stationary clusters have at most polynomial height (cf. Proposition 6.3).

Proposition 6.2
Let (A * (t)) t≥0 denote a Shifted IDLA process on Z N ×Z, and assume that E(A * (0)) ≤ E * , with E * as in (15). Let Write h * (t) in place of h(A * (t)) for brevity, and assume that h * (0) > . Define T := inf{t ≥ 0 : h * (t) ≤ } to be the first time the height of the Shifted IDLA process drops below . Then there exists a constant C η > 0, depending only on η, such that for N large enough.

Remark 6.2
Since this threshold is polynomial in N , this tells us that, when starting from a dense configuration, the height of a Shifted IDLA process drops below it after at most polynomially many releases.
Proof We argue as follows. Each time we add a new particle to the cluster, the lowest hole has probability at least 1/N to be filled, independently of everything else. Hence it will take at most a geometric number of releases of parameter 1/N to fill the lowest hole. In total, then, it will take at most the sum of N E * i.i.d. Geometric(1/N ) random variables to fill all the holes up to the top level h(A). Some care is needed, though: we cannot let the excess height increase too much while releasing these extra particles.
be a collection of i.i.d. Geometric(1/N ) random variables, and note that since > 2N 2 E * we have It follows that if we release particles then we have probability at least 1/2 to fill all the holes below the top level. If we fail, then the excess height has increased by at most . If so, we keep on releasing particles until the excess height falls again below E * , and iterate. After a Geometric(1/2) number of attempts we will have filled all the holes below the top level, which implies that the resulting Shifted IDLA cluster will have height at most . To formalise the above strategy, write E(t) in place of E(A * (t)) and define the following random times: Then, by definition, at times τ i there are at most N E * holes below the top level. Since the number of releases needed to fill all these holes is stochastically dominated by the sum of N E * i.i.d. Geometric(1/N ) random variables, (25) implies that at times τ i + we have filled all the holes with probability at least 1/2. If this happens, then h * (τ i + ) ≤ , and we stop. Otherwise E(τ i + ) ≤ E * + , and we start again. Let K denote the number of attempts needed to succeed. Then, denoting by ⪯ stochastic domination, we have K ⪯ K̂ for K̂ a Geometric(1/2) random variable. Note that the times s i are not in general identically distributed. It is convenient to make them i.i.d. by assuming that the excess height always starts from the maximum value E * + . More precisely, let and let (ŝ i ) i≥1 be a sequence of i.i.d. random variables equal in law to ŝ. Then s i ⪯ ŝ i and we have that We use this to estimate the moment generating function of T . For λ ∈ R, set M(λ) := E(e λŝ ). We have where the first equality above follows from the Monotone Convergence Theorem, and the last inequality from the independence of K and the ŝ i 's. Thus we conclude that as claimed. It remains to prove Lemma 6.6.
Then it follows from Lemma 6.1 that, for t ≥ 2 N η and N large enough, it holds Therefore, for λ < η 2 /(32N 2 ), we find where the last inequality holds for N large enough.
This concludes the proof of Proposition 6.2.
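The counting estimate at the heart of the proof above, namely that a sum of N E * i.i.d. Geometric(1/N ) variables, of mean N 2 E * , exceeds twice its mean with probability at most 1/2, can be checked numerically with toy parameters. In fact concentration makes the failure probability far smaller than Markov's bound of 1/2.

```python
import random

rng = random.Random(2)
N, e_star = 10, 5
holes = N * e_star                 # at most N*E* holes below the top level

def releases_to_fill(rng):
    # each release fills the lowest hole with probability at least 1/N, so
    # the total is dominated by a sum of N*E* i.i.d. Geometric(1/N) variables
    total = 0
    for _ in range(holes):
        g = 1
        while rng.random() >= 1.0 / N:
            g += 1
        total += g
    return total

trials = 2000
budget = 2 * N * N * e_star        # twice the mean N^2 * E*
fail = sum(releases_to_fill(rng) > budget for _ in range(trials)) / trials
print(fail)   # Markov's inequality guarantees at most 1/2; concentration does much better
```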
We use this to show that stationary clusters have at most polynomial height.

Proof Let (A(t)) t≥0 be an IDLA process with A(0) = A ∼ μ N . Then A(t) ∼ μ N for all deterministic t ≥ 0. Write h(t) in place of h(A(t))
for brevity. Take E * = N 2 log 3 N , and note that it satisfies (15). Define as in (23). Then we have where we have used that if T ≤ N 7 then h(N 7 ) ≤ + N 7 < N 8 .

Remark 6.3
It is worth pointing out that the proof of Proposition 6.1, and hence of Proposition 6.3 above, would also work for driving random walks on Z N × Z with a vertical drift. This would still give a polynomial bound for the height of typical clusters, perhaps with a larger exponent than the one in Proposition 6.3. Reasoning as in Theorem 1.5, such height bound could in turn be translated into a (largely nonoptimal) upper bound for the time it takes for IDLA with transient driving walks to forget its initial profile.

Typical clusters are shallow
We now finish the proof of Theorem 1.2. To start with, note that it suffices to prove the result for stationary clusters. Indeed, suppose we showed that for any γ > 0 there exists a constant c γ such that for N large enough. Then, if ν N is any k-lukewarm start for Shifted IDLA, we have so that (3) still holds with γ + k in place of γ . It thus suffices to prove Theorem 1.2 for stationary clusters. We argue as follows. Let (A(t)) t≥0 be an IDLA process starting from A(0) ∼ μ N , so that A(t) ∼ μ N for all deterministic t ≥ 0. Introduce an auxiliary IDLA process (Ā(t)) t≥0 starting from the flat profileĀ(0) = R 0 . Then, if |A(0)| = n 0 , we have Write A (t) =Ā(n 0 + t) to shorten the notation. Following the ideas presented in the proof of Theorem 1.5, we will couple the clusters A(N 10 ) and A (N 10 ) so that they match with high probability. Since A(N 10 ) ∼ μ N , and A (N 10 ) has logarithmic fluctuations with high probability, this will allow us to conclude.
To start with, note that by Proposition 6.3 we have for N large enough. In particular this shows that P(n 0 > N 9 ) ≤ 2e −N /2 . We can use this to bound the height of A (0). Indeed, for γ as in the statement of Theorem 1.2, we have for N large enough, where in the last inequality we have used Theorem 1.1 to argue that an IDLA cluster built by adding N 9 particles to R 0 has height O(N 8 ) with high probability. Introduce a water cluster W 0 obtained by adding N 10 particles to the flat configuration R 0 according to IDLA rules, and note that by Theorem 1.1 for N large enough. Define two auxiliary water processes (W (t)) t≥0 and (W (t)) t≥0 as follows. Set are declared frozen, and will be released at a later time. Note that which shows that all frozen particles are at distance at least N 8 from the boundary of W 0 with high probability. Fix arbitrary enumerations of the two sets of frozen particles, and accordingly denote their locations by {z 1 , z 2 , . . . , z n 0 } and {z 1 , z 2 , . . . , z n 0 }. For t ≥ 0, then, let W (t) (respectively W (t)) be the cluster obtained by adding to W (t − 1) (respectively W (t − 1)) the exit location from it of a simple random walk on Z N × Z starting from z t (respectively z t ). The random walks starting from z t and z t are coupled as explained in the introduction: the higher one stays in place until the other one reaches its level, after which they move together in the vertical coordinate, and according to the reflection coupling in the horizontal one. Then, writing E 0 for the event appearing in (29) for brevity, by Proposition 3.1 we find = W (n 0 ), this shows that we can couple A(N 10 ) and A (N 10 ) so that they agree with high probability, whatever A(0) is. We want to argue that A (N 10 ) = Ā(n 0 + N 10 ) has logarithmic fluctuations. If n 0 were deterministic, this would follow from Theorem 1.1. Instead n 0 is random since such is A(0), and as already observed it satisfies P(n 0 > N 9 ) ≤ 2e −N /2 for N large enough.
Moreover, by Theorem 1.1 there exists a constant b γ , depending only on γ , such that

Now, A(N 10 ) is stationary since
for N large enough. We thus conclude that for N large enough. This shows that A (N 10 ) has logarithmic fluctuations with high probability. Recall that for A ∈ we denote by A * its shifted version (cf. Definition 1.1). Then we have found that for N large enough, which concludes the proof.
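The coupling of the driving walks invoked above (the higher walker waits; once levels agree, vertical moves are shared and horizontal moves are mirrored) can be sketched for two free walks on the cylinder, ignoring the stopping-upon-settling that the actual IDLA argument needs. N is taken odd here so that the ±2 difference walk in the horizontal coordinate can reach 0; the step cap is only a safeguard.

```python
import random

def couple_walks(z, zp, N, rng, max_steps=200000):
    """Sketch of the walk coupling: while at different heights only the lower
    walker moves; once levels agree, vertical moves are shared and horizontal
    moves are mirrored (reflection coupling) until the walkers coincide.
    Returns the number of steps to coalescence, or None if the cap is hit."""
    (x, y), (xp, yp) = z, zp
    for step in range(max_steps):
        if (x, y) == (xp, yp):
            return step
        if y != yp:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            if y < yp:                              # only the lower walker moves
                x, y = (x + dx) % N, y + dy
            else:
                xp, yp = (xp + dx) % N, yp + dy
        elif rng.random() < 0.5:
            dy = rng.choice([1, -1])                # shared vertical move
            y, yp = y + dy, yp + dy
        else:
            dx = rng.choice([1, -1])                # mirrored horizontal moves
            x, xp = (x + dx) % N, (xp - dx) % N

steps = couple_walks((0, 0), (3, 2), N=9, rng=random.Random(3))
print(steps)
```

Each walker's marginal law is that of a simple random walk on Z N × Z, while the joint construction forces coalescence.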

The upper bound
We briefly explain how one can deduce Theorem 1.3 from Theorems 1.5 and 1.2. For any γ, k > 0 let c γ,k be defined as in Theorem 1.2, while we take d γ,2 as in Theorem 1.5 with m = 2. Set Then, if ν N is any k-lukewarm distribution, we can define for N large enough. Take any two clusters A 0 , A 0 in γ,k with |A 0 | = |A 0 |, and let (A(t)) t≥0 and (A (t)) t≥0 be two IDLA processes starting from A 0 and A 0 respectively. Then Theorem 1.5 tells us that we can build A(t γ,k ) and A (t γ,k ) on the same probability space so that for N large enough. We thus gather that for N large enough. Since γ is arbitrary, this concludes the proof.

The lower bound
In this section we specialise to stationary initial clusters, and prove that if two such clusters are sampled independently from μ N , then the IDLA dynamics will remember from which one it started for at least order N 2 steps, as stated in Theorem 1.4. We proceed as follows. We first recall the GFF fluctuations result by Jerison, Levine and Sheffield [9], saying that the average IDLA fluctuations, appropriately measured, converge to the restriction of the Gaussian Free Field to the unit circle. We then use this to define an observable which we show to be large, with positive probability, for stationary IDLA clusters (cf. Proposition 8.1). Finally, we argue that a necessary condition for IDLA to forget the initial configuration is for this observable to reach 0, and show that this takes time at least α N 2 for some α > 0, thus proving the result.

Average IDLA fluctuations and the GFF
Let us start by briefly recalling the average IDLA fluctuations result by Jerison, Levine and Sheffield [9]. Let (A(t)) t≥0 be an IDLA process on Z N × Z starting from flat, i.e.
with side-length 1/N and n 1 N , n 2 N as top-right corner, and set so that A N (t) ⊂ T × R. We use the rescaled and filled cluster A N (t) to define the discrepancy function for some finite integer K and with α −k = α k , so that ϕ is real-valued. Jerison, Levine and Sheffield proved the following. The next result tells us that the above theorem can be generalised to larger times.

Remark 8.1
Note that, as observed in [9], the exponential term in v(ϕ) is due to the fact that the process started from flat. Indeed, this term does not appear in the limiting variance v μ (ϕ), since T is large enough for the process to have reached stationarity.
The above result can be proved exactly as in [9], Theorem 3, by replacing the maximal fluctuations result with Theorem 1.1 above. For this reason we choose to skip the proof.
We now use the fact that A(T ) is close to a stationary cluster to control the size of the average fluctuations at stationarity. Let A μ denote a stationary cluster, that is A μ ∼ μ N , and denote by A μ N its rescaled and filled version as in (30). We start an IDLA process (A μ (t)) t≥0 from A μ (0) = A μ . Let n 0 = |A μ (0)| be the number of particles in A μ (0) above level zero. Then, if (A(t)) t≥0 is an IDLA process starting from flat, we have |A(n 0 )| = |A μ (0)| = n 0 and hence |A(n 0 + t)| = |A μ (t)| = n 0 + t for all t ≥ 0. Moreover, by Theorems 1.1 and 1.2, for any γ > 0 there exists a γ < ∞ such that, with h 0 = max{h(A(n 0 )), h(A μ (0))}, it holds for N large enough. Consider the time-shifted IDLA process A (t) = A(n 0 + t), starting from logarithmic height with high probability. Then by Theorem 1.5 for any γ > 0 there exists a finite constant d γ such that for t γ = d γ N 2 log N we can couple the clusters A μ (t γ ), A (t γ ) so that For ϕ as above, introduce the random variables Then for any δ > 0 we have Moreover, by Theorem 8.2 for any ε > 0 we can take N large enough to ensure that, if 𝒩 is a standard Gaussian random variable, for any γ, δ, ε > 0 and N large enough.

The initial imbalance
We now make a choice for the test function ϕ in the above discussion. Let for which v μ (φ) = 1/(8π ). Then by (33) we can take ε = δ and N large enough to get On the other hand, where A μ ∼ μ N and the last equality holds in distribution. In order to build a martingale, we now approximate φ by its discrete harmonic extension ψ away from the line {y = 0} on the high probability event with c γ as in Theorem 1.2 with k = 0. Following [9], to define ψ we introduce q N solution of so that q N = 2π + O(N −2 ), and for n = (n 1 , It is easy to check that ψ is discrete harmonic on Z N × Z. Moreover, for n = (n 1 , n 2 ) ∈ A μ and (x, y) ∈ Q N (n) we have that on the good event E μ for N large enough. Combining this with (34), we get for any δ > 0 and N large enough. The same arguments can be used to prove the reverse inequality, thus obtaining the following.
Let c 2 be defined as in Theorem 1.2 with k = 0. We define to have that for N large enough. Similarly, μ N ( δ ) ≥ 1/2 − δ for N large enough, and thus (4) in Theorem 1.4 is satisfied.
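A plausible concrete form of ψ (our assumption, chosen to be consistent with the two stated properties: discrete harmonicity on Z N × Z and q N = 2π + O(N −2 )) is ψ(n 1 , n 2 ) = e^{q N n 2 /N } cos(2πn 1 /N ), with q N = N arccosh(2 − cos(2π/N )). Both properties can be checked numerically:

```python
import math

def q_N(N):
    """Assumed closed form: N times the unique a > 0 with
    cosh(a) = 2 - cos(2*pi/N); then q_N = 2*pi + O(N^-2)."""
    return N * math.acosh(2 - math.cos(2 * math.pi / N))

def psi(n1, n2, N):
    # candidate discrete harmonic extension of x -> cos(2*pi*x/N)
    return math.exp(q_N(N) * n2 / N) * math.cos(2 * math.pi * n1 / N)

N = 50
for n1 in range(N):
    for n2 in range(-3, 4):
        nbr_avg = (psi((n1 + 1) % N, n2, N) + psi((n1 - 1) % N, n2, N)
                   + psi(n1, n2 + 1, N) + psi(n1, n2 - 1, N)) / 4
        assert abs(nbr_avg - psi(n1, n2, N)) < 1e-9   # discrete harmonicity
print(q_N(N) - 2 * math.pi)   # O(N^-2)
```

Note how ψ grows exponentially in the height, which is why the height of the clusters must be controlled when estimating the observable below.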

Remark 8.2
It follows from the above result that if A and A are two independent samples of μ N , then for any δ > 0 we can take N large enough so that which can be made arbitrarily close to 1/2 by taking δ small enough.

The observable
We use the above remark to define a convenient observable which, loosely speaking, measures the difference in the horizontal imbalance of two IDLA processes. Take A 0 ∈ δ and A 0 ∈ δ , and assume |A 0 | = |A 0 | without loss of generality, since otherwise two IDLA processes starting from A 0 and A 0 will never meet. We take A 0 , A 0 as starting configurations of two IDLA processes (A(t)) t≥0 and (A (t)) t≥0 .

Definition 8.1 (Imbalance) For
A ∈ , define the horizontal imbalance of A by with q N as in (35).
We use this to define an observable u(t) which measures the difference in the imbalance of A(t) and A (t), namely

Remark 8.3
Since ψ is discrete harmonic on Z N × Z, we have that (u(t)) t≥0 is a discrete time martingale.
By Proposition 8.1 and Remark 8.2 we have for any δ > 0 and N large enough. Clearly A(t) = A (t) implies u(t) = 0, so if we define then for any α > 0 we have, for an absolute constant c, where the last inequality in (39) follows from the Burkholder–Davis–Gundy inequality. Now, if we let n t , n t denote the settling locations of the tth walkers in the two IDLA processes, we have To estimate the above expectations we have to control the height of the clusters A(α N 2 ) and A (α N 2 ), as the function ψ grows exponentially with them. We explain how to control the first expectation, the second one following from the same arguments. Introduce the good event Then the a priori bound in Lemma C.1 with m = 60 gives for some constant C α depending only on α, and N large enough. On E c , on the other hand, we trivially have that h( since q N ≤ 8 for N large enough. In all, we have found for N large enough. Similarly one can control the same expectation involving A (α N 2 ). Thus where c is the absolute constant in (39). Let ε be as in Theorem 1.4. Since C α → 0 as α → 0, we can take α small enough so that 8cC α /δ 2 ≤ ε(1/2 − 10δ), to find Finally, putting this together with (38) we gather that which concludes the proof of Theorem 1.4.
Appendix A: Proof of Lemma 6.4

We aim to show that for N large enough. Denote P 1−1/N simply by P to shorten the notation, and let T X 0,E * be the first time X reaches either 0 or [E * , ∞). Then we have where T X 0 is the first time the random walk X reaches 0, and X T X 0,E * denotes the process X stopped upon reaching 0 or [E * , ∞). We bound the two terms above separately.
For the rightmost term, it is easy to check that M(t) = X t + ηt/N is a martingale up to time T X 0 . If we thus define M 0 (t) = M(t ∧ T X 0 ), then (M 0 (t)) t≥0 is a martingale for all times, with increments bounded by 1. It therefore follows from Azuma's inequality that for t ≥ 2N /η, which can be made smaller than e −2N by choosing t ≥ N 3 log N . For the remaining term, again by Azuma we have P sup for N large enough. In all, we have found that for N large enough. This concludes the proof of Lemma 6.4.
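Under the stated assumptions (increments of M 0 bounded by 1, and X 0 = 1 − 1/N ≤ 1), the Azuma step unwinds as follows; this is a sketch reconstructing the elided displays.

```latex
On $\{T^X_0 > t\}$ one has $X_t > 0$, hence
\[
  M_0(t) = X_t + \frac{\eta t}{N} > \frac{\eta t}{N},
  \qquad M_0(0) = X_0 = 1 - \tfrac1N \le 1 .
\]
For $t \ge 2N/\eta$ we have $\frac{\eta t}{N} - 1 \ge \frac{\eta t}{2N}$, so Azuma's
inequality for the martingale $M_0$ with increments bounded by $1$ gives
\[
  \mathbb{P}\bigl(T^X_0 > t\bigr)
  \;\le\; \mathbb{P}\Bigl(M_0(t) - M_0(0) \ge \frac{\eta t}{2N}\Bigr)
  \;\le\; \exp\Bigl(-\frac{(\eta t/2N)^2}{2t}\Bigr)
  \;=\; \exp\Bigl(-\frac{\eta^2 t}{8N^2}\Bigr).
\]
Taking $t \ge N^3 \log N$ makes this at most
$\exp(-\eta^2 N \log N/8) \le e^{-2N}$ for $N$ large.
```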

Appendix B: Proof of Lemma 6.5
We have, for any λ > 0, To bound the right hand side above, recall that T X E * ≥ N 0 N for N 0 ∼ Geometric(p), with p ≤ e −N . Thus if we let N̂ 0 be a Geometric random variable of parameter p̂ := e −N , then we find where for the last inequality we have used that log(1 − x) ≥ −2x for x ∈ (0, 1/2). Thus, recalling that γ = 6/(N e N ), it is then easy to check that which concludes the proof.

Appendix C: Logarithmic fluctuations for IDLA at large times
We survey the proof of the logarithmic fluctuations bound by Jerison, Levine and Sheffield for IDLA on the cylinder graph Z N × Z, and extend it to larger times to prove Theorem 5.1.

C.1. A priori bound
We obtain an a priori bound on the height of A(T ) following the outer bound argument by Lawler, Bramson and Griffeath [11], for arbitrary starting configurations. Let (A(t)) t≥0 denote an IDLA process on Z N × Z, and let h 0 = h(A(0)) denote its initial height. Then for any m ≥ 3 there exists β = β(m) ∈ (0, 1) such that, for T ≫ N log N and N large enough, it holds

Thus if, in particular, the process starts from the flat configuration A(0) = A 0 , then h(A(T )) ≤ mT /N with high probability for N large enough.
Proof Let Z k (t) := |A(t) ∩ {y = k}| denote the number of particles in A(t) at level k, and set μ k (t) := E(Z k (t)). Then We claim that for all k > h 0 and t ≥ 0. Indeed, clearly μ 1 (t) ≤ N and μ k (0) = 0 for all k > h 0 .
For other values of k and t we have where in the above inequality we have used a bound on the probability that the (t + 1)th particle settles at level k, and (41) follows by a simple iteration. Now take t = T , k = h 0 + mT /N + 1 and recall that k! ≥ k k e −k to get for any β ∈ ((e/m) m , 1) and N large enough.
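The iteration can be checked numerically. We assume the recursion takes the form μ k+1 (t + 1) ≤ μ k+1 (t) + μ k (t)/N (an assumed form consistent with the surrounding argument: the next particle settles at level k + 1 with probability at most μ k (t)/N ). Iterating the corresponding equality from the trivial bound N at the base level gives exactly N C(t, j)/N j , which is at most N (t/N ) j /j! at level h 0 + j.

```python
from math import comb, factorial

N, jmax, tmax = 10, 6, 40

# iterate the assumed recursion mu_{k+1}(t+1) <= mu_{k+1}(t) + mu_k(t)/N at
# equality, with the base level bounded by N and zero initial condition above
b = [[float(N)] * (tmax + 1)] + [[0.0] * (tmax + 1) for _ in range(jmax)]
for j in range(1, jmax + 1):
    for t in range(tmax):
        b[j][t + 1] = b[j][t] + b[j - 1][t] / N

for j in range(1, jmax + 1):
    for t in range(tmax + 1):
        closed = N * comb(t, j) / N ** j                 # exact solution
        assert abs(b[j][t] - closed) <= 1e-9 * max(1.0, closed)
        assert b[j][t] <= N * (t / N) ** j / factorial(j) + 1e-9  # bound (41)
print(b[3][tmax])
```

Combining this bound with k! ≥ k^k e^{−k} at k = h 0 + mT /N + 1 yields the geometric decay claimed in Lemma C.1.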
Remark C.1 By the above result with h 0 = 0, it suffices to prove Theorem 5.1 for large T . Suppose, indeed, that there exists a finite constant b such that T ≤ bN log N . Since T /N ≤ b log N , it suffices to take a > b to have that the inner bound is trivially satisfied. For the outer bound we set c = (a + b)/3 and note that, as long as a ≥ 2b, it holds which can be made smaller than N −γ by taking a ≥ max{2b; 3γ /log(1/β) − b}. In light of this observation, from now on we can assume that T ≫ N log N as N → ∞.
The proof of Theorem 5.1 follows an iterative argument, which we now sketch.
Definition C.1 A point (x, y) with y ≥ 0 is said to be: Clearly, so we bound the probability of the right hand side. Note that the a priori bound in the previous section (cf. Lemma C.1) with m = 3 tells us that there exists an absolute constant β ∈ (0, 1) such that for N large enough. We use this to initialise the iteration, which consists of showing that, in turn, • no m-early point implies no ℓ-late point (ℓ ≪ m), and • no ℓ-late point implies no m′-early point (m′ ≪ ℓ).
To perform the above steps, we will use an explicit discrete harmonic function with a pole close to the early/late point to build a martingale, which will be then controlled via its quadratic variation.

C.2. No early points implies no late points
The main goal of this section is to prove the following.
for N large enough.
The above result tells us that on the event that there is no m-early point, there is no ℓ-late point with high probability, with ℓ = √(Cm log N ) ≪ m.
To prove this, we argue as follows. For ζ ∈ Z N × Z, let L(ζ ) = {ζ is ℓ-late}. Then, since It therefore suffices to show that for arbitrary ζ ∈ R T /N −ℓ . To see this, we use a discrete harmonic function to build a martingale with pole at ζ . We then show that on the event L(ζ ) this martingale is large and negative, while on the event E m [T ] c its quadratic variation is small. This will then imply that the probability that both L(ζ ) and E m [T ] c hold is small.
Recall that (A(t)) t≥0 denotes the IDLA process. For ζ = (ζ x , ζ y ) and z ∈ Z N ×Z + define H ζ (z) := P z (a SRW reaches level ζ y for the first time at ζ ), where SRW stands for simple random walk on Z N × Z. Then H ζ (z) ∈ [0, 1] for all z and H ζ (z) → 1/N as z y → −∞. Moreover, H ζ is discrete harmonic up to level ζ y (in fact, it is the discrete harmonic extension of the function 1(x = ζ x ) at level ζ y to the region {(x, y) : y ≤ ζ y }).
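H ζ can be computed numerically by solving the corresponding discrete Dirichlet problem on a truncated strip, imposing the limit value 1/N at the artificial bottom boundary (an approximation justified by the stated fact that H ζ (z) → 1/N as z y → −∞). A toy sketch:

```python
N, ytop, M = 6, 3, 24   # cylinder width, pole level, truncation depth
zeta_x = 0

# Dirichlet data: indicator of zeta on the top level; the limit value 1/N at
# the truncated bottom boundary
H = {(x, y): 0.0 for x in range(N) for y in range(-M, ytop + 1)}
for x in range(N):
    H[(x, ytop)] = 1.0 if x == zeta_x else 0.0
    H[(x, -M)] = 1.0 / N

for _ in range(6000):   # Gauss-Seidel sweeps: average over the four neighbours
    for y in range(-M + 1, ytop):
        for x in range(N):
            H[(x, y)] = (H[((x + 1) % N, y)] + H[((x - 1) % N, y)]
                         + H[(x, y + 1)] + H[(x, y - 1)]) / 4

print(H[(0, 0)], H[(3, 0)])   # hitting the level first at zeta is likeliest from directly below
```

The computed values lie in [0, 1], concentrate below the pole, and flatten out to 1/N a few multiples of N below the top level, matching the properties listed above.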
We embed the driving random walks in continuous time by means of time-changed Brownian motions on the lattice (see [7] for a precise definition), which we denote by {(B n (t)) t∈[0,1] , n ≥ 1}, so that B n (0) and B n (1) are the starting and settling location of the nth random walk respectively. This turns out to be technically convenient, since it makes our discrete time martingales into continuous time ones. Finally, for real t ∈ [0, ∞) we define Note that the first sum represents the contribution of the settled walkers, while the last term gives the contribution of the active one at time t. Since the function H ζ is discrete harmonic up to level ζ y , M ζ above is a continuous-time martingale up to the first time the cluster reaches level ζ y . We split it into a continuous part and a jump part, namely Let S ζ , S 1 ζ and S 2 ζ denote the quadratic variation of M ζ , M 1 ζ and M 2 ζ respectively. Since M 1 ζ is a continuous martingale starting from 0, it is a time-changed Brownian Motion, that is for B a one-dimensional Brownian motion. Moreover, since M 2 ζ is piecewise constant and by independence of the starting locations, we have Recall that we want to bound the quadratic variation of M ζ on the event that no point is m-early, which means that the cluster has a controlled height. The next lemma shows how a control on the shape of the cluster can be translated into a control of the quadratic variation of the martingale.
Lemma C.2 Assume that m ≫ 1 and that ζ y ≥ t/N + 2m + 1. Then Let us estimate the two factors separately. As a technical trick, we will assume that the driving random walks are released from a uniformly chosen location at level −N 2 , so that This clearly does not change the law of the process, and it has the technical advantage of transferring some amount of the quadratic variation of M ζ from S 2 ζ to S 1 ζ . We have: Note that the above bound does not depend on m, but only on the assumption (42). It remains to show that To this end, we use that on the event E m+1 [t] c the height of the cluster is controlled, which in turn implies that the quadratic variation of the continuous martingale part cannot be too large. Let us take t to be integer to simplify the writing. Then we find We claim that on the event E m+1 [t] c the martingale increments are bounded, that is for all s ∈ [n, n + 1) and some (non-random) a, b n . Indeed, Moreover, on the event E m+1 [t] c we have that for all n ≤ t the cluster A(n) is contained in R n/N +m+1 , from which where the last inequality follows from the explicit computation of H ζ (z) for large N , and it holds as long as m ≫ 1 (see [12] for details). By a standard argument on Brownian motion, this implies that where τ n (a, b) denotes the exit time from the interval (−a, b) for a one-dimensional Brownian motion starting from 0. We thus have that It is easy to check (cf. [7], Lemma 5) that E(e λτ (−a,b) ) ≤ 1 + 10λab provided √ λ(a + b) ≤ 3. For our choice of parameters λ = 4 and which concludes the proof.
We can now show that no early points implies no late points.
Proof of Proposition C.1 Take m = ℓ 2 /(C log N ) without loss of generality, with C as in the statement, and note that m ≥ ℓ since ℓ ≥ C log N by assumption. Let ζ = (ζ x , ζ y ) be such that ζ y ≤ T /N − ℓ, and set T 1 = N (ζ y + ℓ). Note that T 1 ≤ T . Recall that L(ζ ) denotes the event that ζ is ℓ-late, that is ζ ∉ A(T 1 ). As already observed, it will suffice to show that for arbitrary ζ ∈ R T /N −ℓ . On the event L(ζ ) we use the martingale M ζ with pole at ζ . We will show that both M ζ (T 1 ) is large and negative, and S ζ (T 1 ) is small, so that the probability of both happening simultaneously is tiny.
Let A ζ (t) denote the IDLA cluster at time t with points stopped upon reaching level ζ y , counted with multiplicity. We have On the event L(ζ ), no particle reaches ζ by time T 1 . Moreover, H ζ (z) = 0 for all z = ζ at level ζ y , while H ζ (z) > 0 if z y < ζ y . It follows that the sum defining M ζ (T 1 ) is maximised when as many particles as possible are below level ζ y , that is when the cluster is completely filled up to level ζ y − 1. Hence To show that, on the other hand, the quadratic variation is small, we use the following lemma.

Lemma C.3
Assume that m ≫ 1 and ℓ ≤ m. Fix any ζ , and let t = N (ζ y + ℓ). Then Using the above with t = T 1 and since T 1 ≤ e m we gather that In conclusion, for any s > 0 we have as long as m ≥ ℓ ≥ (γ + 5) log N and ℓ 2 /(4000m) ≥ (γ + 5) log N . This shows that Proposition C.1 holds with C ≥ 4000(γ + 5).
In order to conclude, it remains to prove the above lemma.
Proof of Lemma C.3 As in the proof of Lemma C.2, the factor involving S 2 ζ is bounded above by an absolute constant, so it suffices to show that, say, Note that ζ y = t/N − ℓ, so we cannot use Lemma C.2, which requires ζ y ≥ t/N + 2m + 1. On the other hand, note that if t 0 ≤ t then E m [t] c ⊆ E m [t 0 ] c . Moreover, for t 0 such that ζ y ≥ t 0 /N + 2m + 1 we can apply Lemma C.2. Take therefore t 0 = N (ζ y − 2m − 1) ∨ 0. Then Now, if t 0 = 0 then ζ y ≤ 2m + 1, so t/N = ζ y + ℓ ≤ 2m + 1 + ℓ ≤ 4m. Otherwise, (t − t 0 )/N = ℓ + 2m + 2 ≤ 4m. In all, as wanted.
This concludes the proof of Proposition C.1.

C.3. No late points implies no early points
We prove the following. On the event Q z,t take ζ = (ζ x , ζ y ) such that ζ x = z x and ζ y = z y + m + 1 = t/N + 2m + 1. Then for any s > 0
To finish, we show that Note that on the event Q z,t we have A(t) ⊆ R t/N +m+1 . Let B r (z) denote the Euclidean ball of radius r around z. Partition A(t) as follows: It remains to estimate the contribution to the martingale of points in A 2 , which we show to be large. To this end, observe that if i ≤ m and m ≤ k ≤ 2m + 1, then (with m ≫ 1) P 0 (a SRW reaches level k for the first time at (i, k)) This implies that H ζ (z) ≥ 1/(12π m) for z ∈ A 2 . Moreover, the next lemma shows that A 2 contains a positive proportion of points, and it identifies the absolute constant b in Proposition C.2. Proof This can be proven exactly as in the Z 2 case treated in [7], as the argument is completely local and m ≪ N .
This allows us to conclude that, on the event Q z,t , where the last inequality follows by recalling that ℓ = bm/(48π ) and that, by assumption, m 2 ≤ 3C N log N and m ≥ C′ log N , from which bm 2 /N ≤ 3bC log N ≤ bm/(24π ) as long as 3C ≤ C′/(24π ), which can always be ensured by taking C′ large enough, depending only on C.
In all, we conclude that where the last inequality holds as long as m ≥ 2 c 0 (γ + 5) log N , which can always be ensured at cost of increasing C , only depending on γ . This concludes the proof of Proposition C.2.

C.4. Iterative scheme
To finally prove Theorem 5.1 we iteratively apply Propositions C.1 and C.2, starting with m = 2T /N log N and iterating as long as m, ℓ ≥ a γ log N , with a γ = max{C, C′}. Assume that T ≫ N log N , otherwise the result holds by Remark C.1. We have from Lemma C.1 with m = 3 that where, for all fixed γ > 0, the last inequality holds for N large enough. Let m 0 = 3N log 2 N . For C, C′ as in Propositions C.1 and C.2 respectively, iteratively define Then, as long as m k , ℓ k ≥ a γ log N , we have by Propositions C.1 and C.2 respectively. Thus We want to iterate as much as possible, that is to take k as large as possible so that ℓ k , m k ≥ a γ log N . It is easy to check that if k > c 0 log log N , for a suitable absolute constant c 0 , then ℓ k < a γ log N , so we can iterate at most c 0 log log N times. In all, this shows that by taking ℓ = max{ℓ c 0 log log N ; a γ log N } and m = max{m c 0 log log N ; a γ log N } we have for N large enough, as wanted.