
Formation of an interface by competitive erosion

Published in Probability Theory and Related Fields.

Abstract

We introduce a graph-theoretic model of interface dynamics called competitive erosion. Each vertex of the graph is occupied by a particle that can be either red or blue. New red and blue particles alternately get emitted from their respective bases and perform random walk. On encountering a particle of the opposite color they kill it and occupy its position. We prove that on the cylinder graph (the product of a path and a cycle) an interface spontaneously forms between red and blue and is maintained in a predictable position with high probability.
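The dynamics just described can be sketched in a short simulation. This is a minimal illustration under our own choices (a small 10 × 11 cylinder, reflecting ends, and the names `emit`, `n`, `m` are ours), not the exact setup analyzed below. Each emission converts exactly one site of the opposite color, so the number of red sites is invariant over a red and blue pair of emissions:

```python
import random

# Minimal sketch of competitive erosion on a small cylinder C_n x {0,...,m}.
# Rows 0..m//2 start red (+1), the remaining rows blue (-1).
n, m = 10, 10
grid = [[1 if y <= m // 2 else -1 for y in range(m + 1)] for x in range(n)]

def emit(color, start_y):
    """Release a particle of `color` from a uniform site on row `start_y`.
    It performs simple random walk (periodic in x, reflecting in y) until
    it steps onto a site of the opposite color, which it converts."""
    x, y = random.randrange(n), start_y
    while True:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = (x + dx) % n, y + dy
        if ny < 0 or ny > m:        # reflect at the two boundary circles
            continue
        if grid[nx][ny] == -color:  # opposite color: kill and occupy
            grid[nx][ny] = color
            return
        x, y = nx, ny

random.seed(0)
reds_before = sum(row.count(1) for row in grid)
for _ in range(2000):               # alternate red (bottom) and blue (top)
    emit(1, 0)
    emit(-1, m)
reds_after = sum(row.count(1) for row in grid)
print(reds_before, reds_after)      # equal: each pair of emissions nets zero
```

Printing `grid` after many emissions shows red filling the bottom rows and blue the top, with an interface near the middle, in line with the predictable position established in the theorem.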


References

  1. Asselah, A., Gaudillière, A.: From logarithmic to subdiffusive polynomial fluctuations for internal DLA and related growth models. Ann. Probab. 41(3A), 1115–1159 (2013)

  2. Asselah, A., Gaudillière, A.: Lower bounds on fluctuations for internal DLA. Probab. Theory Relat. Fields 158(1–2), 39–53 (2014)

  3. Candellero, E., Ganguly, S., Hoffman, C., Levine, L.: Oil and water: a two-type internal aggregation model. arXiv preprint arXiv:1408.0776 (2014)

  4. Fayolle, G., Malyshev, V.A., Menshikov, M.V.: Topics in the Constructive Theory of Countable Markov Chains. Cambridge University Press, Cambridge (1995)

  5. Jerison, D., Levine, L., Sheffield, S.: Logarithmic fluctuations for internal DLA. J. Am. Math. Soc. 25(1), 271–301 (2012)

  6. Jerison, D., Levine, L., Sheffield, S.: Internal DLA and the Gaussian free field. Duke Math. J. 163(2), 267–308 (2014)

  7. Jerison, D., Levine, L., Sheffield, S.: Internal DLA for cylinders. In: Fefferman, C., Ionescu, A.D., Phong, D.H., Wainger, S. (eds.) Advances in Analysis: The Legacy of Elias M. Stein, pp. 189–214. Princeton University Press, Princeton (2014)

  8. Kozma, G., Schreiber, E.: An asymptotic expansion for the discrete harmonic potential. Electron. J. Probab. 9(1), 1–17 (2004)

  9. Lawler, G.F.: Intersections of Random Walks. Modern Birkhäuser Classics, Birkhäuser, Boston (2012)

  10. Lawler, G.F., Bramson, M., Griffeath, D.: Internal diffusion limited aggregation. Ann. Probab. 20(4), 2117–2140 (1992)

  11. Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence (2009)

  12. Levine, L., Peres, Y.: Strong spherical asymptotics for rotor-router aggregation and the divisible sandpile. Potential Anal. 30(1), 1–27 (2009)

  13. Levine, L., Peres, Y.: Scaling limits for internal aggregation models with multiple sources. J. Anal. Math. 111(1), 151–219 (2010)

  14. Stanley, R.: Promotion and evacuation. Electron. J. Comb. 16(2), R9 (2009)

  15. Timár, Á.: Boundary-connectivity via graph theory. Proc. Am. Math. Soc. 141(2), 475–480 (2013)


Acknowledgments

We thank Gerandy Brito and Matthew Junge for helpful comments. We also thank the anonymous referees for many useful comments and suggestions that helped improve the paper. The work was initiated when S.G. was an intern with the Theory Group at Microsoft Research, Redmond and a part of it was completed when L.L. and J.P. were visiting. They thank the group for its hospitality.

Author information

Correspondence to Shirshendu Ganguly.

Additional information

Supported by NSF Grant DMS-1243606 and a Sloan Fellowship.

Partially supported by NSF Grant DMS-1001905.

Appendices

Appendix 1: IDLA on the cylinder

The proof of Theorem 6 is obtained by adapting the argument of [10], which proceeds through a series of lemmas that we now state in our setting. Recall (74). Let \(\tau _{z}\) and \({\tilde{\tau }}_{kn}\) be the hitting times of \(\phi (z)\) and \(\phi (Y_{kn,n})\) respectively.

Lemma 17

For any \(z=(x,y)\in {\mathcal {C}}_n\) with \(y\le kn\)

$$\begin{aligned} kn^2{\mathbb {P}}_{Y_{0,n}}(\tau _{z}<{{\tilde{\tau }}}_{kn}) \ge \sum _{w\in C_n\times [0,kn)} {\mathbb {P}}_{\phi (w)}(\tau _{z}<{{\tilde{\tau }}}_{kn}) \end{aligned}$$

where \({\mathbb {P}}_{\phi (w)}\) and \({\mathbb {P}}_{Y_{0,n}}\) are the random walk measures on \({\mathcal {C}}_n\) with starting point \(\phi (w)\) and uniform over \(Y_{0,n}\) respectively.

Proof

By symmetry in the first coordinate, under \({\mathbb {P}}_{Y_{0,n}},\) for any j, the distribution of the random walk when it first hits the set \(Y_{j,n}\) is uniform over \(Y_{j,n}\). Hence by the Markov property the chance that the random walk hits \(\phi (z)\) before \(Y_{\phi (kn),n}\) after reaching the line \(Y_{\phi (j),n}\) is

$$\begin{aligned} \frac{1}{n}\sum _{w\in Y_{\phi (j),n}}{\mathbb {P}}_{w}(\tau _{z}<{{\tilde{\tau }}}_{kn}). \end{aligned}$$

Thus clearly for any \(j< kn\)

$$\begin{aligned} {\mathbb {P}}_{Y_{0,n}}( \tau _{z}<{{\tilde{\tau }}}_{kn})\ge \frac{1}{n}\sum _{w\in Y_{\phi (j),n}}{\mathbb {P}}_{w}(\tau _{z}< \tilde{\tau }_{kn}). \end{aligned}$$
(91)

The lemma follows by summing over j from 0 through \(kn-1\). \(\square \)

Lemma 18

Given positive numbers k and \({\epsilon }\) with \({\epsilon }< 1\), there exists \(\beta =\beta (k,{\epsilon })\) such that for all \(z=(x,y)\) with \(y\le (1-{\epsilon })kn\)

$$\begin{aligned} {\mathbb {P}}_{Y_{0,n}}(\tau _{z}<{{\tilde{\tau }}}_{kn})\ge \frac{\beta }{\ln n}. \end{aligned}$$

Proof

Since \(\phi (kn)-\phi (y)\ge {\epsilon }kn\) the semi-disc of radius \(\min ({\epsilon }k,1/2)n\) around z lies below the line \(Y_{\phi (kn),n}\). The random walk started from the uniform distribution on \(Y_{0,n}\) hits the interval \((z-\min ({\epsilon }k/2 ,1/4)n,z+\min ({\epsilon }k/2 ,1/4)n)\) with probability at least \(\min ({\epsilon }k,1/2)\). The lemma now follows from the standard result that a random walk started within distance n/2 of the origin has an \(\Omega (\frac{1}{\log n})\) chance of hitting the origin before exiting the ball of radius n in \({\mathbb {Z}}^2\); see [9, Proposition 1.6.7]. \(\square \)

The next result is the standard Azuma–Hoeffding inequality stated for sums of indicator variables.

Lemma 19

For any positive integer n, if \(X_i\), \(i=1,2,\ldots ,n\), are independent indicator variables, then

$$\begin{aligned} {\mathbb {P}}\left( \left| \sum _{i=1}^{n}X_i-\mu \right| \ge t\right) \le 2 e^{-\frac{t^2}{4n}} \end{aligned}$$

where \({\mu ={{\mathbb {E}}}\sum _{i=1}^{n}X_i}.\)
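As a quick numerical sanity check of the bound in Lemma 19 (a sketch with parameters of our own choosing: \(p=1/2\), \(n=100\), \(t=30\)), one can compare the empirical deviation probability with \(2e^{-t^2/4n}\):

```python
import math
import random

random.seed(1)
n, t, trials = 100, 30, 20000
bound = 2 * math.exp(-t * t / (4 * n))   # the bound of Lemma 19
mu = n * 0.5                             # E[sum X_i] for p = 1/2

exceed = sum(
    abs(sum(random.random() < 0.5 for _ in range(n)) - mu) >= t
    for _ in range(trials)
)
print(exceed / trials, "<=", round(bound, 3))
```

The empirical frequency is far below the bound here; the exponent \(t^2/4n\) is weaker than the sharpest Hoeffding constant, but it is all that is needed in the sequel.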

1.1 Hitting estimates

Consider the simple random walk \((Y(t))_{t \ge 0}\) on \({\mathbb {Z}}^2\).

Lemma 20

For \((x,y) \in {\mathbb {Z}}^2\), let \(h(x,y) = {{\mathbb {P}}}_{(x,y)} \{ Y(\tau ({\mathbb {Z}}\times \{0\})) = (0,0) \}\) be the probability of first hitting the x-axis at the origin. Then

$$\begin{aligned} h(x,y) = \frac{y}{\pi (x^2+y^2)} + O\left( \frac{1}{x^2+y^2} \right) . \end{aligned}$$

Proof

Let

$$\begin{aligned} {{\widetilde{h}}}(x,y) = {\left\{ \begin{array}{ll} h(x,y), &{} \quad y>0 \\ 0, &{} \quad y=0 \\ -h(x,y), &{} \quad y<0. \end{array}\right. } \end{aligned}$$

The discrete Laplacian

$$\begin{aligned} \Delta {{\widetilde{h}}}(x,y) = {{\widetilde{h}}}(x,y) - \frac{{{\widetilde{h}}}(x+1,y)+ {{\widetilde{h}}}(x-1,y)+ {{\widetilde{h}}}(x,y+1) + {{\widetilde{h}}}(x,y-1)}{4} \end{aligned}$$

vanishes except when \((x,y) = (0,\pm 1)\), and \(\Delta \widetilde{h}(0,\pm 1) = \pm \frac{1}{4}\). Since \({{\widetilde{h}}}\) vanishes at \(\infty \) it follows that

$$\begin{aligned} {{\widetilde{h}}}(x,y) = \frac{a(x,y+1) - a(x,y-1)}{4} \end{aligned}$$
(92)

where

$$\begin{aligned} a(x,y) = \frac{1}{\pi } \log (x^2+y^2) + \kappa + O\left( \frac{1}{x^2+y^2}\right) \end{aligned}$$

is the recurrent potential kernel for \({\mathbb {Z}}^2\) (see [8]). Here \(\kappa \) is a constant whose value is irrelevant because it cancels in the difference (92). \(\square \)
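Lemma 20 can be checked numerically by solving the discrete Dirichlet problem on a truncated box. The sketch below (with our own truncation radius R = 60 and plain Jacobi iteration) compares the computed hitting probability h(3, 4) with the leading term \(y/\pi (x^2+y^2)\):

```python
import numpy as np

R = 60                            # truncation radius (our choice)
h = np.zeros((2 * R + 1, R + 1))  # h[x + R, y] for x in [-R, R], y in [0, R]
h[R, 0] = 1.0                     # boundary value: hit the x-axis at the origin

# Jacobi iteration for the discrete harmonic equation in the interior;
# the axis y = 0 and the far sides of the box stay fixed (absorbing).
for _ in range(20000):
    h[1:-1, 1:-1] = 0.25 * (
        h[2:, 1:-1] + h[:-2, 1:-1] + h[1:-1, 2:] + h[1:-1, :-2]
    )

x, y = 3, 4
leading = y / (np.pi * (x**2 + y**2))   # leading term of Lemma 20
print(float(h[x + R, y]), leading)
```

The two printed numbers agree up to the \(O(1/(x^2+y^2))\) correction; the truncation at radius R introduces a further small error.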

Let \(X(\cdot )\) be the simple symmetric random walk on the half-infinite cylinder \({\mathcal {C}}_n=C_n \times {\mathbb {Z}}_{\ge 0}.\)

Lemma 21

For any positive integers \(j<k\), with \(\Delta =k-j<n \):

\(\mathrm{i.}\) for any \(w \in Y_{k,n}\),

$$\begin{aligned} {\mathbb {P}}_{w}(\tau (j)<\tau ^{+}(k))<\frac{1}{\Delta } \end{aligned}$$

where \(\tau (j)\) and \(\tau ^{+}(k)\) are the hitting and positive hitting times of \(Y_{j,n}\) and \(Y_{k,n}\) respectively for \(X(\cdot )\).

\(\mathrm{ii.}\) there exists a constant J such that for \(w\in Y_{j,n}\) and any subset \(B\subset Y_{k,n}\),

$$\begin{aligned} {\mathbb {P}}_w(X({\tau (k)})\in B)<J|B|/\Delta . \end{aligned}$$

Proof

Part i. is the standard gambler's ruin estimate for one-dimensional random walk: starting from 1, the probability of hitting \(\Delta \) before 0 is \(\frac{1}{\Delta }.\)

Now we prove ii. Clearly it suffices to prove it in the case when B consists of a single element. Notice that, if \(Y(t)=(Y_1(t),Y_2(t))\) is the simple random walk on \({\mathbb {Z}}^2\), then

$$\begin{aligned} X(t)=(Y_1(t)\text { mod } n,|Y_2(t)|) \end{aligned}$$
(93)

is distributed as the simple random walk on \({\mathcal {C}}_n.\) For any \(\ell \in {\mathbb {Z}}\) let \(\tau _1(\ell )\) be the hitting time of the line \(y=\ell ,\) for Y(t). Clearly by (93) for \(w=(0,j),z=(z_1,k)\in {\mathcal {C}}_n,\)

$$\begin{aligned}&{\mathbb {P}}_w(X({\tau (k)})=z)\\&\quad =\sum _{i=-\infty }^{\infty }P_{(0,j)}\left\{ Y(\tau _1(k)\wedge \tau _1(-k))\in \{(z_1+in,k),(z_1+in,-k)\}\right\} . \end{aligned}$$

By union bound the RHS is at most

$$\begin{aligned} \sum _{i=-\infty }^{\infty }P_{(0,j)}\left\{ Y(\tau _1(k))=(z_1+in,k)\right\} +\sum _{i=-\infty }^{\infty }P_{(0,j)}\left\{ Y(\tau _1(-k))=(z_1+in,-k)\right\} . \end{aligned}$$

Using the notation in Lemma 20 we can write the above sum as

$$\begin{aligned} \sum _{i=-\infty }^{\infty }[h(-z_1+in,\Delta )+h(-z_1+in,k+j)]. \end{aligned}$$

By Lemma 20 the above sum is

$$\begin{aligned} O\left( \frac{1}{\Delta }\right) +O \left( \frac{1}{k+j}\right) =O\left( \frac{1}{\Delta }\right) . \end{aligned}$$

Hence we are done. \(\square \)
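The gambler's ruin fact used for part i. is easy to confirm by simulation (a sketch with \(\Delta =10\), our own choice):

```python
import random

random.seed(2)
Delta, trials = 10, 50000
hits = 0
for _ in range(trials):
    pos = 1                      # start at 1
    while 0 < pos < Delta:
        pos += random.choice((-1, 1))
    hits += (pos == Delta)       # hit Delta before returning to 0
print(hits / trials)             # close to 1/Delta = 0.1
```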

1.2 Proof of Theorem 6

Equipped with the results of the previous subsection, the proof of Theorem 6 is completed by following the steps in [10].

Lower bound It suffices to show that with high probability \(C_n\times [0,(1-\epsilon )kn]\subset A({(1+\epsilon )kn^2})\). Fix \(z\in C_n\times [0,(1-\epsilon )kn]\). For any positive integer i we associate the following stopping times with the \(i\)th walker:

  • \(\sigma ^i\): the stopping time in the IDLA process

  • \(\tau ^{i}_z\): the hitting time of \(\phi (z)\)

  • \(\tau ^{i}_{kn,n}\): the hitting time of the set \(\phi (Y_{kn,n}).\)

Now we define the random variables

$$\begin{aligned} N= & {} \sum _{i=1}^{{(1+\epsilon )kn^2}}{\mathbf {1}}_{(\tau ^{i}_z<\sigma ^{i})},\hbox { the number of particles that visit }\phi (z)\hbox { before stopping}\\ M= & {} \sum _{i=1}^{{(1+\epsilon )kn^2}}{\mathbf {1}}_{(\tau ^{i}_z<\tau ^{i}_{kn,n})},\nonumber \\&\hbox { the number of particles that visit }\phi (z)\hbox { before reaching }\phi (Y_{kn,n})\\ L= & {} \sum _{i=1}^{{(1+\epsilon )kn^2}}{\mathbf {1}}_{(\sigma ^{i}<\tau ^{i}_z <\tau ^{i}_{kn,n})} ,\hbox { the number of particles that visit }\phi (z)\nonumber \\&\hbox { before reaching }\phi (Y_{kn,n}) \hbox { but after stopping}. \end{aligned}$$

Thus

$$\begin{aligned} N\ge M-L. \end{aligned}$$

Hence

$$\begin{aligned} {\mathbb {P}}\Bigl (z\notin A((1+\epsilon )kn^2)\Bigr )={\mathbb {P}}(N=0)\le {\mathbb {P}}(M<a)+{\mathbb {P}}(L>a). \end{aligned}$$
(94)

where the last inequality holds for any a. Now by definition

$$\begin{aligned} {{\mathbb {E}}}(M)=(1+\epsilon )kn^2\,{\mathbb {P}}_{Y_{0,n}}(\tau _{z}<\tau _{kn,n}). \end{aligned}$$
(95)

We now bound the expectation of L. Let independent random walks start from each \(w\in {\phi (C_n\times [0,kn])}\) and let

$$\begin{aligned} {\tilde{L}}=\sum _{w\in \phi (C_n\times [0,kn])}{\mathbf {1}}{(\tau _{z}<\tau _{kn,n} \hbox { for the walker starting at } w)}. \end{aligned}$$

Clearly \(L\le {\tilde{L}}.\) Hence the RHS of (94) can be upper bounded by \({\mathbb {P}}(M<a)+{\mathbb {P}}({\tilde{L}}>a)\). Now

$$\begin{aligned} {{\mathbb {E}}}({\tilde{L}})= \sum _{w\in \phi (C_n\times [0,kn])} {\mathbb {P}}_w(\tau _{z}<\tau _{kn,n}). \end{aligned}$$

Also by Lemma 17 and (95)

$$\begin{aligned} \left( 1+\frac{{\epsilon }}{2}\right) {{\mathbb {E}}}({\tilde{L}})\le {{\mathbb {E}}}(M). \end{aligned}$$

Choose \(a=(1+\epsilon /4)\max \bigl (\frac{\beta kn^2}{\ln n},{{\mathbb {E}}}({\tilde{L}})\bigr )\) where \(\beta \) is as in Lemma 18. Now using Lemma 19 we get

$$\begin{aligned} {\mathbb {P}}({\tilde{L}}>a)\le & {} \exp (-dn),\\ {\mathbb {P}}(M<a)\le & {} \exp (-dn), \end{aligned}$$

for some constant \(d=d({\epsilon },k)>0.\) Thus in (94) we get

$$\begin{aligned} {\mathbb {P}}(M<a)+{\mathbb {P}}(L>a)\le 2\exp (-dn). \end{aligned}$$

The proof of the lower bound now follows by taking the union bound:

$$\begin{aligned} {\mathbb {P}}\Bigl (C_n\times [0,(1-\epsilon )kn] \not \subset A((1+\epsilon )kn^2)\Bigr )\le & {} \sum _{z\in C_n\times [0,(1-\epsilon )kn]} {\mathbb {P}}\Bigl (z\notin A((1+\epsilon )kn^2)\Bigr )\\\le & {} \sum _{z\in C_n\times [0,(1-\epsilon )kn]}2\exp (-dn)\\\le & {} \exp (-cn), \end{aligned}$$

where the last inequality holds for large enough n when c is smaller than d.

Upper bound In [10] the upper bound is proven by showing that the growth of the cluster above level \((1+\epsilon )kn\) is dominated by a multitype branching process. Here we slightly modify the proof to account for the fact that in our situation the initial cluster is not empty. We first define some notation. Denote by \(w_1,w_2,\ldots \) the particles that make it past the level \(\phi (Y_{kn,n})\), and define

$$\begin{aligned} {\tilde{A}}(j)=A(w_j). \end{aligned}$$

Choose \(k_0=k(1+\sqrt{{\epsilon }})n.\) We define

$$\begin{aligned} {\tilde{Y}}_{\ell ,n}:=Y_{k_0+\ell ,n}. \end{aligned}$$
(96)

Given the above notation let

$$\begin{aligned} Z_{\ell }(j):={\tilde{A}}(j)\cap {\tilde{Y}}_{\ell ,n}. \end{aligned}$$

Define

$$\begin{aligned} \mu _{\ell }(j):={{\mathbb {E}}}(Z_{\ell }(j)). \end{aligned}$$

Lemma 22

[10, Lemma 7] There exists a universal \(J_1>0\) such that for all \(k, {\epsilon }\in (0,1),\) \(n\ge N(k,{\epsilon })\) and all positive integers \(j,\ell \)

$$\begin{aligned} \mu _{\ell }(j)<kn\left( J_1 \frac{j}{\ell } \frac{1}{\sqrt{\epsilon } kn}\right) ^{\ell }. \end{aligned}$$

We include the proof of Lemma 22 for completeness, but first we show how it implies the upper bound in Theorem 6. Define the event

$$\begin{aligned} F:=\bigl \{C_n\times [0,(1-\epsilon )kn]\subset A(kn^2)\bigr \}. \end{aligned}$$

Now let \(B=B(k)>0\) be a constant to be specified later. Then

$$\begin{aligned} {\mathbb {P}}\bigl (\{A(kn^2)\not \subset C_n\times [0,k(1+B\sqrt{\epsilon })n]\}\cap F\bigr )\le & {} {\mathbb {P}}(Z_{n' }(2k \epsilon n^2)\ge 1)\\\le & {} \mu _{n'}(2k\epsilon n^2) \end{aligned}$$

where \(n'=kB\sqrt{\epsilon }n-1-k\sqrt{{\epsilon }} n.\) To see why these inequalities hold, first note that the set \({\tilde{Y}}_{n',n}\) is at height less than \(k(1+B\sqrt{\epsilon })n.\) Hence the cluster at time \(kn^2\) must intersect \({\tilde{Y}}_{n',n}\) in order to grow beyond height \(k(1+B\sqrt{\epsilon })n.\) However on the event F at most \(2{\epsilon }k n^2\) particles out of the first \(kn^2\) move beyond height kn. Hence the size of the intersection of \({\tilde{Y}}_{n',n}\) with the cluster is at most \(Z_{n' }(2k \epsilon n^2).\) This gives the first inequality. The second inequality follows from the fact that for a non-negative integer-valued random variable the expectation is at least the probability of the variable being positive. Using Lemma 22 we get

$$\begin{aligned} \mu _{n'}(2k\epsilon n^2)\le & {} kn\left( J_1 \frac{2k{\epsilon }n^2}{n'} \frac{1}{\sqrt{\epsilon } kn}\right) ^{n'}\\= & {} kn\left( J_1 \frac{4k\epsilon n^2}{k(B-1)\sqrt{\epsilon } n} \frac{1}{\sqrt{\epsilon } kn}\right) ^{k(B-1)\sqrt{\epsilon } n}\\= & {} kn\left( \frac{4J_1}{(B-1)k} \right) ^{k(B-1)\sqrt{\epsilon } n}. \end{aligned}$$

Thus

$$\begin{aligned} {\mathbb {P}}\bigl (\{A(kn^2)\not \subset C_n\times [0,k(1+B\sqrt{\epsilon })n]\}\cap F\bigr )\le kn\left( \frac{4 J_1}{(B-1)k}\right) ^{k(B-1)\sqrt{\epsilon }n}. \end{aligned}$$

Now by choosing B such that \(4J_1 <(B-1)k\) we are done. \(\square \)

Proof of Lemma 22

The rate at which \(Z_{\ell }\) grows is at most the rate at which particles exiting height kn reach the occupied sites in \({\tilde{Y}}_{\ell -1,n}\). Thus if X(t) is the random walk on \({\mathcal {C}}_n\) defined in (73), then for any m

$$\begin{aligned} \mu _{\ell }(m+1)-\mu _{\ell }(m)\le & {} \sup _{y\in Y_{kn,n}} {\mathbb {P}}_{y}\Bigl [X(\tau _{{\tilde{Y}}_{\ell -1,n}})\in {\tilde{A}}(m)\Bigr ]\end{aligned}$$
(97)
$$\begin{aligned}\le & {} J \frac{\mu _{\ell -1}(m)}{kn\sqrt{{\epsilon }}} \end{aligned}$$
(98)

where the second inequality follows by Lemma 21 ii. Summing over \(m=0,1,\ldots ,j-1\) we get

$$\begin{aligned} \mu _{\ell }(j)\le \frac{J}{kn\sqrt{{\epsilon }}}\sum _{m=0}^{j-1} \mu _{\ell -1}(m). \end{aligned}$$

Iterating the above relation in \(\ell \) with fixed j gives us

$$\begin{aligned}\mu _{\ell }(j)\le \left( J\frac{1}{\sqrt{{\epsilon }} {kn}}\right) ^{\ell -1} \frac{j^{\ell }}{\ell !}. \end{aligned}$$

The lemma follows by using the inequality

$$\begin{aligned} \ell !\ge \ell ^\ell e^{-\ell }. \end{aligned}$$

\(\square \)
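The elementary Stirling-type bound \(\ell !\ge \ell ^\ell e^{-\ell }\) used in the last step can be checked directly for small \(\ell \):

```python
import math

# the Stirling-type lower bound used above: l! >= l^l * e^{-l}
for l in range(1, 30):
    assert math.factorial(l) >= l**l * math.exp(-l)
print("inequality holds for l = 1, ..., 29")
```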

Appendix 2: Green’s function and flows

We prove Lemma 6. We start by discussing some properties of the ordinary random walk on \(\mathrm {Cyl}_n\) [defined in (8)]. For any \(v \in \mathrm {Cyl}_n\) define

$$\begin{aligned} G_{n}(v)=\frac{1}{4n} {{\mathbb {E}}}_{v} \bigl [\# \hbox { visits to the line } y=0 \hbox { before } \tau (C_{n} \times \{1\}) \bigr ]. \end{aligned}$$
(99)

Lemma 23

For any point \((x,y)\in \mathrm {Cyl}_n\)

$$\begin{aligned} G_{n}(x,y)=1-y. \end{aligned}$$

Proof

Consider the lazy symmetric random walk on the interval [0, n] where at 0 the chance that it moves to 1 is \(\frac{1}{4}\) and everywhere else the chance that it jumps is \(\frac{1}{2}.\) By the symmetry of \(\mathrm {Cyl}_n\) in the x-coordinate it is clear that for all \((x,y)\in \mathrm {Cyl}_n,\) \(4n G_{n}(x,y)\) is the expected number of times that the above one-dimensional random walk starting from ny visits 0 before hitting n. This quantity is easy to compute and equals \(4n(1-y)\). \(\square \)
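Lemma 23 admits a direct Monte Carlo check via the one-dimensional lazy walk used in the proof (a sketch with n = 10 and starting height k = 5, i.e. y = 1/2, both our own choices); the expected number of time steps spent at 0 before hitting n should be close to \(4n(1-y)=20\):

```python
import random

random.seed(3)
n, k, trials = 10, 5, 4000
total = 0
for _ in range(trials):
    pos, visits = k, 0
    while pos < n:
        visits += (pos == 0)      # count time steps spent at 0
        r = random.random()
        if pos == 0:
            if r < 0.25:          # at 0: move to 1 w.p. 1/4, else stay
                pos = 1
        elif r < 0.25:            # elsewhere: jump w.p. 1/2 in total
            pos += 1
        elif r < 0.5:
            pos -= 1
    total += visits
print(total / trials)             # close to 4 * (n - k) = 20
```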

Remark 5

Thus for any \(\sigma \in \Omega \cup \Omega '_{n}\)

$$\begin{aligned} h(\sigma )=\sum _{(x,y)\in B_1}G_n(x,y) \end{aligned}$$
(100)

where \(h(\cdot )\) is defined in (14).

We now define the stopped Green’s function. For any \(A\subset \mathrm {Cyl}_n\) and \(v\in \mathrm {Cyl}_n\) define

$$\begin{aligned} G_{A}(v)= \frac{1}{4n}{{\mathbb {E}}}_{v} \bigl [\# \hbox { visits to the line } y=0 \hbox { before } \tau (A^c) \bigr ]. \end{aligned}$$
(101)

Lemma 24

Given \(A \subset \mathrm {Cyl}_n\) such that \(A\cap (C_n \times \{1\})=\emptyset ,\) for all \((x,y)\) in \(\mathrm {Cyl}_n\) we have

$$\begin{aligned} {G}_{A}(x,y)={H}_{A}(x,y)-y \end{aligned}$$

where \(H_{A}(\cdot )\) was defined in (27).

Proof

Let \(y_{t}\) be the height of the walk at time \(t\le {\tau }(A^c)\). Consider the following telescoping sum:

$$\begin{aligned} y_{\tau (A^c)}-y_{0}= \sum _{t=0}^{\infty }(y_{t+1}-y_{t}){\mathbf {1}}(\tau (A^c)>t). \end{aligned}$$
(102)

Notice that since \(A\cap (C_n \times \{1\}) =\emptyset \), \(t<\tau (A^c)\) implies \(y_t<1\). We make the following simple observation:

$$\begin{aligned} {{\mathbb {E}}}\bigl [(y_{t+1}-y_{t}){\mathbf {1}} ({\tau }(A^c)>t)|{\mathcal {F}}_t \bigr ]= & {} \left\{ \begin{array}{ll} \frac{1}{4n} {\mathbf {1}}({\tau }({A^c})>t) &{}\quad y_t=0 \\ 0 &{}\quad y_t>0 \end{array} \right. \end{aligned}$$
(103)

where \({\mathcal {F}}_t\) is the filtration generated by the random walk up to time t. Taking expectations on both sides of (102), we get

$$\begin{aligned} {{\mathbb {E}}}_{(x,y)}[y_{{\tau }({A^c})}]-y =\frac{1}{4n}{{\mathbb {E}}}_{(x,y)}\sum _{t=0}^{\infty } {\mathbf {1}}({y_t=0}){\mathbf {1}}({\tau }({A^c})>t)=G_{A}(x,y) \end{aligned}$$

and hence we are done. \(\square \)

Remark 6

Note that the above lemma implies for any \((x,y)\,\in \,\mathrm {Cyl}_n\),

$$\begin{aligned} {{\mathbb {E}}}_{(x,y)}[y_{{\tau }({A^c})}]\ge y \end{aligned}$$

since the Green’s function is a non-negative quantity.

Next we relate the Green's function to the solution of a variational problem. The results are classical even though our setup is slightly different; we include the proofs for completeness. As defined in Sect. 4.1 let \(\vec {E}\) denote the set of directed edges of \(\mathrm {Cyl}_n.\)

For any function \(F: \mathrm {Cyl}_n \rightarrow {\mathbb {R}}\) define the gradient \(\nabla F : \vec {E} \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \nabla F (v,w) = F(w)-F(v) \end{aligned}$$

and the discrete Laplacian \(\Delta F : \mathrm {Cyl}_n \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \Delta F (v)=F(v) - \frac{1}{4}\sum _{w\sim v}F(w). \end{aligned}$$
(104)

Note that the graph \(\mathrm {Cyl}_n\) is 4-regular.

Recall the definition of energy from Sect. 4.1. The next result is a standard summation-by-parts formula.

Lemma 25

For any function \(F: \mathrm {Cyl}_n\rightarrow {\mathbb {R}}\)

$$\begin{aligned} {\mathcal {E}}(\nabla F)=4\sum _{v\in \mathrm {Cyl}_n}F(v)\Delta F(v). \end{aligned}$$

The proof follows by definition and expanding the terms.

For a subset \(A \subset \mathrm {Cyl}_n\) recalling the definition of stopped Green’s function let

$$\begin{aligned} f_A:=\nabla G_{A}. \end{aligned}$$
(105)

Also recall the definition of divergence (30).

Lemma 26

For any \((x,y)\in A\)

$$\begin{aligned} \mathop {\mathrm {div}}(f_A)(x,y)= \left\{ \begin{array}{ll} \frac{1}{n} &{}\quad \hbox {if }y=0, \\ 0 &{}\quad \hbox {otherwise}. \end{array} \right. \end{aligned}$$

Proof

For any \(v=(x,y)\in A\) by definition

$$\begin{aligned} \mathop {\mathrm {div}}(f_A)(v)=4\Delta G_{A}(v)=4G_{A}(v)-\sum _{w \sim v}G_{A}(w)= \left\{ \begin{array}{ll} \frac{1}{n} &{}\quad \hbox {if }y=0, \\ 0 &{}\quad \hbox {otherwise}. \end{array} \right. \end{aligned}$$
(106)

The last equality follows from the definition of \(G_{A}\) in (101) by considering the first step of the random walk started from v. \(\square \)

We now prove that the random walk flow \(f_{A}\) on a set A is the flow with minimal energy.

Lemma 27

$$\begin{aligned} {\mathcal {E}}(f_A)=\inf _{f}{\mathcal {E}}(f) \end{aligned}$$

where the infimum is taken over all flows from \(\left( C_n \times \{0\}\right) \bigcap A\) to \(A^c\) such that for \((x,y)\in A\)

$$\begin{aligned} \mathop {\mathrm {div}}(f)(x,y)=\left\{ \begin{array}{ll} \frac{1}{n} &{}\quad \hbox {if }y=0, \\ 0 &{}\quad \hbox {otherwise.} \end{array} \right. \end{aligned}$$

Proof

The proof follows by standard arguments, see [11, Theorem 9.10]. We sketch the main steps. One begins by observing that the flow \(f_{A}\) satisfies the cycle law, i.e. the sum of the flow along any cycle is 0. To see this notice that for any cycle

$$\begin{aligned} x_1,x_2,\ldots ,x_k=x_1 \end{aligned}$$

where \(x_{i}\in \mathrm {Cyl}_n,\)

$$\begin{aligned} \sum _{i=1}^{k-1}f_{A}(x_{i},x_{i+1})=\sum _{i=1}^{k-1}(G_{A}(x_{i+1})-G_{A}(x_{i}))=0. \end{aligned}$$

The proof is then completed by first showing that the flow with the minimum energy must satisfy the cycle law, and then showing that there is a unique flow satisfying the given divergence conditions and the cycle law. \(\square \)
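The key point, that a gradient flow satisfies the cycle law and hence minimizes energy among flows with the same divergence, can be illustrated on a single cycle (a toy sketch; the potential values are arbitrary). Adding any circulation c around the cycle increases the energy by \(c^2\) times the cycle length, since the cross term telescopes to zero:

```python
# Toy illustration: a potential G on the 4-cycle v0-v1-v2-v3-v0 (values arbitrary).
G = [0.0, 1.0, 3.0, 2.0]
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]

def energy(flow):
    return sum(f * f for f in flow)

grad = [G[w] - G[v] for v, w in cycle]   # gradient flow along the cycle edges
assert abs(sum(grad)) < 1e-12            # cycle law: the flow sums to 0

# adding a circulation c changes the energy by 4c^2 >= 0 (cross term telescopes)
for c in (-1.0, -0.3, 0.5, 2.0):
    perturbed = [f + c for f in grad]
    assert energy(perturbed) >= energy(grad)
print(energy(grad))
```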

Now suppose \(A\subset \mathrm {Cyl}_{n}{\setminus } \left( C_{n}\times \{1\}\right) \). Then

$$\begin{aligned} {\mathcal {E}}({f}_{A}) = {\mathcal {E}}(\nabla {G}_{A}) =\sum _{v \in A}{G}_{A}(v)4\Delta {G}_{A}(v)=\frac{1}{n}\sum _{k \in C_n}{G}_{A}(k,0)=\frac{1}{n}\sum _{k \in C_n}H_{A}(k,0). \end{aligned}$$
(107)

The first equality is by definition. The second equality follows from Lemma 25 and the fact that \(G_{A}\) is 0 outside A. The third equality is by (106). The last equality is by Lemma 24, since by hypothesis \(A \cap (C_{n}\times \{1\})=\emptyset \).

Proof of Lemma 6

The proof now follows from (107) and Lemma 27. \(\square \)

Appendix 3: Proof of Lemma 10

We first prove i. Looking at the process \(\omega (t)\) started from \(\omega (0)=\omega \) we see by (70) that the process

$$\begin{aligned} Z_t=X_{t\wedge \tau (B)}-a_1[t\wedge \tau (B)] \end{aligned}$$

with \(X_0=g(\omega )\) is a submartingale with respect to the filtration \({\mathcal {F}}_t\). Also by hypothesis \(|Z_{t+1}-Z_{t}|\le 2A_2.\)

Now by the standard Azuma–Hoeffding inequality for submartingales, for any time \(t>0\) such that \(a_2-a_1t<0\) we have

$$\begin{aligned} {\mathbb {P}}\left( Z_{t}-Z_0\le -(a_1t-a_2)\right) <e^{-\frac{{(a_1t-a_2)}^2}{4A_2^2t}}. \end{aligned}$$
(108)

Let T be as in the hypothesis of the lemma. We observe that the event \(\{X_{0} \ge A_1-a_2\} \cap \{ \tau (B)>T\}\) implies that

$$\begin{aligned} Z_{T}-Z_0\le -(a_1T-a_2). \end{aligned}$$

This is because by hypothesis \(Z_0=X_0\ge A_1-a_2\). Hence on the event \(\tau (B)>T\)

$$\begin{aligned} Z_{T}-Z_0=X_T-a_1 T-X_0< a_2-a_1T \end{aligned}$$

since \(X_T\le A_1\) by (68). Thus by (108)

$$\begin{aligned} {\mathbb {P}}(\tau (B)>T\mid X_{0}\ge A_1-a_2)\le e^{-\frac{{(a_1T-a_2)}^2}{4A_2^2T}}. \end{aligned}$$

To prove ii. let \(\omega _0=\omega (\tau (B^c)).\) By hypothesis

$$\begin{aligned} x:=g(\omega _0)\ge A_1-a_4-A_2, \end{aligned}$$

since by (69) the process cannot jump by more than \(A_2.\) Clearly it suffices to show

$$\begin{aligned} {\mathbb {P}}_{\omega _0}(\tau (B') \ge T') \ge 1-e^{-\frac{a_4^2 }{32A_2^2T}}-e^{-\frac{a_1^2 T}{32A_2^2}}. \end{aligned}$$

Now consider the submartingale

$$\begin{aligned} W_t=X_{t\wedge \tau '\wedge \tau ''}-a_1[{t\wedge \tau '\wedge \tau ''}] \end{aligned}$$

with \(W_0=x\), where \(\tau '=\tau (B)\) and \(\tau ''=\tau (B')\). We first claim that

$$\begin{aligned} {\mathbb {P}}_{\omega _0}(\tau '\wedge \tau ''>T)<e^{-\frac{a_1^2 T^2}{16A_2^2T}}. \end{aligned}$$
(109)

To see this notice that by the Azuma-Hoeffding inequality it follows that

$$\begin{aligned} {\mathbb {P}}(W_{T}-W_{0}< -a_1 T/2 )<e^{-\frac{a_1^2 T^2}{16A_2^2T}}. \end{aligned}$$
(110)

On the other hand the event \(\tau '\wedge \tau ''> T\) implies

$$\begin{aligned} W_{T}&\le A_1 -a_4-a_1T\\ W_{0}&\ge A_1-a_4-A_2. \end{aligned}$$

Thus the event \(\tau '\wedge \tau ''> T\) implies

$$\begin{aligned} W_{T}-W_{0}< -\frac{a_1 T}{2}, \end{aligned}$$

since by hypothesis \(T> \frac{2A_2}{a_1}\). Now (109) follows from (110).

Now on the event \(\{\tau '\wedge \tau '' \le T\}\cap \{\tau ''< \tau '\}\),

$$\begin{aligned} W_{T}< & {} A_1-2a_4\\ W_{0}\ge & {} A_1-a_4-A_2. \end{aligned}$$

Hence

$$\begin{aligned} \{\tau '\wedge \tau '' \le T\}\cap \{\tau ''< \tau '\}\implies W_{T}-W_{0}\le -\frac{a_4}{2} \end{aligned}$$

since by hypothesis \(a_4> 2A_2\). Thus, by the Azuma–Hoeffding inequality as in (110), we have

$$\begin{aligned} {\mathbb {P}}\left( \{\tau '\wedge \tau '' \le T\} \bigcap \{\tau ''\le \tau '\} \right) \le e^{-\frac{a_4^2 }{16A_2^2T}}. \end{aligned}$$

Observe that

$$\begin{aligned} {\mathbb {P}}(\tau '\le \tau '')\ge {\mathbb {P}}(\tau '\wedge \tau '' \le T)-{\mathbb {P}}\left( \{\tau '\wedge \tau ''\le T\} \bigcap \{\tau ''\le \tau '\} \right) . \end{aligned}$$

This together with (109) implies that \(\tau ''\) stochastically dominates a geometric variable with success probability at most

$$\begin{aligned} e^{-\frac{a_4^2 }{16A_2^2T}}+e^{-\frac{a_1^2 T}{16A_2^2}}. \end{aligned}$$

Thus we are done. \(\square \)

Future directions and related models

Fluctuations This article establishes that competitive erosion on the cylinder forms a macroscopic interface quickly. A natural next step is to find the order of magnitude of its fluctuations. Theorem 1 only shows that the fluctuations are o(n).

Fig. 11

A variant of competitive erosion on \({\mathbb {Z}}^2\) with \(\mu _1 = \mu _2 = \delta _0\). From left to right, the set of all converted sites after \(n= 10^8, 10^9, 10^{10}\) particles have been released. Most red particles convert blue sites and vice versa, so that a relatively small number of sites are converted

Randomly evolving interfaces Competitive erosion on the cylinder models a random interface fluctuating around a fixed line. It can also model a moving interface if the measures \(\mu _1\) and \(\mu _2\) are allowed to depend on time.

Another model of a randomly evolving interface arises in the case of fixed but equal measures \(\mu _1 = \mu _2\). Figure 11 shows a variant of competitive erosion in the square grid \({\mathbb {Z}}^2\). Initially all sites are colored white. Red and blue particles are alternately released from the origin. Each particle performs random walk until reaching a site in \({\mathbb {Z}}^2 -\{(0,0)\}\) colored differently from itself, and converts that site to its own color. Particles that return to the origin before converting a site are killed. One would not necessarily expect any interface to emerge from this process, but simulations show surprisingly coherent red and blue territories.
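The variant just described can be sketched in a few lines (our own implementation choices: white is encoded by absence from a dictionary, and 4000 alternating releases). Each released particle converts at most one site, which gives the invariants checked at the end:

```python
import random

random.seed(4)
color = {}           # site -> +1 (red) or -1 (blue); absent means white
released = 0
converted = 0

def release(c):
    """One particle of color c from the origin: walk until reaching a site
    other than the origin colored differently from c (white counts as
    different) and convert it, or die on returning to the origin first."""
    global converted
    x, y = 0, 0
    while True:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        if (x, y) == (0, 0):
            return                       # killed at the origin
        if color.get((x, y), 0) != c:    # white or opposite color
            color[(x, y)] = c
            converted += 1
            return

for i in range(4000):                    # alternate red and blue releases
    release(1 if i % 2 == 0 else -1)
    released += 1

print(released, converted, len(color))
```

Tracking how often a conversion recolors an already-colored site, rather than a white one, is one way to compare with the behavior shown in Fig. 11, where most conversions hit sites of the opposite color and the colored territory grows slowly.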

Conformal invariance Our choice of the cylinder graph with uniform sources \(\mu _i\) on the top and bottom is designed to make the function g in the level set heuristic (see (5)) as simple as possible: \(g(x,y) = 1-y\). A candidate Lyapunov function for more general graphs is

$$\begin{aligned} h(S) = \sum _{v \in S^c} g(v) \end{aligned}$$

whose maximum over \(S \subset V\) of cardinality k is attained by the level set (7).

A case of particular interest is the following: Let \(V = D \cap (\frac{1}{n} {\mathbb {Z}}^2)\) where D is a bounded simply connected planar domain. We take \(\mu _i = \delta _{z_i}\) for points \(z_1,z_2 \in D\) adjacent to \(D^c\). As the edges of our graph we take the usual nearest-neighbor edges of \(\frac{1}{n} {\mathbb {Z}}^2\) and delete every edge between D and \(D^c\). In the case that D is the unit disk with \(z_1 = 1\) and \(z_2 = -1\), the level lines of g are circular arcs meeting \(\partial D\) at right angles. The location of the interface for general D can then be predicted by conformally mapping D to the disk. Extending the key Theorem 5 to the above setup is a technical challenge we address in a subsequent paper.


Cite this article

Ganguly, S., Levine, L., Peres, Y. et al. Formation of an interface by competitive erosion. Probab. Theory Relat. Fields 168, 455–509 (2017). https://doi.org/10.1007/s00440-016-0715-3
