Abstract
We study cover times of subsets of \({\mathbb {Z}}^2\) by a two-dimensional massive random walk loop soup. We consider a sequence of subsets \(A_n \subset {\mathbb {Z}}^2\) such that \(|A_n| \rightarrow \infty \) and determine the distributional limit of their cover times \({\mathcal {T}}(A_n)\). We allow the killing rate \(\kappa _n\) (or equivalently the “mass”) of the loop soup to depend on the size of the set \(A_n\) to be covered. In particular, we determine the limiting behavior of the cover times for inverse killing rates all the way up to \(\kappa _n^{-1}=|A_n|^{1-8/(\log \log |A_n|)},\) showing that it can be described by a Gumbel distribution. Since a typical loop in this model will have diameter at most of order \(\kappa _n^{-1/2}=|A_n|^{1/2},\) if \(\kappa _n^{-1}\) exceeded \(|A_n|,\) the cover times of all points in a tightly packed set \(A_n\) (i.e., a square or close to a ball) would presumably be heavily correlated, complicating the analysis. Our result comes close to this extreme case.
1 Introduction
In this paper we consider a covering problem for the massive random walk loop soup in \({\mathbb {Z}}^2\). Covering problems can be traced back to Dvoretzky (see [1]) who in 1956 studied the problem of covering the circle with a collection of randomly placed arcs of prescribed lengths. Many variants of this problem were later studied, and we mention in particular Janson’s work [2]. Informally, Janson fixed a set \(K\subset {\mathbb {R}}^d\) and asked for the first time when sets of small diameter arriving according to a Poisson process cover K completely. In particular, Janson determined the asymptotic distribution of this cover time as the diameter of the covering sets shrinks to 0.
Later, Belius [3] took a step in a new direction when he studied a variant of the problem in which the sets used to cover K are unbounded. Concretely, Belius fixed a set \(K\subset {\mathbb {Z}}^d\) and considered so-called random interlacements arriving according to a Poisson process with unit rate. These random interlacements can informally be understood as bi-infinite random walk trajectories (see [4] for more on this model). For this reason, the questions were posed for \(d\ge 3,\) as otherwise the random walks are recurrent. The use of unbounded sets in the covering means that the cover times of any two points \(x,y\in K\) are dependent regardless of the distance between x and y. A similar problem was then studied by Broman and Mussini (see [5], which also contains references to other papers on coverage problems), where now \(K\subset {\mathbb {R}}^d\) and the objects used to cover K are bi-infinite cylinders. In [5], the fact that \(K\subset {\mathbb {R}}^d\) is (in general) neither a finite nor a discrete set poses a new set of challenges.
In the present paper, we restrict our attention to \({\mathbb {Z}}^2\) and consider the so-called massive random walk loop soup. The term massive comes from the connection between loop soups and field theory, particularly the Gaussian free field. A random walk loop soup with a non-zero killing rate corresponds to a Gaussian free field with a non-zero mass term. The loops are here generated by a random walk on \({\mathbb {Z}}^2\) which at every step has a positive probability of being killed (or landing in a cemetery state). As long as this killing rate is strictly positive, it keeps very large loops from appearing near the origin and ensures a nontrivial cover time; in contrast, with a zero killing rate every vertex in \({\mathbb {Z}}^2\) would be instantly covered. In this sense, our current project is very much related to the work of [3], as we again study the trajectory of a random walk, but we are introducing a killing in order to keep our walks from becoming too long. One could also study the (somewhat easier) case of finite portions of trajectories generated by killed random walks that do not form loops, but the random walk loop soup seems more natural and interesting, in particular because of its deep connection to the Gaussian free field and to other models of statistical mechanics (see e.g., [6] and references therein). Aside from this connection, the random walk loop soup is also an object of intrinsic interest as a prototypical example of a Poissonian system amenable to rigorous analysis thanks to the vast body of knowledge on the behavior of the random walk in two dimensions. We remark that while the usual set-up for the random walk loop soup is for the case of finite graphs (see for instance [7] and [8]), the only thing which is really needed is for the Green’s function to be finite. In our full-space setting, i.e. on \({\mathbb {Z}}^2\), this is accomplished by having a non-zero killing rate \(\kappa \), see further the remark in Sect. 2.
We will give a precise definition of the massive random walk loop soup in Sect. 2, but in order to present our main results we give here a short informal explanation. We will consider a measure \(\mu \) on the set of all loops (i.e., finite walks on \({\mathbb {Z}}^2\) ending at the same vertex where they started). Because of the non-zero killing rate, the measure \(\mu \) does not give much weight to very long loops. In particular, \(\mu (\Gamma _x)<\infty \) where \(\Gamma _x\) denotes the set of loops containing \(x\in {\mathbb {Z}}^2\). Furthermore, this measure is translation invariant so that, in particular, \(\mu (\Gamma _x)=\mu (\Gamma _o)\) for every \(x \in {\mathbb {Z}}^2,\) where o denotes the origin of the square lattice. Since the quantity \(\mu (\Gamma _o)\) will play a central role, we point out already here that it is a function of the Green’s function at 0 of a random walk with killing rate \(\kappa \). Furthermore, as our main result (Theorem 1.1 below) concerns the case where \(\kappa \) goes to 0 with the sizes of the sets we are covering, it follows that \(\mu (\Gamma _o)\) implicitly depends on the sizes of the sets we are covering.
The model that we study here is then a Poisson process \(\omega \) on \(\Gamma \times [0,\infty ),\) where \(\Gamma =\bigcup _{x\in {\mathbb {Z}}^2} \Gamma _x\) is the set of all loops. Furthermore, a pair \((\gamma ,s)\in \omega \) is a loop \(\gamma \) along with a “time-stamp” s denoting the time at which loop \(\gamma \) arrives. We can then define the cover time of the set \(A\subset {\mathbb {Z}}^2\) by letting \({\mathcal {T}}(A)=\inf \big \{t\ge 0: A\subseteq \bigcup _{(\gamma ,s)\in \omega : s\le t}\gamma \big \},\)
where we abuse notation somewhat and identify the loop \(\gamma \) with its trace, i.e., the vertices \(x\in {\mathbb {Z}}^2\) that it encounters. Our main result concerns the asymptotic cover time of a growing sequence of sets \((A_n)_{n \ge 1},\) as follows.
Theorem 1.1
Consider a sequence of finite subsets \(A_n \subset {\mathbb {Z}}^2\) such that \(|A_n| \uparrow \infty \). Furthermore, assume that the killing rates \(\kappa _n\) are such that, for every \(n\), \(\exp (e^{32})\le \kappa _n^{-1}\le |A_n|^{1-8/(\log \log |A_n|)}\). We then have that for n large enough
and therefore
where G is a Gumbel distributed random variable.
Remarks
It may seem that the bound on the right hand side of (1.1) does not depend on the killing rates \(\kappa _n\). However, equations (3.13) and (3.14) show that, for \(\kappa _n^{-1}\) large, \(\mu (\Gamma _o)\) is such that \(\vert \mu (\Gamma _o)-\log \log \kappa _n^{-1} \vert < 2\) (where the constant 2 is rather arbitrary). Furthermore, the remark after the proof of Lemma 5.1 indicates that it may not be possible to drastically improve on the rate of convergence in (1.1), at least not by using the methods of this paper.
We assume the lower bound on \(\kappa _n^{-1}\) in the statement of Theorem 1.1 out of convenience, and we claim that this can be relaxed (at least somewhat) by adding further details to our proofs. However, as we deem it natural to let \(\kappa _n\rightarrow 0\) as \(|A_n|\rightarrow \infty ,\) we do not think it worthwhile to make the paper more technical than it is in order to improve on this lower bound.
Furthermore, the upper bound on \(\kappa _n^{-1}\) cannot be easily improved, at least not substantially. Indeed, the discussion after the proof of Lemma 5.4 indicates that while it may be possible to improve the upper bound by replacing the number 8 with a slightly lower number, improving it further would require new ideas, if at all possible (see further the discussion below).
We continue this section with a more in depth discussion of our main result and of its proof.
It is straightforward to determine (see the start of Sect. 4) that the expected number of uncovered vertices \(x\in A_n\) at time \(\frac{\log |A_n|}{\mu (\Gamma _o)}\) is exactly 1, and this is why the distributional limit in Theorem 1.1 may exist at all. Furthermore, it is easy to show (see (7.1)) that \(\mu (\Gamma _o){\mathcal {T}}(o)\) is an exponentially distributed random variable with parameter 1. By using the well known fact that the maximum of n independent exponentially distributed random variables with parameter 1, centered by \(\log n\), converges to a Gumbel distribution as \(n\rightarrow \infty \), we see that (1.1) very much corresponds to the situation where the vertices of \(A_n\) are covered independently. Indeed, (1.2) means that \({\mathcal {T}}(A_n) \approx \frac{\log |A_n|+G}{\mu (\Gamma _o)}\) and, since \(\lim _{\kappa \rightarrow 0}\mu (\Gamma _o)=\infty ,\) we see that the cover time will be concentrated around \(\frac{\log |A_n|}{\mu (\Gamma _o)}\) with extremely small fluctuations of size \(\frac{G}{\mu (\Gamma _o)}\).
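This convergence is easy to check in simulation (an illustrative sketch, independent of the loop soup itself): sampling the maximum of \(n\) independent Exp(1) random variables by inverse transform and centering by \(\log n\), the empirical mean approaches the Euler–Mascheroni constant \(\gamma \approx 0.5772\) and the empirical variance approaches \(\pi ^2/6\), the mean and variance of the standard Gumbel distribution.

```python
import math
import random

rng = random.Random(1)
n, reps = 10**4, 20_000

samples = []
for _ in range(reps):
    # the max of n iid Exp(1) has CDF (1 - e^{-x})^n; invert it at a
    # uniform u (using expm1 for numerical stability), center by log n
    u = rng.random()
    m = -math.log(-math.expm1(math.log(u) / n))
    samples.append(m - math.log(n))

mean = sum(samples) / reps
var = sum((s - mean) ** 2 for s in samples) / reps
# Gumbel limit: mean -> 0.5772..., variance -> pi^2/6 = 1.6449...
```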
Our main result is not surprising for large killing rates (such as when \(\kappa _n\) is constant). This is because in such a regime large loops are strongly suppressed, which makes our model look similar to those studied in [3, 5] and [2], where similar results are obtained. The main difference in our work is that we let the killing rate \(\kappa _n\) go to 0 as the size of the set that needs to be covered diverges.
In this situation, we expect that the behavior may depend on the geometry (to be more precise, on the sparsity) of the sets \(A_n,\) as well as on how fast the killing rates \(\kappa _n\) go to 0. To see this, assume first that we are in the “compact” case where \(A_n\) is (as close as possible to) a ball of radius \(\sqrt{n}\), and consider the following situations.
-
(i)
If \(\kappa _n\) approaches zero very quickly, then for n large, the diameter of a typical loop intersecting \(A_n\) is vastly larger than the linear size of \(A_n\). It is then natural to expect that the first loop that arrives and touches \(A_n\) will in fact cover \(A_n\) completely, and the re-scaled cover time will simply be an exponential random variable as \(n \rightarrow \infty \). We give further support to this claim in Sect. 7 which includes a longer discussion along with two simple examples (Examples 7.1 and 7.3) further mentioned below.
-
(ii)
If instead \(\kappa _n=|A_n|^{-\alpha }\) for some \(\alpha <1\), then for n large, the diameter of a typical loop intersecting \(A_n\) is much smaller than the linear size of \(A_n\). This will create enough independence for the re-scaled cover time to converge to a Gumbel distribution as indeed Theorem 1.1 shows.
-
(iii)
If instead \(\kappa _n \rightarrow 0\) at a rate which is in between the two cases above, then the diameter of a typical loop may still be larger than \(A_n,\) but it will most likely not cover the entirety of \(A_n\). It is not clear to us how the cover time will behave in this intermediate case.
If \(A_n\) is sparser or stretched (say in the form of a line of length n), then the potential different phases described above may simply occur at other thresholds. However, if we allow the separation between points to depend on the killing rate, we can easily create an example (see Example 7.4) in which the limit of the cover time is always a Gumbel.
Having argued that one expects a different type of behavior for high and low killing rates, it is natural to ask where the threshold between those two regimes lies. For the “compact” case illustrated above, one may guess that the threshold could be when \(\kappa _n \sim |A_n|^{-1}\), so that the linear size of \(A_n\) in this case is of the same order as the diameter of a typical large loop, which essentially corresponds to the correlation length of the system (the separation distance at which two parts of the system become roughly independent). We remark that our main result comes very close to this supposed threshold. It may of course also be the case that the correct threshold corresponds to an even quicker rate at which \(\kappa _n \rightarrow 0,\) but if so, other methods than the ones employed in this paper will be needed to get close to the threshold.
We note that, if one takes a scaling limit, re-scaling space by \(1/\sqrt{|A_n|}\) and time by \(1/|A_n|\), \(\kappa _n=|A_n|^{-1}\) corresponds to the near-critical regime and leads to a massive Brownian loop soup (see [6]). In contrast to this, if \(\kappa _n=|A_n|^{-\alpha }\) for some \(\alpha <1\) one expects the scaling limit to be trivial (meaning that no macroscopic loops survive), while if \(\alpha >1\) one expects to obtain the critical (i.e. scale invariant) Brownian loop soup (see [6] for further discussion).
We briefly mention Examples 7.1 and 7.3, both considering sets \(A_n\) consisting of only two points. In Example 7.1 the two points are vastly separated, and the re-scaled cover time is shown to be the maximum of two independent exponential random variables. In contrast, Example 7.3 deals with two points that are neighbors, and the re-scaled cover time is shown to be a single exponential random variable. This provides some additional support for the discussion above concerning the potential different phases.
The overall strategy of the proof of Theorem 1.1 can be informally described as follows. At a time just before the expected cover time, i.e. at time \((1-\epsilon )\frac{\log |A_n|}{\mu (\Gamma _o)}\), the not yet covered region should consist of relatively few and well separated vertices (see Proposition 5.7). These separated vertices will then be covered “almost independently” as the distances between them are so large that we will not see many loops that are large enough to hit two such vertices. The main work will go into establishing the first of these two steps, for which we will need to perform an involved and detailed second moment estimate. Although the general strategy that we will follow has been used in [3] and [5], the main part of the work and challenges here are different. This is intimately connected to the fact that the methods must be fine-tuned in order for Theorem 1.1 to work as close as possible to the case \(\kappa _n^{-1}=|A_n|\).
We end this introduction with an outline of the rest of the paper. In Sect. 2 we will define and discuss the random walk loop soup. In Sect. 3 we will obtain various estimates on the Green’s function. However, to avoid breaking the flow of the paper, many of the calculations used in this section will be deferred to an appendix (Appendix A). The results of Sect. 3 will then be used in Sect. 4 in order to obtain estimates on the probabilities involved in our second moment estimate. The latter is done in Sect. 5 and in turn, these results are used in Sect. 6 to prove our main result. Finally, Sect. 7 contains the three examples and the discussion mentioned above.
2 The Loop Soup
The purpose of this section is twofold. Firstly, it will serve to introduce necessary notation and definitions. Secondly, it will serve as a brief introduction to random walk loop soups in the particular case that we study in this paper. See also [6,7,8,9] for an overview of the model studied in this paper.
We consider a discrete time simple symmetric random walk loop soup in \({\mathbb {Z}}^2\) with killing rate \(\kappa >0\). In this setting, a walker positioned at x at time n will move to a neighbor y with probability \(1/(4+\kappa )\), and it will be killed (or equivalently moved to a cemetery state) with probability \(\kappa /(4+\kappa )\). Next, \(\gamma _r\) is a loop rooted at the vertex \(x_0\) and of length \(|\gamma _r|=N\) if \(\gamma _r=(x_0,x_1,\ldots ,x_{N-1})\)
for any sequence of neighboring vertices \(x_0,\ldots ,x_{N-1}\) where \(x_{N-1}\) is a vertex neighboring \(x_0\). As usual (see [7] or [8]), we only consider non-trivial loops, i.e. only loops with \(N\ge 2\).
We note that while the alternative notation \(\gamma _r=(x_0,x_1,\ldots ,x_{N-1},x_0)\) may seem more natural (as it “closes the loop”), it would be more cumbersome when we want to consider time-shifts of the loops. One could of course also define the loop in terms of the edges traversed, but as we consider the cover times for vertices, this seems less natural.
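As an illustrative sketch (not part of the paper's argument; the value \(\kappa =1\) is arbitrary), the killed walk can be simulated directly. The number of steps it takes before being killed is geometric, with mean \(4/\kappa \):

```python
import random

def killed_walk(kappa, rng):
    # killed SRW on Z^2: at each step, move to a uniformly chosen
    # neighbour with probability 4/(4+kappa), and die with
    # probability kappa/(4+kappa)
    x, y, steps = 0, 0, 0
    while rng.random() < 4 / (4 + kappa):
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y, steps = x + dx, y + dy, steps + 1
    return steps, (x, y)

rng = random.Random(0)
kappa = 1.0
# the step count is geometric with mean 4/kappa (= 4 here)
mean_steps = sum(killed_walk(kappa, rng)[0] for _ in range(200_000)) / 200_000
```

For small \(\kappa \) the walk survives for about \(4/\kappa \) steps, so its diameter is of order \(\kappa ^{-1/2}\), matching the heuristic discussed in the introduction.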
Since we are considering loops in \({\mathbb {Z}}^2,\) we must have that \(|\gamma _r|\) is an even number. The rooted measure \(\mu _r\) of a fixed rooted loop \(\gamma _r\) with \(|\gamma _r|=2n\) is then defined to be \(\mu _r(\gamma _r)=\frac{1}{2n}\left( \frac{1}{4+\kappa }\right) ^{2n}\)
for \(n\ge 1\). We see that \(\mu _r(\gamma _r)\) is the probability that the corresponding killed random walk on \({\mathbb {Z}}^2\) traverses the steps of \(\gamma _r\), multiplied by the factor \(1/(2n)\). Intuitively, the reason for this modification is that (most) loops have 2n possible starting points and will therefore contribute 2n times in the Poissonian construction below.
Proceeding, we find that the total rooted measure of all loops rooted at \(x_0\) and of length 2n becomes \(\frac{L_{2n}}{2n}\left( \frac{1}{4+\kappa }\right) ^{2n},\)
where \(L_{2n}\) denotes the number of loops rooted at o and of length 2n.
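As a sketch (illustrative only), \(L_{2n}\) can be computed by dynamic programming over walk endpoints; on \({\mathbb {Z}}^2\) it coincides with the classical closed-walk count \(\binom{2n}{n}^2\), and the total rooted measure above follows by weighting with \((4+\kappa )^{-2n}\):

```python
import math
from collections import defaultdict

def closed_walks(length):
    # L_n: number of walks of the given length starting and ending at o
    counts = {(0, 0): 1}
    for _ in range(length):
        nxt = defaultdict(int)
        for (a, b), c in counts.items():
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(a + da, b + db)] += c
        counts = nxt
    return counts.get((0, 0), 0)

def total_rooted_measure(n, kappa):
    # total rooted measure of loops rooted at o of length 2n:
    # (L_{2n}/(2n)) * (1/(4+kappa))^{2n}
    return closed_walks(2 * n) / (2 * n) / (4 + kappa) ** (2 * n)
```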
In order to define the (unrooted) loop measure we start by defining equivalence classes of loops by saying that the rooted loops \(\gamma _r, \gamma _r'\) are equivalent if we can obtain one from the other by a time-shift. More formally, if \(|\gamma _r|=2n\), then \(\gamma _r \sim \gamma _r'\) if there exists some \(0\le m <2n\) such that \(\gamma _r'=(x_m,x_{m+1},\ldots ,x_{2n-1},x_0,x_1,\ldots ,x_{m-1}).\)
We see that the equivalence class of \(\gamma _r\) contains exactly \(\frac{2n}{\textrm{mult}(\gamma _r)}\) rooted loops. Here, \(\textrm{mult}(\gamma _r)\) is the largest k such that \(\gamma _r\) can be written as the concatenation of k identical loops. We will think of an equivalence class as an unrooted loop (i.e. as a sequence of vertices with a specified order but with no specified first vertex), and we shall denote such a loop by \(\gamma \). We will occasionally write \(\gamma _r \in \gamma \) to indicate that the rooted loop \(\gamma _r\) is a member of the equivalence class \(\gamma \).
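A small code sketch (illustrative only) of the multiplicity and of the equivalence class of time-shifts; the class size equals \(2n/\textrm{mult}(\gamma _r)\):

```python
def mult(loop):
    # loop is a tuple (x_0, ..., x_{2n-1}) of vertices of a rooted loop;
    # mult is the largest k such that loop is k copies of a shorter loop
    n = len(loop)
    for k in range(n, 0, -1):
        if n % k == 0:
            period = n // k
            if all(loop[i] == loop[i % period] for i in range(n)):
                return k
    return 1

def time_shifts(loop):
    # the equivalence class of loop: all distinct cyclic time-shifts
    n = len(loop)
    return {tuple(loop[(i + m) % n] for i in range(n)) for m in range(n)}

o, e = (0, 0), (1, 0)
back_and_forth_twice = (o, e, o, e)   # two copies of (o, e): mult 2
square = (o, e, (1, 1), (0, 1))       # a simple square loop: mult 1
```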
We then define the (unrooted) measure \(\mu \) on loops by letting \(\mu (\gamma )\) equal the weight of the rooted measure for a member of the equivalence class of \(\gamma ,\) multiplied by the number of members in this equivalence class. That is, \(\mu (\gamma )=\frac{2n}{\textrm{mult}(\gamma )}\,\mu _r(\gamma _r)=\frac{1}{\textrm{mult}(\gamma )}\left( \frac{1}{4+\kappa }\right) ^{2n}\) for \(\gamma _r\in \gamma \) with \(|\gamma |=2n,\)  (2.1)
where \(\textrm{mult}(\gamma )=\textrm{mult}(\gamma _r)\) for any (and therefore every) \(\gamma _r \in \gamma \). Equation (2.1) thus defines our measure \(\mu \). We choose to work with unrooted loops and the corresponding measure because this is the canonical choice (see [7, 8]) leading to the Brownian loop soup in the scaling limit [9].
We now let \(\Gamma _{x}^{2n}\) denote the set of all unrooted loops \(\gamma \) such that \(x\in \gamma \) and \(|\gamma |=2n\). Then, we define \(\Gamma _x=\bigcup _{n\ge 1}\Gamma _x^{2n}.\)
We observe that
It will turn out that the quantity \(\mu (\Gamma _o)\) will play an essential role in the rest of the paper. However, while (2.2) gives a concrete and easily understandable expression for \(\mu (\Gamma _o)\) it is not the most useful, and we will instead use (2.5) below.
Returning to our measure \(\mu \) we now let \(\omega \) denote a Poisson point process on \(\Gamma \times [0,\infty )\) with intensity measure \(\mu \times dt\). Here, \(\Gamma =\bigcup _{x\in {\mathbb {Z}}^2} \Gamma _x\) simply denotes the set of all unrooted loops in \({\mathbb {Z}}^2\). We shall think of a pair \((\gamma ,t)\in \omega \) as a loop \(\gamma \) along with a “time-stamp” t which corresponds to the time at which the loop arrived. We also let \(\omega _t=\{\gamma : (\gamma ,s)\in \omega \text { for some } s\le t\},\)
so that \(\omega _t\) is the collection of loops that have arrived before time t. It will be convenient to introduce the notation \({\mathcal {C}}_t=\bigcup _{\gamma \in \omega _t}\gamma ,\)
so that \({\mathcal {C}}_t\) is the covered region at time \(t>0\).
Remark
In this section, and in the rest of the paper, we will use equations such as (2.3) below from [7] and [8] involving the Green’s function. This requires a comment, since in [7] and [8], those equations are derived working with finite graphs, while we consider the infinite square lattice. The use of finite graphs in [7] and [8] is largely a matter of convenience ([10]), since it allows us to write explicit formulas in terms of determinants of finite matrices. Nevertheless, the final formulas in terms of the Green’s function are valid whenever the Green’s function is well defined.
To see why this is the case in the specific example of the massive random walk loop soup on the square lattice, the reader can think of coupling a massive loop soup on \(\mathbb {Z}^2\) and a loop soup on \([-L,L]^2 \cap \mathbb {Z}^2\), obtained from the first one by removing all loops that exit \([-L,L]^2\). If one focuses on the restrictions of the two processes to a finite window \([-L_0,L_0]^2 \cap \mathbb {Z}^2\), for any fixed \(L_0\), and sends \(L \rightarrow \infty \), the presence of a positive killing rate implies that the restriction of the second process to \([-L_0,L_0]^2 \cap \mathbb {Z}^2\) converges to that of the first. On the other hand, for the second process, one can use the formulas from [7] and [8] for any finite L. Moreover, the expressions in those formulas converge as \(L \rightarrow \infty \) because the Green’s function stays finite in that limit, due to the positive killing rate.
As just remarked, there is a close connection between the loop soup and the Green’s function for the killed simple symmetric random walk on \({\mathbb {Z}}^2\). It is known (see (4.18) on p. 74 of [8] or p. 45 of [7]) that (for the simple symmetric random walk with killing rate \(\kappa \))
where the first equality follows from the construction of the Poisson process and translation invariance.
Here, g(x, y) is a Green’s function given by (see [8] p. 9, (1.26))
where \((X^x_t)_{t>0}\) is a continuous time random walk (started at x) which waits an exponential time with parameter 1 and then picks any neighbor with probability \(1/(4+\kappa )\) and is killed with probability \(\kappa /(4+\kappa )\). Clearly, if \(N_t\) denotes the number of steps that this random walk has taken by time t, we then have that
where \(S_n^{x,\kappa }\) denotes a discrete time random walk started at x and with killing rate \(\kappa \).
Combining the above we obtain the formula
where
We note also that from (2.3) and (2.4) we have that \(e^{-\mu (\Gamma _o)}=\frac{1}{G^{o,o}}\) and so \(\mu (\Gamma _o)=\log G^{o,o}.\)  (2.5)
As mentioned above, this equation will turn out to be much more useful for us than (2.2). Observe that \(\omega _u\cap \Gamma _x \cap \Gamma _y\) is the set of loops in \(\omega _u\) which intersect both x and y. We have (according to [7], p. 45) that \(\mu (\Gamma _x \cap \Gamma _y)=\log \left( \frac{(G^{o,o})^2}{(G^{o,o})^2-(G^{o,x-y})^2}\right) .\)  (2.6)
We conclude that \({\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u=\emptyset )=e^{-u\left( 2\mu (\Gamma _o)-\mu (\Gamma _x \cap \Gamma _y)\right) }=\left( (G^{o,o}+G^{o,x-y})(G^{o,o}-G^{o,x-y})\right) ^{-u}.\)  (2.7)
Much of the effort of this paper will be focused around obtaining good estimates for (2.7), and for other similar quantities. For this reason we shall need to study some aspects of the Green’s function \(G^{x,y}\) in detail, and then use these results to obtain good estimates of probabilities such as \({\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u=\emptyset )\). In order to structure this, we choose to devote Sect. 3 exclusively to results concerning Green’s functions. These results are then used in Sect. 4 to obtain our estimates for relevant probabilities.
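As a quick numerical sanity check (a sketch; the choice \(\kappa =1\) is arbitrary), one can compute \(G^{o,o}\) from the closed-walk series \(\sum _{n\ge 0}\binom{2n}{n}^2(4+\kappa )^{-2n}\) and then read off \(\mu (\Gamma _o)=\log G^{o,o}\) from the identity \(e^{-\mu (\Gamma _o)}=1/G^{o,o}\) noted above:

```python
import math

def green_oo(kappa, nmax=6000):
    # G^{o,o} = sum_{n>=0} C(2n,n)^2 / (4+kappa)^{2n}; summed via the
    # consecutive-term ratio (2(2n+1)/(n+1))^2 / (4+kappa)^2 to avoid
    # huge intermediate binomials
    term, total = 1.0, 0.0
    for n in range(nmax):
        total += term
        term *= (2.0 * (2 * n + 1) / (n + 1)) ** 2 / (4.0 + kappa) ** 2
    return total

def mu_gamma_o(kappa):
    # e^{-mu(Gamma_o)} = 1/G^{o,o}  =>  mu(Gamma_o) = log G^{o,o}
    return math.log(green_oo(kappa))
```

Consistently with the first remark after Theorem 1.1, for small \(\kappa \) one expects \(\mu (\Gamma _o)\) to be close to \(\log \log \kappa ^{-1}\); already at \(\kappa =0.01\) the two differ by less than 1.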
3 Green’s Function Estimates
We will write \(S_n\) in place of \(S_n^{o,\kappa }\) and we start by observing that \(G^{o,x}=\sum _{n=0}^{\infty }{\mathbb {P}}(S_n=x)=\sum _{n=0}^{\infty }\frac{W_n^{o,x}}{(4+\kappa )^n},\)  (3.1)
where \(W^{o,x}_n\) denotes the number of walks of length n starting at the origin and ending at x. In (3.1) and in the rest of the paper, for \(x=(x_1,x_2) \in \mathbb {Z}^2\), |x| denotes \(|x_1|+|x_2|\). It is clear from (2.7) that in order to bound \({\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u=\emptyset )\) we should strive to find good estimates for \(G^{o,o},G^{o,x}\) and the difference \(G^{o,o}-G^{o,x},\)  (3.2)
and this is the main purpose of this section. The main results of the current section are Propositions 3.3 and 3.7. Proposition 3.3 will give estimates on (3.2) for small and moderate values of |x|, while Proposition 3.7 will provide estimates on \(G^{o,x}\) for large values of |x|. We mention that Proposition 3.7 is somewhat specialized to work for large values of \(\kappa ^{-1}\).
To avoid breaking the flow of the paper, the proofs of elementary lemmas concerning the number of walks \(W_n^{o,x},\) and estimates on partial sums of (3.1), are deferred to Appendix A.
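Before turning to the proofs, here is a small numerical sketch (illustrative only, with the arbitrary choice \(\kappa =1\)) of the quantities in (3.1) and (3.2): the walk counts \(W_n^{o,x}\) are built by dynamic programming, and \(G^{o,x}\) is obtained by weighting them with \((4+\kappa )^{-n}\):

```python
import math
from collections import defaultdict

def walk_counts(nmax):
    # out[n][x] = W_n^{o,x}, the number of length-n nearest-neighbour
    # walks from the origin to x
    counts = {(0, 0): 1}
    out = [counts]
    for _ in range(nmax):
        nxt = defaultdict(int)
        for (a, b), c in counts.items():
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(a + da, b + db)] += c
        counts = dict(nxt)
        out.append(counts)
    return out

def green(x, kappa, nmax=80):
    # G^{o,x} = sum_n W_n^{o,x} / (4+kappa)^n, truncated at nmax
    cs = walk_counts(nmax)
    return sum(cs[n].get(x, 0) / (4 + kappa) ** n for n in range(nmax + 1))

G_oo = green((0, 0), 1.0)
G_ox = green((1, 0), 1.0)
# cross-check against the closed-walk series C(2n,n)^2/(4+kappa)^{2n}
G_series = sum(math.comb(2 * n, n) ** 2 / 5.0 ** (2 * n) for n in range(41))
```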
We will now focus on Proposition 3.3, which will be proved through two lemmas. Firstly, Lemma 3.1 will allow us to estimate \(G^{o,o}-G^{o,x}\) in terms of partial sums of \(G^{o,o}\). Then, we will use Lemma 3.2 to quantify these bounds. In order to prove Proposition 3.7 we will use a consequence (Lemma 3.6) of the local central limit theorem, along with Lemmas 3.1 and 3.4.
We can now state our first lemma which is proved in Appendix A.
Lemma 3.1
For any \(x\in {\mathbb {Z}}^2\) such that |x| is even, we have that
for every \(n\ge 0\).
Our second lemma (again proved in Appendix A) provides lower bounds on the partial sums of
Lemma 3.2
For any \(\kappa >0\) and any \(N\ge 1\) we have that
Furthermore,
We are now ready to state and prove our first result concerning the difference (3.2). Proposition 3.3 will be “basic” in the sense that the statements are not the strongest possible, but they are sufficient for our purposes. Later, we will prove Proposition 3.7 which will be more specialized.
Proposition 3.3
For any \(\kappa >0\) we have that
For any \(|x|\ge 1\) we have that
If \(4\le |x|\le 2\kappa ^{-1},\) and \(\kappa ^{-1}\ge 2,\) we have that
Proof
The first result, i.e., (3.5), is an immediate consequence of (3.1) since we clearly have that
as \(|x|\rightarrow \infty \).
We now turn to (3.7) and we will also assume (momentarily) that |x| is an even number. We have that
by using Lemma 3.1 and the fact that \(W_{2n}^{o,x}=0\) for \(n\le |x|/2-1\). We can then use (3.3) with \(N=|x|/2\) to see that
where we used the assumption that \(|x|\le 2 \kappa ^{-1}\) in the second inequality. This proves (3.7) in the case when |x| is even.
We will have to take some extra care when |x| is odd. Therefore, assume that \(x=(2l+1,2k)\) with \(5\le |x|\le 2\kappa ^{-1}\) and observe that by (3.1),
We then conclude that
since at most two of the neighbors of x are closer to o than x. By again using (3.3) we now see that
Furthermore, we have that \(y^2-1\ge \frac{8y^2}{9}\) for every \(y\ge 3\) and therefore,
where we used that \(|x|\le 2\kappa ^{-1}\) in the penultimate inequality. By symmetry, the same estimate holds when \(G^{o,(2l+1,2k)}\) is replaced by \(G^{o,(2l,2k+1)},\) and this establishes (3.7).
For (3.6), consider first the case when \(x=(1,0)\) and observe that, as above,
where we used (3.8) to conclude that \(G^{o,o}-G^{o,x}\ge W_0^{o,o}=1\) whenever \(|x|\ge 2\) is even. The statement then follows for all \(|x|=1\) by symmetry. Next, if \(|x|\ge 2\) is even, we again observe that by (3.8), \(G^{o,o}-G^{o,x}\ge W_0^{o,o}=1\). For odd values of \(|x|\ge 3\) we can sum over the neighbors to reach the same conclusion. \(\square \)
Our next lemma will give upper bounds on the tails of the sums in \(G^{o,o}\). The first part (i.e. (3.10)) will be used to prove Proposition 3.7, while the second part (i.e. (3.11)) will be used to prove Propositions 3.5 and 3.7, and the last part (i.e. (3.12)) will be used in later sections. The proof is again deferred until Appendix A.
Lemma 3.4
For any \(0<\kappa <1,\) and any \(N\in \{1,2,\ldots \}\) such that \(N\kappa <1\), we have that
On the other hand, if \(N\kappa \ge 1/2,\) then
Furthermore,
Our next proposition is elementary and presumably far from optimal, but useful nevertheless.
Proposition 3.5
Assume that \(|x|\ge 2 \kappa ^{-1}\) and that \(\kappa ^{-1}\ge e^{30}\). Then we have that
Proof
Assume first that |x| is even. By (3.1) and Lemma 3.1 we then have that
Using this, and then applying (3.11) with \(N=|x|/2-1\) so that \(N \kappa = (|x|/2-1)\kappa \ge (\kappa ^{-1}-1)\kappa = 1-\kappa \ge 1/2\) (using our assumptions on |x| and \(\kappa ^{-1}\)) we have that,
again, since we assume that \(|x|\ge 2 \kappa ^{-1}\) and \(\kappa ^{-1} \ge e^{30}\).
If instead |x| is odd, we can sum over the neighbors \(y \sim x\) of x and use Lemma 3.1 to see that
Using this and (3.11) we then see that
Furthermore, by (3.4) we have that
since \(\kappa ^{-1}\ge e^{30}\) by assumption, and so the statement follows. \(\square \)
For future reference, we note that by (3.12) and (2.5), we have that
Similarly, by using (3.4) in place of (3.12) we have that
Intuitively, for x to have a “decent chance of being hit” by a walk of length n starting at the origin o, n should be of order close to \(|x|^2\) or larger. Therefore, we see from (3.1) that the contribution to \(G^{o,x}\) from walks which are considerably shorter than \(|x|^2\) should be small. This is made precise in our next lemma (again the proof is deferred to Appendix A) where the first statement (3.15) shows that the total contribution to \(G^{o,x}\) coming from walks that are shorter than \(|x|^2/\log |x|\) is negligible as \(|x|\rightarrow \infty \). Both statements of this lemma will be useful in order to obtain a good estimate for \(G^{o,x}\) in Proposition 3.7.
Lemma 3.6
For every |x| large enough, we have that
and that
Using this result, we can now prove the following proposition.
Proposition 3.7
Assume that \(e^9\le \kappa ^{-1}\le |A|\). For large enough |A|, we then have that
for every \(x\in {\mathbb {Z}}^2\) such that
Proof
Using (3.15) we have that
Furthermore, by (3.13) and our assumptions on \(\kappa ^{-1},\) it is easy to verify that \(\mu (\Gamma _o) \le \log \log |A|\). Therefore,
and so we note that, by using the lower bound on |x|, and that \(\kappa ^{1/2}\le e^{-9/2}\) by assumption,
for |A| large enough. Next we observe that by (3.16)
by again using the lower bound on |x| in the last inequality, together with the fact that the function \(x^2/(2\log x)\) is increasing for x large. Furthermore, by using that \(\log (1-x)\le -x\) for any \(0<x<1\), we see that
Using (3.14) we observe that \(\mu (\Gamma _o)\ge 1\) and so \(|A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\le |A|^{3/2}\) by our assumption on \(\kappa \). By also using (3.18) we can therefore conclude from (3.20) that, for \(\vert A \vert \) large enough,
where the last inequality follows from elementary considerations and is not optimal. Therefore,
for |A| large enough.
Next, assume that |x| is even and note that
where the first equality uses that since \(|x|^2\) is even, \(W_{|x|^2+1}^{o,x}=0,\) and where the inequality follows from Lemma 3.1. Next, we want to apply (3.11) with \(N=|x|^2/2\) to the right hand side of (3.22). For this we need to verify that \(N \kappa \ge 1/2,\) and indeed, by our assumption on |x|, we here have that \(N \kappa =|x|^2 \kappa /2 \ge (\kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}})^2 \kappa /2 \ge |A|^{\frac{2}{\mu (\Gamma _o)}}/2\ge 1/2\). Applying (3.11) we can therefore conclude that (by again using our assumption on |x|),
for |A| large enough. Inserting (3.19), (3.21) and (3.23) into (3.17) we get that
for |A| large enough, since \(\mu (\Gamma _o)\ge 1\) as before.
Assume now that |x| is odd and observe that \(|x|^2\ge |y|^2/2\) whenever \(y\sim x\) and \(\vert x \vert \ge 3\). We then sum over all \(y \sim x\) and observe that
where we used (3.11) in the fourth inequality and that \(|y|^2\ge |x|^2/2\) in the fifth (which holds for \(y \sim x\) and \(|x|\ge 4\)). We see that (3.24) holds also for this case, which concludes the proof. \(\square \)
4 Probability Estimates
Recall our main goal of obtaining estimates on the cover times of a sequence of growing sets. In order to get to that point, we need to consider a generic set A, which one can typically think of as being very large. Consider then \(u^*=\frac{\log |A|}{\mu (\Gamma _o)},\)
and note that \(u^*=u^*(|A|,\kappa )\). The relevance of \(u^*\) can be seen by first observing that by (2.5)
and then that it follows from (2.4) that the expected number of uncovered vertices at time \(u^*\) is 1. Informally, given enough independence, this explains why the cover time of the generic set A should be around \(u^*,\) as mentioned in the discussion after the statement of Theorem 1.1 in the Introduction. Of course, what constitutes enough independence is hard to quantify, and this question is at the heart of that discussion as well as of Sect. 7.
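As a sanity check (assuming, as (2.4) and (2.5) indicate, that each vertex is uncovered at time \(u\) with probability \(e^{-u\mu (\Gamma _o)}\)), the defining property of \(u^*\) can be written out as follows:

```latex
\mathbb{E}\bigl[\#\{x\in A:\ x \text{ uncovered at time } u^*\}\bigr]
  \;=\; \sum_{x\in A} e^{-u^*\mu(\Gamma_o)}
  \;=\; |A|\,e^{-\log|A|}
  \;=\; 1,
\qquad u^* \;=\; \frac{\log|A|}{\mu(\Gamma_o)} .
```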
Before we start presenting the results of this section, recall the discussion at the end of the introduction. In short, we want to consider the covered set at time \((1-\epsilon )u^*=(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)},\) which by the intuition above should be “just before coverage” when \(\epsilon >0\) is very small. We want to show that the set yet to be covered at that time consists of relatively few well-separated points. This result is obtained in Sect. 5 (in particular Proposition 5.7), using a second moment argument. In order to perform this, we need to understand the probability that two points o, x both belong to the uncovered set. This probability will of course be heavily dependent on the separation of o and x, and the main purpose of this section is to understand this dependence in detail.
Our first result is the following.
Proposition 4.1
Let \(\kappa ^{-1}>e^{30}\) and \(\epsilon \in (0,1)\). Then, for every \(x\in {\mathbb {Z}}^2\) we have that
Furthermore, for any \(x\in {\mathbb {Z}}^2\) such that \(4\le |x|\le 2\kappa ^{-1}\) we have that
If instead \(|x|\ge 2\kappa ^{-1},\) we have that
Proof
We start with the first statement. Use (2.7) to see that
Then, we have from (3.6) that \(G^{o,o}-G^{o,x}\ge \frac{3}{4}\). There are now two cases. Either \(G^{o,x}\ge \frac{G^{o,o}}{2}\), in which case \((G^{o,o}+G^{o,x})(G^{o,o}-G^{o,x})\ge \frac{9}{8}G^{o,o}\) and so
by (4.2), or \(G^{o,x}< \frac{G^{o,o}}{2},\) in which case \((G^{o,o}+G^{o,x})(G^{o,o}-G^{o,x})\ge G^{o,o}\frac{G^{o,o}}{2}\). Furthermore, by (3.4) we have that \(G^{o,o}\ge \frac{\log \kappa ^{-1}}{\pi }\ge \frac{30}{\pi }\ge \frac{9}{4},\) by our assumption on \(\kappa ,\) which proves (4.3).
For our second statement, we note that it follows from (4.6) that
so that by (3.7), we conclude that
which proves (4.4).
For the third statement, observe that by Proposition 3.5 we have that \(G^{o,o}-G^{o,x}\ge \frac{G^{o,o}}{2}\). Then we can use (4.7) to see that
where we used (3.4) in the last inequality. \(\square \)
Proposition 4.1 together with Proposition 3.7 will suffice when proving our desired second moment estimates. However, we shall also face the issue of covering a relatively small number of well separated vertices (i.e., vertices whose mutual distances are close to the correlation length \(\kappa ^{-1/2}\)). What we need is stated in Proposition 4.3 below, but in order to prove it we will first establish a preliminary result, namely Lemma 4.2.
For any \(K\subset {\mathbb {Z}}^2\), let
and
Recall that \(\omega _u\) denotes the loop soup with intensity u so that \(\omega _u(\Gamma _{K_1,K_2})\) is the number of loops \(\gamma \in \omega _u\) such that \(\gamma \cap K_1\ne \emptyset \) and \(\gamma \cap K_2 \ne \emptyset \).
Lemma 4.2
Let \(K_1,K_2 \subset {\mathbb {Z}}^2\) be disjoint, and let \(E_1,E_2\) be events that are determined by \(\omega _u\) restricted to the sets \(K_1\) and \(K_2\) respectively. We have that
Proof
Since the events \(E_1,E_2\) are determined by the restrictions of \(\omega _u\) to the subsets \(K_1,K_2\) respectively, they are conditionally independent given the event that \(\omega _u(\Gamma _{K_1, K_2})=0\). We then see that
Furthermore, writing
for \(i= 1,2\) and using (4.9), we see that
\(\square \)
We are now ready to state and prove the following proposition mentioned before Lemma 4.2.
Proposition 4.3
Let \(K\subset {\mathbb {Z}}^2\) and let \(\{x_1,\ldots ,x_{|K|}\}\) be an enumeration of the vertices in K. Assume further that K is such that \(|x_i-x_j|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\) for every \(i\ne j\). Then we have that
whenever \(u\ge 1,\) \(e^9\le \kappa ^{-1}\le |A|\) and |A| is large enough.
Proof
We start by noting that by (2.6)
where we used the elementary inequality \((1-x)^u\ge 1-ux\) for \(0<x\le 1\) and \(u\ge 1,\) together with the fact that \(\frac{G^{o,x}}{G^{o,o}}\le 1,\) which is an immediate consequence of (3.6). Note that it follows from (3.14) and the assumption that \(\kappa ^{-1}\ge e^9\) that \(G^{o,o}\ge \log \left( \frac{\log \kappa ^{-1}}{\pi }\right) \ge 1\). For \(u\ge 1\) we can therefore use Proposition 3.7 (which uses that \(|x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\) and that |A| is large enough) to see that
for every \(x\in {\mathbb {Z}}^2\) such that \(|x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\).
We then note that for any \(1\le i\le |K|\) and \(u,\kappa \) as in the assumptions, we have that
since \(|y-x_i|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\) by assumption on K, and where we used (4.10) in the last inequality. Using this and Lemma 4.2 we see that (with \(K_1=\{x_1\}\) and \(K_2=K{\setminus } \{x_1\}\))
By iterating this we see that
which concludes the proof. \(\square \)
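The Bernoulli-type inequality \((1-x)^u\ge 1-ux\) invoked in the proof above admits a one-line verification; with \(h(x):=(1-x)^u-1+ux\) for \(u\ge 1\) and \(0<x\le 1\):

```latex
h(0)=0, \qquad
h'(x) \;=\; u\bigl(1-(1-x)^{u-1}\bigr) \;\ge\; 0
\quad\text{for } u\ge 1,\ 0<x\le 1,
```

so that \(h(x)\ge 0\) on \((0,1],\) which is exactly \((1-x)^u\ge 1-ux\).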
5 Second Moment Estimates
For \(\epsilon \in (0,1)\) we define
so that \(A_\epsilon \) is the set of vertices of A which are uncovered at time \((1-\epsilon )u^*\). By using (5.1), (2.4), (2.5) and the definition of \(u^*\) (i.e. (4.1)) in that order, we see that for \(x\in A,\)
Therefore,
In order to reach our end goal of this section, we shall need to establish a number of inequalities dealing with summing \({\mathbb {P}}(x,y \in A_\epsilon )\) over various ranges of x, y. We will have to consider the cases when the distances between x and y are small, intermediate and large separately. In addition, in order to make the argument work for any \(\exp (e^{32})<\kappa ^{-1}<|A|^{1-\frac{8}{\log \log |A|}}\) we will further have to divide the analysis into different cases depending on the value of \(\kappa ^{-1}\). In total we establish four lemmas (Lemmas 5.1, 5.2, 5.4 and 5.5) concerning such sums, and we then combine these results into Proposition 5.6. We note that not all of these results require equally strong conditions on \(\kappa ^{-1}\) and \(\epsilon \). We prefer to write the actual conditions required in the respective statements of each lemma, as this makes it easier to see where the constraints lie. We also note that we will actually only use the results below for \(\epsilon \) equal to \(\frac{1}{100\mu (\Gamma _o)}\) and \(\frac{1}{400\mu (\Gamma _o)}\). It may therefore seem superfluous to introduce \(\epsilon \) at all, but it will make the text less technical in the end.
Lemma 5.1
For any \(e^9\le \kappa ^{-1}\le |A|\) we have that
for every \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\) and |A| large enough.
Proof
In order to establish (5.4), we use (4.3) and the definition of \(u^*\) in (4.1), together with translation invariance to see that
where we used that
since \(\mu (\Gamma _o)\ge 1\) by (3.14) and the fact that \(\kappa ^{-1}\ge e^9\). Furthermore, by our assumption on \(\epsilon \) we see that
We conclude that
where we used that \(\kappa ^{-1}\le |A|\) in the last inequality. This proves (5.4). \(\square \)
Remark
Note that even if one replaced the upper bound \(\left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\) in the summation with 1, the bound would not improve much. In fact one would obtain
at the end of (5.5), leading only to a slight improvement on the current bound of \(|A|^{-\frac{1}{20\mu (\Gamma _o)}}\). In order to optimize the bound, an improvement of (4.3) would be required. However, even an optimal bound in place of (4.3) may not fundamentally change the result.
Our next lemma deals with intermediate scales of separation between x and y.
Lemma 5.2
Assume that \(\exp (e^{32})\le \kappa ^{-1}\le |A|\) and that \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\). Then for every |A| large enough,
Proof
In order to prove (5.6), we will use (4.4), and therefore we observe that \(|x-y|\le \kappa ^{-1/4}\le 2\kappa ^{-1}\). Next, we observe that by (3.13) we have that
which holds since we assume that \(\kappa ^{-1}\ge \exp (e^{32})\). Therefore,
where the last inequality is easily checked to hold for \(\kappa ^{-1} \ge \exp (e^{32})\) as in our assumption. Hence, the requirements for (4.4) are satisfied. We then use (4.1) and (4.4) to obtain that
By again using (5.7), we see that
where the last inequality follows since, by (3.14) and the fact that \(\kappa ^{-1}>\exp \left( e^{32}\right) \), we have that
Using (5.8) and (5.9) we see that
where we used that \(\kappa ^{-1}\le |A|\) in the fourth inequality, that \(\epsilon <\frac{1}{100 \mu (\Gamma _o)}\) in the fifth inequality, that \(\mu (\Gamma _o)\ge 30\) in the penultimate inequality, and finally that |A| is taken large enough in the last inequality. \(\square \)
Our next lemma is an intermediate result which we will use to prove Lemma 5.4.
Lemma 5.3
For any \(\kappa ^{-1}\) such that \(\exp \left( e^{32}\right) \le \kappa ^{-1} \le |A|^{1-\frac{8}{\log \log |A|}},\) we have that
for every |A| large enough.
Proof
If \(\kappa ^{-1} \ge |A|^{4/5}\), we can use (3.14) to see that
whenever |A| is large enough. Therefore we see that
as desired.
If \(\exp \left( e^{32}\right) \le \kappa ^{-1}\le |A|^{4/5}\), (3.14) and \(\kappa ^{-1} \ge \exp (e^{32})\) imply that \(\mu (\Gamma _o)\ge \log \left( \frac{\log \kappa ^{-1}}{\pi }\right) \ge 30\), so that
which concludes the proof. \(\square \)
Lemma 5.4
Assume that \(\exp (e^{32})\le \kappa ^{-1} \le |A|^{1-\frac{8}{\log \log |A|}}\) and that \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\). Then for every |A| large enough,
Proof
In order to prove (5.10), we will again use (4.4), and to that end we observe that \(4\le \kappa ^{-1/4}< 2\kappa ^{-1}\) by our assumption that \(\kappa ^{-1}\ge \exp (e^{32})\). Then, for any \(\kappa ^{-1/4}\le |x-y| \le \min \left( 2 \kappa ^{-1}, |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) \) we have that by (4.4)
If instead \(|x-y|\ge 2 \kappa ^{-1},\) we use (4.5) to observe that
and so (5.11) holds for every x, y such that \(\kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\). It follows that
As in the proof of (5.9) we have that
We therefore see that
where we used that \(\epsilon <\frac{1}{100 \mu (\Gamma _o)}\) in the penultimate inequality. Finally, it follows from Lemma 5.3 (which uses that \(\kappa ^{-1}\ge \exp (e^{32})\)) that \(\kappa \ge |A|^{-1+\frac{6}{\mu (\Gamma _o)}}\) and so
which concludes the proof. \(\square \)
Remark
As we shall see, the above lemma is the only one that requires the upper bound on \(\kappa ^{-1},\) i.e. that \(\kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}}\). The other lemmas of this section only require that \(\kappa ^{-1}\le |A|\) (and in addition, with some extra effort this bound can be relaxed). If we changed the summation to be over the range \(\kappa ^{-1/4}\le |x-y| \le \log |A|\kappa ^{-1/2}\) (or so) instead of \(\kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\), then this would improve the bound somewhat. However, since the factor \(|A|^{\frac{\log (4\pi )}{\mu (\Gamma _o)}}\) would still remain in the summation (5.13), this would only lead to a slight improvement of the upper bound of \(\kappa ^{-1}\).
Our next lemma sums over pairs that are well separated.
Lemma 5.5
For any \(e^9\le \kappa ^{-1} \le |A|\) we have that
whenever \(0<\epsilon <1/2\) and |A| is large enough.
Proof
Using (2.7) we have that
where we used (4.2) in the last equality. By (3.5), \(\frac{G^{o,y-x}}{G^{o,o}}\rightarrow 0\) as \(|A|\rightarrow \infty \) since we are assuming that \(|y-x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\). Furthermore, \(\log (1-u)\ge -2u\) whenever \(0<u<1/2\) and so
As before, we observe that it follows from (3.14) and the assumption that \(\kappa ^{-1}\ge e^9\) that both \(\mu (\Gamma _o)\ge 1\) and \(G^{o,o}\ge 1\). We can now use Proposition 3.7 (which requires that \(\kappa ^{-1}\le |A|\)) to conclude that
where the penultimate inequality follows since
for large enough |A|, and the last inequality follows since \(e^x \le 1+2x\) for small enough x. Combining (5.15), (5.16) and (5.17) we see that
and so
\(\square \)
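The elementary bound \(\log (1-u)\ge -2u\) for \(0<u<1/2,\) used in the proof above, can be checked directly; with \(g(u):=\log (1-u)+2u\):

```latex
g(0)=0, \qquad
g'(u) \;=\; 2-\frac{1}{1-u} \;\ge\; 0
\quad\text{for } 0<u\le\tfrac12,
```

so \(g\ge 0\) on \((0,1/2),\) i.e. \(\log (1-u)\ge -2u\) there.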
Remark
It follows from equations (5.18) and (5.2) that
where R is some small error term. Morally, this means that o, x are “almost” independently covered. This is not surprising considering that they are separated by a distance close to the diameter of a typical loop, i.e. \(\kappa ^{-1/2}\) (recall the discussion after the statement of Theorem 1.1 in the Introduction).
We collect the above lemmas in the following proposition.
Proposition 5.6
For any \(\exp (e^{32})\le \kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}},\) \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\) and |A| large enough we have that
and that
Proof
We start by considering (5.20). Since we assume that \(\exp (e^{32})\le \kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}},\) we can use (5.4), (5.6) and (5.10) to see that
for all |A| large enough (since \(\mu (\Gamma _o)\ge 30\) by our assumption on \(\kappa ^{-1}\) and (3.14)).
The second statement follows by using (5.14) together with (5.20) and observing that
for |A| large enough. \(\square \)
We shall now use Proposition 5.6 to prove that the uncovered region at time \((1-\epsilon )u^*\) consists of a small collection of vertices all separated by a large distance. To that end, define, for \(0<\epsilon <1\),
Proposition 5.7
For any \(\exp (e^{32})<\kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}}\) and \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\) we have that
for every |A| large enough.
Proof
We use (5.20) of Proposition 5.6 to see that
for every |A| large enough. Next we observe that by (5.21) of Proposition 5.6 we have that
Recall (5.3) which states that \({\mathbb {E}}[|A_\epsilon |]=|A|^\epsilon ,\) so that \({\mathbb {E}}[(|A_\epsilon |-|A|^{\epsilon })^2] ={\mathbb {E}}[|A_\epsilon |^2]-|A|^{2 \epsilon }\). Therefore, by Chebyshev’s inequality,
for |A| large enough by using our assumption that \(\epsilon \le \frac{1}{100\mu (\Gamma _o)}\) in the penultimate inequality. Thus,
Combining (5.23) and (5.24) we then find that
where we again used the upper bound on \(\epsilon \). \(\square \)
6 Proof of Main Theorem
We will now put all the pieces together.
Proof of Theorem 1.1
In this proof we will fix \(\epsilon \) to be equal to \(\frac{1}{100 \mu (\Gamma _o)},\) but we shall apply Proposition 5.7 to \(\epsilon \) and \(\epsilon /4\). Our aim is to establish that
for every large enough finite set \(A\subset {\mathbb {Z}}^2\).
We will divide the proof of (6.1) into three cases, depending on the value of z.
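For orientation (and assuming, as in Theorem 1.1, that (6.1) compares the probability in question with the Gumbel distribution function \(e^{-e^{-z}}\)), note how the two extreme cases correspond to the tails of the Gumbel distribution:

```latex
z \le -\tfrac{\epsilon}{4}\log|A|
  \;\Longrightarrow\; e^{-e^{-z}} \le e^{-|A|^{\epsilon/4}} \to 0,
\qquad
z \ge \log|A|
  \;\Longrightarrow\; e^{-e^{-z}} \ge e^{-1/|A|} \ge 1-\tfrac{1}{|A|} \to 1,
```

as \(|A|\rightarrow \infty \), so in Cases 1 and 2 it suffices to show that the probability in (6.1) is correspondingly close to 0 and 1.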
Case 1: Here, \(z \le -\frac{\epsilon }{4} \log |A|\). Then
by the definition of \(A_\epsilon \) in (5.1). Moreover, \({\mathbb {P}}(A_{\epsilon /4} = \emptyset ) \le {\mathbb {P}}(A_{\epsilon /4}\notin H_{A,\epsilon /4})\), since \(H_{A,\epsilon /4}\) does not contain the empty set. It then follows from Proposition 5.7 that for |A| large enough,
Next, using that \(z\le -\frac{\epsilon }{4} \log |A|\) it follows that \(e^{-z}\ge |A|^{\epsilon /4},\) and so, for all |A| large enough,
Therefore,
and so for all |A| large enough, (6.1) is satisfied in this case.
Case 2: Assume now instead that \(z \ge \log |A|\). Then,
Then, since \(z \ge \log |A|\), we have that \(e^{-z}\le e^{-\log |A|}=|A|^{-1}\) and so
since \(e^x \ge 1 + x\) for every x. This and the above equation give
and so (6.1) holds also in this case.
Case 3: Assume that \(z \in \left( -\frac{\epsilon }{4} \log |A|,\log |A|\right) \) and start by observing that
We will now consider the three terms on the right hand side separately.
For the first term we note that
by Proposition 5.7. Similarly, for the second term of the right hand side of (6.4), we observe that
again by Proposition 5.7.
Consider now the third and final term of (6.4). Let \(K \in H_{A,\epsilon }\). We will show below that
After multiplication by \({\mathbb {P}}(A_\epsilon = K)\) and summation over all \(K \in H_{A,\epsilon }\) we then obtain
Summing the contributions from (6.5), (6.6) and (6.8) we conclude from (6.4) that
for all \(z \in (-\frac{\epsilon }{4} \log |A|, \log |A|)\) and |A| large enough. It may be worth recalling that, from the start of the proof, we assume that \(\epsilon =\frac{1}{100\mu (\Gamma _o)}\). This then proves (6.1) and completes the proof, modulo (6.7).
In order to prove (6.7) we consider the conditional probability
Let \(\omega _{u_1,u_2}\) denote the loops arriving between times \(u_1\) and \(u_2\) where \(u_1<u_2\). On the event that \(A_\epsilon =K\) it must be that K is covered by the loops arriving between times \((1-\epsilon )u^*\) and \(u^*+\frac{z}{\mu (\Gamma _o)}\) for the event \({\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)}\) to also occur. Therefore,
where the last equality follows from the Poissonian nature of the loop process, which implies that the distribution of the loops that fall between times \((1-\epsilon )u^*\) and \(u^*+\frac{z}{\mu (\Gamma _o)}\) is simply a Poissonian loop process with intensity \(u^*+\frac{z}{\mu (\Gamma _o)}-(1-\epsilon )u^* =\epsilon u^*+\frac{z}{\mu (\Gamma _o)}\).
We therefore see that
and using (6.10) we have
We will deal with the two terms on the right hand side of (6.11) separately.
For the first term, we will use Proposition 4.3. Let therefore \(x,y\in K\) be distinct. By the definition of \(H_{A,\epsilon }\) in (5.22), we have that, if \(x,y \in K\) and \(K \in H_{A,\epsilon }\), then \(|x-y|\ge \kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}}\). Furthermore, if \(K \in H_{A,\epsilon }\), then \(||K|-|A|^\epsilon | \le |A|^{3\epsilon /4},\) and so we have that
and in particular, (6.12) implies that \(|K|\le 2|A|^\epsilon \).
We now want to apply Proposition 4.3, and to that end we note that by (3.13) we have that \(\mu (\Gamma _o)\le \log \log \kappa ^{-1}\le \log \log |A|\). Therefore, for every \(z\in \left( -\frac{\epsilon }{4}\log |A|, \log |A|\right) \) we have that
whenever |A| is large enough. We can therefore use Proposition 4.3 together with \(\vert K \vert \le 2 \vert A \vert ^{\epsilon }\) (which follows from (6.12)), \(\epsilon \le 1\), and the fact that \(\frac{z}{\mu (\Gamma _o)}\le \frac{\log |A|}{\mu (\Gamma _o)} =u^*\) (this is the only place where we use the upper bound on z), to see that
Next, observe that for any \(\kappa ^{-1}\le |A|\) we can use (5.7) to see that
for any |A| large enough. Therefore, we see that since \(\mu (\Gamma _o)\ge 1,\)
for |A| large enough by using the assumption on \(\epsilon \). We therefore conclude that
for |A| large enough.
We can now turn to the second term of the right hand side of (6.11). As before, \(\exp \left( -\epsilon u^*\right) =\exp \left( -\epsilon \frac{\log |A|}{\mu (\Gamma _o)}\right) \) and so we have that
Then by (6.12),
Therefore, using that \(\log (1-x)\ge -x-x^2 \) for every \(0<x<1/2,\) we get that
where we used that \(1-e^{-x}\le x\) for \(x>0\) in the last inequality. It is easy to check that \(ye^{-y}\le 1\) for every y, and so
Furthermore,
since \(z\ge -\frac{\epsilon }{4}\log |A|\). Combining (6.15), (6.16) and (6.17) we obtain
Similarly, we have that \(\log (1-x)\le -x\) for every \(0<x<1/2\) and therefore
It is easy to check that the function \(f(x)=x-x^{1-\beta }\) where \(0<\beta \le 1,\) is minimized when \(x=(1-\beta )^{1/\beta }\) so that
since \((1-\beta )^{1/\beta }\le 1-\beta \) for \(0<\beta <1\). We therefore obtain from (6.19) that
Combining (6.18) and (6.20) we see that
Using this and (6.13) in (6.11) proves that
proving (6.7). This completes the proof. \(\square \)
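The minimization of \(f(x)=x-x^{1-\beta }\) used towards the end of the proof can be verified by elementary calculus:

```latex
f'(x) \;=\; 1-(1-\beta)x^{-\beta} \;=\; 0
  \;\Longleftrightarrow\; x^{\beta} = 1-\beta
  \;\Longleftrightarrow\; x = (1-\beta)^{1/\beta},
\qquad
f''(x) \;=\; \beta(1-\beta)x^{-\beta-1} \;>\; 0,
```

so the critical point is indeed a minimum; and since \(\log (1-\beta )<0\) and \(1/\beta \ge 1,\) we have \(\frac{1}{\beta }\log (1-\beta )\le \log (1-\beta ),\) i.e. \((1-\beta )^{1/\beta }\le 1-\beta \).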
7 Examples and Discussion
Our main result, Theorem 1.1, is geared to work in the worst case possible, i.e., when all the points of A are grouped close together, and in order to prove Theorem 1.1 we had to assume an upper bound on \(\kappa _n^{-1},\) i.e., that \(\kappa _n^{-1}\le |A_n|^{1-8/(\log \log |A_n|)}\). As alluded to in the introduction, we believe that the distribution of the cover time may undergo a sort of phase transition as \(\kappa _n^{-1}\) increases even further. In order to indicate this, we will here consider two simple examples. In the first one we consider the cover time of two widely separated points, while in the second we consider two almost neighboring points. However, we start by observing that for \(A=\{x\},\) we clearly have from (2.4) that
where we used (2.5) in the last equality. Therefore, \(\mu (\Gamma _o) {\mathcal {T}}(x)\) is always an exponentially distributed random variable with parameter one.
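Spelled out (using that, by translation invariance, the loops hitting a given vertex \(x\) arrive according to a Poisson process with rate \(\mu (\Gamma _o)\)):

```latex
\mathbb{P}\bigl(\mathcal{T}(x) > u\bigr)
  \;=\; \mathbb{P}\bigl(\omega_u(\Gamma_x) = 0\bigr)
  \;=\; e^{-u\,\mu(\Gamma_o)},
\qquad\text{so}\qquad
\mathbb{P}\bigl(\mu(\Gamma_o)\,\mathcal{T}(x) > t\bigr) \;=\; e^{-t}.
```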
Example 7.1
Consider a set with two points, say \(A=\{o,x\}\) and assume for convenience that |x| is even. Then, we start by noticing that
Using this, and (2.7) we then see that
Next, we trivially have that
and so it follows from (7.3) that
In order to bound the expression in (7.3) from above, we assume now that \(|x|\ge 10 \kappa ^{-2}\). We then have that (since |x| is assumed to be even)
where we used Lemma 3.1 in the first inequality and (3.11) in the second. Furthermore, as in the proof of (5.16), we see that
for \(\kappa ^{-1}\) large enough and since \(G^{o,o}\ge 1\). Combining (7.3) with (7.5) we conclude that
By fixing u and letting \(\kappa ^{-1}\rightarrow \infty ,\) we therefore see that (by using (3.14) and (2.3))
as \(\kappa ^{-1}\rightarrow \infty \). Thus, the re-scaled cover time \({\mathcal {T}}(o,x)\) behaves asymptotically like the maximum of two independent exponentially distributed random variables with parameter one. In light of the great distance between o and x, this is not surprising.
In our next example, we will consider two points which are close. We will need the following lemma, whose proof is similar to that of Lemma 3.2 and is deferred to the appendix.
Lemma 7.2
We have that for any \(\kappa >0,\)
Example 7.3
This example is similar to Example 7.1 in that \(A=\{o,x\}\). However, we will here assume that \(x=(1,1)\) so that the two points are close to each other. We start by noting that by (3.12) and Lemma 7.2 we have that
We therefore have that
and therefore (as in (7.5))
Noting that trivially,
we can therefore see that by using (7.9) and (7.2)
when \(\kappa ^{-1} \rightarrow \infty ,\) since it follows from (3.4) that in this case \(\mu (\Gamma _o) \rightarrow \infty \). We conclude that the re-scaled cover time \({\mathcal {T}}(o,x)\) behaves like a single exponentially distributed random variable with parameter one.
Recall the discussion in the Introduction concerning a possible phase transition depending on the rate at which \(\kappa _n \rightarrow 0\). The purpose of our next example is to demonstrate the (perhaps unsurprising) fact that if we allow the separation distance between the vertices in \(A_n\) to depend on the killing rate \(\kappa _n,\) such a phase transition will be absent. Allowing the separation distance to depend on \(\kappa _n\) may feel like “cheating”, but serves to demonstrate how the geometry of the sets \(A_n\) plays an important role. In order not to make this example, and indeed the entire paper, forbiddingly long, we shall be somewhat informal. Otherwise we would have to repeat large parts of Sects. 5 and 6.
Example 7.4
Consider a (large) set A and a killing rate \(\kappa \) such that \(\kappa ^{-1}\ge \log |A|,\)
for every \(x,y\in A,\) and such that \(|x-y|\) is even for every \(x,y\in A\). We then have that
where we used Lemma 3.1 in the first inequality and (3.11) in the second. Then, as in Lemma 5.5 we have that (using that \(\mu (\Gamma _o)\ge 1\) whenever \(\kappa ^{-1}\) is larger than \(e^9\) due to (3.14)),
where we used the assumption that \(\kappa ^{-1}\ge \log |A|\) in the penultimate inequality.
Next, we consider sequences \((A_n,\kappa _n)_{n \ge 1}\) of sets and killing rates with the property of \((A, \kappa )\) above, and such that \(|A_n| \rightarrow \infty \). We can then use the machinery of Sects. 5 and 6, noting that we only need to consider the case where x, y are “well separated” (i.e., Lemma 5.5). Applying this machinery demonstrates that no upper bound on \(\kappa _n^{-1}\) is needed when proving a statement analogous to Theorem 1.1 in this case. This demonstrates that if we always have a large separation between the points in \(A_n,\) as described by (7.10), we will obtain a Gumbel distribution as the limit even when \(\kappa _n \rightarrow 0\) exceedingly fast.
We end this section with an informal discussion (this is the discussion mentioned in the Introduction) concerning the case where \(\kappa _n^{-1}\gg |A_n|\) and \(A_n\) is a (very large) ball or square. In this case, we believe that, for sufficiently large values of \(\kappa _n^{-1},\) the set \(A_n\) will asymptotically be covered by the first loop which touches it. The reason for this belief can be explained in two steps as follows.
Step 1: After an exponentially distributed time with rate \(\mu (\Gamma _{A_n})\) where
the first loop that touches \(A_n\) appears. With very high probability this loop should have length at least of order \(\log \kappa _n^{-1}\). The reason we expect this is that, when analyzing \(\mu (\Gamma _o)\) starting from (2.2), one can show that the contribution from loops of length smaller than \(\log \kappa _n^{-1}\) is small compared to the total sum, which is of order \(\log \log \kappa _n^{-1}\) (due to (3.13) and (3.14)).
Step 2: Given a typical loop of length at least of order \(\log \kappa _n^{-1}\) from Step 1 touching \(A_n\), the probability that it in fact covers the entirety of \(A_n\) is again very high whenever \(\kappa _n^{-1}\) is large enough (note that this would imply \(\mu (\Gamma _{A_n}) \approx \mu (\Gamma _o)\)). The reason why we believe this to be true stems from the corresponding problem for a simple symmetric random walk \(S_n\) started at the origin of \({\mathbb {Z}}^2\). Let \(T_n\) be the first time at which the walk has visited every site \(x\in B_n,\) where \(B_n\) is the ball of radius n in \({\mathbb {Z}}^2\). According to [11] (see also the references within for background on this challenging problem), we have that
From this, one can then conclude that “with high probability”, the ball \(B_n\) will be covered at time, say, exponential of \(\gamma (n)(\log n)^2\) where \(\gamma (n)\) is chosen appropriately. By letting
denote the trace of the random walk until time n and observing that
it follows from (7.11) that
In turn, one can hope that a similar statement can be inferred for a loop rooted at the origin by simply conditioning the random walk to be back at the origin at some suitable time. For instance, from a statement along the lines of (ignoring that \(n^{t \log n}\) may not be an integer)
one can infer that with very high probability, the ball \(B_n\) will be covered by a loop of length \(n^{\gamma (n) \log n}\) where again \(\gamma (n)\) is chosen appropriately. However, there does not seem to be an easy way to infer (7.13) directly from (7.12) without knowing an explicit rate of convergence in (7.12). The issue is of course that we are conditioning on an event which is known to have probability of order \(\left( n^{t \log n}\right) ^{-1}\). In order to turn the intuition above into a proof, one would instead have to prove (7.13) by other means. One may attempt to adapt the techniques of [11], but even if this were possible, that may very well be an entire project in itself and outside the scope of this paper.
References
Dvoretzky, A.: On covering a circle by randomly placed arcs. Proc. Nat. Acad. Sci. USA 42, 199–203 (1956)
Janson, S.: Random coverings in several dimensions. Acta Math. 156(1–2), 83–118 (1986)
Belius, D.: Cover levels and random interlacements. Ann. Appl. Probab. 22(2), 522–540 (2012)
Sznitman, A.-S.: Vacant set of random interlacements and percolation. Ann. Math. 171, 2039–2087 (2010)
Broman, E., Mussini, F.: Random cover times using the Poisson cylinder process. ALEA 16, 1165–1199 (2019)
Camia, F.: Scaling limits, Brownian loops and conformal fields. In: Contucci, P., Giardinà, C. (eds.) Advances in disordered systems, random processes and some applications, pp. 205–269. Cambridge University Press (2017)
Le Jan, Y.: Markov paths, loops and fields. Lectures from the 38th Probability Summer School held in Saint-Flour, 2008. Lecture Notes in Mathematics 2026, École d’Été de Probabilités de Saint-Flour. Springer, Heidelberg (2011)
Sznitman, A.-S.: Topics in occupation times and Gaussian free fields. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich (2012)
Lawler, G.F., Trujillo Ferreras, J.A.: Random walk loop soup. Trans. Amer. Math. Soc. 359, 767–787 (2007)
Le Jan, Y.: Personal communication
Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Cover times for Brownian motion and random walks in two dimensions. Ann. Math. 160, 433–464 (2004)
Robbins, H.: A remark on Stirling’s formula. Amer. Math. Monthly 62, 26–29 (1955)
Dvoretzky, A., Erdős, P.: Some problems on random walk in space. In: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, 1950, pp. 353–367. University of California Press, Berkeley and Los Angeles (1951)
Lawler, G.F., Limic, V.: Random walk: a modern introduction. Cambridge studies in advanced mathematics. Cambridge University Press, Cambridge (2010)
Acknowledgements
We would like to thank the anonymous referees for providing many helpful and detailed comments. We would also like to thank Yves Le Jan for suggesting the problem studied in this paper and Stas Volkov for discussions concerning random walks.
Funding
Open access funding provided by University of Gothenburg.
Ethics declarations
Conflict of interest
E. B. was supported by the Swedish Research Council Grant number: 2017-04266. F.C. was at the time of submission an Editor of MPAG. The authors declare no further Conflict of interest.
Appendix A
In this appendix we shall provide full proofs of all lemmas of Sect. 3.
Proof of Lemma 3.1
Let \(S_n^x\) denote a simple symmetric random walk started at x and with killing rate \(\kappa =0\). Since \({\mathbb {P}}(S_{2n}^o=x)=4^{-2n} W_{2n}^{o,x},\) it suffices to show that \({\mathbb {P}}(S_{2n}^o=x)\le {\mathbb {P}}(S_{2n}^o=o)\) for every x such that |x| is even. To that end, we observe that
where we used the reversibility of the random walk in the second and third equality. \(\square \)
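As a quick numerical sanity check of Lemma 3.1 (a sketch, not part of the proof), one can compute the exact path counts \(W_{2n}^{o,x}\) for small n by dynamic programming and verify that the count at the origin dominates:

```python
def walk_counts(steps):
    """Exact number of simple-random-walk paths of the given length
    from the origin of Z^2 to each site, by dynamic programming."""
    counts = {(0, 0): 1}
    for _ in range(steps):
        new = {}
        for (x, y), c in counts.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                site = (x + dx, y + dy)
                new[site] = new.get(site, 0) + c
        counts = new
    return counts

# W_{2n}^{o,x} <= W_{2n}^{o,o} for every reachable x, as in Lemma 3.1.
for n in range(1, 6):
    counts = walk_counts(2 * n)
    assert max(counts.values()) == counts[(0, 0)]
```

For instance, after four steps the count at the origin is \({4 \atopwithdelims ()2}^2=36\).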
For the proof of Lemma 3.2 (and for future reference), we state a standard Stirling estimate (see [12]):
Proof of Lemma 3.2
Recall that \(L_{2n}\) denotes the number of loops rooted at o of length 2n, and note that \(W^{o,o}_{2n}=L_{2n}\). It is well known (see for instance [13]) that \(L_{2n}={2n \atopwithdelims ()n}^2\) and furthermore, by (A.1), we have that
and so
We can therefore conclude that
Next, let \(f(x)=\frac{e^{-1/(3x)}}{x}\) and observe that
and so f(x) is decreasing for all \(x\ge 1\). Therefore,
Then, observe that for any \(x>0\) we have that \(\log (1+x)\le 2x\). Therefore, by also using that \(e^{-x}\ge 1-x\) for every \(x>0,\) we see that
and so, by again using that \(e^{-x}\ge 1-x\) for every \(x>0,\)
Combining (A.2), (A.3) and (A.4) gives us that
establishing (3.3).
In order to obtain (3.4), we first observe that by a minor modification of (A.4) (changing the second to last inequality into an equality), we have that
Thus,
as desired. \(\square \)
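The identity \(L_{2n}={2n \atopwithdelims ()n}^2\) and the Stirling asymptotics used in the proof above can likewise be checked numerically (a sketch; by (A.1)-type estimates one expects \(L_{2n}\sim 16^n/(\pi n)\)):

```python
from math import comb, pi

def loop_count(n):
    # L_{2n} = binom(2n, n)^2: loops of length 2n rooted at the origin of Z^2.
    return comb(2 * n, n) ** 2

# Stirling gives binom(2n, n) ~ 4^n / sqrt(pi * n), hence L_{2n} ~ 16^n / (pi * n).
ratio = loop_count(50) / (16 ** 50 / (pi * 50))
assert abs(ratio - 1) < 0.01
```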
We now turn to Lemma 3.4.
Proof of Lemma 3.4
We will again use the Stirling estimate (A.1) to see that
and so
We obtain that for \(N\ge 1,\)
Clearly \(\left( \frac{4}{4+\kappa }\right) ^{2x} \frac{e^{1/(12x)}}{x}\) is a decreasing function of \(x>0,\) and so we get that
where we used that \(e^y\le 1+2y\) for \(0<y<1\) in the second inequality. Furthermore, we have that \(\log (1+z)\ge z/2\) for every \(0<z<1,\) and therefore
and so we obtain
Therefore, if \(N<\kappa ^{-1}\) we see that
proving (3.10). If \(N\kappa \ge 1/2\), we see from (A.5) that
proving (3.11).
For (3.12), observe that as above,
Next, we have that
and furthermore, using that \(\kappa <1,\)
where the last inequality can be verified through elementary but lengthy calculations. We can then conclude from (A.6) that
as desired. \(\square \)
Before we can prove Lemma 3.6, which is the last lemma of Sect. 3, we need to present a version of the so-called local central limit theorem for random walks. We do not claim that the following result is original, although we could not find an explicit reference. However, as we shall see, the result is an easy consequence of equation (2.4) in Theorem 2.1.1 of [14].
Lemma A.1
There exists a constant \(C<\infty ,\) such that for any \(n\ge 1\) and any \(x\in {\mathbb {Z}}^2\) such that \(|x|\ge 3,\) we have that
Proof
Let \((S_n^a)_{n \ge 1}\) be an aperiodic symmetric random walk on \({\mathbb {Z}}^2\). It follows from equation (2.4) of [14] that for any \(n\ge 1\) and \(y\in {\mathbb {Z}}^2,\)
where \(\Vert \cdot \Vert \) denotes the \(L^2\)-norm. Furthermore, using the notation in [14] (see in particular p. 4), \(\Gamma \) is the covariance matrix of the walk and \({\mathcal {J}}^*(y)^2=\Vert y \cdot \Gamma ^{-1} y \Vert \).
For a simple symmetric random walk on \({\mathbb {Z}}^2,\) it is easy to see that \(\Gamma =\frac{1}{2}I\) where I is the \(2\times 2\) identity matrix, and that \({\mathcal {J}}^*(y)^2=2\Vert y\Vert ^2\) as stated on p. 4 of [14]. It is worth noting that in [14] the notation \(|\cdot |\) is used for the \(L^2\)-norm (corresponding to Euclidean distance) while, throughout this paper, we use this notation for the \(L^1\)-norm (corresponding to graph distance), as defined at the beginning of Sect. 3. The simple symmetric random walk is however not aperiodic but rather bipartite (in the sense of p. 3 of [14]) since \({\mathbb {P}}(S_n=o)=0\) whenever n is odd. Therefore, we cannot apply (A.8) directly. Instead, we shall first consider even times n, and turn the simple symmetric walk at those times into an aperiodic random walk on \({\mathbb {Z}}^2\) as we now explain. This auxiliary walk \((S^e_n)_{n\ge 1}\) is defined by letting
for every \(n\ge 0\) and \(x\in {\mathbb {Z}}^2\). This corresponds to considering the walk at even times on the even sublattice of \({\mathbb {Z}}^2\) rotated clockwise by 45 degrees and shrunk by a factor of \(\sqrt{2}\), which gives a new walk on \({\mathbb {Z}}^2\). For example, \(S_2\) can take any of the nine values in \(\{o,\pm 2e_1,\pm 2e_2,\pm e_1\pm e_2\}\) (where \(e_1=(1,0)\) and \(e_2=(0,1)\)), which correspond to the vertices at distance 0 or distance 2 from o. The mapping T then maps these nine vertices to the set \(\{o,\pm e_1,\pm e_2,\pm e_1 \pm e_2\}\) so that for instance
and
Since the steps of the random walk \((S_n)_{n\ge 1}\) are independent, we see that \((S^e_n)_{n\ge 1}\) is an aperiodic random walk on \({\mathbb {Z}}^2\) (since \({\mathbb {P}}(S^e_n=o)>0\) for every \(n\ge 0\)). Therefore, we can apply (A.8) to this walk.
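The distribution of \(S^e_1\) and the covariance computation below can be verified by enumerating the sixteen equally likely two-step walks. The explicit formula for T used here is our assumption, reconstructed from the description above as the clockwise 45-degree rotation composed with shrinking by \(\sqrt{2}\), i.e. \(T(x,y)=(\frac{x+y}{2},\frac{y-x}{2})\); the paper's own definition is (A.9).

```python
from fractions import Fraction
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def T(x, y):
    # Assumed explicit form of the map from (A.9): clockwise 45-degree
    # rotation followed by shrinking by sqrt(2) (hypothetical reconstruction).
    return ((x + y) // 2, (y - x) // 2)

# Distribution of S^e_1 = T(S_2): each of the 16 two-step walks has probability 1/16.
dist = {}
for s1, s2 in product(STEPS, repeat=2):
    v = T(s1[0] + s2[0], s1[1] + s2[1])
    dist[v] = dist.get(v, Fraction(0)) + Fraction(1, 16)

# Covariance matrix Gamma = E[(S^e_1)_i (S^e_1)_j]; expected: (1/2) * Identity.
gamma = [[sum(p * v[i] * v[j] for v, p in dist.items()) for j in range(2)]
         for i in range(2)]
assert gamma == [[Fraction(1, 2), 0], [0, Fraction(1, 2)]]
```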
Fix \(x\in {\mathbb {Z}}^2\) such that |x| is even. Clearly, \({\mathbb {P}}(S_n=x)=0\) if n is odd and so trivially (A.7) holds in this case. If instead n is even, we have from (A.8) that
for some \(C'<\infty \). In order to continue, we need to determine \(\Gamma \!=\!{\mathbb {E}}[(S^e_1)_i(S^e_1)_j]_{1\le i,j \le 2}\) where \((S^e_1)_1,(S^e_1)_2\) denote the first and the second coordinate of \(S^e_1\), respectively. Therefore, note that
and that all four remaining probabilities equal 1/16. We then see that
and by symmetry that \({\mathbb {E}}[(S^e_1)_2(S^e_1)_2]=\frac{1}{2}\). Furthermore,
and so
Therefore
Furthermore, using the definition of the map T from (A.9) we see that
Inserting this into (A.10) we obtain
Lastly, we have that \(\Vert x\Vert ^2 \le |x|^2 \le 2\Vert x \Vert ^2\) and so we conclude that
Thus, (A.7) holds in this case for any \(C \ge 4C'\).
Assume now instead that |x| is odd so that \({\mathbb {P}}(S_n=x)=0\) whenever n is even or \(|x|>n\). For \(n \ge |x|\), we can now sum over the neighbors \(y \sim x\) and use the last inequality for each y to obtain, for \(|x|\ge 3,\)
for some \(C<\infty \), where, in the last inequality, we have used that \(n \ge |x|\). This establishes (A.7) also for |x| odd. \(\square \)
Proof of Lemma 3.6
According to Lemma A.1 we have that for \(|x|\ge 3\),
Observe next that the function \(ye^{-cy}\) is decreasing in y for \(y>1/c\), from which it follows that \(\frac{2}{n}e^{-\frac{|x|^2}{2n}}\) is increasing in n for \(n<\frac{|x|^2}{2}\). To prove (3.15), we then observe that
where the last inequality follows by taking |x| sufficiently large.
In order to prove (3.16), observe first that
Next, by using Lemma A.1 and the fact that \(ye^{-cy}\) is maximized when \(y=1/c\) whenever \(c>0,\) we see that
by taking |x| large enough. Combining (A.11) and (A.12) proves (3.16). \(\square \)
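Both facts about \(ye^{-cy}\) used in the proof above follow from the same derivative computation:

```latex
\frac{d}{dy}\left( ye^{-cy}\right) =(1-cy)e^{-cy}
\begin{cases} >0, & 0<y<1/c,\\ <0, & y>1/c,\end{cases}
```

so \(ye^{-cy}\) increases on \((0,1/c)\), attains its maximum value \(\frac{1}{ce}\) at \(y=1/c\), and decreases thereafter.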
The last proof of this Appendix is that of Lemma 7.2.
Proof of Lemma 7.2
We will be somewhat informal at the start, since the proof relies on a well-known technique for two-dimensional random walks. Indeed, using the so-called “45-degree trick” it is easy to show that
Informally, this can be explained as follows. Start by considering a clockwise rotation of \({\mathbb {Z}}^2\) by 45 degrees. After re-orientation, the original walk can be viewed as two independent one-dimensional random walks of length 2n on the rotated lattice. In order for the original random walk to end up at (1, 1), the re-oriented walk must be such that the vertical walker returns to the origin at time 2n, while the horizontal walker takes \(n+1\) steps to the right and \(n-1\) steps to the left, so that it ends up two steps to the right of the origin at time 2n.
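The count suggested by the informal explanation above, namely that the number of nearest-neighbour walks of length 2n from o to (1, 1) equals \({2n \atopwithdelims ()n}{2n \atopwithdelims ()n+1}\), can be verified by enumeration for small n (a sketch, not part of the original paper):

```python
from itertools import product
from math import comb

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walks_to(length, target):
    """Count nearest-neighbour walks on Z^2 of the given length from o to target."""
    return sum(
        1
        for walk in product(STEPS, repeat=length)
        if (sum(s[0] for s in walk), sum(s[1] for s in walk)) == target
    )

# Number of walks of length 2n ending at (1,1): C(2n, n) * C(2n, n+1),
# as predicted by the 45-degree trick.
for n in range(1, 4):
    assert walks_to(2 * n, (1, 1)) == comb(2 * n, n) * comb(2 * n, n + 1)
```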
Continuing, we have that \({2n \atopwithdelims ()n+1}=\frac{n}{n+1} {2n \atopwithdelims ()n}\) so that
As in the proof of Lemma 3.2 we then see that
We can therefore conclude that
Next, let \(f(x)=\frac{e^{-1/(3x)}}{x+1}\) and observe that
and so f(x) is decreasing for all \(x\ge 1\). Therefore,
Then, observe that for any \(x>0\) we have that \(\log (1+x)\le 2x\). Thus, since \(e^{-x}\ge 1-x\) for every \(x>0\),
and so,
Combining (A.13), (A.14) and (A.15) gives us that
We therefore conclude that
as desired. \(\square \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Broman, E.I., Camia, F. Cover Times of the Massive Random Walk Loop Soup. Math Phys Anal Geom 27, 6 (2024). https://doi.org/10.1007/s11040-024-09478-9