1 Introduction

In this paper we consider a covering problem for the massive random walk loop soup in \({\mathbb {Z}}^2\). Covering problems can be traced back to Dvoretzky (see [1]), who in 1956 studied the problem of covering the circle with a collection of randomly placed arcs of prescribed lengths. Many variants of this problem were later studied, and we mention in particular Janson’s work [2]. Informally, Janson fixed a set \(K\subset {\mathbb {R}}^d\) and asked for the first time when sets of small diameter arriving according to a Poisson process cover K completely. In particular, Janson determined the asymptotic distribution of this cover time as the diameter of the covering sets shrinks to 0.

Later, Belius [3] took a step in a new direction when he studied a variant of the problem in which the sets used to cover K are unbounded. Concretely, Belius fixed a set \(K\subset {\mathbb {Z}}^d\) and considered so-called random interlacements arriving according to a Poisson process with unit rate. These random interlacements can informally be understood as bi-infinite random walk trajectories (see [4] for more on this model). For this reason, the questions were posed for \(d\ge 3,\) as otherwise the random walks are recurrent. The use of unbounded sets in the covering means that the cover times of any two points \(x,y\in K\) are dependent regardless of the distance between x and y. A similar problem was then studied by Broman and Mussini (see [5], which also contains references to other papers on coverage problems), where now \(K\subset {\mathbb {R}}^d\) and the objects used to cover K are bi-infinite cylinders. In [5], the fact that \(K\subset {\mathbb {R}}^d\) is (in general) neither a finite nor a discrete set poses a new set of challenges.

In the present paper, we restrict our attention to \({\mathbb {Z}}^2\) and consider the so-called massive random walk loop soup. The term massive comes from the connection between loop soups and field theory, particularly the Gaussian free field. A random walk loop soup with a non-zero killing rate corresponds to a Gaussian free field with a non-zero mass term. The loops are here generated by a random walk on \({\mathbb {Z}}^2\) which at every step has a positive probability of being killed (or landing in a cemetery state). As long as this killing rate is strictly positive, it keeps very large loops from appearing near the origin and ensures a nontrivial cover time; in contrast, with a zero killing rate every vertex in \({\mathbb {Z}}^2\) would be instantly covered. In this sense, our current project is very much related to the work of [3], as we again study the trajectory of a random walk, but we are introducing a killing in order to keep our walks from becoming too long. One could also study the (somewhat easier) case of finite portions of trajectories generated by killed random walks that do not form loops, but the random walk loop soup seems more natural and interesting, in particular because of its deep connection to the Gaussian free field and to other models of statistical mechanics (see e.g., [6] and references therein). Aside from this connection, the random walk loop soup is also an object of intrinsic interest as a prototypical example of a Poissonian system amenable to rigorous analysis thanks to the vast body of knowledge on the behavior of the random walk in two dimensions. We remark that while the usual set-up for the random walk loop soup is for the case of finite graphs (see for instance [7] and [8]), the only thing which is really needed is for the Green’s function to be finite. In our full-space setting, i.e. on \({\mathbb {Z}}^2\), this is accomplished by having a non-zero killing rate \(\kappa \), see further the remark in Sect. 2.

We will give a precise definition of the massive random walk loop soup in Sect. 2, but in order to present our main results we give here a short informal explanation. We will consider a measure \(\mu \) on the set of all loops (i.e., finite walks on \({\mathbb {Z}}^2\) ending at the same vertex where they started). Because of the non-zero killing rate, the measure \(\mu \) does not give much weight to very long loops. In particular, \(\mu (\Gamma _x)<\infty \) where \(\Gamma _x\) denotes the set of loops containing \(x\in {\mathbb {Z}}^2\). Furthermore, this measure is translation invariant so that, in particular, \(\mu (\Gamma _x)=\mu (\Gamma _o)\) for every \(x \in {\mathbb {Z}}^2,\) where o denotes the origin of the square lattice. Since the quantity \(\mu (\Gamma _o)\) will play a central role, we point out already here that it is a function of the Green’s function at 0 of a random walk with killing rate \(\kappa \). Furthermore, as our main result (Theorem 1.1 below) concerns the case where \(\kappa \) goes to 0 with the sizes of the sets we are covering, it follows that \(\mu (\Gamma _o)\) implicitly depends on the sizes of the sets we are covering.

The model that we study here is then a Poisson process \(\omega \) on \(\Gamma \times [0,\infty ),\) where \(\Gamma =\bigcup _{x\in {\mathbb {Z}}^2} \Gamma _x\) is the set of all loops. Furthermore, a pair \((\gamma ,s)\in \omega \) is a loop \(\gamma \) along with a “time-stamp” s denoting the time at which loop \(\gamma \) arrives. We can then define the cover time of the set \(A\subset {\mathbb {Z}}^2\) by letting

$$\begin{aligned} {\mathcal {T}}(A):=\inf \left\{ t>0: A \subset \bigcup _{(\gamma ,s)\in \omega :s \le t} \gamma \right\} , \end{aligned}$$

where we abuse notation somewhat and identify the loop \(\gamma \) with its trace, i.e., the vertices \(x\in {\mathbb {Z}}^2\) that it encounters. Our main result, Theorem 1.1 below, concerns the asymptotic cover time of a growing sequence of sets \((A_n)_{n \ge 1}\).
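To make the definition concrete, the following minimal Python sketch (a toy illustration with a hypothetical finite family of loop traces and made-up weights, not the actual loop measure) reads \({\mathcal {T}}(A)\) off from the first-arrival times: in the Poissonian construction, a loop \(\gamma \) first arrives after an Exp\((\mu (\gamma ))\)-distributed time.

```python
import random

def cover_time(arrivals, A):
    """T(A) = inf{t : A is contained in the union of loops arrived by t},
    where arrivals is a list of (time, trace) pairs."""
    uncovered = set(A)
    for t, trace in sorted(arrivals):
        uncovered -= set(trace)
        if not uncovered:
            return t
    return float("inf")  # this finite family never covers A

def sample_arrivals(loops, weights, rng):
    """The first arrival of a loop with mu-weight m is Exp(m) distributed."""
    return [(rng.expovariate(m), trace) for trace, m in zip(loops, weights)]

# hypothetical toy family of loop traces with made-up weights
loops = [[(0, 0), (1, 0)], [(1, 0), (1, 1)], [(0, 0), (0, 1), (1, 1)]]
T = cover_time(sample_arrivals(loops, [0.5, 0.25, 0.125], random.Random(1)),
               {(0, 0), (1, 1)})
```

Since every loop in a finite family with positive weights eventually arrives, \({\mathcal {T}}(A)\) is finite here whenever the union of all traces contains A.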

Theorem 1.1

Consider a sequence of finite subsets \(A_n \subset {\mathbb {Z}}^2\) such that \(|A_n| \uparrow \infty \). Furthermore, assume that the killing rates \(\kappa _n\) are such that, for every \(n\), \(\exp (e^{32})\le \kappa _n^{-1}\le |A_n|^{1-8/(\log \log |A_n|)}\). We then have that for n large enough

$$\begin{aligned} \sup _{z \in {\mathbb {R}}}|{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A_n)-\log |A_n|\le z) -\exp (-e^{-z})| \le 12|A_n|^{-\frac{1}{800 \mu (\Gamma _o)}} \end{aligned}$$
(1.1)

and therefore

$$\begin{aligned} \mu (\Gamma _o){\mathcal {T}}(A_n)-\log |A_n| \underset{n \rightarrow \infty }{\longrightarrow }\ G \; \text { in distribution,} \end{aligned}$$
(1.2)

where G is a Gumbel distributed random variable.

Remarks

It may seem that the bound on the right hand side of (1.1) does not depend on the killing rates \(\kappa _n\). However, equations (3.13) and (3.14) show that, for \(\kappa _n^{-1}\) large, \(\mu (\Gamma _o)\) is such that \(\vert \mu (\Gamma _o)-\log \log \kappa _n^{-1} \vert < 2\) (where the constant 2 is rather arbitrary). Furthermore, the remark after the proof of Lemma 5.1 indicates that it may not be possible to drastically improve on the rate of convergence in (1.1), at least not by using the methods of this paper.

We assume the lower bound on \(\kappa _n^{-1}\) in the statement of Theorem 1.1 out of convenience, and we claim that this can be relaxed (at least somewhat) by adding further details to our proofs. However, as we deem it natural to let \(\kappa _n\rightarrow 0\) as \(|A_n|\rightarrow \infty ,\) we do not think it worthwhile to make the paper more technical than it is in order to improve on this lower bound.

Furthermore, the upper bound on \(\kappa _n^{-1}\) cannot be easily improved, at least not substantially. Indeed, the discussion after the proof of Lemma 5.4 indicates that while it may be possible to improve the upper bound by replacing the number 8 with a slightly lower number, improving it further would require new ideas, if at all possible (see further the discussion below).

We continue this section with a more in-depth discussion of our main result and of its proof.

It is straightforward to determine (see the start of Sect. 4) that the expected number of uncovered vertices \(x\in A_n\) at time \(\frac{\log |A_n|}{\mu (\Gamma _o)}\) is exactly 1, and this is why the distributional limit in Theorem 1.1 may exist at all. Furthermore, it is easy to show (see (7.1)) that \(\mu (\Gamma _o){\mathcal {T}}(o)\) is an exponentially distributed random variable with parameter 1. By using the well known fact that the maximum of independent exponentially distributed random variables (with parameters all equal to 1) converges to a Gumbel distribution, we see that (1.1) very much corresponds to the situation where the vertices of \(A_n\) are covered independently. Indeed, (1.2) means that \({\mathcal {T}}(A_n) \approx \frac{\log |A_n|+G}{\mu (\Gamma _o)}\) and, since \(\lim _{\kappa \rightarrow 0}\mu (\Gamma _o)=\infty ,\) we see that the cover time will be concentrated around \(\frac{\log |A_n|}{\mu (\Gamma _o)}\) with extremely small fluctuations of size \(\frac{G}{\mu (\Gamma _o)}\).
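The well known fact invoked here can be illustrated numerically from the exact distribution function of the maximum (a sanity check only; the choice \(n=10^6\) is illustrative): \({\mathbb {P}}(\max \le \log n + z)=(1-e^{-z}/n)^n \rightarrow \exp (-e^{-z})\).

```python
import math

def max_exp_cdf(n, z):
    """P(max of n i.i.d. Exp(1) variables <= log n + z) = (1 - e^{-z}/n)^n,
    valid once log n + z >= 0 so that the Exp(1) CDF 1 - e^{-t} applies."""
    return (1.0 - math.exp(-z) / n) ** n

def gumbel_cdf(z):
    """Standard Gumbel distribution function exp(-e^{-z})."""
    return math.exp(-math.exp(-z))

# the centered maximum is already very close to Gumbel for n = 10^6
for z in (-1.0, 0.0, 1.0, 2.0):
    assert abs(max_exp_cdf(10**6, z) - gumbel_cdf(z)) < 1e-4
```

The discrepancy at finite n is of order \(e^{-z}\cdot e^{-2z}/n\), which is why the agreement above is far better than the stated tolerance.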

Our main result is not surprising for large killing rates (such as when \(\kappa _n\) is constant). This is because in such a regime large loops are strongly suppressed, which makes our model look similar to those studied in [3, 5] and [2], where similar results are obtained. The main difference in our work is that we let the killing rate \(\kappa _n\) go to 0 as the size of the set that needs to be covered diverges.

In this situation, we expect that the behavior may depend on the geometry (to be more precise, on the sparsity) of the sets \(A_n,\) as well as on how fast the killing rates \(\kappa _n\) go to 0. To see this, assume first that we are in the “compact” case where \(A_n\) is (as close as possible to) a ball of radius \(\sqrt{n}\), and consider the following situations.

(i) If \(\kappa _n\) approaches zero very quickly, then for n large, the diameter of a typical loop intersecting \(A_n\) is vastly larger than the linear size of \(A_n\). It is then natural to expect that the first loop that arrives and touches \(A_n\) will in fact cover \(A_n\) completely, and the re-scaled cover time will simply be an exponential random variable as \(n \rightarrow \infty \). We give further support to this claim in Sect. 7, which includes a longer discussion along with two simple examples (Examples 7.1 and 7.3) further mentioned below.

(ii) If instead \(\kappa _n=|A_n|^{-\alpha }\) for some \(\alpha <1\), then for n large, the diameter of a typical loop intersecting \(A_n\) is much smaller than the linear size of \(A_n\). This will create enough independence for the re-scaled cover time to converge to a Gumbel distribution as indeed Theorem 1.1 shows.

(iii) If instead \(\kappa _n \rightarrow 0\) at a rate which is in between the two cases above, then the diameter of a typical loop may still be larger than \(A_n,\) but it will most likely not cover the entirety of \(A_n\). It is not clear to us how the cover time will behave in this intermediate case.

If \(A_n\) is sparser or stretched (say in the form of a line of length n), then the different potential phases described above may simply occur at other thresholds. However, if we allow the separation between points to depend on the killing rate, we can easily create an example (see Example 7.4) in which the limit of the cover time is always a Gumbel.

Having argued that one expects a different type of behavior for high and low killing rates, it is natural to ask where the threshold between those two regimes lies. For the “compact” case illustrated above, one may guess that the threshold could be when \(\kappa _n \sim |A_n|^{-1}\), so that the linear size of \(A_n\) in this case is of the same order as the diameter of a typical large loop, which essentially corresponds to the correlation length of the system (the separation distance at which two parts of the system become roughly independent). We remark that our main result comes very close to this supposed threshold. It may of course also be the case that the correct threshold corresponds to an even quicker rate at which \(\kappa _n \rightarrow 0,\) but if so, other methods than the ones employed in this paper will be needed to get close to the threshold.

We note that, if one takes a scaling limit, re-scaling space by \(1/\sqrt{|A_n|}\) and time by \(1/|A_n|\), then \(\kappa _n=|A_n|^{-1}\) corresponds to the near-critical regime and leads to a massive Brownian loop soup (see [6]). In contrast to this, if \(\kappa _n=|A_n|^{-\alpha }\) for some \(\alpha <1\) one expects the scaling limit to be trivial (meaning that no macroscopic loops survive), while if \(\alpha >1\) one expects to obtain the critical (i.e. scale invariant) Brownian loop soup (see [6] for further discussion).

We briefly mention Examples 7.1 and 7.3, both considering sets \(A_n\) consisting of only two points. In Example 7.1 the two points are vastly separated, and the re-scaled cover time is shown to be the maximum of two independent exponential random variables. In contrast, Example 7.3 deals with two points that are neighbors, and the re-scaled cover time is shown to be a single exponential random variable. This provides some additional support for the discussion above concerning the different potential phases.

The overall strategy of the proof of Theorem 1.1 can be informally described as follows. At a time just before the expected cover time, i.e. at time \((1-\epsilon )\frac{\log |A_n|}{\mu (\Gamma _o)}\), the not yet covered region should consist of relatively few and well separated vertices (see Proposition 5.7). These separated vertices will then be covered “almost independently” as the distances between them are so large that we will not see many loops that are large enough to hit two such vertices. The main work will go into establishing the first of these two steps, for which we will need to perform an involved and detailed second moment estimate. Although the general strategy that we will follow has been used in [3] and [5], the main part of the work and challenges here are different. This is intimately connected to the fact that the methods must be fine-tuned in order for Theorem 1.1 to work as close as possible to the case \(\kappa _n^{-1}=|A_n|\).

We end this introduction with an outline of the rest of the paper. In Sect. 2 we will define and discuss the random walk loop soup. In Sect. 3 we will obtain various estimates on the Green’s function. However, to avoid breaking the flow of the paper, many of the calculations used in this section will be deferred to an appendix (Appendix A). The results of Sect. 3 will then be used in Sect. 4 in order to obtain estimates on the probabilities involved in our second moment estimate. The latter is done in Sect. 5 and in turn, these results are used in Sect. 6 to prove our main result. Finally, Sect. 7 contains the three examples and the discussion mentioned above.

2 The Loop Soup

The purpose of this section is twofold. Firstly, it will serve to introduce necessary notation and definitions. Secondly, it will serve as a brief introduction to random walk loop soups in the particular case that we study in this paper. See also [6,7,8,9] for an overview of this model.

We consider a discrete time simple symmetric random walk loop soup in \({\mathbb {Z}}^2\) with killing rate \(\kappa >0\). In this setting, a walker positioned at x at time n will move to a neighbor y with probability \(1/(4+\kappa )\), and it will be killed (or equivalently moved to a cemetery state) with probability \(\kappa /(4+\kappa )\). We then say that \(\gamma _r\) is a loop rooted at the vertex \(x_0\) and of length \(|\gamma _r|=N\) if

$$\begin{aligned} \gamma _r=((\gamma _r)_0,\ldots ,(\gamma _r)_{N-1}) =(x_0,x_1,\ldots ,x_{N-1}), \end{aligned}$$

for any sequence of neighboring vertices \(x_0,\ldots ,x_{N-1}\) where \(x_{N-1}\) is a vertex neighboring \(x_0\). As usual (see [7] or [8]), we only consider non-trivial loops, i.e. only loops with \(N\ge 2\).

We note that while the alternative notation \(\gamma _r=(x_0,x_1,\ldots ,x_{N-1},x_0)\) may seem more natural (as it “closes the loop”), it would be more cumbersome when we want to consider time-shifts of the loops. One could of course also define the loop in terms of the edges traversed, but as we consider the cover times for vertices, this seems less natural.

Since we are considering loops in \({\mathbb {Z}}^2,\) we must have that \(|\gamma _r|\) is an even number. The rooted measure \(\mu _r\) of a fixed rooted loop \(\gamma _r\) is then defined to be

$$\begin{aligned} \mu _r(\gamma _r:|\gamma _r|= & {} 2n,(\gamma _r)_0 =x_0,(\gamma _r)_1=x_1,\ldots ,(\gamma _r)_{2n-1} =x_{2n-1})\\= & {} \frac{1}{2n}\left( \frac{1}{4+\kappa }\right) ^{2n}, \end{aligned}$$

for \(n\ge 1\). We see that \(\mu _r(\gamma _r)\) is the probability that the corresponding killed random walk on \({\mathbb {Z}}^2\) follows the loop \(\gamma _r\), multiplied by the factor 1/(2n). Intuitively, the reason for this modification is that (most) loops have 2n possible starting points and will therefore contribute 2n times in the Poissonian construction below.

Proceeding, we find that the total rooted measure of all loops rooted at \(x_0\) of length 2n becomes

$$\begin{aligned} \mu _r(\{\gamma _r:|\gamma _r|=2n,(\gamma _r)_0=x_0\}) =\frac{L_{2n}}{2n}\left( \frac{1}{4+\kappa }\right) ^{2n}, \end{aligned}$$

where \(L_{2n}\) denotes the number of loops of length 2n rooted at a fixed vertex (by translation invariance this number is the same for every root, so we may take the root to be o).
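As a quick sanity check, \(L_{2n}\) can be computed by brute force; the values agree with the classical identity \(L_{2n}=\binom{2n}{n}^2\) for closed walks on \({\mathbb {Z}}^2\) (recalled here only for illustration).

```python
from collections import Counter
from math import comb

def rooted_loop_count(length):
    """Number of walks of the given length from o back to o, computed by
    dynamic programming over the number of walks ending at each vertex."""
    walks = Counter({(0, 0): 1})
    for _ in range(length):
        nxt = Counter()
        for (x, y), c in walks.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(x + dx, y + dy)] += c
        walks = nxt
    return walks[(0, 0)]

# L_{2n} matches binom(2n, n)^2 for small n
for n in (1, 2, 3, 4):
    assert rooted_loop_count(2 * n) == comb(2 * n, n) ** 2
```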

In order to define the (unrooted) loop measure we start by defining equivalence classes of loops by saying that the rooted loops \(\gamma _r, \gamma _r'\) are equivalent if we can obtain one from the other by a time-shift. More formally, if \(|\gamma _r|=2n\), then \(\gamma _r \sim \gamma _r'\) if there exists some \(0\le m <2n\) such that

$$\begin{aligned} ((\gamma _r)_0,\ldots ,(\gamma _r)_{2n-1}) =((\gamma '_r)_m, \ldots , (\gamma '_r)_{2n-1},(\gamma '_r)_{0},\ldots ,(\gamma '_r)_{m-1}). \end{aligned}$$

We see that the equivalence class of \(\gamma _r\) contains exactly \(\frac{2n}{\textrm{mult}(\gamma _r)}\) rooted loops. Here, \(\textrm{mult}(\gamma _r)\) is the largest k such that \(\gamma _r\) can be written as the concatenation of k identical loops. We will think of an equivalence class as an unrooted loop (i.e. as a sequence of vertices with a specified order but with no specified first vertex), and we shall denote such a loop by \(\gamma \). We will occasionally write \(\gamma _r \in \gamma \) to indicate that the rooted loop \(\gamma _r\) is a member of the equivalence class \(\gamma \).
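The class-size formula \(2n/\textrm{mult}(\gamma _r)\) is easy to verify on small examples; in the following minimal sketch, rooted loops are encoded as tuples of vertices.

```python
def shifts(loop):
    """All time-shifts of a rooted loop (x_0, ..., x_{2n-1}),
    i.e. the equivalence class of the loop."""
    n = len(loop)
    return {loop[m:] + loop[:m] for m in range(n)}

def mult(loop):
    """Largest k such that the loop is a concatenation of k identical loops."""
    n = len(loop)
    return max(k for k in range(1, n + 1)
               if n % k == 0 and loop == loop[: n // k] * k)

a, b = (0, 0), (1, 0)
simple = (a, b)            # o -> (1,0) and back: mult 1, class size 2/1 = 2
doubled = (a, b, a, b)     # the same loop traversed twice: mult 2, class size 4/2 = 2
assert len(shifts(simple)) == len(simple) // mult(simple) == 2
assert len(shifts(doubled)) == len(doubled) // mult(doubled) == 2
```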

We then define the (unrooted) measure \(\mu \) on loops by letting \(\mu (\gamma )\) equal the weight of the rooted measure for a member of the equivalence class of \(\gamma ,\) multiplied by the number of members in this equivalence class. That is,

$$\begin{aligned} \mu (\gamma )= & {} \sum _{\gamma _r\in \gamma }\mu _r(\gamma _r) =\sum _{\gamma _r\in \gamma } \frac{1}{2n}\left( \frac{1}{4+\kappa }\right) ^{2n} \nonumber \\= & {} \frac{1}{2n}\left( \frac{1}{4+\kappa }\right) ^{2n}\frac{2n}{\textrm{mult}(\gamma )} =\left( \frac{1}{4+\kappa }\right) ^{2n}\frac{1}{\textrm{mult}(\gamma )}, \end{aligned}$$
(2.1)

where \(\textrm{mult}(\gamma )=\textrm{mult}(\gamma _r)\) for any (and therefore every) \(\gamma _r \in \gamma \). Equation (2.1) thus defines our measure \(\mu \). We choose to work with unrooted loops and the corresponding measure because this is the canonical choice (see [7, 8]) leading to the Brownian loop soup in the scaling limit [9].

We now let \(\Gamma _{x}^{2n}\) denote the set of all unrooted loops \(\gamma \) such that \(x\in \gamma \) and \(|\gamma |=2n\). Then, we define

$$\begin{aligned} \Gamma _{x}=\bigcup _{n=1}^\infty \Gamma _{x}^{2n}. \end{aligned}$$

We observe that

$$\begin{aligned} \mu (\Gamma _o) =\sum _{n=1}^\infty \sum _{\gamma \in \Gamma _o^{2n}} \mu (\gamma ) =\sum _{n=1}^\infty \sum _{\gamma \in \Gamma _o^{2n}} \left( \frac{1}{4+\kappa }\right) ^{2n}\frac{1}{\textrm{mult}(\gamma )}. \end{aligned}$$
(2.2)

The quantity \(\mu (\Gamma _o)\) will turn out to play an essential role in the rest of the paper. However, while (2.2) gives a concrete and easily understandable expression for \(\mu (\Gamma _o)\), it is not the most useful one, and we will instead use (2.5) below.

Returning to our measure \(\mu \) we now let \(\omega \) denote a Poisson point process on \(\Gamma \times [0,\infty )\) with intensity measure \(\mu \times dt\). Here, \(\Gamma =\bigcup _{x\in {\mathbb {Z}}^2} \Gamma _x\) simply denotes the set of all unrooted loops in \({\mathbb {Z}}^2\). We shall think of a pair \((\gamma ,t)\in \omega \) as a loop \(\gamma \) along with a “time-stamp” t which corresponds to the time at which the loop arrived. We also let

$$\begin{aligned} \omega _t:=\{\gamma \in \Gamma : (\gamma ,s)\in \omega \text { for some } s\le t\}, \end{aligned}$$

so that \(\omega _t\) is the collection of loops that have arrived before time t. It will be convenient to introduce the notation

$$\begin{aligned} {\mathcal {C}}_t=\{x\in {\mathbb {Z}}^2:x\in \gamma \text { for some } \gamma \in \omega _t\}, \end{aligned}$$

so that \({\mathcal {C}}_t\) is the covered region at time \(t>0\).

Remark

In this section, and in the rest of the paper, we will use equations such as (2.3) below from [7] and [8] involving the Green’s function. This requires a comment, since in [7] and [8], those equations are derived working with finite graphs, while we consider the infinite square lattice. The use of finite graphs in [7] and [8] is largely a matter of convenience ([10]), since it allows one to write explicit formulas in terms of determinants of finite matrices. Nevertheless, the final formulas in terms of the Green’s function are valid whenever the Green’s function is well defined.

To see why this is the case in the specific example of the massive random walk loop soup on the square lattice, the reader can think of coupling a massive loop soup on \(\mathbb {Z}^2\) and a loop soup on \([-L,L]^2 \cap \mathbb {Z}^2\), obtained from the first one by removing all loops that exit \([-L,L]^2\). If one focuses on the restrictions of the two processes to a finite window \([-L_0,L_0]^2 \cap \mathbb {Z}^2\), for any fixed \(L_0\), and sends \(L \rightarrow \infty \), the presence of a positive killing rate implies that the restriction of the second process to \([-L_0,L_0]^2 \cap \mathbb {Z}^2\) converges to that of the first. On the other hand, for the second process, one can use the formulas from [7] and [8] for any finite L. Moreover, the expressions in those formulas converge as \(L \rightarrow \infty \) because the Green’s function stays finite in that limit, due to the positive killing rate.

As just remarked, there is a close connection between the loop soup and the Green’s function for the killed simple symmetric random walk on \({\mathbb {Z}}^2\). It is known (see (4.18) on p. 74 of [8] or p. 45 of [7]) that (for the simple symmetric random walk with killing rate \(\kappa \))

$$\begin{aligned} {\mathbb {P}}(x\cap {\mathcal {C}}_u=\emptyset )= & {} {\mathbb {P}}(\not \exists \gamma \in \omega _u: o\in \gamma ) =\exp (-u\mu (\Gamma _o))\nonumber \\= & {} \left( \frac{1}{(4+\kappa )g(o,o)}\right) ^u \end{aligned}$$
(2.3)

where the first equality follows from translation invariance and the second from the construction of the Poisson process.

Here, g(x, y) is a Green’s function given by (see [8] p. 9, (1.26))

$$\begin{aligned} g(x,y)=\int _0^\infty \frac{1}{4+\kappa } {\mathbb {P}}(X_t^x=y) dt \end{aligned}$$

where \((X^x_t)_{t>0}\) is a continuous time random walk (started at x) which waits an exponential time with parameter 1 and then moves to each given neighbor with probability \(1/(4+\kappa )\) and is killed with probability \(\kappa /(4+\kappa )\). Clearly, if \(N_t\) denotes the number of steps that this random walk has taken by time t, we then have that

$$\begin{aligned} g(x,y)= & {} \int _0^\infty \frac{1}{4+\kappa } {\mathbb {P}}(X_t^x=y) dt\\= & {} \frac{1}{4+\kappa }\int _0^\infty \sum _{n=0}^\infty {\mathbb {P}}(X_t^x=y |N_t=n){\mathbb {P}}(N_t=n)dt \\= & {} \frac{1}{4+\kappa } \sum _{n=0}^\infty \int _0^\infty {\mathbb {P}}(S_n^{x,\kappa }=y)\frac{t^n}{n!}e^{-t}dt \\= & {} \frac{1}{4+\kappa } \sum _{n=0}^\infty {\mathbb {P}}(S_n^{x,\kappa }=y) \int _0^\infty \frac{t^n}{n!}e^{-t}dt =\frac{1}{4+\kappa } \sum _{n=0}^\infty {\mathbb {P}}(S_n^{x,\kappa }=y), \end{aligned}$$

where \(S_n^{x,\kappa }\) denotes a discrete time random walk started at x and with killing rate \(\kappa \).

Combining the above we obtain the formula

$$\begin{aligned} {\mathbb {P}}(x\cap {\mathcal {C}}_u=\emptyset ) =(G^{o,o})^{-u}, \end{aligned}$$
(2.4)

where

$$\begin{aligned} G^{x,y}=\sum _{n=0}^\infty {\mathbb {P}}(S_n^{x,\kappa }=y). \end{aligned}$$

We note also that from (2.3) and (2.4) we have that \(e^{-\mu (\Gamma _o)}=\frac{1}{G^{o,o}}\) and so

$$\begin{aligned} \mu (\Gamma _o)=\log G^{o,o}. \end{aligned}$$
(2.5)
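Equation (2.5) is easy to illustrate numerically by truncating the sum \(G^{o,o}=\sum _{n\ge 0}{\mathbb {P}}(S_n^{o,\kappa }=o)\); the following dynamic-programming sketch (with an illustrative truncation depth) is only a sanity check, not part of the development.

```python
import math
from collections import Counter

def green_oo_dp(kappa, nmax=60):
    """Truncated sum of P(S_n^{o,kappa} = o) for n = 0, ..., nmax, computed by
    dynamic programming: each step moves to a given neighbor with probability
    1/(4 + kappa) (the remaining mass is killed)."""
    p = Counter({(0, 0): 1.0})
    total = p[(0, 0)]                     # the n = 0 term
    step = 1.0 / (4.0 + kappa)
    for _ in range(nmax):
        nxt = Counter()
        for (x, y), mass in p.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(x + dx, y + dy)] += mass * step
        p = nxt
        total += p[(0, 0)]
    return total

def mu_gamma_o(kappa):
    """mu(Gamma_o) = log G^{o,o}, i.e. equation (2.5)."""
    return math.log(green_oo_dp(kappa))
```

For large \(\kappa \) the truncation error is geometrically small, so the computed value is essentially \(G^{o,o}\) itself; for small \(\kappa \) the depth would have to grow accordingly.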

As mentioned above, this equation will turn out to be much more useful for us than (2.2).

Observe that \(\omega _u\cap \Gamma _x \cap \Gamma _y\) is the set of loops in \(\omega _u\) which intersect both x and y. We have (according to [7], p. 45) that

$$\begin{aligned}{} & {} {\mathbb {P}}(\omega _u\cap \Gamma _x \cap \Gamma _y=\emptyset )\nonumber \\{} & {} \quad =\exp (-u \mu (\Gamma _x \cap \Gamma _y)) =\left( 1-\left( \frac{g(x,y)}{g(o,o)}\right) ^2\right) ^u =\left( 1-\left( \frac{G^{x,y}}{G^{o,o}}\right) ^2\right) ^u. \end{aligned}$$
(2.6)

We conclude that

$$\begin{aligned} {\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u= & {} \emptyset ) =\exp (-u \mu (\Gamma _x \cup \Gamma _y))\nonumber \\= & {} \exp (-2u\mu (\Gamma _x)+u\mu (\Gamma _x \cap \Gamma _y)) \nonumber \\= & {} \frac{{\mathbb {P}}(x\cap {\mathcal {C}}_u=\emptyset )^2}{{\mathbb {P}}(\not \exists \gamma \in \omega _u: x,y\in \gamma )}\nonumber \\= & {} \frac{\left( \frac{1}{G^{o,o}}\right) ^{2u}}{\left( 1-\left( \frac{G^{x,y}}{G^{o,o}}\right) ^2\right) ^u} =\left( \frac{1}{(G^{o,o})^2-(G^{x,y})^2}\right) ^u. \end{aligned}$$
(2.7)
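The chain of equalities in (2.7) reduces \({\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u=\emptyset )\) to \(\left( 1/((G^{o,o})^2-(G^{x,y})^2)\right) ^u\) via the algebraic identity \((1/G^2)^u/(1-(H/G)^2)^u=(1/(G^2-H^2))^u\) for \(0\le H<G\); a quick numerical check, with arbitrary illustrative values of G, H and u:

```python
# sanity check of the simplification behind (2.7):
# (1/G^2)^u / (1 - (H/G)^2)^u == (1/(G^2 - H^2))^u   for 0 <= H < G
for G, H, u in ((2.0, 0.5, 1.0), (3.0, 1.0, 2.5), (1.5, 0.0, 7.0)):
    lhs = (1.0 / G ** 2) ** u / (1.0 - (H / G) ** 2) ** u
    rhs = (1.0 / (G ** 2 - H ** 2)) ** u
    assert abs(lhs - rhs) < 1e-12 * rhs
```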

Much of the effort of this paper will be focused on obtaining good estimates for (2.7), and for other similar quantities. For this reason we shall need to study some aspects of the Green’s function \(G^{x,y}\) in detail, and then use these results to obtain good estimates of probabilities such as \({\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u=\emptyset )\). In order to structure this, we choose to devote Sect. 3 exclusively to results concerning Green’s functions. These results are then used in Sect. 4 to obtain our estimates for relevant probabilities.

3 Green’s Function Estimates

We will write \(S_n\) in place of \(S_n^{o,0}\), i.e. for the simple random walk started at o with no killing, and we start by observing that

$$\begin{aligned} G^{o,x}= & {} \sum _{n=0}^\infty {\mathbb {P}}(S^{o,\kappa }_{n}=x) =\sum _{n=|x|}^\infty \left( \frac{4}{4+\kappa }\right) ^{n}{\mathbb {P}}(S_{n}=x)\nonumber \\= & {} \sum _{n=|x|}^\infty \left( \frac{1}{4+\kappa }\right) ^{n}W^{o,x}_{n} \end{aligned}$$
(3.1)

where \(W^{o,x}_n\) denotes the number of walks of length n starting at the origin and ending at x. In (3.1) and in the rest of the paper, for \(x=(x_1,x_2) \in \mathbb {Z}^2\), |x| denotes \(|x_1|+|x_2|\). It is clear from (2.7) that in order to bound \({\mathbb {P}}(\{x,y\} \cap {\mathcal {C}}_u=\emptyset )\) we should strive to find good estimates for \(G^{o,o},G^{o,x}\) and the difference

$$\begin{aligned} G^{o,o}-G^{o,x} =\sum _{n=0}^\infty \left( \frac{1}{4+\kappa }\right) ^{n}W^{o,o}_{n} -\sum _{n=|x|}^\infty \left( \frac{1}{4+\kappa }\right) ^{n}W^{o,x}_{n}, \end{aligned}$$
(3.2)

and this is the main purpose of this section. The main results of the current section are Propositions 3.3 and 3.7. Proposition 3.3 will give estimates on (3.2) for small and moderate values of |x|, while Proposition 3.7 will provide estimates on \(G^{o,x}\) for large values of |x|. We mention that Proposition 3.7 is somewhat specialized to work for large values of \(\kappa ^{-1}\).

To avoid breaking the flow of the paper, the proofs of elementary lemmas concerning the number of walks \(W_n^{o,x},\) and estimates on partial sums of (3.1), are deferred to Appendix A.

We will now focus on Proposition 3.3, which will be proved through two lemmas. Firstly, Lemma 3.1 will allow us to estimate \(G^{o,o}-G^{o,x}\) in terms of partial sums of \(G^{o,o}\). Then, we will use Lemma 3.2 to quantify these bounds. In order to prove Proposition 3.7 we will use a consequence (Lemma 3.6) of the local central limit theorem, along with Lemmas 3.1 and 3.4.

We can now state our first lemma which is proved in Appendix A.

Lemma 3.1

For any \(x\in {\mathbb {Z}}^2\) such that |x| is even, we have that

$$\begin{aligned} W_{2n}^{o,x}\le W_{2n}^{o,o} \end{aligned}$$

for every \(n\ge 0\).

Our second lemma (again proved in Appendix A) provides lower bounds on the partial sums of

$$\begin{aligned} G^{o,o}=\sum _{n=0}^{\infty } \left( \frac{1}{4+\kappa }\right) ^{2n}W_{2n}^{o,o}. \end{aligned}$$

Lemma 3.2

For any \(\kappa >0\) and any \(N\ge 1\) we have that

$$\begin{aligned} \sum _{n=0}^{N-1} \left( \frac{1}{4+\kappa }\right) ^{2n}W^{o,o}_{2n} \ge 1+\frac{\log N}{\pi }-\frac{N\kappa }{\pi }-\frac{1}{3 \pi }. \end{aligned}$$
(3.3)

Furthermore,

$$\begin{aligned} G^{o,o}\ge \frac{\log \kappa ^{-1}}{\pi }+1-\frac{4}{3 \pi }. \end{aligned}$$
(3.4)
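The bound (3.4) can be checked numerically: since all terms of the series for \(G^{o,o}\) are positive, any truncation is itself a lower bound on \(G^{o,o}\), and using the closed-walk count \(W_{2n}^{o,o}=\binom{2n}{n}^2\) (recalled here only for the check) the truncated series already exceeds the right hand side of (3.4).

```python
import math

def green_oo(kappa, nmax=5000):
    """Truncated (hence undervalued) series for G^{o,o}, using the closed-walk
    count W_{2n}^{o,o} = binom(2n, n)^2 on Z^2."""
    r = (4.0 / (4.0 + kappa)) ** 2        # weight of one extra pair of steps
    term, total = 1.0, 1.0                # the n = 0 term equals 1
    for n in range(1, nmax + 1):
        # P(S_{2n} = o) satisfies the ratio p_n / p_{n-1} = ((2n-1)/(2n))^2
        term *= r * ((2 * n - 1) / (2 * n)) ** 2
        total += term
    return total

# the truncated series already beats the lower bound (3.4)
for kappa in (0.5, 0.1, 0.01):
    bound = math.log(1.0 / kappa) / math.pi + 1.0 - 4.0 / (3.0 * math.pi)
    assert green_oo(kappa) >= bound
```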

We are now ready to state and prove our first result concerning the difference (3.2). Proposition 3.3 will be “basic” in the sense that the statements are not the strongest possible, but they are sufficient for our purposes. Later, we will prove Proposition 3.7 which will be more specialized.

Proposition 3.3

For any \(\kappa >0\) we have that

$$\begin{aligned} \lim _{|x|\rightarrow \infty } G^{o,x} = 0. \end{aligned}$$
(3.5)

For any \(|x|\ge 1\) we have that

$$\begin{aligned} G^{o,o}-G^{o,x} \ge \frac{3}{4}. \end{aligned}$$
(3.6)

If \(4\le |x|\le 2\kappa ^{-1},\) and \(\kappa ^{-1}\ge 2,\) we have that

$$\begin{aligned} G^{o,o}-G^{o,x} \ge \frac{\log |x|}{\pi }. \end{aligned}$$
(3.7)

Proof

The first result, i.e., (3.5), is an immediate consequence of (3.1) since we clearly have that

$$\begin{aligned} G^{o,x} =\sum _{n=|x|}^\infty \left( \frac{4}{4+\kappa }\right) ^{n}{\mathbb {P}}(S_{n}=x) \le \sum _{n=|x|}^\infty \left( \frac{4}{4+\kappa }\right) ^{n} \rightarrow 0, \end{aligned}$$

as \(|x|\rightarrow \infty \).

We now turn to (3.7) and we will also assume (momentarily) that |x| is an even number. We have that

$$\begin{aligned} G^{o,o}-G^{o,x} = \sum _{n=0}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} (W_{2n}^{o,o}-W_{2n}^{o,x}) \ge \sum _{n=0}^{\frac{|x|}{2}-1} \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}, \end{aligned}$$
(3.8)

by using Lemma 3.1 and the fact that \(W_{2n}^{o,x}=0\) for \(n\le |x|/2-1\). We can then use (3.3) with \(N=|x|/2\) to see that

$$\begin{aligned} G^{o,o}-G^{o,x}\ge & {} 1+\frac{\log (|x|/2)}{\pi }-\frac{\kappa |x|/2}{\pi } -\frac{1}{3 \pi }\\\ge & {} \frac{\log |x|}{\pi }+1-\frac{\log 2}{\pi } -\frac{1}{\pi }-\frac{1}{3\pi } \ge \frac{\log |x|}{\pi }, \end{aligned}$$

where we used the assumption that \(|x|\le 2 \kappa ^{-1}\) in the second inequality. This proves (3.7) in the case when |x| is even.

We will have to take some extra care when |x| is odd. Therefore, assume that \(x=(2l+1,2k)\) with \(5\le |x|\le 2\kappa ^{-1}\) and observe that by (3.1),

$$\begin{aligned} G^{o,(2l+1,2k)}= & {} \sum _{n=0}^\infty \left( \frac{1}{4+\kappa }\right) ^{n}W^{o,(2l+1,2k)}_{n} \\= & {} \sum _{n=0}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} \left( W^{o,(2l,2k)}_{n-1}+W^{o,(2l+2,2k)}_{n-1} +W^{o,(2l+1,2k-1)}_{n-1}\right. \\{} & {} \left. +W^{o,(2l+1,2k+1)}_{n-1}\right) \\= & {} \frac{1}{4+\kappa } \left( G^{o,(2l,2k)}+G^{o,(2l+2,2k)} +G^{o,(2l+1,2k-1)}+G^{o,(2l+1,2k+1)}\right) . \end{aligned}$$

We then conclude that

$$\begin{aligned}{} & {} G^{o,o}-G^{o,(2l+1,2k)}\\{} & {} \quad \ge \frac{1}{4}\left( 4G^{o,o}-G^{o,(2l,2k)}-G^{o,(2l+2,2k)} -G^{o,(2l+1,2k-1)}-G^{o,(2l+1,2k+1)}\right) \\{} & {} \quad \ge \frac{1}{2} \sum _{n=0}^{(|x|-1)/2-1} \left( \frac{1}{4+\kappa }\right) ^{2n}W_{2n}^{o,o} +\frac{1}{2}\sum _{n=0}^{(|x|+1)/2-1} \left( \frac{1}{4+\kappa }\right) ^{2n}W_{2n}^{o,o}, \end{aligned}$$

since at most two of the neighbors of x are closer to o than x. By again using (3.3) we now see that

$$\begin{aligned}{} & {} G^{o,o}-G^{o,(2l+1,2k)}\\{} & {} \quad \ge \frac{1}{2} \left( 1+\frac{\log (|x|-1)-\log 2}{\pi } -\frac{(|x|-1)\kappa }{2 \pi }-\frac{1}{3 \pi }\right) \\{} & {} \qquad +\frac{1}{2}\left( 1+\frac{\log (|x|+1)-\log 2}{\pi } -\frac{(|x|+1)\kappa }{2 \pi }-\frac{1}{3 \pi }\right) \\{} & {} \quad =\frac{\log (|x|^2-1)}{2\pi }+1-\frac{\log 2}{\pi } -\frac{|x| \kappa }{2 \pi }-\frac{1}{3 \pi }. \end{aligned}$$

Furthermore, we have that \(y^2-1\ge \frac{8y^2}{9}\) for every \(y\ge 3\) and therefore,

$$\begin{aligned} G^{o,o}-G^{o,(2l+1,2k)}\ge & {} \frac{\log \frac{8}{9}+\log |x|^2}{2\pi } +1-\frac{\log 2}{\pi } -\frac{|x| \kappa }{2 \pi }-\frac{1}{3 \pi } \nonumber \\\ge & {} \frac{\log |x|}{\pi } +1+\frac{\log 8-\log 9-\log 4}{2 \pi } -\frac{1}{\pi }-\frac{1}{3\pi } \ge \frac{\log |x|}{\pi },\nonumber \\ \end{aligned}$$
(3.9)

where we used that \(|x|\le 2\kappa ^{-1}\) in the penultimate inequality. By symmetry, the same estimate holds when \(G^{o,(2l+1,2k)}\) is replaced by \(G^{o,(2l,2k+1)},\) and this establishes (3.7).
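Purely as an illustrative sanity check (not part of the proof), the bound (3.7) can be verified numerically for moderate |x| by computing the killed Green's function through the heat-kernel recursion of the simple random walk; the values \(\kappa =0.1\) and \(x=(5,0)\) below are arbitrary choices satisfying \(4\le |x|\le 2\kappa ^{-1}\):

```python
import numpy as np

# Heat-kernel dynamic programming for the killed walk on Z^2:
# p holds P(S_n = x), and G accumulates sum_n (4/(4+kappa))^n P(S_n = x),
# which equals G^{o,x} = sum_n (1/(4+kappa))^n W_n^{o,x} as in (3.1).
kappa, nmax = 0.1, 200
L = nmax                      # walks of length <= nmax never leave the grid
p = np.zeros((2 * L + 1, 2 * L + 1))
p[L, L] = 1.0
G = p.copy()                  # the n = 0 term
weight = 1.0
for n in range(1, nmax + 1):
    p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                + np.roll(p, 1, 1) + np.roll(p, -1, 1))
    weight *= 4.0 / (4.0 + kappa)
    G += weight * p

x = (5, 0)                    # satisfies 4 <= |x| <= 2/kappa = 20
diff = G[L, L] - G[L + x[0], L + x[1]]
print(diff, np.log(5) / np.pi)   # diff should dominate log|x|/pi
```

Truncating at walks of length 200 discards only a geometrically small tail, which is negligible compared with the slack in (3.7).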

For (3.6), consider first the case when \(x=(1,0)\) and observe that, as above,

$$\begin{aligned} G^{o,o}-G^{o,(1,0)} \ge \frac{1}{4}\left( 4G^{o,o}-G^{o,o}-G^{o,(2,0)} -G^{o,(1,1)}-G^{o,(1,-1)}\right) \ge \frac{3}{4}, \end{aligned}$$

where we used (3.8) to conclude that \(G^{o,o}-G^{o,x}\ge W_0^{o,o}=1\) whenever \(|x|\ge 2\) is even. The statement then follows for all \(|x|=1\) by symmetry. Next, if x is such that \(|x|\ge 2\) is even, we again observe that by (3.8), \(G^{o,o}-G^{o,x}\ge W_0^{o,o}=1\). For odd values of \(|x|\ge 3\) we can sum over the neighbors to reach the same conclusion. \(\square \)

Our next lemma will give upper bounds on the tails of the sums in \(G^{o,o}\). The first part (i.e. (3.10)) will be used to prove Proposition 3.7, while the second part (i.e. (3.11)) will be used to prove Propositions 3.5 and 3.7, and the last part (i.e. (3.12)) will be used in later sections. The proof is again deferred until Appendix A.

Lemma 3.4

For any \(0<\kappa <1,\) and any \(N\in \{1,2,\ldots \}\) such that \(N\kappa <1\), we have that

$$\begin{aligned} \sum _{n=N+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le \frac{\log (N\kappa )^{-1}}{\pi }+\frac{1}{6 \pi N}+4. \end{aligned}$$
(3.10)

On the other hand, if \(N\kappa \ge 1/2,\) then

$$\begin{aligned} \sum _{n=N+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}\le 4e^{-N\kappa /4}. \end{aligned}$$
(3.11)

Furthermore,

$$\begin{aligned} G^{o,o}\le \frac{\log \kappa ^{-1}}{\pi }+2. \end{aligned}$$
(3.12)
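As a numerical illustration of the bounds (3.4) and (3.12), one can evaluate \(G^{o,o}\) directly. The sketch below (not part of the paper) relies on the standard identity \(W_{2n}^{o,o}=\binom{2n}{n}^2\) for the simple random walk on \({\mathbb {Z}}^2\), and \(\kappa =0.01\) is an arbitrary illustrative choice:

```python
import math

def green_oo(kappa, nmax=20000):
    # G^{o,o} = sum_{n>=0} (4+kappa)^{-2n} W_{2n}^{o,o}, with the standard
    # identity W_{2n}^{o,o} = C(2n, n)^2 on Z^2. Successive terms are built
    # from the squared ratio of central binomial coefficients, which avoids
    # overflow from the huge raw walk counts.
    term, total = 1.0, 1.0
    for n in range(1, nmax + 1):
        term *= (2.0 * (2 * n - 1) / n) ** 2 / (4.0 + kappa) ** 2
        total += term
    return total

kappa = 0.01
G = green_oo(kappa)
lower = math.log(1 / kappa) / math.pi + 1 - 4 / (3 * math.pi)  # cf. (3.4)
upper = math.log(1 / kappa) / math.pi + 2                      # cf. (3.12)
print(lower, G, upper)
```

The computed value lands between the two bounds, and decreasing the killing rate increases \(G^{o,o}\) as the logarithmic asymptotics predict.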

Our next proposition is elementary and presumably far from optimal, but useful nevertheless.

Proposition 3.5

Assume that \(|x|\ge 2 \kappa ^{-1}\) and that \(\kappa ^{-1}\ge e^{30}\). Then we have that

$$\begin{aligned} G^{o,x}\le \frac{G^{o,o}}{2}. \end{aligned}$$

Proof

Assume first that |x| is even. By (3.1) and Lemma 3.1 we then have that

$$\begin{aligned} G^{o,x}=\sum _{n=|x|}^\infty \left( \frac{1}{4+\kappa }\right) ^n W_n^{o,x} =\sum _{n=\frac{|x|}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,x} \le \sum _{n=\frac{|x|}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}. \end{aligned}$$

Using this, and then applying (3.11) with \(N=|x|/2-1\) so that \(N \kappa = (|x|/2-1)\kappa \ge (\kappa ^{-1}-1)\kappa = 1-\kappa \ge 1/2\) (using our assumptions on |x| and \(\kappa ^{-1}\)) we have that,

$$\begin{aligned} G^{o,x} \le \sum _{n=\frac{|x|}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le 4 e^{-(|x|/2-1)\kappa /4} \le 4 e^{-(\kappa ^{-1}-1)\kappa /4}\le 4 \end{aligned}$$

again, since we assume that \(|x|\ge 2 \kappa ^{-1}\) and \(\kappa ^{-1} \ge e^{30}\).

If instead |x| is odd, we can sum over the neighbors \(y \sim x\) of x and use Lemma 3.1 to see that

$$\begin{aligned} G^{o,x}= & {} \sum _{n=|x|}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} =\sum _{n=|x|}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} \sum _{y \sim x}W_{n-1}^{o,y}\\= & {} \sum _{n=|x|-1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n+1} \sum _{y\sim x}W_n^{o,y} = \frac{1}{4+\kappa } \sum _{y\sim x} \sum _{n=|x|-1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} W_{n}^{o,y}\nonumber \\= & {} \frac{1}{4+\kappa } \sum _{y\sim x} \sum _{n=\frac{|x|-1}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,y} \le \frac{1}{4+\kappa } \sum _{y\sim x} \sum _{n=\frac{|x|-1}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}\nonumber \\= & {} \frac{4}{4+\kappa } \sum _{n=\frac{|x|-1}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le \sum _{n=\frac{|x|-1}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}.\nonumber \end{aligned}$$

Using this and (3.11) we then see that

$$\begin{aligned} G^{o,x} \le \sum _{n=\frac{|x|-1}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le 4 e^{-((|x|-1)/2-1)\kappa /4} \le 4 e^{-(\kappa ^{-1}-3/2)\kappa /4}\le 4. \end{aligned}$$

Furthermore, by (3.4) we have that

$$\begin{aligned} G^{o,o}\ge \frac{\log \kappa ^{-1}}{\pi }+1-\frac{4}{3 \pi } \ge 10, \end{aligned}$$

since \(\kappa ^{-1}\ge e^{30}\) by assumption, and so the statement follows. \(\square \)

For future reference, we note that by (3.12) and (2.5), we have that

$$\begin{aligned} \mu (\Gamma _o)=\log G^{o,o} \le \log \left( \frac{\log \kappa ^{-1}}{\pi }+2\right) . \end{aligned}$$
(3.13)

Similarly, by using (3.4) in place of (3.12) we have that

$$\begin{aligned} \mu (\Gamma _o)=\log G^{o,o} \ge \log \left( \frac{\log \kappa ^{-1}}{\pi }+1-\frac{4}{3\pi }\right) . \end{aligned}$$
(3.14)

Intuitively, in order for x to have a “decent chance of being hit” by a walk of length n starting at the origin o,  the length n should be of order close to \(|x|^2\) or larger. Therefore, we see from (3.1) that the contribution to \(G^{o,x}\) from walks which are considerably shorter than \(|x|^2\) should be small. This is made precise in our next lemma (again the proof is deferred to Appendix A), where the first statement (3.15) shows that the total contribution to \(G^{o,x}\) coming from walks that are shorter than \(|x|^2/(2\log |x|)\) is negligible as \(|x|\rightarrow \infty \). Both statements of this lemma will be useful in order to obtain a good estimate for \(G^{o,x}\) in Proposition 3.7.

Lemma 3.6

For every |x| large enough, we have that

$$\begin{aligned} \sum _{n=|x|}^{\left\lfloor \frac{|x|^2}{2\log |x|}\right\rfloor } \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} \le 3|x|^{-1}, \end{aligned}$$
(3.15)

and that

$$\begin{aligned} \sum _{n=\left\lfloor \frac{|x|^2}{2\log |x|}\right\rfloor +1}^{|x|^2} \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} \le 2\left( 1-\frac{\kappa }{4+\kappa }\right) ^{\frac{|x|^2}{2 \log |x|}}. \end{aligned}$$
(3.16)

Using this result, we can now prove the following proposition.

Proposition 3.7

Assume that \(e^9\le \kappa ^{-1}\le |A|\). For large enough |A|,  we then have that

$$\begin{aligned} G^{o,x} \le |A|^{-\frac{1}{2\mu (\Gamma _o)}} \end{aligned}$$

for every \(x\in {\mathbb {Z}}^2\) such that

$$\begin{aligned} |x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}. \end{aligned}$$

Proof

Using (3.15) we have that

$$\begin{aligned} G^{o,x}= & {} \sum _{n=|x|}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x}\nonumber \\\le & {} 3|x|^{-1} +\sum _{n=\left\lfloor \frac{|x|^2}{2\log |x|}\right\rfloor +1}^{|x|^2} \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} +\sum _{n=|x|^2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x}.\nonumber \\ \end{aligned}$$
(3.17)

Furthermore, by (3.13) and our assumptions on \(\kappa ^{-1},\) it is easy to verify that \(\mu (\Gamma _o) \le \log \log |A|\). Therefore,

$$\begin{aligned} |A|^{\frac{1}{\mu (\Gamma _o)}}\ge |A|^{\frac{1}{\log \log |A|}} \rightarrow \infty \qquad \text { as } \vert A \vert \rightarrow \infty , \end{aligned}$$
(3.18)

and so we note that, by using the lower bound on |x|,  and that \(\kappa ^{1/2}\le e^{-9/2}\) by assumption,

$$\begin{aligned} 3|x|^{-1}\le 3|A|^{-\frac{1}{\mu (\Gamma _o)}}\kappa ^{1/2} \le |A|^{-\frac{1}{\mu (\Gamma _o)}}, \end{aligned}$$
(3.19)

for |A| large enough. Next we observe that by (3.16)

$$\begin{aligned} \sum _{n=\left\lfloor \frac{|x|^2}{2\log |x|}\right\rfloor +1}^{|x|^2} \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x}\le & {} 2\left( 1-\frac{\kappa }{4+\kappa }\right) ^{\frac{|x|^2}{2 \log |x|}}\\\le & {} 2\left( 1-\frac{\kappa }{4+\kappa }\right) ^{\frac{|A|^{\frac{2}{\mu (\Gamma _o)}}\kappa ^{-1}}{2 \log \left( |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) }} \end{aligned}$$

by again using the lower bound on |x| in the last inequality, together with the fact that the function \(x^2/(2\log x)\) is increasing for x large. Furthermore, by using that \(\log (1-x)\le -x\) for any \(0<x<1\), we see that

$$\begin{aligned}{} & {} \exp \left( \frac{|A|^{\frac{2}{\mu (\Gamma _o)}}\kappa ^{-1}}{2 \log \left( |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) } \log \left( 1-\frac{\kappa }{4+\kappa }\right) \right) \nonumber \\{} & {} \quad \le \exp \left( \frac{|A|^{\frac{2}{\mu (\Gamma _o)}}\kappa ^{-1}}{2 \log \left( |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) } \left( -\frac{\kappa }{5}\right) \right) =\exp \left( -\frac{|A|^{\frac{2}{\mu (\Gamma _o)}}}{10 \log \left( |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) } \right) .\nonumber \\ \end{aligned}$$
(3.20)

Using (3.14) we observe that \(\mu (\Gamma _o)\ge 1\) and so \(|A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\le |A|^{3/2}\) by our assumption on \(\kappa \). By also using (3.18) we can therefore conclude from (3.20) that, for \(\vert A \vert \) large enough,

$$\begin{aligned}{} & {} \exp \left( \frac{|A|^{\frac{2}{\mu (\Gamma _o)}}\kappa ^{-1}}{2 \log \left( |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) } \log \left( 1-\frac{\kappa }{4+\kappa }\right) \right) \\{} & {} \quad \le \exp \left( -\frac{|A|^{\frac{2}{\log \log |A|}}}{10 \log |A|^{3/2}}\right) \le \exp \left( -2\log |A|\right) =\frac{1}{|A|^2}, \end{aligned}$$

where the last inequality follows from elementary considerations and is not optimal. Therefore,

$$\begin{aligned} \sum _{n=\left\lfloor \frac{|x|^2}{2\log |x|}\right\rfloor +1}^{|x|^2} \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} \le \frac{2}{|A|^2}\le \frac{1}{|A|}, \end{aligned}$$
(3.21)

for |A| large enough.

Next, assume that |x| is even and note that

$$\begin{aligned}{} & {} \sum _{n=|x|^2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} =\sum _{n=|x|^2+2}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} \nonumber \\{} & {} \quad = \sum _{n=|x|^2/2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,x} \le \sum _{n=|x|^2/2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}, \end{aligned}$$
(3.22)

where the first equality uses that since \(|x|^2\) is even, \(W_{|x|^2+1}^{o,x}=0,\) and where the inequality follows from Lemma 3.1. Next, we want to apply (3.11) with \(N=|x|^2/2\) to the right hand side of (3.22). For this we need to verify that \(N \kappa \ge 1/2,\) and indeed, by our assumption on |x|,  we here have that \(N \kappa =|x|^2 \kappa /2 \ge (\kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}})^2 \kappa /2 = |A|^{\frac{2}{\mu (\Gamma _o)}}/2\ge 1/2\). Applying (3.11) we can therefore conclude that (by again using our assumption on |x|),

$$\begin{aligned} \sum _{n=|x|^2/2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le 4e^{-|x|^2\kappa /8} \le 4e^{-|A|^{\frac{2}{8\mu (\Gamma _o)}}} \le |A|^{-2}, \end{aligned}$$
(3.23)

for |A| large enough. Inserting (3.19), (3.21) and (3.23) into (3.17) we get that

$$\begin{aligned} G^{o,x}\le |A|^{-\frac{1}{\mu (\Gamma _o)}} +|A|^{-1}+|A|^{-2} \le |A|^{-\frac{1}{2\mu (\Gamma _o)}}, \end{aligned}$$
(3.24)

for |A| large enough, since \(\mu (\Gamma _o)\ge 1\) as before.

Assume now that |x| is odd and observe that \(|x|^2\ge |y|^2/2\) whenever \(y\sim x\) and \(\vert x \vert \ge 3\). We then sum over all \(y \sim x\) and observe that

$$\begin{aligned} \sum _{n=|x|^2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x}= & {} \sum _{n=|x|^2+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} \sum _{y \sim x} W_{n-1}^{o,y}\\\le & {} \sum _{n=\frac{|y|^2}{2}+1}^\infty \left( \frac{1}{4+\kappa }\right) ^{n} \sum _{y \sim x} W_{n-1}^{o,y}\\= & {} \sum _{n=\frac{|y|^2}{2}}^\infty \left( \frac{1}{4+\kappa }\right) ^{n+1} \sum _{y \sim x} W_{n}^{o,y} \\= & {} \sum _{n=\frac{|y|^2}{4}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n+1} \sum _{y \sim x} W_{2n}^{o,y}\\\le & {} \sum _{n=\frac{|y|^2}{4}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n+1} \sum _{y \sim x} W_{2n}^{o,o} \\\le & {} \sum _{n=\frac{|y|^2}{4}}^\infty \left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o}\\\le & {} 4 e^{-|y|^2 \kappa /16} \le 4 e^{-|x|^2 \kappa /32} \le 4e^{-|A|^{\frac{2}{32 \mu (\Gamma _o)}}} \le |A|^{-2}, \end{aligned}$$

where we used (3.11) in the fourth inequality and that \(|y|^2\ge |x|^2/2\) in the fifth (which holds for \(y \sim x\) and \(|x|\ge 4\)). We see that (3.24) holds also for this case, which concludes the proof. \(\square \)

4 Probability Estimates

Recall our main goal of obtaining estimates on the cover times of a sequence of growing sets. In order to get to that point, we need to consider a generic set A,  which one can typically think of as being very large. Consider then

$$\begin{aligned} u^*=\frac{\log |A|}{\mu (\Gamma _o)}, \end{aligned}$$
(4.1)

and note that \(u^*=u^*(|A|,\kappa )\). The relevance of \(u^*\) can be seen by first observing that by (2.5)

$$\begin{aligned} \left( \frac{1}{G^{o,o}}\right) ^{u^*} =\exp (-u^*\mu (\Gamma _o))=|A|^{-1}, \end{aligned}$$
(4.2)

and then that it follows from (2.4) that the expected number of uncovered vertices at time \(u^*\) is 1. Informally, given enough independence, this is why the cover time of the generic set A should be around \(u^*,\) as mentioned in the discussion after the statement of Theorem 1.1 in the Introduction. Of course, what constitutes enough independence is hard to quantify, and this question is at the heart of the mentioned discussion as well as that of Sect. 7.
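The defining property (4.2) of \(u^*\) amounts to one line of arithmetic; the following sketch (with hypothetical illustrative values for |A| and \(\mu (\Gamma _o)\)) confirms that the expected number of uncovered vertices \(|A|\exp (-u\mu (\Gamma _o))\) equals exactly one at \(u=u^*\):

```python
import math

# Toy numbers: |A| and mu(Gamma_o) are hypothetical illustrative values.
A_size = 10 ** 6
mu = 2.0
u_star = math.log(A_size) / mu            # definition (4.1)

# (4.2): exp(-u* mu) = 1/|A|, so the expected number of vertices of A
# left uncovered at time u* (cf. (2.4)) equals one.
expected_uncovered = A_size * math.exp(-u_star * mu)
print(u_star, expected_uncovered)
```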

Before we start presenting the results of this section, recall the discussion at the end of the introduction. In short, we want to consider the covered set at time \((1-\epsilon )u^*=(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)},\) which by the intuition above should be “just before coverage” when \(\epsilon >0\) is very small. We want to show that the set yet to be covered at that time consists of relatively few well-separated points. This result is obtained in Sect. 5 (in particular Proposition 5.7), using a second moment argument. In order to carry out this argument, we need to understand the probability that two points o and x both belong to the uncovered set. This probability will of course depend heavily on the separation of o and x,  and the main purpose of this section is to understand this dependence in detail.

Our first result is the following.

Proposition 4.1

Let \(\kappa ^{-1}>e^{30}\) and \(\epsilon \in (0,1)\). Then, for every \(x\in {\mathbb {Z}}^2\) we have that

$$\begin{aligned} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset ) \le |A|^{-(1-\epsilon )} \left( \frac{9}{8}\right) ^{-(1-\epsilon )u^*}. \end{aligned}$$
(4.3)

Furthermore, for any \(x\in {\mathbb {Z}}^2\) such that \(4\le |x|\le 2\kappa ^{-1}\) we have that

$$\begin{aligned} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset ) \le |A|^{-(1-\epsilon )} \left( \frac{\log |x|}{\pi }\right) ^{-(1-\epsilon )u^*}. \end{aligned}$$
(4.4)

If instead \(|x|\ge 2\kappa ^{-1},\) we have that

$$\begin{aligned} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset ) \le |A|^{-(1-\epsilon )} \left( \frac{\log \kappa ^{-1}}{2\pi }\right) ^{-(1-\epsilon )u^*}. \end{aligned}$$
(4.5)

Proof

We start with the first statement. Use (2.7) to see that

$$\begin{aligned} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset )= & {} \left( \frac{1}{(G^{o,o})^2-(G^{o,x})^2}\right) ^{(1-\epsilon )u^*}\nonumber \\= & {} \left( (G^{o,o}+G^{o,x})(G^{o,o}-G^{o,x})\right) ^{-(1-\epsilon )u^*}. \end{aligned}$$
(4.6)

Then, we have from (3.6) that \(G^{o,o}-G^{o,x}\ge \frac{3}{4}\). There are now two cases. Either \(G^{o,x}\ge \frac{G^{o,o}}{2}\), in which case \((G^{o,o}+G^{o,x})(G^{o,o}-G^{o,x})\ge \frac{9}{8}G^{o,o}\) and so

$$\begin{aligned} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset ) \le \left( G^{o,o}\frac{9}{8}\right) ^{-(1-\epsilon )u^*} =|A|^{-(1-\epsilon )} \left( \frac{9}{8}\right) ^{-(1-\epsilon )u^*}, \end{aligned}$$

by (4.2), or \(G^{o,x}< \frac{G^{o,o}}{2},\) in which case \((G^{o,o}+G^{o,x})(G^{o,o}-G^{o,x})\ge G^{o,o}\frac{G^{o,o}}{2}\). Furthermore, by (3.4) and our assumption on \(\kappa \) we have that \(G^{o,o}\ge \frac{\log \kappa ^{-1}}{\pi }\ge \frac{30}{\pi }\ge \frac{9}{4},\) so that \(G^{o,o}\frac{G^{o,o}}{2}\ge \frac{9}{8}G^{o,o}\) also in this case, which proves (4.3).

For our second statement, we note that it follows from (4.6) that

$$\begin{aligned}{} & {} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset )\nonumber \\{} & {} \quad \le \left( G^{o,o}\right) ^{-(1-\epsilon )u^*} \left( G^{o,o}-G^{o,x}\right) ^{-(1-\epsilon )u^*} = |A|^{-(1-\epsilon )}\left( G^{o,o}-G^{o,x}\right) ^{-(1-\epsilon )u^*},\nonumber \\ \end{aligned}$$
(4.7)

so that by (3.7), we conclude that

$$\begin{aligned} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset ) \le |A|^{-(1-\epsilon )} \left( \frac{\log |x|}{\pi }\right) ^{-(1-\epsilon )u^*}, \end{aligned}$$

which proves (4.4).

For the third statement, observe that by Proposition 3.5 we have that \(G^{o,o}-G^{o,x}\ge \frac{G^{o,o}}{2}\). Then we can use (4.7) to see that

$$\begin{aligned}{} & {} {\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{(1-\epsilon )u^*}=\emptyset ) \le |A|^{-(1-\epsilon )}\left( G^{o,o}-G^{o,x}\right) ^{-(1-\epsilon )u^*}\\{} & {} \quad \le |A|^{-(1-\epsilon )}\left( \frac{G^{o,o}}{2}\right) ^{-(1-\epsilon )u^*} \le |A|^{-(1-\epsilon )} \left( \frac{\log \kappa ^{-1}}{2\pi }\right) ^{-(1-\epsilon )u^*} \end{aligned}$$

where we used (3.4) in the last inequality. \(\square \)

Proposition 4.1 together with Proposition 3.7 will suffice when proving our desired second moment estimates. However, we shall also face the issue of covering a relatively small number of well separated vertices (i.e. vertices at mutual distance at least of the order of the correlation length \(\kappa ^{-1/2}\)). What we need is stated in Proposition 4.3 below, but in order to prove it we will first establish a preliminary result, namely Lemma 4.2.

For any \(K\subset {\mathbb {Z}}^2\), let

$$\begin{aligned} \Gamma _{K}:=\{\gamma :\gamma \cap K \ne \emptyset \} =\bigcup _{x \in K} \Gamma _x \end{aligned}$$

and

$$\begin{aligned} \Gamma _{K_1,K_2}:=\Gamma _{K_1}\cap \Gamma _{K_2}. \end{aligned}$$

Recall that \(\omega _u\) denotes the loop soup with intensity u so that \(\omega _u(\Gamma _{K_1,K_2})\) is the number of loops \(\gamma \in \omega _u\) such that \(\gamma \cap K_1\ne \emptyset \) and \(\gamma \cap K_2 \ne \emptyset \).

Lemma 4.2

Let \(K_1,K_2 \subset {\mathbb {Z}}^2\) be disjoint, and let \(E_1,E_2\) be events that are determined by \(\omega _u\) restricted to the sets \(K_1\) and \(K_2\) respectively. We have that

$$\begin{aligned} |{\mathbb {P}}(E_1\cap E_2) -{\mathbb {P}}(E_1){\mathbb {P}}(E_2)|\le 4{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0). \end{aligned}$$
(4.8)

Proof

Since the events \(E_1,E_2\) are determined by the restrictions of \(\omega _u\) to the subsets \(K_1,K_2\) respectively, they are conditionally independent given the event that \(\omega _u(\Gamma _{K_1, K_2})=0\). We then see that,

$$\begin{aligned}{} & {} {\mathbb {P}}(E_1 \cap E_2) \nonumber \\{} & {} \quad ={\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(E_2|\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) =0) \nonumber \\{} & {} \qquad + {\mathbb {P}}(E_1 \cap E_2| \omega _u(\Gamma _{K_1, K_2}) \ne 0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) \ne 0). \end{aligned}$$
(4.9)

Furthermore, writing

$$\begin{aligned} {\mathbb {P}}(E_i)= & {} {\mathbb {P}}(E_i | \omega _u(\Gamma _{K_1, K_2}) =0){\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) =0) \\{} & {} \quad + {\mathbb {P}}(E_i | \omega _u(\Gamma _{K_1, K_2}) \ne 0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) \ne 0) \end{aligned}$$

for \(i= 1,2\) and using (4.9), we see that

$$\begin{aligned}{} & {} |{\mathbb {P}}(E_1 \cap E_2)-{\mathbb {P}}(E_1){\mathbb {P}}(E_2)|\\{} & {} \quad \le |{\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(E_2|\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) =0) \\{} & {} \qquad -{\mathbb {P}}(E_1){\mathbb {P}}(E_2)|+{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) \ne 0) \\{} & {} \quad = |{\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(E_2|\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) =0) \\{} & {} \qquad -{\mathbb {P}}(E_1){\mathbb {P}}(E_2 |\omega _u(\Gamma _{K_1, K_2})=0){\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})=0) \\{} & {} \qquad +{\mathbb {P}}(E_1){\mathbb {P}}(E_2| \omega _u(\Gamma _{K_1, K_2})\ne 0){\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0)| +{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0) \\{} & {} \quad \le |{\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(E_2|\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2}) =0) \\{} & {} \qquad -{\mathbb {P}}(E_1){\mathbb {P}}(E_2 |\omega _u(\Gamma _{K_1, K_2})=0){\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})=0)| +2{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0)\\{} & {} \quad \le |{\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(E_2|\omega _u(\Gamma _{K_1, K_2}) =0) \\{} & {} \qquad -{\mathbb {P}}(E_1){\mathbb {P}}(E_2 |\omega _u(\Gamma _{K_1, K_2})=0)| +2{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0)\\{} & {} \quad =|{\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2}) =0) {\mathbb {P}}(E_2|\omega _u(\Gamma _{K_1, K_2}) =0) \\{} & {} \qquad -{\mathbb {P}}(E_2 |\omega _u(\Gamma _{K_1, K_2})=0) {\mathbb {P}}(E_1 |\omega _u(\Gamma _{K_1, K_2})=0){\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})=0) \\{} & {} \qquad +{\mathbb {P}}(E_2 |\omega _u(\Gamma _{K_1, K_2})=0){\mathbb {P}}(E_1| \omega _u(\Gamma _{K_1, K_2})\ne 0) {\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0)| \\{} & {} \qquad +2{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0) \\{} & {} \quad \le {\mathbb {P}}(E_2 | \omega _u(\Gamma _{K_1, K_2})=0) {\mathbb {P}}(E_1 | \omega _u(\Gamma _{K_1, K_2})=0)(1-{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})= 0)) \\{} & {} \qquad +3{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0)\\{} & {} \quad \le 4{\mathbb {P}}(\omega _u(\Gamma _{K_1, K_2})\ne 0). \end{aligned}$$

\(\square \)
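The mechanism behind Lemma 4.2 can be illustrated by a toy Poissonian computation: split the loops into those hitting only \(K_1\), only \(K_2\), and both, with independent Poisson counts (the rates a, b, c below are arbitrary illustrative values), and take \(E_i\) to be the event that no loop hits \(K_i\). The decoupling error is then exactly \(e^{-(a+b+c)}(1-e^{-c}),\) well within the bound (4.8):

```python
import math
import itertools

# Independent Poisson counts of loops hitting only K1, only K2, and both,
# with rates a, b, c respectively; E_i = {no loop hits K_i}.
for a, b, c in itertools.product([0.1, 0.5, 2.0], repeat=3):
    p1 = math.exp(-(a + c))            # P(E1)
    p2 = math.exp(-(b + c))            # P(E2)
    p12 = math.exp(-(a + b + c))       # P(E1 and E2)
    p_shared = 1 - math.exp(-c)        # P(omega_u(Gamma_{K1,K2}) != 0)
    gap = abs(p12 - p1 * p2)           # equals p12 * (1 - exp(-c)) here
    assert gap <= 4 * p_shared         # the bound of Lemma 4.2
print("decoupling bound verified on a grid of rates")
```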

We are now ready to state and prove the following proposition mentioned before Lemma 4.2.

Proposition 4.3

Let \(K\subset {\mathbb {Z}}^2\) and let \(\{x_1,\ldots ,x_{|K|}\}\) be an enumeration of the vertices in K. Assume further that K is such that \(|x_i-x_j|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\) for every \(i\ne j\). Then we have that

$$\begin{aligned} |{\mathbb {P}}({\mathcal {T}}(K)\le u)-{\mathbb {P}}({\mathcal {T}}(o)\le u)^{|K|}| \le 2|K|^2u |A|^{-\frac{1}{\mu (\Gamma _o)}}, \end{aligned}$$

whenever \(u\ge 1,\) \(e^9\le \kappa ^{-1}\le |A|\) and |A| is large enough.

Proof

We start by noting that by (2.6)

$$\begin{aligned} {\mathbb {P}}(\omega _u\cap \Gamma _o \cap \Gamma _x=\emptyset ) =\left( 1-\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2\right) ^{u} \ge 1-u\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2 \end{aligned}$$

where we used the elementary inequality \((1-x)^u\ge 1-ux\) for \(0<x\le 1\) and \(u\ge 1,\) together with the fact that \(\frac{G^{o,x}}{G^{o,o}}\le 1,\) which is an immediate consequence of (3.6). Note that it follows from (3.14) and the assumption that \(\kappa ^{-1}\ge e^9\) that \(\mu (\Gamma _o)=\log G^{o,o}\ge \log \left( \frac{\log \kappa ^{-1}}{\pi }+1-\frac{4}{3\pi }\right) \ge 1,\) so that in particular \(G^{o,o}\ge 1\). For \(u\ge 1\) we can therefore use Proposition 3.7 (which uses that \(|x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\) and that |A| is large enough) to see that

$$\begin{aligned} {\mathbb {P}}(\omega _u\cap \Gamma _o \cap \Gamma _x \ne \emptyset ) \le u\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2 \le u\left( G^{o,x}\right) ^2\le u |A|^{-\frac{1}{\mu (\Gamma _o)}} \end{aligned}$$
(4.10)

for every \(x\in {\mathbb {Z}}^2\) such that \(|x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\).

We then note that for any \(1\le i\le |K|\) and \(u,\kappa \) as in the assumptions, we have that

$$\begin{aligned}{} & {} {\mathbb {P}}\left( \omega _u \cap \Gamma _{K\setminus \{x_i\}} \cap \Gamma _{x_i} \ne \emptyset \right) \le \sum _{x_j\in K\setminus \{x_i\}} {\mathbb {P}}\left( \omega _u\cap \Gamma _{x_j}\cap \Gamma _{x_i} \ne \emptyset \right) \\{} & {} \quad \le (|K|-1) \max _{y\in K\setminus \{x_i\}} {\mathbb {P}}\left( \omega _u\cap \Gamma _{x_i}\cap \Gamma _{y} \ne \emptyset \right) \le (|K|-1) u |A|^{-\frac{1}{\mu (\Gamma _o)}}, \end{aligned}$$

since \(|y-x_i|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\) by assumption on K,  and where we used (4.10) in the last inequality. Using this and Lemma 4.2 we see that (with \(K_1=\{x_1\}\) and \(K_2=K{\setminus } \{x_1\}\))

$$\begin{aligned}{} & {} |{\mathbb {P}}({\mathcal {T}}(K)\le u) -{\mathbb {P}}({\mathcal {T}}(o)\le u){\mathbb {P}}({\mathcal {T}}(K \setminus \{x_1\})\le u)|\\{} & {} \quad =|{\mathbb {P}}({\mathcal {T}}(K)\le u)-{\mathbb {P}}({\mathcal {T}}(x_1)\le u){\mathbb {P}}({\mathcal {T}}(K \setminus \{x_1\})\le u)| \\{} & {} \quad \le 4{\mathbb {P}}\left( \omega _u \cap \Gamma _{K\setminus \{x_1\}} \cap \Gamma _{x_1} \ne \emptyset \right) \le 4(|K|-1)u|A|^{-\frac{1}{\mu (\Gamma _o)}}. \end{aligned}$$

By iterating this we see that

$$\begin{aligned}{} & {} |{\mathbb {P}}({\mathcal {T}}(K)\le u)-{\mathbb {P}}({\mathcal {T}}(o)\le u)^{|K|}|\\{} & {} \quad \le 4(|K|-1)u|A|^{-\frac{1}{\mu (\Gamma _o)}} +{\mathbb {P}}({\mathcal {T}}(o)\le u) |{\mathbb {P}}({\mathcal {T}}(K\setminus \{x_1\})\le u)\\{} & {} \quad -{\mathbb {P}}({\mathcal {T}}(o)\le u)^{|K|-1}| \\{} & {} \quad \le 4(|K|-1)u|A|^{-\frac{1}{\mu (\Gamma _o)}} +|{\mathbb {P}}({\mathcal {T}}(K\setminus \{x_1\})\le u)-{\mathbb {P}}({\mathcal {T}}(o)\le u)^{|K|-1}|\\{} & {} \quad \le \cdots \le ((|K|-1)+(|K|-2)+\cdots +1) 4u|A|^{-\frac{1}{\mu (\Gamma _o)}} \\{} & {} \quad = 4 \frac{(|K|-1)|K|}{2}u|A|^{-\frac{1}{\mu (\Gamma _o)}} \le 2|K|^2u|A|^{-\frac{1}{\mu (\Gamma _o)}}, \end{aligned}$$

which concludes the proof. \(\square \)

5 Second Moment Estimates

For \(\epsilon \in (0,1)\) we define

$$\begin{aligned} A_\epsilon :=\{x\in A: x \cap {\mathcal {C}}_{(1-\epsilon ) u^*}=\emptyset \}, \end{aligned}$$
(5.1)

so that \(A_\epsilon \) is the set of vertices of A which are uncovered at time \((1-\epsilon )u^*\). By using (5.1), (2.4), (2.5) and the definition of \(u^*\) (i.e. (4.1)) in that order, we see that for \(x\in A,\)

$$\begin{aligned} {\mathbb {P}}(x\in A_\epsilon )= & {} {\mathbb {P}}(x \cap {\mathcal {C}}_{(1-\epsilon ) u^*}=\emptyset ) =\left( G^{o,o}\right) ^{-(1-\epsilon )u^*}\nonumber \\= & {} \exp \left( -(1-\epsilon )u^* \mu (\Gamma _o)\right) \nonumber \\= & {} \exp (-(1-\epsilon ) \log |A|)=|A|^{-(1-\epsilon )}. \end{aligned}$$
(5.2)

Therefore,

$$\begin{aligned} {\mathbb {E}}[|A_\epsilon |]=\sum _{x \in A}{\mathbb {P}}(x\in A_\epsilon ) =|A|\exp (-(1-\epsilon )u^*\mu (\Gamma _x)) =|A|^\epsilon . \end{aligned}$$
(5.3)

In order to reach the end goal of this section, we shall need to establish a number of inequalities dealing with sums of \({\mathbb {P}}(x,y \in A_\epsilon )\) over various ranges of x and y. We will have to consider separately the cases when the distance between x and y is small, intermediate and large. In addition, in order to make the argument work for any \(\exp (e^{32})<\kappa ^{-1}<|A|^{1-\frac{8}{\log \log |A|}}\) we will further have to divide the analysis into different cases depending on the value of \(\kappa ^{-1}\). In total we establish four lemmas (Lemmas 5.1, 5.2, 5.4 and 5.5) concerning such sums, and we then combine these results into Proposition 5.6. We note that not all of these results require equally strong conditions on \(\kappa ^{-1}\) and \(\epsilon \). We prefer to state the actual conditions required in each lemma, as this makes it easier to see where the constraints lie. We also note that we will in fact only use the results below for \(\epsilon \) equal to \(\frac{1}{100\mu (\Gamma _o)}\) and \(\frac{1}{400\mu (\Gamma _o)}\). It may therefore seem superfluous to introduce \(\epsilon \) at all, but it will make the text less technical in the end.

Lemma 5.1

For any \(e^9\le \kappa ^{-1}\le |A|\) we have that

$$\begin{aligned} \sum _{x,y\in A: 1\le |x-y| \le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{-\frac{1}{20\mu (\Gamma _o)}}, \end{aligned}$$
(5.4)

for every \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\) and |A| large enough.

Proof

In order to establish (5.4), we use (4.3) and the definition of \(u^*\) in (4.1), together with translation invariance to see that

$$\begin{aligned}{} & {} \sum _{x,y\in A: 1\le |x-y| \le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} {\mathbb {P}}(x,y \in A_\epsilon ) \nonumber \\{} & {} \quad \le \sum _{x,y\in A: 1\le |x-y|\le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} |A|^{-(1-\epsilon )} \left( \frac{9}{8}\right) ^{-(1-\epsilon )u^*}\nonumber \\{} & {} \quad = \sum _{x,y\in A: 1\le |x-y|\le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} |A|^{-(1-\epsilon )} \exp \left( -(1-\epsilon )\log \left( \frac{9}{8}\right) \frac{\log |A|}{\mu (\Gamma _o)}\right) \nonumber \\{} & {} \quad = \sum _{x,y\in A: 1\le |x-y|\le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon )\log \left( \frac{9}{8}\right) \frac{1}{\mu (\Gamma _o)}} \nonumber \\{} & {} \quad \le 4|A|\left( \kappa ^{-1}\right) ^{\frac{1}{20 \mu (\Gamma _o)}} |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon )\log \left( \frac{9}{8}\right) \frac{1}{\mu (\Gamma _o)}}\nonumber \\{} & {} \quad \le 4|A|^\epsilon \left( \kappa ^{-1}\right) ^{\frac{1}{20 \mu (\Gamma _o)}} |A|^{-\frac{1}{9\mu (\Gamma _o)}}, \end{aligned}$$
(5.5)

where we used that

$$\begin{aligned} (1-\epsilon )\log \left( \frac{9}{8}\right) \ge \left( 1-\frac{1}{100\mu (\Gamma _o)}\right) \log \left( \frac{9}{8}\right) >\frac{1}{9} \end{aligned}$$

since \(\mu (\Gamma _o)\ge 1\) by (3.14) and the fact that \(\kappa ^{-1}\ge e^9\). Furthermore, by our assumption on \(\epsilon \) we see that

$$\begin{aligned} 4 |A|^\epsilon |A|^{-\frac{1}{9\mu (\Gamma _o)}} \le |A|^{-\frac{1}{10\mu (\Gamma _o)}}. \end{aligned}$$

We conclude that

$$\begin{aligned}{} & {} \sum _{x,y\in A: 1\le |x-y| \le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} {\mathbb {P}}(x,y \in A_\epsilon )\\{} & {} \quad \le \left( \kappa ^{-1}\right) ^{\frac{1}{20 \mu (\Gamma _o)}} |A|^{-\frac{1}{10\mu (\Gamma _o)}} \le |A|^{\frac{1}{20\mu (\Gamma _o)}-\frac{1}{10\mu (\Gamma _o)}} =|A|^{-\frac{1}{20\mu (\Gamma _o)}}, \end{aligned}$$

where we used that \(\kappa ^{-1}\le |A|\) in the last inequality. This proves (5.4). \(\square \)

Remark

Note that even if one replaced the upper bound \(\left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\) in the summation with 1,  the bound would not improve much. In fact one would obtain

$$\begin{aligned} 4|A|^\epsilon |A|^{-\frac{1}{9\mu (\Gamma _o)}} \le |A|^{-\frac{1}{10\mu (\Gamma _o)}}, \end{aligned}$$

at the end of (5.5), leading only to a slight improvement on the current bound of \(|A|^{-\frac{1}{20\mu (\Gamma _o)}}\). In order to optimize the bound, an improvement of (4.3) would be required. However, even an optimal bound in place of (4.3) may not fundamentally change the result.

Our next lemma deals with intermediate scales of separation between x and y.

Lemma 5.2

Assume that \(\exp (e^{32})\le \kappa ^{-1}\le |A|\) and that \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\). Then for every |A| large enough,

$$\begin{aligned} \sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{-1/7}. \end{aligned}$$
(5.6)

Proof

In order to prove (5.6), we will use (4.4), and therefore we observe that \(|x-y|\le \kappa ^{-1/4}\le 2\kappa ^{-1}\). Next, we observe that by (3.13) we have that

$$\begin{aligned} \mu (\Gamma _o) \le \log \left( \frac{\log \kappa ^{-1}}{\pi }+2\right) \le \log \log \kappa ^{-1}, \end{aligned}$$
(5.7)

which holds since we assume that \(\kappa ^{-1}\ge \exp (e^{32})\). Therefore,

$$\begin{aligned} |x-y|\ge \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}} \ge \left( \kappa ^{-1}\right) ^{\frac{1}{40 \log \log \kappa ^{-1}}} \ge 4 \end{aligned}$$

where the last inequality is easily checked to hold for \(\kappa ^{-1} \ge \exp (e^{32})\) as in our assumption. Hence, the requirements for (4.4) are satisfied. We then use (4.1) and (4.4) to obtain that

$$\begin{aligned}{} & {} \sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} {\mathbb {P}}(x,y \in A_\epsilon )\nonumber \\{} & {} \quad \le \sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} |A|^{-(1-\epsilon )} \left( \frac{\log |x-y|}{\pi }\right) ^{-(1-\epsilon )u^*} \nonumber \\{} & {} \quad = \sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon )\frac{\log \log |x-y|^{1/\pi }}{\mu (\Gamma _o)}} \nonumber \\{} & {} \quad \le \sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon ) \frac{\log \log \left( \kappa ^{-1}\right) ^{\frac{1}{40 \pi \mu (\Gamma _o)}}}{\mu (\Gamma _o)}} \nonumber \\{} & {} \quad \le |A| \left( 2\kappa ^{-1/4}\right) ^2 |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon ) \frac{\log \log \left( \kappa ^{-1}\right) ^{\frac{1}{40 \pi \mu (\Gamma _o)}}}{\mu (\Gamma _o)}}. \end{aligned}$$
(5.8)

By again using (5.7), we see that

$$\begin{aligned} \frac{\log \log \left( \kappa ^{-1}\right) ^{\frac{1}{40 \pi \mu (\Gamma _o)}}}{\mu (\Gamma _o)}= & {} \frac{\log \log \kappa ^{-1}-\log (40 \pi )- \log (\mu (\Gamma _o))}{\mu (\Gamma _o)}\nonumber \\\ge & {} 1 -\frac{\log (40 \pi )+ \log (\mu (\Gamma _o))}{\mu (\Gamma _o)} \ge \frac{2}{3}, \end{aligned}$$
(5.9)

where the last inequality follows since, by (3.14) and the fact that \(\kappa ^{-1}>\exp \left( e^{32}\right) \), we have that

$$\begin{aligned} \mu (\Gamma _o) \ge \log \left( \frac{\log \kappa ^{-1}}{\pi }+1-\frac{4}{3\pi }\right) \ge 30. \end{aligned}$$

Using (5.8) and (5.9) we see that

$$\begin{aligned}{} & {} \sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} {\mathbb {P}}(x,y \in A_\epsilon )\\{} & {} \quad \le |A| \left( 2\kappa ^{-1/4}\right) ^2 |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon ) \frac{\log \log \left( \kappa ^{-1}\right) ^{\frac{1}{40 \pi \mu (\Gamma _o)}}}{\mu (\Gamma _o)}} \\{} & {} \quad \le 4|A|^\epsilon \kappa ^{-1/2} |A|^{-(1-\epsilon )\frac{2}{3}} \le 4|A|^{2\epsilon -\frac{2}{3}}\kappa ^{-1/2} \le 4|A|^{2\epsilon -\frac{1}{6}} \\{} & {} \quad \le 4|A|^{\frac{2}{100 \mu (\Gamma _o)}-\frac{1}{6}} \le 4|A|^{\frac{2}{3000}-\frac{1}{6}} \le |A|^{-\frac{1}{7}}, \end{aligned}$$

where we used that \(\kappa ^{-1}\le |A|\) in the fourth inequality, that \(\epsilon <\frac{1}{100 \mu (\Gamma _o)}\) in the fifth inequality, that \(\mu (\Gamma _o)\ge 30\) in the penultimate inequality, and finally that |A| is taken large enough in the last inequality. \(\square \)
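The final exponent comparison in the display above has room to spare (rounded values):

```latex
\frac{2}{3000}-\frac{1}{6}
  \;\approx\; 0.00067-0.16667
  \;=\; -0.16600
  \;<\; -\frac{1}{7}\approx -0.14286 ,
```

so \(4|A|^{\frac{2}{3000}-\frac{1}{6}}\le |A|^{-1/7}\) as soon as |A| is large enough to absorb the factor 4.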

Our next lemma is an intermediate result which we will use to prove Lemma 5.4.

Lemma 5.3

For any \(\kappa ^{-1}\) such that \(\exp \left( e^{32}\right) \le \kappa ^{-1} \le |A|^{1-\frac{8}{\log \log |A|}},\) we have that

$$\begin{aligned} \kappa ^{-1}\le |A|^{1-\frac{6}{\mu (\Gamma _o)}}, \end{aligned}$$

for every |A| large enough.

Proof

If \(\kappa ^{-1} \ge |A|^{4/5}\), we can use (3.14) to see that

$$\begin{aligned} \mu (\Gamma _o)\ge \log \left( \frac{\log \kappa ^{-1}}{\pi }\right) \ge \log \log |A|^{4/(5\pi )} \ge \frac{3}{4}\log \log |A|, \end{aligned}$$

whenever |A| is large enough. Therefore we see that

$$\begin{aligned} |A|^{1-\frac{6}{\mu (\Gamma _o)}} \ge |A|^{1-\frac{8}{\log \log |A|}} \ge \kappa ^{-1}, \end{aligned}$$

as desired.

If \(\exp \left( e^{32}\right) \le \kappa ^{-1}\le |A|^{4/5}\), (3.14) and \(\kappa ^{-1} \ge \exp (e^{32})\) imply that \(\mu (\Gamma _o)\ge \log \left( \frac{\log \kappa ^{-1}}{\pi }\right) \ge 30\), so that

$$\begin{aligned} |A|^{1-\frac{6}{\mu (\Gamma _o)}} \ge |A|^{1-\frac{6}{30}}=|A|^{4/5}\ge \kappa ^{-1}, \end{aligned}$$

which concludes the proof. \(\square \)
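Both numerical bounds on \(\mu (\Gamma _o)\) used in this proof follow from the same rounded evaluation:

```latex
\log\kappa^{-1}\ge e^{32}
\quad\Longrightarrow\quad
\mu(\Gamma_o)\;\ge\;\log\left(\frac{\log\kappa^{-1}}{\pi}\right)
\;\ge\;32-\log\pi\;\approx\;32-1.145\;=\;30.855\;\ge\;30 .
```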

Lemma 5.4

Assume that \(\exp (e^{32})\le \kappa ^{-1} \le |A|^{1-\frac{8}{\log \log |A|}}\) and that \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\). Then for every |A| large enough,

$$\begin{aligned} \sum _{x,y\in A: \kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{-\frac{1}{\mu (\Gamma _o)}}. \end{aligned}$$
(5.10)

Proof

In order to prove (5.10), we will again use (4.4), and to that end we observe that \(4\le \kappa ^{-1/4}< 2\kappa ^{-1}\) by our assumption that \(\kappa ^{-1}\ge \exp (e^{32})\). Then, for any \(\kappa ^{-1/4}\le |x-y| \le \min \left( 2 \kappa ^{-1}, |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) \) we have that by (4.4)

$$\begin{aligned}{} & {} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{-(1-\epsilon )} \left( \frac{\log |x-y|}{\pi }\right) ^{-(1-\epsilon )u^*}\nonumber \\{} & {} \quad \le |A|^{-(1-\epsilon )} \left( \log \kappa ^{-1/(4 \pi )}\right) ^{-(1-\epsilon )u^*}\nonumber \\{} & {} \quad = |A|^{-(1-\epsilon )}\exp \left( -(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)} \log \log \kappa ^{-1/(4\pi )}\right) \nonumber \\{} & {} \quad =|A|^{-(1-\epsilon )}|A|^{-(1-\epsilon ) \frac{\log \log \kappa ^{-1/(4\pi )}}{\mu (\Gamma _o)}}. \end{aligned}$$
(5.11)

If instead \(|x-y|\ge 2 \kappa ^{-1},\) we use (4.5) to observe that

$$\begin{aligned} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{-(1-\epsilon )} \left( \frac{\log \kappa ^{-1}}{2\pi }\right) ^{-(1-\epsilon )u^*} \le |A|^{-(1-\epsilon )} \left( \frac{\log \kappa ^{-1}}{4\pi }\right) ^{-(1-\epsilon )u^*} \end{aligned}$$

and so (5.11) holds for every \(x,y\) such that \(\kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\). It follows that

$$\begin{aligned}{} & {} \sum _{x,y\in A: \kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} {\mathbb {P}}(x,y \in A_\epsilon )\nonumber \\{} & {} \quad \le \sum _{x,y\in A: \kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon ) \frac{\log \log \kappa ^{-1/(4\pi )}}{\mu (\Gamma _o)}} \nonumber \\{} & {} \quad \le |A| \left( 2|A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) ^2 |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon ) \frac{\log \log \kappa ^{-1/(4\pi )}}{\mu (\Gamma _o)}}. \end{aligned}$$
(5.12)

As in the proof of (5.9) we have that

$$\begin{aligned} \frac{\log \log \kappa ^{-1/(4\pi )}}{\mu (\Gamma _o)} =\frac{\log \log \kappa ^{-1}-\log (4 \pi )}{\mu (\Gamma _o)} \ge 1-\frac{\log (4 \pi )}{\mu (\Gamma _o)}. \end{aligned}$$

We therefore see that

$$\begin{aligned}{} & {} \sum _{x,y\in A: \kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} {\mathbb {P}}(x,y \in A_\epsilon )\nonumber \\{} & {} \quad \le |A| \left( 2|A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\right) ^2 |A|^{-(1-\epsilon )} |A|^{-(1-\epsilon ) \frac{\log \log \kappa ^{-1/(4\pi )}}{\mu (\Gamma _o)}} \nonumber \\{} & {} \quad \le 4|A|^{\epsilon +\frac{2}{\mu (\Gamma _o)}} \kappa ^{-1} |A|^{-(1-\epsilon )\left( 1-\frac{\log (4 \pi )}{\mu (\Gamma _o)} \right) } \le 4|A|^{2\epsilon +\frac{2}{\mu (\Gamma _o)}+\frac{\log (4 \pi )}{\mu (\Gamma _o)}} (\kappa |A|)^{-1} \nonumber \\{} & {} \quad \le 4|A|^{\frac{2}{100\mu (\Gamma _o)}+\frac{2}{\mu (\Gamma _o)}+\frac{\log (4 \pi )}{\mu (\Gamma _o)}} (\kappa |A|)^{-1} \le |A|^{\frac{5}{\mu (\Gamma _o)}} (\kappa |A|)^{-1}, \end{aligned}$$
(5.13)

where we used that \(\epsilon <\frac{1}{100 \mu (\Gamma _o)}\) in the penultimate inequality. Finally, it follows from Lemma 5.3 (which uses that \(\kappa ^{-1}\ge \exp (e^{32})\)) that \(\kappa \ge |A|^{-1+\frac{6}{\mu (\Gamma _o)}}\) and so

$$\begin{aligned} (\kappa |A|)^{-1} |A|^{\frac{5}{\mu (\Gamma _o)}} \le |A|^{-\frac{6}{\mu (\Gamma _o)}}|A|^{\frac{5}{\mu (\Gamma _o)}} \le |A|^{-\frac{1}{\mu (\Gamma _o)}}, \end{aligned}$$

which concludes the proof. \(\square \)
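The absorption of the three exponents into \(\frac{5}{\mu (\Gamma _o)}\) in the last step of (5.13) is a direct rounded estimate:

```latex
\frac{2}{100}+2+\log(4\pi)\;\approx\;0.02+2+2.531\;=\;4.551\;\le\;5,
\qquad\text{so}\qquad
\frac{2}{100\,\mu(\Gamma_o)}+\frac{2}{\mu(\Gamma_o)}+\frac{\log(4\pi)}{\mu(\Gamma_o)}
\;\le\;\frac{5}{\mu(\Gamma_o)} ,
```

with the remaining slack \(|A|^{0.449/\mu (\Gamma _o)}\) absorbing the factor 4 for |A| large enough, since \(\mu (\Gamma _o)\le \log \log |A|\).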

Remark

As we shall see, the above lemma is the only one that requires the upper bound on \(\kappa ^{-1},\) i.e. that \(\kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}}\). The other lemmas of this section only require that \(\kappa ^{-1}\le |A|\) (and, with some extra effort, this bound can be relaxed). If we changed the summation to be over the range \(\kappa ^{-1/4}\le |x-y| \le \log |A|\kappa ^{-1/2}\) (or so) instead of \(\kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\), then this would improve the bound somewhat. However, since the factor \(|A|^{\frac{\log (4\pi )}{\mu (\Gamma _o)}}\) would still remain in (5.13), this would only lead to a slight improvement of the upper bound on \(\kappa ^{-1}\).

Our next lemma sums over pairs that are well separated.

Lemma 5.5

For any \(e^9\le \kappa ^{-1} \le |A|\) we have that

$$\begin{aligned} \sum _{x,y\in A: |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2} \le |x-y|} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{2\epsilon }\left( 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}}\right) , \end{aligned}$$
(5.14)

whenever \(0<\epsilon <1/2\) and |A| is large enough.

Proof

Using (2.7) we have that

$$\begin{aligned} {\mathbb {P}}(x,y \in A_\epsilon )= & {} {\mathbb {P}}(o,y-x \in A_\epsilon ) =\left( (G^{o,o})^2-(G^{o,y-x})^2\right) ^{-(1-\epsilon )u^*}\nonumber \\= & {} (G^{o,o})^{-2(1-\epsilon )u^*} \left( 1-\left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) ^{-(1-\epsilon )u^*}\nonumber \\= & {} |A|^{-2(1-\epsilon )} \left( 1-\left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) ^{-(1-\epsilon )u^*} \end{aligned}$$
(5.15)

where we used (4.2) in the last equality. By (3.5), \(\frac{G^{o,y-x}}{G^{o,o}}\rightarrow 0\) as \(|A|\rightarrow \infty \) since we are assuming that \(|y-x|\ge |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}\). Furthermore, \(\log (1-u)\ge -2u\) whenever \(0<u<1/2\) and so

$$\begin{aligned}{} & {} \left( 1-\left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) ^{-(1-\epsilon )u^*}\nonumber \\{} & {} \quad =\exp \left( -(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)} \log \left( 1-\left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) \right) \nonumber \\{} & {} \quad \le \exp \left( 2(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)} \left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) . \end{aligned}$$
(5.16)

As before, we observe that it follows from (3.14) and the assumption that \(\kappa ^{-1}\ge e^9\) that both \(\mu (\Gamma _o)\ge 1\) and \(G^{o,o}\ge 1\). We can now use Proposition 3.7 (which requires that \(\kappa ^{-1}\le |A|\)) to conclude that

$$\begin{aligned}{} & {} \exp \left( 2(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)} \left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) \le \exp \left( 2(\log |A|) \left( G^{o,y-x}\right) ^2\right) \nonumber \\{} & {} \quad \le \exp \left( 2(\log |A|)|A|^{-\frac{1}{\mu (\Gamma _o)}}\right) \nonumber \\{} & {} \quad \le \exp \left( |A|^{-\frac{2}{3\mu (\Gamma _o)}}\right) \le 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}} \end{aligned}$$
(5.17)

where the penultimate inequality follows since

$$\begin{aligned} |A|^{\frac{1}{3\mu (\Gamma _o)}} \ge |A|^{\frac{1}{3\log \log \kappa ^{-1}}} \ge |A|^{\frac{1}{3\log \log |A|}} \ge 2(\log |A|) \end{aligned}$$

for large enough |A|,  and the last inequality follows since \(e^x \le 1+2x\) for small enough x. Combining (5.15), (5.16) and (5.17) we see that

$$\begin{aligned} {\mathbb {P}}(x,y \in A_\epsilon ) \le |A|^{-2(1-\epsilon )}\left( 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}}\right) , \end{aligned}$$
(5.18)

and so

$$\begin{aligned}{} & {} \sum _{x,y\in A: |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2} \le |x-y|} {\mathbb {P}}(x,y \in A_\epsilon )\nonumber \\{} & {} \quad \le \sum _{x,y\in A: |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2} \le |x-y|} |A|^{-2(1-\epsilon )}\left( 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}}\right) \nonumber \\{} & {} \quad \le |A|^2|A|^{-2(1-\epsilon )} \left( 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}}\right) =|A|^{2\epsilon }\left( 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}}\right) . \end{aligned}$$
(5.19)

\(\square \)

Remark

It follows from equations (5.18) and (5.2) that

$$\begin{aligned} {\mathbb {P}}(x,y\in A_\epsilon ) \le |A|^{-2(1-\epsilon )}+R ={\mathbb {P}}(x\in A_\epsilon )^2+R \end{aligned}$$

where R is some small error term. Morally, this means that x and y are “almost” independently covered. This is not surprising considering that they are separated by a distance close to the diameter of a typical loop, i.e. \(\kappa ^{-1/2}\) (recall the discussion after the statement of Theorem 1.1 in the Introduction).

We collect the above lemmas in the following proposition.

Proposition 5.6

For any \(\exp (e^{32})\le \kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}},\) \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\) and |A| large enough we have that

$$\begin{aligned} \sum _{x,y\in A: 0<|x-y|\le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} {\mathbb {P}}(x,y\in A_\epsilon )\le 2|A|^{-\frac{1}{20\mu (\Gamma _o)}}, \end{aligned}$$
(5.20)

and that

$$\begin{aligned} \sum _{x,y\in A: |x-y|>0} {\mathbb {P}}(x,y\in A_\epsilon ) \le |A|^{2\epsilon }\left( 1+3|A|^{-\frac{1}{20\mu (\Gamma _o)}}\right) . \end{aligned}$$
(5.21)

Proof

We start by considering (5.20). Since we assume that \(\exp (e^{32})\le \kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}},\) we can use (5.4), (5.6) and (5.10) to see that

$$\begin{aligned}{} & {} \sum _{x,y\in A: 0<|x-y|\le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} {\mathbb {P}}(x,y\in A_\epsilon )\\{} & {} \quad \le \sum _{x,y\in A: 1\le |x-y| \le \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}} {\mathbb {P}}(x,y \in A_\epsilon )\\{} & {} \qquad +\sum _{x,y\in A: \left( \kappa ^{-1}\right) ^{\frac{1}{40 \mu (\Gamma _o)}}\le |x-y| \le \kappa ^{-1/4}} {\mathbb {P}}(x,y \in A_\epsilon ) \\{} & {} \qquad +\sum _{x,y\in A: \kappa ^{-1/4}\le |x-y| \le |A|^{\frac{1}{\mu (\Gamma _o)}}\kappa ^{-1/2}} {\mathbb {P}}(x,y \in A_\epsilon ) \\{} & {} \quad \le |A|^{-\frac{1}{20\mu (\Gamma _o)}}+|A|^{-1/7} +|A|^{-\frac{1}{\mu (\Gamma _o)}} \le 2|A|^{-\frac{1}{20\mu (\Gamma _o)}} \end{aligned}$$

for all |A| large enough (since \(\mu (\Gamma _o)\ge 30\) by our assumption on \(\kappa ^{-1}\) and (3.14)).

The second statement follows by using (5.14) together with (5.20) and observing that

$$\begin{aligned} 2|A|^{-\frac{1}{20\mu (\Gamma _o)}} +|A|^{2\epsilon }\left( 1+|A|^{-\frac{1}{2\mu (\Gamma _o)}}\right) \le |A|^{2\epsilon }\left( 1+3|A|^{-\frac{1}{20\mu (\Gamma _o)}}\right) \end{aligned}$$

for |A| large enough. \(\square \)

We shall now use Proposition 5.6 to prove that the uncovered region at time \((1-\epsilon )u^*\) consists of a small collection of vertices all separated by a large distance. To that end, define, for \(0<\epsilon <1\),

$$\begin{aligned} H_{A,\epsilon }:= & {} \Big \{K \subset A: ||K|-|A|^\epsilon | \le |A|^{3\epsilon /4},\nonumber \\{} & {} \text { and } |x-y|\ge \kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}} \text { for every distinct } x,y\in K \Big \}. \end{aligned}$$
(5.22)

Proposition 5.7

For any \(\exp (e^{32})<\kappa ^{-1}\le |A|^{1-\frac{8}{\log \log |A|}}\) and \(0<\epsilon \le \frac{1}{100 \mu (\Gamma _o)}\) we have that

$$\begin{aligned} {\mathbb {P}}(A_\epsilon \not \in H_{A,\epsilon }) \le 3|A|^{-\epsilon /2} \end{aligned}$$

for every |A| large enough.

Proof

We use (5.20) of Proposition 5.6 to see that

$$\begin{aligned}{} & {} {\mathbb {P}}\left( \exists x,y\in A_{\epsilon }: 0<|x-y|<\kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}}\right) \nonumber \\{} & {} \quad \le \sum _{x,y\in A: 0<|x-y|\le \kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}}} {\mathbb {P}}(x,y\in A_\epsilon ) \le 2|A|^{-\frac{1}{20\mu (\Gamma _o)}} \end{aligned}$$
(5.23)

for every |A| large enough. Next we observe that by (5.21) of Proposition 5.6 we have that

$$\begin{aligned} {\mathbb {E}}[|A_\epsilon |^2]= & {} \sum _{x,y\in A} {\mathbb {P}}(x,y\in A_\epsilon )\\= & {} \sum _{x\in A} {\mathbb {P}}(x\in A_\epsilon ) +\sum _{x,y\in A: |x-y|>0} {\mathbb {P}}(x,y\in A_\epsilon ) \\\le & {} |A|^\epsilon +|A|^{2\epsilon }\left( 1+3|A|^{-\frac{1}{20\mu (\Gamma _o)}}\right) . \end{aligned}$$

Recall (5.3) which states that \({\mathbb {E}}[|A_\epsilon |]=|A|^\epsilon ,\) so that \({\mathbb {E}}[(|A_\epsilon |-|A|^{\epsilon })^2] ={\mathbb {E}}[|A_\epsilon |^2]-|A|^{2 \epsilon }\). Therefore, by Chebyshev’s inequality,

$$\begin{aligned}{} & {} {\mathbb {P}}\left( ||A_\epsilon |-|A|^\epsilon | \ge |A|^{3\epsilon /4}\right) \le \frac{{\mathbb {E}}[|A_\epsilon |^2]-|A|^{2 \epsilon }}{|A|^{3\epsilon /2}}\\{} & {} \quad \le \frac{|A|^\epsilon +|A|^{2\epsilon }\left( 1+3|A|^{-\frac{1}{20\mu (\Gamma _o)}}\right) -|A|^{2 \epsilon }}{|A|^{3\epsilon /2}} \\{} & {} \quad =|A|^{-\epsilon /2} +3|A|^{\epsilon /2}|A|^{-\frac{1}{20\mu (\Gamma _o)}} \\{} & {} \quad \le |A|^{-\epsilon /2} +3|A|^{\epsilon /2}|A|^{-5 \epsilon } \le 2|A|^{-\epsilon /2} \end{aligned}$$

for |A| large enough by using our assumption that \(\epsilon \le \frac{1}{100\mu (\Gamma _o)}\) in the penultimate inequality. Thus,

$$\begin{aligned} {\mathbb {P}}\left( ||A_\epsilon |-|A|^\epsilon | \ge |A|^{3\epsilon /4} \right) \le 2|A|^{-\epsilon /2}. \end{aligned}$$
(5.24)

Combining (5.23) and (5.24) we then find that

$$\begin{aligned} {\mathbb {P}}(A_\epsilon \not \in H_{A,\epsilon }) \le 2|A|^{-\frac{1}{20\mu (\Gamma _o)}} +2|A|^{-\epsilon /2} \le 3|A|^{-\epsilon /2}, \end{aligned}$$

where we again used the upper bound on \(\epsilon \). \(\square \)

6 Proof of Main Theorem

We will now put all the pieces together.

Proof of Theorem 1.1

In this proof we will fix \(\epsilon \) to be equal to \(\frac{1}{100 \mu (\Gamma _o)},\) but we shall apply Proposition 5.7 to \(\epsilon \) and \(\epsilon /4\). Our aim is to establish that

$$\begin{aligned} \sup _{z \in {\mathbb {R}}}|{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) -\exp (-e^{-z})| \le 12 |A|^{-\epsilon /8} =12|A|^{-\frac{1}{800 \mu (\Gamma _o)}}\nonumber \\ \end{aligned}$$
(6.1)

for every large enough finite set \(A\subset {\mathbb {Z}}^2\).

We will divide the proof of (6.1) into three cases, depending on the value of z.

Case 1: Here, \(z \le -\frac{\epsilon }{4} \log |A|\). Then

$$\begin{aligned} {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z)= & {} {\mathbb {P}}\left( {\mathcal {T}}(A) \le u^* + \frac{z}{\mu (\Gamma _o)}\right) \\\le & {} {\mathbb {P}}\left( {\mathcal {T}}(A)\le \left( u^*- \frac{\epsilon }{4} u^*\right) \right) \\= & {} {\mathbb {P}}\left( \left\{ x \in A: {\mathcal {T}}(x) > \left( 1 - \frac{\epsilon }{4} \right) u^*\right\} = \emptyset \right) \\= & {} {\mathbb {P}}(A_{\epsilon /4} = \emptyset ), \end{aligned}$$

by the definition of \(A_\epsilon \) in (5.1). Moreover, \({\mathbb {P}}(A_{\epsilon /4} = \emptyset ) \le {\mathbb {P}}(A_{\epsilon /4}\notin H_{A,\epsilon /4})\), since \(H_{A,\epsilon /4}\) does not contain the empty set. It then follows from Proposition 5.7 that for |A| large enough,

$$\begin{aligned} {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) \le {\mathbb {P}}(A_{\epsilon /4} \notin H_{A,\epsilon /4}) \le 3|A|^{-\epsilon /8}. \end{aligned}$$

Next, using that \(z\le -\frac{\epsilon }{4} \log |A|\) it follows that \(e^{-z}\ge |A|^{\epsilon /4},\) and so, for all |A| large enough,

$$\begin{aligned} \exp (-e^{-z}) \le \exp \left( -|A|^{\epsilon /4}\right) \le |A|^{-\epsilon /4}. \end{aligned}$$

Therefore,

$$\begin{aligned}{} & {} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) -\exp (-e^{-z})|\nonumber \\{} & {} \quad \le {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) + \exp (-e^{-z}) \le 3|A|^{-\epsilon /8}+|A|^{-\epsilon /4}\nonumber \\{} & {} \quad \le 4|A|^{-\epsilon /8}, \end{aligned}$$
(6.2)

and so for all |A| large enough, (6.1) is satisfied in this case.

Case 2: Assume now instead that \(z \ge \log |A|\). Then,

$$\begin{aligned} {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|>z)= & {} {\mathbb {P}}\left( {\mathcal {T}}(A)> u^* + \frac{z}{\mu (\Gamma _o)}\right) \\\le & {} {\mathbb {P}}({\mathcal {T}}(A)> 2u^*) = {\mathbb {P}}\left( \bigcup _{x \in A} \left\{ {\mathcal {T}}(x)>2u^*\right\} \right) \\\le & {} |A|{\mathbb {P}}({\mathcal {T}}(o) > 2u^*) \!=\! |A|\exp (-2u^* \mu (\Gamma _o)) \!=\! |A|^{-1}. \end{aligned}$$

Then, since \(z \ge \log |A|\), we have that \(e^{-z}\le e^{-\log |A|}=|A|^{-1}\) and so

$$\begin{aligned} \exp (-e^{-z}) \ge \exp (-|A|^{-1}) \ge 1-|A|^{-1}, \end{aligned}$$

since \(e^x \ge 1 + x\) for every x. This, together with the above equation, gives

$$\begin{aligned}{} & {} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A| \le z) -\exp (-e^{-z})|\nonumber \\{} & {} \quad =|1-{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|>z) -\exp (-e^{-z})| \nonumber \\{} & {} \quad \le {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|>z) + |1 - \exp (- e^{-z})| \le |A|^{-1} + |A|^{-1},\qquad \quad \end{aligned}$$
(6.3)

and so (6.1) holds also in this case.

Case 3: Assume that \(z \in \left( -\frac{\epsilon }{4} \log |A|,\log |A|\right) \) and start by observing that

$$\begin{aligned}{} & {} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) - \exp (-e^{-z})| \nonumber \\{} & {} \quad \le |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) - {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z, A_\epsilon \in H_{A,\epsilon })| \nonumber \\{} & {} \qquad + |\exp (-e^{-z}){\mathbb {P}}(A_\epsilon \in H_{A,\epsilon }) -\exp (-e^{-z})| \nonumber \\{} & {} \qquad +|{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z, A_\epsilon \in H_{A,\epsilon }) -\exp (-e^{-z}){\mathbb {P}}(A_\epsilon \in H_{A,\epsilon })|.\nonumber \\ \end{aligned}$$
(6.4)

We will now consider the three terms on the right hand side separately.

For the first term we note that

$$\begin{aligned}{} & {} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) -{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z,A_\epsilon \in H_{A,\epsilon })| \nonumber \\{} & {} \quad = {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z, A_\epsilon \not \in H_{A,\epsilon }) \le {\mathbb {P}}(A_\epsilon \notin H_{A,\epsilon }) \le 3|A|^{-\epsilon /2}\nonumber \\ \end{aligned}$$
(6.5)

by Proposition 5.7. Similarly, for the second term of the right hand side of (6.4), we observe that

$$\begin{aligned} |\exp (-e^{-z}){\mathbb {P}}(A_\epsilon \in H_{A,\epsilon })-\exp (-e^{-z})| =\exp (-e^{-z}) {\mathbb {P}}(A_\epsilon \notin H_{A,\epsilon }) \le 3|A|^{-\epsilon /2}\nonumber \\ \end{aligned}$$
(6.6)

again by Proposition 5.7.

Consider now the third and final term of (6.4). Let \(K \in H_{A,\epsilon }\). We will show below that

$$\begin{aligned} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z | A_\epsilon =K) -\exp (-e^{-z})| \le 5|A|^{-\epsilon /4}. \end{aligned}$$
(6.7)

After multiplication by \({\mathbb {P}}(A_\epsilon = K)\) and summation over all \(K \in H_{A,\epsilon }\) we then obtain

$$\begin{aligned} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z, A_\epsilon \in H_{A,\epsilon }) -\exp (-e^{-z}){\mathbb {P}}(A_\epsilon \in H_{A,\epsilon })| \le 5|A|^{-\epsilon /4}.\nonumber \\ \end{aligned}$$
(6.8)

Summing the contributions from (6.5), (6.6) and (6.8) we conclude from (6.4) that

$$\begin{aligned} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z) -\exp (-e^{-z})| \le 6|A|^{-\epsilon /2}+5|A|^{-\epsilon /4}\le 12|A|^{-\epsilon /4}\nonumber \\ \end{aligned}$$
(6.9)

for all \(z \in \left( -\frac{\epsilon }{4} \log |A|, \log |A|\right) \) and |A| large enough. It may be worth recalling that, from the start of the proof, we assume that \(\epsilon =\frac{1}{100\mu (\Gamma _o)}\). This then proves (6.1) and completes the proof, modulo (6.7).

In order to prove (6.7) we consider the conditional probability

$$\begin{aligned} {\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(A)-\log |A|\le z | A_\epsilon =K) =\frac{{\mathbb {P}}\left( {\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)}, A_\epsilon =K\right) }{{\mathbb {P}}(A_\epsilon =K)}. \end{aligned}$$

Let \(\omega _{u_1,u_2}\) denote the loops arriving between times \(u_1\) and \(u_2\), where \(u_1<u_2\). On the event that \(A_\epsilon =K\), the event \({\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)}\) can only occur if K is covered by the loops arriving between times \((1-\epsilon )u^*\) and \(u^*+\frac{z}{\mu (\Gamma _o)}\). Therefore,

$$\begin{aligned}{} & {} {\mathbb {P}}\left( {\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)}, A_\epsilon =K\right) \\{} & {} \quad ={\mathbb {P}}\left( (1-\epsilon )u^*\le {\mathcal {T}}(K)\le u^*+\frac{z}{\mu (\Gamma _o)}, A_\epsilon =K\right) \\{} & {} \quad ={\mathbb {P}}\left( K\subset \bigcup _{\gamma \in \omega _{(1-\epsilon )u^*,u^*+\frac{z}{\mu (\Gamma _o)}}} \gamma , A_\epsilon =K\right) \\{} & {} \quad ={\mathbb {P}}\left( K\subset \bigcup _{\gamma \in \omega _{(1-\epsilon )u^*,u^*+\frac{z}{\mu (\Gamma _o)}}} \gamma \right) {\mathbb {P}}( A_\epsilon =K)\\{} & {} \quad ={\mathbb {P}}\left( {\mathcal {T}}(K)\le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) {\mathbb {P}}( A_\epsilon =K), \end{aligned}$$

where the last equality follows from the Poissonian nature of the loop process, which implies that the distribution of the loops that fall between times \((1-\epsilon )u^*\) and \(u^*+\frac{z}{\mu (\Gamma _o)}\) is simply a Poissonian loop process with intensity \(u^*+\frac{z}{\mu (\Gamma _o)}-(1-\epsilon )u^* =\epsilon u^*+\frac{z}{\mu (\Gamma _o)}\).

We therefore see that

$$\begin{aligned} {\mathbb {P}}\left( {\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)} \Big | A_\epsilon = K\right) = {\mathbb {P}}\left( {\mathcal {T}}(K) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) , \end{aligned}$$
(6.10)

and using (6.10) we have

$$\begin{aligned}{} & {} \left| {\mathbb {P}}\left( {\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)} \Big | A_\epsilon = K\right) - \exp (-e^{-z})\right| \nonumber \\{} & {} \quad = \left| {\mathbb {P}}\left( {\mathcal {T}}(K) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) -\exp (-e^{-z})\right| \nonumber \\{} & {} \quad \le \left| {\mathbb {P}}\left( {\mathcal {T}}(K) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) -{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|} \right| \nonumber \\{} & {} \qquad +\left| {\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|} -\exp (-e^{-z})\right| . \end{aligned}$$
(6.11)

We will deal with the two terms on the right hand side of (6.11) separately.

For the first term, we will use Proposition 4.3. By the definition of \(H_{A,\epsilon }\) in (5.22), if \(x,y \in K\) are distinct and \(K \in H_{A,\epsilon }\), then \(|x-y|\ge \kappa ^{-1/2}|A|^{\frac{1}{\mu (\Gamma _o)}}\). Furthermore, if \(K \in H_{A,\epsilon }\), then \(||K|-|A|^\epsilon | \le |A|^{3\epsilon /4},\) and so we have that

$$\begin{aligned} |A|^\epsilon -|A|^{3\epsilon /4} \le |K| \le |A|^\epsilon +|A|^{3\epsilon /4} \end{aligned}$$
(6.12)

and in particular, (6.12) implies that \(|K|\le 2|A|^\epsilon \).

We now want to apply Proposition 4.3, and to that end we note that by (3.13) we have that \(\mu (\Gamma _o)\le \log \log \kappa ^{-1}\le \log \log |A|\). Therefore, for every \(z\in \left( -\frac{\epsilon }{4}\log |A|, \log |A|\right) \) we have that

$$\begin{aligned}{} & {} \epsilon u^*+\frac{z}{\mu (\Gamma _o)} \ge \epsilon \frac{\log |A|}{\mu (\Gamma _o)}- \epsilon \frac{\log |A|}{4\mu (\Gamma _o)} =\epsilon \frac{3\log |A|}{4\mu (\Gamma _o)}\\{} & {} \quad =\frac{3}{400} \frac{\log |A|}{(\mu (\Gamma _o))^2} \ge \frac{3}{400} \frac{\log |A|}{(\log \log |A|)^2}\ge 1, \end{aligned}$$

whenever |A| is large enough. We can therefore use Proposition 4.3 together with \(\vert K \vert \le 2 \vert A \vert ^{\epsilon }\) (which follows from (6.12)), \(\epsilon \le 1\), and the fact that \(\frac{z}{\mu (\Gamma _o)}\le \frac{\log |A|}{\mu (\Gamma _o)} =u^*\) (this is the only place where we use the upper bound on z), to see that

$$\begin{aligned}{} & {} \left| {\mathbb {P}}\left( {\mathcal {T}}(K) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) -{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|} \right| \\{} & {} \quad \le 2|K|^2 \left( \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) |A|^{-\frac{1}{\mu (\Gamma _o)}} \le 16|A|^{2\epsilon }u^* |A|^{-\frac{1}{\mu (\Gamma _o)}}. \end{aligned}$$

Next, observe that for any \(\kappa ^{-1}\le |A|\) we can use (5.7) to see that

$$\begin{aligned} |A|^{\frac{1}{2\mu (\Gamma _o)}} \ge |A|^{\frac{1}{2\log \log \kappa ^{-1}}} \ge |A|^{\frac{1}{2\log \log |A|}} \ge \log |A|, \end{aligned}$$

for any |A| large enough. Therefore, we see that since \(\mu (\Gamma _o)\ge 1,\)

$$\begin{aligned} 16|A|^{2\epsilon }u^*|A|^{-\frac{1}{\mu (\Gamma _o)}} = 16|A|^{2\epsilon }\frac{\log |A|}{\mu (\Gamma _o)} |A|^{-\frac{1}{\mu (\Gamma _o)}} \le 16|A|^{2\epsilon }|A|^{-\frac{1}{2\mu (\Gamma _o)}} \le |A|^{-\epsilon }, \end{aligned}$$

for |A| large enough by using the assumption on \(\epsilon \). We therefore conclude that

$$\begin{aligned} \left| {\mathbb {P}}\left( {\mathcal {T}}(K) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) -{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|} \right| \le |A|^{-\epsilon } \end{aligned}$$
(6.13)

for |A| large enough.
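The step from \(16|A|^{2\epsilon }|A|^{-\frac{1}{2\mu (\Gamma _o)}}\) to \(|A|^{-\epsilon }\) above has a comfortable margin under the assumption \(\epsilon \le \frac{1}{100\mu (\Gamma _o)}\):

```latex
3\epsilon \;\le\; \frac{3}{100\,\mu(\Gamma_o)} \;<\; \frac{1}{2\mu(\Gamma_o)},
\qquad\text{hence}\qquad
16\,|A|^{2\epsilon}|A|^{-\frac{1}{2\mu(\Gamma_o)}}
\;\le\; 16\,|A|^{-\epsilon}|A|^{-\frac{47}{100\,\mu(\Gamma_o)}}
\;\le\; |A|^{-\epsilon} ,
```

since \(|A|^{\frac{47}{100\mu (\Gamma _o)}}\ge 16\) for |A| large enough (recall \(\mu (\Gamma _o)\le \log \log |A|\)).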

We can now turn to the second term of the right hand side of (6.11). As before, \({\mathbb {P}}\left( {\mathcal {T}}(o)\le t\right) =1-\exp \left( -t\mu (\Gamma _o)\right) ,\) and since \(u^*=\frac{\log |A|}{\mu (\Gamma _o)}\) we have that

$$\begin{aligned}{} & {} {\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|}\\{} & {} \quad = \left( 1 - \exp \left( \left( -\epsilon u^*-\frac{z}{\mu (\Gamma _o)}\right) \mu (\Gamma _o)\right) \right) ^{|K|} = \left( 1 - \frac{e^{-z}}{|A|^\epsilon }\right) ^{|K|}. \end{aligned}$$

Then by (6.12),

$$\begin{aligned} \left( 1- \frac{e^{-z}}{|A|^\epsilon }\right) ^{|A|^\epsilon +|A|^{3\epsilon /4}}\le & {} {\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|}\nonumber \\\le & {} \left( 1-\frac{e^{-z}}{|A|^\epsilon }\right) ^{|A|^\epsilon -|A|^{3\epsilon /4}}. \end{aligned}$$
(6.14)

Therefore, using that \(\log (1-x)\ge -x-x^2 \) for every \(0<x<1/2,\) we get that

$$\begin{aligned}{} & {} \exp (-e^{-z})-{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|}\nonumber \\{} & {} \quad \le \exp (-e^{-z})- \left( 1-\frac{e^{-z}}{|A|^\epsilon }\right) ^{|A|^\epsilon +|A|^{3\epsilon /4}} \nonumber \\{} & {} \quad =\exp (-e^{-z}) -\exp \left( \left( |A|^\epsilon +|A|^{3\epsilon /4}\right) \log \left( 1-\frac{e^{-z}}{|A|^\epsilon }\right) \right) \nonumber \\{} & {} \quad \le \exp (-e^{-z}) -\exp \left( \left( |A|^\epsilon +|A|^{3\epsilon /4}\right) \left( -\frac{e^{-z}}{|A|^\epsilon }-\frac{e^{-2z}}{|A|^{2\epsilon }} \right) \right) \nonumber \\{} & {} \quad = \exp (-e^{-z}) -\exp \left( -e^{-z}-e^{-z}|A|^{-\epsilon /4} -e^{-2z}|A|^{-\epsilon }\left( 1 + |A|^{-\epsilon /4}\right) \right) \nonumber \\{} & {} \quad =\exp (-e^{-z})\left( 1-\exp \left( -e^{-z}|A|^{-\epsilon /4} -e^{-2z}|A|^{-\epsilon }\left( 1 + |A|^{-\epsilon /4}\right) \right) \right) \nonumber \\{} & {} \quad \le \exp (-e^{-z})\left( e^{-z}|A|^{-\epsilon /4} +e^{-2z}|A|^{-\epsilon }\left( 1 + |A|^{-\epsilon /4}\right) \right) , \end{aligned}$$
(6.15)

where we used that \(1-e^{-x}\le x\) for \(x>0\) in the last inequality. It is easy to check that \(ye^{-y}\le 1\) for every y,  and so

$$\begin{aligned} \exp (-e^{-z})e^{-z}|A|^{-\epsilon /4} \le |A|^{-\epsilon /4}. \end{aligned}$$
(6.16)

Furthermore,

$$\begin{aligned} \exp (-e^{-z})\left( e^{-2z}|A|^{-\epsilon }\left( 1 + |A|^{-\epsilon /4}\right) \right) \le 2e^{-2z}|A|^{-\epsilon }\le 2|A|^{-\epsilon /2}, \end{aligned}$$
(6.17)

since \(z\ge -\frac{\epsilon }{4}\log |A|\). Combining (6.15), (6.16) and (6.17) we obtain

$$\begin{aligned} \exp (-e^{-z}) -{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|} \le |A|^{-\epsilon /4} +2|A|^{-\epsilon /2} \le 3|A|^{-\epsilon /4}.\qquad \end{aligned}$$
(6.18)
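As a quick numerical sanity check (an illustration only, not part of the proof), one can verify directly that the upper bound appearing in (6.15) is at most \(3|A|^{-\epsilon /4}\) as asserted in (6.18); the grid of values for \(|A|,\) \(\epsilon \) and \(z\) below is arbitrary, subject to the constraint \(z\ge -\frac{\epsilon }{4}\log |A|\) used for (6.17).

```python
import math

# Sanity check (illustration only): the upper bound from (6.15),
# exp(-e^{-z}) - (1 - e^{-z}/|A|^eps)^(|A|^eps + |A|^{3eps/4}),
# is at most 3*|A|^{-eps/4} as asserted in (6.18), on a small grid
# of values with z >= -(eps/4)*log|A| (the constraint used for (6.17)).
def gap(A, eps, z):
    """Upper bound from (6.15) on exp(-e^{-z}) - P(T(o) <= eps*u* + z/mu)^{|K|}."""
    return math.exp(-math.exp(-z)) - (1 - math.exp(-z) / A**eps) ** (A**eps + A**(3 * eps / 4))

ok = all(
    gap(A, eps, z) <= 3 * A ** (-eps / 4)
    for A in (10**4, 10**6)
    for eps in (0.5, 0.8)
    for z in (-(eps / 4) * math.log(A), 0.0, 2.0)
)
print(ok)
```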

Similarly, we have that \(\log (1-x)\le -x\) for every \(0<x<1/2\) and therefore

$$\begin{aligned}{} & {} \exp (-e^{-z})-{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|}\nonumber \\{} & {} \quad \ge \exp (-e^{-z})-\left( 1-\frac{e^{-z}}{|A|^\epsilon }\right) ^{|A|^\epsilon \left( 1- |A|^{-\epsilon /4}\right) } \nonumber \\{} & {} \quad =\exp (-e^{-z}) -\exp \left( |A|^\epsilon \left( 1 - |A|^{-\epsilon /4}\right) \log \left( 1-\frac{e^{-z}}{|A|^\epsilon }\right) \right) \nonumber \\{} & {} \quad \ge \exp (-e^{-z}) -\exp \left( |A|^\epsilon \left( 1 - |A|^{-\epsilon /4}\right) \left( -\frac{e^{-z}}{|A|^\epsilon }\right) \right) \nonumber \\{} & {} \quad = \exp (-e^{-z}) -\exp \left( -e^{-z}\left( 1 - |A|^{-\epsilon /4}\right) \right) . \end{aligned}$$
(6.19)

It is easy to check that the function \(f(x)=x-x^{1-\beta },\) where \(0<\beta \le 1,\) is minimized at \(x=(1-\beta )^{1/\beta },\) so that

$$\begin{aligned} f(x)\ge (1-\beta )^{1/\beta }-(1-\beta )^{1/\beta -1} = -\frac{\beta }{1-\beta }(1-\beta )^{1/\beta } \ge -\beta , \end{aligned}$$

since \((1-\beta )^{1/\beta }\le 1-\beta \) for \(0<\beta <1\). Applying this with \(x=\exp (-e^{-z})\) and \(\beta =|A|^{-\epsilon /4},\) we obtain from (6.19) that

$$\begin{aligned}{} & {} \exp (-e^{-z})-{\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|}\nonumber \\{} & {} \quad \ge \exp (-e^{-z}) -\exp \left( -e^{-z}\left( 1 - |A|^{-\epsilon /4}\right) \right) \ge -|A|^{-\epsilon /4}. \end{aligned}$$
(6.20)
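The elementary minimization bound \(f(x)=x-x^{1-\beta }\ge -\beta \) used above is also easy to confirm numerically (a grid check for illustration only, not a proof):

```python
# Grid check (illustration only) of the elementary bound used above:
# f(x) = x - x^{1-beta} >= -beta for x > 0 and 0 < beta <= 1, with the
# minimum attained at x = (1-beta)^{1/beta}.
def f(x, beta):
    return x - x ** (1 - beta)

ok = all(
    f(i / 100, beta) >= -beta
    for beta in (0.05, 0.25, 0.5, 0.9, 1.0)
    for i in range(1, 301)  # x ranging over (0, 3]
)
print(ok)
```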

Combining (6.18) and (6.20) we see that

$$\begin{aligned} \left| {\mathbb {P}}\left( {\mathcal {T}}(o) \le \epsilon u^*+\frac{z}{\mu (\Gamma _o)}\right) ^{|K|} -\exp (-e^{-z})\right| \le 3|A|^{-\epsilon /4} +|A|^{-\epsilon /4} =4|A|^{-\epsilon /4}. \end{aligned}$$

Using this and (6.13) in (6.11) proves that

$$\begin{aligned} \left| {\mathbb {P}}\left( {\mathcal {T}}(A)\le u^*+\frac{z}{\mu (\Gamma _o)} \Big | A_\epsilon = K\right) - \exp (-e^{-z})\right| \le |A|^{-\epsilon } +4|A|^{-\epsilon /4} \le 5|A|^{-\epsilon /4} \end{aligned}$$

proving (6.7). This completes the proof. \(\square \)

7 Examples and Discussion

Our main result, Theorem 1.1, is geared to work in the worst case possible, i.e., when all the points of A are grouped close together, and in order to prove Theorem 1.1 we had to assume an upper bound on \(\kappa _n^{-1},\) i.e., that \(\kappa _n^{-1}\le |A_n|^{1-8/(\log \log |A_n|)}\). As alluded to in the introduction, we believe that the distribution of the cover time may undergo a sort of phase transition as \(\kappa _n^{-1}\) increases even further. In order to indicate this, we will here consider two simple examples. In the first one we consider the cover time of two widely separated points, while in the second we consider two almost neighboring points. However, we start by observing that for \(A=\{x\},\) we clearly have from (2.4) that

$$\begin{aligned} {\mathbb {P}}(\mu (\Gamma _o) {\mathcal {T}}(x)\le u) =1-{\mathbb {P}}({\mathcal {T}}(x)\ge u/\mu (\Gamma _o)) =1-\left( G^{o,o}\right) ^{-\frac{u}{\mu (\Gamma _o)}} =1-e^{-u},\nonumber \\ \end{aligned}$$
(7.1)

where we used (2.5) in the last equality. Therefore, \(\mu (\Gamma _o) {\mathcal {T}}(x)\) is always an exponentially distributed random variable with parameter one.
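The last equality in (7.1) rests on the identity \((G^{o,o})^{-u/\mu (\Gamma _o)}=e^{-u},\) which holds exactly whenever \(\mu (\Gamma _o)=\log G^{o,o}\) (as supplied by (2.5)). A small numerical illustration, where the value chosen for \(G^{o,o}\) is an arbitrary sample:

```python
import math

# Illustration of the identity behind the last equality in (7.1):
# if mu(Gamma_o) = log G^{o,o}, then (G^{o,o})^{-u/mu(Gamma_o)} = e^{-u},
# so mu(Gamma_o)*T(x) is exactly exponential with parameter one.
G_oo = 7.3  # arbitrary sample value of G^{o,o} > 1
mu = math.log(G_oo)
for u in (0.1, 1.0, 5.0):
    assert abs(G_oo ** (-u / mu) - math.exp(-u)) < 1e-12
print("identity verified")
```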

Example 7.1

Consider a set with two points, say \(A=\{o,x\}\) and assume for convenience that |x| is even. Then, we start by noticing that

$$\begin{aligned} {\mathbb {P}}({\mathcal {T}}(o,x)\le u)= & {} {\mathbb {P}}({\mathcal {T}}(o)\le u,{\mathcal {T}}(x)\le u)\nonumber \\= & {} 2{\mathbb {P}}({\mathcal {T}}(o)\le u)-{\mathbb {P}}\left( \{{\mathcal {T}}(o)\le u\} \cup \{{\mathcal {T}}(x)\le u\}\right) \nonumber \\= & {} 2-2{\mathbb {P}}({\mathcal {T}}(o)\ge u) -\left( 1-{\mathbb {P}}({\mathcal {T}}(o)\ge u, {\mathcal {T}}(x)\ge u)\right) \nonumber \\= & {} 1-2{\mathbb {P}}({\mathcal {T}}(o)\ge u)+{\mathbb {P}}({\mathcal {T}}(o)\ge u, {\mathcal {T}}(x)\ge u) \nonumber \\= & {} 1-2{\mathbb {P}}(x \cap {\mathcal {C}}_u=\emptyset ) +{\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_u=\emptyset ). \end{aligned}$$
(7.2)

Using this, and (2.7) we then see that

$$\begin{aligned}{} & {} {\mathbb {P}}({\mathcal {T}}(o,x)\le u)-{\mathbb {P}}({\mathcal {T}}(o)\le u)^2\nonumber \\{} & {} \quad =1-2{\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset ) +{\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_u= \emptyset ) -(1-{\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset ))^2 \nonumber \\{} & {} \quad ={\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_u= \emptyset ) -{\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset )^2\nonumber \\{} & {} \quad ={\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset )^2 \left( \left( 1-\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2\right) ^{-u}-1\right) . \end{aligned}$$
(7.3)
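The inclusion-exclusion bookkeeping in (7.2) and (7.3) reduces to the algebraic identity \((1-2p+q)-(1-p)^2=q-p^2,\) with \(p={\mathbb {P}}(x\cap {\mathcal {C}}_u=\emptyset )\) and \(q={\mathbb {P}}(\{o,x\}\cap {\mathcal {C}}_u=\emptyset )\); a quick check on random inputs (illustration only):

```python
import random

# Check (illustration only) of the algebra behind (7.2)-(7.3): with
# p = P(x cap C_u = empty) and q = P({o,x} cap C_u = empty),
# P(T(o,x) <= u) - P(T(o) <= u)^2 = (1 - 2p + q) - (1 - p)^2 = q - p^2.
random.seed(0)
for _ in range(1000):
    p, q = random.random(), random.random()
    assert abs((1 - 2 * p + q) - (1 - p) ** 2 - (q - p * p)) < 1e-12
print("identity verified")
```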

Next, we trivially have that

$$\begin{aligned} \left( 1-\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2\right) ^{-u} \ge 1, \end{aligned}$$

and so it follows from (7.3) that

$$\begin{aligned} {\mathbb {P}}({\mathcal {T}}(o,x)\le u)-{\mathbb {P}}({\mathcal {T}}(o)\le u)^2 \ge 0. \end{aligned}$$
(7.4)

In order to bound the expression in (7.3) from above, we assume now that \(|x|\ge 10 \kappa ^{-2}\). We then have that (since |x| is assumed to be even)

$$\begin{aligned} G^{o,x}= & {} \sum _{n=|x|}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x} =\sum _{n=|x|/2}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,x}\nonumber \\\le & {} \sum _{n=|x|/2}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le 4e^{-(|x|/2-1)\kappa /4} \le 4e^{-(10\kappa ^{-2}/2-1)\kappa /4} \le 4e^{-\kappa ^{-1}}, \end{aligned}$$

where we used Lemma 3.1 in the first inequality and (3.11) in the second. Furthermore, as in the proof of (5.16), we see that

$$\begin{aligned}{} & {} \left( 1-\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2\right) ^{-u}\nonumber \\{} & {} \quad \le \left( 1-\left( \frac{4 e^{-\kappa ^{-1}}}{G^{o,o}}\right) ^2\right) ^{-u} =\exp \left( -u\log \left( 1-\left( \frac{4 e^{-\kappa ^{-1}}}{G^{o,o}}\right) ^2\right) \right) \nonumber \\{} & {} \quad \le \exp \left( 2u\left( \frac{4 e^{-\kappa ^{-1}}}{G^{o,o}}\right) ^2\right) \le \exp \left( 32ue^{-2\kappa ^{-1}}\right) \le 1+64ue^{-2\kappa ^{-1}} \end{aligned}$$
(7.5)

for \(\kappa ^{-1}\) large enough and since \(G^{o,o}\ge 1\); in the last step we used that \(e^y\le 1+2y\) for \(0\le y\le 1,\) which applies once \(32ue^{-2\kappa ^{-1}}\le 1\). Combining (7.3) with (7.5) we conclude that

$$\begin{aligned} {\mathbb {P}}({\mathcal {T}}(o,x)\le u)-{\mathbb {P}}({\mathcal {T}}(o)\le u)^2 \le {\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset )^2 64ue^{-2\kappa ^{-1}}. \end{aligned}$$
(7.6)

By fixing u and letting \(\kappa ^{-1}\rightarrow \infty ,\) we therefore see that (by using (3.14) and (2.3))

$$\begin{aligned}{} & {} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(o,x)\le u) -{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(o)\le u)^2|\nonumber \\{} & {} \quad \le {\mathbb {P}}(x \cap {\mathcal {C}}_{\frac{u}{\mu (\Gamma _o)}}= \emptyset )^2 64\frac{u}{\mu (\Gamma _o)}e^{-2\kappa ^{-1}} =64ue^{-2u}\frac{e^{-2\kappa ^{-1}}}{\mu (\Gamma _o)} \nonumber \\{} & {} \quad \le 64ue^{-2u}\frac{e^{-2\kappa ^{-1}}}{\log \log \kappa ^{-1}-\log \pi } \rightarrow 0, \end{aligned}$$
(7.7)

as \(\kappa ^{-1}\rightarrow \infty \). Thus, the re-scaled cover time \(\mu (\Gamma _o){\mathcal {T}}(o,x)\) behaves asymptotically like the maximum of two independent exponentially distributed random variables with parameter one. In light of the great distance between \(o\) and \(x,\) this is not surprising.
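The decay of the decoupling gap can also be seen numerically. In the sketch below (an illustration only, not a proof), \(G^{o,o}\) is fixed at an arbitrary sample value, \(G^{o,x}\) is replaced by the upper bound \(4e^{-\kappa ^{-1}}\) derived above, and the gap is computed from the exact formula in (7.3):

```python
import math

# Rough numerical illustration (not a proof) of the decoupling in (7.7).
# Assumptions: G_oo is an arbitrary sample value of G^{o,o}; G^{o,x} is
# replaced by its upper bound 4*exp(-1/kappa) derived above; the gap is
# computed from (7.3) with P(x cap C_u = empty) = (G^{o,o})^{-u}.
u = 1.0
G_oo = 2.0  # arbitrary sample value with G^{o,o} >= 1
gaps = []
for inv_kappa in (5.0, 10.0, 20.0):
    G_ox = 4 * math.exp(-inv_kappa)  # upper bound on G^{o,x} when |x| >= 10*kappa^{-2}
    p = G_oo ** (-u)
    gaps.append(p ** 2 * ((1 - (G_ox / G_oo) ** 2) ** (-u) - 1))
print(gaps)  # the gap shrinks rapidly as 1/kappa grows
```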

In our next example, we will consider two points which are close. We will need the following lemma, whose proof is similar to that of Lemma 3.2 and is deferred to the appendix.

Lemma 7.2

We have that for any \(\kappa >0,\)

$$\begin{aligned} G^{o,(1,1)} \ge \frac{\log \kappa ^{-1}}{\pi }-1. \end{aligned}$$
(7.8)

Example 7.3

This example is similar to Example 7.1 in that \(A=\{o,x\}\). However, we will here assume that \(x=(1,1)\) so that the two points are close to each other. We start by noting that by (3.12) and Lemma 7.2 we have that

$$\begin{aligned} G^{o,o}-G^{o,x} \le \frac{\log \kappa ^{-1}}{\pi }+2 -\left( \frac{\log \kappa ^{-1}}{\pi }-1\right) =3. \end{aligned}$$

We therefore have that

$$\begin{aligned}{} & {} \left( 1-\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2\right) ^{-u} \ge \left( 1-\left( \frac{G^{o,o}-3}{G^{o,o}}\right) ^2\right) ^{-u}\\{} & {} \quad =\left( 1-\left( 1-\frac{3}{G^{o,o}}\right) ^2\right) ^{-u} = \left( \frac{6}{G^{o,o}}-\frac{9}{(G^{o,o})^2}\right) ^{-u}\\{} & {} \quad \ge \left( \frac{6}{G^{o,o}}\right) ^{-u} =\frac{6^{-u}}{{\mathbb {P}}(x \cap {\mathcal {C}}_{u}= \emptyset )}, \end{aligned}$$

and therefore (as in (7.5))

$$\begin{aligned}{} & {} {\mathbb {P}}(\{o,x\}\cap {\mathcal {C}}_u =\emptyset ) ={\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset )^2 \left( 1-\left( \frac{G^{o,x}}{G^{o,o}}\right) ^2\right) ^{-u}\nonumber \\{} & {} \quad \ge {\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset )^2 \frac{6^{-u}}{{\mathbb {P}}(x \cap {\mathcal {C}}_{u}=\emptyset )} ={\mathbb {P}}(x \cap {\mathcal {C}}_u= \emptyset )6^{-u}. \end{aligned}$$
(7.9)
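A quick numerical check of the inequality in (7.9) (illustration only; the values of \(G^{o,o}\) are arbitrary samples, and \(G^{o,x}=G^{o,o}-3\) is the worst case allowed by the display above):

```python
# Numerical check (illustration only) of (7.9): with G_ox = G_oo - 3 (the
# worst case allowed by Lemma 7.2 and (3.12)) and p = (G^{o,o})^{-u},
# one should have p^2 * (1 - (G_ox/G_oo)^2)^{-u} >= p * 6^{-u}.
ok = True
for G_oo in (10.0, 50.0, 200.0):  # arbitrary sample values of G^{o,o}
    G_ox = G_oo - 3
    for u in (0.5, 1.0, 3.0):
        p = G_oo ** (-u)
        lhs = p ** 2 * (1 - (G_ox / G_oo) ** 2) ** (-u)
        ok = ok and lhs >= p * 6 ** (-u)
print(ok)
```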

Noting that trivially,

$$\begin{aligned} {\mathbb {P}}({\mathcal {T}}(o)\le u/\mu (\Gamma _o)) -{\mathbb {P}}({\mathcal {T}}(o,x)\le u/\mu (\Gamma _o))\ge 0, \end{aligned}$$

we can therefore see that by using (7.9) and (7.2)

$$\begin{aligned}{} & {} |{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(o,x)\le u) -{\mathbb {P}}(\mu (\Gamma _o){\mathcal {T}}(o)\le u)|\\{} & {} \quad ={\mathbb {P}}({\mathcal {T}}(o)\le u/\mu (\Gamma _o)) -{\mathbb {P}}({\mathcal {T}}(o,x)\le u/\mu (\Gamma _o)) \\{} & {} \quad =1-{\mathbb {P}}(x \cap {\mathcal {C}}_{u/\mu (\Gamma _o)}= \emptyset ) -(1-2{\mathbb {P}}(x \cap {\mathcal {C}}_{u/\mu (\Gamma _o)}= \emptyset )\\{} & {} \qquad +{\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{u/\mu (\Gamma _o)}= \emptyset ))\\{} & {} \quad ={\mathbb {P}}(x \cap {\mathcal {C}}_{u/\mu (\Gamma _o)}= \emptyset ) -{\mathbb {P}}(\{o,x\} \cap {\mathcal {C}}_{u/\mu (\Gamma _o)}= \emptyset )\\{} & {} \quad \le {\mathbb {P}}(x \cap {\mathcal {C}}_{u/\mu (\Gamma _o)} = \emptyset ) \left( 1-6^{-{u/\mu (\Gamma _o)}} \right) \\{} & {} \quad =e^{-u}\left( 1-6^{-{u/\mu (\Gamma _o)}} \right) \rightarrow 0, \end{aligned}$$

when \(\kappa ^{-1} \rightarrow \infty ,\) since it follows from (3.4) that in this case \(\mu (\Gamma _o) \rightarrow \infty \). We conclude that the re-scaled cover time \(\mu (\Gamma _o){\mathcal {T}}(o,x)\) behaves like a single exponentially distributed random variable with parameter one.

Recall the discussion in the Introduction concerning a possible phase transition depending on the rate at which \(\kappa _n \rightarrow 0\). The purpose of our next example is to demonstrate the (perhaps unsurprising) fact that if we allow the separation distance between the vertices in \(A_n\) to depend on the killing rate \(\kappa _n,\) such a phase transition will be absent. Allowing the separation distance to depend on \(\kappa _n\) may feel like “cheating”, but serves to demonstrate how the geometry of the sets \(A_n\) plays an important role. In order not to make this example, and indeed the entire paper, forbiddingly long, we shall be somewhat informal. Otherwise we would have to repeat large parts of Sects. 5 and 6.

Example 7.4

Consider a (large) set A and a killing rate \(\kappa \) such that \(\kappa ^{-1}\ge \log |A|,\)

$$\begin{aligned} |x-y|\ge 10\kappa ^{-2} \end{aligned}$$
(7.10)

for every pair of distinct \(x,y\in A,\) and such that \(|x-y|\) is even for every \(x,y\in A\). We then have that

$$\begin{aligned}{} & {} G^{o,x-y} =\sum _{n=|x-y|}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{n} W_n^{o,x-y} =\sum _{n=|x-y|/2}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,x-y}\\{} & {} \quad \le \sum _{n=|x-y|/2}^{\infty }\left( \frac{1}{4+\kappa }\right) ^{2n} W_{2n}^{o,o} \le 4e^{-(|x-y|/2-1)\kappa /4} \le 4e^{-(10\kappa ^{-2}/2-1)\kappa /4} \le 4e^{-\kappa ^{-1}}, \end{aligned}$$

where we used Lemma 3.1 in the first inequality and (3.11) in the second. Then, as in Lemma 5.5 we have that (using that \(\mu (\Gamma _o)\ge 1\) whenever \(\kappa ^{-1}\) is larger than \(e^9\) due to (3.14)),

$$\begin{aligned}{} & {} {\mathbb {P}}(x,y\in A_\epsilon ) \le |A|^{-2(1-\epsilon )} \exp \left( 2(1-\epsilon )\frac{\log |A|}{\mu (\Gamma _o)} \left( \frac{G^{o,y-x}}{G^{o,o}}\right) ^2\right) \\{} & {} \quad \le |A|^{-2(1-\epsilon )} \exp \left( 2\log |A| \left( 16 e^{-2\kappa ^{-1}}\right) \right) \\{} & {} \quad \le |A|^{-2(1-\epsilon )} \exp \left( e^{-\kappa ^{-1}}\right) \le |A|^{-2(1-\epsilon )}\left( 1+2e^{-\kappa ^{-1}}\right) , \end{aligned}$$

where we used the assumption that \(\kappa ^{-1}\ge \log |A|\) in the penultimate inequality and the bound \(e^x\le 1+2x\) for \(0\le x\le 1\) in the last.
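The penultimate inequality reduces to \(32\log |A|\,e^{-2\kappa ^{-1}}\le e^{-\kappa ^{-1}},\) i.e. \(32\log |A|\le e^{\kappa ^{-1}},\) which at the boundary case \(\kappa ^{-1}=\log |A|\) holds once \(|A|\ge 32\log |A|\). A quick check on sample sizes (illustration only):

```python
import math

# Check (illustration only) of the penultimate inequality above at the
# boundary case 1/kappa = log|A|: then 32*log|A|*exp(-2/kappa) <= exp(-1/kappa)
# reduces to 32*log|A| <= |A|, which holds once |A| is large.
ok = True
for A in (10 ** 3, 10 ** 6, 10 ** 9):  # sample sizes for |A|
    inv_kappa = math.log(A)
    ok = ok and 32 * math.log(A) * math.exp(-2 * inv_kappa) <= math.exp(-inv_kappa)
print(ok)
```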

Next, we consider sequences \((A_n,\kappa _n)_{n \ge 1}\) of sets and killing rates with the property of \((A, \kappa )\) above, and such that \(|A_n| \rightarrow \infty \). We can then use the machinery of Sects. 5 and 6, noting that we only need to consider the case where \(x\) and \(y\) are “well separated” (i.e., Lemma 5.5). Applying this machinery demonstrates that no upper bound for \(\kappa _n^{-1}\) is needed when proving a statement analogous to Theorem 1.1 in this case. This shows that if we always have a large separation between the points of \(A_n,\) as described by (7.10), we obtain a Gumbel distribution as the limit even when \(\kappa _n \rightarrow 0\) exceedingly fast.

We end this section with an informal discussion (the discussion mentioned in the Introduction) concerning the case where \(\kappa _n^{-1}\gg |A_n|\) and \(A_n\) is a (very large) ball or square. In this case, we believe that for sufficiently large values of \(\kappa _n^{-1},\) the set \(A_n\) will asymptotically be covered by the first loop that touches it. The reason for this belief can be explained in two steps as follows.

Step 1: After an exponentially distributed time with rate \(\mu (\Gamma _{A_n})\) where

$$\begin{aligned} \Gamma _{A_n}:=\bigcup _{x \in A_n} \Gamma _x, \end{aligned}$$

the first loop that touches \(A_n\) appears. With very high probability this loop should be of length of order at least \(\log \kappa _n^{-1}\). The reason we expect this is that, when analyzing \(\mu (\Gamma _o)\) starting from (2.2), one can show that the contribution from loops of length \(n\) smaller than \(\log \kappa _n^{-1}\) is small compared to the total sum, which is of order \(\log \log \kappa _n^{-1}\) (due to (3.13) and (3.14)).

Step 2: Consider a typical loop from Step 1, of length of order at least \(\log \kappa _n^{-1},\) touching \(A_n\). Then the probability that it in fact covers the entirety of \(A_n\) is again very high whenever \(\kappa _n^{-1}\) is large enough (note that therefore \(\mu (\Gamma _{A_n}) \approx \mu (\Gamma _o)\)). The reason we believe this to be true stems from considering the corresponding problem for a simple symmetric random walk \(S_n\) started at the origin of \({\mathbb {Z}}^2\). Let \(T_n\) be the first time when the walk has visited every site \(x\in B_n,\) where \(B_n\) is the ball of radius n in \({\mathbb {Z}}^2\). According to [11] (see also the references within for background on this challenging problem), we have that

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathbb {P}}(\log T_n \le t(\log n)^2)=e^{-4/t}. \end{aligned}$$
(7.11)

From this, one can then conclude that “with high probability”, the ball \(B_n\) will be covered by time \(\exp \left( \gamma (n)(\log n)^2\right) ,\) where \(\gamma (n)\) is chosen appropriately. By letting

$$\begin{aligned} Tr(n):=\{x\in {\mathbb {Z}}^2: S_k=x \text { for some } k=0,1,\ldots ,n\} \end{aligned}$$

denote the trace of the random walk until time n and observing that

$$\begin{aligned} {\mathbb {P}}(\log T_n \le t(\log n)^2) = {\mathbb {P}}(T_n \le n^{t \log n}) = {\mathbb {P}}(B_n \subset Tr(n^{t \log n})), \end{aligned}$$

it follows from (7.11) that

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathbb {P}}(B_n \subset Tr(n^{t \log n}))=e^{-4/t}. \end{aligned}$$
(7.12)
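The reparametrization linking (7.11) and (7.12) is simply the identity \(n^{t\log n}=\exp (t(\log n)^2)\); a quick check:

```python
import math

# The reparametrization linking (7.11) and (7.12): the event
# {log T_n <= t (log n)^2} equals {T_n <= n^{t log n}}, since
# n^{t log n} = exp(t (log n)^2) exactly.
for n in (10, 100, 1000):
    for t in (0.5, 2.0):
        assert math.isclose(n ** (t * math.log(n)), math.exp(t * math.log(n) ** 2))
print("reparametrization verified")
```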

In turn, one can hope that a similar statement can be inferred for a loop rooted at the origin by simply conditioning the random walk to be back at the origin at some suitable time. For instance, from a statement along the lines of (ignoring that \(n^{t \log n}\) may not be an integer)

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathbb {P}}(B_n \subset Tr(n^{t \log n}) |S_{n^{t \log n}}=o)=e^{-4/t}, \end{aligned}$$
(7.13)

one can infer that with very high probability, the ball \(B_n\) will be covered by a loop of length \(n^{\gamma (n) \log n}\) where again \(\gamma (n)\) is chosen appropriately. However, there does not seem to be an easy way to infer (7.13) directly from (7.12) without knowing an explicit rate of convergence in (7.12). The issue is of course that we are conditioning on an event which is known to have probability of order \(\left( n^{t \log n}\right) ^{-1}\). In order to turn the intuition above into a proof, one would instead have to prove (7.13) by other means. One may attempt to adapt the techniques of [11], but even if this were possible, that may very well be an entire project in itself and outside the scope of this paper.