Abstract
Suppose that X is a simple random walk on \({\mathbb {Z}}_n^d\) for \(d \ge 3\) and, for each t, we let \({\mathcal {U}}(t)\) consist of those \(x \in {\mathbb {Z}}_n^d\) which have not been visited by X by time t. Let \(t_{\mathrm {cov}}\) be the expected amount of time that it takes for X to visit every site of \({\mathbb {Z}}_n^d\). We show that there exists \(0 < \alpha _0(d) \le \alpha _1(d) < 1\) and a time \(t_* = t_{\mathrm {cov}}(1+o(1))\) as \(n \rightarrow \infty \) such that the following is true. For \(\alpha > \alpha _1(d)\) (resp. \(\alpha < \alpha _0(d)\)), the total variation distance between the law of \({\mathcal {U}}(\alpha t_*)\) and the law of i.i.d. Bernoulli random variables indexed by \({\mathbb {Z}}_n^d\) with success probability \(n^{-\alpha d}\) tends to 0 (resp. 1) as \(n \rightarrow \infty \). Let \(\tau _\alpha \) be the first time t that \(|{\mathcal {U}}(t)| = n^{d-\alpha d}\). We also show that the total variation distance between the law of \({\mathcal {U}}(\tau _\alpha )\) and the law of a uniformly chosen set from \({\mathbb {Z}}_n^d\) with size \(n^{d-\alpha d}\) tends to 0 (resp. 1) for \(\alpha > \alpha _1(d)\) (resp. \(\alpha < \alpha _0(d)\)) as \(n \rightarrow \infty \).
1 Introduction
Suppose that X is a simple random walk on \({\mathbb {Z}}_n^d\) for \(d \ge 3\) started from the stationary distribution. For each \(x \in {\mathbb {Z}}_n^d\), we let
$$\tau _x = \min \{ t \ge 0 : X_t = x \}$$
be the first time that X visits x. For \(t\ge 0\) we define the process \((Q_x(t))\) and the set \({\mathcal {U}}(t)\) respectively by
$$Q_x(t) = {\mathbf {1}}_{\{\tau _x > t\}} \quad \text {and}\quad {\mathcal {U}}(t) = \{ x \in {\mathbb {Z}}_n^d : Q_x(t) = 1 \}.$$
The purpose of the present work is to study the law of the set \({\mathcal {U}}(t)\) for different values of t. The correlation structure of \((Q_x(t))\) was analyzed in the physics literature by Brummelhuis and Hilhorst [3, 4]. They show that the probability that any two given points \(x,y \in {\mathbb {Z}}_n^d\) which are far from each other are not visited by time t is asymptotically the same as in the case in which the points are independent, i.e., \({{\mathbb {P}}} \left( Q_x(t) =1, Q_y(t) = 1\right) \sim {{\mathbb {P}}} \left( Q_x(t) = 1\right) {{\mathbb {P}}} \left( Q_y(t) = 1\right) \) as \(t,n\rightarrow \infty \) at a certain rate. This leads them to assert that \({\mathcal {U}}(t)\) is “statistically uniformly distributed at large distances” [4, Section 4]. In this article, we study in what sense the entire joint law of \((Q_x(t))\) is uniformly distributed for “large” times t rather than focus on its finite dimensional distributions.
In order to state our results and put them into better context with the existing literature, we first introduce the following parameters for X. The maximal hitting time (\(t_{\mathrm {hit}}\)) and cover time (\(t_{\mathrm {cov}}\)) are respectively given by
$$t_{\mathrm {hit}} = \max _{x,y \in {\mathbb {Z}}_n^d} {\mathbb {E}}_x\left[ \tau _y\right] \quad \text {and}\quad t_{\mathrm {cov}} = {\mathbb {E}}\Big [ \max _{x \in {\mathbb {Z}}_n^d} \tau _x \Big ].$$
The times \(t_{\mathrm {hit}},t_{\mathrm {cov}}\) are related in that \(t_{\mathrm {cov}}= t_{\mathrm {hit}}\log (n^d) (1+o(1))\) (see [16] as well as [15, Chapter 11], in particular [15, Exercise 11.4]). The rate at which the o(1) term tends to 0 will be important for technical reasons, so in some cases we will describe times in terms of \(t_{\mathrm {hit}}\) or other ways rather than directly in terms of \(t_{\mathrm {cov}}\). For measures \(\mu \) and \(\nu \), we recall that the total variation distance is given by
$$\Vert \mu - \nu \Vert _{\mathrm {TV}} = \sup _A |\mu (A) - \nu (A)|,$$
where the supremum is taken over all measurable subsets A.
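For discrete laws on a finite set the supremum above is attained at \(A = \{x : \mu (x) > \nu (x)\}\), which yields the familiar half-\(\ell ^1\) formula. As a quick illustration (not part of the argument; the two distributions below are arbitrary):

```python
# Total variation distance between two laws on a finite set:
# ||mu - nu||_TV = sup_A |mu(A) - nu(A)| = (1/2) * sum_x |mu(x) - nu(x)|.

def tv_distance(mu, nu):
    """mu, nu: dicts mapping outcomes to probabilities."""
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

mu = {"a": 0.5, "b": 0.3, "c": 0.2}
nu = {"a": 0.25, "b": 0.25, "c": 0.5}
print(tv_distance(mu, nu))  # 0.3
```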
We will analyze the structure of \({\mathcal {U}}(t)\) at times of the form \(\alpha t_{\mathrm {cov}}\) for \(\alpha > 0\). We mention here three important regimes of \(\alpha \). The first is when \(\alpha > 1\). It is a consequence of work by Aldous [1] that for any \(\alpha > 1\) and \(t > \alpha t_{\mathrm {cov}}\) we have \({\mathcal {U}}(t) = \varnothing \) with high probability. The case that \(\alpha = 1\) was studied by Belius [2] using random interlacements [21] and later by Imbuzeiro-Oliveira and Prata [13, 19] using hitting time estimates [12]. The main focus of [2] is to obtain the Gumbel fluctuations of the cover time of \({\mathbb {Z}}_n^d\) and as a consequence of his analysis he shows in [2, Corollary 2.4] that the set of uncovered points at time \(t_{\beta } = t_{\mathrm {hit}}(\log (n^d) + \beta )\) for \(\beta \in {\mathbb {R}}\) suitably rescaled converges to a Poisson point process on \(({\mathbb {R}}/{\mathbb {Z}})^d\) of intensity \(e^{-\beta }\lambda \) where \(\lambda \) denotes Lebesgue measure on \(({\mathbb {R}}/{\mathbb {Z}})^d\). This was improved upon in [13, 19], where it is shown that the Gumbel fluctuations for the cover time hold for more general graphs. Moreover they show that the total variation distance between the law of \({\mathcal {U}}(t_{\beta })\) and that of a random subset of \({\mathbb {Z}}_n^d\) where points are included independently with probability \(e^{-\beta }n^{-d}\) tends to 0 as \(n\rightarrow \infty \). The regime of times considered in [2, 13, 19] is special because \(|{\mathcal {U}}(t_{\beta })|\) is tight as \(n \rightarrow \infty \) for any fixed \(\beta \in {\mathbb {R}}\). Additionally, the law of the evolution of \({\mathcal {U}}(t_{\beta })\) as \(\beta \) varies is also described in [13, 19].
The final regime of times is when \(\alpha \in (0,1)\). In contrast to the cases described above, for such choices of \(\alpha \) the size of \(|{\mathcal {U}}(t)|\) grows with n. In particular, it is shown in the proof of [18, Theorem 4.1] that it follows from [1] that \(|{\mathcal {U}}(t)| = n^{d-\alpha d + o(1)}\) with high probability as \(n \rightarrow \infty \). The combinatorial method of [13, 19] does not extend directly to this regime of times because the number of possible sets one is led to consider is simply too large. The following alternative “uniformity” statement for \({\mathcal {U}}(t)\) was proved in [17]. If \(\alpha \in (\tfrac{1}{2},1)\) (resp. \(\alpha \in (0,\tfrac{1}{2})\)) then \({\mathcal {U}}(t)\) is (resp. is not) “uniformly random” in the following sense. Suppose that \({\mathcal {V}}\subseteq {\mathbb {Z}}_n^d\) is chosen independently of X where each \(x \in {\mathbb {Z}}_n^d\) is included in \({\mathcal {V}}\) independently with probability \(\tfrac{1}{2}\). Then the total variation distance between the laws of \({\mathcal {V}}{\setminus } {\mathcal {U}}(t)\) and \({\mathcal {V}}\) tends to 0 (resp. 1) as \(n \rightarrow \infty \) for \(\alpha \in (\tfrac{1}{2},1)\) (resp. \((0,\tfrac{1}{2})\)). That is, for \(\alpha \in (\tfrac{1}{2},1)\), \({\mathcal {U}}(t)\) in a certain sense does not possess any sort of systematic geometric structure that would make it possible to determine from \({\mathcal {V}}{\setminus } {\mathcal {U}}(t)\) the location of the points in \({\mathcal {U}}(t)\). The threshold \(\alpha =\tfrac{1}{2}\) is important because \(|{\mathbb {Z}}_n^d {\setminus } ({\mathcal {V}}{\setminus } {\mathcal {U}}(t))| = \tfrac{1}{2}n^d + \Theta (n^{d-\alpha d + o(1)})\) for \(\alpha \in (0,\tfrac{1}{2})\) while \(|{\mathbb {Z}}_n^d {\setminus } {\mathcal {V}}| = \tfrac{1}{2}n^d + O(n^{d/2+o(1)})\) by the central limit theorem, so in this case the two sets can be distinguished for elementary reasons. 
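The growth of \(|{\mathcal {U}}(t)|\) with n in this regime is easy to observe numerically. The following sketch (an illustration only, far from the asymptotic regime; the function name and parameters are ours) runs a simple random walk on a small torus and returns the set of unvisited sites:

```python
import itertools
import random

def uncovered_set(n, d, t, seed=0):
    """Return the set of sites of Z_n^d not visited by a simple random walk
    run for t steps from a uniformly random starting point."""
    rng = random.Random(seed)
    pos = tuple(rng.randrange(n) for _ in range(d))
    visited = {pos}
    for _ in range(t):
        i = rng.randrange(d)                      # coordinate to move in
        step = rng.choice((-1, 1))                # direction of the step
        pos = tuple((c + step) % n if j == i else c for j, c in enumerate(pos))
        visited.add(pos)
    return set(itertools.product(range(n), repeat=d)) - visited
```

Since the same seed reproduces the same trajectory, a longer run extends a shorter one, so the uncovered set for a larger t is contained in the one for a smaller t.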
We remark in passing that a similar problem for “thin” 3D tori is considered in [5] and the \(d=2\) version of this problem is solved in [18] using results from [9].
In contrast to [17], in this work we are going to study the asymptotic law of \({\mathcal {U}}(t)\) itself in the sense of [13, 19] in the regime of times with \(\alpha \in (0,1)\) without adding the extra noise. It will be rather important for us to choose the time t at which we consider \({\mathcal {U}}(t)\) very precisely since we will later need a very accurate estimate of \({{\mathbb {P}}} \left( \tau _x > t\right) \). In the theorem statement which follows, \(t_*\) indicates a time which we will define later in the article (Eq. (4.3)) and it satisfies
$$t_* = t_{\mathrm {cov}}(1+o(1)) \quad \text {as } n \rightarrow \infty .$$
For any \(\alpha > 0\) we denote by \(\nu _{\alpha , n}\) the law of \(\{x:Z_x=1\}\), where the \(Z_x\) are i.i.d. Bernoulli random variables indexed by \({\mathbb {Z}}_n^d\) with success probability \(n^{-\alpha d}\). We will write \({\mathcal {L}}(\cdot )\) to indicate the law of a random variable. Our first main result is the following.
Theorem 1.1
For each \(d\ge 3\) there exist \(\alpha _0(d),\alpha _1(d)\in (0,1)\) with \(\alpha _0(d) \le \alpha _1(d)\) such that for all \(\alpha \in (\alpha _1(d),\infty )\) we have
$$\Vert {\mathcal {L}}({\mathcal {U}}(\alpha t_*)) - \nu _{\alpha ,n} \Vert _{\mathrm {TV}} \rightarrow 0 \quad \text {as } n \rightarrow \infty ,$$
and for all \(\alpha \in (0,\alpha _0(d))\) we have
$$\Vert {\mathcal {L}}({\mathcal {U}}(\alpha t_*)) - \nu _{\alpha ,n} \Vert _{\mathrm {TV}} \rightarrow 1 \quad \text {as } n \rightarrow \infty .$$
In analogy with [9], we refer to the points in \({\mathcal {U}}(\alpha t_*)\) as “\(\alpha \)-late” for X. The reason for the terminology “late” is that the amount of time required by X to hit them is much larger than the maximal hitting time. Our definition of \(\alpha \)-late is slightly different than that given in [9] because we use \(t_*\) instead of \(t_{\mathrm {cov}}\).
Let \(p_d\) be the probability that a simple random walk in \({\mathbb {Z}}^d\) starting from 0 returns to 0 before escaping to \(\infty \). The values of \(\alpha _0(d)\) and \(\alpha _1(d)\) from Theorem 1.1 are explicitly given by
The threshold \(\alpha _0(d)\) is special because, as we show in Sects. 4 and 5, \({\mathcal {U}}(\alpha t_*)\) with high probability has neighbouring points for \(\alpha \in (0,\alpha _0(d))\) but does not for \(\alpha > \alpha _0(d)\). In fact, for every \(\alpha > \alpha _0(d)\) the distance between any pair of distinct points in \({\mathcal {U}}(\alpha t_*)\) is at least \(n^{p_d}\) with high probability. That is, the minimal distance between distinct points in \({\mathcal {U}}(\alpha t_*)\) jumps from 1 to being larger than \(n^{p_d}\) as \(\alpha \) crosses the threshold \(\alpha _0(d)\) with high probability. We emphasize that \(\alpha _0(d) > \tfrac{1}{2}\) for all \(d \ge 3\) and \(\alpha _0(d) \rightarrow \tfrac{1}{2}\) as \(d \rightarrow \infty \). The value \(\tfrac{1}{2}\) is significant due to the connection between this work and [17] described above.
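The return probability \(p_d\) is a classical quantity: by Pólya's theorem it is strictly less than 1 for \(d \ge 3\), with \(p_3 \approx 0.3405\). A simple Monte Carlo sketch (an illustration only; the truncation parameters and function name are our choices, and walks not returned by the cutoff are counted as escaped, a small downward bias):

```python
import random

def estimate_return_prob(d, trials=3000, max_steps=1000, seed=1):
    """Monte Carlo estimate of p_d, the probability that a simple random walk
    on Z^d started at 0 returns to 0 before escaping to infinity."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(max_steps):
            pos[rng.randrange(d)] += rng.choice((-1, 1))
            if not any(pos):            # walk is back at the origin
                returns += 1
                break
    return returns / trials
```

For \(d=1,2\) the walk is recurrent, so the estimate approaches 1 as the cutoff grows; for \(d=3\) it should hover near 0.34.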
Theorem 1.1 describes the asymptotic behavior of the law of \({\mathcal {U}}(t)\) at a deterministic time t of a specific form. In our second main result, we describe the asymptotic behavior of \({\mathcal {U}}(\tau )\) where \(\tau \) is the first time t that \({\mathcal {U}}(t)\) contains a certain number of points. More specifically, for each \(\alpha > 0\), we let
$$\tau _\alpha = \min \{ t \ge 0 : |{\mathcal {U}}(t)| = n^{d-\alpha d} \}.$$
We also let \({\mathcal {W}}_\alpha \) be a subset of \({\mathbb {Z}}_n^d\) picked uniformly at random among all subsets of \({\mathbb {Z}}_n^d\) containing exactly \(n^{d-\alpha d}\) points. Then we have the following:
Theorem 1.2
Suppose that \(d\ge 3\) and that \(\alpha _0(d),\alpha _1(d) \in (0,1)\) are as in Theorem 1.1. For all \(\alpha \in (\alpha _1(d),\infty ),\) we have
$$\Vert {\mathcal {L}}({\mathcal {U}}(\tau _\alpha )) - {\mathcal {L}}({\mathcal {W}}_\alpha ) \Vert _{\mathrm {TV}} \rightarrow 0 \quad \text {as } n \rightarrow \infty ,$$
and for all \(\alpha \in (0,\alpha _0(d))\) we have
$$\Vert {\mathcal {L}}({\mathcal {U}}(\tau _\alpha )) - {\mathcal {L}}({\mathcal {W}}_\alpha ) \Vert _{\mathrm {TV}} \rightarrow 1 \quad \text {as } n \rightarrow \infty .$$
We will derive Theorem 1.2 from Theorem 1.1 using an estimate which gives that the first hitting distribution of X on \(A \subseteq {\mathbb {Z}}_n^d\), where A is a set of points which is “well-separated,” is closely approximated by the uniform distribution on A.
A number of questions naturally arise from this work (exact values where the transitions from non-uniformity to uniformity occur, existence of a phase transition, behaviour for \(\alpha \in (0,\alpha _0(d))\), other graphs, etc.) which we state more carefully in Sect. 7.
1.1 Relation to other work
The structure of \({\mathcal {U}}(\alpha t_{\mathrm {cov}})\) for \(d=2\) was also studied in the physics literature by [3] and later in the mathematics literature by [9]. In contrast to the case that \(d \ge 3\), \({\mathcal {U}}(\alpha t_{\mathrm {cov}})\) for \(d=2\) is not uniform for any \(\alpha \in (0,1)\). In particular, the last visited set tends to organize itself into clusters which are of diameter up to \(n^\beta \) where \(\beta =\beta (\alpha ) > 0\) for any \(\alpha \in (0,1)\). The reason for the difference is that random walk for \(d=2\) is recurrent, which leads to longer-range correlations, while for \(d \ge 3\) it is transient. Thus the process of coverage in the two regimes is very different. The work [9] is part of a larger series which also includes [6–8], and the proofs of Theorems 1.1 and 1.2 employ several techniques which are present in the articles of this series.
1.2 Notation and assumptions
Throughout this article, we shall always assume that \(d \ge 3\) unless explicitly stated otherwise. For functions f, g we will write \(f(n) \lesssim g(n)\) if there exists a constant \(c > 0\) such that \(f(n) \le c g(n)\) for all n. We write \(f(n) \gtrsim g(n)\) if \(g(n) \lesssim f(n)\). Finally, we write \(f(n) \asymp g(n)\) if both \(f(n) \lesssim g(n)\) and \(f(n) \gtrsim g(n)\). Many of the proofs will involve a number of different constants which we will often indicate simply by c. We write \({\mathbb {P}}\) without the subscript \(\pi \) to indicate the law of a simple random walk in \({\mathbb {Z}}_n^d\) started from stationarity. We will also write \({\mathbb {P}}_x\) to indicate the law of the random walk when started from x. We denote by \({\mathbb {E}}\) and \({\mathbb {E}}_x\) the corresponding expectations.
1.3 Strategy
The proofs of Theorems 1.1 and 1.2 require many different estimates. We now provide an overview of the different steps and how they fit together. Throughout, we assume that we have fixed some value of \(\alpha \in (0,1)\) and \(d \ge 3\).
Spatial decomposition We fix two small parameters \(\varepsilon , \varphi \in (0,1)\) and let \(\beta = \alpha -\varepsilon \). We then partition \({\mathbb {Z}}_n^d\) into disjoint boxes of side length \(n^\beta +n^\varphi \) and consider in each such box concentric sub-boxes of side lengths \(n^\beta -n^\varphi \) and \(n^\beta \) (see Fig. 1). We let \({\mathcal {S}}_{\beta }\) denote the collection of the latter type of concentric boxes and for each \(S \in {\mathcal {S}}_{\beta }\) we let \(\overline{S}\) (resp. \(\underline{S}\)) be the box with side length \(n^\beta +n^\varphi \) (resp. \(n^\beta -n^\varphi \)) which contains it (resp. is contained in it). We also let \({\mathcal {A}}= {\mathbb {Z}}_n^d {\setminus } \cup _{S \in {\mathcal {S}}_{\beta }} \underline{S}\) be the region between the outside and inside boxes. Note that \(|{\mathcal {A}}| \asymp n^{d-d\beta } \times n^{(d-1)\beta + \varphi } = n^{d-\beta +\varphi }\). The probability that a given point is not visited at time \(\alpha t_*\) is \(n^{-\alpha d(1+o(1))}\); this follows from the proof of [18, Theorem 4.1] using [1] as mentioned earlier and the vertex transitivity of \({\mathbb {Z}}_n^d\) (we will also give a more precise version of this result which is specific to \({\mathbb {Z}}_n^d\)). Consequently, for \(\alpha > (d+\varphi )/(d+1)\) we can choose \(\varepsilon > 0\) small enough so that we have \({\mathcal {A}}\cap {\mathcal {U}}(\alpha t_*) = \varnothing \) with high probability. Therefore it suffices to prove the uniformity of the last visited points which are contained in \(\cup _{S \in {\mathcal {S}}_{\beta }} \underline{S}\). This leads us to consider the following modified version of the problem. 
We let \(\widetilde{{\mathcal {U}}}(\alpha t_*)\) consist of those points in each box \(\underline{S}\) for \(S \in {\mathcal {S}}_{\beta }\) which have not been visited by the first time that the number of excursions made by X from \(\partial S\) to \(\partial \overline{S}\) by time \(\alpha t_*\) exceeds the typical number E. We show that we have sufficiently good concentration for the number of such excursions up to a given time so that \({\mathcal {U}}(\alpha t_*) = \widetilde{{\mathcal {U}}}(\alpha t_*)\) with high probability. We then prove the uniformity of \(\widetilde{{\mathcal {U}}}(\alpha t_*)\). This modified problem is useful to consider because the random variables \((\widetilde{{\mathcal {U}}}(\alpha t_*) \cap \underline{S})_{S \in {\mathcal {S}}_{\beta }}\) are independent conditional on the \(\sigma \)-algebra \({\mathcal {F}}\) generated by the entrance and exit points of these excursions. Thus to bound the total variation distance between \({\mathcal {L}}(\widetilde{{\mathcal {U}}}(\alpha t_*))\) and \(\nu _{\alpha ,n}\) it suffices to bound the expectation of the sum of the total variation distances between the conditional laws of the last visited set in each \(\underline{S}\) for \(S \in {\mathcal {S}}_{\beta }\) given \({\mathcal {F}}\) and a random subset of \(\underline{S}\) where points are included independently with probability \(n^{-\alpha d}\) (explained below).
Uniformity in each box Our strategy for proving the uniformity of \(\widetilde{{\mathcal {U}}}(\alpha t_*) \cap \underline{S}\) for a given \(S \in {\mathcal {S}}_{\beta }\) is based on the same high level idea used in [13, 19] (inclusion–exclusion and the Bonferroni inequalities) though the implementation is different. The first step is to show that for each \(\varepsilon > 0\) there exists \(M <\infty \) so that with high probability \(\max _{S \in {\mathcal {S}}_{\beta }} |\widetilde{{\mathcal {U}}}(\alpha t_*) \cap \underline{S}| \le M\). We also show that with high probability \(\widetilde{{\mathcal {U}}}(\alpha t_*) \cap \underline{S}\) is “well-separated” in the sense that for some choice of \(\gamma > 0\), the distance between any two distinct points \(x,y \in \widetilde{{\mathcal {U}}}(\alpha t_*) \cap \underline{S}\) is at least \(n^\gamma \). Thus to bound the total variation distance, we can restrict our attention to finite, well-separated sets. To complete the proof, we need very precise hitting estimates in order to determine the probability that any given such set \(A \subseteq \underline{S}\) for \(S \in {\mathcal {S}}_{\beta }\) is not visited by X during its first E excursions from \(\partial S\) to \(\partial \overline{S}\). This needs to be sufficiently precise so that we can sum the error over all possible well-separated subsets of \(\underline{S}\) of size M and then sum that error over all of the boxes in \({\mathcal {S}}_{\beta }\). To accomplish this, we put spherical annuli (see Fig. 2) around each of the points in A with in-radius \(n^{2\varphi /\kappa }\) for \(\kappa = d \wedge 6\) and out-radius \(n^\varphi \) (the sizes and the value of \(\varphi \) are chosen to optimize several error terms).
Conditional on the number of excursions N that X makes across each such spherical annulus and their entrance and exit points as well as the corresponding data for the first E excursions from \(\partial S\) to \(\partial \overline{S}\), the events that the centers of these balls are hit are independent. Another concentration estimate implies that N is with high probability very close to the typical number made by X by time \(\alpha t_*\), so we can replace it with this deterministic value. Moreover, estimates for discrete harmonic functions [14] give us that the probability that a given excursion hits a point does not depend strongly on its entrance and exit points. Putting everything together finishes this step.
Non-uniformity for small \(\alpha \) The next step in the proof of Theorem 1.1 is to establish the existence of \(\alpha _0(d)\), i.e., that for small values of \(\alpha \) the total variation distance between the law of \({\mathcal {U}}(\alpha t_*)\) and \((Z_x)\) tends to 1 as \(n \rightarrow \infty \). The idea is to show that for sufficiently small values of \(\alpha \), the number of unvisited points which have an unvisited neighbour is much larger for \({\mathcal {U}}(\alpha t_*)\) than for \((Z_x)\).
Uniformity of \({\mathcal {U}}(\tau _\alpha )\) The final step is to deduce Theorem 1.2 from Theorem 1.1. The main idea is to show that for any well-separated collection of points A, the first exit distribution of X from \({\mathbb {Z}}_n^d {\setminus } A\) is close to the uniform measure on A provided X starts sufficiently far from A. By Theorem 1.1, if we fix \(\varepsilon > 0\) very small and run X until time \((\alpha -\varepsilon ) t_*\) then we know that \({\mathcal {U}}((\alpha -\varepsilon )t_*)\) is close in law to a random subset of \({\mathbb {Z}}_n^d\) where points are included independently with probability \(n^{-(\alpha -\varepsilon )d}\). Using the aforementioned estimate, for \(t \ge (\alpha -\varepsilon )t_*\) the random walk X decimates \({\mathcal {U}}(t)\) by removing points one by one uniformly at random. The estimate for the uniformity of the first exit distribution is good enough that we can sum the error over the \({\asymp } n^{d-(\alpha -\varepsilon )d}\) points necessary to remove until the last visited set has size exactly \(n^{d-\alpha d}\) provided we choose \(\varepsilon > 0\) small enough.
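The decimation step rests on a simple exchangeability fact: removing uniformly chosen points one at a time from a set produces, at each size, a uniformly distributed subset of that size. A small exact-enumeration check of this fact (an illustration only; not part of the proofs):

```python
from itertools import permutations
from collections import Counter

def removal_distribution(s, m):
    """Enumerate every order in which points can be removed one by one from s
    until only m remain, and count the resulting subsets. Under uniform
    removals each order is equally likely, so equal counts mean the
    surviving subset is uniform among subsets of size m."""
    s = tuple(s)
    k = len(s) - m                      # number of removals
    counts = Counter()
    for order in permutations(s, k):
        counts[frozenset(set(s) - set(order))] += 1
    return counts

dist = removal_distribution({0, 1, 2, 3}, 2)
# all 6 two-element subsets appear equally often
```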
1.4 Outline
The remainder of this article is structured as follows. In Sect. 2, we establish several concentration estimates for the number of excursions that X makes across annuli of different widths. Next, in Sect. 3 we establish a number of estimates related to the probability that an excursion of X hits points. The purpose of Sect. 4 is to prove some preliminary results on the structure of the last visited set. In particular, we show that the points which have not been visited by time \(\alpha t_*\) for large enough values of \(\alpha \) are typically far from each other. In Sect. 5, we complete the proof of Theorem 1.1 and in Sect. 6 we derive Theorem 1.2 from Theorem 1.1. Finally, in Sect. 7 we list a number of open questions which naturally arise from this work.
2 Excursions
Let \(r<R\). We write \({\mathcal {S}}(x,r)\) for the box centered at x of side length r and \({\mathcal {B}}(x,r)\) for the closed Euclidean ball centered at x of radius r, i.e.
$${\mathcal {S}}(x,r) = \{ x + y : y \in {\mathbb {Z}}^d,\ \Vert y \Vert _\infty \le r/2 \} \quad \text {and}\quad {\mathcal {B}}(x,r) = \{ x + y : y \in {\mathbb {Z}}^d,\ \Vert y \Vert _2 \le r \},$$
where addition is defined modulo n. For a set A we define the boundary \(\partial A\) to be the outer boundary, i.e.
$$\partial A = \{ y \notin A : x \sim y \text { for some } x \in A \}.$$
For sets \(E(x,r) = {\mathcal {B}}(x,r) \text { or } {\mathcal {S}}(x,r)\) and \(F(x,R) = {\mathcal {B}}(x,R) \text { or } {\mathcal {S}}(x,R)\) with \(E(x,r)\subseteq F(x,R)\) we define a sequence of stopping times
$$\tau _0 = \min \{ t \ge 0 : X_t \in E(x,r) \}, \qquad \sigma _0 = \min \{ t \ge \tau _0 : X_t \in \partial F(x,R) \},$$
and inductively we set
$$\tau _i = \min \{ t \ge \sigma _{i-1} : X_t \in E(x,r) \}, \qquad \sigma _i = \min \{ t \ge \tau _i : X_t \in \partial F(x,R) \} \quad \text {for } i \ge 1,$$
where E and F will be understood from the context.
Definition 2.1
We call a path of the random walk trajectory an excursion if it starts from F(x, R) and it comes back to \(\partial F(x,R)\) after hitting E(x, r).
We now define \(N^{\Box ,\circ }_x(r,R,t)\) to be the total number of excursions across the annulus \({\mathcal {B}}(x,R){\setminus } {\mathcal {S}}(x,r)\) before time t. More formally for \(E(x,r)={\mathcal {S}}(x,r)\) and \(F(x,R) = {\mathcal {B}}(x,R)\) we let
Similarly we define \(N^{\Box ,\Box }_x(r,R,t)\) for the number of excursions in the annulus \({\mathcal {S}}(x,R){\setminus } {\mathcal {S}}(x,r)\) before time t and finally \(N^{\circ ,\circ }_x(r,R,t)\) for the excursions across \({\mathcal {B}}(x,R){\setminus }{\mathcal {B}}(x,r)\) before time t.
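The bookkeeping behind these counts can be sketched as a small state machine over a sampled trajectory (an illustration only; it takes the inner and outer sets as explicit collections of states, and the function name is ours):

```python
def count_excursions(path, inner, outer):
    """Count excursions across outer \\ inner along a path (list of states):
    in the spirit of the stopping times tau_i, sigma_i above, an excursion
    is a visit to `inner` followed by an exit from `outer`."""
    count = 0
    state = "outside"                    # waiting to hit the inner set
    for x in path:
        if state == "outside" and x in inner:
            state = "inside"             # tau_i: the inner set is hit
        elif state == "inside" and x not in outer:
            state = "outside"            # sigma_i: the walk has left the outer set
            count += 1
    return count
```

For example, on the integer path `[3, 1, 0, 1, 3, 0, -3]` with inner set `{0}` and outer set `{-2, ..., 2}`, two excursions are completed.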
Lemma 2.2
Let \(R \ge 10\sqrt{d}r\) and let \(Y_j\) be the exit point of the j-th excursion across \(\mathcal {B}(0,R){\setminus }\mathcal {B}(0,r)\) or across \(\mathcal {B}(0,R){\setminus }{\mathcal {S}}(0,r)\). Then \((Y_j)_j\) is a Markov chain on a finite state space; let \(\widetilde{\pi }\) be its stationary distribution. The mixing time of the chain is of order 1, i.e. there exists \(k_0<\infty \) depending only on d such that \(t_{\mathrm {mix}}= k_0\). Moreover, there exists a positive constant c such that for all m and N we have
Proof
See Appendix 1. \(\square \)
Definition 2.3
For \(R\ge 10\sqrt{d}r\) we let
$$T^{\Box ,\circ }_{r,R} = {{\mathbb {E}}}_{\widetilde{\pi }}\left[ \sigma _0\right] ,$$
i.e. \(T^{\Box ,\circ }_{r,R}\) is the expected length of the excursion when the walk is started on \(\partial \mathcal {B}(0,R)\) according to the stationary distribution \(\widetilde{\pi }\) of the exit points of the excursions across the annulus \({\mathcal {B}}(0,R){\setminus }{\mathcal {S}}(0,r)\) as given in Lemma 2.2. We define \(T^{\circ ,\circ }_{r,R}\) similarly except that the excursions are across the annulus \({\mathcal {B}}(0,R){\setminus }{\mathcal {B}}(0,r)\). Note that we chose \(R\ge 10\sqrt{d} r\), so that the box is included in the ball in all dimensions.
Lemma 2.4
For each \(\psi \in (0,1/2)\) there exists \(n_0\ge 1\) and a positive constant c such that for all \(n\ge n_0\) the following is true. Suppose that \(n/4\ge R\ge 10\sqrt{d}r\) and \(t\asymp n^d\log n\). Then for all \(\delta >0\) such that \(\delta r^{d-2} n^{-\psi -1/2} \le 1\) and \(\delta n^{\psi } \le 1\) we have that for all x
where \(A=t/((1+\delta ) T^{\Box ,\circ }_{r,R})\) and \(A'=t/((1-\delta ) T^{\Box ,\circ }_{r,R})\).
Remark 2.5
We note that Lemma 2.4 holds when we replace \(N^{\Box ,\circ },T^{\Box ,\circ }\) by \(N^{\circ ,\circ },T^{\circ ,\circ }\) respectively. The proof is identical to the one given below.
Proof of Lemma 2.4
To simplify notation throughout the proof we simply write \(N_1=N^{\Box ,\circ }_x(r,R,t)\) and \(T_{r,R}=T^{\Box ,\circ }_{r,R}\). In order to avoid carrying too many constants, we will prove the result for \(t=n^d\log n\). The proof for \(t\asymp n^d\log n\) is exactly the same. Let \(N=k_0n^\psi \), where \(k_0\) is the mixing time of the exit point chain as in Lemma 2.2.
Note that \(A, A'\asymp r^{d-2}\log n\) by Lemma 8.3. In the following proof we will write either A, \(A'\) or the expression above depending on whichever is more convenient.
We first show that
Let \(V_i=\sigma _i-\sigma _{i-1}\) for all \(i\ge 1\). By the definition of \(N_1\) we get
It is easy to see that there exists a positive constant c such that
Indeed, \(\sigma _0-\tau _0\) is the time it takes for the random walk to exit the ball \({\mathcal {B}}(x,R)\) when started from \(\partial {\mathcal {B}}(x,r)\). Since \(R\le n/4\) and the total variation mixing time \(t_{\mathrm {mix}}\asymp n^2\) (see for instance [15, Theorem 5.5 and Example 7.4.1]), the probability that this time is \({\gtrsim } n^2\) is \({\le } 1/2\), so iterating the Markov property proves (2.2). Since \(t =n^d\log n\) we obtain
since \(\psi <1/2\). It thus suffices to show for some positive constant c we have that
In order to prove (2.3) we will establish the concentration of the sequence \((V_i)_i\). The idea is that if we allow enough time so that the corresponding exit point chain of Lemma 2.2 mixes, then the times \((V_i)_i\) are essentially i.i.d. so we can apply a concentration inequality for i.i.d. random variables.
Let \(t'=t(1-\frac{1}{n^{d-5/2}\log n} - \frac{c_1n^{2\psi }}{r^{d-2}\log n})\) for a positive constant \(c_1\). We will set the value of \(c_1\) later in the proof. Observe that
Since by Lemma 8.3 we have \({{\mathbb {E}}}\left[ V_i\right] \asymp n^d/r^{d-2}\) uniformly over all starting points in \(\partial {\mathcal {B}}(x,R)\), by the Markov property we have by possibly decreasing the value of \(c>0\)
Hence using the union bound we get that
By decreasing the value of \(c > 0\), the above is in turn \({\lesssim } e^{-cN}\). It remains to bound the second term appearing on the right hand side of (2.4). By applying a union bound and the strong Markov property we get
Let \((Z_i)\) be i.i.d. distributed according to \(\widetilde{\pi }\) and \((W_i)\) be i.i.d. excursion lengths across the annulus \({\mathcal {B}}(x,R) {\setminus } {\mathcal {B}}(x,r)\) when the starting point is \(Z_i\). Let \((Y_i)\) be the exit points of the excursions of the random walk. Then we couple \((V_i)_{i\ge N}\) with \((W_i)_{i\ge N}\) as follows: by Lemma 2.2 the optimal coupling for \(Y=(Y_N,Y_{2N},\ldots ,Y_{A})\) and \(Z=(Z_1,\ldots ,Z_{A/N})\) satisfies
Then we take \(V_i=W_i\) if \(Y_i = Z_i\), otherwise we take \(V_i\) and \(W_i\) to be independent. Hence this gives that
By decreasing the value of \(c > 0\), the above is \({\lesssim } e^{-c N}\). Note that for any two measures \(\mu _1\) and \(\mu _2\) we have for any event D that
Thus letting \(K=\frac{t'}{N}\), by (2.7) we have
Since \(Z_i\sim \widetilde{\pi }\), it follows that \({{\mathbb {E}}}\left[ W_i\right] =T_{r,R}\) for all i. Using Kac’s moment formula [11] we obtain for all \(j \in {\mathbb {N}}\) and a positive constant c
Thus for \(\theta >0\) we have
Choosing \(\theta = c_1 \delta /T_{r,R}\) we get that
and hence
Since \(\delta r^{d-2}n^{-\psi -1/2}\le 1\) and \(\delta n^\psi \le 1\), substituting the values of A and K and choosing \(c_1>0\) sufficiently small we get that for n sufficiently large
where \(c'\) is a positive constant. Hence this together with (2.5), (2.6), and (2.8) proves (2.1).
Next we show that
By the definition of \(N_1\) again we get
Using the same coupling as before, it suffices to prove that there exists a positive constant \(c'\) such that
where \((W_i)_i\) are i.i.d. excursion lengths started from i.i.d. points \((Z_i)_i\) distributed according to \(\widetilde{\pi }\). By Chernoff’s bound we have for \(\theta >0\) that
Using that \(e^{-x} \le 1-x+x^2\) and that \({{\mathbb {E}}}\left[ W_1^2\right] \le c T_{r,R}^2\) by Kac’s moment formula [11], we have
By taking \(\theta = c_1\delta /T_{r,R}\) and plugging everything into (2.10) we deduce
Choosing \(c_1>0\) small enough makes \(1-cc_1\) positive, hence
Recalling that A and \(A'\) are up to constants equal to \(r^{d-2}\log n\) by Lemma 8.3, the result follows by combining (2.1) and (2.9). \(\square \)
Definition 2.6
Fix \(\beta \in (0,1)\). We let W be a random variable whose law is equal to that of the number of excursions the random walk makes across the annulus \({\mathcal {S}}(0,n^\beta + n^\varphi ) {\setminus } {\mathcal {S}}(0,n^{\beta })\) during one excursion across \(\mathcal {B}(0,10\sqrt{d} n^\beta ) {\setminus } {\mathcal {S}}(0,n^\beta )\) when the starting point of the excursion on \(\partial \mathcal {B}(0,10\sqrt{d} n^\beta )\) is chosen according to \(\widetilde{\pi }\) from Lemma 2.2.
In the proofs of Theorem 1.1 and 1.2 we will take \(\beta =\alpha -\varepsilon \) for some small \(\varepsilon >0\). We suppress the dependency of W on \(\beta \) to lighten the notation.
Lemma 2.7
The random variable W defined above is stochastically dominated by the sum of 2d independent geometric random variables of parameter \(p\asymp n^{\varphi -\beta }\) and satisfies
$${{\mathbb {E}}}\left[ W\right] \asymp n^{\beta -\varphi }.$$
Proof
We start by proving that \({{\mathbb {E}}}\left[ W\right] \gtrsim n^{\beta -\varphi }\). We note that the stationary distribution is up to multiplicative constants the same as the uniform distribution on \(\partial {\mathcal {B}}(0,10\sqrt{d}n^\beta )\), i.e. for all \(x\in \partial {\mathcal {B}}(0,10\sqrt{d} n^\beta )\) we have
See for instance [14, Lemma 6.3.7]. We can realize the random walk X in the following way: let U be a simple random walk on \({\mathbb {Z}}\) and V be a simple random walk on \({\mathbb {Z}}^{d-1}\) which is independent of U. Let \(\xi (i)\) be i.i.d. Bernoulli random variables with success probability \((d-1)/d\). Write \(r(k) =\sum _{i=1}^{k}\xi (i)\) and set
$$Z(k) = \big ( U(k - r(k)),\ V(r(k)) \big ).$$
Then it is elementary to check that Z is a simple random walk in \({\mathbb {Z}}^d\), and hence \(X(k)=Z(k)\hbox { mod }n\) is a simple random walk on \({\mathbb {Z}}_n^d\).
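This realization can be sketched in a few lines of Python (an illustration only; the function name and parameters are ours). At each step an independent Bernoulli((d−1)/d) coin decides whether the step is given to V, and otherwise it is given to U; since each V-step then lands on a given one of the \(d-1\) coordinates with probability \(\tfrac{d-1}{d}\cdot \tfrac{1}{d-1} = \tfrac{1}{d}\), every coordinate moves with probability 1/d:

```python
import random

def decomposed_srw(d, steps, seed=0):
    """Build a d-dim SRW Z as Z(k) = (U(k - r(k)), V(r(k))), where U is a 1D
    SRW, V a (d-1)-dim SRW, and r(k) counts the steps assigned to V by
    i.i.d. Bernoulli((d-1)/d) variables xi(i)."""
    rng = random.Random(seed)
    u = 0                                # the 1D walk U
    v = [0] * (d - 1)                    # the (d-1)-dim walk V
    path = [(u, tuple(v))]
    for _ in range(steps):
        if rng.random() < (d - 1) / d:   # xi(i) = 1: step goes to V
            v[rng.randrange(d - 1)] += rng.choice((-1, 1))
        else:                            # xi(i) = 0: step goes to U
            u += rng.choice((-1, 1))
        path.append((u, tuple(v)))
    return path
```

Each combined step changes exactly one of the d coordinates by \(\pm 1\), which is the elementary check that Z is a simple random walk on \({\mathbb {Z}}^d\).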
With \(r = n^\beta +n^\varphi \), we let \(x_0=([r/2],\ldots ,[r/2], -[r/2])\) and let A be the set of points of \(\partial {\mathcal {S}}(0,n^\beta +n^\varphi )\) that are within distance \(n^\beta /16\) of \(x_0\). If \(\tau \) is the first hitting time of \(\partial {\mathcal {S}}(0,n^\beta +n^\varphi )\) after having first hit \(\partial {\mathcal {S}}(0,n^\beta )\), then it is easy to see that
where \(p_0\) is a positive constant. Indeed, it is a standard fact that with positive probability Brownian motion stays close to a given continuous function \(f:[0,1]\rightarrow {\mathbb {R}}^d\) for all times \(t\in [0, 1]\). Hence the above claim is true for a Brownian motion started uniformly on \(\partial \mathcal {B}(0,10\sqrt{d}n^\beta )\). The result for random walk follows by Donsker’s invariance principle [10, Theorem 8.6.5].
We now let
i.e. T is the first time that \(V(r(\cdot ))\) reaches distance \(n^\beta /4\) from where it hit \(\partial {\mathcal {S}}(0,n^\beta +n^\varphi )\) at time \(r(\tau )\). Let \(s(t) = t - r(t)\). Note that \(s(T) - s(\tau )\) gives the number of steps that the random walk makes in the first coordinate axis during the time interval from \(\tau \) to T. Then there exist positive constants \(p_1\) and \(c_d\) depending only on d such that
On the event \(\{X(\tau ) \in A\}\) the random variable W is greater than or equal to the number E of excursions that U makes from \(n^\beta \) to \(n^\beta +n^\varphi \) before time T. Then using (2.11) we get that for all u
Since U is independent of V, on the event \(\{s(T)-s(\tau )\ge c_d n^{2\beta }\}\), the random variable E stochastically dominates the number of excursions that a one dimensional walk started from 0 makes from 0 to \(n^\varphi \) until time \(c_dn^{2\beta }\). It now immediately follows that
We now turn to show the first assertion of the lemma. Let \((Z^1,\ldots ,Z^d)\) be a simple random walk in \({\mathbb {Z}}^d\). For \(i=1,\ldots ,d\), we let
- \(A_i\) be the number of excursions that \(Z^i\) makes from \(-\frac{n^\beta }{2}\) to \(-\frac{n^\beta }{2} - \frac{n^\varphi }{2}\) before hitting \(\pm 10\sqrt{d}n^{\beta }\),
- \(B_i\) be the number of excursions that \(Z^i\) makes from \(\frac{n^\beta }{2}\) to \(\frac{n^\beta }{2}+\frac{n^\varphi }{2}\) before hitting \(\pm 10\sqrt{d}n^\beta \).
It is not hard to see that once the random walk hits \(\partial {\mathcal {S}}(0,n^\beta +n^\varphi )\), then the number of excursions it makes from \(\partial {\mathcal {S}}(0,n^\beta )\) to \(\partial {\mathcal {S}}(0,n^\beta +n^\varphi )\) before hitting \(\partial \mathcal {B}(0,10\sqrt{d}n^\beta )\) is stochastically dominated by
It follows from the gambler’s ruin estimate that the \(A_i\)’s and \(B_i\)’s are geometric of parameter \(n^{\varphi -\beta }\), hence this completes the proof of the lemma. \(\square \)
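The geometric nature of such excursion counts can be checked by a crude Monte Carlo sketch (illustrative code and parameters, not from the paper: h plays the role of the \(n^\varphi \)-scale, M of the \(10\sqrt{d}n^\beta \) barrier). By the gambler's ruin estimate each excursion is completed with probability \(1-O(h/M)\), so the count is geometric-like with mean of order M/h; for h=5, M=30 an exact gambler's-ruin computation gives mean 3.

```python
import random

def excursion_count(h, M, rng):
    """Count the excursions of a 1-d SRW started at 0 from level 0 up to
    level h that are completed before the walk first hits -M or +M."""
    x, count, target = 0, 0, h   # target alternates: h (going up), 0 (coming back)
    while -M < x < M:
        x += rng.choice((-1, 1))
        if x == target:
            if target == h:
                count += 1
                target = 0
            else:
                target = h
    return count

rng = random.Random(2)
counts = [excursion_count(5, 30, rng) for _ in range(2000)]
mean = sum(counts) / len(counts)   # should be close to 3 for these parameters
```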
Claim 2.8
Let X be a geometric random variable of success probability \(p\in (0,1/2]\) taking values in \(\{1,2,\ldots \}\). Then for all j we have
Proof
See Appendix 1. \(\square \)
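The display of Claim 2.8 is not reproduced here; what the surrounding argument uses (e.g. the bound \({{\mathbb {E}}}\left[ W\right] ^2/{{\mathbb {E}}}\left[ W^2\right] \ge c\) below Lemma 2.7) rests on the standard geometric moment formulas \({{\mathbb {E}}}\left[ X\right] =1/p\) and \({{\mathbb {E}}}\left[ X^2\right] =(2-p)/p^2\), so that \({{\mathbb {E}}}\left[ X\right] ^2/{{\mathbb {E}}}\left[ X^2\right] =1/(2-p)\ge 1/2\). A quick numerical check (illustrative code, not from the paper):

```python
def geometric_moments(p, terms=10000):
    """First two moments of X ~ Geometric(p) on {1,2,...} by direct
    series summation; closed forms are E[X] = 1/p, E[X^2] = (2-p)/p^2."""
    m1 = sum(k * p * (1 - p) ** (k - 1) for k in range(1, terms))
    m2 = sum(k * k * p * (1 - p) ** (k - 1) for k in range(1, terms))
    return m1, m2
```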
Lemma 2.9
For each \(\psi \in (0,1/2)\) there exists \(n_0\ge 1\) and a positive constant c such that for all \(n\ge n_0\) the following is true. Fix \(\beta , \varphi \in (0,1)\) and \(t\asymp n^d\log n\). For all \(\delta >0\) such that \(\delta n^{\beta (d-2)-\psi -1/2}\le 1\) and \(\delta n^{\psi }\le 1\) we let
Then for all x we have
Proof
To simplify notation throughout the proof we write \(B=\underline{E}(t,\delta )\), \(B'=\overline{E}(t,\delta )\), \(N_1 = N^{\Box ,\circ }_x(n^\beta ,10\sqrt{d}n^\beta ,t)\), and \(N_2=N^{\Box ,\Box }_x(n^\beta ,n^\beta +n^\varphi ,t)\). Let \(N=k_0n^\psi \), A, and \(A'\) be as in Lemma 2.4 with \(r=n^\beta \), \(R=10\sqrt{d}n^\beta \) and \(\delta \) replaced by \(\delta /2\). We start with the upper bound. We have
The first probability can be bounded using Lemma 2.4. We first notice that all excursions across \({\mathcal {S}}(x,n^\beta +n^\varphi ) {\setminus }{\mathcal {S}}(x,n^\beta )\) are contained in the excursions across \({\mathcal {B}}(x,10\sqrt{d}n^\beta ) {\setminus } {\mathcal {S}}(x,n^\beta )\). Hence it follows that we can bound the second probability by the probability that in the first A excursions of the annulus \({\mathcal {B}}(x,10\sqrt{d}n^\beta ){\setminus }{\mathcal {S}}(x,n^\beta )\) the number of excursions from \(\partial {\mathcal {S}}(x,n^\beta )\) to \(\partial {\mathcal {S}}(x,n^\beta +n^\varphi )\) is at most B. Let \(W_i\) be the number of excursions across the “thin” annulus (i.e. \({\mathcal {S}}(x,n^\beta +n^\varphi ){\setminus }{\mathcal {S}}(x,n^\beta )\)) during the i-th excursion across the “big” annulus (i.e. \({\mathcal {B}}(x,10\sqrt{d}n^\beta ){\setminus }{\mathcal {S}}(x,n^\beta )\)). We first show
By a union bound and the strong Markov property we get
Let \((Z_i)_i\) be i.i.d. distributed according to \(\widetilde{\pi }\) on \(\partial \mathcal {B}(0,10\sqrt{d}n^\beta )\) and let \((V_i)_i\) be i.i.d. with the same distribution as W when the starting point of the excursion on \(\partial \mathcal {B}(0,10\sqrt{d}n^{\beta })\) is \(Z_i\). Let \((Y_i)_i\) be the exit points of the excursions of the random walk. Then under the optimal coupling of \(Y=(Y_{N},\ldots ,Y_{A/N})\) and \(Z=(Z_1,\ldots ,Z_{A/N})\) we get from Lemma 2.2
Thus we can couple \((W_i)_i\) with \((V_i)_i\) by letting \(V_i=W_i\) if \(Y_i=Z_i\) and otherwise taking \(V_i\) and \(W_i\) to be independent. This now gives
We obtain
By adjusting the value of \(c > 0\), the error term above is \({\lesssim } e^{-c N}\). So now we need to bound the probability appearing on the right hand side of (2.15). Applying Chernoff’s inequality we get for \(\theta >0\)
where the last step follows since the \((V_i)_i\) are i.i.d. with \(V_i\sim W\) for all i. Using the inequalities
we obtain
Combining (2.16) and (2.17) we thus have that
where \(\eta =\delta /(2(1+\delta ))\). Setting \(\theta = \frac{\eta {{\mathbb {E}}}\left[ W\right] }{{{\mathbb {E}}}\left[ W^2\right] }\), we deduce that
From Lemma 2.7 and Claim 2.8 we see that there exists a positive constant c such that \({{\mathbb {E}}}\left[ W\right] ^2/{{\mathbb {E}}}\left[ W^2\right] \ge c\). This implies that there exists a positive constant \(c'\) such that
Since \(A\asymp n^{\beta (d-2)}\log n\) by Lemma 8.3, the above together with (2.14) and (2.15) proves (2.13) and this completes the proof of the upper bound.
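The optimal coupling of Y and Z used above (via Lemma 2.2) is the standard maximal coupling, which achieves \({{\mathbb {P}}}\left( X\ne Y\right) =\Vert \mu -\nu \Vert _{\mathrm {TV}}\). A minimal sketch of this construction for finite distributions (illustrative code; the construction is classical, not specific to the paper):

```python
from fractions import Fraction

def maximal_coupling(p, q):
    """Joint pmf of (X, Y) with marginals p and q (dicts of exact
    probabilities) achieving P(X != Y) = d_TV(p, q): put mass
    min(p_i, q_i) on the diagonal and spread the residuals as a product."""
    support = set(p) | set(q)
    overlap = {i: min(p.get(i, 0), q.get(i, 0)) for i in support}
    tv = 1 - sum(overlap.values())          # total variation distance
    joint = {(i, i): m for i, m in overlap.items() if m > 0}
    if tv > 0:
        rp = {i: p.get(i, 0) - overlap[i] for i in support}  # residual of p
        rq = {i: q.get(i, 0) - overlap[i] for i in support}  # residual of q
        for i, a in rp.items():
            for j, b in rq.items():
                if a > 0 and b > 0:
                    joint[(i, j)] = joint.get((i, j), 0) + a * b / tv
    return joint, tv
```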
For the lower bound, in the same way as above, we have
For the first term we use Lemma 2.4. For the second term we again replace this event by the event that in the first \(A'\) excursions across the “big” annulus there were at least \(B'\) excursions across the “thin” one. Hence if \((W_i)_i\) are as before, setting \(H=N^2{{\mathbb {E}}}\left[ W\right] \) we have
From Lemma 2.7 we immediately get that
where \((G_i)_i\) are i.i.d. each having the law of the sum of 2d independent geometric random variables of success probability \(n^{\varphi -\beta }\). Using Claim 2.8 we then get that for a positive constant c
Using the same coupling as before we obtain
where the \((V_i)_i\) are i.i.d. and distributed according to the law of W. By possibly decreasing the value of \(c > 0\), the error term above is \({\lesssim } e^{-c N}\). By Lemma 2.7 and Claim 2.8 we have for a positive constant \(c_1\) that
Let \(\eta =\delta /(2(1-\delta ))\). Using the above, Chernoff’s inequality, and substituting the expression for \(B'\) gives
Setting \(\theta =c_2\eta /{{\mathbb {E}}}\left[ W\right] \) for a positive constant \(c_2\) to be determined and recalling that \(H=N^2{{\mathbb {E}}}\left[ W\right] \) we get
Using the assumption \(\delta n^\psi \le 1\) and taking \(c_2 >0\) sufficiently small we get for a positive constant \(c'\) and all sufficiently large n that
and, since \(A'\asymp n^{\beta (d-2)}\log n\) by Lemma 8.3, this finishes the proof of the lemma. \(\square \)
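The Chernoff step in the proof above can be illustrated numerically. The sketch below (illustrative code and parameters, not from the paper) takes i.i.d. Geometric(p) summands, consistent with the domination by sums of geometric variables coming from Lemma 2.7, and compares the optimized exponential-moment bound with an empirical lower-tail frequency.

```python
import math
import random

def chernoff_lower_tail(p, m, eta):
    """Chernoff bound for a sum S of m i.i.d. Geometric(p) variables on
    {1,2,...}: P(S <= (1-eta) m E[V]) <= min_theta e^{theta a} (E e^{-theta V})^m
    with a = (1-eta) m / p, optimized over a grid of theta > 0 (in log domain
    to avoid overflow)."""
    a = (1 - eta) * m / p
    best_log = 0.0
    for k in range(1, 300):
        theta = k / 100
        # log E[e^{-theta V}] for V ~ Geometric(p) on {1,2,...}
        log_mgf = math.log(p) - theta - math.log(1 - (1 - p) * math.exp(-theta))
        best_log = min(best_log, theta * a + m * log_mgf)
    return math.exp(best_log)

def sample_geometric(rng, p):
    # inverse-CDF sampler for Geometric(p) on {1,2,...}
    return 1 + int(math.log(1 - rng.random()) / math.log(1 - p))

rng = random.Random(3)
p, m, eta, trials = 0.5, 200, 0.1, 2000
bound = chernoff_lower_tail(p, m, eta)
emp = sum(
    sum(sample_geometric(rng, p) for _ in range(m)) <= (1 - eta) * m / p
    for _ in range(trials)
) / trials
```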
Definition 2.10
Fix \(\varphi ,\beta \in (0,1)\). Let \(\overline{{\mathcal {S}}}_{\beta }\) be a partition of \({\mathbb {Z}}_n^d\) into (disjoint) boxes of side length \(n^\beta + n^\varphi \) (we will suppress the dependency on \(\varphi \)). For each \(\overline{A} \in \overline{{\mathcal {S}}}_{\beta }\) we let A (resp. \(\underline{A}\)) be the box of side length \(n^\beta \) (resp. \(n^\beta - n^\varphi \)) which is concentric with \(\overline{A}\) and we let \({\mathcal {S}}_{\beta }\) (resp. \(\underline{{\mathcal {S}}}_{\beta }\)) be the collection of all such concentric boxes of the corresponding side length. For each \(z \in \cup _{A \in {\mathcal {S}}_{\beta }} A\) we let \(S_z\) be the element of \({\mathcal {S}}_{\beta }\) which contains z and \(\overline{S}_z\) the element of \(\overline{{\mathcal {S}}}_{\beta }\) which contains z. We let \({\mathcal {A}}= {\mathbb {Z}}_n^d{\setminus }\cup _{S \in {\mathcal {S}}_{\beta }} \underline{S}\) be the collection of points of the torus that lie in the annuli between the boxes of side length \(n^\beta +n^\varphi \) and the concentric boxes of side length \(n^\beta -n^\varphi \).
Definition 2.11
Fix \(\varphi ,\beta \in (0,1)\) and recall the definition of \(\underline{E}\) from Lemma 2.9. For every \(z\in {\mathbb {Z}}_n^d{\setminus } {\mathcal {A}}\) and \(R>r\) we define \(N_z(r,R,t)\) to be the number of excursions across the annulus \({\mathcal {B}}(z,R){\setminus }{\mathcal {B}}(z,r)\) during the first \(\underline{E}(t,\delta /4)\) excursions across the annulus \(\overline{S}_z {\setminus } S_z\) where \(S_z\) and \(\overline{S}_z\) are as in Definition 2.10.
Lemma 2.12
For each \(\psi \in (0,1/2)\) and \(\beta \in (0,1)\) there exist \(n_0 \ge 1\) and a positive constant c such that for all \(n \ge n_0\) the following is true. Let \(n^\beta \ge R\ge 10r\) and \(\delta \in (0,1/3)\) satisfy
If \(t \asymp n^d\log n,\) then for all \(z\in {\mathbb {Z}}_n^d{\setminus } {\mathcal {A}}\) we have that
where \(\underline{L}(t)= \frac{t}{(1+\delta )T^{\circ ,\circ }_{r,R}}\) and \(\overline{L}(t)=\frac{t}{(1-\delta )T^{\circ ,\circ }_{r,R}}\).
Proof
We define \(\widetilde{N}_z\) to be the number of excursions across the annulus \({\mathcal {B}}(z,R) {\setminus } {\mathcal {B}}(z,r)\) up to time \((1-\delta /2)t\) and we let T be the time it took for the \(\underline{E}(t,\delta /4)\) excursions across the “thin” annulus \({\mathcal {S}}(z,n^\beta +n^\varphi ){\setminus } {\mathcal {S}}(z,n^\beta )\) to complete. Notice that on the event \(\{T\ge (1-\delta /2)t\}\) we have \(\widetilde{N}_z\le N_z\) hence we get
We recall the definition of \(\underline{E}(t,\delta /4)\)
The first probability on the right side of (2.18) can be written as
Let
Let \(N_2 = N^{\Box ,\Box }_z(n^\beta ,n^\beta +n^\varphi ,(1-\delta /2)t)\). Applying Lemma 2.9 we get that
For the second probability on the right side of (2.18), we apply Lemma 2.4 to obtain for all \(\delta \in (0,1/3)\) that
Combining (2.18), (2.19), and (2.21) we deduce
and this finishes the proof of the first part.
We define \(N_z'\) to be the number of excursions across \({\mathcal {B}}(z,R) {\setminus } {\mathcal {B}}(z,r)\) by time \((1+\delta /2)t\). Let T be as in the first part of the proof. Notice that on the event \(\{T<(1+\delta /2)t\}\) we have \(N_z'\ge N_z\). So
By the definition of T we have
Applying Lemma 2.9 we get that if
then writing \(N_2' = N^{\Box ,\Box }_z(n^\beta ,n^\beta +n^\varphi , (1+\delta /2)t)\) we have
It is now easy to see that for all \(\delta >0\) we have \(\Gamma >\underline{E}(t,\delta /4)\), and hence combining (2.24) and (2.25) we obtain the following bound for the first probability on the right side of (2.23):
By Lemma 2.4 we can bound the second probability on the right side of (2.23) by:
Inserting the bounds from (2.26) and (2.27) into (2.23) concludes the proof. \(\square \)
3 Hitting probabilities
In this section we collect some results about hitting probabilities of simple random walks in \({\mathbb {Z}}_n^d\) for \(d\ge 3\). Some of the proofs are deferred to Appendix 1. We start by recalling Harnack’s inequality (see e.g. [14, Theorem 6.3.8]).
Lemma 3.1
(Harnack’s inequality) For all \(d\ge 1\) there exists a positive constant \(c_d\) such that the following is true. Let \(R\ge 2r\) and let f be a positive harmonic function on \({\mathcal {B}}(0,R) \subseteq {\mathbb {Z}}^d\). Then for all \(x,y\in {\mathcal {B}}(0,r)\) we have
Proof
See Appendix 1. \(\square \)
Lemma 3.2
There exists a constant \(C_d > 0\) depending only on d such that the following is true. Let \(n/4 \ge R\ge 2r\) such that both r, R tend to infinity as \(n\rightarrow \infty \) and let \(z\in {\mathbb {Z}}_n^d\) with \(\Vert z\Vert \le r/4\). We denote by \(\tau _R\) the first hitting time of \(\partial {\mathcal {B}}(0,R)\) and by \(\tau _z\) the first hitting time of z. Then for all \(x\in \partial {\mathcal {B}}(0,r)\) and all \(y\in \partial {\mathcal {B}}(0,R)\) we have
Proof
See Appendix 1. \(\square \)
Remark 3.3
To avoid confusion, we emphasize that \(\tau _x, \tau _y\) and \(\tau _z\) will always refer to hitting times of a point, while \(\tau _r\) and \(\tau _R\) to hitting times of boundaries of balls.
Remark 3.4
The constant \(C_d\) from the statement of Lemma 3.2 is given by \(c_d/G(0)\), where \(c_d\) is the constant from [14, Theorem 4.3.1] and G is the Green’s function for simple random walk on \({\mathbb {Z}}^d\). That is, G(0) is equal to the expected number of visits to 0 made by simple random walk started from 0 before escaping to \(\infty \).
Definition 3.5
We define \(p_d\) to be the probability that a simple random walk on \({\mathbb {Z}}^d\) started from 0 returns to 0.
Remark 3.6
For \(d=3\), it is well-known (see e.g. [20]) that \(p_3\approx 0.34\). It is also easy to see that \(p_d\rightarrow 0\) as \(d\rightarrow \infty \). Note that \(p_d\) is equal to the probability that a simple random walk in \({\mathbb {Z}}^d\) starting from 0 visits a given neighbour of 0 before escaping to \(\infty \).
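The value \(p_3\approx 0.34\) can be checked by a crude Monte Carlo sketch (illustrative code, not from the paper; truncating each walk after a fixed number of steps makes the estimate a slight underestimate of the true return probability):

```python
import random

def estimate_return_prob(d, max_steps, trials, seed=0):
    """Monte Carlo estimate of p_d, the probability that a SRW on Z^d
    started at 0 returns to 0.  Walks are truncated after max_steps
    steps, so this slightly underestimates p_d."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        x = [0] * d
        for _ in range(max_steps):
            j = rng.randrange(d)
            x[j] += rng.choice((-1, 1))
            if not any(x):       # walk is back at the origin
                returns += 1
                break
    return returns / trials

p3 = estimate_return_prob(3, 1500, 2000, seed=4)   # roughly 0.32-0.34
```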
Lemma 3.7
Let \(n/4 \ge R>2r \rightarrow \infty \) and \(x,y\in {\mathbb {Z}}_n^d\) satisfying \(\left\| x-y \right\| =o(r)\). We denote by \(\tau _R\) the first hitting time of \(\partial {\mathcal {B}}(x,R)\) and by \(\tau _x\) (resp. \(\tau _y)\) the first hitting time of x (resp. y). Then for all \(a\in \partial {\mathcal {B}}(x,r)\) and all \(b\in \partial {\mathcal {B}}(x,R)\) we have
Moreover, if x and y are neighbours, then we have
Proof
By Bayes’ formula we have
where the second equality follows by the strong Markov property and Harnack’s inequality (Lemma 3.1). Let
be the number of times that X visits either x or y before hitting \(\partial {\mathcal {B}}(x,R)\). Then it is easy to see that
Note that we can write
Applying [14, Theorem 4.3.1] and the strong Markov property we thus have
where the o(1) term disappears when x and y are neighbours. For the denominator we have
Consequently, using the representation for Z from (3.3) it is easy to see by applying [14, Theorem 4.3.1] again and the last part of Remark 3.6 that
with equality when x and y are neighbours. Putting everything together yields the result. \(\square \)
4 Separated points
In this section we define the time \(t_*\) referred to in the Introduction and we prove that, with high probability, at time \(\alpha t_*\) for \(\alpha \in (0,1)\) large enough the points that have not yet been visited are at pairwise distance at least \(n^\gamma \) for some \(\gamma \) to be defined later. We prove these results in a certain setup which we describe below in order to make them compatible with the proofs of Theorems 1.1 and 1.2.
Setup Let \(\beta = \alpha - \varepsilon \) for some \(\varepsilon >0\) small enough to be determined later. As in Definition 2.10, we divide the torus into boxes of side length \(n^\beta +n^\varphi \) with \(\varphi \in (0,\beta )\) and we will make use of the notation described there. For every \(S\in {\mathcal {S}}_{\beta }\) we write \(\tau _{S}\) for the first time that the random walk has made \(\underline{E}(\alpha t_*,\delta /4)=\alpha t_*{{\mathbb {E}}}\left[ W\right] /((1+\delta /4)T^{\Box ,\circ }_{n^\beta ,10\sqrt{d}n^\beta })\) excursions across the annulus surrounding S, where W is as in Definition 2.6 and \(r=n^{2\varphi /\kappa }\)
and \(\psi , \varphi >0\) will be fixed later. We will explain the choice of the value of \(\delta \) in Remark 5.4 in Sect. 5.
We recall \({\mathcal {A}}= {\mathbb {Z}}_n^d{\setminus }\cup _{S \in {\mathcal {S}}_{\beta }} \underline{S}\) is the collection of points of the torus that lie in the annuli between the boxes of side length \(n^\beta +n^\varphi \) and the concentric boxes of side length \(n^\beta -n^\varphi \).
As in Definition 2.10, for every \(z\notin {\mathcal {A}}\), we write \(S_z \in {\mathcal {S}}_{\beta }\) for the unique box in \({\mathcal {S}}_{\beta }\) that contains z. We now consider the process \(Y=(Y_z)_{z}\) defined by \(Y_z=\mathfrak {1}(\tau _z>\tau _{{\mathcal {S}}_z})\) for \(z\notin {\mathcal {A}}\) and \(Y_z=0\) for \(z\in {\mathcal {A}}\).
For any \(\zeta >0\) we recall that the definition of the collection of \(\zeta \)-separated subsets \({\mathcal {P}}(\zeta )\) is as follows
We will now define the time \(t_*\) that was introduced in the statement of Theorem 1.1 (but not defined there). We set
The precise value of \(\varphi \) and the radii in (4.3) are selected to optimize several error terms in Claim 5.1 and Eq. (5.12) in Sect. 5, as explained in Remark 5.4.
We remind the reader that we write \({{\mathbb {P}}}_{}\left( \tau _z<\tau _{n^\varphi }\right) \) for the probability that z is hit in an excursion across the annulus \({\mathcal {B}}(z,n^\varphi ){\setminus }{\mathcal {B}}(z,n^{2\varphi /\kappa })\) when the random walk starts from the uniform distribution on \({\mathbb {Z}}_n^d\). (Lemma 3.2 gives an error bound which is independent of the starting point.)
The following lemma implies that \(t_* = t_{\mathrm {cov}}(1+o(1))\) and it is proved in Appendix 2.
Lemma 4.1
For all \(r,R\rightarrow \infty \) with \(R=o(n)\) and \(r=o(R)\) as \(n\rightarrow \infty \) we have
The following lemma concerning the hitting probability of a point up to \(\alpha t_*\) is a standard fact (see for instance [17, Theorem 1.6] or [2, Lemma 2.3]). We include the proof here for completeness.
Lemma 4.2
For every \(x \in {\mathbb {Z}}_n^d\) we have
Proof
Let \(\varphi \) be as in the definition of \(t_*\) and \(r=n^{2\varphi /\kappa }\), \(R=n^\varphi \) and \(N_x\) be the number of excursions across the annulus \({\mathcal {B}}(x,R){\setminus }{\mathcal {B}}(x,r)\) before time \(\alpha t_*\). Let \(A = \alpha t_*/((1+\delta )T^{\circ ,\circ }_{n^{2\varphi /\kappa },n^\varphi })\) and \(\psi \in (0,1/2)\) be as in Lemma 2.4 and \(\delta \) as in (4.1). Writing \(\mathrm{Exc}(x,i) = \{x\text { not hit in the }i\text {-th excursion}\}\), we then have
We took the lower index in the intersection to be 2 rather than 1, because the first excursion has a positive chance of starting in \({\mathcal {B}}(x,r)\), while the second does not. Let \(a_i = X(\tau _i)\) and \(b_i=X(\sigma _i)\), where \(\tau _i, \sigma _i\) are defined at the beginning of Sect. 2. Let \({\mathcal {F}}=\sigma (\{(a_i,b_i):i=1,\ldots ,A\})\) be the \(\sigma \)-algebra generated by \(a_i, b_i\). Notice that conditional on \({\mathcal {F}}\) the events \(\mathrm{Exc}(x,i)\) are independent for \(i=2,\ldots , A\). Writing \(\tau _R\) for the first hitting time of \(\partial {\mathcal {B}}(x,R)\) we therefore get
From Lemma 3.2 we immediately get for all \(i\ge 2\) that
Hence we deduce
where for the second inequality we used the expression for A and \(t_*\) and Lemma 3.2. Recalling that \(\delta =n^{-(d-2)\varphi /\kappa + \psi }\) and \(\psi \in (0,1/2)\) small enough we thus see that
By Lemma 2.4 (since the choice of \(\delta \) satisfies the assumptions) we get
and this concludes the proof. \(\square \)
Lemma 4.3
Fix \(0<\zeta <\varphi \) and \(c>0\). Let \(U\in {\mathcal {P}}(\zeta )\) with \(|U| \le c\). Then we have
where the constant in \(\lesssim \) depends only on c. Moreover, for any \(u\in {\mathbb {Z}}_n^d\) we have
Note that the final part of Lemma 4.3 is not the same as Lemma 4.2, because we consider the hitting probability after the random walk has made a certain number of excursions across \(\overline{S}_x{\setminus } S_x\) rather than at time \(\alpha t_*\).
Proof of Lemma 4.3
Around every \(u\in U\) we place two balls of radii \(r=n^{2\zeta /\kappa }\) and \(R=\tfrac{1}{2} n^\zeta \). We let \(N_u\) be the number of excursions across the annulus that is created by the two balls during the first \(\underline{E}(\alpha t_*,\delta /4)\) excursions across the “thin” annulus \(\overline{S}_u{\setminus } S_u\), where \(\underline{E}\) is as in Lemma 2.9 and we will set the value of \(\delta \) later in the proof. We then have
where \(\underline{L}(t)\) is defined in the statement of Lemma 2.12. We let \({\mathcal {F}}\) be the \(\sigma \)-algebra generated by \(X(\tau _i(u))\) and \(X(\sigma _i(u))\) for all \(u\in U\), where \(\tau _i(u)\) and \(\sigma _i(u)\) are defined at the beginning of Sect. 2 with respect to the annuli \({\mathcal {B}}(u,R){\setminus }{\mathcal {B}}(u,r)\). Writing \({\mathrm{Exc}}(u,i) = \{u\text { not hit in the } i \text {-th excursion}\}\) we have
Given \({\mathcal {F}}\) the events \(\cap _{i=2}^{\underline{L}(\alpha t_*)}\mathrm{Exc}(u,i)\) are independent over different \(u\in U\), and hence
By Lemma 3.2 we have
We now set \(\delta = n^{-(d-2)\zeta /\kappa +\psi }\) and \(\psi \in (0,1/2)\) very small. Using Lemma 4.1 we get
Substituting this expression for \(\underline{L}(\alpha t_*)\) in the inequality above we deduce
Lemma 2.12 together with (4.4), (4.5) and (4.6) give
Note that in the above argument if \(U=\{u\}\), then we can place two balls of radii \(n^{2\varphi /\kappa }\) and \(n^\varphi \) around u and hence we lose the \(1+o(1)\) term in the expression for \(\underline{L}\). Therefore we get
and this concludes the proof. \(\square \)
Lemma 4.4
Fix \(0<\zeta <\varphi \) and \(c>0\). Let \(U\notin {\mathcal {P}}(\zeta )\) with \(|U| \le c\). Suppose that U viewed as a subset of the graph which arises by adding edges between all of the vertices of \({\mathbb {Z}}_n^d\) at distance at most \(n^\zeta \) consists of f components. Then
where \(p_d\) is as in Definition 3.5 and the constant in \(\lesssim \) depends on \(c,d,\zeta \) and \(\varphi \).
Proof
First we decompose U into its f connected components, i.e. every component contains points that are within distance \(n^\zeta \) from some point of the same component. If two points belong to different components, then their distance is at least \(n^\zeta \). Let a be the number of components \((A_i)\) containing exactly one point and let b be the number of components \((A'_i)\) containing at least two points. Since \(U\notin {\mathcal {P}}(\zeta )\), it follows that \(b\ge 1\). For \(i=1,\ldots , a\) we let \(Y_{1,i}= \mathfrak {1}(\tau _{a_i}>\tau _{S_{a_i}})\), where \(A_i=\{a_i\}\). For \(i=1,\ldots , b\) we pick \(x_i, y_i \in A_i'\) distinct such that \(\Vert x_i-y_i\Vert \le n^\zeta \) and we set \(Y_{2,i} = \mathfrak {1}(\tau _{x_i},\tau _{y_i}>\tau _{S_{x_i}})\). Note that for \(\zeta >0\) small enough \(S_{x_i}=S_{y_i}\). Let \(k=\sum _{i=1}^{b}\mathfrak {1}( \Vert x_i-y_i\Vert \le n^{\zeta /(10d)})\).
For \(j=1,\ldots , b\) we place two balls centered at each \(x_j\) satisfying \(\Vert x_j-y_j\Vert \le n^{\zeta /(10d)}\) of radii \(n^{2\zeta /d}\) and \(n^\zeta /2\). For each j not satisfying the above condition we place two balls around \(x_j\) of radii \(n^{\zeta /(15d)}\) and \(n^{\zeta /(10d)}/2\). We also place two balls of the same radii around the corresponding \(y_j\). As in Lemma 2.12, for \(u\in U\) we write \(N_u= N_u(n^{2\zeta /d},n^{\zeta }/2,\alpha t_*)\) and \(N_u'=N_u(n^{\zeta /(15d)},n^{\zeta /(10d)}/2,\alpha t_*)\). By conditioning on the events \(\{N_u>\underline{L}(\alpha t_*)\}\) and \(\{N_u' >\underline{L}(\alpha t_*)\}\), depending on the radii of the balls that we placed around u, and using (3.1) in the case when \(\Vert x_i-y_i\Vert \le n^{\zeta /(10d)}\), we get in exactly the same way as in the proof of Lemma 4.3 that
Since \(k\le b\), \(a+b =f\), and \(b\ge 1\) from the above we deduce
and this finishes the proof. \(\square \)
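The decomposition of U into components used in this proof is the usual graph connectivity, with an edge between any two points at distance at most \(n^\zeta \). A sketch of computing the component count f via union–find (illustrative code, not from the paper; we use the sup-norm for the distance, which only matters up to constants):

```python
def count_components(points, thresh):
    """Number of connected components of the graph on `points`
    (tuples in Z^d) with an edge between any two points at sup-norm
    distance at most `thresh`, computed via union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def dist(a, b):
        return max(abs(s - t) for s, t in zip(a, b))

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= thresh:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(len(points))})
```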
Proposition 4.5
Fix \(\alpha >(1+p_d)/2,\) \(0<\gamma <2\alpha -1\) and let
Then \({{\mathbb {E}}}\left[ Z_\gamma \right] = o(1)\) as \(n\rightarrow \infty \).
Remark 4.6
We will show in the proof of the lower bound of Theorem 1.1 that the threshold \((1+p_d)/2\) is sharp: for \(\alpha \in (0,(1+p_d)/2)\) the random variable \(Z_\gamma \) from the statement of Proposition 4.5 tends to \(\infty \) almost surely for any \(\gamma >0\).
Proof of Proposition 4.5
For \(0<\zeta <\varphi \) to be determined shortly we write
From Lemma 4.4 with \(f=1\) we get
Hence for \(\zeta <2\alpha /(p_d+1) - 1\) we get that the above upper bound is o(1) as \(n\rightarrow \infty \). From Lemma 4.3 with \(|U|=2\) we get
Therefore taking \(\gamma <2\alpha -1\) we conclude that \({{\mathbb {E}}}\left[ Z_\gamma \right] =o(1)\) as \(n\rightarrow \infty \) and this completes the proof. \(\square \)
5 Total variation distance
In this section we give the proof of Theorem 1.1. As mentioned in Sect. 1.3 we will proceed by using the concentration estimates from Sect. 2 to reduce the problem to proving the uniformity of the last visited set in each box in an appropriately chosen partition of \({\mathbb {Z}}_n^d\). In order to establish the latter we will use the general strategy employed in the proof of [19, Theorem 6].
Let \(t_*\) be as defined in (4.3) in Sect. 4. Let \(Q=(Q_z)\) where \(Q_z = \mathfrak {1}(\tau _z > \alpha t_*)\) and \(Z=(Z_z)\), where \(Z_z\) are i.i.d. Bernoulli random variables of parameter \(n^{-\alpha d}\). Recall the definition of \({\mathcal {A}}\), the process Y and the collection of boxes \({\mathcal {S}}_{\beta }\), where \(\beta =\alpha -\varepsilon \), defined in the setup subsection at the beginning of Sect. 4 and in Definition 2.10. We define \(\widetilde{Q}\) by setting \(\widetilde{Q}_z = 0\) for all \(z \in {\mathcal {A}}\) and \(\widetilde{Q}_z = Q_z\) for \(z\notin {\mathcal {A}}\). We also define \(\widetilde{Z}\) by setting \(\widetilde{Z}_z = 0\) for \(z \in {\mathcal {A}}\) and \(\widetilde{Z}_z=Z_z\) for \(z\notin {\mathcal {A}}\).
Claim 5.1
If \(\alpha ,\) \(\varphi \) and \(\varepsilon \) satisfy \(d-(d+1)\alpha +\varepsilon + \varphi <0,\) then we have as \(n\rightarrow \infty \)
Proof
Using the obvious coupling between Q and \(\widetilde{Q}\) we get
Since the volume of each annulus is of order \(n^{(d-1)\beta +\varphi }\) and the total number of annuli in the torus is of order \(n^{d-d\beta }\), using Lemma 4.2 we get
where in the last step we used the assumption of the claim. In exactly the same way we get the result for Z and \(\widetilde{Z}\). \(\square \)
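Spelling out the exponent count in the last step (a sketch, using \(\beta =\alpha -\varepsilon \) and assuming the bound \({{\mathbb {P}}}\left( \tau _x>\alpha t_*\right) = n^{-\alpha d(1+o(1))}\) of Lemma 4.2):

$$ n^{d-d\beta }\cdot n^{(d-1)\beta +\varphi }\cdot n^{-\alpha d(1+o(1))} = n^{d-\beta +\varphi -\alpha d+o(1)} = n^{d-(d+1)\alpha +\varepsilon +\varphi +o(1)}, $$

which tends to 0 precisely under the assumption \(d-(d+1)\alpha +\varepsilon +\varphi <0\) of the claim.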
Lemma 5.2
We have
We prove Lemma 5.2 at the end of this section. We now proceed to the proof of Theorem 1.1.
Proof of Theorem 1.1 Part I, existence of \(\alpha _1(d)\) Let \(\alpha >(1+p_d)/2\). The statement of the theorem is equivalent to showing
By the triangle inequality for total variation distance we have
By Claim 5.1 and Lemma 5.2 it is enough to show that
Since \(Y_z=\widetilde{Z}_z=0\) for \(z\in {\mathcal {A}}\), in the total variation distance we only consider the distance between the law \(\mu \) of \((Y_z)_{z\notin {\mathcal {A}}}\) and the law \(\nu \) of \((\widetilde{Z}_z)_{z\notin {\mathcal {A}}}\).
For \(\gamma =2\alpha -1-2\varepsilon \) we recall that the definition of the collection of \(n^\gamma \)-separated subsets of \({\mathbb {Z}}_n^d{\setminus }{\mathcal {A}}\) is given by
For the total variation distance between \(\mu \) and \(\nu \) we have
where abusing notation we write
Let \(Z_\gamma \) be as in Proposition 4.5. Since \((a-b)_+ \le a\) for \(a,b>0\), we can bound by Markov’s inequality
where the last equality follows from Proposition 4.5, since \(\gamma \in (0,2\alpha -1)\) and \(\alpha >(1+p_d)/2\). Let M satisfy \(d-\alpha d-\varepsilon dM<0\). For \(B\in {\mathcal {S}}_{\beta }\) we define the collections of sets
Using again \((a-b)_+\le a\) for \(a,b>0\) we now get
We now show that \(\sum _{S \in {\mathcal {S}}{\setminus } {\mathcal {S}}_M} \mu (S) = o(1)\) as \(n\rightarrow \infty \). Setting \({\mathcal {U}}= \{ x\notin {\mathcal {A}}: \tau _x>\tau _{{\mathcal {S}}_x}\}\) we get by the union bound
where in the second inequality we used Lemma 4.3. Since \(d-\alpha d - \varepsilon d M <0\) we obtain that
Therefore we only need to show that
Let \({\mathcal {F}}\) denote the \(\sigma \)-algebra generated by \(X(\tau _i(S))\) and \(X(\sigma _i(\overline{S}))\) for all \(S\in {\mathcal {S}}_{\beta }\) and \(i\ge 0\), where \(\tau _i(S)\) and \(\sigma _i(\overline{S})\) refer to the stopping times as defined at the beginning of Sect. 2 with respect to the annulus \(\overline{S}{\setminus } S\). Then conditioning on \({\mathcal {F}}\), the collections \((Y_z)_{z\in B}\), for \(B\in {\mathcal {S}}_{\beta }\) become independent. Therefore using the independence and Jensen’s inequality, we have
Around every \(z\in {\mathbb {Z}}_n^d{\setminus } {\mathcal {A}}\) we place two balls of radii \(r=n^{2\varphi /\kappa }\) and \(R=n^\varphi \) and we write \(N_z\) for the number of excursions across the annulus \({\mathcal {B}}(z,R){\setminus } {\mathcal {B}}(z,r)\) during the first \(\underline{E}(\alpha t_*,\delta /4)\) excursions across \(\overline{S}_z{\setminus } S_z\) as in Lemma 2.12, where we recall \(\delta = n^{\varphi (2-d)/\kappa +\psi }\) from (4.1) and we take \(\psi >0\) very small. In some of the calculations below we have substituted the values of r and R, except in a few places in order to emphasize the cancellation. We set
and using Lemma 2.12 we get that there exists \(C>0\) such that \(\sum _{S\in {\mathcal {S}}_M} (\mu (S)-\nu (S))_+\) is upper bounded by
We now focus on the first term appearing in the expression above. We use the same technique as in the proof of [19, Theorem 6]. By the inclusion–exclusion formula it is easy to see that
and
where for a set P and \(\ell \in {\mathbb {N}}\) we write \(P\atopwithdelims ()\ell \) for the collection of subsets of P of size \(\ell \). Let \(K\in \{1,\ldots , \frac{n^{d\beta } - |S\cup ({\mathcal {A}}\cap B)|}{2}\}\) be a parameter to be determined later. Applying the Bonferroni inequalities as in [13, 19] the sum in (5.4) is upper bounded by
We start by showing that the second term in (5.5) is o(1). Indeed, it can be bounded by
Choosing \(K>0\) such that \(d-\alpha d + d\varepsilon -2dK\varepsilon <0\) gives that the above expression is o(1). This leads us to choose \(K>\frac{1-\alpha +\varepsilon }{2\varepsilon }\). Next we turn to bound the first term appearing in (5.5). To do that we split the sum over all \(W\in {B{\setminus } (S\cup {\mathcal {A}}) \atopwithdelims ()\ell }\) into the sets W such that \(W\cup S \in {\mathcal {S}}\) and into those W such that \(W \cup S \notin {\mathcal {S}}\). We also bound the positive part by the absolute value, so that we may forget about the term \((-1)^{\ell }\). Hence now we focus on proving that the following is o(1):
Claim 5.3
There exists \(\alpha _1(d)\in (0,1)\) depending only on d such that for all \(\alpha >\alpha _1(d)\) we have that the sum in (5.6) is o(1) as \(n\rightarrow \infty \).
Proof
Let \(W\in {B{\setminus } S \atopwithdelims ()\ell }\) be such that \(W\cup S \in {\mathcal {S}}\). Note that \(|W\cup S|=|S|+\ell \). Since \(\gamma =2\alpha -1 -2\varepsilon \), if we take \(\varphi \) satisfying the assumption of Claim 5.1 and \(\varepsilon >0\) sufficiently small, then \(n^\varphi <n^\gamma \). Hence we can use Lemma 3.2 to get that almost surely
Substituting the value of \(t_*\) into the expressions for L and \(L'\) from (5.3), using Lemma 3.2 and the value of \(\delta \) (recall Eq. (4.1)) we get that
From (5.8) and using that for all x we have \(e^{-x} \ge 1 - x\) we get
where in the last inequality we used that for all \(x>0\) we have \(e^{-x} \le 1-x+x^2\) and that \(|S|+\ell \) is at most \(M+2K\) which is independent of n. Similarly substituting the value of \(L'\) and using \(1-x\ge e^{-x-2x^2}\) for \(x\in (0,1/2)\) we obtain
Putting everything together we deduce
Therefore the sum in (5.6) is bounded from above by
Thus if
then this last quantity is o(1). Recall that \(\varphi \) was taken to satisfy \(\varphi <(d+1)\alpha -d - \varepsilon \) from Claim 5.1. These two inequalities together give that
Since we can take \(\psi \) and \(\varepsilon \) as small as we like, we deduce that for any
the sum in (5.10) is o(1) as \(n\rightarrow \infty \) and this finishes the proof of the claim. \(\square \)
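The Bonferroni inequalities used above in (5.5) truncate inclusion–exclusion: stopping the alternating sum at an odd depth gives an upper bound on the probability of a union, and at an even depth a lower bound. An exact toy check (illustrative events, not from the paper):

```python
from fractions import Fraction
from itertools import combinations

def bonferroni(events, universe_size, depth):
    """Inclusion-exclusion for P(union of events), truncated at
    intersections of `depth` events: odd depth gives an upper bound,
    even depth a lower bound (the Bonferroni inequalities)."""
    total = Fraction(0)
    for ell in range(1, depth + 1):
        sign = (-1) ** (ell + 1)
        for combo in combinations(events, ell):
            inter = set.intersection(*combo)
            total += sign * Fraction(len(inter), universe_size)
    return total

# three illustrative events inside a uniform sample space of 10 points
A, B, C = {0, 1, 2}, {1, 2, 3}, {2, 4}
truth = Fraction(len(A | B | C), 10)
```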
Remark 5.4
We now explain how we chose the values of r, R, and \(\delta \). The error terms that come from the hitting estimate Lemma 3.2 are O(r / R) and \(O(1/r^2)\), where r and R are the inner and outer radii, respectively, of the annulus that we put around each point. From the expressions (5.8) and (5.9) for L and \(L'\), respectively, we get the additional factor of \(1+O(\delta )\) where \(\delta \) is as in (4.1). Combining the different estimates yields an error term which is of order \(O(r/R)+O(1/r^2) +O(\delta )\). From the concentration result (Lemma 2.12) the smallest value of \(\delta \) that we can choose is of order \(r^{(2-d)/2} n^{\psi }\). In particular, the value of r essentially determines the value of \(\delta \). The largest value of R that we can take is of order \(n^\varphi \) because we need the outer boundary of the annulus centred at a point \(x \in \underline{S}\) for \(S \in {\mathcal {S}}_{\beta }\) to fit inside S. Given this choice, it is not hard to see that the optimal choice of r is \({\asymp } n^{2\varphi /\kappa }\).
It only remains to show that the sum in (5.7) is o(1). This will follow from the following two claims:
Claim 5.5
If \(\gamma \in (0,2\beta -1),\) then as \(n\rightarrow \infty \)
Proof
Clearly we have
We now bound the total number of sets \(U\subseteq B\) with \(U\notin {\mathcal {S}}\) such that \(|U|=m\). Since \(U\notin {\mathcal {S}}\), there exist two points of U that are at distance less than \(n^\gamma \) from each other. The number of ways of choosing these two points is \({\lesssim } n^{d\beta }\cdot n^{d\gamma }\). Then we have to pick another \(m-2\) points. Therefore we get
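In symbols, the count just described can be summarized as follows (a sketch consistent with the prose, with constants absorbed into \(\lesssim \)):

```latex
\#\bigl\{U \subseteq B : |U| = m,\ U \notin \mathcal{S}\bigr\}
\;\lesssim\;
\underbrace{n^{d\beta} \cdot n^{d\gamma}}_{\text{choice of the close pair}}
\cdot
\underbrace{\binom{n^{d\beta}}{m-2}}_{\text{remaining $m-2$ points of $U$}} ,
```

since the box B contains \(n^{d\beta }\) sites and the second point of the close pair must lie in a ball of volume \({\asymp } n^{d\gamma }\) around the first.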
Hence (5.14) is
Since \(\gamma =2\alpha -1-2\varepsilon \) we get that the expression in (5.16) is o(1) as \(n\rightarrow \infty \). \(\square \)
Claim 5.6
For all \(\alpha >\alpha _1(d)\) we have as \(n\rightarrow \infty \) that
Proof
Fix \(\zeta >0\); we will determine its precise value later. First we define the collection of \(\zeta \)-separated subsets of the box B, similarly to Sect. 4: \({\mathcal {S}}_B(\zeta ) = \{U\subseteq B: |x-y|\ge n^\zeta , \forall x,y \in U\}\). The expression on the left-hand side of (5.17) is upper bounded by
For the term I, using (5.15) and Lemma 4.3, since \(U\in {\mathcal {S}}_B(\zeta )\), we get
If \(\zeta \in (0,2\alpha -1-\varepsilon )\), this last quantity is o(1). It remains to bound II. We view \(U\notin {\mathcal {S}}_B(\zeta )\) with \(U\subseteq B\) as a subset of the graph which arises by adding edges between all of the vertices of \({\mathbb {Z}}_n^d\) at distance at most \(n^\zeta \). Writing \({\mathcal {S}}(\zeta ,f,m)\) for the collection of sets \(U\subseteq B\) with \(U\notin {\mathcal {S}}_B(\zeta )\) and \(|U|=m\) that consist of f components, we have
since first we choose one point for each component among the \(n^{d\beta }\) possible points and then we connect the remaining \(m-f\) points to the already existing components. This upper bound and the same explanation appear in [19]. Using also Lemma 4.4 we deduce
Since for all d we have \(\alpha _1(d)>(1+p_d)/2\), by taking \(\zeta \) sufficiently small we see that this last quantity is o(1) and this finishes the proof of the claim and the proof of Part I of the theorem. \(\square \)
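One way to write out the component-counting bound described in the proof above (a sketch; C denotes an absolute constant, and each of the \(m-f\) attached points must lie within distance \(n^\zeta \) of one of at most m previously placed points):

```latex
|\mathcal{S}(\zeta, f, m)|
\;\lesssim\;
\underbrace{\binom{n^{d\beta}}{f}}_{\text{one point per component}}
\cdot
\underbrace{\bigl(C\, m\, n^{\zeta d}\bigr)^{m-f}}_{\text{attaching the other $m-f$ points}} .
```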
Proof of Lemma 5.2
We recall from (4.1) that \(\delta =n^{\varphi (2-d)/\kappa +\psi }\) and recall from the setup in Sect. 4 that for \(S\in {\mathcal {S}}_{\beta }\) we write \(\tau _{S}\) for the first time that X has made \(\underline{E}(\alpha t_*, \delta /4)= \alpha t_*{{\mathbb {E}}}\left[ W\right] /((1+\delta /4)T^{\Box ,\circ }_{n^\beta ,10\sqrt{d}n^\beta })\) excursions across the annulus \(\overline{S}{\setminus } S\).
We now let
Note that it suffices to show that \({{\mathbb {P}}} \left( {\mathcal {U}}(\alpha t_*) = \underline{{\mathcal {U}}}\right) =1-o(1)\). If \(x_{S}\) is the center of the box \(S\in {\mathcal {S}}_{\beta }\), we write \(N_{S}(t)= N_{x_{S}}(n^\beta ,n^\beta +n^\varphi ,t)\). Since the value of \(\delta \) satisfies the assumptions of Lemma 2.9 we immediately get
Therefore, it remains to show that \({{\mathbb {P}}} \left( \underline{{\mathcal {U}}}\subseteq {\mathcal {U}}(\alpha t_*)\right) =1-o(1)\). We first note that
Indeed, by Lemma 2.9 we have
since \(2\varphi /\kappa <\beta \) by Claim 5.1 provided that \(\varepsilon >0\) is sufficiently small. For each box \(S\in {\mathcal {S}}_{\beta }\) and each point \(z \in S\), let \(\sigma _z\) be the first time that \(X|_{[\tau _S,\infty )}\) has made
excursions across the annulus \({\mathcal {B}}(z,10n^\beta ){\setminus } {\mathcal {B}}(z,n^\beta )\). Then we have
where the final assertion follows from Lemma 2.4. (Lemma 2.4 is stated and proved for \(t\asymp n^d\log n\). The same result and proof are also applicable for times \(t>n^{3/2+\varepsilon }\) for any fixed \(\varepsilon >0\). In this case the exponent in the first error term becomes \(t/(T^{\circ ,\circ }_{n^\beta ,10n^\beta } n^\psi )\).) Consequently,
and hence it follows that
In order to show that \({{\mathbb {P}}} \left( \underline{{\mathcal {U}}}\subseteq {\mathcal {U}}(\alpha t_*)\right) =1-o(1)\), it suffices to show that
By (5.18) and (5.20) we only need to show that
In order to prove this, we are going to get a bound on the probability that X visits a given point \(z \in \underline{{\mathcal {U}}} \cap S\) in the time interval \([\tau _S,\sigma _z]\). By Lemma 3.2 we obtain for constants \(c_1, c_2, c_3>0\) that
We now use the above estimate to prove (5.21). We have
From Lemma 4.3 we immediately get
Therefore combining (5.22) and (5.23) we deduce
and using (5.12) it follows that for \(\psi \) sufficiently small this last quantity is o(1) as \(n\rightarrow \infty \) and this concludes the proof. \(\square \)
Proof of Theorem 1.1 Part II, existence of \(\alpha _0(d)\) We define
Since \({{\mathbb {P}}} \left( Z_x = Z_y=1\right) = n^{-2\alpha d}\), we get that \({{\mathbb {E}}}\left[ U\right] \asymp n^{d-2\alpha d}\). Let \(\varepsilon \in (0,2\alpha p_d/(1+p_d))\). Then we have
By Markov’s inequality we immediately get
since \(\varepsilon <2\alpha p_d/(1+p_d)\). It thus remains to show that
Let \(L = \{(x_i,y_i)\}_{i=1}^{n^{d-\varepsilon }}\) be a grid of points such that \(\Vert x_i-y_i\Vert =1\) for all i and \(\Vert x_i-y_j\Vert \ge n^{\varepsilon /d}\) for all \(i\ne j\). We now place two balls around each pair of points \(x_i,y_i\), of radii \(R=n^{\varepsilon /d}/2\) and \(r=n^{\varepsilon /d^2}\). Let \(N_i\) be the number of excursions in the annulus around the point \(x_i\) up to time \(\alpha t_*\). Let \(E_i\) be the event that neither \(x_i\) nor \(y_i\) is covered during the \(A'=\alpha t_*/((1-\delta )T^{\circ ,\circ }_{r,R})\) excursions across the annulus around them, where \(\delta =r^{(2-d)/2}n^{\psi }\) for some \(\psi >0\) sufficiently small. We now define
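As an aside, the grid-of-pairs construction can be made concrete in a toy setting (a sketch with small parameters and \(\ell ^\infty \) torus distance; the function names are ours):

```python
from itertools import product

def pair_grid(n, d, spacing):
    """Place pairs (x_i, y_i) with y_i = x_i + e_1 on a sub-grid of the
    torus Z_n^d, so that points of distinct pairs are far apart."""
    pairs = []
    for x in product(range(0, n, spacing), repeat=d):
        y = ((x[0] + 1) % n,) + x[1:]
        pairs.append((x, y))
    return pairs

def torus_dist(x, y, n):
    """l-infinity distance on the torus Z_n^d."""
    return max(min(abs(a - b), n - abs(a - b)) for a, b in zip(x, y))

pairs = pair_grid(12, 3, 4)          # toy sizes; the proof takes spacing n^{eps/d}
assert len(pairs) == (12 // 4) ** 3  # number of pairs = (n / spacing)^d
assert all(torus_dist(x, y, 12) == 1 for x, y in pairs)   # distance 1 within a pair
```

Points of distinct pairs are then at distance at least spacing − 1 from each other, matching the separation requirement in the proof.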
Then by the union bound and Lemma 2.4 we have that
Therefore we get as \(n\rightarrow \infty \) that
So we can now bound
It thus suffices to show that
Let \({\mathcal {F}}\) be the \(\sigma \)-algebra generated by \(X(\tau _j(x_i))\) and \(X(\sigma _j(x_i))\) for all i and j, where \(\tau _j(x_i)\) and \(\sigma _j(x_i)\) are as defined at the beginning of Sect. 2. Then given \({\mathcal {F}}\) the events \(E_i\) become independent. From (3.2) of Lemma 3.7 and using \(1-x\ge e^{-x-2x^2}\) for \(x\in (0,1/2)\) we get that for all i and all n sufficiently large
From the above it follows that for all n sufficiently large
and hence by Chebyshev’s inequality we get
Since conditional on \({\mathcal {F}}\) the events \(E_i\) are independent, we get
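Schematically (a sketch, not the paper's displayed formula), the conditional independence is exploited through the elementary bound

```latex
\mathbb{P}\Bigl(\,\textstyle\bigcap_i E_i^c \,\Bigm|\, \mathcal{F}\Bigr)
\;=\; \prod_i \bigl(1 - \mathbb{P}(E_i \mid \mathcal{F})\bigr)
\;\le\; \exp\Bigl(-\sum_i \mathbb{P}(E_i \mid \mathcal{F})\Bigr),
```

so a lower bound on \(\sum _i {\mathbb {P}}(E_i \mid {\mathcal {F}})\) holding on an event of high probability forces at least one of the \(E_i\) to occur with high probability.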
Therefore, we deduce
Setting \(\alpha _0(d)=(1+p_d)/2\) gives that for all \(\alpha \in (0,\alpha _0(d))\), if we take \(\varepsilon \) sufficiently small, the quantity above is o(1), and this concludes the proof of the theorem. \(\square \)
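As an illustration only (not part of the proof), one can simulate the uncovered set and count neighbouring pairs, the statistic that distinguishes \({\mathcal {U}}(\alpha t_*)\) from an i.i.d. Bernoulli set for small \(\alpha \). All names and toy parameters below are ours:

```python
import math
import random
from itertools import product

def uncovered_set(n, d, steps, seed=1):
    """Run a simple random walk on the torus Z_n^d for `steps` steps and
    return the set of sites it has not yet visited."""
    rng = random.Random(seed)
    pos = tuple([0] * d)
    visited = {pos}
    for _ in range(steps):
        i = rng.randrange(d)
        s = rng.choice((-1, 1))
        pos = tuple((c + s) % n if j == i else c for j, c in enumerate(pos))
        visited.add(pos)
    return set(product(range(n), repeat=d)) - visited

def neighbour_pairs(U, n):
    """Count unordered pairs of sites of U at torus distance 1 (assumes n >= 3)."""
    count = 0
    for x in U:
        for j in range(len(x)):
            y = tuple((c + 1) % n if k == j else c for k, c in enumerate(x))
            if y in U:
                count += 1
    return count

n, d = 6, 3
t_cov_ish = int(n ** d * math.log(n ** d))      # rough order of the cover time
U = uncovered_set(n, d, int(0.5 * t_cov_ish))
obs = neighbour_pairs(U, n)                      # compare with the i.i.d. prediction
```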
6 Exact uniformity
In this section we prove Theorem 1.2. We start with a preliminary lemma.
Lemma 6.1
Fix \(\gamma >0\). Let \(A\subseteq {\mathbb {Z}}_n^d\) satisfy \(A\in {\mathcal {S}}(\gamma )\) (recall (4.2)). Then for all x such that \({\mathrm{dist}}(x,A)\ge n^\gamma \) and all \(z\in A\) we have
where \(\tau _{A}\) is the first hitting time of A.
Proof
We let
Then it is standard that \(t_{\mathrm {unif}}\asymp n^2\), with implied constants depending only on the dimension d. Let \(\varepsilon >0\) be sufficiently small. We define
Then we have
By the Markov property we have
where the last equality follows from Proposition 8.1. Let \(\tau _A^+\) be the first return time to A. By reversibility we have for all \(z\in A\)
Since \(A \in {\mathcal {S}}(\gamma )\), it follows that for all \(w\in A\) we have \(A\cap {\mathcal {B}}(w) = \{w\}\), where \({\mathcal {B}}(w) = {\mathcal {B}}(w,n^\gamma /2)\). This now gives that for all \(w \in A\)
where K and s are independent of w and \(\tau _{\partial {\mathcal {B}}(w)}\) is the first hitting time of \(\partial {\mathcal {B}}(w)\). Therefore we get
Using (6.4) we obtain for all \(z\in A\)
Writing for shorthand \({{\mathbb {E}}}_{\partial {\mathcal {B}}}\left[ F\right] = {{\mathbb {E}}}_{z}\left[ {{\mathbb {E}}}_{X(\tau _{\partial {\mathcal {B}}(z)})}\left[ F\right] \right] \) we deduce
Using again Proposition 8.1 as in the last step of (6.2) we have
We also have
and since \(\max _{x,y}{{\mathbb {E}}}_{x}\left[ \tau _y\right] \asymp n^d\) (this follows, for instance, from Lemma 8.3 for \(r=1\)) we obtain
Writing \(G(x,y) = {{\mathbb {E}}}_{x}\left[ \sum _{t=0}^{t_{\mathrm {unif}}}\mathfrak {1}(X(t)=y)\right] \) for the Green kernel we have by Lemma 8.2 that
since \({\mathrm{dist}}(w,\partial {\mathcal {B}}(z)) \ge n^{\gamma }/2\) for all \(w\in A\). By the union bound we get
Therefore, from (6.11) and (6.12) we deduce
since \(\gamma \in (0,1)\) and \(\varepsilon >0\) is sufficiently small. Similarly we have
Substituting (6.10) and (6.13) into (6.9) we get
Plugging (6.8) and (6.15) into (6.7) gives
Combining (6.16) with (6.1), (6.2), (6.3), (6.5), (6.6) and (6.14) results in
Since the first term appearing in the sum above is independent of z, by summing the above equality over all \(z\in A\) we get
This implies that
Finally we get
and this finishes the proof. \(\square \)
Proof of Theorem 1.2 Part I, existence of \(\alpha _1(d)\) Let \(t_1=(\alpha - \varepsilon )t_*\), where \(\alpha -\varepsilon >\alpha _1(d)\) and \(\alpha _1(d)\) is as in Theorem 1.1. For each \(x\in {\mathbb {Z}}_n^d\) we let \(Z_x=1\) with probability \(n^{-d(\alpha - \varepsilon )}\) and 0 otherwise, independently over different \(x\in {\mathbb {Z}}_n^d\). We set \(V=\{x\in {\mathbb {Z}}_n^d: Z_x=1\}\). Then by Theorem 1.1 we have that
where we recall that \({\mathcal {U}}(t)\) is the uncovered set at time t. Therefore there exists a coupling of V and \({\mathcal {U}}(t_1)\) such that
We now describe a coupling of the laws of \({\mathcal {U}}(\tau _\alpha )\) and \({\mathcal {W}}_\alpha \). First we fix \(\gamma \in (0,2(\alpha -\varepsilon ) -1)\). We couple \({\mathcal {U}}(t_1)\) and V using the optimal coupling. If \(|V|<n^{d-\alpha d}\) or \(V\notin {\mathcal {S}}(\gamma )\), then we generate \({\mathcal {U}}(\tau _\alpha )\) and \({\mathcal {W}}_\alpha \) independently. If \(|V| \ge n^{d-\alpha d}\) and \(V\in {\mathcal {S}}(\gamma )\), then we keep running the random walk until it has visited \(n^d-n^{d-\alpha d}\) points. We also remove points from V independently at random until we are left with a set of \(n^{d-\alpha d}\) points. Note that the resulting set is equal in distribution to \({\mathcal {W}}_\alpha \).
Let \(\xi _1,\ldots ,\xi _{|V|-n^{d-\alpha d}} \in {\mathcal {U}}(t_1)\) be the first \(|V|-n^{d-\alpha d}\) points in V visited by the random walk after time \(t_1\). Let \(\zeta _1\) be uniform in V. For each \(2\le j\le |V|-n^{d-\alpha d}\) we inductively let \(\zeta _j\) be uniform in \(V{\setminus } \{\zeta _1,\ldots , \zeta _{j-1}\}\). Then by Lemma 6.1 there exists a coupling of \((\xi _i)\) and \((\zeta _i)\) such that
We first couple \(\xi _1\) and \(\zeta _1\) using the above coupling. If this succeeds, then we couple \(\xi _2\) and \(\zeta _2\) in the same way. If at some point the coupling fails, then we let the two processes evolve independently. Therefore we get
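The optimal (maximal) coupling used above, under which two samples disagree with probability exactly the total variation distance, can be sketched for discrete distributions as follows (the function names are ours):

```python
import random

def _sample(dist, rng):
    """Sample from a dict {outcome: probability}."""
    u, acc = rng.random(), 0.0
    for z, pr in dist.items():
        acc += pr
        if u <= acc:
            return z
    return z  # guard against floating-point round-off

def maximal_coupling(p, q, rng):
    """Return (X, Y) with X ~ p, Y ~ q and P(X != Y) = TV(p, q)."""
    support = set(p) | set(q)
    overlap = {z: min(p.get(z, 0.0), q.get(z, 0.0)) for z in support}
    w = sum(overlap.values())                 # w = 1 - TV(p, q)
    if rng.random() < w:
        z = _sample({z: m / w for z, m in overlap.items()}, rng)
        return z, z                            # coupled: X = Y
    # coupling failed: draw X and Y from the normalized excess masses
    x = _sample({z: (p.get(z, 0.0) - overlap[z]) / (1 - w) for z in support}, rng)
    y = _sample({z: (q.get(z, 0.0) - overlap[z]) / (1 - w) for z in support}, rng)
    return x, y
```

Iterating this coupling step by step, and letting the two processes evolve independently after the first failure, is exactly the scheme described above.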
Since \({{\mathbb {E}}}\left[ |V|\right] = n^{d-\alpha d}\) by Markov’s inequality we get as \(n\rightarrow \infty \)
Using Lemma 4.5 and (6.17) or by a straightforward calculation we obtain that for \(\gamma \in (0,2(\alpha -\varepsilon )-1)\)
By the union bound we now have
Using the expression for \(\alpha _1(d)\) given in (5.13), choosing \(\varepsilon \) sufficiently small and taking \(\gamma = 2(\alpha - \varepsilon ) -1 - \varepsilon \) give that the above quantity is o(1), since \(\alpha -\varepsilon >\alpha _1(d)\). This together with (6.19), (6.20), (6.21) and (6.22) implies that
and this concludes the proof. \(\square \)
Proof of Theorem 1.2 Part II, existence of \(\alpha _0(d)\) The proof of this part follows in the same way as the proof of the existence of \(\alpha _0(d)\) in Theorem 1.1. Let \(\alpha _0(d)\) be as in Theorem 1.1 and \(\alpha >0\) with \(\alpha +\varepsilon <\alpha _0(d)\) with \(\varepsilon >0\) sufficiently small. We let \(Q_u=1(u\in {\mathcal {U}}(\tau _\alpha ))\) and \(Z_u=1(u\in {\mathcal {W}}_\alpha )\). Then we define
Then for all \(x,y \in {\mathbb {Z}}_n^d\) distinct we have
and hence \({{\mathbb {E}}}\left[ U'\right] \asymp n^{d-2\alpha d}\). Let \(t_1=(\alpha +\varepsilon )t_*\). Then on the event \(\{\tau _\alpha \le t_1\}\) we have \(W'\ge W\), where W is defined in (5.24) in the proof of Theorem 1.1 Part II. Take \(\varepsilon \in (0,2\alpha p_d/(1+p_d))\). Then we have
By Markov’s inequality we get
since \(\varepsilon \in (0,2\alpha p_d/(1+p_d))\). By Markov’s inequality again we have
where we used that \({{\mathbb {E}}}\left[ |{\mathcal {U}}(t_1)|\right] \asymp n^{d-d(\alpha +\varepsilon )}\). Therefore we get
where the last equality follows from (5.25) in the proof of Theorem 1.1 Part II and this concludes the proof. \(\square \)
7 Further questions
Throughout, we let \(\overline{\alpha }_0(d)\) (resp. \(\overline{\alpha }_1(d)\)) be the largest (resp. smallest) value such that the assertions of (1.1)–(1.4) hold.
Question 1
What are the precise values of \(\overline{\alpha }_0(d)\) and \(\overline{\alpha }_1(d)\)? Is it true that \(\overline{\alpha }_0(d)\) corresponds to the threshold \(\alpha _0(d)=(1+p_d)/2\) above which \({\mathcal {U}}(t)\) with high probability does not have neighbouring points while below which it does (as shown in Sects. 4 and 5)? Is there a phase transition: is it true that \(\overline{\alpha }_0(d) = \overline{\alpha }_1(d)\)? Our lower bound \(\alpha _0(d)\) for \(\overline{\alpha }_0(d)\) converges to \(\tfrac{1}{2}\) as \(d \rightarrow \infty \). Is this the correct asymptotic value of both \(\overline{\alpha }_0(d)\) and \(\overline{\alpha }_1(d)\) in the \(d \rightarrow \infty \) limit (in agreement with the threshold for non-uniformity in the sense of [17])?
Question 2
What is the asymptotic law of \({\mathcal {U}}(\alpha t_*)\) for \(\alpha \in (0,\overline{\alpha }_0(d))\)? We proved in Theorem 1.1 that \({\mathcal {U}}(\alpha t_*)\) for \(\alpha \in (0,\alpha _0(d))\) is not uniformly random by showing that it contains more neighbours than a random subset of \({\mathbb {Z}}_n^d\) where points are included independently with probability \(n^{-\alpha d}\). The arguments of Sect. 4 generalize to give that for any \(\alpha \in (0,1)\) there exists \(k = k(\alpha )\) and \(\gamma > 0\) such that each ball of radius \(n^\gamma \) contains at most k points with high probability. This suggests that there is a way to describe \({\mathcal {U}}(\alpha t_*)\) by:
(i) sampling points in \({\mathbb {Z}}_n^d\) independently with probability \({\asymp } n^{-\alpha d}\) and then
(ii) decorating the neighbourhood of each such point in a given way.
Question 3
For what class of graphs beyond \({\mathbb {Z}}_n^d\) for \(d \ge 3\) do the results of Theorems 1.1 and 1.2 also hold?
References
Aldous, D.J.: Threshold limits for cover times. J. Theor. Probab. 4(1), 197–211 (1991)
Belius, D.: Gumbel fluctuations for cover times in the discrete torus. Probab. Theory Relat. Fields 157(3–4), 635–689 (2013)
Brummelhuis, M.J.A.M., Hilhorst, H.J.: Covering of a finite lattice by a random walk. Physica A 176(3), 387–408 (1991)
Brummelhuis, M.J.A.M., Hilhorst, H.J.: How a random walk covers a finite lattice. Physica A 185(1–4), 35–44 (1992)
Dembo, A., Ding, J., Miller, J., Peres, Y.: Cut-off for lamplighter chains on tori: dimension interpolation and phase transition (2013). arXiv:1312.4522
Dembo, A., Peres, Y., Rosen, J.: Brownian motion on compact manifolds: cover time and late points. Electron. J. Probab. 8(15), 14 (2003)
Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Thick points for planar Brownian motion and the Erdős–Taylor conjecture on random walk. Acta Math. 186(2), 239–270 (2001)
Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Cover times for Brownian motion and random walks in two dimensions. Ann. Math. (2) 160(2), 433–464 (2004)
Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Late points for random walks in two dimensions. Ann. Probab. 34(1), 219–263 (2006)
Durrett, R.: Probability: Theory and Examples, 4th edn. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (2010)
Fitzsimmons, P.J., Pitman, J.: Kac’s moment formula and the Feynman–Kac formula for additive functionals of a Markov process. Stoch. Process. Appl. 79(1), 117–134 (1999)
Imbuzeiro Oliveira, R.: Mean field conditions for coalescing random walks (2011). arXiv:1109.5684
Imbuzeiro Oliveira, R., Prata, A.: Late points and cover times for locally transient random walks (in preparation)
Lawler, G.F., Limic, V.: Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics, vol. 123. Cambridge University Press, Cambridge (2010)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence (2009). With a chapter by James G. Propp and David B. Wilson
Matthews, P.: Covering problems for Brownian motion on spheres. Ann. Probab. 16(1), 189–199 (1988)
Miller, J., Peres, Y.: Uniformity of the uncovered set of random walk and cutoff for lamplighter chains. Ann. Probab. 40(2), 535–577 (2012)
Peres, Y., Revelle, D.: Mixing times for random walks on finite lamplighter groups. Electron. J. Probab. 9(26), 825–845 (2004)
Prata, A.: Ph.D. thesis (2012)
Spitzer, F.: Electrostatic capacity, heat flow, and Brownian motion. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 3, 110–121 (1964)
Sznitman, A.-S.: Vacant set of random interlacements and percolation. Ann. Math. (2) 171(3), 2039–2087 (2010)
Acknowledgments
We thank Amir Dembo, Roberto Imbuzeiro-Oliveira, Yuval Peres, and Augusto Teixeira for helpful discussions. We also thank an anonymous referee for many helpful comments.
Appendices
Appendix 1: Elementary estimates
We begin by recording a few elementary estimates for Markov chains and random walks. Afterwards, we will give the proofs of several results stated in the text. The following is a restatement of [17, Proposition 3.3].
Proposition 8.1
Suppose that \(p^s(\cdot ,\cdot )\) denotes the transition kernel for a time-homogeneous Markov chain on a countable state space with a unique stationary distribution \(\pi \). For every \(s,t \in {\mathbb {N}},\)
It is easy to see that the following result can be derived from [14, Theorem 4.3.1].
Lemma 8.2
Let \(G(x,y) = {{\mathbb {E}}}_{x}\left[ \sum _{t=0}^{t_{\mathrm {unif}}}\mathfrak {1}(X(t)=y)\right] ,\) where \(t_{\mathrm {unif}}\) is the uniform mixing time of random walk on \({\mathbb {Z}}_n^d\). There exists a constant \(c_1 > 0\) depending only on \(d \ge 3\) such that
The following is a standard hitting time estimate for random walk.
Lemma 8.3
For all \(r<n/4\) we have
Proof of Lemma 2.2
It is clear that the sequence of exit points is a Markov chain. Since it is irreducible on a finite state space, it has a unique invariant distribution \(\widetilde{\pi }\).
Fix \(y\in \partial \mathcal {B}(0,R)\). We let \(f(x) = {{\mathbb {P}}}_{}\left( X(\tau _R) = y\;\vert \;X(\tau _r)=x\right) \). Then f is a harmonic function, and since \(R/r\ge 10\sqrt{d}\), the sets \(\mathcal {B}(0,r)\) and \({\mathcal {S}}(0,r)\) are separated from \(\partial \mathcal {B}(0,R)\); we can therefore apply Harnack’s inequality (Lemma 3.1) to obtain a constant \(c\ge 1\) such that for all \(x,z\in \partial \mathcal {B}(0,r)\) or \(x,z \in \partial {\mathcal {S}}(0,r)\) we have
uniformly over all \(y\in \partial \mathcal {B}(0,R)\). From that it follows that if \(\nu _x\) is the law of \(Y_j\) given that \(Y_{j-1}=x\), then for all \(x,z\in \partial \mathcal {B}(0,R)\)
By using the optimal coupling between \(\nu _x\) and \(\nu _y\) we get that for all x, y
Therefore, since \(\bar{d}(t)\) (defined in [15, Section 4.4]) is sub-multiplicative, we get that for all t
This now immediately gives that \(t_{\mathrm {mix}}=k_0<\infty \), with \(k_0\) independent of the size of the state space.
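The sub-multiplicativity of \(\bar{d}\) invoked above can be checked numerically on a toy chain (a sketch; the chain and function names are ours):

```python
def tv(p, q):
    """Total variation distance between two distributions given as lists."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def step(P, mu):
    """One step of the chain: distribution mu times transition matrix P."""
    k = len(P)
    return [sum(mu[i] * P[i][j] for i in range(k)) for j in range(k)]

def d_bar(P, t):
    """bar-d(t) = max over starting states x, y of TV(P^t(x, .), P^t(y, .))."""
    k = len(P)
    rows = [[1.0 if j == i else 0.0 for j in range(k)] for i in range(k)]
    for _ in range(t):
        rows = [step(P, r) for r in rows]
    return max(tv(rows[x], rows[y]) for x in range(k) for y in range(k))

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
# sub-multiplicativity: bar-d(s + t) <= bar-d(s) * bar-d(t)
assert d_bar(P, 4) <= d_bar(P, 2) * d_bar(P, 2) + 1e-12
```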
Let \(\mu \) denote the law of \((Y_N,\ldots ,Y_{mN})\). Then we have
where we write \(\mu (y_j|y_1,\ldots ,y_{j-1})\) for the conditional probability that \(Y_{jN}=y_j\) given \(Y_{iN}=y_i\) for all \(1\le i\le j-1\). Using Proposition 8.1 we get
Substituting in the formula above we get for \(me^{-N}<1\)
where in the last step we used that
If \(me^{-N}>1\), then the bound of Lemma 2.2 is trivially true and this completes the proof. \(\square \)
Proof of Claim 2.8
Since \(p\in (0, 1/2]\), we have that \(p/(1-p)\le 1\), and hence
We are now going to compare the sum appearing on the right hand side above to the integral \(\int _{1}^{\infty } x^je^{-px}\,dx\). The function \(f(x) = x^j e^{-px}\) is increasing for \(x\le j/p\) and decreasing for \(x>j/p\). We thus have
and
Therefore we get
Since the function f achieves its maximum at j/p, we have that \(f(x) \le (j/p)^je^{-j}\) for all x. Using the above inequalities we get
It is easy to see that the integral appearing above is equal to \(j!/p^j\) (it is the Gamma function), and using Stirling’s formula we get
and this finishes the proof of the claim. \(\square \)
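The Gamma-function evaluation and Stirling approximation used in the proof above can be sanity-checked numerically. A toy check (names ours); we verify the standard identity \(\int _0^\infty x^j e^{-px}\,dx = j!/p^{j+1}\), which dominates the integral over \([1,\infty )\):

```python
import math

def gamma_integral(j, p, upper=200.0, steps=200000):
    """Trapezoidal approximation of the integral of x^j e^{-p x} over [0, upper]."""
    h = upper / steps
    total = 0.5 * ((upper ** j) * math.exp(-p * upper))  # endpoint terms; f(0) = 0
    for k in range(1, steps):
        x = k * h
        total += (x ** j) * math.exp(-p * x)
    return total * h

j, p = 5, 0.5
exact = math.factorial(j) / p ** (j + 1)                    # Gamma identity
assert abs(gamma_integral(j, p) - exact) / exact < 1e-3
stirling = math.sqrt(2 * math.pi * j) * (j / math.e) ** j   # Stirling's formula
assert 0.9 < math.factorial(j) / stirling < 1.1
```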
Proof of Lemma 3.1
By [14, Theorem 6.3.8, equation (6.19)] and using the fact that \(R>2 r\) we get that there exists a universal constant \(c_1\) such that for all \(u,v \in {\mathcal {B}}(0,r)\) with \(\Vert u-v\Vert =1\)
By Harnack’s inequality (see for instance [14, Theorem 6.3.9]) we get for a universal constant \(c_2\) that
Let \(u_0=x,u_1,\ldots ,u_{\ell -1}, u_\ell =y\) be a shortest path from x to y with \(\Vert u_{i+1} -u_i\Vert =1\) for all i. Notice that the assumption \(x,y\in {\mathcal {B}}(0,r)\) gives that \(\ell \le 2r\) and \(u_i\in \mathcal {B}(0,r)\) for all i. We thus obtain
where in the second inequality we used (7.3) and for the last one we used (7.4). Therefore we deduce
and this concludes the proof. \(\square \)
Proof of Lemma 3.2
Let G be the Green kernel for simple random walk in \({\mathbb {Z}}^d\). Then by [14, Theorem 4.3.1] we have, as \(\Vert x\Vert \rightarrow \infty \), that
where \(c_d\) is a constant that only depends on the dimension d. By Bayes’ formula we have
We now treat the term \({{\mathbb {P}}}_{x}\left( \tau _z<\tau _R\right) \) and the ratio \({{\mathbb {P}}}_{x}\left( X(\tau _{R})=y\;\vert \;\tau _z<\tau _R\right) /{{\mathbb {P}}}_{x}\left( X(\tau _R)=y\right) \) separately. By translation invariance we may take \(z=0\) in expressions involving the Green kernel; note, however, that \(\left\| z \right\| \) below refers to the setting before this translation. Since the Green kernel is harmonic outside of 0, we can apply the optional stopping theorem to get
Since \(R-\Vert z\Vert \le \Vert X(\tau _R)\Vert \le R+\Vert z\Vert \), \(r-\Vert z\Vert \le \Vert x\Vert \le r+\Vert z\Vert \), \(\Vert z\Vert \le r/4\), and \(r, R\rightarrow \infty \) as \(n\rightarrow \infty \), substituting into the asymptotic expression for the Green kernel we get
Now it remains to bound the ratio
where the equality follows by the strong Markov property. If we set \(f(w) = {{\mathbb {P}}}_{w}\left( X(\tau _R)=y\right) \), then it is easy to check that f is harmonic in \(\mathcal {B}(0,R)\). Since by assumption \(x,z\in {\mathcal {B}}(0,r)\), Lemma 3.1 gives
Plugging (7.7) and (7.8) into (7.6) and setting \(C_d=c_d/G(0)\) gives
and this concludes the proof. \(\square \)
Appendix 2: Proof of Lemma 4.1
We start with some preliminary results. Throughout we assume that \(R=o(n)\) and \(R\ge 2r\). First we let \(\tau =\sigma _1-\sigma _0\), where the \(\sigma _i\)’s are defined in Sect. 2 and we take \(F(x,R)= {\mathcal {B}}(0,R)\) and \(E(x,r)= {\mathcal {B}}(0,r)\). We start by proving that up to small error the expectation of \(\tau \) does not depend on the starting point of X on \(\partial {\mathcal {B}}(0,R)\).
Proposition 9.1
There exist constants \(c_1,c_2 > 0\) such that for all \(u,v \in \partial {\mathcal {B}}(0,R)\) we have
We prove the above proposition after establishing the following two lemmas.
Lemma 9.2
There exists a constant \(C > 1\) such that the following is true. Suppose that \(Q_1 < Q_2\) with \(Q_1 \ge 2r\) and \(Q \ge Q_2 \ge 2 Q_1\). Let \(E_{r,Q} = \{\tau _{\partial {\mathcal {B}}(0,Q)} <\tau _{\partial {\mathcal {B}}(0,r)}\}\) and \(\sigma =\min \{t\ge 0: X(t) \in \partial {\mathcal {B}}(0,Q_2)\}\). Then
Proof
Note that the functions
are harmonic in \({\mathcal {B}}(0,Q_2) {\setminus } {\mathcal {B}}(0,r)\). Consequently, it follows from Harnack’s inequality (Lemma 3.1) that there exists a constant \(C_1 \ge 1\) such that
Since we have
taking \(C = C_1^2\) proves the statement of the lemma. \(\square \)
Lemma 9.3
Let \(E_{r,Q}\) be as in Lemma 9.2, where \(Q=2^kR\) for some k and let \(\sigma \) be the first time that X hits \(\partial {\mathcal {B}}(0,Q)\). There exist constants \(c_1,c_2 > 0\) such that the following is true:
Proof
For each \(1 \le j \le k\), we let \(Q_j = 2^j R\). Note that \(Q_k = Q\). Lemma 9.2 implies that there exists a constant \(\rho _0 > 0\) such that the following holds: if \(u,\widetilde{u} \in \partial {\mathcal {B}}(0,Q_{j-1})\), if \(Y,\widetilde{Y}\) are random walks starting from \(u,\widetilde{u}\) respectively, both conditioned on the event \(E_{r,Q}\), and if \(\sigma _j,\widetilde{\sigma }_j\) denote the first times that they hit \(\partial {\mathcal {B}}(0,Q_j)\), then
Let \(\sigma _{k-1}\) be the first time that X hits \(\partial {\mathcal {B}}(0,Q_{k-1})\). By iterating this, it follows that there exists a constant \(\rho _1 \in (0,1)\) such that for all \(u,v \in \partial {\mathcal {B}}(0,R)\) we have that
Let \(\sigma \) be the first time that X hits \(\partial {\mathcal {B}}(0,Q)\). Then it follows that
By the strong Markov property, we note that
Combining this with (7.9) and using Lemma 9.2 we see that the above is bounded from above by
and this finishes the proof. \(\square \)
Proof of Proposition 9.1
Fix \(y \in \partial {\mathcal {B}}(0,R)\). Let \(\xi \) be the length of time it takes for the random walk, after hitting \(\partial {\mathcal {B}}(0,Q)\) where \(Q=n/2\), to hit \(\partial {\mathcal {B}}(0,r)\) and then hit \(\partial {\mathcal {B}}(0,R)\). Then we have that
Since in each round of the mixing time, the random walk has a positive chance of being outside of \({\mathcal {B}}(0,Q)\), it follows that there exists a constant \(C > 0\) such that
Let \(\sigma \) be the first time that X hits \(\partial {\mathcal {B}}(0,Q)\). We have that,
Combining, we have thus shown so far that
The result then follows because \({{\mathbb {E}}}_{z}\left[ \xi \mathfrak {1}(E_{r,Q})\right] \asymp n^d / r^{d-2}\) from Lemma 8.3. \(\square \)
Proof of Lemma 4.1
Let N be the index of the first excursion from \(\partial {\mathcal {B}}(x,R)\) back to itself through \(\partial {\mathcal {B}}(x,r)\) which hits x. Then from Lemma 3.2 it follows that N is essentially a geometric random variable with expectation
since \(r=o(R)\). Let \(\zeta _i\) be the length of the i-th such excursion. If \(z \in \partial {\mathcal {B}}(x,R)\), then we have that
Proposition 9.1 gives that
and hence putting everything together we obtain
If \(z \notin {\mathcal {B}}(x,R)\), then
In this case, by Lemma 8.3 we have \({{\mathbb {E}}}_{z}\left[ \tau _{\partial {\mathcal {B}}(x,R)}\right] =O(n^d/ R^{d-2})\), and hence from the above we get that if \(z\notin {\mathcal {B}}(x,R)\), then
If \(z \in {\mathcal {B}}(x,R)\), then we have that
Therefore, combining everything we get that
and this concludes the proof. \(\square \)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Miller, J., Sousi, P.: Uniformity of the late points of random walk on \({\mathbb {Z}}_{n}^{d}\) for \(d \ge 3\). Probab. Theory Relat. Fields 167, 1001–1056 (2017). https://doi.org/10.1007/s00440-016-0697-1