1 Introduction

Let \(S^{(1)},\dots ,S^{(h)}\) be independent, simple, symmetric random walks on \(\mathbb {Z}^2\) starting at the origin. We will use \(\textrm{P}_x\) and \(\textrm{E}_x\) to denote the probability and expectation with respect to the law of the simple random walk when starting from \(x \in \mathbb {Z}^2\) and we will omit the subscripts when the walk starts from 0. For \(1\leqslant i <j \leqslant h\) we define the collision local time between \(S^{(i)}\) and \(S^{(j)}\) up to time N by

$$\begin{aligned} \textsf{L}_N^{(i,j)}:=\sum _{n=1}^N \mathbb {1}_{\{S_n^{(i)}=S_n^{(j)}\}} \,. \end{aligned}$$

Notice that given \(1\leqslant i<j\leqslant h\), \(\textsf{L}_N^{(i,j)}\) has the same law as the number of returns to zero, before time 2N, for a single simple, symmetric random walk S on \(\mathbb {Z}^2\), that is \( \textsf{L}_N^{(i,j)}{\mathop {=}\limits ^{\text {law}}}\textsf{L}_N:=\sum _{n=1}^N \mathbb {1}_{\{S_{2n}=0\}}\). This equality is a consequence of the independence of \(S^{(i)}\), \(S^{(j)}\) and the symmetry of the simple random walk. A first moment calculation shows that

$$\begin{aligned} R_N:= \textrm{E}\big [\textsf{L}_N\big ] = \sum _{n=1}^N \textrm{P}(S_{2n}=0) {\mathop {\approx }\limits ^{N \rightarrow \infty }} \frac{\log N}{\pi }, \end{aligned}$$
(1.1)

see Sect. 2 for more details. It was established by Erdős and Taylor, 60 years ago [10], that under normalisation (1.1), \(\textsf{L}_N\) satisfies the following limit theorem.

Theorem A

([10]) Let \(\textsf{L}_N:=\sum _{n=1}^N \mathbb {1}_{\{S_{2n}=0\}}\) be the local time at zero, up to time 2N, of a two-dimensional, simple, symmetric random walk \((S_n)_{n\geqslant 1}\) starting at 0. Then, as \(N \rightarrow \infty \),

$$\begin{aligned} \frac{\pi }{\log N}\, \textsf{L}_N \xrightarrow {(d)} Y, \end{aligned}$$

where Y has an exponential distribution with parameter 1.
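Theorem A is straightforward to probe by simulation. The following sketch (our own illustration; the horizon, sample size and seed are arbitrary choices, not taken from the text) samples \(\textsf{L}_N\) for many independent planar walks and checks that the empirical mean of \(\frac{\pi }{\log N}\,\textsf{L}_N\) is close to 1, the mean of an \(\textrm{Exp}(1)\) variable.

```python
import numpy as np

def sample_local_times(N, samples, rng):
    """Sample L_N = #{1 <= n <= N : S_{2n} = 0} for `samples` independent
    2D simple symmetric random walks run up to time 2N."""
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=np.int16)
    steps = moves[rng.integers(0, 4, size=(samples, 2 * N))]
    pos = np.cumsum(steps, axis=1)      # S_1, ..., S_{2N} for each sample
    even = pos[:, 1::2]                 # S_2, S_4, ..., S_{2N}
    return ((even == 0).all(axis=2)).sum(axis=1)

rng = np.random.default_rng(0)
N, samples = 1000, 1000
L = sample_local_times(N, samples, rng)
scaled = np.pi * L / np.log(N)
print(scaled.mean())   # close to 1, the mean of an Exp(1) variable
```

For moderate N the empirical mean slightly overshoots 1, by roughly \(\alpha /\log N\) (cf. the refined expansion (2.5) below).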

Theorem A was recently generalised in [17]. In particular,

Theorem B

[17] Let \(h \in \mathbb {N}\) with \(h \geqslant 2\) and \(S^{(1)},\ldots ,S^{(h)}\) be h independent two-dimensional, simple random walks starting all at zero. Then, for \(\beta \) such that \(|\beta | \in (0,1)\), it holds that the total collision time \(\sum _{1\leqslant i<j\leqslant h} \textsf{L}_N^{(i,j)}\) satisfies

$$\begin{aligned} \textrm{E}^{\otimes h}\bigg [ e^{ \frac{\pi \, \beta }{\log N} \sum _{1\leqslant i<j\leqslant h} \textsf{L}_N^{(i,j)}} \bigg ] \xrightarrow [N \rightarrow \infty ]{} \bigg (\frac{1}{1-\beta }\bigg )^{\frac{h(h-1)}{2}}, \end{aligned}$$

and, consequently,

$$\begin{aligned} \frac{\pi }{\log N} \sum _{1\leqslant i<j\leqslant h} \textsf{L}_N^{(i,j)} \xrightarrow [N \rightarrow \infty ]{(d)} \Gamma \big (\tfrac{h(h-1)}{2},1\big ), \end{aligned}$$

where \( \Gamma \big (\tfrac{h(h-1)}{2},1\big )\) denotes a Gamma random variable, with density \(\Gamma (h(h-1)/2)^{-1} x^{\tfrac{h(h-1)}{2}-1} e^{-x}\) for \(x>0\); \(\Gamma (\cdot )\), in the expression of the density, denotes the Gamma function.

Given that a gamma distribution \(\Gamma (k,1)\), with parameter \(k\geqslant 1\), arises as the distribution of the sum of k independent random variables, each exponentially distributed with parameter one (denoted \(\textrm{Exp}(1)\)), Theorem B raises the question of whether the joint distribution of the individual rescaled collision times \(\Big \{\frac{\pi }{\log N} \, \textsf{L}_N^{(i,j)}\Big \}_{1\leqslant i<j \leqslant h}\) converges to that of a family of independent \(\text {Exp}(1)\) random variables. This is what we prove in this work. In particular,

Theorem 1.1

Let \(h \in \mathbb {N}\) with \(h\geqslant 2\) and \(\varvec{\beta }:=\{\beta _{i,j}\}_{1\leqslant i<j \leqslant h} \in \mathbb {R}^{\frac{h(h-1)}{2}}\) with \(|\beta _{i,j}| <1\) for all \(1\leqslant i<j\leqslant h\). Then we have that

$$\begin{aligned} \textrm{E}^{\otimes h}\bigg [ e^{ \frac{\pi }{\log N} \sum _{1\leqslant i<j\leqslant h} \beta _{i,j} \textsf{L}_N^{(i,j)}} \bigg ] \xrightarrow [N \rightarrow \infty ]{} \prod _{1\leqslant i<j \leqslant h} \frac{1}{1-\beta _{i,j}} \, \end{aligned}$$
(1.2)

and, consequently,

$$\begin{aligned} \Big \{ \tfrac{\pi }{\log N} \,\textsf{L}_N^{(i,j)} \Big \}_{1\leqslant i<j\leqslant h} \xrightarrow [N \rightarrow \infty ]{(d)} \big \{ Y^{(i,j)} \big \}_{1\leqslant i<j\leqslant h}, \end{aligned}$$
(1.3)

where \(\big \{ Y^{(i,j)} \big \}_{1\leqslant i<j\leqslant h}\) are independent and identically distributed random variables following an \(\text {Exp}(1)\) distribution.
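As with Theorem A, one can probe Theorem 1.1 numerically. The sketch below (our own illustration, with arbitrary parameters) samples the pairwise collision times of \(h=3\) walks and checks that each rescaled collision time has empirical mean close to 1; the sample correlation between pairs sharing a walk, e.g. (1, 2) and (1, 3), is small and, by Theorem 1.1, expected to vanish as \(N\rightarrow \infty \).

```python
import numpy as np

def sample_collisions(N, samples, h, rng):
    """Sample the pairwise collision times L_N^{(i,j)} of h independent
    2D simple symmetric random walks, all started at the origin."""
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=np.int16)
    walks = [np.cumsum(moves[rng.integers(0, 4, size=(samples, N))], axis=1)
             for _ in range(h)]
    return {(i, j): ((walks[i] == walks[j]).all(axis=2)).sum(axis=1)
            for i in range(h) for j in range(i + 1, h)}

rng = np.random.default_rng(1)
N, samples, h = 1000, 1000, 3
L = sample_collisions(N, samples, h, rng)
scaled = {pair: np.pi * counts / np.log(N) for pair, counts in L.items()}
for pair, vals in scaled.items():
    print(pair, round(vals.mean(), 3))   # each mean close to 1
# sample correlation between two pairs sharing walk 0
print(np.corrcoef(scaled[(0, 1)], scaled[(0, 2)])[0, 1])
```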

An intuitive way to understand the convergence of the individual collision times, or equivalently of the local time of a planar walk, to an exponential variable is the following. By (1.1), the number of visits to zero of a planar walk, which starts at zero, is \(O(\log N)\) and, thus, much smaller than the time horizon 2N. Typically, also, these visits happen within a short time, much smaller than 2N, so that every time the random walk is back at zero, the probability that it will return there again before time 2N is not essentially altered. This results in the local time \(\textsf{L}_N\) being asymptotic to a geometric random variable with parameter of order \((\log N)^{-1}\) (as also manifested by (1.1)), which when rescaled suitably converges to an exponential random variable.
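The geometric-to-exponential step of this heuristic can be checked directly: if \(G_p\) is geometric with success probability p, so that \(\textrm{P}(G_p>m)=(1-p)^{m}\), then \(\textrm{P}(pG_p>t)=(1-p)^{\lfloor t/p\rfloor }\rightarrow e^{-t}\) as \(p\downarrow 0\). A minimal numerical sketch (the grid and parameter values are our own):

```python
import math

def sup_cdf_gap(p, tmax=20.0, step=0.01):
    """Largest gap, over a grid of t, between P(p*G_p > t) and the
    Exp(1) tail e^{-t}, for a geometric variable G_p with parameter p."""
    gap, t = 0.0, 0.0
    while t <= tmax:
        geom_tail = (1.0 - p) ** math.floor(t / p)   # P(G_p > t/p)
        gap = max(gap, abs(geom_tail - math.exp(-t)))
        t += step
    return gap

print(sup_cdf_gap(1e-1))  # noticeably positive
print(sup_cdf_gap(1e-3))  # small: the rescaled geometric is nearly Exp(1)
```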

The fact that the joint distribution of \( \Big \{ \tfrac{\pi }{\log N} \,\textsf{L}_N^{(i,j)} \Big \}_{1\leqslant i<j\leqslant h} \) converges to that of independent exponentials is much less apparent as the collision times have obvious correlations. A way to understand this is, again, through the fact that collisions happen at time scales much shorter than the time horizon N and, thus, it is not really possible to distinguish which pairs collide when collisions take place. More crucially, the logarithmic scaling, as indicated via (1.1), introduces a separation of scales between collisions of different pairs of walks, which is what, essentially, leads to the asymptotic factorisation of the Laplace transform (1.2). This intuition is reflected in the two main steps of our proof, which are carried out in Sects. 3.3 and 3.4.

Even though the Erdős–Taylor theorem appeared a long time ago, the multivariate extension that we establish here appears to be new. In [11] it was shown that the law of \(\tfrac{\pi }{\log N}\, \textsf{L}^{(1,2)}_N\), conditioned on \(S^{(1)}\), converges a.s. to that of an \(\textrm{Exp}(1)\) random variable. This implies that \(\big \{ \tfrac{\pi }{\log N} \,\textsf{L}^{(1,i)}_N\big \}_{1<i\leqslant h}\) converge to independent exponentials. However, it does not address the full independence of the family of all pairwise collisions \(\big \{ \tfrac{\pi }{\log N} \,\textsf{L}^{(i,j)}_N\big \}_{1\leqslant i<j\leqslant h}\).

In the continuum, phenomena of independence in functionals of planar Brownian motions have appeared in works around log-scaling laws; see [18] (where the term log-scaling laws was introduced) as well as [19] and [14]. These works are mostly concerned with the problem of identifying the limiting distribution of windings of a planar Brownian motion around a number of points \(z_1,\ldots ,z_k\), different from the starting point of the Brownian motion, or the winding around the origin of the differences \(B^{(i)}-B^{(j)}\) between k independent Brownian motions \(B^{(1)},\ldots ,B^{(k)}\), all starting from different points, which are also different from zero. Without getting into details, we mention that the results of [14, 18, 19] establish that the windings (as well as some other functionals that fall within the class of log-scaling laws) converge, when logarithmically scaled, to independent Cauchy variables. [14] outlines a proof that the local times of the differences \(B^{(i)}-B^{(j)}, 1\leqslant i<j\leqslant k\), on the unit circle \(\{z\in \mathbb {R}^2 :|z|=1\}\) converge, jointly, to independent exponentials \(\textrm{Exp}(1)\), when logarithmically scaled, in a fashion similar to the scaling of Theorem 1.1. The methods employed in the above works rely heavily on continuous techniques (Itô calculus, time changes, etc.), which do not have discrete counterparts. In fact, the passage from continuous to discrete is not straightforward either at a technical level (see e.g. the discussion on page 41 of [14] and [15]) or at a phenomenological level (see e.g. the discussion on page 736 of [18]).

The approach we follow towards Theorem 1.1 starts with expanding the joint Laplace transform in the form of chaos series, which take the form of Feynman-type diagrams. To control (and simplify) these diagrams, we start by introducing a renewal representation as well as a functional analytic framework. The renewal theoretic framework was originally introduced in [3] in the context of scaling limits of random polymers (we will come back to the connection with polymers later on) and it captures the stream of collisions within a single pair of walks. The functional analytic framework can be traced back to works on spectral theory of delta-Bose gases [8, 9] and was also recently used in works on random polymers [4, 12, 17]. The core of this framework is to establish operator norm bounds for the total Green’s functions of a set of planar random walks conditioned on a subset of them starting at the same location and on another subset of them ending up at the same location. Roughly speaking, the significance of these operator estimates is to control the redistribution of collisions when walks switch pairs. The operator framework (together with the renewal one) allows us to reduce the number of Feynman-type diagrams that need to be considered. For the reduced Feynman diagrams, one then needs to look into the logarithmic structure, which induces a separation of scales and leads to the fact that, asymptotically, the structure of the Feynman diagrams becomes that of the product of Feynman diagrams corresponding to Laplace transforms of single pairs of random walks.

Relations to random polymers. Exponential moments of collision times arise naturally when one looks at moments of partition functions of the model of directed polymer in a random environment (DPRE); we refer to [5] for an introduction to this model. For a family of i.i.d. variables \(\big ( \omega _{n,x} :n\in \mathbb {N}, x\in \mathbb {Z}^2 \big )\) with log-moment generating function \(\lambda (\beta )\), the partition function of the directed polymer measure is defined as

$$\begin{aligned} Z_{N,\beta }(x):=\textrm{E}_{x}\Big [\exp \Big (\sum _{n=1}^{N} \big (\beta \omega _{n,S_n} -\lambda (\beta ) \big ) \Big ) \Big ] , \end{aligned}$$

where \(\textrm{E}_{x}\) is the expected value with respect to a simple, symmetric walk starting at \(x\in \mathbb {Z}^2\). In the case that \(\omega _{n,x}\) is a standard normal variable, an explicit computation, for \(\beta _N:=\beta \sqrt{\tfrac{\pi }{ \log N}}\), gives that

$$\begin{aligned} {\mathbb {E}}\Big [ \big ( Z_{N,\beta _N}(x) \big )^h \Big ] = \textrm{E}^{\otimes h}\bigg [ e^{ \frac{\pi \beta ^2}{\log N} \sum _{1\leqslant i<j\leqslant h} \textsf{L}_N^{(i,j)}} \bigg ]. \end{aligned}$$
(1.4)

A corollary of our Theorem 1.1 is that the h-th moment of the DPRE partition function converges to \((1-\beta ^2)^{-h(h-1)/2}\); a result that was previously obtained in [17] by combining upper bounds on moments, established in [17], with results on the distributional convergence of the partition function established in [1]. Using Theorem 1.1, we can further extend this to convergence of mixed moments. More precisely, for \(\beta _{i,N}:=\beta _i \sqrt{\tfrac{\pi }{\log N}}\) and \(i=1,\ldots ,h\), we have that

$$\begin{aligned} {\mathbb {E}}\big [ Z_{N,\beta _{1,N}}(x) \cdots Z_{N,\beta _{h,N}}(x) \big ]&= \textrm{E}^{\otimes h}\bigg [ e^{ \frac{\pi }{\log N} \sum _{1\leqslant i<j\leqslant h} \beta _{i}\beta _{j} \textsf{L}_N^{(i,j)}} \bigg ] \nonumber \\&\quad \xrightarrow [N\rightarrow \infty ]{} \prod _{1\leqslant i<j \leqslant h} \frac{1}{1-\beta _i\beta _j}, \end{aligned}$$
(1.5)

where the equality is again via an explicit computation as in (1.4) when the disorder is standard normal, and the convergence follows from Theorem 1.1 after specialising the parameters \(\beta _{i,j}\) to the particular case of \(\beta _i\beta _j\).

Moment estimates on polymer partition functions are important in establishing fine properties, such as the structure of maxima, of the field of partition functions \(\big \{ \sqrt{\log N } \big ( \log Z_{N,\beta }(x) -{\mathbb {E}}[\log Z_{N,\beta }(x) ] \big ):x\in \mathbb {Z}^2 \big \}\), which is known to converge to a log-correlated Gaussian field [2]. We refer to [6] for more details. We expect the independence structure of the collision local times, which we establish here, to be useful towards these investigations. An interesting problem in relation to this (but also of broader interest) is how large the number h of random walks can be (depending on N) before we start seeing correlations in the limit of the rescaled collisions. The work of Cosco–Zeitouni [6] has shown that there exists \(\beta _0 \in (0,1)\) such that for all \(\beta \in (0,\beta _0)\) and \(h=h_N \in \mathbb {N}\) such that

$$\begin{aligned} \limsup _{N \rightarrow \infty } \frac{3\beta }{1-\beta } \frac{1}{\log N} \left( {\begin{array}{c}h\\ 2\end{array}}\right) <1, \end{aligned}$$

one has that

$$\begin{aligned} \textrm{E}^{\otimes h}\bigg [ e^{ \frac{\pi \, \beta }{\log N} \sum _{1\leqslant i<j\leqslant h} \textsf{L}_N^{(i,j)}} \bigg ] \leqslant c(\beta )\, \Big (\frac{1}{1-\beta }\Big )^{\left( {\begin{array}{c}h\\ 2\end{array}}\right) (1+\epsilon _N)}, \end{aligned}$$

with \(c(\beta ) \in (0,\infty )\) and \(0\leqslant \epsilon _N =\epsilon (\beta ,N)\downarrow 0\) as \(N \rightarrow \infty \). This suggests that the threshold might be \(h=h_N=O(\sqrt{\log N})\). More recent results [7] imply that the independence fails when the number of walks is \( \gg \log N\). The question of whether there is a critical constant \(c_{crit.}\) such that for a number of walks larger than \(c_{crit.}\log N\) the independence of the collisions fails is an open and interesting problem.

Outline. The structure of the article is as follows: In Sect. 2 we set the framework of the chaos expansion, its graphical representations in terms of Feynman-type diagrams, as well as the renewal and functional analytic frameworks. In Sect. 3 we carry out the approximation steps, which lead to our theorem. At the beginning of Sect. 3 we also provide an outline of the scheme.

We close by mentioning our convention on the constants: whenever a constant depends on specific parameters, we will indicate this at the beginning of the statements but then drop the dependence, while if no dependence on parameters is indicated, then they will be understood as absolute constants.

2 Chaos expansions and auxiliary results

In this section we will introduce the framework, within which we work, and which consists of chaos expansions for the joint Laplace transform

$$\begin{aligned} M^{\varvec{\beta }}_{N,h}:=\textrm{E}^{\otimes h}\bigg [ e^{ \sum _{1\leqslant i<j\leqslant h} \frac{\pi \beta _{i,j}}{\log N} \, \textsf{L}_N^{(i,j)}} \bigg ], \end{aligned}$$
(2.1)

for a fixed collection of numbers \(\varvec{\beta }:=\{\beta _{i,j}\}_{1\leqslant i<j \leqslant h} \in \mathbb {R}^{\frac{h(h-1)}{2}}\) with \(|\beta _{i,j} | \in (0,1)\) for all \(1 \leqslant i<j \leqslant h\). We set

$$\begin{aligned} {\bar{\beta }}:=\max _{1\leqslant i<j \leqslant h} | \beta _{i,j} |<1, \end{aligned}$$
(2.2)

and define

$$\begin{aligned} \sigma _N^{i,j}:=\sigma _N^{i,j}(\beta _{i,j}):= e^{ \beta ^{i,j}_N}-1 \qquad \text {with} \qquad \beta ^{i,j}_N:=\frac{\pi \, \beta _{i,j}}{\log N}. \end{aligned}$$
(2.3)

Convention: From now on we will assume that all parameters \(\beta _{i,j}\) are nonnegative. We will return to the general case at the very end, when discussing the proof of Theorem 1.1.

We will use the notation \(q_n(x):=\textrm{P}(S_n=x)\) for the transition probability of the simple, symmetric random walk. The expected collision local time between two independent simple, symmetric random walks will be

$$\begin{aligned} R_N:=\textrm{E}^{\otimes 2}\Big [\sum _{n=1}^N \mathbb {1}_{S_n^{(1)}=S_n^{(2)}}\Big ]=\sum _{n=1}^N q_{2n}(0) \end{aligned}$$
(2.4)

and by Proposition 3.2 in [3] we have that in the two-dimensional setting

$$\begin{aligned} R_N=\frac{\log N}{\pi }+ \frac{\alpha }{\pi }+ o(1), \end{aligned}$$
(2.5)

as \(N\rightarrow \infty \), with \(\alpha =\gamma +\log 16 -\pi \simeq 0.208\), where \(\gamma \simeq 0.577\) is the Euler–Mascheroni constant.
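The expansion (2.5) can be verified numerically using the classical identity \(q_{2n}(0)=\big (2^{-2n}\left( {\begin{array}{c}2n\\ n\end{array}}\right) \big )^2\), obtained by rotating the planar walk into two independent one-dimensional walks. A sketch (our own check; the recursion avoids computing large binomial coefficients):

```python
import math

def R(N):
    """R_N = sum_{n=1}^N q_{2n}(0), with q_{2n}(0) = (2^{-2n} C(2n,n))^2."""
    total, a = 0.0, 1.0       # a = 2^{-2n} C(2n,n), starting at n = 0
    for n in range(1, N + 1):
        a *= (2 * n - 1) / (2 * n)   # a_n = a_{n-1} * (2n-1)/(2n)
        total += a * a
    return total

N = 10**6
gamma = 0.5772156649015329            # Euler-Mascheroni constant
alpha = gamma + math.log(16) - math.pi
r_exact = R(N)
print(r_exact, (math.log(N) + alpha) / math.pi)  # the two agree closely
```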

2.1 Chaos expansion for two-body collisions and renewal framework

We start with the Laplace transform of the simple case of two-body collisions \( \textrm{E}\Big [e^{\beta ^{i,j}_N\, \textsf{L}^{(i,j)}_N}\Big ]\) and deduce its chaos expansion as follows:

$$\begin{aligned} \textrm{E}\Big [e^{\beta ^{i,j}_N\, \textsf{L}^{(i,j)}_N}\Big ]&= \textrm{E}\bigg [e^{ \beta ^{i,j}_N \, \sum _{n=1}^N \sum _{x\in \mathbb {Z}^2} \mathbb {1}_{\{ S_n^{(i)}=x \}} \mathbb {1}_{\{S_n^{(j)}=x \}}} \bigg ] \nonumber \\&=\textrm{E}\bigg [ \prod _{\begin{array}{c} 1\leqslant n \leqslant N \\ x\in \mathbb {Z}^2 \end{array}} \Big ( 1+ \Big (e^{ \beta ^{i,j}_N}-1 \Big ) \, \mathbb {1}_{\{ S_n^{(i)}=x \}} \mathbb {1}_{\{S_n^{(j)}=x \}} \Big )\bigg ] \nonumber \\&=1+\sum _{k\geqslant 1} (\sigma _N^{i,j})^k \sum _{\begin{array}{c} 1\leqslant n_1< \dots<n_k\leqslant N \\ x_1,\ldots ,x_k\in \mathbb {Z}^2 \end{array}} \textrm{E}\bigg [\prod _{a=1}^k \, \mathbb {1}_{\{ S_{n_a}^{(i)}=x_a \}} \mathbb {1}_{\{S_{n_a}^{(j)}=x_a \}} \bigg ] \nonumber \\&=1+ \sum _{k\geqslant 1} (\sigma _N^{i,j})^k \sum _{\begin{array}{c} 1\leqslant n_1< \dots <n_k\leqslant N, \, \\ x_1,\dots ,x_k \in \mathbb {Z}^2 \end{array}} \prod _{a=1}^k q^2_{n_a-n_{a-1}}(x_a-x_{a-1}) \end{aligned}$$
(2.6)

where \((n_0,x_0):=(0,0)\), and where in the last equality we used the Markov property, in the third we expanded the product, and in the second we used the simple fact that

$$\begin{aligned} e^{\beta _N^{i,j} \, \mathbb {1}_{\{ S_n^{(i)}=S_n^{(j)}=x \}} }&= 1 + \Big ( e^{\beta _N^{i,j} \, \mathbb {1}_{\{ S_n^{(i)}= S_n^{(j)}=x\} } }-1 \Big ) = 1 + \Big ( e^{\beta _N^{i,j}} -1 \Big ) \mathbb {1}_{\{ S_n^{(i)}=S_n^{(j)}=x \} } \\&= 1 + \sigma _N^{i,j} \, \mathbb {1}_{\{ S_n^{(i)}=S_n^{(j)}=x \} }, \end{aligned}$$

with \(\sigma ^{i,j}_{N}\) defined in (2.3). We will express (2.6) in terms of the following quantity \(U^{\beta }_N(n,x)\), which plays an important role in our formulation. For \(\sigma _N:=\sigma _N(\beta ):=e^{\frac{\pi \beta }{\log N}}-1\) and \((n,x)\in \mathbb {N}\times \mathbb {Z}^2\), we define

$$\begin{aligned} \begin{aligned} U^{\beta }_N(n,x)&:=\sigma _{N}\, q^2_{n}(x) + \sum _{k \geqslant 1} \sigma _{N}^{k+1} \sum _{\begin{array}{c} 0<n_1<\dots<n_k<n \\ z_1,z_2,\dots ,z_k \in \mathbb {Z}^2 \end{array}} \\&\quad q^2_{n_1}(z_1)\Big \{ \prod _{j=2}^k q^2_{n_j-n_{j-1}}(z_j-z_{j-1}) \Big \}\, q^2_{n-n_k}(x-z_k). \end{aligned} \end{aligned}$$
(2.7)

with the convention \(U_N^{\beta }(n,x):=\mathbb {1}_{\{x=0\}}\) for \(n=0\). Moreover, for \(n\in \mathbb {N}\) we define

$$\begin{aligned} U^{\beta }_N(n):=\sum _{x \in \mathbb {Z}^2} U^{\beta }_N(n,x) \,. \end{aligned}$$

\(U^{\beta }_N(n,x)\) represents the Laplace transform of the two-body collisions, scaled by \(\beta \), between a pair of random walks that are constrained to end at the space-time point \((n,x) \in \{1,\dots ,N\}\times \mathbb {Z}^2\), starting from \((0,0)\). In particular, for any \(1\leqslant i <j\leqslant h\), we can write (2.6) as

$$\begin{aligned} \textrm{E}\Big [e^{\beta ^{i,j}_N\, \textsf{L}^{(i,j)}_N}\Big ] = \sum _{n=0}^N \sum _{x \in \mathbb {Z}^2} U^{\beta _{i,j}}_N(n,x) = \sum _{n=0}^N U^{\beta _{i,j}}_N(n). \end{aligned}$$

We will call \(U^{\beta }_N(n,x)\) a replica and, for \(\sigma _N(\beta )=e^{\frac{\pi \beta }{\log N}}-1\), we will represent \(\sigma _N(\beta )\, U^{\beta }_N(n,x)\) graphically as a wiggle line. In this representation, solid lines going from \((n,x)\) to \((n',x')\) are assigned weights \(q_{n'-n}(x'-x)\) and every solid dot is assigned the weight \(\sigma _N(\beta )=e^{\frac{\pi \beta }{\log N}}-1\).

\(U^{\beta }_N(n)\) and \(U^{\beta }_N(n,x)\) admit a very useful probabilistic interpretation in terms of certain renewal processes. More specifically, consider the family of i.i.d. random variables \((T^{\scriptscriptstyle (N)}_i, X^{\scriptscriptstyle (N)}_i)_{i \geqslant 1}\) with law

$$\begin{aligned} \textrm{P}\Big (\,\big (T^{\scriptscriptstyle (N)}_1, X^{\scriptscriptstyle (N)}_1 \big )=(n,x) \Big )=\frac{q_n^2(x)}{R_N}\,\mathbb {1}_{\{n \leqslant N\}} \, . \end{aligned}$$

with \(R_N\) defined in (2.4). Define the random variables \(\tau ^{\scriptscriptstyle (N)}_k:=T_1^{\scriptscriptstyle (N)}+\dots +T_k^{\scriptscriptstyle (N)}\) and \(S^{\scriptscriptstyle (N)}_k:=X^{\scriptscriptstyle (N)}_1+\dots +X^{\scriptscriptstyle (N)}_k\) for \(k \geqslant 1\), with \((\tau ^{\scriptscriptstyle (N)}_0,S^{\scriptscriptstyle (N)}_0):=(0,0)\). It is not difficult to see that \(U^{\beta }_N(n,x) \) and \(U^{\beta }_N(n) \) can now be written as

$$\begin{aligned} U^{\beta }_N(n,x)&= \sum _{k\geqslant 0} (\sigma _N R_N)^{k} \, \textrm{P}\big (\tau ^{\scriptscriptstyle (N)}_k=n, S^{\scriptscriptstyle (N)}_k=x\big ) \quad \text {and} \nonumber \\ U^{\beta }_N(n)&= \sum _{k\geqslant 0} (\sigma _N R_N)^{k} \, \textrm{P}\big (\tau ^{\scriptscriptstyle (N)}_k=n\big ) \,. \end{aligned}$$
(2.8)

This formalism was developed in [3] and is very useful in obtaining sharp asymptotic estimates. In particular, it was shown in [3] that the rescaled process \(\Big (\frac{\tau ^{(N)}_{\lfloor s\log N\rfloor }}{N}, \frac{S^{(N)}_{\lfloor s\log N\rfloor }}{\sqrt{N}} \Big )\) converges in distribution as \(N\rightarrow \infty \), with the law of the limiting marginal process of \(\tfrac{\tau ^{(N)}_{\lfloor s\log N\rfloor }}{N}\) being that of the Dickman subordinator, which was defined in [3] as a truncated, zero-stable Lévy process.

An estimate that follows easily from this framework, which is useful for our purposes here, is the following: for \(\beta <1\), it holds

$$\begin{aligned} \limsup _{N\rightarrow \infty }\sum _{n=0}^N U^{\beta }_N(n)&=\limsup _{N\rightarrow \infty }\sum _{k\geqslant 0} (\sigma _N R_N)^{k} \, \textrm{P}\big (\tau ^{\scriptscriptstyle (N)}_k \leqslant N\big ) \nonumber \\&\leqslant \limsup _{N\rightarrow \infty } \sum _{k\geqslant 0} (\sigma _N R_N)^{k} \nonumber \\&= \limsup _{N\rightarrow \infty } \frac{1}{1-\sigma _N R_N} = \frac{1}{1-\beta }, \end{aligned}$$
(2.9)

where we used the fact that

$$\begin{aligned} \sigma _N R_N = \big (e^{\frac{\pi \beta }{\log N}}-1\big )\cdot \Big ( \frac{\log N}{\pi }+\frac{\alpha }{\pi }+o(1)\Big ) \xrightarrow [N\rightarrow \infty ]{} \beta <1. \end{aligned}$$
(2.10)
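Summing (2.7) over x collapses each kernel \(q^2\) to a return probability, since \(\sum _{z\in \mathbb {Z}^2} q_m(z)^2=q_{2m}(0)\); hence on \(\{0,\dots ,N\}\) the sequence \(U^{\beta }_N(\cdot )\) solves the renewal equation \(U^{\beta }_N(n)=\sigma _N\sum _{m=0}^{n-1}U^{\beta }_N(m)\, q_{2(n-m)}(0)\) with \(U^{\beta }_N(0)=1\). This gives an exact \(O(N^2)\) computation of \(\sum _{n=0}^N U^{\beta }_N(n)\), which, consistently with (2.9) (and Theorem B for \(h=2\)), approaches \((1-\beta )^{-1}\) as \(N\rightarrow \infty \). A sketch (the parameter values are our own):

```python
import math

def q2n0(N):
    # q_{2n}(0) = (2^{-2n} C(2n,n))^2 for n = 1..N, via a stable recursion
    r = [0.0] * (N + 1)
    a = 1.0                      # a = 2^{-2n} C(2n,n), starting at n = 0
    for n in range(1, N + 1):
        a *= (2 * n - 1) / (2 * n)
        r[n] = a * a
    return r

def sum_U(N, beta):
    # U(0) = 1 and U(n) = sigma_N * sum_{m<n} U(m) q_{2(n-m)}(0), n >= 1
    sigma = math.exp(math.pi * beta / math.log(N)) - 1.0
    r = q2n0(N)
    U = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        U[n] = sigma * sum(U[m] * r[n - m] for m in range(n))
    return sum(U)

N, beta = 2000, 0.5
s = sum_U(N, beta)
print(s, 1.0 / (1.0 - beta))   # the two agree up to O(1/log N) corrections
```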

2.2 Chaos expansion for many-body collisions

We now move to the expansion of the Laplace transform \(M^{\varvec{\beta }}_{N,h}\) of the many-body collisions. The goal is to obtain an expansion in the form of products of certain Markovian operators. The desired expression will be presented in (2.17). This expansion will be instrumental in obtaining some important estimates in Sect. 2.3.

The first steps are similar to those in the expansion for the two-body collisions above. In particular, we have

$$\begin{aligned} M^{\varvec{\beta }}_{N,h}&:= \textrm{E}^{\otimes h}\bigg [e^{\sum _{1\leqslant i<j \leqslant h}\beta ^{i,j}_N \, \textsf{L}^{(i,j)}_{N}}\bigg ] = \textrm{E}\bigg [ \prod _{1\leqslant i<j \leqslant h} \, \prod _{\begin{array}{c} 1\leqslant n \leqslant N \\ x\in \mathbb {Z}^2 \end{array}} \Big ( 1+\sigma _N^{i,j} \, \mathbb {1}_{\{ S_n^{(i)}=x \}} \mathbb {1}_{\{S_n^{(j)}=x \}} \Big )\bigg ] \nonumber \\&= 1+ \sum _{k\geqslant 1} \,\, \sum _{\begin{array}{c} (i_a, j_a, n_a, x_a) \in {\mathcal {A}}_h, \,\, \text {for} \,\, a=1,\ldots ,k \\ \text {distinct} \end{array}} \,\, \textrm{E}\Big [ \prod _{a=1}^k \sigma _N^{i_a,j_a}\, \mathbb {1}_{\{ S_{n_a}^{(i_a)} = x_a \}} \,\mathbb {1}_{ \{S_{n_a}^{(j_a)}=x_a \}} \Big ]\quad \end{aligned}$$
(2.11)

where the last sum is over k distinct elements of the set

$$\begin{aligned} {\mathcal {A}}_h:=\big \{ (i,j,n,x) \in \mathbb {N}^3\times \mathbb {Z}^2 :1\leqslant i<j \leqslant h\big \}. \end{aligned}$$
Fig. 1

This is a graphical representation of expansion (2.11) corresponding to the collisions of four random walks, each starting from the origin. Each solid line will be marked with the label of the walk that it corresponds to throughout the diagram. Each solid dot, which marks a collision among a subset A of the random walks, is given a weight \(\prod _{i<j:\, i,j\in A} \sigma _N^{i,j}\). Any solid line between points \((m,x)\), \((n,y)\) is assigned the weight of the simple random walk transition kernel \(q_{n-m}(y-x)\). The hollow dots are assigned weight 1 and they mark the places where we simply apply the Chapman–Kolmogorov formula

The graphical representation of expansion (2.11) is depicted in Fig. 1. There, we have marked with black dots the space-time points \((n,x)\) where some of the walks collide and we have assigned to each the weight \(\prod _{1\leqslant i<j \leqslant h}\sigma _N^{i,j} \mathbb {1}_{\{S_n^{(i)}=S_n^{(j)}=x \}}\).

We now want to write the above expansion as a convolution of Markovian operators, following the Markov property of the simple random walks. We can partition the time interval \(\{0,1,\ldots ,N\}\) according to the times when collisions take place; these are depicted in Fig. 1 by vertical lines. In between two successive times m, n, the walks will move from their locations \((x^{(i)})_{i=1,\ldots ,h}\) at time m to their new locations \((y^{(i)})_{i=1,\ldots ,h}\) at time n (some of which might coincide) according to their transition probabilities, giving a total weight to this transition of \(\prod _{i=1}^h q_{n-m}(y^{(i)}-x^{(i)})\). We now want to encode in this product the coincidences that may take place within the sets \((x^{(i)})_{i=1,\ldots ,h}\) and \((y^{(i)})_{i=1,\ldots ,h}\). To this end, we consider partitions I of the set of indices \(\{1,\ldots ,h\}\), which we denote by \(I\vdash \{1,\ldots ,h\}\). We also denote by |I| the number of parts of I. Given a partition \(I \vdash \{1,\dots ,h\}\), we define an equivalence relation \({\mathop {\sim }\limits ^{I}}\) on \(\{1,\dots ,h\}\) such that \(k {\mathop {\sim }\limits ^{I}} \ell \) if and only if k and \(\ell \) belong to the same part of the partition I. Given a vector \(\varvec{y}=(y_1,\dots ,y_h) \in (\mathbb {Z}^2)^h\) and \(I \vdash \{1,\dots ,h\}\), we shall use the notation \(\varvec{y}\sim I\) to mean that \(y_k=y_\ell \) for all pairs \(k {\mathop {\sim }\limits ^{I}}\ell \). We use the symbol \(\circ \) to denote the one-part partition, that is, \(\circ :=\{1,\dots ,h\}\), and \(*\) to denote the partition consisting only of singletons, that is, \(*:=\bigsqcup _{i=1}^h \{i\}\). Moreover, given \(I \vdash \{1,\dots ,h\}\) such that \(|I|=h-1\) and \(I=\{i,j\}\sqcup \bigsqcup _{k \ne i,j} \{k\}\), by slightly abusing notation, we may identify and denote I by its non-trivial part \(\{i,j\}\).
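The partition bookkeeping just introduced is simple to make concrete. Below is an illustrative sketch (0-based indices and function names are ours, not from the paper): a partition is a list of disjoint blocks, \(\varvec{y}\sim I\) means the coordinates agree within each block, and \(\sigma _N(I)\) is the mixed collision weight defined in (2.13) below.

```python
import math

def matches(y, I):
    # y ~ I: the coordinates y_k agree within every block of partition I
    return all(len({y[k] for k in block}) == 1 for block in I)

def sigma_N(I, beta, N):
    # Mixed collision weight (2.13): product of sigma_N^{i,j} over all
    # pairs i < j lying in a common block of I
    w = 1.0
    for block in I:
        b = sorted(block)
        for s, i in enumerate(b):
            for j in b[s + 1:]:
                w *= math.exp(math.pi * beta[(i, j)] / math.log(N)) - 1.0
    return w

h, N = 3, 1000
beta = {(0, 1): 0.2, (0, 2): 0.3, (1, 2): 0.4}
one_part = [set(range(h))]               # the partition denoted "o"
singletons = [{i} for i in range(h)]     # the partition denoted "*"
pair_01 = [{0, 1}, {2}]                  # identified with the pair {0,1}

y = [(0, 0), (0, 0), (5, -1)]
print(matches(y, pair_01), matches(y, one_part))   # True False
print(sigma_N(singletons, beta, N))                # 1.0 (empty product)
```

For the pair partition \(\{0,1\}\) the weight reduces to a single factor \(\sigma _N^{0,1}\), while for \(*\) it is the empty product 1.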

Given this formalism, we denote the total transition weight of the h walks, from points \(\varvec{x}=(x^{(1)},\ldots ,x^{(h)})\in (\mathbb {Z}^2)^h\), subject to constraints \(\varvec{x}\sim I\) at time m, to points \(\varvec{y}=(y^{(1)},\ldots ,y^{(h)})\in (\mathbb {Z}^2)^h\), subject to constraints \(\varvec{y}\sim J\) at time n, by

$$\begin{aligned} Q^{I,J}_{n-m}(\varvec{x},\varvec{y}):=\mathbb {1}_{\{\varvec{x}\sim I\}} \, \prod _{i=1}^h q_{n-m}(y^{(i)}-x^{(i)}) \, \mathbb {1}_{\{\varvec{y}\sim J\}}\,. \end{aligned}$$
(2.12)

We will call this operator the constrained evolution. Furthermore, for a partition \(I \vdash \{1,\dots ,h\}\) and \(\varvec{\beta }=\{\beta _{i,j}\}_{1\leqslant i<j \leqslant h}\) we define the mixed collision weight subject to I as

$$\begin{aligned} \sigma _N(I):=\sigma _N(I,\{\beta _{i,j}\}_{1\leqslant i<j \leqslant h})=\prod _{\begin{array}{c} 1\leqslant i<j \leqslant h, \\ i {\mathop {\sim }\limits ^{I}}j \end{array}} \sigma _N^{i,j}, \end{aligned}$$
(2.13)

with \(\sigma _N^{i,j} \) as defined in (2.3). We can then rewrite (2.11) in the form

$$\begin{aligned} 1+ \sum _{r=1}^{\infty } \sum _{\begin{array}{c} \circ :=I_0 \\ I_1,\dots ,I_r \ne * \end{array}} \prod _{i=1}^r \sigma _N(I_i) \sum _{\begin{array}{c} 1 \leqslant n_1<\dots <n_r\leqslant N \\ 0:=\varvec{x}_0,\varvec{x}_1, \dots , \varvec{x}_r \in (\mathbb {Z}^2)^h \end{array}} \prod _{i=1}^r Q^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) \,. \end{aligned}$$
(2.14)
Fig. 2

This is the simplified version of Figure 1's graphical representation of the expansion (2.14), where we have grouped together the blocks of consecutive collisions between the same pair of random walks. These are now represented by the wiggle lines (replicas) and we call the evolution in strips that contain only one replica a replica evolution (although strip seven is the beginning of another wiggle line, we have not represented it as such since we have not completed the picture beyond that point). The wiggle lines (replicas) between points \((n,x)\), \((m,y)\), corresponding to collisions of a single pair of walks \(S^{(k)}, S^{(\ell )}\), are assigned weight \(U_N^{\beta _{k,\ell }}(m-n, y-x)\). A solid line between points \((m,x)\), \((n,y)\) is assigned the weight of the simple random walk transition kernel \(q_{n-m}(y-x)\)

We want to make one more simplification in this representation, which, however, contains an important structural feature. This is to group together consecutive constrained evolution operators \(\sigma _N(I_i) Q^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) \) for which \(I_{i-1}=I_i\) and \(|I_i|=h-1\). An example in Fig. 1 is the sequence of evolutions in the first three strips and another one is the group of evolutions in strips five and six. Such groupings can be captured by the following definition: For a partition \(I\vdash \{1,\dots ,h\}\) of the form \(I=\{k,\ell \} \sqcup \bigsqcup _{j\ne k,\ell }\{j\}\) and \(\varvec{x}=(x^{(1)},\dots ,x^{(h)})\), \(\varvec{y}=(y^{(1)},\dots ,y^{(h)}) \in (\mathbb {Z}^2)^h\), we define the replica evolution as

$$\begin{aligned} \textsf{U}^{I}_n(\varvec{x},\varvec{y}):= \mathbb {1}_{\{\varvec{x},\varvec{y}\sim I\}} \cdot U^{\beta _{k,\ell }}_{N}(n,y^{(k)}-x^{(k)}) \cdot \prod _{i\ne k,\ell } q_n(y^{(i)}-x^{(i)}), \end{aligned}$$
(2.15)

with \(U^{\beta }_{N}(n,y^{(k)}-x^{(k)})\) defined in (2.7). We call this the replica evolution since in the time interval [0, n] we see a stream of collisions between only two of the random walks. The simplified version of expansion (2.14) (and Fig. 1) is presented in Fig. 2.

In order to re-express (2.14) with the reduction of the replica evolution (2.15), we need to introduce one more formalism, which is

$$\begin{aligned} P^{I;J}_n(\varvec{x},\varvec{y}):= \left\{ \begin{array}{ll} \sum _{\begin{array}{c} m_1\geqslant 1,\,m_2\geqslant 0:\, m_1+m_2=n\, , \\ \varvec{z}\in (\mathbb {Z}^2)^h \end{array}} Q^{I;J}_{m_1}(\varvec{x},\varvec{z})\cdot \textsf{U}^{J}_{m_2}(\varvec{z},\varvec{y}), &{}\quad \text { if } |J|=h-1 , \\ Q_n^{I;J}(\varvec{x},\varvec{y}) , &{}\quad \text { if } |J|<h-1 , \end{array} \right. \end{aligned}$$
(2.16)

where we recall that |J| is the number of parts of J and so \(|J|=h-1\) means that J has the form \(\{k,\ell \}\sqcup \bigsqcup _{i\ne k,\ell } \{i\}\), corresponding to a pairwise collision, while \(|J|<h-1\) means that there are multiple collisions (the latter would correspond to the end of the eighth strip in Fig. 1). In other words, the operator \(P_n^{I;J}\) groups together each replica evolution with its preceding constrained evolution.

We, finally, arrive at the desired expression for the Laplace transform of the many-body collisions:

$$\begin{aligned} M^{\varvec{\beta }}_{N,h} = 1+\sum _{r = 1}^{\infty } \sum _{\begin{array}{c} \circ :=I_0,I_1,\dots ,I_r \\ I_j\ne I_{j+1} \text { if } |I_j|=|I_{j+1}|=h-1 \\ \text {for}\,\, j=1,\ldots ,r-1 \end{array}} \prod _{i=1}^r \sigma _N(I_i)\sum _{\begin{array}{c} 1 \leqslant n_1<\dots <n_r\leqslant N, \\ 0:=\varvec{x}_0,\varvec{x}_1, \dots , \varvec{x}_r \in (\mathbb {Z}^2)^h \end{array}} \prod _{i=1}^r P^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) \,. \end{aligned}$$
(2.17)

2.3 Functional analytic framework and some auxiliary estimates

Let us start with some, fairly easy, bounds on operators Q and \(\textsf{U}\) (with the estimate on the latter being an upgrade of estimate (2.9)).

Lemma 2.1

Let the operators \(Q^{I;J}_n, \textsf{U}^J_n\) be as defined in (2.12) and (2.15), respectively. For all partitions \(I\ne J\) with \(|J|=h-1\), with \({{\bar{\beta }}}<1\) as defined in (2.2) and \(\sigma _N(\cdot )\) as defined in (2.13), we have the bounds

$$\begin{aligned} \sum _{0 \leqslant n \leqslant N, \, \varvec{y}\in (\mathbb {Z}^2)^h} \textsf{U}^J_n(\varvec{x},\varvec{y}) \leqslant \frac{1}{1-{{\bar{\beta }}}\,'} \quad \text {and} \quad \sigma _N(J) \cdot \Bigg ( \sum _{1\leqslant n \leqslant N, \, \varvec{y}\in (\mathbb {Z}^2)^h} Q^{I;J}_n(\varvec{x},\varvec{y}) \Bigg )\leqslant {\bar{\beta }}\,' , \end{aligned}$$
(2.18)

for any \({\bar{\beta }}\,' \in ({{\bar{\beta }}}, 1)\) and all large enough N.

Proof

We start by proving the first bound in (2.18). By definition (2.15) we have that

$$\begin{aligned} \begin{aligned} \sum _{ n\geqslant 0, \varvec{y}\in (\mathbb {Z}^2)^h} \textsf{U}^{J}_n(\varvec{x},\varvec{y})&:= \sum _{n\geqslant 0, \, \varvec{y}\in (\mathbb {Z}^2)^h}\mathbb {1}_{\{\varvec{x},\varvec{y}\sim J\}} \cdot U^{\beta _{k,\ell }}_{N}(n,y^{(k)}-x^{(k)}) \cdot \prod _{j\ne k,\ell } q_n(y^{(j)}-x^{(j)})\\&= \sum _{n \geqslant 0} U^{\beta _{k,\ell }}_N(n), \end{aligned} \end{aligned}$$

by using that \(\sum _{z \in \mathbb {Z}^2} q_n(z)=1\) to sum all the kernels \(q_n(y^{(j)}-x^{(j)})\) for \(j\ne k,\ell \) and \(\sum _{z \in \mathbb {Z}^2}U^{\beta }_N(n,z)=U^{\beta }_N(n)\). Moreover, by definition (2.7) and (2.3), since \(\beta _{k,\ell } \leqslant \bar{\beta }\), we have

$$\begin{aligned} \sum _{n \geqslant 0} U^{\beta _{k,\ell }}_N(n) \leqslant \sum _{n \geqslant 0} U^{{\bar{\beta }}}_N(n) =\sum _{n \geqslant 0} \sum _{k \geqslant 0} (\sigma _N({{\bar{\beta }}}) R_N)^k \,\textrm{P}(\tau ^{\scriptscriptstyle (N)}_k=n), \end{aligned}$$

and by (2.10) we have that for any \({{\bar{\beta }}}\,'\in ({{\bar{\beta }}}, 1)\) and all N large enough

$$\begin{aligned} \sum _{n \geqslant 0} U^{\beta _{k,\ell }}_N(n) \leqslant \sum _{n \geqslant 0} \sum _{k \geqslant 0} ({{\bar{\beta }}}\,')^k \,\textrm{P}(\tau ^{\scriptscriptstyle (N)}_k=n) \leqslant \sum _{k \geqslant 0} ({{\bar{\beta }}}\,')^k =\frac{1}{1-{{\bar{\beta }}}\,'} \,. \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{ n\geqslant 0, \varvec{y}\in (\mathbb {Z}^2)^h} \textsf{U}^{J}_n(\varvec{x},\varvec{y}) \leqslant (1-{{\bar{\beta }}}\,')^{-1} \,. \end{aligned}$$

For the second bound in (2.18) we recall from (2.12) that when \(J=\{k,\ell \} \sqcup \bigsqcup _{j \ne k,\ell } \{j\}\), then

$$\begin{aligned} Q^{I;J}_n(\varvec{x},\varvec{y}):=\bigg (\mathbb {1}_{\{\varvec{x}\sim I\}} \, \prod _{j \ne k,\ell } q_n(y^{(j)}-x^{(j)}) \bigg )\cdot q_n(y^{(k)}-x^{(k)})\cdot q_n(y^{(k)}-x^{(\ell )})\,, \end{aligned}$$

since \(\varvec{y}\sim J\) means that \(y^{(k)}=y^{(\ell )}\). Therefore, \(\sigma _N(J) =\sigma _N(\beta _{k,\ell })\leqslant \sigma _N({{\bar{\beta }}})\). We now use that \(\sum _{z \in \mathbb {Z}^2}q_n(z)=1\) in order to sum the kernels \(q_n(y^{(j)}-x^{(j)}), \, j \ne k,\ell \), while we also have by Cauchy-Schwarz that

$$\begin{aligned} \sigma _N(J)\cdot \bigg ( \sum _{1\leqslant n \leqslant N,\, y^{(k)} \in \mathbb {Z}^2} q_n(y^{(k)}-x^{(k)})\cdot q_n(y^{(k)}-x^{(\ell )})\bigg ) \leqslant \sigma _N({{\bar{\beta }}})\cdot \Big (\sum _{n=1}^N q_{2n}(0)\Big ) \leqslant {\bar{\beta }}\,', \end{aligned}$$

by (2.10), for all N large enough, thus establishing the second bound in (2.18). \(\square \)
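The kernel identities used in this proof, \(\sum _{z \in \mathbb {Z}^2} q_n(z)=1\) and \(\sum _{z \in \mathbb {Z}^2} q_n^2(z)=q_{2n}(0)\), together with the Cauchy-Schwarz bound \(\sup _{u}\sum _z q_n(z)q_n(z+u)\leqslant q_{2n}(0)\), can be checked numerically. The following Python sketch (the helper names are ours, chosen for illustration) computes the exact law \(q_n\) of the two-dimensional simple random walk by dynamic programming:

```python
def srw_step(dist):
    """One step of the 2D simple random walk: spread each mass to the 4 neighbours."""
    new = {}
    for (x, y), p in dist.items():
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            site = (x + dx, y + dy)
            new[site] = new.get(site, 0.0) + p / 4.0
    return new

def srw_dist(n):
    """Exact law q_n of the 2D simple random walk started at the origin."""
    dist = {(0, 0): 1.0}
    for _ in range(n):
        dist = srw_step(dist)
    return dist

n = 4
qn, q2n = srw_dist(n), srw_dist(2 * n)

# total mass: sum_z q_n(z) = 1
assert abs(sum(qn.values()) - 1.0) < 1e-12

# Chapman-Kolmogorov + symmetry: sum_z q_n(z)^2 = q_{2n}(0)
assert abs(sum(p * p for p in qn.values()) - q2n[(0, 0)]) < 1e-12

# Cauchy-Schwarz: sup_u sum_z q_n(z) q_n(z+u) <= q_{2n}(0), attained at u = 0
sup_val = max(
    sum(p * qn.get((x + u, y + v), 0.0) for (x, y), p in qn.items())
    for u in range(-3, 4) for v in range(-3, 4)
)
assert abs(sup_val - q2n[(0, 0)]) < 1e-12
```

At \(u=0\) the Cauchy-Schwarz bound holds with equality, consistent with \(\sum _z q_n^2(z)=q_{2n}(0)\).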

Next, in Proposition 2.2, we are going to recall some norm estimates from [17] on the Laplace transforms of the operators \(P^{I;J}_{n}\), defined in (2.16). For this, we need to set up the functional analytic framework. We start by defining \((\mathbb {Z}^2)^h_I:=\{\varvec{y}\in (\mathbb {Z}^2)^h: \varvec{y}\sim I\}\) and, for \(q \in (1,\infty )\), the \(\ell ^q((\mathbb {Z}^2)^h_I)\) space of functions \(f:(\mathbb {Z}^2)^h_I \rightarrow \mathbb {R}\) which have finite norm

$$\begin{aligned} \left\Vert f\right\Vert _{\ell ^q_I}:=\left\Vert f\right\Vert _{\ell ^q((\mathbb {Z}^2)^h_I)}:=\Bigg (\sum _{\varvec{y}\in (\mathbb {Z}^2)^h_I }\big |f(\varvec{y})\big |^q \Bigg )^{\frac{1}{q}}\,. \end{aligned}$$

For \(q \in (1,\infty )\) and for an operator \(\mathsf T:\ell ^q\big ((\mathbb {Z}^2)^h_J\big )\rightarrow \ell ^q\big ((\mathbb {Z}^2)^h_I\big )\) with kernel \(\mathsf T(\varvec{x},\varvec{y})\), one can define the pairing

$$\begin{aligned} \langle f,\mathsf Tg\rangle :=\sum _{\varvec{x}\,\in (\mathbb {Z}^2)^h_I,\varvec{y}\,\in (\mathbb {Z}^2)^h_J} f(\varvec{x})\mathsf T(\varvec{x},\varvec{y})g(\varvec{y}) \,. \end{aligned}$$
(2.19)

The operator norm will be given by

$$\begin{aligned} \left\Vert \mathsf T\right\Vert _{\ell ^q \rightarrow \ell ^q}:= \sup _{\left\Vert g\right\Vert _{\ell ^q_J} \leqslant 1} \left\Vert \mathsf Tg\right\Vert _{\ell ^q_I} =\sup _{\left\Vert f\right\Vert _{\ell ^p_I}\leqslant 1, \, \left\Vert g\right\Vert _{\ell ^q_J}\leqslant 1} \langle f,\mathsf Tg \rangle , \end{aligned}$$
(2.20)

for \(p,q \in (1,\infty )\) conjugate exponents, i.e. \(\frac{1}{p}+\frac{1}{q}=1\).
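The second equality in (2.20) is an instance of \(\ell ^p\)–\(\ell ^q\) duality: the supremum of the pairing over the unit ball of \(\ell ^p\) equals the \(\ell ^q\) norm, attained at \(f=\textrm{sign}(v)\,|v|^{q-1}/\left\Vert v\right\Vert _{\ell ^q}^{q-1}\). A minimal numerical sketch of this duality (the vector \(v\) and all names are our own illustrative choices, not objects from the paper):

```python
import math
import random

def lq_norm(v, q):
    """The l^q norm of a finite vector."""
    return sum(abs(x) ** q for x in v) ** (1.0 / q)

q = 3.0
p = q / (q - 1.0)          # conjugate exponent: 1/p + 1/q = 1

random.seed(0)
v = [random.uniform(-1, 1) for _ in range(10)]

# Hoelder duality maximizer: f = sign(v) |v|^{q-1} / ||v||_q^{q-1}
norm_v = lq_norm(v, q)
f = [math.copysign(abs(x) ** (q - 1), x) / norm_v ** (q - 1) for x in v]

assert abs(lq_norm(f, p) - 1.0) < 1e-10      # f lies on the unit sphere of l^p
pairing = sum(a * b for a, b in zip(f, v))
assert abs(pairing - norm_v) < 1e-10         # <f, v> attains ||v||_q
```

The same computation, applied to the vector \(\mathsf Tg\), is what justifies rewriting the operator norm as a supremum of pairings in (2.20).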

We introduce the weighted Laplace transforms of the operators \(Q^{I;J}\) and \(\textsf{U}^J\). In particular, let w(x) be any continuous function in \(L^\infty (\mathbb {R}^2)\cap L^1(\mathbb {R}^2)\) such that \(\log w(x)\) is Lipschitz (one can think of \(w(x)=e^{-|x|}\)) and define \(w_N(x):=w(x/\sqrt{N})\). Also, for a function \(g:\mathbb {R}^2\rightarrow \mathbb {R}\) we define the tensor product \(g^{\otimes h}(x_1,\ldots ,x_h)=g(x_1)\cdots g(x_h)\). The weighted Laplace transforms are now defined as

$$\begin{aligned} \begin{aligned}&{\widehat{\textsf{Q}}}^{I;J}_{N,\lambda }(\varvec{x},\varvec{y}):=\bigg (\sum _{n = 1}^{N} e^{-\lambda \frac{n}{N}} Q^{I;J}_{n}(\varvec{x},\varvec{y})\bigg )\cdot \frac{w^{\otimes h}_N(\varvec{x})}{w^{\otimes h}_N(\varvec{y})},\\&{\widehat{\textsf{U}}}^{J}_{N,\lambda }(\varvec{x},\varvec{y}):=\bigg (\sum _{n = 0}^{N} e^{-\lambda \frac{n}{N}}\, \textsf{U}^{J}_{n}(\varvec{x},\varvec{y})\bigg )\cdot \frac{w^{\otimes h}_N(\varvec{x})}{w^{\otimes h}_N(\varvec{y})}\,. \end{aligned} \end{aligned}$$
(2.21)

The passage to a Laplace transform will help us estimate convolutions involving the operators \(Q^{I;J}_{n}(\varvec{x},\varvec{y})\) and \(\textsf{U}^{J}_{n}(\varvec{x},\varvec{y})\), and the introduction of the weight comes in handy for improving integrability when these operators are applied to functions which are not in \(\ell ^1((\mathbb {Z}^2)^h)\). We will see this in Lemma 2.3 below.

We also define the Laplace transform operator of the combined evolution (2.16):

$$\begin{aligned} {{\widehat{\textsf{P}}}}^{I;J}_{N,\lambda }= {\left\{ \begin{array}{ll} {\widehat{\textsf{Q}}}^{I;J}_{N,\lambda }, &{} \text { if } |J| \ne h-1 \\ {\widehat{\textsf{Q}}}^{I;J}_{ N,\lambda } \, {\widehat{\textsf{U}}}^J_{N,\lambda }, &{} \text { if } |J|=h-1 \,. \end{array}\right. } \end{aligned}$$
(2.22)

For our purposes, it will be sufficient to take \(\lambda =0\) and consider operators \({\widehat{\textsf{Q}}}^{I;J}_{N,0}, {\widehat{\textsf{U}}}^J_{N,0}\) and \( {{\widehat{\textsf{P}}}}^{I;J}_{N,0}\).

Using the above formalism we summarise in the next proposition some key estimates of [17] (see Propositions 3.2–3.4 and the proof of Theorem 1.3 therein), which are refinements of estimates in [4] (Section 6) and [8] (Section 3).

Proposition 2.2

Consider the operators \({\widehat{\textsf{Q}}}^{I;J}_{N,0}\) and \( {{\widehat{\textsf{P}}}}^{I;J}_{N,0}\) defined in (2.21) and (2.22) with \(\lambda =0\) and a weight function \(w\in L^\infty (\mathbb {R}^2)\cap L^1(\mathbb {R}^2)\) such that \(\log w(x)\) is Lipschitz. Then there exists a constant \(C=C(h,{\bar{\beta }},w) \in (0, \infty )\) (recall \({{\bar{\beta }}}\) from (2.2)) such that for all \(p,q\in (1,\infty )\) with \(\frac{1}{p}+\frac{1}{q}=1\) and all partitions \(I,J \vdash \{1,\dots ,h\}\), such that \(|I|,|J|\leqslant h-1\) and \(I \ne J\) when \(|I|=|J|=h-1\), we have that

$$\begin{aligned} \left\Vert {{\widehat{\textsf{P}}}}_{N,0}^{I;J}\right\Vert _{\ell ^q \rightarrow \ell ^q} \leqslant C \, p \, q. \end{aligned}$$
(2.23)

Moreover, if \(g \in \ell ^p(\mathbb {Z}^2)\),

$$\begin{aligned} \left\Vert \, g^{\otimes h} \, {\widehat{\textsf{Q}}}^{*;I}_{N,0} \right\Vert _{\ell ^p} \leqslant C \,q\, N^{\frac{1}{q}} \left\Vert g\right\Vert ^{h}_{\ell ^p}, \end{aligned}$$
(2.24)

for \(g^{\otimes h}(x_1,\ldots ,x_h):=g(x_1)\cdots g(x_h)\).

Let us now present the following lemma, which demonstrates how the above functional analytic framework will be used. This lemma will be useful in the first approximation step, performed in the next section, to show that contributions from multiple collisions, i.e. three or more walks colliding, are negligible.

Lemma 2.3

Let \(H_{r,N}\) be the \(r^{th}\) term in the expansion (2.17), that is,

$$\begin{aligned} H_{r,N}:= \sum _{\circ :=I_0,I_1,\dots ,I_r} \prod _{i=1}^r \sigma _N(I_i)\sum _{\begin{array}{c} 1 \leqslant n_1<\dots <n_r\leqslant N, \\ 0:=\varvec{x}_0,\varvec{x}_1, \dots , \varvec{x}_r \in (\mathbb {Z}^2)^h \end{array}} \prod _{i=1}^r P^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) ,\qquad \end{aligned}$$
(2.25)

and \(H^{\mathsf {(multi)}}_{r,N}\) be the corresponding term with the additional constraint that there is at least one multiple collision (i.e. at some point, three or more walks meet), that is,

$$\begin{aligned} H^{\mathsf {(multi)}}_{r,N}:= \sum _{\circ :=I_0,I_1,\dots ,I_r}\Bigg (\prod _{i=1}^r \sigma _N(I_i)\Bigg ) \mathbb {1}_{\{\exists \, 1\leqslant j\leqslant r \,: \, |I_j|<h-1 \}} \sum _{\begin{array}{c} 1 \leqslant n_1<\dots <n_r\leqslant N \\ 0:=\varvec{x}_0,\varvec{x}_1, \dots , \varvec{x}_r \in (\mathbb {Z}^2)^h \end{array}} \prod _{i=1}^r P^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) \,. \end{aligned}$$

Then the following bounds hold:

$$\begin{aligned} H_{r,N} \leqslant \Big (\frac{C\,p\,q}{\log N} \Big )^r N^{\frac{h+1}{q}} \qquad \text {and} \qquad H^{\mathsf {(multi)}}_{r,N}\leqslant \frac{r}{\log N} \Big (\frac{C\,p\,q}{\log N} \Big )^r N^{\frac{h+1}{q}} , \end{aligned}$$
(2.26)

for any \(p,q\in (1,\infty )\) with \(\tfrac{1}{p}+\tfrac{1}{q}=1\) and a constant C that depends on h and \( {{\bar{\beta }}}\) but is independent of N, r, p and q.

Proof

We start by taking \(w(x)=e^{-|x|}\), setting \(w_N(x):=w(\tfrac{x}{\sqrt{N}})\) and \(w_N^{\otimes h}(x_1,\ldots ,x_h)=\prod _{i=1}^h w_N(x_i)\), and inserting into the expression (2.25) the term

$$\begin{aligned} \frac{1}{w_N^{\otimes h}(\varvec{x}_0)} \Big (\prod _{i=1}^r \frac{w_N^{\otimes h}(\varvec{x}_{i-1})}{w_N^{\otimes h}(\varvec{x}_i)} \Big )w_N^{\otimes h}(\varvec{x}_r) =1, \end{aligned}$$

thus rewriting \(H_{r,N}\) as

$$\begin{aligned} \begin{aligned} H_{r,N}&= \sum _{\circ :=I_0,I_1,\dots ,I_r} \prod _{i=1}^r \sigma _N(I_i) \\&\quad \times \sum _{\begin{array}{c} 1 \leqslant n_1<\dots <n_r \leqslant N \\ 0:=\varvec{x}_0,\varvec{x}_1, \dots , \,\varvec{x}_r \in (\mathbb {Z}^2)^h \end{array}} \frac{1}{w_N^{\otimes h}(\varvec{x}_0)} \prod _{i=1}^r P^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) \frac{w_N^{\otimes h}(\varvec{x}_{i-1})}{w_N^{\otimes h}(\varvec{x}_i)} \cdot w_N^{\otimes h}(\varvec{x}_r)\,. \end{aligned} \end{aligned}$$

We can extend the summation over \(\varvec{x}_0\) from \(\varvec{x}_0=0\) to \(\varvec{x}_0\in (\mathbb {Z}^2)^h\) by introducing a delta function \(\delta _0^{\otimes h}\) at zero. Then

$$\begin{aligned} \begin{aligned} H_{r,N}&= \sum _{*:=I_0,I_1,\dots ,I_r} \prod _{i=1}^r \sigma _N(I_i) \\&\quad \times \sum _{\begin{array}{c} 1 \leqslant n_1<\dots <n_r \leqslant N \\ \varvec{x}_0,\varvec{x}_1, \dots , \,\varvec{x}_r \in (\mathbb {Z}^2)^h \end{array}} \frac{\delta _0^{\otimes h}(\varvec{x}_0)}{w_N^{\otimes h}(\varvec{x}_0)} \prod _{i=1}^r P^{I_{i-1};I_i}_{n_i-n_{i-1}}(\varvec{x}_{i-1},\varvec{x}_i) \frac{w_N^{\otimes h}(\varvec{x}_{i-1})}{w_N^{\otimes h}(\varvec{x}_i)} \cdot w_N^{\otimes h}(\varvec{x}_r)\,. \end{aligned} \end{aligned}$$

We can, now, bound the last expression by extending the temporal range of summations from \(1\leqslant n_1<\dots <n_r\leqslant N\) to \(n_i-n_{i-1} \in \{1,\dots , N\}\) for all \(i=1,\ldots ,r\). Recalling the definition of the Laplace transforms of the operators (2.21), (2.22), we, thus, obtain the upper bound

$$\begin{aligned} H_{r,N} \leqslant \sum _{*:=I_0,I_1,\dots ,I_r} \prod _{i=1}^r \sigma _N(I_i) \sum _{\varvec{x}_0,\varvec{x}_1, \dots , \,\varvec{x}_r \in (\mathbb {Z}^2)^h} \frac{\delta _0^{\otimes h}(\varvec{x}_0)}{w_N^{\otimes h}(\varvec{x}_0)} \prod _{i=1}^r {{\widehat{\textsf{P}}}}^{I_{i-1};I_i}_{N,0}(\varvec{x}_{i-1},\varvec{x}_i) \cdot w_N^{\otimes h}(\varvec{x}_r)\,, \end{aligned}$$

which we can write in the more compact and useful notation, using the brackets (2.19), as

$$\begin{aligned} H_{r,N} \leqslant \sum _{I_1,\dots ,I_r} \Big \langle \, \frac{\delta _0^{\otimes h}}{w_N^{\otimes h}}, {\widehat{\textsf{Q}}}^{*;I_1}_{N,0} {\widehat{\textsf{P}}}^{I_1;I_2}_{N,0} \cdots {\widehat{\textsf{P}}}^{I_{r-1};I_r}_{N,0} \, w^{\otimes h}_N\Big \rangle \prod _{i=1}^r \sigma _N(I_i). \end{aligned}$$

We note here that on the right-hand side we set the partition \(I_0\) equal to \(I_0=\{1\}\sqcup \cdots \sqcup \{h\}\). In this case \({\widehat{\textsf{P}}}^{I_{0};I_1}={\widehat{\textsf{P}}}^{*;I_1} ={\widehat{\textsf{Q}}}^{*;I_1} \) by definition (2.22). The delta function \(\delta _0^{\otimes h}(\varvec{x}_0)\) forces all points of \(\varvec{x}_0\) to coincide at zero, thus forcing \(I_0\) to equal the partition \(\circ =\{1,\ldots ,h\}\), but, at the level of operators, we do not yet need to enforce this constraint. At this stage we can proceed with the estimate, using the operator norms (2.20), as

$$\begin{aligned} H_{r,N} \leqslant \sum _{I_1,\dots ,I_r} \left\Vert { \frac{\delta ^{\otimes h}_0}{w^{\otimes h }_N} \, {\widehat{\textsf{Q}}}^{*,I_1}_{N,0}} \right\Vert _{\ell ^p} \, \prod _{i=2}^r\, \left\Vert {\widehat{\textsf{P}}}_{N,0}^{I_{i-1};I_i}\right\Vert _{\ell ^q \rightarrow \ell ^q} \, \left\Vert w_N^{\otimes h}\right\Vert _{\ell ^q } \, \, \cdot \, \prod _{i=1}^r \sigma _N(I_i).\qquad \end{aligned}$$
(2.27)

By (2.24) of Proposition 2.2 we have that

$$\begin{aligned} \left\Vert { \frac{\delta ^{\otimes h}_0}{w^{\otimes h }_N} \, {\widehat{\textsf{Q}}}^{*,I_1}_{N,0}} \right\Vert _{\ell ^p} \leqslant C\,q\, N^{\frac{1}{q}} \left\Vert \frac{\delta _0}{w_N}\right\Vert ^{h}_{\ell ^p}=C\, q\, N^{\frac{1}{q}}, \end{aligned}$$

and by (2.23) we have that for all \(1\leqslant i \leqslant r-1\),

$$\begin{aligned} \left\Vert {{\widehat{\textsf{P}}}}_{N,0}^{I_{i-1};I_{i}}\right\Vert _{\ell ^q \rightarrow \ell ^q} \leqslant C \, p \, q \,. \end{aligned}$$

Inserting these estimates in (2.27) we deduce that

$$\begin{aligned} H_{r,N}&\leqslant (C\,p\,q)^r N^{\frac{1}{q}} \left\Vert \frac{\delta _0}{w_N}\right\Vert ^h_{\ell ^p} \left\Vert w_N\right\Vert _{\ell ^q}^{h} \sum _{I_1,\dots ,I_r} \prod _{i=1}^r\sigma _N(I_i) \nonumber \\&= (C\,p\,q)^r N^{\frac{1}{q}} \left\Vert w_N\right\Vert _{\ell ^q}^{h} \sum _{I_1,\dots ,I_r} \prod _{i=1}^r\sigma _N(I_i) \, , \end{aligned}$$
(2.28)

for a constant \(C=C(h,{\bar{\beta }})\in (0,\infty )\), not depending on pqrN. We now notice that for any partition \(I\vdash \{1,\ldots ,h\}\), it holds that \(\sigma _N(I)\leqslant C/ \log N\) (recall definitions (2.13) and (2.3)), so

$$\begin{aligned} \sum _{I_1,\dots ,I_r} \prod _{i=1}^r\sigma _N(I_i) \leqslant \Big (\frac{2^h C}{\log N}\Big )^r. \end{aligned}$$

Moreover, by Riemann summation, \(N^{-h/q}\left\Vert w_N\right\Vert _{\ell ^q}^{h}\) is bounded uniformly in N. Therefore, applying these estimates to (2.28), we arrive at the bound

$$\begin{aligned} H_{r,N} \leqslant \Big (\frac{C\,p\,q}{\log N} \Big )^r N^{\frac{h+1}{q}}, \end{aligned}$$

for a new constant \(C=C(h,{\bar{\beta }})\in (0,\infty )\), which is the first claimed estimate in (2.26). For the second estimate in (2.26) we follow the same steps until we arrive at the bound

$$\begin{aligned} H^{\mathsf {(multi)}}_{r,N} \leqslant (C\,p\,q)^r N^{\frac{1}{q}} \left\Vert w_N\right\Vert _{\ell ^q}^{h} \sum _{I_1,\dots ,I_r}\prod _{i=1}^r \sigma _N(I_i) \mathbb {1}_{\{\exists \, 1\leqslant j\leqslant r:|I_j|<h-1\}}. \end{aligned}$$

Then we notice that for a partition \(I\vdash \{1,\ldots ,h\}\) with \(|I|<h-1\) it holds that \(\sigma _N(I)\leqslant C (\log N)^{-2}\) (recall definitions (2.13) and (2.3)). This fact, together with the fact that there are at most r choices of the index j for which \(|I_j|<h-1\), leads to the second bound in (2.26). \(\square \)
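The Riemann-summation step used in the proof above, namely the uniform boundedness of \(N^{-h/q}\left\Vert w_N\right\Vert _{\ell ^q}^{h}\), amounts to the convergence \(\frac{1}{N}\sum _{x\in \mathbb {Z}^2} w(x/\sqrt{N})^q \rightarrow \int _{\mathbb {R}^2} w(x)^q \,\textrm{d}x = \frac{2\pi }{q^2}\) for \(w(x)=e^{-|x|}\). A small numerical sketch of this convergence, assuming a truncation box large enough that the exponential tail is negligible (the function name, box sizes and tolerance are our own choices):

```python
import math

def riemann_wq(N, q, box):
    """(1/N) * sum over x in Z^2 of w(x/sqrt(N))^q for w(x) = e^{-|x|},
    truncated to the box [-box, box]^2."""
    s = 0.0
    rootN = math.sqrt(N)
    for i in range(-box, box + 1):
        for j in range(-box, box + 1):
            s += math.exp(-q * math.hypot(i, j) / rootN)
    return s / N

q = 2.0
limit = 2 * math.pi / q**2               # integral of e^{-q|x|} over R^2
for N, box in [(100, 120), (400, 240)]:
    val = riemann_wq(N, q, box)
    # the Riemann sum is already close to its limit at moderate N
    assert abs(val - limit) < 0.05 * limit
```

Since \(N^{-1/q}\Vert w_N\Vert _{\ell ^q}=\big (\frac{1}{N}\sum _x w(x/\sqrt{N})^q\big )^{1/q}\), this is exactly the quantity whose uniform boundedness is invoked after (2.28).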

3 Approximation steps and proof of the theorem

In this section we prove Theorem 1.1 through a series of approximations on the chaos expansion (2.11), (2.17). The first step, in Sect. 3.1, is to establish that the series in the chaos expansion (2.17) can be truncated at a finite order and that the main contribution comes from diagrams where, at any fixed time, at most two walks collide. The second step, in Sect. 3.2, is to show that the main contribution to the expansion, and to diagrams like that in Fig. 3, comes when all jumps between marked dots (see Fig. 3) happen within diffusive scale. The third step, in Sect. 3.3, captures the important feature of scale separation. This is intrinsic to the two-dimensionality and can be seen as the main feature leading to the asymptotic independence of the collision times. With reference to Fig. 3, it says that the time between two consecutive replicas, say \(a_4-b_3\) in Fig. 3, must be much larger than the duration of the previous replicas, say \(b_3-b_2\). This leads to the next step, in Sect. 3.4 (see also Fig. 4), which is that we can rewire the links so that the solid lines connect only replicas between the same pairs of walks. The final step, performed in Sect. 3.5, is to reverse all the above approximations within the rewired diagrams, at which we arrived in the previous step. The summation of all rewired diagrams then leads, in the limit, to the right-hand side of (1.2), thus completing the proof of the theorem.

3.1 Reduction to 2-body collisions and finite order chaoses

In this step, we use the functional analytic framework and estimates of the previous section to show that for each \(r \geqslant 1\), \(H_{r,N}\) decays exponentially in r, uniformly in \(N \in \mathbb {N}\) and that it is concentrated on configurations which contain only two-body collisions between the h random walks.

Proposition 3.1

There exist constants \(\textsf{a}\in (0,1)\) and \({\bar{C}}=C(h,{\bar{\beta }},\textsf{a}) \in (0,\infty )\) such that for all \(r \geqslant 1\),

$$\begin{aligned} \sup _{N \in \mathbb {N}} H_{r,N} \leqslant {\bar{C}} \, \textsf{a}^r,\qquad \text {and} \qquad H^{\mathsf {(multi)}}_{r,N} \leqslant \frac{{\bar{C}}}{\log N} \, r\, \textsf{a}^r \,. \end{aligned}$$
(3.1)

Proof

We use the estimates in (2.26) and make the choice \(q=q_N:=\frac{\textsf{a}}{C_1} \log N \) with \(\textsf{a}\in (0,1)\) and a constant \(C_1\) such that \(\frac{Cpq}{\log N} <\textsf{a}\) (recall that \(\tfrac{1}{p}+\tfrac{1}{q}=1\)). Moreover, this choice of q implies that

$$\begin{aligned} N^{\frac{h+1}{q}}=e^{\frac{h+1}{q} \log N}=e^{\frac{C_1\, (h+1)}{\textsf{a}}}. \end{aligned}$$

Therefore, choosing \({\bar{C}}=\, e^{\frac{C_1\, (h+1)}{\textsf{a}}}\) implies the first estimate in (3.1).

The second estimate follows from the same procedure and the same choice of \(q=q_N:=\frac{\textsf{a}}{C_1} \log N \) in the second bound of (2.26). \(\square \)
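The choice \(q=q_N=\frac{\textsf{a}}{C_1}\log N\) can be sanity-checked numerically: once \(\log N > C_1/\textsf{a}\) (so that \(q_N>1\)) and \(C_1>Cp\), the ratio \(\frac{Cpq}{\log N}=\frac{Cp\,\textsf{a}}{C_1}\) stays below \(\textsf{a}\), while \(N^{(h+1)/q_N}\) is a constant independent of N. A short sketch with illustrative values of \(\textsf{a}, C, C_1, h\) (all hypothetical):

```python
import math

a, C, h = 0.5, 1.0, 3
C1 = 4.0 * C                       # any C1 > C*p works once q_N > 1

for N in (10**6, 10**9, 10**12):
    logN = math.log(N)
    q = (a / C1) * logN            # q_N = (a/C1) log N
    p = q / (q - 1.0)              # conjugate exponent
    assert q > 1.0
    # the geometric ratio in (2.26) is strictly below a:
    assert C * p * q / logN < a
    # N^{(h+1)/q_N} = exp(C1 (h+1)/a), independent of N:
    target = math.exp(C1 * (h + 1) / a)
    assert abs(N ** ((h + 1) / q) - target) < 1e-9 * target
```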

Proposition 3.2

If \(M^{\varvec{\beta }}_{N,h}\) is the joint Laplace transform (2.1) of the collision local times \( \Big \{ \tfrac{\pi }{\log N} \,\textsf{L}_N^{(i,j)} \Big \}_{1\leqslant i<j\leqslant h} \) and \(H_{r,N}\) is the \(r^{th}\) term in its chaos expansion (2.25), then for any \(\epsilon >0\) there exists \(K=K_\epsilon \) such that

$$\begin{aligned} \Big |M^{\varvec{\beta }}_{N,h}-\sum _{r=0}^K H_{r,N}\Big |\leqslant \epsilon , \end{aligned}$$

uniformly for all \(N \in \mathbb {N}\).

Proof

By Proposition 3.1, the terms \(H_{r,N}\) decay exponentially in r, uniformly in \(N \in \mathbb {N}\), and therefore

$$\begin{aligned} \limsup _{K \rightarrow \infty }\, \Big (\sup _{N\geqslant 1} \,\sum _{r>K} H_{r,N}\Big )=0, \end{aligned}$$

which means that we can truncate the expansion of \(M^{\varvec{\beta }}_{N,h}\) to a finite number of terms K depending only on \(\epsilon \). \(\square \)
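Quantitatively, the truncation level \(K_\epsilon \) can be read off from the geometric tail: by the first bound in (3.1), \(\sum _{r>K}\sup _N H_{r,N} \leqslant {\bar{C}}\,\textsf{a}^{K+1}/(1-\textsf{a})\), so a \(K_\epsilon \) of order \(\log (1/\epsilon )\) suffices. A minimal sketch with illustrative constants (\({\bar{C}}, \textsf{a}, \epsilon \) below are hypothetical):

```python
import math

Cbar, a, eps = 10.0, 0.5, 1e-3     # illustrative constants

# choose K with Cbar * a^{K+1} / (1-a) <= eps (geometric tail below eps)
K = math.ceil(math.log(eps * (1 - a) / Cbar) / math.log(a))

tail = Cbar * a ** (K + 1) / (1 - a)
assert tail <= eps
# K grows only logarithmically in 1/eps
assert K <= math.ceil(math.log(Cbar / (eps * (1 - a))) / math.log(1 / a))
```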

By Proposition 3.1 we can focus on two-body collisions only, since higher-order collisions make a negligible contribution as \(N\rightarrow \infty \). Let us introduce some notation to conveniently describe the expansion of \(H_{r,N}\) after the reduction to two-body collisions, which we will use in the sequel. Given \(r\geqslant 1\) we will denote by \(a_i,b_i \in \mathbb {N}\cup \{0\}\), \(a_i\leqslant b_i\), \(i=1,\dots ,r\), the times at which the replicas start and end, respectively, see (2.15) and Fig. 2, where replicas are represented by wiggle lines. Thus, \(a_i\) will be the time marking the beginning of the \(i^{th}\) wiggle line and \(b_i\) the time marking its end. Note that \(a_1=0\). Moreover, we use the notation \(\vec {\varvec{x}}=(\varvec{x}_1,\varvec{x}_2,\dots ,\varvec{x}_r) \in (\mathbb {Z}^{2})^{hr}\) for the starting points of the r replicas and \(\vec {\varvec{y}}=(\varvec{y}_1,\dots ,\varvec{y}_r) \in (\mathbb {Z}^{2})^{hr}\) for the corresponding ending points. Again, notice that \(\varvec{x}_1=\varvec{0}\). We then define the set

$$\begin{aligned} \mathsf C_{r,N}:= \Big \{ (\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\,\Big |\, 0:=a_1\leqslant b_1<a_2\leqslant \dots< a_r \leqslant b_r \leqslant N, \; \vec {\varvec{x}},\vec {\varvec{y}} \in (\mathbb {Z}^2)^{hr},\, \varvec{x}_1=\varvec{0} \Big \} \,. \end{aligned}$$
(3.2)

We also define a set of finite sequences of partitions

$$\begin{aligned} {\mathcal {I}}^{(2)}=\bigcup _{r=0}^{\infty } \bigg \{(I_1,\dots ,I_r): I_j \ne I_{j+1} \text { and } |I_j|=h-1,\forall j \in \{1,\dots ,r\} \bigg \}. \end{aligned}$$

Using the notational conventions outlined above we can write \( H_{r,N}= H^{(2)}_{r,N}+H^{\mathsf {(multi)}}_{r,N}\) with

$$\begin{aligned} H^{(2)}_{r,N}&:=\hspace{-0.1cm} \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}}\sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C_{r,N}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \sigma _N(I_i). \end{aligned}$$
(3.3)

In the next sections we will focus on \(H^{(2)}_{r,N}\), which by Proposition 3.1 contains the main contributions.

3.2 Diffusive spatial truncation

In this step we show that we can introduce diffusive spatial truncations in all the kernels appearing in (3.3) which originate from the diffusive behaviour of the simple random walk in \(\mathbb {Z}^2\). For a vector \(\varvec{x}=(x^{(1)},\dots ,x^{(h)}) \in (\mathbb {Z}^2)^h\), we shall use the notation

$$\begin{aligned} \left\Vert \varvec{x}\right\Vert _{\infty }=\max _{1\leqslant j \leqslant h} |x^{(j)}|, \end{aligned}$$

where \(|\cdot |\) denotes the usual Euclidean norm on \(\mathbb {R}^2\). For each \(r \in \mathbb {N}\), define \(H^{\mathsf {(diff)}}_{r,R,N}\) to be the sum in (3.3) where \(\mathsf C_{r,N}\) is replaced by

$$\begin{aligned} \begin{aligned} \mathsf C^{\mathsf {(diff)}}_{r,N,R}&:=\mathsf C_{r,N} \cap \Big \{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}}): \left\Vert \varvec{y}_i-\varvec{x}_i\right\Vert _{\infty }\leqslant R \sqrt{b_i-a_i}\text { and } \left\Vert \varvec{x}_i-\varvec{y}_{i-1}\right\Vert _{\infty } \\&\leqslant R \sqrt{a_i-b_{i-1}} \text { for all } 1\leqslant i\leqslant r \Big \} \, \end{aligned} \end{aligned}$$
(3.4)

and similarly we define

$$\begin{aligned} H^{\mathsf {(superdiff)}}_{r,N,R}= \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \, \mathsf C^{\mathsf {(superdiff)}}_{r,N,R} } \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i), \end{aligned}$$
(3.5)

where

$$\begin{aligned} \begin{aligned} \mathsf C^{\mathsf {(superdiff)}}_{r,N,R}&:=\mathsf C_{r,N} \cap \Big \{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}}): \, \exists \,1\leqslant i\leqslant r: \left\Vert \varvec{y}_i-\varvec{x}_i\right\Vert _{\infty } \\&> R \sqrt{b_i-a_i} \text { or } \left\Vert \varvec{x}_i-\varvec{y}_{i-1}\right\Vert _{\infty }> R \sqrt{a_i-b_{i-1}} \,\Big \}. \end{aligned} \end{aligned}$$

Note that we then have

$$\begin{aligned} H^{(2)}_{r,N}=H^{\mathsf {(diff)}}_{r,N,R}+H^{\mathsf {(superdiff)}}_{r,N,R} \,. \end{aligned}$$

We have the following proposition.

Proposition 3.3

For all \(r\geqslant 1\) we have that

$$\begin{aligned} \lim _{R \rightarrow \infty } \sup _{N \in \mathbb {N}} H^{\mathsf {(superdiff)}}_{r,N,R} =0 \,. \end{aligned}$$
(3.6)

Proof

We use the bounds established in Lemma 3.4 below, together with (2.18), to show (3.6). Applying a union bound to (3.5), we obtain

$$\begin{aligned} \begin{aligned}&H^{\textsf {(superdiff)}}_{r,N,R} =\hspace{-0.2cm} \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \,\sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \, \mathsf C^{\textsf {(superdiff)}}_{r,N,R} } \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \\&\quad \prod _{i=2}^r Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \\&\leqslant \sum _{j=1}^r \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N,\\ 0:=\varvec{x}_1,\varvec{y}_1,\dots ,\varvec{x}_r,\varvec{y}_r \in (\mathbb {Z}^2)^h \end{array}} \\&\quad \bigg (\mathbb {1}_{\big \{\left\Vert \varvec{y}_j-\varvec{x}_j\right\Vert _{\infty }>R \sqrt{b_j-a_j} \big \}}+ \mathbb {1}_{\big \{\left\Vert \varvec{x}_j-\varvec{y}_{j-1}\right\Vert _{\infty }>R\sqrt{a_j-b_{j-1}} \big \}}\bigg ) \\&\quad \times \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \,. \end{aligned} \end{aligned}$$
(3.7)

We split the sum on the last two lines of (3.7) according to the two indicator functions that appear therein. By successive application of the bounds from (2.18) for \(j<i\leqslant r\), and then by using (3.10), which reads as

$$\begin{aligned} \sum _{\varvec{y}_j \in (\mathbb {Z}^2)^h, \, a_j\leqslant b_j\leqslant N} \textsf{U}^{I_j}_{b_j-a_j}(\varvec{x}_j,\varvec{y}_j)\, \mathbb {1}_{\big \{\left\Vert \varvec{y}_j-\varvec{x}_j\right\Vert _{\infty }>R \sqrt{b_j-a_j} \big \}}\leqslant e^{-\kappa R}, \end{aligned}$$

we deduce that

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots< a_r\leqslant b_r \leqslant N, \\ 0:=\varvec{x}_1,\varvec{y}_1,\dots ,\varvec{x}_r,\varvec{y}_r \in (\mathbb {Z}^2)^h \end{array}} \mathbb {1}_{\big \{\left\Vert \varvec{y}_j-\varvec{x}_j\right\Vert _{\infty }>R \sqrt{b_j-a_j} \big \}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1)\\&\quad \prod _{i=2}^r Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \\&\leqslant e^{-\kappa R}\bigg (\frac{{\bar{\beta }}'}{1-{\bar{\beta }}'}\bigg )^{r-j} \times \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots<b_{j-1} < a_j \leqslant N, \\ 0:=\varvec{x}_1,\varvec{y}_1,\dots ,\varvec{x}_j \in (\mathbb {Z}^2)^h \end{array}} \\&\quad \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^{j-1} Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \\&\quad \times Q^{I_{j-1};I_j}_{a_j-b_{j-1}}(\varvec{y}_{j-1},\varvec{x}_j)\, \sigma _N(I_j) \,. \end{aligned} \end{aligned}$$
(3.8)

We then continue the summation using the bounds from (2.18), to obtain that the right-hand side of the inequality in (3.8) is bounded by

$$\begin{aligned} e^{-\kappa R} \bigg (\frac{{\bar{\beta }}'}{1-{\bar{\beta }}'}\bigg )^{r-j} \, \bigg (\frac{{\bar{\beta }}'}{1-{\bar{\beta }}'}\bigg )^{j-1} = e^{-\kappa R} \bigg (\frac{{\bar{\beta }}'}{1-{\bar{\beta }}'}\bigg )^{r-1}\,. \end{aligned}$$

Similarly, for the sum involving the second indicator function in (3.7) we obtain by using (2.18) and (3.9) of Lemma 3.4 that

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N\, , \\ 0:=\varvec{x}_1,\varvec{y}_1,\dots ,\varvec{x}_r,\varvec{y}_r \in (\mathbb {Z}^2)^h \end{array}} \mathbb {1}_{\big \{\left\Vert \varvec{x}_j-\varvec{y}_{j-1}\right\Vert _{\infty }>R\sqrt{a_j-b_{j-1}} \big \}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_{i}-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \\&\leqslant e^{-\kappa R^2}\, \bigg (\frac{1}{1-{\bar{\beta }}'}\bigg )^{r} \, ({\bar{\beta }}')^{r-1} \, . \end{aligned} \end{aligned}$$

Therefore, the right-hand side of the inequality in (3.7) is bounded by

$$\begin{aligned} \sum _{j=1}^r \sum _{(I_1,\ldots ,I_r) \, \in \, {\mathcal {I}}^{(2)}} e^{-\kappa R} \Bigg ( \bigg (\frac{{\bar{\beta }}'}{1-{\bar{\beta }}'}\bigg )^{r-1} + \bigg (\frac{1}{1-{\bar{\beta }}'}\bigg )^{r} \, ({\bar{\beta }}')^{r-1} \Bigg ) \leqslant e^{-\kappa R} \Bigg (2r\cdot \frac{({\bar{\beta }}')^{r-1}}{(1-{\bar{\beta }}')^{r}}\cdot \left( {\begin{array}{c}h\\ 2\end{array}}\right) ^r \Bigg ), \end{aligned}$$

where the \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) ^r\) factor comes from the fact that there are at most \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) ^r\) choices for the sequence \((I_1,\dots ,I_r) \in {\mathcal {I}}^{(2)}\). Thus, recalling (3.7) we get that

$$\begin{aligned} \sup _{N \in \mathbb {N}} H^{\mathsf {(superdiff)}}_{r,N,R} \leqslant e^{-\kappa R} \Bigg (2r\cdot \frac{({\bar{\beta }}')^{r-1}}{(1-{\bar{\beta }}')^{r}}\cdot \left( {\begin{array}{c}h\\ 2\end{array}}\right) ^r \,\Bigg ) \xrightarrow {R \rightarrow \infty } 0. \end{aligned}$$

\(\square \)

Lemma 3.4

Let \(I,J \vdash \{1,\ldots ,h\}\) be such that \(|I|=|J|=h-1\) and \(I \ne J\). For large enough \(R \in (0,\infty )\) and uniformly in \(\varvec{x}\in (\mathbb {Z}^2)_I^h\), we have, for a constant \(\kappa =\kappa (h,{\bar{\beta }}) \in (0,\infty )\),

$$\begin{aligned} \sigma _N(J) \cdot \Bigg (\sum _{1\leqslant n \leqslant N, \, \varvec{y}\, \in (\mathbb {Z}^2)^h} Q^{I;J}_n(\varvec{x},\varvec{y})\cdot \mathbb {1}_{\big \{ \left\Vert \varvec{x}-\varvec{y}\right\Vert _{\infty }>R \sqrt{n}\big \}}\Bigg ) \leqslant e^{-\kappa R^2} \end{aligned}$$
(3.9)

and

$$\begin{aligned} \sum _{1 \leqslant n \leqslant N, \, \varvec{y}\, \in (\mathbb {Z}^2)^h} \textsf{U}^J_n(\varvec{x},\varvec{y})\cdot \mathbb {1}_{\big \{\left\Vert \varvec{x}-\varvec{y}\right\Vert _{\infty }>R \sqrt{n}\big \}}\leqslant e^{-\kappa R} \,. \end{aligned}$$
(3.10)

Proof

We start with the proof of (3.9). Since \(|J|=h-1\), let us assume without loss of generality that \(J=\{k,\ell \} \sqcup \bigsqcup _{j \ne k,\ell }\{j\}\). In this case, \(Q^{I;J}_n(\varvec{x},\varvec{y})\) contains \(h-2\) random walk jumps with free endpoints \(y^{(j)}\), \(j \ne k,\ell \), that is

$$\begin{aligned} \prod _{j \ne k,\ell } q_n(y^{(j)}-x^{(j)}) \,. \end{aligned}$$

Moreover, J imposes the constraint that \(y^{(k)}=y^{(\ell )}\), which appears in \(Q^{I;J}_n(\varvec{x},\varvec{y})\) through the product of transition kernels

$$\begin{aligned} q_n(y^{(k)}-x^{(k)})\cdot q_n(y^{(k)}-x^{(\ell )}), \end{aligned}$$

recall (2.12). The constraint \(\left\Vert \varvec{x}-\varvec{y}\right\Vert _{\infty }>R \sqrt{n}\) implies that there exists \(1\leqslant j \leqslant h\) such that \(|x^{(j)}-y^{(j)}| > R\, \sqrt{n}\). We distinguish two cases:

  1. There exists \(j \ne k,\ell \) such that \(|x^{(j)}-y^{(j)}| > R\, \sqrt{n}\), or

  2. \(|x^{(j)}-y^{(j)}| > R\, \sqrt{n}\) for \(j=k\) or \(j=\ell \).

In both cases, we can use \(\sum _{z \in \mathbb {Z}^2} q_n(z)=1\) to sum out the kernels \(q_n(y^{(i)}-x^{(i)})\), \(i \notin \{ j, k,\ell \}\), on which no super-diffusive constraint is imposed. By symmetry and translation invariance we can upper bound the left-hand side of (3.9) by

$$\begin{aligned} \begin{aligned} \sigma _N(J) \cdot \Bigg ((h-2) \sum _{1\leqslant n \leqslant N, \, z \in \mathbb {Z}^2}q_n(z) \cdot \mathbb {1}_{\{|z|>R\sqrt{n}\}}&\cdot \Big \{ \sup _{u \, \in \mathbb {Z}^2}\sum _{ \, z \in \mathbb {Z}^2} q_n(z)q_n(z+u)\Big \} \\ + 2\, \sup _{u \, \in \mathbb {Z}^2}\sum _{1 \leqslant n \leqslant N, \, z \in \mathbb {Z}^2}&q_n(z)q_n(z+u) \cdot \mathbb {1}_{\{|z|>R \sqrt{n}\}}\Bigg ) \,. \end{aligned}\nonumber \\ \end{aligned}$$
(3.11)

Looking at the first summand in (3.11) we have by Cauchy-Schwarz that

$$\begin{aligned} \begin{aligned} \sup _{u \in \mathbb {Z}^2} \sum _{z \in \mathbb {Z}^2} q_n(z)\,q_n(z+u) \leqslant \Big (\sum _{z \in \mathbb {Z}^2} q_n^2(z) \Big )^{1/2}\cdot \sup _{u \in \mathbb {Z}^2} \Big (\sum _{z \in \mathbb {Z}^2}q^2_n(z+u)\Big )^{1/2} \leqslant q_{2n}(0), \end{aligned}\nonumber \\ \end{aligned}$$
(3.12)

since \(\sum _{z \in \mathbb {Z}^2}q^2_{n}(z)=q_{2n}(0)\). Let us recall the deviation estimate for the simple random walk, see [16, Proposition 2.1.2]:

$$\begin{aligned} \textrm{P}\Big (\max _{0\leqslant k \leqslant n}|S_k|>R \sqrt{n}\Big )\leqslant e^{-cR^2}\,, \end{aligned}$$
(3.13)

for a constant \(c\in (0,\infty )\) and all sufficiently large \(R \in (0,\infty )\). By using bound (3.12) and subsequently (3.13) on the first summand of (3.11) we get that

$$\begin{aligned} \begin{aligned}&\sum _{1\leqslant n \leqslant N, \, z \in \mathbb {Z}^2}q_n(z) \cdot \mathbb {1}_{\{|z|>R\sqrt{n}\}} \cdot \Big \{ \sup _{u \, \in \mathbb {Z}^2}\sum _{ \, z \in \mathbb {Z}^2} q_n(z)q_n(z+u)\Big \} \\&\quad \leqslant \sum _{1\leqslant n \leqslant N,\, z \in \mathbb {Z}^2} q_{2n}(0)\cdot q_n(z) \mathbb {1}_{\{|z|> R \sqrt{n}\}}\\&\quad \leqslant e^{-c R^2} R_N. \end{aligned} \end{aligned}$$
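The identity \(\sum_{z \in \mathbb {Z}^2} q_n^2(z)=q_{2n}(0)\) used in (3.12) is Chapman–Kolmogorov combined with the symmetry \(q_n(z)=q_n(-z)\). As a purely numerical illustration (not part of the proof; the function name below is ours), it can be checked by computing the kernel of the planar walk exactly for small n:

```python
def srw_kernel(n):
    """Exact transition kernel q_n(z) of the 2D simple random walk,
    computed by iterating one-step convolutions."""
    q = {(0, 0): 1.0}  # q_0 is a point mass at the origin
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n):
        new = {}
        for (x, y), p in q.items():
            for dx, dy in steps:
                new[(x + dx, y + dy)] = new.get((x + dx, y + dy), 0.0) + p / 4
        q = new
    return q

n = 6
qn = srw_kernel(n)
q2n = srw_kernel(2 * n)
# Chapman-Kolmogorov + symmetry: sum_z q_n(z)^2 = q_{2n}(0)
lhs = sum(p * p for p in qn.values())
rhs = q2n[(0, 0)]
assert abs(lhs - rhs) < 1e-12
```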

We recall from (2.4) and (2.5) that \(R_N=\sum _{n=1}^N q_{2n}(0) {\mathop {\approx }\limits ^{N \rightarrow \infty }} \frac{\log N}{\pi }\), therefore,

$$\begin{aligned} \begin{aligned}&\sigma _N(J) \cdot \Bigg ((h-2) \sum _{1\leqslant n \leqslant N, \, z \in \mathbb {Z}^2}q_n(z) \cdot \mathbb {1}_{\{|z|>R\sqrt{n}\}} \cdot \Big \{ \sup _{u \, \in \mathbb {Z}^2}\sum _{ \, z \in \mathbb {Z}^2} q_n(z)q_n(z+u)\Big \} \Bigg ) \\&\quad \leqslant \,(h-2) \,\sigma _N(J) \,R_N\, e^{-cR^2}\\&\quad \leqslant \, (h-2)\, {\bar{\beta }}' \, \,e^{-cR^2}, \end{aligned} \end{aligned}$$
(3.14)

for some \({{\bar{\beta }}}' \in ({{\bar{\beta }}},1)\). The second summand in the parenthesis in (3.11) can be bounded via Cauchy-Schwarz by

$$\begin{aligned} \Bigg ( \sum _{1 \leqslant n \leqslant N, \, z \in \mathbb {Z}^2 } q^2_n(z) \cdot \mathbb {1}_{\{|z|>R\sqrt{n}\}}\Bigg )^{\frac{1}{2}} \cdot \Bigg ( \sum _{1 \leqslant n \leqslant N, \, z \in \mathbb {Z}^2 } q^2_n(z+u) \Bigg )^{\frac{1}{2}} \,. \end{aligned}$$
(3.15)

For the first term in (3.15), using that \(\sup _{z \in \mathbb {Z}^2 }q_n(z)\leqslant \frac{C}{n}\) we get

$$\begin{aligned} \sum _{1\leqslant n \leqslant N, \, z \in \mathbb {Z}^2} q^2_n(z) \cdot \mathbb {1}_{\{|z|>R \sqrt{n}\}}\leqslant \, C \sum _{1\leqslant n \leqslant N, \, z \in \mathbb {Z}^2} \frac{q_n(z)}{n}\cdot \mathbb {1}_{\{|z|>R \sqrt{n}\}} \leqslant \, C \, e^{-c R^2} \, \log N \,. \end{aligned}$$
(3.16)

For the second term in (3.15), we have that for all \(u \in \mathbb {Z}^2\)

$$\begin{aligned} \sum _{1\leqslant n \leqslant N,\, z \in \mathbb {Z}^2 } q^2_n(z+u)= \sum ^N_{n=1} q_{2n}(0){\mathop {\approx }\limits ^{N \rightarrow \infty }} \frac{\log N}{\pi } \,. \end{aligned}$$
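As a numerical aside (outside the argument), the asymptotics \(R_N \approx \frac{\log N}{\pi }\) can be illustrated using the classical identity \(q_{2n}(0)=\big (\binom{2n}{n}2^{-2n}\big )^2\) for the planar simple random walk, which follows from rotating to two independent one-dimensional walks; the helper below is ours:

```python
import math

def R(N):
    """R_N = sum_{n=1}^N q_{2n}(0), via q_{2n}(0) = (binom(2n,n) 2^{-2n})^2."""
    a = 1.0  # a_n = binom(2n, n) 2^{-2n}, with a_0 = 1
    total = 0.0
    for n in range(1, N + 1):
        a *= (2 * n - 1) / (2 * n)  # recursion a_n = a_{n-1} (2n-1)/(2n)
        total += a * a
    return total

N = 10**6
ratio = math.pi * R(N) / math.log(N)
# convergence to 1 is only logarithmic, hence the generous tolerance
assert abs(ratio - 1) < 0.05
```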

Thus, by (3.15) together with (3.16) we conclude that for the second summand in (3.11) we have

$$\begin{aligned} \sigma _N(J) \cdot \Bigg ( 2\, \sup _{u \, \in \mathbb {Z}^2}\sum _{1 \leqslant n \leqslant N, \, z \in \mathbb {Z}^2} q_n(z)q_n(z+u) \cdot \mathbb {1}_{\{|z|>R \sqrt{n}\}}\Bigg ) \leqslant C \, e^{-\frac{cR^2}{2}} \,.\nonumber \\ \end{aligned}$$

Therefore, recalling (3.14) we deduce that there exists a constant \(\kappa (h,{\bar{\beta }}) \in (0,\infty )\) such that

$$\begin{aligned} \sigma _N(J) \cdot \Bigg (\sum _{1\leqslant n \leqslant N, \, \varvec{y}\, \in (\mathbb {Z}^2)^h} Q^{I;J}_n(\varvec{x},\varvec{y})\cdot \mathbb {1}_{\big \{ \left\Vert \varvec{x}-\varvec{y}\right\Vert _{\infty }>R \sqrt{n}\big \}}\Bigg ) \leqslant e^{-\kappa R^2} \,. \end{aligned}$$

We now move to the proof of (3.10). As in the proof of (3.9), we can bound the left-hand side of (3.10) by

$$\begin{aligned}{} & {} (h-1) \sum _{1 \leqslant n \leqslant N, \,w, \, z \in \mathbb {Z}^2}U_N^{{\bar{\beta }}}(n,w)\cdot q_n(z) \mathbb {1}_{\{|z|>R \sqrt{n}\}} \nonumber \\{} & {} \quad + \sum _{1\leqslant n \leqslant N, z \in \mathbb {Z}^2} U^{{\bar{\beta }}}_N(n,z) \cdot \mathbb {1}_{\{ |z|>R \sqrt{n}\}} \,. \end{aligned}$$
(3.17)

For the first summand in (3.17), by (3.13) we have that

$$\begin{aligned} \sum _{z \in \mathbb {Z}^2} q_n(z)\mathbb {1}_{\{|z|>R \sqrt{n}\}}\leqslant e^{-cR^2}, \end{aligned}$$

and \(\sum _{1\leqslant n \leqslant N,\, w \in \mathbb {Z}^2} U_N^{{\bar{\beta }}}(n,w)\leqslant \frac{1}{1-{\bar{\beta }}'}\), therefore

$$\begin{aligned} (h-1) \sum _{1 \leqslant n \leqslant N, \,w, \, z \in \mathbb {Z}^2}U_N^{{\bar{\beta }}}(n,w)\cdot q_n(z) \mathbb {1}_{\{|z|>R \sqrt{n}\}}\leqslant \frac{h-1}{1-{\bar{\beta }}'} \, e^{-c R^2} \,. \end{aligned}$$
(3.18)
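The Gaussian-type deviation bound (3.13) underlying (3.18) can also be illustrated by simulation. Since the constant c is not explicit, the sketch below (our own, seeded for reproducibility) only verifies that the maximal displacement rarely exceeds \(R\sqrt{n}\) for a moderate R; it is a sanity check, not a proof:

```python
import random

def max_displacement_exceeds(n, R, trials, seed=0):
    """Monte Carlo estimate of P(max_{k<=n} |S_k| > R sqrt(n)) for the 2D SRW."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    threshold_sq = (R * R) * n  # compare squared norms to avoid sqrt
    hits = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(n):
            dx, dy = rng.choice(steps)
            x += dx; y += dy
            if x * x + y * y > threshold_sq:
                hits += 1
                break
    return hits / trials

p = max_displacement_exceeds(n=400, R=3, trials=2000)
assert p < 0.05  # the e^{-cR^2} decay makes this event rare already for R = 3
```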

For the second summand, we use the renewal representation of \(U^{{\bar{\beta }}}_N(\cdot ,\cdot )\) introduced in (2.8). In particular, we have that

$$\begin{aligned} \sum _{1\leqslant n \leqslant N, z \in \mathbb {Z}^2} U^{{\bar{\beta }}}_N(n,z) \cdot \mathbb {1}_{\{ |z|>R \sqrt{n}\}}=\sum _{k\geqslant 0} (\sigma _N({{\bar{\beta }}})R_N)^k \sum _{n=0}^N \textrm{P}\Big (\big |S^{\scriptscriptstyle (N)}_k \big |>R\sqrt{n},\tau ^{\scriptscriptstyle (N)}_k=n\Big ) \,.\nonumber \\ \end{aligned}$$
(3.19)

Then, by conditioning on the times \((T^{\scriptscriptstyle (N)}_i)_{1 \leqslant i \leqslant k}\) for which \(\tau _k^{\scriptscriptstyle (N)}=T^{\scriptscriptstyle (N)}_1+\dots +T^{\scriptscriptstyle (N)}_k\) we have that

$$\begin{aligned} \begin{aligned}&\textrm{P}\Big (\big |S^{\scriptscriptstyle (N)}_k \big |> R\sqrt{n},\, \tau ^{\scriptscriptstyle (N)}_k=n\Big )\\&\quad =\sum _{n_1+\dots +n_k=n} \textrm{P}\Big (\big |S^{\scriptscriptstyle (N)}_k \big |>R\sqrt{n}\, \Big | \cap _{i=1}^k\big \{T^{\scriptscriptstyle (N)}_i=n_i \big \}\Big )\, \prod _{i=1}^k \textrm{P}\big (T^{\scriptscriptstyle (N)}_i=n_i\big ) \,. \end{aligned} \end{aligned}$$
(3.20)

Note that when we condition on \(\cap _{i=1}^k\big \{T^{\scriptscriptstyle (N)}_i=n_i\big \}\), \(S_k^{\scriptscriptstyle (N)}\) is a sum of k independent random variables \((\xi _i)_{1 \leqslant i \leqslant k}\) taking values in \(\mathbb {Z}^2\), with law

$$\begin{aligned} \textrm{P}(\xi _i=x)=\frac{q^2_{n_i}(x)}{q_{2n_i}(0)} \,. \end{aligned}$$

The proof of Proposition 3.5 in [17] showed that there exists a constant \(C \in (0,\infty )\) such that for all \(\lambda \geqslant 0\)

$$\begin{aligned} \textrm{E}\Big [e^{\lambda |\sum _{i=1}^k \xi _i|}\Big ] \leqslant 2e^{4C\lambda ^2 n} \,, \end{aligned}$$
(3.21)

uniformly over the values \(n_1,\ldots ,n_k\). Therefore, by (3.21) with \(\lambda =\frac{1}{\sqrt{n}}\) and Markov’s inequality we obtain that

$$\begin{aligned} \textrm{P}\Big (\big |S^{\scriptscriptstyle (N)}_k \big |>R\sqrt{n}\, \Big | \cap _{i=1}^k\{T^{\scriptscriptstyle (N)}_i=n_i\}\Big ) \leqslant 2e^{4C-R} \,. \end{aligned}$$

Thus, looking back at (3.20) we have that for all \(k \geqslant 0\),

$$\begin{aligned} \textrm{P}\Big (\big |S^{\scriptscriptstyle (N)}_k \big |> R\sqrt{n},\, \tau ^{\scriptscriptstyle (N)}_k=n\Big ) \leqslant 2e^{4C-R}\,\, \textrm{P}(\tau ^{\scriptscriptstyle (N)}_k=n), \end{aligned}$$

therefore, plugging the last inequality into (3.19), we get that

$$\begin{aligned} \sum _{1\leqslant n \leqslant N, z \in \mathbb {Z}^2} U^{{\bar{\beta }}}_N(n,z) \cdot \mathbb {1}_{\{ |z|>R \sqrt{n}\}} \leqslant 2e^{4C-R} \sum _{n=1}^N U^{{\bar{\beta }}}_N(n) \leqslant \frac{2e^{4C-R}}{1-{\bar{\beta }}'}, \end{aligned}$$
(3.22)

hence, by (3.18) and (3.22), there exists a constant \(\kappa (h,{{\bar{\beta }}})\in (0,\infty )\) such that

$$\begin{aligned} \sum _{1 \leqslant n \leqslant N, \, \varvec{y}\, \in (\mathbb {Z}^2)^h} \textsf{U}^J_n(\varvec{x},\varvec{y})\cdot \mathbb {1}_{\big \{\left\Vert \varvec{x}-\varvec{y}\right\Vert _{\infty }>R \sqrt{n}\big \}}\leqslant e^{-\kappa R}, \end{aligned}$$

for large enough \(R \in (0,\infty )\), thus concluding the proof of (3.10). \(\square \)

3.3 Scale separation

In this step we show that, given \(r \in \mathbb {N}\) with \(r \geqslant 2\), the main contribution to \(H^{\mathsf {(diff)}}_{r,N}\), as \(N\rightarrow \infty \), comes from configurations where \(a_{i+1}-b_i>M(b_i-b_{i-1})\) for all \(1\leqslant i\leqslant r-1\) and large M. Recall from (3.4) that

$$\begin{aligned} H^{\mathsf {(diff)}}_{r,N,R}= & {} \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \,\sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(diff)}}_{r,N,R} } \\{} & {} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_i-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \cdot \sigma _N(I_i) \,. \end{aligned}$$

Define the set

$$\begin{aligned} \mathsf C^{\mathsf {(main)}}_{r,N,R,M}:=\mathsf C^{\mathsf {(diff)}}_{r,N,R} \cap \Big \{ (\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}}):\, a_{i+1}-b_i>M(b_i-b_{i-1})\text { for all } 1\leqslant i\leqslant r-1\, \Big \} \,,\nonumber \\ \end{aligned}$$
(3.23)

with the convention \(b_0:=0\) and accordingly define

$$\begin{aligned} H^{\mathsf {(main)}}_{r,N,R,M}:= & {} \hspace{-0.2cm}\sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \nonumber \\{} & {} \prod _{i=2}^r Q_{a_i-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \,. \end{aligned}$$
(3.24)

We then have the following approximation proposition:

Proposition 3.5

For all fixed \(r\in \mathbb {N}\), \(r \geqslant 2\) and \(M \in (0,\infty )\),

$$\begin{aligned} \lim _{N \rightarrow \infty } \sup _{R \, \in (0,\infty )}\Big |H^{\mathsf {(diff)}}_{r,N,R}-H^{\mathsf {(main)}}_{r,N,R,M}\Big |=0\,. \end{aligned}$$
(3.25)

Proof

Fix \(M>0\). Let us begin by showing (3.25) in the simplest case, \(r=2\). We have

$$\begin{aligned} \begin{aligned} H^{\mathsf {(diff)}}_{2,N,R}&- H^{\mathsf {(main)}}_{2,N,R,M} \leqslant \sum _{(I_1,I_2) \in \,{\mathcal {I}}^{(2)}}\,\, \sum _{\begin{array}{c} 0 \leqslant b_1 <a_2\leqslant b_2 \leqslant N, \,a_2-b_1\leqslant M b_1, \\ \varvec{y}_1,\varvec{x}_2,\varvec{y}_2 \in (\mathbb {Z}^2)^h \end{array}}\\&\quad \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) Q_{a_2-b_{1}}^{I_{1};I_2}(\varvec{y}_{1},\varvec{x}_2) \,\sigma _N(I_2) \textsf{U}^{I_2}_{b_2-a_2}(\varvec{x}_2,\varvec{y}_2). \end{aligned} \end{aligned}$$

We can bound \(\sigma _N(I_2)\) by \(\frac{\pi {{\bar{\beta }}}\,'}{\log N}\), for some \({{\bar{\beta }}}\,'\in ({{\bar{\beta }}},1)\) and use (2.18) to bound the last replica, i.e. the sum over \((b_2,\varvec{y}_2)\), thus getting

$$\begin{aligned} H^{\mathsf {(diff)}}_{2,N,R}- H^{\mathsf {(main)}}_{2,N,R,M}\leqslant & {} \frac{\pi {{\bar{\beta }}}\,'(1-{{\bar{\beta }}}\,')^{-1}}{\log N} \sum _{(I_1,I_2) \in \, {\mathcal {I}}^{(2)}} \, \nonumber \\{} & {} \sum _{\begin{array}{c} 0 \leqslant b_1 <a_2\leqslant N, \,a_2-b_1\leqslant M b_1, \\ \varvec{y}_1,\varvec{x}_2\in (\mathbb {Z}^2)^h \end{array}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) Q_{a_2-b_{1}}^{I_1;I_2}(\varvec{y}_{1},\varvec{x}_2) \,.\nonumber \\ \end{aligned}$$
(3.26)

Notice that at this stage we can sum out the spatial endpoints of the free kernels in (3.26) and bound the coupling strength \(\beta _{k,\ell }\) of any replica \(\textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1)\) with \(I_1=\{k,\ell \}\sqcup \bigsqcup _{j\ne k,\ell }\{j\}\) by \({{\bar{\beta }}}\) to obtain

$$\begin{aligned} H^{\mathsf {(diff)}}_{2,N,R}&- H^{\mathsf {(main)}}_{2,N,R,M} \leqslant \frac{\pi {{\bar{\beta }}}\,'(1-{{\bar{\beta }}}\,')^{-1}}{\log N} \sum _{(I_1,I_2) \in \, {\mathcal {I}}^{(2)}} \, \sum _{\begin{array}{c} 0 \leqslant b_1<a_2\leqslant N, \,a_2-b_1\leqslant M b_1 \, , \\ y_1,x_2 \in \mathbb {Z}^2 \end{array}} \nonumber \\&\quad U^{{\bar{\beta }}}_N(b_1,y_1) q_{a_2-b_1}(x_2-y_1)\, q_{a_2}(x_2) \nonumber \\&\leqslant \frac{\pi {{\bar{\beta }}}\,'(1-{{\bar{\beta }}}\,')^{-1}}{\log N} \left( {\begin{array}{c}h\\ 2\end{array}}\right) ^2 \sum _{\begin{array}{c} 0 \leqslant b_1 <a_2\leqslant N, \,a_2-b_1\leqslant M b_1 \, , \\ y_1,x_2 \in \mathbb {Z}^2 \end{array}} \nonumber \\&\quad U^{{\bar{\beta }}}_N(b_1,y_1) q_{a_2-b_1}(x_2-y_1)\, q_{a_2}(x_2) \, . \end{aligned}$$
(3.27)

For the last inequality we also have used that the number of possible partitions \((I_1,I_2)\in {\mathcal {I}}^{(2)}\) is bounded by \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) ^2\). For every fixed value of \(b_1\) in (3.27), we use Cauchy-Schwarz for the sum over \((a_2,x_2) \in \big (b_1,(1+M)b_1\big ]\times \mathbb {Z}^2\) in (3.27) to obtain that

$$\begin{aligned} \begin{aligned}&\sum _{b_1<a_2\leqslant (1+M) b_1, \, x_2 \in \mathbb {Z}^2} q_{a_2-b_1}(x_2-y_1)\, q_{a_2}(x_2) \\&\leqslant \Bigg (\sum _{0<a_2\leqslant (1+M) b_1,\, x_2 \in \mathbb {Z}^2} q^2_{a_2-b_1}(x_2-y_1) \Bigg )^{\frac{1}{2}} \\&\quad \Bigg ( \sum _{b_1<a_2\leqslant (1+M) b_1,\, x_2 \in \mathbb {Z}^2} q^2_{a_2}(x_2)\Bigg )^{\frac{1}{2}} \\&= \Bigg (\sum _{0<a_2\leqslant (1+M) b_1} q_{2(a_2-b_1)}(0) \Bigg )^{\frac{1}{2}} \Bigg ( \sum _{b_1<a_2\leqslant (1+M) b_1} q_{2a_2}(0)\Bigg )^{\frac{1}{2}}. \end{aligned} \end{aligned}$$
(3.28)

We can bound the leftmost parenthesis in the last line of (3.28) by \(R^{1/2}_N=\Big (\sum _{n=1}^N q_{2n}(0)\Big )^{1/2}=O(\sqrt{\log N})\). For the other term we have

$$\begin{aligned} \sum _{b_1<a_2\leqslant (1+M)b_1} q_{2a_2}(0) \leqslant c\, \sum _{b_1<a_2\leqslant (1+M)b_1} \tfrac{1}{a_2}\leqslant c \log (1+M) \,. \end{aligned}$$
(3.29)
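The elementary fact behind (3.29), namely that a harmonic-type sum over the window \(\big (b_1,(1+M)b_1\big ]\) is of order \(\log (1+M)\) uniformly in \(b_1\), can be confirmed directly by the integral comparison \(\sum _{a=b+1}^{(1+M)b} \frac{1}{a} \leqslant \log (1+M)\). A small numerical illustration (the helper name is ours):

```python
import math

def window_harmonic_sum(b, M):
    """sum_{b < a <= (1+M) b} 1/a, bounded by log(1+M) via integral comparison."""
    return sum(1.0 / a for a in range(b + 1, (1 + M) * b + 1))

for b in (10, 100, 1000):
    for M in (1, 3, 9):
        s = window_harmonic_sum(b, M)
        # 1/a <= integral of dx/x over [a-1, a], so the sum is <= log((1+M)b / b)
        assert s <= math.log(1 + M)
```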

Therefore, using (3.28) and (3.29) along with \(\sum _{0\leqslant b_1\leqslant N,\,y_1 \in \mathbb {Z}^2} U^{{{\bar{\beta }}}}_N(b_1,y_1) \leqslant (1-{{\bar{\beta }}}\,')^{-1}\) in (3.27) we obtain that

$$\begin{aligned} H^{\mathsf {(diff)}}_{2,N,R}- H^{\mathsf {(main)}}_{2,N,R,M} \leqslant C \pi {{\bar{\beta }}}\,'(1-{{\bar{\beta }}}\,')^{-2} \sqrt{\frac{\log (1+M)}{\log N}} \xrightarrow {N \rightarrow \infty } 0 \,. \end{aligned}$$

Let us show how this argument extends to general \(r \in \mathbb {N}\). The key observation is that for every fresh collision between two random walks, that is \(I_{i+1}=\{k,\ell \} \sqcup \bigsqcup _{j \ne k,\ell } \{j\}\), happening at time \(0<a_{i+1}\leqslant N\), we have \(I_i \ne I_{i+1}\); therefore, one of the two colliding walks with labels \(k,\ell \) must have travelled freely for time at least \(a_{i+1}-b_{i-1}\) since its previous collision. More precisely, every term in the expansion of \( H^{\mathsf {(diff)}}_{r,N,R} - H^{\mathsf {(main)}}_{r,N,R,M}\) contains, for every \(1\leqslant i\leqslant r-1\), a product of the form

$$\begin{aligned} q_{a_{i+1}-b_i}(x_{i+1}-y_i)\cdot q_{a_{i+1}-b_{i-1}}(x_{i+1}-y_{i-1})\,, \end{aligned}$$

see Fig. 3. Recall from (3.2) and (3.24) that we have the expansion

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(diff)}}_{r,N,R} - H^{\mathsf {(main)}}_{r,N,R,M}\\&=\sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \,\sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(diff)}}_{r,N,R}\smallsetminus \mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \\&\quad \prod _{i=2}^r Q_{a_i-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i), \end{aligned} \end{aligned}$$
(3.30)

where by definition (3.23) we have that

$$\begin{aligned} \mathsf C^{\mathsf {(diff)}}_{r,N,R}\smallsetminus \mathsf C^{\mathsf {(main)}}_{r,N,R,M}= \mathsf C^{\mathsf {(diff)}}_{r,N,R}\, \cap \, \bigcup _{i=1}^{r-1}\Big \{ (\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}}):\, a_{i+1}-b_i\leqslant M(b_i-b_{i-1})\, \Big \} \,.\nonumber \\ \end{aligned}$$
(3.31)

We will start the summation of (3.30) from the end until we find the index \(1\leqslant i \leqslant r-1\) for which the sum over \(a_{i+1}\) is restricted to \(\big (b_i,b_i+M(b_i-b_{i-1})\big ]\), in agreement with (3.31), using (2.18) to bound the contribution of the sums over \(b_{j},a_{j+1}\) and the corresponding spatial points for \(i<j\leqslant r-1\). Next, notice that we can bound the contribution of the sum over \(a_{i+1} \in \big (b_i,b_i+M(b_i-b_{i-1})\big ]\) and \(x_{i+1} \in \mathbb {Z}^2\), using a change of variables, by a factor of

$$\begin{aligned} \frac{C}{\log N}\Bigg (\sup _{1\leqslant t\leqslant N,\, u \in \mathbb {Z}^2} \sum _{1\leqslant n \leqslant Mt,\, z \in \mathbb {Z}^2} q_{n}(z)q_{n+t}(z+u)\Bigg )\leqslant C \sqrt{\frac{\log (1+M)}{\log N}}, \end{aligned}$$

using Cauchy–Schwarz as in (3.28) and (3.29). The remaining sums over \(b_j,a_{j-1}\), \(1\leqslant j\leqslant i\) can be bounded again via (2.18). Therefore, taking into account that by (3.31) there are \(r-1\) choices for the index i such that the sum over \(a_{i+1}\) is restricted to \(\big (b_i,b_i+M(b_i-b_{i-1})\big ]\), we can give an upper bound to \(H^{\mathsf {(diff)}}_{r,N,R}- H^{\mathsf {(main)}}_{r,N,R,M}\) as follows:

$$\begin{aligned} H^{\mathsf {(diff)}}_{r,N,R}- H^{\mathsf {(main)}}_{r,N,R,M}\leqslant C (r-1) \, \left( {\begin{array}{c}h\\ 2\end{array}}\right) ^r \, \Big (\frac{{\bar{\beta }}\,'}{1-{\bar{\beta }}\,'}\Big )^r \sqrt{\frac{\log (1+M)}{\log N}} \xrightarrow {N \rightarrow \infty } 0, \end{aligned}$$

where we also used that the number of distinct sequences \((I_1,\dots ,I_r) \in {\mathcal {I}}^{(2)}\) is bounded by \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) ^r\). \(\square \)

Fig. 3
figure 3

A diagrammatic representation of a configuration of collisions between 4 random walks in \(H^{(2)}_{4,N}\) with \(I_1=\{2,3\}\), \(I_2=\{1,2\}\), \(I_3=\{3,4\}\) and \(I_4=\{2,3\}\). Wiggly lines represent replica evolution, see (2.15)

3.4 Rewiring

Recall the expansion of \(H^{\mathsf {(main)}}_{r,N,R,M} \),

$$\begin{aligned} H^{\mathsf {(main)}}_{r,N,R,M}:= & {} \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \,\sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \nonumber \\{} & {} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_i-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \,. \end{aligned}$$
(3.32)

We also remind the reader that we may identify a partition \(I=\{k,\ell \}\sqcup \bigsqcup _{j\ne k,\ell }\{j\}\) with its non-trivial part \(\{k,\ell \}\). Moreover, if \(\big (\{i_1,j_1\},\cdots ,\{i_r,j_r\}\big ) \in {\mathcal {I}}^{(2)}\) we will use the notation

$$\begin{aligned} \textsf{p}(m):=\max \big \{ k<m: \,\{i_m,j_m\}=\{i_k,j_k\} \big \}, \end{aligned}$$

with the convention that \(\textsf{p}(m)=0\) if \(\{i_k,j_k\} \ne \{i_m,j_m\}\) for all \(1\leqslant k <m\). Given this definition, the time \(b_{\textsf{p}(m)}\) represents the last time the walks \(i_m\) and \(j_m\) collided before their new collision at time \(a_m\). Note that since we always have \(\{i_k,j_k\} \ne \{i_{k+1},j_{k+1}\}\) by construction, \(\textsf{p}(m)<m-1\).

Consider a sequence of partitions \(\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in {\mathcal {I}}^{(2)}\) and let \(m \in \{2,\dots ,r\}\). The goal of this step will be to show that we can make the weight replacement

$$\begin{aligned}{} & {} q_{a_{m}-b_{m-1}}\Big (x^{(i_m)}_m-y^{(i_m)}_{m-1}\Big )\cdot q_{a_{m}-b_{m-1}}\Big (x^{(j_m)}_m-y^{(j_m)}_{m-1}\Big ) \nonumber \\{} & {} \quad \longleftrightarrow \quad q^2_{a_m-b_{\textsf{p}(m)}}\Big (x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\Big ) \,. \end{aligned}$$
(3.33)

The error induced by this replacement is negligible as \(M \rightarrow \infty \). We iterate this procedure over all partitions \(I_1,\dots ,I_r\); we call it rewiring, see Figs. 3 and 4. The first step towards the full rewiring is the following lemma, which quantifies the error of a single replacement (3.33).

Lemma 3.6

Let \(r\geqslant 2\) be fixed and \(m \in \{2,\dots ,r\}\) with \(I_m=\{i_m,j_m\}\). Then, for every fixed \(R \in (0,\infty )\) and uniformly in \((\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}\) and all sequences of partitions \((I_1,\dots ,I_r) \in {\mathcal {I}}^{(2)}\),

$$\begin{aligned}{} & {} q_{a_{m}-b_{m-1}}\Big (x^{(i_m)}_m-y^{(i_m)}_{m-1}\Big )\cdot q_{a_{m}-b_{m-1}}\Big (x^{(j_m)}_m-y^{(j_m)}_{m-1}\Big ) \nonumber \\{} & {} \quad = q^2_{a_m-b_{\textsf{p}(m)}}\Big (x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\Big )\cdot e^{o_{M}(1)}, \end{aligned}$$
(3.34)

where \(o_{M}(1)\) denotes a quantity such that \(\lim _{M \rightarrow \infty } o_{M}(1)=0\).

Proof

We will show that

$$\begin{aligned} \begin{aligned} q_{a_{m}-b_{m-1}}\Big (x^{(i_m)}_m-y^{(i_m)}_{m-1}\Big )&=q_{a_m-b_{\textsf{p}(m)}}\Big (x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\Big )\cdot e^{o_{M}(1)} \end{aligned} \end{aligned}$$
(3.35)

and by symmetry we will get (3.34). To this end, we invoke the local limit theorem for simple random walks, which we recall from [16]. In particular, by [16, Theorem 2.3.11], there exists \(\rho >0\) such that for all \(n\geqslant 0\) and \(x \in \mathbb {Z}^2\) with \(|x|<\rho \, n\),

$$\begin{aligned} q_n(x)=g_{\frac{n}{2}}(x)\cdot e^{O\Big (\frac{1}{n}+\frac{|x|^4}{n^3}\Big )}\cdot 2\cdot \mathbb {1}_{\big \{(n,x) \in \mathbb {Z}^3_{\textsf{even}}\big \}}\,, \end{aligned}$$
(3.36)

where \(g_t(x)=\frac{e^{-\frac{|x|^2}{2t}}}{2 \pi t}\) denotes the 2-dimensional heat kernel and

$$\begin{aligned} \mathbb {Z}^3_{\textsf{even}}:=\{(n,x) \in \mathbb {Z}\times \mathbb {Z}^2:\, n+x_1+x_2=0 \,\, (\text {mod}\,\, 2)\}\,. \end{aligned}$$

The last constraint in (3.36) is a consequence of the periodicity of the simple random walk. Let us proceed with the proof of Lemma 3.6. First, we derive two inequalities which are going to be useful for the approximations using the local limit theorem.
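As an illustration of (3.36) (not needed for the argument below), one can compute \(q_n\) exactly by convolution for small n and compare it with \(2\,g_{n/2}\) on the even sublattice; the functions below are our own sketch:

```python
import math

def srw_kernel(n):
    """Exact q_n for the 2D simple random walk via repeated one-step convolution."""
    q = {(0, 0): 1.0}
    for _ in range(n):
        new = {}
        for (x, y), p in q.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                new[(x + dx, y + dy)] = new.get((x + dx, y + dy), 0.0) + p / 4
        q = new
    return q

def heat_kernel(t, x, y):
    """2D heat kernel g_t evaluated at (x, y)."""
    return math.exp(-(x * x + y * y) / (2 * t)) / (2 * math.pi * t)

n = 40
q = srw_kernel(n)
for z in [(0, 0), (2, 0), (1, 1), (3, 1)]:
    if (n + z[0] + z[1]) % 2 == 0:  # the walk only charges the even sublattice
        ratio = q[z] / (2 * heat_kernel(n / 2, *z))
        assert abs(ratio - 1) < 0.05  # the O(1/n + |x|^4/n^3) correction is small here
```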

Auxiliary inequality 1. We claim that

$$\begin{aligned} a_m-b_{m-1}>(M-1)\,\big ( b_{m-1}-b_{\textsf{p}(m)}\big ) \,. \end{aligned}$$
(3.37)

To this end, we start with the fact that

$$\begin{aligned} a_{k+1}-b_k>M(b_k-b_{k-1})\,\,\, \text {holds for all}\, 1\leqslant k \leqslant r-1, \end{aligned}$$
(3.38)

which we will apply repeatedly. Starting with \(k:=m-1\), we have that

$$\begin{aligned} a_m-b_{m-1}>M(b_{m-1}-b_{m-2})=M(b_{m-1}-a_{m-1})+M(a_{m-1}-b_{m-2})\,.\nonumber \\ \end{aligned}$$
(3.39)

We can then estimate the second term on the right-hand side as

$$\begin{aligned} \begin{aligned} M(a_{m-1}-b_{m-2})&=(M-1)(a_{m-1}-b_{m-2})+(a_{m-1}-b_{m-2})\\&>(M-1)(a_{m-1}-b_{m-2})+M(b_{m-2}-b_{m-3}) \end{aligned} \end{aligned}$$

where in the last step we used (3.38) with \(k:=m-2\). Inserting this into (3.39) we have that

$$\begin{aligned} a_m-b_{m-1} > M(b_{m-1}-a_{m-1}) + (M-1)(a_{m-1}-b_{m-2})+M(b_{m-2}-b_{m-3}) \,. \end{aligned}$$

We next decompose \(M(b_{m-2}-b_{m-3})\) as we did in (3.39) for \(M(b_{m-1}-b_{m-2})\); iterating this procedure down to \(\textsf{p}(m)\) we obtain that

$$\begin{aligned} a_m-b_{m-1}&> \sum _{j=\textsf{p}(m)+1}^{m-1} \Big ( M(b_{j}-a_{j}) + (M-1)(a_{j}-b_{j-1}) \Big ) \\&> (M-1) \big ( b_{m-1} - b_{\textsf{p}(m)}\big ), \end{aligned}$$

where in the last step we reduced \(M(b_{j}-a_{j})\) to \((M-1)(b_{j}-a_{j})\) and then telescoped.
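A quick numerical sanity check of (3.37), on hypothetical times generated so as to satisfy the constraints (3.38), with \(\textsf{p}(m)\) allowed to be any index \(\leqslant m-2\) (the sampler below is ours, for illustration only):

```python
import random

def sample_times(r, M, rng):
    """Sample times 0 = b_0 <= a_1 < b_1 < ... < b_r satisfying the
    scale-separation constraint a_{k+1} - b_k > M (b_k - b_{k-1})."""
    a = [0.0] * (r + 1)
    b = [0.0] * (r + 1)  # b[0] = 0 by convention
    for k in range(1, r + 1):
        gap = M * (b[k - 1] - b[k - 2]) if k >= 2 else 0.0
        a[k] = b[k - 1] + gap + rng.uniform(1.0, 10.0)  # strict inequality
        b[k] = a[k] + rng.uniform(1.0, 10.0)
    return a, b

rng = random.Random(1)
M, r = 5, 6
for _ in range(1000):
    a, b = sample_times(r, M, rng)
    for m in range(2, r + 1):
        for p in range(0, m - 1):  # p(m) is always <= m - 2
            # the telescoped inequality (3.37)
            assert a[m] - b[m - 1] > (M - 1) * (b[m - 1] - b[p])
```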

Auxiliary inequality 2. As a second step, we will prove that

$$\begin{aligned} \Big |\,\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big |-\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |\, \Big |\leqslant R\cdot \sqrt{2r-1} \cdot \sqrt{\frac{a_m-b_{m-1}}{M-1}} \,. \end{aligned}$$
(3.40)

To this end, by the reverse triangle inequality we have that

$$\begin{aligned}{} & {} \Big |\,\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big |-\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |\, \Big |\leqslant \big |y^{(i_m)}_{m-1}-x^{(i_m)}_{m-1}\big |\nonumber \\{} & {} \quad +\big |x^{(i_m)}_{m-1}-y^{(i_m)}_{m-2}\big |+\dots +\big |x^{(i_m)}_{\textsf{p}(m)+1}-y^{(i_m)}_{\textsf{p}(m)}\big | \,. \end{aligned}$$
(3.41)

Note that by the diffusivity constraints of \(\mathsf C^{\mathsf {(main)}}_{r,N,R,M}\) we have that

$$\begin{aligned} \begin{aligned}&\big |y^{(i_m)}_{m-1}-x^{(i_m)}_{m-1}\big |+\big |x^{(i_m)}_{m-1}-y^{(i_m)}_{m-2}\big |+\dots +\big |x^{(i_m)}_{\textsf{p}(m)+1}-y^{(i_m)}_{\textsf{p}(m)}\big | \\&\leqslant R\cdot \bigg (\sum _{k=\textsf{p}(m)}^{m-2}\sqrt{a_{k+1}-b_k}+\sum _{k=\textsf{p}(m)+1}^{m-1}\sqrt{b_k-a_k}\bigg )\,. \end{aligned} \end{aligned}$$
(3.42)

By Cauchy-Schwarz on the right-hand side of (3.42), together with (3.37) and the fact that \(m\leqslant r\), we furthermore have that

$$\begin{aligned} \begin{aligned}&\Big (\sum _{k=\textsf{p}(m)}^{m-2}\sqrt{a_{k+1}-b_k}+\sum _{k=\textsf{p}(m)+1}^{m-1}\sqrt{b_k-a_k}\Big )\\&\leqslant \sqrt{2r-1} \,\bigg (\sum _{k=\textsf{p}(m)}^{m-2}(a_{k+1}-b_k)+\sum _{k=\textsf{p}(m)+1}^{m-1}(b_k-a_k) \bigg )^{1/2}\\&= \sqrt{2r-1}\cdot \sqrt{b_{m-1}-b_{\textsf{p}(m)}} \\&\leqslant \sqrt{2r-1} \cdot \sqrt{\frac{a_m-b_{m-1}}{M-1}} \,. \end{aligned} \end{aligned}$$
(3.43)

Combining (3.43) and (3.41) we arrive at (3.40).

Now, we are ready to show approximation (3.35). By (3.36) we have

$$\begin{aligned} \begin{aligned}&\frac{q_{a_{m}-b_{m-1}}\big (x^{(i_m)}_m-y^{(i_m)}_{m-1}\big )}{q_{a_m-b_{\textsf{p}(m)}}\big (x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big )} =e^{-\frac{|x^{(i_m)}_m-y^{(i_m)}_{m-1}|^2}{a_{m}-b_{m-1}}+\frac{|x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}|^2}{a_m-b_{\textsf{p}(m)}}} \cdot \\&\quad \bigg ( \frac{a_m-b_{\textsf{p}(m)}}{a_{m}-b_{m-1}}\bigg ) \cdot e^{O\Big (\frac{1}{a_{m}-b_{m-1}}+\frac{|x^{(i_m)}_m-y_{m-1}^{(i_m)}|^4+|x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}|^4}{(a_m-b_{m-1})^3}\Big )} \,. \end{aligned} \end{aligned}$$
(3.44)

Let us look at each term on the right-hand side of (3.44) separately. First, we have

$$\begin{aligned} \begin{aligned} e^{-\frac{\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |^2}{a_{m}-b_{m-1}}+\frac{\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)} \big |^2}{a_m-b_{\textsf{p}(m)}}}&\leqslant e^{\frac{1}{a_{m}-b_{m-1}}\Big ( \, \big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big |^2 -\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |^2 \,\Big ) } \\&= e^{\frac{1}{a_{m}-b_{m-1}}\, \Big (\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big | +\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |\Big )\cdot \Big (\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big | -\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |\Big ) }\,. \end{aligned} \end{aligned}$$

Using (3.40), we further have

$$\begin{aligned} \begin{aligned} \big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big | +\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |&\leqslant 2 \big |x^{(i_m)}_{m}-y^{(i_m)}_{m-1}\big |+R\sqrt{2r-1}\sqrt{\frac{a_m-b_{m-1}}{M-1}} \\&\leqslant 2R\sqrt{a_m-b_{m-1}} + R \sqrt{2r-1} \sqrt{\frac{a_m-b_{m-1}}{M-1}} \\&= R \sqrt{a_m-b_{m-1}} \Big (2+\sqrt{\frac{2r-1}{M-1}}\Big ) \,. \end{aligned} \end{aligned}$$

Therefore, by (3.40), again, we get

$$\begin{aligned}{} & {} e^{\frac{1}{a_{m}-b_{m-1}}\, \Big (\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big | +\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |\Big )\cdot \Big (\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big | -\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |\Big ) }\\{} & {} \quad \leqslant e^{R^2 \Big (2+\sqrt{\frac{2r-1}{M-1}} \Big )\sqrt{\frac{2r-1}{M-1}}} \,. \end{aligned}$$

Similarly, we can get a lower bound of

$$\begin{aligned} e^{-\frac{\big |x^{(i_m)}_m-y^{(i_m)}_{m-1}\big |^2}{a_{m}-b_{m-1}}+\frac{\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big |^2}{a_m-b_{\textsf{p}(m)}}} \geqslant e^{-R^2\big (1-\frac{1}{M}\big ) \Big (2+\sqrt{\frac{2r-1}{M-1}} \Big )\sqrt{\frac{2r-1}{M-1}}}, \end{aligned}$$

since \( a_m-b_{\textsf{p}(m)}<\big (1+\frac{1}{M-1}\big )(a_m-b_{m-1})\) by (3.37). The second term in (3.44) can be handled by (3.37) as

$$\begin{aligned} 1\leqslant \Big ( \frac{a_m-b_{\textsf{p}(m)}}{a_{m}-b_{m-1}}\Big ) =\Big ( 1+\frac{b_{m-1}-b_{\textsf{p}(m)}}{a_{m}-b_{m-1}}\Big ) < 1+\frac{1}{M-1} \xrightarrow {M \rightarrow \infty } 1\,. \end{aligned}$$

For the last term in (3.44) we have that

$$\begin{aligned} \begin{aligned}&\frac{\big |x^{(i_m)}_m-y_{m-1}^{(i_m)}\big |^4+ \big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big |^4}{(a_{m}-b_{m-1})^3}\\&\qquad \qquad \qquad \qquad \leqslant \frac{\big |x^{(i_m)}_m-y_{m-1}^{(i_m)}\big |^4}{(a_{m}-b_{m-1})^3}+\frac{\Big (|x^{(i_m)}_m-y_{m-1}^{(i_m)}|+R \cdot \sqrt{2r-1}\cdot \sqrt{\frac{(a_{m}-b_{m-1})}{M-1}}\, \Big )^4}{(a_{m}-b_{m-1})^3} \\&\qquad \qquad \qquad \qquad \leqslant \frac{9\,R^4}{(a_{m}-b_{m-1})}+\frac{8R^4(2r-1)^2}{(a_{m}-b_{m-1})\cdot (M-1)^2}\,, \end{aligned} \end{aligned}$$

where we used (3.40) along with the inequality \((x+y)^4\leqslant 8(x^4+y^4)\) for \(x,y \in \mathbb {R}\). Therefore,

$$\begin{aligned} e^{\frac{\big |x^{(i_m)}_m-y_{m-1}^{(i_m)}\big |^4+\big |x^{(i_m)}_m-y^{(i_m)}_{\textsf{p}(m)}\big |^4}{(a_{m}-b_{m-1})^3}} \leqslant e^{\frac{9\,R^4}{(a_{m}-b_{m-1})}+\frac{8R^4(2r-1)^2}{(a_{m}-b_{m-1})\cdot {(M-1)^2}}} \leqslant e^{\frac{9\,R^4}{M}+\frac{8R^4(2r-1)^2}{M \cdot {(M-1)^2}}} \xrightarrow {M \rightarrow \infty } 1\,, \end{aligned}$$

where we used in the last inequality that \(a_m-b_{m-1}>M(b_{m-1}-b_{m-2})\geqslant M\) by (3.23). \(\square \)

Fig. 4
figure 4

Figure 3 after rewiring. We use blue lines to represent the new kernels produced by rewiring. The dashed lines represent remaining free kernels from the rewiring procedure as well as kernels coming from using the Chapman–Kolmogorov formula for the simple random walk

3.5 Final step

Now that we have Lemma 3.6 at our disposal, we can prove the main approximation result of this step. Recall from (3.32) that

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(main)}}_{r,N,R,M}\\&\quad = \sum _{(I_1,\ldots ,I_r) \,\in \, {\mathcal {I}}^{(2)}} \,\sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \textsf{U}^{I_1}_{b_1}(0,\varvec{y}_1) \prod _{i=2}^r Q_{a_i-b_{i-1}}^{I_{i-1};I_i}(\varvec{y}_{i-1},\varvec{x}_i) \, \textsf{U}^{I_i}_{b_i-a_i}(\varvec{x}_i,\varvec{y}_i) \,\sigma _N(I_i) \,. \end{aligned} \end{aligned}$$

Define \(H^{\mathsf {(rew)}}_{r,N,R,M}\) to be the resulting sum after rewiring has been applied to every term of \(H^{\mathsf {(main)}}_{r,N,R,M}\): given a sequence of partitions \((I_1,\dots ,I_r)\in {\mathcal {I}}^{(2)}\) and \((\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}\), we apply the kernel replacement (3.33) to all partitions \(I_1,\dots ,I_r\), starting from \(I_r\) and moving backward. We remind the reader that we may denote a partition \(I=\{i,j\} \sqcup \bigsqcup _{k \ne i,j}\{k\} \in {\mathcal {I}}^{(2)}\) by its non-trivial part \(\{i,j\}\), see Sect. 2.2.

Proposition 3.7

Fix \(0\leqslant r\leqslant K\). We have that

$$\begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M}=e^{ K\cdot o_M(1)}\, H^{\mathsf {(main)}}_{r,N,R,M} \end{aligned}$$
(3.45)

and

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M} = \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\, \sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y^{(i_k)}_k-x^{(i_k)}_k\big ) \cdot \prod _{\begin{array}{c} 1\leqslant \ell \leqslant h, \\ \ell \ne i_k,j_k \end{array}} q_{b_k-a_k}(y^{(\ell )}_k-x^{(\ell )}_k) \\&\quad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x^{(i_m)}_{m}- y^{(i_{m})}_{\textsf{p}(m)} \big ) \cdot \prod _{\begin{array}{c} 1\leqslant \ell \leqslant h, \\ \ell \ne i_m,j_m \end{array}} q_{a_m-b_{m-1}}\big (x^{(\ell )}_m-y^{(\ell )}_{m-1}\big ) \,. \end{aligned} \end{aligned}$$
(3.46)

Proof

Equation (3.45) is a consequence of Lemma 3.6 and the fact that \(r\leqslant K\), while expansion (3.46) is a direct consequence of the rewiring procedure we described in the previous step, see also Figs. 3 and 4. \(\square \)

Next, we derive upper and lower bounds for \(H^{\mathsf {(rew)}}_{r,N,R,M}\). We begin with the upper bound.

Proposition 3.8

We have that

$$\begin{aligned} \begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M}&\leqslant \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N, \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \, \\&\times \, \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \,. \end{aligned} \end{aligned}$$

Proof

Fix \(r\geqslant 1\) and from (3.46) recall that

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M}\\&\quad =\hspace{-0.3cm} \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\, \sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y^{(i_k)}_k-x^{(i_k)}_k\big ) \cdot \\&\quad \prod _{\begin{array}{c} 1\leqslant \ell \leqslant h, \\ \ell \ne i_k,j_k \end{array}} q_{b_k-a_k}(y^{(\ell )}_k-x^{(\ell )}_k) \\&\qquad \qquad \qquad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x^{(i_m)}_{m}-y^{(i_{m})}_{\textsf{p}(m)} \big ) \cdot \prod _{\begin{array}{c} 1\leqslant \ell \leqslant h, \\ \ell \ne i_m,j_m \end{array}} q_{a_m-b_{m-1}}\big (x^{(\ell )}_m-y^{(\ell )}_{m-1}\big ) \,. \end{aligned} \end{aligned}$$
(3.47)

For the sake of obtaining an upper bound on \(H^{\mathsf {(rew)}}_{r,N,R,M}\) we can sum \((\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\) in (3.47) over \(\mathsf C_{r,N}\), see definition in (3.2), instead of \(\mathsf C^{\mathsf {(main)}}_{r,N,R,M}\). We start the summation of the right hand side of (3.47) from the end. Using that for \(n \in \mathbb {N}\), \(\sum _{z \in \mathbb {Z}^2} q_n(z)=1\) we deduce that

$$\begin{aligned} \sum _{\begin{array}{c} y_r^{(\ell )}\in \mathbb {Z}^2: \\ 1\leqslant \ell \leqslant h, \,\ell \ne i_r,j_r \end{array}}\, \prod _{\ell \ne i_r,j_r} q_{b_r-a_r}(y^{(\ell )}_r-x^{(\ell )}_r)= 1\,. \end{aligned}$$

We leave the sum \(\sum _{b_r \in [a_r,N], \, y^{(i_r)}_r \in \mathbb {Z}^2} U_N\big (b_r-a_r,y_r^{(i_r)}-x_r^{(i_r)}\big )\) intact and move on to the time interval \([b_{r-1},a_r]\). We use again that for \(n \in \mathbb {N}\), \(\sum _{z \in \mathbb {Z}^2} q_n(z)=1\), to deduce that

$$\begin{aligned} \sum _{\begin{array}{c} x_r^{(\ell )}\in \mathbb {Z}^2: \\ 1\leqslant \ell \leqslant h, \,\ell \ne i_r,j_r \end{array}}\,\prod _{\ell \ne i_r,j_r}\,q_{a_r-b_{r-1}}(x^{(\ell )}_r-y^{(\ell )}_{r-1}) =1 \,. \end{aligned}$$

Again, we leave the sum \(\sum _{a_r \in \, (b_{r-1},b_r], \,x_r^{(i_r)} \in \mathbb {Z}^2}\, q_{a_r-b_{\textsf{p}(r)}}^2\big (x^{(i_r)}_r- y^{(i_{r})}_{\textsf{p}(r)} \big )\) intact. We can iterate this procedure since, due to the rewiring, all the spatial variables \(y^{(\ell )}_{r-1}\), \(\ell \ne i_{r-1},j_{r-1}\) are free, that is, there are no outgoing laces starting off \(y^{(\ell )}_{r-1}\), \(\ell \ne i_{r-1},j_{r-1}\) at time \(b_{r-1}\). The summations performed so far correspond to removing the dashed lines in Fig. 4. Iterating inductively then yields the following upper bound for \(H^{\mathsf {(rew)}}_{r,N,R,M}\).

$$\begin{aligned} \begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M} \leqslant \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N\,, \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \, \\ \times \, \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \,. \end{aligned} \end{aligned}$$

\(\square \)

In the next proposition we derive complementary lower bounds for \(H^{\mathsf {(rew)}}_{r,N,R,M}\). Given \(0 \leqslant r \leqslant K\) and a sequence of partitions \(\vec {I}=(\{i_1,j_1\},\dots ,\{i_r,j_r\}) \in {\mathcal {I}}^{(2)}\) we define the set \(\mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I}\,)\) to be \(\mathsf C^{\mathsf {(main)}}_{r,N,R,M}\) where for every \(2\leqslant m \leqslant r\) we replace the diffusivity constraint \(\left\Vert \varvec{x}_m-\varvec{y}_{m-1}\right\Vert _{\infty } \leqslant R \sqrt{a_m-b_{m-1}}\) by the constraints

$$\begin{aligned} \begin{aligned}&|x^{(\ell )}_m-y^{(\ell )}_{m-1}|\leqslant R\sqrt{a_m-b_{m-1}},\, \ell \in \{1,\dots ,h\}\smallsetminus \{i_m,j_m\} \quad \text {and} \\&\quad |x^{(\ell ')}_m-y^{(\ell ')}_{\textsf{p}(m)}|\leqslant R \sqrt{1-\frac{1}{M}}\bigg (1-\sqrt{\frac{2K-1}{M-1}}\bigg ) \sqrt{a_m-b_{\textsf{p}(m)}}, \, \, \ell ' \in \{i_m,j_m\} \,. \end{aligned} \end{aligned}$$

This replacement changes the diffusivity constraint imposed on the jump of each of the two walks \(\{i_m,j_m\}\) between their respective positions at time \(b_{m-1}\) and time \(a_m\), the time at which they (re)start colliding, into a diffusivity constraint linking their common position at time \(b_{\textsf{p}(m)}\), the last time they collided before \(a_m\), to their common position at time \(a_m\), when they start colliding again.

We have the following lemma.

Lemma 3.9

Let \(0 \leqslant r \leqslant K\) and \(M>2K\). For all \(\vec {I}=\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}\) we have that

$$\begin{aligned} \mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I}\,) \subset \mathsf C^{\mathsf {(main)}}_{r,N,R,M} \,. \end{aligned}$$

Proof

Fix \(0\leqslant r \leqslant K\), a sequence \(\vec {I}=\big (\{i_1,j_1\}, \dots , \{i_r,j_r\}\big ) \in {\mathcal {I}}^{(2)}\) and \((\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}}) \in \mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I}\,)\). Moreover, let \(2\leqslant m \leqslant r\). By symmetry it suffices to prove that

$$\begin{aligned} \begin{aligned} \mathbb {1}_{\big \{|x^{(i_m)}_m- y^{(i_{m})}_{\textsf{p}(m)}|\leqslant R \sqrt{1-\frac{1}{M}}\Big (1-\sqrt{\frac{2r-1}{M-1}}\Big )\sqrt{a_m-b_{\textsf{p}(m)}}\big \}} \leqslant \mathbb {1}_{\big \{|x^{(i_m)}_m-y^{(i_{m})}_{m-1}|\leqslant R \sqrt{a_m-b_{m-1}}\big \}} \,. \end{aligned} \end{aligned}$$

Indeed, by the definition of \(\mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I}\,)\) and (3.40) we have that for \((\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}}) \in \mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I}\, )\),

$$\begin{aligned} \Big |\,|x^{(i_m)}_m- y^{(i_{m})}_{\textsf{p}(m)} |-|x^{(i_m)}_m-y^{(i_{m})}_{m-1}|\, \Big |\leqslant R\cdot \sqrt{2r-1} \cdot \sqrt{\frac{a_m-b_{m-1}}{M-1}} \,. \end{aligned}$$
(3.48)

Moreover by (3.37) we have that

$$\begin{aligned} a_m-b_{m-1}>(M-1)(b_{m-1}-b_{\textsf{p}(m)}) \Rightarrow a_m-b_{\textsf{p}(m)}>M(b_{m-1}-b_{\textsf{p}(m)}) \,. \end{aligned}$$
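The displayed implication is immediate once one adds \(b_{m-1}-b_{\textsf{p}(m)}\) to both sides; spelling it out:

```latex
\begin{aligned}
a_m-b_{\textsf{p}(m)}
  &= (a_m-b_{m-1})+(b_{m-1}-b_{\textsf{p}(m)}) \\
  &> (M-1)(b_{m-1}-b_{\textsf{p}(m)})+(b_{m-1}-b_{\textsf{p}(m)})
   = M(b_{m-1}-b_{\textsf{p}(m)})\,.
\end{aligned}
```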

Therefore,

$$\begin{aligned} a_m-b_{m-1}&= a_m-b_{\textsf{p}(m)}-(b_{m-1}-b_{\textsf{p}(m)}) \\&>a_m-b_{\textsf{p}(m)} -\frac{1}{M}(a_m-b_{\textsf{p}(m)})=\Big (1-\frac{1}{M}\Big )(a_m-b_{\textsf{p}(m)}) \,. \end{aligned}$$
(3.49)

Combining inequalities (3.48) and (3.49) we get the result. \(\square \)
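For the reader's convenience, the combination of (3.48) and (3.49) can be spelled out as follows. We abbreviate \(d_{\textsf{p}}:=|x^{(i_m)}_m- y^{(i_{m})}_{\textsf{p}(m)}|\) and \(d_{m-1}:=|x^{(i_m)}_m-y^{(i_{m})}_{m-1}|\) (shorthand introduced only here), and use that \(r\leqslant K\) and \(M>2K\) guarantee \(\sqrt{\tfrac{2r-1}{M-1}}<1\):

```latex
\begin{aligned}
d_{m-1}
 &\leqslant d_{\textsf{p}}
    + R\sqrt{2r-1}\,\sqrt{\tfrac{a_m-b_{m-1}}{M-1}}
 && \text{by (3.48)} \\
 &\leqslant R\sqrt{1-\tfrac{1}{M}}\Big(1-\sqrt{\tfrac{2r-1}{M-1}}\Big)\sqrt{a_m-b_{\textsf{p}(m)}}
    + R\sqrt{\tfrac{2r-1}{M-1}}\,\sqrt{a_m-b_{m-1}} \\
 &\leqslant R\Big(1-\sqrt{\tfrac{2r-1}{M-1}}\Big)\sqrt{a_m-b_{m-1}}
    + R\sqrt{\tfrac{2r-1}{M-1}}\,\sqrt{a_m-b_{m-1}}
 && \text{by (3.49)} \\
 &= R\sqrt{a_m-b_{m-1}}\,,
\end{aligned}
```

which is exactly the inequality encoded by the two indicators.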

Proposition 3.10

Let \(0 \leqslant r \leqslant K\). For \(M>2K\) we have that

$$\begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M} \\&\quad \geqslant (1-e^{-cR^2})^{2Kh} \, \hspace{-0.1cm} \sum _{\vec {I}=\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N , \\ a_{i+1}-b_i>M(b_i-b_{i-1}), \, 1\leqslant i \leqslant r-1, \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \, \\&\quad \times \, \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \cdot \mathbb {1}_{\big \{|y_k-x_k|\leqslant R\sqrt{b_k-a_k}, \, |x_m-y_{\textsf{p}(m)}|\leqslant R\, C_{K,M}\sqrt{a_m-b_{\textsf{p}(m)}}\big \}}, \end{aligned}$$

with \(C_{K,M}:=\sqrt{1-\frac{1}{M}}\bigg (1-\sqrt{\frac{2K-1}{M-1}}\bigg )\).

Proof

Recall from (3.46) that

$$\begin{aligned} \begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M}&=\hspace{-0.2cm} \sum _{\vec {I}=\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\, \sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(main)}}_{r,N,R,M}} \\&\quad \times \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y^{(i_k)}_k-x^{(i_k)}_k\big )\cdot \prod _{\ell \ne i_k,j_k}q_{b_k-a_k}(y^{(\ell )}_k-x^{(\ell )}_k) \\&\quad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x^{(i_m)}_{m}- y^{(i_{m})}_{\textsf{p}(m)} \big ) \cdot \prod _{\ell \ne i_m,j_m}q_{a_m-b_{m-1}}\big (x^{(\ell )}_m-y^{(\ell )}_{m-1}\big ) \,. \end{aligned} \end{aligned}$$

By Lemma 3.9 we have that

$$\begin{aligned} \begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M}&\geqslant \hspace{-0.2cm} \sum _{\vec {I}=\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\, \sum _{(\vec {a},\vec {b},\vec {\varvec{x}},\vec {\varvec{y}})\, \in \,\mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I}\,)} \\&\quad \times \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y^{(i_k)}_k-x^{(i_k)}_k\big ) \cdot \prod _{\ell \ne i_k,j_k}q_{b_k-a_k}(y^{(\ell )}_k-x^{(\ell )}_k) \\&\quad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x^{(i_m)}_{m}- y^{(i_{m})}_{\textsf{p}(m)} \big ) \cdot \prod _{\ell \ne i_m,j_m}q_{a_m-b_{m-1}}\big (x^{(\ell )}_m-y^{(\ell )}_{m-1}\big ) \,. \end{aligned} \end{aligned}$$
(3.50)

The first step in getting a lower bound for \(H^{\mathsf {(rew)}}_{r,N,R,M}\) is to get rid of the dashed lines, see Fig. 4. We follow the steps we took in the proof of Proposition 3.8 for the upper bound. In particular, we start the summation of (3.50) beginning from the end. Using that for \(n \in \mathbb {N}\) and \(R\in (0,\infty )\)

$$\begin{aligned} \sum _{z \in \mathbb {Z}^2: \, |z|\leqslant R \sqrt{n}} q_n(z)=1 - \sum _{z \in \mathbb {Z}^2: \, |z|> R \sqrt{n}} q_n(z) \geqslant 1-e^{-cR^2}, \end{aligned}$$
(3.51)

by (3.13), we get that

$$\begin{aligned} \sum _{\begin{array}{c} y_r^{(\ell )}\in \mathbb {Z}^2: |y_r^{(\ell )}-x^{(\ell )}_r|\leqslant R\sqrt{b_r-a_r}, \, \\ 1\leqslant \ell \leqslant h,\,\ell \ne i_r,j_r \end{array}}\prod _{\ell \ne i_r,j_r} q_{b_r-a_r}(y^{(\ell )}_r-x^{(\ell )}_r) \geqslant (1-e^{-cR^2})^h\,. \end{aligned}$$

We leave the sum

$$\begin{aligned} \sum _{\begin{array}{c} b_r \in [a_r,N], \\ y^{(i_r)}_r \in \mathbb {Z}^2:\, |y^{(i_r)}_r-x_r^{(i_r)}|\leqslant R \sqrt{b_r-a_r} \end{array}} U_N\big (b_r-a_r,y_r^{(i_r)}-x_r^{(i_r)}\big ) \end{aligned}$$

as is and move on to the time interval \([b_{r-1},a_r]\). We use (3.51) to deduce that

$$\begin{aligned} \sum _{\begin{array}{c} x_r^{(\ell )}\in \mathbb {Z}^2: |x_r^{(\ell )}-y^{(\ell )}_{r-1}|\leqslant R\sqrt{a_r-b_{r-1}}, \, \\ 1\leqslant \ell \leqslant h,\,\ell \ne i_r,j_r \end{array}} \,\prod _{\ell \ne i_r,j_r}\,q_{a_r-b_{r-1}}(x^{(\ell )}_r-y^{(\ell )}_{r-1}) \geqslant (1-e^{-cR^2})^{h} \,. \end{aligned}$$

Again, we leave the sum \(\sum _{a_r \in \, (b_{r-1},b_r], \,x_r^{(i_r)} \in \mathbb {Z}^2}\, q_{a_r-b_{\textsf{p}(r)}}^2\big (x^{(i_r)}_r-y^{(i_{r})}_{\textsf{p}(r)}\big )\) intact. We can continue this procedure since due to rewiring all the spatial variables \(y^{(\ell )}_{r-1}\), \(\ell \ne i_{r-1},j_{r-1}\) are free, i.e. there are no outgoing laces starting off \(y^{(\ell )}_{r-1}\), \(\ell \ne i_{r-1},j_{r-1}\) at time \(b_{r-1}\), and there are no diffusivity constraints linking \(x^{(i_r)}_r=x_r^{(j_r)}\) with \(y^{(i_r)}_{r-1},y^{(j_r)}_{r-1}\) by definition of \(\mathsf C^{\mathsf {(rew)}}_{r,N,R,M}(\vec {I})\). Iterating this procedure we obtain that

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M} \geqslant (1-e^{-cR^2})^{2Kh}\, \hspace{-0.1cm} \sum _{\vec {I}=\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\\&\quad \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N, \\ a_{i+1}-b_i>M(b_i-b_{i-1}), \, 1\leqslant i \leqslant r-1, \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \, \\&\qquad \times \, \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \cdot \mathbb {1}_{\big \{|y_k-x_k|\leqslant R\sqrt{b_k-a_k}, \, |x_m-y_{\textsf{p}(m)}|\leqslant R\, C_{K,M}\sqrt{a_m-b_{\textsf{p}(m)}}\big \}}, \end{aligned} \end{aligned}$$

with \(C_{K,M}=\sqrt{1-\frac{1}{M}}\bigg (1-\sqrt{\frac{2K-1}{M-1}}\bigg )\). \(\square \)

Fig. 5

Graphical representation of a term of the chaos representation of \(\prod _{1\leqslant i<j \leqslant h}\textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}_N^{(i,j)}}\Big ]\) for \(h=3\). This diagram captures the main contribution, which is when all a’s and b’s are distinct. Configurations where more than one pair of walks collide at the same time \(n\leqslant N\) give lower-order contributions

To proceed with the last steps of our proof it is useful to record two chaos expansions of \(\prod _{1\leqslant i<j \leqslant h} \textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}_N^{(i,j)}}\Big ]\). These read as

$$\begin{aligned} \prod _{1\leqslant i<j \leqslant h}\textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}_N^{(i,j)}}\Big ]&= \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1\leqslant a_2\leqslant \dots \leqslant a_r\leqslant b_r \leqslant N\, , \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \nonumber \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \nonumber \\&\times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \end{aligned}$$
(3.52)
$$\begin{aligned}&= \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\, \sum _{0:=a_1\leqslant b_1\leqslant a_2\leqslant \dots \leqslant a_r\leqslant b_r \leqslant N} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big ) \nonumber \\&\times \prod _{m=2}^r \sigma ^{i_m,j_m}_N q_{2(a_{m}-b_{\textsf{p}(m)})}(0) . \end{aligned}$$
(3.53)

This follows from (2.6) and (2.7) by grouping together intervals \([a,b]\) during which collisions of only a single pair of random walks are observed. Figure 5 presents a graphical representation of these chaos expansions.

Proposition 3.11

We have that

$$\begin{aligned} \lim _{K \rightarrow \infty } \lim _{M \rightarrow \infty } \lim _{R \rightarrow \infty } \lim _{N \rightarrow \infty } \sum _{r=0}^{K} H^{\mathsf {(rew)}}_{r,N,R,M}=\prod _{1\leqslant i<j \leqslant h} \frac{1}{1-\beta _{i,j}} \,. \end{aligned}$$

Proof

We are going to prove this proposition by means of the lower and upper bounds established in Propositions 3.8 and 3.10. By Proposition 3.8 we have that

$$\begin{aligned} \begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M}&\leqslant \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N\,, \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \, \\&\times \, \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \,. \end{aligned} \end{aligned}$$
(3.54)

Summing the spatial points on the right hand side of (3.54) we obtain that

$$\begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M} \leqslant \hspace{-0.3cm}\sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\, \sum _{0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N} \nonumber \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big ) \prod _{m=2}^r \sigma ^{i_m,j_m}_N q_{2(a_{m}-b_{\textsf{p}(m)})}(0) \, . \end{aligned}$$
(3.55)

Using (3.52) one can deduce that

$$\begin{aligned} \sum _{r \geqslant 0} H^{\mathsf {(rew)}}_{r,N,R,M} \leqslant \prod _{1\leqslant i<j \leqslant h} \textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}_N^{(i,j)}}\Big ]= \big (1+o_N(1)\big )\prod _{1\leqslant i<j \leqslant h} \frac{1}{1-\beta _{i,j}} \,. \end{aligned}$$
(3.56)

Next, by Proposition 3.10 we have that

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M} \geqslant (1-e^{-cR^2})^{2Kh}\, \hspace{-0.1cm} \sum _{\vec {I}=\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N, \\ a_{i+1}-b_i>M(b_i-b_{i-1}), \, 1\leqslant i \leqslant r-1, \\ 0:=x_1,y_1,\dots ,x_r,y_r \in \mathbb {Z}^2 \end{array}} \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k,y_k-x_k\big ) \, \times \, \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q^{2}_{a_{m}-b_{\textsf{p}(m)}}\big (x_{m}-y_{\textsf{p}(m)}\big ) \cdot \\&\quad \mathbb {1}_{\big \{|y_k-x_k|\leqslant R\sqrt{b_k-a_k}, \, |x_m-y_{\textsf{p}(m)}|\leqslant R\, C_{K,M}\sqrt{a_m-b_{\textsf{p}(m)}}\big \}}, \end{aligned} \end{aligned}$$
(3.57)

Lifting the diffusivity conditions imposed on the right-hand side of (3.57) can be done using arguments already present in Lemma 3.4. More specifically, we use that for \(0 \leqslant m \leqslant N\), \(w \in \mathbb {Z}^2\) and \(1\leqslant i<j \leqslant h\),

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} n \in [m,N],\, \\ z \in \mathbb {Z}^2: \, |z-w|\leqslant R\sqrt{n-m} \end{array}} \hspace{-0.3cm} U^{\beta _{i,j}}_N\big (n-m,z-w\big )\\&= \sum _{n \in [m,N]} U^{\beta _{i,j}}_N\big (n-m\big ) -\hspace{-0.2cm}\sum _{\begin{array}{c} n \in [m,N],\, \\ z \in \mathbb {Z}^2:\, |z-w|> R\sqrt{n-m} \end{array}} \hspace{-0.3cm}U^{\beta _{i,j}}_N\big (n-m,z-w\big ) \\&\geqslant \sum _{n \in [m,N]} U^{\beta _{i,j}}_N\big (n-m\big ) -e^{-\kappa R}\sum _{n \in [m,N]} U^{\beta _{i,j}}_N\big (n-m\big ) \\&\geqslant \big (1-e^{-\kappa R}\big ) \,\sum _{n \in [m,N]} U^{\beta _{i,j}}_N\big (n-m\big ), \end{aligned} \end{aligned}$$

where in the first inequality we used (3.22) from Lemma 3.4 with a suitable constant \(\kappa ({{\bar{\beta }}}) \in (0,\infty )\). Similarly we have that

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} n \in [m,N], \, \\ z \in \mathbb {Z}^2: |z-w|\leqslant R\,C_{K,M}\,\sqrt{n-m} \end{array}}\hspace{-0.3cm} q^2_{n-m}(z-w) \\&\quad = \sum _{n \in [m,N]} q_{2(n-m)}(0) - \sum _{\begin{array}{c} n \in [m,N], \, \\ z \in \mathbb {Z}^2: |z-w|>R\, C_{K,M}\, \sqrt{n-m} \end{array}} q^2_{n-m}(z-w)\\&\quad \geqslant \big (1-e^{-\kappa \,R^2\, C^2_{K,M}}\big )\sum _{n \in [m,N]} q_{2(n-m)}(0) \end{aligned} \end{aligned}$$

by tuning the constant \(\kappa \) if needed. Therefore, we finally obtain that

$$\begin{aligned} \begin{aligned}&H^{\mathsf {(rew)}}_{r,N,R,M} \geqslant (1-e^{-cR^2})^{2Kh}\,(1-e^{-\kappa \,R})^K \, (1-e^{-\kappa \,R^2\,C^2_{K,M}})^K \times \\&\, \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}}\,\,\,\sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N, \\ a_{i+1}-b_i>M(b_i-b_{i-1}), \, 1\leqslant i \leqslant r-1\,. \end{array}} \\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big ) \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q_{2(a_{m}-b_{\textsf{p}(m)})}(0) \,. \end{aligned} \end{aligned}$$
(3.58)

The last restriction we need to lift is \(a_{i+1}-b_i>M(b_i-b_{i-1})\), \(1\leqslant i \leqslant r-1\). This can be done via the arguments used in Proposition 3.5, so we do not repeat them here; we only note that there exists a constant \({\widetilde{C}}_{K}={\widetilde{C}}_K({{\bar{\beta }}},h) \in (0,\infty )\) such that for all \(0 \leqslant r \leqslant K\), the sum corresponding to the right hand side of (3.58), but with its temporal range of summation restricted so that there exists \(1\leqslant i\leqslant r-1\) with \(a_{i+1}-b_i\leqslant M(b_i-b_{i-1})\), satisfies the bound

$$\begin{aligned} \begin{aligned}&\sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}}\,\,\, \sum _{\begin{array}{c} 0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N, \\ \exists \, 1\leqslant i\leqslant r-1:\, a_{i+1}-b_i\leqslant M(b_i-b_{i-1}) \end{array}}\\&\quad \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big ) \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q_{2(a_{m}-b_{\textsf{p}(m)})}(0) \\&\leqslant {\widetilde{C}}_K \cdot \epsilon _{N,M}, \end{aligned} \end{aligned}$$

where \(\epsilon _{N,M}\) is such that \(\lim _{N \rightarrow \infty } \epsilon _{N,M}=0\) for any fixed \(M \in (0,\infty )\). Therefore, the resulting lower bound on \(H^{\mathsf {(rew)}}_{r,N,R,M}\) will be

$$\begin{aligned} \begin{aligned} H^{\mathsf {(rew)}}_{r,N,R,M}&\geqslant (1-e^{-cR^2})^{2Kh}\,(1-e^{-\kappa \,R})^K \, (1-e^{-\kappa \,R^2\,C^2_{K,M}})^K \\&\times \Bigg ( \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}}\,\,\,\sum _{0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big )\, \\&\,\qquad \qquad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q_{2(a_{m}-b_{\textsf{p}(m)})}(0) - {\widetilde{C}}_K \cdot \epsilon _{N,M} \Bigg ) \,. \end{aligned} \end{aligned}$$
(3.59)

Note that

$$\begin{aligned} \begin{aligned} \prod _{1\leqslant i<j \leqslant h}\textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}^{(i,j)}_N}\Big ]&= \sum _{r=0}^{K} \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}} \,\,\\&\quad \sum _{0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big ) \\&\qquad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N \cdot q_{2(a_{m}-b_{\textsf{p}(m)})}(0) + A^{(1)}_{N}+ A_{N,K}^{(2)}, \end{aligned} \end{aligned}$$
(3.60)

where \(A_N^{(1)}\) denotes the part of the chaos expansion of \(\prod _{1\leqslant i<j \leqslant h}\textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}^{(i,j)}_N}\Big ]\) where there exists a time \(n\leqslant N\) at which multiple pairs collide. Moreover, \(A^{(2)}_{N,K}\) denotes the corresponding sum on the right hand side of (3.60) but from \(r=K+1\) to \(\infty \), that is

$$\begin{aligned} \begin{aligned} A^{(2)}_{N,K}&= \sum _{r>K} \sum _{\big (\{i_1,j_1\},\dots ,\{i_r,j_r\}\big ) \in \, {\mathcal {I}}^{(2)}}\,\,\sum _{0:=a_1\leqslant b_1<a_2\leqslant \dots < a_r\leqslant b_r \leqslant N} \prod _{k=1}^r U^{\beta _{i_k,j_k}}_N\big (b_k-a_k\big ) \\&\quad \times \prod _{m=2}^r \sigma ^{i_m,j_m}_N q_{2(a_{m}-b_{\textsf{p}(m)})}(0) \,. \end{aligned} \end{aligned}$$

Next, we will give bounds for \(A^{(1)}_{N}\) and \(A_{N,K}^{(2)}\). Beginning with \(A_{N,K}^{(2)}\), let \(\rho _K:=\left\lfloor {\frac{K}{2\left( {\begin{array}{c}h\\ 2\end{array}}\right) }}\right\rfloor \, \). Since we are summing over \(r>K\), there has to be a pair \(1\leqslant i<j \leqslant h\) which has recorded more than \(\rho _K\) collisions. We recall from (2.8) that \(U^{\beta }_N(\cdot )\) admits the renewal representation

$$\begin{aligned} U_N^{\beta }(n)=\sum _{k \geqslant 0}(\sigma _N(\beta )\, R_N)^k\, \textrm{P}\big (\tau ^{\scriptscriptstyle (N)}_{k}=n\big ) \,. \end{aligned}$$

There are \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) \) choices for the pair with more than \(\rho _K\) collisions. We can also use the bound (2.9) to bound the contribution of the remaining \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) -1\) pairs to \(A^{(2)}_{N,K}\). Therefore, we can write

$$\begin{aligned} A^{(2)}_{N,K}\leqslant \frac{\left( {\begin{array}{c}h\\ 2\end{array}}\right) }{(1-{\bar{\beta }}')^{\left( {\begin{array}{c}h\\ 2\end{array}}\right) }} \sum _{k>\rho _K} (\sigma _N({{\bar{\beta }}})R_N)^k\, \textrm{P}(\tau ^{\scriptscriptstyle (N)}_k \leqslant N) \leqslant \frac{\left( {\begin{array}{c}h\\ 2\end{array}}\right) }{(1-{\bar{\beta }}')^{\left( {\begin{array}{c}h\\ 2\end{array}}\right) }} \sum _{k>\rho _K} ({\bar{\beta }}')^k \xrightarrow {K \rightarrow \infty }0\,, \end{aligned}$$

uniformly in N, where \({\bar{\beta }}' \in ({{\bar{\beta }}},1)\).
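Indeed, the last step is an elementary geometric-tail computation: since \({\bar{\beta }}'<1\) and \(\rho _K=\big \lfloor K/2\tbinom{h}{2}\big \rfloor \rightarrow \infty \) as \(K \rightarrow \infty \),

```latex
\sum_{k>\rho_K} ({\bar{\beta}}')^{k}
  \;=\; \frac{({\bar{\beta}}')^{\rho_K+1}}{1-{\bar{\beta}}'}
  \;\xrightarrow{K\to\infty}\; 0\,,
```

and the convergence is uniform in \(N\) because the bound does not depend on \(N\).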

Let us now proceed with estimating \(A_N^{(1)}\) and show that its contribution is negligible. \(A_N^{(1)}\) consists of configurations where \({\mathfrak {m}}\geqslant 2\) pairs \(\{i_1,j_1\}, \{i_2,j_2\},\ldots ,\{i_{{\mathfrak {m}}}, j_{{\mathfrak {m}}} \}\) collide at the same time \(n\leqslant N\). Referring to Fig. 5, a case of \({\mathfrak {m}}=2\) corresponds, for example, to the situation \(a_3=a_4\). Let n be the first time a multiple pair-collision takes place. We can choose the pairs \(\{i_1,j_1\}, \{i_2,j_2\},\ldots ,\{i_{{\mathfrak {m}}}, j_{{\mathfrak {m}}} \}\), which collide at that time, in \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) \cdot \big (\left( {\begin{array}{c}h\\ 2\end{array}}\right) -1\big ) \cdots \big (\left( {\begin{array}{c}h\\ 2\end{array}}\right) -{\mathfrak {m}}+1\big ) / {\mathfrak {m}}! < \left( {\begin{array}{c}h\\ 2\end{array}}\right) ^{{\mathfrak {m}}}\) ways. We can use bound (2.9) to bound the contribution to \(A^{(1)}_{N}\) from the remaining \(\left( {\begin{array}{c}h\\ 2\end{array}}\right) -{\mathfrak {m}}\) pairs by \(c(1-{{\bar{\beta }}}')^{-{h\atopwithdelims ()2}+{\mathfrak {m}}}\) and the contribution from collisions of the pairs \(\{i_1,j_1\}, \{i_2,j_2\},\ldots ,\{i_{{\mathfrak {m}}}, j_{{\mathfrak {m}}} \}\) beyond time n by \(c(1-{{\bar{\beta }}}')^{-{\mathfrak {m}}}\). More precisely, we obtain that

$$\begin{aligned} \begin{aligned} A_N^{(1)}\leqslant \sum _{{\mathfrak {m}}=2}^{h \atopwithdelims ()2 } \frac{\left( {\begin{array}{c}h\\ 2\end{array}}\right) ^{\mathfrak {m}}}{(1-{\bar{\beta }}' )^{\left( {\begin{array}{c}h\\ 2\end{array}}\right) }} \sum _{n \geqslant 0, \, x_1,\ldots ,x_{{\mathfrak {m}}} \, \in \mathbb {Z}^2} \, \prod _{i=1}^{\mathfrak {m}} U^{{{\bar{\beta }}}}_N(n, x_i) \leqslant \sum _{{\mathfrak {m}}=2}^{h \atopwithdelims ()2 } \frac{\left( {\begin{array}{c}h\\ 2\end{array}}\right) ^{\mathfrak {m}}}{(1-{\bar{\beta }}' )^{\left( {\begin{array}{c}h\\ 2\end{array}}\right) }} \sum _{n \geqslant 0}\big (U^{{{\bar{\beta }}}}_N\big )^{{\mathfrak {m}}}(n) \,. \end{aligned} \end{aligned}$$

Schematically (referring again to Fig. 5), we have summed out everything except the ‘wiggle’ lines spanning [0, n], which correspond to the pairs that collide simultaneously at time n, and have then summed out the spatial dependence of the latter. We next use Proposition 1.5 of [3], which provides the estimate

$$\begin{aligned} \textrm{P}(\tau _k^{\scriptscriptstyle (N)}=n)\leqslant \frac{C\, k\, q_{2n}(0)}{R_N} \leqslant \frac{C' \, k\, }{n \, \log N}, \end{aligned}$$

where the second inequality follows by the local limit theorem. Therefore, from this, (2.8) and (2.10) we get that

$$\begin{aligned} \big (U^{{{\bar{\beta }}} }_N\big )^{\mathfrak {m}}(n) =\Big (\sum _{k \, \geqslant 0} (\sigma _N({{\bar{\beta }}})R_N)^{k} \, \textrm{P}(\tau _k^{\scriptscriptstyle (N)}=n) \Big )^{{\mathfrak {m}}} \leqslant \frac{(C')^{{\mathfrak {m}}}}{n^{{\mathfrak {m}}} \, (\log N)^{{\mathfrak {m}}} } \Big (\sum _{k \geqslant 0} k \cdot ({\bar{\beta }}')^k \Big )^{{\mathfrak {m}}} \,, \end{aligned}$$

for some \({\bar{\beta }}' \in ({{\bar{\beta }}},1)\). Since \({\bar{\beta }}'<1\) we have that \(\sum _{k \geqslant 0} k \cdot ({\bar{\beta }}')^k < \infty \) and we deduce that there exists a constant \(C=C({\bar{\beta }}')\) such that \(\big (U^{\bar{\beta }}_N\big )^{{\mathfrak {m}}}(n) \leqslant \frac{C^{{\mathfrak {m}}}}{n^{{\mathfrak {m}}}\,(\log N)^{{\mathfrak {m}}}}\). Since \(\sum _{n \geqslant 1} \frac{1}{n^{{\mathfrak {m}}}}<\infty \), for \({{\mathfrak {m}}}\geqslant 2\), there exists a (different) constant \(C=C({\bar{\beta }}')\in (0,\infty )\) such that

$$\begin{aligned} \sum _{n \geqslant 0} \big (U^{{{\bar{\beta }}}}_N\big )^{{\mathfrak {m}}}(n)\leqslant \frac{C}{(\log N)^{{\mathfrak {m}}}} \xrightarrow {N \rightarrow \infty } 0\,. \end{aligned}$$

The two bounds above, in combination with (3.59) and (3.60), allow us to write:

$$\begin{aligned} \begin{aligned} \sum _{r =0}^K H^{\mathsf {(rew)}}_{r,N,R,M}&\geqslant (1-e^{-cR^2})^{2Kh}\,(1-e^{-\kappa \,R})^K \, (1-e^{-\kappa \,R^2\,C^2_{K,M}})^K \\&\qquad \times \Bigg (\prod _{1\leqslant i<j \leqslant h}\textrm{E}\Big [e^{\frac{\pi \beta _{i,j}}{\log N}\, \textsf{L}^{(i,j)}_N}\Big ]- K \cdot {\widetilde{C}}_K\cdot \epsilon _{N,M}- o_N(1)-o_K(1) \Bigg ), \end{aligned} \end{aligned}$$

which together with the upper bound (3.56) entails that

$$\begin{aligned} \lim _{K \rightarrow \infty } \lim _{M \rightarrow \infty } \lim _{R \rightarrow \infty } \lim _{N \rightarrow \infty } \sum _{r=0}^{K} H^{\mathsf {(rew)}}_{r,N,R,M}=\prod _{1\leqslant i<j \leqslant h} \frac{1}{1-\beta _{i,j}} \,. \end{aligned}$$

\(\square \)

We are now ready to put all pieces together and prove the main result of the paper, Theorem 1.1.

Proof of Theorem 1.1

The joint convergence statement (1.3) will follow from the convergence of the joint Laplace transform (1.2) for \(|\beta _{i,j}|<1\) and general results on the relation between convergence of Laplace transforms and distributional convergence. Let us sketch this argument: Since \(\big (\tfrac{\pi }{\log N} \textsf{L}_N^{(i,j)} \big )_{1\leqslant i<j\leqslant h}\) are non-negative random variables, it suffices by the Cramér–Wold device (see [13], Corollary 4.5) to establish the distributional convergence of every nonnegative linear combination, i.e. of \(\textsf{L}_N^{\varvec{\beta }}:=\sum _{i<j} \tfrac{\beta _{i,j} \pi }{\log N} \textsf{L}_N^{(i,j)}\) with \(\beta _{i,j}\geqslant 0\). The fact that \(\textsf{L}_N^{\varvec{\beta }}\) has exponential moments, both positive (as follows from our estimates) and negative (as is clear from the nonnegativity of \(\textsf{L}_N^{\varvec{\beta }}\)), implies that the sequence \(\textsf{L}_N^{\varvec{\beta }}\) is tight. Tightness together with exponential moments implies that subsequential limits of the Laplace transforms exist, with the convergence being uniform in a neighbourhood of 0 in the complex plane. This yields analyticity of the limiting Laplace transforms in a neighbourhood of 0. We will establish that the Laplace transform (1.2) with positive parameters converges to that of the corresponding linear combination of independent exponential random variables with parameter 1, which is also analytic in a neighbourhood of zero (see the right-hand side of (1.2)). Since the limiting Laplace transforms agree on a set with non-empty interior (here determined by the conditions \(\beta _{i,j}\in [0,1)\)), by analyticity they must agree on the whole domain of analyticity, which includes an open disc containing 0.

Therefore, it only remains to establish the convergence of the joint Laplace transform (1.2) for \(\beta _{i,j}\in [0,1)\). Let us do so, relying on the results proven so far.

Let \(\epsilon >0\). By Proposition 3.2, there exists \(K=K_\epsilon \in \mathbb {N}\) such that, uniformly in \(N \in \mathbb {N}\),

$$\begin{aligned} \Big |M^{\varvec{\beta }}_{N,h}- \sum _{r=0}^K H_{r,N}\Big |\leqslant \epsilon \,. \end{aligned}$$
(3.61)

Moreover, by the triangle inequality,

$$\begin{aligned} \begin{aligned}&\Big | \sum _{r=0}^K H_{r,N}- \sum _{r=0}^K H^{\mathsf {(rew)}}_{r,N,R,M}\Big | \leqslant \bigg (\sum _{r=0}^K (H^{\mathsf {(superdiff)}}_{r,N,R}+H^{\mathsf {(multi)}}_{r,N} )\bigg )\\&\quad + \Big |\sum _{r=0}^K (H^{\mathsf {(diff)}}_{r,N,R}-H^{\mathsf {(main)}}_{r,N,R,M})\Big | + \Big | \sum _{r=0}^K (H^{\mathsf {(main)}}_{r,N,R,M}-H^{\mathsf {(rew)}}_{r,N,R,M})\Big | \,. \end{aligned} \end{aligned}$$

By Propositions 3.1 and 3.3 we have that

$$\begin{aligned} \lim _{R \rightarrow \infty }\lim _{N \rightarrow \infty } \bigg (\sum _{r=0}^K (H^{\mathsf {(superdiff)}}_{r,N,R}+H^{\mathsf {(multi)}}_{r,N} )\bigg )&\leqslant \lim _{R \rightarrow \infty } \sup _{N \in \mathbb {N}}\sum _{r=0}^K H^{\mathsf {(superdiff)}}_{r,N,R}\\&\quad +\lim _{N \rightarrow \infty } \sum _{r=0}^K H^{\mathsf {(multi)}}_{r,N}=0 \,. \end{aligned}$$

Moreover, by Proposition 3.5 we have that

$$\begin{aligned} \lim _{R \rightarrow \infty }\lim _{M \rightarrow \infty }\lim _{N \rightarrow \infty } \Big | \sum _{r=0}^K H^{\mathsf {(diff)}}_{r,N,R}- \sum _{r=0}^K H^{\mathsf {(main)}}_{r,N,R,M}\Big |=0 \,. \end{aligned}$$

Lastly, by Proposition 3.7 we have that

$$\begin{aligned} \lim _{R \rightarrow \infty } \lim _{M \rightarrow \infty } \lim _{N \rightarrow \infty } \Big |\sum _{r=0}^K H^{\mathsf {(main)}}_{r,N,R,M}- \sum _{r=0}^K H^{\mathsf {(rew)}}_{r,N,R,M}\Big |=0, \end{aligned}$$

therefore

$$\begin{aligned} \lim _{R \rightarrow \infty } \lim _{M \rightarrow \infty } \lim _{N \rightarrow \infty } \Big | \sum _{r=0}^K H_{r,N}- \sum _{r=0}^K H^{\mathsf {(rew)}}_{r,N,R,M}\Big |=0 \,. \end{aligned}$$
(3.62)

By Proposition 3.11 we have that

$$\begin{aligned} \lim _{K \rightarrow \infty } \lim _{R \rightarrow \infty }\lim _{M \rightarrow \infty }\lim _{N \rightarrow \infty } \sum _{r=0}^K H^{\mathsf {(rew)}}_{r,N,R,M}=\prod _{1\leqslant i <j \leqslant h} \frac{1}{1-\beta _{i,j}} \,. \end{aligned}$$
(3.63)

Therefore, combining (3.61), (3.62) and (3.63), and then letting \(\epsilon \rightarrow 0\), we obtain that

$$\begin{aligned} \lim _{N \rightarrow \infty } M^{\varvec{\beta }}_{N,h}=\prod _{1\leqslant i <j \leqslant h} \frac{1}{1-\beta _{i,j}}. \end{aligned}$$
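For completeness, the limit above is precisely the joint moment generating function of independent standard exponential random variables (the notation \(Y_{i,j}\) is introduced here only for this identification): if \(Y_{i,j}\), \(1\leqslant i<j\leqslant h\), are independent with \(Y_{i,j}\sim \textrm{Exp}(1)\) and \(\beta _{i,j}\in [0,1)\), then an elementary computation gives

$$\begin{aligned} \textrm{E}\Big [\exp \Big (\sum _{1\leqslant i<j\leqslant h}\beta _{i,j} Y_{i,j}\Big )\Big ] =\prod _{1\leqslant i<j\leqslant h}\int _0^\infty e^{\beta _{i,j} y}\, e^{-y}\, \textrm{d}y =\prod _{1\leqslant i<j\leqslant h}\frac{1}{1-\beta _{i,j}} \,, \end{aligned}$$

which identifies the limiting law in (1.3), as claimed.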

\(\square \)