1 Introduction

The Gaussian unitary ensemble (GUE) is the Gaussian ensemble of Hermitian random matrices \( M ^{N }\) \( (N \in \mathbb {N})\) with independent entries on and above the diagonal. By definition, \( M ^{N }=[M ^{N }_{i,j}]_{i,j=1}^{N }\) is an \( N \times N \) Hermitian matrix of the form

$$\begin{aligned}&M ^{N }_{i,j} = {\left\{ \begin{array}{ll} \xi _i&{}\text { if } i=j \\ \tau _{i,j} /\sqrt{2}+ \sqrt{-1} \zeta _{i,j} /\sqrt{2} &{} \text { if } i<j, \end{array}\right. } \end{aligned}$$

where \(\{ \xi _{i} \}_{i=1}^{\infty }\) and \(\{ \tau _{i,j} , \zeta _{i,j}\}_{i<j}\) are i.i.d. Gaussian random variables with mean zero and variance \( 1/2 \). The eigenvalues \( \lambda _1 ,\ldots , \lambda _{N }\) of \( M ^{N }\) are then real, and their joint distribution \( \check{\mu }^{N}\) is given by

$$\begin{aligned} \check{\mu }^{N}(\mathrm{d}\mathbf x _N)=\frac{1}{Z^N} \prod _{i<j}^{N} | x_i - x_j |^2 \prod _{k=1}^{N} e^{-| x_k |^2 } \,\mathrm{d}\mathbf x _N , \end{aligned}$$
(1.1)

where \(\mathbf x _N=(x_1,\ldots ,x_N)\in \mathbb {R}^{N }\) and \(Z^N\) is a normalizing constant [1]. Wigner’s celebrated semicircle law asserts that their empirical distributions converge in distribution to a semicircle distribution:

$$\begin{aligned}&\lim _{N \rightarrow \infty } \frac{1}{N } \{ \delta _{\lambda _1/\sqrt{N }} +\cdots + \delta _{\lambda _{N }/\sqrt{N }} \} = \frac{1}{\pi } 1_{(-\sqrt{2},\sqrt{2})}(x) \sqrt{2-x^2} \,\mathrm{d}x . \end{aligned}$$

One may regard this convergence as a law of large numbers because the limit distribution is a non-random probability measure.
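For orientation, the semicircle law is easy to observe numerically. The following is a minimal sketch (not part of the original argument), assuming NumPy is available; the matrix size and the bin count are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # illustrative matrix size

# Sample a GUE matrix with the normalization of this section:
# diagonal entries are N(0, 1/2); entries above the diagonal are
# (tau + i*zeta)/sqrt(2) with independent tau, zeta ~ N(0, 1/2).
xi = rng.normal(0.0, np.sqrt(0.5), size=N)
tau = rng.normal(0.0, np.sqrt(0.5), size=(N, N))
zeta = rng.normal(0.0, np.sqrt(0.5), size=(N, N))
M = np.triu((tau + 1j * zeta) / np.sqrt(2), k=1)
M = M + M.conj().T + np.diag(xi)

# After rescaling by sqrt(N), the eigenvalues follow the semicircle law
# with density sqrt(2 - x^2)/pi on (-sqrt(2), sqrt(2)).
lam = np.linalg.eigvalsh(M) / np.sqrt(N)
hist, edges = np.histogram(lam, bins=50, range=(-np.sqrt(2), np.sqrt(2)), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.maximum(2 - centers**2, 0)) / np.pi
print(np.max(np.abs(hist - semicircle)))  # small (statistical fluctuations only)
```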

We next consider the scaling of the next order, under which the limit distribution is supported on the set of configurations. That is, let \( \theta \) be a macroscopic position satisfying

$$\begin{aligned}&- \sqrt{2}< \theta < \sqrt{2}\end{aligned}$$
(1.2)

and take the scaling \( x \mapsto y \) such that

$$\begin{aligned}&x = \frac{y}{\sqrt{N }} + \theta \sqrt{N } . \end{aligned}$$
(1.3)

Let \( \mu _\theta ^N\) be the point process for which the labeled density \( \mathbf m _{\theta }^{N } \mathrm{d}{} \mathbf x _{N }\) is given by

$$\begin{aligned}&\mathbf m _{\theta }^{N } (\mathbf x _{N })= \frac{1}{Z^N} \prod _{i<j}^{N} | x_i - x_j |^2 \prod _{k=1}^{N} \mathrm{e}^{-| {x_k} + \theta N |^2/N } . \end{aligned}$$
(1.4)

The position \( \theta \) in (1.2) is called the bulk and the scaling in (1.3) the bulk scaling (of the point processes). It is well known that the rescaled point processes \( \mu _\theta ^N\) satisfy

$$\begin{aligned}&\lim _{N \rightarrow \infty } \mu _\theta ^N= \mu _{\theta }\quad \text { in distribution} , \end{aligned}$$
(1.5)

where \( \mu _{\theta }\) is the determinantal point process with sine kernel \( \textsf {K}_{\theta }\):

$$\begin{aligned}&\textsf {K}_{\theta } (x,y) = \frac{\sin \{\sqrt{2-\theta ^2} (x-y) \}}{\pi (x-y)} . \end{aligned}$$

By definition, \(\mu _\theta \) is the point process on \( \mathbb {R}\) for which the m-point correlation function \( \rho _{\theta }^m \) with respect to the Lebesgue measure is given by

$$\begin{aligned}&\rho _{\theta }^m (x_1,\ldots ,x_m) = \det [\textsf {K}_{\theta } (x_i,x_j) ]_{i,j=1}^m . \end{aligned}$$

We hence see that the limit is universal in the sense that it is the Sine\(_{2}\) point process and is independent of the macroscopic position \( \theta \) up to a dilation of the determinantal kernels \( \textsf {K}_{\theta } \). This may be regarded as a first step toward the universality of the Sine\( _{2}\) point process, which has been studied extensively for general inverse temperature \(\beta \) and a wide class of free potentials (see [2] and the references therein).
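As an aside (not from the original text), the sine-kernel correlation functions are straightforward to evaluate numerically. The sketch below assumes NumPy; the helper names sine_kernel and correlation are ours, and np.sinc computes \(\sin(\pi t)/(\pi t)\).

```python
import numpy as np

def sine_kernel(x, y, theta=0.0):
    """K_theta(x, y) = sin(sqrt(2 - theta^2) (x - y)) / (pi (x - y)),
    with the diagonal value sqrt(2 - theta^2) / pi."""
    c = np.sqrt(2.0 - theta**2)
    # np.sinc(t) = sin(pi t)/(pi t), hence K_theta(x, y) = (c/pi) * sinc(c (x-y)/pi).
    return (c / np.pi) * np.sinc(c * (np.asarray(x) - np.asarray(y)) / np.pi)

def correlation(points, theta=0.0):
    """m-point correlation rho_theta^m(x_1,...,x_m) = det[K_theta(x_i, x_j)]."""
    x = np.asarray(points, dtype=float)
    return np.linalg.det(sine_kernel(x[:, None], x[None, :], theta))

print(correlation([0.0], theta=0.5))        # one-point function sqrt(2 - 0.25)/pi
print(correlation([0.0, 0.01], theta=0.5))  # nearly 0: repulsion between close points
```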

Once static universality is established, it is natural to ask about its dynamical counterpart. Indeed, we shall prove the dynamical version of (1.5) and exhibit a phenomenon, called stochastic differential equation (SDE) gaps, for \( \theta \not = 0 \).

Two natural \( N \)-particle dynamics are known for the GUE. One is Dyson's Brownian motion, the time-inhomogeneous \( N \)-particle dynamics given by the time evolution of the eigenvalues of time-dependent Hermitian random matrices \( \mathcal {M}^{N }(t)\) whose coefficients are Brownian motions \( B_t^{i,j}\) [10].

The other is a diffusion process \( \mathbf X ^{\theta ,N }=(X^{\theta ,N , i})_{i=1}^{N }=\{(X_t^{\theta ,N , i})_{i=1}^{N }\}_t\) given by the SDE such that for \( 1 \le i \le N \)

$$\begin{aligned} \mathrm{d}X_t^{\theta ,N,i} = \mathrm{d}B_t^{i} + \sum _{ j \ne i }^{N } \frac{1}{X_t^{\theta ,N,i} - X_t^{\theta ,N,j}}\mathrm{d}t - \frac{1}{N}X_t^{\theta ,N,i}\,\mathrm{d}t -\theta \, \mathrm{d}t, \end{aligned}$$
(1.6)

which has a unique strong solution for \( \mathbf X _0^{\theta ,N } \in \mathbb {R}^{N } \backslash \mathcal {N}\) and \( \mathbf X ^{\theta ,N }\) never hits \( \mathcal {N} \), where \( \mathcal {N} = \{ \mathbf x =(x_k)_{k=1}^N;\, x_i=x_j \text { for some } i\not =j \} \) [4].

The derivation of (1.6) is as follows: Let \( \check{\mu }_\theta ^N(\mathrm{d}{} \mathbf x _{N }) = \mathbf m _{\theta }^{N } (\mathbf x _{N })\mathrm{d}{} \mathbf x _{N }\) be the labeled symmetric distribution of \( \mu _\theta ^N\). Consider a Dirichlet form on \( L^2(\mathbb {R}^{N }, \check{\mu }_\theta ^N)\) such that

$$\begin{aligned}&\mathcal {E}^{\check{\mu }_\theta ^N} (f,g) = \int _{ \mathbb {R}^{N }} \frac{1}{2} \sum _{i=1}^{N } \frac{\partial f }{\partial x_i} \frac{\partial g }{\partial x_i} \check{\mu }_\theta ^N(\mathrm{d}{} \mathbf x _{N } ) . \end{aligned}$$

Then, using (1.4) and integration by parts, we identify the generator \( A^{N }\) of \( \mathcal {E}^{\check{\mu }_\theta ^N}\) on \( L^2(\mathbb {R}^{N }, \check{\mu }_\theta ^N)\), characterized by \( \mathcal {E}^{\check{\mu }_\theta ^N}(f,g) = -( A^{N } f , g )_{L^2(\check{\mu }_\theta ^N)} \), as

$$\begin{aligned}&A^{N } = \frac{1}{2}\Delta + \sum _{i=1}^{N } \left\{ \sum _{j\not =i }^{N } \frac{1}{x_i-x_j} \right\} \frac{\partial }{\partial x_i} -\sum _{i=1}^{N } \left\{ \frac{x_i}{N } + \theta \right\} \frac{\partial }{\partial x_i} . \end{aligned}$$

From this, we deduce that the associated diffusion \( \mathbf X ^{\theta ,N }\) is given by (1.6).
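For illustration only, here is a crude Euler–Maruyama discretization of the \( N \)-particle SDE (1.6). It is a sketch rather than a faithful simulation: the step size, particle number, and initial configuration are arbitrary choices of ours, and no special care is taken near collisions.

```python
import numpy as np

def dyson_step(x, dt, theta, rng):
    """One Euler-Maruyama step for the N-particle SDE (1.6)."""
    N = len(x)
    diff = x[:, None] - x[None, :]        # X^i - X^j
    np.fill_diagonal(diff, np.inf)        # drop the j = i term from the sum
    drift = np.sum(1.0 / diff, axis=1) - x / N - theta
    return x + drift * dt + np.sqrt(dt) * rng.normal(size=N)

rng = np.random.default_rng(1)
N, dt, theta = 50, 1e-4, 0.3              # illustrative parameters
x = np.arange(N, dtype=float)             # a well-separated starting configuration
for _ in range(1000):
    x = dyson_step(x, dt, theta, rng)
print(x.min(), x.max())
```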

Taking the limit \( N \rightarrow \infty \) in (1.6), we formally obtain the infinite-dimensional SDE (ISDE) for \( \mathbf X ^{\theta } = (X^{\theta , i})_{i\in \mathbb {N}}\) such that

$$\begin{aligned} \mathrm{d} X_t^{\theta ,i}&= \mathrm{d}B_t^i +\sum _{j\ne i}^{\infty } \frac{1}{X_t^{\theta ,i} - X_t^{\theta ,j}}\,\mathrm{d}t -\theta \,\mathrm{d}t , \end{aligned}$$
(1.7)

which was introduced in [22] for \( \theta = 0 \). For each \( \theta \), we have a unique strong solution \( \mathbf X ^{\theta } \) of (1.7) such that \( \mathbf X _0^{\theta } = \mathbf s \) for \( \mu _{\theta }\circ \mathfrak {l} ^{-1}\)-a.s. \( \mathbf s \), where \( \mathfrak {l} \) is a labeling map. Although only the \( \theta = 0 \) ISDE for \( \mathbf X ^0 =: \mathbf X = (X^i)_{i\in \mathbb {N}}\) is studied in [17, 23], the case \( \theta \not = 0\) nevertheless follows easily using the transformation

$$\begin{aligned}&X_t^{\theta ,i} = X_t^i - \theta t . \end{aligned}$$

Let \( \mathsf {X}_t^{\theta } = \sum _i \delta _{X_t^{\theta , i}}\) be the associated delabeled process. Then, \( \mathsf {X}^{\theta }= \{ \mathsf {X}_t^{\theta } \} \) has \( \mu _{\theta }\) as an invariant probability measure and is not \( \mu _{\theta }\)-symmetric for \( \theta \not = 0\).

The precise meaning of the drift term in (1.7) is obtained by substituting \( \mathbf X _t^{\theta } =(X_t^{\theta , i})_{i\in \mathbb {N}} \) into the function \( b (x,\mathsf {y})\) given by the conditional sum

$$\begin{aligned}&b (x,\mathsf {y}) = \lim _{r\rightarrow \infty } \left\{ \sum _{|x-y_i|<r} \frac{1}{x-y_i} \right\} - \theta \quad \text { in } L_{\mathrm {loc}}^1 (\mu _{\theta }^{[1]}) , \end{aligned}$$
(1.8)

where \( \mathsf {y}=\sum _i \delta _{y_i}\) and \( \mu _{\theta }^{[1]}\) is the one-Campbell measure of \( \mu _{\theta }\) (see (2.1)). That is, the drift term acting on the i-th particle is \( b (X_t^{\theta , i}, \sum _{j\not =i}\delta _{X_t^{\theta , j}})\,\mathrm{d}t \). Because \( \mu _{\theta }\) is translation invariant, it is easily checked that (1.8) is equivalent to (1.9):

$$\begin{aligned}&b (x,\mathsf {y}) = \lim _{r\rightarrow \infty } \left\{ \sum _{|y_i|<r} \frac{1}{x-y_i} \right\} - \theta \quad \text { in } L_{\mathrm {loc}}^1 (\mu _{\theta }^{[1]}) . \end{aligned}$$
(1.9)
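As a toy illustration (ours, not from the paper) of the conditional convergence in (1.8)–(1.9), one may replace a Sine\(_2\) sample by the periodic configuration \( y_i = i \) \( (i \in \mathbb {Z}\setminus \{0\}) \); the symmetric partial sums then stabilize, and for this particular configuration the limit has the closed form \( \pi \cot (\pi x) - 1/x \).

```python
import numpy as np

# Symmetric partial sums of 1/(x - y_i) for the toy configuration y_i = i,
# i in Z \ {0}; sums of this type define the drift b in (1.8)-(1.9).
x = 0.3
limit = np.pi / np.tan(np.pi * x) - 1.0 / x   # closed form for this toy configuration
for r in (10, 100, 1000, 10000):
    i = np.arange(-(r - 1), r)
    i = i[i != 0]
    print(r, np.sum(1.0 / (x - i)), limit)
```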

Let \( \mathfrak {l} _{N }\) and \( \mathfrak {l} \) be labeling maps. We denote by \( \mathfrak {l} _{N ,m}\) and \( \mathfrak {l} _{m }\) the first m-components of \( \mathfrak {l} _{N }\) and \( \mathfrak {l} \), respectively. We assume that, for each \(m \in {\mathbb N}\),

$$\begin{aligned} \lim _{N \rightarrow \infty } \mu _\theta ^{N} \circ \mathfrak {l} _{N ,m}^{-1} = \mu _{\theta } \circ \mathfrak {l} _{m }^{-1} \text { weakly } . \end{aligned}$$
(1.10)

Let \( \mathbf X ^{\theta ,N }=(X^{\theta ,N , i})_{i=1}^{N }\) and \( \mathbf X =(X^i)_{i\in \mathbb {N}}\) be solutions of SDEs (1.6) and (1.11), respectively, such that

$$\begin{aligned} \mathrm{d}X_t^{\theta ,N,i}&= \mathrm{d}B_t^{i} + \sum _{ j \ne i }^{N } \frac{1}{X_t^{\theta ,N,i} - X_t^{\theta ,N,j}}\mathrm{d}t - \frac{1}{N}X_t^{\theta ,N,i}\,\mathrm{d}t -\theta \, \mathrm{d}t , \end{aligned}$$
(1.6)
$$\begin{aligned} \mathrm{d}X_t^i&= \mathrm{d}B_t^i + \lim _{r\rightarrow \infty } \sum _{j\ne i,\, |X_t^i - X_t^j|< r }^{\infty } \frac{1}{X_t^i - X_t^j}\,\mathrm{d}t . \end{aligned}$$
(1.11)

We now state the first main result of the present paper.

Theorem 1.1

Assume (1.2) and (1.10). Assume that \( \mathbf X _0^{\theta ,N } = \mu _\theta ^N\circ \mathfrak {l} _{N }^{-1} \) in distribution and \( \mathbf X _0 = \mu _{\theta }\circ \mathfrak {l} ^{-1} \) in distribution. Then, for each \( m \in \mathbb {N}\),

$$\begin{aligned} \lim _{{ N} \rightarrow \infty } (X^{\theta , N ,1},X^{\theta , N ,2},\ldots ,X^{\theta , N ,m}) = (X^{1},X^{2},\ldots ,X^{m}) \end{aligned}$$
(1.12)

weakly in \( C([0,\infty ) ,\mathbb {R}^m)\). In particular, the limit \( \mathbf X =(X^i)_{i\in \mathbb {N}}\) does not satisfy (1.7) for any \( \theta \) other than \( \theta = 0 \).

We next consider non-reversible initial distributions. Let \( \mathbf X ^N = (X^{N,i})_{i=1}^N\) and \( \mathbf Y ^{\theta } =(Y^{\theta , i})_{i \in \mathbb {N}}\) be solutions of (1.13) and (1.14), respectively, such that

$$\begin{aligned} \quad \mathrm{d}X_t^{N,i}&= \mathrm{d}B_t^{i} + \sum _{ j \ne i }^{N } \frac{1}{X_t^{N,i} - X_t^{N,j}}\mathrm{d}t - \frac{1}{N}X_t^{N,i}\,\mathrm{d}t , \end{aligned}$$
(1.13)
$$\begin{aligned} \mathrm{d} Y_t^{\theta ,i}&= \mathrm{d}B_t^i + \lim _{r\rightarrow \infty } \sum _{j\ne i,\, |Y_t^{\theta ,i} - Y_t^{\theta ,j}|< r }^{\infty } \frac{1}{Y_t^{\theta ,i} - Y_t^{\theta ,j}}\,\mathrm{d}t + \theta \,\mathrm{d}t . \end{aligned}$$
(1.14)

Note that \( \mathbf X ^N = \mathbf X ^{0 ,N }\) and that \( \mathbf X ^N \) is not reversible with respect to \( \mu _\theta ^N\circ \mathfrak {l} _{N }^{-1}\) for any \( \theta \not = 0 \). We remark that the delabeled process \( \mathsf {Y}^{\theta }= \{\sum _{i\in \mathbb {N}} \delta _{Y_t^{\theta ,i}}\}\) of \( \mathbf Y ^{\theta }\) has invariant probability measure \( \mu _{\theta }\) and is not symmetric with respect to \( \mu _{\theta }\) for \( \theta \not =0 \). We state the second main theorem.

Theorem 1.2

Assume (1.2) and (1.10). Assume that \( \mathbf X _0^N = \mu _\theta ^N\circ \mathfrak {l} _{N }^{-1} \) in distribution and \( \mathbf Y _0^{\theta } = \mu _{\theta }\circ \mathfrak {l} ^{-1} \) in distribution. Then, for each \( m \in \mathbb {N}\)

$$\begin{aligned} \lim _{{ N} \rightarrow \infty } (X^{{ N},1},X^{{ N},2},\dots ,X^{{ N},m}) = (Y^{\theta ,1}, Y^{\theta ,2}, \ldots , Y^{\theta ,m}) \end{aligned}$$
(1.15)

weakly in \( C([0,\infty ) ,\mathbb {R}^m)\).

  • We refer to the second claim in Theorem 1.1 and to (1.15) as the SDE gaps. The convergence (1.15) in Theorem 1.2 resembles the propagation of chaos in the sense that the limit equation (1.14) depends on the initial distribution, although it is a linear equation. Because the logarithmic potential is long-ranged by nature, the effect of the initial distributions \( \mu _\theta ^N\) persists in the limit ISDE, and the rigidity of the Sine\( _{\mathrm {2}}\) point process turns this residual effect into the non-random drift term \( \theta \,\mathrm{d}t \).

    There is a result on the dynamical universality of Dyson's Brownian motion in [9]. That result is proved in a fairly general situation, but it is restricted to finite-particle systems. Our result derives the ISDE from finite-particle systems and can thus be regarded as a dynamical universality of Dyson's Brownian motion in infinite dimensions; it clarifies that the ISDE of Dyson's Brownian motion in infinite dimensions plays the role that Brownian motion plays in the invariance principle in finite dimensions.

  • Let \( \mathsf {S}_{\theta }\) be a Borel set such that \( \mu _{\theta }(\mathsf {S}_{\theta })= 1 \), where \( -\sqrt{2}< \theta < \sqrt{2}\). In [7], the first author proves that one can choose \( \mathsf {S}_{\theta }\) such that \( \mathsf {S}_{\theta } \cap \mathsf {S}_{\theta '} = \emptyset \) for \( \theta \not = \theta '\), and that for each \( \mathsf {s} \in \mathsf {S}_{\theta }\), (1.11) has a strong solution \( \mathbf X \) with \( \mathbf X _0 = \mathfrak {l} (\mathsf {s})\) such that

    $$\begin{aligned} \mathsf {X}_t := \sum _{i=1}^{\infty } \delta _{X_t^i} \in \mathsf {S}_{\theta } \quad \text { for all }t \in [0,\infty ) . \end{aligned}$$

    This implies that the state space of solutions of (1.11) can be decomposed into uncountable disjoint components. We conjecture that the component \( \mathsf {S}_{\theta }\) is ergodic for each \( \theta \in (-\sqrt{2}, \sqrt{2})\).

  • For \( \theta = 0 \), the convergence (1.12) is also proved in [16]. The proof in [16] is algebraic and valid only for dimension \( d=1 \) and inverse temperature \( \beta = 2\) with the logarithmic potential. It relies on an explicit calculation of the space-time correlation functions, the strong Markov property of the stochastic dynamics given by the algebraic construction, the identity of the associated Dirichlet forms constructed by two completely different methods, and the uniqueness of solutions of ISDE (1.7).

    Although one may prove (1.10) for \( \theta \not = 0\) using the algebraic method in [16], this would require substantial work, as mentioned above. We remark that, as a corollary and an application, Theorem 1.1 yields the weak convergence of the finite-dimensional distributions explicitly given by the space-time correlation functions. We refer to [5, 16] for the representation of these correlation functions.

  • Tsai proves the pathwise uniqueness and the existence of strong solutions of

    $$\begin{aligned} \mathrm{d}X_t^i&= \mathrm{d}B_t^i + \frac{\beta }{2}\lim _{r\rightarrow \infty } \sum _{j\ne i,\, |X_t^i - X_t^j|< r }^{\infty } \frac{1}{X_t^i - X_t^j}\,\mathrm{d}t \quad (i\in \mathbb {N}) \end{aligned}$$
    (1.16)

    for general \( \beta \in [1,\infty )\) in [23]. The proof uses classical stochastic analysis and crucially depends on a specific monotonicity of the SDEs (1.16). For \( \beta = 1,4\), we have good control of the correlation functions, as for \( \beta = 2 \). Hence, our method can be applied to \( \beta = 1,4 \), and the same result as in Theorem 1.1 (for \( \beta = 2 \)) holds. We shall return to this point in the future.

    It would be an interesting problem to apply Tsai's method to the present problem; one might then obtain convergence at the non-equilibrium level. The difficulty is, however, that Tsai's method crucially depends on the translation invariance of the stationary measure. As a result, it seems difficult at present to apply it to solve the ISDE for the Airy interacting Brownian motion. It is thus not necessarily obvious that Tsai's method is applicable for \(\theta \not =0 \), because of the lack of translation invariance.

The key point of the proof of Theorem 1.1 is to prove the convergence of the drift coefficient \( b ^N(x,\mathsf {y})\) of the N-particle system to the drift coefficient \( b (x,\mathsf {y})\) of the limit ISDE even if \( \theta \not = 0 \). That is, as \( N \rightarrow \infty \),

$$\begin{aligned}&b ^N(x,\mathsf {y})= \left\{ \sum _{i=1}^N \frac{1}{x-y_i} \right\} - \theta \quad \Longrightarrow \quad b (x,\mathsf {y})= \lim _{r\rightarrow \infty } \left\{ \sum _{|y_i|<r} \frac{1}{x-y_i} \right\} . \end{aligned}$$

Note that the supports of the coefficients \( b ^N (x,\mathsf {y})\) and \( b (x,\mathsf {y})\) are mutually disjoint and that the sum in \( b ^N \) is not neutral for any \( \theta \not = 0 \). We shall prove uniform bounds on the tails of these coefficients using fine estimates of the correlation functions and cancel the deviation of the sum in \( b ^N \) against \( \theta \). Because of the rigidity of the Sine\(_{\mathrm {2}}\) point process, this cancelation is justified not only in the static but also in the dynamical setting.

The organization of the paper is as follows: In Sect. 2, we prepare general theories for interacting Brownian motion in infinite dimensions. In Sect. 3, we quote estimates for the oscillator wave functions and determinantal kernels. In Sect. 4, we prove key estimates (2.21)–(2.24). In Sect. 5, we complete the proof of Theorem 1.1. In Sect. 6, we prove Theorem 1.2.

2 Preliminaries from General Theory

In this section, we present the general theory developed in [8, 12, 13, 17] in a form reduced to what is needed for the present purpose. In particular, we take \( \mathbb {R}\), rather than \( \mathbb {R}^d\) as in the cited articles, as the space in which the particles move.

2.1 \(\mu \)-Reversible Diffusions

Let \(S_r=\{s \in \mathbb {R}\, ; |s| < r \} \). The configuration space \(\mathsf {S}\) over \( \mathbb {R}\) is a Polish space equipped with the vague topology such that

$$\begin{aligned}&\mathsf {S}= \left\{ \mathsf {s} = \sum _i \delta _{s_i}\, ;\, \mathsf {s} ( S_r) < \infty \text { for all } r \in \mathbb {N}\right\} . \end{aligned}$$

Each element \( \mathsf {s} \in \mathsf {S}\) is called a configuration and is regarded as a countable collection of delabeled particles. A probability measure \(\mu \) on \((\mathsf {S},\mathcal {B}(\mathsf {S}))\) is called a point process (or a random point field).

A locally integrable symmetric function \(\rho ^n:\mathbb {R}^n \rightarrow [0 ,\infty )\) is called the n-point correlation function of \(\mu \) with respect to the Lebesgue measure if \(\rho ^n\) satisfies

$$\begin{aligned} \int _{A_1^{k_1}\times \cdots \times A_m^{k_m}} \rho ^n (s_1,\ldots ,s_n) \,d\mathbf s _n = \int _{\mathsf {S}} \prod _{i = 1}^{m} \frac{\mathsf {s} (A_i) ! }{(\mathsf {s} (A_i) - k_i )!} \mu (d\mathsf {s}) \end{aligned}$$

for any sequence of disjoint bounded measurable subsets \( A_1,\ldots ,A_m \subset \mathbb {R}\) and any sequence of natural numbers \( k_1,\ldots ,k_m \) satisfying \( k_1+\cdots + k_m = n \). Here, we adopt the convention that \({\mathsf {s} (A_i) ! }/{(\mathsf {s} (A_i) - k_i )!} =0\) whenever \(\mathsf {s} (A_i) - k_i < 0\).

Let \( \Phi :\mathbb {R} \rightarrow \mathbb {R}\) and \( \Psi :\mathbb {R}^2 \rightarrow \mathbb {R}\cup \{ \infty \} \) be measurable functions called free and interaction potentials, respectively. Let \(\mathcal {H}_r\) be the Hamiltonian on \(S_r\) given by

$$\begin{aligned} \mathcal {H}_r(\textsf {x}) =\sum _{x_i\in S_r}\Phi (x_i) +\sum _{j\ne k,x_j,x_k\in S_r}\Psi (x_j,x_k)\quad \text { for }\textsf {x}=\sum _i \delta _{x_i } . \end{aligned}$$

For each \( m ,r \in \mathbb {N}\) and \( \mu \)-a.s. \( \xi \in \mathsf {S}\), let \( \mu _{r,\xi }^{m} \) denote the regular conditional probability such that

$$\begin{aligned} \mu _{r,\xi }^{m} =\mu (\pi _{S_r}(\textsf {x})\in \cdot \,|\, \pi _{S_r^c}(\textsf {x})=\pi _{S_r^c}(\xi ),\, \textsf {x}(S_r)=m) . \end{aligned}$$

Here, for a subset A, we set \( \pi _{A}:\mathsf {S} \rightarrow \mathsf {S} \) by \( \pi _{A} (\mathsf {s}) = \mathsf {s} (\cdot \cap A )\).

Let \(\Lambda _r\) denote the Poisson point process with intensity being a Lebesgue measure on \(S_r\). We set \(\Lambda _r^m (\cdot )=\Lambda _r(\cdot \cap \mathsf {S}_{r}^{m})\), where \(\mathsf {S}_{r}^{m}=\{\mathsf {s}\in \mathsf {S}\,;\, \mathsf {s}(S_r)=m \}\).

Definition 1

([13, 14]) A point process \(\mu \) is said to be a \((\Phi ,\Psi )\)-quasi-Gibbs measure if its regular conditional probabilities \( \mu _{r,\xi }^{m} \) satisfy, for any \(r,m\in \mathbb {N}\) and \(\mu \)-a.s. \(\xi \),

$$\begin{aligned} c_{1}^{-1} e^{-\mathcal {H}_r(\textsf {x})}\Lambda _r^m (\mathrm{d}\textsf {x}) \le \mu _{r,\xi }^{m}(\mathrm{d}\textsf {x}) \le c_{1} e^{-\mathcal {H}_r(\textsf {x})}\Lambda _r^m (\mathrm{d}\textsf {x}) . \end{aligned}$$

Here, \(c_{1}\) is a positive constant depending on \(r,m,\xi \).

The significance of the quasi-Gibbs property is that it guarantees the existence of a \( \mu \)-reversible diffusion process \( \{ P_{\mathsf {s}} \} \) on \( \mathsf {S}\) given by the natural Dirichlet form associated with \( \mu \), in analogy with distorted Brownian motion in finite dimensions.

To introduce the Dirichlet form, we prepare some notation. We say a function f on \( \mathsf {S}\) is local if f is \( \sigma [\pi _{K}]\)-measurable for some compact set K in \( \mathbb {R}\). For a local function f on \( \mathsf {S}\), we say f is smooth if \( \check{f}\) is smooth, where \( \check{f}(x_1,\ldots )\) is the symmetric function such that \( \check{f}(x_1,\ldots ) = f (\mathsf {x})\) for \( \mathsf {x} = \sum _i \delta _{x_i}\). Let \( \mathcal {D} _{\circ }\) be the set of all bounded local smooth functions on \( \mathsf {S}\).

Let \( \mathbb {D}\) be the standard square field on \( \mathsf {S}\) such that for \( f , g \in \mathcal {D} _{\circ }\) and \( \mathsf {s}=\sum _i\delta _{s_i}\)

$$\begin{aligned}&\mathbb {D}[f,g] (\mathsf {s})= \frac{1}{2}\left\{ \sum _i (\nabla _i{\check{f}} )(\nabla _i{\check{g}}) \right\} \, (\mathsf {s}) . \end{aligned}$$

We write \( \mathbf s =(s_i)_i\). Because the function \( \sum _i (\nabla _i {\check{f}}) (\mathbf s ) (\nabla _i{\check{g}}) (\mathbf s )\) is symmetric in \( \mathbf s =(s_i)_i\), we regard it as a function of \( \mathsf {s}\). We set \( L^{2}(\mu )= L^2(\mathsf {S},\mu )\) and let

$$\begin{aligned}&\mathcal {E}^{\mu }(f,g) = \int _{\mathsf {S}} \mathbb {D}[f,g] (\mathsf {s}) \mu (d\mathsf {s}), \quad \mathcal {D} _{\circ }^{\mu } =\{ f \in \mathcal {D} _{\circ }\cap L^{2}(\mu )\, ;\, \mathcal {E}^{\mu }(f,f) < \infty \} . \end{aligned}$$

We quote:

Lemma 1

([13]) Assume that \( \mu \) is a \((\Phi ,\Psi )\)-quasi-Gibbs measure with upper semicontinuous \((\Phi ,\Psi )\). Assume that the correlation functions \( \{ \rho ^n \} \) are locally bounded for all \( n \in \mathbb {N}\). Then, \( (\mathcal {E}^{\mu }, \mathcal {D} _{\circ }^{\mu } )\) is closable on \( L^{2}(\mu )\). Furthermore, there exists a \( \mu \)-reversible diffusion process \( \{ P_{\mathsf {s}} \} \) associated with the Dirichlet form \( (\mathcal {E}^{\mu }, \mathcal {D}^{\mu } )\) on \( L^{2}(\mu )\). Here, \( (\mathcal {E}^{\mu }, \mathcal {D}^{\mu } )\) is the closure of \( (\mathcal {E}^{\mu }, \mathcal {D} _{\circ }^{\mu })\) on \( L^{2}(\mu )\).

2.2 Infinite-Dimensional SDEs

Suppose that the diffusion \( \{ P_{\mathsf {s}} \} \) in Lemma 1 is collision-free and that no tagged particle explodes. Then, we can construct the labeled dynamics \( \mathbf X =(X^i)_{i\in \mathbb {Z}}\) by introducing the initial labeling \( \mathfrak {l} =(\mathfrak {l} _i)_{i\in \mathbb {Z}}\) such that

$$\begin{aligned}&\mathbf X _0 = \mathfrak {l} (\mathsf {X}_0) . \end{aligned}$$

Indeed, once the label \( \mathfrak {l} \) is given at time zero, then each particle retains the tag for all time because of the collision-free and explosion-free property.

To specify the ISDEs satisfied by \( \mathbf X \) above, we introduce the notion of the logarithmic derivative of \( \mu \), which was introduced in [12].

A point process \( \mu _{x}\) is called the reduced Palm measure of \( \mu \) conditioned at \( x \in \mathbb {R}\) if \( \mu _{x}\) is the regular conditional probability defined as

$$\begin{aligned}&\mu _{x} = \mu (\cdot - \delta _x | \mathsf {s} (\{ x \} ) \ge 1 ) . \end{aligned}$$

A Radon measure \( \mu ^{[1]}\) on \( \mathbb {R}\times \mathsf {S}\) is called the 1-Campbell measure of \( \mu \) if

$$\begin{aligned}&\mu ^{[1]}(\mathrm{d}x \mathrm{d}\mathsf {s}) = \rho ^1 (x) \mu _{x} (\mathrm{d}\mathsf {s}) \mathrm{d}x . \end{aligned}$$
(2.1)

We write \( f \in L_{\mathrm {loc}}^p(\mu ^{[1]})\) if \( f \in L^p(S_r\times \mathsf {S}, \mu ^{[1]})\) for all \( r \in \mathbb {N}\).

Definition 2

An \( \mathbb {R}\)-valued function \( \textsf {d}^{\mu }\in L_{\mathrm {loc}}^1(\mu ^{[1]})\) is called the logarithmic derivative of \(\mu \) if, for all \(\varphi \in C_{0}^{\infty }(\mathbb {R})\otimes \mathcal {D} _{\circ }\),

$$\begin{aligned}&\int _{\mathbb {R}\times \mathsf {S}} \textsf {d}^{\mu }(x,\mathsf {y})\varphi (x,\mathsf {y}) \mu ^{[1]}(\mathrm{d}x \mathrm{d}\mathsf {y}) = - \int _{\mathbb {R}\times \mathsf {S}} \nabla _x \varphi (x,\mathsf {y}) \mu ^{[1]}(\mathrm{d}x \mathrm{d}\mathsf {y}). \end{aligned}$$

Under these assumptions, we obtain the following:

Lemma 2

([12]) Assume that \( \mathbf X =(X^i)_{i\in \mathbb {N}}\) is collision-free and explosion-free. Then, \( \mathbf X \) is a solution of the following ISDE:

$$\begin{aligned}&\quad \mathrm{d}X_t^i = \mathrm{d}B_t^i + \frac{1}{2} \textsf {d}^{\mu }(X_t^i , \textsf {X}_{t}^{\diamond i}) \mathrm{d}t \quad (i \in \mathbb {N}) \end{aligned}$$
(2.2)

with initial condition \( \mathbf X _0 = \mathbf s \) for \( \mu \circ \mathfrak {l} ^{-1}\)-a.s. \( \mathbf s \), where \( \textsf {X}_{t}^{\diamond i}= \sum _{j\not =i}^{\infty } \delta _{X_t^{j}}\).

2.3 Finite-Particle Approximations

Let \( \mu \) be a point process with correlation functions \( \{ \rho ^n \}_{n\in \mathbb {N}} \). Let \( \{\mu ^{N}\}_{N \in \mathbb {N}} \) be a sequence of point processes on \(\mathbb {R}\) such that \( \mu ^{N}(\{ \mathsf {s}(\mathbb {R}) = N \} ) = 1 \). We assume:

(A1) Each \( \mu ^{N}\) has correlation functions \( \{\rho ^{N,n}\}_{n \in \mathbb {N}} \) satisfying, for each \(r \in \mathbb {N}\),

$$\begin{aligned}&\lim _{N\rightarrow \infty } \rho ^{N,n} (\mathbf x )= \rho ^{n} (\mathbf x ) \quad \text { uniformly on } S_r^{n} \text { for each } n\in \mathbb {N}, \end{aligned}$$
(2.3)
$$\begin{aligned}&\sup _{N\in \mathbb {N}} \sup _\mathbf{x \in S_r^{n}} \rho ^{N,n} (\mathbf x ) \le c_{2}^{n} n ^{c_{3}n} , \end{aligned}$$
(2.4)

where \( 0< c_{2}(r) < \infty \) and \( 0< c_{3}(r)< 1 \) are constants independent of \( n \in \mathbb {N}\).

It is known that (2.3) and (2.4) imply the weak convergence of \( \{ \mu ^{N}\} \) to \( \mu \) [13, Lemma A.1]. As in Sect. 1, let \( \mathfrak {l} \) and \( \mathfrak {l} _{N }\) be labels of \( \mu \) and \( \mu ^{N}\), respectively. We assume:

(A2) For each \(m\in \mathbb {N}\),

$$\begin{aligned}&\quad \lim _{N\rightarrow \infty }\mu ^{N } \circ \mathfrak {l} _{N ,m}^{-1} =\mu \circ \mathfrak {l} _{m }^{-1} \quad \text { weakly in } \mathbb {R}^m . \end{aligned}$$

We shall later take \( \mu ^{N } \circ \mathfrak {l} _{N }^{-1} \) as an initial distribution of labeled finite-particle system. Therefore, (A2) means the convergence of the initial distribution of the labeled dynamics.

For a labeled process \( \mathbf X ^N=(X^{N,i})_{i=1}^{N }\), where \( N \in \mathbb {N}\), we set

$$\begin{aligned}&\textsf {X}_t^{N,\diamond i}= \sum _{j\not =i}^{ N } \delta _{X_t^{N,j}} , \end{aligned}$$

where \( \textsf {X}_t^{N,\diamond i}\) denotes the zero measure for \( N = 1 \). Let \( \textsf {b}^{N } ,\textsf {b}:\mathbb {R}\times \mathsf {S} \rightarrow \mathbb {R}\) be measurable functions. We introduce the finite-dimensional SDE of \( \mathbf X ^{N } =(X^{N ,i})_{i=1}^{N } \) with these coefficients such that for \( 1\le i\le N \)

$$\begin{aligned} \mathrm{d}X_t^{N,i}&= \mathrm{d}B_t^i + \textsf {b}^{N }(X_t^{N,i},\textsf {X}_{t}^{N,\diamond i})\mathrm{d}t . \end{aligned}$$
(2.5)

We assume:

(A3) SDE (2.5) with initial condition \( \mathbf X _0^{N }= \mathbf s \) has a unique solution for \( \mu ^{N}\circ \mathfrak {l} _{N }^{-1}\)-a.s. \( \mathbf s \) for each \( N \). This solution does not explode.

Let \( u,\ u^{N},\ {w}:\mathbb {R} \rightarrow \mathbb {R}\) and \(g:\mathbb {R}^{2} \rightarrow \mathbb {R}\) be measurable functions. We set

$$\begin{aligned}&\textsf {g}_{r}(x,\textsf {y})= \sum _{i} \chi _r (x-y_i) g(x,y_i) , \end{aligned}$$
(2.6)
$$\begin{aligned}&w_{r}(x,\textsf {y})= \sum _{i}(1- \chi _r (x-y_i))g(x,y_i) , \end{aligned}$$
(2.7)

where \( \mathsf {y}=\sum _{i}\delta _{y_i}\) and \( \chi _r \in C_0^{\infty } (\mathbb {R})\) is a cut-off function such that \( 0 \le \chi _r \le 1 \), \( \chi _r (x) = 0 \) for \( |x| \ge r + 1 \), and \( \chi _r (x) = 1\) for \( |x| \le r \). We assume the following.

(A4) Each \( \mu ^{N}\) has a logarithmic derivative \( \textsf {d}^{N}\) such that

$$\begin{aligned}&\textsf {d}^{N}(x,\mathsf {y})= u^{N}(x) + \textsf {g}_{r}(x,\mathsf {y}) + w_{r}(x,\mathsf {y}) . \end{aligned}$$
(2.8)

Furthermore, we assume that

  1. (1)

    \(u^{N}\) are in \( C^1 (\mathbb {R}) \). Furthermore, \( u^{N}\) and \( \nabla u^{N}\) converge uniformly to u and \( \nabla u \), respectively, on each compact set in \( \mathbb {R}\).

  2. (2)

\( g \in C^1 (\{ (x,y)\in \mathbb {R}^2 \,;\, x \not = y \}) \). There exists a \( \hat{p}>1 \) such that, for each \( R \in \mathbb {N}\),

    $$\begin{aligned}&\lim _{\mathsf {p}\rightarrow \infty } \limsup _{N \rightarrow \infty } \int _{x \in S_R, |x-y| \le 2^{-\mathsf {p}} } \chi _r (x-y) | g (x,y) | ^{\hat{p}}\, \rho _x^{N ,1}(y) \mathrm{d}x\mathrm{d}y = 0 , \end{aligned}$$
    (2.9)

    where \( \rho _x^{N ,1} \) is a one-correlation function of the reduced Palm measure \( \mu _x^{N }\).

  3. (3)

    There exists a continuous function \( w :\mathbb {R} \rightarrow \mathbb {R}\) such that for each \( R \in \mathbb {N}\)

    $$\begin{aligned}&\lim _{r\rightarrow \infty }\limsup _{N\rightarrow \infty } \int _{S_R\times \mathsf {S}} |w_{r}(x,\mathsf {y}) - {w}(x)|^{\hat{p}} \mathrm{d}\mu ^{N,{[1]}}= 0 . \end{aligned}$$
    (2.10)

Let p be such that \( 1< p < \hat{p}\). Assume (A1) and (A4). Then, by [12, Theorem 45], the logarithmic derivative \(\textsf {d}^{\mu }\) of \( \mu \) exists in \( L_{\mathrm {loc}}^{p}(\mu ^{[1]})\) and is given by

$$\begin{aligned}&\textsf {d}^{\mu }(x,\mathsf {y})= u(x) + \mathsf {g}(x,\mathsf {y}) + {w}(x). \end{aligned}$$
(2.11)

Here, \( \mathsf {g}(x,\mathsf {y})= \lim _{r\rightarrow \infty } \textsf {g}_{r}(x,\mathsf {y}) \) and the convergence of \( \lim \textsf {g}_{r}\) takes place in \( L_{\mathrm {loc}}^{p}(\mu ^{[1]})\). Taking (2.11) into account, we introduce the ISDE of \( \mathbf X =(X^i)_{i\in \mathbb {N}}\):

$$\begin{aligned} \mathrm{d}X_t^i&= \mathrm{d}B_t^i + \frac{1}{2} \{ u (X_t^i) + \mathsf {g}(X_t^i,\mathsf {X}_t^{\diamond i}) + w (X_t^i) \} \mathrm{d}t . \end{aligned}$$
(2.12)

Under the assumptions of Lemma 2, ISDE (2.12) with \( \mathbf X _0 = \mathbf s \) has a solution for \( \mu \circ \mathfrak {l} ^{-1}\)-a.s. \( \mathbf s \). Moreover, the associated delabeled diffusion \( \mathsf {X}=\{ \mathsf {X}_t \} \) is \( \mu \)-reversible, where \( \mathsf {X}_t = \sum _{i\in \mathbb {N}} \delta _{X_t^i}\) for \( \mathbf X _t = (X_t^i)_{i\in \mathbb {N}}\). As for uniqueness, we recall the \( \mu \)-absolute continuity condition introduced in [17].

Let \( \mathbf X = (X^i)_{i\in \mathbb {N}}\) be a family of solutions of (2.12) satisfying \( \mathbf X _0 = \mathbf s \) for \( \mu \circ \mathfrak {l} ^{-1}\)-a.s. \( \mathbf s \). Let \( \mu _t \) be the distribution of the delabeled process \( \mathsf {X}_t =\sum _{i\in \mathbb {N}} \delta _{X_t^i}\) at time t with initial distribution \( \mu \). That is, \( \mu _t \) is given by

$$\begin{aligned}&\mu _t = \int _{\mathsf {S}} P_{\mathsf {s}} (\mathsf {X}_t \in \cdot ) \mathrm{d}\mu \end{aligned}$$

We say that \( \mathbf X \) satisfies the \( \mu \)-absolute continuity condition if

$$\begin{aligned}&\mu _t \prec \mu \quad \text { for all } t \ge 0 , \end{aligned}$$
(2.13)

where \( \mu _t \prec \mu \) means that \( \mu _t \) is absolutely continuous with respect to \( \mu \). If \( \mathsf {X}\) is \( \mu \)-reversible, then (2.13) is satisfied.

We say that ISDE (2.12) has \( \mu \)-uniqueness in law of solutions if, whenever \( \mathbf X \) and \( \mathbf X '\) are solutions with the same initial distribution satisfying the \( \mu \)-absolute continuity condition, they are equivalent in law. We assume:

(A5) ISDE (2.12) has \( \mu \)-uniqueness in law of solutions.

It is proved in [17] that ISDE (2.2) has a strong solution and that its solution is pathwise unique for almost all starting points if, loosely speaking, \( \mu \) is tail trivial, the logarithmic derivative \( \textsf {d}^{\mu }\) has a sort of off-diagonal smoothness, and the one-correlation function has sub-exponential growth at infinity. This result implies \( \mu \)-uniqueness in law. We refer to Theorem 2.1 in [17] for details. The next result is a special case of [8, Theorem 2.1].

Lemma 3

([8, Theorem 2.1]) Make the same assumptions as in Lemmas 1 and 2, and assume (A1)–(A4). Assume that \( \mathbf X _0^{N } = \mu ^{N}\circ \mathfrak {l} _{N }^{-1}\) in distribution. Then, \( \{ \mathbf X ^{N } \}_{N \in \mathbb {N}} \) is tight in \( C([0,\infty );\mathbb {R}^{\mathbb {N}})\), and each limit point \( \mathbf X \) of \( \{ \mathbf X ^{N} \}_{N \in \mathbb {N}} \) is a solution of (2.12) with initial distribution \( \mu \circ \mathfrak {l} ^{-1}\). If, in addition, we assume (A5), then for any \(m \in \mathbb {N}\)

$$\begin{aligned}&\lim _{N \rightarrow \infty } (X^{N ,1},\ldots ,X^{N ,m}) = (X^1,\ldots ,X^m) \end{aligned}$$

weakly in \( C([0,\infty ) ,\mathbb {R}^m)\). Here, \( \mathbf X ^N=(X^{N,i})_{i=1}^{N }\) and \( \mathbf X = (X^i)_{i\in \mathbb {N}}\) as before.

2.4 Reduction of Theorem 1.1 to (2.10)

In this subsection, we deduce Theorem 1.1 from Lemma 3 by assuming (2.10). We take \(\mu _\theta ^N\) and \(\mu _{\theta }\) as in Sect. 1. Then, the logarithmic derivative \( \textsf {d}^{\mu _\theta ^N}\) of \(\mu _\theta ^N\) is given by

$$\begin{aligned} \textsf {d}^{\mu _\theta ^N}(x,\textsf {y})= \sum _{i=1}^N \frac{2}{x - y_i} -\frac{2x}{N} -2\theta , \end{aligned}$$
(2.14)

where \( \textsf {y}=\sum _{i} \delta _{y_i}\). From (2.14), we take coefficients in (A4) as follows:

$$\begin{aligned} u^{N}(x)&= -\frac{2x}{N} -2\theta , \quad u (x) = - 2 \theta ,\quad w (x) = 2\theta , \end{aligned}$$
(2.15)
$$\begin{aligned} g(x,y)&=\frac{2}{x - y} . \end{aligned}$$
(2.16)

Other functions are given by (2.6) and (2.7).

Lemma 4

Assume (2.10) holds with \( \hat{p}= 2\) for the coefficients as above. Then, (1.12) holds.

Proof

To prove Lemma 4, we check the assumptions in Lemma 3, that is, the assumptions in Lemmas 1 and 2 and (A1)–(A5).

The assumptions in Lemma 1 are proved in [13]. The assumptions in Lemma 2 are checked in [12]. (A1) is well known. (A2) is precisely assumption (1.10). (A3) is obvious because the interaction is smooth outside the origin and the capacity of the colliding set \( \{ x_i=x_j \text { for some }i\not =j\} \) is zero (see [4, 11]). Furthermore, the one-correlation functions are bounded, which guarantees that tagged particles do not explode. We take the functions in (A4) as in (2.15) and (2.16). These satisfy (2.8), (2.9), and (1) of (A4). (2.10) is satisfied by assumption. It is known that \( \mu _{\theta }\) is tail trivial [15]. Hence, (A5) follows from the tail triviality of \( \mu _{\theta }\) and [17, Theorem 3.1]. All the assumptions in Lemma 3 are thus satisfied, and Lemma 3 hence yields (1.12). \(\square \)

2.5 A Sufficient Condition for (2.10)

The most crucial step in applying Lemma 3 is to check (2.10). Indeed, to prove Theorem 1.1, it only remains to verify (2.10). We then quote from [12] a sufficient condition for (2.10) in terms of correlation functions. Lemma 6 is a special case of [12, Lemma 53].

Let \( \mu _{\theta ,x}^N\) be the reduced Palm measure of \( \mu _\theta ^N\) conditioned at x. We denote the supremum norm in x over \( S _R \) by \( \Vert \, {\cdot }\, \Vert _R \). Let \( \mathrm {E}^{\cdot }\) and \( \mathrm {Var}^{\cdot }\) denote the expectation and variance with respect to \( \cdot \), respectively.

Lemma 5

Assume \( | \theta | < \sqrt{2} \). Let \( w_r \) be as in (2.7) with \( g(x,y) \) given by (2.16). Let \( w (x) = 2\theta \) as in (2.15). Then, (2.10) follows from (2.17)–(2.20).

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \mathrm {E}^{\mu _\theta ^N}[w_r(x,\mathsf {y})] - 2\theta \Big \Vert _{R} =0 , \end{aligned}$$
(2.17)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \mathrm {E}^{\mu _\theta ^N} [w_r(x,\mathsf {y})] - \mathrm {E}^{\mu _{\theta ,x}^N} [w_r(x,\mathsf {y})] \Big \Vert _{R} =0 , \end{aligned}$$
(2.18)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \mathrm {Var}^{\mu _\theta ^N}[w_r(x,\mathsf {y})] \Big \Vert _{R} = 0 , \end{aligned}$$
(2.19)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \mathrm {Var}^{\mu _\theta ^N} [w_r(x,\mathsf {y})] - \mathrm {Var}^{\mu _{\theta ,x}^N} [w_r(x,\mathsf {y})] \Big \Vert _{R} = 0 . \end{aligned}$$
(2.20)

Proof

Lemma 5 follows from [12, Lemma 52]. Indeed, (2.17), (2.18), (2.19), and (2.20) in the present paper correspond to (5.4), (5.2), (5.5), and (5.3) in [12], respectively. We note that in [12] we use \( 1_{S_r}(x)\) instead of \( \chi _r (x) \). This slight modification yields no difficulty. \(\square \)

Dividing \( w_r (x,\mathsf {y})\) by two, we give a sufficient condition for (2.17)–(2.20) in terms of correlation functions. Let \( \rho _{\theta ,x}^{N,m}\) and \( \rho _{\theta }^{N,m}\) be the m-point correlation functions of \( \mu _{\theta ,x}^N\) and \( \mu _\theta ^N\), respectively. Let

$$\begin{aligned}&S_{r,\infty }(x)=S_{r*}^x= \{y \in \mathbb {R}\, ; r< |x-y| < \infty \} . \end{aligned}$$

Lemma 6

Assume \( | \theta | < \sqrt{2} \). Then, (2.17)–(2.20) follow from (2.21)–(2.24).

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{S_{r*}^x} \frac{\rho _\theta ^{N,1}(y)}{x-y} \mathrm{d}y - \theta \Big \Vert _{R} =0 , \end{aligned}$$
(2.21)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{S_{r*}^x} \frac{\rho _{\theta ,x}^{N,1}(y) - \rho _\theta ^{N,1}(y) }{x-y} \,\mathrm{d}y \Big \Vert _{R} =0 , \end{aligned}$$
(2.22)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{S_{r*}^x} \frac{\rho _\theta ^{N,1}(y)}{(x-y)^2} \mathrm{d}y \nonumber \\&\quad \quad \quad \quad +\, \int _{(S_{r*}^x)^2 } \frac{\rho _{\theta }^{N,2}(y,z)-\rho _\theta ^{N,1}(y) \rho _\theta ^{N,1} (z) }{(x-y)(x-z)} \,\mathrm{d}y \mathrm{d}z \Big \Vert _{R} = 0, \end{aligned}$$
(2.23)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{S_{r*}^x} \frac{\rho _{\theta ,x}^{N,1}(y) -\rho _\theta ^{N,1}(y) }{(x-y)^2} \,\mathrm{d}y + \nonumber \\&\int _{(S_{r*}^x)^2 } \frac{\rho _{\theta ,x}^{N,2}(y,z)-\rho _{\theta ,x} ^{N,1}(y) \rho _{\theta ,x} ^{N,1} (z) - \{\rho _\theta ^{N,2}(y,z) -\rho _\theta ^{N,1}(y) \rho _\theta ^{N,1} (z) \} }{(x-y)(x-z)} \,\mathrm{d}y \mathrm{d}z \Big \Vert _{R} =0 . \end{aligned}$$
(2.24)

Proof

Lemma 6 follows immediately from a standard calculation of correlation functions and the definitions of \( w_r\) and \( \chi _r\). \(\square \)
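For the reader's convenience, we record the standard identities behind this calculation (a sketch, not part of the quoted statement). For a point process \( \mu \) with correlation functions \( \rho ^{1}, \rho ^{2} \), a configuration \( \mathsf {y}=\sum _i \delta _{y_i}\) distributed according to \( \mu \), and a test function f,

$$\begin{aligned}&\mathrm {E}^{\mu }\Big [ \sum _{i} f(y_i) \Big ] = \int _{\mathbb {R}} f(y)\, \rho ^{1}(y) \,\mathrm{d}y , \\&\mathrm {Var}^{\mu }\Big [ \sum _{i} f(y_i) \Big ] = \int _{\mathbb {R}} f(y)^2 \rho ^{1}(y) \,\mathrm{d}y + \int _{\mathbb {R}^{2}} f(y)f(z) \{ \rho ^{2}(y,z) - \rho ^{1}(y)\rho ^{1}(z) \} \,\mathrm{d}y \mathrm{d}z . \end{aligned}$$

Applying these to \( \mu _\theta ^N\) and \( \mu _{\theta ,x}^N\) with \( f(y) = (1-\chi _r (x-y))\, g(x,y)/2 \), replacing the cut-off \( 1-\chi _r (x-\cdot )\) by the indicator of \( S_{r*}^x \), and recalling \( g(x,y)=2/(x-y) \) and \( w(x)=2\theta \) from (2.15)–(2.16), one passes from (2.17)–(2.20) to (2.21)–(2.24).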

3 Subsidiary Estimates

Keeping Lemma 6 in mind, our task is to prove (2.21)–(2.24). To control the correlation functions in Lemma 6, we prepare in this section estimates of the oscillator wave functions and determinantal kernels. We shall use these estimates in Sect. 4.

3.1 Oscillator Wave Functions

Let \(H_n(x)=(-1)^n e^{x^2} (\frac{d}{dx})^n e^{-x^2} \) be Hermite polynomials. Let \(\psi _{n}(x)\) denote the oscillator wave functions defined by

$$\begin{aligned}&\psi _{n}(x) =\frac{1}{\sqrt{\sqrt{\pi }2^{n}n!}}e^{-\frac{x^2}{2}}H_n(x) . \end{aligned}$$

Note that \( \{ \psi _{n} \}_{n=0}^{\infty } \) is an orthonormal system; \(\int _{\mathbb {R}} \psi _{n}(x)\psi _{m}(x)\,\mathrm{d}x=\delta _{nm}\).
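As a numerical aside (not in the original), the \( \psi _{n} \) are conveniently evaluated by the stable three-term recurrence \( \psi _{n+1}(x) = \sqrt{2/(n+1)}\, x\, \psi _{n}(x) - \sqrt{n/(n+1)}\, \psi _{n-1}(x) \), which follows from the Hermite recurrence. The helper below, with names of our choosing, also checks orthonormality on a grid.

```python
import numpy as np

def oscillator_wave_functions(x, n_max):
    """psi_0, ..., psi_{n_max} evaluated at x, via the stable recurrence
    psi_{n+1} = sqrt(2/(n+1)) x psi_n - sqrt(n/(n+1)) psi_{n-1}."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros((n_max + 1,) + x.shape)
    psi[0] = np.pi ** (-0.25) * np.exp(-x**2 / 2)
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(1, n_max):
        psi[n + 1] = np.sqrt(2.0 / (n + 1)) * x * psi[n] - np.sqrt(n / (n + 1.0)) * psi[n - 1]
    return psi

# Orthonormality check on a crude quadrature grid (illustrative resolution).
x = np.linspace(-30, 30, 20001)
psi = oscillator_wave_functions(x, 20)
gram = psi @ psi.T * (x[1] - x[0])
print(np.max(np.abs(gram - np.eye(21))))   # close to 0
```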

The following estimates for these oscillator wave functions are essentially due to Plancherel–Rotach [19]. We quote here a version from Katori–Tanemura [6].

Lemma 7

([6]) Let \( C_{nm}^1 \), \( C_{nm}^2 \), and \( D_{nm}^1\) be the constants introduced in [6] (see (A.1) on p. 572 of [6]). Let \( l = -1,0,1\) and \(N , L \in \mathbb {N} \). Then, we have the following.

  1. (1)

    Let \(0 < \tau \le \frac{\pi }{2} \). Assume that \(N\sin ^3 \tau \ge C N^\varepsilon \) for some \(C >0 \) and \(\varepsilon > 0\). Then,

    $$\begin{aligned}&\psi _{N+l} ( \sqrt{2N} \cos \tau ) = \frac{1+ \mathcal {O}(N^{-1}) }{\sqrt{\pi \sin \tau }} \biggl (\frac{2}{N}\biggr )^{\frac{1}{4}}\\&\times \biggl [ \sum _{n=0} ^{L-1} \sum _{m=0}^{n} C_{nm}^1 (N+l, \tau ) \sin \biggl \{ \frac{N}{2} (2 \tau - \sin 2\tau ) + D_{nm}^1 (\tau ) -(1+l)\tau \biggr \} \\&\quad \quad \quad + \mathcal {O}\bigg (\frac{1}{N \sin \tau }\bigg ) \biggr ] . \end{aligned}$$
  2. (2)

    Let \(\tau >0 \). Assume that \(N\sinh ^3 \tau \ge C N^\varepsilon \) for some \(C >0 \) and \(\varepsilon > 0\). Then,

    $$\begin{aligned}&\psi _{N+l} ( \sqrt{2N} \cosh \tau )\\&= \frac{1+ \mathcal {O}(N^{-1}) }{\sqrt{2\pi \sinh \tau }} \biggl (\frac{1}{2N}\biggr )^{\frac{1}{4}} \exp \Big [\bigg (\frac{N+1+l}{2} \bigg )(2\tau -\sinh 2\tau ) + (1+l)\tau \Big ]\\&\times \Big [ \sum _{n=0} ^{L-1} \sum _{m=0}^{n} C_{nm}^2 (\tau , N+l)+ \mathcal {O}\biggl (\frac{\cosh ^3\tau }{N \sinh \tau } \biggr ) \Big ] . \end{aligned}$$

Proof

(1) and (2) follow from (5.5) and (5.10) in [6], respectively. \(\square \)

We next quote estimates from [6, 18].

Lemma 8

([6, 18])

  1. (1)

    Let \( y=\sqrt{2N} \cos \tau \) with \(N \in \mathbb {N} \) and \(0 < \tau \le \frac{\pi }{2} \). Assume that \(N\sin ^3 \tau \ge C N^\varepsilon \) for some \(C >0 \) and \(\varepsilon > 0\). Then,

    $$\begin{aligned}&\sum _{k=0}^{N-1} \psi _{k}(y)^2 = \frac{1}{\pi }\sqrt{2N-y^2} +\mathcal {O}\biggl (\frac{\sqrt{N}}{2N-y^2 } \biggr ) .\end{aligned}$$
  2. (2)

    Let \( y=\sqrt{2N} \cosh \tau \) with \(N \in \mathbb {N} \) and \(\tau > 0 \). Assume that \(N\sinh ^3 \tau \ge C N^\varepsilon \) for some \(C >0 \) and \(\varepsilon > 0\). Then,

    $$\begin{aligned} \sum _{k=0}^{N-1} \psi _{k} (y)^2 = \mathcal {O}\biggl (\frac{\sqrt{N}}{y^2-2N}\biggr ) .\end{aligned}$$
    (3.1)
  3. (3)

    There is a positive constant \(c_{4}\) such that for all \( N \in \mathbb {N}\)

    $$\begin{aligned}&\sup _{ y \in \mathbb {R}}| \psi _{N} (y) | \le {c_{4}}{N^{-\frac{1}{12}} } .\end{aligned}$$
    (3.2)

Proof

(1) follows from Lemma 5.2 (i) in [6]. (2) follows from Lemma 5.2 (ii) in [6]. From Lemma 6.9 in [18], there exists a constant \(c_{4}\) such that

$$\begin{aligned}&| N^{\frac{1}{12}} \psi _{N} (2 \sqrt{N}+ y N^{-\frac{1}{6}}) | \le \frac{c_{4}}{( 1 \vee | y | )^{\frac{1}{4}}} , \quad y \in [-2 N^{\frac{2}{3}} , \infty ) . \end{aligned}$$

Hence, we have

$$\begin{aligned}&| \psi _{N} (y) | \le \frac{c_{4}}{ N^{\frac{1}{12}} ( 1 \vee \{N^{\frac{1}{6}} |y - 2\sqrt{N}| \} )^{\frac{1}{4}} } ,\quad y \in [0 , \infty ) . \end{aligned}$$
(3.3)

Claim (3.2) is immediate from (3.3) and the well-known parity relations \( \psi _{N} (y) = \psi _{N} (-y) \) for N even and \( \psi _{N} (y) = -\psi _{N} (-y) \) for N odd. \(\square \)

3.2 Determinantal Kernels of \(N \)-Particle Systems

We recall the definition of determinantal point processes. Let \(K:\mathbb {R}^2 \rightarrow \mathbb {C}\) be a measurable kernel. A probability measure \( \mu \) on \( \mathsf {S}\) is called a determinantal point process with kernel K if, for each n, its n-point correlation function is given by

$$\begin{aligned} \rho ^n(x_1,\ldots ,x_n) = \mathrm {det}[K(x_i,x_j)]_{i,j=1}^{n} . \end{aligned}$$
(3.4)

If K is Hermitian symmetric and locally of trace class with \( 0 \le \mathrm {Spec} (K) \le 1 \), then there exists a unique determinantal point process with kernel K [20, 21].

The distribution of the delabeled eigenvalues of GUE associated with (1.1) is a determinantal point process with kernel \(\mathsf {K}^{N }\) such that

$$\begin{aligned} \mathsf {K}^{N }(x,y) =\sum _{k=0}^{N-1} \psi _{k}(x) \psi _{k}(y) . \end{aligned}$$
(3.5)

The Christoffel–Darboux formula and a simple calculation yield the following.

$$\begin{aligned} \mathsf {K}^{N }(x,y) =\sqrt{\frac{N}{2}} \frac{\psi _{N}(x)\psi _{N-1}(y)-\psi _{N-1}(x)\psi _{N}(y)}{x-y} . \end{aligned}$$
(3.6)
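Continuing the numerical sketch from Sect. 3.1 (for illustration only), the Christoffel–Darboux form (3.6) can be checked against the spectral sum (3.5). Here kernel_sum and kernel_cd are helper names of our choosing, and oscillator_wave_functions is the routine sketched above.

```python
import numpy as np

# Uses oscillator_wave_functions from the sketch in Sect. 3.1.
def kernel_sum(x, y, N):
    """K^N(x, y) via the spectral sum (3.5)."""
    px = oscillator_wave_functions(np.atleast_1d(x), N)   # psi_0, ..., psi_N at x
    py = oscillator_wave_functions(np.atleast_1d(y), N)
    return np.sum(px[:N] * py[:N], axis=0)

def kernel_cd(x, y, N):
    """K^N(x, y) via the Christoffel-Darboux formula (3.6), for x != y."""
    px = oscillator_wave_functions(np.atleast_1d(x), N)
    py = oscillator_wave_functions(np.atleast_1d(y), N)
    return np.sqrt(N / 2.0) * (px[N] * py[N - 1] - px[N - 1] * py[N]) \
        / (np.atleast_1d(x) - np.atleast_1d(y))

N, x, y = 30, 0.7, -0.4
print(kernel_sum(x, y, N), kernel_cd(x, y, N))   # the two values agree
```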

From the scaling (1.3), \( \mu _\theta ^N\) is a determinantal point process with kernel

$$\begin{aligned}&\mathsf {K}_{\theta }^{N }(x,y) = \frac{1}{\sqrt{N }} \mathsf {K}^{N }\left( \frac{x + N \theta }{\sqrt{N }}, \frac{y + N \theta }{\sqrt{N }}\right) . \end{aligned}$$
(3.7)

Let \( x_N= \sqrt{N}x\) and \( y_N= \sqrt{N}y \). We set

$$\begin{aligned} \textsf {L}^{N }(x ,y)= \frac{1}{\sqrt{N}}\mathsf {K}^{N }(x_N, y_N) = \frac{1}{\sqrt{N}} \mathsf {K}^{N }(\sqrt{N } x , \sqrt{N } y ) . \end{aligned}$$
(3.8)

From (3.7) and (3.8), we then clearly see that

$$\begin{aligned}&\mathsf {K}_{\theta }^{N }(x,y) = \mathsf {L}^N \left( \frac{x}{N} + \theta , \frac{y}{N} + \theta \right) ,\nonumber \\&\textsf {L}^{N }(x ,y)= \mathsf {K}_{\theta }^{N }(N(x-\theta ), N (y- \theta )) . \end{aligned}$$
(3.9)
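As a further illustrative check (reusing kernel_sum above and the sine_kernel helper from the sketch in Sect. 1; the parameters are arbitrary choices of ours), the rescaled kernel (3.7) is already close to the sine kernel \( \textsf {K}_{\theta } \) for moderate \( N \).

```python
import numpy as np

# Bulk-scaling check (illustrative): K_theta^N from (3.7) versus the sine kernel.
# Reuses kernel_sum from the sketch after (3.6) and sine_kernel from Sect. 1.
def K_theta_N(x, y, theta, N):
    return kernel_sum((x + N * theta) / np.sqrt(N), (y + N * theta) / np.sqrt(N), N) / np.sqrt(N)

N, theta = 200, 0.5
for x, y in [(0.0, 0.0), (0.0, 1.0), (0.3, -2.0)]:
    print(K_theta_N(x, y, theta, N), sine_kernel(x, y, theta))
```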

From (3.6), we deduce

$$\begin{aligned}&\textsf {L}^{N }(x,x)= \, (1/ \sqrt{2}) \{\psi _{N-1}(x_N) \psi _{N}'(x_N) - \psi _{N}(x_N)\psi _{N-1}'(x_N) \} . \end{aligned}$$
(3.10)

Applying the Schwarz inequality to (3.5), we see from (3.6) and (3.8) that

$$\begin{aligned}&\textsf {L}^{N }(y,z)^2\le \textsf {L}^{N }(y,y)\textsf {L}^{N }(z,z). \end{aligned}$$
(3.11)

From here on, we assume

$$\begin{aligned}&-\frac{2}{3}< \alpha < -\frac{1}{2}. \end{aligned}$$
(3.12)

We set

$$\begin{aligned}&\mathsf {B}^{N }= (-\sqrt{2}-N^{\alpha },-\sqrt{2}+N^{\alpha }) \cup (\sqrt{2}-N^{\alpha },\sqrt{2}+N^{\alpha }) . \end{aligned}$$
(3.13)

The next lemma will be used in Sect. 4.

Lemma 9

We set \( \mathsf {U}^{N }= \mathbb {R}\backslash \mathsf {B}^{N }\). Then, the following holds.

  1. (1)

    There exists a constant \(c_{5}\) such that for all \( N \in \mathbb {N}\)

    $$\begin{aligned}&\sup _{x,y \in \mathbb {R}}|\textsf {L}^{N }(x ,y)| \le c_{5}N^{\frac{1}{3}},\end{aligned}$$
    (3.14)
    $$\begin{aligned}&\sup _{x,y \in \mathsf {U}^{N }} |\textsf {L}^{N }(x ,y)| \le c_{5}. \end{aligned}$$
    (3.15)
  2. (2)

    Assume (3.12). Then, there exists a constant \( c_{6}\) such that

    $$\begin{aligned} |\textsf {L}^{N }(x ,y)|\le&\frac{c_{6}}{{ N |x-y| }} \quad \text { for each } x,y\in \mathsf {U}^{N }, N \in \mathbb {N}. \end{aligned}$$
    (3.16)

Proof

It is well known that

$$\begin{aligned} \sqrt{2} \psi _{n}'(x)=\sqrt{n} \psi _{n-1} (x) -\sqrt{n+1}\psi _{n+1}(x) . \end{aligned}$$

From this and (3.10), a simple calculation shows that

$$\begin{aligned}&\textsf {L}^{N }(x,x)= \frac{1}{\sqrt{2}} \{ \psi _{N-1} \psi _{N}' - \psi _{N}\psi _{N-1}' \} (x_N) \nonumber \\&= \frac{N^{\frac{1}{2}}}{2}\{ \psi _{N-1}^2 + \psi _{N}^2 - {\sqrt{1 - N^{-1}}} \psi _{N-2}\psi _{N} - {\sqrt{1 + N^{-1}}} \psi _{N-1}\psi _{N+1} \} (x_N) . \end{aligned}$$
(3.17)

Combining this with (3.2), we obtain

$$\begin{aligned}&\textsf {L}^{N }(x,x)\le \frac{N^{\frac{1}{2}} }{2} 5 c_{4}^2 N^{-\frac{1}{6}} = \frac{5 c_{4}^2}{2} N^{\frac{1}{3}} . \end{aligned}$$

From this and (3.11), we deduce (3.14). From Lemma 7 and (3.17), we see that

$$\begin{aligned}&\sup _{N\in \mathbb {N}}\sup _{y\in \mathsf {U}^{N }} \textsf {L}^{N }(y,y)< \infty . \end{aligned}$$

We deduce (3.15) from this and (3.11). Choosing a common constant \( c_{5}\) in (3.14) and (3.15) completes the proof of (1).

Claim (3.16) follows from Lemma 7, (3.6), and (3.8). \(\square \)

4 Proof of (2.21)–(2.24)

As we saw in Sect. 2, the point of the proof of Theorem 1.1 is to check conditions (2.21)–(2.24) in Lemma 6. The purpose of this section is to prove these equations. We recall a property of the reduced Palm measures of determinantal point processes.

Lemma 10

([20]) Let \(\mu \) be a determinantal point process with kernel K. Assume that \( K(x,y) = \overline{K(y,x)} \) and \( 0 \le \mathrm {Spec} (K) \le 1 \). Then, the reduced Palm measure \(\mu _x\) is a determinantal point process with kernel \(K_x\) given by

$$\begin{aligned} K_x(y,z)= K(y,z) - \frac{K(y,x)K(x,z)}{K(x,x)} \end{aligned}$$
(4.1)

for x such that \( K(x,x)>0 \).

Let \( \mathsf {K}_{\theta }^{N }\) be the determinantal kernel of \( \mu _\theta ^N\) given by (3.7). Let \( \mu _{\theta ,x}^N\) be as in Lemma 6. Recall that \( \mathsf {K}_{\theta }^{N }(y,z) = \mathsf {K}_{\theta }^{N }(z,y) \) by definition. Then, from this, (3.7), and (4.1), \( \mu _{\theta ,x}^N\) is a determinantal point process with kernel

$$\begin{aligned}&\mathsf {K}_{\theta ,x}^{N } (y,z) = \mathsf {K}_{\theta }^{N }(y,z) - \frac{ \mathsf {K}_{\theta }^{N }(x,y) \mathsf {K}_{\theta }^{N }(x,z) }{\mathsf {K}_{\theta }^{N }(x,x)} . \end{aligned}$$
(4.2)

From (3.4) and (4.2), we compute the correlation functions appearing in (2.21)–(2.24) as follows.

$$\begin{aligned}&\rho _\theta ^{N,1}(y) = \mathsf {K}_{\theta }^{N }(y,y) , \end{aligned}$$
(4.3)
$$\begin{aligned}&\rho _{\theta ,x}^{N,1}(y) -\rho _\theta ^{N,1}(y) = - \frac{\mathsf {K}_{\theta }^{N }(x,y) ^2}{\mathsf {K}_{\theta }^{N }(x,x)} , \end{aligned}$$
(4.4)
$$\begin{aligned}&\rho _{\theta }^{N,2}(y,z)-\rho _\theta ^{N,1}(y) \rho _\theta ^{N,1} (z) = -\mathsf {K}_{\theta }^{N }(y,z)^2 , \end{aligned}$$
(4.5)
$$\begin{aligned}&\rho _{\theta ,x}^{N,2}(y,z)-\rho _{\theta ,x} ^{N,1}(y) \rho _{\theta ,x} ^{N,1} (z) - \{\rho _\theta ^{N,2}(y,z) -\rho _\theta ^{N,1}(y) \rho _\theta ^{N,1} (z) \} \nonumber \\&\quad \quad = -\,\mathsf {K}_{\theta ,x}^{N }(y,z)^2 + \mathsf {K}_{\theta }^{N }(y,z)^2 \nonumber \\&\quad \quad = 2\frac{\mathsf {K}_{\theta }^{N }(y,z) \mathsf {K}_{\theta }^{N }(x,y) \mathsf {K}_{\theta }^{N }(x,z) }{\mathsf {K}_{\theta }^{N }(x,x)} - \frac{ \mathsf {K}_{\theta }^{N }(x,y)^2 \mathsf {K}_{\theta }^{N }(x,z) ^2}{\mathsf {K}_{\theta }^{N }(x,x)^2 } . \end{aligned}$$
(4.6)
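Before using these identities, here is a quick numerical sanity check (ours, not part of the argument). Since (4.3)–(4.6) rely only on the Palm-kernel formula (4.2) and hold for any real symmetric kernel, the sine_kernel helper from the sketch in Sect. 1 serves as a stand-in for \( \mathsf {K}_{\theta }^{N } \).

```python
import numpy as np

# Numerical check of the algebraic identities (4.4)-(4.6); they rely only on the
# Palm-kernel formula (4.2), so the sine_kernel helper from the sketch in Sect. 1
# serves as a stand-in for the kernel.
K = lambda a, b: sine_kernel(a, b, theta=0.4)
x, y, z = 0.3, -1.1, 2.0

K_x = lambda a, b: K(a, b) - K(a, x) * K(x, b) / K(x, x)   # Palm kernel as in (4.2)

print(K_x(y, y) - K(y, y) + K(x, y)**2 / K(x, x))          # ~0, cf. (4.4)

lhs = -K_x(y, z)**2 + K(y, z)**2
rhs = 2*K(y, z)*K(x, y)*K(x, z)/K(x, x) - K(x, y)**2 * K(x, z)**2 / K(x, x)**2
print(lhs - rhs)                                            # ~0 up to rounding, cf. (4.6)
```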

Using these and (3.9), we rewrite (2.21)–(2.24) as follows.

Lemma 11

To simplify the notation, let

$$\begin{aligned}&\textsf {x}_{\theta }^N= \frac{x}{N } + \theta , \quad T_{r,\infty }^{N }(x)= \left\{ y \in \mathbb {R}\, ;\, \frac{r}{N} \le | \textsf {x}_{\theta }^N- y | < \infty \right\} . \end{aligned}$$
(4.7)

Then, (2.21)–(2.24) are equivalent to (4.8)–(4.11) below, respectively.

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(y,y)}{\textsf {x}_{\theta }^N- y } dy - \theta \Big \Vert _{R} =0 , \end{aligned}$$
(4.8)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{1}{\textsf {x}_{\theta }^N- y } \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y )^2}{\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} dy \Big \Vert _{R} =0 , \end{aligned}$$
(4.9)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{1}{N } \frac{\textsf {L}^{N }(y,y)}{ |\textsf {x}_{\theta }^N- y |^2} dy -\int _{T_{r,\infty }^{N }(x)^2 } \frac{\textsf {L}^{N }(y,z)^2 }{(\textsf {x}_{\theta }^N- y )( \textsf {x}_{\theta }^N- z )} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} =0 , \end{aligned}$$
(4.10)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert -\int _{T_{r,\infty }^{N }(x)} \frac{1}{N }\frac{1}{ |\textsf {x}_{\theta }^N- y |^2} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y )^2}{\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \mathrm{d}y \nonumber \\&\quad \quad \quad +\int _{T_{r,\infty }^{N }(x)^2 } \frac{1 }{(\textsf {x}_{\theta }^N- y )( \textsf {x}_{\theta }^N- z )} \nonumber \\&\quad \quad \quad \times \biggl \{ 2\frac{ \textsf {L}^{N }(y,z)\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\textsf {L}^{N }(\textsf {x}_{\theta }^N,z ) }{\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} - \frac{ \textsf {L}^{N }(\textsf {x}_{\theta }^N,y)^2\textsf {L}^{N }(\textsf {x}_{\theta }^N,z)^2}{\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)^2 } \biggr \} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} =0 . \end{aligned}$$
(4.11)

Proof

Recall that \( \textsf {L}^{N }(x ,y)= \mathsf {K}_{\theta }^{N }(N(x-\theta ), N (y- \theta )) \) by (3.9). Then, Lemma 11 follows easily from (4.3)–(4.6). \(\square \)

Let \( \mathsf {B}^{N }\) and \( \mathsf {U}^{N }\) be as in Lemma 9. Decompose \( \mathsf {U}^{N }\) into \( \mathsf {U}^{N }_1\) and \( \mathsf {U}_2^N \) such that

$$\begin{aligned} \mathsf {U}^{N }_1&= [ - \sqrt{2}+N^{\alpha }, \sqrt{2}-N^{\alpha } ] ,\quad \mathsf {U}_2^N = \mathbb {R}\backslash ( - \sqrt{2}-N^{\alpha }, \sqrt{2}+N^{\alpha } ) . \end{aligned}$$

Then, clearly \( \mathsf {U}^{N }= \mathsf {U}^{N }_1\cup \mathsf {U}_2^N \) and \( \mathbb {R}= \mathsf {U}^{N }_1\cup \mathsf {U}_2^N \cup \mathsf {B}^{N }\). We begin with the integral outside \( \mathsf {U}^{N }_1\).

Lemma 12

Let \( 0< q < 3/2 \). Then,

$$\begin{aligned}&\limsup _{N\rightarrow \infty } \Big \Vert \int _{\mathbb {R}\backslash \mathsf {U}^{N }_1} \frac{ \textsf {L}^{N }(y,y)^q}{ | \textsf {x}_{\theta }^N- y | } \mathrm{d}y \Big \Vert _{R} = 0 . \end{aligned}$$
(4.12)

Proof

From (3.14), (4.7), and the definition of \( \mathsf {B}^{N }\), we obtain that

$$\begin{aligned}&\quad \,\,\limsup _{N\rightarrow \infty } \Big \Vert \int _{ \mathsf {B}^{N }} \frac{ \textsf {L}^{N }(y,y)^q }{|\textsf {x}_{\theta }^N- y |} \mathrm{d}y \Big \Vert _{R} \nonumber \\&\le \limsup _{N\rightarrow \infty } \Big \Vert \int _{ \mathsf {B}^{N }} \frac{c_{5} ^q N^{\frac{q}{3}} }{| \textsf {x}_{\theta }^N- y |} \mathrm{d}y \Big \Vert _{R} \nonumber \\&\le \limsup _{N \rightarrow \infty } c_{5} ^q N^{\frac{q}{3}} \Bigl \{ \log \Bigl | \frac{x}{N} +\theta - (\sqrt{2} - N^{\alpha }) \Bigr | - \log \Bigl | \frac{x}{N} +\theta - (\sqrt{2} + N^{\alpha } ) \Bigr | \Bigr \}\nonumber \\&\quad \,\,+ \,c_{5} ^q N^{\frac{q}{3}} \Bigl \{ \log \Bigl | \frac{x}{N} +\theta - (-\sqrt{2} - N^{\alpha }) \Bigr | - \log \Bigl | \frac{x}{N} +\theta - (-\sqrt{2} + N^{\alpha } ) \Bigr | \Bigr \} \nonumber \\&= \mathcal {O}(N^{\frac{q}{3}+ \alpha }) = 0 \quad \text { as } N \rightarrow \infty . \end{aligned}$$
(4.13)

Here, we used \( q < 3/2 \) and \( \alpha < - 1/2 \) in the last line.

Note that \( | y | \ge \sqrt{2} + N^{\alpha } \) for \(y \in \mathsf {U}_2^N \). Let \( y=\sqrt{2} \cosh \tau \). Then, we see that

$$\begin{aligned} N \sinh ^3 \tau&= N(\cosh ^2 \tau -1)^{\frac{3}{2}} \\&=N2^{-\frac{3}{2}}(y^2-2)^{\frac{3}{2}} \ge N2^{-\frac{3}{2}}(2\sqrt{2}N^{\alpha }+N^{2\alpha })^{\frac{3}{2}} . \end{aligned}$$

From this, \( q > 0 \), and \(\alpha >-{2}/{3}\), we apply (3.1) to obtain \(c_{7}> 0 \) such that,

$$\begin{aligned}&\limsup _{N\rightarrow \infty } \Big \Vert \int _{ \mathsf {U}_2^N } \frac{\textsf {L}^{N }(y,y)^q }{|\textsf {x}_{\theta }^N- y |} \mathrm{d}y \Big \Vert _{R} \le \limsup _{N\rightarrow \infty } \Big \Vert \int _{ \mathsf {U}_2^N } \frac{c_{7}}{ |\textsf {x}_{\theta }^N- y | N ^q (y^2-2) ^q } \mathrm{d}y \Big \Vert _{R} = 0 , \end{aligned}$$

which combined with (4.13) yields (4.12). \(\square \)

Lemma 13

(4.8) holds.

Proof

Let \( y=\sqrt{2} \cos \tau \). Then, \( N \sin ^3 \tau \ge N 2^{-\frac{3}{2}} (2\sqrt{2}N^{\alpha }-N^{2\alpha })^{\frac{3}{2}}\) for \(y\in \mathsf {U}^{N }_1\). Applying Lemma 8 (1), we then deduce that for each \( r > 0 \)

$$\begin{aligned}&\limsup _{N\rightarrow \infty } \Big \Vert \int _{ T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }_1} \frac{\textsf {L}^{N }(y,y)}{\textsf {x}_{\theta }^N- y } \mathrm{d}y - \theta \Big \Vert _{R} \\&\quad = \limsup _{N\rightarrow \infty } \Big \Vert \Big \{ \int _{- \sqrt{2} + N^{\alpha } } ^{\textsf {x}_{\theta }^N- \frac{r}{N} } + \int _{\textsf {x}_{\theta }^N+ \frac{r}{N}}^{\sqrt{2} - N^{\alpha } } \Big \} \frac{1}{\textsf {x}_{\theta }^N- y } \frac{1}{\pi }\sqrt{2-y^2} \, \mathrm{d}y - \theta \Big \Vert _{R} \\&\quad =\biggl | \mathrm {P.V.} \int _{-\sqrt{2} }^{\sqrt{2}} \frac{1}{\theta -y } \frac{1}{\pi }\sqrt{2-y^2} \mathrm{d}y -\theta \biggr |=0 . \end{aligned}$$

Combining this with (4.12), we obtain (4.8). \(\square \)
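The principal-value identity used in the last display can be checked numerically. The following sketch (ours, purely illustrative) assumes SciPy, whose quad routine with weight='cauchy' computes the Cauchy principal value of \( \int f(y)/(y-c)\,\mathrm{d}y \).

```python
import numpy as np
from scipy import integrate

def pv_semicircle_stieltjes(theta):
    """P.V. of sqrt(2 - y^2) / (pi (theta - y)) over (-sqrt(2), sqrt(2)).
    quad with weight='cauchy' returns the principal value of f(y)/(y - wvar),
    hence the minus sign below."""
    f = lambda y: np.sqrt(np.maximum(2.0 - y**2, 0.0)) / np.pi
    val, _ = integrate.quad(f, -np.sqrt(2), np.sqrt(2), weight='cauchy', wvar=theta)
    return -val

for theta in (0.0, 0.3, -0.9):
    print(theta, pv_semicircle_stieltjes(theta))   # the two columns agree
```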

It is well known that \( \mathsf {K}_{\theta }^{N }(x,x)\) is positive and continuous in x, and that \( \{ \mathsf {K}_{\theta }^{N }(x,x) \}_{N\in \mathbb {N}} \) converges to \( \mathsf {K}_{\theta } (x,x)=\sqrt{2-\theta ^2}/\pi \) uniformly on each compact set. Hence, we see that

$$\begin{aligned} \sup _{N \in \mathbb {N}}\sup _{x \in S_R } \frac{1 }{\mathsf {K}_{\theta }^{N }(x,x)} < \infty . \end{aligned}$$

From this, (3.9), and (4.7), we see that the following constant \(c_{8}\) is finite.

$$\begin{aligned}&c_{8}:= \sup _{N \in \mathbb {N}} \sup _{x\in S_R }\frac{1}{\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} < \infty . \end{aligned}$$
(4.14)

Lemma 14

(4.15) and (4.16) below hold. In particular, (4.9) holds.

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)^2}{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \mathrm{d}y \Big \Vert _{R} =0 , \end{aligned}$$
(4.15)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)}{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \mathrm{d}y \Big \Vert _{R} = 0 . \end{aligned}$$
(4.16)

Proof

From (3.11) and (4.12), we deduce that as \( N \rightarrow \infty \)

$$\begin{aligned}&\Big \Vert \int _{\mathbb {R}\backslash \mathsf {U}^{N }_1} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)^2}{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \mathrm{d}y \Big \Vert _{R} \le \Big \Vert \int _{\mathbb {R}\backslash \mathsf {U}^{N }_1} \frac{\textsf {L}^{N }(y,y)}{ |\textsf {x}_{\theta }^N- y |} \mathrm{d}y \Big \Vert _{R} \rightarrow 0 . \end{aligned}$$
(4.17)

From (3.16) and (4.14), we have, for each \( N \in \mathbb {N}\) and \( r > 0 \),

$$\begin{aligned} \Big \Vert \int _{ T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }_1} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)^2 \, \mathrm{d}y }{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \Vert _{R}&\le \Big \Vert \int _{ T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }_1} \frac{c_{6} ^2 c_{8} \, \mathrm{d}y }{N^2 |\textsf {x}_{\theta }^N-y|^3} \Big \Vert _{R} \nonumber \\&\le \frac{c_{6} ^2 c_{8}}{r^2} . \end{aligned}$$
(4.18)

Hence, (4.15) follows from (4.17) and (4.18).
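The last inequalities in (4.18) and (4.20) (and, below, in (4.23) and (4.26)) follow from the fact that \( |\textsf {x}_{\theta }^N- y | \ge r/N \) on \( T_{r,\infty }^{N }(x)\) (as is implicit in the proof of Lemma 13) together with the elementary integrals

$$\begin{aligned} \int _{\{ |u| \ge r/N \} } \frac{\mathrm{d}u}{|u|^{2}} = \frac{2N}{r} , \qquad \int _{\{ |u| \ge r/N \} } \frac{\mathrm{d}u}{|u|^{3}} = \frac{N^2}{r^2} ; \end{aligned}$$

for instance, the right-hand side of the first line of (4.18) is at most \( (c_{6}^2 c_{8}/N^2)\cdot (N^2/r^2) = c_{6}^2 c_{8}/r^2 \).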

We next prove (4.16). From (3.11), (4.12), and (4.14), we see for each \( r > 0 \)

$$\begin{aligned}&\limsup _{N\rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)\backslash \mathsf {U}^{N }_1} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)}{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \mathrm{d}y \Big \Vert _{R} =0 . \end{aligned}$$
(4.19)

From (3.16) and (4.14), we see that for each \( N \in \mathbb {N}\) and \( r > 0 \)

$$\begin{aligned} \Big \Vert \int _{T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }_1} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\, \mathrm{d}y }{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \Vert _{R}&\le \Big \Vert \int _{T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }_1} \frac{c_{6}c_{8}\, \mathrm{d}y }{N |\textsf {x}_{\theta }^N- y |^2} \Big \Vert _{R} \nonumber \\&\le \frac{2c_{6}c_{8}}{r} . \end{aligned}$$
(4.20)

Combining (4.19) and (4.20), we obtain (4.16). \(\square \)

Lemma 15

(4.21) and (4.22) hold. In particular, (4.10) holds.

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(y,y)}{ N |\textsf {x}_{\theta }^N- y |^2} \,\mathrm{d}y \Big \Vert _{R} =0 , \end{aligned}$$
(4.21)
$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 }\frac{\textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} =0. \end{aligned}$$
(4.22)

Proof

Note that \( \textsf {L}^{N }(y,y)\le c_{5} \) on \( \mathsf {U}^{N }\) by (3.15). Then, by the definition of \( T_{r,\infty }^{N }(x)\),

$$\begin{aligned}&\int _{T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }} \frac{\textsf {L}^{N }(y,y)}{ N |\textsf {x}_{\theta }^N- y |^2} \,\mathrm{d}y \le \frac{c_{5} }{N} \frac{2N}{r} = \frac{2c_{5}}{r} . \end{aligned}$$
(4.23)

By (3.14), we see that \( \textsf {L}^{N }(y,y)\le c_{5}N^{\frac{1}{3}}\) on \( \mathbb {R}\). Recall that \( |\mathsf {B}^{N }| = 4 N^{\alpha }\) by construction. Furthermore, \(c_{9} :=\limsup _{N\rightarrow \infty } \sup _{y \in \mathsf {B}^{N }}\Vert { |\textsf {x}_{\theta }^N- y |^{-2}} \Vert _{R} < \infty \). Hence, for each \( r > 0 \),

$$\begin{aligned}&\limsup _{N\rightarrow \infty }\int _{T_{r,\infty }^{N }(x)\cap \mathsf {B}^{N }} \frac{\textsf {L}^{N }(y,y)}{ N |\textsf {x}_{\theta }^N- y |^2} \,\mathrm{d}y \le \limsup _{N\rightarrow \infty } \frac{ c_{5} N^{\frac{1}{3}} 4 N^{\alpha }c_{9}}{N} = 0 . \end{aligned}$$
(4.24)

Here, we used \( \alpha < -1/2\). We thus obtain (4.21) from (4.23) and (4.24).

We proceed with the proof of (4.22). We first consider the integral away from the diagonal. By (3.16) and the Schwarz inequality, we see that

$$\begin{aligned}&\Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ \textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R}\\&\quad \le \Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ c_{6} ^2 }{N^2 |y-z|^2 |\textsf {x}_{\theta }^N-y | |\textsf {x}_{\theta }^N-z | }\mathrm{d}y\mathrm{d}z \Big \Vert _{R}\\&\quad \le \Big \Vert \Big \{ \int _{T_{r,\infty }^{N }(x)^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ c_{6} ^2 }{N^2 |y-z|^2 |\textsf {x}_{\theta }^N-y |^2 }\mathrm{d}y\mathrm{d}z \Big \}^{\frac{1}{2}}\\&\quad \quad \quad \quad \quad \Big \{ \int _{T_{r,\infty }^{N }(x)^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ c_{6} ^2 }{N^2 |y-z|^2 |\textsf {x}_{\theta }^N-z |^2 }\mathrm{d}y\mathrm{d}z \Big \}^{\frac{1}{2}} \Big \Vert _{R}\\&\quad = \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ c_{6} ^2 }{N^2 |y-z|^2 |\textsf {x}_{\theta }^N-y |^2 }\mathrm{d}y\mathrm{d}z \Big \Vert _{R}\\&\quad \le c_{6} ^2\frac{2N}{N^2}\Big \{\frac{2N}{r}\Big \} = \frac{4c_{6} ^2}{r} . \end{aligned}$$

The last inequality follows from a straightforward calculation: first integrate z over \( \{ |y-z| \ge \frac{1}{N}\} \), and then integrate y over \( T_{r,\infty }^{N }(x)\); the computation is spelled out after (4.25). We therefore see that

$$\begin{aligned}&\lim _{r\rightarrow \infty } \limsup _{N\rightarrow \infty } \Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ \textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} = 0 . \end{aligned}$$
(4.25)
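Spelled out, with \( |\textsf {x}_{\theta }^N- y | \ge r/N \) on \( T_{r,\infty }^{N }(x)\) as before, the two-step integration used to obtain the bound \( 4c_{6}^2/r \) above reads

$$\begin{aligned} \int _{T_{r,\infty }^{N }(x)^2 \cap \{ |y-z| \ge \frac{1}{ N }\} } \frac{ \mathrm{d}y\mathrm{d}z }{N^2 |y-z|^2 |\textsf {x}_{\theta }^N-y |^2 } \le \frac{1}{N^2} \int _{T_{r,\infty }^{N }(x)} \Bigl ( \int _{\{ |z-y| \ge \frac{1}{N}\} } \frac{\mathrm{d}z}{|y-z|^2} \Bigr ) \frac{\mathrm{d}y}{ |\textsf {x}_{\theta }^N-y |^2 } \le \frac{1}{N^2} \, 2N \, \frac{2N}{r} = \frac{4}{r} , \end{aligned}$$

which, multiplied by \( c_{6}^2 \), gives the stated bound.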

We next consider the integral near the diagonal. From (3.15), we see that

$$\begin{aligned}&\Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })^2 \cap \{ |y-z| \le \frac{1}{ N }\} } \frac{ \textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} \nonumber \\ \le&\quad \Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })^2 \cap \{ |y-z| \le \frac{1}{ N }\} } \frac{ c_{5}^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} \nonumber \\ \le&\quad \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 \cap \{ |y-z| \le \frac{1}{ N }\} } \frac{c_{5}^2}{2} \{ \frac{ 1}{ |\textsf {x}_{\theta }^N- y |^2 }+ \frac{ 1}{ |\textsf {x}_{\theta }^N- z |^2 } \} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} \nonumber \\&\quad \le \frac{2c_{5}^2}{ N } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{1}{ |\textsf {x}_{\theta }^N- y |^2 }\mathrm{d}y \Big \Vert _{R} = \frac{2c_{5}^2}{ N } \frac{2N}{r} = \, \frac{4c_{5}^2}{r} . \end{aligned}$$
(4.26)

From (4.25) and (4.26), we have

$$\begin{aligned}&\lim _{r\rightarrow \infty } \limsup _{N\rightarrow \infty } \Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })^2} \frac{ \textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} = 0 . \end{aligned}$$
(4.27)

We next consider the integral on \( \mathsf {B}^{N }\times \mathsf {B}^{N }\). Let

$$\begin{aligned} c_{10}=\limsup _{N\rightarrow \infty }\sup _{x\in S_R, y \in \mathsf {B}^{N }}|\textsf {x}_{\theta }^N- y |^{-1} . \end{aligned}$$

Then, we deduce from (3.14) and the definition of \( \mathsf {B}^{N }\) given by (3.13) that

$$\begin{aligned} \limsup _{N\rightarrow \infty }&\Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {B}^{N })^2} \frac{ \textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} \nonumber \\ \le&\lim _{N\rightarrow \infty } c_{5}^2 c_{10}^2 N^{\frac{2}{3}} (4N^{\alpha })^2 = \, 0 . \end{aligned}$$
(4.28)

Here, we used \( |\mathsf {B}^{N }|= 4N^{\alpha }\) for the inequality and \( \alpha < - 1/2 \) for the last equality.

We finally consider the integral on \( \mathsf {U}^{N }\times \mathsf {B}^{N }\); the case \( \mathsf {B}^{N }\times \mathsf {U}^{N }\) is identical by symmetry. A similar argument yields

$$\begin{aligned}&\Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })\times (T_{r,\infty }^{N }(x)\cap \mathsf {B}^{N })} \frac{ \textsf {L}^{N }(y,z)^2 }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} \\ \nonumber&\quad \le \Big \Vert \int _{(T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N })\times (T_{r,\infty }^{N }(x)\cap \mathsf {B}^{N })} \frac{ \textsf {L}^{N }(y,y)\textsf {L}^{N }(z,z)}{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \mathrm{d}y\mathrm{d}z \Big \Vert _{R} \\ \nonumber&\quad = \Big \Vert \int _{T_{r,\infty }^{N }(x)\cap \mathsf {U}^{N }} \frac{ \textsf {L}^{N }(y,y)}{ |\textsf {x}_{\theta }^N- y |} \mathrm{d}y \int _{ T_{r,\infty }^{N }(x)\cap \mathsf {B}^{N }} \frac{ \textsf {L}^{N }(z,z)}{ |\textsf {x}_{\theta }^N- z |} \mathrm{d}z \Big \Vert _{R} \\ \nonumber&\quad = \mathcal {O}(\log N ) \mathcal {O}(N^{\frac{1}{3} + \alpha }) \rightarrow 0 \quad \text { as } N \rightarrow \infty . \end{aligned}$$
(4.29)

Collecting (4.27), (4.28), and (4.29), we conclude (4.22). \(\square \)

Lemma 16

(4.11) holds.

Proof

We shall estimate the three terms in (4.11), beginning with the first. From (3.11) and (4.21), we have

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y )^2 \mathrm{d}y }{N |\textsf {x}_{\theta }^N- y |^2 \textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \Vert _{R} \\ \nonumber&\quad \le \lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(y,y)\mathrm{d}y }{N |\textsf {x}_{\theta }^N- y |^2} \Big \Vert _{R} = 0 . \end{aligned}$$
(4.30)

Next, using the Schwarz inequality, we have for the second term

$$\begin{aligned}&\Big \Vert \int _{T_{r,\infty }^{N }(x)^2 } \frac{ \textsf {L}^{N }(y,z)\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\textsf {L}^{N }(\textsf {x}_{\theta }^N,z ) \, \mathrm{d}y\mathrm{d}z }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \Vert _{R} \\&\quad \le \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 } \frac{ \textsf {L}^{N }(y,z)^2 \mathrm{d}y\mathrm{d}z }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \Big \Vert _{R} ^{\frac{1}{2}} \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 } \frac{ \textsf {L}^{N }(\textsf {x}_{\theta }^N,y)^2 \textsf {L}^{N }(\textsf {x}_{\theta }^N,z)^2 \mathrm{d}y\mathrm{d}z }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)^2 } \Big \Vert _{R}^{\frac{1}{2}} \\&\quad = \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 } \frac{ \textsf {L}^{N }(y,z)^2 \mathrm{d}y\mathrm{d}z }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |} \Big \Vert _{R}^{\frac{1}{2}} \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{ \textsf {L}^{N }(\textsf {x}_{\theta }^N,y)^2 }{|\textsf {x}_{\theta }^N- y | \textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \mathrm{d}y \Big \Vert _{R} . \end{aligned}$$

Applying (4.22) and (4.15) to the last line, we obtain

$$\begin{aligned}&\lim _{r \rightarrow \infty } \limsup _{N \rightarrow \infty } \Big \Vert \int _{T_{r,\infty }^{N }(x)^2 } \frac{ \textsf {L}^{N }(y,z)\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\textsf {L}^{N }(\textsf {x}_{\theta }^N,z ) \, \mathrm{d}y\mathrm{d}z }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \Vert _{R}= 0 . \end{aligned}$$
(4.31)

We finally estimate the third term. We have

$$\begin{aligned}&\Big \Vert \int _{T_{r,\infty }^{N }(x)^2 } \frac{ \textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\textsf {L}^{N }(\textsf {x}_{\theta }^N,z)\, \mathrm{d}y\mathrm{d}z }{ |\textsf {x}_{\theta }^N- y | |\textsf {x}_{\theta }^N- z |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)^2 } \Big \Vert _{R} \\ \nonumber&\quad = \Big \Vert \Big \{ \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\, \mathrm{d}y }{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \}^2 \Big \Vert _{R} \\ \nonumber&\quad = \Big \Vert \int _{T_{r,\infty }^{N }(x)} \frac{\textsf {L}^{N }(\textsf {x}_{\theta }^N,y)\, \mathrm{d}y }{ |\textsf {x}_{\theta }^N- y |\textsf {L}^{N }(\textsf {x}_{\theta }^N,\textsf {x}_{\theta }^N)} \Big \Vert _{R}^2 \rightarrow 0 \quad \text { by } (4.16) . \end{aligned}$$
(4.32)

From (4.30), (4.31), and (4.32), we obtain (4.11). This completes the proof. \(\square \)

5 Proof of Theorem 1.1

From Lemmas 13–16, we deduce that all the assumptions (2.21)–(2.24) in Lemma 6 are satisfied. Hence, (2.10) is proved by Lemma 6. Then, Theorem 1.1 follows from Lemmas 4, 5, and 6.

6 Proof of Theorem 1.2

In this section, we prove Theorem 1.2 using Theorem 1.1. For the proof of Theorem 1.2, it is sufficient to prove (1.15) in \( C([0,T];\mathbb {R}^m)\) for each \( T \in \mathbb {N}\). Hence, we fix \( T \in \mathbb {N}\). Let \( \mathbf X ^{N }=(X^{N , i})_{i=1}^{N }\) be as in (1.13). Let \( Y^{\theta , N,i} = \{ Y_t^{\theta , N,i} \} \) be defined by

$$\begin{aligned} Y_t^{\theta , N,i} = X_t^{N , i} + \theta t . \end{aligned}$$
(6.1)

Then, from (1.13) we see that \( \mathbf Y ^{\theta ,N} = (Y^{\theta ,N,i})_{i=1}^N\) is a solution of

$$\begin{aligned} \quad \mathrm{d} Y_t^{\theta ,N,i} = \,&\mathrm{d}B_t^{i} + \sum _{ j \ne i }^{N } \frac{1}{Y_t^{\theta ,N,i} - Y_t^{\theta ,N,j} }\mathrm{d}t - \frac{1}{N}Y_t^{\theta ,N,i}\,\mathrm{d}t + \frac{\theta }{N} \,\mathrm{d}t \end{aligned}$$
(6.2)

with the same initial condition as \( \mathbf X ^{N}\). Let \( P^{\theta , N}\) and \( Q^{\theta , N}\) be the distributions of \( \mathbf X ^{N}\) and \( \mathbf Y ^{\theta ,N}\) on \( C([0,T];\mathbb {R}^N )\), respectively. Then, applying the Girsanov theorem [3, pp. 190–195] to (6.2), we see that

$$\begin{aligned} \frac{\mathrm{d} Q^{\theta , N}}{\mathrm{d} P^{\theta , N}} (\mathbf W ) =&\exp \left\{ \int _0^T \sum _{i=1}^N \frac{\theta }{N} \mathrm{d}B_t^i -\frac{1}{2} \int _0^T \sum _{i=1}^N \Big | \frac{\theta }{N}\Big |^2 \mathrm{d}t \right\} \\ \nonumber&= \exp \left\{ \frac{\theta }{N}\sum _{i=1}^N B_T^i - \frac{\theta ^2 T }{2N } \right\} , \end{aligned}$$
(6.3)

where we write \( \mathbf W =(W^i)\in C([0,T];\mathbb {R}^N )\), and \( \{ B^i \}_{i=1}^N \) are independent Brownian motions starting at the origin under \( P^{\theta , N}\).
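For completeness, the simplification in the second line of (6.3) is the elementary computation

$$\begin{aligned} \int _0^T \sum _{i=1}^N \frac{\theta }{N} \,\mathrm{d}B_t^i = \frac{\theta }{N}\sum _{i=1}^N B_T^i , \qquad \frac{1}{2} \int _0^T \sum _{i=1}^N \Big | \frac{\theta }{N}\Big |^2 \mathrm{d}t = \frac{1}{2} \, N \, \frac{\theta ^2}{N^2} \, T = \frac{\theta ^2 T }{2N } , \end{aligned}$$

where we used \( B_0^i = 0 \).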

Lemma 17

For each \( \epsilon > 0 \),

$$\begin{aligned}&\lim _{N\rightarrow \infty } Q^{\theta , N} \Big ( \Big | \frac{\mathrm{d} P^{\theta , N}}{\mathrm{d} Q^{\theta , N}} (\mathbf W )-1 \Big | \ge \epsilon \Big ) = 0. \end{aligned}$$
(6.4)

Proof

It is sufficient for (6.4) to show that, for each \( \epsilon > 0 \),

$$\begin{aligned} \lim _{N\rightarrow \infty } P^{\theta , N} \Big ( \Big | \frac{\mathrm{d} Q^{\theta , N}}{d P^{\theta , N}} (\mathbf W )-1 \Big | \ge \epsilon \Big ) = 0 . \end{aligned}$$

This follows from (6.3) immediately. \(\square \)
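For the reader's convenience, here is a minimal sketch of the two steps in the proof above. First, by (6.3), under \( P^{\theta , N}\) the random variable

$$\begin{aligned} \log \frac{\mathrm{d} Q^{\theta , N}}{\mathrm{d} P^{\theta , N}} (\mathbf W ) = \frac{\theta }{N}\sum _{i=1}^N B_T^i - \frac{\theta ^2 T }{2N } \end{aligned}$$

is Gaussian with mean \( -\theta ^2 T/(2N) \) and variance \( \theta ^2 T/N \), both of which tend to zero, so \( \mathrm{d} Q^{\theta , N}/\mathrm{d} P^{\theta , N} \rightarrow 1 \) in \( P^{\theta , N}\)-probability. Second, the passage from \( P^{\theta , N}\)-probability to \( Q^{\theta , N}\)-probability can be justified, for instance, by the bound \( \mathrm {E}^{P^{\theta , N}}[(\mathrm{d} Q^{\theta , N}/\mathrm{d} P^{\theta , N})^2] = \mathrm{e}^{\theta ^2 T/N} \le \mathrm{e}^{\theta ^2 T}\), which shows that the densities are uniformly integrable under \( P^{\theta , N}\); hence events whose \( P^{\theta , N}\)-probability tends to zero also have \( Q^{\theta , N}\)-probability tending to zero.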

Proof of Theorem 1.2

We write \( \mathbf W ^m =(W^1,\ldots ,W^m) \in C([0,T];\mathbb {R}^m )\) for \( \mathbf W =(W^i)_{i=1}^{N}\), where \( m \le N \le \infty \). Let \( Q^{\theta }\) be the distribution of the solution \( \mathbf Y ^{\theta }\) with initial distribution \( \mu _{\theta }\circ \mathfrak {l} ^{-1}\). From Theorem 1.1 and (6.1), we deduce that for each \( m \in \mathbb {N}\)

$$\begin{aligned} \lim _{N\rightarrow \infty }Q^{\theta , N}( \mathbf W ^m \in \cdot ) = Q^{\theta }( \mathbf W ^m \in \cdot ) \end{aligned}$$

weakly in \( C([0,T];\mathbb {R}^m)\). Hence, for each \( F \in C_{b}(C([0,T];\mathbb {R}^m))\),

$$\begin{aligned}&\lim _{N\rightarrow \infty } \int _{C([0,T];\mathbb {R}^N)} F (\mathbf W ^m)\mathrm{d}Q^{\theta , N}= \int _{C([0,T];\mathbb {R}^{\mathbb {N}})} F (\mathbf W ^m)\mathrm{d}Q^{\theta } . \end{aligned}$$
(6.5)

We obtain from (6.4) and (6.5) that

$$\begin{aligned} \lim _{N\rightarrow \infty }\int _{C([0,T];\mathbb {R}^N)} F (\mathbf W ^m)\mathrm{d}P^{\theta , N} =&\lim _{N\rightarrow \infty }\int _{C([0,T];\mathbb {R}^N)} F (\mathbf W ^m) \frac{\mathrm{d} P^{\theta , N}}{\mathrm{d} Q^{\theta , N}} (\mathbf W ) \mathrm{d}Q^{\theta ,N } \\ =&\lim _{N\rightarrow \infty }\int _{C([0,T];\mathbb {R}^N)} F (\mathbf W ^m) \mathrm{d}Q^{\theta ,N } \\ =&\int _{C([0,T];\mathbb {R}^{\mathbb {N}})} F (\mathbf W ^m)\mathrm{d}Q^{\theta } . \end{aligned}$$

This implies (1.15). We have thus completed the proof of Theorem 1.2. \(\square \)