Introduction

We study infinite-dimensional stochastic differential equations (ISDEs) of

$$\begin{aligned}&\quad \quad \quad \mathbf{X }=(X^i)_{i\in \mathbb {N}} \in C([0,\infty );{\mathbb {R}}^{d\mathbb {N}}) , \quad \text { where } {\mathbb {R}}^{d\mathbb {N}}= ({\mathbb {R}}^d)^{\mathbb {N}}, \end{aligned}$$

describing infinitely many Brownian particles moving in \( {\mathbb {R}}^d\) with free potential \( \varPhi = \varPhi (x)\) and interaction potential \( \varPsi = \varPsi (x,y) \). The ISDEs of \( \mathbf{X }=(X^i)_{i\in \mathbb {N}} \) are given by

$$\begin{aligned}&dX_t^i = d B_t^i - \frac{\beta }{2}\nabla _x \varPhi (X_t^i) dt - \frac{\beta }{2} \sum _{j\not =i}^{\infty } \nabla _x \varPsi (X_t^i,X_t^j) dt \quad (i \in \mathbb {N}) . \end{aligned}$$
(1.1)

Here \(\mathbf{B }=(B^i)_{i=1}^{\infty }\), where \( \{B^i\}_{i\in \mathbb {N}}\) are independent copies of d-dimensional Brownian motion, \( \nabla _x = (\frac{\partial }{\partial x_i} )_{i=1}^d\) is the nabla in x, and \( \beta \) is a positive constant called the inverse temperature. The process \( (\mathbf{X },\mathbf{B })\) is defined on a filtered probability space \( (\varOmega ,{\mathscr {F}},P,\{ {\mathscr {F}}_t \} )\).

The study of ISDEs was initiated by Lang [16, 17], and continued by Shiga [37], Fritz [5], the second author [39], and others. In these works, the free potential \( \varPhi \) is assumed to be zero and the interaction potentials \(\varPsi \) are of class \( C_0^3 ({\mathbb {R}}^d)\) or decay exponentially at infinity. This restriction on \( \varPsi \) excludes polynomially decaying and logarithmically growing interaction potentials, which are essential from the viewpoint of statistical physics and random matrix theory. The following are examples of such ISDEs.

Sine\({}_{\beta }\) interacting Brownian motions (Sect. 13.1)

Let \( d= 1 \), \( \varPhi (x) = 0 \), \( \varPsi (x,y) = - \log |x-y|\). We set

$$\begin{aligned}&dX_t^i = dB_t^i + \frac{\beta }{2}\lim _{r\rightarrow \infty } \sum _{|X_t^i-X_t^j|<r,\ j\not =i } \frac{1}{X_t^i-X_t^j} dt \quad (i\in {\mathbb {N}}). \end{aligned}$$
(1.2)

This ISDE with \( \beta = 2 \) is called the Dyson model in infinite dimensions by Spohn [38].
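The drift in (1.2) arises formally from the general form (1.1): in one dimension, with \( \varPsi (x,y) = -\log |x-y|\),

$$\begin{aligned}&- \frac{\beta }{2} \nabla _x \varPsi (x,y) = \frac{\beta }{2} \nabla _x \log |x-y| = \frac{\beta }{2}\, \frac{1}{x-y} . \end{aligned}$$

The sum over j is not absolutely convergent, which is why (1.2) carries the conditional limit \( \lim _{r\rightarrow \infty }\) over \( |X_t^i-X_t^j| < r \).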

Airy\({}_{\beta } \) interacting Brownian motions

Let \( d=1\), \( \varPhi (x) = 0 \), \( \varPsi (x,y) = - \log |x-y|\). We set

$$\begin{aligned}&dX^i_t=dB^i_t + \frac{\beta }{2} \lim _{r\rightarrow \infty } \left\{ \left( \sum _{j \not = i, \ |X^j_t |<r}\frac{1}{X^i_t -X^j_t } \right) -\int _{|x|<r}\frac{{\hat{\rho }}(x)}{-x}dx \right\} dt \quad (i\in {\mathbb {N}}) . \end{aligned}$$
(1.3)

Here we set

$$\begin{aligned}&{\hat{\rho }}(x) = \frac{1_{(-\infty , 0)}(x)}{\pi } \sqrt{-x}. \end{aligned}$$

The function \({\hat{\rho }}\) is the scaling limit of the semicircle function

$$\begin{aligned} \sigma _{\mathrm{semi}}(x) = \frac{1}{2\pi }\sqrt{4-x^2}\mathbf{1 }_{(-2,2)}(x) \end{aligned}$$

under the scaling \(n^{1/3}\sigma _{\mathrm{semi}}(xn^{-2/3}+2)\) associated with the soft edge.
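Indeed, for \( x < 0 \), substituting \( y = xn^{-2/3}+2 \) into \( \sigma _{\mathrm{semi}}\) gives \( 4 - y^2 = -4xn^{-2/3} - x^2 n^{-4/3}\), and hence

$$\begin{aligned}&n^{1/3}\sigma _{\mathrm{semi}}(xn^{-2/3}+2) = \frac{n^{1/3}}{2\pi }\sqrt{-4xn^{-2/3} - x^2 n^{-4/3}} \ \longrightarrow \ \frac{\sqrt{-x}}{\pi } = {\hat{\rho }}(x) \quad (n\rightarrow \infty ), \end{aligned}$$

while for \( x > 0 \) the argument lies outside the support \( (-2,2)\), so the limit vanishes, in accordance with the indicator \( 1_{(-\infty ,0)}\) in \( {\hat{\rho }}\).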

We solve (1.3) for \( \beta = 1,2,4\) in [30] using a result presented in this paper. As the solutions \( \mathbf{X }=(X_t^i)_{i\in \mathbb {N}}\) do not collide with each other, we label them in decreasing order such that \( X_t^{i+1} < X_t^{i}\) for all \( i \in \mathbb {N}\) and \( 0 \le t < \infty \). Let \( \beta = 2\). The top particle \( X_t^1\) is called the Airy process and has been studied extensively in [11, 12, 33] and elsewhere. In [32], we calculate the space-time correlation functions of the unlabeled dynamics \( {\textsf {X}}_t=\sum _{i=1}^{\infty }\delta _{X_t^i}\) and prove that they are equal to those of [14] and others. In particular, our dynamics for \( \beta =2\) are the same as the Airy line ensemble constructed in [3].

Bessel\({}_{\alpha ,\beta } \) interacting Brownian motions

Let \( d = 1 \), \( 1 \le \alpha < \infty \), \( \varPhi (x) = - \frac{\alpha }{2} \log x \) and \( \varPsi (x,y) = - \log |x-y|\). We set

$$\begin{aligned}&dX_t^i = dB_t^i + \frac{\beta }{2} \left\{ \frac{\alpha }{2X_t^i } + \sum _{ j\not = i }^{\infty } \frac{1}{X_t^i - X_t^j} \right\} dt \quad (i \in \mathbb {N}) .\end{aligned}$$
(1.4)

Here particles move in \( (0,\infty )\).
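The drift in (1.4) is again the formal specialization of (1.1): with \( \varPhi (x) = -\frac{\alpha }{2}\log x \) and \( \varPsi (x,y) = -\log |x-y|\),

$$\begin{aligned}&- \frac{\beta }{2}\nabla _x \varPhi (x) = \frac{\beta }{2}\cdot \frac{\alpha }{2x} , \qquad - \frac{\beta }{2}\nabla _x \varPsi (x,y) = \frac{\beta }{2}\cdot \frac{1}{x-y} . \end{aligned}$$

The logarithmic free potential produces a repulsive drift away from the origin, consistent with the fact that the particles stay in \( (0,\infty )\).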

Ginibre interacting Brownian motions (Theorem 7.1)

Let \( d=2\), \( \varPsi (x,y) = -\log |x-y| \), and \( \beta = 2 \). We introduce the ISDE

$$\begin{aligned}&dX_t^i = dB_t^i + \lim _{r\rightarrow \infty } \sum _{|X_t^i-X_t^j|<r,\ j\not =i } \frac{X_t^i-X_t^j}{|X_t^i-X_t^j|^2} dt \quad (i\in {\mathbb {N}}) \end{aligned}$$
(1.5)

and also

$$\begin{aligned}&dX_t^i = dB_t^i - X_t^i dt + \lim _{r\rightarrow \infty } \sum _{|X_t^j|<r,\ j\not =i } \frac{X_t^i-X_t^j}{|X_t^i-X_t^j|^2} dt \quad (i\in {\mathbb {N}}). \end{aligned}$$
(1.6)

We shall prove that ISDEs (1.5) and (1.6) have the same strong solutions, reflecting the dynamical rigidity of the two-dimensional stochastic Coulomb system called the Ginibre random point field (see Sect. 7.1).

All these examples are related to random matrix theory. ISDEs (1.2) and (1.3) with \( \beta = 1,2,4 \) are the dynamical bulk and soft-edge scaling limits of the finite particle systems of the Gaussian orthogonal/unitary/symplectic ensembles, respectively. ISDE (1.4) with \( \beta = 1,2,4 \) is the hard-edge scaling limit of those of the Laguerre ensembles. ISDEs (1.5) and (1.6) are dynamical bulk scaling limits of the Ginibre ensemble, which is the system of eigenvalues of non-Hermitian Gaussian random matrices.

The next two examples have interaction potentials of Ruelle's class [35]. The previous results also exclude these potentials because of their polynomial decay at infinity. We shall give a general theorem applicable to essentially all Ruelle-class potentials (see Theorem 13.2).

Lennard-Jones 6-12 potential

Let \( d= 3\) and \(\varPsi _{6,12}(x) = \{|x|^{-12}-|x|^{-6}\} \). The interaction \( \varPsi _{6,12}\) is called the Lennard-Jones 6-12 potential. The corresponding ISDE is

$$\begin{aligned}&dX^i_t = dB^i_t + \frac{\beta }{2} \sum ^{\infty }_{j=1,j\ne i} \left\{ \frac{12(X^i_t-X^j_t)}{|X^i_t-X^j_t|^{14}} - \frac{6(X ^i_t-X^j_t)}{|X^i_t-X^j_t|^{8}}\, \right\} dt \quad (i\in \mathbb {N}) . \end{aligned}$$
(1.7)

Since the sum in (1.7) is absolutely convergent, we do not include the prefactor \( \lim _{r\rightarrow \infty }\), unlike in the other examples (1.2), (1.3), (1.5), and (1.6).
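As a check, the drift in (1.7) is \( -\frac{\beta }{2}\nabla \varPsi _{6,12}\) evaluated at \( x = X^i_t - X^j_t \): using \( \nabla |x|^{-p} = -p\, x\, |x|^{-p-2}\),

$$\begin{aligned}&- \frac{\beta }{2}\nabla \varPsi _{6,12}(x) = \frac{\beta }{2}\left\{ \frac{12\, x}{|x|^{14}} - \frac{6\, x}{|x|^{8}}\right\} . \end{aligned}$$

The drift decays like \( |x|^{-7}\) at infinity, which in \( d = 3 \) is fast enough for absolute convergence over configurations with, say, bounded density.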

Riesz potentials

Let \( d < a \in \mathbb {N}\) and \(\varPsi _a(x)=(\beta / a )|x|^{-a}\). The corresponding ISDE is

$$\begin{aligned} dX^i_t=dB^i_t + \frac{\beta }{2} \sum ^{\infty }_{j=1,j\ne i} \frac{X ^i_t-X^j_t}{| X^i_t-X^j_t|^{a+2}}dt \quad (i\in \mathbb {N}) . \end{aligned}$$
(1.8)

At first glance, ISDE (1.8) resembles (1.2) and (1.5), because (1.2) and (1.5) correspond to the case \( a = 0 \) of (1.8). However, the sums in the drift terms of (1.8) converge absolutely, unlike those in (1.2) and (1.5).

In the present paper, we introduce a new method of establishing the existence of strong solutions and the pathwise uniqueness of solutions of ISDEs, including ISDEs with long-range interaction potentials. Our results apply to a surprisingly wide range of models and, in particular, to all the examples above (with suitably chosen \( \beta \)).

In the previous works, the Itô scheme was used directly in infinite dimensions. This scheme requires the "(local) Lipschitz continuity" of coefficients because it relies on the contraction property of Picard approximations. Hence, a smart choice of stopping times is pivotal but is difficult for long-range potentials in infinite-dimensional spaces. We do not apply the Itô scheme to ISDEs directly; instead, we apply it infinitely many times to an infinite system of finite-dimensional SDEs with consistency (IFC), which we explain in the sequel.

Our method is based on several novel ideas and consists of three main steps. The first step begins by reducing the ISDE to a differential equation of a random point field \( \mu \) satisfying

$$\begin{aligned}&\nabla _x \log \mu ^{[1]}(x,{\textsf {s}})= -\beta \left\{ \nabla _x \varPhi (x) + \lim _{r\rightarrow \infty } \sum _{|s_i |< r}^{\infty } \nabla _x \varPsi (x, s_i ) \right\} . \end{aligned}$$
(1.9)

Here \( {\textsf {s}}=\sum _i \delta _{s_i}\) is a configuration, \( \mu ^{[1]}\) is the 1-Campbell measure of \( \mu \) as defined in (2.1), and \( \nabla _x \log \mu ^{[1]}\) is defined in (2.3). We call \( \nabla _x \log \mu ^{[1]}\) the logarithmic derivative of \( \mu \). Equation (1.9) is given in an informal manner here, and we refer to Sect. 2 for the precise definition.
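For instance, in the sine\(_{\beta }\) case (\( \varPhi = 0 \), \( \varPsi (x,y) = -\log |x-y|\)), equation (1.9) informally reads

$$\begin{aligned}&\nabla _x \log \mu ^{[1]}(x,{\textsf {s}}) = \beta \lim _{r\rightarrow \infty } \sum _{|s_i| < r} \frac{1}{x - s_i} , \end{aligned}$$

matching the drift in (1.2), which is one half of this logarithmic derivative.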

The first author proved in [26], together with [25, 27, 28], that if (1.9) has a solution \( \mu \) satisfying assumptions (A2)–(A4) in Sects. 9.2 and 10, then the ISDE (1.1) has a solution \( (\mathbf{X }, \mathbf{B })\) starting at \( \mathbf{s }=(s_i)_{i\in {\mathbb {N}}}\). Here a solution means a pair of a stochastic process \( \mathbf{X }\) and an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued Brownian motion \( \mathbf{B }=(B^i)_{i\in \mathbb {N}}\) satisfying (1.1). We thus see that the quartet of papers [25,26,27,28] achieved the first step. We note that the solutions in [26] are not strong solutions in the sense explained below, and that the existence of strong solutions and the pathwise uniqueness of solutions of ISDE (1.1) were left open in [26].

In the second step, we introduce the IFC mentioned above using the solution \( (\mathbf{X }, \mathbf{B })\) from the first step. That is, we consider a family of finite-dimensional SDEs of \( \mathbf{Y }^m =(Y^{m,i})_{i=1}^m \), \( m \in \mathbb {N}\), given by

$$\begin{aligned} dY_t^{m,i}&= dB_t^i - \frac{\beta }{2} \nabla _x \varPhi (Y_t^{m,i} ) dt - \frac{\beta }{2} \sum _{j\not =i}^{m} \nabla _x \varPsi (Y_t^{m,i}, Y_t^{m,j} ) dt \nonumber \\&\quad - \frac{\beta }{2} \lim _{r\rightarrow \infty } \sum _{j= m + 1,\ |X_t^{j}|<r}^{\infty } \nabla _x \varPsi (Y_t^{m,i}, X_t^{j} ) dt \end{aligned}$$
(1.10)

with initial condition

$$\begin{aligned}&\mathbf{Y }_0^m = \mathbf{s }^m . \end{aligned}$$

Here, for each \( m \in \mathbb {N}\), we set \( \mathbf{s }^m = (s_i)_{i=1}^{m} \) for \( \mathbf{s }=(s_i)_{i=1}^{\infty }\), and \( \mathbf{B }^m = (B^i)_{i=1}^{m} \) denotes the \( ({\mathbb {R}}^d)^m\)-valued Brownian motion. Note that we regard \( \mathbf{X }\) as an ingredient of the coefficients of the SDE (1.10). Hence the SDE (1.10) is time-inhomogeneous, although the original ISDE (1.1) is time-homogeneous.

Under suitable assumptions, SDE (1.10) has a pathwise unique, strong solution \( \mathbf{Y }^{m} \). Hence, \( \mathbf{Y }^{m}\) is a function of \( \mathbf{s }\), \( \mathbf{B }\), and \( \mathbf{X }\):

$$\begin{aligned} \mathbf{Y }^{m} = \mathbf{Y }^{m}(\mathbf{s },\mathbf{B },\mathbf{X })= \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{m}(\mathbf{X }) . \end{aligned}$$

As a function of \( (\mathbf{s },\mathbf{B })\), the process \( \mathbf{Y }^{m}\) depends only on the first m components \( (\mathbf{s }^{m},\mathbf{B }^{m})\). Since we let m go to infinity, the limit, if it exists, depends on the whole of \( (\mathbf{s },\mathbf{B })\). As a function of \( \mathbf{X }\), the solution \( \mathbf{Y }^m\) depends only on \(\mathbf{X }^{m*}\), where we set

$$\begin{aligned} \mathbf{X }^{m*}=(X^{i})_{i=m+1}^{\infty } . \end{aligned}$$

Hence, \( \mathbf{Y }^m(\mathbf{s },\mathbf{B },\cdot )\) is \( \sigma [\mathbf{X }^{m*}] \)-measurable. We therefore write

$$\begin{aligned}&\mathbf{Y }^{m} = \mathbf{Y }^{m}(\mathbf{s },\mathbf{B },(\mathbf{0 }^m,\mathbf{X }^{m*})) = \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{m}((\mathbf{0 }^m,\mathbf{X }^{m*})) . \end{aligned}$$

Here \( \mathbf{0 }^m = (0,\ldots ,0)\) is the \( ({\mathbb {R}}^d)^m \)-valued constant path. The value 0 has no special meaning; we insert \( \mathbf{0 }^m\) purely for notational convenience.

By the pathwise uniqueness of solutions of SDE (1.10), we see that \( \mathbf{X }^m = (X^i)_{i=1}^{m}\) is the unique strong solution of (1.10). Hence we deduce that

$$\begin{aligned}&\mathbf{X }^m = \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{m}((\mathbf{0 }^m,\mathbf{X }^{m*})) \quad \text { for all } m \in \mathbb {N}. \end{aligned}$$
(1.11)

This implies that \( \mathbf{X }^{m}\) becomes a function of \( \mathbf{s } \), \( \mathbf{B } \), and \( \mathbf{X }^{m*}\). The dependence on \( \mathbf{X }^{m*}\) is inherited from the coefficients of the SDE (1.10).

Relation (1.11) provides the crucial consistency we use. From it we deduce that the maps \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{m} \) have a limit \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty }\) as m goes to infinity, at least at \( \mathbf{X }\), in the sense that the first k components \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{m,k}\) of \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^m \) coincide with \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{m',k}\) for all \( k \le m , m'\). Furthermore, \( \mathbf{X }\) is a fixed point of the limit map \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty }\):

$$\begin{aligned}&\mathbf{X } = \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty } (\mathbf{X }) . \end{aligned}$$
(1.12)

Hence, the limit map \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty } \) is a function of \( (\mathbf{s },\mathbf{B })\) and of \( \mathbf{X }\) itself through \( \{\mathbf{X }^{m*}\}_{m\in \mathbb {N}}\). The point is that, for each fixed \( (\mathbf{s },\mathbf{B })\), the limit \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty }= \mathbf{Y }^{\infty } (\mathbf{s },\mathbf{B },\cdot )\), as a function of \( \mathbf{X } \), is measurable with respect to the tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) of the labeled path space \(W ({\mathbb {R}}^{d\mathbb {N}}):=C([0,\infty );{\mathbb {R}}^{d\mathbb {N}})\). Here we set \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) such that

$$\begin{aligned} {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) = \bigcap _{m=1}^{\infty } \sigma [\mathbf{X }^{m*}] .\end{aligned}$$

Let \( {\widetilde{P}}_{\mathbf{s }}\) be the distribution of the solution \( (\mathbf{X },\mathbf{B })\) of ISDE (1.1) starting at \( \mathbf{s }\). By definition, \( {\widetilde{P}}_{\mathbf{s }}\) is a probability measure on \( W ({\mathbb {R}}^{d\mathbb {N}})\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\), where \( W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})=\{ \mathbf{w } \in W ({\mathbb {R}}^{d\mathbb {N}}); \mathbf{w }_0 = \mathbf{0 } \} \). We write \( (\mathbf{w },\mathbf{b }) \in W ({\mathbb {R}}^{d\mathbb {N}})\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\). Let \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) be the distribution of the first component conditioned on the second component being \( \mathbf{b }\), that is, \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) is the regular conditional distribution of \( {\widetilde{P}}_{\mathbf{s }}\) such that

$$\begin{aligned} {\widetilde{P}}_{\mathbf{s },\mathbf{b }}= {\widetilde{P}}_{\mathbf{s }}(\mathbf{w } \in \cdot \, | \mathbf{b }). \end{aligned}$$

By construction, \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) is the distribution of \( \mathbf{X }\) starting at \( \mathbf{s }\) conditioned on \( \mathbf{B }=\mathbf{b }\).

Because \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty }\) is a \({\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}})\)-measurable function of \( \mathbf{X }\) for each fixed \( (\mathbf{s },\mathbf{b })\), we deduce that, if \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}})\) is trivial with respect to \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\), then \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty }\) is, \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-a.s., a function of \( (\mathbf{s },\mathbf{B })\) alone, independent of \( \mathbf{X }\). Hence, by the identity (1.12), the existence of strong solutions and their pathwise uniqueness are related to the \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-triviality of \({\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \).

In Theorem 4.1 (First tail theorem), we give a sufficient condition for the existence of strong solutions and the pathwise uniqueness in terms of the \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-triviality of \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \). This condition is in fact necessary and sufficient, as we see in Theorem 4.2. The critical point here is that, to some extent, we regard the labeled path tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) as a boundary of ISDE (1.1) and pose boundary conditions in terms of its \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-triviality.

The formalism regarding a strong solution as a function F on path space is at the heart of the Yamada–Watanabe theory. They proved the equivalence between \( \{\)the existence of a weak solution + the pathwise uniqueness\( \}\) and the existence of a unique strong solution (see [9, Theorem 1.1, p. 163]). Our main theorems, Theorems 4.1 and 4.3, clarify the relation between this property of solutions and tail triviality of the labeled path space, and provide the existence of a strong solution and the pathwise uniqueness of solutions of ISDEs. In this sense, they are a counterpart of the Yamada–Watanabe result in infinite dimensions. Our argument is, however, completely different from the Yamada–Watanabe theory. Indeed, the existence of a weak solution has been established in the first step, and we prove the pathwise uniqueness and the existence of strong solutions together through the analysis of the tail \( \sigma \)-field of the labeled path space.

In the third step, we prove \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-triviality of \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \). Let \( {\textsf {S}}\) be the set of configurations on \( {\mathbb {R}}^d\). Then, by definition, \( {\textsf {S}}\) is the set given by

$$\begin{aligned} {\textsf {S}}= \left\{ {\textsf {s}} = \sum _{i} \delta _{s_i}\, ;\, {\textsf {s}} ( K ) < \infty \text { for all compact sets } K \subset {\mathbb {R}}^d\right\} . \end{aligned}$$
(1.13)

By convention, we regard the zero measure as an element of \( {\textsf {S}}\). Each element \( {\textsf {s}}\) of \( {\textsf {S}}\) is called a configuration. We endow \( {\textsf {S}}\) with the vague topology, which makes \( {\textsf {S}}\) a Polish space. Let \( {\mathscr {T}}\, ({\textsf {S}})\) be the tail \( \sigma \)-field of the configuration space \( {\textsf {S}}\) over \( {\mathbb {R}}^d\):

$$\begin{aligned}&{\mathscr {T}}\, ({\textsf {S}})= \bigcap _{r=1}^{\infty } \sigma [{\pi _r^{c}}] . \end{aligned}$$
(1.14)

Here \( S_{r}= \{ |x|\le r \} \) and \( {\pi _r^{c}}\) is the projection \( {\pi _r^{c}}\!:\!{\textsf {S}}\!\rightarrow \!{\textsf {S}}\) such that \( {\pi _r^{c}}({\textsf {s}}) = {\textsf {s}} (\cdot \cap S_{r}^c)\).

In Theorem 5.1 (Second tail theorem), we deduce the \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-triviality of \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) from the \( \mu \)-triviality of \( {\mathscr {T}}\, ({\textsf {S}})\), where \( \mu \) is the solution of equation (1.9). Because \( W ({\mathbb {R}}^{d\mathbb {N}})\) is a huge space and the tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) is not topologically well behaved, this step is difficult to perform.

The key point here is the absolute continuity condition (AC) given in Sect. 5, that is, the condition that the associated unlabeled process \( {\textsf {X}}=\{ {\textsf {X}}_t \} \), where \( {\textsf {X}}_t = \sum _{i=1}^{\infty } \delta _{X_t^i}\), starting from \( \lambda \), satisfies

$$\begin{aligned}&{\textsf {P}}_{\lambda }\circ {\textsf {X}}_t^{-1} \prec \mu \quad \text { for each } 0< t < \infty , \end{aligned}$$
(1.15)

where \( {\textsf {P}}_{\lambda }\) is the distribution of \( {\textsf {X}} \) such that \( {\textsf {P}}_{\lambda }\circ {\textsf {X}}_0^{-1} = \lambda \). Here we write \( m_1 \prec m_2 \) if \( m_1 \) is absolutely continuous with respect to \( m_2 \). This condition is satisfied if \( \lambda = \mu \) and \( \mu \) is a stationary probability measure of the unlabeled diffusion \( {\textsf {X}}\).

Under this condition, \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-triviality of \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) follows from \( \mu \)-triviality of \( {\mathscr {T}}\, ({\textsf {S}})\).

One of the key points of the proof of the second tail theorem is Proposition 5.1, which shows that the finite-dimensional distributions of an \( {\textsf {X}} \) satisfying (1.15), restricted to the tail \( \sigma \)-field, coincide with the restriction of the product measure of \( \mu \) to the tail \( \sigma \)-field of the product of the configuration space \( {\textsf {S}}\) [see (5.13)].

The difficulty in controlling \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) under the distribution given by the solution of ISDE (1.1) is that the labeled dynamics \( \mathbf{X }=(X^i)_{i\in \mathbb {N}}\) have no associated stationary measures, because such a measure would be, roughly, an infinite product of Lebesgue measures (if it existed). Instead, we study the associated unlabeled dynamics \( {\textsf {X}}\) as above. We shall assume that \( {\textsf {X}} \) satisfies the absolute continuity condition. Then we immediately see that its single-time distributions \( {\textsf {P}}_{\lambda }\circ {\textsf {X}}_t^{-1}\) \( (t \in (0,\infty ))\) starting from \( \lambda \) are tail trivial on \( {\textsf {S}}\). From this, with some argument, we deduce triviality of the tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} ({\textsf {S}})\) of the unlabeled path space \( C([0,\infty );{\textsf {S}})=: W ({\textsf {S}})\). We then lift the triviality of \( {\mathscr {T}}_{\text {path}} ({\textsf {S}})\) to that of \( {\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \). To implement this scheme, we employ a rather delicate treatment of the map \( \mathbf{Y }_{\mathbf{s },\mathbf{B }}^{\infty }\) in (1.12).

We formulate the second and third steps abstractly. This scheme is quite robust and conceptual, and can be applied to many other types of ISDEs involving stochastic integrals beyond the Itô type. Indeed, we do not use any particular structure of Itô stochastic analysis in these steps. Furthermore, even if \( \mu \) is not tail trivial, we can decompose it into tail trivial random point fields and solve the ISDEs in this case as well.

In [5], Fritz constructed non-equilibrium solutions in the sense that the state space of the solutions \( \mathbf{X }\) is explicitly given. In [5], it was assumed that the potentials are of class \( C_0^3({\mathbb {R}}^d)\) and the dimension d is less than or equal to 4. While we were preparing the manuscript, Tsai [40] constructed non-equilibrium solutions for the Dyson model with \( 1 \le \beta < \infty \). His proof relies on a monotonicity specific to one-dimensional particle systems and uses translation invariance of the solutions. Hence it is difficult to apply his method to multi-dimensional models, or to Bessel\(_{\alpha ,\beta }\) and Airy\( _{\beta }\) interacting Brownian motions even in one dimension. Moreover, the reversibility of the unlabeled processes of the solutions is left open in [40].

The present paper is organized as follows. In Sect. 2, we introduce notation used throughout the paper and recall some notions for random point fields.

Sections 3–6 are devoted to the general theory of ISDEs. In Sect. 3, we state the main general theorems (Theorems 3.1–3.2). We explain the role of the main assumptions \(\mathbf {(IFC)}\), (TT), (AC), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) in Sect. 3.4 and present a list of conditions. In Sect. 4, we clarify the relation between a strong solution and a weak solution satisfying \(\mathbf {(IFC)}\) together with triviality of \({\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) in Theorem 4.1. We do this in a general setting beyond interacting Brownian motions. We present and prove Theorem 4.1 (First tail theorem). In Sect. 5, we derive triviality of \({\mathscr {T}}_{\text {path}} ({\mathbb {R}}^{d\mathbb {N}}) \) from that of \( {\mathscr {T}}\, ({\textsf {S}})\). We present and prove Theorem 5.1 (Second tail theorem). In Sect. 6, we prove the main theorems (Theorems 3.1–3.2).

In Sects. 7 and 8 we study the Ginibre interacting Brownian motion, which is one of the most prominent examples of interacting Brownian motions with the logarithmic potential. In Sect. 7.1, we present preliminary results. In Sect. 7.2, we state the result for the Ginibre ensemble (Theorem 7.1). In Sect. 8, we prove Theorem 7.1 by employing Theorems 4.1 and 5.1.

Sections 9–13 are devoted to applying the general theory to the class of ISDEs called interacting Brownian motions. We prepare feasible sufficient conditions for applications. In Sect. 9, we quote results on weak solutions of ISDEs and the related Dirichlet form theory. In Sect. 10, we give sufficient conditions for assumptions \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) used in Theorems 3.1–3.2. In Sect. 11, we give a sufficient condition for assumption \(\mathbf {(IFC)}\). Section 12 is devoted to sufficient conditions for \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) for \( \mu \) with non-trivial tails. In Sect. 13, we give various examples and prove Theorems 13.1–13.2. In the Appendix, we prove the tail decomposition of random point fields.

We shall explain the main assumptions in the present paper in Sect. 3.4 and present a list of assumptions in Table 1.

Preliminary: logarithmic derivative and quasi-Gibbs measures

Let \( S\) be a closed set in \( {\mathbb {R}}^d\) such that the interior \( S_{\text {int}}\) is a connected open set satisfying \( \overline{S_{\text {int}}} = S\) and that the boundary \( \partial S\) has Lebesgue measure zero. Let \( {\textsf {S}}\) be the configuration space over \( S\). The set \( {\textsf {S}}\) is defined by (1.13) by replacing \( {\mathbb {R}}^d\) with \( S\).

A symmetric and locally integrable function \( \rho ^n \!:\!S^n\!\rightarrow \![0,\infty ) \) is called the n-point correlation function of a random point field \( \mu \) on \( {\textsf {S}}\) with respect to the Lebesgue measure if \( \rho ^n \) satisfies

$$\begin{aligned} \int _{A_1^{k_1}\times \cdots \times A_m^{k_m}} \rho ^n (x_1,\ldots ,x_n) dx_1\cdots dx_n = \int _{{\textsf {S}}} \prod _{i = 1}^{m} \frac{{\textsf {s}} (A_i) ! }{({\textsf {s}} (A_i) - k_i )!} d\mu \end{aligned}$$

for any sequence of disjoint bounded measurable sets \( A_1,\ldots ,A_m \in {\mathscr {B}}(S) \) and any sequence of natural numbers \( k_1,\ldots ,k_m \) satisfying \( k_1+\cdots + k_m = n \). When \( {\textsf {s}} (A_i) - k_i < 0\), we set \({{\textsf {s}} (A_i) ! }/{({\textsf {s}} (A_i) - k_i )!} = 0\) by convention.
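A standard illustration (not needed in the sequel): for the Poisson random point field with intensity \( \rho (x)dx \), the counts \( {\textsf {s}}(A_i)\) over disjoint sets are independent Poisson variables with means \( \int _{A_i}\rho \, dx \), and the factorial moments \( E[{\textsf {s}}(A)!/({\textsf {s}}(A)-k)!] = (\int _{A}\rho \, dx )^k \) yield the product form

$$\begin{aligned} \rho ^n (x_1,\ldots ,x_n) = \prod _{i=1}^{n} \rho (x_i) . \end{aligned}$$

Correlations of interacting random point fields, such as those in Sect. 1, deviate from this product structure.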

Let \( {\tilde{\mu }}^{[1]}\) be the measure on \((S\times {\textsf {S}}, {\mathscr {B}}(S)\otimes {\mathscr {B}}({\textsf {S}}))\) determined by

$$\begin{aligned} {\tilde{\mu }}^{[1]}(A\times {\textsf {B}})= \int _{{\textsf {B}}} {\textsf {s}}(A)\mu (d{\textsf {s}}), \quad A\in {\mathscr {B}}(S), \; {\textsf {B}}\in {\mathscr {B}}({\textsf {S}}). \end{aligned}$$

The measure \({\tilde{\mu }}^{[1]}\) is called the one-Campbell measure of \(\mu \). If \(\mu \) has a one-point correlation function \(\rho ^1\), there exists a regular conditional probability \({\tilde{\mu }}_x\) of \( \mu \) satisfying

$$\begin{aligned}&\int _{A}{\tilde{\mu }}_x ({\textsf {B}})\rho ^1(x)dx = {\tilde{\mu }}^{[1]} (A\times {\textsf {B}}), \quad A\in {\mathscr {B}}(S), \; {\textsf {B}}\in {\mathscr {B}}({\textsf {S}}). \end{aligned}$$

The measure \({\tilde{\mu }}_x\) is called the Palm measure of \(\mu \) [13].

In this paper, we use the probability measure \(\mu _x(\cdot )\equiv {\tilde{\mu }}_x(\cdot -\delta _x)\), which is called the reduced Palm measure of \(\mu \). Informally, \( \mu _x \) is given by

$$\begin{aligned} \mu _{x} = \mu (\cdot - \delta _x | \, {\textsf {s}} (\{ x \} ) \ge 1 ) .\end{aligned}$$
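For example (a classical fact of Slivnyak–Mecke type, recalled here only for orientation), the Poisson random point field is characterized by the property that its reduced Palm measures coincide with the field itself:

$$\begin{aligned} \mu _x = \mu \quad \text { for a.e.\ } x . \end{aligned}$$

That is, conditioning a Poisson field to have a point at x and then removing that point leaves the law unchanged; for interacting fields, the deviation of \( \mu _x \) from \( \mu \) reflects the interaction.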

We consider the Radon measure \( \mu ^{[1]}\) on \( S\times {\textsf {S}}\) such that

$$\begin{aligned}&\mu ^{[1]}(dx d{\textsf {s}}) = \rho ^1 (x) \mu _{x} (d{\textsf {s}}) dx . \end{aligned}$$
(2.1)

In the present paper, we always use \( \mu ^{[1]}\) instead of \({\tilde{\mu }}^{[1]}\). Hence we call \( \mu ^{[1]}\) the one-Campbell measure of \( \mu \).

For a subset A, we define \( \pi _{A}\!:\!{\textsf {S}}\!\rightarrow \!{\textsf {S}} \) by \( \pi _{A} ({\textsf {s}}) = {\textsf {s}} (\cdot \cap A )\). We say a function f on \( {\textsf {S}}\) is local if f is \( \sigma [\pi _{K}]\)-measurable for some compact set K in \( S\). For such a local function f on \( {\textsf {S}}\), we say f is smooth if \( {\check{f}}={\check{f}}_{O }\) is smooth, where O is a relatively compact open set in \( S\) such that \( K \subset O \). Here \( {\check{f}}_{O } \) is the function defined on \( \sum _{k=0}^{\infty } O^k \) such that, for each k, \( {\check{f}}_{O } (x_1,\ldots ,x_k)\) restricted to \( O^k \) is symmetric in \( x_j \) (\( j=1,\ldots ,k\)), satisfies \( {\check{f}}_{O }(x_1,\ldots ,x_k) = f ({\textsf {x}})\) for \( {\textsf {x}} = \sum _i \delta _{x_i}\), and is smooth in \( (x_1,\ldots ,x_k)\). For \( k=0\), \( {\check{f}}_{O } \) is a constant function on \( \{ {\textsf {s}};\, {\textsf {s}}(O ) = 0 \} \). Because \( {\textsf {x}}\) is a configuration and O is relatively compact, \( {\textsf {x}}\) has only finitely many particles in O. Note that \( {\check{f}}_{O } \) has the following consistency:

$$\begin{aligned}&{\check{f}}_{O }(x_1,\ldots ,x_k) = {\check{f}}_{O' } (x_1,\ldots ,x_k) \quad \text { for all }(x_1,\ldots ,x_k) \in (O \cap O')^k. \end{aligned}$$
(2.2)

Hence \( f ({\textsf {x}}) = {\check{f}}_{O } (x_1,\ldots ,x_k) \) is well defined.

Let \( {\mathscr {D}}_{\circ }\) be the set of all bounded, local smooth functions on \( {\textsf {S}}\). We set

$$\begin{aligned}&C_{0}^{\infty }(S)\otimes {\mathscr {D}}_{\circ }= \left\{ \sum _{i=1}^N f_i (x)g_i({\textsf {s}})\, ;\, f_i \in C_{0}^{\infty }(S),\ g_i \in {\mathscr {D}}_{\circ },\ N \in \mathbb {N}\right\} . \end{aligned}$$

Let \( S_r = \{ s \in S\, ;\, | s | \le r \} \). We write \( f \in L_{\text {loc}}^p(S\times {\textsf {S}}, \mu ^{[1]})\) if \( f \in L^p(S_{r}\times {\textsf {S}}, \mu ^{[1]})\) for all \( r \in \mathbb {N}\). For simplicity we set \( L_{\text {loc}}^p(\mu ^{[1]}) = L_{\text {loc}}^p(S\times {\textsf {S}}, \mu ^{[1]})\).

Definition 2.1

An \( {\mathbb {R}}^d\)-valued function \( {\textsf {d}}^{\mu }\) is called the logarithmic derivative of \(\mu \) if \( {\textsf {d}}^{\mu }\in L_{\text {loc}}^1(\mu ^{[1]})\) and, for all \(\varphi \in C_{0}^{\infty }(S)\otimes {\mathscr {D}}_{\circ }\),

$$\begin{aligned}&\int _{S\times {\textsf {S}}} {\textsf {d}}^{\mu }(x,{\textsf {y}})\varphi (x,{\textsf {y}}) \mu ^{[1]}(dx d{\textsf {y}}) = - \int _{S\times {\textsf {S}}} \nabla _x \varphi (x,{\textsf {y}}) \mu ^{[1]}(dx d{\textsf {y}}). \end{aligned}$$
(2.3)

If the boundary \( \partial S\) is nonempty and particles hit the boundary, then \( {\textsf {d}}^{\mu }\) would contain a term arising from the boundary condition. For example, if the Neumann boundary condition is imposed on the boundary, then there would be a local time-type drift. In this sense, it would be more natural to suppose that \( {\textsf {d}}^{\mu }\) is a distribution rather than \( {\textsf {d}}^{\mu }\in L_{\text {loc}}^1(\mu ^{[1]})\). Instead, we shall later assume that particles never hit the boundary, and the above formulation is thus sufficient in the present situation. It would be interesting to generalize the theory, including the case with the boundary condition; however, we do not pursue this here.
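As a simple illustration of Definition 2.1 (an aside, under the assumption that \( S = {\mathbb {R}}^d\) and \( \mu \) is the Poisson random point field with a smooth, positive intensity \( \rho \)): then \( \mu _x = \mu \), so \( \mu ^{[1]}(dx\, d{\textsf {s}}) = \rho (x)\mu (d{\textsf {s}})dx \), and integration by parts in x in (2.3) yields

$$\begin{aligned} {\textsf {d}}^{\mu }(x,{\textsf {s}}) = \nabla _x \log \rho (x) . \end{aligned}$$

In particular, for \( \rho = e^{-\beta \varPhi }\) this gives \( {\textsf {d}}^{\mu } = -\beta \nabla _x \varPhi \), which is (1.9) with \( \varPsi = 0 \).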

A sufficient condition for an explicit expression of the logarithmic derivative of a random point field is given in [26, Theorem 45]. Using this, one can obtain the logarithmic derivatives of random point fields appearing in random matrix theory such as sine\(_{\beta } \), Airy\(_{\beta } \) (\( \beta =1,2,4\)), Bessel\(_{2,\alpha } \) (\( \alpha \ge 1\)), and the Ginibre random point field (see Lemma 13.2, [8, 28, 30], Lemma 7.3, respectively). For canonical Gibbs measures with Ruelle-class interaction potentials, one can easily calculate the logarithmic derivative by employing the Dobrushin–Lanford–Ruelle (DLR) equation (see Lemma 13.5).
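For intuition, the defining identity (2.3) reduces in the one-particle case to classical integration by parts: if \( \mu (dx) = Z^{-1} e^{-\varPhi (x)}dx \) on \( {\mathbb {R}} \), its logarithmic derivative is \( -\varPhi '(x) \). The following is a numerical sketch of this check under illustrative assumptions (Gaussian free potential, a rapidly decaying test function); none of the choices are taken from the paper.

```python
import numpy as np

# One-particle check of the integration-by-parts identity (2.3):
# for mu(dx) = Z^{-1} exp(-Phi(x)) dx on R, the logarithmic derivative
# is d^mu(x) = -Phi'(x), so  int d^mu * phi dmu = -int phi' dmu.
# Phi(x) = x^2/2 and the test function phi are illustrative choices.

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

rho = np.exp(-x ** 2 / 2)
rho /= rho.sum() * dx                 # normalized density of mu

phi = np.exp(-x ** 2) * np.sin(x)     # smooth, rapidly decaying phi
dphi = np.gradient(phi, dx)           # numerical derivative of phi

d_mu = -x                             # logarithmic derivative -Phi'(x)

lhs = (d_mu * phi * rho).sum() * dx   # int d^mu * phi dmu
rhs = -(dphi * rho).sum() * dx        # -int phi' dmu
```

The two sides agree up to discretization error, mirroring how (2.3) characterizes \( {\textsf {d}}^{\mu } \) through test functions.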

Let \( {\textsf {S}}_r^{m} = \{ {\textsf {s}} \in {\textsf {S}}\, ;\, {\textsf {s}} (S_{r}) = m \} \). Let \( \varLambda _r \) be the Poisson random point field whose intensity is the Lebesgue measure on \( S_{r}\) and set \( \varLambda _r^m = \varLambda _r (\cdot \cap {\textsf {S}}_r^{m} ) \). We set maps \( \pi _r, {\pi _r^{c}}\!:\!{\textsf {S}}\!\rightarrow \!{\textsf {S}}\) such that \( \pi _r({\textsf {s}}) = {\textsf {s}} (\cdot \cap S_{r})\) and \( {\pi _r^{c}}({\textsf {s}}) = {\textsf {s}} (\cdot \cap S_{r}^c)\).

Definition 2.2

A random point field \( \mu \) is called a \( ( \varPhi , \varPsi )\)-quasi Gibbs measure if its regular conditional probabilities

$$\begin{aligned} \mu _{r,\xi }^{m} = \mu (\, \pi _r\in \cdot \, | \, {\pi _r^{c}}({\textsf {x}}) = {\pi _r^{c}}({\xi }),\, {\textsf {x}} (S_{r}) = m ) \end{aligned}$$

satisfy, for all \(r,m\in {\mathbb {N}}\) and \( \mu \)-a.s. \( \xi \),

$$\begin{aligned} c_{1}^{-1} e^{-{\mathscr {H}}_r({\textsf {x}}) } \varLambda _r^{m} (d{\textsf {x}}) \le \mu _{r,\xi }^{m} (d{\textsf {x}}) \le c_{1} e^{-{\mathscr {H}}_r({\textsf {x}}) } \varLambda _r^{m} (d{\textsf {x}}). \end{aligned}$$
(2.4)

Here \( c_{1} = c_{1} (r,m,\xi )\) is a positive constant depending on \( r ,\, m ,\, \xi \). For two measures \( \mu , \nu \) on a \( \sigma \)-field \( {\mathscr {F}} \), we write \( \mu \le \nu \) if \( \mu (A) \le \nu (A) \) for all \( A \in {\mathscr {F}} \). Moreover, \( {\mathscr {H}}_r \) is the Hamiltonian on \( S_{r}\) defined by

$$\begin{aligned}&{\mathscr {H}}_r ({\textsf {x}}) = \sum _{x_i\in S_{r}} \varPhi (x_i) + \sum _{ j < k ,\, x_j, x_k \in S_{r}} \varPsi (x_j,x_k) \quad \text { for } {\textsf {x}} = \sum _i \delta _{x_i} . \end{aligned}$$
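Once a configuration is restricted to \( S_{r}\), the Hamiltonian \( {\mathscr {H}}_r \) is a finite sum and can be transcribed directly. Below is a minimal sketch for \( d = 1 \), assuming a configuration given as a finite list of points; the function names are illustrative, not from the paper.

```python
import math

def hamiltonian(points, r, Phi, Psi):
    """Energy H_r of the configuration restricted to S_r = {|x| <= r}:
    free potential over points in S_r plus the pair interaction over
    unordered pairs in S_r."""
    inside = [x for x in points if abs(x) <= r]
    h = sum(Phi(x) for x in inside)
    h += sum(Psi(inside[j], inside[k])
             for j in range(len(inside))
             for k in range(j + 1, len(inside)))
    return h

# Example: d = 1, Phi = 0, log-gas interaction Psi(x, y) = -log|x - y|.
# Only 0.5 and -0.3 lie in S_1, so H = -log(0.8).
H = hamiltonian([0.5, -0.3, 2.0, 1.1], r=1.0,
                Phi=lambda x: 0.0,
                Psi=lambda x, y: -math.log(abs(x - y)))
```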

Remark 2.1

  1. 1.

    From (2.4), we see that for all \(r,m\in {\mathbb {N}}\) and \( \mu \)-a.s. \( \xi \), the measures \( \mu _{r,\xi }^{m} (d{\textsf {x}}) \) have (unlabeled) Radon–Nikodym densities \( m_{r,\xi }^{m} ({\textsf {x}}) \) with respect to \( \varLambda _r^m \). This fact is important when we decompose quasi-Gibbs measures with respect to tail \( \sigma \)-fields in Lemma 14.2. Clearly, canonical Gibbs measures \( \mu \) with potentials \( (\varPhi , \varPsi )\) are quasi-Gibbs measures, and their densities \( m_{r,\xi }^{m} ({\textsf {x}}) \) with respect to \( \varLambda _r^{m} \) are given by the DLR equation. That is, for \( \mu \)-a.s. \( \xi = \sum _j \delta _{\xi _j} \),

    $$\begin{aligned}&m_{r,\xi }^{m} ({\textsf {x}}) = \frac{1}{{\mathscr {Z}}_{r,\xi }^{m} } \exp \left\{ - {\mathscr {H}}_r ({\textsf {x}}) - \sum _{ x_i \in S_{r},\, \xi _j \in S_{r}^c } \varPsi (x_i, \xi _j)\right\} . \end{aligned}$$

    Here \( {\mathscr {Z}}_{r,\xi }^{m} \) is the normalizing constant. For random point fields appearing in random matrix theory, the interaction potentials are logarithmic, and the DLR equations do not make sense as they stand.

  2. 2.

    If \( \mu \) is a \( ( \varPhi , \varPsi )\)-quasi Gibbs measure, then \( \mu \) is also a \( ( \varPhi + \varPhi _{\text {loc,bdd}} , \varPsi )\)-quasi Gibbs measure for any locally bounded measurable function \( \varPhi _{\text {loc,bdd}} \). In this sense, the notion of “quasi-Gibbs” is robust under perturbations of the free potential. In particular, both the sine\(_{\beta } \) and Airy\(_{\beta } \) random point fields are \( (0, - \beta \log |x-y| )\)-quasi Gibbs measures, where \( \beta = 1,2,4\) (see [28, 30]).

  3. 3.

    From (2.4) we see that \( \varPhi \) and \( \varPsi \) are essentially locally bounded from below with respect to the Lebesgue measure (i.e., locally bounded from below in the \( L^{\infty }\)-sense).

We collect notation we shall use throughout the paper: For a subset A of a topological space, we shall denote by \( W (A)=C([0,\infty );A)\) the set consisting of A-valued continuous paths on \( [0,\infty ) \).

We set \( \Vert w \Vert _{C([0,T];S)} = \sup _{t\in [0,T]} |w (t)|\). We then equip \( W (S^{{\mathbb {N}}})\) with the Fréchet metric \( \text {dist}(\cdot ,*) \) given by, for \( \mathbf{w }=(w_n)_{n\in {\mathbb {N}}} \) and \( \mathbf{w }'=(w_n')_{n\in {\mathbb {N}}} \),

$$\begin{aligned} \text {dist}(\mathbf{w },\mathbf{w }') = \sum _{T=1}^{\infty } 2^{-T}\left\{ \sum _{n=1}^{\infty } 2^{-n} \min \{ 1, \Vert w_n-w_n'\Vert _{C([0,T];S)} \} \right\} . \end{aligned}$$
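Numerically, this metric can only be evaluated after truncating both geometric series and sampling the paths on a grid. The following sketch does exactly that; the cutoff `T_max` and the grid are illustrative assumptions.

```python
import numpy as np

def frechet_dist(w, w2, t_grid, T_max=8):
    """Truncated evaluation of dist(w, w') from the text.

    w, w2 : arrays of shape (n_paths, len(t_grid)) holding sampled 1-d
    coordinates; array index n corresponds to path n+1 in the formula,
    so the weight 2^{-n} there becomes 2^{-(n+1)} here.
    """
    total = 0.0
    for T in range(1, T_max + 1):
        mask = t_grid <= T                      # restrict to [0, T]
        inner = sum(
            2.0 ** (-(n + 1))
            * min(1.0, float(np.max(np.abs(w[n, mask] - w2[n, mask]))))
            for n in range(w.shape[0])
        )
        total += 2.0 ** (-T) * inner
    return total
```

Because each term is capped at 1 and weighted geometrically, two path families are close in this metric as soon as their first few coordinates are uniformly close on bounded time intervals.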

We introduce unlabeling maps \( {\mathfrak {u}}_{[m]}\!:\!S^{m} \times {\textsf {S}}\!\rightarrow \!{\textsf {S}}\) (\( m \in {\mathbb {N}} \)) such that

$$\begin{aligned} {\mathfrak {u}}_{[m]}((\mathbf{x },{\textsf {s}})) = \sum _{i=1}^{m} \delta _{x_i} + {\textsf {s}} \quad \text { for } \mathbf{x } = (x_i) \in S^{m} , \quad {\textsf {s}} \in {\textsf {S}}. \end{aligned}$$
(2.5)

By the same symbol \( {\mathfrak {u}}_{[m]}\), we also denote the map \( {\mathfrak {u}}_{[m]}\!:\!S^{m}\!\rightarrow \!{\textsf {S}} \) such that \( {\mathfrak {u}}_{[m]}(\mathbf{x }) = \sum _{i} \delta _{x_i} \), where \( \mathbf{x } = (x_i)_{i=1}^m \) and \( m \in {\mathbb {N}} \). Let \( {\mathfrak {u}} \!:\!S^{{\mathbb {N}}}\!\rightarrow \!{\textsf {S}}\) such that

$$\begin{aligned}&{\mathfrak {u}} ((s_i)_{i=1}^{\infty }) = \sum _{i=1}^{\infty } \delta _{s_i}. \end{aligned}$$
(2.6)

We often write \( \mathbf{s }= (s_i)_{i=1}^{\infty }\) and \( {\textsf {s}} = \sum _{i=1}^{\infty } \delta _{s_i}\). Thus (2.6) implies \( {\mathfrak {u}} (\mathbf{s }) = {\textsf {s}}\). For \( \mathbf{w }= \{\mathbf{w }_t\} = \{ (\textit{w}_t^i)_{i\in {\mathbb {N}}} \}_{t\in [0,\infty )} \in W (S^{{\mathbb {N}}})\), we set \( {\mathfrak {u}} _{\text {path}}(\mathbf{w }) \), called the unlabeled path of \( \mathbf{w }\), by

$$\begin{aligned}&{\mathfrak {u}} _{\text {path}}(\mathbf{w })_t = {\mathfrak {u}} (\mathbf{w }_t) = \sum _{i} \delta _{\textit{w}_t^i}. \end{aligned}$$
(2.7)

We note that \( {\mathfrak {u}} _{\text {path}}(\mathbf{w }) \) is not necessarily an element of \( W ({\textsf {S}})\). See Remark 3.10 for an example.

Let \( {\textsf {S}}_{\text {s}}\) be the subset of \( {\textsf {S}}\) consisting of configurations with no multiple points. Let \( {\textsf {S}}_{\text {s.i.}}\) be the subset of \( {\textsf {S}}_{\text {s}}\) consisting of configurations with infinitely many points. Then by definition \( {\textsf {S}}_{\text {s}}\) and \( {\textsf {S}}_{\text {s.i.}}\) are given by

$$\begin{aligned}&{\textsf {S}}_{\text {s}}= \{ {\textsf {s}} \in {\textsf {S}}\, ;\, {\textsf {s}} ( \{ x \} ) \le 1 \text { for all } x \in S\}, \ {\textsf {S}}_{\text {s.i.}}= \{ {\textsf {s}} \in {\textsf {S}}_{\text {s}}\, ;\, {\textsf {s}} (S) = \infty \} . \end{aligned}$$
(2.8)

A Borel measurable map \( {\mathfrak {l}} \!:\!{\textsf {S}}_{\text {s.i.}}\!\rightarrow \!S^{{\mathbb {N}}}\) is called a label if \( {\mathfrak {u}} \circ {\mathfrak {l}} ({\textsf {s}}) = {\textsf {s}}\) for all \( {\textsf {s}} \in {\textsf {S}}_{\text {s.i.}}\).
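For finite configurations, the relation \( {\mathfrak {u}} \circ {\mathfrak {l}} = \mathrm {id} \) is easy to model. In the sketch below a configuration is represented as a sorted tuple of points and the label orders points by increasing modulus (as in Remark 3.1); both representation and label are illustrative choices.

```python
def unlabel(seq):
    """u: a labeled tuple of points -> the configuration.

    A configuration forgets the order, so we represent it canonically
    as a sorted tuple (a stand-in for the sum of Dirac masses)."""
    return tuple(sorted(seq))

def label(config):
    """l: a configuration -> points listed with |s_1| <= |s_2| <= ..."""
    return tuple(sorted(config, key=abs))

s = (-2.0, 0.5, 1.5, -0.3)
config = unlabel(s)           # the configuration u(s)
relabeled = label(config)     # points ordered by increasing modulus
# unlabel(relabeled) == config, i.e. u ∘ l is the identity on configurations
```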

Let \( W ({\textsf {S}}_{\text {s}})\) and \( W ({\textsf {S}}_{\text {s.i.}})\) be the sets consisting of \( {\textsf {S}}_{\text {s}}\)- and \( {\textsf {S}}_{\text {s.i.}}\)-valued continuous paths on \( [0,\infty )\). Each \( {\textsf {w}} \in W ({\textsf {S}}_{\text {s}})\) can be written as \( {\textsf {w}}_t = \sum _i \delta _{w_t^i}\), where \( w^i \) is an \( S\)-valued continuous path defined on an interval \( I_i \) of the form \( [0,b_i)\) or \( (a_i,b_i)\), where \( 0 \le a_i < b_i \le \infty \). Taking maximal intervals of this form, we can choose \( [0,b_i)\) and \( (a_i,b_i)\) uniquely up to labeling. We remark that, for each i, \( \lim _{t\downarrow a_i} |w_t^i| =\infty \) when \( I_i = (a_i,b_i)\), and \( \lim _{t\uparrow b_i} |w_t^i| =\infty \) when \( b_i < \infty \). We call \( w^i\) a tagged path of \( {\textsf {w}}\) and \( I_i \) the defining interval of \( w^i \). We set

$$\begin{aligned}&W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})= \{ {\textsf {w}} \in W ({\textsf {S}}_{\text {s.i.}})\, ;\, I_i = [0,\infty ) \hbox { for all}\ i \} . \end{aligned}$$
(2.9)

We say a tagged path \( w^i \) of \( {\textsf {w}}\) does not explode if \( b_i = \infty \), and does not enter if \( I_i = [0,b_i)\), where \( b_i \) is the right end of the interval on which \( w^i \) is defined. Thus \( W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) is the set consisting of non-exploding and non-entering paths. We can then naturally lift each unlabeled path \( {\textsf {w}}\in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) to the labeled path \( \mathbf{w }=(\textit{w}^i )_{i\in \mathbb {N}} \in W (S^{{\mathbb {N}}})\) using a label \( {\mathfrak {l}} = ({\mathfrak {l}} _i)_{i\in \mathbb {N}}\) such that \( \mathbf{w }_0 = {\mathfrak {l}} ({\textsf {w}}_0)\). Indeed, we can do this because each tagged particle carries its initial label i forever. We denote this correspondence by \( {\mathfrak {l}} _{\text {path}}({\textsf {w}}) =({\mathfrak {l}} _{\text {path}}^i({\textsf {w}}))_{i\in \mathbb {N}}\) and set \( \mathbf{w }\) as

$$\begin{aligned}&\mathbf{w }= {\mathfrak {l}} _{\text {path}}({\textsf {w}}) \text { with } \mathbf{w }_0 = {\mathfrak {l}} ({\textsf {w}}_0). \end{aligned}$$
(2.10)

Then \( \textit{w}^i ={\mathfrak {l}} _{\text {path}}^i({\textsf {w}}) \) by construction. We set

$$\begin{aligned}&{\textsf {w}}^{m*} = \sum _{i>m}\delta _{\textit{w}^i} , \end{aligned}$$

where \( \sum _{i>m}\delta _{\textit{w}^i}= \{ \sum _{i>m}\delta _{\textit{w}_t^i} \}_{t\in [0,\infty )} \). For an unlabeled path \( {\textsf {w}}\), we call the path

$$\begin{aligned}&\mathbf{w }^{[m]} = \left( {\mathfrak {l}} _{\text {path}}^1({\textsf {w}}),\ldots ,{\mathfrak {l}} _{\text {path}}^m({\textsf {w}}), \sum _{i>m}\delta _{\textit{w}^i }\right) \end{aligned}$$
(2.11)

the m-labeled path. Similarly, for a labeled path \( \mathbf{w }=(\textit{w}^i ) \in W (S^{{\mathbb {N}}})\) we set \( \mathbf{w }^{[m]} \) by

$$\begin{aligned}&\mathbf{w }^{[m]} = \left( w^1,\ldots ,w^m, \sum _{i>m}\delta _{w^i}\right) . \end{aligned}$$
(2.12)

Remark 2.2

\( {\mathfrak {u}} _{\text {path}}(\mathbf{w })_t = {\mathfrak {u}} (\textit{w}_t)\) by (2.7), whereas \( {\mathfrak {l}} _{\text {path}}({\textsf {w}})_t \not = {\mathfrak {l}} ({\textsf {w}}_t)\) in general.

The main general theorems: Theorems 3.1–3.2

ISDE

Let \( \mathbf{X }= (X^i)_{i\in {\mathbb {N}}}\) be an \( S^{{\mathbb {N}}}\)-valued continuous process. We write \( \mathbf{X } = \{ \mathbf{X }_t \}_{t\in [0,\infty )} \) and \( X^i = \{ X_t^i \}_{t\in [0,\infty )}\). For \( \mathbf{X }\) and \( i\in {\mathbb {N}}\), we define the unlabeled processes \( {\textsf {X}}=\{ {\textsf {X}}_t \}_{t\in [0,\infty )} \) and \( {\textsf {X}}^{i\diamondsuit } = \{ {\textsf {X}}_t^{i\diamondsuit } \}_{t\in [0,\infty )} \) as \( {\textsf {X}}_t = \sum _{i\in {\mathbb {N}}} \delta _{X_t^i} \) and \( {\textsf {X}}_t^{i\diamondsuit } = \sum _{j\in {\mathbb {N}} ,\ j\not =i } \delta _{X_t^j} \).

Let \( {\textsf {H}}\) and \( {\textsf {S}}_{\text {sde}}\) be Borel subsets of \( {\textsf {S}}\) such that

$$\begin{aligned}&{\textsf {H}} \subset {\textsf {S}}_{\text {sde}}\subset {\textsf {S}}_{\text {s.i.}}\bigcap \{ {\textsf {s}}\,;\, {\textsf {s}} (\partial S) = 0 \} . \end{aligned}$$
(3.1)

Let \( {\mathfrak {u}}_{[1]}\) be as in (2.5). Define \( \mathbf{S }_{\text {sde}}\subset S^{{\mathbb {N}}}\) and \( {\textsf {S}}_{\text {sde}}^{[1]}\subset S\times {\textsf {S}}\) by

$$\begin{aligned}&\mathbf{S }_{\text {sde}}= {\mathfrak {u}} ^{-1} ({\textsf {S}}_{\text {sde}}) ,\quad {\textsf {S}}_{\text {sde}}^{[1]}= {\mathfrak {u}}_{[1]}^{-1} ({\textsf {S}}_{\text {sde}}) . \end{aligned}$$
(3.2)

Let \( \sigma \!:\!{\textsf {S}}_{\text {sde}}^{[1]}\!\rightarrow \!{\mathbb {R}}^{d^2}\) and \( \textit{b}\!:\!{\textsf {S}}_{\text {sde}}^{[1]}\!\rightarrow \!{\mathbb {R}}^d\) be Borel measurable functions, where d is the dimension of the Euclidean space \( {\mathbb {R}}^d\) that includes \( S\). In infinite dimensions, it is natural to consider coefficients \( \sigma \) and \( \textit{b}\) defined only on a suitable subset \( {\textsf {S}}_{\text {sde}}^{[1]}\) of \( S\times {\textsf {S}}\). Let \( {\mathfrak {l}} \!:\!{\textsf {S}}_{\text {s.i.}}\!\rightarrow \!S^{{\mathbb {N}}}\) be the label introduced in Sect. 2. We consider an ISDE of \( \mathbf{X } =(X^i)_{i\in {\mathbb {N}}}\) starting from \( {\mathfrak {l}} ({\textsf {H}}) \) with state space \( \mathbf{S }_{\text {sde}}\) such that

$$\begin{aligned}&dX_t^i = \sigma (X_t^i,{\textsf {X}}_t^{i\diamondsuit }) dB_t^i + \textit{b}(X_t^i,{\textsf {X}}_t^{i\diamondsuit }) dt \quad (i\in {\mathbb {N}}), \end{aligned}$$
(3.3)
$$\begin{aligned}&\mathbf{X } \in W (\mathbf{S }_{\text {sde}}) , \end{aligned}$$
(3.4)
$$\begin{aligned}&\mathbf{X }_0 \in {\mathfrak {l}} ({\textsf {H}}) . \end{aligned}$$
(3.5)

Here \( \mathbf{B }=(B^i)_{i\in \mathbb {N}}\) is an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued Brownian motion; that is, \( \{B^i\}_{i\in \mathbb {N}}\) are independent copies of a d-dimensional Brownian motion starting at the origin.

Remark 3.1

Note that \( {\mathfrak {l}} ({\textsf {H}}) \subset {\mathfrak {u}} ^{-1} ({\textsf {H}}) \) and that \( {\mathfrak {u}} ^{-1} ({\textsf {H}}) \) is much larger than \( {\mathfrak {l}} ({\textsf {H}}) \). For example, if we take \( {\mathfrak {l}} ({\textsf {s}}) = (s_i)\) as \( |s_i| \le |s_{i+1}|\) for all \( i \in \mathbb {N}\), then \( \mathbf{X }_t \) will soon exit from \( {\mathfrak {l}} ({\textsf {H}}) \). This is why we take \( \mathbf{S }_{\text {sde}}= {\mathfrak {u}} ^{-1}({\textsf {S}}_{\text {sde}})\) in (3.2) rather than \( \mathbf{S }_{\text {sde}}= {\mathfrak {l}} ({\textsf {S}}_{\text {sde}})\).

From (3.4) the process \( \mathbf{X }\) moves in the set \( \mathbf{S }_{\text {sde}}\), where the coefficients \( \sigma \) and \( \textit{b}\) are well defined. Moreover, each tagged particle \( X^i \) of \(\mathbf{X }= (X^i )_{i\in \mathbb {N}}\) never explodes. By (3.4), \( {\textsf {X}}_t \in {\textsf {S}}_{\text {sde}}\) for all \( t \ge 0\); in particular, the initial starting point \( \mathbf{s }\) in (3.5) is supposed to satisfy \( \mathbf{s } \in {\mathfrak {l}} ({\textsf {H}}) \subset \mathbf{S }_{\text {sde}}\), which implies \( {\mathfrak {u}} (\mathbf{s }) \in {\textsf {H}} \subset {\textsf {S}}_{\text {sde}}\).

By (3.1), \( {\textsf {H}}\) is a subset of \( {\textsf {S}}_{\text {sde}}\). We shall take \( {\textsf {H}}\) in such a way that (3.3)–(3.5) has a solution for each \( \mathbf{s } \in {\mathfrak {l}} ({\textsf {H}}) \). Detecting a sufficiently large subset \( {\textsf {H}} \) satisfying this is an important step in solving the ISDE.

What it means for \( {\textsf {H}}\) to be large is, however, unclear at this stage because there is no natural measure on the infinite product space \( S^{{\mathbb {N}}}\). In practice, we equip \( {\textsf {S}}\) with a random point field \( \mu \) such that \( \mu ({\textsf {H}}) = \mu ({\textsf {S}}_{\text {sde}})= \mu ({\textsf {S}})=1\). We thus realize \( {\textsf {H}}\) as a support of \( \mu \). We shall later assume (A1) in Sect. 9 to relate \( \mu \) to (3.3) in such a way that the unlabeled dynamics \( {\textsf {X}}\) of the solution \( \mathbf{X }\) is \( \mu \)-reversible. In this sense the random point field \( \mu \) is associated with ISDE (3.3).

We remark that we can extend \( {\mathfrak {l}} ({\textsf {H}}) \) in (3.5) to \( {\mathfrak {u}} ^{-1} ({\textsf {H}}) \) by retaking the label \( {\mathfrak {l}} \). Because the coefficients of ISDE (3.3) are symmetric in the particles, this causes no problems.

Following [9, Chapter IV], which treats the finite-dimensional case, we present a set of notions related to solutions of ISDEs. In Definitions 3.1–3.8, \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) is a general probability space.

Definition 3.1

(weak solution) By a weak solution of ISDE (3.3)–(3.4), we mean an \( S^{{\mathbb {N}}} \times {\mathbb {R}}^{d\mathbb {N}}\)-valued stochastic process \( (\mathbf{X },\mathbf{B })\) defined on a probability space \( (\varOmega , {\mathscr {F}}, P )\) with a reference family \( \{ {\mathscr {F}}_t \}_{t \ge 0 } \) such that

  1. (i)

    \( \mathbf{X }=(X^i)_{i=1}^{\infty } \) is an \( \mathbf{S }_{\text {sde}}\)-valued continuous process. Furthermore, \( \mathbf{X }\) is adapted to \( \{ {\mathscr {F}}_t \}_{t \ge 0 }\), that is, \( \mathbf{X }_t \) is \( {\mathscr {F}}_t /{\mathscr {B}}_t \)-measurable for each \( 0 \le t < \infty \), where

    $$\begin{aligned} {\mathscr {B}}_t = \sigma [ \mathbf{w }_s ; 0\le s \le t ,\, \mathbf{w }\in W (S^{{\mathbb {N}}})]. \end{aligned}$$
    (3.6)
  2. (ii)

    \( \mathbf{B } = (B^i)_{i=1}^{\infty }\) is an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued \(\{ {\mathscr {F}}_t \}\)-Brownian motion with \( \mathbf{B }_0 = \mathbf{0 }\),

  3. (iii)

    the family of measurable \( \{ {\mathscr {F}}_t \}_{t \ge 0 } \)-adapted processes \( \varPhi \) and \( \varPsi \) defined by

    $$\begin{aligned}&\varPhi ^i(t,\omega ) = \sigma (X_t^i(\omega ),{\textsf {X}}_t^{i\diamondsuit }(\omega )) ,\quad \varPsi ^i(t,\omega ) = \textit{b}(X_t^i(\omega ),{\textsf {X}}_t^{i\diamondsuit }(\omega )) \end{aligned}$$

    belong to \( {\mathscr {L}}^{2} \) and \( {\mathscr {L}}^1 \), respectively. Here \( {\mathscr {L}}^{p} \) is the set of all measurable \( \{ {\mathscr {F}}_t \}_{t \ge 0 } \)-adapted processes \( \alpha \) such that \( E[ \int _0^T|\alpha (t,\omega )|^p dt ] < \infty \) for all T. Here we can and do take a predictable version of \( \varPhi ^i \) and \( \varPsi ^i\) (see pp 45-46 in [9]).

  4. (iv)

    with probability one, the process \( (\mathbf{X },\mathbf{B })\) satisfies for all t

    $$\begin{aligned}&X_t^i - X_0^i = \int _0^t \sigma (X_u^i,{\textsf {X}}_u^{i\diamondsuit }) dB_u^i + \int _0^t \textit{b}(X_u^i,{\textsf {X}}_u^{i\diamondsuit }) du \quad (i\in {\mathbb {N}}). \end{aligned}$$

Definition 3.2

(weak solution on \( \mathbf{A }\)) We say the ISDE (3.3)–(3.4) has a weak solution on a Borel set \( \mathbf{A } \) if it has a weak solution for every initial distribution \( \nu \) such that \( \nu (\mathbf{A }) = 1 \).

We say \( \mathbf{X }\) is a weak solution when the accompanying Brownian motion \( \mathbf{B }\) is clear from context or unimportant. A solution \(\mathbf{X }\) starting at \( \mathbf{x }\) means \( \mathbf{X }\) is a solution such that \( \mathbf{X }_0=\mathbf{x }\) a.s.

Remark 3.2

In [9, Chap. IV], the state space and the set of the initial starting points of SDEs coincide and are taken to be \( {\mathbb {R}}^d\). In the present paper, the set of the initial starting points is \( {\mathfrak {l}} ({\textsf {H}}) \), which is a subset of \( \mathbf{S }_{\text {sde}}\). For this reason we introduced the notion of a “weak solution on \( \mathbf{A }\)” in Definition 3.2.

Definition 3.3

(uniqueness in law) We say that the uniqueness in law of weak solutions on \( {\mathfrak {l}} ({\textsf {H}}) \) for (3.3)–(3.4) holds if whenever \( \mathbf{X }\) and \( \mathbf{X }'\) are two weak solutions whose initial distributions coincide, then the laws of the processes \( \mathbf{X }\) and \( \mathbf{X }'\) on the space \( W (S^{{\mathbb {N}}})\) coincide. If this uniqueness holds for an initial distribution \( \delta _{\mathbf{s }}\), then we say the uniqueness in law of weak solutions for (3.3)–(3.4) starting at \( \mathbf{s }\) holds.

Remark 3.3

For each \( \mathbf{s } \in {\mathfrak {l}} ({\textsf {H}}) \) take \( \delta _{\mathbf{s }}\) as the initial law of the ISDE (3.3)–(3.4). Then the uniqueness in Definition 3.3 is equivalent to the uniqueness of the law of weak solutions starting at each \( \mathbf{s } \in {\mathfrak {l}} ({\textsf {H}}) \). We refer to Remark 1.2 in [9, 162 p] for the corresponding result for finite-dimensional SDEs.

Definition 3.4

(pathwise uniqueness) We say that the pathwise uniqueness of weak solutions of (3.3)–(3.4) on \( {\mathfrak {l}} ({\textsf {H}}) \) holds if whenever \( \mathbf{X }\) and \( \mathbf{X }'\) are two weak solutions defined on the same probability space \( (\varOmega , {\mathscr {F}} ,P )\) with the same reference family \( \{ {\mathscr {F}}_t \}_{t \ge 0 }\) and the same \( {\mathbb {R}}^{d\mathbb {N}}\)-valued \(\{ {\mathscr {F}}_t \}\)-Brownian motion \( \mathbf{B } \) such that \( \mathbf{X }_0=\mathbf{X }_0' \in {\mathfrak {l}} ({\textsf {H}}) \) a.s., then

$$\begin{aligned}&P (\mathbf{X }_t=\mathbf{X }_t'\text { for all } t \ge 0 ) = 1 .\end{aligned}$$

Definition 3.5

(pathwise uniqueness of weak solutions starting at \( \mathbf{s }\)) We say that the pathwise uniqueness of weak solutions of (3.3)–(3.4) starting at \( \mathbf{s }\) holds if the condition of Definition 3.4 holds for \( \mathbf{X }_0=\mathbf{X }_0' = \mathbf{s } \) a.s.

We now state the definition of strong solution, which is analogous to Definition 1.6 in [9, 163 p]. Let \( P_{\text {Br}}^{\infty }\) be the distribution of an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued Brownian motion \( \mathbf{B } \) with \( \mathbf{B }_0 = \mathbf{0 }\). Let \( W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})= \{ \mathbf{w }\in W ({\mathbb {R}}^{d\mathbb {N}})\, ; \mathbf{w }_0 =\mathbf{0 } \}\). Clearly, \( P_{\text {Br}}^{\infty }(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))= 1 \).

Let \( {\mathscr {B}}_t \) be as (3.6). Let \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty })\) be the completion of \( \sigma [ \mathbf{w }_s ; 0\le s \le t ,\, \mathbf{w }\in W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})] \) with respect to \( P_{\text {Br}}^{\infty }\). Let \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) \) be the completion of \( {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \) with respect to \( P_{\text {Br}}^{\infty }\).

Definition 3.6

(a strong solution starting at \( \mathbf{s }\)) A weak solution \( \mathbf{X }\) of (3.3)–(3.4) with an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued \( \{ {\mathscr {F}}_t \}\)-Brownian motion \(\mathbf{B }\) is called a strong solution starting at \( \mathbf{s }\) defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) if \( \mathbf{X }_0=\mathbf{s }\) a.s., if there exists a function \( {F}_{\mathbf{s }}\!:\!W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\!\rightarrow \!W (S^{{\mathbb {N}}})\) such that \( {F}_{\mathbf{s }}\) is \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) /{\mathscr {B}}(W (S^{{\mathbb {N}}}))\)-measurable and \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty }) /{\mathscr {B}}_t \)-measurable for each t, and if \( {F}_{\mathbf{s }}\) satisfies

$$\begin{aligned}&\mathbf{X } = {F}_{\mathbf{s }}(\mathbf{B }) \quad \text { a.s.} \end{aligned}$$

Also we call \( {F}_{\mathbf{s }}\) itself a strong solution starting at \( \mathbf{s }\).

Definition 3.7

(a unique strong solution starting at \( \mathbf{s }\)) We say (3.3)–(3.4) has a unique strong solution starting at \( \mathbf{s }\) if there exists a \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) /{\mathscr {B}}(W (S^{{\mathbb {N}}}))\)-measurable function \( {F}_{\mathbf{s }}\!:\!W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\!\rightarrow \!W (S^{{\mathbb {N}}})\) such that, for any weak solution \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) of (3.3)–(3.4) starting at \( \mathbf{s }\), it holds that

$$\begin{aligned}&\hat{\mathbf{X }}={F}_{\mathbf{s }}(\hat{\mathbf{B }}) \quad \text { a.s.} \end{aligned}$$

and if, for any \( {\mathbb {R}}^{d\mathbb {N}}\)-valued \( \{{\mathscr {F}}_t\} \)-Brownian motion \( \mathbf{B } \) defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) with \( \mathbf{B }_0=\mathbf{0 }\), the continuous process \( {F}_{\mathbf{s }}(\mathbf{B })\) is a strong solution of (3.3)–(3.4) starting at \( \mathbf{s }\). Also we call \( {F}_{\mathbf{s }}\) a unique strong solution starting at \( \mathbf{s }\).

We next present a variant of the notion of a unique strong solution.

Definition 3.8

(a unique strong solution under constraint) For a condition \((\bullet )\), we say (3.3)–(3.4) has a unique strong solution starting at \( \mathbf{s }\) under the constraint \( (\bullet )\) if there exists a \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) /{\mathscr {B}}(W (S^{{\mathbb {N}}}))\)-measurable function \( {F}_{\mathbf{s }}\!:\!W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\!\rightarrow \!W (S^{{\mathbb {N}}})\) such that for any weak solution \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) of (3.3)–(3.4) starting at \( \mathbf{s }\) satisfying \( (\bullet )\), it holds that

$$\begin{aligned}&\hat{\mathbf{X }}={F}_{\mathbf{s }}(\hat{\mathbf{B }}) \quad \text { a.s.} \end{aligned}$$
(3.7)

and if for any \( {\mathbb {R}}^{d\mathbb {N}}\)-valued \( \{{\mathscr {F}}_t\} \)-Brownian motion \( \mathbf{B }\) defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) with \( \mathbf{B }_0=\mathbf{0 }\) the continuous process \( {F}_{\mathbf{s }}(\mathbf{B })\) is a strong solution of (3.3)–(3.4) starting at \( \mathbf{s }\) satisfying \( (\bullet )\). Also we call \( {F}_{\mathbf{s }}\) a unique strong solution starting at \( \mathbf{s }\) under the constraint \( (\bullet )\).

Remark 3.4

  1. 1.

    The meaning of strong solutions is similar to the conventional situation in [9, pp 159–167]. The difference is that we consider solutions starting at a point \( \mathbf{s }\). In [9], initial distributions are taken over all probability measures on the state space.

  2. 2.

    Similarly to Definition 3.8, we can introduce constrained versions of the uniqueness notions in Definitions 3.3–3.5.

Main Theorem I (Theorem 3.1): \(\mu \) with trivial tail

Let \( (\mathbf{X },\mathbf{B })\) be an \( S^{{\mathbb {N}}} \times {\mathbb {R}}^{d\mathbb {N}}\)-valued continuous process defined on a filtered space \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\). We assume that \( (\varOmega ,{\mathscr {F}}, P )\) is a standard probability space. Then the regular conditional probability

$$\begin{aligned} P_{\mathbf{s }}= P ( \cdot | \mathbf{X }_0=\mathbf{s }) \end{aligned}$$

exists for \( P \circ \mathbf{X }_0^{-1}\)-a.s. \( \mathbf{s }\). We investigate \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\), and thus regard \( (\mathbf{X },\mathbf{B })\) as a stochastic process defined on the filtered space \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \).

For \( \mathbf{X } = (X^i)_{i\in \mathbb {N}} \) we set \({\textsf {X}}_t^{m*} = \sum _{i=m+1}^{\infty } \delta _{X_t^i }\) as before. Define

$$\begin{aligned} \sigma {}_{{\textsf {X}}}^m\!:\![0,\infty ) \times S^{m} \!\rightarrow \!{\mathbb {R}}^{d^2}\quad \text { and }\quad \textit{b}{}_{{\textsf {X}}}^m\!:\![0,\infty ) \times S^{m} \!\rightarrow \!{\mathbb {R}}^d \end{aligned}$$

such that, for \( (u,\mathbf{v }) \in S^m\) and \( {\textsf {v}} = \sum _{i=1}^{m-1} \delta _{v_i} \in {\textsf {S}}\), where \(\mathbf{v }=(v_1,\ldots ,v_{m-1}) \in S^{m-1} \),

$$\begin{aligned}&\sigma {}_{{\textsf {X}}}^m( t, (u, \mathbf{v })) = {\sigma } (u , {\textsf {v}} + {\textsf {X}}_t^{m*}) ,\quad \textit{b}{}_{{\textsf {X}}}^m( t, (u, \mathbf{v })) = {\textit{b}} (u , {\textsf {v}} + {\textsf {X}}_t^{m*}) . \end{aligned}$$
(3.8)

We write \( {\mathfrak {l}} ({\textsf {s}}) =(s_i)_{i\in \mathbb {N}}= \mathbf{s } \) and \( {\textsf {s}}_m^* = \sum _{i=m+1}^{\infty } \delta _{s_i}\). Recall \( \mathbf{X }_0 = {\mathfrak {l}} ({\textsf {s}}) \). We then have \( {\textsf {X}}_0^{m*} = {\textsf {s}}_m^* \) by construction. We remark that the coefficients \( \sigma {}_{{\textsf {X}}}^m\) and \( \textit{b}{}_{{\textsf {X}}}^m\) depend on both unlabeled path \( {\textsf {X}}^{m*} \) and the label \( {\mathfrak {l}} \), although we suppress \( {\mathfrak {l}} \) from the notation. Let

$$\begin{aligned}&\mathbf{S }_{\text {sde}}^m(t,{\textsf {w}})= \{ \mathbf{s }^m = (s_1,\ldots ,s_m) \in S^m \,;\, {\mathfrak {u}} (\mathbf{s }^m) + {\textsf {w}}_t^{m*} \in {\textsf {S}}_{\text {sde}}\} , \end{aligned}$$
(3.9)

where \( {\textsf {w}}_t^{m*}=\sum _{i=m+1}^{\infty }\delta _{w_t^i}\) for \( {\textsf {w}}_t = \sum _{i=1}^{\infty } \delta _{w_t^i}\). By definition, \( \mathbf{S }_{\text {sde}}^m(t,{\textsf {w}})\) is a subset of \( S^m \) depending on \( {\textsf {w}}_t^{m*}\). In particular, \( \mathbf{S }_{\text {sde}}^m(t,{\textsf {w}})\) is a time-dependent domain. Let

$$\begin{aligned}&\mathbf{Y }^m=(Y^{m,i})_{i=1}^m ,\quad \mathbf{Y }^{m,i\diamondsuit } = (Y^{m,j})_{j\not =i}^m , \quad {\textsf {Y}}_t^{m,i\diamondsuit }=\sum _{j\not =i}^m \delta _{Y_t^{m,j}}. \end{aligned}$$

We introduce the SDE with random environment \( {\textsf {X}}\) defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) describing \( \mathbf{Y }^m \) given by

$$\begin{aligned}&dY_t^{m,i} = \sigma {}_{{\textsf {X}}}^m(t, (Y_t^{m,i},\mathbf{Y }_t^{m,i\diamondsuit })) dB_t^i + \textit{b}{}_{{\textsf {X}}}^m(t, (Y_t^{m,i},\mathbf{Y }_t^{m,i\diamondsuit })) dt , \end{aligned}$$
(3.10)
$$\begin{aligned}&\mathbf{Y }_t^m \in \mathbf{S }_{\text {sde}}^m(t,{\textsf {X}})\quad \text { for all } t, \end{aligned}$$
(3.11)
$$\begin{aligned}&\mathbf{Y }_0^{m} = \mathbf{s }^m , \quad \text {where } \mathbf{s }^m=(s_1,\ldots ,s_m) \text { for } \mathbf{s }=(s_i)\in S^{{\mathbb {N}}}. \end{aligned}$$
(3.12)

A triplet \( (\mathbf{Y }^m,\mathbf{B }^m ,\mathbf{X }^{m*} )\) of \( \{ {\mathscr {F}}_t \} \)-adapted, continuous processes on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) satisfying (3.10)–(3.12) is called a weak solution.
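As a computational illustration only, the finite system (3.10)–(3.12) with a frozen environment path can be discretized by an Euler–Maruyama scheme. The drift \( b \), the environment path, and all parameters below are toy assumptions, not taken from the paper; \( \sigma \) is taken to be the identity.

```python
import numpy as np

# Euler-Maruyama sketch of the m-particle SDE (3.10) with a frozen
# environment path playing the role of X^{m*}. Toy drift: confinement
# plus a smooth (non-singular) repulsion from the other points.
rng = np.random.default_rng(0)

def b(u, others):
    return -u + 0.1 * sum((u - y) / (1.0 + (u - y) ** 2) for y in others)

m, steps, dt = 2, 1000, 1e-3
env = lambda t: [np.cos(t), np.sin(t)]     # frozen environment particles
Y = np.array([0.5, -0.5])                  # initial condition Y_0^m = s^m
for k in range(steps):
    t = k * dt
    dB = rng.normal(0.0, np.sqrt(dt), size=m)   # Brownian increments
    drift = np.array([
        b(Y[i], [Y[j] for j in range(m) if j != i] + env(t))
        for i in range(m)
    ])
    Y = Y + drift * dt + dB
```

The key structural point mirrored here is that the environment enters the coefficients as a fixed path, exactly as \( {\textsf {X}}^{m*} \) enters \( \sigma {}_{{\textsf {X}}}^m \) and \( \textit{b}{}_{{\textsf {X}}}^m \) in (3.8), while only \( \mathbf{Y }^m \) is driven by the Brownian increments.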

Remark 3.5

  1. 1.

    Equation (3.10) makes sense because \( \mathbf{X }^{m*}\), \( \mathbf{B }^m \), and \(\mathbf{Y }^m\) are all defined on the same filtered space \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \). We remark that (3.10) depends on \( \mathbf{X }^{m*} \), and that \( \mathbf{X }^{m*} \) is regarded as a part of the coefficient of (3.10). We emphasize \( (\mathbf{X },\mathbf{B })\) is a priori given in SDE (3.10). We consider SDE (3.10) only for \( \mathbf{B }^m\), but not for an arbitrary \( \{ {\mathscr {F}}_t \}\)-Brownian motion \( \hat{\mathbf{B }}^m \).

  2. 2.

    The SDE (3.10) is not of a conventional type because its coefficients depend on \( \mathbf{X }\). We can regard \( \mathbf{X }\) as a random environment, and call this an SDE of random environment type. SDEs of random environment type have appeared in homogenization problems (see [22, 34] for example). In that setting, the random environment and the Brownian motion in the SDE are usually independent of each other; this is not the case in the present situation. If \( (\hat{\mathbf{B }}^m,\hat{{\textsf {X}}}^{m*})\) is equivalent in law to \( (\mathbf{B }^m,{\textsf {X}}^{m*}) \), then we can replace \( (\mathbf{B }^m,{\textsf {X}}^{m*}) \) by \( (\hat{\mathbf{B }}^m,\hat{{\textsf {X}}}^{m*})\) in (3.10). The new SDE is equivalent to (3.10) in the sense that the former has a weak solution if and only if the latter has one. We emphasize that \( \mathbf{B }^m \) and \( \mathbf{X }^{m*}\) are \( \{ {\mathscr {F}}_t\} \)-adapted and can depend on each other.

  3. 3.

    The triplet \( (\mathbf{X }^m,\mathbf{B }^m,\mathbf{X }^{m*})\) made of the original weak solution \( (\mathbf{X },\mathbf{B })\) of the ISDE (3.3)–(3.5) is a weak solution of (3.10)–(3.12). This yields the crucial identity (1.11).

We define the notion of strong solutions and a unique strong solution of (3.10)–(3.12). Let \( {\widetilde{P}}^m \) be the distribution of \( (\mathbf{B }^m,\mathbf{X }^{m*}) \) under \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \):

$$\begin{aligned}&{\widetilde{P}}^m = P_{\mathbf{s }}\circ (\mathbf{B }^m,\mathbf{X }^{m*})^{-1} . \end{aligned}$$

Let \( W _{\mathbf{0 }} ({\mathbb {R}}^{dm})= \{ \mathbf{w } \in W ({\mathbb {R}}^{dm})\, ; \mathbf{w }_0 =\mathbf{0 } \}\) as before and set

$$\begin{aligned}&{\mathscr {C}}^{m}= \overline{{\mathscr {B}} (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\mathbb {R}}^{d\mathbb {N}})) }^{{\widetilde{P}}^m } ,\\&{\mathscr {C}}^{m}_t= \overline{{\mathscr {B}}_t (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\mathbb {R}}^{d\mathbb {N}})) }^{{\widetilde{P}}^m } . \end{aligned}$$

Here \( {\mathscr {B}}_t (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\mathbb {R}}^{d\mathbb {N}}))= \sigma [(\mathbf{v }_s,\mathbf{w }_s) ; 0\le s \le t ,\, (\mathbf{v },\mathbf{w })\in W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\mathbb {R}}^{d\mathbb {N}})] \). Let \( {\mathscr {B}}_t^m = \sigma [\mathbf{w }_s; 0\le s \le t ,\, \mathbf{w } \in W ({\mathbb {R}}^{dm})]\). We state the definition of strong solution.

Definition 3.9

(strong solution for \( (\mathbf{X },\mathbf{B })\) starting at \( \mathbf{s }^m \)) \(\mathbf{Y }^{m}\) is called a strong solution of (3.10)–(3.12) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) if \( (\mathbf{Y }^{m},\mathbf{B }^m,\mathbf{X }^{m*})\) satisfies (3.10)–(3.12) and there exists a \( {\mathscr {C}}^{m}\)-measurable function

$$\begin{aligned}&F_{\mathbf{s }}^m\!:\!W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\mathbb {R}}^{d\mathbb {N}})\!\rightarrow \!W ({\mathbb {R}}^{dm}) \end{aligned}$$

such that \( F_{\mathbf{s }}^m\) is \( {\mathscr {C}}^{m}_t/{\mathscr {B}}_t^m \)-measurable for each t, and \( F_{\mathbf{s }}^m\) satisfies

$$\begin{aligned}&\mathbf{Y }^m = F_{\mathbf{s }}^m(\mathbf{B }^m,\mathbf{X }^{m*}) \quad P_{\mathbf{s }}\text {-a.s.} \end{aligned}$$

Remark 3.6

Our definition of a strong solution differs from that of Definition 1.6 in [9, 163 p] in the following points. We consider solutions starting at a point \( \mathbf{s }^m \) only. The main difference is that both the \( \{ {\mathscr {F}}_t \}\)-Brownian motion \( \mathbf{B }\) and the process \( \mathbf{X }^{m*}\) are a priori given and fixed. Hence the solution \( \mathbf{Y }^m \) is a function not only of \( \mathbf{B }\) but also of \( \mathbf{X }^{m*}\). This means that, if we put an arbitrary \( \{ {\mathscr {F}}_t \}\)-Brownian motion \( \mathbf{B }'\) into \( F_{\mathbf{s }}^m\) as \( F_{\mathbf{s }}^m(\mathbf{B }',\mathbf{X }^{m*})\), then \( F_{\mathbf{s }}^m(\mathbf{B }',\mathbf{X }^{m*})\) is not necessarily a solution. We call \( \mathbf{X }^{m*}\) an environment process. We note that there is no environment process in the framework of Definition 1.6 in [9, 163 p]. We shall take the limit \( m \rightarrow \infty \) and prove that the effect of \( \mathbf{X }^{m*}\) vanishes in the limit. As a result, the limit ISDE becomes conventional. The vanishing of the effect of \( \mathbf{X }^{m*}\) as \( m \rightarrow \infty \) is a key to our argument. We will establish this in the second main theorem (Theorem 5.1).

Definition 3.10

(a unique strong solution for \( (\mathbf{X },\mathbf{B })\) starting at \( \mathbf{s }^m \)) The SDE (3.10)–(3.12) is said to have a unique strong solution for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) if there exists a function \( F_{\mathbf{s }}^m\) satisfying the conditions in Definition 3.9 and, for any weak solution \( (\hat{\mathbf{Y }}^m ,\mathbf{B }^m,\mathbf{X }^{m*})\) of (3.10)–(3.12) under \( P_{\mathbf{s }}\),

$$\begin{aligned}&\hat{\mathbf{Y }}^m = F_{\mathbf{s }}^m(\mathbf{B }^m,\mathbf{X }^{m*}) \quad P_{\mathbf{s }}\text {-a.s.} \end{aligned}$$

The function \( F_{\mathbf{s }}^m\) in Definition 3.9 is also called a strong solution starting at \( \mathbf{s }^m \). The SDE (3.10)–(3.12) is said to have a unique strong solution \( F_{\mathbf{s }}^m\) if \( F_{\mathbf{s }}^m\) satisfies the condition in Definition 3.10. We note that, in this case, the function \( F_{\mathbf{s }}^m\) is unique \( {\widetilde{P}}^m \)-a.s.

We recall that these two notions are different from their infinite-dimensional counterparts, Definitions 3.6 and 3.7, because the SDE (3.10)–(3.12) is of random environment type.

We introduce the IFC condition of \( (\mathbf{X },\mathbf{B })\) defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) as follows.

\(\mathbf {(IFC)}\) The SDE (3.10)–(3.12) has a unique strong solution \( F_{\mathbf{s }}^m(\mathbf{B }^m,\mathbf{X }^{m*})\) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) for \( P\circ \mathbf{X }_0^{-1}\)-a.s. \( \mathbf{s }\) for all \( m \in \mathbb {N}\).

For convenience we introduce a quenched version of \(\mathbf {(IFC)}\):

\((\mathbf {IFC})_{{\mathbf {s}}}\) The SDE (3.10)–(3.12) has a unique strong solution \( F_{\mathbf{s }}^m(\mathbf{B }^m,\mathbf{X }^{m*})\) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) for all \( m \in \mathbb {N}\).

By definition \(\mathbf {(IFC)}\) holds if and only if \((\mathbf {IFC})_{{\mathbf {s}}}\) holds for \( P\circ \mathbf{X }_0^{-1}\)-a.s. \( \mathbf{s }\).

The SDE (3.10)–(3.12) is time inhomogeneous, and the state space of the solution \( \mathbf{Y }^m \) given by (3.11) also depends on time t through \( \mathbf{X }_t^{m*} \). Because the SDE (3.10) is finite-dimensional, one can apply the classical theory of SDEs directly. A feasible sufficient condition for \(\mathbf {(IFC)}\) is given in Sect. 11.
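Because (3.10) is finite-dimensional once the environment path is frozen, the classical Euler–Picard machinery applies. The following toy sketch (a hypothetical Lipschitz drift and an Euler–Maruyama discretization, not the coefficients of (3.10)) illustrates the point of Definition 3.9: the solution is a deterministic functional \( F(\mathbf{B }^m,\mathbf{X }^{m*})\) of the driving path and the environment path, and it genuinely depends on both.

```python
import math

def solve_sde(s, B, env, dt):
    """Euler-Maruyama for dY_t = b(Y_t, env_t) dt + dB_t, Y_0 = s.

    b(y, e) = -y + e is a hypothetical Lipschitz drift standing in for the
    coefficients of (3.10); `env` plays the role of the frozen environment
    path X^{m*}.  The output is a deterministic function of (B, env),
    i.e. a strong-solution functional F_s(B, env).
    """
    Y = [s]
    for k in range(len(B) - 1):
        dB = B[k + 1] - B[k]
        Y.append(Y[-1] + (-Y[-1] + env[k]) * dt + dB)
    return Y

n, dt = 1000, 1e-3
# a fixed (deterministic) driving path and environment path
B = [0.1 * math.sin(2 * math.pi * k * dt) for k in range(n)]
env = [1.0] * n

Y1 = solve_sde(0.0, B, env, dt)
Y2 = solve_sde(0.0, B, env, dt)
assert Y1 == Y2                       # same (B, env) -> same path: Y = F_s(B, env)
Y3 = solve_sde(0.0, B, [0.0] * n, dt)
assert Y1 != Y3                       # the solution depends on the environment
```

Pathwise uniqueness for such Lipschitz coefficients is what \(\mathbf {(IFC)}\) demands of each finite-dimensional system.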

We remark that a continuous process \( (\mathbf{X },\mathbf{B })\) satisfying \(\mathbf {(IFC)}\) is not necessarily a weak solution. See Definition 4.3 and Remark 4.4.

Let \( {\mathscr {T}}\, ({\textsf {S}})\) be the tail \( \sigma \)-field of \( {\textsf {S}}\) defined by (1.14). A random point field \( \mu \) on \( S\) is called tail trivial if \( \mu (A) \in \{ 0,1 \} \) for all \( A \in {\mathscr {T}}\, ({\textsf {S}})\). Let \( \mathbf{X }=(X^i)_{i\in \mathbb {N}}\) be a continuous process defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) and let \( {\textsf {X}}\) be the associated unlabeled process such that \( {\textsf {X}}_t=\sum _i \delta _{X_t^i}\). Let \( W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) be as in (2.9). Let \( {\textsf {m}}_{r,T} \!:\!W (S^{{\mathbb {N}}}) \!\rightarrow \!{\mathbb {N}}\cup \{ \infty \} \) be such that

$$\begin{aligned}&{\textsf {m}}_{r,T} (\mathbf{w }) = \inf \{ m \in {\mathbb {N}}\, ;\, \min _{t\in [0,T]}|w_t^n|> r \text { for all } n \in {\mathbb {N}} \text { such that } n > m \} . \end{aligned}$$
(3.13)
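For finitely many sampled paths, \( {\textsf {m}}_{r,T}\) can be computed directly from (3.13). A minimal sketch (hypothetical discretized paths; within a finite family the infimum is always attained, whereas for a path in which every particle enters the ball, as in Remark 3.10 below, no finite m works and \( {\textsf {m}}_{r,T}= \infty \)):

```python
def m_rT(paths, r):
    """Compute m_{r,T}(w) of (3.13) for finitely many sampled paths.

    paths[n] is the sampled sequence of distances |w_t^{n+1}| on [0, T]
    (particle labels are 1-based).  Returns the smallest m such that
    min_t |w_t^n| > r for every n > m.
    """
    last_inside = 0  # largest 1-based label whose path enters {|x| <= r}
    for n, path in enumerate(paths):
        if min(path) <= r:
            last_inside = n + 1
    return last_inside

# particles 1 and 3 enter the ball of radius 1; particles 2 and 4 do not
paths = [[0.5, 2.0], [3.0, 3.0], [2.0, 0.9], [5.0, 4.0]]
assert m_rT(paths, r=1.0) == 3
assert m_rT(paths, r=0.4) == 0   # no particle enters radius 0.4
```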

We make the following assumptions on \( \mu \) and on the dynamics \( \mathbf{X }\) under \( P\).

\(\mathbf {(TT)}\)   \( \mu \) is tail trivial.

\(\mathbf {(AC)}\)   \( P\circ {\textsf {X}}_t^{-1} \prec \mu \) for all \( 0< t < \infty \).

\(\mathbf {(SIN)}\)   \( P({\textsf {X}} \in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = 1 \).

\(\mathbf {(NBJ)}\)   \( P( {\textsf {m}}_{r,T} (\mathbf{X })< \infty ) = 1 \) for each \( r , T \in {\mathbb {N}} \).

We define the conditions \(\mathbf {(AC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) for a probability measure \( {\widehat{P}}\) on \( W ({\mathbb {R}}^{d\mathbb {N}})\) by replacing \( {\textsf {X}}\) and \( \mathbf{X }\) with \( {\textsf {w}}\) and \(\mathbf{w }\), respectively. We remark that \(\mathbf {(AC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) are conditions that depend only on the distribution of \( \mathbf{X }\).

We remark that, if \( (\mathbf{X },\mathbf{B })\) under \( P\) satisfies \(\mathbf {(SIN)}\), then \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) satisfies \(\mathbf {(SIN)}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\), where \( P_{\mathbf{s }}= P(\cdot | \mathbf{X }_0=\mathbf{s })\). The same holds for \(\mathbf {(NBJ)}\). This is, however, not the case for \(\mathbf {(AC)}\). As with \((\mathbf {IFC})_{{\mathbf {s}}}\), we write \(\mathbf {(SIN)}\)\(_{\mathbf{s }}\) and \(\mathbf {(NBJ)}\)\(_{\mathbf{s }} \) when we emphasize the dependence on \( \mathbf{s }\).

It is known that all determinantal random point fields on continuous spaces are tail trivial [2, 19, 29]. These results generalize the corresponding results for determinantal random point fields on discrete spaces [1, 18, 36].

In Sect. 5, under these assumptions, we deduce triviality of \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) from that of \( {\mathscr {T}}\, ({\textsf {S}})\) through the tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} ({\textsf {S}})\) of the unlabeled path space. We shall introduce the scheme carrying the tail \( \sigma \)-field of \( {\textsf {S}}\) to the tail \( \sigma \)-field of \(W (S^{{\mathbb {N}}})\).

The assumption \(\mathbf {(NBJ)}\) is crucial for the passage from the unlabeled dynamics \( {\textsf {X}} \) to the labeled dynamics \( \mathbf{X }\). If \( P( {\textsf {m}}_{r,T} (\mathbf{X })= \infty ) > 0 \), then we cannot use this scheme. To illustrate the case \( {\textsf {m}}_{r,T} (\mathbf{X })= \infty \), we give an example of a path \( \mathbf{w }\) such that \( {\textsf {m}}_{r,T} (\mathbf{w }) = \infty \) in Remark 3.10. This example indicates the necessity of \(\mathbf {(NBJ)}\).

Let \( (\mathbf{X },\mathbf{B })\) be a weak solution of (3.3)–(3.4) defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\).

If \( P_{\mathbf{s }}= P(\cdot | \mathbf{X }_0 = \mathbf{s })\) is a regular conditional probability, then \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (3.3)–(3.4) starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

Conversely, suppose that \( \{ P_{\mathbf{s }}\} \) is a family of probability measures on \( (\varOmega ,{\mathscr {F}},\{{\mathscr {F}}_t \} )\), given for \( \mathbf{m }\)-a.s. \( \mathbf{s } \in S^{{\mathbb {N}}}\), such that \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution starting at \( \mathbf{s }\). If \( P_{\mathbf{s }}( A )\) is \( \overline{{\mathscr {B}}(S^{{\mathbb {N}}})}^{\mathbf{m }} \)-measurable in \( \mathbf{s } \) for any \( A \in {\mathscr {B}}(W (S^{{\mathbb {N}}})) \), then \( (\mathbf{X },\mathbf{B })\) under \( P:= \int P_{\mathbf{s }}\mathbf{m }(d\mathbf{s })\) is a weak solution of (3.3)–(3.4) such that \(\mathbf{m } = P\circ \mathbf{X }_0^{-1} \).

Taking these into account, we introduce the following condition for a family of strong solutions \( \{ {F}_{\mathbf{s }}\} \) of (3.3)–(3.4) given for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

\((\mathbf{MF} )\)    \( P({F}_{\mathbf{s }}( \mathbf{B } ) \in A ) \) is \( \overline{{\mathscr {B}}(S^{{\mathbb {N}}})}^{ P\circ \mathbf{X }_0^{-1}} \)-measurable in \( \mathbf{s } \) for any \( A \in {\mathscr {B}}(W (S^{{\mathbb {N}}})) \).

For a family of strong solutions \( \{ {F}_{\mathbf{s }}\} \) satisfying \((\mathbf{MF} )\) we set

$$\begin{aligned}&P_{\{ {F}_{\mathbf{s }}\}}= \int P( {F}_{\mathbf{s }}(\mathbf{B } ) \in \cdot ) P\circ \mathbf{X }_0^{-1} (d\mathbf{s }) . \end{aligned}$$
(3.14)
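The measure \( P_{\{ {F}_{\mathbf{s }}\}}\) in (3.14) is simply the mixture of the laws of \( {F}_{\mathbf{s }}(\mathbf{B })\) over the initial distribution. On a discrete toy model (a hypothetical one-step "strong solution" \( F_s(B) = s + B_1 \) with a fair \( \pm 1\) coin for \( B_1 \), which has nothing to do with the ISDE of the paper), the mixture can be computed exactly:

```python
from fractions import Fraction

# toy version of (3.14): initial law nu on {0, 3}, noise law of B_1
nu = {0: Fraction(1, 3), 3: Fraction(2, 3)}
noise = {-1: Fraction(1, 2), 1: Fraction(1, 2)}

# P_{F_s} = integral of P(F_s(B) in .) against nu(ds), with F_s(B) = s + B_1
P_F = {}
for s, p in nu.items():
    for b, q in noise.items():
        P_F[s + b] = P_F.get(s + b, Fraction(0)) + p * q

assert P_F == {-1: Fraction(1, 6), 1: Fraction(1, 6),
               2: Fraction(1, 3), 4: Fraction(1, 3)}
assert sum(P_F.values()) == 1   # the mixture is again a probability measure
```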

Let \( (\mathbf{X },\mathbf{B })\) be a weak solution under \( P\). Suppose that \( (\mathbf{X },\mathbf{B })\) is a unique strong solution under \( P_{\mathbf{s }}\) for \( P\circ \mathbf{X }_0^{-1}\)-a.s. \( \mathbf{s }\), where \( P_{\mathbf{s }}= P(\cdot | \mathbf{X }_0=\mathbf{s })\). Let \( \{ {F}_{\mathbf{s }}\}\) be the unique strong solutions given by \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\). Then \((\mathbf{MF} )\) is automatically satisfied and

$$\begin{aligned}&P_{\{ {F}_{\mathbf{s }}\}}= P\circ \mathbf{X }^{-1} . \end{aligned}$$
(3.15)

Indeed, \( \mathbf{B }\) is a Brownian motion under both \( P\) and \( P_{\mathbf{s }}\). Then for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\)

$$\begin{aligned}&P( {F}_{\mathbf{s }}(\mathbf{B } ) \in \cdot ) = P_{\mathbf{s }}( {F}_{\mathbf{s }}(\mathbf{B } ) \in \cdot ) = P_{\mathbf{s }}(\mathbf{X } \in \cdot ) . \end{aligned}$$
(3.16)

Hence we deduce (3.15) from (3.14) and (3.16).

Definition 3.11

For a condition \( (\bullet ) \), we say that (3.3)–(3.4) has a family of unique strong solutions \( \{ {F}_{\mathbf{s }}\} \) starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) under the constraints of \((\mathbf{MF} )\) and \( (\bullet ) \) if \( \{ {F}_{\mathbf{s }}\} \) satisfies \((\mathbf{MF} )\), \( P_{\{ {F}_{\mathbf{s }}\}}\) satisfies \( (\bullet ) \), and, furthermore, (i) and (ii) below are satisfied.

  1. (i)

    For any weak solution \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) under \( {\hat{P}}\) of (3.3)–(3.4) with

    $$\begin{aligned} {\hat{P}}\circ \hat{\mathbf{X }}_0^{-1} \prec P \circ \mathbf{X }_0^{-1} \end{aligned}$$

    satisfying \( (\bullet ) \), it holds that, for \({\hat{P}} \circ \hat{\mathbf{X }}_0^{-1}\)-a.s. \( \mathbf{s }\),

    $$\begin{aligned}&\hat{\mathbf{X }}={F}_{\mathbf{s }}(\hat{\mathbf{B }}) \quad {\hat{P}}_{\mathbf{s }} \text {-a.s.} , \end{aligned}$$

    where \( {\hat{P}}_{\mathbf{s }} = {\hat{P}}(\cdot | \hat{\mathbf{X }}_0={\mathbf{s }})\).

  2. (ii)

    For an arbitrary \( {\mathbb {R}}^{d\mathbb {N}}\)-valued \( \{{\mathscr {F}}_t\} \)-Brownian motion \( \mathbf{B }\) defined on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) with \( \mathbf{B }_0=\mathbf{0 }\), \( {F}_{\mathbf{s }}(\mathbf{B })\) is a strong solution of (3.3)–(3.4) satisfying \( (\bullet ) \) starting at \( \mathbf{s }\) for \( P \circ \mathbf{X }_0^{-1}\)-a.s. \( \mathbf{s }\).

Theorem 3.1

Assume \(\mathbf {(TT)}\) for \( \mu \). Assume that (3.3)–(3.4) has a weak solution \((\mathbf{X },\mathbf{B })\) under P satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Then (3.3)–(3.4) has a family of unique strong solutions \( \{ {F}_{\mathbf{s }}\} \) starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) under the constraints of \((\mathbf{MF} )\), \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

The first corollary is a quenched result.

Corollary 3.1

Under the same assumptions as Theorem 3.1 the following hold.

  1. 1.

    \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}:= P(\cdot | \mathbf{X }_0=\mathbf{s })\) is a strong solution starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

  2. 2.

    Let \( (\mathbf{X }',\mathbf{B }') \) be any weak solution of (3.3)–(3.4) defined on \( (\varOmega ',{\mathscr {F}}', P' ,\{ {\mathscr {F}}_t' \} )\) satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Assume that \( P'\circ \mathbf{X }_0'^{-1} \) is absolutely continuous with respect to \( P\circ \mathbf{X }_0^{-1} \). Then \( (\mathbf{X }',\mathbf{B }') \) under \( P_{\mathbf{s }}' \) becomes a strong solution starting at \( \mathbf{s }\) satisfying \( \mathbf{X }'= {F}_{\mathbf{s }}(\mathbf{B }')\) for \( P'\circ \mathbf{X }_0'^{-1} \)-a.s. \( \mathbf{s }\). Furthermore,

    $$\begin{aligned}&P_{\mathbf{s }}'\circ \mathbf{X }'^{-1}= P_{\mathbf{s }}\circ \mathbf{X }^{-1} \end{aligned}$$

    for \( P'\circ \mathbf{X }_0'^{-1} \)-a.s. \( \mathbf{s }\). Here \( P_{\mathbf{s }}' = P'(\cdot | \mathbf{X }_0' =\mathbf{s })\).

  3. 3.

    For any Brownian motion \( \mathbf{B }''\), \( ({F}_{\mathbf{s }}(\mathbf{B }''),\mathbf{B }'')\) becomes a strong solution of (3.3)–(3.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}},\)\(\mathbf {(SIN)}\)\(_{\mathbf{s }}\), and \(\mathbf {(NBJ)}\)\(_\mathbf{s }\) starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

The second corollary is an annealed result.

Corollary 3.2

Under the same assumptions as Theorem 3.1 the following hold.

  1. 1.

    The uniqueness in law of weak solutions of (3.3)–(3.4) holds under the constraints of \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

  2. 2.

    The pathwise uniqueness of weak solutions of (3.3)–(3.4) holds under the constraints of \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

Remark 3.7

Because we exclude \( t = 0 \) in \(\mathbf {(AC)}\), Theorem 3.1 is valid even if \( P\circ {\textsf {X}}_0^{-1} \) is singular to \( \mu \).

Remark 3.8

We study ISDEs on \( S^{{\mathbb {N}}}\). It is difficult to solve the ISDEs on \( S^{{\mathbb {N}}}\) directly. One difficulty in treating \( S^{{\mathbb {N}}}\)-valued ISDEs is that \( S^{{\mathbb {N}}}\) does not have any good measures. To remedy this situation, we introduce the representation of \( S^{{\mathbb {N}}}\) as an infinite sequence of infinite-dimensional spaces:

$$\begin{aligned}&{\textsf {S}},\quad S\times {\textsf {S}},\quad S^2 \times {\textsf {S}},\quad S^3 \times {\textsf {S}},\quad S^4 \times {\textsf {S}},\quad \ldots . \end{aligned}$$
(3.17)

Each space in (3.17) has a good measure called the m-Campbell measure (see (9.11)). Using (3.8), we can rewrite (3.10) as

$$\begin{aligned}&dY_t^{m,i} = \sigma (Y_t^{m,i},{\textsf {Y}}_t^{m,i\diamondsuit } + {\textsf {X}}_t^{m*}) dB_t^i + \textit{b}(Y_t^{m,i},{\textsf {Y}}_t^{m,i\diamondsuit } + {\textsf {X}}_t^{m*}) dt .\end{aligned}$$
(3.18)

Thus \( (\mathbf{Y }^m,{\textsf {X}}^{m*})\) is an \( S^m\times {\textsf {S}}\)-valued process, and the scheme of infinite-dimensional spaces \( \{ S^m\times {\textsf {S}}\}_{m=0}^{\infty } \) in (3.17) is useful.

Remark 3.9

We call \(\mathbf {(NBJ)}\) the no-big-jump condition because, for any path \( \mathbf{w } = (w^i)_{i\in \mathbb {N}} \) such that \({\textsf {m}}_{r,T}(\mathbf{w })=\infty \),

$$\begin{aligned}&\sup _{ i \in \mathbb {N}}\sup _{0\le s,t \le T}|w_s^i - w_t^i| \mathbf{1}_{[0,r]}\left( \min _{0\le u \le T}|w_u^i|\right) =\infty \end{aligned}$$

and so for any \(\delta >0\)

$$\begin{aligned}&\sup _{ i \in \mathbb {N}}\sup _{\begin{array}{c} 0\le s,t \le T \\ |s-t|\le \delta \end{array}}|w_s^i - w_t^i| \mathbf{1}_{[0,r]}\left( \min _{0\le u \le T}|w_u^i|\right) =\infty , \end{aligned}$$

which implies the existence of paths that visit \(S_r\) during [0, T] and have modulus of continuity greater than \(\ell \) for any \(\ell \in \mathbb {N}\).

Remark 3.10

An example of a path \( \mathbf{w }=(w^i)_{i\in \mathbb {N}} \in W ({\mathbb {R}}^{2\mathbb {N}}) \) such that \( {\textsf {m}}_{r,T}(\mathbf{w }) = \infty \) is as follows. Let \( t_i= \sum _{j=1}^i {2^{-j}}\) and

$$\begin{aligned}&w_t^i = {\left\{ \begin{array}{ll} (1,i) &{}\quad t \in [0, t_i]\cup [t_{i+1} , \infty ) \\ \text {linear} &{}\quad t \in [t_i,t_i + 2^{-i-2}] \\ (0,0) &{}\quad t= t_i + 2^{-i-2} \\ \text {linear} &{}\quad t \in [t_i + 2^{-i-2},t_{i+1}] .\end{array}\right. } \end{aligned}$$

All particles sit on the vertical line \( \{ (1,y); y \in {\mathbb {R}}^+ \} \) at time zero. The ith particle sits at (1, i), leaves this position at time \( t_i \), and touches the origin at time \( t_i + 2^{-i-2}\); it then springs back up to its original position (1, i). We need \(\mathbf {(NBJ)}\) to exclude this type of “big jump” path. We note that \( {\mathfrak {u}} _{\text {path}}(\mathbf{w }) \not \in W ({\textsf {S}})\) although \( \mathbf{w }\in W ({\mathbb {R}}^{2\mathbb {N}}) \). We conjecture that, if \( {\mathfrak {u}} _{\text {path}}(\mathbf{w }) \in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\), then \( {\textsf {m}}_{r,T}(\mathbf{w }) < \infty \) is automatically satisfied.
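The following sketch encodes the piecewise-linear path of this remark and checks numerically that every particle touches the origin before time 1, so that for any \( r \ge 0 \) and \( T \ge 1 \) no index m in (3.13) works, that is, \( {\textsf {m}}_{r,T}(\mathbf{w }) = \infty \):

```python
def w(i, t):
    """The path w^i of Remark 3.10 (2-dimensional, 1-based label i)."""
    t_i = 1.0 - 2.0 ** (-i)            # t_i = sum_{j<=i} 2^{-j}
    t_mid = t_i + 2.0 ** (-i - 2)      # the particle touches the origin here
    t_next = 1.0 - 2.0 ** (-i - 1)     # t_{i+1} = t_i + 2^{-i-1}
    if t <= t_i or t >= t_next:
        return (1.0, float(i))
    if t <= t_mid:                     # linear from (1, i) down to (0, 0)
        s = 1.0 - (t - t_i) / (t_mid - t_i)
    else:                              # linear from (0, 0) back up to (1, i)
        s = (t - t_mid) / (t_next - t_mid)
    return (s, s * i)

# every particle i touches the origin at time t_i + 2^{-i-2} < 1, so for any
# r >= 0 no m in (3.13) works on [0, T] with T >= 1: m_{r,T}(w) = infinity
for i in range(1, 30):
    t_touch = 1.0 - 2.0 ** (-i) + 2.0 ** (-i - 2)
    x, y = w(i, t_touch)
    assert abs(x) < 1e-9 and abs(y) < 1e-9
    assert w(i, 0.0) == (1.0, float(i))   # starts (and ends) at (1, i)
```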

Main theorem II (Theorem 3.2): \( \mu \) with non-trivial tail

In this section, we relax the assumption \(\mathbf {(TT)}\) on \( \mu \) by using the tail decomposition of \( \mu \), as follows.

Let \( \mu _{\text {Tail}}^{{\textsf {a}}}\) be the regular conditional probability of \( \mu \) conditioned by \( {\mathscr {T}}\, ({\textsf {S}})\):

$$\begin{aligned}&\mu _{\text {Tail}}^{{\textsf {a}}}= \mu ( \, \cdot \, | {\mathscr {T}}\, ({\textsf {S}})) ( {\textsf {a}}) . \end{aligned}$$
(3.19)

Because \( {\textsf {S}}\) is a Polish space, such a regular conditional probability exists and satisfies

$$\begin{aligned}&\mu ({\textsf {A}}) = \int _{{\textsf {S}}} \mu _{\text {Tail}}^{{\textsf {a}}}({\textsf {A}}) \mu (d {\textsf {a}}) . \end{aligned}$$
(3.20)

By construction, \( \mu _{\text {Tail}}^{{\textsf {a}}}({\textsf {A}}) \) is a \( {\mathscr {T}}\, ({\textsf {S}})\)-measurable function in \( {\textsf {a}}\) for each \( {\textsf {A}}\in {\mathscr {B}}({\textsf {S}}) \).

Let \( {\textsf {H}}\) be a subset of \( {\textsf {S}}_{\text {sde}}\) in (3.5) with \( \mu ({\textsf {H}}) = 1 \). We assume that there exists a version of \( \mu _{\text {Tail}}^{{\textsf {a}}}\) together with a subset of \( {\textsf {H}}\), denoted by the same symbol \( {\textsf {H}}\), such that \( \mu ({\textsf {H}})= 1\) and, for all \({\textsf {a}}\in {\textsf {H}}\),

$$\begin{aligned}&\mu _{\text {Tail}}^{{\textsf {a}}}\text { is tail trivial} , \end{aligned}$$
(3.21)
$$\begin{aligned}&\mu _{\text {Tail}}^{{\textsf {a}}}(\{ {\textsf {b}}\in {\textsf {S}}; \mu _{\text {Tail}}^{{\textsf {a}}}= \mu _{\text {Tail}}^{{\textsf {b}}} \} ) = 1 , \end{aligned}$$
(3.22)
$$\begin{aligned}&\mu _{\text {Tail}}^{{\textsf {a}}}\quad \text { and }\quad \mu _{\text {Tail}}^{{\textsf {b}}}\text { are mutually singular on } {\mathscr {T}}\, ({\textsf {S}})\text { if } \mu _{\text {Tail}}^{{\textsf {a}}}\not =\mu _{\text {Tail}}^{{\textsf {b}}}. \end{aligned}$$
(3.23)

Let \( \mathop {\sim }\limits _{\text {Tail}}\) be the equivalence relation such that \( {\textsf {a}} \mathop {\sim }\limits _{\text {Tail}} {\textsf {b}} \) if and only if

$$\begin{aligned}&\mu _{\text {Tail}}^{{\textsf {a}}}= \mu _{\text {Tail}}^{{\textsf {b}}} . \end{aligned}$$

Let \( {\textsf {H}}_{{\textsf {a}}}= \{ {\textsf {b}}\in {\textsf {H}}; {\textsf {a}}\mathop {\sim }\limits _{\text {Tail}} {\textsf {b}}\} \). Then \( {\textsf {H}}\) can be decomposed as a disjoint sum

$$\begin{aligned}&{\textsf {H}}= \sum _{ [{\textsf {a}}] \in {\textsf {H}}/ \mathop {\sim }\limits _{\text {Tail}}} {\textsf {H}}_{{\textsf {a}}}. \end{aligned}$$
(3.24)

From (3.22), we see that \( \mu _{\text {Tail}}^{{\textsf {a}}}( {\textsf {H}}_{{\textsf {a}}}) = 1 \) for all \( {\textsf {a}} \in {\textsf {H}}\).
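When the partition (3.24) has countably many classes (an assumption made here only for illustration), combining (3.20) with \( \mu _{\text {Tail}}^{{\textsf {a}}}( {\textsf {H}}_{{\textsf {a}}}) = 1 \) and the fact that \( \mu _{\text {Tail}}^{{\textsf {b}}}= \mu _{\text {Tail}}^{{\textsf {a}}}\) for \( {\textsf {b}}\in {\textsf {H}}_{{\textsf {a}}}\) expresses \( \mu \) as a convex combination of its mutually singular tail-trivial components:

$$\begin{aligned}&\mu = \int _{{\textsf {H}}} \mu _{\text {Tail}}^{{\textsf {b}}}\, \mu (d{\textsf {b}}) = \sum _{ [{\textsf {a}}] \in {\textsf {H}}/ \mathop {\sim }\limits _{\text {Tail}}} \int _{{\textsf {H}}_{{\textsf {a}}}} \mu _{\text {Tail}}^{{\textsf {b}}}\, \mu (d{\textsf {b}}) = \sum _{ [{\textsf {a}}] \in {\textsf {H}}/ \mathop {\sim }\limits _{\text {Tail}}} \mu ({\textsf {H}}_{{\textsf {a}}})\, \mu _{\text {Tail}}^{{\textsf {a}}} . \end{aligned}$$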

For a labeled process \( \mathbf{X }=(X^i) \) on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) we set \( {\textsf {X}}_t=\sum _i \delta _{X_t^i}\) as before. Let \( {\mathfrak {l}} \) be a label. We assume \( \mathbf{X }_0 = {\mathfrak {l}} \circ {\textsf {X}}_0 \) \( P\)-a.s. We note that, in general, more than one label satisfies the relation \( \mathbf{X }_0 = {\mathfrak {l}} \circ {\textsf {X}}_0 \) \( P\)-a.s. For \( \mu \) as above we assume

$$\begin{aligned}&\mu = P\circ {\textsf {X}}_0^{-1} . \end{aligned}$$
(3.25)

We set \( P^{{\textsf {a}}} = P(\, \cdot \, | {\textsf {X}}_0^{-1}({\mathscr {T}}\, ({\textsf {S}}))) |_{{\textsf {X}}_0 = {\textsf {a}}} \). Then by (3.19) and (3.20)

$$\begin{aligned}&P^{{\textsf {a}}} = \int P (\cdot | {\textsf {X}}_0 = {\textsf {s}}) \mu _{\text {Tail}}^{{\textsf {a}}}(d{\textsf {s}}) . \end{aligned}$$
(3.26)

We can rewrite \( P^{{\textsf {a}}} \) as

$$\begin{aligned}&P^{{\textsf {a}}} = \int P (\cdot | \mathbf{X }_0 = \mathbf{s }) \mu _{\text {Tail}}^{{\textsf {a}}}\circ {\mathfrak {l}} ^{-1} (d\mathbf{s }) . \end{aligned}$$
(3.27)

From (3.19) and (3.25) we easily see \( \mu _{\text {Tail}}^{{\textsf {a}}}= P^{{\textsf {a}}}\circ {\textsf {X}}_0^{-1}\) and

$$\begin{aligned}&\mu _{\text {Tail}}^{{\textsf {a}}}\circ {\mathfrak {l}} ^{-1} = P^{{\textsf {a}}}\circ \mathbf{X }_0^{-1}. \end{aligned}$$
(3.28)

Let \( P_{\mathbf{s }}^{{\textsf {a}}} = P^{{\textsf {a}}}(\cdot | \mathbf{X }_0=\mathbf{s })\). Then \( P_{\mathbf{s }}^{{\textsf {a}}} = P (\cdot | \mathbf{X }_0 = \mathbf{s }) \) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\circ {\mathfrak {l}} ^{-1}\)-a.s. \( \mathbf{s }\) and for \( \mu \)-a.s. \( {\textsf {a}}\).

Theorem 3.2

Assume that \( \mu \), \( \mu _{\text {Tail}}^{{\textsf {a}}}\), and \( {\textsf {H}}\) satisfy (3.21)–(3.23) for all \( {\textsf {a}}\in {\textsf {H}}\subset {\textsf {S}}_{\text {sde}}\), \( \mu ({\textsf {H}}) = 1 \), and (3.25). Assume that \((\mathbf{X },\mathbf{B })\) under \( P\) is a weak solution of (3.3)–(3.4) satisfying \(\mathbf {(IFC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Assume that, for \( \mu \)-a.s. \( {\textsf {a}}\), \((\mathbf{X },\mathbf{B })\) under \( P^{{\textsf {a}}} \) satisfies \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\). Then, for \( \mu \)-a.s. \( {\textsf {a}}\), (3.3)–(3.4) has a family of unique strong solutions \( \{ {F}_{\mathbf{s }}^{{\textsf {a}}}\}\) starting at \( \mathbf{s }\) for \( P^{{\textsf {a}}} \circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) under the constraints of \((\mathbf{MF} )\), \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

Remark 3.11

  1. 1.

    We shall prove in Lemma 14.2 that a version \( \{ \mu _{\text {Tail}}^{{\textsf {a}}}\} \) satisfying (3.21)–(3.23) exists if \( \mu \) is a quasi-Gibbs random point field satisfying (A2) in Sect. 9.2. All examples in the present paper are such quasi-Gibbs random point fields.

  2. 2.

    The unique strong solution \( {F}_{\mathbf{s }}^{{\textsf {a}}}\) in Theorem 3.2 yields corollaries similar to Corollary 3.1 and Corollary 3.2. In particular, for \( \mu \)-a.s. \( {\textsf {a}}\), \( (\mathbf{X },\mathbf{B })= ({F}_{\mathbf{s }}^{{\textsf {a}}}(\mathbf{B }),\mathbf{B })\) under \( P_{\mathbf{s }}^{{\textsf {a}}}\) for \(\mu _{\text {Tail}}^{{\textsf {a}}}\circ {\mathfrak {l}} ^{-1} \)-a.s. \( \mathbf{s }\). We note here that \( \mu _{\text {Tail}}^{{\textsf {a}}}\circ {\mathfrak {l}} ^{-1} = P^{{\textsf {a}}}\circ \mathbf{X }_0^{-1}\) by (3.28).

  3. 3.

    In Theorem 3.2, the assumption “\((\mathbf{X },\mathbf{B })\) under \( P^{{\textsf {a}}} \) satisfies \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) for \( \mu \)-a.s. \( {\textsf {a}}\)” is critical, and it does not hold in general. We shall give sufficient conditions in Theorems 12.1 and 12.2. From these theorems we deduce that all the examples in the present paper satisfy this assumption.

The role of the five assumptions: \(\mathbf {(IFC)}\), \(\mathbf {(TT)}\), \(\mathbf {(AC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\)

In Sect. 3.2, we introduced the five significant assumptions \(\mathbf {(IFC)}\), \(\mathbf {(TT)}\), \(\mathbf {(AC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). In this subsection, we explain the role of these assumptions in the proof of Theorem 3.1. We also explain the role of the other main assumptions used in the present paper.

To prove Theorem 3.1, we use the strategy introduced in Sect. 1. One of the critical points of the strategy is the reduction of the ISDE to an infinite system of finite-dimensional stochastic differential equations (SDEs). For this, we use the pathwise unique strong solutions of the finite-dimensional SDEs associated with the ISDE. The condition \(\mathbf {(IFC)}\) asserts that these finite-dimensional SDEs have such unique strong solutions, so \(\mathbf {(IFC)}\) is pivotal to the reduction of the ISDE to the IFC scheme.

Another critical point of the proof is tail triviality of the labeled path space under the distribution of the weak solution. We shall deduce this from tail triviality of \( \mu \), denoted by \(\mathbf {(TT)}\), in a general framework, as the second tail theorem in Sect. 5.

The key idea here is the passage from \(\mathbf {(TT)}\) to tail triviality of the path space of the labeled dynamics. Because of \(\mathbf {(AC)}\), we deduce triviality of the tail \( \sigma \)-field of \( {\textsf {S}}\) under the single-time marginal distributions of \( {\textsf {X}}\). From this we shall deduce tail triviality of the unlabeled path space under the finite-dimensional distributions of the unlabeled dynamics \( {\textsf {X}}\), where the meaning of the tail is spatial [see (5.7) for the definition].

We next deduce tail triviality of the labeled path space from that of the unlabeled path space. The map \( {\mathfrak {l}} _{\text {path}}\) in (2.10) from the unlabeled path space to the labeled one plays an essential role in our argument. The assumption \(\mathbf {(SIN)}\) is necessary for the construction of this map. Here \(\mathbf {(SIN)}\) is an abbreviation of “unlabeled path spaces on single, infinite configurations with no explosion of tagged particles”. To carry out the passage, we shall use \(\mathbf {(NBJ)}\) in addition to \(\mathbf {(SIN)}\).

In Sect. 4, we shall deduce the existence of a unique strong solution of ISDE from tail triviality of labeled path space in Theorem 4.1. We call Theorem 4.1 the first tail theorem. The conditions in Theorem 4.1 are denoted by \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\).

The assumptions \(\mathbf {(IFC)}\), \(\mathbf {(TT)}\), \(\mathbf {(AC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) are used in the proof of Theorem 5.1 (the second tail theorem) in Sect. 5. \(\mathbf {(TT)}\) and \(\mathbf {(AC)}\) are the assumptions of Theorem 5.2, which claims triviality of the labeled path space at the cylindrical level. \(\mathbf {(TT)}\), \(\mathbf {(AC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) are the assumptions of Theorem 5.3, which proves \( ({\mathbf {C}}_{\mathrm{path}}1) \) and \( ({\mathbf {C}}_{\mathrm{path}}2) \).

Conditions (A1)–(A4) given in Sects. 9 and 10 are related to Dirichlet forms; they are conditions on random point fields \( \mu \) from the viewpoint of Dirichlet form theory. Recall that \(\mathbf {(IFC)}\) is the most critical condition for the theory. We shall give a feasible sufficient condition for \(\mathbf {(IFC)}\) in terms of the assumptions (B1)–(B2) and (C1)–(C2) in Sect. 11. We shall present them in Theorems 11.1 and 11.2.

We summarize these conditions in Table 1.

Table 1 List of conditions

Solutions and tail \( \sigma \)-fields: first tail theorem (Theorem 4.1)

This section proves the existence of a strong solution, the pathwise uniqueness of solutions, and the existence of a unique strong solution of the ISDE (4.2)–(4.4). The ISDEs studied in this section are more general than those in Sects. 1 and 3. Naturally, the ISDEs in the previous sections are typical examples to which our results (Theorems 4.1, 4.2, and 4.3) can be applied.

Throughout this section, \( \mathbf{X } = (X^i)_{i\in \mathbb {N}}\) is an \( S^{{\mathbb {N}}}\)-valued, continuous, \( \{ {\mathscr {F}}_t \}\)-adapted process defined on \((\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) starting at \( \mathbf{s }\), which is indicated by the subscript \( \mathbf{s }\) in \( P_{\mathbf{s }}\). \( \mathbf{B } =(B^i)_{i\in \mathbb {N}}\) is an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued, standard \( \{ {\mathscr {F}}_t \}\)-Brownian motion starting at the origin. \( P_{\text {Br}}^{\infty }\) is the distribution of \( \mathbf{B }\). Thus,

$$\begin{aligned}&P_{\mathbf{s }}(\mathbf{X }_0=\mathbf{s }) = 1 ,\quad P_{\mathbf{s }}(\mathbf{B } \in \cdot ) = P_{\text {Br}}^{\infty }. \end{aligned}$$
(4.1)

We fix the starting point \( \mathbf{s }\) throughout Sect. 4.

General theorems of the uniqueness and existence of strong solutions of ISDEs

In this subsection, we introduce ISDE (4.2)–(4.4) and state one of the main theorems (Theorem 4.1: First tail theorem).

Let \( \mathbf{W }_{\text {sol}}\) be a Borel subset of \( W (S^{{\mathbb {N}}})\). Let \( {\mathscr {B}} (\mathbf{W }_{\text {sol}})\) be the Borel \( \sigma \)-field of \( \mathbf{W }_{\text {sol}}\). Let \( {\mathscr {B}}_t (\mathbf{W }_{\text {sol}}) \) be the sub \( \sigma \)-field of \( {\mathscr {B}}(\mathbf{W }_{\text {sol}}) \) such that

$$\begin{aligned} {\mathscr {B}}_t (\mathbf{W }_{\text {sol}}) = \sigma [\mathbf{w }_u; 0\le u \le t \, , \, \mathbf{w }\in \mathbf{W }_{\text {sol}}] . \end{aligned}$$

Following the finite-dimensional treatment in [9], we shall introduce SDEs in infinite dimensions.

Definition 4.1

\( {\mathscr {A}}^{d,r} \) is the set of all functions \( \alpha \!:\![0,\infty )\times \mathbf{W }_{\text {sol}}\!\rightarrow \!{\mathbb {R}}^d\otimes {\mathbb {R}}^r \) such that

  1.

    \( \alpha \) is \( {\mathscr {B}}([0,\infty ))\times {\mathscr {B}}(\mathbf{W }_{\text {sol}}) / {\mathscr {B}} ({\mathbb {R}}^d\otimes {\mathbb {R}}^r )\)-measurable,

  2.

    \( \mathbf{W }_{\text {sol}}\ni \mathbf{w }\mapsto \alpha (t,\mathbf{w }) \in {\mathbb {R}}^d\otimes {\mathbb {R}}^r \) is \( {\mathscr {B}}_t (\mathbf{W }_{\text {sol}}) / {\mathscr {B}} ({\mathbb {R}}^d\otimes {\mathbb {R}}^r )\)-measurable for each \( t \in [0,\infty )\).

Let \( \sigma ^i \!:\!\mathbf{W }_{\text {sol}}\!\rightarrow \!W ({\mathbb {R}}^{d^2}) \) and \( b^i\!:\!\mathbf{W }_{\text {sol}}\!\rightarrow \!W ({\mathbb {R}}^d) \) be such that \( \sigma ^i (\mathbf{w })_t \in {\mathscr {A}}^{d,d} \) and \( b^{i} (\mathbf{w })_t \in {\mathscr {A}}^{d,1} \). We assume \( \sigma ^{i} \in {\mathscr {L}}^2 \) and \( b^{i} \in {\mathscr {L}}^1 \), where \( {\mathscr {L}}^p \) is as in Definition 3.1. We introduce the ISDE on \( S^{{\mathbb {N}}}\) of the form

$$\begin{aligned}&dX_t^i = \sigma ^{i}(\mathbf{X })_t dB_t^i + b^{i}(\mathbf{X })_t dt \quad (i\in {\mathbb {N}}) , \end{aligned}$$
(4.2)
$$\begin{aligned}&\mathbf{X } \in \mathbf{W }_{\text {sol}}, \end{aligned}$$
(4.3)
$$\begin{aligned}&\mathbf{X }_0 = \mathbf{s } . \end{aligned}$$
(4.4)

Here \( (\mathbf{X },\mathbf{B })\) is defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \), and \( \mathbf{B }=(B^i)_{i\in {\mathbb {N}}}\) is an \( {\mathbb {R}}^{d\mathbb {N}}\)-valued, \( \{ {\mathscr {F}}_t \}\)-Brownian motion as before.

The definitions of a weak solution, a strong solution, and the related notions are similar to those in Sect. 3.

We remark that, unlike the previous sections, in this section we do not assume \( \mathbf{W }_{\text {sol}}= {\mathfrak {u}} _{\text {path}}^{-1} (W ({\textsf {W}})) \) for some \( {\textsf {W}} \subset {\textsf {S}}\). This is because we intend to clarify the relation between the strong and pathwise notions of solutions of the ISDE and tail triviality of the labeled path space. Indeed, our theorems (Theorems 4.1 and 4.2) clarify a general structure relating the existence of a strong solution and the pathwise uniqueness of the solutions of ISDE (4.2)–(4.4) to triviality of the tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) of the labeled path space defined by (4.9) below.

We make a minimal assumption for this structure. As a result, the ISDEs in this section are much more general than before. In Sect. 5, we return to the original situation and deduce tail triviality of the labeled path space from that of the configuration space.

The correspondence between ISDE (3.3)–(3.4) and (4.2)–(4.3) is as follows.

$$\begin{aligned}&\mathbf{W }_{\text {sol}}= \{\mathbf{w }\in W (\mathbf{S }_{\text {sde}}); \mathbf{w }_0 \in {\mathfrak {l}} ({\textsf {H}}) \}, \\&\sigma ^i(\mathbf{X })_t = \sigma (X_t^i,{\textsf {X}}_t^{i\diamondsuit }) ,\ b^i(\mathbf{X })_t = \textit{b}(X_t^i,{\textsf {X}}_t^{i\diamondsuit }). \end{aligned}$$

Here \( {\textsf {H}}\), \( \sigma \), and \( \textit{b}\) are given in ISDE (3.3) and (3.5). Moreover, \( \mathbf{W }_{\text {sol}}\) corresponds to both \( {\mathfrak {l}} ({\textsf {H}}) \) and \( {\mathfrak {u}} ^{-1}({\textsf {S}}_{\text {sde}}) \).

The final form of our general theorems (Theorems 3.1 and 3.2) is stated in terms of random point fields. We emphasize that there are many interesting random point fields satisfying the assumptions, such as the sine, Airy, Bessel, and Ginibre random point fields, and all canonical Gibbs measures with potentials of Ruelle’s class (with suitable smoothness of potentials such that the associated ISDEs make sense).

Rather than imposing explicit conditions on the coefficients \( \sigma ^{i}\) and \( b^{i}\) to solve ISDE (4.2)–(4.4), we instead assume the existence of a weak solution and the pathwise uniqueness of solutions of the associated infinite system of finite-dimensional SDEs.

For a given \( (\mathbf{X },\mathbf{B })\) defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) satisfying (4.1), we introduce the infinite system of finite-dimensional SDEs (4.5)–(4.7):

$$\begin{aligned}&dY_t^{m,i} = \sigma ^{i} (\mathbf{Y }^{m}\oplus \mathbf{X }^{m*})_t d B_t^i + b^{i}(\mathbf{Y }^{m}\oplus \mathbf{X }^{m*})_t dt \quad ( i=1,\ldots ,m) , \end{aligned}$$
(4.5)
$$\begin{aligned}&\mathbf{Y }^{m}\oplus \mathbf{X }^{m*}\in \mathbf{W }_{\text {sol}}, \end{aligned}$$
(4.6)
$$\begin{aligned}&\mathbf{Y }_0^{m}\oplus \mathbf{X }_0^{m*}= \mathbf{s } , \end{aligned}$$
(4.7)

where \( \mathbf{Y }^{m} = (Y^{m,1},\ldots ,Y^{m,m})\) is an unknown process defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \), and \( \mathbf{Y }^{m}\oplus \mathbf{X }^{m*}= (Y^{m,1},\ldots ,Y^{m,m},X^{m+1},X^{m+2},\ldots ) \).

The process \(\mathbf{Y }^{m} \) denotes a solution of (4.5)–(4.7) starting at \(\mathbf{s }^{m} = (s_1,\ldots ,s_m)\). The notions of a strong solution and a unique strong solution of (4.5)–(4.7) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) are defined as in Definitions 3.9 and 3.10 with obvious modifications. Let \( {\mathbf{B }}^m =({B}^i)_{i=1}^m \) be the first m components of the \( \{ {\mathscr {F}}_t \}\)-Brownian motion \( \mathbf{B }=(B^i)_{i=1}^{\infty }\). The following assumption corresponds to \((\mathbf {IFC})_{{\mathbf {s}}}\) in Sect. 3.2.

\((\mathbf {IFC})_{{\mathbf {s}}}\) SDE (4.5)–(4.7) has a unique strong solution \( \mathbf{Y }^m = F_{\mathbf{s }}^m({\mathbf{B }}^m,\mathbf{X }^{m*}) \) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) for each \( m \in \mathbb {N}\).
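As a concrete caricature of how \((\mathbf {IFC})_{{\mathbf {s}}}\) reads the system (4.5)–(4.7), the following Euler–Maruyama sketch solves a truncated system with \( d = 1 \), \( \sigma ^i \equiv 1 \), and an illustrative bounded soft-core drift; the function names, the drift, and the frozen tail paths are all our assumptions, not part of the theory.

```python
import numpy as np

def solve_truncated(m, s, tail_paths, dt=1e-3, n_steps=200, seed=0):
    """Euler-Maruyama sketch of SDE (4.5)-(4.7) with d = 1, sigma^i = 1.

    The unknown is Y^m = (Y^{m,1}, ..., Y^{m,m}); the tail coordinates
    X^{m*} (rows of `tail_paths`) are *frozen*: they enter the drift
    like time-dependent coefficients, exactly the reading of (IFC)_s."""
    rng = np.random.default_rng(seed)
    y = np.array(s[:m], dtype=float)            # Y^m_0 = s^m = (s_1, ..., s_m)
    out = [y.copy()]
    for k in range(n_steps):
        z = np.concatenate([y, tail_paths[k]])  # current full configuration
        diff = y[:, None] - z[None, :]
        # illustrative soft-core repulsion: sum_j (y_i - z_j)/(1 + |y_i - z_j|^2)
        drift = np.where(np.abs(diff) > 1e-12, diff / (1.0 + diff**2), 0.0).sum(axis=1)
        y = y + rng.normal(0.0, np.sqrt(dt), size=m) + drift * dt
        out.append(y.copy())
    return np.array(out)                        # shape (n_steps + 1, m)
```

For such a finite-dimensional system with Lipschitz coefficients, pathwise uniqueness is classical; it is this uniqueness, for every m, that \((\mathbf {IFC})_{{\mathbf {s}}}\) postulates.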

Remark 4.1

The meaning of SDE (4.5)–(4.7) is not conventional because the coefficients involve the additional randomness \( \mathbf{X }^{m*}\); that is, \( \mathbf{X }^{m*}\) is interpreted as an ingredient of the coefficients of SDE (4.5). Furthermore, \( \mathbf{B }^m \) and \( \mathbf{X }^{m*}\) can depend on each other. See Remark 3.5. Another interpretation of SDE (4.5)–(4.7) regards \( (\mathbf{B }^m,\mathbf{X }^{m*})\) as the input to the system, rather than treating \( \mathbf{X }^{m*} \) as part of the coefficients. Solving SDE (4.5)–(4.7) then means constructing a function of \( (\mathbf{B }^m,\mathbf{X }^{m*})\); thus a strong solution is a functional of \( (\mathbf{B }^m,\mathbf{X }^{m*})\).

Assume that \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). Because \( \mathbf{X }^m = (X^1,\ldots ,X^m) \) itself solves (4.5)–(4.7), the pathwise uniqueness in \((\mathbf {IFC})_{{\mathbf {s}}}\) yields the crucial identity:

$$\begin{aligned}&\mathbf{Y }^m = \mathbf{X }^m . \end{aligned}$$
(4.8)

As we saw in Sect. 1, (4.8) plays an important role in the whole theory.

We set \( \mathbf{w }^{m*}=(w^i)_{i=m+1}^{\infty }\) for \( \mathbf{w }=(w^i)_{i=1}^{\infty }\in W (S^{{\mathbb {N}}})\). Let

$$\begin{aligned}&{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})= \bigcap _{m=1}^{\infty } \sigma [\mathbf{w }^{m*}]. \end{aligned}$$
(4.9)

By definition, \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) is the tail \( \sigma \)-field of \( W (S^{{\mathbb {N}}})\) with respect to the label. For a probability measure P on \( W (S^{{\mathbb {N}}})\), we set

$$\begin{aligned}&{\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};P ) = \{ \mathbf{A } \in {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\, ;\, P (\mathbf{A }) = 1 \} . \end{aligned}$$
(4.10)
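To make (4.9) concrete: a \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\)-measurable functional is one whose value survives discarding any finite number of labeled coordinates; for independent coordinates, Kolmogorov's zero–one law renders such functionals a.s. constant, the prototype of the triviality conditions introduced next. The following finite-N caricature (an illustrative script; all names are ours) checks this insensitivity numerically.

```python
import numpy as np

def cesaro_tail_functional(w):
    """Finite-N proxy for a tail functional: the Cesaro mean of the
    coordinates.  Its N -> infinity limit is unchanged when finitely
    many coordinates are altered, hence depends only on w^{m*} for
    every m, i.e. it is T_path-measurable."""
    return np.asarray(w, dtype=float).mean()

N = 100_000
rng = np.random.default_rng(1)
w = rng.normal(size=N)
w_perturbed = w.copy()
w_perturbed[:5] += 10.0   # alter finitely many (here 5) coordinates

# at finite N the functional moves only by 5 * 10 / N = 5e-4
gap = abs(cesaro_tail_functional(w) - cesaro_tail_functional(w_perturbed))
```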

For the continuous process \( (\mathbf{X },\mathbf{B })\) given at the beginning of this section, we set

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s }}= P_{\mathbf{s }}\circ (\mathbf{X },\mathbf{B })^{-1} . \end{aligned}$$
(4.11)

By definition, \( {\widetilde{P}}_{\mathbf{s }}\) is a probability measure on \( W (S^{{\mathbb {N}}})\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\). We denote by \( (\mathbf{w },\mathbf{b }) \) generic elements of \( W (S^{{\mathbb {N}}})\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\). Let \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) be the regular conditional probability of \( {\widetilde{P}}_{\mathbf{s }}\), regarded as a probability measure on \( W (S^{{\mathbb {N}}})\), such that

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\cdot ) = {\widetilde{P}}_{\mathbf{s }}(\mathbf{w } \in \cdot \, | \mathbf{b }) . \end{aligned}$$
(4.12)

We introduce the following conditions.

\( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\)\({\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) is \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-trivial for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\).

\( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\)\( {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};{\widetilde{P}}_{\mathbf{s },\mathbf{b }}) \) is independent of the distribution \( {\widetilde{P}}_{\mathbf{s }}\) for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\) if \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\).

In other words, \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) means

\( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\)’ If \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) and \( (\mathbf{X }',\mathbf{B }')\) under \( P_{\mathbf{s }}' \) satisfy \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\), then

$$\begin{aligned}&\quad \quad {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};{\widetilde{P}}_{\mathbf{s },\mathbf{b }}) = {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};{\widetilde{P}}_{\mathbf{s },\mathbf{b }}' ) \quad \text { for } P_{\text {Br}}^{\infty }\text {-a.s.}\, \mathbf{b }. \end{aligned}$$

In this sense, \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) is a condition on ISDE (4.2)–(4.4) rather than on a specific solution \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\).

Remark 4.2

  1.

    The conditions \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) depend on \( \mathbf{s }\), the starting point of \( \mathbf{X }\). To indicate this we attach the subscript \( \mathbf{s }\).

  2.

    We emphasize that we fix \( \mathbf{s }\) throughout Sect. 4, while we do not fix \( \mathbf{s }\) in Sect. 5. There we present a sufficient condition for \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) to hold for a.s. \( \mathbf{s }\) with respect to the initial distribution of \( \mathbf{X }_0\). We remark that, unlike in Sect. 4, \( \mathbf{X }\) does not necessarily start at a fixed single point in Sect. 5.

We note that \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) implies that \( {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};{\widetilde{P}}_{\mathbf{s },\mathbf{b }}) \) depends only on \( \mathbf{b }\) for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\).

We now state the main theorem of this section, which we shall prove in Sect. 4.2. In Theorem 4.1 and Corollary 4.1 we consider ISDE (4.2)–(4.4). Recall the notions of strong solutions starting at \( \mathbf{s } \) given by Definition 3.6 and a unique strong solution under the constraint of a condition \( (\bullet )\) given by Definition 3.8.

Theorem 4.1

(First tail theorem) The following hold.

  1.

    Assume that \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\). Then \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a strong solution of (4.2)–(4.4).

  2.

    Make the same assumptions as (1). We further assume that ISDE (4.2)–(4.4) satisfies \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\). Then (4.2)–(4.4) has a unique strong solution \( {F}_{\mathbf{s }}\) under the constraint of \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\).

Corollary 4.1

Make the same assumptions as Theorem 4.1 (2). Then the following hold.

  1.

    For any weak solution \( (\mathbf{X }',{\mathbf{B }}')\) of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\), it holds that \( \mathbf{X }'={F}_{\mathbf{s }}({\mathbf{B }}')\).

  2.

    For any Brownian motion \( \mathbf{B }''\), \( {F}_{\mathbf{s }}(\mathbf{B }'') \) is a strong solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\).

  3.

    The uniqueness in law of weak solutions of (4.2)–(4.4) holds under the constraint of \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\).

Infinite systems of finite-dimensional SDEs with consistency

In this section we prove Theorem 4.1. Let \( (\mathbf{X },\mathbf{B })\) be a pair of an \( \{ {\mathscr {F}}_t \}\)-adapted continuous process \( \mathbf{X }\) and an \( \{ {\mathscr {F}}_t \}\)-Brownian motion \( \mathbf{B } \) defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) satisfying (4.1) as before. We assume that \( P_{\mathbf{s }}\) satisfies:

$$\begin{aligned}&P_{\mathbf{s }}((\mathbf{X },\mathbf{B })\in \mathbf{W }_{\text {sol}}\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) = 1 . \end{aligned}$$

We denote by \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) the regular conditional probability of \( P_{\mathbf{s }}\) conditioned by the random variable \( {\mathbf{B }}\) such that

$$\begin{aligned}&{\mathbb {P}}_{\mathbf{s },\mathbf{b }}(\cdot ) = P_{\mathbf{s }}( (\mathbf{X },{\mathbf{B }}) \in \cdot \, |{\mathbf{B }}= \mathbf{b }). \end{aligned}$$
(4.13)

By construction \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}(\mathbf{W }_{\text {sol}}\times \{ \mathbf{b }\} ) = 1 \). This follows from the fact that the regular conditional probability in (4.13) is conditioned by the random variable \( {\mathbf{B }}\); we refer to [9, (3.1), p. 15] for the proof. We set

$$\begin{aligned}&{\mathbb {P}}_{\mathbf{s }}(\cdot ) = \int _{}{\mathbb {P}}_{\mathbf{s },\mathbf{b }}(\cdot )P_{\text {Br}}^{\infty }(d\mathbf{b }) . \end{aligned}$$
(4.14)

From (4.1), (4.13), and (4.14) we have a representation of \( {\widetilde{P}}_{\mathbf{s }}= P_{\mathbf{s }}\circ (\mathbf{X },\mathbf{B })^{-1}\) such that for any \( C \in {\mathscr {B}}(\mathbf{W }_{\text {sol}}) \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \)

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s }}( C ) = {\mathbb {P}}_{\mathbf{s }}( C ) . \end{aligned}$$
(4.15)

Remark 4.3

The probability measure \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) in (4.12) resembles \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) in (4.13), and they are closely related to each other. The difference is that \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) is a probability measure on \( W (S^{{\mathbb {N}}})\), whereas \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) is on \( W (S^{{\mathbb {N}}})\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\).

In general, \( \overline{{\mathscr {F}} }^P\) denotes the completion of the \( \sigma \)-field \( {\mathscr {F}} \) with respect to a probability measure P. Recall that \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})= \cap _{m\in \mathbb {N}}\sigma [\mathbf{w }^{m*}]\). Then it is easy to see that

$$\begin{aligned}&{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \subset \bigcap _{m\in \mathbb {N}} \Big \{ \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \Big \}. \end{aligned}$$
(4.16)

Hence taking the closure of both sides in (4.16) with respect to \( {\widetilde{P}}_{\mathbf{s }}\) we have

$$\begin{aligned}&\overline{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))}^{{\widetilde{P}}_{\mathbf{s }}} \subset \overline{\bigcap _{m\in \mathbb {N}} \Big \{ \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \Big \} }^{{\widetilde{P}}_{\mathbf{s }}} \nonumber \\&= \bigcap _{m\in \mathbb {N}} \overline{\sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) }^{{\widetilde{P}}_{\mathbf{s }}} . \end{aligned}$$
(4.17)

The equality in (4.17) follows from the general fact that, for a decreasing sequence of \( \sigma \)-fields, the completion of the countable intersection coincides with the countable intersection of the completions. Indeed, if

$$\begin{aligned} K \in \overline{\bigcap _{m\in \mathbb {N}} \Big \{ \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \Big \} }^{{\widetilde{P}}_{\mathbf{s }}} , \end{aligned}$$
(4.18)

then there exist \( A , B \in \bigcap _{m\in \mathbb {N}} \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \) such that \( A \subset K \subset B \) and \( {\widetilde{P}}_{\mathbf{s }}(B \backslash A ) = 0 \). Then \( A, B \in \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))\) for each m. This yields

$$\begin{aligned} K \in \bigcap _{m\in \mathbb {N}} \overline{\sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) }^{{\widetilde{P}}_{\mathbf{s }}} . \end{aligned}$$
(4.19)

Conversely, if (4.19) holds, then there exist \( A_m , B_m \in \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \) such that \( A_m \subset K \subset B_m \) and that \( {\widetilde{P}}_{\mathbf{s }}(B_m \backslash A_m ) = 0 \) for each \( m \in \mathbb {N}\). It is clear that \( \limsup _m A_m \) and \( \liminf _m B_m \) are tail events such that

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s }}( \liminf _m B_m \backslash \limsup _m A_m ) \le \sum _{m=1}^{\infty } {\widetilde{P}}_{\mathbf{s }}(B_m \backslash A_m) = 0 . \end{aligned}$$

Hence (4.18) holds.
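The subadditivity step above rests on the elementary inclusion

$$\begin{aligned} \liminf _m B_m \backslash \limsup _m A_m \subset \bigcup _{m=1}^{\infty } ( B_m \backslash A_m ) : \end{aligned}$$

a point of \( \liminf _m B_m \) belongs to \( B_m \) for all sufficiently large m, while a point outside \( \limsup _m A_m \) belongs to \( A_m \) for at most finitely many m; hence such a point lies in \( B_m \backslash A_m \) for some m.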

We need the reverse inclusion of (4.17), which does not hold in general. Hence we introduce further completions of \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))\) as follows. We set

$$\begin{aligned}&{\mathscr {K}} = {{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))} \end{aligned}$$
(4.20)

and define \( {\mathscr {I}} \), intuitively, by

$$\begin{aligned}&{\mathscr {I}} = \overline{ \bigcap _{\mathbf{b }} \overline{{\mathscr {K}}}^{\, {\mathbb {P}}_{\mathbf{s },\mathbf{b }}} }^{\, P_{\text {Br}}^{\infty }} . \end{aligned}$$
(4.21)

Here the intersection is taken over \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). To be precise, we set for \( U\subset W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\)

$$\begin{aligned}&{\mathscr {K}}[U]= \bigcap _{\mathbf{b }\in U} \overline{{\mathscr {K}}}^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} . \end{aligned}$$

Then the set \( {\mathscr {I}} \) is defined by

$$\begin{aligned}&{\mathscr {I}}= \{ V\, ;\, \text { there exist } V_i \in {\mathscr {K}}[U] \, (i=1,2),\, U\in {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \nonumber \\&\quad \quad \text { such that } V_1 \subset V\subset V_2 , \ {\mathbb {P}}_{\mathbf{s },\mathbf{b }}(V_2\backslash V_1) = 0 \text { for all } \mathbf{b }\in U, \ P_{\text {Br}}^{\infty }(U)= 1 \} . \end{aligned}$$
(4.22)

Lemma 4.1

Let \( {\mathscr {W}}_m = \sigma [\mathbf{w }^{m*}] \times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \). Then the following holds.

$$\begin{aligned} \bigcap _{m=1}^{\infty } \overline{{\mathscr {W}}_m }^{{\widetilde{P}}_{\mathbf{s }}} \subset {\mathscr {I}} . \end{aligned}$$
(4.23)

Proof

We recall \( {\widetilde{P}}_{\mathbf{s }}= P_{\mathbf{s }}\circ (\mathbf{X },\mathbf{B })^{-1} \) from (4.11). As with the equality in (4.17),

$$\begin{aligned} \cap _{m=1}^{\infty }\overline{{\mathscr {W}}_m }^{ {\widetilde{P}}_{\mathbf{s }}} = \overline{\cap _{m=1}^{\infty }{\mathscr {W}}_m }^{ {\widetilde{P}}_{\mathbf{s }}} . \end{aligned}$$

From this we deduce that (4.23) is equivalent to

$$\begin{aligned} \overline{ \cap _{m=1}^{\infty }{\mathscr {W}}_m }^{ {\widetilde{P}}_{\mathbf{s }}} \subset {\mathscr {I}} . \end{aligned}$$
(4.24)

Let \( V \in \overline{\bigcap _{m=1}^{\infty }{\mathscr {W}}_m }^{ {\widetilde{P}}_{\mathbf{s }}} \). Then there exist \( V_{i} \in \cap _{m=1}^{\infty }{\mathscr {W}}_m \) \((i=1,2)\) such that

$$\begin{aligned} V_{1} \subset V \subset V_{2} , \quad {\widetilde{P}}_{\mathbf{s }}(V_{2} \backslash V_{1}) = 0 . \end{aligned}$$
(4.25)

For \( \mathbf{b }\in W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})\), we set

$$\begin{aligned} W_{\mathbf{b }}:=(W (S^{{\mathbb {N}}}), \mathbf{b }) = \{ (\mathbf{w },\mathbf{b })\, ;\, \mathbf{w }\in W (S^{{\mathbb {N}}})\}. \end{aligned}$$

Note that \( V_{i} \cap W_{\mathbf{b }}= (A_{i}^{\mathbf{b }},\mathbf{b })\) for a unique \( A_{i}^{\mathbf{b }} \subset W (S^{{\mathbb {N}}})\). Clearly, \( W_{\mathbf{b }}\in \cap _{m=1}^{\infty }{\mathscr {W}}_m \). From these and \( V_{i} \in \cap _{m=1}^{\infty }{\mathscr {W}}_m \) we see

$$\begin{aligned}&(A_{i}^{\mathbf{b }},\mathbf{b }) = V_{i} \cap W_{\mathbf{b }}\in {\cap _{m=1}^{\infty }{\mathscr {W}}_m } . \end{aligned}$$

Hence \( (A_{i}^{\mathbf{b }},\mathbf{b }) \in {{\mathscr {W}}_m } \) for all \( m \in \mathbb {N}\). Then \( A_{i}^{\mathbf{b }} \in \cap _{m=1}^{\infty }\sigma [\mathbf{w }^{m*}] \). This implies

$$\begin{aligned}&A_{i}^{\mathbf{b }} \in {{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})} . \end{aligned}$$
(4.26)

Let \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) be as in (4.12). Let \( A^{\mathbf{b }}\) be a unique subset of \( W (S^{{\mathbb {N}}})\) such that \( (A^{\mathbf{b }},\mathbf{b }) = V\cap W_{\mathbf{b }}\). From \( (A_{i}^{\mathbf{b }},\mathbf{b }) = V_{i} \cap W_{\mathbf{b }}\) combined with (4.12), (4.25), and (4.26), there exists a set \( U \in {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \) such that \( P_{\text {Br}}^{\infty }( U ) = 1 \) and that

$$\begin{aligned}&A^{\mathbf{b }} \in \overline{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) }^{{\widetilde{P}}_{\mathbf{s },\mathbf{b }}} \quad \text { for all } \mathbf{b } \in U. \end{aligned}$$
(4.27)

Recall the relation between \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) and \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) given by Remark 4.3. Then (4.27) implies

$$\begin{aligned} (A^{\mathbf{b }},\mathbf{b }) \in \overline{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} \quad \hbox { for all}\ \mathbf{b } \in U . \end{aligned}$$

This together with \( (A^{\mathbf{b }},\mathbf{b }) = V\cap W_{\mathbf{b }}\) yields

$$\begin{aligned} V\cap W_{\mathbf{b }}\in \overline{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} \quad \hbox { for all}\ \mathbf{b } \in U . \end{aligned}$$
(4.28)

Recall that \( {\mathscr {K}} = {{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))}\) by (4.20). Then we rewrite (4.28) as

$$\begin{aligned}&V\cap W_{\mathbf{b }}\in \overline{{\mathscr {K}} }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} \quad \text { for all }\mathbf{b } \in U . \end{aligned}$$

Obviously, \( W_{\mathbf{b }}^c \in \overline{{\mathscr {K}} }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} \). Hence we deduce

$$\begin{aligned} \big ( V\cap W_{\mathbf{b }}\big ) \cup W_{\mathbf{b }}^c \in \overline{{\mathscr {K}} }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} \quad \text { for all }\mathbf{b } \in U . \end{aligned}$$
(4.29)

Because \( W_{\mathbf{b }}= W (S^{{\mathbb {N}}})\times \{ \mathbf{b } \} \) and \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) is concentrated on \( W_{\mathbf{b }}\), we deduce from (4.29)

$$\begin{aligned}&V \in \overline{{\mathscr {K}} }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}} \quad \text { for all } {\mathbf{b } \in U } . \end{aligned}$$

Then we obtain

$$\begin{aligned}&V \in \bigcap _{\mathbf{b } \in U } \overline{{\mathscr {K}} }^{{\mathbb {P}}_{\mathbf{s },\mathbf{b }}}. \end{aligned}$$
(4.30)

From (4.22), (4.30), and \( P_{\text {Br}}^{\infty }( U ) = 1 \), we obtain (4.24). This completes the proof. \(\square \)

For \( (\mathbf{X },\mathbf{B })\) as above, we assume \( (\mathbf{X },\mathbf{B })\) satisfies \((\mathbf {IFC})_{{\mathbf {s}}}\) with \( F_{\mathbf{s }}^m\). We set

$$\begin{aligned}&{\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B })= {F_{\mathbf{s }}^m}(\mathbf{B }^m,\mathbf{X }^{m*})\oplus \mathbf{X }^{m*}. \end{aligned}$$
(4.31)

Then we see \( {\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B })\in \mathbf{W }_{\text {sol}}\). By construction, \( ({{\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B })}, {\mathbf{B }})\) satisfies the SDE in integral form: for \( i = 1,\ldots ,m \),

$$\begin{aligned}&{\widetilde{F}}_{\mathbf{s }}^{m,i} (\mathbf{X },\mathbf{B })_t= s_i + \int _0^t \sigma ^i ( {\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B }))_u dB_u^i + \int _0^t b^i ( {\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B }))_u du , \end{aligned}$$
(4.32)

where \( {\widetilde{F}}_{\mathbf{s }}^m= ({\widetilde{F}}_{\mathbf{s }}^{m,1},\ldots ,{\widetilde{F}}_{\mathbf{s }}^{m,m},X^{m+1},X^{m+2}, \ldots ) \) by construction. We write

$$\begin{aligned}&{F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })= \lim _{m\rightarrow \infty } {{\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B })}\text { in } \mathbf{W }_{\text {sol}}\text { under } P_{\mathbf{s }}\end{aligned}$$
(4.33)

if \( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })\in \mathbf{W }_{\text {sol}}\) and the limits (4.34)–(4.36) converge in \( W (S) \) \( P_{\mathbf{s }}\)-a.s. for all \( i \in \mathbb {N}\):

$$\begin{aligned} \lim _{m\rightarrow \infty } {\widetilde{F}}_{\mathbf{s }}^{m,i} (\mathbf{X },\mathbf{B })=&\,{F}_{\mathbf{s }}^{\infty , i} (\mathbf{X },\mathbf{B }), \end{aligned}$$
(4.34)
$$\begin{aligned} \lim _{m\rightarrow \infty } \int _0^{\cdot } \sigma ^i ( {\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B }))_u dB_u^i =&\,\int _0^{\cdot } \sigma ^i ( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }))_u dB_u^i , \end{aligned}$$
(4.35)
$$\begin{aligned} \lim _{m\rightarrow \infty } \int _0^{\cdot } b^i ( {\widetilde{F}}_{\mathbf{s }}^m(\mathbf{X },\mathbf{B }))_u du =&\int _0^{\cdot } b^i ( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }))_u du . \end{aligned}$$
(4.36)

Lemma 4.2

Assume that \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). Then the following hold.

  1.

    The sequence of maps \( \{ {\widetilde{F}}_{\mathbf{s }}^m\}_{m\in {\mathbb {N}}} \) is consistent in the sense that, for \( P_{\mathbf{s }}\)-a.s.,

    $$\begin{aligned}&{\widetilde{F}}_{\mathbf{s }}^{m,i} (\mathbf{X },\mathbf{B })= {\widetilde{F}}_{\mathbf{s }}^{m+n,i} (\mathbf{X },\mathbf{B })\quad \text { for all } 1\le i \le m ,\ m, n \in {\mathbb {N}}. \end{aligned}$$
    (4.37)

    Furthermore, (4.33) holds and the map \( {F_{\mathbf{s }}^{\infty }}\) is well defined.

  2.

    \( (\mathbf{X },\mathbf{B })\) is a fixed point of \( {F_{\mathbf{s }}^{\infty }}\) in the sense that, for \( P_{\mathbf{s }}\)-a.s.,

    $$\begin{aligned}&( \mathbf{X }, {\mathbf{B }}) = ( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }), {\mathbf{B }}) . \end{aligned}$$
    (4.38)
  3.

    \( {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b })\) is \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\)-measurable for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\), where

    $$\begin{aligned}&{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}= \overline{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})}^{{\widetilde{P}}_{\mathbf{s },\mathbf{b }}} . \end{aligned}$$
    (4.39)

    \( {F_{\mathbf{s }}^{\infty }}\) is \( {\mathscr {I}} \)-measurable, where \( {\mathscr {I}} \) is given by (4.21) and (4.22).

Proof

The consistency (4.37) is clear because \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\): as in (4.8), pathwise uniqueness gives \( \mathbf{Y }^m = \mathbf{X }^m \) for every m. (4.38) is immediate from the consistency (4.37). We have thus obtained (1) and (2).

Let \( \overline{{\mathscr {W}}_m }\) be the completion of \( {\mathscr {W}}_m \) with respect to \( {\widetilde{P}}_{\mathbf{s }}\) as before. From Definition 3.9 and (4.31) we see \( {\widetilde{F}}_{\mathbf{s }}^m\) is \( \overline{{\mathscr {W}}_m }\)-measurable. Clearly,

$$\begin{aligned} \overline{{\mathscr {W}}_m }\supset \overline{{\mathscr {W}}_n }\quad \text { for } m \le n . \end{aligned}$$

Hence, \( {\widetilde{F}}_{\mathbf{s }}^{n}\) are \( \overline{{\mathscr {W}}_m }\)-measurable for all \( n\ge m \). Combining this with (4.33), we see that the limit function \( {F_{\mathbf{s }}^{\infty }}\) is \( \overline{{\mathscr {W}}_m }\)-measurable for each \( m \in {\mathbb {N}}\). Then \( {F_{\mathbf{s }}^{\infty }}\) is \( \{\cap _{m=1}^{\infty }\overline{{\mathscr {W}}_m }\}\)-measurable. Hence \( {F_{\mathbf{s }}^{\infty }}\) is \( {\mathscr {I}} \)-measurable from Lemma 4.1. The first claim in (3) follows from the second and the definition of \( {\mathscr {I}} \). We have thus obtained (3). \(\square \)
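In the special case of no interaction the consistency (4.37) is transparent, since the solution map then acts coordinatewise. The following toy check (an illustrative sketch under our own assumptions: an Euler scheme for the Ornstein–Uhlenbeck-type system \( dY^i_t = dB^i_t - Y^i_t\,dt \); all names are hypothetical) verifies mechanically that solving for \( m = 3 \) and \( m = 5 \) with the same Brownian increments yields identical first three coordinates.

```python
import numpy as np

def solve_noninteracting(m, s, increments, dt=1e-2):
    """Coordinatewise Euler scheme for dY^i = dB^i - Y^i dt.
    Without interaction the solution map acts on each coordinate
    separately, so it is consistent in the sense of (4.37)."""
    y = np.array(s[:m], dtype=float)
    out = [y.copy()]
    for dB in increments:          # rows: increments of (B^1, ..., B^5)
        y = y + dB[:m] - y * dt
        out.append(y.copy())
    return np.array(out)

rng = np.random.default_rng(7)
dB = rng.normal(0.0, 0.1, size=(50, 5))  # shared Brownian increments
s = [1.0, -1.0, 0.0, 2.0, -2.0]
path3 = solve_noninteracting(3, s, dB)
path5 = solve_noninteracting(5, s, dB)
# the first three coordinates agree exactly, as in (4.37)
```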

The next theorem reveals the relation between the existence and pathwise uniqueness of strong solutions and tail triviality of the labeled path space \( W (S^{{\mathbb {N}}})\). The definition of the strong solution starting at \( \mathbf{s }\) is given by Definition 3.6 with (3.3)–(3.5) replaced by (4.2)–(4.4).

Recall that \( (\mathbf{X },\mathbf{B })\) is a continuous process defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) satisfying (4.1), introduced at the beginning of this section, and \( {\widetilde{P}}_{\mathbf{s }}\) is the distribution of \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\). From \( P_{\mathbf{s }}\), we set \({\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) as in (4.12) and \( {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}}; \cdot )\) as in (4.10).

Theorem 4.2

  1.

    Assume that \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). Then \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a strong solution of (4.2)–(4.4) if and only if \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) holds.

  2.

    Let \( \mathbf{X } \) and \( \mathbf{X }' \) be strong solutions of (4.2)–(4.4) defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) with the same \( \{ {\mathscr {F}}_t \}\)-Brownian motion \( {\mathbf{B }}\). Assume that \( (\mathbf{X },\mathbf{B })\) and \( (\mathbf{X }' ,\mathbf{B })\) under \( P_{\mathbf{s }}\) satisfy \((\mathbf {IFC})_{{\mathbf {s}}}\). Then,

    $$\begin{aligned}&P_{\mathbf{s }}( \mathbf{X } = \mathbf{X }' ) =1 \end{aligned}$$
    (4.40)

    if and only if for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\)

    $$\begin{aligned}&\quad {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};{\widetilde{P}}_{\mathbf{s },\mathbf{b }}) = {\mathscr {T}}_{\text {path}} ^{\{1\}} (S^{{\mathbb {N}}};{\widetilde{P}}_{\mathbf{s },\mathbf{b }}'). \end{aligned}$$
    (4.41)

    Here \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}' \) is defined by (4.11) and (4.12) by replacing \( \mathbf{X }\) by \( \mathbf{X }'\).

Proof

We prove (1). We note that by Lemma 4.2 (2)

$$\begin{aligned}&( \mathbf{X }, {\mathbf{B }}) = ( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }), {\mathbf{B }}) \quad ~\text { for } P_{\mathbf{s }}\text {-a.s}. \end{aligned}$$
(4.42)

The fixed point property (4.42) is the key to the proof. We shall utilize the structure \( \mathbf{X }= {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })\). By assumption, \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4). Then \(( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }), {\mathbf{B }}) \) under \( P_{\mathbf{s }}\) is also a weak solution by (4.42).

Suppose \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) holds. Then \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) \) is \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-trivial for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). By Lemma 4.2 (3), \( {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b })\) is \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\)-measurable for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). From these facts, we see that \( {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b }) \) is \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-a.s. constant. Hence \( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) becomes a function of \( {\mathbf{B }}\) alone. So we write, under \( P_{\mathbf{s }}\),

$$\begin{aligned}&{F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })= {{F}_{\mathbf{s }}}({\mathbf{B }}) . \end{aligned}$$
(4.43)

We next prove that \( {{F}_{\mathbf{s }}}\) is a \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) \)-measurable function. Let \( A \in {\mathscr {B}}(W (S^{{\mathbb {N}}}))\) and set \( A ' = ({F_{\mathbf{s }}^{\infty }})^{-1} ( A ) \). From (4.13), (4.42), and (4.43), we deduce that there exists \( {\mathscr {N}} \) such that \( P_{\text {Br}}^{\infty }({\mathscr {N}} ) = 0 \) and that

$$\begin{aligned} {\mathbb {P}}_{\mathbf{s },\mathbf{b }}( A ' )&= P_{\mathbf{s }}(({F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }), {\mathbf{B }}) \in A ' | {\mathbf{B }}= \mathbf{b }) \nonumber \\&= P_{\mathbf{s }}(({{F}_{\mathbf{s }}}({\mathbf{B }}) , {\mathbf{B }}) \in A ' | {\mathbf{B }}= \mathbf{b }) \nonumber \\&= 1_{ A ' } (({{F}_{\mathbf{s }}}(\mathbf{b }) , \mathbf{b }) ) \quad \text { for all } \mathbf{b }\notin {\mathscr {N}} \nonumber \\&= 1_{ A } ({{F}_{\mathbf{s }}}(\mathbf{b })) = 1_{{{F}_{\mathbf{s }}}^{-1} ( A )} (\mathbf{b }) . \end{aligned}$$
(4.44)

Note that \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}( A ' ) \) is \( {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})) \)-measurable in \( \mathbf{b }\) because \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) is the regular conditional probability of \( P_{\mathbf{s }}\) induced by \( {\mathbf{B }}\). Hence from (4.44) we see \( {{F}_{\mathbf{s }}}^{-1} ( A ) \in {\mathscr {B}}(P_{\text {Br}}^{\infty }) \). This implies \( {{F}_{\mathbf{s }}}\) is a \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) \)-measurable function.

Similarly, we can prove that \( {{F}_{\mathbf{s }}}\) is a \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty }) / {\mathscr {B}}_t \)-measurable function for each t. Indeed, let \( \mathbf{X }^{[0,t]}= \{\mathbf{X }_s\}_{0\le s \le t } \) and \( \mathbf{B }^{[0,t]}= \{ \mathbf{B }_s \}_{0\le s \le t } \) be obtained by restricting the time parameter set from \( [0,\infty )\) to [0, t]. Then \( (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})\) is a weak solution of (4.2)–(4.4) on [0, t]. The filtered space of \( (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})\) is taken to be \( ( \varOmega , {\mathscr {F}}_t , P_{\mathbf{s }}, \{ {\mathscr {F}}_s \}_{0\le s \le t} )\).

We set \({F}_{\mathbf{s }}^{[0,t]}(\mathbf{B } ) = \{{F}_{\mathbf{s }}(\mathbf{B })_s \}_{0\le s \le t } \). Then from (4.42) and (4.43) we see under \( P_{\mathbf{s }}\)

$$\begin{aligned}&{F}_{\mathbf{s }}^{[0,t]}(\mathbf{B } ) = \mathbf{X }^{[0,t]}. \end{aligned}$$
(4.45)

Applying the argument for \( (\mathbf{X },\mathbf{B })\) to \( (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})\), we obtain the counterparts of (4.42) and (4.43) for \( (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})\); that is, we have functions \( {F}_{\mathbf{s }}^{\infty ,t} \) of \( (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})\) and \( {F}_{\mathbf{s }}^t \) of \( \mathbf{B }^{[0,t]}\) satisfying, under \( P_{\mathbf{s }}\),

$$\begin{aligned}&(\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})= ( {F}_{\mathbf{s }}^{\infty ,t} (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]}), \mathbf{B }^{[0,t]} ), \end{aligned}$$
(4.46)
$$\begin{aligned}&{F}_{\mathbf{s }}^{\infty ,t} (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})= {F}_{\mathbf{s }}^t (\mathbf{B }^{[0,t]} ). \end{aligned}$$
(4.47)

Hence from (4.45), (4.46), and (4.47) we have under \( P_{\mathbf{s }}\)

$$\begin{aligned}&{F}_{\mathbf{s }}^{[0,t]}(\mathbf{B } ) = {F}_{\mathbf{s }}^t (\mathbf{B }^{[0,t]} ) . \end{aligned}$$
(4.48)

We set \( W _{\mathbf{0 },t}= \{ \mathbf{w }\in C ([0,t]; {\mathbb {R}}^{d\mathbb {N}}) ; \mathbf{w }_0 = \mathbf{0 } \} \). We regard \( P_{\text {Br}}^{\infty }\) as a probability measure on \( ( W _{\mathbf{0 },t}, {\mathscr {B}}( W _{\mathbf{0 },t}) )\) and denote by \(\overline{{\mathscr {B}}( W _{\mathbf{0 },t})}\) the completion of \( {\mathscr {B}}( W _{\mathbf{0 },t})\) with respect to \( P_{\text {Br}}^{\infty }\). We replace \( {\mathscr {B}}(P_{\text {Br}}^{\infty }) \) and \( {\mathscr {B}}(W (S^{{\mathbb {N}}})) \) with \(\overline{{\mathscr {B}}( W _{\mathbf{0 },t})}\) and \( {\mathscr {B}}_t \), respectively. Then, applying the argument above to \( (\mathbf{X }^{[0,t]},\mathbf{B }^{[0,t]})\), we deduce that \( {F}_{\mathbf{s }}^t \) is \( \overline{{\mathscr {B}}( W _{\mathbf{0 },t})}/ {\mathscr {B}}_t \)-measurable. Because we can naturally identify \( \overline{{\mathscr {B}}( W _{\mathbf{0 },t})}\) with \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty }) \), we see that \( {F}_{\mathbf{s }}^t \) is \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty }) / {\mathscr {B}}_t \)-measurable. This and (4.48) imply that \( {F}_{\mathbf{s }}^{[0,t]}\) is \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty }) / {\mathscr {B}}_t \)-measurable. Hence we immediately conclude that \( {{F}_{\mathbf{s }}}\) is \( {\mathscr {B}}_t (P_{\text {Br}}^{\infty }) / {\mathscr {B}}_t \)-measurable.

Collecting these facts, we see that \( {{F}_{\mathbf{s }}}({\mathbf{B }}) = {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a strong solution. In particular, the function \( {{F}_{\mathbf{s }}}\) is a strong solution in the sense of Definition 3.6.

Suppose conversely that \( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a strong solution. Then by Definition 3.6 there exists a function \( {{F}_{\mathbf{s }}}\) such that \( {F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B })= {{F}_{\mathbf{s }}}(\mathbf{B })\) \( P_{\mathbf{s }}\)-a.s. By (4.11) and (4.12) we have \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}= P_{\mathbf{s }}(\mathbf{X }\in \cdot | \mathbf{B }= \mathbf{b }) \) for given \( \mathbf{b }\). Hence the distribution of \( {F_{\mathbf{s }}^{\infty }}(\mathbf{w } , \mathbf{b }) \) under \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) is the delta measure \( \delta _{\mathbf{z }}\) concentrated at some non-random path \( \mathbf{z }\). Therefore, we deduce that \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\) is \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-trivial for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). We have thus obtained (1).

We proceed with the proof of (2).

We suppose (4.40). Then the image measures \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) and \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}' \) defined by (4.12) coincide for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). We thus obtain (4.41).

We next suppose (4.41). By assumption \( (\mathbf{X },\mathbf{B })\) and \( (\mathbf{X }' ,\mathbf{B })\) under \( P_{\mathbf{s }}\) are strong solutions. Hence from (1) we see \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) \) is trivial with respect to \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) and \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}' \) for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). Combining this with (4.41) we obtain for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\)

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}|_{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) } = {\widetilde{P}}_{\mathbf{s },\mathbf{b }}'|_{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) } \end{aligned}$$
(4.49)

and in particular

$$\begin{aligned}&{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}= {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}'. \end{aligned}$$
(4.50)

Here we set \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}'\) by (4.39) by replacing \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) with \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}'\).

Let \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}\) and \( {\mathbb {P}}_{\mathbf{s },\mathbf{b }}' \) be defined by (4.13) for \( (\mathbf{X },\mathbf{B })\) and \( (\mathbf{X }' ,\mathbf{B })\) under \( P_{\mathbf{s }}\), respectively. Then from (4.49) we easily see for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\)

$$\begin{aligned}&{\mathbb {P}}_{\mathbf{s },\mathbf{b }}|_{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) \times \{ \mathbf{b }\} } = {\mathbb {P}}_{\mathbf{s },\mathbf{b }}'|_{{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) \times \{ \mathbf{b }\} } . \end{aligned}$$
(4.51)

Here we set \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) \times \{ \mathbf{b }\} = {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}}) \times \sigma [\{\mathbf{b }\}]\).

Let \( {\mathscr {I}} \) and \( {\mathscr {I}}' \) be the \( \sigma \)-fields defined by (4.22) for \( (\mathbf{X },\mathbf{B })\) and \( (\mathbf{X }' ,\mathbf{B })\) under \( P_{\mathbf{s }}\), respectively. Then we obtain from (4.49) and (4.51) combined with (4.14) and (4.15)

$$\begin{aligned}&{\mathscr {I}} = {\mathscr {I}}' \end{aligned}$$
(4.52)

and for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\)

$$\begin{aligned}&\overline{{\mathscr {K}} }^{\, {\mathbb {P}}_{\mathbf{s },\mathbf{b }}}= \overline{{\mathscr {K}}} ^{\, {\mathbb {P}}_{\mathbf{s },\mathbf{b }}'} . \end{aligned}$$
(4.53)

Let \( {F_{\mathbf{s }}^{\infty }}\) and \( {F}_{\mathbf{s }}'^{\infty }\) be as in Lemma 4.2 for \( (\mathbf{X },\mathbf{B })\) and \( (\mathbf{X }' ,\mathbf{B })\) under \( P_{\mathbf{s }}\), respectively. Then by (4.52) and Lemma 4.2 (3), both \( {F_{\mathbf{s }}^{\infty }}\) and \( {F}_{\mathbf{s }}'^{\infty }\) are \( {\mathscr {I}} \)-measurable. From (4.50) and Lemma 4.2 (3), both \( {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b })\) and \( {F}_{\mathbf{s }}'^{\infty } (\cdot ,\mathbf{b })\) are \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\)-measurable for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\).

Let \( U, V\in {\mathscr {I}} \) be subsets with \( {\widetilde{P}}_{\mathbf{s }}(U)={\widetilde{P}}_{\mathbf{s }}' (V)=1\) such that \( {F_{\mathbf{s }}^{\infty }}\) and \( {F}_{\mathbf{s }}'^{\infty }\) satisfy (4.38) on \( U\) and \( V\), respectively. From (4.38) we have \( {F_{\mathbf{s }}^{\infty }}(\mathbf{w },\mathbf{b }) = \mathbf{w } \) on \( U\) and \( {\widetilde{P}}_{\mathbf{s }}(U)=1\). Then we take a version of \( {F_{\mathbf{s }}^{\infty }}\) under \( {\widetilde{P}}_{\mathbf{s }}\) such that \( {F_{\mathbf{s }}^{\infty }}(\mathbf{w },\mathbf{b }) = \mathbf{w }\) on \( U\cup V\). Similarly, we take a version of \( {F}_{\mathbf{s }}'^{\infty } \) under \( {\widetilde{P}}_{\mathbf{s }}'\) such that \( {F}_{\mathbf{s }}'^{\infty } (\mathbf{w },\mathbf{b }) = \mathbf{w }\) on \( U\cup V\). We thus have

$$\begin{aligned}&{F_{\mathbf{s }}^{\infty }}(\mathbf{w },\mathbf{b }) = \mathbf{w } = {F}_{\mathbf{s }}'^{\infty } (\mathbf{w },\mathbf{b }) \quad \text { for } (\mathbf{w },\mathbf{b }) \in U\cup V. \end{aligned}$$

Hence for \( {\widetilde{P}}_{\mathbf{s }}\)- and \( {\widetilde{P}}_{\mathbf{s }}'\)-a.s.

$$\begin{aligned}&{F_{\mathbf{s }}^{\infty }}= {F}_{\mathbf{s }}'^{\infty } . \end{aligned}$$
(4.54)

We then deduce from (4.54) for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\)

$$\begin{aligned}&\quad \quad {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b })= {F}_{\mathbf{s }}'^{\infty } (\cdot ,\mathbf{b }) \quad \text { for } {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\text {- and } {\widetilde{P}}_{\mathbf{s },\mathbf{b }}' \text {-a.s.} \end{aligned}$$
(4.55)

We recall that \( {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b })\) and \( {F}_{\mathbf{s }}'^{\infty } (\cdot ,\mathbf{b }) \) are \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\)-measurable for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\), and that \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\) is \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)- and \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}' \)-trivial for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). From these, we write

$$\begin{aligned}&{F}_{\mathbf{s }}(\mathbf{b }) = {F_{\mathbf{s }}^{\infty }}(\cdot ,\mathbf{b }) ,\quad {F}_{\mathbf{s }}' (\mathbf{b }) = {F}_{\mathbf{s }}'^{\infty } (\cdot ,\mathbf{b }) . \end{aligned}$$
(4.56)

Then the strong solutions \( \mathbf{X }\) and \( \mathbf{X }'\) are given by \( \mathbf{X } = {F}_{\mathbf{s }}(\mathbf{B })\) and \( \mathbf{X }'={F}_{\mathbf{s }}' (\mathbf{B })\). Putting (4.49), (4.55), and (4.56) together, we obtain \( {F}_{\mathbf{s }}(\mathbf{b }) = {F}_{\mathbf{s }}' (\mathbf{b }) \) for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\). This implies \( \mathbf{X } = {F}_{\mathbf{s }}(\mathbf{B }) = {F}_{\mathbf{s }}' (\mathbf{B }) = \mathbf{X }'\) under \( P_{\mathbf{s }}\), which yields (4.40). This completes the proof of (2). \(\square \)
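The fixed-point identity (4.42) and the representation \( \mathbf{X } = {F}_{\mathbf{s }}(\mathbf{B })\) express that a strong solution is a measurable functional of the driving Brownian motion alone. As a purely illustrative, hedged sketch (a one-dimensional linear SDE, not the ISDE of the text; all names are ours), the Euler scheme makes such a functional explicit and shows that two solutions driven by the same noise coincide:

```python
import numpy as np

# Toy SDE: dX_t = -theta * X_t dt + dB_t. The Euler recursion
#   X_{n+1} = X_n - theta * X_n * dt + dB_n
# has the closed form (with a = 1 - theta*dt)
#   X_{m+1} = a^{m+1} x0 + sum_{k<=m} a^{m-k} dB_k,
# i.e. X = F(B): a deterministic map of the Brownian increments.

rng = np.random.default_rng(0)
theta, dt, n, x0 = 1.0, 1e-3, 1000, 0.7
dB = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments

# Solution 1: Euler recursion driven by B.
x, path_euler = x0, []
for k in range(n):
    x = x - theta * x * dt + dB[k]
    path_euler.append(x)

# Solution 2: the explicit functional F(B) of the same noise.
a = 1.0 - theta * dt
path_func = [a**(m + 1) * x0 + sum(a**(m - k) * dB[k] for k in range(m + 1))
             for m in range(n)]

# Both constructions driven by the same B agree: pathwise uniqueness
# in this toy setting, and X is measurable in B alone.
err = max(abs(u - v) for u, v in zip(path_euler, path_func))
assert err < 1e-8
```

The two computations are algebraically identical, so they differ only by floating-point rounding; this is the finite-dimensional shadow of the statement that a strong solution is a function of the driving noise.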

Let \( (\mathbf{X },\mathbf{B })\) be a continuous process defined on \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) satisfying (4.1) as before. Recall the notion of pathwise uniqueness starting at \( \mathbf{s }\) under the constraint of \((\mathbf {IFC})_{{\mathbf {s}}}\) given by Definition 3.5 and Remark 3.4 (2).

Theorem 4.3

Assume \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\). Then the pathwise uniqueness of weak solutions starting at \( \mathbf{s }\) under the constraint of \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) holds.

Proof

Let \( (\mathbf{X },\mathbf{B })\) and \( (\mathbf{X }',\mathbf{B })\) be weak solutions of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) with the same Brownian motion \( \mathbf{B }\). Then \( (\mathbf{X },\mathbf{B })\) and \((\mathbf{X }',\mathbf{B })\) are strong solutions by Theorem 4.2 (1). By \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) we see (4.41) holds. Hence applying Theorem 4.2 (2) we obtain (4.40). From this we deduce the pathwise uniqueness of weak solutions starting at \( \mathbf{s }\) under the constraint of \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\). \(\square \)

Proof of Theorem 4.1

Claim (1) follows immediately from Theorem 4.2 (1).

Let \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) be any weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\). Then from (1) we deduce that \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) becomes a strong solution with \( {\hat{{F}_{\mathbf{s }}}} \) such that \( \hat{\mathbf{X }} = \hat{{F}_{\mathbf{s }}} (\hat{\mathbf{B }})\).

Because the distribution of \( \mathbf{B }\) coincides with that of \( \hat{\mathbf{B }}\), \( ({F}_{\mathbf{s }}(\mathbf{B }),\mathbf{B }) \) and \( ({F}_{\mathbf{s }}(\hat{\mathbf{B }}) ,\hat{\mathbf{B }})\) have the same distribution. Hence \( ({F}_{\mathbf{s }}(\hat{\mathbf{B }}) ,\hat{\mathbf{B }})\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). By assumption, \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) hold. Then Theorem 4.3 yields

$$\begin{aligned}&P_{\mathbf{s }}( {F}_{\mathbf{s }}(\hat{\mathbf{B }}) ={\hat{{F}_{\mathbf{s }}}} (\hat{\mathbf{B }}) ) = 1. \end{aligned}$$

Thus, we deduce \( {F}_{\mathbf{s }}= {\hat{F}}_{\mathbf{s }}\) for \( P_{\text {Br}}^{\infty }\)-a.s. Hence we obtain

$$\begin{aligned} \hat{\mathbf{X }} = {\hat{F}}_{\mathbf{s }} (\hat{\mathbf{B }})= {F}_{\mathbf{s }}(\hat{\mathbf{B }}). \end{aligned}$$

This implies (3.7).

Let \( \mathbf{B }'\) be any \( \{ {\mathscr {F}}_t' \} \)-Brownian motion defined on \( (\varOmega ', {\mathscr {F}}',P', \{ {\mathscr {F}}_t' \} )\). Because \( \mathbf{B }'\) and \( \mathbf{B }\) have the same distribution, \( ({F}_{\mathbf{s }}(\mathbf{B }'), \mathbf{B }')\) and \( ({F}_{\mathbf{s }}(\mathbf{B }), \mathbf{B })\) have the same distribution. Hence \( ({F}_{\mathbf{s }}(\mathbf{B }'), \mathbf{B }')\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\). This implies \( {F}_{\mathbf{s }}(\mathbf{B }')= {F_{\mathbf{s }}^{\infty }}({F}_{\mathbf{s }}(\mathbf{B }'), \mathbf{B }')\) by the same argument as at the beginning of the proof. Then by Theorem 4.2 (1) we deduce that \( {F}_{\mathbf{s }}(\mathbf{B }') \) is a strong solution. We have thus completed the proof of (2). \(\square \)

We conclude this section with two notions of solutions.

Definition 4.2

(IFC solution) A continuous process \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is called an IFC solution of (4.2)–(4.4) if \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). Also, the distribution of an IFC solution is called an IFC solution.

We introduce a new notion of solution of ISDEs. It is equivalent to the notion of a weak solution of (4.2)–(4.4), formulated in terms of an asymptotic infinite system of finite-dimensional SDEs with consistency (AIFC). We do not use the notion of AIFC solutions in the present paper; nevertheless, we expect that it will be useful for solving ISDEs.

Definition 4.3

A continuous process \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is called an AIFC solution of (4.2)–(4.4) if \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) satisfies \((\mathbf {IFC})_{{\mathbf {s}}}\) and (4.33). Also, the distribution of an AIFC solution is called an AIFC solution.

Remark 4.4

We note that IFC solutions in Definition 4.2 are always AIFC solutions. Conversely, we can construct an IFC solution of (4.2)–(4.4) from an AIFC solution. Indeed, assuming that \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) is an AIFC solution of (4.2)–(4.4) and letting \( {F_{\mathbf{s }}^{\infty }}\) be as in (4.33), we see that \( ({F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }), {\mathbf{B }})\) under \( P_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4). This is obvious from (4.32) and (4.34)–(4.36). Then \( ({F_{\mathbf{s }}^{\infty }}(\mathbf{X },\mathbf{B }), {\mathbf{B }})\) under \( P_{\mathbf{s }}\) is an IFC solution. Thus, constructing an IFC solution, and in particular a weak solution, is reduced to constructing an AIFC solution.

Triviality of \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\): second tail theorem (Theorem 5.1)

Let \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) be the tail \( \sigma \)-field of the labeled path space \( W (S^{{\mathbb {N}}})\) introduced in (4.9). The purpose of this section is to prove triviality of \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) under distributions of IFC solutions, which is a crucial step in constructing a strong solution as we saw in Theorem 4.1. This step is very hard in general because \( W (S^{{\mathbb {N}}})\) is a huge space and its tail \( \sigma \)-field \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) is topologically wild. To overcome this difficulty, we introduce a sequence of well-behaved tail \( \sigma \)-fields, and deduce triviality of \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) from that of \( {\mathscr {T}}\, ({\textsf {S}})\) under the stationary distribution of the unlabeled dynamics \( {\textsf {X}}\), step by step along this sequence of tail \( \sigma \)-fields.

The space \( {\textsf {S}}\) is a tiny infinite-dimensional space compared with \( S^{{\mathbb {N}}}\) and \( W (S^{{\mathbb {N}}})\). Hence, unlike \( S^{{\mathbb {N}}}\) and \( W (S^{{\mathbb {N}}})\), the space \( {\textsf {S}}\) carries a well-behaved probability measure \( \mu \). We can take \( \mu \) to be associated with \( {\textsf {S}}\)-valued stochastic dynamics satisfying the \( \mu \)-absolute continuity condition \(\mathbf {(AC)}\) and the no big jump condition \(\mathbf {(NBJ)}\). This fact is important for the derivation.

Let \( \lambda \) be a probability measure on \( ({\textsf {S}}, {\mathscr {B}}({\textsf {S}}) )\). We write \( {\textsf {w}}=\{ {\textsf {w}}_t\} \in W ({\textsf {S}})\). Let \( {\textsf {P}}_{\lambda }\) be a probability measure on \( (W ({\textsf {S}}),{\mathscr {B}}(W ({\textsf {S}})) )\) such that

$$\begin{aligned} {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_0^{-1} = \lambda . \end{aligned}$$
(5.1)

If necessary, we extend the domain of \( {\textsf {P}}_{\lambda }\) by completion of measures. We call \( {\textsf {P}}_{\lambda }\) a lift dynamics of \( \lambda \) if \( {\textsf {P}}_{\lambda }\) and \( \lambda \) satisfy (5.1). For a given random point field \( \lambda \), there exist many lift dynamics. We shall take the specific lift dynamics given by (5.2).

Let \( \mathbf{X }\) be an \( S^{{\mathbb {N}}}\)-valued continuous process on \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\). We set \( {\mathfrak {u}} _{\text {path}}(\mathbf{X })= {\textsf {X}} = \sum _{i\in \mathbb {N}} \delta _{X^i}\) as before. We assume that the associated unlabeled process \( {\mathfrak {u}} _{\text {path}}(\mathbf{X }) \) is an \( {\textsf {S}}\)-valued continuous process such that \( P \circ {\mathfrak {u}} (\mathbf{X }_0)^{-1} = \lambda \) and define \( {\textsf {P}}_{\lambda }\) by

$$\begin{aligned} {\textsf {P}}_{\lambda }:= (P \circ \mathbf{X }^{-1}) \circ {\mathfrak {u}} _{\text {path}}^{-1} = P \circ {\mathfrak {u}} _{\text {path}}(\mathbf{X })^{-1} = P \circ {\textsf {X}}^{-1} . \end{aligned}$$
(5.2)

Let \( \mu \) be a probability measure on \( ({\textsf {S}}, {\mathscr {B}}({\textsf {S}}) )\) and let \( W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) be as (2.9). We make the following assumptions originally introduced in Sect. 3.2.

\(\mathbf {(TT)}\)     \( \mu \) is tail trivial.

\(\mathbf {(AC)}\)     \( {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_t^{-1} \prec \mu \) for all \( 0< t < \infty \).

\(\mathbf {(SIN)}\)      \( {\textsf {P}}_{\lambda }( W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = 1 \).

\(\mathbf {(NBJ)}\)    \( P ( \{ {\textsf {m}}_{r,T} (\mathbf{X })< \infty \} ) = 1 \) for all \( r, T \in \mathbb {N}\).
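Condition \(\mathbf {(TT)}\) is the starting point of the whole scheme. As a hedged illustration of tail triviality in the simplest possible setting (an i.i.d. sequence, not the random point fields of the text; all names are ours), Kolmogorov's 0–1 law forces any tail-measurable quantity, such as the limiting running mean, to be almost surely constant:

```python
import numpy as np

# Kolmogorov's 0-1 law: for an i.i.d. sequence the tail sigma-field is
# trivial, so a tail-measurable random variable like lim_n S_n / n is
# almost surely constant (here, equal to p by the strong law of large
# numbers). Empirically, independent realizations of the running mean
# all concentrate at the same value.

rng = np.random.default_rng(1)
n_steps, n_runs, p = 200_000, 16, 0.5
running_means = [rng.binomial(1, p, n_steps).mean() for _ in range(n_runs)]

# Every realization of the tail variable is (nearly) the same constant p.
assert all(abs(m - p) < 0.01 for m in running_means)
```

This is only the classical prototype; the content of Theorems 5.2–5.4 is to propagate such triviality from \( \mu \) through the much larger path spaces.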

We denote by \( P_{\mathbf{s }}\) the regular conditional probability such that

$$\begin{aligned}&P_{\mathbf{s }}= P (\cdot | \mathbf{X }_0=\mathbf{s }) . \end{aligned}$$
(5.3)

Let \( \mathbf{B } \) be a continuous process defined on the filtered space \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\). Let \( {\widetilde{P}}_{\mathbf{s }}\) denote the distribution of \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\):

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s }}= P_{\mathbf{s }}\circ (\mathbf{X },\mathbf{B })^{-1} . \end{aligned}$$
(5.4)

Let \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) be the regular conditional probabilities of \( P_{\mathbf{s }}\) such that

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}= P_{\mathbf{s }}( \mathbf{X } \in \cdot \, | {\mathbf{B }}= \mathbf{b }) . \end{aligned}$$
(5.5)

Then we easily see

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s }}= {P}((\mathbf{X },\mathbf{B })\in \cdot | \mathbf{X }_0= \mathbf{s }) ,\\&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}= {P}( \mathbf{X } \in \cdot \, | \mathbf{X }_0 =\mathbf{s },\, {\mathbf{B }}= \mathbf{b }) = {\widetilde{P}}_{\mathbf{s }}(\mathbf{w } \in \cdot | \mathbf{b }) . \end{aligned}$$

We assume that \( (\mathbf{X },\mathbf{B })\) under \( (\varOmega ,{\mathscr {F}}, P ,\{ {\mathscr {F}}_t \} )\) is a weak solution of ISDE (4.2)–(4.3). Then \( (\mathbf{X },\mathbf{B })\) under \( (\varOmega ,{\mathscr {F}}, P_{\mathbf{s }}, \{ {\mathscr {F}}_t \} ) \) is a weak solution of ISDE (4.2)–(4.4) for \( P \circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Hence we can apply the results in Sect. 4 to \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\).

We now state the main theorem of this section.

Theorem 5.1

(Second tail theorem) Assume that \( \mu \) and \( {\textsf {P}}_{\lambda }\) satisfy \(\mathbf {(TT)}\) and \(\mathbf {(AC)}\) for \( \mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Assume that there exists a label \( {\mathfrak {l}} \) such that \( {\mathfrak {l}} \circ {\mathfrak {u}} (\mathbf{s }) = \mathbf{s } \) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Assume that \( {\widetilde{P}}_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Then \( {\widetilde{P}}_{\mathbf{s }}\) satisfies \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Furthermore, ISDE (4.2)–(4.4) satisfies \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

To explain the strategy of the proof, we introduce the notions of cylindrical tail \( \sigma \)-fields on \( W (S^{{\mathbb {N}}})\) and \( W ({\textsf {S}})\). We set

$$\begin{aligned} \mathbf{T } = \{ \mathbf{t }=(t_1,\ldots ,t_m)\, ;\, 0< t_i< t_{i+1} \, (1\le i < m),\, \ m \in {\mathbb {N}}\} . \end{aligned}$$
(5.6)

We remark that we exclude \( t_1 = 0\) in the definition of \( \mathbf{T }\) in (5.6).

Let \({\pi _r^{c}}\) be the projection \( {\pi _r^{c}}\!:\!{\textsf {S}}\!\rightarrow \!{\textsf {S}}\) such that \( {\pi _r^{c}}({\textsf {s}}) = {\textsf {s}} (\cdot \cap S_{r}^c)\). For \( {\textsf {w}}=\{ {\textsf {w}}_t\} \in W ({\textsf {S}})\) and \( \mathbf{t }=(t_1,\ldots ,t_m) \in \mathbf{T }\) we set \( {\pi _r^{c}}({{\textsf {w}}}_{\mathbf{t }} ) = ({\pi _r^{c}}({{\textsf {w}}}_{t_1}), \ldots , {\pi _r^{c}}({{\textsf {w}}}_{t_m} ) ) \in {\textsf {S}}^m \).
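As a hedged sketch (one-dimensional points and illustrative names, not from the text), the projection \( {\pi _r^{c}}\) simply discards the part of a configuration inside \( S_r \); events in the cylindrical tail \( \sigma \)-field (5.7) depend, at each fixed time vector \( \mathbf{t }\), only on the configuration outside \( S_r \) for every r:

```python
# A configuration in S is a locally finite point set; pi_r^c keeps only
# the points outside S_r = {|x| <= r}. Intersecting over r = 1, 2, ...
# in (5.7) retains exactly the information "far away from the origin".

def pi_r_c(config, r):
    """Restrict a configuration (list of 1-d points) to S_r^c = {|x| > r}."""
    return [x for x in config if abs(x) > r]

config = [-5.0, -1.2, 0.3, 2.0, 7.5]
assert pi_r_c(config, 1.0) == [-5.0, -1.2, 2.0, 7.5]
assert pi_r_c(config, 3.0) == [-5.0, 7.5]
# For r' > r, the projection further thins the configuration:
assert set(pi_r_c(config, 3.0)) <= set(pi_r_c(config, 1.0))
```

The monotonicity in r is what makes the intersection over r in (5.7) a genuinely "tail" object.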

Let \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) be the cylindrical tail \( \sigma \)-field of \( W ({\textsf {S}})\) such that

$$\begin{aligned} \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})= \bigvee _{\mathbf{t } \in \mathbf{T }} \bigcap _{r=1}^{\infty } \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{\mathbf{t }} ) \ ] . \end{aligned}$$
(5.7)

Let \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \) be the cylindrical tail \( \sigma \)-field of \( W (S^{{\mathbb {N}}})\) defined as

$$\begin{aligned} \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) = \bigvee _{\mathbf{t } \in \mathbf{T }} \bigcap _{n=1}^{\infty } \sigma [{\mathbf{w }}_{\mathbf{t }}^{n*}]. \end{aligned}$$
(5.8)

Here \({\mathbf{w }}_{\mathbf{t }}^{n*} = ({\mathbf{w }}_{t_i}^{n*})_{i=1}^m \) for \( \mathbf{t } \in \mathbf{T }\), where \( {\mathbf{w }}_t^{n*}=(\textit{w}_t^k)_{k=n +1}^{\infty }\) for \( {\mathbf{w }}= (\textit{w}^n )_{n\in \mathbb {N}}\).

We shall prove Theorem 5.1 along with the following scheme:

$$\begin{aligned}&{\mathscr {T}}\, ({\textsf {S}})\xrightarrow [ {\begin{array}{c} \text {Theorem}~5.2\\ {\mathbf {(TT)}}, \ {\mathbf {(AC)}} \end{array}}]{(\text {Step I})}&\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\xrightarrow [{\begin{array}{c} \text {Theorem}~5.3 \\ {\mathbf {(TT)}},\, {\mathbf {(AC)}},\, {\mathbf {(SIN)}},\, {\mathbf {(NBJ)}} \end{array}}]{(\text {Step II})}&\widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \xrightarrow [{\begin{array}{c} \text {Theorem}~5.4\\ {{\mathbf {(IFC)}}_{\mathbf{s }}} \end{array}}]{(\text {Step III})}&{\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\\&\ \mu&\ {\textsf {P}}_{\lambda }= P \circ {\textsf {X}}^{-1}&\ P \circ \mathbf{X }^{-1}&\ \ {\widetilde{P}}_{\mathbf{s },\mathbf{b }}. \end{aligned}$$

We shall prove triviality of each tail \( \sigma \)-field in the scheme under the distribution displayed beneath it. The theorems under the arrows correspond to the respective steps, and the conditions there indicate what is used at each passage. Our goal is to obtain triviality of \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) under \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) for \( P_{\text {Br}}^{\infty }\)-a.s. \( \mathbf{b }\) and for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

Remark 5.1

Theorem 5.2 needs only \(\mathbf {(TT)}\) and \(\mathbf {(AC)}\) for \( \mu \). Theorem 5.3 needs only \(\mathbf {(TT)}\) and \(\mathbf {(AC)}\) for \( \mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). In Theorems 5.2 and 5.3, we do not use any properties of the ISDE. That is, \( {\widetilde{P}}_{\mathbf{s }}\) is not necessarily a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). The generality of these theorems may prove interesting and useful in other contexts. Theorem 5.4, unlike Theorems 5.2 and 5.3, requires the properties of weak solutions of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\). The map \( {F_{\mathbf{s }}^{\infty }}\) in (4.33), given by a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\), plays an important role in the proof of Theorem 5.4.

Remark 5.2

An example of an unlabeled path \( {\textsf {w}}\) such that \( {{\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}))}= \infty \) is given in Remark 3.10. Such a large fluctuation \( {{\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}))}= \infty \) of the unlabeled path \( {\textsf {w}}\) makes it difficult to control \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\) by the cylindrical tail \( \sigma \)-field \(\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) of unlabeled paths. Hence we assume \(\mathbf {(NBJ)}\).

Step I: From \( {\mathscr {T}}\, ({\textsf {S}})\) to \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\)

Let \( {\mathscr {T}}\, ({\textsf {S}})\) be the tail \( \sigma \)-field of \( {\textsf {S}}\) and \( \lambda \) a probability measure on \( {\textsf {S}}\) with lift dynamics \( {\textsf {P}}_{\lambda }\) as before. We shall lift \( \mu \)-triviality of \( {\mathscr {T}}\, ({\textsf {S}})\) to \( {\textsf {P}}_{\lambda }\)-triviality of the cylindrical tail \( \sigma \)-field \(\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) of \( W ({\textsf {S}})\). For a probability Q on \( {\mathscr {B}}( W ({\textsf {S}})) \), we set

$$\begin{aligned} \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; Q ) = \{ {\mathscr {X}} \in \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\, ;\, Q ( {\mathscr {X}} ) = 1 \} . \end{aligned}$$

We state the main theorem of this subsection.

Theorem 5.2

Assume \(\mathbf {(TT)}\) for \( \mu \) and \(\mathbf {(AC)}\) for \( \mu \) and \( {\textsf {P}}_{\lambda }\). The following then hold.

  1.

    \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) is \( {\textsf {P}}_{\lambda }\)-trivial.

  2.

    \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; {\textsf {P}}_{\lambda }) \) depends only on \( \mu \).

Remark 5.3

Theorem 5.2 (2) means that, if \( \mu \) is invariant under the dynamics, then

$$\begin{aligned}&\widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; {\textsf {P}}_{\lambda }) = \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; {\textsf {P}}_{\mu }) . \end{aligned}$$

In the case of Theorem 3.1, we can take \( {\textsf {P}}_{\mu }\) to be the distribution of the diffusion in Lemma 9.1 starting from the stationary measure \( \mu \). We shall identify the distribution \( {\textsf {P}}_{\lambda }|_{ \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})}\) in Proposition 5.1. From this we can specify \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; {\textsf {P}}_{\lambda }) \).

For a probability \( \nu \) on \( {\mathscr {B}}({\textsf {S}}) \), we set \( {\mathscr {T}}^{\{1\}}({\textsf {S}}; \nu ) = \{ {\textsf {A}} \in {\mathscr {T}}\, ({\textsf {S}})\, ;\, \nu ({\textsf {A}} ) = 1 \} \).

Lemma 5.1

Under the same assumptions as Theorem 5.2, the following hold.

  1.

    \( {\mathscr {T}}\, ({\textsf {S}})\) is \( {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_t^{-1} \)-trivial for each \( t > 0\).

  2.

    \( {\mathscr {T}}^{\{1\}}({\textsf {S}}; {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_t^{-1} ) = {\mathscr {T}}^{\{1\}}({\textsf {S}}; \mu ) \) for each \( t > 0\).

Proof

(1) is obvious. Indeed, let \( {\textsf {A}}\in {\mathscr {T}}\, ({\textsf {S}})\) and suppose \( {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_t^{-1} ({\textsf {A}}) > 0 \). Then \( \mu ({\textsf {A}}) > 0 \) by \(\mathbf {(AC)}\). Hence from \(\mathbf {(TT)}\), we deduce \( \mu ({\textsf {A}}) = 1 \). This combined with \(\mathbf {(AC)}\) implies \( {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_t^{-1} ({\textsf {A}}) = 1 \). We thus obtain (1). We deduce from \(\mathbf {(TT)}\), \(\mathbf {(AC)}\), and (1) that \({\textsf {A}} \in {\mathscr {T}}^{\{1\}}({\textsf {S}}; {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_t^{-1} ) \) if and only if \({\textsf {A}} \in {\mathscr {T}}^{\{1\}}({\textsf {S}}; \mu ) \). This implies (2). \(\square \)

We extend Lemma 5.1 to multi-time distributions. Let \( {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}})\) be the cylindrical tail \( \sigma \)-field conditioned at \( \mathbf{t }=(t_1,\ldots ,t_n) \in \mathbf{T }\):

$$\begin{aligned}&{\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}}) = \bigcap _{r=1}^{\infty } \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{\mathbf{t }} ) \, ] .\end{aligned}$$
(5.9)

Using (5.9) we can rewrite \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) as \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})= \bigvee _{\mathbf{t } \in \mathbf{T }} {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}})\).

We set \( {\textsf {S}}^{\mathbf{u }}={\textsf {S}}^{n}\), \( \mu ^{\otimes \mathbf{u }} = \mu ^{\otimes n}\), and \( |\mathbf{u }| = n \) for \( \mathbf{u }=(u_1,\ldots ,u_n) \in \mathbf{T }\).

Lemma 5.2

(1) Let \( \mathbf{u }=(u_i)_{i=1}^p ,\mathbf{v }=(v_j)_{j=1}^q\in \mathbf{T } \) be such that \( u_p<v_1\), and set \( (\mathbf{u },\mathbf{v }) =(u_1,\ldots ,u_p,v_1,\ldots ,v_q) \in \mathbf{T } \). Then,

$$\begin{aligned}&{\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \times {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}}) \subset {\mathscr {T}}_{\text {path}} ^{(\mathbf{u },\mathbf{v })} ({\textsf {S}}) . \end{aligned}$$
(5.10)

(2) For each \( {\textsf {C}} \in {\mathscr {T}}_{\text {path}} ^{(\mathbf{u },\mathbf{v })} ({\textsf {S}})\) and \( {\textsf {y}} \in {\textsf {S}}^{\mathbf{v }}\), the set \( {\textsf {C}}^{[{\textsf {y}}]}\) is \( {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}})\)-measurable. Here

$$\begin{aligned}&{\textsf {C}}^{[{\textsf {y}}]} = \{ {\textsf {x}}\in {\textsf {S}}^{\mathbf{u }}\, ;\, ({\textsf {x}}, {\textsf {y}})\in {\textsf {C}}\} . \end{aligned}$$

Furthermore, \( \mu ^{\otimes \mathbf{u }}({\textsf {C}}^{[{\textsf {y}}]} )\) is a \( {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}})\)-measurable function in \( {\textsf {y}}\).

Proof

We easily deduce that, for each \( r \in \mathbb {N}\),

$$\begin{aligned} \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{\mathbf{u }} ) \, ] \times \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{\mathbf{v }} ) \, ] = \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{(\mathbf{u },\mathbf{v })} ) \, ] . \end{aligned}$$
(5.11)

Then, taking intersections over s and t on the left-hand side of (5.11), we obtain, for each \( r \in \mathbb {N}\),

$$\begin{aligned} \bigcap _{s=1}^{\infty } \sigma [\, \pi _s^c({{\textsf {w}}}_{\mathbf{u }} ) \, ] \times \bigcap _{t=1}^{\infty } \sigma [\, \pi _t^c({{\textsf {w}}}_{\mathbf{v }} ) \, ] \subset \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{(\mathbf{u },\mathbf{v })} ) \, ] . \end{aligned}$$
(5.12)

Hence, taking the intersection in r on the right-hand side of (5.12), we deduce (5.10).

Next, we prove (2). By assumption and from the obvious inclusion, we see that

$$\begin{aligned}&{\textsf {C}} \in {\mathscr {T}}_{\text {path}} ^{(\mathbf{u },\mathbf{v })} ({\textsf {S}}) \subset \sigma [\, \pi _s^c({{\textsf {w}}}_{(\mathbf{u },\mathbf{v })} ) \, ] \text { for all } s \in \mathbb {N}. \end{aligned}$$

Then, from (5.11), we see that

$$\begin{aligned}&{\textsf {C}} \in \sigma [\, \pi _s^c({{\textsf {w}}}_{\mathbf{u }} ) \, ] \times \sigma [\, \pi _s^c({{\textsf {w}}}_{\mathbf{v }} ) \, ] \quad \text { for all } s \in \mathbb {N}. \end{aligned}$$

Hence, \( {\textsf {C}}^{[{\textsf {y}}]}\) is \( \sigma [\, \pi _s^c({{\textsf {w}}}_{\mathbf{u }} ) \, ] \)-measurable for all \( s \in \mathbb {N}\). From this we deduce that \( {\textsf {C}}^{[{\textsf {y}}]}\) is \( {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}})\)-measurable. The second claim in (2) can be proved similarly. \(\square \)

To simplify the notation, we set \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{t }}}= {\textsf {P}}_{\lambda }\circ {{\textsf {w}}}_{\mathbf{t }}^{-1}\) for \( \mathbf{t } \in \mathbf{T }\).

Proposition 5.1

Under the same assumptions as Theorem 5.2, the following hold.

For each \( \mathbf{t }=(t_1,\ldots ,t_n) \in \mathbf{T }\),

$$\begin{aligned} {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{t }}}|_{{\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}}) } = \mu ^{\otimes \mathbf{t }} |_{ {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}}) } . \end{aligned}$$
(5.13)

In particular, for any \( {\textsf {C}}\in {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}}) \), the following identity with dichotomy holds.

$$\begin{aligned}&{\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{t }}}( {\textsf {C}}) = \mu ^{\otimes \mathbf{t }} ({\textsf {C}}) \in \{ 0,1 \}. \end{aligned}$$
(5.14)

Proof

We prove Proposition 5.1 by induction on \( n = |\mathbf{t }| \).

If \( n = 1\), then the claims follow from Lemma 5.1. Here we used \(\mathbf {(TT)}\) and \(\mathbf {(AC)}\) to apply Lemma 5.1.

Next, suppose that the claims hold for all \( m \le n-1\). Let \( \mathbf{u }\) and \( \mathbf{v }\) be such that \( (\mathbf{u }, \mathbf{v }) = \mathbf{t }\) and that \( 1 \le |\mathbf{u }|, |\mathbf{v }| < n \). We then see that \( |\mathbf{u }|+|\mathbf{v }| = |\mathbf{t }| = n \). We shall prove in the sequel that the claims hold for \( \mathbf{t }\) with \( |\mathbf{t }| = n \).

Assume that \( {\textsf {C}}\in {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}}) \). Then we deduce from Lemma 5.2 that \( {\textsf {C}}^{[{\textsf {y}}]} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}})\) and that \( \mu ^{\otimes \mathbf{u }}( {\textsf {C}}^{[{\textsf {y}}]} ) \) is a \( {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}})\)-measurable function in \( {\textsf {y}}\). By the induction hypothesis, we see that \( {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]} ) \) is \( {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}})\)-measurable in \( {\textsf {y}}\) and that the following identity with dichotomy holds.

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]} ) = \mu ^{\otimes \mathbf{u }} ( {\textsf {C}}^{[{\textsf {y}}]} ) \in \{ 0,1 \} \quad \text { for each } {\textsf {y}}\in {\textsf {S}}^{\mathbf{v }}.\end{aligned}$$
(5.15)

By disintegration, we see that

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in \cdot ) = \int _{{\textsf {S}}^{\mathbf{v }}} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in \cdot | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}). \end{aligned}$$
(5.16)

By the induction hypothesis, we see that \( {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} ) \in \{ 0,1 \} \) for all \( {\textsf {A}} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \). Now let \({\textsf {A}} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \) and suppose that

$$\begin{aligned}&{\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} ) = a , \end{aligned}$$
(5.17)

where \( a \in \{ 0,1 \} \). We then obtain from (5.16) and \( {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} ) \in \{ 0,1 \} \) that

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) = a \quad \text { for } {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\text { -a.s.}\, {\textsf {y}}. \end{aligned}$$
(5.18)

From (5.16), (5.17), and (5.18), we deduce that for each \( {\textsf {A}} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \)

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) = {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}}) \quad \text { for } {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\text {-a.s.}\, {\textsf {y}}. \end{aligned}$$
(5.19)

We next remark that

$$\begin{aligned}&{\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}| {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) = 1 \text { for } {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\text {-a.s.}\, {\textsf {y}}. \end{aligned}$$
(5.20)

We refer the reader to the corollary of Theorem 3.3 on page 15 of [9] for the general result from which (5.20) is derived.
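To make the argument self-contained, here is a hedged sketch of how (5.20) follows, assuming that \( {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{v }} \in \cdot \, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) \) is a regular conditional distribution and that \( {\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) \) is countably generated (as it is on a Polish space). For every \( {\textsf {B}} \in {\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) \),

```latex
\begin{aligned}
\int_{{\textsf{B}}}
{\textsf{P}}_{\lambda}( {{\textsf{w}}}_{\mathbf{v}} \in {\textsf{B}} \, | \,
{{\textsf{w}}}_{\mathbf{v}} = {\textsf{y}} ) \,
{\textsf{P}}_{\lambda}^{{{\textsf{w}}}_{\mathbf{v}}}(d{\textsf{y}})
= {\textsf{P}}_{\lambda}( {{\textsf{w}}}_{\mathbf{v}} \in {\textsf{B}} ,\,
{{\textsf{w}}}_{\mathbf{v}} \in {\textsf{B}} )
= {\textsf{P}}_{\lambda}^{{{\textsf{w}}}_{\mathbf{v}}}({\textsf{B}}) .
\end{aligned}
```

Since the integrand is at most one, it equals one \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.e. on \( {\textsf {B}}\). Running \( {\textsf {B}}\) through a countable generator, for \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.s. \( {\textsf {y}}\) the conditional distribution concentrates on every generating set containing \( {\textsf {y}}\) simultaneously, which yields (5.20).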

For all \( {\textsf {A}} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \) and \( {\textsf {B}} \in {\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) \) we deduce from (5.19) and (5.20) that

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}}, \, {{\textsf {w}}}_{\mathbf{v }} \in {\textsf {B}})&= \int _{{\textsf {S}}^{\mathbf{v }}} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}}, \, {{\textsf {w}}}_{\mathbf{v }} \in {\textsf {B}}\, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) \, {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) \nonumber \\&= \int _{{\textsf {B}} } {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) \, {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) \quad \text { by }(5.20) \nonumber \\&= \int _{{\textsf {B}} } {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} ) \, {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) \quad \quad \quad \quad \text { by }(5.19) \nonumber \\&= {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} ) \, {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}({\textsf {B}} ). \end{aligned}$$
(5.21)

From (5.21) and the monotone class theorem, we deduce that \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{t }}}= {\textsf {P}}_{\lambda }^{({{\textsf {w}}}_{\mathbf{u }},{{\textsf {w}}}_{\mathbf{v }})}\) restricted to \( {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \times {\mathscr {B}}({\textsf {S}}^{\mathbf{v }})\) is a product measure. We thus obtain

$$\begin{aligned}&({\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{t }}}, {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \times {\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) ) = ({\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{u }}}|_{{\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) } \times {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}|_{{\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) } , {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \times {\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) ). \end{aligned}$$
(5.22)

That is,

$$\begin{aligned}&{\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{t }}}|_{ {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \times {\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) } = {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{u }}}|_{{\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) } \times {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}|_{{\mathscr {B}}({\textsf {S}}^{\mathbf{v }}) }. \end{aligned}$$

In particular, from (5.22), we deduce that for \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.s. \({\textsf {y}}\)

$$\begin{aligned}&{\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} \, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}\, ) = {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {A}} \, ) \quad \text { for all } {\textsf {A}} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) . \end{aligned}$$
(5.23)

For any \( {\textsf {C}} \in {\mathscr {B}} ({\textsf {S}}^{\mathbf{t }}) \), we deduce that

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{t }} \in {\textsf {C}})&= \int _{{\textsf {S}}^{\mathbf{v }}} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]}, \, {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}\, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) \, {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) \nonumber \\&= \int _{{\textsf {S}}^{\mathbf{v }}} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]} \, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}\, ) \, {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) . \end{aligned}$$
(5.24)

Here, we used \( {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}| {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}) = 1 \) for \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.s. \({\textsf {y}}\), which follows from (5.20).

Assume \( {\textsf {C}} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}}) \). Then from Lemma 5.2 (2), we obtain

$$\begin{aligned} {\textsf {C}}^{[{\textsf {z}}]} \in {\mathscr {T}}_{\text {path}} ^{\mathbf{u }} ({\textsf {S}}) \quad \text {for all } {\textsf {z}}\in {\textsf {S}}^{\mathbf{v }} . \end{aligned}$$
(5.25)

Hence from (5.23) and (5.25), we see that for \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.s. \( {\textsf {y}}\)

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {z}}]} \, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}\, ) = {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {z}}]} \, ) \quad \text { for all } {\textsf {z}}\in {\textsf {S}}^{\mathbf{v }} . \end{aligned}$$
(5.26)

We emphasize that (5.26) holds for all \( {\textsf {z}}\in {\textsf {S}}^{\mathbf{v }} \). Hence we can take \( {\textsf {z}}= {\textsf {y}}\) in (5.26) for \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.s. \( {\textsf {y}}\). This yields for \( {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}\)-a.s. \( {\textsf {y}}\)

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]} \, | {{\textsf {w}}}_{\mathbf{v }} = {\textsf {y}}\, ) = {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]} \, ) . \end{aligned}$$
(5.27)

From (5.24), (5.27), and (5.15) we obtain

$$\begin{aligned} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{t }} \in {\textsf {C}})&= \int _{{\textsf {S}}^{\mathbf{v }}} {\textsf {P}}_{\lambda }( {{\textsf {w}}}_{\mathbf{u }} \in {\textsf {C}}^{[{\textsf {y}}]}) {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) = \int _{{\textsf {S}}^{\mathbf{v }}} \mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]}) {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}}) . \end{aligned}$$
(5.28)

From Lemma 5.2 (2), we see that \( \mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]}) \) is a \( {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}}) \)-measurable function in \( {\textsf {y}}\). Combining this with the induction hypothesis (5.13) for \( |\mathbf{v }|< n \), we obtain

$$\begin{aligned} \int _{{\textsf {S}}^{\mathbf{v }}} \mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]}) {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}(d{\textsf {y}})&= \int _{{\textsf {S}}^{\mathbf{v }}} \mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]}) {\textsf {P}}_{\lambda }^{{{\textsf {w}}}_{\mathbf{v }}}|_{ {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}}) } (d{\textsf {y}}) \nonumber \\&= \int _{{\textsf {S}}^{\mathbf{v }}} \mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]}) \mu ^{\otimes \mathbf{v }} |_{ {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}}) } (d{\textsf {y}}) \nonumber \\&= \int _{{\textsf {S}}^{\mathbf{v }}} \mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]}) \mu ^{\otimes \mathbf{v }} (d{\textsf {y}}) = \mu ^{\otimes \mathbf{t }} ({\textsf {C}}) . \end{aligned}$$
(5.29)

From (5.28) and (5.29), we obtain (5.13) and the equality in (5.14) for \( |\mathbf{t }| = n \).

We deduce \( \mu ^{\otimes \mathbf{t }} \)-triviality of \( {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}})\) from Lemma 5.2 (2) and the equality

$$\begin{aligned}&\int _{{\textsf {S}}^{\mathbf{v }}}\mu ^{\otimes \mathbf{u }} ({\textsf {C}}^{[{\textsf {y}}]})\mu ^{\otimes \mathbf{v }} (d{\textsf {y}}) = \mu ^{\otimes \mathbf{t }} ({\textsf {C}}) \end{aligned}$$
(5.30)

by induction with respect to \( n=|\mathbf{t }|\). Indeed, because \(\mu ^{\otimes \mathbf{u }}({\textsf {C}}^{[{\textsf {y}}]}) \in \{0,1\}\) by the induction hypothesis and \( \mu ^{\otimes \mathbf{u }}({\textsf {C}}^{[{\textsf {y}}]})\) is \( {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}})\)-measurable in \({\textsf {y}}\) by Lemma 5.2 (2), we obtain \( \mu ^{\otimes \mathbf{t }} \)-triviality of \( {\mathscr {T}}_{\text {path}} ^{\mathbf{t }} ({\textsf {S}})\) from (5.30). In particular, \( \mu ^{\otimes \mathbf{t }} ({\textsf {C}}) \in \{ 0,1 \} \) holds. This completes the proof. \(\square \)
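To spell out this last step, the following minimal computation shows how (5.30), combined with the induction hypothesis for \( |\mathbf{u }|, |\mathbf{v }| < n \) and Lemma 5.2 (2), forces the dichotomy. Set \( {\textsf {N}} = \{ {\textsf {y}}\in {\textsf {S}}^{\mathbf{v }} \, ;\, \mu ^{\otimes \mathbf{u }}({\textsf {C}}^{[{\textsf {y}}]}) = 1 \} \); this set is \( {\mathscr {T}}_{\text {path}} ^{\mathbf{v }} ({\textsf {S}})\)-measurable, so \( \mu ^{\otimes \mathbf{v }}({\textsf {N}}) \in \{0,1\}\) by the induction hypothesis. Then

```latex
\begin{aligned}
\mu^{\otimes \mathbf{t}}({\textsf{C}})
= \int_{{\textsf{S}}^{\mathbf{v}}}
\mu^{\otimes \mathbf{u}}({\textsf{C}}^{[{\textsf{y}}]}) \,
\mu^{\otimes \mathbf{v}}(d{\textsf{y}})
= \int_{{\textsf{S}}^{\mathbf{v}}}
1_{{\textsf{N}}}({\textsf{y}}) \,
\mu^{\otimes \mathbf{v}}(d{\textsf{y}})
= \mu^{\otimes \mathbf{v}}({\textsf{N}}) \in \{ 0, 1 \} ,
\end{aligned}
```

where the second equality uses \( \mu ^{\otimes \mathbf{u }}({\textsf {C}}^{[{\textsf {y}}]}) \in \{0,1\} \), that is, \( \mu ^{\otimes \mathbf{u }}({\textsf {C}}^{[{\textsf {y}}]}) = 1_{{\textsf {N}}}({\textsf {y}})\).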

Proof of Theorem 5.2

For \( \mathbf{t }=(t_1,\ldots ,t_n)\in \mathbf{T }\) and \( {\textsf {A}}_i \in {\mathscr {T}}\, ({\textsf {S}})\), let

$$\begin{aligned}&{\mathscr {X}} = \{ {{\textsf {w}}}\in W ({\textsf {S}})\, ;\, {{\textsf {w}}}_{t_i}\in {\textsf {A}}_{i}\ (i=1,\ldots ,n) \} . \end{aligned}$$
(5.31)

Let \( \textit{C}\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) denote the family of elements of \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) of the form (5.31). Then \( \textit{C}\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) is \( {\textsf {P}}_{\lambda }\)-trivial by Proposition 5.1. The first claim (1) follows from this and the monotone class theorem. The second claim (2) follows immediately from the equality (5.14) in Proposition 5.1. \(\square \)

Step II: From \(\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) to \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \)

Let \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) and \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \) be as in (5.7) and (5.8), respectively:

$$\begin{aligned}&\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})= \bigvee _{\mathbf{t } \in \mathbf{T }} \bigcap _{r=1}^{\infty } \sigma [\, {\pi _r^{c}}({{\textsf {w}}}_{\mathbf{t }} ) \ ] ,\quad \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) = \bigvee _{\mathbf{t } \in \mathbf{T }} \bigcap _{n=1}^{\infty }\sigma [{\mathbf{w }}_{\mathbf{t }}^{n*}]. \end{aligned}$$

This subsection establishes the passage from \(\widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) to \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \).

For a given label \( {\mathfrak {l}} \), we can define the label map \( {\mathfrak {l}} _{\text {path}}\!:\!W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\!\rightarrow \!W (S^{{\mathbb {N}}}) \) by (2.10). Let \( {\mathfrak {l}} \) be a label such that \( {\mathfrak {l}} \circ {\mathfrak {u}} (\mathbf{w }_0) = \mathbf{w }_0 \) holds \( P\circ \mathbf{X }^{-1} \)-a.s. By \(\mathbf {(SIN)}\), the map \( {\mathfrak {l}} _{\text {path}}\) for \( {\mathfrak {l}} \) is well defined \( {\textsf {P}}_{\lambda }\)-a.s. Then \( {\mathfrak {l}} _{\text {path}}\circ {\mathfrak {u}} _{\text {path}}(\mathbf{w })= \mathbf{w } \) holds \( P\circ \mathbf{X }^{-1} \)-a.s. Hence from (5.2) we deduce

$$\begin{aligned}&{\textsf {P}}_{\lambda }\circ {\mathfrak {l}} _{\text {path}}^{-1} = P\circ \mathbf{X }^{-1} . \end{aligned}$$
(5.32)

Assumption \(\mathbf {(NBJ)}\) is the key to constructing the lift from the unlabeled path space to the labeled path space. We use \(\mathbf {(NBJ)}\) in Lemma 5.3 to control the fluctuation of the trajectories of the labeled path \( \mathbf{X }\). Indeed, we have the following.

Lemma 5.3

Assume \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\). Then

$$\begin{aligned}&{\mathfrak {l}} _{\text {path}}^{-1}(\widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) ) \subset \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}}) \quad \text { under }{\textsf {P}}_{\lambda }. \end{aligned}$$
(5.33)

Here, in general, for sub-\( \sigma \)-fields \( {\mathscr {G}} \) and \( {\mathscr {H}} \) on \( (\varOmega , {\mathscr {F}}, P )\), we write \( {\mathscr {G}} \subset {\mathscr {H}} \) under P if \( {\mathscr {G}} \subset {\mathscr {H}} \) holds up to P-null sets, that is, for each \( A \in {\mathscr {G}} \) there exists an \( A' \in {\mathscr {H}} \) such that \( P (A \ominus A' ) = 0 \), where \( A \ominus A' = (A\cup A')\backslash ( A \cap A' ) \) denotes the symmetric difference.

Proof

Let \( {\textsf {m}}_{r,T} \) be as in (3.13). By (5.32) we can rewrite \(\mathbf {(NBJ)}\) as

$$\begin{aligned}&{\textsf {P}}_{\lambda }(\{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) ) < \infty \}) = 1 \text { for all } r, T \in \mathbb {N}. \end{aligned}$$
(5.34)

From (5.34) we easily see

$$\begin{aligned}&{\textsf {P}}_{\lambda }\left( \bigcap _{r=1}^{\infty } \{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) ) < \infty \}\right) = 1 \hbox { for all}\ T \in \mathbb {N}. \end{aligned}$$
(5.35)

Let \( \mathbf{A } \in \bigcup _{\mathbf{t } \in \mathbf{T }} \bigcap _{n=1}^{\infty }\sigma [{\mathbf{w }}_{\mathbf{t }}^{n*}] \). Then there exists a \( \mathbf{t }\in \mathbf{T }\) such that

$$\begin{aligned} \mathbf{A } \in \bigcap _{n=1}^{\infty }\sigma [{\mathbf{w }}_{\mathbf{t }}^{n*}] . \end{aligned}$$

Let \( T \in \mathbb {N}\) be such that \( t_k < T \), where \( \mathbf{t }= (t_1,\ldots ,t_k) \). Then we deduce for each \( r \in {\mathbb {N}} \)

$$\begin{aligned}&{\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\cap \{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) )< \infty \}\nonumber \\&\quad \quad \quad \quad \in \ \sigma [{\pi _r^{c}}( {{\textsf {w}}}_{\mathbf{t }} ) ] \cap \{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) ) < \infty \}, \end{aligned}$$
(5.36)

where \( {\mathscr {F}} \cap A = \{ F \cap A ; F \in {\mathscr {F}} \} \) for a \( \sigma \)-field \( {\mathscr {F}} \) and a subset A.

Combining (5.35) and (5.36), we obtain under \( {\textsf {P}}_{\lambda }\)

$$\begin{aligned} {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })&= {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\bigcap \left\{ \bigcap _{r=1}^{\infty } \{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) )< \infty \}\right\}&\text {by }(5.35) \\&= \bigcap _{r=1}^{\infty } \left\{ {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\bigcap \{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) )< \infty \}\right\} \\&\in \bigcap _{r=1}^{\infty } \left\{ \sigma [{\pi _r^{c}}( {{\textsf {w}}}_{\mathbf{t }} ) ] \bigcap \{ {\textsf {w}}\, ;\, {\textsf {m}}_{r,T} ({\mathfrak {l}} _{\text {path}}({\textsf {w}}) ) < \infty \}\right\}&\text {by }(5.36) \\&= \bigcap _{r=1}^{\infty } \sigma [{\pi _r^{c}}( {{\textsf {w}}}_{\mathbf{t }} ) ]&\text {by }(5.35). \end{aligned}$$

By (5.7) we see \( \bigcap _{r=1}^{\infty } \sigma [{\pi _r^{c}}( {{\textsf {w}}}_{\mathbf{t }} ) ] \subset \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}}) \). Thus, for arbitrary \( \mathbf{A } \in \bigcup _{\mathbf{t } \in \mathbf{T }} \bigcap _{n=1}^{\infty }\sigma [{\mathbf{w }}_{\mathbf{t }}^{n*}] \), we see \( {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\in \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}}) \) under \({\textsf {P}}_{\lambda }\) from the argument above. Hence we obtain

$$\begin{aligned}&{\mathfrak {l}} _{\text {path}}^{-1}\left( \bigcup _{\mathbf{t } \in \mathbf{T }} \bigcap _{n=1}^{\infty }\sigma [{\mathbf{w }}_{\mathbf{t }}^{n*}] \right) \subset \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}}) \quad \hbox { under}\ {\textsf {P}}_{\lambda }. \end{aligned}$$
(5.37)

Applying the monotone class theorem, we then deduce (5.33) from (5.37). \(\square \)

For a probability measure Q on \( W (S^{{\mathbb {N}}})\), we set

$$\begin{aligned}&\widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}}; Q ) = \{ \mathbf{A } \in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}); Q (\mathbf{A }) = 1 \} . \end{aligned}$$

We set two conditions:

(\( {\mathbf {C}}_{\mathrm{path}}1\)) \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \) is \( P \circ \mathbf{X }^{-1} \)-trivial.

(\( \mathbf{C }_{\mathrm{path}}2\)) \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}};P \circ \mathbf{X }^{-1} ) \) depends only on \( \mu \circ {\mathfrak {l}} ^{-1}\).

Theorem 5.3

Assume \(\mathbf {(TT)}\) for \( \mu \) and \(\mathbf {(AC)}\) for \( \mu \) and \( {\textsf {P}}_{\lambda }\). Assume \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\). Then (\( {\mathbf {C}}_{\mathrm{path}}1\)) and (\( \mathbf{C }_{\mathrm{path}}2\)) hold.

Proof

Let \( \mathbf{A } \in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}})\). Then \( {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\in \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) under \({\textsf {P}}_{\lambda }\) by Lemma 5.3. Theorem 5.2 (1) implies that \( \widehat{{\mathscr {T}}}_{\text {path}}({\textsf {S}})\) is \( {\textsf {P}}_{\lambda }\)-trivial. Hence we have

$$\begin{aligned}&{\textsf {P}}_{\lambda }\left( {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\right) \in \{ 0,1 \} . \end{aligned}$$
(5.38)

We see from (5.32)

$$\begin{aligned} {\textsf {P}}_{\lambda }\left( {\mathfrak {l}} _{\text {path}}^{-1}(\mathbf{A })\right) = P \circ \mathbf{X }^{-1} (\mathbf{A }) . \end{aligned}$$
(5.39)

Combining (5.38) and (5.39), we obtain (\( {\mathbf {C}}_{\mathrm{path}}1\)).

From Lemma 5.3 we see that \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}};{\textsf {P}}_{\lambda }\circ {\mathfrak {l}} _{\text {path}}^{-1}) \) depends only on \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; {\textsf {P}}_{\lambda }) \) and \( {\mathfrak {l}} _{\text {path}}\). From Theorem 5.2 (2) we see that \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} ({\textsf {S}}; {\textsf {P}}_{\lambda }) \) depends only on \( \mu \). Recall that \( {\mathfrak {l}} _{\text {path}}({\textsf {w}}) \) is defined for all \( {\textsf {w}}\in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) and \( {\mathfrak {l}} _{\text {path}}\) is unique for a given \( {\mathfrak {l}} \). Collecting these, we have

(\( \mathbf{C }_{\mathrm{path}}2\)’)    \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}};{\textsf {P}}_{\lambda }\circ {\mathfrak {l}} _{\text {path}}^{-1}) \) depends only on \( \mu \) and \( {\mathfrak {l}} \).

For given \( \mu \circ {{\mathfrak {l}} }^{-1}\) and \( \mu \), the label \( {\mathfrak {l}} \) is uniquely determined \( \mu \)-a.s. That is, if \( {\hat{{\mathfrak {l}} }}\) is a label such that \(\mu \circ {\hat{{\mathfrak {l}} }}^{-1} = \mu \circ {{\mathfrak {l}} }^{-1}\), then \( {\hat{{\mathfrak {l}} }} ({\textsf {s}} )= {\mathfrak {l}} ({\textsf {s}} )\) for \( \mu \)-a.s. \( {\textsf {s}} \). Moreover, \( \mu \) is uniquely determined by \( \mu \circ {{\mathfrak {l}} }^{-1}\) because \( (\mu \circ {{\mathfrak {l}} }^{-1})\circ {\mathfrak {u}} ^{-1} = \mu \).

Combining this with (\( \mathbf{C }_{\mathrm{path}}2\)’) we deduce

(\( \mathbf{C }_{\mathrm{path}}2\)”)    \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}};{\textsf {P}}_{\lambda }\circ {\mathfrak {l}} _{\text {path}}^{-1}) \) depends only on \( \mu \circ {\mathfrak {l}} ^{-1}\).

From (5.32) we have \( {\textsf {P}}_{\lambda }\circ {\mathfrak {l}} _{\text {path}}^{-1} = P\circ \mathbf{X }^{-1} \). Hence (\( \mathbf{C }_{\mathrm{path}}2\)”) is equivalent to (\( \mathbf{C }_{\mathrm{path}}2\)), and we obtain (\( \mathbf{C }_{\mathrm{path}}2\)). \(\square \)

Step III: From \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \) to \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\): Proof of Theorem 5.1

Let \( {\widetilde{P}}_{\mathbf{s }}= P_{\mathbf{s }}\circ (\mathbf{X },\mathbf{B })^{-1} \) be as in (5.4), and let (\( {\mathbf {C}}_{\mathrm{path}}1\)) and (\( \mathbf{C }_{\mathrm{path}}2\)) be as in Theorem 5.3. We state the main result of this section.

Theorem 5.4

Assume that \( {\widetilde{P}}_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

  1.

    Assume (\( {\mathbf {C}}_{\mathrm{path}}1\)). Then \( {\widetilde{P}}_{\mathbf{s }}\) satisfies \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

  2.

    Assume (\( {\mathbf {C}}_{\mathrm{path}}1\)) and (\( \mathbf{C }_{\mathrm{path}}2\)). Then (4.2)–(4.4) satisfies \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

Triviality in (\( {\mathbf {C}}_{\mathrm{path}}1\)) and (\( \mathbf{C }_{\mathrm{path}}2\)) is with respect to the annealed probability measure \( P \circ \mathbf{X }^{-1} \). The pair of assumptions \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) is its quenched version. Thus Theorem 5.4 derives the quenched triviality from the annealed one. Another aspect of Theorem 5.4 is the passage from triviality of the cylindrical tail \( \sigma \)-field of the labeled path space to that of the full tail \( \sigma \)-field of the labeled path space.
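Heuristically, the annealed-to-quenched direction rests on disintegrating the annealed law over the initial configuration. The following is only a sketch of this mechanism, ignoring the measurability issues for which \((\mathbf {IFC})_{{\mathbf {s}}}\) and the map \( {F_{\mathbf{s }}^{\infty }}\) are actually needed. For a tail event \( \mathbf{A }\),

```latex
\begin{aligned}
P \circ \mathbf{X}^{-1}(\mathbf{A})
= \int_{S^{\mathbb{N}}}
P_{\mathbf{s}}( \mathbf{X} \in \mathbf{A} ) \;
P \circ \mathbf{X}_0^{-1}(d\mathbf{s}) ,
\end{aligned}
```

so if \( P \circ \mathbf{X }^{-1}(\mathbf{A }) = 1 \), then \( P_{\mathbf{s }}( \mathbf{X } \in \mathbf{A } ) = 1 \) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\), and symmetrically when the annealed probability is zero. The delicate point, which the proof of Theorem 5.4 must address, is that the exceptional null set of \( \mathbf{s }\) may depend on \( \mathbf{A }\), while the tail \( \sigma \)-fields involved need not be countably generated.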

Let \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}= P_{\mathbf{s }}( \mathbf{X } \in \cdot \, | {\mathbf{B }}= \mathbf{b })\) be as in (5.5). To simplify the notation, we set

$$\begin{aligned} \varUpsilon = {P}\circ (\mathbf{X }_0,\mathbf{B } )^{-1} . \end{aligned}$$
(5.40)

Let \( {F_{\mathbf{s }}^{\infty }}\) be the map in (4.33). Such a map \( {F_{\mathbf{s }}^{\infty }}\) exists for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) because \( {\widetilde{P}}_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Then for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} {F_{\mathbf{s }}^{\infty }}({\mathbf{w }},\mathbf{b })= {\mathbf{w }}\quad \text { for } {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\text {-a.s.}\, {\mathbf{w }}\end{aligned}$$
(5.41)

and the map \( {F}_{\mathbf{s }}\) in Theorem 4.1 is given by \( {F}_{\mathbf{s }}(\mathbf{b }) = {F_{\mathbf{s }}^{\infty }}({\mathbf{w }},\mathbf{b })\). Recall that \( {F_{\mathbf{s }}^{\infty }}\) is \( {\mathscr {I}} \)-measurable by Lemma 4.2 (3). \({F_{\mathbf{s }}^{\infty }}\) is, however, not necessarily \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})\times {\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}}))\)-measurable. We define the \( \mathbf{W }_{\text {sol}}\)-valued map \( {F_{\mathbf{s },\mathbf{b }}^{\infty }}\) on a subset of \( \mathbf{W }_{\text {sol}}\) as

$$\begin{aligned}&{F_{\mathbf{s },\mathbf{b }}^{\infty }}({\cdot }) = {F_{\mathbf{s }}^{\infty }}({\cdot },\mathbf{b }) . \end{aligned}$$
(5.42)

The map \( {F_{\mathbf{s },\mathbf{b }}^{\infty }}\) is defined for \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-a.s. \( {\mathbf{w }}\). Let \( \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}= \{ {\mathbf{w }}\in \mathbf{W }_{\text {sol}}\, ;\, {F_{\mathbf{s },\mathbf{b }}^{\infty }}({\mathbf{w }}) = {\mathbf{w }}\} \).

Lemma 5.4

Make the same assumption as in Theorem 5.4. Then \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}) = 1 \) for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\).

Proof

Lemma 5.4 follows immediately from (5.5), (5.41), and (5.42). \(\square \)

We next recall the notion of a countable determining class.

Let \( (U,{\mathscr {U}} )\) be a measurable space and let \( {\mathscr {P}} \) be a family of probability measures on it. Let \( {\mathscr {U}}_0 \) be a subset of \( \cap _{P\in {\mathscr {P}}} {\mathscr {U}}^{P}\), where \( {\mathscr {U}}^{P}\) is the completion of the \( \sigma \)-field \( {\mathscr {U}} \) with respect to P. We call \( {\mathscr {U}}_0 \) a determining class under \( {\mathscr {P}} \) if any two probability measures P and Q in \( {\mathscr {P}} \) are equal whenever \( P (A) = Q (A) \) for all \( A \in {\mathscr {U}}_0 \). Here we extend the domains of P and Q to \( \cap _{P\in {\mathscr {P}}} {\mathscr {U}}^{P}\) in the obvious manner. Furthermore, \( {\mathscr {U}}_0 \) is said to be a determining class of the measurable space \( (U,{\mathscr {U}} )\) if \( {\mathscr {P}} \) can be taken to be the set of all probability measures on \( (U,{\mathscr {U}} )\). A determining class \( {\mathscr {U}}_0 \) is said to be countable if its cardinality is countable.

It is known that a Polish space X equipped with the Borel \( \sigma \)-field \( {\mathscr {B}}(X) \) has a countable determining class. If we replace \( {\mathscr {B}}(X) \) with a sub-\( \sigma \)-field \( {\mathscr {G}}\), the measurable space \( (X,{\mathscr {G}} )\) does not necessarily have a countable determining class. One of the difficulties in carrying out our scheme is that the measurable spaces \( W ({\textsf {S}})\) and \( W (S^{{\mathbb {N}}})\) equipped with tail \( \sigma \)-fields do not have any countable determining classes. In the sequel, we overcome this difficulty using \( {F_{\mathbf{s },\mathbf{b }}^{\infty }}\) and \( \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\).

As \( S^{{\mathbb {N}}}\) is a Polish space with the product topology, \( W (S^{{\mathbb {N}}})\) becomes a Polish space. Hence, there exists a countable determining class \( {\mathscr {V}} \) of \( (W (S^{{\mathbb {N}}}), {\mathscr {B}}(W (S^{{\mathbb {N}}})) )\). We can take such a class \( {\mathscr {V}} \) as follows. Let \( \mathbf{S }_1 \) be a countable dense subset of \( S^{{\mathbb {N}}}\), and

$$\begin{aligned} {\mathscr {U}} = {\mathscr {A}} [ \{ U_{r}(\mathbf{s }) ; \, 0 < r \in {\mathbb {Q}},\, \mathbf{s } \in \mathbf{S }_1 \} ] . \end{aligned}$$

Here \( {\mathscr {A}}[\cdot ] \) denotes the algebra generated by \( \cdot \), and \( U_{r}(\mathbf{s }) \) is the open ball in \( S^{{\mathbb {N}}}\) with center \( \mathbf{s }\) and radius r, taken with respect to a fixed metric compatible with the topology of the Polish space \( S^{{\mathbb {N}}}\). We note that \( {\mathscr {U}} \) is countable because the set \( \{ U_{r}(\mathbf{s }) ; \, 0 < r \in {\mathbb {Q}},\, \mathbf{s } \in \mathbf{S }_1 \} \) is countable. Let

$$\begin{aligned}&{\mathscr {V}} = \bigcup _{{k}=1}^{\infty } \{ ({\mathbf{w }}_{\mathbf{t }})^{-1} (\mathbf{A })\, ;\, \mathbf{A } \in {\mathscr {U}}^{k}, \mathbf{t } \in ({\mathbb {Q}}\cap (0,\infty ))^{k} \}. \end{aligned}$$
(5.43)

We then see that \( {\mathscr {V}} \) is a countable determining class of \( (W (S^{{\mathbb {N}}}), {\mathscr {B}} (W (S^{{\mathbb {N}}})) )\).

Lemma 5.5

Make the same assumption as in Theorem 5.4. Then for each \( \mathbf{V } \in {\mathscr {V}} \) and for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}&\in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) {}_{\mathbf{s },\mathbf{b }} . \end{aligned}$$
(5.44)

Here \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) {}_{\mathbf{s },\mathbf{b }} \) is the completion of the \( \sigma \)-field \( \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \) with respect to \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\).

Proof

Let \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\) be as in (4.39). Then from Lemma 4.2 (3), we deduce that \( {F_{\mathbf{s },\mathbf{b }}^{\infty }}\) is a \( {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\)-measurable function for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\). Hence for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V })&\in {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}. \end{aligned}$$

Hence for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned}&(F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\in {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$
(5.45)

Here for a \( \sigma \)-field \( {\mathscr {F}} \) and a subset A, we set \( {\mathscr {F}} \cap A = \{ F \cap A ; F \in {\mathscr {F}} \} \) as before.

Suppose \( {\mathbf{w }}\in (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\). Then we see \( F_{\mathbf{s },\mathbf{b }}^{\infty }({\mathbf{w }}) \in \mathbf{V }\) from \( {\mathbf{w }}\in (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \) and \(\mathbf{w }= F_{\mathbf{s },\mathbf{b }}^{\infty }({\mathbf{w }}) \) from \( {\mathbf{w }}\in \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\). Hence \(\mathbf{w }\in \mathbf{V } \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\). We thus obtain

$$\begin{aligned} (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\subset \mathbf{V } \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$

Suppose \(\mathbf{w }\in \mathbf{V } \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\). Then \( F_{\mathbf{s },\mathbf{b }}^{\infty }({\mathbf{w }}) = \mathbf{w }\in \mathbf{V }\). Hence \( {\mathbf{w }}\in (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\). Thus,

$$\begin{aligned} \mathbf{V } \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\subset (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$

Combining these, we see that, for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\),

$$\begin{aligned}&(F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}= \mathbf{V } \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$
(5.46)

Because \( \mathbf{V } \in {\mathscr {V}} \), there exist \( k \in {\mathbb {N}} \), \( \mathbf{t } \in ({\mathbb {Q}}\cap (0,\infty ))^{k} \), and \( \mathbf{A }\in {\mathscr {U}}^{k}\) such that \( \mathbf{V }= ({\mathbf{w }}_{\mathbf{t }})^{-1} (\mathbf{A })\). From \( \mathbf{V }= ({\mathbf{w }}_{\mathbf{t }})^{-1} (\mathbf{A })\) we have \( \mathbf{V } \in \sigma [{\mathbf{w }}_{\mathbf{t }}] \). Hence we obtain

$$\begin{aligned} \mathbf{V } \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\in \sigma [{\mathbf{w }}_{\mathbf{t }}] \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$
(5.47)

Combining (5.46) and (5.47), we obtain for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\in&\sigma [{\mathbf{w }}_{\mathbf{t }}] \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$
(5.48)

From (5.45) and (5.48) we deduce for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\in&\big \{ {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\bigcap \sigma [{\mathbf{w }}_{\mathbf{t }}] \big \} \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$
(5.49)

We easily see for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} {\mathscr {T}}_{\text {path}} (S^{{\mathbb {N}}})_{\mathbf{s },\mathbf{b }}\bigcap \sigma [{\mathbf{w }}_{\mathbf{t }}] \subset \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) {}_{\mathbf{s },\mathbf{b }} . \end{aligned}$$

This together with (5.49) yields

$$\begin{aligned}&(F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) {}_{\mathbf{s },\mathbf{b }} \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}. \end{aligned}$$
(5.50)

By Lemma 5.4 we have \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}) = 1 \). Then (5.50) implies (5.44) immediately. \(\square \)

Lemma 5.6

  1.

    Assume (\( {\mathbf {C}}_{\mathrm{path}}1\)). Then, for each \( \mathbf{A } \in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \),

    $$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{A }) \in \{ 0,1 \} \quad \text { for }\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b }). \end{aligned}$$
    (5.51)
  2.

    Assume (\( {\mathbf {C}}_{\mathrm{path}}1\)) and (\( \mathbf{C }_{\mathrm{path}}2\)). Then \( \widehat{{\mathscr {T}}}_{\text {path},\ \varUpsilon }^{\{1\}}(S^{{\mathbb {N}}}) \) depends only on \( \mu \circ {\mathfrak {l}} ^{-1}\), where

    $$\begin{aligned}&\widehat{{\mathscr {T}}}_{\text {path},\ \varUpsilon }^{\{1\}}(S^{{\mathbb {N}}}) = \big \{ \mathbf{A } \in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \, ;\, {\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{A }) = 1 \text { for }\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\big \} . \end{aligned}$$

Proof

We first prove (1). By construction, we have the following decomposition:

$$\begin{aligned} P\circ \mathbf{X }^{-1} ( \mathbf{A } ) = \int _{ S^{{\mathbb {N}}}\times W _{\mathbf{0 }} ({\mathbb {R}}^{d\mathbb {N}})} {\widetilde{P}}_{\mathbf{s },\mathbf{b }}( \mathbf{A } ) \, \varUpsilon {(d\mathbf{s }d\mathbf{b })}. \end{aligned}$$
(5.52)

Let \( B_{\mathbf{A }} = \{ (\mathbf{s },\mathbf{b }) \, ;\, 0< {\widetilde{P}}_{\mathbf{s },\mathbf{b }}( \mathbf{A } ) < 1 \} \). Suppose that (5.51) is false. Then there exists an \( \mathbf{A } \in \widehat{{\mathscr {T}}}_{\text {path}}(S^{{\mathbb {N}}}) \) such that \( \varUpsilon ( B_{\mathbf{A }} ) > 0\). Hence we deduce that

$$\begin{aligned} 0<&\int _{ B_{\mathbf{A }} } {\widetilde{P}}_{\mathbf{s },\mathbf{b }}( \mathbf{A } ) \varUpsilon (d\mathbf{s }d\mathbf{b }) < 1 . \end{aligned}$$
(5.53)

From (5.52) and (5.53) together with \( \varUpsilon ( B_{\mathbf{A }} ) > 0\) we deduce \( 0< P\circ \mathbf{X }^{-1} ( \mathbf{A } ) < 1\). This contradicts (\( {\mathbf {C}}_{\mathrm{path}}1\)). Hence we obtain (1).
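For completeness, this final deduction can be spelled out in one line: combining the decomposition (5.52) with the strict inequalities \( 0< {\widetilde{P}}_{\mathbf{s },\mathbf{b }}( \mathbf{A } ) < 1 \) on \( B_{\mathbf{A }} \) and \( \varUpsilon ( B_{\mathbf{A }} ) > 0\) gives

```latex
0 < \int_{B_{\mathbf{A}}} \widetilde{P}_{\mathbf{s},\mathbf{b}}(\mathbf{A}) \, \varUpsilon(d\mathbf{s}d\mathbf{b})
  \le P \circ \mathbf{X}^{-1}(\mathbf{A})
  \le \int_{B_{\mathbf{A}}} \widetilde{P}_{\mathbf{s},\mathbf{b}}(\mathbf{A}) \, \varUpsilon(d\mathbf{s}d\mathbf{b})
     + \varUpsilon(B_{\mathbf{A}}^{c})
  < \varUpsilon(B_{\mathbf{A}}) + \varUpsilon(B_{\mathbf{A}}^{c}) = 1 ,
```

where the last strict inequality holds because \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}( \mathbf{A } ) < 1 \) on a set of positive \( \varUpsilon \)-measure.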

We next prove (2). Suppose that \( \mathbf{A } \in \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}};P\circ \mathbf{X }^{-1} ) \). Then from (5.51) and (5.52), we deduce that \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{A }) = 1 \) for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\). Furthermore, \( \widehat{{\mathscr {T}}}_{\text {path}}^{\{1\}} (S^{{\mathbb {N}}};P\circ \mathbf{X }^{-1} )\) depends only on \( \mu \circ {\mathfrak {l}} ^{-1}\) by (\( \mathbf{C }_{\mathrm{path}}2\)). Collecting these, we obtain (2). \(\square \)

We next prepare a general fact on countable determining classes.

Lemma 5.7

  1.

    Let \( (U,{\mathscr {U}} )\) be a measurable space with a countable determining class \( {\mathscr {V}} =\{ V_n \}_{n\in {\mathbb {N}}} \). Let \( \nu \) be a probability measure on \( (U,{\mathscr {U}} )\). Suppose that \( \nu ( V_n ) \in \{ 0,1 \} \) for all \( n \in \mathbb {N}\). Then, \( \nu ( A ) \in \{ 0,1 \} \) for all \( A \in {\mathscr {U}} \). Furthermore, there exists a unique \( V_{\nu }\in {\mathscr {U}} \) such that \( V_{\nu }\cap A \in \{ \emptyset , V_{\nu }\} \) and \( \nu ( A ) = \nu ( A \cap V_{\nu }) \) for all \( A \in {\mathscr {U}} \).

  2.

    In addition to the assumptions in (1), we assume \( \{ u \} \in {\mathscr {U}} \) for all \( u \in U \). Then there exists a unique \( a \in U \) such that \( \nu = \delta _a \).

Proof

We first prove (1). Let \( N (1) = \{ n \in {\mathbb {N}}; \nu (V_n) = 1 \} \). Then we take

$$\begin{aligned}&V_{\nu }= \left( \bigcap _{n \in N (1)} V_n \right) \bigcap \left( \bigcap _{n \not \in N (1)} V_n^c \right) . \end{aligned}$$

Clearly, we obtain \( \nu ( V_{\nu }) = 1\).

Let \( A \in {\mathscr {U}} \). Suppose that \( V_{\nu }\cap A \not \in \{ \emptyset , V_{\nu }\}\). Then the value of \( \nu ( V_{\nu }\cap A )\) could not be determined from the values \( \nu (V_n)\) (\( n \in {\mathbb {N}}\)), which contradicts the fact that \( {\mathscr {V}} \) is a determining class. Hence, \( V_{\nu }\cap A \in \{ \emptyset , V_{\nu }\}\). If \( V_{\nu }\cap A = \emptyset \), then \( \nu ( A ) \le \nu ( V_{\nu }^c ) = 0 \). If \( V_{\nu }\cap A = V_{\nu }\), then \( \nu ( A ) \ge \nu ( V_{\nu }) = 1 \). We thus complete the proof of (1).

We next prove (2). Since \( \nu (V_{\nu }) = \nu (V_{\nu }\cap U ) = \nu (U) = 1\), we have \( \# V_{\nu }\ge 1 \).

Suppose \( \# V_{\nu }= 1\). Then there exists a unique \( a \in U \) such that \( V_{\nu }= \{ a \} \). This combined with (1) yields that \( \nu (A) = \nu (A \cap V_{\nu }) = \nu (A \cap \{ a \} ) \in \{ 0,1 \} \) for all \( A \in {\mathscr {U}} \). Hence we see that \( \nu = \delta _a \).

Next suppose \( \# V_{\nu }\ge 2 \). Then \( V_{\nu }\) can be decomposed into two non-empty measurable subsets \( V_{\nu }= V_{\nu }^1 + V_{\nu }^2\) because \( \{ u \}\in {\mathscr {U}} \) for all \( u \in U \). Taking \( A = V_{\nu }^1 \) in (1), we would have \( V_{\nu }\cap V_{\nu }^1 = V_{\nu }^1 \in \{ \emptyset , V_{\nu }\} \), which is impossible because \( V_{\nu }^1 \) is non-empty and \( V_{\nu }^1 \not = V_{\nu }\). Hence such a decomposition yields a contradiction. This completes the proof of (2). \(\square \)
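As a concrete toy illustration of Lemma 5.7 (a sketch only: the finite space \( U = \{0,\dots ,4\} \) and the class of initial segments are hypothetical stand-ins for the Polish setting of the text), the following Python snippet builds the set \( V_{\nu }\) of the proof from the 0–1 values of \( \nu \) on a determining class and recovers the atom of the Dirac measure:

```python
# Toy check of Lemma 5.7 on a finite space: a probability measure nu
# that assigns only 0 or 1 to every set of a determining class must be
# a Dirac measure delta_a.  Here U = {0,...,4}, and the determining
# class is the family of initial segments {0,...,k}, which determines
# every probability measure on U.
U = set(range(5))
nu = {u: (1.0 if u == 3 else 0.0) for u in U}      # nu = delta_3, say

def nu_of(A):
    return sum(nu[u] for u in A)

segments = [set(range(k + 1)) for k in range(5)]   # determining class
assert all(nu_of(V) in (0.0, 1.0) for V in segments)

# V_nu from the proof of (1): intersect the measure-one generators
# with the complements of the measure-zero ones.
V_nu = set(U)
for V in segments:
    V_nu &= V if nu_of(V) == 1.0 else U - V
print(V_nu)  # the single atom {3}, so nu = delta_3
```

The loop is exactly the construction of \( V_{\nu }\) in the proof of (1), and part (2) corresponds to \( V_{\nu }\) being a singleton.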

Let \( {\widetilde{P}}_{\mathbf{s }}\) and \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) be as in (5.4) and (5.5), respectively.

Lemma 5.8

Make the same assumption as in Theorem 5.4 and assume (\( {\mathbf {C}}_{\mathrm{path}}1\)). Then \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) is concentrated at a single path \( {\mathbf{w }}= {\mathbf{w }}(\mathbf{s },\mathbf{b })\in W (S^{{\mathbb {N}}})\); that is,

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}= \delta _{{\mathbf{w }}(\mathbf{s },\mathbf{b })} \end{aligned}$$
(5.54)

In particular, \( {\mathbf{w }}\) is a function of \( \mathbf{b }\) under \( {\widetilde{P}}_{\mathbf{s }}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\).

Proof

From (5.41) and (5.42), we deduce \( {F_{\mathbf{s },\mathbf{b }}^{\infty }}(\mathbf{w }) = \mathbf{w }\) for \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\)-a.s. \( \mathbf{w }\) for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\). Hence by (5.5) we deduce for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\)

$$\begin{aligned} {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\circ ({F_{\mathbf{s },\mathbf{b }}^{\infty }})^{-1} = {\widetilde{P}}_{\mathbf{s },\mathbf{b }}. \end{aligned}$$
(5.55)

Let \( {\mathscr {V}} \) be the countable determining class given by (5.43). Then, we deduce from Lemmas 5.4, 5.5, 5.6, and the definition of \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\) that, for all \( \mathbf{V } \in {\mathscr {V}} \),

$$\begin{aligned} {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\circ (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) =&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}( (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) ) \nonumber \\ =&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}\big ((F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \bigcap \mathbf{W }_{\mathbf{s },\mathbf{b }}^{\text {fix}}\big ) \in \{ 0,1 \}\quad \text { for }\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b }). \end{aligned}$$
(5.56)

Because \( {\mathscr {V}} \) is countable, we deduce from (5.56) that, for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\),

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}\circ (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{V }) \in \{ 0,1 \} \quad \text { for all } \mathbf{V } \in {\mathscr {V}} . \end{aligned}$$
(5.57)

We denote by \( {\mathscr {B}}(W (S^{{\mathbb {N}}}))_{\mathbf{s },\mathbf{b }}\) the completion of \( {\mathscr {B}}(W (S^{{\mathbb {N}}})) \) with respect to \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\). From (5.57) and Lemma 5.7, we obtain for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\),

$$\begin{aligned} {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\circ (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} (\mathbf{A }) \in \{ 0,1 \} \quad \text { for all } \mathbf{A } \in {\mathscr {B}}(W (S^{{\mathbb {N}}}))_{\mathbf{s },\mathbf{b }}. \end{aligned}$$

Furthermore, for \(\varUpsilon \text {-a.s.}\ (\mathbf{s },\mathbf{b })\), there exists a unique \( \mathbf{U }_{\mathbf{s },\mathbf{b }} \in {\mathscr {B}}(W (S^{{\mathbb {N}}}))_{\mathbf{s },\mathbf{b }}\) such that

$$\begin{aligned}&\mathbf{U }_{\mathbf{s },\mathbf{b }} \cap \mathbf{A } \in \{ \emptyset , \mathbf{U }_{\mathbf{s },\mathbf{b }} \} ,\quad {\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{A } ) = {\widetilde{P}}_{\mathbf{s },\mathbf{b }}(\mathbf{A } \cap \mathbf{U }_{\mathbf{s },\mathbf{b }} ) \end{aligned}$$

for all \( \mathbf{A } \in {\mathscr {B}}(W (S^{{\mathbb {N}}}))_{\mathbf{s },\mathbf{b }}\). Hence we deduce that the set \( \mathbf{U }_{\mathbf{s },\mathbf{b }} \) consists of a single point \( \{ {\mathbf{w }}(\mathbf{s },\mathbf{b })\} \) for some \( {\mathbf{w }}= {\mathbf{w }}(\mathbf{s },\mathbf{b })\in W (S^{{\mathbb {N}}})\). Then the probability measure \( {\widetilde{P}}_{\mathbf{s },\mathbf{b }}\circ (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} \) is concentrated at the single path \( {\mathbf{w }}= {\mathbf{w }}(\mathbf{s },\mathbf{b })\in W (S^{{\mathbb {N}}})\), that is,

$$\begin{aligned}&{\widetilde{P}}_{\mathbf{s },\mathbf{b }}\circ (F_{\mathbf{s },\mathbf{b }}^{\infty })^{-1} = \delta _{{\mathbf{w }}(\mathbf{s },\mathbf{b })} . \end{aligned}$$

In particular, \( {\mathbf{w }}\) is a function of \( \mathbf{b }\) under \( {\widetilde{P}}_{\mathbf{s }}\) because of (5.3), (5.4), and (5.5). This combined with (5.55) implies (5.54). \(\square \)

Proof of Theorem 5.4

From (5.54), we immediately obtain \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\). Hence we obtain (1). Let \( {\mathbf{w }}(\mathbf{s },\mathbf{b })\) and \( \mathbf{U }_{\mathbf{s },\mathbf{b }} \) be as in Lemma 5.8. From Lemma 5.6 (2), we deduce that the set \( \mathbf{U }_{\mathbf{s },\mathbf{b }} \) depends only on \( \mu \circ {\mathfrak {l}} ^{-1}\). In particular, \( {\mathbf{w }}(\mathbf{s },\mathbf{b })\) depends only on \( \mu \circ {\mathfrak {l}} ^{-1}\). This completes the proof of (2). \(\square \)

Proof of Theorem 5.1

By assumption, \(\mathbf {(TT)}\) and \(\mathbf {(AC)}\) for \( \mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) are satisfied, and furthermore, \( {\widetilde{P}}_{\mathbf{s }}\) is a weak solution of (4.2)–(4.4) satisfying \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Then we apply Theorem 5.3 to obtain (\( {\mathbf {C}}_{\mathrm{path}}1\)) and (\( \mathbf{C }_{\mathrm{path}}2\)). Hence we deduce from Theorem 5.4 that \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) hold for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). \(\square \)

Proof of Theorems 3.1 and 3.2

This section is devoted to proving Theorems 3.1 and 3.2. Throughout this section, \( S\) is \( {\mathbb {R}}^d\) or a closed set satisfying the assumption in Sect. 3, and \( {\textsf {S}}\) is the configuration space over \( S\). We have established two tail theorems: Theorems 4.1 and 5.1. Theorems 3.1 and 3.2 are immediate consequences of them.

Proof of Theorem 3.1

By assumption, \( \mu \) is tail trivial and \( (\mathbf{X },\mathbf{B })\) under P is a weak solution satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Hence we deduce \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}2)\)\(_{{\mathbf {s}}}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) from the second tail theorem (Theorem 5.1). We therefore conclude from the first tail theorem (Theorem 4.1) that (3.3)–(3.4) has a unique strong solution \( {F}_{\mathbf{s }}\) starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) under the constraints of \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\).

Because the family of strong solutions \( \{ {F}_{\mathbf{s }}\} \) is given by a weak solution \( (\mathbf{X },\mathbf{B })\) under \( P\), \( \{ {F}_{\mathbf{s }}\} \) satisfies \((\mathbf{MF} )\). Recall that \( P_{\{ {F}_{\mathbf{s }}\}}= P\circ \mathbf{X }^{-1}\) by (3.15). Hence \( P_{\{ {F}_{\mathbf{s }}\}}\) satisfies \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

We next check (i) and (ii) in Definition 3.11.

Let \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) under \( {\hat{P}}\) be a weak solution in (i). Then \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) under \( {\hat{P}}\) satisfies \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) by assumption. By Fubini’s theorem, this implies that \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) under \( {\hat{P}}_{\mathbf{s }}\) satisfies \((\mathbf {IFC})_{{\mathbf {s}}}\), \(\mathbf {(SIN)}\)\(_{\mathbf{s }}\), and \(\mathbf {(NBJ)}\)\(_{\mathbf{s }} \) for \( {\hat{P}}\circ \hat{\mathbf{X }}_0^{-1} \)-a.s. \( \mathbf{s }\). Because \( {F}_{\mathbf{s }}\) is the unique strong solution starting at \( \mathbf{s }\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) under the constraints of \((\mathbf {IFC})_{{\mathbf {s}}}\) and \( ({\mathbf {T}}_{\mathrm{path}}1)\)\(_{{\mathbf {s}}}\), and because \({\hat{P}}\circ \hat{\mathbf{X }}_0^{-1} \prec P \circ \mathbf{X }_0^{-1} \), we obtain (i). Condition (ii) is clear because \( {F}_{\mathbf{s }}\) is the unique strong solution. This completes the proof. \(\square \)

Proof of Theorem 3.2

By (3.21), \( \mu _{\text {Tail}}^{{\textsf {a}}}\) is tail trivial. Hence \(\mathbf {(TT)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) is satisfied. By assumption, \((\mathbf{X },\mathbf{B })\) under \( P^{{\textsf {a}}} \) satisfies \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) for \( \mu \)-a.s. Å.

We see that, for \( \mu \)-a.s. Å, \(\mathbf {(IFC)}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\) for \( (\mathbf{X },\mathbf{B })\) under \( P^{\AA } \) follow from Fubini’s theorem and those for \( (\mathbf{X },\mathbf{B })\) under \( P\).

Indeed, from (3.20), (3.25), and (3.26) we obtain

$$\begin{aligned}&P= \int P^{{\textsf {a}}} \mu (d{\textsf {a}}) . \end{aligned}$$
(6.1)

Then from \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) for \( (\mathbf{X },\mathbf{B })\) under \( P\) and (6.1) we deduce

$$\begin{aligned}&1=P({\textsf {X}} \in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = \int _{{\textsf {S}}} P^{{\textsf {a}}} ({\textsf {X}} \in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) \mu (d{\textsf {a}}) ,\\&1 = P( {\textsf {m}}_{r,T} (\mathbf{X })< \infty ) = \int _{{\textsf {S}}} P^{{\textsf {a}}}( {\textsf {m}}_{r,T} (\mathbf{X })< \infty ) \mu (d{\textsf {a}}). \end{aligned}$$

Hence \( P^{\AA } ({\textsf {X}} \in W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) =P^{\AA }( {\textsf {m}}_{r,T} (\mathbf{X })< \infty )=1 \) for \( \mu \)-a.s. Å. This implies that \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) for \( (\mathbf{X },\mathbf{B })\) under \( P^{\AA } \) hold for \( \mu \)-a.s. Å.

By disintegration of \( P\circ \mathbf{X }_0^{-1} \),

$$\begin{aligned}&P\circ \mathbf{X }_0^{-1} = \int P^{{\textsf {a}}}\circ \mathbf{X }_0^{-1} \mu (d{\textsf {a}}) . \end{aligned}$$
(6.2)

We set \( P_{\mathbf{s }}= P (\cdot | \mathbf{X }_0 = \mathbf{s })\). By definition, \(\mathbf {(IFC)}\) for \( (\mathbf{X },\mathbf{B })\) under \( P\) implies \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) for \( P\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). By (6.2), this in turn implies that \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) holds for \( P^{\AA }\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\) and for \( \mu \)-a.s. Å. We easily see that, for \( \mu \)-a.s. Å,

$$\begin{aligned}&P_{\mathbf{s }}= P_{\mathbf{s }}^{{\textsf {a}}} \quad \text {for } P^{{\textsf {a}}}\circ \mathbf{X }_0^{-1} \text {-a.s.}\ \mathbf{s }. \end{aligned}$$

Hence, for \( \mu \)-a.s. Å, we have \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}^{\AA }\) for \( P^{\AA }\circ \mathbf{X }_0^{-1} \)-a.s. \( \mathbf{s }\). Then, for \( \mu \)-a.s. Å, we deduce \(\mathbf {(IFC)}\) for \( (\mathbf{X },\mathbf{B })\) under \( P^{\AA }\).

We have thus seen that, for \( \mu \)-a.s. Å, \( (\mathbf{X },\mathbf{B })\) under \( P^{\AA } \) fulfills the assumptions of Theorem 3.1. Hence Theorem 3.2 follows from Theorem 3.1. \(\square \)

The Ginibre interacting Brownian motion

In this section, we apply our theory to the special example of the Ginibre interacting Brownian motion and prove the existence of strong solutions and pathwise uniqueness. Our proof is based on the idea explained in the Introduction. Behind it lie two general theorems, called the tail theorems. These two theorems are robust and can be applied to various kinds of infinite-dimensional stochastic (differential) equations with symmetry beyond interacting Brownian motions. The purpose of this section is to clarify the roles of these two theorems by applying them to the Ginibre interacting Brownian motion.

ISDE related to the Ginibre random point field

In Sect. 7.1, we introduce the Ginibre interacting Brownian motion and prepare the result of the first step.

The Ginibre random point field \( \mu _{\text {Gin}}\) is a random point field on \( {\mathbb {R}}^2\) whose n-point correlation function \( \rho _{\text {Gin}}^n \) with respect to the Lebesgue measure is given by

$$\begin{aligned} \rho _{\text {Gin}}^{n}(x_1,\ldots ,x_n) = \det [{\textsf {K}}_{\text {Gin}}(x_i,x_j)]_{i,j=1}^{n}, \end{aligned}$$
(7.1)

where \( {\textsf {K}}_{\text {Gin}}\!:\!{\mathbb {R}}^2\times {\mathbb {R}}^2\!\rightarrow \!{\mathbb {C}}\) is the kernel defined by

$$\begin{aligned}&{\textsf {K}}_{\text {Gin}}(x,y) = \pi ^{-1} e^{-\frac{|x|^{2}}{2}-\frac{|y|^{2}}{2}}\cdot e^{x {\bar{y}}}. \end{aligned}$$
(7.2)

Here we identify \( {\mathbb {R}}^2\) with \( {\mathbb {C}}\) by the obvious correspondence \( {\mathbb {R}}^2\ni x=(x_1,x_2)\mapsto x_1 + \sqrt{-1} x_2 \in {\mathbb {C}}\), and \( {\bar{y}}=y_1-\sqrt{-1} y_2 \) is the complex conjugate under this identification.

It is known that \( \mu _{\text {Gin}}\) is translation and rotation invariant. Moreover, \( \mu _{\text {Gin}}({\textsf {S}}_{\text {s.i.}})=1 \) and \( \mu _{\text {Gin}}\) is tail trivial [2, 19, 29].
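As a numerical sanity check of (7.1)–(7.2) (a sketch, not part of the argument), one can evaluate the determinantal formula directly: the one-point correlation is the constant \( 1/\pi \), consistent with translation invariance, and \( \rho _{\text {Gin}}^{n}\) vanishes at coincident points, reflecting the repulsion of the particles.

```python
import numpy as np

def K_gin(x, y):
    # Ginibre kernel (7.2), with R^2 identified with C
    return np.exp(-abs(x)**2 / 2 - abs(y)**2 / 2) * np.exp(x * np.conj(y)) / np.pi

def rho(points):
    # n-point correlation function (7.1): det[K(x_i, x_j)]_{i,j}
    M = np.array([[K_gin(x, y) for y in points] for x in points])
    return np.linalg.det(M).real

print(rho([1.3 + 0.7j]))      # -> 0.3183... = 1/pi, independently of the point
print(rho([0j, 0.1 + 0j]))    # < (1/pi)^2: repulsion between nearby particles
print(rho([0j, 0j]))          # vanishes at coincident points
```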

We next introduce the Dirichlet form associated with \( \mu _{\text {Gin}}\) and construct an \( {\textsf {S}}\)-valued diffusion. Let \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}_{\circ }^{\mu _{\text {Gin}}} )\) be a bilinear form on \( L^{2}({\textsf {S}},\mu _{\text {Gin}})\) defined by

$$\begin{aligned}&{\mathscr {D}}_{\circ }^{\mu _{\text {Gin}}} = \{ f \in {\mathscr {D}}_{\circ }\cap L^{2}({\textsf {S}},\mu _{\text {Gin}})\, ;\, {\mathscr {E}}^{\mu _{\text {Gin}}}(f,f) < \infty \} ,\\&{\mathscr {E}}^{\mu _{\text {Gin}}} (f,g) = \int _{{\textsf {S}}} {\mathbb {D}}[f,g] \, \mu _{\text {Gin}}(d{\textsf {s}}) ,\\&{\mathbb {D}}[f,g] ({\textsf {s}}) = \frac{1}{2} \sum _{i} (\partial _{s_i}{\check{f}} , \partial _{s_i}{\check{g}} )_{{\mathbb {R}}^2 } . \end{aligned}$$

Here \( {\textsf {s}}=\sum _i \delta _{s_i}\), \( \partial _{s_i}= (\frac{\partial }{\partial s_{i1}},\frac{\partial }{\partial s_{i2}}) \), and \((\cdot , \cdot )_{{\mathbb {R}}^2 }\) denotes the standard inner product in \( {\mathbb {R}}^2\). \( {\check{f}}\) is defined before (2.2).

Lemma 7.1

( [27, Theorem 2.3])

  1.

    The Ginibre random point field \( \mu _{\text {Gin}}\) is a \( (|x|^2,- 2\log |x-y|)\)-quasi Gibbs measure.

  2.

    \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}_{\circ }^{\mu _{\text {Gin}}} )\) is closable on \( L^{2}({\textsf {S}},\mu _{\text {Gin}})\).

  3.

    The closure \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}^{\mu _{\text {Gin}}} )\) of \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}_{\circ }^{\mu _{\text {Gin}}} )\) on \( L^{2}({\textsf {S}},\mu _{\text {Gin}})\) is a quasi-regular Dirichlet form.

  4.

    There exists a diffusion \( \{ {\textsf {P}}_{{\textsf {s}}}\}_{{\textsf {s}}\in {\textsf {S}}} \) associated with \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}^{\mu _{\text {Gin}}} )\) on \( L^{2}({\textsf {S}},\mu _{\text {Gin}})\).

A family of probability measures \( \{ {\textsf {P}}_{{\textsf {s}}}\}_{{\textsf {s}}\in {\textsf {S}}} \) on \( (W ({\textsf {S}}),{\mathscr {B}}(W ({\textsf {S}}))) \) is called a diffusion if the canonical process \( {\textsf {X}}=\{ {\textsf {X}}_t \} \) under \( {\textsf {P}}_{{\textsf {s}}}\) is a continuous process with the strong Markov property starting at \( {\textsf {s}}\). Here \( {\textsf {X}}_t({\textsf {w}}) = {\textsf {w}}_t\) for \( {\textsf {w}}=\{ {\textsf {w}}_t\} \in W ({\textsf {S}})\) by definition. \( {\textsf {X}}\) is adapted to \( \{ {\mathscr {F}}_t \}\), where \( {\mathscr {F}}_t = \cap _\nu {\mathscr {F}}_t^{\nu } \), the intersection is taken over all Borel probability measures \( \nu \), and \( {\mathscr {F}}_t^{\nu }\) is the completion of \( {\mathscr {F}}_t^+ = \cap _{\epsilon > 0 } {\mathscr {B}}_{t+\epsilon } ({\textsf {S}}) \) with respect to \( {\textsf {P}}_{\nu }=\int {\textsf {P}}_{{\textsf {s}}}\nu (d{\textsf {s}})\). The \( \sigma \)-field \( {\mathscr {B}}_t ({\textsf {S}}) \) is defined by

$$\begin{aligned} {\mathscr {B}}_t ({\textsf {S}}) = \sigma [{\textsf {w}}_s; 0 \le s \le t ] . \end{aligned}$$
(7.3)

Furthermore, \( \{ {\textsf {P}}_{{\textsf {s}}}\}_{{\textsf {s}}\in {\textsf {S}}} \) is called stationary if it has an invariant probability measure.

We refer to [20] for the definition of quasi-regular Dirichlet forms, and to [6] for the general theory of Dirichlet forms.

We recall the definition of capacity (see [6, Chapter 2.1]). Denote by \({\mathscr {O}}\) the family of all open subsets of \({\textsf {S}}\). Let \(({\mathscr {E}}, {\mathscr {D}})\) be a quasi-regular Dirichlet form on \(L^2({\textsf {S}},\mu )\), and set \({\mathscr {E}}_1(f,f)={\mathscr {E}}(f,f)+ ( f,f )_{L^2({\textsf {S}},\mu )}\). For \({\textsf {B}}\in {\mathscr {O}}\) we define

$$\begin{aligned}&\text {Cap}^\mu ({\textsf {B}})= {\left\{ \begin{array}{ll} \inf _{u\in {\mathscr {L}}_{{\textsf {B}}}}{\mathscr {E}}_1(u,u), \quad &{}{\mathscr {L}}_{{\textsf {B}}}\not = \emptyset \\ \infty \quad &{}{\mathscr {L}}_{{\textsf {B}}}= \emptyset , \end{array}\right. } \end{aligned}$$

where \( {\mathscr {L}}_{{\textsf {B}}}=\{u \in {\mathscr {D}} : u\ge 1, \ \mu \text {-a.e. on } {\textsf {B}}\}\), and for any set \({\textsf {A}}\subset {\textsf {S}}\) we let

$$\begin{aligned} \text {Cap}^\mu ({\textsf {A}})= \inf _{{\textsf {A}}\subset {\textsf {B}} \in {\mathscr {O}} }\text {Cap}^\mu ({\textsf {B}}) . \end{aligned}$$

We call this the one-capacity of \({\textsf {A}}\), or simply the capacity of \({\textsf {A}}\).
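The variational definition above can be made concrete in a discrete toy model. The following sketch (ours, purely illustrative and not part of the text's framework) computes the one-capacity of a set B for the discrete Dirichlet energy on a grid of \([0,1]\); the minimizer over \(\{u \ge 1 \text{ on } B\}\) is the equilibrium potential, obtained by setting \(u=1\) on B and solving the linear Euler–Lagrange system on the complement.

```python
import numpy as np

# Discrete analogue of Cap^mu: E(u,u) = sum (u_{i+1}-u_i)^2/(2h) on a grid,
# E_1(u,u) = E(u,u) + ||u||_{L^2}^2.  By the maximum principle, the
# equality-constrained minimizer (u = 1 on B) also solves the
# inequality-constrained problem {u >= 1 on B}.

def discrete_capacity(n=200, B=slice(90, 110)):
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))                       # u^T A u = E_1(u, u)
    for i in range(n - 1):                     # gradient (Dirichlet) part
        A[i, i] += 1 / (2 * h)
        A[i + 1, i + 1] += 1 / (2 * h)
        A[i, i + 1] -= 1 / (2 * h)
        A[i + 1, i] -= 1 / (2 * h)
    A += np.eye(n) * h                         # L^2 part
    on_B = np.zeros(n, dtype=bool)
    on_B[B] = True
    u = np.ones(n)
    free = ~on_B
    # minimize u^T A u subject to u = 1 on B: solve A_ff u_f = -A_fB 1
    rhs = -A[np.ix_(free, on_B)] @ np.ones(on_B.sum())
    u[free] = np.linalg.solve(A[np.ix_(free, free)], rhs)
    return float(u @ A @ u)

print(discrete_capacity())
```

Monotonicity of the capacity (a larger B has a larger capacity, since the infimum runs over a smaller class) can be observed directly on this toy model.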

We recall the notions of quasi-everywhere and quasi-continuity. Let \({\textsf {A}}\) be a subset of \({\textsf {S}}\). A statement depending on \({\textsf {s}}\in {\textsf {A}}\) is said to hold quasi-everywhere (q.e.) on \({\textsf {A}}\) if there exists a set \({\mathscr {N}}\subset {\textsf {A}}\) of zero capacity such that the statement is true for every \({\textsf {s}}\in {\textsf {A}}\setminus {\mathscr {N}}\). When \({\textsf {A}}={\textsf {S}}\), we simply say that the statement holds quasi-everywhere. Let u be an extended real-valued function defined q.e. on \({\textsf {S}}\). We call u quasi-continuous if, for any \(\epsilon >0\), there exists an open set \({\textsf {G}}\subset {\textsf {S}}\) such that \(\text {Cap}^\mu ({\textsf {G}})<\epsilon \) and the restriction \(u|_{{\textsf {S}}\setminus {\textsf {G}}}\) of u to \({\textsf {S}}\setminus {\textsf {G}}\) is finite and continuous.

We set the probability measure \( {\textsf {P}}_{\mu _{\text {Gin}}} \) by

$$\begin{aligned}&{\textsf {P}}_{\mu _{\text {Gin}}} (A) = \int _{{\textsf {S}}} {\textsf {P}}_{{\textsf {s}}}(A) \mu _{\text {Gin}}(d{\textsf {s}}) \quad \text { for } A \in {\mathscr {F}} . \end{aligned}$$
(7.4)

Under \( {\textsf {P}}_{\mu _{\text {Gin}}} \), the unlabeled process \( {\textsf {X}}\) is a \( \mu _{\text {Gin}}\)-reversible diffusion. Let \( \text {Cap}^{\mu _{\text {Gin}}} \) be the capacity on the Dirichlet space \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}^{\mu _{\text {Gin}}} , L^{2}({\textsf {S}},\mu _{\text {Gin}})) \). Let \( {\textsf {S}}_{\text {s.i.}}\) and \( W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) be as in (2.8) and (2.9), respectively.

Lemma 7.2

( [24, 26]) \( {\textsf {P}}_{\mu _{\text {Gin}}} (W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = 1 \) and \( \text {Cap}^{\mu _{\text {Gin}}}({\textsf {S}}_{\text {s.i.}}^c) = 0 \).

Proof

\( {\textsf {P}}_{\mu _{\text {Gin}}} (W _{\text {NE}}({\textsf {S}})) = 1 \) follows from [26, (2.10)]. Let \( {\textsf {S}}_{\text {s}}\) be as in (2.8). Then, by [24, Theorem 2.1], we see that \( \text {Cap}^{\mu _{\text {Gin}}}({\textsf {S}}_{\text {s}}^c)= 0 \). Clearly, \( \mu _{\text {Gin}}({\textsf {S}}_{\text {s.i.}})= 1 \). Combining these facts, we deduce that the unlabeled diffusion never hits the set consisting of the finite configurations, and hence that the capacity of this set is zero. We thus obtain \( \text {Cap}^{\mu _{\text {Gin}}}({\textsf {S}}_{\text {s.i.}}^c) = 0 \). In particular, \( {\textsf {P}}_{\mu _{\text {Gin}}} (W ({\textsf {S}}_{\text {s.i.}})) = 1 \), which together with \( {\textsf {P}}_{\mu _{\text {Gin}}} (W _{\text {NE}}({\textsf {S}})) = 1 \) implies the first claim. \(\square \)

Lemma 7.3

( [26, Theorem 61, Lemma 72]) The Ginibre random point field \( \mu _{\text {Gin}}\) has a logarithmic derivative \( {\textsf {d}}^{\text {Gin} }\). Furthermore, \( {\textsf {d}}^{\text {Gin} }\) admits the two expressions:

$$\begin{aligned}&{\textsf {d}}^{\text {Gin} }(x, {\textsf {s}}) = 2 \lim _{r\rightarrow \infty } \sum _{|x-s_i| < r } \frac{x-s_i}{|x-s_i|^2} , \end{aligned}$$
(7.5)
$$\begin{aligned}&{\textsf {d}}^{\text {Gin} }(x, {\textsf {s}}) = -2x + 2 \lim _{r\rightarrow \infty } \sum _{|s_i| < r } \frac{x-s_i}{|x-s_i|^2}. \end{aligned}$$
(7.6)

Here the convergence takes place in \( L_{\text {loc}}^{p}(\mu ^{[1]}_{\text {Gin}})\) for any \( 1 \le p < 2 \).
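As a purely numerical illustration (ours, not part of the lemma), the two truncated sums in (7.5) and (7.6) can be evaluated at a finite cut-off r for a stand-in finite sample. The equality of the two \(r\rightarrow \infty \) limits is a property of the infinite Ginibre configuration and is not reproduced exactly by a finite sample.

```python
import numpy as np

# Partial sums of the two expressions for the logarithmic derivative d^Gin,
# evaluated at a fixed truncation radius r on a stand-in sample (NOT Ginibre).

def d_centered(x, s, r):
    # 2 * sum over |x - s_i| < r of (x - s_i)/|x - s_i|^2,  cf. (7.5)
    d = x - s
    nrm = np.linalg.norm(d, axis=1)
    keep = (nrm < r) & (nrm > 0)
    return 2 * (d[keep] / nrm[keep, None] ** 2).sum(axis=0)

def d_origin(x, s, r):
    # -2x + 2 * sum over |s_i| < r of (x - s_i)/|x - s_i|^2,  cf. (7.6)
    keep = np.linalg.norm(s, axis=1) < r
    d = x - s[keep]
    nrm = np.linalg.norm(d, axis=1)
    good = nrm > 0
    return -2 * x + 2 * (d[good] / nrm[good, None] ** 2).sum(axis=0)

rng = np.random.default_rng(0)
s = rng.uniform(-10.0, 10.0, size=(2000, 2))   # stand-in configuration
x = np.array([0.3, -0.2])
print(d_centered(x, s, r=5.0), d_origin(x, s, r=5.0))
```

Note that the two truncation schemes sum over different index sets (centered at x versus centered at the origin), which is exactly why the compensating term \(-2x\) appears in (7.6).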

From Lemma 7.2 the process \( \mathbf{X }=(X^i)_{i\in \mathbb {N}} = {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \in W ({\mathbb {R}}^{2\mathbb {N}})\) is well defined, where \( {\mathbb {R}}^{2\mathbb {N}}= ({\mathbb {R}}^2)^{\mathbb {N}}\). The unlabeled process \( {\textsf {X}}\) is defined on the canonical filtered space as the evaluation map \( {\textsf {X}}_t ({\textsf {w}}) = {\textsf {w}}_t\). That is, \((\varOmega ,{\mathscr {F}},\{ {\textsf {P}}_{{\textsf {s}}}\}, \{ {\mathscr {F}}_t \} )\) is given by \( \varOmega = W ({\textsf {S}}_{\text {s.i.}})\) and \( {\mathscr {F}}={\mathscr {B}}(W ({\textsf {S}}_{\text {s.i.}})) \), where \( \{{\textsf {P}}_{{\textsf {s}}}\}\) is the family of diffusion measures given by Lemma 7.1 (4); by Lemma 7.2, \( \{{\textsf {P}}_{{\textsf {s}}}\}\) can be regarded as a diffusion on \( {\textsf {S}}_{\text {s.i.}}\). The labeled process \( \mathbf{X }= {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) is thus defined on \( (\varOmega ,{\mathscr {F}},\{ {\textsf {P}}_{{\textsf {s}}}\}, \{ {\mathscr {F}}_t \} )\). The ISDE satisfied by \( \mathbf{X }\) is as follows.

Lemma 7.4

( [26, Theorem 21, Theorem 22]) Let \( \mathbf{X }= {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) be the stochastic process defined on \( (\varOmega ,{\mathscr {F}},\{ {\textsf {P}}_{{\textsf {s}}}\}, \{ {\mathscr {F}}_t \} )\) as above. Then there exists a set \( {\textsf {H}} \) such that

$$\begin{aligned} \mu _{\text {Gin}}({\textsf {H}} ) = 1 ,\quad {\textsf {H}} \subset {\textsf {S}}_{\text {s.i.}}\end{aligned}$$

and that \( \mathbf{X }\) under \( {\textsf {P}}_{{\textsf {s}}}\) satisfies both ISDEs on \( {\mathbb {R}}^{2\mathbb {N}}\) starting at each point \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}}) \in {\mathfrak {l}} ({\textsf {H}} ) \)

$$\begin{aligned}&dX_t^i = dB_t^i + \lim _{r\rightarrow \infty } \sum _{|X_t^i-X_t^j|<r ,\, j\not =i} \frac{X_t^i-X_t^j}{|X_t^i-X_t^j|^{2}} dt \quad (i \in \mathbb {N}), \end{aligned}$$
(1.5)
$$\begin{aligned}&dX_t^i = dB_t^i - X_t^i dt + \lim _{r\rightarrow \infty } \sum _{|X_t^j|<r ,\, j\not =i} \frac{X_t^i-X_t^j}{|X_t^i-X_t^j|^{2}} dt \quad (i \in \mathbb {N}) . \end{aligned}$$
(1.6)

Furthermore, \( \mathbf{X } \in W ({\mathfrak {u}} ^{-1}({\textsf {H}} ))\). The process \( \mathbf{B }= (B^i)_{i=1}^{\infty }\) in (1.5) and (1.6) is the same and is the \( {\mathbb {R}}^{2\mathbb {N}}\)-valued, \(\{ {\mathscr {F}}_t \}\)-Brownian motion given by the formula

$$\begin{aligned}&B_t^i = X_t^i -X_0^i -\int _0^t \lim _{r\rightarrow \infty } \sum _{|X_s^i-X_s^j|<r ,\, j\not =i} \frac{X_s^i-X_s^j}{|X_s^i-X_s^j|^{2}} ds , \end{aligned}$$
(7.7)
$$\begin{aligned}&B_t^i = X_t^i -X_0^i +\int _0^t X_s^i ds - \int _0^t \lim _{r\rightarrow \infty } \sum _{|X_s^j|<r ,\, j\not =i} \frac{X_s^i-X_s^j}{|X_s^i-X_s^j|^{2}} ds . \end{aligned}$$
(7.8)

We note that \( (\mathbf{X },\mathbf{B })\) above is a solution of the two ISDEs (1.5) and (1.6). We refer to Definition 3.1 for the notion of solution used in Lemma 7.4, which is often called a weak solution. We also note that the identity between (7.7) and (7.8) follows from the two expressions of \( {\textsf {d}}^{\text {Gin} }\) in Lemma 7.3.

Remark 7.1

  1.

    We cannot replace \( W ({\mathfrak {u}} ^{-1}({\textsf {H}} ))\) by \( W ({\mathfrak {l}} ({\textsf {H}} ))\) in Lemma 7.4. Indeed, \( {\mathfrak {l}} ({\textsf {H}} ) \subset {\mathfrak {u}} ^{-1}({\textsf {H}} ) \) and \( \mathbf{X }_t \not \in {\mathfrak {l}} ({\textsf {H}} ) \) for some \(0< t < \infty \).

  2.

    The Brownian motion \( \mathbf{B } = (B^i)_{i=1}^{\infty }\) in Lemma 7.4 is given by a functional of \( {\textsf {X}}\); hence we write \( \mathbf{B }({\textsf {X}})=(B^i({\textsf {X}}))_{i=1}^{\infty }\). It is given by the martingale part of the Fukushima decomposition of the Dirichlet process \( X_t^i-X_0^i\). Indeed, for (1.5) and (1.6), \( B^i\) is given by (7.7) and (7.8), respectively. We note that the coordinate function \( x_i\) does not belong to the domain of the unlabeled Dirichlet form, even locally. Hence we introduced in [26] the Dirichlet space of the m-labeled process \( (X^1,\ldots ,X^m ,\sum _{j>m}\delta _{X^j})\) in order to apply the Fukushima decomposition to the coordinate function \( x_i\), where \( i \le m \). The consistency of the Dirichlet spaces plays a crucial role in this argument [25, 26]. See Sect. 9.3 for the definition of the Dirichlet spaces of the m-labeled processes and their consistency, given there in the general situation.

  3.

    The function in the coefficient of (1.6) belongs locally to the domain of the m-labeled Dirichlet space. Indeed, regarding it as a function on \( {\mathbb {R}}^2\times {\textsf {S}}\), we prove that it belongs locally to the domain of the one-labeled Dirichlet form (see Lemma 8.4). By the same argument, we can prove that it is in the domain of the m-labeled Dirichlet form if we regard it as a function on \( ({\mathbb {R}}^2)^m \times {\textsf {S}}\) in the obvious manner. Hence, by taking a quasi-continuous version of this function, the drift term becomes a Dirichlet process. We thus see that the process

    $$\begin{aligned} - X_t^i + \lim _{r\rightarrow \infty } \sum _{|X_t^j|<r ,\, j\not =i} \frac{X_t^i-X_t^j}{|X_t^i-X_t^j|^{2}} \end{aligned}$$

    in the drift term is a continuous process. This makes the meaning of the drift term more explicit, because one usually takes only a predictable version of the coefficients (see [9, pp. 45–46]). The key point here is that the coefficient is in the domain of the Dirichlet space. All examples in the present paper enjoy this property. We shall assume it in (C1)–(C2) and use it in the proof of Proposition 11.2.

Main result for the Ginibre interacting Brownian motion

All the results in Sect. 7.1 above belong to the first step explained in the Introduction (Sect. 1). Our purpose in Sect. 7.2 is to establish the existence of strong solutions and the pathwise uniqueness of solutions, which is the main result for the Ginibre interacting Brownian motion.

Let \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\) be as in Lemma 7.4. Let \( (\mathbf{X },\mathbf{B })\) be the solution of ISDEs (1.5) and (1.6) given by Lemma 7.4. Recall that \( \mathbf{X }= {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \), where \( {\textsf {X}}\) is the canonical process such that \( {\textsf {X}}_t ({\textsf {w}}) = {\textsf {w}}_t\), and \( \mathbf{B }\) is the \( \{ {\mathscr {F}}_t \}\)-Brownian motion given by (7.8). Let \( {\textsf {P}}_{\mu _{\text {Gin}}} \) be as in (7.4).

Theorem 7.1

  1.

    \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{\mu _{\text {Gin}}} \) satisfies the same conclusions as Theorem 3.1, Corollaries 3.1, and 3.2 for ISDE (1.5).

  2.

    There exists a subset \( {\textsf {H}}\) of \( {\textsf {S}}_{\text {sde}}\) such that \( \mu _{\text {Gin}}({\textsf {H}}) = 1 \) and that \( (\mathbf{X },\mathbf{B })\) is a strong solution of ISDE (1.5) starting at \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}}) \in {\mathfrak {l}} ({\textsf {H}}) \) defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\).

  3.

    The same statements as (1) hold for ISDE (1.6).

  4.

    The same statements as (2) hold for ISDE (1.6).

  5.

    Let \(( \mathbf{X }',\mathbf{B }')\) and \( (\mathbf{X }'',\mathbf{B }')\) under \( P'\) be weak solutions of (1.5) and (1.6), respectively, satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \( \mu _{\text {Gin}}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Suppose that both solutions are defined on the same filtered space \( (\varOmega ',{\mathscr {F}}', P',\{ {\mathscr {F}}_t' \} )\) with the same \( \{ {\mathscr {F}}_t' \} \)-Brownian motion \( \mathbf{B }'\), and that \( \mathbf{X }_0' = \mathbf{X }_0''\) a.s. Then \( P' (\mathbf{X }_t'= \mathbf{X }_t'' \text { for all }t ) = 1 \).

We remark that ISDEs (1.5) and (1.6) are in general different ISDEs. Theorem 7.1 (5) asserts that, if the unlabeled particles start from a support \( {\textsf {S}}_{\text {sde}}\) of \( \mu _{\text {Gin}}\) and if the label \( {\mathfrak {l}} \) is common, then these two labeled dynamics coincide for all time. The intuitive explanation of this fact is as follows. One may regard the set \( {\mathfrak {u}} ^{-1} ({\textsf {S}}_{\text {sde}})\) as a sub-manifold of \( {\mathbb {R}}^{2\mathbb {N}}\), and the drift terms \( b_1 \) and \( b_2\)

$$\begin{aligned}&b_1\left( x_i, \sum _{j\not =i} \delta _{x_j}\right) = \lim _{r\rightarrow \infty } \sum _{|x_i-x_j|<r} \frac{x_i-x_j}{|x_i-x_j|^2} ,\\&b_2\left( x_i,\sum _{j\not =i} \delta _{x_j}\right) = - x_i + \lim _{r\rightarrow \infty } \sum _{|x_j|<r} \frac{x_i-x_j}{|x_i-x_j|^2} \end{aligned}$$

of each ISDE as “tangential vectors on \( {\mathfrak {u}} ^{-1} ({\textsf {S}}_{\text {sde}}) \)”. In [26], it was shown that both drifts are equal on \( {\mathfrak {u}} ^{-1} ({\textsf {S}}_{\text {sde}}) \). This implies the coincidence of ISDEs (1.5) and (1.6) on \( {\mathfrak {u}} ^{-1} ({\textsf {S}}_{\text {sde}}) \). Since the drift terms \( b_1\) and \( b_2 \) are tangential, the solutions stay in \( {\mathfrak {u}} ^{-1} ({\textsf {S}}_{\text {sde}}) \) for all time, which, combined with the pathwise uniqueness of solutions of the ISDEs, yields (5).

We note that the unlabeled dynamics \( {\textsf {X}}\) are \( \mu _{\text {Gin}}\)-reversible because \( {\textsf {X}}\) is given by the symmetric Dirichlet form \( ({\mathscr {E}}^{\mu _{\text {Gin}}} ,{\mathscr {D}}^{\mu _{\text {Gin}}} )\) on \( L^{2}({\textsf {S}},\mu _{\text {Gin}})\) (see Lemma 7.1 (4)). Hence, the distribution of \( {\textsf {X}}_t\) with initial distribution \( \mu _{\text {Gin}}\) satisfies \( {\textsf {P}}_{\mu _{\text {Gin}}} \circ {\textsf {X}}_t^{-1} = \mu _{\text {Gin}}\) for each \( 0< t < \infty \). In contrast, the labeled dynamics \( \mathbf{X }_t\) are trapped in a very thin subset of the huge space \( {\mathbb {R}}^{2\mathbb {N}}\). We conjecture that the distribution of \( \mathbf{X }_t\) is singular with respect to the initial distribution \( \mu _{\text {Gin}}\circ {\mathfrak {l}} ^{-1} \) for some \( t > 0 \).

Proof of Theorem 7.1

In Sect. 8, we prove the main theorem (Theorem 7.1) using Theorem 3.1.

Localization of coefficients and Lipschitz continuity

Recall that the labeled process \( \mathbf{X }=(X^i)_{i \in \mathbb {N}}= {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) is obtained from the unlabeled process \( {\textsf {X}}\) under \({\textsf {P}}_{\mu _{\text {Gin}}} \). We set the m-labeled process \( (\mathbf{X }^m,{\textsf {X}}^{m*})\) by

$$\begin{aligned}&\left( \mathbf{X }^m,{\textsf {X}}^{m*}\right) = \left( X^1,\ldots ,X^m,\sum _{i=m+1}^{\infty }\delta _{X^i}\right) . \end{aligned}$$

This correspondence is similar to that of the m-labeled paths in the sense of (2.11) and (2.12). This relationship between the unlabeled process and the m-labeled process is called consistency.

The m-labeled process is associated with the m-labeled Dirichlet space given in Sect. 10 for the m-Campbell measure \( \mu _{\text {Gin}}^{[m]}\) of \( \mu _{\text {Gin}}\). Here

$$\begin{aligned}&\mu _{\text {Gin}}^{[m]} (d\mathbf{x }d{\textsf {y}}) = \rho _{\text {Gin}}^m (\mathbf{x }) \mu _{\text {Gin},\mathbf{x }}( d{\textsf {y}})d\mathbf{x } , \end{aligned}$$

where \( \rho _{\text {Gin}}^m \) is the m-point correlation function of \( \mu _{\text {Gin}}\) with respect to the Lebesgue measure \( d\mathbf{x } \) on \( {\mathbb {R}}^{2m} \), and \( \mu _{\text {Gin},\mathbf{x }}\) is the reduced Palm measure conditioned at \( \mathbf{x } \in {\mathbb {R}}^{2m}\). The m-labeled Dirichlet form is given by

$$\begin{aligned}&{\mathscr {E}} ^{\mu _{\text {Gin}}^{[m]}} (f,g) = \int _{{\mathbb {R}}^{2m} \times {\textsf {S}}} \left\{ \frac{1}{2}\sum _{i=1}^{m} \left( \frac{\partial f}{\partial x_i}, \frac{\partial g}{\partial x_i}\right) _{{\mathbb {R}}^2} + {\mathbb {D}}[f,g] \right\} \mu _{\text {Gin}}^{[m]} \left( d\mathbf{x } d{\textsf {y}}\right) , \end{aligned}$$
(8.1)

where \( \partial /\partial {x_i}\) is the nabla in \( {\mathbb {R}}^2\). This coincides with the Dirichlet form (9.12) with \( d=2\), \( S= {\mathbb {R}}^2\), and \( \mu ^{[m]}= \mu _{\text {Gin}}^{[m]}\). Furthermore, \( a (\mathbf{x },{\textsf {y}})\) in (9.12) is taken to be the \( 2\times 2\) unit matrix. We denote by \( \text {Cap}^{\mu _{\text {Gin}}^{[m]}}\) the capacity given by the Dirichlet form \( {\mathscr {E}} ^{\mu _{\text {Gin}}^{[m]}}\).

Let \( \mathbf{a }=\{ a_{{\textsf {q}}}\}_{{\textsf {q}}\in {\mathbb {N}}} \) be an increasing sequence of increasing sequences \( a_{{\textsf {q}}}= \{ a_{{\textsf {q}}}(r) \}_{r\in {\mathbb {N}}} \) such that \( a_{{\textsf {q}}}(r) < a_{{\textsf {q}}+1}(r)\) and \( a_{{\textsf {q}}}(r) < a_{{\textsf {q}}}(r+1)\) for all \( {\textsf {q}}, r \in \mathbb {N}\) and that \( \lim _{r\rightarrow \infty } a_{{\textsf {q}}}(r) = \infty \) for all \( {\textsf {q}}\in \mathbb {N}\). We take \( a_{{\textsf {q}}}(r) \in \mathbb {N}\).

Let \( {\textsf {K}}[a_{{\textsf {q}}}] =\{ {\textsf {s}}\, ;\, {\textsf {s}} (S_{r}) \le a_{{\textsf {q}}}(r) \text { for all } r \in {\mathbb {N}} \} \). Then \( {\textsf {K}}[a_{{\textsf {q}}}] \subset {\textsf {K}}[a_{{\textsf {q}}+1}]\) for all \( {\textsf {q}}\in {\mathbb {N}}\). It is easy to see that \( {\textsf {K}}[a_{{\textsf {q}}}]\) is a compact set in \( {\textsf {S}}\) for each \( {\textsf {q}}\in {\mathbb {N}}\). Let

$$\begin{aligned}&{\textsf {K}}[\mathbf{a }]= \bigcup _{{\textsf {q}}=1}^{\infty } {\textsf {K}}[a_{{\textsf {q}}}] . \end{aligned}$$

We take \( a_{{\textsf {q}}}(r) = {\textsf {q}}r^2 \). Then because \( \mu _{\text {Gin}}\) is translation invariant, we have

$$\begin{aligned}&\mu _{\text {Gin}}({\textsf {K}}[\mathbf{a }]) = 1 . \end{aligned}$$
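The defining condition of \( {\textsf {K}}[a_{{\textsf {q}}}]\), with the choice \( a_{{\textsf {q}}}(r) = {\textsf {q}}r^2 \), can be checked mechanically for a finite configuration. The following sketch is ours and purely illustrative; the truncation to a finite r_max is harmless for a bounded sample.

```python
import numpy as np

# Membership test for K[a_q] = { s : s(S_r) <= a_q(r) for all r }, with
# a_q(r) = q r^2, where S_r is the closed ball of radius r about 0 in R^2.

def in_K(sample, q, r_max=30):
    nrm = np.linalg.norm(np.asarray(sample, float), axis=1)
    return all((nrm <= r).sum() <= q * r ** 2 for r in range(1, r_max + 1))

# integer grid in [-10, 10]^2 as a deterministic stand-in configuration
grid = [(i, j) for i in range(-10, 11) for j in range(-10, 11)]
print(in_K(grid, q=1), in_K(grid, q=13))   # False True
```

For the grid, the ball of radius 1 already contains 5 points, so \( {\textsf {q}}= 1 \) fails at \( r = 1 \), while the count in every ball of radius r is bounded by \( (2r+1)^2 \le 13 r^2 \), so \( {\textsf {q}}= 13 \) succeeds.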

We introduce an approximation of \( {\mathbb {R}}^{2m} \times {\textsf {S}}\) consisting of compact sets. Let

$$\begin{aligned}&{\textsf {S}}_{\text {s.i.}}^{[m]} = \left\{ (\mathbf{x },{\textsf {s}})\in {\mathbb {R}}^{2m} \times {\textsf {S}}\, ;\, {\mathfrak {u}} (\mathbf{x }) + {\textsf {s}} \in {\textsf {S}}_{\text {s.i.}}\right\} , \end{aligned}$$

where \( \mathbf{x }=(x_1,\ldots ,x_m)\) and \( {\mathfrak {u}} (\mathbf{x }) = \sum _{i=1}^m \delta _{x_i}\). Let \( a_{{\textsf {q}}}^+\) be such that \( a_{{\textsf {q}}}^+ (r) = a_{{\textsf {q}}}(r+1)\), and set

$$\begin{aligned}&{\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}})= \Big \{ \mathbf{x } \in S^m_{{\textsf {r}}} \, ;\, \min _{j\not =k } |x_j-x_k | \ge 2^{-{\textsf {p}}} ,\ \inf _{l,i} |x_l-s_i| \ge 2^{-{\textsf {p}}} \Big \} , \end{aligned}$$
(8.2)

where \( j,k,l=1,\ldots ,m \), \({\textsf {s}}=\sum _i \delta _{s_i}\), and \( S_{{\textsf {r}}}^m = \{ x \in {\mathbb {R}}^2;\, |x| \le {\textsf {r}}\}^m \). We set

$$\begin{aligned}&{\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}= \big \{ (\mathbf{x },{\textsf {s}}) \in {\textsf {S}}_{\text {s.i.}}^{[m]} \, ;\, \ \mathbf{x } \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}}),\ {\textsf {s}} \in {\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]\big \}, \end{aligned}$$
(8.3)
$$\begin{aligned}&{\textsf {H}}[\mathbf{a }]= \bigcup _{{\textsf {r}}=1}^{\infty } {\textsf {H}}[\mathbf{a }]_{{\textsf {r}}} , \quad {\textsf {H}}[\mathbf{a }]_{{\textsf {r}}} = \bigcup _{{\textsf {q}}=1}^{\infty } {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}}, \quad {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}} = \bigcup _{{\textsf {p}}=1}^{\infty } {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}. \end{aligned}$$
(8.4)
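The constraints defining \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}})\) in (8.2), namely containment in \( S_{{\textsf {r}}}^m \), mutual separation of the tagged points, and separation from the environment configuration, translate into an explicit membership test. The following sketch is ours (names are hypothetical) and purely illustrative.

```python
import numpy as np

# Membership test for R_{p,r}(s): every tagged point lies in the ball of
# radius r, tagged points keep mutual distance >= 2^{-p}, and each tagged
# point keeps distance >= 2^{-p} from every environment point s_i.

def in_R_pr(x, s, p, r):
    x = np.asarray(x, float)
    s = np.asarray(s, float)
    eps = 2.0 ** (-p)
    if (np.linalg.norm(x, axis=1) > r).any():        # x must lie in S_r^m
        return False
    m = len(x)
    for j in range(m):
        for k in range(j + 1, m):                    # min_{j != k} |x_j - x_k|
            if np.linalg.norm(x[j] - x[k]) < eps:
                return False
    # inf_{l,i} |x_l - s_i| >= 2^{-p}
    dists = np.linalg.norm(x[:, None, :] - s[None, :, :], axis=2)
    return bool(dists.min() >= eps)

x = [(0.0, 0.0), (1.0, 0.0)]
s = [(3.0, 0.0), (0.0, 2.5)]
print(in_R_pr(x, s, p=3, r=2.0))   # True
```

Increasing \( {\textsf {p}}\) relaxes the separation threshold \( 2^{-{\textsf {p}}}\), which is why the sets \( {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}\) increase in \( {\textsf {p}}\) in (8.4).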

Although \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}}), {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}, {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}} , {\textsf {H}}[\mathbf{a }]_{{\textsf {r}}} \), and \( {\textsf {H}}[\mathbf{a }]\) depend on \( m \in \mathbb {N}\), we suppress m from the notation. To simplify the notation we set \( {\textsf {N}}= {\textsf {N}}_1 \cup {\textsf {N}}_2 \cup {\textsf {N}}_3 \), where

$$\begin{aligned}&{\textsf {N}}_1 = \{ {\textsf {r}}\in \mathbb {N}\}, \ {\textsf {N}}_2 = \{ ({\textsf {q}}, {\textsf {r}}) \ ;\, {\textsf {q}}, {\textsf {r}}\in \mathbb {N}\} , \ {\textsf {N}}_3 = \{ ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \ ;\, {\textsf {p}},{\textsf {q}},{\textsf {r}}\in \mathbb {N}\} \end{aligned}$$
(8.5)

and for \( {\textsf {n}}\in {\textsf {N}}\) we define \( {\textsf {n}}+ 1 \in {\textsf {N}}\) such that

$$\begin{aligned}&{\textsf {n}}+ 1 = {\left\{ \begin{array}{ll} ({\textsf {p}}+ 1, {\textsf {q}}, {\textsf {r}}) &{}\quad \text { for } {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 , \\ ({\textsf {q}}+1, {\textsf {r}})&{}\quad \text { for } {\textsf {n}}= ({\textsf {q}}, {\textsf {r}}) \in {\textsf {N}}_2, \\ {\textsf {r}}+1 &{}\quad \text { for } {\textsf {n}}= {\textsf {r}}\in {\textsf {N}}_1. \end{array}\right. } \end{aligned}$$
(8.6)

We shall take the limit in \( {\textsf {n}}\) along the order \( {\textsf {n}}\mapsto {\textsf {n}}+ 1\). We write \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}= {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}\) for \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \). We define \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) for \( {\textsf {n}}= ({\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_2\) and \( {\textsf {n}}= {\textsf {r}}\in {\textsf {N}}_1\) similarly.
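The successor map (8.6) on \( {\textsf {N}}= {\textsf {N}}_1 \cup {\textsf {N}}_2 \cup {\textsf {N}}_3 \) can be encoded directly. In the following sketch (ours, illustrative) an index is represented as a tuple of length one, two, or three for \( {\textsf {N}}_1, {\textsf {N}}_2, {\textsf {N}}_3 \), respectively.

```python
# Successor map n -> n + 1 of (8.6): within N3 increment p, within N2
# increment q, within N1 increment r.

def successor(n):
    if len(n) == 3:          # n = (p, q, r) in N3
        p, q, r = n
        return (p + 1, q, r)
    if len(n) == 2:          # n = (q, r) in N2
        q, r = n
        return (q + 1, r)
    (r,) = n                 # n = r in N1
    return (r + 1,)

print(successor((2, 5, 7)), successor((5, 7)), successor((7,)))
# -> (3, 5, 7) (6, 7) (8,)
```

Iterating the successor within \( {\textsf {N}}_3 \) sends \( {\textsf {p}}\rightarrow \infty \) first, matching the iterated limit (8.11).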

We remark that \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) is compact for \( {\textsf {n}}\in {\textsf {N}}_3 \). This property has critical importance in the proof of Proposition 8.1.

We set \( S_{{\textsf {r}}}^{m,\circ } = \{ |x| < {\textsf {r}},\, x \in {\mathbb {R}}^2\}^m \). Let \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\) be the interior of \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}})\):

$$\begin{aligned} {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})= \Big \{ \mathbf{x } \in S_{{\textsf {r}}}^{m,\circ } \, ;\, \min _{j\not =k } |x_j-x_k |> 2^{-{\textsf {p}}} ,\ \inf _{l,i} |x_l-s_i| > 2^{-{\textsf {p}}} \Big \}. \end{aligned}$$
(8.7)

For \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \) we set

$$\begin{aligned}&{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }={\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}^{\circ } := \left\{ (\mathbf{x },{\textsf {s}}) \in {\textsf {S}}_{\text {s.i.}}^{[m]} \, ;\, \ \mathbf{x } \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}}),\, {\textsf {s}} \in {\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]\right\} , \end{aligned}$$
(8.8)
$$\begin{aligned}&{\textsf {H}}[\mathbf{a }]^{\circ }= \bigcup _{{\textsf {r}}=1}^{\infty } {\textsf {H}}[\mathbf{a }]_{{\textsf {r}}}^{\circ } , \quad {\textsf {H}}[\mathbf{a }]_{{\textsf {r}}}^{\circ } = \bigcup _{{\textsf {q}}=1}^{\infty } {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}}^{\circ }, \quad {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}}^{\circ } = \bigcup _{{\textsf {p}}=1}^{\infty } {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}^{\circ } . \end{aligned}$$
(8.9)

Then \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\subset {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) and \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cup \partial {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }= {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\). Note that \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\) has a compact closure for each \( {\textsf {n}}\in {\textsf {N}}_3 \). The next lemma gives a localization of the m-labeled process.

Lemma 8.1

For each \( m \in \mathbb {N}\) the following holds:

$$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {n}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) = \infty \right) = 1 \quad \text { for } \mu _{\text {Gin}}\text { -a.s.}\, {\textsf {s}}. \end{aligned}$$
(8.10)

Here \( \tau _A \) denotes the exit time from a set A, \( (\mathbf{X }^m,{\textsf {X}}^{m*})\) is the m-labeled process given by \( (\mathbf{X },\mathbf{B })\) in Theorem 7.1, and

$$\begin{aligned} \lim _{{\textsf {n}}\rightarrow \infty } = \lim _{{\textsf {r}}\rightarrow \infty }\lim _{{\textsf {q}}\rightarrow \infty }\lim _{{\textsf {p}}\rightarrow \infty } . \end{aligned}$$
(8.11)

Proof

Let \( \mathbf{a }^+ =\{ a_{{\textsf {q}}}^+ \}_{{\textsf {q}}\in {\mathbb {N}}} \). Then from [23, Lemma 2.5 (4)], we obtain

$$\begin{aligned} \lim _{{\textsf {q}}\rightarrow \infty }\text {Cap}^{\mu _{\text {Gin}}} ( {\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]^c) = 0 . \end{aligned}$$

Then we see for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {q}}\rightarrow \infty }\tau _{{\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]}({\textsf {X}})=\infty \right) = 1 . \end{aligned}$$
(8.12)

By definition we deduce \( \tau _{{\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]}({\textsf {X}})\le \tau _{{\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]}({\textsf {X}}^{m*})\). This combined with (8.12) yields

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {q}}\rightarrow \infty }\tau _{ {\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]}({\textsf {X}}^{m*})=\infty \right) = 1. \end{aligned}$$
(8.13)

From Lemma 7.2 we have \( {\textsf {P}}_{\mu _{\text {Gin}}} (W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = 1 \); hence the tagged particles neither collide nor explode. We thus deduce for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \tau _{{\textsf {S}}_{\text {s.i.}}^{[m]}} (\mathbf{X }^m,{\textsf {X}}^{m*}) =\infty \right) = 1 , \end{aligned}$$
(8.14)
$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {r}}\rightarrow \infty } \tau _{S_{{\textsf {r}}}^{m,\circ } }(\mathbf{X }^{m}) =\infty \right) = 1 . \end{aligned}$$
(8.15)

From (8.14) we have for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {p}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) = \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) \right) = 1 . \end{aligned}$$
(8.16)

By (8.13) we see for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {q}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) = \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {r}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) \right) = 1. \end{aligned}$$
(8.17)

From (8.14) and (8.15) we see for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

$$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {r}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {r}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) = \infty \right) = 1 . \end{aligned}$$
(8.18)

Putting (8.16), (8.17), and (8.18) together we conclude (8.10). \(\square \)

Let \( b^m =(b^{m,i})_{i=1}^m\) be the drift coefficient of the SDE describing \( \mathbf{X }^m \), and let \( \mathbf{B }^m\) be the corresponding 2m-dimensional Brownian motion. Then

$$\begin{aligned}&d\mathbf{X }_t^m = d\mathbf{B }_t^m + b^m (\mathbf{X }_t^m,{\textsf {X}}_t^{m*}) dt \end{aligned}$$
(8.19)

and \( b^{m,i}\) is given by

$$\begin{aligned}&b^{m,i}(\mathbf{x },{\textsf {s}})= \frac{1}{2}{\textsf {d}}^{\mu _{\text {Gin}}} \left( x_i,\sum _{j=1,j\not =i}^m \delta _{x_j} + {\textsf {s}}\right) ,\quad \mathbf{x }=(x_1,\ldots ,x_m) . \end{aligned}$$

Let \( {\widetilde{b}}^{m}\) be a version of \( b^m =(b^{m,i})_{i=1}^m\) with respect to \( \mu _{\text {Gin}}^{[m]} \). We shall prove in Lemma 8.4 that \( b^{m,i} \) are locally in the domain of the m-labeled Dirichlet form, and we shall take a (locally) quasi-continuous version of \( b^m \) later.
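To make (8.19) concrete, the following sketch (ours, purely illustrative) runs an Euler–Maruyama discretisation of the one-labeled SDE with a truncated logarithmic drift standing in for \( b^{1}\). Freezing the environment \( {\textsf {X}}^{1*}\) is our simplification; in the text the tagged particle and the environment evolve jointly.

```python
import numpy as np

# Euler-Maruyama scheme for dX = dB + b(X, env) dt, with b a truncated
# logarithmic drift over a FROZEN environment configuration (a stand-in,
# not the exact coefficient b^1 of the text).

def euler_maruyama(x0, env, dt=1e-3, n_steps=1000, r=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    for _ in range(n_steps):
        d = x - env
        nrm = np.linalg.norm(d, axis=1)
        keep = (nrm < r) & (nrm > 1e-12)           # truncated interaction
        drift = (d[keep] / nrm[keep, None] ** 2).sum(axis=0)
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(2)
    return x

rng = np.random.default_rng(3)
env = rng.uniform(-10.0, 10.0, size=(200, 2))      # frozen environment
print(euler_maruyama([0.0, 0.0], env))
```

The local Lipschitz continuity established in Proposition 8.1 is exactly the property that controls such a scheme on the localized sets between exit times.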

Let \( \varPi \!:\!{\mathbb {R}}^{2m}\times {\textsf {S}}\!\rightarrow \!{\textsf {S}}\) be the projection \( (\mathbf{x },{\textsf {s}}) \mapsto {\textsf {s}}\). Let \( \{ {\textsf {I}}_{{\textsf {m}}}\}_{{\textsf {m}}\in \mathbb {N}} \) be an increasing sequence of closed sets in \( {\mathbb {R}}^{2m}\times {\textsf {S}}\). Then by definition

$$\begin{aligned} \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}})=&\{ {\textsf {s}}\in {\textsf {S}}\, ; \, {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\cap \big ({\mathbb {R}}^{2m}\times \{ {\textsf {s}}\} \big ) \not =\emptyset \} . \end{aligned}$$
(8.20)

We set

$$\begin{aligned} \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle =&\bigcup _{{\textsf {s}}\in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}})} {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\times \{ {\textsf {s}} \} . \end{aligned}$$
(8.21)

For \( {\textsf {n}}=({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \), let \( c_{2} ({\textsf {m}},{\textsf {n}}) \in [0,\infty ]\) be the constant given by

$$\begin{aligned} c_{2}&= \sup \Bigg \{ \frac{|{\widetilde{b}}^{m}(\mathbf{x },{\textsf {s}}) - {\widetilde{b}}^{m}(\mathbf{y },{\textsf {s}}) |}{|\mathbf{x }-\mathbf{y }|} ; \, \ \mathbf{x }\not =\mathbf{y } ,\ {\textsf {s}} \in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}),\, \nonumber \\&(\mathbf{x },{\textsf {s}}) , (\mathbf{y },{\textsf {s}}) \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}}),\ (\mathbf{x },{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{y },{\textsf {s}}) \Bigg \}. \end{aligned}$$
(8.22)

Here \( (\mathbf{x },{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{y },{\textsf {s}}) \) means \( \mathbf{x } \) and \( \mathbf{y } \) are in the same connected component of \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\).
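The supremum (8.22) can be approximated empirically: maximising the difference quotient of a drift-like field over sampled pairs of points in one region gives a finite proxy for \( c_{2} ({\textsf {m}},{\textsf {n}})\). In the following sketch (ours, illustrative) the field b is a stand-in truncated logarithmic drift, not the exact \( {\widetilde{b}}^{m}\) of the text.

```python
import numpy as np

# Empirical Lipschitz-constant estimate: sup over sampled pairs of
# |b(x,s) - b(y,s)| / |x - y|, with b a truncated logarithmic drift.

def b(x, s, r=5.0):
    d = x - s
    nrm = np.linalg.norm(d, axis=1)
    keep = (nrm < r) & (nrm > 0)
    return (d[keep] / nrm[keep, None] ** 2).sum(axis=0)

def lipschitz_estimate(s, pts):
    vals = [b(p, s) for p in pts]
    best = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            den = np.linalg.norm(pts[i] - pts[j])
            if den > 0:
                best = max(best, float(np.linalg.norm(vals[i] - vals[j]) / den))
    return best

rng = np.random.default_rng(2)
s = rng.uniform(-10.0, 10.0, size=(300, 2))    # environment configuration
pts = rng.uniform(-0.5, 0.5, size=(30, 2))     # sample points in one region
print(lipschitz_estimate(s, pts))
```

The separation threshold \( 2^{-{\textsf {p}}}\) built into \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\) is what keeps such difference quotients bounded in the setting of the proposition.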

Proposition 8.1

There exist a \( \mu _{\text {Gin}}^{[m]} \)-version \( {\widetilde{b}}^{m}\) of \( b^m \) and an increasing sequence of closed sets \( \{ {\textsf {I}}_{{\textsf {m}}}\}_{{\textsf {m}}\in \mathbb {N}} \) such that for each \( {\textsf {m}}\in \mathbb {N}\) and \( {\textsf {n}}\in {\textsf {N}}_3 \)

$$\begin{aligned}&c_{2} ({\textsf {m}},{\textsf {n}}) < \infty , \end{aligned}$$
(8.23)
$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {n}}\rightarrow \infty } \lim _{{\textsf {m}}\rightarrow \infty } \tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle } ( \mathbf{X }^m,{\textsf {X}}^{m*}) = \infty \right) = 1 \quad \text { for } \mu _{\text {Gin}}\text {-a.s.}\, {\textsf {s}}. \end{aligned}$$
(8.24)

Here \( \tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle }(\mathbf{X }^m,{\textsf {X}}^{m*}) \) denotes the exit time of \( (\mathbf{X }^m,{\textsf {X}}^{m*})\) from the set \(\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \) and \( (\mathbf{X }^m,{\textsf {X}}^{m*})\) is given by \( (\mathbf{X },\mathbf{B })\) in Theorem 7.1 starting at \( {\mathfrak {l}} ({\textsf {s}})\).

We shall prove Proposition 8.1 in Sect. 8.2.

From \( c_{2} ({\textsf {m}},{\textsf {n}}) < \infty \) we see that \( {\widetilde{b}}^{m}(\mathbf{x },{\textsf {s}}) \), restricted to each connected component of \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\), is Lipschitz continuous in \( \mathbf{x }\) for each fixed \( {\textsf {s}}\), with Lipschitz constant bounded by \( c_{2} ({\textsf {m}},{\textsf {n}})\). Thus, Proposition 8.1 implies the local Lipschitz continuity of the coefficients of the m-labeled SDE (8.70). Using this, we shall obtain the pathwise uniqueness and the existence of a strong solution of the finite-dimensional SDEs.
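The role local Lipschitz continuity plays in pathwise uniqueness can be illustrated with a minimal numerical sketch (not from the text): two Euler–Maruyama solutions driven by the same Brownian path, with a hypothetical one-dimensional locally Lipschitz drift standing in for the coefficients of (8.70), remain close when started close, by a Gronwall-type estimate.

```python
import numpy as np

def euler_maruyama(x0, drift, dB, dt):
    """Euler-Maruyama scheme: X_{k+1} = X_k + drift(X_k) dt + dB_k."""
    x = np.empty(len(dB) + 1)
    x[0] = x0
    for k, db in enumerate(dB):
        x[k + 1] = x[k] + drift(x[k]) * dt + db
    return x

# Hypothetical locally Lipschitz drift (a stand-in, NOT the coefficient of (8.70)).
drift = lambda z: -z / (1.0 + z * z)          # |drift'(z)| <= 1, so L = 1

rng = np.random.default_rng(0)
dt, n = 1e-3, 10_000
dB = rng.normal(0.0, np.sqrt(dt), n)          # one Brownian path, shared below

x = euler_maruyama(0.0, drift, dB, dt)
y = euler_maruyama(1e-6, drift, dB, dt)       # perturbed initial condition

# Gronwall-type stability: |X_t - Y_t| <= |x0 - y0| exp(L t) with t = n*dt = 10,
# so the gap never exceeds 1e-6 * e^{10}; pathwise the solutions coincide as
# the initial gap tends to zero.
gap = np.max(np.abs(x - y))
assert gap <= 1e-6 * np.exp(10.0) + 1e-12
```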

The idea of the proof of Proposition 8.1 is twofold. First, \( b^m \) belong to the domain of the Dirichlet form, so we can take quasi-continuous versions of them, which enables us to control the maximal norm with a suitable cut-off given by \( {\textsf {I}}_{{\textsf {m}}}\). Second, we Taylor expand \( b^m \) using the logarithmic interaction potential. We note that differentiation improves the integrability of the coefficients at infinity, which is a key point of the proof of Proposition 8.1. We refer to Sect. 11.3 for the Taylor expansion, and to Sect. 8.2 for the specific calculation in the case of the Ginibre interacting Brownian motion.

Proof of local Lipschitz continuity of coefficients: Proposition 8.1

This section proves Proposition 8.1, which completes the proof of Theorem 7.1. For simplicity we give the proof only for \( m = 1 \). Let \( {\textsf {N}}_3 \) be as in (8.5). Let \( \chi _{{\textsf {n}}}\) (\( {\textsf {n}}\in {\textsf {N}}_3 \)) be the cut-off function on \( {\mathbb {R}}^2 \times {\textsf {S}}\) introduced in (11.14) with \( m = 1 \). Then by Lemma 11.4 the function \( \chi _{{\textsf {n}}}\) satisfies the following.

$$\begin{aligned}&\chi _{{\textsf {n}}}(x,{\textsf {s}}) = {\left\{ \begin{array}{ll} 0 &{} \text { for } (x,{\textsf {s}}) \not \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}\\ 1&{} \text { for } (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\end{array}\right. } ,\quad \chi _{{\textsf {n}}}\in {\mathscr {D}}_{\text {Gin}}^{[1]} ,\nonumber \\&0 \le \chi _{{\textsf {n}}}(x,{\textsf {s}}) \le 1 ,\quad |\nabla _x \chi _{{\textsf {n}}}(x,{\textsf {s}}) |^2 \le c_{16} ,\quad {\mathbb {D}}[\chi _{{\textsf {n}}},\chi _{{\textsf {n}}}] (x,{\textsf {s}}) \le c_{17} . \end{aligned}$$
(8.25)

Here \( c_{16}({\textsf {n}})\) and \( c_{17}({\textsf {n}})\) are the positive constants in Lemma 11.4, independent of \( (x,{\textsf {s}}) \), and \({\mathscr {D}}_{\text {Gin}}^{[1]} \) is the domain of the Dirichlet form of the 1-labeled process \( (\mathbf{X }^1,{\textsf {X}}^{1*})\) given by (8.1). Moreover, \( \nabla _x = (\frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2})\) for \( x =(x_1,x_2) \in {\mathbb {R}}^2\).

We refine the result in Lemma 7.3 from \( L_{\text {loc}}^{p}(\mu ^{[1]}_{\text {Gin}})\) (\( 1\le p < 2\)) to \( L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\).

Lemma 8.2

\( {\textsf {d}}^{\text {Gin} }\in L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\) holds, and the convergences in (7.5) and (7.6) take place in \( L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\) for each \( {\textsf {n}}\in {\textsf {N}}_3 \).

Proof

From [26, Lemma 72], we deduce the convergence in \( L_{\text {loc}}^{2}(\mu ^{[1]}_{\text {Gin}})\) of the series

$$\begin{aligned}&{\textsf {d}}^{\text {Gin} }_{1+} (x, {\textsf {s}}) := 2 \lim _{R\rightarrow \infty } \sum _{1 \le |x-s_i|< R} \frac{x-s_i}{|x-s_i|^2} , \\&{\textsf {d}}^{\text {Gin} }_{2+} (x, {\textsf {s}}) := -2x + 2 \lim _{R\rightarrow \infty } \sum _{1 \le |s_i| < R} \frac{x-s_i}{|x-s_i|^2}. \end{aligned}$$

By the definition of \( \chi _{{\textsf {n}}}\), this yields the convergence in \( L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\). Because the weight \( \chi _{{\textsf {n}}}\) cuts off the sum around x, we easily see that

$$\begin{aligned}&{\textsf {d}}^{\text {Gin} }_{1-} (x, {\textsf {s}}) := 2 \sum _{|x-s_i|< 1 } \frac{x-s_i}{|x-s_i|^2} \in L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}}), \\&{\textsf {d}}^{\text {Gin} }_{2-} (x, {\textsf {s}}) := -2x + 2 \sum _{ |s_i| < 1 } \frac{x-s_i}{|x-s_i|^2} \in L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}}). \end{aligned}$$

As \( {\textsf {d}}^{\text {Gin} }= {\textsf {d}}^{\text {Gin} }_{1+} + {\textsf {d}}^{\text {Gin} }_{1-} = {\textsf {d}}^{\text {Gin} }_{2+} + {\textsf {d}}^{\text {Gin} }_{2-} \), we conclude Lemma 8.2. \(\square \)

Let \( \varphi _{R}\in C_0^{\infty }({\mathbb {R}}^2) \) be a cut-off function such that

$$\begin{aligned} 0 \le \varphi _{R}(x) \le 1 , \quad |\nabla \varphi _{R}(x)| \le 2 , \quad \varphi _{R}(x) = {\tilde{\varphi }}_{R}(|x|) \quad \text { for all } x \in {\mathbb {R}}^2, \end{aligned}$$
(8.26)

where \( {\tilde{\varphi }}_{R}\in C_0^{\infty }({\mathbb {R}})\) is such that

$$\begin{aligned}&{\tilde{\varphi }}_{R}(t )= {\left\{ \begin{array}{ll}1 &{}\text {for } |t| \le R, \\ 0&{}\text {for } |t| \ge R+1 \end{array}\right. } \quad \text { for all } t \in {\mathbb {R}}. \end{aligned}$$
(8.27)

We set

$$\begin{aligned}&{\textsf {d}}^{\text {Gin} }_{R} (x,{\textsf {s}}) = -2 x + 2 \sum _{i} \varphi _{R}(s_i) \frac{x-s_i}{|x-s_i|^2} . \end{aligned}$$
(8.28)

We write \( {\textsf {d}}^{\text {Gin} }_{R} = {}^t ({\textsf {d}}^{\text {Gin} }_{{R},1},{\textsf {d}}^{\text {Gin} }_{{R},2})\) and \( \partial _p =\frac{\partial }{\partial x_p}\), where \( x= {}^t(x_1,x_2)\in {\mathbb {R}}^2\). Then a straightforward calculation shows for \( j,k,l \in \{ 1,2 \} \)

$$\begin{aligned}&\partial _j {\textsf {d}}^{\text {Gin} }_{{R},k} (x,{\textsf {s}}) = -2 \delta _{jk} + 2 \sum _{i} \varphi _{R}(s_i) \frac{A_{jk} (x-s_i)}{|x-s_i|^4}, \end{aligned}$$
(8.29)
$$\begin{aligned}&\partial _j\partial _k {\textsf {d}}^{\text {Gin} }_{{R},l} (x,{\textsf {s}})= 2 \sum _{i} \varphi _{R}(s_i) \partial _j \left\{ \frac{A_{kl} (x-s_i)}{|x-s_i|^4} \right\} , \end{aligned}$$
(8.30)

where \( A\!:\!{\mathbb {R}}^2\!\rightarrow \!{\mathbb {R}}^4 \) is the \( 2\times 2\) matrix-valued function defined by

$$\begin{aligned} A (x) = [A_{ij}(x)]_{i,j=1}^2= \begin{pmatrix} -x_1^2+x_2^2 &{}\quad \ -2x_1x_2 \\ -2x_1x_2 &{} \quad \ x_1^2-x_2^2 \end{pmatrix} \quad \text { for } x = (x_1,x_2) \in {\mathbb {R}}^2. \end{aligned}$$
(8.31)

We easily see there exist constants \(c_{3}\) and \(c_{4}\) such that for all \( x \in {\mathbb {R}}^2\)

$$\begin{aligned}&\frac{|A_{kl} (x )|}{|x |^4} \le \frac{c_{3}}{|x |^{2}} , \end{aligned}$$
(8.32)
$$\begin{aligned}&\Bigg | \partial _j \Big \{ \frac{A_{kl} (x )}{|x |^4} \Bigg \} \Bigg | \le \frac{c_{4}}{|x |^{3}} . \end{aligned}$$
(8.33)
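Identity (8.29) says that the Jacobian of \( x \mapsto x/|x|^2 \) is \( A(x)/|x|^4 \), and (8.32) in fact holds with \( c_{3} = 1 \), since each entry of \( A(x) \) is bounded by \( |x|^2 \). A quick finite-difference check (a Python sketch, not part of the proof):

```python
import numpy as np

def A(x):
    """The 2x2 matrix A(x) of (8.31)."""
    x1, x2 = x
    return np.array([[-x1**2 + x2**2, -2 * x1 * x2],
                     [-2 * x1 * x2,    x1**2 - x2**2]])

def field(x):
    """x / |x|^2, the summand of (8.28) before differentiation."""
    return x / np.dot(x, x)

rng = np.random.default_rng(1)
h = 1e-6
for _ in range(100):
    rad, th = rng.uniform(0.5, 2.0), rng.uniform(0.0, 2 * np.pi)
    x = np.array([rad * np.cos(th), rad * np.sin(th)])
    r2 = np.dot(x, x)
    # (8.29): partial_j (x_k / |x|^2) = A_{jk}(x) / |x|^4 (A is symmetric),
    # checked against central differences.
    jac = np.array([[(field(x + h * e) - field(x - h * e))[k] / (2 * h)
                     for e in np.eye(2)] for k in range(2)])
    assert np.allclose(jac, A(x) / r2**2, atol=1e-4)
    # (8.32) with c_3 = 1: |A_kl(x)| <= |x|^2 entrywise.
    assert np.max(np.abs(A(x))) <= r2 + 1e-12
```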

We write \( {\textsf {d}}^{\text {Gin} }= {}^t ({\textsf {d}}^{\text {Gin} }_1,{\textsf {d}}^{\text {Gin} }_2)\) similarly as \( {\textsf {d}}^{\text {Gin} }_{R} = {}^t \left( {\textsf {d}}^{\text {Gin} }_{{R},1},{\textsf {d}}^{\text {Gin} }_{{R},2}\right) \).

Lemma 8.3

For any \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \) and \( j,k,l \in \{ 1,2 \} \)

$$\begin{aligned}&{\textsf {d}}^{\text {Gin} }= \lim _{{R}\rightarrow \infty } {\textsf {d}}^{\text {Gin} }_{R} \text { in } L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\end{aligned}$$
(8.34)
$$\begin{aligned}&\partial _j {\textsf {d}}^{\text {Gin} }_{k}= \lim _{{R}\rightarrow \infty } \partial _j {\textsf {d}}^{\text {Gin} }_{{R},k} \quad \text { weakly in } L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}}), \end{aligned}$$
(8.35)
$$\begin{aligned}&\partial _j\partial _k {\textsf {d}}^{\text {Gin} }_l (x,{\textsf {s}}) = \lim _{{R}\rightarrow \infty } \sum _{|s_i|\le {R}} \partial _j \bigg \{ \frac{A_{kl} (x-s_i)}{|x-s_i|^4} \bigg \} \quad \text { for each } (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}. \end{aligned}$$
(8.36)

Here the sum converges absolutely and uniformly in \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\). In particular,

$$\begin{aligned}&\sup \{ |\partial _j\partial _k {\textsf {d}}^{\text {Gin} }_l (x,{\textsf {s}}) |\, ;\, (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\} < \infty \end{aligned}$$
(8.37)

and \( \partial _j\partial _k {\textsf {d}}^{\text {Gin} }_l (x,{\textsf {s}}) \) is continuous in x for each \( {\textsf {s}}\) on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\).

Proof

We deduce (8.34) from Lemma 8.2, (8.26), (8.27), and (8.28) immediately.

We next prove (8.35). For this purpose it is enough to show that the summation term in (8.29) is bounded in \( L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\) as \( {R} \rightarrow \infty \) for each \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \). Indeed, from this we deduce that the sequence \( \{\partial _j {\textsf {d}}^{\text {Gin} }_{{R},k} \}\) is relatively compact with respect to weak convergence in \( L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\) and that the limit point is unique by (8.34).

Let \( \rho _{\text {Gin}}^1 \) and \( \rho _{\text {Gin},x}^1\) be the one-point correlation functions of \( \mu _{\text {Gin}}\) and \( \mu _{\text {Gin},x}\) with respect to the Lebesgue measure, respectively. Then

$$\begin{aligned} \rho _{\text {Gin}}^1(x) = \frac{1}{\pi },\quad \rho _{\text {Gin},x}^1(s) = \frac{1}{\pi }- \frac{1}{\pi } e^{-|x-s|^2 } . \end{aligned}$$
(8.38)

Here \( \rho _{\text {Gin}}^1(x) = \frac{1}{\pi }\) follows from (7.1) and (7.2), and \( \rho _{\text {Gin},x}^1(s) = \frac{1}{\pi }- \frac{1}{\pi } e^{-|x-s|^2 }\) follows from the formula of Shirai–Takahashi [36], which states that the determinantal kernel \( {\textsf {K}}_{\text {Gin},x}\) of the Palm measure \( \mu _{\text {Gin},x}\) is given by

$$\begin{aligned}&{\textsf {K}}_{\text {Gin},x}(y,z) = \{{\textsf {K}}_{\text {Gin}}(y,z) {\textsf {K}}_{\text {Gin}}(x,x) - {\textsf {K}}_{\text {Gin}}(y,x){\textsf {K}}_{\text {Gin}}(x,z) \}/{\textsf {K}}_{\text {Gin}}(x,x) . \end{aligned}$$

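Formula (8.38) can be sanity-checked numerically. The explicit Ginibre kernel \( {\textsf {K}}_{\text {Gin}}(y,z) = \frac{1}{\pi } e^{y {\bar{z}} - (|y|^2+|z|^2)/2} \) in complex coordinates is an assumption of this sketch (it is not quoted above); on the diagonal, the reduced Palm kernel then reproduces \( \rho _{\text {Gin},x}^1 \):

```python
import cmath, math

def K(y, z):
    """Ginibre kernel in complex coordinates (assumed explicit form)."""
    return cmath.exp(y * z.conjugate()) * math.exp(-(abs(y)**2 + abs(z)**2) / 2) / math.pi

def rho_palm(x, s):
    """One-point density of the Palm measure mu_{Gin,x}: the diagonal value
    K_{Gin,x}(s,s) = K(s,s) - K(s,x) K(x,s) / K(x,x)."""
    return (K(s, s) - K(s, x) * K(x, s) / K(x, x)).real

x, s = 0.3 + 0.7j, -1.1 + 0.4j
expected = (1.0 - math.exp(-abs(x - s)**2)) / math.pi   # formula (8.38)
assert abs(rho_palm(x, s) - expected) < 1e-12
```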
Let \( H(x,{\textsf {p}}) =\{ s \in {\mathbb {R}}^2;|x-s|\ge 2^{-{\textsf {p}}}\}\), where \( {\textsf {p}}\in \mathbb {N}\) and \( x \in {\mathbb {R}}^2\). Then we see

$$\begin{aligned}&\Big | E^{\mu _{\text {Gin},x}} \left[ \left\langle 1_{H(x,{\textsf {p}})} \varphi _{R}\frac{A_{kl} (x- \cdot )}{|x-\cdot |^4}, {\textsf {s}}\right\rangle \right] \Big | \nonumber \\&\quad = \Big | \int _{{\mathbb {R}}^2}1_{H(x,{\textsf {p}}) }(s) \varphi _{R}(s) \frac{A_{kl} (x-s)}{|x-s|^4} \rho _{\text {Gin},x}^1(s) ds \Big | \nonumber \\&\quad \le \Big | \int _{{\mathbb {R}}^2}1_{H(x,{\textsf {p}}) }(s) \varphi _{R}(s) \frac{A_{kl} (x-s)}{|x-s|^4} \frac{1}{\pi }ds \Big | + \int _{{\mathbb {R}}^2}1_{H(x,{\textsf {p}}) }(s) \frac{c_{3}}{|x-s|^{2}} \frac{1}{\pi } e^{-|x-s|^2 } ds. \end{aligned}$$
(8.39)

Here we used (8.38) and (8.32) for the last line. Note that by (8.26) and (8.31)

$$\begin{aligned}&\int _{{\mathbb {R}}^2}1_{H(0,{\textsf {p}}) }(s) \varphi _{R}(s) \frac{A_{kl} (0-s)}{|0-s|^4} \frac{1}{\pi }ds = 0 . \end{aligned}$$
(8.40)

Then we easily see from (8.40)

$$\begin{aligned}&\Bigg | \int _{{\mathbb {R}}^2}1_{H(x,{\textsf {p}}) }(s) \varphi _{R}(s) \frac{A_{kl} (x-s)}{|x-s|^4} \frac{1}{\pi }ds \Bigg | \nonumber \\&\quad = \Bigg | \int _{{\mathbb {R}}^2} \Bigg \{ 1_{H(x,{\textsf {p}}) }(s) \frac{A_{kl} (x-s)}{|x-s|^4} - 1_{H(0,{\textsf {p}}) }(s) \frac{A_{kl} (0-s)}{|0-s|^4} \Bigg \} \varphi _{R}(s) \frac{1}{\pi }ds \Bigg | \nonumber \\&\quad \le \int _{{\mathbb {R}}^2}\Big | 1_{H(x,{\textsf {p}}) }(s) \frac{A_{kl} (x-s)}{|x-s|^4} - 1_{H(0,{\textsf {p}}) }(s) \frac{A_{kl} (0-s)}{|0-s|^4} \Big | \frac{1}{\pi }ds. \end{aligned}$$
(8.41)

We set \( I(x,{\textsf {p}}) = \{ s \in {\mathbb {R}}^2;|x-s| < 2^{-{\textsf {p}}}\}\). Then \( I(x,{\textsf {p}})= {\mathbb {R}}^2\backslash H(x,{\textsf {p}})\). Hence

$$\begin{aligned}&1_{H(x,{\textsf {p}}) }(s) \frac{A_{kl} (x-s)}{|x-s|^4} - 1_{H(0,{\textsf {p}}) }(s) \frac{A_{kl} (0-s)}{|0-s|^4} \nonumber \\&\quad = 1_{H(x,{\textsf {p}}) }(s) 1_{I(0,{\textsf {p}}) }(s) \frac{A_{kl} (x-s)}{|x-s|^4} - 1_{I(x,{\textsf {p}}) }(s) 1_{H(0,{\textsf {p}}) }(s) \frac{A_{kl} (0-s)}{|0-s|^4} \nonumber \\&\qquad + 1_{H(x,{\textsf {p}}) }(s) 1_{H(0,{\textsf {p}}) }(s) \Bigg ( \frac{A_{kl} (x-s)}{|x-s|^4}- \frac{A_{kl} (0-s)}{|0-s|^4}\Bigg ) . \end{aligned}$$
(8.42)

It is clear that

$$\begin{aligned}&\sup _{|x| \le {\textsf {r}}+1 } \int _{{\mathbb {R}}^2}\Big | 1_{H(x,{\textsf {p}}) }(s)1_{I(0,{\textsf {p}}) }(s) \frac{A_{kl} (x-s)}{|x-s|^4} \Big | \frac{1}{\pi } ds< \infty ,\nonumber \\&\sup _{|x| \le {\textsf {r}}+1 } \int _{{\mathbb {R}}^2}\Big | 1_{I(x,{\textsf {p}}) }(s) 1_{H(0,{\textsf {p}}) }(s) \frac{A_{kl} (0-s)}{|0-s|^4} \Big | \frac{1}{\pi } ds< \infty ,\nonumber \\&\sup _{|x| \le {\textsf {r}}+1 } \int _{|s| \le 2({\textsf {r}}+1 )} 1_{H(x,{\textsf {p}}) }(s) 1_{H(0,{\textsf {p}}) }(s) \Big | \frac{A_{kl} (x-s)}{|x-s|^4}- \frac{A_{kl} (0-s)}{|0-s|^4} \Big | \frac{1}{\pi } ds < \infty . \end{aligned}$$
(8.43)

We write \( x =(x_1,x_2) \in {\mathbb {R}}^2\). Using (8.33), we see for all \( |x| \le {\textsf {r}}+1\) and \(| s | > 2( {\textsf {r}}+ 1 )\)

$$\begin{aligned} \Big |\frac{A_{kl} (x-s)}{|x-s|^4} - \frac{A_{kl} (0-s)}{|0-s|^4}\Big |&= \Big | \int _0^1 \sum _{j=1}^2 x_j \partial _j \Big \{ \frac{A_{kl} (\cdot )}{|\cdot |^4} \Big \} (tx -s) dt \Big | \\&\le \int _0^1 \sum _{j=1}^2 |x_j| \sup _{|x| \le {\textsf {r}}+1 } \Big \{ \frac{c_{4}}{|tx -s |^3 } \Big \} dt \quad \text { by } (8.33) \\&\le 2({\textsf {r}}+ 1 ) \Big \{ \frac{c_{4}2^3}{|s |^3 } \Big \}. \end{aligned}$$

Here we used \( | s | /2 < |tx -s |\) in the last line. This follows from \( 0 \le t \le 1 \), \( |x| \le {\textsf {r}}+1\), and \( | s | > 2( {\textsf {r}}+ 1 ) \). Note that \( 1_{H(x,{\textsf {p}}) }(s) 1_{H(0,{\textsf {p}}) }(s) =1\) for \( |x| \le {\textsf {r}}+1\) and \(| s | > 2( {\textsf {r}}+ 1 )\). Hence we deduce

$$\begin{aligned}&\sup _{|x| \le {\textsf {r}}+1 } \int _{ | s |> 2( {\textsf {r}}+ 1 )} 1_{H(x,{\textsf {p}}) }(s) 1_{H(0,{\textsf {p}}) }(s) \Big | \frac{A_{kl} (x-s)}{|x-s|^4}- \frac{A_{kl} (0-s)}{|0-s|^4} \Big | \frac{1}{\pi } ds \nonumber \\&\quad \quad \quad \quad \le c_{4}2^4 ({\textsf {r}}+1)\int _{ | s | > 2( {\textsf {r}}+ 1 )} \frac{1}{|s|^3} \frac{1}{\pi }ds < \infty . \end{aligned}$$
(8.44)
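The finiteness in (8.44) rests on the closed form \( \frac{1}{\pi }\int _{|s|>M} |s|^{-3} ds = 2/M \), obtained in polar coordinates. A quick numerical check of this closed form (a Python sketch):

```python
import math
import numpy as np

def tail_integral(M, R_cut=1e6, n=200_000):
    """(1/pi) * integral of |s|^{-3} over {s in R^2 : |s| > M}, computed in polar
    coordinates with the substitution u = log r (midpoint rule), truncated at R_cut."""
    u = np.linspace(math.log(M), math.log(R_cut), n, endpoint=False)
    du = u[1] - u[0]
    r = np.exp(u + du / 2.0)
    # integrand r^{-3}, area Jacobian r, extra factor r from du = dr/r;
    # the angular integral contributes 2*pi, and the prefactor 1/pi leaves 2.
    return 2.0 * np.sum(r**-3 * r * r) * du

M = 4.0            # e.g. M = 2*(r+1) with r = 1, as in (8.44)
assert abs(tail_integral(M) - 2.0 / M) < 1e-3
```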

Collecting (8.41)–(8.44) together, we obtain

$$\begin{aligned}&\limsup _{{R}\rightarrow \infty } \sup _{|x| \le {\textsf {r}}+1 } \left| \int _{{\mathbb {R}}^2}1_{H(x,{\textsf {p}}) }(s) \varphi _{R}(s) \frac{A_{kl} (x-s)}{|x-s|^4} \frac{1}{\pi }ds \right| < \infty . \end{aligned}$$
(8.45)

Clearly, we have

$$\begin{aligned}&\int _{{\mathbb {R}}^2}1_{H(x,{\textsf {p}}) }(s) \frac{c_{3}}{|x-s|^{2}} \frac{1}{\pi } e^{-|x-s|^2 } ds < \infty . \end{aligned}$$
(8.46)

Putting (8.45) and (8.46) into (8.39), we obtain

$$\begin{aligned}&\limsup _{{R}\rightarrow \infty } \sup _{|x| \le {\textsf {r}}+1 } E^{\mu _{\text {Gin},x}} \left[ \left\langle 1_{H(x,{\textsf {p}})} \varphi _{R}\frac{A_{kl} (x- \cdot )}{|x-\cdot |^4}, {\textsf {s}}\right\rangle \right] < \infty . \end{aligned}$$
(8.47)

From the inequality \( \text {Var}[\langle f,{\textsf {s}}\rangle ] \le \int |f|^2 \rho ^1(s)ds \), which is valid for determinantal random point fields with Hermitian symmetric kernels, we have

$$\begin{aligned}&\limsup _{{R}\rightarrow \infty } \sup _{|x| \le {\textsf {r}}+1 } \text {Var}^{\mu _{\text {Gin},x}} \left[ \left\langle 1_{H(x,{\textsf {p}})} \varphi _{R}\frac{A_{kl} (x- \cdot )}{|x-\cdot |^4}, {\textsf {s}}\right\rangle \right] \nonumber \\&\quad \le \limsup _{{R}\rightarrow \infty } \sup _{|x| \le {\textsf {r}}+1 } \int _{{\mathbb {R}}^2}\left| 1_{H(x,{\textsf {p}})} (s)\varphi _{R}(s) \frac{A_{kl} (x-s)}{|x-s|^4} \right| ^2\rho _{\text {Gin},x}^1(s)ds \nonumber \\&\quad < \infty \quad \text {by }(8.32)\quad \text { and }\quad (8.38). \end{aligned}$$
(8.48)

Putting (8.47) and (8.48) together we immediately deduce

$$\begin{aligned}&\limsup _{{R}\rightarrow \infty } \sup _{|x| \le {\textsf {r}}+1 } E^{\mu _{\text {Gin},x}} \left[ \Big |\langle 1_{H(x,{\textsf {p}})} \varphi _{R}\frac{A_{kl} (x- \cdot )}{|x-\cdot |^4}, {\textsf {s}}\rangle \Big |^2 \right] < \infty . \end{aligned}$$
(8.49)

From (8.25), (8.38), (8.49) and recalling \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \) we easily obtain

$$\begin{aligned}&\limsup _{{R}\rightarrow \infty }\int \Big | 2 \sum _{i} \varphi _{R}(s_i) \frac{A_{kl} (x-s_i)}{|x-s_i|^4} \Big |^2 \chi _{{\textsf {n}}}^2 d\mu _{\text {Gin}}^{[1]}< \infty . \end{aligned}$$
(8.50)

Then from (8.29) and (8.50) with a simple calculation we see

$$\begin{aligned}&\limsup _{{R}\rightarrow \infty }\int | \partial _j {\textsf {d}}^{\text {Gin} }_{{R},k} |^2 \chi _{{\textsf {n}}}^2 d\mu _{\text {Gin}}^{[1]}< \infty . \end{aligned}$$

Hence \( \{\partial _j {\textsf {d}}^{\text {Gin} }_{{R},k} \}\) is relatively compact in the weak topology in \( L^2 (\chi _{{\textsf {n}}}^2\mu ^{[1]}_{\text {Gin}})\) for each \( {\textsf {n}}=({\textsf {p}},{\textsf {q}},{\textsf {r}})\in {\textsf {N}}_3 \). This and Lemma 8.2 yield (8.35).

We next prove (8.36) and (8.37). We see from (8.33)

$$\begin{aligned}&\sum _i \Big | \partial _j \bigg \{ \frac{A_{kl} (x-s_i)}{|x-s_i|^4} \bigg \} \Big | \le c_{4}\sum _i \frac{1}{|x-s_i|^{3}} . \end{aligned}$$
(8.51)

We have for any \( (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\)

$$\begin{aligned} \sum _{i }\frac{1}{|x-s_i|^{3}} =&\sum _{|s_i|< {\textsf {r}}+1}\frac{1}{|x-s_i|^{3}} + \sum _{ t={\textsf {r}}+2}^{\infty } \, \sum _{t- 1 \le |s_i|< t} \frac{1}{|x-s_i|^{3}} \nonumber \\ \le&2^{3{\textsf {p}}}{\textsf {s}}(S_{{\textsf {r}}+1} ) + \sum _{ t={\textsf {r}}+2 }^{\infty }\, \sum _{t- 1 \le |s_i| < t} \frac{1 }{ ( |s_i| - {\textsf {r}})^{ 3 }} . \end{aligned}$$
(8.52)

Here we used \( |x-s_i|\ge 2^{-{\textsf {p}}} \) and \( |x| \le {\textsf {r}}\). By a straightforward calculation we have

$$\begin{aligned}&\sum _{ t={\textsf {r}}+2 }^{\infty }\, \sum _{t- 1 \le |s_i| < t} \frac{1 }{ ( |s_i| - {\textsf {r}})^{ 3 }} \nonumber \\&\quad \le \sum _{ t={\textsf {r}}+2 }^{\infty } \frac{{\textsf {s}}(S_{t} ) - {\textsf {s}} (S_{t- 1 })}{ (t- 1 -{\textsf {r}})^{3 }} \nonumber \\&\quad \le \lim _{R\rightarrow \infty } \Big \{ \frac{{\textsf {s}} (S_R ) }{(R -1-{\textsf {r}})^{3}} + \sum _{ t={\textsf {r}}+3}^{R} {\textsf {s}}(S_{t- 1 }) \Bigg \{ \frac{1}{ ( t-2-{\textsf {r}})^{3}}- \frac{ 1 }{ ( t-1-{\textsf {r}})^{3}} \Big \} \Bigg \} . \end{aligned}$$
(8.53)

Recall that \( {\textsf {s}} (S_{t} ) \le a_{{\textsf {q}}}( t) \) by \( (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\). Then (8.51)–(8.53) yield

$$\begin{aligned}&\sum _i \Big | \partial _j \big \{ \frac{A_{kl} (x-s_i)}{|x-s_i|^4} \big \} \Big | \le c_{4}2^{3{\textsf {p}}}a_{{\textsf {q}}}({\textsf {r}}+1) \nonumber \\&\quad + c_{4} \lim _{R\rightarrow \infty } \Bigg \{ \frac{a_{{\textsf {q}}}(R)}{(R -1-{\textsf {r}})^{3}} + \ \sum _{ t={\textsf {r}}+3}^{R} a_{{\textsf {q}}}(t- 1 ) \Big \{ \frac{1}{ ( t-2-{\textsf {r}})^{3}}- \frac{ 1 }{ ( t-1-{\textsf {r}})^{3}} \Big \} \Bigg \}. \end{aligned}$$
(8.54)

Because \( a_{{\textsf {q}}}(t) = k t^2 \), the sum in (8.54) converges uniformly on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\). Hence, we obtain (8.36) and (8.37) from (8.26), (8.30), and (8.54) immediately. The last claim follows from (8.36) and the uniform convergence of the series in (8.36). \(\square \)
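The second inequality in (8.53) is summation by parts (Abel summation) applied to the counting function \( {\textsf {s}}(S_t) \), followed by dropping a subtracted boundary term. The underlying finite-\( R \) identity can be checked numerically with an arbitrary nondecreasing counting function (a hypothetical stand-in for \( {\textsf {s}}(S_t) \)):

```python
import numpy as np

def abel(r, R, N):
    """Summation-by-parts identity behind (8.53), with a(t) = 1/(t-1-r)^3:
       sum_{t=r+2}^{R} a(t) (N(t) - N(t-1))
         = a(R) N(R) - a(r+2) N(r+1) + sum_{t=r+3}^{R} N(t-1) (a(t-1) - a(t)).
       N is a dict t -> N(t), defined for t = r+1, ..., R."""
    a = lambda t: 1.0 / (t - 1 - r) ** 3
    lhs = sum(a(t) * (N[t] - N[t - 1]) for t in range(r + 2, R + 1))
    rhs = (a(R) * N[R] - a(r + 2) * N[r + 1]
           + sum(N[t - 1] * (a(t - 1) - a(t)) for t in range(r + 3, R + 1)))
    return lhs, rhs

rng = np.random.default_rng(2)
r, R = 1, 60
vals = np.cumsum(rng.integers(0, 5, size=R - r))   # nondecreasing, like s(S_t)
N = {t: int(v) for t, v in zip(range(r + 1, R + 1), vals)}

lhs, rhs = abel(r, R, N)
assert abs(lhs - rhs) < 1e-10
# Dropping the subtracted term a(r+2) N(r+1) >= 0 from the right-hand side gives
# the inequality in (8.53); with N(t) <= a_q(t) = k t^2, the remaining terms
# stay bounded as R -> infinity, which is what (8.54) exploits.
```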

Recall that \( \sigma (x,{\textsf {s}}) = E \) and \( b (x,{\textsf {s}}) = \frac{1}{2} {\textsf {d}}^{\text {Gin} }(x,{\textsf {s}}) \). Let \( {\mathscr {D}}_{\text {Gin}} ^{[1]} \) be the domain of the 1-labeled Dirichlet form of the Ginibre interacting Brownian motion.

Lemma 8.4

\( \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} },\, \chi _{{\textsf {n}}}\partial _j {\textsf {d}}^{\text {Gin} },\, \chi _{{\textsf {n}}}\partial _j\partial _k {\textsf {d}}^{\text {Gin} }\in {\mathscr {D}}_{\text {Gin}}^{[1]} \)\( (j,k\in \{ 1,2 \})\) for all \( {\textsf {n}}\in {\textsf {N}}_3 \).

Proof

We only prove \( \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }\in {\mathscr {D}}_{\text {Gin}}^{[1]} \) because the other cases can be proved in a similar fashion. We set \( {\mathbb {D}}[f] = {\mathbb {D}}[f,f]\). Recall that \( \nabla _x = (\frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2})= (\partial _1,\partial _2)\). Then

$$\begin{aligned}&{\mathscr {E}} ^{\mu ^{[1]}_{\text {Gin}}} (\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} },\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }) = \int _{{\mathbb {R}}^2\times {\textsf {S}}} \frac{1}{2}|\nabla _x ( \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }) |^2 + \, {\mathbb {D}}[\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }] d \mu ^{[1]}_{\text {Gin}} . \end{aligned}$$

From (8.25), Lemma 8.2, and Lemma 8.3, we deduce that

$$\begin{aligned}&\int _{{\mathbb {R}}^2\times {\textsf {S}}} |\nabla _x (\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }) |^2 d \mu ^{[1]}_{\text {Gin}} \le 2\int _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}} \{ \chi _{{\textsf {n}}}^2 |\nabla _x {\textsf {d}}^{\text {Gin} }|^2 + |\nabla _x \chi _{{\textsf {n}}}|^2 |{\textsf {d}}^{\text {Gin} }|^2 \} d \mu ^{[1]}_{\text {Gin}} < \infty . \end{aligned}$$

We set \( \frac{\partial }{\partial s_{j}} =(\frac{\partial }{\partial s_{j,k}} )_{k=1,2}\). From (8.28) and \( {\textsf {d}}^{\text {Gin} }_{R} = {}^t ({\textsf {d}}^{\text {Gin} }_{{R},1},{\textsf {d}}^{\text {Gin} }_{{R},2})\) we have

$$\begin{aligned}&\frac{\partial }{\partial s_{j,k}} {\textsf {d}}^{\text {Gin} }_{{R},l} (x,{\textsf {s}}) = 2 (\partial _k \varphi _{R}) (s_j)\frac{(x-s_j)_l}{|x-s_j|^2} - 2 \varphi _{R}(s_j) \frac{A_{kl} (x-s_j)}{|x-s_j|^4} . \end{aligned}$$

Then from (8.25) and (8.32), we similarly deduce that

$$\begin{aligned} \int _{{\mathbb {R}}^2\times {\textsf {S}}} \, {\mathbb {D}}&[\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }] d \mu ^{[1]}_{\text {Gin}} \le \, 2 \int _{{\mathbb {R}}^2\times {\textsf {S}}} \, \big \{ \chi _{{\textsf {n}}}^2 {\mathbb {D}}[ {\textsf {d}}^{\text {Gin} }] + {\mathbb {D}}[\chi _{{\textsf {n}}}] \, |{\textsf {d}}^{\text {Gin} }|^2 \big \} \, d \mu ^{[1]}_{\text {Gin}} \\ \le \,&2 \int _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}} \, \big \{ {\mathbb {D}}[ {\textsf {d}}^{\text {Gin} }] + c_{17} |{\textsf {d}}^{\text {Gin} }|^2 \big \} \, d \mu ^{[1]}_{\text {Gin}} \\ \le \,&2 \int _{ {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}} \left\{ c_{5} \left( \sum _{i} \frac{1}{|x-s_i |^4} \right) + c_{17} |{\textsf {d}}^{\text {Gin} }|^2 \right\} \, d \mu ^{[1]}_{\text {Gin}} < \, \infty . \end{aligned}$$

Here \(c_5\) is a finite, positive constant and \(c_{17}({\textsf {n}})\) is the positive constant in Lemma 11.4. The finiteness of the integral in the last line follows from the translation invariance of \( \mu _{\text {Gin}}\) and Lemma 8.2.

Combining these, we obtain \( {\mathscr {E}} ^{\mu ^{[1]}_{\text {Gin}}} (\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} },\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }) < \infty \). We proved \( \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }\in L^2 (\mu ^{[1]}_{\text {Gin}} ) \) in Lemma 8.2. Hence, we see \( \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }\in {\mathscr {D}}_{\text {Gin}}^{[1]} \). This completes the proof. \(\square \)

Proof of Proposition 8.1

For simplicity we prove the case \( m =1 \); the general case follows from the same argument.

By Lemma 8.3 we can take a version of \( \partial _j\partial _k {\textsf {d}}^{\text {Gin} }_l (x,{\textsf {s}}) \) that is continuous in \( x \) for each \( {\textsf {s}}\) on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\). We always take this version and denote it by the same symbol \( \partial _j\partial _k {\textsf {d}}^{\text {Gin} }_l \). By (8.37) there exists a finite constant \( c_{6} ({\textsf {n}})\) (\( {\textsf {n}}\in {\textsf {N}}_3 \)) such that

$$\begin{aligned} c_{6}=\sup \{ |\partial _j\partial _k {\textsf {d}}^{\text {Gin} }_l (x,{\textsf {s}}) |\, ;\, (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}, \ j,k,l \in \{ 1,2 \} \} < \infty . \end{aligned}$$
(8.55)

We denote by \( {\widetilde{f}} \) a quasi-continuous version of \( f \in {\mathscr {D}}_{\text {Gin}}^{[1]} \). By Lemma 8.4 we can take a quasi-continuous version of \( \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }_l \) that commutes with \( \partial _k \):

$$\begin{aligned}&\partial _k \widetilde{(\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }_l )} = \widetilde{(\partial _k \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }_l ) }. \end{aligned}$$
(8.56)

Then by definition there exists an increasing sequence \( \{ {\textsf {I}}_{{\textsf {m}}}\}_{{\textsf {m}}=1}^{\infty } \) of closed sets such that \( \widetilde{\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }_l } \) and \( \widetilde{(\partial _k \chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }_l )} \) are continuous on \( {\textsf {I}}_{{\textsf {m}}}\) for each \( {\textsf {m}}\) and that

$$\begin{aligned}&\lim _{{\textsf {m}}\rightarrow \infty }\text {Cap}^{\mu _{\text {Gin}}^{[1]}} ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}\backslash {\textsf {I}}_{{\textsf {m}}}) = 0 \text { for each }{\textsf {n}}\in {\textsf {N}}_3 . \end{aligned}$$

Here we used that \( \chi _{{\textsf {n}}}= 0 \) on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}^c \), which follows from (8.25). Because \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\subset {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}\), we have

$$\begin{aligned} \lim _{{\textsf {m}}\rightarrow \infty }\text {Cap}^{\mu _{\text {Gin}}^{[1]}} ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\backslash {\textsf {I}}_{{\textsf {m}}}) = 0 \quad \text { for each } {\textsf {n}}\in {\textsf {N}}_3 . \end{aligned}$$
(8.57)

Note that \( \chi _{{\textsf {n}}}= 1\) on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) by (8.25), \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\uparrow {\textsf {H}}[\mathbf{a }]\), and \( \mu _{\text {Gin}}^{[1]}({\textsf {H}}[\mathbf{a }]^c) = 0\). We set

$$\begin{aligned} \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) =\widetilde{\chi _{{\textsf {n}}}{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) \quad \text { for } (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}. \end{aligned}$$

Thus \( \widetilde{{\textsf {d}}^{\text {Gin} }_l }\) is a version of \( {\textsf {d}}^{\text {Gin} }_l \) such that \( \widetilde{{\textsf {d}}^{\text {Gin} }_l }\) and \( \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } \) are continuous on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}\) for each \( {\textsf {n}}\in {\textsf {N}}_3 \). Let \( {\textsf {n}}\in {\textsf {N}}_3 \) and \( {\textsf {m}}\in \mathbb {N}\). Let \( c_{7} ({\textsf {n}},{\textsf {m}})\) be the constant such that

$$\begin{aligned}&c_{7} = \sup \big \{ |\widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) |,\, |\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) | \, ; (\xi ,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}, \ k,l \in \{ 1,2 \} \big \}. \end{aligned}$$
(8.58)

Note that \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) is compact because \( {\textsf {n}}\in {\textsf {N}}_3 \), and recall that \( {\textsf {I}}_{{\textsf {m}}}\) is closed. Then \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}\) is compact. Because \( \widetilde{{\textsf {d}}^{\text {Gin} }_l } \) and \( \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } \) are continuous on \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}\), they are bounded there. Thus we have \( c_{7} < \infty \).

We suppose

$$\begin{aligned}&{\textsf {s}} \in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}). \end{aligned}$$
(8.59)

We write \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}})\) as before. Let \( \sim _{{\textsf {p}},{\textsf {r}}}\) be the same relation as in (8.22). Fix \( {\textsf {m}}\) and \( {\textsf {n}}\). Then there exists a positive constant \( c_8 < \infty \), depending only on \( {\textsf {m}}\) and \( {\textsf {n}}\), such that the following holds: for any \( ( x ,{\textsf {s}}) ,( \xi ,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) with \( ( x ,{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}( \xi ,{\textsf {s}}) \), there exists a finite sequence \( \{ x_1,\ldots ,x_{q} \} \) with \( x_1 = \xi \) and \(x_{q} =x \) such that (8.60) and (8.61) hold.

$$\begin{aligned}{}[x_j,x_{j+1}]\times \{ {\textsf {s}} \} \subset {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}\quad \text { for all }1\le j < q, \end{aligned}$$
(8.60)

where \( [x_j,x_{j+1}] \subset {\mathbb {R}}^2\) denotes the segment connecting \( x_j \) and \( x_{j+1} \),

$$\begin{aligned}&\sum _{j=1}^{q-1} |x_j - x_{j+1}| \le c_{8}|\xi -x | . \end{aligned}$$
(8.61)

Using (8.56) and Taylor expansion we deduce that for each \( 1 \le j < q\)

$$\begin{aligned}&\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x_{j+1},{\textsf {s}}) - \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x_{j} ,{\textsf {s}}) \nonumber \\&=\int _{0}^{1} \sum _{p=1}^2(x_{j+1,p}-x_{j,p})\partial _p \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (t(x_{j+1}-x_{j}) + x_{j} ,{\textsf {s}}) dt . \end{aligned}$$
(8.62)

Here we set \( x_j = (x_{j,1},x_{j,2}) \in {\mathbb {R}}^2\). Taking the sum of both sides of (8.62) over \( j=1,\ldots ,q-1 \) and recalling \(x_{q} =x \) and \( x_1 = \xi \) we see

$$\begin{aligned} \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) - \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) = \sum _{j=1}^{q-1} \int _{0}^{1} \sum _{p=1}^2(x_{j+1,p}-x_{j,p})\partial _p \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (t(x_{j+1}-x_{j}) + x_{j} ,{\textsf {s}}) dt . \end{aligned}$$

This together with (8.60) and (8.61) yields

$$\begin{aligned}&|\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) - \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) | \nonumber \\&\le \sum _{j=1}^{q-1}\int _0^1 | \sum _{p=1}^2(x_{j+1,p}-x_{j,p})\partial _p \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (t(x_{j+1}-x_{j}) + x_{j} ,{\textsf {s}}) | dt \nonumber \\&\le \sqrt{2} c_{6} c_{8} |x-\xi | \nonumber \\&\le \sqrt{2} c_{6} c_{8} 2{\textsf {r}}\quad \text { by } x,\xi \in S_{{\textsf {r}}}. \end{aligned}$$
(8.63)

Hence for each \( {\textsf {s}}\) satisfying (8.59), \( \partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) \) is bounded in x with bound

$$\begin{aligned}&|\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}})| \le |\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) | + \sqrt{2} c_{6} c_{8} 2{\textsf {r}}. \end{aligned}$$
(8.64)

Because of (8.59), we can take \( (\xi ,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}\). Then by (8.58)

$$\begin{aligned}&|\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) | \le c_{7}. \end{aligned}$$
(8.65)

From (8.64) and (8.65) we obtain

$$\begin{aligned} |\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}})| \le c_{7}+ \sqrt{2} c_{6} c_{8} 2{\textsf {r}}=:c_{9} . \end{aligned}$$

The bound \( c_{9}\) depends only on the \( {\textsf {m}}\) and \( {\textsf {n}}\) appearing in (8.59). Then

$$\begin{aligned}&\sup \{ |\partial _k \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) |\,; \, (x,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}},\, {\textsf {s}} \in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}})\} \le c_{9} . \end{aligned}$$
(8.66)

Using (8.56) and Taylor expansion again we deduce that for each \( 1 \le j < q\)

$$\begin{aligned} \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x_{j+1},{\textsf {s}}) - \widetilde{{\textsf {d}}^{\text {Gin} }_l } (x_{j} ,{\textsf {s}}) =&\int _0^1 \sum _{p=1}^2 (x_{j+1,p}-x_{j,p})\partial _p \widetilde{{\textsf {d}}^{\text {Gin} }_l }(t (x_{j+1}-x_{j}) + x_{j} ,{\textsf {s}}) dt . \end{aligned}$$

Then from this and (8.66), arguing as for (8.63), we deduce

$$\begin{aligned} |\widetilde{{\textsf {d}}^{\text {Gin} }_l } (x,{\textsf {s}}) - \widetilde{{\textsf {d}}^{\text {Gin} }_l } (\xi ,{\textsf {s}}) |\le&\sqrt{2} c_{8}c_{9} |x-\xi |. \end{aligned}$$
(8.67)

Note that (8.67) holds for all \( (x,{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}(\xi ,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\) with \( {\textsf {s}}\) satisfying (8.59). Here \( (x ,{\textsf {s}}) \) is not necessarily in \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}\), whereas \( (\xi ,{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}\). The constant \(\sqrt{2} c_{8}c_{9} \) depends only on \( {\textsf {m}}\) and \({\textsf {n}}\). Hence we deduce \( c_{2} ({\textsf {m}},{\textsf {n}}) < \infty \).

By definition \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\supset {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1} \). Then we easily deduce from (8.20) and (8.21)

$$\begin{aligned}&\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \supset \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1} \cap {\textsf {I}}_{{\textsf {m}}}\rangle \supset {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1} \cap {\textsf {I}}_{{\textsf {m}}}. \end{aligned}$$
(8.68)

Let \( (\mathbf{X }^1,{\textsf {X}}^{1*})=(X^1,\sum _{i=2}^{\infty } \delta _{X^i})\) be the one-labeled process. By (8.57) we see

$$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}\big ( \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}} (\mathbf{X }^1,{\textsf {X}}^{1*})= \lim _{{\textsf {m}}\rightarrow \infty }\tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}} (\mathbf{X }^1,{\textsf {X}}^{1*})\big ) = 1 \end{aligned}$$

for all \( {\textsf {n}}\in {\textsf {N}}_3 \). Then this yields

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}\big ( \lim _{{\textsf {n}}\rightarrow \infty }\tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}} (\mathbf{X }^1,{\textsf {X}}^{1*})= \lim _{{\textsf {n}}\rightarrow \infty } \lim _{{\textsf {m}}\rightarrow \infty }\tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\cap {\textsf {I}}_{{\textsf {m}}}} (\mathbf{X }^1,{\textsf {X}}^{1*})\big ) = 1 . \end{aligned}$$
(8.69)

Combining (8.68), (8.69), and Lemma 8.1, we obtain

$$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {n}}\rightarrow \infty } \lim _{{\textsf {m}}\rightarrow \infty } \tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle } (\mathbf{X }^1,{\textsf {X}}^{1*})= \infty \right)&\ge {\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {n}}\rightarrow \infty } \lim _{{\textsf {m}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1} \cap {\textsf {I}}_{{\textsf {m}}}} (\mathbf{X }^1,{\textsf {X}}^{1*})= \infty \right) \\&= {\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {n}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1} } (\mathbf{X }^1,{\textsf {X}}^{1*})= \infty \right) \\&\ge {\textsf {P}}_{{\textsf {s}}}\left( \lim _{{\textsf {n}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1}^{\circ }} (\mathbf{X }^1,{\textsf {X}}^{1*})= \infty \right) \\&= 1 . \end{aligned}$$

Indeed, the first line follows from (8.68). The second line follows from (8.69). The inequality in the third line is clear by \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1} \supset {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}-1}^{\circ } \). The last line is immediate from Lemma 8.1. We have thus proved (8.24). \(\square \)

A unique strong solution of SDEs with random environment

Throughout Sect. 8.3, \( (\mathbf{X },\mathbf{B })\) is the weak solution of (1.6) starting at \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}})\) defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\) obtained in Lemma 7.4. We write \( \mathbf{X }=(X^i)_{i=1}^{\infty }\) and denote by \( \mathbf{B }^m \) the first m-components of the Brownian motion \( \mathbf{B }=(B^i)_{i=1}^{\infty }\).

For each \( m \in \mathbb {N}\), we introduce the 2m-dimensional SDE of \( \mathbf{Y }^m \) for \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{{\textsf {s}}}\) such that \( \mathbf{Y }^m =(Y^{m,i})_{i=1}^m \) is an \( \{ {\mathscr {F}}_t \} \)-adapted, continuous process satisfying

$$\begin{aligned}&dY_t^{m,i} = dB_t^i - Y_t^{m,i} dt + \sum _{\begin{array}{c} j=1 \\ j\not =i \end{array}}^m \frac{Y_t^{m,i}-Y_t^{m,j}}{|Y_t^{m,i}-Y_t^{m,j}|^{2}} dt \nonumber \\&\qquad + \lim _{r\rightarrow \infty } \sum _{\begin{array}{c} j = m +1 ,\\ |X_t^j|<r \end{array}}^{\infty } \frac{Y_t^{m,i}-X_t^j}{|Y_t^{m,i}-X_t^j|^{2}} dt , \end{aligned}$$
(8.70)
$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}(\mathbf{Y }_t^m \in \mathbf{S }_{\text {sde}}^m(t,{\textsf {X}})\, \text { for all } t ) = 1, \end{aligned}$$
(8.71)
$$\begin{aligned}&\mathbf{Y }_0^m = {\mathfrak {l}} ^m ({\textsf {s}}) . \end{aligned}$$
(8.72)

Here \( \mathbf{S }_{\text {sde}}^m(t,{\textsf {w}})\) is the set given by (3.9) with \( S\) replaced by \( {\mathbb {R}}^2\). Furthermore, we set \( {\mathfrak {l}} ^m ({\textsf {s}})=({\mathfrak {l}} _i({\textsf {s}}))_{i=1}^m\) for \( {\mathfrak {l}} ({\textsf {s}})= ({\mathfrak {l}} _i({\textsf {s}}))_{i=1}^{\infty }\). For convenience, we introduce a notational variant for solutions. Let \( {\textsf {X}}^{m*}= \sum _{i=m+1}^{\infty } \delta _{X^i}\) as before. We say that \( (\mathbf{Y }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) is a solution of (8.70) if and only if \( (\mathbf{Y }^m , \mathbf{B }^m , \mathbf{X }^{m*})\) is a solution of (8.70).

Lemma 8.5

  1. 1.

    For each \( m \in \mathbb {N}\), the SDE (8.70)–(8.72) has a pathwise unique, weak solution for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\). That is, for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\), arbitrary solutions \( (\mathbf{Y }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) and \( (\hat{\mathbf{Y }}^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) of (8.70)–(8.72) defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\) satisfy

    $$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}(\mathbf{Y }^m = \hat{\mathbf{Y }}^m) = 1. \end{aligned}$$
    (8.73)

    In particular, \( (\mathbf{Y }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) coincides with \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) under \( {\textsf {P}}_{{\textsf {s}}}\).

  2. 2.

    Let \( (\mathbf{Z }^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) and \( (\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) be weak solutions of the SDE (8.70)–(8.72) defined on a filtered space \((\varOmega ',{\mathscr {F}}', P' ,\{ {\mathscr {F}}_t '\} )\) satisfying

    $$\begin{aligned}&(\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*}) {\mathop {=}\limits ^{\text {law}}}(\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*}) . \end{aligned}$$

    Then, for \( \mathbf{Z }^m \) and \(\hat{\mathbf{Z }}^m\) satisfying \( \mathbf{Z }_0^m=\hat{\mathbf{Z }}_0^m\) a.s., it holds that for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

    $$\begin{aligned}&P' (\mathbf{Z }^m = \hat{\mathbf{Z }}^m) = 1 . \end{aligned}$$
    (8.74)
  3. 3.

    Make the same assumptions as (2) except that the filtrations of \( (\mathbf{Z }^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) and \( (\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\), \( \{{\mathscr {F}}_t'\} \) and \( \{ {\mathscr {F}}_t'' \} \) say, are different (but on the same probability space \((\varOmega ',{\mathscr {F}}', P' )\)). Then (8.74) holds for \( \mu _{\text {Gin}}\)-a.s. \({\textsf {s}}\).

Remark 8.1

In conventional situations, the pathwise uniqueness in Lemma 8.5 (3) is called the pathwise uniqueness in the strict sense [9, p. 162, Remark 1.3].

Proof

We first prove (1). Recall that \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) under \( {\textsf {P}}_{{\textsf {s}}}\) is a weak solution of (8.70)–(8.72). So it only remains to prove (8.73) for \( \hat{\mathbf{Y }}^m = \mathbf{X }^m \).

Let \( {\textsf {n}}\in {\textsf {N}}_3 \) with \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}})\). Let \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\) be as in (8.7).

For \( (\mathbf{u },{\textsf {v}})\in W ({\mathbb {R}}^{2m}\times {\textsf {S}})\) and \( (\mathbf{u },{\textsf {v}},\mathbf{w }) \in W ({\mathbb {R}}^{2m}\times {\textsf {S}}\times {\mathbb {R}}^{2m})\), let

$$\begin{aligned}&\theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{u },{\textsf {v}})= \inf \{ t > 0 \, ;\, \mathbf{u }_t \not \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {v}}_t) \} , \end{aligned}$$
(8.75)
$$\begin{aligned}&\theta _{{\textsf {p}},{\textsf {r}}}^2(\mathbf{u },{\textsf {v}},\mathbf{w }) = \inf \{ t>0 ; (\mathbf{u }_t,{\textsf {v}}_t) \not \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{w }_t,{\textsf {v}}_t) \} . \end{aligned}$$
(8.76)

Here \( (\mathbf{x },{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{x }',{\textsf {s}}) \) is the same as in (8.22), and \( (\mathbf{x },{\textsf {s}}) \not \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{y },{\textsf {s}}) \) means that \( \mathbf{x } \) and \( \mathbf{y } \) are not in the same connected component of \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\). Let \( {\textsf {I}}_{{\textsf {m}}}\) be as in Proposition 8.1. Let \( \theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {v}})\) be the exit time of \( {\textsf {v}}\) from \(\varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}) \):

$$\begin{aligned}&\theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {v}}) = \inf \{ t > 0 \, ;\, {\textsf {v}}_t \not \in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}) \} . \end{aligned}$$

Then we easily see for \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \) and \( {\textsf {m}}\in \mathbb {N}\)

$$\begin{aligned}&\min \{ \theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{u } ,{\textsf {v}} ), \theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {v}} ) \} \ge \tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle } (\mathbf{u } ,{\textsf {v}} ). \end{aligned}$$
(8.77)

For \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \) and \( {\textsf {m}}\in \mathbb {N}\) we set

$$\begin{aligned}&\vartheta _{{\textsf {m}},{\textsf {n}}}(\mathbf{u },{\textsf {v}},\mathbf{w })= \min \{ \theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{u } ,{\textsf {v}} ) , \theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{w } ,{\textsf {v}} ) , \theta _{{\textsf {p}},{\textsf {r}}}^2(\mathbf{u } ,{\textsf {v}} ,\mathbf{w } ), \theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {v}} ) \} . \end{aligned}$$

Let

$$\begin{aligned} \tau ({\textsf {m}},{\textsf {n}})= \vartheta _{{\textsf {m}},{\textsf {n}}} (\mathbf{X }^m,{\textsf {X}}^{m*},\mathbf{Y }^m) . \end{aligned}$$

Then from (8.19) and (8.22) and a straightforward calculation we see

$$\begin{aligned} | \mathbf{X }_{t\wedge \tau ({\textsf {m}},{\textsf {n}}) }^m - \mathbf{Y }_{t\wedge \tau ({\textsf {m}},{\textsf {n}}) }^m | =&\, \big | \int _0^{t\wedge \tau ({\textsf {m}},{\textsf {n}}) } b^{m} (\mathbf{X }_u^m,\mathbf{B }_u^m,{\textsf {X}}_u^{m*}) - b^{m} (\mathbf{Y }_u^m,\mathbf{B }_u^m,{\textsf {X}}_u^{m*}) du \big | \\ \le&~\sqrt{m} c_{2} \int _0^{t\wedge \tau ({\textsf {m}},{\textsf {n}}) } |\mathbf{X }_{u}^m -\mathbf{Y }_{u}^m |du . \end{aligned}$$

From this and Gronwall's inequality we deduce

$$\begin{aligned} (\mathbf{X }_t^m,\mathbf{B }_t^m,{\textsf {X}}_t^{m*}) = (\mathbf{Y }_t^m,\mathbf{B }_t^m,{\textsf {X}}_t^{m*}) \quad \text { for all } t \le \tau ({\textsf {m}},{\textsf {n}}). \end{aligned}$$
(8.78)
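
Explicitly, set \( f(t) = | \mathbf{X }_{t\wedge \tau ({\textsf {m}},{\textsf {n}}) }^m - \mathbf{Y }_{t\wedge \tau ({\textsf {m}},{\textsf {n}}) }^m | \). Then \( f(0) = 0 \) by (8.72), and the integral inequality above yields

$$\begin{aligned} f(t) \le \sqrt{m} c_{2} \int _0^{t} f(u)\, du \quad \text { for all } t \ge 0 . \end{aligned}$$

Hence Gronwall's lemma forces \( f \equiv 0 \) on \( [0,\infty )\), which gives the equality of the first components in (8.78); the remaining components coincide trivially.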

Suppose that \( \tau ({\textsf {m}},{\textsf {n}}) < \theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {X}}^{m*})\). Note that \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\) is an open set. Combining this with (8.75), (8.76), and (8.78) we obtain

$$\begin{aligned}&\theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{X }^m,{\textsf {X}}^{m*})= \theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{Y }^m,{\textsf {X}}^{m*}) = \theta _{{\textsf {p}},{\textsf {r}}}^2(\mathbf{X }^m,{\textsf {X}}^{m*},\mathbf{Y }^m) . \end{aligned}$$
(8.79)

Hence we deduce from the definition of \( \tau ({\textsf {m}},{\textsf {n}}) \) and (8.79)

$$\begin{aligned} \tau ({\textsf {m}},{\textsf {n}}) =&\min \{ \theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{X }^m,{\textsf {X}}^{m*}), \theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {X}}^{m*}) \}. \end{aligned}$$
(8.80)

From (8.80), (8.77), and Proposition 8.1 we deduce for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

$$\begin{aligned}&{\textsf {P}}_{{\textsf {s}}}(\lim _{{\textsf {n}}\rightarrow \infty }\lim _{{\textsf {m}}\rightarrow \infty }\tau ({\textsf {m}},{\textsf {n}}) =\infty ) \\&\quad = {\textsf {P}}_{{\textsf {s}}}(\lim _{{\textsf {n}}\rightarrow \infty }\lim _{{\textsf {m}}\rightarrow \infty } \min \{ \theta _{{\textsf {p}},{\textsf {r}}}^1(\mathbf{X }^m,{\textsf {X}}^{m*}), \theta _{{\textsf {m}},{\textsf {n}}}^3({\textsf {X}}^{m*}) \} =\infty ) \\&\quad \ge {\textsf {P}}_{{\textsf {s}}}( \lim _{{\textsf {n}}\rightarrow \infty }\lim _{{\textsf {m}}\rightarrow \infty } \tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle } (\mathbf{X }^m,{\textsf {X}}^{m*}) = \infty ) \\&\quad = 1. \end{aligned}$$

We therefore deduce that the equality in (8.78) holds for all \( 0 \le t < \infty \). We thus see that the SDE (8.70)–(8.72) has a pathwise unique weak solution.

Using \( (\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*}) {\mathop {=}\limits ^{\text {law}}}(\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\), we can prove (2) in the same way as (1).

We note that the coefficients of the martingale terms of the SDE are constant. Hence we have not used any specific property of the filtrations in the proof of (1). Then we can prove (3) in the same way as (1). \(\square \)

Lemma 8.6

  1. 1.

    For each \( m \in \mathbb {N}\), the SDE (8.70)–(8.72) has a unique strong solution \( F_{\mathbf{s }}^m\) for \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{{\textsf {s}}}\) for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}} \), where \( \mathbf{s }= {\mathfrak {l}} ({\textsf {s}})\).

  2. 2.

    For \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\)

    $$\begin{aligned}&F_{\mathbf{s }}^m(\mathbf{B }^m ,\mathbf{X }^{m*}) = \mathbf{X }^{m} \quad \text { under }{\textsf {P}}_{{\textsf {s}}}. \end{aligned}$$

Proof

In the Yamada–Watanabe theory, the existence of weak solutions and the pathwise uniqueness imply that the SDE (8.70)–(8.72) has a unique strong solution (see Theorem 1.1 in [9, p. 163]). We modify this theory to prove (1). Indeed, we shall prove the general result Proposition 11.1, which includes (1). The appearance of the new randomness \( {\textsf {X}}^*\) requires a substantial modification of the theory.

We next prove (2). From (1), the SDE (8.70)–(8.72) has a unique strong solution \( F_{\mathbf{s }}^m\). Because \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) under \( {\textsf {P}}_{{\textsf {s}}}\) is a weak solution, (2) follows from (1). \(\square \)

Completion of proof of Theorem 7.1

Let \( (\mathbf{X },\mathbf{B })\) and \( {\textsf {P}}_{\mu _{\text {Gin}}}= \int {\textsf {P}}_{{\textsf {s}}}\mu _{\text {Gin}}(d{\textsf {s}})\) be as in Sect. 7.2.

Lemma 8.7

\( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{\mu _{\text {Gin}}}\) satisfies \(\mathbf {(NBJ)}\).

Proof

Let \( {\mathscr {R}} (t) = \int _t^{\infty } (1/\sqrt{2\pi }) e^{-|x|^2/2} dx \) be a (scaled) complementary error function. Let \( \rho _{\text {Gin}}^1 (x) = 1/\pi \) be the one-point correlation function of \( \mu _{\text {Gin}}\). Then

$$\begin{aligned}&\quad \int _{{\mathbb {R}}^2}{\mathscr {R}} \left( \frac{|x|-r}{T}\right) \rho _{\text {Gin}}^1 (x) dx < \infty \quad \text { for each } r,T \in \mathbb {N}. \end{aligned}$$
(8.81)
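
Indeed, since \( \rho _{\text {Gin}}^1 \equiv 1/\pi \), passing to polar coordinates gives

$$\begin{aligned} \int _{{\mathbb {R}}^2}{\mathscr {R}} \left( \frac{|x|-r}{T}\right) \rho _{\text {Gin}}^1 (x) dx = 2 \int _0^{\infty } {\mathscr {R}} \left( \frac{s-r}{T}\right) s \, ds < \infty , \end{aligned}$$

because the standard Gaussian tail bound \( {\mathscr {R}} (t) \le e^{-t^2/2} \) for \( t \ge 0 \) shows that the integrand decays like a Gaussian as \( s \rightarrow \infty \).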

Let \( ({\mathscr {E}} ^{\mu _{\text {Gin}}^{[1]}}, {\mathscr {D}}_{\text {Gin}}^{[1]}) \) be the Dirichlet form of the 1-labeled process given by (8.1). Let \( {\textsf {P}}_{(x,{\textsf {s}})}^{[1]} \) be the associated diffusion measure starting at \( (x,{\textsf {s}})\). We set

$$\begin{aligned} {\textsf {P}}_{\mu _{\text {Gin}}^{[1]} }^{[1]} = \int _{{\mathbb {R}}^2\times {\textsf {S}}} {\textsf {P}}_{(x,{\textsf {s}})}^{[1]} \mu _{\text {Gin}}^{[1]} (dx d{\textsf {s}}) . \end{aligned}$$
(8.82)

Let \( (\mathbf{X }^1,{\textsf {X}}^{1*})= (X^1 , \sum _{i=2}^{\infty } \delta _{X^i})\) denote the one-labeled process. Then applying the Lyons–Zheng decomposition ( [6, Theorem 5.7.1]) to the coordinate function x, we have for each \( 0 \le t \le T \),

$$\begin{aligned}&X_t^1 - X_0^1 = \frac{1}{2}B_t^1 + \frac{1}{2} {\hat{B}}_t^1 \quad \text { for } {\textsf {P}}_{\mu _{\text {Gin}}^{[1]} }^{[1]} \text {-a.e.} , \end{aligned}$$
(8.83)

where \( {\hat{B}}^1 \) is the time reversal of \( B^1 \) on [0, T] such that

$$\begin{aligned} {\hat{B}}_t^1 = B_{T-t}^1 (r_T)- B_{T}^1 (r_T) . \end{aligned}$$
(8.84)

Here we set \( r_T\!:\!C([0,T]; {\mathbb {R}}^2\times {\textsf {S}})\!\rightarrow \!C([0,T]; {\mathbb {R}}^2\times {\textsf {S}})\) such that \( r_T(w )(t) = w (T-t)\).

Because of the coupling in Lemma 9.3 ( [25, Theorem 2.4]), the definition of the one-Campbell measure, and (8.82), we see

$$\begin{aligned} \sum _{i=1}^{\infty } {\textsf {P}}_{\mu _{\text {Gin}}}\left( \inf _{t\in [0,T]}|X_t^i| \le r \right) =&\int _{{\mathbb {R}}^2\times {\textsf {S}}} \mu _{\text {Gin}}^{[1]} (dx d{\textsf {s}}) {\textsf {P}}_{(x,{\textsf {s}})}^{[1]} \left( \inf _{t\in [0,T]}|X_t^1| \le r \right) \nonumber \\ =&~{\textsf {P}}_{\mu _{\text {Gin}}^{[1]}}^{[1]} \left( \inf _{t\in [0,T]}|X_t^1| \le r \right) . \end{aligned}$$
(8.85)

Then we have, taking \( x = X_0^1\),

$$\begin{aligned}&{\textsf {P}}_{\mu _{\text {Gin}}^{[1]}}^{[1]} \left( \inf _{t\in [0,T]}|X_t^1| \le r \right) \le \, {\textsf {P}}_{\mu _{\text {Gin}}^{[1]}}^{[1]} \left( \sup _{t\in [0,T]}|X_t^1- x | \ge | x | - r \right) \nonumber \\&\quad \le \, {\textsf {P}}_{\mu _{\text {Gin}}^{[1]}}^{[1]} \left( \sup _{t\in [0,T]} |B_t^1| \ge | x | - r \right) + {\textsf {P}}_{\mu _{\text {Gin}}^{[1]}}^{[1]} \left( \sup _{t\in [0,T]} |{\hat{B}}_t^1| \ge | x | - r \right) \quad \quad \text {by }\left( 8.83\right) \nonumber \\&\quad = \, 2 {\textsf {P}}_{\mu _{\text {Gin}}^{[1]}}^{[1]} \left( \sup _{t\in [0,T]} |B_t^1| \ge | x | - r \right) \nonumber \\&\quad \le 2 \int _{{\mathbb {R}}^2\times {\textsf {S}}} {\mathscr {R}} \left( \frac{| x |-r}{c_{10}T} \right) \mu _{\text {Gin}}^{[1]} \left( dxd{\textsf {s}}\right) \nonumber \\&\quad =2 \int _{{\mathbb {R}}^2}{\mathscr {R}} \left( \frac{|x|-r}{c_{10}T}\right) \rho _{\text {Gin}} ^1 \left( x\right) dx < \, \infty \quad \quad \text {by }\left( 8.81\right) , \end{aligned}$$
(8.86)

where \(c_{10}\) is a positive constant. From (8.85) and (8.86) we deduce

$$\begin{aligned}&\sum _{i=1}^{\infty } {\textsf {P}}_{\mu _{\text {Gin}}}\left( \inf _{t\in [0,T]}|X_t^i| \le r \right) < \infty . \end{aligned}$$

Hence, by the Borel–Cantelli lemma, we have for each \( r ,T \in \mathbb {N}\)

$$\begin{aligned} {\textsf {P}}_{\mu _{\text {Gin}}}\left( \limsup _{i\rightarrow \infty }\left\{ \inf _{t\in [0,T]}|X_t^i| \le r \right\} \right) =0. \end{aligned}$$
(8.87)

Then we deduce from (3.13) and (8.87) for each \( r ,T \in \mathbb {N}\)

$$\begin{aligned}&{\textsf {P}}_{\mu _{\text {Gin}}}( {\textsf {m}}_{r,T} (\mathbf{X })< \infty ) = 1 - {\textsf {P}}_{\mu _{\text {Gin}}}\left( \limsup _{i\rightarrow \infty }\left\{ \inf _{t\in [0,T]}|X_t^i| \le r \right\} \right) = 1 . \end{aligned}$$

This completes the proof. \(\square \)

Lemma 8.8

  1. 1.

    \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{\mu _{\text {Gin}}}\) satisfies \(\mathbf {(IFC)}\) for ISDE (1.6).

  2. 2.

    \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{\mu _{\text {Gin}}}\) satisfies \(\mathbf {(AC)}\) for \( \mu _{\text {Gin}}\).

Proof

By Lemma 7.4, \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{{\textsf {s}}}\) is a weak solution of (1.6) starting at \( \mathbf{s } = {\mathfrak {l}} ({\textsf {s}})\) for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\). By Lemma 8.6 we see \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{{\textsf {s}}}\) satisfies \((\mathbf {IFC})_{{\mathbf {s}}}\) for \( \mu _{\text {Gin}}\)-a.s. \( {\textsf {s}}\). Hence the first claim holds. The second claim is obvious because \( \{ {\textsf {P}}_{{\textsf {s}}}\} \) is a \( \mu _{\text {Gin}}\)-stationary diffusion and \( {\textsf {P}}_{\mu _{\text {Gin}}}= \int {\textsf {P}}_{{\textsf {s}}}\mu _{\text {Gin}}(d{\textsf {s}})\). \(\square \)

Proof of Theorem 7.1

We use Theorem 3.1. For this, we check that \( \mu _{\text {Gin}}\) is tail trivial and that \( (\mathbf{X },\mathbf{B })\) under \( {\textsf {P}}_{\mu _{\text {Gin}}} \) satisfies \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \( \mu _{\text {Gin}}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

Because \( \mu _{\text {Gin}}\) is a determinantal random point field, \( \mu _{\text {Gin}}\) satisfies \(\mathbf {(TT)}\). \(\mathbf {(IFC)}\) for (1.6) and \(\mathbf {(AC)}\) for \( \mu _{\text {Gin}}\) follow from Lemma 8.8. \(\mathbf {(SIN)}\) follows from Lemma 7.2. \(\mathbf {(NBJ)}\) follows from Lemma 8.7. Thus all the assumptions of Theorem 3.1 are fulfilled. Hence we obtain (3) by Theorem 3.1. (4) is clear. Indeed, we can take a subset \( {\textsf {H}}\) such that \( \mu _{\text {Gin}}({\textsf {H}}) = 1 \) and that the same conclusions as in (3) hold for all \( {\textsf {s}} \in {\textsf {H}}\).

Suppose that a weak solution \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) of (1.5) satisfies \(\mathbf {(AC)}\) for \( \mu _{\text {Gin}}\). Then \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) satisfies the ISDE (1.6) because of the coincidence of the logarithmic derivatives (7.5) and (7.6) given by Lemma 7.3. Furthermore, the SDE (8.70) for \( (\hat{\mathbf{X }},\hat{\mathbf{B }}) \) coincides with

$$\begin{aligned} dY_t^{m,i} = d{\hat{B}}_t^i + \sum _{\begin{array}{c} j=1 \\ j\not =i \end{array}}^m \frac{Y_t^{m,i}-Y_t^{m,j}}{|Y_t^{m,i}-Y_t^{m,j}|^{2}} dt + \lim _{r\rightarrow \infty } \sum _{\begin{array}{c} j = m +1 ,\\ |Y_t^{m,i}-{\hat{X}}_t^j|<r \end{array}} ^{\infty } \frac{Y_t^{m,i}-{\hat{X}}_t^j}{|Y_t^{m,i}-{\hat{X}}_t^j|^{2}} dt . \end{aligned}$$

Thus \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) is a weak solution of (1.5) satisfying \(\mathbf {(IFC)}\) if and only if \( (\hat{\mathbf{X }},\hat{\mathbf{B }})\) is a weak solution of (1.6) satisfying \(\mathbf {(IFC)}\). Hence (1) and (2) follow from (3) and (4).

We proceed with the proof of (5). Recall that both solutions satisfy \(\mathbf {(AC)}\) for \( \mu _{\text {Gin}}\). Then the drift coefficients are equal because of the coincidence of the logarithmic derivatives given by Lemma 7.3. Hence we deduce (5) from (1) and (3). \(\square \)

Dirichlet forms and weak solutions

Relation between ISDE and a random point field

We shall deduce the existence of a strong solution and the pathwise uniqueness of solutions of the ISDE (3.3)–(3.4) from the existence of a random point field \( \mu \) satisfying the assumptions in the sequel. Recall the notion of the logarithmic derivative \( {\textsf {d}}^{\mu }\) given by Definition 2.1. To relate the ISDE (3.3) with the random point field \( \mu \), we make the following assumption.

(A1) There exists a random point field \( \mu \) such that \( \sigma \in L_{\text {loc}}^{\infty }(\mu ^{[1]}) \) and \( \textit{b}\in L_{\text {loc}}^{1}(\mu ^{[1]})\) and that \( \mu \) has a logarithmic derivative \( {\textsf {d}}^{\mu }= {\textsf {d}}^{\mu }(x,{\textsf {y}}) \) satisfying the relation

$$\begin{aligned}&\textit{b}(x,{\textsf {y}}) = \frac{1}{2} \{\nabla _x \cdot \textit{a}(x,{\textsf {y}}) + \textit{a}(x,{\textsf {y}}) {\textsf {d}}^{\mu }(x,{\textsf {y}}) \} .\end{aligned}$$
(9.1)

Here \( \nabla _x \cdot \textit{a}(x,{\textsf {y}})= (\sum _{q=1}^d \frac{\partial \textit{a}_{pq}}{\partial x_q}(x,{\textsf {y}}))_{p=1}^d \) and \( \textit{a}(x,{\textsf {y}})=\{ \textit{a}_{pq}(x,{\textsf {y}}) \}_{p,q=1}^d \) is the \( d\times d\)-matrix-valued function defined by

$$\begin{aligned}&\textit{a}(x,{\textsf {y}}) = \sigma (x,{\textsf {y}})^t \sigma (x,{\textsf {y}}). \end{aligned}$$
(9.2)

Furthermore, we assume that there exist a bounded, lower semi-continuous, non-negative function \( a_0\!:\!{\mathbb {R}}^+\!\rightarrow \!{\mathbb {R}}^+\) and positive constants \( c_{11}\) and \( c_{12} \) such that

$$\begin{aligned}&a_0 (|x|) |\xi |^2 \le (\textit{a}(x,{\textsf {y}}) \xi , \xi )_{{\mathbb {R}}^d} \le c_{11} a_0 (|x|) |\xi |^2 \quad \text { for all } \xi \in {\mathbb {R}}^d, \end{aligned}$$
(9.3)
$$\begin{aligned}&c_{12}:= \sup _{t\in {\mathbb {R}}^+} a_0 (t) < \infty \end{aligned}$$
(9.4)

and that \( \textit{a}(x,{\textsf {y}}) \) is smooth in x for each \( {\textsf {y}}\).
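
For instance, in the unit-diffusion case \( \sigma \equiv I \), as for the ISDE (1.1), one has \( \textit{a}\equiv I \) by (9.2), and (9.3)–(9.4) hold trivially with

$$\begin{aligned} a_0 \equiv 1 , \quad c_{11} = c_{12} = 1 . \end{aligned}$$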

Remark 9.1

From (2.3), we obtain an informal expression \( {\textsf {d}}^{\mu }= \nabla _x \log \mu ^{[1]}\). We then interpret the relation (9.1) as a differential equation of random point fields \( \mu \) as

$$\begin{aligned}&\textit{b}(x,{\textsf {y}}) = \frac{1}{2} \{\nabla _x \cdot \textit{a}(x,{\textsf {y}}) + \textit{a}(x,{\textsf {y}}) \nabla _x \log \mu ^{[1]}(x,{\textsf {y}}) \} . \end{aligned}$$
(9.5)

That is, (9.5) is an equation of \( \mu \) for given coefficients \( \sigma \) and \( \textit{b}\). Theorems 3.13.2 deduce that, if the differential equation (9.5) of random point fields has a solution \( \mu \) satisfying the assumptions in these theorems, then the existence of a strong solution and the pathwise uniqueness of a solution of ISDE (3.3)–(3.4) hold.
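
As an informal consistency check, consider the unit-diffusion case \( \textit{a}\equiv I \) with \( \mu \) a canonical Gibbs measure with potentials \( (\beta \varPhi , \beta \varPsi )\). Then the logarithmic derivative is formally \( {\textsf {d}}^{\mu }(x,{\textsf {y}}) = -\beta \nabla _x \varPhi (x) - \beta \sum _{i} \nabla _x \varPsi (x,y_i) \) for \( {\textsf {y}}= \sum _i \delta _{y_i}\), and (9.5) reduces to the drift of the ISDE (1.1):

$$\begin{aligned} \textit{b}(x,{\textsf {y}}) = \frac{1}{2} {\textsf {d}}^{\mu }(x,{\textsf {y}}) = - \frac{\beta }{2} \nabla _x \varPhi (x) - \frac{\beta }{2} \sum _{i} \nabla _x \varPsi (x,y_i) . \end{aligned}$$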

A weak solution of ISDE (First step)

In this subsection, we recall the construction of a \( \mu \)-reversible diffusion from [23, 27]. Recall the notion of the quasi-Gibbs property given by Definition 2.2. We now make the following assumption.

(A2) \( \mu \) is a \( ( \varPhi , \varPsi )\)-quasi Gibbs measure such that there exist upper semi-continuous functions \( ({\hat{\varPhi }}, {\hat{\varPsi }})\) and positive constants \(c_{13}\) and \(c_{14}\) satisfying

$$\begin{aligned} c_{13}^{-1} {\hat{\varPhi }} (x)\le \varPhi (x) \le c_{13} {\hat{\varPhi }}(x) ,\quad c_{14}^{-1} {\hat{\varPsi }} (x,y)\le \varPsi (x,y) \le c_{14} {\hat{\varPsi }}(x,y) \end{aligned}$$

and that \( \varPhi \) and \( \varPsi \) are locally bounded from below.

We refer to [27, 28] for sufficient conditions of (A2). These conditions give us the quasi-Gibbs property of the random point fields appearing in random matrix theory, such as sine\(_{\beta } \), Airy\(_{\beta } \) (\( \beta =1,2,4\)), and Bessel\(_{2, \alpha } \) (\( 1\le \alpha \)), and the Ginibre random point fields [8, 27, 28, 30].

Let \( \sigma _r^{m}\) be the m-density function of \( \mu \) on \( S_{r}\) with respect to the Lebesgue measure \( d\mathbf{x }^{m}\) on \( S_{r}^{m}\). By definition, \( \sigma _r^{m}\) is a non-negative symmetric function such that

$$\begin{aligned}&\frac{1}{m!} \int _{S_{r}^m} {\check{f}} (\mathbf{x }^m)\sigma _r^{m} (\mathbf{x }^m) d\mathbf{x }^m = \int _{{\textsf {S}}_r^{m}} f ({\textsf {x}}) \mu (d{\textsf {x}}) \quad \text { for all } f \in C_b({\textsf {S}}) . \end{aligned}$$

Here we set \( \mathbf{x }^m =(x_1,\ldots ,x_m)\) and \( d\mathbf{x }^m = dx_1\cdots dx_m \). Furthermore, we denote by \( {\textsf {S}}_r^{m} \) the subset of \( {\textsf {S}}\) such that \( {\textsf {S}}_r^{m} = \{ {\textsf {s}} \in {\textsf {S}}\, ;\, {\textsf {s}} (S_{r}) = m \} \) as before. We make the following assumption.

(A3) \( \mu \) satisfies for each \( m, r \in \mathbb {N}\)

$$\begin{aligned}&\sum _{k=m}^{\infty } \frac{k!}{(k-m)!} \mu ({\textsf {S}}_r^k ) < \infty . \end{aligned}$$
(9.6)

Clearly, (9.6) is equivalent to \( \int _{S_{r}^m}\rho ^m (\mathbf{x }^m ) d\mathbf{x }^m < \infty \) if the m-point correlation function \( \rho ^m \) of \( \mu \) with respect to the Lebesgue measure exists. Under (A2) and (A3), \( \mu \) has correlation functions and Campbell measures of any order.
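
The equivalence can be seen from the standard factorial-moment identity for correlation functions: if \( \rho ^m \) exists, then

$$\begin{aligned} \int _{S_{r}^m}\rho ^m (\mathbf{x }^m ) d\mathbf{x }^m = \int _{{\textsf {S}}} \frac{ {\textsf {s}} (S_{r}) ! }{ ( {\textsf {s}} (S_{r}) - m ) ! } \, \mu (d{\textsf {s}}) = \sum _{k=m}^{\infty } \frac{k!}{(k-m)!} \, \mu ({\textsf {S}}_r^k ) , \end{aligned}$$

with the convention that the factorial ratio vanishes when \( {\textsf {s}} (S_{r}) < m \).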

Let \( ({\mathscr {E}}^{a ,\mu },{\mathscr {D}}_{\circ }^{a ,\mu } )\) be a bilinear form on \( L^2({\textsf {S}},\mu )\) with domain \( {\mathscr {D}}_{\circ }^{a ,\mu }\) defined by

$$\begin{aligned}&{\mathscr {D}}_{\circ }^{a ,\mu } = \{ f \in {\mathscr {D}}_{\circ }\cap L^2({\textsf {S}},\mu )\, ;\, {\mathscr {E}}^{a ,\mu }(f,f) < \infty \} , \end{aligned}$$
(9.7)
$$\begin{aligned}&{\mathscr {E}}^{a ,\mu }(f,g) = \int _{{\textsf {S}}} {\mathbb {D}}^{a}[f,g] \, \mu (d{\textsf {s}}) , \end{aligned}$$
(9.8)
$$\begin{aligned}&{\mathbb {D}}^{a}[f,g] ({\textsf {s}}) = \frac{1}{2} \sum _{i} ( a (s_i , {\textsf {s}}^{i\diamondsuit }) \partial _{s_i}{\check{f}} , \partial _{s_i}{\check{g}} )_{{\mathbb {R}}^d}. \end{aligned}$$
(9.9)

Here \( {\textsf {s}}=\sum _i \delta _{s_i}\), \( {\textsf {s}}^{i\diamondsuit } = \sum _{j\not =i} \delta _{s_j}\), \( \partial _{s_i}= (\frac{\partial }{\partial s_{i1}},\ldots ,\frac{\partial }{\partial s_{id}})\), and \((\cdot , \cdot )_{{\mathbb {R}}^d}\) denotes the standard inner product in \( {\mathbb {R}}^d\). When a is the unit matrix, we often suppress it from the notation; for example, \( {\mathscr {E}}^{a ,\mu }= {\mathscr {E}}^{\mu } \), \( {\mathscr {D}}_{\circ }^{a ,\mu } = {\mathscr {D}}_{\circ }^{\mu } \), and \( {\mathbb {D}}^{a}= {\mathbb {D}}\).

Lemma 9.1

Assume (9.3), (9.4), (A2), and (A3). Then \( ({\mathscr {E}}^{a ,\mu },{\mathscr {D}}_{\circ }^{a ,\mu } )\) is closable on \( L^2({\textsf {S}},\mu )\), and its closure \( ({\mathscr {E}}^{a ,\mu },{\mathscr {D}}^{a ,\mu })\) is a quasi-regular Dirichlet form on \( L^2({\textsf {S}},\mu )\). Moreover, the associated \( \mu \)-reversible diffusion \( ({\textsf {X}},\{{\textsf {P}}_{{\textsf {s}}}\}_{{\textsf {s}}\in {\textsf {S}} }) \) exists.

Lemma 9.1 is a refinement of [23, p. 119, Corollary 1] and can be proved similarly.

In Theorem 3.2 we decompose \( \mu \) by taking the regular conditional probability with respect to the tail \( \sigma \)-field. The refinement above is motivated by this decomposition. Indeed, (9.6) is stable under this decomposition (see Lemma 12.1).

By construction, a diffusion measure \( {\textsf {P}}_{{\textsf {s}}}\) given by a quasi-regular Dirichlet form with quasi-continuity in \( {\textsf {s}}\) is unique for quasi-every starting point \( {\textsf {s}}\). Equivalently, there exists a set \( {\textsf {H}}\) such that the complement of \( {\textsf {H}}\) has capacity zero, the diffusion measure \( {\textsf {P}}_{{\textsf {s}}}\) associated with the Dirichlet space above with quasi-continuity in \( {\textsf {s}}\) is unique for all \( {\textsf {s}} \in {\textsf {H}}\), and \( {\textsf {P}}_{{\textsf {s}}}({\textsf {X}}_t \in {\textsf {H}}\text { for all }t ) = 1 \) for all \( {\textsf {s}} \in {\textsf {H}}\). The set \( {\textsf {H}}\) is unique up to capacity zero.

Let \( \{{\textsf {P}}_{{\textsf {s}}}\}_{{\textsf {s}}\in {\textsf {S}} }\) be as in Lemma 9.1. We set

$$\begin{aligned}&{\textsf {P}}_{\mu }= \int _{{\textsf {S}}} {\textsf {P}}_{{\textsf {s}}}\, \mu (d{\textsf {s}}) \end{aligned}$$
(9.10)

Let \( W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})\) be as in (2.9). We assume the following.

\(\mathbf {(SIN)}\)   \( {\textsf {P}}_{\mu }\) satisfies \( {\textsf {P}}_{\mu }(W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = 1 \).

From \(\mathbf {(SIN)}\) we see that \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) is well defined \( {\textsf {P}}_{\mu }\)-a.s. We present an ISDE that \( \mathbf{X }= {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) satisfies.

Lemma 9.2

( [26, Theorem 26]) Assume (A1)–(A3). Assume that \( {\textsf {P}}_{\mu }\) satisfies \(\mathbf {(SIN)}\). Then there exists an \( {\textsf {H}} \subset {\textsf {S}}_{\text {sde}}\) satisfying (3.1) and (3.2) such that \( ({\mathfrak {l}} _{\text {path}}({\textsf {X}}) ,\mathbf{B }) \) defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\) is a weak solution of ISDE (3.3)–(3.5) for each \( {\textsf {s}}\in {\textsf {H}} \), that \( \mu ({\textsf {H}} ) = 1 \), and that \( {\textsf {P}}_{{\textsf {s}}}({\textsf {X}}_t \in {\textsf {H}} \text { for all } 0 \le t < \infty ) = 1 \).

Remark 9.2

  1.

    The solution \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) of the ISDE (3.3)–(3.4) is defined on the space \( (W ({\textsf {S}}),{\mathscr {B}}(W ({\textsf {S}}))) \) with the filtration \( {\mathscr {B}}_t ({\textsf {S}}) \) given by (7.3).

  2.

    In Lemma 9.2, the Brownian motion \( \mathbf{B }=(B^i)_{ i \in \mathbb {N}}\) is given as an additive functional of the diffusion \( ({\textsf {X}},\{{\textsf {P}}_{{\textsf {s}}}\})\). In particular, \( \mathbf{B } \) is a functional of \( {\textsf {X}}\); hence we write \( \mathbf{B }({\textsf {X}})\) when we emphasize this. Summing up, the weak solution in Lemma 9.2 is given by \( ({\mathfrak {l}} _{\text {path}}({\textsf {X}}) ,\mathbf{B }({\textsf {X}}))\) and is defined on the filtered space \( (W ({\textsf {S}}),{\mathscr {B}}(W ({\textsf {S}})))\) with \( \{ {\mathscr {B}}_t ({\textsf {S}})\} \).

  3.

    A sufficient condition for \(\mathbf {(SIN)}\) will be given in Sect. 10.

The Dirichlet forms of the m-labeled processes and the coupling

Let \( ({\mathscr {E}}^{a ,\mu },{\mathscr {D}}^{a ,\mu })\) be the Dirichlet form on \( L^2({\textsf {S}},\mu )\) in (A3), and let \( ({\textsf {X}},\{ {\textsf {P}}_{{\textsf {s}}} \}_{{\textsf {s}}\in {\textsf {S}}} )\) be the associated diffusion.

Let \( {\mathfrak {l}} \) be a label. Assume that \(\mathbf {(SIN)}\) holds. Then \( {\mathfrak {l}} _{\text {path}}\) is well defined, and we construct the fully labeled process \( \mathbf{X } =(X^i)_{i\in {\mathbb {N}}}\) with \( \mathbf{X }_0 = {\mathfrak {l}} ({\textsf {X}}_0)\) associated with the unlabeled process \( {\textsf {X}} \) by taking \( \mathbf{X } = {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \), where \( {\textsf {X}} = \sum _{i\in {\mathbb {N}}} \delta _{X^i} \).

Let \(\mathbf{X }^m = (X^i)_{i=1}^m \) and \( {\textsf {X}}^{m*} = \sum _{m<i } \delta _{X^i} \). We call the pair \( (\mathbf{X }^m,{\textsf {X}}^{m*} )\) the m-labeled process. We shall present the Dirichlet form associated with the m-labeled process.
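The reduction from the fully labeled process to the m-labeled pair \( (\mathbf{X }^m,{\textsf {X}}^{m*} )\) is easy to illustrate numerically. The following is a minimal sketch with hypothetical names (not part of the text): the first m labels are kept, and the remaining particles are forgotten into an unlabeled configuration, so order is lost but multiplicities survive.

```python
from collections import Counter

# Minimal sketch (hypothetical names): from a fully labeled state
# X = (X^1, X^2, ...) to the m-labeled state (X^m, X^{m*}), where
# X^m keeps the first m labels and X^{m*} = sum_{m<i} delta_{X^i}
# records the remaining particles as a point measure.
def m_labeled(labeled_state, m):
    head = labeled_state[:m]           # the labeled part X^m
    tail = Counter(labeled_state[m:])  # the configuration X^{m*}
    return head, tail

head, tail = m_labeled((0.1, -1.2, 3.4, 3.4, 5.0), 2)
print(head)       # (0.1, -1.2)
print(tail[3.4])  # 2: multiplicities survive, labels do not
```

Here `Counter` merely stands in for the point measure \( \sum _{m<i } \delta _{X^i} \); any multiset representation would do.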

We write \( \mathbf{x }^m=(x_i)_{i=1}^m \in ({\mathbb {R}}^d)^m\). Let \( \mu ^{[m]} \) be the (reduced) m-Campbell measure of \( \mu \) defined as

$$\begin{aligned}&\mu ^{[m]} (d\mathbf{x }^md{\textsf {y}}) = \rho ^m (\mathbf{x }^m) \mu _{\mathbf{x }^m} ( d{\textsf {y}})d\mathbf{x }^m, \end{aligned}$$
(9.11)

where \( \rho ^m \) is the m-point correlation function of \( \mu \) with respect to the Lebesgue measure dx on \( S^m\), and \( \mu _{\mathbf{x }^m} \) is the reduced Palm measure conditioned at \( \mathbf{x }^m\in S^m\). Let

$$\begin{aligned}&{\mathscr {E}} ^{a ,\mu ^{[m]}} (f,g) = \int _{S^m \times {\textsf {S}}} \left\{ \frac{1}{2}\sum _{i=1}^{m} \left( a (x_i , {\textsf {x}}^{i\diamondsuit } + {\textsf {y}}) \frac{\partial f}{\partial x_i}, \frac{\partial g}{\partial x_i}\right) _{{\mathbb {R}}^d} + {\mathbb {D}}^{a}[f,g] \right\} \mu ^{[m]} (d\mathbf{x }^md{\textsf {y}}) , \end{aligned}$$
(9.12)

where \( \frac{\partial }{\partial x_i}\) is the nabla in \( {\mathbb {R}}^d\), \( {\textsf {x}}^{i\diamondsuit } = \sum _{j\not =i}^m \delta _{x_j}\), and \( a (x,{\textsf {y}}) \) is given by (9.2). Moreover, \( {\mathbb {D}}^{a}\) is defined by (9.9) naturally regarded as the carré du champ on \( S^m \times {\textsf {S}}\), and

$$\begin{aligned}&{\mathscr {D}}_{\circ }^{a ,\mu ^{[m]}}=\left\{ f \in C_0^{\infty }(S^m)\otimes {\mathscr {D}}_{\circ }\, ;\, {\mathscr {E}} ^{a ,\mu ^{[m]}} (f,f) < \infty ,\ f \in L^2(S^m \times {\textsf {S}}, \mu ^{[m]}) \right\} . \end{aligned}$$

When \( m=0 \), we interpret \( {\mathscr {E}} ^{a ,\mu ^{[0]}} \) and \( \mu ^{[0]} \) as the unlabeled Dirichlet form \( {\mathscr {E}}^{a ,\mu }\) and \( \mu \), respectively.

The closability of the bilinear form \( ({\mathscr {E}} ^{a ,\mu ^{[m]}} , C_0^{\infty }(S^m)\otimes {\mathscr {D}}_{\circ }) \) on \( L^2(S^m \times {\textsf {S}}, \mu ^{[m]}) \) follows from (9.3), (9.4), (A2), and (A3). We can prove this in a similar fashion to the case \( m =0 \) in Lemma 9.1 ( [23]). We then denote by \( ({\mathscr {E}} ^{a ,\mu ^{[m]}} ,{\mathscr {D}}^{a ,\mu ^{[m]}} )\) its closure. The quasi-regularity of \( ({\mathscr {E}} ^{a ,\mu ^{[m]}} , {\mathscr {D}}^{a ,\mu ^{[m]}} )\) was proved in [25] for \( \mu \) with bounded correlation functions. The generalization to (A3) is straightforward, and we omit its proof.

Let \({\textsf {P}}_{(\mathbf{s }^m,{\textsf {s}}^{m*})}^{[m]}\) denote the diffusion measures associated with the m-labeled Dirichlet form \( ({\mathscr {E}} ^{a ,\mu ^{[m]}} , {\mathscr {D}}^{a ,\mu ^{[m]}} ) \) on \( L^2(S^m \times {\textsf {S}}, \mu ^{[m]}) \) (see [25]). We quote:

Lemma 9.3

( [25, Theorem 2.4]) Let \( {\textsf {s}}={\mathfrak {u}} \left( \left( \mathbf{s }^m,{\textsf {s}}^{m*}\right) \right) = \sum _{i=1}^m\delta _{s_i}+{\textsf {s}}^{m*}\). Then

$$\begin{aligned}&{\textsf {P}}_{(\mathbf{s }^{m},{\textsf {s}}^{m*})}^{[m]} = {\textsf {P}}_{{\textsf {s}}} \circ (\mathbf{X }^m, {\textsf {X}}^{m*})^{-1} .\end{aligned}$$

Note that \( {\textsf {P}}_{{\textsf {s}}}\) on the right-hand side is independent of \( m \in \mathbb {N}\). Hence this gives a sequence of coupled \( S^m\times {\textsf {S}}\)-valued continuous processes with distributions \({\textsf {P}}_{(\mathbf{s }^m,{\textsf {s}}^{m*})}^{[m]}\). In this sense, there exists a natural coupling among the m-labeled Dirichlet forms \( ({\mathscr {E}} ^{a ,\mu ^{[m]}} ,{\mathscr {D}}^{a ,\mu ^{[m]}}) \) on \( L^2(S^m\times {\textsf {S}},\mu ^{[m]}) \). This coupling is a crucial ingredient in the construction of weak solutions of ISDEs in [26].

By introducing the m-labeled processes, we can regard \( \mathbf{X }^m\) as a Dirichlet process of the diffusion \( (\mathbf{X }^m, {\textsf {X}}^{m*})\) associated with the m-labeled Dirichlet space. That is, one can regard \( A_t^{[\mathbf{x} _\mathbf{m} ]}:= \mathbf{X }_t^m - \mathbf{X }_0^m \) as a dm-dimensional additive functional given by the composition of \( (\mathbf{X }^m, {\textsf {X}}^{m*})\) with the coordinate function \( \mathbf{x }^m=(x_1,\ldots ,x_m) \in ({\mathbb {R}}^d)^m\). Although \( \mathbf{X }^m\) can also be regarded as an additive functional of the unlabeled process \( {\textsf {X}}=\sum _i \delta _{X^i}\), it is no longer a Dirichlet process in that case. Indeed, as a function of \( {\textsf {X}} \), we cannot identify \( \mathbf{X }_t^m\) without tracing the trajectory of \( {\textsf {X}}_s = \sum _i \delta _{X_s^i}\) for \( s \in [0,t]\).

Once \( \mathbf{X }^m\) is regarded as a Dirichlet process, we can apply the Itô formula (Fukushima decomposition) and the Lyons–Zheng decomposition to \( \mathbf{X }^m\), which is important in proving the results in the subsequent sections.

Sufficient conditions of \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\)

The purpose of this section is to give sufficient conditions for \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) for \( \mathbf{X }={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\). The following assumptions are related to the Dirichlet forms introduced in Sect. 9.2, so we label this series (A4), following (A3) in Sect. 9.2. Let \( {\mathscr {R}} = {\mathscr {R}} (t) \) be the (scaled) complementary error function:

$$\begin{aligned}&{\mathscr {R}} (t) = \int _t^{\infty } (1/\sqrt{2\pi }) e^{-|x|^2/2} dx. \end{aligned}$$
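Numerically, \( {\mathscr {R}} \) is the upper tail of the standard normal distribution and can be evaluated through the standard identity \( {\mathscr {R}} (t) = \frac{1}{2}\,\mathrm {erfc}(t/\sqrt{2}) \); the following sketch is for illustration only.

```python
import math

# R(t) = integral_t^infty (1/sqrt(2*pi)) e^{-x^2/2} dx, the upper tail
# of the standard normal law; equivalently R(t) = erfc(t/sqrt(2)) / 2.
def R(t):
    return 0.5 * math.erfc(t / math.sqrt(2.0))

print(R(0.0))  # 0.5: half of the Gaussian mass lies above 0
```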

We set \( \langle f, {\textsf {s}}\rangle = \sum _{i} f (s_i)\) for \( {\textsf {s}}= \sum _i \delta _{s_i}\) as before.

(A4) (1)

For each \( r,T \in \mathbb {N}\)

$$\begin{aligned}&E^{\mu }\left[ \left\langle {\mathscr {R}} \left( \frac{|\cdot |-r}{T}\right) , {\textsf {s}}\right\rangle \right] < \infty , \end{aligned}$$
(10.1)

and there exists a \( T>0 \) such that for each \( R>0 \)

$$\begin{aligned}&\liminf _{r\rightarrow \infty } {\mathscr {R}} \left( \frac{r}{T}\right) \, E^{\mu }\left[ \langle 1_{S_{r+R}}, {\textsf {s}}\rangle \right] = 0 . \end{aligned}$$
(10.2)

(2) Each \( X^i \) neither hits the boundary \( \partial S\) of S nor collides with any other particle. That is,

$$\begin{aligned}&\text {Cap}^{a ,\mu }(\{ {\textsf {s}} ; {\textsf {s}} (\partial S) \ge 1 \}) = 0 , \end{aligned}$$
(10.3)
$$\begin{aligned}&\text {Cap}^{a ,\mu } ({\textsf {S}}_{\text {s}}^c) = 0 . \end{aligned}$$
(10.4)

Here \( \text {Cap}^{a ,\mu } \) is the capacity of the Dirichlet form \( ({\mathscr {E}}^{a ,\mu },{\mathscr {D}}^{a ,\mu }) \) on \( L^2({\textsf {S}},\mu )\).

Condition (10.2) is an analogue of [6, (5.7.14)]. Unlike that setting, the carré du champ here has the uniform upper bound \( c_{11}c_{12}\) by (9.2)–(9.4).

We remark that (10.2) is easy to check: it holds if the 1-correlation function of \( \mu \) has at most polynomial growth at infinity. Obviously, condition (10.3) is always satisfied if \( \partial S= \emptyset \). We state sufficient conditions for (10.4).
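The reason polynomial growth suffices is that \( {\mathscr {R}} (r/T)\) decays like \( e^{-r^2/2T^2}\), which dominates any polynomial factor. A numerical sanity check, with illustrative values of T and of the growth exponent k (both chosen here only for demonstration):

```python
import math

def R(t):
    # R(t) = erfc(t / sqrt(2)) / 2, the standard Gaussian upper tail
    return 0.5 * math.erfc(t / math.sqrt(2.0))

# If E^mu[<1_{S_{r+R}}, s>] = O(r^k), then R(r/T) * r^k -> 0 as
# r -> infinity, since R(r/T) decays like exp(-r^2 / (2 T^2)).
T, k = 2.0, 5
vals = [R(r / T) * (r + 1.0) ** k for r in (10, 20, 40, 80)]
assert all(b < a for a, b in zip(vals, vals[1:]))  # monotonically vanishing
assert vals[-1] < 1e-60
```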

Lemma 10.1

( [24, Theorem 2.1, Proposition 7.1]) Assume (A1)–(A3). Assume that \( \varPhi \) and \( \varPsi \) are locally bounded from below. We then obtain the following.

  1.

    Assume that \( \mu \) is a determinantal random point field with a locally Lipschitz continuous kernel with respect to the Lebesgue measure. (10.4) then holds.

  2.

    Assume that \( d \ge 2 \). (10.4) then holds.

Proof

Claims (1) and (2) follow from [24, Theorem 2.1, Proposition 7.1]. \(\square \)

We now deduce \(\mathbf {(SIN)}\) from (A1)–(A4).

Lemma 10.2

Assume (A1)–(A4). Then \( \mathbf{X }={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) satisfies \(\mathbf {(SIN)}\) and

$$\begin{aligned}&\text {Cap}^{a ,\mu }({\textsf {S}}_{\text {sde}}^c) = 0 ,\quad {\textsf {P}}_{\mu }\circ {\mathfrak {l}} _{\text {path}}^{-1}(W (\mathbf{S }_{\text {sde}})) = 1 . \end{aligned}$$
(10.5)

Proof

Applying the argument in [6, Theorem 5.7.3] to \( \mathbf{X }^m \) of this Dirichlet form, we see that the diffusion \((\mathbf{X }^m, {\textsf {X}}^{m*}) \) is conservative. Because this holds for all \( m \in {\mathbb {N}}\),

$$\begin{aligned}&{\textsf {P}}_{\mu }( \, \sup \{|X_t^i |;\, t \le T \} < \infty \, \text { for all } T, i \in {\mathbb {N}} \, ) = 1 . \end{aligned}$$
(10.6)

Because \( {\textsf {X}}\in W ({\textsf {S}}_{\text {s}} )\) \( {\textsf {P}}_{\mu }\)-a.s. by (10.4), we write \( {\textsf {X}}_t = \sum _i \delta _{X_t^i} \) such that \( X^i \in C(I^i;S)\), where \( I^i = [0,b)\) or \( I^i = (a,b)\) for some \( a, b \in (0,\infty ]\). We shall prove that \( I^i = [0,\infty )\) \( {\textsf {P}}_{\mu }\)-a.s. Suppose that \( I^i = [0,b)\). Then, from (10.6), we deduce that \( b = \infty \). Next suppose that \( I^i = (a,b)\). Then, applying the strong Markov property of the diffusion \( \{ {\textsf {P}}_{{\textsf {s}}} \} \) at any \( a' \in (a,b)\) and using the preceding argument, we deduce that \( b = \infty \). As a result, we have \( I^i = (a,\infty )\). Because of reversibility, we see that such open intervals do not exist. Hence, we obtain \( I^i = [0,\infty )\) for all i. From this, (10.4), and \( \mu (\{ {\textsf {s}}(S)=\infty \} ) = 1 \), we obtain \( \text {Cap}^{a ,\mu }(\{{\textsf {s}} \in {\textsf {S}}\, ; {\textsf {s}}(S) < \infty \} ) = 0 \). From this and (A4) (2) we have

$$\begin{aligned}&\text {Cap}^{a,\mu }({\textsf {S}}_{\text {s.i.}}^c) = 0 . \end{aligned}$$
(10.7)

From (10.6), (10.7), and (10.3) we obtain \( {\textsf {P}}_{\mu }(W _{\text {NE}}({\textsf {S}}_{\text {s.i.}})) = 1 \). By Lemma 9.2 we obtain the first claim in (10.5). The second claim is clear from the first. We have thus completed the proof. \(\square \)

We next deduce \(\mathbf {(NBJ)}\) from (A1)–(A4).

Lemma 10.3

Assume (A1)–(A4). Then \( \mathbf{X }={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) satisfies \(\mathbf {(NBJ)}\).

Proof

Let \( \rho ^1 \) be the one-point correlation function of \( \mu \) with respect to the Lebesgue measure. Then (10.1) implies

$$\begin{aligned} \int _{S} {\mathscr {R}} \left( \frac{|x| - r}{\sqrt{c_{11}c_{12}}T} \right) \rho ^1 (x) dx < \infty . \end{aligned}$$
(10.8)

Here \( c_{12}= \sup _{t\in {\mathbb {R}}^+} a_0 (t) \) is finite by (9.4).

Let \( ({\mathscr {E}} ^{a ,\mu ^{[1]}} , {\mathscr {D}}^{a ,\mu ^{[1]}} ) \) be the Dirichlet form of the 1-labeled process. Let \( {\textsf {P}}_{(x,{\textsf {s}})}^{[1]} \) be the associated diffusion measure starting at \( (x,{\textsf {s}}) \in S\times {\textsf {S}}\). We set

$$\begin{aligned} {\textsf {P}}_{\mu ^{[1]}}^{[1]} = \int _{S\times {\textsf {S}}} {\textsf {P}}_{(x,{\textsf {s}})}^{[1]} \mu ^{[1]}(dx d{\textsf {s}}) . \end{aligned}$$
(10.9)

Let \( (\mathbf{X }^1,{\textsf {X}}^{1*})= (X^1 , \sum _{i=2}^{\infty } \delta _{X^i}) \) denote the one-labeled process. Then applying the Lyons–Zheng decomposition ( [6, Theorem 5.7.1]) to the coordinate function x, we have a continuous local martingale \( M^1 \) such that for each \( 0 \le t \le T \),

$$\begin{aligned} X_t^1 - X_0^1 = \frac{1}{2}M_t^1 + \frac{1}{2} {\hat{M}}_t^1 \quad \text { for } {\textsf {P}}_{\mu ^{[1]}}^{[1]} \text { -a.e.} , \end{aligned}$$
(10.10)

where \( {\hat{M}}^1 \) is the time reversal of \( M^1 \) on [0, T] such that

$$\begin{aligned} {\hat{M}}_t^1 = M_{T-t}^1 (r_T)- M_{T}^1 (r_T) . \end{aligned}$$
(10.11)

Here we set \( r_T\!:\!C([0,T]; {\mathbb {R}}^d\times {\textsf {S}})\!\rightarrow \!C([0,T]; {\mathbb {R}}^d\times {\textsf {S}})\) such that \( r_T(w )(t) = w (T-t)\). By (9.1), (9.2), and (9.12) the quadratic variation of \( M^1=(M_1^1,\ldots ,M_d^1) \) is given by

$$\begin{aligned}&\langle M_{p}^1 ,M_{q}^1 \rangle _t = \int _0^t \textit{a}_{pq}\Big (X_u^1 , \sum _{i=2}^{\infty } \delta _{X_u^i}\Big )du . \end{aligned}$$
(10.12)

We can prove Lemma 10.3 in a similar fashion to Lemma 8.7. Indeed, (8.81), (8.82), (8.83), and (8.84) correspond to (10.8), (10.9), (10.10), and (10.11), respectively. Although the continuous local martingale \( M^1 \) in (10.10) is not a Brownian motion, the quadratic variation process in (10.12) is controlled by (9.3) and (9.4). So the proof of Lemma 8.7 is still valid for Lemma 10.3, and we omit the details. \(\square \)

Sufficient conditions of \(\mathbf {(IFC)}\)

Localization of coefficients

Let \( \mathbf{a }=\{ a_{{\textsf {q}}}\}_{{\textsf {q}}\in {\mathbb {N}}} \) be a sequence of increasing sequences \( a_{{\textsf {q}}}= \{ a_{{\textsf {q}}}(r) \}_{r\in {\mathbb {N}}} \) of natural numbers such that \( a_{{\textsf {q}}}(r) < a_{{\textsf {q}}+1}(r)\) for all \( r , {\textsf {q}}\in {\mathbb {N}}\). We set for \( \mathbf{a } = \{a_{{\textsf {q}}}\}_{{\textsf {q}}\in {\mathbb {N}}} \)

$$\begin{aligned}&{\textsf {K}}[\mathbf{a }]= \bigcup _{{\textsf {q}}=1}^{\infty } {\textsf {K}}[a_{{\textsf {q}}}], \quad {\textsf {K}}[a_{{\textsf {q}}}] =\{ {\textsf {s}}\, ;\, {\textsf {s}} (S_{r}) \le a_{{\textsf {q}}}(r) \text { for all } r \in {\mathbb {N}} \} . \end{aligned}$$
(11.1)

By construction, \( {\textsf {K}}[a_{{\textsf {q}}}] \subset {\textsf {K}}[a_{{\textsf {q}}+1}]\) for all \( {\textsf {q}}\in {\mathbb {N}}\). It is well known that \( {\textsf {K}}[a_{{\textsf {q}}}]\) is a compact set in \( {\textsf {S}}\) for each \( {\textsf {q}}\in {\mathbb {N}}\). To quantify \( \mu \) by \( \mathbf{a }\), we assume the following.

(B1)    \( \mu ({\textsf {K}}[\mathbf{a }]) = 1 \).
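The membership test defining \( {\textsf {K}}[a_{{\textsf {q}}}]\) in (11.1) is elementary to check on finite data. A finite-truncation sketch with hypothetical names (only radii \( r \le r_{\max }\) are tested, and the sequence `a_q` below is purely illustrative):

```python
# A configuration s = sum_i delta_{s_i} lies in K[a_q] iff
# s(S_r) <= a_q(r) for every r in N, where S_r = {|x| <= r}.
def in_K(points, a, r_max):
    """Check s(S_r) <= a(r) for r = 1, ..., r_max (finite truncation)."""
    return all(sum(1 for x in points if abs(x) <= r) <= a(r)
               for r in range(1, r_max + 1))

a_q = lambda r: 2 * r            # an illustrative increasing sequence a_q(r)
s = [-2.5, -0.3, 0.7, 1.9, 3.1]  # a sample configuration on the line

print(in_K(s, a_q, 5))  # True
```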

We note that a sequence \( \mathbf{a }\) with \( \mu ({\textsf {K}}[\mathbf{a }]) = 1 \) always exists for any random point field \( \mu \) (see [23, Lemma 2.6]). We set \( a_{{\textsf {q}}}^+ (r) = a_{{\textsf {q}}}(r+1)\) and \( \mathbf{a }^+ =\{ a_{{\textsf {q}}}^+ \}_{{\textsf {q}}\in {\mathbb {N}}} \). Let

$$\begin{aligned}&{\textsf {S}}_{\text {s.i.}}^{[m]} = \{ (\mathbf{x },{\textsf {s}})\in S^m\times {\textsf {S}}\, ;\, {\mathfrak {u}} (\mathbf{x }) + {\textsf {s}} \in {\textsf {S}}_{\text {s.i.}}\} , \end{aligned}$$

where \( \mathbf{x }=(x_1,\ldots ,x_m)\) and \( {\mathfrak {u}} (\mathbf{x }) = \sum _{i=1}^m \delta _{x_i}\). Similarly to (8.2)–(8.4), let

$$\begin{aligned}&{\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}})= \big \{ \mathbf{x }\in S_{{\textsf {r}}}^m\, ;\, \min _{j\not =k } |x_j-x_k | \ge 2^{-{\textsf {p}}} ,\ \inf _{l,i} |x_l-s_i| \ge 2^{-{\textsf {p}}} \big \}, \end{aligned}$$

where \( j,k,l=1,\ldots ,m \), \({\textsf {s}}=\sum _i \delta _{s_i}\), and \( S_{{\textsf {r}}}^m = \{ x \in S;\, |x| \le {\textsf {r}}\}^m \), and for \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \)

$$\begin{aligned}&{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}= {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}} :=\big \{ (\mathbf{x },{\textsf {s}}) \in {\textsf {S}}_{\text {s.i.}}^{[m]} \, ;\, \ \mathbf{x } \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}}),\ {\textsf {s}} \in {\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]\big \} .\end{aligned}$$

We set \( S_{{\textsf {r}}}^{m,\circ } = \{ |x| < {\textsf {r}},\, x \in S\}^m \). Let \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\) be the interior of \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}({\textsf {s}})\):

$$\begin{aligned} {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})= \Bigg \{ \mathbf{x } \in S_{{\textsf {r}}}^{m,\circ } \, ;\, \inf _{j\not =k } |x_j-x_k |> 2^{-{\textsf {p}}} ,\ \inf _{l,i} |x_l-s_i| > 2^{-{\textsf {p}}} \Bigg \}. \end{aligned}$$

For \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \) we set

$$\begin{aligned} {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }= {\textsf {H}}[\mathbf{a }]_{{\textsf {p}},{\textsf {q}},{\textsf {r}}}^{\circ } := \big \{ (\mathbf{x },{\textsf {s}}) \in {\textsf {S}}_{\text {s.i.}}^{[m]} \, ;\, \mathbf{x } \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}}), \ {\textsf {s}} \in {\textsf {K}}[\textit{a}_{{\textsf {q}}}^+]\big \} . \end{aligned}$$

Similarly to (8.4) and (8.9), we set \( {\textsf {H}}[\mathbf{a }]\), \( {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}} \), \( {\textsf {H}}[\mathbf{a }]^{\circ } \), and \( {\textsf {H}}[\mathbf{a }]_{{\textsf {q}},{\textsf {r}}}^{\circ } \).

Lemma 11.1

Assume (A1)–(A4) and (B1). For each \( m \in \mathbb {N}\) the following then holds:

$$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}( \lim _{{\textsf {n}}\rightarrow \infty } \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }} (\mathbf{X }^m,{\textsf {X}}^{m*}) = \infty ) = 1 \quad \text { for } \mu \text {-a.s.}\, {\textsf {s}}. \end{aligned}$$
(11.2)

Here \( (\mathbf{X }^m,{\textsf {X}}^{m*})\) is the m-labeled process given by \( (\mathbf{X },\mathbf{B })\) in Lemma 9.2, \( \tau _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }} \) is the exit time from \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\), and \( \lim _{{\textsf {n}}\rightarrow \infty } \) is the same as in (8.11).
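The exit time in (11.2) is simply the first time the process leaves the given set. For a discrete sample path the analogous quantity reads as follows (an illustrative sketch with hypothetical names, not part of the proof):

```python
# Hypothetical discrete-path analogue of the exit time tau_U in (11.2):
# the first sample index at which the path leaves the set U.
def exit_time(path, in_U):
    for t, state in enumerate(path):
        if not in_U(state):
            return t
    return float("inf")  # the path never leaves U

path = [0.1, 0.4, 0.9, 1.3, 0.8]
tau = exit_time(path, lambda x: x < 1.0)
print(tau)  # 3: the first index with x >= 1.0
```

The statement (11.2) says that, along the exhausting sets \( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\), these exit times diverge, so the process never leaves the localizing family in finite time.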

Proof

The proof is the same as that of Lemma 8.1, and we omit it. \(\square \)

A sufficient condition of \(\mathbf {(IFC)}\) and the Yamada–Watanabe theory for SDEs of random environment type

We set \( \mathbf{x } = (x_1,\ldots ,x_m) \in S^m\) and \( {\textsf {x}}^{i\diamondsuit }=\sum _{j\not =i}^m \delta _{x_j}\). Let \( (\sigma , b ) \) be as in (3.3). We set

$$\begin{aligned}&\sigma ^m (\mathbf{x },{\textsf {s}}) = (\sigma (x_i, {\textsf {x}}^{i\diamondsuit }+{\textsf {s}}))_{i=1}^m ,\quad b^m (\mathbf{x },{\textsf {s}}) = ( b (x_i,{\textsf {x}}^{i\diamondsuit }+{\textsf {s}}))_{i=1}^m . \end{aligned}$$

Then the time-inhomogeneous coefficients of SDE (3.18) are given by \( ({\widetilde{\sigma }}^m , \, {\widetilde{b}}^m )|_{{\textsf {s}}={\textsf {X}}_t^{m*}} \), where \(({\widetilde{\sigma }}^m , \, {\widetilde{b}}^m )\) is a version of \( (\sigma ^m, b^m )\) with respect to \( \mu ^{[m]}\).

Let \( {\textsf {N}}\) and \( {\textsf {n}}\) be as in (8.5) and (8.6). Let \( \{ {\textsf {I}}_{{\textsf {m}}}\}_{{\textsf {m}}\in \mathbb {N}} \) be an increasing sequence of closed sets in \( S^m\times {\textsf {S}}\). Let \( \varPi \!:\!S^m\times {\textsf {S}}\!\rightarrow \!{\textsf {S}}\) be the projection such that \( (\mathbf{x },{\textsf {s}}) \mapsto {\textsf {s}}\). Then

$$\begin{aligned}&\varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}})= \{ {\textsf {s}}\in {\textsf {S}}\, ; \, {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\cap \big (S^m\times \{ {\textsf {s}}\} \big ) \not =\emptyset \} . \end{aligned}$$

For \( {\textsf {n}}=({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \), let \( c_{15} ({\textsf {m}},{\textsf {n}}) \) be the constant such that \( 0\le c_{15} \le \infty \) and that

$$\begin{aligned} c_{15} = \sup \{&\frac{|{\widetilde{\sigma }}^{m}(\mathbf{x },{\textsf {s}}) - {\widetilde{\sigma }}^{m}(\mathbf{y },{\textsf {s}}) |}{|\mathbf{x }-\mathbf{y }|} ,\ \frac{|{\widetilde{b}}^{m}(\mathbf{x },{\textsf {s}}) - {\widetilde{b}}^{m}(\mathbf{y },{\textsf {s}}) |}{|\mathbf{x }-\mathbf{y }|} ; \, \ \mathbf{x }\not =\mathbf{y } ,\ \nonumber \\&{\textsf {s}} \in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}),\, \ (\mathbf{x },{\textsf {s}}) , (\mathbf{y },{\textsf {s}}) \in {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}}),\ (\mathbf{x },{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{y },{\textsf {s}}) \} . \end{aligned}$$
(11.3)

Here \( (\mathbf{x },{\textsf {s}}) \sim _{{\textsf {p}},{\textsf {r}}}(\mathbf{y },{\textsf {s}}) \) means \( \mathbf{x } \) and \( \mathbf{y } \) are in the same connected component of \( {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\).
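The constant \( c_{15} \) is thus a Lipschitz constant of the coefficients in the labeled variables, localized to one connected component. Its discrete analogue over finitely many sample points can be sketched as follows (hypothetical names; \( \sin \) is a stand-in coefficient, not one appearing in the text):

```python
import itertools
import math

# Discrete analogue of c_15 in (11.3): the largest difference quotient
# of a coefficient over pairs of points sampled from one component.
def difference_quotient_sup(b, points):
    return max(abs(b(x) - b(y)) / abs(x - y)
               for x, y in itertools.combinations(points, 2))

c = difference_quotient_sup(math.sin, [0.0, 0.3, 0.9, 1.7])
assert 0.0 < c <= 1.0  # sin is 1-Lipschitz, so the discrete sup cannot exceed 1
```

As in (11.3), the supremum is taken only over pairs lying in the same connected component, which is what makes the finiteness assumption (11.4) reasonable for singular interactions.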

Let \( (\mathbf{X },\mathbf{B })= ({\mathfrak {l}} _{\text {path}}({\textsf {X}}) , \mathbf{B })\) be the weak solution of ISDE (3.3)–(3.5) given by Lemma 9.2, defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\). Similarly to (8.21), we set

$$\begin{aligned}&\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle = \bigcup _{{\textsf {s}}\in \varPi ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}})} {\mathbb {R}}_{{\textsf {p}},{\textsf {r}}}^{\circ }({\textsf {s}})\times \{ {\textsf {s}} \} . \end{aligned}$$

We assume the following.

(B2) For each \( m \in \mathbb {N}\), there exist a \( \mu ^{[m]}\)-version \( ({\widetilde{\sigma }}^m , \, {\widetilde{b}}^m )\) of \( (\sigma ^m, b^m )\) and an increasing sequence of closed sets \( \{ {\textsf {I}}_{{\textsf {m}}}\}_{{\textsf {m}}\in \mathbb {N}} \) such that for each \( {\textsf {m}}\in \mathbb {N}\) and \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}) \in {\textsf {N}}_3 \)

$$\begin{aligned}&c_{15} ({\textsf {m}},{\textsf {n}}) < \infty , \end{aligned}$$
(11.4)
$$\begin{aligned}&\lim _{{\textsf {m}}\rightarrow \infty } \text {Cap}^{a,\mu ^{[m]}} \Big ( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\backslash \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \Big ) = 0 . \end{aligned}$$
(11.5)

Remark 11.1

We shall later assume that the coefficients are in the domain of the m-labeled Dirichlet form, and take \( ({\widetilde{\sigma }}^m , \, {\widetilde{b}}^m )\) as a quasi-continuous version and \(\{ {\textsf {I}}_{{\textsf {m}}}\}_{{\textsf {m}}= 1}^{\infty }\) as a nest. We refer to [6, pp. 67–69] for quasi-continuous versions and nests. The sequence of sets \( \{ {\textsf {I}}_{{\textsf {m}}}\} \) plays a crucial role in the proof of Lemma 13.1.

Lemma 11.2

Assume (A1)–(A4) and (B1)–(B2). Then the following hold.

  1.

    For each \( m \in \mathbb {N}\), the SDE (3.10)–(3.12) has a pathwise-unique weak solution starting at \( \mathbf{s }^m={\mathfrak {l}} ^m ({\textsf {s}}) \) for \( \mu \)-a.s. \( {\textsf {s}}\), in the sense that any solutions \( (\mathbf{Y }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) and \( (\hat{\mathbf{Y }}^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) of (3.10)–(3.12) defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\) satisfy

    $$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}(\mathbf{Y }^m = \hat{\mathbf{Y }}^m) = 1 . \end{aligned}$$

    In particular, \( (\mathbf{Y }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) coincides with \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) under \( {\textsf {P}}_{{\textsf {s}}}\).

  2.

    Let \( (\mathbf{Z }^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) and \( (\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) be weak solutions of the SDE (3.10)–(3.12) defined on a filtered space \((\varOmega ',{\mathscr {F}}', P' ,\{ {\mathscr {F}}_t '\} )\) satisfying

    $$\begin{aligned} (\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*}) {\mathop {=}\limits ^{\text {law}}}(\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*}) . \end{aligned}$$

    Then it holds that for \( \mu \)-a.s. \( {\textsf {s}}\)

    $$\begin{aligned}&P' (\mathbf{Z }^m = \hat{\mathbf{Z }}^m) = 1. \end{aligned}$$
    (11.6)
  3.

    Make the same assumptions as in (2) except that the filtrations of \( (\mathbf{Z }^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) and \( (\hat{\mathbf{Z }}^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\), say \( \{{\mathscr {F}}_t'\} \) and \( \{ {\mathscr {F}}_t'' \} \), may differ (but on the same probability space \((\varOmega ',{\mathscr {F}}', P' )\)). Assume that the coefficient \( \sigma ^m \) is constant. Then (11.6) holds for \( \mu \)-a.s. \( {\textsf {s}}\).

Proof

From (A1)–(A4) we can construct a weak solution \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) under \( {\textsf {P}}_{{\textsf {s}}}\) of (3.10)–(3.12). We remark that (A1)–(A4) and (B1)–(B2) yield the related claims of Proposition 8.1. Indeed, (11.4) corresponds to (8.23). From (A1)–(A4) and (B1)–(B2) we apply Lemma 11.1 to obtain (11.2). Then from (11.2) and (11.5) we easily see

$$\begin{aligned} {\textsf {P}}_{{\textsf {s}}}( \lim _{{\textsf {n}}\rightarrow \infty } \lim _{{\textsf {m}}\rightarrow \infty }\tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle } ( \mathbf{X }^m,{\textsf {X}}^{m*}) = \infty ) = 1 \quad \text { for } \mu \text {-a.s.}\, {\textsf {s}} . \end{aligned}$$
(11.7)

Here \( \tau _{\langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle }(\mathbf{X }^m,{\textsf {X}}^{m*}) \) is the exit time of \( (\mathbf{X }^m,{\textsf {X}}^{m*})\) from the set \( \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \), and (11.7) corresponds to (8.24). The claims in Proposition 8.1 are used in the proof of Lemma 8.5, and no other specific properties of the Ginibre interacting Brownian motion are needed there. Hence the proof of Lemma 11.2 is the same as that of Lemma 8.5. \(\square \)

Remark 11.2

In (3) the reference families \( \{{\mathscr {F}}_t'\} \) and \( \{ {\mathscr {F}}_t'' \} \) of the SDEs are different, although the probability space and the Brownian motion are the same. The pathwise uniqueness in (3) is called pathwise uniqueness in the strict sense in [9, p. 162]. We shall use this refinement in the proof of Proposition 11.1. In the classical situation of the Yamada–Watanabe theory, pathwise uniqueness in the strict sense follows from pathwise uniqueness and the existence of weak solutions as a corollary of their main result. In the current case, this part of the Yamada–Watanabe theory has not yet been generalized to SDEs of random environment type, so we add the additional assumption in (3).

Recall that \( (\mathbf{X },\mathbf{B })\) is the weak solution of (3.3)–(3.5) defined on \( (\varOmega ,{\mathscr {F}}, {\textsf {P}}_{{\textsf {s}}}, \{ {\mathscr {F}}_t \} )\) given by Lemma 9.2. Let \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\) be the weak solution of (3.10) constructed from \( (\mathbf{X },\mathbf{B })\). To simplify the notation, we set

$$\begin{aligned} w=(\mathbf{b },{\textsf {x}}) \in W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}). \end{aligned}$$

Let \(P_{w}\) be the regular conditional probability such that

$$\begin{aligned} P_{w}= {\textsf {P}}_{{\textsf {s}}}(\mathbf{X }^m \in \cdot \,| \, (\mathbf{B }^m , {\textsf {X}}^{m*}) =w) . \end{aligned}$$

Let \( (\mathbf{Y }^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\) be an independent copy of \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*})\). Let \( {\hat{P}}\) be the distribution of \( (\mathbf{Y }^m , \hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})\). Let \( {\hat{P}}_{w}= {\hat{P}} (\mathbf{Y }^m \in \cdot \,| \, (\hat{\mathbf{B }}^m , \hat{{\textsf {X}}}^{m*})=w)\). Let Q be the distribution of \( (\mathbf{B }^m , {\textsf {X}}^{m*})\). We define the probability measure R on

$$\begin{aligned} W^{\bullet }:= W (S^m )\times W (S^m )\times W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}})\end{aligned}$$

by

$$\begin{aligned}&R (d\mathbf{u }d\mathbf{v }dw) = P_{w}( d\mathbf{u }) {\hat{P}}_{w}(d\mathbf{v }) Q(dw). \end{aligned}$$
(11.8)

We set \( {\mathscr {G}} \) to be the completion of the topological \( \sigma \)-field \( {\mathscr {B}}(W^{\bullet }) \) by R, and \( {\mathscr {G}}_t = \cap _{\varepsilon }( {\mathscr {B}}_{t+\varepsilon }(W^{\bullet }) \vee {\mathscr {N}} ) \), where \( {\mathscr {B}}_t = \sigma [(\mathbf{u }(u),\mathbf{v }(u),w(u))\, ;\, 0 \le u \le t ] \) and \( {\mathscr {N}} \) is the set of all R-null sets.

Proposition 11.1

Assume (A1)–(A4) and (B1)–(B2). Assume that the coefficient \( \sigma ^m \) is constant. Then \( \mathbf{X }={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) satisfies \(\mathbf {(IFC)}\).

Proof

From Lemma 11.2 (3) we have a pathwise-unique weak solution \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*}) \). Then, from a generalization of the Yamada–Watanabe theory (see Theorem 1.1 in [9, p. 163]) to SDEs with random environment, we shall construct a strong solution.

Under R, both \((\mathbf{u },\mathbf{b },{\textsf {x}}) \) and \( (\mathbf{v },\mathbf{b },{\textsf {x}}) \) are weak solutions of (3.10)–(3.12). These solutions are defined on \( \varXi = (W^{\bullet },{\mathscr {G}} , R ,\{ {\mathscr {G}}_t \} )\), and the SDEs (3.10) for \( (\mathbf{u },\mathbf{b },{\textsf {x}}) \) and \((\mathbf{v },\mathbf{b },{\textsf {x}}) \) become the following.

$$\begin{aligned}&d u_t^{i} = \sigma ^m d\mathbf{b }_t^i + \textit{b}_{{\textsf {x}}}^m (t, (u_t^{i},\mathbf{u }_t^{i\diamondsuit })) dt , \end{aligned}$$
(11.9)
$$\begin{aligned}&d v_t^{i} = \sigma ^m d\mathbf{b }_t^i + \textit{b}_{{\textsf {x}}}^m (t, (v_t^{i},\mathbf{v }_t^{i\diamondsuit })) dt . \end{aligned}$$
(11.10)

Here \( \textit{b}_{{\textsf {x}}}^m \) is defined by (3.8) with \( {\textsf {X}}_t^{m*}\) replaced by \( {\textsf {x}}_t^{m*}=\sum _{j=m+1}^{\infty } \delta _{x_t^j}\).

Solutions of SDEs (11.9) and (11.10) are defined on \( \varXi = (W^{\bullet },{\mathscr {G}} , R ,\{ {\mathscr {G}}_t \} )\). Although the present proof does not require the fact that \( \mathbf{b } \) under R is a \( \{ {\mathscr {G}}_t \} \)-Brownian motion, we establish it here by means of Lemma 11.3 below. By construction, \( \mathbf{b } \) under R is a Brownian motion. Hence it only remains to prove that \( \mathbf{b }(u)- \mathbf{b }(t)\) is independent of \( {\mathscr {G}}_t \) for all \( t < u \) under R, which we do in Lemma 11.3.

Note that the distributions of both \((\mathbf{u },\mathbf{b },{\textsf {x}}) \) and \( (\mathbf{v },\mathbf{b },{\textsf {x}}) \) under R coincide with that of \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*}) \). Hence we obtain

$$\begin{aligned}&R (\mathbf{u }= \mathbf{v }) = 1 \end{aligned}$$
(11.11)

from the pathwise uniqueness of weak solutions given by Lemma 11.2 (3). Let

$$\begin{aligned}&Q_w= R ((\mathbf{u },\mathbf{v })\in \cdot \,|\, w ) . \end{aligned}$$
(11.12)

Then we deduce from (11.8)

$$\begin{aligned}&Q_w (d\mathbf{u }d\mathbf{v })= P_{w}( d\mathbf{u }) {\hat{P}}_{w}(d\mathbf{v }). \end{aligned}$$
(11.13)

The identity \( R (\mathbf{u }= \mathbf{v }) = 1\) in (11.11) together with (11.12) implies \( Q_w (\mathbf{u }= \mathbf{v }) = 1 \) for Q-a.s. w. Meanwhile, from (11.13) we deduce that \( \mathbf{u }\) and \( \mathbf{v }\) under \( Q_w \) are mutually independent. Hence the distribution of \( (\mathbf{u },\mathbf{v })\) under \( Q_w\) is \( \delta _{(F(w),F(w))}\), where F(w) is a nonrandom element of \( W (S^m )\) depending on w. Thus F is regarded as a function from \( W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}})\) to \( W (S^m )\) given by \( w \longmapsto F(w)\). Both \( P_{w} \) and \({\hat{P}}_{w} \) coincide with \( \delta _{F(w)}\). We therefore obtain \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*}) = (F(\mathbf{B }^m , {\textsf {X}}^{m*}),\mathbf{B }^m , {\textsf {X}}^{m*})\).
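The step from independence and almost-sure coincidence to a degenerate law is the classical point of the Yamada–Watanabe argument; a minimal sketch, using only (11.11)–(11.13):

```latex
% u = v holds Q_w-a.s., and u, v are independent under Q_w by (11.13).
% Hence, for any Borel set A in W(S^m),
\begin{aligned}
Q_w(\mathbf{u}\in A)
  &= Q_w(\mathbf{u}\in A,\ \mathbf{v}\in A)      % since Q_w(u = v) = 1
   = Q_w(\mathbf{u}\in A)\, Q_w(\mathbf{v}\in A) % by independence (11.13)
   = Q_w(\mathbf{u}\in A)^2 .                    % equal marginals P_w = \hat{P}_w
\end{aligned}
```

Thus \( Q_w(\mathbf{u}\in A) \in \{0,1\} \) for every Borel A, and a Borel probability measure on the Polish space \( W(S^m) \) taking only the values 0 and 1 is a Dirac mass, which is the \( \delta_{F(w)} \) above.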

We easily see that F is \( \overline{{\mathscr {B}}( W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}))}^{Q} /{\mathscr {B}}( W (S^m )) \)-measurable. Indeed, letting \( \iota \) and \( \kappa \) be the projections such that \( \iota (\mathbf{u },\mathbf{v },w) = \mathbf{u }\) and \( \kappa (\mathbf{u },\mathbf{v },w)= w \), we see \( F =\iota \circ \kappa ^{-1}\) R-a.s. Then \( \kappa ^{-1} (F^{-1}(A)) = \iota ^{-1}(A) \) R-a.s. for each \( A \in {\mathscr {B}}( W (S^m )) \), that is,

$$\begin{aligned} R ( \kappa ^{-1}(F^{-1}(A)) \ominus \iota ^{-1}(A)) = 0,\end{aligned}$$

where \( \ominus \) denotes the symmetric difference of sets. Note that

$$\begin{aligned} \iota ^{-1}(A) \in {\mathscr {G}}= \overline{{\mathscr {B}}( W (S^m )\times W (S^m )\times W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}})) }^{R} . \end{aligned}$$

Then we see \( \kappa (\iota ^{-1}(A)) \in \overline{{\mathscr {B}}(W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}})) }^{Q} \) by \( Q = R \circ \kappa ^{-1}\). This implies

$$\begin{aligned} F^{-1}(A) \in \overline{{\mathscr {B}}( W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}))}^{Q} . \end{aligned}$$

Hence F is \( \overline{{\mathscr {B}}( W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}))}^{Q} \)/\({\mathscr {B}}( W (S^m )) \)-measurable.

We can prove that F is \( \overline{{\mathscr {B}}_t (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}))}^{Q} \)/\({\mathscr {B}}_t( W (S^m )) \)-measurable for each t in a similar fashion. Here the subscript t denotes the \( \sigma \)-field generated up to time t. Indeed, we can localize the ISDE and the solution in time to [0, T] for every \( T > 0 \). The restriction of the original weak solution \( (\mathbf{X }^m , \mathbf{B }^m , {\textsf {X}}^{m*}) \) to the time interval [0, T] is also a solution of the ISDE. Hence, by the same argument, we can construct a strong solution \( F_T \) defined on \( W_{\mathbf{0 }}([0,T];{\mathbb {R}}^{dm}) \times W([0,T];{\textsf {S}})\). The solution \( F_T \) is \( \overline{{\mathscr {B}}(W_{\mathbf{0 }}([0,T];{\mathbb {R}}^{dm}) \times W([0,T];{\textsf {S}}))}^{Q} \)/ \({\mathscr {B}}(W ([0,T]; S^m)) \)-measurable. By the pathwise uniqueness of weak solutions, \( F_T(\mathbf{b },{\textsf {x}}) (t)= F (\mathbf{b },{\textsf {x}})(t) \) for all \( 0 \le t \le T \). We have the natural identities

$$\begin{aligned} {\mathscr {B}}(W_{\mathbf{0 }}([0,T];{\mathbb {R}}^{dm}) \times W([0,T];{\textsf {S}}))=&{\mathscr {B}}_T (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}})) \\ {\mathscr {B}}(W ([0,T]; S^m)) =&{\mathscr {B}}_T( W (S^m )) .\end{aligned}$$

Hence F is \( \overline{{\mathscr {B}}_T (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}))}^{Q} \)/\({\mathscr {B}}_T( W (S^m )) \)-measurable for each T.

Recall that \( \mathbf{s }\) is suppressed from the notation of F. We set

$$\begin{aligned}&F_{\mathbf{s }}^m(\mathbf{B }^m,\mathbf{X }^{m*}) = F (\mathbf{B }^m, {\mathfrak {u}} _{\text {path}}(\mathbf{X }^{m*})). \end{aligned}$$

Because \( {\mathfrak {l}} _{\text {path}}\circ {\mathfrak {u}} _{\text {path}}= \mathrm {id}\) holds \( {\textsf {P}}_{\mu }\circ {\mathfrak {l}} _{\text {path}}^{-1}\)-a.s. and \( {\mathfrak {u}} _{\text {path}}\circ {\mathfrak {l}} _{\text {path}}= \mathrm {id}\) holds \( {\textsf {P}}_{\mu }\)-a.s., the function \( F_{\mathbf{s }}^m\) inherits the measurability properties required in Definition 3.9 from those of F. Hence \( F_{\mathbf{s }}^m\) is a strong solution of (3.10)–(3.12) for \( (\mathbf{X },\mathbf{B })\) under \( P_{\mathbf{s }}\) for \( {\textsf {P}}_{\mu }\circ {\mathfrak {l}} ^{-1}\)-a.s. \( \mathbf{s }\). We have already proved the pathwise uniqueness. Thus, \( F_{\mathbf{s }}^m\) is the unique strong solution for \( (\mathbf{X },\mathbf{B })\) starting at \( \mathbf{s }^m\) for \( {\textsf {P}}_{\mu }\circ {\mathfrak {l}} ^{-1}\)-a.s. \( \mathbf{s }\). Hence \( \mathbf{X }={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) satisfies \(\mathbf {(IFC)}\). \(\square \)

Lemma 11.3

For each t, the increments \( \{\mathbf{b }_u-\mathbf{b }_t \, ;\, t< u < \infty \}\) are independent of \( {\mathscr {G}}_t \) under R.

Proof

Note that \( P_{w}=R(\mathbf{u }\in \cdot \,|\, w) \) and \({\hat{P}}_{w}=R(\mathbf{v }\in \cdot \,|\, w) \). Let F be the strong solution obtained in the proof of Proposition 11.1. Note that F is \( \overline{{\mathscr {B}}_t (W _{\mathbf{0 }} ({\mathbb {R}}^{dm})\times W ({\textsf {S}}))}^{Q} \)/\({\mathscr {B}}_t( W (S^m )) \)-measurable for each t. Then for any \( A_1\times A_2\times A_3 \in {\mathscr {G}}_t \) and \( \theta \in {\mathbb {R}}^{dm}\)

$$\begin{aligned} E^{R}[e^{\sqrt{-1}\langle \theta , \mathbf{b }_u-\mathbf{b }_t\rangle }1_{A_1\times A_2 \times A_3 }]&=\int _{A_3} e^{\sqrt{-1}\langle \theta , \mathbf{b }_u-\mathbf{b }_t\rangle } P_{w}(A_1){\hat{P}}_{w}(A_2) Q (dw)\\&=\int _{A_3} e^{\sqrt{-1}\langle \theta , \mathbf{b }_u-\mathbf{b }_t\rangle } 1_{A_1}(F(w))1_{A_2}(F(w))Q(dw)\\&= e^{-(u-t)|\theta |^2/2} \int _{A_3} 1_{A_1}(F(w))1_{A_2}(F(w))Q(dw)\\&= e^{-(u-t)|\theta |^2/2} R (A_1\times A_2\times A_3 ) . \end{aligned}$$

This implies the claim. \(\square \)

Recall that \( {\textsf {P}}_{\mu }\) is given by (9.10).

Theorem 11.1

Assume that \( \mu \) and \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) satisfy \(\mathbf {(TT)}\), (A1)–(A4), and (B1)–(B2). Assume that the coefficient \( \sigma ^m \) is constant. Then ISDE (3.3)–(3.4) has a family of unique strong solutions \( \{ {F}_{\mathbf{s }}\} \) starting at \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}})\) for \( \mu \)-a.s. \( {\textsf {s}}\) under the constraints of \((\mathbf {MF})\), \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

Proof

We take \( (\mathbf{X },\mathbf{B })\) and P in Theorem 3.1 as \( \mathbf{X }:={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) and \( P := {\textsf {P}}_{\mu }\). We check that this choice satisfies all the assumptions of Theorem 3.1. That is, we have to check \(\mathbf {(TT)}\) for \( \mu \) and that \( (\mathbf{X },\mathbf{B })\) under P is a weak solution satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

\(\mathbf {(TT)}\) for \( \mu \) holds by assumption. By Lemma 10.2, \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) satisfies \(\mathbf {(SIN)}\). Hence, by Lemma 9.2, \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) is a weak solution. Note that the coefficient \( \sigma ^m \) is constant and that (A1)–(A4) and (B1)–(B2) hold by assumption. Then the assumptions of Proposition 11.1 are fulfilled. Hence we obtain \(\mathbf {(IFC)}\) from Proposition 11.1. Using Lemma 10.3 we obtain \(\mathbf {(NBJ)}\). Because \( {\textsf {P}}_{\mu }\) is \( \mu \)-reversible, \(\mathbf {(AC)}\) for \( \mu \) is obvious. Thus, all the assumptions of Theorem 3.1 are fulfilled, and the claim follows from Theorem 3.1. \(\square \)

Corollary 11.1

Under the same assumptions as Theorem 11.1, \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{\mu }\) is a weak solution of (3.3)–(3.4) satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\). Furthermore, \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( \{{\textsf {P}}_{{\textsf {s}}}\} \) is a family of unique strong solutions starting at \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}})\) for \( \mu \)-a.s. \( {\textsf {s}}\) under the constraints of \((\mathbf {MF})\), \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \(\mu \), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

A sufficient condition for (B2) and Taylor expansion of coefficients

In this section we give a sufficient condition for (B2). We begin by introducing cut-off functions on \( S^m\times {\textsf {S}}\). Let \( \varphi _{{\textsf {r}}}\in C_0^{\infty }(S^m) \) be a cut-off function such that

$$\begin{aligned} 0 \le \varphi _{{\textsf {r}}}(\mathbf{x }) \le 1 , \quad |\nabla \varphi _{{\textsf {r}}}(\mathbf{x })| \le 2 , \quad \varphi _{{\textsf {r}}}(\mathbf{x }) = {\tilde{\varphi }}_{{\textsf {r}}}(|\mathbf{x }|) \quad \text { for all } \mathbf{x }\in S^m, \end{aligned}$$

where \( {\tilde{\varphi }}_{{\textsf {r}}}\in C_0^{\infty }({\mathbb {R}})\) is given by (8.27). Let \( h_{{\textsf {p}}}\in C_0^{\infty }({\mathbb {R}})\) (\( {\textsf {p}}\in \{ 0 \}\cup \mathbb {N}\)) be such that

$$\begin{aligned}&h_{{\textsf {p}}}(t) = {\left\{ \begin{array}{ll} 1 &{} (t \le 2^{-{\textsf {p}}-2}) ,\\ 0 &{} (2^{-{\textsf {p}}-1} \le t ) \end{array}\right. } ,\quad 0 \le h_{{\textsf {p}}}(t) \le 1 , \quad |h_{{\textsf {p}}}'(t)| \le 2^{{\textsf {p}}+ 3} \quad \text { for all }t . \end{aligned}$$

We write \( \mathbf{x }=(x_k)_{k=1}^m \in S^m\) and \( {\textsf {s}} = \sum _i \delta _{s_i}\in {\textsf {S}}\). Let \( h_{{\textsf {p}}}^{\dagger }\!:\!S^m\times {\textsf {S}}\!\rightarrow \!{\mathbb {R}}\) be defined by

$$\begin{aligned}&\quad \quad \quad h_{{\textsf {p}}}^{\dagger } ( \mathbf{x },{\textsf {s}}) = \prod _{k=1}^{m} \Bigg \{\prod _{j\not =k}^m \{ 1- h_{{\textsf {p}}}(|x_k-x_j|)\}\Bigg \} \Bigg \{\prod _{i}\{ 1- h_{{\textsf {p}}}(|x_k-s_i|)\}\Bigg \}. \end{aligned}$$

We label \( {\textsf {s}}=\sum _i \delta _{s_i}\) in such a way that \( |s_i| \le |s_{{i+1}}|\) for all i. We set

$$\begin{aligned} I({\textsf {q}},r ,{\textsf {s}}) = \{ i \, ;\, i > a_{{\textsf {q}}}(r) ,\ s_i \in S_{r}\} . \end{aligned}$$

Here \( a_{{\textsf {q}}}= \{ a_{{\textsf {q}}}(r) \}_{r\in {\mathbb {N}}} \) (\( {\textsf {q}}\in \mathbb {N}\)) are the increasing sequences in (11.1). We set

$$\begin{aligned} d_{a_{{\textsf {q}}}} ({\textsf {s}})&= \left\{ \sum _{r=1}^{\infty } \sum _{i\in I({\textsf {q}},r ,{\textsf {s}}) } (r-|s_i|)^2 \right\} ^{1/2} ,\\&\quad \chi _{a_{{\textsf {q}}}} ({\textsf {s}}) = h_{0} \circ d_{a_{{\textsf {q}}}} ({\textsf {s}}) . \end{aligned}$$

We introduce the cut-off functions defined as

$$\begin{aligned} {\left\{ \begin{array}{ll} \chi _{{\textsf {r}}} (\mathbf{x },{\textsf {s}}) = \varphi _{{\textsf {r}}}(\mathbf{x }) ,\\ \chi _{{\textsf {q}},{\textsf {r}}} (\mathbf{x },{\textsf {s}}) = \varphi _{{\textsf {r}}}(\mathbf{x }) \chi _{a_{{\textsf {q}}}} ({\textsf {s}}) ,\\ \chi _{{\textsf {p}},{\textsf {q}},{\textsf {r}}} (\mathbf{x },{\textsf {s}}) = \varphi _{{\textsf {r}}}(\mathbf{x }) \chi _{a_{{\textsf {q}}}} ({\textsf {s}}) h_{{\textsf {p}}}^{\dagger } (\mathbf{x },{\textsf {s}}) \end{array}\right. }. \end{aligned}$$
(11.14)

We easily see that

$$\begin{aligned}&\lim _{{\textsf {r}}\rightarrow \infty } \lim _{{\textsf {q}}\rightarrow \infty } \lim _{{\textsf {p}}\rightarrow \infty } \chi _{{\textsf {p}},{\textsf {q}},{\textsf {r}}} (\mathbf{x },{\textsf {s}}) = 1 \quad \text { for all } (\mathbf{x },{\textsf {s}})\in {\textsf {H}}[\mathbf{a }]. \end{aligned}$$

Let \( {\textsf {N}}= \{ ({\textsf {p}},{\textsf {q}},{\textsf {r}}), ({\textsf {q}},{\textsf {r}}), {\textsf {r}}\ ;\, {\textsf {p}},{\textsf {q}},{\textsf {r}}\in \mathbb {N}\} \), and let \( {\textsf {N}}_1\), \( {\textsf {N}}_2 \), and \( {\textsf {N}}_3 \) be as in (8.5). Recall that for \( {\textsf {n}}\in {\textsf {N}}\) we define \( {\textsf {n}}+ 1 \in {\textsf {N}}\) such that

$$\begin{aligned}&{\textsf {n}}+ 1 = {\left\{ \begin{array}{ll} ({\textsf {p}}+ 1, {\textsf {q}}, {\textsf {r}}) &{}\text { for } {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}}), \\ ({\textsf {q}}+1, {\textsf {r}})&{}\text { for } {\textsf {n}}= ({\textsf {q}}, {\textsf {r}}) ,\\ {\textsf {r}}+1 &{}\text { for } {\textsf {n}}= {\textsf {r}}. \end{array}\right. } \end{aligned}$$

We set \( \chi _{{\textsf {n}}}= \chi _{{\textsf {p}},{\textsf {q}},{\textsf {r}}} \) for \( {\textsf {n}}= ({\textsf {p}},{\textsf {q}},{\textsf {r}})\), and similarly \( \chi _{{\textsf {n}}}= \chi _{{\textsf {q}},{\textsf {r}}} \) for \( {\textsf {n}}= ({\textsf {q}},{\textsf {r}})\) and \( \chi _{{\textsf {n}}}= \chi _{{\textsf {r}}} \) for \( {\textsf {n}}= {\textsf {r}}\). Then \( \{ \chi _{{\textsf {n}}}\} \) are consistent in the sense that \( \chi _{{\textsf {n}}}(\mathbf{x },{\textsf {s}}) = \chi _{{\textsf {n}}+1}(\mathbf{x },{\textsf {s}}) \) for \( (\mathbf{x },{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\). We suppress m from the notation of \( \chi _{{\textsf {n}}}\) although \( \chi _{{\textsf {n}}}\) depends on \( m \in \mathbb {N}\). By a direct calculation similar to that in [23, Lemma 2.5], we obtain the following.

Lemma 11.4

For each \( m \in \mathbb {N}\), the functions \( \chi _{{\textsf {n}}}\) (\( {\textsf {n}}\in {\textsf {N}}\)) satisfy the following.

$$\begin{aligned}&\chi _{{\textsf {n}}}(\mathbf{x },{\textsf {s}}) = {\left\{ \begin{array}{ll} 0 &{} \text { for } (\mathbf{x },{\textsf {s}}) \not \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}\\ 1&{} \text { for } (\mathbf{x },{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\end{array}\right. } ,\quad \chi _{{\textsf {n}}}\in {\mathscr {D}}^{a ,\mu ^{[m]}} ,\\&0 \le \chi _{{\textsf {n}}}(\mathbf{x },{\textsf {s}}) \le 1 ,\quad |\nabla _\mathbf{x }\chi _{{\textsf {n}}}(\mathbf{x },{\textsf {s}}) |^2 \le c_{16} ,\quad {\mathbb {D}}[\chi _{{\textsf {n}}},\chi _{{\textsf {n}}}] (\mathbf{x },{\textsf {s}}) \le c_{17} . \end{aligned}$$

Here \( c_{16}= c_{16}({\textsf {n}})\) and \( c_{17}= c_{17}({\textsf {n}})\) are positive finite constants independent of \( (\mathbf{x },{\textsf {s}}) \), and \( {\mathscr {D}}^{a ,\mu ^{[m]}}\) is the domain of the Dirichlet form of the m-labeled process \( (\mathbf{X }^m, {\textsf {X}}^{m*})\).

We shall give a sufficient condition for \( {(\mathbf{B2} )}\) using Taylor expansion. Let

$$\begin{aligned}&\mathbf{J }^{[l]} = \left\{ \mathbf{j }= (j_{k,i})_{k=1,\ldots ,m,\, i=1,\ldots ,d} ;\ j_{k,i} \in \{ 0 \} \cup \mathbb {N},\, \sum _{k=1}^m \sum _{i=1}^{d} j_{k,i}=l \right\} . \end{aligned}$$
(11.15)

We set \( \partial _{\mathbf{j }} = \prod _{k,i} (\partial /\partial x_{k,i} )^{j_{k,i}} \) for \( \mathbf{j }=( j_{k,i}) \in \mathbf{J }^{[l]} \), where \( ((x_{k,i})_{i=1}^d)_{k=1}^m \in {\mathbb {R}}^{dm}\) and \( ({\partial }/{\partial x_{k,i}})^{j_{k,i}}\) denotes the identity if \( j_{k,i}=0 \). For \( \ell \in \mathbb {N}\), we introduce the following:

(C1) For each \( \mathbf{j } \in \cup _{l=0}^{\ell } \mathbf{J }^{[l]} \) and \( {\textsf {n}}\in {\textsf {N}}_3 \),

$$\begin{aligned}&\chi _{{\textsf {n}}}\partial _{\mathbf{j }} \sigma ^m, \chi _{{\textsf {n}}}\partial _{\mathbf{j }} b^m \in {\mathscr {D}}^{a ,\mu ^{[m]}} . \end{aligned}$$

(C2) There exists a \( \mu ^{[m]}\)-version \( \{ {\widetilde{\sigma }}^m , \, {\widetilde{b}}^m \}\) of \( \{ \sigma ^m ,b^m \} \) such that

$$\begin{aligned}&\sup \big \{ |\partial _{\mathbf{j }}{\textsf {f}}(\mathbf{x },{\textsf {s}})| ; \, (\mathbf{x },{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}},\ {\textsf {f}}\in \{ {\widetilde{\sigma }}^m , \, {\widetilde{b}}^m \}\big \} < \infty \end{aligned}$$

for each \( \mathbf{j } \in \mathbf{J }^{[\ell ]}\) and \( {\textsf {n}}\in {\textsf {N}}_3 \).
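For orientation, a concrete instance of the multi-index notation (11.15), with the hypothetical choice \( m=2 \), \( d=1 \), \( l=2 \):

```latex
% J^{[2]} consists of the multi-indices j = (j_{1,1}, j_{2,1}) with j_{1,1} + j_{2,1} = 2:
\mathbf{J}^{[2]} = \{\, (2,0),\ (1,1),\ (0,2) \,\} ,
\qquad
\partial_{(2,0)} = \frac{\partial^2}{\partial x_{1,1}^2} ,
\qquad
\partial_{(1,1)} = \frac{\partial^2}{\partial x_{1,1}\,\partial x_{2,1}} .
```

Condition (C1) then requires, for instance, that \( \chi_{\textsf{n}}\,\partial_{(1,1)} b^m \) belong to \( {\mathscr{D}}^{a,\mu^{[m]}} \) for every \( {\textsf{n}} \in {\textsf{N}}_3 \).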

Remark 11.3

Note that \( \varphi _{{\textsf {r}}}(\mathbf{x }) \) and \( h_{{\textsf {p}}}^{\dagger } (\mathbf{x },{\textsf {s}}) \) in (11.14) are smooth in \( \mathbf{x }\), and that \(\partial _{\mathbf{i }} \varphi _{{\textsf {r}}}(\mathbf{x }) \) and \( \partial _{\mathbf{i }} h_{{\textsf {p}}}^{\dagger } (\mathbf{x },{\textsf {s}}) \) are bounded. Hence (11.14) and (C1), together with a straightforward calculation, show that \( (\partial _{\mathbf{i }}\chi _{{\textsf {n}}})( \partial _{\mathbf{j }} \sigma ^m ) \) and \((\partial _{\mathbf{i }}\chi _{{\textsf {n}}})(\partial _{\mathbf{j }} b^m ) \) belong to \( {\mathscr {D}}^{a ,\mu ^{[m]}} \).

Proposition 11.2

Assume that (C1) and (C2) hold for some \( \ell \in \mathbb {N}\). Then (B2) holds.

Proof

We omit the proof of Proposition 11.2 because it is the same as that of Proposition 8.1. Indeed, in the proof of Proposition 8.1, we need Lemma 8.1, Lemma 8.4, and (8.55). These correspond to Lemma 11.1, (C1), and (C2), respectively. \(\square \)

Theorem 11.2

Under the same assumptions as Theorem 11.1 with (B2) replaced by (C1)–(C2), the same conclusions as in Theorem 11.1 hold.

Proof

From Proposition 11.2 and Theorem 11.1 we obtain the claim. \(\square \)

Corollary 11.2

Under the same assumptions as Theorem 11.2, the same conclusions as in Corollary 11.1 hold.

Remark 11.4

For sine\(_{2} \), Airy\(_2\), and Bessel\(_{2,\alpha }\) random point fields, there is another construction of stochastic dynamics based on space-time correlation functions [14, 33]. Theorem 11.2 combined with tail triviality obtained in [2, 19, 29] proves that these two dynamics are the same [30,31,32].

Sufficient conditions for \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) for \( \mu \) with non-trivial tails

In this section, we deduce \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) from assumptions on \( \mu \). We shall show the stability of \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) under conditioning with respect to the tail \( \sigma \)-field \( {\mathscr {T}}({\textsf {S}}) \). Recalling the decomposition (3.20) of \( \mu \), we have

$$\begin{aligned}&\mu ({\textsf {A}}) = \int _{{\textsf {S}}} \mu _{\text {Tail}}^{{\textsf {a}}}({\textsf {A}}) \mu (d {\textsf {a}}) . \end{aligned}$$
(12.1)

Lemma 12.1

Assume (A1)–(A3) for \( \mu \). Then \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies (A1)–(A3) for \(\mu \)-a.s. \( {\textsf {a}}\).

Proof

(A1) and (A2) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) are clear from the definitions of the logarithmic derivative and of quasi-Gibbs measures, respectively, combined with Fubini’s theorem. (A3) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) follows from (A3) for \( \mu \) and Fubini’s theorem. Indeed, we have

$$\begin{aligned} \int _{{\textsf {S}}} \Bigg [ \sum _{k=m}^{\infty } \frac{k!}{(k-m)!} \, \mu _{\text {Tail}}^{{\textsf {a}}}({\textsf {S}}_r^k) \Bigg ] \mu (d {\textsf {a}})&= \sum _{k=m}^{\infty } \frac{k!}{(k-m)!} \int _{{\textsf {S}}} \mu _{\text {Tail}}^{{\textsf {a}}}({\textsf {S}}_r^k) \mu (d {\textsf {a}}) \\&= \sum _{k=m}^{\infty } \frac{k!}{(k-m)!} \, \mu ({\textsf {S}}_r^k)< \infty . \end{aligned}$$

Hence we see that \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies (9.6) for \( \mu \)-a.s. \( {\textsf {a}}\), which implies (A3). \(\square \)

Let \( ({\mathscr {E}} ^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}, {\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}})\) be the Dirichlet form given by (9.7)–(9.8) with \( \mu \) replaced by \( \mu _{\text {Tail}}^{{\textsf {a}}}\). By Lemma 9.1 and Lemma 12.1, \( ({\mathscr {E}} ^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}, {\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}})\) is a quasi-regular Dirichlet form on \( L^2({\textsf {S}},\mu _{\text {Tail}}^{{\textsf {a}}})\) for \( \mu \)-a.s. \( {\textsf {a}}\). We easily see that

$$\begin{aligned}&{\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} \supset {\mathscr {D}}^{a ,\mu }\text { for } \mu \text {-a.s.}\, {\textsf {a}}. \end{aligned}$$
(12.2)

Let \( \text {Cap}^{a,\mu } \) and \( \text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} \) be the capacities associated with \( ({\mathscr {E}}^{a ,\mu },{\mathscr {D}}^{a ,\mu }) \) on \( L^2({\textsf {S}},\mu )\) and \( ({\mathscr {E}} ^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}, {\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}})\) on \( L^2({\textsf {S}},\mu _{\text {Tail}}^{{\textsf {a}}})\), respectively. Then by the variational formula for the capacity and (12.2), we easily deduce that for each A

$$\begin{aligned}&\text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} ( A ) \le \text {Cap}^{a,\mu } ( A ) \quad \text { for } \mu \text {-a.s.}\, {\textsf {a}} . \end{aligned}$$

This implies

$$\begin{aligned}&\int _{{\textsf {S}}} \text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} ( A ) \mu (d{\textsf {a}}) \le \text {Cap}^{a,\mu } ( A ) . \end{aligned}$$
(12.3)
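A sketch of how (12.3) follows, assuming the standard variational formula \( \operatorname{Cap}(A) = \inf \{ {\mathscr{E}}(f,f) + \|f\|_{L^2}^2 \} \), the infimum being taken over f in the form domain with \( f \ge 1 \) on A:

```latex
% By (12.2), any test function f admissible for Cap^{a,mu}(A) is also
% admissible for Cap^{a,mu_Tail^a}(A), so pointwise in a:
\operatorname{Cap}^{a,\mu_{\mathrm{Tail}}^{\mathsf{a}}}(A)
  \le {\mathscr{E}}^{a,\mu_{\mathrm{Tail}}^{\mathsf{a}}}(f,f)
      + \| f \|_{L^2(\mu_{\mathrm{Tail}}^{\mathsf{a}})}^{2} .
% Integrating in mu(da) and using the disintegration (12.1),
\int_{\mathsf{S}} \operatorname{Cap}^{a,\mu_{\mathrm{Tail}}^{\mathsf{a}}}(A)\,\mu(d\mathsf{a})
  \le {\mathscr{E}}^{a,\mu}(f,f) + \| f \|_{L^2(\mu)}^{2} .
% Taking the infimum over admissible f yields (12.3).
```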

Lemma 12.2

Assume (A4) for \( \mu \). Then \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies (A4) for \(\mu \)-a.s. \( {\textsf {a}}\).

Proof

We begin by proving (A4) (1) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\). By (10.1) for \( \mu \) and (12.1) we have

$$\begin{aligned}&\int _{{\textsf {S}}} E^{\mu _{\text {Tail}}^{{\textsf {a}}}}\Big [\langle {\mathscr {R}} ( \frac{|\cdot |-r}{T}) , {\textsf {s}}\rangle \Big ] \mu (d{\textsf {a}}) = E^{\mu }\Big [\langle {\mathscr {R}} ( \frac{|\cdot |-r}{T}) , {\textsf {s}}\rangle \Big ] < \infty . \end{aligned}$$

Then \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies (10.1) for \( \mu \)-a.s. \( {\textsf {a}}\), because the finiteness of the integral above implies that the integrand is finite \( \mu \)-a.s. By Fatou’s lemma, (10.2), and (12.1)

$$\begin{aligned}&\int _{{\textsf {S}}} \liminf _{r\rightarrow \infty } {\mathscr {R}} \left( \frac{r}{ T }\right) \, E^{\mu _{\text {Tail}}^{{\textsf {a}}}}[\langle 1_{S_{r+R}}, {\textsf {s}}\rangle ] \mu (d{\textsf {a}}) \nonumber \\&\quad \le \liminf _{r\rightarrow \infty } \int _{{\textsf {S}}} {\mathscr {R}} \left( \frac{r}{ T }\right) \, E^{\mu _{\text {Tail}}^{{\textsf {a}}}}[\langle 1_{S_{r+R}}, {\textsf {s}}\rangle ] \mu (d{\textsf {a}})&\text { by Fatou's lemma} \nonumber \\&\quad = \liminf _{r\rightarrow \infty } {\mathscr {R}} \left( \frac{r}{ T }\right) \, \int _{{\textsf {S}}} E^{\mu _{\text {Tail}}^{{\textsf {a}}}}[\langle 1_{S_{r+R}}, {\textsf {s}}\rangle ] \mu (d{\textsf {a}}) \nonumber \\&\quad = \liminf _{r\rightarrow \infty } {\mathscr {R}} \left( \frac{r}{ T }\right) \, E^{\mu }[\langle 1_{S_{r+R}}, {\textsf {s}}\rangle ] =0&\text { by }(12.1), (10.2). \end{aligned}$$
(12.4)

Hence from (12.4) we see \( \liminf _{r\rightarrow \infty } {\mathscr {R}} \left( \frac{r}{ T }\right) \, E^{\mu _{\text {Tail}}^{{\textsf {a}}}}[\langle 1_{S_{r+R}}, {\textsf {s}}\rangle ] = 0 \) for \( \mu \)-a.s. \( {\textsf {a}}\). This implies (10.2) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\). We have thus obtained (A4) (1) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\).

From (12.3) and (A4) (2) for \( \mu \), we deduce

$$\begin{aligned}&\int _{{\textsf {S}}} \text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} (\{ {\textsf {s}} ; {\textsf {s}} (\partial S) \ge 1 \}) \mu (d{\textsf {a}}) \le \text {Cap}^{a,\mu }(\{ {\textsf {s}} ; {\textsf {s}} (\partial S) \ge 1 \}) = 0 \\&\int _{{\textsf {S}}} \text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} ({\textsf {S}}_{\text {s}}^c) \mu (d{\textsf {a}}) \le \text {Cap}^{a ,\mu } ({\textsf {S}}_{\text {s}}^c) = 0. \end{aligned}$$

Hence we obtain (A4) (2) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) for \( \mu \)-a.s. \( {\textsf {a}}\). This completes the proof. \(\square \)

We can regard \( ({\mathscr {E}} ^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}, {\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}) \) as a Dirichlet form on \( L^2( {\textsf {H}}_{{\textsf {a}}},\mu _{\text {Tail}}^{{\textsf {a}}}) \) from (3.22)–(3.24). Let \( {\textsf {P}}_{{\textsf {s}}}^{{\textsf {a}}}\) be the distribution of the unlabeled diffusion starting at \( {\textsf {s}}\) associated with the Dirichlet form \( ({\mathscr {E}} ^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}, {\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}) \) on \( L^2( {\textsf {H}}_{{\textsf {a}}},\mu _{\text {Tail}}^{{\textsf {a}}}) \) given by Lemma 9.1. Let

$$\begin{aligned}&{\textsf {P}}^{{\textsf {a}}}(\cdot ) = \int _{{\textsf {H}}_{{\textsf {a}}}} {\textsf {P}}_{{\textsf {s}}}^{{\textsf {a}}}(\cdot ) \mu _{\text {Tail}}^{{\textsf {a}}}(d{\textsf {s}}) . \end{aligned}$$
(12.5)

Then \( {\textsf {P}}^{{\textsf {a}}}\) is a \( \mu _{\text {Tail}}^{{\textsf {a}}}\)-stationary diffusion on \( {\textsf {H}}_{{\textsf {a}}}\).

Lemma 12.3

Assume (A1)–(A4) for \( \mu \). Then the diffusion \( {\textsf {P}}^{{\textsf {a}}}\) satisfies \(\mathbf {(SIN)}\) and \(\mathbf {(NBJ)}\) for \(\mu \)-a.s. \( {\textsf {a}}\). Furthermore, for \(\mu \)-a.s. \( {\textsf {a}}\)

$$\begin{aligned}&\text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}} ({\textsf {S}}_{\text {sde}}^c) = 0 , \quad {\textsf {P}}^{{\textsf {a}}}\circ {\mathfrak {l}} _{\text {path}}^{-1}(W (\mathbf{S }_{\text {sde}})) = 1 . \end{aligned}$$
(12.6)

Proof

By Lemma 12.1 and Lemma 12.2, \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies (A1)–(A4) for \(\mu \)-a.s. \( {\textsf {a}}\); these are the assumptions of Lemma 10.2 and Lemma 10.3. Then \( \mu _{\text {Tail}}^{{\textsf {a}}}\) and \( {\textsf {P}}^{{\textsf {a}}}\) satisfy \(\mathbf {(SIN)}\), (12.6), and \(\mathbf {(NBJ)}\) for \(\mu \)-a.s. \( {\textsf {a}}\) by Lemma 10.2 and Lemma 10.3. \(\square \)

Lemma 12.4

Assume that \( \mu \) and \( \{ {\textsf {P}}_{{\textsf {s}}}\} \) satisfy (A1)–(A4) and (B1)–(B2). Assume that the coefficient \( \sigma ^m \) is constant. Then \( \mathbf{X }={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}^{{\textsf {a}}}\) satisfies \(\mathbf {(IFC)}\) for \(\mu \)-a.s. \( {\textsf {a}}\).

Proof

We use Proposition 11.1 to prove Lemma 12.4. Our task is to check that \( \mu _{\text {Tail}}^{{\textsf {a}}}\) and \( \{{\textsf {P}}^{{\textsf {a}}}_{{\textsf {s}}} \} \) fulfill the assumptions of Proposition 11.1, namely, (A1)–(A4) and (B1)–(B2).

We see that \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies (A1)–(A4) for \(\mu \)-a.s. \( {\textsf {a}}\) by Lemma 12.1 and Lemma 12.2. From (B1) for \( \mu \) and the tail decomposition (3.20) of \( \mu \) combined with Fubini’s theorem, we obtain (B1) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) for \(\mu \)-a.s. \( {\textsf {a}}\).

We next proceed with (B2) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) and \( {\textsf {P}}^{{\textsf {a}}}\). Let \( {\textsf {I}}_{{\textsf {m}}}\) be the increasing sequence of closed sets in (B2) for \( \mu \) and \( \{ {\textsf {P}}_{{\textsf {s}}}\} \). Then (11.4) is satisfied. Indeed, the condition (11.4) is independent of the measure, so (11.4) for \( \mu \) implies (11.4) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\).

Let \( \mu _{\text {Tail}}^{{\textsf {a}},[m] }\) be the m-Campbell measure of \( \mu _{\text {Tail}}^{{\textsf {a}}}\). From the analogue of (12.3) for the m-labeled Dirichlet form, we deduce that for all \( {\textsf {n}}\in {\textsf {N}}_3 \) and \( {\textsf {m}}\in \mathbb {N}\)

$$\begin{aligned}&\int \text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}},[m] } } \Big ( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\backslash \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \Big ) \mu (d{\textsf {a}}) \le \text {Cap}^{a,\mu ^{[m]}} \Big ( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\backslash \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \Big ). \end{aligned}$$
(12.7)

By (11.5) for all \( {\textsf {n}}\in {\textsf {N}}_3 \)

$$\begin{aligned}&\lim _{{\textsf {m}}\rightarrow \infty } \text {Cap}^{a,\mu ^{[m]}} \Big ( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\backslash \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \Big ) = 0. \end{aligned}$$
(12.8)

From (12.7), (12.8), and Fatou’s lemma we obtain for \( \mu \)-a.s. \( {\textsf {a}}\) and for all \( {\textsf {n}}\in {\textsf {N}}_3 \)

$$\begin{aligned} \lim _{{\textsf {m}}\rightarrow \infty } \text {Cap}^{a,\mu _{\text {Tail}}^{{\textsf {a}},[m] } } \Big ( {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\backslash \langle {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}^{\circ }\cap {\textsf {I}}_{{\textsf {m}}}\rangle \Big ) = 0 . \end{aligned}$$

We hence obtain (11.5) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\). Thus we have verified (B2) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) for \( \mu \)-a.s. \( {\textsf {a}}\).

We thus see that all the assumptions of Proposition 11.1 are fulfilled for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) and \( {\textsf {P}}^{{\textsf {a}}}\), which implies that \( {\textsf {P}}^{{\textsf {a}}}\) satisfies \(\mathbf {(IFC)}\) for \(\mu \)-a.s. \( {\textsf {a}}\). \(\square \)

The next theorem claims that quenched results follow from annealed assumptions.

Theorem 12.1

Assume that \( \mu \) and \( \{ {\textsf {P}}_{{\textsf {s}}}\} \) satisfy (A1)–(A4) and (B1)–(B2). Let \( {\textsf {P}}^{{\textsf {a}}}\) be as in (12.5). Assume that the coefficient \( \sigma ^m \) is constant. Then, for \( \mu \)-a.s. \( {\textsf {a}}\), (3.3)–(3.4) has a unique strong solution starting at \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}})\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\)-a.s. \( {\textsf {s}}\) under the constraints of \((\mathbf {MF})\), \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\).

Proof

We use Theorem 3.1 for the proof. We take \( (\mathbf{X },\mathbf{B })\) and P in Theorem 3.1 as \( \mathbf{X }:={\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) and \( P :={\textsf {P}}^{{\textsf {a}}}\) for \( \mu \)-a.s. \( {\textsf {a}}\). We check that this choice satisfies all the assumptions of Theorem 3.1, namely, that \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}^{{\textsf {a}}}\) is a weak solution satisfying \(\mathbf {(IFC)}\), \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\), \(\mathbf {(SIN)}\), and \(\mathbf {(NBJ)}\), and that \( \mu _{\text {Tail}}^{{\textsf {a}}}\) satisfies \(\mathbf {(TT)}\) for \( \mu \)-a.s. \( {\textsf {a}}\).

Below we omit “for \( \mu \)-a.s. \( {\textsf {a}}\)”. From Lemma 9.2, Lemma 12.1, and Lemma 12.3, we see that \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}^{{\textsf {a}}}\) is a weak solution of (3.3)–(3.4).

\(\mathbf {(TT)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) follows from (3.21). Obviously, \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}^{{\textsf {a}}}\) satisfies \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\) because \( {\textsf {P}}^{{\textsf {a}}}\) is \( \mu _{\text {Tail}}^{{\textsf {a}}}\)-reversible. \(\mathbf {(SIN)}\), \(\mathbf {(NBJ)}\), and \(\mathbf {(IFC)}\) for \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}^{{\textsf {a}}}\) follow from Lemma 12.3 and Lemma 12.4.

Thus, all the assumptions of Theorem 3.1 are fulfilled. \(\square \)

Theorem 12.2

Under the same assumptions as in Theorem 12.1, with (B2) replaced by (C1)–(C2), the same conclusions as in Theorem 12.1 hold.

Proof

From Proposition 11.2, (B2) holds. Then from Theorem 12.1 we deduce the claim. \(\square \)

Corollary 12.1

Make the same assumptions as in Theorem 12.1 or Theorem 12.2. Then \( {\mathfrak {l}} _{\text {path}}({\textsf {X}}) \) under \( {\textsf {P}}_{{\textsf {s}}}^{{\textsf {a}}}\) is a unique strong solution of (3.3)–(3.4) starting at \( \mathbf{s }={\mathfrak {l}} ({\textsf {s}})\) under the same constraints as in Theorem 12.1.

Remark 12.1

  1.

    We have two diffusions \( \{ ({\textsf {X}}, {\textsf {P}}_{{\textsf {s}}})\}_{{\textsf {s}} \in {\textsf {S}} } \) and \( \{\{ ({\textsf {X}}, {\textsf {P}}_{{\textsf {s}}}^{\AA })\}_{{\textsf {s}} \in {\textsf {H}}_{{\textsf {a}}}} \}_{\AA \in {\textsf {H}}}\). The former is deduced from (A2) and (A3) for \( \mu \), and the associated Dirichlet form is given by \( ({\mathscr {E}} ^{\mu } , {\mathscr {D}}^{\mu } ) \) on \( L^2( {\textsf {S}},\mu ) \). The latter is deduced from (A2) and (A3) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\), and is a collection of diffusions whose Dirichlet forms are \( ({\mathscr {E}} ^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}, {\mathscr {D}}^{a,\mu _{\text {Tail}}^{{\textsf {a}}}}) \) on \( L^2( {\textsf {H}}_{{\textsf {a}}},\mu _{\text {Tail}}^{{\textsf {a}}}) \). These two diffusions are the same (up to quasi-everywhere starting points) when \( \mu \) is tail trivial. We emphasize that Theorem 12.1 does not follow from Theorem 3.2 because we do not know how to prove \( \{ ({\textsf {X}}, {\textsf {P}}_{{\textsf {s}}})\}_{{\textsf {s}} \in {\textsf {S}} } \) satisfies \( {\mathbf {(AC)}}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\).

  2.

    In Theorem 12.1, we have constructed a tail preserving solution. Its uniqueness, however, holds only under the constraint of \(\mathbf {(AC)}\) for \( \mu _{\text {Tail}}^{{\textsf {a}}}\). Hence it does not exclude the possibility that there exists a family of solutions \( \mathbf{X }\) not satisfying this condition and, in particular, a family of solutions whose distributions on the tail \( \sigma \)-field \( {\mathscr {T}}({\textsf {S}}) \) change as t varies. We refer to [15] for a tail preserving property of the Dyson model.

Examples

This section is devoted to examples and applications of Theorems 11.1, 12.1, and 12.2. Throughout this section, \( b (x,{\textsf {y}}) = \frac{1}{2} {\textsf {d}}^{\mu }(x,{\textsf {y}})\), where \( {\textsf {d}}^{\mu }\) is the logarithmic derivative of the random point field \( \mu \) associated with the ISDE.

We present a simple sufficient condition for (C2), introduced before Proposition 11.2. We write \( \mathbf{x }=(x_1,\ldots ,x_m) \in {\mathbb {R}}^{dm}\) and \( {\textsf {s}}=\sum _i\delta _{s_i}\) as before. Let \( \mathbf{J }^{[\ell ]} \) be as in (11.15) with \( l = \ell \). Recall that we suppress m from the notation.

Lemma 13.1

Assume that for each \( {\textsf {f}}\in \{ {\widetilde{\sigma }}^m , \, {\widetilde{b}}^m \}\), \( \mathbf{j } \in \mathbf{J }^{[\ell ]}\), and \( k \in \{ 1,\ldots ,m \} \), there exists \( {\textsf {g}}_{\mathbf{j },k} \) such that \( {\textsf {g}}_{\mathbf{j },k} \in C (S^2 \backslash \{ x = s \}) \) and that

$$\begin{aligned}&\partial _{\mathbf{j }} {\textsf {f}}(\mathbf{x },{\textsf {s}}) = \sum _{k=1}^m \left\{ \sum _{p\not = k}^m {\textsf {g}}_{\mathbf{j },k} (x_k , x_p ) + \sum _{i} {\textsf {g}}_{\mathbf{j },k} (x_k , s_i ) \right\} \ \text { for } (\mathbf{x },{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]. \end{aligned}$$

Assume (B1). Assume that there exist positive constants \( c_{18} \) and \(c_{19}\) such that

$$\begin{aligned}&\limsup _{r\rightarrow \infty } \frac{a_{{\textsf {q}}}(r)}{r^{c_{18}}}< \infty ,\quad \sum _{r=1}^{\infty } \frac{a_{{\textsf {q}}}(r)}{r^{ c_{18}+1 }} < \infty , \end{aligned}$$
(13.1)
$$\begin{aligned}&| {\textsf {g}}_{\mathbf{j },k} (x ,s ) | \le \frac{c_{19} }{ (1+|s|)^{ c_{18} }} \quad \text { for all } (x ,s) \in H_{{\textsf {p}},{\textsf {r}}} . \end{aligned}$$
(13.2)

Here \( H_{{\textsf {p}},{\textsf {r}}} = \{ (x ,s)\in S^2 ;\, 2^{-{\textsf {p}}} \le |x -s| ,\, x \in S_{{\textsf {r}}}\} \) for \( {\textsf {p}},{\textsf {r}}\in \mathbb {N}\). We then obtain (C2).

Proof

From (13.2), we deduce that for \( (\mathbf{x },{\textsf {s}}) \in {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}\)

$$\begin{aligned} \sum _{ i } | {\textsf {g}}_{\mathbf{j },k} (x_k,s_i )|&\le c_{19} \Bigg \{ \Bigg ( \sum _{r=1}^{\infty } \sum _{ s_i\in S_{r}\backslash S_{r-1}} \frac{1 }{ (1+|s_i|)^{ c_{18} }}\Bigg ) + {\textsf {s}}(S_0 ) \Bigg \} \\&\le c_{19} \Bigg \{ \Bigg ( \sum _{r=1}^{\infty } \sum _{ s_i\in S_{r}\backslash S_{r-1}} \frac{1}{ (1+r-1)^{ c_{18} }}\Bigg ) + {\textsf {s}}(S_0 )\Bigg \} \\&= c_{19} \limsup _{R\rightarrow \infty } \Bigg \{ \frac{{\textsf {s}} (S_R ) }{R^{c_{18}}} + \sum _{r=2}^{R} {\textsf {s}}(S_{r-1}) \Bigg ( \frac{1}{(r-1)^{c_{18}}}- \frac{ 1 }{ r^{ c_{18}}} \Bigg ) + {\textsf {s}}(S_0 ) \Bigg \}. \end{aligned}$$

The last line is finite by (11.1) and (13.1). This yields (C2). \(\square \)
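The last equality above is a summation by parts in r. A minimal numerical sketch of that identity, with a hypothetical counting function \( N_r \) standing in for \( {\textsf {s}}(S_r) \) (our choice of \( N_r \) is illustrative only):

```python
# Summation by parts in r, as in the last display of the proof:
#   sum_{r=1}^R (N_r - N_{r-1}) a_r
#     = N_R a_R + sum_{r=2}^R N_{r-1} (a_{r-1} - a_r) - N_0 a_1 ,
# with N_r standing in for s(S_r) and a_r = 1 / r^{c_18}.
c18 = 1.5                                     # any exponent with c18 > 1 works here
R = 200
N = [0] + [2 * r for r in range(1, R + 1)]    # hypothetical shell counts N_r ~ 2r
a = [0.0] + [1.0 / r**c18 for r in range(1, R + 1)]

lhs = sum((N[r] - N[r - 1]) * a[r] for r in range(1, R + 1))
rhs = N[R] * a[R] + sum(N[r - 1] * (a[r - 1] - a[r]) for r in range(2, R + 1)) - N[0] * a[1]

assert abs(lhs - rhs) < 1e-9   # the rearrangement is an exact identity
assert lhs < 6.0               # and the sum stays bounded, matching (13.1)
```

With \( N_r \sim 2r \) (linear growth of shell counts, as for \( a_{{\textsf {q}}}(r) = {\textsf {q}}r \) below) the boundary term \( N_R/R^{c_{18}} \) stays bounded and the telescoped series converges, which is exactly the role of (13.1).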

In the rest of this section, \( \sigma \) is the unit matrix.

sine\(_{\beta }\) random point fields/Dyson model in infinite dimensions

Let \( d = 1 \) and \( S= {\mathbb {R}}\). Recall ISDE (1.2) and take \( \beta = 1,2,4\).

$$\begin{aligned} dX_t^i = dB_t^i + \frac{\beta }{2} \lim _{r\rightarrow \infty }\sum _{|X_t^i - X_t^j |< r, \ j\not = i} \frac{1}{X_t^i - X_t^j } dt \quad (i\in {\mathbb {Z}}) . \end{aligned}$$
(13.3)
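The drift in (13.3) is defined only through the symmetric limit. As an illustration not taken from the text, for the hypothetical lattice configuration \( X^j = j \), \( j \in {\mathbb {Z}} \), the limit is the classical partial-fraction expansion of the cotangent, which a short numerical check confirms:

```python
import math

# Symmetric limit of the drift in (13.3) for the hypothetical configuration
# X^j = j (j in Z), evaluated at a point x not in Z.  Pairing the terms j and -j,
#   lim_{N->oo} sum_{|j| <= N} 1/(x - j) = 1/x + sum_{j>=1} 2x/(x^2 - j^2)
#                                        = pi * cot(pi * x).
x = 0.3
N = 100_000
drift = 1.0 / x + sum(2.0 * x / (x * x - j * j) for j in range(1, N + 1))

assert abs(drift - math.pi / math.tan(math.pi * x)) < 1e-4
```

The pairing of \( j \) and \( -j \) is what makes the truncated sums converge; the unordered series \( \sum _j 1/(x-j) \) diverges absolutely.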

Let \( \mu _{\text {sin}_{\beta }}\) be the sine\(_{\beta }\) random point field [4, 21] with \( \beta = 1,2,4\). \( \mu _{\text {sin}_{2 }} \) is the random point field on \( {\mathbb {R}}\) whose n-point correlation function \( \rho _{\text {sin}_{2 }}^{n} \) is given by

$$\begin{aligned}&\rho _{\text {sin}_{2 }}^{n} (\mathbf{x } ) = \det [{\textsf {K}}_{\text {sin}_{2 }} (x_i,x_j)]_{i,j=1}^{n}. \end{aligned}$$
(13.4)

Here \( {\textsf {K}}_{\text {sin}_{2 }} (x,y) = \sin \pi (x-y)/\pi (x-y)\) is the sine kernel. \( \mu _{\text {sin}_{1 }}\) and \( \mu _{\text {sin}_{4 }}\) are also defined by correlation functions given by quaternion determinants [21]. \( \mu _{\text {sin}_{\beta }}\) are solutions of the geometric differential equations (9.5) with \( a (x,{\textsf {y}}) = 1 \) and

$$\begin{aligned}&b (x,{\textsf {y}}) = \frac{\beta }{2} \lim _{r\rightarrow \infty } \sum _{|x-y_i|<r} \frac{1}{x-y_i} \quad \text { in }L_{\text {loc}}^1 \left( \mu _{\text {sin}_{\beta }}^{[1]}\right) . \end{aligned}$$
(13.5)

Here “in \( L_{\text {loc}}^1 (\mu _{\text {sin}_{\beta }}^{[1]})\)” means the convergence in \( L^1(S_{r}\times {\textsf {S}},\, \mu _{\text {sin}_{\beta }}^{[1]} )\) for all \( r \in \mathbb {N}\). Unlike the Ginibre random point field, (13.5) is equivalent to

$$\begin{aligned}&b (x,{\textsf {y}}) = \frac{\beta }{2} \lim _{r\rightarrow \infty } \sum _{|y_i|<r} \frac{1}{x-y_i} \quad \text { in }L_{\text {loc}}^1 \left( \mu _{\text {sin}_{\beta }}^{[1]}\right) . \end{aligned}$$

We obtain the following.

Theorem 13.1

  1.

    The conclusions of Theorem 11.1 hold for \( \mu _{\text {sin}_{2 }} \).

  2.

    Let \( \beta = 1,4\). Let \( \mu _{\text {sin}_{\beta },\, \text {Tail}}^{\AA }\) be defined as in (3.19) for \( \mu _{\text {sin}_{\beta }}\). Then, for \( \mu _{\text {sin}_{\beta }}\)-a.s. Å, the conclusions of Theorem 12.1 hold for \( \mu _{\text {sin}_{\beta },\, \text {Tail}}^{\AA }\).

Remark 13.1

When \( \beta = 2 \), the solution of ISDE (13.3) is called the Dyson model in infinite dimensions [38]. The random point fields \( \mu _{\text {sin}_{\beta }}\) are constructed for all \( \beta >0 \) [41]. It is plausible that our method can be applied to this case. We also remark that Tsai [40] solved ISDE (13.3) for all \( \beta \ge 1 \) employing a different method.

To prove Theorem 13.1 we shall check the assumptions of Theorem 11.1 \( (\beta =2)\) and Theorem 12.1 \( (\beta =1,4)\) for \( \mu _{\text {sin}_{\beta }} \). Let \( \chi _{{\textsf {n}}}\) be as in Lemma 11.4.

Lemma 13.2

Let \( \beta =1,2,4\). For each \( {\textsf {n}}\in {\textsf {N}}_3 \) the following hold.

  1.

    The logarithmic derivative \( {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \) of \( \mu _{\text {sin}_{\beta }} \) exists in \( L^2\left( \chi _{{\textsf {n}}}\mu ^{[1]}_{\text {sin}_{\beta }}\right) \).

  2.

    The logarithmic derivative \( {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \) admits the following two expressions:

    $$\begin{aligned}&{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} (x, {\textsf {s}}) = \beta \lim _{r\rightarrow \infty } \sum _{|x-s_i| < r } \frac{1}{x-s_i} \quad \text { in } L^2\left( \chi _{{\textsf {n}}}\mu ^{[1]}_{\text {sin}_{\beta }}\right) , \end{aligned}$$
    (13.6)
    $$\begin{aligned}&{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} (x, {\textsf {s}}) = \beta \lim _{r\rightarrow \infty } \sum _{|s_i| < r } \frac{1}{x-s_i} \quad \text { in } L^2\left( \chi _{{\textsf {n}}}\mu ^{[1]}_{\text {sin}_{\beta }}\right) . \end{aligned}$$
    (13.7)

Proof

(1) and (13.6) follow from [26, Theorem 82]. Set \( S_{r}^x = \{ s;|x-s| < r \} \) and let \( S_{r}\ominus S_{r}^x \) be the symmetric difference of \( S_{r}\) and \( S_{r}^x \). Then we have

$$\begin{aligned}&\lim _{r\rightarrow \infty } \sum _{s_i\in S_{r}\ominus S_{r}^x } \frac{1}{x-s_i} = 0 \quad \text { in } L_{\text {loc}}^{2}\left( \mu ^{[1]}_{\text {sin}_{\beta }}\right) \end{aligned}$$
(13.8)

because \( d = 1\) and one- and two-point correlation functions of \( \mu _{\text {sin}_{\beta }} \) are bounded. Hence, (13.7) follows from (13.6) and (13.8). \(\square \)
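A numerical illustration of (13.8) for a hypothetical lattice-like configuration \( s_i = i + 1/2 \) (our choice): the windows \( S_{r} \) and \( S_{r}^x \) differ only near their edges, where each term \( 1/(x-s_i) \) is \( O(1/r) \), so the discrepancy between the two truncated drift sums vanishes as \( r \rightarrow \infty \).

```python
# (13.8) for the hypothetical configuration s_i = i + 0.5 (i in Z): compare the
# truncation over S_r^x = (x - r, x + r) with the truncation over S_r = (-r, r).
def truncated_sum(x, r, center_at_x):
    total = 0.0
    for i in range(-r - 5, r + 5):            # covers both windows for |x| < 1
        s = i + 0.5
        inside = abs(x - s) < r if center_at_x else abs(s) < r
        if inside:
            total += 1.0 / (x - s)
    return total

x = 0.7
gap_small = abs(truncated_sum(x, 10, True) - truncated_sum(x, 10, False))
gap_large = abs(truncated_sum(x, 1000, True) - truncated_sum(x, 1000, False))

assert gap_large < gap_small                  # discrepancy shrinks as r grows
assert gap_large < 0.01                       # roughly O(1/r) here
```

Only finitely many points (order \( |x| \) of them) lie in the symmetric difference, each contributing a term of size \( O(1/r) \), which is the mechanism behind (13.8).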

Take \( \ell = 1 \) in Theorem 11.1 and note that \( \sigma (x,{\textsf {s}})= 1 \) and \( b(x,{\textsf {s}}) = \frac{1}{2}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} (x,{\textsf {s}})\). Let \( {\mathscr {D}}_{\text {sin}_{\beta }}^{[1]} \) be the domain of the Dirichlet form associated with the 1-labeled process.

Lemma 13.3

Let \( \beta =1,2,4\). Then \( \chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} ,\, \chi _{{\textsf {n}}}\nabla _x {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \in {\mathscr {D}}_{\text {sin}_{\beta }}^{[1]} \) for all \( {\textsf {n}}\in {\textsf {N}}_3 \). In particular, (C1) holds for \( \ell = 1 \).

Proof

We only prove \( \chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \in {\mathscr {D}}_{\text {sin}_{\beta }}^{[1]} \) because \( \chi _{{\textsf {n}}}\nabla _x {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \in {\mathscr {D}}_{\text {sin}_{\beta }}^{[1]} \) can be proved similarly. We set \( {\mathbb {D}}[f] = {\mathbb {D}}[f,f]\) for simplicity. By definition,

$$\begin{aligned}&{\mathscr {E}} ^{\mu ^{[1]}_{\text {sin}_{\beta }}} \left( \chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} ,\chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \right) = \int _{S\times {\textsf {S}}} \frac{1}{2}|\nabla _x \chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 + \, {\mathbb {D}}[\chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} ] d \mu ^{[1]}_{\text {sin}_{\beta }}.\end{aligned}$$
(13.9)

From Lemmas 11.4 and 13.2 (1), we deduce that

$$\begin{aligned} \int _{S\times {\textsf {S}}} |\nabla _x \chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 d \mu ^{[1]}_{\text {sin}_{\beta }}\le&2\int _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}} \{ \chi _{{\textsf {n}}}^2 |\nabla _x {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 + |\nabla _x \chi _{{\textsf {n}}}|^2 |{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 \} d \mu ^{[1]}_{\text {sin}_{\beta }}\\ <&\infty . \end{aligned}$$

From the Schwarz inequality and Lemma 11.4, we deduce that

$$\begin{aligned} \int _{S\times {\textsf {S}}} \,&{\mathbb {D}}[\chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} ] d \mu ^{[1]}_{\text {sin}_{\beta }}\le \, 2 \int _{S\times {\textsf {S}}} \, \chi _{{\textsf {n}}}^2 {\mathbb {D}}[ {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} ] + {\mathbb {D}}[\chi _{{\textsf {n}}}] \, |{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 \, d \mu ^{[1]}_{\text {sin}_{\beta }}\\ \le \,&2 \int _{{\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}} \, {\mathbb {D}}[ {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} ] + c_{17} |{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 \, d \mu ^{[1]}_{\text {sin}_{\beta }}\\ \le \,&2 \int _{ {\textsf {H}}[\mathbf{a }]_{{\textsf {n}}+1}} \frac{\beta ^2}{2}\bigg ( \sum _{i} \frac{1}{|x-s_i |^4} \bigg ) + c_{17} |{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} |^2 \, d \mu ^{[1]}_{\text {sin}_{\beta }}< \, \infty \ . \end{aligned}$$

Here the last line follows from a direct calculation and Lemma 13.2. Putting these inequalities into (13.9), we obtain \( \chi _{{\textsf {n}}}{\textsf {d}}^{\mu _{\text {sin}_{\beta }}} \in {\mathscr {D}}_{\text {sin}_{\beta }}^{[1]} \) for all \( {\textsf {n}}\in {\textsf {N}}_3 \). \(\square \)

Lemma 13.4

\( \mu _{\text {sin}_{\beta }} \) satisfies (A1)–(A4) for \( \beta =1,2,4\).

Proof

(9.1) in (A1) follows from [26, Theorem 82]. In [27, Theorem 2.2], it was proved that \( \mu _{\text {sin}_{\beta }} \) is a \( (0, -\beta \log |x-y|)\)-quasi-Gibbs measure for \( \beta = 1,2,4\). This yields (A2). (A3) is immediate from (13.4) for \( \beta =2\), and from a similar determinantal expression of correlation functions in [21] for \( \beta =1,4 \). We next verify (A4). (10.3) holds obviously. (10.4) is satisfied by Lemma 10.1. Hence we have (A4) (2). Because \( \mu _{\text {sin}_{\beta }} \) is translation invariant, (A4) (1) holds. We thus see that (A4) holds. \(\square \)

Proof of Theorem 13.1

We check the assumptions of Corollary 12.1. (A1)–(A4) follow from Lemma 13.4. Let \( \mathbf{a } = \{ a_{{\textsf {q}}}\} \) be as in (11.1). Take \( a_{{\textsf {q}}}(r) = {\textsf {q}}r \). Then (B1) holds because \( \mu _{\text {sin}_{\beta }} \) is translation invariant. (C1) follows from Lemma 13.2 and Lemma 13.3. From the Lebesgue convergence theorem, we obtain

$$\begin{aligned}&\nabla _x b(x,{\textsf {s}}) = \frac{1}{2}\nabla _x {\textsf {d}}^{\mu _{\text {sin}_{\beta }}} (x,{\textsf {s}}) = - \frac{\beta }{2} \sum _{i} \frac{1}{(x-s_i)^2} \in L^{\infty } ({\textsf {H}}[\mathbf{a }]_{{\textsf {n}}}, \mu ^{[1]}_{\text {sin}_{\beta }}). \end{aligned}$$

Hence we can apply Lemma 13.1 to obtain (C2).

Assume \( \beta = 2\). Then \( \mu _{\text {sin}_{2 }} \) satisfies \(\mathbf {(TT)}\) because \( \mu _{\text {sin}_{2 }} \) is a determinantal random point field. Hence applying Theorem 11.2 we obtain (1). Assume \( \beta = 1,4\). Then applying Theorem 12.2 we obtain (2). \(\square \)

Ruelle’s class potentials

Let \( S= {\mathbb {R}}^d\) and \( \varPhi = 0\). Let \( \varPsi \) be translation invariant, that is, \( \varPsi (x,y) = \varPsi (x-y,0)\) for all \( x,y \in {\mathbb {R}}^d\). We set \( \varPsi (x) = \varPsi (x,0)\). Then (1.1) becomes

$$\begin{aligned}&dX^i_t = dB^i_t - \frac{\beta }{2} \sum ^{\infty }_{j=1,j\ne i} \nabla \varPsi (X_t^i - X_t^j ) dt \quad (i\in \mathbb {N}) . \end{aligned}$$

Assume that \( \varPsi \) is a potential of Ruelle's class that is smooth outside the origin; that is, \( \varPsi \) is super-stable and regular in the sense of Ruelle [35]. Here we say \( \varPsi \) is regular if there exist a non-negative decreasing function \( \psi \!:\![0,\infty )\!\rightarrow \![0,\infty )\) and a constant \( R_0 \) such that

$$\begin{aligned}&\varPsi (x) \ge - \psi (|x|) \quad \text { for all } x , \quad \varPsi (x) \le \psi (|x|) \quad \text { for all } |x| \ge R_0 , \\&\int _0^{\infty } \psi (t)\, t^{d-1}dt < \infty . \end{aligned}$$
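For instance (a hypothetical profile of our choosing, not one used in the text), \( \psi (t) = (1+t)^{-4} \) is an admissible regular bound in \( d = 2 \), since \( \int _0^{\infty } t\,(1+t)^{-4}\,dt = 1/6 < \infty \). A quick numerical check:

```python
# Regularity integral for the hypothetical decreasing profile
# psi(t) = (1 + t)^(-4) in dimension d = 2:
#   int_0^infty psi(t) t^(d-1) dt = 1/6  (exactly, by the substitution u = 1 + t).
d = 2
psi = lambda t: (1.0 + t) ** -4

T, n = 200.0, 200_000          # midpoint rule on [0, T]; tail beyond T ~ 1/(2 T^2)
h = T / n
integral = h * sum(psi((k + 0.5) * h) * ((k + 0.5) * h) ** (d - 1) for k in range(n))

assert abs(integral - 1.0 / 6.0) < 1e-4
```

Any polynomial decay faster than \( |x|^{-d} \) passes this test, which is why Ruelle's class accommodates the long-range potentials excluded by the earlier \( C_0^3 \) or exponential-decay assumptions.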

Let \( \mu _{\varPsi }\) be a canonical Gibbs measure with interaction \( \varPsi \). We do not a priori assume the translation invariance of \( \mu _{\varPsi }\); instead, we assume the quantitative condition (13.10) below, which is obviously satisfied by translation invariant canonical Gibbs measures. Let \( \rho ^m \) be the m-point correlation function of \( \mu _{\varPsi }\).

Theorem 13.2

Let \( S= {\mathbb {R}}^d\) and \( \beta > 0\). Let \( \varPsi \) be an interaction potential of Ruelle's class that is smooth outside the origin. Assume that, for each \( {\textsf {p}}\in \mathbb {N}\), there exist positive constants \( c_{20} \) and \(c_{21}\) satisfying

$$\begin{aligned}&\sum _{r=1}^{\infty } \frac{\int _{S_{r}}\rho ^1 (x)dx }{r^{ c_{20} +1 }}< \infty , \quad \limsup _{r\rightarrow \infty } \frac{\int _{S_{r}}\rho ^1 (x)dx }{r^{ c_{20} }} < \infty ,\nonumber \\&| \nabla \varPsi ( x )|,\ | \nabla ^2 \varPsi ( x )| \le \frac{c_{21} }{ (1+|x|)^{ c_{20} }} \quad \text { for all } x \text { such that } |x|\ge 1/{\textsf {p}}. \end{aligned}$$
(13.10)

Assume either \( d \ge 2 \), or \( d=1\) and that \( \mu _{\varPsi }\) admits a positive constant \(c_{22}\) and a positive function \( h\!:\![0,\infty )\!\rightarrow \![0,\infty ]\), depending on \( m , R \in \mathbb {N}\), such that

$$\begin{aligned}&\int _{0\le t \le c_{22}} \frac{1}{h(t)} dt = \infty ,\nonumber \\&\rho ^m(x_1,\ldots ,x_m) \le h (|x_i-x_j|) \text { for all } x_i\not =x_j \in S_R. \end{aligned}$$
(13.11)

Then the conclusions of Theorem 12.1 hold for \( \mu _{\varPsi }\).

We begin with the calculation of the logarithmic derivative.

Lemma 13.5

Assumption (A1) holds and the logarithmic derivative \( {\textsf {d}}^{\mu _{\varPsi }}\) is given by

$$\begin{aligned}&\quad \quad {\textsf {d}}^{\mu _{\varPsi }}(x,{\textsf {y}}) = - \beta \sum _i \nabla \varPsi (x-y_i) \quad ({\textsf {y}}=\sum _{i} \delta _{y_i}) . \end{aligned}$$
(13.12)

Proof

This lemma is clear from the DLR equation. For the sake of completeness we give a proof. We suppose \( \mu _{\varPsi }({\textsf {s}}(S)=\infty ) = 1 \).

Let \( {\textsf {S}}_r^{[1],m} = S_{r}\times {\textsf {S}}_{r}^m \) for \( m \in \{ 0 \} \cup \mathbb {N}\), where \( {\textsf {S}}_{r}^m = \{ {\textsf {s}}\in {\textsf {S}}; {\textsf {s}}(S_{r}) = m \} \). Let \( \mu _{r,\eta }^{[1]} \) be a conditional measure of the 1-Campbell measure \( \mu _{\varPsi }^{[1]}\) conditioned at \( {\pi _r^{c}}({\textsf {y}}) = {\pi _r^{c}}(\eta )\) for \( (x,{\textsf {y}}) \in S\times {\textsf {S}}\). We normalize \( \mu _{\varPsi }^{[1]}( \cdot \cap { {\textsf {S}}_r^{[1],m} }) \) and we denote by \( \sigma _{r,\eta }^{[1],m} \) the density function of \( \mu _{r,\eta }^{[1]} \) on \( {\textsf {S}}_r^{[1],m} \). Then, by the DLR equation and the definitions of reduced Palm and Campbell measures we obtain

$$\begin{aligned}&\sigma _{r,\eta }^{[1],m} (x,\mathbf{y }) = \frac{1}{{\mathscr {Z}}_{r,\eta }^{m} } e^{- \beta \big \{\sum _{i=1}^m \varPsi (x-y_i) + \sum _{i<j}^m \varPsi (y_i-y_j) + \sum _{\eta _k \in S_{r}^c} \{ \varPsi (x-\eta _k ) + \sum _{i=1}^m \varPsi (y_i-\eta _k) \} \big \} } , \end{aligned}$$

where \( \mathbf{y }=(y_1,\ldots ,y_m) \in S_{r}^m\), \( \eta = \sum _k \delta _{\eta _k}\), and \( {\mathscr {Z}}_{r,\eta }^{m} \) is the normalizing constant. Then we see that

$$\begin{aligned}&\nabla _x \log \sigma _{r,\eta }^{[1],m} (x,\mathbf{y }) = - \beta \Bigg \{\sum _{i=1}^{m} \nabla \varPsi (x-y_i) + \sum _{\eta _k \in S_{r}^c} \nabla \varPsi (x-\eta _k) \Bigg \} . \end{aligned}$$
(13.13)

For \( \varphi \in C_0(S)\otimes {\mathscr {D}}_{\circ } \) such that \( \varphi (x,{\textsf {y}})= 0 \) on \( S_{r}^c \times {\textsf {S}}\), we have that

$$\begin{aligned}&- \int _{ S\times {\textsf {S}}} \nabla _x \varphi (x,{\textsf {y}}) \mu _{\varPsi }^{[1]}(dxd{\textsf {y}})\\&\quad = - \sum _{m=0}^{\infty } \int _{ S_{r}\times S_{r}^{m} \times {\textsf {S}}} \left\{ \nabla _x \varphi \left( x,\sum _{i=1}^m \delta _{y_i}\right) \right\} \, \sigma _{r,\eta }^{[1],m}(x,\mathbf{y })\, dxd\mathbf{y }\, \mu _{\varPsi }^{[1]} \circ ({\pi _r^{c}})^{-1} (d\eta )\\&\quad = \sum _{m=0}^{\infty } \int _{ S_{r}\times S_{r}^{m} \times {\textsf {S}}} \varphi (x,\sum _{i=1}^m \delta _{y_i}) \, \left\{ \nabla _x \log \sigma _{r,\eta }^{[1],m}(x,\mathbf{y }) \right\} \, \sigma _{r,\eta }^{[1],m}(x,\mathbf{y })\, dxd\mathbf{y }\, \mu _{\varPsi }^{[1]} \circ ({\pi _r^{c}})^{-1} (d\eta )\\&\quad = \int _{ S\times {\textsf {S}}} \varphi (x,{\textsf {y}}) \left\{ - \beta \sum _{i=1}^{\infty } \nabla \varPsi (x-y_i)\right\} \mu _{\varPsi }^{[1]}(dxd{\textsf {y}}) \quad \text { by } (13.13) . \end{aligned}$$

From this, we obtain (13.12). \(\square \)
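The integration by parts above can be seen already in a one-dimensional, one-particle analogue: for a density \( \sigma \propto e^{-\beta \varPsi } \), one has \( -\int \varphi ' \sigma \, dx = \int \varphi \, (-\beta \varPsi ') \, \sigma \, dx \), since \( (\log \sigma )' = -\beta \varPsi ' \). A numerical sketch with hypothetical \( \varphi \) and \( \varPsi \) (both our choices):

```python
import math

# One-particle analogue of the integration by parts behind (13.12):
#   -int phi'(x) sigma(x) dx = int phi(x) * (-beta * Psi'(x)) * sigma(x) dx ,
# with sigma = exp(-beta * Psi), so that (log sigma)' = -beta * Psi'.
beta = 2.0
Psi = lambda x: x**2                                   # hypothetical smooth potential
dPsi = lambda x: 2.0 * x
phi = lambda x: math.exp(-x**2) * math.sin(x)          # rapidly decaying test function
dphi = lambda x: math.exp(-x**2) * (math.cos(x) - 2.0 * x * math.sin(x))
sigma = lambda x: math.exp(-beta * Psi(x))

a, b, n = -8.0, 8.0, 100_000                           # midpoint rule; tails negligible
h = (b - a) / n
xs = [a + (k + 0.5) * h for k in range(n)]
lhs = -h * sum(dphi(x) * sigma(x) for x in xs)
rhs = h * sum(phi(x) * (-beta * dPsi(x)) * sigma(x) for x in xs)

assert abs(lhs - rhs) < 1e-5
```

Formula (13.12) is the same identity applied coordinatewise to the conditioned densities \( \sigma _{r,\eta }^{[1],m} \), with the boundary terms absent because \( \varphi \) vanishes on \( S_{r}^c \times {\textsf {S}} \).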

Lemma 13.6

Assume that \( d=1 \) and also (13.11). Then (10.4) holds.

Proof

We can prove Lemma 13.6 in a similar fashion to the proof of [24, Theorem 2.1]. We easily deduce from the argument of [10] that [24, (4.5)] holds under assumption (13.11), and the rest of the proof is the same as that of [24, Theorem 2.1]. \(\square \)

Proof of Theorem 13.2

We verify that \( \mu _{\varPsi }\) satisfies the assumptions (A1)–(A4), (B1), and (C1)–(C2) with \( \ell =1 \).

(A1) follows from Lemma 13.5. We obtain (A2) from the DLR equation and the assumption that \( \varPsi \) is smooth outside the origin. We deduce (A3) and (A4) from (13.10), Lemma 13.6, and \( \partial S= \emptyset \). Taking \( a_{{\textsf {q}}}(r) = {\textsf {q}}\int _{S_{r}}\rho ^1(x)dx \), we obtain (B1). From (13.10) we can apply the Lebesgue convergence theorem to differentiate (13.12) and obtain (C1). (C2) follows from Lemma 13.1 and (13.10). \(\square \)

Remark 13.2

  1.

    Inukai [10] proved that assumption (13.11) is a necessary and sufficient condition for the particles of finite particle systems never to collide.

  2.

    One can easily generalize Theorem 13.2 to the case in which a free potential \( \varPhi \) is present.

  3.

    For a given potential \( \varPsi \) of Ruelle's class, there exist translation invariant grand canonical Gibbs measures associated with \( \varPsi \) whose m-point correlation functions \( \rho ^m \) with respect to the Lebesgue measure satisfy \( \rho ^m (x_1,\ldots ,x_m) \le c_{23}^m \) for all \( (x_1,\ldots ,x_m) \in ({\mathbb {R}}^d)^m \) and \( m \in {\mathbb {N}}\) (see [35]). Here \(c_{23}\) is a positive constant.