1 Introduction

1.1 Posing

In this paper, we study the dynamics of an infinite system of point particles of two types placed in \(\mathbb {R}^d\). The particles perform random jumps in the course of which particles of different types repel each other, whereas those of the same type do not interact. We do not require the repulsion to be of hard-core type. This model can be viewed as a dynamical version of the Widom–Rowlinson model [15] of equilibrium statistical mechanics, one of the few models of phase transitions in continuum particle systems; see the corresponding discussion in [7], where a similar birth-and-death model was introduced and studied. Note that the latter paper and the present work are so far the only publications in which the dynamics of two-component infinite systems of interacting particles have been studied.

The phase space of our model is defined as follows. Let \(\Gamma \) denote the set of all \(\gamma \subset \mathbb {R}^d\) that are locally finite, i.e., such that \(\gamma \cap \Lambda \) is a finite set whenever \(\Lambda \subset \mathbb {R}^d\) is compact. Thus, \(\Gamma \) is a configuration space as defined in [1, 3, 8, 11]. In order to take the particles' types into account, we use the Cartesian product \(\Gamma ^2 = \Gamma \times \Gamma \), see [5, 7, 9], the elements of which are denoted by \(\gamma = (\gamma _0, \gamma _1)\). In a standard way, \(\Gamma ^2\) is equipped with a \(\sigma \)-field of measurable subsets, which allows one to deal with probability measures as states of the system. Among them one may distinguish Poissonian states, in which the particles are independently distributed over \(\mathbb {R}^d\). Sub-Poissonian states are characterized by a rather weak dependence between the particles' positions, see Definition 2.1 below. As was shown in [10], for infinite particle systems with birth-and-death dynamics the evolution of states exists and keeps them sub-Poissonian globally in time, provided the birth of particles is in a sense controlled by their state-dependent death. For conservative dynamics, in which particles neither appear nor disappear and only change their positions, the interaction may in general destroy the sub-Poissonian character of the state in finite time (and even cause an explosion), e.g., due to an infinite number of simultaneous correlated jumps. The conceptual outcome of the present study is that this is not the case for the considered model. Note that we do not impose any restrictions on the model parameters, and the existence of the global in time evolution of states is proved to hold even if there exist multiple equilibrium states (phase transitions), in which case no ergodicity can be expected.
Another aim of this paper is to study the dynamics of the considered model in the mesoscopic limit, which yields an approximate (mean-field-like) but more detailed picture of it. We do this and show how this approximate picture and the description of the evolution of states are related to each other.

1.2 Presenting the Results

The Markov evolution of states of the system which we consider is described by the Kolmogorov equation

$$\begin{aligned} \frac{d}{dt} F_t = L F_t, \qquad F_t|_{t=0} = F_0, \end{aligned}$$
(1.1)

where \(F_t:\Gamma ^2\rightarrow \mathbb {R}\) is an observable and the operator L specifies the model. It has the following form

$$\begin{aligned} (LF)(\gamma _{0},\gamma _{1})= & {} \sum _{x\in \gamma _{0}}\int _{\mathbb {R}^d} c_{0}(x,y,\gamma _{1}) [F(\gamma _{0}\backslash x \cup y,\gamma _{1})-F(\gamma _{0},\gamma _{1})]d y \nonumber \\&+ \sum _{x\in \gamma _{1}}\int _{\mathbb {R}^d} c_{1}(x,y,\gamma _{0}) [F(\gamma _{0},\gamma _{1}\backslash x \cup y)-F(\gamma _{0},\gamma _{1})]d y. \end{aligned}$$
(1.2)

The evolution of states is supposed to be obtained by solving the Fokker-Planck equation

$$\begin{aligned} \frac{d}{dt} \mu _t = L^* \mu _t, \qquad \mu _t|_{t=0} = \mu _0, \end{aligned}$$
(1.3)

related to that in (1.1) by the duality

$$\begin{aligned} \int _{\Gamma ^2}F_t (\gamma ) \mu _0 ( d \gamma ) = \int _{\Gamma ^2}F_0 (\gamma ) \mu _t ( d \gamma ). \end{aligned}$$
(1.4)

As is usual for models of this kind, a direct meaning can be given to (1.1) or (1.3) only for states of finite systems, cf. [12]. In this case, the Banach space where the Cauchy problem in (1.3) is defined can be the space of signed measures with finite variation. For infinite systems, the evolution of states is constructed indirectly, by employing correlation functions and/or the related Bogoliubov functionals, see [3, 6, 7, 8, 9, 10] and the references quoted therein.

In this paper, in describing the evolution of states, see Theorem 3.5 below, we mostly follow the scheme elaborated in [10]. It consists in: (a) constructing the evolution of correlation functions \(k_0 \mapsto k_t\), \(t< T < +\infty \), based on the Cauchy problem in (3.1); (b) proving that each \(k_t\) is the correlation function of a unique sub-Poissonian state \(\mu _t\); (c) continuing the evolution \(k_{\mu _0}=k_0 \mapsto k_t = k_{\mu _t}\) thus obtained to all \(t>0\). Step (a) is performed by means of Ovcyannikov-like arguments similar to those used, e.g., in [3, 6, 7]. Step (b) is based on the Denjoy–Carleman theorem [4]. In realizing step (c), we crucially use the result of (b). Our description of the mesoscopic limit is based on the scaling procedure described in Sect. 4. It is equivalent to the Lebowitz-Penrose scaling used in [7], and also to the Vlasov scaling used in [3, 6]. In this procedure, passing to the mesoscopic level amounts to considering the system at different spatial scales parameterized by \(\varepsilon \in (0,1]\) in such a way that \(\varepsilon = 1\) corresponds to the micro-level, whereas the limit \(\varepsilon \rightarrow 0\) yields the meso-level description in which the corpuscular structure disappears and the system turns into a (two-component) medium characterized by a density function. The evolution of the latter is supposed to be found from the kinetic equation (3.15). In Theorem 3.8, we show that the kinetic equation has a unique global (in time) solution in the corresponding Banach space. In Theorem 3.9, we demonstrate that the micro- and mesoscopic descriptions are indeed connected by the scaling procedure in the sense of Definition 3.6. In Theorems 3.10 and 3.11, we describe the stability of translation invariant stationary solutions of the kinetic equation. In particular, we show that some of these solutions can be unstable with respect to space-dependent perturbations.

The rest of the paper has the following structure. In Sect. 2, we give necessary information on the analysis on configuration spaces and on the description of sub-Poissonian states on such spaces with the help of Bogoliubov functionals and correlation functions. We also describe in detail the model which we consider. In Sect. 3, we formulate the results mentioned above and prove Theorems 3.10 and 3.11. We also provide some comments; in particular, we relate our results with those of [7] describing a birth-and-death version of the Widom–Rowlinson dynamics in the continuum. Section 4 is dedicated to developing our main technical tool—Proposition 4.2. By means of it we realize step (a) in proving Theorem 3.5, see above. Steps (b) and (c) are based on Lemmas 5.1, 5.2, 5.4 and 5.5 proved in Sect. 5. Section 6 is dedicated to the proof of Theorems 3.8 and 3.10.

2 Preliminaries and the Model

2.1 Two-Component Configuration Spaces

Here we present necessary information on the subject. A more detailed description can be found in, e.g., [5, 7, 9].

Let \(\mathcal {B}(\mathbb {R}^d)\) and \(\mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) denote the sets of all Borel and all bounded Borel subsets of \(\mathbb {R}^d\), respectively. The configuration space \(\Gamma \) mentioned above is equipped with the vague topology and thus with the corresponding Borel \(\sigma \)-field \(\mathcal {B}(\Gamma )\). It is known, see [11, Sect. 2.2], that \(\mathcal {B}(\Gamma )=\sigma \{ N_\Delta : \Delta \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\}\), that is, \(\mathcal {B}(\Gamma )\) is generated by the counting maps \(\Gamma \ni \gamma \mapsto N_\Delta (\gamma ):=|\gamma \cap \Delta |\), where \(|\cdot |\) denotes cardinality. The elements of \(\Gamma ^2:=\Gamma \times \Gamma \) are \(\gamma = (\gamma _0, \gamma _1)\), i.e., the one-component configurations are always written with the subscript \(i=0,1\). By \(\mathcal {B}(\Gamma ^2)\) we denote the corresponding product \(\sigma \)-field. For \(\Lambda _i\in \mathcal {B}(\mathbb {R}^d)\), \(i=0,1\), we denote \(\Lambda = \Lambda _0 \times \Lambda _1\) and set

$$\begin{aligned} \Gamma ^2_\Lambda = \left\{ \gamma =(\gamma _0, \gamma _1)\in \Gamma ^2: \gamma _i \subset \Lambda _i, \ i=0,1\right\} . \end{aligned}$$

Clearly \(\Gamma ^2_\Lambda \in \mathcal {B}(\Gamma ^2)\) and hence

$$\begin{aligned} \mathcal {B}(\Gamma ^2_\Lambda ):=\left\{ A \cap \Gamma ^2_\Lambda : A \in \mathcal {B}(\Gamma ^2)\right\} \end{aligned}$$

is a sub-\(\sigma \)-field of \(\mathcal {B}(\Gamma ^2)\). Let \(p_{\Lambda }:\Gamma ^2\rightarrow \Gamma ^2_\Lambda \) be the projection

$$\begin{aligned} p_\Lambda (\gamma ) = \gamma _\Lambda := (\gamma _0 \cap \Lambda _0, \gamma _1 \cap \Lambda _1). \end{aligned}$$

It is clearly measurable, and thus the sets

$$\begin{aligned} p^{-1}_\Lambda (A_\Lambda ) :=\{ \gamma \in \Gamma ^2: p_\Lambda (\gamma ) \in A_\Lambda \}, \quad A_\Lambda \in \mathcal {B}(\Gamma ^2_\Lambda ), \end{aligned}$$

belong to \(\mathcal {B}(\Gamma ^2)\) for each Borel \(\Lambda _i\), \(i=0,1\).

Let \(\mathcal {P}(\Gamma ^2)\) denote the set of all probability measures on \((\Gamma ^2, \mathcal {B}(\Gamma ^2))\). For a given \(\mu \in \mathcal {P}(\Gamma ^2)\), its projection on \((\Gamma ^2_\Lambda , \mathcal {B} (\Gamma ^2_\Lambda ))\) is

$$\begin{aligned} \mu ^\Lambda (A_\Lambda ) := \mu \left( p^{-1}_\Lambda (A_\Lambda ) \right) , \qquad A_\Lambda \in \mathcal {B}(\Gamma ^2_\Lambda ). \end{aligned}$$
(2.1)

Let \(\pi \) be the standard homogeneous Poisson measure on \((\Gamma ,\mathcal {B}(\Gamma ))\) with density (intensity) \(\varkappa =1\). Then the product measure \(\pi ^2:=\pi \otimes \pi \) is a probability measure on \((\Gamma ^2, \mathcal {B}(\Gamma ^2))\). By \(\mathcal {P}_\pi (\Gamma ^2)\) we denote the set of all \(\mu \in \mathcal {P}(\Gamma ^2)\), for each of which the projections \(\mu ^\Lambda \), with all possible \(\Lambda = \Lambda _0 \times \Lambda _1\), \(\Lambda _i \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\), \(i=0,1\), are absolutely continuous with respect to the corresponding projections of \(\pi ^2\). It is known, see [5, Proposition 3.1], that for each \(\mu \in \mathcal {P}_\pi (\Gamma ^2)\) the following holds

$$\begin{aligned} \mu \left( \{ \gamma =(\gamma _0, \gamma _1) \in \Gamma ^2: \gamma _0 \cap \gamma _1 = \emptyset \}\right) =1. \end{aligned}$$

Since we are going to deal with elements of \(\mathcal {P}_\pi (\Gamma ^2)\) only, from now on we assume that the configurations \(\gamma _0\) and \(\gamma _1\) are subsets of one and the same space \(\mathbb {R}^d\).

Let \(\Gamma _0\subset \Gamma \) be the set of all finite configurations. It is known that \(\Gamma _0 \in \mathcal {B}(\Gamma )\), see [11, Sect. 2.2]. Hence \(\widetilde{\mathcal {B}}(\Gamma _0):=\{A \subset \Gamma _0: A \in \mathcal {B}(\Gamma )\}\) is a sub-\(\sigma \)-field of \(\mathcal {B}(\Gamma )\). At the same time, \(\Gamma _0\) can be equipped with the topology related to the Euclidean topology of \(\mathbb {R}^d\), see [11, Sect. 2.1]. Let \(\mathcal {B}(\Gamma _0)\) be the corresponding Borel \(\sigma \)-field of subsets of \(\Gamma _0\). Clearly, a function \(g:\Gamma _0 \rightarrow \mathbb {R}\) is \(\mathcal {B}(\Gamma _0)/\mathcal {B}(\mathbb {R})\)-measurable if and only if there exist symmetric Borel functions \(g^{(n)}: (\mathbb {R}^d)^n \rightarrow \mathbb {R}\) such that \(g(\{x_1 , \dots , x_n\}) = g^{(n)}(x_1 , \dots , x_n)\), \(n\in \mathbb {N}\). The relationship between this measurability and the corresponding property of \(g:\Gamma _0 \subset \Gamma \rightarrow \mathbb {R}\) is clarified by Obata’s result, see Lemma 1.1 and Proposition 1.3 in [14], by which \(\widetilde{\mathcal {B}}(\Gamma _0) = \mathcal {B} (\Gamma _0)\). Thus, such a function g is \(\mathcal {B}(\Gamma )/\mathcal {B}(\mathbb {R})\)-measurable if and only if there exists a family \(\{g^{(n)}:n\in \mathbb {N}\}\) with the properties mentioned above. For completeness, one also adds \(g^{(0)} = g(\varnothing )\) to this family.

By the very definition of \(\mathcal {B}(\Gamma ^2)\) we have that \(\Gamma _0 \times \Gamma _0=: \Gamma _0^2 \in \mathcal {B}(\Gamma ^2)\). Set \(\mathbb {N}_0 = \mathbb {N}\cup \{0\}\), and also \(\mathbb {N}_0^2 = \{n=(n_0,n_1): n_i \in \mathbb {N}_0, i=0,1\}\). Then a function \(G:\Gamma ^2_0 \rightarrow \mathbb {R}\) is \(\mathcal {B}(\Gamma ^2)/\mathcal {B}(\mathbb {R} )\)-measurable if and only if for each \(n\in \mathbb {N}_0^2\), there exists a Borel function \(G^{(n)}: (\mathbb {R}^{d})^{n_0} \times (\mathbb {R}^{d})^{n_1} \rightarrow \mathbb {R}\), symmetric with respect to the permutations of the variables within each of the two groups, such that

$$\begin{aligned} G(\eta ) = G(\eta _0, \eta _1) = G^{(n)} ( x_1, \dots , x_{n_0}; y_1, \dots , y_{n_1}), \end{aligned}$$

for \(\eta _0 = \{ x_1, \dots , x_{n_0}\}\) and \(\eta _1 = \{y_1, \dots , y_{n_1}\}\).

By \(B_\mathrm{bs}(\Gamma _0^2)\) we denote the set of all measurable functions \(G:\Gamma ^2_0 \rightarrow \mathbb {R}\) that have the following properties: (a) there exists \(C_G>0\) such that \(|G(\eta )| \le C_G\) for all \(\eta \in \Gamma ^2_0\); (b) there exists \(\Lambda = \Lambda _0 \times \Lambda _1\) with \(\Lambda _i \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\), \(i=0,1\), such that \(G(\eta ) = 0\) whenever \(\eta _i\cap \Lambda _i^c \ne \emptyset \) for either of \(i=0,1\); (c) there exists \(N\in \mathbb {N}_0\) such that \(G(\eta )=0\) whenever \(\max _{i=0,1}|\eta _i| >N\). Here \(\Lambda _i^c := \mathbb {R}^d \setminus \Lambda _i\). By \(\Lambda (G)\) and N(G) we denote the smallest \(\Lambda \) and N with the properties just described.

By standard arguments \(B_\mathrm{bs}(\Gamma _0^2)\) is a measure-defining class for measures on \((\Gamma ^2_0, \mathcal {B}(\Gamma ^2_0))\). The Lebesgue-Poisson measure \(\lambda \) on \((\Gamma ^2_0, \mathcal {B}(\Gamma ^2_0))\) is then defined by the following formula, see [5] and [9, page 130],

$$\begin{aligned} \int _{\Gamma _0^2} G(\eta ) \lambda ( d \eta )= & {} \sum _{n_0=0}^\infty \sum _{n_1=0}^\infty \frac{1}{n_0! n_1!} \int _{(\mathbb {R}^d)^{n_0}} \int _{(\mathbb {R}^d)^{n_1}} G^{(n)} ( x_1, \dots , x_{n_0}; y_1, \dots , y_{n_1}) \nonumber \\&\times d x_1 \cdots dx_{n_0} d y_1 \cdots dy_{n_1}, \end{aligned}$$
(2.2)

which has to hold for all \(G\in B_\mathrm{bs}(\Gamma _0^2)\) with the usual convention regarding the cases \(n_i=0\). The same can also be written as

$$\begin{aligned} \int _{\Gamma _0^2} G(\eta ) \lambda ( d \eta ) = \int _{\Gamma _0} \int _{\Gamma _0} G(\eta _0,\eta _1 ) (\lambda _0 \otimes \lambda _1)( d \eta _0, d \eta _1), \end{aligned}$$
(2.3)

where both \(\lambda _i\) are copies of the standard Lebesgue-Poisson measure on the single-component set \(\Gamma _0\), see, e.g., [11]. In the sequel, the Lebesgue-Poisson measures on \(\Gamma _0^2\) and on \(\Gamma _0\) will both be denoted by \(\lambda \) whenever no ambiguity can arise.

For \(\gamma \in \Gamma ^2\), by writing \(\eta \Subset \gamma \) we mean that \(\eta _i\Subset \gamma _i\), \(i=0,1\), i.e., both \(\eta _i\) are finite subsets of the corresponding \(\gamma _i\). For \(G\in B_\mathrm{bs}(\Gamma ^2_0)\), we set

$$\begin{aligned} (KG)(\gamma ) := \sum _{\eta \Subset \gamma } G(\eta ) = \sum _{\eta _0 \Subset \gamma _0} \sum _{\eta _1 \Subset \gamma _1} G(\eta _0,\eta _1), \end{aligned}$$
(2.4)

see [5, 9]. Note that the sums in (2.4) are finite and KG is a cylinder function on \(\Gamma ^2\). The latter means that it is \(\mathcal {B}(\Gamma ^2_{\Lambda (G)})\)-measurable. Moreover, cf. [9, Eqs. (2.3) and (2.4), page 129],

$$\begin{aligned} |(KG)(\gamma )| \le C_G \left( 1 + |\gamma _0\cap \Lambda _0 (G)|\right) ^{N_0(G)} \left( 1 + |\gamma _1\cap \Lambda _1 (G)|\right) ^{N_1(G)}. \end{aligned}$$
(2.5)
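The combinatorial sum in (2.4) can be made concrete for finite configurations. The following minimal Python sketch (illustrative only; all names are ours, not from the paper) enumerates sub-configurations and checks the two-component analogue of the binomial identity, by which \(K\) applied to \(G(\eta ) = z_0^{|\eta _0|} z_1^{|\eta _1|}\) gives \((1+z_0)^{|\gamma _0|}(1+z_1)^{|\gamma _1|}\):

```python
from itertools import chain, combinations

def sub_configurations(gamma):
    """All sub-configurations eta of a finite configuration gamma."""
    gamma = list(gamma)
    return chain.from_iterable(combinations(gamma, r) for r in range(len(gamma) + 1))

def K(G, gamma0, gamma1):
    """(KG)(gamma) = sum_{eta0 in gamma0} sum_{eta1 in gamma1} G(eta0, eta1), cf. (2.4)."""
    return sum(G(eta0, eta1)
               for eta0 in sub_configurations(gamma0)
               for eta1 in sub_configurations(gamma1))

# For G depending only on the cardinalities |eta0|, |eta1|:
z0, z1 = 0.3, -0.5
G = lambda e0, e1: z0 ** len(e0) * z1 ** len(e1)
val = K(G, [0.1, 0.7, 2.0], [1.5])   # = (1 + z0)**3 * (1 + z1)
```

Note that the sums are finite precisely because only finitely many sub-configurations of a finite configuration exist, in line with the remark after (2.4).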

2.2 Correlation Functions

In the approach we follow, see [3, 6, 10], the evolution of states is constructed in the following way. Let \(\varTheta \) denote the set of all compactly supported continuous maps \(\theta = (\theta _0, \theta _1): \mathbb {R}^d \times \mathbb {R}^d \rightarrow (-1,0]^2\). For each \(\theta \in \varTheta \), the map

$$\begin{aligned} \Gamma ^2 \ni \gamma \mapsto \prod _{x\in \gamma _0} (1+ \theta _0 (x)) \prod _{y\in \gamma _1} (1+ \theta _1 (y)) \end{aligned}$$

is measurable and bounded. Hence, for a state \(\mu \), one may define

$$\begin{aligned} B_\mu (\theta ) = \int _{\Gamma ^2} \prod _{x\in \gamma _0} (1+ \theta _0 (x)) \prod _{y\in \gamma _1} (1+ \theta _1 (y)) \mu ( d \gamma ), \end{aligned}$$
(2.6)

– the so-called Bogoliubov functional corresponding to \(\mu \), considered as a map \(\varTheta \rightarrow \mathbb {R}\).

Definition 2.1

By \(\mathcal {P}_\mathrm{exp}(\Gamma ^2)\) we denote the set of sub-Poissonian states, consisting of all those \(\mu \in \mathcal {P}_\pi (\Gamma ^2)\) for which \(B_\mu \) can be continued to an entire function of exponential type of \(\theta \in L^1 (\mathbb {R}^d\times \mathbb {R}^d\rightarrow \mathbb {R}^2) \).

It can be shown that a given \(\mu \in \mathcal {P}_\pi (\Gamma ^2)\) is sub-Poissonian if and only if \(B_\mu \) can be written in the form, cf. (2.3),

$$\begin{aligned} B_\mu (\theta ) = \int _{\Gamma _0^2} k_\mu (\eta ) E(\theta ;\eta ) \lambda (d \eta ), \end{aligned}$$
(2.7)

cf. (2.2), with \(k_\mu :\Gamma _0^2 \rightarrow [0,+\infty )\) such that \(k_\mu ^{(n)}\in L^\infty ((\mathbb {R}^d)^{n_0}\times (\mathbb {R}^d)^{n_1}\rightarrow \mathbb {R})\) and

$$\begin{aligned} E(\theta ;\eta ) = e ( \theta _0; \eta _0)e ( \theta _1;\eta _1) := \prod _{x\in \eta _0} \theta _0 (x) \prod _{y\in \eta _1} \theta _1 (y). \end{aligned}$$
(2.8)

This, in particular, means that \(k_\mu \) is essentially bounded with respect to the Lebesgue-Poisson measure \(\lambda \) defined in (2.2). For the (inhomogeneous) Poisson measure \(\pi _\varrho \), the Bogoliubov functional is

$$\begin{aligned} B_{\pi _\varrho } (\theta ) = \exp \left( \sum _{i=0,1}\int _{\mathbb {R}^d} \theta _i (x) \varrho _i(x) dx \right) , \end{aligned}$$
(2.9)

where \(\varrho = (\varrho _0, \varrho _1)\) is the (two-component) density function. Then by (2.2) and (2.7) we have

$$\begin{aligned} k_{\pi _\varrho } (\eta )=E( \varrho ;\eta ) = e(\varrho _0;\eta _0)e( \varrho _1;\eta _1). \end{aligned}$$
(2.10)
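As a consistency check, one can recover (2.9) by inserting (2.10) into (2.7): by (2.2) and (2.8), the \(\lambda \)-integral factorizes into two exponential series,

$$\begin{aligned} \int _{\Gamma _0^2} E(\varrho ;\eta ) E(\theta ;\eta ) \lambda (d \eta ) = \prod _{i=0,1} \sum _{n=0}^{\infty } \frac{1}{n!} \left( \int _{\mathbb {R}^d} \theta _i (x) \varrho _i(x) dx\right) ^{n} = B_{\pi _\varrho }(\theta ). \end{aligned}$$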

If one rewrites (2.6) in the form

$$\begin{aligned} B_\mu (\theta ) = \int _{\Gamma ^2} F_\theta (\gamma ) \mu (d \gamma ), \end{aligned}$$

then the action of L on F as in (1.2) can be transformed to the action of \(L^\Delta \) on \(k_\mu \) according to the following rule

$$\begin{aligned} \int _{\Gamma ^2}(L F_\theta ) (\gamma ) \mu (d \gamma ) = \int _{\Gamma _0^2} (L^\Delta k_\mu ) (\eta ) E(\theta ;\eta )\lambda (d \eta ) \end{aligned}$$
(2.11)

The main advantage of this is that \(k_\mu \) is a function of finite configurations.

For \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\) and \(\Lambda = \Lambda _0 \times \Lambda _1\), \(\Lambda _i \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\), let \(\mu ^\Lambda \) be as in (2.1). Then \(\mu ^\Lambda \) is absolutely continuous with respect to the corresponding restriction \(\lambda ^\Lambda \) of the measure defined in (2.2), and hence we may write

$$\begin{aligned} \mu ^\Lambda (d \eta ) = R^\Lambda _\mu (\eta ) \lambda ^\Lambda ( d \eta ), \qquad \eta \in \Gamma ^2_\Lambda . \end{aligned}$$
(2.12)

Then the correlation function \(k_\mu \) and the Radon-Nikodym derivative \(R_\mu ^\Lambda \) are related to each other by, cf. (2.3),

$$\begin{aligned} k_\mu (\eta )= & {} \int _{\Gamma ^2_\Lambda } R^\Lambda _\mu (\eta \cup \xi ) \lambda ^\Lambda ( d\xi )\nonumber \\= & {} \int _{\Gamma _{\Lambda _0}} \int _{\Gamma _{\Lambda _1}} R^\Lambda _\mu (\eta _0 \cup \xi _0, \eta _1 \cup \xi _1) (\lambda _0^{\Lambda _0} \otimes \lambda _1^{\Lambda _1}) (d\xi _0, d \xi _1), \qquad \eta \in \Gamma ^2_\Lambda . \end{aligned}$$
(2.13)

Note that (2.13) relates \(R^\Lambda _\mu \) with the restriction of \(k_\mu \) to \(\Gamma _\Lambda ^2\). The fact that these are the restrictions of one and the same function \(k_\mu :\Gamma _0^2\rightarrow \mathbb {R}\) corresponds to the Kolmogorov consistency of the family \(\{\mu ^\Lambda \}_{\Lambda }\).

By (2.4), (2.1), and (2.12) we get

$$\begin{aligned} \int _{\Gamma ^2} (KG)(\gamma ) \mu (d\gamma ) = \langle \langle G, k_\mu \rangle \rangle , \end{aligned}$$

holding for each \(G\in B_\mathrm{bs}(\Gamma _0^2)\) and \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\). Here

$$\begin{aligned} \langle \langle G, k \rangle \rangle := \int _{\Gamma _0^2} G(\eta ) k(\eta ) \lambda (d \eta ), \end{aligned}$$
(2.14)

for suitable G and k. Define

$$\begin{aligned} B^\star _\mathrm{bs} (\Gamma _0^2) =\{ G\in B_\mathrm{bs}(\Gamma _0^2): (KG)(\gamma ) \ge 0 \ \mathrm{for} \ \mathrm{all} \ \gamma \in \Gamma ^2\}. \end{aligned}$$
(2.15)

By [11, Theorems 6.1 and 6.2 and Remark 6.3] one can prove that the following holds.

Proposition 2.2

Let a measurable function \(k : \Gamma _0^2 \rightarrow \mathbb {R}\) have the following properties:

$$\begin{aligned}&\mathrm{(a)}&\ \langle \langle G, k \rangle \rangle \ge 0, \qquad \mathrm{for} \ \mathrm{all} \ G\in B^\star _\mathrm{bs} (\Gamma _0^2);\nonumber \\&\mathrm{(b)}&\ k(\varnothing , \varnothing ) = 1; \qquad \mathrm{(c)} \ \ k(\eta ) \le C^{|\eta _0|+|\eta _1|} , \end{aligned}$$
(2.16)

with (c) holding for some \(C >0\) and \(\lambda \)-almost all \(\eta \in \Gamma _0^2\). Then there exists a unique \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\) for which k is the correlation function.

2.3 The Model

The model we consider is specified by the operator L given in (1.2) where the coefficients are supposed to be of the following form

$$\begin{aligned} c_{0}(x,y,\gamma _{1})= & {} a_{0}(x-y)\exp \left( -\sum _{z \in \gamma _{1}}\phi _{0}(y-z)\right) ,\nonumber \\ c_{1}(x,y,\gamma _{0})= & {} a_{1}(x-y)\exp \left( -\sum _{z \in \gamma _{0}}\phi _{1}(y-z)\right) , \end{aligned}$$
(2.17)

with jump kernels \(a_{i}: \mathbb {R}^d \rightarrow [0,+\infty )\) such that \(a_{i}(x)=a_{i}(-x)\) and

$$\begin{aligned} \int _{\mathbb {R}^d}a_{i}(x)d x =: \alpha _i < \infty , \qquad i=0,1. \end{aligned}$$
(2.18)

The repulsion potentials \(\phi _{i}: \mathbb {R}^d \rightarrow [0,+\infty )\) in (2.17) are supposed to be symmetric, \(\phi _i(x) = \phi _i(-x)\), and such that

$$\begin{aligned} \int _{\mathbb {R}^d} \phi _i(x) d x =:\langle \phi _i \rangle< \infty , \qquad \mathop {{{\mathrm{\mathrm {ess\,sup}}}}}\limits _{x\in \mathbb {R}^d} \phi _i (x) =:\bar{\phi }_i < \infty . \end{aligned}$$
(2.19)

Then, since \(1 - e^{-s} \le s\) for \(s \ge 0\),

$$\begin{aligned} \int _{\mathbb {R}^d}\bigg (1-\exp (-\phi _{i}(x))\bigg )d x \le \langle \phi _{i} \rangle , \qquad i=0,1. \end{aligned}$$
(2.20)
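For illustration, the rates (2.17) are easy to evaluate on finite configurations. The following Python sketch (in one dimension, with an illustrative Gaussian kernel and a bounded-range potential of our own choosing, both satisfying (2.18) and (2.19)) shows how the repulsion damps a jump whose target point lies near particles of the other component:

```python
import math

def jump_rate(x, y, gamma_other, a, phi):
    """c_i(x, y, gamma_{1-i}) = a_i(x - y) * exp(-sum_z phi_i(y - z)), cf. (2.17):
    the free kernel a_i is damped by the total repulsion felt at the target y."""
    return a(x - y) * math.exp(-sum(phi(y - z) for z in gamma_other))

a0 = lambda u: math.exp(-u * u)                 # integrable and symmetric, cf. (2.18)
phi0 = lambda u: 1.0 if abs(u) < 1.0 else 0.0   # bounded and integrable, cf. (2.19)

# With no particles of the other type present, the free kernel is recovered:
free = jump_rate(0.0, 0.5, [], a0, phi0)                 # = exp(-0.25)

# A nearby particle (at 0.6) suppresses the jump; a distant one (at 5.0) does not:
damped = jump_rate(0.0, 0.5, [0.6, 5.0], a0, phi0)       # = exp(-0.25) * exp(-1)
```

Since \(\phi _i \ge 0\), adding particles of the other component can only decrease the rate, which is the repulsion mechanism of the model.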

By (1.2) and (2.11) one obtains the action of \(L^\Delta \) in the following form. For \(x\in \mathbb {R}^d\), we set

$$\begin{aligned} \tau _x^i (y) = \exp (-\phi _i (x-y)), \quad t_x^i (y)= \tau _x^i (y) - 1, \quad y\in \mathbb {R}^d, \ \ i=0,1. \end{aligned}$$
(2.21)

Next, for a function \(k(\eta ) = k(\eta _0, \eta _1)\), cf. (2.3), we introduce the maps

$$\begin{aligned} (Q_y^0 k) (\eta _0, \eta _1)= & {} \int _{\Gamma _0} k(\eta _0, \eta _1 \cup \xi )e(t^0_y ;\xi ) \lambda (d\xi ), \nonumber \\ (Q_y^1 k) (\eta _0, \eta _1)= & {} \int _{\Gamma _0} k(\eta _0\cup \xi , \eta _1 )e(t^1_y ;\xi ) \lambda (d\xi ), \end{aligned}$$
(2.22)

where e is as in (2.8). Then

$$\begin{aligned} (L^\Delta k) (\eta _0, \eta _1)= & {} \sum _{y\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) e(\tau ^0_y;\eta _1) (Q_y^0 k) (\eta _0\setminus y \cup x, \eta _1) d x \nonumber \\&- \sum _{x\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) e(\tau ^0_y;\eta _1) (Q_y^0 k) (\eta _0, \eta _1) d y \nonumber \\&+ \sum _{y\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) e(\tau ^1_y;\eta _0) (Q_y^1 k) (\eta _0, \eta _1\setminus y \cup x) d x \nonumber \\&- \sum _{x\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) e(\tau ^1_y;\eta _0) (Q_y^1 k) (\eta _0, \eta _1) d y. \end{aligned}$$
(2.23)

This expression can be derived from the general form obtained in [9, Eqs. (4.4) and (4.5), page 142] by using the concrete form of the kernels given in (2.17). It can also be obtained directly from (1.2) and (2.11). Note that in (2.23) we use the convention \(\sum _{x\in \varnothing } =0\).

3 The Results

3.1 The Microscopic Level

As mentioned above, instead of directly studying the evolution of states by solving the problem in (1.3), we pass from \(\mu _0\) to the corresponding correlation function \(k_{\mu _0}\) and then consider the problem

$$\begin{aligned} \frac{d}{dt} k_t = L^\Delta k_t, \qquad k_t|_{t=0} = k_{\mu _0}, \end{aligned}$$
(3.1)

where \(L^\Delta \) is given in (2.23). For this problem, we prove the existence of a unique global solution \(k_t\) which is the correlation function of a unique state \(\mu _t \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\).

We begin by defining the problem (3.1) in the corresponding spaces of functions \(k:\Gamma _0^2 \rightarrow \mathbb {R}\). Directly from the representation (2.7), see also (2.2), it follows that \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\) implies

$$\begin{aligned} |k_\mu (\eta )| \le C \exp \big ( \vartheta \left( |\eta _0| + |\eta _1|\right) \big ), \end{aligned}$$

holding for \(\lambda \)-almost all \(\eta \in \Gamma _0^2\), some \(C>0\), and \(\vartheta \in \mathbb {R}\). Keeping this in mind we set

$$\begin{aligned} \Vert k\Vert _\vartheta = \mathop {{{\mathrm{\mathrm {ess\,sup}}}}}\limits _{\eta \in \Gamma ^2_0}\left\{ |k (\eta )| \exp \big ( - \vartheta \left( |\eta _0| +|\eta _1| \right) \big ) \right\} . \end{aligned}$$
(3.2)

Then

$$\begin{aligned} \mathcal {K}_\vartheta := \{ k:\Gamma ^2_0\rightarrow \mathbb {R}: \Vert k\Vert _\vartheta <\infty \} \end{aligned}$$

is a Banach space with norm (3.2) and the usual linear operations. In fact, we are going to use the ascending scale of such spaces \(\mathcal {K}_\vartheta \), \(\vartheta \in \mathbb {R}\), with the property

$$\begin{aligned} \mathcal {K}_\vartheta \hookrightarrow \mathcal {K}_{\vartheta '}, \qquad \vartheta < \vartheta ', \end{aligned}$$
(3.3)

where \(\hookrightarrow \) denotes continuous embedding. Set, cf. (2.14) and (2.15),

$$\begin{aligned} \mathcal {K}^\star _\vartheta =\{k\in \mathcal {K}_\vartheta : \langle \langle G,k \rangle \rangle \ge 0 \ \mathrm{for} \ \mathrm{all} \ G\in B^\star _\mathrm{bs} (\Gamma _0^2)\}. \end{aligned}$$
(3.4)

It is a subset of the cone

$$\begin{aligned} \mathcal {K}^+_\vartheta =\{k\in \mathcal {K}_\vartheta : k(\eta ) \ge 0 \ \ \mathrm{for} \ \lambda -\mathrm{almost} \ \mathrm{all} \ \eta \in \Gamma _0^2\}. \end{aligned}$$
(3.5)
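To illustrate the role of \(\vartheta \) in this scale, consider the Poisson correlation function (2.10) with constant densities \(\varrho _i > 0\). By (3.2),

$$\begin{aligned} \Vert k_{\pi _\varrho }\Vert _\vartheta = \sup _{n_0, n_1 \in \mathbb {N}_0} \varrho _0^{n_0} \varrho _1^{n_1} \exp \big ( - \vartheta (n_0 + n_1)\big ), \end{aligned}$$

which is finite (and equal to one) if and only if \(\vartheta \ge \max _{i=0,1} \ln \varrho _i\); thus denser Poissonian states enter the scale (3.3) only at larger values of \(\vartheta \).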

By Proposition 2.2 it follows that each \(k\in \mathcal {K}^\star _\vartheta \) such that \(k(\varnothing , \varnothing ) = 1\) is the correlation function of a unique \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\). Then we define

$$\begin{aligned} \mathcal {K} = \bigcup _{\vartheta \in \mathbb {R}} \mathcal {K}_\vartheta , \qquad \mathcal {K}^\star = \bigcup _{\vartheta \in \mathbb {R}} \mathcal {K}_\vartheta ^\star . \end{aligned}$$
(3.6)

As the inductive limit of the Banach spaces \(\mathcal {K}_\vartheta \), the linear space \(\mathcal {K}\) is equipped with the corresponding inductive topology, which turns it into a locally convex space.

For a given \(\vartheta \in \mathbb {R}\), by (2.21)–(2.23) we define \(L^\Delta _\vartheta \) as a linear operator in \(\mathcal {K}_\vartheta \) with domain

$$\begin{aligned} \mathcal {D} (L^\Delta _\vartheta ) = \{ k\in \mathcal {K}_\vartheta : L^\Delta k \in \mathcal {K}_\vartheta \}. \end{aligned}$$
(3.7)

Lemma 3.1

For each \(\vartheta '' < \vartheta \), cf. (3.3), it follows that \(\mathcal {K}_{\vartheta ''} \subset \mathcal {D} (L^\Delta _\vartheta )\).

Proof

For \(\vartheta '' < \vartheta \), by (2.20), (2.21), (2.22), and (3.2) we have

$$\begin{aligned} \left| (Q^0_y k)(\eta _0, \eta _1)\right|\le & {} \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1|\right) \nonumber \\&\times \int _{\Gamma _0} \exp \left( \vartheta '' |\xi |\right) \prod _{z\in \xi } \big ( 1 - \exp \left( - \phi _0 (z-y)\right) \big ) \lambda ( d\xi )\nonumber \\\le & {} \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1|\right) \exp \left( \langle \phi _{0} \rangle e^{\vartheta ''} \right) . \end{aligned}$$
(3.8)
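The \(\xi \)-integral in the first line of (3.8) is computed by means of the standard Lebesgue-Poisson exponential identity, cf. (2.2):

$$\begin{aligned} \int _{\Gamma _0} \exp \left( \vartheta '' |\xi |\right) \prod _{z\in \xi } \big ( 1 - \exp \left( - \phi _0 (z-y)\right) \big ) \lambda ( d\xi ) = \exp \left( e^{\vartheta ''} \int _{\mathbb {R}^d} \big ( 1 - \exp \left( - \phi _0 (z)\right) \big ) d z \right) \le \exp \left( \langle \phi _{0} \rangle e^{\vartheta ''} \right) , \end{aligned}$$

where the equality uses the translation invariance of the Lebesgue measure and the last inequality follows from (2.20).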

Likewise

$$\begin{aligned} \left| (Q^1_y k)(\eta _0, \eta _1)\right| \le \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1|\right) \exp \left( \langle \phi _{1} \rangle e^{\vartheta ''} \right) . \end{aligned}$$
(3.9)

Now we apply the latter two estimates together with (2.18) in (2.23) and obtain

$$\begin{aligned} \left| (L^\Delta k)(\eta _0, \eta _1)\right|&\le 2 \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1|\right) \nonumber \\&\quad \times \bigg ( \alpha _0 |\eta _0| \exp \left( \langle \phi _{0} \rangle e^{\vartheta ''} \right) + \alpha _1 |\eta _1| \exp \left( \langle \phi _{1} \rangle e^{\vartheta ''} \right) \bigg ). \end{aligned}$$
(3.10)

By means of the inequality \(x\exp (-\sigma x) \le 1/(e \sigma )\), \(x, \sigma >0\), we get from (3.2) and (3.10) the following estimate

$$\begin{aligned} \Vert L^\Delta k\Vert _{\vartheta } \le \frac{4\Vert k\Vert _{\vartheta ''}}{e(\vartheta - \vartheta '')} \max _{i=0,1}\alpha _i \exp \left( \langle \phi _{i} \rangle e^{\vartheta ''}\right) , \end{aligned}$$
(3.11)

which yields the proof. \(\square \)
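The elementary inequality used in the last step follows by maximizing \(x \mapsto x e^{-\sigma x}\), whose derivative vanishes at \(x = 1/\sigma \), giving the value \(1/(e\sigma )\). A small numerical sanity sketch (ours, purely for illustration):

```python
import math

# Check x * exp(-sigma * x) <= 1 / (e * sigma) on a fine grid, with the
# maximum attained at x = 1 / sigma.
sigma = 0.7
bound = 1.0 / (math.e * sigma)
values = [x * math.exp(-sigma * x) for x in (0.01 * n for n in range(1, 2000))]
peak = max(values)   # grid maximum; stays below the analytic bound
```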

Corollary 3.2

For each \(\vartheta ,\vartheta ''\in \mathbb {R}\) such that \(\vartheta '' < \vartheta \), the expression in (2.23) defines a bounded linear operator \(L^\Delta _{\vartheta \vartheta ''}: \mathcal {K}_{\vartheta ''}\rightarrow \mathcal {K}_{\vartheta }\) the norm of which can be estimated by means of (3.11).

In what follows, we consider two types of operators defined by the expression in (2.23): (a) unbounded operators \((L^\Delta _\vartheta , \mathcal {D}(L^\Delta _\vartheta ))\), \(\vartheta \in \mathbb {R}\), with domains as in (3.7) and Lemma 3.1; (b) bounded operators \(L^\Delta _{ \vartheta \vartheta ''}\) described in Corollary 3.2. These operators are related to each other in the following way:

$$\begin{aligned} \forall \vartheta '' < \vartheta \ \ \forall k \in \mathcal {K}_{\vartheta ''} \qquad L^\Delta _{\vartheta \vartheta ''} k = L^\Delta _{\vartheta } k. \end{aligned}$$
(3.12)

By means of the bounded operators \(L^\Delta _{\vartheta \vartheta ''} : \mathcal {K}_{\vartheta ''} \rightarrow \mathcal {K}_{\vartheta }\) we also define a continuous linear operator \(L^\Delta :\mathcal {K} \rightarrow \mathcal {K} \), see (3.6). In view of this, we consider the following two equations. The first one is

$$\begin{aligned} \frac{d}{dt} k_t = L^\Delta _\vartheta k_t, \qquad k_t|_{t=0} = k_{\mu _0}, \end{aligned}$$
(3.13)

considered as an equation in a given Banach space \(\mathcal {K}_{\vartheta }\). The second equation is (3.1) with \(L^\Delta \) given in (2.23) considered in the locally convex space \(\mathcal {K}\).

Definition 3.3

By a solution of (3.13) on a time interval \([0, T)\), \(T\le +\infty \), we mean a continuous map \([0,T)\ni t \mapsto k_t \in \mathcal {D} (L^\Delta _\vartheta )\) such that the map \([0,T)\ni t \mapsto d k_t / dt\in \mathcal {K}_\vartheta \) is also continuous and both equalities in (3.13) are satisfied. Likewise, a continuously differentiable map \([0,T)\ni t \mapsto k_t \in \mathcal {K}\) is said to be a solution of (3.1) in \(\mathcal {K}\) if both equalities therein are satisfied for all t. Such a solution is called global if \(T=+\infty \).

Remark 3.4

The map \([0,T)\ni t\mapsto k_t \in \mathcal {K}\) is a solution of (3.1) if and only if, for each \(t \in [0, T)\), there exists \(\vartheta ''\in \mathbb {R}\) such that \(k_t\in \mathcal {K}_{\vartheta ''}\) and, for each \(\vartheta > \vartheta ''\), the map \(t\mapsto k_t\) is continuously differentiable at t in \(\mathcal {K}_\vartheta \) and \(d k_t/ dt = L^\Delta _\vartheta k_t = L^\Delta _{\vartheta \vartheta ''} k_t\).

The main result of this subsection is contained in the following statement.

Theorem 3.5

Assume that (2.18) and (2.19) hold. Then for each \(\mu _0 \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\), the problem (3.1) with \(L^\Delta \) given in (2.23) and \(k_0 = k_{\mu _0}\) has a unique global solution \(k_t \in \mathcal {K}^\star \subset \mathcal {K}\) which has the property \(k_t(\varnothing , \varnothing ) = 1\). Therefore, for each \(t\ge 0\) there exists a unique state \(\mu _t\in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\) such that \(k_t = k_{\mu _t}\). Moreover, let \(k_0\) and \(C>0\) be such that \(k_0(\eta ) \le C^{|\eta _0|+ |\eta _1|}\) for \(\lambda \)-almost all \(\eta \in \Gamma _0^2\), see (2.16). Then the mentioned solution satisfies

$$\begin{aligned} \forall t\ge 0 \qquad 0\le k_t (\eta ) \le C^{|\eta _0|+ |\eta _1|} \exp \left\{ t\left( \alpha _0 |\eta _0| + \alpha _1 |\eta _1| \right) \right\} . \end{aligned}$$
(3.14)

3.2 The Mesoscopic Level

As is commonly recognized, see [2, Chapter 8] and [13], a comprehensive understanding of the behavior of an infinite interacting particle system requires its multi-scale analysis. In the approach which we follow, see [3] (jump dynamics) and [7] (two-component system), passing from the micro- to the mesoscopic level amounts to considering the system at different spatial scales parameterized by \(\varepsilon \in (0,1]\) in such a way that \(\varepsilon =1\) corresponds to the micro-level, whereas the limit \(\varepsilon \rightarrow 0\) yields the meso-level description in which the corpuscular structure disappears and the system turns into a (two-component) medium characterized by a density function \(\varrho = (\varrho _0, \varrho _1)\), \(\varrho _i : \mathbb {R}^d \rightarrow [0, +\infty )\), \(i=0,1\). Then the evolution \(\varrho _0 \mapsto \varrho _t\), obtained from a kinetic equation, approximates (in the mean-field sense) the evolution of the system’s states as seen from the mesoscopic level.

3.2.1 The Kinetic Equation

Keeping in mind that the Poissonian state \(\pi _\varrho \) is completely characterized by the density \(\varrho \), see (2.9) and (2.10), we introduce the following notion, cf. [3, p. 1046] and [7, p. 70].

Definition 3.6

A state \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\) is said to be Poisson-approximable if: (i) there exist \(\vartheta \in \mathbb {R}\) and \(\varrho = (\varrho _0, \varrho _1)\), \(\varrho _i \in L^\infty ( \mathbb {R}^d \rightarrow \mathbb {R})\), \(\varrho _i \ge 0\), \(i=0,1\), such that both \(k_\mu \) and \(k_{\pi _\varrho }\) lie in \(\mathcal {K}_\vartheta \); (ii) for each \(\varepsilon \in (0,1]\), there exists \(q_\varepsilon \in \mathcal {K}_\vartheta \) such that \(q_1 = k_\mu \) and \(\Vert q_\varepsilon - k_{\pi _\varrho }\Vert _\vartheta \rightarrow 0\) as \(\varepsilon \rightarrow 0\).

Our aim is to show that the evolution \(\mu _0 \mapsto \mu _t\) obtained in Theorem 3.5 preserves the property just defined, now relative to the time-dependent density \(\varrho _t = (\varrho _{0,t}, \varrho _{1,t})\) obtained from the following system of kinetic equations

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{d}{dt} \varrho _{0,t} =(a_0 *\varrho _{0,t}) \exp \left( - (\phi _0 *\varrho _{1,t})\right) - \varrho _{0,t} \left( a_0 *\exp \left( - (\phi _0 *\varrho _{1,t})\right) \right) , \\ \frac{d}{dt} \varrho _{1,t} = (a_1 *\varrho _{1,t}) \exp \left( - (\phi _1 *\varrho _{0,t})\right) - \varrho _{1,t} \left( a_1 *\exp \left( - (\phi _1 *\varrho _{0,t})\right) \right) , \end{array} \right. \end{aligned}$$
(3.15)

where \(*\) denotes convolution; e.g.,

$$\begin{aligned} (a_i *\varrho _{i,t}) (x) = \int _{\mathbb {R}^d} a_i (x-y) \varrho _{i,t} (y) d y, \qquad i=0,1. \end{aligned}$$
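Although the analysis below is entirely rigorous, the structure of (3.15) is easy to probe numerically. The following sketch (not part of the paper's arguments) integrates a discretized version of the system by the forward Euler method on a one-dimensional periodic grid; the kernels, grid and all numeric values are illustrative assumptions only.

```python
import math

# Illustrative forward-Euler integration of the kinetic system (3.15)
# in d = 1 on a periodic grid; all kernels and parameters are ad hoc.
N, h = 64, 0.5                          # grid size and spacing

def kernel(width):
    # symmetric, periodic Gaussian-like kernel tabulated on the grid
    return [math.exp(-(min(i, N - i) * h) ** 2 / width) for i in range(N)]

a = [kernel(2.0), kernel(3.0)]          # jump kernels a_0, a_1
phi = [kernel(1.0), kernel(1.5)]        # repulsion potentials phi_0, phi_1

def conv(k, f):
    # periodic convolution (k * f)(x_i) = sum_j k(x_i - x_j) f(x_j) h
    return [sum(k[(i - j) % N] * f[j] for j in range(N)) * h for i in range(N)]

def rhs(rho):
    # right-hand side of (3.15): component i feels repulsion from 1 - i
    out = []
    for i in (0, 1):
        e = [math.exp(-v) for v in conv(phi[i], rho[1 - i])]
        ar, ae = conv(a[i], rho[i]), conv(a[i], e)
        out.append([ar[x] * e[x] - rho[i][x] * ae[x] for x in range(N)])
    return out

rho = [[0.3 + 0.1 * math.sin(2 * math.pi * i / N) for i in range(N)],
       [0.2] * N]
dt = 0.01
for _ in range(100):                    # integrate up to t = 1
    d = rhs(rho)
    rho = [[rho[i][x] + dt * d[i][x] for x in range(N)] for i in (0, 1)]
```

Since the chosen jump kernels are symmetric, the total mass \(\int \varrho _{i,t}\,dx\) of each component is conserved along the flow, reflecting the conservative (jump) nature of the dynamics; the densities also remain positive.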

Definition 3.7

By the global solution of the system of kinetic equations (3.15), subject to an initial condition, we understand a continuously differentiable map

$$\begin{aligned}{}[0, +\infty ) \ni t \mapsto (\varrho _{0,t}, \varrho _{1,t}) \in L^\infty (\mathbb {R}^d \rightarrow \mathbb {R}^2) \end{aligned}$$
(3.16)

such that both equalities in (3.15) hold. This solution is called positive if \(\varrho _{i,t}(x) \ge 0\), \(i=0,1\), for all \(t\ge 0\) and Lebesgue-almost all \(x\in \mathbb {R}^d\). By the positive solution of (3.15) on the time interval [0, T], \(0<T<\infty \), we mean the corresponding restriction of this map.

Let \(\Vert \cdot \Vert _{L^\infty }\) stand for the norm in \(L^\infty (\mathbb {R}^d \rightarrow \mathbb {R})\). In Theorem 3.8, the space \(L^\infty (\mathbb {R}^d \rightarrow \mathbb {R}^2)\) is equipped with the norm

$$\begin{aligned} \Vert \varrho \Vert _{\infty } = \max _{i=0,1}\Vert \varrho _{i}\Vert _{L^\infty }. \end{aligned}$$
(3.17)

Theorem 3.8

For each positive \(\varrho _0=(\varrho _{0,0}, \varrho _{1,0})\in L^\infty (\mathbb {R}^d \rightarrow \mathbb {R}^2)\), the system of kinetic equations (3.15) with the initial condition \((\varrho _{0,t}, \varrho _{1,t})|_{t=0}=(\varrho _{0,0}, \varrho _{1,0})\) has a unique positive global solution such that

$$\begin{aligned} \forall t \ge 0 \qquad \varrho _{i,t} (x) \le \Vert \varrho _{i,0}\Vert _{L^\infty } \exp (\alpha _i t ), \quad i=0,1, \end{aligned}$$
(3.18)

where \(\alpha _i\) are defined in (2.18).

The relationship between the micro- and mesoscopic descriptions is established by the following statement.

Theorem 3.9

Let (2.19) hold and \(k_t\) and \(\varrho _t\) be the solutions described in Theorems 3.5 and 3.8, respectively. Assume also that the initial state \(\mu _0\) is Poisson-approximable by \(\pi _{\varrho _0}\), see Definition 3.6. That is, there exist \(\vartheta _*\in \mathbb {R}\) and \(q_{0, \varepsilon }\), \(\varepsilon \in (0,1]\), such that \(k_{\mu _0} = q_{0,1}\) and \(\Vert q_{0,\varepsilon } - k_{\pi _{\varrho _0}}\Vert _{\vartheta _*} \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Then there exist \(\vartheta > \vartheta _*\) and \(T>0\) such that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _{t\in [0,T]} \Vert q_{t,\varepsilon } - k_{\pi _{\varrho _t}}\Vert _{\vartheta } =0. \end{aligned}$$
(3.19)

Theorems 3.8 and 3.9 are proved in Sect. 6 below.

3.2.2 The Stationary Solutions

Stationary solutions \(\varrho _{i,t} = \varrho _i\), \(t\ge 0\), of the system in (3.15) solve the following system of equations

$$\begin{aligned} \left\{ \begin{array}{ll} (a_0 *\varrho _{0}) \exp \left( - (\phi _0 *\varrho _{1})\right) = \varrho _{0} \left( a_0 *\exp \left( - (\phi _0 *\varrho _{1})\right) \right) , \\ (a_1 *\varrho _{1}) \exp \left( - (\phi _1 *\varrho _{0})\right) = \varrho _{1} \left( a_1 *\exp \left( - (\phi _1 *\varrho _{0})\right) \right) . \end{array} \right. \end{aligned}$$
(3.20)

It might be instructive to rewrite it in the form

$$\begin{aligned} \left\{ \begin{array}{ll} \psi _0 (x) = \int _{\mathbb {R}^d} \tilde{a}_0 (x,y) \psi _0 (y) dy, \\ \psi _1 (x) = \int _{\mathbb {R}^d} \tilde{a}_1 (x,y) \psi _1 (y) dy, \end{array}\right. \end{aligned}$$
(3.21)

where

$$\begin{aligned} \tilde{a}_0 (x,y):= & {} \frac{a_0 (x-y) \exp \left( - (\phi _0 *\varrho _{1})(y)\right) }{\int _{\mathbb {R}^d}a_0 (x-y) \exp \left( - (\phi _0 *\varrho _{1})(y)\right) d y}, \\ \tilde{a}_1 (x,y):= & {} \frac{a_1 (x-y) \exp \left( - (\phi _1 *\varrho _{0})(y)\right) }{\int _{\mathbb {R}^d}a_1 (x-y) \exp \left( - (\phi _1 *\varrho _{0})(y)\right) d y}, \end{aligned}$$

and

$$\begin{aligned} \psi _0 := \varrho _0 \exp \left( \phi _0 *\varrho _{1}\right) , \qquad \psi _1 := \varrho _1 \exp \left( \phi _1 *\varrho _{0}\right) . \end{aligned}$$
(3.22)
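The passage from (3.20) to (3.21) is a direct substitution; writing out the first equation (the second is analogous), with \(\varrho _0 = \psi _0 \exp \left( -(\phi _0 *\varrho _1)\right) \) taken from (3.22), it becomes

$$\begin{aligned} e^{ - (\phi _0 *\varrho _{1})(x)} \int _{\mathbb {R}^d} a_0 (x-y) \psi _0 (y) e^{ - (\phi _0 *\varrho _{1})(y)} d y = \psi _0 (x)\, e^{ - (\phi _0 *\varrho _{1})(x)} \int _{\mathbb {R}^d} a_0 (x-y) e^{ - (\phi _0 *\varrho _{1})(y)} d y. \end{aligned}$$

Cancelling the strictly positive factor \(e^{ - (\phi _0 *\varrho _{1})(x)}\) and dividing by the integral on the right-hand side yields the first equation in (3.21). Note also that \(\int _{\mathbb {R}^d} \tilde{a}_i (x,y)\, dy = 1\), which is exactly why constant \(\psi _i\) solve (3.21).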

For each \(\widetilde{C}_i> 0\), \(i=0,1\), the system in (3.21) has constant solutions \(\psi _i \equiv \widetilde{C}_i\). Then the corresponding \(\varrho _i\) are to be found from

$$\begin{aligned} \left\{ \begin{array}{l} \varrho _0 = \widetilde{C}_0 \exp \left( - (\phi _0 *\varrho _{1})\right) , \\ \varrho _1 = \widetilde{C}_1 \exp \left( - (\phi _1 *\varrho _{0})\right) . \end{array} \right. \end{aligned}$$
(3.23)

Those in (3.23) may be called birth-and-death solutions since they solve the corresponding equation for the birth-and-death version of the Widom–Rowlinson dynamics with specific values of \(\widetilde{C}_i\), expressed in terms of the model parameters, see [7, eq. (4.13)]. The translation invariant (i.e., constant) solution of (3.23) is \(\varrho _i\equiv C_i\), \(i=0,1\), with \(C_i\) satisfying, cf. (3.22),

$$\begin{aligned} \widetilde{C}_0 = C_0 \exp \left( \langle \phi _0 \rangle C_1 \right) , \qquad \widetilde{C}_1 = C_1 \exp \left( \langle \phi _1 \rangle C_0 \right) . \end{aligned}$$
(3.24)
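Given \(\widetilde{C}_i\) and \(\langle \phi _i \rangle \), the pair \((C_0, C_1)\) in (3.24) is easy to compute by iterating (3.23) for constants. The sketch below is purely illustrative: the values \(\langle \phi _i \rangle = \widetilde{C}_i = 1\) (so that \(a = 1 < e\) in the symmetric case discussed next) are arbitrary.

```python
import math

m0, m1 = 1.0, 1.0            # <phi_0>, <phi_1> (illustrative values)
Ct0, Ct1 = 1.0, 1.0          # \tilde C_0, \tilde C_1
C0, C1 = Ct0, Ct1
for _ in range(200):         # iterate (3.23) restricted to constants
    C0, C1 = Ct0 * math.exp(-m0 * C1), Ct1 * math.exp(-m1 * C0)

# residuals of (3.24): C~_i should equal C_i exp(<phi_i> C_{1-i})
res0 = abs(Ct0 - C0 * math.exp(m0 * C1))
res1 = abs(Ct1 - C1 * math.exp(m1 * C0))
```

For these symmetric data the iteration converges to \(C_0 = C_1\) solving \(x e^{x} = 1\) (the omega constant \(\approx 0.5671\)); the contraction factor near the fixed point is \(e^{-C_0} < 1\), which is why plain iteration suffices here.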

For given \(\widetilde{C}_0, \widetilde{C}_1>0\), let \(\mathcal {S}(\widetilde{C}_0, \widetilde{C}_1)\) be the set of all positive \((\varrho _0,\varrho _1)\in L^\infty (\mathbb {R}^d \rightarrow \mathbb {R}^2)\) that satisfy (3.23). Let also \(\mathcal {S}_c(\widetilde{C}_0, \widetilde{C}_1)\) be the subset of \(\mathcal {S}(\widetilde{C}_0, \widetilde{C}_1)\) consisting of constant solutions \(\varrho _i \equiv C_i\), \(i=0,1\), with \(C_i\) satisfying (3.24). The symmetric case of (3.24) with specific values of \(\widetilde{C}_i\) (as mentioned above) was studied in [7, Sect. 5]. Namely, for \( \langle \phi _1 \rangle \widetilde{C}_0 = \langle \phi _0 \rangle \widetilde{C}_1 =:a\), the set \(\mathcal {S}_c(\widetilde{C}_0, \widetilde{C}_1)\) is a singleton \(\{(C_0,C_1)\}\) whenever \(a\le e\). Here

$$\begin{aligned} C_0 = x_0/\langle \phi _1 \rangle , \qquad C_1 =x_0/ \langle \phi _0\rangle , \end{aligned}$$
(3.25)

with some \(x_0 \in (0,1)\). This solution is a stable node for \(a<e\). For \(a>e\), there exist three solutions: (a) \( C_0 = x_1/\langle \phi _1 \rangle \), \( C_1 = x_3/\langle \phi _0 \rangle \); (b) \( C_0 = x_3/\langle \phi _1 \rangle \), \( C_1 = x_1/\langle \phi _0 \rangle \); (c) \( C_0 = x_2/\langle \phi _1 \rangle \), \( C_1 = x_2/\langle \phi _0 \rangle \). The first two solutions are stable nodes and \(x_3>1\). Here stability means that the solution in question has a small neighborhood in \(\mathcal {S}_c(\widetilde{C}_0, \widetilde{C}_1)\) which contains no other solution.
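The trichotomy at \(a = e\) can be reproduced numerically. In the symmetric case, setting \(x = \langle \phi _1 \rangle C_0\) and \(y = \langle \phi _0 \rangle C_1\) turns (3.24) into the pair \(x = a e^{-y}\), \(y = a e^{-x}\), so constant solutions correspond to fixed points of \(g(x) = a \exp (-a e^{-x})\): for \(a > e\) there are three, \(x_1< x_2 < x_3\), with the middle one solving \(x e^{x} = a\). The value \(a = 4\) and the bracketing intervals below are arbitrary illustrative choices.

```python
import math

a = 4.0                                     # illustrative value with a > e
g = lambda x: a * math.exp(-a * math.exp(-x))

def fixed_point(lo, hi):
    # bisection for g(x) = x, assuming g(x) - x changes sign on [lo, hi]
    f = lambda x: g(x) - x
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x1 = fixed_point(0.01, 0.5)                 # lower asymmetric root
x2 = fixed_point(0.5, 2.0)                  # symmetric root: x2 e^{x2} = a
x3 = fixed_point(2.0, 5.0)                  # upper asymmetric root, x3 > 1
```

The asymmetric pair satisfies \(x_1 = a e^{-x_3}\) and \(x_3 = a e^{-x_1}\), matching solutions (a) and (b) above.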

Let us now turn to the study of the stability of the constant solutions of (3.23) with respect to perturbations \(\varrho _i = C_i + \epsilon _i\), \(i=0,1\). By (3.23) and (3.24) we conclude that the perturbations ought to satisfy

$$\begin{aligned} \left\{ \begin{array}{l} \epsilon _0 = C_0\left[ \exp \left\{ -\left( \phi _0 *\epsilon _{1} \right) \right\} -1 \right] , \\ \epsilon _1 = C_1 \left[ \exp \left\{ -\left( \phi _1 *\epsilon _{0} \right) \right\} -1 \right] . \end{array} \right. \end{aligned}$$
(3.26)

Theorem 3.10

The solution \(\varrho _i\equiv C_i\), \(i=0,1\), of the system of equations in (3.20) is locally stable in \(\mathcal {S}(\widetilde{C}_0, \widetilde{C}_1)\), with \(\widetilde{C}_i\) and \(C_i\) satisfying (3.24), whenever the following holds, cf. (3.25),

$$\begin{aligned} C_0 C_1 \langle \phi _0 \rangle \langle \phi _1 \rangle < 1. \end{aligned}$$
(3.27)

This means that there exists \(\delta >0\) such that \(\varrho _i\equiv C_i\), \(i=0,1\), is the only solution in the set \(K_\delta := \mathcal {S}(\widetilde{C}_0, \widetilde{C}_1) \cap \{\varrho : \Vert \varrho -C\Vert _{\infty } < \delta \}\), cf. (3.17).

Proof

Assume that \(\Vert \epsilon _0\Vert _{L^\infty } >0\). By means of the inequality \(|e^{-\alpha }-1| \le |\alpha |e^{|\alpha |}\) we get from (3.26)

$$\begin{aligned} \Vert \epsilon _0\Vert _{L^\infty } \le C_0 C_1 \langle \phi _0 \rangle \langle \phi _1 \rangle \exp \left[ \delta \left( \langle \phi _0 \rangle + \langle \phi _1 \rangle \right) \right] \cdot \Vert \epsilon _0\Vert _{L^\infty } < \Vert \epsilon _0\Vert _{L^\infty }, \end{aligned}$$

holding for small enough \(\delta \) in view of (3.27). This contradicts the assumption, and hence yields \(\epsilon _0 =0\). The corresponding estimate for \(\Vert \epsilon _1\Vert _{L^\infty }\) is obtained analogously. \(\square \)
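The elementary inequality \(|e^{-\alpha } - 1| \le |\alpha | e^{|\alpha |}\) invoked above holds for all real \(\alpha \); a quick numeric spot check on an (arbitrary) grid:

```python
import math

# spot check of |e^{-x} - 1| <= |x| e^{|x|} on a grid covering [-5, 5]
ok = all(abs(math.exp(-x) - 1.0) <= abs(x) * math.exp(abs(x))
         for x in (0.1 * k - 5.0 for k in range(101)))
```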

Assume now that both \(\epsilon _i\) satisfy \(\epsilon _i \in L^\infty (\mathbb {R}^d\rightarrow \mathbb {R})\cap L^1 (\mathbb {R}^d\rightarrow \mathbb {R})\). Then each solution of (3.26) is a fixed point of the nonlinear map \(\Phi :L^\infty (\mathbb {R}^d\rightarrow \mathbb {R}^2)\cap L^1 (\mathbb {R}^d\rightarrow \mathbb {R}^2) \rightarrow L^\infty (\mathbb {R}^d\rightarrow \mathbb {R}^2)\cap L^1 (\mathbb {R}^d\rightarrow \mathbb {R}^2) \) defined by the right-hand side of (3.26). Note that this \(\Phi \) takes values in \(L^\infty (\mathbb {R}^d\rightarrow \mathbb {R}^2)\cap L^1 (\mathbb {R}^d\rightarrow \mathbb {R}^2) \) in view of (2.19). The zero solution of (3.26) becomes unstable whenever there exists a nonzero \(\epsilon = (\epsilon _0, \epsilon _1)\) in the kernel of \(I- \Phi '\), where \(\Phi '\) is the Fréchet derivative of \(\Phi \) at \(\epsilon = (0,0)\). By (3.26) we have

$$\begin{aligned} \Phi ' \epsilon := \Phi ' \left( \begin{array}{ll} \epsilon _0\\ \epsilon _1 \end{array} \right) = \left( \begin{array}{ll} - C_0(\phi _0 *\epsilon _1)\\ - C_1(\phi _1 *\epsilon _0) \end{array} \right) . \end{aligned}$$
(3.28)

Since \(\Phi '\) contains convolutions, it can be partially diagonalized by means of the Fourier transform

$$\begin{aligned} \hat{\phi }_i (p)= \int _{\mathbb {R}^d} \phi _i (x) \exp \left( i (p,x)\right) d x, \qquad p\in \mathbb {R}^d, \ \ i=0,1. \end{aligned}$$

Note that both \(\hat{\phi }_i\) are uniformly continuous on \(\mathbb {R}^d\) and satisfy \(|\hat{\phi }_i(p)| \le \hat{\phi }_i(0) = \langle \phi _i \rangle \), which follows from the positivity of \(\phi _i\). Moreover, \(|\hat{\phi }_i(p)| \rightarrow 0\) as \(|p|\rightarrow +\infty \) by the Riemann-Lebesgue lemma. Note also that \(\hat{\epsilon }_i\), \(i=0,1\), exist since \(\epsilon _i\) are assumed to be integrable.
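These properties can be made concrete with a hypothetical potential, not taken from the paper: \(\phi (x) = e^{-|x|}\) in \(d=1\), whose transform is known in closed form, \(\hat{\phi }(p) = 2/(1+p^2)\), so \(\langle \phi \rangle = \hat{\phi }(0) = 2\). The sketch evaluates the defining integral by trapezoidal quadrature and exhibits the bound \(|\hat{\phi }(p)| \le \hat{\phi }(0)\) together with the Riemann-Lebesgue decay.

```python
import math

def phi_hat(p, R=15.0, n=30000):
    # trapezoidal quadrature of int_{-R}^{R} e^{-|x|} cos(p x) dx
    h = 2 * R / n
    f = lambda x: math.exp(-abs(x)) * math.cos(p * x)
    s = sum(f(-R + k * h) for k in range(n + 1)) - 0.5 * (f(-R) + f(R))
    return s * h

mean_phi = phi_hat(0.0)                       # <phi> = phi_hat(0) = 2 here
vals = {p: phi_hat(p) for p in (0.5, 1.0, 4.0, 20.0)}
# |phi_hat(p)| < phi_hat(0) for p != 0, and phi_hat(p) -> 0 as |p| grows
```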

Theorem 3.11

Assume that the following holds, cf. (3.27),

$$\begin{aligned} C_0 C_1 \langle \phi _0 \rangle \langle \phi _1 \rangle > 1. \end{aligned}$$
(3.29)

Then the constant solution \(\varrho _i\equiv C_i\) of (3.23), and hence of (3.20), is unstable with respect to the perturbation \(\varrho _i = C_i + \epsilon _i\), \(i=0,1\), with \(\epsilon _i \in L^\infty (\mathbb {R}^d\rightarrow \mathbb {R})\cap L^1 (\mathbb {R}^d\rightarrow \mathbb {R})\).

Proof

In view of the mentioned continuity of \(\hat{\phi }_i\) and the Riemann-Lebesgue lemma, the condition in (3.29) implies the existence of \(p\in \mathbb {R}^d\setminus \{0\}\) such that

$$\begin{aligned} C_0 C_1 \hat{\phi }_0 (p) \hat{\phi }_1 (p) =1 . \end{aligned}$$
(3.30)

The instability in question takes place whenever the equation \(\Phi ' \epsilon = \epsilon \), cf. (3.28), has nonzero solutions in the considered space. By means of the Fourier transform it can be turned into

$$\begin{aligned} \hat{\epsilon }_i (p) = C_0 C_1 \hat{\phi }_0 (p) \hat{\phi }_1 (p) \hat{\epsilon }_i (p) , \qquad i=0,1, \end{aligned}$$
(3.31)

that has to hold for some \(p\in \mathbb {R}^d\setminus \{0\}\), which is certainly the case in view of (3.30). \(\square \)
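For a concrete instance of (3.30), take (hypothetically, as in the sketch above) \(\phi _0 = \phi _1 = e^{-|x|}\) in \(d = 1\), so that \(\hat{\phi }_i(p) = 2/(1+p^2)\) and \(\langle \phi _i \rangle = 2\), together with \(C_0 = C_1 = 1\); then (3.29) reads \(4 > 1\), and the root guaranteed by continuity and the Riemann-Lebesgue decay can be located by bisection.

```python
import math

C0 = C1 = 1.0
phat = lambda p: 2.0 / (1.0 + p * p)        # closed form for e^{-|x|}, d = 1
f = lambda p: C0 * C1 * phat(p) ** 2 - 1.0  # f(0) = 3 > 0, f -> -1 at infinity

lo, hi = 0.0, 10.0                          # sign change: f(0) > 0 > f(10)
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
p_star = (lo + hi) / 2                      # for these data the root is p = 1
```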

Given \(C_i\), \(i=0,1\), let \(\epsilon =(\epsilon _0, \epsilon _1)\) solve (3.26). Then \(\varrho = (C_0+\epsilon _0, C_1+ \epsilon _1)\) solves (3.23) with \(\widetilde{C}_i\) as in (3.24) and hence lies in \(\mathcal {S}(\widetilde{C}_0, \widetilde{C}_1)\). Then Theorem 3.11 describes the instability of the solution \(\varrho \equiv (C_0, C_1)\) in the latter set. For this reason, it is independent of the jump kernels \(a_i\). In order to study the corresponding instability in the set of all solutions of (3.20), one has to rewrite (3.20) in the form \(\Psi (\varrho )=0\) and then to show that the Fréchet derivative \(\Psi '\) of \(\Psi \) at \(\varrho \equiv (C_0, C_1)\), defined as a bounded linear self-map of \(L^\infty (\mathbb {R}^d\rightarrow \mathbb {R})\cap L^1 (\mathbb {R}^d\rightarrow \mathbb {R})\), has nonzero \(\epsilon \) in its kernel. By means of the arguments used in the proof of Theorem 3.11 one readily obtains that this is equivalent to, cf. (3.31),

$$\begin{aligned} \hat{\epsilon }_i (p) \left[ 1 - C_0 C_1 \hat{\phi }_0 (p) \hat{\phi }_1 (p)\right] \cdot \left[ \alpha _i - \hat{a}_i(p) \right] =0 , \qquad i=0,1, \end{aligned}$$

that has to hold for some nonzero \(p\in \mathbb {R}^d\). Here \(\hat{a}_i(p)\), \(i=0,1\), are the Fourier transforms of the jump kernels, see (2.18). Thus, if both these kernels are such that \(\hat{a}_i(p) < \hat{a}_i(0) = \alpha _i\) for all nonzero p, then the latter condition turns into that in (3.31).

3.3 Comments

3.3.1 The Microscopic Description

The existence of the global in time evolution stated in Theorem 3.5 is proved in the subsequent sections without any restrictions on the model parameters \(\alpha _i\) and \(\langle \phi _i \rangle \), \(i=0,1\), see (2.18) and (2.19), respectively. That is, the global evolution always exists, even though its ergodicity can hardly be expected. The analysis of the kinetic equation made in Theorem 3.11 points to the possibility of having a phase transition in the model, i.e., to the possibility of having multiple stationary states \(\mu \in \mathcal {P}_\mathrm{exp}(\Gamma ^2)\).

The only work on the Widom–Rowlinson dynamics of an infinite particle system is that in [7], where a birth-and-death (or rather immigration-emigration) version was studied. In that version, the particles of two types appear and disappear at random; the appearance is subject to the repulsion from the particles of the other type. The system’s evolution was described by means of the corresponding initial value problem for the Bogoliubov functional. Namely, for \(t< T\), where \(T<\infty \) is expressed via the model parameters, the evolution \(B_{\mu _0} \mapsto B_t\) was constructed in [7, Theorem 1], where \(B_t: L^1(\mathbb {R}^d \rightarrow \mathbb {R}^2)\rightarrow \mathbb {R}\) is an entire function of exponential type and hence can be written down as, cf. (2.7),

$$\begin{aligned} B_t (\theta ) = \int _{\Gamma _0^2} k_t (\eta )E(\theta ;\eta ) \lambda ( d \eta ). \end{aligned}$$

However, it was not shown that \(B_t\) is the Bogoliubov functional, i.e., that \(k_t\) above is the correlation function of some state \(\mu \in \mathcal {P}_\mathrm{exp} (\Gamma ^2)\). In the present work, for the jump version of the Widom–Rowlinson model we show (Theorem 3.5) that: (a) the evolution \(k_{\mu _0} \mapsto k_t\), and hence also \(B_{\mu _0} \mapsto B_t\), can be continued to all \(t>0\); (b) for each \(t>0\), \( B_t\) is the Bogoliubov functional of a unique sub-Poissonian state \(\mu _t\).

3.3.2 The Mesoscopic Description

In passing to the mesoscopic level of description, we use a scaling procedure described in Sect. 4 below. It is equivalent to the Lebowitz-Penrose scaling used in [7], and also to the Vlasov scaling used in [3, 6]. Our Theorem 3.9 is analogous to [7, Theorem 2], proved there for the birth-and-death version. Note that the convergence in (3.19) is uniform in t, whereas in the mentioned statement of [7] the convergence is pointwise.

Now we turn to the stationary solutions of (3.15) which one obtains from the system in (3.20), or, equivalently, in (3.21). The latter may have nonconstant solutions \(\psi _i\), which then can be used to find the corresponding \(\varrho _i\) from (3.22). These solutions may depend on the jump kernels \(a_i\). The set of all solutions of (3.20) contains the sets \(\mathcal {S}(\widetilde{C}_0, \widetilde{C}_1)\) for each pair \(\widetilde{C}_0\), \(\widetilde{C}_1>0\). The corresponding solutions \(\varrho _i\) are independent of the jump kernels. Moreover, \(\mathcal {S}(\widetilde{C}_0, \widetilde{C}_1)\) is exactly the set of solutions of the birth-and-death kinetic equation [7, Eq. (5.1)] corresponding to the specific values of \(\widetilde{C}_i\). Thus, our Theorems 3.10 and 3.11 also cover the birth-and-death kinetic equation, thereby extending the study in [7, Sect. 5].

4 The Rescaled Evolution

In this section, we construct the evolution \(q_{0, \varepsilon } \mapsto q_{t, \varepsilon }\), \(\varepsilon \in (0, 1]\), which will then be used for: (a) obtaining the evolution stated in Theorem 3.5 in the form \(k_t = q_{t,1}\); (b) proving Theorem 3.9. To this end, along with \(L^\Delta \) defined in (2.23), we will use

$$\begin{aligned} L^{\varepsilon ,\Delta } = R^{-1}_\varepsilon L^\Delta _{\varepsilon } R_\varepsilon , \qquad \varepsilon \in (0,1], \end{aligned}$$
(4.1)

where \(L^\Delta _\varepsilon \) is obtained from \(L^\Delta \) by multiplying both \(\phi _i\) by \(\varepsilon \), and

$$\begin{aligned} (R_\varepsilon q)(\eta _0, \eta _1) = \varepsilon ^{-|\eta _0| - |\eta _1|} q(\eta _0, \eta _1). \end{aligned}$$

We refer the reader to [3, 7] for more information on deriving operators as in (4.1). Denote, cf. (2.21),

$$\begin{aligned} \tau _{x,\varepsilon }^i (y) = \exp \left( -\varepsilon \phi _i (x-y)\right) , \quad t_{x,\varepsilon }^i (y) = \varepsilon ^{-1} \left[ \tau _{x,\varepsilon }^i (y) -1\right] , \ \ i=0,1. \end{aligned}$$
(4.2)

Observe that

$$\begin{aligned} \tau _{x,\varepsilon }^i (y) \rightarrow 1, \qquad t_{x,\varepsilon }^i (y) \rightarrow - \phi _i(x-y), \ \ \mathrm{as} \quad \varepsilon \rightarrow 0. \end{aligned}$$
(4.3)

For \(\varepsilon \in (0,1]\), let \(Q^i_{y,\varepsilon }\) be as in (2.22) with \(t_{x}^i\) replaced by \(t_{x,\varepsilon }^i\) given in (4.2). Then the action of \(L^{\varepsilon ,\Delta }\) is given by the right-hand side of (2.23) with both \(Q^i_{y}\) replaced by the corresponding \(Q^i_{y,\varepsilon }\) and \(\tau _{x}^i\) replaced by \(\tau _{x,\varepsilon }^i\). Note that, cf. (2.19),

$$\begin{aligned} \varepsilon ^{-1} \int _{\mathbb {R}^d} \left( 1 - e^{-\varepsilon \phi _i(x)}\right) d x \le \langle \phi _i \rangle , \quad i=0,1. \end{aligned}$$
(4.4)
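The bound (4.4) is the elementary fact \(1 - e^{-s} \le s\) integrated over \(\mathbb {R}^d\). A numeric spot check with the same illustrative potential as before, \(\phi (x) = e^{-|x|}\) in \(d=1\) with \(\langle \phi \rangle = 2\), also shows the left-hand side increasing to \(\langle \phi \rangle \) as \(\varepsilon \rightarrow 0\), in line with the limits in (4.3).

```python
import math

def lhs(eps, R=15.0, n=30000):
    # trapezoid rule for eps^{-1} int (1 - e^{-eps phi(x)}) dx, phi = e^{-|x|}
    h = 2 * R / n
    f = lambda x: 1.0 - math.exp(-eps * math.exp(-abs(x)))
    s = sum(f(-R + k * h) for k in range(n + 1)) - 0.5 * (f(-R) + f(R))
    return s * h / eps

checks = [lhs(e) for e in (1.0, 0.5, 0.1, 0.01)]   # increases toward <phi> = 2
```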

For each \(\vartheta ''\in \mathbb {R}\), \(k\in \mathcal {K}_{\vartheta ''}\), and \(\varepsilon \in (0,1]\), by (4.4) both \(Q^i_{y,\varepsilon }k\) satisfy the estimates as in (3.8) and (3.9). Therefore, \(L^{\varepsilon ,\Delta } k\) satisfies (3.10), which allows one to introduce the corresponding linear operators \(L^{\varepsilon ,\Delta }_\vartheta : \mathcal {D}(L^\Delta _\vartheta ) \rightarrow \mathcal {K}_\vartheta \) and \(L^{\varepsilon ,\Delta }_{\vartheta '\vartheta } : \mathcal {K}_\vartheta \rightarrow \mathcal {K}_{\vartheta '}\), where \(\mathcal {D}(L^\Delta _\vartheta )\) is defined in (3.7), see also Corollary 3.2 and (3.12). Thus, along with (3.13) we will consider the problem

$$\begin{aligned} \frac{d}{dt} q_{t,\varepsilon } = L^{\varepsilon ,\Delta }_\vartheta q_{t,\varepsilon } , \qquad q_{t,\varepsilon }|_{t=0} = q_{0,\varepsilon }\in \mathcal {K}_{\vartheta _0}, \quad \vartheta _0 < \vartheta . \end{aligned}$$
(4.5)

Its solutions \(q_{t,\varepsilon }\in \mathcal {D}(L^\Delta _\vartheta )\subset \mathcal {K}_{\vartheta }\) are defined in the same way as in Definition 3.3.

For \(\vartheta , \vartheta '\in \mathbb {R}\) such that \(\vartheta < \vartheta '\), we set, cf. (3.11),

$$\begin{aligned} T (\vartheta ',\vartheta ) = \frac{\vartheta ' - \vartheta }{4\alpha }\exp \left( - c e^{\vartheta '}\right) , \qquad \alpha = \max _{i=0,1} \alpha _i, \qquad c = \max _{i=0,1} \langle \phi _i \rangle . \end{aligned}$$
(4.6)

For a fixed \(\vartheta '\in \mathbb {R}\), \(T (\vartheta ', \vartheta )\) can be made arbitrarily large by taking small enough \(\vartheta \). However, if \(\vartheta \) is fixed, then

$$\begin{aligned} \sup _{\vartheta ' > \vartheta } T(\vartheta ', \vartheta ) = \frac{\delta (\vartheta )}{4 \alpha } \exp \left( - \frac{1}{\delta (\vartheta )}\right) =: \tau (\vartheta ) < \infty , \end{aligned}$$
(4.7)

where \(\delta (\vartheta )\) is the unique positive solution of the equation

$$\begin{aligned} \delta e^\delta = \exp \left( - \vartheta - \log c \right) . \end{aligned}$$
(4.8)

Remark 4.1

The supremum in (4.7) is attained at

$$\begin{aligned} \vartheta ' = \vartheta + \delta (\vartheta ). \end{aligned}$$

Note also that \(\delta (\vartheta ) \rightarrow 0\), and hence \(\tau (\vartheta ) \rightarrow 0\), as \(\vartheta \rightarrow +\infty \).
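Both \(\delta (\vartheta )\) and \(\tau (\vartheta )\) are elementary to compute. The sketch below, with the arbitrary sample values \(\alpha = c = 1\) and \(\vartheta = 0\), solves (4.8) by bisection and confirms numerically that \(T(\cdot , \vartheta )\) is maximized at \(\vartheta ' = \vartheta + \delta (\vartheta )\), with maximal value \(\tau (\vartheta )\) as in (4.7).

```python
import math

alpha, c, theta = 1.0, 1.0, 0.0            # sample parameters
T = lambda tp: (tp - theta) / (4 * alpha) * math.exp(-c * math.exp(tp))

# bisection for delta e^delta = exp(-theta)/c; the left side is increasing
target = math.exp(-theta) / c
lo, hi = 0.0, 10.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mid * math.exp(mid) < target else (lo, mid)
delta = (lo + hi) / 2

tau = delta / (4 * alpha) * math.exp(-1.0 / delta)     # tau(theta), see (4.7)
best = max(T(theta + 0.01 * k) for k in range(1, 500)) # brute-force sup in theta'
```

Differentiating (4.6) in \(\vartheta '\) shows the maximizer solves \((\vartheta ' - \vartheta ) c e^{\vartheta '} = 1\), which is exactly (4.8) for \(\delta = \vartheta ' - \vartheta \).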

Proposition 4.2

For arbitrary \( \vartheta _0 \in \mathbb {R}\) and \(\varepsilon \in (0,1]\), the problem in (4.5) with \(q_{0,\varepsilon }\in \mathcal {K}_{\vartheta _0}\) and \(\vartheta = \vartheta _0 + \delta (\vartheta _0)\) has a unique solution \(q_{t,\varepsilon }\in \mathcal {K}_{\vartheta }\) on the time interval \([0, \tau (\vartheta _0))\).

Proof

Take \(T < \tau (\vartheta _0)\) and then pick \(\vartheta ' \in (\vartheta _0, \vartheta _0 + \delta (\vartheta _0))\) such that \( T< T(\vartheta ', \vartheta _0)\). Our aim is to construct the family

$$\begin{aligned} \{S^\varepsilon _{\vartheta '\vartheta _0} (t) \in \mathcal {L}(\mathcal {K}_{\vartheta _0}, \mathcal {K}_{\vartheta '}) :t\in [0, T ( \vartheta ', \vartheta _0))\}, \end{aligned}$$
(4.9)

defined by the series

$$\begin{aligned} S^\varepsilon _{\vartheta '\vartheta _0} (t) = \sum _{n=0}^\infty \frac{t^n}{n!} \left( L^{\varepsilon , \Delta }\right) ^n_{\vartheta '\vartheta _0}. \end{aligned}$$
(4.10)

In (4.9), \(\mathcal {L}(\mathcal {K}_{\vartheta _0}, \mathcal {K}_{\vartheta '})\) stands for the Banach space of bounded linear operators acting from \(\mathcal {K}_{\vartheta _0}\) to \(\mathcal {K}_{\vartheta '}\) equipped with the corresponding operator norm. In (4.10), \(\left( L^{\varepsilon , \Delta }\right) ^0_{\vartheta '\vartheta _0}\) is the embedding operator and

$$\begin{aligned} \left( L^{\varepsilon , \Delta }\right) ^n_{\vartheta '\vartheta _0} := \prod _{l=1}^n L^{\varepsilon , \Delta }_{\vartheta _l \vartheta _{l-1}}, \quad \vartheta _l = \vartheta _0 + l(\vartheta '- \vartheta _0)/n, \end{aligned}$$
(4.11)

for \(n\in \mathbb {N}\). Now we take into account that \(\vartheta _l - \vartheta _{l-1}= (\vartheta '- \vartheta _0)/n\) and that \(L^{\varepsilon , \Delta }\) satisfies (3.11) for all \(\varepsilon \in (0,1]\). This yields the following estimate

$$\begin{aligned} \Vert L^{\varepsilon , \Delta }_{\vartheta _l \vartheta _{l-1}}\Vert\le & {} \frac{n}{e\left( \vartheta ' - \vartheta _0\right) }\left\{ 2 \alpha _0 \exp \left( \langle \phi _0 \rangle e^{\vartheta '}\right) + 2 \alpha _1 \exp \left( \langle \phi _1 \rangle e^{\vartheta '}\right) \right\} \nonumber \\\le & {} n \big /e T (\vartheta ', \vartheta _0), \end{aligned}$$
(4.12)

see (3.11) and (4.6). Then we apply (4.12) in (4.11) and conclude that the series in (4.10) converges in the operator norm, uniformly on [0, T], to the operator-valued function \([0,T] \ni t \mapsto S^\varepsilon _{\vartheta '\vartheta _0} (t) \in \mathcal {L}(\mathcal {K}_{\vartheta _0}, \mathcal {K}_{\vartheta '})\) such that

$$\begin{aligned} \forall t\in [0,T]\qquad \Vert S^\varepsilon _{\vartheta '\vartheta _0} (t) \Vert \le \frac{T (\vartheta ', \vartheta _0)}{T (\vartheta ', \vartheta _0) - t}. \end{aligned}$$
(4.13)
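The bound (4.13) comes from summing the termwise estimates: by (4.12), the n-th term of (4.10) has norm at most \(\frac{t^n}{n!}\left( \frac{n}{e T}\right) ^n = \left( \frac{t}{T}\right) ^n \frac{n^n}{n!\, e^n} \le \left( \frac{t}{T}\right) ^n\), since \(n! \ge (n/e)^n\), and the geometric series sums to \(T/(T-t)\). A numeric check with arbitrary sample values:

```python
import math

T, t = 1.0, 0.7                  # sample values with t < T
q = t / T
term, partial = 1.0, 1.0         # term_0 = 1
for n in range(1, 200):
    # term_n = q^n n^n/(n! e^n); ratio term_n/term_{n-1} <= q for all n
    ratio = q / math.e if n == 1 else q * (1 + 1 / (n - 1)) ** (n - 1) / math.e
    term *= ratio
    partial += term
bound = T / (T - t)              # the geometric-series bound in (4.13)
```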

In a similar way, we get

$$\begin{aligned} \frac{d}{dt} S^\varepsilon _{\vartheta \vartheta _0} (t)= & {} \sum _{n=0}^\infty \frac{t^n}{n!} \left( L^{\varepsilon , \Delta }\right) ^{n+1}_{\vartheta \vartheta _0}\nonumber \\= & {} \sum _{n=0}^\infty \frac{t^n}{n!} L^{\varepsilon , \Delta }_{\vartheta \vartheta '} \left( L^{\varepsilon , \Delta }\right) ^n_{\vartheta '\vartheta _0} = L^{\varepsilon , \Delta }_{\vartheta \vartheta '} S^\varepsilon _{\vartheta '\vartheta _0} (t), \ \quad t\in [0,T]. \end{aligned}$$
(4.14)

Then

$$\begin{aligned} q_{t,\varepsilon } = S^\varepsilon _{\vartheta '\vartheta _0} (t) q_{0, \varepsilon } \in \mathcal {K}_{\vartheta '} \subset \mathcal {D}(L^{\varepsilon ,\Delta }_{\vartheta }), \end{aligned}$$
(4.15)

see Lemma 3.1, is a solution of (4.5) on the time interval \([0, \tau (\vartheta _0))\) since \(T< \tau (\vartheta _0)\) has been taken in an arbitrary way and \(L^{\varepsilon , \Delta }_{\vartheta \vartheta '} q_t = L^{\varepsilon , \Delta }_{\vartheta }q_t\) whenever \(q_t \in \mathcal {K}_{\vartheta '}\), see (3.12).

Let us prove that the solution given in (4.15) is unique. In view of the linearity, to this end it is enough to show that the problem in (4.5) with the zero initial condition has only the zero solution. Assume that \(v_t\in \mathcal {D}(L^{\varepsilon ,\Delta }_{\vartheta })\) is such a solution. Then \(v_t\) lies in \(\mathcal {K}_{\vartheta ''}\) for each \(\vartheta '' > \vartheta \), see (3.3). Fix any such \(\vartheta ''\) and then take \(t < \tau (\vartheta _0)\) such that \(t< T (\vartheta '', \vartheta )\). Then, cf. (3.12),

$$\begin{aligned} v_t= & {} \int _0^t L^{\varepsilon , \Delta }_{\vartheta '' \vartheta } v_s d s \nonumber \\= & {} \int _0^t \int _0^{t_1} \cdots \int _0^{t_{n-1}} \left( L^{\varepsilon , \Delta }\right) ^n_{\vartheta ''\vartheta } v_{t_n} d t_n \cdots d t_1, \end{aligned}$$

where \(n\in \mathbb {N}\) is an arbitrary number. Similarly as above we get from the latter

$$\begin{aligned} \Vert v_t \Vert _{\vartheta ''} \le \frac{t^n}{n!} \left( \frac{n}{e T (\vartheta '', \vartheta )}\right) ^n \sup _{s\in [0,t]}\Vert v_s\Vert _{\vartheta }. \end{aligned}$$

Since n is an arbitrary number, this yields \(v_s =0\) for all \(s\in [0,t]\). The extension of this result to all \(t <\tau (\vartheta _0)\) can be done by repeating this procedure finitely many times. \(\square \)

Remark 4.3

Similarly as in obtaining (4.14) we have that, for each \(\varepsilon \in (0,1]\) and all \(\vartheta _0, \vartheta _1, \vartheta _2 \in \mathbb {R}\) such that \(\vartheta _0< \vartheta _1 < \vartheta _2\), the following holds

$$\begin{aligned} \quad S^\varepsilon _{\vartheta _2 \vartheta _0} (t+s) = S^\varepsilon _{\vartheta _2 \vartheta _1} (t) S^\varepsilon _{\vartheta _1 \vartheta _0} (s), \quad t\in [0, T(\vartheta _2, \vartheta _1)), \quad s \in [0, T(\vartheta _1, \vartheta _0)). \end{aligned}$$
(4.16)

5 The Proof of Theorem 3.5

With the help of Proposition 4.2 we have already obtained the unique solution of (3.13) in the form

$$\begin{aligned} k_t = S^1_{\vartheta \vartheta _0}(t) k_{\mu _0}, \qquad t< \tau (\vartheta _0), \end{aligned}$$
(5.1)

where \(k_{\mu _0} \in \mathcal {K}_{\vartheta _0}\) and \(\vartheta \in ( \vartheta _0 , \vartheta _0 + \delta (\vartheta _0))\) is taken such that \(t < T(\vartheta _0 + \delta (\vartheta _0),\vartheta )\). To prove Theorem 3.5 we first show (Lemma 5.1) that \(k_t\) lies in the cone (3.4) and hence is a correlation function of a unique state \(\mu _t\). Then, in Lemma 5.2, we construct an auxiliary evolution \(u_0 \mapsto u_t\), with which we compare the evolution \(k_{\mu _0} \mapsto k_t\) defined in (5.1). Thereby, we extend the map \(t \mapsto k_{\mu _t}\) to all \(t>0\) as stated in the theorem.

5.1 The Identification Lemma

Our aim now is to show that the solution of (3.13) given in (5.1) has the property \(k_t \in \mathcal {K}^\star _{\vartheta }\), see (3.4). This allows one to identify \(k_t\) with \(k_{\mu _t}\) for a unique state \(\mu _t\). Recall that the bounded operators \(L^\Delta _{\vartheta \vartheta ''}\), \(\vartheta '' < \vartheta \), were introduced in Corollary 3.2.

Lemma 5.1

For arbitrary \(\vartheta \in \mathbb {R}\) and \(\vartheta _0<\vartheta \), and for each \(t\in [0, T(\vartheta , \vartheta _0))\), the operator defined in (4.10) has the property

$$\begin{aligned} S^1_{\vartheta \vartheta _0}(t):\mathcal {K}^\star _{\vartheta _0} \rightarrow \mathcal {K}^\star _{\vartheta }. \end{aligned}$$
(5.2)

Proof

We follow the line of argument used in the proof of [3, Theorem 3.8], see also [10, Lemma 4.8]. Let \(\mu _0\in \mathcal {P}_\mathrm{exp} (\Gamma ^2)\) be such that \(k_{\mu _0} \in \mathcal {K}_{\vartheta _0}^\star \), see Proposition 2.2. For \(\Lambda =(\Lambda _0,\Lambda _1)\), \(\Lambda _i\in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\), \(i=0,1\), let \(\mu ^\Lambda _0\) and \(R^\Lambda _{\mu _0}\) be as in (2.12). For \(N\in \mathbb {N}\), we then set

$$\begin{aligned} R^{\Lambda ,N}_0 (\eta ) = R^\Lambda _{\mu _0} (\eta ) I_N (\eta ), \qquad \eta \in \Gamma _0^2, \end{aligned}$$
(5.3)

where \(I_N (\eta )=1\) whenever \(\max _{i=0,1} |\eta _i| \le N\) and \(I_N (\eta )=0\) otherwise. Set

$$\begin{aligned} \mathcal {R}= & {} L^1 (\Gamma ^2_0, d \lambda ) , \quad \mathcal {R}_\beta = L^1 (\Gamma ^2_0, b_\beta d \lambda ), \nonumber \\ b_\beta (\eta ):= & {} \exp \big ( \beta \left( |\eta _0| + |\eta _1|\right) \big ), \qquad \beta >0. \end{aligned}$$
(5.4)

Let \(\Vert \cdot \Vert _{\mathcal {R}}\) and \(\Vert \cdot \Vert _{\mathcal {R}_\beta }\) be the norms of the spaces introduced in (5.4) and \(\mathcal {R}^+\) and \(\mathcal {R}^+_\beta \) be the corresponding cones of positive elements (in the usual \(L^1\)-sense). For each \(\beta >0\), \(R^{\Lambda ,N}_0\) defined in (5.3) lies in \(\mathcal {R}_\beta ^+ \subset \mathcal {R}^+\) and is such that \(\Vert R^{\Lambda ,N}_0\Vert _{\mathcal {R}} \le 1\). Then one can define a (non-normalized) measure

$$\begin{aligned} \mu _0^{\Lambda ,N}(\eta ) = R_0^{\Lambda ,N}(\eta ) \lambda ( d\eta ), \qquad \eta \in \Gamma _0^2. \end{aligned}$$

As for the Kawasaki model, see [3, Sect. 3.2], one can show that \(L^*\), related by (1.4) to L given in (1.2), generates an evolution \(\mu _0 \mapsto \mu _t\), \(t\ge 0\), which preserves the property \(0\le \mu _t(\Gamma ^2_0)\le 1\); the initial measure \(\mu ^{\Lambda ,N}_0\) clearly has this property. Moreover, for each \(t\ge 0\), the mentioned \(\mu _t\) is absolutely continuous with respect to \(\lambda \), and the equation for \(R_t = d\mu _t / d \lambda \) corresponding to (1.3) can be written in the form

$$\begin{aligned} \frac{d}{dt} R_t = L^\dagger R_t, \qquad R_t|_{t=0}=R_{\mu _0}, \end{aligned}$$
(5.5)

where, cf. (2.23), \(L^\dagger \) is defined by the relation \(L^\dagger R = d(L^* \mu )/d\lambda \), and hence acts according to the following formula

$$\begin{aligned} ( L^\dagger R)(\eta _0, \eta _1)= & {} \sum _{y\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) e(\tau ^0_y;\eta _1) R(\eta _0\setminus y \cup x, \eta _1) d x \nonumber \\&+ \sum _{y\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) e(\tau ^1_y;\eta _0) R(\eta _0, \eta _1\setminus y \cup x) d x \nonumber \\&- \, \Psi (\eta _0, \eta _1) R(\eta _0, \eta _1), \nonumber \\ \Psi (\eta _0, \eta _1):= & {} \sum _{x\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) e(\tau ^0_y;\eta _1) d y \nonumber \\&+ \sum _{x\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) e(\tau ^1_y;\eta _0) d y. \end{aligned}$$
(5.6)

As in [3, Theorem 3.7], one shows that \(L^\dagger \) generates a stochastic \(C_0\)-semigroup, \(S_R:= \{S_R(t)\}_{t\ge 0}\), on \(\mathcal {R}\), which leaves each \(\mathcal {R}_\beta \), \(\beta >0\), invariant. Then the solution of (5.5) is \(R_t = S_R (t) R_0\). For \(R_0^{\Lambda ,N}\) as in (5.3), we thus set

$$\begin{aligned} R_t^{\Lambda ,N} = S_R(t) R_0^{\Lambda ,N}, \qquad t>0. \end{aligned}$$
(5.7)

Then \(R_t^{\Lambda ,N}\in \mathcal {R}_\beta ^+ \subset \mathcal {R}^+\) and \(\Vert R^{\Lambda ,N}_t\Vert _{\mathcal {R}} \le 1\). This yields that, for each \(G\in B_\mathrm{bs}^\star (\Gamma _0^2)\), see (2.14) and (2.15), the following holds

$$\begin{aligned} \Big \langle \Big \langle KG , R^{\Lambda ,N}_t \Big \rangle \Big \rangle \ge 0, \qquad t\ge 0. \end{aligned}$$
(5.8)

The integral in (5.8) exists since \(R_t^{\Lambda ,N}\in \mathcal {R}_\beta \) and KG satisfies (2.5). Moreover, as in (3.11) and (5.21), for each \(\beta '\) such that \(0< \beta ' < \beta \), we derive from (5.6) the following estimate

$$\begin{aligned} \Vert L^\dagger R\Vert _{\mathcal {R}_{\beta '}} \le \frac{4\alpha \Vert R\Vert _{\mathcal {R}_{\beta }} }{e(\beta - \beta ')}. \end{aligned}$$

This allows us to define the corresponding bounded operators \((L^\dagger )^n_{\beta '\beta } : \mathcal {R}_{\beta } \rightarrow \mathcal {R}_{\beta '}\), \(n\in \mathbb {N}\), cf. (4.11) and (5.23), the norms of which satisfy

$$\begin{aligned} \Vert (L^\dagger )^n_{\beta '\beta } \Vert \le n^n \left( e\bar{T}(\beta ,\beta ')\right) ^{-n}. \end{aligned}$$
(5.9)

On the other hand, we have that

$$\begin{aligned} k_0^{\Lambda ,N} (\eta ):= & {} \int _{\Gamma _0^2} R^{\Lambda ,N}_0 (\eta \cup \xi )\lambda (d\xi )\nonumber \\= & {} \int _{\Gamma _0^2} R^{\Lambda ,N}_0 (\eta _0\cup \xi _0, \eta _1\cup \xi _1)(\lambda _0\otimes \lambda _1)(d\xi _0, d \xi _1), \end{aligned}$$
(5.10)

cf. (2.13) and (5.3), lies in \(\mathcal {K}_{\vartheta _0}^\star \subset \mathcal {K}_{\vartheta _0}\), and hence we may set

$$\begin{aligned} k_t^{\Lambda ,N} = S^1_{\vartheta \vartheta _0}(t) k_0^{\Lambda ,N}, \qquad t\in [0,T(\vartheta , \vartheta _0)), \end{aligned}$$
(5.11)

where \(S^1_{\vartheta \vartheta _0}(t) = S^\varepsilon _{\vartheta \vartheta _0}(t)|_{\varepsilon =1}\) is given in (4.10). Then the proof of (5.2) consists in showing:

$$\begin{aligned}&\mathrm{(i)}&\quad \forall G\in B^\star _\mathrm{bs}(\Gamma ^2_0) \qquad \Big \langle \Big \langle G, k^{\Lambda ,N}_t \Big \rangle \Big \rangle \ge 0;\nonumber \\&\mathrm{(ii)}&\quad \Big \langle \Big \langle G, S^1_{\vartheta \vartheta _0} (t) k_0 \Big \rangle \Big \rangle = \lim _{\Lambda \rightarrow \mathbb {R}^d \times \mathbb {R}^d} \lim _{N\rightarrow +\infty } \Big \langle \Big \langle G, k^{\Lambda ,N}_t \Big \rangle \Big \rangle . \end{aligned}$$
(5.12)

To prove claim (i) of (5.12), for a given \(G\in B^\star _\mathrm{bs}(\Gamma ^2_0)\), one sets

$$\begin{aligned} \varphi _G (t) = \Big \langle \Big \langle KG , R^{\Lambda ,N}_t \Big \rangle \Big \rangle , \quad \ \ \psi _G (t) = \Big \langle \Big \langle G , k^{\Lambda ,N}_t \Big \rangle \Big \rangle , \end{aligned}$$
(5.13)

where \(\psi _G\) is defined for t as in (5.11). For a given \(t\in (0,T (\vartheta , \vartheta _0))\), we pick \(\vartheta '<\vartheta \) such that \(t < T (\vartheta ', \vartheta _0)\), and hence \(k_{s}^{\Lambda ,N} \in \mathcal {K}_{\vartheta '}\) for \(s\in [0,t]\). Then a direct calculation based on (4.14) yields, for the n-th derivative,

$$\begin{aligned} \psi _G^{(n)} (t) = \Big \langle \Big \langle G, (L^\Delta )^n_{\vartheta \vartheta '} k^{\Lambda ,N}_t \Big \rangle \Big \rangle , \qquad n \in \mathbb {N}. \end{aligned}$$

As in the derivation of (4.13), we then get from the latter

$$\begin{aligned} \Big |\psi _G^{(n)} (t) \Big | \le A^n n^n C_{\vartheta '}(G) \sup _{s\in [0,t]}\Big \Vert k^{\Lambda ,N}_s\Big \Vert _{\vartheta '}. \end{aligned}$$
(5.14)

Here \(A= 1/ e T(\vartheta , \vartheta ')\) and

$$\begin{aligned} C_{\vartheta '}(G) = \int _{\Gamma _0^2} |G(\eta )|\exp \left( \vartheta ' |\eta _0| + \vartheta '|\eta _1| \right) \lambda (d\eta ) <\infty , \end{aligned}$$

as \(G\in B_\mathrm{bs}(\Gamma _0^2)\). Likewise, from (5.7) we have

$$\begin{aligned} \varphi ^{(n)}_G (t) = \Big \langle \Big \langle KG, (L^\dagger )^n_{\beta '\beta } R^{\Lambda ,N}_t \Big \rangle \Big \rangle . \end{aligned}$$

For the same t as in (5.14), (5.9) applied to the latter yields

$$\begin{aligned} \Big |\varphi _G^{(n)} (t) \Big | \le \bar{A}^n n^n \bar{C}_{\beta '}(G) \sup _{s\in [0,t]}\Big \Vert R^{\Lambda ,N}_s\Big \Vert _{\beta '}. \end{aligned}$$
(5.15)

Here \(\bar{A}= 1/ e \bar{T}(\beta , \beta ')\), cf. (5.9), and

$$\begin{aligned} \bar{C}_{\beta '}(G) = \mathop {{{\mathrm{\mathrm {ess\,sup}}}}}\limits _{\eta \in \Gamma _0^2} |KG(\eta )| \exp \left( -\beta ' |\eta _0| - \beta '|\eta _1|\right) <\infty \end{aligned}$$

which holds in view of (2.5). By (2.23), (5.6), and (5.10) it follows that

$$\begin{aligned} (L^\Delta k^{\Lambda ,N}_0)(\eta )=\int _{\Gamma ^2_0}(L^\dagger R_0^{\Lambda ,N})(\eta \cup \xi ) \lambda (d\xi ), \end{aligned}$$

which then yields

$$\begin{aligned} \forall n\in \mathbb {N}_0 \qquad \varphi ^{(n)}_G(0) = \psi ^{(n)}_G(0). \end{aligned}$$
(5.16)

By the Denjoy–Carleman theorem [4], the bounds (5.14) and (5.15) imply that both functions defined in (5.13) are quasi-analytic on [0, t]. Indeed, the corresponding sequences \(m_n = A^n n^n\) and \(\bar{m}_n = \bar{A}^n n^n\) satisfy \(\sum _{n} m_n^{-1/n} = \sum _{n} 1/An = +\infty \), and likewise for \(\bar{m}_n\). Then (5.16) implies

$$\begin{aligned} \forall t\in [0,T(\vartheta , \vartheta _0)) \qquad \varphi _G(t) = \psi _G(t), \end{aligned}$$
(5.17)

which by (5.8) yields the first line in (5.12). The convergence claimed in (ii) of (5.12) is proved in a standard way, see Appendix in [3]. \(\square \)

Note that (5.17) also yields that

$$\begin{aligned} \forall t\in [0,T(\vartheta , \vartheta _0)) \qquad \Big \langle \Big \langle G, q^{\Lambda ,N}_t \Big \rangle \Big \rangle = \Big \langle \Big \langle G, k^{\Lambda ,N}_t \Big \rangle \Big \rangle , \end{aligned}$$
(5.18)

where G and \(k^{\Lambda ,N}_t\) are as in (5.13) and

$$\begin{aligned} q^{\Lambda ,N}_t (\eta ) := \int _{\Gamma _0^2} R^{\Lambda ,N}_t (\eta \cup \xi ) \lambda (d\xi ), \end{aligned}$$
(5.19)

cf. (5.10).

5.2 An Auxiliary Evolution

The evolution which we construct now will be used to continue the solution \(k_t\) given in (5.1) to all \(t>0\) as stated in Theorem 3.5. The construction employs the operator

$$\begin{aligned} (\bar{L} k) (\eta _0, \eta _1)= & {} \sum _{y\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) k(\eta _0 \setminus y \cup x, \eta _1) d x \nonumber \\&+ \sum _{y\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) k(\eta _0, \eta _1 \setminus y \cup x) d x \end{aligned}$$
(5.20)

obtained from \(L^\Delta \) given in (2.23) by putting \(\phi _i =0\), \(i=0,1\), and then dropping the second and fourth terms. Note that \(\bar{L}\) does not correspond to any Markov evolution as it describes (free) “half-jumps”. Similarly to (3.11), we get

$$\begin{aligned} \Vert \bar{L}k\Vert _\vartheta \le \frac{4 \alpha \Vert k\Vert _{\vartheta ''}}{ e(\vartheta - \vartheta '')}, \end{aligned}$$
(5.21)

which allows us to introduce the operators \((\bar{L}_\vartheta , \mathcal {D} (\bar{L}_\vartheta ))\) and \(\bar{L}_{\vartheta \vartheta ''}\in \mathcal {L}(\mathcal {K}_{\vartheta ''}, \mathcal {K}_{\vartheta })\) such that, cf. (3.12),

$$\begin{aligned} \forall k \in \mathcal {K}_{\vartheta ''} \qquad \bar{L}_{\vartheta \vartheta ''}k = \bar{L}_{\vartheta } k, \qquad \vartheta '' < \vartheta . \end{aligned}$$

Like above, we have that

$$\begin{aligned} \mathcal {K}_{\vartheta ''} \subset \mathcal {D}(\bar{L}_{\vartheta }):= \{ k \in \mathcal {K}_{\vartheta } : \bar{L} k \in \mathcal {K}_{\vartheta }\}, \qquad \vartheta '' < \vartheta . \end{aligned}$$

Note that

$$\begin{aligned} \bar{L}_{\vartheta \vartheta ''} :\mathcal {K}_{\vartheta ''}^+ \rightarrow \mathcal {K}_{\vartheta }^+ , \qquad \vartheta '' < \vartheta , \end{aligned}$$
(5.22)

see (3.5). For \(n\in \mathbb {N}\), we define \((\bar{L})^n_{\vartheta '\vartheta }\) as in (4.11) and denote, cf. (4.6),

$$\begin{aligned} \bar{T}(\vartheta ', \vartheta ) = (\vartheta ' - \vartheta )/4 \alpha , \qquad \vartheta < \vartheta ' . \end{aligned}$$
(5.23)

Our aim is to study the operator-valued function defined by the series

$$\begin{aligned} \bar{S}_{\vartheta '\vartheta } (t) = \sum _{n=0}^\infty \frac{t^n}{n!} \left( \bar{L}\right) ^n_{\vartheta '\vartheta }. \end{aligned}$$
(5.24)
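To see why such a series has a positive radius of convergence, assume that \((\bar{L})^n_{\vartheta '\vartheta }\) satisfies the analogue of (5.9) with \(\bar{T}(\vartheta ', \vartheta )\) — which follows from (5.21) exactly as in (4.11) and (4.13). Then Stirling's inequality \(n! \ge (n/e)^n\) yields

$$\begin{aligned} \frac{t^n}{n!} \left\| \left( \bar{L}\right) ^n_{\vartheta '\vartheta }\right\| \le \frac{t^n}{n!}\, n^n \left( e\bar{T}(\vartheta ', \vartheta )\right) ^{-n} \le \left( \frac{t}{\bar{T}(\vartheta ', \vartheta )}\right) ^{n}, \end{aligned}$$

so the series in (5.24) converges in \(\mathcal {L}(\mathcal {K}_{\vartheta }, \mathcal {K}_{\vartheta '})\) whenever \(t < \bar{T}(\vartheta ', \vartheta )\), which is the content of Lemma 5.2 below.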

Lemma 5.2

For each \(\vartheta _0, \vartheta \in \mathbb {R}\) such that \(\vartheta _0 < \vartheta \), the series in (5.24) defines a continuous function

$$\begin{aligned}{}[0, \bar{T}(\vartheta , \vartheta _0) ) \ni t \mapsto \bar{S}_{\vartheta \vartheta _0} (t) \in \mathcal {L}( \mathcal {K}_{\vartheta _0}, \mathcal {K}_{\vartheta }), \end{aligned}$$
(5.25)

which has the following properties:

(a):

For t as in (5.25), let \(\vartheta ''\in (\vartheta _0, \vartheta )\) be such that \(t< \bar{T}(\vartheta '', \vartheta _0)\). Then, cf. (4.14),

$$\begin{aligned} \frac{d}{dt}\bar{S}_{\vartheta \vartheta _0}(t) = \bar{L}_{\vartheta \vartheta ''} \bar{S}_{\vartheta ''\vartheta _0} (t). \end{aligned}$$
(5.26)
(b):

The problem

$$\begin{aligned} \frac{d}{dt} u_t = \bar{L}_\vartheta u_t , \qquad u_t|_{t=0} = u_0 \in \mathcal {K}^+_{\vartheta _0}, \end{aligned}$$
(5.27)

has a unique solution \(u_t \in \mathcal {K}^+_{\vartheta }\) on the time interval \([0, \bar{T}(\vartheta , \vartheta _0))\) given by

$$\begin{aligned} u_t = \bar{S}_{\vartheta ''\vartheta _0} (t)u_0, \end{aligned}$$
(5.28)

where, for a fixed \(t \in [0, \bar{T}(\vartheta , \vartheta _0))\), \(\vartheta ''\) is chosen to satisfy \(t< \bar{T}(\vartheta '', \vartheta _0)\).

Proof

Proceeding as in the proof of Proposition 4.2, by means of the estimate in (5.21) we prove the convergence of the series in (5.24). This also allows us to prove (5.26), which yields the existence of the solution of (5.27) in the form given in (5.28). The uniqueness is proved analogously to that in Proposition 4.2. The stated positivity of \(u_t\) follows from (5.24) and (5.22). \(\square \)

Corollary 5.3

For a given \(C>0\), we set \(\vartheta _0 =\log C\) and \(u_0 (\eta ) = C^{|\eta _0|+|\eta _1|}\) in (5.27) and (5.28). Then the unique solution of (5.27) is

$$\begin{aligned} u_t (\eta ) = C^{|\eta _0|+|\eta _1|} \exp \left\{ t(\alpha _0 |\eta _0|+ \alpha _1 |\eta _1|)\right\} . \end{aligned}$$
(5.29)

This solution can naturally be continued to all \(t>0\) for which it lies in \(\mathcal {K}_{\vartheta (t)}\) with

$$\begin{aligned} \vartheta (t) = \log C + t \max _{i=0,1}\alpha _i. \end{aligned}$$
(5.30)

Proof

In view of the lack of interaction in (5.20), the equations for the individual components \(u^{(n)}_t\) take the following (decoupled) form

$$\begin{aligned}&\frac{d}{dt} u_t^{(n)} (x_1, \dots ,x_{n_0}; y_1, \dots ,y_{n_1}) \\&\quad = \sum _{i=1}^{n_0} \int _{\mathbb {R}^d} a_0 (x- x_i) u_t^{(n)} (x_1, \dots , x_{i-1}, x , x_{i+1}, \dots ,x_{n_0}; y_1, \dots ,y_{n_1}) d x\\&\qquad + \sum _{i=1}^{n_1} \int _{\mathbb {R}^d} a_1 (y- y_i) u_t^{(n)} (x_1, \dots , x_{n_0}; y_1, \dots , y_{i-1}, y , y_{i+1}, \dots , y_{n_1}) d y, \qquad n\in \mathbb {N}^2, \end{aligned}$$

which for the translation-invariant initial condition \(u_0\) yields (5.29). \(\square \)
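To make the last step explicit: since \(u_0(\eta ) = C^{|\eta _0| + |\eta _1|}\) does not depend on the positions, each \(u^{(n)}_t\) remains constant in the spatial variables, and — assuming, as the form of (5.29) suggests, that \(\alpha _i = \int _{\mathbb {R}^d} a_i(x) d x\) — each integral on the right-hand side contributes a factor \(\alpha _0\) or \(\alpha _1\). The system above thus reduces to

$$\begin{aligned} \frac{d}{dt} u^{(n)}_t = \left( \alpha _0 n_0 + \alpha _1 n_1\right) u^{(n)}_t, \qquad u^{(n)}_0 = C^{n_0 + n_1}, \end{aligned}$$

whose solution is exactly (5.29).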

5.3 The Global Solution

As follows from Proposition 4.2 and Lemma 5.1, the unique solution of the problem (3.13) with \(k_{0}\in \mathcal {K}^\star _{\vartheta _0}\) lies in \(\mathcal {K}_\vartheta ^\star \) for \(t \in (0, T (\vartheta , \vartheta _0))\). At the same time, for fixed \(\vartheta _0\), \(T (\vartheta , \vartheta _0)\) is bounded, see (4.7). This means that the mentioned solution cannot be directly continued to all \(t>0\). In this subsection, by a comparison method we prove that, for \(t \in (0, T (\vartheta , \vartheta _0))\), \(k_t\) satisfies (3.14) which is then used to get the continuation in question, cf. Corollary 5.3. Recall that the operators \(Q_y^i\), \(i=0,1\), were introduced in (2.22) and the cone \(\mathcal {K}^+_\vartheta \) was defined in (3.5).

Lemma 5.4

For each \(k_0\in \mathcal {K}_{\vartheta _0}^\star \) and \(t \in (0, T (\vartheta , \vartheta _0))\), \(k_t := S^1_{\vartheta \vartheta _0}(t)k_0\) has the property

$$\begin{aligned} k_t - e(\tau ^i_y;\cdot ) Q^i_y k_t \in \mathcal {K}_\vartheta ^+, \qquad i=0,1, \end{aligned}$$
(5.31)

holding for Lebesgue-almost all \(y\in \mathbb {R}^d\).

Proof

Clearly, it is enough to show that (5.31) holds for \(i=0\). For a fixed y, we denote

$$\begin{aligned} v_{t,1} = k_t - Q^0_y k_t, \quad v_{t,2} = [1- e(\tau ^0_y;\cdot )] Q^0_y k_t. \end{aligned}$$

The proof will be complete once we show that, for all \(G\in B_\mathrm{bs}(\Gamma ^2_0)\) such that \(G(\eta ) \ge 0\) for \(\lambda \)-almost all \(\eta \in \Gamma _0^2\), the following holds

$$\begin{aligned} \langle \langle G, v_{t,j} \rangle \rangle \ge 0, \qquad j=1,2. \end{aligned}$$
(5.32)

Let \(\Lambda \), N, and \(k_0^{\Lambda ,N}\) be as in (5.10), and then \(k_t^{\Lambda ,N}\) be as in (5.11). Next, let \(v_{t,j}^{\Lambda ,N}\), \(j=1,2\), be defined as above with \(k_t\) replaced by \(k_t^{\Lambda ,N}\). By (5.18) and (5.19) we then get

$$\begin{aligned} \langle \langle G, Q^0_y k^{\Lambda ,N}_t \rangle \rangle= & {} \int _{\Gamma _0^2} \widetilde{G}(\eta ) k^{\Lambda ,N}_t (\eta ) \lambda (d \eta ) \\= & {} \int _{\Gamma _0^2} \int _{\Gamma _0^2} \widetilde{G}(\eta ) R^{\Lambda ,N}_t (\eta \cup \xi ) \lambda (d\eta ) \lambda (d\xi ), \nonumber \end{aligned}$$
(5.33)

where

$$\begin{aligned} \widetilde{G}(\eta _0,\eta _1):=\sum _{\xi \subset \eta _1} e(t^0_y; \xi ) G(\eta _0 , \eta _1 \setminus \xi ). \end{aligned}$$

Furthermore, by (5.33) we get

$$\begin{aligned} \Big \langle \Big \langle G, Q^0_y k^{\Lambda ,N}_t \Big \rangle \Big \rangle&= \int _{\Gamma _0^2} G(\eta _0, \eta _1) \nonumber \\&\quad \times \int _{\Gamma _0^2} \left( \int _{\Gamma _0} e(t^0_y;\zeta ) R^{\Lambda ,N}_t(\eta _0\cup \xi _0, \eta _1 \cup \xi _1 \cup \zeta ) \lambda _1 ( d \zeta ) \right) \lambda (d\eta ) \lambda ( d \xi ) \nonumber \\&= \int _{\Gamma _0^2} G(\eta _0, \eta _1) \int _{\Gamma _0^2} \left( \sum _{\zeta \subset \xi _1} e(t^0_y;\zeta )\right) R^{\Lambda ,N}_t(\eta _0\cup \xi _0, \eta _1 \cup \xi _1) \lambda (d\eta ) \lambda ( d \xi ). \end{aligned}$$
(5.34)

By (2.21) it follows that

$$\begin{aligned} \sum _{\zeta \subset \xi _1} e(t^0_y;\zeta ) = e(\tau ^0_y;\xi _1). \end{aligned}$$

We apply this in the last line of (5.34) and obtain

$$\begin{aligned} \Big \langle \Big \langle G, Q^0_y k^{\Lambda ,N}_t \Big \rangle \Big \rangle&= \int _{\Gamma _0^2} G(\eta _0, \eta _1) \int _{\Gamma _0^2} e(\tau ^0_y;\xi _1) R^{\Lambda ,N}_t(\eta _0\cup \xi _0, \eta _1 \cup \xi _1) \lambda (d\eta ) \lambda ( d \xi )\nonumber \\&\le \int _{\Gamma _0^2} G(\eta _0, \eta _1) \int _{\Gamma _0^2} R^{\Lambda ,N}_t(\eta _0\cup \xi _0, \eta _1 \cup \xi _1) \lambda (d\eta ) \lambda ( d \xi )\nonumber \\&= \Big \langle \Big \langle G, k^{\Lambda ,N}_t \Big \rangle \Big \rangle , \end{aligned}$$
(5.35)

which, after passing to the limit as in (5.12), yields (5.32) for \(j=1\). For the same G, we set \(\bar{G} = e(\tau ^0_y;\cdot ) G\). Then by (2.21) and the second line in (5.35) we get

$$\begin{aligned} \Big \langle \Big \langle \bar{G}, Q^0_y k^{\Lambda ,N}_t \Big \rangle \Big \rangle \le \Big \langle \Big \langle G, Q^0_y k^{\Lambda ,N}_t \Big \rangle \Big \rangle , \end{aligned}$$

which, after passing to the limit as in (5.12), yields (5.32) for \(j=2\). \(\square \)

Lemma 5.5

Let \(C>0\) be such that the initial condition in (3.13) satisfies \(k_{\mu _0}(\eta ) =k_0 (\eta ) \le C^{|\eta _0|+|\eta _1|}\). Then for all \(t< T (\vartheta , \vartheta _0)\) with \(\vartheta _0 = \log C\) and any \(\vartheta > \vartheta _0\), the unique solution of (3.13) given by the formula

$$\begin{aligned} k_t = S^1_{\vartheta \vartheta _0} (t) k_0 \end{aligned}$$
(5.36)

satisfies (3.14) for \(\lambda \)-almost all \(\eta \in \Gamma _0^2\).

Proof

Take any \(\vartheta > \vartheta _0\) and fix \(t< T (\vartheta , \vartheta _0)\); then pick \(\vartheta ^1 \in (\vartheta _0 , \vartheta )\) such that \(t< T (\vartheta ^1, \vartheta _0)\). Next take \(\vartheta ^2, \vartheta ^3\in \mathbb {R}\) such that \(\vartheta ^1<\vartheta ^2 < \vartheta ^3\) and \(t< \bar{T} (\vartheta ^3, \vartheta ^2)\). The latter is possible since \(\bar{T}\) depends only on the difference \(\vartheta ^3 - \vartheta ^2\), see (5.23). For the fixed t, \(k_t \in \mathcal {K}_{\vartheta ^1}^\star \hookrightarrow \mathcal {K}_{\vartheta ^3}^\star \), and hence one can write

$$\begin{aligned} u_t= & {} \bar{S}_{\vartheta ^3 \vartheta _0}(t) u_0 \nonumber \\= & {} (u_0 - k_0)+ k_t + \int _0^t \bar{S}_{\vartheta ^3 \vartheta ^2} (t-s) D_{\vartheta ^2\vartheta ^1} k_s ds, \end{aligned}$$
(5.37)

where

$$\begin{aligned} D_{\vartheta \vartheta ''} = \bar{L}_{\vartheta \vartheta ''} - L^\Delta _{\vartheta \vartheta ''}, \qquad D_\vartheta = \bar{L}_\vartheta - L^\Delta _\vartheta , \end{aligned}$$

and the latter two operators are as in (5.27) and (3.13) respectively. By Lemma 5.1, for \(s\le t\), \(k_s \in \mathcal {K}_{\vartheta ^1}^\star \). By (2.23), (5.20), and Lemma 5.4 we have that \(D_{\vartheta ^2\vartheta ^1}: \mathcal {K}_{\vartheta ^1}^\star \rightarrow \mathcal {K}_{\vartheta ^2}^+\). Then by Lemma 5.2 the third summand in the second line in (5.37) is in \(\mathcal {K}_{\vartheta ^3}^+\) which completes the proof since \(u_0 - k_0\) is also positive. \(\square \)

Proof of Theorem 3.5

According to Definition 3.3 and Remark 3.4 the map \([0,+\infty )\ni t \mapsto k_t \in \mathcal {K}^\star \) is the solution in question if: (a) \(k_t(\emptyset ,\emptyset )=1\); (b) for each \(t>0\), there exists \(\vartheta ''\in \mathbb {R}\) such that \(k_t \in \mathcal {K}_{\vartheta ''}\) and \(\frac{d}{dt} k_t = L^\Delta _\vartheta k_t\) for each \(\vartheta > \vartheta ''\).

Let \(k_0\) and \(C>0\) be as in the statement of Theorem 3.5. Set \(\vartheta ^*= \log C\). Then, for \(\vartheta = \vartheta ^* + \delta (\vartheta ^*)\), see (4.7) and (4.8), \(k_t\) as given in (5.36) is the unique solution of (3.13) in \(\mathcal {K}_\vartheta \) on the time interval \([0,T(\vartheta , \vartheta ^*))\). By (2.23) we have

$$\begin{aligned} \left( \frac{d}{dt} k_t\right) (\emptyset , \emptyset ) = (L^\Delta k_t)(\emptyset , \emptyset ) =0, \end{aligned}$$

which yields that \(k_t(\emptyset , \emptyset ) = k_0(\emptyset , \emptyset ) =1\). By Lemma 5.1, \(k_t \in \mathcal {K}_\vartheta ^\star \), and hence \(k_t\) is the solution in question for \(t< \tau ( \vartheta ^*)\). According to Lemma 5.5, \(k_t\) lies in \(\mathcal {K}_{\vartheta (t)}\) with \(\vartheta (t)\) given in (5.30). Fix any \(\epsilon \in (0,1)\) and then set \(s_0=0\), \(s_1 =(1-\epsilon ) \tau (\vartheta ^*)\), and \(\vartheta ^{*}_1 = \vartheta (s_1)\). Thereafter, set \(\vartheta ^1 = \vartheta ^{*}_{1} + \delta (\vartheta ^{*}_{1})\) and

$$\begin{aligned} k_{t+ s_1} = S^1_{\vartheta ^1 \vartheta ^{*}_{1}}(t)k_{s_1}, \qquad t \in [0, \tau ( \vartheta ^{*}_{1})). \end{aligned}$$

Note that for t such that \(t+s_1 < \tau (\vartheta ^*)\),

$$\begin{aligned} k_{t+ s_1} = S^1_{\vartheta ^1 \vartheta ^{*}}(t+s_1)k_{0}, \end{aligned}$$

see (4.16). Thus, by Lemmas 5.1 and 5.5 the map \([0, s_1 + \tau (\vartheta ^{*}_{1})) \ni t \mapsto k_t \in \mathcal {K}_{\vartheta (t)}\) with

$$\begin{aligned} k_t = \left\{ \begin{array}{ll} S^1_{\vartheta ^{*}_1 \vartheta ^{*}}(t) k_0 \quad &{}t\le s_1;\\ S^1_{\vartheta ^1 \vartheta ^{*}_{1}}(t-s_1) k_{s_1} \quad &{}t\in [s_1, s_1+ \tau ( \vartheta ^{*}_{1})) \end{array} \right. \end{aligned}$$

is the solution in question on the indicated time interval. We continue this procedure by setting \(s_n =(1-\epsilon ) \tau (\vartheta ^*_{n-1})\), \(n\ge 2\), and then

$$\begin{aligned} \vartheta ^*_n = \vartheta (s_1+\cdots +s_n ), \qquad \vartheta ^n = \vartheta ^*_n + \delta (\vartheta ^*_n). \end{aligned}$$
(5.38)

This yields the solution in question on the time interval \([0, s_1 + \cdots + s_{n+1}]\) which for \(t \in [s_1 +\cdots + s_l, s_1 +\cdots + s_{l+1}]\), \(l=0, \dots , n\), is given by

$$\begin{aligned} k_t = S^1_{\vartheta ^l \vartheta ^*_{l}}(t-(s_1 +\cdots +s_l)) k_{s_l}. \end{aligned}$$

Then the global solution in question exists whenever the series

$$\begin{aligned} \sum _{n\ge 1}s_n = (1-\epsilon ) \sum _{n\ge 1} \tau (\vartheta ^*_n) \end{aligned}$$

diverges. Assume that this is not the case. Then by (5.30) and (5.38) we get that both (a) and (b) ought to be true, where (a) \(\sup _{n\ge 1} \vartheta ^*_n =: \bar{\vartheta }<+ \infty \) and (b) \(\tau (\vartheta ^*_n) \rightarrow 0\) as \(n\rightarrow +\infty \). However, by (4.7) and (4.8) it follows that (a) implies \(\tau (\vartheta ^*_n) \ge \tau (\bar{\vartheta }) > 0\), which contradicts (b). \(\square \)
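The divergence mechanism in this argument can be illustrated numerically. The sketch below iterates \(s_n = (1-\epsilon )\tau (\vartheta ^*_{n-1})\) with \(\vartheta ^*_n = \log C + \alpha \left( s_1 + \cdots + s_n\right) \), cf. (5.30) and (5.38); here \(\tau \) is a hypothetical stand-in (continuous, strictly positive, decreasing — the only properties the contradiction argument uses), not the actual function of (4.7) and (4.8), and all parameter values are illustrative.

```python
import math

# Hypothetical stand-in for tau of (4.7)-(4.8): continuous,
# strictly positive, and decreasing in theta.
def tau(theta):
    return math.exp(-theta)

C, alpha, eps = 1.0, 1.0, 0.5
theta_star = math.log(C)           # theta*_0 = log C
total, steps = 0.0, 0

# Iterate s_n = (1 - eps) * tau(theta*_{n-1}).  If sum s_n converged,
# theta*_n would stay bounded, tau(theta*_n) would stay bounded away
# from zero, and the steps could not shrink to zero -- a contradiction.
while total < 5.0 and steps < 100_000:
    s = (1 - eps) * tau(theta_star)
    total += s
    theta_star = math.log(C) + alpha * total   # cf. (5.30) and (5.38)
    steps += 1
```

With these illustrative parameters the partial sums grow roughly like \(\log n\): slowly, but past every fixed bound, in line with the proof.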

6 The Proof of Theorems 3.8 and 3.9

6.1 The Kinetic Equations

Here we prove Theorem 3.8. For a continuous function

$$\begin{aligned}{}[0,+\infty ) \ni t \mapsto \varrho _t = (\varrho _{0,t}, \varrho _{1,t}) \in L^\infty (\mathbb {R}^d \rightarrow \mathbb {R}^2), \end{aligned}$$

cf. (3.16), let us consider

$$\begin{aligned} F_{0,t}(\varrho )(x)&= \varrho _{0,0}(x) e^{-\alpha _0 t} + \int _0^t e^{-\alpha _0(t-s)} (a_0 *\varrho _{0,s})(x)\exp \left[ - (\phi _0 *\varrho _{1,s})(x)\right] ds \nonumber \\&\quad + \int _0^t e^{-\alpha _0(t-s)} \varrho _{0,s}(x) \bigg (a_0 *\bigg [ 1- \exp \left[ - (\phi _0 *\varrho _{1,s})\right] \bigg ] \bigg )(x) ds ,\nonumber \\ F_{1,t}(\varrho )(x)&= \varrho _{1,0}(x) e^{-\alpha _1 t} + \int _0^t e^{-\alpha _1(t-s)} (a_1 *\varrho _{1,s})(x)\exp \left[ - (\phi _1 *\varrho _{0,s})(x)\right] ds \nonumber \\&\quad + \int _0^t e^{-\alpha _1(t-s)} \varrho _{1,s}(x) \bigg (a_1 *\bigg [ 1- \exp \left[ - (\phi _1 *\varrho _{0,s})\right] \bigg ] \bigg )(x) ds. \end{aligned}$$
(6.1)

For a given \(T>0\), let \(\mathcal {C}_T\) stand for the Banach space of continuous functions

$$\begin{aligned}{}[0, T] \ni t \mapsto (\varrho _{0,t}, \varrho _{1,t}) \in L^\infty (\mathbb {R}^d \rightarrow \mathbb {R}^2), \end{aligned}$$
(6.2)

with norm

$$\begin{aligned} \Vert \varrho \Vert _T = \max _{i=0,1} \sup _{t\in [0,T]}\left\{ \Vert \varrho _{i,t}\Vert _{L^\infty } e^{-\alpha _i t}\right\} . \end{aligned}$$
(6.3)

Let also \(\mathcal {C}_T^+\) denote the set of all positive \(\varrho \in \mathcal {C}_T\), i.e., such that \(\varrho _{i,t}(x) \ge 0\) for all \(i=0,1\), \(t\in [0,T]\), and Lebesgue-almost all x. By means of \(F_{i,t}\) introduced in (6.1) we then define the map

$$\begin{aligned} \mathcal {C}_T \ni \varrho \mapsto F(\varrho ) = (F_0(\varrho ), F_{1}(\varrho )) \in \mathcal {C}_T \end{aligned}$$

such that the values of \(F_i(\varrho )\) are given in the right-hand sides of (6.1). By direct inspection one concludes that both \(F_{i,t}(\varrho )\), \(i=0,1\), are continuously differentiable in t, and the function as in (6.2) is a positive solution of (3.15) on [0, T] if and only if it solves in \(\mathcal {C}_T^+\) the following fixed-point equation

$$\begin{aligned} \varrho = F(\varrho ). \end{aligned}$$
(6.4)

Let \(C>0\) be an arbitrary number and \(\varrho _{i,0}\), \(i=0,1\), be as in (3.18) and (6.1). Set

$$\begin{aligned} \varDelta _C = \{ \varrho \in \mathcal {C}_T^+ : (\varrho _{0,t}, \varrho _{1,t})|_{t=0}=(\varrho _{0,0}, \varrho _{1,0}), \ \ \mathrm{and } \ \ \Vert \varrho \Vert _T \le C\}. \end{aligned}$$
(6.5)

By (6.1) one readily gets that \(F:\mathcal {C}_T^+\rightarrow \mathcal {C}_T^+\). Let us show that

$$\begin{aligned} \forall C>0 \qquad F:\varDelta _C\rightarrow \varDelta _C. \end{aligned}$$
(6.6)

For \(\varrho \in \varDelta _C\), from the first equation in (6.1) one gets

$$\begin{aligned} \Vert F_{0,t} (\varrho ) \Vert _{L^\infty }\le & {} C e^{-\alpha _0 t} + 2 \alpha _0 e^{-\alpha _0 t} \int _0^t e^{\alpha _0 s} \Vert \varrho _{0,s} \Vert _{L^\infty } d s \nonumber \\\le & {} C e^{\alpha _0 t}, \qquad \qquad t \in [0,T]. \end{aligned}$$
(6.7)

Similarly, \(\Vert F_{1,t} (\varrho ) \Vert _{L^\infty } \le C e^{\alpha _1 t}\), which proves (6.6). To solve (6.4) we apply the Banach contraction principle. To this end we pick \(T>0\) such that F is a contraction on (6.5), as follows. For \(\varrho , \bar{\varrho } \in \varDelta _C\), as in (6.7) we obtain

$$\begin{aligned} \Vert F_{0,t} (\varrho ) - F_{0,t} (\bar{\varrho }) \Vert _{L^\infty }&\le 2 \alpha _0 e^{-\alpha _0 t} \int _0^t e^{\alpha _0 s} \Vert \varrho _{0,s} - \bar{\varrho }_{0,s}\Vert _{L^\infty } d s \nonumber \\&\quad + 2 \alpha _0 e^{-\alpha _0 t} \int _0^t e^{\alpha _0 s} \Vert \bar{\varrho }_{0,s}\Vert _{L^\infty } \Vert \varrho _{1,s} - \bar{\varrho }_{1,s}\Vert _{L^\infty } d s \nonumber \\&\le e^{\alpha _0 t} \Vert \varrho - \bar{\varrho }\Vert _T \bigg ( 1 - e ^{-2 \alpha _0 t}\left[ 1 - \frac{2}{3} C \left( e^{3 \alpha _0 t} -1 \right) \right] \bigg ). \end{aligned}$$

The corresponding estimate for \(\Vert F_{1,t} (\varrho ) - F_{1,t} (\bar{\varrho }) \Vert _{L^\infty }\) (with \(e^{\alpha _1 t}\)) can be obtained in the same way. Then according to (6.3) F is a contraction on \(\varDelta _C\) whenever \(C>0\) and T satisfy

$$\begin{aligned} e^{3 \alpha T} < 1 + \frac{3}{2C}, \quad \qquad \alpha :=\max _{i=0,1}\alpha _i. \end{aligned}$$
(6.8)

This yields the existence of the unique positive solution of (3.15) on the time interval [0, T], where T is determined by the initial condition \((\varrho _{0,0}, \varrho _{1,0})\) through (6.8). This solution lies in \(\varDelta _C\) and hence

$$\begin{aligned} \Vert \varrho _{i,T}\Vert _{L^\infty } \le e^{\alpha T} C, \qquad i=0,1. \end{aligned}$$
(6.9)
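The fixed-point formulation lends itself to a simple numerical sketch. The Python fragment below performs the Picard iteration \(\varrho ^{(m+1)} = F(\varrho ^{(m)})\) for (6.4), with F from (6.1), on a periodic spatial grid; the kernels \(a_i\), potentials \(\phi _i\), horizon T, and initial densities are illustrative choices, not taken from the paper, and the convolutions are computed via FFT.

```python
import numpy as np

# Picard iteration for rho = F(rho), eq. (6.4), with F from (6.1),
# on a periodic grid.  All concrete parameters are illustrative.
L_box, n = 10.0, 64
x = np.linspace(0.0, L_box, n, endpoint=False)
dx = L_box / n

def bump(sig):
    d = np.minimum(x, L_box - x)               # periodic distance to 0
    return np.exp(-0.5 * (d / sig) ** 2)

a0, a1 = bump(1.0), bump(1.5)                  # jump kernels a_i >= 0
alpha0, alpha1 = a0.sum() * dx, a1.sum() * dx  # alpha_i = int a_i dx
phi0, phi1 = 0.3 * bump(0.8), 0.3 * bump(0.8)  # repulsion potentials

def conv(k, f):                                # periodic convolution (k * f)(x)
    return np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(f))) * dx

T, nt = 0.05, 11                  # short horizon, cf. the contraction condition (6.8)
ts = np.linspace(0.0, T, nt)
dt = ts[1] - ts[0]
rho_init = np.stack([0.5 + 0.1 * np.cos(2 * np.pi * x / L_box),
                     0.5 * np.ones(n)])

def F(rho):                                    # rho has shape (2, nt, n)
    out = np.empty_like(rho)
    for i, (a, al, ph) in enumerate([(a0, alpha0, phi0), (a1, alpha1, phi1)]):
        j = 1 - i
        integ = np.empty((nt, n))
        for k in range(nt):
            e = np.exp(-conv(ph, rho[j, k]))
            # a*(1 - e) = alpha - a*e, since a*1 = alpha
            integ[k] = conv(a, rho[i, k]) * e + rho[i, k] * (al - conv(a, e))
        for k in range(nt):
            w = np.exp(-al * (ts[k] - ts[:k + 1]))
            wt = np.full(k + 1, dt)
            wt[0] = wt[-1] = dt / 2.0          # trapezoid weights in s
            val = ((w * wt)[:, None] * integ[:k + 1]).sum(axis=0) if k else 0.0
            out[i, k] = rho_init[i] * np.exp(-al * ts[k]) + val
    return out

rho = np.repeat(rho_init[:, None, :], nt, axis=1)  # constant-in-t initial guess
diffs = []
for _ in range(6):
    new = F(rho)
    diffs.append(float(np.max(np.abs(new - rho))))
    rho = new
```

For T small enough, cf. (6.8), the successive differences decay geometrically and the iterates stay positive; on longer horizons one restarts from \(\varrho _T\), exactly as in the continuation step carried out in the text.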

Now we consider the problem (3.15) for \(\varrho ^{(1)}_{i, t} = \varrho _{i,T+t}\), \(i=0,1\), where \(\varrho \) is the solution just constructed. For this new problem, by (6.9) we have

$$\begin{aligned} \Vert \varrho ^{(1)}_{i,0}\Vert _{L^\infty } \le C_1 := e^{\alpha T} C, \qquad i=0,1. \end{aligned}$$

Then we repeat the above construction and obtain the solution \(\varrho ^{(1)}\) on the time interval \([0,T_1]\) with \(T_1>0\) satisfying, cf. (6.8),

$$\begin{aligned} e^{3 \alpha T_1} = 1 + \frac{1}{C} e^{-\alpha T} < 1 + \frac{3}{2C}e^{-\alpha T} = 1 + \frac{3}{2C_1}. \end{aligned}$$

By further repeating this construction we obtain \(\varrho _{i,t}^{(n)} = \varrho _{i, T+T_1 + \cdots + T_{n-1} + t}\), \(i=0,1\), \(t\in [0,T_n]\), where the sequence \(\{T_n\}_{n\in \mathbb {N}}\) is defined recursively by the condition

$$\begin{aligned} e^{3 \alpha T_n} = 1 + \frac{1}{C}\exp \left[ -\alpha \left( T + T_1 +\cdots + T_{n-1}\right) \right] , \quad n\in \mathbb {N}. \end{aligned}$$
(6.10)

Thus, the global solution in question exists if the series \(\sum _{n} T_n\) is divergent. Assume that this is not the case. Then the right-hand side of (6.10) is bounded from below by some \(b >1\), uniformly in n. This yields that \(T_n \ge \log b/ 3 \alpha >0\), holding for all \(n\in \mathbb {N}\), which contradicts the summability of \(\{T_n\}_{n\in \mathbb {N}}\) and thus completes the proof of Theorem 3.8.
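The recursion (6.10) is also easy to iterate numerically, which makes the divergence of \(\sum _{n} T_n\) visible. In the sketch below the values of \(\alpha \) and C are arbitrary, and the initial T is any value satisfying (6.8).

```python
import math

# Iterate the recursion (6.10) for the time steps T_n.  The parameters
# alpha and C are illustrative; T is chosen to satisfy (6.8) strictly.
alpha, C = 1.0, 1.0
T = 0.9 * math.log(1.0 + 3.0 / (2.0 * C)) / (3.0 * alpha)
S = T                    # S = T + T_1 + ... + T_{n-1}
T_n, count = 1.0, 0

# If sum T_n converged, S would stay bounded, keeping the right-hand
# side of (6.10) >= some b > 1 and hence T_n >= log(b)/(3 alpha) > 0 --
# a contradiction.  Numerically, S indeed exceeds any fixed bound.
while S < 4.0 and count < 100_000:
    T_n = math.log(1.0 + math.exp(-alpha * S) / C) / (3.0 * alpha)  # (6.10)
    S += T_n
    count += 1
```

Each \(T_n\) is strictly positive, and the partial sums pass any prescribed bound after finitely many steps, in agreement with the contradiction argument above.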

6.2 The Scaling Limit

For each k and \(\lambda \)-almost all \(\eta \in \Gamma ^2_0\), we have that the following holds, cf. (2.22) and (4.3),

$$\begin{aligned}&(Q^0_{y,\varepsilon } k) (\eta _0,\eta _1) \rightarrow (Q^0_{y,0} k) (\eta _0,\eta _1)\\&\qquad \qquad := \int _{\Gamma _0} k(\eta _0, \eta _1\cup \xi ) e(-\phi _0 (y - \cdot );\xi )\lambda (d\xi ), \quad \varepsilon \rightarrow 0, \\&(Q^1_{y,\varepsilon } k) (\eta _0,\eta _1) \rightarrow (Q^1_{y,0} k) (\eta _0,\eta _1) \\&\qquad \qquad := \int _{\Gamma _0} k(\eta _0\cup \xi , \eta _1) e(-\phi _1 (y - \cdot );\xi )\lambda (d\xi ), \quad \varepsilon \rightarrow 0. \end{aligned}$$

Thus, for each k and \(\lambda \)-almost all \(\eta \in \Gamma ^2_0\),

$$\begin{aligned} (L^{\varepsilon , \Delta } k)(\eta ) \rightarrow (Vk)(\eta ), \qquad \mathrm{as } \ \ \varepsilon \rightarrow 0, \end{aligned}$$

where, cf. (2.23), (4.3)

$$\begin{aligned} (Vk)(\eta _0 , \eta _1)= & {} \sum _{y\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) (Q_{y,0}^0 k) (\eta _0\setminus y \cup x, \eta _1) d x \nonumber \\&- \sum _{x\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) (Q_{y,0}^0 k) (\eta _0, \eta _1) d y \nonumber \\&+ \sum _{y\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) (Q_{y,0}^1 k) (\eta _0, \eta _1\setminus y \cup x) d x \nonumber \\&- \sum _{x\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) (Q_{y,0}^1 k) (\eta _0, \eta _1) d y. \end{aligned}$$
(6.11)

As above, for each \(\vartheta ''\in \mathbb {R}\) and \(k\in \mathcal {K}_{\vartheta ''}\), both \(Q^i_{y,0}k\) satisfy the estimates in (3.8) and (3.9). Then for \(\vartheta ,\vartheta ''\in \mathbb {R}\) such that \(\vartheta '' < \vartheta \), \(\Vert V k\Vert _{\vartheta }\) is bounded by the right-hand side of (3.11). This allows one to define the operators \(V_\vartheta \) and \(V_{\vartheta \vartheta ''}\) analogous to \(L^\Delta _\vartheta \) and \(L^\Delta _{\vartheta \vartheta ''}\), respectively. For \(\varrho _t\) being the solution described in Theorem 3.8, \(k_{\pi _{\varrho _t}}\) satisfies

$$\begin{aligned} \frac{d}{dt} k_{\pi _{\varrho _t}} = V_{\vartheta \vartheta ''} k_{\pi _{\varrho _t}}, \qquad t>0, \end{aligned}$$
(6.12)

where \(\vartheta ''\in \mathbb {R}\) is such that \(k_{\pi _{\varrho _t}}\in \mathcal {K}_{\vartheta ''}\), see (3.18), and \(\vartheta > \vartheta ''\) is arbitrary. This can be checked by direct calculations based on (6.11) and (3.15). Moreover, if we set \(C= \Vert \varrho _0\Vert _\infty \), see (3.17), then \(k_{\pi _{\varrho _t}}\) satisfies (3.14) with this C, which follows from (3.18). Thus, by Corollary 5.3 we conclude that \(k_{\pi _{\varrho _t}}\in \mathcal {K}_{\vartheta (t)}\) for all \(t>0\).

Proof of Theorem 3.9

Let \(\vartheta _*\) be as assumed. As mentioned above, we then have that \(k_{\pi _{\varrho _t}} \in \mathcal {K}_{\vartheta _T}\) for all \(t\in [0,T]\) with \(\vartheta _T := \vartheta _* + \alpha T\) and T such that

$$\begin{aligned} T < \tau (\vartheta _* + \alpha T). \end{aligned}$$
(6.13)

The latter is possible since the function \(\vartheta \mapsto \tau (\vartheta )\) is continuous and \(\tau (\vartheta _* ) >0\), see (4.7). Since the inequality in (6.13) is strict, we can also pick \(\vartheta _1 > \vartheta _T\) such that \(T < \tau (\vartheta _1)\). Thereafter, we set \(\vartheta = \vartheta _1 + \delta (\vartheta _1)\), cf. Remark 4.1. For \(q_{0, \varepsilon }\) with the assumed property, let \(q_{t,\varepsilon }\) be the solution of (4.5) in \(\mathcal {K}_\vartheta \). In view of (6.12), we then have

$$\begin{aligned} q_{t,\varepsilon } - k_{\pi _{\varrho _t}} = \int _0^t S^\varepsilon _{\vartheta \vartheta _1}(t-s) \left( L^{\varepsilon , \Delta }_{\vartheta _1 \vartheta _T} - V_{\vartheta _1 \vartheta _T} \right) k_{\pi _{\varrho _s}} d s \\ + S^\varepsilon _{\vartheta \vartheta _*}(t)\left[ q_{0, \varepsilon } - k_{\pi _{\varrho _0}}\right] , \qquad t \in [0,T]. \nonumber \end{aligned}$$
(6.14)
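The identity (6.14) is the Duhamel (variation-of-constants) formula for the difference of the two evolutions. As a sketch of the intermediate step, set \(u_t := q_{t,\varepsilon } - k_{\pi _{\varrho _t}}\); subtracting (6.12) from the Cauchy problem (4.5) solved by \(q_{t,\varepsilon }\) gives

$$\begin{aligned} \frac{d}{dt} u_t = L^{\varepsilon , \Delta } q_{t,\varepsilon } - V k_{\pi _{\varrho _t}} = L^{\varepsilon , \Delta } u_t + \left( L^{\varepsilon , \Delta } - V \right) k_{\pi _{\varrho _t}}, \end{aligned}$$

so that integrating along the family \(S^\varepsilon \) from (4.13) yields \(u_t = S^\varepsilon (t) u_0 + \int _0^t S^\varepsilon (t-s) \left( L^{\varepsilon , \Delta } - V \right) k_{\pi _{\varrho _s}}\, ds\), which is (6.14) once the appropriate norm indices are inserted.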

Since \(\vartheta \mapsto \tau (\vartheta )\) is decreasing, by (6.13) we have that \(T < \tau (\vartheta _*)\). By (4.13) we then get

$$\begin{aligned} \forall t \in [0,T] \qquad \Vert S^\varepsilon _{\vartheta \vartheta _*}(t) \Vert \le \frac{T(\vartheta ,\vartheta _*)}{ T(\vartheta ,\vartheta _*) - T}, \end{aligned}$$

which yields that the second term in (6.14) tends to zero uniformly on [0, T]. Also by (4.13) we have

$$\begin{aligned}&\bigg \Vert \int _0^t S^\varepsilon _{\vartheta \vartheta _1}(t-s) \left( L^{\varepsilon , \Delta }_{\vartheta _1 \vartheta _T} - V_{\vartheta _1 \vartheta _T} \right) k_{\pi _{\varrho _s}} d s \bigg \Vert _\vartheta \nonumber \\&\qquad \le \Vert k_{\pi _{\varrho _T}}\Vert _{\vartheta _T} \tau (\vartheta _1) \log \frac{T(\vartheta ,\vartheta _1)}{T(\vartheta ,\vartheta _1) -T} \Vert L^{\varepsilon , \Delta }_{\vartheta _1 \vartheta _T} - V_{\vartheta _1 \vartheta _T}\Vert . \end{aligned}$$
(6.15)

To estimate the latter term we set

$$\begin{aligned} W^i_{y,\varepsilon }k = Q^i_{y,0}k - Q^i_{y,\varepsilon }k, \qquad i=0,1, \ \ y\in \mathbb {R}^d. \end{aligned}$$
(6.16)

By means of the inequality, cf. the proof of Theorem 4.6 in [3],

$$\begin{aligned} \left| b_1 \cdots b_n - a_1 \cdots a_n \right| \le \sum _{i=1}^n |b_i - a_i| \prod _{j\ne i} \max \{|a_j|; |b_j|\}, \end{aligned}$$

and

$$\begin{aligned} 0\le \psi (t) := (t -1 +e^{-t})/t^2 \le 1/2, \qquad t\ge 0, \end{aligned}$$

we obtain, cf. (3.8),

$$\begin{aligned} \left| W^0_{y,\varepsilon }k (\eta _0, \eta _1)\right|&\le \varepsilon \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1| \right) \nonumber \\&\quad \times \int _{\Gamma _0}\bigg ( e^{\vartheta '' |\xi |} \sum _{x\in \xi } \left[ \phi _0(y-x) \right] ^2 \psi \left( \varepsilon \phi _0(y-x)\right) \prod _{z\in \xi \setminus x} \phi _0(y-z) \bigg )\lambda (d \xi ) \nonumber \\&\le ( \varepsilon /2)\bar{\phi }_0 \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1| \right) \nonumber \\&\quad \times \int _{\Gamma _0}\bigg ( |\xi | e^{\vartheta '' |\xi |} \prod _{z\in \xi } \phi _0(y-z) \bigg )\lambda (d \xi ) \nonumber \\&= (\varepsilon /2) \bar{\phi }_0 \langle \phi _0 \rangle \exp \left( \langle \phi _0 \rangle e^{\vartheta ''} \right) \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1| +\vartheta '' \right) . \end{aligned}$$
(6.17)
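Both auxiliary facts used in deriving (6.17) can be verified in a few lines; we sketch the arguments. The product inequality follows by telescoping,

$$\begin{aligned} b_1 \cdots b_n - a_1 \cdots a_n = \sum _{i=1}^n a_1 \cdots a_{i-1}\, (b_i - a_i)\, b_{i+1} \cdots b_n, \end{aligned}$$

after which each factor other than \(b_i - a_i\) is bounded by \(\max \{|a_j|; |b_j|\}\). The bound \(\psi (t) \le 1/2\) follows from the Taylor estimate \(e^{-t} \le 1 - t + t^2/2\), valid for \(t \ge 0\), which gives \(t - 1 + e^{-t} \le t^2/2\); likewise \(e^{-t} \ge 1 - t\) gives \(\psi (t) \ge 0\).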

Likewise,

$$\begin{aligned} \left| W^1_{y,\varepsilon }k (\eta _0, \eta _1)\right| \le (\varepsilon /2) \bar{\phi }_1 \langle \phi _1 \rangle \exp \left( \langle \phi _1 \rangle e^{\vartheta ''} \right) \Vert k\Vert _{\vartheta ''} \exp \left( \vartheta '' |\eta _0| + \vartheta '' |\eta _1| +\vartheta '' \right) .\nonumber \\ \end{aligned}$$
(6.18)

Next, by (2.23), (6.11), and (6.16) we have

$$\begin{aligned} (L^{\varepsilon ,\Delta } - V)k(\eta _0 , \eta _1)= & {} \sum _{y\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) (U^0_{y,\varepsilon } k) (\eta _0\setminus y \cup x, \eta _1) d x \nonumber \\&- \sum _{x\in \eta _0} \int _{\mathbb {R}^d} a_0 (x-y) (U^0_{y,\varepsilon } k) (\eta _0, \eta _1) d y \nonumber \\&+ \sum _{y\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) (U^1_{y,\varepsilon } k) (\eta _0, \eta _1\setminus y \cup x) d x \nonumber \\&- \sum _{x\in \eta _1} \int _{\mathbb {R}^d} a_1 (x-y) (U^1_{y,\varepsilon } k) (\eta _0, \eta _1) d y. \end{aligned}$$
(6.19)

Here we use the following notation:

$$\begin{aligned} (U^0_{y,\varepsilon } k) (\eta _0,\eta _1) = e(\tau ^0_{y,\varepsilon };\eta _1) (Q^0_{y,\varepsilon } k) (\eta _0,\eta _1) - (Q^0_{y,0} k) (\eta _0,\eta _1), \nonumber \\ (U^1_{y,\varepsilon } k) (\eta _0,\eta _1) = e(\tau ^1_{y,\varepsilon };\eta _0) (Q^1_{y,\varepsilon } k) (\eta _0,\eta _1) - (Q^1_{y,0} k) (\eta _0,\eta _1). \end{aligned}$$

Then, cf. (6.16),

$$\begin{aligned} \left| (U^0_{y,\varepsilon } k) (\eta _0,\eta _1)\right|\le & {} \left| (W^0_{y,\varepsilon } k) (\eta _0,\eta _1)\right| + \varepsilon \bar{\phi }_0 |\eta _1|\left| (Q^0_{y,0} k) (\eta _0,\eta _1)\right| , \nonumber \\ \left| (U^1_{y,\varepsilon } k) (\eta _0,\eta _1)\right|\le & {} \left| (W^1_{y,\varepsilon } k) (\eta _0,\eta _1)\right| + \varepsilon \bar{\phi }_1 |\eta _0|\left| (Q^1_{y,0} k) (\eta _0,\eta _1)\right| . \end{aligned}$$

Now by (3.8), (3.9), (6.17), (6.18), and (6.19) we get

$$\begin{aligned} \left| (L^{\varepsilon ,\Delta } - V)k(\eta _0 , \eta _1)\right|&\le \varepsilon \alpha \Vert k\Vert _{\vartheta ''} \exp \left[ \vartheta '' (|\eta _0|+|\eta _1|)\right] \exp \left( ce^{\vartheta ''}\right) \\&\quad \times \bigg ( 2|\eta _0||\eta _1|\left( \bar{\phi }_0+\bar{\phi }_1\right) + e^{\vartheta ''} \left( \bar{\phi }_0 \langle \phi _0 \rangle |\eta _0| + \bar{\phi }_1 \langle \phi _1 \rangle |\eta _1| \right) \bigg ). \end{aligned}$$

As in the derivation of (3.11), the latter estimate then yields

$$\begin{aligned} \Vert L^{\varepsilon ,\Delta }_{\vartheta _1 \vartheta _T} - V_{\vartheta _1 \vartheta _T}\Vert \le \varepsilon \varPhi (\vartheta _1, \vartheta _T), \end{aligned}$$

where \(\varPhi (\vartheta _1, \vartheta _T)>0\) depends on the choice of \(\vartheta _1\), \(\vartheta _T\) and on the model parameters only, and may be calculated explicitly. Then the use of the latter estimate in (6.15) completes the proof. \(\square \)
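As a remark, one admissible explicit choice of \(\varPhi \) (not claimed optimal, and stated here under the assumption that \(\Vert \cdot \Vert _\vartheta \) is the weighted sup-norm with weight \(e^{-\vartheta (|\eta _0|+|\eta _1|)}\), as in the estimates (6.17), (6.18)) can be read off as follows. Take \(\vartheta '' = \vartheta _T\) in the last displayed estimate, bound \(2|\eta _0||\eta _1| \le (|\eta _0|+|\eta _1|)^2/2\) and \(\bar{\phi }_0 \langle \phi _0 \rangle |\eta _0| + \bar{\phi }_1 \langle \phi _1 \rangle |\eta _1| \le \max \{ \bar{\phi }_0 \langle \phi _0 \rangle ; \bar{\phi }_1 \langle \phi _1 \rangle \} (|\eta _0|+|\eta _1|)\), and use \(\sup _{u\ge 0} u e^{-\sigma u} = (e\sigma )^{-1}\) and \(\sup _{u\ge 0} u^2 e^{-\sigma u} = 4(e\sigma )^{-2}\) with \(\sigma = \vartheta _1 - \vartheta _T\). This gives

$$\begin{aligned} \varPhi (\vartheta _1, \vartheta _T) = \alpha e^{c e^{\vartheta _T}} \left( \frac{2 \left( \bar{\phi }_0 + \bar{\phi }_1 \right) }{e^2 (\vartheta _1 - \vartheta _T)^2} + \frac{e^{\vartheta _T} \max \{ \bar{\phi }_0 \langle \phi _0 \rangle ; \bar{\phi }_1 \langle \phi _1 \rangle \}}{e (\vartheta _1 - \vartheta _T)} \right) , \end{aligned}$$

and any larger constant depending only on \(\vartheta _1\), \(\vartheta _T\) and the model parameters works equally well.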