1 Introduction

1.1 The Setup

In this paper, we study the dynamics of an infinite system of point particles in \(\mathbb {R}^d\) which hop and interact with each other. The corresponding phase space is the set of configurations

$$\begin{aligned} \Gamma =\left\{ \gamma \subset \mathbb R^d : |\gamma \cap K|<\infty \, \mathrm{for \, any \, compact} \,K\subset {\mathbb {R}}^d \right\} , \end{aligned}$$
(1.1)

where \(|A|\) denotes the cardinality of a finite set \(A\). The set \(\Gamma \) is equipped with a complete metric and with the corresponding Borel \(\sigma \)-field, which allows one to employ probability measures on \(\Gamma \).

In this work, we follow the statistical approach to stochastic dynamics, see e.g., [11, 12, 15] and the literature quoted in those articles. In this approach, a model is specified by a Markov ‘generator’, which acts on observables—appropriate functions \(F:\Gamma \rightarrow \mathbb {R}\). For the model considered here, it has the form

$$\begin{aligned} (LF) (\gamma ) = \sum \limits _{x\in \gamma }\int \limits _{\mathbb {R}^d} c(x,y,\gamma ) \left[ F(\gamma \setminus x \cup y) - F(\gamma )\right] d y, \quad \gamma \in \Gamma . \end{aligned}$$
(1.2)

In (1.2), and in similar expressions in the sequel, we treat each \(x\in \mathbb {R}^d\) also as a single-point configuration \(\{x\}\). That is, if \(x\) belongs to \(\gamma \) (resp. \(y\) does not), by \(\gamma \setminus x\) (resp. \(\gamma \cup y\)) we mean the configuration obtained from \(\gamma \) by removing \(x\) (resp. adding \(y\)). The elementary act of the dynamics described by (1.2), which occurs with probability \( c(x,y,\gamma )dt\) during the infinitesimal time \(dt\), is a random jump from \(\gamma \) to \(\gamma \setminus x \cup y\). The rate \(c(x,y,\gamma )\) may depend on \(z\in \gamma \) with \(z\ne x,y\), which is interpreted as an interaction between the particles. In this article, we choose

$$\begin{aligned} c(x,y,\gamma ) = a (x-y) \exp \left( - E^\phi (y, \gamma ) \right) , \end{aligned}$$
(1.3)

where the jump kernel \(a: \mathbb {R}^d \rightarrow [0,+\infty ) =: \mathbb {R}_+\) is such that \(a(x) = a(-x)\) and

$$\begin{aligned} \alpha : = \int \limits _{\mathbb {R}^d} a(x) d x < \infty . \end{aligned}$$
(1.4)

The second factor in (1.3) describes the interaction, which is supposed to be pair-wise and repulsive. This means that

$$\begin{aligned} E^\phi (y, \gamma ) = \sum \limits _{z\in \gamma } \phi (y-z) \ge 0, \end{aligned}$$
(1.5)

where the ‘potential’ \(\phi : \mathbb {R}^d \rightarrow \mathbb {R}_+\) is such that \(\phi (x) = \phi (-x)\) and

$$\begin{aligned} c_\phi := \int \limits _{\mathbb {R}^d} \left( 1 - e^{-\phi (x)}\right) d x < \infty . \end{aligned}$$
(1.6)

In the sequel, when we speak of the model we consider, we mean the one defined in (1.2)–(1.6). We also call it the continuum Kawasaki system.
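To make the elementary act of the dynamics concrete, the following minimal Python sketch evaluates the rate (1.3) for a finite configuration. The Gaussian kernel and potential used here are illustrative assumptions only: they satisfy (1.4) and (1.6) but are not prescribed by the model.

```python
import numpy as np

# Illustrative (assumed) choices: a is symmetric with finite integral alpha,
# phi is symmetric, nonnegative, and satisfies (1.6).
def a(z):
    return np.exp(-np.sum(z**2))      # Gaussian jump kernel

def phi(z):
    return np.exp(-np.sum(z**2))      # smooth repulsive pair potential

def energy(y, gamma):
    # E^phi(y, gamma) = sum_{z in gamma} phi(y - z), cf. (1.5)
    return sum(phi(y - z) for z in gamma)

def rate(x, y, gamma):
    # c(x, y, gamma) = a(x - y) * exp(-E^phi(y, gamma)), cf. (1.3);
    # the repulsive factor suppresses jumps into crowded regions.
    return a(x - y) * np.exp(-energy(y, gamma))

# Example: a three-particle configuration in R^2 and a proposed jump x -> y
gamma = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.5])]
x, y = gamma[0], np.array([0.9, 0.1])
print(rate(x, y, gamma))
```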

The main reason for us to choose the rates as in (1.3) is that any grand canonical Gibbs measure with potential \(\phi \), see e.g., [30], is invariant (even symmetrizing) for the dynamics generated by (1.2) with such rates, see [21].

As is usual for Markov dynamics, the ‘generator’ (1.2) enters the backward Kolmogorov equation

$$\begin{aligned} \frac{d}{dt} F_t = L F_t , \qquad F_t|_{t=0} = F_0, \end{aligned}$$
(1.7)

where, for each \(t\), \(F_t\) is an observable. In the approach we follow, the states of the system are probability measures on \(\Gamma \), and hence \( \int \nolimits _\Gamma F d \mu \) can be considered as the value of the observable \(F\) in the state \(\mu \). This pairing also allows one to define the corresponding forward Kolmogorov, or Fokker–Planck, equation

$$\begin{aligned} \frac{d}{dt} \mu _t = L^* \mu _t, \qquad \mu _t|_{t=0} = \mu _0. \end{aligned}$$
(1.8)

The evolutions described by (1.2) and (1.8) are mutually dual in the sense that

$$\begin{aligned} \int \limits _{\Gamma } F_t d \mu _0 = \int \limits _{\Gamma } F_0 d \mu _t. \end{aligned}$$

Thus, the Cauchy problem in (1.8) determines the evolution of states of our model. If we were able to solve it for all possible probability measures as initial conditions, we could construct a Markov process on \(\Gamma \). For nontrivial models, however, including that considered in this work, this lies far beyond the reach of the available technical tools. The main reason for this is that the configuration space \(\Gamma \) has a complex topological structure. Furthermore, the mere existence of the process related to (1.8) would not be enough for drawing conclusions on the collective behavior of the considered system. The basic idea of the approach which we follow is to solve (1.8) not for all possible \(\mu _0\), but only for those belonging to a properly chosen class of probability measures on \(\Gamma \). It turns out that, even with such restrictions, solving (1.8) directly remains out of reach, at least so far. The solution in question is instead obtained by employing the so-called moment, or correlation, functions. Just as a probability measure on \(\mathbb {R}\) is characterized by its moments, a probability measure on \(\Gamma \) can be characterized by its correlation functions. Of course, as not every measure on \(\mathbb {R}\) has all moments, not every measure on \(\Gamma \) possesses correlation functions. The aforementioned restriction on the choice of \(\mu _0\) takes this issue into account, among others.

By certain combinatorial calculations, one transforms (1.8) into the following Cauchy problem

$$\begin{aligned} \frac{d}{dt} k_t = L^\Delta k_t, \quad k_t|_{t=0} = k_0, \end{aligned}$$
(1.9)

where \(k_0\) is the correlation function of \(\mu _0\). Note that the equation in (1.9) is, in fact, an infinite chain of coupled linear equations. Then the construction of the evolution of states \(\mu _0 \mapsto \mu _t\) is performed by: (a) solving (1.9) with \(k_0 = k_{\mu _0}\); (b) proving that, for each \(t\), there exists a unique probability measure \(\mu _t\) such that \(k_t = k_{\mu _t}\). This way of constructing the evolution of states is, in a sense, analogous to that suggested by Bogoliubov [2] in the statistical approach to the Hamiltonian dynamics of large systems of interacting physical particles, cf. [4, 16, 23] and also a review in [7]. In the theory of such systems, the equation analogous to (1.9) is called the BBGKY chain [7].

The description based on (1.8) or (1.9) is microscopic since one deals with coordinates of individual particles; cf. the Introduction in [29]. More coarse-grained levels are the meso- and macroscopic ones. They are attained by appropriate space and time scalings [29, 31]. Of course, certain details of the system’s behavior are then lost. Kinetic equations provide a space-dependent mean-field-like approximate description of the evolution of infinite particle systems. For systems of physical particles, such an equation is the Boltzmann equation related to the BBGKY chain, cf. Sect. 6 in [7] and also [29, 31]. Nowadays, a mathematically consistent way of constructing the mesoscopic description based on kinetic equations is the procedure analogous to the Vlasov scaling in plasma physics, see [12]. In its framework, we obtain from (1.9) a new chain of linear equations for limiting ‘correlation functions’ \(r_t\), called the Vlasov hierarchy. Note that these \(r_t\) may not be correlation functions at all, but they have one important property. Namely, if the initial state \(\mu _0\) is the Poisson measure with density \(\varrho _0 \), then \(r_t\) is the correlation function for the Poisson measure \(\mu _t\) with density \(\varrho _t \), which solves the corresponding kinetic equation.

In the present article, we aim at:

  • constructing the evolution of states \(\mu _0 \mapsto \mu _t\) of the model (1.2), (1.3) by solving (1.9) and then by identifying its solution with a unique \(\mu _t\);

  • deriving rigorously the limiting Vlasov hierarchy, which includes also the convergence of rescaled correlation functions to the limiting functions \(r_t\), as well as deriving the kinetic equation;

  • studying the solvability of the kinetic equation.

Let us make some comments. When speaking of the evolution of states, one might distinguish between equilibrium and non-equilibrium cases. The equilibrium evolution is built with the help of the reversible measures, if such exist for the considered model, and with the corresponding Dirichlet forms. Recall that, for the choice as in (1.3), such reversible measures are grand canonical Gibbs measures. The result is a stationary Markov process, see [21] where a version of the model studied in this work was considered. Note that in this framework, the evolution is restricted to the set of states which are absolutely continuous with respect to the corresponding Gibbs measures. The non-equilibrium evolution, where initial states can be “far away” from equilibrium, is much more interesting and much more complex—for the model considered in this work, it has been constructed for noninteracting particles only, see [22]. In this article, we go further in this direction and construct the non-equilibrium evolution for the continuum Kawasaki system with repulsion. Results similar to those presented here were obtained for a continuum Glauber model in [9], and for a spatial ecological model in [10].

There exists a rich theory of interacting particle systems based on continuous-time Markov processes, which studies so-called lattice models, see [24] and Part II of [31], and also [25] for the latest results. The essential common feature of these models is that the particles are distributed over a discrete set (lattice), typically \(\mathbb {Z}^d\). However, in many real-world applications, such as population biology or spatial ecology, the habitat, i.e., the space where the particles are placed, should essentially be continuous, cf. [26], which we take into account in this work. In statistical physics, a lattice model of ‘hopping spins’ was put forward in [17], see also a review in [18]. There exists an extended theory of interacting particles hopping over \(\mathbb {Z}^d\), cf. [31, Sect. 1 in Part II], and also, e.g., [3, 8] for some aspects of the recent development. However, this theory cannot be applied to continuum Kawasaki systems for a number of reasons. One of them is that a bounded \(K\subset \mathbb {R}^d\) can contain an arbitrary number of particles, whereas the number of particles contained in a bounded \(K\subset \mathbb {Z}^d\) is at most \(|K|\).

1.2 The Overview of the Results

The microscopic description is performed in Sect. 3 in two steps. First, we prove that, for a given correlation function \(k_0\), the problem (1.9) has a unique classical solution \(k_t\) on a bounded time interval \([0,T)\) and in a Banach space somewhat bigger than the one containing \(k_0\), cf. Theorems 3.1 and 3.2. Here ‘bigger’ means that the initial space is a proper subspace of the latter. The parameter \(T>0\) is related to the ‘difference’ between the spaces. The main characteristic feature of both Banach spaces is that if their elements are correlation functions of some probability measures on \(\Gamma \), then these measures are sub-Poissonian, cf. Definition 2.3 and Remark 2.4. This latter property is important in view of the mesoscopic description which we construct subsequently, cf. Remark 4.1. The restriction of the evolution \(k_0 \mapsto k_t\) to a bounded time interval is due to the fact that we were unable to apply semigroup methods, or similar techniques, to (1.9), which would allow one to solve this equation on \([0,+\infty )\) in the mentioned Banach spaces. Our method is based on Ovcyannikov’s observation, cf. [6, pp. 9–13] and [33], that an unbounded operator can be redefined as a bounded one acting, however, from a ‘smaller’ to a ‘bigger’ space, both belonging to a scale of Banach spaces indexed by \(\vartheta \in \mathbb {R}\). The essential fact here is that the norm of such a bounded operator has an upper bound proportional to \((\vartheta '' - \vartheta ')^{-1}\), see (3.18). This implies that the expansion for \(k_t\) in powers of \(t\) converges for \(t\in [0,T)\), cf. (3.19) and (3.21). Second, we prove that the evolution \(k_0 \mapsto k_t\) corresponds to the evolution \(\mu _0 \mapsto \mu _t\) of uniquely determined probability measures on \(\Gamma \) in the following sense. In Theorem 3.8, we show that if \(k_0\) is the correlation function of a sub-Poissonian measure \(\mu _0\), then, for each \(t\in (0,T)\), \(k_t\) is also the correlation function of a unique sub-Poissonian measure \(\mu _t\). The proof is based on the approximation of states of the infinite system by probability measures on \(\Gamma \) supported on the set of finite configurations \(\Gamma _0 \subset \Gamma \) (we call such measures \(\Gamma _0\)-states). The evolution of the latter states can be derived directly from (1.8), which we perform in Theorem 3.7. It is described by a stochastic semigroup constructed with the help of a version of Miyadera’s theorem obtained in [32]. Then we prove that the correlation functions of the mentioned states supported on \(\Gamma _0\) converge weakly to the solution \(k_t\); hence \(k_t\) has the positivity property (3.38), and thus, by Proposition 2.2, it is the correlation function of a unique state.

The mesoscopic description is performed in Sect. 4 in the framework of the scaling method developed in [12]. First, we derive an analog of (1.9) for the rescaled correlation functions, that is, the Cauchy problem in (4.5). This problem contains the scaling parameter \(\varepsilon >0\), which is supposed to tend to zero in the mesoscopic limit. In this limit, we obtain another Cauchy problem, given in (4.12). By the results of Sect. 3, we readily prove the existence of classical solutions of both (4.5) and (4.12). The essence of the scaling technique which we use is that the evolution \(r_0\mapsto r_t\) obtained from (4.12) preserves the set of correlation functions of Poisson measures, cf. Lemma 4.3. Then the density \(\varrho _t\) that corresponds to \(r_t\) satisfies the kinetic equation (4.13), which we then transform into an integral equation, cf. (4.15). For its solutions, we obtain an a priori bound by means of the Gronwall inequality, cf. (4.16), (4.17), which allows us to prove the existence of a unique solution of both (4.13) and (4.15) on \([0,+\infty )\); this yields the global evolution \(r_0\mapsto r_t\), cf. Theorem 4.5. Finally, in Theorem 4.6 we show that the rescaled correlation functions converge to the Poisson correlation functions \(r_t\) as \(\varepsilon \rightarrow 0^+\), uniformly on compact subsets of \([0,T)\). This result links the micro- and mesoscopic evolutions constructed in this work.

Let us mention some open problems related to the model studied in this work. The existence of the global mesoscopic evolution does not, however, imply that the restriction of the microscopic evolution to a bounded time interval is only a technical problem. One cannot exclude that, due to an infinite number of jumps, \(k_t\) eventually leaves any space of the type of (3.10). It is still unclear whether the global evolution \(k_0 \mapsto k_t\) exists in any Banach space reasonably bigger than those used in Theorems 3.1 and 3.2.

A very interesting problem, in the spirit of the philosophy of Cox [5], is to relate the rate of convergence in (3.54) to the value of \(t\), which determines the space in which \(k_t\) lies, cf. Theorem 3.2. Another open problem is the existence of globally bounded solutions of the kinetic equation (4.13). It can be proven that, for a local repulsion, this is the case. Namely, if \(\phi \) in (4.13) is such that \((\phi * \varrho )(x) = \varkappa \varrho (x)\) for all \(x\) and some \(\varkappa \ge 0\), and if \(\varrho _0\) is a bounded continuous function on \(\mathbb {R}^d\), then the solution \(\varrho _t\) is also a continuous function, cf. Corollary 3.3, such that \(\varrho _t \le \sup _{x\in \mathbb {R}^d}\varrho _0 (x) + \epsilon \) for all \(t>0\) and any \(\epsilon >0\). From this one can see how important the relation between the radii of the jump kernel \(a\) and of the repulsion potential \(\phi \) can be.

2 The Basic Notions

In this paper, we work in the approach of [10, 11, 13–15, 19], where all the relevant details can be found.

2.1 The Configuration Spaces

By \(\mathcal {B}(\mathbb {R}^d)\) and \(\mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) we denote the sets of all Borel and all bounded Borel subsets of \(\mathbb {R}^d\), respectively. The configuration space \(\Gamma \) is

$$\begin{aligned} \Gamma =\left\{ \gamma \subset \mathbb R^d : |\gamma \cap K|<\infty \quad \mathrm{for \; any \; compact \,}\quad K\subset {\mathbb {R}}^d \right\} . \end{aligned}$$

Each \(\gamma \in \Gamma \) can be identified with the following positive Radon measure

$$\begin{aligned} \gamma (d x) = \sum \limits _{y\in \gamma } \delta _y(d x) \in \mathcal {M}\left( \mathbb {R}^d \right) , \end{aligned}$$

where \(\delta _y\) is the Dirac measure centered at \(y\), and \(\mathcal {M}(\mathbb {R}^d)\) denotes the set of all positive Radon measures on \(\mathcal {B}(\mathbb {R}^d)\). This allows one to consider \(\Gamma \) as a subset of \(\mathcal {M}(\mathbb {R}^d)\), and hence to endow it with the vague topology. The latter is the weakest topology in which all the maps

$$\begin{aligned} \Gamma \ni \gamma \mapsto \int \limits _{\mathbb {R}^d} f(x) \gamma (d x) = \sum \limits _{x\in \gamma } f(x), \qquad f\in C_0\left( \mathbb {R}^d \right) , \end{aligned}$$

are continuous. Here \(C_0 (\mathbb {R}^d)\) stands for the set of all continuous functions \(f:\mathbb {R}^d \rightarrow \mathbb {R}\) which have compact support. The vague topology on \(\Gamma \) admits a metrization, which turns it into a complete and separable (Polish) space, see, e.g., [20, Theorem 3.5]. By \(\mathcal {B}(\Gamma )\) we denote the corresponding Borel \(\sigma \)-field.

For \(n\in \mathbb {N}_0 :=\mathbb {N}\cup \{0\}\), the set of \(n\)-particle configurations in \(\mathbb {R}^d\) is

$$\begin{aligned} \Gamma ^{(0)} = \{ \emptyset \}, \qquad \Gamma ^{(n)} = \left\{ \eta \subset \mathbb {R}^d: |\eta | = n \right\} , \quad \ n\in \mathbb {N}. \end{aligned}$$
(2.1)

For \(n\ge 2\), \(\Gamma ^{(n)}\) can be identified with the symmetrization of the set

$$\begin{aligned} \left\{ (x_1, \ldots , x_n)\in \left( \mathbb {R}^d \right) ^n: x_i \ne x_j, \quad \mathrm{for} \quad i\ne j \right\} \subset (\mathbb {R}^{d})^n, \end{aligned}$$

which allows one to introduce the corresponding topology and hence the Borel \(\sigma \)-field \(\mathcal {B}(\Gamma ^{(n)})\). The set of finite configurations is

$$\begin{aligned} \Gamma _{0} := \bigsqcup _{n\in \mathbb {N}_0} \Gamma ^{(n)}. \end{aligned}$$
(2.2)

We equip it with the topology of the disjoint union and hence with the Borel \(\sigma \)-field \(\mathcal {B}(\Gamma _{0})\). Obviously, \(\Gamma _0\) is a subset of \(\Gamma \), cf. (1.1). However, the topology just mentioned and that induced from \(\Gamma \) do not coincide. At the same time, \(\Gamma _0 \in \mathcal {B}(\Gamma )\). In the sequel, by \(\Lambda \) we denote a bounded subset of \(\mathbb {R}^d\), that is, we always mean \(\Lambda \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\). For such \(\Lambda \), we set

$$\begin{aligned} \Gamma _\Lambda = \left\{ \gamma \in \Gamma : \gamma \subset \Lambda \right\} . \end{aligned}$$

Clearly, \(\Gamma _\Lambda \) is also a measurable subset of \(\Gamma _0\) and the following holds

$$\begin{aligned} \Gamma _\Lambda = \bigsqcup _{n\in \mathbb {N}_0} \Bigl (\Gamma ^{(n)} \cap \Gamma _\Lambda \Bigr ), \end{aligned}$$

which allows one to equip \(\Gamma _\Lambda \) with the topology induced by that of \(\Gamma _0\). Let \(\mathcal {B}(\Gamma _\Lambda )\) be the corresponding Borel \(\sigma \)-field. It is clear that, for \(A \in \mathcal {B}(\Gamma _0)\), \(\Gamma _\Lambda \cap A \in \mathcal {B}(\Gamma _\Lambda )\). It can be proven, see Lemma 1.1 and Proposition 1.3 in [27], that

$$\begin{aligned} \mathcal {B}(\Gamma _\Lambda ) = \left\{ \Gamma _\Lambda \cap A : A \in \mathcal {B}(\Gamma )\right\} , \end{aligned}$$
(2.3)

and hence

$$\begin{aligned} \mathcal {B}(\Gamma _0) = \left\{ A \in \mathcal {B}(\Gamma ): A \subset \Gamma _0 \right\} . \end{aligned}$$
(2.4)

Next, we define the projection

$$\begin{aligned} \Gamma \ni \gamma \mapsto p_\Lambda (\gamma )= \gamma _\Lambda := \gamma \cap \Lambda , \qquad \Lambda \in \mathcal {B}_\mathrm{b} \left( \mathbb {R}^d \right) . \end{aligned}$$
(2.5)

It is known [1, p. 451] that \(\mathcal {B}(\Gamma )\) is the smallest \(\sigma \)-algebra of subsets of \(\Gamma \) such that the maps \(p_\Lambda \) with all \(\Lambda \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\) are \(\mathcal {B}(\Gamma )/\mathcal {B}(\Gamma _\Lambda )\) measurable. This means that \((\Gamma , \mathcal {B}(\Gamma ))\) is the projective limit of the measurable spaces \((\Gamma _\Lambda , \mathcal {B}(\Gamma _\Lambda ))\), \(\Lambda \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\). A set \(A\in \mathcal {B}(\Gamma _0)\) is said to be bounded if

$$\begin{aligned} A \subset \bigsqcup _{n=0}^N \Gamma ^{(n)}_\Lambda \end{aligned}$$
(2.6)

for some \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) and \(N\in \mathbb {N}\). The smallest \(\Lambda \) such that \(A \subset \Gamma _\Lambda \) will be called the support of \(A\).

2.2 Measures and Functions

Given \(n\in \mathbb {N}\), by \(m^{(n)}\) we denote the restriction of the Lebesgue product measure \(dx_1 dx_2 \cdots dx_n \) to \((\Gamma ^{(n)}, \mathcal {B}(\Gamma ^{(n)}))\). The Lebesgue–Poisson measure with intensity \(\varkappa >0\) is a measure on \((\Gamma _0, \mathcal {B}(\Gamma _0))\) defined by

$$\begin{aligned} \lambda _\varkappa = \delta _{\emptyset } + \sum \limits _{n=1}^\infty \frac{\varkappa ^n}{n!} m^{(n)}. \end{aligned}$$
(2.7)

For \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\), the restriction of \(\lambda _\varkappa \) to \(\Gamma _{\Lambda }\) will be denoted by \(\lambda _\varkappa ^\Lambda \). This is a finite measure on \(\mathcal {B}(\Gamma _\Lambda )\) such that

$$\begin{aligned} \lambda ^\Lambda _\varkappa (\Gamma _\Lambda ) = \exp \left[ \varkappa m(\Lambda )\right] , \end{aligned}$$

where \(m(\Lambda ):=m^{(1)}(\Lambda )\) is the Lebesgue measure of \(\Lambda \). Then

$$\begin{aligned} \pi ^\Lambda _\varkappa := \exp ( - \varkappa m(\Lambda )) \lambda ^\Lambda _\varkappa \end{aligned}$$
(2.8)

is a probability measure on \(\mathcal {B}(\Gamma _\Lambda )\). It can be shown [1] that the family \(\{\pi ^\Lambda _\varkappa \}_{\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)}\) is consistent, and hence there exists a unique probability measure, \(\pi _\varkappa \), on \(\mathcal {B}(\Gamma )\) such that

$$\begin{aligned} \pi ^\Lambda _\varkappa = \pi _\varkappa \circ p^{-1}_\Lambda , \qquad \Lambda \in \mathcal {B}_\mathrm{b} \left( \mathbb {R}^d \right) , \end{aligned}$$

where \(p_\Lambda \) is the same as in (2.5). This \(\pi _\varkappa \) is called the Poisson measure. The Poisson measure \(\pi _{\varrho }\) corresponding to the density \(\varrho : \mathbb {R}^d\rightarrow \mathbb {R}_{+}\) is introduced by means of the measure \(\lambda _{\varrho }\), defined as in (2.7) with \(\varkappa m\) replaced by \(m_\varrho \), where, for \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\),

$$\begin{aligned} m_{\varrho } (\Lambda ) := \int \limits _{\Lambda } \varrho (x) d x, \end{aligned}$$
(2.9)

which is supposed to be finite. Then \(\pi _{\varrho }\) is defined by its projections

$$\begin{aligned} \pi ^\Lambda _{\varrho } = \exp ( - m_{\varrho }(\Lambda )) \lambda ^\Lambda _{\varrho }. \end{aligned}$$
(2.10)

For \(\varkappa =1\), we shall drop the subscript and consider the Lebesgue–Poisson measure \(\lambda \) and the Poisson measure \(\pi \).
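As a computational aside, the finite-volume Poisson measure (2.8) can be sampled directly: by (2.7), the number of points in \(\Lambda \) is Poisson-distributed with mean \(\varkappa m(\Lambda )\), and, given this number, the points are independent and uniformly distributed in \(\Lambda \). A minimal Python sketch, assuming only for illustration that \(\Lambda \) is the cube \([0,\ell ]^d\):

```python
import numpy as np

def sample_poisson_configuration(kappa, ell, d, rng=None):
    """Draw a configuration from pi^Lambda_kappa for Lambda = [0, ell]^d, cf. (2.8).

    The number of particles is Poisson(kappa * ell**d); given that number,
    the positions are i.i.d. uniform on Lambda, which reproduces (2.7)-(2.8).
    """
    if rng is None:
        rng = np.random.default_rng()
    volume = ell ** d                      # m(Lambda)
    n = rng.poisson(kappa * volume)        # |gamma_Lambda|
    return rng.uniform(0.0, ell, size=(n, d))

gamma = sample_poisson_configuration(kappa=2.0, ell=1.0, d=2)
print(len(gamma), "points sampled in the unit square")
```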

For a measurable \(f:\mathbb {R}^d \rightarrow \mathbb {R}\) and \(\eta \in \Gamma _0\), the Lebesgue–Poisson exponent is

$$\begin{aligned} e(f, \eta ) = \prod _{x\in \eta } f(x), \quad e(f, \emptyset ) = 1. \end{aligned}$$
(2.11)

Clearly, \( e(f, \cdot )\in L^1 (\Gamma _0, d \lambda )\) for any \(f \in L^1 (\mathbb {R}^d):= L^1 (\mathbb {R}^d, dx)\), and

$$\begin{aligned} \int \limits _{\Gamma _0} e(f, \eta ) \lambda (d\eta ) = \exp \left\{ \int \limits _{\mathbb {R}^d} f(x) d x \right\} . \end{aligned}$$
(2.12)
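Indeed, (2.12) follows directly from (2.7) and (2.11): integrating over each \(\Gamma ^{(n)}\) separately, one gets

$$\begin{aligned} \int \limits _{\Gamma _0} e(f, \eta ) \lambda (d\eta ) = 1 + \sum \limits _{n=1}^\infty \frac{1}{n!}\int \limits _{(\mathbb {R}^d)^n} f(x_1) \cdots f(x_n)\, d x_1 \cdots d x_n = \sum \limits _{n=0}^\infty \frac{1}{n!}\left( \int \limits _{\mathbb {R}^d} f(x)\, d x\right) ^n = \exp \left\{ \int \limits _{\mathbb {R}^d} f(x)\, d x \right\} . \end{aligned}$$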

By \(B_\mathrm{bs} (\Gamma _0)\) we denote the set of all bounded measurable functions \(G: \Gamma _0 \rightarrow \mathbb {R}\), which have bounded supports. That is, each such \(G\) is the zero function on \(\Gamma _0 \setminus A\) for some bounded \(A\), cf. (2.6). Note that any measurable \(G: \Gamma _0 \rightarrow \mathbb {R}\) is in fact a sequence of measurable symmetric functions \(G^{(n)}: (\mathbb {R}^d)^n \rightarrow \mathbb {R}\) such that, for \(\eta = \{x_1 , \ldots , x_n\}\), \(G(\eta ) = G^{(n)}(x_1 , \ldots , x_n)\). We say that \(F:\Gamma \rightarrow \mathbb {R}\) is a cylinder function if there exists \(\Lambda \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\) and \(G:\Gamma _\Lambda \rightarrow \mathbb {R}\) such that \(F(\gamma ) = G(\gamma _\Lambda )\) for all \(\gamma \in \Gamma \). By \(\mathcal F_{\text {cyl}}(\Gamma )\) we denote the set of all measurable cylinder functions. For \(\gamma \in \Gamma \), by writing \(\eta \Subset \gamma \) we mean that \(\eta \subset \gamma \) and \(\eta \) is finite, i.e., \(\eta \in \Gamma _0\). For \(G \in B_\mathrm{bs}(\Gamma _0)\), we set

$$\begin{aligned} (KG)(\gamma ) = \sum \limits _{\eta \Subset \gamma } G(\eta ), \qquad \gamma \in \Gamma . \end{aligned}$$
(2.13)

Clearly \(K\) maps \(B_\mathrm{bs}(\Gamma _0)\) into \(\mathcal {F}_\mathrm{cyl}(\Gamma )\) and is linear and positivity preserving. This map plays an important role in the theory of configuration spaces, cf. [19].
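For finite configurations, the sum in (2.13) can be evaluated by brute force, which may help to visualize the map \(K\). A minimal Python sketch; the example function \(G\) below is chosen only for illustration:

```python
from itertools import combinations

def K_transform(G, gamma):
    """(KG)(gamma) = sum of G(eta) over sub-configurations eta of gamma, cf. (2.13).

    Here gamma is a finite configuration (list of points), so the sum runs
    over all subsets of gamma, including the empty one.
    """
    total = 0.0
    for n in range(len(gamma) + 1):
        for eta in combinations(gamma, n):
            total += G(eta)
    return total

# Example: G is the indicator of one-particle configurations,
# so (KG)(gamma) equals the number of points of gamma.
G = lambda eta: 1.0 if len(eta) == 1 else 0.0
print(K_transform(G, [(0.0,), (1.0,), (2.5,)]))   # prints 3.0
```

In this example \((KG)(\gamma ) = |\gamma |\), in line with the counting interpretation of \(K\mathbb {I}_A\) discussed after (2.15) below.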

By \(\mathcal {M}^1 (\Gamma )\) we denote the set of all probability measures on \((\Gamma , \mathcal {B}(\Gamma ))\), and let \(\mathcal {M}^1_\mathrm{fm} (\Gamma )\) denote the subset of \(\mathcal {M}^1 (\Gamma )\) consisting of all measures which have finite local moments, that is, for which

$$\begin{aligned} \int \limits _\Gamma |\gamma _\Lambda |^n \mu (d \gamma ) < \infty \quad \mathrm{for} \quad \mathrm{all} \quad n\in \mathbb {N} \quad \mathrm{and}\quad \Lambda \in \mathcal {B}_\mathrm{b} \left( \mathbb {R}^d \right) . \end{aligned}$$

Definition 2.1

A measure \(\mu \in \mathcal {M}^1_\mathrm{fm} (\Gamma )\) is said to be locally absolutely continuous with respect to the Poisson measure \(\pi \) if, for every \(\Lambda \in \mathcal {B}_\mathrm{b} \left( \mathbb {R}^d \right) \), the projection

$$\begin{aligned} \mu ^\Lambda := \mu \circ p_\Lambda ^{-1} \end{aligned}$$
(2.14)

is absolutely continuous with respect to \(\pi ^\Lambda \) and hence with respect to \(\lambda ^\Lambda \), see (2.8).

A measure \(\chi \) on \((\Gamma _0, \mathcal {B}(\Gamma _0))\) is said to be locally finite if \(\chi (A)< \infty \) for every bounded measurable \(A\subset \Gamma _0\). By \(\mathcal {M}_\mathrm{lf} (\Gamma _0)\) we denote the set of all such measures. Let a measurable \(A\subset \Gamma _0\) be bounded, and let \(\mathbb {I}_A\) be its indicator function on \(\Gamma _0\). Then \(\mathbb {I}_A\) is in \(B_\mathrm{bs}(\Gamma _0)\), and hence one can apply (2.13). For \(\mu \in \mathcal {M}^1_\mathrm{fm} (\Gamma )\), we let

$$\begin{aligned} \chi _\mu (A) = \int \limits _{\Gamma } (K\mathbb {I}_A) (\gamma ) \mu (d \gamma ), \end{aligned}$$
(2.15)

which uniquely determines a measure \(\chi _\mu \in \mathcal {M}_\mathrm{lf}(\Gamma _0)\). It is called the correlation measure for \(\mu \). For instance, let \(A\subset \Gamma ^{(n)}\subset \Gamma _0\) and let \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) be the support of \(A\), cf. (2.6). Then \((K\mathbb I_{A})(\gamma )\) is the number of distinct \(n\)-particle sub-configurations of \(\gamma \) contained in \(\Lambda \), and thus \(\chi _\mu (A)\) is the expected number of such sub-configurations in \(\Lambda \) in state \(\mu \). In particular, if \(A\subset \Gamma ^{(1)}\), then \(\chi _\mu (A)\) is just the expected number of particles in state \(\mu \) contained in \(\Lambda \).

The equation (2.15) defines a map \(K^*: \mathcal {M}^1_\mathrm{fm} (\Gamma ) \rightarrow \mathcal {M}_\mathrm{lf}\,(\Gamma _0)\) such that \(K^*\mu = \chi _\mu \). In particular, \(K^*\pi = \lambda \). It is known, see [19, Proposition 4.14], that \(\chi _\mu \) is absolutely continuous with respect to \(\lambda \) if \(\mu \) is locally absolutely continuous with respect to \(\pi \). In this case, we have that, for any \(\Lambda \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\) and \(\lambda ^\Lambda \)-almost all \(\eta \in \Gamma _\Lambda \),

$$\begin{aligned} k_\mu (\eta ) = \frac{d \chi _\mu }{d \lambda }(\eta )&= \int \limits _{\Gamma _{\Lambda }} \frac{d\mu ^\Lambda }{d \pi ^\Lambda } (\eta \cup \gamma ) \pi ^\Lambda (d \gamma )\\&= \int \limits _{\Gamma _{\Lambda }} \frac{d\mu ^\Lambda }{d \lambda ^\Lambda } (\eta \cup \xi ) \lambda ^\Lambda (d \xi ) \nonumber . \end{aligned}$$
(2.16)

The Radon–Nikodym derivative \(k_\mu \) is called the correlation function corresponding to the measure \(\mu \). As all real-valued measurable functions on \(\Gamma _0\), each \(k_\mu \) is the collection of measurable \(k_\mu ^{(n)}: (\mathbb {R}^d)^n \rightarrow \mathbb {R}\) such that \(k_\mu ^{(0)} \equiv 1\), and \(k_\mu ^{(n)}\), \(n\ge 2\), are symmetric. In particular, \(k_\mu ^{(1)}\) is the particle’s density in state \(\mu \), cf. (2.15).

Recall that by \(B_\mathrm{bs} (\Gamma _0)\) we denote the set of all bounded measurable functions \(G: \Gamma _0 \rightarrow \mathbb {R}\), which have bounded supports. We also set

$$\begin{aligned} B_\mathrm{bs}^+ (\Gamma _0)=\left\{ G \in B_\mathrm{bs}(\Gamma _0): (KG)(\gamma ) \ge 0 \right\} . \end{aligned}$$
(2.17)

The following fact is known, see Theorems 6.1, 6.2 and Remark 6.3 in [19].

Proposition 2.2

Suppose \(\chi \in \mathcal {M}_\mathrm{lf}(\Gamma _0)\) has the properties

$$\begin{aligned} \chi (\{\emptyset \}) = 1 , \qquad \int \limits _{\Gamma _0} G (\eta ) \chi (d \eta ) \ge 0 , \end{aligned}$$
(2.18)

for each \(G\in B^+_\mathrm{bs}(\Gamma _0)\). Then there exists \(\mu \in \mathcal {M}^1_\mathrm{fm}(\Gamma )\) such that \(K^* \mu =\chi \). For the uniqueness of such \(\mu \), it is enough that the Radon–Nikodym derivative (2.16) of \(\chi \) obeys

$$\begin{aligned} k(\eta ) \le \prod _{x \in \eta } C_R (x) , \end{aligned}$$
(2.19)

for all \(\eta \in \Gamma _0\) and for some locally integrable \(C_R : \mathbb {R}^d \rightarrow \mathbb {R}_{+}\).

Let \(\pi _\varrho \) be the Poisson measure as in (2.10), and let \(\Lambda \) be the support of a given bounded \(A\subset \Gamma _0\). If \(A\subset \Gamma ^{(n)}\), cf. (2.1), then

$$\begin{aligned} \chi _{\pi _{\varrho }} (A) = \int \limits _{\Lambda ^n} \varrho (x_1) \cdots \varrho (x_n) dx_1 \cdots dx_n = \left( \int \limits _{\Lambda } \varrho (x) dx \right) ^n, \end{aligned}$$
(2.20)

which, in particular, means that particles appear in \(\Lambda \) independently. In this case,

$$\begin{aligned} k_{\pi _\varrho } (\eta ) = e(\varrho , \eta ), \end{aligned}$$
(2.21)

where \(e\) is as in (2.11). In particular,

$$\begin{aligned} k^{(n)}_{\pi _\varrho } (x_1 ,\ldots , x_n) = \varrho (x_1) \cdots \varrho (x_n), \quad n\in \mathbb {N}. \end{aligned}$$
(2.22)

Definition 2.3

A locally absolutely continuous measure \(\mu \in \mathcal {M}^1_\mathrm{fm} (\Gamma )\), cf. Definition 2.1, is called sub-Poissonian if its correlation function \(k_\mu \) obeys (2.19) for some locally integrable \(C_R : \mathbb {R}^d \rightarrow \mathbb {R}_{+}\).

Remark 2.4

If \(\mu \) is sub-Poissonian and \(A\) is as in (2.20), then

$$\begin{aligned} \chi _\mu (A) \le C^n \left( \int \limits _\Lambda k_\mu ^{(1)} (x) dx\right) ^n, \end{aligned}$$

for some \(C>0\). That is, the correlation measure is controlled by the density in this case. For instance, if one knows that \(k_{\mu _t}^{(1)}\) does not explode for any \(t>0\), then neither does \(k_{\mu _t}\), and hence \(\mu _t\) exists for all \(t>0\). A faster increase of \(\chi _\mu (A)\), e.g., as \(n!\), can be interpreted as clustering in state \(\mu \).

Finally, we present the following integration rule, cf. [11, Lemma 2.1],

$$\begin{aligned} \int \limits _{\Gamma _0} \sum \limits _{\xi \subset \eta } H(\xi , \eta \setminus \xi , \eta ) \lambda (d \eta ) = \int \limits _{\Gamma _0}\int \limits _{\Gamma _0}H(\xi , \eta , \eta \cup \xi ) \lambda (d \xi ) \lambda (d \eta ), \end{aligned}$$
(2.23)

which holds for any appropriate function \(H\).
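As a consistency check of (2.23), one may take \(H(\xi , \eta , \zeta ) = e(f,\xi ) e(g, \eta )\) with \(f, g \in L^1(\mathbb {R}^d)\); then, since \(\sum _{\xi \subset \eta } e(f,\xi ) e(g, \eta \setminus \xi ) = e(f+g, \eta )\), both sides of (2.23) coincide by (2.12):

$$\begin{aligned} \int \limits _{\Gamma _0} \sum \limits _{\xi \subset \eta } e(f,\xi )\, e(g, \eta \setminus \xi )\, \lambda (d\eta ) = \int \limits _{\Gamma _0} e(f+g, \eta )\, \lambda (d \eta ) = \exp \left\{ \int \limits _{\mathbb {R}^d} \left( f(x)+g(x)\right) dx \right\} = \int \limits _{\Gamma _0}\int \limits _{\Gamma _0} e(f,\xi )\, e(g,\eta )\, \lambda (d \xi ) \lambda (d \eta ). \end{aligned}$$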

3 Microscopic Dynamics

In view of the fact that \(\Gamma \) also contains infinite configurations, the direct construction of the evolution based on (1.7) and (1.2) cannot be performed, and thus we pass to the description based on correlation functions, cf. (1.9) and (2.16). The ‘generator’ in (1.9) has the form, cf. [15, Eq. (4.8)]

$$\begin{aligned} \left( {L}^\Delta k\right) (\eta )&= \sum \limits _{y\in \eta } \int \limits _{\mathbb {R}^d} a (x-y)e \left( \tau _y, \eta \setminus y \cup x \right) \nonumber \\&\quad \times \left( \int \limits _{\Gamma _0}e(t_y, \xi ) k\left( \xi \cup x \cup \eta \setminus y \right) \lambda (d \xi ) \right) dx \nonumber \\&\quad - \int \limits _{\Gamma _0} k(\xi \cup \eta ) \left( \sum \limits _{x\in \eta } \int \limits _{\mathbb {R}^d} a(x-y) e(\tau _y,\eta ) \right. \nonumber \\&\qquad \times \left. e(t_y, \xi ) d y\right) \lambda (d\xi ). \end{aligned}$$
(3.1)

Here \(e\) is as in (2.11) and

$$\begin{aligned} t_x (y) = e^{-\phi (x-y)} - 1, \quad \tau _x (y) = t_x (y) + 1. \end{aligned}$$
(3.2)

We shall also consider the following auxiliary evolution \(G_0 \mapsto G_t\), dual to that \(k_0 \mapsto k_t\) described by (1.9) and (3.1). The duality is understood in the sense

$$\begin{aligned} \left\langle \! \left\langle G_t , k_0 \right\rangle \! \right\rangle = \left\langle \! \left\langle G_0,\, k_t \right\rangle \! \right\rangle , \end{aligned}$$
(3.3)

where

$$\begin{aligned} \left\langle \! \left\langle G,k\right\rangle \! \right\rangle := \int \limits _{\Gamma _0} G(\eta ) k(\eta )\lambda (d \eta ) , \end{aligned}$$
(3.4)

and \(\lambda \) is the Lebesgue–Poisson measure defined in (2.7) with \(\varkappa =1\). The ‘generators’ are related to each other by

$$\begin{aligned} \left\langle \left\langle G , L^\Delta k \right\rangle \right\rangle = \left\langle \left\langle \widehat{L} G, k \right\rangle \right\rangle . \end{aligned}$$
(3.5)

Then the equation dual to (1.9) is

$$\begin{aligned} \frac{d}{dt} G_t = \widehat{L} G_t, \qquad G_t|_{t=0} = G_0, \end{aligned}$$
(3.6)

with, cf. [15, Eq. (4.7)],

$$\begin{aligned} \left( \widehat{L}G\right) (\eta )&= \sum \limits _{\xi \subset \eta } \sum \limits _{x\in \xi } \int \limits _{\mathbb {R}^d} a (x-y) e(\tau _y, \xi ) \nonumber \\&\quad \times \, e(t_y , \eta \setminus \xi ) \left[ G(\xi \setminus x \cup y) - G(\xi ) \right] dy. \end{aligned}$$
(3.7)

3.1 The Evolution of Correlation Functions

We consider (1.9) with \(L^\Delta \) given in (3.1). To place this problem in the right context we introduce the following Banach spaces. Recall that a function \(G:\Gamma _0 \rightarrow \mathbb {R}\) is a sequence of \(G^{(n)}: (\mathbb {R}^d)^n \rightarrow \mathbb {R}\), \(n\in \mathbb {N}_0\), where \(G^{(0)}\) is constant and all \(G^{(n)}\), \(n\ge 2\), are symmetric. Let \(k:\Gamma _0\rightarrow \mathbb {R}\) be such that \(k^{(n)}\in L^\infty ((\mathbb {R}^d)^n)\), for \(n\in \mathbb {N}\). For this \(k\) and \(\vartheta \in \mathbb {R}\), we set

$$\begin{aligned} \Vert k\Vert _\vartheta = \sup _{n\in \mathbb {N}_0} \nu _n(k) \exp (\vartheta n), \end{aligned}$$
(3.8)

where

$$\begin{aligned} \nu _0(k) = |k^{(0)}|, \quad \nu _n(k) = \Vert k^{(n)} \Vert _{L^\infty \left( (\mathbb {R}^d)^n\right) }, \quad n\in \mathbb {N}. \end{aligned}$$
(3.9)

Then

$$\begin{aligned} \mathcal {K}_\vartheta := \left\{ k: \Gamma _0 \rightarrow \mathbb {R} : \Vert k\Vert _\vartheta < \infty \right\} , \end{aligned}$$
(3.10)

is a real Banach space with norm (3.8) and usual point-wise linear operations. Note that \(\{\mathcal {K}_\vartheta :\vartheta \in \mathbb {R}\}\) is a scale of Banach spaces in the sense that

$$\begin{aligned} \mathcal {K}_\vartheta \subset \mathcal {K}_{\vartheta '}, \quad \mathrm{for} \quad \vartheta > \vartheta '. \end{aligned}$$
(3.11)

As usual, by a classical solution of (1.9) in \(\mathcal {K}_\vartheta \) on time interval \(I\), we understand a map \(I\ni t\mapsto k_t\in \mathcal {K}_\vartheta \), which is continuous on \(I\), continuously differentiable on the interior of \(I\), lies in the domain of \(L^\Delta \), and solves (1.9). Recall that we suppose (1.6) and (1.4).

Theorem 3.1

Given \(\vartheta \in \mathbb {R}\) and \(T>0\), we let

$$\begin{aligned} \vartheta _0=\vartheta + 2 \alpha T \exp (c_{\phi }e^{-\vartheta }). \end{aligned}$$
(3.12)

Then the problem (1.9) with \(k_0 \in {\mathcal {K}}_{\vartheta _0}\) has a unique classical solution \(k_t\in {\mathcal {K}}_{\vartheta }\) on \([0,T)\).

According to the above theorem, for arbitrary \(T>0\) and \(\vartheta \), one can pick the initial space such that the evolution \(k_0 \mapsto k_t\) stays in \(\mathcal {K}_\vartheta \) for \(t<T\). On the other hand, if the initial space is given, the evolution is restricted in time to the interval \([0,T(\vartheta ))\) with

$$\begin{aligned} T(\vartheta ) = \frac{\vartheta _0 - \vartheta }{2 \alpha } \exp \left( - c_\phi e^{-\vartheta }\right) . \end{aligned}$$
(3.13)

Clearly, \(T(\vartheta _0) = 0\) and \(T(\vartheta ) \rightarrow 0\) as \(\vartheta \rightarrow - \infty \). Hence, there exists \(T_*=T_* (\vartheta _0, \alpha , c_\phi )\) such that \(T(\vartheta ) \le T_*\) for all \(\vartheta \in (-\infty , \vartheta _0]\). Set

$$\begin{aligned} \vartheta (t) = \sup \left\{ \vartheta \in (-\infty , \vartheta _0]: t < T(\vartheta )\right\} . \end{aligned}$$
(3.14)
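When \(c_\phi >0\), the value of \(T_*\) can be located by maximizing (3.13) in \(\vartheta \); this is not used in the sequel and is mentioned only for orientation. A direct computation gives

$$\begin{aligned} \frac{d}{d \vartheta } T(\vartheta ) = \frac{1}{2\alpha } \exp \left( - c_\phi e^{-\vartheta }\right) \left[ c_\phi e^{-\vartheta } \left( \vartheta _0 - \vartheta \right) - 1\right] , \end{aligned}$$

so that \(T_* = T(\vartheta _*)\), where \(\vartheta _* < \vartheta _0\) is the unique solution of \(c_\phi e^{-\vartheta }(\vartheta _0 - \vartheta ) = 1\).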

Then the alternative version of the above theorem can be formulated as follows.

Theorem 3.2

For every \(\vartheta _0\in \mathbb {R}\), there exists \(T_* = T_* (\vartheta _0, \alpha , c_\phi )\) such that the problem (1.9) with \(k_0 \in \mathcal {K}_{\vartheta _0}\) has a unique classical solution \(k_t\in \mathcal {K}_{\vartheta (t)}\) on \([0, T_*)\).

Proof of Theorem 3.1

Let \(\vartheta \in \mathbb {R}\) be fixed. Set

$$\begin{aligned} \mathrm{Dom} (L^\Delta ) = \left\{ k \in \mathcal {K}_\vartheta : L^\Delta k \in \mathcal {K}_\vartheta \right\} . \end{aligned}$$
(3.15)

Given \(k\), let \(L^\Delta _1 k\) and \(L^\Delta _2 k\) denote the first and the second summands in (3.1), respectively. Then, for \(\vartheta \le \vartheta ' < \vartheta ''\) and \(k\in \mathcal K_{\vartheta ''}\), we have

$$\begin{aligned}&|(L^{\Delta }_1k)(\eta )|e^{\vartheta '|\eta |}\\&\quad \le \sum \limits _{y\in \eta }\int \limits _{\mathbb R^d}dx\, a(x-y)\int \limits _{\Gamma _0}\lambda (d\xi )|k(\eta \setminus y\cup x\cup \xi )| \exp (\vartheta ''|\eta \cup \xi |) \\&\quad \quad \times \exp (-\vartheta ''|\eta \cup \xi |)e(|t_y|,\xi )e^{\vartheta '|\eta |}\\&\quad \le \sum \limits _{y\in \eta }\int \limits _{\mathbb R^d}dx\, a(x-y)\int \limits _{\Gamma _0}\lambda (d\xi )\Vert k\Vert _{\vartheta ''}e^{-\vartheta ''|\xi |}\\&\quad \quad \times e(|t_y|,\xi )e^{-|\eta |(\vartheta ''-\vartheta ')}\\&\quad =\Vert k\Vert _{\vartheta ''} \alpha \exp \left( e^{-\vartheta ''}c_{\phi }\right) |\eta |e^{-|\eta |(\vartheta ''-\vartheta ')}\\&\quad \le \Vert k\Vert _{\vartheta ''} \alpha \exp \left( e^{-\vartheta ''}c_{\phi }\right) \frac{1}{e(\vartheta ''-\vartheta ')}, \end{aligned}$$

which holds for \(\lambda \)-almost all \(\eta \in \Gamma _0\). In the last two lines we used (2.12) and the following inequality

$$\begin{aligned} \tau e^{-\delta \tau }\le {1}/{e\delta }, \quad \mathrm{for} \quad \mathrm{all} \quad \tau \ge 0, \quad \delta >0. \end{aligned}$$
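The latter inequality is elementary: on \([0,+\infty )\) the function \(\tau \mapsto \tau e^{-\delta \tau }\) attains its maximum at \(\tau = 1/\delta \), so that

$$\begin{aligned} \sup _{\tau \ge 0} \tau e^{-\delta \tau } = \frac{1}{e \delta }. \end{aligned}$$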

Similarly one estimates also \(L^\Delta _2 k\), which finally yields

$$\begin{aligned} \Vert L^\Delta k \Vert _{\vartheta '} \le \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( c_\phi e^{-\vartheta ''} \right) \Vert k\Vert _{\vartheta ''}, \qquad \vartheta '' > \vartheta ', \end{aligned}$$
(3.16)

and hence, cf. (3.11) and (3.15),

$$\begin{aligned} \mathrm{Dom}\left( L^\Delta \right) \supset \mathcal {K}_{\vartheta '}, \qquad \mathrm{for} \quad \mathrm{all} \quad \vartheta ' > \vartheta . \end{aligned}$$
(3.17)

By (3.16), \({L}^\Delta \) can be defined as a bounded linear operator \({L}^\Delta :\mathcal {K}_{\vartheta ''} \rightarrow {\mathcal {K}}_{\vartheta '}\), \(\vartheta ' < \vartheta ''\), with norm

$$\begin{aligned} \Vert L^\Delta \Vert _{\vartheta '' \vartheta '} \le \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( c_\phi e^{-\vartheta ''} \right) . \end{aligned}$$
(3.18)

Given \(k_0\in \mathcal {K}_{\vartheta _0}\), we seek the solution of (1.9) as the limit of the sequence \(\{k_{t,n}\}_{n\in \mathbb {N}_0 }\subset \mathcal {K}_\vartheta \), where \(k_{t,0} = k_0\) and

$$\begin{aligned} k_{t,n} = k_0 + \int \limits _0^t L^{\Delta } k_{s,n-1} d s, \quad n \in \mathbb {N}. \end{aligned}$$

The latter can be iterated to yield

$$\begin{aligned} k_{t,n} = k_0 + \sum \limits _{m=1}^n \frac{1}{m!}t^m \left( L^{\Delta }\right) ^m k_0. \end{aligned}$$
(3.19)

Then, for \(n,p\in \mathbb {N}\), we have

$$\begin{aligned} \Vert k_{t,n} - k_{t,n+p}\Vert _\vartheta \le \sum \limits _{m=n+1}^{n+p} \frac{t^m}{m!} \Vert \left( L^\Delta \right) ^m\Vert _{\vartheta _0 \vartheta } \Vert k_0\Vert _{\vartheta _0}. \end{aligned}$$
(3.20)

For a given \(m\in \mathbb {N}\) and \(l = 0, \dots , m\), set \(\vartheta _l = \vartheta + (m- l )\epsilon \), \(\epsilon = (\vartheta _0 - \vartheta )/m\). Then by (3.18) and (3.13), we get

$$\begin{aligned} \left\| \left( L^\Delta \right) ^m \right\| _{\vartheta _0 \vartheta }&\le \prod _{l=0}^{m-1} \Vert L^\Delta \Vert _{\vartheta _{l}\vartheta _{l+1}} \le \left( \frac{2 \alpha m\exp \left[ c_\phi e^{-\vartheta }\right] }{e (\vartheta _0 - \vartheta )} \right) ^m \nonumber \\&= \left( \frac{m}{e}\right) ^m \frac{1}{[T(\vartheta )]^m}. \end{aligned}$$
(3.21)

Applying the latter estimate in (3.20), and taking into account that \((m/e)^m \le m!\), we obtain that the sequence \(\{k_{t,n}\}_{n\in \mathbb {N}}\) converges in \(\mathcal {K}_\vartheta \) for \(|t|< T\); hence its limit \(k_t\) is differentiable, even real analytic, in \(\mathcal {K}_\vartheta \) on the latter set. From the proof above we see that \(\{k_{t,n}\}_{n\in \mathbb {N}}\) converges also in \(\mathcal {K}_{\vartheta (t)}\) with \(\vartheta (t)\) as in (3.14), which proves Theorem 3.2. The latter fact, by (3.17), yields that \(k_t \in \mathrm{Dom}(L^\Delta )\) for all \(t< T(\vartheta )\), which completes the proof of the existence. The uniqueness readily follows from the analyticity just mentioned. \(\square \)

Let now \(k_t\), as a function of \(\eta \in \Gamma _0\), be continuous. Then instead of (3.10) we consider

$$\begin{aligned} \widetilde{\mathcal {K}}_\vartheta = \left\{ k \in C (\Gamma _0\rightarrow \mathbb {R}): \Vert k\Vert _\vartheta < \infty \right\} , \end{aligned}$$

endowed with the same norm as in (3.8), (3.9).

Corollary 3.3

Let \(\vartheta \), \(T\), and \(\vartheta _0\) be as in Theorem 3.1. Suppose in addition that the function \(\phi \) is continuous. Then the problem (1.9) with \(k_0\in \widetilde{\mathcal {K}}_{\vartheta _0}\) has a unique classical solution \(k_t \in \widetilde{\mathcal {K}}_{\vartheta }\) on \([0, T)\).

Now we consider (3.6) in the Banach space

$$\begin{aligned} \mathcal {G}_\vartheta = L^1 \left( \Gamma _0, e^{-\vartheta |\cdot |}d \lambda \right) ,\quad \vartheta \in \mathbb {R}, \end{aligned}$$

that is, \(G\in \mathcal {G}_\vartheta \) if \(\Vert G\Vert _\vartheta < \infty \), where

$$\begin{aligned} \Vert G\Vert _\vartheta := \int \limits _{\Gamma _0} \exp (-\vartheta |\eta |) \left| G(\eta ) \right| \lambda (d \eta ). \end{aligned}$$
(3.22)

Theorem 3.4

Let \(\vartheta _0\), \(\vartheta \), and \(T>0\) be as in (3.12). Then the Cauchy problem (3.6) with \(G_0 \in \mathcal {G}_{\vartheta }\) has a unique classical solution \(G_t\in \mathcal {G}_{\vartheta _0}\) on \([0,T)\).

Proof

As above, we obtain the solution of (3.6) as the limit of the sequence \(\{G_{t,n}\}_{n\in \mathbb {N}_0 }\subset \mathcal {G}_{\vartheta _0}\), where \(G_{t,0} = G_0\) and

$$\begin{aligned} G_{t,n} = G_0 + \sum \limits _{m=1}^n \frac{1}{m!}t^m \widehat{L}^m G_0. \end{aligned}$$
(3.23)

For the norm (3.22), from (3.7) similarly as above by (2.23) we get

$$\begin{aligned} \left\| \widehat{L}G \right\| _{\vartheta ''} \le \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( c_\phi e^{-\vartheta ''} \right) \Vert G\Vert _{\vartheta '}. \end{aligned}$$

This means that \(\widehat{L}\) can be defined as a bounded linear operator \(\widehat{L}:\mathcal {G}_{\vartheta '} \rightarrow \mathcal {G}_{\vartheta ''}\) with norm

$$\begin{aligned} \left\| \widehat{L}\right\| _{\vartheta ' \vartheta ''} \le \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( c_\phi e^{-\vartheta ''} \right) . \end{aligned}$$

Then we apply the latter estimate in (3.23) and obtain, for any \(p,n\in \mathbb N\),

$$\begin{aligned} \left\| G_{t,n} - G_{t,n+p}\right\| _{\vartheta _0} \le \sum \limits _{m=n+1}^{n+p} \frac{(m/e)^m}{m!} \left( \frac{t}{T}\right) ^m \Vert G_0\Vert _{\vartheta }. \end{aligned}$$

The latter estimate yields the proof, as in the case of Theorem 3.1. \(\square \)

Corollary 3.5

Let \(k_0\), \(k_t\), and \(G_0\), \(G_t\) be as in Theorems 3.1 and 3.4, respectively. Then, cf. (3.3), the following holds

$$\begin{aligned} \left\langle \! \left\langle G_0 , k_t \right\rangle \! \right\rangle = \left\langle \! \left\langle G_t , k_0 \right\rangle \! \right\rangle . \end{aligned}$$
(3.24)

That is, the evolutions described by these Theorems are dual.

Proof

By (3.5) and by (3.19) and (3.23), we see that, for all \(n\in \mathbb {N}\),

$$\begin{aligned} \left\langle \left\langle G_0 , k_{t,n} \right\rangle \right\rangle = \left\langle \left\langle G_{t,n} , k_0 \right\rangle \right\rangle . \end{aligned}$$

Then (3.24) is obtained from the latter by passing to the limit \(n\rightarrow +\infty \), since we have the norm-convergences of the sequences \(\{k_{t,n}\}\) and \(\{G_{t,n}\}\). \(\square \)

3.2 The Evolution of \(\Gamma _0\)-States

We recall that the set of finite configurations \( \Gamma _0\), cf. (2.2), is a measurable subset of \(\Gamma \). By a \(\Gamma _0\)-state we mean a state \(\mu \in \mathcal {M}^1(\Gamma )\) such that \(\mu (\Gamma _0) =1\). That is, in a \(\Gamma _0\)-state the system consists of a finite number of particles, but this number is random. Each \(\Gamma _0\)-state can be redefined as a probability measure on \((\Gamma _0, \mathcal {B}(\Gamma _0))\), cf. (2.3) and (2.4). The action of the ‘generator’ in (1.8) on \(\Gamma _0\)-states can be written down explicitly. Namely, for such a state \(\mu \) and \(A \in \mathcal {B}(\Gamma _0)\),

$$\begin{aligned} (L^* \mu )(A) = - \int \limits _{\Gamma _0} \Omega (\Gamma _0, \eta ) \mathbb {I}_A (\eta ) \mu (d\eta ) + \int \limits _{\Gamma _0} \Omega (A, \eta ) \mu (d\eta ), \end{aligned}$$
(3.25)

where, cf. (1.3) and (1.5),

$$\begin{aligned} \Omega (A, \eta ) = \sum \limits _{x\in \eta } \int \limits _{\mathbb {R}^d} a(x-y) \exp ( - E^\phi (y, \eta ))\mathbb {I}_{A}(\eta \setminus x \cup y) dy, \end{aligned}$$
(3.26)

which is a measure kernel on \((\Gamma _0,\mathcal {B}(\Gamma _0))\). That is, \(\Omega (\cdot , \eta )\) is a measure for all \(\eta \in \Gamma _0\), and \(\Omega (A, \cdot )\) is \(\mathcal {B}(\Gamma _0)\)-measurable for all \(A \in \mathcal {B}(\Gamma _0)\). Note that

$$\begin{aligned} \Omega (\Gamma _0, \eta ) = \sum \limits _{x\in \eta } \int \limits _{\mathbb {R}^d} a(x-y) \exp ( - E^\phi (y, \eta )) dy \le \alpha |\eta |, \end{aligned}$$
(3.27)

which is obtained by (1.4) and the positivity of \(\phi \).

Let \(\mathcal {M}(\Gamma _0)\) be the Banach space of all signed measures on \((\Gamma _0,\mathcal {B}(\Gamma _0))\) which have bounded variation. For each \(\mu \in \mathcal {M}(\Gamma _0)\), there exist \(\beta _{\pm } \ge 0\) and probability measures \(\mu _{\pm }\) such that

$$\begin{aligned} \mu = \beta _{+} \mu _{+} - \beta _{-} \mu _{-}, \quad \mathrm{and } \quad \Vert \mu \Vert = \beta _{+} + \beta _{-}. \end{aligned}$$
(3.28)

Let \(\mathcal {M}_{+} (\Gamma _0)\) be the cone of positive elements of \(\mathcal {M}(\Gamma _0)\), for which \(\Vert \mu \Vert = \mu (\Gamma _0)\). Then we define, cf. (3.27),

$$\begin{aligned} \mathrm{Dom} (L^*) = \left\{ \mu \in \mathcal {M}(\Gamma _0): \Omega (\Gamma _0, \cdot )\mu \in \mathcal {M}(\Gamma _0)\right\} . \end{aligned}$$
(3.29)

Recall that a \(C_0\)-semigroup \(\{S_\mu (t)\}_{t\ge 0}\) of bounded operators in \(\mathcal {M}(\Gamma _0)\) is called stochastic if each \(S_\mu (t)\), \(t>0\), leaves the cone \(\mathcal {M}_{+}(\Gamma _0)\) invariant, and \(\Vert S_\mu (t)\mu \Vert =1\) whenever \(\mu \in \mathcal {M}_{+}(\Gamma _0)\) and \(\Vert \mu \Vert =1\). Our aim is to show that the problem (1.8) has a solution in the form

$$\begin{aligned} \mu _t = S_\mu (t) \mu _0, \end{aligned}$$
(3.30)

where \(\{S_\mu (t)\}_{t\ge 0}\) is a stochastic semigroup in \(\mathcal {M}(\Gamma _0)\) which leaves invariant certain important subspaces of \(\mathcal {M}(\Gamma _0)\). For a measurable \(b: \Gamma _0 \rightarrow \mathbb {R}_{+}\), we set

$$\begin{aligned} \mathcal {M}_b (\Gamma _0) = \left\{ \mu \in \mathcal {M}(\Gamma _0): \mu _{\pm } (b) < \infty \right\} , \end{aligned}$$

where \(\mu _{\pm }\) are the same as in (3.28) and

$$\begin{aligned} \mu _{\pm }(b) := \int \limits _{\Gamma _0} b(\eta ) \mu _{\pm }(d\eta ). \end{aligned}$$

The set \(\mathcal {M}_b (\Gamma _0)\) can be equipped with the norm

$$\begin{aligned} \Vert \mu \Vert _b = \beta _{+} \mu _{+} (b) + \beta _{-} \mu _{-} (b), \end{aligned}$$

which turns it into a Banach space. Set

$$\begin{aligned} \mathcal {M}_{b,+} (\Gamma _0) = \mathcal {M}_{b} (\Gamma _0) \cap \mathcal {M}_{+} (\Gamma _0). \end{aligned}$$

We also suppose that \(b\) is such that the embedding \( \mathcal {M}_b (\Gamma _0) \hookrightarrow \mathcal {M} (\Gamma _0)\) is dense and continuous. In the sequel, we use [32, Proposition 5.1], which we rephrase as follows.

Proposition 3.6

Suppose that \(b\) and some positive \(C\) and \(\varepsilon \) obey the following estimate

$$\begin{aligned} \int \limits _{\Gamma _0} \left( b(\xi ) - b(\eta )\right) \Omega (d \xi , \eta ) \le C b(\eta ) - \varepsilon \Omega (\Gamma _0, \eta ), \end{aligned}$$
(3.31)

which holds for all \(\eta \in \Gamma _0\). Then the closure of \(L^*\) as in (3.25), (3.26) with domain (3.29) generates a \(C_0\)-stochastic semigroup \(\{S_\mu (t)\}_{t\ge 0}\), which leaves \(\mathcal {M}_{b} (\Gamma _0)\) invariant and induces a positive \(C_0\)-semigroup on \((\mathcal {M}_{b} (\Gamma _0), \Vert \cdot \Vert _b)\).

Theorem 3.7

The problem (1.8) with a \(\Gamma _0\)-state \(\mu _0\) has a unique classical solution in \(\mathcal {M}_{+} (\Gamma _0)\) on \([0, +\infty )\), given by (3.30) where \(S_\mu (t)\), \(t>0\), constitute the stochastic semigroup on \(\mathcal {M} (\Gamma _0)\) generated by the closure of \(L^*\) given in (3.25), (3.26), and (3.29). Moreover, for each \(b\) which satisfies

$$\begin{aligned} b(\eta ) = \delta (|\eta |) \ge \varepsilon \Omega (\Gamma _0, \eta ), \qquad \mathrm{for} \quad \mathrm{all} \quad \eta \in \Gamma _0, \end{aligned}$$
(3.32)

with some \(\varepsilon >0\) and suitable \(\delta :\mathbb {N} \rightarrow [0, +\infty )\), the mentioned semigroup \(\{S_\mu (t)\}_{t\ge 0}\) leaves \(\mathcal {M}_{b,+} (\Gamma _0)\) invariant.

Proof

Computations based on (2.23) show that, for \(b(\eta ) = \delta (|\eta |)\), the left-hand side of (3.31) vanishes, which reflects the fact that the Kawasaki dynamics is conservative. Then the proof follows by Proposition 3.6. The condition that \(\mu _0 \in \mathcal {M}_{b,+} (\Gamma _0)\) with \(b\) satisfying (3.32) merely means that this \(\mu _0\) is taken from the domain of \(L^*\), cf. (3.29). \(\square \)
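In more detail, for \(b(\eta ) = \delta (|\eta |)\) the vanishing can also be seen directly from (3.26): since the move \(\eta \mapsto \eta \setminus x \cup y\) does not change \(|\eta |\),

$$\begin{aligned} \int \limits _{\Gamma _0} b(\xi )\, \Omega (d \xi , \eta ) = \sum \limits _{x\in \eta } \int \limits _{\mathbb {R}^d} a(x-y) \exp \left( - E^\phi (y, \eta )\right) \delta \left( |\eta \setminus x \cup y|\right) dy = b(\eta )\, \Omega (\Gamma _0, \eta ), \end{aligned}$$

so that \(\int _{\Gamma _0}\left( b(\xi ) - b(\eta )\right) \Omega (d \xi , \eta ) = 0\), and (3.31) holds with \(C=1\) for any \(b\) obeying (3.32).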

Suppose now that the initial state \(\mu _0\) in (1.8) is supported on \(\Gamma _0\) and is absolutely continuous with respect to the Lebesgue–Poisson measure \(\lambda \). Then

$$\begin{aligned} R_0 (\eta ) = \frac{d \mu _0}{d \lambda } (\eta ) \end{aligned}$$
(3.33)

is a positive element of unit norm of the Banach space \(\mathcal {R} :=L^1 (\Gamma _0, d \lambda )\). If \(\mu _0 \in \mathcal {M}_{b,+} (\Gamma _0)\), then also \(R_0 \in \mathcal {R}_b := L^1 (\Gamma _0,b d \lambda )\). For \(b\) obeying (3.32), it is possible to show that, for any \(t>0\), the solution \(\mu _t\) as in Theorem 3.7 has the Radon–Nikodym derivative \(R_t\) which lies in \(\mathcal {R}_b\). Furthermore, there exists a stochastic semigroup \(\{S_R(t)\}_{t\ge 0}\) on \(\mathcal {R}\), which leaves invariant each \(\mathcal {R}_b\) with \(b\) obeying (3.32), and such that

$$\begin{aligned} R_{t} = S_R (t) R_0, \qquad t \ge 0. \end{aligned}$$
(3.34)

The generator \(L^\dagger \) of the semigroup \(\{S_R(t)\}_{t\ge 0}\) has the following properties

$$\begin{aligned} \mathrm{Dom} (L^\dagger ) \supset \mathcal {R}_b, \end{aligned}$$
(3.35)
$$\begin{aligned} \int \limits _{\Gamma _0} F (\eta ) (L^\dagger R)(\eta ) \lambda (d \eta ) = \int \limits _{\Gamma _0}(L F )(\eta ) R(\eta ) \lambda (d \eta ), \end{aligned}$$
(3.36)

which holds for each \(b\) obeying (3.32), and for each \(R\in \mathcal {R}\) and each measurable \(F:\Gamma _0 \rightarrow \mathbb {R}\) such that both integrals in (3.36) exist. Here \(L\) is as in (1.2). For each \(t \ge 0\), the correlation function of \(\mu _t\) and its Radon–Nikodym derivative satisfy, cf. (2.16),

$$\begin{aligned} k_{\mu _t} (\eta ) = \int \limits _{\Gamma _0} R_t (\eta \cup \xi ) \lambda (d\xi ). \end{aligned}$$
(3.37)

By this representation and by (2.23), we derive

$$\begin{aligned} \int \limits _{\mathbb {R}^d} k^{(1)}_{\mu _t}(x) dx = \int \limits _{\Gamma _0}|\eta | R_t (\eta ) \lambda (d\eta ), \end{aligned}$$

which yields the expected number of particles in state \(\mu _t\). Note that we cannot expect now that \(k_{\mu _t}\) lies in the spaces where we solve (1.9), cf. Theorem 3.1.

3.3 The Evolution of States

Recall that by \(B_\mathrm{bs}(\Gamma _0)\) we denote the set of all bounded measurable functions \(G:\Gamma _0 \rightarrow \mathbb {R}\) each of which is supported on a bounded \(A\), cf. (2.6). Its subset \(B^+_\mathrm{bs}(\Gamma _0)\) is defined in (2.17).

Given \(\vartheta \in \mathbb {R}\), let \(\mathcal {M}_\vartheta (\Gamma )\) stand for the set of all \(\mu \in \mathcal {M}^1_\mathrm{fm}(\Gamma )\), for which \(k_\mu \in \mathcal {K}_\vartheta \), see (2.16) and (3.10). Let also \(\mathcal {K}_\vartheta ^+\) be the set of all \(k \in \mathcal {K}_\vartheta \) such that, cf. (2.18),

$$\begin{aligned} \int \limits _{\Gamma _0} G(\eta ) k(\eta ) \lambda (d \eta ) \ge 0, \end{aligned}$$
(3.38)

which holds for every \(G\in B^+_\mathrm{bs}(\Gamma _0)\). Note that this property is stronger than mere positivity, as \(B^+_\mathrm{bs}(\Gamma _0)\) can contain functions which also take negative values, see (2.13) and (2.17). Then, in view of Proposition 2.2, the map \(\mathcal {M}_\vartheta (\Gamma )\ni \mu \mapsto k_\mu \in \mathcal {K}_\vartheta ^+\) is a bijection, as such \(k_\mu \) certainly obeys (2.19). In what follows, the evolution of states \(\mu _0 \mapsto \mu _t\) is understood as the evolution of the corresponding correlation functions \(k_{\mu _0} \mapsto k_{\mu _t}\) obtained by solving the problem (1.9).

Theorem 3.8

Let \(\vartheta _0\in \mathbb R\), \(\vartheta \), and \(T(\vartheta )\) be as in Theorem 3.1 and in (3.13), respectively, and let \(\mu _0\) be in \(\mathcal {M}_{\vartheta _0}(\Gamma )\). Then the evolution described in Theorem 3.1 with \(k_0 =k_{\mu _0}\) leaves \(\mathcal {K}_{\vartheta }^+\) invariant, which means that each \(k_t\) is the correlation function of a unique \(\mu _t \in \mathcal {M}_\vartheta (\Gamma )\). Hence, the evolution \(k_{\mu _0}\mapsto k_t\), \(t\in [0,T(\vartheta ))\), determines the evolution of states

$$\begin{aligned} \mathcal {M}_{\vartheta _0}(\Gamma ) \ni \mu _0 \mapsto \mu _t \in \mathcal {M}_\vartheta (\Gamma ), \quad t\in [0, T(\vartheta )). \end{aligned}$$

Proof

To prove the statement we have to show that a solution \(k_t\) of the problem (1.9) with \(k_0 = k_{\mu _0}\) obeys (3.38) for all \(t\in (0,T(\vartheta ))\). Fix \(\mu _0 \in \mathcal {M}_{\vartheta _0}(\Gamma )\) and take \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\). Let \(\mu ^\Lambda _0\) be the projection of \(\mu _0\) onto \(\Gamma _{\Lambda }\), cf. (2.14). Since \(\mu _0\) is in \(\mathcal {M}^1_\mathrm{fm}(\Gamma )\), its density \(R_0^\Lambda \), as in (3.33), is in \(\mathcal {R}\). Given \(N\in \mathbb {N}\), we let \(I_N(\eta ) = 1\) whenever \(|\eta |\le N\), and \(I_N(\eta ) = 0\) otherwise. Then we set

$$\begin{aligned} R^{\Lambda , N}_0 (\eta ) = R^\Lambda _0 (\eta ) I_N(\eta ). \end{aligned}$$
(3.39)

As a function on \(\Gamma _0\), \(R^{\Lambda , N}_0\) is a collection of \(R^{\Lambda , N,n}_0: (\mathbb {R}^d)^n \rightarrow \mathbb {R}_+\), \(n\in \mathbb {N}_0\). Clearly, \(R^{\Lambda , N}_0\) is a positive element of \(\mathcal {R}\) of norm \(\Vert R^{\Lambda , N}_0\Vert _{\mathcal {R}} \le 1\). Furthermore, for each \(\beta >0\),

$$\begin{aligned} \int \limits _{\Gamma _0} e^{\beta |\eta |} R^{\Lambda , N}_0 (\eta ) \lambda (d\eta ) = \sum \limits _{n=0}^N \frac{e^{n \beta }}{n!}\int \limits _{\Lambda ^n} R_0^{\Lambda ,N,n} (x_1 , \ldots , x_n)d x_1 \cdots d x_n <\infty , \end{aligned}$$

and hence \(R^{\Lambda , N}_0 \in \mathcal {R}_b\) with \(b(\eta ) = e^{\beta |\eta |}\), for each \(\beta >0\). Set

$$\begin{aligned} R^{\Lambda , N}_t = S_R(t) R^{\Lambda , N}_0, \quad t\ge 0, \end{aligned}$$
(3.40)

where \(S_R(t)\) is as in (3.34). By Theorem 3.7, we have that

$$\begin{aligned} \forall t\ge 0:&(a)\; \forall \beta >0 \quad R^{\Lambda , N}_t \in \mathcal {R}_b , \quad \mathrm{with} \quad b(\eta ) = e^{\beta |\eta |}, \nonumber \\&(b) \; R^{\Lambda , N}_t (\eta ) \ge 0, \quad \mathrm{for} \quad \lambda -\mathrm{a.a.} \quad \eta \in \Gamma _0, \nonumber \\&(c) \; \Vert R^{\Lambda , N}_t\Vert _{\mathcal {R}} \le 1. \end{aligned}$$
(3.41)

Furthermore, in view of (3.35), by [28, Theorem 2.4, pp. 4–5] we have from (3.40)

$$\begin{aligned} R^{\Lambda , N}_t = R^{\Lambda , N}_0 + \int \limits _0^t L^\dagger R^{\Lambda , N}_s ds. \end{aligned}$$
(3.42)

Set, cf. (2.16) and (3.37),

$$\begin{aligned} q_{t}^{\Lambda ,N} (\eta ) = \int \limits _{\Gamma _0} R^{\Lambda , N}_t (\eta \cup \xi ) \lambda (d \xi ). \end{aligned}$$
(3.43)

For \(G\in B_\mathrm{bs}(\Gamma _0)\), let \(N(G)\in \mathbb {N}_0\) be such that \(G^{(n)} \equiv 0\) for \(n> N(G)\). For such \(G\), \(KG\) is a cylinder function on \(\Gamma \), which can also be considered as a measurable function on \(\Gamma _0\). By (2.13), we have that, for every \(G \in B_\mathrm{bs}(\Gamma _0)\) and each \(t\ge 0\),

$$\begin{aligned} \left\langle \! \left\langle K G, R^{\Lambda , N}_t \right\rangle \! \right\rangle = \left\langle \! \left\langle G, q^{\Lambda , N}_t \right\rangle \! \right\rangle , \end{aligned}$$
(3.44)

see (3.4). Since \(G\in B_\mathrm{bs}(\Gamma _0)\) is bounded, we have

$$\begin{aligned} C(G) := \max _{n\in \{0, \dots , N(G)\}}\Vert G^{(n)}\Vert _{L^\infty \left( (\mathbb {R}^d)^n \right) }<\infty , \end{aligned}$$
(3.45)

which immediately yields that

$$\begin{aligned} |(KG) (\eta )| \le (1+|\eta |)^{N(G)} C(G), \end{aligned}$$

and hence both integrals in (3.44) exist since \(R^{\Lambda ,N}_t\in \mathcal {R}_b\) for \(b(\eta ) = e^{\beta |\eta |}\) with any \(\beta >0\). Moreover, by the same argument the map \(\mathcal {R} \ni R \mapsto \langle \! \langle KG , R \rangle \! \rangle \) is continuous, and thus from (3.42) and (3.36) we obtain

$$\begin{aligned} \left\langle \! \left\langle KG ,R^{\Lambda , N}_t \right\rangle \! \right\rangle&= \left\langle \! \left\langle KG ,R^{\Lambda ,N}_0 \right\rangle \! \right\rangle + \int \limits _0^t \left\langle \! \left\langle KG ,{L}^\dagger R^{\Lambda ,N}_s \right\rangle \! \right\rangle ds \nonumber \\&= \left\langle \! \left\langle KG ,R^{\Lambda ,N}_0 \right\rangle \! \right\rangle + \int \limits _0^t \left\langle \! \left\langle L KG,R^{\Lambda ,N}_s \right\rangle \! \right\rangle ds. \end{aligned}$$
(3.46)

We would now like to interchange \(L\) and \(K\) in the last line of (3.46). If \(\widehat{L} G\) were in \(B_\mathrm{bs}(\Gamma _0)\), one would get the point-wise equality \(LKG = K \widehat{L} G\) by the very definition of \(\widehat{L}\). However, this is not the case since, cf. (3.7),

$$\begin{aligned} \left| (\widehat{L} G)(\eta )\right| \le (KUG) (\eta ), \end{aligned}$$

where

$$\begin{aligned} (UG)(\xi ) = \sum \limits _{x\in \xi } \int \limits _{\mathbb {R}^d}a(x-y) \left| G(\xi \setminus x \cup y) - G(\xi )\right| dy. \end{aligned}$$

Here we used, cf. (2.11) and (3.2), that

$$\begin{aligned} 0\le e(\tau _y,\xi ) \le 1, \qquad |e(t_y, \eta \setminus \xi )| \le 1, \end{aligned}$$

which holds for almost all \(y\), \(\xi \), and \(\eta \). Then, for \(G\in B_\mathrm{bs}(\Gamma _0)\), we have, cf. (3.45),

$$\begin{aligned} N(UG) = N(G), \qquad \ C(UG)\le 2 \alpha N(G) C(G), \end{aligned}$$

which then yields

$$\begin{aligned} \left| (\widehat{L} G)(\eta )\right| \le 2\alpha N(G) C(G) (1+ |\eta |)^{N(G)}. \end{aligned}$$
(3.47)

Let us show that, for any \(t\ge 0\), the function \((\widehat{L} G) q^{\Lambda ,N}_t\) is \(\lambda \)-integrable, cf. (3.43). By (2.23), from (3.43) and (3.47) we get

$$\begin{aligned} \left\langle \! \left\langle \widehat{L} G, q^{\Lambda , N}_t \right\rangle \! \right\rangle&\le 2 \alpha N(G) C(G) \int \limits _{\Gamma _0} R^{\Lambda ,N}_t(\eta ) \left( \sum \limits _{\xi \subset \eta } (1+ |\xi |)^{N(G)}\right) \lambda (d \eta ) \qquad \nonumber \\&\le 2 \alpha N(G) C(G) \int \limits _{\Gamma _0} 2^{|\eta |} (1+ |\eta |)^{N(G)}R^{\Lambda ,N}_t(\eta ) \lambda (d \eta ). \end{aligned}$$

Hence, by claim \((a)\) in (3.41) we get the integrability in question. Then by (3.44) we transform (3.46) into

$$\begin{aligned} \left\langle \! \left\langle G, q^{\Lambda , N}_t \right\rangle \! \right\rangle = \left\langle \! \left\langle G, q^{\Lambda , N}_0 \right\rangle \! \right\rangle + \int \limits _0^t \left\langle \! \left\langle \widehat{L} G, q^{\Lambda , N}_s \right\rangle \! \right\rangle ds. \end{aligned}$$
(3.48)

Since \(R^{\Lambda ,N}_t\) is positive, cf. \((b)\) in (3.41), by (3.44) we get

$$\begin{aligned} \left\langle \! \left\langle G, q^{\Lambda , N}_t \right\rangle \! \right\rangle \ge 0 \qquad \mathrm{for} \quad G\in B^+_\mathrm{bs}(\Gamma _0). \end{aligned}$$
(3.49)

On the other hand, by (3.39) and (3.43) we have, see also (2.16),

$$\begin{aligned} 0 \le q_0^{\Lambda ,N}(\eta ) \le \int \limits _{\Gamma _0} R^\Lambda _0 (\eta \cup \xi ) \lambda (d\xi ) = k_{\mu _0}(\eta ) \mathbb {I}_{\Gamma _\Lambda }(\eta ) \le k_{\mu _0}(\eta ), \end{aligned}$$
(3.50)

where \(\mathbb {I}_{\Gamma _\Lambda }\) is the indicator of \(\Gamma _\Lambda \), i.e., \(\mathbb {I}_{\Gamma _\Lambda }(\eta ) = 1\) whenever \(\eta \in \Gamma _\Lambda \), and \(\mathbb {I}_{\Gamma _\Lambda }(\eta )=0\) otherwise. By (3.50), \(q_0^{\Lambda ,N} \in \mathcal {K}_{\vartheta _0}\). Let \(k^{\Lambda ,N}_t\), \(t\in [0,T)\), be the solution of (1.9) with \(k_0 = q_0^{\Lambda ,N}\), as stated in Theorem 3.1. Then

$$\begin{aligned} k^{\Lambda ,N}_t = k^{\Lambda ,N}_0 + \int \limits _0^t L^\Delta k^{\Lambda ,N}_s ds, \end{aligned}$$

which for \(G\) as in (3.48) yields

$$\begin{aligned} \left\langle \! \left\langle G, k^{\Lambda , N}_t \right\rangle \! \right\rangle = \left\langle \! \left\langle G, q^{\Lambda , N}_0 \right\rangle \! \right\rangle + \int \limits _0^t \left\langle \! \left\langle \widehat{L} G, k^{\Lambda , N}_s \right\rangle \! \right\rangle ds. \end{aligned}$$
(3.51)

Set

$$\begin{aligned} \varphi (t; G) = \left\langle \! \left\langle G, q^{\Lambda , N}_t \right\rangle \! \right\rangle , \quad \psi (t;G) = \left\langle \! \left\langle G, k^{\Lambda , N}_t \right\rangle \! \right\rangle . \end{aligned}$$

By (3.48) and (3.51), we obtain, cf. Corollary 3.5,

$$\begin{aligned} \frac{d^n\varphi }{dt^n } (0; G) = \frac{d^n \psi }{dt^n } (0; G) = \left\langle \! \left\langle \widehat{L}^n G, q^{\Lambda , N}_0 \right\rangle \! \right\rangle = \left\langle \! \left\langle G,( L^\Delta )^n q^{\Lambda , N}_0 \right\rangle \! \right\rangle . \end{aligned}$$
(3.52)

From this we can get that, cf. (3.49),

$$\begin{aligned} \left\langle \! \left\langle G, k^{\Lambda , N}_t \right\rangle \! \right\rangle = \left\langle \! \left\langle G, q^{\Lambda , N}_t \right\rangle \! \right\rangle \ge 0, \qquad \mathrm{for} \quad G\in B^+_\mathrm{bs}(\Gamma _0), \end{aligned}$$
(3.53)

provided the series

$$\begin{aligned} \sum \limits _{m=0}^\infty \frac{t^m}{m!} \left\langle \! \left\langle G,( L^\Delta )^m q^{\Lambda , N}_0 \right\rangle \! \right\rangle \end{aligned}$$

converges for all \(t\in [0,T(\vartheta ))\), cf. (3.52). But the latter indeed holds true in view of (3.21), which implies that (3.53) holds for all \(t\in [0, T(\vartheta ))\).

In the Appendix, we show that, for each \(G\in B_\mathrm{bs}^+ (\Gamma _0)\) and any \(t\in [0, T(\vartheta ))\),

$$\begin{aligned} \left\langle \! \left\langle G, k_t \right\rangle \! \right\rangle = \lim _{n\rightarrow +\infty } \lim _{l\rightarrow +\infty } \left\langle \! \left\langle G, k^{\Lambda _n, N_l}_t \right\rangle \! \right\rangle , \end{aligned}$$
(3.54)

for certain increasing sequences \(\{\Lambda _n\}_{n\in \mathbb {N}}\) and \(\{N_l\}_{l\in \mathbb {N}}\) such that \(N_l \rightarrow + \infty \) and \(\Lambda _n \rightarrow \mathbb {R}^d\). Then by (3.54) and (3.53) we obtain (3.38), and thus complete the proof. \(\square \)

4 Mesoscopic Dynamics

As mentioned above, the mesoscopic description of the considered model is obtained by means of a Vlasov-type scaling, originally developed for describing mesoscopic properties of plasma. We refer to [7, 29, 31] for the general concepts in this field, and to [12], where the peculiarities of the scaling method we use are presented along with an updated bibliography on this topic.

4.1 The Vlasov Hierarchy

The main idea of the scaling we use in this article is to make the particle system more and more dense while making the interaction correspondingly weaker. This corresponds to the so-called mean-field approximation widely employed in theoretical physics. Note that we do not scale time, which would be the case for a macroscopic scaling. The object of our manipulations will be the problem (1.9). The scaling parameter \(\varepsilon >0\) will tend to zero. The first step is to assume that the initial state depends on \(\varepsilon \) so that the correlation function \(k_0^{(\varepsilon )}\) diverges as \(\varepsilon \rightarrow 0\) in such a way that the so-called renormalized correlation function

$$\begin{aligned} k_{0, \mathrm{ren}}^{(\varepsilon )} (\eta ) := \varepsilon ^{|\eta |} k_0^{(\varepsilon )} (\eta ) \end{aligned}$$
(4.1)

converges, as \(\varepsilon \rightarrow 0\), to the correlation function \(r_0\) of a certain measure. Let \(k_0^{(\varepsilon ,n)}:(\mathbb {R}^d)^n \rightarrow \mathbb {R}\) denote the \(n\)-particle ‘component’ of \(k_0^{(\varepsilon )}\). Then our assumption means, in particular, that

$$\begin{aligned} k_0^{(\varepsilon ,1)} \sim \varepsilon ^{-1}. \end{aligned}$$
(4.2)

Then the second step is to consider the Cauchy problem

$$\begin{aligned} \frac{d}{dt}k^{(\varepsilon )}_t = L_{\varepsilon }^\Delta k^{(\varepsilon )}_t, \qquad k^{(\varepsilon )}_t|_{t=0} = k^{(\varepsilon )}_0, \end{aligned}$$
(4.3)

where \(L_{\varepsilon }^\Delta \) is as in (3.1) but with \(\phi \) multiplied by \(\varepsilon \). As can be seen from (3.19), the solution \(k^{(\varepsilon )}_t\), which exists in view of Theorem 3.1, diverges as \(\varepsilon \rightarrow 0\). Thus, similarly to (4.1), we pass to

$$\begin{aligned} k_{t, \mathrm{ren}}^{(\varepsilon )} (\eta ) = \varepsilon ^{|\eta |} k_t^{(\varepsilon )} (\eta ), \end{aligned}$$
(4.4)

which means that instead of (4.3) we are going to solve the following problem

$$\begin{aligned} \frac{d}{dt}k^{(\varepsilon )}_{t, \mathrm{ren}} = L_{\varepsilon ,\mathrm{ren}} k^{(\varepsilon )}_{t,\mathrm{ren}}, \qquad k^{(\varepsilon )}_{t,\mathrm{ren}}|_{t=0} = k^{(\varepsilon )}_{0,\mathrm{ren}}, \end{aligned}$$
(4.5)

with

$$\begin{aligned} L_{\varepsilon ,\mathrm{ren}} = R_\varepsilon ^{-1} L_{\varepsilon }^\Delta R_\varepsilon ,\qquad \left( R_\varepsilon k\right) (\eta ) := \varepsilon ^{-|\eta |} k(\eta ). \end{aligned}$$
(4.6)
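
Indeed (a one-line check, spelled out here for convenience): by (4.4) and (4.6) we have \(k^{(\varepsilon )}_{t, \mathrm{ren}} = R_\varepsilon ^{-1} k^{(\varepsilon )}_t\), i.e., \(k^{(\varepsilon )}_t = R_\varepsilon k^{(\varepsilon )}_{t, \mathrm{ren}}\), so that (4.5) follows from (4.3) by

$$\begin{aligned} \frac{d}{dt}k^{(\varepsilon )}_{t, \mathrm{ren}} = R_\varepsilon ^{-1}\, \frac{d}{dt}k^{(\varepsilon )}_{t} = R_\varepsilon ^{-1} L_{\varepsilon }^\Delta k^{(\varepsilon )}_{t} = \left( R_\varepsilon ^{-1} L_{\varepsilon }^\Delta R_\varepsilon \right) k^{(\varepsilon )}_{t, \mathrm{ren}} = L_{\varepsilon ,\mathrm{ren}} k^{(\varepsilon )}_{t, \mathrm{ren}}. \end{aligned}$$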

Remark 4.1

Since \(k^{(\varepsilon )}_0\) is a correlation function, by Theorem 3.7 we know that \(k^{(\varepsilon )}_t\) is the correlation function of a unique measure \(\mu ^{(\varepsilon )}_t\). If \(\mu _0^{(1)}\) is a Poisson measure with density \(k_0^{(1,1)} = \varrho _0\), then also \(\mu _0^{(\varepsilon )}\) with density \(k_0^{(\varepsilon ,1)} = \varepsilon ^{-1}\varrho _0\) is a Poisson measure. We can expect that, for \(t>0\), \(k^{(\varepsilon )}_{t, \mathrm{ren}}\) has a nontrivial limit as \(\varepsilon \rightarrow 0^+\), only if \(k^{(\varepsilon )}_{t }(\eta ) \le [k^{(\varepsilon ,1)}_{t }(x)]^{|\eta |}\), cf. (4.2) and (4.4). For this to hold, \(\mu ^{(\varepsilon )}_t\) should be sub-Poissonian, cf. Definition 2.3 and Remark 2.4. That is, the evolution \(\mu _0^{(\varepsilon )} \mapsto \mu _t^{(\varepsilon )}\) should preserve sub-Poissonicity, which is the case by Theorem 3.1 in view of (3.8).
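
To illustrate the renormalization (4.1) in the Poissonian setting of this remark: if \(\mu _0^{(\varepsilon )}\) is the Poisson measure \(\pi _{\varepsilon ^{-1}\varrho _0}\), then, cf. (2.9) and (2.10),

$$\begin{aligned} k^{(\varepsilon )}_0 (\eta ) = \prod _{x\in \eta } \varepsilon ^{-1} \varrho _0(x) = \varepsilon ^{-|\eta |} e(\varrho _0, \eta ), \qquad k^{(\varepsilon )}_{0, \mathrm{ren}} (\eta ) = \varepsilon ^{|\eta |} k^{(\varepsilon )}_0 (\eta ) = e(\varrho _0, \eta ), \end{aligned}$$

which is independent of \(\varepsilon \) and is the correlation function of \(\pi _{\varrho _0}\); thus \(r_0 = e(\varrho _0, \cdot )\) in this case.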

By (3.1) and (4.6), we have

$$\begin{aligned} \left( {L}_{\varepsilon ,\mathrm{ren}} k\right) (\eta )&= \sum \limits _{y\in \eta } \int \limits _{\mathbb {R}^d} a (x-y) e( \tau ^{(\varepsilon )}_y, \eta \setminus y \cup x)\\&\quad \times \left( \int \limits _{\Gamma _0}e(\varepsilon ^{-1}t^{(\varepsilon )}_y, \xi ) k( \xi \cup x \cup \eta \setminus y) \lambda (d \xi ) \right) dx \nonumber \\&\quad - \int \limits _{\Gamma _0} k(\xi \cup \eta ) \left( \sum \limits _{x\in \eta } \int \limits _{\mathbb {R}^d} a(x-y) e(\tau ^{(\varepsilon )}_y,\eta ) \right. \nonumber \\&\quad \times \left. e(\varepsilon ^{-1} t^{(\varepsilon )}_y, \xi ) d y \right) \lambda (d\xi ), \nonumber \end{aligned}$$
(4.7)

where, cf. (3.2),

$$\begin{aligned} t^{(\varepsilon )}_x (y) = e^{-\varepsilon \phi (x-y)} - 1, \qquad \tau ^{(\varepsilon )}_x (y) = t^{(\varepsilon )}_x (y) + 1. \end{aligned}$$

As in (3.18), for any \(\vartheta '\in \mathbb {R}\) and \(\vartheta '' > \vartheta '\), we have

$$\begin{aligned} \Vert L_{\varepsilon ,\mathrm{ren}} \Vert _{\vartheta '' \vartheta '} \le \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( c^{(\varepsilon )}_\phi e^{-\vartheta ''} \right) , \end{aligned}$$
(4.8)

where, cf. (1.6),

$$\begin{aligned} c^{(\varepsilon )}_\phi = \varepsilon ^{-1}\int \limits _{\mathbb {R}^d} \left( 1 - e^{-\varepsilon \phi (x)}\right) d x. \end{aligned}$$

Suppose now that \(\phi \) is in \(L^1(\mathbb {R}^d)\) and set

$$\begin{aligned} \langle \phi \rangle = \int \limits _{\mathbb {R}^d} \phi (x) d x. \end{aligned}$$

Recall that we still assume \(\phi \ge 0\); hence \(1 - e^{-\varepsilon \phi (x)} \le \varepsilon \phi (x)\), so that \(c^{(\varepsilon )}_\phi \le \langle \phi \rangle \) for every \(\varepsilon >0\) and \(c^{(\varepsilon )}_\phi \rightarrow \langle \phi \rangle \) as \(\varepsilon \rightarrow 0^+\). Then

$$\begin{aligned} \Vert L_{\varepsilon ,\mathrm{ren}} \Vert _{\vartheta '' \vartheta '} \le \sup _{\varepsilon >0} \{ \mathrm{RHS(4.8)}\} = \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( \langle \phi \rangle e^{-\vartheta ''} \right) . \end{aligned}$$
(4.9)

Let us now, informally, pass in (4.7) to the limit \(\varepsilon \rightarrow 0\). Then we get the following operator

$$\begin{aligned} \left( L_V k\right) (\eta )&= \sum \limits _{y\in \eta } \int \limits _{\mathbb {R}^d} a (x-y)\int \limits _{\Gamma _0}e( - \phi (y-\cdot ), \xi )\\&\quad \times \, k( \xi \cup x \cup \eta \setminus y) \lambda (d \xi ) dx \nonumber \\&- \int \limits _{\Gamma _0} k(\xi \cup \eta ) \sum \limits _{x\in \eta } \int \limits _{\mathbb {R}^d} a(x-y) \nonumber \\&\quad \times e(- \phi (y-\cdot ), \xi ) d y \lambda (d\xi ). \nonumber \end{aligned}$$
(4.10)

It certainly obeys

$$\begin{aligned} \Vert L_V \Vert _{\vartheta '' \vartheta '} \le \frac{2\alpha }{e(\vartheta '' - \vartheta ')} \exp \left( \langle \phi \rangle e^{-\vartheta ''} \right) , \end{aligned}$$
(4.11)

and hence along with (4.3) we can consider the problem

$$\begin{aligned} \frac{d}{dt} r_t = L_V r_t, \qquad r_t|_{t=0} = r_0, \end{aligned}$$
(4.12)

which is called the Vlasov hierarchy for the Kawasaki system we consider. Repeating the arguments used in the proof of Theorem 3.1, we obtain the following

Proposition 4.2

For every \(\vartheta _0\in \mathbb {R}\), there exists \(T_* = T_* (\vartheta _0, \alpha , \langle \phi \rangle )\) such that the problem (4.5) (resp. (4.12)) with any \(\varepsilon >0\) and \(k^{(\varepsilon )}_0\in \mathcal {K}_{\vartheta _0}\) (resp. \(r_0 \in \mathcal {K}_{\vartheta _0}\)) has a unique classical solution \(k^{(\varepsilon )}_t\in \mathcal {K}_{\vartheta (t)}\) (resp. \(r_t\in \mathcal {K}_{\vartheta (t)}\)) for \(t\in [0, T_*)\).

As mentioned in Remark 4.1, \(k^{(\varepsilon )}_t\) is also a correlation function if \(k^{(\varepsilon )}_0\) is so. However, this need not be the case for \(r_t\), even if \(r_0 = k^{(\varepsilon )}_0\). Moreover, we do not know how ‘close’ \(r_t\) is to \(k^{(\varepsilon )}_t\), as the passage from \(L_{\varepsilon , \mathrm{ren}}\) to \(L_V\) was only informal. In the remaining part of the article we answer both of these questions.

4.2 The Vlasov Equation

Here we show that the problem (4.12) has a very particular solution, which gives meaning to the whole construction. For \(a\) as in (1.3) and an appropriate \(g:\mathbb {R}^d \rightarrow \mathbb {R}\), we write

$$\begin{aligned} (a*g)(x) = \int \limits _{\mathbb {R}^d} a(x-y) g(y) d y, \end{aligned}$$

and similarly for \(\phi *g\). Then let us consider in \(L^\infty (\mathbb {R}^d)\) the following Cauchy problem

$$\begin{aligned} \frac{d}{dt}\varrho _t(x)&= \left( a *\varrho _t \right) (x) \exp \left[ - (\varrho _t *\phi )(x)\right] \\&- \varrho _t (x) \left( a *\exp \left( - \varrho _t *\phi \right) \right) (x), \nonumber \\ \varrho _t|_{t=0}&= \varrho _0. \nonumber \end{aligned}$$
(4.13)

Denote

$$\begin{aligned} \varDelta ^+&= \left\{ \varrho \in L^\infty \left( \mathbb {R}^d \right) : \varrho (x) \ge 0 \quad \mathrm{for} \quad \mathrm{a. a.} \quad x \right\} , \\ \varDelta _u&= \left\{ \varrho \in L^\infty \left( \mathbb {R}^d \right) : \Vert \varrho \Vert _{L^\infty (\mathbb {R}^d)} \le u \right\} , \qquad u>0,\\ \varDelta ^{+}_u&= \varDelta ^{+} \cap \varDelta _u. \end{aligned}$$

Lemma 4.3

Let \(\vartheta _0\) and \(T_*\) be as in Proposition 4.2. Suppose that, for some \(T\in (0,T_*)\), the problem (4.13) with \(\varrho _0 \in \varDelta ^{+}_{u_0}\) has a unique classical solution \(\varrho _t\in \varDelta ^{+}_{u_T}\) on \([0,T]\), for some \(u_T>0\). Then, for \(\vartheta _0 = - \log u_0\) and \(\vartheta (T)= - \log u_T\), the solution \(r_t\in \mathcal {K}_{\vartheta (T)}\) of (4.12) as in Proposition 4.2 with \(r_0 (\eta ) = e(\varrho _0 , \eta )\) is given by

$$\begin{aligned} r_t (\eta ) = e(\varrho _t, \eta )= \prod _{x\in \eta } \varrho _t(x). \end{aligned}$$
(4.14)

Proof

First of all we note that, for a given \(\vartheta \), \(e(\varrho , \cdot ) \in \mathcal {K}_\vartheta \) if and only if \(\varrho \in \varDelta _u\) with \(u= e^{-\vartheta }\), see (3.8). Now set \(\tilde{r}_t = e(\varrho _t, \cdot )\) with \(\varrho _t\) solving (4.13). This \(\tilde{r}_t\) solves (4.12), which can easily be checked by computing \(d/dt\) and employing (4.13). In view of the uniqueness as in Proposition 4.2, we then have \(\tilde{r}_t = r_t\) on \([0,T]\), from which it can be continued to \([0,T_*)\). \(\square \)
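
For the reader's convenience, here is the check for a single-point configuration \(\eta = \{x\}\); the general case is analogous. With \(k = e(\varrho _t, \cdot )\), using \(e(f,\xi ) e(g,\xi ) = e(fg,\xi )\) and the well-known identity \(\int _{\Gamma _0} e(f, \xi ) \lambda (d\xi ) = \exp \left( \int _{\mathbb {R}^d} f(z) d z\right) \) for the Lebesgue–Poisson measure \(\lambda \) (recall that \(\phi \in L^1(\mathbb {R}^d)\) in this section and \(\varrho _t\) is bounded), the right-hand side of (4.10) becomes

$$\begin{aligned} \left( L_V e(\varrho _t, \cdot )\right) (\{x\}) = \left( a *\varrho _t \right) (x) \exp \left[ - (\varrho _t *\phi )(x)\right] - \varrho _t (x) \left( a *\exp \left( - \varrho _t *\phi \right) \right) (x), \end{aligned}$$

which is precisely the right-hand side of (4.13), whereas the left-hand side of (4.12) at \(\eta = \{x\}\) is \(\frac{d}{dt}\varrho _t(x)\).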

Remark 4.4

As (4.14) is the correlation function of the Poisson measure \(\pi _{\varrho _t}\), see (2.9) and (2.10), the above lemma establishes the so-called preservation of chaos, or propagation of chaos, in time. Indeed, the most chaotic states are those corresponding to Poisson measures, cf. (2.20), (2.21), and (2.22).

Let us show now that the problem (4.13) does have the solution we need. In a standard way, (4.13) can be transformed into the following integral equation

$$\begin{aligned} \qquad \varrho _t(x)&= F_t(\varrho )(x) := \varrho _0(x)e^{-\alpha t}\\&+ \int \limits _0^t \exp \left( - \alpha (t-s)\right) \left( a *\varrho _s \right) (x) \exp \left[ - (\varrho _s *\phi )(x)\right] ds \nonumber \\&+ \int \limits _0^t \exp \left( - \alpha (t-s)\right) \varrho _s (x) \left[ a *\left( 1- \exp \left( - \varrho _s *\phi \right) \right) \right] (x) ds, \nonumber \end{aligned}$$
(4.15)

that is, \([0,T)\ni t \mapsto \varrho _t \in L^\infty (\mathbb {R}^d)\) is a classical solution of (4.13) if and only if it solves (4.15). Suppose \(\varrho _t \in \varDelta ^+\) is such a solution. Then we set

$$\begin{aligned} u_t = \Vert \varrho _t \Vert _{L^\infty \left( \mathbb {R}^d \right) }, \qquad t \in [0,T). \end{aligned}$$
(4.16)

Since \(\phi \ge 0\) and \(\varrho _t \in \varDelta ^+\), we have \(0\le \exp \left[ - (\varrho _s *\phi )(x)\right] \le 1\) and, cf. (1.4), \(\Vert a * \varrho _s\Vert _{L^\infty (\mathbb {R}^d)} \le \alpha u_s\) as well as \(\Vert a * (1 - \exp (-\varrho _s *\phi ))\Vert _{L^\infty (\mathbb {R}^d)} \le \alpha \). Hence, from (4.15) we get, for \(v_t:= u_t \exp (\alpha t)\),

$$\begin{aligned} v_t \le v_0 + 2 \alpha \int \limits _0^t v_s ds, \end{aligned}$$

from which by the Gronwall inequality we obtain \(v_t \le v_0 \exp ( 2 \alpha t)\); and hence,

$$\begin{aligned} u_t \le u_0 e^{\alpha t}. \end{aligned}$$
(4.17)

In a similar way, one shows that, for \(\varrho _0 \in \varDelta ^{+}_{u_0}\) and \(\varrho _s \in \varDelta ^{+}_{u_t}\) for all \(s\in [0,t]\),

$$\begin{aligned} F_t (\varrho ) \in \varDelta ^{+}_{ u_t} , \qquad u_t:= \frac{u_0}{2- e^{\alpha t}}. \end{aligned}$$
(4.18)
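
Indeed (a routine estimate, spelled out here): if \(\varrho _0 \in \varDelta ^{+}_{u_0}\) and \(\varrho _s \in \varDelta ^{+}_{u_t}\) for all \(s\in [0,t]\), then, estimating the right-hand side of (4.15) as above,

$$\begin{aligned} \Vert F_t (\varrho )\Vert _{L^\infty \left( \mathbb {R}^d \right) } \le u_0 e^{-\alpha t} + 2 \alpha u_t \int \limits _0^t e^{-\alpha (t-s)} ds = u_0 e^{-\alpha t} + 2 u_t \left( 1 - e^{-\alpha t}\right) \le u_t, \end{aligned}$$

where the last inequality is equivalent to \(u_t \ge u_0/(2 - e^{\alpha t})\) whenever \(e^{\alpha t} < 2\), which explains the choice of \(u_t\) in (4.18); the positivity \(F_t(\varrho ) \ge 0\) is evident from (4.15) since \(a\), \(\phi \), and \(\varrho \) are nonnegative.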

Now for \(\varrho _0\in \varDelta _{u_0}^{+}\) and some \(t>0\) such that \(e^{\alpha t} < 2\), cf. (4.18), we consider the sequence

$$\begin{aligned} \varrho ^{(0)}_t=\varrho _0,\qquad \varrho _t^{(n)}=F_t(\varrho ^{(n-1)}), \quad n\in \mathbb {N}. \end{aligned}$$

Obviously, each \(\varrho ^{(n)}_t\) is in \(\varDelta ^+_{u_t}\). Now let us find \(T<\min \{T_*, \log 2/ \alpha \}\), \(T_*\) being as in Lemma 4.3, such that the sequence of

$$\begin{aligned} \delta _n:= \sup _{t\in [0,T]} \left\| \varrho _t^{(n)} - \varrho _t^{(n-1)}\right\| _{L^\infty \left( \mathbb {R}^d \right) }, \quad n\in \mathbb {N} \end{aligned}$$
(4.19)

is summable, which would guarantee that, for each \(t\le T\), \(\{\varrho _t^{(n)}\}_{n\in \mathbb {N}_0}\) is a Cauchy sequence. For \(\varrho _s^{(n-1)}, \varrho _s^{(n-2)} \in \varDelta _{u_T}\), we have

$$\begin{aligned} \left\| 1 - \exp \left( \phi *\left( \varrho _s^{(n-1)} - \varrho _s^{(n-2)}\right) \right) \right\| {}_{L^\infty \left( \mathbb {R}^d \right) } \le \left\| \phi *\left( \varrho _s^{(n-1)} - \varrho _s^{(n-2)}\right) \right\| {}_{L^\infty \left( \mathbb {R}^d \right) }&\\ \times \sum \limits _{m=0}^\infty \frac{1}{m!} \frac{1}{m+1} \left\| \phi *\left( \varrho _s^{(n-1)} - \varrho _s^{(n-2)}\right) \right\| {}^m_{L^\infty \left( \mathbb {R}^d \right) }&\qquad \\ \le \langle \phi \rangle \Vert \varrho _s^{(n-1)} - \varrho _s^{(n-2)} \Vert _{L^\infty \left( \mathbb {R}^d \right) } \exp \left( 2 \langle \phi \rangle u_T \right) . \quad&\end{aligned}$$

By means of this estimate, we obtain from (4.15) and (4.19)

$$\begin{aligned} \delta _n \le q(T) \delta _{n-1}, \end{aligned}$$

where

$$\begin{aligned} q(T) = 2 \left( 1 - e^{-\alpha T}\right) \left( 1 + \langle \phi \rangle u_0 \exp \left( \alpha T + 2 \langle \phi \rangle u_0 e^{\alpha T} \right) \right) . \end{aligned}$$
(4.20)

Since \(q(T)\) is a continuous increasing function such that \(q(0) =0\), one finds \(T>0\) such that \(q(T) < 1\). For this \(T\), the sequence \(\{\varrho _t^{(n)}\}_{n\in \mathbb {N}_0}\) converges to some \(\varrho _t \in \varDelta _{u_T}^{+}\), uniformly on \([0,T]\). Clearly, this \(\varrho _t \) solves (4.15) and hence (4.13).
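
The Picard scheme above is constructive and can be mimicked numerically. The following Python sketch (not part of the paper; the one-dimensional periodic box, the particular kernels \(a\) and \(\phi \), the initial density \(\varrho _0\), and all numerical parameters are illustrative assumptions) iterates \(\varrho ^{(n)}_t = F_t(\varrho ^{(n-1)})\) from (4.15), computing the convolutions \(a*\varrho \) and \(\varrho *\phi \) via FFT.

```python
# Minimal numerical sketch of the Picard iteration (4.15), assuming a
# one-dimensional periodic box; kernels, initial density and parameters
# below are illustrative choices only, not taken from the paper.
import numpy as np

Lbox, n = 10.0, 256                 # periodic box [0, Lbox) with n grid points
T, n_steps, n_iter = 0.05, 50, 30   # T small enough that q(T) < 1, cf. (4.20)
x = np.linspace(0.0, Lbox, n, endpoint=False)
dx = Lbox / n
d = np.minimum(x, Lbox - x)         # periodic distance to the origin
a = np.exp(-d ** 2)                 # jump kernel a >= 0, symmetric, cf. (1.4)
phi = 0.5 * np.exp(-d)              # repulsive potential phi >= 0, phi in L^1
alpha = a.sum() * dx                # alpha = int a(x) dx

def conv(f, g):
    # periodic convolution (f * g)(x), computed via FFT
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))) * dx

def trap(y, h):
    # trapezoidal rule along axis 0 with uniform step h
    return h * (y.sum(axis=0) - 0.5 * (y[0] + y[-1])) if len(y) > 1 else 0.0 * y[0]

rho0 = 0.3 * (1.0 + 0.5 * np.cos(2.0 * np.pi * x / Lbox))   # initial density in Delta^+
ts = np.linspace(0.0, T, n_steps + 1)
dt = ts[1] - ts[0]

rho = np.tile(rho0, (n_steps + 1, 1))     # zeroth iterate: rho^{(0)}_t = rho_0
for _ in range(n_iter):                   # rho^{(k)}_t = F_t(rho^{(k-1)}), cf. (4.15)
    rhs = np.array([conv(a, r) * np.exp(-conv(r, phi))
                    + r * conv(a, 1.0 - np.exp(-conv(r, phi))) for r in rho])
    new = np.empty_like(rho)
    for i, t in enumerate(ts):
        w = np.exp(-alpha * (t - ts[: i + 1]))[:, None]
        new[i] = rho0 * np.exp(-alpha * t) + trap(w * rhs[: i + 1], dt)
    rho = new

print("sup-norm of rho_T     :", rho[-1].max())
print("a priori bound (4.17) :", rho0.max() * np.exp(alpha * T))
```

For such a short horizon the iterates stabilize quickly, and the computed sup-norm of \(\varrho _T\) stays below the a priori bound \(u_0 e^{\alpha T}\) of (4.17).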

Theorem 4.5

The unique classical solution of (4.12) with \(r_0 = e(\varrho _0, \cdot )\), \(\varrho _0 \in \varDelta ^{+}\), exists for all \(t>0\) and is given by (4.14) with \(\varrho _t \in \varDelta ^{+}\) being the solution of (4.13). Moreover, this solution obeys

$$\begin{aligned} r_t (\eta ) \le r_0 (\eta ) \exp ( \alpha |\eta | t). \end{aligned}$$
(4.21)

Proof

For a given \(\varrho _0 \in \varDelta ^+\), we find \(T\) such that \(q(T)<1\), cf. (4.16) and (4.20). Then there exists a unique classical solution \(\varrho _t \in \varDelta ^+_{u_T}\) of (4.13) on \([0,T]\), which by Lemma 4.3 yields the solution (4.14). Since \(\varrho _t\) obeys the a priori bound (4.17), it does not explode and hence can be continued to all \(t>0\), which also yields the continuation of \(r_t\). Finally, the bound (4.21) follows from (4.17). \(\square \)

4.3 The Scaling Limit \(\varepsilon \rightarrow 0\)

Our final task in this work is to show that the solution \(k_{t,\mathrm{ren}}^{(\varepsilon )}\) of (4.5) converges in \(\mathcal {K}_\vartheta \), uniformly on compact subsets of \([0,T_*)\), to the solution of (4.12), see Proposition 4.2. Here we have to impose an additional condition on the potential \(\phi \), which, however, seems quite natural. Recall that in this section we suppose \(\phi \in L^1 (\mathbb {R}^d)\).

Theorem 4.6

Let \(\vartheta _0\) and \(T_*\) be as in Proposition 4.2, and for \(T\in [0,T_*)\), take \(\vartheta \) such that \(T < T( \vartheta )\), see (3.13). Assume also that \(\phi \in L^1 (\mathbb {R}^d) \cap L^\infty (\mathbb {R}^d)\) and consider the problems (4.5) and (4.12) with \(k^{(\varepsilon )}_{0, \mathrm{ren}} = r_0 \in {\mathcal {K}}_{\vartheta _0}\). For their solutions \(k^{(\varepsilon )}_{t, \mathrm{ren}}\) and \(r_t\), it follows that \(k^{(\varepsilon )}_{t, \mathrm{ren}} \rightarrow r_t\) in \({\mathcal {K}}_{\vartheta }\), as \(\varepsilon \rightarrow 0\), uniformly on \([0,T]\).

Proof

For \(n\in \mathbb {N}\), let \(k^{(\varepsilon )}_{t,n}\) and \(r_{t,n}\) be defined as in (3.19) with \(L_{\varepsilon , \mathrm{ren}}\) and \(L_V\), respectively. As in the proof of Theorem 3.1, one can show that the sequences of \(k^{(\varepsilon )}_{t,n}\) and \(r_{t,n}\) converge in \(\mathcal {K}_\vartheta \) to \(k^{(\varepsilon )}_{t,\mathrm{ren}}\) and \(r_{t}\), respectively, uniformly on \([0,T]\). Then, for \(\delta >0\), one finds \(n\in \mathbb {N}\) such that, for all \(t\in [0,T]\),

$$\begin{aligned} \left\| k^{(\varepsilon )}_{t,n} - k^{(\varepsilon )}_{t,\mathrm{ren}}\right\| _{\vartheta } + \left\| r_{t,n} - r_t \right\| _{\vartheta } < \delta /2. \end{aligned}$$

From (3.19) we then have

$$\begin{aligned} \left\| k^{(\varepsilon )}_{t,\mathrm{ren}} - r_t \right\| _\vartheta&\le \left\| \sum \limits _{m=1}^n \frac{1}{m!} t^m \left( L_{\varepsilon , \mathrm{ren}}^m - L^m_V \right) r_0 \right\| {}_\vartheta + \frac{\delta }{2} \\&\le \Vert L_{\varepsilon , \mathrm{ren}} - L_V\Vert _{\vartheta _0 \vartheta } \Vert r_0\Vert _{\vartheta _0} T \exp \left( T b(\vartheta )\right) + \frac{\delta }{2}, \nonumber \end{aligned}$$
(4.22)

where, see (4.9) and (4.11),

$$\begin{aligned} b(\vartheta ) := \frac{2\alpha }{e(\vartheta _0 - \vartheta )} \exp \left( \langle \phi \rangle e^{-\vartheta } \right) . \end{aligned}$$

Here we used the following identity

$$\begin{aligned}&L_{\varepsilon , \mathrm{ren}}^m - L^m_V = \left( L_{\varepsilon , \mathrm{ren}} - L_V \right) L_{\varepsilon , \mathrm{ren}}^{m-1} + L_V \left( L_{\varepsilon , \mathrm{ren}} - L_V \right) L_{\varepsilon , \mathrm{ren}}^{m-2}\qquad \qquad \\&\qquad + \cdots + L_V^{m-2} \left( L_{\varepsilon , \mathrm{ren}} - L_V \right) L_{\varepsilon , \mathrm{ren}} + L_V^{m-1} \left( L_{\varepsilon , \mathrm{ren}} - L_V \right) . \end{aligned}$$

Thus, we have to show that

$$\begin{aligned} \Vert L_{\varepsilon , \mathrm{ren}} -L_V\Vert _{\vartheta _0\vartheta }\rightarrow 0, \quad \mathrm{as} \quad \varepsilon \rightarrow 0, \end{aligned}$$
(4.23)

which will allow us to make the first summand on the right-hand side of (4.22) also smaller than \(\delta /2\) and thereby complete the proof.

Subtracting (4.10) from (4.7) we get

$$\begin{aligned} \left( L_{\varepsilon , \mathrm{ren}} -L_V \right) k(\eta )&= \sum \limits _{y \in \eta } \int \limits _{\mathbb {R}^d} \int \limits _{\Gamma _0} a(x-y) k(\xi \cup x \cup \eta \setminus y)\\&\quad \times Q_{\varepsilon } (y,\eta \setminus y \cup x, \xi ) \lambda (d \xi ) d x \nonumber \\&\quad - \sum \limits _{x \in \eta } \int \limits _{\mathbb {R}^d} \int \limits _{\Gamma _0} a(x-y) k(\xi \cup \eta ) \nonumber \\&\quad \times Q_{\varepsilon } (y,\eta ,\xi ) \lambda (d \xi ) d y \nonumber \end{aligned}$$
(4.24)

where

$$\begin{aligned} Q_{\varepsilon } (y,\zeta ,\xi )&:= e(\tau _y^{(\varepsilon )}, \zeta ) e(\varepsilon ^{-1}t^{(\varepsilon )}_y,\xi ) - e (- \phi (y - \cdot ), \xi )\\&= e(\varepsilon ^{-1}t^{(\varepsilon )}_y,\xi ) - e (- \phi (y - \cdot ), \xi ) \\&\quad - \left[ 1 - e(\tau _y^{(\varepsilon )}, \zeta ) \right] e(\varepsilon ^{-1}t^{(\varepsilon )}_y,\xi ). \end{aligned}$$

For \(t>0\), the function \(e^{-t} - 1 + t\) takes positive values only; hence,

$$\begin{aligned} \Psi (t) : = (e^{-t} - 1 + t)/t^2, \quad t>0, \end{aligned}$$

is positive and bounded, say by \(C>0\). Then by means of the inequality

$$\begin{aligned} b_1 \cdots b_n - a_1 \cdots a_n \le \sum \limits _{i=1}^n (b_i - a_i) b_1 \cdots b_{i-1} b_{i+1} \cdots b_n, \quad b_i \ge a_i >0, \end{aligned}$$

we obtain

$$\begin{aligned} \left| e(\varepsilon ^{-1}t^{(\varepsilon )}_y,\xi ) - e (- \phi (y - \cdot ), \xi ) \right|&\le \sum \limits _{z\in \xi } \varepsilon [\phi (y-z)]^2 \Psi \left( \varepsilon \phi (y-z)\right) \\&\times \prod _{u\in \xi \setminus z} \phi (y-u) \\&\le \varepsilon C \sum \limits _{z\in \xi } [\phi (y-z)]^2 e(\phi (y-\cdot ), \xi \setminus z), \end{aligned}$$

and

$$\begin{aligned} \left| \left[ 1 - e(\tau _y^{(\varepsilon )}, \zeta ) \right] e(\varepsilon ^{-1}t^{(\varepsilon )}_y,\xi ) \right| \le \varepsilon \sum \limits _{z\in \zeta } \phi (y-z) e(\phi (y-\cdot ), \xi ). \end{aligned}$$

Then from (4.24) for \(\lambda \)-almost all \(\eta \) we have, see (3.8),

$$\begin{aligned} \left| \left( L_{\varepsilon , \mathrm{ren}} -L_V \right) k(\eta ) \right| \le \varepsilon \Vert k\Vert _{\vartheta _0} \left( \widetilde{C} |\eta |e^{- \vartheta _0 |\eta |} + D (\eta ) e^{- \vartheta _0 |\eta |}\right) , \end{aligned}$$
(4.25)

with

$$\begin{aligned} \widetilde{C} = 2 C \alpha \Vert \phi \Vert _{L^\infty \left( \mathbb {R}^d \right) } \langle \phi \rangle e^{-\vartheta _0} \end{aligned}$$

and

$$\begin{aligned} D (\eta ) = 2 \alpha \exp \left( \langle \phi \rangle e^{-\vartheta _0} \right) \Vert \phi \Vert _{L^\infty \left( \mathbb {R}^d \right) } |\eta | (|\eta |+1). \end{aligned}$$

Thus, we conclude that the expression in \(( \cdot )\) in the right-hand side of (4.25) is in \(\mathcal {K}_\vartheta \), which yields (4.23) and hence completes the proof. \(\square \)