1 Introduction

1.1 Posing

The development of a mathematical theory of complex living systems is a challenging task of modern mathematics [4]. In many cases of such systems, one deals with birth-and-death processes, the theory of which traces back to works by Kolmogorov and Feller, see [9, Chapter XVII] and e.g. [16, 30] for a more recent account of the related concepts and results. In the simplest models of this kind, the system is finite and the state space is \(\mathbb {N}_0 :=\mathbb {N}\cup \{0\}\). Then the time evolution of the probability of having n particles in the system is obtained from the Kolmogorov equation in which the generator is a tridiagonal infinite matrix containing the birth and death rates \(\lambda _n\) and \(\mu _n\), respectively. If their increase is controlled by affine functions of n, the evolution is obtained with the help of a stochastic semigroup, see e.g. [3, 22] and the papers quoted in these works. However, if \(\lambda _n\) and \(\mu _n\) increase faster than n, one would not expect that the evolution takes place, for all \(t>0\), in one and the same Banach space and thus is described by a \(C_0\)-semigroup of operators acting in this space. For infinite systems, the situation is much more complex, as the very definition of the Kolmogorov equation cannot be performed directly (since \(\lambda _\infty \) and \(\mu _\infty \) would be infinite in such cases). The main result of the present paper consists in constructing the evolution of states of an infinite birth-and-death system of particles placed in \(\mathbb {R}^d\) with ‘rates’ that, roughly speaking, increase as \(n^2\), cf. (3.9) below. This evolution takes place in an ascending sequence of Banach spaces and is obtained by a method developed in the paper. To the best of our knowledge, this is the first construction of this type.

We continue, cf. [10, 12, 13, 23], studying the model introduced in [6, 7, 24]. It describes an infinite evolving population of identical point entities (particles) distributed over \(\mathbb {R}^d\), \(d\ge 1\), which reproduce themselves and die, also due to competition. This is one of the most important individual-based models in studying large ecological communities (e.g. of perennial plants), see [27] and [25, page 1311]. As is now commonly adopted [6, 7, 27], the appropriate mathematical context for studying models of this kind is provided by the theory of random point fields in \(\mathbb {R}^d\) in which populations are modeled as point configurations constituting the set

$$\begin{aligned} \Gamma = \big \{\gamma \subset \mathbb R^d : |\gamma \cap \Lambda |<\infty \text { for any compact }\Lambda \subset \mathbb {R}^d \big \}, \end{aligned}$$
(1.1)

where \(|\cdot |\) denotes cardinality. It is equipped with a \(\sigma \)-field of measurable subsets that allows one to consider probability measures on \(\Gamma \) as states of the system. To characterize such states one employs observables – appropriate functions \(F:\Gamma \rightarrow \mathbb {R}\). Their evolution is obtained from the Kolmogorov equation

$$\begin{aligned} \frac{d}{dt} F_t = L F_t , \qquad F_t|_{t=0} = F_0, \qquad t>0, \end{aligned}$$
(1.2)

where the generator L specifies the model. The states’ evolution is then obtained from the Fokker–Planck equation

$$\begin{aligned} \frac{d}{dt} \mu _t = L^* \mu _t, \qquad \mu _t|_{t=0} = \mu _0, \end{aligned}$$
(1.3)

related to that in (1.2) by the duality

$$\begin{aligned} \int _{\Gamma } F_0 d \mu _t = \int _{\Gamma } F_t d \mu _0. \end{aligned}$$

The generator for the model studied in this paper is

$$\begin{aligned} (LF)(\gamma )= & {} \sum _{x\in \gamma }\left[ m + E^{-} (x, \gamma \setminus x) \right] \left[ F(\gamma \setminus x) - F(\gamma ) \right] \nonumber \\&+ \int _{\mathbb {R}^d} E^{+} (y, \gamma ) \left[ F(\gamma \cup y) - F(\gamma ) \right] dy, \end{aligned}$$
(1.4)

where

$$\begin{aligned} E^{\pm }(x, \gamma ) := \sum _{y\in \gamma } a^{\pm } (x-y). \end{aligned}$$
(1.5)

The first summand in (1.4) corresponds to the death of the particle located at x occurring independently at rate \(m\ge 0\) (intrinsic mortality) and under the influence of the other particles in \(\gamma \) – at rate \(E^{-}(x, \gamma \setminus x)\ge 0\) (competition). The second term in (1.4) describes the birth of a particle at \(y\in \mathbb {R}^d\) occurring at rate \(E^{+} (y, \gamma )\ge 0\). In the sequel, we call \(a^{-}\) and \(a^{+}\) competition and dispersal kernels, respectively. The particular case of (1.4) with \(a^{-} \equiv 0\) is the continuum contact model studied in [18, 21]. With the results of these works in mind, along with the purely mathematical tasks we also aim at understanding the ecological consequences of the competition taken into account in (1.4).
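For illustration, consider a two-point configuration \(\gamma = \{x_1, x_2\}\). By (1.4) and (1.5), the particle at \(x_1\) dies at rate \(m + a^{-}(x_1 - x_2)\), while a new particle appears at \(y\in \mathbb {R}^d\) with rate density \(a^{+}(y-x_1) + a^{+}(y-x_2)\), so that the total birth intensity is

$$\begin{aligned} \int _{\mathbb {R}^d} E^{+} (y, \gamma ) d y = 2 \int _{\mathbb {R}^d} a^{+} (x) d x. \end{aligned}$$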

The problem of constructing spatial birth and death processes in infinite volume was first studied by Holley and Stroock in their pioneering work [15], where a special case of nearest neighbor interactions on the real line was considered. For more general versions of continuum birth-and-death systems, the few results known to date were obtained under severe restrictions imposed on the birth and death rates. This relates to the construction of a Markov process in [14], as well as to the result obtained in [12] in the statistical approach (see below). In the present work, we make an essential step forward in studying the model specified in (1.4), assuming only that the kernels \(a^{\pm }\) satisfy a rather natural condition.

The set of finite configurations \(\Gamma _0\) is a measurable subset of \(\Gamma \). If \(\mu \) is such that \(\mu (\Gamma _0) =1\), then the considered system is finite in this state. If \(\mu _0\) in (1.3) has such a property, the evolution \(\mu _0 \mapsto \mu _t\) can be obtained directly from (1.3), see [23]. In this case \(\mu _t(\Gamma _0) =1\) for all \(t>0\). States of infinite systems are mostly such that \(\mu (\Gamma _0)=0\), which makes solving (1.3) directly with an arbitrary initial state \(\mu _0\) rather inaccessible to the methods existing at this time, cf. [20]. In this work we continue following the statistical approach [5, 10, 12, 13, 20], in which the evolution of states is described by means of the corresponding correlation functions. To briefly explain its essence, let us consider the set of all compactly supported continuous functions \(\theta :\mathbb {R}^d\rightarrow (-1,0]\). For a probability measure \(\mu \) on \(\Gamma \) its Bogoliubov functional [11, 19] is defined as

$$\begin{aligned} B_\mu (\theta ) = \int _{\Gamma } \prod _{x\in \gamma } ( 1 + \theta (x)) \mu ( d \gamma ), \end{aligned}$$
(1.6)

with \(\theta \) running through the mentioned set of functions. For \(\pi _\varkappa \) – the homogeneous Poisson measure with intensity \(\varkappa >0\), (1.6) takes the form

$$\begin{aligned} B_{\pi _\varkappa } (\theta ) = \exp \left( \varkappa \int _{\mathbb {R}^d}\theta (x) d x \right) . \end{aligned}$$

In state \(\pi _\varkappa \), the particles are independently distributed over \(\mathbb {R}^d\) with density \(\varkappa \). The set of sub-Poissonian states \(\mathcal {P}_\mathrm{sP}\) is then defined as that containing all the states \(\mu \) for which \(B_\mu \) can be continued, as a function of \(\theta \), to an entire function of exponential type on \(L^1 (\mathbb {R}^d)\). This means exactly that \(B_\mu \) can be written down in the form

$$\begin{aligned} B_\mu (\theta ) = \sum _{n=0}^\infty \frac{1}{n!}\int _{(\mathbb {R}^d)^n} k_\mu ^{(n)} (x_1 , \ldots , x_n) \theta (x_1) \cdots \theta (x_n) d x_1 \cdots d x_n, \end{aligned}$$
(1.7)

where \(k_\mu ^{(n)}\) is the n-th order correlation function corresponding to \(\mu \). It is a symmetric element of \(L^\infty ((\mathbb {R}^d)^n)\) for which

$$\begin{aligned} \Vert k^{(n)}_\mu \Vert _{L^\infty ((\mathbb {R}^d)^n)} \le C \exp ( \alpha n), \qquad n\in \mathbb {N}_0, \end{aligned}$$
(1.8)

with some \(C>0\) and \(\alpha \in \mathbb {R}\). This guarantees that \(B_\mu \) is of exponential type. One can also consider a wider class of states, \(\mathcal {P}_\mathrm{anal}\), by imposing the condition that \(B_\mu \) can be continued to a function on \(L^1(\mathbb {R}^d)\) analytic in some neighborhood of the origin, see [19]. In that case, the estimate corresponding to (1.8) will contain \(n!C e^{\alpha n}\) in its right-hand side. States \(\mu \in \mathcal {P}_\mathrm{anal}\) are characterized by strong correlations corresponding to ‘clustering’. In the contact model the clustering does take place, see [18, 21] and especially [10, Eq. (3.5), page 303]. Namely, in this model for each \(t>0\) and \(n\in \mathbb {N}\) the correlation functions satisfy the following estimates

$$\begin{aligned} \mathrm{const}\cdot n! c_t^n \le k^{(n)}_t(x_1, \ldots , x_n) \le \mathrm{const}\cdot n! C_t^n, \end{aligned}$$

where the left-hand inequality holds if all \(x_i\) belong to a ball of sufficiently small radius. If the mortality rate m is large enough, then \(C_t \rightarrow 0\) as \(t\rightarrow +\infty \). That is, in the continuum contact model the clustering persists even if the population asymptotically dies out. In this regard, a paramount question about the model (1.4) is whether the competition contained in L can suppress clustering. In short, the answer given in this work is in the affirmative, provided the competition and dispersal kernels satisfy a certain natural condition. They do so if \(a^{-}\) is strictly positive in some vicinity of the origin, and \(a^{+}\) has finite range.
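For orientation, we note that the Poisson states mark the borderline: inserting \(k^{(n)}_{\pi _\varkappa } (x_1, \ldots , x_n)= \varkappa ^n\) into (1.7) gives

$$\begin{aligned} B_{\pi _\varkappa }(\theta ) = \sum _{n=0}^\infty \frac{\varkappa ^n}{n!} \left( \int _{\mathbb {R}^d}\theta (x) d x\right) ^n = \exp \left( \varkappa \int _{\mathbb {R}^d}\theta (x) d x \right) , \end{aligned}$$

so that (1.8) holds with \(C=1\) and \(\alpha = \log \varkappa \). The lower bound \(\mathrm{const}\cdot n! c_t^n\) above is incompatible with any estimate of the form (1.8), which is the precise sense in which clustering takes the contact-model states out of \(\mathcal {P}_\mathrm{sP}\).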

1.2 Presenting the Result

  1. (i)

    Under the condition on the kernels \(a^{\pm }\) formulated in Assumption 1 we prove in Theorem 3.3 that the correlation functions evolve \(k_{\mu _0}^{(n)}\mapsto k_t^{(n)}\) in such a way that each \(k_t^{(n)}\) is the correlation function of a unique sub-Poissonian measure \(\mu _t\).

  2. (ii)

    We give examples of the kernels \(a^{\pm }\) which satisfy Assumption 1. These examples include kernels of finite range – both short and long dispersals (Proposition 3.7), and also Gaussian kernels (Proposition 3.8).

  3. (iii)

    For the whole range of values of the intrinsic mortality rate m, in Theorem 3.4 we obtain the following bounds for the correlation functions holding for all \(t\ge 0\):

    $$\begin{aligned}&(i) \quad 0\le k^{(n)}_t(x_1 ,\ldots , x_n) \le C_\delta ^n \exp \left( n(\langle a^{+} \rangle -\delta )t \right) , \quad 0\le m \le \langle a^{+} \rangle , \\&(ii) \quad 0\le k^{(n)}_t(x_1 ,\ldots , x_n) \le C_\varepsilon ^n e^{-\varepsilon t}, \quad m > \langle a^{+} \rangle , \end{aligned}$$

    where \(\langle a^{+} \rangle \) is the \(L^1\)-norm of \(a^+\), \(C_\delta \) and \(C_\varepsilon \) are appropriate positive constants, whereas \(\delta < m\) and \(\varepsilon \in (0, m - \langle a^{+} \rangle )\) take any value in the mentioned sets. By (1.7) these estimates give upper bounds for the type of \(B_{\mu _t}\). We also describe the pure death case where \(\langle a^{+} \rangle =0\).

More detailed comments and comparison with the previous results on this model are given in Sect. 3.3 below.

2 The Basic Notions

A detailed description of various aspects of the mathematical framework of this paper can be found in [1, 5, 10, 12, 13, 17, 18, 21, 26]. Here we present only some of its aspects and indicate in which of the mentioned papers further details can be found. By \(\mathcal {B}(\mathbb {R}^d)\) and \(\mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) we denote the set of all Borel and all bounded Borel subsets of \(\mathbb {R}^d\), respectively.

2.1 The Configuration Spaces

The space \(\Gamma \) defined in (1.1) is endowed with the weakest topology that makes continuous all the maps

$$\begin{aligned} \Gamma \ni \gamma \mapsto \sum _{x\in \gamma } f(x) , \quad f\in C_0 \big (\mathbb {R}^d \big ). \end{aligned}$$

Here \(C_0 (\mathbb {R}^d)\) stands for the set of all continuous compactly supported functions \(f:\mathbb {R}^d \rightarrow \mathbb {R}\). The mentioned topology on \(\Gamma \) admits a metrization which turns it into a complete and separable metric (Polish) space. By \(\mathcal {B}(\Gamma )\) we denote the corresponding Borel \(\sigma \)-field. For \(n\in \mathbb {N}_0:=\mathbb {N}\cup \{0\}\), the set of n-particle configurations in \(\mathbb {R}^d\) is

$$\begin{aligned} \Gamma ^{(0)} = \{ \emptyset \}, \qquad \Gamma ^{(n)} = \{\eta \subset \mathbb {R}^{d}: |\eta | = n \}, \ \ n\in \mathbb {N}. \end{aligned}$$

For \(n\ge 1\), \(\Gamma ^{(n)}\) can be identified with the symmetrization of the set

$$\begin{aligned} \left\{ (x_1, \ldots , x_n)\in \bigl (\mathbb {R}^{d}\bigr )^n: x_i \ne x_j, \ \mathrm{for} \ i\ne j\right\} , \end{aligned}$$

which allows one to introduce the topology on \(\Gamma ^{(n)}\) related to the Euclidean topology of \(\mathbb {R}^d\) and hence the corresponding Borel \(\sigma \)-field \(\mathcal {B}(\Gamma ^{(n)})\). The set of finite configurations

$$\begin{aligned} \Gamma _{0} := \bigsqcup _{n\in \mathbb {N}_0} \Gamma ^{(n)} \end{aligned}$$

is endowed with the topology of the disjoint union and with the corresponding Borel \(\sigma \)-field \(\mathcal {B}(\Gamma _{0})\). It is a measurable subset of \(\Gamma \). However, the topology just mentioned and that induced on \(\Gamma _0\) from \(\Gamma \) do not coincide.

For \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\), the set \( \Gamma _\Lambda := \{ \gamma \in \Gamma : \gamma \subset \Lambda \}\) is a Borel subset of \(\Gamma _0\). We equip \(\Gamma _\Lambda \) with the topology induced by that of \(\Gamma _0\). Let \(\mathcal {B}(\Gamma _\Lambda )\) be the corresponding Borel \(\sigma \)-field. It can be proved, see [26, Lemma 1.1 and Proposition 1.3], that

$$\begin{aligned} \mathcal {B}(\Gamma _\Lambda ) = \big \{ \Gamma _\Lambda \cap \Upsilon : \Upsilon \in \mathcal {B}(\Gamma ) \big \}. \end{aligned}$$

It is known [1, page 451] that \(\mathcal {B}(\Gamma )\) is the smallest \(\sigma \)-field of subsets of \(\Gamma \) such that all the projections

$$\begin{aligned} \Gamma \ni \gamma \mapsto p_\Lambda (\gamma )= \gamma _\Lambda := \gamma \cap \Lambda , \qquad \Lambda \in \mathcal {B}_\mathrm{b} \big (\mathbb {R}^d \big ), \end{aligned}$$
(2.1)

are \(\mathcal {B}(\Gamma )/\mathcal {B}(\Gamma _\Lambda )\) measurable. This means that \((\Gamma , \mathcal {B}(\Gamma ))\) is the projective limit of the measurable spaces \((\Gamma _\Lambda , \mathcal {B}(\Gamma _\Lambda ))\), \(\Lambda \in \mathcal {B}_\mathrm{b} (\mathbb {R}^d)\).

Remark 2.1

From the latter discussion it follows that \(\Gamma _0 \in \mathcal {B}(\Gamma )\) and

$$\begin{aligned} \mathcal {B}(\Gamma _0) = \{ A \cap \Gamma _0: A \in \mathcal {B}(\Gamma )\}. \end{aligned}$$
(2.2)

Hence, a probability measure \(\mu \) on \(\mathcal {B}(\Gamma )\) with the property \(\mu (\Gamma _0) = 1\) can be considered also as a measure on \(\mathcal {B}(\Gamma _0)\).

2.2 Functions and Measures on Configuration Spaces

A Borel set \(\Upsilon \subset \Gamma \) is said to be bounded if the following holds

$$\begin{aligned} \Upsilon \subset \bigcup _{n=0}^N \Gamma ^{(n)}_\Lambda , \end{aligned}$$

for some \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) and \(N\in \mathbb {N}\). In view of (2.2), each bounded set is in \(\mathcal {B}(\Gamma _0)\). A function \(G:\Gamma _0 \rightarrow \mathbb {R}\) is measurable if and only if, for each \(n\in \mathbb {N}\), there exists a symmetric Borel function \(G^{(n)}:(\mathbb {R}^d)^n \rightarrow \mathbb {R}\) such that

$$\begin{aligned} G(\eta ) = G^{(n)} (x_1 , \ldots , x_n), \quad \mathrm{for} \ \ \eta = \{x_1 , \ldots , x_n\}. \end{aligned}$$
(2.3)

Definition 2.2

A bounded measurable function \(G:\Gamma _0 \rightarrow \mathbb {R}\) is said to have bounded support if: (a) there exists \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) such that \(G(\eta ) = 0\) whenever \(\eta \cap \Lambda ^c \ne \emptyset \), where \(\Lambda ^c :=\mathbb {R}^d \setminus \Lambda \); (b) there exists \(N\in \mathbb {N}\) such that \(G^{(n)} \equiv 0\) whenever \(n> N\). The set of all such functions is denoted by \(B_\mathrm{bs}(\Gamma _0)\). For a given \(G\in B_\mathrm{bs}(\Gamma _0)\), by N(G) we denote the smallest N with the property as in (b).

A map \(F:\Gamma \rightarrow \mathbb {R}\) is called a cylinder function if there exist \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) and a measurable \(G: \Gamma _{\Lambda }\rightarrow \mathbb {R}\) such that, cf. (2.1), \(F(\gamma ) = G(\gamma _\Lambda )\) for all \(\gamma \in \Gamma \). Clearly, such a map F is measurable. By \(\mathcal {F}_\mathrm{cyl}(\Gamma )\) we denote the set of all cylinder functions. For \(\gamma \in \Gamma \), by writing \(\eta \Subset \gamma \) we mean that \(\eta \subset \gamma \) and \(\eta \) is finite, i.e., \(\eta \in \Gamma _0\). For \(G \in B_\mathrm{bs}(\Gamma _0)\), we set

$$\begin{aligned} (KG)(\gamma ) = \sum _{\eta \Subset \gamma } G(\eta ), \qquad \gamma \in \Gamma . \end{aligned}$$
(2.4)

As proved in [17], K maps \(B_\mathrm{bs}(\Gamma _0)\) onto \(\mathcal {F}_\mathrm{cyl}(\Gamma )\) and is invertible. The Lebesgue-Poisson measure \(\lambda \) on \(\mathcal {B}(\Gamma _0)\) is defined by the relation

$$\begin{aligned} \int _{\Gamma _0} G(\eta ) \lambda ( d \eta ) = G(\emptyset ) + \sum _{n=1}^{\infty } \frac{1}{n!} \int _{(\mathbb {R}^d)^n} G^{(n)} (x_1 , \ldots , x_n) d x_1 \cdots d x_n, \end{aligned}$$
(2.5)

which has to hold for all \(G\in B_\mathrm{bs}(\Gamma _0)\), cf. (2.3). Note that \(B_\mathrm{bs}(\Gamma _0)\) is a measure defining class. Clearly, \(\lambda (\Upsilon ) < \infty \) for each bounded \(\Upsilon \in \mathcal {B}(\Gamma _0)\). With the help of (2.5), we rewrite (1.7) in the following form

$$\begin{aligned} B_\mu (\theta ) = \int _{\Gamma _0} k_\mu (\eta ) \left( \prod _{x\in \eta }\theta (x) \right) \lambda ( d \eta ). \end{aligned}$$
(2.6)
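As a quick consistency check of (2.5) and (2.6), take \(G(\eta ) = \prod _{x\in \eta }\theta (x)\) with \(\theta \) as in Sect. 1.1 (and \(\prod _{x\in \emptyset }\theta (x) := 1\), see the convention below). Then (2.5) yields

$$\begin{aligned} \int _{\Gamma _0} \prod _{x\in \eta }\theta (x) \lambda ( d \eta ) = \sum _{n=0}^\infty \frac{1}{n!} \left( \int _{\mathbb {R}^d}\theta (x) d x\right) ^n = \exp \left( \int _{\mathbb {R}^d}\theta (x) d x\right) , \end{aligned}$$

whereas (2.4), extended to such G, gives \((KG)(\gamma ) = \prod _{x\in \gamma }(1+\theta (x))\), cf. (1.6). In this notation, (2.6) is just (1.7) written as a single \(\lambda \)-integral.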

In the sequel, by saying that something holds for all \(\eta \) we mean that it holds for \(\lambda \)-almost all \(\eta \in \Gamma _0\). This relates also to (2.3).

Let \(\mathcal {P}(\Gamma )\), resp. \(\mathcal {P}(\Gamma _0)\), stand for the set of all probability measures on \(\mathcal {B}(\Gamma )\), resp. \(\mathcal {B}(\Gamma _0)\). Note that \(\mathcal {P}(\Gamma _0)\) can be considered as a subset of \(\mathcal {P}(\Gamma )\), see Remark 2.1. For a given \(\mu \in \mathcal {P}(\Gamma )\), the projection \(\mu ^\Lambda \) is defined as

$$\begin{aligned} \mu ^\Lambda (A) = \mu \big (p_\Lambda ^{-1} (A) \big ), \qquad A \in \mathcal {B}(\Gamma _\Lambda ), \end{aligned}$$
(2.7)

where \(p_\Lambda ^{-1} (A):= \{ \gamma \in \Gamma : p_\Lambda (\gamma )\in A\}\), see (2.1). The projections of the Lebesgue-Poisson measure \(\lambda \) are defined in the same way.

Recall that \(\mathcal {P}_\mathrm{anal}\) (resp. \(\mathcal {P}_\mathrm{sP}\)) denotes the set of all those \(\mu \in \mathcal {P}(\Gamma )\) for each of which \(B_\mu \) defined in (1.6), or (2.6), admits continuation to a function on \(L^1(\mathbb {R}^d)\) analytic in some neighborhood of zero (resp. exponential type entire function). The elements of \(\mathcal {P}_\mathrm{sP}\) are called sub-Poissonian states. One can show [17, Proposition 4.14] that for each \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) and \(\mu \in \mathcal {P}_\mathrm{sP}\), \(\mu ^\Lambda \) is absolutely continuous with respect to \(\lambda ^\Lambda \). The Radon-Nikodym derivative

$$\begin{aligned} R_\mu ^\Lambda (\eta ) = \frac{d\mu ^\Lambda }{d \lambda ^\Lambda }(\eta ), \qquad \eta \in \Gamma _\Lambda , \end{aligned}$$
(2.8)

and the correlation function \(k_\mu \) satisfy

$$\begin{aligned} k_\mu (\eta ) = \int _{\Gamma _\Lambda } R^\Lambda _\mu (\eta \cup \xi ) \lambda ^\Lambda (d\xi ), \qquad \eta \in \Gamma _\Lambda , \end{aligned}$$
(2.9)

which holds for all \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\). Note that (2.9) ties \(R^\Lambda _\mu \) with the restriction of \(k_\mu \) to \(\Gamma _\Lambda \). The fact that these are the restrictions of one and the same function \(k_\mu :\Gamma _0 \rightarrow \mathbb {R}\) corresponds to the Kolmogorov consistency of the family \(\{\mu ^\Lambda \}_{\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)}\).

By (2.4), (2.7), and (2.9) we get

$$\begin{aligned} \int _{\Gamma } (KG) (\gamma ) \mu (d \gamma ) = \langle \! \langle G,k_\mu \rangle \! \rangle , \end{aligned}$$

which holds for each \(G\in B_\mathrm{bs}(\Gamma _0)\) and \(\mu \in \mathcal {P}_\mathrm{sP}\). Here and in the sequel we use the notation

$$\begin{aligned} \langle \! \langle G,k \rangle \! \rangle = \int _{\Gamma _0} G(\eta ) k(\eta )\lambda (d\eta ). \end{aligned}$$
(2.10)

Define

$$\begin{aligned} B_\mathrm{bs}^\star (\Gamma _0) = \{ G \in B_\mathrm{bs}(\Gamma _0): KG \not \equiv 0, \ (KG)(\gamma ) \ge 0 \ \mathrm{for} \ \mathrm{all} \ \gamma \in \Gamma \}. \end{aligned}$$
(2.11)

By [17, Theorems 6.1 and 6.2 and Remark 6.3] we know that the following holds.

Proposition 2.3

Let a measurable function \(k:\Gamma _0\rightarrow \mathbb {R}\) have the following properties:

$$\begin{aligned}&\mathrm{(i)} \quad \langle \! \langle G,k \rangle \! \rangle \ge 0 \qquad \mathrm{for} \ \ \mathrm{all} \ \ G\in B_\mathrm{bs}^\star (\Gamma _0),\\&\mathrm{(ii)}\quad k (\emptyset ) = 1, \qquad (iii) \ \ k (\eta ) \le C^{|\eta |},\qquad \eta \in \Gamma _0,\nonumber \end{aligned}$$
(2.12)

property (iii) holding for some \(C>0\). Then there exists a unique \(\mu \in \mathcal {P}_\mathrm{sP}\) for which k is the correlation function.

Finally, we mention the convention

$$\begin{aligned} \sum _{a\in \emptyset } \phi _a := 0 , \qquad \prod _{a\in \emptyset } \psi _a := 1 \end{aligned}$$

which we use in the sequel, and the integration rule, see, e.g., [10],

$$\begin{aligned} \int _{\Gamma _0} \sum _{\xi \subset \eta } H\big (\xi , \eta \setminus \xi , \eta \big ) \lambda (d \eta ) = \int _{\Gamma _0}\int _{\Gamma _0}H \big (\xi , \eta , \eta \cup \xi \big ) \lambda (d \xi ) \lambda (d \eta ), \end{aligned}$$
(2.13)

valid for appropriate functions H.
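As a simple illustration of (2.13), take \(H(\xi , \zeta , \eta ) = \prod _{x\in \xi }\theta (x) \prod _{y\in \zeta }\psi (y)\) with compactly supported continuous \(\theta \) and \(\psi \). Then the left-hand side of (2.13) equals

$$\begin{aligned} \int _{\Gamma _0} \prod _{x\in \eta } \big ( \theta (x) + \psi (x) \big ) \lambda (d \eta ) = \exp \left( \int _{\mathbb {R}^d} \big (\theta (x) + \psi (x) \big ) d x \right) , \end{aligned}$$

which coincides with the product of the two \(\lambda \)-integrals appearing on its right-hand side, cf. the computation following (2.6).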

2.3 Spaces of Functions

For each \(\mu \in \mathcal {P}_\mathrm{sP}\), the correlation function satisfies (1.8) in view of which we introduce the following Banach spaces. For \(\alpha \in \mathbb {R}\), we set

$$\begin{aligned} \Vert k\Vert _\alpha = \mathop {\hbox {ess sup}}\limits _{\eta \in \Gamma _0} |k(\eta )|\exp (-\alpha |\eta |). \end{aligned}$$
(2.14)

It is a norm that can also be written as follows. As in (2.3), each \(k:\Gamma _0\rightarrow \mathbb {R}\) is defined by its restrictions to \(\Gamma ^{(n)}\). Let \(k^{(n)}:(\mathbb {R}^d)^n \rightarrow \mathbb {R}\) be a symmetric Borel function such that \(k^{(n)} (x_1 , \ldots , x_n) = k(\eta )\) for \(\eta = \{x_1, \ldots , x_n\}\). We then assume that \(k^{(n)}\in L^\infty ((\mathbb {R}^d)^n)\), \(n\in \mathbb {N}\), cf. (1.8), and define

$$\begin{aligned} \Vert k\Vert _\alpha = \sup _{n\in \mathbb {N}_0} e^{-\alpha n} \nu _n(k), \quad \nu _n(k):= \Vert k^{(n)}\Vert _{L^\infty ((\mathbb {R}^d)^n)}, \end{aligned}$$
(2.15)

that yields the same norm as in (2.14). Obviously,

$$\begin{aligned} \mathcal {K}_\alpha := \{k:\Gamma _0\rightarrow \mathbb {R}: \Vert k\Vert _\alpha < \infty \}, \end{aligned}$$
(2.16)

is a Banach space. For \(\alpha ' < \alpha ''\), we have \(\Vert k\Vert _{\alpha ''} \le \Vert k\Vert _{\alpha '}\). Hence,

$$\begin{aligned} \mathcal {K}_{\alpha '} \hookrightarrow \mathcal {K}_{\alpha ''}, \qquad \ \mathrm{for} \ \alpha ' < \alpha ''. \end{aligned}$$
(2.17)

Here and in the sequel \(X \hookrightarrow Y\) denotes continuous embedding. For \(\alpha \in \mathbb {R}\), we define, cf. (2.11) and (2.10),

$$\begin{aligned} \mathcal {K}_\alpha ^{\star } = \{ k \in \mathcal {K}_\alpha : \forall G\in B^{\star }_\mathrm{bs} (\Gamma _0)\ \langle \! \langle G,k \rangle \! \rangle \ge 0 \}. \end{aligned}$$
(2.18)

It is a subset of the cone

$$\begin{aligned} \mathcal {K}_\alpha ^{+} = \{ k \in \mathcal {K}_\alpha : k(\eta ) \ge 0 \ \ \mathrm{for} \ \ \mathrm{a.a.} \ \ \eta \in \Gamma _0\}. \end{aligned}$$
(2.19)

By Proposition 2.3 we have that each \(k\in \mathcal {K}_\alpha ^{\star }\) with the property \(k(\emptyset ) =1\) is the correlation function of a unique \(\mu \in \mathcal {P}_\mathrm{sP}\). We also put

$$\begin{aligned} \mathcal {K}_{\infty } = \bigcup _{\alpha \in \mathbb {R}} \mathcal {K}_{\alpha }, \end{aligned}$$
(2.20)

and equip this set with the inductive topology.

Finally, we define

$$\begin{aligned} \mathcal {K}^\star _{\infty } = \bigcup _{\alpha \in \mathbb {R}} \mathcal {K}^\star _{\alpha }. \end{aligned}$$

3 The Model and the Results

3.1 The Model

As was already mentioned, the model is specified by the expression given in (1.4). Regarding the kernels in (1.5) we suppose that

$$\begin{aligned} a^{\pm } \in L^1\big (\mathbb {R}^d \big )\cap L^\infty \big (\mathbb {R}^d \big ), \qquad a^{\pm }(x) = a^{\pm }(-x) \ge 0, \end{aligned}$$
(3.1)

and thus define

$$\begin{aligned} \langle {a}^{\pm }\rangle = \int _{\mathbb {R}^d} a^{\pm } (x)dx,\qquad \Vert a^{\pm } \Vert = \mathop {\hbox {ess sup}}\limits _{x\in \mathbb {R}^d} a^{\pm }(x), \end{aligned}$$
(3.2)

and

$$\begin{aligned} E^{\pm } (\eta ) = \sum _{x\in \eta }E^{\pm } (x,\eta \setminus x) = \sum _{x\in \eta } \sum _{y\in \eta \setminus x}a^{\pm } (x-y), \quad \eta \in \Gamma _0 . \end{aligned}$$
(3.3)

We also denote

$$\begin{aligned} E(\eta ) = \sum _{x\in \eta }\left( m + E^{-} (x, \eta \setminus x) \right) = m|\eta | + E^{-}(\eta ), \end{aligned}$$
(3.4)

where m is the same as in (1.4).

In addition to the standard assumptions (3.1) we shall use the following

Assumption 1

(\((b, \vartheta )\)-assumption) There exist \(\vartheta >0\) and \(b\ge 0\) such that the functions introduced in (3.3) satisfy

$$\begin{aligned} b |\eta | + E^{-}(\eta ) \ge \vartheta E^{+} (\eta ), \qquad \eta \in \Gamma _0. \end{aligned}$$
(3.5)

Note that the case of point-wise domination

$$\begin{aligned} a^{-} (x) \ge \vartheta a^{+} (x) , \qquad \quad x \in \mathbb {R}^d, \end{aligned}$$
(3.6)

cf. [13, Eq. (3.11)], corresponds to (3.5) with \(b=0\). In Sect. 3.4 below we give examples of the kernels \(a^{\pm }\) which satisfy (3.5). To exclude the trivial case of \(a^{+} = a^{-} =0\) we also assume that

$$\begin{aligned} \langle a^{-} \rangle >0. \end{aligned}$$
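To see that the point-wise domination (3.6) indeed yields (3.5) with \(b=0\), it suffices to sum (3.6) over the ordered pairs of distinct points of \(\eta \), cf. (3.3):

$$\begin{aligned} E^{-}(\eta ) = \sum _{x\in \eta } \sum _{y\in \eta \setminus x} a^{-} (x-y) \ge \vartheta \sum _{x\in \eta } \sum _{y\in \eta \setminus x} a^{+} (x-y) = \vartheta E^{+}(\eta ), \qquad \eta \in \Gamma _0. \end{aligned}$$

Conversely, applying (3.5) with \(b=0\) to two-point configurations \(\eta = \{x,y\}\) gives back (3.6) for \(x-y \ne 0\).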

3.2 The Results

3.2.1 The Operators

In view of the relationship between states and correlation functions discussed in Sect. 2.3, we describe the system’s dynamics in the following way. First we obtain the evolution \(k_{\mu _0} \mapsto k_t\) by proving the existence of a unique solution of the Cauchy problem of the following type

$$\begin{aligned} \frac{d k_t}{d t} = L^\Delta k_t , \qquad k_t|_{t=0} = k_{\mu _0}, \end{aligned}$$
(3.7)

where the action of \(L^\Delta \) is calculated from (1.4). Thereafter, we show that each \(k_t\) has property \(k_t(\emptyset )=1\) and lies in \(\mathcal {K}_{\alpha }^{\star }\) for some \(\alpha \in \mathbb {R}\). Hence, it is the correlation function of a unique \(\mu _t\in \mathcal {P}_\mathrm{sP}\). This yields in turn the evolution \(\mu _0 \mapsto \mu _t\).

To describe the action of \(L^\Delta \) in a systematic way we write it in the following form, see [10, 13],

$$\begin{aligned} {L}^\Delta = A^\Delta +B^\Delta , \end{aligned}$$
(3.8)

where

$$\begin{aligned} A^\Delta= & {} A^\Delta _1 + A^\Delta _2,\nonumber \\ \big (A_1^\Delta k \big )(\eta )= & {} - E(\eta ) k(\eta ), \nonumber \\ \big (A_2^\Delta k \big )(\eta )= & {} \sum _{x\in \eta } E^{+} (x,\eta \setminus x) k (\eta \setminus x), \end{aligned}$$
(3.9)

see also (3.3), (3.4), and

$$\begin{aligned} B^\Delta= & {} B^\Delta _1 + B^\Delta _2 , \nonumber \\ \big (B^\Delta _1 k \big )(\eta )= & {} - \int _{\mathbb {R}^d} E^{-} (y,\eta ) k(\eta \cup y) dy ,\nonumber \\ \big (B^\Delta _2 k \big )(\eta )= & {} \int _{\mathbb {R}^d} \sum _{x\in \eta } a^{+} (x - y) k(\eta \setminus x \cup y) d y. \end{aligned}$$
(3.10)

The key idea of the method that we use to study (3.7) is to employ the scale of spaces (2.16) in which \(A^\Delta \) and \(B^\Delta \) act as bounded operators from \(\mathcal {K}_{\alpha '}\) to any \(\mathcal {K}_{\alpha }\) with \(\alpha > \alpha '\), cf. (2.17). For such \(\alpha \) and \(\alpha ' \), by (2.14) and (2.15) we have, see (3.9),

$$\begin{aligned} \Vert A^\Delta _1 k\Vert _\alpha\le & {} \Vert k\Vert _{\alpha '} \mathop {\hbox {ess sup}}\limits _{\eta \in \Gamma _0} E(\eta )\exp \left( - \big (\alpha - \alpha ' \big )|\eta | \right) , \\ \Vert A^\Delta _2 k\Vert _\alpha\le & {} \mathop {\hbox {ess sup}}\limits _{\eta \in \Gamma _0} e^{-\alpha |\eta |}\sum _{x\in \eta }E^{+}(x,\eta \setminus x) |k(\eta \setminus x)|\\\le & {} \Vert k\Vert _{\alpha '}e^{-\alpha '}\mathop {\hbox {ess sup}}\limits _{\eta \in \Gamma _0} E^{+}(\eta )\exp \left( -\big (\alpha - \alpha ' \big )|\eta | \right) , \end{aligned}$$

which by (2.15) and (3.2) yields

$$\begin{aligned} \Vert A^\Delta _1 k\Vert _\alpha\le & {} \Vert k\Vert _{\alpha '} \left( \frac{m}{e(\alpha - \alpha ')} + \frac{4 \Vert a^{-} \Vert }{e^2 (\alpha - \alpha ')^2} \right) \nonumber \\ \Vert A^\Delta _2 k\Vert _\alpha\le & {} \Vert k\Vert _{\alpha '}e^{-\alpha '} \frac{4 \Vert a^{+} \Vert }{e^2 (\alpha - \alpha ')^2}, \end{aligned}$$
(3.11)

where we have used the estimate

$$\begin{aligned} n^p e^{-\sigma n} \le \left( \frac{p}{e\sigma }\right) ^p, \quad p\ge 1, \ \sigma >0, \ n \in \mathbb {N}. \end{aligned}$$
(3.12)
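To illustrate how (3.12) enters (3.11), note that by (3.3) and (3.4) one has \(E(\eta ) \le m|\eta | + \Vert a^{-}\Vert |\eta |^2\), whence, with \(\sigma := \alpha - \alpha '\),

$$\begin{aligned} E(\eta ) \exp \left( - \sigma |\eta | \right) \le m |\eta | e^{-\sigma |\eta |} + \Vert a^{-}\Vert |\eta |^2 e^{-\sigma |\eta |} \le \frac{m}{e \sigma } + \frac{4\Vert a^{-}\Vert }{e^2 \sigma ^2}, \end{aligned}$$

by (3.12) with \(p=1\) and \(p=2\), which yields the first line of (3.11). The second line is obtained in the same way from \(E^{+}(\eta ) \le \Vert a^{+}\Vert |\eta |^2\) and \(|k(\eta \setminus x)| \le \Vert k\Vert _{\alpha '} e^{\alpha ' (|\eta |-1)}\).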

In a similar way, we obtain from (3.10) the following estimate, see (3.2),

$$\begin{aligned} \Vert B^\Delta k\Vert _\alpha \le \Vert k\Vert _{\alpha '} \frac{ \langle a^{+} \rangle + \langle a^{-} \rangle e^{\alpha '} }{ e(\alpha - \alpha ')}. \end{aligned}$$
(3.13)

Thus, by means of (3.8) – (3.10), and then by (3.11) and (3.13), for each \(\alpha \), \(\alpha ' \in \mathbb {R}\), \(\alpha '<\alpha \), one can define a continuous operator

$$\begin{aligned} L^\Delta _{\alpha \alpha '} : \mathcal {K}_{\alpha '} \rightarrow \mathcal {K}_{\alpha }. \end{aligned}$$
(3.14)

Let \(\mathcal {L}(\mathcal {K}_{\alpha '}, \mathcal {K}_{\alpha })\) stand for the set of all bounded linear operators \(\mathcal {K}_{\alpha '} \rightarrow \mathcal {K}_{\alpha }\). The operator norm of \(L^\Delta _{\alpha \alpha '}\) can be estimated by means of the above formulas. Thus, the family \(\{L^\Delta _{\alpha \alpha '}\}_{\alpha ,\alpha '}\) determines a continuous linear operator \(L^\Delta : \mathcal {K}_\infty \rightarrow \mathcal {K}_\infty \). Along with these operators, in each \(\mathcal {K}_\alpha \), \(\alpha \in \mathbb {R}\), we define an unbounded operator \(L^\Delta _\alpha \) with domain

$$\begin{aligned} \mathcal {D}^\Delta _\alpha = \big \{ k \in \mathcal {K}_\alpha : L^\Delta k \in \mathcal {K}_\alpha \big \}\supset \mathcal {K}_{\alpha '}, \end{aligned}$$
(3.15)

with the inclusion holding for each \(\alpha ' < \alpha \), see (3.11), (3.13), and (3.8). The operators thus introduced are related to each other as follows:

$$\begin{aligned} \forall \alpha '< \alpha \quad \forall k \in \mathcal {K}_{\alpha '} \qquad L^\Delta _{\alpha \alpha '} k = L^\Delta _{\alpha }k. \end{aligned}$$
(3.16)

3.2.2 The Statements

Now we can make precise which equations we are going to solve. One possibility is to consider (3.7) in a given Banach space, \(\mathcal {K}_\alpha \).

Definition 3.1

Given \(\alpha \in \mathbb {R}\) and \(T\in (0, +\infty ]\), by a solution of the Cauchy problem

$$\begin{aligned} \frac{d}{dt} k_t = L^\Delta _\alpha k_t , \qquad k_t|_{t=0} = k_0 \in \mathcal {D}^\Delta _\alpha , \end{aligned}$$
(3.17)

in \(\mathcal {K}_{\alpha }\) we mean a continuous map \([0,T) \ni t \mapsto k_t \in \mathcal {D}^\Delta _\alpha \), continuously differentiable in \(\mathcal {K}_{\alpha }\) on [0, T) and such that (3.17) is satisfied for all \(t\in [0,T)\).

Another possibility is to define (3.7) in the locally convex space (2.20).

Definition 3.2

By a global solution of the Cauchy problem (3.7) in \(\mathcal {K}_\infty \) with a given \(k_0 \in \mathcal {K}_\infty \) we mean a map \([0,+\infty ) \ni t \mapsto k_t \in \mathcal {K}_\infty \), continuously differentiable on \([0,+\infty )\) and such that (3.7) is satisfied for all \(t\ge 0\).

According to Definition 3.2, for each \(T<+\infty \), there exist \(\alpha _0, \alpha \in \mathbb {R}\), \(\alpha _0 < \alpha \), for which the mentioned \(k_t\) is a solution as in Definition 3.1 with \(k_0 \in \mathcal {K}_{\alpha _0}\). Our main results are contained in the following two statements.

Theorem 3.3

Let the \((b,\vartheta )\)-assumption (3.5) hold true, and let \(\mu _0\) be an arbitrary sub-Poissonian state. Then the problem (3.7) with \(k_0 = k_{\mu _0}\) has a unique global solution \(k_t \in \mathcal {K}^{\star }_{\infty }\) with the property \(k_t(\emptyset ) =1\). Therefore, for each \(t\ge 0\) there exists a unique sub-Poissonian measure \(\mu _t\) such that \(k_t = k_{\mu _t}\).

The next statement describes the solutions in more detail.

Theorem 3.4

Let the \((b,\vartheta )\)-assumption (3.5) hold true with \(b>0\) (resp. \(b=0\)), and let \(\alpha _0\) be such that \(k_{\mu _0}\in \mathcal {K}_{\alpha _0}\). Then the solution \(k_t\) as in Theorem 3.3, corresponding to this \(k_{\mu _0}\), satisfies the following estimates for all \(t\ge 0\).

  1. (i)

    Case \(\langle a^{+} \rangle >0\) and \(m\in [0,\langle a^{+} \rangle ]\): for each \(\delta <m\) (resp. \(\delta \le m\)) there exists a positive \(C_\delta \) such that \(\log C_\delta \ge \alpha _0\) and

    $$\begin{aligned} k_t (\eta ) \le C_\delta ^{|\eta |} \exp \left[ (\langle a^{+} \rangle - \delta )|\eta | t \right] , \qquad \eta \in \Gamma _0. \end{aligned}$$
    (3.18)
  2. (ii)

    Case \(\langle a^{+} \rangle >0\) and \(m>\langle a^{+} \rangle \): for each \(\varepsilon \in (0,m-\langle a^{+} \rangle )\), there exists a positive \(C_\varepsilon \) such that \(\log C_\varepsilon \ge \alpha _0\) and

    $$\begin{aligned} k_t (\eta ) \le C_\varepsilon ^{|\eta |} \exp (-\varepsilon t) , \qquad \eta \ne \emptyset . \end{aligned}$$
    (3.19)
  3. (iii)

    Case \(\langle a^{+} \rangle =0\):

    $$\begin{aligned} k_t (\eta ) \le k_0(\eta ) \exp \left[ - E(\eta ) t \right] , \qquad \eta \in \Gamma _0. \end{aligned}$$
    (3.20)

If \(m=0\) and \(a^{-} (x) = \vartheta a^{+} (x)\), then

$$\begin{aligned} k_t (\eta ) = \vartheta ^{-|\eta |}, \qquad t\ge 0, \end{aligned}$$
(3.21)

is a stationary solution.
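Indeed, for \(k(\eta ) = \vartheta ^{-|\eta |}\), \(m=0\) and \(a^{-} = \vartheta a^{+}\), the terms in (3.9) and (3.10) cancel pairwise:

$$\begin{aligned} \big (A^\Delta _1 k \big )(\eta ) + \big (A^\Delta _2 k \big )(\eta )= & {} \big ( - \vartheta E^{+}(\eta ) + \vartheta E^{+}(\eta ) \big ) \vartheta ^{-|\eta |} = 0, \\ \big (B^\Delta _1 k \big )(\eta ) + \big (B^\Delta _2 k \big )(\eta )= & {} \big ( - \langle a^{+} \rangle |\eta | + \langle a^{+} \rangle |\eta | \big ) \vartheta ^{-|\eta |} = 0, \end{aligned}$$

so that \(L^\Delta k = 0\), cf. (3.8).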

Corollary 3.5

In case (i) of Theorem 3.4, for each \(T>0\), \(k_t\) solves (3.17) in \(\mathcal {K}_{\alpha _T}\) on the time interval [0, T), where

$$\begin{aligned} \alpha _T = \log C_\delta + \left( \langle a^{+} \rangle -\delta \right) T. \end{aligned}$$
(3.22)

In case (ii) (resp. (iii)), \(k_t\) solves (3.17) in \(\mathcal {K}_\alpha \), \(\alpha = \log C_\varepsilon \) (resp. any \(\alpha > \alpha _0\)) on the time interval \([0, +\infty )\).

3.3 Comments and Comparison

3.3.1 Comments on the Basic Assumption

By means of the function

$$\begin{aligned} \phi _\vartheta (x) = a^{-} (x) - \vartheta a^{+} (x) \end{aligned}$$
(3.23)

one can rewrite (3.5) in the following form

$$\begin{aligned} \sum _{x\in \eta } \sum _{y\in \eta \setminus x} \phi _\vartheta (x-y) \ge - b |\eta |, \qquad \eta \in \Gamma _0. \end{aligned}$$

This resembles the stability condition (with stability constant \(b\ge 0\)) for the interaction potential \(\phi _\vartheta \) used in the statistical mechanics of continuum systems of interacting particles, see [31, Chapter 3]. Below we employ some techniques developed therein to prove that important classes of the kernels \(a^{\pm }\) have this property, see Propositions 3.7 and 3.8.

The \((b, \vartheta )\)-assumption holds with \(b=0\) if and only if (3.6) does. In this case, the dispersal kernel \(a^{+}\) decays faster than the competition kernel \(a^{-}\) (short dispersal). It can be characterized as the possibility for each daughter-entity to kill her mother-entity, or to be killed by her. In the previous works on this model [10, 12, 13] the results were based on this short dispersal condition. The novelty of the result of Proposition 3.7 is that it also covers the case of long dispersal, where the range of \(a^{+}\) is finite but can be bigger than that of \(a^{-}\). Notably, it follows from our Proposition 3.7 that the interaction potential \(\Phi \) used in [29] is stable, which was unknown to the authors of that paper, cf. [29, page 146]. Proposition 3.8 describes Gaussian kernels, for which the basic assumption is valid for both long and short dispersals. In this paper, we restrict our attention to the classes of kernels described in Propositions 3.7 and 3.8. Extensions beyond these classes, which we plan to realize in a separate work, can be made by means of the corresponding methods of the statistical mechanics of interacting particle systems.

3.3.2 Comments on the Results

An important feature of Theorems 3.3 and 3.4 is that the intrinsic mortality rate \(m\ge 0\) can be arbitrary. Theorem 3.3 gives a general existence of the evolution \(\mu _0 \mapsto \mu _t\), \(t>0\), in the class of sub-Poissonian states through the evolution of the corresponding correlation functions. Its ecological outcome is that the competition in the form as in (1.4), (1.5) excludes clustering provided the kernels satisfy (3.5). A complete characterization of the evolution \(k_0 \mapsto k_t\) is then given in Theorem 3.4. By means of it this evolution is ‘localized’ in the spaces \(\mathcal {K}_\alpha \) in Corollary 3.5. According to Theorem 3.4, for \(m < \langle a^{+} \rangle \), or \(m \le \langle a^{+} \rangle \) and \(b>0\) in (3.24), the evolution described in Theorem 3.3 takes place in an ascending sequence \(\{\mathcal {K}_{\alpha _T}\}_{T\ge 0}\) of Banach spaces, see (2.14) – (2.17), and also (3.22). If \(m> \langle a^{+} \rangle \), the evolution holds in one and the same space, see Corollary 3.5. The only difference between the cases of \(b>0\) and \(b=0\) is that one can take \(\delta =m\) in the latter case. This yields different results for \(m = \langle a^{+} \rangle \), where the evolution takes place in the same space \(\mathcal {K}_\alpha \) with \(\alpha = \log C_m\). Note also that for \(m=0\), one should take \(\delta <0\). For \(m> \langle a^{+} \rangle \), it follows from (3.19) that the population dies out: for \(\langle a^{+} \rangle >0\), the following holds

$$\begin{aligned} k^{(n)}_{\mu _t} (x_1 ,\ldots , x_n) \le e^{-\varepsilon t}k^{(n)}_{\mu _0} (x_1 ,\ldots , x_n), \quad t>0, \end{aligned}$$

for some \(\varepsilon \in (0, m - \langle a^{+} \rangle )\), almost all \((x_1 ,\ldots , x_n)\), and each \(n\in \mathbb {N}\). For \(m>0\) and \(\langle a^{+} \rangle =0\), by (3.20) we get

$$\begin{aligned} k^{(n)}_{\mu _t} (x_1 ,\ldots , x_n) \le \exp \left( - n m t \right) k^{(n)}_{\mu _0} (x_1 ,\ldots , x_n), \quad t>0. \end{aligned}$$

This means that \( k^{(n)}_{\mu _t} (x_1 ,\ldots , x_n)\rightarrow 0\) as \(n\rightarrow +\infty \) for sufficiently big \(t>0\). This phenomenon does not follow from (3.19). Finally, we mention that (3.21) corresponds to a special case of (3.6) and \(m=b=0\).

3.3.3 Comparison

Here we compare Theorems 3.3 and 3.4 with the corresponding results obtained for this model in [10, 12] (where it was called the BDLP model), and in [13]. Note that these are the only works where the infinite particle version of the model considered here was studied. In [10, 12], the model was supposed to satisfy the conditions of [12, Eqs. (3.38) and (3.39)], which in the present notation can be formulated as follows: (a) (3.6) holds with a given \(\vartheta >0\); (b) \(m > 16 \langle a^{-} \rangle / \vartheta \) holding with the same \(\vartheta \). Under these conditions the global evolution \(k_0 \mapsto k_t\) was obtained in \(\mathcal {K}_\alpha \) with some \(\alpha \in \mathbb {R}\) by means of a \(C_0\)-semigroup. No information was available on whether \(k_t\) is a correlation function and hence on the sign of \(k_t\). In [13], the restrictions were reduced just to (3.6). Then the evolution \(k_0 \mapsto k_t\) was obtained in a scale of Banach spaces \(\mathcal {K}_\alpha \) as in Theorem 3.3, but on a bounded time interval. As in [10, 12], here too no information was obtained on whether \(k_t\) is a correlation function. Until the present work, no results on the extinction as in (3.19) or on the case of \(a^{+} \equiv 0\) were known.

3.4 Kernels Satisfying the Basic Assumption

Our aim now is to show that the assumption (3.5) can be satisfied in most ‘realistic’ models. We begin, however, by establishing an important property of the kernels satisfying (3.5). To this end we rewrite (3.5) in the form

$$\begin{aligned} \Phi _\vartheta (\eta ) := \sum _{x\in \eta } \sum _{y\in \eta \setminus x} \left[ a^{-} (x-y) - \vartheta a^{+} (x-y)\right] \ge - b |\eta |, \qquad \eta \in \Gamma _0. \end{aligned}$$
(3.24)

Proposition 3.6

Assume that (3.24) holds with some \(\vartheta _0>0\) and \(b_0\ge 0\). Then for each \(\vartheta \in (0, \vartheta _0]\), it also holds with \(b = b_0 \vartheta / \vartheta _0\).

Proof

For \(\vartheta \in (0,\vartheta _0]\), we have

$$\begin{aligned} \Phi _\vartheta (\eta ) = \frac{\vartheta }{\vartheta _0} \left[ \left( \frac{\vartheta _0}{\vartheta } - 1\right) E^{-} (\eta ) + \Phi _{\vartheta _0}(\eta )\right] \ge - \frac{\vartheta }{\vartheta _0}b_0 |\eta |, \end{aligned}$$

which yields the proof. \(\square \)

In the following two propositions we give examples of the kernels with the property (3.5). In the first one, we assume that the dispersal kernel has finite range, which is quite natural in many applications. The competition kernel in turn is assumed to be just nontrivial.

Proposition 3.7

In addition to (3.1) and (3.2) assume that the kernels \(a^{\pm }\) have the following properties:

  1. (a)

    there exist positive \(c^{-}\) and r such that \(a^{-} (x) \ge c^{-}\) for \(|x|< r\);

  2. (b)

    there exist positive \(c^{+}\) and R such that \(a^{+} (x) \le c^{+}\) for \(|x|< R\) and \(a^{+} (x) =0\) for \(|x|\ge R\).

Then for each \(b>0\), there exists \(\vartheta >0\) such that (3.24) holds for these b and \(\vartheta \).

Proof

For \(r\ge R\), (3.24) holds with \(b= 0\) and \(\vartheta = c^{-}/c^{+}\). Thus, it remains to consider the case \(r<R\).

For \(|\eta |=0\) and \(|\eta |=1\), (3.24) trivially holds with each \(b>0\) and \(\vartheta >0\). For \(|\eta |=2\), (3.24) holds whenever \(\vartheta \le b/c^{+}\). For \(|\eta |>2\), we apply an induction in \(|\eta |\), similarly to what was done in [2]. For \(x\in \eta \), we define

$$\begin{aligned} \xi ^{-}_x = \big \{y\in \eta \setminus x : |y-x|<r \big \}, \quad \xi ^{+}_x = \big \{y\in \eta : r \le |y-x|< R \big \}. \end{aligned}$$

Set

$$\begin{aligned} U_\vartheta (\eta ) = \Phi _\vartheta (\eta ) + b|\eta | = b|\eta | + E^{-}(\eta ) - \vartheta E^{+} (\eta ). \end{aligned}$$

Then the next estimate holds true for each \(x\in \eta \):

$$\begin{aligned} U_\vartheta (x,\eta \setminus x):= & {} U_\vartheta (\eta ) - U_\vartheta (\eta \setminus x)\nonumber \\= & {} b + 2 E^{-} (x, \eta \setminus x) - 2 \vartheta E^{+} (x, \eta \setminus x) \nonumber \\\ge & {} b + 2(c^{-} - \vartheta c^{+}) |\xi _x^{-}| - 2 \vartheta c^{+} |\xi _x^{+}|. \end{aligned}$$
(3.25)

Given \(n>2\) and positive \(\vartheta \) and b, assume that \(U_\vartheta (\eta ) \ge 0\) for each \(\eta \) with \(|\eta |=n-1\). Then to make the inductive step by means of (3.25) we have to show that, for each \(\eta \) such that \(|\eta |=n\), there exists \(x\in \eta \) such that \(U_\vartheta (x,\eta \setminus x)\ge 0\). Set

$$\begin{aligned} \bar{n} = |\xi ^{-}_x| = \max _{y\in \eta }|\xi ^{-}_y|, \qquad x\in \eta . \end{aligned}$$
(3.26)

If \(\bar{n}=0\), then \(\eta \) is such that \(|y-z|\ge r\) for each distinct \(y,z\in \eta \). In this case, the balls \(B_z:= \{y\in \mathbb {R}^d: |y-z|< r/2\}\), \(z\in \eta \), do not overlap. Then \(|\xi ^{+}_x|\le \Xi (d, r, R)-1 \le \Delta (d) (1 + 2R/r)^d-1\), where \(\Xi (d, r, R)\) is the maximum number of rigid spheres of radius \(r/2\) packed in a ball of radius \(R+r/2\), and \(\Delta (d)\) is the density of the densest packing of equal rigid spheres in \(\mathbb {R}^d\), see e.g. [8, Chapter 1]. We apply this in (3.25) and get that \(U_\vartheta (x,\eta \setminus x)\ge 0\) whenever \(\vartheta \le b/\big (2 c^{+}(\Xi (d, r, R) - 1)\big )\). For \(\bar{n} >0\), let x be as in (3.26). Choose \(y_1, \ldots , y_s\) in \(\xi ^+_x\) such that the balls \(B_x\) and \(B_{y_i}\), \(i=1, \ldots , s\), realize the densest possible packing of the ball of radius \(R+r/2\) centered at x. Then \(s \le \Xi (d, r, R)-1\) and, for each \(y\in \xi _x^{+}\), one finds i such that \(|y-y_i|< r\). Otherwise \(B_y\) would overlap none of the \(B_{y_i}\), and thus the mentioned packing is not the densest one. Therefore, the balls \(C_i:=\{z\in \mathbb {R}^d: |z-y_i|<r\}\), \(i=1, \ldots , s\), cover \(\xi _x^{+}\). By (3.26) each \(C_i\) contains at most \(\bar{n}+1\) points of \(\eta \). This yields

$$\begin{aligned} |\xi _x^{+}|\le (\bar{n}+1) (\Xi (d, r, R) -1). \end{aligned}$$

Now we apply this in (3.25) and obtain that \(U_\vartheta (x,\eta \setminus x)\ge 0\) for

$$\begin{aligned} \vartheta =\min \left\{ \frac{c^{-}}{c^{+}\Xi (d, r, R)}; \frac{b}{2c^{+}(\Xi (d, r, R)-1)}\right\} . \end{aligned}$$

Thus, the inductive step can be done, which yields the proof. \(\square \)
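The proof is constructive: once \(\Xi (d,r,R)\) is bounded by \(\Delta (d)(1+2R/r)^d\), an admissible \(\vartheta \) can be evaluated explicitly. The following minimal numerical sketch (not part of the paper; the function and parameter names are ours, and we restrict to \(d\le 3\), where \(\Delta (d)\) is known) computes such a \(\vartheta \) from the bounds above.

```python
# A minimal numerical sketch: evaluating an admissible theta from the bounds
# in the proof of Proposition 3.7. We assume d <= 3, where the densest-packing
# densities Delta(d) are known: Delta(1)=1, Delta(2)=pi/sqrt(12), Delta(3)=pi/sqrt(18).
import math

DELTA = {1: 1.0, 2: math.pi / math.sqrt(12.0), 3: math.pi / math.sqrt(18.0)}

def admissible_theta(b, c_minus, c_plus, r, R, d):
    """Return a theta > 0 for which (3.24) holds with the given b >= 0."""
    if r >= R:
        # point-wise domination (3.6): theta = c^-/c^+ works already with b = 0
        return c_minus / c_plus
    # Xi(d, r, R) <= Delta(d) * (1 + 2R/r)^d; a larger Xi gives a smaller,
    # hence still admissible, theta
    xi = DELTA[d] * (1.0 + 2.0 * R / r) ** d
    return min(c_minus / (c_plus * xi), b / (2.0 * c_plus * (xi - 1.0)))

# example: d = 2, c^- = c^+ = 1, r = 1, R = 3, stability constant b = 1
print(admissible_theta(1.0, 1.0, 1.0, 1.0, 3.0, 2))  # ~ 0.0115
```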

As an example of kernels with infinite range we consider the Gaussian kernels

$$\begin{aligned} a^{\pm } (x) = \frac{c_{\pm }}{(2\pi \sigma _{\pm }^2)^{d/2}} \exp \left( - \frac{1}{2\sigma _{\pm }^2}|x|^2 \right) , \end{aligned}$$
(3.27)

where \(c_{\pm }>0\) and \(\sigma _{\pm } >0\) are parameters.

Proposition 3.8

Let \(a^{\pm }\) be as in (3.27). Then for each \(b>0\), there exists \(\vartheta \) such that (3.5) holds for these \(\vartheta \) and b.

Proof

For \(\sigma _{-} \ge \sigma _{+}\), we have \(a^{-}(x) \ge \vartheta a^{+}(x)\) for all x whenever

$$\begin{aligned} \vartheta \le \left( \frac{\sigma _{+} c_{-}^{1/d}}{\sigma _{-} c_{+}^{1/d}} \right) ^d. \end{aligned}$$

Then (3.24), and thus (3.5), hold for such \(\vartheta \) and all \(b\ge 0\). For \(\sigma _{-} < \sigma _{+}\), we can write, see (3.23),

$$\begin{aligned} \phi _\vartheta (x) = \int _{\mathbb {R}^d} \hat{\phi }_\vartheta (k) \exp (i k\cdot x) d k, \end{aligned}$$

where

$$\begin{aligned} \hat{\phi }_\vartheta (k) = \frac{c_{-}}{(2\pi )^{d}} \exp \left( - \frac{1}{2} \sigma _{-}^2|k|^2 \right) \left[ 1 - \vartheta \frac{c_{+}}{c_{-}} \exp \left( - \frac{1}{2}\big (\sigma _{+}^2 - \sigma _{-}^2 \big )|k|^2 \right) \right] . \end{aligned}$$

For \(\vartheta _0= c_{-} / c_{+}\), we have that \(\hat{\phi }_{\vartheta _0} (k)\ge 0\) for all \(k\in \mathbb {R}^d\). Then \(\phi _{\vartheta _0}\) is positive definite in the sense of [31, Sect. 3.2]. This means that it is the Fourier transform of a positive finite measure on \(\mathbb {R}^d\), and hence by the Bochner theorem it follows that

$$\begin{aligned} \sum _{x,y\in \eta } \phi _{\vartheta _0} (x-y) = \phi _{\vartheta _0} (0) |\eta | + \Phi _{\vartheta _0} (\eta ) \ge 0. \end{aligned}$$

Thus, \(\Phi _{\vartheta _0}\) satisfies (3.24) with stability constant \(b_0= \phi _{\vartheta _0}(0) = c_{-} (2\pi \sigma _{-}^2)^{-d/2}\left( 1-\left( \sigma _{-}/\sigma _{+}\right) ^d \right) \). Then we apply Proposition 3.6 and obtain that (3.24) holds for the given b and

$$\begin{aligned} \vartheta = \frac{(2\pi \sigma _{-}^2)^{d/2} b}{c_{+}\left( 1-\left( \frac{\sigma _{-}}{\sigma _{+}}\right) ^d \right) } \end{aligned}$$

whenever this value does not exceed \(\vartheta _0\); for bigger values of b it suffices to take \(\vartheta = \vartheta _0\). This completes the proof. \(\square \)

4 Evolution of Correlation Functions and States

Our proof of Theorems 3.3 and 3.4 may roughly be divided into the following three steps. First, we show that for each \(k_0\in \mathcal {K}_{\alpha _1}\) and any \(\alpha _2 > \alpha _1\), cf. (2.17), the problem in (3.17) has a unique solution in \(\mathcal {K}_{\alpha _2}\) on the time interval \([0,T(\alpha _2, \alpha _1))\) with an explicitly computed \(T(\alpha _2, \alpha _1)<\infty \), see Lemma 4.8. To this end, in Lemma 4.5 we construct a family of bounded operators, indexed by \(t\in [0,T(\alpha _2, \alpha _1))\) and acting from \(\mathcal {K}_{\alpha _1}\) to \(\mathcal {K}_{\alpha _2}\), which gives the solution in question in a way resembling the action of a \(C_0\)-semigroup. Here we employ a combination of the usual Ovcyannikov method, as in e.g. [5], based on the estimates in (4.18) and (4.19), and a substochastic semigroup constructed in the pre-dual space in Lemma 4.2. The construction employs Assumption 1 and a perturbation result of [32] applied to the operator pre-dual to \(A^\Delta _b\) given in (4.15). In this way, we avoid the consequences of the right-hand sides of (3.11) related to \((\alpha -\alpha ')^2\), which are inconvenient for Ovcyannikov’s method. However, due to the term \(e^{\alpha _2}\) in (4.20) the length of the time interval \(T(\alpha _2, \alpha _1)\) is bounded by some \(\tau (\alpha _1)<\infty \). This and the fact that \(\tau (\alpha _1)\rightarrow 0\) as \(\alpha _1 \rightarrow +\infty \) do not allow one to increase \(T(\alpha _2, \alpha _1)\) ad infinitum just by increasing the space \(\mathcal {K}_{\alpha _2}\) containing the solution. To overcome this difficulty, and thus to construct the global solution, we make another two steps. In Lemma 4.9, we show that the constructed solution \(k_t\) lies in the cone defined in (2.18), and hence is the correlation function of a unique state \(\mu _t\), see Proposition 2.3. The relevance of this fact is twofold. First of all, it implies that the evolution \(k_{\mu _0}\mapsto k_t\) corresponds to the uniquely determined evolution of states – the main aim of this work. At the same time, by Lemma 4.9 we obtain that \(k_t (\eta ) \ge 0\). By the comparison made in Lemma 4.10 based on this positivity we get rid of the mentioned term \(e^{\alpha _2}\), cf. the second line in (4.20). This finally allows us to continue the solution \(k_t\) to all \(t>0\) – the third step – and thereby to construct the solution as claimed in Theorem 3.3. The estimates as in Theorem 3.4 are obtained by the mentioned comparison.

We begin by constructing auxiliary semigroups used to make (in Sect. 4.2) the first step of the construction outlined above.

4.1 Auxiliary Semigroups

For a given \(\alpha \in \mathbb {R}\), the space predual to \(\mathcal {K}_\alpha \), defined in (2.16), is

$$\begin{aligned} \mathcal {G}_\alpha := L^1 (\Gamma _0 , e^{\alpha |\cdot |} d \lambda ), \end{aligned}$$
(4.1)

in which the norm is, cf. (2.5),

$$\begin{aligned}&|G|_{\alpha } = \int _{\Gamma _0} |G(\eta )| \exp ( \alpha |\eta |) \lambda (d \eta )\nonumber \\&\quad = \sum _{n=0}^\infty \frac{e^{\alpha n}}{n!} \Vert G^{(n)}\Vert _{L^1((\mathbb {R}^d)^n)}. \end{aligned}$$
(4.2)

Clearly, \(|G|_{\alpha '} \le |G|_{\alpha }\) for \(\alpha ' < \alpha \), which yields

$$\begin{aligned} \mathcal {G}_{\alpha } \hookrightarrow \mathcal {G}_{\alpha '}, \qquad \mathrm{for} \ \ \alpha ' < \alpha , \end{aligned}$$
(4.3)

cf. (2.17). One can show that this embedding is also dense.

Recall that by \(m\ge 0\) we denote the mortality rate, see (1.4). For \(b\ge 0\) as in (3.5) we set

$$\begin{aligned} E_b(\eta ) = (b+m)|\eta | + E^{-} (\eta ) = b|\eta | +E(\eta ) . \end{aligned}$$
(4.4)

Here \(E^{-}(\eta )\) and \(E(\eta )\) are as in (3.3) and (3.4), respectively. For the same b, let the action of \(A_b\) on functions \(G:\Gamma _0 \rightarrow \mathbb {R}\) be as follows

$$\begin{aligned} A_b= & {} A_{1,b} + A_2 \nonumber \\ (A_{1,b} G) (\eta )= & {} - E_b(\eta ) G(\eta ), \nonumber \\ (A_2 G) (\eta )= & {} \int _{\mathbb {R}^d} E^{+} (y,\eta ) G (\eta \cup y) dy. \end{aligned}$$
(4.5)

Our aim now is to define \(A_b\) as a closed unbounded operator in \(\mathcal {G}_\alpha \) the domain of which contains \(\mathcal {G}_{\alpha '}\) for any \(\alpha ' > \alpha \). Let \(\mathcal {G}^+_\alpha \) denote the set of all those \(G\in \mathcal {G}_\alpha \) for which \(G(\eta )\ge 0\) for \(\lambda \)-almost all \(\eta \in \Gamma _0\). Set

$$\begin{aligned} \mathcal {D}_\alpha = \{ G \in \mathcal {G}_\alpha : E_b(\cdot ) G(\cdot ) \in \mathcal {G}_\alpha \}. \end{aligned}$$
(4.6)

For each \(\alpha ' >\alpha \), \( \mathcal {D}_\alpha \) contains \(\mathcal {G}_{\alpha '}\) and hence is dense in \( \mathcal {G}_\alpha \), see (4.3). Then the first summand in \(A_b\) turns into a closed and densely defined operator \((A_{1,b}, \mathcal {D}_\alpha )\) in \(\mathcal {G}_\alpha \) such that \(- A_{1,b} G \in \mathcal {G}^+_\alpha \) for each \(G \in \mathcal {D}_\alpha ^{+}:= \mathcal {D}_\alpha \cap \mathcal {G}_\alpha ^+\). By (2.13) and (3.5) one gets

$$\begin{aligned} |A_2 G |_\alpha\le & {} \int _{\Gamma _0} \int _{\mathbb {R}^d} E^{+}(y, \eta ) |G(\eta \cup y)|e^{\alpha |\eta |}dy \lambda (d\eta )\nonumber \\= & {} e^{-\alpha } \int _{\Gamma _0} |G(\eta )| e^{\alpha |\eta |} \left( \sum _{x\in \eta } E^{+} (x, \eta \setminus x) \right) \lambda ( d \eta )\nonumber \\= & {} e^{-\alpha } |E^{+} (\cdot ) G(\cdot )|_\alpha \le (e^{-\alpha }/\vartheta ) |A_{1,b} G|_\alpha . \end{aligned}$$
(4.7)
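The last inequality in (4.7) follows from Assumption 1: by (3.5) and (4.4),

$$\begin{aligned} \vartheta E^{+} (\eta ) \le b|\eta | + E^{-}(\eta ) \le E_b(\eta ), \qquad \eta \in \Gamma _0, \end{aligned}$$

so that \(|E^{+} (\cdot ) G(\cdot )|_\alpha \le \vartheta ^{-1} |E_b (\cdot ) G(\cdot )|_\alpha = \vartheta ^{-1} |A_{1,b} G|_\alpha \).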

Then for \(\alpha > - \log \vartheta \), we have that \(e^{-\alpha }/\vartheta < 1\), and hence \(A_2\) is \(A_{1,b}\)-bounded. This means that \((A_b,\mathcal {D}_\alpha )\) is closed and densely defined in \(\mathcal {G}_\alpha \), see (4.5).

In the proof of Lemma 4.2 below we employ the perturbation theory for positive semigroups of operators in ordered Banach spaces developed in [32]. Prior to stating the lemma we present the relevant fragments of this theory in spaces of integrable functions. Let E be a measurable space with a \(\sigma \)-finite measure \(\nu \), and \(X:=L^{1}\left( E\rightarrow \mathbb {R},d\nu \right) \) be the Banach space of \(\nu \)-integrable real-valued functions on E with norm \(\left\| \cdot \right\| \). Let \(X^{+}\) be the cone in X consisting of all \(\nu \)-a.e. nonnegative functions on E. Clearly, \(\left\| f+g\right\| =\left\| f\right\| +\left\| g\right\| \) for any \(f,g\in X^{+}\), and \(X=X^{+}-X^{+}\). Recall that a \(C_0\)-semigroup \(\{S(t)\}_{t\ge 0}\) of bounded linear operators on X is called positive if \(S(t)f\in X^+\) for all \(f\in X^+\). A positive semigroup is called substochastic (resp. stochastic) if \(\Vert S(t)f\Vert \le \Vert f\Vert \) (resp. \(\Vert S(t)f\Vert = \Vert f\Vert \)) for all \(f\in X^+\). Let \(\left( A_{0},D(A_{0})\right) \) be the generator of a positive \(C_{0}\)-semigroup \(\{S_{0}\left( t\right) \}_{t\ge 0}\) on X. Set \(D^{+}(A_{0})=D(A_{0})\cap X^{+}\). Then \(D(A_{0})\) is dense in X, and \(D^{+}(A_{0})\) is dense in \(X^{+}\). Let \(P:D(A_0)\rightarrow X\) be a positive linear operator, i.e., \(Pf\in X^+\) for all \(f\in D^+(A_0)\). The next statement is an adaptation of Theorem 2.2 in [32].

Proposition 4.1

Suppose that for any \(f\in D^+(A_0)\), the following holds

$$\begin{aligned} \int _{E}\bigl ( ( A_{0}+P) f\bigr ) \left( x\right) \nu \left( d x\right) \le 0. \end{aligned}$$
(4.8)

Then for all \(r\in [0,1)\), the operator \(\bigl (A_0+rP, D(A_0)\bigr )\) is the generator of a substochastic \(C_0\)-semigroup in X.

Lemma 4.2

For each \(\alpha > - \log \vartheta \), the operator \((A_b, \mathcal {D}_\alpha )\) is the generator of a substochastic semigroup \(\{S(t)\}_{t\ge 0}\) in \(\mathcal {G}_{\alpha }\).

Proof

We apply Proposition 4.1 with \(E=\Gamma _0\), \(X=\mathcal {G}_\alpha \) as in (4.1), and \(A_0=A_{1,b}\). For \(r>0\) and \(A_2\) as in (4.5), we set \(P=r^{-1}A_2\). For such \(A_0\) and P, and for \(G \in \mathcal {D}_\alpha ^+\), the left-hand side of (4.8) takes the form, cf. (4.7),

$$\begin{aligned}&- \int _{\Gamma _0} E_b(\eta ) G(\eta ) \exp ( \alpha |\eta |) \lambda (d \eta ) \\&+ \,\, r^{-1} \int _{\Gamma _0} \int _{\mathbb {R}^d} E^{+}(y, \eta ) G(\eta \cup y) \exp (\alpha |\eta |)dy \lambda (d\eta )\\&\quad = \int _{\Gamma _0} \bigl ( - E_b(\eta ) + r^{-1} e^{-\alpha } E^{+} (\eta )\bigr ) G(\eta ) \exp (\alpha |\eta |) \lambda (d\eta ). \end{aligned}$$

For a fixed \(\alpha > - \log \vartheta \), pick \(r\in (0,1)\) such that \(r^{-1} (e^{-\alpha }/\vartheta )<1\). Then, for such \(\alpha \) and r, we have

$$\begin{aligned} \int _{\Gamma _0} \bigl ( - E_b(\eta ) + r^{-1} e^{-\alpha } E^{+} (\eta )\bigr ) G(\eta ) \exp (\alpha |\eta |) \lambda (d\eta ) \le 0, \end{aligned}$$
(4.9)

which holds in view of (3.5). Since \(r^{-1} A_2\) is a positive operator, by Proposition 4.1 we have that \(A_b= A_{1,b} + A_2 = A_{1,b} + r(r^{-1} A_2)\) generates a substochastic semigroup \(\{S(t)\}_{t\ge 0}\) in \(\mathcal {G}_{\alpha }\). \(\square \)

Now we turn to constructing the semigroup ‘sun-dual’ to that mentioned in Lemma 4.2. Let \(A^*_b\) be the adjoint of \((A_b, \mathcal {D}_\alpha )\) in \(\mathcal {K}_\alpha \) with domain, cf. (3.13),

$$\begin{aligned} \mathrm{Dom}(A^*_b) =\bigl \{k\in \mathcal {K}_\alpha : \exists \tilde{k}\in \mathcal {K}_\alpha \ \ \forall G\in \mathcal {D}_\alpha \ \ \langle \! \langle A_b G , k\rangle \!\rangle = \langle \! \langle G, \tilde{k}\rangle \!\rangle \bigr \}. \end{aligned}$$

For each \(k\in \mathrm{Dom}(A^*_b)\), the action of \(A^*_b\) on k is described in (3.9) with E replaced by \(E_b\), see (4.4). By (3.11) we then get \(\mathcal {K}_{\alpha '} \subset \mathrm{Dom}(A^*_b)\) for each \(\alpha ' < \alpha \). Let \(\mathcal {Q}_\alpha \) stand for the closure of \(\mathrm{Dom}(A^*_b)\) in \(\Vert \cdot \Vert _\alpha \). Then

$$\begin{aligned} \mathcal {Q}_\alpha := \overline{\mathrm{Dom}(A^*_b)}\supset \mathrm{Dom}(A^*_b) \supset \mathcal {K}_{\alpha '}, \qquad \mathrm{for} \ \mathrm{any} \ \alpha '<\alpha . \end{aligned}$$
(4.10)

Note that \(\mathcal {Q}_\alpha \) is a proper subset of \(\mathcal {K}_\alpha \). For each \(t\ge 0\), the adjoint \(S^*(t)\) of S(t) is a bounded operator in \(\mathcal {K}_\alpha \). However, the semigroup \(\{S^*(t)\}_{t\ge 0}\) is not strongly continuous. For \(t>0\), let \( S^{\odot }_\alpha (t) \) denote the restriction of \(S^*(t)\) to \(\mathcal {Q}_\alpha \). Since \(\{S(t)\}_{t\ge 0}\) is a semigroup of contractions, for \(k\in \mathcal {Q}_\alpha \) and all \(t\ge 0\), we have that

$$\begin{aligned} \Vert S^{\odot }_\alpha (t)k\Vert _\alpha = \Vert S^{*}(t)k\Vert _\alpha \le \Vert k\Vert _\alpha . \end{aligned}$$
(4.11)

Proposition 4.3

For every \(\alpha ' <\alpha \) and any \(k\in \mathcal {K}_{\alpha '}\), the map

$$\begin{aligned}{}[0, +\infty ) \ni t \mapsto S^{\odot }_\alpha (t) k \in \mathcal {K}_\alpha \end{aligned}$$

is continuous.

Proof

By [28, Theorem 10.4, page 39], the collection \(\{S^{\odot }_\alpha (t)\}_{t\ge 0}\) constitutes a \(C_0\)-semigroup on \(\mathcal {Q}_\alpha \) the generator of which, \(A^{\odot }_\alpha \), is the part of \(A^*_b\) in \(\mathcal {Q}_\alpha \). That is, \(A^{\odot }_\alpha \) is the restriction of \(A^*_b\) to the set

$$\begin{aligned} \mathrm{Dom} \big (A^{\odot }_\alpha \big ):= \big \{ k\in \mathrm{Dom}(A^*_b): A^*_b k \in \mathcal {Q}_\alpha \big \}, \end{aligned}$$

cf. [28, Definition 10.3, page 39]. The continuity in question follows by the \(C_0\)-property of the semigroup \(\{S^{\odot }_\alpha (t)\}_{t\ge 0}\) and (4.10).\(\square \)

By (3.11) it follows that

$$\begin{aligned} \mathrm{Dom} \big (A^{\odot }_{\alpha ''} \big ) \supset \mathcal {K}_{\alpha '}, \qquad \alpha ' < \alpha '', \end{aligned}$$
(4.12)

and hence, see [28, Theorem 2.4, page 4],

$$\begin{aligned} S^{\odot }_{\alpha ''} (t) k \in \mathrm{Dom} \big (A^{\odot }_{\alpha ''} \big ), \end{aligned}$$
(4.13)

and

$$\begin{aligned} \frac{d}{dt} S^{\odot }_{\alpha ''} (t) k = A^{\odot }_{\alpha ''} S^{\odot }_{\alpha ''} (t) k, \end{aligned}$$
(4.14)

which holds for all \(\alpha '' \in (\alpha ', \alpha ]\) and \(k\in \mathcal {K}_{\alpha '}\).

4.2 The Main Operators

For \(E_b\) as in (4.4), we set

$$\begin{aligned} A_b^\Delta= & {} A^\Delta _{1,b} + A^\Delta _2, \nonumber \\ \big (A^\Delta _{1,b}k \big )(\eta )= & {} - E_b (\eta ) k(\eta ), \end{aligned}$$
(4.15)

with \(A^\Delta _2\) as in (3.9). We also set

$$\begin{aligned} B_b^\Delta= & {} B^\Delta _{1} + B^\Delta _{2,b}, \nonumber \\ \big (B^\Delta _{2,b}k \big )(\eta )= & {} \big (B^\Delta _2 k \big )(\eta ) + b|\eta |k(\eta ). \end{aligned}$$
(4.16)

Here \(B^\Delta _{1}\) and \(B^\Delta _{2}\) are as in (3.10). Note that

$$\begin{aligned} L^\Delta = A^\Delta + B^\Delta = A^\Delta _b + B^\Delta _b. \end{aligned}$$
(4.17)

The expressions in (4.15) and (4.16) can be used to define the corresponding continuous operators acting from \(\mathcal {K}_{\alpha '}\) to \(\mathcal {K}_{\alpha }\), \(\alpha '<\alpha \), cf. (3.14), and hence the elements of \(\mathcal {L}(\mathcal {K}_{\alpha '}, \mathcal {K}_{\alpha })\) the norms of which are estimated by means of the analogues of (3.11) and (3.13). For these operators, we use the notations \((B^\Delta _b)_{\alpha \alpha '}\) and \((B^\Delta _{2,b})_{\alpha \alpha '}\). Then \(\Vert (B^\Delta _b)_{\alpha \alpha '}\Vert \) will stand for the operator norm, and thus (3.13) can be rewritten in the form

$$\begin{aligned} \Vert (B^\Delta _b)_{\alpha \alpha '} \Vert \le \frac{\langle a^{+} \rangle + b + \langle a^{-} \rangle e^{\alpha '}}{e(\alpha -\alpha ')}. \end{aligned}$$
(4.18)
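The characteristic factor \(1/e(\alpha - \alpha ')\) in (4.18), and in the similar bounds below, can be traced back to the elementary estimate (recalled here only for convenience; we do not reproduce the derivation of (3.13))

$$\begin{aligned} \sup _{n\in \mathbb {N}_0} n e^{-(\alpha -\alpha ')n} \le \sup _{x\ge 0} x e^{-(\alpha -\alpha ')x} = \frac{1}{e(\alpha - \alpha ')}, \end{aligned}$$

applied with \(n = |\eta |\).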

For fixed \(\alpha > \alpha ' > - \log \vartheta \), we construct continuous operators \(Q_{\alpha \alpha '} (t;\mathbb {B}): \mathcal {K}_{\alpha '} \rightarrow \mathcal {K}_{\alpha }\), \(t>0\), which will be used to obtain the solution \(k_t\) as in Theorem 3.3 and to study its properties. Here \(\mathbb {B}\) will be taken in the following two versions: (a) \(\mathbb {B}= B^\Delta _b\); (b) \(\mathbb {B}= B^\Delta _{2,b}\), see (4.16). In both cases, for each \(\alpha _1, \alpha _2 \in [\alpha ', \alpha ]\) such that \(\alpha _1 < \alpha _2\), cf. (4.18), the following holds

$$\begin{aligned} \Vert \mathbb {B}_{\alpha _2\alpha _1} \Vert \le \frac{\beta (\alpha _2;\mathbb {B})}{e(\alpha _2 - \alpha _1)}, \end{aligned}$$
(4.19)

with

$$\begin{aligned}&\beta \big (\alpha _2; B^\Delta _b \big ) = \langle a^{+} \rangle + b + \langle a^{-} \rangle e^{\alpha _2}, \nonumber \\&\beta \big (\alpha _2; B^\Delta _{2,b} \big ) = \langle a^{+} \rangle + b. \end{aligned}$$
(4.20)

For \(t>0\) and \(\alpha _1\), \(\alpha _2\) as above, let \(\Sigma _{\alpha _2\alpha _1}(t):\mathcal {K}_{\alpha _1}\rightarrow \mathcal {K}_{\alpha _2}\) be the restriction of \(S^{\odot }_{\alpha _2}(t)\) to \(\mathcal {K}_{\alpha _1}\), cf. (4.12) and (4.13). Note that the embedding \(\mathcal {K}_{\alpha _1}\hookrightarrow \mathcal {K}_{\alpha _2}\) can be written as \(\Sigma _{\alpha _2\alpha _1}(0)\), and hence

$$\begin{aligned} \Sigma _{\alpha _2\alpha _1}(t) = \Sigma _{\alpha _2\alpha _1}(0) S^{\odot }_{\alpha _1}(t). \end{aligned}$$
(4.21)

Also, for each \(\alpha _3 > \alpha _2\), we have

$$\begin{aligned} \Sigma _{\alpha _3\alpha _1}(t) = \Sigma _{\alpha _3\alpha _2}(0) \Sigma _{\alpha _2\alpha _1}(t):= \Sigma _{\alpha _2\alpha _1}(t), \qquad t\ge 0. \end{aligned}$$
(4.22)

Here and in the sequel, we omit writing embedding operators if no confusion arises. In view of (4.11), it follows that

$$\begin{aligned} \Vert \Sigma _{\alpha _2\alpha _1}(t)\Vert \le 1. \end{aligned}$$
(4.23)

Remark 4.4

By Lemma 4.2 we have that

$$\begin{aligned} \forall k\in \mathcal {K}_{\alpha _{1}}^{+} \qquad \Sigma _{\alpha _2\alpha _1}(t)k \in \mathcal {K}_{\alpha _{2}}^{+}, \ \ t\ge 0, \end{aligned}$$

see (2.19). Also \((B^\Delta _{2,b})_{\alpha _2\alpha _1}\), but not \((B^\Delta _b)_{\alpha _2\alpha _1}\), has the same positivity property.

Set, cf. (4.20),

$$\begin{aligned} T(\alpha _2, \alpha _1;\mathbb {B}) = \frac{\alpha _2 - \alpha _1}{\beta (\alpha _2;\mathbb {B})}, \qquad \alpha _2 > \alpha _1 , \end{aligned}$$
(4.24)

and then

$$\begin{aligned} \mathcal {A}(\mathbb {B}) = \big \{(\alpha _1, \alpha _2, t): - \log \vartheta < \alpha _1 < \alpha _2, \ \ 0\le t < T(\alpha _2, \alpha _1;\mathbb {B}) \big \}. \end{aligned}$$
(4.25)
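Written out explicitly by means of (4.20) and (4.24), the two time horizons are

$$\begin{aligned} T\big (\alpha _2, \alpha _1; B^\Delta _b \big ) = \frac{\alpha _2 - \alpha _1}{\langle a^{+} \rangle + b + \langle a^{-} \rangle e^{\alpha _2}}, \qquad T\big (\alpha _2, \alpha _1; B^\Delta _{2,b} \big ) = \frac{\alpha _2 - \alpha _1}{\langle a^{+} \rangle + b}, \end{aligned}$$

so that \(T (\alpha _2, \alpha _1; B^\Delta _{b}) \le T (\alpha _2, \alpha _1; B^\Delta _{2,b})\), a comparison used in the proof of Lemma 4.10 below.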

Lemma 4.5

For each of the two choices of \(\mathbb {B}\), see (4.20), there exists the corresponding family of linear maps, \(\{Q_{\alpha _2 \alpha _1}(t;\mathbb {B}): (\alpha _1, \alpha _2, t)\in \mathcal {A}(\mathbb {B})\}\), each element of which has the following properties:

  1. (i)

    \(Q_{\alpha _2 \alpha _1}(t;\mathbb {B})\in \mathcal {L}(\mathcal {K}_{\alpha _1} , \mathcal {K}_{\alpha _2})\);

  2. (ii)

    the map \([0, T(\alpha _2, \alpha _1; \mathbb {B})) \ni t \mapsto Q_{\alpha _2 \alpha _1}(t;\mathbb {B}) \in \mathcal {L}(\mathcal {K}_{\alpha _1}, \mathcal {K}_{\alpha _2}) \) is continuous;

  3. (iii)

    the operator norm of \(Q_{\alpha _2 \alpha _1}(t;\mathbb {B})\in \mathcal {L}(\mathcal {K}_{\alpha _1}, \mathcal {K}_{\alpha _2})\) satisfies

    $$\begin{aligned} \Vert Q_{\alpha _2 \alpha _1}(t;\mathbb {B})\Vert \le \frac{T(\alpha _2, \alpha _1;\mathbb {B})}{T(\alpha _2, \alpha _1;\mathbb {B}) -t}; \end{aligned}$$
    (4.26)
  4. (iv)

    for each \(\alpha _3 \in (\alpha _1 , \alpha _2)\) and \(t< T(\alpha _3, \alpha _1;\mathbb {B})\), the following holds

$$\begin{aligned} \frac{d}{dt} Q_{\alpha _2 \alpha _1}(t;\mathbb {B}) = \bigl ( \left( A^\Delta _b\right) _{\alpha _2\alpha _3} + \mathbb {B}_{\alpha _2\alpha _3} \bigr ) Q_{\alpha _3 \alpha _1}(t;\mathbb {B}). \end{aligned}$$
(4.27)

The proof of this lemma is based on the following construction. For \(l\in \mathbb {N}\) and \(t>0\), we set

$$\begin{aligned} \mathcal {T}_l := \big \{ (t,t_1, \ldots , t_l) : 0\le t_l \le \cdots \le t_1 \le t \big \}, \end{aligned}$$
(4.28)

take \(\alpha \in (\alpha _1, \alpha _2]\), and then take \(\delta < \alpha - \alpha _1\). Next we divide the interval \([\alpha _1,\alpha ]\) into subintervals with endpoints \(\alpha ^s\), \(s=0, \ldots , 2l +1\), as follows. Set \(\alpha ^0 =\alpha _1\), \(\alpha ^{2l+1 } = \alpha \), and

$$\begin{aligned} \alpha ^{2s}= & {} \alpha _1 +\frac{s}{l+1}\delta + s \epsilon , \qquad \epsilon = (\alpha - \alpha _1 - \delta )/l, \nonumber \\ \alpha ^{2s+1}= & {} \alpha _1+ \frac{s+1}{l+1}\delta + s \epsilon , \qquad s= 0,1, \ldots , l. \end{aligned}$$
(4.29)

Then for \((t,t_1, \ldots , t_l) \in \mathcal {T}_l\), define

$$\begin{aligned}&\Pi _{\alpha \alpha _1}^{(l)}(t,t_1, \ldots , t_l;\mathbb {B}) = \Sigma _{\alpha \alpha ^{2l}}(t-t_1) \mathbb {B}_{\alpha ^{2l}\alpha ^{2l-1}}\times \cdots \nonumber \\&\quad \times \Sigma _{\alpha ^{2s+1} \alpha ^{2s}} (t_{l-s}-t_{l-s+1}) \mathbb {B}_{\alpha ^{2s}\alpha ^{2s-1}} \cdots \Sigma _{\alpha ^{3}\alpha ^{2}} (t_{l-1}-t_{l}) \mathbb {B}_{\alpha ^{2}\alpha ^{1}} \Sigma _{\alpha ^1 \alpha _1} (t_l). \end{aligned}$$
(4.30)
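For instance, for \(l=1\) the chain in (4.30) consists of a single \(\mathbb {B}\)-factor sandwiched between two \(\Sigma \)-factors:

$$\begin{aligned} \Pi _{\alpha \alpha _1}^{(1)}(t,t_1;\mathbb {B}) = \Sigma _{\alpha \alpha ^{2}}(t-t_1)\, \mathbb {B}_{\alpha ^{2}\alpha ^{1}}\, \Sigma _{\alpha ^1 \alpha _1} (t_1), \qquad 0\le t_1 \le t, \end{aligned}$$

with the intermediate levels \(\alpha ^1\), \(\alpha ^2\) given by (4.29).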

Proposition 4.6

For both choices of \(\mathbb {B}\) and each \(l\in \mathbb {N}\), the operators defined in (4.30) have the following properties:

  1. (i)

    for each \((t, t_1 , \ldots , t_l) \in \mathcal {T}_l\), \(\Pi _{\alpha \alpha _1}^{(l)}(t,t_1, \ldots , t_l;\mathbb {B})\in \mathcal {L}(\mathcal {K}_{\alpha _1}, \mathcal {K}_{\alpha })\), and the map

    $$\begin{aligned} \mathcal {T}_l \ni (t,t_1, \ldots , t_l) \mapsto \Pi _{\alpha \alpha _1}^{(l)} (t,t_1, \ldots , t_l;\mathbb {B}) \in \mathcal {L}(\mathcal {K}_{\alpha _1}, \mathcal {K}_{\alpha }) \end{aligned}$$

    is continuous;

  2. (ii)

    for fixed \(t_1,t_2, \ldots , t_l\), and each \(\varepsilon >0\), the map

    $$\begin{aligned} (t_1, t_1 + \varepsilon ) \ni t \mapsto \Pi _{\alpha \alpha _1}^{(l)}(t,t_1, \ldots , t_l;\mathbb {B}) \in \mathcal {L}(\mathcal {K}_{\alpha _1}, \mathcal {K}_{\alpha }) \end{aligned}$$

    is continuously differentiable and for each \(\alpha ' \in (\alpha _1, \alpha )\) the following holds

$$\begin{aligned} \frac{d}{dt} \Pi _{\alpha \alpha _1}^{(l)} (t,t_1, \ldots , t_l;\mathbb {B}) = (A^\Delta _b)_{\alpha \alpha '} \Pi _{\alpha '\alpha _1}^{(l)} (t,t_1, \ldots , t_l;\mathbb {B}). \end{aligned}$$
(4.31)

Proof

The first part of claim (i) follows by (4.30), (4.19), and (4.23). To prove the second part we apply Proposition 4.3 and (4.21), and then (4.19), (4.20). By (4.12), (4.14), and (4.22), and the fact that

$$\begin{aligned} A^{\odot }_{\alpha '}k = \big (A^\Delta _b \big )_{\alpha ' \alpha } k, \qquad \mathrm{for} \ \ k\in \mathcal {K}_\alpha , \end{aligned}$$

one gets

$$\begin{aligned} \frac{d}{dt} \Sigma _{\alpha '\alpha _{2l}} (t) = \big (A^\Delta _b \big )_{\alpha '\alpha } \Sigma _{\alpha \alpha _{2l}} (t), \qquad \alpha ' >\alpha , \end{aligned}$$
(4.32)

which then yields (4.31). \(\square \)

Proof of Lemma 4.5

Take any \(T< T(\alpha _2, \alpha _1;\mathbb {B})\) and then pick \(\alpha \in (\alpha _1, \alpha _2]\) and a positive \(\delta < \alpha - \alpha _1\) such that

$$\begin{aligned} T < T_\delta := \frac{\alpha - \alpha _1 - \delta }{\beta (\alpha _2;\mathbb {B})}. \end{aligned}$$

For this \(\delta \), take \(\Pi ^{(l)}_{\alpha \alpha _1}\) as in (4.30), and then set

$$\begin{aligned}&Q_{\alpha \alpha _1}^{(n)} (t;\mathbb {B}) = \Sigma _{\alpha \alpha _1}(t) \nonumber \\&\quad + \sum _{l=1}^n \int _0^t\int _0^{t_1} \cdots \int _0^{t_{l-1}}\Pi _{\alpha \alpha _1}^{(l)}(t,t_1, \ldots , t_l;\mathbb {B}) d t_l \cdots dt_1, \quad n\in \mathbb {N}. \end{aligned}$$
(4.33)

By (4.23), (4.19), and (4.29) we have from (4.30) that

$$\begin{aligned} \Vert \Pi _{\alpha \alpha _1}^{(l)} (t,t_1, \ldots , t_l;\mathbb {B})\Vert \le \left( \frac{l}{eT_\delta }\right) ^l, \end{aligned}$$
(4.34)

holding for all \(l=1, \ldots , n\). This yields

$$\begin{aligned}&\Vert Q_{\alpha \alpha _1}^{(n)} (t;\mathbb {B}) - Q_{\alpha \alpha _1}^{(n-1)} (t;\mathbb {B})\Vert \nonumber \\&\quad \le \int _0^t\int _0^{t_1} \cdots \int _0^{t_{n-1}}\Vert \Pi _{\alpha \alpha _1}^{(n)}(t,t_1, \ldots , t_n;\mathbb {B})\Vert d t_n \cdots dt_1 \nonumber \\&\quad \le \frac{1}{n!} \left( \frac{n}{e}\right) ^n \left( \frac{T}{T_\delta }\right) ^n, \end{aligned}$$
(4.35)

hence,

$$\begin{aligned} \forall t\in [0,T] \quad Q_{\alpha \alpha _1}^{(n)} (t;\mathbb {B}) \rightarrow Q_{\alpha \alpha _1}(t;\mathbb {B})\in \mathcal {L}(\mathcal {K}_{\alpha _1},\mathcal {K}_{\alpha }), \ \ \mathrm{as} \ \ n\rightarrow +\infty . \end{aligned}$$

This proves claim (i) of the lemma. The proof of claim (ii) follows by the fact that the above-mentioned convergence is uniform on [0, T]. The estimate (4.26) readily follows from that in (4.34). Now by (4.30) and (4.32) we obtain

$$\begin{aligned} \frac{d}{dt} Q_{\alpha _2\alpha _1}^{(n)} (t;\mathbb {B}) = \left( A^\Delta _b\right) _{\alpha _2\alpha } Q_{\alpha \alpha _1}^{(n)} (t;\mathbb {B}) + \mathbb {B}_{\alpha _2\alpha } Q_{\alpha \alpha _1}^{(n-1)} (t;\mathbb {B}), \quad n\in \mathbb {N}. \end{aligned}$$

Then the continuous differentiability of the limit and (4.27) follow by standard arguments. \(\square \)
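For the reader's convenience we indicate how the above convergence and the estimate (4.26) follow from (4.34)–(4.35); this is only a sketch of the standard argument, with \(T_\delta \) as in the proof. Since \(n! \ge (n/e)^n\) and \(t\le T < T_\delta \),

$$\begin{aligned} \big \Vert Q_{\alpha \alpha _1}^{(n)} (t;\mathbb {B}) \big \Vert \le 1 + \sum _{l=1}^{n} \frac{1}{l!} \left( \frac{l}{e}\right) ^l \left( \frac{t}{T_\delta }\right) ^l \le \sum _{l=0}^{\infty } \left( \frac{t}{T_\delta }\right) ^l = \frac{T_\delta }{T_\delta - t}, \end{aligned}$$

while the increments in (4.35) are dominated by \((T/T_\delta )^n\), which yields the uniform convergence on [0, T]. Since the embedding \(\mathcal {K}_{\alpha }\hookrightarrow \mathcal {K}_{\alpha _2}\) has norm at most one, and since \(\alpha \in (\alpha _1, \alpha _2]\) and \(\delta >0\) can be chosen to make \(T_\delta \) arbitrarily close to \(T(\alpha _2, \alpha _1;\mathbb {B})\), the bound (4.26) follows.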

Remark 4.7

By (4.30), (4.33), and Lemma 4.5 we have that

$$\begin{aligned} \forall k\in \mathcal {K}_{\alpha _1}^+ \qquad Q_{\alpha _2 \alpha _1}(t;B_{2,b}^\Delta )k \in \mathcal {K}_{\alpha _2}^+, \quad t \in [0, T(\alpha _2 , \alpha _1;B_{2,b}^\Delta )). \end{aligned}$$
(4.36)

At the same time, \(Q_{\alpha _2 \alpha _1}(t;B^\Delta _b)\) is not positive, see (3.10) and Remark 4.4.

4.3 The Proof of Theorem 3.3

First we prove that the problem (3.17) has a unique solution on a bounded time interval.

Lemma 4.8

For each \(\alpha _2 > \alpha _1 > - \log \vartheta \), the problem (3.17) with \(k_0 \in \mathcal {K}_{\alpha _1}\) has a unique solution \(k_t \in \mathcal {K}_{\alpha _2}\) on the time interval \([0,T(\alpha _2, \alpha _1; B^\Delta _b))\). The solution has the property: \(k_t (\emptyset ) =1\) for all \(t\in [0,T(\alpha _2, \alpha _1; B^\Delta _b))\).

Proof

For each \(t\in [0,T(\alpha _2, \alpha _1; B^\Delta _b))\), one finds \(\alpha \in (\alpha _1, \alpha _2)\) such that also \(t\in [0,T(\alpha , \alpha _1; B^\Delta _b))\). Then by claim (i) of Lemma 4.5 and (3.15)

$$\begin{aligned} k_t := Q_{\alpha \alpha _1}(t;B^\Delta _b) k_{0} \end{aligned}$$
(4.37)

lies in \(\mathcal {D}^\Delta _{\alpha _2}\). By (4.27) the derivative of \(k_t\in \mathcal {K}_{\alpha _2}\) is

$$\begin{aligned} \frac{d}{dt} k_t = \Big ( \big (A^\Delta _b \big )_{\alpha _2\alpha } + \big (B^\Delta _b \big )_{\alpha _2\alpha } \Big )k_t = L^\Delta _{\alpha _2\alpha }k_t. \end{aligned}$$

Hence, \(k_t\) is a solution of (3.17), see (3.16). Moreover, \(k_t (\emptyset ) =1\) since \(k_0 (\emptyset ) =1\), see (2.12), and

$$\begin{aligned} \left( \frac{d}{dt} k_t\right) (\emptyset ) = \big (L^\Delta _\alpha k_t \big )(\emptyset ) = 0, \end{aligned}$$

see (3.8) – (3.10). To prove the stated uniqueness, assume that \(\tilde{k}_t \in \mathcal {D}^\Delta _{\alpha _2}\) is another solution of (3.17) with the same initial condition. Then for each \(\alpha _3>\alpha _2\), \(v_t := k_t - \tilde{k}_t\) is a solution of (3.17) in \(\mathcal {K}_{\alpha _3}\) with the zero initial condition. Here we assume that t and \(\alpha _3\) are such that \(t< T(\alpha _3, \alpha _1;B^\Delta _b)\). Clearly, \(v_t\) also solves (3.17) in \(\mathcal {K}_{\alpha _2}\). Thus, it can be written in the following form

$$\begin{aligned} v_t = \int _0^t \Sigma _{\alpha _3\alpha } (t-s) \left( B^\Delta _b \right) _{\alpha \alpha _2} v_s d s, \end{aligned}$$
(4.38)

where \(v_t\) on the left-hand side (resp. \(v_s\) on the right-hand side) is considered as an element of \(\mathcal {K}_{\alpha _3}\) (resp. \(\mathcal {K}_{\alpha _2}\)) and \(\alpha \in (\alpha _2, \alpha _3)\). Indeed, one obtains (4.38) by integrating the equation, see (4.17),

$$\begin{aligned} \frac{d}{dt} v_t = L^\Delta _{\alpha _3\alpha _2} v_t = \Big (\big (A^\Delta _b\big )_{\alpha _3 \alpha _2} + \big (B^\Delta _b \big )_{\alpha _3\alpha _2} \Big ) v_t, \qquad v_0 = 0, \end{aligned}$$

in which the second summand is considered as a nonhomogeneous term, see (4.32). Let us show that for all \(t<T(\alpha _2, \alpha _1; B^\Delta _b)\), \(v_t = 0\) as an element of \(\mathcal {K}_{\alpha _2}\). In view of the embedding \(\mathcal {K}_{\alpha _2}\hookrightarrow \mathcal {K}_{\alpha _3}\), cf. (2.17), this will follow from the fact that \(v_t = 0\) as an element of \(\mathcal {K}_{\alpha _3}\). For a given \(n\in \mathbb {N}\), we set \(\epsilon = (\alpha _3 - \alpha _2)/2n\) and \(\alpha ^l = \alpha _2 + l \epsilon \), \(l=0, \ldots , 2n\). Then we repeatedly apply (4.38) and obtain

$$\begin{aligned}&v_t = \int _0^t \int _0^{t_1} \cdots \int _0^{t_{n-1}} \Sigma _{\alpha _3\alpha ^{2n-1}}(t-t_1) \big (B^\Delta _b \big )_{\alpha ^{2n-1}\alpha ^{2n-2}} \Sigma _{\alpha ^{2n-2}\alpha ^{2n-3}}(t_1-t_2) \times \cdots \\&\qquad \quad \times \, \Sigma _{\alpha ^{2}\alpha ^{1}} (t_{n-1} - t_n) \big (B^\Delta _b \big )_{\alpha ^{1} \alpha ^{0}} v_{t_n} \, d t_n \cdots d t_1. \end{aligned}$$

Similarly as in (4.34) we then get from the latter, see (4.19), (4.20), and (4.23),

$$\begin{aligned} \Vert v_t\Vert _{\alpha _3}\le & {} \frac{t^n}{n!} \prod _{l=1}^{n} \Big \Vert (B^\Delta _b)_{\alpha ^{2l-1}\alpha ^{2l-2}} \Big \Vert \sup _{s\in [0,t]} \Vert v_s\Vert _{\alpha _2} \nonumber \\\le & {} \frac{1}{n!} \left( \frac{n}{e}\right) ^n \left( \frac{2 t \beta \big (\alpha _3; B^\Delta _b \big )}{\alpha _3 - \alpha _2}\right) ^n \sup _{s\in [0,t]} \Vert v_s\Vert _{\alpha _2}. \end{aligned}$$
(4.39)

This implies that \(v_t =0\) for \(t < (\alpha _3 - \alpha _2)/ 2 \beta (\alpha _3; B^\Delta _b)\). To prove that \(v_t =0\) for all t of interest one has to repeat the above procedure an appropriate number of times. \(\square \)

To make the next step we need the following result, the proof of which will be given in Sect. 5 below.

Lemma 4.9

(Identification Lemma) For each \(\alpha _2 > \alpha _1 > -\log \vartheta \), there exists \(\tau (\alpha _2, \alpha _1) \in (0, T(\alpha _2, \alpha _1;B^\Delta _b))\) such that \(Q_{\alpha _2\alpha _1} (t;B^\Delta _b): \mathcal {K}_{\alpha _1}^\star \rightarrow \mathcal {K}_{\alpha _2}^\star \) for each \(t\in [0, \tau (\alpha _2, \alpha _1)]\), see (2.18) and Lemma 4.5.

In the light of Proposition 2.3, Lemma 4.9 claims that for \(t\in [0, \tau (\alpha _2, \alpha _1)]\), the solution \(k_t\) as in Lemma 4.8 is the correlation function of a unique sub-Poissonian state \(\mu _t\) whenever \(k_0 = k_{\mu _0}\) for some \(\mu _0 \in \mathcal {P}_\mathrm{sP}\).

To complete the proof of Theorem 3.3 we need the following result. Recall that \(\mathcal {K}^\star _\alpha \subset \mathcal {K}^+_\alpha \), \(\alpha \in \mathbb {R}\), see (2.19).

Lemma 4.10

Let \(\alpha _2\), \(\alpha _1\), and \(\tau (\alpha _2, \alpha _1)\) be as in Lemma 4.9. Then there exists positive \(\tau _1 (\alpha _2, \alpha _1) \le \tau (\alpha _2, \alpha _1)\) such that, for each \(t\in [0, \tau _1(\alpha _2, \alpha _1)]\) and arbitrary \(k_0\in \mathcal {K}_{\alpha _1}^\star \) the following holds, cf. (4.20) and Remark 4.4,

$$\begin{aligned} 0\le \left( Q_{\alpha _2\alpha _1}\big (t;B^\Delta _b \big ) k_0\right) (\eta ) \le \left( Q_{\alpha _2\alpha _1}\big (t;B^\Delta _{2,b}\big ) k_0\right) (\eta ), \qquad \eta \in \Gamma _0. \end{aligned}$$
(4.40)

Proof

The left-hand inequality in (4.40) follows directly by Lemma 4.9. By Lemma 4.8 \(k_t\) as in (4.37) solves (3.17) in \(\mathcal {K}_{\alpha _2}\). Set

$$\begin{aligned} L^\Delta _2 = A^\Delta + B^\Delta _2 = A^\Delta _b + B^\Delta _{2,b}, \end{aligned}$$

where \(A^\Delta \), \(B^\Delta _2\) and \(A^\Delta _b\), \(B^\Delta _{2,b}\) are as in (3.9), (3.10) and (4.15), (4.16), respectively. Then we introduce \(((L^\Delta _2)_\alpha , \mathcal {D}_\alpha ^\Delta )\) and \((L^\Delta _2)_{\alpha \alpha '}\) as in Sect. 3.2. By claims (i) and (iv) of Lemma 4.5 we have that

$$\begin{aligned} u_t := Q_{\alpha \alpha _1} \big (t;B_{2,b}^\Delta \big ) k_{0}, \qquad \alpha \in (\alpha _1, \alpha _2), \end{aligned}$$
(4.41)

solves the problem

$$\begin{aligned} \frac{d}{dt} u_t = \big (L^\Delta _2 \big )_{\alpha _2} u_t, \qquad u_0 = k_0, \end{aligned}$$
(4.42)

on the time interval \([0, T(\alpha _2, \alpha _1; B^\Delta _{2,b}))\). Note that

$$\begin{aligned} T \big (\alpha _2, \alpha _1; B^\Delta _{b} \big ) \le T \big (\alpha _2, \alpha _1; B^\Delta _{2,b} \big ), \end{aligned}$$

see (4.20) and (4.24). Take \(\alpha , \alpha ' \in (\alpha _1, \alpha _2)\), \(\alpha ' < \alpha \), and pick positive \(\tau _1\le \tau (\alpha _2, \alpha _1)\) such that

$$\begin{aligned} \tau _1=\tau _1(\alpha _2,\alpha _1) < \min \Big \{T \big ( \alpha _2, \alpha ;B^\Delta _b \big ); T\big (\alpha ', \alpha _1;B^\Delta _{2,b} \big ) \Big \}. \end{aligned}$$

By (4.42) the difference \(u_t - k_t \in \mathcal {K}_{\alpha _2}\) can be written down in the form

$$\begin{aligned} u_t - k_t = \int _0^t Q_{\alpha _2 \alpha }\big (t-s;B^\Delta _{2,b} \big ) \left( - B^\Delta _1\right) _{\alpha \alpha '} k_s ds, \end{aligned}$$
(4.43)

where \(t\le \tau _1\) and the operator \(( - B^\Delta _1)_{\alpha \alpha '}\) is positive with respect to the cone (2.19), see (3.10) and (3.13). In (4.43), \(k_s \in \mathcal {K}_{\alpha '}\) and \(Q_{\alpha _2 \alpha }(t-s;B^\Delta _{2,b}) \in \mathcal {L}(\mathcal {K}_{\alpha },\mathcal {K}_{\alpha _2})\) for all \(s\in [0,\tau _1]\). Since \(Q_{\alpha _2 \alpha }(t-s;B^\Delta _{2,b})\) is also positive, see Remark 4.4, and \(k_s \in \mathcal {K}_{\alpha '}^\star \subset \mathcal {K}_{\alpha '}^+\) (by (4.37) and Lemma 4.9), we have \(u_t - k_t \in \mathcal {K}_{\alpha _2}^+\) for \(t\le \tau _1(\alpha _2 , \alpha _1)\), which yields (4.40).\(\square \)

Corollary 4.11

Let \(\alpha _2\), \(\alpha _1\), and \(\tau _1(\alpha _2, \alpha _1)\) be as in Lemma 4.10. Then the following holds for all \(t \le \tau _1(\alpha _2, \alpha _1)\)

$$\begin{aligned} \Vert k_t\Vert _{\alpha _2} = \bigl \Vert Q_{\alpha _2\alpha _1}\big (t;B^\Delta _b \big )k_0\bigr \Vert _{\alpha _2} \le \frac{(\alpha _2 - \alpha _1)\Vert k_0\Vert _{\alpha _1}}{\alpha _2 - \alpha _1 - t(\langle a^{+} \rangle + b)}. \end{aligned}$$
(4.44)

Proof

Apply (4.40) and then (4.20) and (4.24). \(\square \)
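In more detail: by (4.40), by the monotonicity of the norm of \(\mathcal {K}_{\alpha _2}\) on the cone \(\mathcal {K}_{\alpha _2}^{+}\), and then by (4.26), (4.24), and (4.20),

$$\begin{aligned} \Vert k_t\Vert _{\alpha _2} \le \bigl \Vert Q_{\alpha _2\alpha _1}\big (t;B^\Delta _{2,b} \big )k_0\bigr \Vert _{\alpha _2} \le \frac{T\big (\alpha _2, \alpha _1;B^\Delta _{2,b}\big )}{T\big (\alpha _2, \alpha _1;B^\Delta _{2,b}\big ) - t}\, \Vert k_0\Vert _{\alpha _1} = \frac{(\alpha _2 - \alpha _1)\Vert k_0\Vert _{\alpha _1}}{\alpha _2 - \alpha _1 - t(\langle a^{+} \rangle + b)}, \end{aligned}$$

which is exactly (4.44).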

Proof of Theorem 3.3

Let \(\alpha _0 > - \log \vartheta \) be such that \(k_{\mu _0} \in \mathcal {K}_{\alpha _0}\), cf. (2.17). Then by Lemma 4.8 we have that for each \(\alpha _1 > \alpha _0\) and \(\alpha \in (\alpha _0, \alpha _1)\),

$$\begin{aligned} k_t := Q_{\alpha \alpha _0}\big (t;B^\Delta _b \big )k_0 \in \mathcal {K}_{\alpha }^\star , \qquad t\le \tau _1 (\alpha _1, \alpha _0), \end{aligned}$$

solves (3.17) in \(\mathcal {K}_{\alpha _1}\). Its continuation to an arbitrary \(t>0\) follows by (4.44) in a standard way. \(\square \)

4.4 The Proof of Theorem 3.4

4.4.1 Case \(\langle a^{+} \rangle >0\) and \(m\in [0,\langle a^{+} \rangle ]\)

The proof will be carried out by deriving appropriate bounds for \(u_t\) defined in (4.41) with \(k_0 = k_{\mu _0} \in \mathcal {K}_{\alpha _0}^\star \). Recall that, for \(\alpha _1 > \alpha _0\), \(u_t\in \mathcal {K}_{\alpha _1}\) for \(t< T(\alpha _1, \alpha _0;B^\Delta _{2,b})\). For a given \(\delta \le m\), let us choose the value of \(C_\delta \). The first condition is that

$$\begin{aligned} C_\delta ^{|\eta |} \ge k_0 (\eta ). \end{aligned}$$
(4.45)

Next, if (3.5) holds with a given \(\vartheta >0\) and \(b=0\), we take any \(\delta \le m\) and \(C_\delta \ge 1/\vartheta \) such that also (4.45) holds. If (3.5) holds with \(b>0\), we take any \(\delta < m\) and then \(C_\delta \ge b/(m-\delta )\vartheta \) such that also (4.45) holds. In all these cases, by Proposition 3.6 we have that

$$\begin{aligned} E^{-} (\eta ) - \frac{1}{C_\delta } E^{+} (\eta ) \ge - (m-\delta ) |\eta |, \qquad \eta \in \Gamma _0. \end{aligned}$$
(4.46)

Let \(r_t(\eta )\) denote the right-hand side of (3.18). For \(\alpha _1 > \alpha _0\), we take \(\alpha , \alpha ' \in (\alpha _0, \alpha _1)\), \(\alpha ' < \alpha \) and then consider

$$\begin{aligned} v_t:= & {} Q_{\alpha _1 \alpha _0}\big (t;B^\Delta _{2,b} \big )r_0 \nonumber \\= & {} r_t + \int _0^t Q_{\alpha _1 \alpha }\big (t-s;B^\Delta _{2,b} \big )D_{\alpha \alpha '} r_s ds, \end{aligned}$$
(4.47)

where

$$\begin{aligned} t\le \tau _2 := \min \left\{ \frac{\alpha '- \alpha _0}{\langle a^{+} \rangle - \delta } ;T \big (\alpha _1, \alpha ;B^\Delta _{2,b}\big )\right\} . \end{aligned}$$
(4.48)

The operator D in (4.47) is

$$\begin{aligned} (D_{\alpha \alpha '} r_s) (\eta )= & {} \bigg [ - m|\eta | - E^{-}(\eta ) + \frac{1}{C_\delta } \exp \bigg (- \big (\langle a^{+} \rangle - \delta \big )s\bigg ) E^{+}(\eta ) \nonumber \\&\quad + \delta |\eta | \bigg ] r_s (\eta ) \le 0, \qquad \eta \in \Gamma _0. \end{aligned}$$
(4.49)

The latter inequality holds for all \(s\in [0,\tau _2]\), see (4.46), and all \(m\in [0, \langle a^{+} \rangle ]\) and \(\delta < m\). Then by (4.36) we obtain from (4.41), the first line of (4.47), and (4.45) that

$$\begin{aligned} u_t ( \eta ) \le v_t (\eta ), \qquad t< T \big (\alpha _1, \alpha _0;B^\Delta _{2,b} \big ). \end{aligned}$$

Then by the second line of (4.47) and (4.49) we get that for \(t\le \tau _2\), see (4.48), the following holds

$$\begin{aligned} u_t (\eta ) \le v_t(\eta ) \le r_t(\eta ) , \qquad \eta \in \Gamma _0. \end{aligned}$$

The continuation of the latter inequality to larger values of t is straightforward. This completes the proof for this case.

4.4.2 Case \(\langle a^{+} \rangle >0\) and \(m >\langle a^{+} \rangle \)

Take \(\varepsilon \in (0, m - \langle a^{+} \rangle )\) and then set

$$\begin{aligned} \vartheta _\varepsilon = \vartheta \left( 1 - \frac{\varepsilon + 2 \langle a^{+} \rangle }{2 m} \right) . \end{aligned}$$

Thereafter, choose \(C_\varepsilon \ge 1/\vartheta _\varepsilon \) such that

$$\begin{aligned} C_\varepsilon ^{|\eta |} \ge k_0(\eta ), \qquad \eta \in \Gamma _0. \end{aligned}$$

Then, cf. (4.46),

$$\begin{aligned} E^{-} (\eta ) - \frac{1}{C_\varepsilon } E^{+} (\eta ) \ge - \big (m-\langle a^{+} \rangle - \varepsilon /2 \big ) |\eta |, \qquad \eta \in \Gamma _0. \end{aligned}$$
(4.50)

Let now \(r_t\) stand for the right-hand side of (3.19). Then the second line of (4.47) holds with \(D_{\alpha \alpha '}\) replaced by \(D_{\alpha \alpha '}^\varepsilon \). By definition the latter is such that: (a) \((D_{\alpha \alpha '}^\varepsilon r_s)(\emptyset )=0\);

$$\begin{aligned} \mathrm{(b)} \quad \big (D^\varepsilon _{\alpha \alpha '}r_s \big )(\{x\}) = - \big (m - \langle a^{+} \rangle - \varepsilon \big ) r_s (\{x\}) \le 0, \end{aligned}$$

and, for \(|\eta | \ge 2\), see (4.50),

$$\begin{aligned} \mathrm{(c)} \quad \big (D^\varepsilon _{\alpha \alpha '}r_s \big )(\eta )= & {} \left[ \varepsilon - m|\eta | - E^{-} (\eta ) + \frac{1}{C_\varepsilon }E^{+} (\eta ) + \langle a^{+} \rangle |\eta | \right] r_s (\eta )\\\le & {} \varepsilon \left( 1 - |\eta |/2\right) r_s(\eta ) \le 0. \end{aligned}$$

This yields (3.19) and thus completes the proof for this case.
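For the reader's convenience, the estimate in (c) is obtained from (4.50) as follows:

$$\begin{aligned} \varepsilon - m|\eta | - E^{-} (\eta ) + \frac{1}{C_\varepsilon }E^{+} (\eta ) + \langle a^{+} \rangle |\eta | \le \varepsilon - m|\eta | + \big ( m - \langle a^{+} \rangle - \varepsilon /2\big )|\eta | + \langle a^{+} \rangle |\eta | = \varepsilon \big (1 - |\eta |/2 \big ), \end{aligned}$$

which is indeed nonpositive for \(|\eta |\ge 2\).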

4.4.3 The Remaining Cases

For \(\langle a^{+} \rangle =0\) and \(t>0\), we set

$$\begin{aligned} \left( Q^{(0)}_{\alpha \alpha '} (t)u \right) (\eta ) = \exp \left[ - t E(\eta ) \right] u(\eta ), \end{aligned}$$
(4.51)

where \(\alpha ' < \alpha \) and \(u\in \mathcal {K}_{\alpha '}\). Then, cf. Lemma 4.5, \(Q^{(0)}_{\alpha \alpha '} (t): \mathcal {K}_{\alpha '} \rightarrow \mathcal {K}_{\alpha }\) continuously, and the map

$$\begin{aligned}{}[0, +\infty ) \ni t \mapsto Q^{(0)}_{\alpha \alpha '}(t) \in \mathcal {L}( \mathcal {K}_{\alpha '} , \mathcal {K}_{\alpha }) \end{aligned}$$

is continuous and such that, cf. (4.27),

$$\begin{aligned} \frac{d}{dt} Q^{(0)}_{\alpha '' \alpha '} (t) = \big (A^\Delta _1 \big )_{\alpha '' \alpha } Q^{(0)}_{\alpha \alpha '} (t), \qquad \alpha '' > \alpha , \end{aligned}$$
(4.52)

where \((A^\Delta _1)_{\alpha '' \alpha }\) is defined in (3.9) and (3.11). Now we set \(u_t = Q^{(0)}_{\alpha \alpha _0}(t) k_{\mu _0}\) and obtain from (4.51) and (4.52), similarly as in (4.43),

$$\begin{aligned} u_t - k_t = \int _0^t Q^{(0)}_{\alpha \alpha _1}(t-s) \left( - B^\Delta _1\right) _{\alpha _1\alpha _2} k_s ds \ge 0, \end{aligned}$$

which yields (3.20).

To prove that \(r_t (\eta ) := \vartheta ^{-|\eta |}\), \(t\ge 0\), is a stationary solution we set

$$\begin{aligned} k_t = Q_{\alpha \alpha _0} \big (t; B^\Delta _b \big ) r_0, \end{aligned}$$

where \(\alpha _0 > -\log \vartheta \) and \(\alpha > \alpha _0\). Then the following holds, cf. (4.43),

$$\begin{aligned} k_t = r_t + \int _0^t Q_{\alpha \alpha _2} \big (t-s; B^\Delta _b \big ) L^\Delta _{\alpha _2 \alpha _1} r_s d s, \end{aligned}$$

where \(\alpha _1 < \alpha _2\) are taken from \((\alpha _0, \alpha )\). For the case considered, we have

$$\begin{aligned} L^\Delta _{\alpha _2 \alpha _1} r_s = L^\Delta _{\alpha _2 \alpha _1} r_0 =0, \end{aligned}$$

which completes the proof for this case.

5 The Proof of the Identification Lemma

To prove Lemma 4.9 we use Proposition 2.3. Note that the solution mentioned in Lemma 4.8 already has properties (ii) and (iii) of (2.12), cf. (2.14). Thus, it remains to prove that (i) also holds. We do this as follows. First, we approximate the evolution \(k_0 \mapsto k_t\) established in Lemma 4.8 by evolutions \(k_{0,\mathrm{app}} \mapsto k_{t,\mathrm{app}}\) such that \(k_{t,\mathrm{app}}\) has property (i). Then we prove that for each \(G\in B^{\star }_\mathrm{bs}(\Gamma _0)\), \(\langle \!\langle G, k_{t,\mathrm{app}} \rangle \! \rangle \rightarrow \langle \!\langle G, k_{t} \rangle \! \rangle \) as the approximations are eliminated. The limiting transition is based on the representation \(\langle \!\langle G, k_{t,\mathrm{app}} \rangle \! \rangle = \langle \!\langle G_t, k_{0,\mathrm{app}} \rangle \! \rangle \) in which we use the so-called predual evolution \(G \mapsto G_t\). Then we just show that \(\langle \!\langle G_t, k_{0,\mathrm{app}} \rangle \! \rangle \rightarrow \langle \!\langle G_t, k_{0} \rangle \! \rangle \).

5.1 The Predual Evolution

The aim of this subsection is to construct the evolution \(B_\mathrm{loc}(\Gamma _0) \ni G_0 \mapsto G_t \in \mathcal {G}_{\alpha _1}\), see (4.1) and (4.2), such that, for each \(\alpha > \alpha _1\) and \(k_0\in \mathcal {K}_{\alpha _1}\), the following holds, cf. (4.37),

$$\begin{aligned} \big \langle \! \big \langle G_0 , Q_{\alpha \alpha _1}\big (t;B^\Delta _b \big )k_0 \big \rangle \! \big \rangle = \langle \! \langle G_t , k_0 \rangle \! \rangle , \end{aligned}$$
(5.1)

where \(b\ge 0\) and \(B^\Delta _b\) are as in (3.5) and (4.16), respectively. Let us define the action of \(B_b\) on appropriate \(G:\Gamma _0 \rightarrow \mathbb {R}\) via the duality

$$\begin{aligned} \langle \! \langle G, B^\Delta _b k \rangle \! \rangle = \langle \! \langle B_b G, k \rangle \! \rangle . \end{aligned}$$

Similarly as in (4.16) we then get

(5.2)

For \(\alpha _2 > \alpha _1\), let \((B_b)_{\alpha _1 \alpha _2}\) be the bounded linear operator from \(\mathcal {G}_{\alpha _2}\) to \(\mathcal {G}_{\alpha _1}\) the action of which is defined in (5.2). As in estimating the norm of \(B^\Delta _b\) in (4.18) one then gets

$$\begin{aligned} \Vert (B_b)_{\alpha _1\alpha _2}\Vert \le \frac{\langle a^+ \rangle + b+ \langle a^- \rangle e^{\alpha _2}}{e (\alpha _2 -\alpha _1)}. \end{aligned}$$
(5.3)

For the same \(\alpha _2\) and \(\alpha _1\), let \(S_{\alpha _1 \alpha _2}(t)\) be the restriction to \(\mathcal {G}_{\alpha _2}\) of the corresponding element of the semigroup mentioned in Lemma 4.2. Then \(S_{\alpha _1 \alpha _2}(t)\) acts as a bounded contraction from \(\mathcal {G}_{\alpha _2}\) to \(\mathcal {G}_{\alpha _1}\).

Now for a given \(l\in \mathbb {N}\) and \(\alpha \), \(\alpha _1\) as in (5.1), let \(\delta \) and \(\alpha ^s\), \(s=0, \ldots , 2l+1\), be as in (4.29). Then for \(t>0\) and \((t,t_1 , \ldots , t_l)\in \mathcal {T}_l\), see (4.28), we define, cf. (4.30),

As in Proposition 4.6, one shows that the map

$$\begin{aligned} \mathcal {T}_l \ni (t,t_1 , \ldots , t_l) \mapsto \Omega _{\alpha _1\alpha }^{(l)} (t, t_1, \ldots , t_l) \in \mathcal {L}(\mathcal {G}_{\alpha },\mathcal {G}_{\alpha _1}) \end{aligned}$$

is continuous. Define

$$\begin{aligned} H^{(n)}_{\alpha _1\alpha } (t) = S_{\alpha _1\alpha }(t) + \sum _{l=1}^n \int _0^t \int _0^{t_1} \cdots \int _0^{t_{l-1}} \Omega _{\alpha _1\alpha }^{(l)} (t, t_1 , \ldots , t_l) d t_l \cdots d t_1. \end{aligned}$$
(5.4)

Lemma 5.1

For each \(T\in (0, T(\alpha , \alpha _1;B^\Delta _b))\), see (4.24) and (4.20), the sequence of operators defined in (5.4) converges in \(\mathcal {L}(\mathcal {G}_{\alpha },\mathcal {G}_{\alpha _1})\) to a certain \(H_{\alpha _1\alpha } (t)\) uniformly on [0, T], and for each \(G_0 \in \mathcal {G}_\alpha \) and \(k_0 \in \mathcal {K}_{\alpha _1}\) the following holds

$$\begin{aligned} \big \langle \! \big \langle G_0 , Q_{\alpha \alpha _1}\big (t;B^\Delta _b \big )k_0 \big \rangle \! \big \rangle = \langle \! \langle H_{\alpha _1\alpha } (t) G_0, k_0 \rangle \! \rangle , \qquad t\in [0,T]. \end{aligned}$$
(5.5)

Proof

For the operators defined in (5.4), similarly as in (4.35) we get the following estimate

$$\begin{aligned} \big \Vert H^{(n)}_{\alpha _1\alpha } (t) - H^{(n-1)}_{\alpha _1\alpha } (t) \big \Vert \le \frac{1}{n!} \left( \frac{n}{e}\right) ^n \left( \frac{T}{T_\delta }\right) ^n, \end{aligned}$$

which yields the convergence stated in the lemma. By direct inspection one gets that

$$\begin{aligned} \big \langle \! \big \langle G_0 , Q^{(n)}_{\alpha \alpha _1}(t;B^\Delta _b)k_0 \big \rangle \! \big \rangle = \big \langle \! \big \langle H^{(n)}_{\alpha _1\alpha } (t) G_0, k_0 \big \rangle \! \big \rangle , \end{aligned}$$

see (4.33). Then (5.5) is obtained from the latter in the limit \(n\rightarrow +\infty \). Similarly as in (4.26), for the limiting operator the following estimate holds

$$\begin{aligned} \Vert H_{\alpha _1\alpha }(t)\Vert \le \frac{T\big (\alpha , \alpha _1;B^\Delta _b \big )}{T\big (\alpha , \alpha _1;B^\Delta _b \big ) - t}. \end{aligned}$$
(5.6)

\(\square \)

5.2 An Auxiliary Model

The approximations mentioned at the beginning of this section also employ an auxiliary model, which we introduce and study now. For this model, we construct three kinds of evolutions. The first one is \(k_{0} \mapsto k_{t}\in \mathcal {K}_\alpha \) obtained as in Lemma 4.8. Another evolution \(q_{0} \mapsto q_{t}\in \mathcal {G}_\omega \) is constructed in such a way that \(q_t\) is positive definite in the sense that \(\langle \! \langle G, q_t \rangle \! \rangle \ge 0\) for all \(G\in B^\star _\mathrm{bs}(\Gamma _0)\). These evolutions, however, take place in different spaces. To relate them to each other we construct one more evolution, \(u_0 \mapsto u_t\), which takes place in the intersection of the mentioned Banach spaces. The aim is to show that \(k_{t} = u_t = q_t\) and thereby to get the desired property of \(k_{t}\). Thereafter, we prove the convergence mentioned above.

5.2.1 The Model

The function

$$\begin{aligned} \varphi _\sigma (x) = \exp \left( - \sigma |x|^2 \right) , \qquad \sigma >0, \quad x\in \mathbb {R}^d, \end{aligned}$$
(5.7)

has the following evident properties

$$\begin{aligned} \bar{\varphi }_\sigma := \int _{\mathbb {R}^d} \varphi _\sigma (x) d x < \infty , \quad \quad \varphi _\sigma (x) \le 1, \ \ \ x\in \mathbb {R}^d. \end{aligned}$$
(5.8)
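For the Gaussian profile (5.7) the first quantity in (5.8) is explicit, namely

$$\begin{aligned} \bar{\varphi }_\sigma = \int _{\mathbb {R}^d} e^{-\sigma |x|^2} d x = \left( \frac{\pi }{\sigma }\right) ^{d/2}, \end{aligned}$$

although only the finiteness of \(\bar{\varphi }_\sigma \) and the bound \(\varphi _\sigma \le 1\) are used in the sequel.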

The model we need is characterized by L as in (1.4) with \(E^{+}(x,\eta )\), cf. (1.5), replaced by

$$\begin{aligned} E^{+}_\sigma (x, \eta ) = \varphi _\sigma (x)E^{+}(x, \eta ) = \varphi _\sigma (x) \sum _{y\in \eta } a^{+} (x-y). \end{aligned}$$
(5.9)

5.2.2 The Evolution in \(\mathcal {K}_\alpha \)

For the new model (with \(E^{+}_\sigma \) as in (5.9)), the operator \( L^{\Delta ,\sigma }\) corresponding to \(L^\Delta \) takes the form, cf. (3.8) – (3.10) and (4.15) – (4.17),

$$\begin{aligned} L^{\Delta ,\sigma } = A^{\Delta ,\sigma } + B^{\Delta ,\sigma } = A^{\Delta ,\sigma }_b + B^{\Delta ,\sigma }_b. \end{aligned}$$
(5.10)

Here

$$\begin{aligned} A^{\Delta ,\sigma }= & {} A^{\Delta }_1 + A^{\Delta ,\sigma }_2, \qquad A^{\Delta ,\sigma }_b = A^{\Delta }_{1,b} + A^{\Delta ,\sigma }_2, \nonumber \\ B^{\Delta ,\sigma }= & {} B^{\Delta }_1 + B^{\Delta ,\sigma }_2, \qquad B^{\Delta ,\sigma }_b = B^{\Delta }_1 + B^{\Delta ,\sigma }_{2,b}, \end{aligned}$$
(5.11)

where \(A^{\Delta }_1\), \(B^{\Delta }_1\), and \(A^{\Delta }_{1,b}\) are the same as in (3.9), (3.10), and (4.15), respectively, and

$$\begin{aligned} \left( A^{\Delta ,\sigma }_2 k\right) (\eta )= & {} \sum _{x\in \eta } \varphi _\sigma (x) E^{+} (x, \eta \setminus x) k(\eta \setminus x), \nonumber \\ \left( B^{\Delta ,\sigma }_{2,b} k\right) (\eta )= & {} b|\eta | k(\eta ) + \int _{\mathbb {R}^d} \sum _{x\in \eta } \varphi _\sigma (x) a^{+} (x-y) k(\eta \setminus x\cup y) dy. \end{aligned}$$
(5.12)

Note that these \(A^{\Delta ,\sigma }_b\) and \(B^{\Delta ,\sigma }_b\) define the corresponding bounded operators acting from \(\mathcal {K}_{\alpha '}\) to \(\mathcal {K}_{\alpha }\) for each real \(\alpha > \alpha '\). As in (3.15) we then set

$$\begin{aligned} \mathcal {D}^{\Delta , \sigma }_\alpha = \big \{k\in \mathcal {K}_\alpha : L^{\Delta , \sigma } k \in \mathcal {K}_\alpha \big \}, \end{aligned}$$
(5.13)

and thus define the corresponding operator \((L^{\Delta , \sigma }_\alpha , \mathcal {D}^{\Delta , \sigma }_\alpha )\). Along with (3.17) we also consider

$$\begin{aligned} \frac{d}{dt} k_t = L^{\Delta , \sigma }_\alpha k_t, \qquad k_t|_{t=0}= k_0 \in \mathcal {D}^{\Delta , \sigma }_\alpha . \end{aligned}$$
(5.14)

By the literal repetition of the construction used in the proof of Lemma 4.5 one obtains the operators \(Q^\sigma _{\alpha \alpha '}(t; B^{\Delta , \sigma }_b)\), \((\alpha ', \alpha , t) \in \mathcal {A}(B^\Delta _b)\), see (4.25), the norm of which satisfies, cf. (4.26),

$$\begin{aligned} \big \Vert Q^\sigma _{\alpha \alpha '}(t; B^{\Delta , \sigma }_b)\big \Vert \le \frac{T\big (\alpha , \alpha ';B^\Delta _b \big ) }{T\big (\alpha , \alpha ';B^\Delta _b \big ) - t}, \end{aligned}$$
(5.15)

which is uniform in \(\sigma \).

Lemma 5.2

Let \(\alpha _1\) and \(\alpha _2\) be as in Lemma 4.8. Then for a given \(k_0\in \mathcal {K}_{\alpha _1}\), the unique solution of (5.14) in \(\mathcal {K}_{\alpha _2}\) is given by

$$\begin{aligned} k_t = Q^\sigma _{\alpha \alpha _1}\big (t; B^{\Delta , \sigma }_b \big )k_0, \qquad \alpha \in (\alpha _1, \alpha _2), \ \ t< T \big (\alpha _2, \alpha _1;B^\Delta _b \big ). \end{aligned}$$
(5.16)

Proof

Repeat the proof of Lemma 4.8. \(\square \)

5.2.3 The Evolution in \(\mathcal {U}_{\sigma ,\alpha }\)

For \(\varphi _\sigma \) as in (5.7) we set

$$\begin{aligned} e(\varphi _\sigma ; \eta ) = \prod _{x\in \eta } \varphi _\sigma (x), \qquad \eta \in \Gamma _0, \end{aligned}$$

and introduce the following Banach space. For \(u:\Gamma _0 \rightarrow \mathbb {R}\), we define the norm, cf. (2.14),

$$\begin{aligned} \Vert u\Vert _{\sigma , \alpha } = \mathop {\hbox {ess sup}}\limits _{\eta \in \Gamma _0} \frac{|u(\eta )|\exp (-\alpha |\eta |)}{e(\varphi _\sigma ; \eta )}. \end{aligned}$$
(5.17)

Thereafter, set

$$\begin{aligned} \mathcal {U}_{\sigma , \alpha } = \{u:\Gamma _0 \rightarrow \mathbb {R}: \Vert u\Vert _{\sigma , \alpha } < \infty \}. \end{aligned}$$

By (5.7) and (2.14) we have that

$$\begin{aligned} \Vert u\Vert _{ \alpha } \le \Vert u\Vert _{\sigma , \alpha }, \qquad u \in \mathcal {U}_{\sigma , \alpha }, \end{aligned}$$

which yields \(\mathcal {U}_{\sigma , \alpha } \hookrightarrow \mathcal {K}_{\alpha }\). Moreover, as in (2.17) we also have that \(\mathcal {U}_{\sigma , \alpha '} \hookrightarrow \mathcal {U}_{\sigma , \alpha }\) for each real \(\alpha > \alpha '\).

Now let us define the operator \(L^{\Delta ,\sigma }_{\alpha ,u}\) in \(\mathcal {U}_{\sigma ,\alpha }\), the action of which is described in (5.10) – (5.12) and the domain of which is, cf. (5.13),

$$\begin{aligned} \mathcal {D}^{\Delta ,\sigma }_{\alpha ,u} = \big \{ u \in \mathcal {U}_{\sigma ,\alpha }: L^{\Delta ,\sigma }u\in \mathcal {U}_{\sigma ,\alpha } \big \}. \end{aligned}$$
(5.18)

Then we consider

$$\begin{aligned} \frac{d}{dt} u_t = L^{\Delta ,\sigma }_{\alpha ,u} u_t, \qquad u_t|_{t=0} = u_0 \in \mathcal {D}^{\Delta ,\sigma }_{\alpha ,u}. \end{aligned}$$
(5.19)

Note that \(\mathcal {U}_{\sigma ,\alpha ''} \subset \mathcal {D}(L^{\Delta ,\sigma }_{\alpha ,u})\) for each \(\alpha '' <\alpha \), and

$$\begin{aligned} \big (L^{\Delta ,\sigma }_{\alpha ,u}, \mathcal {D}^{\Delta ,\sigma }_{\alpha ,u} \big ) \subset \big (L^{\Delta ,\sigma }_{\alpha }, \mathcal {D}^{\Delta ,\sigma }_{\alpha } \big ). \end{aligned}$$
(5.20)

Our aim now is to prove that the problem (5.19) with \(u_0 \in \mathcal {U}_{\sigma , \alpha _1}\) has a unique solution in \(\mathcal {U}_{\sigma , \alpha _2}\), where \(\alpha _1 < \alpha _2\) are as in Lemma 4.8. To this end we first construct the semigroup analogous to that obtained in Lemma 4.2. Thus, in the predual space \(\mathcal {G}_{\sigma , \alpha }\) equipped with the norm, cf. (4.2),

$$\begin{aligned} |G|_{\sigma ,\alpha }:= \int _{\Gamma _0} |G(\eta )| \exp (\alpha |\eta |) e(\varphi _\sigma ;\eta ) \lambda ( d\eta ) \end{aligned}$$

we define the action of \(A^\sigma _b\) as follows, cf. (4.5),

$$\begin{aligned} A^\sigma _b= & {} A_{1,b} + A^\sigma _2\nonumber \\ \big (A^\sigma _2 G \big )(\eta )= & {} \int _{\mathbb {R}^d} \varphi _\sigma (y)E^{+} (y,\eta ) G(\eta \cup y) d y, \end{aligned}$$

and \(A_{1,b}\) acts as in (4.5). Then we have, cf. (4.7),

$$\begin{aligned}&|A^\sigma _2 G|_{\sigma ,\alpha } \nonumber \\&\quad \le \int _{\Gamma _0} \left( \int _{\mathbb {R}^d} \varphi _\sigma (y)E^{+} (y,\eta ) |G(\eta \cup y)| d y \right) \exp (\alpha |\eta |) e(\varphi _\sigma ;\eta ) \lambda ( d\eta ) \nonumber \\&\quad = \int _{\Gamma _0}e^{-\alpha } \left( \sum _{x\in \eta } E^{+} (x, \eta \setminus x)\right) |G(\eta )| \exp (\alpha |\eta |) e(\varphi _\sigma ;\eta ) \lambda (d\eta ) \nonumber \\&\quad \le (e^{-\alpha }/\vartheta ) |A_{1,b} G|_{\sigma ,\alpha }. \end{aligned}$$

Now the existence of the substochastic semigroup \(\{S_{\sigma , \alpha }(t)\}_{t\ge 0}\) generated by \((A^\sigma _b, \mathcal {D}_{\sigma ,\alpha })\) follows as in Lemma 4.2. Here, cf. (4.6),

$$\begin{aligned} \mathcal {D}_{\sigma ,\alpha } := \big \{G \in \mathcal {G}_{\sigma ,\alpha }: E_b(\cdot ) G \in \mathcal {G}_{\sigma ,\alpha } \big \}. \end{aligned}$$

Let \(S^{\odot }_{\sigma , \alpha }(t)\) be the sun-dual to \(S_{\sigma , \alpha }(t)\), cf. (4.11). Then for each \(\alpha ' <\alpha \) and any \(u\in \mathcal {U}_{\sigma , \alpha '}\), the map

$$\begin{aligned}{}[0, +\infty )\ni t \mapsto S^{\odot }_{\sigma , \alpha }(t)u \in \mathcal {U}_{\sigma , \alpha } \end{aligned}$$

is continuous, see Proposition 4.3. For real \(\alpha '< \alpha \) and \(t>0\), let \(\Sigma ^{\sigma ,u}_{\alpha \alpha '}(t)\) be the restriction of \(S^{\odot }_{\sigma , \alpha }(t)\) to \(\mathcal {U}_{\sigma , \alpha '}\). Then the map

$$\begin{aligned}{}[0, +\infty ) \ni t \mapsto \Sigma ^{\sigma ,u}_{\alpha \alpha '}(t) \in \mathcal {L}(\mathcal {U}_{\sigma ,\alpha '}, \mathcal {U}_{\sigma ,\alpha }) \end{aligned}$$

is continuous and such that, cf. (4.23),

$$\begin{aligned} \big \Vert \Sigma ^{\sigma ,u}_{\alpha \alpha '}(t) \big \Vert \le 1, \qquad t\ge 0. \end{aligned}$$
(5.21)

Now we define \((B^{\Delta ,\sigma }_b)_{\alpha \alpha '}\) which acts from \(\mathcal {U}_{\sigma ,\alpha '}\) to \(\mathcal {U}_{\sigma ,\alpha }\) according to (5.11) and (5.12). Then its norm satisfies

$$\begin{aligned} \big \Vert (B^{\Delta ,\sigma }_b)_{\alpha \alpha '} \big \Vert \le \frac{\langle a^{+} \rangle + b+ \langle a^{-} \rangle e^{\alpha }}{e(\alpha - \alpha ')}. \end{aligned}$$
(5.22)

In proving this we take into account that \(\varphi _\sigma (x) \le 1\) and repeat the arguments used in obtaining (4.18).

For real \(\alpha _2 > \alpha _1 > - \log \vartheta \), we take \(\alpha \in (\alpha _1, \alpha _2]\) and then pick \(\delta < \alpha - \alpha _1\) as in the proof of Lemma 4.5. Next, for \(l\in \mathbb {N}\) we divide \([\alpha _1,\alpha ]\) into subintervals according to (4.29) and take \((t,t_1, \ldots , t_l)\in \mathcal {T}_l\), see (4.28). Then define, cf. (4.30),

$$\begin{aligned}&\Pi ^{l,\sigma }_{\alpha \alpha _1}(t,t_1, \ldots , t_l) = \Sigma ^{\sigma ,u}_{\alpha \alpha ^{2l}}(t-t_1) \big (B^{\Delta ,\sigma }_b \big )_{\alpha ^{2l}\alpha ^{2l-1}}\times \cdots \\&\quad \times \Sigma ^{\sigma ,u}_{\alpha ^{3}\alpha ^{2}} (t_{l-1}-t_{l}) \big (B^{\Delta ,\sigma }_b \big )_{\alpha ^{2}\alpha ^{1}} \Sigma ^{\sigma ,u}_{\alpha ^1 \alpha _1} (t_l). \end{aligned}$$

Thereafter, for \(n\in \mathbb {N}\) we set, cf. (4.33),

$$\begin{aligned}&U^{(n)}_{\alpha \alpha _1}(t) = \Sigma ^{\sigma ,u}_{\alpha \alpha _1}(t)\\&\qquad \qquad \qquad + \sum _{l=1}^n \int _0^t \int _0^{t_1} \cdots \int _0^{t_{l-1}} \Pi ^{l,\sigma }_{\alpha \alpha _1} (t,t_1, \ldots , t_l) d t_l \cdots d t_1. \end{aligned}$$

By means of (5.21) and (5.22) we then prove that the sequence \(\{U^{(n)}_{\alpha \alpha _1}(t)\}_{n\in \mathbb {N}}\) converges in \(\mathcal {L}(\mathcal {U}_{\sigma ,\alpha _1}, \mathcal {U}_{\sigma ,\alpha })\), uniformly on [0, T], \(T< T(\alpha , \alpha _1;B^\Delta _b)\), see (4.24) and (4.20). The limit \( U_{\alpha \alpha _1}(t) \in \mathcal {L}(\mathcal {U}_{\sigma ,\alpha _1}, \mathcal {U}_{\sigma ,\alpha })\) has the property, cf. (4.27),

$$\begin{aligned} \frac{d}{dt} U_{\alpha _2\alpha _1}(t) = \left( \big (A^{\Delta ,\sigma }_b \big )_{\alpha _2\alpha } + \big (B^{\Delta ,\sigma }_b \big )_{\alpha _2\alpha } \right) U_{\alpha \alpha _1}(t), \end{aligned}$$

where \((A^{\Delta ,\sigma }_b)_{\alpha _2\alpha }\in \mathcal {L}(\mathcal {U}_{\sigma ,\alpha }, \mathcal {U}_{\sigma ,\alpha _2})\) is defined in (5.11) and (5.12), analogously to (5.22). Note that

$$\begin{aligned} \forall u \in \mathcal {U}_{\sigma , \alpha } \quad L^{\Delta ,\sigma }_{\alpha _2,u} u = \left( \big (A^{\Delta ,\sigma }_b \big )_{\alpha _2\alpha } + \big (B^{\Delta ,\sigma }_b \big )_{\alpha _2\alpha } \right) u, \end{aligned}$$
(5.23)

see (5.18). Now we can state the following analog of Lemma 4.8.

Lemma 5.3

Let \(\alpha _2 > \alpha _1 > - \log \vartheta \) be as in Lemma 4.8. Then the problem (5.19) with \(u_0\in \mathcal {U}_{\sigma , \alpha _1}\) has a unique solution \(u_t \in \mathcal {U}_{\sigma , \alpha _2}\) on the time interval \([0, T(\alpha _2, \alpha _1;B^\Delta _b))\).

Proof

Fix \(T<T(\alpha _2, \alpha _1;B^\Delta _b)\) and find \(\alpha \in (\alpha _1, \alpha _2)\) such that also \(T<T(\alpha , \alpha _1;B^\Delta _b)\). Then, cf. (4.37),

$$\begin{aligned} u_t := U_{\alpha \alpha _1}(t) u_0 \end{aligned}$$
(5.24)

is the solution in question, which can be checked by means of (5.23). Its uniqueness can be proved by the literal repetition of the corresponding arguments used in the proof of Lemma 4.8. \(\square \)

Corollary 5.4

Let \(k_t\) be the solution of the problem (5.14) with \(k_0 \in \mathcal {U}_{\sigma ,\alpha _1}\) mentioned in Lemma 5.2. Then \(k_t\) coincides with the solution mentioned in Lemma 5.3.

Proof

Since \((L^{\Delta ,\sigma }_{\alpha }, \mathcal {D}^{\Delta ,\sigma }_{\alpha })\) is an extension of \((L^{\Delta ,\sigma }_{\alpha ,u}, \mathcal {D}^{\Delta ,\sigma }_{\alpha ,u})\), see (5.20), and the embedding \(\mathcal {U}_{\sigma ,\alpha }\hookrightarrow \mathcal {K}_\alpha \) is continuous, the solution as in (5.24) with \(u_0= k_0\) also satisfies (5.14), and hence coincides with \(k_t\) in view of the uniqueness stated in Lemma 5.2. \(\square \)

5.2.4 The Evolution in \(\mathcal {G}_{\omega }\)

We recall that the space \(\mathcal {G}_{\alpha }\) was introduced in (4.1), (4.2), where we used it as a predual space to \(\mathcal {K}_{\alpha }\). Now we employ \(\mathcal {G}_{\alpha }\) to get the positive definiteness mentioned at the beginning of this subsection. Here, however, we write \(\mathcal {G}_{\omega }\) to indicate that it is not used here as a predual space.

Let \(L^{\Delta , \sigma }\) be as in (5.10). For \(\omega \in \mathbb {R}\), we set, cf. (5.13) and (5.18),

$$\begin{aligned} \mathcal {D}^{\Delta ,\sigma }_\omega = \big \{ q \in \mathcal {G}_\omega : L^{\Delta ,\sigma }q \in \mathcal {G}_\omega \big \}. \end{aligned}$$

Then we define the corresponding operator \((L^{\Delta ,\sigma }_\omega , \mathcal {D}^{\Delta ,\sigma }_\omega )\) and consider the following Cauchy problem

$$\begin{aligned} \frac{d}{dt} q_t = L^{\Delta ,\sigma }_\omega q_t , \qquad q_t|_{t=0} = q_0 \in \mathcal {D}^{\Delta ,\sigma }_\omega . \end{aligned}$$
(5.25)

As above, one can show that \(\mathcal {G}_{\omega '} \subset \mathcal {D}^{\Delta ,\sigma }_\omega \) for each \(\omega ' > \omega \). By (5.17) and (4.2) for \(u\in \mathcal {U}_{\sigma , \alpha }\) we have

$$\begin{aligned} |u|_\omega\le & {} \Vert u\Vert _{\sigma ,\alpha } \int _{\Gamma _0} \exp ((\omega + \alpha )|\eta |)e(\varphi _\sigma ;\eta ) \lambda (d\eta )\nonumber \\\le & {} \Vert u\Vert _{\sigma ,\alpha } \exp \left( \bar{\varphi }_\sigma e^{\omega + \alpha }\right) , \end{aligned}$$
(5.26)

see also (5.8). Hence \(\mathcal {U}_{\sigma , \alpha } \hookrightarrow \mathcal {G}_\omega \) for each \(\omega \) and \(\alpha \). Like in (5.20) we then get

$$\begin{aligned} \big (L^{\Delta ,\sigma }_{\alpha ,u}, \mathcal {D}^{\Delta ,\sigma }_{\alpha ,u}\big ) \subset \big (L^{\Delta ,\sigma }_{\omega }, \mathcal {D}^{\Delta ,\sigma }_{\omega }\big ). \end{aligned}$$
(5.27)
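The second inequality in (5.26) is based on the following standard formula for the Lebesgue–Poisson measure, which we recall for convenience: for an integrable \(f:\mathbb {R}^d \rightarrow [0,+\infty )\),

$$\begin{aligned} \int _{\Gamma _0} e(f;\eta )\, \lambda (d\eta ) = \exp \left( \int _{\mathbb {R}^d} f(x)\, dx\right) , \qquad e(f;\eta ) := \prod _{x\in \eta } f(x), \end{aligned}$$

applied with \(f = e^{\omega +\alpha }\varphi _\sigma \), which gives \(\exp (\bar{\varphi }_\sigma e^{\omega + \alpha })\).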

Lemma 5.5

Assume that the problem (5.25) with \(\omega > 0\) and \(q_0 \in \mathcal {G}_{\omega '}\), \(\omega '> \omega \), has a solution, \(q_t\in \mathcal {G}_\omega \), on some time interval \([0, T(\omega ', \omega ))\). Then this solution is unique.

Proof

Set

$$\begin{aligned} w_t(\eta ) = (-1)^{|\eta |} q_t (\eta ), \end{aligned}$$

which is an isometry on \(\mathcal {G}_\omega \). Then \(q_t\) solves (5.25) if and only if \(w_t\) solves the following equation

$$\begin{aligned} \frac{d}{dt} w_t(\eta )= & {} - E(\eta ) w_t(\eta ) + \int _{\mathbb {R}^d} E^{-}(y,\eta ) w_t(\eta \cup y)d y\nonumber \\&- \sum _{x\in \eta } \varphi _\sigma (x) E^{+} (x, \eta \setminus x) w_t (\eta \setminus x) \nonumber \\&+ \int _{\mathbb {R}^d}\sum _{x\in \eta } \varphi _\sigma (x) a^{+} (x-y) w_t (\eta \setminus x \cup y) dy. \end{aligned}$$
(5.28)

Set

$$\begin{aligned} \mathcal {D}_\omega =\{ w \in \mathcal {G}_\omega : E(\cdot ) w \in \mathcal {G}_\omega \}. \end{aligned}$$

By Proposition 4.1 we prove that the operator defined by the first two summands in (5.28) with domain \( \mathcal {D}_\omega \) generates a substochastic semigroup, \(\{V_\omega (t)\}_{t\ge 0}\), acting in \(\mathcal {G}_\omega \). Indeed, in this case the condition analogous to that in (4.8) takes the form, cf. (4.9),

$$\begin{aligned}- & {} \int _{\Gamma _0} E(\eta ) w(\eta ) \exp (\omega |\eta |) \lambda ( d\eta )\\&+\,\, r^{-1}e^{-\omega } \int _{\Gamma _0} E^{-}(\eta ) w(\eta ) \exp (\omega |\eta |) \lambda ( d\eta ) \le 0, \end{aligned}$$

which certainly holds for each \(\omega >0\) and an appropriate \(r<1\). For each \(\omega '' \in (0, \omega )\), we have that \(\mathcal {G}_\omega \hookrightarrow \mathcal {G}_{\omega ''}\), and the last two summands in (5.28) define a bounded operator, \(W_{\omega ''\omega }: \mathcal {G}_\omega \rightarrow \mathcal {G}_{\omega ''}\), the norm of which can be estimated as follows, cf. (5.3),

$$\begin{aligned} \Vert W_{\omega ''\omega } \Vert \le \frac{(e^{\omega } +1) \langle a^{+} \rangle }{e(\omega - \omega '')}. \end{aligned}$$
(5.29)

Assume now that (5.28) has two solutions corresponding to the same initial condition \(w_0 (\eta ) = (-1)^{|\eta |} q_0(\eta )\). Let \(v_t\) be their difference. Then it solves the following equation, cf. (4.38),

$$\begin{aligned} v_t = \int _0^t V_{\omega ''}(t-s) W_{\omega ''\omega } v_s d s, \end{aligned}$$
(5.30)

where \(v_t\) on the left-hand side is considered as an element of \(\mathcal {G}_{\omega ''}\) and \(t>0\) will be chosen later. Now for a given \(n\in \mathbb {N}\), we set \(\epsilon = (\omega - \omega '')/n\) and then \(\omega ^l := \omega - l \epsilon \), \(l=0, \ldots , n\). Thereafter, we iterate (5.30) and get

$$\begin{aligned}&v_t = \int _0^t \int _0^{t_1} \cdots \int _0^{t_{n-1}} V_{\omega ''}(t-t_1) W_{\omega ''\omega ^{n-1}} V_{\omega ^{n-1}} (t_1-t_2) \times \cdots \\&\qquad \quad \times \, W_{\omega ^{2}\omega ^{1}} V_{\omega ^1} (t_{n-1} - t_n) W_{\omega ^{1} \omega } v_{t_n} d t_n \cdots d t_1. \end{aligned}$$

Similarly as in (4.39), by (5.29) this yields the following estimate

$$\begin{aligned} |v_t|_{\omega ''} \le \frac{1}{n!} \left( \frac{n}{e}\right) ^n \left( \frac{t\langle a^{+} \rangle (e^{\omega } +1)}{\omega - \omega ''}\right) ^n \sup _{s\in [0,t]} |v_s|_{\omega }. \end{aligned}$$

The latter implies that \(v_t =0\) for \(t < (\omega - \omega '')/ \langle a^{+} \rangle (e^{\omega } +1)\). To prove that \(v_t =0\) for all t of interest one has to repeat the above procedure an appropriate number of times. \(\square \)

Recall that each \(\mathcal {U}_{\sigma , \alpha }\) is continuously embedded into each \(\mathcal {G}_\omega \), see (5.26).

Corollary 5.6

For each \(\omega >0\), the problem (5.25) with \(q_0 \in \mathcal {U}_{\sigma , \alpha _0}\) has a unique solution \(q_t\) which coincides with the solution \(u_t\in \mathcal {U}_{\sigma , \alpha }\) mentioned in Lemma 5.3.

Proof

By (5.27) \(u_t\) is a solution of (5.25). Its uniqueness follows by Lemma 5.5. \(\square \)

5.3 Local Evolution

In this subsection we pass to the so-called local evolution of states of the auxiliary model (5.10), (5.11). For this evolution, the corresponding ‘correlation function’ \(q_t \in \mathcal {G}_\omega \) has the positive definiteness in question. Then we apply Corollaries 5.4 and 5.6 to get the same for the evolution in \(\mathcal {K}_\alpha \). Thereafter, we pass to the limit and get the proof of Lemma 4.9.

5.3.1 The Evolution of Densities

In view of (2.2), each state with the property \(\mu (\Gamma _0)=1\) can be redefined as a probability measure on \(\mathcal {B}(\Gamma _0)\), cf. Remark 2.1. Then the Fokker–Planck equation (1.3) can be studied directly, see [23, Eq. (2.8)]. Its solvability is described in [23, Theorem 2.2], which, in particular, states that the solution is absolutely continuous with respect to the Lebesgue-Poisson measure \(\lambda \) if \(\mu _0\) has the same property. In view of this we write the corresponding problem for the density

$$\begin{aligned} R_t := \frac{d \mu _t}{d \lambda }, \end{aligned}$$
(5.31)

see also [23, Eq. (2.16)], and obtain

$$\begin{aligned} \frac{d}{dt} R_t (\eta ) = (L^{\dagger ,\sigma } R_t)(\eta ), \quad R_t|_{t=0} = R_0, \end{aligned}$$
(5.32)

where

$$\begin{aligned} (L^{\dagger ,\sigma } R)(\eta ):= & {} - \Psi _\sigma (\eta ) R (\eta ) + \sum _{x\in \eta } \varphi _\sigma (x) E^{+} (x, \eta \setminus x) R (\eta \setminus x) \nonumber \\+ & {} \int _{\mathbb {R}^d} \left( m + E^{-} (x, \eta )\right) R (\eta \cup x) d x, \end{aligned}$$
(5.33)

and

$$\begin{aligned} \Psi _\sigma (\eta )= E(\eta ) + \int _{\mathbb {R}^d} \varphi _\sigma (x) E^{+} (x, \eta ) dx. \end{aligned}$$

We solve (5.32) in the Banach space \(\mathcal {G}_0 = L^1(\Gamma _0, d\lambda )\), cf. (4.1). For \(n\in \mathbb {N}\) we denote by \(\mathcal {G}_{0,n}\) the subset of \(\mathcal {G}_0\) consisting of all those \(R:\Gamma _0 \rightarrow \mathbb {R}\) for which

$$\begin{aligned} \int _{\Gamma _0} |\eta |^n \left| R(\eta )\right| \lambda ( d\eta ) < \infty . \end{aligned}$$

Let also \(\mathcal {G}_\omega ^+\) stand for the cone of positive elements of \(\mathcal {G}_\omega \). Set

$$\begin{aligned} \mathcal {D}_0 = \{ R\in \mathcal {G}_0: \Psi _\sigma R \in \mathcal {G}_0\}. \end{aligned}$$
(5.34)

Then the relevant part of [23, Theorem 2.2] can be formulated as follows.

Proposition 5.7

The closure in \(\mathcal {G}_0\) of the operator \((L^{\dagger , \sigma }, \mathcal {D}_0)\) defined in (5.33) and (5.34) generates a stochastic semigroup \(\{S^{\dagger ,\sigma }(t)\}_{t\ge 0}:= S^{\dagger ,\sigma }\) of bounded operators in \(\mathcal {G}_0\), which leaves invariant each \(\mathcal {G}_{0,n}\), \(n\in \mathbb {N}\). Moreover, for each \(\beta '>0\) and \(\beta \in (0, \beta ')\), \(R\in \mathcal {G}_{\beta '}^{ +}\) implies \(S^{\dagger ,\sigma } (t) R\in \mathcal {G}_{\beta }^{ +}\) for all \(t < T(\beta ' , \beta )\), where \(T(\beta ' , \beta )= +\infty \) for \(\langle a^{+}\rangle = 0\), and

$$\begin{aligned} T(\beta ' , \beta ) = (\beta ' - \beta )e^{-\beta '}/\langle a^{+}\rangle , \qquad \mathrm{for} \ \ \langle a^{+}\rangle >0. \end{aligned}$$
(5.35)

Let now \(\mu _0\) be the initial state as in Theorem 3.3. Then for each \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\), the projection \(\mu ^\Lambda \) is absolutely continuous with respect to \(\lambda ^\Lambda \), see (2.7). For this \(\mu _0\), and for \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) and \(N \in \mathbb {N}\), we set, see (5.31),

$$\begin{aligned} R^{\Lambda }_0 (\eta ) = \frac{d \mu ^{\Lambda }}{ d \lambda ^{\Lambda }}(\eta ) \mathbb {I}_{\Gamma _{\Lambda }} (\eta ), \qquad R^{\Lambda , N}_0 (\eta ) = R^{\Lambda }_0 (\eta ) I_{N} (\eta ), \ \ \eta \in \Gamma _0. \end{aligned}$$
(5.36)

Here \(I_{N}\) and \(\mathbb {I}_{\Gamma _\Lambda }\) are the indicator functions of the sets \(\{\eta \in \Gamma _0: |\eta |\le N\}\), \(N\in \mathbb {N}\), and \(\Gamma _\Lambda \), respectively. Clearly,

$$\begin{aligned} \forall \beta >0 \qquad R^{\Lambda , N}_0\in \mathcal {G}^+_\beta . \end{aligned}$$
(5.37)
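Indeed, \(R^{\Lambda , N}_0\) is nonnegative and vanishes off the set \(\{\eta : |\eta |\le N\}\); hence, assuming (as suggested by (5.43) and \(\mathcal {G}_0 = L^1(\Gamma _0, d\lambda )\)) that the norm of \(\mathcal {G}_\beta \) is \(\Vert R\Vert _{\mathcal {G}_\beta } = \int _{\Gamma _0} e^{\beta |\eta |} |R(\eta )| \lambda (d\eta )\), one gets

$$\begin{aligned} \int _{\Gamma _0} e^{\beta |\eta |} R^{\Lambda , N}_0 (\eta ) \lambda (d \eta ) \le e^{\beta N} \int _{\Gamma _0} R^{\Lambda }_0 (\eta ) \lambda (d \eta ) = e^{\beta N} \mu ^{\Lambda } (\Gamma _\Lambda ) < \infty . \end{aligned}$$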

Set

$$\begin{aligned} R^{\Lambda , N}_t = S^{\dagger ,\sigma }(t) R^{\Lambda , N}_0 , \quad t>0, \end{aligned}$$
(5.38)

where \(S^{\dagger ,\sigma }\) is the semigroup as in Proposition 5.7. Then also \(R^{\Lambda , N}_t \in \mathcal {G}^+_0\) for all \(t>0\).

For a given \(G\in B_\mathrm{bs}(\Gamma _0)\), consider \(F=KG\), cf. (2.4). Since \(G(\xi ) =0\) for all \(\xi \) such that \(|\xi |>N(G)\), see Definition 2.2, we have \(F\in \mathcal {F}_\mathrm{cyl}(\Gamma )\) and

$$\begin{aligned} |F (\gamma )| \le (1+|\gamma |)^{N(G)} C(G), \qquad \gamma \in \Gamma _0, \end{aligned}$$

for some \( C(G)>0\). Since \(R^{\Lambda , N}_0\) vanishes for \(|\eta |> N\), it lies in each \(\mathcal {G}_{0,n}\), and by Proposition 5.7 so does \(R^{\Lambda , N}_t\); combined with the latter estimate this yields

$$\begin{aligned} \left| \big \langle \! \big \langle KG , R^{\Lambda ,N}_t \big \rangle \! \big \rangle \right| < \infty . \end{aligned}$$
(5.39)
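For completeness, we indicate one way of obtaining the polynomial bound on \(F=KG\) used above, assuming, as is standard, that the \(K\)-transform in (2.4) is given by \((KG)(\gamma ) = \sum _{\xi \subset \gamma , |\xi |<\infty } G(\xi )\). Since \(G\) is bounded and vanishes for \(|\xi |>N(G)\), for \(\gamma \in \Gamma _0\) one may estimate

$$\begin{aligned} |(KG)(\gamma )| \le \sum _{k=0}^{N(G)} \binom{|\gamma |}{k} \sup _{\xi \in \Gamma _0}|G(\xi )| \le \sum _{k=0}^{N(G)} \binom{N(G)}{k} |\gamma |^{k} \sup _{\xi \in \Gamma _0}|G(\xi )| = (1+|\gamma |)^{N(G)} C(G), \end{aligned}$$

with \(C(G) = \sup _{\xi \in \Gamma _0}|G(\xi )|\); here we used \(\binom{n}{k} \le n^k/k! \le \binom{N}{k} n^k\), valid for \(k\le N\) and \(n\ge 0\).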

5.3.2 The Evolution of Local Correlation Functions

For a given \(\mu \in \mathcal {P}_\mathrm{sP}\), the correlation function \(k_\mu \) and the local densities \(R^\Lambda _\mu \), \(\Lambda \in \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\), see (2.8), are related to each other by (2.9). In the first formula of (5.36) we extend \(R_0^\Lambda \) to the whole of \(\Gamma _0\). Then the corresponding integral as in (2.9) coincides with \(k_{\mu _0}\) only on \(\Gamma _\Lambda \). The truncation made in the second formula in (5.36) diminishes \(R_0^\Lambda \); its purpose is to ensure (5.37). Thus, with a certain abuse of terminology, we call

$$\begin{aligned} q^{\Lambda ,N}_0 (\eta ) = \int _{\Gamma _0} R^{\Lambda ,N}_0 (\eta \cup \xi ) \lambda (d \xi ) \end{aligned}$$
(5.40)

a local correlation function. The evolution \(q^{\Lambda , N}_0 \mapsto q^{\Lambda , N}_t\) can be obtained from (5.38) by setting

$$\begin{aligned} q^{\Lambda ,N}_t (\eta ) = \int _{\Gamma _0} R^{\Lambda ,N}_t (\eta \cup \xi ) \lambda (d\xi ), \qquad t\ge 0. \end{aligned}$$
(5.41)

However, so far this can only be used in a weak sense based on (5.39). Note that for \(G\in B^{\star }_\mathrm{bs}(\Gamma _0)\), cf. (2.11), we have

$$\begin{aligned} \big \langle \! \big \langle G , q^{\Lambda ,N}_t \big \rangle \! \big \rangle = \big \langle \! \big \langle KG , R^{\Lambda ,N}_t \big \rangle \! \big \rangle \ge 0, \end{aligned}$$
(5.42)

since \(R^{\Lambda ,N}_t \in \mathcal {G}^+_0\). To place the evolution \(q^{\Lambda , N}_0 \mapsto q^{\Lambda , N}_t\) into an appropriate Banach space we use the concluding part of Proposition 5.7 and the following fact

$$\begin{aligned} \int _{\Gamma _0} e^{\omega |\eta | } q_t^{\Lambda ,N}(\eta ) \lambda (d \eta ) = \int _{\Gamma _0} \left( 1+ e^{\omega }\right) ^{|\eta |} R_t^{\Lambda ,N}(\eta ) \lambda (d \eta ), \end{aligned}$$
(5.43)

which can be obtained by means of (2.13). Since \(R^{\Lambda , N}_0 \in \mathcal {G}_{\beta '}\) for any \(\beta '>0\), see (5.37), we can take \(\beta ' = \beta +1\), which maximizes \(T(\beta ', \beta )\) given in (5.35). Then for each \(\beta >0\), we have that

$$\begin{aligned} R^{\Lambda , N}_t \in \mathcal {G}_{\beta }, \qquad \mathrm{for} \ \ t < \tau (\beta ):=\frac{e^{-\beta }}{e\langle a^{+} \rangle }. \end{aligned}$$
(5.44)

Hence, \(q_t^{\Lambda ,N} \in \mathcal {G}_\omega \) whenever \(R_t^{\Lambda ,N} \in \mathcal {G}_\beta \) with \(\beta \) such that \(e^{\beta } = 1+ e^{\omega }\), cf. (5.43). Moreover, for such \(\omega \) and \(\beta \) the right-hand side of (5.41) defines a continuous map from \(\mathcal {G}_\beta \) to \(\mathcal {G}_\omega \).
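Both facts used in the preceding paragraph admit short verifications. Assuming that (2.13) yields the standard Minlos-type identity for the Lebesgue–Poisson measure, namely \(\int _{\Gamma _0}\int _{\Gamma _0} F(\eta \cup \xi ) G(\eta ) \lambda (d\xi ) \lambda (d\eta ) = \int _{\Gamma _0} F(\zeta ) \sum _{\eta \subset \zeta } G(\eta ) \lambda (d\zeta )\), the identity (5.43) follows from

$$\begin{aligned} \int _{\Gamma _0}\int _{\Gamma _0} e^{\omega |\eta |} R_t^{\Lambda ,N}(\eta \cup \xi ) \lambda (d\xi ) \lambda (d\eta )&= \int _{\Gamma _0} \bigg ( \sum _{\eta \subset \zeta } e^{\omega |\eta |}\bigg ) R_t^{\Lambda ,N}(\zeta ) \lambda (d\zeta ), \\ \sum _{\eta \subset \zeta } e^{\omega |\eta |}&= \sum _{k=0}^{|\zeta |} \binom{|\zeta |}{k} e^{\omega k} = \left( 1+e^{\omega }\right) ^{|\zeta |}. \end{aligned}$$

As for the choice \(\beta ' = \beta +1\): for fixed \(\beta \), one has \(\frac{d}{d\beta '} (\beta ' -\beta ) e^{-\beta '} = \left( 1- (\beta ' - \beta )\right) e^{-\beta '}\), which vanishes at \(\beta ' = \beta +1\); the corresponding maximal value of (5.35) is \(e^{-(\beta +1)}/\langle a^{+}\rangle = \tau (\beta )\), in agreement with (5.44).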

Lemma 5.8

Given \(\omega _1 > 0\) and \(\omega _2 > \omega _1\), let \(\beta _i\) be such that \(e^{\beta _i} = e^{\omega _i} +1\), \(i=1,2\). Then \(q^{\Lambda ,N}_t\) is continuously differentiable in \(\mathcal {G}_{\omega _1}\) on \([0, \tau (\beta _2))\) and the following holds

$$\begin{aligned} \frac{d}{dt} q^{\Lambda ,N}_t = L^{\Delta , \sigma }_{\omega _1} q^{\Lambda ,N}_t. \end{aligned}$$
(5.45)

Proof

By the mentioned continuity of the map in (5.41), the continuous differentiability of \(q^{\Lambda ,N}_t\) follows from the corresponding property of \(R^{\Lambda ,N}_t\in \mathcal {G}_{\beta _2}\), which holds in view of (5.38). Then the following holds

$$\begin{aligned} \left( \frac{d}{dt} q^{\Lambda ,N}_t \right) (\eta ) = \int _{\Gamma _0} \left( L^{\dagger ,\sigma }_{\beta _1} R^{\Lambda ,N}_t\right) (\eta \cup \xi ) \lambda (d\xi ), \end{aligned}$$
(5.46)

where \(L^{\dagger ,\sigma }_{\beta _1}\) is the trace of \(L^{\dagger ,\sigma }\) in \(\mathcal {G}_{\beta _1}\). We define the action of \(\widehat{L}^{\sigma } = A^\sigma + B^\sigma \) in such a way that

$$\begin{aligned} \big \langle \! \big \langle \widehat{L}^{\sigma } G, k \big \rangle \! \big \rangle = \big \langle \! \big \langle G, L^{\Delta ,\sigma } k \big \rangle \! \big \rangle , \end{aligned}$$

where \(L^{\Delta ,\sigma }\) acts as in (5.10) and (5.11). Then \(A^\sigma \) acts as in (4.5) with \(E^{+} (y,\eta )\) replaced by \(\varphi _\sigma (y) E^{+} (y,\eta )\), and \(B^\sigma \) acts as in (5.2) with \(a^{+}(x-y)\) multiplied by \(\varphi _\sigma (x)\). Let \(G:\Gamma _0\rightarrow \mathbb {R}\) be bounded and continuous. Then for some \(C>0\) we have, see (2.4),

$$\begin{aligned} \big |\widehat{L}^\sigma G(\eta ) \big | \le |\eta |^2 C \sup _{\xi \in \Gamma _0}|G(\xi )|, \quad \big |K(\widehat{L}^\sigma G)(\eta ) \big | \le |\eta |^2 2^{|\eta |} C \sup _{\xi \in \Gamma _0}|G(\xi )|, \end{aligned}$$

and hence the integrals below are well defined, which allows us to write

$$\begin{aligned} \big \langle \! \big \langle \widehat{L}^{\sigma } G, q_t^{\Lambda ,N} \big \rangle \! \big \rangle = \big \langle \! \big \langle G, L^{\Delta ,\sigma }_{\omega _1}q_t^{\Lambda ,N} \big \rangle \!\big \rangle , \end{aligned}$$
(5.47)

where \(\omega _1\) and \(q_t^{\Lambda ,N}\) are as in (5.46). On the other hand, by (5.46) we have

$$\begin{aligned} \big \langle \! \big \langle \widehat{L}^{\sigma } G, q_t^{\Lambda ,N} \big \rangle \! \big \rangle= & {} \big \langle \! \big \langle K \widehat{L}^{\sigma } G, R_t^{\Lambda ,N} \big \rangle \! \big \rangle \nonumber \\= & {} \big \langle \! \big \langle K G, L^{\dagger ,\sigma }_{\beta _1} R_t^{\Lambda ,N} \big \rangle \! \big \rangle = \big \langle \! \big \langle G, \frac{d}{dt} q^{\Lambda ,N}_t \big \rangle \! \big \rangle . \end{aligned}$$
(5.48)

Since (5.47) and (5.48) hold true for all bounded continuous functions \(G\), the expressions on both sides of (5.45) are equal to each other, which completes the proof. \(\square \)

Corollary 5.9

Let \(k_t^{\Lambda ,N}\in \mathcal {K}_{\alpha _2}\) be the solution of the problem (5.14) with \(k^{\Lambda ,N}_0 = q^{\Lambda ,N}_0\in \mathcal {K}_{\alpha _1}\), see Lemma 5.2. Then for each \(G\in B_\mathrm{bs}^{\star }(\Gamma _0)\) and

$$\begin{aligned} t < \min \{T(\alpha _2, \alpha _1;B^\Delta ); 1/ (e \langle a^{+} \rangle )\}, \end{aligned}$$

see (5.44), we have that

$$\begin{aligned} \big \langle \! \big \langle G, k_t^{\Lambda ,N} \big \rangle \! \big \rangle \ge 0. \end{aligned}$$
(5.49)

Proof

By (5.36) and (5.40) we have that \(q_0^{\Lambda ,N}\in \mathcal {U}_{\sigma , \alpha _1}\) (this is the very reason for considering such local evolutions). Let \(u_t\) be the solution as in Lemma 5.3 with this initial condition. Then by Corollaries 5.4 and 5.6 it follows that \(k^{\Lambda ,N}_t = u_t = q_{t}^{\Lambda ,N}\) for the mentioned values of t. Thus, the validity of (5.49) follows from (5.42). \(\square \)

5.4 Taking the Limits

Note that (5.49) holds for

$$\begin{aligned} k_t^{\Lambda ,N} = Q^\sigma _{\alpha \alpha _1} \big (t;B^{\Delta ,\sigma }_b \big ) q_0^{\Lambda ,N}, \end{aligned}$$

with \(\alpha \in (\alpha _1, \alpha _2)\) dependent on t, see (5.16). In this subsection, we first pass in (5.49) to the limit \(\sigma \downarrow 0\); thereafter, we remove the locality imposed in (5.36).

Lemma 5.10

Let \(k_t\) and \(k_t^\sigma \) be the solutions of the problems (3.17) and (5.14), respectively, with \(k_t|_{t=0}= k^\sigma _t|_{t=0} = k_0 \in \mathcal {K}_{\alpha _0}\), \(\alpha _0 > -\log \vartheta \). Then for each \(\alpha > \alpha _0\) there exists \(\widetilde{T}=\widetilde{T}(\alpha , \alpha _0) < T(\alpha , \alpha _0;B^\Delta _b)\) such that for each \(G\in B_\mathrm{bs}(\Gamma _0)\) and \(t\in [0, \widetilde{T}]\) the following holds

$$\begin{aligned} \lim _{\sigma \downarrow 0} \langle \! \langle G, k_t^{\sigma } \rangle \!\rangle = \langle \! \langle G, k_t \rangle \!\rangle . \end{aligned}$$
(5.50)

Proof

Take \(\alpha _2 \in (\alpha _0, \alpha )\) and \(\alpha _1 \in (\alpha _0, \alpha _2)\). Thereafter, take

$$\begin{aligned} \widetilde{T} < \min \left\{ T \big (\alpha _1, \alpha _0;B^\Delta _b \big ); T \big (\alpha , \alpha _2;B^\Delta _b \big ) \right\} . \end{aligned}$$
(5.51)

For \(t\le \widetilde{T}\), by (4.27), (4.37), (5.11), and (5.16) we then have, see also (5.24),

$$\begin{aligned} Q_{\alpha \alpha _0}\big (t;B^\Delta _b \big ) k_0= & {} Q^\sigma _{\alpha \alpha _0} (t) k_0 + M_\sigma (t) + N_\sigma (t),\\ M_\sigma (t):= & {} \int _0^t Q_{\alpha \alpha _2} \big (t-s;B^\Delta _b \big ) \left( \big (A_2^{\Delta } \big )_{\alpha _2 \alpha _1} - \big (A_2^{\Delta ,\sigma } \big )_{\alpha _2 \alpha _1}\right) k^\sigma _s d s \\ N_\sigma (t):= & {} \int _0^t Q_{\alpha \alpha _2} \big (t-s;B^\Delta _b \big ) \left( \big (B_{2,b}^{\Delta }\big )_{\alpha _2 \alpha _1} - \big (B_{2,b}^{\Delta ,\sigma }\big )_{\alpha _2 \alpha _1}\right) k^\sigma _s d s, \end{aligned}$$

where

$$\begin{aligned} k_s^\sigma = Q^\sigma _{\alpha _1 \alpha _0} (s;B^\Delta _b) k_0. \end{aligned}$$
(5.52)

Then

$$\begin{aligned} \langle \! \langle G, k_t \rangle \!\rangle - \langle \! \langle G, k_t^\sigma \rangle \!\rangle = \langle \! \langle G, M_\sigma (t) \rangle \!\rangle + \langle \! \langle G, N_\sigma (t) \rangle \!\rangle . \end{aligned}$$
(5.53)

By (5.5) we get

$$\begin{aligned} \langle \! \langle G, M_{\sigma } (t)\rangle \!\rangle= & {} \int _0^t \big \langle \! \big \langle G, Q_{\alpha \alpha _2} \big (t-s;B^\Delta _b \big ) v_s \big \rangle \! \big \rangle d s \nonumber \\= & {} \int _0^t \langle \! \langle H_{\alpha _2 \alpha }(t-s) G, v_s \rangle \!\rangle d s = \int _0^t \langle \! \langle G_{ t-s}, v_s \rangle \!\rangle d s , \end{aligned}$$
(5.54)

where

$$\begin{aligned}&\langle \! \langle G_{ t-s}, v_s \rangle \!\rangle \nonumber \\&\quad = \int _{\Gamma _0} G_{ t-s} (\eta ) \sum _{x\in \eta } \left( 1- \varphi _\sigma (x)\right) E^{+} (x,\eta \setminus x) k^\sigma _s (\eta \setminus x) \lambda ( d \eta )\qquad \nonumber \\&\quad = \int _{\Gamma _0} \int _{\mathbb {R}^d} G_{ t-s} (\eta \cup x) \left( 1- \varphi _\sigma (x)\right) E^{+} (x,\eta ) k^\sigma _s (\eta ) d x \lambda ( d \eta ), \end{aligned}$$
(5.55)

where the latter line was obtained by means of (2.13). Note that \(k_s^\sigma \in \mathcal {K}_{\alpha _1}\) and \(G_{ t-s}\in \mathcal {G}_{\alpha _2}\) for \(s\le t\le \widetilde{T}\), see (5.51). We use this fact to prove that

$$\begin{aligned} g_{s} (x) := \int _{\Gamma _0} \frac{1}{|\eta | + 1} \left| G_{s}(\eta \cup x) \right| e^{\alpha _2 |\eta |} \lambda ( d \eta ) \end{aligned}$$

lies in \(L^1(\mathbb {R}^d)\) for each \(s\in [0,t]\). Indeed, by (2.13) and (4.2) we get

$$\begin{aligned} \Vert g_{s}\Vert _{L^1(\mathbb {R}^d)} \le e^{-\alpha _2} |G_{s}|_{\alpha _2} \le C_1 <\infty , \end{aligned}$$
(5.56)

where, see (5.6),

$$\begin{aligned} C_1 := e^{-\alpha _2}\max _{s\in [0,\widetilde{T}]} |G_s|_{\alpha _2} \le \frac{e^{-\alpha _0}T(\alpha , \alpha _2;B^\Delta _b) |G|_{\alpha }}{T(\alpha , \alpha _2;B^\Delta _b)-\widetilde{T}}, \end{aligned}$$
(5.57)
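For the reader's convenience, the first estimate in (5.56) can be verified as follows, assuming that (2.13) yields \(\int _{\Gamma _0} \sum _{x\in \eta } h(x, \eta \setminus x) \lambda (d\eta ) = \int _{\Gamma _0} \int _{\mathbb {R}^d} h(x, \eta ) d x \lambda (d \eta )\) and that \(|\cdot |_{\alpha _2}\) in (4.2) is the norm \(\int _{\Gamma _0} |\cdot (\eta )| e^{\alpha _2 |\eta |} \lambda (d\eta )\) of \(\mathcal {G}_{\alpha _2}\):

$$\begin{aligned} \Vert g_{s}\Vert _{L^1(\mathbb {R}^d)}&= \int _{\Gamma _0} \int _{\mathbb {R}^d} \frac{e^{\alpha _2 |\eta |}}{|\eta |+1} \left| G_{s}(\eta \cup x)\right| d x \lambda (d\eta ) = \int _{\Gamma _0} \sum _{x\in \eta } \frac{e^{\alpha _2 (|\eta |-1)}}{|\eta |} \left| G_{s}(\eta )\right| \lambda (d\eta ) \\&= e^{-\alpha _2} \int _{\{|\eta |\ge 1\}} e^{\alpha _2 |\eta |} \left| G_{s}(\eta )\right| \lambda (d\eta ) \le e^{-\alpha _2} |G_{s}|_{\alpha _2}. \end{aligned}$$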

By (5.15) and (5.52) we also get

$$\begin{aligned} \max _{s\in [0,\widetilde{T}]} \Vert k_s^\sigma \Vert _{\alpha _2} \le \frac{T(\alpha _1, \alpha _0;B^\Delta _b) \Vert k_0\Vert _{\alpha _0}}{T(\alpha _1, \alpha _0;B^\Delta _b)-\widetilde{T}} =: C_2 <\infty , \end{aligned}$$
(5.58)

see (5.51). Now we use (5.55), (5.56), and (5.58), and obtain by (3.2) and (3.12) that the following holds

$$\begin{aligned}&\left| \langle \! \langle G, M_{\sigma } (t)\rangle \!\rangle \right| \le \varkappa (\alpha _2 - \alpha _1) \Vert a^{+}\Vert e^{\alpha _1} C_2 \nonumber \\&\qquad \qquad \qquad \quad \;\; \times \int _0^{\widetilde{T}}\int _{\mathbb {R}^d} g_s (x) (1- \varphi _\sigma (x)) d x ds, \end{aligned}$$
(5.59)

where

$$\begin{aligned} \varkappa (\beta ) := \frac{1}{e\beta } + \left( \frac{2}{e\beta } \right) ^2. \end{aligned}$$
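Let us note, as this seems to be the role of \(\varkappa \) in (5.59), that \(\varkappa (\beta )\) dominates \(\sup _{n\ge 0} (n+n^2) e^{-\beta n}\); indeed, elementary calculus gives

$$\begin{aligned} \sup _{x\ge 0} x e^{-\beta x} = \frac{1}{e\beta }, \qquad \sup _{x\ge 0} x^2 e^{-\beta x} = \left( \frac{2}{e\beta } \right) ^2. \end{aligned}$$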

By (5.56) and (5.57), the integrand on the right-hand side of (5.59) is dominated by \(g_s (x)\), whose integral over \([0,\widetilde{T}]\times \mathbb {R}^d\) does not exceed \(\widetilde{T} C_1\). By the Lebesgue dominated convergence theorem this yields RHS (5.59) \(\rightarrow 0\) as \(\sigma \downarrow 0\). In the same way one proves that also

$$\begin{aligned} \left| \langle \! \langle G, N_{\sigma } (t)\rangle \!\rangle \right| \rightarrow 0, \qquad \sigma \downarrow 0, \end{aligned}$$

which yields (5.50), see (5.53). \(\square \)

Below, by a cofinal sequence \(\{\Lambda _n\}_{n\in \mathbb {N}} \subset \mathcal {B}_\mathrm{b}(\mathbb {R}^d)\) we mean a sequence such that \(\Lambda _{n}\subset \Lambda _{n+1}\) for all n, and each \(x\in \mathbb {R}^d\) belongs to some \(\Lambda _n\).

Lemma 5.11

Let \(\{\Lambda _n\}_{n\in \mathbb {N}}\) be a cofinal sequence and \(q_0^{\Lambda _n,N}\) be as in (5.40). Let also \(\alpha _1\) and \(\alpha _2\) be as in Lemma 4.9. Then for each \(t\in [0,T(\alpha _2, \alpha _1;B^\Delta _b))\) and \(G\in B_\mathrm{bs}(\Gamma _0)\), the following holds

$$\begin{aligned} \lim _{n \rightarrow +\infty }\lim _{N \rightarrow +\infty } \big \langle \! \big \langle G, Q_{\alpha _2\alpha _1} \big (t;B^\Delta _b \big ) q_{0}^{\Lambda _n,N}\big \rangle \! \big \rangle = \big \langle \! \big \langle G, Q_{\alpha _2\alpha _1} \big (t;B^\Delta _b \big )k_{\mu _0}\big \rangle \!\big \rangle . \end{aligned}$$
(5.60)

Proof

As in (5.54), we prove (5.60) by showing that

$$\begin{aligned}&\lim _{n \rightarrow +\infty }\lim _{N \rightarrow +\infty } \big \langle \! \big \langle G, Q_{\alpha _2\alpha _1}(t;B^\Delta _b) q_{0}^{\Lambda _n,N}\big \rangle \! \big \rangle \nonumber \\= & {} \lim _{n \rightarrow +\infty }\lim _{N \rightarrow +\infty } \big \langle \! \big \langle H_{\alpha _1 \alpha _2} (t)G, q_{0}^{\Lambda _n,N}\big \rangle \! \big \rangle = \big \langle \! \big \langle H_{\alpha _1 \alpha _2}(t) G, k_{\mu _0}\big \rangle \! \big \rangle . \end{aligned}$$
(5.61)

Since \(G_t := H_{\alpha _1 \alpha _2}(t) G\) lies in \(\mathcal {G}_{\alpha _1}\), the proof of (5.61) can be completed by repeating the arguments used in the proof of the analogous result in [5, Appendix]. \(\square \)

5.5 The Proof of Lemma 4.9

Let \(\alpha _1\) and \(\alpha _2\) be as in Lemma 4.9 and \(\{\Lambda _n\}_{n\in \mathbb {N}}\) be a cofinal sequence. Take \(k_{\mu _0} \in \mathcal {K}_{\alpha _1}\) and then produce \(q_0^{\Lambda _n,N}\), \(n\in \mathbb {N}\), by employing (5.36) and (5.40). Let \(T(\alpha _2, \alpha _1)< T(\alpha _2, \alpha _1;B^\Delta _b)\) be such that (5.49) holds with

$$\begin{aligned} k_t^{\Lambda _n,N} = Q^\sigma _{\alpha _2\alpha _1}\big (t;B^\Delta _b \big ) q_0^{\Lambda _n,N}, \qquad t\le T(\alpha _2, \alpha _1). \end{aligned}$$

Note that \(T(\alpha _2, \alpha _1)\) is independent of \(\Lambda _n\) and N, see Corollary 5.9. By Lemma 5.11 we then have that

$$\begin{aligned} \big \langle \! \big \langle G, Q^\sigma _{\alpha _2\alpha _1}\big (t;B^\Delta _b \big )k_{\mu _0}\big \rangle \! \big \rangle \ge 0. \end{aligned}$$

Now we apply Lemma 5.10 and obtain

$$\begin{aligned} \big \langle \! \big \langle G, Q_{\alpha _2\alpha _1}\big (t;B^\Delta _b \big )k_{\mu _0}\big \rangle \!\big \rangle \ge 0, \end{aligned}$$

which holds for

$$\begin{aligned} t\le \tau (\alpha _2, \alpha _1) :=\min \left\{ T(\alpha _2, \alpha _1); \widetilde{T}(\alpha _2, \alpha _1) \right\} , \end{aligned}$$

which completes the proof.