1 Introduction

We study the Markov dynamics of an infinite system of point entities placed in \({\mathbbm {R}}^d\), \(d\ge 1\), which appear (immigrate) with space-dependent rate \(b(x)\ge 0\), and disappear. The rate of disappearance of the entity located at a given \(x\in {\mathbbm {R}}^d\) is the sum of the intrinsic disappearance rate \(m(x)\ge 0\) and the part related to the interaction with the existing community, which is interpreted as competition between the entities. The phase space is the set \(\Gamma \) of all subsets \(\gamma \subset {\mathbbm {R}}^d\) such that the set \(\gamma _\Lambda :=\gamma \cap \Lambda \) is finite whenever \(\Lambda \subset {\mathbbm {R}}^d\) is compact. For each such \(\Lambda \), one defines the counting map \(\Gamma \ni \gamma \mapsto |\gamma _\Lambda | := \#\{\gamma \cap \Lambda \}\), where the latter denotes cardinality. Thereby, one introduces the subsets \(\Gamma ^{\Lambda ,n}:=\{ \gamma \in \Gamma : |\gamma _\Lambda | = n\}\), \(n \in {\mathbbm {N}}_0\), and equips \(\Gamma \) with the \(\sigma \)-field generated by all such \(\Gamma ^{\Lambda ,n}\). This allows for considering probability measures on \(\Gamma \) as states of the system. Among them there are Poissonian states in which the entities are independently distributed over \({\mathbbm {R}}^d\), see [6, Chapter 2]. They may serve as reference states for studying correlations between the positions of the entities. For the nonhomogeneous Poisson measure \(\pi _\varrho \) with density \(\varrho :{\mathbbm {R}}^d \rightarrow {\mathbbm {R}}_{+}:=[0, +\infty )\), \(n \in {\mathbbm {N}}_0\), and every compact \(\Lambda \), one has

$$\begin{aligned} \pi _\varrho (\Gamma ^{\Lambda ,n}) = \langle \varrho \rangle _\Lambda ^n \exp \left( - \langle \varrho \rangle _\Lambda \right) / n!, \qquad \langle \varrho \rangle _\Lambda := \int _{\Lambda } \varrho (x) dx , \end{aligned}$$
(1.1)

which implies

$$\begin{aligned} \pi _\varrho (N_\Lambda ) = \int _{\Gamma } |\gamma _\Lambda | \pi _{\varrho } (d\gamma )= \langle \varrho \rangle _\Lambda , \end{aligned}$$
(1.2)

where \(N_\Lambda \) is the number of entities contained in \(\Lambda \). In the case of \(\varrho \equiv \varkappa >0\), one deals with the homogeneous Poisson measure \(\pi _\varkappa \).
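For the reader's convenience, we note that (1.2) follows from (1.1) by a direct summation:

$$\begin{aligned} \pi _\varrho (N_\Lambda ) = \sum _{n=0}^{\infty } n \, \pi _\varrho (\Gamma ^{\Lambda , n}) = e^{-\langle \varrho \rangle _\Lambda } \sum _{n=1}^{\infty } \frac{\langle \varrho \rangle _\Lambda ^{n}}{(n-1)!} = \langle \varrho \rangle _\Lambda . \end{aligned}$$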

The counting map \(\Gamma \ni \gamma \mapsto |\gamma |\) can also be defined for \(\Lambda = {\mathbbm {R}}^d\). Then the set of finite configurations

$$\begin{aligned} \Gamma _0:= \bigcup _{n\in {\mathbbm {N}}_0}\{ \gamma \in \Gamma : |\gamma |=n\} \end{aligned}$$
(1.3)

is clearly measurable. In a state with the property \(\mu (\Gamma _0)=1\), the system is (\(\mu \)-almost surely) finite. By (1.1) one gets that either \(\pi _\varrho (\Gamma _0) =1\) or \(\pi _\varrho (\Gamma _0) =0\), depending on whether or not \(\varrho \) is globally integrable. In particular, \(\pi _\varkappa (\Gamma _0) =0\), i.e., the system in the state \(\pi _\varkappa \) is infinite. The use of infinite configurations for modeling large finite populations is usually justified, see, e.g., [2], by the argument that it allows one to get rid of boundary and size effects. Note that a finite system with dispersal, like the one studied in [5, 7], placed in a noncompact habitat always disperses to fill its empty parts, and is thus still developing. Infinite configurations are supposed to model developed populations. In this work, we consider infinite systems and hence deal with states \(\mu \) such that \(\mu (\Gamma _0)=0\).
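To sketch the dichotomy mentioned above, note that \(\{\gamma \in \Gamma : |\gamma | \le n\} \subset \{\gamma \in \Gamma : |\gamma _\Lambda | \le n\}\) for every \(n\in {\mathbbm {N}}_0\) and compact \(\Lambda \), whence, by (1.1),

$$\begin{aligned} \pi _\varrho \left( \{\gamma \in \Gamma : |\gamma |\le n\} \right) \le \exp \left( - \langle \varrho \rangle _\Lambda \right) \sum _{j=0}^{n} \frac{\langle \varrho \rangle _\Lambda ^{j}}{j!}. \end{aligned}$$

The right-hand side tends to zero as \(\Lambda \nearrow {\mathbbm {R}}^d\) whenever \(\langle \varrho \rangle _\Lambda \rightarrow +\infty \), which yields \(\pi _\varrho (\Gamma _0) =0\) in the nonintegrable case. If instead \(\langle \varrho \rangle := \int _{{\mathbbm {R}}^d} \varrho (x) dx < \infty \), then passing to the limit \(\Lambda \nearrow {\mathbbm {R}}^d\) in (1.1) gives \(\pi _\varrho (\{\gamma : |\gamma | = n\}) = \langle \varrho \rangle ^n e^{-\langle \varrho \rangle }/n!\), and these probabilities sum to one.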

To characterize states on \(\Gamma \) one employs observables—appropriate functions \(F:\Gamma \rightarrow {\mathbbm {R}}\). Their evolution is obtained from the Kolmogorov equation

$$\begin{aligned} \frac{d}{dt} F_t = L F_t , \qquad F_t|_{t=0} = F_0, \qquad t>0, \end{aligned}$$
(1.4)

where the generator L specifies the model. The states’ evolution is then obtained from the Fokker–Planck equation

$$\begin{aligned} \frac{d}{dt} \mu _t = L^* \mu _t, \qquad \mu _t|_{t=0} = \mu _0, \end{aligned}$$
(1.5)

related to that in (1.4) by the duality \(\mu _t(F_0) = \mu _0(F_t)\), where

$$\begin{aligned} \mu (F) := \int _{\Gamma } F(\gamma ) \mu (d \gamma ). \end{aligned}$$

The model that we study in this work is specified by the following generator

$$\begin{aligned} \left( L F \right) (\gamma )= & {} \sum _{x\in \gamma }\left( m(x) + \sum _{y\in \gamma \setminus x} a (x-y) \right) \left[ F(\gamma \setminus x) - F(\gamma ) \right] \nonumber \\&+ \int _{{\mathbbm {R}}^d} b(x) \left[ F(\gamma \cup x) - F(\gamma ) \right] dx. \end{aligned}$$
(1.6)

Here, for appropriate \(\gamma \in \Gamma \) and \(x\in {\mathbbm {R}}^d\), by writing \(\gamma \setminus x\) we mean \(\gamma \setminus \{x\}\). Likewise, \(\gamma \cup x\) stands for \(\gamma \cup \{x\}\). In (1.6), b(x) is the immigration rate, \(m(x) \ge 0\) is the intrinsic emigration (mortality) rate, and \(a\ge 0\) is the competition kernel. The model parameters are supposed to satisfy the following.

Assumption 1.1

The competition kernel a is continuous and belongs to \(L^1 ({\mathbbm {R}}^d) \cap L^\infty ({\mathbbm {R}}^d)\). Unless explicitly stated otherwise, we also assume that \(a(0)>0\). The immigration and mortality rates b and m are continuous and bounded.

Accordingly, we set

$$\begin{aligned} \langle a\rangle =&\int _{{\mathbbm {R}}^d} a (x ) dx, \qquad \Vert a\Vert = \sup _{x\in {\mathbbm {R}}^d} a(x) , \nonumber \\ \Vert b\Vert =&\sup _{x\in {\mathbbm {R}}^d} b(x), \qquad \Vert m\Vert = \sup _{x\in {\mathbbm {R}}^d} m(x) . \end{aligned}$$
(1.7)
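To have a concrete example in mind (this particular choice plays no role in what follows), one may take the Gaussian kernel

$$\begin{aligned} a(x) = \alpha \exp \left( - |x|^2 / r^2 \right) , \qquad \alpha , r >0, \end{aligned}$$

for which \(a(0) = \Vert a \Vert = \alpha \) and \(\langle a \rangle = \alpha (\pi r^2)^{d/2}\), together with any bounded continuous b and m.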

If one sets \(a\equiv 0\) in (1.6), the model becomes exactly soluble, see Sect. 2.3 below. This means that the evolution can be constructed explicitly for each initial state \(\mu _0\). In this case, assuming that \(\mu _0 (N_{\Lambda }^n) < \infty \) for all \(n\in {\mathbbm {N}}\), one can obtain information on the time dependence of these moments. Namely, if \(m(x) \ge m_*>0\) for all \(x\in {\mathbbm {R}}^d\), then for each compact \(\Lambda \), the following holds

$$\begin{aligned} \forall t>0 \qquad \mu _t (N_{\Lambda }^n) \le C_\Lambda ^{(n)}. \end{aligned}$$

Otherwise, the moments \(\mu _t (N_{\Lambda }^n)\) increase without bound as \(t\rightarrow +\infty \). If the initial state is \(\pi _{\varrho _0}\), then \(\mu _t = \pi _{\varrho _t}\) with \(\varrho _t (x) = \varrho _0(x) +b(x) t\) for all x such that \(m(x)=0\), cf. (2.22) below. In [3], for the model (1.6) with \(m\equiv 0\) and a nonzero a satisfying a certain (quite burdensome) condition, it was shown that \(\mu _t (N_\Lambda ) / \mathrm{V}(\Lambda ) \le C\) for large enough values of the Euclidean volume \(\mathrm{V}(\Lambda )\), provided the evolution of states \(\mu _0 \mapsto \mu _t\) exists.

In this article, assuming that the initial state \(\mu _0\) is sub-Poissonian, see Definition 2.1 below, we prove that the evolution of states \(\mu _0 \mapsto \mu _t\), \(t>0\), exists (Theorem 2.4) and is such that \(\mu _t (N_\Lambda ^n) \le C^{(n)}_\Lambda \) for each \(t>0\) (Theorem 2.5) and all m, including the case \(m\equiv 0\). Moreover, if the correlation functions \(k_{\mu _0}^{(n)}\), \(n\in {\mathbbm {N}}\), of the initial state are continuous, see Sect. 2.1 below, then \(k_{\mu _t}^{(n)}\), \(n\in {\mathbbm {N}}\), are also continuous and the following holds

$$\begin{aligned} k^{(1)}_{\mu _t}(x) \le C_1, \qquad k_{\mu _t}^{(2)} (x, y) \le C_2, \end{aligned}$$
(1.8)

with some positive \(C_1\) and \(C_2\).

The structure of the article is as follows. In Sect. 2, we introduce the necessary technicalities and then formulate the results: Theorems 2.4 and 2.5. Thereafter, we make a number of comments on them. In Sects. 3 and 4, we present the proofs of Theorems 2.4 and 2.5, respectively.

2 Preliminaries and the results

We begin by presenting some facts on the subject—a more detailed description of them can be found in [5, 7, 8] and in the literature quoted therein.

By \({\mathcal {B}}({\mathbbm {R}}^d)\) and \({\mathcal {B}}_\mathrm{b}({\mathbbm {R}}^d)\) we denote the sets of all Borel and bounded Borel subsets of \({\mathbbm {R}}^d\), respectively. The configuration space \(\Gamma \) is equipped with the vague topology, see [8], and thus with the corresponding Borel \(\sigma \)-field \({\mathcal {B}}(\Gamma )\), which makes it a standard Borel space. Note that \({\mathcal {B}}(\Gamma )\) is exactly the \(\sigma \)-field generated by the sets \(\Gamma ^{\Lambda ,n}\) mentioned in the Introduction. By \({\mathcal {P}}(\Gamma )\) we denote the set of all probability measures on \((\Gamma , {\mathcal {B}}(\Gamma ))\).

2.1 Correlation functions

Like in [5, 7], the evolution of states will be described by means of correlation functions without the direct use of (1.5). To explain the essence of this approach let us consider the set \(\varTheta \) of all compactly supported continuous functions \(\theta :{\mathbb {R}}^d\rightarrow (-1,0]\). For a state, \(\mu \), its Bogoliubov functional, cf. [9, 12], is

$$\begin{aligned} B_\mu (\theta ) = \int _{\Gamma } \prod _{x\in \gamma } ( 1 + \theta (x)) \mu ( d \gamma ), \qquad \theta \in \varTheta . \end{aligned}$$
(2.1)

For the homogeneous Poisson measure \(\pi _\varkappa \), it takes the form

$$\begin{aligned} B_{\pi _\varkappa } (\theta ) = \exp \left( \varkappa \int _{{\mathbb {R}}^d}\theta (x) d x \right) . \end{aligned}$$

Definition 2.1

The set of sub-Poissonian states \({\mathcal {P}}_\mathrm{exp}(\Gamma )\) consists of all those states \(\mu \in {\mathcal {P}}(\Gamma )\) for which \(B_\mu \) can be continued, as a function of \(\theta \), to an exponential type entire function on \(L^1 ({\mathbb {R}}^d)\).

It can be shown that a given \(\mu \) belongs to \({\mathcal {P}}_\mathrm{exp}(\Gamma )\) if and only if its functional \(B_\mu \) can be written down in the form

$$\begin{aligned} B_\mu (\theta ) = 1+ \sum _{n=1}^\infty \frac{1}{n!}\int _{({\mathbb {R}}^d)^n} k_\mu ^{(n)} (x_1 , \dots , x_n) \theta (x_1) \cdots \theta (x_n) d x_1 \cdots d x_n, \end{aligned}$$
(2.2)

where \(k_\mu ^{(n)}\) is the n-th order correlation function of \(\mu \), which is a symmetric element of \(L^\infty (({\mathbb {R}}^d)^n)\) satisfying

$$\begin{aligned} \Vert k^{(n)}_\mu \Vert _{L^\infty (({\mathbb {R}}^d)^n)} \le C \exp ( \vartheta n), \qquad n\in {\mathbb {N}}_0, \end{aligned}$$
(2.3)

with some \(C>0\) and \(\vartheta \in {\mathbb {R}}\). Note that \(k_{\pi _\varkappa }^{(n)} (x_1 , \dots , x_n)= \varkappa ^n\). Note also that (2.2) resembles the Taylor expansion of the characteristic function of a probability measure. In view of this, \(k^{(n)}_\mu \) are also called (factorial) moment functions, cf. e.g., [11].

Recall that \(\Gamma _0\)—the set of all finite \(\gamma \in \Gamma \) defined in (1.3)—is an element of \({\mathcal {B}}(\Gamma )\). A function \(G:\Gamma _0 \rightarrow {\mathbbm {R}}\) is \({\mathcal {B}}(\Gamma _0)/{\mathcal {B}}({\mathbbm {R}} )\)-measurable, see [5], if and only if, for each \(n\in {\mathbbm {N}}\), there exists a symmetric Borel function \(G^{(n)}: ({\mathbbm {R}}^{d})^{n} \rightarrow {\mathbbm {R}}\) such that

$$\begin{aligned} G(\eta ) = G^{(n)} ( x_1, \dots , x_{n}), \quad \mathrm{for} \ \eta = \{ x_1, \dots , x_{n}\}. \end{aligned}$$
(2.4)

Definition 2.2

A measurable function \(G:\Gamma _0 \rightarrow {\mathbbm {R}}\) is said to have bounded support if: (a) there exists \(\Lambda \in {\mathcal {B}}_\mathrm{b} ({\mathbbm {R}}^d)\) such that \(G(\eta ) = 0\) whenever \(\eta \cap ({\mathbbm {R}}^d \setminus \Lambda )\ne \emptyset \); (b) there exists \(N\in {\mathbbm {N}}_0\) such that \(G(\eta )=0\) whenever \(|\eta | >N\). By \(\Lambda (G)\) and N(G) we denote the smallest \(\Lambda \) and N with the properties just mentioned. By \(B_\mathrm{bs}(\Gamma _0)\) we denote the set of all bounded functions with bounded support.

The Lebesgue–Poisson measure \(\lambda \) on \((\Gamma _0, {\mathcal {B}}(\Gamma _0))\) is defined by the following formula

$$\begin{aligned} \int _{\Gamma _0} G(\eta ) \lambda ( d \eta ) = G(\emptyset ) + \sum _{n=1}^\infty \frac{1}{n! } \int _{({\mathbbm {R}}^d)^{n}} G^{(n)} ( x_1, \dots , x_{n} ) d x_1 \cdots dx_{n}, \end{aligned}$$
(2.5)

holding for all \(G\in B_\mathrm{bs}(\Gamma _0)\). Like in (2.4), we introduce \(k_\mu : \Gamma _0 \rightarrow {\mathbbm {R}}\) such that \(k_\mu (\eta ) = k^{(n)}_\mu (x_1, \dots , x_n)\) for \(\eta = \{x_1, \dots , x_n\}\), \(n\in {\mathbbm {N}}\). We also set \(k_\mu (\emptyset )=1\). With the help of the measure introduced in (2.5), the expressions for \(B_\mu \) in (2.1) and (2.2) can be combined into the following formulas

$$\begin{aligned} B_\mu (\theta )= & {} \int _{\Gamma _0} k_\mu (\eta ) \prod _{x\in \eta } \theta (x) \lambda (d\eta )=: \int _{\Gamma _0} k_\mu (\eta ) e( \eta ; \theta ) \lambda (d \eta )\nonumber \\= & {} \int _{\Gamma } \prod _{x\in \gamma } (1+ \theta (x)) \mu (d \gamma ) =: \int _{\Gamma } F_\theta (\gamma ) \mu (d \gamma ). \end{aligned}$$
(2.6)
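For instance, inserting \(k_{\pi _\varkappa }(\eta ) = \varkappa ^{|\eta |}\) into the first line of (2.6) and using (2.5), one readily recovers \(B_{\pi _\varkappa }\):

$$\begin{aligned} \int _{\Gamma _0} \varkappa ^{|\eta |} e(\eta ; \theta ) \lambda (d \eta ) = \sum _{n=0}^{\infty } \frac{\varkappa ^n}{n!} \left( \int _{{\mathbb {R}}^d} \theta (x) d x \right) ^{n} = \exp \left( \varkappa \int _{{\mathbb {R}}^d} \theta (x) d x \right) . \end{aligned}$$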

Thereby, one can transform the action of L on F, see (1.6), to the action of \(L^\Delta \) on \(k_\mu \) according to the rule

$$\begin{aligned} \int _{\Gamma }(L F_\theta ) (\gamma ) \mu (d \gamma ) = \int _{\Gamma _0} (L^\Delta k_\mu ) (\eta ) e(\eta ;\theta ) \lambda (d \eta ). \end{aligned}$$
(2.7)

This will allow us to pass from (1.5) to the corresponding Cauchy problem for the correlation functions

$$\begin{aligned} \frac{d}{dt} k_t = L^\Delta k_t, \qquad k_t|_{t=0} = k_{\mu _0}. \end{aligned}$$
(2.8)

By (2.7) the action of \(L^\Delta \) is as follows

$$\begin{aligned} \left( L^\Delta k \right) (\eta ) = (L^{\Delta ,-}k)(\eta ) + \sum _{x\in \eta } b(x) k(\eta \setminus x), \end{aligned}$$
(2.9)

where

$$\begin{aligned} (L^{\Delta ,-}k)(\eta ) = - E(\eta ) k(\eta ) - \int _{{\mathbbm {R}}^d}\left( \sum _{y\in \eta } a (x-y) \right) k(\eta \cup x)d x, \end{aligned}$$
(2.10)

and

$$\begin{aligned} E(\eta ) = \sum _{x\in \eta }m(x) + \sum _{x\in \eta } \sum _{y\in \eta \setminus x} a(x-y). \end{aligned}$$
(2.11)
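To illustrate the action of \(L^\Delta \), evaluating (2.9)–(2.11) at a singleton \(\eta = \{x\}\) and using \(k(\emptyset ) =1\), one obtains the equation for the first correlation function in the form

$$\begin{aligned} \frac{d}{dt} k^{(1)}_t (x) = b(x) - m(x) k^{(1)}_t (x) - \int _{{\mathbbm {R}}^d} a (y-x) k^{(2)}_t (x, y) d y , \end{aligned}$$

which is not a closed equation, as it also involves \(k^{(2)}_t\).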

In the next subsection, we introduce the spaces where we are going to define (2.8).

2.2 The Banach spaces

By (2.2) and (2.6), it follows that \(\mu \in {\mathcal {P}}_\mathrm{exp}(\Gamma )\) implies

$$\begin{aligned} |k_\mu (\eta )| \le C \exp ( \vartheta |\eta |), \end{aligned}$$

holding for \(\lambda \)-almost all \(\eta \in \Gamma _0\), some \(C>0\), and \(\vartheta \in {\mathbbm {R}}\). In view of this, we set

$$\begin{aligned} {\mathcal {K}}_\vartheta := \{ k:\Gamma _0\rightarrow {\mathbbm {R}}: \Vert k\Vert _\vartheta <\infty \}, \end{aligned}$$
(2.12)

where

$$\begin{aligned} \Vert k\Vert _\vartheta = {{\mathrm{{\mathrm {ess\,sup}}}}}_{\eta \in \Gamma _0}\left\{ |k (\eta )| \exp \big ( - \vartheta |\eta | \big ) \right\} . \end{aligned}$$
(2.13)

Clearly, (2.12) and (2.13) define a Banach space with the usual point-wise linear operations. In the following, we use the ascending scale of such spaces \({\mathcal {K}}_\vartheta \), \(\vartheta \in {\mathbbm {R}}\), with the property

$$\begin{aligned} {\mathcal {K}}_\vartheta \hookrightarrow {\mathcal {K}}_{\vartheta '}, \qquad \vartheta < \vartheta ', \end{aligned}$$
(2.14)

where \(\hookrightarrow \) denotes continuous embedding.
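For example, for the homogeneous Poisson measure one has \(k_{\pi _\varkappa }(\eta ) = \varkappa ^{|\eta |}\), whence

$$\begin{aligned} \Vert k_{\pi _\varkappa }\Vert _\vartheta = \sup _{n \in {\mathbbm {N}}_0} \varkappa ^{n} e^{-\vartheta n}, \end{aligned}$$

which is finite if and only if \(\vartheta \ge \log \varkappa \); thus, \(k_{\pi _\varkappa }\) lies in \({\mathcal {K}}_\vartheta \) precisely for such \(\vartheta \).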

For \(G\in B_\mathrm{bs}(\Gamma _0)\), we set

$$\begin{aligned} (KG)(\gamma ) = \sum _{\eta \Subset \gamma } G(\eta ), \end{aligned}$$
(2.15)

where \(\Subset \) indicates that the summation is taken over all finite subsets \(\eta \) of \(\gamma \). It satisfies, see Definition 2.2,

$$\begin{aligned} |(KG)(\gamma )| \le C_G \left( 1 + |\gamma \cap \Lambda (G)|\right) ^{N(G)}, \qquad C_G := \sup _{\eta \in \Gamma _0}|G(\eta )|. \end{aligned}$$

The latter means that \(\mu (KG) < \infty \) for each \(\mu \in {\mathcal {P}}_\mathrm{exp}(\Gamma )\). By (2.6) this yields

$$\begin{aligned} \mu (K G) = \int _{\Gamma } (K G)(\gamma ) \mu (d \gamma ) = \int _{\Gamma _0} G(\eta ) k_\mu (\eta ) \lambda (d \eta ). \end{aligned}$$
(2.16)
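For instance, taking \(G(\{x\}) = I_\Lambda (x)\) with the indicator \(I_\Lambda \) of \(\Lambda \), and \(G(\eta ) = 0\) whenever \(|\eta | \ne 1\), one gets \((KG)(\gamma ) = |\gamma _\Lambda |\), and (2.16) turns into

$$\begin{aligned} \mu (N_\Lambda ) = \int _{\Lambda } k^{(1)}_\mu (x) d x , \end{aligned}$$

cf. (2.25) below.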

Set

$$\begin{aligned} B^\star _\mathrm{bs} (\Gamma _0) =\{ G\in B_\mathrm{bs}(\Gamma _0): (KG)(\gamma ) \ge 0 \ \mathrm{for} \ \mathrm{all} \ \gamma \in \Gamma \}. \end{aligned}$$
(2.17)

By [8, Theorems 6.1 and 6.2 and Remark 6.3] one can prove the next statement.

Proposition 2.3

Let a measurable function \(k : \Gamma _0 \rightarrow {\mathbbm {R}}\) have the following properties:

$$\begin{aligned} \mathrm{(a)} \ \ \int _{\Gamma _0} G(\eta ) k(\eta ) \lambda (d \eta ) \ge 0 \ \ \mathrm{for} \ \mathrm{all} \ G \in B^\star _\mathrm{bs}(\Gamma _0); \qquad \mathrm{(b)} \ \ k(\emptyset ) = 1; \qquad \mathrm{(c)} \ \ k(\eta ) \le C^{|\eta |}, \end{aligned}$$
(2.18)

with (c) holding for some \(C >0\) and \(\lambda \)-almost all \(\eta \in \Gamma _0\). Then there exists a unique state \(\mu \in {\mathcal {P}}_\mathrm{exp}(\Gamma )\) for which k is the correlation function.

Define, cf. (2.17),

$$\begin{aligned} {\mathcal {K}}^\star _\vartheta =\left\{ k\in {\mathcal {K}}_\vartheta : \int _{\Gamma _0} G(\eta ) k(\eta ) \lambda (d \eta ) \ge 0 \ \ \mathrm{for} \ \mathrm{all} \ G \in B^\star _\mathrm{bs}(\Gamma _0)\right\} . \end{aligned}$$
(2.19)

This is clearly a subset of the cone

$$\begin{aligned} {\mathcal {K}}^+_\vartheta =\{k\in {\mathcal {K}}_\vartheta : k(\eta ) \ge 0 \ \ \mathrm{for} \ \lambda -\mathrm{almost} \ \mathrm{all} \ \eta \in \Gamma _0\}. \end{aligned}$$
(2.20)

By Proposition 2.3 it follows that each \(k\in {\mathcal {K}}^\star _\vartheta \) with the property \(k(\emptyset ) = 1\) is the correlation function of a unique state \(\mu \in {\mathcal {P}}_\mathrm{exp}(\Gamma )\). Then we define

$$\begin{aligned} {\mathcal {K}} = \bigcup _{\vartheta \in {\mathbbm {R}}} {\mathcal {K}}_\vartheta , \qquad {\mathcal {K}}^\star = \bigcup _{\vartheta \in {\mathbbm {R}}} {\mathcal {K}}_\vartheta ^\star . \end{aligned}$$

As a sum of Banach spaces, the linear space \({\mathcal {K}}\) is equipped with the corresponding inductive topology that turns it into a locally convex space.

2.3 Without competition

The version of (1.6) with \(a\equiv 0\) is known as the Surgailis model, see [14] and the discussion in [3]. This model is exactly soluble. This means that the solution of (2.8) can be written down explicitly in the following form

$$\begin{aligned} k_t (\eta ) = \sum _{\xi \subset \eta }e(\xi ; \phi _t) e(\eta \setminus \xi ; \psi _t) k_{\mu _0}(\eta \setminus \xi ), \end{aligned}$$
(2.21)

where

$$\begin{aligned} \psi _t (x)= & {} e^{- m(x) t}, \qquad \quad e(\xi ; \phi )= \prod _{x\in \xi }\phi (x), \nonumber \\ \phi _t(x)= & {} \left\{ \begin{array}{ll} \left( 1 - e^{- m(x) t } \right) \frac{b(x)}{m(x)} \qquad &{}\mathrm{for} \ \ m(x)>0,\\ b(x)t \qquad &{}\mathrm{for} \ \ m(x)=0. \end{array} \right. \end{aligned}$$
(2.22)
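As a quick consistency check, evaluating (2.21) at a singleton \(\eta = \{x\}\) gives

$$\begin{aligned} k^{(1)}_t (x) = \psi _t (x) k^{(1)}_{\mu _0} (x) + \phi _t (x), \end{aligned}$$

which indeed solves \(\frac{d}{dt} k^{(1)}_t = b - m k^{(1)}_t\), i.e., the \(a\equiv 0\) case of the equation for \(k^{(1)}_t\) obtained from (2.9)–(2.11).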

The corresponding state \(\mu _t\) has the Bogoliubov functional

$$\begin{aligned} B_{\mu _t} (\theta ) = \exp \left( \int _{{\mathbbm {R}}^d} \theta (x) \phi _t (x) d x \right) B_{\mu _0} (\theta \psi _t), \end{aligned}$$
(2.23)

which one obtains from (2.6) and (2.21). This formula can be used to extend the evolution \(\mu _0\mapsto \mu _t\) to all \(\mu _0\in {\mathcal {P}}(\Gamma )\). Indeed, for each \(t>0\) and \(\theta \in \varTheta \), cf. (2.1), we have that \(\theta \psi _t \in \varTheta \), and hence \(B_{\mu _0} (\theta \psi _t)\) is the Bogoliubov functional of a certain state. The same is true for the left-hand side of (2.23), and the state \(\mu _t\) contained therein can be considered as a weak solution of the corresponding Fokker–Planck equation (1.5).

If the initial state is Poissonian with density \(\varrho _0 (x)\), by (2.23) the state \(\mu _t\) is also Poissonian with the density

$$\begin{aligned} \varrho _t (x) = \psi _t (x) \varrho _0 (x) + \phi _t(x). \end{aligned}$$
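For illustration, consider the special case of constant rates (used here only as an example): \(\varrho _0 \equiv \varkappa \), \(b(x) \equiv b >0\) and \(m(x) \equiv m_* >0\). Then

$$\begin{aligned} \varrho _t (x) = \varkappa e^{-m_* t} + \frac{b}{m_*} \left( 1 - e^{-m_* t}\right) \le \max \{\varkappa ; b/m_* \}, \end{aligned}$$

uniformly in \(t\), in accordance with (2.24), whereas for \(m \equiv 0\) one gets the linear growth \(\varrho _t = \varkappa + b t\).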

If \(m(x) \ge m_*>0\) for some \(m_*\) and all \(x\in {\mathbbm {R}}^d\), then the solution in (2.21) lies in \({\mathcal {K}}_{\vartheta _*}\) for all \(t>0\). Here \(\vartheta _0\) is such that \(k_{\mu _0} \in {\mathcal {K}}_{\vartheta _0}\), cf. (2.12), and

$$\begin{aligned} \vartheta _* = \max \{ \vartheta _0; \log (\Vert b\Vert /m_*)\}. \end{aligned}$$
(2.24)

Otherwise, the solution in (2.21) is unbounded in t. If, for some compact \(\Lambda \), \(m(x)=0\) for \(x\in \Lambda \), then by (2.21) and (2.22) we get

$$\begin{aligned} k_t^{(1)} (x) = k_{\mu _0}^{(1)} (x) + b(x) t , \qquad x \in \Lambda , \end{aligned}$$

which by (1.2), (2.15) and (2.16) yields

$$\begin{aligned} \mu _t (N_\Lambda ) =&\int _{\Gamma } |\gamma _\Lambda | \mu _t ( d \gamma ) = \int _{\Gamma }\left( \sum _{x\in \gamma } I_\Lambda (x)\right) \mu _t(d\gamma )\nonumber \\ =&\int _{\Gamma } (K I_\Lambda )(\gamma ) \mu _t(d \gamma ) = \int _{\Lambda }k^{(1)}_t(x) d x = \mu _0 (N_\Lambda ) + t \int _{\Lambda } b(x) dx, \end{aligned}$$
(2.25)

where \(I_\Lambda \) is the indicator of \(\Lambda \). Then \(\mu _t (N_\Lambda )\rightarrow +\infty \) as \(t\rightarrow +\infty \) if b is not identically zero on \(\Lambda \).

2.4 The statements

For each \(\vartheta \in {\mathbbm {R}}\) and \(\vartheta '>\vartheta \), the expressions in (2.9) and (2.10) can be used to define the corresponding bounded linear operators \(L^\Delta _{\vartheta ' \vartheta }\) acting from \({\mathcal {K}}_\vartheta \) to \({\mathcal {K}}_{\vartheta '}\). Their operator norms can be estimated as in [7, eqs. (3.11), (3.13)], which yields, cf. (1.7),

$$\begin{aligned} \Vert L^\Delta _{\vartheta ' \vartheta }\Vert \le \frac{4 \Vert a \Vert }{e^2 (\vartheta ' - \vartheta )^2} + \frac{ \Vert b\Vert e^{-\vartheta } + \Vert m\Vert + \langle a \rangle e^{\vartheta '}}{e (\vartheta ' - \vartheta )}. \end{aligned}$$
(2.26)
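The form of this bound reflects the elementary inequalities

$$\begin{aligned} \sup _{n \in {\mathbbm {N}}_0} n e^{-(\vartheta ' - \vartheta ) n} \le \frac{1}{e (\vartheta ' - \vartheta )} , \qquad \sup _{n \in {\mathbbm {N}}_0} n^2 e^{-(\vartheta ' - \vartheta ) n} \le \frac{4}{e^2 (\vartheta ' - \vartheta )^2} , \end{aligned}$$

applied with \(n = |\eta |\) to the factors \(|\eta |\) and \(|\eta |^2\) that arise when passing from \(\Vert \cdot \Vert _{\vartheta }\) to \(\Vert \cdot \Vert _{\vartheta '}\); the same inequalities stand behind (3.2) and (3.19) below.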

By means of the collection \(\{L^\Delta _{\vartheta ' \vartheta }\}\) with all \(\vartheta \in {\mathbbm {R}}\) and \(\vartheta '>\vartheta \) we introduce a continuous linear operator acting on \({\mathcal {K}}\), denoted also as \(L^\Delta \), and thus define the corresponding Cauchy problem (2.8) in this space. By its (global in time) solution we will mean a continuously differentiable function \([0,+\infty ) \ni t \mapsto k_t \in {\mathcal {K}}\) such that both equalities in (2.8) hold. Our results are given in the following statements, both based on Assumption 1.1.

Theorem 2.4

(Existence of evolution) For each \(\mu _0 \in {\mathcal {P}}_\mathrm{exp} (\Gamma )\), the problem in (2.8) with \(L^\Delta :{\mathcal {K}}\rightarrow {\mathcal {K}}\) as in (2.9), (2.10) and (2.26) has a unique solution which lies in \({\mathcal {K}}^\star \) and is such that \(k_t(\emptyset )=1\) for all \(t>0\). Therefore, for each \(t>0\), there exists a unique state \(\mu _t\in {\mathcal {P}}_\mathrm{exp}(\Gamma )\) such that \(k_t = k_{\mu _t}\). Moreover, for all \(t>0\), the following holds

$$\begin{aligned} 0\le k_t (\eta ) \le \sum _{\xi \subset \eta }e(\xi ; \phi _t) e(\eta \setminus \xi ; \psi _t) k_{\mu _0}(\eta \setminus \xi ), \end{aligned}$$
(2.27)

where \(\phi _t\) and \(\psi _t\) are as in (2.22). If the intrinsic mortality rate satisfies \(m(x) \ge m_* >0\) for all \(x\in {\mathbbm {R}}^d\), then for all \(t>0\) the solution \(k_t\) lies in \({\mathcal {K}}_{\vartheta _*}\) with \(\vartheta _*\) given in (2.24).

Theorem 2.5

(Global boundedness) The states \(\mu _t\), \(t\ge 0\), mentioned in Theorem 2.4 have the property: for every \(n\in {\mathbbm {N}}\) and compact \(\Lambda \subset {\mathbbm {R}}^d\), the following holds

$$\begin{aligned} \forall t>0 \qquad \mu _t (N_\Lambda ^n) \le C_\Lambda ^{(n)}, \end{aligned}$$
(2.28)

with some \(C_\Lambda ^{(n)}>0\). If \(\mu _0\) is such that each \(k_{\mu _0}^{(n)}\) is a continuous function, then so is \(k_{\mu _t}^{(n)}\) for all \(n\in {\mathbbm {N}}\) and \(t>0\). Moreover, \(k^{(1)}_{\mu _t}\) and \(k^{(2)}_{\mu _t}\) have the properties as in (1.8).

2.5 Comments and comparison

By (2.25) it follows that the global in time boundedness in the Surgailis model is possible only if \(m(x) \ge m_*>0\) for all \(x\in {\mathbbm {R}}^d\). As follows from our Theorem 2.5, adding competition to the Surgailis model with the zero intrinsic mortality rate yields the global in time boundedness. In this case, the competition rate a(0) appears to be an effective mortality, see (4.19) below and the comments following the proof of Theorem 2.5. Note also that the global boundedness as in Theorem 2.5 does not mean that the evolution \(k_{\mu _0} \mapsto k_t\) holds in one and the same \({\mathcal {K}}_\vartheta \) with sufficiently large \(\vartheta \). It does if \(m(x) \ge m_*>0\). Since Theorem 2.4 covers also the case \(a \equiv 0\), the solution in (2.21) is unique in the same sense. A partial result on the global boundedness in the model discussed here was obtained in [3, Theorem 1]. Therein, under quite a strong condition imposed on the competition kernel a (which, in particular, implies that it has infinite range), and under the assumption that the evolution of states \(\mu _0 \mapsto \mu _t\) exists, it was proved that, in the present notation, \(\mu _t(N_\Lambda ) \le C_\Lambda \).

3 The existence of the evolution of states

We follow the line of arguments used in proving Theorem 3.3 in [7] and perform the following three steps:

  1. (i)

    Defining the Cauchy problem (2.8) with \(k_{\mu _0 }\in {\mathcal {K}}_{\vartheta _0}\) in a given Banach space \({\mathcal {K}}_\vartheta \) with \(\vartheta >\vartheta _0\), see (2.12) and (2.14), and then showing that this problem has a unique solution \(k_t\in {\mathcal {K}} _\vartheta \) on a bounded time interval \([0,T(\vartheta , \vartheta _0))\) (Sect. 3.1).

  2. (ii)

    Proving that the mentioned solution \(k_t\) has the properties (a) and (b) in (2.18) ((c) follows by the fact that \(k_t\in {\mathcal {K}}_\vartheta \)). Then \(k_t \in {\mathcal {K}}_\vartheta ^\star \) and hence also in \({\mathcal {K}}_\vartheta ^{+}\), see (2.20) and (2.19). By Proposition 2.3 it follows that \(k_t\) is the correlation function of a unique state \(\mu _t\) (Sect. 3.2).

  3. (iii)

    Constructing a continuation of \(k_t\) from \([0,T(\vartheta , \vartheta _0))\) to all \(t>0\) by means of the fact that \(k_t \in {\mathcal {K}}_\vartheta ^{+}\) (Sect. 3.3).

3.1 Solving the Cauchy problem

We begin by rewriting \(L^\Delta \) [given in (2.9), (2.10)] in the following form

$$\begin{aligned} L^\Delta= & {} A + B, \nonumber \\ (Ak)(\eta )= & {} - E (\eta ) k(\eta ) , \nonumber \\ (Bk)(\eta )= & {} - \int _{{\mathbbm {R}}^d} \left( \sum _{y\in \eta } a(x-y) \right) k(\eta \cup x) dx + \sum _{x\in \eta } b(x) k(\eta \setminus x). \end{aligned}$$
(3.1)

For \(\vartheta \in {\mathbbm {R}}\) and \(\vartheta '>\vartheta \), let \({\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '})\) be the Banach space of all bounded linear operators acting from \({\mathcal {K}}_{\vartheta }\) to \({\mathcal {K}}_{\vartheta '}\). Like in (2.26) we define \(A_{\vartheta '\vartheta }, B_{\vartheta '\vartheta }\in {\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '})\), satisfying

$$\begin{aligned} \Vert A_{\vartheta '\vartheta } \Vert \le \frac{4\Vert a\Vert }{e^2(\vartheta ' - \vartheta )^2} + \frac{ \Vert m\Vert }{e(\vartheta ' - \vartheta )}, \qquad \Vert B_{\vartheta '\vartheta } \Vert \le \frac{\Vert b\Vert e^{-\vartheta }+ \langle a \rangle e^{\vartheta '}}{e(\vartheta ' - \vartheta )}. \end{aligned}$$
(3.2)

Now we set, see (2.11),

$$\begin{aligned} (S (t) k)( \eta ) = \exp \left( - t E(\eta ) \right) k(\eta ), \qquad t\ge 0, \end{aligned}$$
(3.3)

and then introduce the corresponding \(S_{\vartheta '\vartheta }(t) \in {\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '})\), \(t\ge 0\). By the first estimate in (3.2) one shows that the map

$$\begin{aligned} {[}0,+\infty ) \ni t \mapsto S_{\vartheta '\vartheta }(t) \in {\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '}) \end{aligned}$$
(3.4)

is continuous and such that

$$\begin{aligned} \frac{d}{dt} S_{\vartheta '\vartheta }(t) = A_{\vartheta ' \vartheta ''} S_{\vartheta ''\vartheta }(t), \qquad t >0, \end{aligned}$$
(3.5)

holding for each \(\vartheta '' \in (\vartheta , \vartheta ')\). Note that (3.3) may be used to define a bounded multiplication operator, \(S_{\vartheta }(t)\in {\mathcal {L}}({\mathcal {K}}_\vartheta ) := {\mathcal {L}}({\mathcal {K}}_\vartheta ,{\mathcal {K}}_\vartheta )\). However, in this case the map \([0,+\infty ) \ni t \mapsto S_{\vartheta }(t) \in {\mathcal {L}}({\mathcal {K}}_{\vartheta })\) would not be continuous.

For \(\vartheta \) and \(\vartheta '>\vartheta \) as above, we fix some \(\delta < \vartheta ' - \vartheta \). Then, for a given \(l\in {\mathbbm {N}}\), we divide the interval \([\vartheta , \vartheta ']\) into subintervals with endpoints \(\vartheta ^s\), \(s=0, 1 , \dots , 2l+1\), as follows. Set \(\vartheta ^0= \vartheta \), \(\vartheta ^{2l+1} = \vartheta '\), and

$$\begin{aligned} \vartheta ^{2s}= & {} \vartheta + \frac{s}{l+1}\delta + s \epsilon , \qquad \epsilon := (\vartheta ' - \vartheta - \delta )/l,\nonumber \\ \vartheta ^{2s+1}= & {} \vartheta + \frac{s+1}{l+1}\delta + s \epsilon , \qquad s =0, 1, \dots , l. \end{aligned}$$
(3.6)

Then, for \(t>0\) and

$$\begin{aligned} (t, t_1 , \dots , t_l) \in {\mathcal {T}}_l :=\{(t, t_1 , \dots , t_l) : 0\le t_l \le t_{l-1} \le \cdots \le t_1 \le t\}\subset {\mathbbm {R}}^{l+1}, \end{aligned}$$

define

$$\begin{aligned} \Pi ^{(l)}_{\vartheta '\vartheta } (t, t_1 , \dots , t_l)= & {} S_{\vartheta '\vartheta ^{2l}}(t-t_1) B_{\vartheta ^{2l} \vartheta ^{2l-1}} \cdots S_{\vartheta ^{2s+1}\vartheta ^{2s}}(t_{l-s}-t_{l-s+1})\nonumber \\&\times B_{\vartheta ^{2s} \vartheta ^{2s-1}} \cdots S_{\vartheta ^{3}\vartheta ^{2}}(t_{l-1}-t_{l}) B_{\vartheta ^{2} \vartheta ^{1}}S_{\vartheta ^{1}\vartheta }(t_l). \end{aligned}$$
(3.7)

By (3.5), (3.4) and (3.1) one can prove the next statement, cf. [7, Proposition 4.6].

Proposition 3.1

For each \(l\in {\mathbbm {N}}\), the operators defined in (3.7) have the properties:

  1. (i)

    for each \((t, t_1 , \dots , t_l) \in {\mathcal {T}}_l\), \(\Pi ^{(l)}_{\vartheta '\vartheta } (t, t_1 , \dots , t_l)\) is in \({\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '})\) and the map

    $$\begin{aligned} (t, t_1 , \dots , t_l) \mapsto \Pi ^{(l)}_{\vartheta '\vartheta } (t, t_1 , \dots , t_l)\in {\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '}) \end{aligned}$$

    is continuous;

  2. (ii)

    for fixed \(t_1 , \dots , t_l\) and each \(\varepsilon >0\), the map

    $$\begin{aligned} (t_1 , t_1 + \varepsilon ) \ni t \mapsto \Pi ^{(l)}_{\vartheta '\vartheta } (t, t_1 , \dots , t_l)\in {\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '}) \end{aligned}$$

    is continuously differentiable and such that, for each \(\vartheta ''\in (\vartheta , \vartheta ')\), the following holds

    $$\begin{aligned} \frac{d}{dt} \Pi ^{(l)}_{\vartheta '\vartheta } (t, t_1 , \dots , t_l) = A_{\vartheta ' \vartheta ''} \Pi ^{(l)}_{\vartheta ''\vartheta } (t, t_1 , \dots , t_l). \end{aligned}$$
    (3.8)

Define

$$\begin{aligned} T(\vartheta ', \vartheta ) = \frac{\vartheta ' - \vartheta }{\Vert b\Vert e^{-\vartheta } + \langle a \rangle e^{\vartheta '}}. \end{aligned}$$
(3.9)

Then assume that \(k_{\mu _0 }\in {\mathcal {K}}_{\vartheta _0}\), fix some \(\vartheta _1 >\vartheta _0\), and set

$$\begin{aligned} \varTheta = \{(\vartheta , \vartheta ', t): \vartheta _0 \le \vartheta< \vartheta ' \le \vartheta _1, \quad t < T(\vartheta ', \vartheta )\}. \end{aligned}$$
(3.10)

Proposition 3.2

There exists a family of linear operators, \(\{Q_{\vartheta ' \vartheta }(t): (\vartheta , \vartheta ', t) \in \varTheta \}\), each element of which is in the corresponding \({\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '})\) and has the following properties:

  1. (i)

    the map \([0,T(\vartheta ', \vartheta )) \ni t \mapsto Q_{\vartheta ' \vartheta }(t) \in {\mathcal {L}}({\mathcal {K}}_{\vartheta }, {\mathcal {K}}_{\vartheta '})\) is continuous and \( Q_{\vartheta ' \vartheta }(0)\) is the embedding \({\mathcal {K}}_\vartheta \hookrightarrow {\mathcal {K}}_{\vartheta '}\);

  2. (ii)

    for each \(\vartheta '' \in (\vartheta , \vartheta ')\) and \(t < T(\vartheta '', \vartheta )\), the following holds

    $$\begin{aligned} \frac{d}{dt} Q_{\vartheta ' \vartheta }(t) = L^\Delta _{\vartheta '\vartheta ''} Q_{\vartheta '' \vartheta }(t). \end{aligned}$$
    (3.11)

Proof

We go along the line of arguments used in the proof of Lemma 4.5 in [7]. Take any \(T< T(\vartheta ', \vartheta )\) and then pick \(\vartheta '' \in (\vartheta , \vartheta ']\) and a positive \(\delta < \vartheta '' - \vartheta \) such that also \(T< T_\delta := T(\vartheta ''-\delta , \vartheta )\). For these values of the parameters, take \(\Pi ^{(l)}_{\vartheta ''\vartheta }\) as in (3.7) and then, for \(n\in {\mathbbm {N}}\), set

$$\begin{aligned} Q_{\vartheta '' \vartheta }^{(n)} (t) = S_{\vartheta ''\vartheta } (t) + \sum _{l=1}^n \int _0^t \int _0^{t_1} \cdots \int _0^{t_{l-1}} \Pi ^{(l)}_{\vartheta '' \vartheta } (t, t_1 , \dots , t_l ) d t_l \cdots d t_1. \end{aligned}$$
(3.12)

By (3.3) and the second estimate in (3.2) we have from (3.7) that

$$\begin{aligned} \Vert \Pi ^{(l)}_{\vartheta '' \vartheta } (t,t_1 , \dots , t_l)\Vert \le \left( \frac{l}{e T_\delta }\right) ^l, \end{aligned}$$

holding for all \(l=1, \dots , n\). By (3.12), for \(t\in [0,T)\), this yields

$$\begin{aligned} \Vert Q_{\vartheta '' \vartheta }^{(n)} (t) - Q_{\vartheta '' \vartheta }^{(n-1)} (t) \Vert&\le \int _0^t \int _0^{t_1} \cdots \int _0^{t_{n-1}}\Vert \Pi ^{(n)}_{\vartheta '' \vartheta } (t, t_1 , \dots , t_n )\Vert d t_n \cdots d t_1 \\&\le \frac{1}{n!} \left( \frac{n}{e} \right) ^n \left( \frac{T}{T_\delta }\right) ^n. \end{aligned}$$

Hence, since \((n/e)^n \le n!\) and \(T < T_\delta \), for all \(t\in [0,T]\), \(\{ Q_{\vartheta '' \vartheta }^{(n)} (t) \}_{n\in {\mathbbm {N}}}\) is a Cauchy sequence in \({\mathcal {L}}({\mathcal {K}}_{\vartheta },{\mathcal {K}}_{\vartheta ''} )\). The operator \(Q_{\vartheta '' \vartheta } (t)\) in question is then its limit. The continuity in (i) follows by the fact that the convergence to \(Q_{\vartheta '' \vartheta } (t)\) is uniform on [0, T]. Moreover, by (3.12) we have that \(Q_{\vartheta '' \vartheta }^{(n)} (0) = S_{\vartheta ''\vartheta } (0)\), cf. (3.3), which yields the stated property of \(Q_{\vartheta '' \vartheta }(0)\). Finally, (3.11) follows from (3.8) by the same arguments. \(\square \)

From (3.11) one can get that the family mentioned in Proposition 3.2 enjoys the following ‘semigroup’ property

$$\begin{aligned} Q_{\vartheta ' \vartheta } (t+s) = Q_{\vartheta ' \vartheta ''} (t) Q_{\vartheta '' \vartheta } (s), \end{aligned}$$
(3.13)

holding whenever \((\vartheta , \vartheta ', t+s)\), \((\vartheta '', \vartheta ', t)\), and \((\vartheta , \vartheta '', s)\) are in \(\varTheta \).

Now we make precise which Cauchy problem we are going to solve. Set

$$\begin{aligned} {\mathcal {D}}_\vartheta = \{ k\in {\mathcal {K}}_\vartheta : L^\Delta k \in {\mathcal {K}}_\vartheta \}, \end{aligned}$$
(3.14)

where \(L^\Delta \) is as in (2.9). This defines an unbounded linear operator \(L^\Delta _\vartheta : {\mathcal {D}}_\vartheta \rightarrow {\mathcal {K}}_\vartheta \), which is the extension of the operators \(L^\Delta _{\vartheta '' \vartheta _0}: {\mathcal {K}}_{\vartheta _0} \rightarrow {\mathcal {K}}_{\vartheta ''} \hookrightarrow {\mathcal {K}}_\vartheta \) with \(\vartheta '' \in (\vartheta _0, \vartheta )\) and all \(\vartheta _0< \vartheta \), cf. (2.14). Then we consider the Cauchy problem (2.8) in \({\mathcal {K}}_{\vartheta _1}\) with this operator \(L^\Delta _{\vartheta _1}\) and \(k_{\mu _0}\in {\mathcal {K}}_{\vartheta _0}\). By its classical solution we understand the corresponding map \(t \mapsto k_t \in {\mathcal {D}}_{\vartheta _1}\), continuously differentiable in \({\mathcal {K}}_{\vartheta _1}\).

Lemma 3.3

Let \(\vartheta _0\) and \(\vartheta _1\) be as in (3.10). Then for each \(k_{\mu _0} \in {\mathcal {K}}_{\vartheta _0}\), the problem (2.8) as described above, cf. (3.16) below, has a unique solution \(k_t\in {\mathcal {K}}_{\vartheta _1}\) with \(t \in [0, T(\vartheta _1, \vartheta _0))\) given by the formula

$$\begin{aligned} k_t = Q_{\vartheta _1 \vartheta _0}(t)k_{\mu _0}, \end{aligned}$$
(3.15)

such that \(k_t(\emptyset )=1\) for all \(t \in [0, T(\vartheta _1, \vartheta _0))\).

Proof

For each \(t< T(\vartheta _1, \vartheta _0)\), one finds \(\vartheta '' \in (\vartheta _0, \vartheta _1)\) such that also \(t< T(\vartheta '', \vartheta _0)\). By (3.11) we then get

$$\begin{aligned} \frac{d}{dt} k_t = L^\Delta _{\vartheta _1 \vartheta ''} k_t = L^\Delta _{\vartheta _1}k_t. \end{aligned}$$
(3.16)

By claim (i) of Proposition 3.2 we have that \(k_0 = k_{\mu _0}\). Moreover, \(k_t(\emptyset )=1\) for all \(t \in [0, T(\vartheta _1, \vartheta _0))\) since \(k_0 = k_{\mu _0}\), see (b) in (2.18), and

$$\begin{aligned} \left( \frac{d}{dt} k_t \right) (\emptyset ) = \left( L^\Delta _{\vartheta _1 \vartheta ''} k_t \right) (\emptyset ) = 0, \end{aligned}$$

which follows from (2.9) and (2.10). The stated uniqueness follows by the arguments used in the proof of Lemma 4.8 in [7]. \(\square \)

Remark 3.4

As in the proof of Lemma 3.3 one can show that, for each \(t\in [0, T(\vartheta _1, \vartheta _0))\), the following holds:

$$\begin{aligned} Q_{\vartheta _1 \vartheta _0}(t): {\mathcal {K}}_{\vartheta _0} \rightarrow {\mathcal {D}}_{\vartheta _1}, \end{aligned}$$

see (3.14), and

$$\begin{aligned} \qquad \frac{d}{dt} Q_{\vartheta _1 \vartheta _0}(t) = L^\Delta _{\vartheta _1} Q_{\vartheta _1 \vartheta _0}(t). \end{aligned}$$

Now we construct the evolution of functions \(G_0 \mapsto G_t\) such that, for \(k\in {\mathcal {K}}_{\vartheta }\), the following holds, cf. (2.16),

$$\begin{aligned} \int _{\Gamma _0} G_t (\eta ) k(\eta ) \lambda (d \eta ) = \int _{\Gamma _0} G_0 (\eta ) \left( Q_{\vartheta ' \vartheta } (t) k\right) (\eta ) \lambda (d \eta ). \end{aligned}$$
(3.17)

To this end, we introduce, cf. (2.12) and (2.13),

$$\begin{aligned} |G|_\vartheta= & {} \int _{\Gamma _0} |G(\eta ) | \exp \left( \vartheta |\eta |\right) \lambda (d \eta ), \\ {\mathcal {G}}_\vartheta= & {} \{G:\Gamma _0\rightarrow {\mathbbm {R}}: |G|_\vartheta < \infty \}. \nonumber \end{aligned}$$
(3.18)

Clearly, \({\mathcal {G}}_{\vartheta '} \hookrightarrow {\mathcal {G}}_{\vartheta }\) for \(\vartheta < \vartheta '\); hence, we have introduced another scale of Banach spaces, cf. (2.14). As in (3.1) and (3.3), we define the corresponding multiplication operators \(A_{\vartheta \vartheta '}\) and \(S_{\vartheta \vartheta '}(t)\), and also \(C_{\vartheta \vartheta '}\in {\mathcal {L}}({\mathcal {G}}_{\vartheta '} , {\mathcal {G}}_{\vartheta })\) which acts as

$$\begin{aligned} \left( CG\right) (\eta ) = -\sum _{x\in \eta } \left( \sum _{y\in \eta \setminus x} a(x-y) \right) G(\eta \setminus x) + \int _{{\mathbbm {R}}^d} b(x) G(\eta \cup x) d x, \end{aligned}$$

and thus satisfies, cf. (3.2),

$$\begin{aligned} \Vert C_{\vartheta \vartheta '} \Vert \le \frac{\Vert b\Vert e^{-\vartheta }+ \langle a \rangle e^{\vartheta '}}{e(\vartheta ' - \vartheta )}. \end{aligned}$$
(3.19)

Now, for the same division of \([\vartheta , \vartheta ']\) as in (3.6), we introduce, cf. (3.7),

$$\begin{aligned} \Omega ^{l}_{\vartheta \vartheta '} (t, t_1, \dots , t_l) =&\, S_{\vartheta \vartheta ^1}(t_l) C_{\vartheta ^1 \vartheta ^2} S_{\vartheta ^2\vartheta ^3}(t_{l-1}- t_l) \cdots C_{\vartheta ^{2s-1} \vartheta ^{2s}}\nonumber \\&\quad \times S_{\vartheta ^{2s}\vartheta ^{2s+1}}(t_{l-s}- t_{l-s+1}) \cdots C_{\vartheta ^{2l-1}\vartheta ^{2l}}S_{\vartheta ^{2l}\vartheta '}(t- t_{1}). \end{aligned}$$

For this \(\Omega ^{l}_{\vartheta \vartheta '}\), one can get the properties analogous to those stated in Proposition 3.1. Next, for \(n\in {\mathbbm {N}}\), we define, cf. (3.12),

$$\begin{aligned} H^{(n)}_{\vartheta \vartheta '} (t) = S_{\vartheta \vartheta '}(t) + \sum _{l=1}^n \int _0^t\int _0^{t_1} \cdots \int _0^{t_{l-1}} \Omega ^{l}_{\vartheta \vartheta '} (t, t_1, \dots , t_l) d t_l \cdots d t_1. \end{aligned}$$
(3.20)

As in the proof of Proposition 3.2, by means of (3.19) we then show that the sequence \(\{H^{(n)}_{\vartheta \vartheta '} (t)\}_{n\in {\mathbbm {N}}}\) converges in \({\mathcal {L}}({\mathcal {G}}_{\vartheta '} , {\mathcal {G}}_{\vartheta })\), uniformly on compact subsets of \([0, T(\vartheta ', \vartheta ))\). Let \(H_{\vartheta \vartheta '} (t)\) be the limit. Then, by the very construction in (3.20), it follows that, cf. (3.17),

$$\begin{aligned} \int _{\Gamma _0} \left( H_{\vartheta \vartheta '} (t) G \right) (\eta ) k(\eta ) \lambda (d \eta ) = \int _{\Gamma _0} G (\eta ) \left( Q_{\vartheta '\vartheta } (t) k\right) (\eta ) \lambda (d \eta ), \end{aligned}$$
(3.21)

holding for each \(G\in {\mathcal {G}}_{\vartheta '}\) and \(k \in {\mathcal {K}}_\vartheta \).

3.2 The identification

Our next step is based on the following statement.

Lemma 3.5

Let \(\{Q_{\vartheta '\vartheta }(t): (\vartheta , \vartheta ', t) \in \varTheta \}\) be the family as in Proposition 3.2. Then, for each \(\vartheta \) and \(\vartheta '\) and \(t\in [0, T(\vartheta ', \vartheta )/2)\), we have that \(Q_{\vartheta '\vartheta } (t): {\mathcal {K}}_{\vartheta }^\star \rightarrow {\mathcal {K}}_{\vartheta '}^\star \).

We prove this lemma in a number of steps. First we introduce auxiliary models, indexed by \(\sigma >0\), for which we construct the families of operators \(Q^\sigma _{\vartheta '\vartheta }(t)\) analogous to those in Proposition 3.2. Then we prove that these families have the property stated in Lemma 3.5. Thereafter, we show that

$$\begin{aligned} \int _{\Gamma _0} G(\eta ) k_t (\eta ) \lambda (d \eta ) = \lim _{\sigma \rightarrow 0} \int _{\Gamma _0} G(\eta ) k^\sigma _t (\eta ) \lambda (d \eta ) \ge 0, \end{aligned}$$
(3.22)

holding for each \(G\in B^\star _\mathrm{bs}(\Gamma _0)\) and \(k_t\) as in Lemma 3.3 with \(t\in [0,T(\vartheta ',\vartheta )/2)\), see (2.17). By Proposition 2.3 this yields the fact we wish to prove.

3.2.1 Auxiliary models

For \(\sigma >0\), we set

$$\begin{aligned} \varphi _\sigma (x) = \exp \left( - \sigma |x|^2\right) , \qquad b_\sigma (x) = b(x) \varphi _\sigma (x). \end{aligned}$$
(3.23)
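Note that, in contrast to b itself, the rate \(b_\sigma \) is integrable:

$$\begin{aligned} \int _{{\mathbbm {R}}^d} b_\sigma (x) d x \le \Vert b \Vert \int _{{\mathbbm {R}}^d} e^{-\sigma |x|^2} d x = \Vert b \Vert \left( \frac{\pi }{\sigma }\right) ^{d/2} < \infty , \end{aligned}$$

which is the property of this cut-off used in (3.35) below.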

Let also \(L^{\Delta ,\sigma }\) stand for \(L^\Delta \) as in (2.9) with b replaced by \(b_\sigma \). Note that \(\Vert b_\sigma \Vert \le \Vert b\Vert \). Clearly, for this \(L^{\Delta ,\sigma }\), we can perform the same construction as in the previous subsection and obtain the family \(\{Q^\sigma _{\vartheta '\vartheta }(t): (\vartheta , \vartheta ', t) \in \varTheta \}\) as in Proposition 3.2 with \(\varTheta \) and \(T(\vartheta ', \vartheta )\) given in (3.10) and (3.9), respectively. Note also that \(Q^\sigma _{\vartheta '\vartheta }(t)\) satisfy, cf. (3.11) and Remark 3.4,

$$\begin{aligned} \frac{d}{dt} Q^\sigma _{\vartheta '\vartheta }(t) = L^{\Delta , \sigma }_{\vartheta ' \vartheta ''} Q^\sigma _{\vartheta ''\vartheta }(t) =L^{\Delta , \sigma }_{\vartheta '} Q^\sigma _{\vartheta '\vartheta }(t). \end{aligned}$$
(3.24)

Like in (3.15) we then set

$$\begin{aligned} k^\sigma _t = Q^\sigma _{\vartheta _1\vartheta _0}(t) k_{\mu _0} , \qquad t < T(\vartheta _1 , \vartheta _0). \end{aligned}$$
(3.25)

Also as above, we construct the operators \(H^\sigma _{\vartheta \vartheta '}(t)\) such that, cf. (3.21),

$$\begin{aligned} \int _{\Gamma _0} \left( H^\sigma _{\vartheta \vartheta '} (t) G \right) (\eta ) k(\eta ) \lambda (d \eta ) = \int _{\Gamma _0} G (\eta ) \left( Q^\sigma _{\vartheta '\vartheta } (t) k\right) (\eta ) \lambda (d \eta ), \end{aligned}$$
(3.26)

holding for appropriate G and k.

Proposition 3.6

Assume that \(Q^\sigma _{\vartheta _1\vartheta _0}(t):{\mathcal {K}}^\star _{\vartheta _0} \rightarrow {\mathcal {K}}^\star _{\vartheta _1}\) for all \(t< T(\vartheta _1, \vartheta _0)\). Then, for all \(t < T(\vartheta _1, \vartheta _0)/2\) and \(G\in B_\mathrm{bs}(\Gamma _0)\), the convergence in (3.22) holds.

Proof

Take \(\vartheta = (\vartheta _1 + \vartheta _0)/2\) and then pick \(\vartheta ' \in (\vartheta , \vartheta _1)\) such that

$$\begin{aligned} \widetilde{T}:=\frac{1}{2} T(\vartheta _1 , \vartheta _0) \le \min \{T(\vartheta _1, \vartheta '); T(\vartheta , \vartheta _0)\}, \end{aligned}$$
(3.27)

which is possible in view of the continuous dependence of \(T(\vartheta ', \vartheta )\) on both its arguments, see (3.9). For \(t< T(\vartheta _1 , \vartheta _0)/2\), by (3.15) and (3.25) we get that

$$\begin{aligned} k_t - k^\sigma _t = \int _0^t Q_{\vartheta _1 \vartheta '}(t-s) \left( L^{\Delta }_{\vartheta '\vartheta }- L^{\Delta ,\sigma }_{\vartheta '\vartheta } \right) k^\sigma _s d s =: \int _0^t Q_{\vartheta _1 \vartheta '}(t-s) D_{\vartheta '\vartheta } k^\sigma _s d s, \end{aligned}$$
(3.28)

where, cf. (2.9) and (3.23),

$$\begin{aligned} (D k)(\eta )= \sum _{x\in \eta } \left( 1 - \varphi _\sigma (x)\right) b(x) k(\eta \setminus x), \end{aligned}$$
(3.29)

and \(k^\sigma _s\) lies in \({\mathcal {K}}_\vartheta \), which is possible since

$$\begin{aligned} s \le t < \frac{1}{2}T(\vartheta _1, \vartheta _0) \le T(\vartheta , \vartheta _0), \end{aligned}$$

see (3.27). Take \(G\in B_\mathrm{bs}(\Gamma _0)\). Since it lies in each \({\mathcal {G}}_\vartheta \), and hence in \({\mathcal {G}}_{\vartheta _1}\), we can set

$$\begin{aligned} H_{\vartheta '\vartheta _1}(t-s) G =:G_{t-s} \in {\mathcal {G}}_{\vartheta '}, \qquad t-s < T(\vartheta _1, \vartheta _0)/2, \end{aligned}$$

see (3.27). For this G, by (3.21) and (3.28) we have

$$\begin{aligned} \psi _\sigma (t) :=&\int _{\Gamma _0} G(\eta ) \left( k_t - k^\sigma _t \right) (\eta ) \lambda (d \eta ) = \int _0^t \int _{\Gamma _0} G_{t-s}(\eta ) \left( D_{\vartheta '\vartheta } k^\sigma _s \right) (\eta ) \lambda (d \eta ) d s \nonumber \\ =&\int _0^t \int _{\Gamma _0} \int _{{\mathbbm {R}}^d} \left( 1 - \varphi _\sigma (x) \right) b(x) G_{t-s}(\eta \cup x) k^\sigma _s (\eta ) d x \lambda (d \eta ) d s. \end{aligned}$$
(3.30)

To get the latter line we also used (3.29). Recall that here \(G_{t-s}\in {\mathcal {G}}_{\vartheta '}\) and \(k_s^\sigma \in {\mathcal {K}}_\vartheta \) with \(\vartheta < \vartheta '\). Let us prove that

$$\begin{aligned} g_s (x) := \int _{\Gamma _0} \frac{1}{|\eta |+1}\left| G_s (\eta \cup x)\right| \exp \left( \vartheta '|\eta |\right) \lambda (d \eta ) \end{aligned}$$

lies in \(L^1 ({\mathbbm {R}}^d)\) for each \(s\le \widetilde{T}\). Indeed, by (2.5) and (3.18) we have

$$\begin{aligned} \Vert g_s \Vert _{L^1 ({\mathbbm {R}}^d)} \le e^{-\vartheta '}\sup _{s\in [0,\widetilde{T}]} |G_s|_{\vartheta '}. \end{aligned}$$
(3.31)

We use this in (3.30) to get

$$\begin{aligned}&|\psi _\sigma (t) | \le \sup _{s\in [0,\widetilde{T}]} \Vert k^\sigma _s\Vert _\vartheta \frac{\Vert b \Vert e^{\vartheta ' - \vartheta -1} }{\vartheta '-\vartheta }\int _0^t \int _{{\mathbbm {R}}^d} g_s (x) \left( 1 - \varphi _\sigma (x)\right) d x ds \rightarrow 0,\\&\quad \mathrm{as} \ \ \sigma \rightarrow 0. \end{aligned}$$

The latter convergence follows by (3.31) and the Lebesgue dominated convergence theorem. This completes the proof. \(\square \)

3.2.2 Auxiliary evolutions

Now we turn to proving that the assumption of Proposition 3.6 holds true. For a compact \(\Lambda \), by \(\Gamma _\Lambda \) we denote the set of configurations \(\eta \) contained in \(\Lambda \). It is a measurable subset of \(\Gamma _0\), i.e., \(\Gamma _\Lambda \in {\mathcal {B}}(\Gamma )\). Recall that \({\mathcal {B}}(\Gamma )\) can be generated by the cylinder sets \(\Gamma ^{\Lambda ,n}\) with all possible compact \(\Lambda \) and \(n\in {\mathbbm {N}}_0\). Let \({\mathcal {B}}(\Gamma _\Lambda )\) denote the sub-\(\sigma \)-field of \({\mathcal {B}}(\Gamma )\) consisting of \(A\subset \Gamma _\Lambda \). For \(A\in {\mathcal {B}}(\Gamma _\Lambda )\), we set \(C_\Lambda (A) = \{ \gamma \in \Gamma : \gamma _\Lambda \in A\}\). Then, for a state \(\mu \), we define \(\mu ^\Lambda \) by setting \(\mu ^\Lambda (A) = \mu (C_\Lambda (A))\); thereby, \(\mu ^\Lambda \) is a probability measure on \({\mathcal {B}}(\Gamma _\Lambda )\). It is possible to show, see [8], that for each compact \(\Lambda \) and \(\mu \in {\mathcal {P}}_\mathrm{exp} (\Gamma )\), the measure \(\mu ^\Lambda \) has density with respect to the Lebesgue–Poisson measure defined in (2.5), which we denote by \(R_\mu ^\Lambda \). Moreover, the correlation function \(k_\mu \) and the density \(R_\mu ^\Lambda \) satisfy

$$\begin{aligned} k_\mu (\eta ) = \int _{\Gamma _\Lambda } R^\Lambda _\mu (\eta \cup \xi ) \lambda (d \xi ), \qquad \eta \in \Gamma _\Lambda . \end{aligned}$$
(3.32)
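For instance, for the homogeneous Poisson measure one has \(R^\Lambda _{\pi _\varkappa }(\eta ) = \varkappa ^{|\eta |} e^{-\varkappa \mathrm{V}(\Lambda )}\), \(\eta \in \Gamma _\Lambda \), and (3.32) indeed returns

$$\begin{aligned} \int _{\Gamma _\Lambda } \varkappa ^{|\eta \cup \xi |} e^{-\varkappa \mathrm{V}(\Lambda )} \lambda (d \xi ) = \varkappa ^{|\eta |} e^{-\varkappa \mathrm{V}(\Lambda )} \sum _{n=0}^{\infty } \frac{\left( \varkappa \mathrm{V}(\Lambda ) \right) ^n}{n!} = \varkappa ^{|\eta |} = k_{\pi _\varkappa } (\eta ). \end{aligned}$$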

Let \(\mu _0\in {\mathcal {P}}_\mathrm{exp}(\Gamma )\) be the initial state as in Lemma 3.3. Fix some compact \(\Lambda \) and \(N\in {\mathbbm {N}}\), and then, for \(\eta \in \Gamma _0\), set

$$\begin{aligned} R^{\Lambda ,N}_0 (\eta ) = \left\{ \begin{array}{ll} R^{\Lambda }_{\mu _0} (\eta ), \qquad &{}\mathrm{if} \ \ \eta \subset \Lambda \ \ \mathrm{and} \ \ |\eta |\le N;\\ 0, \qquad &{}\mathrm{otherwise.} \end{array}\right. \end{aligned}$$
(3.33)

Clearly, \(R^{\Lambda ,N}_0\in {\mathcal {G}}_\vartheta \) with any \(\vartheta \in {\mathbbm {R}}\), and \(R^{\Lambda ,N}_0(\eta ) \ge 0\) for \(\lambda \)-almost all \(\eta \in \Gamma _0\).

Let us now consider the auxiliary model specified by \(L^{\Delta , \sigma }\), and also by \(L^\sigma \) which one obtains by replacing in (1.6) b by \(b_\sigma \), see (3.23). Then the equation for the densities obtained from the Fokker–Planck equation (1.5) takes the form

$$\begin{aligned}&\frac{d}{dt} R_t (\eta ) =(L^\dagger R_t )(\eta )\nonumber \\&\quad := - \Psi _\sigma (\eta ) R_t(\eta ) + \sum _{x\in \eta } b_\sigma (x) R_t (\eta \setminus x) \nonumber \\&\qquad + \int _{{\mathbbm {R}}^d} \left( m(x) + \sum _{y\in \eta } a(x-y)\right) R_t(\eta \cup x) dx, \end{aligned}$$
(3.34)

where

$$\begin{aligned} \Psi _\sigma (\eta ) := E(\eta ) + \langle b_\sigma \rangle , \qquad \langle b_\sigma \rangle := \int _{{\mathbbm {R}}^d} b(x) \varphi _\sigma (x) d x. \end{aligned}$$
(3.35)

Set

$$\begin{aligned} {\mathcal {G}}^{+}_\vartheta = \{ G \in {\mathcal {G}}_\vartheta : G(\eta ) \ge 0 , \ \ \mathrm{for} \ \lambda -\mathrm{a.a.} \ \eta \in \Gamma _0\}, \end{aligned}$$

and also

$$\begin{aligned} {\mathcal {D}}= \{ R\in {\mathcal {G}}_0 : \Psi _\sigma R \in {\mathcal {G}}_0 \}, \qquad {\mathcal {D}}^+= {\mathcal {D}} \bigcap {\mathcal {G}}_0^+. \end{aligned}$$
(3.36)

Proposition 3.7

The operator \((L^\dagger , {\mathcal {D}})\) defined in (3.34) and (3.36) is the generator of a substochastic semigroup \(S^\dagger = \{S^\dagger (t)\}_{t \ge 0}\) on \({\mathcal {G}}_0\), which leaves invariant each \({\mathcal {G}}_\vartheta \), \(\vartheta > 0\).

Proof

In this statement we mean that

$$\begin{aligned} \forall t\ge 0 \qquad&(a)&\ \ S^\dagger (t) : {\mathcal {G}}^+_0 \rightarrow {\mathcal {D}}^+; \nonumber \\&(b)&\ \ |S^\dagger (t) R|_0 \le 1, \ \ \mathrm{whenever} \ \ |R|_0 \le 1 \ \ \mathrm{and} \ \ R\in {\mathcal {G}}^+_0; \nonumber \\&(c)&\ \ S^\dagger (t) : {\mathcal {G}}^+_\vartheta \rightarrow {\mathcal {G}}^+_\vartheta , \ \ \mathrm{for} \ \ \mathrm{all} \ \ \vartheta >0. \end{aligned}$$
(3.37)

We use the Thieme–Voigt theorem in the form of [10, Propositions 3.1 and 3.2]. By this theorem the proof amounts to checking the validity of the following inequalities:

$$\begin{aligned}&\forall R\in {\mathcal {D}}^+ \qquad \int _{\Gamma _0} \left( L^\dagger R \right) (\eta ) \lambda ( d\eta ) \le 0,\nonumber \\&\quad \forall \vartheta >0 \qquad (L^\sigma G_\vartheta )(\eta ) + \varepsilon \Psi _\sigma (\eta ) \le C G_\vartheta (\eta ), \qquad G_\vartheta (\eta ) := e^{\vartheta |\eta |}, \end{aligned}$$
(3.38)

holding for some positive C and \(\varepsilon \). Recall that \(\Psi _\sigma \) is defined in (3.35). By direct inspection we get from (3.34) that the left-hand side of the first line in (3.38) equals zero for each \(R\in {\mathcal {D}}\). Proving the second inequality in (3.38) reduces to showing that, for each \(\vartheta >0\), the function

$$\begin{aligned} \varSigma (\eta ) := - E (\eta ) (1 - e^{-\vartheta } ) +\langle b_\sigma \rangle (e^\vartheta -1) + \varepsilon \Psi _\sigma (\eta ) e^{-\vartheta |\eta |}, \qquad \eta \in \Gamma _0, \end{aligned}$$

is bounded from above, which is obviously the case. \(\square \)

The second auxiliary evolution is constructed in \({\mathcal {G}}_\vartheta \). It is generated by the operator \(\widehat{L}_\vartheta \), the action of which coincides with that of \(L^{\Delta , \sigma }\), see (2.9) and (2.10) with b replaced by \(b_\sigma \). The domain of this operator is

$$\begin{aligned} \widehat{{\mathcal {D}}}_\vartheta = \{ q \in {\mathcal {G}}_\vartheta : \Psi _\sigma (\cdot ) q \in {\mathcal {G}}_\vartheta \}. \end{aligned}$$
(3.39)

Proposition 3.8

For each \(\vartheta >0\), the operator \((\widehat{L}_\vartheta ,\widehat{{\mathcal {D}}}_\vartheta )\) is the generator of a \(C_0\)-semigroup \(\widehat{S}_\vartheta := \{\widehat{S}_\vartheta (t)\}_{t\ge 0}\) of bounded operators on \({\mathcal {G}}_\vartheta \).

Proof

As in the proof of Lemma 5.5 in [7], we pass from q to w by setting \(w(\eta ) = (-1)^{|\eta |} q(\eta )\), and hence to \(\widetilde{L}_\vartheta \) defined on the same domain (3.39) by the relation \((\widetilde{L}_\vartheta w)(\eta ) = (-1)^{|\eta |}(\widehat{L}_\vartheta q)(\eta )\). Then we just prove that \((\widetilde{L}_\vartheta , \widehat{{\mathcal {D}}}_\vartheta )\) generates a \(C_0\)-semigroup on \({\mathcal {G}}_\vartheta \). In view of (2.9) and (2.10), we have

$$\begin{aligned} \widetilde{L}_\vartheta= & {} \widetilde{A} + \widetilde{B} + \widetilde{C} \nonumber \\ (\widetilde{A} w)(\eta )= & {} - E (\eta ) w(\eta ), \quad (\widetilde{B} w)(\eta ) = \int _{{\mathbbm {R}}^d} \left( \sum _{y\in \eta } a(x-y) \right) w(\eta \cup x) d x ,\nonumber \\ (\widetilde{C} w)(\eta )= & {} - \sum _{x\in \eta } b_\sigma (x) w(\eta \setminus x). \end{aligned}$$

By (3.18) we get

$$\begin{aligned} |\widetilde{C} w|_\vartheta \le e^\vartheta \langle b_\sigma \rangle |w|_\vartheta , \end{aligned}$$

hence \(\widetilde{C}\) is a bounded operator. For \(w\in {\mathcal {G}}_\vartheta ^{+}\), we have

$$\begin{aligned} |\widetilde{B} w|_\vartheta =&\int _{\Gamma _0}e^{\vartheta |\eta |}\left( \int _{{\mathbbm {R}}^d} \left( \sum _{y\in \eta } a (x-y) \right) w(\eta \cup x) d x \right) \lambda ( d\eta ) \\ =&\int _{\Gamma _0}e^{\vartheta (|\eta |-1)} \left( \sum _{x\in \eta }\sum _{y\in \eta \setminus x} a(x-y) \right) w(\eta ) \lambda (d\eta ) \\ \le&\, e^{-\vartheta } \int _{\Gamma _0}e^{\vartheta |\eta |} E (\eta ) w(\eta ) \lambda ( d\eta ) = e^{-\vartheta } |\widetilde{A} w|_\vartheta < |\widetilde{A} w|_\vartheta . \end{aligned}$$

The latter estimate allows us to apply the Thieme–Voigt theorem, see [10, Proposition 3.1], by which \(\widetilde{A} +\widetilde{B}\) generates a substochastic semigroup in \({\mathcal {G}}_\vartheta \). Thus, \(\widetilde{L}_\vartheta \) generates a \(C_0\)-semigroup since \(\widetilde{C}\) is bounded. This completes the proof. \(\square \)

Now for \(R_0^{\Lambda , N}\) defined in (3.33), we set

$$\begin{aligned} q^{\Lambda ,N}_0 (\eta ) = \int _{\Gamma _0} R^{\Lambda ,N}_0 (\eta \cup \xi ) \lambda (d \xi ), \qquad \eta \in \Gamma _0. \end{aligned}$$
(3.40)

By (3.32)

$$\begin{aligned} 0 \le q^{\Lambda ,N}_0 (\eta ) \le k_{\mu _0} (\eta ). \end{aligned}$$
(3.41)

Hence, \(q^{\Lambda ,N}_0\in {\mathcal {K}}_{\vartheta _0}\). By (3.33) \(R_0^{\Lambda ,N}\) lies in each \({\mathcal {G}}_\vartheta \), \(\vartheta \ge 0\). At the same time,

$$\begin{aligned} |q_0^{\Lambda ,N}|_{\vartheta }= & {} \int _{\Gamma _0}\int _{\Gamma _0} e^{\vartheta |\eta |} R_0^{\Lambda ,N}(\eta \cup \xi ) \lambda (d\eta ) \lambda (d \xi ) \nonumber \\= & {} \int _{\Gamma _0}\left( \sum _{\eta \subset \xi } e^{\vartheta |\eta |}\right) R_0^{\Lambda ,N}(\xi ) \lambda (d\xi ) = |R_0^{\Lambda ,N}|_\beta , \end{aligned}$$

where \(\beta >0\) satisfies \(e^\beta = 1 + e^\vartheta \); here we have used that \(\sum _{\eta \subset \xi } e^{\vartheta |\eta |} = (1+e^\vartheta )^{|\xi |}\). Hence, \(q_0^{\Lambda ,N}\in {\mathcal {G}}_\vartheta \) for each \(\vartheta >0\). In view of this, \(q_0^{\Lambda ,N}\in \widehat{{\mathcal {D}}}_\vartheta \) for each \(\vartheta >0\), see (3.39). Consider the problem in \({\mathcal {G}}_\vartheta \)

$$\begin{aligned} \frac{d}{dt} q_t = \widehat{L}_\vartheta q_t, \qquad q_t|_{t=0} = q_0^{\Lambda ,N}. \end{aligned}$$
(3.42)

Proposition 3.9

For each \(\vartheta >0\), the problem in (3.42) has a unique global solution \(q_t \in \widehat{{\mathcal {D}}}_\vartheta \) such that, for each \(G\in B_\mathrm{bs}^\star (\Gamma _0)\), the following holds

$$\begin{aligned} \int _{\Gamma _0} G(\eta ) q_t (\eta ) \lambda (d \eta ) \ge 0. \end{aligned}$$
(3.43)

Proof

By Proposition 3.8 the problem in (3.42) has a unique global solution given by

$$\begin{aligned} q_t = \widehat{S}_\vartheta (t) q^{\Lambda ,N}_0. \end{aligned}$$
(3.44)

On the other hand, this solution can be sought in the form

$$\begin{aligned} q_t(\eta ) = \int _{\Gamma _0} \left( S^\dagger (t) R_0^{\Lambda ,N} \right) (\eta \cup \xi ) \lambda (d \xi ), \end{aligned}$$
(3.45)

where \(S^\dagger \) is the semigroup constructed in Proposition 3.7. Indeed, by direct inspection one verifies that \(q_t\) in this form satisfies (3.42), cf. the proof of Lemma 5.8 in [7]. Then, cf. (2.15),

$$\begin{aligned} \int _{\Gamma _0} G(\eta ) q_t (\eta ) \lambda (d \eta ) = \int _{\Gamma _0} (K G)(\xi ) \left( S^\dagger (t) R^{\Lambda ,N}_0 \right) (\xi ) \lambda (d \xi ) \ge 0, \end{aligned}$$
(3.46)

which yields (3.43). The inequality in (3.46) follows by the fact that the semigroup \(S^\dagger \) is substochastic, see (3.37). This completes the proof. \(\square \)

By (3.41) it follows that \(q^{\Lambda ,N}_0 \in {\mathcal {K}}_{\vartheta _0}\), hence we may use it in (3.25) and obtain

$$\begin{aligned} k_t^{\Lambda ,N} = Q^\sigma _{\vartheta _1 \vartheta _0}(t)q^{\Lambda ,N}_0, \qquad t \in [0,T(\vartheta _1, \vartheta _0)). \end{aligned}$$
(3.47)

Proposition 3.10

Let \(k_t^{\Lambda ,N}\) and \(q_t\) be as in (3.47) and in (3.44), (3.45), respectively. Then, for all \(t \in [0,T(\vartheta _1, \vartheta _0))\), it follows that \(k_t^{\Lambda ,N}=q_t\).

Proof

A priori \(k_t^{\Lambda ,N}\) and \(q_t\) lie in different spaces: \({\mathcal {K}}_{\vartheta _1}\) and \({\mathcal {G}}_\vartheta \), respectively. Note that the latter \(\vartheta \) can be an arbitrary positive number. The idea is to construct one more evolution \(q^{\Lambda ,N}_0\mapsto u_t\) in some intersection of these two spaces, related to the evolutions in (3.47) and (3.44). Then the proof will follow by the uniqueness as in Proposition 3.9.

For \(\vartheta \in {\mathbbm {R}}\), \(\varphi _\sigma \) as in (3.23) and \(u:\Gamma _0 \rightarrow {\mathbbm {R}}\), we set, cf. (2.13) and (2.6),

$$\begin{aligned} \Vert u\Vert _{\sigma , \vartheta } = {{\mathrm{{\mathrm {ess\,sup}}}}}_{\eta \in \Gamma _0} \frac{|u(\eta )|\exp \left( - \vartheta |\eta |\right) }{e(\eta ;\varphi _\sigma )} , \qquad e(\eta ;\varphi _\sigma ):=\prod _{x\in \eta } \varphi _\sigma (x), \end{aligned}$$
(3.48)

and then \({\mathcal {U}}_{\sigma , \vartheta }:= \{ u:\Gamma _0 \rightarrow {\mathbbm {R}}: \Vert u\Vert _{\sigma , \vartheta } < \infty \}\). Clearly,

$$\begin{aligned} {\mathcal {U}}_{\sigma , \vartheta } \hookrightarrow {\mathcal {K}}_{\vartheta }, \qquad \vartheta \in {\mathbbm {R}}, \end{aligned}$$
(3.49)

since \(\Vert u\Vert _\vartheta \le \Vert u\Vert _{\sigma , \vartheta }\). Moreover, as in (2.14) we have that \( {\mathcal {U}}_{\sigma , \vartheta } \hookrightarrow {\mathcal {U}}_{\sigma , \vartheta '}\) for \(\vartheta ' > \vartheta \). Let \(L^{\Delta , \sigma }\) be defined as in (2.9) with \(b\) replaced by \(b_\sigma \). Then we define an unbounded linear operator \(L^{\Delta , \sigma }_{\vartheta ,u}: {\mathcal {D}}^{\Delta , \sigma }_{\vartheta ,u} \rightarrow {\mathcal {U}}_{\sigma , \vartheta }\) with the action just described and the domain

$$\begin{aligned} {\mathcal {D}}^{\Delta , \sigma }_{\vartheta ,u}=\{ u\in {\mathcal {U}}_{\sigma , \vartheta } : L^{\Delta , \sigma } u \in {\mathcal {U}}_{\sigma , \vartheta }\}. \end{aligned}$$
(3.50)

Clearly, \({\mathcal {U}}_{\sigma , \vartheta ''} \subset {\mathcal {D}}^{\Delta , \sigma }_{\vartheta ,u}\) for each \(\vartheta '' < \vartheta \). By (3.33) and (3.40) it follows that \(q^{\Lambda ,N}_0 (\eta ) = 0\) if \(|\eta |>N\) or if \(\eta \) is not contained in \(\Lambda \). Then \(q^{\Lambda ,N}_0\) lies in each \({\mathcal {U}}_{\sigma , \vartheta ''}\), \(\vartheta ''\in {\mathbbm {R}}\), and hence in the domain of \(L^{\Delta , \sigma }_{\vartheta ,u}\) given in (3.50). Thus, we can consider

$$\begin{aligned} \frac{d}{dt} u_t = L^{\Delta , \sigma }_{\vartheta ,u} u_t, \qquad u_t|_{t=0} = q^{\Lambda ,N}_0. \end{aligned}$$
(3.51)

As in (3.1) we write \(L^{\Delta , \sigma }_{\vartheta ,u} = A^{\sigma ,u}+B^{\sigma ,u}\), where \(A^{\sigma ,u}\) is the operator of multiplication by \(-E(\eta )\). The operator norm of \(B^{\sigma ,u}\) can be estimated as follows. By (3.48) we have

$$\begin{aligned} |u(\eta )| \le \Vert u\Vert _{\sigma , \vartheta } \exp \left( \vartheta |\eta |\right) \prod _{x\in \eta }\varphi _\sigma (x), \end{aligned}$$

which yields

$$\begin{aligned} \left| \left( B^{\sigma ,u}u\right) (\eta ) \right| \le \Vert u\Vert _{\sigma , \vartheta } |\eta | \exp \left( \vartheta |\eta |\right) \left( \Vert b\Vert e^{-\vartheta } + \langle a \rangle e^\vartheta \right) \prod _{x\in \eta }\varphi _\sigma (x). \end{aligned}$$

Hence, the operator norm of \(B^{\sigma ,u}_{\vartheta '\vartheta }\in {\mathcal {L}}( {\mathcal {U}}_{\sigma , \vartheta }, {\mathcal {U}}_{\sigma , \vartheta '})\) satisfies

$$\begin{aligned} \Vert B^{\sigma ,u}_{\vartheta '\vartheta }\Vert \le \frac{\Vert b\Vert e^{-\vartheta } + \langle a \rangle e^{\vartheta '}}{e(\vartheta ' - \vartheta )}, \end{aligned}$$

which coincides with the bound in (3.2); here we have used the elementary estimate \(\sup _{n\in {\mathbbm {N}}_0} n e^{-(\vartheta ' - \vartheta )n} \le \left[ e (\vartheta ' - \vartheta )\right] ^{-1}\). Then we repeat the construction made in Propositions 3.1, 3.2 and Lemma 3.3 and obtain the solution of (3.51) in the form

$$\begin{aligned} u_t = Q^{\sigma ,u}_{\vartheta _1 \vartheta _0}(t) q^{\Lambda ,N}_0, \qquad t\in [0, T(\vartheta _1, \vartheta _0)), \end{aligned}$$

where \(T(\vartheta _1, \vartheta _0)\) is as in (3.9) whereas \(Q^{\sigma ,u}_{\vartheta _1 \vartheta _0}(t)\) satisfies, cf. (3.11) and Remark 3.4,

$$\begin{aligned} \frac{d}{dt} Q^{\sigma ,u}_{\vartheta _1 \vartheta _0}(t) = \left( A^{\sigma ,u}_{\vartheta _1 \vartheta '} +B^{\sigma ,u}_{\vartheta _1 \vartheta '} \right) Q^{\sigma ,u}_{\vartheta ' \vartheta _0}(t) = L^{\Delta , \sigma }_{\vartheta _1,u} Q^{\sigma ,u}_{\vartheta _1 \vartheta _0}(t). \end{aligned}$$

Since \((L^{\Delta , \sigma }_{\vartheta _1,u}, {\mathcal {D}}^{\Delta , \sigma }_{\vartheta _1,u}) \subset (L^{\Delta , \sigma }_{\vartheta _1}, {\mathcal {D}}^{\Delta , \sigma }_{\vartheta _1})\), and in view of (3.24) and (3.47), (3.49), we have that

$$\begin{aligned} \forall t\in [0, T(\vartheta _1, \vartheta _0)) \qquad k^{\Lambda ,N}_t = u_t. \end{aligned}$$
(3.52)

On the other hand, for \(\vartheta >0\) and \(u\in {\mathcal {U}}_{\sigma , \vartheta '}\), by (3.48) we get

$$\begin{aligned}&\int _{\Gamma _0}|u(\eta )| e^{\vartheta |\eta |} \lambda (d \eta ) \le \Vert u\Vert _{\sigma , \vartheta '} \int _{\Gamma _0} \exp \left( (\vartheta ' + \vartheta )|\eta |\right) e(\eta ; \varphi _\sigma ) \lambda ( d \eta )\nonumber \\&\quad = \Vert u\Vert _{\sigma , \vartheta '} \exp \left( \langle \varphi _\sigma \rangle e^{\vartheta + \vartheta '}\right) , \qquad \langle \varphi _\sigma \rangle := \int _{{\mathbbm {R}}^d} \varphi _\sigma (x) d x. \end{aligned}$$

Thus, \({\mathcal {U}}_{\sigma , \vartheta '}\hookrightarrow {\mathcal {G}}_{ \vartheta }\) for each \(\vartheta '\in {\mathbbm {R}}\) and \(\vartheta \ge 0\). Likewise, one shows that \({\mathcal {D}}^{\Delta , \sigma }_{\vartheta ',u}\hookrightarrow \widehat{{\mathcal {D}}}_\vartheta \), see (3.39). Since the action of \(\widehat{L}\) coincides with that of \(L^{\Delta ,\sigma }\), by the latter embedding we have that \((L^{\Delta , \sigma }_{\vartheta _1,u}, {\mathcal {D}}^{\Delta , \sigma }_{\vartheta _1,u}) \subset (\widehat{L}_{\vartheta }, \widehat{{\mathcal {D}}}_{\vartheta })\), holding for each \(\vartheta >0\). Then by the uniqueness stated in Proposition 3.9 we conclude that \(q_t = u_t\) for all \(t \in [0,T(\vartheta _1, \vartheta _0))\). In view of (3.52), this yields \(k_t^{\Lambda ,N} = q_t\), which completes the proof. \(\square \)

3.2.3 Proof of Lemma 3.5

We have to show that the assumption of Proposition 3.6 holds true for each \(\sigma >0\), which is equivalent to proving that \(k^\sigma _t\) given in (3.25) has the property

(3.53)

holding for all \(t < T(\vartheta _1, \vartheta _0)\) and \(G_0\in B^{\star }_\mathrm{bs}(\Gamma _0)\). Recall that a cofinal sequence \(\{\Lambda _n\}_{n\in {\mathbbm {N}}}\) is a sequence of compact subsets \(\Lambda _n \subset {\mathbbm {R}}^d\) such that \(\Lambda _n \subset \Lambda _{n+1}\) for all \(n\in {\mathbbm {N}}\) and each \(x \in {\mathbbm {R}}^d\) is contained in some \(\Lambda _n\). Let \(\{\Lambda _n\}_{n\in {\mathbbm {N}}}\) be such a sequence. Fix \(\sigma >0\) and then, for given \(\Lambda _n\) and \(N\in {\mathbbm {N}}\), obtain \(q_0^{\Lambda _n,N}\) from \(k_{\mu _0}\in {\mathcal {K}}_{\vartheta _0}\) by (3.33), (3.40). As in [1, Appendix] one can show that, for each \(G\in {\mathcal {G}}_{\vartheta _0}\), the following holds

(3.54)

Let \(G_0\) be as in (3.53) and hence lie in any \({\mathcal {G}}_\vartheta \). For \(t\in [0,T(\vartheta _1, \vartheta _0))\) and \(k_t^{\Lambda _n,N}\) as in (3.47), by (3.26) we get

(3.55)

The latter inequality follows from Proposition 3.10 and (3.46). Then (3.53) follows from (3.54) and (3.55).

3.3 Proof of Theorem 2.4

To complete the proof of the theorem, we have to continue the solution (3.15) to all \(t\ge 0\) and prove the upper bound in (2.27). The lower bound follows from the fact that \(k_t \in {\mathcal {K}}^\star \). This will be done by comparing \(k_t\) with the solution of equation (2.8) for the Surgailis model given in (2.21). If we denote the latter by \(v_t\), then

$$\begin{aligned} v_t (\eta ) = (W(t) k_{\mu _0})(\eta ) := \sum _{\xi \subset \eta }e(\xi ; \phi _t) e(\eta \setminus \xi ; \psi _t) k_{\mu _0}(\eta \setminus \xi ). \end{aligned}$$
(3.56)

For \(k_{\mu _0}\in {\mathcal {K}}_{\vartheta _0}\), by (2.13) and (1.7) we get from the latter

$$\begin{aligned} v_t (\eta ) \le \Vert k_{\mu _0}\Vert _{\vartheta _0} \exp \left( \left[ \vartheta _0 + \log \left( 1 + t \Vert b\Vert e^{-\vartheta _0} \right) \right] |\eta | \right) , \end{aligned}$$
(3.57)

which holds also in the case \(m\equiv 0\). Thus, for a given \(T>0\), \(W(t)\) with \(t\in [0,T]\) acts as a bounded operator \(W_{\vartheta _T \vartheta _0}(t)\) from \({\mathcal {K}}_{\vartheta _0}\) to \({\mathcal {K}}_{\vartheta _T}\) with

$$\begin{aligned} \vartheta _T := \vartheta _0 + \log \left( 1 + T \Vert b\Vert e^{-\vartheta _0} \right) . \end{aligned}$$
(3.58)
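For the reader's convenience, we note that the passage from (3.56) to (3.57) essentially amounts to estimating the three factors in each summand of (3.56) termwise by means of (2.13) and (1.7), and then applying the elementary binomial identity, valid for all \(c_1, c_2 \ge 0\),

$$\begin{aligned} \sum _{\xi \subset \eta } c_1^{|\xi |}\, c_2^{|\eta \setminus \xi |} = \left( c_1 + c_2\right) ^{|\eta |}, \end{aligned}$$

with \(c_1\), \(c_2\) standing for \(e^{\vartheta _0}\) and \(t\Vert b\Vert \), since \(\left( e^{\vartheta _0} + t \Vert b\Vert \right) ^{|\eta |} = \exp \left( \left[ \vartheta _0 + \log \left( 1 + t \Vert b\Vert e^{-\vartheta _0}\right) \right] |\eta |\right) \).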

For \(\vartheta \in {\mathbbm {R}}\), we set, cf. (3.9),

$$\begin{aligned} \tau (\vartheta ) = T(\vartheta +1 , \vartheta ) = \left[ \Vert b \Vert e^{-\vartheta } + \langle a \rangle e^\vartheta \right] ^{-1}. \end{aligned}$$
(3.59)

For \(\vartheta _1 = \vartheta _0 + 1\), let \(k_t\) be given in (3.15) with \(t \in [0, \tau (\vartheta _0))\). Fix some \(\kappa \in (0,1/2)\) and set \(T_1 = \kappa \tau (\vartheta _0)\). By Lemmas 3.3 and 3.5 we know that \(k_t = Q_{\vartheta _1 \vartheta _0} (t) k_{\mu _0}\) exists and lies in \({\mathcal {K}}_{\vartheta _1}^\star \) for all \(t\in [0,T_1]\). Take \(\vartheta \in (\vartheta _0 , \vartheta _0+1)\) such that \(T_1 < T(\vartheta , \vartheta _0)\). Then take \(\vartheta '>\vartheta \) and set, cf. (3.58),

$$\begin{aligned} \tilde{\vartheta }_1 = \max \left\{ \vartheta _0 + 1; \vartheta ' + \log \left( 1 + T_1 \Vert b\Vert e^{-\vartheta '}\right) \right\} . \end{aligned}$$

For \(t\in [0,T_1]\), we have

$$\begin{aligned} v_t - k_t = \int _0^t W_{\tilde{\vartheta }_1 \vartheta '}(t-s) D_{\vartheta ' \vartheta } k_s ds , \end{aligned}$$
(3.60)

where \(k_s\) belongs to \({\mathcal {K}}_\vartheta \), whereas \(v_t\) and \(k_t\) belong to \({\mathcal {K}}_{\tilde{\vartheta }_1}\). By (3.56) and (3.1) the action of \(D\) in (3.60) is

$$\begin{aligned} (D k)(\eta ) = \left( \sum _{x\in \eta }\sum _{y\in \eta \setminus x} a(x-y)\right) k(\eta ) + \int _{{\mathbbm {R}}^d} \left( \sum _{y\in \eta \setminus x} a(x-y)\right) k(\eta \cup x) d x, \end{aligned}$$

hence, \(v_t (\eta ) - k_t (\eta ) \ge 0\) for \(\lambda \)-almost all \(\eta \in \Gamma _0\) since \(W(t)\) is positive, see (3.56) and (2.22), and \(k_s \in {\mathcal {K}}_\vartheta ^\star \subset {\mathcal {K}}_\vartheta ^+\), see (2.19), (2.20), and Lemma 3.5. Since \(k_t\) in (3.60) is in \({\mathcal {K}}^\star \), we have that

$$\begin{aligned} 0 \le k_t \le v_t, \qquad t \in [0, T_1], \end{aligned}$$
(3.61)

which by (3.57) yields \(k_t \in {\mathcal {K}}_{\vartheta _{T_1}}\) and the bound in (2.27) for such \(t\), see (3.58). Set \(T_2 = \kappa \tau (\vartheta _{T_1})\), \(\vartheta _2 = \vartheta _{T_1} +1\) and consider \(k^{(2)}_t = Q_{\vartheta _2\vartheta _{T_1}}(t) k_{T_1}\) with \(t\in [0,T_2]\). Clearly, \(k^{(2)}_t = k_{T_1 + t}\) for \(T_1 + t< T(\vartheta _0+1, \vartheta )\), see (3.13), and hence it is a continuation of \(k_t\) to the interval \([T_1, T_1+T_2]\). Now we repeat this procedure as many times as needed and obtain

$$\begin{aligned} k^{(n)}_t = Q_{\vartheta _n \vartheta _{T_{n-1}}} (t) k_{T_{n-1}}, \end{aligned}$$

where \(\vartheta _n = \vartheta _{T_{n-1}} +1\) and

$$\begin{aligned} T_n =&\,\, \kappa \tau (\vartheta _{T_{n-1}}) \nonumber \\ \vartheta _{T_n} =&\,\, \vartheta _{T_{n-1}} + \log \left( 1 + T_{n}\Vert b\Vert e^{-\vartheta _{T_{n-1}}}\right) , \quad \vartheta _{T_0} := \vartheta _0. \end{aligned}$$
(3.62)

The continuation to all \(t>0\) will be obtained if we show that \(\sum _{n\ge 1} T_n = +\infty \). Assume that this is not the case. From the second line in (3.62) we get \(T_{n} = (e^{\vartheta _{T_n}} - e^{\vartheta _{T_{n-1}}})/\Vert b\Vert \). Hence

$$\begin{aligned} \sum _{n=1}^N T_n = \left( e^{\vartheta _{T_N}} - e^{\vartheta _0}\right) / \Vert b\Vert . \end{aligned}$$

Thus, the mentioned series can converge only if the sequence \(\{\vartheta _{T_n}\}_{n\in {\mathbbm {N}}}\) is bounded, say by \(\bar{\vartheta }\). However, in this case \(T_n\) cannot tend to zero as \(n\rightarrow +\infty \), which would be necessary for the convergence, since the first line in (3.62) yields

$$\begin{aligned} T_n = \kappa \tau (\vartheta _{T_{n-1}}) \ge \kappa \left[ \langle a \rangle e^{\bar{\vartheta }} + \Vert b\Vert e^{-\vartheta _0} \right] ^{-1} > 0, \end{aligned}$$

see (3.59). Clearly, the analog of the upper bound in (3.61) holds on each of the consecutive intervals constructed above. This completes the proof.
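The divergence of \(\sum _{n\ge 1} T_n\) can also be observed numerically. The following sketch (an illustration only, not part of the argument; the values chosen for \(\Vert b\Vert \), \(\langle a \rangle \), \(\kappa \) and \(\vartheta _0\) are hypothetical) iterates the recursion (3.62) and prints the partial sums, which keep growing without any sign of saturation.

```python
import math

# Illustration only: iterate the recursion (3.62) and watch the partial sums
# of T_n grow.  All parameter values below are hypothetical.
b_norm, a_int, kappa, theta = 1.0, 1.0, 0.4, 0.0

def tau(th):
    # tau(theta) = [ ||b|| e^{-theta} + <a> e^{theta} ]^{-1}, cf. (3.59)
    return 1.0 / (b_norm * math.exp(-th) + a_int * math.exp(th))

total = 0.0
for n in range(1, 100001):
    T_n = kappa * tau(theta)                                   # first line of (3.62)
    theta += math.log(1.0 + T_n * b_norm * math.exp(-theta))   # second line of (3.62)
    total += T_n
    if n in (10, 100, 1000, 10000, 100000):
        print(n, round(total, 2))  # grows roughly like sqrt(n)
```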

4 The global boundedness

Here we prove Theorem 2.5. In the nontrivial case \(a(0)>0\), see Assumption 1.1, let \(\varDelta \) be a cubic cell containing the origin such that

$$\begin{aligned} \inf _{x\in \varDelta }a (x) =: a_\varDelta >0, \end{aligned}$$
(4.1)

which is possible since a is continuous. For \(\eta \) contained in a translate of \(\varDelta \), \(|\eta |\ge 2\), and \(x\in \eta \), we then have

$$\begin{aligned} \sum _{y \in \eta \setminus x} a (x-y) \ge a_\varDelta \left( |\eta |-1\right) \ge a_\varDelta . \end{aligned}$$
(4.2)

For a translate of \(\varDelta \), we consider the observables \(N_\varDelta ^n: \Gamma \rightarrow {\mathbbm {N}}_0\) defined as follows: \(N^n_\varDelta (\gamma ) =|\gamma _\varDelta |^n\), \(n\in {\mathbbm {N}}\). Then

$$\begin{aligned} N_\varDelta (\gamma )= & {} \sum _{x\in \gamma } I_\varDelta (x), \nonumber \\ N^n_\varDelta (\gamma )= & {} \sum _{l=1}^n l! S(n,l) \sum _{\{x_1 , \dots , x_l\} \subset \gamma } I_\varDelta (x_1) \cdots I_\varDelta (x_l), \quad n\ge 2, \end{aligned}$$
(4.3)

where \(I_\varDelta \) is the indicator function of \(\varDelta \) and \(S(n,l)\) is the Stirling number of the second kind, equal to the number of distinct ways of partitioning \(n\) labeled items into \(l\) unlabeled groups. It has the following representation, cf. [13],

$$\begin{aligned} S(n,l) = \frac{1}{l!}\sum _{s=0}^l \left( -1\right) ^{l-s} \left( {\begin{array}{c}l\\ s\end{array}}\right) s^n. \end{aligned}$$
(4.4)
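Although not needed for the proof, the representation (4.4) and the identity encoded in (4.3), namely \(k^n = \sum _{l=1}^n S(n,l)\, k(k-1)\cdots (k-l+1)\) with \(k=|\gamma _\varDelta |\), can be checked numerically; a minimal sketch (illustration only, assuming nothing beyond (4.3) and (4.4)):

```python
from math import comb, factorial

# Illustration only: check (4.4) against k^n = sum_l S(n,l) * k(k-1)...(k-l+1),
# the identity behind (4.3), with k playing the role of |gamma_Delta|.
def stirling2(n, l):
    # explicit formula (4.4); the sum is an integer multiple of l!
    return sum((-1) ** (l - s) * comb(l, s) * s ** n for s in range(l + 1)) // factorial(l)

def falling(k, l):
    # k(k-1)...(k-l+1), i.e. the number of ordered l-tuples of distinct points
    out = 1
    for j in range(l):
        out *= k - j
    return out

for n in range(1, 8):
    for k in range(0, 12):
        assert k ** n == sum(stirling2(n, l) * falling(k, l) for l in range(1, n + 1))
```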

Then, for \(\mu \in {\mathcal {P}}_\mathrm{exp}(\Gamma )\), by (2.6) we have that

$$\begin{aligned} \mu (N_\varDelta ^n) = \sum _{l=1}^n S(n,l) \int _\varDelta \cdots \int _\varDelta k^{(l)}_\mu (x_1 , \dots , x_l) d x_1 \cdots d x_l. \end{aligned}$$
(4.5)

For \(l\in {\mathbbm {N}}\), we set

$$\begin{aligned} F_\varDelta ^{(l)} (\gamma )= & {} \sum _{\{x_1 , \dots , x_l\} \subset \gamma } I_\varDelta (x_1) \cdots I_\varDelta (x_l)\nonumber \\= & {} \frac{1}{l!} N_\varDelta (\gamma ) \left( N_\varDelta (\gamma ) - 1\right) \cdots \left( N_\varDelta (\gamma ) - l+1\right) . \end{aligned}$$
(4.6)

We also set \(F_\varDelta ^{(0)} (\gamma )\equiv 1\). Then (4.3) can be rewritten as

$$\begin{aligned} N^n_\varDelta (\gamma ) = \sum _{l=1}^n l! S(n,l) F_\varDelta ^{(l)} (\gamma ). \end{aligned}$$
(4.7)

An easy calculation yields

$$\begin{aligned} F_\varDelta ^{(l)} (\gamma \cup x) - F_\varDelta ^{(l)} (\gamma ) = I_\varDelta (x) F_\varDelta ^{(l-1)} (\gamma ). \end{aligned}$$
(4.8)
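Indeed, assuming \(x\notin \gamma \), the \(l\)-element subsets of \(\gamma \cup x\) entering the first line of (4.6) split into those not containing \(x\) and those containing it, whence

$$\begin{aligned} F_\varDelta ^{(l)} (\gamma \cup x) = F_\varDelta ^{(l)} (\gamma ) + I_\varDelta (x)\, F_\varDelta ^{(l-1)} (\gamma ), \end{aligned}$$

which is (4.8).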

For \(\mu _t\) as in Theorem 2.4, we set

$$\begin{aligned} q_\varDelta ^{(0)} (t) \equiv 1, \qquad q_\varDelta ^{(l)} (t) = \mu _t (F_\varDelta ^{(l)}), \qquad l\in {\mathbbm {N}}. \end{aligned}$$
(4.9)

By (4.5), (4.6) and (4.7) it follows that

$$\begin{aligned} q_\varDelta ^{(l)} (0) = \frac{1}{l!} \int _\varDelta \cdots \int _\varDelta k^{(l)}_{\mu _0} (x_1 , \dots , x_l) d x_1 \cdots d x_l. \end{aligned}$$
(4.10)

Since \(\mu _0\) is in \({\mathcal {P}}_\mathrm{exp} (\Gamma )\), one finds \(\vartheta \in {\mathbbm {R}}\) such that \(k^{(l)}_{\mu _0} (x_1, \dots , x_l) \le e^{\vartheta l}\) for all \(l\in {\mathbbm {N}}\), cf. (2.3). By (4.10) this yields

$$\begin{aligned} q_\varDelta ^{(l)} (0) \le \left[ \mathrm{V}(\varDelta ) e^\vartheta \right] ^l/l!, \qquad l\in {\mathbbm {N}}. \end{aligned}$$
(4.11)

Recall that \(a_\varDelta \) is defined in (4.1), see also (4.2). Set

$$\begin{aligned} b_\varDelta = \int _{\varDelta } b(x) dx, \qquad \kappa _\varDelta = \max \left\{ \mathrm{V}(\varDelta ) e^\vartheta ; {b_\varDelta }/{ a_\varDelta }\right\} , \end{aligned}$$
(4.12)

where \(\vartheta \) is as in (4.11).

The proof of the lemma below is based on the following elementary arguments. Let \(u:[0,+\infty ) \rightarrow {\mathbb {R}}\) be continuously differentiable with the derivative satisfying

$$\begin{aligned} u'(t) \le \upsilon _0 - \upsilon _1 u(t), \qquad \upsilon _0, \upsilon _1 >0. \end{aligned}$$
(4.13)

Then by standard arguments one obtains that

$$\begin{aligned} u(t) \le u(0) e^{-\upsilon _1 t} + \frac{\upsilon _0}{\upsilon _1} \left( 1 - e^{-\upsilon _1t}\right) , \qquad t\ge 0, \end{aligned}$$

which, in particular, means that

$$\begin{aligned} u(t) \le \max \{ u(0); \upsilon _0/\upsilon _1 \}, \end{aligned}$$
(4.14)

and also: for each \(\varepsilon >0\), there exists \(\tau _\varepsilon \ge 0\) such that

$$\begin{aligned} \forall t\ge \tau _\varepsilon \qquad u(t) \le \varepsilon + \upsilon _0/\upsilon _1. \end{aligned}$$
(4.15)
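For completeness: multiplying (4.13) by \(e^{\upsilon _1 t}\) gives

$$\begin{aligned} \frac{d}{dt}\left( u(t) e^{\upsilon _1 t}\right) = \left( u'(t) + \upsilon _1 u(t)\right) e^{\upsilon _1 t} \le \upsilon _0 e^{\upsilon _1 t}, \end{aligned}$$

and integrating this over \([0,t]\) yields the unnumbered bound above, of which (4.14) and (4.15) are immediate consequences.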

Lemma 4.1

Let \(\varDelta \) be as in (4.1) and \(\mu _t\), \(t\ge 0\) be as in Theorem 2.4, and hence \(q_\varDelta ^{(l)} (0)\) satisfies (4.11) with some \(\vartheta \). Let \(\kappa _\varDelta \) be as in (4.12) for these parameters. Then

$$\begin{aligned} \forall t \ge 0 \qquad q_\varDelta ^{(l)} (t) \le \kappa ^l_\varDelta /l!, \qquad l\in {\mathbbm {N}}. \end{aligned}$$
(4.16)

Proof

By (1.4) we have that

$$\begin{aligned} \frac{d}{dt} q_\varDelta ^{(l)} (t) = \mu _t (L F_\varDelta ^{(l)}), \end{aligned}$$

which by means of (4.8), and upon dropping the nonnegative term containing \(m\), can be estimated as

$$\begin{aligned} \frac{d}{dt} q_\varDelta ^{(l)} (t)\le & {} b_\varDelta q_\varDelta ^{(l-1)} (t) - \int _{\Gamma } \left( \sum _{x\in \gamma _\varDelta }\left( \sum _{y\in \gamma \setminus x} a(x-y) \right) F^{(l-1)}_\varDelta (\gamma \setminus x) \right) \mu _t (d \gamma ) \nonumber \\\le & {} b_\varDelta q_\varDelta ^{(l-1)} (t) - \int _{\Gamma } \left( \sum _{x\in \gamma _\varDelta } \left( \sum _{y\in \gamma _\varDelta \setminus x} a(x-y) \right) F^{(l-1)}_\varDelta (\gamma \setminus x) \right) \mu _t (d \gamma ) \nonumber \\\le & {} b_\varDelta q_\varDelta ^{(l-1)} (t) - a_\varDelta \int _{\Gamma } \left( \sum _{x\in \gamma _\varDelta } F^{(l-1)}_\varDelta (\gamma \setminus x) \right) \mu _t (d \gamma ). \end{aligned}$$
(4.17)

By (4.6) it follows that

$$\begin{aligned} \sum _{x\in \gamma _\varDelta } F^{(l-1)}_\varDelta (\gamma \setminus x) = l F^{(l)}_\varDelta (\gamma ). \end{aligned}$$

We apply this in (4.17) and obtain, cf. (4.13) and (4.12),

$$\begin{aligned} \frac{d}{dt} q_\varDelta ^{(l)} (t) \le b_\varDelta q_\varDelta ^{(l-1)} (t) - l a_\varDelta q_\varDelta ^{(l)} (t), \qquad l\in {\mathbbm {N}}. \end{aligned}$$
(4.18)

For \(l=1\), by (4.9) and (4.14) we get from the latter that (4.16) holds. Now assume that (4.16) holds for a given \(l-1\). Using this in (4.18) we obtain

$$\begin{aligned} \frac{d}{dt} q_\varDelta ^{(l)} (t) \le \frac{b_\varDelta \kappa _\varDelta ^{l-1}}{(l-1)!} - l a_\varDelta q_\varDelta ^{(l)} (t), \end{aligned}$$

from which by (4.14) we obtain that (4.16) holds also for \(l\). \(\square \)
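The induction just carried out can be illustrated numerically. The sketch below (illustration only; the values of \(b_\varDelta \), \(a_\varDelta \) and of the quantity standing for \(\mathrm{V}(\varDelta )e^\vartheta \) are hypothetical) integrates, by the explicit Euler scheme, the system obtained from (4.18) by replacing the inequalities with equalities, starting from initial values saturating (4.11), and checks the bound (4.16).

```python
import math

# Illustration only: integrate q_l' = b_D * q_{l-1} - l * a_D * q_l, q_0 = 1,
# i.e. (4.18) taken with equality, and check the bound (4.16).
# The values of b_D, a_D and init (standing for V(Delta) e^theta) are hypothetical.
b_D, a_D, init = 2.0, 1.0, 3.0
kappa = max(init, b_D / a_D)            # cf. (4.12)
L, dt, steps = 8, 1e-3, 50000           # levels l = 1..L, time horizon t = 50

q = [1.0] + [init ** l / math.factorial(l) for l in range(1, L + 1)]  # cf. (4.11)
for _ in range(steps):
    q = [1.0] + [q[l] + dt * (b_D * q[l - 1] - l * a_D * q[l]) for l in range(1, L + 1)]

assert all(q[l] <= kappa ** l / math.factorial(l) + 1e-9 for l in range(1, L + 1))
print([round(q[l], 4) for l in range(1, 4)])  # approaches (b_D/a_D)^l / l!
```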

Proof of Theorem 2.5

By means of the evident monotonicity

$$\begin{aligned} \mu (N_\Lambda ^n) \le \mu (N_\Lambda ^{n+1}), \qquad \mu (N_\Lambda ^n) \le \mu (N_{\Lambda '}^n), \ \quad \mathrm{for} \ \ \Lambda \subset \Lambda ', \end{aligned}$$

we conclude that it is enough to prove the statement for: (a) \(n=2^s\); (b) \(\Lambda \) being a finite union of disjoint translates of the cubic cell \(\varDelta \) as in Lemma 4.1. Let \(m\) be such that

$$\begin{aligned} \Lambda = \bigcup _{l=1}^m \varDelta _{l}. \end{aligned}$$

By iterating the elementary estimate

$$\begin{aligned} \left( \sum _{l=1}^n a_l \right) ^2 \le n \sum _{l=1}^n a_l^2, \end{aligned}$$

we get

$$\begin{aligned} N_\Lambda ^{2^s}(\gamma ) \le m^{2^s -1} \sum _{l=1}^m N_{\varDelta _l}^{2^s} (\gamma ), \qquad s\in {\mathbbm {N}}_0. \end{aligned}$$

Then by Lemma 4.1 and (4.7) we obtain

$$\begin{aligned} \mu _t (N_\Lambda ^{2^s}) \le m^{2^s} T_{2^s} (\bar{\kappa }_\varDelta ) = \left[ \mathrm{V}(\Lambda ) \right] ^{2^s} \left( T_{2^s} (\bar{\kappa }_\varDelta )/\left[ \mathrm{V}(\varDelta ) \right] ^{2^s}\right) , \end{aligned}$$

where \(T_n\) is the Touchard polynomial

$$\begin{aligned} T_n(\varkappa ) := \sum _{l=1}^n S(n, l) \varkappa ^l, \end{aligned}$$

with \(S(n,l)\) given in (4.4), see [13, Eq. (3.4)], and

$$\begin{aligned} \bar{\kappa }_\varDelta := \mathrm{V}(\varDelta ) \max \left\{ e^\vartheta ; {\Vert b\Vert }/{a_\varDelta }\right\} , \end{aligned}$$

cf. (4.12). This proves (2.28).
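A side check (illustration only): for the homogeneous Poisson state \(\pi _\varkappa \) one has \(k^{(l)}_{\pi _\varkappa } \equiv \varkappa ^l\), so that (4.5) turns into \(\pi _\varkappa (N_\varDelta ^n) = T_n(\varkappa \mathrm{V}(\varDelta ))\), the Touchard polynomial evaluated at the mean number of entities in \(\varDelta \). The sketch below verifies this equality by direct summation of the Poisson distribution; the value of the mean is arbitrary.

```python
import math

# Illustration only: the n-th moment of a Poisson(lam) random variable equals
# the Touchard polynomial T_n(lam), which is what (4.5) gives for k^(l) = kappa^l
# with lam = kappa * V(Delta).  The value lam = 2.5 is arbitrary.
def stirling2(n, l):
    return sum((-1) ** (l - s) * math.comb(l, s) * s ** n for s in range(l + 1)) // math.factorial(l)

def touchard(n, lam):
    return sum(stirling2(n, l) * lam ** l for l in range(1, n + 1))

def poisson_moment(n, lam, cutoff=100):
    # direct summation of k^n over the Poisson(lam) probabilities
    return sum(k ** n * math.exp(-lam) * lam ** k / math.factorial(k) for k in range(cutoff))

for n in range(1, 7):
    assert abs(touchard(n, 2.5) - poisson_moment(n, 2.5)) < 1e-6
```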

As in [4], it is possible to show that if the initial state \(\mu _0\) is such that each \(k^{(l)}_{\mu _0}\) belongs to \(C_\mathrm{b} (({\mathbbm {R}}^{d})^l)\), the set of bounded continuous functions on \(({\mathbbm {R}}^{d})^l\), then so does \(k^{(l)}_{t}\) for all \(t>0\). As in (2.25) we have, see also (4.5),

$$\begin{aligned} \mu _t (N_\varDelta ) = \int _{\varDelta }k^{(1)}_t (x) dx. \end{aligned}$$

By taking a sequence of \(\varDelta \) shrinking up to a given x and applying (4.16) we obtain

$$\begin{aligned} k_t^{(1)} (x) \le \max \left\{ k_{\mu _0}^{(1)}(x); {b(x)}/{ a(0)} \right\} \le \max \{ \Vert k_{\mu _0}^{(1)}\Vert _{L^\infty ({\mathbbm {R}}^d)}; \Vert b\Vert /a(0) \}, \end{aligned}$$
(4.19)

which proves the bound for \(k^{(1)}_t\). Let us now prove the validity of the second estimate in (1.8). The bound for \(k^{(2)}_t (x,x)\) can be obtained from (4.16) in a way similar to that used in getting (4.19). To bound \(k^{(2)}_t (x,y)\) with \(x\ne y\), we take two disjoint cells \(\varDelta _x\) and \(\varDelta _y\), both of side \(h>0\) and such that: (a) \(x\in \varDelta _x\) and \(y\in \varDelta _y\); (b) \(\varDelta _x \rightarrow \{x\}\) and \(\varDelta _y \rightarrow \{y\}\) as \(h\rightarrow 0\). Then set

$$\begin{aligned} F_h (\gamma ) = \left[ \sum _{z\in \gamma } \left( I_{\varDelta _x}(z) - I_{\varDelta _y}(z) \right) \right] ^2. \end{aligned}$$

For the state \(\mu _t\), we have

$$\begin{aligned} 0\le \mu _t (F_h) = q^{(2)}_{\varDelta _x}(t) + q^{(2)}_{\varDelta _y}(t) - 2 \int _{\varDelta _x} \int _{\varDelta _y} k_t^{(2)} (z_1 , z_2) d z_1 d z_2. \end{aligned}$$

By (4.12) and (4.16) this yields

$$\begin{aligned} \int _{\varDelta _x} \int _{\varDelta _y} k_t^{(2)} (z_1 , z_2) d z_1 d z_2 \le \frac{1}{2} \max \left\{ \kappa ^2_{\varDelta _x}; \kappa ^2_{\varDelta _y} \right\} \le \frac{h^{2d}}{2} \max \left\{ e^{2\vartheta }; \left( \Vert b\Vert /a_h\right) ^2\right\} , \end{aligned}$$

where \(a_h := \min \{a_{\varDelta _x};a_{\varDelta _y} \}\). Passing to the limit \(h\rightarrow 0\) and taking into account the assumed continuity of \(k^{(2)}_t\) and \(a\), we obtain the second inequality in (1.8). This completes the proof. \(\square \)

Note that the sharper (pointwise) bound in (4.19) coincides with the corresponding bound in the exactly soluble Surgailis model in which the mortality rate \(m(x)\) is replaced by \(a(0)\). That is, the competition here amounts to the appearance of an effective mortality \(a(0)\). Another important observation regarding the competition in this model is based on (4.15). Let \(\Lambda \) be compact and let \(k_t^{(1)}\) satisfy (4.19). Then for an arbitrary \(\varepsilon >0\), one finds \(\tau (\varepsilon , \Lambda )\), dependent also on \(\mu _0\), such that, for all \(x\in \Lambda \) and \(t\ge \tau (\varepsilon , \Lambda )\), the following holds

$$\begin{aligned} k_t^{(1)} (x) \le \frac{b(x)}{ a(0)} +\varepsilon . \end{aligned}$$

That is, after some time the density at each point of \(\Lambda \) does not exceed a level determined by \(b(x)/a(0)\), independently of the initial distribution of the entities in \(\Lambda \).