1 Introduction

1.1 Model

Let \(\left\{ \mathcal {P}_{t}\right\} _{t\ge 0}\) be the semigroup of a strongly recurrent diffusion on \(\mathbb {R}^{d}\) with infinitesimal generator L. We also introduce the so-called branching mechanism \(\psi :\mathbb {R}_{+}\mapsto \mathbb {R}\). It is represented as

$$\begin{aligned} \psi (\lambda )=-\alpha \lambda +\beta \lambda ^{2}+\int _{\mathbb {R}_{+}}\left( e^{-\lambda x}-1+\lambda x\right) \Pi ( \mathrm d x), \end{aligned}$$
(1.1)

where \(\alpha \in \mathbb {R}\), \(\beta \ge 0\) and \(\Pi \) is a measure concentrated on \(\mathbb {R}_{+}\) such that \(\int _{\mathbb {R}_{+}}\min (x^{2},x)\Pi ( \mathrm d x)<+\infty \). In this paper, we will study the behavior of a superprocess \(\left\{ X_{t}\right\} _{t\ge 0}\) with the infinitesimal operator L (or equivalently, with the semigroup \(\mathcal {P}_{}\)) and branching mechanism \(\psi \). It is a time-homogeneous, measure-valued Markov process. As such, it is characterized by a transition kernel, which in our case is expressed in terms of its Laplace transform

$$\begin{aligned} -\log \mathbb {E}(e^{-\langle f,X_{t}\rangle }|X_{0}=\nu )=\int _{\mathbb {R}^{d}}u_{f}(x,t)\nu (\mathrm{d}x), \end{aligned}$$
(1.2)

where \(t\ge 0\), \(f\in b^{+}(\mathbb {R}^{d})\) (bounded, positive and measurable functions on \(\mathbb {R}^{d}\)) and \(\nu \in \mathcal {M}_{F}(\mathbb {R}^{d})\) (finite, compactly supported measures). The function \(u_{f}(x,t)\) is the unique nonnegative solution of the integral equation

$$\begin{aligned} u_{f}(x,t)=\mathcal {P}_{t}f(x)-\int _{0}^{t}\mathcal {P}_{t-s}[\psi (u_{f}(\cdot ,s))](x) \mathrm d s. \end{aligned}$$
(1.3)

For the technical details of this construction, we refer the reader to [6, 7]. The above definition may appear quite abstract, but in fact any superprocess has a natural interpretation as the short-lifetime, high-density limit of branching particle systems (see, for example, the Introduction of [8] and Sect. 1.3). There is a vast body of literature concerning various aspects of superprocesses, e.g., [6, 7, 9, 10].

1.2 Results: Outline

We postpone the formal description of our assumptions and results to Sects. 3 and 4, providing here only intuitions.

In this paper, we are interested in the supercritical case in which the system grows exponentially (on the event of survival). The rate of growth is given by \(-\psi '(0)=\alpha \) which, in this paper, is assumed to be strictly positive:

$$\begin{aligned} -\psi '(0)=\alpha >0. \end{aligned}$$

It is standard to prove that the limit \(V_{\infty }:=\lim _{t\rightarrow +\infty }e^{-\alpha t}|X_{t}|\), where \(|X_{t}|:=\langle X_{t},1\rangle \) is the total mass of the system, exists and is a non-trivial random variable. The semigroup \(\mathcal {P}_{}\) corresponds to a strongly recurrent diffusion; its unique invariant measure is denoted by \(\varphi \).

Superprocesses of this type fulfill a spatial law of large numbers. In a nutshell, and without specifying detailed assumptions (recall [8, Theorem 1]), this means that for any bounded continuous function f, we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }e^{-\alpha t}\langle X_{t},f\rangle =\langle \varphi ,f\rangle V_{\infty },\quad \text {in probability}. \end{aligned}$$

The goal of our paper is to prove the corresponding central limit theorem. This will be achieved by studying the spatial fluctuations:

$$\begin{aligned} \frac{\langle X_{t},f\rangle -|X_{t}|\langle \varphi ,f\rangle }{N_{t}}, \end{aligned}$$
(1.4)

where \(N_{t}\) is some norming, not necessarily deterministic.

Before further discussion, we need to quantify the recurrence of \(\mathcal {P}_{}\). For the sake of this discussion, without being quite precise, we assume that there exists

$$\begin{aligned} \mu >0, \end{aligned}$$

such that for a bounded continuous function f, the quantity \(\mathcal {P}_{t}f-\langle f,\varphi \rangle \) decays exponentially fast at rate \(\mu \). The behavior of (1.4) depends qualitatively on the sign of \(\alpha -2\mu \). Roughly speaking, it reflects the interplay of two antagonistic forces: the growth, which is local and makes the system coarser, and the smoothing induced by the spatial evolution corresponding to \(\mathcal {P}_{}\). The results split into three qualitatively different classes:

  • Small growth rate \(\alpha <2\mu \), see Theorem 4. In this case, “the smoothing” prevails and the formulation of the result resembles the standard CLT. The normalization is \(N_{t}=|X_{t}|^{1/2}\) (which is of order \(e^{(\alpha /2)t}\)), and the limit is Gaussian, though its variance is given by a complicated formula. Moreover, the limit does not depend on \(X_{0}\).

  • Critical growth rate \(\alpha =2\mu \), see Theorem 6. In this case, we are in a situation of a delicate balance between “the growth” and “the smoothing,” with the growth being “somewhat stronger.” The normalization is slightly bigger compared to the classical case: \(N_{t}=t^{1/2}|X_{t}|^{1/2}\). The limit still does not depend on \(X_{0}\).

  • Large growth rate \(\alpha >2\mu \), see Theorem 8. In this case, “the growth” prevails. The normalization is even bigger: \(N_{t}=e^{(\alpha -\mu )t}\) (we have \(\alpha -\mu >\alpha /2\) and therefore \(N_{t}\gg \sqrt{|X_{t}|}\)). What is perhaps most surprising, the limit holds in probability. In addition, the growth is so fast that the limit depends on the starting configuration \(X_{0}\). Moreover, we suspect that the limit is non-Gaussian.
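For orientation, the trichotomy above can be encoded in a few lines (an illustrative sketch only; the function name is ours, `alpha` and `mu` play the roles of \(\alpha \) and \(\mu \), and the returned string describes the normalization \(N_{t}\)):

```python
def fluctuation_regime(alpha: float, mu: float) -> tuple:
    """Classify the CLT regime by the sign of alpha - 2*mu.

    alpha : growth rate of the superprocess (alpha = -psi'(0) > 0)
    mu    : exponential rate of convergence of P_t to the invariant
            measure, cf. (S2)/(S3)
    """
    assert alpha > 0 and mu > 0
    if alpha < 2 * mu:
        return ("small growth", "N_t = |X_t|^{1/2}")
    if alpha == 2 * mu:
        return ("critical growth", "N_t = t^{1/2} |X_t|^{1/2}")
    return ("large growth", "N_t = exp((alpha - mu) t)")
```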

In either case, we prove that the spatial fluctuations (1.4) become independent of the fluctuations of the total mass:

$$\begin{aligned} \frac{|X_{t}|-e^{\alpha t}V_{\infty }}{|X_{t}|^{1/2}}, \end{aligned}$$

as the time increases.

1.3 Related Results

In [2], the authors established central limit theorems for the branching particle system in which particles move according to the Ornstein–Uhlenbeck process (i.e., the one with infinitesimal generator \(Lf=\frac{1}{2}\sigma ^{2}\Delta f-\mu \left( x\cdot \text {grad}f\right) \)) and branch after an exponential time into two particles. Such a system is closely related to the superprocess with L and \(\psi (\lambda )=-\alpha \lambda +\beta \lambda ^{2}\). In fact, it can be defined as the weak limit of branching particle systems. In the nth approximation, the system starts from a particle configuration distributed according to a Poisson point process with intensity \(n\nu \) (\(\nu \) being the starting distribution of the superprocess). Each particle carries mass 1/n and lives for an exponential time with parameter n (i.e., mean lifetime 1/n). During this time, it executes a random movement according to an Ornstein–Uhlenbeck process. When it dies, the particle is replaced by a random number of offspring, whose mean is \(1+\alpha /n\) and whose variance is \(2\beta \). Each particle evolves independently of the others. We note that this construction can be extended to general L and \(\psi \) (see, for example, [8]).
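The scaling above can be made concrete in a small Monte Carlo sketch (a toy version under stated assumptions: offspring numbers 0 or 2, so the offspring variance is approximately 1, corresponding to \(\beta \approx 1/2\); spatial motion is omitted since it does not affect the total mass; all names are ours):

```python
import random

def total_mass(n, alpha=0.5, t_max=1.0, seed=0):
    """Total mass at time t_max of the n-th branching-particle
    approximation, ignoring spatial motion (irrelevant for |X_t|).

    Start: n particles of mass 1/n each (a stand-in for a Poisson
    field with intensity n*nu, |nu| = 1).  Each particle lives an
    Exp(n) time and is replaced by 0 or 2 children, mean 1 + alpha/n.
    The expected total mass at time t is exp(alpha * t).
    """
    rng = random.Random(seed)
    p_two = (1.0 + alpha / n) / 2.0      # P(2 children); P(0) = 1 - p_two
    alive_at_t = 0
    for _ in range(n):                   # independent initial particles
        stack = [rng.expovariate(n)]     # death times still to process
        while stack:
            death = stack.pop()
            if death >= t_max:
                alive_at_t += 1          # particle survives past t_max
            elif rng.random() < p_two:   # branch into two children
                stack.append(death + rng.expovariate(n))
                stack.append(death + rng.expovariate(n))
    return alive_at_t / n                # each particle carries mass 1/n
```

Averaged over many independent runs, the output fluctuates around \(e^{\alpha t_{\max }}\approx 1.65\) for the parameters above; the fluctuations do not vanish as n grows, mirroring the randomness of \(V_{t}\).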

In [2], the authors studied fluctuations akin to (1.4) discovering three regimes similar to the list above. The particle point of view gives arguably more compelling intuitions. Having this picture in mind, it might be easier to understand the discussion above; moreover, some further heuristics are given in [2, Remarks 3.4, 3.9, 3.13].

Although [2] was the inspiration for this paper, it must be stressed that the approximation, insightful as it is, cannot easily be used as a proof method in the superprocess setting, nor can the proofs of [2] be transferred directly. The main difficulty compared to branching systems is that a superprocess is not a discrete object. This was overcome using the backbone construction developed in [5]. It represents a supercritical superprocess as a subcritical superprocess (called the dressing) immigrating continuously on top of a branching diffusion. Controlling the aggregate behavior of the dressing was the main technical issue to be resolved in this paper. This was achieved using analytical estimates of the behavior of \(\mathcal {P}_{}\), which is a different approach than the coupling techniques applied in [2]. It is noteworthy that these analytical methods proved to be much more robust and allowed us to obtain results for a quite general class of L. Moreover, in this paper we work with a general branching mechanism \(\psi \), assuming only a finite fourth moment.

Related problems for branching particle systems were also considered in [1, 4].

1.4 Organization

The next section presents notation and basic facts required later. Section 3 contains the formulation of the assumptions. Section 4 is devoted to the presentation of our results. The proofs are deferred to Sects. 5, 6, 7 and 8 and the “Appendix.”

2 Preliminaries and Notation

Let us first recall the notions which appeared in the introduction. \(\mathcal {P}_{}\) is the semigroup of the diffusion process with the infinitesimal operator L. To shorten notation, for \(\alpha \in \mathbb {R}\) we define a semigroup \(\left\{ \mathcal {P}_{t}^{\alpha }\right\} _{t\ge 0}\) by

$$\begin{aligned} \mathcal {P}_{t}^{\alpha }f(x):=e^{\alpha t}\mathcal {P}_{t}f(x). \end{aligned}$$
(2.1)

\(\mathcal {M}_{F}\) is the space of finite, compactly supported measures and \(b^{+}(\mathbb {R}^{d})\) is the space of bounded, positive and measurable functions on \(\mathbb {R}^{d}\). By \(c_{1},c_{2},\ldots \), we will denote generic constants which might vary from line to line.

For a measure \(\nu \) and a measurable function f, we write \(\langle f,\nu \rangle :=\int _{\mathbb {R}^{d}}f(x)\nu ( \mathrm d x)\), provided it exists, and by \(|\nu |\), we denote its total mass, i.e., \(|\nu |:=\langle 1,\nu \rangle \) (we allow it to be infinite).

We will use \(\mathcal {C}_{0}\) to denote the space of continuous functions which grow at most polynomially. Formally:

$$\begin{aligned}&\mathcal {C}_{0}{}=\mathcal {C}_{0}{}(\mathbb {R}^{d}):=\left\{ f:\mathbb {R}^{d}\mapsto \mathbb {R}:f\,\text { is continuous and }\right. \\&\quad \left. \text {there exists }n\text { such that }|f(x)|/\Vert x\Vert _{}^{n}\rightarrow 0\text { as }\Vert x\Vert _{}^{}\rightarrow +\infty \right\} . \end{aligned}$$

We will use \(R_{1},R_{2},\ldots \) to denote generic functions in \(\mathcal {C}_{0}\), which may vary from line to line.

For \(x,y\in \mathbb {R}^{n}\), we denote by \(x\cdot y\) the usual scalar product. By \(\rightarrow ^{d}\), we denote convergence in law.

The parameter \(\alpha \) in (1.1) is the rate of growth of the model. By \(\text {Ext}\), we denote the event that the process becomes extinguished, i.e.,

$$\begin{aligned} \text {Ext}:=\left\{ \lim _{t\rightarrow +\infty }|X_{t}|=0\right\} . \end{aligned}$$
(2.2)

It is well known that \(\mathbb {P}\left( \text {Ext}\right) =e^{-\lambda ^{*}|X_{0}|}\) where

$$\begin{aligned} \lambda ^{*}\text { is the largest root of }\psi (\lambda )=0. \end{aligned}$$
(2.3)

Clearly, in the supercritical case we have \(\lambda ^{*}>0\).
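As an illustration (a sketch under stated assumptions, not part of the paper's framework; function names are ours): for the quadratic mechanism \(\psi (\lambda )=-\alpha \lambda +\beta \lambda ^{2}\) the largest root is \(\lambda ^{*}=\alpha /\beta \), and for a general \(\psi \) it can be found numerically, e.g., by bisection, since \(\psi \) is negative on \((0,\lambda ^{*})\) and positive afterwards:

```python
import math

def psi_quadratic(lam, alpha=2.0, beta=0.5):
    """psi(lambda) = -alpha*lambda + beta*lambda**2, i.e., (1.1) with Pi = 0."""
    return -alpha * lam + beta * lam ** 2

def lambda_star(psi, lo=1e-9, hi=1e6, iters=200):
    """Largest root of psi on (0, hi]; psi is negative just above 0
    (supercriticality) and positive at hi, so bisection applies."""
    assert psi(lo) < 0 < psi(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def extinction_probability(lam_star, initial_mass):
    """P(Ext) = exp(-lambda* |X_0|), cf. (2.3)."""
    return math.exp(-lam_star * initial_mass)
```

For \(\alpha =2\), \(\beta =1/2\) this gives \(\lambda ^{*}=4\) and \(\mathbb {P}(\text {Ext})=e^{-4|X_{0}|}\).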

3 Assumptions

In this section, we state precisely the assumptions on the branching mechanism \(\psi \) and the diffusion semigroup \(\mathcal {P}_{}\). We will discuss them and give an example in Sect. 4.4.

B1 :

The branching mechanism \(\psi \) given by (1.1) is non-trivial; precisely, either \(\beta \ne 0\) or \(\Pi \ne 0\). It is supercritical, i.e., \(\alpha >0\). Moreover, \(\Pi \) fulfills

$$\begin{aligned} \int _{\mathbb {R}_{+}}\max \left( x^{4},x^{2}\right) \Pi ( \mathrm d x)<+\infty . \end{aligned}$$

These conditions imply

$$\begin{aligned} \psi '(0)=-\alpha ,\quad \psi ^{(i)}(0)<+\infty ,\,\text {for }i\in \left\{ 2,3,4\right\} , \end{aligned}$$

and \(\psi ''(0)\ne 0\).
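Indeed, differentiating (1.1) under the integral sign (legitimate by the moment condition above) yields

$$\begin{aligned} \psi '(\lambda )=-\alpha +2\beta \lambda +\int _{\mathbb {R}_{+}}x\left( 1-e^{-\lambda x}\right) \Pi ( \mathrm d x),\qquad \psi ''(\lambda )=2\beta +\int _{\mathbb {R}_{+}}x^{2}e^{-\lambda x}\Pi ( \mathrm d x), \end{aligned}$$

so that \(\psi '(0)=-\alpha \) and \(\psi ''(0)=2\beta +\int _{\mathbb {R}_{+}}x^{2}\Pi ( \mathrm d x)\), which is strictly positive under the non-triviality assumption; similarly, \(\psi ^{(3)}(0)=-\int _{\mathbb {R}_{+}}x^{3}\Pi ( \mathrm d x)\) and \(\psi ^{(4)}(0)=\int _{\mathbb {R}_{+}}x^{4}\Pi ( \mathrm d x)\) are finite by the fourth-moment condition.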

Further, we formulate assumptions on the semigroup \(\mathcal {P}_{}\). Note that our formulation, although not the most compact, is chosen so that it is easy to verify and apply in proofs. Such a presentation also highlights what properties are essential for proofs.

S1 :

The semigroup \(\mathcal {P}_{}\) has the unique invariant probability measure \(\varphi \). We require that any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) is integrable with respect to \(\varphi \) and for any \(x\in \mathbb {R}^{d}\)

$$\begin{aligned} \lim _{t\rightarrow +\infty }\mathcal {P}_{t}f(x)=\langle f,\varphi \rangle . \end{aligned}$$

We will use \(\tilde{f}\) to denote the centering of f with respect to \(\varphi \) i.e.,

$$\begin{aligned} \tilde{f}:=f-\langle f,\varphi \rangle . \end{aligned}$$
(3.1)

We note that \(\mathcal {P}_{t}\tilde{f}=\mathcal {P}_{t}f-\langle f,\varphi \rangle \) and for \(f=const\) we have \(\tilde{f}=0\).

S2 :

There exists \(\mu >0\) such that for any function \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) one can find \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) fulfilling

$$\begin{aligned} |\mathcal {P}_{t}\tilde{f}(x)|\le R(x)e^{-\mu t}. \end{aligned}$$
(3.2)
S3 :

There exist \(\mu >0\) and \(h:\mathbb {R}^{d}\mapsto \mathbb {R}^{k}\) (for some \(k\ge 1\)) such that \(h\ne 0\), for any \(i\in \left\{ 1,\ldots ,k\right\} \) we have \(h_{i}\in \mathcal {C}_{0}(\mathbb {R}^{d})\), and for any \(t\ge 0\)

$$\begin{aligned} \mathcal {P}_{t}h=e^{-\mu t}h. \end{aligned}$$

Moreover, for any function \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\), there are \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) and a bounded function \(r:\mathbb {R}_{+}\mapsto \mathbb {R}_{+}\) such that \(r(t)\searrow 0\) and

$$\begin{aligned} |e^{\mu t}\mathcal {P}_{t}\tilde{f}(x)-h(x)\cdot \langle fh,\varphi \rangle |\le R(x)r(t). \end{aligned}$$
(3.3)

Note that for any \(t\ge 0\), we have \(\langle \mathcal {P}_{t}h,\varphi \rangle =0\) (indeed by the fact that \(\varphi \) is invariant we have \(\langle \mathcal {P}_{t}h,\varphi \rangle =\langle h,\varphi \rangle \) and moreover \(\langle \mathcal {P}_{t}h,\varphi \rangle =e^{-\mu t}\langle h,\varphi \rangle \)).

We note that (S3) implies (S2); indeed, one obtains (3.2) easily by dividing (3.3) by \(e^{\mu t}\). We note also that (S1) and (S2) imply the following fact: for any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\), there exists \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) such that for any \(t\ge 0\)

$$\begin{aligned} |\mathcal {P}_{t}f(x)|\le R(x). \end{aligned}$$
(3.4)
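Indeed, writing \(f=\tilde{f}+\langle f,\varphi \rangle \) and applying (3.2) with some \(R_{1}\in \mathcal {C}_{0}(\mathbb {R}^{d})\), we get

$$\begin{aligned} |\mathcal {P}_{t}f(x)|\le |\mathcal {P}_{t}\tilde{f}(x)|+|\langle f,\varphi \rangle |\le R_{1}(x)e^{-\mu t}+|\langle f,\varphi \rangle |\le R_{1}(x)+|\langle f,\varphi \rangle |=:R(x), \end{aligned}$$

and \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) since constant functions belong to \(\mathcal {C}_{0}\).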

Remark 1

Conditions (S1), (S2) and (S3) state, roughly speaking, that the diffusion associated with \(\mathcal {P}_{}\) is strongly recurrent with spectral gap \(\mu \). It might be possible to verify these conditions using a Bakry–Émery-type condition or Foster–Lyapunov criteria; we refer to the classical work [13]. Section 6 addresses the so-called exponential ergodicity, which might be useful for checking (S1) and (S2). Property (S3) seems harder to check in general; one can use the asymptotics of the transition density (as in the subsequent example). Other methods include tools of functional analysis as, for example, in [14, Sect. 3].

Example 2

Let us consider a superprocess with

$$\begin{aligned} Lf=\frac{1}{2}\sigma ^{2}\Delta f-\gamma \left( x\cdot \text {grad}f\right) , \end{aligned}$$
(3.5)

i.e., the infinitesimal operator of an Ornstein–Uhlenbeck process, where \(\sigma >0\) and \(\gamma >0\) and \(\psi (\lambda )=-\alpha \lambda +\beta \lambda ^{2}\) for \(\alpha ,\beta >0\). It is obvious that (B1) holds. It is well known that the unique invariant distribution \(\varphi \) of L has density

$$\begin{aligned} \left( \frac{\gamma }{\pi \sigma ^{2}}\right) ^{d/2}\exp \left( -\frac{\gamma }{\sigma ^{2}}\Vert x\Vert _{}^{2}\right) . \end{aligned}$$

Moreover, for any \(f\in \mathcal {C}_{0}\) we have the following representation:

$$\begin{aligned} \mathcal {P}_{t}f(x)=\mathbb {E}{}f\left( xe^{-\gamma t}+ou(t)G\right) , \end{aligned}$$

where \(ou(t):=\sqrt{1-e^{-2\gamma t}}\) and G is distributed according to \(\varphi \). Using this representation, conditions (S1), (S2) and (S3) can be verified quite easily (we refer to [1, Section 6]). Let us just mention that the function h in (S3) is \(h(x)=x\) and \(\mu =\gamma \). The limit objects \(V_{\infty }\) and \(H_{\infty }\) can be given more explicit representations: \(V_{\infty }\) is distributed according to \(\text {Exp}(|X_{0}|^{-1})\) and \(H_{\infty }\) is non-Gaussian. More information about the joint distribution of \((V_{\infty },H_{\infty })\) is contained in the forthcoming Conjecture 14, which can be proved in this particular case.
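The mechanics of (S2)–(S3) in this example can be checked by hand on monomials. The following sketch (one-dimensional, names ours) evaluates \(\mathcal {P}_{t}\) on \(f(x)=x\) and \(f(x)=x^{2}\) in closed form via the representation above, using \(G\sim \mathcal {N}(0,\sigma ^{2}/(2\gamma ))\):

```python
import math

def ou_semigroup(f_name, x, t, sigma=1.0, gamma=1.0):
    """P_t f(x) = E f(x*exp(-gamma*t) + ou(t)*G) for f(x)=x or x**2,
    using E G = 0 and E G**2 = sigma**2/(2*gamma)  (d = 1)."""
    decay = math.exp(-gamma * t)
    var_g = sigma ** 2 / (2.0 * gamma)   # variance of the invariant law
    if f_name == "x":
        return x * decay                  # eigenfunction: decay rate gamma
    if f_name == "x**2":
        # ou(t)**2 = 1 - exp(-2*gamma*t)
        return (x * decay) ** 2 + (1.0 - decay ** 2) * var_g
    raise ValueError(f_name)
```

In particular \(\mathcal {P}_{t}[x]=e^{-\gamma t}x\), confirming \(h(x)=x\) and \(\mu =\gamma \), while \(\mathcal {P}_{t}[x^{2}]-\langle x^{2},\varphi \rangle =e^{-2\gamma t}\left( x^{2}-\sigma ^{2}/(2\gamma )\right) \); functions with \(\langle fh,\varphi \rangle =0\) decay strictly faster than \(e^{-\gamma t}\), as (3.3) predicts.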

4 Results

We start with a brief discussion of the behavior of the total mass of the superprocess, i.e., \(\left\{ |X_{t}|\right\} _{t\ge 0}\). Let \(\left\{ V_{t}\right\} _{t\ge 0}\) be defined by

$$\begin{aligned} V_{t}:=e^{-\alpha t}|X_{t}|. \end{aligned}$$
(4.1)

Fact 3

Under assumption (B1), the process \(\left\{ V_{t}\right\} _{t\ge 0}\) is a positive martingale with respect to its natural filtration. Moreover, the limit exists:

$$\begin{aligned} V_{\infty }:=\lim _{t\rightarrow +\infty }V_{t},\quad \text {a.s. and in }L^{2}. \end{aligned}$$
(4.2)

Therefore, \(V_{\infty }\) is non-trivial (e.g., \(\mathbb {E}{}V_{\infty }=V_{0}\)). We also have

$$\begin{aligned} \text {Var}(V_{\infty })=\sigma _{V}^{2}|X_{0}|,\quad \sigma _{V}^{2}:=\frac{\psi ''(0)}{\alpha }. \end{aligned}$$
(4.3)

The proof of the martingale property and (4.2) is analogous to the proof of forthcoming Fact 13 and is left to the reader.
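For the reader's convenience, let us sketch where (4.3) comes from: by the standard second-moment formula for the total mass (a special case of the moment formulas recalled in Sect. 5.3),

$$\begin{aligned} \text {Var}(|X_{t}|)=|X_{0}|\psi ''(0)\frac{e^{\alpha t}\left( e^{\alpha t}-1\right) }{\alpha },\quad \text {hence}\quad \text {Var}(V_{t})=|X_{0}|\frac{\psi ''(0)}{\alpha }\left( 1-e^{-\alpha t}\right) \rightarrow |X_{0}|\frac{\psi ''(0)}{\alpha },\quad \text {as }t\rightarrow +\infty . \end{aligned}$$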

We recall that \(\alpha >0\) is the growth rate of the system (see (1.1)) and that \(\mu >0\) is the constant introduced in (S2)–(S3). Analogously to the presentation in the introduction, we split this section into three parts depending on the sign of \(\alpha -2\mu \).

4.1 Slow Growth \(\alpha <2\mu \)

We recall (2.1), (3.1) and define

$$\begin{aligned} \sigma _{f}^{2}:=\psi ''(0)\int _{0}^{\infty }e^{\alpha s}\langle \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2},\varphi \rangle \mathrm d s. \end{aligned}$$
(4.4)
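Note that \(\sigma _{f}^{2}\) is finite whenever \(\alpha <2\mu \): taking \(R\in \mathcal {C}_{0}\) as in (3.2), we have

$$\begin{aligned} e^{\alpha s}\langle \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2},\varphi \rangle \le \langle R^{2},\varphi \rangle e^{(\alpha -2\mu )s}, \end{aligned}$$

and, since \(R^{2}\in \mathcal {C}_{0}\) is \(\varphi \)-integrable by (S1), the integral in (4.4) converges precisely because \(\alpha -2\mu <0\).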

Let us also recall the event \(\text {Ext}\) from (2.2) and \(\sigma _{V}\) given in (4.3). The main result of this section is

Theorem 4

Let \(\left\{ X_{t}\right\} _{t\ge 0}\) be the superprocess starting from \(X_{0}\in \mathcal {M}_{F}(\mathbb {R}^{d})\). Let us assume that (B1), (S1), (S2) and \(\alpha <2\mu \) hold. Then, for any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) we have \(\sigma _{f}<+\infty \) and, conditionally on the event \(\text {Ext}^{c}\), the following holds

$$\begin{aligned} \left( e^{-\alpha t}|X_{t}|,\frac{|X_{t}|-e^{\alpha t}V_{\infty }}{\sqrt{|X_{t}|}},\frac{\langle X_{t},f\rangle -|X_{t}|\langle f,\varphi \rangle }{\sqrt{|X_{t}|}}\right) \rightarrow ^{d}(\hat{V}_{\infty },G_{1},G_{2}),\quad \text {as }t\rightarrow +\infty , \end{aligned}$$
(4.5)

where \(G_{1}\sim \mathcal {N}(0,\sigma _{V}^{2}),G_{2}\sim \mathcal {N}(0,\sigma _{f}^{2})\) and \(\hat{V}_{\infty }\) is \(V_{\infty }\) conditioned on \(\text {Ext}^{c}\). Moreover, the random variables \(\hat{V}_{\infty },G_{1},G_{2}\) are independent.

Remark 5

The law of the first coordinate of the limit depends on \(X_{0}\) only through its total mass \(|X_{0}|\) (see Fact 3). The second and third coordinates do not depend on \(X_{0}\) at all.

The proof is given in Sect. 6.

4.2 Critical Growth \(\alpha =2\mu \)

We recall the function h from (S3) and define

$$\begin{aligned} \sigma _{f}^{2}:=\psi ''(0)\langle \left( h\cdot \langle fh,\varphi \rangle \right) ^{2},\varphi \rangle . \end{aligned}$$
(4.6)

Using (S1) and (S3), one easily checks that \(\sigma _{f}^{2}<+\infty \) for \(f\in \mathcal {C}_{0}\). Let us recall the event \(\text {Ext}\) from (2.2) and \(\sigma _{V}\) given by (4.3). The main result of this section is

Theorem 6

Let \(\left\{ X_{t}\right\} _{t\ge 0}\) be the superprocess starting from \(X_{0}\in \mathcal {M}_{F}(\mathbb {R}^{d})\). Let us assume that (B1), (S1), (S2), (S3) and \(\alpha =2\mu \) hold. Then, for any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\), conditionally on the event \(\text {Ext}^{c}\), the following holds

$$\begin{aligned} \left( e^{-\alpha t}|X_{t}|,\frac{|X_{t}|-e^{\alpha t}V_{\infty }}{\sqrt{|X_{t}|}},\frac{\langle X_{t},f\rangle -|X_{t}|\langle f,\varphi \rangle }{t^{1/2}\sqrt{|X_{t}|}}\right) \rightarrow ^{d}(\hat{V}_{\infty },G_{1},G_{2}),\quad \text {as }t\rightarrow +\infty , \end{aligned}$$

where \(G_{1}\sim \mathcal {N}(0,\sigma _{V}^{2}),G_{2}\sim \mathcal {N}(0,\sigma _{f}^{2})\) and \(\hat{V}_{\infty }\) is \(V_{\infty }\) conditioned on \(\text {Ext}^{c}\). Moreover, the variables \(\hat{V}_{\infty },G_{1},G_{2}\) are independent.

The proof is given in Sect. 8.

4.3 Fast Growth \(\alpha >2\mu \)

Let h be the function from (S3). We define a process \(\left\{ H_{t}\right\} _{t\ge 0}\) by

$$\begin{aligned} H_{t}:=e^{-(\alpha -\mu )t}\langle X_{t},h\rangle . \end{aligned}$$
(4.7)

Fact 7

Let us assume (B1), (S1), (S2) and (S3). The process H is a martingale, and under the assumption \(\alpha >2\mu \) it is \(L^{2}\)-bounded.

From this fact, it follows that in the setting of this section, the limit

$$\begin{aligned} H_{\infty }:=\lim _{t\rightarrow +\infty }H_{t}, \end{aligned}$$
(4.8)

exists both a.s. and in \(L^{2}\). Let us recall the event \(\text {Ext}\) from (2.2) and \(\sigma _{V}\) given by (4.3). The main result of this section is

Theorem 8

Let \(\left\{ X_{t}\right\} _{t\ge 0}\) be the superprocess starting from \(X_{0}\in \mathcal {M}_{F}(\mathbb {R}^{d})\). Let us assume that (B1), (S1), (S2), (S3) and \(\alpha >2\mu \) hold. Then, for any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\), conditionally on the event \(\text {Ext}^{c}\), the following holds

$$\begin{aligned}&\left( e^{-\alpha t}|X_{t}|,\frac{|X_{t}|-e^{\alpha t}V_{\infty }}{\sqrt{|X_{t}|}},\frac{\langle X_{t},f\rangle -|X_{t}|\langle f,\varphi \rangle }{\exp \left( (\alpha -\mu )t\right) }\right) \rightarrow ^{d}(\hat{V}_{\infty },G,\langle fh,\varphi \rangle \cdot \hat{H}_{\infty }),\nonumber \\&\quad \text {as }t\rightarrow +\infty , \end{aligned}$$
(4.9)

where \(G\sim \mathcal {N}(0,\sigma _{V}^{2})\), the variables \(\hat{V}_{\infty },\hat{H}_{\infty }\) are, respectively, \(V_{\infty },H_{\infty }\) conditioned on \(\text {Ext}^{c}\), and \((\hat{V}_{\infty },\hat{H}_{\infty }),G\) are independent. Moreover

$$\begin{aligned} \left( e^{-\alpha t}|X_{t}|,\frac{\langle X_{t},f\rangle -|X_{t}|\langle f,\varphi \rangle }{\exp \left( (\alpha -\mu )t\right) }\right) \rightarrow (V_{\infty },\langle fh,\varphi \rangle \cdot H_{\infty }),\quad \text {in probability.} \end{aligned}$$
(4.10)

Remark 9

The law of \(H_{\infty }\) exhibits a non-trivial dependence on the starting condition \(X_{0}\), and \(V_{\infty },H_{\infty }\) are not independent. We expect that \(H_{\infty }\) is non-Gaussian. We make these observations precise in Conjecture 14, which is illustrated using the Ornstein–Uhlenbeck superprocess from Example 2. We notice that, being the limit of infinitely divisible processes, the pair \((V_{\infty },H_{\infty })\) is also infinitely divisible. Determining its Lévy exponent would be an interesting result, though it seems unlikely to be obtained in a general setting.

The convergence of the second coordinate in (4.10) is closer to a law of large numbers than to a central limit theorem. Intuitively speaking, the system grows so fast that the fluctuations become localized. This also manifests itself in the fact that the normalization is much bigger than the classical one. Writing \(\exp ((\alpha -\mu )t)=\exp (\alpha t)\exp (-\mu t)\), we can decompose the normalization into \(\exp (\alpha t)\) and \(\exp (-\mu t)\). The first term corresponds to the standard law of large numbers, and the second one reflects the fact that the mass of the system is, roughly speaking, distributed according to \(\mathcal {P}_{t}^{*}\) (the measure adjoint to \(\mathcal {P}_{t}\)). More precisely, by (S3) we have \(e^{\mu t}\mathcal {P}_{t}\tilde{f}\approx h\cdot \langle fh,\varphi \rangle \). Following these observations, we also conjecture that the convergence above holds almost surely.

The proofs are given in Sect. 7.

4.4 Discussion and Remarks

Remark 10

In our paper, we assume (B1), which states that the branching mechanism admits a finite fourth moment. We use this assumption to verify Lyapunov's condition in the proofs of the central limit theorems. It seems that the existence of a \((2+\epsilon )\)-moment for some \(\epsilon >0\) should be sufficient, but we do not have the formulas required to calculate the moments of the superprocess in such a case. Further, it is not unlikely that the existence of the second moment is enough for the results to hold.

An interesting question would be to go beyond this assumption, namely to study branching laws with heavy tails. It is natural to expect a different normalization and convergence to stable laws.

5 Proof Preliminaries

In this section, we gather necessary prerequisites for the proofs in Sects. 6, 7 and 8.

5.1 Backbone Construction

Supercritical superprocesses admit a beautiful and insightful description known as the backbone construction/decomposition. According to this construction, a supercritical superprocess consists of subcritical superprocesses (the so-called dressing) immigrating along the so-called (prolific) backbone, which is a supercritical branching particle system. This allows one to transfer many results concerning supercritical branching systems to superprocesses. On the conceptual level, this paper follows the strategy of [2], which presents CLTs for some branching particle systems. The main issue is to control the behavior of the dressing. We will comment on that once again after presenting the decomposition (5.7).

Now we briefly discuss some aspects of the backbone construction, referring the reader to [5, Sect. 2.4] for more details. Let us recall the branching mechanism given by (1.1); we assume that it is supercritical, i.e., \(\alpha >0\). Let \(\lambda ^{*}\) be the largest root of \(\psi (\lambda )=0\). We denote

$$\begin{aligned} \psi ^{*}(\lambda ):=\psi (\lambda +\lambda ^{*}). \end{aligned}$$
(5.1)

This happens to be a valid branching mechanism, and thus we may consider a superprocess with this branching mechanism; it will be referred to as \(X^{*}\). It is subcritical, i.e., its total mass decays exponentially fast with rate

$$\begin{aligned} \alpha ^{*}=-(\psi ^{*})'(0)=-\psi '(\lambda ^{*})<0. \end{aligned}$$
(5.2)

The inequality follows by the fact that \(\psi \) is strictly convex. Next we define

$$\begin{aligned} F(s):=\frac{1}{\lambda ^{*}\psi '(\lambda ^{*})}\psi (\lambda ^{*}(1-s))+s. \end{aligned}$$
(5.3)

It is the generating function of the branching law of the backbone process \(\left\{ Z_{t}\right\} _{t\ge 0}\). More precisely, Z is a Markov process consisting of a finite number of individuals. Each of them, from the moment of birth, lives for an independent, exponentially distributed period of time with parameter \(\psi '(\lambda ^{*})\), during which it executes an L-diffusion started from its position of birth; at death it gives birth, at the same position, to an independent number of offspring with the distribution described by F. The configuration of particles can be naturally identified with an atomic measure. The space of such measures is denoted by \(\mathcal {M}_{a}(\mathbb {R}^{d})\).

Definition 11

Fix \(\nu \in \mathcal {M}_{F}(\mathbb {R}^{d})\) and \(\gamma \in \mathcal {M}_{\mathrm{a}}(\mathbb {R}^{d})\). Let Z be a branching particle diffusion (i.e., a backbone) with initial configuration \(\gamma \) and \(X^{0,*}\) be an independent copy of \(X^{*}\) (i.e., with subcritical branching mechanism (5.1)) such that \(X_{0}^{0,*}=\nu \). We define a \(\mathcal {M}_{F}(\mathbb {R}^{d})\)-valued stochastic process \(\left\{ \Lambda _{t}\right\} _{t\ge 0}\) by

$$\begin{aligned} \Lambda _{t}=X_{t}^{0,*}+I_{t}, \end{aligned}$$
(5.4)

where the process \(\left\{ I_{t}\right\} _{t\ge 0}\) is independent of \(X^{0,*}\). This process has a pathwise description: namely, I consists of a subcritical superprocess immigrating along the backbone process. The full description is presented in [5]. The joint process \(\left\{ (\Lambda _{t},Z_{t})\right\} _{t\ge 0}\) is Markovian; we denote its law by \(\mathbf {P}_{\nu \times \gamma }\). The following equation characterizes the transition kernel of this process:

$$\begin{aligned} \mathbf {E}_{\nu \times \gamma }\exp (-\langle f,\Lambda _{t}\rangle -\langle h,Z_{t}\rangle )=e^{-\langle u_{f}^{*}(\cdot ,t),\nu \rangle -\langle v_{f,h}(\cdot ,t),\gamma \rangle }, \end{aligned}$$
(5.5)

where \(f,h\in b^{+}(\mathbb {R}^{d})\) and \(e^{-v_{f,h}(x,t)}\) is the unique [0, 1]-valued solution of the integral equation

$$\begin{aligned} e^{-v_{f,h}(x,t)}=\mathcal {P}_{t}\left[ e^{-h}\right] (x)+\frac{1}{\lambda ^{*}}\int _{0}^{t}\mathcal {P}_{t-s}\left[ \psi ^{*}\left( -\lambda ^{*}e^{-v_{f,h}(\cdot ,s)}+u_{f}^{*}(\cdot ,s)\right) -\psi ^{*}\left( u_{f}^{*}(\cdot ,s)\right) \right] (x) \mathrm d s, \end{aligned}$$
(5.6)

where \(u_{f}^{*}\) is the solution of (1.3) with the subcritical branching mechanism \(\psi ^{*}\) given by (5.1).

We now present the main result concerning the backbone construction. First, we randomize the law \(\mathbf {P}_{\nu \times \gamma }\) for \(\nu \in \mathcal {M}_{F}(\mathbb {R}^{d})\) by replacing the deterministic choice of \(\gamma \) with a Poisson random measure with intensity \(\lambda ^{*}\nu \). We denote the resulting law by \(\mathbf {P}_{\nu }\).

Theorem 12

([5, Theorem 2]) For any \(\nu \in \mathcal {M}_{F}(\mathbb {R}^{d})\), under the measure \(\mathbf {P}_{\nu }\) the process \(\Lambda \) is Markovian and has the same law as X starting from \(X_{0}=\nu \).

For any \(0\le s<t\), we decompose the immigration process I (see (5.4)) as follows

$$\begin{aligned} I_{t}=D_{t-s}^{s}+\sum _{i=1}^{|Z_{s}|}\Gamma _{t-s}^{i,s}, \end{aligned}$$

where \(\left\{ D_{t}^{s}\right\} _{t\ge 0}\) describes the evolution of the dressing which appeared in the system before time s. The process \(\Gamma ^{i,s}\) describes the mass which immigrated along the subtree stemming from the ith prolific individual at time s, located at \(Z_{s}(i)\) (we fix an arbitrary enumeration of the particles of Z). We thus have the following decomposition of \(\Lambda \):

$$\begin{aligned} \Lambda _{t}=X_{t}^{0,*}+D_{t-s}^{s}+\sum _{i=1}^{|Z_{s}|}\Gamma _{t-s}^{i,s}. \end{aligned}$$
(5.7)

Let us define \(\left\{ Y_{t}^{s}\right\} _{t\ge 0}\) by

$$\begin{aligned} Y_{t}^{s}:=X_{t+s}^{0,*}+D_{t}^{s}. \end{aligned}$$
(5.8)

We have \(Y_{0}^{s}=X_{s}\), and Y evolves according to the subcritical branching mechanism \(\psi ^{*}\). Subcriticality is fundamental for our proofs because this process is negligible when \(t\gg s\). The third term of (5.7) is a sum of random variables indexed by the branching process Z, to which techniques similar to those of [2] can be applied. Each of the processes \(\Gamma ^{i,s}\) performs a Markovian evolution described by (5.5) with the starting conditions \(\nu =0\) and \(\gamma =\delta _{Z_{s}(i)}\).

5.2 Martingales and Their Limits

We recall V and H (given by (4.1) and (4.7)) and define their analogues \(\left\{ W_{t}\right\} _{t\ge 0},\left\{ I_{t}\right\} _{t\ge 0}\) associated with the backbone process Z. Namely,

$$\begin{aligned} W_{t}:= & {} e^{-\alpha t}|Z_{t}|, \end{aligned}$$
(5.9)
$$\begin{aligned} I_{t}:= & {} e^{-(\alpha -\mu )t}\langle Z_{t},h\rangle , \end{aligned}$$
(5.10)

where h is the eigenfunction introduced in (S3).

Let us assume that V, H, W and I are defined for the backbone construction.

Fact 13

Let us assume that (B1), (S1) and (S2) hold. Then, W is a positive, \(L^{2}\)-bounded martingale. We denote its limit by \(W_{\infty }\). Moreover

$$\begin{aligned} V_{\infty }=\frac{1}{\lambda ^{*}}W_{\infty },\quad \text {a.s}. \end{aligned}$$
(5.11)

If, in addition, (S3) holds, then I is a martingale, which for \(\alpha >2\mu \) is \(L^{2}\)-bounded. In this case, the limit

$$\begin{aligned} I_{\infty }=\lim _{t\rightarrow +\infty }I_{t}, \end{aligned}$$

exists a.s. and in \(L^{2}\). Moreover,

$$\begin{aligned} H_{\infty }=\frac{1}{\lambda ^{*}}I_{\infty },\quad \text {a.s}. \end{aligned}$$
(5.12)

The proof uses some facts which are presented later and thus is postponed to Sect. 7.
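As an aside (not part of the proof), Fact 13 can be illustrated on the simplest conceivable backbone: a Yule (pure-birth) process with branching rate 1, i.e., \(\alpha =1\), started from a single particle. There \(|Z_{t}|\) is geometric with parameter \(e^{-t}\), so \(\mathbb {E}W_{t}=e^{-t}\mathbb {E}|Z_{t}|=1\), consistent with W being a positive martingale with \(W_{0}=1\). A minimal simulation sketch (the parameter choices are purely illustrative):

```python
import math
import random

random.seed(0)

t, n_runs = 2.0, 20000
p = math.exp(-t)  # |Z_t| of a rate-1 Yule process is geometric with parameter e^{-t}

# sample the geometric law by inversion: P(N >= k) = (1 - p)^{k - 1}
samples = [int(math.log(random.random()) / math.log(1.0 - p)) + 1
           for _ in range(n_runs)]

# empirical mean of W_t = e^{-t} |Z_t|; should be close to E W_t = W_0 = 1
W_t = math.exp(-t) * sum(samples) / n_runs
assert abs(W_t - 1.0) < 0.05
```

The same experiment at several times t gives means fluctuating around 1, as the martingale property predicts.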

By Theorem 12, the backbone Z starts with a random number of particles. The definitions of W and I and the convergence results remain valid under the assumption \(Z_{0}=\delta _{0}\) (i.e., one particle located at 0). We denote the joint limit in this case by \((\check{I}_{\infty },\check{W}_{\infty })\). We conjecture the following behavior of the law of \((H_{\infty },V_{\infty })\).

Conjecture 14

Let us assume that (B1), (S1), (S3) and \(\alpha >2\mu \) hold and let \(\left\{ (\check{I}_{\infty }^{i},\check{W}_{\infty }^{i})\right\} _{i\ge 1}\) be an i.i.d. sequence distributed according to \((\check{I}_{\infty },\check{W}_{\infty })\). Let \(\nu \in \mathcal {M}_{F}(\mathbb {R}^{d})\) and N be a Poisson point process with intensity \(\nu \) independent of the sequence. We define

$$\begin{aligned} \hat{H}_{\infty }:=\frac{1}{\lambda ^{*}}\left( \sum _{i=1}^{|N|}\check{I}_{\infty }^{i}+\sum _{i=1}^{|N|}h(x_{i})\check{W}_{\infty }^{i}\right) ,\quad \hat{V}_{\infty }:=\frac{1}{\lambda ^{*}}\left( \sum _{i=1}^{|N|}\check{W}_{\infty }^{i}\right) , \end{aligned}$$

where |N| is the number of points in N and \((x_{1},\ldots ,x_{|N|})\) are their positions.

Let \((H_{\infty },V_{\infty })\) be the limits of the martingales (4.2) and (4.8) for the superprocess starting from \(X_{0}=\nu \). Then

$$\begin{aligned} (H_{\infty },V_{\infty })=^{d}(\hat{H}_{\infty },\hat{V}_{\infty }). \end{aligned}$$

The conjecture is supported by the fact that it holds for the superprocess of Example 2. In this case, it follows from Fact 13, Theorem 12 and an analogous decomposition for the branching Ornstein–Uhlenbeck process given in [2, Proposition 3.11]. In [2, Remark 3.14], it is also proven that in this case \(H_{\infty }\) is not Gaussian.

5.3 Moments

This section is devoted to the moment formulas for the processes appearing in the proofs. In the paper, we use moments up to order 4. We recall the branching mechanisms \(\psi \) and \(\psi ^{*}\) given in (1.1) and (5.1).

Given \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\), we define \(u_{f}^{1},u_{f}^{2}:\mathbb {R}^{d}\times \mathbb {R}_{+}\mapsto \mathbb {R}\) and \(u_{f}^{*,1},u_{f}^{*,2}:\mathbb {R}^{d}\times \mathbb {R}_{+}\mapsto \mathbb {R}\) by

$$\begin{aligned} u_{f}^{1}(x,t):= & {} \mathcal {P}_{t}^{\alpha }f(x),\quad u_{f}^{*,1}(x,t):=\mathcal {P}_{t}^{\alpha ^{*}}f(x). \end{aligned}$$
(5.13)
$$\begin{aligned} u_{f}^{2}(x,t):= & {} -\psi ''(0)\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left( u_{f}^{1}(\cdot ,s)\right) ^{2}\right] (x) \mathrm d s,\nonumber \\ u_{f}^{*,2}(x,t):= & {} -(\psi ^{*})''(0)\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha ^{*}}\left[ \left( u_{f}^{*,1}(\cdot ,s)\right) ^{2}\right] (x) \mathrm d s. \end{aligned}$$
(5.14)

Further, let

$$\begin{aligned} B_{3}=\left\{ (1,1,0),(3,0,0)\right\} ,\quad B_{4}=\left\{ (1,0,1,0),(0,2,0,0),(2,1,0,0),(4,0,0,0)\right\} , \end{aligned}$$
(5.15)

and \(\left\{ c_{\mathbf {m}}^{1}\right\} _{\mathbf {m}\in B_{3}\cup B_{4}}\) be constants to be specified later. We define \(u_{f}^{*,3},u_{f}^{*,4}:\mathbb {R}^{d}\times \mathbb {R}_{+}\mapsto \mathbb {R}\) by

$$\begin{aligned} u_{f}^{*,k}(x,t):=\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha ^{*}}\left[ \sum _{\mathbf {m}\in B_{k}}c_\mathbf{m }^{1}\prod _{j=1}^{k}\left( u_{f}^{*,j}(\cdot ,s)\right) ^{m_{j}}\right] (x) \mathrm d s. \end{aligned}$$
(5.16)

We define also \(V_{f}^{1},V_{f}^{2}:\mathbb {R}^{d}\times \mathbb {R}_{+}\mapsto \mathbb {R}\) by

$$\begin{aligned} V_{f}^{1}(x,t):= & {} -\frac{e^{\alpha t}-e^{\alpha ^{*}t}}{\lambda ^{*}}\mathcal {P}_{t}f(x), \end{aligned}$$
(5.17)
$$\begin{aligned} V_{f}^{2}(x,t):= & {} \frac{1}{\lambda ^{*}}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha } \left[ \psi ''(0)\left( \mathcal {P}_{s}^{\alpha }f(\cdot )\right) ^{2}-\psi ''(0)\left( u_{f}^{*,1}(\cdot ,s)\right) ^{2}\right. \nonumber \\&\left. -\,(\alpha -\alpha ^{*})u_{f}^{*,2}(\cdot ,s)\right] (x) \mathrm d s. \end{aligned}$$
(5.18)

Finally, we set \(V_{f}^{3},V_{f}^{4}:\mathbb {R}^{d}\times \mathbb {R}_{+}\mapsto \mathbb {R}\) as

$$\begin{aligned} V_{f}^{k}(x,t):= & {} \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha } \left[ \sum _{\mathbf {m}\in B_{k}}c_{\mathbf {m}}^{2}\prod _{j=1}^{k}\left( -\lambda ^{*}V_{f}^{j} +u_{f}^{*,j}\right) ^{m_{j}}\right. \nonumber \\&\left. -\,(\alpha -\alpha ^{*})u_{f}^{*,k}-\sum _{\mathbf {m}\in B_{k}}c_{\mathbf {m}}^{3}\prod _{j=1}^{k}\left( u_{f}^{*,j}\right) ^{m_{j}}\right] (x)\mathrm{d}s, \end{aligned}$$
(5.19)

where \(\left\{ c_{\mathbf {m}}^{2}\right\} _{\mathbf {m}\in B_{3}\cup B_{4}}\) and \(\left\{ c_{\mathbf {m}}^{3}\right\} _{\mathbf {m}\in B_{3}\cup B_{4}}\) are constants to be specified. The usefulness of these formulas follows by

Lemma 15

Under assumptions (B1) and (S1), for any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) formulas (5.13), (5.14), (5.16), (5.17), (5.18) and (5.19) are well defined (in particular, all quantities are finite). They can be used to calculate moments, viz.

  (1)

    For any \(X_{0}\in \mathcal {M}_{F}(\mathbb {R}^{d})\) we have

    $$\begin{aligned} \mathbb {E}{}\langle X_{t},f\rangle =\langle u_{f}^{1}(\cdot ,t),X_{0}\rangle ,\quad \text {Var}\left( \langle X_{t},f\rangle \right) =-\langle u_{f}^{2}(\cdot ,t),X_{0}\rangle . \end{aligned}$$
    (5.20)

    Similar formulas hold for the subcritical superprocess with the branching mechanism \(\psi ^{*}\), namely

    $$\begin{aligned} \mathbb {E}{}\langle X_{t}^{*},f\rangle =\langle u_{f}^{*,1}(\cdot ,t),X_{0}\rangle ,\quad \text {Var}\left( \langle X_{t}^{*},f\rangle \right) =-\langle u_{f}^{*,2}(\cdot ,t),X_{0}\rangle . \end{aligned}$$
    (5.21)
  (2)

    There is a choice of constants \(\left\{ c_{\mathbf {m}}^{1}\right\} ,\left\{ c_{\mathbf {m}}^{2}\right\} \) and \(\left\{ c_{\mathbf {m}}^{3}\right\} \) such that for \(x\in \mathbb {R}^{d}\) and \(k\le 4\) we have

    $$\begin{aligned} \mathbf {E}_{0\times \delta _{x}}\langle f,\Lambda _{t}\rangle ^{k}=(-1)^{k}V_{f}^{k}(x,t). \end{aligned}$$
    (5.22)
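As a sanity check of (5.20), outside the proofs: in the spatially homogeneous case with no motion and \(\psi (\lambda )=-\alpha \lambda +\beta \lambda ^{2}\), equation (1.3) reduces to the ODE \(u'=\alpha u-\beta u^{2}\), \(u(0)=\theta \), which is solved explicitly, and (5.14) evaluates to \(\text {Var}(|X_{t}|)=(2\beta /\alpha )e^{\alpha t}(e^{\alpha t}-1)|X_{0}|\). The following sketch (illustrative parameter values) compares this closed form with the variance extracted from the log-Laplace functional by numerical differentiation:

```python
import math

alpha, beta, t = 1.0, 0.5, 1.0  # illustrative values; beta > 0, |X_0| = 1

def u(theta):
    # explicit solution of u' = alpha*u - beta*u**2, u(0) = theta,
    # i.e., -log E exp(-theta*|X_t|) = u(theta) when |X_0| = 1
    e = math.exp(alpha * t)
    return alpha * theta * e / (alpha + beta * theta * (e - 1.0))

# Var(|X_t|) = -u''(0), computed by a central finite difference
h = 1e-4
var_numeric = -(u(h) - 2.0 * u(0.0) + u(-h)) / h ** 2

# closed form from (5.14) without motion: psi''(0) = 2*beta and
# -u^2(t) = 2*beta * int_0^t e^{alpha(t-s)} e^{2*alpha*s} ds
var_closed = (2.0 * beta / alpha) * math.exp(alpha * t) * (math.exp(alpha * t) - 1.0)

assert abs(var_numeric - var_closed) < 1e-4
```

The agreement confirms the sign convention in (5.20): the variance equals \(-\langle u_{f}^{2}(\cdot ,t),X_{0}\rangle \) with \(u_{f}^{2}\le 0\).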

Remark 16

We recall that the process \(\Lambda \) was defined in Definition 11. Formula (5.22) will be used to calculate moments of \(\Gamma ^{i,s}\) in (5.7). To this end, notice that under \(\mathbf {E}_{0\times \delta _{x}}\) the process \(\Lambda \) has the same law as \(\left\{ \Gamma _{u}^{i,s}\right\} _{u\ge 0}\) conditioned on \(Z_{s}(i)=x\).

Moreover, we note that the constants \(\left\{ c_{\mathbf {m}}^{1}\right\} ,\left\{ c_{\mathbf {m}}^{2}\right\} \) and \(\left\{ c_{\mathbf {m}}^{3}\right\} \) can be specified explicitly, though their exact values are not relevant to our proofs.

Using these formulas, we analyze the process \(Y^{s}\) defined in (5.8). Let \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\). Using the strong Markov property, (5.20) and (5.21), we obtain

$$\begin{aligned} \mathbb {E}{}\langle Y_{t}^{s},f\rangle= & {} \mathbb {E}{}\mathbb {E}{}_{X_{s}}\langle X_{t}^{*},f\rangle =e^{\alpha ^{*}t}\mathbb {E}{}\langle X_{s},\mathcal {P}_{t}f\rangle =e^{\alpha s}e^{\alpha ^{*}t}\langle \mathcal {P}_{s}(\mathcal {P}_{t}f),X_{0}\rangle \nonumber \\= & {} e^{\alpha s+\alpha ^{*}t}\langle \mathcal {P}_{t+s}f,X_{0}\rangle , \end{aligned}$$
(5.23)

where under the measure \(\mathbb {E}{}_{X_{s}}\), the process \(X^{*}\) is a subcritical superprocess starting from \(X_{s}\). In the last transformation, we used the fact that \(\mathcal {P}_{}\) is a semigroup. Now, by \(\alpha ^{*}<0\), we see that indeed \(Y_{t}^{s}\) is negligible for \(t\gg s\).

It will be useful to have the following bounds.

Lemma 17

Assume (B1), (S1) and (S2). Given \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) there exists \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) such that

$$\begin{aligned} |u_{f}^{2}(x,t)|\le e^{2\alpha t}R(x),\quad |u_{f}^{*,2}(x,t)|\le e^{\alpha ^{*}t}R(x), \end{aligned}$$
(5.24)

and

$$\begin{aligned} |u_{f}^{*,3}(x,t)|\le e^{\alpha ^{*}t}R(x),\quad |u_{f}^{*,4}(x,t)|\le e^{\alpha ^{*}t}R(x), \end{aligned}$$
(5.25)

finally also

$$\begin{aligned} V_{f}^{2}(x,t)\le e^{2\alpha t}R(x). \end{aligned}$$
(5.26)

The proofs of Lemmas 15 and 17 are technical and thus postponed to the “Appendix.” We will also need moment formulas for the backbone process. We omit the proofs, referring the reader to [11] and to the derivation in [2, Sect. 4.1].

Lemma 18

Let us assume (B1) and (S1). Let Z be the backbone process as in Theorem 12. Then, there exists \(C>0\) such that for any \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) we have

$$\begin{aligned} \mathbb {E}{}\langle Z_{t},f\rangle= & {} \langle \mathcal {P}_{t}^{\alpha }f,\lambda ^{*}\nu \rangle . \end{aligned}$$
(5.27)
$$\begin{aligned} \mathbb {E}{}\langle Z_{t},f\rangle ^{2}= & {} \lambda ^{*}\int _{\mathbb {R}^{d}}\left( \mathcal {P}_{t}^{\alpha }\left[ f^{2}\right] (x)+C\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ (\mathcal {P}_{s}^{\alpha }f(\cdot ))^{2}\right] (x) \mathrm d s\right) \nu ( \mathrm d x),\nonumber \\ \end{aligned}$$
(5.28)

where we recall that \(\lambda ^{*}\) is given by (2.3).

6 Proof of Theorem 4

In this section we fix \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) and make the standing assumption that (B1), (S1), (S2) and \(\alpha <2\mu \) hold.

Let us first outline the proof. We use the decomposition of \(\Lambda \) given in (5.7). We recall that \(V_{\infty }\) is the limit of the martingale V (see (4.1) and Fact 3), that \(\tilde{f}=f-\langle f,\varphi \rangle \), and finally (2.3). We start with the following random vectors:

$$\begin{aligned} K_{1}(t)&:=\left( e^{-\alpha t}|\Lambda _{t}|,e^{-(\alpha /2)t}(|\Lambda _{t}|-e^{\alpha t}V_{\infty }),e^{-(\alpha /2)t}\langle \Lambda _{t},\tilde{f}\rangle \right) .\nonumber \\ K_{5}(t)&:=\left( e^{-\alpha t}|Z_{t}|/\lambda ^{*},e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i}),e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}<\log t\right\} }\right) , \end{aligned}$$
(6.1)

where \(\left\{ V_{\infty }^{i}\right\} _{i\in \mathbb {N}}\) are i.i.d. copies of \(V_{\infty }\), \(M_{t}^{i}:=e^{-((k-1)\alpha /2)t}\langle \Gamma _{(k-1)t}^{i,t},\tilde{f}\rangle \) and \(m_{t}^{i}:=\mathbb {E}{}\left( M_{t}^{i}|Z_{t}\right) =\mathbb {E}{}\left( M_{t}^{i}|Z_{t}(i)\right) \) (the constant k and the remaining details of these definitions will be specified later).

We will show that

$$\begin{aligned} \lim _{t\rightarrow +\infty }\left[ K_{1}(t)-K_{5}(t)\right] =0,\quad \text {in probability.} \end{aligned}$$
(6.2)

Next, we will consider a random vector related to \(K_{5}\) defined by

$$\begin{aligned} K_{6}(t):=\left( e^{-\alpha t}|Z_{t}|/\lambda ^{*},\lfloor |\Lambda _{kt}|\rfloor {}^{-1/2}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i}),\left( |Z_{t}|/\lambda ^{*}\right) ^{-1/2}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}<\log t\right\} }\right) . \end{aligned}$$
(6.3)

We will show that conditionally on the set of non-extinction \(\text {Ext}^{c}\) (see (2.2)), we have

$$\begin{aligned} K_{6}(t)\rightarrow ^{d}(\hat{V}{}_{\infty },G_{1},G_{2}), \end{aligned}$$
(6.4)

where the limit is as in (4.5). From these results, Theorem 4 follows by standard arguments.

Before going to the proofs, we recall (3.1) and \(\sigma _{f}^{2}\) given by (4.4), and we state the following technical lemma.

Lemma 19

We have \(\sigma _{f}^{2}<+\infty \) and

$$\begin{aligned} \lim _{t\rightarrow +\infty }\sup _{\Vert x\Vert _{}^{}\le \log t}|e^{-\alpha t}V_{\tilde{f}}^{2}(x,t)-\sigma _{f}^{2}/\lambda ^{*}|=0. \end{aligned}$$
(6.5)

Moreover, there exists \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) such that

$$\begin{aligned} |V_{\tilde{f}}^{4}(x,t)|\le e^{2\alpha t}R(x). \end{aligned}$$
(6.6)

The proof is deferred to the end of this section.

6.1 Proof of (6.2)

We will proceed defining auxiliary \(K_{2}(t),K_{3}(t),K_{4}(t)\) and proving that for \(i\in \left\{ 1,2,3,4\right\} \) we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }|K_{i+1}(t)-K_{i}(t)|=0,\quad \text {in probability.} \end{aligned}$$

This will clearly establish (6.2). Let us fix \(k\in \mathbb {N}\) such that (we recall that \(\alpha ^{*}\) is negative)

$$\begin{aligned} k>\max \left( \mu /(\mu -\alpha /2),-\alpha /\alpha ^{*}\right) . \end{aligned}$$
(6.7)
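The role of the two provisos in (6.7) can be checked numerically; the values below are illustrative only (any \(\alpha <2\mu \) and \(\alpha ^{*}<0\) would do). The first proviso makes the decay exponent appearing in (6.13) negative, the second the one appearing in (6.11):

```python
# illustrative values: alpha < 2*mu (standing assumption) and alpha* < 0
alpha, mu, alpha_star = 1.0, 1.0, -0.5

k = 3  # an integer satisfying (6.7): k > max(mu/(mu - alpha/2), -alpha/alpha_star)
assert k > max(mu / (mu - alpha / 2.0), -alpha / alpha_star)

# exponent of e^{...t} in (6.13): k*alpha/2 - mu*(k-1); negative by the first proviso
exp_613 = k * alpha / 2.0 - mu * (k - 1)

# exponent of e^{...t} in (6.11): alpha - k*alpha/2 + alpha_star*(k-1);
# negative thanks to the second proviso
exp_611 = alpha - k * alpha / 2.0 + alpha_star * (k - 1)

assert exp_613 < 0 and exp_611 < 0
```

Here \(\text {exp\_613}=-1/2\) and \(\text {exp\_611}=-3/2\), so both error terms indeed vanish exponentially fast in t.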

We set

$$\begin{aligned} K_{2}(t):=\left( e^{-\alpha t}|Z_{t}|/\lambda ^{*},e^{-(k\alpha /2)t}(|\Lambda _{kt}|-e^{k\alpha t}V_{\infty }),e^{-(k\alpha /2)t}\langle \Lambda _{kt},\tilde{f}\rangle \right) . \end{aligned}$$
(6.8)

Obviously, the limit of \(K_{1}(kt)\) is the same as that of \(K_{1}(t)\). Moreover, we recall (5.9), and by Fact 3 and Fact 13, we have \(V_{kt}-W_{t}/\lambda ^{*}\rightarrow 0\) a.s. Therefore, \(|K_{2}(t)-K_{1}(t)|\rightarrow 0\) almost surely.

We will now concentrate on the second coordinate. The process of the total mass \(\left\{ |\Lambda _{t}|\right\} _{t\ge 0}\) is a continuous-state branching process (CSBP) (see [12, Sect. 10]). As such, it enjoys the branching property (see [12, 10.1]). Thus, for \(s\ge kt\) we may decompose

$$\begin{aligned} |\Lambda _{s}|=\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }F_{s-kt}^{i}+\hat{F}_{s-kt}, \end{aligned}$$
(6.9)

where \(\left\{ F_{s}^{i}\right\} _{s\ge 0}\) are independent CSBPs with initial mass 1 and \(\left\{ \hat{F}_{s}\right\} _{s\ge 0}\) is a CSBP with initial mass \(|\Lambda _{kt}|-\lfloor |\Lambda _{kt}|\rfloor \). Analogously to (4.1), the processes \(V_{s}^{i}:=e^{-\alpha s}F_{s}^{i}\) and \(\hat{V}_{s}:=e^{-\alpha s}\hat{F}_{s}\) are positive martingales with respective limits \(V_{\infty }^{i}\) and \(\hat{V}_{\infty }\) as described in Fact 3. Passing to the limit in (6.9), we get

$$\begin{aligned} V_{\infty }=e^{-k\alpha t}\left( \sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }V_{\infty }^{i}+\hat{V}_{\infty }\right) . \end{aligned}$$

One easily checks that

$$\begin{aligned} e^{-(k\alpha /2)t}\left( |\Lambda _{kt}|-e^{k\alpha t}V_{\infty }\right) -e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i})=e^{-(k\alpha /2)t}\left( |\Lambda _{kt}|-\lfloor |\Lambda _{kt}|\rfloor -\hat{V}_{\infty }\right) \rightarrow 0, \end{aligned}$$
(6.10)

in probability.

We pass to analyze the third coordinate of (6.8). We recall that \(M_{t}^{i}=e^{-((k-1)\alpha /2)t}\langle \Gamma _{(k-1)t}^{i,t},\tilde{f}\rangle \) and observe that by (5.7) and (5.22) we have

$$\begin{aligned} \mathbb {E}{}\left| e^{-(k\alpha /2)t}\langle \Lambda _{kt},\tilde{f}\rangle -e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}M_{t}^{i}\right| \le e^{-(k\alpha /2)t}\mathbb {E}{}\langle Y_{(k-1)t}^{t},|\tilde{f}|\rangle \rightarrow 0. \end{aligned}$$
(6.11)

This follows easily by (5.23) and (6.7) (the second proviso). To recapitulate, we set

$$\begin{aligned} K_{3}(t):=\left( e^{-\alpha t}|Z_{t}|/\lambda ^{*},e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i}),e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}M_{t}^{i}\right) . \end{aligned}$$
(6.12)

By (6.10) and (6.11), we have \(|K_{3}(t)-K_{2}(t)|\rightarrow 0\).

We recall also \(m_{t}^{i}=\mathbb {E}{}\left( M_{t}^{i}|Z_{t}\right) =\mathbb {E}{}\left( M_{t}^{i}|Z_{t}(i)\right) \), with \(Z_{t}(i)\) being the location of the ith particle of Z (in some ordering). We define

$$\begin{aligned} K_{4}(t):=\left( e^{-\alpha t}|Z_{t}|/\lambda ^{*},e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i}),e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})\right) . \end{aligned}$$

By (5.22) and assumption (S2), we have

$$\begin{aligned} |m_{t}^{i}|\le c_{1}e^{-((k-1)\alpha /2)t}e^{(k-1)\alpha t}|\mathcal {P}_{(k-1)t}\tilde{f}(Z_{t}(i))|\le e^{(\alpha (k-1)/2)t}e^{-\mu (k-1)t}R_{1}(Z_{t}(i)). \end{aligned}$$

Further by (3.4), (5.27), the fact that \(X_{0}\in \mathcal {M}_{F}(\mathbb {R}^{d})\) and using the first proviso of (6.7) we obtain

$$\begin{aligned} \mathbb {E}{}\left( e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}|m_{t}^{i}|\right) \le e^{(\alpha /2)t}e^{\alpha ((k-1)/2)t}e^{-\mu (k-1)t}\langle \mathcal {P}_{t}R_{1},X_{0}\rangle \rightarrow 0. \end{aligned}$$
(6.13)

In this way, we have established that \(|K_{3}(t)-K_{4}(t)|\rightarrow 0\).

Finally, we deal with \(|K_{5}(t)-K_{4}(t)|\). We introduce a truncation in order to control moments in the next section and in Lemma 19; the choice of the level \(\log t\) is somewhat arbitrary. We define I(t) and use conditional expectation to compute

$$\begin{aligned} I(t)&:=\mathbb {E}{}\left( e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{} \ge \log t\right\} }\right) ^{2}\nonumber \\&=e^{-\alpha t}\mathbb {E}{}\left( \sum _{i=1}^{|Z_{t}|}\mathbb {E}{}\left( (M_{t}^{i}-m_{t}^{i})^{2}|Z_{t}(i)\right) 1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}\ge \log t\right\} }\right) \nonumber \\&\le e^{-\alpha t}\mathbb {E}{}\left( \sum _{i=1}^{|Z_{t}|}\mathbb {E}{}\left( (M_{t}^{i})^{2}|Z_{t}(i)\right) 1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}\ge \log t\right\} }\right) . \end{aligned}$$
(6.14)

We recall (5.18), (5.22) and obtain

$$\begin{aligned} \mathbb {E}{}\left( (M_{t}^{i})^{2}|Z_{t}\right)\le & {} c_{1}e^{-((k-1)\alpha )t}\int _{0}^{(k-1)t}\mathcal {P}_{(k-1)t-s}^{\alpha }\left[ \left( \mathcal {P}_{s}^{\alpha }\tilde{f}(\cdot )\right) ^{2}\right. \nonumber \\&\left. +\left( u_{\tilde{f}}^{*,1}(\cdot ,s)\right) ^{2}+u_{\tilde{f}}^{*,2}(\cdot ,s)\right] (Z_{t}(i)) \mathrm d s. \end{aligned}$$

We treat the first term. By (S2), \(\alpha <2\mu \) and (3.4), we obtain

$$\begin{aligned}&c_{1}e^{-((k-1)\alpha )t}\int _{0}^{(k-1)t}\mathcal {P}_{(k-1)t-s}^{\alpha }\left[ \left( \mathcal {P}_{s}^{\alpha }\tilde{f}(\cdot )\right) ^{2}\right] (Z_{t}(i)) \mathrm d s\\&\quad \le \int _{0}^{(k-1)t}e^{(\alpha -2\mu )s}\mathcal {P}_{(k-1)t-s}\left[ R_{1}^{2}\right] (Z_{t}(i)) \mathrm d s\le R_{2}(Z_{t}(i)). \end{aligned}$$

Other terms are easier and left to the reader. We conclude that

$$\begin{aligned} \mathbb {E}{}\left( (M_{t}^{i})^{2}|Z_{t}\right) \le R_{3}(Z_{t}(i)). \end{aligned}$$
(6.15)

By (5.27) and the Cauchy–Schwarz inequality, we conclude that

$$\begin{aligned} I(t)&\le e^{-\alpha t}\mathbb {E}{}\left( \sum _{i=1}^{|Z_{t}|}R_{3}(Z_{t}(i))1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}\ge \log t\right\} }\right) =\lambda ^{*}\langle \mathcal {P}_{t}\left[ R_{3}(\cdot )1_{\left\{ \Vert \cdot \Vert _{}^{}\ge \log t\right\} }\right] ,X_{0}\rangle \\&\le |X_{0}|\sup _{x\in \text {supp}(X_{0})}\left[ \mathcal {P}_{t}R_{4}^{2}(x)\right] ^{1/2}\left[ \left( \mathcal {P}_{t}1_{\Vert \cdot \Vert _{}^{}\ge \log t}\right) (x)\right] ^{1/2}. \end{aligned}$$

For any \(x\in \mathbb {R}^{d}\), using (S1), we get

$$\begin{aligned} \limsup _{t\rightarrow +\infty }\left( \mathcal {P}_{t}1_{\Vert \cdot \Vert _{}^{}\ge \log t}\right) (x)\le \limsup _{y\rightarrow +\infty }\limsup _{t\rightarrow +\infty }\left( \mathcal {P}_{t}1_{\Vert \cdot \Vert _{}^{}\ge y}\right) (x)=\limsup _{y\rightarrow +\infty }\langle 1_{\Vert \cdot \Vert _{}^{}\ge y},\varphi \rangle =0. \end{aligned}$$

The function \(x\mapsto \left( \mathcal {P}_{t}1_{\Vert \cdot \Vert _{}^{}\ge \log t}\right) (x)\) is continuous and the support of \(X_{0}\) is compact; thus

$$\begin{aligned} \limsup _{t\rightarrow +\infty }\sup _{x\in \text {supp}(X_{0})}\left( \mathcal {P}_{t}1_{\Vert \cdot \Vert _{}^{}\ge \log t}\right) (x)=0. \end{aligned}$$

This and (3.4) imply \(I(t)\rightarrow 0\) and consequently \(|K_{5}(t)-K_{4}(t)|\rightarrow 0\).

6.2 Proof of (6.4)

We will use characteristic functions. It will be convenient to work conditionally on the event \(\mathcal {E}_{t}:=\left\{ |\Lambda _{kt}|\ge t\right\} \cap \left\{ |Z_{t}|\ge t\right\} \) (we denote the corresponding expectation by \(\mathbb {E}{}^{t}\)). We set

$$\begin{aligned} \chi _{1}(\theta _{1},\theta _{2},\theta _{3};t):=\mathbb {E}{}^{t}\exp \left\{ \text {i}\theta _{1}e^{-\alpha t}|Z_{t}|/\lambda ^{*}+\text {i}\theta _{2}\lfloor |\Lambda _{kt}|\rfloor ^{-1/2}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i})\right. \\ \left. +\,\text {i}\theta _{3}\left( |Z_{t}|/\lambda ^{*}\right) ^{-1/2}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}<\log t\right\} }\right\} , \end{aligned}$$

and

$$\begin{aligned} \chi _{3}(\theta _{1},\theta _{2},\theta _{3};t):=\left[ e^{-(\theta _{3}\sigma _{f})^{2}/2}e^{-(\theta _{2}\sigma _{V})^{2}/2}\right] \mathbb {E}^{t}\exp \left\{ \text {i}\theta _{1}e^{-\alpha t}|Z_{t}|/\lambda ^{*}\right\} . \end{aligned}$$

We shall show that for any \(\theta _{1},\theta _{2},\theta _{3}\), we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }|\chi _{1}(\theta _{1},\theta _{2},\theta _{3};t)-\chi _{3}(\theta _{1},\theta _{2},\theta _{3};t)|=0. \end{aligned}$$
(6.16)

Secondly, we notice that for any \(\epsilon >0\) we have \(\mathbb {P}(\epsilon \le |\Lambda _{kt}|\le t)\rightarrow 0\) and \(\mathbb {P}(\epsilon \le |Z_{t}|\le t)\rightarrow 0\); thus, \(1_{\mathcal {E}_{t}}\rightarrow 1_{\text {Ext}^{c}}\) a.s. and Fact 3 implies that

$$\begin{aligned} \lim _{t\rightarrow +\infty }\chi _{3}(\theta _{1},\theta _{2},\theta _{3};t)=\left[ e^{-(\theta _{3}\sigma _{f})^{2}/2}e^{-(\theta _{2}\sigma _{V})^{2}/2}\right] \left( \mathbb {E}\exp \left\{ \text {i}\theta _{1}\hat{V}_{\infty }\right\} \right) . \end{aligned}$$
(6.17)

Using \(1_{\mathcal {E}_{t}}\rightarrow 1_{\text {Ext}^{c}}\) a.s. it is a standard task to conclude (6.4) from (6.16) and (6.17).

To get (6.16), we will introduce an intermediate function \(\chi _{2}\) and show that \(|\chi _{i}(\theta _{1},\theta _{2},\theta _{3};t)-\chi _{i+1}(\theta _{1},\theta _{2},\theta _{3};t)|\rightarrow _{t}0\) for \(i\in \left\{ 1,2\right\} \) using the central limit theorem.

Let h be the characteristic function of \(1-V_{\infty }^{i}\). One checks that all the random variables in the definition of \(\chi _{1}\), except for the \(V_{\infty }^{i}\), are measurable with respect to the \(\sigma \)-field \(\mathcal {F}\) generated by \(\left\{ \Lambda _{ks},Z_{s}\right\} _{s\le t}\). Moreover, conditionally on \(\mathcal {F}\), the \(V_{\infty }^{i}\) are i.i.d. By Fact 3, we have \(\mathbb {E}{}(1-V_{\infty }^{i})=0\) and \(\text {Var}(V_{\infty }^{i})=\sigma _{V}^{2}\). Using conditional expectation, we obtain

$$\begin{aligned}&\chi _{1}(\theta _{1},\theta _{2},\theta _{3};t)\\&\quad =\mathbb {E}{}^{t}\left[ \exp \left\{ \text {i}\theta _{1}e^{-\alpha t}|Z_{t}|/\lambda ^{*}+\text {i}\theta _{3}\left( |Z_{t}|/\lambda ^{*}\right) ^{-1/2}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}<\log t\right\} }\right\} \right. \\&\qquad \left. h\left( \theta _{2}\lfloor |\Lambda _{kt}|\rfloor ^{-1/2}\right) ^{\lfloor |\Lambda _{kt}|\rfloor }\right] . \end{aligned}$$

The central limit theorem yields \(h\left( \theta _{2}/\sqrt{n}\right) ^{n}\rightarrow e^{-(\theta _{2}\sigma _{V})^{2}/2}\). This motivates the following definition

$$\begin{aligned}&\chi _{2}(\theta _{1},\theta _{2},\theta _{3};t):=\left[ e^{-(\theta _{2}\sigma _{V})^{2}/2}\right] \\&\mathbb {E}{}^{t}\exp \left\{ \text {i}\theta _{1}e^{-\alpha t}|Z_{t}|/\lambda ^{*}+\text {i}\theta _{3}\left( |Z_{t}|/\lambda ^{*}\right) ^{-1/2}\sum _{i=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}<\log t\right\} }\right\} . \end{aligned}$$

Lebesgue's dominated convergence theorem and the definition of the event \(\mathcal {E}_{t}\) yield

$$\begin{aligned} |\chi _{1}(\theta _{1},\theta _{2},\theta _{3};t)-\chi _{2}(\theta _{1},\theta _{2},\theta _{3};t)|\le \mathbb {E}{}^{t}\left| h\left( \theta _{2}\lfloor |\Lambda _{kt}|\rfloor ^{-1/2}\right) ^{\lfloor |\Lambda _{kt}|\rfloor }-e^{-(\theta _{2}\sigma _{V})^{2}/2}\right| \rightarrow _{t}0. \end{aligned}$$
(6.18)
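The convergence \(h(\theta /\sqrt{n})^{n}\rightarrow e^{-(\theta \sigma _{V})^{2}/2}\) used above is easy to observe numerically. Purely for illustration (the actual law of \(V_{\infty }^{i}\) plays no role in this check), take \(V\sim \text {Exp}(1)\), so that \(\mathbb {E}(1-V)=0\), \(\sigma _{V}^{2}=1\) and h has a closed form:

```python
import cmath
import math

def h(s):
    # characteristic function of 1 - V for V ~ Exp(1):
    # E exp(i s (1 - V)) = exp(i s) / (1 + i s)
    return cmath.exp(1j * s) / (1.0 + 1j * s)

theta = 1.5
for n in (10**4, 10**6):
    gap = abs(h(theta / math.sqrt(n)) ** n - cmath.exp(-theta ** 2 / 2.0))
    assert gap < 0.05  # here the gap shrinks like O(n^{-1/2})
```

The dominating error comes from the third cumulant of \(1-V\), which enters at order \(n^{-1/2}\), in line with the classical Berry–Esseen-type rate.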

Similarly, we will deal with the other sum. We work conditionally on \(Z_{t}\), and for notational simplicity, we work with integer times. We introduce sequences \(\left\{ a_{n}\right\} _{n\ge 0},\left\{ p_{n}\right\} _{n\ge 0}\) such that \(a_{n}\in \mathbb {N}\), \(p_{n}\in \mathbb {R}^{da_{n}}\) (intuitively, \(a_{n}\) is the number of particles at time n and \(p_{n}\) lists their positions). We assume that \(a_{n}e^{-\alpha n}\rightarrow a>0\) and \(\Vert p_{n}(i)\Vert _{}^{}\le \log n\). We denote

$$\begin{aligned} S_{n}:=(\lambda ^{*})^{1/2}a_{n}^{-1/2}\sum _{i=1}^{a_{n}}\left( \tilde{M}_{n}^{i}-\tilde{m}_{n}^{i}\right) , \end{aligned}$$

where \(\tilde{M}_{n}^{i}\) is distributed as \(e^{-((k-1)\alpha /2)n}\langle \Gamma _{(k-1)n}^{i,n},\tilde{f}\rangle \) conditioned on \(Z_{n}(i)=p_{n}(i)\) (compare with \(M_{t}^{i}\) defined below (6.1)); we set also \(\tilde{m}_{n}^{i}:=\mathbb {E}{}\tilde{M}_{n}^{i}\). We are going to use the CLT to analyze \(S_{n}\). Firstly, we calculate its variance

$$\begin{aligned} v_{n}:=\text {Var}\left( S_{n}\right)&=\lambda ^{*}a_{n}^{-1}\sum _{i=1}^{a_{n}}\text {Var}(\tilde{M}_{n}^{i}-\tilde{m}_{n}^{i})\nonumber \\&=\lambda ^{*}a_{n}^{-1}\sum _{i=1}^{a_{n}}\mathbb {E}{}(\tilde{M}_{n}^{i})^{2}-\lambda ^{*}a_{n}^{-1}\sum _{i=1}^{a_{n}}(\tilde{m}_{n}^{i})^{2}. \end{aligned}$$
(6.19)

A proof analogous to (6.13) gives \(\lambda ^{*}a_{n}^{-1}\sum _{i=1}^{a_{n}}(\tilde{m}_{n}^{i})^{2}\rightarrow 0\). By (5.22), we have \(\mathbb {E}{}(\tilde{M}_{n}^{i})^{2}=e^{-\alpha (k-1)n}V_{\tilde{f}}^{2}(p_{n}(i),(k-1)n).\) Recalling (6.5), we obtain

$$\begin{aligned} \lim _{n\rightarrow +\infty }v_{n}=\sigma _{f}^{2}, \end{aligned}$$

where \(\sigma _{f}^{2}\) is given by (4.4). Secondly, we check the Lyapunov condition. Using Hölder’s inequality and (6.6), we get

$$\begin{aligned} a_{n}^{-2}\sum _{i=1}^{a_{n}}\mathbb {E}{}(\tilde{M}_{n}^{i}-\tilde{m}_{n}^{i})^{4}\le & {} c_{1}a_{n}^{-2}\sum _{i=1}^{a_{n}}\mathbb {E}{}(\tilde{M}_{n}^{i})^{4}\le a_{n}^{-2}\sum _{i=1}^{a_{n}}R_{1}(p_{n}(i))\\\le & {} a_{n}^{-1}\sup _{\Vert x\Vert _{}^{}\le \log n}R_{1}(x)\rightarrow _{n}0. \end{aligned}$$

Therefore, the CLT implies

$$\begin{aligned} S_{n}\rightarrow ^{d}\mathcal {N}\left( 0,\sigma _{f}^{2}\right) . \end{aligned}$$

Using the dominated convergence theorem in a manner similar to the case of \(\chi _{1}-\chi _{2}\), one can show that \(|\chi _{2}(\theta _{1},\theta _{2},\theta _{3};t)-\chi _{3}(\theta _{1},\theta _{2},\theta _{3};t)|\rightarrow _{t}0\).

6.3 Proof of Lemma 19

In order to prove (6.5), we will show that

$$\begin{aligned} e^{-\alpha t}V_{\tilde{f}}^{2}(0,t)\rightarrow \sigma _{f}^{2}/\lambda ^{*},\quad \sup _{\Vert x\Vert _{}^{}\le \log t}e^{-\alpha t}|V_{\tilde{f}}^{2}(x,t)-V_{\tilde{f}}^{2}(0,t)|\rightarrow 0. \end{aligned}$$
(6.20)

To get the first convergence, we use (5.18) and write

$$\begin{aligned} e^{-\alpha t}V_{\tilde{f}}^{2}(0,t)&= \frac{\psi ''(0)}{\lambda ^{*}}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\alpha }\tilde{f}(\cdot )\right) ^{2}\right] (0) \mathrm d s\\&\quad -\frac{1}{\lambda ^{*}}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\left[ \psi ''(0)\left( u_{\tilde{f}}^{*,1}(\cdot ,s)\right) ^{2}+(\alpha -\alpha ^{*})u_{\tilde{f}}^{*,2}(\cdot ,s)\right] (0) \mathrm d s\\&=:I_{1}(t)+I_{2}(t). \end{aligned}$$

We have

$$\begin{aligned} I_{1}(t)=\frac{\psi ''(0)}{\lambda ^{*}}\int _{0}^{t}e^{\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (0) \mathrm d s. \end{aligned}$$

Using (S2), the integrand in the last expression can be estimated as follows

$$\begin{aligned} 0\le L(x):=e^{\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (x)\le e^{(\alpha -2\mu )s}\mathcal {P}_{t-s}R_{1}(x). \end{aligned}$$
(6.21)

Using (3.4), we get \(L(0)\le c_{1}e^{(\alpha -2\mu )s}\) which, by assumption \(\alpha <2\mu \), is integrable with respect to s. By (S1) for any fixed \(s\ge 0\), we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (0)=\left\langle \varphi ,\left( \mathcal {P}_{s}\tilde{f}\right) ^{2}\right\rangle . \end{aligned}$$

Recalling (4.4) and appealing to Lebesgue's dominated convergence theorem, we conclude \(I_{1}(t)\rightarrow \sigma _{f}^{2}/\lambda ^{*}<+\infty .\) An analogous argument, using (5.13) and (5.24), gives

$$\begin{aligned} I_{2}(t)\rightarrow I_{2}:=-\frac{1}{\lambda ^{*}}\int _{0}^{\infty }e^{-\alpha s}\langle \varphi ,\psi ''(0)\left( u_{\tilde{f}}^{*,1}(\cdot ,s)\right) ^{2}+(\alpha -\alpha ^{*})u_{\tilde{f}}^{*,2}(\cdot ,s)\rangle \mathrm d s. \end{aligned}$$

By (S1), \(\varphi \) is an invariant measure; thus \(\langle \varphi ,\mathcal {P}_{u}f\rangle =\langle \varphi ,f\rangle \). Using (5.13), (5.14) and Fubini's theorem, we get

$$\begin{aligned} I_{2}=&-\frac{1}{\lambda ^{*}}\int _{0}^{\infty }e^{-\alpha s}\langle \varphi ,\psi ''(0)\left( \mathcal {P}_{s}^{\alpha ^{*}}\tilde{f}\right) ^{2}-(\alpha -\alpha ^{*})\psi ''(0)\int _{0}^{s}\mathcal {P}_{s-u}^{\alpha ^{*}}\left[ \left( \mathcal {P}_{u}^{\alpha ^{*}}\tilde{f}\right) ^{2}\right] \mathrm d u\rangle \mathrm d s\\ =&-\frac{\psi ''(0)}{\lambda ^{*}}\int _{0}^{\infty }e^{-\alpha s} \langle \varphi ,\left( \mathcal {P}_{s}^{\alpha ^{*}}\tilde{f}\right) ^{2}\rangle \mathrm d s\\&+\frac{(\alpha -\alpha ^{*})\psi ''(0)}{\lambda ^{*}}\int _{0}^{\infty }\int _{u}^{\infty }e^{-\alpha s}e^{\alpha ^{*}(s-u)}\langle \varphi ,\left[ \left( \mathcal {P}_{u}^{\alpha ^{*}}\tilde{f}\right) ^{2}\right] \rangle \mathrm d s \mathrm d u\\ =&-\frac{\psi ''(0)}{\lambda ^{*}}\int _{0}^{\infty }e^{-\alpha s}\langle \varphi ,\left( \mathcal {P}_{s}^{\alpha ^{*}}\tilde{f}\right) ^{2}\rangle \mathrm d s+\frac{\psi ''(0)}{\lambda ^{*}}\int _{0}^{\infty }e^{-\alpha u}\langle \varphi ,\left[ \left( \mathcal {P}_{u}^{\alpha ^{*}}\tilde{f}\right) ^{2}\right] \rangle \mathrm d u=0. \end{aligned}$$
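The interchange of integration order in the last step can be made explicit. Substituting \(r=s-u\) and using \(\alpha >0>\alpha ^{*}\),

```latex
\int_{0}^{\infty}\int_{u}^{\infty}e^{-\alpha s}e^{\alpha^{*}(s-u)}\,\mathrm{d}s\,\mathrm{d}u
  =\int_{0}^{\infty}e^{-\alpha u}\int_{0}^{\infty}e^{(\alpha^{*}-\alpha)r}\,\mathrm{d}r\,\mathrm{d}u
  =\frac{1}{\alpha-\alpha^{*}}\int_{0}^{\infty}e^{-\alpha u}\,\mathrm{d}u,
```

so the prefactor \(\alpha -\alpha ^{*}\) in front of the double integral cancels, which is exactly why the two terms in \(I_{2}\) sum to zero.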

Now we pass to the second statement of (6.20). We analyze the first term of (5.18) which is hardest and leave the other terms to the reader. Namely, we will prove that

$$\begin{aligned}&\sup _{\Vert x\Vert _{}^{}\le \log t}\left| \int _{0}^{t}e^{\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (x) \mathrm d s-\int _{0}^{t}e^{\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (0) \mathrm d s\right| \nonumber \\&\,\qquad \le \int _{0}^{t}f(s,t) \mathrm d s\rightarrow _{t}0, \end{aligned}$$
(6.22)

where

$$\begin{aligned} f(s,t):=e^{\alpha s}\sup _{\Vert x\Vert _{}^{}\le \log t}\left| \mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}\right) ^{2}\right] (x)-\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}\right) ^{2}\right] (0)\right| . \end{aligned}$$
(6.23)

We recall (6.21) and notice that \(f(s,t)\le 2\sup _{\Vert x\Vert _{}^{}\le \log t}L(x)\); thus

$$\begin{aligned} f(s,t)&\le 2 e^{(\alpha -2\mu )s}\sup _{\Vert x\Vert _{}^{}\le \log t} \mathcal {P}_{t-s}\left[ R_{1}\right] (x)\\&=2e^{(\alpha -2\mu )s}\left[ \langle R_{1},\varphi \rangle +\sup _{\Vert x\Vert _{}^{}\le \log t}\left( \mathcal {P}_{t-s}\left[ R_{1}\right] (x)-\langle R_{1},\varphi \rangle \right) \right] \\&\le 2e^{(\alpha -2\mu )s}\left[ \langle R_{1},\varphi \rangle +e^{-\mu (t-s)}\sup _{\Vert x\Vert _{}^{}\le \log t}R_{2}(x)\right] \le c_{1}e^{(\alpha -2\mu )s/2}. \end{aligned}$$

Fix \(s\ge 0\). We denote \(H_{s}(x)=\left( \mathcal {P}_{s}\tilde{f}\right) ^{2}(x)\) and \(\tilde{H}_{s}=H_{s}-\langle H_{s},\varphi \rangle \). Applying (S2) and using the triangle inequality, we get

$$\begin{aligned} \limsup _{t\rightarrow +\infty }f(s,t)\le & {} \limsup _{t\rightarrow +\infty }\left( 2e^{\alpha s}\sup _{\Vert x\Vert _{}^{}\le \log t}\left| \mathcal {P}_{t-s}\tilde{H}_{s}(x)\right| \right) \\\le & {} 2e^{\alpha s}\limsup _{t\rightarrow +\infty }\left( e^{-\mu (t-s)}\sup _{\Vert x\Vert _{}^{}\le \log t}R_{1}(x)\right) =0. \end{aligned}$$

Now, the convergence in (6.22) follows by Lebesgue's dominated convergence theorem, and we conclude the proof of (6.20).

In order to prove (6.6), we apply the triangle inequality to (5.19) and use Lemma 17:

$$\begin{aligned} |V_{\tilde{f}}^{k}(x,t)|&\le c_{1} \sum _{\mathbf {m}\in B_{k}}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \prod _{j=1}^{k}\left| -\lambda ^{*}V_{\tilde{f}}^{j}(\cdot ,s)+u_{\tilde{f}}^{*,j}(\cdot ,s)\right| ^{m_{j}}\right] (x) \mathrm d s\\&\quad +c_{1}\sum _{\mathbf {m}\in B_{k}}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \prod _{j=1}^{k}\left| u_{\tilde{f}}^{*,j}(\cdot ,s)\right| ^{m_{j}}\right] (x) \mathrm d s+\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ u_{\tilde{f}}^{*,k}(\cdot ,s)\right] (x) \mathrm d s. \end{aligned}$$

For simplicity, we skip all terms \(u_{\tilde{f}}^{*,k}\) (which, by (5.25) and \(\alpha ^{*}<0\), are easy to control). We will thus consider

$$\begin{aligned} S_{k}(x,t):=\sum _{\mathbf {m}\in B_{k}}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \prod _{j=1}^{k}\left| V_{\tilde{f}}^{j}(\cdot ,s)\right| ^{m_{j}}\right] (x) \mathrm d s. \end{aligned}$$
(6.24)

For \(k=2\), calculations similar to (6.21) lead easily to \(S_{2}(x,t)\le e^{\alpha t}R_{1}(x)\). Thus, we conclude

$$\begin{aligned} |V_{\tilde{f}}^{2}(x,t)|\le e^{\alpha t}R_{2}(x). \end{aligned}$$
(6.25)

For \(k=3\), we recall (5.15) and use (3.2), (5.17) and (6.25) to get

$$\begin{aligned} S_{3}(x,t)&\le c_{1}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| V_{\tilde{f}}^{1}(\cdot ,s)V_{\tilde{f}}^{2}(\cdot ,s)\right| +\left| V_{\tilde{f}}^{1}(\cdot ,s)\right| ^{3}\right] (x) \mathrm d s\\&\le \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| e^{(\alpha -\mu )s}R_{1}(\cdot )e^{\alpha s}R_{2}(\cdot )\right| +\left| e^{(\alpha -\mu )s}R_{3}(\cdot )\right| ^{3}\right] (x) \mathrm d s. \end{aligned}$$

Using the assumption \(\alpha <2\mu \) and (3.4), we estimate

$$\begin{aligned} S_{3}(x,t)\le e^{\alpha t}\int _{0}^{t}e^{(\alpha -\mu )s}\mathcal {P}_{t-s}R_{4}(x) \mathrm d s\le R_{5}(x)e^{\alpha t}\int _{0}^{t}e^{(\alpha -\mu )s} \mathrm d s\le e^{(3\alpha /2)t}R_{5}(x). \end{aligned}$$
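The last inequality can be checked directly: the assumption \(\alpha <2\mu \) gives \(\alpha -\mu <\alpha /2\), and hence (assuming, as throughout the supercritical setting, that \(\alpha >0\))

$$\begin{aligned} e^{\alpha t}\int _{0}^{t}e^{(\alpha -\mu )s} \mathrm d s\le e^{\alpha t}\int _{0}^{t}e^{(\alpha /2)s} \mathrm d s\le \frac{2}{\alpha }e^{(3\alpha /2)t}. \end{aligned}$$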

This yields that

$$\begin{aligned} |V_{\tilde{f}}^{3}(x,t)|\le e^{(3\alpha /2)t}R_{6}(x). \end{aligned}$$
(6.26)

Finally, we pass to \(k=4\). We recall (5.15) and use (3.2), (5.17), (6.25) and (6.26) to get

$$\begin{aligned} S_{4}(x,t)&\le c_{1}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| V_{\tilde{f}}^{1} (\cdot ,s)V_{\tilde{f}}^{3}(\cdot ,s)\right| +\left| V_{\tilde{f}}^{2}(\cdot ,s)\right| ^{2}\right. \\&\quad \left. +\left| V_{\tilde{f}}^{1}(\cdot ,s)\right| ^{2} \left| V_{\tilde{f}}^{2}(\cdot ,s)\right| +\left| V_{\tilde{f}}^{1}(\cdot ,s)\right| ^{4}\right] (x) \mathrm d s\\&\le \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| e^{(\alpha -\mu )s}R_{1}(\cdot )e^{(3\alpha /2)s}R_{6}(\cdot )\right| +\left| e^{\alpha s}R_{2}(\cdot )\right| ^{2}+\right. \\&\quad \left. \left| e^{(\alpha -\mu )s}R_{1}(\cdot )\right| ^{2}\left| e^{\alpha s}R_{2}(\cdot )\right| +\left| e^{(\alpha -\mu )s}R_{1}(\cdot )\right| ^{4}\right] (x) \mathrm d s. \end{aligned}$$

Using the assumption \(\alpha <2\mu \) and (3.4), we get

$$\begin{aligned} S_{4}(x,t)\le e^{\alpha t}\int _{0}^{t}e^{\alpha s}\mathcal {P}_{t-s}R_{7}(x) \mathrm d s\le R_{8}(x)e^{\alpha t}\int _{0}^{t}e^{\alpha s} \mathrm d s\le e^{2\alpha t}R_{8}(x). \end{aligned}$$

This is enough to conclude (6.6).

7 Proof of Theorem 8

In this section, we fix \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) and make the standing assumption that (B1), (S1), (S2), (S3) and \(\alpha >2\mu \) hold. Proving the convergence of the whole vectors (4.9) and (4.10) would be notationally cumbersome. As it follows the same lines as the proof of Theorem 4, it is left to the reader. We focus on the most important part, which is the convergence of the second coordinate of (4.10). Recalling (3.1) and the backbone construction given in Definition 11, we denote

$$\begin{aligned} Y_{1}(t):=e^{-(\alpha -\mu )t}\left( \langle \Lambda _{t},f\rangle -|\Lambda _{t}|\langle f,\varphi \rangle \right) =e^{-(\alpha -\mu )t}\langle \Lambda _{t},\tilde{f}\rangle . \end{aligned}$$

We shall prove that

$$\begin{aligned} Y_{1}(t)-\langle fh,\varphi \rangle \cdot H_{\infty }\rightarrow 0,\quad \text {in probability}, \end{aligned}$$
(7.1)

where, slightly abusing notation, we use \(H_{\infty }\) to denote the limit of the martingale (4.7) defined for \(\left\{ \Lambda _{t}\right\} _{t\ge 0}\). By Theorem 12, the processes X and \(\Lambda \) have the same law, and thus (7.1) implies the convergence

$$\begin{aligned} \left[ e^{-(\alpha -\mu )t}\left( \langle X_{t},f\rangle -|X_{t}|\langle f,\varphi \rangle \right) \right] -\langle fh,\varphi \rangle \cdot H_{\infty }\rightarrow ^{d}0, \end{aligned}$$

and hence also in probability. This establishes the convergence of the second coordinate in (4.10). Before the proof, we formulate a technical lemma.

Lemma 20

There exists \(R\in \mathcal {C}_{0}(\mathbb {R}^{d})\) such that

$$\begin{aligned} |V_{\tilde{f}}^{2}(x,t)|\le e^{2(\alpha -\mu )t}R(x). \end{aligned}$$
(7.2)

We will define intermediate processes \(Y_{2},Y_{3},Y_{4}\). The convergence (7.1) will follow immediately once we show

$$\begin{aligned}&|Y_{1}(t+j(t))-Y_{2}(t)|\rightarrow 0,\quad |Y_{2}(t)-Y_{3}(t)|\rightarrow 0,\nonumber \\&|Y_{3}(t)-Y_{4}(t)|\rightarrow 0,\quad |Y_{4}(t)-\langle fh,\varphi \rangle \cdot H_{\infty }|\rightarrow 0, \end{aligned}$$
(7.3)

where the convergences hold in probability and \(j:\mathbb {R}_{+}\mapsto \mathbb {R}_{+}\) is a continuous function.

Recall (5.7) and let us set

$$\begin{aligned} Y_{2}(t):=e^{-(\alpha -\mu )t}\sum _{i=1}^{|Z_{t}|}e^{-(\alpha -\mu )j(t)}\langle \Gamma _{j(t)}^{i,t},\tilde{f}\rangle , \end{aligned}$$

choosing j to be any continuous function fulfilling

$$\begin{aligned} j(t)\ge -\frac{\alpha +1}{\alpha ^{*}}t,\quad \text {and}\quad r(j(t))e^{\mu t}\rightarrow 0, \end{aligned}$$
(7.4)

where r is the function introduced in (S3) and \(\alpha ^{*}\) defined in (5.2). Using (3.4), (5.8) and (5.23), we get

$$\begin{aligned} \mathbb {E}{}|Y_{1}(t+j(t))-Y_{2}(t)|=\mathbb {E}{}|Y_{j(t)}^{t}|\le e^{\alpha t+\alpha ^{*}j(t)}\langle \mathcal {P}_{t+j(t)}|f|,X_{0}\rangle \le c_{1}e^{\alpha t+\alpha ^{*}j(t)}\rightarrow 0, \end{aligned}$$

where we used the first proviso in (7.4).
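In detail, since \(\alpha ^{*}<0\), the first proviso of (7.4) gives \(\alpha ^{*}j(t)\le -(\alpha +1)t\), so that

$$\begin{aligned} \alpha t+\alpha ^{*}j(t)\le \alpha t-(\alpha +1)t=-t\rightarrow -\infty . \end{aligned}$$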

We proceed to the second convergence of (7.3). Let \(M_{t}^{i}:=e^{-(\alpha -\mu )j(t)}\langle \Gamma _{j(t)}^{i,t},\tilde{f}\rangle \) and \(m_{t}^{i}:=\mathbb {E}{}(M_{t}^{i}|Z_{t})=\mathbb {E}{}(M_{t}^{i}|Z_{t}(i))\) and we set

$$\begin{aligned} Y_{3}(t):=e^{-(\alpha -\mu )t}\sum _{i=1}^{|Z_{t}|}m_{t}^{i}. \end{aligned}$$

Clearly, \(\mathbb {E}{}\left( M_{t}^{i}-m_{t}^{i}|Z_{t}\right) =0\) and \(\left\{ M_{t}^{i}-m_{t}^{i}\right\} _{i}\) are independent conditionally on \(Z_{t}\), thus

$$\begin{aligned} \mathbb {E}{}\left( Y_{2}(t)-Y_{3}(t)\right) ^{2}&=e^{-2(\alpha -\mu )t}\mathbb {E}{}\left( \mathbb {E}{}\left( \left. \sum _{i=1}^{|Z_{t}|}\sum _{j=1}^{|Z_{t}|}(M_{t}^{i}-m_{t}^{i})(M_{t}^{j}-m_{t}^{j})\right| Z_{t}\right) \right) \end{aligned}$$
(7.5)
$$\begin{aligned}&=e^{-2(\alpha -\mu )t}\mathbb {E}{}\left( \sum _{i=1}^{|Z_{t}|}\sum _{j=1}^{|Z_{t}|}\mathbb {E}{}\left( (M_{t}^{i}-m_{t}^{i})(M_{t}^{j}-m_{t}^{j})|Z_{t}\right) \right) \nonumber \\&=e^{-2(\alpha -\mu )t}\mathbb {E}{}\left( \sum _{i=1}^{|Z_{t}|}\mathbb {E}{}\left( (M_{t}^{i}-m_{t}^{i})^{2}|Z_{t}\right) \right) . \end{aligned}$$
(7.6)

By (5.22) and (5.26), we get

$$\begin{aligned} \mathbb {E}{}\left( (M_{t}^{i}-m_{t}^{i})^{2}|Z_{t}\right) \le \mathbb {E}{}\left( (M_{t}^{i})^{2}|Z_{t}\right) \le R_{1}(Z_{t}(i)). \end{aligned}$$

Using (5.27), \(\alpha >2\mu \) and (3.4), we obtain

$$\begin{aligned} \mathbb {E}{}\left( Y_{2}(t)-Y_{3}(t)\right) ^{2}\le c_{1}e^{-2(\alpha -\mu )t}\mathbb {E}{}\langle Z_{t},R_{1}\rangle \le c_{2}e^{-2(\alpha -\mu )t}e^{\alpha t}\langle \mathcal {P}_{t}R_{1},X_{0}\rangle \rightarrow 0, \end{aligned}$$
(7.7)

which establishes the second convergence in (7.3).

We recall (5.17) and (5.22) to get

$$\begin{aligned} m_{t}^{i}=e^{-(\alpha -\mu )j(t)}\frac{e^{\alpha j(t)}-e^{\alpha ^{*}j(t)}}{\lambda ^{*}}\left( \mathcal {P}_{j(t)}\tilde{f}(Z_{t}(i))\right) =l(t)\frac{1}{\lambda ^{*}}\left( e^{\mu j(t)}\mathcal {P}_{j(t)}\tilde{f}(Z_{t}(i))\right) , \end{aligned}$$

where \(l(t)=\left( 1-e^{(\alpha ^{*}-\alpha )j(t)}\right) \). Following (S3), we decompose \(m_{t}^{i}=\tilde{m}_{t}^{i}+\hat{m}_{t}^{i}\) with

$$\begin{aligned}&\tilde{m}_{t}^{i}:=l(t)\frac{1}{\lambda ^{*}}\left( e^{\mu j(t)}\mathcal {P}_{j(t)}\tilde{f}(Z_{t}(i))-\langle fh,\varphi \rangle \cdot h(Z_{t}(i))\right) ,\\&\quad \hat{m}_{t}^{i}:=l(t)\frac{1}{\lambda ^{*}}\langle fh,\varphi \rangle \cdot h(Z_{t}(i)). \end{aligned}$$

We recall (5.10) and write

$$\begin{aligned} Y_{4}(t):=e^{-(\alpha -\mu )t}\sum _{i=1}^{|Z_{t}|}\hat{m}_{t}^{i}=l(t)\langle fh,\varphi \rangle \cdot \frac{I_{t}}{\lambda ^{*}}. \end{aligned}$$

By (S3) we have \(|\tilde{m}_{t}^{i}|\le r(j(t))R_{1}(Z_{t}(i))\). Applying (5.27), the second proviso of (7.4) and (3.4) we obtain

$$\begin{aligned} \mathbb {E}{}|Y_{3}(t)-Y_{4}(t)|\le & {} e^{-(\alpha -\mu )t}\mathbb {E}{}\sum _{i=1}^{|Z_{t}|}|\tilde{m}_{t}^{i}|\le e^{-(\alpha -\mu )t}r(j(t))\mathbb {E}{}\langle Z_{t},R_{1}\rangle \\\le & {} c_{1}e^{\mu t}\langle \mathcal {P}_{t}R_{1},X_{0}\rangle r(j(t))\rightarrow 0, \end{aligned}$$

thus the third convergence of (7.3) holds.

Finally, noticing \(l(t)\rightarrow 1\) and using Fact 13, we get

$$\begin{aligned} Y_{4}(t)\rightarrow \langle fh,\varphi \rangle \cdot \frac{I_{\infty }}{\lambda ^{*}}=\langle fh,\varphi \rangle \cdot H_{\infty },\quad \text {a.s.} \end{aligned}$$

This is the last statement of (7.3), and thus the proof is concluded.

7.1 Proof of Lemma 20

By (3.2), (5.18), (5.13) and (5.24), we obtain

$$\begin{aligned} |V_{\tilde{f}}^{2}(x,t)|&\le c_{1}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left( \mathcal {P}_{s}^{\alpha }\tilde{f}(\cdot )\right) ^{2}+\left( u_{f}^{*,1}(\cdot ,s)\right) ^{2}+\left| u_{f}^{*,2}(\cdot ,s)\right| \right] (x) \mathrm d s\\&\le \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left( e^{(\alpha -\mu )s}R_{1}(\cdot )\right) ^{2}+\left( e^{(\alpha ^{*}-\mu )s}R_{2}(\cdot )\right) ^{2}+\left| e^{\alpha ^{*}s}R_{3}(\cdot )\right| \right] (x) \mathrm d s. \end{aligned}$$

Using (3.4), we get

$$\begin{aligned} |V_{\tilde{f}}^{2}(x,t)|\le e^{\alpha t}\int _{0}^{t}e^{(\alpha -2\mu )s}\mathcal {P}_{t-s}R_{4}(x) \mathrm d s\le R_{5}(x)e^{\alpha t}\int _{0}^{t}e^{(\alpha -2\mu )s} \mathrm d s\le c_{1}R_{5}(x)e^{2(\alpha -\mu )t}, \end{aligned}$$

where the last step uses the standing assumption \(\alpha >2\mu \). This proves (7.2).

7.2 Proof of Fact 7

We recall \(h=(h_{1},\ldots ,h_{k})\) introduced in (S3). By Lemma 15, (5.20) and (S3) for any \(i\in \left\{ 1,\ldots ,k\right\} \), we get

$$\begin{aligned} \mathbb {E}{}\langle X_{t},h_{i}\rangle =\langle X_{0},u_{h_{i}}^{1}(\cdot ,t)\rangle =e^{\alpha t}\langle X_{0},\mathcal {P}_{t}h_{i}\rangle =e^{(\alpha -\mu )t}\langle X_{0},h_{i}\rangle . \end{aligned}$$

Using the Markov property, one concludes that H is a martingale.

Let \(\alpha >2\mu \), \(i\in \left\{ 1,\ldots ,k\right\} \), by (S3), (5.14) and (3.4), we get

$$\begin{aligned} e^{-2(\alpha -\mu )t}|u_{h_{i}}^{2}(x,t)|&\le e^{-2(\alpha -\mu )t} \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ (\mathcal {P}_{s}^{\alpha }h_{i}(\cdot ))^{2}\right] (x) \mathrm d s\\&=e^{-2(\alpha -\mu )t}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ e^{2(\alpha -\mu )s}h_{i}{}^{2}\right] (x) \mathrm d s\\&=e^{-(\alpha -2\mu )t}\int _{0}^{t}e^{(\alpha -2\mu )s}\mathcal {P}_{t-s}\left[ h_{i}{}^{2}\right] (x) \mathrm d s\le R_{1}(x). \end{aligned}$$

By Lemma 15 and \(X_{0}\in \mathcal {M}_{F}(\mathbb {R}^{d})\), it now follows that the martingale H is \(L^{2}\)-bounded and thus converges in \(L^{2}\) and a.s.

7.3 Proof of Fact 13

The fact that W is a martingale is well known (see, for example, [3, Theorem A.6.1]). The properties of I are proved in [2, Sect. 3.3] (where its name is H).

We now concentrate on showing (5.12). Given the a.s. convergence of H and I, it is sufficient to show that, for some \(l>0\),

$$\begin{aligned} H_{(l+1)t}-\frac{1}{\lambda ^{*}}I_{t}\rightarrow 0, \end{aligned}$$

in probability. Recalling \(h=(h_{1},\ldots ,h_{k})\) introduced in (S3) and the decomposition (5.7), we obtain that for any \(j\in \left\{ 1,\ldots ,k\right\} \) the jth coordinate of \(H_{(l+1)t}-\frac{1}{\lambda ^{*}}I_{t}\) is given by

$$\begin{aligned}&e^{-(l+1)(\alpha -\mu )t}\langle Y_{lt}^{t},h_{j}\rangle +e^{-(\alpha -\mu )t}\sum _{i=1}^{|Z_{t}|}e^{-l(\alpha -\mu )t}\langle \Gamma _{lt}^{i,t},h_{j}\rangle -e^{-(\alpha -\mu )t}\frac{1}{\lambda ^{*}}\sum _{i=1}^{|Z_{t}|}h_{j}(Z_{t}(i))\\&\,\quad =e^{-(l+1)(\alpha -\mu )t}\langle Y_{lt}^{t},h_{j}\rangle +e^{-(\alpha -\mu )t}\sum _{i=1}^{|Z_{t}|}\left( M_{t}^{i}-m_{t}^{i}\right) \\&\,\qquad +e^{-(\alpha -\mu )t}\sum _{i=1}^{|Z_{t}|}\left( m_{t}^{i}-\frac{1}{\lambda ^{*}}h_{j}(Z_{t}(i))\right) \\&\,\quad =:I_{1}(t)+I_{2}(t)+I_{3}(t), \end{aligned}$$

where \(M_{t}^{i}:=e^{-l(\alpha -\mu )t}\langle \Gamma _{lt}^{i,t},h_{j}\rangle \) and \(m_{t}^{i}:=\mathbb {E}{}(M_{t}^{i}|Z_{t})=\mathbb {E}{}(M_{t}^{i}|Z_{t}(i))\). We use (5.23), (3.4) and (S3) to calculate

$$\begin{aligned}&\mathbb {E}{}|I_{1}(t)|\le e^{-(l+1)(\alpha -\mu )t}e^{\alpha t+\alpha ^{*}lt}\langle \mathcal {P}_{(l+1)t}|h_{j}|,X_{0}\rangle \\&\quad \le e^{-(l+1)(\alpha -\mu )t}e^{\alpha t}e^{\alpha ^{*}lt}\langle R_{1},X_{0}\rangle \le c_{1}e^{-(l+1)(\alpha -\mu )t}e^{\alpha t}e^{\alpha ^{*}lt}. \end{aligned}$$
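The exponent in the last bound simplifies, for each fixed l, to

$$\begin{aligned} -(l+1)(\alpha -\mu )t+\alpha t+\alpha ^{*}lt=\left( \mu -l(\alpha -\mu -\alpha ^{*})\right) t, \end{aligned}$$

which tends to \(-\infty \) whenever \(l>\mu /(\alpha -\mu -\alpha ^{*})\); here we assume, as (5.2) suggests, that \(\alpha -\mu -\alpha ^{*}>0\).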

By (5.2) we can choose l such that \(\mathbb {E}{}|I_{1}(t)|\rightarrow 0\). The proof of \(\mathbb {E}{}\left( I_{2}(t)\right) ^{2}\rightarrow 0\) is the same as the one of (7.7). Finally, for \(I_{3}\) we use (5.22) and (S3) to get

$$\begin{aligned} m_{t}^{i}-\frac{1}{\lambda ^{*}}h_{j}(Z_{t}(i))=-\frac{1}{\lambda ^{*}}e^{l(\alpha ^{*}-\alpha )t}h_{j}(Z_{t}(i)). \end{aligned}$$

The convergence \(I_{3}(t)\rightarrow 0\) follows by the convergence of the martingale I. Putting these together, we obtain (5.12). Relation (5.11) can be proven in a similar but simpler way. Details are left to the reader.

8 Proof of Theorem 6

In this section, we fix \(f\in \mathcal {C}_{0}(\mathbb {R}^{d})\) and make the standing assumption that (B1), (S1), (S2), (S3) and \(\alpha =2\mu \) hold. Let us first present the outline of the proof. We start with the following random vector

$$\begin{aligned} K_{1}(t):=\left( e^{-\alpha t}|\Lambda _{t}|,e^{-(\alpha /2)t}(|\Lambda _{t}|-e^{\alpha t}V_{\infty }),t^{-1/2}e^{-(\alpha /2)t}\langle \Lambda _{t},\tilde{f}\rangle \right) . \end{aligned}$$
(8.1)

We will define \(K_{2},K_{3},K_{4}\), which will fulfill the following relations. For any \(k>-\alpha /\alpha ^{*}\), we have

$$\begin{aligned} |K_{1}(t)-K_{2}(t;k)|\rightarrow 0,\quad |K_{3}(t;k)-K_{4}(t;k)|\rightarrow 0 \end{aligned}$$
(8.2)

in probability as \(t\rightarrow +\infty \); moreover,

$$\begin{aligned}&\limsup _{t\rightarrow +\infty }\mathbb {E}{}\Vert K_{2}(t;k)-K_{3}(t;k)\Vert _{}^{2}\le C/k,\nonumber \\&K_{4}(t;k)\rightarrow ^{d}\mathcal {L}_{k}:=\left( \hat{V}_{\infty },\sqrt{\hat{V}_{\infty }}G_{1},\left( \frac{k-1}{k}\right) ^{1/2}\sqrt{\hat{V}_{\infty }}G_{2}\right) , \end{aligned}$$
(8.3)

for some \(C>0\), and we recall that \(\Vert \cdot \Vert \) denotes the Euclidean norm. The last convergence holds conditionally on \(\text {Ext}^{c}\), and \(G_{1},G_{2}\) are the same as in Theorem 6. Proving the theorem is rather standard once (8.2) and (8.3) are established. Indeed, let \(\mathcal {L}_{\infty }\) denote the law of \(\left( \tilde{V}_{\infty },\sqrt{\tilde{V}_{\infty }}G_{1},\sqrt{\tilde{V}_{\infty }}G_{2}\right) \). For any probability measures \(\mu _{1},\mu _{2}\) on \(\mathbb {R}^{d}\), we define

$$\begin{aligned} m(\mu _{1},\mu _{2}):=\sup _{g\in \text {Lip}(1)}|\langle g,\mu _{1}\rangle -\langle g,\mu _{2}\rangle |, \end{aligned}$$

where \(\text {Lip}(1)\) is the space of continuous functions \(\mathbb {R}^{d}\mapsto [-1,1]\) with Lipschitz constant at most 1. It is well known that m is a metric which metrizes weak convergence. Moreover, when \(\mu _{1},\mu _{2}\) are the laws of two random variables \(X_{1},X_{2}\) on the same probability space, we have

$$\begin{aligned} m(\mu _{1},\mu _{2})\le \mathbb {E}{}\Vert X_{1}-X_{2}\Vert _{}^{}\le \sqrt{\mathbb {E}{}\Vert X_{1}-X_{2}\Vert _{}^{2}}. \end{aligned}$$
(8.4)
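The first inequality in (8.4) is a direct consequence of the Lipschitz property: for any \(g\in \text {Lip}(1)\),

$$\begin{aligned} |\langle g,\mu _{1}\rangle -\langle g,\mu _{2}\rangle |=|\mathbb {E}{}g(X_{1})-\mathbb {E}{}g(X_{2})|\le \mathbb {E}{}|g(X_{1})-g(X_{2})|\le \mathbb {E}{}\Vert X_{1}-X_{2}\Vert , \end{aligned}$$

and the second follows from Jensen's inequality.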

We fix \(\epsilon >0\) and choose k large enough that \(\sqrt{\mathbb {E}{}\Vert K_{2}(t;k)-K_{3}(t;k)\Vert ^{2}}\le \epsilon \) and \(m(\mathcal {L}_{k},\mathcal {L}_{\infty })\le \epsilon \). Further, we find \(T_{k}\) such that for any \(t>T_{k}\) one has \(m(K_{1}(t),K_{2}(t;k))\le \epsilon \), \(m(K_{3}(t;k),K_{4}(t;k))\le \epsilon \) and \(m(K_{4}(t;k),\mathcal {L}_{k})\le \epsilon \). With these choices, we get

$$\begin{aligned} m(K_{1}(t),\mathcal {L}_{\infty })\le 5\epsilon , \end{aligned}$$

for \(t\ge T_{k}\). The proof of Theorem 6 is concluded since \(\epsilon \) can be taken arbitrarily small.
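Explicitly, the bound \(5\epsilon \) follows from the triangle inequality for m, with the middle term controlled via (8.4):

$$\begin{aligned} m(K_{1}(t),\mathcal {L}_{\infty })&\le m(K_{1}(t),K_{2}(t;k))+m(K_{2}(t;k),K_{3}(t;k))+m(K_{3}(t;k),K_{4}(t;k))\\&\quad +m(K_{4}(t;k),\mathcal {L}_{k})+m(\mathcal {L}_{k},\mathcal {L}_{\infty })\le 5\epsilon . \end{aligned}$$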

Before the proofs of (8.2) and (8.3), we state a technical lemma.

Lemma 21

We have

$$\begin{aligned} \lim _{t\rightarrow +\infty }\sup _{\Vert x\Vert \le \log t}|t^{-1}e^{-\alpha t}V_{\tilde{f}}^{2}(x,t)-\sigma _{f}^{2}/\lambda ^{*}|=0, \end{aligned}$$
(8.5)

where \(\lambda ^{*}\) is given by (2.3) and \(\sigma _{f}^{2}\) by (4.6). Moreover, there exists \(R\in \mathcal {C}_{0}\) such that

$$\begin{aligned} |V_{\tilde{f}}^{4}(x,t)|\le t^{2}e^{2\alpha t}R(x). \end{aligned}$$
(8.6)

We will skip some details of the proof which are repetitions of arguments used in the proof of Theorem 4 or are easy to establish. In particular, we recall (5.7) and leave it to the reader to show that the first convergence in (8.2) holds with

$$\begin{aligned} K_{2}(t;k):=\left( \frac{1}{\lambda ^{*}}e^{-\alpha t}|Z_{t}|,e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i}),e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}M_{t}^{k,i}\right) , \end{aligned}$$

where \(M_{t}^{k,i}:=(kt)^{-1/2}e^{-((k-1)\alpha /2)t}\langle \Gamma _{(k-1)t}^{i,t},\tilde{f}\rangle \) and \(\left\{ V_{\infty }^{i}\right\} _{i}\) is an i.i.d. sequence distributed as in (4.2). We also define \(m_{t}^{k,i}:=\mathbb {E}{}\left( M_{t}^{k,i}|Z_{t}\right) =\mathbb {E}{}\left( M_{t}^{k,i}|Z_{t}(i)\right) \), \(H_{t}^{k}:=e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}m_{t}^{k,i}\) and

$$\begin{aligned} K_{3}(t;k):=\left( \frac{1}{\lambda ^{*}}e^{-\alpha t}|Z_{t}|,e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |\Lambda _{kt}|\rfloor }(1-V_{\infty }^{i}),e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}(M_{t}^{k,i}-m_{t}^{k,i})\right) . \end{aligned}$$
(8.7)

This expression differs from \(K_{2}(t;k)\) only by \(H_{t}^{k}\), so in order to show the first assertion of (8.3) we need to bound \(H_{t}^{k}\) from above in \(L^{2}\). Applying (5.17) to \(m_{t}^{k,i}\), we obtain

$$\begin{aligned} H_{t}^{k}=(kt)^{-1/2}e^{-(k\alpha /2)t}\frac{e^{(k-1)\alpha t}-e^{(k-1)\alpha ^{*}t}}{\lambda ^{*}}\sum _{i=1}^{|Z_{t}|}\mathcal {P}_{(k-1)t}\tilde{f}(Z_{t}(i)). \end{aligned}$$

By Lemma 18, we have

$$\begin{aligned} \mathbb {E}{}(H_{t}^{k})^{2}&\le c_{1}\frac{e^{(k-2)\alpha t}}{kt}\mathbb {E}{}\left( \sum _{i=1}^{|Z_{t}|}\mathcal {P}_{(k-1)t}\tilde{f}(Z_{t}(i))\right) ^{2}\\&\le c_{2}\frac{e^{(k-2)\alpha t}}{kt}\int _{\mathbb {R}^{d}}\left( \mathcal {P}_{t}^{\alpha }\left[ (\mathcal {P}_{(k-1)t}\tilde{f}(\cdot ))^{2}\right] (x)\right. \\&\quad \left. +\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left( \mathcal {P}_{s}^{\alpha }\mathcal {P}_{(k-1)t}\tilde{f}(\cdot )\right) ^{2}\right] (x) \mathrm d s\right) X_{0}( \mathrm d x). \end{aligned}$$

Using (S2), we estimate

$$\begin{aligned} \mathbb {E}{}(H_{t}^{k})^{2}\le & {} \frac{e^{(k-2)\alpha t}}{kt}\int _{\mathbb {R}^{d}}\left( e^{\alpha t-2\mu (k-1)t}\mathcal {P}_{t}\left[ R_{1}^{2}\right] (x)\right. \\&\left. +\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left( e^{\alpha s}e^{-\mu (s+(k-1)t)}R_{1}(\cdot )\right) ^{2}\right] (x) \mathrm d s\right) X_{0}( \mathrm d x). \end{aligned}$$

Applying (3.4), and recalling \(\alpha =2\mu \), we get that

$$\begin{aligned} \mathbb {E}{}(H_{t}^{k})^{2}&\le \frac{e^{(k-2)\alpha t}}{kt}\langle R_{2},X_{0}\rangle \left( e^{\alpha t-2\mu (k-1)t}+e^{\alpha t}\int _{0}^{t}e^{\alpha s}e^{-2\mu (s+(k-1)t)} \mathrm d s\right) \nonumber \\&\le \frac{e^{(k-2)\alpha t}}{kt}\langle R_{3},X_{0}\rangle \left( e^{-\alpha (k-2)t}+e^{-\alpha (k-2)t}\int _{0}^{t} \mathrm d s\right) \le C/k, \end{aligned}$$
(8.8)

for some \(C>0\). Let us now concentrate on the third coordinate of \(K_{3}(t;k)\). We introduce a truncation. Recalling I(t) defined in (6.14), one can follow the proof there to show \(I(t)\rightarrow 0\); the only change is to show (6.15), namely that \(\mathbb {E}{}\left( (M_{t}^{k,i})^{2}|Z_{t}\right) \le R_{1}(Z_{t}(i))\). This is left to the reader. Therefore, we have \(|K_{3}(t;k)-K_{4}(t;k)|\rightarrow 0\) in (8.2) with

$$\begin{aligned} K_{4}(t;k):=\left( \frac{1}{\lambda ^{*}}e^{-\alpha t}|Z_{t}|,e^{-(k\alpha /2)t}\sum _{i=1}^{\lfloor |X_{kt}|\rfloor }(1-V_{\infty }^{i}),e^{-(\alpha /2)t}\sum _{i=1}^{|Z_{t}|}(M_{t}^{k,i}-m_{t}^{k,i})1_{\left\{ \Vert Z_{t}(i)\Vert _{}^{}<\log t\right\} }\right) . \end{aligned}$$

The final step, listed in (8.3), is to show that \(K_{4}(t;k)\rightarrow ^{d}\mathcal {L}_{k}\). We proceed along the lines of the proof in Sect. 6.2. The definitions and arguments are analogous. The only significant change is the proof of convergence of \(v_{n}\) defined by (6.19). In our case

$$\begin{aligned} v_{n}=\lambda ^{*}a_{n}^{-1}\sum _{i=1}^{a_{n}}\mathbb {E}{}(\tilde{M}_{n}^{k,i})^{2}-\lambda ^{*}a_{n}^{-1}\sum _{i=1}^{a_{n}}\mathbb {E}{}(\tilde{m}_{n}^{k,i})^{2}, \end{aligned}$$

where \(\tilde{M}_{n}^{k,i}:=\mathbb {E}{}\left( (kn)^{-1/2}e^{-((k-1)\alpha /2)n}\langle \Gamma _{(k-1)n}^{i,n},\tilde{f}\rangle |Z_{n}(i)=p_{n}(i)\right) \) and \(\tilde{m}_{n}^{k,i}=\mathbb {E}{}\tilde{M}_{n}^{k,i}\). For the second term, we use (5.17), (S2) and \(\alpha =2\mu \) to estimate

$$\begin{aligned} |\tilde{m}_{n}^{k,i}|\le & {} c_{1}n^{-1/2}e^{((k-1)\alpha /2)n}|\mathcal {P}_{(k-1)n}\tilde{f}(p_{n}(i))|\le n^{-1/2}e^{(k-1)(\alpha /2-\mu )n}R_{1}(p_{n}(i))\\= & {} n^{-1/2}R_{1}(p_{n}(i)). \end{aligned}$$

We recall that \(\Vert p_{n}(i)\Vert _{}^{}\le \log n\) so \(\sup _{i\le a_{n}}|\tilde{m}_{n}^{k,i}|\rightarrow 0\) and consequently \(a_{n}^{-1}\sum _{i=1}^{a_{n}}(\tilde{m}_{n}^{k,i})^{2}\rightarrow 0\). Using (5.22), we have

$$\begin{aligned} \mathbb {E}{}(\tilde{M}_{n}^{k,i})^{2}=(kn)^{-1}e^{-\alpha (k-1)n}V_{\tilde{f}}^{2}(p_{n}(i),(k-1)n). \end{aligned}$$

By (8.5), we obtain

$$\begin{aligned} \lim _{n\rightarrow +\infty }v_{n}=\left( \frac{k-1}{k}\right) \sigma _{f}^{2}, \end{aligned}$$

where \(\sigma _{f}^{2}\) is given by (4.6). This completes the proof of \(K_{4}(t;k)\rightarrow \mathcal {L}_{k}\) and consequently the whole proof of Theorem 6.

8.1 Proof of Lemma 21

To show (8.5), we will prove

$$\begin{aligned} t^{-1}e^{-\alpha t}V_{\tilde{f}}^{2}(0,t)\rightarrow \sigma _{f}^{2}/\lambda ^{*},\quad \sup _{\Vert x\Vert _{}^{}\le \log t}t^{-1}e^{-\alpha t}|V_{\tilde{f}}^{2}(x,t)-V_{\tilde{f}}^{2}(0,t)|\rightarrow 0. \end{aligned}$$
(8.9)

To obtain the first convergence, we use (5.18) and write

$$\begin{aligned} t^{-1}e^{-\alpha t}V_{\tilde{f}}^{2}(0,t)&= \frac{\psi ''(0)}{\lambda ^{*}}t^{-1}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\alpha }\tilde{f}(\cdot )\right) ^{2}\right] (0) \mathrm d s\\&\quad -\frac{1}{\lambda ^{*}}t^{-1}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\\&\quad \times \left[ \psi ''(0)\left( u_{\tilde{f}}^{*,1}(\cdot ,s)\right) ^{2}+(\alpha -\alpha ^{*})u_{\tilde{f}}^{*,2} (\cdot ,s)\right] (0) \mathrm d s\\&=:I_{1}(t)+I_{2}(t). \end{aligned}$$

We start with \(I_{2}\): recalling (5.2), using (5.13), (5.14) and applying (3.4) multiple times, we obtain

$$\begin{aligned} |I_{2}(t)|&\le c_{1}t^{-1}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\alpha ^{*}}\tilde{f}(\cdot )\right) ^{2}+\int _{0}^{s}\mathcal {P}_{s-u}^{\alpha ^{*}}\left[ \left( \mathcal {P}_{u}^{\alpha ^{*}}\tilde{f}(\cdot )\right) ^{2}\right] \mathrm d u\right] (0) \mathrm d s\\&\le \, t^{-1}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\left[ \left( e^{\alpha ^{*}s}R_{1}(\cdot )\right) ^{2}+\int _{0}^{s}e^{\alpha ^{*}(s+u)}\mathcal {P}_{s-u}R_{1}^{2}(\cdot ) \mathrm d u\right] (0) \mathrm d s\\&\le \, t^{-1}\int _{0}^{t}e^{-\alpha s}\mathcal {P}_{t-s}\left[ \left( e^{\alpha ^{*}s}R_{1}(\cdot )\right) ^{2}+\int _{0}^{s}e^{\alpha ^{*}(s+u)}R_{2}(\cdot ) \mathrm d u\right] (0) \mathrm d s\\&\le \, t^{-1}\int _{0}^{t}e^{(\alpha ^{*}-\alpha )s}\mathcal {P}_{t-s}R_{3}(0) \mathrm d s\le \, t^{-1}\int _{0}^{t}e^{(\alpha ^{*}-\alpha )s} \mathrm d s\rightarrow 0. \end{aligned}$$

To treat \(I_{1}\), we use \(\alpha =2\mu \) and decompose, following the notation of (S3),

$$\begin{aligned} \frac{\lambda ^{*}}{\psi ''(0)}I_{1}(t)&= t^{-1}\int _{0}^{t}\mathcal {P}_{t-s}\left[ \left( h(\cdot )\cdot \langle fh,\varphi \rangle +\mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\right] (0) \mathrm d s\\&= t^{-1}\int _{0}^{t}\mathcal {P}_{t-s}\left[ \left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\right] (0) \mathrm d s\\&\quad +t^{-1}\int _{0}^{t}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\right] (0) \mathrm d s\\&\quad +2t^{-1}\int _{0}^{t}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) \left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) \right] (0) \mathrm d s\\&=:I_{3}(t)+I_{4}(t)+I_{5}(t). \end{aligned}$$

Recalling (S1), we check that

$$\begin{aligned} I_{3}(t)\rightarrow \sigma _{f}^{2}/\psi ''(0). \end{aligned}$$

To bound \(I_{4}\), we apply (S3) and (3.4), namely

$$\begin{aligned} |I_{4}(t)|\le t^{-1}\int _{0}^{t}r(s)^{2}\mathcal {P}_{t-s}\left[ R_{1}^{2}\right] (0) \mathrm d s\le c_{1}t^{-1}\int _{0}^{t}r(s)^{2} \mathrm d s\rightarrow 0. \end{aligned}$$

Similarly, one can prove \(|I_{5}(t)|\rightarrow 0\). Putting these results together, we conclude \(I_{1}(t)\rightarrow \sigma _{f}^{2}/\lambda ^{*}\), and consequently the first convergence in (8.9) holds. Let us pass to the second statement. We analyze the first term of (5.18), which is the hardest, and leave the others to the reader. Namely, we will show that

$$\begin{aligned}&\sup _{\Vert x\Vert _{}^{}\le \log t}t^{-1}\left| \int _{0}^{t}e^{\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (x) \mathrm d s-\int _{0}^{t}e^{\alpha s}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}\right] (0) \mathrm d s\right| \nonumber \\&\quad \qquad \le t^{-1}\int _{0}^{t}f(s,t) \mathrm d s\rightarrow _{t}0, \end{aligned}$$
(8.10)

where \(f(s,t)\) is given in (6.23). We recall the notation of (S3) and write

$$\begin{aligned} \left( \mathcal {P}_{s}\tilde{f}(\cdot )\right) ^{2}&=e^{-2\mu s}\left( \mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}+e^{-2\mu s}\left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\\&\quad +2e^{-2\mu s}\left( \mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) \left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) . \end{aligned}$$

Using this decomposition, \(\alpha =2\mu \) together with the triangle inequality, we get

$$\begin{aligned} f(s,t)\le & {} 2\sup _{\Vert x\Vert _{}^{}\le \log t}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\right] (x)\\&+\,\,4\sup _{\Vert x\Vert _{}^{}\le \log t}\mathcal {P}_{t-s}\left[ \left( \mathcal {P}_{s}^{\mu }\tilde{f}(\cdot )-h(\cdot )\cdot \langle fh,\varphi \rangle \right) \left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) \right] (x)\\&+\,\sup _{\Vert x\Vert _{}^{}\le \log t}\left| \mathcal {P}_{t-s}\left[ \left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\right] (x)-\mathcal {P}_{t-s}\left[ \left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) ^{2}\right] (0)\right| . \end{aligned}$$

We use \(\alpha =2\mu \), apply (S3) to the first two expressions and define \(H=\left( h\cdot \langle fh,\varphi \rangle \right) ^{2}-\langle \left( h\cdot \langle fh,\varphi \rangle \right) ^{2},\varphi \rangle \). We write

$$\begin{aligned} f(s,t)\le & {} r(s)^{2}\sup _{\Vert x\Vert _{}^{}\le \log t}\mathcal {P}_{t-s}\left[ R_{1}^{2}\right] (x)+r(s)\sup _{\Vert x\Vert _{}^{}\le \log t}\mathcal {P}_{t-s}\left[ R_{2}(\cdot )\left( h(\cdot )\cdot \langle fh,\varphi \rangle \right) \right] (x)\\&+\,2\sup _{\Vert x\Vert _{}^{}\le \log t}\left| \mathcal {P}_{t-s}H(x)\right| . \end{aligned}$$

Let \(R_{3}\in \mathcal {C}_{0}\); then, by (S2),

$$\begin{aligned} \sup _{\Vert x\Vert _{}^{}\le \log t}|\mathcal {P}_{t-s}R_{3}(x)|=\langle \varphi ,R_{3}\rangle +\sup _{\Vert x\Vert _{}^{}\le \log t}\mathcal {P}_{t-s}\tilde{R}_{3}(x)\le \langle \varphi ,R_{3}\rangle +e^{-\mu (t-s)}R_{4}(\log t). \end{aligned}$$

This will be applied to the first two terms. The third one can be analyzed similarly using (S2)

$$\begin{aligned} \sup _{\Vert x\Vert _{}^{}\le \log t}\left| \mathcal {P}_{t-s}H(x)\right| \le e^{-\mu (t-s)}R_{5}(\log t). \end{aligned}$$

By the fact that \(r(s)\searrow 0\), we can write

$$\begin{aligned} f(s,t)\le c_{1}r(s)+e^{-\mu (t-s)}R_{6}(\log t). \end{aligned}$$

Clearly, this is enough to establish (8.10).
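Indeed, since \(r(s)\searrow 0\), its Cesàro averages vanish, and \(R_{6}\in \mathcal {C}_{0}\) is bounded, so

$$\begin{aligned} t^{-1}\int _{0}^{t}f(s,t) \mathrm d s\le c_{1}t^{-1}\int _{0}^{t}r(s) \mathrm d s+\frac{R_{6}(\log t)}{t}\int _{0}^{t}e^{-\mu (t-s)} \mathrm d s\le c_{1}t^{-1}\int _{0}^{t}r(s) \mathrm d s+\frac{R_{6}(\log t)}{\mu t}\rightarrow 0. \end{aligned}$$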

To prove (8.6), we follow the same route as in the proof of Lemma 19. As there, we omit the terms \(u_{f}^{*,k}\) and estimate \(S_{k}\) defined in (6.24). For \(k=2\), arguing as in the proof of (8.5), one checks that

$$\begin{aligned} |V_{\tilde{f}}^{2}(x,t)|\le te^{\alpha t}R_{1}(x). \end{aligned}$$
(8.11)

For \(k=3\), we recall (5.15) and use (5.17), (S2) and (8.11) to get

$$\begin{aligned} S_{3}(x,t)\le & {} c_{1}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| V_{\tilde{f}}^{1}(\cdot ,s)V_{\tilde{f}}^{2}(\cdot ,s)\right| +\left| V_{\tilde{f}}^{1}(\cdot ,s)\right| ^{3}\right] (x) \mathrm d s\\\le & {} \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| e^{\alpha s/2}R_{1}(\cdot )se^{\alpha s}R_{2}(\cdot )\right| +\left| e^{\alpha s/2}R_{3}(\cdot )\right| ^{3}\right] (x) \mathrm d s.\\ \end{aligned}$$

Using (3.4), we estimate

$$\begin{aligned} S_{3}(x,t)\le e^{\alpha t}\int _{0}^{t}e^{\alpha s/2}s\mathcal {P}_{t-s}\left[ R_{4}\right] (x) \mathrm d s\le R_{5}(x)e^{\alpha t}\int _{0}^{t}se^{\alpha s/2} \mathrm d s\le te^{(3\alpha /2)t}R_{5}(x). \end{aligned}$$

We conclude that

$$\begin{aligned} |V_{\tilde{f}}^{3}(x,t)|\le te^{(3\alpha /2)t}R_{6}(x). \end{aligned}$$
(8.12)

Finally, we pass to \(k=4\). We recall (5.15) and use (5.17), (S2), (8.11) and (8.12) to get

$$\begin{aligned} S_{4}(x,t)\le & {} c_{1}\int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| V_{\tilde{f}}^{1}(\cdot ,s)V_{\tilde{f}}^{3}(\cdot ,s)\right| +\left| V_{\tilde{f}}^{2}(\cdot ,s)\right| ^{2}\right. \\&\left. +\,\left| V_{\tilde{f}}^{1}(\cdot ,s)\right| ^{2}\left| V_{\tilde{f}}^{2}(\cdot ,s)\right| +\left| V_{\tilde{f}}^{1}(\cdot ,s)\right| ^{4}\right] (x) \mathrm d s\\\le & {} \int _{0}^{t}\mathcal {P}_{t-s}^{\alpha }\left[ \left| e^{\alpha s/2}R_{1}(\cdot )se^{(3\alpha /2)s}R_{2}(\cdot )\right| +\left| se^{\alpha s}R_{3}(\cdot )\right| ^{2}\right. \\&\left. +\,\left| e^{\alpha s/2}R_{1}(\cdot )\right| ^{2}\left| se^{\alpha s}R_{3}(\cdot )\right| +\left| e^{\alpha s/2}R_{1}(\cdot )\right| ^{4}\right] (x) \mathrm d s. \end{aligned}$$

By (3.4), we obtain

$$\begin{aligned} S_{4}(x,t)\le e^{\alpha t}\int _{0}^{t}s^{2}e^{\alpha s}\mathcal {P}_{t-s}R_{7}(x) \mathrm d s\le R_{8}(x)e^{\alpha t}\int _{0}^{t}s^{2}e^{\alpha s} \mathrm d s\le t^{2}e^{2\alpha t}R_{9}(x). \end{aligned}$$

This is enough to conclude the proof of (8.6).