Abstract
Suppose that \(X =(X_t, t\ge 0)\) is either a superprocess or a branching Markov process on a general space E, with nonlocal branching mechanism and probabilities \({\mathbb {P}}_{\delta _x}\), when issued from a unit mass at \(x\in E\). For a general setting in which the first moment semigroup of X displays a Perron–Frobenius type behaviour, we show that, for \(k\ge 2\) and any positive bounded measurable function f on E,
where the constant \(C_k(x, f)\) can be identified in terms of the principal right eigenfunction and left eigenmeasure and \(g_k(t)\) is an appropriate deterministic normalisation, which can be identified explicitly as either polynomial in t or exponential in t, depending on whether X is a critical, supercritical or subcritical process. The method we employ is extremely robust and we are able to extract similarly precise results that additionally give us the moment growth with time of \(\int _0^t \langle f, X_s \rangle \mathrm{d}s\), for bounded measurable f on E.
1 Introduction and main results
A fundamental question concerning general spatial branching processes, both superprocesses and branching Markov processes, pertains to their moments. Whilst first and second moments have received quite some attention, relatively little seems to be known about higher moments, in particular their asymptotic behaviour in time. Relevant references that touch upon this topic include [12, 17, 18, 21, 24]. In this paper, we provide a single general framework that covers both superprocesses and spatial branching Markov processes and which yields a very precise and somewhat remarkable description of moment growth.
We show that, under the assumption that the first moment semigroup of the process exhibits a natural Perron–Frobenius type behaviour, the kth moment functional of either a superprocess or branching Markov process, when appropriately normalised, limits to a precise constant. The setting in which we work is remarkably general, even allowing for the setting of nonlocal branching; that is, where mass is created at a different point in space to the position of the parent. Moreover, the methodology we use appears to be extremely robust and we show that the asymptotic kth moments of the running occupation measure are equally accessible using essentially the same approach. Our results will thus expand on what is known for branching diffusions and superdiffusions e.g. in [10, 23], as well as giving precise growth rates for the moments of occupations. In future work we hope to use the ideas in this paper to develop general central limit theorems for the aforesaid class of processes.
To this end, let us spend some time providing the general setting in which we wish to work. Let E be a Lusin space. Throughout, we will write B(E) for the Banach space of bounded measurable functions on E with norm \(\Vert \cdot \Vert \), \(B^{+}(E)\) for nonnegative bounded measurable functions on E and \(B^{+}_1(E)\) for the subset of functions in \(B^{+}(E)\) which are uniformly bounded by unity. We are interested in spatial branching processes that are defined in terms of a Markov process and a branching operator. The former can be characterised by a semigroup on E, denoted by \(\texttt {P}=(\texttt {P}_t, t\ge 0)\). We do not need \(\texttt {P}\) to have the Feller property, and it is not necessary that \(\texttt {P}\) is conservative. That said, if so desired, we can append a cemetery state \(\{\dagger \}\) to E, which is to be treated as an absorbing state, and regard \(\texttt {P}\) as conservative on the extended space \(E\cup \{\dagger \}\), which can also be treated as a Lusin space. Equally, we can extend the branching operator to \(E\cup \{\dagger \}\) by defining it to be zero on \(\{\dagger \}\), i.e. no branching activity on the cemetery state.
1.1 Branching Markov processes
Consider now a spatial branching process in which, given their point of creation, particles evolve independently according to a \(\texttt {P}\)-Markov process. In an event which we refer to as ‘branching’, particles positioned at x die at rate \(\beta (x)\), where \(\beta \in B^+(E)\), and instantaneously, new particles are created in E according to a point process. The configurations of these offspring are described by the random counting measure
for Borel A in E. The law of the aforementioned point process depends on x, the point of death of the parent, and we denote it by \({\mathcal {P}}_x\), \(x\in E\), with associated expectation operator given by \({\mathcal {E}}_x\), \(x\in E\). This information is captured in the socalled branching mechanism
where we recall \( f\in B^+_1(E): = \{f\in B^+(E):\sup _{x\in E}f(x)\le 1\}\). Without loss of generality we can assume that \({\mathcal {P}}_x(N =1) = 0\) for all \(x\in E\) by viewing a branching event with one offspring as an extra jump in the motion. On the other hand, we do allow for the possibility that \({\mathcal {P}}_x(N =0)>0\) for some or all \(x\in E\).
Henceforth we refer to this spatial branching process as a \((\texttt {P}, \texttt {G})\)-branching Markov process. It is well known that if the configuration of particles at time t is denoted by \(\{x_1(t), \ldots , x_{N_t}(t)\}\), then, on the event that the process has not become extinct or exploded, the branching Markov process can be described as the coordinate process \(X= (X_t, t\ge 0)\) in the space of atomic measures on E with nonnegative integer total mass, denoted by N(E), where
In particular, X is Markovian in N(E). Its probabilities will be denoted \({\mathbb {P}}: = ({\mathbb {P}}_\mu , \mu \in N(E))\). With this notation in hand, it is worth noting that the independence that is manifest in the definition of branching events and movement implies that if we define,
then for \(\mu \in N(E)\) given by \(\mu = \sum _{i =1}^n\delta _{y_i}\), we have
Moreover, for \(f\in B^+(E)\) and \(x\in E\),
where \({{\hat{\texttt {P}}}}_t\) is defined similarly to \(\texttt {P}_t\) but returns a value of 1 on the event of killing. The above equation describes the evolution of the semigroup \(v_t\): either the initial particle has not branched (and has possibly been absorbed) by time t or at some time \(s \le t\), the initial particle has branched, producing offspring according to \(\texttt {G}\). We refer the reader to [19, 22] for a proof.
Branching Markov processes enjoy a very long history in the literature, dating back as far as [30,31,32], with a body of work that is arguably too voluminous to summarise fairly here. Most literature focuses on the setting of local branching. This corresponds to the setting that all offspring are positioned at their parent’s point of death (i.e. \(x_i = x\) in the definition of \(\texttt {G}\)). In that case, the branching mechanism reduces to
where \(s\in [0,1]\) and \((p_k(x), k\ge 0)\) is the offspring distribution when a parent branches at site \(x\in E\). The branching mechanism \(\texttt {G}\) may otherwise be seen in general as a mixture of local and nonlocal branching.
1.2 Superprocesses
Superprocesses can be thought of as the high-density limit of a sequence of branching Markov processes, resulting in a new family of measure-valued Markov processes; see e.g. [6, 7, 13, 26, 34]. Just as branching Markov processes are Markovian in N(E), the former are Markovian in the space of finite Borel measures on E topologised by the weak convergence topology, denoted by M(E). There is a broad literature base for superprocesses, e.g. [6, 15, 17, 26, 34], with so-called local branching mechanisms, later broadened to the more general setting of nonlocal branching mechanisms in [7, 26]. Let us now introduce these concepts with an autonomous definition of what we mean by a superprocess.
A Markov process \(X : = (X_t:t\ge 0)\) with state space M(E) and probabilities \({\mathbb {P}} := ({\mathbb {P}}_\mu , \mu \in M(E))\) is called a \((\texttt {P},\psi ,\phi )\)-superprocess if it has transition semigroup \((\hat{{\mathbf {E}}}_t, t\ge 0)\) on M(E) satisfying
Here, we work with the inner product on \(B^+(E)\times M(E)\) defined by \(\langle f, \mu \rangle = \int _Ef(x)\mu (\mathrm{d}x)\) and \(({\texttt {V}}_t, t\ge 0)\) is a semigroup evolution that is characterised via the unique bounded positive solution to the evolution equation
Here \(\psi \) denotes the local branching mechanism
where \(b\in B(E)\), \(c\in B^+(E)\) and \((y\wedge y^2)\nu (x, \mathrm{d}y)\) is a bounded kernel from E to \((0,\infty )\), and \(\phi \) is the nonlocal branching mechanism
where \(\beta \in B^+(E)\) and \(\zeta \) has representation
such that \(\gamma (x,f)\) is a bounded function on \(E\times B^+(E)\) and \(\nu (1)\Gamma (x,\mathrm{d}\nu )\) is a bounded kernel from E to \(M(E)^{\circ }:=M(E){\setminus }\left\{ 0\right\} \) with
We refer the reader to [7, 27] for more details regarding the above formulae. Lemma 3.1 in [7] tells us that the functional \(\zeta (x,f)\) has the following equivalent representation
where \(M_0(E)\) denotes the set of probability measures on E, \(\gamma \ge 0\) is a bounded function on \(E\times M_0(E)\), \(un(x,\pi ,\mathrm{d}u)\) is a bounded kernel from \(E\times M_0(E)\) to \((0,\infty )\) and \(G(x,\mathrm{d}\pi )\) is a probability kernel from E to \(M_0(E)\) with
The reader will note that we have deliberately used some of the same notation for both branching Markov processes and superprocesses. In the sequel there should be no confusion and the motivation for this choice of repeated notation is that our main result is indifferent to which of the two processes we are talking about.
1.3 Main results: kth moments
As alluded to above, in what follows, \((X, {\mathbb {P}})\) is taken as either a branching Markov process or a superprocess as defined in the previous section. Our main results concern understanding the growth of the kth moment functional in time
For convenience, we will write \(\texttt {T}\) in preference to \(\texttt {T}^{(1)}\) throughout.
Before stating our main theorem, we first introduce some assumptions that will be crucial in analysing the moments defined above. First, we have a Perron–Frobenius-type assumption.
(H1): There exists an eigenvalue \(\lambda \in {\mathbb {R}}\) and a corresponding right eigenfunction \(\varphi \in B^+(E)\) and finite left eigenmeasure \({\tilde{\varphi }}\) such that, for \(f\in B^+(E)\),
for all \(\mu \in N(E)\) (resp. M(E)) if \((X, {\mathbb {P}})\) is a branching Markov process (resp. a superprocess). Further let us define
We suppose that
Before explaining the heuristics behind (H1), we state our second assumption, which is a moment condition on the offspring distribution.
(H2): Suppose \(k \ge 1\). If \((X, {\mathbb {P}})\) is a branching Markov process,
and if \((X, {\mathbb {P}})\) is a superprocess,
Let us spend a little time considering these two assumptions in more detail. Much of the literature surrounding spatial branching processes places emphasis on results in which an underlying assumption of exponentially ergodic growth of the first moment, as in (H1), is present; see e.g. [1, 16, 19, 22, 27, 29]. In view of this, we may characterise the process as supercritical if \(\lambda > 0\), critical if \(\lambda = 0\) and subcritical if \(\lambda < 0\).
One way to understand (14) is through the martingale that comes hand-in-hand with the eigenpair \((\lambda , \varphi )\), i.e.
Normalising this martingale and using it as a change of measure results in the ubiquitous spine decomposition; cf. [21, 22, 29]. Roughly speaking, under the change of measure, the process is equal in law to a copy of the original process with a superimposed process of immigration, which occurs both in space and time along the path of a single particle trajectory in E, the spine. Moreover, the assumption (14) implies that the spine has a stationary limit with stationary measure \(\varphi (x){\tilde{\varphi }}(\mathrm{d}x)\).
The assumptions (15) and (16) of (H2) are natural to ensure that kth moments are well defined for all \(t\ge 0\). If not explicitly stated in the literature, the fact that they are needed to ensure that the functional moments \(\texttt {T}^{(k)}_t[f](x)\) are finite for all \(t\ge 0\), \(f\in B^+(E)\) and \(x\in E\) is certainly folklore. The two conditions (15) and (16) are clearly natural analogues of one another. Indeed, whereas for superprocesses it is usual to separate out the nondiffusive local branching behaviour from the nonlocal behaviour, i.e. via the measures \(\nu (x,\mathrm{d}y)\) and \(\Gamma (x, \mathrm{d}\nu )\), the analogous behaviour is captured in the single point process \({\mathcal {Z}}\) for branching Markov processes.
In terms of the eigenvalue in (H1), the following suite of results give us the precise growth rates for kth moments in each of the critical (\(\lambda = 0\)), supercritical (\(\lambda >0\)) and subcritical (\(\lambda <0\)) settings. In all three results, \((X, {\mathbb {P}})\) is either a \((\texttt {P},\texttt {G})\)-branching particle system or a \((\texttt {P},\psi ,\phi )\)-superprocess on E.
Theorem 1
(Critical, \(\lambda = 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda = 0\). Define
where
if \((X, {\mathbb {P}})\) is a branching Markov process or
if \((X, {\mathbb {P}})\) is a superprocess. Then, for all \(\ell \le k\)
The novel contribution of Theorem 1 is the fact that no such result currently exists in the literature at this level of generality. The only comparable results appear in [11], which uses similar methods to derive the convergence for the critical Crump–Mode–Jagers process, and in [20], which inspired this paper but only deals with the special case of a general critical branching particle process where the test function f is specifically taken to be the eigenfunction \(\varphi \). We will discuss these examples in more detail after stating our main results. We have been unable to find comparable results for superprocesses.
There are two facts that stand out in Theorem 1. The first is the polynomial scaling, which is quite a delicate conclusion given that there is no exponential growth to rely on. The second is that, for \(k\ge 3\), the scaled moment limit is expressed not in terms of the kth moments in (15) and (16), but rather the second-order moments.
In some sense, however, both the polynomial growth and the nature of the limiting constant are not entirely surprising given the folklore for the critical setting. More precisely, in at least some settings (see e.g. [20]), one would expect to see a Yaglom-type result at criticality. The latter would classically see, conditional on survival, convergence in law of \(t^{-1}\langle f, X_t\rangle \) to an exponentially distributed random variable as \(t\rightarrow \infty \), whose parameter is entirely determined by the second moment of X. This implies that, conditional on survival, the limit of the kth moment of \(\langle f, X_t\rangle \) behaves like \(k! t^k c^k\) (i.e. the kth moment of the aforesaid exponential distribution) as \(t \rightarrow \infty \), where \(c>0\) is written in terms of a second moment functional of the spatial offspring distribution. One can also expect a Kolmogorov-type result for the survival probability, which says that
where we recall that \(N_t = \langle 1, X_t\rangle \) is the total mass of the process at time t; see for example [20] for the particle system setting and [29] in the superprocess setting. Combining this with the moment asymptotic of
implies that the kth moment behaves like \(\varphi (x) t^{k-1}k! c^k\), which is precisely the result given in Theorem 1.
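Schematically, and only as a heuristic, the arithmetic here combines the Kolmogorov-type survival asymptotic with the Yaglom-type conditional limit (up to multiplicative constants):

\[
{\mathbb {E}}_{\delta _x}\big [\langle f, X_t\rangle ^k\big ]
= {\mathbb {E}}_{\delta _x}\big [\langle f, X_t\rangle ^k \,\big |\, N_t>0\big ]\,{\mathbb {P}}_{\delta _x}(N_t>0)
\;\sim \; \big (k!\, t^k c^k\big )\cdot \frac{\varphi (x)}{t}
= \varphi (x)\, t^{k-1} k!\, c^k, \qquad t\rightarrow \infty .
\]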
It is also curious why, at criticality, all higher moments can be expressed in terms of the constant c, which is written in terms of a second moment functional of the spatial offspring distribution. We can give some intuition here, at least in the branching Markov setting, as to why this is the case. Let us recall the classical folklore that conditioning on survival is equivalent to studying the process under the change of measure
The above change of measure (21) yields the classical spine decomposition, cf. [20]. In particular, this means that we can write, for \(f\in B^+(E)\),
where \(\xi = (\xi _t, t \ge 0)\) is the motion of an immortal particle, called the spine, whose semigroup is conservative, \(n_t\) is the number of branching events along the spine, which arrive at a rate which depends on the motion of \(\xi \), \(N_i\) is the number of offspring produced such that, at the ith such branching event, which occurs at time \(T_i\le t\), \(X_{t-T_i}^{i, j}\), \(j = 1, \ldots , N_i\) are i.i.d. copies of the original branching Markov process under \({\mathbb {P}}_{\xi _{T_i}}\), which provide mass at time t. In other words, this means that under \({\mathbb {P}}^\varphi \), the process X can be decomposed into a single immortal trajectory, off which, copies of the original process \((X, {\mathbb {P}})\) immigrate simultaneously to groups of siblings.
With this in mind, let us consider genealogical lines of descent that contribute to the bulk of the mass of the kth moment at large times t. For each copy of \((X, {\mathbb {P}})\) that immigrates onto the spine at time \(s>0\), the probability that the process survives to time \(t \ge s\), thus contributing to the bulk of the kth moment at time t, is \(O(1/(t-s))\approx O(1/t)\); cf. (19). If there are multiple offspring at an immigration event at time s, then the chance that at least two of these offspring contribute to the bulk of the kth moment at time t is \(O(1/t^2)\). Moreover, the semigroup of the spine \(\xi \) is given by \(\texttt {P}^\varphi _t[f](x) =\texttt {P}_t[\varphi f](x)/\varphi (x)\), which, under (H1), limits to a stationary distribution \(\varphi (x){\tilde{\varphi }}(\mathrm{d}x)\), \(x\in E\). This has the effect that the arrival of branching events along the spine begin to look increasingly like a Poisson process as \(t\rightarrow \infty \). Hence, for large t, \(n_t \approx O(t)\).
Putting these pieces together, as \(t\rightarrow \infty \), there are approximately O(t) branch points along the spine, each of which has the greatest likelihood of a single offspring among immigrating siblings contributing to the bulk of the kth moment at time t, with probability of order O(1/t). Thus, it is clear that we only expect to see one of each sibling group of immigrants along the spine contributing to the mass of the kth moment at time t. Now let \(\beta ^\varphi \) denote the spatial rate at which offspring immigrate onto the spine and let \(\{x_1, \ldots , x_N\}\) denote their positions at the point of branching, including the position of the spine at this instance. Let \({\mathcal {P}}^\varphi \) denote the law of this offspring distribution, and suppose that \(i^*\) is the (random) index of the offspring that continues the evolution of the spine. Then the rate at which a ‘uniform selection’ of a single offspring that is not the spine occurs at a branching event (seen through the function \(f\in B^+(E)\)) is given by
where we have used, from [20], that \(\beta ^\varphi (x) = \beta (x){{\mathcal {E}}_{x}\left[ \langle \varphi , {\mathcal {Z}}\rangle \right] }/{\varphi (x)}\), \({\mathcal {P}}^\varphi _x\) is absolutely continuous with respect to \({\mathcal {P}}_x\) with density \({\langle \varphi , {\mathcal {Z}}\rangle }/{ {\mathcal {E}}_{x}\left[ \langle \varphi , {\mathcal {Z}}\rangle \right] }\) and, given \(\{x_1, \ldots , x_N\}\), \(i^* = i\) is empirically selected with probability proportional to \(\varphi (x_i)\).
We know from the setting of first moments that it is the projection of \(\langle f, X_t \rangle \) onto \(\langle \varphi , X_t\rangle \), with coefficient \(\langle f, {\tilde{\varphi }}\rangle \), which dominates the mean growth; indeed, a strong law of large numbers exists to this effect as well, see e.g. [19]. In this spirit, let us take \(f = \varphi \) for simplicity, in which case the right-hand side of (23) is precisely \({\mathbb {V}}[\varphi ](x)/\varphi (x)\). Hence, finally, we conclude our heuristic by observing that the rate at which immigration off the spine contributes to the bulk of the kth moment limit of \(\langle \varphi , X_t\rangle \) is determined by the second moment functional \({\mathbb {V}}[\varphi ]\); together with (20) and the associated remarks above, this goes some way to explaining the appearance of the limit in Theorem 1.
The next results present a significantly different picture for the supercritical and subcritical cases. For those settings, the exponential behaviour of the first moment semigroup becomes a dominant feature of the higher moments.
Theorem 2
(Supercritical, \(\lambda > 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda > 0\). Redefine
where \(L_1 = 1\) and we define iteratively for \(k \ge 2\)
where \([k_1, \ldots , k_N]_k^2\) is the set of all nonnegative N-tuples \((k_1, \ldots , k_N)\) such that \(\sum _{i = 1}^N k_i = k\) and at least two of the \(k_i\) are strictly positive, if \((X, {\mathbb {P}})\) is a branching Markov process, or
and iteratively with \({\mathbb {V}}_2\left[ \varphi \right] (x)={\mathbb {V}}\left[ \varphi \right] (x)\) (defined in the previous theorem) and for \(k\ge 3\)
if \((X, {\mathbb {P}})\) is a superprocess. Here the sums run over the set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\) of positive integers such that \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\). Then, for all \(\ell \le k\)
Although we have not been able to find existing results of this kind in such generality for supercritical processes, we note that the asymptotics of branching diffusions with either constant or compactly supported branching potentials were studied in [25]. Moreover, the asymptotics for the first and second moments of age-dependent branching processes were considered in [4] and [5], respectively. While Jensen’s inequality easily shows that this is the minimal rate of growth, it turns out that it is also the exact rate of growth, implying that the kth moment grows as the kth power of the first moment; i.e., using the terminology of [35], there is no intermittency. If we again appeal to folklore, then this is again not necessarily surprising. In a number of settings, we would expect X to obey a strong law of large numbers (cf. [1, 16, 19, 27]) in the sense that
where \((M^\varphi _t, t\ge 0)\) was defined in (17) and the limit holds either almost surely or in the sense of \(L^p\) moments, for \(p>1\). Moreover, returning to the heuristic involving the spine decomposition discussed after Theorem 1, in this case the copies of the original process that branch off the spine are all supercritical, so we expect them all to survive to time t; hence we see the sum over tuples with at least two positive entries in the recurrence for \(L_k\) (the argument for not seeing precisely one still holds).
It is also worth commenting on the difference in the dependence on x in the limit for the two processes. In the case of the branching Markov process, the dependence on x appears through the normalisation by \(\varphi (x)\), which appears due to the assumption (H1). For superprocesses, this dependence is much more complicated. Thinking in terms of what is known as the skeletal decomposition (see e.g. [2, 14]), the superprocess issued from a unit mass at x can be seen at time \(t>0\) as the aggregation of a Poisson point process of ‘superprocess excursions’ conditioned to survive beyond time t as well as a copy of the superprocess conditioned to become extinct by time t. In the supercritical setting, a finite Poisson number of these excursions will contribute to the overall growth of the process, as \(t\rightarrow \infty \), with a rate proportional to \(p(x)\delta _x\), where p(x) is the survival probability of an excursion issued from \(x\in E\). Moreover, in terms of mass, each of the excursions is of the same order of magnitude as an analogous branching particle system. Taking kth moments of the x-dependent Poisson sum of such excursions introduces an additional layer of complexity to its asymptotic behaviour and, specifically, the dependency on x. This may go part way to explaining the dependency of \(L_k\) on x in that setting.
Finally, we turn to the growth of moments in the subcritical setting, which offers the heuristically appealing result that the kth moment decays more slowly than the kth power of the linear semigroup.
Theorem 3
(Subcritical, \(\lambda < 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda < 0\). Redefine
where we define iteratively \(L_1 = 1\) and for \(k \ge 2\),
where \([k_1, \ldots , k_N]_k^n\) is the set of all nonnegative N-tuples \((k_1, \ldots , k_N)\) such that \(\sum _{i = 1}^N k_i = k\) and exactly \(2\le n\le k\) of the \(k_i\) are strictly positive, if \((X, {\mathbb {P}})\) is a branching Markov process, or \(L_k = \left\langle {\mathbb {V}}_k[\varphi ],\tilde{\varphi }\right\rangle \), where \({\mathbb {V}}_1[\varphi ](x) = \varphi (x)\) and for \(k\ge 2\),
if \((X, {\mathbb {P}})\) is a superprocess. Here the sums run over the set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\) of nonnegative integers such that \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\). Then, for all \(\ell \le k\)
As alluded to above, it is heuristically appealing that the kth moment does not decay at the rate \(\exp (k \lambda t)\). On the other hand, the actual decay rate \(\exp ( \lambda t)\) is slightly less obvious, but nonetheless the natural candidate. The decay of mass to zero in the branching system would suggest that the kth moment similarly decays, but no slower than the first moment.
1.4 Main result: kth occupation moments
It also transpires that our method is remarkably robust. Indeed, again taking an agnostic position on whether X is a branching Markov process or a superprocess, as we will show, careful consideration of the proofs of Theorems 1, 2 and 3 demonstrates that we can also conclude results for the quantities
We can think of \(\int _0^t\left\langle g,X_s\right\rangle \mathrm{d}s\) as characterising the running occupation measure \(\int _0^t X_s(\cdot ){\mathrm{d}s}\) of the process X and hence we refer to \(\texttt {M}_t^{(k)}[g](x)\) as the kth moment of the running occupation. The following results also emerge from our calculations, mirroring Theorems 1, 2 and 3 respectively.
Theorem 4
(Critical, \(\lambda = 0\)) Suppose that (H1) holds along with (H2) for \(k \ge 2\) and \(\lambda = 0\). Define
where \(L_1 = 1\) and \(L_k\) is defined through the recursion \(L_k =( \sum _{i = 1}^{k-1}L_i L_{k-i})/(2k-1)\) if \((X,{\mathbb {P}})\) is a branching Markov process, or \(L_k=(\sum _{\left\{ k_1,k_2\right\} ^{+}}L_{k_1}L_{k_2})/(2k-1)\), where \(\left\{ k_1,k_2\right\} ^{+}\) is the set of nonnegative integers \(k_1,k_2\) such that \(k_1+k_2=k\), if \((X,{\mathbb {P}})\) is a superprocess. Then, for all \(\ell \le k\)
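In the branching Markov process case, the constants in Theorem 4 are determined by a purely numerical recursion, so they can be tabulated directly. The following sketch simply iterates \(L_1 = 1\), \(L_k = (\sum _{i=1}^{k-1} L_i L_{k-i})/(2k-1)\) in exact rational arithmetic.

```python
from fractions import Fraction

def occupation_constants(k_max):
    """Tabulate L_1, ..., L_{k_max} for the critical occupation moments
    (branching Markov process case): L_1 = 1 and
    L_k = (sum_{i=1}^{k-1} L_i * L_{k-i}) / (2k - 1)."""
    L = {1: Fraction(1)}
    for k in range(2, k_max + 1):
        L[k] = sum(L[i] * L[k - i] for i in range(1, k)) / (2 * k - 1)
    return L

L = occupation_constants(5)
# For example, L_2 = 1/3, L_3 = 2/15 and L_4 = 17/315.
```

Exact fractions are used so that the tabulated constants are free of floating-point error.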
Theorem 5
(Supercritical, \(\lambda > 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda > 0\). Redefine
where \(L_k\) was defined in Theorem 2, albeit with \(L_1 = 1/\lambda \).
Then, for all \(\ell \le k\)
Theorem 6
(Subcritical, \(\lambda < 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda < 0\). Redefine
where \(L_1 = 1/\lambda \) and for \(k\ge 2\), the constants \(L_k\) are defined recursively via
if X is a branching Markov process and
where
and where we define recursively, \({\mathbb {U}}_1\left[ \varphi \right] (x)=\varphi (x)/\lambda \) and for \(k\ge 2\),
if X is a superprocess. Then, for all \(\ell \le k\)
The results in Theorems 4, 5 and 6 are slightly less predictable. Let us discuss this point a little further in the particle setting for convenience. For the supercritical case, the extra “linear” term arising from the time integral does not affect the exponential growth of the process, and hence the leading order behaviour is still dominated by \(\mathrm{e}^{k\lambda t}\). In the critical case, let us assume momentarily that we are permitted to assume a Yaglom limit holds (see for example [20] in the branching Markov process setting). In that setting, we know that, conditional on \(N_t > 0\), \(\langle f, X_t\rangle \sim O(t)\) as \(t\rightarrow \infty \). Conditional on survival, we thus have (up to a constant)
This implies that (still conditional on survival) the kth moment of the occupation measure behaves like \(t^{2k}\). Recalling that, at criticality, the survival probability behaves like 1/t, we obtain the scaling \(t^{2k1}\). Finally, in the subcritical case, we know that the total occupation \(\int _0^\zeta \langle g , X_s\rangle \mathrm{d}s\), where \(\zeta = \inf \{t>0: \langle 1, X_t\rangle = 0\}\), is finite, behaving like an average spatial distribution of mass, i.e. \(\langle g, {\tilde{\varphi }}\rangle \), multiplied by \(\zeta \), meaning that no normalisation is required to control the “growth” of the running occupation moments in this case.
1.5 Examples for specific branching processes
We now give some examples to illustrate our results and the generality of this setting.
Continuous-time Galton–Watson processes and CSBPs: Let us now consider the simplest branching particle setting, where the process is not spatially dependent. In effect, we can take \(E = \{0\}\), \(\texttt {P}\) to be the Markov process which remains at \(\{0\}\), and a branching mechanism with no spatial dependence. This is the setting of a continuous-time Galton–Watson process. Its branching rate \(\beta \) is constant, and the first and second moments of the offspring distribution, \(m_1 = {\mathcal {E}}[N]\) and \(m_2 = {\mathcal {E}}[N^2]\), where N is the number of offspring produced at a branching event, are key to characterising moment limits. When the process is independent of space, we have \(\lambda = \beta (m_1-1)\), \(\varphi =1\), \({\tilde{\varphi }}\) can be taken as \(\delta _{\{0\}}\) and (H1) trivially holds. Theorem 1 now tells us that, at criticality, i.e. \(m_1 = 1\), the limit for the \(\ell \)th moment of the population size at time t, i.e. \(N_t\), satisfies
when \({\mathcal {E}}[N^\ell ]<\infty \) and \(\ell \ge 1\).
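As a quick sanity check on this polynomial growth, consider critical binary branching (offspring 0 or 2, each with probability 1/2, branching rate \(\beta \)), for which a standard generating-function computation — not carried out in this article — gives \({\mathcal {E}}[N_t^2] = 1 + \beta t\), i.e. linear growth of the second moment as the \(\ell = 2\) case predicts. The Monte Carlo sketch below, with illustrative parameter choices, simulates the population size as a continuous-time random walk and compares against this formula.

```python
import random

random.seed(1)
beta, t_max, n_runs = 1.0, 10.0, 50_000  # illustrative parameters

total = 0
for _ in range(n_runs):
    n, t = 1, 0.0
    while n > 0:
        # each of the n particles branches at rate beta, so the next
        # event time is exponential with rate beta * n
        t += random.expovariate(beta * n)
        if t > t_max:
            break
        # critical binary offspring: 0 or 2 children, probability 1/2 each
        n += 1 if random.random() < 0.5 else -1
    total += n * n

mean_sq = total / n_runs  # estimates E[N_t^2]; exact value is 1 + beta*t
```

The estimate should be close to \(1 + \beta t = 11\), up to Monte Carlo error.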
In the supercritical case, i.e. \(m_1>1\), the limit in Theorem 2 simplifies to
where the iteration
holds. Here, although the simplified formula is still a little complicated, it demonstrates more clearly that the moments in Theorem 2 grow according to the leading order terms of the offspring distribution. Indeed, in the case \(\ell = 2\), we have
The constant \(L_3\) can now be computed explicitly in terms of \(L_2\) and \(L_1 = 1\), and so on.
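To see the exponential normalisation of Theorem 2 in action in this simplest setting, one can solve the moment equations for a binary process explicitly: if offspring are 2 with probability \(p_2\) and 0 otherwise (an illustrative choice, not taken from the article), then \(m_1 = f''(1) = 2p_2\), \(\lambda = \beta (m_1-1)\), and the second factorial moment \(g(t)\) solves \(g'(t) = \lambda g(t) + \beta f''(1)\mathrm{e}^{2\lambda t}\), giving \(g(t) = \beta f''(1)(\mathrm{e}^{2\lambda t} - \mathrm{e}^{\lambda t})/\lambda \). The sketch below checks that \(\mathrm{e}^{-2\lambda t}{\mathcal {E}}[N_t^2]\) then converges to the constant \(\beta f''(1)/\lambda \).

```python
import math

beta, p2 = 1.0, 0.6            # illustrative supercritical binary offspring law
m1, f2 = 2 * p2, 2 * p2        # offspring mean m1 and f''(1)
lam = beta * (m1 - 1)          # lambda = beta*(m1 - 1) > 0

def scaled_second_moment(t):
    """e^{-2 lambda t} * E[N_t^2] for the binary Galton-Watson process."""
    g = beta * f2 * (math.exp(2 * lam * t) - math.exp(lam * t)) / lam
    return (g + math.exp(lam * t)) * math.exp(-2 * lam * t)

# the scaled second moment approaches beta*f2/lam = 6.0 as t grows
vals = [scaled_second_moment(t) for t in (10.0, 20.0, 40.0)]
```

With these parameters the limit is \(\beta f''(1)/\lambda = 6\), and the scaled moments increase monotonically towards it.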
The limits in the subcritical case can be detailed similarly and only offer minor simplifications of the constants \(L_k\), \(k\ge 1\) presented in the statement of Theorem 3. Hence we leave the details for the reader to check.
The analogue of the continuous-time Galton–Watson process in the superprocess setting is that of a continuous-state branching process (CSBP). In this setting there is no associated movement, \(\psi \) in (7) is not spatially dependent and the nonlocal branching mechanism (8) satisfies \(\phi \equiv 0\). The right eigenvector and left eigenmeasure can be structured in the same way as in the Galton–Watson setting, with eigenvalue given by \(\lambda = -\psi '(0+) = -b\).
Similarly to (29), Theorem 1 tells us at criticality, i.e. \(\psi '(0+)=0\), that the CSBP \((Z_t, t\ge 0)\) satisfies
when \(\int _0^\infty y^\ell \nu (\mathrm{d}y)<\infty \) and \(\ell \ge 1\). The situations at super- and subcriticality again do not offer much more insight than the natural analogue of (30) with an induction on the \(L_k\) constants. Hence we end our discussion of the CSBP example here.
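For a concrete check of second-moment growth in the critical CSBP setting, consider the critical Feller diffusion, i.e. the CSBP with \(\psi (\theta ) = \gamma \theta ^2\), which solves \(\mathrm{d}Z_t = \sqrt{2\gamma Z_t}\,\mathrm{d}B_t\); a short Itô computation (ours, not the article's) gives \({\mathbb {E}}_x[Z_t] = x\) and \({\mathbb {E}}_x[Z_t^2] = x^2 + 2\gamma x t\), again linear in t as the critical theory predicts. A rough Euler–Maruyama sketch, with illustrative parameters and without a careful treatment of discretisation bias:

```python
import math
import random

random.seed(7)
gamma, x0, T = 1.0, 1.0, 1.0    # psi(theta) = gamma*theta^2, Z_0 = x0
dt, n_paths = 0.005, 10_000     # illustrative discretisation choices

acc = 0.0
for _ in range(n_paths):
    z = x0
    for _ in range(int(T / dt)):
        if z <= 0.0:            # 0 is absorbing for the Feller diffusion
            z = 0.0
            break
        z += math.sqrt(2.0 * gamma * z * dt) * random.gauss(0.0, 1.0)
    acc += max(z, 0.0) ** 2

# estimates E[Z_T^2]; the exact value is x0^2 + 2*gamma*x0*T = 3.0
est = acc / n_paths
```

The estimate should sit near \(x_0^2 + 2\gamma x_0 T = 3\), up to Monte Carlo and Euler discretisation error.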
Crump–Mode–Jagers (CMJ) processes: Consider a branching process in which particles live for a random amount of time \(\zeta \) and, during their lifetime, give birth to a (possibly random) number of offspring at random times; in essence, the age of a parent at the birth times forms a point process, say \(\eta (\mathrm{d}t)\) on \([0,\zeta ]\). We denote the law of the latter by \({\mathcal {P}}\). The offspring reproduce and die as independent copies of the parent particle and the law of the process is denoted by \({\mathbb {P}}\) when initiated from a single individual. Although this model, i.e. a CMJ process, is not covered in the present article, it appears to be the only context in which comparable asymptotic moment results can be found in the literature.
First let us consider the case where the number of offspring, \(N = \eta [0,\zeta ]\), born to the initial individual during its lifetime satisfies \({\mathbb {E}}[N] = 1\), which corresponds to the critical case. (Note, criticality is usually described in terms of the Malthusian parameter, however, the setting \({\mathbb {E}}[N] = 1\) is equivalent.) Further, let \(Z_t\) denote the number of individuals in the population at time \(t\ge 0\). Under the moment assumption \({\mathbb {E}}[N^k]< \infty \) for some \(k \ge 1\), [11] showed that the factorial moments \(m_k(t) := {\mathbb {E}}[Z_t(Z_t - 1) \cdots (Z_t - k + 1)]\) satisfy
where \(m_2 = {\mathcal {E}}[N^2]\) and \(b = {\mathcal {E}}[\int _0^\zeta t\eta (\mathrm{d}t)]\). We encourage the reader to compare this to the spatially independent example considered above.
The proof in [11] echoes a similar approach to the one presented in the current article. The author first develops a nonlinear integral equation that describes the evolution of \(m_k(t)\) in terms of the lower order moments, cf. [11, Theorem 1]. An inductive argument along with this evolution equation is then used to prove the above asymptotics.
Branching Brownian motion in a bounded domain: In [28], the Yaglom limit for branching Brownian motion (BBM) in a bounded domain was proved. In this setting, the semigroup \(\texttt {P}\) corresponds to that of a Brownian motion killed on exiting a \(C^1\) domain, E. The branching rate is taken as the constant \(\beta >0\) and the offspring distribution is not spatially dependent. Moreover, the first and second moments, \(m_1:={\mathcal {E}}[N]\) and \(m_2= {\mathcal {E}}[N^2]\), are finite. In this setting, the right eigenfunction \(\varphi \) exists on E, satisfying Dirichlet boundary conditions, and is accompanied by the left eigenmeasure \(\varphi (x)\mathrm{d}x\) on E. The associated eigenvalue is identified explicitly as \(\lambda = \beta (m_1-1) - \lambda _E\), where \(\lambda _E\) is the ground state eigenvalue of the Laplacian on E. The critical regime thus occurs when \(\beta (m_1-1) =\lambda _E\).
Among the main results of [28] are the Kolmogorov limit,
as \(t\rightarrow \infty \) and the Yaglom distributional limit,
where \({\mathbf {e}}_{\langle \varphi , f\rangle \Sigma /2}\) is an exponentially distributed random variable with rate \(\langle \varphi , f\rangle \Sigma /2\). (Note, we understand \(\langle f, \varphi \rangle = \int _E \varphi (x)f(x)\mathrm{d}x\) in this context.) In particular these two results allude to the limit of moments (albeit further moment assumptions would be needed on N), which, in the spirit of (20), can be heuristically read as
for \(k\ge 1\). Taking into account the fact that \({\tilde{\varphi }}(\mathrm{d}x) = \varphi (x) \mathrm{d}x\) and \(\beta =\lambda _E/(m_1-1)\), we see that
Hence (31) agrees precisely with Theorem 1.
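For a concrete feel for the spectral quantities in this example, the following sketch computes \(\lambda_E\) and \(\varphi\) numerically for the one-dimensional domain \(E=(0,1)\); the choice of generator \(\tfrac12\Delta\) and the discretisation are assumptions made here for illustration only.

```python
import numpy as np

# Illustrative only: lambda_E and varphi for BBM on E = (0, 1), taking the
# motion generator to be (1/2)*Laplacian with Dirichlet boundary conditions
# (a convention assumed here). Exactly, lambda_E = pi^2/2 and
# varphi(x) = sin(pi*x) up to normalisation; criticality in this example
# then reads beta*(m1 - 1) = lambda_E.
n = 400
h = 1.0 / (n + 1)
L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2       # discrete Dirichlet Laplacian
evals, evecs = np.linalg.eigh(-0.5 * L)          # ascending eigenvalues
lambda_E = evals[0]
varphi = np.abs(evecs[:, 0])                     # positive ground state

print(lambda_E)   # close to pi^2/2 ~ 4.9348
```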
Neutron branching processes: The neutron branching process (NBP), as introduced in [3, 22], is our final example. Neutrons evolve in the configuration space \(E = D \times V\), where \(D \subset {\mathbb {R}}^3\) is a bounded, open set denoting the set of particle locations and \(V:= \{\upsilon \in {\mathbb {R}}^3 : \texttt {v}_{\min } \le |\upsilon | \le \texttt {v}_{\max }\}\) with \(0< \texttt {v}_{\min } \le \texttt {v}_{\max } < \infty \), denotes the set of velocities. From an initial space-velocity configuration \((r, \upsilon )\), particles move according to piecewise deterministic Markov processes characterised by \(\sigma _\texttt {s}\pi _\texttt {s}\), where \(\sigma _\texttt {s}(r, \upsilon )\) denotes the rate at which particles change velocity (also called scattering events) at \((r, \upsilon )\), and \(\pi _\texttt {s}(r, \upsilon , \upsilon ')\mathrm{d}\upsilon '\) denotes the probability that such a scattering event results in a new outgoing velocity \(\upsilon '\). When at \((r, \upsilon ) \in D \times V\), at rate \(\sigma _\texttt {f}(r, \upsilon )\), a branching (or fission) event occurs, resulting in the release of several new neutrons with configurations \((r, \upsilon _1), \ldots , (r, \upsilon _N)\), say. The quantity \(\pi _\texttt {f}(r, \upsilon , \upsilon ')\mathrm{d}\upsilon '\) gives the average number of neutrons produced with outgoing velocity \(\upsilon '\) from a fission event at \((r, \upsilon )\). Thus, the NBP is an example of a branching Markov process with nonlocal branching, where the motion is a piecewise deterministic Markov process and the nonlocality at branching events appears in the velocity.
As previously mentioned, Theorem 1 was established for the NBP in [20] using a spine decomposition approach but under more restrictive assumptions and only when the test function f is taken to be the right eigenfunction \(\varphi \). In [22], under the assumptions

(A1)
\(\sigma _\texttt {s}\), \(\pi _\texttt {s}\), \(\sigma _\texttt {f}\), \(\pi _\texttt {f}\) are uniformly bounded from above.

(A2)
\(\inf _{r \in D, \upsilon , \upsilon ' \in V}(\sigma _\texttt {s}(r, \upsilon )\pi _\texttt {s}(r, \upsilon , \upsilon ') + \sigma _\texttt {f}(r, \upsilon )\pi _\texttt {f}(r, \upsilon , \upsilon ')) > 0\),
it was shown that (H1) holds. Moreover, since only a finite number of neutrons can be produced at a fission event, the number of offspring is uniformly bounded from above and thus (H2) holds for all \(k\ge 1\). Hence, the results obtained in this paper hold for the NBP.
Although there is no particular simplification of the limiting constants in Theorems 1, 2, 3, 4, 5 and 6 in this setting, of particular interest is the notion of particle clustering that appears in Monte Carlo criticality calculations [8, 9, 33]. This phenomenon occurs in critical reactors where particles exhibit strong spatial correlations. Studying the moments \(\texttt {M}^{(k)}\) and other correlation structures in this setting will shed further light on this phenomenon.
1.6 Outline of the paper and strategy for proofs
In the remainder of the paper there are two sections: one handling the proofs for the branching Markov process setting, and another handling the proofs for the superprocess setting. The proofs of the six main theorems follow a similar pattern, so we briefly provide a heuristic for their strategic approach here, and describe how this is laid out in the remainder of the paper. Roughly speaking, each of the proofs passes through the following fundamental steps.
Step 1: The first step in all of the proofs is to establish a nonlinear semigroup equation in the spirit of (4) for branching Markov processes and (5) for superprocesses, but for the joint Markovian pair \(\langle f, X_t\rangle \) and \(\int _0^t \langle g, X_s\rangle \mathrm{d}s\), \(t\ge 0\). Moreover, we want this semigroup equation to be written as an integral equation in terms of \((\texttt {T}_t, t\ge 0)\), rather than \((\texttt {P}_t, t\ge 0)\); recall \(\texttt {T}= \texttt {T}^{(1)}\) is the mean semigroup cf. (13). This is done for branching Markov processes in detail in Sect. 2.1, whereas we simply state the relationship for superprocesses in Sect. 3.1 as the proof is very similar.
Step 2: Next, we work with the simple observation that differentiating the nonlinear semigroup k times will give us access to the kth moment process \((\texttt {T}^{(k)}_t, t\ge 0)\). That is,
with a similar formula holding for \(\texttt {M}^{(k)}\). However, given the recursion of the nonlinear semigroup derived in the previous step, this turns out to give us a new recursion of \(\texttt {T}^{(k)}\) in terms of \(\texttt {T}^{(k1)}, \texttt {T}^{(k2)}, \ldots , \texttt {T}^{(1)}\) and similarly for \(\texttt {M}^{(k)}\) in terms of \(\texttt {M}^{(k1)}, \texttt {M}^{(k2)}, \ldots , \texttt {M}^{(1)}\); the two recursions are extremely close to one another, but subtly different nonetheless.
Here we see a fundamental difference between the approaches we take for superprocesses and branching Markov processes. This is due to the way in which we differentiate the branching mechanisms in the recursion of the nonlinear semigroup derived in Step 1. For the branching Markov process setting, the branching mechanism \(\texttt {G}\) that appears in the aforesaid recursion is written in terms of a product over particles at a birth event. This allows us to use the Leibniz rule when differentiating across the product; this is done in Sect. 2.2. In the case of superprocesses, the branching mechanism is written in terms of an analytically smoother object, namely a Lévy–Khintchine-type formula. With no representation in terms of particles, we must turn instead to Faà di Bruno's rule in order to differentiate this branching mechanism; cf. Sect. 3.2.
Step 3: The differentiation in Step 2 yields two fundamentally different sets of equations for the kth moment evolutions and kth occupation moment evolutions for each of the two classes of branching processes. In turn, this results in slightly different combinatorial formulae to work with when completing the proofs of the theorems. In essence the proof of all of the main results now pertains to identifying the leading order terms in the aforesaid combinatorial formulae and appealing to an inductive argument; for example, using the statement of Theorem 1 for \(\ell \le k\) to prove the statement of that theorem for \(\ell = k+1\).
In Sect. 2.3 we give the full argument for the proof of Theorem 1 in the branching Markov process setting. This sets the scene for all the other proofs for this class of branching processes. Indeed, based on this proof, we then complete the proof of Theorem 4 in Sect. 2.4, and Theorems 2, 3, 5 and 6 in Sect. 2.5, highlighting only the main differences relative to the key arguments in the proof of Theorem 1.
In the same spirit, we give a complete proof for Theorem 1 in Sect. 3.3 for superprocesses which gives the roadmap for all other proofs for this class of processes. Thereafter, in Sect. 3.4, we highlight only the differences for the proofs of Theorems 2 and 3 for superprocesses. Finally, with even greater brevity than in the branching Markov process setting (because of familiarity), we sketch the main differences for the proofs of Theorems 4, 5 and 6.
2 Proofs for branching Markov processes
The proof is a mixture of analytical and combinatorial computations which are based around the behaviour of the linear and nonlinear semigroups of X.
2.1 Linear and nonlinear semigroup equations
For \(f\in B^+(E)\), it is well known that the mean semigroup evolution satisfies
where
See for example the calculations in [22]. Associated with every linear semigroup of a branching process is a so-called many-to-one formula. Many-to-one formulae are not necessarily unique and the one we will develop here is slightly different from the usual construction because of nonlocality.
Suppose that \(\xi = (\xi _t, t\ge 0)\), with probabilities \({\mathbf {P}} = ({\mathbf {P}}_x, x\in E)\), is the Markov process corresponding to the semigroup \(\texttt {P}\). Let us introduce a new Markov process \({\hat{\xi }} = ({\hat{\xi }}_t, t\ge 0)\) which evolves as the process \(\xi \) but at rate \(\beta (x)\texttt {m}[1](x)\) the process is sent to a new position in E, such that for all Borel \(A\subset E\), the new position is in A with probability \(\texttt {m}[{\mathbf {1}}_A](x)/\texttt {m}[1](x)\). We will refer to the latter as extra jumps. Note the law of the extra jumps is well defined thanks to the assumption (15), which ensures that \(\sup _{x \in E}\texttt {m}[1](x) =\sup _{x\in E}{\mathcal {E}}_x(\langle 1, {\mathcal {Z}}\rangle ) <\infty \). Accordingly we denote the probabilities of \({{\hat{\xi }}}\) by \((\hat{{\mathbf {P}}}_x, x\in E)\). We can now state our manytoone formula.
Lemma 1
Write \(B(x) = \beta (x)(\texttt {m}[1](x) - 1)\), \(x\in E\). For \(f\in B^+(E)\) and \(t\ge 0\), we have
The proof is classical and follows standard reasoning for semigroup integral equations, e.g. as in [19, 22]: first conditioning the right-hand side of (33) on the time of the first extra jump, then using the principle of transferring between multiplicative and additive potentials in the resulting integral equation (cf. Lemma 1.2, Chapter 4 in [13]) shows that (32) holds. Grönwall's Lemma, the fact that \(\beta \in B^+(E)\) and (15) for \(k = 1\) ensure that the relevant integral equations have unique solutions.
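On a finite state space the many-to-one identity of Lemma 1 can be verified by elementary linear algebra. The sketch below (all rates invented for illustration) checks that the generator of the mean semigroup coincides with that of the auxiliary process \({\hat{\xi }}\) corrected by the Feynman-Kac potential \(B\); the associated semigroups then agree automatically.

```python
import numpy as np

# Finite-space sanity check of the many-to-one formula (illustrative rates).
# On E = {0, 1}, the mean semigroup T_t has generator Q + diag(beta)*(M - I),
# where M[x, y] = m[1_{y}](x) is the mean offspring kernel. The auxiliary
# process xi-hat jumps at rate beta(x)*m[1](x) according to the normalised
# kernel K = M / m[1], and the potential B(x) = beta(x)*(m[1](x) - 1)
# restores the lost mass: the two generators coincide.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])     # motion generator of xi
beta = np.array([0.5, 1.5])                  # branching rates beta(x)
M = np.array([[0.8, 0.4], [0.3, 0.9]])       # mean offspring kernel

m1 = M.sum(axis=1)                           # m[1](x)
K = M / m1[:, None]                          # extra-jump kernel of xi-hat
B = beta * (m1 - 1.0)                        # many-to-one potential

gen_mean = Q + beta[:, None] * (M - np.eye(2))
gen_hat = Q + (beta * m1)[:, None] * (K - np.eye(2)) + np.diag(B)

print(np.allclose(gen_mean, gen_hat))        # True
```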
We now define a variant of the nonlinear evolution equation (2) associated with X via
For \(f\in B^+_1(E)\), define
Our first preparatory result relates the two semigroups \((\texttt {u}_t, t\ge 0)\) and \((\texttt {T}_t, t\ge 0)\).
Lemma 2
For all \(f, g\in B^+(E)\), \(x \in E\) and \(t\ge 0\), the nonlinear semigroup \(\texttt {u}_t[f, g](x)\) satisfies
Proof
Again, the proof uses standard techniques for integral evolution equations so we only sketch it. Instead of considering \(\texttt {u}_t[f,g]\), we first work with
which will turn out to be more convenient for technical reasons.
By splitting the expectation in (36) on the first branching event and appealing to the Markov property, we get, for \(f, g\in B^+(E)\), \(t\ge 0\) and \(x\in E\),
where
Using similar reasoning to Lemma 1.2, Chapter 4 in [13] we can move the multiplicative potential with rate \(\beta +g\) to an additive potential in the above evolution equation to obtain
Now define
and \(({\tilde{\texttt {v}}}_t, t\ge 0)\) via
for \(x \in E, t\ge 0\) and \(f, g\in B^+(E)\). Note that for the moment we don’t claim a solution to (38) exists.
For convenience, we will define
so that \({\tilde{\texttt {v}}}_{t}[f, g](x) = \texttt {T}_t[\mathrm{e}^{-f}](x)+\texttt {K}_t[f, g](x) \). By conditioning the right-hand side of (38) on the first jump of \({{\hat{\xi }}}\) (bearing in mind the dynamics of \({{\hat{\xi }}}\) given just before Lemma 1) with the help of the Markov property (recalling that \(\texttt {B}(x) - \beta (x)\texttt {m}[1](x) = -\beta (x)\)), we get
Gathering terms and exchanging the order of integration in the double integral, this simplifies to
Finally, appealing to the change of multiplicative potential to additive potential in the spirit of e.g. Lemma 1.2, Chapter 4 of [13], we get
and hence \(({\tilde{\texttt {v}}}_{t}, t\ge 0)\) is a solution to (37). Reversing these arguments also shows that solutions to (37) solve (38). As such, a standard argument using \(\beta \in B^+(E)\), the assumption (15) for \(k=1\) and Grönwall’s Lemma tells us that all of the integral equations thus far have unique solutions. In conclusion, \((\texttt {v}_t[g],t\ge 0)\) and \((\tilde{\texttt {v}}_t[g], t\ge 0)\) agree.
To complete the lemma, note that
moreover,
Hence, working from (38) and the definitions of \(\texttt {D}\) and \(\texttt {A}\), which are related via
we get
as required. \(\square \)
2.2 Evolution equations for the kth moment of branching Markov processes
Next we turn our attention to the evolution equation generated by the kth moment functional \(\texttt {T}^{(k)}_t\), \(t\ge 0\). To this end, we start by observing that
The following result gives us an iterative approach to writing the kth moment functional in terms of lower order moment functionals.
Proposition 1
Fix \(k\ge 2\). Under the assumptions of Theorem 1, with the additional assumption that
it holds that
where
and \([k_1, \ldots , k_N]_k^2\) is the set of all nonnegative Ntuples \((k_1, \ldots , k_N)\) such that \(\sum _{i = 1}^N k_i = k\) and at least two of the \(k_i\) are strictly positive.
Proof
Recall from (35) that
It is clear that differentiating the first term k times and setting \(\theta = 0\) on the right-hand side of (42) yields
Thus it remains to differentiate the second term on the right-hand side of (42) k times. To this end, without concern for passing derivatives through expectations, using the Leibniz rule in Lemma A.2 of the Appendix, we have
where the sum is taken over all nonnegative integers \(k_1,\ldots , k_N\) such that \(\sum _{i = 1}^N k_i = {k}\).
Next let us look in more detail at the sum/product term on the right-hand side of (44). Consider the terms where only one of the \(k_i\) in the sum is positive, in which case \(k_i = {k}\) and
There are N ways this can happen in the sum of the sum/product term and hence
where \([k_1, \ldots , k_N]_k^2\) is the set of all nonnegative Ntuples \((k_1, \ldots , k_N)\) such that \(\sum _{i = 1}^N k_i = k\) and at least two of the \(k_i\) are strictly positive. Substituting this back into (44) yields
Now let us return to the justification that we can pass the derivatives through the expectation in the above calculation. We first note that derivatives are limits and so an 'epsilon-delta' argument will ultimately require dominated convergence. This is where the assumptions (15) and (40) come in. On the right-hand side of (44), each of the \(\texttt {T}_{t}^{(k_j)}[f](x_j)\) in the sum term is uniformly bounded by the assumption (40) and the collection \([k_1, \ldots , k_N]_k^2\) means that \(0\le k_j\le k-1\) for each \(j = 1,\ldots , N\). Moreover, there can be at most k items in the sum/product. Noting that
the assumption (15) allows us to use a domination argument with the kth order moment.
Combining this with (43) and (42), using an easy dominated convergence argument to pull the k derivatives through the integral in t, then dividing by \((-1)^{{k} +1}\), we get (41), as required. \(\square \)
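The Leibniz step at the heart of this proposition can be checked symbolically on arbitrary smooth test functions. The following sketch (functions and sizes chosen purely for illustration) verifies the multinomial expansion of the kth derivative of a product, which is then split in the proof into the single-positive-entry tuples and the set \([k_1, \ldots , k_N]_k^2\).

```python
import itertools, math
import sympy as sp

# Symbolic check (arbitrary smooth test functions) of the Leibniz rule behind
# Proposition 1: the k-th derivative of a product of N functions is a sum
# over nonnegative tuples (k_1, ..., k_N) with k_1 + ... + k_N = k, weighted
# by multinomial coefficients.
theta = sp.symbols('theta')
fs = [sp.exp(theta), sp.sin(theta) + 2, theta**3 + 1]
N, k = len(fs), 4

lhs = sp.diff(sp.Mul(*fs), theta, k)

rhs = 0
for ks in itertools.product(range(k + 1), repeat=N):
    if sum(ks) != k:
        continue
    coeff = math.factorial(k) // math.prod(math.factorial(j) for j in ks)
    rhs += coeff * sp.Mul(*(sp.diff(f, theta, j) for f, j in zip(fs, ks)))

assert sp.simplify(lhs - rhs) == 0
```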
2.3 Completing the proof of Theorem 1: critical case
We will prove Theorem 1 by induction, starting with the case \(k = 1\). In this case, (18) reads
which holds due to (14).
We now assume that the theorem holds true in the branching Markov process setting for some \(k \ge 1\) and proceed to show that (18) holds for all \(\ell \le k +1\). As alluded to in the introduction, the strategy for the proof is to use equation (41), since this allows us to write the kth moment in terms of the lower order moments. To prove that the right-hand side of this equation converges to the limit appearing in the statement of the theorem, we use Theorem A.1 stated in the appendix along with the induction hypothesis.
To this end, first note that the induction hypothesis implies that (40) holds. Hence Proposition 1 tells us that
where we have used the change of variables \(s = ut\) in the final equality.
We now make some observations that will simplify the expression on the right-hand side of (46) as \(t \rightarrow \infty \). First note that due to (14), the first term on the right-hand side of (46) will vanish as \(t \rightarrow \infty \). Next, note that, if more than two of the \(k_i\) in the sum are strictly positive, then the renormalising by \(t^{k - 1}\) will cause the associated summand to go to zero as well. For example, suppose without loss of generality that \(k_1\) and \(k_2\) are both strictly positive. Then we can write \(t^{k - 1} = t^{(k + 1) - 2} = t^{k_1 - 1}t^{k_2 - 1}t^{k_3} \cdots t^{k_N}\). Now the induction hypothesis tells us that the correct normalisation of each of the terms in the product is \(t^{k_j - 1}\), which means that the item \(\texttt {T}^{(k_j)}_{t(1-u)}\) for a third \(k_j>0\) will be 'over normalised to zero' in the limit.
To make this heuristic rigorous, we can employ Theorem A.1 from the Appendix. To this end, let us set
where \([k_1, \ldots , k_N]_{k+1}^3\) is the subset of \([k_1, \ldots , k_N]_{k+1}^2\), for which at least three of the \(k_i\) are strictly positive (which can be an empty set). We will show that conditions (A.1) and (A.2) are satisfied via
First note that no more than \(k + 1\) of the \(k_i\) are strictly positive in the product in (47). This follows from the fact that it is not possible to partition the set \(\{1, \ldots , k+1\}\) into more than \(k + 1\) nonempty blocks. Next note that
The product term on the right-hand side is uniformly bounded in \(x_j\) and \(t(1-u)\) on compact intervals due to boundedness of \(\varphi \) and the fact that (18) is assumed to hold for all \(\ell \le k\) by induction. Moreover, if \(\#\{j : k_j > 0\} \le 1\), the set \([k_1, \ldots , k_N]_{k+1}^3\) is empty, otherwise, the term \(\textstyle {(t(1-u))^{k+1 - \#\{j : k_j > 0\}}}/{t^{k - 1}}\) is finite for all \(t \ge 1\), say. From (45) and (15), we also observe that
Taking these facts into account, it is now straightforward to see that the earlier given heuristic can be made rigorous and (48) holds. In particular, we can use dominated convergence to pass the limit in t through the expectation in (47) to achieve the second statement in (48).
As F belongs to the class of functions \( {\mathcal {C}}\), defined just before Theorem A.1 in the Appendix, the aforesaid theorem tells us that
Returning to (46), since the sum there requires that at least two of the \(k_i\) are positive, this means that the only surviving terms in the limit are those that are combinations of two strictly positive terms \(k_i\) and \(k_j\) such that \(i \ne j\) and \(k_i + k_j = k+1\). This can be thought of as choosing \(i, j \in \{1, \ldots , N\}\) with \(i \ne j\), choosing \(k_i \in \{1, \ldots , k\}\) and then setting \(k_j = k+1 - k_i\). One should take care however to avoid double counting each pair \((k_i, k_j)\). Thus, we have
where the factor of 1/2 appears to compensate for the aforementioned double counting.
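The double-counting bookkeeping can be checked by brute force for small illustrative sizes: among nonnegative N-tuples summing to \(k+1\) with exactly two positive entries, the "choose ordered \((i,j)\), then \(k_i\)" procedure counts every tuple exactly twice.

```python
import itertools

# Quick enumeration behind the factor 1/2 (sizes N and k are illustrative):
# nonnegative N-tuples summing to k+1 with exactly two positive entries.
# Choosing ordered positions (i, j), i != j, and k_i in {1, ..., k} hits
# every such tuple exactly twice, so the unordered count is N*(N-1)*k/2.
N, k = 4, 3
tuples = [t for t in itertools.product(range(k + 2), repeat=N)
          if sum(t) == k + 1 and sum(1 for v in t if v > 0) == 2]

print(len(tuples))    # 18 = N*(N-1)*k/2
```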
In order to show that the righthand side above delivers the required finiteness and limit (18), we again turn to Theorem A.1. For \(x \in E\), \(t \ge 0\) and \(0\le u \le 1\), in anticipation of using this theorem, we now redefine
After some rearrangement, we have
Using similar arguments to those given previously in the proof of (49), we may, again, combine the induction hypothesis, simple combinatorics and dominated convergence to pass the limit as \(t\rightarrow \infty \) through the expectation and show that
for which one uses that
Note that, thanks to the assumption (H2), the expression for F(s, x) clearly satisfies (A.1).
Subtracting the right-hand side of (52) from the right-hand side of (51), again appealing to the induction hypotheses, specifically the second statement in (18), it is not difficult to show that, for each \(\varepsilon \in (0,1)\),
On the other hand, the first statement in the induction hypothesis (18) also implies that there exists a constant \(C_k>0\) (which depends on k but not \(\varepsilon \)) such that
Since we may take \(\varepsilon \) arbitrarily close to 1, we conclude that (A.2) holds.
In conclusion, since the conditions of Theorem A.1 are now met, we get the two statements of (18) as a consequence. \(\square \)
2.4 Proof of Theorem 4
Next we turn our attention to the evolution equation generated by the kth moment functional \(\texttt {M}^{(k)}_t\), \(t\ge 0\). To this end, we start by observing that
Taking account of (35), we see that
Given the proximity of (54) to (42), it is easy to see that we can apply the same reasoning that we used for \(\texttt {T}^{(k)}_t[f](x)\) to \(\texttt {M}^{(k)}_t[g](x)\) and conclude that, for \(k\ge 2\),
where \({{\hat{\eta }}}^k\) plays the role of \(\eta ^k\) albeit replacing the moment operators \(\texttt {T}^{(j)}\) by the moment operators \(\texttt {M}^{(j)}\).
We now proceed to prove Theorem 4, also by induction. First we consider the setting \(k = 1\). In that case,
Referring now to Theorem A.1 in the Appendix, we can take \(F(x, s, t) = f(x)/\varphi (x)\); since \(f\in B^+(E)\), the conditions of the theorem are trivially met and hence
Note that this limit sets the scene for the polynomial growth in \(t^{n(k)}\) of the higher moments for some function n(k). If we are to argue by induction, whatever the choice of n(k), it must satisfy \(n(1)=1\).
Next suppose that Theorem 4 holds for all integer moments up to and including \(k1\). To make the inductive step, we follow similar arguments to the proof of Theorem 1. That is, we use Theorem A.1 and the induction hypothesis to show that the limit of the righthand side of (55) is precisely the limit that appears in the statement of the theorem.
We have from (55) that
Let us first deal with the rightmost integral in (56). It can be written as
Arguing as in the spirit of the proof of Theorem 1, our induction hypothesis ensures that
satisfies (A.1) and (A.2). Theorem A.1 thus tells us that, uniformly in \(x\in E\) and \(g\in B^+_1(E)\),
Now turning our attention to the first integral on the right-hand side of (56) and again following the style of the reasoning in the proof of Theorem 1, we can pull out the leading order terms, uniformly for \(x\in E\) and \(g\in B^+_1(E)\),
It is again worth noting here that the choice of the polynomial growth in the form \(t^{n(k)}\) also constrains the possible linear choices of n(k) to \(n(k) = 2k-1\) if we are to respect \(n(1)=1\) and the correct distribution of the index across (58).
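The rate \(n(k) = 2k-1\) can be seen concretely in the simplest spatially independent setting. The sketch below (illustrative parameters, not from the paper) integrates the closed moment system for the running occupation \(Y_t = \int_0^t Z_s\,\mathrm{d}s\) of a critical Feller CSBP, where \(Z\) is a martingale, and recovers cubic growth of the second occupation moment.

```python
# Sketch (illustrative): second occupation moment for a critical Feller CSBP
# with psi(lambda) = gamma*lambda^2, Z_0 = 1, Y_t = int_0^t Z_s ds.
# Since Z is a martingale and E[Z_t^2] = 1 + 2*gamma*t, Ito/generator
# calculations give the closed triangular system
#   dE[Y]/dt = E[Z] = 1,  dE[Z*Y]/dt = E[Z^2],  dE[Y^2]/dt = 2*E[Z*Y],
# so E[Y_t^2] = t^2 + (2/3)*gamma*t^3 ~ t^(2k-1) for k = 2.
gamma, T, n = 0.7, 30.0, 30_000
dt = T / n

EY, EZY, EY2, t = 0.0, 0.0, 0.0, 0.0
for _ in range(n):
    EY2 += dt * 2.0 * EZY            # uses EZY at the current time
    EZY += dt * (1.0 + 2.0 * gamma * t)
    EY += dt
    t += dt
```

The exact solutions are \(\mathbb{E}[Y_t] = t\), \(\mathbb{E}[Z_tY_t] = t + \gamma t^2\) and \(\mathbb{E}[Y_t^2] = t^2 + \tfrac{2}{3}\gamma t^3\).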
Identifying
our induction hypothesis allows us to conclude that \(F[g](x,u): = \lim _{t\rightarrow \infty }F[g](x,u, t)\) exists and
Thanks to our induction hypothesis, we can also easily verify (A.1) and (A.2). Theorem A.1 now gives us the required uniform (in \(x\in E\) and \(g\in B^+_1(E)\)) limit
Putting (59) together with (57) we get the statement of Theorem 4. \(\square \)
2.5 Remaining proofs in the noncritical cases
We now give an outline of the main steps in the proof of Theorem 1 for the sub- and supercritical cases. As previously mentioned, the ideas used in this section will closely follow those presented in the previous section for the proof of the critical case and so we leave the details to the reader. We first note that the Perron-Frobenius behaviour in (H1) ensures the base case for the induction argument, regardless of the value of \(\lambda \). We thus turn to the inductive step, assuming the result holds for \(k - 1\).
Proof of Theorem 2 (supercritical case)
From the evolution equation (41), we have
It then follows that
Noting that \(\textstyle \sum _{j=1}^N k_j =k\), we may again share the exponential term across the product on the right-hand side above as follows,
Combining this with (61) and changing variables in the final integral of the latter, we have
where we have defined
It is easy to see that, pointwise in \(x\in E\) and \(u\in [0,1]\), using the induction hypothesis and (H2),
where we have again used the fact that the \(k_j\)s sum to k to extract the \(\langle f, {\tilde{\varphi }}\rangle ^k\) term. Similarly to the critical setting we can also verify using the induction hypothesis and (H2) that (A.1) and (A.2) hold.
This is sufficient to note that, by using a triangle inequality in similar spirit to the one found in (A.4) and appealing to (14) of the assumption (H1), we have that
This means that for t sufficiently large, we can control the modulus in the integral on the right-hand side of (62) by a global constant. The remainder of the integral yields a bound of \(\varepsilon (1-\mathrm{e}^{-\lambda (k - 1) t})/\lambda (k - 1) \), which is bounded above by \(\varepsilon /\lambda (k-1)\) and hence can be made arbitrarily small. \(\square \)
Proof of Theorem 3 (subcritical case)
We now outline the subcritical case. First note that since we only compensate by \(\mathrm{e}^{\lambda t}\), the term \(\texttt {T}_{t}[f^k](x)\) that appears in equation (41) does not vanish after the normalisation. Due to assumption (H1), we have
Next we turn to the integral term in (41). Define \([k_1, \ldots , k_N]^n_k\), for \(2 \le n \le k\) to be the set of tuples \((k_1, \ldots , k_N)\) with exactly n positive terms and whose sum is equal to k. Similar calculations to those given above yield
Again, we leave the details to the reader but the idea is that the induction hypothesis will take care of the product of the lower order moments and the second part of (14) in assumption (H1) will then take care of the asymptotic behaviour of the semigroup \(\texttt {T}_{t(1-u)}\). The second part of (14) allows one to control the difference between this term and its limit. In a similar manner to the final step in the proof of Theorem 2, the difference of (41) and its limit can be reduced to the limit as \(t\rightarrow \infty \) of \(\varepsilon ({1-\mathrm{e}^{-|\lambda | (n-1) t} })/|\lambda |(n - 1)\), which is bounded above by a multiple of \(\varepsilon \). \(\square \)
Proof of Theorem 5
For the case \(k = 1\), we have
Thanks to (H1) and similar arguments to those used in the proof of Theorem 2, we may choose t sufficiently large such that the modulus in the integral is bounded above by \(\varepsilon > 0\), uniformly in \(g \in B^+_1(E)\) and \(x \in E\). Then, the right-hand side of (64) is bounded above by \(\varepsilon \lambda ^{-1} (1-\mathrm{e}^{-\lambda t}) + \mathrm{e}^{-\lambda t}{\left\langle g,{\tilde{\varphi }}\right\rangle }/{\lambda }\). Since \(\varepsilon \) can be taken arbitrarily small, this gives the desired result and also pins down the initial value \(L_1 = 1/\lambda \).
Now assume the result holds for all \(\ell \le k-1\). Reflecting on the proof of Theorem 2, we note that in this setting the starting point is almost identical, except that the analogue of (60), which is derived from (55), now requires us to evaluate
The first term on the right-hand side of (65) can be handled in essentially the same way as in the proof of Theorem 2. The second term on the right-hand side of (65) can easily be dealt with along the lines that we are now familiar with from earlier proofs, using the induction hypothesis. In particular, its limit is zero. Hence, combined with the first term on the right-hand side of (65), we recover the same recursion equation for \(L_k\). \(\square \)
Proof of Theorem 6
The case \(k = 1\) is relatively straightforward and, again in the interest of keeping things brief, we point the reader to the fact that, as \(t \rightarrow \infty \), we have
Now suppose the result holds for all \(\ell \le k1\). We again refer to (55), which means we are interested in handling a limit which is very similar to (65), now taking the form
Again skipping the details, combining (67) with the argument in (66) and the induction hypothesis gives us
which gives us the required recursion for \(L_k\). \(\square \)
3 Proofs for superprocesses
For the proofs of Theorems 1, 2 and 3 in the setting of superprocesses we follow a similar approach. One difference is that we cannot work with the kth moment as the expectation of a product over an almost surely finite number of particles. As such, the use of the Leibniz formula as in the previous section is no longer helpful. Instead, we use the Faà di Bruno formula (see Lemma A.1) to assist with multiple derivatives of the nonlinear evolution equation (5).
3.1 Linear and nonlinear semigroup equations
The evolution equation for the expectation semigroup \((\texttt {T}_t, t\ge 0)\) is well known and satisfies
for \(t\ge 0\), \(x\in E\) and \(f\in B^+(E)\), where, with a meaningful abuse of our branching Markov process notation, we now define
See for example equation (3.24) of [7].
In the spirit of Lemma 1, we can give a second representation of \(\texttt {T}_t[f]\) in terms of an auxiliary process, the so-called many-to-one formula. To this end, if, as before, we work with the process \((\xi , {\mathbf {P}})\) to represent the Markov process associated to the semigroup \((\texttt {P}_t, t\ge 0)\), then, although we have redefined the quantity \(\texttt {m}[f](x)\), we can still meaningfully work with the process \(({{\hat{\xi }}}, \hat{{\mathbf {P}}})\) as defined just before Lemma 1.
Lemma 3
Let \(\vartheta (x) = B(x)+b(x) = \beta (x)(\texttt {m}[1](x) - 1)+b(x)\), then, for \(t\ge 0\) and \(f\in B^+(E)\),
As with Lemma 1, the proof is classical, requiring only that we take the right-hand side of (71) and condition on the first extra jump of \(({{\hat{\xi }}}, \hat{{\mathbf {P}}})\) to show that it also solves (70). It is a straightforward application of Grönwall's inequality to show that (70) has a unique solution and hence (69) holds. The reader will note that, because we have separated out the local and nonlocal branching mechanisms of the superprocess, the deliberate repeat definition of \(\texttt {m}[f]\) for superprocesses is only the analogue of its counterpart for branching Markov processes in the sense of nonlocal activity. The mean local branching rate has otherwise been singled out as the term b.
Similarly to the branching Markov process setting, let us rewrite an extended version of the nonlinear semigroup evolution \(({\texttt {V}}_t, t\ge 0)\), defined in (6), i.e. the natural analogue of (35), in terms of the linear semigroup \((\texttt {T}_t, t\ge 0)\). To this end, define
Analogously to Theorem 2 we have the following result.
Lemma 4
For all \(f, g\in B^+(E)\), \(x \in E\) and \(t\ge 0\), the nonlinear semigroup \(\texttt {V}_t[f, g](x)\) satisfies
where, for \(h\in B^+(E)\) and \(x\in E\),
The proof is essentially the same as the proof of Lemma 2 and hence we leave the details to the reader.
3.2 Evolution equations for the kth moment of a superprocess
Recall that we defined \(\texttt {T}_t^{(k)}\left[ f\right] (x):={\mathbb {E}}_{\delta _x}[\langle f,X_t\rangle ^k]\), \(t\ge 0\), \(f\in B^+(E)\), \(k\ge 1\). As with the setting of branching Markov processes, we want to establish an evolution equation for \((\texttt {T}_t^{(k)}, t\ge 0)\), from which we can establish the desired asymptotics. To this end, let us introduce the following notation.
For \(x\in E\), \(k\ge 2\) and \(t\ge 0\), define
and
and finally
and the sums run over the set of nonnegative integers \(\left\{ m_1,\ldots ,m_{k-1}\right\} \) such that \(m_1+2m_2+\cdots +(k-1) m_{k-1}=k\).
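These index sets are finite and easy to enumerate by brute force. As a purely illustrative sketch (the function name `index_tuples` is ours, not the paper's), the following lists all nonnegative tuples \((m_1,\ldots ,m_{k-1})\) with \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\); these correspond to the integer partitions of \(k\) into parts of size at most \(k-1\).

```python
def index_tuples(k, max_part):
    """All nonnegative (m_1, ..., m_max_part) with m_1 + 2*m_2 + ... = k."""
    results = []

    def rec(j, remaining, acc):
        # choose the multiplicity m_j of the part j, then recurse
        if j > max_part:
            if remaining == 0:
                results.append(tuple(acc))
            return
        for m in range(remaining // j + 1):
            rec(j + 1, remaining - j * m, acc + [m])

    rec(1, k, [])
    return results

# k = 4, parts at most 3: the partitions 3+1, 2+2, 2+1+1, 1+1+1+1
# give four index tuples (m_1, m_2, m_3)
print(index_tuples(4, 3))
```

When the largest allowed part is \(k\) itself (as in the Faà di Bruno sum of Lemma A.1), exactly one extra tuple appears, namely \(m_k=1\) with all other \(m_j=0\), matching the observation made after (77) in the text.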
Theorem 7
Fix \(k\ge 2\). Suppose that (H1) and (H2) hold, with the additional assumption that
Then,
where
Proof
First note that, similarly to the branching Markov process case, defining
we have
and
To prove (77), recall the definition (5) and let
The idea is to use Lemma A.1 to obtain two equivalent expressions for \( \texttt {v}_t^{(k)}[f](x)\) that, when equated, yield (77).
To this end, note that, due to Faà di Bruno’s formula (Lemma A.1), we have
where the sum runs over the set of nonnegative integers \(\left\{ m_1,\ldots ,m_k\right\} _k\) such that
Note that \(m_k>0\) if and only if \(m_k=1\) and \(m_1=m_2=\cdots =m_{k-1}=0\), so the kth moment term \(\texttt {T}^{(k)}_t[f]\) appears only once and with a factor \((-1)^{k+1}\), that is,
where all the terms in \(R_k(x,t)\) are products of two or more lower order moments. Thus it remains to show that
Differentiating the evolution equation (72) k times at \(\theta =0\), momentarily not worrying about passing derivatives through integrals, we get
where
We first deal with the kth derivative of the term involving \(\psi \) in the above integral. For this, we again use Lemma A.1 to get
where the last equality holds because \(m_k=1\) if and only if \(m_1=\cdots =m_{k-1}=0\) and \(\psi '(x,0+)=b(x)\).
Similarly, for the kth derivative of the remaining terms, recalling (8), (9) and (70), we have
Using Lemma A.1 yields
where, in the final equality, we have singled out the case that \(m_k=1\) and \(m_1=\cdots =m_{k-1}= 0\) in the Faà di Bruno formula. Using the definition of \(\texttt {m}[f](x)\) in (70) and the same observation as above about the \(m_j\)’s, we get
Putting the pieces together, we obtain
Combining this with equation (80) yields
which is the desired result.
There is one final matter we must attend to, which is the ability to move derivatives through integrals. In this setting, this follows from the assumption (76), (H2) and the Lévy–Khintchine-type formulae for \(\psi \) and \(\phi \). \(\square \)
3.3 Completing the proof of Theorem 1: critical case
We will prove Theorem 1 for superprocesses using induction, similarly to the setting of branching Markov processes. The case \(k=1\) follows from assumption (H1).
Now assume that the statement of Theorem 1 holds in the superprocess setting for all \(\ell \le k\). Our aim is to prove that the result holds for \(k+1\). Using Theorem 7 and a change of variables, we have that
where R and U were defined in equations (73) and (78), respectively. As with the particle system, we first aim to simplify the right-hand side before showing that its limit is equivalent to the expression given in the statement of the theorem. In particular, we will first show that, for each \(x\in E\), the limit of the right-hand side of (83) is equivalent to
where
and
where \(\left\{ k_1,k_2\right\} ^{+}\) is defined to be the set of pairs of positive integers \(k_1,k_2\) with \(k_1+k_2=k+1\).
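The set \(\left\{ k_1,k_2\right\} ^{+}\) is small and explicit; as a quick illustrative check (the helper name `positive_pairs` is ours), counted as ordered pairs it has exactly \(k\) elements:

```python
def positive_pairs(n):
    """Ordered pairs (k1, k2) of positive integers with k1 + k2 == n."""
    return [(k1, n - k1) for k1 in range(1, n)]

# for k + 1 = 5 there are k = 4 ordered pairs
print(positive_pairs(5))  # -> [(1, 4), (2, 3), (3, 2), (4, 1)]
```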
To this end, writing \(c(m_1,\ldots ,m_{k})\) for the constants preceding the product summands in (73), observe that
where the final equality is due to the induction hypothesis and the fact that \(m_1+\cdots +m_{k}>1\), which follows from the fact that \(m_1+2m_2+\cdots +km_k=k+1\). Note, moreover, that the induction hypothesis ensures that the limit is uniform in \(x\in E\) and, in fact, that
We now return to (83) to deal with the term involving \(U_{k+1}\), which we recall is a linear combination of \({K}_{k+1}\) and \(S_{k+1}\), defined in (74) and (75), respectively. Note that if any of the summands in either \({K}_{k+1}\) or \(S_{k+1}\) have more than two of the \(m_j\) positive, the limit of that summand, when renormalised by \(1/t^{k-1}\), will be zero. In essence, the arguments here are analogous to those that led to (49) in the branching Markov process setting. This implies that the only terms in the sums of (74) and (75) that remain in the limit of (83) are those for which \(m_{k_1}=m_{k_2}=1\) and \(m_j=0\) for all \(j\ne k_1,k_2\), with \(k_1<k_2\) such that \(k_1+k_2=k+1\), and, if \(k+1\) is even, the terms in which \(m_{(k+1)/2}=2\) and \(m_j=0\) for all \(j\ne (k+1)/2\).
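The description of the surviving index tuples can be verified by brute force. The sketch below (our own illustration; the names `index_tuples` and `surviving` are hypothetical) enumerates all tuples with \(m_1+2m_2+\cdots +km_k=k+1\) and keeps those of total multiplicity \(m_1+\cdots +m_k=2\), i.e. exactly the pairs \(m_{k_1}=m_{k_2}=1\) with \(k_1+k_2=k+1\), plus \(m_{(k+1)/2}=2\) when \(k+1\) is even.

```python
def index_tuples(n, max_part):
    """Nonnegative (m_1, ..., m_max_part) with sum_j j*m_j == n."""
    out = []

    def rec(j, remaining, acc):
        if j > max_part:
            if remaining == 0:
                out.append(tuple(acc))
            return
        for m in range(remaining // j + 1):
            rec(j + 1, remaining - j * m, acc + [m])

    rec(1, n, [])
    return out

def surviving(k):
    """Tuples contributing to the limit: total multiplicity m_1+...+m_k == 2."""
    return [t for t in index_tuples(k + 1, k) if sum(t) == 2]

# k + 1 = 5 (odd): the surviving tuples pair the indices (1, 4) and (2, 3)
for t in surviving(4):
    print(t)
```

For \(k+1=6\), an additional tuple with \(m_3=2\) appears, consistent with the even case described above.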
Let us now convert all of the above heuristics into rigorous computation, for which we appeal to Theorem A.1. We write
where \(K_{k+1}^{(3+)}\) and \(S_{k+1}^{(3+)}\) contain the terms in \(K_{k+1}\) and \(S_{k+1}\), respectively, for which the sum \(m_1+\cdots +m_k\) is greater than or equal to 3. We will prove that \(\lim _{t\rightarrow \infty }{F}(x,s,t)=0\) and that (A.1) and (A.2) hold.
Due to (16) and the boundedness of \(\varphi \), dominated convergence implies that
where the set \(\left\{ m_1,\ldots ,m_k\right\} ^{3}_{k+1}\) is the subset of \(\left\{ m_1,\ldots ,m_k\right\} _{k+1}\) for which \(m_1+\cdots +m_k\ge 3\). Using the induction hypothesis and (87), we get that the right-hand side above is zero. The same arguments also imply that the limit of \(S_{k+1}^{(3+)}\) is zero. Thus \({F[f](x,s)}: = \lim _{t\rightarrow \infty } {F[f](x,s,t)} = 0\). The condition (A.1) trivially holds. For (A.2), the required uniformity follows from the induction hypothesis and (87).
Using Theorem A.1 in the Appendix, we conclude that
Let us now define \(\{k_1<k_2\} \) to be the elements in \(K_{k+1}\) for which \(m_{k_1}=m_{k_2}=1\) with \(k_1<k_2\) such that \(k_1+k_2=k+1\) and \(m_j=0\) for all other indices and, in the case where \(k+1\) is even, \(m_{(k+1)/2}=2\) and \(m_j=0\) for all \(j\ne (k+1)/2\). Restricting the sum to this set in \(K_{k+1}\) we get the following expression
where we recall \(\left\{ k_1,k_2\right\} ^{+}\) is the set of positive integers \(k_1,k_2\) such that \(k_1+k_2=k+1\). Similarly, we obtain the following expression for \(S_{k+1}\):
This shows that the righthand side of (83) is equivalent to (84).
To conclude the proof, we will again use Theorem A.1 to show that (84) is equivalent to \({\mathbb {V}}\) given in the theorem. To this end, define
Due to (16) and the induction hypothesis,
where the last equality holds because the total number of ways of splitting \(k+1\) into an ordered pair of positive integer parts \(k_1+k_2=k+1\) is equal to k. To obtain the limit for \(S_{k+1}^{(2)}\), we use (16), the induction hypothesis, dominated convergence and linearity to obtain
Combining these two limits, we get that
To complete the proof, it remains to verify assumptions (A.1) and (A.2) in order to apply Theorem A.1 to (84). By now the reader will be familiar with the arguments required to check these assumptions and thus we omit the details. Hence, it follows that
where the limit is uniform in \(x\in E\). Moreover, \(\sup _{t\ge 0, x\in E}\texttt {T}^{(k+1)}_t\left[ f\right] (x)/\varphi (x)t^{k}<\infty \).
3.4 Proofs for moments in the noncritical cases
In this section we present the main ideas behind the proofs of Theorems 2 and 3. The methods follow similar reasoning to the critical case and the details are left to the reader. The base case is given by the Perron–Frobenius behaviour in (H1) for both the sub- and supercritical cases. Thus, we assume the result for \(k-1\) and proceed to give the outline of the inductive step of the argument.
Proof of Theorem 2 (supercritical case)
The main difference here, compared to the critical case, is that all the terms in \(R_{k}(x,t)\) will survive after the normalisation \(\mathrm{e}^{\lambda k t}\), since the exponential term is shared across the product. From the evolution equation (77) and the definition of \(L_k\) we have that
The first term on the right-hand side goes to zero uniformly since
and the induction hypothesis implies that
For the second term, define, for \(k\ge 2\),
We will use induction to prove that for \(k\ge 2\)
which will complete the proof of the theorem.
First notice that for any \(k\ge 2\) due to a change of variable, we have that
where
Recalling the definitions of \(K_k\) and \(S_k\) given in (74) and (75) respectively, and using the fact that \(I_j(x,t)=((-1)^{j+1}\texttt {T}_t\left[ f\right] (x)R_j(x,t))\) for \(j=2,\ldots ,k-1\), after sharing the exponential term across the products, we obtain
and
From these expressions and the definition of \({\mathbb {V}}_2\), the case \(k=2\) follows easily. Now we assume (91) holds for \(\ell =1,\ldots , k-1\); then, similarly to the branching Markov process case, it follows that
where we have defined
It is easy to see that, pointwise in \(x\in E\) and for \(u\in (0,1)\), using the induction hypothesis for \(I_k\) and the assumed Perron–Frobenius behaviour (H1) for \(\texttt {T}_{t(1-u)}\left[ f\right] (x)\), we have
We again verify that the conditions (A.1) and (A.2) hold using the induction hypothesis and (H2). To complete the proof of (91), we again proceed along the same lines as in the branching Markov process setting. By similar arguments to those given in the proof of Theorem A.1,
it follows that the remainder of the integral in (92) can be bounded by \(\varepsilon (1-\mathrm{e}^{-\lambda (k-1)t})/\lambda (k-1)\), which can be bounded by \(\varepsilon \). Combining this with (90) we get the desired result. \(\square \)
Proof of Theorem 3 (subcritical case)
We now outline the proof for the subcritical case. Again we use an inductive argument. The case \(k=1\) follows from (H1) and the fact that \(\left\langle \varphi ,\tilde{\varphi }\right\rangle =1\). Now assume the result to be true for \(\ell =1,\ldots ,k-1\). We first note that the term \(R_k(x,t)\) in (77) vanishes in the limit after the normalisation \(\mathrm{e}^{\lambda t}\). To see this, note from (73) that
where \(c(m_1,\ldots ,m_{k-1})\) is a constant depending only on \(m_1, \ldots ,m_{k-1}\). Since each of the terms in the product is bounded, \(\lambda <0\), and \(m_1+\cdots +m_{k-1}>1\) for any such partition, the limit of the right-hand side above is zero. Using this, along with the induction hypothesis, assumption (H1) and the evolution equation (77), we see that
where
Then, similar calculations to those above yield
To finish the proof, we use the induction hypothesis to deal with the lower order moments, and dominated convergence and (H1) to deal with the limit of \(\texttt {T}_{t(1-u)}\). Similarly to the last part of the proof of Theorem 2, we get that (93) is bounded as \(t\rightarrow \infty \) by \(\varepsilon (1-\mathrm{e}^{-\lambda t (1-m_1-\cdots -m_k)})/\lambda (1-m_1-\cdots -m_k)\), which is bounded by \(\varepsilon \) since \(m_1+\cdots +m_{k-1}>1\). We once more leave the details of the rest of the proof to the reader. \(\square \)
3.5 Proofs for occupation moments, Theorems 4, 5 and 6
Given the proofs we have now seen in the branching particle setting for these three theorems, as well as the proofs of Theorems 1, 2 and 3, we mention only that a similar calculation to the one presented in Theorem 7 tells us that
where
and \({\tilde{R}}\), \({\tilde{K}}\) and \({\tilde{S}}\) are defined as R, K and S in (73), (74) and (75), respectively, albeit replacing \(\texttt {T}^{(j)}\) by \(\texttt {M}^{(j)}\). From here we can consider the claimed asymptotics using the same inductive reasoning as in the proofs of Theorems 1, 2 and 3, respectively. \(\square \)
Change history
21 July 2023
A Correction to this paper has been published: https://doi.org/10.1007/s00440-023-01220-w
Notes
The article [11] only came to light after submitting this article.
We interpret \(\textstyle \sum _\emptyset = 0\) and \(\textstyle \prod _\emptyset = 1\).
References
Chen, Z.-Q., Ren, Y.-X., Yang, T.: Law of large numbers for branching symmetric Hunt processes with measure-valued branching rates. J. Theoret. Probab. 30(3), 898–931 (2017)
Chen, Z.-Q., Ren, Y.-X., Yang, T.: Skeleton decomposition and law of large numbers for supercritical superprocesses. Acta Appl. Math. 159, 225–285 (2019)
Cox, A.M.G., Harris, S.C., Horton, E.L., Kyprianou, A.E.: Multispecies neutron transport equation. J. Stat. Phys. 176(2), 425–455 (2019)
Crump, K.S., Mode, C.J.: A general age-dependent branching process. I. J. Math. Anal. Appl. 24(3), 494–508 (1968)
Crump, K.S., Mode, C.J.: A general age-dependent branching process. II. J. Math. Anal. Appl. 25(1), 8–17 (1969)
Dawson, D.A.: Measure-valued Markov processes. In: École d’Été de Probabilités de Saint-Flour XXI—1991, volume 1541 of Lecture Notes in Mathematics, pp. 1–260. Springer, Berlin (1993)
Dawson, D.A., Gorostiza, L.G., Li, Z.: Nonlocal branching superprocesses and some related models. Acta Appl. Math. 74(1), 93–112 (2002)
Dumonteil, E., Bahran, R., Cutler, T., Dechenaux, B., Grove, T., Hutchinson, J., McKenzie, G., McSpaden, A., Monange, W., Nelson, M., Thompson, N., Zoia, A.: Patchy nuclear chain reactions. Commun. Phys. 4(1), 1–10 (2021)
Dumonteil, E., Malvagi, F., Zoia, A., Mazzolo, A., Artusio, D., Dieudonné, C., De Mulatier, C.: Particle clustering in Monte Carlo criticality simulations. Ann. Nucl. Energy 63, 612–618 (2014)
Dumonteil, E., Mazzolo, A.: Residence times of branching diffusion processes. Phys. Rev. E 94, 012131 (2016)
Durham, S.D.: Limit theorems for a general critical branching process. J. Appl. Probab. 8(1), 1–16 (1971)
Dynkin, E.B.: Branching particle systems and superprocesses. Ann. Probab. 19(3), 1157–1194 (1991)
Dynkin, E.B.: Diffusions, Superdiffusions, and Partial Differential Equations, vol. 50. American Mathematical Society, Providence, RI (2002)
Eckhoff, M., Kyprianou, A.E., Winkel, M.: Spines, skeletons and the strong law of large numbers for superdiffusions. Ann. Probab. 43(5), 2545–2610 (2015)
Engländer, J.: Spatial Branching in Random Environments and with Interaction. Advanced Series on Statistical Science & Applied Probability, vol. 20. World Scientific Publishing, Hackensack, NJ (2015)
Engländer, J., Harris, S.C., Kyprianou, A.E.: Strong law of large numbers for branching diffusions. Ann. Inst. Henri Poincaré Probab. Stat. 46(1), 279–298 (2010)
Etheridge, A.M.: An Introduction to Superprocesses. University Lecture Series, vol. 20. American Mathematical Society, Providence, RI (2000)
Fleischman, J.: Limiting distributions for branching random fields. Trans. Am. Math. Soc. 239, 353–389 (1978)
Harris, S.C., Horton, E., Kyprianou, A.E.: Stochastic methods for the neutron transport equation II: almost sure growth. Ann. Appl. Probab. 30(6), 2815–2845 (2020)
Harris, S.C., Horton, E., Kyprianou, A.E., Wang, M.: Yaglom limit for critical neutron transport (2021)
Harris, S.C., Roberts, M.I.: The many-to-few lemma and multiple spines. Ann. Inst. Henri Poincaré Probab. Stat. 53(1), 226–242 (2017)
Horton, E., Kyprianou, A.E., Villemonais, D.: Stochastic methods for the neutron transport equation I: linear semigroup asymptotics. Ann. Appl. Probab. 30(6), 2573–2612 (2020)
Iscoe, I.: On the supports of measurevalued critical branching Brownian motion. Ann. Probab. 16(1), 200–221 (1988)
Klenke, A.: Multiple scale analysis of clusters in spatial branching models. Ann. Probab. 25(4), 1670–1711 (1997)
Koralov, L., Molchanov, S.: Structure of population inside propagating front. J. Math. Sci. 189(4), 637–658 (2013)
Li, Z.: Measure-Valued Branching Markov Processes. Springer-Verlag, Berlin (2011)
Palau, S., Yang, T.: Law of large numbers for supercritical superprocesses with nonlocal branching. Stoch. Process. Appl. 130(2), 1074–1102 (2020)
Powell, E.: An invariance principle for branching diffusions in bounded domains. Probab. Theory Relat. Fields 173(3–4), 999–1062 (2019)
Ren, Y.-X., Song, R., Sun, Z.: Spine decompositions and limit theorems for a class of critical superprocesses. Acta Appl. Math. 165, 91–131 (2020)
Sevast’janov, B.A.: The extinction conditions for branching processes with diffusion. Teor. Verojatnost. i Primenen. 6, 276–286 (1961)
Sevast’yanov, B.A.: Branching stochastic processes for particles diffusing in a bounded domain with absorbing boundaries. Teor. Veroyatnost. i Primenen. 3, 121–136 (1958)
Skorohod, A.V.: Branching diffusion processes. Teor. Verojatnost. i Primenen. 9, 492–497 (1964)
Sutton, T.M., Mittal, A.: Neutron clustering in Monte Carlo iterated-source calculations. Nucl. Eng. Technol. 49(6), 1211–1218 (2017)
Watanabe, S.: A limit theorem of branching processes and continuous state branching processes. J. Math. Kyoto Univ. 8, 141–167 (1968)
Zel’dovich, Ya.B., Molchanov, S.A., Ruzmaĭkin, A.A., Sokolov, D.D.: Intermittency in random media. Soviet Phys. Uspekhi 30(5), 353 (1987)
I. Gonzalez: Research supported by CONACYT scholarship nr 472301.
Appendix
We first state two fundamental combinatorial results for higher derivatives, the classical Faà di Bruno and Leibniz rules. In both cases, for a sufficiently smooth function g on \({\mathbb {R}}\), we will denote by \(g^{(k)}\) its kth derivative.
Lemma A.1
(Faà di Bruno rule) Let f and g be k-times continuously differentiable functions on \({\mathbb {R}}\). Then the kth derivative of the composition \(f\circ g\) is given by the following formula
where the sum goes over the set \(\left\{ m_1,\ldots ,m_k\right\} _k\) of nonnegative integers such that
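As a concrete sanity check of the rule (our own illustration, not from the paper), take \(f(y)=y^2\) and \(g(x)=x^3+x\), so that \((f\circ g)(x)=x^6+2x^4+x^2\), whose third derivative at \(x=1\) equals \(120+48=168\). The sketch below evaluates the right-hand side of Faà di Bruno's formula from tables of derivative values; the helper name `faa_di_bruno` is hypothetical. Note that each coefficient \(k!/\prod _j m_j!(j!)^{m_j}\) counts set partitions and is therefore an integer, so exact integer division is safe.

```python
import math

def faa_di_bruno(f_derivs, g_derivs, k):
    """Evaluate the kth derivative of f(g(x)) at a point via Faa di Bruno.

    f_derivs[n] holds f^{(n)} evaluated at g(x); g_derivs[j] holds g^{(j)}
    evaluated at x (index 0 is the function value itself).
    """
    total = 0

    def rec(j, remaining, ms):
        nonlocal total
        if j > k:
            if remaining == 0:
                denom, prod = 1, 1
                for idx, m in enumerate(ms, start=1):
                    denom *= math.factorial(m) * math.factorial(idx) ** m
                    prod *= g_derivs[idx] ** m
                total += (math.factorial(k) // denom) * f_derivs[sum(ms)] * prod
            return
        for m in range(remaining // j + 1):
            rec(j + 1, remaining - j * m, ms + [m])

    rec(1, k, [])
    return total

# f(y) = y^2, g(x) = x^3 + x, evaluated at x = 1 (so g(1) = 2):
f_derivs = [4, 4, 2, 0]   # f(2), f'(2), f''(2), f'''(2)
g_derivs = [2, 4, 6, 6]   # g(1), g'(1), g''(1), g'''(1)
print(faa_di_bruno(f_derivs, g_derivs, 3))  # -> 168
```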
Lemma A.2
(Leibniz rule) Suppose \(g_1, \ldots , g_m\) are k-times continuously differentiable functions on \({\mathbb {R}}\), for \(k \ge 1\). Then
where the sum is taken over all nonnegative integers \(k_1,\ldots , k_m\) such that \(\sum _{i = 1}^mk_i = k\).
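The general Leibniz rule can likewise be sanity-checked numerically (our own illustration; the helper name `leibniz` is hypothetical). For \(g_1(x)=x^2\), \(g_2(x)=x^3\), \(g_3(x)=x\), the product is \(x^6\), whose fourth derivative at \(x=1\) is \(6\cdot 5\cdot 4\cdot 3=360\).

```python
import math
from itertools import product

def leibniz(deriv_tables, k):
    """kth derivative of a product g_1 * ... * g_m at a point.

    deriv_tables[i][j] holds g_i^{(j)} evaluated at the point
    (index 0 is the function value).
    """
    m = len(deriv_tables)
    total = 0
    for ks in product(range(k + 1), repeat=m):
        if sum(ks) != k:
            continue
        coef = math.factorial(k)   # multinomial coefficient k!/(k_1!...k_m!)
        term = 1
        for table, ki in zip(deriv_tables, ks):
            coef //= math.factorial(ki)
            term *= table[ki]
        total += coef * term
    return total

# g1 = x^2, g2 = x^3, g3 = x, all evaluated at x = 1; product is x^6
g1 = [1, 2, 2, 0, 0]   # x^2 and its derivatives at x = 1
g2 = [1, 3, 6, 6, 0]   # x^3 and its derivatives at x = 1
g3 = [1, 1, 0, 0, 0]   # x and its derivatives at x = 1
print(leibniz([g1, g2, g3], 4))  # -> 360
```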
The third result of the appendix is a general ergodic limit theorem which is key to the moment convergence. We will only state the result in the critical case, since we will only apply it in the proof of Theorem 1; however, the result can easily be extended to the noncritical case by including the normalisation \(\mathrm{e}^{\lambda ut}\) in the first integral.
In order to state it, let us introduce a class of functions \({\mathcal {C}}\) on \(B_1^+(E)\times E\times [0,1]\times [0,\infty )\) such that F belongs to class \({\mathcal {C}}\) if
exists,
and
Theorem A.1
Assume (H1) holds, \(\lambda = 0\) and that \(F\in {\mathcal {C}}\). Define
Then
Proof
We will show that
since then
First note that,
Due to assumption (A.2), for t sufficiently large, the first term on the right-hand side above can be controlled by \(\varphi ^{-1}(x)\texttt {T}_{ut}[\varepsilon ](x)\). Combining this with the above inequality yields
We note that (A.1) and the first (resp. second) statement of (14) in (H1), together with dominated convergence, immediately imply that the first (resp. second) statement in (A.3) holds. \(\square \)
Gonzalez, I., Horton, E., Kyprianou, A.E.: Asymptotic moments of spatial branching processes. Probab. Theory Relat. Fields 184, 805–858 (2022). https://doi.org/10.1007/s00440-022-01131-2