Correction to: Probability Theory and Related Fields (2022) 184:805–858 https://doi.org/10.1007/s00440-022-01131-2

1 Summary of corrections

In this short note, we remark that erroneous limits were used in the non-critical cases for the moment convergence and occupation moments in [2]. The source of the error was a misuse of the sense in which uniformity holds in various convergence arguments. This affects the nature of the constants that appear in the limits. The results for the critical cases, which were the principal results, are correct up to one minor adjustment in the statement: \(\sup _{t\ge 0}\Delta ^{(\ell )}_t <\infty \) should read \(\sup _{t\ge c}\Delta ^{(\ell )}_t <\infty \), for any \(c>0\).

In what follows we assume the notation and hypotheses of [2] and provide the correct statements and brief corrections of the proofs for the non-critical setting. The theorem numbers correspond to the same theorem numbers in [2].

Theorem 2

(Supercritical, \(\lambda > 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda > 0\). Redefine

$$\begin{aligned} \Delta _t^{(\ell )} = \sup _{x\in E, f\in B^+_1(E)} \left| \textrm{e}^{-\ell \lambda t} {\varphi (x)^{-1}}\texttt{T}^{(\ell )}_t[f](x) - \,\ell !\langle f, \tilde{\varphi }\rangle ^\ell L_\ell (x)\right| , \end{aligned}$$

where \({L_1 (x)= 1}\) and we define iteratively for \(k \ge 2\)

$$\begin{aligned} L_k(x) =\int _0^\infty \textrm{e}^{-\lambda ks}{\varphi (x)^{-1}}\texttt{T}_s\left[ {\beta }\mathcal {E}_{\cdot }\left[ \sum _{[k_1, \dots , k_N]_k^{2}}\prod _{\begin{array}{c} j = 1 \\ j: k_j > 0 \end{array}}^N {\varphi (x_j)} L_{k_j}(x_j)\right] \right] (x)\textrm{d}s, \end{aligned}$$

where \([k_1, \dots , k_N]_k^2\) is the set of all non-negative N-tuples \((k_1, \dots , k_N)\) such that \(\sum _{i = 1}^N k_i = k\) and at least two of the \(k_i\) are strictly positive, if \((X, \mathbb {P})\) is a branching Markov process. Alternatively, if \((X, \mathbb {P})\) is a superprocess, set \({L_1(x) =1}\) and \(\mathfrak {I}_2(x)=\tfrac{1}{2}{\varphi ^{-1}(x)}\int _{0}^{\infty }\textrm{e}^{-2\lambda s}\texttt{T}_s\left[ \mathbb {V}\left[ \varphi \right] \right] (x)\textrm{d}s\), and define iteratively for \(k\ge 2\)

$$\begin{aligned} L_k(x)=\mathfrak {R} _k(x)+\mathfrak {I}_k(x), \end{aligned}$$

where

$$\begin{aligned} \mathfrak {R}_k(x)&=\sum _{\left\{ m_1,\ldots ,m_{k-1}\right\} _k}\frac{1}{m_1!\ldots m_{k-1}!}(m_1+\cdots +m_{k-1}-1)!{\varphi (x)^{m_1+\cdots +m_{k-1}-1}}\nonumber \\ {}&\quad \prod _{j=1}^{k-1}(-L_j(x))^{m_j} \end{aligned}$$
(1)

and

$$\begin{aligned}&\mathfrak {I}_k(x)=\int _{0}^{\infty }\textrm{e}^{-\lambda k s}{\varphi ^{-1}(x)}\texttt{T}_s \\&\quad \left[ \sum _{\left\{ m_1,\ldots , m_{k-1}\right\} _k}\frac{1}{m_1!\ldots m_{k-1}!} \left( \psi ^{(m_1+\cdots +m_{k-1})}(\cdot ,0+)(-\varphi (\cdot ))^{m_1}\prod _{j=2}^{k-1} (-{\varphi (\cdot )}\mathfrak {I}_j(\cdot ))^{m_j}\right. \right. \\&\quad +\left. \left. \beta (\cdot )\int _{M(E)^{\circ }}\left\langle \varphi ,\nu \right\rangle ^{m_1}\prod _{j=2}^{k-1}\left\langle {\varphi }\mathfrak {I}_j,\nu \right\rangle ^{m_j}\Gamma (\cdot ,\textrm{d}\nu )\right) \right] (x)\textrm{d}s. \end{aligned}$$

Here the sums run over the set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\) of non-negative integers such that \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\). Then, for all \(\ell \le k\)

$$\begin{aligned} \sup _{t\ge 0} \Delta _t^{(\ell )}<\infty \quad \text { and }\quad \lim _{t\rightarrow \infty } \Delta _t^{(\ell )} =0. \end{aligned}$$
(2)
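For concreteness, the index set \([k_1, \dots , k_N]_k^2\) can be enumerated directly from its definition; the following is a minimal Python sketch (the helper name `tuples_k2` is illustrative, not from [2]):

```python
from itertools import product

def tuples_k2(k, N):
    """Enumerate [k_1, ..., k_N]_k^2: non-negative N-tuples summing to k
    with at least two strictly positive entries."""
    return [t for t in product(range(k + 1), repeat=N)
            if sum(t) == k and sum(1 for x in t if x > 0) >= 2]

# For k = 3, N = 2 the set is {(1, 2), (2, 1)}: the tuples (3, 0) and
# (0, 3) are excluded since only one entry is positive.
print(tuples_k2(3, 2))
```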

Theorem 3

(Subcritical, \(\lambda < 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda < 0\). Redefine

$$\begin{aligned} \Delta _t^{(k)} = \sup _{x\in E, f\in B^+_1(E)}\left| \varphi (x)^{-1}\textrm{e}^{-\lambda t}\texttt{T}^{(k)}_t[f](x)- L_k \right| , \end{aligned}$$

where we define iteratively \(L_1 = \left\langle f,\tilde{\varphi }\right\rangle \) and for \(k \ge 2\),

$$\begin{aligned} L_k&= \langle f^k, \tilde{\varphi }\rangle + \int _0^\infty \textrm{e}^{-\lambda s} \left\langle {\beta } \mathcal {E}_{\cdot }\left[ \sum _{[k_1, \dots , k_N]^2_k}{k \atopwithdelims ()k_1, \dots , k_N}\prod _{\begin{array}{c} j = 1 \\ j : k_j > 0 \end{array}}^N \texttt{T}_s^{(k_j)}[f](x_j)\right] , \tilde{\varphi }\right\rangle \textrm{d}s, \end{aligned}$$

if \((X, \mathbb {P})\) is a branching Markov process. Alternatively, if \((X, \mathbb {P})\) is a superprocess, we still have \(L_1 = \left\langle f,\tilde{\varphi }\right\rangle \) but

$$\begin{aligned} L_k= \int _{0}^{\infty }\textrm{e}^{-\lambda s}\left\langle \mathbb {V}_k[f](\cdot ,s),\tilde{\varphi }\right\rangle \textrm{d}s \end{aligned}$$

for \(k\ge 2\), with

$$\begin{aligned}&\mathbb {V}_k[f](x,s) = \sum _{\left\{ m_1,\ldots ,m_{k-1}\right\} _k}\frac{k!}{m_1!\ldots m_{k-1}!}\\&\quad \times \left[ \psi ^{(m_1+\cdots +m_{k-1})}(x,0+)(-\texttt{T}_{s}\left[ f\right] (x))^{m_1}\right. \\&\qquad \quad \left. \prod _{j=2}^{k-1}\left( \frac{1}{j!}\left( -\texttt{T}_s^{(j)}\left[ f\right] (x)+(-1)^{j+1}R_j(x,s)\right) \right) ^{m_j}\right. \\&\quad \left. +\beta (x)\int _{M(E)^{\circ }}\left\langle \texttt{T}_s\left[ f\right] ,\nu \right\rangle ^{m_1}\prod _{j=2}^{k-1}\left( \frac{1}{j!}\left\langle \texttt{T}_s^{(j)}\left[ f\right] +(-1)^{j}R_j(\cdot ,s),\nu \right\rangle \right) ^{m_j}\Gamma (x,\textrm{d}\nu )\right] . \end{aligned}$$

Here the sums run over the set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\) of non-negative integers such that \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\). Then, for all \(\ell \le k\)

$$\begin{aligned} \sup _{t\ge 0} \Delta _t^{(\ell )}<\infty \quad \text { and }\quad \lim _{t\rightarrow \infty } \Delta _t^{(\ell )} =0. \end{aligned}$$
(3)
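The set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\), of the kind that appears in Faà di Bruno-type formulae, can likewise be enumerated directly; a minimal Python sketch (the helper name `faa_set` is illustrative, not from [2]):

```python
from itertools import product

def faa_set(k):
    """Enumerate {m_1, ..., m_{k-1}}_k: non-negative tuples
    (m_1, ..., m_{k-1}) with m_1 + 2 m_2 + ... + (k-1) m_{k-1} = k."""
    return [m for m in product(range(k + 1), repeat=k - 1)
            if sum((j + 1) * m[j] for j in range(k - 1)) == k]

# For k = 3 the constraint m_1 + 2 m_2 = 3 gives (1, 1) and (3, 0).
print(faa_set(3))
```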

Theorem 5

(Supercritical, \(\lambda > 0\)) Let \((X, \mathbb {P})\) be either a branching Markov process or a superprocess. Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda > 0\). Redefine

$$\begin{aligned} \Delta _t^{(\ell )} = \sup _{x\in E, g\in B^+_1(E)} \left| \textrm{e}^{-\ell \lambda t}{\varphi (x)^{-1}}\texttt{M}^{(\ell )}_t[g](x) - \,\ell !\langle g, \tilde{\varphi }\rangle ^\ell L_\ell (x)\right| , \end{aligned}$$

where \(L_k(x)\) was defined in Theorem 2 (both for branching Markov processes and superprocesses), except that \({L_1(x) = 1/\lambda }\). Then, for all \(\ell \le k\)

$$\begin{aligned} \sup _{t\ge 0} \Delta _t^{(\ell )}<\infty \quad \text { and }\quad \lim _{t\rightarrow \infty } \Delta _t^{(\ell )} =0. \end{aligned}$$
(4)

Theorem 6

(Subcritical, \(\lambda < 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda < 0\). Redefine

$$\begin{aligned} \Delta _t^{(\ell )} = \sup _{x\in E, g\in B^+_1(E)}\left| {\varphi (x)^{-1}}\texttt{M}^{(\ell )}_t[g](x) - L_\ell (x)\right| , \end{aligned}$$

where \(L_1(x) = \int _0^\infty {\varphi (x)^{-1}} \texttt{T}_s[g](x)\textrm{d}s\) and for \(k\ge 2\), the \(L_k(x)\) are defined recursively via

$$\begin{aligned} L_k (x)&=\int _0^\infty {\varphi (x)^{-1}}\texttt{T}_s\left[ {{\beta }} \mathcal {E}_\cdot \left[ \sum _{[k_1, \dots , k_N]_k^2} {k \atopwithdelims ()k_1, \dots , k_N}\prod _{\begin{array}{c} j = 1 \\ j : k_j > 0 \end{array}}^N {\varphi (x_j)}L_{k_j}(x_j)\right] \right] (x)\,\textrm{d}s\\&\quad -k \int _0^\infty \varphi (x)^{-1}\texttt{T}_s \Big [g{\varphi } L_{k-1}\Big ](x)\,\textrm{d}s, \end{aligned}$$

if X is a branching Markov process. Alternatively, if X is a superprocess,

$$\begin{aligned} L_k (x)&= (-1)^{k+1}\mathbb {R}_k(x)+ (-1)^{k} \int _{0}^{\infty }{\varphi (x)^{-1}}\texttt{T}_s\left[ \mathbb {U}_k\right] (x)\textrm{d}s\\&\quad -k\int _{0}^{\infty }{\varphi (x)^{-1}}\texttt{T}_s\left[ g{\varphi }\left( L_{k-1}+(-1)^{k-1}\mathbb {R}_{k-1}\right) \right] (x)\textrm{d}s, \end{aligned}$$

where

$$\begin{aligned} \mathbb {R}_k(x)&={\varphi (x)^{-1}}\sum _{\left\{ m_1,\ldots ,m_{k-1}\right\} _k}\frac{k!}{m_1!\ldots m_{k-1}!}(-1)^{m_1+\cdots +m_{k-1}-1}\\&\quad (m_1+\cdots +m_{k-1}-1)! \prod _{j=1}^{k-1}\left( \frac{(-1)^j {\varphi (x)}L_j(x)}{j!}\right) ^{m_j}, \end{aligned}$$

and

$$\begin{aligned}&\mathbb {U}_k(x)\\&\quad =\sum _{\left\{ m_1,\ldots ,m_{k-1}\right\} _k}\frac{k!}{m_1!\ldots m_{k-1}!}\\&\quad \left[ \psi ^{(m_1+\cdots +m_{k-1})}(x,0+)\left( {\varphi (x) L_1(x)}\right) ^{m_1}\prod _{j=2}^{k-1}\left( \frac{(-1)^{j+1}{\varphi (x)}L_j(x)-{\varphi (x)}\mathbb {R}_j(x)}{j!}\right) ^{m_j}\right. \\&\quad \left. +\beta (x)\int _{M(E)^{\circ }}(-1)^{m_1\!+\!\cdots \!+\!m_{k-1}}\left\langle {\varphi }L_1,\nu \right\rangle ^{m_1}\!\prod _{j=2}^{k-1}\left( \frac{\left\langle (-1)^{j+1}{\varphi }L_j\!-\!{\varphi }\mathbb {R}_j,\nu \right\rangle }{j!}\right) ^{m_j}\!\Gamma (x,\textrm{d}\nu )\right] \!. \end{aligned}$$

Then, for all \(\ell \le k\)

$$\begin{aligned} \sup _{t\ge 0} \Delta _t^{(\ell )}<\infty \quad \text { and }\quad \lim _{t\rightarrow \infty } \Delta _t^{(\ell )} =0. \end{aligned}$$
(5)

2 Summary of proofs

We sketch the proofs of Theorems 2 and 3 to give a sense of the corrected reasoning. The proofs of Theorems 5 and 6 are omitted, since the corrections use the same logic. The reader may also consult [1] for more details.

Proof of Theorem 2

Suppose for induction that the result is true for all \(\ell \)-th integer moments with \(1\le \ell \le k -1\). From the evolution equation in Proposition 1 of [2], noting that \(\textstyle \sum _{j=1}^N k_j =k\), when the limit exists, we have

$$\begin{aligned}&\lim _{t\rightarrow \infty } \textrm{e}^{-{\lambda } k t} \int _0^t {\varphi (x)^{-1}}{\texttt{T}}_s\left[ {\beta }\mathcal {E}_\cdot \left[ \sum _{[k_1, \dots , k_N]_k^2}{k \atopwithdelims ()k_1, \dots , k_N}\prod _{j = 1}^N {\texttt{T}}^{(k_j)}_{t-s}[f](x_j) \right] \right] (x) \textrm{d}s \nonumber \\&\quad =\lim _{t\rightarrow \infty } t\int _0^1 \textrm{e}^{-{\lambda } (k-1) ut} \textrm{e}^{-{\lambda } ut} {\varphi (x)^{-1}} {\texttt{T}}_{ut}\Big [ H[f](x, u, t) \Big ](x) \textrm{d}u, \end{aligned}$$
(6)

where

$$\begin{aligned} H[f](x, u, t):= {\beta }(x) \mathcal {E}_x \left[ \sum _{[k_1, \dots , k_N]_k^2}{k \atopwithdelims ()k_1, \dots , k_N}\prod _{j = 1}^N \textrm{e}^{-{\lambda } k_j t(1-u)}{\texttt{T}}_{t(1-u)}^{(k_j)}[f](x_j)\right] . \end{aligned}$$

It is easy to see that, pointwise in \(x\in E\) and \(u\in [0,1]\), using the induction hypothesis and (H2),

$$\begin{aligned} H[f](x)&: = \lim _{t\rightarrow \infty } H[f](x, u, t) =k!\langle f, \tilde{\varphi }\rangle ^k{\beta }(x) \mathcal {E}_{x}\\ {}&\quad \left[ \sum _{[k_1, \dots , k_N]_k^{2}} \prod _{\begin{array}{c} j = 1 \\ j : k_j > 0 \end{array}}^N { \varphi (x_j)}L_{k_j}(x_j)\right] , \end{aligned}$$

where we have again used the fact that the \(k_j\) sum to \(k\) to extract the \(\langle f, \tilde{\varphi }\rangle ^k\) term.

Using the expressions for \(H[f](x, u, t)\) and \(H[f](x)\) together with the definition of \(L_k(x)\), we have, for any \(\epsilon >0\), as \(t\rightarrow \infty \),

$$\begin{aligned}&\sup _{x\in E, f\in B^+_1(E)} |\textrm{e}^{-k {\lambda } t}{\varphi ^{-1}}{\texttt{T}}^{(k)}_t[f] - \,k!\langle f, \tilde{\varphi }\rangle ^k L_k|\nonumber \\&\qquad \le t\int _0^1 \textrm{e}^{-{\lambda } (k-1) ut} \sup _{x\in E, f\in B^+_1(E)}\left| \textrm{e}^{-{\lambda } ut} {\varphi ^{-1}}{\texttt{T}}_{ut}\left[ H[f](\cdot , u, t)- H[f]\right] \right| \textrm{d}u +\epsilon , \end{aligned}$$
(7)

where \(\epsilon \) is an upper estimate for

$$\begin{aligned}&\sup _{x\in E, f\in B_1^+(E)} k!\langle f, \tilde{\varphi }\rangle ^k\int _t^\infty \textrm{e}^{-{\lambda } ks}{\varphi (x)^{-1}}{\texttt{T}}_s\left[ {\beta }\mathcal {E}_{\cdot }\right. \nonumber \\&\qquad \quad \left. \left[ \sum _{[k_1, \dots , k_N]_k^{2}}\prod _{\begin{array}{c} j = 1 \\ j: k_j > 0 \end{array}}^N {\varphi (x_j)}L_{k_j}(x_j)\right] \right] (x)\textrm{d}s. \end{aligned}$$
(8)

Note, convergence to zero as \(t\rightarrow \infty \) in (8) follows thanks to the induction hypothesis (ensuring that \(L_{k_j}(x)\) is uniformly bounded), (H2) and the uniform boundedness of \({\beta }\).

The induction hypothesis, (H2) and dominated convergence again ensure that

$$\begin{aligned} \lim _{t\rightarrow \infty }\sup _{x\in E, f\in B^+_1(E), u\in [0,\varepsilon ]} \left| H[f](\cdot , u, t) -H[f] \right| = 0. \end{aligned}$$
(9)

As such, in (7), we can split the integral on the right-hand side over \([0,\varepsilon ]\) and \((\varepsilon ,1]\), for \(\varepsilon \in (0,1)\). Using (9), we can ensure that, for any arbitrarily small \(\varepsilon '>0\), making use of the boundedness in (H1), there is a global constant \(C>0\) such that, for all t sufficiently large,

$$\begin{aligned}&t\int _0^{\varepsilon } \textrm{e}^{-{\lambda } (k-1) ut} \sup _{x\in E, f\in B^+_1(E)}\left| \textrm{e}^{-{\lambda } ut}{\varphi ^{-1}} {\texttt{T}}_{ut}\left[ H[f](\cdot , u, t)- H[f]\right] \right| \textrm{d}u\nonumber \\&\quad \le \varepsilon ' C t\int _0^{\varepsilon } \textrm{e}^{-{\lambda } (k-1) ut} \textrm{d}u\nonumber \\&\quad =\frac{\varepsilon ' C}{{\lambda }(k-1)}(1-\textrm{e}^{-{\lambda } (k-1) \varepsilon t}). \end{aligned}$$
(10)
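The elementary integral identity behind (10), namely \(t\int_0^{\varepsilon}\textrm{e}^{-\lambda(k-1)ut}\,\textrm{d}u = (1-\textrm{e}^{-\lambda(k-1)\varepsilon t})/\lambda(k-1)\), can be sanity-checked numerically; a small Python sketch with arbitrary illustrative parameter values (not taken from [2]):

```python
import math

# Check t * int_0^eps exp(-lam*(k-1)*u*t) du
#   = (1 - exp(-lam*(k-1)*eps*t)) / (lam*(k-1)),
# for sample values with lam > 0 (supercritical case).
lam, k, eps, t = 0.7, 3, 0.2, 5.0
a = lam * (k - 1)

n = 100_000  # midpoint rule on [0, eps]
h = eps / n
numeric = t * sum(math.exp(-a * (i + 0.5) * h * t) * h for i in range(n))
closed = (1 - math.exp(-a * eps * t)) / a
print(abs(numeric - closed) < 1e-6)
```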

On the other hand, we can also control the integral over \((\varepsilon ,1]\), again appealing to (H1) and (H2) to ensure that

$$\begin{aligned} \sup _{x\in E, f\in B^+_1(E), u\in (\varepsilon ,1]}\left| \textrm{e}^{-{\lambda } ut} {\varphi ^{-1}}{\texttt{T}}_{ut}\left[ H[f](\cdot , u, t)- H[f]\right] \right| <\infty . \end{aligned}$$

We can again work with a (different) global constant \(C>0\) such that

$$\begin{aligned}&t\int _{\varepsilon }^1 \textrm{e}^{-{\lambda } (k-1) ut} \sup _{x\in E, f\in B^+_1(E)}\left| \textrm{e}^{-{\lambda } ut}{\varphi ^{-1}}{\texttt{T}}_{ut}\left[ H[f](\cdot , u, t)- H[f]\right] \right| \textrm{d}u \nonumber \\&\quad \le C t\int _{\varepsilon }^1 \textrm{e}^{-{\lambda } (k-1) ut} \textrm{d}u\nonumber \\&\quad =\frac{ C}{{\lambda }(k-1)}(\textrm{e}^{-{\lambda } (k-1) \varepsilon t} - \textrm{e}^{-{\lambda } (k-1) t}). \end{aligned}$$
(11)

In conclusion, using (10) and (11), we can take limits as \(t\rightarrow \infty \) in (7) and the statement of the theorem follows for branching Markov processes.

The proof in the superprocess setting starts the same way as in [2] up to equation (90) therein, noting that the term \(R_k(x,t)\) in the moment evolution equation

$$\begin{aligned} \texttt{T}_t^{(k)}\left[ f\right] (x)=(-1)^{k+1}R_k(x,t)+(-1)^{k}\int _{0}^{t}\texttt{T}_s\left[ U_k(\cdot ,t-s)\right] (x)\textrm{d}s, \end{aligned}$$
(12)

from equation (77) of [2] can be compensated in the limit using \(\mathfrak {R}_k(x)\) defined in (1) above. The remainder of the proof deals with the compensation of the integral term in (12).

We have

$$\begin{aligned}&\lim _{t\rightarrow \infty } \textrm{e}^{-{\lambda } k t} (-1)^k \int _0^t{\varphi (x)^{-1}} {\texttt{T}}_s\left[ U_k(\cdot ,t-s) \right] (x) \textrm{d}s \\&\quad =\lim _{t\rightarrow \infty } t\int _0^1 \textrm{e}^{-{\lambda } (k-1) ut} \textrm{e}^{-{\lambda } ut} {\varphi (x)^{-1}} {\texttt{T}}_{ut}\Big [ H[f](x, u, t) \Big ](x) \textrm{d}u, \end{aligned}$$

where \(H\left[ f\right] (x,u,t)\) is given by

$$\begin{aligned} H\left[ f\right] (x,u,t)=(-1)^{k}\textrm{e}^{-\lambda k t(1-u)}U_k(x,t(1-u)), \end{aligned}$$

that is,

$$\begin{aligned} H\left[ f\right] (x,u,t)&:=\sum _{\left\{ m_1,\ldots ,m_{k-1}\right\} _k} \frac{k!}{m_1!\ldots m_{k-1}!}\\ {}&\quad \left[ \psi ^{(m_1+\cdots +m_{k-1})}(x,0+)(-\textrm{e}^{-\lambda t(1-u)}\texttt{T}_{t(1-u)}\left[ f\right] (x))^{m_1}\right. \\&\quad \prod _{j=2}^{k-1}\left( -\frac{\textrm{e}^{-\lambda j t(1-u)}}{j!}\left( \texttt{T}_{t(1-u)}^{(j)}\left[ f\right] (x)+(-1)^{j} R_j(x,t(1-u))\right) \right) ^{m_j}\\&\quad + \beta (x)\int _{M(E)^{\circ }}\left\langle \textrm{e}^{-\lambda t (1-u)}\texttt{T}_{t(1-u)}\left[ f\right] ,\nu \right\rangle ^{m_1}\\&\quad \left. \!\prod _{j=2}^{k-1}\left\langle \frac{\textrm{e}^{-\lambda jt(1-u)}}{j!}(\texttt{T}_{t(1-u)}^{(j)}\left[ f\right] \!+\!(-1)^{j}R_j(\cdot ,t(1\!-\!u))),\nu \right\rangle ^{m_j}\Gamma (x,\textrm{d}\nu )\right] \!. \end{aligned}$$

The induction hypothesis and (H1) allow us to get

$$\begin{aligned} H\left[ f\right] (x)&:=\lim _{t\rightarrow \infty } H\left[ f\right] (x,u,t)\\&= \sum _{\left\{ m_1,\ldots , m_{k-1}\right\} _k}\frac{1}{m_1!\ldots m_{k-1}!}\\&\quad \left( \psi ^{(m_1+\cdots +m_{k-1})}(x,0+)(-\varphi (x))^{m_1}\prod _{j=2}^{k-1}(-{\varphi (x)}\mathfrak {I}_j(x))^{m_j}\right. \\&\qquad \left. +\beta (x)\int _{M(E)^{\circ }}\left\langle \varphi ,\nu \right\rangle ^{m_1}\prod _{j=2}^{k-1}\left\langle {\varphi }\mathfrak {I}_j,\nu \right\rangle ^{m_j}\Gamma (x,\textrm{d}\nu )\right) . \end{aligned}$$

Using the same arguments as above from (10) onwards, we get the desired result. \(\square \)

Proof of Theorem 3

First note that since we only compensate by \(\textrm{e}^{-{\lambda } t}\), the term \({\texttt{T}}_{t}[f^k](x)\) that appears in equation (41) of [2] does not vanish after the normalisation. Due to assumption (H1), we have

$$\begin{aligned} \lim _{t \rightarrow \infty }\varphi ^{-1}(x)\textrm{e}^{-{\lambda } t}{\texttt{T}}_{t}[f^k](x) = \langle f^k, \tilde{\varphi }\rangle . \end{aligned}$$

Next we turn to the integral term in (41) of [2]. Define \([k_1, \dots , k_N]_k^{(n)}\), for \(2 \le n \le k\), to be the set of tuples \((k_1, \dots , k_N)\) with exactly n positive terms and whose sum is equal to k. Similar calculations to those given above yield

$$\begin{aligned}&\frac{\textrm{e}^{-{\lambda } t}}{\varphi (x)} \int _0^t {\texttt{T}}_s\left[ {\beta }\mathcal {E}_x\left[ \sum _{[k_1, \dots , k_N]_k^2}{k \atopwithdelims ()k_1, \dots , k_N}\prod _{j = 1}^N {\texttt{T}}^{(k_j)}_{t-s}[f](x_j) \right] \right] (x) \textrm{d}s \nonumber \\&\quad = t\int _0^1\sum _{n = 2}^k \textrm{e}^{{\lambda } (n-1)u t}\frac{\textrm{e}^{-{\lambda } (1-u)t}}{\varphi (x)}\nonumber \\&\qquad \times {\texttt{T}}_{(1-u)t}\left[ {\beta }\mathcal {E}_\cdot \left[ \sum _{[k_1, \dots , k_N]_k^{(n)}}{k \atopwithdelims ()k_1, \dots , k_N} \prod _{j = 1}^N \textrm{e}^{-{\lambda }ut} {\texttt{T}}^{(k_j)}_{ut}[f](x_j) \right] \right] (x) \textrm{d}u. \end{aligned}$$
(13)

Now suppose for induction that the result holds for all \(\ell \)-th integer moments with \(1\le \ell \le k-1\). Roughly speaking the argument can be completed by noting that the integral in the definition of \(L_k\) can be written as

$$\begin{aligned} \int _0^\infty \sum _{n = 2}^k \textrm{e}^{{\lambda } (n-1)s} \left\langle {\beta }\mathcal {E}_\cdot \left[ \sum _{[k_1, \dots , k_N]_k^{(n)}}{k \atopwithdelims ()k_1, \dots , k_N} \prod _{j = 1}^N \textrm{e}^{-{\lambda }s} {\texttt{T}}^{(k_j)}_{s}[f](x_j)\right] , \tilde{\varphi }\right\rangle \,\textrm{d}s, \end{aligned}$$
(14)

which is convergent by appealing to (H2), the fact that \({\beta }\in B^+(E)\) and the induction hypothesis. As a convergent integral, it can be truncated at \(t>0\) and the residual over \((t,\infty )\) can be made arbitrarily small by taking t sufficiently large. Truncating (14) at an arbitrarily large t and changing variables so that it takes a form similar to (13), we can subtract it from (13) to get

$$\begin{aligned} t\int _0^1\sum _{n = 2}^k \textrm{e}^{{\lambda } (n-1)u t}\left( \frac{\textrm{e}^{-{\lambda } (1-u)t}}{\varphi (x)}{\texttt{T}}_{(1-u)t}[H^{(n)}_{ut}] - \langle H^{(n)}_{ut}, \tilde{\varphi }\rangle \right) \textrm{d}u, \end{aligned}$$

where

$$\begin{aligned} H^{(n)}_{ut}(x) = {\beta }(x)\mathcal {E}_x\left[ \sum _{[k_1, \dots , k_N]_k^{(n)}}{k \atopwithdelims ()k_1, \dots , k_N} \prod _{j = 1}^N \textrm{e}^{-{\lambda }ut} {\texttt{T}}^{(k_j)}_{ut}[f](x_j)\right] . \end{aligned}$$

One proceeds to split the integral of the difference over [0, 1] into two integrals, one over \([0,1-\varepsilon ]\) and one over \((1-\varepsilon , 1]\). For the aforesaid integral over \([0,1-\varepsilon ]\), we can control the behaviour of \(\varphi ^{-1}\textrm{e}^{-\lambda (1-u)t}{\texttt{T}}_{(1-u)t}[H^{(n)}_{ut}]-\langle H^{(n)}_{ut}, \tilde{\varphi }\rangle \) as \(t\rightarrow \infty \), making it arbitrarily small, by appealing to uniform dominated control of its argument in square brackets thanks to (H1). The integral over \([0,1-\varepsilon ]\) can thus be bounded, as \(t\rightarrow \infty \), by an arbitrarily small multiple of \((1-\textrm{e}^{{\lambda }(n-1)(1-\varepsilon )t})/|{\lambda }|(n-1)\).

For the integral over \((1-\varepsilon , 1]\), we can appeal to the uniformity in (H1) and (H2) to control the entire term \(\textrm{e}^{-\lambda (1-u)t}{\texttt{T}}_{(1-u)t}[H^{(n)}_{ut}]\) (over time and its argument in the square brackets) by a global constant. Up to a multiplicative constant, the magnitude of the integral is thus of order

$$\begin{aligned} t\int _{1-\varepsilon }^1\textrm{e}^{{\lambda }(n-1)ut}\textrm{d}u = \frac{1}{|{\lambda }|(n-1)}(\textrm{e}^{{\lambda }(n-1)(1-\varepsilon )t} - \textrm{e}^{{\lambda }(n-1)t}), \end{aligned}$$

which tends to zero as \(t\rightarrow \infty \).
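The closed form of this integral, and its decay as \(t\to\infty\) when \(\lambda<0\), can be checked numerically; a small Python sketch with arbitrary illustrative parameter values (not taken from [2]):

```python
import math

# lam < 0 (subcritical): check t * int_{1-eps}^1 exp(lam*(n-1)*u*t) du
#   = (exp(lam*(n-1)*(1-eps)*t) - exp(lam*(n-1)*t)) / (|lam|*(n-1)),
# and that the right-hand side vanishes for large t.
lam, n, eps = -0.5, 3, 0.3
a = lam * (n - 1)

def closed(t):
    return (math.exp(a * (1 - eps) * t) - math.exp(a * t)) / (abs(lam) * (n - 1))

t = 4.0
m = 100_000  # midpoint rule on [1-eps, 1]
h = eps / m
numeric = t * sum(math.exp(a * ((1 - eps) + (i + 0.5) * h) * t) * h
                  for i in range(m))
print(abs(numeric - closed(t)) < 1e-6, closed(100.0) < 1e-10)
```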

In the superprocess setting, as in the original proof, the exponential scaling kills the term \(R_k(x,t)\) in (12). For the integral term in (12), define \(H_{ut}^{(m_1,\ldots ,m_{k-1})}\) by

$$\begin{aligned}&H_{ut}^{(m_1,\ldots ,m_{k-1})}(x)\\&\quad :=\psi ^{(m_1+\cdots +m_{k-1})}(x,0+)(-\textrm{e}^{-\lambda ut}\texttt{T}_{ut}\left[ f\right] (x))^{m_1}\\ {}&\quad \prod _{j=2}^{k-1}\left( -\frac{\textrm{e}^{-\lambda ut}}{j!}\left( \texttt{T}_{ut}^{(j)}\left[ f\right] (x)+(-1)^{j} R_j(x,ut)\right) \right) ^{m_j}\\&\qquad + \beta (x)\int _{M(E)^{\circ }}\left\langle \textrm{e}^{-\lambda ut}\texttt{T}_{ut}\left[ f\right] ,\nu \right\rangle ^{m_1} \\ {}&\quad \prod _{j=2}^{k-1}\left\langle \frac{\textrm{e}^{-\lambda ut}}{j!}(\texttt{T}_{ut}^{(j)}\left[ f\right] +(-1)^{j}R_j(\cdot ,ut)),\nu \right\rangle ^{m_j}\Gamma (x,\textrm{d}\nu ) \end{aligned}$$

and note that \(L_k\) can be written as

$$\begin{aligned} \int _0^\infty \sum _{\left\{ m_1,\ldots ,m_{k-1}\right\} _{k}}\frac{k!}{m_1! \ldots m_{k-1}!} \textrm{e}^{{\lambda } (m_1+\cdots +m_{k-1}-1)s} \left\langle H_s^{(m_1,\ldots ,m_{k-1})}\left[ f\right] , \tilde{\varphi }\right\rangle \,\textrm{d}s, \end{aligned}$$
(15)

which is also convergent by appealing to (H2). The rest of the proof follows arguments similar to those for the particle system. That is, one splits the integral (15) at t and uses the integral over [0, t] to compensate the integral component of (12), changing variables so that it becomes an integral over [0, 1] and handling things as with the particle system. The remaining integral over \((t,\infty )\) can be argued away as arbitrarily small because of the convergence of (15). \(\square \)

3 Concluding remarks

Fundamentally, the corrected results offer the same rates of convergence; only the constants take a different iterative structure. The corrections also remove the discrepancy between branching Markov processes and superprocesses as to whether the limiting constants depend on x. Hence, the original attempt at explaining this discrepancy is neither necessary nor valid.