Correction to: Probability Theory and Related Fields (2022) 184:805–858 https://doi.org/10.1007/s00440-022-01131-2
1 Summary of corrections
In this short note, we point out that erroneous limits were used in the non-critical cases for the moment convergence and occupation moments in [2]. The source of the error was a misuse of the sense in which uniformity held in various convergence arguments. This affects the nature of the constants that appear in the limits. The results for the critical cases, which were the principal results, are correct, up to one minor adjustment in the statement: \(\sup _{t\ge 0}\Delta ^{(\ell )}_t <\infty \) should read \(\sup _{t\ge c}\Delta ^{(\ell )}_t <\infty \), for any \(c>0\).
In what follows we assume the notation and hypotheses of [2] and provide the correct statements and brief corrections of the proofs for the non-critical setting. The theorem numbers correspond to the same theorem numbers in [2].
Theorem 2
(Supercritical, \(\lambda > 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda > 0\). Redefine
where \({L_1 (x)= 1}\) and we define iteratively for \(k \ge 2\)
where \([k_1, \dots , k_N]_k^2\) is the set of all non-negative N-tuples \((k_1, \dots , k_N)\) such that \(\sum _{i = 1}^N k_i = k\) and at least two of the \(k_i\) are strictly positive,\(^{1}\) if \((X, \mathbb {P})\) is a branching Markov process. Alternatively, if \((X, \mathbb {P})\) is a superprocess, define iteratively for \(k\ge 2\), with \({L_1(x) =1}\) and \(\mathfrak {I}_2(x)=\tfrac{1}{2}{\varphi ^{-1}(x)}\int _{0}^{\infty }\textrm{e}^{-2\lambda s}\texttt{T}_s\left[ \mathbb {V}\left[ \varphi \right] \right] (x)\,\textrm{d}s,\)
where
and
Here the sums run over the set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\) of positive integers such that \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\). Then, for all \(\ell \le k\)
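To make the index set in the theorem concrete, here is a minimal sketch that enumerates \([k_1, \dots , k_N]_k^2\) by brute force for small parameters. The helper name `bmp_index_set` is ours, not from [2]; the definition it encodes (non-negative N-tuples summing to k with at least two strictly positive entries) is the one stated above.

```python
from itertools import product

def bmp_index_set(N, k):
    """Enumerate [k_1, ..., k_N]_k^2: non-negative N-tuples summing to k
    with at least two strictly positive entries."""
    return [t for t in product(range(k + 1), repeat=N)
            if sum(t) == k and sum(1 for v in t if v > 0) >= 2]

# For N = 3 and k = 3 there are 10 non-negative 3-tuples summing to 3;
# removing the 3 tuples with a single positive entry leaves 7.
print(len(bmp_index_set(3, 3)))  # 7
```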
Theorem 3
(Subcritical, \(\lambda < 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda < 0\). Redefine
where we define iteratively \(L_1 = \left\langle f,\tilde{\varphi }\right\rangle \) and for \(k \ge 2\),
if \((X, \mathbb {P})\) is a branching Markov process. Alternatively, if \((X, \mathbb {P})\) is a superprocess, we still have \(L_1 = \left\langle f,\tilde{\varphi }\right\rangle \) but
for \(k\ge 2\), with
Here the sums run over the set \(\left\{ m_1,\ldots ,m_{k-1}\right\} _k\) of non-negative integers such that \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\). Then, for all \(\ell \le k\)
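The set over which the sums run can likewise be enumerated directly; the sketch below (the helper name `partition_index_set` is ours) encodes the non-negative version of the constraint \(m_1+2m_2+\cdots +(k-1)m_{k-1}=k\) stated above.

```python
from itertools import product

def partition_index_set(k):
    """Enumerate {m_1, ..., m_{k-1}}_k: non-negative integer tuples
    (m_1, ..., m_{k-1}) with m_1 + 2*m_2 + ... + (k-1)*m_{k-1} = k."""
    return [m for m in product(range(k + 1), repeat=k - 1)
            if sum((i + 1) * m[i] for i in range(k - 1)) == k]

# For k = 4 the solutions correspond to the partitions of 4 with all
# parts at most 3: 1+1+1+1, 2+1+1, 2+2, 3+1.
print(len(partition_index_set(4)))  # 4
```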
Theorem 5
(Supercritical, \(\lambda > 0\)) Let \((X, \mathbb {P})\) be either a branching Markov process or a superprocess. Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda > 0\). Redefine
where \(L_k(x)\) was defined in Theorem 2 (both for branching Markov processes and superprocesses), albeit now with \({L_1(x) = 1/\lambda }\). Then, for all \(\ell \le k\)
Theorem 6
(Subcritical, \(\lambda < 0\)) Suppose that (H1) holds along with (H2) for some \(k \ge 2\) and \(\lambda < 0\). Redefine
where \(L_1(x) = \int _0^\infty {\varphi (x)^{-1}} \texttt{T}_s[g](x)\textrm{d}s\) and for \(k\ge 2\), the \(L_k(x)\) are defined recursively via
if X is a branching Markov process. Alternatively, if X is a superprocess,
where
and
Then, for all \(\ell \le k\)
2 Summary of proofs
We give brief proofs of Theorems 2 and 3 to convey the corrected reasoning. The proofs of Theorems 5 and 6 are omitted, since they follow the same logic. The reader may also consult [1] for further details.
Proof of Theorem 2
Suppose for induction that the result is true for all \(\ell \)-th integer moments with \(1\le \ell \le k -1\). From the evolution equation in Proposition 1 of [2], noting that \(\textstyle \sum _{j=1}^N k_j =k\), when the limit exists, we have
where
It is easy to see that, pointwise in \(x\in E\) and \(u\in [0,1]\), using the induction hypothesis and (H2),
where we have again used the fact that the \(k_j\)s sum to k to extract the \(\langle f, \tilde{\varphi }\rangle ^k\) term.
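The extraction step can be sketched as follows: each factor supplied by the induction hypothesis contributes a power \(\langle f, \tilde{\varphi }\rangle ^{k_j}\), and since the \(k_j\) sum to k,

```latex
\prod_{j=1}^{N}\langle f,\tilde{\varphi}\rangle^{k_j}
  \;=\; \langle f,\tilde{\varphi}\rangle^{\sum_{j=1}^{N} k_j}
  \;=\; \langle f,\tilde{\varphi}\rangle^{k}.
```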
Using the expressions for H[f](x, u, t) and H[f](x) together with the definition of \(L_k(x)\), we have, for any \(\epsilon >0\), as \(t\rightarrow \infty \),
where \(\epsilon \) is an upper estimate for
Note, convergence to zero as \(t\rightarrow \infty \) in (8) follows thanks to the induction hypothesis (ensuring that \(L_{k_j}(x)\) is uniformly bounded), (H2) and the uniform boundedness of \({\beta }\).
The induction hypothesis, (H2) and dominated convergence again ensure that
As such, in (7), we can split the integral on the right-hand side over \([0,\varepsilon ]\) and \((\varepsilon ,1]\), for \(\varepsilon \in (0,1)\). Using (9), we can ensure that, for any arbitrarily small \(\varepsilon '>0\), making use of the boundedness in (H1), there is a global constant \(C>0\) such that, for all t sufficiently large,
On the other hand, we can also control the integral over \((\varepsilon ,1]\), again appealing to (H1) and (H2) to ensure that
We can again work with a (different) global constant \(C>0\) such that
In conclusion, using (10) and (11), we can take limits as \(t\rightarrow \infty \) in (7) and the statement of the theorem follows for branching Markov processes.
The proof in the superprocess setting starts the same way as in [2] up to equation (90) therein, noting that the term \(R_k(x,t)\) in the moment evolution equation
from equation (77) of [2] can be compensated in the limit using \(\mathfrak {R}_k(x,t)\) defined in (1) above. The remainder of the proof deals with the compensation of the integral term in (12).
We have
where we identify \(H\left[ f\right] (x,u,t)\) as
that is,
The induction hypothesis and (H1) allow us to get
Using the same arguments used above from (10) onwards, we get the desired result. \(\square \)
Proof of Theorem 3
First note that since we only compensate by \(\textrm{e}^{-{\lambda } t}\), the term \({\texttt{T}}_{t}[f^k](x)\) that appears in equation (41) of [2] does not vanish after the normalisation. Due to assumption (H1), we have
Next we turn to the integral term in (41) of [2]. Define \([k_1, \dots , k_N]_k^{(n)}\), for \(2 \le n \le k\) to be the set of tuples \((k_1, \dots , k_N)\) with exactly n positive terms and whose sum is equal to k. Similar calculations to those given above yield
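The set \([k_1, \dots , k_N]_k^{(n)}\) just defined can be enumerated directly; a minimal sketch (the helper name `index_set_exact` is ours) follows the definition: tuples summing to k with exactly n positive entries.

```python
from itertools import product

def index_set_exact(N, k, n):
    """Enumerate [k_1, ..., k_N]_k^{(n)}: non-negative N-tuples summing
    to k with exactly n strictly positive entries."""
    return [t for t in product(range(k + 1), repeat=N)
            if sum(t) == k and sum(1 for v in t if v > 0) == n]

# For N = 3, k = 3, n = 2: place the parts of 2+1 in two of three slots,
# giving 3 choices of empty slot times 2 orderings.
print(len(index_set_exact(3, 3, 2)))  # 6
```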
Now suppose for induction that the result holds for all \(\ell \)-th integer moments with \(1\le \ell \le k-1\). Roughly speaking, the argument can be completed by noting that the integral in the definition of \(L_k\) can be written as
which is convergent by appealing to (H2), the fact that \({\beta }\in B^+(E)\) and the induction hypothesis. Being convergent, the integral can be truncated at \(t>0\), with the residual over \((t,\infty )\) made arbitrarily small by taking t sufficiently large. Changing variables in (14) once it has been truncated at an arbitrarily large t brings it into a form similar to (13), and subtracting it from (13) gives
where
One proceeds to split the integral of the difference over [0, 1] into two integrals, one over \([0,1-\varepsilon ]\) and one over \((1-\varepsilon , 1]\). For the aforesaid integral over \([0,1-\varepsilon ]\), we can control the behaviour of \(\varphi ^{-1}\textrm{e}^{-\lambda (1-u)t}{\texttt{T}}_{(1-u)t}[H^{(n)}_{ut}]-\langle H^{(n)}_{ut}, \tilde{\varphi }\rangle \) as \(t\rightarrow \infty \), making it arbitrarily small, by appealing to uniform dominated control of its argument in square brackets thanks to (H1). The integral over \([0,1-\varepsilon ]\) can thus be bounded, as \(t\rightarrow \infty \), by \(t(1-\textrm{e}^{{\lambda }(n-1)(1-\varepsilon )})/|{\lambda }|(n-1)\).
For the integral over \((1-\varepsilon , 1]\), we can appeal to the uniformity in (H1) and (H2) to control the entire term \(\textrm{e}^{-\lambda (1-u)t}{\texttt{T}}_{(1-u)t}[H^{(n)}_{ut}]\) (over time and its argument in the square brackets) by a global constant. Up to a multiplicative constant, the magnitude of the integral is thus of order
which tends to zero as \(t\rightarrow \infty \).
In the superprocess setting, as in the original proof, the exponential scaling kills the term \(R_k(x,t)\) in (12). For the integral term in (12), define \(H_{ut}^{(m_1,\ldots ,m_{k-1})}\) by
and note that \(L_k\) can be written as
which is also convergent by appealing to (H2). The rest of the proof follows arguments similar to those for the particle system. That is, one splits the integral (15) at t and uses the integral over [0, t] to compensate the integral component of (12), changing variables so that it becomes an integral over [0, 1], which is then handled as in the particle setting. The remaining integral over \((t,\infty )\) can be made arbitrarily small thanks to the convergence of (15). \(\square \)
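The change of variables invoked here and in the particle-system argument can be sketched generically: for an integrable F (standing in for the actual integrand), truncating at t and substituting \(s = ut\) gives

```latex
\int_0^{t} F(s)\,\mathrm{d}s \;=\; t\int_0^{1} F(ut)\,\mathrm{d}u .
```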
3 Concluding remarks
Fundamentally, the corrected results give the same rates of convergence; only the constants take a different iterative structure. The corrections also remove the discrepancy between branching Markov processes and superprocesses concerning whether the limiting constants depend on x. Hence, the original attempt at explaining that discrepancy is neither necessary nor valid.
Notes
1. Recall that we interpret \(\textstyle \sum _\emptyset = 0\) and \(\textstyle \prod _\emptyset = 1\).
References
1. Gonzalez, I.: Moments of branching Markov processes and related problems. Ph.D. thesis, University of Bath (2022)
2. Gonzalez, I., Horton, E., Kyprianou, A.E.: Asymptotic moments of spatial branching processes. Probab. Theory Relat. Fields 184, 805–858 (2022). https://doi.org/10.1007/s00440-022-01131-2
I. Gonzalez: Research supported by CONACYT scholarship nr 472301.
Gonzalez, I., Horton, E. & Kyprianou, A.E. Correction to: Asymptotic moments of spatial branching processes. Probab. Theory Relat. Fields 187, 505–515 (2023). https://doi.org/10.1007/s00440-023-01220-w