On exceptional times for generalized Fleming–Viot processes with mutations

Abstract

If \(\mathbf Y\) is a standard Fleming–Viot process with constant mutation rate (in the infinitely many sites model) then it is well known that for each \(t>0\) the measure \(\mathbf Y_t\) is purely atomic with infinitely many atoms. However, Schmuland proved that there is a critical value for the mutation rate under which almost surely there are exceptional times at which the stationary version of \(\mathbf Y\) is a finite sum of weighted Dirac masses. In the present work we discuss the existence of such exceptional times for the generalized Fleming–Viot processes. In the case of Beta-Fleming–Viot processes with index \(\alpha \in \,]1,2[\) we show that—irrespective of the mutation rate and \(\alpha \)—the number of atoms is almost surely always infinite. The proof combines a Pitman–Yor type representation with a disintegration formula, Lamperti’s transformation for self-similar processes and covering results for Poisson point processes.

Keywords

Fleming–Viot processes · Mutations · Exceptional times · Excursion theory · Jump-type SDE · Self-similarity

Mathematics Subject Classification (2000)

Primary 60J80 · Secondary 60G18

1 Main result

The measure-valued Fleming–Viot diffusion processes were first introduced by Fleming and Viot [20] and have become a cornerstone of mathematical population genetics in the last decades. It is a model which describes the evolution (forward in time) of the genetic composition of a large population. Each individual is characterized by a genetic type which is a point in a type-space \(E\). The Fleming–Viot process is a Markov process \((\mathbf Y _t)_{t\ge 0}\) on
$$\begin{aligned} \mathcal M^1_{E}=\big \{\nu : \nu \text { is a probability measure on } E\big \} \end{aligned}$$
for which we interpret \(\mathbf Y_t(B)\) as the proportion of the population at time \(t\) which carries a genetic type belonging to a Borel set \(B\) of types. In particular, the number of (different) types at time \(t\) is equal to the number of atoms of \(\mathbf Y _t\), with the convention that the number of types is infinite if \(\mathbf Y _t\) has an absolutely continuous part.
Fleming–Viot superprocesses can be defined through their infinitesimal generators
$$\begin{aligned} (\mathcal {L} \phi )(\mu ) = \int \limits _E\int \limits _E \mu (dv)\, (\delta _v(dy)-\mu (dy))\, {\delta ^2 \phi (\mu ) \over \delta \mu (v)\, \delta \mu (y)} +\int \limits _E \mu (dv)\, A\left( {\delta \phi (\mu ) \over \delta \mu (\cdot ) }\right) (v), \end{aligned}$$
(1.1)
acting on smooth test-functions, where \(\delta \phi (\mu ) /\delta \mu (v) = \lim _{\epsilon \rightarrow 0+} \epsilon ^{-1}\{ \phi (\mu +\epsilon \delta _v) -\phi (\mu )\}\) and \(A\) is the generator of a Markov process in \(E\) which represents the effect of mutations. Here \(\delta _v\) is the Dirac measure at \(v\). It is well known that the Fleming–Viot superprocess arises as the scaling limit of a Moran-type model for the evolution of a finite discrete population of fixed size if the reproduction mechanism is such that no individual gives birth to a positive proportion of the population in a small number of generations. For a detailed description of Fleming–Viot processes and discussions of variations we refer to the overview article of Ethier and Kurtz [19] and to Etheridge’s lecture notes [17].
The first summand of the generator reflects the genetic resampling mechanism whereas the second summand represents the effect of mutations. Several choices for \(A\) have appeared in the literature. In the present work we shall work in the setting of the infinitely-many-alleles model where each mutation creates a new type never seen before. Without loss of generality let the type space be \(E=[0,1]\). Then the following choice of \(A\) gives an example of an infinite site model with mutations:
$$\begin{aligned} (Af)(v) = \theta \int \limits _E (f(y)-f(v))dy, \end{aligned}$$
(1.2)
for some \(\theta >0\). The choice of the uniform measure \(dy\) is arbitrary (we could choose the new type according to any distribution that has a density with respect to the Lebesgue measure); all that matters is that the newly created type \(y\) is different from all other types. With \(A\) as in (1.2), mutations arrive at rate \(\theta \) and create a new type picked at random from \(E\) according to the uniform measure; the corresponding process is therefore sometimes called the Fleming–Viot process with neutral mutations.
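Since no code accompanies the paper, here is a minimal discrete sketch (ours, purely illustrative; the function name `moran_with_mutations` and the mutation/resampling time scale are ad hoc) of a Moran-type model with the infinitely-many-alleles mutation mechanism of (1.2):

```python
import numpy as np

rng = np.random.default_rng(0)

def moran_with_mutations(n=500, theta=1.0, steps=200_000):
    """Toy Moran model with infinitely-many-alleles (neutral) mutation.

    types[i] in [0,1] is the genetic type of individual i; the empirical
    measure of `types` is a finite-population proxy for Y_t.
    """
    types = rng.random(n)                      # Y_0: types sampled uniformly
    p_mut = theta / (theta + n)                # ad hoc mutation/resampling balance
    for _ in range(steps):
        i = rng.integers(n)
        if rng.random() < p_mut:
            types[i] = rng.random()            # new type, a.s. never seen before
        else:
            types[i] = types[rng.integers(n)]  # resampling from a uniform parent
    return types

print("distinct types:", np.unique(moran_with_mutations()).size)
```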
Let us briefly recall two classical facts concerning the infinite types Fleming–Viot process described above. For any initial condition \(\mathbf Y _0\):
  1. (i)

    If there is no mutation, then, for all \(t>0\) fixed, the number of types is almost surely finite.

     
  2. (ii)

    If the mutation parameter \(\theta \) is strictly positive, then, for all \(t>0\) fixed, the number of types is infinite almost surely.

     
This can be deduced e.g. from the explicit representation of the transition function given by Ethier and Griffiths in [18].

A beautiful complement to (i) and (ii) was found by Schmuland for exceptional times that are not fixed in advance:

Theorem 1.1

(Schmuland [35]) For the stationary infinitely-many-alleles model
$$\begin{aligned} {\mathbb P} \big ( \exists \, t>0 : \# \{\text {types at time }t \}<\infty \big )=\left\{ \begin{array}{c}1 \quad \text { if } \theta <1,\\ 0\quad \text { if } \theta \ge 1. \end{array}\right. \end{aligned}$$

Schmuland’s proof of the dichotomy is based on analytic arguments involving the capacity of finite dimensional subspaces of the infinite dimensional state-space. In Sect. 6 we reprove Schmuland’s theorem with a simple proof via excursion theory, which yields the result for arbitrary initial conditions.

In the series of articles [5, 6, 7], Bertoin and Le Gall introduced and started the study of \(\Lambda \)-Fleming–Viot processes, a class of stochastic processes which naturally extends the class of standard Fleming–Viot processes. These processes are completely characterized by a finite measure \(\Lambda \) on \(]0,1]\) and a generator \(A\). Similarly to the standard Fleming–Viot process, these processes can be defined through their infinitesimal generator
$$\begin{aligned} (\mathcal {L} \phi )(\mu )&= \int \limits _0^1 y^{-2} \Lambda (dy) \int \limits \mu (da) (\phi ((1-y)\mu +y \delta _a)- \phi (\mu ))\nonumber \\&\quad + \int \limits _E \mu (dv) A\left( {\delta \phi (\mu ) \over \delta \mu (\cdot ) }\right) (v), \end{aligned}$$
(1.3)
and the sites of atoms are again called types. For \(A=0\), the generator formulation only appeared implicitly in [6] and is explained in more details in Birkner et al. [10] and for \(A\) as in (1.2) it can be found in Birkner et al. [9]. The dynamics of a generalized Fleming–Viot process \((\mathbf Y _t)_{t\ge 0}\) are as follows: at rate \(y^{-2}\Lambda (dy)\) a point \(a\) is sampled at time \(t>0\) according to the probability measure \({\mathbf Y} _{t-}(da)\) and a point-mass \(y\) is added at position \(a\) while scaling the rest of the measure by \((1-y)\) to keep the total mass at 1. The second term of (1.3) is the same mutation operator as in (1.1). For a detailed description of \(\Lambda \)-Fleming–Viot processes and discussions of variations we refer to the overview article of Birkner and Blath [8].
In the following we are going to focus only on the choice \(\Lambda =Beta(2-\alpha ,\alpha )\), the Beta distribution with density
$$\begin{aligned} f(u)= C_\alpha \, u^{1-\alpha }(1-u)^{\alpha -1},\quad \quad C_\alpha =\frac{1}{\Gamma (2-\alpha )\Gamma (\alpha )}, \end{aligned}$$
for \(\alpha \in \, ]1,2[\), and mutation operator \(A\) as in (1.2). The corresponding \(\Lambda \)-Fleming–Viot process \(({\mathbf Y} _t)_{t\ge 0}\) is called Beta-Fleming–Viot process or \((\alpha ,\theta )\)-Fleming–Viot process and several results have been established in recent years. The \((\alpha ,\theta )\)-Fleming–Viot processes converge weakly to the standard Fleming–Viot process as \(\alpha \) tends to \(2\). It was shown in [10] that a \(\Lambda \)-Fleming–Viot process with \(A=0\) is related to measure-valued branching processes in the spirit of Perkins’ disintegration theorem precisely if \(\Lambda \) is a Beta distribution (this relation is recalled and extended in Sect. 2.3 below).

If we choose \(\alpha \in \, ]1,2[\) and \(\mathbf Y _0\) uniform on \([0,1]\), then properties (i) and (ii) for the one-dimensional marginals \(\mathbf Y _t\) hold unchanged with respect to the classical case (1.1), (1.2). In fact, for a general \(\Lambda \)-Fleming–Viot process, (i) is equivalent to the requirement that the associated \(\Lambda \)-coalescent comes down from infinity (see for instance [2]). Here is our main result: contrary to Schmuland’s result, \((\alpha ,\theta )\)-Fleming–Viot processes with \(\alpha \in \, ]1,2[\) and \(\theta >0\) never have exceptional times:

Theorem 1.2

Let \(({\mathbf Y} _t)_{t\ge 0}\) be an \((\alpha ,\theta )\)-Fleming–Viot superprocess with mutation rate \(\theta >0\) and parameter \(\alpha \in \, ]1,2[.\) Then for any starting configuration \(\mathbf Y _0\)
$$\begin{aligned} {\mathbb P} \big ( \exists \, t>0 : \# \{\text {types at time }t \}<\infty \big )=0. \end{aligned}$$

A first rough understanding of why this should be true comes from a heuristic based on the duality between \(\Lambda \)-Fleming–Viot processes and \(\Lambda \)-coalescents. While \(\Lambda \)-Fleming–Viot processes describe how the composition of a population evolves forward in time, \(\Lambda \)-coalescents describe how the ancestral lineages of individuals sampled in the population merge as one goes back in time. The fact that \(\Lambda \)-coalescents describe the genealogies of \(\Lambda \)-Fleming–Viot processes can be seen from Donnelly and Kurtz’s so-called lookdown construction of Fleming–Viot processes [14] and was also established through a functional duality relation by Bertoin and Le Gall in [5].

The coalescent which corresponds to the classical Fleming–Viot process is the celebrated Kingman’s coalescent. Kingman’s coalescent comes down from infinity at speed \(2/t\), i.e. if one initially samples infinitely many individuals in the population, then the number of active lineages at time \(t\) in the past is \(N_t\) and \(N_t \sim 2/t\) almost surely when \(t\rightarrow 0\). It is known (see [6] or more recently [28]) that the process \((N_t, t\ge 0)\) has the same law as the process of the number of atoms of the Fleming–Viot process. For a Beta-coalescent (that is, a \(\Lambda \)-coalescent where \(\Lambda \) is the Beta\((2-\alpha ,\alpha )\) distribution) with parameter \(\alpha \in (1,2)\) we have \(N_t \sim c_\alpha t^{-1/(\alpha -1)}\) almost surely as \(t\rightarrow 0\) (see [3, Theorem 4]). Therefore Kingman’s coalescent comes down from infinity much quicker than Beta-coalescents. Since the speed at which the generalized Fleming–Viot process loses types roughly corresponds to the speed at which the dual coalescent comes down from infinity, it is possible that \((\alpha ,\theta )\)-Fleming–Viot processes do not lose types fast enough, and hence there are no exceptional times at which the number of types is finite.
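This heuristic is easy to probe numerically. The sketch below is ours; it assumes the standard merger rates \(\lambda _{b,k}=B(k-\alpha ,b-k+\alpha )/B(2-\alpha ,\alpha )\) of the Beta\((2-\alpha ,\alpha )\)-coalescent and checks that \(N_t\, t^{1/(\alpha -1)}\) is roughly constant for small \(t\).

```python
import numpy as np
from scipy.special import betaln, gammaln

rng = np.random.default_rng(2)

def beta_coalescent_blocks(n=2000, alpha=1.5, t_max=0.1):
    """Block-counting process N_t of a Beta(2-alpha, alpha)-coalescent.

    From b blocks, each k-tuple merges at rate
    lambda_{b,k} = B(k - alpha, b - k + alpha) / B(2 - alpha, alpha)."""
    lB = betaln(2 - alpha, alpha)
    t, b = 0.0, n
    times, blocks = [0.0], [n]
    while b > 1 and t < t_max:
        ks = np.arange(2, b + 1)
        # log of binom(b, k) * lambda_{b,k}
        logw = (gammaln(b + 1) - gammaln(ks + 1) - gammaln(b - ks + 1)
                + betaln(ks - alpha, b - ks + alpha) - lB)
        w = np.exp(logw)
        t += rng.exponential(1.0 / w.sum())
        b -= rng.choice(ks, p=w / w.sum()) - 1   # a k-merger: b -> b - k + 1
        times.append(t)
        blocks.append(b)
    return np.array(times), np.array(blocks)

times, blocks = beta_coalescent_blocks()
# heuristic check of N_t ~ const * t^{-1/(alpha-1)}; here 1/(alpha-1) = 2
for i in range(1, len(times), max(1, len(times) // 6)):
    print(f"t = {times[i]:.2e}   N_t = {blocks[i]:5d}   N_t * t^2 = {blocks[i] * times[i]**2:.3f}")
```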

2 Auxiliary constructions

To prove Theorem 1.2 we construct two auxiliary objects: a particular measure-valued branching process and a corresponding Pitman–Yor type representation. Those will be used in Sect. 5 to relate the question of exceptional times to covering results for point processes. In this section we give the definitions and state their relations to the Beta-Fleming–Viot processes with mutations. All appearing stochastic processes and random variables will be defined on a common stochastic basis \((\Omega , \mathcal {G}, \mathcal {G}_{t}, {\mathbb P} )\) that is rich enough to carry all Poisson point processes (PPP in short) that appear in the sequel.

2.1 Measure-valued branching processes with immigration

We recall that a continuous state branching process (CSBP in short) with \(\alpha \)-stable branching mechanism, \(\alpha \in \,]1,2]\), is a Markov family \((P_v)_{v\ge 0}\) of probability measures on càdlàg trajectories with values in \({\mathbb R}_+\), such that
$$\begin{aligned} E_v\big (e^{-\lambda X_t}\big ) = e^{-v\,u_t(\lambda )},\qquad v\ge 0, \lambda \ge 0, \end{aligned}$$
(2.1)
where for \(\psi :{\mathbb R}_+\mapsto {\mathbb R}_+\), \(\psi (u):=u^\alpha \), we have the evolution equation
$$\begin{aligned} u'_t(\lambda ) = - \psi (u_t(\lambda )), \qquad u_0(\lambda ) = \lambda . \end{aligned}$$
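Since \(\psi (u)=u^\alpha \), this ODE is separable and can be solved in closed form; the following short computation yields the formula for \(u_t(\lambda )\) that reappears in Sect. 2.2:
$$\begin{aligned} \frac{d}{dt}\, u_t(\lambda )^{1-\alpha } = (1-\alpha )\, u_t(\lambda )^{-\alpha }\, u'_t(\lambda ) = \alpha -1, \qquad \text {hence}\qquad u_t(\lambda ) = \left( \lambda ^{1-\alpha } + (\alpha -1)t\right) ^{\frac{1}{1-\alpha }}, \end{aligned}$$
valid for \(\alpha \in \,]1,2[\); for \(\alpha =2\) it reduces to \(u_t(\lambda )=\lambda /(1+\lambda t)\).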
See e.g. [27] for a good introduction to CSBP. For \(\alpha =2\), \(\psi (u)=u^2\) is the branching mechanism for Feller’s branching diffusion, where \(P_v\) is the law of the unique solution to the SDE
$$\begin{aligned} X_t=v+\int \limits _0^t\sqrt{2X_s}\,dB_s, \qquad t\ge 0, \end{aligned}$$
(2.2)
driven by a Brownian motion \((B_t)_{t\ge 0}\). On the other hand, for \(\alpha \in \, ]1,2[\), \(\psi (u)=u^\alpha \) gives the so-called \(\alpha \)-stable branching processes which can be defined as the unique strong solution of the SDE
$$\begin{aligned} X_t=v+\int \limits _0^tX_{s-}^{1/\alpha }\,dL_s, \qquad t\ge 0, \end{aligned}$$
(2.3)
driven by a spectrally positive \(\alpha \)-stable Lévy process \((L_t)_{t\ge 0}\), with Lévy measure given by
$$\begin{aligned} {\small 1}\!\!1_{{(x>0)}}\, c_\alpha \, x^{-1-\alpha }\,dx, \qquad c_\alpha :=\frac{\alpha (\alpha -1)}{\Gamma (2-\alpha )}. \end{aligned}$$
Note that the function \(x\mapsto x^{1/\alpha }\) is Lipschitz continuous away from zero, and hence strong existence and uniqueness hold for (2.3) until \(X\) hits zero. Moreover \(X\), being a non-negative martingale, stays at zero forever after hitting it. For a more extensive discussion of strong solutions for jump SDEs see [22] and [32].
The main tool that we introduce is a particular measure-valued branching process with interactive immigration (MBI in short). For a textbook treatment of this subject we refer to Li [31]. Following Dawson and Li [12], we are not going to introduce the MBIs via their infinitesimal generators but as strong solutions of a system of stochastic differential equations instead. On \((\Omega , \mathcal {G}, \mathcal {G}_{t},{\mathbb P} )\), let us consider a Poisson point process \(\mathcal {N}=(r_i,x_i,y_i)_{i\in I}\) on \((0,\infty )\times (0,\infty )\times (0,\infty )\) adapted to \(\mathcal {G}_{t}\) and with intensity measure
$$\begin{aligned} \nu (dr,dx,dy):={\small 1}\!\!1_{{(r>0)}} \, dr\, \otimes \, c_\alpha \, {\small 1}\!\!1_{{(x>0)}}\, x^{-1-\alpha }\,dx \, \otimes \, {\small 1}\!\!1_{{(y>0)}}\, dy. \end{aligned}$$
(2.4)
Throughout the paper we adopt the notation
$$\begin{aligned} \tilde{\mathcal N} :={\mathcal N} -\nu , \end{aligned}$$
i.e. \(\tilde{\mathcal N} \) is the compensated version of \({\mathcal N} \). It was shown in [12] that the solution to (2.3) has the same law as the unique strong solution to the SDE
$$\begin{aligned} X_t= X_0+\int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{(y<X_{r-})}}\, x \, \tilde{\mathcal N} (dr,dx,dy) \end{aligned}$$
(2.5)
with \(X_{0}=v\).
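Equation (2.5) suggests a simple approximate simulation of the total mass: keep only the jumps of size \(x>\varepsilon \) (these occur at a finite rate, proportionally to \(X_{r-}\)) and compensate them by a deterministic drift, neglecting the compensated small-jump martingale. The following sketch is ours and only an approximation; `stable_csbp`, the cutoff `eps` and the Poissonized time discretization are ad hoc choices, not from the paper.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

def stable_csbp(v=1.0, alpha=1.5, eps=1e-3, dt=1e-4, t_max=1.0):
    """Truncated-jump Euler sketch of the alpha-stable CSBP total mass (2.5)."""
    c = alpha * (alpha - 1) / gamma(2 - alpha)    # c_alpha from the Levy measure
    jump_rate = c * eps**(-alpha) / alpha         # rate of jumps x > eps, per unit mass
    drift = c * eps**(1 - alpha) / (alpha - 1)    # compensator of the kept jumps
    X, path = v, [v]
    for _ in range(int(t_max / dt)):
        X *= np.exp(-drift * dt)                  # compensation between jumps
        k = rng.poisson(X * jump_rate * dt)       # retained jumps in this step
        if k:
            X += eps * np.sum(rng.random(k) ** (-1.0 / alpha))  # Pareto(alpha) sizes
        path.append(X)
    # note: with the cutoff the path only decays towards 0, while the true
    # CSBP actually hits 0; the approximation improves as eps -> 0
    return np.array(path)

# sanity check: X is a martingale, so the empirical mean should stay near v = 1
print(np.mean([stable_csbp()[-1] for _ in range(100)]))
```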
Now we are going to switch to the measure-valued setting. The real-valued process \(X\) in (2.3), (2.5) describes the evolution of the total mass of the CSBP starting at time zero at the mass \(X_0=v\). We are going to consider all initial masses \(v\in [0,1]\) simultaneously, constructing a process \((\mathbf {X}_t)_{t\ge 0}\) taking values in the space \({\mathcal M} ^{F}_{[0,1]}\) of finite measures on \([0,1]\), endowed with the narrow topology, i.e. the trace of the weak-\(*\) topology of \((C[0,1])^*\). Assume that at time \(t=0\), \(\mathbf {X}_0\) is a finite measure on \([0,1]\) with cumulative distribution function \((F(v), v\in [0,1])\), and denote
$$\begin{aligned} X_t(v):= \mathbf {X}_t([0,v]), \quad t\ge 0, v\in [0,1]. \end{aligned}$$
Then the measure-valued branching process \((\mathbf {X}_t)_{t\ge 0}\) can be constructed in such a way that for each \(v\), \((X_t(v))_{t\ge 0}\) solves (2.5) with \(X_0=F(v)\), and with the same driving noise for all \(v\in [0,1]\). In what follows, we deal with a version of (2.5) including an immigration term only depending on the total-mass \(X_t(1)\):
$$\begin{aligned} {\left\{ \begin{array}{ll} X_t(v) = F(v) + \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{(y<X_{r-}(v))}}\, x \, \tilde{\mathcal N} (dr,dx,dy) + I(v) \int \limits _0^t g(X_{s}(1))\, ds,\\ v \in [0,1], \ t\ge 0, \end{array}\right. } \end{aligned}$$
(2.6)
where \((I(v), v\in [0,1])\) is the cumulative distribution function of a finite measure on \([0,1]\) and we assume
(G)

\(g : {\mathbb R}_+ \mapsto {\mathbb R}_+\) is monotone non-decreasing, continuous and locally Lipschitz continuous away from zero.

Definition 2.1

An \({\mathcal M} ^{F}_{[0,1]}\)-valued process \((\mathbf X _t)_{t\ge 0}\) on \((\Omega , \mathcal {G}, \mathcal {G}_{t},{\mathbb P} )\) is called a solution to (2.6) if
  • it is càdlàg \({\mathbb P} \)-a.s.,

  • for all \(v\in [0,1]\), setting \(X_t(v):=\mathbf X _t([0,v])\), \((X_t(v))_{v\in [0,1],t\ge 0}\) satisfies \({\mathbb P} \)-a.s. (2.6).

Moreover, a solution \((\mathbf X _t)_{t\ge 0}\) is strong if it is adapted to the natural filtration \({\mathcal F} _t\) generated by \(\mathcal {N}\). Finally, we say that pathwise uniqueness holds if
$$\begin{aligned} {\mathbb P} \big (\mathbf X ^1_t=\mathbf X ^2_t, \ \forall t\ge 0\big )=1, \end{aligned}$$
for any two solutions \(\mathbf X ^1\) and \(\mathbf X ^2\) on \((\Omega , \mathcal {G}, \mathcal {G}_t,{\mathbb P} )\) driven by the same Poisson point process.

Here is a well-posedness result for (2.6):

Theorem 2.2

Let \(F\) and \(I\) be as above. For any immigration mechanism \(g\) satisfying Assumption (G), there is a strong solution \((\mathbf X _t)_{t\ge 0}\) to (2.6) and pathwise uniqueness holds until \(T_0:= \inf \{t \ge 0 : \mathbf X _t([0,1]) =0\}\).

The proof of Theorem 2.2 relies on ideas from recent articles on pathwise uniqueness for jump-type SDEs such as Fu and Li [22] or Dawson and Li [12]. Our equation (2.6) is more delicate since all coordinate processes depend on the total-mass \(X_t(1)\). The uniqueness statement is first deduced for the total-mass process \((X_t(1))_{t\ge 0}\) and then for the other coordinates, interpreting the total mass as a random environment. To construct a (weak) solution we use a (pathwise) Pitman–Yor type representation as explained in the next section.

2.2 A Pitman–Yor type representation for interactive MBIs

Let us denote by \({\mathcal E} \) the set of càdlàg trajectories \(w:{\mathbb R}_+\mapsto {\mathbb R}_+\) such that \(w(0)=0\), \(w\) is positive on a bounded interval \(]0,\zeta (w)[\) and \(w\equiv 0\) on \([\zeta (w),+\infty [\). We recall the construction of the excursion measure of the \(\alpha \)-stable CSBP \((P_v)_{v\ge 0}\), also called the Kuznetsov measure, see [30, Section 4] or [31, Chapter 8]: For all \(t\ge 0\), let \(K_t(dx)\) be the unique \(\sigma \)-finite measure on \({\mathbb R}_+\) such that
$$\begin{aligned} \int \limits _{{\mathbb R}_+} \left( 1-e^{-\lambda \, x}\right) K_t(dx) = u_t(\lambda ) = \left( \lambda ^{1-\alpha } + (\alpha -1)t\right) ^{\frac{1}{1-\alpha }}, \qquad \lambda \ge 0, \end{aligned}$$
where we recall that the function \((u_t(\lambda ))_{t\ge 0}\) is the unique solution to the equation
$$\begin{aligned} u_t(\lambda ) + \int \limits _0^t (u_s(\lambda ))^\alpha \, ds = \lambda , \qquad t\ge 0, \ \lambda \ge 0. \end{aligned}$$
We also denote by \(Q_t(x,dy)\) the Markov transition semigroup of \((P_v)_{v\ge 0}\). Then there exists a unique Markovian \(\sigma \)-finite measure \({\mathbb Q} \) on \({\mathcal E} \) with entrance law \((K_t)_{t\ge 0}\) and transition semigroup \((Q_t)_{t\ge 0}\), i.e. such that for all \(0< t_1< \cdots <t_n\), \(n\in {\mathbb N} \),
$$\begin{aligned}&{\mathbb Q} (w_{t_1}\in dy_1, \ldots , w_{t_n}\in dy_n, \, t_n<\zeta (w))\nonumber \\&= K_{t_1}(dy_1) \, Q_{t_2-t_{1}}(y_{1}, dy_2) \cdots Q_{t_n-t_{n-1}}(y_{n-1}, dy_n). \end{aligned}$$
(2.7)
By construction
$$\begin{aligned} \int \limits _{{\mathcal E} } \left( 1-e^{-\lambda \, w_s}\right) \, {\mathbb Q} (dw) = u_s(\lambda ) = \left( \lambda ^{1-\alpha } + (\alpha -1)s\right) ^{\frac{1}{1-\alpha }}, \qquad s\ge 0, \ \lambda \ge 0,\nonumber \\ \end{aligned}$$
(2.8)
and under \({\mathbb Q} \), for all \(s>0\), conditionally on \(\sigma (w_r, r\le s)\), \((w_{t+s})_{t\ge 0}\) has law \(P_{w_s}\). The \(\sigma \)-finite measure \({\mathbb Q} \) is called the excursion measure of the CSBP (2.3). By (2.8), it is easy to check that for any \(s>0\)
$$\begin{aligned} \int \limits _{\mathcal E} w_{s} \, {\mathbb Q} (dw) = \left. \frac{\partial }{\partial \lambda } u_s(\lambda ) \right| _{\lambda =0} = \lim _{\lambda \downarrow 0} \left( 1 + \lambda ^{\alpha -1}(\alpha -1)s\right) ^{\frac{\alpha }{1-\alpha }} = 1. \end{aligned}$$
(2.9)
In Duquesne–Le Gall’s setting [15], under the \(\sigma \)-finite measure \({\mathbb Q} \) with infinite total mass, \(w\) has the distribution of \((\ell ^a(e))_{a\ge 0}\) under \(n(de)\), where \(n(de)\) is the excursion measure of the height process \(H\) and \(\ell ^a\) is the local time at level \(a\). For the more general superprocess setting see for instance Dynkin and Kuznetsov [16].
We need now to extend the space of excursions as follows:
$$\begin{aligned} {\mathcal D} :=\{w:{\mathbb R}_+\mapsto {\mathbb R}_+: \ \exists s\ge 0, \, w\equiv 0 \ \mathrm{on} \ [0,s], \ w_{\cdot -s}\in {\mathcal E} \}, \end{aligned}$$
i.e. \({\mathcal D} \) is the set of càdlàg trajectories \(w:{\mathbb R}_+\mapsto {\mathbb R}_+\) such that \(w\) is equal to 0 on \([0,s(w)]\), \(w\) is positive on a bounded interval \(]s(w),s(w)+\zeta (w)[\) and \(w\equiv 0\) on \([s(w)+\zeta (w),+\infty [\). For \(s\ge 0\), we denote by \({\mathbb Q} _s(dw)\) the \(\sigma \)-finite measure on \({\mathcal D} \) given by
$$\begin{aligned} \int \limits _{\mathcal D} \Phi (w) \, {\mathbb Q} _s(dw) := \int \limits _{\mathcal E} \Phi \left( {\small 1}\!\!1_{{(\cdot \ge s)}}\, w_{\cdot -s}\right) \, {\mathbb Q} (dw), \end{aligned}$$
(2.10)
i.e. \({\mathbb Q} _s\) is the image measure of \({\mathbb Q} \) under the map
$$\begin{aligned} w\mapsto \left( \gamma _t := {\small 1}\!\!1_{{(t\ge s)}} \, w_{t-s}\right) _{t\ge 0}. \end{aligned}$$
(2.11)
Let us consider a Poisson point process \((s_i, u_i,a_i,w^i)_{i\in I}\) on \({\mathbb R}_+\times {\mathbb R}_+\times {\mathbb R}_+\times {\mathcal D} \) with intensity measure
$$\begin{aligned} \Gamma (ds,du,da,dw) := \left( \delta _0(ds)\otimes \delta _0(du) \otimes F(da) + ds\otimes du \otimes I(da)\right) \otimes {\mathbb Q} _s(dw)\nonumber \\ \end{aligned}$$
(2.12)
where \(F\) and \(I\) are the cumulative distribution functions appearing in (2.6). An atom \((s_i, u_i,a_i,w^i)\) is a population that has immigrated at time \(s_i\), whose size evolution is given by \(w^i\) and whose genetic type is given by \(a_i\). The coordinate \(u_i\) is used for thinning purposes, to decide whether or not this particular immigration actually happened.

Theorem 2.3

Suppose \(g:{\mathbb R}_+\mapsto {\mathbb R}_+\) satisfies Assumption (G). Then, for all \(v\in [0,1]\), there is a unique càdlàg process \((Z_t(v), t\ge 0)\) on \((\Omega , \mathcal {G}, \mathcal {G}_{t}, {\mathbb P} )\) satisfying \({\mathbb P} \)-a.s.
$$\begin{aligned} {\left\{ \begin{array}{ll} Z_t(v) = \sum _{s_i=0} w_{t}^i \, {\small 1}\!\!1_{{(a_i \le v)}} + \sum _{s_i>0} w_{t}^i \,{\small 1}\!\!1_{{(a_i \le v)}} {\small 1}\!\!1_{{(u_i \le g(Z_{s_i-}(1)))}},\qquad t> 0,\\ Z_0(v)=F(v). \end{array}\right. } \end{aligned}$$
(2.13)
Moreover, we can construct on \((\Omega , \mathcal {G}, \mathcal {G}_{t}, {\mathbb P} )\) a PPP \({\mathcal N} \) with intensity \(\nu \) given by (2.4) such that \(Z\) solves (2.6) with respect to \(\mathcal {N}\).
If \(I(1)=1\), then in the special case of branching mechanism \(\psi (\lambda )=\lambda ^2\) and constant immigration rate \(g\equiv \theta \), the total-mass process \(X_t=X_t(1)\) for (2.6) also solves
$$\begin{aligned} {\left\{ \begin{array}{ll} dX_t=\sqrt{2 X_t}\,dB_t+\theta \,dt,\qquad t\ge 0,\\ X_0=F(1). \end{array}\right. } \end{aligned}$$
for which Pitman and Yor obtained the excursion representation in their seminal paper [34].
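For concreteness, here is a basic Euler–Maruyama sketch of this equation (ours; the clipping at zero is a scheme-level fix, not part of the paper). Up to the scaling \(Z=2X\) this is a squared Bessel process of dimension \(2\theta \), which hits zero if and only if \(\theta <1\), consistently with the dichotomy in Theorem 1.1.

```python
import numpy as np

rng = np.random.default_rng(4)

def feller_with_immigration(x0=1.0, theta=0.5, dt=1e-4, t_max=5.0):
    """Euler-Maruyama sketch of dX = sqrt(2 X) dB + theta dt."""
    n = int(t_max / dt)
    X = np.empty(n + 1)
    X[0] = x0
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    for i in range(n):
        # clip at 0 so that the square root stays real
        X[i + 1] = max(X[i] + np.sqrt(2.0 * max(X[i], 0.0)) * dB[i] + theta * dt, 0.0)
    return X

path = feller_with_immigration()
print("X at t_max:", path[-1], "  minimum over the path:", path.min())
```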

Remark 2.4

The recent monograph [31] by Zenghu Li contains a full theory of such Pitman–Yor type representations for measure-valued branching processes, see in particular Chapter 10. We present a different approach below which shows directly how the different Poisson point processes in (2.6) and in (2.13) are related to each other. The most important feature of our construction is that it relates the excursion construction and the SDE construction on a pathwise level.

Observe that an immediate and interesting corollary of Theorem 2.3 is the following:

Corollary 2.5

Let \(g\) be an immigration mechanism satisfying assumption (G) and let \((\mathbf {X}_t)_{t\ge 0}\) be a solution to (2.6). Then almost surely, \(\mathbf {X}_t\) is purely atomic for all \(t\ge 0\).

In the proof of our Theorem 1.2 we make use of the fact that the Pitman–Yor type representation is well suited for comparison arguments. If \(g\) can be bounded from above or below by a constant, then the right-hand side of (2.6) can be compared to an explicit PPP for which general theory can be applied.

2.3 From MBI to Beta-Fleming–Viot processes with mutations

Let us first recall an important characterization, started in [6] and completed in [12], which relates Fleming–Viot processes, defined as measure-valued Markov processes by the generator (1.3), to strong solutions of stochastic equations.

Theorem 2.6

(Dawson and Li [12]) Let \(\Lambda \) be the Beta distribution with parameters \((2-\alpha , \alpha )\). Suppose \(\theta \ge 0\) and \(\mathcal M\) is a non-compensated Poisson point process on \((0,\infty )\times [0,1]\times [0,1]\) with intensity \(ds \otimes y^{-2} \Lambda (dy) \otimes du\). Then there is a unique strong solution \((Y_t(v))_{t\ge 0,v\in [0,1]}\) to
$$\begin{aligned} {\left\{ \begin{array}{ll} Y_t(v) = v + \int \limits _{]0,t]\times [0,1]\times [0,1]} y \left[ {\small 1}\!\!1_{{(u \le Y_{s-}(v))}} -Y_{s-}(v) \right] {\mathcal M} (ds, dy, du) +\theta \int \limits _0^t [v-Y_{s}(v)]ds,\\ v \in [0,1], \ t\ge 0, \end{array}\right. }\nonumber \\ \end{aligned}$$
(2.14)
and the measure-valued process \(\mathbf Y _t([0,v]):={Y_t(v)}\) is an \((\alpha ,\theta )\)-Fleming–Viot process started at uniformly distributed initial condition.

Existence and uniqueness of solutions for this equation were proved in Theorem 4.4 of [12], while the characterization of the generator of the measure-valued process \({\mathbf Y} \) is the content of their Theorem 4.9.

We next extend a classical relation between Fleming–Viot processes and measure-valued branching processes which is typically known as a disintegration formula. Without mutations, for the standard Fleming–Viot process this goes back to Konno and Shiga [26], and it was shown in Birkner et al. [10] that the relation extends to the generalized \(\Lambda \)-Fleming–Viot processes without immigration if and only if \(\Lambda \) is a Beta-measure. Our extension relates \((\alpha ,\theta )\)-Fleming–Viot processes to (2.6) with immigration mechanism \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\) and for \(\theta =0\) gives an SDE formulation of the main result of [10].

Theorem 2.7

Let \(F(v)=I(v)=v\) and let \(g:{\mathbb R}_+\mapsto {\mathbb R}_+\) be defined by \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\) for some \(\alpha \in (1,2).\) Let then \((\mathbf X_t)_{t\ge 0}\) be the unique solution to (2.6) (in the sense of Definition 2.1) such that
$$\begin{aligned} X_t(1)=0, \quad \forall \ t\ge T_0:=\inf \{ s>0: \ X_s(1)=0\}. \end{aligned}$$
Define
$$\begin{aligned} S(t) =\alpha (\alpha -1)\Gamma (\alpha ) \int \limits _0^tX_s(1)^{1-\alpha }\,ds \end{aligned}$$
and
$$\begin{aligned} \mathbf{Y}_t(dv)= \frac{\mathbf{X}_{S^{-1}(t)}(dv)}{X_{S^{-1}(t)}(1)},\quad t\ge 0. \end{aligned}$$
(2.15)
Then \(\big (\mathbf Y _t\big )_{t\ge 0}\) is well-defined, i.e. \(S^{-1}(t)<T_0\) for all \(t\ge 0\), and is an \((\alpha ,\theta )\)-Fleming–Viot process, i.e. a strong solution to (2.14) with \(\Lambda =Beta(2-\alpha ,\alpha )\).

The proof of the theorem is different from that of the known result for \(\theta =0\). To prove that \(X_{S^{-1}(t)}(1)>0\) for all \(t\ge 0\), Lamperti’s representation for CSBPs was crucially used in [10]. This idea breaks down in our generalized setting since the total-mass process \(X_t(1)\) is not a CSBP. Our proof uses instead the fact that for all \(\theta \ge 0\) the total-mass process is self-similar, together with an interesting cancellation effect between Lamperti’s transformation for self-similar Markov processes and the time-change \(S\).
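Numerically, the time change of Theorem 2.7 is straightforward to implement once a discretized total-mass path is available. A minimal sketch (ours; `time_change` is hypothetical and assumes a strictly positive path on the simulated window, e.g. one produced by a scheme like `stable_csbp` above with the immigration term added):

```python
import numpy as np
from math import gamma

def time_change(path, dt, alpha=1.5):
    """Tabulate S(t) = alpha(alpha-1)Gamma(alpha) * int_0^t X_s(1)^{1-alpha} ds
    and its inverse, from a total-mass path sampled on a grid of mesh dt."""
    const = alpha * (alpha - 1) * gamma(alpha)
    # left-endpoint Riemann sum; requires path > 0 since 1 - alpha < 0
    S = const * dt * np.concatenate([[0.0], np.cumsum(path[:-1] ** (1.0 - alpha))])

    def S_inv(t):
        i = np.searchsorted(S, t)          # smallest grid index with S >= t
        return min(i, len(S) - 1) * dt

    return S, S_inv

# the normalization (2.15) then reads, atom by atom,
#   Y_t(dv) = X_{S_inv(t)}(dv) / X_{S_inv(t)}(1).
```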

In [1] we study (a generalized version of) the total mass process \((X_t(1), t\ge 0)\) and we show that the extinction time \(T_0 = \inf \{ t\ge 0 : X_t(1)=0\}\) is finite almost surely if and only if \(\theta <\Gamma (\alpha )\). Otherwise \(T_0=\infty \) almost surely. We will see in the proof of Theorem 2.7 that in both cases
$$\begin{aligned} \lim _{t\rightarrow \infty } S^{-1}(t) =T_0 \, \text { a.s.} \end{aligned}$$
Theorem 2.7 thus gives some partial information on the behavior of \(\big (\mathbf X _t\big )_{t\ge 0}\) near the extinction time \(T_0\):

Corollary 2.8

As \(t \rightarrow \infty \) the probability-valued process \(\left( {\mathbf {X}_{S^{-1}(t)}(dv) \over X_{S^{-1}(t)}(1)}\right) _{t\ge 0}\) converges weakly to the unique invariant measure of \((\mathbf {Y}_t, t\ge 0)\).

As \(t\rightarrow T_0\), almost surely, there exists a (random) sequence of times \(t_1 <t_2 <\ldots <T_0\) tending to \(T_0\) such that the sets
$$\begin{aligned} A_i = \text {support of } \mathbf {X}_{t_i} \end{aligned}$$
are pairwise disjoint.

This corollary is a direct consequence of Theorem 2.7 and of the result, due to Donnelly and Kurtz [13, 14], that the \((\alpha , \theta )\)-Fleming–Viot process (as well as its lookdown particle system) is strongly ergodic. For the sake of self-containedness, a sketch of the proof is given in Sect. 7, which specializes the arguments of Donnelly and Kurtz to our case and makes them explicit.

3 Proof of Theorems 2.2 and 2.3

Recall that \((s_i, u_i,a_i,w^i)_{i\in I}\) is a Poisson point process on \({\mathbb R}_+^3 \times {\mathcal D} \) with intensity measure \(\Gamma \) given as in (2.12), and that we use the notation (2.11). We are going to show that for all \(v\in [0,1]\) there exists a unique càdlàg process \((Z_t(v), t\ge 0)\) solving
$$\begin{aligned} {\left\{ \begin{array}{ll} Z_t(v)= \sum _{s_i=0} w_{t}^i \, {\small 1}\!\!1_{{(a_i \le v)}} + \sum _{s_i>0} w_{t}^i \, {\small 1}\!\!1_{{(a_i \le v)}} {\small 1}\!\!1_{{(u_i \le g(Z_{s_i-}(1)))}},\;t>0,\\ Z_0=F(v). \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.1)
Then we are going to construct a PPP \({\mathcal N} \) with intensity \(dr\otimes c_\alpha \, {\small 1}\!\!1_{{(x>0)}} \, x^{-1-\alpha }\, dx\otimes dy\) such that, for all \(v\in [0,1]\), \(Z\) is a solution of (2.6).

3.1 The Pitman–Yor type representation with predictable random immigration

We start by replacing the immigration rate \((g(Z_{s-}(1)))_{s>0}\) in the right-hand side of (3.1) with a generic \(({\mathcal F} _t)\)-predictable process \((V_s)_{s\ge 0}\), that we assume to satisfy
$$\begin{aligned} V_t\ge 0 \text { and } \int \limits _0^t {\mathbb E} (V_s) \, ds<+\infty \ \ \forall t\ge 0; \end{aligned}$$
(3.2)
this will be useful when we perform a Picard iteration in the proof of existence of solutions to (2.6) and (3.1). Then we consider
$$\begin{aligned} {\left\{ \begin{array}{ll} Z_t(v) := \sum _{s_i=0} w_{t}^i \, {\small 1}\!\!1_{{(a_i \le v)}} + \sum _{s_i>0} w_{t}^i \, {\small 1}\!\!1_{{(a_i \le v)}}{\small 1}\!\!1_{{(u_i \le V_{s_i})}},\qquad t>0, \ v\in [0,1],\\ Z_0(v):=F(v),\qquad \ v\in [0,1]. \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.3)
We then show that there is a noise \(\mathcal {N}\) on \((\Omega , \mathcal {G}, \mathcal {G}_t, {\mathbb P} )\) such that \(Z\) is a solution of an equation of the type (2.6).

3.1.1 Definition of \({\mathcal N} \)

Fig. 1 Definition of \(\mathcal {N}\). On the left-hand side we represent the point process \((s_i,w^i, a_i)\); observe that \(s_4=0\) while \(s_1,s_2,s_3>0\). On the right-hand side we show how the \(w^i\) are combined to construct the noise \({\mathcal N} \).

Let us consider a family of independent random variables \((U_{ij})_{i,j\in {\mathbb N} }\) such that \(U_{ij}\) is uniform on \([0,1]\) for all \(i,j\in {\mathbb N} \). We also assume that \((U_{ij})_{i,j\in {\mathbb N} }\) is independent of the PPP \((s_i, u_i,a_i,w^i)\). Then, for each atom \((s_i, u_i,a_i,w^i)\) of the above PPP, we define the following point process \({\mathcal N} ^i:=(r_j^i,x_j^i,y_j^i)_{j\in J^i}\):
  1. (1)

    \((r_j^i)_{j\in J^i}\) is the family of jump times of \(r\mapsto w^i_r\);

     
  2. (2)

    for each \(r_j^i\) we set

     
$$\begin{aligned} x_j^i := w^i_{r_j^i}-w^i_{r_j^i-}, \qquad y_j^i := w^i_{r_j^i-} \, \cdot U_{ij}. \end{aligned}$$
(3.4)
We note that \( {\mathcal N} ^i\) is not expected to be a Poisson point process. For each \(k \in {\mathbb N} \) we set
$$\begin{aligned}&L^k_0 := F(a_k) \text { and } L^k_t:=\sum _{a_i< a_k , u_i\le V_{s_i}} w^i_{t-}, \qquad t> 0, \nonumber \\&L^\infty _t := \sup _k L^k_t , \qquad t\ge 0. \end{aligned}$$
(3.5)
We consider a PPP \({\mathcal N} ^\circ =(r^\circ _j,x^\circ _j,y^\circ _j)_j\) with intensity measure \(\nu \) given by (2.4) and independent of \(( (s_i, u_i, a_i, w^i)_i, (U_{ij})_{i,j\in {\mathbb N} }, (V_t)_{t\ge 0})\). We set for any non-negative measurable \(f=f(r,x,y)\)
$$\begin{aligned}&\int \limits f \, d{\mathcal N} := \sum _{k} {\small 1}\!\!1_{{(u_k \le V_{s_k})}} \int \limits f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy) \nonumber \\&\quad + \int \limits f(r,x,y+L^\infty _{r}) \, {\mathcal N} ^\circ (dr, dx,dy). \end{aligned}$$
(3.6)
The filtration we are going to work with is
$$\begin{aligned}&{\mathcal F} _t:=\sigma \big ( (s_i, u_i, a_i, w^i_r, U_{ij}, V_r), (r^\circ _j,x^\circ _j,y^\circ _j) :\\&\quad r\le t, \ s_i\le t, \ r^\circ _j\le t, \ i,j\in {\mathbb N} \big ), \qquad t\ge 0. \end{aligned}$$
We are going to prove the following

Proposition 3.1

\({\mathcal N} \) is a PPP with intensity \(\nu (dr , dx , dy) = dr\otimes c_\alpha \, x^{-1-\alpha }\, dx\otimes dy\).

Proof

For \(f=f(r,x,y)\ge 0\) we now set
$$\begin{aligned} I(t):= \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}} \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy). \end{aligned}$$
Since \(w_t^i=0\) if \(s_i\ge t\) and \(V\) is predictable, writing
$$\begin{aligned} L^k_t:=\sum _{s_i=0} {\small 1}\!\!1_{{(a_i<a_k)}} \, w^i_{t-} + \sum _{s_i>0 , u_i\le V_{s_i}} {\small 1}\!\!1_{{(a_i<a_k )}}\, w^i_{t-}, \end{aligned}$$
we obtain that \((L^k_{\cdot })_k\) is predictable. Hence, \(I(t)\) is \({\mathcal F} _t\)-measurable and for \(0\le t<T\)
$$\begin{aligned}&{\mathbb E} \left( \left. I(T)-I(t) \,\right| \, {\mathcal F} _t \right) = {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \\&\quad = {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}} {\small 1}\!\!1_{{(s_k < t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \\&\qquad + {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k \ge t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \end{aligned}$$
We will need the following two facts:
  1. (1)

    Conditionally on \(w^k_t\) and \(s_k < t\) the process \(w^k_{\cdot +t}\) has law \(P_{w^k_t}\) (this follows for instance from (2.7)).

     
  2. (2)
    Let \((w_t, t\ge 0)\) be a CSBP started from \(w_0\) with law \({\mathbb P} _{w_0}\). Let \({\mathcal M} =(r_i,x_i, y_i)\) be a point process which is defined from \(w\) and a sequence of i.i.d. uniform variables on \([0,1]\) as \({\mathcal N} ^k\) is constructed from \(w^k\) and \((U_{ij})_{i,j\in {\mathbb N} }\). Then for any positive function \(f=f(r,x,y)\)
    $$\begin{aligned}&{\mathbb E} \left[ \int \limits _{[0,T] \times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y) {\mathcal M} (dr,dx,dy) \right] \\&\quad ={\mathbb E} _{w_0} \left[ \int \limits _{[0,T] \times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y) \mathbf {1}_{y\le w_{r-}} \nu (dr,dx,dy) \right] . \end{aligned}$$
     
Let us start with the case \(s_k < t\). Using the above facts we see that
$$\begin{aligned}&{\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k < t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \\&= {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k < t)}} {\mathbb E} \left[ \left. \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy)\,\right| w^k_t, L^k_{.} \right] \, \right| \, {\mathcal F} _t \right) \\&= {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k < t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{[L^k_{r},L^k_{r}+w^k_{r-} [}}(y) \, f(r,x,y) \, dr \, \frac{c_\alpha }{x^{1+\alpha }}\,dx\, dy\,\right| \, {\mathcal F} _t \right) \end{aligned}$$
Let us now consider the case \(s_k \ge t\).
$$\begin{aligned}&{\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k \ge t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) \, {\mathcal N} ^k(dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \\&= \lim _{\epsilon \rightarrow 0} {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k \ge t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) {\small 1}\!\!1_{{(s_k+\epsilon <r)}} \, {\mathcal N} ^k(dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \\&= \lim _{\epsilon \rightarrow 0} {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k \ge t)}} {\mathbb E} \left[ \left. \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y+L^k_{r}) {\small 1}\!\!1_{{(s_k+\epsilon <r)}} \, {\mathcal N} ^k(dr, dx,dy) \right| w^k_{s_k+\epsilon } ,L^k_{.} \right] \,\right| \, {\mathcal F} _t \right) \\&= \lim _{\epsilon \rightarrow 0} {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k \ge t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{[L^k_{r},L^k_{r}+w^k_{r-} [}}(y) \, f(r,x,y) {\small 1}\!\!1_{{(s_k+\epsilon <r)}}\, dr \, \frac{c_\alpha }{x^{1+\alpha }}\,dx\, dy\,\right| \, {\mathcal F} _t \right) \\&= {\mathbb E} \left( \left. \sum _k {\small 1}\!\!1_{{(u_k \le V_{s_k})}}{\small 1}\!\!1_{{(s_k \ge t)}} \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{[L^k_{r},L^k_{r}+w^k_{r-} [}}(y) \, f(r,x,y) \, dr \, \frac{c_\alpha }{x^{1+\alpha }}\,dx\, dy\,\right| \, {\mathcal F} _t \right) \end{aligned}$$
where we need to introduce the indicator that \(r >s_k+\epsilon \) in order to deal with a sum of CSBPs started from a positive initial mass, and thus be in a position to apply the above facts.
We conclude that
$$\begin{aligned}&{\mathbb E} \left( \left. I(T)-I(t) \,\right| \, {\mathcal F} _t \right) = {\mathbb E} \left( \left. \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{]0,\sup _k L^k_{r} [}}(y) \, f(r,x,y) \, dr \, \frac{c_\alpha }{x^{1+\alpha }}\,dx\, dy\,\right| \, {\mathcal F} _t \right) , \end{aligned}$$
Therefore, by the definition (3.6) of \({\mathcal N} \),
$$\begin{aligned}&{\mathbb E} \left( \left. \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y) \, {\mathcal N} (dr, dx,dy) \,\right| \, {\mathcal F} _t \right) \\&\quad = {\mathbb E} \left( \left. \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} \left( {\small 1}\!\!1_{{]0,L^\infty _{r} [}}(y)+{\small 1}\!\!1_{{]L^\infty _{r},\infty [}}(y) \right) \, f(r,x,y) \, dr \, \frac{c_\alpha }{x^{1+\alpha }}\,dx\, dy\,\right| \, {\mathcal F} _t \right) \\&\quad = \int \limits _{]t,T]\times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y) \, dr \, \frac{c_\alpha }{x^{1+\alpha }}\,dx\, dy. \end{aligned}$$
By [23, Theorem II.6.2], a point process with deterministic compensator is necessarily a Poisson point process, and therefore the proof is complete. \(\square \)

Proposition 3.1 tells us how to construct a Poisson noise \({\mathcal N} \) from the \((s_i,u_i,a_i,w^i)\). Let us now show that \(Z\) solves (2.6) with this particular noise.

Proposition 3.2

Let \(Z\) satisfy (3.3). Then for all \(v\ge 0\), \((Z(v),{\mathcal N} )\) satisfies \({\mathbb P} \)-a.s.
$$\begin{aligned} Z_t(v) = F(v) + \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{(y< Z_{r-}(v))}} \, x \, \tilde{\mathcal N} (dr,dx,dy) +I(v)\int \limits _0^{t} V_{s} \, ds,\qquad t\ge 0. \end{aligned}$$

Proof

Using an idea introduced by Dawson and Li [11], we set for \(n\in {\mathbb N} ^*\)
$$\begin{aligned} Z_t^{n}(v) := \sum _{a_i \le v, u_i \le V_{s_i}} w_{t}^i \, {\small 1}\!\!1_{{(s_i+\frac{1}{n}\le t)}}. \end{aligned}$$
(3.7)
Note that \({\mathbb Q} (\{w_{1/n}>0\})<+\infty \) for all \(n\ge 1\), so that \(Z^n_t\) is \({\mathbb P} \)-a.s. given by a finite sum of terms. Moreover, by the properties of PPPs, \(((s_i,u_i,a_i,w^i): w_{1/n}^i>0)\) is a PPP with intensity \((\delta _0(ds) \otimes \delta _0(du) \otimes F(da) + ds \otimes du\otimes I(da) )\otimes {\small 1}\!\!1_{{(w_{1/n}>0)}} \, {\mathbb Q} (dw)\). Finally, \(Z^n_t\uparrow Z_t\) as \(n\uparrow +\infty \) for all \(t\ge 0\). Now we can write
$$\begin{aligned} Z_t^n(v) = M^n_t(v)+J^n_t(v), \end{aligned}$$
with
$$\begin{aligned} M^n_t(v)&:= \sum _{a_i \le v , u_i \le V_{s_i}} (w_{t}^i-w_{s_i+\frac{1}{n}}^i) \,{\small 1}\!\!1_{{(s_i+\frac{1}{n}\le t)}}, \nonumber \\ J^n_t(v)&:= \sum _{a_i \le v , u_i \le V_{s_i}} w_{s_i+\frac{1}{n}}^i\, {\small 1}\!\!1_{{(s_i+\frac{1}{n}\le t)}}. \end{aligned}$$
(3.8)
Let us concentrate on \(M^n\) first. We can write, for \(s_i+\frac{1}{n}\le t\),
$$\begin{aligned} w_{t}^i-w_{s_i+\frac{1}{n}}^i = \int \limits _{[s_i+\frac{1}{n} ,t]\times {\mathbb R}^+\times {\mathbb R}^+} {\small 1}\!\!1_{{(y< w^i_{r-})}} \, x\, \tilde{\mathcal N} ^i(dr,dx,dy) \end{aligned}$$
where \({\mathcal N} ^i\) is defined in (3.4) and \(\tilde{\mathcal N} ^i(dr,dx,dy)\) is the compensated version of \({\mathcal N} ^i\):
$$\begin{aligned} \tilde{\mathcal N} ^i(dr,dx,dy) := {\mathcal N} ^i(dr,dx,dy) - {\small 1}\!\!1_{{(y< w^i_{r-})}} \, \nu (dr,dx,dy), \end{aligned}$$
with \(\nu \) defined in (2.4). We set
$$\begin{aligned} A_{i,n} := \left\{ (y,r) : \ L^i_{r}\le y < L^i_{r}+w^i_{r-}\, {\small 1}\!\!1_{{(s_i+\frac{1}{n}\le r)}} \right\} , \qquad B^{v}_n:= \bigcup _{a_i \le v\ , \ u_i \le V_{s_i}} A_{i,n}. \end{aligned}$$
Since \({\mathbb Q} (\{w_{1/n}>0\})<+\infty \), only finitely many \(\{A_{i,n}\}_i\) such that \(u_i\le V_{s_i}\) are non-empty \({\mathbb P} \)-a.s. and, moreover, the \(\{A_{i,n}\}_i\) are disjoint. Then by (3.6)
$$\begin{aligned}&\int \limits _{]0,t]\times {\mathbb R}^+\times {\mathbb R}^+} {\small 1}\!\!1_{{A_{i,n}}}(y,r) \, x\, \tilde{\mathcal N} (dr,dx,dy)\\&= {\small 1}\!\!1_{{(s_i+\frac{1}{n}\le t)}}\int \limits _{[s_i+\frac{1}{n},t]\times {\mathbb R}^+\times {\mathbb R}^+} {\small 1}\!\!1_{{(y< w^i_{r-})}} \, x\, \tilde{\mathcal N} ^i(dr,dx,dy) \\&= \left( w^i_t-w^i_{s_i+\frac{1}{n}} \right) \, {\small 1}\!\!1_{{(s_i+\frac{1}{n}\le t )}} \end{aligned}$$
so that
$$\begin{aligned} \int \limits _{]0,t]\times {\mathbb R}^+\times {\mathbb R}^+} {\small 1}\!\!1_{{B^v_n}}(y,r) \, x\, \tilde{\mathcal N} (dr,dx,dy)&= \sum _{a_i \le v\ , \ u_i \le V_{s_i}} \left( w^i_t-w^i_{s_i+\frac{1}{n}} \right) \, {\small 1}\!\!1_{{(s_i+\frac{1}{n}\le t)}} \nonumber \\&= M^n_t(v). \end{aligned}$$
To proceed, we first need the following two technical lemmas.

Lemma 3.3

For a \(({\mathcal F} _t)\)-predictable bounded process \(f_t:{\mathbb R}_+\mapsto {\mathbb R}\) we set
$$\begin{aligned} M_t := \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} f_r(y)\, x \, \tilde{\mathcal N} (dr,dx,dy), \qquad t\ge 0. \end{aligned}$$
Then we have
$$\begin{aligned} {\mathbb E} \left( \sup _{t\in [0,T]} |M_t|\right) \le&\ C\left( \sqrt{\int \limits _{[0,T]\times {\mathbb R}_+} {\mathbb E} (f^2_r(y)) \,dr\, dy}+ \int \limits _{[0,T]\times {\mathbb R}_+} {\mathbb E} (|f_r(y)|) \,dr\, dy\right) . \end{aligned}$$

Proof

Recall that \(\nu _\alpha (dx)=c_\alpha x^{-1-\alpha }\,dx\). We set
$$\begin{aligned} J_{1,t}&:= \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} f_r(y)\, {\small 1}\!\!1_{{(x\le 1)}} \, x \, \tilde{\mathcal N} (dr,dx,dy), \qquad t\ge 0,\\ J_{2,t}&:= \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} f_r(y)\, {\small 1}\!\!1_{{(x> 1)}} \, x \, \tilde{\mathcal N} (dr,dx,dy), \qquad t\ge 0. \end{aligned}$$
Then, by Doob’s inequality,
$$\begin{aligned} \left( {\mathbb E} \left( \sup _{t\in [0,T]} |J_{1,t}|\right) \right) ^2&\le {\mathbb E} \left( \sup \limits _{t\in [0,T]} |J_{1,t}|^2\right) \\&\le 4\int \limits _{]0,1]}c_\alpha \, x^{1-\alpha }\,dx\int \limits _{[0,T]\times {\mathbb R}_+} {\mathbb E} (f^2_r(y)) \,dr\, dy \end{aligned}$$
while
$$\begin{aligned} {\mathbb E} \left( \sup _{t\in [0,T]} |J_{2,t}|\right) \le&\ 2 \int \limits _{]1,\infty [} c_\alpha \, x^{-\alpha }\,dx\int \limits _{[0,T]\times {\mathbb R}_+} {\mathbb E} (|f_r(y)|) \,dr\, dy. \end{aligned}$$
\(\square \)

Lemma 3.4

  1. (1)

    \(\lim _{n\rightarrow \infty }\int \limits _{{\mathcal E} } (z_{\frac{1}{n}})^2 \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\le 1)}} \, {\mathbb Q} (dz)=0.\)

     
  2. (2)

    \(\lim _{n\rightarrow \infty }\int \limits _{{\mathcal E} } z_{\frac{1}{n}}\, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\ge 1)}} \, {\mathbb Q} (dz)=0.\)

     

Proof

First recall from (2.9) that \(\int \limits _{{\mathcal E} } z_{\frac{1}{n}}\, {\mathbb Q} (dz)=1\) for all \(n\). The proof of (1) is based on the estimate \(\frac{1}{e} x \le 1-e^{-x}\) for \(x\in [0,1]\), which follows by comparing the derivatives of both sides on \([0,1]\). Of course, the inequality also implies that
$$\begin{aligned} x^2 \mathbf 1_{(x\le 1)} \le e x(1-e^{-x}),\quad x\ge 0. \end{aligned}$$
We apply this estimate to the excursion measure:
$$\begin{aligned}&\int \limits _{{\mathcal E} } (z_{\frac{1}{n}})^2 \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\le 1)}} \, {\mathbb Q} (dz) \le e \int \limits _{\mathcal E} z_{\frac{1}{n} } (1-e^{-z_{\frac{1}{n}}})\, {\mathbb Q} (dz)\nonumber \\&=e\left( \int \limits _{\mathcal E} z_{\frac{1}{n} }\, {\mathbb Q} (dz)-\int \limits _{\mathcal E} z_{\frac{1}{n}} e^{-z_{\frac{1}{n}}}\, {\mathbb Q} (dz)\right) . \end{aligned}$$
(3.9)
Next, by (2.8),
$$\begin{aligned} \int \limits _{\mathcal E}z_{\frac{1}{n}} e^{-z_{\frac{1}{n}}}\, {\mathbb Q} (dz)&=\frac{d}{d \lambda } u_{1/n}(\lambda )\Big |_{\lambda =1}=\frac{1}{\left( 1+(\alpha -1) \frac{1}{n}\right) ^{\alpha /(\alpha -1)}}\mathop {\rightarrow }\limits ^{n\rightarrow \infty }1, \end{aligned}$$
so that (3.9) combined with \(\int \limits _{{\mathcal E} } z_{\frac{1}{n}}\, {\mathbb Q} (dz)=1\) proves (1). For (2) we use that \(x \mathbf 1_{(x>1)}\le \frac{e}{(e-1)} x(1-e^{-x})\) to get
$$\begin{aligned} \int \limits _{\mathcal {E}} z_{\frac{1}{n}} \mathbf 1_{(z_{\frac{1}{n}}> 1)}\, {\mathbb Q} (dz) \le \frac{e}{e-1}\int \limits _{\mathcal E} z_{\frac{1}{n}}(1-e^{-z_{\frac{1}{n}}}) \, {\mathbb Q} (dz) \end{aligned}$$
which goes to zero as argued above. \(\square \)

Lemma 3.5

For all \(v\ge 0\) and \(T\ge 0\) we have
$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb E} \left( \sup _{t\in [0,T]} |Z_t(v)-Z_t^n(v)|\right) =0, \end{aligned}$$
\((Z_t(v))_{t\ge 0}\) is \({\mathbb P} \)-a.s. càdlàg and \({\mathbb P} \)-a.s.
$$\begin{aligned} Z_t(v) = F(v) + \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{(y< Z_{r-}(v))}} \, x \, \tilde{\mathcal N} (dr,dx,dy) +I(v)\int \limits _0^{t} V_{s} \, ds. \end{aligned}$$

Proof

We have obtained above the representation
$$\begin{aligned} Z_t^n(v) = \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{B^v_n}}(y,r) \, x \, \tilde{\mathcal N} (dr,dx,dy) + J^n_t(v). \end{aligned}$$
(3.10)
First, let us note that
$$\begin{aligned} B^v_n \subset B^v:=\bigcup _{a_i\le v\ , \ u_i \le V_{s_i}} A_i, \qquad A_i:= \left\{ (y,r) : \ L^i_{r}\le y < L^i_{r}+w^i_{r-} \right\} , \end{aligned}$$
and moreover
$$\begin{aligned} B^v\setminus B^v_n = \bigcup _{a_i\le v\ , \ u_i \le V_{s_i}}\left\{ (y,r) : \ L^i_{r}\le y < L^i_{r}+w^i_{r-} \, {\small 1}\!\!1_{{(s_i+\frac{1}{n}> r)}} \right\} \end{aligned}$$
and the latter union is disjoint. If we set
$$\begin{aligned} M_t(v) := \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{B^v}}(y,r) \, x \, \tilde{\mathcal N} (dr,dx,dy), \end{aligned}$$
then
$$\begin{aligned}&M_t(v)-M_t^n(v) = \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{B^v\setminus B^v_n}}(y,r) \, x \, \tilde{\mathcal N} (dr,dx,dy) \end{aligned}$$
and by Lemma 3.3
$$\begin{aligned}&\frac{1}{C}\,{\mathbb E} \left( \sup _{t\in [0,T]} |M_t-M^n_t|\right) \le \sqrt{\int \limits _0^T {\mathbb E} ({\small 1}\!\!1_{{B^v\setminus B^v_n}}(y,r)) \, dr\, dy}+ \int \limits _0^T {\mathbb E} ({\small 1}\!\!1_{{B^v\setminus B^v_n}}(y,r)) \, dr\, dy\nonumber \\&\quad = \sqrt{{\mathbb E} \left( \sum _{a_i\le v , u_i \le V_{s_i}} \int \limits _{s_i\wedge T}^{(s_i+\frac{1}{n})\wedge T} w^i_r \, dr \right) }+ {\mathbb E} \left( \sum _{a_i\le v , u_i \le V_{s_i}} \int \limits _{s_i\wedge T}^{(s_i+\frac{1}{n})\wedge T} w^i_r \, dr \right) . \end{aligned}$$
(3.11)
Then we get
$$\begin{aligned}&{\mathbb E} \left( \sum _{a_i\le v , u_i \le V_{s_i}} \int \limits _{s_i\wedge T}^{(s_i+\frac{1}{n})\wedge T} w^i_r \, dr \right) \\&= {\mathbb E} \left( \sum _{a_i\le v} {\small 1}\!\!1_{{(s_i=0)}}\int \limits _{s_i\wedge T}^{(s_i+\frac{1}{n})\wedge T} w^i_r \, dr \right) +{\mathbb E} \left( \sum _{a_i\le v , u_i \le V_{s_i}} {\small 1}\!\!1_{{(s_i>0)}}\int \limits _{s_i\wedge T}^{(s_i+\frac{1}{n})\wedge T} w^i_r \, dr \right) \\&= \int \limits _0^{T\wedge \frac{1}{n}} F(v) \int \limits _{{\mathcal E} } w_r {\mathbb Q} (dw) dr+\int \limits _0^T {\mathbb E} \left( V_s I(v)\right) \int \limits _s^{(s+\frac{1}{n}) \wedge T} \int \limits _{{\mathcal E} } w_{r-s} {\mathbb Q} (dw) dr ds\\&= F(v)\left( T\wedge \frac{1}{n}\right) +\int \limits _0^T {\mathbb E} \left( V_sI(v)\right) \left( T\wedge \left( s+\frac{1}{n}\right) -s\right) ds, \end{aligned}$$
where the last equality follows by (2.9). By our assumptions on \(V\) the right-hand side in the above display converges to \(0\) as \(n\rightarrow \infty \). Hence (3.11) also converges to \(0\) as \(n\rightarrow \infty \). Let us now deal with \((J^n_t)_{t\ge 0}\). Note that we can write
$$\begin{aligned} J^n_{t+\frac{1}{n}}(v) = \sum _{a_i \le v , u_i \le V_{s_i}} w^i_{s_i+\frac{1}{n}} \, {\small 1}\!\!1_{{(s_i\le t)}}= A_t^n+\sum _{a_i \le v} w^i_{\frac{1}{n}} \, {\small 1}\!\!1_{{(s_i=0)}}+I(v)\int \limits _0^{t} V_{s} \, ds, \end{aligned}$$
where
$$\begin{aligned} A^n_t := \sum _{0<s_i\le t} w^i_{s_i+\frac{1}{n}} \, {\small 1}\!\!1_{{(a_i \le v, u_i \le V_{s_i})}} - I(v)\int \limits _0^{t} V_{s} \, ds, \end{aligned}$$
and \((A^n_t)_{t\ge 0}\) is a martingale such that \(A^n_0=0\). We have by an analog of Lemma 3.3 and its proof
$$\begin{aligned} {\mathbb E} \left( \sup _{t\in [0,T]} |A^n_t|\right) \le 2\sqrt{K_V \int \limits _{{\mathcal E} } (z_{\frac{1}{n}})^2 \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\le 1)}} \, {\mathbb Q} (dz) } + 2K_V\int \limits _{{\mathcal E} } z_{\frac{1}{n}} \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}> 1)}} \, {\mathbb Q} (dz), \end{aligned}$$
where \(K_V:=I(v)\int \limits _0^T {\mathbb E} (V_s)\, ds\). The right-hand side tends to zero as \(n\rightarrow \infty \) by Lemma 3.4. Analogously
$$\begin{aligned}&{\mathbb E} \left( \left| \sum _{a_i \le v} w^i_{1/n} \, {\small 1}\!\!1_{{(s_i=0)}}-F(v)\right| \right) \\&\le 2\sqrt{F(1)\int \limits _{{\mathcal E} } (z_{\frac{1}{n}})^2 \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\le 1)}} \, {\mathbb Q} (dz) } + 2F(1)\int \limits _{{\mathcal E} } z_{\frac{1}{n}} \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}> 1)}} \, {\mathbb Q} (dz), \end{aligned}$$
which again tends to \(0\) as \(n\rightarrow \infty \) by Lemma 3.4. Therefore
$$\begin{aligned} {\mathbb E} \big [ \sup _{t\in [0,T]} |Z_t(v)-Z^n_t(v)| \big ] \rightarrow 0, \end{aligned}$$
and, passing to a subsequence, we see that a.s.
$$\begin{aligned} \sup _{t\in [0,T]} |Z_t(v)-Z^{n_k}_t(v)| \rightarrow 0 \end{aligned}$$
(3.12)
(observe that in fact we don’t need to take a subsequence since \(Z^n_t\) is monotone non-decreasing in \(n\)).
In particular, a.s. \((Z_t(v), t\ge 0)\) is càdlàg and we obtain
$$\begin{aligned} Z_t(v) = F(v) + \int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{B^v}}(y,r) \, x \, \tilde{\mathcal N} (dr,dx,dy) +I(v)\int \limits _0^{t} V_{s} \, ds. \end{aligned}$$
It remains to prove that a.s. \(B^v = \{(y,r): y< Z_{r-}(v)\}\). By definition a.s.
$$\begin{aligned} Z_{r-}(v) = \sum _{a_i\le v, u_i \le V_{s_i}} w^i_{r-}, \qquad r\ge 0. \end{aligned}$$
If \(a_i\le v\) and \(u_i \le V_{s_i}\), then \(L^i_{r}+w^i_{r-}\le Z_{r-}(v)\), so that \(B^v \subset \{(y,r): y< Z_{r-}(v)\}\). On the other hand, if \(y< Z_{r-}(v)\), then there is some \(j\) such that
$$\begin{aligned} \sum _{a_i<a_j , u_i \le V_{s_i} } w^i_{r-} = L^j_{r} \le y< L^j_{r}+w^j_{r-}. \end{aligned}$$
Therefore we have obtained the desired result. \(\square \)

The proof of Proposition 3.2 is complete.

3.2 Proof of Theorem 2.3

With a localisation argument we can suppose that \(g\) is globally Lipschitz. Let us first show uniqueness of solutions to (3.1). Let \(v=1\). If \((Z^i_t, t\ge 0)\) for \(i=1,2\) is a càdlàg process satisfying (3.1) with \(v=1\), then taking the difference we obtain
$$\begin{aligned} {\mathbb E} (|Z^1_t-Z^2_t|)&\le I(1)\int \limits _0^t ds \int \limits _{\mathcal E} z_{t-s} \, {\mathbb Q} (dz) \, {\mathbb E} (|g(Z^1_s)-g(Z^2_s)|) \\&= I(1)\int \limits _0^t ds \, {\mathbb E} (|g(Z^1_s)-g(Z^2_s)|), \end{aligned}$$
where the second equality follows by (2.9). By the Lipschitz continuity of \(g\) and Gronwall’s lemma we obtain \(Z^1=Z^2\) a.s., i.e. uniqueness of solutions to (3.1).
The next step is to use an iterative Picard scheme in order to construct a solution of (3.1) (and thus of (2.6)). Let \(v:=1\), and let us set \(Z^0_t:=0\) and for all \(n\ge 0\)
$$\begin{aligned} Z^{n+1}_t:=\sum _{s_i=0} w_{t}^i \, {\small 1}\!\!1_{{(u_i \le 1)}} + \sum _{s_i>0} w_{t}^i \, {\small 1}\!\!1_{{(u_i \le g(Z_{s_i-}^n))}}, \qquad t\ge 0. \end{aligned}$$
By induction and the monotonicity of \(g\), \(Z^{n+1}_t\ge Z^{n}_t\), and therefore the limit \(Z_t:=\lim _nZ^{n}_t\) exists a.s.
To show that \(Z\) is actually a solution of (3.1) we first prove that it is càdlàg (by showing that the convergence holds in a norm that makes the space of càdlàg processes on \([0,T]\) complete) and then that (3.1) holds almost surely for each fixed \(t\ge 0\). Let us first show that \((Z^n)_n\) is a Cauchy sequence for the norm \(\Vert Z\Vert = {\mathbb E} (\sup _{t\in [0,T]} \left| Z_t \right| )\). To this end we set
$$\begin{aligned} Z^{n,k}_t:=Z^{n+k+1}_t-Z^{n+1}_t = \sum _{s_i>0} w_{t}^i \, {\small 1}\!\!1_{{( g(Z_{s_i-}^{n})<u_i \le g(Z_{s_i-}^{n+k}))}}. \end{aligned}$$
By an analog of Proposition 3.2 we can construct a PPP \({\mathcal N} ^{n,k}\) with the intensity measure \({\small 1}\!\!1_{{(r>0)}} \, dr\, \otimes \, c_\alpha \, {\small 1}\!\!1_{{(x>0)}}\, x^{-1-\alpha }\,dx \, \otimes \, {\small 1}\!\!1_{{(y>0)}}\, dy\) such that for all \(t\ge 0\)
$$\begin{aligned} Z^{n,k}_t = \int \limits _0^t {\small 1}\!\!1_{{(y< Z^{n,k}_{r-})}} \, x\, \tilde{\mathcal N} ^{n,k}(dr,dx,dy)+I(1)\int \limits _0^t\left[ g(Z_s^{n+k})-g(Z_{s}^{n})\right] \, ds. \end{aligned}$$
Then, by the Lipschitz continuity of \(g\) with Lipschitz constant \(L\) and by Lemma 3.3,
$$\begin{aligned} {\mathbb E} \left( \sup _{t\in [0,T]} \left| Z^{n,k}_t \right| \right) \le \sqrt{\int \limits _0^T {\mathbb E} \left( Z^{n,k}_s\right) \, ds}+ \int \limits _0^T {\mathbb E} \left( Z^{n,k}_s\right) \, ds + I(1)L \int \limits _0^T {\mathbb E} \left( Z^{n-1,k}_s\right) \, ds. \end{aligned}$$
We now show that the right-hand side of the latter formula vanishes as \(n\rightarrow +\infty \), uniformly in \(k\). Indeed
$$\begin{aligned} {\mathbb E} (Z^{0}_t) = 0, \qquad 0\le {\mathbb E} (Z^{n+1}_t) = F(1) + \int \limits _0^t {\mathbb E} (g(Z^n_s)) \, ds \le C+L\int \limits _0^t {\mathbb E} (Z^n_s) \, ds. \end{aligned}$$
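To see how the recursion closes, note that if \({\mathbb E} (Z^{n}_t) \le Ce^{tL}\) for all \(t\), then
$$\begin{aligned} {\mathbb E} (Z^{n+1}_t) \le C+L\int \limits _0^t Ce^{sL} \, ds = C + C\big (e^{tL}-1\big ) = Ce^{tL}. \end{aligned}$$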
Hence by induction \({\mathbb E} (Z^{n+1}_t) \le Ce^{tL}\) for every \(n\), and by monotone convergence we obtain that \({\mathbb E} (Z^{n+1}_t)\uparrow {\mathbb E} (Z_t)\le Ce^{tL}\). By dominated convergence it follows that
$$\begin{aligned} \int \limits _0^T {\mathbb E} (Z^{n}_s) \, ds \rightarrow \int \limits _0^T {\mathbb E} (Z_s) \, ds, \end{aligned}$$
i.e. the sequence \(\int \limits _0^T {\mathbb E} (Z^{n}_s) \, ds\) is Cauchy; we conclude that \(Z^n \rightarrow Z\) in the sense of the above norm and therefore \(Z\) is almost surely càdlàg. The above argument also shows that
$$\begin{aligned} Z_t=\sum _{s_i=0} w_{t}^i \, {\small 1}\!\!1_{{(u_i \le 1)}} + \sum _{s_i>0} w_{t}^i \, {\small 1}\!\!1_{{(u_i \le g(Z_{s_i-}))}}, \end{aligned}$$
holds almost surely for each fixed \(t\), and therefore, by right-continuity, for all \(t \ge 0\) simultaneously, i.e. \(Z\) is a solution of (3.1) for \(v=1\). Setting \(V_s:=g(Z_{s-}(1))\) and applying Proposition 3.2, we obtain (3.1) and the proof of Theorem 2.3 is complete.

3.3 Proof of Theorem 2.2

Let us start with the existence of a weak solution to (2.6): by Theorem 2.3 we can build a process \((Z_t(v), t\ge 0, v\in [0,1])\) and a Poisson point process \(\mathcal {N}(dr,dx,dy)\) such that (3.1) and (2.6) hold. Now, we set
$$\begin{aligned} \mathbf X _t := \sum _{s_i=0} w_{t}^i \, \delta _{u_i} + \sum _{s_i>0} w_{t}^i \, {\small 1}\!\!1_{{(g(Z_{s_i-}(1))>0)}} \, \delta _{u_i /g(Z_{s_i-}(1))}, \end{aligned}$$
where \(\delta _a\) denotes the Dirac mass at \(a\); by construction it is clear that \(X_t(v):=\mathbf X _t([0,v])\), for all \(v\in [0,1]\), is a solution to (2.6). It remains to prove that \((\mathbf X _t)_{t\ge 0}\) is càdlàg in the space of finite measures on \([0,1]\). By Lemma 3.5, for all \(v\in [0,1]\), \((X_t(v))_{t\ge 0}\) is càdlàg; by countable additivity, a.s. \((X_t(v))_{t\ge 0}\) is càdlàg for all \(v\in {\mathbb Q} \cap [0,1]\); then, by the compactness of \([0,1]\), it is easy to see that \((\mathbf X _t)_{t\ge 0}\) is càdlàg: for instance, a.s. any limit point of \(\mathbf X _{t_n}\), for \(t_n\ge t\) and \(t_n\rightarrow t\), is equal on each interval \(]a,b]\), \(a,b\in {\mathbb Q} \cap [0,1]\), to \(X_t(b)-X_t(a)=\mathbf X _t(]a,b])\). Therefore, we have proved that \((\mathbf X _t)_{t\ge 0}\) is a solution to (2.6) in the sense of Definition 2.1.

It remains to prove pathwise uniqueness. Let \((\mathbf X ^i_t)_{t\ge 0}\), \(i=1,2\), be two solutions to (2.6) driven by the same Poisson noise \({\mathcal N} \) and let us set \(X^i_t(v):=\mathbf X ^i_t([0,v])\), \(v\in [0,1]\). Let us first consider the case \(v=1\): then \((X^i_t(1), t\ge 0)\), \(i=1,2\), solves a particular case of the equation considered by Dawson and Li [12, (2.1)]; therefore, by [12, Theorem 2.5], \({\mathbb P} (X^1_t(1)=X^2_t(1), \, \forall \, t\ge 0)=1\).

Let us now consider \(0\le v<1\); in this case the equation satisfied by \((X^i_t(v), t\ge 0)\) depends on \((X^i_t(1), t\ge 0)\) and therefore the uniqueness result by Dawson and Li does not apply directly. Instead, we consider the difference \(D_t:=X^1_t(v)-X^2_t(v)\) so that the drift terms cancel since \(X^1(1)=X^2(1)\). Hence, \((D_t, t\ge 0)\) can be treated as if \(g\) were identically equal to 0. The same proof as in [12] shows that \({\mathbb P} (X^1_t(v)=X^2_t(v), \, \forall \, t\ge 0)=1\). Finally, since a.s. the two finite measures \(\mathbf X ^1_t\) and \(\mathbf X ^2_t\) are equal on each interval \(]a,b]\), \(a,b\in {\mathbb Q} \cap [0,1]\), they coincide. Therefore, pathwise uniqueness holds for (2.6).

Finally, in order to obtain existence of a strong solution, we apply the classical Yamada-Watanabe argument, for instance in the general form proved by Kurtz [25, Theorem 3.14].

4 Proof of Theorem 2.7

We consider the immigration rate function \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\), \(x\ge 0\). Now \(g\) is not Lipschitz-continuous, so that Theorem 2.3 does not apply directly. However, by considering \(g^n(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta (x\vee n^{-1})^{2-\alpha }\), we obtain a monotone non-decreasing and Lipschitz continuous function for which Theorem 2.3 yields existence and uniqueness of a solution \((X^n_t(v), t\ge 0, v\ge 0)\) to (2.6). We now define \( T^0:=0\), \(T^n := \inf \{t>0: \ X^n_t(1) = n^{-1}\}\) and
$$\begin{aligned} X_t(v) := \sum _{n\ge 1} X^n_t(v) \, {\small 1}\!\!1_{{(T^{n-1}\le t< T^n)}}. \end{aligned}$$
Since \((X_t(1))_{t\ge 0}\) has no downward jumps, it follows that \(T_0:=\sup _n T^n\) is equal to \(\inf \{s>0: X_s(1)=0\}\), and moreover \(X_t(1)=0\) for all \(t\ge T_0\). By pathwise uniqueness, if \(n\ge m\) then \(X_t^n(v)=X_t^m(v)\) on \(\{t\le T^m\}\), and therefore \((X_t(v), t\ge 0, v\ge 0)\) is a solution to (2.6) for \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\) with the desired properties. Pathwise uniqueness follows from the same localisation argument.
To prove that the right-hand side of (2.15) is well-defined, i.e. the denominator is always strictly positive, we are going to apply Lamperti's representation for self-similar Markov processes. A positive self-similar Markov process of index \(w\) is a strong Markov family \(({\mathbb P} ^x)_{x>0}\) with coordinate process denoted by \((U_t)_{t\ge 0}\) in the Skorohod space of càdlàg functions with values in \([0,+\infty [\), satisfying
$$\begin{aligned} \text {the law of } (cU_{c^{-1/w} t})_{t\ge 0}\text { under }{\mathbb P} ^x \text { is given by }{\mathbb P} ^{cx} \end{aligned}$$
(4.1)
for all \(c>0\). Lamperti has shown in [29] that this property is equivalent to the existence of a Lévy process \(\xi \) such that, under \({\mathbb P} ^x\), the process \(( U_{t\wedge T_0})_{t\ge 0}\) has the same law as \(\big ( x \exp \big (\xi _{A^{-1}(tx^{-1/w})} \big )\big )_{t\ge 0}\), where
$$\begin{aligned} A^{-1}(t):=\inf \{s\ge 0: A_s> t\}\qquad \text {and}\qquad A(t):=\int \limits _0^t \exp \left( \frac{1}{w}\, \xi _s\right) \, ds. \end{aligned}$$
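To make the time change concrete, here is a minimal numerical sketch of Lamperti's construction. It assumes, purely for illustration, that \(\xi \) is a Brownian motion with drift (not the Lévy process arising from (4.2) below); the index \(w\), the starting point and all numerical parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: xi is a Brownian motion with drift -0.1,
# w is the self-similarity index, x0 the starting point.
w, x0 = 0.5, 1.0
dt, n = 1e-4, 200_000
xi = np.concatenate([[0.0], np.cumsum(-0.1 * dt + np.sqrt(dt) * rng.standard_normal(n))])

# A(t) = int_0^t exp(xi_s / w) ds, approximated by a left Riemann sum
A = np.concatenate([[0.0], np.cumsum(np.exp(xi[:-1] / w) * dt)])

def U(t):
    """Lamperti transform: U_t = x0 * exp(xi_{A^{-1}(t * x0^{-1/w})})."""
    target = t * x0 ** (-1.0 / w)
    if target >= A[-1]:                  # beyond the simulated horizon
        return float("nan")
    k = np.searchsorted(A, target)       # index approximating A^{-1}(target)
    return x0 * np.exp(xi[k])

print([round(U(t), 4) for t in (0.0, 0.5, 1.0)])
```

Since \(A\) is continuous and strictly increasing, its inverse can be found by a simple binary search (np.searchsorted).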
We now use Lamperti’s representation to find a surprisingly simple argument for the well-posedness of (2.15).

Lemma 4.1

The right-hand side of (2.15) is well-defined for all \(v\in [0,1]\) and \(t\ge 0\).

Proof

In Lemma 1 of [1] it was shown that, if \(L\) is a spectrally positive \(\alpha \)-stable Lévy process as in (2.3), solutions to the SDE
$$\begin{aligned} X_t=X_0 + \int \limits _0^t X_{s-} ^{1/\alpha } \,dL_s+\alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^t X_s^{2-\alpha }\,ds \end{aligned}$$
(4.2)
trapped at zero induce a positive self-similar Markov process of index \(1/(\alpha -1)\). The corresponding Lévy process \(\xi \) has been calculated explicitly in [1, Lemma 2.2], but for the present proof we only need that \(\xi \) has infinite lifetime, together with a remarkable cancellation effect between the time-changes. Since, by Lemma 1 of Fournier [21], the unique solution to the SDE (4.2) for \(X_0=1\) coincides in law with the unique solution to
$$\begin{aligned} X_t = 1 +\int \limits _{]0,t]\times {\mathbb R}_+\times {\mathbb R}_+} {\small 1}\!\!1_{{(y<{X_{s-}})}} \, x\, \tilde{\mathcal N} (ds, dx , dy) + \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^t X_{s}^{2-\alpha }\, ds, \end{aligned}$$
we see that the total-mass process \((X_t(1))_{t\ge 0}\) and \(\left( \exp \left( \xi _{A^{-1}(t)} \right) \right) _{t\ge 0}\) are equal in law up to first hitting \(0\). Applying the Lamperti transformation for \(t<T_0\) yields
$$\begin{aligned} \bar{S}(t)&:=\int \limits _0^t X_s(1)^{1-\alpha }\,ds\\&=\int \limits _0^t \exp ((1-\alpha )\xi _{A^{-1}(s)})\,ds\\&=\int \limits _0^{A^{-1}(t)}\exp ((1-\alpha )\xi _s)\exp ((\alpha -1)\xi _s)\,ds\\&=A^{-1}(t) \end{aligned}$$
where the third equality follows from the substitution \(s=A(u)\), \(ds=\exp ((\alpha -1)\xi _u)\,du\) (recall that \(w=1/(\alpha -1)\)); hence \(\bar{S}\) and \(A\) are mutually inverse for \(t<T_0\). Plugging this identity into the Lamperti transformation yields
$$\begin{aligned} 0= X_{T_0}(1)=\lim _{t\uparrow T_0} X_t(1)=\lim _{t\uparrow T_0}\exp (\xi _{A^{-1}(t)})=\lim _{t\uparrow T_0}\exp (\xi _{\bar{S}(t)}). \end{aligned}$$
(4.3)
For the second equality we used the left-continuity of \(X(1)\) at \(T_0\), which follows from Sect. 3 of [29] because the Lévy process \(\xi \) does not jump to \(-\infty \). Using that \(\xi _t>-\infty \) for any \(t\in [0,\infty )\), from (4.3) we see that \(\bar{S}\) explodes at \(T_0\), that is \(\bar{S}(T_0)=\infty \). Since \(S\) and \(\bar{S}\) only differ by the factor \(\alpha (\alpha -1)\Gamma (\alpha )\), it also holds that \(S(T_0)=\infty \), so that \(X_{S^{-1}(t)}(1)>0\) for all \(t\ge 0\). \(\square \)

We can now show how to construct, on a pathwise level, the Beta-Fleming–Viot process with mutations from the measure-valued branching process.

Proof of Theorem 2.7

Suppose \(\mathcal {N}\) is the PPP with compensator measure \(\nu \) that drives the strong solution of (2.6), with atoms \((r_i,x_i,y_i)_{i\in I}\in (0,\infty )\times (0,\infty )\times (0,\infty )\). Then we define a new point process on \((0,\infty )\times (0,\infty )\times (0,\infty )\) by
$$\begin{aligned} \mathcal {M} (ds,d z,d u):=\sum _{i\in I} \delta _{\big \{S(r_i),\frac{ x_i}{X_{r_i-}(1)+x_i \mathbf 1 _{(y_i\le X_{r_i-}(1))}},\frac{y_i}{X_{r_i-}(1)}\big \}}(ds,d z,d u). \end{aligned}$$
If we can show that the restriction \(\mathcal M_|\) of \(\mathcal M\) to \((0,\infty )\times (0,1)\times (0,1)\) is a PPP with intensity measure \(\mathcal M_|'(ds,dz,du)=ds \otimes C_\alpha z^{-2}z^{1-\alpha }(1-z)^{\alpha -1}dz\otimes du\) and furthermore that
$$\begin{aligned} \bigg (R_t(v):=\frac{X_{S^{-1}(t)}(v)}{X_{S^{-1}(t)}(1)}\bigg )_{t\ge 0, v\in [0,1]} \end{aligned}$$
is a solution to (2.14) with respect to \(\mathcal M_|\), then the claim follows from the pathwise uniqueness of (2.14).
Step 1:
We have
$$\begin{aligned}&R_t(v) =\frac{X_{S^{-1}(t)}(v)}{X_{S^{-1}(t)}(1)} \\&=\frac{X_0(v)}{X_0(1)}+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{X_{r-}(v)+ x\mathbf 1_{(y\le X_{r-}(v))}}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}}-\frac{X_{r-}(v)}{X_{r-}(1)}\Bigg ] (\mathcal {N}-\nu )(dr,dx,dy) \\&\quad +\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{X_{r-}(v)+ x\mathbf 1_{(y\le X_{r-}(v))}}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}}-\frac{X_{r-}(v)}{X_{r-}(1)}-\frac{ x\mathbf 1_{(y\le X_{r-}(v))} }{X_{r-}(1)}\\&\quad \quad +\frac{ x\mathbf 1_{(y\le X_{r-}(1))} X_{r-}(v)}{X_{r-}(1)^2}\Bigg ] \nu (dr,dx,dy)\\&\quad + \alpha (\alpha -1)\Gamma (\alpha )\theta v\int \limits _0^{S^{-1}(t)}\frac{1}{X_{r}(1)}X_r(1)^{2-\alpha } \,dr- \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\frac{X_r(v)}{X_{r}(1)^2} X_r(1)^{2-\alpha }\,dr\\&=v+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{X_{r-}(v)+ x\mathbf 1_{(y\le X_{r-}(v))}}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}}-\frac{X_{r-}(v)}{X_{r-}(1)}\Bigg ] \mathcal {N}(dr,dx,dy)\\&\quad + \alpha (\alpha -1)\Gamma (\alpha )\theta v\int \limits _0^{S^{-1}(t)}X_r(1)^{1-\alpha } \,dr- \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\frac{X_r(v)}{X_{r}(1)} X_r(1)^{1-\alpha }\,dr. \end{aligned}$$
To verify the third equality, first note that, by Lemma II.2.18 of [24], the integral against \((\mathcal {N}-\nu )\) can be split into integrals against \(\mathcal {N}\) and against \(\nu \); the \(\nu \)-parts then cancel against the explicit compensator integral, since integrating out the \(y\)-variable yields
$$\begin{aligned}&\quad \int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [-\frac{ x\mathbf 1_{(y\le X_{r-}(v))} }{X_{r-}(1)}+\frac{ x\mathbf 1_{(y\le X_{r-}(1))} X_{r-}(v)}{X_{r-}(1)^2}\Bigg ] c_\alpha x^{-1-\alpha } dr\,dx\,dy =0. \end{aligned}$$
To replace the jumps governed by the PPP \(\mathcal {N}\) by jumps governed by \(\mathcal M\), note that by the definition of \(\mathcal M\) we find, for measurable non-negative test-functions \(h\) for which the first integral is defined, the almost sure transfer identity
$$\begin{aligned}&\int \limits _{0}^{S^{-1}(t)}\int \limits _{0}^{\infty }\int \limits _{0}^{\infty } h\left( S(r),\frac{x}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}},\frac{y}{X_{r-}(1)}\right) \mathcal {N}(dr,dx,dy)\nonumber \\&\quad =\int \limits _{0}^{t}\int \limits _{0}^{1}\int \limits _{0}^{\infty } h(s, z, u)\,\mathcal {M}(ds,d z,d u) \end{aligned}$$
(4.4)
or in an equivalent but more suitable form
$$\begin{aligned}&\quad \int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty h\left( r,\frac{x}{X_{S^{-1}(r-)}(1)+ x\mathbf 1_{(y\le X_{S^{-1}(r-)}(1))}},\frac{y}{X_{S^{-1}(r-)}(1)}\right) \mathcal {N}(dr,dx,dy)\nonumber \\&=\int \limits _0^{t}\int \limits _0^1\int \limits _0^\infty h\big (S^{-1}(s), z, u\big )\,\mathcal {M}(d s,d z,d u). \end{aligned}$$
(4.5)
Since the integrals are not compensated, we have in fact defined \(\mathcal {M}\) precisely so that both integrals produce exactly the same jumps. Let us now rewrite the equation found for \(R\) in such a way that (4.5) can be applied:
$$\begin{aligned} R_t(v)&=v+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{x\mathbf 1_{(y\le X_{r-}(v))}X_{r-}(1)-X_{r-}(v) x\mathbf 1_{(y\le X_{r-}(1))}}{(X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))})X_{r-}(1)}\Bigg ]\\&\quad \times \mathcal {N}(dr,dx,dy) + \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\\&\quad \times \Big [vX_r(1)^{1-\alpha } -\frac{X_r(v)}{X_{r}(1)} X_r(1)^{1-\alpha }\Big ]\,dr\\&=v+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \frac{x}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}} \\&\quad \quad \times \left[ \mathbf 1_{(y\le X_{r-}(v))}-\frac{X_{r-}(v)}{X_{r-}(1)}\mathbf 1_{(y\le X_{r-}(1))}\right] \mathcal {N}(dr,dx,dy)\\&\quad + \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\Big [vX_r(1)^{1-\alpha } -\frac{X_r(v)}{X_{r}(1)} X_r(1)^{1-\alpha }\Big ]\,dr. \end{aligned}$$
The stochastic integral driven by \(\mathcal {N}\) can now be replaced by a stochastic integral driven by \(\mathcal {M}\) via (4.5):
$$\begin{aligned} R_t(v)&= v+\int \limits _0^{t}\int \limits _0^1\int \limits _0^\infty z\bigg [\mathbf 1_{( uX_{S^{-1}( s)-}(1)\le {X_{S^{-1}( s)-}(v)})} - R_{S^{-1}( s)-}( v)\mathbf 1_{( uX_{S^{-1}( s)-}(1)\le {X_{S^{-1}( s)-}(1)})} \bigg ]\mathcal {M}(d s,d z,d u)\\&\quad + \theta \int \limits _0^{t}\big [v-R_s(v)\big ]\,ds\\&=v+\int \limits _0^{t}\int \limits _0^1\int \limits _0^\infty z\bigg [\mathbf 1_{(u\le R_{s-}(v))}- R_{ s-}( v)\mathbf 1_{( u\le 1)} \bigg ]\mathcal {M}(d s,d z,d u) \\&\quad + \theta \int \limits _0^{t}\big [v-R_s(v)\big ]\,ds. \end{aligned}$$
By monotonicity in \(v\), \(R_t(v)\le 1\), so that the \(du\)-integral in fact only runs up to \(1\) and the second indicator can be dropped:
$$\begin{aligned} R_t(v)&= v+\int \limits _0^{t}\int \limits _0^1\int \limits _0^1 z\bigg [\mathbf 1_{(u\le R_{s-}(v))}- R_{ s-}( v) \bigg ]\mathcal {M_|}(d s,d z,d u)\\&+ \theta \int \limits _0^{t}\big [v-R_s(v)\big ]\,ds. \end{aligned}$$
This is precisely the equation we wanted to derive.
Step 2:

The proof is complete if we can show that the restriction \(\mathcal {M}_|\) of \(\mathcal M\) to \((0,\infty )\times (0,1)\times (0,1)\) is a PPP with intensity \(\mathcal M_|'(ds,dz,du)=ds \otimes C_\alpha z^{-1-\alpha }(1-z)^{\alpha -1}dz\otimes du\). To this end, we choose a non-negative measurable predictable function \(W:\Omega \times (0,\infty )\times (0,1)\times (0,1)\rightarrow {\mathbb R}\), bounded in the second and third variables and compactly supported in the first, plug in the definition of \(\mathcal M_|\) and use the compensator measure \(\nu \) of \(\mathcal {N}\) to obtain via (4.4)

$$\begin{aligned}&{\mathbb E} \left( \int \limits _0^t\int \limits _0^1\int \limits _0^1 W( s, z, u) \mathcal M_|(d s,d z,d u)\right) \\&={\mathbb E} \left( \int \limits _0^t\int \limits _0^1\int \limits _0^\infty \mathbf 1_{(u\le 1)} W( s, z, u) \mathcal M(d s,d z,d u)\right) \\&={\mathbb E} \bigg (\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \mathbf 1_{\big (\frac{y}{X_{r-}(1)}\le 1\big )}\\&\quad \quad \times W\bigg (S(r),\frac{x}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}},\frac{y}{X_{r-}(1)}\bigg ) \mathcal {N}(dr,dx,dy)\bigg ) \end{aligned}$$
which, by predictable projection and change of variables, equals
$$\begin{aligned}&\quad {\mathbb E} \bigg (\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \mathbf 1_{\big (\frac{y}{X_{r}(1)}\le 1\big )} \\&\quad \quad \times W\bigg (S(r),\frac{x}{X_{r}(1)+ x\mathbf 1_{(\frac{y}{X_{r}(1)}\le 1)}},\frac{y}{X_{r}(1)}\bigg )c_\alpha x^{-1-\alpha }\,dr\, dx\, dy\bigg ). \end{aligned}$$
Now we substitute the three variables \(r, x, y\) (in this order), using \(C_\alpha =\frac{1}{\alpha (\alpha -1)\Gamma (\alpha )}c_\alpha \) for the substitution of \(r\) and the identity
$$\begin{aligned} \int \limits _0^\infty g\left( \frac{x}{a+x}\right) x^{-1-\alpha }\,dx=a^{-\alpha }\int \limits _0^1 g(z) z^{-1-\alpha }(1-z)^{\alpha -1}\,dz \end{aligned}$$
for the substitution of \(x\) to obtain
$$\begin{aligned}&\quad {\mathbb E} \left( \int \limits _0^t\int \limits _0^1\int \limits _0^1W( s, z, u) \mathcal M_|(d s,d z,d u)\right) \\&={\mathbb E} \left( \int \limits _0^{t}\int \limits _0^1\int \limits _0^1 W(s,z,u) C_\alpha z^{-1-\alpha }(1-z)^{\alpha -1}ds\, dz\, du\right) . \end{aligned}$$
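The change-of-variables identity used above can be checked directly: substituting \(z=x/(a+x)\), i.e. \(x=az/(1-z)\) and \(dx=a(1-z)^{-2}\,dz\), one finds
$$\begin{aligned} x^{-1-\alpha }\,dx=\Big (\frac{az}{1-z}\Big )^{-1-\alpha }\frac{a}{(1-z)^{2}}\,dz=a^{-\alpha }\,z^{-1-\alpha }(1-z)^{\alpha -1}\,dz, \end{aligned}$$
which gives the stated identity after integrating against \(g\).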
It now follows from Theorem II.4.8 of [24] and the definitions of \(c_\alpha , C_\alpha \) that \(\mathcal {M}_|\) is a PPP with intensity \(ds \otimes C_\alpha z^{-2}z^{1-\alpha }(1-z)^{\alpha -1}dz\otimes du\). \(\square \)

5 Proof of Theorem 1.2

Let us briefly outline the strategy for the proof: In order to show that the measure-valued process \(\mathbf Y \), \({\mathbb P} \)-a.s., does not possess times \(t\) for which \(\mathbf Y _t\) has finitely many atoms, by Theorem 2.7 it suffices to show that \({\mathbb P} \)-a.s. the same is true for the measure-valued branching process \(\mathbf X \). In order to achieve this, it suffices to deduce the same property for the Pitman-Yor type representation up to extinction, i.e. we need to show that
$$\begin{aligned} {\mathbb P} \big ( \#\{ v\in \, ]0,1]: Z_t(v)-Z_t(v-)>0\}=\infty , \ \forall \, t\in \,]0,T_0[ \big ) =1. \end{aligned}$$
(5.1)
The advantage of working with \(Z\) instead of \(\mathbf Y \) is a comparison property that is not available for \(\mathbf Y \). More precisely, we are going to prove that, with probability 1, the number of immigrated types alive is infinite at all times, thereby showing that the result in Theorem 1.2 is indeed independent of the starting configuration \(\mathbf Y _0\).
We start the proof with a technical result on the covering of a half line by the shadows of a Poisson point process defined on some probability space \((\Omega , \mathcal {G}, \mathcal {G}_{t}, P)\). Suppose \((s_i,h_i)_{i\in I}\) are the points of a Poisson point process \(\Pi \) on \((0,\infty )\times (0,\infty )\) with intensity \(dt\otimes \Pi '(dh)\). For a point \((s_i,h_i)\) we define its shadow on the half line \({\mathbb R}^+\) as the interval \((s_i,s_i+h_i)\); this is precisely the segment shaded by the line segment connecting \((s_i,0)\) and \((s_i,h_i)\) when light shines at a 45 degree angle from the upper left. Shepp proved that the half line \({\mathbb R}^+\) is almost surely fully covered by the shadows induced by the points \((s_i,h_i)_{i\in I}\) if and only if
$$\begin{aligned} \int \limits _0^1 \exp \left( \int \limits _t^1 (h-t) \,\Pi ' (dh)\right) dt=\infty . \end{aligned}$$
(5.2)
The reader is referred to the last remark of [36]. For our purposes we need the following variant:

Lemma 5.1

Suppose that \(\Pi \) is a PPP with intensity \(dt\,\otimes \,\Pi '(dh)\) and that Shepp's condition (5.2) holds. Then
$$\begin{aligned} {\mathbb P} \big ( \#\big \{ s_i \le t : (s_i,h_{i})\in \Pi \text { and } s_i+h_{i}> t\big \}=\infty , \ \forall \, t>0\big )=1, \end{aligned}$$
i.e. almost surely every point of \({\mathbb R}^+\) is covered by the shadows of infinitely many line segments.

Proof

The proof is an iterated use of Shepp's result for the sequence of restricted Poisson point processes \(\Pi _k\) obtained by removing from \(\Pi \) all the atoms \((s_i,h_i)\) with \(h_i>\frac{1}{k}\), i.e. restricting the intensity measure to \([0,\frac{1}{k}]\) in the \(h\)-variable. Since Shepp's criterion (5.2) only involves the intensity measure around zero, the shadows of all the point processes \(\Pi _k\) cover the half line. Consequently, if there is some \(t>0\) such that \(t\) is only covered by the shadows of finitely many points \((s_i,h_i)\in \Pi \), then \(t\) is not covered by the shadows generated by \(\Pi _{k'}\) for some \(k'\) large enough. But this contradicts Shepp's result applied to \(\Pi _{k'}\). \(\square \)
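The covering mechanism of Lemma 5.1 is easy to visualise by simulation. The following sketch samples a PPP with intensity \(dt\otimes \theta h^{-2}\,dh\) (the intensity appearing in Sect. 6 below), truncated at a level \(h_{\min }>0\) since the total mass near \(h=0\) is infinite, and counts the shadows covering a given time; all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# PPP on (0, T) x (h_min, oo) with intensity dt x (theta / h^2) dh;
# the truncation h_min > 0 removes the (infinitely many) very short shadows.
T, theta, h_min = 10.0, 1.5, 1e-3
N = rng.poisson(T * theta / h_min)      # total mass of the truncated intensity
s = rng.uniform(0.0, T, size=N)         # birth times s_i
h = h_min / rng.uniform(size=N)         # lengths h_i, with density h_min / h^2

def shadows_covering(t):
    """Number of shadows (s_i, s_i + h_i) containing the time t."""
    return int(np.sum((s <= t) & (s + h > t)))

print([shadows_covering(t) for t in (1.0, 5.0, 9.0)])
```

Lowering \(h_{\min }\) increases the counts without bound, in line with the lemma (for this particular intensity Shepp's condition holds when \(\theta \ge 1\); see Sect. 6).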

We now apply Shepp's result to the Pitman-Yor type representation in order to prove that (5.1) holds for any \(\theta >0\). Let us set, for all \(\epsilon >0\),
$$\begin{aligned} T_\epsilon := \inf \{t>0: Z_t(1)\le \epsilon \}. \end{aligned}$$
Then it is clearly enough to prove that for all \(\epsilon >0\)
$$\begin{aligned} {\mathbb P} \big ( \#\{ v\in \, ]0,1]: Z_t(v)-Z_t(v-)>0\}=\infty , \ \forall \, t\in \,]0,T_\epsilon [ \big ) =1. \end{aligned}$$
In order to connect the covering lemma with the question of exceptional times, we use the comparison property of the Pitman-Yor representation to reduce the problem to the process \(Z^\epsilon \) explicitly defined by
$$\begin{aligned} Z^\epsilon _t(v) = \sum _{s_i>0} w^i_{t}\, {\small 1}\!\!1_{{(u_i \le v\,\alpha (\alpha -1)\Gamma (\alpha )\theta \epsilon ^{2-\alpha })}} ,\qquad v \in [0,1],\ t\ge 0. \end{aligned}$$
(5.3)
Setting
$$\begin{aligned}&N_t:=\#\big \{ v\in \, ]0,1]: Z_t(v)-Z_t(v-)>0\big \}, \\&N_t^\epsilon :=\#\big \{ v\in \, ]0,1]: Z_t^\epsilon (v)-Z_t^\epsilon (v-)>0\big \}, \end{aligned}$$
it is obvious by the definition of \(Z\) and \(Z^\epsilon \) that
$$\begin{aligned} {\mathbb P} ( N_t \ge N^\epsilon _t, \ \forall \, t\in \,]0,T_\epsilon [ \, ) =1. \end{aligned}$$
(5.4)
We are now prepared to prove our main result.

Proof of Theorem 1.2

Due to (5.4) we only need to show that almost surely \(v\mapsto Z^\epsilon _t(v)\) has infinitely many jumps for all \(t>0\) and arbitrary \(\epsilon >0\). To verify the latter, Lemma 5.1 will be applied to a PPP defined in the sequel. If \(\Pi \) denotes the Poisson point process with atoms \((s_i,w^i,u_i)_{i\in I}\) from which \(Z^\epsilon _t(v)\) is defined, then we define a new Poisson point process \(\Pi _l\) via the atoms
$$\begin{aligned} (s_i,h_i,u_i)_{i\in I}:=(s_i,\ell (w^i),u_i)_{i\in I}, \end{aligned}$$
where \(\ell (w):=\inf \{t>0: w_t=0\}\) denotes the length of the trajectory \(w\). In order to apply Lemma 5.1 we need the intensity of \(\Pi _l\). Using the definition of \({\mathbb Q} \) and the Laplace transform duality (2.8) with the explicit form
$$\begin{aligned} \int \limits \left( 1-e^{-\lambda w_t}\right) {\mathbb Q} (dw)= \left( \lambda ^{1-\alpha } + (\alpha -1)t\right) ^{\frac{1}{1-\alpha }}, \end{aligned}$$
we find the distribution
$$\begin{aligned} {\mathbb Q} (\ell (w)>h)&= {\mathbb Q} (w_h>0) = \lim \limits _{\lambda \rightarrow +\infty } {\mathbb Q} (1-e^{-\lambda w_h})\nonumber \\&= \lim \limits _{\lambda \rightarrow +\infty } u_h(\lambda ) = ((\alpha -1)h)^{\frac{1}{1-\alpha }}. \end{aligned}$$
(5.5)
Differentiating in \(h\) shows that \(\Pi _l\) is a Poisson point process on \({\mathbb R}^+\times {\mathbb R}^+\times {\mathbb R}^+\) with intensity measure
$$\begin{aligned} \Pi _l'(dt,dh,du)=dt\otimes ((\alpha -1)h)^{\frac{\alpha }{(1-\alpha )}}\,dh\otimes du. \end{aligned}$$
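Indeed, since \(\frac{1}{1-\alpha }-1=\frac{\alpha }{1-\alpha }\), a direct differentiation of (5.5) gives
$$\begin{aligned} -\frac{d}{dh}\,((\alpha -1)h)^{\frac{1}{1-\alpha }}=-\frac{\alpha -1}{1-\alpha }\,((\alpha -1)h)^{\frac{\alpha }{1-\alpha }}=((\alpha -1)h)^{\frac{\alpha }{1-\alpha }}, \end{aligned}$$
which is the \(h\)-density appearing above.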
Plugging-in the new definitions leads to
$$\begin{aligned} \big (N^\epsilon _t\big )_{t\ge 0}&= \big (\#\big \{\text {non-zero summands of } Z^\epsilon _t(1)\big \}\big )_{t\ge 0} \nonumber \\&= \big (\#\big \{ s_i \le t : (s_i,w^i,u_i)\in \Pi \text { and } w^i_{t-s_i}\,{\small 1}\!\!1_{{( u_i \le \alpha (\alpha -1)\Gamma (\alpha )\theta \epsilon ^{2-\alpha })}}> 0\big \}\big )_{t\ge 0}\nonumber \\&= \big (\#\big \{ s_i \le t : (s_i,w^i,u_i)\in \Pi \text { and } \ell (w^i)>t-s_i,\ u_i \le \alpha (\alpha -1)\Gamma (\alpha )\theta \epsilon ^{2-\alpha }\big \}\big )_{t\ge 0}\nonumber \\&= \big (\#\big \{ s_i \le t : (s_i,h_i,u_i)\in \Pi _l\text { and } s_i+h_i>t,\ u_i \le \alpha (\alpha -1)\Gamma (\alpha )\theta \epsilon ^{2-\alpha }\big \}\big )_{t\ge 0}. \end{aligned}$$
(5.6)
There is one more simplification we can make. Let us define \(\Pi _{l,\epsilon }\) as a Poisson point process on \((0,\infty )\times (0,\infty )\) with intensity measure
$$\begin{aligned} \Pi _{l,\epsilon }'(dt,dh)=\alpha (\alpha -1)\Gamma (\alpha )\theta \epsilon ^{2-\alpha }\, dt\otimes \,(\alpha -1)^{\alpha /(1-\alpha )}h^{\frac{\alpha }{(1-\alpha )}}\,dh, \end{aligned}$$
(5.7)
then by the thinning property of Poisson point processes we have the equality in law
$$\begin{aligned} \{(s_i,h_i) : (s_i,h_i,u_i)\in \Pi _l\text { and } u_i \le \alpha (\alpha -1)\Gamma (\alpha )\theta \epsilon ^{2-\alpha } \} \mathop {=}\limits ^{(d)} \Pi _{l,\epsilon }. \end{aligned}$$
Then (5.6) yields
$$\begin{aligned} \big (N_t^\epsilon \big )_{t\ge 0} \mathop {=}\limits ^{(d)} \big (\#\big \{ s_i \le t : (s_i,h_i)\in \Pi _{l,\epsilon }\text { and } s_i+h_i > t\big \}\big )_{t\ge 0}. \end{aligned}$$
Now we are precisely in the setting of Shepp's covering results and the theorem follows from Lemma 5.1 provided (5.2) holds. Shepp's condition is easily checked for the intensity (5.7) of \(\Pi _{l,\epsilon }\), independently of \(\theta \) and \(\epsilon \). \(\square \)
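For the reader's convenience, here is the verification. Write \(\beta :=\alpha /(1-\alpha )\in \,]-\infty ,-2[\) for \(\alpha \in \,]1,2[\), and let \(\kappa >0\) collect all the constants in (5.7). Then, as \(t\downarrow 0\),
$$\begin{aligned} \int \limits _t^1 (h-t)\,\kappa h^{\beta }\,dh = \kappa \Big (\frac{1-t^{\beta +2}}{\beta +2}-t\,\frac{1-t^{\beta +1}}{\beta +1}\Big ) = \kappa \Big (\frac{1}{|\beta +2|}-\frac{1}{|\beta +1|}\Big )\,t^{\beta +2}+O(1), \end{aligned}$$
and since \(\beta +2<0\) and \(|\beta +2|<|\beta +1|\), the exponent in (5.2) grows like a positive power of \(1/t\); hence \(\int _0^1\exp (\cdots )\,dt=\infty \), whatever the values of \(\theta \) and \(\epsilon \).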

6 A Proof of Schmuland’s Theorem

In this section we sketch how our line of argument can be adapted to the continuous case corresponding to \(\alpha =2\). The proofs go along the same lines (reduction to a measure-valued branching process and then to an excursion representation to which the covering result can be applied) but are much simpler due to a constant immigration structure. The crucial difference, leading to the possibility of exceptional times, occurs in the final step via Shepp's covering results.

Proof of Schmuland’s Theorem 1.1

We start with the continuous analogue of Theorem 2.2. Suppose \(W\) is a white-noise on \((0,\infty )\times (0,\infty )\); then one can show via the standard Yamada-Watanabe argument that there is a unique strong solution to
$$\begin{aligned} {\left\{ \begin{array}{ll} X_t(v)=v+\sqrt{2}\int \limits _{]0,t]\times {\mathbb R}_+} {\small 1}\!\!1_{{(u\le X_s(v))}} \, W(ds,du)+\theta vt,\\ v\in [0,1], t\ge 0. \end{array}\right. } \end{aligned}$$
(6.1)
In fact, since the immigration mechanism \(g\) is constant, pathwise uniqueness holds. For every \(v\in [0,1]\), \((X_t(v))_{t\ge 0}\) satisfies
$$\begin{aligned} X_t(v)=v+\int \limits _0^t \sqrt{2X_s(v)}\, dB_s+v\theta t \end{aligned}$$
for a Brownian motion \(B\). Recalling (2.2), we see that (6.1) defines a measure-valued branching process with branching mechanism \(\psi (u)=u^2\) and constant-rate immigration.
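For illustration, here is a crude Euler–Maruyama sketch of the one-dimensional marginal \(t\mapsto X_t(v)\) of (6.1), i.e. the diffusion \(dX=\sqrt{2X}\,dB+\theta v\,dt\) started at \(v\); the parameter values are arbitrary, and the truncation at \(0\) is the usual rough fix for this scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# dX = sqrt(2 X) dB + theta * v dt, X_0 = v  (Feller branching diffusion
# with constant immigration rate theta * v); illustrative parameters.
v, theta = 0.3, 0.8
dt, n = 1e-4, 100_000
X = np.empty(n + 1)
X[0] = v
for k in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    X[k + 1] = max(X[k] + np.sqrt(2.0 * X[k]) * dB + theta * v * dt, 0.0)

print(np.round(X[::20_000], 4))
```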
The Pitman-Yor type representation corresponding to Theorem 2.3 looks as follows: in the setting of Sect. 2.2, we consider a Poisson point process \((s_i, u_i,w^i)_i\) on \({\mathbb R}_+\times {\mathbb R}_+\times {\mathcal D} \) with intensity measure \((\delta _0(ds)\otimes F(du)+ds\otimes I(du))\otimes {\mathbb Q} _s(dw)\), where the excursion measure \({\mathbb Q} \) is defined via the law of the CSBP (2.2) with branching mechanism \(\psi (\lambda )=\lambda ^2\). Then the analog of Theorem 2.3 is the following:
$$\begin{aligned} {\left\{ \begin{array}{ll} Z_t(v) = \sum _{s_i=0} w^i_{t}\, {\small 1}\!\!1_{{( u_i \le v )}} + \sum _{0<s_i \le t} w^i_{t-s_i} {\small 1}\!\!1_{{(u_i \le v \theta )}},\\ v\in [0,1], t\ge 0, \end{array}\right. } \end{aligned}$$
(6.2)
can be shown to solve (6.1); this result, for fixed \(v\), goes back to Pitman and Yor [34]. The calculation (5.5), now using that \(u_t(\lambda ) = \left( \lambda ^{-1} + t\right) ^{-1}\) is the unique non-negative solution to
$$\begin{aligned} \partial _t u_t(\lambda ) = -(u_t(\lambda ))^2, \qquad u_0(\lambda ) = \lambda , \end{aligned}$$
yields \({\mathbb Q} (\ell (w)\in dh)=\frac{1}{h^2}dh\).
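Indeed, one checks directly that
$$\begin{aligned} \partial _t\big (\lambda ^{-1}+t\big )^{-1}=-\big (\lambda ^{-1}+t\big )^{-2}=-(u_t(\lambda ))^2, \qquad {\mathbb Q} (\ell (w)>h)=\lim \limits _{\lambda \rightarrow +\infty }\big (\lambda ^{-1}+h\big )^{-1}=\frac{1}{h}, \end{aligned}$$
and differentiating \(h\mapsto 1/h\) gives the stated density.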
For the analogue of Theorem 2.7 we now define the process
$$\begin{aligned} R_t(v)=\frac{X_{S^{-1}(t)}(v)}{X_{S^{-1}(t)}(1)}, \end{aligned}$$
with \(S(t) = \int \limits _0^t X_s(1)^{-1}\,ds\). It then follows again from the self-similarity that \(R\) is well-defined, and from Itô's formula that \(R\) is a standard Fleming–Viot process on \([0,1]\). The arguments here involve a continuous SDE which has been studied in [12]:
$$\begin{aligned} {\left\{ \begin{array}{ll} Y_t(v)=v+\int \limits _0^t \int \limits _0^1 \big [{\small 1}\!\!1_{{(u\le Y_s(v))}}-Y_s(v)\big ] W(ds,du)+\theta \int \limits _0^t \big [v-Y_s(v)\big ]\,ds,\\ v\in [0,1], t\ge 0, \end{array}\right. } \end{aligned}$$
(6.3)
where \(W\) is a white-noise on \((0,\infty )\times (0,1)\). It was shown in Theorem 4.9 of [12] that the measure-valued process \(\mathbf Y \) associated with \((Y_t(v), t\ge 0, v\in [0,1])\) solves the martingale problem for the infinitely many sites model with mutations, i.e. \(\mathbf Y \) has generator (1.1) with the choice (1.2) for \(A\).
Finally, in order to prove Schmuland's Theorem 1.1 on exceptional times it suffices to prove the same result for (6.2). We proceed again via Shepp's covering arguments as we did in Sect. 5. The crucial difference is that the immigration rate is already the constant \(\theta \), so that (5.3) becomes superfluous. The role of the Poisson point process \(\Pi _{l,\epsilon }\) is played by \(\Pi _{\theta ,l}\) with intensity measure
$$\begin{aligned} \Pi _{\theta ,l}'(dt,dh)=dt\otimes \frac{\theta }{h^2}\,dh. \end{aligned}$$
Plugging into Shepp’s criterion (5.2), by Lemma 5.1 and
$$\begin{aligned} \int \limits _0^1 \exp \Big (-\theta \log (t)\Big )dt=\int \limits _0^1t^{-\theta }\,dt \end{aligned}$$
(6.4)
we find that there are no exceptional times if \(\theta \ge 1\). Conversely, let us assume \(\theta <1\). Recalling that for \(\theta =0\) the Fleming–Viot process has almost surely finitely many atoms for all \(t>0\), we see that the first term in (6.2) almost surely has finitely many non-zero summands for all \(t>0\). Hence, it suffices to show the existence of exceptional times for which the second term in (6.2) vanishes. Arguing as before, this question is reduced to Shepp's covering result applied to \(\Pi _{\theta ,l}\): (6.4) combined with (5.2) leads to the result. \(\square \)
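Spelled out, the computation behind (6.4) is
$$\begin{aligned} \int \limits _t^1 (h-t)\,\frac{\theta }{h^2}\,dh=\theta \big (-\log t+t-1\big ), \qquad \int \limits _0^1 e^{\theta (t-1)}\,t^{-\theta }\,dt=\infty \ \Longleftrightarrow \ \theta \ge 1, \end{aligned}$$
since the factor \(e^{\theta (t-1)}\) is bounded away from \(0\) and \(\infty \) on \([0,1]\); coverage, i.e. the absence of exceptional times, therefore holds exactly when \(\theta \ge 1\).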

7 Proof of Corollary 2.8

The fact that the \((\alpha ,\theta )\)-Fleming–Viot process \((\mathbf Y_t,t\ge 0)\) converges in distribution to its unique invariant distribution, and that this invariant distribution is not trivial (i.e. it charges measures with at least two atoms), is proven by Donnelly and Kurtz in [14], at the end of Sect. 5.1, and in [13], Sect. 4.1. Here we re-sketch their argument, which relies on the so-called lookdown construction of \((\mathbf Y_t,t\ge 0)\) introduced in the same papers. Let us very briefly describe how the lookdown construction works (for more details we refer to [9, 14]).

The idea is to construct a sequence of processes \((\xi _i(t), t\ge 0)\), \(i=1,2,\ldots \), which take their values in the type-space \(E\) (here \(E=[0,1]\)). We say that \(\xi _i(t)\) is the type of level \(i\) at time \(t\). The types evolve by two mechanisms:
  • lookdown events: with rate \(x^{-2}\Lambda (dx)\) a proportion \(x\) of the lineages is selected by i.i.d. Bernoulli trials. Call \(i_1, i_2,\ldots \) the selected levels at a given event at time \(t\). Then, for all \(k >1\), \(\xi _{i_k}(t) =\xi _{i_1}(t-)\), that is, the levels all adopt the type of the smallest participating level. The type \(\xi _{i_k}(t-)\) which was occupying level \(i_k\) before the event is pushed up to the next available level.

  • mutation events: on each level \(i\) there is an independent Poisson point process \((t^{(i)}_j, j\ge 1)\) of rate \(\theta \) of mutation events. At a mutation event \(t^{(i)}_j\) the type \(\xi _i(t^{(i)}_j-)\) is replaced by a new independent variable uniformly distributed on \([0,1]\), and the previous type is pushed up by one level (as well as all the types above it).

The point is then that
$$\begin{aligned} \Xi _t :=\lim _{n\rightarrow \infty } \frac{1}{n} \sum _{i=1}^n \delta _{\xi _i(t)} \end{aligned}$$
exists simultaneously for all \(t\ge 0\) almost surely and that \((\Xi _t , t\ge 0) =(\mathbf Y_t, t\ge 0)\) in distribution.
Define a process \((\pi _t, t\ge 0)\) with values in the partitions of \(\{1,2,\ldots \}\) by saying that \(i\sim j\) in \(\pi _t\) if and only if \(\xi _i(t)=\xi _j(t)\). It is well known that this is an exchangeable process. Recall from Corollary 2.5 that for each \(t\ge 0\) fixed, \(\Xi _t\) is almost surely purely atomic. Alternatively this can be seen from the lookdown construction, since at a fixed time \(t>0\) level one has been looked down upon by infinitely many levels above since the last mutation event on level one. We can thus write
$$\begin{aligned} \Xi _t = \sum a_i(t) \delta _{x_i(t)}, \end{aligned}$$
where the \(a_i\) are enumerated in decreasing order. It is also known that the sequences \((a_i(t), i\ge 1)\) of atom masses and \((x_i(t), i\ge 1)\) of atom locations are independent. The \(a_i(t)\) are the asymptotic frequencies of the blocks of \(\pi (t)\), which are thus in one-to-one correspondence with the atoms of \(\Xi _t\). Furthermore, the sequence \((x_i(t), i\ge 1)\) converges in distribution to a sequence of i.i.d. random variables with common distribution \(I\), because all the types that were present initially are replaced by immigrated types after some time. To see this, note that after the first mutation on level 1 the type \(\xi _1(0)\) is pushed up to infinity in a finite time, which is stochastically dominated by the fixation time of the type at level 1 in a Beta-Fleming–Viot process without mutation. This also proves the second point of the corollary.

For each \(n\ge 1\), let us consider \(\pi ^{(n)}(t) = \pi _{|[n]}(t)\), the restriction of \(\pi (t)\) to \(\{1,\ldots ,n\}\). Then, for all \(n\ge 1\), the process \((\pi ^{(n)}_t, t\ge 0)\) is an irreducible Markov process on a finite state-space and thus converges to its unique invariant distribution. This in turn implies that \((\pi (t), t\ge 0)\) must also converge to its invariant distribution. By Kingman's continuity theorem (see [33, Theorem 36] or [4, Theorem 1.2]) this implies that the ordered sequence of the atom masses \((a_i(t))\) converges in distribution as \(t\rightarrow \infty \). Because \((x_i(t), i\ge 1)\) also converges in distribution, this implies that \(\Xi _t\) itself converges in distribution to its invariant measure. (Alternatively, this second part of the corollary could be deduced from Theorem 2 in [28].)

Furthermore it is also clear that the invariant distribution of \((\pi ^{(n)}_t, t\ge 0)\) must charge configurations with at least two non-singleton blocks. Since \(\pi \) is an exchangeable process, so is its invariant distribution. Exchangeable partitions have only two types of blocks: singletons and blocks with positive asymptotic frequency, so this proves that the invariant distribution of \(\pi \) charges partitions with at least two blocks of positive asymptotic frequency.


Acknowledgments

The authors are very grateful for stimulating discussions with Zenghu Li and Marc Yor, and to an anonymous referee whose insightful comments greatly helped us improve this manuscript. L. Döring was supported by the Fondation Sciences Mathématiques de Paris; L. Mytnik was partly supported by the Israel Science Foundation. J. Berestycki, L. Mytnik and L. Zambotti thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for a very pleasant stay during which part of this work was produced. L. Mytnik thanks the Laboratoire de Probabilités et Modèles Aléatoires for the opportunity to visit it and carry out part of this research there.

References

  1. Berestycki, J., Döring, L., Mytnik, L., Zambotti, L.: Hitting properties and non-uniqueness for SDEs driven by stable processes. Preprint, http://arxiv.org/abs/1111.4388
  2. Berestycki, J., Berestycki, N., Limic, V.: A small-time coupling between \(\Lambda \)-coalescents and branching processes. Preprint, http://arxiv.org/abs/1101.1875
  3. Berestycki, J., Berestycki, N., Schweinsberg, J.: Beta-coalescents and continuous stable random trees. Ann. Probab. 35, 1835–1887 (2007)
  4. Berestycki, N.: Recent progress in coalescent theory. Ens. Mat. 16, 1–193 (2009)
  5. Bertoin, J., Le Gall, J.F.: Stochastic flows associated to coalescent processes. Probab. Theory Relat. F. 126(2), 261–288 (2003)
  6. Bertoin, J., Le Gall, J.F.: Stochastic flows associated to coalescent processes. II. Stochastic differential equations. Ann. Inst. Henri Poincaré Probab. Stat. 41(3), 307–333 (2005)
  7. Bertoin, J., Le Gall, J.F.: Stochastic flows associated to coalescent processes. III. Limit theorems. Ill. J. Math. 50(1–4), 147–181 (2006)
  8. Birkner, M., Blath, J.: Measure-valued diffusions, general coalescents and population genetic inference. In: Trends in Stochastic Analysis. Cambridge University Press, Cambridge (2009)
  9. Birkner, M., Blath, J., Möhle, M., Steinrücken, M., Tams, J.: A modified lookdown construction for the Xi–Fleming–Viot process with mutation and populations with recurrent bottlenecks. ALEA Lat. Am. J. Probab. Math. Stat. 6, 25–61 (2009)
  10. Birkner, M., Blath, J., Capaldo, M., Etheridge, A., Möhle, M., Schweinsberg, J., Wakolbinger, A.: Alpha-stable branching and beta-coalescents. Electron. J. Probab. 10(9), 303–325 (2005)
  11. Dawson, D.A., Li, Z.: Construction of immigration superprocesses with dependent spatial motion from one-dimensional excursions. Probab. Theory Relat. F. 127, 37–61 (2003)
  12. Dawson, D.A., Li, Z.: Stochastic equations, flows and measure-valued processes. Ann. Probab. 40(2), 813–857 (2012)
  13. Donnelly, P., Kurtz, T.: A countable representation of the Fleming–Viot measure-valued diffusion. Ann. Probab. 24, 698–742 (1996)
  14. Donnelly, P., Kurtz, T.: Particle representations for measure-valued population models. Ann. Probab. 27, 166–205 (1999)
  15. Duquesne, T., Le Gall, J.F.: Random trees, Lévy processes and spatial branching processes. Astérisque 281, 1–147 (2002)
  16. Dynkin, E.B., Kuznetsov, S.E.: \(\mathbb{N}\)-measures for branching Markov exit systems and their applications to differential equations. Probab. Theory Relat. F. 130, 135–150 (2004)
  17. Etheridge, A.M.: Some Mathematical Models from Population Genetics. Lectures from the 39th Probability Summer School held in Saint-Flour, 2009. Lecture Notes in Mathematics 2012. Springer, Heidelberg (2011)
  18. Ethier, S.N., Griffiths, R.C.: The transition function of a Fleming–Viot process. Ann. Probab. 21, 1571–1590 (1993)
  19. Ethier, S.N., Kurtz, T.G.: Fleming–Viot processes in population genetics. SIAM J. Control Optim. 31, 345–386 (1993)
  20. Fleming, W.H., Viot, M.: Some measure-valued Markov processes in population genetics. Indiana Univ. Math. J. 28, 817–843 (1979)
  21. Fournier, N.: On pathwise uniqueness for stochastic differential equations driven by stable Lévy processes. Ann. Inst. Henri Poincaré Probab. Stat. 49(1), 138–159 (2013)
  22. Fu, Z., Li, Z.: Stochastic equations of non-negative processes with jumps. Stoch. Process. Appl. 120, 306–330 (2010)
  23. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes. North-Holland Mathematical Library. North-Holland, Amsterdam (1989)
  24. Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, 2nd edn. Springer, Berlin (2003)
  25. Kurtz, T.G.: The Yamada–Watanabe–Engelbert theorem for general stochastic equations and inequalities. Electron. J. Probab. 12, 951–965 (2007)
  26. Konno, N., Shiga, T.: Stochastic differential equations for some measure-valued diffusions. Probab. Theory Relat. F. 79, 201–225 (1988)
  27. Kyprianou, A.E.: Fluctuations of Lévy Processes with Applications. Introductory Lectures. Universitext, 2nd edn. Springer, Berlin (2014)
  28. Labbé, C.: Flots stochastiques et représentation lookdown. PhD Thesis (2013). http://tel.archives-ouvertes.fr/tel-00874551
  29. Lamperti, J.: Semi-stable Markov processes. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 22, 205–225 (1972)
  30. Li, Z.: Skew convolution semigroups and related immigration processes. Theory Probab. Appl. 46(2), 274–296 (2002)
  31. Li, Z.: Measure-Valued Branching Markov Processes. Probability and Its Applications. Springer, Berlin (2010)
  32. Li, Z., Mytnik, L.: Strong solutions for stochastic differential equations with jumps. Ann. Inst. Henri Poincaré Probab. Stat. 47(4), 1055–1067 (2011)
  33. Pitman, J.: Coalescents with multiple collisions. Ann. Probab. 27, 1870–1902 (1999)
  34. Pitman, J., Yor, M.: A decomposition of Bessel bridges. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 59, 425–457 (1982)
  35. Schmuland, B.: A result on the infinitely many neutral alleles diffusion model. J. Appl. Probab. 28(2), 253–267 (1991)
  36. Shepp, L.A.: Covering the line with random intervals. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 23, 163–170 (1972)

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • J. Berestycki
    • 1
  • L. Döring
    • 1
  • L. Mytnik
    • 2
  • L. Zambotti
    • 1
  1. 1.Laboratoire de Probabilités et Modéles AléatoiresUniversité Paris 6Paris Cedex 05France
  2. 2.Faculty of Industrial Engineering and ManagementTechnion Israel Institute of TechnologyHaifaIsrael
