Abstract
If \(\mathbf Y\) is a standard Fleming–Viot process with constant mutation rate (in the infinitely many sites model) then it is well known that for each \(t>0\) the measure \(\mathbf Y_t\) is purely atomic with infinitely many atoms. However, Schmuland proved that there is a critical value for the mutation rate under which almost surely there are exceptional times at which the stationary version of \(\mathbf Y\) is a finite sum of weighted Dirac masses. In the present work we discuss the existence of such exceptional times for the generalized Fleming–Viot processes. In the case of Beta-Fleming–Viot processes with index \(\alpha \in \,]1,2[\) we show that, irrespective of the mutation rate and of \(\alpha \), the number of atoms is almost surely always infinite. The proof combines a Pitman–Yor type representation with a disintegration formula, Lamperti's transformation for self-similar processes and covering results for Poisson point processes.
Main result
The measure-valued Fleming–Viot diffusion processes were first introduced by Fleming and Viot [20] and have become a cornerstone of mathematical population genetics in the last decades. The model describes the evolution (forward in time) of the genetic composition of a large population. Each individual is characterized by a genetic type, which is a point in a type space \(E\). The Fleming–Viot process is a Markov process \((\mathbf Y _t)_{t\ge 0}\) on
for which we interpret \(\mathbf Y_t(B)\) as the proportion of the population at time \(t\) which carries a genetic type belonging to a Borel set \(B\) of types. In particular, the number of (different) types at time \(t\) is equal to the number of atoms of \(\mathbf Y _t\), with the convention that the number of types is infinite if \(\mathbf Y _t\) has an absolutely continuous part.
Fleming–Viot superprocesses can be defined through their infinitesimal generators
acting on smooth test functions, where \(\delta \phi (\mu ) /\delta \mu (v) = \lim _{\epsilon \rightarrow 0+} \epsilon ^{-1}\left( \phi (\mu +\epsilon \delta _v)- \phi (\mu )\right)\) and \(A\) is the generator of a Markov process in \(E\) which represents the effect of mutations. Here \(\delta _v\) is the Dirac measure at \(v\). It is well known that the Fleming–Viot superprocess arises as the scaling limit of a Moran-type model for the evolution of a finite discrete population of fixed size if the reproduction mechanism is such that no individual gives birth to a positive proportion of the population in a small number of generations. For a detailed description of Fleming–Viot processes and discussions of variations we refer to the overview article of Ethier and Kurtz [19] and to Etheridge's lecture notes [17].
The first summand of the generator reflects the genetic resampling mechanism whereas the second summand represents the effect of mutations. Several choices for \(A\) have appeared in the literature. In the present work we shall work in the setting of the infinitely-many-alleles model where each mutation creates a new type never seen before. Without loss of generality let the type space be \(E=[0,1]\). Then the following choice of \(A\) gives an example of an infinite-sites model with mutations:
for some \(\theta >0\). The choice of the uniform measure \(dy\) is arbitrary (we could choose the new type according to any distribution that has a density with respect to the Lebesgue measure); all that matters is that the newly created type \(y\) is different from all other types. With \(A\) as in (1.2), mutations arrive at rate \(\theta \) and create a new type picked at random from \(E\) according to the uniform measure; the corresponding process is therefore sometimes called the Fleming–Viot process with neutral mutations.
Let us briefly recall two classical facts concerning the infinite types Fleming–Viot process described above. For any initial condition \(\mathbf Y _0\):

(i)
If there is no mutation, then, for all \(t>0\) fixed, the number of types is almost surely finite.

(ii)
If the mutation parameter \(\theta \) is strictly positive, then, for all \(t>0\) fixed, the number of types is infinite almost surely.
This can be deduced e.g. from the explicit representation of the transition function given by Ethier and Griffiths in [18].
A beautiful complement to (i) and (ii) was found by Schmuland for exceptional times that are not fixed in advance:
Theorem 1.1
(Schmuland [35]) For the stationary infinitely-many-alleles model
Schmuland's proof of the dichotomy is based on analytic arguments involving the capacity of finite-dimensional subspaces of the infinite-dimensional state space. In Sect. 6 we reprove Schmuland's theorem with a simple proof via excursion theory, which yields the result for arbitrary initial conditions.
In the series of articles [5–7], Bertoin and Le Gall introduced and started the study of \(\Lambda \)-Fleming–Viot processes, a class of stochastic processes which naturally extends the class of standard Fleming–Viot processes. These processes are completely characterized by a finite measure \(\Lambda \) on \(]0,1]\) and a generator \(A\). Similarly to the standard Fleming–Viot process, these processes can be defined through their infinitesimal generator
and the sites of atoms are again called types. For \(A=0\), the generator formulation only appeared implicitly in [6] and is explained in more detail in Birkner et al. [10]; for \(A\) as in (1.2) it can be found in Birkner et al. [9]. The dynamics of a generalized Fleming–Viot process \((\mathbf Y _t)_{t\ge 0}\) are as follows: at rate \(y^{-2}\Lambda (dy)\) a point \(a\) is sampled at time \(t>0\) according to the probability measure \({\mathbf Y} _{t}(da)\) and a point mass \(y\) is added at position \(a\), while the rest of the measure is scaled by \((1-y)\) to keep the total mass at 1. The second term of (1.3) is the same mutation operator as in (1.1). For a detailed description of \(\Lambda \)-Fleming–Viot processes and discussions of variations we refer to the overview article of Birkner and Blath [8].
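As a concrete illustration of the jump dynamics just described, the following minimal Python sketch (our own illustration; the dictionary representation and the function name `reproduction_event` are not from the paper) applies a single reproduction event of size \(y\) to a purely atomic probability measure:

```python
import random

def reproduction_event(atoms, y, rng=random):
    """Apply a single Lambda-Fleming-Viot reproduction event of size y.

    `atoms` maps a type a in [0, 1] to its mass; the masses sum to 1.
    A parent type a is sampled according to the current measure, every
    mass is scaled by (1 - y), and a point mass y is added at a.
    """
    types = list(atoms)
    parent = rng.choices(types, weights=[atoms[a] for a in types])[0]
    new_atoms = {a: (1 - y) * m for a, m in atoms.items()}
    new_atoms[parent] += y
    return new_atoms

mu = {0.1: 0.5, 0.7: 0.3, 0.9: 0.2}
mu = reproduction_event(mu, y=0.25)
assert abs(sum(mu.values()) - 1.0) < 1e-12  # total mass stays at 1
```

In a full simulation such events would arrive according to the intensity \(y^{-2}\Lambda (dy)\), with mutation events at rate \(\theta \) inserting fresh uniform types; the sketch above isolates a single reproduction step only.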
In the following we are going to focus only on the choice \(\Lambda =Beta(2-\alpha ,\alpha )\), the Beta distribution with density
for \(\alpha \in \, ]1,2[\), and mutation operator \(A\) as in (1.2). The corresponding \(\Lambda \)-Fleming–Viot process \(({\mathbf Y} _t)_{t\ge 0}\) is called the Beta-Fleming–Viot process or \((\alpha ,\theta )\)-Fleming–Viot process, and several results about it have been established in recent years. The \((\alpha ,\theta )\)-Fleming–Viot processes converge weakly to the standard Fleming–Viot process as \(\alpha \) tends to \(2\). It was shown in [10] that a \(\Lambda \)-Fleming–Viot process with \(A=0\) is related to measure-valued branching processes in the spirit of Perkins' disintegration theorem precisely if \(\Lambda \) is a Beta distribution (this relation is recalled and extended in Sect. 2.3 below).
If we choose \(\alpha \in \, ]1,2[\) and \(\mathbf Y _0\) uniform on \([0,1]\), then we find the same properties (i) and (ii) for the one-dimensional marginals \(\mathbf Y _t\), unchanged with respect to the classical case (1.1), (1.2). In fact, for a general \(\Lambda \)-Fleming–Viot process, (i) is equivalent to the requirement that the associated \(\Lambda \)-coalescent comes down from infinity (see for instance [2]). Here is our main result: contrary to Schmuland's result, \((\alpha ,\theta )\)-Fleming–Viot processes with \(\alpha \in \, ]1,2[\) and \(\theta >0\) never have exceptional times:
Theorem 1.2
Let \(({\mathbf Y} _t)_{t\ge 0}\) be an \((\alpha ,\theta )\)-Fleming–Viot superprocess with mutation rate \(\theta >0\) and parameter \(\alpha \in \, ]1,2[.\) Then for any starting configuration \(\mathbf Y _0\)
for any \(\theta >0\).
One way to get a first rough understanding of why this should be true is through a heuristic based on the duality between \(\Lambda \)-Fleming–Viot processes and \(\Lambda \)-coalescents. While \(\Lambda \)-Fleming–Viot processes describe how the composition of a population evolves forward in time, \(\Lambda \)-coalescents describe how the ancestral lineages of individuals sampled in the population merge as one goes back in time. The fact that \(\Lambda \)-coalescents describe the genealogies of \(\Lambda \)-Fleming–Viot processes can be seen from Donnelly and Kurtz's so-called lookdown construction of Fleming–Viot processes [14] and was also established through a functional duality relation by Bertoin and Le Gall in [5].
The coalescent which corresponds to the classical Fleming–Viot process is the celebrated Kingman's coalescent. Kingman's coalescent comes down from infinity at speed \(2/t\), i.e. if one initially samples infinitely many individuals in the population, then the number of active lineages \(N_t\) at time \(t\) in the past satisfies \(N_t \sim 2/t\) almost surely as \(t\rightarrow 0\). It is known (see [6] or more recently [28]) that the process \((N_t, t\ge 0)\) has the same law as the process of the number of atoms of the Fleming–Viot process. For a Beta-coalescent (that is, a \(\Lambda \)-coalescent where the measure \(\Lambda \) is the distribution of a Beta\((2-\alpha ,\alpha )\) variable) with parameter \(\alpha \in (1,2)\) we have \(N_t \sim c_\alpha t^{-1/(\alpha -1)}\) almost surely as \(t\rightarrow 0\) (see [3, Theorem 4]). Therefore Kingman's coalescent comes down from infinity much quicker than Beta-coalescents. Since the speed at which a generalized Fleming–Viot process loses types roughly corresponds to the speed at which the dual coalescent comes down from infinity, it is plausible that \((\alpha ,\theta )\)-Fleming–Viot processes do not lose types fast enough, and hence that there are no exceptional times at which the number of types is finite.
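The heuristic above can be made quantitative at the level of exponents. A minimal Python check (the constants \(2\) and \(c_\alpha \) are ignored, so only orders of magnitude are compared) confirms that for \(\alpha \in (1,2)\) the Beta-coalescent keeps far more lineages than Kingman's coalescent at small times:

```python
# Exponents of the number of active lineages as t -> 0+ :
#   Kingman:          N_t ~ 2 / t                 = 2 * t**(-1)
#   Beta-coalescent:  N_t ~ c_alpha * t**(-1/(alpha-1))
# For alpha in (1, 2) the exponent 1/(alpha-1) exceeds 1, so the
# Beta-coalescent keeps many more lineages at small times, i.e. it
# comes down from infinity more slowly than Kingman's coalescent.
alpha = 1.5
exponent_beta = 1.0 / (alpha - 1.0)  # = 2.0 for alpha = 1.5
assert exponent_beta > 1.0
for t in (1e-2, 1e-4, 1e-6):
    # ignore the multiplicative constants: compare orders of magnitude only
    assert t ** (-exponent_beta) > t ** (-1.0)
```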
Auxiliary constructions
To prove Theorem 1.2 we construct two auxiliary objects: a particular measure-valued branching process and a corresponding Pitman–Yor type representation. These will be used in Sect. 5 to relate the question of exceptional times to covering results for point processes. In this section we give the definitions and state their relations to the Beta-Fleming–Viot processes with mutations. All appearing stochastic processes and random variables will be defined on a common stochastic basis \((\Omega , \mathcal {G}, \mathcal {G}_{t}, {\mathbb P} )\) that is rich enough to carry all Poisson point processes (PPPs for short) that appear in the sequel.
Measure-valued branching processes with immigration
We recall that a continuous-state branching process (CSBP for short) with \(\alpha \)-stable branching mechanism, \(\alpha \in \,]1,2]\), is a Markov family \((P_v)_{v\ge 0}\) of probability measures on càdlàg trajectories with values in \({\mathbb R}_+\), such that
where for \(\psi :{\mathbb R}_+\mapsto {\mathbb R}_+\), \(\psi (u):=u^\alpha \), we have the evolution equation
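For the stable mechanism the evolution equation can be solved in closed form. Assuming the standard convention \(\partial _t u_t(\lambda )=-\psi (u_t(\lambda ))\), \(u_0(\lambda )=\lambda \), separation of variables gives \(u_t(\lambda )=\bigl (\lambda ^{1-\alpha }+(\alpha -1)t\bigr )^{-1/(\alpha -1)}\). The following Python snippet (a numerical sanity check of our own, not from the paper) verifies this closed form against a crude Euler integration:

```python
# Check the closed form u_t(lambda) = (lam**(1-alpha) + (alpha-1)*t)**(-1/(alpha-1))
# for the evolution equation du/dt = -u**alpha, u(0) = lam, by Euler integration.
alpha, lam, T, n = 1.5, 2.0, 1.0, 100000
dt = T / n
u = lam
for _ in range(n):
    u -= dt * u ** alpha  # one explicit Euler step of du/dt = -u**alpha
closed = (lam ** (1 - alpha) + (alpha - 1) * T) ** (-1 / (alpha - 1))
assert abs(u - closed) < 1e-3
```

For \(\alpha =2\) the formula reduces to the familiar \(u_t(\lambda )=\lambda /(1+\lambda t)\) of Feller's branching diffusion.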
See e.g. [27] for a good introduction to CSBP. For \(\alpha =2\), \(\psi (u)=u^2\) is the branching mechanism for Feller’s branching diffusion, where \(P_v\) is the law of the unique solution to the SDE
driven by a Brownian motion \((B_t)_{t\ge 0}\). On the other hand, for \(\alpha \in \, ]1,2[\), \(\psi (u)=u^\alpha \) gives the so-called \(\alpha \)-stable branching processes, which can be defined as the unique strong solution of the SDE
driven by a spectrally positive \(\alpha \)-stable Lévy process \((L_t)_{t\ge 0}\), with Lévy measure given by
Note that strong existence and uniqueness for (2.3) hold until \(X\) hits zero; this follows from the fact that the function \(x\mapsto x^{1/\alpha }\) is Lipschitz continuous away from zero. Moreover \(X\), being a nonnegative martingale, stays at zero forever after hitting it. For a more extensive discussion of strong solutions for jump SDEs see [22] and [32].
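Simulating the \(\alpha \)-stable SDE (2.3) faithfully would require stable increments, but the boundary case \(\alpha =2\), Feller's branching diffusion (2.2), is easy to sketch. The following Euler–Maruyama simulation (our own illustrative sketch; the step size, the truncation at zero and the tolerance are ad hoc choices) checks the martingale property \({\mathbb E}[X_t]=X_0\) used above:

```python
import random

def feller_paths(v, t, n_steps, n_paths, seed=0):
    """Euler-Maruyama paths of dX = sqrt(2 X) dB (Feller's branching
    diffusion, the alpha = 2 case), truncated at the absorbing state 0."""
    rng = random.Random(seed)
    dt = t / n_steps
    paths = []
    for _ in range(n_paths):
        x = v
        for _ in range(n_steps):
            if x <= 0.0:
                x = 0.0
                break  # zero is absorbing for the branching process
            x = max(x + (2.0 * x) ** 0.5 * rng.gauss(0.0, dt ** 0.5), 0.0)
        paths.append(x)
    return paths

paths = feller_paths(v=1.0, t=0.5, n_steps=200, n_paths=4000)
mean = sum(paths) / len(paths)
# X is a nonnegative martingale, so the empirical mean should stay near X_0.
assert abs(mean - 1.0) < 0.15
```

For \(\alpha \in \,]1,2[\) one would replace the Gaussian increments by increments of a spectrally positive \(\alpha \)-stable Lévy process, in line with (2.3).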
The main tool that we introduce is a particular measure-valued branching process with interactive immigration (MBI for short). For a textbook treatment of this subject we refer to Li [31]. Following Dawson and Li [12], we are not going to introduce the MBIs via their infinitesimal generators but as strong solutions of a system of stochastic differential equations instead. On \((\Omega , \mathcal {G}, \mathcal {G}_{t},{\mathbb P} )\), let us consider a Poisson point process \(\mathcal {N}=(r_i,x_i,y_i)_{i\in I}\) on \((0,\infty )\times (0,\infty )\times (0,\infty )\), adapted to \(\mathcal {G}_{t}\) and with intensity measure
Throughout the paper we adopt the notation
i.e. \(\tilde{\mathcal N} \) is the compensated version of \({\mathcal N} \). It was shown in [12] that the solution to (2.3) has the same law as the unique strong solution to the SDE
with \(X_{0}=v\).
Now we are going to switch to the measure-valued setting. The real-valued process \(X\) in (2.3), (2.5) describes the evolution of the total mass of the CSBP starting at time zero at the mass \(X_0=v\). We are going to consider all initial masses \(v\in [0,1]\) simultaneously, constructing a process \((\mathbf {X}_t)_{t\ge 0}\) taking values in the space \({\mathcal M} ^{F}_{[0,1]}\) of finite measures on \([0,1]\), endowed with the narrow topology, i.e. the trace of the weak-\(*\) topology of \((C[0,1])^*\). Assume that at time \(t=0\), \(\mathbf {X}_0\) is a finite measure on \([0,1]\) with cumulative distribution function \((F(v), v\in [0,1])\), and denote
Then the measure-valued branching process \((\mathbf {X}_t)_{t\ge 0}\) can be constructed in such a way that for each \(v\), \((X_t(v))_{t\ge 0}\) solves (2.5) with \(X_0=F(v)\), and with the same driving noise for all \(v\in [0,1]\). In what follows, we deal with a version of (2.5) including an immigration term depending only on the total mass \(X_t(1)\):
where \((I(v), v\in [0,1])\) is the cumulative distribution function of a finite measure on \([0,1]\) and we assume
 (G) :

\(g : {\mathbb R}_+ \mapsto {\mathbb R}_+\) is monotone nondecreasing, continuous and locally Lipschitz continuous away from zero.
Definition 2.1
An \({\mathcal M} ^{F}_{[0,1]}\)-valued process \((\mathbf X _t)_{t\ge 0}\) on \((\Omega , \mathcal {G}, \mathcal {G}_{t},{\mathbb P} )\) is called a solution to (2.6) if

it is càdlàg \({\mathbb P} \)-a.s.,

for all \(v\in [0,1]\), setting \(X_t(v):=\mathbf X _t([0,v])\), the process \((X_t(v))_{v\in [0,1],t\ge 0}\) satisfies (2.6) \({\mathbb P} \)-a.s.
Moreover, a solution \((\mathbf X _t)_{t\ge 0}\) is strong if it is adapted to the natural filtration \({\mathcal F} _t\) generated by \(\mathcal {N}\). Finally, we say that pathwise uniqueness holds if
for any two solutions \(\mathbf X ^1\) and \(\mathbf X ^2\) on \((\Omega , \mathcal {G}, \mathcal {G}_t,{\mathbb P} )\) driven by the same Poisson point process.
Here is a wellposedness result for (2.6):
Theorem 2.2
Let \(F\) and \(I\) be as above. For any immigration mechanism \(g\) satisfying Assumption (G), there is a strong solution \((\mathbf X _t)_{t\ge 0}\) to (2.6) and pathwise uniqueness holds until \(T_0:= \inf \{t \ge 0 : \mathbf X _t([0,1]) =0\}\).
The proof of Theorem 2.2 relies on ideas from recent articles on pathwise uniqueness for jump-type SDEs such as Fu and Li [22] or Dawson and Li [12]. Our equation (2.6) is more delicate since all coordinate processes depend on the total mass \(X_t(1)\). The uniqueness statement is first deduced for the total-mass process \((X_t(1))_{t\ge 0}\) and then for the other coordinates, interpreting the total mass as a random environment. To construct a (weak) solution we use a (pathwise) Pitman–Yor type representation, as explained in the next section.
A Pitman–Yor type representation for interactive MBIs
Let us denote by \({\mathcal E} \) the set of càdlàg trajectories \(w:{\mathbb R}_+\mapsto {\mathbb R}_+\) such that \(w(0)=0\), \(w\) is positive on a bounded interval \(]0,\zeta (w)[\) and \(w\equiv 0\) on \([\zeta (w),+\infty [\). We recall the construction of the excursion measure of the \(\alpha \)-stable CSBP \((P_v)_{v\ge 0}\), also called the Kuznetsov measure, see [30, Section 4] or [31, Chapter 8]: for all \(t\ge 0\), let \(K_t(dx)\) be the unique \(\sigma \)-finite measure on \({\mathbb R}_+\) such that
where we recall that the function \((u_t(\lambda ))_{t\ge 0}\) is the unique solution to the equation
We also denote by \(Q_t(x,dy)\) the Markov transition semigroup of \((P_v)_{v\ge 0}\). Then there exists a unique Markovian \(\sigma \)-finite measure \({\mathbb Q} \) on \({\mathcal E} \) with entrance law \((K_t)_{t\ge 0}\) and transition semigroup \((Q_t)_{t\ge 0}\), i.e. such that for all \(0< t_1< \cdots <t_n\), \(n\in {\mathbb N} \),
By construction
and under \({\mathbb Q} \), for all \(s>0\), conditionally on \(\sigma (w_r, r\le s)\), \((w_{t+s})_{t\ge 0}\) has law \(P_{w_s}\). The \(\sigma \)-finite measure \({\mathbb Q} \) is called the excursion measure of the CSBP (2.3). By (2.8), it is easy to check that for any \(s>0\)
In Duquesne–Le Gall's setting [15], under the \(\sigma \)-finite measure \({\mathbb Q} \) with infinite total mass, \(w\) has the distribution of \((\ell ^a(e))_{a\ge 0}\) under \(n(de)\), where \(n(de)\) is the excursion measure of the height process \(H\) and \(\ell ^a\) is the local time at level \(a\). For the more general superprocess setting see for instance Dynkin and Kuznetsov [16].
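For the \(\alpha \)-stable mechanism \(\psi (u)=u^{\alpha }\) the mass of the excursion measure can be made explicit. Assuming the standard normalization \(\int (1-e^{-\lambda x})\,K_s(dx)=u_s(\lambda )\) together with the closed form \(u_s(\lambda )=(\lambda ^{1-\alpha }+(\alpha -1)s)^{-1/(\alpha -1)}\) (both standard facts for the stable branching mechanism; this is our reconstruction rather than a quotation from the paper), letting \(\lambda \rightarrow \infty \) gives

```latex
\mathbb{Q}(w_s>0) \;=\; \lim_{\lambda\to\infty} u_s(\lambda)
\;=\; \lim_{\lambda\to\infty}\bigl(\lambda^{1-\alpha}+(\alpha-1)s\bigr)^{-1/(\alpha-1)}
\;=\; \bigl((\alpha-1)s\bigr)^{-1/(\alpha-1)} \;<\;\infty ,
```

since \(1-\alpha <0\). In particular \({\mathbb Q} (\{w_{1/n}>0\})<+\infty \) for every \(n\ge 1\), a finiteness property used repeatedly in Sect. 3.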
We need now to extend the space of excursions as follows:
i.e. \({\mathcal D} \) is the set of càdlàg trajectories \(w:{\mathbb R}_+\mapsto {\mathbb R}_+\) such that \(w\) is equal to 0 on \([0,s(w)]\), \(w\) is positive on a bounded interval \(]s(w),s(w)+\zeta (w)[\) and \(w\equiv 0\) on \([s(w)+\zeta (w),+\infty [\). For \(s\ge 0\), we denote by \({\mathbb Q} _s(dw)\) the \(\sigma \)finite measure on \({\mathcal D} \) given by
i.e. \({\mathbb Q} _s\) is the image measure of \({\mathbb Q} \) under the map
Let us consider a Poisson point process \((s_i, u_i,a_i,w^i)_{i\in I}\) on \({\mathbb R}_+\times {\mathbb R}_+\times {\mathbb R}_+\times {\mathcal D} \) with intensity measure
where \(F\) and \(I\) are the cumulative distribution functions appearing in (2.6). An atom \((s_i, u_i,a_i,w^i)\) represents a population that has immigrated at time \(s_i\), whose size evolution is given by \(w^i\) and whose genetic type is given by \(a_i\). The coordinate \(u_i\) is used for thinning purposes, to decide whether or not this particular immigration really happened.
Theorem 2.3
Suppose \(g:{\mathbb R}_+\mapsto {\mathbb R}_+\) satisfies Assumption (G). Then, for all \(v\in [0,1]\), there is a unique càdlàg process \((Z_t(v), t\ge 0)\) on \((\Omega , \mathcal {G}, \mathcal {G}_{t}, {\mathbb P} )\) satisfying \({\mathbb P} \)-a.s.
Moreover, we can construct on \((\Omega , \mathcal {G}, \mathcal {G}_{t}, {\mathbb P} )\) a PPP \({\mathcal N} \) with intensity \(\nu \) given by (2.4) such that \(Z\) solves (2.6) with respect to \(\mathcal {N}\).
If \(I(1)=1\), then in the special case of branching mechanism \(\psi (\lambda )=\lambda ^2\) and constant immigration rate \(g\equiv \theta \), the total-mass process \(X_t=X_t(1)\) for (2.6) also solves
for which Pitman and Yor obtained the excursion representation in their seminal paper [34].
Remark 2.4
The recent monograph [31] by Zenghu Li contains a full theory of this kind of Pitman–Yor type representations for measurevalued branching processes, see in particular Chapter 10. We present a different approach below which shows directly how the different Poisson point processes in (2.6) and in (2.13) are related to each other. The most important feature of our construction is that it relates the excursion construction and the SDE construction on a pathwise level.
Observe that an immediate and interesting corollary of Theorem 2.3 is the following:
Corollary 2.5
Let \(g\) be an immigration mechanism satisfying Assumption (G) and let \((\mathbf {X}_t)_{t\ge 0}\) be a solution to (2.6). Then almost surely, \(\mathbf {X}_t\) is purely atomic for all \(t\ge 0\).
In the proof of our Theorem 1.2 we make use of the fact that the Pitman–Yor type representation is well suited for comparison arguments. If \(g\) can be bounded from above or below by a constant, then the right-hand side of (2.6) can be compared to an explicit PPP to which general theory can be applied.
From MBIs to Beta-Fleming–Viot processes with mutations
Let us first recall an important characterization, started in [6] and completed in [12], which relates Fleming–Viot processes, defined as measure-valued Markov processes by the generator (1.3), and strong solutions to stochastic equations.
Theorem 2.6
(Dawson and Li [12]) Let \(\Lambda \) be the Beta distribution with parameters \((2-\alpha , \alpha )\). Suppose \(\theta \ge 0\) and \(\mathcal M\) is a non-compensated Poisson point process on \((0,\infty )\times [0,1]\times [0,1]\) with intensity \(ds \otimes y^{-2} \Lambda (dy) \otimes du\). Then there is a unique strong solution \((Y_t(v))_{t\ge 0,v\in [0,1]}\) to
and the measure-valued process \(\mathbf Y _t([0,v]):={Y_t(v)}\) is an \((\alpha ,\theta )\)-Fleming–Viot process started at a uniformly distributed initial condition.
Existence and uniqueness of solutions for this equation were proved in Theorem 4.4 of [12], while the characterization of the generator of the measure-valued process \({\mathbb Y} \) is the content of their Theorem 4.9.
We next extend a classical relation between Fleming–Viot processes and measure-valued branching processes which is typically known as a disintegration formula. Without mutations, for the standard Fleming–Viot process this goes back to Konno and Shiga [26], and it was shown in Birkner et al. [10] that the relation extends to the generalized \(\Lambda \)-Fleming–Viot processes without immigration if and only if \(\Lambda \) is a Beta measure. Our extension relates \((\alpha ,\theta )\)-Fleming–Viot processes to (2.6) with immigration mechanism \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\) and for \(\theta =0\) gives an SDE formulation of the main result of [10].
Theorem 2.7
Let \(F(v)=I(v)=v\) and let \(g:{\mathbb R}_+\mapsto {\mathbb R}_+\) be defined by \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\) for some \(\alpha \in (1,2).\) Let \((\mathbf X_t)_{t\ge 0}\) be the unique solution to (2.6) (in the sense of Definition 2.1) such that
Define
and
Then \(\big (\mathbf Y _t\big )_{t\ge 0}\) is well-defined, i.e. \(S^{-1}(t)<T_0\) for all \(t\ge 0\), and is an \((\alpha ,\theta )\)-Fleming–Viot process, i.e. a strong solution to (2.14) with \(\Lambda =Beta(2-\alpha ,\alpha )\).
The proof of the theorem is different from that of the known result for \(\theta =0\). To prove that \(X_{S^{-1}(t)}(1)>0\) for all \(t\ge 0\), Lamperti's representation for CSBPs was crucially used in [10]. This idea breaks down in our generalized setting since the total-mass process \(X_t(1)\) is not a CSBP. Our proof uses instead the fact that for all \(\theta \ge 0\) the total-mass process is self-similar, together with an interesting cancellation effect between Lamperti's transformation for self-similar Markov processes and the time change \(S\).
In [1] we study (a generalized version of) the total-mass process \((X_t(1), t\ge 0)\) and we show that the extinction time \(T_0 = \inf \{ t\ge 0 : X_t(1)=0\}\) is finite almost surely if and only if \(\theta <\Gamma (\alpha )\). Otherwise \(T_0=\infty \) almost surely. We will see in the proof of Theorem 2.7 that in both cases
Theorem 2.7 thus gives some partial information on the behavior of \(\big (\mathbf X _t\big )_{t\ge 0}\) near the extinction time \(T_0\):
Corollary 2.8
As \(t \rightarrow \infty \) the probability-valued process \(\left( {\mathbf {X}_{S^{-1}(t)}(dv) \over X_{S^{-1}(t)}(1)}\right) _{t\ge 0}\) converges weakly to the unique invariant measure of \((\mathbf {Y}_t, t\ge 0)\).
As \(t\rightarrow T_0\), almost surely, there exists a (random) sequence of times \(t_1 <t_2 <\ldots <T_0\) tending to \(T_0\) such that the sets
are pairwise disjoint.
This corollary is a direct consequence of Theorem 2.7 and of the result, due to Donnelly and Kurtz [13, 14], that the \((\alpha , \theta )\)-Fleming–Viot process (as well as its lookdown particle system) is strongly ergodic. For the sake of self-containedness, a sketch of the proof is given in Sect. 7, which specializes and makes explicit the arguments of Donnelly and Kurtz in our case.
Proof of Theorems 2.2 and 2.3
Recall that \((s_i, u_i,a_i,w^i)_{i\in I}\) is a Poisson point process on \({\mathbb R}_+^3 \times {\mathcal D} \) with intensity measure \(\Gamma \) given as in (2.12), and that we use the notation (2.11). We are going to show that for all \(v\in [0,1]\) there exists a unique càdlàg process \((Z_t(v), t\ge 0)\) solving
Then we are going to construct a PPP \({\mathcal N} \) with intensity \(dr\otimes c_\alpha \, {\small 1}\!\!1{(x>0)} \, x^{-1-\alpha }\, dx\otimes dy\) such that, for all \(v\in [0,1]\), \(Z\) is a solution of (2.6).
The Pitman–Yor type representation with predictable random immigration
We start by replacing the immigration rate \((g(Z_{s}(1)))_{s>0}\) in the right-hand side of (3.1) with a generic \(({\mathcal F} _t)\)-predictable process \((V_s)_{s\ge 0}\), which we assume to satisfy
this will be useful when we perform a Picard iteration in the proof of existence of solutions to (2.6) and (3.1). Then we consider
We want to show that there is a noise \(\mathcal {N}\) on \((\Omega , \mathcal {G}, \mathcal {G_t}, {\mathbb P} )\) such that \(Z\) is a solution of an equation of the type (2.6).
Definition of \({\mathcal N} \)
Let us consider a family of independent random variables \((U_{ij})_{i,j\in {\mathbb N} }\) such that \(U_{ij}\) is uniform on \([0,1]\) for all \(i,j\in {\mathbb N} \). We also assume that \((U_{ij})_{i,j\in {\mathbb N} }\) is independent of the PPP \((s_i, u_i,a_i,w^i)\). Then, for all atoms \((s_i, u_i,a_i,w^i)\) in the above PPP, we define the following point process \({\mathcal N} ^i:=(r_j^i,x_j^i,y_j^i)_{j\in J^i}\):

(1)
\((r_j^i)_{j\in J^i}\) is the family of jump times of \(r\mapsto w^i_r\);

(2)
for each \(r_j^i\) we set
We note that \( {\mathcal N} ^i\) is not expected to be a Poisson point process. For each \(k \in {\mathbb N} \) we set
We consider a PPP \({\mathcal N} ^\circ =(r^\circ _j,x^\circ _j,y^\circ _j)_j\) with intensity measure \(\nu \) given by (2.4) and independent of \(( (s_i, u_i, a_i, w^i)_i, (U_{ij})_{i,j\in {\mathbb N} }, (V_t)_{t\ge 0})\). We set for any nonnegative measurable \(f=f(r,x,y)\)
The filtration we are going to work with is
We are going to prove the following
Proposition 3.1
\({\mathcal N} \) is a PPP with intensity \(\nu (dr , dx , dy) = dr\otimes c_\alpha \, x^{-1-\alpha }\, dx\otimes dy\).
Proof
For \(f=f(r,x,y)\ge 0\) we now set
Since \(w_t^i=0\) if \(s_i\ge t\) and \(V\) is predictable, we can write
and we obtain that \((L^k_{\cdot })_k\) is predictable. Hence \(I(t)\) is \({\mathcal F} _t\)-measurable and for \(0\le t<T\)
We will need the following two facts:

(1)
Conditionally on \(w^k_t\) and \(s_k < t\) the process \(w^k_{\cdot +t}\) has law \(P_{w^k_t}\) (this follows for instance from (2.7)).

(2)
Let \((w_t, t\ge 0)\) be a CSBP started from \(w_0\) with law \({\mathbb P} _{w_0}\). Let \({\mathcal M} =(r_i,x_i, y_i)\) be a point process which is defined from \(w\) and a sequence of i.i.d. uniform variables on \([0,1]\) as \({\mathcal N} ^k\) is constructed from \(w^k\) and \((U_{ij})_{i,j\in {\mathbb N} }\). Then for any positive function \(f=f(r,x,y)\)
$$\begin{aligned}&{\mathbb E} \left[ \int \limits _{[0,T] \times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y) {\mathcal M} (dr,dx,dy) \right] \\&\quad ={\mathbb E} _{w_0} \left[ \int \limits _{[0,T] \times {\mathbb R}_+\times {\mathbb R}_+} f(r,x,y) \mathbf {1}_{y\le w_{r}} \nu (dr,dx,dy) \right] . \end{aligned}$$
Let us start with the case \(s_k < t\). Using the above facts we see that
Let us now consider the case \(s_k>t.\)
where we need to introduce the indicator that \(r >s_k+\epsilon \) in order to get a sum of CSBPs started from a positive initial mass and thus be in a position to apply the above fact.
We conclude that
Therefore by the Definition (3.6) of \({\mathcal N} \)
By [23, Theorem II.6.2], a point process with deterministic compensator is necessarily a Poisson point process, and therefore the proof is complete. \(\square \)
Proposition 3.1 tells us how to construct a Poisson noise \({\mathcal N} \) from the \((s_i,u_i,a_i,w^i)\). Let us now show that \(Z\) solves (2.6) with this particular noise.
Proposition 3.2
Let \(Z\) satisfy (3.3). Then for all \(v\ge 0\), \((Z(v),{\mathcal N} )\) satisfies \({\mathbb P} \)-a.s.
Proof
Using an idea introduced by Dawson and Li [11], we set for \(n\in {\mathbb N} ^*\)
Note that \({\mathbb Q} (\{w_{1/n}>0\})<+\infty \) for all \(n\ge 1\), so that \(Z^n_t\) is \({\mathbb P} \)-a.s. given by a finite sum of terms. Moreover, by the properties of PPPs, \(((s_i,u_i,a_i,w^i): w_{1/n}^i>0)\) is a PPP with intensity \((\delta _0(ds) \otimes \delta _0(du) \otimes F(da) + ds \otimes du\otimes I(da) )\otimes {\small 1}\!\!1_{{(w_{1/n}>0)}} \, {\mathbb Q} (dw)\). Finally, \(Z^n_t\uparrow Z_t\) as \(n\uparrow +\infty \) for all \(t\ge 0\). Now we can write
with
Let us concentrate on \(M^n\) first. We can write, for \(s_i+\frac{1}{n}\le t\),
where \({\mathcal N} ^i\) is defined in (3.4) and \(\tilde{\mathcal N} ^i(dr,dx,dy)\) is the compensated version of \({\mathcal N} ^i\):
with \(\nu \) defined in (2.4). We set
Since \({\mathbb Q} (\{w_{1/n}>0\})<+\infty \), only finitely many of the \(\{A_{i,n}\}_i\) with \(u_i\le V_{s_i}\) are nonempty \({\mathbb P} \)-a.s. and, moreover, the \(\{A_{i,n}\}_i\) are disjoint. Then by (3.6)
so that
We first need the following two technical lemmas. \(\square \)
Lemma 3.3
For an \(({\mathcal F} _t)\)-predictable bounded process \(f_t:{\mathbb R}_+\mapsto {\mathbb R}\) we set
Then we have
Proof
Recall that \(\nu _\alpha (dx)=c_\alpha x^{-1-\alpha }\,dx\). We set
Then, by Doob’s inequality,
while
\(\square \)
Lemma 3.4

(1)
\(\lim _{n\rightarrow \infty }\int \limits _{{\mathcal E} } (z_{\frac{1}{n}})^2 \, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\le 1)}} \, {\mathbb Q} (dz)=0.\)

(2)
\(\lim _{n\rightarrow \infty }\int \limits _{{\mathcal E} } z_{\frac{1}{n}}\, {\small 1}\!\!1_{{(z_{\frac{1}{n}}\ge 1)}} \, {\mathbb Q} (dz)=0.\)
Proof
First recall from (2.9) that \(\int \limits _{{\mathcal E} } z_{\frac{1}{n}}\, {\mathbb Q} (dz)=1\) for all \(n\). The proof of (1) is based on the estimate \(\frac{1}{e}\, x \le 1-e^{-x}\) for \(x\in [0,1]\), which follows by comparing derivatives, since \(e^{-x}\ge e^{-1}\) on \([0,1]\) and both sides vanish at \(0\). Of course, the inequality also implies that
We apply this estimate to the excursion measure:
Next, by (2.8),
so that (3.9) combined with \(\int \limits _{{\mathcal E} } z_{\frac{1}{n}}\, {\mathbb Q} (dz)=1\) proves (1). For (2) we use that \(x \mathbf 1_{(x>1)}\le \frac{e}{e-1}\, x(1-e^{-x})\) to get
which goes to zero as argued above. \(\square \)
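The two elementary bounds used in the proof above are easy to check numerically; the following Python snippet (a sanity check of our own, not part of the proof) verifies them on a grid:

```python
import math

# Bound (1): (1/e) * x <= 1 - exp(-x) on [0, 1]
for i in range(1001):
    x = i / 1000.0
    assert x / math.e <= 1.0 - math.exp(-x) + 1e-15

# Bound (2): x * 1_{x > 1} <= e/(e-1) * x * (1 - exp(-x)); equality holds at x = 1
for x in (1.0, 1.5, 2.0, 10.0, 100.0):
    assert x <= math.e / (math.e - 1.0) * x * (1.0 - math.exp(-x)) + 1e-9
```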
Lemma 3.5
For all \(v\ge 0\) and \(T\ge 0\) we have
\((Z_t(v))_{t\ge 0}\) is \({\mathbb P} \)-a.s. càdlàg and \({\mathbb P} \)-a.s.
Proof
We have obtained above the representation
First, let us note that
and moreover
and the latter union is disjoint. If we set
then
and by Lemma 3.3
Then we get
where the last equality follows by (2.9). By our assumptions on \(V\), the right-hand side in the above display converges to \(0\) as \(n\rightarrow \infty \). Hence (3.11) also converges to \(0\) as \(n\rightarrow \infty \). Let us now deal with \((J^n_t)_{t\ge 0}\). Note that we can write
where
and \((A^n_t)_{t\ge 0}\) is a martingale such that \(A^n_0=0\). We have by an analog of Lemma 3.3 and its proof
where \(K_V:=I(v)\int \limits _0^T {\mathbb E} (V_s)\, ds\). The right-hand side tends to zero as \(n\rightarrow \infty \) by Lemma 3.4. Analogously
which again tends to \(0\) as \(n\rightarrow \infty \) by Lemma 3.4. Therefore
and, passing to a subsequence, we see that a.s.
(observe that in fact we don’t need to take a subsequence since \(Z^n_t\) is monotone nondecreasing in \(n\)).
In particular, a.s. \((Z_t(v), t\ge 0)\) is càdlàg and we obtain
It remains to prove that a.s. \(B^v = \{(y,r): y< Z_{r}(v)\}\). By definition a.s.
If \(a_i\le v\) and \(u_i \le V_{s_i}\), then \(L^i_{r}+w^i_{r}\le Z_{r}(v)\), so that \(B^v \subset \{(y,r): y< Z_{r}(v)\}\). On the other hand, if \(y< Z_{r}(v)\), then there is one \(j\) such that
Therefore we have obtained the desired results. \(\square \)
The proof of Proposition 3.2 is complete.
Proof of Theorem 2.3
With a localisation argument we may assume that \(g\) is globally Lipschitz. Let us first show uniqueness of solutions to (3.1). Let \(v=1\). If \((Z^i_t, t\ge 0)\), \(i=1,2\), are càdlàg processes satisfying (3.1) with \(v=1\), then taking the difference we obtain
where the second equality follows by (2.9). By the Lipschitz continuity of \(g\) and Gronwall's Lemma we obtain \(Z^1=Z^2\) a.s., i.e. uniqueness of solutions to (3.1).
The next step is to use an iterative Picard scheme in order to construct a solution of (3.1) (and thus of (2.6)). Let \(v:=1\), and let us set \(Z^0_t:=0\) and for all \(n\ge 0\)
By induction and by the monotonicity of \(g\), \(Z^{n+1}_t\ge Z^{n}_t\), and therefore a.s. the limit \(Z_t:=\lim _nZ^{n}_t\) exists.
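The monotone Picard scheme \(Z^{n+1}_t = v + \int_0^t g(Z^n_s)\,ds\) can be sketched in a deterministic caricature (the driving noise is suppressed here, so this is only an illustration of the iteration, not of the stochastic equation itself); with the illustrative choice \(g(x)=x\) and \(v=1\) the iterates increase to the solution \(e^t\):

```python
import math

def picard_iterates(g, v, t_max=1.0, n_steps=1000, n_iter=30):
    """Deterministic caricature of the scheme: Z^0 = 0 and
    Z^{n+1}(t) = v + int_0^t g(Z^n(s)) ds, left-endpoint discretization."""
    dt = t_max / n_steps
    Z = [0.0] * (n_steps + 1)                      # Z^0 identically 0
    for _ in range(n_iter):
        Z_next, acc = [v] * (n_steps + 1), 0.0
        for k in range(1, n_steps + 1):
            acc += g(Z[k - 1]) * dt
            Z_next[k] = v + acc
        # monotonicity of the iterates (g nondecreasing and nonnegative here)
        assert all(a <= b + 1e-12 for a, b in zip(Z, Z_next))
        Z = Z_next
    return Z
```

For \(g(x)=x\) the \(n\)-th iterate is the truncated exponential series, so the monotone limit recovers \(Z_t=e^t\) up to discretization error.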
To show that \(Z\) is actually the solution of (3.1), we first show that it is càdlàg (by proving that the convergence holds in a norm that makes the space of càdlàg processes on \([0,T]\) complete), and then prove that (3.1) holds almost surely for each fixed \(t\ge 0\). Let us first show that \((Z^n)\) is a Cauchy sequence for the norm \(\Vert Z\Vert = {\mathbb E} (\sup _{t\in [0,T]} \left| Z_t \right| )\), for which we first set
By an analog of Proposition 3.2 we can construct a PPP \({\mathcal N} ^{n,k}\) with the intensity measure \(\mathbf 1_{(r>0)} \, dr\, \otimes \, c_\alpha \, \mathbf 1_{(x>0)}\, x^{-1-\alpha }\,dx \, \otimes \, \mathbf 1_{(y>0)}\, dy\) such that for all \(t\ge 0\)
Then, by the Lipschitz continuity of \(g\) with Lipschitz constant \(L\) and by Lemma 3.3,
We now show that the right-hand side in the latter formula vanishes as \(n\rightarrow +\infty \), uniformly in \(k\). Indeed
Then by induction \({\mathbb E} (Z^{n+1}_t) \le Ce^{tL}\), and by monotone convergence we obtain that \({\mathbb E} (Z^{n+1}_t)\uparrow {\mathbb E} (Z_t)\le Ce^{tL}\). By dominated convergence it follows that
i.e. the sequence \(\int \limits _0^T {\mathbb E} (Z^{n}_s) \, ds\) is Cauchy, and we conclude that \(Z^n \rightarrow Z\) in the sense of the above norm; therefore \(Z\) is almost surely càdlàg. The above argument also shows that
holds almost surely for each fixed \(t\) and therefore for all \(t \ge 0\), i.e. \(Z\) is a solution of (3.1) for \(v=1\). Setting \(V_s:=g(Z_{s}(1))\) and applying Proposition 3.2, we obtain (3.1) and the proof of Theorem 2.3 is complete.
Proof of Theorem 2.2
Let us start with the existence of a weak solution to (2.6): by Theorem 2.3 we can construct a process \((Z_t(v), t\ge 0, v\in [0,1])\) and a Poisson point process \(\mathcal {N}(dr,dx,dy)\) such that (3.1) and (2.6) hold. Now we set
where \(\delta _a\) denotes the Dirac mass at \(a\); by construction it is clear that \(X_t(v):=\mathbf X _t([0,v])\), for all \(v\in [0,1]\), is a solution to (2.6). It remains to prove that \((\mathbf X _t)_{t\ge 0}\) is càdlàg in the space of finite measures on \([0,1]\). By Lemma 3.5, for all \(v\in [0,1]\), \((X_t(v))_{t\ge 0}\) is càdlàg; by countable additivity, a.s. \((X_t(v))_{t\ge 0}\) is càdlàg for all \(v\in {\mathbb Q} \cap [0,1]\); then, by the compactness of \([0,1]\), it is easy to see that \((\mathbf X _t)_{t\ge 0}\) is càdlàg: for instance, a.s. any limit point of \(\mathbf X _{t_n}\), for \(t_n\ge t\) and \(t_n\rightarrow t\), is equal on each interval \(]a,b]\), \(a,b\in {\mathbb Q} \cap [0,1]\), to \(X_t(b)-X_t(a)=\mathbf X _t(]a,b])\). Therefore, we have proved that \((\mathbf X _t)_{t\ge 0}\) is a solution to (2.6) in the sense of Definition 2.1.
It remains to prove pathwise uniqueness. Let \((\mathbf X ^i_t)_{t\ge 0}\), \(i=1,2\), be two solutions to (2.6) driven by the same Poisson noise \({\mathcal N} \) and let us set \(X^i_t(v):=\mathbf X ^i_t([0,v])\), \(v\in [0,1]\). Let us first consider the case \(v=1\): then \((X^i_t(1), t\ge 0)\), \(i=1,2\), solves a particular case of the equation considered by Dawson and Li [12, (2.1)]; therefore, by [12, Theorem 2.5], \({\mathbb P} (X^1_t(1)=X^2_t(1), \, \forall \, t\ge 0)=1\).
Let us now consider \(0\le v<1\); in this case the equation satisfied by \((X^i_t(v), t\ge 0)\) depends on \((X^i_t(1), t\ge 0)\), and therefore the uniqueness result by Dawson and Li does not apply directly. Instead, we consider the difference \(D_t:=X^1_t(v)-X^2_t(v)\), so that the drift terms cancel since \(X^1(1)=X^2(1)\). Hence, \((D_t, t\ge 0)\) can be treated as if \(g\) were identically equal to 0. The same proof as in [12] shows that \({\mathbb P} (X^1_t(v)=X^2_t(v), \, \forall \, t\ge 0)=1\). Finally, since a.s. the two finite measures \(\mathbf X ^1_t\) and \(\mathbf X ^2_t\) are equal on each interval \(]a,b]\), \(a,b\in {\mathbb Q} \cap [0,1]\), they coincide. Therefore, pathwise uniqueness holds for (2.6).
Finally, in order to obtain existence of a strong solution, we apply the classical Yamada–Watanabe argument, for instance in the general form proved by Kurtz [25, Theorem 3.14].
Proof of Theorem 2.7
We consider the immigration rate function \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\), \(x\ge 0\). Now \(g\) is not Lipschitz continuous, so that Theorem 2.3 does not apply directly. However, by considering \(g^n(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta (x\vee n^{-1})^{2-\alpha }\), we obtain a monotone nondecreasing and Lipschitz continuous function for which Theorem 2.3 yields existence and uniqueness of a solution \((X^n_t(v), t\ge 0, v\ge 0)\) to (2.6). We now define \( T^0:=0\), \(T^n := \inf \{t>0: \ X^n_t(1) = n^{-1}\}\) and
Since \((X_t(1))_{t\ge 0}\) has no downward jumps, it follows that \(T_0:=\sup _n T^n\) is equal to \(\inf \{s>0: X_s(1)=0\}\), and moreover \(X_t(1)=0\) for all \(t\ge T_0\). By pathwise uniqueness, if \(n\ge m\) then \(X_t^n(v)=X_t^m(v)\) on \(\{t\le T^m\}\), and therefore \((X_t(v), t\ge 0, v\ge 0)\) is a solution to (2.6) for \(g(x)=\alpha (\alpha -1)\Gamma (\alpha )\theta x^{2-\alpha }\) with the desired properties. Pathwise uniqueness follows from the same localisation argument.
To prove that the right-hand side of (2.15) is well-defined, i.e. that the denominator is always strictly positive, we are going to apply Lamperti's representation for self-similar Markov processes. A positive self-similar Markov process of index \(w\) is a strong Markov family \(({\mathbb P} ^x)_{x>0}\), with coordinate process denoted by \((U_t)_{t\ge 0}\) in the Skorohod space of càdlàg functions with values in \([0,+\infty [\), satisfying
for all \(c>0\). Lamperti has shown in [29] that this property is equivalent to the existence of a Lévy process \(\xi \) such that, under \({\mathbb P} ^x\), the process \(( U_{t\wedge T_0})_{t\ge 0}\) has the same law as \(\big ( x \exp \big (\xi _{A^{-1}(tx^{-1/w})} \big )\big )_{t\ge 0}\), where
We now use Lamperti's representation to find a surprisingly simple argument for the well-posedness of (2.15).
Lemma 4.1
The right-hand side of (2.15) is well-defined for all \(v\in [0,1]\) and \(t\ge 0\).
Proof
In Lemma 1 of [1] it was shown that, if \(L\) is a spectrally positive \(\alpha \)stable Lévy process as in (2.3), solutions to the SDE
trapped at zero induce a positive self-similar Markov process of index \(1/(\alpha -1)\). The corresponding Lévy process \(\xi \) has been calculated explicitly in [1, Lemma 2.2], but for the proof here we only need that \(\xi \) has infinite lifetime, together with a remarkable cancellation effect between the time-changes. Since, by Lemma 1 of Fournier [21], the unique solution to the SDE (4.2) for \(X_0=1\) coincides in law with the unique solution to
we see that the total-mass process \((X_t(1))_{t\ge 0}\) and \(\left( \exp \left( \xi _{A^{-1}(t)} \right) \right) _{t\ge 0}\) are equal in law up to first hitting \(0\). Applying the Lamperti transformation for \(t<T_0\) yields
so that \(\bar{S}\) and \(A\) are inverse to each other for \(t<T_0\). Plugging this identity into the Lamperti transformation yields
For the second equality we used the left-continuity of \(X(1)\) at \(T_0\), which is due to Sect. 3 of [29] because the Lévy process \(\xi \) does not jump to \(-\infty \). Using that \(\xi _t>-\infty \) for any \(t\in [0,\infty )\), from (4.3) we see that \(\bar{S}\) explodes at \(T_0\), that is \(\bar{S}(T_0)=\infty \). Since \(S\) and \(\bar{S}\) only differ by the factor \(\alpha (\alpha -1)\Gamma (\alpha )\), it also holds that \(S(T_0)=\infty \), so that \(X_{S^{-1}(t)}(1)>0\) for all \(t\ge 0\). \(\square \)
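The Lamperti machinery used above can be illustrated by a small discretized sketch (an illustration only, not the construction itself). Here we take the deterministic toy path \(\xi _s=s\) and index \(w=2\), with the clock \(A(t)=\int _0^t e^{\xi _s/w}\,ds\) (the exponent \(1/w\) in the clock is a normalization assumption for this sketch; conventions vary); then \(A(t)=2(e^{t/2}-1)\), \(A^{-1}(u)=2\log (1+u/2)\), and \(U_t=\exp (\xi _{A^{-1}(t)})=(1+t/2)^2\):

```python
import math
from bisect import bisect_left

def lamperti_clock(xi, dt, w):
    """Left-Riemann discretization of the clock A(t) = int_0^t exp(xi_s / w) ds."""
    A = [0.0]
    for val in xi[:-1]:
        A.append(A[-1] + math.exp(val / w) * dt)
    return A

def clock_inverse(A, dt, u):
    """Piecewise-linear inverse of the (strictly increasing) discretized clock."""
    k = max(bisect_left(A, u) - 1, 0)
    if k >= len(A) - 1:
        return (len(A) - 1) * dt
    frac = (u - A[k]) / (A[k + 1] - A[k])
    return (k + frac) * dt

# Toy path xi_s = s on [0, 3] with w = 2.
dt, w = 1e-4, 2.0
xi = [k * dt for k in range(30_001)]
A = lamperti_clock(xi, dt, w)
U = lambda t: math.exp(clock_inverse(A, dt, t))   # U_t = exp(xi_{A^{-1}(t)})
```

The check that \(A\) and \(A^{-1}\) are inverse to each other is exactly the cancellation exploited for \(\bar S\) and \(A\) in the proof.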
We can now show how to construct, on a pathwise level, the Beta-Fleming–Viot process with mutations from the measure-valued branching process.
Proof of Theorem 2.7
Suppose \(\mathcal {N}\) is the PPP with compensator measure \(\nu \) that drives the strong solution of (2.6) with atoms
\((r_i,x_i,y_i)_{i\in I}\in (0,\infty )\times (0,\infty )\times (0,\infty )\). Then we define a new point process on \((0,\infty )\times (0,\infty )\times (0,\infty )\) by
If we can show that the restriction \(\mathcal {M}_{|}\) of \(\mathcal M\) to \((0,\infty )\times (0,1)\times (0,1)\) is a PPP with intensity measure \(\mathcal {M}_{|}'(ds,dz,du)=ds \otimes C_\alpha z^{-2}z^{1-\alpha }(1-z)^{\alpha -1}dz\otimes du\) and furthermore that
is a solution to (2.14) with respect to \(\mathcal {M}_{|}\), then the claim follows from the pathwise uniqueness of (2.14).
Step 1: We have
$$\begin{aligned} R_t(v)&=\frac{X_{S^{-1}(t)}(v)}{X_{S^{-1}(t)}(1)} \\&=\frac{X_0(v)}{X_0(1)}+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{X_{r-}(v)+ x\mathbf 1_{(y\le X_{r-}(v))}}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}}-\frac{X_{r-}(v)}{X_{r-}(1)}\Bigg ] (\mathcal {N}-\nu )(dr,dx,dy) \\&\quad +\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{X_{r-}(v)+ x\mathbf 1_{(y\le X_{r-}(v))}}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}}-\frac{X_{r-}(v)}{X_{r-}(1)}-\frac{ x\mathbf 1_{(y\le X_{r-}(v))} }{X_{r-}(1)}\\&\quad \quad +\frac{ x\mathbf 1_{(y\le X_{r-}(1))} X_{r-}(v)}{X_{r-}(1)^2}\Bigg ] \nu (dr,dx,dy)\\&\quad + \alpha (\alpha -1)\Gamma (\alpha )\theta v\int \limits _0^{S^{-1}(t)}\frac{1}{X_{r}(1)}X_r(1)^{2-\alpha } \,dr -\alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\frac{X_r(v)}{X_{r}(1)^2} X_r(1)^{2-\alpha }\,dr\\&=v+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{X_{r-}(v)+ x\mathbf 1_{(y\le X_{r-}(v))}}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}}-\frac{X_{r-}(v)}{X_{r-}(1)}\Bigg ] \mathcal {N}(dr,dx,dy)\\&\quad + \alpha (\alpha -1)\Gamma (\alpha )\theta v\int \limits _0^{S^{-1}(t)}X_r(1)^{1-\alpha } \,dr -\alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\frac{X_r(v)}{X_{r}(1)} X_r(1)^{1-\alpha }\,dr. \end{aligned}$$To verify the third equality, first note that due to Lemma II.2.18 of [24] the compensation can be split from the martingale part and then canceled by the compensator integral, since integrating out the \(y\)-variable yields
$$\begin{aligned}&\quad \int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [-\frac{ x\mathbf 1_{(y\le X_{r-}(v))} }{X_{r-}(1)}+\frac{ x\mathbf 1_{(y\le X_{r-}(1))} X_{r-}(v)}{X_{r-}(1)^2}\Bigg ] c_\alpha x^{-1-\alpha } dr\,dx\,dy =0. \end{aligned}$$To replace the jumps governed by the PPP \(\mathcal {N}\) by jumps governed by \(\mathcal M\), note that by the definition of \(\mathcal M\) we find, for measurable nonnegative test functions \(h\) for which the first integral is defined, the almost sure transfer identity
$$\begin{aligned}&\int \limits _{0}^{S^{-1}(t)}\int \limits _{0}^{\infty }\int \limits _{0}^{\infty } h\left( S(r),\frac{x}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}},\frac{y}{X_{r-}(1)}\right) \mathcal {N}(dr,dx,dy)\nonumber \\&\quad =\int \limits _{0}^{t}\int \limits _{0}^{1}\int \limits _{0}^{\infty } h(s, z, u)\,\mathcal {M}(ds,dz,du) \end{aligned}$$(4.4)or in an equivalent but more suitable form
$$\begin{aligned}&\quad \int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty h\left( r,\frac{x}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}},\frac{y}{X_{r-}(1)}\right) \mathcal {N}(dr,dx,dy)\nonumber \\&=\int \limits _0^{t}\int \limits _0^1\int \limits _0^\infty h\big (S^{-1}(s), z, u\big )\,\mathcal {M}(ds,dz,du). \end{aligned}$$(4.5)Since the integrals are non-compensated, we have in fact defined \(\mathcal {M}\) in such a way that the integrals produce exactly the same jumps. Let us now rewrite the equation found for \(R\) in such a way that (4.5) can be applied:
$$\begin{aligned} R_t(v)&=v+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \Bigg [\frac{x\mathbf 1_{(y\le X_{r-}(v))}X_{r-}(1)-X_{r-}(v)\, x\mathbf 1_{(y\le X_{r-}(1))}}{(X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))})X_{r-}(1)}\Bigg ]\\&\quad \times \mathcal {N}(dr,dx,dy) + \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\Big [vX_r(1)^{1-\alpha } -\frac{X_r(v)}{X_{r}(1)} X_r(1)^{1-\alpha }\Big ]\,dr\\&=v+\int \limits _0^{S^{-1}(t)}\int \limits _0^\infty \int \limits _0^\infty \frac{x}{X_{r-}(1)+ x\mathbf 1_{(y\le X_{r-}(1))}} \left[ \mathbf 1_{(y\le X_{r-}(v))}-\frac{X_{r-}(v)}{X_{r-}(1)}\mathbf 1_{(y\le X_{r-}(1))}\right] \mathcal {N}(dr,dx,dy)\\&\quad + \alpha (\alpha -1)\Gamma (\alpha )\theta \int \limits _0^{S^{-1}(t)}\Big [vX_r(1)^{1-\alpha } -\frac{X_r(v)}{X_{r}(1)} X_r(1)^{1-\alpha }\Big ]\,dr. \end{aligned}$$The stochastic integral driven by \(\mathcal {N}\) can now be replaced by a stochastic integral driven by \(\mathcal {M}\) via (4.5):
$$\begin{aligned} R_t(v)&= v+\int \limits _0^{t}\int \limits _0^1\int \limits _0^\infty z\bigg [\mathbf 1_{( uX_{S^{-1}(s-)}(1)\le X_{S^{-1}(s-)}(v))} -R_{S^{-1}(s-)}(v)\mathbf 1_{( uX_{S^{-1}(s-)}(1)\le X_{S^{-1}(s-)}(1))} \bigg ]\mathcal {M}(ds,dz,du)\\&\quad + \theta \int \limits _0^{t}\big [v-R_s(v)\big ]\,ds\\&=v+\int \limits _0^{t}\int \limits _0^1\int \limits _0^\infty z\bigg [\mathbf 1_{(u\le R_{s-}(v))}-R_{s-}(v)\mathbf 1_{( u\le 1)} \bigg ]\mathcal {M}(ds,dz,du) \\&\quad + \theta \int \limits _0^{t}\big [v-R_s(v)\big ]\,ds. \end{aligned}$$By monotonicity in \(v\), \(R_t(v)\le 1\), so that the \(du\)-integral in fact only runs up to \(1\) and the second indicator can be dropped:
$$\begin{aligned} R_t(v)&= v+\int \limits _0^{t}\int \limits _0^1\int \limits _0^1 z\big [\mathbf 1_{(u\le R_{s-}(v))}-R_{s-}(v) \big ]\,\mathcal {M}_{|}(ds,dz,du)\\&\quad + \theta \int \limits _0^{t}\big [v-R_s(v)\big ]\,ds. \end{aligned}$$This is precisely the equation we wanted to derive.
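The pointwise identity behind this jump transfer can be checked numerically. Writing \(a=X_{r-}(v)\), \(b=X_{r-}(1)\) and \(u=y/b\in (0,1]\) (so the indicator in the denominator always fires and the relative jump size is \(z=x/(b+x)\)), the jump of the ratio equals \(z[\mathbf 1_{(u\le a/b)}-a/b]\). The helper below is purely illustrative:

```python
def ratio_jump_sides(a, b, x, u):
    """Compare the jump of the ratio X(v)/X(1) with its z-parametrized form
    z * [1_{(u <= a/b)} - a/b], where a = X_{r-}(v), b = X_{r-}(1),
    u = y / b in (0, 1] and z = x / (b + x)."""
    ind_v = 1.0 if u * b <= a else 0.0          # does the jump hit the [0, v]-mass?
    lhs = (a + x * ind_v) / (b + x) - a / b     # jump of the ratio
    z = x / (b + x)                             # relative jump size
    rhs = z * ((1.0 if u <= a / b else 0.0) - a / b)
    return lhs, rhs
```

A short computation confirms the identity exactly: the left side equals \(x(b\,\mathbf 1_{(u\le a/b)}-a)/(b(b+x))\), which is the right side.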
Step 2:
The proof is complete if we can show that the restriction \(\mathcal {M}_{|}\) of \(\mathcal M\) to \((0,\infty )\times [0,1]\times [0,1]\) is a PPP with intensity \(\mathcal {M}_{|}'(ds,dz,du)=ds \otimes C_\alpha z^{-2}z^{1-\alpha }(1-z)^{\alpha -1}dz\otimes du\). For this sake, we choose a nonnegative measurable predictable function \(W:\Omega \times (0,\infty )\times (0,1)\times (0,1)\rightarrow {\mathbb R}\), bounded in the second and third variables and compactly supported in the first, plug in the definition of \(\mathcal {M}_{|}\) and use the compensator measure \(\nu \) of \(\mathcal {N}\) to obtain via (4.4)
which, by predictable projection and change of variables, equals
Now we substitute the three variables \(r, x, y\) (in this order), using \(C_\alpha =\frac{1}{\alpha (\alpha -1)\Gamma (\alpha )}c_\alpha \) for the substitution of \(r\) and the identity
for the substitution of \(x\) to obtain
It now follows from Theorem II.4.8 of [24] and the definitions of \(c_\alpha , C_\alpha \) that \(\mathcal {M}_{|}\) is a PPP with intensity \(ds \otimes C_\alpha z^{-2}z^{1-\alpha }(1-z)^{\alpha -1}dz\otimes du\). \(\square \)
Proof of Theorem 1.2
Let us briefly outline the strategy of the proof: in order to show that the measure-valued process \(\mathbf Y \), \({\mathbb P}\)-a.s., does not possess times \(t\) at which \(\mathbf Y _t\) has finitely many atoms, by Theorem 2.7 it suffices to show that, \({\mathbb P}\)-a.s., the same is true for the measure-valued branching process \(\mathbf X \). In order to achieve this, it suffices to deduce the same property for the Pitman–Yor type representation up to extinction, i.e. we need to show that
The upshot of working with \(Z\) instead of \(\mathbf Y\) is a comparison property that is not available for \(\mathbf Y \). More precisely, we are going to prove that, with probability 1, the number of immigrated types alive is infinite at all times, thereby showing that the result in Theorem 1.2 is indeed independent of the starting configuration \(\mathbf Y _0\).
We start the proof with a technical result on the covering of a half line by the shadows of a Poisson point process defined on some probability space \((\Omega , \mathcal {G}, \mathcal {G}_{t}, P)\). Suppose \((s_i,h_i)_{i\in I}\) are the points of a Poisson point process \(\Pi \) on \((0,\infty )\times (0,\infty )\) with intensity \(dt\otimes \Pi '(dh)\). For a point \((s_i,h_i)\) we define its shadow on the half line \({\mathbb R}^+\) as the interval \((s_i,s_i+h_i)\), which is precisely the segment covered by the shadow of the line segment connecting \((s_i,0)\) and \((s_i,h_i)\) when light shines at a 45 degree angle from the upper left. Shepp proved that the half line \({\mathbb R}^+\) is almost surely fully covered by the shadows induced by the points \((s_i,h_i)_{i\in I}\) if and only if
The reader is referred to the last remark of [36]. For our purposes we need the following variant:
Lemma 5.1
Suppose \(\Pi \) is a PPP with intensity \(dt\,\otimes \,\Pi '(dh)\) and Shepp's condition (5.2) holds. Then
i.e. almost surely every point of \({\mathbb R}^+\) is covered by the shadows of infinitely many line segments.
Proof
The proof is an iterated use of Shepp’s result for the sequence of restricted Poisson point processes \(\Pi _k\) obtained by removing all the atoms \((s_i,h_i)\) with \(h_i>\frac{1}{k}\) from \(\Pi \), i.e. restricting the intensity measure to \([0,\frac{1}{k}]\). Since Shepp’s criterion (5.2) only involves the intensity measure around zero, the shadows of all point processes \(\Pi _k\) cover the half line. Consequently, if there is some \(t>0\) such that \(t\) is only covered by the shadows of finitely many points \((s_i,h_i)\in \Pi \), then \(t\) is not covered by the shadows generated by \(\Pi _{k'}\) for some \(k'\) large enough. But this is a contradiction to Shepp’s result applied to \(\Pi _{k'}\). \(\square \)
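A seeded Monte Carlo sketch makes the covering picture concrete (all parameters below — the truncation \(h_{\min }\), the window \(T\), the target point \(t\) — are illustrative choices, not taken from the text): for \(\Pi '(dh)=h^{-2}dh\) truncated to \(h\ge h_{\min }\), the number of shadows covering a fixed \(t\) is Poisson with mean \(\int \min (h,t)\,\Pi '(dh)=1+\log (t/h_{\min })\), which diverges as \(h_{\min }\downarrow 0\), in line with the "infinitely many covering shadows" of Lemma 5.1:

```python
import math, random

def count_covering_shadows(t=0.5, T=1.0, h_min=0.01):
    """One realization of a PPP with intensity ds (x) 1_{(h > h_min)} h^{-2} dh
    on (0, T) x (0, oo); return the number of atoms (s, h) whose shadow
    (s, s + h) covers the point t."""
    rate = 1.0 / h_min                     # total mass of h^{-2} dh on (h_min, oo)
    count, s = 0, 0.0
    while True:
        s += random.expovariate(rate)      # next atom position on the time axis
        if s >= T:
            return count
        # inverse-CDF sample of the length: P(h > u) = h_min / u for u >= h_min
        h = h_min / max(random.random(), 1e-12)
        if s < t < s + h:
            count += 1

random.seed(2024)
mean_count = sum(count_covering_shadows() for _ in range(2000)) / 2000
# theoretical mean: 1 + log(t / h_min) = 1 + log(50), roughly 4.91
```

Averaging over many seeded realizations, the empirical mean should sit close to the theoretical value \(1+\log 50\approx 4.91\).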
Now we want to apply Shepp's result to the Pitman–Yor type representation. We want to prove that (5.1) holds for any \(\theta >0\). For all \(\epsilon >0\) let us set
Then it is clearly enough to prove that for all \(\epsilon >0\)
In order to connect the covering lemma with the question of exceptional times, we use the comparison property of the PitmanYor representation to reduce the problem to the process \(Z^\epsilon \) explicitly defined by
Setting
it is obvious by the definition of \(Z\) and \(Z^\epsilon \) that
We are now prepared to prove our main result.
Proof of Theorem 1.2
Due to (5.4) we only need to show that almost surely \(v\mapsto Z^\epsilon _t(v)\) has infinitely many jumps for all \(t>0\) and arbitrary \(\epsilon >0\). To verify the latter, Lemma 5.1 will be applied to a PPP defined in the sequel. If \(\Pi \) denotes the Poisson point process with atoms \((s_i,w^i,u_i)_{i\in I}\) from which \(Z^\epsilon _t(v)\) is defined, then we define a new Poisson point process \(\Pi _l\) via the atoms
where \(\ell (w):=\inf \{t>0: w_t=0\}\) denotes the length of the trajectory \(w\). In order to apply Lemma 5.1 we need the intensity of \(\Pi _l\). Using the definition of \({\mathbb Q} \) and the Laplace transform duality (2.8) with the explicit form
we find the distribution
Differentiating in \(h\) shows that \(\Pi _l\) is a Poisson point process on \({\mathbb R}^+\times {\mathbb R}^+\times {\mathbb R}^+\) with intensity measure
Plugging in the new definitions leads to
There is one more simplification that we can do. Let us define \(\Pi _{l,\epsilon }\) as a Poisson point process on \((0,\infty )\times (0,\infty )\) with intensity measure
then by the properties of Poisson point processes we have the equality in law
Then (5.6) yields
Now we are precisely in the setting of Shepp's covering results, and the theorem follows from Lemma 5.1 if (5.2) holds. Shepp's condition can be checked easily for \(\Pi _{l,\epsilon }\) with intensity (5.7), independently of \(\theta \) and \(\epsilon \). \(\square \)
A Proof of Schmuland’s Theorem
In this section we sketch how our line of argument can be adapted to the continuous case corresponding to \(\alpha =2\). The proofs go along the same lines (reduction to a measure-valued branching process and then to an excursion representation to which the covering result can be applied) but are much simpler due to the constant immigration structure. The crucial difference, leading to the possibility of exceptional times, occurs in the final step via Shepp's covering results.
Proof of Schmuland’s Theorem 1.1
We start with the continuous analogue of Theorem 2.2. Suppose \(W\) is a white noise on \((0,\infty )\times (0,\infty )\); then one can show via the standard Yamada–Watanabe argument that there is a unique strong solution to
In fact, since the immigration mechanism \(g\) is constant, pathwise uniqueness holds. For every \(v\in [0,1]\), \((X_t(v))_{t\ge 0}\) satisfies
for a Brownian motion \(B\). Recalling (2.2), we see that (6.1) is a measure-valued process with branching mechanism \(\psi (u)=u^2\) and constant-rate immigration.
The Pitman–Yor type representation corresponding to Theorem 2.3 looks as follows: in the setting of Sect. 2.2, we consider a Poisson point process \((s_i, u_i,w^i)_i\) on \({\mathbb R}_+\times {\mathbb R}_+\times {\mathcal D} \) with intensity measure \((\delta _0(ds)\otimes F(du)+ds\otimes I(du))\otimes {\mathbb Q} _s(dw)\), where the excursion measure \({\mathbb Q} \) is defined via the law of the CSBP (2.2) with branching mechanism \(\psi (\lambda )=\lambda ^2\). Then the analog of Theorem 2.3 is the following:
can be shown to solve (6.1); this result, for fixed \(v\), goes back to Pitman and Yor [34]. The calculation (5.5), now using that \(u_t(\lambda ) = \left( \lambda ^{-1} + t\right) ^{-1}\) is the unique nonnegative solution to
yields \({\mathbb Q} (\ell (w)\in dh)=\frac{1}{h^2}dh\).
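The computation behind this density can be sanity-checked numerically: \(u_t(\lambda )=(\lambda ^{-1}+t)^{-1}\) indeed solves \(u'=-u^2\) with \(u_0=\lambda \), and letting \(\lambda \rightarrow \infty \) gives the tail \(1/h\), whose negative \(h\)-derivative is the density \(h^{-2}\). A finite-difference check (purely illustrative):

```python
def u(t, lam):
    """Candidate solution u_t(lambda) = (lambda^{-1} + t)^{-1}."""
    return 1.0 / (1.0 / lam + t)

def ode_residual(t, lam, e=1e-5):
    """Central-difference check that du/dt + u^2 = 0."""
    du = (u(t + e, lam) - u(t - e, lam)) / (2.0 * e)
    return du + u(t, lam) ** 2
```

As \(\lambda \rightarrow \infty \), \(u_h(\lambda )\rightarrow 1/h\), recovering \({\mathbb Q}(\ell (w)>h)=1/h\).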
For the analogue of Theorem 2.7 we now define the process
with \(S(t) = \int \limits _0^t X_s(1)^{-1}\,ds\). It then follows again from self-similarity that \(R\) is well-defined, and from Itô's formula that \(R\) is a standard Fleming–Viot process on \([0,1]\). The arguments here involve a continuous SDE which has been studied in [12]:
where \(W\) is a whitenoise on \((0,\infty )\times (0,1)\). It was shown in Theorem 4.9 of [12] that the measurevalued process \(\mathbf Y \) associated with \((Y_t(v), t\ge 0, v\in [0,1])\) solves the martingale problem for the infinitely many sites model with mutations, i.e. \(\mathbf Y \) has generator (1.1) with the choice (1.2) for \(A\).
Finally, in order to prove Schmuland's Theorem 1.1 on exceptional times, it suffices to prove the same result for (6.2). We proceed again via Shepp's covering arguments as in Sect. 5. The crucial difference is that the immigration rate is already the constant \(\theta \), so that (5.3) becomes superfluous. The role of the Poisson point process \(\Pi _{l,\epsilon }\) is played by \(\Pi _{\theta ,l}\) with intensity measure
Plugging into Shepp’s criterion (5.2), by Lemma 5.1 and
we find that there are no exceptional times if \(\theta \ge 1\). Conversely, let us assume \(\theta <1\). Recalling that for \(\theta =0\) the Fleming–Viot process has almost surely finitely many atoms for all \(t>0\), we see that the first term in (6.2) almost surely has finitely many nonzero summands for all \(t>0\). Hence, it suffices to show the existence of exceptional times at which the second term in (6.2) vanishes. Arguing as before, this question is reduced to Shepp's covering result applied to \(\Pi _{\theta ,l}\): (6.4) combined with (5.2) leads to the result. \(\square \)
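The \(\theta \)-dichotomy can be probed numerically. Condition (5.2) is not reproduced in this section; in the form used for this check (an assumption on the exact statement of Shepp's criterion), the half line is a.s. covered if and only if \(\int _0^1 \exp \big (\int _t^1 \Pi '((h,\infty ))\,dh\big )\,dt=\infty \). Assuming the tail \(\Pi _{\theta ,l}'((h,\infty ))=\theta /h\), as suggested by \({\mathbb Q}(\ell (w)\in dh)=h^{-2}dh\) and immigration rate \(\theta \), the integrand is \(t^{-\theta }\), so the integral diverges exactly for \(\theta \ge 1\):

```python
import math

def shepp_integral(theta, eps, n=100_000):
    """Midpoint-rule approximation of int_eps^1 t^{-theta} dt, where t^{-theta}
    = exp(int_t^1 (theta/h) dh) is the integrand in the covering criterion
    (the exact form of (5.2) used here is an assumption)."""
    dt = (1.0 - eps) / n
    return sum(math.exp(theta * math.log(1.0 / (eps + (k + 0.5) * dt))) * dt
               for k in range(n))
```

For \(\theta \ge 1\) the truncated integrals blow up as \(\epsilon \downarrow 0\) (covering, no exceptional times), while for \(\theta <1\) they stay bounded (exceptional times exist), matching Schmuland's critical value.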
Proof of Corollary 2.8
The fact that the \((\alpha ,\theta )\)-Fleming–Viot process \((\mathbf Y_t,t\ge 0)\) converges in distribution to its unique invariant distribution, and that this invariant distribution is not trivial (i.e. it charges measures with at least two atoms), is proven by Donnelly and Kurtz in [14] at the end of Sect. 5.1 and in [13], Sect. 4.1. Here we sketch their argument, which relies on the so-called lookdown construction of \((\mathbf Y_t,t\ge 0)\) introduced in the same papers. Let us very briefly describe how the lookdown construction works (for more details we refer to [9, 14]).
The idea is to construct a sequence of processes \((\xi _i(t), t\ge 0)\), \(i=1,2,\ldots \), which take their values in the type-space \(E\) (here \(E=[0,1]\)). We say that \(\xi _i(t)\) is the type of level \(i\) at time \(t\). The types evolve by two mechanisms:

lookdown events: with rate \(x^{-2}\Lambda (dx)\) a proportion \(x\) of the lineages is selected by i.i.d. Bernoulli trials. Call \(i_1, i_2,\ldots \) the selected levels at a given event at time \(t\). Then, \(\forall k >1\), \(\xi _{i_k}(t) =\xi _{i_1}(t)\), that is, the selected levels all adopt the type of the smallest participating level. The type \(\xi _{i_k}(t-)\) which was occupying level \(i_k\) before the event is pushed up to the next available level.

mutation events: on each level \(i\) there is an independent Poisson point process \((t^{(i)}_j, j\ge 1)\) with rate \(\theta \) of mutation events. At a mutation event \(t^{(i)}_j\) the type \(\xi _i(t^{(i)}_j)\) is replaced by a new independent random variable uniformly distributed on \([0,1]\), and the previous type is pushed up by one level (as are all the types above it).
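The bookkeeping in these two mechanisms can be sketched on a finite truncation of the type vector (illustrative only — the actual construction keeps infinitely many levels, while here types pushed beyond the truncation are simply lost). A lookdown event amounts to inserting copies of the lowest participating type at the other participating levels, and a mutation event to inserting the new type at its level:

```python
def lookdown_event(types, levels):
    """Selected levels (1-based) all adopt the type of the smallest selected
    level; displaced types and everything above them are pushed up one slot
    per insertion."""
    types, levels = list(types), sorted(levels)
    parent = types[levels[0] - 1]
    for lvl in levels[1:]:
        types.insert(lvl - 1, parent)   # occupants of lvl, lvl+1, ... move up
    return types

def mutation_event(types, level, new_type):
    """The new type appears at `level`; the previous occupant and all types
    above are pushed up by one."""
    types = list(types)
    types.insert(level - 1, new_type)
    return types
```

For example, a lookdown event with participating levels \(\{1,3,4\}\) acting on types \(A,B,C,D,E\) produces \(A,B,A,A,C,\ldots \): levels 3 and 4 copy the type at level 1, and \(C,D,E\) are pushed up.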
The point is then that
exists simultaneously for all \(t\ge 0\) almost surely and that \((\Xi _t , t\ge 0) =(\mathbf Y_t, t\ge 0)\) in distribution.
Define a process \((\pi _t, t\ge 0)\) with values in the partitions of \(\{1,2,\ldots \}\) by saying that \(i\sim j\) in \(\pi _t\) if and only if \(\xi _i(t)=\xi _j(t)\). It is well known that this is an exchangeable process. Recall from Corollary 2.5 that for each fixed \(t\ge 0\), \(\Xi _t\) is almost surely purely atomic. Alternatively, this can be seen from the lookdown construction, since at a fixed time \(t>0\) level one has been looked down upon by infinitely many levels above since the last mutation event on level one. We can thus write
where the \(a_i\) are enumerated in decreasing order. It is also known that the sequences \((a_i(t), i\ge 1)\) of atom masses and \((x_i(t), i\ge 1)\) of atom locations are independent. The \(a_i(t)\) are the asymptotic frequencies of the blocks of \(\pi (t)\), which are thus in one-to-one correspondence with the atoms of \(\Xi _t\). Furthermore, the sequence \((x_i(t), i\ge 1)\) converges in distribution to a sequence of i.i.d. random variables with common distribution \(I\), because all the types that were present initially are replaced by immigrated types after some time. To see this, note that after the first mutation on level 1 the type \(\xi _1(0)\) is pushed up to infinity in a finite time, which is stochastically dominated by the fixation time of the type at level 1 in a Beta-Fleming–Viot process without mutation. This also proves the second point of the corollary.
For each \(n\ge 1\), let us consider the restriction \(\pi ^{(n)}(t) = \pi _{[n]}(t)\) of \(\pi (t)\) to \(\{1,\ldots ,n\}\). Then, for all \(n\ge 1\), the process \((\pi ^{(n)}_t, t\ge 0)\) is an irreducible Markov process on a finite state-space and thus converges to its unique invariant distribution. This implies that \((\pi (t), t\ge 0)\) must also converge to its invariant distribution. By Kingman's continuity theorem (see [33, Theorem 36] or [4, Theorem 1.2]), this implies that the ordered sequence of atom masses \((a_i(t))\) converges in distribution as \(t\rightarrow \infty \). Because \((x_i(t), i\ge 1)\) also converges in distribution, \(\Xi _t\) itself converges in distribution to its invariant measure. (Alternatively, this second part of the corollary could be deduced from Theorem 2 in [28].)
Furthermore, it is clear that the invariant distribution of \((\pi ^{(n)}_t, t\ge 0)\) must charge configurations with at least two non-singleton blocks. Since \(\pi \) is an exchangeable process, so is its invariant distribution. Exchangeable partitions have only two types of blocks, singletons and blocks with positive asymptotic frequency, so this proves that the invariant distribution of \(\pi \) charges partitions with at least two blocks of positive asymptotic frequency.
References
 1.
Berestycki, J., Döring, L., Mytnik, L., Zambotti, L.: Hitting properties and non-uniqueness for SDEs driven by stable processes. Preprint. http://arxiv.org/abs/1111.4388
 2.
Berestycki, J., Berestycki, N., Limic, V.: A small-time coupling between \(\Lambda \)-coalescents and branching processes. Preprint. http://arxiv.org/abs/1101.1875
 3.
Berestycki, J., Berestycki, N., Schweinsberg, J.: Beta-coalescents and continuous stable random trees. Ann. Probab. 35, 1835–1887 (2007)
 4.
Berestycki, N.: Recent progress in coalescent theory. Ens. Mat. 16, 1–193 (2009)
 5.
Bertoin, J., Le Gall, J.F.: Stochastic flows associated to coalescent processes. Probab. Theory Relat. Fields 126(2), 261–288 (2003)
 6.
Bertoin, J., Le Gall, J.F.: Stochastic flows associated to coalescent processes. II. Stochastic differential equations. Ann. Inst. Henri Poincaré Probab. Stat. 41(3), 307–333 (2005)
 7.
Bertoin, J., Le Gall, J.F.: Stochastic flows associated to coalescent processes. III. Limit theorems. Ill. J. Math. 50(1–4), 147–181 (2006)
 8.
Birkner, M., Blath, J.: MeasureValued Diffusions, General Coalescents and Population Genetic Inference Trends in Stochastic Analysis. Cambridge University Press, Cambridge (2009)
 9.
Birkner, M., Blath, J., Möhle, M., Steinrücken, M., Tams, J.: A modified lookdown construction for the Xi–Fleming–Viot process with mutation and populations with recurrent bottlenecks. ALEA Lat. Am. J. Probab. Math. Stat. 6, 25–61 (2009)
 10.
Birkner, M., Blath, J., Capaldo, M., Etheridge, A., Möhle, M., Schweinsberg, J., Wakolbinger, A.: Alphastable branching and betacoalescents. Electron. J. Probab. 10(9), 303–325 (2005)
 11.
Dawson, D.A., Li, Z.: Construction of immigration superprocesses with dependent spatial motion from one-dimensional excursions. Probab. Theory Relat. Fields 127, 37–61 (2003)
 12.
Dawson, D.A., Li, Z.: Stochastic equations, flows and measurevalued processes. Ann. Probab. 40(2), 813–857 (2012)
 13.
Donnelly, P., Kurtz, T.: A countable representation of the Fleming–Viot measure-valued diffusion. Ann. Probab. 24, 698–742 (1996)
 14.
Donnelly, P., Kurtz, T.: Particle representations for measure-valued population models. Ann. Probab. 27, 166–205 (1999)
 15.
Duquesne, T., Le Gall, J.F.: Random trees, Lévy processes and spatial branching processes. Astérisque 281, 1–147 (2002)
 16.
Dynkin, E.B., Kuznetsov, S.E.: \(\mathbb{N}\)-measures for branching Markov exit systems and their applications to differential equations. Probab. Theory Relat. Fields 130, 135–150 (2004)
 17.
Etheridge, A.M.: Some mathematical models from population genetics. Lectures from the 39th Probability Summer School held in SaintFlour, 2009. Lecture Notes in Mathematics, 2012. Springer, Heidelberg, viii+119 pp (2011)
 18.
Ethier, S.N., Griffiths, R.C.: The transition function of a Fleming–Viot process. Ann. Probab. 21, 1571–1590 (1993)
 19.
Ethier, S.N., Kurtz, T.G.: Fleming–Viot processes in population genetics. SIAM J. Control Optim. 31, 345–386 (1993)
 20.
Fleming, W.H., Viot, M.: Some measure-valued Markov processes in population genetics. Indiana Univ. Math. J. 28, 817–843 (1979)
 21.
Fournier, N.: On pathwise uniqueness for stochastic differential equations driven by stable Lévy processes. Ann. Inst. Henri Poincaré Probab. Stat. 49(1), 138–159 (2013)
 22.
Fu, Z., Li, Z.: Stochastic equations of nonnegative processes with jumps. Stoch. Process. Appl. 120, 306–330 (2010)
 23.
Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes. NorthHolland Mathematical Library, NorthHolland Publishing Co, Amsterdam (1989)
 24.
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, 2nd edn. Springer, Berlin (2003)
 25.
Kurtz, T.G.: The Yamada–Watanabe–Engelbert theorem for general stochastic equations and inequalities. Electron. J. Probab. 12, 951–965 (2007)
 26.
Konno, N., Shiga, T.: Stochastic differential equations for some measurevalued diffusions. Probab. Theory Relat. F. 79, 201–225 (1988)
 27.
Kyprianou, A.E.: Fluctuations of Fluctuations of Lévy Processes with Applications, Introductory Lectures. Universitext, 2nd edn, p. XVIII. SpringerVerlag, Berlin (2014)
 28.
Labbé, C.: Flots stochastiques et representation lookdown. PhD Thesis (2013). http://tel.archivesouvertes.fr/tel00874551
 29.
Lamperti, J.: Semistable Markov processes. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 22, 205–225 (1972)
 30.
Li, Z.: Skew convolution semigroups and related immigration processes. Theory Probab. Appl. 46(2), 274–296 (2002)
 31.
Li, Z.: MeasureValued Branching Markov Processes, Probability and Its Applications. Springer, Berlin (2010)
 32.
Li, Z., Mytnik, L.: Strong solutions for stochastic differential equations with jumps. Ann. Inst. Henri Poincaré Probab. Stat. 47(4), 1055–1067 (2011)
 33.
Pitman, J.: Coalescents with multiple collisions. Ann. Probab. 27, 1870–1902 (1999)
 34.
Pitman, J., Yor, M.: A decomposition of Bessel bridges. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 59, 425–457 (1982)
 35.
Schmuland, B.: A result on the infinitely many neutral alleles diffusion model. J. Appl. Probab. 28(2), 253–267 (1991)
 36.
Shepp, L.A.: Covering the line with random intervals. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 23, 163–170 (1972a)
Acknowledgments
The authors are very grateful for stimulating discussions with Zenghu Li and Marc Yor, and to an anonymous referee whose insightful comments greatly helped us improve this manuscript. L. Döring was supported by the Fondation Sciences Mathématiques de Paris, and L. Mytnik was partly supported by the Israel Science Foundation. J. Berestycki, L. Mytnik and L. Zambotti thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for a very pleasant stay during which part of this work was produced. L. Mytnik thanks the Laboratoire de Probabilités et Modèles Aléatoires for the opportunity to visit it and carry out part of this research there.
Cite this article
Berestycki, J., Döring, L., Mytnik, L. et al.: On exceptional times for generalized Fleming–Viot processes with mutations. Stoch PDE: Anal Comp 2, 84–120 (2014). https://doi.org/10.1007/s40072-014-0026-6
Keywords
 Fleming–Viot processes
 Mutations
 Exceptional times
 Excursion theory
 Jump-type SDE
 Self-similarity
Mathematics Subject Classification (2000)
 Primary 60J80
 Secondary 60G18