On Conditioning Brownian Particles to Coalesce

We introduce the notion of a conditional distribution to a zero-probability event in a given direction of approximation and prove that the conditional distribution of a family of independent Brownian particles to the event that their paths coalesce after the meeting coincides with the law of a modified massive Arratia flow, defined in Konarovskyi (Ann Probab 45(5):3293–3335, 2017. https://doi.org/10.1214/16-AOP1137).


Introduction
One of the classical systems of interacting particles is the Arratia flow, or coalescing Brownian particles, proposed by R. Arratia in [1] (see also [3,38,39]). It is a family of one-dimensional Brownian motions with the same diffusion rate, starting at every point of the real line and moving independently until they meet. When two particles collide, they coalesce and move together. The model was obtained as a scaling limit of a continuous analog of a family of coalescing random walks on the real line, and the initial interest in its study was its connection with a voter model [1,2]. Later the Arratia flow and its generalization, the Brownian web [21], appeared as scaling limits of seemingly disconnected models like true self-repelling motion [53], Hastings-Levitov planar aggregation models [43], oriented percolation [49], isotropic stochastic flows of homeomorphisms in R [45], solutions to evolutionary stochastic differential equations [13], etc. In particular, this has led to an intensive study of the properties of the Arratia flow. We refer to [55,21,46,17,50,14,24,16,54,42,15] for more details.
However, the classical Arratia flow does not take into account physical characteristics of particles such as mass, spin or charge, which can influence the particle behavior. In [33,36], the first author proposed a physical improvement called the modified massive Arratia flow (shortly MMAF), where the diffusion rate of a particle is inversely proportional to its mass. More precisely, every particle carries a mass that obeys the conservation law: the mass of the new particle that appears after a collision equals the sum of the masses of the colliding particles. This type of interaction makes the particle system more natural from a physical point of view and leads to new local phenomena [32]. It turns out that the MMAF is closely related to the geometry of the Wasserstein space of probability measures on the real line [36] and is also a non-trivial solution to the Dean-Kawasaki equation for supercooled liquids appearing in macroscopic fluctuation theory and in models for glass dynamics in non-equilibrium statistical physics [4,52,9,10,11,12,18,23,29,30,40,47,48,51,56,35,34]. For regularised versions of the Dean-Kawasaki equation see also [7,8,20]. This makes the model a reasonable candidate for a Brownian motion on the Wasserstein space.
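To build intuition for the mass-dependent diffusion described above, the following is a minimal Euler-scheme sketch of massive coalescing particles; it is our illustration, not the paper's construction, and the merging rule (mass-weighted average position) is a discretization convenience.

```python
import random

def simulate_mmaf(positions, dt=1e-4, steps=2000, seed=0):
    """Euler sketch of massive coalescing particles: each particle diffuses
    with rate 1/mass; when two particles cross, they merge, the new mass
    being the sum of the masses (conservation law) and the new position the
    mass-weighted average."""
    rng = random.Random(seed)
    n = len(positions)
    parts = [[x, 1.0 / n] for x in sorted(positions)]  # [position, mass]
    for _ in range(steps):
        for p in parts:
            p[0] += rng.gauss(0.0, (dt / p[1]) ** 0.5)  # diffusion rate 1/mass
        i = 0
        while i < len(parts) - 1:
            x0, m0 = parts[i]
            x1, m1 = parts[i + 1]
            if x1 <= x0:  # paths crossed during the step -> coalesce
                parts[i] = [(x0 * m0 + x1 * m1) / (m0 + m1), m0 + m1]
                del parts[i + 1]
                i = max(i - 1, 0)  # merged particle may overtake its left neighbour
            else:
                i += 1
    return parts
```

The total mass stays equal to 1 throughout, and heavier clusters visibly slow down, which is the qualitative effect distinguishing the MMAF from the classical Arratia flow.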
The main goal of this paper is to show that the MMAF arises by conditioning independent Brownian particles (more precisely, a cylindrical Wiener process) on the event that the particle paths "coalesce" after their meeting. To be more precise, we will justify that the conditional law of a cylindrical Wiener process in L_2[0,1] starting at some non-decreasing function g, given the event of coalescence, is the law of an MMAF. But we pay the price of having to investigate more carefully the notion of a conditional law given a zero-probability event, which only allows us to define it in some directions of approximation. First of all, this observation explains some similarities of the particle model with a Wiener process in a Euclidean space. For instance, the rate function in the large deviation principle for the MMAF has a form similar to the rate function for a usual Wiener process (see [37, Theorem 2.1] for a finite particle system and [36, Theorem 1.4] for the MMAF starting from all points of an interval). Second, we hope this result will shed some light on the uniqueness of the distribution of the MMAF, which is one of the major open problems.
We first introduce a definition of a conditional distribution along a direction, which allows us to interpret the value of the commonly used notion of regular conditional probability at a fixed point (see e.g. [25, Theorem I.3.3] and [28, Theorem 6.3] for the existence of the regular conditional probability).
Let E be a Polish space, let B(E) denote the Borel σ-algebra on E and let P(E) be the space of probability measures on (E, B(E)) endowed with the topology of weak convergence. In general, given a random element X in E and C ∈ B(E) such that P[X ∈ C] = 0, defining the conditional probability P[X ∈ · | X ∈ C] makes no sense if we consider {X ∈ C} as an isolated event. However, one can make a proper definition with the help of regular conditional probability if C is given by C = T^{-1}({z_0}), where z_0 belongs to a metric space F and T : E → F is some measurable map. Let p : B(E) × F → [0,1] be a regular conditional probability¹ of X given T(X). If p(·, z), z ∈ F, is continuous at z_0, then one can define P[X ∈ · | X ∈ C] to be equal to p(·, z_0). But in the general case, the regular conditional probability p is well-defined only for P_{T(X)}-almost every z ∈ F, where P_{T(X)} denotes the law of T(X). Therefore, we will introduce a notion of the value of p at a fixed point along a (random) direction.

Definition 1.1 Let {ξ_n}_{n≥1} be a sequence of random elements in F such that

(B1) for each n ≥ 1, the law of ξ_n is absolutely continuous with respect to the law of T(X);
(B2) {ξ_n}_{n≥1} converges in distribution to z_0 in F.
A probability measure ν on (E, B(E)) is the value of the conditional distribution of X to the event {T(X) = z_0} along the sequence {ξ_n} if for every f ∈ C_b(E)

  E[∫_E f(x) p(dx, ξ_n)] → ∫_E f(x) ν(dx)  as n → ∞,   (1.1)

where p is a regular conditional probability of X given T(X). We denote this measure by ν = Law_{ξ_n}(X | T(X) = z_0).
We remark that the measure ν does not depend on the version of the regular conditional probability p. In Section 2, we explain that the above definition generalizes the case where p is continuous at z_0 and that it is very close to the intuitive definition of the conditional probability P[X ∈ · | X ∈ C] by approximation of the set C. Furthermore, we introduce in Section 2 a method to construct ν.
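Definition 1.1 can be illustrated in a toy finite-dimensional model (our example, not from the paper): take X = (X_1, X_2) standard Gaussian on R², T(X) = X_1, z_0 = 0 and ξ_n ~ N(0, 1/n), which satisfies (B1) and (B2). Here p(·, z) is the law of (z, X_2), so the quantity in (1.1) is just an average of f(ξ_n, X_2):

```python
import random

def value_along_direction(f, n, samples=200_000, seed=1):
    """Monte-Carlo sketch of (1.1) in the toy model X = (X1, X2) standard
    Gaussian on R^2, T(X) = X1, xi_n ~ N(0, 1/n): a regular conditional
    probability is p(., z) = Law(z, X2), so we average f(xi_n, X2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        z = rng.gauss(0.0, n ** -0.5)   # xi_n, abs. continuous w.r.t. Law(T(X))
        x2 = rng.gauss(0.0, 1.0)        # second coordinate sampled from p(., z)
        acc += f(z, x2)
    return acc / samples
```

For f(x_1, x_2) = x_1² + x_2² the exact value is 1/n + 1, so as n grows the estimates approach ∫ f(0, x_2) N(0,1)(dx_2) = 1, the value of the continuous version of p at z_0 = 0.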
) is called a modified massive Arratia flow (shortly MMAF) starting at g if it satisfies the following properties (E1) for all u ∈ (0,1) the process Y(u, ·) is a continuous square-integrable martingale with respect to the filtration

Intuitively, the massive particles Y(u, ·), u ∈ (0,1), evolve like independent Brownian particles with diffusion rates inversely proportional to their masses, until two of them collide. When two particles meet, they coalesce and form a new particle with mass equal to the sum of the masses of the colliding particles.
The random element Y can be identified with an L↑_2-valued process Y_t, t ≥ 0, where L↑_2 is the subset of L_2[0,1] consisting of all functions which have non-decreasing versions. There exists a cylindrical Wiener process W in L_2[0,1] starting at g such that

  Y_t = g + ∫_0^t pr_{Y_s} dW_s,  t ≥ 0,   (1.3)

where for any f ∈ L↑_2, pr_f is the orthogonal projection operator in L_2[0,1] onto the subspace of σ(f)-measurable functions. These results will be recalled with further details and references in Section 3.
Our main result consists in the construction of the following objects and in the following theorem.
(S1) We start from Y, an MMAF starting at a strictly increasing map g. (S2) Then there exists a cylindrical Wiener process W in L_2[0,1] starting at g satisfying (1.3); Y can be seen as the coalescing part of W. (S3) Given X = (Y, W), we decompose W into Y and a non-coalescing part T(X), so that W is completely determined by Y and T(X). We postpone the precise definition of the map T to Section 3.3. We are interested in the conditional distribution of X given the event {T(X) = 0}, which is the event where W coincides with its coalescing part Y.
(S4) For every n ≥ 1, ξ_n is defined as a sequence {ξ^n_j}_{j≥1} of independent Ornstein-Uhlenbeck processes such that {ξ_n}_{n≥1} converges in distribution to zero in the space C[0,∞)^N, equipped with the product topology, and the law of ξ_n is absolutely continuous with respect to the law of T(X), which is the law of a sequence of independent standard Brownian motions.
Unfortunately, we cannot prove the result for an arbitrary sequence {ξ_n} satisfying (B1)-(B2); this does not seem achievable and is possibly not even true. Nevertheless, a sequence of Ornstein-Uhlenbeck processes seems a reasonable choice of {ξ_n} satisfying (B1)-(B2). We refer to Theorem 3.11 for a more precise statement, after T and {ξ_n}_{n≥1}, among others, have been carefully defined.
Our second result is the fact that the pair (Y, W) coupled by equation (1.3) is uniquely determined by the law of Y. It does not imply the uniqueness of the distribution of Y. However, we hope that it could be a first step in the proof that Y is the unique solution to the SDE (1.3).
Theorem 1.4 Let Y_t, t ≥ 0, be an MMAF starting at g. Let W and W̃ be cylindrical Wiener processes in L_2 starting at g and such that (Y, W) and (Y, W̃) satisfy equation (1.3). Then Law(Y, W) = Law(Y, W̃).

Theorem 1.4 has an interest which is independent of the conditional distribution problem, but it is proved using the same techniques as Theorem 1.3. Moreover, as a corollary, one can see that steps (S1) and (S2) in the statement of the main result can be replaced by starting from any pair (Y, W) coupled by (1.3), which is a stronger result.
Content of the paper. In Section 2, we propose a method for the construction of a conditional distribution according to Definition 1.1. In Section 3, we recall the needed properties of the MMAF and define the non-coalescing map T, using a construction of an orthonormal basis of L_2[0,1] which is tailored to the MMAF. Finally, in that section we state the main result in Theorem 3.11. Sections 4 and 5 are devoted to the proofs of Theorem 3.11 and Theorem 1.4, respectively.
2 On conditional distributions

Definition 1.1 is consistent with the continuous case. Indeed, if the map z ↦ p(·, z) is continuous at z_0, then by the continuous mapping theorem p(·, z_0) = Law_{ξ_n}(X | T(X) = z_0) for any sequence {ξ_n}_{n≥1} satisfying (B1) and (B2). In fact, this is an equivalence, as the following lemma shows.

Lemma 2.1 Let z_0 belong to the support of P_{T(X)}. There exists a probability measure ν such that ν = Law_{ξ_n}(X | T(X) = z_0) along any sequence {ξ_n}_{n≥1} satisfying (B1) and (B2) if and only if there exists a version of p which is continuous at z_0 ∈ F. In this case, ν is equal to the value of the continuous version of p at z_0.
We postpone the proof of the lemma to Section A.2 in the appendix.
Remark 2.2 Definition 1.1 extends the intuitive definition of the conditional distribution of X given {X ∈ C} as the weak limit

  lim_{ε→0+} P[X ∈ · | X ∈ C^ε],

where C is a closed subset of E and C^ε denotes its ε-extension, that is, C^ε = {x ∈ E : d_E(C, x) < ε}. We assume P[X ∈ C^ε] > 0 for every ε > 0. Then T can be defined by T(x) := d_E(C, x). We note that {X ∈ C} = {T(X) = 0} and {X ∈ C^ε} = {T(X) < ε} for all ε > 0. The sequence {ξ_n} could then be defined by

  Law(ξ_n) = P[T(X) ∈ · | T(X) < 1/n].

One can easily check that {ξ_n} satisfies conditions (B1) and (B2) with z_0 = 0, and that

  E[∫_E f(x) p(dx, ξ_n)] = E[f(X) | X ∈ C^{1/n}],  f ∈ C_b(E).

Therefore, the weak limit of the sequence (P[X ∈ · | X ∈ C^{1/n}])_{n≥1} coincides with the measure Law_{ξ_n}(X | T(X) = 0) whenever the latter exists.
We next introduce an idea to build a conditional distribution of X given {T(X) = z_0} along a sequence {ξ_n}. The idea is to split the random element X into two independent parts, Y and Z, so that Z has the same law as T(X). More precisely, we assume that there exists a quadruple (G, Ψ, Y, Z) satisfying the following conditions:

(P1) G is a measurable space;
(P2) Y and Z are independent random elements in G and F, respectively;
(P3) Ψ : G × F → E is a measurable map such that T(Ψ(Y, Z)) = Z almost surely;
(P4) Ψ(Y, Z) and X have the same law.

Proposition 2.3 Under assumptions (P1)-(P4), the probability kernel p defined by

  p(A, z) := P[Ψ(Y, z) ∈ A],  A ∈ B(E), z ∈ F,   (2.1)

is a regular conditional probability of X given T(X).
Moreover, if {ξ_n}_{n≥1} is a sequence of random elements in F independent of Y and satisfying (B1) and (B2) of Definition 1.1, then Ψ(Y, ξ_n) converges in distribution to the measure Law_{ξ_n}(X | T(X) = z_0).

Proof Since Ψ is measurable, p defined by (2.1) satisfies properties (R1) and (R2) of Definition A.1. Furthermore, for every A ∈ B(E) and B ∈ B(F)

Moreover, since X and Ψ(Y, Z) have the same law, T(X) and Z = T(Ψ(Y, Z)) have the same law too, so P_Z = P_{T(X)}. This concludes the proof of (R3).
Let f ∈ C_b(E). By (2.1) and Proposition A.2, we know that for any regular conditional probability p of X given T(X), the equality ∫_E f(x) p(dx, z) = E[f(Ψ(Y, z))] holds for P_{T(X)}-almost all z ∈ F. It also holds P_{ξ_n}-almost everywhere by property (B1). By the independence of ξ_n and Y and Fubini's theorem,

By (1.1), the last term tends to ∫_E f(x) ν(dx), where ν = Law_{ξ_n}(X | T(X) = z_0). This concludes the proof of the convergence in distribution.
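The splitting idea behind Proposition 2.3 can be tried out in a toy model (our assumed model, not from the text): let Y ~ N(0,1) and Z ~ N(0,1) be independent, set Ψ(y, z) = (y, y + z), X = Ψ(Y, Z) and T(x_1, x_2) = x_2 − x_1, so that T(Ψ(Y, Z)) = Z. Conditioning along ξ_n ~ N(0, 1/n) then amounts to sampling Ψ(Y, ξ_n):

```python
import random

def conditioned_pair(n, samples=100_000, seed=2):
    """Toy finite-dimensional version of the quadruple (G, Psi, Y, Z):
    Y ~ N(0,1), Z ~ N(0,1) independent, Psi(y, z) = (y, y + z) and
    T(x1, x2) = x2 - x1, so T(Psi(Y, Z)) = Z.  Conditioning along
    xi_n ~ N(0, 1/n) amounts to sampling Psi(Y, xi_n)."""
    rng = random.Random(seed)
    out = []
    for _ in range(samples):
        y = rng.gauss(0.0, 1.0)          # the part playing the role of Y
        xi = rng.gauss(0.0, n ** -0.5)   # the direction xi_n -> 0
        out.append((y, y + xi))          # Psi(Y, xi_n)
    return out
```

As n → ∞ the pairs concentrate on the diagonal {(y, y)}: the second component coincides with the first, a caricature of the statement that, conditionally on {T(X) = 0}, W coincides with its coalescing part Y.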

Precise statement of the main result
In the introduction, we announced the construction of several objects, including a modified massive Arratia flow (MMAF) and a non-coalescing remainder map T. The main part of this construction is the definition of an orthonormal basis of L_2[0,1] which is tailored to the MMAF. In this section, we follow steps (S1)-(S4) from the introduction and finally restate Theorem 1.3 in a more precise form, see Theorem 3.11.

MMAF and set of coalescing paths
In this section, we define the set Coal of coalescing trajectories in an infinite-dimensional space and recall important properties of the MMAF introduced in Definition 1.2, in order to show that it takes values in Coal almost surely. Since they are not the central issue of this paper, the proofs in this section will be succinct, but we refer to previous works or to the appendix for the detailed versions.
We can interpret y as a deterministic particle system, where y_t(u), t ≥ 0, describes the trajectory of the particle labeled by u. Condition (G3) means that there is only a finite number of particles at each positive time. By Condition (G4), two particles coalesce when they meet. Moreover, by Condition (G5), there can be at most one coalescence at each time, and the number of particles equals one for large times.
Note that, according to Lemma B.2 in the appendix, the set Coal is measurable in C([0,∞), L↑_2). We will also consider Coal as a metric subspace of C([0,∞), L↑_2). Recall the following existence property of the modified massive Arratia flow.
Equivalently, we may also define an MMAF as an L↑_2-valued process, in the following sense. For every f ∈ L↑_2, pr_f denotes the orthogonal projection operator in L_2 onto the subspace of σ(f)-measurable functions.
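In a discrete setting the projection pr_f is simply conditional averaging. The following sketch (our illustration) represents functions on [0,1] by their values on a uniform grid; for a step function f, projecting h onto the σ(f)-measurable functions replaces h on each step of f by its average there.

```python
def project_onto_steps(f, h):
    """Discrete sketch of pr_f: f and h are equal-length lists of values on a
    uniform grid of [0, 1]; the projection onto sigma(f)-measurable functions
    replaces h, on each maximal run where f is constant (a "step" of f),
    by its average over that run."""
    out = []
    i = 0
    while i < len(f):
        j = i
        while j < len(f) and f[j] == f[i]:  # locate the step of f containing i
            j += 1
        avg = sum(h[i:j]) / (j - i)
        out.extend([avg] * (j - i))
        i = j
    return out
```

One checks directly that the map is idempotent and leaves f itself fixed, as an orthogonal projection should.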

Lemma 3.4
The process Y t , t ≥ 0, belongs almost surely to Coal.

MMAF and cylindrical Wiener process
The goal of this section is the precise construction of a cylindrical Wiener process W for which equality (1.3) holds for a given MMAF Y. This completes step (S2) from the introduction.
For every f ∈ L↑_2, let L_2(f) denote the subspace of L_2 consisting of σ(f)-measurable functions. In particular, if f is of the form (3.1), then L_2(f) consists of all step functions which are constant on each π_j. Moreover, for any progressively measurable process κ_t, t ≥ 0, in L_2 and for any cylindrical Wiener process B in L_2, we denote

Proposition 3.5 Let g ∈ L↑_{2+} and let Y_t, t ≥ 0, be an MMAF starting at g. Let B_t, t ≥ 0, be a cylindrical Wiener process in L_2 starting at 0, defined on the same probability space and independent of Y. Then the process W_t, t ≥ 0, defined by (3.2) is a cylindrical Wiener process in L_2 starting at g, where equality (3.2) should be understood³ as follows:

Moreover, (Y, W) satisfies equation (1.3).
Proof It follows from Property (M3) and from [22, Corollary 2.2] that there exists a cylindrical Wiener process B̃ in L_2 starting at 0 (possibly on an extended probability space, also denoted by (Ω, F, P)) such that

Moreover, we may assume that B is independent of B̃. The map h ↦ W_t(h) defined by (3.2) is linear. Let (F_t)_{t≥0} be the natural filtration generated by B and B̃. Let us check that W_t(h), t ≥ 0, is an (F_t)-Brownian motion starting at (g, h)_{L_2} with diffusion rate ‖h‖²_{L_2} for any h ∈ L_2. Using the independence of B and B̃, we have that W_t(h), t ≥ 0, is a continuous (F_t)-martingale with quadratic variation

This implies that W is a cylindrical Wiener process.
Moreover, for every h ∈ L_2 and t ≥ 0,

Note that it is not obvious whether every cylindrical Wiener process W in L_2 starting at g and satisfying (1.3) is necessarily of the form (3.2). Actually, this is the content of Theorem 1.4 and will be proved in Section 5.

³ The process pr⊥_{Y_t}, t ≥ 0, does not take values in the space of Hilbert-Schmidt operators in L_2. Therefore, the integral ∫_0^t pr⊥_{Y_s} dB_s is not well-defined, but h ↦ ∫_0^t pr⊥_{Y_s} h · dB_s is.

Construction of non-coalescing remainder map
From now on and until the end of Section 4, we fix a strictly increasing function g in L↑_{2+} and X := (Y, W), where Y_t, t ≥ 0, is a modified massive Arratia flow starting at g and W_t, t ≥ 0, is defined by (3.2). In particular, the assumption on g implies that L_2(g) = L_2. In this section, we carry out step (S3) from the introduction.
Let us introduce, for every y ∈ Coal, the corresponding coalescence times:

Since g is a strictly increasing function, one has N(g) = +∞, and therefore the family {τ^y_k, k ≥ 0} is strictly decreasing for all y ∈ Coal, i.e.

Now we are going to define an orthonormal basis {e^y_l, l ≥ 0} of L_2 associated with each y ∈ Coal.
Lemma 3.6 For each y ∈ Coal there exists a unique orthonormal basis {e^y_l, l ≥ 0} of L_2 such that

1) the family {e^y_l, 0 ≤ l < k} is a basis of L_2(y_{τ^y_k}) for each k ≥ 1;
2) (e^y_l, 1_{[0,u]})_{L_2} ≥ 0 for every u ∈ (0,1).

Moreover, the family {e^y_l, l ≥ k} is a basis of H^y_k for each k ≥ 1.

In other words, the map t ↦ pr_{y_t} is a projection map onto a subspace which decreases by exactly one dimension whenever a coalescence of y occurs, and the basis {e^y_l, l ≥ 0} is adapted to that decreasing sequence of subspaces.
We say that an interval I is a step of a map f if f is constant on I but not constant on any interval strictly larger than I. At time τ^y_k a coalescence occurs, so there exist a < b < c such that [a, b) and [b, c) are steps of y_{τ^y_{k+1}} and [a, c) is a step of y_{τ^y_k}. We call b the coalescence point of y_{τ^y_k}. The only possible choice for e^y_k so that it has norm 1, belongs to L_2(y_{τ^y_{k+1}}), is orthogonal to every element of L_2(y_{τ^y_k}) and satisfies Condition 2) is

  e^y_k = ((c−b)/((c−a)(b−a)))^{1/2} 1_{[a,b)} − ((b−a)/((c−a)(c−b)))^{1/2} 1_{[b,c)}.

By the construction of e_k in Lemma 3.6, (Y_t, e_k)_{L_2} vanishes for all t ≥ τ_k. Thus, we note that for t ∈ [0, τ_k], W_t(e_k) = (Y_t, e_k)_{L_2} and that W_{τ_k}(e_k) = 0, whereas for t ≥ τ_k, W_t(e_k) = B_t(e_k) − B_{τ_k}(e_k). Since B is independent of Y and thus of e_k, B_t(e_k) is well-defined by B_t(e_k) = ∫_0^t e_k · dB_s, t ≥ 0. To recap, in the space direction e_k the projection of W equals the projection of its coalescing part Y before the stopping time τ_k, and equals the projection of a noise B which is independent of Y after τ_k. Therefore, we formally define ξ = T(X) = T(Y, W) by

  ξ_t(h) = Σ_{k≥1} (e_k, h)_{L_2} W_{t+τ_k}(e_k),  t ≥ 0.

More rigorously⁴, we define ξ_t as a map from the Hilbert space

In order to prove the above statement, we start with the following lemma.

Lemma 3.9 The processes W_{·+τ_k}(e_k), k ≥ 1, are independent standard Brownian motions that do not depend on the MMAF Y.

Proof Let us denote

We fix n ≥ 1 and show that the processes Y and η_k, k ∈ [n], are independent and that η_k, k ∈ [n], are standard Brownian motions. Let

be bounded measurable functions. By the strong Markov property of B and the independence of B and Y,

are independent standard Brownian motions. Therefore, we can compute

where w_k, k ∈ [n], are independent standard Brownian motions that do not depend on Y. This completes the proof of the lemma.

⊓ ⊔
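The basis element created at a coalescence takes constant values α on [a, b) and −β on [b, c), with the coefficients fixed by the three constraints of Lemma 3.6: unit L_2-norm, orthogonality to functions constant on [a, c), and positivity left of the coalescence point b. A quick numerical check of these constraints (our sketch; the closed-form coefficients are derived from the constraints, not copied from the paper):

```python
def coalescence_basis_vector(a, b, c):
    """Coefficients (alpha, beta) of the function alpha*1_[a,b) - beta*1_[b,c)
    with unit L2-norm and zero mean on [a, c), positive on [a, b)."""
    alpha = ((c - b) / ((c - a) * (b - a))) ** 0.5
    beta = ((b - a) / ((c - a) * (c - b))) ** 0.5
    return alpha, beta

def check_conditions(a, b, c):
    """Return (||e||^2, (e, 1_[a,c)), alpha > 0) for the candidate e."""
    alpha, beta = coalescence_basis_vector(a, b, c)
    norm_sq = alpha ** 2 * (b - a) + beta ** 2 * (c - b)  # should be 1
    mean = alpha * (b - a) - beta * (c - b)               # should be 0
    return norm_sq, mean, alpha > 0
```

Since the constraints determine α and β uniquely (up to the sign fixed by Condition 2), this is consistent with the uniqueness claim of Lemma 3.6.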
Proof (of Proposition 3.8) Let h ∈ L⁰_2 and y ∈ Coal be fixed. For every n ∈ N we define

where η_k, k ≥ 1, are defined by (3.6). By Lemma 3.9, η_k, k ≥ 1, are independent standard Brownian motions; hence M^{y,n}_t(h), t ≥ 0, is a continuous square-integrable martingale with respect to the filtration (F^η_t)_{t≥0} generated by η_k, k ≥ 1, with quadratic variation ⟨M^{y,n}(h)⟩_t = Σ_{k=1}^n (e^y_k, h)²_{L_2} t, t ≥ 0. Moreover, for each T > 0 the sequence of processes {M^{y,n}(h)}_{n≥1} restricted to the interval [0, T] converges in L_2(Ω, C[0, T]). Indeed, for each m < n, by Doob's inequality

The sum

for every T > 0, and therefore in C[0,∞). Recall that by Lemma 3.9 the sequence {η_k}_{k≥1} is independent of Y, and by Lemma 3.4, Y belongs to Coal almost surely. Then Σ_{k=1}^∞ (e_k, h)_{L_2} η_k also converges almost surely in C[0,∞) to a limit that we have called ξ(h).
Moreover, similarly to the proof of Lemma 3.9, one shows that the processes Y and {ξ(h_i), i ∈ [n]}, for any h_i ∈ L⁰_2, i ∈ [n], n ≥ 1, are independent. We conclude that ξ is independent of Y.
Let us show that ξ is a cylindrical Wiener process. Obviously, h ↦ ξ(h) is a linear map. We denote F^{η,Y}_t = F^η_t ∨ σ(Y), t ≥ 0. We need to check that for every h ∈ L⁰_2, ξ(h) is an (F^{η,Y}_t)-Brownian motion. According to Lévy's characterization of Brownian motion [25, Theorem II.6.1], it is enough to show that ξ(h) is a continuous square-integrable (F^{η,Y}_t)-martingale with quadratic variation ‖h‖²_{L_2} t. So, we take n ≥ 1 and a bounded measurable function

Then, using Lemma 3.9 and the fact that M^y(h) is an (F^η_t)-martingale, we have for every s < t

Hence ξ is an (F^{η,Y}_t)-cylindrical Wiener process in L⁰_2 starting at 0. This finishes the proof of the proposition.

⊓ ⊔
We conclude this section by properly defining the space E on which the random element X takes values and the non-coalescing remainder map T : E → F needed to achieve step (S3) from the introduction. However, as we already noted, the cylindrical Wiener process W is not a random element in

Here, C[0,∞) is the space of continuous functions from [0,∞) to R equipped with its usual Fréchet distance, C_0[0,∞) denotes the subspace of all functions vanishing at 0, and N_0 := N ∪ {0}. Equipped with the metric induced by the product topology, E is a Polish space. Now we fix an orthonormal basis {h_j, j ≥ 0} of L_2 such that h_0 = 1_{[0,1]}. In particular, {h_j, j ≥ 1} is an orthonormal basis of L⁰_2. We identify the cylindrical Wiener process W with the following random element in C[0,∞)^{N_0}:

for all t ≥ 0 and h ∈ L_2, where the series converges in C[0,∞) almost surely for every h ∈ L_2.

Statement of the main result
Let us clarify step (S4) from the introduction. According to Definition 1.1, we need to define a random sequence converging to 0 in distribution and such that P_{ξ_n} is absolutely continuous with respect to the law of T(X). By (3.8) and Proposition 3.8, P_{T(X)} is the law of a sequence of independent Brownian motions.
For each n ≥ 1, let ξ_n := (ξ^n_j)_{j≥1} be the sequence of Ornstein-Uhlenbeck processes, independent of Y, that are strong solutions to the equations

where {α^n_j, n, j ≥ 1} is a family of non-negative real numbers such that (O1) for every n ≥ 1 the series

The event {T(X) = 0}, which equals {ξ = 0}, is by construction the event where the non-coalescing part of W vanishes.

Remark 3.12 For simplicity, we assumed in Sections 3.3 and 3.4 that the initial condition g is strictly increasing. Actually, everything remains true if g is an arbitrary element of L↑_{2+}, up to replacing the space L_2 by the space L_2(g). In particular, if g is a step function, then L_2(g) has finite dimension, equal to N(g), and the orthonormal basis constructed in Lemma 3.6 and the sum in the definition of ξ consist of finitely many summands.
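To see why Ornstein-Uhlenbeck processes are a natural choice of direction, the following is an exact-discretization sketch of a single mean-reverting process dξ = −αξ dt + dw, ξ(0) = 0 (the drift form and the single parameter α are our simplifying assumptions; the paper's equation (3.9) with coefficients α^n_j is not reproduced here):

```python
import math, random

def ou_path(alpha, T=1.0, steps=1000, seed=3):
    """Exact discretization of the Ornstein-Uhlenbeck process
    d xi = -alpha * xi dt + dw(t), xi(0) = 0, on [0, T]."""
    rng = random.Random(seed)
    dt = T / steps
    decay = math.exp(-alpha * dt)
    step_sd = math.sqrt((1.0 - decay * decay) / (2.0 * alpha))
    xi, path = 0.0, [0.0]
    for _ in range(steps):
        xi = decay * xi + step_sd * rng.gauss(0.0, 1.0)  # exact transition law
        path.append(xi)
    return path
```

The stationary variance is 1/(2α), so letting the mean-reversion parameters grow drives the processes to 0 in distribution, while for each fixed parameter Girsanov's theorem keeps the law absolutely continuous with respect to Wiener measure, which is exactly what conditions (B1)-(B2) ask for.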

Proof of the main theorem
In order to prove Theorem 3.11, we follow the strategy introduced in Section 2. We start with the construction of a quadruple (G, Ψ, Y, Z) satisfying (P1)-(P4). The idea behind the construction of Ψ is inspired by Proposition 3.5, which states that W can be built from the MMAF Y and some independent process.

Construction of quadruple
Define G := Coal, Y := Y and Z := Z, where Z is a cylindrical Wiener process in L⁰_2 starting at 0 that is independent of Y. By the same identification as previously, for the same basis {h_j, j ≥ 0}, Z_t = (Z_j(t))_{j≥1} := (Z_t(h_j))_{j≥1}, t ≥ 0, is a sequence of independent standard Brownian motions and is a random element in F. Therefore, properties (P1) and (P2) are satisfied.
We define

where ϕ_t(Y, Z) is a map from L_2 to L_2(Ω) defined by

Since Z(e_k), k ≥ 1, are independent standard Brownian motions, one can easily check that R^{y,n}_t(h), t ≥ 0, is a continuous square-integrable martingale with respect to the filtration generated by Z_{t−τ^y_k}(e_k), k ≥ 1. As in the proof of Proposition 3.8, one can show that the sequence of partial sums {R^{y,n}(h)}_{n≥1} converges in C[0,∞) almost surely for each y ∈ Coal. By the independence of Z(e_k), k ≥ 1, and Y, one can see that the series

Next, we claim that there exists a cylindrical Wiener process θ_t, t ≥ 0, in L⁰_2 starting at 0, independent of Y, such that

For each fixed y ∈ Coal, the family

has the same distribution as

Therefore, using the independence of Y and θ on the one hand and the independence of Y and Z on the other hand, we obtain the equality

This relation and equalities (4.1) and (4.2) yield that the law of X = (Y, W) equals the law of Ψ(Y, Z) = (Y, ϕ(Y, Z)). In particular, ϕ(Y, Z) is a cylindrical Wiener process in L_2 starting at g.

⊓ ⊔
Moreover, there exists a measurable map ϕ :

Proof By the continuity in t of T(Ψ(Y, Z))_j(t) and Z_j(t), it is enough to show that for each t ≥ 0 and j ≥ 1, almost surely T(Ψ(Y, Z))_j(t) = Z_j(t). Since {h_i, i ≥ 1} is an orthonormal basis of L⁰_2, we have

By (4.1) and Lemma 3.6, we have

Hence, almost surely

Thus, Property (P3) holds. Hence, by Proposition 2.3, the probability kernel p defined by (2.1), i.e. by p(A, z) = P[Ψ(Y, z) ∈ A] for all A ∈ B(E) and z ∈ F, is a regular conditional probability of X given T(X).

Value of p along a sequence of Ornstein-Uhlenbeck processes
According to Proposition 2.3, it remains to show the following in order to complete the proof of Theorem 3.11. Let {ξ_n}_{n≥1} be the sequence defined by (3.9) and independent of Y. Let Ψ be defined by (4.3). Then Ψ(Y, ξ_n) converges in distribution to (Y, Y).
For y ∈ Coal we consider

where the map ϕ : E → C[0,∞)^{N_0} was defined in Section 4.1. Since for every n ≥ 1 the law of ξ_n is absolutely continuous with respect to P_ξ (which is equal to P_Z), we have that for P_Y-almost all y ∈ Coal,

for each j ≥ 0, where the series converges in C[0,∞) almost surely. Without loss of generality, we may assume that equality (4.5) holds for all y ∈ Coal.
Otherwise, we can work with a measurable subset of Coal of P Y -measure one for which equality (4.5) holds.
Let us fix y ∈ Coal satisfying the assumption of Proposition 4.5.Before starting the proof, we define for all j ≥ 0 and Indeed, this will imply that Let us first prove some auxiliary lemmas.

Lemma 4.6
The sequence of random elements

Proof In order to prove the lemma, we first show that the sequence {ξ_n}_{n≥1} is tight in C[0,∞)^N. This will imply that the sequence {ξ_n}_{n≥1} is relatively compact, by Prohorov's theorem. Then we will show that every (weakly) convergent subsequence of {ξ_n}_{n≥1} converges to 0, which will immediately yield the claim.

According to [19, Proposition 3.2.4], the tightness of {ξ_n}_{n≥1} will follow from the tightness of {ξ^n_j}_{n≥1} in C[0,∞) for every j ≥ 1. So, let j ≥ 1 and T > 0 be fixed. Since the covariance of Ornstein-Uhlenbeck processes is well-known, one can easily check that for every n ≥ 1 and every 0 ≤ s ≤ t ≤ n,

where 1/0 := +∞. Since ξ^n_j is a Gaussian process, it follows that for every 0 ≤ s ≤ t ≤ T and every n ≥ T,

Moreover, ξ^n_j(0) = 0. Hence, by the Kolmogorov-Chentsov tightness criterion (see e.g. [28, Corollary 16.9]), the sequence of processes {ξ^n_j}_{n≥1} is tight in C[0,∞).

Next, let {ξ_n}_{n≥1} converge in distribution to ξ_∞ in C[0,∞)^N along a subsequence N' ⊆ N. Then for every t ≥ 0 and j ≥ 1, {ξ^n_j(t)}_{n≥1} converges in distribution to ξ^∞_j(t) in R along N'. On the other hand, for each n ≥ t,

by (4.7) and Assumption (O2) in Section 3.4. Hence ξ^∞_j(t) = 0 almost surely for all t ≥ 0 and j ≥ 1. Thus, we have obtained that ξ_∞ = 0, and therefore the whole sequence converges to 0.

To prove that {R_n}_{n≥1} converges to 0, we will use the same argument as in the proof of Lemma 4.6. So, we start with the tightness of {R_n}.

Lemma 4.7 Under the assumption of Proposition 4.5, the sequence {R
Proof Again, according to [19, Proposition 3.2.4], it is enough to check that the sequence {R^n_j}_{n≥1} is tight in C[0,∞) for every j ≥ 0. For j = 0, R^n_0 = 0, so the result is obvious. So, let j ≥ 1 be fixed. We set

Then R^n_j = R^{n,1}_j + R^{n,2}_j. We will prove the tightness separately for {R^{n,1}_j}_{n≥1} and {R^{n,2}_j}_{n≥1}.

Tightness of {R^{n,1}_j}_{n≥1}. Using the fact that {e^y_k, k ≥ 1} and {h_l, l ≥ 1} are bases of L⁰_2, a simple computation shows that almost surely

Due to the absolute continuity of the law of ξ_n with respect to the law of ξ and the equality Γ_j(ξ_n) = R^{n,1}_j, we get that R^{n,1}_j = ξ^n_j. Hence it follows from Lemma 4.6 that R^{n,1}_j converges in distribution to 0 in C[0,∞). In particular, {R^{n,1}_j}_{n≥1} is tight in C[0,∞), according to Prohorov's theorem.

Tightness of {R^{n,2}_j}_{n≥1}.
Step I. For any t ∈ [0, n] the vector

belongs almost surely to L⁰_2 and E‖V^n_t‖²_{L_2} ≤ Σ_{k=1}^∞ (t ∧ τ^y_k) < ∞. Indeed, by Parseval's equality (with respect to the orthonormal family {e^y_k, k ≥ 1}) and by the independence of {ξ^n_l}_{l≥1},

where

Since ξ^n_l(0) = 0, we have (4.9)

By inequality (4.7), we can deduce that

Therefore, by Parseval's identity (with respect to the orthonormal family {h_l, l ≥ 1}),

Moreover,

Therefore, for any t ∈ [0, n], V^n_t belongs to L⁰_2 almost surely. In particular, for every t ∈ [0, n] the inner product (V^n_t, h_j)_{L_2} is well-defined, and almost surely R^{n,2}_j(t) = (V^n_t, h_j)_{L_2}.
Step II. Let T > 0. There exists a constant C_{y,ε}, depending on y and ε, such that for all 0 ≤ s ≤ t ≤ T and n ≥ T,

Indeed, proceeding as in Step I, we get

where we use inequality (4.7) as before. The series Σ_{k=1}^∞ (τ^y_k)^{1−ε} converges by the assumption on y, which completes the proof of Step II.
Step III. There exist α > 0, β > 0 and a constant C_{y,ε}, depending on y and ε, such that for all 0 ≤ s ≤ t ≤ T and n ≥ T,

Indeed, for any

The statement of Step III follows by choosing p larger than 1/ε.

Conclusion of the proof.
As the sum of two tight sequences, the sequence {R^n_j}_{n≥1} is tight in C[0,∞) for any j ≥ 1. Since C[0,∞)^N is equipped with the product topology, it follows from [19, Proposition 3.2.4] that the sequence

Proof Let j ≥ 1 and t ≥ 0 be fixed. We recall that

→ 0 follows immediately from inequality (4.7).
Due to the equality R^{n,2}_j(t) = (V^n_t, h_j)_{L_2}, we can estimate for n ≥ t

By (4.9) and (4.7), we have for every k, l ≥ 1

Therefore, inequalities (4.10) and (4.11) and the dominated convergence theorem imply that E‖V^n_t‖²_{L_2} → 0. Therefore, Proposition 4.5 and the independence of Y and {ξ_n} complete the proof. ⊓ ⊔

Coupling of MMAF and cylindrical Wiener process
We have already seen, in Proposition 3.5 and its proof, that for every MMAF Y starting at g there exists a cylindrical Wiener process W in L 2 starting at g such that equation (1.3) holds.However, it is unknown whether equation (1.3) has a strong solution.
In Proposition 3.5, we considered a process W defined by (3.2) and proved that the pair (Y, W) satisfies (1.3). The reverse statement holds true, in the following sense.

Proposition 5.1 Let Y_t, t ≥ 0, be an MMAF and W_t, t ≥ 0, a cylindrical Wiener process in L_2, both starting at g and such that (Y, W) satisfies (1.3). Then there exists a cylindrical Wiener process B_t, t ≥ 0, in L_2 starting at 0, independent of (Y, W), such that for every h ∈ L_2, almost surely

Proposition 5.1 directly implies the statement of Theorem 1.4. Before proving Proposition 5.1, we show several auxiliary statements.
Recall that we denote e_k := e^Y_k and τ_k := τ^Y_k, and that for every k ≥ 1 the random element e_k is F^Y_{τ_k}-measurable. Let (F^X_t)_{t≥0} be the complete right-continuous filtration generated by X := (Y, W).
For every $k \ge 1$ we remark that, almost surely, the random element $e_l$ is $\mathcal F^X_{\tau_l}$-measurable, hence also $\mathcal F^X_{\tau_k}$-measurable. Therefore, the process is well-defined.

Lemma 5.2 The processes $Y$, $W^k(e_k)$, $k \ge 1$, are independent.
In order to prove that lemma, we start with some auxiliary definitions and results.

Lemma 5.3 The process $Y$ is measurable with respect to $\mathcal G_k$.

Proof In order to show the measurability of $Y$ with respect to $\mathcal G_k$, it is enough to show the measurability of $Y_{\tau_k + t}$, $t \ge 0$.
By Corollary B.8, we know that for every $g \in St$ and every cylindrical Wiener process $W$ there exists a unique continuous $L^\uparrow_2$-valued process $Y$ satisfying the corresponding equation almost surely, where $W^g_t = \int_0^t \mathrm{pr}_g\, dW_s$, $t \ge 0$. Let us consider equation (5.3). We note that $Y_{\tau_k}$ belongs to $St$ almost surely and is independent of $W^k$. Furthermore, the process $Y_{\tau_k + t}$, $t \ge 0$, is a strong solution to (5.3). Therefore, it is uniquely determined by the pair $(Y_{\tau_k}, W^k)$.

Lemma 5.4 For fixed $y$ and $k$, the processes $W^k(e^y_l)$, $l \ge k$, are independent standard Brownian motions that do not depend on $\zeta^{y,k}$.

Proof By Lemma 3.6, the family $\{e^y_l,\ l \ge 0\}$ is orthonormal. Consequently, $W^k(e^y_l)$, $l \ge 0$, are independent Brownian motions. Moreover, by Lemma 3.6 again, $\zeta^{y,k}_t = \sum_{j=0}^{k-1} e^k_j\, W^k_t(e^y_j)$, $t \ge 0$; thus it is independent of $W^k(e^y_l)$, $l \ge k$. ⊓ ⊔

Lemma 5.5 For every $k \ge 1$ the processes $W^k(e_l)$, $l \ge k$, are independent Brownian motions and do not depend on $\mathcal G_k$. Furthermore, for each $l > k$, almost surely $W^k_t(e_l) = W^l_{t + \tau_k - \tau_l}(e_l) - W^l_{\tau_k - \tau_l}(e_l)$, $t \ge 0$.

Proof Let $n \ge k$ and $m \ge 1$ be fixed. Let $h_j$, $j \ge 0$, be an arbitrary orthonormal basis of $L_2$. We consider arbitrary bounded measurable functions and compute the corresponding joint expectation, using the independence of $W^k$ from $\mathcal F^X_{\tau_k}$.
Then we apply Lemma 5.4 and denote by $w_l$, $l = k, \ldots, n$, a family of independent standard Brownian motions that do not depend on $Y$ and $W$.
which completes the proof of the first part of the statement.
Furthermore, for every $l > k$, we remark that $e_l$ and $\tau_l$ are $\mathcal G_k$-measurable because they are $\mathcal F^Y_{\tau_l}$-measurable, and the corresponding random element is also $\mathcal G_k$-measurable. This finishes the proof of the second part of the lemma.
⊓ ⊔

Next, we define the gluing map $\mathrm{Gl}$, where $a^+ := a \vee 0$. It is easily seen that the map $\mathrm{Gl}$ is continuous and therefore measurable.
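The displayed formula defining $\mathrm{Gl}$ did not survive in this copy. A standard gluing of two continuous paths at a time $r$, consistent with the remark below that gluing two independent Brownian motions yields a Brownian motion (this reconstruction is our assumption, not the paper's verbatim definition), is:

```latex
\mathrm{Gl}(w^1, w^2, r)_t \;:=\; w^1_{t \wedge r} + w^2_{(t - r)^+}, \qquad t \ge 0 .
```

For $t \le r$ the glued path follows $w^1$; for $t > r$ it continues from $w^1_r$ with the increments of $w^2$ (recall $w^2_0 = 0$), so continuity of the map $\mathrm{Gl}$ is immediate.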
Since, almost surely, $W^k_t(e_l) = W^l_{t + \tau_k - \tau_l}(e_l) - W^l_{\tau_k - \tau_l}(e_l)$, $t \ge 0$, for every $l > k \ge 1$, a simple computation shows that identity (5.5) holds almost surely for every $l > k \ge 1$, where $\tau_{k,l} := \tau_k - \tau_l$.
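The display labelled (5.5) is missing here. Assuming the gluing form $\mathrm{Gl}(w^1, w^2, r)_t = w^1_{t \wedge r} + w^2_{(t-r)^+}$ (our reconstruction), the relation $W^k_t(e_l) = W^l_{t+\tau_{k,l}}(e_l) - W^l_{\tau_{k,l}}(e_l)$ suggests that (5.5) reads:

```latex
W^l_t(e_l) \;=\; \mathrm{Gl}\bigl( W^l_{\cdot \wedge \tau_{k,l}}(e_l),\; W^k(e_l),\; \tau_{k,l} \bigr)_t, \qquad t \ge 0 ,
```

which is exactly what is used later to conclude that $W^l(e_l)$ is measurable with respect to $W^l_{\cdot \wedge \tau_{k,l}}(e_l)$ and $W^k(e_l)$.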
Proof (Proof of Lemma 5.2) In order to prove this lemma, it is enough to show that for each $k \ge 1$, $W^k(e_k)$ is independent of $Y$, $W^l(e_l)$, $l > k$.
Let $\mathcal H_k$ denote the complete σ-algebra generated by $\mathcal G_k$ and $W^k(e_l)$, $l > k$. By Lemma 5.5, the process $W^k(e_k)$ is independent of $\mathcal H_k$. Moreover, for every $l > k$, by Lemma 5.3, $Y$ and $\tau_{k,l}$ are $\mathcal G_k$-measurable, hence $\mathcal H_k$-measurable. By Lemma 5.5 and by the definition of $\mathcal H_k$, we also see that $W^l_{\cdot \wedge \tau_{k,l}}(e_l)$ and $W^k(e_l)$ are $\mathcal H_k$-measurable. By (5.5), it follows that $W^l(e_l)$ is $\mathcal H_k$-measurable for every $l > k$. Therefore $Y$, $W^l(e_l)$, $l > k$, are independent of $W^k(e_k)$.
Note that if $w^1$ and $w^2$ are independent standard Brownian motions and $r > 0$, then the process $\mathrm{Gl}(w^1, w^2, r)$ is a standard Brownian motion. It follows that for any fixed $y \in \mathrm{Coal}$ the corresponding glued processes form a family of independent standard Brownian motions. Thus for every $y \in \mathrm{Coal}$ the required equality of distributions holds, where $w^k$, $k \in [n]$, denotes an arbitrary family of independent standard Brownian motions. This easily implies the statement of the lemma, because $B^k$, $k \in [n]$, are independent standard Brownian motions by Lemma 5.6.
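The fact that gluing two independent Brownian motions produces a Brownian motion can be checked numerically. The sketch below is our illustration (the grid, the seed, and the assumed form of $\mathrm{Gl}$ are not from the paper): it glues two batches of simulated Brownian paths at $r = 0.4$ and inspects the marginal at time $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_paths(n_paths, n_steps, dt):
    """Standard Brownian paths on a uniform grid, started at 0."""
    inc = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    return np.concatenate([np.zeros((n_paths, 1)), np.cumsum(inc, axis=1)], axis=1)

def glue(w1, w2, r_idx):
    """Assumed gluing Gl(w1, w2, r): follow w1 up to grid index r_idx,
    then continue with the increments of w2 (w2 starts at 0)."""
    glued = w1.copy()
    glued[:, r_idx:] = w1[:, r_idx:r_idx + 1] + w2[:, :w1.shape[1] - r_idx]
    return glued

w1 = brownian_paths(20000, 100, 0.01)  # paths on [0, 1]
w2 = brownian_paths(20000, 100, 0.01)
glued = glue(w1, w2, 40)               # glue at r = 0.4
# the glued marginal at t = 1 should have mean ≈ 0 and variance ≈ 1
```

Indeed, $\mathrm{Var}(w^1_r + w^2_{1-r}) = r + (1-r) = 1$ by independence, matching the Brownian marginal.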
⊓ ⊔ Now, we finish the proof of Proposition 5.1.
Proof (Proof of Proposition 5.1) Define the process $B$. Since $B^k$, $k \ge 0$, are independent Brownian motions that do not depend on $Y$, and hence on $e_k$, $k \ge 1$, one can show, similarly to the proof of Lemma 3.9, that the series converges in $C[0,\infty)$ almost surely for every $h \in L_2$, and that $B_t$, $t \ge 0$, is a cylindrical Wiener process in $L_2$ starting at $0$. Moreover, $B$ is independent of $Y$. Indeed, this follows by computing, for any $n \ge 1$, any $h_1, \ldots, h_n \in L_2$ and any bounded measurable functions, the corresponding expectations, where $w^k$, $k \in [n]$, denotes an arbitrary family of independent standard Brownian motions.
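The displayed definition of $B$ is lost in this copy. Given the basis $\{e_k,\ k \ge 0\}$ and the independent Brownian motions $B^k$, a natural candidate consistent with the stated convergence claim — our reconstruction, not the paper's verbatim formula — is:

```latex
B_t(h) \;:=\; \sum_{k \ge 0} (h, e_k)_{L_2}\, B^k_t, \qquad t \ge 0,\; h \in L_2 ,
```

with the series converging in $C[0,\infty)$ almost surely, as in the proof of Lemma 3.9.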
Moreover, since $\mathrm{pr}^\perp_{Y_t} e_k = \mathbb 1_{\{t \ge \tau_k\}} e_k$, we easily check that the required identity holds for all $t \ge 0$, which implies equality (5.1).

⊓ ⊔
A Appendix: Regular conditional probability

A.1 Definition
Let $E$ be a Polish space and $F$ be a metric space. We consider random elements $X$ and $\xi$ in $E$ and $F$, respectively, defined on the same probability space $(\Omega, \mathcal F, \mathbb P)$. Let also $\mathcal B(E)$ (resp. $\mathcal B(F)$) denote the Borel σ-algebra on $E$ (resp. $F$), and let $\mathcal P(E)$ be the space of probability measures on $(E, \mathcal B(E))$ endowed with the topology of weak convergence.
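Definition A.1 itself appears to be missing from this copy. The standard formulation, following the treatment cited as [28], reads: a map $p : F \times \mathcal B(E) \to [0,1]$ is a regular conditional probability of $X$ given $\xi$ if

```latex
\begin{aligned}
&\text{(i)}\quad p(z, \cdot) \in \mathcal P(E) \text{ for every } z \in F;\\
&\text{(ii)}\quad z \mapsto p(z, A) \text{ is } \mathcal B(F)\text{-measurable for every } A \in \mathcal B(E);\\
&\text{(iii)}\quad \mathbb P\bigl(X \in A \mid \xi\bigr) = p(\xi, A) \text{ a.s. for every } A \in \mathcal B(E).
\end{aligned}
```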
Recall the following existence and uniqueness result (see e.g. [28, Theorem 6.3]):

Proposition A.2 There exists a regular conditional probability of $X$ given $\xi$. Moreover, it is unique in the following sense: if $p$ and $p'$ are regular conditional probabilities of $X$ given $\xi$, then $p(z, \cdot) = p'(z, \cdot)$ for $\mathbb P^\xi$-almost every $z \in F$.

A.2 Proof of Lemma 2.1
We first recall that the sufficiency of Lemma 2.1 immediately follows from the continuous mapping theorem.
We next prove the necessity. We first choose a family $\{f_k,\ k \ge 1\} \subset C_b(E)$ which strongly separates points in $E$. Such a family exists since $E$ is separable (see also [5, Lemma 2]). By [19, Theorem 4.5] (or [5, Theorem 6] for weaker assumptions on the space $E$), a sequence $\{\mu_n\}_{n \ge 1}$ of probability measures on $E$ converges weakly to a probability measure $\mu$ if and only if $\int_E f_k\, d\mu_n \to \int_E f_k\, d\mu$ as $n \to \infty$ for every $k \ge 1$.

We define the sets $A^k_m$, where $B^k_m$ is the ball in $F$ with center $z_0$ and radius $\delta^k_m$.

Lemma A.3 If the corresponding condition holds, then $p$ has a version continuous at $z_0$; moreover, it can be taken in the stated form.

Proof We first remark that, according to (A.1), $p' = p$ $\mathbb P^{T(X)}$-a.e. Next, let $z_n \to z_0$ in $F$ as $n \to \infty$. Without loss of generality, we may assume that $z_n \in \bigcap_{k,m=1}^{\infty} A^k_m \cap B^k_m$ for all $n \ge 1$. Let $m \ge 1$ and $k \ge 1$ be fixed. Then there exists a number $N$ such that $z_n \in B^k_m$ for all $n \ge N$. Consequently, $z_n \in A^k_m$ for all $n \ge N$, which yields the required bound for all $n \ge N$. This finishes the proof of the lemma.

⊓ ⊔
We come back to the proof of Lemma 2.1. Let us assume that $p$ has no version continuous at $z_0$. Then, according to Lemma A.3, there exist $k \ge 1$ and $m \ge 1$ such that the corresponding inequality holds for every $\delta > 0$, where $B_\delta$ denotes the ball with center $z_0$ and radius $\delta$. Without loss of generality, we may assume that $\mathbb P^{T(X)}\bigl(A^{k,+}_m \cap B_\delta\bigr) > 0$ for every $\delta > 0$. For every $n \ge 1$, let $\xi_n$ be a random element in $F$ with the prescribed distribution. By construction, $\mathbb P^{\xi_n} \ll \mathbb P^{T(X)}$, $n \ge 1$. Moreover, it is easy to see that the required convergence holds. Indeed, for every $n \ge 1$ the random element $\xi_n$ takes values almost surely in $A^{k,+}_m$, which implies the claim. We have obtained a contradiction with assumption (1.1). This finishes the proof of Lemma 2.1.

B.1 Measurability of coalescing set
We recall that the set $D((0,1), C[0,\infty))$ denotes the space of càdlàg functions from $(0,1)$ to $C[0,\infty)$ equipped with the Skorokhod distance, which makes it a Polish space. Set $D^\uparrow$ accordingly. It is easily seen that $D^\uparrow$ is a closed subspace of $D((0,1), C[0,\infty))$, so we will consider $D^\uparrow$ as a Polish subspace of $D((0,1), C[0,\infty))$.

Proof First we are going to show that $D^\uparrow_2$ is a subset of $CL^\uparrow_2$. So, we take $y \in D^\uparrow_2$ and check that $y$ is a continuous $L_2$-valued function. The continuity of $y$ at $0$ follows from the definition of $D^\uparrow_2$. Let $t > 0$ and $t_n \to t$ as $n \to \infty$. Without loss of generality, we may assume that $t_n \in [\frac 1T, T]$ for some $T \in \mathbb N$ and all $n \ge 1$. We are going to show that $y_{t_n} \to y_t$ in $L_2$ as $n \to \infty$. Let us note that the sequence $\{y_{t_n}\}_{n \ge 1}$ is relatively compact, according to [32, Lemma 5.1] and the fact that $y_{t_n} \in L^\uparrow_2$, $n \ge 1$, are uniformly bounded in $L_{2+\delta}$-norm. This implies that there exist a subsequence $N' \subseteq \mathbb N$ and $f \in L^\uparrow_2$ such that $y_{t_n} \to f$ in $L_2$ along $N'$. On the other hand, $y_{t_n} \to y_t$ pointwise, which implies the equality $f = y_t$. Moreover, it yields that every convergent subsequence of $\{y_{t_n}\}_{n \ge 1}$ converges to $y_t$ in $L_2$. Using the relative compactness of $\{y_{t_n}\}_{n \ge 1}$, we conclude that $y_{t_n} \to y_t$ in $L_2$ as $n \to \infty$. Thus, $y \in CL^\uparrow_2$.

Next, we check that the set $D^\uparrow_2$ is measurable in $D^\uparrow$. We fix $t \ge 0$ and make the following observation. For every $y \in D^\uparrow$ the real-valued function $y_t$ is non-decreasing on $(0,1)$. This implies that it has at most a countable number of discontinuity points. Hence, by [19, Proposition 3.5.3], the convergence $y^n \to y$ in $D^\uparrow$ implies the convergence $y^n_t \to y_t$ a.e. (with respect to the Lebesgue measure on $[0,1]$). Using Fatou's lemma, we get the required property of the set $\Lambda(t, f, K, p)$ for every $K \ge 0$, $p \ge 2$ and $f \in L_p$. Hence the set $D^\uparrow_2$ is measurable in $D^\uparrow$.

Proof In order to check the estimate, one needs to repeat the proof of [32, Proposition 4.4], replacing the summation with integration. ⊓ ⊔

Recall that for every $f \in St$, $N(f)$ denotes the number of steps of $f$. We write $N(f) = \infty$ for each non-decreasing càdlàg function $f$ which does not belong to $St$. For any $y \in C([0,\infty), L^\uparrow_2)$, define $\tau^y_k = \inf\{t \ge 0 : N(y_t) \le k\}$, $k \ge 0$.
The following lemma states that a MMAF almost surely satisfies Property (G5) of Definition 3.1.

Proof Let $g \in L_{2+\varepsilon}$ for some $\varepsilon > 0$. In order to prove the lemma, we use the estimate from [32, Remark 4.6], where $C_{\varepsilon,T}$ is a constant depending on $\varepsilon$ and $T > 0$. Take an arbitrary number $T > 0$ and estimate for $\beta > 1$.

(P4) $X$ and $\Psi(Y, Z)$ have the same distribution.

Definition 3.1 We define Coal as the set of functions $y$ from $C([0,\infty), L^\uparrow_2)$

Remark 3.10 (i) Using Kakutani's theorem [27, p. 218] and Jensen's inequality, it is easily seen that Condition (O1) guarantees the absolute continuity of $\mathbb P^{\xi^n}$ with respect to $\mathbb P^{\xi}$ on $C[0,\infty)^{\mathbb N}$. The indicator function in the drift is important; otherwise the law is singular. Hence, Assumption (B1) of Definition 1.1 is satisfied by the sequence $\{\xi^n\}_{n \ge 1}$. (ii) Condition (O2) yields the convergence in distribution of $\{\xi^n\}_{n \ge 1}$ to $0$ in $C[0,\infty)^{\mathbb N}$ (see Lemma 4.6 below). Thus Assumption (B2) is also satisfied.

The following theorem is the main result of the paper.

Theorem 3.11 The value of the conditional distribution of $X = (Y, W)$ to the event $\{T(X) = 0\}$ along $\{\xi^n\}$ is the law of $(Y, Y)$.

$\mathbb E\,\|V^n_t\|^2_{L_2} \to 0$.
This concludes the proof. ⊓ ⊔

Proof (Proof of Proposition 4.5) Lemma 4.7 and Prohorov's theorem yield that the sequence $\{R^n\}_{n \ge 1}$ is relatively compact in $C[0,\infty)^{\mathbb N_0}$. Moreover, by Lemma 4.8, we deduce that each weakly convergent subsequence of $\{R^n\}_{n \ge 1}$ converges in distribution to $0$. This implies convergence (4.6), which completes the proof of the proposition. ⊓ ⊔

Proof (Proof of Theorem 3.11) By Lemmas 3.4 and B.6, $Y$ belongs almost surely to Coal and the series $\sum_{k=1}^{\infty} (\tau^Y_k)^{1-\varepsilon}$ converges almost surely.

Lemma 5.6 The processes $B^k$, $k \ge 0$, defined by (5.6), are independent standard Brownian motions.

Proof The statement of this lemma directly follows from Lévy's characterization of Brownian motion [25, Theorem II.6.1]. ⊓ ⊔

We will now use the result of Lemma 5.2 to prove the following lemma.

Lemma 5.7 The processes $Y$, $B^k$, $k \ge 0$, are independent.

Proof Since $B^0 = \beta^0$ is independent of $Y$ by definition, and of $B^k$, $k \ge 1$, by Lemma 5.6, it is enough to prove that the processes $Y$, $B^k$, $k \in [n]$, are independent for any given $n$.
$\{\pi_1, \ldots, \pi_n\}$ is an ordered partition of $[0,1)$ into half-open intervals of the form $\pi_j = [a_j, b_j)$. The natural number $n$ is denoted by $N(f)$ and is by definition finite for every $f \in St$.
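As a concrete illustration (ours, not from the text): a step function $f \in St$ associated with this partition can be written as

```latex
f \;=\; \sum_{j=1}^{n} c_j\, \mathbb 1_{\pi_j}, \qquad c_1 < c_2 < \cdots < c_n ,
```

so that $N(f) = n$; for instance, $f = 0 \cdot \mathbb 1_{[0, 1/2)} + 1 \cdot \mathbb 1_{[1/2, 1)}$ has $N(f) = 2$.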
we get that $\{e^y_k,\ k \ge 0\}$ forms a basis of $L_2$. The last part of the statement follows from the fact that, for each $k \ge 1$, the stated identity holds.

The construction of the basis $\{e^y_k,\ k \ge 0\}$ in the above proof easily implies that the map $\mathrm{Coal} \ni y \mapsto e^y_k \in L_2$ is measurable for every $k \ge 0$, where Coal is endowed with the topology induced from $C([0,\infty), L^\uparrow_2)$. Moreover, by (3.4), for every $k \ge 1$, $e^y_k$ is uniquely determined by $y_{\cdot \wedge \tau^y_k}$. To simplify the notation, we will write $e_k$ and $\tau_k$ instead of $e^Y_k$ and $\tau^Y_k$, respectively. Recall that $W$ is defined by equality (3.2). In particular, the real-valued process $W_t(e_k)$, $t \ge 0$, satisfies the corresponding equation, and hence it converges to a limit denoted by $M^y(h)$.

The equality $\mathbb E[\xi_t(h_1)\,\xi_t(h_2)] = t\,(h_1, h_2)_{L_2}$, $t \ge 0$, trivially follows from the polarization equality and the fact that $\xi(h_1)$ and $\xi(h_2)$ are martingales with respect to the same filtration $(\mathcal F^{\eta,Y}_t)_{t \ge 0}$. Moreover, the corresponding process is also an $(\mathcal F^{\eta,Y}_t)$-martingale. This proves that $\xi(h)$ is a continuous square-integrable $(\mathcal F^{\eta,Y}_t)$-martingale with quadratic variation $\|h\|^2_{L_2}\, t$, $t \ge 0$. Thus, $\xi$ is an $(\mathcal F^{\eta,Y}_t)$-cylindrical Wiener process.