Reaction–Diffusion Models for a Class of Infinite-Dimensional Nonlinear Stochastic Differential Equations

We establish the existence of solutions to a class of nonlinear stochastic differential equations of reaction–diffusion type in an infinite-dimensional space, with diffusion corresponding to a given transition kernel. The solution obtained is the scaling limit of a sequence of interacting particle systems and satisfies the martingale problem corresponding to the target differential equation.


Background
In this paper, we extend the main results from [4] to reaction–diffusion systems evolving on infinite sets. As in [4], the class of stochastic differential equations we consider here is

dζ_t(x) = [∆_p ζ_t(x) − b(ζ_t(x))^κ] dt + (a(ζ_t(x))^ℓ)^{1/2} dB^x_t,  x ∈ V,  (1.1)

where a, b, κ, ℓ are positive real numbers with κ, ℓ ≥ 1, V is a discrete set, ∆_p is the Laplacian induced by a probability kernel p on V, that is,

∆_p ζ(x) = Σ_{y∈V} p(x, y)(ζ(y) − ζ(x)),  (1.2)

and {B^x_•}_{x∈V} is a family of independent standard Brownian motions on R. This system of equations can be used to model a reaction–diffusion system associated, for instance, with chemical reactions or population dynamics. In the setting of chemical reactions, space is divided into cells, corresponding to points of V, and each cell contains a certain density of particles. Within each cell, particles are subject to a reaction that can lead to a change in their density. As an image, consider the evolution of the density of ozone subject to the reaction ozone ⇌ oxygen in a confined region. This modelling framework is inspired by auto-catalytic models as presented in Nicolis and Prigogine [12, Chap. 7], and resembles the modelling adopted by Blount [3], the main difference being that we keep the size of reaction cells constant. Note that the system in (1.1) has ζ ≡ 0 as a stable point, and the interaction term −b(ζ_t(x))^κ can then be interpreted as a restoring force, driving the system back to equilibrium. Hence, a solution to (1.1) represents how these processes converge to equilibrium in a path-wise sense.
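For intuition, the following self-contained sketch integrates a finite-dimensional instance of (1.1) by Euler–Maruyama, taking V to be a discrete cycle with the simple-random-walk kernel. The parameter values and the discretisation itself are illustrative assumptions, not the construction used in the paper (which obtains solutions as particle-system limits).

```python
import math
import random

def simulate_sde(N=50, T=1.0, dt=1e-3, a=1.0, b=1.0, kappa=2.0, ell=1.0, seed=0):
    """Naive Euler-Maruyama sketch of (1.1) on the cycle V = Z/NZ with the
    simple-random-walk kernel p(x, x±1) = 1/2. Illustration only."""
    rng = random.Random(seed)
    zeta = [1.0] * N  # initial condition zeta_0 = 1 at every site
    for _ in range(round(T / dt)):
        # Delta_p zeta(x) = sum_y p(x, y)(zeta(y) - zeta(x))
        lap = [0.5 * (zeta[(x - 1) % N] + zeta[(x + 1) % N]) - zeta[x]
               for x in range(N)]
        new = []
        for x in range(N):
            drift = lap[x] - b * zeta[x] ** kappa
            noise = math.sqrt(a * max(zeta[x], 0.0) ** ell * dt) * rng.gauss(0.0, 1.0)
            new.append(max(zeta[x] + drift * dt + noise, 0.0))  # keep zeta >= 0
        zeta = new
    return zeta

final = simulate_sde()
print(sum(final) / len(final))  # zeta = 0 is stable: the mean density decays from 1
```

With b > 0 the restoring term −b ζ^κ dominates, and the average density visibly decays toward the stable point ζ ≡ 0.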
The focus of [4] was on the finite-dimensional setting, that is, when V is finite. Therein, a sequence of interacting particle systems {η^n_•}_{n≥1} on {0, 1, …}^V is shown to converge, after being properly rescaled, to a solution of (1.1). This solution was moreover proved to be unique. For each n, the dynamics of η^n_• can be encoded by the following formal generator expression, for η ∈ {0, 1, …}^V and a local function f : {0, 1, …}^V → R:

L_n f(η) = Σ_{x,y∈V} η(x) p(x, y) [f(η − δ_x + δ_y) − f(η)] + Σ_{x∈V} F^{n,+}(η(x)) [f(η + δ_x) − f(η)] + Σ_{x∈V} F^{n,−}(η(x)) [f(η − δ_x) − f(η)],  (1.3)

where δ_x denotes the configuration with a single particle at x. In words, a pile of η^n_t(x) particles occupies site x at time t; each particle moves with rate one according to the kernel p, and in addition, particles are born and die at x with rates F^{n,+}(η^n_t(x)) and F^{n,−}(η^n_t(x)), respectively. The motion of distinct particles, and births and deaths at distinct sites, are independent.
The functions F^{n,+} and F^{n,−} are defined, for every u ≥ 0, in (1.4) and (1.5). These rates are chosen so that, for every z ≥ 0, the drift and the fluctuations of the rescaled density match those of (1.1) in the limit. The idea is to make it so that the interacting particle system resembles, with increasing precision as n → ∞, a solution to (1.1). In addition, these functions satisfy other important properties. First, F^{n,−}(0) = F^{n,+}(0) = 0, so there is no birth (and evidently no death) of particles at empty sites. Second, F^{n,−}(u) ≥ F^{n,+}(u) ≥ 0 for all u; this guarantees that the number of particles in the system is stochastically decreasing, so that the dynamics has no finite-time explosion.
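Since the displayed definitions (1.4)–(1.5) are not reproduced in this excerpt, the snippet below uses one hypothetical choice of F^{n,+} and F^{n,−}, consistent with the properties just listed (zero at empty sites, F^{n,−} ≥ F^{n,+} ≥ 0, reaction drift of order n, fluctuations of order n²), and checks those properties numerically; it is not necessarily the paper's choice.

```python
# Hypothetical rate functions: births carry the fluctuation term of order
# n^2 a (u/n)^ell; deaths carry the same term plus the restoring drift
# n b (u/n)^kappa. This is an assumed choice consistent with the text.
a, b, kappa, ell = 1.0, 2.0, 2.0, 1.0

def F_plus(n, u):
    return 0.5 * a * n**2 * (u / n)**ell

def F_minus(n, u):
    return F_plus(n, u) + n * b * (u / n)**kappa

# properties stated in the text
for n in (1, 10, 100):
    assert F_plus(n, 0) == F_minus(n, 0) == 0          # no activity at empty sites
    for u in (0.5, 1, 7, 100):
        assert F_minus(n, u) >= F_plus(n, u) >= 0       # deaths dominate births

# scaling: drift (1/n)(F+ - F-)(nz) = -b z^kappa, variance (1/n^2)(F+ + F-)(nz) -> a z^ell
z, n = 3.0, 10**6
drift = (F_plus(n, n * z) - F_minus(n, n * z)) / n
var = (F_plus(n, n * z) + F_minus(n, n * z)) / n**2
print(drift, var)  # approximately -b z^kappa = -18.0 and a z^ell = 3.0
```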
This leads to the result that, given a sequence {η^n_0}_{n≥1} of initial configurations with (1/n) η^n_0 converging to ζ_0, and letting η^n_• denote the process started from η^n_0 and with dynamics governed by L_n, the sequence of processes {(1/n) η^n_•}_{n≥1} converges to a weak solution of (1.1) [4, Theorem 1]. We will review the meaning of a weak solution of the SDE (1.1) in Section 2.4 below. A limit obtained through this sort of scaling procedure, where there is no scaling of space but the "mass" of individual particles is taken to zero, is often referred to as a fluid limit.

Results
Here we are interested in obtaining the fluid limit described above in the case where V is a countably infinite set. Through this extension, one can hope to achieve a better understanding of stability properties of the solution with respect to the underlying space. Apart from this, the extension has theoretical interest, as it brings forward some important challenges.
Our approach requires an assumption on the transition kernel p(·, ·), as well as a restriction on the set of allowed initial conditions for the SDE. As in [11], we assume that there exists a function α : V → (0, ∞) satisfying (1.6); we then consider the norm ∥ζ∥ := Σ_{x∈V} α(x) ζ(x) and the set of configurations E := {ζ ∈ [0, ∞)^V : ∥ζ∥ < ∞}. We will only consider initial conditions of (1.1) belonging to E. Assumptions and restrictions of this type are common in the treatment of systems involving diffusions on infinite environments; see for instance [1, 11]. The key point here is to avoid explosion from diffusion, that is, situations where infinite amounts of mass can enter a finite set instantaneously due to excessive growth of the initial configuration.
The next step in establishing a fluid limit result is to construct processes η^n_• on ℕ_0^V whose limit should be a solution to the stochastic differential equation. As the dynamics linked to the generator (1.3) has unbounded jump rates, and the space ℕ_0^V is not compact (or locally compact), such a construction does not fall into the most standard framework of the theory, by means of the Hille–Yosida theorem, as presented in Chapter I of [9]. While there are still ways to construct the process under our assumptions (see for instance Chapter IX of [9], or the aforementioned references [1, 11]), here we avoid this construction issue by only constructing particle systems with finite mass. That is, we define the (countable) set E := {η ∈ ℕ_0^V : Σ_{x∈V} η(x) < ∞} and only consider particle systems η^n_• with initial configuration in E. This way, since the dynamics of (1.3) causes the number of particles to decrease stochastically, it ends up producing a non-explosive continuous-time Markov chain on the countable state space E.
We are now ready to state our main result. (c) The mapping E ∋ ζ_0 ↦ (ζ_t)_{t≥0} of initial conditions to the corresponding solutions obtained through the limit of part (a) is continuous when E is endowed with the norm ∥·∥ and the set of processes on C([0, ∞), E) is endowed with the topology of weak convergence of probability measures.

Outline of methods and organization of the paper
Let us give an outline of our methods. Using generator estimates, we prove that a collection of processes {η^n_•} as in the statement of Theorem 1.1(a), with ∥(1/n) η^n_0 − ζ_0∥ → 0 as n → ∞ for some ζ_0 ∈ E, is tight. This allows us to extract convergent subsequences, say {η^{n_k}_•} converging to a process ζ_•. We then prove that the Dynkin martingales associated to η^{n_k}_•, as defined in Lemma 2.1, converge to processes of the form

f(ζ_t) − f(ζ_0) − ∫_0^t L* f(ζ_s) ds,  (1.9)

where L*, the generator associated to (1.1), is given by

L* f(ζ) = Σ_{x∈V} (∆_p ζ(x) − b ζ(x)^κ) ∂f/∂ζ(x) + (a/2) Σ_{x∈V} ζ(x)^ℓ ∂²f/∂ζ(x)²

for a suitable collection of functions f. This convergence allows us to obtain that (1.9) is a local martingale. Using classical results from the theory of stochastic differential equations, we then conclude that the subsequential limit ζ_• is a solution of (1.1). An adaptation of the argument in [13] gives us Theorem 1.1(b), that is, that (1.1) has at most one solution in case ζ_0 has finite mass. Combining these ideas, we get that if ζ_0 has finite mass, then {η^n_•} has a single accumulation point, so the whole sequence converges. From this, we finish the proof of Theorem 1.1(a): we prove convergence for any ζ_0 ∈ E with infinite mass by approximation, since any ζ_0 ∈ E is arbitrarily close to configurations with finite mass.
A key tool that we rely on for this approximation, and for several other arguments, is a coupling inequality, Lemma 3.4 below, which allows us to compare pairs of processes with the same generator but different initial configurations.
The rest of the paper is organized as follows. In Section 2, we review several technical concepts and results, including notions of convergence of probability measures, local martingales defined from Markov chains, and classical results about stochastic differential equations. In Section 3, we study particle systems with finite mass on ℕ_0^V and obtain the key coupling inequality, Lemma 3.4. In Section 4.1, we state our tightness result and use it to follow the rest of the outline given above, proving our main results. In Section 5, we prove the tightness result. Section A is an appendix where we include some proofs, to ease the flow of the exposition in the paper.

Technical preliminaries
In this section, we collect remarks, definitions, and properties that will be useful in the study of convergence of a family of stochastic processes, as mentioned in the previous section.

Configuration spaces
We let V be a countable set and p : V × V → [0, 1] be a probability transition function (that is, Σ_y p(x, y) = 1 for all x), and assume that there exists a function α : V → (0, ∞) for which (1.6) holds. We define ∥·∥, E and E as in (1.7)–(1.8). Note that E is countable, that ∥·∥ is a norm on the linear subspace of R^V where it is finite, and that the metric induced by ∥·∥ turns E into a complete and separable metric space. For the sake of clarity, we mostly denote the (integer-valued) elements of E by the letter η rather than ζ, and processes taking values in E by η_• rather than ζ_•.
It will be useful to observe that assumption (1.6) yields a summability bound which implies that, for ζ ∈ E, the sums defining ∆_p ζ converge absolutely. In particular, ∆_p ζ(x) in (1.2) is well defined for all ζ ∈ E and all x ∈ V.
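As a concrete illustration of this subsection, the sketch below works on a finite window of ℤ under assumed forms (the paper's (1.6)–(1.8) are not reproduced here): α(x) = e^{−|x|} as in the example mentioned in the appendix, a weighted norm ∥ζ∥ = Σ_x α(x)ζ(x), and ∆_p ζ(x) = Σ_y p(x, y)(ζ(y) − ζ(x)) for the simple random walk kernel.

```python
import math

# Sketch on V = {-L, ..., L} subset of Z with the simple-random-walk kernel.
# Assumed forms: alpha(x) = exp(-|x|), weighted norm ||zeta|| = sum alpha(x) zeta(x).
L = 200
V = range(-L, L + 1)
alpha = {x: math.exp(-abs(x)) for x in V}

def p(x, y):
    return 0.5 if abs(x - y) == 1 else 0.0  # simple random walk on Z

def norm(zeta):
    return sum(alpha[x] * zeta.get(x, 0.0) for x in V)

def laplacian(zeta, x):
    # Delta_p zeta(x) = sum_y p(x, y)(zeta(y) - zeta(x))
    return sum(p(x, y) * (zeta.get(y, 0.0) - zeta.get(x, 0.0)) for y in (x - 1, x + 1))

# A configuration growing like e^{|x|/2} still has finite weighted norm,
# since the weights alpha decay faster than zeta grows.
zeta = {x: math.exp(abs(x) / 2) for x in V}
print(norm(zeta))          # finite
print(laplacian(zeta, 0))  # well defined site by site
```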

Convergence of probability measures on trajectory spaces
To study convergence of probability measures on trajectory spaces, we first define a metric on the space of trajectories, then consider a family of σ-algebras associated to this metric, and finally define a distance between probability measures on such σ-algebras.
Metric. Let X = (X, d_X) be a complete, separable metric space. In most cases, this will be either (R, |·|), or E or E with the metric induced by ∥·∥. We denote by D_X = D([0, ∞), X) the space of càdlàg functions γ : [0, ∞) → X, and by C_X the set of functions in D_X which are continuous. The Skorokhod metric d_S on D_X is defined so that two trajectories are close if they are uniformly close after a small perturbation of time; in its definition, the infimum is taken over all increasing bijections ϕ : [0, t] → [0, t]. This turns (D_X, d_S) into a complete and separable metric space, and we denote by D_X its Borel σ-algebra. We refer the reader to [2, Chapter 3] and [5, Chapter 3] for expositions on this metric. Here, let us only make one further observation (see [2, Section 12, p. 124]): convergence in the Skorokhod topology to a continuous function implies uniform convergence on compact intervals.

Sigma-algebras.
Given a stochastic process X_• on X with càdlàg trajectories, we denote by F_t = F^X_t the σ-algebra generated by (X_s)_{0≤s≤t}, and by N := {A ∈ D_X : P(X_• ∈ A) = 0}. We refer to (F_t)_{t≥0} as the natural filtration of (X_t)_{t≥0}.
Convergence. Finally, we recall the definition of the Lévy–Prohorov distance in the case of two probability measures µ and ν defined on D_X = (D_X, D_X):

ρ(µ, ν) := inf{ε > 0 : µ(A) ≤ ν(A^ε) + ε and ν(A) ≤ µ(A^ε) + ε for all A ∈ D_X},

where A^ε := {y ∈ D_X : d_S(x, y) < ε for some x ∈ A}. Convergence in this metric is equivalent to weak convergence of probability measures, that is, to convergence of ∫ f dµ_n to ∫ f dµ for all continuous and bounded functions f : D_X → R; see [5]. Denote by C(µ, ν) the set of all measures λ on D_X × D_X that couple µ and ν. By [5, Theorem 3.1.2, p. 98], we have the coupling bound

ρ(µ, ν) ≤ inf_{λ∈C(µ,ν)} inf{ε > 0 : λ({(γ, γ′) : d_S(γ, γ′) ≥ ε}) ≤ ε}.  (2.3)

Continuous-time Markov chains and martingales
Given a stochastic process X_• with trajectories in D_X, there is a sequence of stopping times τ^X_0 := 0 < τ^X_1 < τ^X_2 < ⋯ recording its successive jumps, and we say that the process is non-explosive if these times tend to infinity almost surely. For the following two results, let S be a countable set and (X_t)_{t≥0} be a non-explosive continuous-time Markov chain on S. For distinct x, y ∈ S, let q(x, y) ≥ 0 be the jump rate from x to y, and let q(x, x) := −Σ_{y≠x} q(x, y). For a function f : S → R satisfying

Σ_{y∈S} q(x, y) |f(y) − f(x)| < ∞ for all x ∈ S,  (2.4)

we define Lf(x) := Σ_{y∈S} q(x, y) (f(y) − f(x)); if f also satisfies

Σ_{y∈S} q(x, y) (f(y) − f(x))² < ∞ for all x ∈ S,  (2.5)

we define Qf(x) := Σ_{y∈S} q(x, y) (f(y) − f(x))².

Lemma 2.1. Let f : S → R be a function satisfying (2.4). Then, the process

M^f_t := f(X_t) − f(X_0) − ∫_0^t Lf(X_s) ds,  t ≥ 0,

is a local martingale with respect to the natural filtration of (X_t)_{t≥0}. If f also satisfies (2.5), then the process

N^f_t := (M^f_t)² − ∫_0^t Qf(X_s) ds,  t ≥ 0,

is also a local martingale with respect to the natural filtration of (X_t)_{t≥0}.
Proof. Fix an arbitrary initial state x_0 ∈ S, and let (Λ_j)_{j≥1} be an increasing sequence of finite subsets of S with x_0 ∈ Λ_1 and ∪_j Λ_j = S. Define τ_j := inf{t ≥ 0 : X_t ∉ Λ_j}. We then have τ_j ≤ τ_{j+1} for each j and, because (X_t)_{t≥0} is non-explosive, τ_j → ∞ almost surely as j → ∞. Then, under the assumptions (2.4) and (2.5), classical arguments establish that M^f_• and N^f_• are local martingales with respect to the natural filtration of (X_t)_{t≥0}; see for instance [8, Appendix 1, Lemma 5.1].
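Lemma 2.1 is easy to probe by simulation: on a toy birth–death chain, the empirical mean of the Dynkin martingale M^f_t vanishes. The chain, rates and function f below are illustrative assumptions.

```python
import random

# Monte-Carlo check of Lemma 2.1 on a toy birth-death chain on S = {0,...,10}:
# the Dynkin martingale M^f_t = f(X_t) - f(X_0) - int_0^t Lf(X_s) ds has mean 0.
K, lam, mu = 10, 1.0, 2.0

def rates(x):
    out = {}
    if x < K: out[x + 1] = lam   # birth
    if x > 0: out[x - 1] = mu    # death
    return out

def Lf(x, f):
    return sum(q * (f(y) - f(x)) for y, q in rates(x).items())

def dynkin_increment(x0, t_end, f, rng):
    """Simulate one path up to t_end and return M^f_{t_end}."""
    t, x, integral = 0.0, x0, 0.0
    while True:
        r = rates(x)
        total = sum(r.values())
        hold = rng.expovariate(total)
        if t + hold >= t_end:
            integral += (t_end - t) * Lf(x, f)
            return f(x) - f(x0) - integral
        integral += hold * Lf(x, f)
        t += hold
        u = rng.random() * total
        for y, q in r.items():
            u -= q
            if u <= 0:
                x = y
                break

rng = random.Random(1)
f = lambda x: x
samples = [dynkin_increment(5, 2.0, f, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # close to 0
```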
Remark 2.2. The martingales M^f_• for functions f that satisfy (2.4) are commonly referred to as Dynkin martingales. When the generator L is clear from the context, we will omit the dependence on it (as we have done here) to alleviate notation.
To obtain stochastic bounds, we prove a supermartingale inequality associated to the martingales M^f_•.
Lemma 2.3. Let f : S → R be a non-negative function satisfying (2.4). Assume that there exists C > 0 such that

Σ_{y∈S} q(x, y) (f(y) − f(x)) ≤ C f(x) for all x ∈ S.

Then, the process (e^{−Ct} f(X_t))_{t≥0} is a supermartingale with respect to the natural filtration of (X_t)_{t≥0}.
Proof. Fix an arbitrary initial state x_0 ∈ S. By the Markov property, it is sufficient to prove that, for any t ≥ 0, E[e^{−Ct} f(X_t)] ≤ f(x_0). Let (Λ_j)_{j≥1} and τ_j be as in the proof of Lemma 2.1. Define the processes X^j_t := X_t for t < τ_j and X^j_t := ⋆ for t ≥ τ_j, where ⋆ denotes a cemetery state. Then, X^j_• is a Markov chain on the finite state space S_j := Λ_j ∪ {⋆}, with jump rates, for distinct x, y ∈ S_j, obtained from those of X_• by redirecting all jumps out of Λ_j to ⋆. Let f_j : S_j → R be defined by f_j(x) = f(x) for x ≠ ⋆ and f_j(⋆) = 0. We have, for any x ∈ S_j \ {⋆},

Some properties of solutions of the SDE (1.1)
Let (ζ_t)_{t≥0} be a stochastic process (defined on some space (Ω, F, P)) with values in E and continuous trajectories. We say that ζ_• is a weak solution to the SDE (1.1) if there exists an enlarged space (Ω̃, F̃, P̃) in which we have defined a family of independent standard Brownian motions for which the integral equation corresponding to (1.1) holds. We will need the following result, which gives uniqueness for solutions of (1.1) with finite mass. The essence of its proof is taken from [13]. We present the proof in the Appendix (Section A.1), with slight modifications to adjust it to our setting.
Proposition 2.4. Let ζ ∈ E with Σ_{x∈V} ζ(x) < ∞, and let ζ^1_• and ζ^2_• be two weak solutions of the SDE (1.1) with initial condition ζ. Then, ζ^1_• and ζ^2_• have the same distribution.
Let f : E → R be a function for which there exists a finite set {x_1, …, x_k} ⊂ V such that f only depends on (ζ(x_1), …, ζ(x_k)), and moreover f is a twice continuously differentiable function of this vector. Define

L* f(ζ) := Σ_{i=1}^k (∆_p ζ(x_i) − b ζ(x_i)^κ) ∂f/∂ζ(x_i) + (a/2) Σ_{i=1}^k ζ(x_i)^ℓ ∂²f/∂ζ(x_i)².

In particular, we denote by f_x, f_{xx} and f_{xy} the functions ζ ↦ ζ(x), ζ ↦ ζ(x)² and ζ ↦ ζ(x)ζ(y), respectively.

Proposition 2.5. Suppose that, for every f as above, the process f(ζ_t) − f(ζ_0) − ∫_0^t L* f(ζ_s) ds, t ≥ 0, is a local martingale. Then, ζ_• is a weak solution of (1.1).
This proposition is the same as Proposition 4.6, page 315 in [7], except that here we deal with infinite-dimensional processes, whereas the setup of the proposition in [7] is finite-dimensional. However, largely due to the fact that the cross-variation of our SDE is trivial (that is, the expression for dζ_t(x) in (1.1) does not involve dB^y_t for y ≠ x), the proof in [7] carries through to our setting. In order to highlight the differences and the main steps, we sketch the proof in the appendix (Section A.2).

Construction of diffusive birth-and-death particle systems
Recall that E = {η ∈ ℕ_0^V : Σ_x η(x) < ∞}. In this section, we construct continuous-time Markov chains on E that will later be used in the fluid limit for the proof of our main result. Rather than having an index n and functions F^{n,+} and F^{n,−} as in (1.4) and (1.5), for now we drop the index and work with functions F^+ and F^− satisfying certain properties (see (3.1) below).
Let M denote the set of marks (x, +), (x, −) and (x, y) for x, y ∈ V with x ≠ y. Marks will serve as instructions for the dynamics. Marks of the form (x, +) and (x, −) represent the birth and the death of a particle at site x, respectively, and a mark of the form (x, y) represents that a particle from site x jumps to site y. Marks are thus associated with the transition operators

Γ_{(x,+)}(η) := η + δ_x,  Γ_{(x,−)}(η) := η − δ_x,  Γ_{(x,y)}(η) := η − δ_x + δ_y,

where, for x ∈ V, δ_x ∈ E is the configuration with only one particle at x. For a configuration η ∈ E and x, y ∈ V, we define transition rates by

R_{(x,+)}(η) := F^+(η(x)),  R_{(x,−)}(η) := F^−(η(x)),  R_{(x,y)}(η) := η(x) p(x, y),

where we assume that the reaction functions F^+ and F^− satisfy

F^+(0) = F^−(0) = 0,  0 ≤ F^+ ≤ F^−, and F := F^+ − F^− is non-increasing.  (3.1)

We now define a continuous-time Markov chain on E with the prescription that, for each a ∈ M, η jumps to Γ_a(η) with rate R_a(η).
Noting that Σ_{a∈M} R_a(η) < ∞ for any η ∈ E, this indeed describes the jump rates of a continuous-time Markov chain. The assumption F^+ ≤ F^−, combined with a simple stochastic comparison argument (see [4, Lemma 2]), implies that the chain is non-explosive. It will be important to leave the initial condition explicit, so we will denote the chain started from η ∈ E by (Φ_t(η))_{t≥0}.
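The mark-based prescription above translates directly into a Gillespie-type simulation. In the sketch below, the reaction functions are hypothetical choices satisfying (3.1), not the paper's (1.4)–(1.5), and the kernel is the simple random walk on ℤ.

```python
import random

# Sketch of the mark-based construction of (Phi_t(eta))_{t>=0}: marks (x,+),
# (x,-) and (x,y) fire at rates F+(eta(x)), F-(eta(x)) and eta(x) p(x,y).
# Illustrative reaction functions satisfying (3.1): F(0)=0 and 0 <= F+ <= F-.
def F_plus(u):  return 0.5 * u
def F_minus(u): return 0.5 * u + 0.1 * u**2

def step(eta, p, rng):
    """One Gillespie step on a finite-mass configuration eta: {site: count}."""
    events = []
    for x, n in list(eta.items()):
        if n == 0:
            continue
        events.append((F_plus(n), (x, +1)))   # mark (x,+): birth at x
        events.append((F_minus(n), (x, -1)))  # mark (x,-): death at x
        for y, pxy in p(x):                   # mark (x,y): one particle jumps
            events.append((n * pxy, (x, -1, y)))
    total = sum(r for r, _ in events)
    if total == 0:
        return float('inf')  # absorbed at eta = 0
    hold = rng.expovariate(total)
    u = rng.random() * total
    for r, ev in events:
        u -= r
        if u <= 0:
            if len(ev) == 2:
                x, d = ev
                eta[x] = eta.get(x, 0) + d
            else:
                x, _, y = ev
                eta[x] -= 1
                eta[y] = eta.get(y, 0) + 1
            break
    return hold

# usage: 100 particles at the origin of Z, simple-random-walk kernel
p = lambda x: [(x - 1, 0.5), (x + 1, 0.5)]
rng = random.Random(0)
eta, t = {0: 100}, 0.0
while t < 1.0:
    dt = step(eta, p, rng)
    if dt == float('inf'):
        break
    t += dt
print(sum(eta.values()))  # total mass after time 1 (finite: no explosion)
```

Since F^− ≥ F^+, the total mass is stochastically decreasing under the reaction part, while jumps conserve mass, illustrating why the chain is non-explosive on the countable state space E.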
For any function f : E → R satisfying the absolute-summability condition

Σ_{a∈M} R_a(η) |f(Γ_a(η)) − f(η)| < ∞ for all η ∈ E,  (3.2)

we define Lf(η) := Σ_{a∈M} R_a(η) [f(Γ_a(η)) − f(η)]. Recall that a function defined on a subset of R^V is called local if there exists a finite set V′ ⊂ V such that the function depends on η ∈ R^V only through (η(x) : x ∈ V′). We then have:

Lemma 3.1. Any local function f : E → R satisfies (3.2), and the processes M^f_• and N^f_• of Lemma 2.1 are local martingales with respect to the natural filtration of (Φ_t(η))_{t≥0}.
Proof. The first statement is straightforward to check: since f is local and its argument is an element of E, the sum in (3.2) has only finitely many non-zero terms. The second statement then follows from Lemma 2.1.
We then define, in (3.4), an operator L acting on functions g : E × E → R for which the sum on the right-hand side of (3.4) is absolutely convergent for all (η, η′).
Proof. For this choice of g, the first line on the right-hand side of (3.4) vanishes, so we can write Lg(η, η′) = Σ_{a∈M} (Ξ^a_1 + Ξ^a_2)(η, η′). We first deal with the reaction terms, that is, the terms corresponding to marks of the form (x, +) and (x, −). Due to the nature of the rates we have chosen, the only cases we have to look at are those for which η(x) ≠ η′(x). If η(x) > η′(x), we can bound the contribution from (x, +) marks. Arguing similarly for marks (x, −), we obtain the corresponding bound, where the last inequality follows from our hypothesis (3.1) ensuring that F = F^+ − F^− is non-increasing. Observe that we do not need to assume (although it would be natural to) that F^+ and F^− are each monotone.
The same argument shows (3.5) in the case η(x) < η′(x). We thus conclude the desired bound for the reaction terms. We now turn to the diffusion terms. Fix x, y ∈ V, and first assume that η(x) > η′(x). Again, note that the rates are equal when η(x) = η′(x), and the contribution to Lg is zero in those cases. We then bound Ξ^{(x,y)}_1 + Ξ^{(x,y)}_2, and, treating the case η(x) < η′(x) analogously, we obtain the claimed bound after summing over x, y ∈ V.

Lemma 3.4 (Coupling inequality). There exists C > 0 such that, for any η, η′ ∈ E, the process (e^{−Ct} ∥Φ_t(η) − Φ_t(η′)∥)_{t≥0}, defined from the coupling of the two chains, is a supermartingale with respect to its natural filtration. In particular, for any T > 0 and A > 0, the probability that the distance between the two coordinates exceeds A before time T is controlled by ∥η − η′∥.

Proof. The first statement is a consequence of Lemma 2.3 and Lemma 3.3. To prove the second statement, abbreviate X_t := ∥Φ_t(η) − Φ_t(η′)∥. For a > 0, define τ_a := inf{t ≥ 0 : X_t > a}. We have e^{CT} ∥η − η′∥ ≥ E[X_{τ_a∧T}] ≥ a · P(τ_a ≤ T) for any T > 0 and a > 0, so P(sup_{t≤T} X_t > a) ≤ e^{CT} ∥η − η′∥ / a. We then obtain the desired bound.
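The role of the coupling inequality can be illustrated numerically in the simplest setting, V a single site (so no diffusion terms): under the basic coupling, shared marks fire together as much as the rates allow, so the discrepancy between two copies only moves at the difference of the rates, and its mean at time t stays below its initial value, consistent with the supermartingale property of Lemma 3.4. The rate functions below are illustrative choices satisfying (3.1); the coupling shown is the standard basic coupling, not necessarily the one used in the paper.

```python
import random

# Basic coupling of two single-site birth-death chains.
def F_plus(u):  return 1.0 * u
def F_minus(u): return 1.0 * u + 0.2 * u   # F = F+ - F- = -0.2 u, non-increasing

def coupled_step(m, mp, rng):
    """One step of the coupled pair (m, mp); returns new states and hold time."""
    b, bp = F_plus(m), F_plus(mp)
    d, dp = F_minus(m), F_minus(mp)
    events = [
        (min(b, bp), +1, +1),                              # joint birth
        (max(b - bp, 0), +1, 0), (max(bp - b, 0), 0, +1),  # unmatched births
        (min(d, dp), -1, -1),                              # joint death
        (max(d - dp, 0), -1, 0), (max(dp - d, 0), 0, -1),  # unmatched deaths
    ]
    total = sum(r for r, _, _ in events)
    if total == 0:
        return m, mp, float('inf')  # both copies absorbed (or merged at 0)
    hold = rng.expovariate(total)
    u = rng.random() * total
    for r, dm, dmp in events:
        u -= r
        if u <= 0:
            return m + dm, mp + dmp, hold
    return m, mp, hold

def discrepancy_at(t_end, m0, mp0, rng):
    m, mp, t = m0, mp0, 0.0
    while t < t_end:
        m, mp, hold = coupled_step(m, mp, rng)
        t += hold
    return abs(m - mp)

rng = random.Random(0)
vals = [discrepancy_at(1.0, 50, 40, rng) for _ in range(2000)]
print(sum(vals) / len(vals))  # stays below |50 - 40| = 10 on average
```

Once the two copies meet, all unmatched rates vanish and they move together forever, which is exactly the mechanism exploited when comparing processes with different initial configurations.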

Convergence to solutions of reaction–diffusion equations

Sequence of particle systems: definition and first estimates
In this section, following the program outlined in the Introduction, we consider a sequence of processes of the type constructed in the previous section, and prove that this sequence converges to solutions of the system of reaction-diffusion equations (1.1).
We recall the definitions (1.4) and (1.5) of F^{n,+}(u) and F^{n,−}(u) for u ≥ 0. We denote by (Φ^n_t(η))_{t≥0} the Markov chains, as constructed in Section 3, corresponding to the rate functions F^+ = F^{n,+} and F^− = F^{n,−} (we leave the diffusion part of the dynamics the same for all n; that is, particles jump with rate one regardless of n). The conditions in (3.1) are satisfied by this choice of rate functions. We also denote by Φ̄^n_•(η, η′) the corresponding coupling, as described in the previous section. Finally, we write ϕ^n_•(ζ) and φ̄^n for the rescaled versions of these processes. We have the following consequence of Lemma 3.4.
Corollary 4.1. For every ε > 0 and T > 0, there exists δ > 0 such that, for all n ∈ ℕ and all ζ, ζ′ with ∥ζ − ζ′∥ < δ, the rescaled coupled processes stay within distance ε of each other up to time T with probability at least 1 − ε.

We may now state a distance bound with respect to the Lévy–Prohorov metric.
Lemma 4.2. For every ε > 0, there exists δ > 0 such that, for all n ∈ ℕ and all ζ, ζ′ with ∥ζ − ζ′∥ < δ, the Lévy–Prohorov distance between the laws of ϕ^n_•(ζ) and ϕ^n_•(ζ′) is at most ε.

Proof. Fix ε > 0. It follows from the definition of the Skorokhod metric that two trajectories that are uniformly close up to time T are at Skorokhod distance at most their uniform distance plus e^{−T}. Together with Corollary 4.1 (applied with T = log(2/ε)), this implies that there exists δ > 0 such that, for any n and any ζ, ζ′ with ∥ζ − ζ′∥ < δ, the coupled trajectories are at Skorokhod distance at most ε with probability at least 1 − ε. The desired result now follows from (2.3) and the fact that φ̄^n couples the laws of ϕ^n_•(ζ) and ϕ^n_•(ζ′).

For a local function f, it follows from Lemma 3.1 that the processes

M^{n,f}_t := f(ϕ^n_t) − f(ϕ^n_0) − ∫_0^t L_n f(ϕ^n_s) ds and N^{n,f}_t := (M^{n,f}_t)² − ∫_0^t Q_n f(ϕ^n_s) ds,

defined for t ≥ 0, are local martingales.
In the following lemma, we give explicit expressions for L_n and Q_n applied to functions of the form f_x, f_{xx} and f_{xy}. We postpone the calculations to Appendix A.3.

Lemma 4.3. We have, for any x ∈ V, the expressions (4.3)–(4.5); moreover, for distinct x, y ∈ V, the expression (4.6) holds.

Finally, we state our tightness result, Proposition 4.4, which will allow us to extract convergent subsequences of a sequence of processes of the form ϕ^n_•(ζ_n). The proof of this proposition will be carried out in Section 5.

Limit points are solutions
Recall from Section 2.4 that we defined L* f for any local and twice continuously differentiable function f. Substituting f_x, f_{xx} and f_{xy} gives the expressions (4.7)–(4.9).

Lemma 4.5. For any f ∈ {f_x, f_{xy} : x, y ∈ V} and A > 0, we have the uniform convergence (4.10).

Proof. Let us fix x ∈ V and consider the case f = f_x. Comparing (4.3) and (4.7), and noting that ζ(x) ≤ ∥ζ∥/α(x), the supremum on the left-hand side of (4.10) is at most a quantity vanishing as n → ∞; the convergence can be checked by separately considering the cases κ > ℓ and κ ≤ ℓ.
Next, for f = f_{xx}, comparing (4.5) and (4.8) and using the case of f_x that we just treated, it suffices to apply the bound in (5.8). Finally, the case f = f_{xy} with given x ≠ y is easier: comparing (4.6) and (4.9) and using the convergence for f_x and f_y, the required bound follows.

Proposition 4.6. Any limit point ζ^*_• of the sequence {ϕ^n_•(ζ_n)} has continuous trajectories and is a weak solution of the SDE (1.1).
Proof of Proposition 4.6. Denote the convergent subsequence by {ζ^k_•}_{k≥1}. By Skorokhod's representation theorem [2, p. 70], we may consider a probability space (Ω̃, F̃, P̃) with processes {Z^k_•}_{k≥1} and Z^*_• such that each Z^k_• has trajectories in D([0, ∞), E) and the same distribution as ζ^k_•, Z^*_• has the same distribution as ζ^*_•, and, for each ω ∈ Ω̃, we have d_S(Z^k_•(ω), Z^*_•(ω)) → 0 as k → ∞. For γ ∈ D([0, ∞), E) and t ≥ 0, define J_t(γ) := sup_{s≤t} ∥γ_s − γ_{s−}∥, the largest jump size of γ until time t. The jumps of Z^k_• vanish as k → ∞, so, by continuity of J_t in the Skorokhod topology [2, p. 125], we obtain J_t(Z^*_•(ω)) = 0 for all ω and t. This implies that the trajectories of Z^*_• are continuous, and we have proved the first part.
Then, using (2.2), we obtain that Z^k_•(ω) converges to Z^*_•(ω) uniformly on compact time intervals. We now define, for f ∈ {f_x, f_{xy} : x, y ∈ V}, the processes M^{k,f}_• and M^{*,f}_•, and claim the convergence (4.12) of the former to the latter. Let us first show how this convergence will allow us to conclude. Since the trajectories of M^{*,f}_• are continuous, (2.2) and (4.12) imply that M^{k,f}_• converges to M^{*,f}_• in the Skorokhod topology, pointwise in ω. Since almost sure convergence implies convergence in distribution, this gives convergence in distribution of the corresponding processes. Now, Corollary 1.19, page 527 in [6] states that if a sequence of càdlàg local martingales converges in distribution (with respect to the Skorokhod topology), then the limiting process is also a local martingale. We then obtain that M^{*,f}_• is a local martingale. By Proposition 2.5, this implies that Z^*_• is a solution to the SDE (1.1).

It remains to prove (4.12). To do so, fix f ∈ {f_x, f_{xy} : x, y ∈ V}, ω ∈ Ω̃ and t ≥ 0. Using (4.11), we obtain that Z^k_s(ω) converges to Z^*_s(ω) uniformly over s ≤ t, and also that A := sup{∥Z^k_s(ω)∥ : s ≤ t, k ≥ 1} is finite. By this latter point and Lemma 4.5, we then have that sup_{s≤t} |L_{n_k} f(Z^k_s) − L* f(Z^k_s)| → 0. Next, from the generator expressions in (4.7), (4.8) and (4.9), it follows that L* f is uniformly continuous on {ζ ∈ E : ∥ζ∥ ≤ A}; this and (4.11) imply that sup_{s≤t} |L* f(Z^k_s) − L* f(Z^*_s)| → 0. The desired convergence (4.12) follows.

Convergence to solutions: proof of Theorem 1.1
Proof of Theorem 1.1. We split the proof into two parts: first the case of a finite initial condition, then the general case.

Finite case
where T^n_T is the set of stopping times (with respect to the natural filtration of ζ^n_•) that are bounded by T.
To verify the first criterion, we will rely on a definition and a lemma. For any r > 0, we define Λ(r) := {x ∈ V : α(x) > 1/r}.
Lemma 5.1 (Negligible norm near infinity). For any T > 0 and ε > 0, there exists R > 0 such that (5.3) holds. We postpone the proof of this lemma to Section 5.2.
Proof of Proposition 4.4, condition (5.1). Fix t ≥ 0 and ε > 0. Using (4.1) with ζ′ ≡ 0, we obtain that there exists A > 0 such that, uniformly in n, ∥ζ^n_t∥ ≤ A outside an event of small probability. Furthermore, by Lemma 5.1, for any k ∈ ℕ there exists R_k such that, again uniformly in n, the norm of ζ^n_t restricted to the complement of Λ(R_k) is at most 1/k outside an event of small probability. Defining K as the set of configurations satisfying all these bounds, we have P(ζ^n_t ∈ K) > 1 − ε for all n. We claim that K is compact. To verify this, fix a sequence {ζ_j}_{j≥1} of elements of K. For every x ∈ V, we have ζ_j(x) ≤ A/α(x) for every j, so, using a diagonal argument, we can obtain a subsequence {ζ_{j′}}_{j′≥1} such that ζ_{j′}(x) is convergent for each x. Let ζ be defined by ζ(x) := lim_{j′} ζ_{j′}(x) for each x. Next, Fatou's Lemma gives that ζ satisfies the same bounds, and the distance from ζ_{j′} to ζ is controlled by the tail sums outside Λ(R_k), which can be made as small as desired by taking k large. This shows that K is compact and completes the proof of (5.1).
For the proof of (5.2), again we will need a preliminary result.
Lemma 5.2 (Oscillation of coordinates). For any T > 0, ε > 0 and x ∈ V, the corresponding oscillation bound holds. We postpone the proof of this lemma to Section 5.3.
Proof of Proposition 4.4, condition (5.2). Fix T > 0 and ε > 0. By Lemma 5.1, we may choose R > 0 such that, for any n, the norm outside Λ(R) stays small throughout [0, T] with high probability. Next, by Lemma 5.2 and a union bound over x ∈ Λ(R), we can choose δ_0 > 0 such that, for any δ ≤ δ_0, n ∈ ℕ and τ ∈ T^n_T, the oscillation of the coordinates in Λ(R) is controlled with high probability. Therefore, by the triangle inequality, condition (5.2) follows. Since ε is arbitrary, the proof is complete.

Norm near infinity: proof of Lemma 5.1
Let us define the quantities in (5.4). We observe that, by (4.1), for any ε > 0, T > 0 and n ≥ 1, the corresponding uniform estimate holds.

Proof of Lemma 5.1. Fix T > 0 and ε > 0. By (5.4), we can choose r large enough that the contribution of the initial condition is small. Next, note that, for any n, R and t, the tail of the norm at time t is controlled by the full norm, so, for any n and R, we obtain a bound of order 1/(Rε).

Now, the assumption that ζ_0 ∈ E ensures that the remaining term is small if R is large enough. Combining (5.5) and (5.6) with the bound above gives the desired result.

Oscillation of coordinates: proof of Lemma 5.2
In the proof of Lemma 5.2, it will be useful to note the bound (5.7), valid for any n ∈ ℕ, x ∈ V and A > 0.

Proof of Lemma 5.2. As noted in Section 4.1, writing M^n_• for the Dynkin martingale associated to the coordinate function, we have that M^n_• is a local martingale, with quadratic variation given by the corresponding integral of Q_n. Now, using (5.7), we can choose δ > 0 small enough that the oscillation over time intervals of length δ is small with high probability. To conclude the proof, we choose A > 0 large enough that P(τ^n_A < T) < ε/2.

A Appendix
We obtain the decomposition (A.1). The expected value of the last integral is zero. To bound the first two terms, we note that, for every s ≤ σ_A and x ∈ V, we have ζ^1_s(x) ≤ A and ζ^2_s(x) ≤ A. In the remainder of this proof, C_A will denote a positive constant that depends only on A, and whose value may change from line to line. Since both κ and ℓ are greater than or equal to one, the maps u ↦ u^κ and u ↦ u^ℓ are Lipschitz on [0, A]. We now extend the probability space in which ζ_• is defined to a space (Ω̃, F̃, P̃) where an auxiliary family of independent Brownian motions {W^x_•}_{x∈V} is defined.
then these conditions are satisfied by α(x) := exp{−|x|}, where |·| denotes any norm in R^d. Next, we define

Theorem 1.1.
(a) Let ζ_0 ∈ E and let {η^n_0} be a sequence in E with ∥(1/n) η^n_0 − ζ_0∥ → 0 as n → ∞. For each n, let (η^n_t)_{t≥0} denote the Markov chain on E with transitions encoded by (1.3), started from η^n_0. Then, as n → ∞, the processes (1/n) η^n_• converge in distribution (with respect to the Skorokhod topology) to an E-valued process ζ_• with continuous trajectories which is a weak solution to (1.1) with initial condition ζ = ζ_0. The law of this process does not depend on the choice of sequence {η^n_0} with ∥(1/n) η^n_0 − ζ_0∥ → 0.
(b) In case Σ_{x∈V} ζ_0(x) < ∞, the process obtained through this limit is the unique weak solution to (1.1) with initial condition ζ = ζ_0, in the sense that it has the same distribution as any other solution of the same equation.

A.1 Uniqueness with finite mass: proof of Proposition 2.4
Let ζ^1_• and ζ^2_• be two limit points of {ζ^n_•} having the same initial condition ζ^*_0 ∈ E_0. By [13, Proposition 1, p. 158], we can assume that the two processes ζ^1_• and ζ^2_• of the statement of the proposition are defined on the same probability space, and that they solve (1.1) with respect to the same Brownian family {B̃^x_•}_{x∈V}. This allows us to consider the difference process D_t(x) := ζ^1_t(x) − ζ^2_t(x). To prove that ζ^1_• and ζ^2_• have the same distribution, it is enough to show that, for all t ∈ [0, T], E[∥D_t∥_1] = 0, where ∥D_t∥_1 := Σ_x |D_t(x)|. To do so, we first bound E[∥D_{t∧σ_A}∥_1] for a suitable stopping time σ_A.

To keep track of the extension of the probability space described above, we denote by ζ̃_• and M̃^f_• the versions of ζ_• and M^f_• in the larger space, and we define the processes B^x_• accordingly. Using (A.4), it can then be proved that {B^x_•}_{x∈V} is a family of continuous local martingales with cross-variation process given by ⟨B^x_•, B^y_•⟩_t = t · 1_{x=y}, t ≥ 0. By Lévy's representation theorem [7, p. 157], it follows that {B^x_•}_{x∈V} is a family of independent Brownian motions. Since, almost surely, f_x(ζ̃_t) = f_x(ζ̃_0) + ∫_0^t L* f_x(ζ̃_s) ds + M̃^{f_x}_t, we obtain

ζ̃_t(x) = ζ̃_0(x) + ∫_0^t [∆_p ζ̃_s(x) − b(ζ̃_s(x))^κ] ds + ∫_0^t (a(ζ̃_s(x))^ℓ)^{1/2} dB^x_s.