Stochastic Reaction-diffusion Equations Driven by Jump Processes

We establish the existence of weak martingale solutions to a class of second order parabolic stochastic partial differential equations. The equations are driven by multiplicative jump-type noise with a non-Lipschitz multiplicative functional. The drift in the equations contains a dissipative nonlinearity of polynomial growth.

One consequence of the main result of this paper, i.e. Theorem 5.2, is that for any u_0 ∈ C_0(O) there exists a C_0(O)-valued càdlàg process u = {u(t) : t ≥ 0} which is a martingale solution to problem (1.1). Our main result allows the treatment of equations with more general coefficients than in problem (1.1): the diffusion coefficient g(u) = sin(u) sin(1/u) 1_{R\{0}}(u) is just an example of a bounded and continuous function, the drift f(u) = u − u^3 is an example of a dissipative function f : R → R of polynomial growth, and the Laplace operator ∆ is a special case of a second order uniformly elliptic dissipative operator with variable coefficients. Moreover, our results are applicable to equations with infinite dimensional Lévy processes as well as to more general classes of initial data, for instance the L^q(O) spaces with q ≥ p, as well as the Sobolev spaces H^{γ,q}_0(O). These and other examples are carefully presented in Sections 6 and 7.
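For orientation, problem (1.1) with the particular coefficients just mentioned can be written in the following representative form (this display is only an illustrative reconstruction; the precise formulation of (1.1) is given in the introduction of the paper):

```latex
\begin{equation*}
\begin{cases}
du(t) = \bigl(\Delta u(t) + u(t) - u(t)^{3}\bigr)\,dt
        + g\bigl(u(t-)\bigr)\,dL(t), & t > 0,\\
u(t)\big|_{\partial O} = 0, & t > 0,\\
u(0) = u_{0} \in C_{0}(O),
\end{cases}
\qquad
g(u) = \sin(u)\,\sin\!\Bigl(\tfrac{1}{u}\Bigr)\,
       \mathbf{1}_{\mathbb{R}\setminus\{0\}}(u),
\end{equation*}
```

where L is a Lévy process of pure jump type.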
The approach we use in this paper basically follows a recent work [11] by one of the authors, in which a similar problem driven by a Wiener process was treated. The current work is a major improvement on [11]. Indeed, the Itô integral in martingale type 2 Banach spaces with respect to a cylindrical Wiener process, on which [11] relies so heavily, is replaced by the Itô integral in martingale type p Banach spaces with respect to Poisson random measures, see [14]. The compactness argument used in [11] depends on the Hölder continuity of the trajectories of the stochastic convolutions driven by a Wiener process. However, since the trajectories of the stochastic convolutions driven by a Lévy process are not continuous, the natural counterpart of Hölder continuity to use is the càdlàg property of the trajectories. Unfortunately, as many known counterexamples show, see for instance the recent monograph [57] as well as the even more recent papers [17] and [12], the trajectories of the stochastic convolutions driven by a Lévy process may fail to be càdlàg even in the space in which the Lévy process lives, and hence this issue has to be handled with special care. A third, but not least, difference with respect to [11] is the martingale representation theorem. In [11] a known result by Dettweiler [19] was used. In the Lévy process case we have, to our great surprise, not found an appropriate martingale representation theorem, so in the current paper we prove a generalization of the result of Dettweiler [19]. Finally, instead of using stopping times as in [11], we use interpolation methods in order to control certain norms of the solution, see Theorem C.1.
As we indicated in the Abstract, to the best of our knowledge, our paper answers positively a long-standing open problem on the existence of solutions to stochastic reaction-diffusion equations with multiplicative Lévy (or jump) noise.
Parabolic SPDEs driven by additive Lévy noise were introduced by Walsh [67] and Gyöngy [35]. Walsh, whose motivation came from neurophysiology, studied a particular example of the cable equation, which describes the behaviour of voltage potentials of spatially extended neurons, see also Tuckwell [66]. Gyöngy considered stochastic equations with general Hilbert space valued semimartingales replacing the Wiener process and generalized the existence and uniqueness theorem of Krylov and B. L. Rozovskii [46]. The questions we study in our paper are of a similar type to those studied by Walsh and Tuckwell, but more general, as we allow more irregular and non-additive noise. They are, however, of a different type than those studied by Gyöngy: the difference is of a similar order as that between [11] and [46] in the Wiener process case.
Since the early eighties many works have been written on the topic, in particular by Albeverio, Wu and Zhang [2], Applebaum and Wu [6], Bié [8], Hausenblas [38,39], Kallianpur and Xiong [43,44], and Knoche [45]. Some of these papers use the framework of Poisson random measures, while others use that of Lévy noise. However, all these papers typically deal with Lipschitz coefficients and/or Hilbert spaces, and hence none of them is applicable to stochastic reaction-diffusion equations.
There are not many works about SPDEs driven by Lévy processes in Banach spaces. Here we can only mention the papers [38,39] by the second author, a very recent paper [14] by both authors, and [49] by Mandrekar and Rüdiger (who actually study ordinary stochastic differential equations in martingale type 2 Banach spaces). On the other hand, Peszat and Zabczyk [57] formulate their results in the framework of Hilbert spaces, while Zhang and Röckner [59] generalized the results of Gyöngy [35] and Pardoux [54] by, firstly, studying evolution equations driven by both Wiener and Lévy processes with globally Lipschitz (and not only linear) coefficients and, secondly, studying the large deviation principle and exponential integrability of the solutions.
Mytnik [53] established the existence of a weak solution for the stochastic heat equation driven by stable noise, with noise coefficients of polynomial type and without any drift term f. Hence not only are our results incomparable with his, but the methods of proof are also different. In [52] Mueller, Mytnik and Stan investigated the heat equation with one-sided, time independent stable noise.
Martingale solutions of SPDEs driven by Lévy processes in Hilbert spaces are not often treated in the literature. Mytnik [53] constructed a weak solution to SPDEs with non-Lipschitz coefficients driven by space time stable Lévy noise. Mueller [51] studied non-Lipschitz SPDEs driven by nonnegative Lévy noise of index α ∈ (0, 1).
Concerning nonlinear stochastic equations with Lévy processes, not many papers exist. The most recent paper [21] by Dong considers the Burgers equation with compound Poisson noise and thus in fact deals with the deterministic Burgers equation on random intervals. Some discussion of the stochastic Burgers equation with additive Lévy noise is contained in [17], where it is shown how integrability properties of the trajectories of the corresponding Ornstein-Uhlenbeck process play an important rôle.
The paper is organised as follows. In Sections 2 and 3 we introduce the definitions necessary to formulate the main results. The main results are then presented in Sections 4 and 5. First we consider an SPDE with continuous and bounded coefficients. Here, the main theorem is formulated in terms of Poisson random measures, i.e. Theorem 4.5, and in terms of Lévy processes, i.e. Theorem 4.7. Then, in Section 5, we consider an SPDE of reaction-diffusion type and list the exact conditions under which we were able to show the existence of a martingale solution. Two examples illustrating the applicability of our results are then presented in Sections 6 and 7.
To be more precise, in Section 6 we present an SPDE of reaction-diffusion type driven by a Lévy noise of spectral type, and in Section 7 we present an SPDE of reaction-diffusion type driven by a space time Lévy white noise. The remaining sections and the Appendix are devoted to the proofs of our results.

Notation 1.1. By N we denote the set of natural numbers, i.e. N = {0, 1, 2, ...}, and by N̄ we denote the set N ∪ {+∞}. Whenever we speak about N (or N̄)-valued measurable functions we implicitly assume that that set is equipped with the trivial σ-field 2^N (or 2^N̄). By R_+ we will denote the interval [0, ∞) and by R_* the set R \ {0}. If X is a topological space, then by B(X) we will denote the Borel σ-field on X. By λ^d we will denote the Lebesgue measure on (R^d, B(R^d)) and by λ the Lebesgue measure on (R, B(R)).
If (S, S) is a measurable space, then we will denote by S ⊗ B(R_+) the product σ-field on S × R_+ and by ν ⊗ λ the product measure of a measure ν on S and the Lebesgue measure λ.

Stochastic Preliminaries
In this paper we use the language of Poisson random measures to deal with equations of type (1.1). Therefore, in the first part of this section we introduce Poisson random measures. In the second part we point out the relationship between Poisson random measures and Lévy processes.
Let 1 < p ≤ 2 be fixed throughout the whole paper. Moreover, throughout the whole paper we assume that (Ω, F, F, P) is a complete filtered probability space. Here, for simplicity, we denote the filtration {F_t}_{t≥0} by F. The following definitions are presented for the sake of completeness, because the notion of a time homogeneous random measure is introduced in many, not always equivalent, ways.

Definition 2.2. (see [40], Def. I.8.1) Let (Z, Z) be a measurable space. A Poisson random measure η on (Z, Z) over (Ω, F, F, P) is a measurable function η : (Ω, F) → (M_I(Z × R_+), M_I(Z × R_+)) such that
(i) for each B ∈ Z ⊗ B(R_+), η(B) := i_B ∘ η : Ω → N̄ is a Poisson random variable with parameter Eη(B);
(ii) η is independently scattered, i.e. if the sets B_j ∈ Z ⊗ B(R_+), j = 1, ..., n, are disjoint, then the random variables η(B_j), j = 1, ..., n, are independent;
(iii) for each U ∈ Z, the N̄-valued process (N(t, U))_{t≥0} defined by N(t, U) := η(U × (0, t]), t ≥ 0, is F-adapted and its increments are independent of the past, i.e. if t > s ≥ 0, then N(t, U) − N(s, U) = η(U × (s, t]) is independent of F_s.

Remark 2.3. In the framework of Definition 2.2 the assignment ν : Z ∋ A ↦ Eη(A × (0, 1)) defines a uniquely determined measure.
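In particular, condition (i) of Definition 2.2 states that, whenever m := Eη(B) < ∞,

```latex
\mathbb{P}\bigl(\eta(B) = k\bigr) \;=\; e^{-m}\,\frac{m^{k}}{k!},
\qquad k \in \mathbb{N},
```

so that Eη(B) = Var η(B) = m; if Eη(B) = ∞, then η(B) = +∞ almost surely.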
Given a complete filtered probability space (Ω, F, F, P), where F = {F_t}_{t≥0} denotes the filtration, the predictable σ-field P on Ω × R_+ is the σ-field generated by all continuous F-adapted processes (see e.g. Kallenberg [42, Chapter 25]). A real valued stochastic process {x(t) : t ≥ 0} defined on a filtered probability space (Ω, F, F, P) is called predictable if the mapping x : Ω × R_+ ∋ (ω, t) ↦ x(t)(ω) ∈ R is P-measurable.

Definition 2.4. Assume that (Z, Z) is a measurable space and ν is a nonnegative measure on (Z, Z). Assume that η is a time homogeneous Poisson random measure with intensity measure ν on (Z, Z) over (Ω, F, F, P). The compensator of η is the unique predictable random measure on Z × B(R_+) over (Ω, F, F, P), denoted by γ, such that for each T < ∞ and A ∈ Z with ν(A) < ∞ the process η(A × (0, ·]) − γ(A × (0, ·]), restricted to [0, T], is a martingale on (Ω, F, F, P).
Remark 2.5. Assume that η is a time homogeneous Poisson random measure with intensity ν on (S, S) over (Ω, F, F, P). It turns out that the compensator γ of η is uniquely determined. The difference between a time homogeneous Poisson random measure η and its compensator γ, i.e. η̃ := η − γ, is called a compensated Poisson random measure.
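For a time homogeneous Poisson random measure the compensator admits an explicit description (a standard fact, recorded here for the reader's convenience):

```latex
\gamma \;=\; \nu \otimes \lambda,
\qquad\text{i.e.}\qquad
\gamma\bigl(A \times (0,t]\bigr) \;=\; t\,\nu(A),
\qquad A \in \mathcal{S},\ t \ge 0,
```

so that, for ν(A) < ∞, the process t ↦ η(A × (0, t]) − tν(A) is a mean-zero martingale.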
The classical Itô stochastic integral has been generalised in several directions, for example in Banach spaces of martingale type p. Since it would exceed the scope of the paper, we have decided not to present a detailed introduction on this topic and restrict ourselves only to the necessary definitions. A short summary of stochastic integration in Banach spaces of martingale type p is given in Brzeźniak [9], Hausenblas [38], or Brzeźniak and Hausenblas [14].
We finish with the following version of the Stochastic Fubini Theorem (see [68]).

2.1. Lévy processes. An E-valued stochastic process L = {L(t) : t ≥ 0} is called a Lévy process if
(i) for any choice of n ∈ N and 0 ≤ t_0 < t_1 < ... < t_n, the random variables L(t_0), L(t_1) − L(t_0), ..., L(t_n) − L(t_{n−1}) are independent;
(ii) L(0) = 0 a.s.;
(iii) for all 0 ≤ s < t, the law of L(t + s) − L(s) does not depend on s;
(iv) L is stochastically continuous;
(v) the trajectories of L are a.s. càdlàg on E.
Let F = {F t } t≥0 be a filtration on F. We say that L is a Lévy process over (Ω, F, F, P), if L is an E-valued and F-adapted Lévy process.
The characteristic function of a Lévy process is uniquely determined and is given by the Lévy-Khinchin formula. In particular, if E is a Banach space of type p and L = {L(t) : t ≥ 0} is an E-valued Lévy process, then there exist a positive operator Q : E′ → E, a nonnegative measure ν concentrated on E \ {0} such that ∫_E 1 ∧ |z|^p ν(dz) < ∞, and an element m ∈ E such that the Lévy-Khinchin formula holds (we refer e.g. to [3,4,23,1]). The measure ν is called the characteristic measure of the Lévy process L. A Lévy process is of pure jump type iff Q = 0. Moreover, the triplet (Q, m, ν) uniquely determines the law of the Lévy process. Now, starting with an E-valued Lévy process over a filtered probability space (Ω, F, F, P), one can construct an integer valued random measure as follows. For each (B, I) ∈ B(E) × B(R_+) let η_L(B × I) count the jumps of L of size belonging to B occurring at times in I. If E = R^d, it can be shown that η_L defined above is a time homogeneous Poisson random measure (see [62, Chapter 4, Theorem 19.2]).
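The Lévy-Khinchin formula referred to above presumably takes the following standard form (the truncation convention in the jump integral may differ from the one used in the paper): for every x* ∈ E′ and t ≥ 0,

```latex
\mathbb{E}\,e^{\,i\langle L(t),\,x^{*}\rangle}
\;=\;
\exp\Bigl(t\Bigl[\,i\langle m, x^{*}\rangle
\;-\;\tfrac{1}{2}\langle Q x^{*}, x^{*}\rangle
\;+\;\int_{E}\bigl(e^{\,i\langle z,\,x^{*}\rangle} - 1
   - i\langle z, x^{*}\rangle\,\mathbf{1}_{\{|z|\le 1\}}(z)\bigr)\,\nu(dz)
\Bigr]\Bigr),
```

and a Lévy process of pure jump type corresponds to Q = 0.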
Vice versa, let η be a time homogeneous Poisson random measure on a Banach space E of type p, p ∈ (0, 2]. Then the corresponding integral (Dettweiler [29]) is well defined if the intensity measure is a Lévy measure, whose definition is given below.
Definition 2.8. A symmetric³ measure λ on B(E) is called a Lévy measure⁴ if the associated function is the characteristic function of a Radon measure on E.

³ i.e. λ(A) = λ(−A) for all A ∈ B(E).
⁴ As remarked in Linde [23, Chapter 5.4], we do not need to suppose that the integral …

Suppose E is a Banach space of martingale type p. Then, for a time homogeneous Poisson random measure η on E with intensity measure ν ∈ L(E), the corresponding process is an E-valued Lévy process with triplet (0, m, ν̂). For more details about the connection between Banach spaces of type p and stochastic integration we refer to Dettweiler [29].

Analytic Assumptions and Hypotheses
Let us begin with a list of assumptions which will be frequently used throughout this and later sections. Whenever we use any of them this will be specifically written.
(H1) B is a Banach space of martingale type p.
(H2-a) A is a positive operator in B, i.e. a densely defined and closed operator for which there exists M > 0 such that for all λ ≥ 0

Moreover, (A + λ)^{−1} : B → B is assumed to be a compact operator.
(H2-b) −A is the generator of an analytic semigroup {e^{−tA}}_{t≥0} on B.
(H3) There exist positive constants K and ϑ satisfying

Let us now formulate some consequences of the last assumption. We begin by recalling a result from [24].

Lemma 3.1. Suppose that a linear operator A in a Banach space E satisfies the conditions (H2-a), (H2-b) and (H3). Then, if µ ≥ 0, the operator A + µI satisfies those conditions as well. The condition (H2-a) is satisfied with the same constant M and, moreover, there exists a constant K̃ such that for each µ ≥ 0,

Theorem 3.2. Suppose that a linear operator A in a Banach space E satisfies the conditions (H2-a), (H2-b) and (H3). Then there exists a constant K such that for all µ ≥ 0 and α ∈ [0, 1],

This function is continuous in the closed strip 0 ≤ Re z ≤ 1 and analytic in its interior. From the assumptions we infer that

Therefore, the inequality (3.4) follows by applying Hadamard's three line theorem, see e.g. the Appendix to IX.4 in [60]. The proof of the other two inequalities is analogous.

Lemma 3.3. Assume that −A is the infinitesimal generator of an analytic semigroup {e^{−tA}}_{t≥0} on a Banach space E. Assume that for some ω ∈ (0, π/2), M > 0 and r > 0 (compare with Assumption II.6.1 in [56]),

where B_C(0, r) is the open ball in C centred at 0 of radius r and ρ(−A) is the resolvent set of the operator −A, and

Then there exist a constant δ > 0 and constants M_α > 0 for α ≥ 0 such that

Proof. For the first part see Theorem 6.13(c) in [56, Pazy]. If α ≤ 1, the second part also follows from Theorem 6.13(c) therein. The general case follows by induction on [α], the integer part of α.
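The conclusion of Lemma 3.3 is presumably the standard pair of analytic semigroup estimates, matching Theorem 6.13(c) of [56] cited in the proof:

```latex
\bigl\| e^{-tA} \bigr\| \;\le\; M_{0}\, e^{-\delta t},
\qquad
\bigl\| A^{\alpha} e^{-tA} \bigr\| \;\le\; M_{\alpha}\, t^{-\alpha} e^{-\delta t},
\qquad t > 0,\ \alpha \ge 0.
```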

Martingale Solutions
Consider the following stochastic evolution equation.
We suppose that B is a separable Banach space and −A is a generator of an analytic semigroup {e −tA } t≥0 on B. More detailed conditions on B and A are listed in assumptions (H1)-(H4) in Section 3.
We shall define now a mild and, later on, a martingale solution to Problem (4.1).
Definition 4.1. Assume that p ∈ (1, 2] and B is a separable Banach space of martingale type p such that the conditions (H1)-(H2) and (H3) from Section 3 are satisfied. Assume that (Z, Z) is a measurable space and η̃ is a compensated time homogeneous Poisson random measure on (Z, Z) over (Ω, F, F, P) with intensity measure ν. Suppose that F is a densely defined function from [0, ∞) × B to B and G is a densely defined function from [0, ∞) × B to L^p(Z, ν; B), such that ∫_Z 1_{{0}}(G(t, x; z)) ν(dz) = 0 for all t ∈ R_+. Assume that u_0 ∈ B.
Assume that u is a B-valued, progressively measurable and càdlàg process such that (F(t, u(t)))_{t≥0} and (G(t, u(t)))_{t≥0} are well defined B-valued, resp. L^p(Z, ν; B)-valued, progressively measurable processes. Then u is called a mild solution on B to Problem (4.1) iff for any t > 0, P-a.s.,

A martingale solution on B to Problem (4.1) is a system in which, in particular, η is a time homogeneous Poisson random measure on (Z, B(Z)) over (Ω, F, F, P) with intensity measure ν. We say that a martingale solution (4.4) to (4.1) is unique iff, given another martingale solution u′ to (4.1), the laws of the processes u and u′ on the space D(R_+; B) are equal.
In the next paragraphs we will formulate the assumptions for our first main result. We begin by introducing an auxiliary Banach space E on which the functions F and G will be defined.

Assumption 4.2. There exists δ_F ∈ [0, 1) such that the map A^{−δ_F} F : [0, ∞) × E → E is measurable, bounded and locally continuous with respect to the second variable, uniformly with respect to the first one. Let us denote

Assumption 4.3. Let (Z, Z) be a measurable space and ν a σ-finite measure on it. There exists δ_G ∈ [0, 1/p) such that the corresponding function is measurable, bounded and locally continuous with respect to the second variable, uniformly with respect to the first one.

Moreover, if we assume that F and G are Lipschitz continuous, uniformly with respect to t ∈ R_+, i.e. there exists K > 0 such that for all t ∈ R_+ and all u_1, u_2 ∈ E,

then the SPDE (4.1) has a unique strong solution. In our work we are interested in the case when both these conditions are relaxed.
Finally, we will impose the following assumption (Assumption 4.4) on the initial condition.

Theorem 4.5. Let δ be a constant such that δ > max(δ_G + 1/p, δ_F − 1 + 1/p, δ_I). Suppose that a function F satisfies Assumption 4.2 and a function G satisfies Assumption 4.3 with E = D^B_A(δ, p). Then, for any u_0 satisfying Assumption 4.4, there exists a martingale solution to Problem (4.1).

Remark 4.6. The Assumption (H4) is not needed, since it can be compensated for by taking A + λI instead of A.
As we pointed out in paragraph 2.1, one can construct from a Lévy process a time homogeneous Poisson random measure on a Banach space and vice versa. Hence, Theorem 4.5 can be written in terms of a Lévy process. Let Z be a Banach space, L = {L(t) : t ≥ 0} be a Z-valued Lévy process, F and G certain mappings specified in the next paragraph. Thus we are interested in the following stochastic equation.
As before we need an underlying Banach space B and an auxiliary space E. Therefore, again, we assume that B is a separable Banach space and −A is the generator of an analytic semigroup {e^{−tA}}_{t≥0} on B satisfying the assumptions (H1)-(H4). Furthermore, E satisfies the assumptions of Section 4.
is measurable, bounded and locally continuous with respect to the second variable, uniformly with respect to the first one. In particular, we assume that
(ii) for any u_0 ∈ E and each ε > 0 there exists δ > 0 such that

Theorem 4.7. Assume that p ∈ (1, 2] and B is a separable Banach space of martingale type p such that the conditions (H1)-(H2) and (H4) from Section 3 are satisfied. Let Z be a Banach space of type p and L = {L(t) : 0 ≤ t < ∞} be a Z-valued Lévy process of pure jump type over a complete filtered probability space (Ω, F, F, P) with characteristic measure ν ∈ M(Z).
Let δ be a constant such that δ > max(δ_G + 1/p, δ_F − 1 + 1/p, δ_I). Suppose that a function F satisfies Assumption 4.2 and a function G satisfies Assumption 4.5 with E = D^B_A(δ, p).
Moreover, there exists λ ∈ R such that ∫_0^∞ e^{−λt} E|u(t)|_E^p dt < ∞.

Stochastic reaction diffusion equations
The next assumption uses the notion of the subdifferential of a norm, see [26]. Given x, y ∈ X, the map ϕ : R ∋ s ↦ |x + sy| ∈ R is convex and therefore it is right and left differentiable. Define D_±|x|y to be the right/left derivative of ϕ at 0. Then the subdifferential ∂|x| of |x|, x ∈ X, is defined by

∂|x| := {x* ∈ X* : D_−|x|y ≤ ⟨y, x*⟩ ≤ D_+|x|y for all y ∈ X},

where X* is the dual space of X. One can show that not only is ∂|x| a nonempty, closed and convex set, but also ∂|x| = {x* ∈ X* : ⟨x, x*⟩ = |x| and |x*| ≤ 1}.
In particular, ∂|0| is the unit ball in X * .
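As a simple illustration of the two descriptions of ∂|x|, take X = R with the absolute value norm; then X* = R and

```latex
\partial|x| \;=\;
\begin{cases}
\{\operatorname{sgn}(x)\}, & x \neq 0,\\[2pt]
[-1, 1], & x = 0,
\end{cases}
```

which is consistent with ∂|0| being the unit ball of X*.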
Assumption 5.1. The mapping F : [0, ∞) × X → X is separately continuous with respect to both variables. Moreover, there exist a constant k ∈ R and an increasing function a : R_+ → R_+ with lim_{t→∞} a(t) = ∞ such that for all x ∈ D(A), y ∈ X, t ≥ 0 and any z ∈ ∂|x|,
Remark 5.1. Later on we will take for X an intermediate space between B and E.
To show our next result, i.e. Theorem 5.2, we approximate the possibly unbounded mapping F by bounded mappings and apply Theorem 4.5. Therefore, we introduce the following assumption. Studying our examples, one will note that this assumption is satisfied in many interesting applications.
Assumption 5.2. There exists a sequence of bounded mappings F_n : [0, ∞) × X → X such that
• F_n converges pointwise to F in B;
• F_n satisfies Assumption 5.1 uniformly in n.
Later on we shall also need some of the following conditions.

Assumption 5.3. There exist numbers k_0 ≥ 0 and q ≥ p such that Assumption 5.1 is satisfied with the function a : R_+ → R_+ defined by a(r) = k_0(1 + r^q), r ≥ 0.

In the main result of this section we replace the boundedness assumption on F by the dissipativity of the drift −A + F. Thus, under these weaker hypotheses on the perturbation, we can prove the following theorem.
Theorem 5.2. Assume that p ∈ (1, 2] and B is a separable Banach space of martingale type p such that the conditions (H1)-(H2) and (H4) from Section 3 are satisfied. Let Z be a measurable space and η̃ be a compensated time homogeneous Poisson random measure on (Z, B(Z)) over a probability space (Ω, F, F, P) with intensity ν ∈ M(Z).
If X is a separable Banach space such that X_0 ↪ X ↪ X_1 continuously, then for any function F : [0, ∞) × X → X satisfying the Assumptions 5.1-5.4 and any u_0 satisfying Assumption 4.4 there exists a martingale solution on B of Problem (4.1). Moreover, this solution is q-integrable in X, i.e. there exists a real number λ < ∞ such that

Remark 5.3. Just as Theorem 4.5 was reformulated in terms of Lévy processes as Theorem 4.7, Theorem 5.2 can also be reformulated in terms of Lévy processes. However, since this would exceed the scope of the paper, we omit it.

The Reaction-Diffusion Equation with Lévy Noise of Spectral Type
Throughout this whole section, let O be a bounded open domain in R^d, d ≥ 1, with C^∞ boundary, let α > 0 and let p ∈ (1, 2] be a fixed number.
Let {L_i : i ∈ N} be a family of i.i.d. real valued Lévy processes with characteristic measure ν_R, where ν_R is a Lévy measure on R. Our aim in this section is to specify the conditions under which Theorem 5.2 covers an equation of the following type, where ∆ is the Laplacian with Dirichlet boundary conditions, {e_i : i ∈ N} are the eigenfunctions of ∆ and α is chosen, e.g., according to

First, we will describe how to reformulate the sum of Lévy processes in terms of a Poisson random measure. Then we define a more general deterministic setting.

6.1. The stochastic setting. Let {e_i : i ∈ N} be an orthonormal basis of L^2(O) and let ν_R be a Lévy measure on R such that

We define a Lévy measure ν by

Let η be a time homogeneous Poisson random measure on N × R over a complete filtered probability space (Ω, F, F, P) with intensity ν. Let λ = (λ_i)_{i∈N} ∈ l^1(R) be a positive sequence and define a process L̃.

6.2. The deterministic setting. Let A be a second order uniformly elliptic differential operator, where a_{i,j} = a_{j,i} for i, j = 1, ..., d. Assume that the functions a_{ij} and a_0 are of C^∞ class on O. In particular, suppose that there exists C > 0 such that for all

Let A be the operator defined by

Now, we are interested in an SPDE of the following type

Remark 6.1. Observe that Equation (6.1) is of the same type as Equation (6.5), only written in terms of Lévy processes.
The following Theorem can be proved by verifying the assumptions of Theorem 5.2.
Theorem 6.2. Under the conditions described above, assume that λ_n = O(n^{−α}) for a real number α > 0 and that there exist numbers r ∈ [p, ∞) and δ ∈ R such that the following inequality is satisfied:

Then there exists a martingale solution to Problem (6.5) in B = H^{δ,r}_0(O).

Corollary 6.3. Let O ⊂ R^d be a bounded domain with smooth boundary and let ∆ be the Laplace operator. Let L = {L(t) : t ≥ 0} be a one dimensional Lévy process with characteristic measure ν such that for some p ∈ (1, 2] we have

We consider the following stochastic partial differential equation, where u_0 ∈ L^q(O), q ≥ p. It follows from our main result, i.e. Theorem 5.2, that there exists an L^q(O)-valued càdlàg process u = {u(t) : 0 ≤ t < ∞} which is a martingale solution to problem (6.6).
Proof of Corollary 6.3. Since L is a one dimensional Lévy process and the function R ∋ ξ ↦ sin(ξ) is infinitely differentiable, the α in Theorem 6.2 can be taken arbitrarily large. Thus, the only restriction is given by the initial condition. Since u_0 is supposed to be in L^q(O) and L^q(O) is of martingale type p, the trajectories of the solution u to Problem (6.6) are càdlàg in L^q(O). The second claim can be shown similarly.
Proof of Theorem 6.2. We will show that Theorem 5.2 is applicable. We will first analyze the stochastic term, secondly the nonlinearity, and finally, as a third step, we will choose the right Banach space to which the solution will belong.
Part I: The stochastic perturbation: The first step is to identify the Banach space, in which the stochastic perturbation takes values.
Proof of Claim 6.1. This can be proved by straightforward calculations. By interpolation, for all γ ∈ (0, 2),

Secondly, by a special case of the Gagliardo-Nirenberg inequality, for all j ∈ N_0 and r ≥ 1,

Hence, for any γ ∈ (0, 2), r ≥ 2 and m ≥ 2, m ∈ N, there exists a constant C such that

Since A is strongly elliptic and of second order, we know by [56, Chapter 4.10.1, Chapter 5.6.2] that for the i-th eigenfunction e_i, i ∈ N,

Now, by (6.7), the RHS of (6.8) is finite.
Part II: The nonlinearity. Let us consider the function f defined by

Obviously, f is a separately continuous real valued function and satisfies condition (6.10). Since f satisfies Assumptions 5.1 and 5.2 with X = C_0(O), we obtain a sequence (F_n)_{n∈N}, defined by

which satisfies Assumption 5.2.
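Although the display defining f is missing above, the introduction suggests f(u) = u − u^3. Assuming this, the dissipativity required by Assumption 5.1 (with k = 1) can be checked directly: for u, v ∈ R,

```latex
\bigl(f(u) - f(v)\bigr)(u - v)
\;=\; (u-v)^{2}\Bigl(1 - \bigl(u^{2} + uv + v^{2}\bigr)\Bigr)
\;\le\; (u-v)^{2},
```

since u² + uv + v² = (u + v/2)² + 3v²/4 ≥ 0; moreover |f(u)| ≤ 2(1 + |u|³), so the growth function a(r) = k₀(1 + r^q) of Assumption 5.3 can be taken with q = 3.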
Part III: The triplet of Banach spaces. To verify that there exists a solution to equation (6.5), one has to specify the underlying Banach spaces B, E and X_0. For this purpose we choose γ_0 < γ_1 < γ_2 and define the auxiliary spaces

Then we fix r ∈ [p, ∞) and γ such that

and put X_1 = H^{γ,r}(O). Let us note that, due to the assumptions of Theorem 5.2, the embedding X_1 ↪ B has to be continuous. Therefore, γ_0 < γ has to be satisfied. Moreover, Claim 6.1 implies that there exists

Hence, G satisfies Assumption 4.3. Now we choose the entities γ_0, γ_1 and γ_2 according to the following constraints.

For the time being, we assume that conditions (i) to (iv) are valid. Consequently, E, X, X_0, X_1 and B satisfy the assumptions of Theorem 5.2.

then we can find γ_0, γ_1, γ_2 and δ_G satisfying conditions (i) to (iv). Fix

It remains to show that, if (i) to (iv) are valid, then E, X_0, X = C_0(O), X_1 and B satisfy the assumptions of Theorem 5.2. From δ_G < 1/p it follows that G satisfies Assumption 4.3. From (i) it follows that X ↪ X_1 continuously. From

The Reaction-Diffusion Equation of an arbitrary Order with Space Time Lévy Noise
In this section we want to apply our main result, i.e. Theorem 5.2, to SPDEs of reaction-diffusion type driven by so-called space time Poissonian noise or impulsive white noise. This kind of noise is a generalisation of the space time white noise and is treated quite often in the literature, e.g. in Peszat and Zabczyk [57, Definition 7.24] or in Saint Loubert Bié [8].
First, we will introduce the space time Poissonian noise and, secondly, the deterministic part of the equation. Finally, we present our result, i.e. the exact conditions under which a martingale solution of such a reaction-diffusion type equation with space time Poissonian noise exists.

7.1. The space time Poissonian white noise. Similarly to the space time Gaussian white noise, one can construct a space time Lévy white noise or space time Poissonian white noise. But before doing this, let us recall the definition of a Gaussian white noise (see e.g. Dalang [25]).
a Gaussian random variable with mean 0 and variance σ(A). Then, by definition, the space time Gaussian white noise is the measure valued process {W(· × [0, t)) : t ≥ 0}. If (Ω, F, F, P) is a filtered probability space, then we say W is a space time Gaussian white noise over (Ω, F, F, P).

Moreover, one can show that the measure valued process t ↦ W(· × [0, t)) generates, in a unique way, an L^2(O)-cylindrical Wiener process (Ŵ_t)_{t≥0}, see [13, Definition 4.1]. In particular, for any A ∈ O such that λ(A) < ∞,

In a similar way, one can define a Lévy white noise and a space time Lévy white noise.
Definition 7.2. Let (Ω, F, P) be a complete probability space, let (S, S, σ) be a measurable space and let ν ∈ L(R) (see Definition 2.8). Then a Lévy white noise on (S, S, σ) with jump size intensity measure ν is a family of random variables {L(A) : A ∈ S} such that
(ii) if the sets A_1, A_2 ∈ S are disjoint, then the random variables L(A_1) and L(A_2) are independent and

Proposition 7.5. If L is a space time Lévy white noise, then the process t ↦ L(· × [0, t)) is an impulsive cylindrical process on L^2(O) with jump size intensity ν. The distribution of the time derivative ∂L/∂t is an impulsive white noise with jump size intensity ν.
can be uniquely extended to L^2(O). Thus, by Definition 7.23 of [57], an impulsive cylindrical process on L^2(O) with jump size intensity ν is given by the unique L^2(O)-valued process such that for all φ ∈ L^2(O),

Since the set of simple functions is dense in

But the identity (7.1) follows from the fact that Z(t, 1_A) = L(A × [0, t)) for all t ≥ 0 and from part (i) of Definition 7.2. Now, approximating an arbitrary φ ∈ L^2(O) by a sequence of simple functions, part (ii) of Definition 7.2 gives the equivalence of the two definitions.
In the very same way, one can first define a Poissonian white noise and then a space time Poissonian white noise.
Definition 7.6. Let (Ω, F, P) be a complete probability space, let (S, S, σ) be a measurable space and let ν ∈ L(R). Then the Poissonian white noise on (S, S, σ) with jump size intensity ν is defined accordingly, with σ the Lebesgue measure. Then, by definition, the (homogeneous) space time Poissonian white noise is the measure valued process

Again, to fix our notation, we will call ν the jump size intensity measure of η. If (Ω, F, F, P) is a filtered probability space, then we say η is a (homogeneous) space time Poissonian white noise over (Ω, F, F, P) if the measure valued process t ↦ Π(t) defined in (7.2) is adapted.

Our main result is formulated in terms of Banach spaces. Therefore, in order to apply our main results to the space time Poissonian white noise, we have to construct a time homogeneous Poisson random measure taking values in a Banach space. We will therefore finish this section with the following proposition.
That η_Z is a time homogeneous Poisson random measure follows from the fact that η is a time homogeneous Poisson random measure and 1_B does not depend on time. Therefore, straightforward calculations show that η_Z is indeed a time homogeneous Poisson random measure on Z with intensity ν̂.
7.2. The deterministic setting. Let O be a bounded open domain in R^d with boundary ∂O of class C^∞. Let A be a second order partial differential operator. We assume that it is given in the following divergence form, with all coefficients of class C^∞ on the closure Ō of the bounded C^∞ domain O and the matrix [a_ij(x)] not necessarily symmetric. We define an

7.3. The Equation.
Fix p ∈ (1, 2] and q ≥ p. We keep the notation introduced in Section 7.2. Let L be a space time Lévy white noise with jump size intensity measure ν ∈ L(R), see Definition 7.2. Suppose that g : R × O → L^p(R, ν; R) is a bounded and continuous function. The SPDEs in which we are interested can now be heuristically written in the following form, where, roughly speaking, L̇ denotes the Radon-Nikodym derivative of the space time Lévy white noise L, i.e.
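Heuristically, and in analogy with problem (1.1) discussed in the introduction, equation (7.5) should have the following shape; the Dirichlet boundary condition and the exact placement of the arguments of g below are assumptions, the rigorous formulation being the weak one of Definition 7.9:

```latex
\begin{cases}
du(t,x) \;=\; \bigl[\mathcal{A}u(t,x) + f\bigl(u(t,x)\bigr)\bigr]\,dt
   \;+\; g\bigl(u(t,x);x\bigr)\,\dot{L}(dt,dx),
   & (t,x)\in\mathbb{R}_+\times\mathcal{O},\\[2pt]
u(t,x) \;=\; 0, & (t,x)\in\mathbb{R}_+\times\partial\mathcal{O},\\[2pt]
u(0,x) \;=\; u_0(x), & x\in\mathcal{O},
\end{cases}
```

where the operator is the divergence-form operator of Section 7.2 (the Laplace operator in the model case, with f(u) = u − u^3 as the model drift).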
We define the solution to the problem (7.5) in the weak sense.
Definition 7.9. A weak martingale solution to equation (7.5) is a system such that, in particular, (iii) u is an F-adapted stochastic process such that for any φ ∈ C^∞(R_+ × Ō) the real valued process {⟨u(t), φ(t, ·)⟩ : t ∈ R_+} satisfies the following equation P-a.s.
The following result will be proved by applying Theorem 5.2.
Theorem 7.10. Under the conditions described in Subsection 7.3, for

The same theorem can be reformulated in terms of space time Poisson noise. But, since such a result would not differ significantly from the last one, we omit it.
Proof of Theorem 7.10. We will show that Theorem 5.2 is applicable. As in the proof of Theorem 6.2, we have to find an appropriate quadruple of Banach spaces satisfying the assumptions of Theorem 5.2. We will first analyse the stochastic term, then the nonlinearity; finally, we will choose the right Banach spaces to be used in Theorem 5.2.
Part I: The stochastic term. First, note that we can replace L by a space time homogeneous Poissonian white noise η with jump size intensity measure ν. This follows from the introduction of Section 7.1. Now, we construct a Poisson random measure on a Banach space which describes the impulsive white noise with jump size intensity ν from Definition 7.6. For this aim we set Z = B

As a next point, we have to identify the operator G of equation (4.1) acting on L^p(Z, ν, B). In the first step, we define a map G_0. Now, by Proposition D.1, the operator G_0 can be extended to a continuous operator G acting on L^p(Ō) × R × Ō. In fact, if g is bounded and continuous, then G satisfies Assumption 4.3.
Part II: The nonlinear term. Let us define a function f by setting

It is obvious that f is separately continuous. Moreover, see Manthey and Maslowski [50], there exists a constant K > 0 such that

It follows that if f satisfies condition (7.10), then

for all v, z ∈ R and t ≥ 0, x ∈ O. Therefore, as in the previous example of Section 6, put X = C_0(R). Now, the map F : X → X, where F(u), u ∈ X, is defined by (compare (6.11)), satisfies Assumptions 5.1 and 5.2 on X.
Part III: The quadruple of Banach spaces. We will specify the underlying Banach spaces B, E, X_0 and X_1. For this purpose, let us fix γ_0 < γ_1 < γ_2 and define the auxiliary spaces

First, we fix γ such that

holds isomorphically, and we put γ_2 = γ + 2kδ_G. Now, γ_0 and γ_1 are chosen according to the following constraints.
For the time being, we assume that (i) to (v) are valid. Let δ_I < 1 and put

Consequently, E, X, X_0, X_1, and B satisfy the assumptions of Theorem 5.2.

It remains to show that if (i) to (v) are valid, then E, X_0, X, X_1 and B satisfy the assumptions of Theorem 5.2. From δ_G < 1/p and γ < −(d − d/p) it follows that G satisfies Assumption 4.3. From (i) it follows that X ֒→ X_1 continuously. From (ii) it follows for δ = ½(γ_2 − γ_0) that δ > δ_G + 1/p. From (iii) it follows that X_0 ֒→ X continuously. From (iv) it follows that θ > 1 − p/q.

8. Some Auxiliary Results
The purpose of this section is twofold. First, we will summarize some results concerning the deterministic convolution process; here we will use results already shown in [27] and [10]. Secondly, we will state several results concerning the stochastic convolution process. We begin by introducing some notation.
Remark 8.1. Without loss of generality we can assume that A generates a semigroup of contractions on the underlying Banach space B. This is no restriction, since if A satisfies (H2-b), then there exists a number ν > 0 such that A − νI generates a semigroup of contractions. In this case λ appearing in the estimate (4.5) has to be chosen at least larger than ν.
Notation 8.1. For any Banach space Y and numbers q ∈ [1, ∞), λ ∈ R, we denote by IL^q_λ(R_+; Y) the space of (equivalence classes of) measurable functions for which the norm below is finite; it is a Banach space, see e.g. [20]. The compact sets of IL^q_λ(R_+; Y) can be characterized by means of the Sobolev space W^{α,p}_λ(R_+; Y), equipped with the norm (see also Appendix B)

Furthermore, we will denote by C(R_+; Y) the space of all continuous functions, which we equip with the topology induced on compact intervals; in particular, we consider the space of all continuous and bounded functions u ∈ C(R_+; Y) such that for any T > 0

Next, by D(R_+; Y) we will denote the space of all càdlàg functions u : [0, ∞) → Y. We equip D(R_+; Y) also with the topology induced on compact intervals. In particular, x_n → x in D(R_+; Y) iff for all T > 0,
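For orientation, the weighted norm behind the definition of IL^q_λ(R_+; Y) can be reconstructed from the integrability condition defining M^q_λ below; the placement of the exponential weight in the Sobolev seminorm is an assumption:

```latex
\|u\|_{\mathbb{L}^q_\lambda(\mathbb{R}_+;Y)}
  \;:=\; \Bigl(\int_0^\infty e^{-\lambda t}\,\bigl|u(t)\bigr|_Y^q\,dt\Bigr)^{1/q},
\qquad
\|u\|_{W^{\alpha,p}_\lambda(\mathbb{R}_+;Y)}^p
  \;:=\; \|u\|_{\mathbb{L}^p_\lambda(\mathbb{R}_+;Y)}^p
  \;+\; \int_0^\infty\!\!\int_0^\infty
        e^{-\lambda\max(t,s)}\,
        \frac{\bigl|u(t)-u(s)\bigr|_Y^p}{|t-s|^{1+\alpha p}}\,ds\,dt .
```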

Remark 8.2.
A known fact is that the Skorohod topology is weaker than the uniform topology (see e.g. [55, Lemma 6.8, p. 248]); that is, the embedding C(R_+; Y) ֒→ D(R_+; Y) is continuous.

In the part dealing with the stochastic convolution process we will also need the following notation.
Notation 8.2. For any Banach space Y and numbers q ∈ [1, ∞), λ ∈ R, by N(R_+; Y) we denote the space of (equivalence classes of) progressively measurable processes ξ : R_+ × Ω → Y, and by N^q_λ(R_+; Y) we denote the subspace of N(R_+; Y) consisting of those processes ξ for which, P-a.s.
By M^q_λ(R_+; Y) we denote the Banach space consisting of those ξ ∈ N(R_+; Y) for which E ∫_0^∞ e^{−λt} |ξ(t)|^q dt < ∞. By L^p(Ω; Y) we will denote the Banach space of (equivalence classes of) measurable functions ξ : (Ω, F) → Y such that E|ξ|^p_Y < ∞, see [20]. If X is a metric space, by L^0(Ω; X) we denote the set of measurable functions from (Ω, F) to X.
With q and E as above we set For more details we refer to [11].
If B is a UMD Banach space and A + νI, for some ν ≥ 0, satisfies the conditions (H2)-(H3), then, since Λ = B − νI + A + νI, by [30] and [34], Λ is a positive operator. In particular, Λ has a bounded inverse. The domain D(Λ) of Λ, endowed with the graph norm, is a Banach space. Before continuing, we present two results on the fractional powers of the operator Λ_T; see [10] for the proof. Assume also that for some ν ≥ 0, A + νI satisfies condition (H3).
Theorem 8.5. Assume that E is a UMD Banach space and that an operator A satisfying condition (H2) is such that A + νI, for some ν ≥ 0, satisfies condition (H3) as well. Assume that α ∈ (0, 1] and γ, δ ≥ 0 are such that δ ≥ γ and

Remark 8.6. In view of Theorem 8.5, Assumption 4.2 implies that for any p ∈ [1, ∞),

is a well defined bounded linear operator.
Corollary 8.7. Assume that the first set of assumptions of Theorem 8.5 is satisfied. Assume that three nonnegative numbers α, β, δ satisfy the following condition

Assume also that γ > λ/q. Then the operator Λ^{−α}_T :

is also compact. In particular, if α > 1/q and the operator (A + νI)^{−1} : E → E is compact, then the map Λ^{−α} : IL^q_λ(R_+; E) → C^γ(R_+; E) is compact.

8.2. The stochastic convolution. In this section, we will investigate the properties of the stochastic convolution operator defined, for an appropriate process ξ ∈ N_λ(R_+; L^p(Z, ν, B)), by the formula

Throughout the whole subsection we assume that the underlying Banach spaces are fixed. We assume that B is a Banach space of martingale type p. Furthermore, we assume that δ > δ_G + 1/p and we put E = D^B_A(δ, p). Let us recall Remark 4.3. First, the space E is invariant under the action of {e^{−tA}}_{t≥0}. Secondly, by [9, Theorem A.7] we know that D^B_A(δ, p) is also a Banach space of martingale type p. Therefore, {e^{−tA}}_{t≥0} restricted to E = D^B_A(δ, p) is again an analytic semigroup. We begin with the following preparatory propositions, the first of which follows from [14, Theorem 2.1].
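Written out, the stochastic convolution operator S of this subsection is presumably given by the following formula (a reconstruction consistent with formula (9.7) later in the text; whether the compensated measure or η itself appears is a matter of the convention of Section 2 and is an assumption here):

```latex
S(\xi)(t) \;=\; \int_0^t\!\!\int_Z e^{-(t-s)A}\,\xi(s;z)\;\tilde{\eta}(dz;ds),
\qquad t \in \mathbb{R}_+ ,
```

where the semigroup factor e^{−(t−s)A} smooths the integrand, which is what the regularity estimates of Propositions 8.8 and 8.9 below quantify.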
Proposition 8.8. Assume that the conditions (H1) and (H2) are satisfied. Assume also that there exists a ν ≥ 0 such that A + νI satisfies condition (H3). Assume that 0 ≤ ρ < 1/p − δ_G. Then there exists a constant C > 0 such that for any process ξ with A^{−δ_G} ξ ∈ M^p_λ(R_+; L^p(Z, ν; E)) and for any λ > 0, (8.13)

Proposition 8.9. Assume that the conditions (H1) and (H2) are satisfied. Assume also that for some ν ≥ 0, A + νI satisfies condition (H3). Then, for any α ∈ (0, 1/p − δ_G), θ ∈ [0, 1) and any R > 0, the stochastic convolution operator maps any set of the form

into a bounded subset of L^p(Ω; W^{α,p}_λ(R_+; E)). In particular, there exists a constant C < ∞ such that for all λ > 0 and ξ ∈ M^p_λ(R_+; L^p(Z, ν; B))

Proof of Proposition 8.9. Suppose that t > s and ξ ∈ N(R_+; L^p(Z, ν; B)). Calculations similar to those in the Gaussian case lead to the following identity

It is enough to show that each term on the RHS of the above equality satisfies inequality (8.14). Indeed, by employing the Fubini Theorem 2.6 we infer that

By [14, Theorem 2.1] we get the following estimate of the term S_1.
where β = (α + δ_G)p, which is smaller than one by the assumptions. In order to study the term S_2, let us recall, see [56, Theorem 5.2, Chapter 2], that there exists a C > 0 such that

Then the Young inequality for convolutions implies, for ρ > 0 with α < ρ <

Thus, by Proposition 8.8 we infer that

Now, by
the assertion follows.
Proof of Lemma 8.11. Let us fix λ > λ_0 > 0. First, let us recall that due to Remark C.2 the following inequality holds.
Hence, if rp′(1 − θ) = p, rp″θ = 1 and 1/p′ + 1/p″ = 1, then by the Hölder and Young inequalities, for all v ∈ N^q_λ(R_+; E) ∩ N^∞(R_+; B)

Hence, by (8.18), if r is as above, then for all ξ ∈ M^p_{λ_0}(R_+; L^p(Z, ν; B))

This concludes the proof of the first point of the lemma. Before proving the tightness of the laws of the set {S(ξ); ξ ∈ A}, let us recall that by Theorem B.2, for any E_0 ֒→ E compactly and λ > 0, IL^∞(R_+; B)^θ compactly. Therefore, by applying Propositions 8.8, 8.9 and 8.10, we infer that there exists a C < ∞ such that

Tightness of the laws of the family follows by the Chebyshev inequality and Theorem C.1.
Proof. Let us fix t > 0. As in the previous proof, by the Burkholder inequality the following sequence of calculations is verified.
In this part of the section, we will investigate the properties of the stochastic convolution term with respect to G. For this purpose let us put

Note that G(u) = S(ξ), where ξ(s; z) = G(s; u(s); z), (s, z) ∈ R_+ × Z, and S(ξ) is defined in formula (8.12) on page 34. Thus, in a non-rigorous way, the map G is a composition of the map G with the stochastic convolution operator S. In the next lines we will state three results which are important for the proof.

(ii) Fix λ > 0, α ∈ (0, 1/p − δ_G) and 0 < λ_0 < λ. Then the map

is bounded. To be precise, there exists a constant C < ∞ such that

(iii) there exists a constant C such that for all λ > 0

Proof. In order to show (i), (ii) and (iii), we put ξ(t, z) = G(t, u(t), z) for (t, z) ∈ R_+ × Z in Propositions 8.8, 8.9 and 8.10. Then, by Assumption 4.3, E ∫_Z |A^{−δ_G} ξ(s; z)|^p ν(dz) ≤ R_G and therefore

This concludes the proof of (i), (ii) and (iii).
The next corollary, although a simple consequence of the previous one, is important in its own right.
Corollary 8.14. Assume that the conditions (H1), (H2) are satisfied. Assume also that for some ν ≥ 0, A + νI satisfies condition (H3). Assume that q ∈ [p, ∞) and θ = 1 − p/q, and put X to be the complex interpolation space (E, B)_θ.

9. Proof of Theorem 4.5

In this section we will use the notation introduced in Theorem 4.5 and within Section 8. In particular, throughout this section, p ∈ (1, 2], δ_I, δ_G and δ_F are fixed numbers, B is a Banach space of martingale type p, δ > max(0, δ_F − 1 + 1/p, δ_G + 1/p, δ_I), and E = D^B_A(δ, p). Due to Remark 4.3 it is sufficient to assume that B is of martingale type p. Moreover, we will use the notation given in Paragraphs 8.1 and 8.2 on pages 30 and 31. Now, we will begin the proof of Theorem 4.5 by first defining a certain sequence of processes. Consider a sequence {x_n} ⊂ E such that A^{−δ_I} x_n → A^{−δ_I} x in E as n → ∞. Define a function φ_n : [0, ∞) → [0, ∞) by φ_n(s) = k/2^n if k ∈ N and k/2^n ≤ s < (k+1)/2^n, i.e. φ_n(s) = 2^{−n}[2^n s], s ≥ 0, where [t] denotes the integer part of t ≥ 0. Let us define a sequence {u_n} of adapted E-valued processes by u_n(t) = e^{−tA} x_n +

Note that û_n is a progressively measurable, piecewise constant, E-valued process. Between the grid points, equation (9.1) is linear; therefore, u_n is well defined for all n ∈ N. Secondly, we define a sequence of Poisson random measures {η_n | n ∈ N} by putting η_n = η for all n ∈ N. Note that, since M_N(Z × R_+) is a separable metric space, by Theorem 3.2 of [55] the laws of the family {η_n, n ∈ N} are tight on M_N(Z × R_+).
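The approximating scheme (9.1) presumably takes the following form (a reconstruction; the use of the compensated measure in the last term is an assumption):

```latex
u_n(t) \;=\; e^{-tA}x_n
  \;+\; \int_0^t e^{-(t-s)A}\,F\bigl(s,\hat{u}_n(s)\bigr)\,ds
  \;+\; \int_0^t\!\!\int_Z e^{-(t-s)A}\,G\bigl(s,\hat{u}_n(s);z\bigr)\,\tilde{\eta}(dz;ds),
\qquad t \in \mathbb{R}_+ ,
```

where û_n(t) := u_n(φ_n(t)) = u_n(2^{−n}[2^n t]) freezes the argument at the left grid point, so that between grid points the equation is indeed linear in u_n.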
The proof of Theorem 4.5 will be divided into several steps. The first two steps are the following.
Step (I) The laws of the family {u n , n ∈ N} are tight on IL p λ (R + ; E); Step (II) The laws of the family {u n , n ∈ N} are tight on D(R + ; B).
We postpone the proofs of Step (I) and Step (II) to the end of this chapter and suppose for the time being that the proofs of those two steps have been accomplished. Hence, there exists a subsequence of {(u_n, η_n), n ∈ N}, again denoted by {(u_n, η_n), n ∈ N}, and there exists a Borel probability measure μ∗ on (D(R_+; B) ∩ IL^p_λ(R_+; E)) × M_N(Z × R_+) such that L(u_n, η_n) → μ∗ weakly.
and η̄_n = η∗ for all n ∈ N. Later on we will need the following fact.
Step (III) The following holds:

Again, suppose for the time being that the proof of Step (III) has been accomplished. Let F̄ = (F̄_t)_{t≥0} be the filtration generated by the Poisson random measure η∗, the processes {ū_n, n ∈ N} and the process u∗. The next two steps imply that the following two integrals over the filtered probability space (Ω̄, F̄, F̄, P̄)

Step (IV) The following holds: (i) for every n ∈ N, η̄_n is a time homogeneous Poisson random measure on B(Z) × B(R_+) over (Ω̄, F̄, F̄, P̄) with intensity measure ν; (ii) η∗ is a time homogeneous Poisson random measure on B(Z) × B(R_+) over (Ω̄, F̄, F̄, P̄) with intensity measure ν.

Step (V) The following holds: (i) for all n ∈ N, ū_n is an F̄-progressively measurable process; (ii) the process u∗ is an F̄-progressively measurable process.
Again, let us assume, for the time being, that Steps (IV) and (V) have been established. For clarity, we will now introduce an operator K. Let μ be a time homogeneous Poisson random measure over (Ω̄, F̄, F̄, P̄) with intensity measure ν, let v be a process over (Ω̄, F̄, F̄, P̄) such that

and let x ∈ B. Then we put

Here, as usual, μ̄ denotes the compensated Poisson random measure of μ.
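Given the mild-solution formulation of equation (4.1), the operator K is presumably of the following form (a sketch; the three-term structure matches the decomposition u_n = v_n + Λ^{−1}f_n + e^{−·A}x_n used later in this section):

```latex
K(x,v,\mu)(t) \;=\; e^{-tA}x
  \;+\; \int_0^t e^{-(t-s)A}\,F\bigl(s,v(s)\bigr)\,ds
  \;+\; \int_0^t\!\!\int_Z e^{-(t-s)A}\,G\bigl(s,v(s);z\bigr)\,\bar{\mu}(dz;ds),
\qquad t \in \mathbb{R}_+ .
```

With this notation, a martingale solution is exactly a fixed point of v ↦ K(x, v, μ).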
∀t ∈ R + , which will be shown in the last step.
Since the solution to (4.1) is a fixed point of the operator K, it follows from Step (VIII) that u∗ is a martingale solution to equation (4.1). The assertion (4.5) follows from Step (III). Therefore, in order to prove Theorem 4.5 we have to verify Step (I) to Step (VIII), which will be done in the following. Before doing so, let us state the following two auxiliary results.
Proposition 9.1. For n ∈ N, let g n be defined by g n (t) := e −tA x n , t ≥ 0.
Then (i) The family {g n , n ∈ N} is precompact in IL p λ (R + ; E). (ii) The family {g n , n ∈ N} is precompact in C(R + ; B).
Proof of Proposition 9.1. Theorem 5.2-(d) of [56] implies that for any t_0 > 0 the family {g_n, n ∈ N} is precompact in IL^p_λ([t_0, ∞); E) and in C([t_0, ∞); B). Therefore, the critical point is at t = 0. Since A^{−δ_I} x_n → A^{−δ_I} x in E and δ_I < 1, the Young inequality for convolutions implies (i). Since δ > δ_I, the convergence x_n → x in B implies (ii).

Proposition 9.2.
(i) For any γ ∈ (0, 1/p − δ_G), the set {A^γ û_n, n ∈ N} is bounded in M^p_λ(R_+; E); (ii) there exist θ > 0 and C < ∞ such that

Proof of Proposition 9.2. First, we will show part (i). Note that, since

Let us put s^n_k := k/2^n, k, n ∈ N. Then

Since, by Corollary 8.13-(i) and Remark 8.6, the set {A^γ u_n, n ∈ N} is bounded in M^p_λ(R_+; E), assertion (i) follows by Assumption 4.4.
To prove part (ii), let p′ be the conjugate exponent of p, i.e. 1/p + 1/p′ = 1. Let us put ρ = α + 1/p. Then, by the Hölder inequality,

By applying Corollary 8.13-(ii) we can conclude part (ii).

Proof of Step (I). Set
and v_n(t) = ∫_0^t ∫_Z e^{−(t−s)A} G(s, û_n(s); z) η(dz; ds), t ∈ R_+. (9.7) By Proposition 9.2-(i), the family {û_n, n ∈ N} is bounded in M^p_λ(R_+; E). Therefore, in view of Assumption 4.2, we infer that the family {A^{−δ_F} f_n | n ∈ N} is bounded in IL^p_λ(R_+; E). Hence, by Theorem 2.6 of [11], the laws of the family {Λ^{−1} f_n | n ∈ N} are tight on IL^p_λ(R_+; E). Again, by Proposition 9.2-(i), the family {û_n, n ∈ N} is bounded in M^p_λ(R_+; E). Hence, by Corollary 8.13, we infer that the laws of the family {v_n | n ∈ N} are tight on IL^p_λ(R_+; E).
Finally, from Proposition 9.1 follows that the family of functions {e −·A x n , n ∈ N} is precompact in IL p λ (R + ; E). Since u n = v n + Λ −1 f n + e −·A x n , n ∈ N, we conclude that the laws of the family {u n | n ∈ N} are tight on IL p λ (R + ; E).

Proof of
Step (II). Again, we use the notation introduced in formulae (9.5), (9.6) and (9.7). By Proposition 9.2-(i), the family {û_n, n ∈ N} is bounded in M^p_λ(R_+; E), and, by Assumption 4.2, we infer that the family {A^{−δ_F} f_n | n ∈ N} is bounded in M^p_λ(R_+; E). Now, for any δ > max(0, δ_F − 1 + 1/p), Corollary 8.7 implies that the laws of the family {Λ^{−α} A^{−δ} Λ^{1−α} f_n | n ∈ N} are tight on C(R_+; E). Since the linear operators A and Λ commute, we infer that the laws of the family {Λ^{−1} f_n | n ∈ N} are tight on C(R_+; B). Since the embedding C(R_+; B) ֒→ D(R_+; B) is continuous (see Remark 8.2), it follows that the laws of the family {Λ^{−1} f_n, n ∈ N} are tight on D(R_+; B).
By Proposition 9.2-(i), the family {û n , n ∈ N} is bounded in M p λ (R + ; E). Therefore, it satisfies the assumptions of Corollary 8.15. Hence, we infer that the laws of the family {v n , n ∈ N} are tight on D(R + ; B).
Finally, by Proposition 9.1 we infer that the family of functions {e^{−·A} x_n, n ∈ N} is precompact in D(R_+; B). Since u_n = v_n + Λ^{−1} f_n + e^{−·A} x_n, Proposition VI.1.23 of [41] gives that the laws of the family {u_n, n ∈ N} are tight on D(R_+; B).

Proof of
Step (III). Let us recall that, by our construction which used the Skorohod embedding theorem, the laws of u_n and ū_n on IL^p_λ(R_+; E) are identical for any n ∈ N. Hence, ‖u_n‖_{M^p_λ(R_+;E)} = ‖ū_n‖_{M^p_λ(R_+;E)}. By Proposition 9.2-(i), the family {û_n, n ∈ N} is bounded in M^p_λ(R_+; E). Now, by Proposition 8.3 and Corollary 8.13-(i), we infer that sup_n ‖u_n‖_{M^p_λ(R_+;E)} < ∞. This proves part (i).
Part (ii) of Step (III) follows from part (i) of Step (III), from the Lebesgue dominated convergence theorem and from the fact that, P̄-a.s., ū_n → u∗.

9.4. Proof of Step (IV). Before proving (i) and (ii), let us first recall that the modified version of the Skorohod embedding theorem, i.e. Theorem E.1, implies that η̄_n(ω) = η∗(ω) for all ω ∈ Ω̄ and n ∈ N. Secondly, let us recall that F̄ = (F̄_t)_{t≥0}, where F̄_t for t > 0 is defined by

Here, for a process v = {v(t)}_{t≥0} and for any interval I ⊂ R_+, we define

For any Poisson random measure ρ ∈ M_N(S × R_+) and for any A ∈ S, let us define an N-valued process (N_ρ(t, A))_{t≥0}. In addition, we denote by (N_ρ(t))_{t≥0} the measure valued process defined by

By Definition 2.2, an element ρ of M_N(S × R_+) is a time homogeneous Poisson random measure over (Ω, F, (F_t)_{t≥0}, P) with intensity ν iff (a) for any A ∈ S, the random variable N_ρ(t, A) is Poisson distributed with parameter t ν(A); (b) for any disjoint sets A_1, A_2 ∈ S and any t ∈ R_+, the random variables N_ρ(t, A_1) and N_ρ(t, A_2) are independent; (c) the measure valued process (N_ρ(t))_{t≥0} is adapted to (F_t)_{t≥0}; (d) for any A ∈ S the increments of (N_ρ(t, A))_{t≥0} are independent of (F_t)_{t≥0}, i.e. for any t ∈ R_+, A ∈ S and any r, s ≥ t, the random variable N_ρ(r, A) − N_ρ(s, A) is independent of F_t.

Proof of Step (IV)-(i):
Fix n ∈ N. Since η̄_n(ω) = η∗(ω) = η̄_m(ω) for all ω ∈ Ω̄ and m ∈ N, it is sufficient to show that η̄_n satisfies properties (a), (b), (c) and (d) listed above. The properties (a) and (b) can be characterised by the law of η̄_n on M_N(Z × R_+). In particular, (a) holds if (N_{η̄_n}(t, A)) is Poisson distributed with parameter t ν(A). Since η_n is a time homogeneous Poisson random measure with intensity ν, N_{η_n}(t, A) is a Poisson distributed random variable with parameter t ν(A). Hence N_{η̄_n}(t, A) is also a Poisson distributed random variable with parameter t ν(A). This proves equality (9.9). Similarly, (b) holds iff for any two disjoint sets A_1, A_2 ∈ S,

E e^{i(θ_1 N_{η̄_n}(t,A_1) + θ_2 N_{η̄_n}(t,A_2))} = E e^{i θ_1 N_{η̄_n}(t,A_1)} E e^{i θ_2 N_{η̄_n}(t,A_2)}. (9.10)

Again, since Law(η̄_n) = Law(η_n) on M_N(S × R_+), we infer that (9.10) holds. In order to show that η̄_n is a time homogeneous Poisson random measure over (Ω̄, F̄, F̄, P̄), one needs to show that η̄_n satisfies properties (c) and (d). Since η̄_n(ω) = η∗(ω) for all ω ∈ Ω̄, it follows for F̄ = (F̄_t)_{t≥0} that η̄_n is clearly adapted to F̄. Moreover, η̄_n is a Poisson random measure and hence independently scattered. Hence, for any t_0 ∈ R_+, N_{η̄_n}(t_0) (:= {S ∋ A ↦ N_{η̄_n}(t_0, A) ∈ N}) is independent of N_{η̄_m}(r) − N_{η̄_m}(s) for all r, s > t_0. Let us recall that the filtration F̄ is generated not only by η∗ but also by the family {ū_m, m ∈ N} and u∗. Therefore, we have to show that for any t_0 ≥ 0 the processes 1_{[0,t_0]} ū_m, m ∈ N, and 1_{[0,t_0]} u∗ are independent of the increments of N_{η̄_m} after time t_0, or, in other words, independent of the process t ↦ 1_{(t_0,∞)} N_{η̄_m}(t + t_0) − N_{η̄_m}(t_0). Recall that Law((ū_m, η̄_n)) = Law((u_m, η_n)), and therefore Law((1_{[0,t_0]} ū_m, η̄_n)) = Law((1_{[0,t_0]} u_m, η_n)). Similarly,

and, hence,

Let us observe that, due to the construction of the process u_m, the process 1_{[0,t_0]} u_m is adapted to the σ-algebra generated by η_m.
Moreover, the random variable 1_{[0,t_0]} u_m and the process 1_{(t_0,∞)} N_{η_m}(t + t_0) − N_{η_m}(t_0) are independent. Since independence can be expressed in terms of the law, 1_{[0,t_0]} ū_m and 1_{(t_0,∞)} N_{η̄_n}(t + t_0) − N_{η̄_n}(t_0) are independent. It remains to show that the random variable 1_{[0,t_0]} ū∗ is independent of the process 1_{(t_0,∞)} N_{η̄_n}(t + t_0) − N_{η̄_n}(t_0). But this is the subject of the next lemma.

Lemma 9.3. Let B be a Banach space and let z and y∗ be two B-valued random variables over (Ω, F, P). Let {y_n, n ∈ N} be a family of B-valued random variables over the probability space (Ω, F, P) such that y_n → y∗ weakly, i.e. for all φ ∈ B∗, E e^{i⟨φ, y_n⟩} → E e^{i⟨φ, y∗⟩}. If for all n ≥ 1 the two random variables y_n and z are independent, then z is also independent of y∗.
The weak convergence and the independence of z and y n for all n ∈ N justify the following chain of equalities.
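The chain can be reconstructed as follows, assuming (as in the application via the Skorohod embedding theorem) that y_n → y∗ also holds almost surely, so that dominated convergence applies to the joint expectation:

```latex
\mathbb{E}\, e^{\,i\langle \varphi, y_*\rangle + i\langle \psi, z\rangle}
 \;=\; \lim_{n\to\infty} \mathbb{E}\, e^{\,i\langle \varphi, y_n\rangle + i\langle \psi, z\rangle}
 \;=\; \lim_{n\to\infty} \mathbb{E}\, e^{\,i\langle \varphi, y_n\rangle}\,
        \mathbb{E}\, e^{\,i\langle \psi, z\rangle}
 \;=\; \mathbb{E}\, e^{\,i\langle \varphi, y_*\rangle}\,
        \mathbb{E}\, e^{\,i\langle \psi, z\rangle},
\qquad \varphi, \psi \in B^* .
```

Thus the joint characteristic functional of (y∗, z) factorizes, which is equivalent to the independence of y∗ and z.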

Proof of Step (IV)-(ii):
We have to show that η∗ ∈ M_I(S × R_+) is a Poisson random measure with intensity ν, i.e. that it satisfies the properties (a), (b), (c) and (d) from above. But, since η∗(ω) = η̄_1(ω) for all ω ∈ Ω̄, this follows from Step (IV)-(i).

9.5. Proof of Step (V). Here we have to show that, for each n ∈ N, ū_n and u∗ are F̄-progressively measurable. For fixed n ∈ N, the process ū_n is adapted to F̄ by the definition of F̄. By Step (III) the process ū_n is bounded in M^p_λ(R_+; E); hence, there exists a sequence of simple functions ū^m_n, m ∈ N, such that ū^m_n → ū_n as m → ∞ in M^p_λ(R_+; E). In particular, we can consider the approximation by the shifted Haar projections used in [16]. It follows that ū_n is progressively measurable, i.e. (i) holds. Since ū^m_n → ū_n as m → ∞ in M^p_λ(R_+; E) and ū_n → u∗ as n → ∞ also in M^p_λ(R_+; E), it follows that u∗ is a limit in M^p_λ(R_+; E) of progressively measurable step functions. In particular, u∗ is also progressively measurable, i.e. (ii) holds.

9.6. Proof of Step (VI). Before proving (VI) we will make some preparations; that is, we will verify that we can replace ū_n and η̄_n by u_n and η_n. First, recall that the family {(u_n, η_n), n ∈ N} on (Ω, F, P) and the family {(ū_n, η̄_n), n ∈ N} on (Ω̄, F̄, P̄) have equal laws on IL^p_λ(R_+; E) × M_N(Z × R_+). Secondly, recall that we know how u_n is constructed. Let us define a mapping G by

(G(u))(s) = 2^n ∫_{φ_n(s)−2^{−n}}^{φ_n(s)} u(r) dr, if s ≥ 2^{−n}.

Note that G is a bounded linear map from IL^p_λ(R_+; E) into itself (see [15]). Therefore, for any n ∈ N, the two triplets of random variables (u_n, η_n, û_n) and (ū_n, η̄_n, ȗ_n), where û_n = G(u_n) and ȗ_n = G(ū_n), have equal laws on IL^p_λ(R_+; E) × M_N(Z × R_+) × IL^p_λ(R_+; E). Let us define processes z_n and ẑ_n by

z_n(t) := e^{−tA} x_n +

Let us also define processes z̄_n and z̆_n by replacing u_n and û_n by ū_n and ȗ_n in formulae (9.11) and (9.12).
According to Steps (III) and (IV), the integrals in (9.11) and (9.12) with respect to the Poisson random measures η_n and η̄_n can be interpreted as Itô integrals. Due to [16, Theorem 1] and the continuity of G, i.e. Assumption 4.3, for any n ∈ N the quintuples of random variables (u_n, η_n, û_n, z_n, ẑ_n) and (ū_n, η̄_n, ȗ_n, z̄_n, z̆_n) have equal laws on IL^p_λ(R_+; E) × M_N(Z × R_+) × IL^p_λ(R_+; E) × IL^p_λ(R_+; E) × IL^p_λ(R_+; E). In addition, since Law((u_n, z_n)) is supported by the diagonal, Law((ū_n, z̄_n)) is also supported by the diagonal. Since the diagonal is a Borel set, P̄(ū_n = z̄_n) = 1 and, hence, we infer that

After these preparations we can start with the actual proof of (VI) by showing that for any ̺ > 0 there exists a natural number n̄ ∈ N such that

‖u_n − z_n‖^p_{M^p_λ(R_+;E)} ≤ ̺, n ≥ n̄. (9.13)

To show (9.13), fix ̺ > 0. Since, by the definition of η_n, we have η_n(ω) = η(ω) for all ω ∈ Ω, we write in the following η instead of η_n. The definition of u_n and z_n gives

where C is a generic positive constant. We will first deal with S^n_1. By the Young inequality and Assumption 4.2 on F we get

For λ > 0, let P_λ = m_λ ⊗ P be the product measure on R_+ × Ω, where m_λ is a weighted Lebesgue measure defined by m_λ(A) = λ ∫_A e^{−λt} dt, A ∈ B(R_+). (9.14) By E_λ we will denote the expectation with respect to P_λ. Then, by the Fubini theorem, it follows that for any γ ≥ 0

Then we choose a constant K > 0 such that

and define, for each n ∈ N, a set

By Hypothesis (H2-a), the operator A^{−1} is compact; therefore A^{−γ} is also compact, and by Assumption 4.2 the map F is uniformly continuous on B_γ(0, K), where B_γ(0, K) is a ball of radius K in the space D(A^γ_E). In particular, there exists a constant ς > 0 such that if |x − y| ≤ ς, x, y ∈ B_γ(0, K), then |F(s, x) − F(s, y)|^p_{−δ_F} ≤ ̺/6. Let us define, for each n ∈ N, a second auxiliary set by Q^n_ς := {(t, ω) ∈ R_+ × Ω such that |û_n(t, ω) − u_n(t, ω)| ≤ ς}.
By the definitions of û_n and u_n given in (9.1) and (9.2), we infer that

Since F is bounded by Assumption 4.2, we get

By applying the Chebyshev inequality and the definition of D^n_K, we infer that

where R satisfies (9.16). Therefore, by the choice of K, we obtain

In order to handle the second integral, we note that

Now we can proceed as before. By Assumption 4.2 and the Chebyshev inequality, we get

Since, by Proposition 9.2, û_n − u_n → 0 in M^p_λ(R_+; E), there exists a number n̄_1 such that

for all n ≥ n̄_1.
It remains to analyse the term

By the definition of the sets D^n_K and Q^n_ς and by the choice of K and ς, it follows that

Summing up, we have proved that there exists a natural number n̄_1 ≥ 0 such that

The term S^n_2 can be dealt with in exactly the same way; only the conditions on K and ̺ have to be altered. To be precise, we first note that

Let R again be a constant satisfying (9.16) and let K > 0 be chosen such that

is satisfied. Then there exists a ς > 0 such that if |x − y| ≤ ς, x, y ∈ B_γ(0, K), then

Now, proceeding in the same way as above, it can be shown that there exists an n̄ ≥ n̄_1 such that

holds for n ≥ n̄. (9.18)

9.7. Proof of Step (VII). In Steps (III) and (IV) we showed that u∗ is a progressively measurable process on (Ω̄, F̄, F̄, P̄) and that η∗ is a time homogeneous Poisson random measure. Hence, we can define a process u by

Here, the stochastic integral in the last term is interpreted as an Itô integral. We will prove that there exists a p_0 ∈ (1, p] such that

Observe that for any n_0 ∈ N

From the previous construction invoking the Skorohod embedding theorem, we infer that (see (9.3))

lim_{n→∞} (ū_n, η̄_n) = (u∗, η∗) in (IL^p_λ(R_+; E) ∩ D(R_+; B)) × M_N(Z × R_+), P̄-a.s.

Next, by the Young inequality we infer that
where C = ∫_0^∞ e^{−λs} |A^{δ_F} e^{−sA}| ds. By Assumption 4.2 and since ū_n, u∗ ∈ IL^p_λ(R_+; E) a.s., there exists a C such that

∫_0^∞ e^{−λs} |A^{−δ_F} F(s, ū_n(s))|^p ds ≤ C.

Lebesgue's dominated convergence theorem and the continuity of F (see Assumption 4.2) give

To prove convergence of S^{n,n_0}_2, we proceed in a similar way as for S^n_1. In particular, applying the Burkholder inequality and the Fubini theorem, as well as (9.3), gives (9.19)

where C = ∫_0^∞ e^{−λs} |A^{δ_G} e^{−sA}|^p ds. By Assumption 4.3 there exists a C > 0 such that, λ ⊗ P̄-a.e.,

sup_n ∫_Z e^{−λs} |A^{−δ_G} G(s, ū_n(s); z)|^p ν(dz) ≤ C. (9.20)

Hence, and since the RHSs of (9.19) and (9.20) are independent of n_0, by the Lebesgue dominated convergence theorem we infer that for all ε > 0 there exists n_2 ∈ N such that S^{n,n_0}_2 ≤ ε for all n ≥ n_2 and all n_0 ∈ N.
It follows that P̄-a.s., for all t ≥ 0, the identity K(x, u∗, η∗)(t) = u∗(t) holds.

10. Proof of Theorem 5.2

In the main result of this section we replace the boundedness assumption on F by the dissipativity of the drift −A + F. Before we proceed, let us state (and prove) the following important consequence of Assumption 5.1.
In this section we show the existence of a martingale solution to equation (4.1). First, let us notice that Assumption 5.1 implies that

Proof of Theorem 5.2. Without loss of generality we assume that ν = 0. Let (F_n)_{n∈N} be a sequence of functions from [0, ∞) × X to X given by Assumption 5.2. In particular, there exists a sequence (R^n_F)_{n∈N} of positive numbers such that |F_n(s, y)|_X ≤ R^n_F for all (s, y) ∈ [0, ∞) × X, n ∈ N, and |F_n(s, x) − F(s, x)|_B → 0 as n → ∞ for all (s, x) ∈ [0, ∞) × X. For n ∈ N, let F^E_n be the restriction of F_n to [0, ∞) × E. Since E ֒→ X continuously, F^E_n : [0, ∞) × E → X is also separately continuous with respect to both variables and bounded by R^n_F. Finally, since X ֒→ X_1 continuously, the function F^E_n : [0, ∞) × E → X_1 satisfies all the assumptions of Theorem 4.5. In view of that theorem, there exists a martingale solution on B to the following problem

Let us denote this martingale solution by (Ω_n, F_n, P_n, F_n, {η_n(t, z)}_{t≥0, z∈Z}, {u_n(t)}_{t≥0}).
In view of the definition, for each n ∈ N, u_n is a mild solution on B over the probability space (Ω_n, F_n, F_n, P_n). In particular, u_n(t) = e^{−tA} x + z_n(t) +

Notice that z_n(t) = u_n(t) − v_n(t) − e^{−tA} x, t ∈ R_+. Similarly to the proof of Theorem 4.5, the proof of Theorem 5.2 will be divided into several steps. The first two steps are the following.
Step (I) There exist a number r > 0 and q̄ > q such that

Step (II) The laws of the family {z_n, n ∈ N} are tight on C(R_+; X). In particular, for any T > 0 there exists a number r > 0 such that

Finally, in order to use the Skorohod embedding theorem, we also need the following.
Step (III) The laws of the family {v n , n ∈ N} are tight on D(R + ; B).
Step (IV) The laws of the family {η n , n ∈ N} are tight on M I (Z ×R + ).
The next two steps imply that the two Itô integrals appearing below are well defined over the filtered probability space (Ω̂, F̂, F̂, P̂).

Step (V) The following holds:
(i) for all n ∈ N, η̂_n is a time homogeneous Poisson random measure on B(Z) × B(R_+) over (Ω̂, F̂, F̂, P̂) with intensity measure ν;
(ii) η* is a time homogeneous Poisson random measure on B(Z) × B(R_+) over (Ω̂, F̂, F̂_t, P̂) with intensity measure ν.

Step (VI) The following holds:
(i) for all n ∈ N, the processes v̂_n and ẑ_n are progressively measurable with respect to (F̂_t)_{t≥0};
(ii) the processes z* and v* are progressively measurable with respect to (F̂_t)_{t≥0}.

Here u* = z* + v*, and the map K is again, with a slight abuse of notation, defined by the same formula as before. To prove the equality (10.7) it is sufficient to show the corresponding convergence on [0, T] for any T > 0. To do this, we first write
u*(t) − K(x, u*, η*)(t) = (u*(t) − ẑ_n(t) − v̂_n(t)) + (ẑ_n(t) + v̂_n(t) − K_n(x, ẑ_n + v̂_n, η̂_n)(t)) + (K_n(x, ẑ_n + v̂_n, η̂_n)(t) − K(x, u*, η*)(t)),
where K_n is the operator defined by the same formula as the operator K but with F replaced by F_n. Equality (10.7) will be shown by the four following steps.
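With the same caveat about notation, the map K referred to above can be sketched as follows (the exact form of F and G is as fixed earlier in the paper):

```latex
K(x,u,\eta)(t) \;:=\; e^{-tA}x
 \;+\; \int_0^t e^{-(t-s)A}\, F\bigl(s,u(s)\bigr)\,ds
 \;+\; \int_0^t\!\!\int_Z e^{-(t-s)A}\, G\bigl(s,u(s-);z\bigr)\,\tilde{\eta}(ds,dz),
 \qquad t\ge 0,
```

and K_n is obtained from K by replacing F with F_n.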
Step (IX) There exists an r > 0 such that for any T > 0 the required uniform estimate holds. With these four steps Theorem 5.2 is shown.
It remains to prove Steps (I) to (IX), which is done in the following.

Proof of Step (I).
Step (I) is a simple consequence of Corollary 8.13, Corollary 8.14, inequality (8.20), and the continuity of the embedding X_0 ↪ X.

Proof of Step (II). By Lemma 10.1 we infer that for any T ≥ 0 the stated estimate holds. Hence, we have proved the second part of Step (II). The last inequality also implies that sup_n E |z_n|^r_{L^q(R_+; X)} < ∞. Hence, by the identity z_n = Λ^{−1}(v_n + z_n) and Corollary 8.7 we infer that the laws of the family {z_n | n ∈ N} are tight on C(R_+, X). Consequently, since the embedding X ↪ B is continuous, they are tight on C(R_+, B). Thus, we have finished the proof of Step (II).
Proof of Step (III). By Corollary 8.14 and Corollary 8.15 the laws of the family {v n | n ∈ N} are tight on L p (R + , E) ∩ D(R + , B). Thus, we have finished the proof of Step (III).

Proof of Step (IV). Since E_n η_n(A, I) = E η(A, I) for all A ∈ B(Z) and I ∈ B(R_+), the laws of the random variables η_n, n ∈ N, are identical on M_N(Z × R_+). Therefore, in view of Theorem 3.2 of [55], the laws of the family {η_n, n ∈ N} are tight on M_N(Z × R_+). Thus, we have finished the proof of Step (IV).

Proof of Step (VII). Since ẑ_n, n ∈ N, and z* are continuous B-valued functions, we will first show that there exists an r > 0 such that a uniform bound holds. Next, we will show the analogous bound for a (possibly different) r > 0. Next, we investigate the limit of the second term δ(v̂_n, v*). Let us note that, since δ(v̂_n, v*) → 0 P-a.s., it follows from the definition of the Prokhorov metric that there exists a sequence of functions λ_n ∈ Λ along which the corresponding uniform convergence holds. In particular, for any n ∈ N the convergence holds in probability. Since by Step (I) there exists an r > 0 such that sup_{0≤t≤T} |v̂_n(t)|^r is uniformly integrable, the family {sup_{0≤t≤T} |v̂_n ∘ λ_n(t)|^r, n ∈ N} is uniformly integrable. Hence, by uniform integrability, the convergence of the corresponding expectations follows.

Proof of Step (VIII). In order to prove Step (VIII), let us recall that the laws of (ẑ_n, v̂_n, η̂_n) and (z_n, v_n, η_n) are equal on C(R_+; X) × (L^p(R_+, E) ∩ D(R_+, B)) × M_N(Z × R_+) and, moreover, the process u_n = z_n + v_n is a martingale solution to Problem (10.4). Hence, |u_n − K_n(x, u_n, η_n)|_{M^p_λ(R_+, E)} = 0.
In particular, the identity holds on [0, T] for any T > 0. Therefore, by the main result of [16] we can conclude the corresponding convergence for any T > 0. Thus, we have finished the proof of Step (VIII).

Proof of Step (IX). First we split the difference into three summands. The aim is to tackle the first summand similarly to Step (VII) and the last summand by the Lebesgue dominated convergence theorem; note that, according to Proposition A.1, the second summand is zero. We begin with the first summand, from which we infer the required convergence. The second summand is zero by Proposition A.1. In order to tackle the last summand, we apply Corollary 8.13-(iii). Again, by the Lebesgue dominated convergence theorem, almost sure convergence implies convergence of the expectations. Note here that, in contrast to F (cf. Assumption 4.3), the function G is bounded.
Appendix A. Stochastic integration – a useful result

As seen in [14], the stochastic integral is defined by extension from simple functions to progressively measurable integrands. In the same way we will show the following proposition.
Proposition A.1. Assume that (S, S) is a measurable space, ν is a non-negative measure on (S, S) and P = (Ω, F, (F_t)_{t≥0}, P) is a filtered probability space. Assume also that η_1 and η_2 are two time homogeneous Poisson random measures over P with intensity measure ν. Assume that p ∈ (1, 2], E is a martingale type p Banach space and ξ ∈ M^p(0, ∞; L^p(S, ν; E)). If P-a.s. η_1 = η_2 on M_N(S, R_+), then for any t ≥ 0 the stochastic integrals of ξ with respect to η_1 and η_2 coincide P-a.s.

Proof of Proposition A.1. In order to show the equality, we will go back to the definition of the stochastic integral by step functions. First, put I = (a, b]. We may suppose that f is a step function with A_{ji} ∈ F_a and B_i ∈ S. For fixed i we may suppose that the finite family of sets {A_{ji} × B_i, j = 1, . . . , J} is pairwise disjoint and that the family of sets {B_i, i = 1, . . . , I} is also pairwise disjoint with ν(B_i) < ∞. Let us notice that for i = 1, . . . , I the corresponding integrals agree. Since P-a.s. η_1 = η_2 on M_N(S, R_+), it follows that a.s. ∫_S f(x) η_1(dx, I) = ∫_S f(x) η_2(dx, I). It remains to investigate the limit, as done in [14, Appendix C]. But if x_n = y_n for all n ∈ N and x_n → x, y_n → y in a Banach space X, then x = y.
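For illustration, assuming (purely for notation) that the step function has the form below, the asserted equality reduces to an identity between the two random measures:

```latex
f \;=\; \sum_{i=1}^{I}\sum_{j=1}^{J} x_{ij}\,\mathbf{1}_{A_{ji}}\,\mathbf{1}_{B_i},
\qquad x_{ij}\in E,\quad A_{ji}\in\mathcal{F}_a,\quad B_i\in\mathcal{S},
```

so that, for k = 1, 2,

```latex
\int_S f(x)\,\eta_k(dx, I) \;=\; \sum_{i=1}^{I}\sum_{j=1}^{J} x_{ij}\,\mathbf{1}_{A_{ji}}\,\eta_k(B_i \times I),
```

and the assumption η_1 = η_2 on M_N(S, R_+) yields the P-a.s. equality of the right-hand sides.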

Appendix B. A Compactness result
Let B be a separable Banach space and let I ⊂ R_+ be an interval. Let λ > 0, p ∈ (1, ∞) and α ∈ (0, 1) be given, and let X be the space appearing in the following result.
Theorem B.2. Let B_0 ⊂ B ⊂ B_1 be Banach spaces, B_0 and B_1 reflexive, with compact embedding of B_0 in B. Let λ > 0 and let p ∈ (1, ∞) and α ∈ (0, 1) be given. Let X be the space above; then bounded sets of X are relatively compact in L^p_λ(R_+; B).

Proof. The proof is similar to the proofs in Flandoli and Gatarek [37] and Dunford and Schwartz [32, Chapter IV.8.20]. Let us first define a family of operators J_a on L^p(R_+; B_i), i = 0, 1, a > 0, by the averaging formula, where we put f(t) := 0 for t < 0. First, note that for any a > 0, J_a maps L^p(R_+; B_i) to C(R_+; B_i), i = 0, 1. Taking into account the finiteness of the measure B(R_+) ∋ A ↦ ∫_A e^{−tλ} dt, by [32, Theorem 18, Chapter IV.8, p. 297] a bounded set K of L^p_λ(R_+; R) is precompact in L^p_λ(R_+; R) iff lim_{a→0} J_a f = f uniformly on K. Now, let G ⊂ X be a bounded set. We have to prove that G is relatively compact in L^p_λ(R_+; B). Taking into account that e^{−λt} ≤ e^{λa} e^{−λs} for all t ∈ (s − a, s + a), we get the required uniform estimate. Therefore, lim_{a→0} J_a f = f uniformly on G in L^p_λ(R_+; B_1). Next, since G is bounded in L^p_λ(R_+; B_0) and B_0 ↪ B compactly, G is relatively compact in L^p_λ(R_+; B).
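The averaging operators J_a used above can, for instance, be taken in the classical Flandoli–Gatarek form (a sketch; the normalisation used in the proof may differ):

```latex
(J_a f)(t) \;:=\; \frac{1}{2a}\int_{t-a}^{t+a} f(s)\,ds,
\qquad t\in\mathbb{R}_+,\ a>0,
```

where f is extended by zero to negative arguments; then J_a f ∈ C(R_+; B_i) for every f ∈ L^p(R_+; B_i), i = 0, 1, and J_a f → f in L^p_λ(R_+; B_i) as a → 0.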
In order to characterise the regularity of the functions we used real interpolation spaces between B and D(A). If the underlying space B is a Hilbert space and A is self-adjoint, then [B, D(A)]_θ = D_A(θ, 2) for any θ ∈ (0, 1). If B is a Banach space, this is not necessarily true. Nevertheless, the following theorems are valid.
In general, the real method and the complex method lead to different results. Neither of the indices 1 and ∞ can be replaced with a q, 1 < q < ∞. However, in the case of reiteration one can mix both methods to a certain extent.
Given two compatible couples (A_0, A_1) and (B_0, B_1) and a linear operator T such that T : A_0 → B_0 is compact and T : A_1 → B_1 is continuous, it is still an open problem in interpolation theory to determine whether or not the operator interpolated by the complex method is compact. Nevertheless, in certain specific cases one can verify whether the interpolated operator is compact.
Theorem C.5. (see [18, Theorem 9]) Let (A_0, A_1) and (B_0, B_1) be two compatible Banach couples and let T : (A_0, A_1) → (B_0, B_1) be such that T : A_0 → B_0 is compact and T : A_1 → B_1 is continuous. If A_0 is a UMD-Banach space, then for any 0 < θ < 1 the interpolated operator T : [A_0, A_1]_θ → [B_0, B_1]_θ is compact.

15 Two topological vector spaces A_1 and A_2 are called compatible iff there exists a Hausdorff topological vector space A such that A_1 and A_2 are subspaces of A. If we want to underline the space A, then we say compatible with respect to A.
16 For θ ∈ (0, 1), 1 ≤ p ≤ ∞ and two Banach spaces A_1 and A_2, (A_1, A_2)_{θ,p} denotes the real interpolation functor.
In the following we denote the value of Λ(f) at a, where f ∈ L^p(R^d), by f δ_a. Let us recall the definition of the Besov spaces as given in [61, Definition 2, pp. 7–8]. First we choose a function ψ ∈ S(R^d) such that 0 ≤ ψ(x) ≤ 1, x ∈ R^d. With the choice of φ = {φ_j}^∞_{j=0} as above, and F and F^{−1} being the Fourier and the inverse Fourier transformations (acting in the space S′(R^d) of Schwartz distributions), we have the following definition.
Definition D.2. Let s ∈ R, 0 < p ≤ ∞ and 0 < q ≤ ∞. If 0 < q < ∞ and f ∈ S′(R^d), we put |f|_{B^s_{p,q}} to be the corresponding Littlewood–Paley quantity. We denote by B^s_{p,q}(R^d) the space of all f ∈ S′(R^d) for which |f|_{B^s_{p,q}} is finite.
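For the reader's convenience, the quantity |f|_{B^s_{p,q}} of Definition D.2 is, in the standard Littlewood–Paley form of [61],

```latex
|f|_{B^s_{p,q}} \;:=\; \Bigl(\sum_{j=0}^{\infty} 2^{jsq}\,
 \bigl|\mathcal{F}^{-1}\bigl(\varphi_j\,\mathcal{F}f\bigr)\bigr|_{L^p(\mathbb{R}^d)}^{q}\Bigr)^{1/q},
```

with the usual modification (supremum over j ≥ 0) when q = ∞.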
Proof. Let us recall that hδ_a ∈ S′(R^d) assigns to a function g ∈ S(R^d) the value δ_a(hg) = h(a)g(a). The claim then follows from the definition of the convolution of a distribution with a test function, where ǧ(·) := g(−·).

Lemma D.4. If ϕ ∈ S(R^d), λ > 0 and g(x) := ϕ(λx), x ∈ R^d, then the corresponding scaling identity holds.

Proof. Simple calculations.
Proof of Proposition D.1. Obviously, it is enough to prove the first part of the Proposition. We will use the definition of the Fourier transform as in [61, p. 6]. In particular, with ⟨·, ·⟩ being the scalar product in R^d, we put (Ff)(ξ) := (2π)^{−d/2} ∫_{R^d} e^{−i⟨x,ξ⟩} f(x) dx, ξ ∈ R^d, f ∈ S(R^d).
Moreover, the resulting norm equals |f|_{H^{s,1}_p(R^d)}. Summing up, one can show that for any s ∈ R there exists a constant C > 0 such that the corresponding estimate holds. For s ≤ 0 the operator I(s) is positive. Therefore, Theorem 5.1.2 in [7] gives inequality (D.2).
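Here I(s) denotes, as in [61], the lift operator, recalled here for convenience:

```latex
I(s)f \;:=\; \mathcal{F}^{-1}\Bigl(\bigl(1+|\xi|^2\bigr)^{s/2}\,\mathcal{F}f\Bigr),
\qquad f\in S'(\mathbb{R}^d),
```

which maps B^t_{p,q}(R^d) isomorphically onto B^{t−s}_{p,q}(R^d) for every t ∈ R.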

Appendix E. A modified version of the Skorohod embedding Theorem
Within the proof we consider limits of pairs of random variables. For us it was important that certain properties of the pairs are preserved by the Skorohod embedding theorem. Therefore, we had to use a modified version, which is stated below.
Theorem E.1. Let (Ω, F, P) be a probability space and E 1 , E 2 be two separable Banach spaces. Let χ n : Ω → E 1 × E 2 , n ∈ N, be a family of random variables, such that the sequence {Law(χ n ), n ∈ N} is weakly convergent on E 1 × E 2 .
For i = 1, 2 let π_i : E_1 × E_2 → E_i be the projection onto E_i, i.e. π_i(x_1, x_2) = x_i.
For simplicity, for any k ∈ N we relabel these families and call them now (A^k_i)_{i∈N} and (C^k_j)_{j∈N}. Let Ω̄ := [0, 1) × [0, 1) and let P̄ be the Lebesgue measure on [0, 1) × [0, 1). In the first step, we will construct a family of partitions of Ω̄ consisting of rectangles.
Definition E.2. Suppose that μ is a Borel probability measure on E = E_1 × E_2 and μ_1 is the marginal of μ on E_1, i.e. μ_1(A) := μ(A × E_2), A ∈ B(E_1). Assume that (A_i)_{i∈N} and (C_i)_{i∈N} are partitions of E_1 and E_2, respectively. Define the following partition of the square [0, 1) × [0, 1): for i, j ∈ N we define the rectangles I_{ij}.

Remark. Obviously, if μ_1(A_i) = 0 for some i ∈ N, then I_{ij} = ∅ for all j ∈ N.
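One natural way to realise the rectangles I_{ij} (a sketch of the classical construction; the paper's exact formula may differ in normalisation) is

```latex
I_{ij} \;:=\; \Bigl[\sum_{k<i}\mu_1(A_k),\ \sum_{k\le i}\mu_1(A_k)\Bigr)
 \times \Bigl[\sum_{l<j}\frac{\mu(A_i\times C_l)}{\mu_1(A_i)},\ \sum_{l\le j}\frac{\mu(A_i\times C_l)}{\mu_1(A_i)}\Bigr)
```

whenever μ_1(A_i) > 0, and I_{ij} := ∅ otherwise; with this choice P̄(I_{ij}) = μ(A_i × C_j), in agreement with the Remark.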
Next, for fixed l ∈ N and n ∈ N ∪ {∞}, we will define a partition (I^{l,n}_{ij})_{i,j∈N} of Ω̄ = [0, 1) × [0, 1) corresponding to the partitions (A^l_i)_{i∈N} and (C^l_i)_{i∈N} of the spaces E_1 and E_2, respectively.