On a Theorem by A.S. Cherny for Semilinear Stochastic Partial Differential Equations

We consider semilinear stochastic partial differential equations on 2-smooth Banach spaces driven by Hilbert space valued Brownian motion and non-anticipating coefficients. For these equations we show that weak uniqueness is equivalent to weak joint uniqueness, and thereby generalize a theorem by A.S. Cherny to an infinite dimensional setting. Our proof for the technical key step is different from Cherny's and uses cylindrical martingale problems. As an application, we deduce a dual version of the Yamada-Watanabe theorem, i.e. we show that strong existence and weak uniqueness imply weak existence and strong uniqueness.


Introduction
The classical Yamada-Watanabe theorem [19] for finite-dimensional Brownian stochastic differential equations (SDEs) states that weak existence and strong (i.e. pathwise) uniqueness imply strong existence and weak uniqueness (i.e. uniqueness in law). Jacod [5] lifted this result to SDEs driven by semimartingales and extended it by showing that weak existence and strong uniqueness are equivalent to strong existence and joint weak uniqueness, i.e. uniqueness of the joint law of the solution process and its random driver. Jacod's theorem extended the classical Yamada-Watanabe theorem in two directions: it improved the classical direction by showing that even joint weak uniqueness holds, and it provided an equivalence statement.
It is an interesting question how Jacod's theorem is related to the classical Yamada-Watanabe theorem. For finite dimensional SDEs driven by Brownian motion, A.S. Cherny [1] answered this question by proving that weak uniqueness is equivalent to joint weak uniqueness. As a consequence, Cherny obtained the converse implication in the Yamada-Watanabe theorem, which is nowadays often called the dual Yamada-Watanabe theorem.
In this short article we establish Cherny's result for analytically weak solutions to the Banach space valued semilinear stochastic partial differential equation (SPDE)

dX_t = (A X_t + v_t(X)) dt + u_t(X) dW_t, X_0 = x_0, (1.1)

where A is a closed and densely defined operator and v and u are progressively measurable processes on the path space of continuous functions. Furthermore, we deduce a dual Yamada-Watanabe theorem for this framework.
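To fix ideas, the dynamics (1.1) can be sketched numerically in a finite-dimensional analogue, with A a matrix and W a Brownian motion with covariance Q. The following Euler-Maruyama sketch is purely illustrative: all concrete choices (the matrices, the simplification that v and u depend only on the current state rather than on the whole path) are assumptions for the example and not part of the paper's framework.

```python
import numpy as np

# Finite-dimensional toy analogue of the SPDE (1.1):
#   dX_t = (A X_t + v(X_t)) dt + u(X_t) dW_t,  X_0 = x0,
# discretized with an Euler-Maruyama scheme.

def euler_maruyama(A, v, u, Q_sqrt, x0, T=1.0, n_steps=1000, rng=None):
    """Simulate one path of dX = (A X + v(X)) dt + u(X) dW on [0, T]."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = x0.shape[0]
    dt = T / n_steps
    X = np.empty((n_steps + 1, d))
    X[0] = x0
    for i in range(n_steps):
        # Increment of a Q-Brownian motion: Q^{1/2} times a standard normal.
        dW = Q_sqrt @ rng.standard_normal(d) * np.sqrt(dt)
        X[i + 1] = X[i] + (A @ X[i] + v(X[i])) * dt + u(X[i]) @ dW
    return X

A = np.array([[-1.0, 0.0], [0.0, -2.0]])   # stable linear drift (illustrative)
Q_sqrt = np.diag([1.0, 0.5])               # Q = diag(1, 0.25)
v = lambda x: np.zeros(2)                  # no extra drift
u = lambda x: np.eye(2)                    # additive noise
path = euler_maruyama(A, v, u, Q_sqrt, x0=np.array([1.0, 1.0]))
```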
Our results are closely related to the articles [12,17], where Cherny's results are proven for mild solutions under slightly different assumptions on the equations: The linearity A is assumed to be the generator of a C 0 -semigroup, in which case mild and weak solutions often coincide, see Proposition 2.2 below. In addition, [12] works with Markovian coefficients and [17] considers Hilbert space valued equations with further assumptions on A. In contrast to [12], we restrict our attention to solution processes with continuous paths and we assume that the diffusion coefficient has more integrability. Some detailed comments on the relation to [17] are given in Remark 2.3 below. Further related works are [14,15] for the variational framework (see [10] for more details).
The important difference between these articles and ours is the proof of the main result. The basic strategy is similar to the finite dimensional case: we recover the noise W from the solution process X and an independent (infinite dimensional) Brownian motion V. The technical challenge in this argument is the proof of independence of the (infinite dimensional) processes X and V. Cherny's proof uses additional randomness, an enlargement of filtration and a conditioning argument. In [12,14,15] these ideas have been adapted to infinite dimensional frameworks. 1 Our approach is quite different: we use arguments based on martingale problems for SPDEs and infinite dimensional Brownian motion. In this way, we transfer ideas from [2] for one-dimensional SDEs with jumps to our continuous infinite dimensional setting. In comparison with Cherny's method, ours appears to be more straightforward, because it works directly with X and V without introducing further randomness.
The paper is structured as follows: In Section 2 we introduce our setting and state our main results: Theorem 2.4 and Corollary 2.6. The proof of Theorem 2.4 is given in Section 3.
Let us end the introduction with a short comment on notation and terminology: In general we use classical notation as introduced in the monograph of Da Prato and Zabczyk [3]. Further standard references for infinite dimensional stochastic analysis are the monographs [10,11]. A detailed construction and standard properties of the stochastic integral in 2-smooth Banach spaces can be found in [12].

The Setting and Main Results
Let U_1 be a real separable 2-smooth Banach space ([12, Definition 3.1]) and let U be a real separable Hilbert space. We denote the corresponding norms by ‖·‖_{U_1} and ‖·‖_U and the scalar product of U by ⟨·,·⟩_U. As usual, the topological dual of U is identified with U via the Riesz representation. The topological dual of U_1 is denoted by U*_1 and we write ⟨y, y*⟩_{U_1} := y*(y) for (y, y*) ∈ U_1 × U*_1. Fix a covariance operator Q on U, i.e. a self-adjoint, non-negative operator U → U with finite trace. We assume that all eigenvalues λ_k of Q are strictly positive. If this is not the case we could simply start with ker(Q)^⊥ instead of U. The associated eigenvectors (f_k)_{k∈N} form an orthonormal basis of U. We set

U_0 := Q^{1/2}(U), endowed with the scalar product ⟨u, v⟩_{U_0} := ⟨Q^{-1/2} u, Q^{-1/2} v⟩_U,

which is a separable Hilbert space ([10, Proposition C.0.3]). The space of all radonifying operators U_0 → U_1 ([12, Definition 2.3]) is denoted by L^0_2 := L_2(U_0, U_1). In case U_1 is a Hilbert space, L^0_2 coincides isometrically with the space of Hilbert-Schmidt operators U_0 → U_1. Further, we define C := C(R_+, U_1) to be the space of continuous functions R_+ → U_1. Let X be the coordinate process on C, i.e. X_t(ω) = ω(t) for (ω, t) ∈ C × R_+, and set C := σ(X_t, t ∈ R_+) and C := (C_t)_{t≥0}, where C_t := ∩_{s>t} σ(X_u, u ∈ [0, s]) for t ∈ R_+. The input data for the SPDE (1.1) is the following:
- Two processes defined on the filtered space (C, C, C): a U_1-valued progressively measurable process v = (v_t)_{t≥0} and an L^0_2-valued progressively measurable process u = (u_t)_{t≥0}.
- A closed and densely defined operator A : D(A) ⊆ U_1 → U_1. Because A is densely defined, there exists a unique adjoint A* : D(A*) ⊆ U*_1 → U*_1 to A, which is a closed operator (see [9, Section III.5, pp. 167]). Furthermore, because A is closed and U_1 is reflexive ([13, Remark 5]), the adjoint A* is also densely defined (see [9, Theorem III.5.29, p. 168]).

1 The proof in [17] is indirect in the sense that it uses the method of the moving frame to transfer results from [15] for infinite dimensional SDEs to SPDEs. The difficulty is that the method of the moving frame transports joint weak uniqueness, but not weak uniqueness. Due to this, some additional assumptions on A are needed.
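Recall that the Hilbert space U_0 = Q^{1/2}(U) carries the scalar product ⟨u, v⟩_{U_0} = ⟨Q^{-1/2}u, Q^{-1/2}v⟩_U. The following finite-dimensional sketch (the eigenvalues and dimension are illustrative assumptions) verifies that the images Q^{1/2}f_k of the eigenvectors of Q form an orthonormal basis of U_0:

```python
import numpy as np

# Finite-dimensional illustration of U_0 = Q^{1/2}(U) with
# <u, v>_{U_0} = <Q^{-1/2} u, Q^{-1/2} v>_U.

eigvals = np.array([1.0, 0.5, 0.25])       # strictly positive eigenvalues of Q
Q_sqrt = np.diag(np.sqrt(eigvals))
Q_sqrt_inv = np.diag(1.0 / np.sqrt(eigvals))

def inner_U0(u, v):
    """Scalar product of U_0 induced by Q."""
    return float((Q_sqrt_inv @ u) @ (Q_sqrt_inv @ v))

# The images Q^{1/2} f_k of the eigenvectors (here: standard basis vectors)
# have pairwise U_0-scalar products given by the identity matrix.
basis_images = [Q_sqrt @ e for e in np.eye(3)]
gram = np.array([[inner_U0(a, b) for b in basis_images] for a in basis_images])
```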
In the following definition we introduce analytically and probabilistically weak solutions to the SPDE (1.1) and two weak uniqueness concepts.
Definition 2.1. (i) A driving system is a pair (B, W), where B = (Ω, F, (F_t)_{t≥0}, P) is a filtered probability space with right-continuous and complete filtration which supports a U-valued Brownian motion W with covariance Q.
(ii) A weak solution to the SPDE (1.1) is a triplet (B, W, X), where (B, W) is a driving system and X is a continuous U_1-valued adapted process on B such that a.s.

∫_0^t ( ‖v_s(X)‖_{U_1} + ‖u_s(X)‖²_{L^0_2} ) ds < ∞ for all t ∈ R_+, (2.1)

and for all y* ∈ D(A*) a.s.

⟨X_t, y*⟩_{U_1} = ⟨x_0, y*⟩_{U_1} + ∫_0^t ( ⟨X_s, A* y*⟩_{U_1} + ⟨v_s(X), y*⟩_{U_1} ) ds + ⟨ ∫_0^t u_s(X) dW_s, y* ⟩_{U_1} for all t ∈ R_+. (2.2)

The process X is called a solution process on the driving system (B, W).
(iii) We say that weak (joint) uniqueness holds for the SPDE (1.1), if for any two weak solutions (B^1, W^1, X^1) and (B^2, W^2, X^2) the laws of X^1 and X^2 (the laws of (X^1, W^1) and (X^2, W^2)) coincide. The law of a solution process is called a solution measure.
Here, the law of a U -valued Brownian motion W is meant to be the push-forward measure P • W −1 on the canonical space of continuous functions R + → U equipped with the σ-field generated by the corresponding coordinate process.
Let us relate weak solutions to so-called mild (or martingale) solutions, which are also frequently used in the literature (see, e.g., [12,17]). The following proposition is a direct consequence of [12, Theorem 13].
Proposition 2.2. Suppose that A generates a C_0-semigroup (S_t)_{t≥0} on U_1 and let (B, W) be a driving system. A continuous U_1-valued adapted process X on B is a solution process on (B, W) if and only if (2.1) holds and a.s.

X_t = S_t x_0 + ∫_0^t S_{t-s} v_s(X) ds + ∫_0^t S_{t-s} u_s(X) dW_s for all t ∈ R_+.
Remark 2.3. In the following, we comment on the relation of our setting to that of [17]. Assume that U_1 is a separable Hilbert space and that A is the generator of a C_0-semigroup. Proposition 2.2 shows that X is a mild solution process in the sense of [17, Definition 2.2] if and only if it is a weak solution process in the sense of Definition 2.1 with a slightly different diffusion coefficient. Moreover, the concepts of (joint) weak uniqueness are the same for both frameworks. These claims deserve some explanation, because at first glance the noise in [17] is different from ours. More precisely, in [17] (and [14,15]) the noise is a standard R^∞-Wiener process (β^k)_{k∈N}, i.e. a collection of independent one-dimensional standard Brownian motions. To see that this makes no difference, let us recall the definition of the stochastic integral w.r.t. an R^∞-Brownian motion (see [10]). Fix a one-to-one Hilbert-Schmidt operator J from U into another separable Hilbert space Ū and let (e_k)_{k∈N} be an orthonormal basis of U. Due to [10, Proposition 2.5.2], the series Σ_{k∈N} β^k J e_k converges and defines a Ū-valued Brownian motion with covariance JJ*, and the stochastic integral w.r.t. (β^k)_{k∈N} is defined as the stochastic integral w.r.t. this Brownian motion. Due to this definition, our first claim on the relation of solution processes is clear. Moreover, our second claim on the relation of the uniqueness concepts also follows, because in [17] the law of the R^∞-Brownian motion (β^k)_{k∈N} is identified with the law of W.
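The identification of the two notions of noise rests on the classical fact that, with eigenpairs (λ_k, f_k) of Q, the series W_t = Σ_k √λ_k β^k_t f_k defines a U-valued Brownian motion with covariance Q. A finite-dimensional Monte Carlo sketch of this Karhunen-Loève type representation (two modes, with illustrative eigenvalues):

```python
import numpy as np

# Building a Q-Brownian motion at time t = 1 from independent standard
# one-dimensional Brownian motions: W_1 = sum_k sqrt(lambda_k) beta_1^k f_k.
# The empirical covariance of W_1 should approximate Q.

rng = np.random.default_rng(42)
lam = np.array([1.0, 0.25])                # eigenvalues of Q (illustrative)
f = np.eye(2)                              # eigenvectors of Q
n_samples = 50_000

beta_1 = rng.standard_normal((n_samples, 2))   # independent beta^k at t = 1
W_1 = beta_1 * np.sqrt(lam) @ f.T              # series representation

emp_cov = np.cov(W_1, rowvar=False)        # should be close to Q = diag(1, 0.25)
```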
Our main result is the following.

Theorem 2.4. Weak uniqueness holds for the SPDE (1.1) if and only if weak joint uniqueness holds for the SPDE (1.1).

As an application of Theorem 2.4, we obtain the following dual Yamada-Watanabe theorem.

Corollary 2.6. Suppose that strong existence and weak uniqueness hold for the SPDE (1.1). Then, weak existence and strong uniqueness hold for the SPDE (1.1).

Proof: Weak existence follows immediately from strong existence. To see strong uniqueness, let ((Ω, F, (F_t)_{t≥0}, P), W) be a driving system which supports two solution processes X and Y. Strong existence provides a solution process of the form F(W) on this driving system, where F is a measurable map. Now, recalling that, by Theorem 2.4, joint weak uniqueness holds, we obtain that the laws of (X, W) and (Y, W) coincide with the law of (F(W), W). Because the latter is concentrated on the graph of F, it follows that a.s. X = F(W) = Y. Consequently, strong uniqueness holds and the proof is complete.
Before we turn to the proof of Theorem 2.4, note that Corollary 2.6 generalizes [17, Theorem 1.6] in the sense that all assumptions imposed on A can be dropped.

Proof of Theorem 2.4
The "if" implication is obvious. Thus, we only prove the "only if" implication. Assume that weak uniqueness holds for the SPDE (1.1).
Let X be a solution process to the SPDE (1.1) which is defined on a driving system ((Ω*, F*, (F*_t)_{t≥0}, P*), W). We take a second driving system ((Ω^o, F^o, (F^o_t)_{t≥0}, P^o), B) and set

Ω := Ω* × Ω^o, F := F* ⊗ F^o, P := P* ⊗ P^o.

Define F_t to be the P-completion of the σ-field ∩_{s>t} (F*_s ⊗ F^o_s). In the following, B := (Ω, F, (F_t)_{t≥0}, P) will be our underlying filtered probability space. We extend X, W and B to B by setting

X(ω*, ω^o) := X(ω*), W(ω*, ω^o) := W(ω*), B(ω*, ω^o) := B(ω^o)

for (ω*, ω^o) ∈ Ω. It is easy to see that (B, W) and (B, B) are again driving systems and that X is a solution process on (B, W). For a closed linear subspace U^o of U we denote by pr_{U^o} the orthogonal projection onto U^o. For (ω, t) ∈ C × R_+ we define

φ_t(ω) := pr_{ker(u_t(ω)Q^{1/2})} ∈ L(U, U), ψ_t(ω) := Id_U − φ_t(ω) ∈ L(U, U).
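In a matrix analogue, the projection φ = pr_{ker(u Q^{1/2})} can be computed from the SVD of u Q^{1/2}, and one can verify numerically the identity u Q^{1/2} ψ = u Q^{1/2} behind (3.2). The matrices below are illustrative (u is chosen rank-deficient so that the kernel is nontrivial):

```python
import numpy as np

# Orthogonal projections phi = pr_{ker(u Q^{1/2})} and psi = Id - phi,
# computed via the SVD, for an illustrative rank-deficient matrix u.

def kernel_projection(M, tol=1e-10):
    """Orthogonal projection onto ker(M), via the SVD of M."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    kernel_basis = Vt[rank:].T            # columns span ker(M)
    return kernel_basis @ kernel_basis.T

u = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0]])           # rank 2
Q_sqrt = np.diag([1.0, 0.5, 0.25])

phi = kernel_projection(u @ Q_sqrt)
psi = np.eye(3) - phi
```

By construction phi is a self-adjoint projection, phi psi = 0, and u Q^{1/2} phi = 0, which is exactly the algebra used in Lemma 3.1.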
Let us summarize some basic properties of φ and ψ: for every (ω, t) ∈ C × R_+, the operators φ_t(ω) and ψ_t(ω) are self-adjoint projections with

φ_t(ω)ψ_t(ω) = ψ_t(ω)φ_t(ω) = 0. (3.1)

Moreover, we have the following:

Lemma 3.1. The processes φ = (φ_t)_{t≥0} and ψ = (ψ_t)_{t≥0} are progressively measurable (as processes on the canonical space (C, C, C)). Furthermore, for all (ω, t) ∈ C × R_+

u_t(ω) Q^{1/2} ψ_t(ω) = u_t(ω) Q^{1/2}. (3.2)

Proof: The measurability follows by a routine approximation argument (see the proof of [12, Lemma 9.2] for details). To see that (3.2) holds, it suffices to note that u_t(ω) Q^{1/2} φ_t(ω) = 0, because φ_t(ω) projects onto ker(u_t(ω)Q^{1/2}). The proof is complete.

Lemma 3.1 allows us to define a U-valued local martingale V by

V := ∫_0^· Q^{1/2} φ_s(X) Q^{-1/2} dW_s + ∫_0^· Q^{1/2} ψ_s(X) Q^{-1/2} dB_s.

The following lemma is the technical core of the proof. We postpone its proof till the proof of Theorem 2.4 is complete.
Lemma 3.2. The process V is a U -valued Brownian motion with covariance Q. Moreover, V is independent of X, i.e. the σ-fields σ(V t , t ∈ R + ) and σ(X t , t ∈ R + ) are independent.
Because (Q^{1/2} φ_s(X) Q^{-1/2})(Q^{1/2} φ_s(X) Q^{-1/2}) = Q^{1/2} φ_s(X) Q^{-1/2} and (Q^{1/2} ψ_s(X) Q^{-1/2})(Q^{1/2} φ_s(X) Q^{-1/2}) = 0 by (3.1), we can write

W = ∫_0^· Q^{1/2} ψ_s(X) Q^{-1/2} dW_s + ∫_0^· Q^{1/2} φ_s(X) Q^{-1/2} dV_s.

By the construction of the stochastic integral (see, e.g., [12, Chapter 3]), the law of the second term is determined by the law of (X, V), cf. [7, Proposition 17.26] for a similar argument. In the following we explain that the same is true for the first term. In fact, we even show that its law is determined by that of X. Let us start by defining the U_1-valued local martingale Z := ∫_0^· u_s(X) dW_s. Here, we stress that Z is well-defined due to (2.1). Next, we verify the prerequisites of [12, Proposition 8.8] for the map H: The process H(·, ·, x, y*) is progressively measurable by Lemma 3.1. It is clear that y* → H(t, ω, x, y*) is continuous (when U*_1 is equipped with the (operator) norm topology). Finally, one checks that {y* ∈ U*_1 : H(t, ω, x, y*) < 1/m} ≠ ∅ for every m ∈ N and (t, ω) ∈ R_+ × C. In summary, the claim follows from [12, Proposition 8.8].
Fix T > 0 and x ∈ U and let (s^m)_{m∈N} be as in Lemma 3.3. Note that

sup_{t∈[0,T]} | ∫_0^t ⟨dZ_s, s^m_s(X)⟩_{U_1} − ⟨ ∫_0^t Q^{1/2} ψ_s(X) Q^{-1/2} dW_s, x ⟩_U | → 0 in probability as m → ∞.

Because the law of ∫_0^· ⟨dZ_s, s^m_s(X)⟩_{U_1} is determined by the law of (X, Z) and because the law of a continuous Banach space valued process is determined by its cylindrical finite dimensional distributions, we conclude that the law of ∫_0^· Q^{1/2} ψ_s(X) Q^{-1/2} dW_s is determined by the law of (X, Z).
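The algebra behind recovering W from X and V can be checked on matrices. Assuming, as in Cherny's construction, that V has increments Φ dW + Ψ dB with Φ = Q^{1/2}φQ^{-1/2} and Ψ = Q^{1/2}ψQ^{-1/2} built from complementary orthogonal projections φ, ψ, one has Φ² = Φ and ΦΨ = 0, so applying Φ to an increment of V returns Φ dW exactly. All matrices below are illustrative:

```python
import numpy as np

# Matrix-level check: Phi = Q^{1/2} phi Q^{-1/2}, Psi = Q^{1/2} psi Q^{-1/2}
# with complementary projections phi, psi satisfy Phi^2 = Phi and Phi Psi = 0,
# hence Phi (Phi dW + Psi dB) = Phi dW.

lam = np.array([1.0, 0.5, 0.25])
Q_sqrt = np.diag(np.sqrt(lam))
Q_sqrt_inv = np.diag(1.0 / np.sqrt(lam))

v = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
phi = np.outer(v, v)                       # projection onto span{v}
psi = np.eye(3) - phi
Phi = Q_sqrt @ phi @ Q_sqrt_inv
Psi = Q_sqrt @ psi @ Q_sqrt_inv

rng = np.random.default_rng(7)
dW = rng.standard_normal(3)
dB = rng.standard_normal(3)
dV = Phi @ dW + Psi @ dB                   # increment of the auxiliary noise V
recovered = Phi @ dV                       # equals Phi @ dW
```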
Lemma 3.4. For every t ∈ R_+ and y* ∈ U*_1 there exists a measurable map F : C → R such that a.s. ⟨Z_t, y*⟩_{U_1} = F(X).
Proof: Take t ∈ R_+ and y* ∈ U*_1. Because A* is densely defined, we can fix a sequence (y*_k)_{k∈N} ⊂ D(A*) such that y*_k → y* in U*_1. For ω ∈ C and k ∈ N we define

G_k(ω) := ⟨ω(t), y*_k⟩_{U_1} − ⟨x_0, y*_k⟩_{U_1} − ∫_0^t ( ⟨ω(s), A* y*_k⟩_{U_1} + ⟨v_s(ω), y*_k⟩_{U_1} ) ds,

which we set to be +∞ if the last integral diverges, and F(ω) := lim inf_{k→∞} G_k(ω). It is clear that F : C → R is measurable. Furthermore, recalling (2.2), we have a.s. G_k(X) = ⟨Z_t, y*_k⟩_{U_1} for all k ∈ N, and consequently a.s. F(X) = lim_{k→∞} ⟨Z_t, y*_k⟩_{U_1} = ⟨Z_t, y*⟩_{U_1}, which completes the proof.
Using again that the law of a continuous Banach space valued process is determined by its cylindrical finite dimensional distributions, Lemma 3.4 implies that the law of (X, Z) is determined by the law of X. In summary, we conclude that the law of (X, W ) is determined by the law of (X, V ) and hence, by Lemma 3.2, it is determined by the law of X. The proof is complete.

It remains to prove Lemma 3.2:
Proof of Lemma 3.2: Step 1. We use Lévy's characterization of Hilbert space valued Brownian motion ([11, Proposition, p. 60]) to conclude that V is a Brownian motion with covariance Q. Using that B and W are independent, [15, Proposition 4.7] and [3, Theorem 4.27], for every t ∈ R_+ we compute

[⟨V, z⟩_U, ⟨V, z'⟩_U]_t = ∫_0^t ( ⟨Q^{1/2} φ_s(X) Q^{1/2} z, z'⟩_U + ⟨Q^{1/2} ψ_s(X) Q^{1/2} z, z'⟩_U ) ds = t ⟨Qz, z'⟩_U for all z, z' ∈ U.

Consequently, Lévy's characterization yields that V is a U-valued Brownian motion with covariance Q.
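The covariance computation in Step 1 reduces to the operator identity Φ Q Φ* + Ψ Q Ψ* = Q^{1/2}(φ + ψ)Q^{1/2} = Q, where Φ = Q^{1/2}φQ^{-1/2} and Ψ = Q^{1/2}ψQ^{-1/2}. A numerical sanity check with a randomly generated orthogonal projection (all concrete choices are illustrative):

```python
import numpy as np

# Check that the local covariances of the two integrals defining V add up
# to Q: Phi Q Phi^T + Psi Q Psi^T = Q for any orthogonal projection phi.

rng = np.random.default_rng(1)
lam = np.array([1.0, 0.5, 0.25])
Q = np.diag(lam)
Q_sqrt = np.diag(np.sqrt(lam))
Q_sqrt_inv = np.diag(1.0 / np.sqrt(lam))

# A random rank-2 orthogonal projection phi = O O^T.
M = rng.standard_normal((3, 2))
O, _ = np.linalg.qr(M)                     # columns of O are orthonormal
phi = O @ O.T
psi = np.eye(3) - phi

Phi = Q_sqrt @ phi @ Q_sqrt_inv
Psi = Q_sqrt @ psi @ Q_sqrt_inv
total_cov = Phi @ Q @ Phi.T + Psi @ Q @ Psi.T
```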
Step 2. In this step we prepare the proof of the independence of V and X.
Lemma 3.5. Let Y be a continuous U-valued adapted process starting in 0 ∈ U. The following are equivalent:
(i) Y is a U-valued Brownian motion with covariance Q.
(ii) For all f ∈ C²_b(R) with inf f > 0 and f(0) = 1, and all z ∈ U the process

M^f := f(⟨Y, z⟩_U) exp( − ½ ⟨Qz, z⟩_U ∫_0^· f''(⟨Y_s, z⟩_U) / f(⟨Y_s, z⟩_U) ds ) (3.3)

is a martingale.
Proof: Due to the classical martingale problem for (one-dimensional) Brownian motion ([16, Theorem 4.1.1]) and [4, Proposition 4.3.3], (ii) is equivalent to the following: for all z ∈ U the process ⟨Y, z⟩_U is a real-valued Brownian motion with covariance ⟨Qz, z⟩_U. Hence, (i) and (ii) are equivalent, cf. [11, p. 60].
For f = g(⟨·, y*⟩_{U_1}) with g ∈ C²(R) and y* ∈ D(A*) we define

Lf(ω, s) := g'(⟨ω(s), y*⟩_{U_1}) ( ⟨ω(s), A* y*⟩_{U_1} + ⟨v_s(ω), y*⟩_{U_1} ) + ½ g''(⟨ω(s), y*⟩_{U_1}) ‖u_s(ω)* y*‖²_{U_0},

and we denote the collection of all such f by D.

Lemma 3.6. A probability measure Q' on (C, C) is a solution measure to the SPDE (1.1) if and only if for every f ∈ D the process

K^f := f(X) − f(X_0) − ∫_0^· Lf(X, s) ds (3.4)

is a local Q'-martingale. Furthermore, for every solution process X to the SPDE (1.1) the process K^f ∘ X is a local martingale on the corresponding driving system.
Proof: The structure of the proof is classical and similar to the finite-dimensional case (see, e.g., [8, Chapter 5.4]). Let Q' be a solution measure to the SPDE (1.1) and let (B, W, X) be a weak solution such that B = (Ω, F, (F_t)_{t≥0}, P) and Q' = P ∘ X^{-1}. Take f = g(⟨·, y*⟩_{U_1}) ∈ D. Then, the classical one-dimensional Itô formula yields that

f(X_t) = f(X_0) + ∫_0^t Lf(X, s) ds + ∫_0^t g'(⟨X_s, y*⟩_{U_1}) ⟨u_s(X) dW_s, y*⟩_{U_1}, t ∈ R_+.

Hence, K^f ∘ X is a local martingale. Moreover, using classical arguments such as [5, Lemmata 2.7, 2.9], the local martingale property transfers to the canonical space (C, C, C, Q') and the only if implication follows.
Conversely, let Q' be as in the statement of the lemma. Then, using the hypothesis with g(x) = x and g(x) = x², it follows that for every y* ∈ D(A*) the process

Y(y*) := ⟨X, y*⟩_{U_1} − ⟨X_0, y*⟩_{U_1} − ∫_0^· ( ⟨X_s, A* y*⟩_{U_1} + ⟨v_s(X), y*⟩_{U_1} ) ds

is a local Q'-martingale with quadratic variation [Y(y*), Y(y*)] = ∫_0^· ⟨u_s(X)* y*, u_s(X)* y*⟩_{U_0} ds.
Finally, we deduce from [13, Corollary 6] that, possibly on an extension of the filtered probability space (C, C, C, Q'), there exists a U-valued Brownian motion W with covariance Q such that

Y(y*) = ⟨ ∫_0^· u_s(X) dW_s, y* ⟩_{U_1} for all y* ∈ D(A*).

Due to (3.6), we conclude the if implication. The proof is complete.
Define M^f and K^g as in (3.3) and (3.4), with Y replaced by V and with the coordinate process replaced by the solution process X. It follows from Lemma 3.5 and Step 1 that M^f is a martingale. Similarly, because X is a solution process to the SPDE (1.1), K^g is a local martingale by Lemma 3.6. We now show that [M^f, K^g] = 0, where [·, ·] denotes the quadratic variation process. The (one-dimensional) Itô formula shows that M^f and K^g are stochastic integrals w.r.t. ⟨V, z⟩_U and ⟨∫_0^· u_s(X) dW_s, y*⟩_{U_1}, respectively. In view of (3.5) and (3.7), the (one-dimensional) formula [8, Eq. 3.2.26] implies that it suffices to prove [⟨V, z⟩_U, ⟨∫_0^· u_s(X) dW_s, y*⟩_{U_1}] = 0. Using the self-adjointness of Q^{1/2} and Q^{-1/2}, the independence of W and B, well-known formulas for the quadratic variation of stochastic integrals (see [12, Summary of Step 5, p. 27]) and (3.1), we obtain

[⟨V, z⟩_U, ⟨∫_0^· u_s(X) dW_s, y*⟩_{U_1}] = ∫_0^· ⟨u_s(X) Q^{1/2} φ_s(X) Q^{1/2} z, y*⟩_{U_1} ds = 0,

where the last equality follows because u_s(X) Q^{1/2} φ_s(X) = 0. In summary, [M^f, K^g] = 0 follows.
Step 3. We are in the position to follow the proof of [2, Theorem 3]. More precisely, we deduce the independence of V and X from [M^f, K^g] = 0. While M^f is already a martingale, the process K^g is only a local martingale. We now define an explicit localizing sequence: for n ∈ N set

T_n := inf(t ∈ R_+ : |K^g_t| ≥ n), K^{g,n} := K^g_{·∧T_n}.

Because K^g has continuous paths, K^{g,n} is bounded on bounded time intervals and consequently, K^{g,n} is a martingale.
Step 2 yields that [M^f, K^{g,n}] = [M^f, K^g]_{·∧T_n} = 0. Hence, by integration by parts, the process M^f K^{g,n} is a local martingale and a true martingale, because it is bounded on bounded time intervals. Next, fix a bounded stopping time S and define a measure Q' on (Ω, F) by

Q'(G) := E_P[ M^f_S 1_G ], G ∈ F.

Because M^f_0 = 1, the optional stopping theorem shows that Q' is a probability measure. Because M^f, K^{g,n} and M^f K^{g,n} are P-martingales, we deduce again from the optional stopping theorem that for every bounded stopping time T

E_{Q'}[ K^{g,n}_T ] = E_P[ M^f_S K^{g,n}_T ] = E_P[ M^f_S 1_{S≤T} E_P[ K^{g,n}_T | F_{S∧T} ] + K^{g,n}_T 1_{T<S} E_P[ M^f_S | F_{S∧T} ] ] = E_P[ M^f_S K^{g,n}_{S∧T} 1_{S≤T} + K^{g,n}_T M^f_{S∧T} 1_{T<S} ] = E_P[ M^f_{S∧T} K^{g,n}_{S∧T} ] = 0.

Thus, because T was arbitrary and T_n ր ∞ as n → ∞, K^g is a local Q'-martingale. Furthermore, because g was arbitrary, Lemma 3.6 and classical arguments such as [6, Lemmata 2.7, 2.9] show that the push-forward Q' ∘ X^{-1} is a solution measure to the SPDE (1.1). The weak uniqueness assumption now implies that P ∘ X^{-1} = Q' ∘ X^{-1}. Next, fix a set F ∈ σ(X_t, t ∈ R_+) such that P(F) > 0 and set

Q*(G) := P(G ∩ F) / P(F), G ∈ F.

Clearly, Q* is a probability measure on (Ω, F). Recalling P(F) = Q'(F), we obtain

E_{Q*}[ M^f_S ] = E_P[ M^f_S 1_F ] / P(F) = Q'(F) / P(F) = 1 = E_{Q*}[ M^f_0 ].

Thus, because S was arbitrary, M^f is a Q*-martingale. Since f was arbitrary, Lemma 3.5 yields that Q* ∘ V^{-1} = P ∘ V^{-1} and consequently, for every G ∈ σ(V_t, t ∈ R_+),

P(G ∩ F) = Q*(G) P(F) = P(G) P(F).
Because this equality holds trivially whenever F ∈ σ(X t , t ∈ R + ) satisfies P (F ) = 0, we conclude that V and X are independent. The proof is complete.