On the Propagation of the Weak Representation Property in Independently Enlarged Filtrations: The General Case

In this paper, we investigate the propagation of the weak representation property (WRP) to an independently enlarged filtration. More precisely, we consider an F-semimartingale X possessing the WRP with respect to F and an H-semimartingale Y possessing the WRP with respect to H.
Assuming that F and H are independent, we show that the G-semimartingale Z = (X, Y) has the WRP with respect to G, where G := F ∨ H. In our setting, X and Y may have simultaneous jump-times. Furthermore, their jumps may charge the same predictable times. This generalizes all available results about the propagation of the WRP to independently enlarged filtrations.


Introduction
Let X be a d-dimensional semimartingale with respect to a right-continuous filtration F, and let X^c denote the continuous local martingale part of X. We say that X possesses the weak representation property (from now on WRP) with respect to F if every F-local martingale can be represented as the sum of a stochastic integral with respect to X^c and a stochastic integral with respect to the compensated jump measure of X (for details, see Definition 3.1).

Correspondence: Paolo Di Tella, Paolo.Di_Tella@tu-dresden.de, Technische Universität Dresden, Internationales Hochschulinstitut Zittau, Dresden, Germany.
The WRP of X is a property depending on the filtration F: For example, if X is a Lévy process and F = F X is the smallest right-continuous filtration with respect to which X is adapted, then X possesses the WRP with respect to F X . However, this need not be true if X is considered with respect to a larger filtration. Therefore, it is natural to investigate under which conditions and in which form the WRP of a semimartingale X with respect to a filtration F propagates to a larger filtration G.
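For concreteness, in the Lévy case just mentioned, the WRP takes the following explicit form (a standard fact; the notation anticipates Definition 3.1, and ν denotes the Lévy measure of X):

```latex
N = N_0 + K \cdot X^c + W * (\mu^X - \nu^X),
\qquad K \in L^1_{\mathrm{loc}}(X^c, \mathbf{F}^X), \quad
W \in G^1_{\mathrm{loc}}(\mu^X, \mathbf{F}^X),
```

for every F^X-local martingale N, where X^c is the Brownian part of X and ν^X(dt, dx) = dt ⊗ ν(dx) is the compensator of the jump measure μ^X.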
In this paper, we suppose that X is an R^d-valued F-semimartingale possessing the WRP with respect to F. We then assume that F is enlarged by a right-continuous filtration H, and we denote by G the smallest right-continuous filtration containing both F and H. We assume that H has the following properties: (1) H is independent of F; (2) H supports an R^l-valued semimartingale Y possessing the WRP with respect to H. Under these conditions, we show in Theorem 3.6 (the main result of this paper) that the R^d × R^l-valued G-semimartingale Z = (X, Y) possesses the WRP with respect to G. We stress that we do not make any further assumptions: The semimartingales X and Y may have simultaneous jump-times, and their jumps may charge (the same) predictable times. To the best of our knowledge, Theorem 3.6 is the most general result about the propagation of the WRP to an independently enlarged filtration.
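In formulas, the conclusion of Theorem 3.6 is that Z = (X, Y)^tr enjoys the representation (with the integrability classes of Definition 3.1):

```latex
N = N_0 + K \cdot Z^c + W * (\mu^Z - \nu^Z)
\qquad \text{for every } N \in \mathcal{H}^1_{\mathrm{loc}}(\mathbf{G}),
```

where Z^c denotes the continuous local martingale part of Z, and μ^Z, ν^Z are the jump measure of Z and its G-compensator.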
The propagation of the WRP to an independently enlarged filtration has been studied in Xue [14] under the further assumption that X (or Y) is a special quasi-left-continuous semimartingale. (In particular, its jumps do not charge predictable jump-times.) At first glance, these assumptions seem harmless and fairly general, but they are actually quite restrictive: First, not every Lévy process is a special semimartingale, and hence [14] does not cover the case of arbitrary Lévy processes X and Y. Second, and more importantly, the quasi-left-continuity assumption leads to the following strong simplification of the problem: F-local martingales and H-local martingales have no common jumps. By contrast, in the present paper we face the additional difficulty of determining an adequate representation of the simultaneous jumps of F- and H-local martingales. Moreover, examples of not necessarily quasi-left-continuous semimartingales possessing the WRP are known (see Example 3.7) and, in this case, the propagation of the WRP to the independently enlarged filtration G cannot be derived from [14].
In Wu and Gang [13], under the independence assumption of F and H, necessary and sufficient conditions on the semimartingale characteristics of X and Y are stated for the G-semimartingale X + Y to possess the WRP with respect to G. In particular, the authors do not assume that X (or Y ) are quasi-left continuous, but only that the sets of their accessible jump-times are disjoint. This however again yields that F-local martingales and H-local martingales have no common jumps (see [13,Lemma 7]).
If (X, F) and (Y, H) are local martingales, a first work investigating martingale representation theorems in the independently enlarged filtration G, without quasi-left-continuity assumptions and allowing simultaneous jumps of X and Y, is Calzolari and Torti [5]. However, [5] deals with the propagation of the predictable representation property (from now on PRP) and not with the WRP. We recall that the WRP is a more general property than the PRP (see, e.g., [10, Theorem 13.14] or Lemma 3.9). In Calzolari and Torti [6], the results of [5] are extended to multidimensional local martingales. We show in Corollary 3.10 that the results obtained in [5,6] can be reformulated in terms of the WRP of the G-local martingale Z = (X, Y).
If the filtration G is obtained by enlarging the filtration F by a not necessarily independent filtration H, then very little is known about the propagation of the WRP. Results in this direction are available if F is enlarged progressively by a random time τ that need not be an F-stopping time: In this case, H is generated by the process 1_{[τ,+∞)} and, therefore, G is the smallest right-continuous filtration containing F and such that τ is a stopping time. We now review these results.
In Barlow [3], a semimartingale X possessing the WRP with respect to a filtration F is considered, and a WRP is obtained in G if τ is an honest time. However, honest times are (morally) F_∞-measurable random variables and, therefore, this excludes the independent enlargement.
In Di Tella [8], the WRP in G is obtained under the assumptions that F-martingales are G-martingales (that is, the immersion property holds) and P[τ = σ < +∞] = 0 for all F-stopping times σ (that is, τ avoids F-stopping times). While the independent enlargement is a special case of the immersion property, the avoidance of stopping times implies that τ cannot be charged by the jumps of F-local martingales.
In summary, even when F is progressively enlarged by a random time τ, the case studied in the present paper is covered neither by [3] nor by [8].
The present work has the following structure: In Sect. 2, we recall some basic definitions and results needed in this paper. In Sect. 3, we prove our main result (Theorem 3.6) about the propagation of the WRP to the independently enlarged filtration G. Then, as a consequence of Theorem 3.6, we also investigate the propagation of the PRP to G. Section 4 is devoted to two applications of Theorem 3.6: In Sect. 4.1, we study the propagation of the WRP to the iterated independent enlargement. This part is inspired by [6, § 4.1]. In Sect. 4.2, we assume that the filtration F is enlarged by a random time τ satisfying the so-called Jacod's equivalence hypothesis. We stress that in Sect. 4.2 the filtrations F and H need not be independent; therefore, Sect. 4.2 is an extension of Theorem 3.6. We also observe that, in Sect. 4.2, τ need not avoid F-stopping times. Hence, this part extends known results such as Callegaro, Jeanblanc and Zargari [4, Proposition 5.5], where the avoidance property is additionally assumed. Finally, we postpone the proofs of some technical results to the Appendix.

Basic Notions
In this paper, we regard d-dimensional vectors as columns. For vectors v_i of dimension d_i, i = 1, …, n, we denote by (v_1, v_2, …, v_n)^tr the (d_1 + d_2 + … + d_n)-dimensional column vector obtained by continuing with v_2 after v_1 and so on, up to v_n.
Stochastic Processes, Filtrations and Martingales Let (Ω, F, P) be a complete probability space. For any càdlàg process X, we denote by ΔX the jump process of X, i.e., ΔX_t := X_t − X_{t−}, t > 0, with the convention X_{0−} := X_0.
We denote by F = (F_t)_{t≥0} a right-continuous filtration and by O(F) (resp. P(F)) the σ-algebra of the F-optional (resp. F-predictable) subsets of Ω × R_+, R_+ := [0, +∞). We set F_∞ := ⋁_{t≥0} F_t. We sometimes use the notation (X, F) to denote an F-adapted stochastic process X.
For a process X , we denote by F X the smallest right-continuous filtration such that X is adapted.
Let X be a [−∞, +∞]-valued and F ⊗ B(R_+)-measurable process, where B(R_+) denotes the Borel σ-algebra on R_+. We denote by ^{p,F}X the (extended) F-predictable projection of X. For the definition of ^{p,F}X, we refer to [12, Theorem I.2.28].
An F-adapted càdlàg process X is called quasi-left continuous if ΔX_T = 0 for every finite-valued F-predictable stopping time T.
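Two elementary examples (ours, added for illustration) separate the two situations relevant in this paper:

```latex
% Quasi-left continuous: a compensated Poisson process N_t - \lambda t
% jumps only at totally inaccessible times.
X_t := N_t - \lambda t, \qquad
\Delta X_T = 0 \ \text{a.s. for every finite-valued predictable time } T;
% Not quasi-left continuous: a single jump at the deterministic
% (hence predictable) time 1, with E[\xi] = 0.
Y_t := \xi\,\mathbf{1}_{\{t \ge 1\}}, \qquad
\Delta Y_1 = \xi \neq 0 \ \text{in general.}
```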
For q ≥ 1, we denote by H^q(F) the space of F-uniformly integrable martingales Z such that E[sup_{t≥0} |Z_t|^q] < +∞, and by H^q_loc(F) its localized version. We observe that H^1_loc(F) coincides with the space of all F-local martingales (see [11, Lemma 2.38]). We denote by H^q_0(F) (resp. H^q_loc,0(F)) the subspace of martingales Z ∈ H^q(F) (resp. local martingales Z ∈ H^q_loc(F)) such that Z_0 = 0. Two local martingales X and Y are called orthogonal if XY ∈ H^1_loc,0(F) holds. For X, Y ∈ H^2_loc(F), we denote by ⟨X, Y⟩ the predictable covariation of X and Y. We recall that XY − ⟨X, Y⟩ ∈ H^1_loc(F). Hence, if X_0 Y_0 = 0, then X and Y are orthogonal if and only if ⟨X, Y⟩ = 0.
An R-valued F-adapted process X such that X_0 = 0 is called increasing if X is càdlàg and the paths t ↦ X_t(ω) are non-decreasing, ω ∈ Ω. We denote by A^+ = A^+(F) the space of integrable increasing F-adapted processes; that is, A^+ is the space of increasing processes X such that E[X_∞] < +∞ (see [12, I.3.6]). We denote by A^+_loc = A^+_loc(F) the localized version of A^+. For X ∈ A^+_loc, we denote by X^p ∈ A^+_loc the F-dual predictable projection of X (see [12, Theorem I.3.17]). Let (X, F) be an increasing process, and let K ≥ 0 be an F-optional process. We denote by K · X = (K · X_t)_{t≥0} the process defined by the (Lebesgue–Stieltjes) integral of K with respect to X, that is, K · X_t(ω) := ∫_0^t K_s(ω) dX_s(ω), whenever this is finite-valued, for every ω ∈ Ω and t ≥ 0. Notice that (K · X, F) is an increasing process.
Random Measures Let μ be a nonnegative random measure on R + × E in the sense of [12,Definition II.1.3], where E coincides with R d or with a Borel subset of R d . We stress that we assume μ(ω, {0} × E) = 0 identically.
We denote by B(E) the Borel σ-algebra on E and set Ω̃ := Ω × R_+ × E. We then introduce the σ-algebras Õ(F) := O(F) ⊗ B(E) and P̃(F) := P(F) ⊗ B(E). Let W be an Õ(F)-measurable (resp. P̃(F)-measurable) mapping from Ω̃ into R. We say that W is an F-optional (resp. F-predictable) function. Let W be an F-optional function. As in [12, II.1.5], we define W * μ_t(ω) := ∫_{[0,t]×E} W(ω, s, x) μ(ω; ds, dx) if ∫_{[0,t]×E} |W(ω, s, x)| μ(ω; ds, dx) < +∞, and W * μ_t(ω) := +∞ otherwise. We say that μ is an F-optional (resp. F-predictable) random measure if W * μ is an F-optional (resp. an F-predictable) process, for every F-optional (resp. F-predictable) function W.
Semimartingales Let X be an R^d-valued F-semimartingale. We denote by μ^X the jump measure of X, that is, μ^X(ω; dt, dx) := Σ_{s>0} 1_{{ΔX_s(ω) ≠ 0}} δ_{(s, ΔX_s(ω))}(dt, dx), where δ_a denotes the Dirac measure at the point a ∈ R^d. From [12, Theorem II.1.16], μ^X is an integer-valued random measure with respect to F (see [12, Definition II.1.13]).
By (B^X, C^X, ν^X), we denote the F-predictable characteristics of X with respect to the truncation function h(x) = 1_{{|x|≤1}} x (see [12, Definition II.2.3]). Recall that ν^X is a predictable random measure characterized by the following properties: For any F-predictable mapping W such that |W| * μ^X ∈ A^+_loc, we have |W| * ν^X ∈ A^+_loc and (W * μ^X − W * ν^X) ∈ H^1_loc,0 (see [12, Theorem II.1.8]). We are now going to introduce the stochastic integral with respect to (μ^X − ν^X) of an F-predictable mapping W.
Let W be an F-predictable mapping. We define the process W̃^X by W̃^X_t := W(t, ΔX_t) 1_{{ΔX_t ≠ 0}} − Ŵ^X_t, where, for t ≥ 0, Ŵ^X_t := ∫_E W(t, x) ν^X({t} × dx). Notice that, according to [12, Lemma II.1.25], Ŵ^X is predictable and a version of the predictable projection of the process (ω, t) ↦ W(ω, t, ΔX_t(ω)) 1_{{ΔX_t(ω) ≠ 0}}. In symbols, denoting this latter process by W(·, ·, ΔX) 1_{{ΔX ≠ 0}}, we have Ŵ^X = ^{p,F}(W(·, ·, ΔX) 1_{{ΔX ≠ 0}}). Since Ŵ^X and W̃^X depend on the filtration F as well, we shall also write, if necessary, Ŵ^{X,F} and W̃^{X,F} to stress the filtration. This notation will be especially used in Sect. 3. For q ≥ 1, we introduce (see [11, (3.62)])

G^q(μ^X) := { W : W is an F-predictable function and (Σ_{0≤s≤·} (W̃^X_s)^2)^{q/2} ∈ A^+ }.

The definition of G^q_loc(μ^X) is similar and makes use of A^+_loc instead. To specify the filtration, we sometimes write G^q(μ^X, F). Setting ‖W‖_{G^q(μ^X)} := E[(Σ_{s≥0} (W̃^X_s)^2)^{q/2}]^{1/q}, we get a seminorm on G^q(μ^X). Let now W ∈ G^1_loc(μ^X). The stochastic integral of W with respect to (μ^X − ν^X) is denoted by W * (μ^X − ν^X) and is defined as the unique purely discontinuous local martingale Z ∈ H^1_loc,0(F) such that ΔZ = W̃^X (up to an evanescent set). See [12, Definition II.1.27] and the subsequent comment. We recall that, according to [11, Proposition 3.66], for W ∈ G^q(μ^X) the stochastic integral W * (μ^X − ν^X) belongs to H^q_0(F). For two F-semimartingales X and Y, we denote by [X, Y] the quadratic variation of X and Y:

[X, Y] := ⟨X^c, Y^c⟩ + Σ_{0≤s≤·} ΔX_s ΔY_s,

where X^c and Y^c denote the continuous local martingale parts of X and Y, respectively.
The Stochastic Integral for Multidimensional Local Martingales Let us fix q ≥ 1 and consider an R^d-valued stochastic process X = (X^1, …, X^d)^tr such that X^i ∈ H^q_loc(F), i = 1, …, d. We denote by a and A the processes introduced in [11, Chapter 4, Section 4 § a] such that [X^i, X^j] = a^{i,j} · A, i, j = 1, …, d. We recall that A ∈ A^+_loc(F) and that a is an optional process taking values in the space of d-dimensional symmetric and nonnegative matrices. Notice that, if q = 2, then we can take C = A^p, where c is a predictable process taking values in the space of d-dimensional symmetric and nonnegative matrices such that c^{i,j} · C = (a^{i,j} · A)^p.
Let K be an R^d-valued F-predictable process. We say that K ∈ L^q_loc(X) if ((K^tr a K) · A)^{q/2} ∈ A^+_loc, and in this case we denote by K · X the stochastic integral of K with respect to X. We recall that K · X ∈ H^q_loc,0(F) and that it is always an R-valued process. Sometimes, to stress the underlying filtration, we write L^q(X, F) or L^q_loc(X, F). We observe that if X ∈ H^1_loc(F) (in particular, X is R-valued), then we can choose a = 1, A = [X, X], and we get the usual definition of the stochastic integral with respect to X (see [11, Definition 2.46]). If furthermore X is of finite variation and K ∈ L^1_loc(X), then the stochastic integral K · X coincides with the Lebesgue–Stieltjes integral, whenever the latter exists and is finite.
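For orientation, the integrability condition behind L^q(X) in this multidimensional setting can be sketched as follows, with a and A as introduced above (the standard norm of [11, Chapter 4]):

```latex
\|K\|_{L^q(X)} :=
\mathbb{E}\Big[\Big(\int_0^{\infty} K_s^{\mathrm{tr}}\, a_s K_s\,
\mathrm{d}A_s\Big)^{q/2}\Big]^{1/q},
\qquad
K \in L^q(X) \iff K \ \text{predictable and } \|K\|_{L^q(X)} < +\infty,
```

and L^q_loc(X) is obtained by localization. For d = 1, choosing a = 1 and A = [X, X] recovers the familiar condition E[(∫_0^∞ K_s^2 d[X, X]_s)^{q/2}] < +∞.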

Martingale Representation in the Independently Enlarged Filtration
We start this section by introducing the notion of the weak representation property (abbreviated by WRP).
Definition 3.1 Let F be a right-continuous filtration and (X, F) an R^d-valued semimartingale with continuous local martingale part X^c and predictable F-characteristics (B^X, C^X, ν^X). We say that X possesses the WRP with respect to F if every N ∈ H^1_loc(F) can be represented as

N = N_0 + K · X^c + W * (μ^X − ν^X), K ∈ L^1_loc(X^c, F), W ∈ G^1_loc(μ^X, F). (3.1)

For our aims, the following characterization of the WRP will be useful:

Proposition 3.2 The semimartingale X possesses the WRP with respect to F if and only if every N ∈ H^2(F) can be represented as

N = N_0 + K · X^c + W * (μ^X − ν^X), K ∈ L^2(X^c, F), W ∈ G^2(μ^X, F). (3.2)

Proof By localization, it is enough to show that (3.2) holds if and only if every N ∈ H^1(F) can be represented as in (3.1). But this is just [8, Proposition 3.2]. The proof is complete.

Propagation of the Weak Representation Property
Let F = (F_t)_{t≥0} and H = (H_t)_{t≥0} be right-continuous filtrations. In this section, we consider the filtration G := F ∨ H; that is, G is the smallest filtration containing both F and H. We then make the following assumption:

Assumption 3.3 The filtrations F and H are independent.
We recall that two filtrations F = (F_t)_{t≥0} and H = (H_t)_{t≥0} are called independent if the σ-algebras F_∞ and H_∞ are independent. As a first consequence of Assumption 3.3, we get that the filtration G is again right-continuous (see Lemma A.4 (i)). Furthermore, the following result holds:

Proposition 3.4 Let F and H satisfy Assumption 3.3. Let (X, F) be an R^d-valued semimartingale, and let (B^X, C^X, ν^X) denote the F-predictable characteristics of X. Then, (X, G) is a semimartingale and the G-predictable characteristics of X are again given by (B^X, C^X, ν^X).
Proof The result follows from Lemma A.4 (ii) and [12,Theorem II.2.21]. The proof is complete.
In the proof of Theorem 3.6, we need the following technical proposition:

Proposition 3.5 Let F and H satisfy Assumption 3.3, and let W be an F-predictable function. Then: (i) Ŵ^{X,G} = Ŵ^{X,F}; (ii) W̃^{X,G} = W̃^{X,F}; (iii) G^q(μ^X, F) ⊆ G^q(μ^X, G), q ≥ 1.

Proof First, we notice that, by Proposition 3.4, X is a G-semimartingale. Hence, (ii) is a direct consequence of (i) and of the definition of W̃^{X,G}. Furthermore, (iii) follows immediately from (ii). We now show (i). Because of Proposition 3.4, the G-predictable compensator of μ^X is again given by ν^X. Therefore, by the definition of Ŵ^{X,G}, we have the identity Ŵ^{X,G} = Ŵ^{X,F}. The proof is complete.
Let (E, E) be a measurable space endowed with the σ-algebra E. We denote by B(E) the space of R-valued, E-B(R)-measurable and bounded functions on E. We now come to the main result of the present paper.

Theorem 3.6 Let F and H satisfy Assumption 3.3. Let (X, F) be an R^d-valued semimartingale possessing the WRP with respect to F, and let (Y, H) be an R^l-valued semimartingale possessing the WRP with respect to H. Then, the R^d × R^l-valued G-semimartingale Z = (X, Y)^tr possesses the WRP with respect to G = F ∨ H.
Proof Let ξ belong to B(F_∞) and let η belong to B(H_∞). We consider càdlàg versions M and N of the martingales M_t := E[ξ | F_t] and N_t := E[η | H_t], t ≥ 0. Since X has the WRP with respect to F and Y has the WRP with respect to H, because of Proposition 3.2, we can represent M as

M = M_0 + K · X^c + W * (μ^X − ν^X), K ∈ L^2(X^c, F), W ∈ G^2(μ^X, F), (3.4)

and N as

N = N_0 + J · Y^c + V * (μ^Y − ν^Y), J ∈ L^2(Y^c, H), V ∈ G^2(μ^Y, H). (3.5)

We split the remaining part of the proof in several steps.

Step 1: Representation of the common jumps. We set S := [M, N]. By the independence of F and H, the product MN is a G-martingale, meaning that the G-local martingale S belongs to H^2(G) as well. By the definition of S and Proposition 3.5 (ii), we have

ΔS = ΔM ΔN = W̃^{X,G} Ṽ^{Y,G}.

By the definition of W̃^{X,G} and Ṽ^{Y,G} and Proposition 3.5 (i), introducing the G-predictable function

U(ω, t, x, y) := W(ω, t, x) 1_{{x≠0}} V(ω, t, y) 1_{{y≠0}} − W(ω, t, x) 1_{{x≠0}} V̂^{Y,G}_t(ω) − Ŵ^{X,G}_t(ω) V(ω, t, y) 1_{{y≠0}},

we have W̃^{X,G} Ṽ^{Y,G} = U(·, ·, ΔX, ΔY) 1_{{ΔZ≠0}} + Ŵ^{X,G} V̂^{Y,G}. We are now going to compute the G-predictable projection of U(·, ·, ΔX, ΔY) 1_{{ΔZ≠0}}. We recall that the processes Ŵ^{X,G} and V̂^{Y,G} are G-predictable, by [12, Lemma II.1.25]. Furthermore, since W ∈ G^2(μ^X, G) and V ∈ G^2(μ^Y, G) by Proposition 3.5 (iii), we also have that Ŵ^{X,G} and V̂^{Y,G} are finite-valued. By the G-martingale property of S, we have ^{p,G}(ΔS) = 0, and hence −Ŵ^{X,G} V̂^{Y,G} is a version of the G-predictable projection of the process U(·, ·, ΔX, ΔY) 1_{{ΔZ≠0}}. From this and [12, Lemma II.1.25], we obtain Û^{Z,G} = −Ŵ^{X,G} V̂^{Y,G} and therefore

Ũ^{Z,G}_t = U(·, t, ΔX_t, ΔY_t) 1_{{ΔZ_t≠0}} − Û^{Z,G}_t = W̃^{X,G}_t Ṽ^{Y,G}_t = ΔS_t, for every t ∈ R_+, a.s.

We therefore have Σ_{0≤s≤·} (Ũ^{Z,G}_s)^2 = Σ_{0≤s≤·} (ΔS_s)^2 ≤ [S, S]_∞, where the right-hand side is integrable because S ∈ H^2(G) (see [12, Proposition I.4.50 c)]). Therefore, the inclusion U ∈ G^2(μ^Z, G) holds. Hence, we can introduce the purely discontinuous square-integrable G-martingale U * (μ^Z − ν^Z) and we get Δ(U * (μ^Z − ν^Z)) = Ũ^{Z,G} = ΔS, up to an evanescent set. Hence, the purely discontinuous martingales U * (μ^Z − ν^Z) and S = [M, N] have the same jumps, up to an evanescent set. By [12, Corollary I.4.19], we conclude that S and U * (μ^Z − ν^Z) are indistinguishable. Summarizing, we have shown that U ∈ G^2(μ^Z, G) and S = U * (μ^Z − ν^Z). The proof of Step 1 is complete.
Step 3: Representation of the continuous part. We stress that for this step we need the properties of the stochastic integral of multidimensional predictable processes with respect to multidimensional local martingales, recalled in Sect. 2. We find an F-predictable process a taking values in the set of nonnegative symmetric d × d-matrices and an increasing process A such that ⟨(X^i)^c, (X^j)^c⟩ = a^{i,j} · A, i, j = 1, …, d, and an H-predictable process b taking values in the set of nonnegative symmetric l × l-matrices and an increasing process B such that ⟨(Y^i)^c, (Y^j)^c⟩ = b^{i,j} · B, i, j = 1, …, l. We set C := A + B. So, A and B are absolutely continuous with respect to C. Additionally, because of Lemma A.4 (ii) and the uniqueness of the point brackets, ⟨(X^i)^c, (Y^j)^c⟩ = 0, i = 1, …, d, j = 1, …, l. According to [11, Section 4.2 § a and § b], there exists a G-predictable process c taking values in the set of nonnegative symmetric (d + l) × (d + l)-matrices such that ⟨(Z^i)^c, (Z^j)^c⟩ = c^{i,j} · C, i, j = 1, …, d + l. We are going to show that the G-predictable process H := (K, J)^tr introduced above belongs to L^2(Z^c, G). For this, according to [12, III.4.3], we have to verify that the increasing process (H^tr c H) · C is integrable. Because of the structure of c and the linearity of the integral with respect to C, we see that the identity (H^tr c H) · C = (K^tr a K) · A + (J^tr b J) · B holds. Hence, the independence of F and H yields E[((H^tr c H) · C)_∞] = E[((K^tr a K) · A)_∞] + E[((J^tr b J) · B)_∞] < +∞, where, in the last estimate, we used that M and N are square-integrable martingales, that K ∈ L^2(X^c, F) and that J ∈ L^2(Y^c, H). This shows the inclusion H ∈ L^2(Z^c, G).
We are now going to verify that the identity H · Z^c = K · X^c + J · Y^c holds. Let R ∈ H^2_loc(G) be continuous. Then, there exist G-predictable processes a^{R,i} and b^{R,j} such that ⟨R, (X^i)^c⟩ = a^{R,i} · A and ⟨R, (Y^j)^c⟩ = b^{R,j} · B, i = 1, …, d, j = 1, …, l (see [12, Eq. (III.4.4) and the explanation before]). On the other side, since A and B are absolutely continuous with respect to C, we find two G-predictable processes α and β such that A = α · C and B = β · C. In conclusion, we get G-predictable processes c^{R,i} such that ⟨R, (Z^i)^c⟩ = c^{R,i} · C, i = 1, …, d + l. By the linearity of the predictable quadratic covariation and [12, Theorem III.4.5 b)] applied to the two stochastic integrals K · X^c and J · Y^c, we compute ⟨R, H · Z^c⟩ = ⟨R, K · X^c⟩ + ⟨R, J · Y^c⟩ = ⟨R, K · X^c + J · Y^c⟩, where in the last identity we again applied [12, Theorem III.4.5 b)]. This latter computation together with [12, Theorem III.4.5 b)] yields that H · Z^c and K · X^c + J · Y^c are indistinguishable, because R was chosen arbitrarily. The proof of Step 3 is complete.
Step 4: Representation of the product ξη. Step 1 and Step 2 yield the inclusion G ∈ G^2(μ^Z, G) and, by the linearity of the stochastic integral with respect to μ^Z − ν^Z, the identity (3.15). Let us now consider the R^d × R^l-valued continuous local martingale Z^c = (X^c, Y^c)^tr. By Step 3, we have the identity H · Z^c = K · X^c + J · Y^c, where H has been defined in Step 3. Therefore, from (3.15) and (3.5), we obtain the representation of the G-martingale MN as the sum of M_0 N_0, a stochastic integral with respect to Z^c, and a stochastic integral with respect to μ^Z − ν^Z. Now, using that each term on the right- and on the left-hand side in this latter expression belongs to H^2(G), taking the limit t → +∞, by the martingale convergence theorem (see [12, Theorem I.1.42 a)]), we get the analogous representation of ξη = M_∞ N_∞, where we used the independence to write M_0 N_0 = E[ξη | G_0]. The proof of Step 4 is complete.
Step 5: Representation of bounded G_∞-measurable random variables. As an application of the monotone class theorem, we now show that every ξ ∈ B(G_∞) can be represented as

ξ = E[ξ | G_0] + (K · Z^c)_∞ + (W * (μ^Z − ν^Z))_∞, (3.16)

where K ∈ L^2(Z^c, G) and W ∈ G^2(μ^Z, G). To this aim, we denote by K the linear space of random variables ξ ∈ B(G_∞) that can be represented as in (3.16).
We denote by C := {ξη : ξ ∈ B(F_∞), η ∈ B(H_∞)} the family of products of bounded F_∞- and H_∞-measurable random variables. Then, C is clearly stable under multiplication and σ(C) = G_∞. Furthermore, by Step 4, we have C ⊆ K. The linear space K is a monotone class of B(G_∞). Indeed, let (ξ_n)_n ⊆ K be a uniformly bounded sequence such that ξ_n ≥ 0 and ξ_n ↑ ξ pointwise. Then, ξ is bounded and, by dominated convergence, we get ξ_n → ξ in L^2(Ω, G_∞, P) as n → +∞. From Lemma A.2, we immediately get that ξ ∈ K. Since the inclusion 1 ∈ K obviously holds, we see that K is a monotone class of B(G_∞). The monotone class theorem for functions (see [10, Theorem 1.4]) now yields the inclusion B(G_∞) ⊆ K. The proof of Step 5 is complete.
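For the reader's convenience, the functional form of the monotone class theorem used here (as in [10, Theorem 1.4]) can be stated as follows:

```latex
\textbf{Monotone class theorem (functional form).}
Let $\mathcal{C}$ be a family of bounded functions stable under
multiplication, and let $\mathcal{K}$ be a linear space of bounded
functions such that: (i) $\mathcal{C} \subseteq \mathcal{K}$ and
$1 \in \mathcal{K}$; (ii) whenever $(\xi_n)_n \subseteq \mathcal{K}$ is
uniformly bounded, $\xi_n \ge 0$ and $\xi_n \uparrow \xi$ pointwise,
then $\xi \in \mathcal{K}$. Then $\mathcal{K}$ contains every bounded
$\sigma(\mathcal{C})$-measurable function.
```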
Step 6: Approximation and conclusion. Let now ξ ∈ L^2(Ω, G_∞, P) be a nonnegative random variable. Then, the random variable ξ_n := ξ ∧ n, n ≥ 1, is bounded and furthermore ξ_n → ξ in L^2(Ω, G_∞, P) as n → +∞. By Step 5, ξ_n can be represented as in (3.16), for every n ≥ 1. By Lemma A.2, we get that the same holds for ξ. If now ξ ∈ L^2(Ω, G_∞, P) is an arbitrary random variable, we write ξ as the difference of its positive and negative parts: ξ = ξ^+ − ξ^−. Clearly, we have ξ^± ∈ L^2(Ω, G_∞, P), and ξ^± ≥ 0 can be represented as in (3.16). Using now the linearity of the stochastic integrals, we obtain that ξ can be represented as in (3.16) as well. To conclude the proof, we use that the Hilbert spaces (H^2(G), ‖·‖_{H^2}) and (L^2(Ω, G_∞, P), ‖·‖_2) are isomorphic. Therefore, every S ∈ H^2(G) can be represented as S = S_0 + K · Z^c + W * (μ^Z − ν^Z), where K ∈ L^2(Z^c, G) and W ∈ G^2(μ^Z, G). Because of Proposition 3.2, this means that the R^d × R^l-valued semimartingale Z = (X, Y)^tr has the WRP with respect to G.
The proof of the theorem is now complete.

Example 3.7
We now discuss some cases in which the propagation of the WRP to the independently enlarged filtration G follows immediately from Theorem 3.6. Notice that in none of the following examples do we exclude that the semimartingales X and Y have common jumps; furthermore, their jumps may charge the same predictable times.
(1) Assume that the semimartingales (X, F^X) and (Y, F^Y) are independent step processes (see [10, Definition 11.55]) and set F := F^X ∨ R^X and H := F^Y ∨ R^Y, where R^X and R^Y are some initial σ-fields such that F and H are independent (i.e., such that {F^X_∞, R^X} and {F^Y_∞, R^Y} are independent). Then, [10, Theorem 13.19] yields that X has the WRP with respect to F and Y has the WRP with respect to H. Because of Theorem 3.6, we deduce that the R^2-valued G-semimartingale Z = (X, Y)^tr has the WRP with respect to G = F ∨ H.
(2) We can generalize the case in (1) as follows: Let X and Y be as in (1). Let B and W be two independent Brownian motions such that B is independent of X and W is independent of Y . We define R := B + X and S := W + Y . Assume furthermore for simplicity that R X and R Y in (1) are both trivial. By [13,Corollary 2], R has the WRP with respect to F R and S has the WRP with respect to F S . Assuming that F R and F S are independent, Theorem 3.6 implies that the R 2 -valued G-semimartingale Z = (R, S) tr has the WRP with respect to G = F R ∨ F S .
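The construction in (2) can be illustrated numerically. The following sketch (ours, not from the paper; the grid size, the common jump time t0 = 1 and the jump distributions are arbitrary illustrative choices) simulates R = B + X and S = W + Y, where the one-jump step processes X and Y jump at the same deterministic, hence predictable, time, and checks that the realized quadratic covariation of R and S is dominated by the product of the common jumps:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 2.0, 4000                       # time horizon and number of grid steps
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Two independent Brownian motions, simulated by their Gaussian increments.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
Wm = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Two independent one-jump step processes that both jump at the *same*
# deterministic (hence predictable) time t0 = 1.
t0_idx = int(np.searchsorted(t, 1.0))
X = np.where(np.arange(n + 1) >= t0_idx, rng.normal(), 0.0)
Y = np.where(np.arange(n + 1) >= t0_idx, rng.normal(), 0.0)

R = B + X    # semimartingale R = B + X
S = Wm + Y   # semimartingale S = W + Y

# Realized quadratic covariation [R, S]_T on the grid. Since the Brownian
# parts are independent, its only non-negligible contribution comes from
# the common jump: [R, S]_T is close to Delta R_{t0} * Delta S_{t0}.
qcov = float(np.sum(np.diff(R) * np.diff(S)))
jump_product = (X[t0_idx] - X[t0_idx - 1]) * (Y[t0_idx] - Y[t0_idx - 1])
print(qcov, jump_product)
```

On a refining grid, the realized covariation converges to [R, S]_T = ΔR_1 ΔS_1, in line with the formula [R, S] = ⟨R^c, S^c⟩ + Σ ΔR ΔS and the independence of B and W.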
(3) We take Y and H as in (1) but (X , F X ) is assumed to be an R d -valued semimartingale with conditionally independent increments with respect to F = F X ∨ R X . Again we assume that F and H are independent. From [12,Theorem III.4.34] (i), X has the WRP with respect to F. Hence, Theorem 3.6 yields that the R d × R-valued G-semimartingale Z = (X , Y ) tr has the WRP with respect to G = F ∨ H.
(4) Let (X, F^X) be an R^d-valued semimartingale, and let (Y, F^Y) be an R^l-valued semimartingale. Let R^X and R^Y denote two initial σ-fields. We assume that X has conditionally independent increments with respect to F := F^X ∨ R^X and Y has conditionally independent increments with respect to H := F^Y ∨ R^Y. As an immediate consequence of [12, Theorem III.4.34] and of Theorem 3.6, if F and H are independent, we get that Z = (X, Y)^tr possesses the WRP with respect to G = F ∨ H.
(5) Combining Theorem 3.6 and [13, Theorem 2], we can construct several new semimartingales possessing the WRP. Indeed, let X^1 and X^2 be real-valued semimartingales possessing the WRP with respect to F^1 and F^2, respectively. Assume that F^1 and F^2 are independent and set F := F^1 ∨ F^2. Assume furthermore that the sets of the F-predictable jump-times of X^1 and X^2 are disjoint and that the second and the third F-semimartingale characteristics of X^1 and X^2 are mutually singular on P(F) ⊗ B(R) (see [13, Theorem 2]). Then, by [13, Theorem 2], the semimartingale X := X^1 + X^2 possesses the WRP with respect to F (see also [13, Corollary 1 and Corollary 2]). We then consider semimartingales Y^1 and Y^2 with respect to H^1 and H^2, set H := H^1 ∨ H^2, and make on Y^1 and Y^2, with respect to H^1 and H^2, assumptions similar to those on X^1 and X^2. Then, the semimartingale Y := Y^1 + Y^2 has the WRP with respect to H. If we now assume that F and H are independent, we get by Theorem 3.6 that the R^2-valued semimartingale Z = (X, Y)^tr has the WRP with respect to G := F ∨ H.
(6) The counterexample constructed in [13] after Corollary 2 therein can also be handled with the help of Theorem 3.6. We use the notation of [13]: Let X^1 and X^2 denote the processes introduced in [13] after Corollary 2. There it is shown that the semimartingale X^1 + X^2 does not possess the WRP with respect to the filtration F := F^1 ∨ F^2. However, by Theorem 3.6, we see that the R^2-valued semimartingale Z = (X^1, X^2)^tr possesses the WRP with respect to F.

Propagation of the Predictable Representation Property
In this subsection, we investigate the propagation of the predictable representation property to the independently enlarged filtration. To begin with, we state the following definition of the predictable representation property. Definition 3.8 Let X = (X 1 , . . . , X d ) tr be such that X i ∈ H 1 loc (F), i = 1, . . . , d. We say that the multidimensional local martingale X has the predictable representation property (from now on PRP) with respect to F if, for every Y ∈ H 1 loc (F), there exists K ∈ L 1 loc (X , F), such that Y = Y 0 + K · X holds.
In the next proposition, we state the relation between the PRP and the WRP for multidimensional local martingales. We stress that at this point we cannot directly use [10, Theorem 13.14], because that result is only formulated for R-valued (and not for multidimensional) local martingales. This is a deep difference (although the formulation can be given in a notationally similar way) because, in the proof of Proposition 3.9, the stochastic integral for multidimensional local martingales is needed instead of the usual stochastic integral.

Proposition 3.9 Let X = (X^1, …, X^d)^tr be such that X^i ∈ H^1_loc(F), i = 1, …, d. Assume that X possesses the PRP with respect to F. Then, X possesses the WRP with respect to F.
We postpone the proof of Proposition 3.9 to the Appendix. We stress that its converse is, in general, not true: If, for example, X is an R-valued homogeneous Lévy process and a martingale, then X possesses the WRP with respect to F^X, but it possesses the PRP with respect to F^X if and only if it is a Brownian motion or a compensated Poisson process (see, e.g., [10, Corollary 13.54]).

Corollary 3.10 Let X be an R^d-valued local martingale possessing the PRP with respect to F, and let Y be an R^l-valued local martingale possessing the PRP with respect to H. If F and H are independent, then the G-local martingale Z = (X, Y)^tr possesses the WRP with respect to G = F ∨ H.

Proof The statement is an immediate consequence of Proposition 3.9 and of Theorem 3.6.

Remark 3.11
We recall that, if X has the PRP with respect to F = F^X, Y has the PRP with respect to H = F^Y, and F^X and F^Y are independent (at least under an equivalent martingale measure), Calzolari and Torti showed in [6] (under some additional conditions, in particular the triviality of F^X_0 and F^Y_0 and the local square integrability of X and Y) that every S ∈ H^2(G) can also be represented as a sum of stochastic integrals with respect to X, Y and the G-martingale [X, Y].

Applications
In this part, we discuss two consequences of Theorem 3.6. First, in Sect. 4.1, we show the propagation of the WRP to an iteratively independently enlarged filtration. In Sect. 4.2, we show the propagation of the WRP to the progressive enlargement by a random time τ satisfying Jacod's equivalence hypothesis.

The Iterated Enlargement
We recall that the filtrations F^1, . . . , F^n are called jointly independent if {F^1_∞, . . . , F^n_∞} is an independent family of σ-algebras.
Theorem 4.1 Let (X^i, F^i) be an R^{d_i}-valued semimartingale with F^i-predictable characteristics (B^i, C^i, ν^i), i = 1, . . . , n. We denote G := F^1 ∨ · · · ∨ F^n, and we assume that F^1, . . . , F^n are jointly independent. We then have: (i) The filtration G is right continuous. (ii) X^i is a G-semimartingale and its G-predictable characteristics are again (B^i, C^i, ν^i), i = 1, . . . , n.
(iii) If X^i possesses the WRP with respect to F^i, i = 1, . . . , n, then the R^{d_1} × · · · × R^{d_n}-valued semimartingale Z = (X^1, . . . , X^n)^tr possesses the WRP with respect to G.
Proof We show the result by induction, as a direct consequence of Theorem 3.6. To this aim, we observe that the joint independence of F^1, . . . , F^n is equivalent to the joint independence of F^1, . . . , F^{n−1} together with the independence of the family {F^1, . . . , F^{n−1}} from F^n. We now start the inductive argument. If n = 1, there is nothing to show. We assume that (i), (ii) and (iii) hold for n = m − 1 and verify them for n = m. By the induction hypothesis and Lemma A.3, we immediately obtain (i). Analogously, from Proposition 3.4, we deduce (ii). Let us now define X := (X^1, . . . , X^{m−1})^tr and Y := X^m. Since G = F^1 ∨ · · · ∨ F^{m−1} ∨ F^m, by the induction hypothesis and Theorem 3.6, we obtain that Z has the WRP with respect to G, which is (iii). The proof of the theorem is complete.

Corollary 4.2 Let X^i be an R^{d_i}-valued local martingale with components in H^1_loc(F^i) (i = 1, . . . , n) such that X^i possesses the PRP with respect to F^i, i = 1, . . . , n. We denote G := F^1 ∨ · · · ∨ F^n, and we assume that F^1, . . . , F^n are jointly independent. Then, (X^i, G) is a local martingale, i = 1, . . . , n, and the R^{d_1} × · · · × R^{d_n}-valued G-local martingale Z = (X^1, . . . , X^n)^tr possesses the WRP with respect to G.
Proof Because of the independence, we obtain by induction and Lemma A.4 that (X^i, G) is a local martingale, i = 1, . . . , n. From Proposition 3.9 and Theorem 4.1 (iii), we immediately get the WRP of Z with respect to G. The proof of the corollary is complete.

Jacod's Equivalence Hypothesis
Let (X, F) be an R^d-valued semimartingale, and let (B^X, C^X, ν^X) be the F-predictable characteristics of X. We assume that F satisfies the usual conditions. Let τ : Ω −→ [0, +∞] be a random time. We stress that τ is a random variable, but it is not necessarily an F-stopping time. We denote by H = 1_{[τ,+∞)} the default process associated with τ and by H the smallest filtration satisfying the usual conditions such that H is H-adapted. We stress that, being a point process, H possesses the WRP with respect to H. In this part, we do not assume that τ and F (i.e., that H and F) are independent. We rather work under the following assumption (see, e.g., [1, Definition 4.13]):

Assumption 4.3 (Jacod's equivalence hypothesis)
Let F_τ denote the law of τ. The regular conditional distribution of τ given F_t is equivalent to the law of τ; that is, if P_t(·, A) denotes a version of P[τ ∈ A | F_t], A ∈ B([0, +∞]), we have P_t(ω, ·) ∼ F_τ, for every t ≥ 0, for P-a.e. ω ∈ Ω, where the symbol ∼ denotes the equivalence of the two measures.
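Under Assumption 4.3, a standard consequence (see, e.g., [1]) is the existence of a density process for the conditional laws of τ. We sketch it here, writing p_t(u) for the density; this symbol is introduced only for this sketch and need not coincide with the paper's own notation:

```latex
P_t(\omega,\mathrm{d}u) = p_t(\omega,u)\, F_\tau(\mathrm{d}u),
\qquad p_t(\omega,u) > 0,
```

where, for F_τ-a.e. u, the process (p_t(u))_{t≥0} can be chosen to be an (F, P)-martingale. The process L appearing below is built from this density; see [1, Theorem 4.37].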
We denote by G the smallest filtration containing F and H; that is, G = F ∨ H is the progressive enlargement of F by τ . We notice that G obviously coincides with the smallest filtration containing F and such that τ is a G-stopping time.
The propagation of the PRP for an R-valued local martingale (X , F) to the progressive enlargement G of F by a random time τ satisfying Jacod's equivalence hypothesis has been investigated in [4] under the additional condition that F τ is continuous. As a consequence of Theorem 3.6, we are now going to show that under Assumption 4.3 the WRP propagates to G, also without the further continuity assumption on F τ .
Using the process L defined above as a density, according to [1, Theorem 4.37], for every arbitrary but fixed deterministic time T > 0, we can define the probability measure Q on G^τ_T by setting dQ|G^τ_T := L_T dP|G^τ_T, which has the following properties: (P1) Q is equivalent to P on G^τ_T. (P2) Q coincides with P on F_T and on σ(τ). (P3) F_T and σ(τ) are conditionally independent given F_0 under Q.
We stress that (P1) only holds on G τ T and not on G τ ∞ because, in general, L is not a uniformly integrable martingale (see [1,Remark 4.38]).
Since the inclusions F_t ⊆ G_t ⊆ G^τ_t, t ∈ [0, T], hold and F_T contains all the P-null sets of F, we also get Q|G_T ∼ P|G_T and Q|H_T = P|H_T. Furthermore, since H_T ⊆ σ(τ), the σ-fields F_T and H_T are conditionally independent given F_0 under Q. Notice that, from [4, Lemma 2.10] and the comment after the proof therein, we obtain dQ|G_T = ℓ_T dP|G_T for some G-martingale ℓ.
For the remaining part of this section, Q will denote the equivalent probability measure described above.
Let T > 0 be an arbitrary but fixed deterministic time. We denote by F^T, H^T and G^T the filtrations stopped at T, that is, F^T := (F_{t∧T})_{t≥0}, H^T := (H_{t∧T})_{t≥0} and G^T := F^T ∨ H^T.

Lemma 4.4 Assume that F_0 is trivial. Then, the filtration G^T is right continuous.

Proof Because of (P3) and the triviality of F_0, we have that F_T and H_T are independent under Q. Hence, we get the right continuity of G^T by Lemma A.4. The proof of the lemma is complete.
Theorem 4.5 Let τ satisfy Assumption 4.3, and let (X, F) be an R^d-valued semimartingale. Let us furthermore assume that F_0 is trivial and that X possesses the WRP with respect to F. Then, the R^d × R_+-valued G-semimartingale Z = (X, H)^tr possesses the WRP with respect to G^T.

Proof Assumption 4.3 implies that X is a G-semimartingale with respect to the probability measure P. Furthermore, since Q coincides with P on F_T, we deduce that, under Q, X possesses the WRP with respect to F^T. Moreover, under Q, H possesses the WRP with respect to H^T (see [12, Theorem III.4.37]). Because of (P3) and the triviality of F_0, the filtrations F^T and H^T are independent under Q. We can therefore apply Theorem 3.6 under Q to obtain that the G^T-semimartingale Z possesses the WRP with respect to G^T under the measure Q. So, by the equivalence of Q and P on G_T, applying [12, Theorem III.5.24], we obtain that Z has the WRP with respect to G^T also under the original measure P. The proof of the theorem is complete.
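Spelled out, the conclusion of Theorem 4.5 reads as follows (a sketch in the notation recalled above; note that Z^c = (X^c, 0)^tr, since H has finite variation): every (G^T, P)-local martingale M can be written as

```latex
M = M_0 + K \cdot Z^{c} + W \ast (\mu^Z - \nu^Z),
\qquad K \in L^1_{\mathrm{loc}}(Z^c,\mathbb{G}^T),\quad W \in G^1_{\mathrm{loc}}(\mu^Z,\mathbb{G}^T),
```

where μ^Z and ν^Z denote the jump measure of Z = (X, H)^tr and its G^T-compensator.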
Funding Open Access funding enabled and organized by Projekt DEAL.
Data Availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

A Technical Results and Proofs
Lemma A.1 Let q ≥ 1. Let X = (X^1, . . . , X^d)^tr be such that X^i ∈ H^q_loc(F), i = 1, . . . , d, and let K ∈ L^1_loc(X). Then, K · X ∈ H^q(F) if and only if K ∈ L^q(X).
Proof Let K ∈ L^1_loc(X). Then, it is known (see [11, Eq. (4.57) and Remark 4.61 2)]) that the identity [K · X, K · X] = (K^tr aK) · A holds. The claim is therefore an immediate consequence of the Burkholder–Davis–Gundy inequality. The proof of the lemma is complete.
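The Burkholder–Davis–Gundy step can be made explicit as follows (a sketch; c_q and C_q denote the universal BDG constants):

```latex
c_q\,\mathbb{E}\Big[[K\cdot X,K\cdot X]_\infty^{q/2}\Big]
\;\le\; \mathbb{E}\Big[\sup_{t\ge 0}\big|(K\cdot X)_t\big|^{q}\Big]
\;\le\; C_q\,\mathbb{E}\Big[[K\cdot X,K\cdot X]_\infty^{q/2}\Big].
```

Since [K · X, K · X] = (K^tr aK) · A, both sides are finite exactly when E[((K^tr aK) · A)^{q/2}_∞] < +∞, which (with the conventions of [11]) is the defining property of K ∈ L^q(X).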
Finally, from the PRP of X with respect to F and Lemma A.1, we get the representation Y = Y_0 + H · X^c + W ∗ (μ^X − ν^X), where H ∈ L^1(X^c, F) and W ∈ G^1(μ^X, F). This means that X has the WRP with respect to F. The proof of the proposition is complete.
So, by the isometry, (K^n)_{n≥1} is a Cauchy sequence in (L^2(X^c), ‖·‖_{L^2(X^c)}) and (W^n)_{n≥1} is a Cauchy sequence in (G^2(μ^X), ‖·‖_{G^2(μ^X)}). Therefore, we find K ∈ L^2(X^c) and W ∈ G^2(μ^X) such that K^n −→ K in L^2(X^c) and W^n −→ W in G^2(μ^X) as n → +∞, respectively. Considering now the stochastic integrals K · X^c and W ∗ (μ^X − ν^X), we see that ξ = E[ξ | F_0] + K · X^c_∞ + W ∗ (μ^X − ν^X)_∞ holds. The proof of the lemma is complete.
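The isometry invoked above rests on the orthogonality of continuous and purely discontinuous square integrable martingales; in L^2 it reads (a sketch):

```latex
\mathbb{E}\Big[\big(K\cdot X^{c}_\infty + W\ast(\mu^X-\nu^X)_\infty\big)^{2}\Big]
= \|K\|^2_{L^2(X^c)} + \|W\|^2_{G^2(\mu^X)},
```

so convergence of the sums of terminal values in L^2(Ω) forces (K^n)_{n≥1} and (W^n)_{n≥1} to be Cauchy in their respective spaces.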
Proof We only show (i) and (ii) for W ∈ G^1_loc(μ^X, D), the proof for V ∈ G^1_loc(μ^Y, D) being completely analogous. First, we verify (i). Since W ∈ G^1_loc(μ^X, D), the predictable process Ŵ^X is finite-valued and a version of the D-predictable projection of the process W(·, ·, ΔX)1_{ΔX≠0}. Furthermore, we observe that the process Ŵ^{g_1,Z} is always well defined and D-predictable. Because of {ΔX ≠ 0} ⊆ {ΔZ ≠ 0}, we obviously have 1_{ΔX≠0}1_{ΔZ≠0} = 1_{ΔX≠0}. Hence, for every finite-valued D-predictable time T, the D-predictable projections of W^{g_1}(·, ·, ΔZ)1_{ΔZ≠0} and of W(·, ·, ΔX)1_{ΔX≠0} coincide at T, and we get

Ŵ^{g_1,Z}(ω, t) = Ŵ^X(ω, t), for every t ≥ 0 a.s. (A.1)

From (A.1), we deduce that W^{g_1} ∈ G^1_loc(μ^Z, D) holds. The proof of (i) is complete. We now verify (ii). Because of (A.1), we have Δ(W ∗ (μ^X − ν^X)) = W(·, ·, ΔX)1_{ΔX≠0} − Ŵ^X = W^{g_1}(·, ·, ΔZ)1_{ΔZ≠0} − Ŵ^{g_1,Z} = Δ(W^{g_1} ∗ (μ^Z − ν^Z)), where the equalities have to be understood in the sense of indistinguishability. Hence, W ∗ (μ^X − ν^X) and W^{g_1} ∗ (μ^Z − ν^Z) are purely discontinuous local martingales with indistinguishable jumps. By [12, Corollary I.4.19], this shows (ii). The proof of the lemma is complete.

Proof For (i), we refer to [13, Theorem 1]. To see (ii), we first verify the statement for uniformly integrable martingales. Let therefore M be an F-martingale. (The proof for H is the same.) By the independence assumption, we have that σ(M_t) ∨ F_s is independent of H_s, for every 0 ≤ s ≤ t. Hence, we can compute E[M_t | G_s] = E[M_t | F_s ∨ H_s] = E[M_t | F_s] = M_s. If now M is an F-local martingale and (T_n)_{n≥1} is a sequence of F-stopping times localizing M to an F-martingale, by the previous step, we see that the stopped process M^{T_n} is a G-martingale, for every n. Hence, (T_n)_{n≥1} being also a sequence of G-stopping times, we deduce that M is a G-local martingale. The proof of (ii) is complete. To see (iii), we again verify the statement first for martingales. Let M be an F-martingale, and let N be an H-martingale.
Since, by the independence assumption, we have that E[|M_t N_t|] < +∞ and that σ(M_t) ∨ F_s is independent of H_t, for every 0 ≤ s ≤ t, we can compute E[M_t N_t | G_s] = M_s N_s, where, besides the independence just mentioned, we use that σ(N_t) ∨ H_s is independent of F_s. Let us now assume that M is an F-local martingale and that N is an H-local martingale. Let (T_n)_{n≥1} be a sequence of F-stopping times localizing M to H^1(F), and let (S_n)_{n≥1} be a sequence of H-stopping times localizing N to H^1(H). (We observe that we can always find such sequences of stopping times because of [11, Lemma 2.38].) By the previous step, we know that M^{T_n} N^{S_n} is a G-martingale. Furthermore, we have M^{T_n} N^{S_n} ∈ H^1(G), since E[sup_{t≥0} |M^{T_n}_t N^{S_n}_t|] ≤ E[sup_{t≥0} |M^{T_n}_t|] E[sup_{t≥0} |N^{S_n}_t|] < +∞ holds, by the independence assumption. We now define R_n := T_n ∧ S_n, n ≥ 1. Then, R_n ↑ +∞, as n → +∞. Using Doob's stopping theorem, we deduce that (MN)^{R_n} = (M^{T_n} N^{S_n})^{T_n ∧ S_n} belongs to H^1(G), for every n ≥ 1. Hence, MN is a G-local martingale. The proof is complete.