1 Introduction

Let X be a d-dimensional semimartingale with respect to a right-continuous filtration \({\mathbb {F}}\), and let \(X^ c\) denote the continuous local martingale part of X. We say that X possesses the weak representation property (from now on WRP) with respect to \({\mathbb {F}}\) if every \({\mathbb {F}}\)-local martingale can be represented as the sum of a stochastic integral with respect to \(X^ c\) and a stochastic integral with respect to the compensated jump measure of X (for details, see Definition 3.1).

The WRP of X is a property depending on the filtration \({\mathbb {F}}\): For example, if X is a Lévy process and \({\mathbb {F}}={\mathbb {F}}^ X\) is the smallest right-continuous filtration with respect to which X is adapted, then X possesses the WRP with respect to \({\mathbb {F}}^ X\). However, this need not be true if X is considered with respect to a larger filtration. Therefore, it is natural to investigate under which conditions and in which form the WRP of a semimartingale X with respect to a filtration \({\mathbb {F}}\) propagates to a larger filtration \({\mathbb {G}}\).

In this paper, we suppose that X is an \({\mathbb {R}}^ d\)-valued \({\mathbb {F}}\)-semimartingale possessing the WRP with respect to \({\mathbb {F}}\). We then assume that \({\mathbb {F}}\) is enlarged by a right-continuous filtration \({\mathbb {H}}\) and we denote by \({\mathbb {G}}\) the smallest right-continuous filtration containing both \({\mathbb {F}}\) and \({\mathbb {H}}\). We assume that \({\mathbb {H}}\) has the following properties: (1) \({\mathbb {H}}\) is independent of \({\mathbb {F}}\); (2) \({\mathbb {H}}\) supports an \({\mathbb {R}}^\ell \)-valued semimartingale Y possessing the WRP with respect to \({\mathbb {H}}\). Under these conditions, we show in Theorem 3.6 (the main result of this paper) that the \({\mathbb {R}}^ d\times {\mathbb {R}}^ \ell \)-valued \({\mathbb {G}}\)-semimartingale \(Z=(X,Y)\) possesses the WRP with respect to \({\mathbb {G}}\). We stress that, in the present paper, we do not make any further assumption: The semimartingales X and Y may have simultaneous jump-times or their jumps may charge (the same) predictable times. To the best of our knowledge, Theorem 3.6 is the most general result about the propagation of the WRP to an independently enlarged filtration.

The propagation of the WRP to an independently enlarged filtration has been studied in Xue [14] under the further assumptions that X (or Y) are special quasi-left continuous semimartingales. (In particular, their jumps do not charge predictable jump-times.) At first glance, these assumptions seem harmless and fairly general, but they are actually quite restrictive: First of all, not every Lévy process is a special semimartingale, and hence [14] does not cover the case of arbitrary Lévy processes X and Y. Furthermore, and more importantly, the quasi-left continuity assumption leads to the following strong simplification of the problem: \({\mathbb {F}}\)-local martingales and \({\mathbb {H}}\)-local martingales have no common jumps. By contrast, in the present paper we face the additional difficulty of determining an adequate representation of the simultaneous jumps of \({\mathbb {F}}\)- and \({\mathbb {H}}\)-local martingales. Moreover, examples of not necessarily quasi-left continuous semimartingales possessing the WRP are known (see Example 3.7) and, in this case, the propagation of the WRP to the independently enlarged filtration \({\mathbb {G}}\) cannot be derived from [14].

In Wu and Gang [13], under the independence assumption on \({\mathbb {F}}\) and \({\mathbb {H}}\), necessary and sufficient conditions on the semimartingale characteristics of X and Y are given for the \({\mathbb {G}}\)-semimartingale \(X+Y\) to possess the WRP with respect to \({\mathbb {G}}\). In particular, the authors do not assume that X (or Y) is quasi-left continuous, but only that the sets of their accessible jump-times are disjoint. This, however, again yields that \({\mathbb {F}}\)-local martingales and \({\mathbb {H}}\)-local martingales have no common jumps (see [13, Lemma 7]).

If \((X,{\mathbb {F}})\) and \((Y,{\mathbb {H}})\) are local martingales, a first work investigating martingale representation theorems in the independently enlarged filtration \({\mathbb {G}}\), without quasi-left continuity assumptions and allowing simultaneous jumps of X and Y, is Calzolari and Torti [5]. However, [5] deals with the propagation of the predictable representation property (from now on PRP) and not with the WRP. We recall that the WRP is a more general property than the PRP (see, e.g., [10, Theorem 13.14] or Proposition 3.9). In Calzolari and Torti [6], the results of [5] are extended to multidimensional local martingales. We show in Corollary 3.10 that the results obtained by [5, 6] can be reformulated in terms of the WRP of the \({\mathbb {G}}\)-local martingale \(Z=(X,Y)\).

If the filtration \({\mathbb {G}}\) is obtained by enlarging the filtration \({\mathbb {F}}\) by a not necessarily independent filtration \({\mathbb {H}}\), then very little is known about the propagation of the WRP. Results in this direction are available if \({\mathbb {F}}\) is enlarged progressively by a random time \(\tau \) that need not be an \({\mathbb {F}}\)-stopping time: In this case, \({\mathbb {H}}\) is generated by the process \(1_{[\tau ,+\infty )}\) and, therefore, \({\mathbb {G}}\) is the smallest right-continuous filtration containing \({\mathbb {F}}\) and such that \(\tau \) is a stopping time. We are now going to review these results.

In Barlow [3], a semimartingale X possessing the WRP with respect to a filtration \({\mathbb {F}}\) is considered and a WRP is obtained in \({\mathbb {G}}\) if \(\tau \) is an honest time. However, honest times are (morally) \({\mathscr {F}}_\infty \)-measurable random variables and, therefore, this excludes the independent enlargement.

In Di Tella [8], the WRP in \({\mathbb {G}}\) is obtained under the assumptions that \({\mathbb {F}}\)-martingales are \({\mathbb {G}}\)-martingales (that is, the immersion property holds) and \({\mathbb {P}}[\tau =\sigma <+\infty ]=0\) for all \({\mathbb {F}}\)-stopping times \(\sigma \) (that is, \(\tau \) avoids \({\mathbb {F}}\)-stopping times). While, on the one hand, the independent enlargement is a special case of the immersion property, on the other hand, the avoidance of stopping times yields that \(\tau \) cannot be charged by the jumps of \({\mathbb {F}}\)-local martingales.

In summary, even if \({\mathbb {F}}\) is progressively enlarged by a random time \(\tau \), the case studied in the present paper is covered neither by [3] nor by [8].

The present work has the following structure: In Sect. 2, we recall some basic definitions and results needed in this paper. In Sect. 3, we prove our main result (Theorem 3.6) about the propagation of the WRP to the independently enlarged filtration \({\mathbb {G}}\). Then, as a consequence of Theorem 3.6, we also investigate the propagation of the PRP to \({\mathbb {G}}\). Section 4 is devoted to two applications of Theorem 3.6: In Sect. 4.1, we study the propagation of the WRP to the iterated independent enlargement. This part is inspired by [6, § 4.1]. In Sect. 4.2, we assume that the filtration \({\mathbb {F}}\) is enlarged by a random time \(\tau \) satisfying the so-called Jacod’s equivalence hypothesis. We stress that in Sect. 4.2 the filtrations \({\mathbb {F}}\) and \({\mathbb {H}}\) need not be independent. In this sense, the results of Sect. 4.2 extend Theorem 3.6. We also observe that, in Sect. 4.2, \(\tau \) need not avoid \({\mathbb {F}}\)-stopping times. Hence, this part extends known results such as Callegaro, Jeanblanc and Zargari [4, Proposition 5.5], where the avoidance property is additionally assumed. Finally, we postpone to the Appendix the proofs of some technical results.

2 Basic Notions

In this paper, we regard d-dimensional vectors as columns, that is, if \(v\in {\mathbb {R}}^ d\), then \(v=(v^1,\ldots ,v^ d)^{tr}\), where tr denotes transposition and \(v^ i\in {\mathbb {R}}\). If \(v^ i\in {\mathbb {R}}^ {d_i}\), \(d_i\ge 1\), \(i=1,\ldots ,n\), we denote by \((v^ 1,v^2,\ldots ,v^ n)^{tr}\) the \((d_1+d_2+\cdots +d_n)\)-dimensional column vector obtained by stacking \(v^2\) after \(v^1\) and so on, up to \(v^ n\).

Stochastic Processes, Filtrations and Martingales Let \((\Omega ,{\mathscr {F}},{\mathbb {P}})\) be a complete probability space. For any càdlàg process X, we denote by \(\Delta X\) the jump process of X, i.e., \(\Delta X_t:=X_t-X_{t-}\), \(t>0\), and \(X_{0-}:=X_0\).

We denote by \({\mathbb {F}}=({\mathscr {F}}_t)_{t\ge 0}\) a right-continuous filtration and by \({\mathscr {O}}({\mathbb {F}})\) (resp. \({\mathscr {P}}({\mathbb {F}})\)) the \(\sigma \)-algebra of the \({\mathbb {F}}\)-optional (resp. \({\mathbb {F}}\)-predictable) sets of \(\Omega \times {\mathbb {R}}_+\), \({\mathbb {R}}_+:=[0,+\infty )\). We set \({\mathscr {F}}_\infty :=\bigvee _{t\ge 0}{\mathscr {F}}_t\). We sometimes use the notation \((X,{\mathbb {F}})\) to denote an \({\mathbb {F}}\)-adapted stochastic process X.

For a process X, we denote by \({\mathbb {F}}^ X\) the smallest right-continuous filtration such that X is adapted.

Let X be a \([-\infty ,+\infty ]\)-valued and \({\mathscr {F}}\otimes {\mathscr {B}}({\mathbb {R}}_+)\)-measurable process, where \({\mathscr {B}}({\mathbb {R}}_+)\) denotes the Borel \(\sigma \)-algebra on \({\mathbb {R}}_+\). We denote by \({}^{p,{\mathbb {F}}} X\) the (extended) \({\mathbb {F}}\)-predictable projection of X. For the definition of \({}^{p,{\mathbb {F}}} X\), we refer to [12, Theorem I.2.28].

An \({\mathbb {F}}\)-adapted càdlàg process X is called quasi-left continuous if \(\Delta X_T=0\) for every finite-valued, \({\mathbb {F}}\)-predictable stopping time T.

For \(q\ge 1\), we denote by \({\mathscr {H}}^ q({\mathbb {F}})\) the space of \({\mathbb {F}}\)-uniformly integrable martingales X such that \(\Vert X\Vert _{{\mathscr {H}}^ q}:={\mathbb {E}}[\sup _{t\ge 0}|X_t|^ q]^{1/q}<+\infty \). Recall that \(({\mathscr {H}}^ q,\Vert \cdot \Vert _{{\mathscr {H}}^ q})\) is a Banach space. For \(X\in {\mathscr {H}}^2({\mathbb {F}})\), we also introduce the equivalent norm \(\Vert X\Vert _2:={\mathbb {E}}[X^2_\infty ]^{1/2}\), with which \(({\mathscr {H}}^ 2,\Vert \cdot \Vert _{2})\) is a Hilbert space.

For each \(q\ge 1\), the space \({\mathscr {H}}^ q_\mathrm {loc}({\mathbb {F}})\) is introduced from \({\mathscr {H}}^ q({\mathbb {F}})\) by localization. We observe that \({\mathscr {H}}^1_\mathrm {loc}({\mathbb {F}})\) coincides with the space of all \({\mathbb {F}}\)-local martingales (see [11, Lemma 2.38]). We denote by \({\mathscr {H}}^ q_0({\mathbb {F}})\) (resp., \({\mathscr {H}}^ q_{\mathrm {loc},0}({\mathbb {F}})\)) the subspace of martingales (resp., local martingales) \(Z\in {\mathscr {H}}^ q({\mathbb {F}})\) (resp., \(Z\in {\mathscr {H}}^ q_\mathrm {loc}({\mathbb {F}})\)) such that \(Z_0=0\).

Two local martingales X and Y are called orthogonal if \(XY\in {\mathscr {H}}^1_{\mathrm {loc},0}({\mathbb {F}})\) holds. For \(X,Y\in {\mathscr {H}}^2_\mathrm {loc}({\mathbb {F}})\), we denote by \(\langle X,Y \rangle \) the predictable covariation of X and Y. We recall that \(XY-\langle X,Y \rangle \in {\mathscr {H}}^1_\mathrm {loc}({\mathbb {F}})\). Hence, if \(X_0Y_0=0\), then X and Y are orthogonal if and only if \(\langle X,Y \rangle =0\).

An \({\mathbb {R}}\)-valued \({\mathbb {F}}\)-adapted process X such that \(X_0=0\) is called increasing if X is càdlàg and the paths \(t\mapsto X_t(\omega )\) are non-decreasing, \(\omega \in \Omega \). We denote by \({\mathscr {A}}^+={\mathscr {A}}^+({\mathbb {F}})\) the space of \({\mathbb {F}}\)-adapted integrable increasing processes; that is, \({\mathscr {A}}^+\) is the space of increasing processes X such that \({\mathbb {E}}[X_\infty ]<+\infty \) (see [12, I.3.6]). We denote by \({\mathscr {A}}^+_\mathrm {loc}={\mathscr {A}}^+_\mathrm {loc}({\mathbb {F}})\) the localized version of \({\mathscr {A}}^+\). For \(X\in {\mathscr {A}}^+_\mathrm {loc}\), we denote by \(X^ p\in {\mathscr {A}}^+_\mathrm {loc}\) the \({\mathbb {F}}\)-dual predictable projection of X (see [12, Theorem I.3.17]).

Let \((X,{\mathbb {F}})\) be an increasing process, and let \(K\ge 0\) be an \({\mathbb {F}}\)-optional process. We denote by \(K\cdot X=(K\cdot X_t)_{t\ge 0}\) the process defined by the (Lebesgue–Stieltjes) integral of K with respect to X, that is, \(K\cdot X_t(\omega ):=\int _0^ t K_s(\omega )\mathrm {d}X_s(\omega )\), if \(\int _0^ t K_s(\omega )\mathrm {d}X_s(\omega )\) is finite-valued, for every \(\omega \in \Omega \) and \(t\ge 0\). Notice that \((K\cdot X,{\mathbb {F}})\) is an increasing process.
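As a purely illustrative sketch (not part of the original exposition), the following Python snippet computes the pathwise Lebesgue–Stieltjes integral \(K\cdot X_t\) for a toy increasing pure-jump path X; the function names and data are ours.

```python
import numpy as np

# Illustration only: for an increasing pure-jump path X with jumps of size
# dX[i] at times T[i], the Lebesgue-Stieltjes integral (K . X)_t reduces to
# a sum of K evaluated at the jump times up to t.
def stieltjes_integral(T, dX, K, t):
    """Compute (K . X)_t = sum_{T_i <= t} K(T_i) * dX_i."""
    T, dX = np.asarray(T), np.asarray(dX)
    mask = T <= t
    return float(np.sum(K(T[mask]) * dX[mask]))

# toy path: jumps of size 1 at times 0.5, 1.2, 2.0
T, dX = [0.5, 1.2, 2.0], [1.0, 1.0, 1.0]
K = lambda s: np.exp(-s)                     # a bounded (here deterministic) integrand
print(stieltjes_integral(T, dX, K, t=1.5))   # = e^{-0.5} + e^{-1.2}
```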

Random Measures Let \(\mu \) be a nonnegative random measure on \({\mathbb {R}}_+\times E\) in the sense of [12, Definition II.1.3], where E coincides with \({\mathbb {R}}^ d\) or with a Borel subset of \({\mathbb {R}}^ d\). We stress that we assume \(\mu (\omega ,\{0\}\times E)=0\) identically.

We denote by \({\mathscr {B}}(E)\) the Borel \(\sigma \)-algebra on E and set \(\widetilde{\Omega }:=\Omega \times {\mathbb {R}}_+\times E\). We then introduce the following \(\sigma \)-fields on \(\widetilde{\Omega }\): \(\widetilde{\mathscr {O}}({\mathbb {F}}):={\mathscr {O}}({\mathbb {F}})\otimes {\mathscr {B}}(E)\) and \(\widetilde{\mathscr {P}}({\mathbb {F}}):={\mathscr {P}}({\mathbb {F}})\otimes {\mathscr {B}}(E)\).

Let W be an \(\widetilde{\mathscr {O}}({\mathbb {F}})\)-measurable (resp. \(\widetilde{\mathscr {P}}({\mathbb {F}})\)-measurable) mapping from \(\widetilde{\Omega }\) into \({\mathbb {R}}\). We say that W is an \({\mathbb {F}}\)-optional (resp. \({\mathbb {F}}\)-predictable) function. Let W be an \({\mathbb {F}}\)-optional function. As in [12, II.1.5], we define

$$\begin{aligned} W*\mu (\omega )_t:={\left\{ \begin{array}{ll} \displaystyle \int _{[0,t]\times E}W(\omega ,s,x)\mu (\omega ,\mathrm {d}s,\mathrm {d}x),&{}\quad \text {if } \displaystyle \int _{[0,t]\times E}|W(\omega ,s,x)|\mu (\omega ,\mathrm {d}s,\mathrm {d}x)<+\infty ;\\ \\ \displaystyle +\infty ,&{}\quad \text {else}. \end{array}\right. } \end{aligned}$$

We say that \(\mu \) is an \({\mathbb {F}}\)-optional (resp. \({\mathbb {F}}\)-predictable) random measure if \(W*\mu \) is an \({\mathbb {F}}\)-optional (resp. an \({\mathbb {F}}\)-predictable) process, for every \({\mathbb {F}}\)-optional (resp. \({\mathbb {F}}\)-predictable) function W.

Semimartingales Let X be an \({\mathbb {R}}^ d\)-valued \({\mathbb {F}}\)-semimartingale. We denote by \(\mu ^ X\) the jump-measure of X, that is,

$$\begin{aligned} \mu ^{X}(\omega ,\mathrm {d}t,\mathrm {d}x)=\sum _{s>0}1_{\{\Delta X_s(\omega )\ne 0\}}\delta _{(s,\Delta X_s(\omega ))}(\mathrm {d}t,\mathrm {d}x), \end{aligned}$$

where \(\delta _a\) denotes the Dirac measure at point \(a\in {\mathbb {R}}^ d\). From [12, Theorem II.1.16], \(\mu ^ X\) is an integer-valued random measure with respect to \({\mathbb {F}}\) (see [12, Definition II.1.13]).

By \((B^ X, C^ X,\nu ^ X)\), we denote the \({\mathbb {F}}\)-predictable characteristics of X with respect to the truncation function \(h(x)=1_{\{|x|\le 1\}}x\) (see [12, Definition II.2.3]). Recall that \(\nu ^ X\) is a predictable random measure characterized by the following properties: For any \({\mathbb {F}}\)-predictable mapping W such that \(|W|*\mu ^ X\in {\mathscr {A}}^ +_\mathrm {loc}\), we have \(|W|*\nu ^ X\in {\mathscr {A}}^ +_\mathrm {loc}\) and \((W*\mu ^ X-W*\nu ^ X)\in {\mathscr {H}}^1_{\mathrm {loc},0}\) (see [12, Theorem II.1.8]).
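To make the defining property of \(\nu ^ X\) concrete, here is a minimal Monte Carlo sketch in Python (our illustration, with assumed parameters and names): for a compound Poisson process with rate \(\lambda \) and standard normal jump sizes, one has \(\nu ^ X(\mathrm {d}t,\mathrm {d}x)=\lambda \,\mathrm {d}t\,F(\mathrm {d}x)\), and the sketch checks that \(W*\mu ^ X_t\) and \(W*\nu ^ X_t\) have approximately the same expectation, consistent with \(W*\mu ^ X-W*\nu ^ X\) being a martingale.

```python
import numpy as np

# Illustration only; parameters and test function W are assumptions of this sketch.
rng = np.random.default_rng(0)
lam, t_end, n_paths = 2.0, 3.0, 50_000
W = lambda x: np.minimum(x**2, 1.0)          # a bounded test function W(t, x) = W(x)

def w_star_mu(rng):
    # W*mu^X_t: sum of W over the jumps of one compound Poisson path on [0, t_end]
    n_jumps = rng.poisson(lam * t_end)
    jumps = rng.standard_normal(n_jumps)
    return W(jumps).sum()

mc = np.mean([w_star_mu(rng) for _ in range(n_paths)])

# W*nu^X_t = lam * t * E[W(xi)] with xi ~ N(0,1), estimated by plain Monte Carlo
compensator = lam * t_end * W(rng.standard_normal(1_000_000)).mean()

print(mc, compensator)   # the two numbers agree up to Monte Carlo error
```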

We are now going to introduce the stochastic integral with respect to \((\mu ^ X-\nu ^ X)\) of an \({\mathbb {F}}\)-predictable mapping W.

Let W be an \({\mathbb {F}}\)-predictable mapping. We define the process \(\widetilde{W}^ X\) by

$$\begin{aligned} \widetilde{W}_t^ X(\omega ):=W(\omega ,t,\Delta X_t(\omega ))1_{\{\Delta X_t(\omega )\ne 0\}}-{{\widehat{W}}}_t^ X(\omega ), \end{aligned}$$
(2.1)

where, for \(t\ge 0\),

$$\begin{aligned} {{\widehat{W}}}_t^ X(\omega ):={\left\{ \begin{array}{ll} \displaystyle \int _{{\mathbb {R}}^ d}W(\omega ,t,x)\nu ^ X(\omega ,\{t\}\times \mathrm {d}x),&{}\quad \text {if } \displaystyle \int _{{\mathbb {R}}^ d}|W(\omega ,t,x)|\nu ^ X(\omega ,\{t\}\times \mathrm {d}x)<+\infty ;\\ \\ \displaystyle +\infty ,&{}\quad \text {else}. \end{array}\right. } \end{aligned}$$

Notice that, according to [12, Lemma II.1.25], \({{\widehat{W}}}^ X\) is predictable and a version of the predictable projection of the process \((\omega ,t)\mapsto W(\omega ,t,\Delta X_t(\omega ))1_{\{\Delta X_t(\omega )\ne 0\}}\). In symbols, denoting this latter process by \(W(\cdot ,\cdot ,\Delta X)1_{\{\Delta X\ne 0\}}\), we have \({{\widehat{W}}}^ X={}^{p,{\mathbb {F}}}(W(\cdot ,\cdot ,\Delta X)1_{\{\Delta X\ne 0\}})\). So, since \({{\widehat{W}}}^ X\) depends on the filtration \({\mathbb {F}}\) as well, we shall also write, if necessary, \({{\widehat{W}}}^{X,{\mathbb {F}}}\) and \(\widetilde{W}^{X,{\mathbb {F}}}\) to stress the filtration. This notation will be especially used in Sect. 3.
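As a simple illustration of \({{\widehat{W}}}^ X\) and \(\widetilde{W}^ X\) at a predictable jump-time (our example, a standard computation not taken from the text; we suppress \(\omega \) and take a deterministic W): let X have a single jump at the deterministic, hence predictable, time \(t=1\) with \(\Delta X_1=\pm 1\), each with probability 1/2, and let \({\mathbb {F}}={\mathbb {F}}^ X\). Then \(\nu ^ X(\{1\}\times \mathrm {d}x)=\tfrac{1}{2}(\delta _1+\delta _{-1})(\mathrm {d}x)\) and

$$\begin{aligned} {{\widehat{W}}}_1^ X=\tfrac{1}{2}\big (W(1,1)+W(1,-1)\big ),\qquad \widetilde{W}_1^ X=W(1,\Delta X_1)-\tfrac{1}{2}\big (W(1,1)+W(1,-1)\big ), \end{aligned}$$

while \({{\widehat{W}}}_t^ X=\widetilde{W}_t^ X=0\) for \(t\ne 1\). In contrast, if X is quasi-left continuous, then \(\nu ^ X(\{t\}\times \mathrm {d}x)=0\) for every t and hence \({{\widehat{W}}}^ X=0\).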

For \(q\ge 1\), we introduce (see [11, (3.62)])

$$\begin{aligned} \textstyle {\mathscr {G}}^ q(\mu ^ X):=\big \{W:\ W\text { is an } {\mathbb {F}}\text {-predictable function and }\ \big (\sum _{0\le s\le \cdot }(\widetilde{W}^ X)^2_s\big )^{q/2}\in {\mathscr {A}}^+\big \}. \end{aligned}$$

The definition of \({\mathscr {G}}^ q_\mathrm {loc}(\mu ^ X)\) is similar and makes use of \({\mathscr {A}}^+_\mathrm {loc}\) instead. To specify the filtration, we sometimes write \({\mathscr {G}}^ q(\mu ^ X,{\mathbb {F}})\). Setting

$$\begin{aligned} \textstyle \Vert W\Vert _{{\mathscr {G}}^ q(\mu ^ X)}:={\mathbb {E}}\Big [\big (\sum _{s\ge 0}(\widetilde{W}^ X)^2_s\big )^{q/2}\Big ]^{1/q}, \end{aligned}$$

we get a seminorm on \({\mathscr {G}}^ q(\mu ^ X)\).

Let now \(W\in {\mathscr {G}}^1_\mathrm {loc}(\mu ^ X)\). The stochastic integral of W with respect to \((\mu ^ X-\nu ^ X)\) is denoted by \(W*(\mu ^ X-\nu ^ X)\) and is defined as the unique purely discontinuous local martingale \(Z\in {\mathscr {H}}^1_\mathrm {loc,0}({\mathbb {F}})\) such that \(\Delta Z=\widetilde{W}^ X\) (up to an evanescent set). See [12, Definition II.1.27] and the subsequent comment. We recall that, according to [11, Proposition 3.66], the inclusion \(W*(\mu ^ X-\nu ^ X)\in {\mathscr {H}}^ q\) holds if and only if \(W\in {\mathscr {G}}^ q(\mu ^ X)\), \(q\ge 1\).

For two \({\mathbb {F}}\)-semimartingales X and Y, we denote by \([X,Y]\) the quadratic variation of X and Y:

$$\begin{aligned} {[}X,Y]_t:=\langle X^ c,Y^ c \rangle _t+\sum _{s\le t}\Delta X_s\Delta Y_s, \end{aligned}$$

where \(X^ c\) and \(Y^ c\) denote the continuous local martingale part of X and Y, respectively. We recall that if \(X,Y\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}})\), then \(XY\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}})\) if and only if \([X,Y]\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}})\) holds. If furthermore \(X,Y\in {\mathscr {H}}^{2}_{\mathrm {loc}}({\mathbb {F}})\), then \([X,Y]\in {\mathscr {A}}^{+}_{\mathrm {loc}}({\mathbb {F}})\) and \([X,Y]^{p}=\langle X,Y \rangle \).
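As a purely pathwise illustration (our sketch, not from the paper): for two finite-variation pure-jump paths, the continuous parts vanish and \([X,Y]_t\) reduces to the sum of products of the simultaneous jumps up to time t, as the following Python snippet shows.

```python
import numpy as np

# Illustration only: [X,Y]_t for pure-jump paths; only simultaneous jumps contribute.
def covariation(jumps_X, jumps_Y, t):
    """jumps_X, jumps_Y: dicts {jump time: jump size}; returns the sum of products
    of the jumps occurring at common times <= t (no continuous parts here)."""
    common = set(jumps_X) & set(jumps_Y)
    return sum(jumps_X[s] * jumps_Y[s] for s in common if s <= t)

jumps_X = {0.5: 1.0, 1.0: -2.0, 2.3: 0.5}    # X jumps, including one at the (predictable) time t = 1
jumps_Y = {1.0: 3.0, 1.7: 1.0}               # Y also jumps at t = 1
print(covariation(jumps_X, jumps_Y, t=2.0))  # = (-2.0) * 3.0 = -6.0
```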

The Stochastic Integral for Multidimensional Local Martingales Let us fix \(q\ge 1\) and consider an \({\mathbb {R}}^ d\)-valued stochastic process \(X=(X^1,\ldots ,X^ d)^{tr}\) such that \(X^ i\in {\mathscr {H}}_{\mathrm {loc}}^ q({\mathbb {F}})\), \(i=1,\ldots ,d\). We denote by a and A the processes introduced in [11, Chapter 4, Section 4 § a] such that \([X^ i,X^ j]=a^{i,j}\cdot A\), \(i,j=1,\ldots ,d\). We recall that \(A\in {\mathscr {A}}^+_\mathrm {loc}({\mathbb {F}})\) and that a is an optional process taking values in the space of d-dimensional symmetric and nonnegative matrices. Notice that, if \(q=2\), then we can take \(C=A^ p\) and c to be a predictable process taking values in the space of d-dimensional symmetric and nonnegative matrices such that \(c^{i,j}\cdot C=(a^{i,j}\cdot A)^ p\).

Let K be an \({\mathbb {R}}^ d\)-valued measurable process and define \(\Vert K\Vert _{\mathrm {L}^ q(X)}:={\mathbb {E}}[((K^{tr}aK)\cdot A_\infty )^{q/2}]^{1/q}\). We denote by \(\mathrm {L}^ q(X)\) the space of \({\mathbb {R}}^ d\)-valued and \({\mathbb {F}}\)-predictable processes K such that \(\Vert K\Vert _{\mathrm {L}^ q(X)}<+\infty \). Notice that \(K\in \mathrm {L}^ q(X)\) if and only if K is \({\mathbb {R}}^ d\)-valued, \({\mathbb {F}}\)-predictable and \(((K^{tr}aK)\cdot A)^{q/2}\in {\mathscr {A}}^+\). The space \(\mathrm {L}^{q}_\mathrm {loc}(X)\) is defined analogously but making use of \({\mathscr {A}}^+_\mathrm {loc}\) instead of \({\mathscr {A}}^+\). For \(K\in \mathrm {L}_\mathrm {loc}^1(X)\), we denote by \(K\cdot X\) the stochastic integral of K with respect to X. Recall that \(K\cdot X\in {\mathscr {H}}^1_{\mathrm {loc},0}\) and that it is always an \({\mathbb {R}}\)-valued process. Sometimes, to stress the underlying filtration, we write \(\mathrm {L}^ q(X,{\mathbb {F}})\) or \(\mathrm {L}^ q_\mathrm {loc}(X,{\mathbb {F}})\).

We observe that if \(X\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}})\) (in particular, X is \({\mathbb {R}}\)-valued), then we can choose \(a=1\), \(A=[X,X]\) and we get the usual definition of the stochastic integral with respect to X (see [11, Definition 2.46]). If furthermore X is of finite variation and \(K\in \mathrm {L}_\mathrm {loc}^1(X)\), then the stochastic integral \(K\cdot X\) coincides with the Lebesgue–Stieltjes integral, whenever the latter exists and is finite.
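As a toy illustration of how \(\Vert K\Vert _{\mathrm {L}^2(X)}\) is assembled from K, a and A (our sketch with made-up, deterministic, piecewise-constant data on a single path):

```python
import numpy as np

# Illustration only: ||K||^2_{L^2(X)} = E[ ((K^tr a K) . A)_infty ], evaluated here
# on one deterministic path with d = 2 and two grid points.
a = np.array([[[1.0, 0.2], [0.2, 2.0]],      # a_t at the first grid point (symmetric, nonnegative)
              [[1.5, 0.0], [0.0, 1.0]]])     # a_t at the second grid point
dA = np.array([0.5, 0.5])                    # increments of A on the grid
K = np.array([[1.0, -1.0],                   # K_t at the first grid point
              [0.0,  2.0]])                  # K_t at the second grid point

quad_forms = np.einsum('ti,tij,tj->t', K, a, K)   # (K^tr a K)_t at each grid point
norm_sq = np.sum(quad_forms * dA)                 # ((K^tr a K) . A)_infty for this path
print(norm_sq)                                    # 2.6 * 0.5 + 4.0 * 0.5 = 3.3
```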

3 Martingale Representation in the Independently Enlarged Filtration

We start this section by introducing the notion of the weak representation property (abbreviated by WRP).

Definition 3.1

Let \({\mathbb {F}}\) be a right-continuous filtration and \((X,{\mathbb {F}})\) an \({\mathbb {R}}^ d\)-valued semimartingale with continuous local martingale part \(X^ c\) and predictable \({\mathbb {F}}\)-characteristics \((B^ X,C^ X,\nu ^ X)\). We say that X possesses the WRP with respect to \({\mathbb {F}}\) if every \(N\in {\mathscr {H}}_{\mathrm {loc}}^ 1({\mathbb {F}})\) can be represented as

$$\begin{aligned}&N_t=N_0+K\cdot X^ c_t+W*(\mu ^ X-\nu ^ X)_t,\quad t\ge 0,\quad \nonumber \\&K\in \mathrm {L}_\mathrm {loc}^ 1(X^ c,{\mathbb {F}}),\quad W\in {\mathscr {G}}_{\mathrm {loc}}^ 1(\mu ^ X,{\mathbb {F}}). \end{aligned}$$
(3.1)

For our aims, the following characterization of the WRP will be useful:

Proposition 3.2

The \({\mathbb {R}}^ d\)-valued semimartingale X possesses the WRP with respect to \({\mathbb {F}}\) if and only if every \(N\in {\mathscr {H}}^2({\mathbb {F}})\) has the representation

$$\begin{aligned}&N_t=N_0+K\cdot X^ c_t+W*(\mu ^ X-\nu ^ X)_t,\quad t\ge 0,\quad \nonumber \\&K\in \mathrm {L}^ 2(X^ c,{\mathbb {F}}),\quad W\in {\mathscr {G}}^ 2(\mu ^ X,{\mathbb {F}}). \end{aligned}$$
(3.2)

Proof

By localization, it is enough to show that (3.2) holds if and only if every \(N\in {\mathscr {H}}^1({\mathbb {F}})\) can be represented as

$$\begin{aligned} N_t\!=\!N_0+K\cdot X^ c_t\!+\!W*(\mu ^ X-\nu ^ X)_t,\quad t\ge 0,\quad K\in \mathrm {L}^ 1(X^ c,{\mathbb {F}}),\quad W\in {\mathscr {G}}^ 1(\mu ^ X,{\mathbb {F}}). \end{aligned}$$

But this is just [8, Proposition 3.2]. The proof is complete. \(\square \)

3.1 Propagation of the Weak Representation Property

Let \({\mathbb {F}}=({\mathscr {F}}_t)_{t\ge 0}\) and \({\mathbb {H}}=({\mathscr {H}}_t)_{t\ge 0}\) be right-continuous filtrations. In this section, we consider the filtration \({\mathbb {G}}:={\mathbb {F}}\vee {\mathbb {H}}\); that is, \({\mathbb {G}}\) is the smallest filtration containing both \({\mathbb {F}}\) and \({\mathbb {H}}\). We then make the following assumption:

Assumption 3.3

The filtrations \({\mathbb {F}}\) and \({\mathbb {H}}\) are independent.

We recall that two filtrations \({\mathbb {F}}=({\mathscr {F}}_t)_{t\ge 0}\) and \({\mathbb {H}}=({\mathscr {H}}_t)_{t\ge 0}\) are called independent, if the \(\sigma \)-algebras \({\mathscr {F}}_\infty \) and \({\mathscr {H}}_\infty \) are independent.

As a first consequence of Assumption 3.3, we get that the filtration \({\mathbb {G}}\) is again right continuous (see Lemma A.4 (i)). Furthermore, the following result holds:

Proposition 3.4

Let \({\mathbb {F}}\) and \({\mathbb {H}}\) satisfy Assumption 3.3. Let \((X,{\mathbb {F}})\) be an \({\mathbb {R}}^ d\)-valued semimartingale, and let \((B^ X,C^ X,\nu ^ X)\) denote the \({\mathbb {F}}\)-predictable characteristics of X. Then, \((X,{\mathbb {G}})\) is a semimartingale and the \({\mathbb {G}}\)-predictable characteristics of X are again given by \((B^ X,C^ X,\nu ^ X)\).

Proof

The result follows from Lemma A.4 (ii) and [12, Theorem II.2.21]. The proof is complete. \(\square \)

In the proof of Theorem 3.6, we need the following technical proposition:

Proposition 3.5

Let \({\mathbb {F}}\) and \({\mathbb {H}}\) satisfy Assumption 3.3. Let \((X,{\mathbb {F}})\) be an \({\mathbb {R}}^ d\)-valued semimartingale with jump-measure \(\mu ^ X\) and \({\mathbb {F}}\)-predictable characteristics \((B^ X,C^ X,\nu ^ X)\). Then, for every \(W\in {\mathscr {G}}_{\mathrm {loc}}^1(\mu ^ X,{\mathbb {F}})\) we have:

(i) \(\widehat{W}^{X,{\mathbb {F}}}=\widehat{W}^{X,{\mathbb {G}}}\).

(ii) \(\widetilde{W}^{X,{\mathbb {F}}}=\widetilde{W}^{X,{\mathbb {G}}}\).

(iii) \(W\in {\mathscr {G}}_{\mathrm {loc}}^1(\mu ^ X,{\mathbb {G}})\). If moreover \(W\in {\mathscr {G}}^ q(\mu ^ X,{\mathbb {F}})\), \(q\ge 1\), then \(W\in {\mathscr {G}}^ q(\mu ^ X,{\mathbb {G}})\).

Proof

First, we notice that, by Proposition 3.4, X is a \({\mathbb {G}}\)-semimartingale. Hence, (ii) is a direct consequence of (i) and of the definition of \(\widetilde{W}^{X,{\mathbb {G}}}\). Furthermore, (iii) follows immediately from (ii). We now show (i). Because of Proposition 3.4, the \({\mathbb {G}}\)-predictable compensator of \(\mu ^ X\) is again given by \(\nu ^{X}\). Therefore, by the definition of \(\widehat{W}^{X,{\mathbb {G}}}\) we have the identity \(\widehat{W}^{X,{\mathbb {G}}}=\widehat{W}^{X,{\mathbb {F}}}\). The proof is complete. \(\square \)

Let \((E,{\mathscr {E}})\) be a measurable space endowed with the \(\sigma \)-algebra \({\mathscr {E}}\). We denote by \({\mathbb {B}}({\mathscr {E}})\) the space of \({\mathbb {R}}\)-valued, \({\mathscr {E}}\)-\({\mathscr {B}}({\mathbb {R}})\)-measurable and bounded functions on E. We now come to the main result of the present paper.

Theorem 3.6

Let \((X,{\mathbb {F}})\) be an \({\mathbb {R}}^ d\)-valued semimartingale with jump-measure \(\mu ^ X\) and \({\mathbb {F}}\)-predictable characteristics \((B^ X,C^ X,\nu ^ X)\). Let \((Y,{\mathbb {H}})\) be an \({\mathbb {R}}^ \ell \)-valued semimartingale with jump-measure \(\mu ^ Y\) and \({\mathbb {H}}\)-predictable characteristics \((B^ Y,C^ Y,\nu ^ Y)\). Let furthermore X have the WRP with respect to \({\mathbb {F}}\), and let Y have the WRP with respect to \({\mathbb {H}}\). If \({\mathbb {F}}\) and \({\mathbb {H}}\) satisfy Assumption 3.3, then the \({\mathbb {R}}^{d}\times {\mathbb {R}}^\ell \)-valued \({\mathbb {G}}\)-semimartingale \(Z=(X,Y)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}\), where \({\mathbb {G}}:={\mathbb {F}}\vee {\mathbb {H}}\).

Proof

Let \(\xi \) belong to \({\mathbb {B}}({\mathscr {F}}_\infty )\) and let \(\eta \) belong to \({\mathbb {B}}({\mathscr {H}}_\infty )\). We consider the martingales \(M\in {\mathscr {H}}^2({\mathbb {F}})\) and \(N\in {\mathscr {H}}^2({\mathbb {H}})\) such that \(M_t={\mathbb {E}}[\xi |{\mathscr {F}}_t]\) and \(N_t={\mathbb {E}}[\eta |{\mathscr {H}}_t]\) a.s. for every \(t\ge 0\). Since X has the WRP with respect to \({\mathbb {F}}\) and Y has the WRP with respect to \({\mathbb {H}}\), because of Proposition 3.2, we can represent M as

$$\begin{aligned} M_t=M_0+K\cdot X_t^ c+W*(\mu ^ X-\nu ^ X)_t,\quad t\ge 0,\quad K\in \mathrm {L}^2(X^ c,{\mathbb {F}}),\quad W\in {\mathscr {G}}^2(\mu ^ X,{\mathbb {F}})\nonumber \\ \end{aligned}$$
(3.3)

and N as

$$\begin{aligned} N_t=N_0+J\cdot Y_t^ c+V*(\mu ^ Y-\nu ^ Y)_t,\quad t\ge 0,\quad J\in \mathrm {L}^2(Y^ c,{\mathbb {H}}),\quad V\in {\mathscr {G}}^2(\mu ^ Y,{\mathbb {H}}).\nonumber \\ \end{aligned}$$
(3.4)

Furthermore, because of Lemma A.4 (ii) and (iii), the processes M, N and MN are \({\mathbb {G}}\)-martingales and they are bounded. Hence, we have, in particular, \(M,N,MN\in {\mathscr {H}}^2({\mathbb {G}})\). By Proposition 3.4, \((X,{\mathbb {G}})\) and \((Y,{\mathbb {G}})\) are semimartingales and their \({\mathbb {G}}\)-predictable characteristics are given by \((B^ X,C^ X,\nu ^ X)\) and \((B^ Y,C^ Y,\nu ^ Y)\), respectively. In conclusion, (3.3) and (3.4) are also valid in \({\mathbb {G}}\). We can then apply the formula of integration by parts with respect to the enlarged filtration \({\mathbb {G}}\) and use (3.3), (3.4), [12, Proposition II.1.30 b) and III.4.9 in Theorem III.4.5] to obtain

$$\begin{aligned} \begin{aligned} M_tN_t&=M_0N_0+N_-\cdot M_t+M_-\cdot N_t+[M,N]_t \\&=M_0N_0+N_-K\cdot X^ c_t+N_-W*(\mu ^ X-\nu ^ X)_t\\&\quad +M_-J\cdot Y^ c_t+M_-V*(\mu ^ Y-\nu ^ Y)_t+[M,N]_t. \end{aligned} \end{aligned}$$
(3.5)
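The product formula (3.5) rests on the integration-by-parts identity \(MN=M_0N_0+N_-\cdot M+M_-\cdot N+[M,N]\). This identity can be sanity-checked pathwise in discrete time, where stochastic integrals become sums against increments; the following Python sketch (our illustration, not part of the proof) does exactly this:

```python
import numpy as np

# Illustration only: pathwise integration by parts for arbitrary discrete paths.
rng = np.random.default_rng(1)
M = np.cumsum(rng.standard_normal(11))   # a discrete path M_0, ..., M_10
N = np.cumsum(rng.standard_normal(11))   # a discrete path N_0, ..., N_10

dM, dN = np.diff(M), np.diff(N)
lhs = M[-1] * N[-1]
rhs = (M[0] * N[0]
       + np.sum(N[:-1] * dM)     # (N_- . M)_t
       + np.sum(M[:-1] * dN)     # (M_- . N)_t
       + np.sum(dM * dN))        # [M, N]_t: sum of products of the increments
print(np.isclose(lhs, rhs))      # True
```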

We split the remaining part of the proof in several steps.

Step 1: Representation of \([M,N]\). By Lemma A.4 (ii), the process \(M^ cN^ c\) is a continuous local martingale and hence \(\langle M^ c,N^ c \rangle =0\). By definition of the quadratic covariation, from Proposition 3.5 (ii) and the identities (3.3) and (3.4), we therefore get

$$\begin{aligned} {[}M,N]_t=\sum _{0\le s\le t}\widetilde{W}^{X,{\mathbb {F}}}_s\widetilde{V}^{Y,{\mathbb {H}}}_s=\sum _{0\le s\le t}\widetilde{W}^{X,{\mathbb {G}}}_s\widetilde{V}^{Y,{\mathbb {G}}}_s. \end{aligned}$$
(3.6)

Since \(MN\in {\mathscr {H}}^2({\mathbb {G}})\), we have that \(S:=[M,N]\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {G}})\). Moreover, Kunita–Watanabe’s inequality (see [10, Theorem 8.3, Eq. (3.1)]) and the independence of \({\mathbb {F}}\) and \({\mathbb {H}}\) yield

$$\begin{aligned}\begin{aligned} {\mathbb {E}}\bigg [\sup _{t\ge 0}\big |[M,N]_t\big |^2\bigg ]&\le {\mathbb {E}}\big [\mathrm {Var}\big ([M,N]\big )_\infty ^2\big ]\\&\le {\mathbb {E}}\big [[M,M]_\infty [N,N]_\infty \big ]={\mathbb {E}}\big [[M,M]_\infty \big ]{\mathbb {E}}\big [[N,N]_\infty \big ]<+\infty , \end{aligned}\end{aligned}$$

meaning that the \({\mathbb {G}}\)-local martingale S belongs to \({\mathscr {H}}^2({\mathbb {G}})\) as well. By the definition of S and Proposition 3.5 (ii), we have

$$\begin{aligned} \Delta S=\widetilde{W}^{X,{\mathbb {F}}}\widetilde{V}^{Y,{\mathbb {H}}}=\widetilde{W}^{X,{\mathbb {G}}}\widetilde{V}^{Y,{\mathbb {G}}}. \end{aligned}$$
(3.7)

By the definition of \(\widetilde{W}^{X,{\mathbb {G}}}\) and \(\widetilde{V}^{Y,{\mathbb {G}}}\) and Proposition 3.5 (i), we have

$$\begin{aligned} \begin{aligned} \widetilde{W}^{X,{\mathbb {G}}}\widetilde{V}^{Y,{\mathbb {G}}}&=\widetilde{W}^{X,{\mathbb {F}}}\widetilde{V}^{Y,{\mathbb {H}}}=\Big (W(\cdot ,\cdot ,\Delta X)1_{\{\Delta X\ne 0\}}-\widehat{W}^{X,{\mathbb {G}}}\Big )\Big (V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta Y\ne 0\}}-\widehat{V}^{Y,{\mathbb {G}}}\Big ) \\ {}&=W(\cdot ,\cdot ,\Delta X)V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta X\ne 0,\Delta Y\ne 0\}}-\widehat{W}^{X,{\mathbb {G}}}V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta Y\ne 0\}}\\ {}&\qquad -\widehat{V}^{Y,{\mathbb {G}}}W(\cdot ,\cdot ,\Delta X)1_{\{\Delta X\ne 0\}}+\widehat{V}^{Y,{\mathbb {G}}}\widehat{W}^{X,{\mathbb {G}}}. \end{aligned} \end{aligned}$$
(3.8)

We are now going to compute the \({\mathbb {G}}\)-predictable projection of \(\widetilde{W}^{X,{\mathbb {G}}}\widetilde{V}^{Y,{\mathbb {G}}}\). We recall that the processes \(\widehat{W}^{X,{\mathbb {G}}}\) and \(\widehat{V}^{Y,{\mathbb {G}}}\) are \({\mathbb {G}}\)-predictable, by [12, Lemma II.1.25]. Furthermore, since \(W\in {\mathscr {G}}^2(\mu ^ X,{\mathbb {G}})\) and \(V\in {\mathscr {G}}^2(\mu ^ Y,{\mathbb {G}})\) by Proposition 3.5 (iii), we also have that \(\widehat{W}^{X,{\mathbb {G}}}\) and \(\widehat{V}^{Y,{\mathbb {G}}}\) are finite-valued. By the \({\mathbb {G}}\)-martingale property of S, we have \({}^{p,{\mathbb {G}}}{(\Delta S)}=0\) (see [12, Corollary I.2.31]). Therefore, using (3.7), [12, Theorem I.2.28 c), Corollary I.2.31 and Lemma II.1.25] in (3.8), since all involved processes are finite-valued, we can compute

$$\begin{aligned} \begin{aligned} 0&={}^{p,{\mathbb {G}}}(\widetilde{W}^{X,{\mathbb {G}}}\widetilde{V}^{Y,{\mathbb {G}}})\\&={}^{p,{\mathbb {G}}}\big (W(\cdot ,\cdot ,\Delta X)V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta X\ne 0,\Delta Y\ne 0\}}\big )-{}^{p,{\mathbb {G}}}\big (V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta Y\ne 0\}}\big )\widehat{W}^{X,{\mathbb {G}}}\\ {}&\qquad -{}^{p,{\mathbb {G}}}\big (W(\cdot ,\cdot ,\Delta X)1_{\{\Delta X\ne 0\}}\big )\widehat{V}^{Y,{\mathbb {G}}}+\widehat{W}^{X,{\mathbb {G}}}\widehat{V}^{Y,{\mathbb {G}}} \\ {}&={}^{p,{\mathbb {G}}}\big (W(\cdot ,\cdot ,\Delta X)V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta X\ne 0,\Delta Y\ne 0\}}1_{\{\Delta Z\ne 0\}}\big )-\widehat{W}^{X,{\mathbb {G}}}\widehat{V}^{Y,{\mathbb {G}}}, \end{aligned} \end{aligned}$$

which yields

$$\begin{aligned} \widehat{W}^{X,{\mathbb {G}}}\widehat{V}^{Y,{\mathbb {G}}}\quad \text { is a version of }\quad {}^{p,{\mathbb {G}}}\big (W(\cdot ,\cdot ,\Delta X)V(\cdot ,\cdot ,\Delta Y)1_{\{\Delta X\ne 0,\Delta Y\ne 0\}}1_{\{\Delta Z\ne 0\}}\big ).\nonumber \\ \end{aligned}$$
(3.9)

We set \(F(\omega ,t,x,y):= W(\omega ,t,x)V(\omega ,t,y)1_{\{x\ne 0,y\ne 0\}}\). Then, F is \({\mathscr {P}}({\mathbb {G}})\otimes {\mathscr {B}}({\mathbb {R}}^ d)\otimes {\mathscr {B}}({\mathbb {R}}^\ell )\)-measurable. By [12, Lemma II.1.25] and (3.9), we get that \(\widehat{W}^{X,{\mathbb {G}}}\widehat{V}^{Y,{\mathbb {G}}}\) is a version of \(\widehat{F}^{\,Z,{\mathbb {G}}}\). We now define

$$\begin{aligned} U(\omega ,t,x,y):= & {} F(\omega ,t,x,y)-\widehat{W}^{X,{\mathbb {G}}}_t(\omega )V(\omega ,t,y)1_{\{y\ne 0\}}\nonumber \\&-\widehat{V}^{Y,{\mathbb {G}}}_t(\omega )W(\omega ,t,x)1_{\{x\ne 0\}} \end{aligned}$$
(3.10)

and clearly have

$$\begin{aligned} \begin{aligned}&U(\omega ,t,\Delta X_t(\omega ),\Delta Y_t(\omega ))1_{\{\Delta Z_t(\omega )\ne 0\}}\\&\quad = W(\omega ,t,\Delta X_t(\omega ))V(\omega ,t,\Delta Y_t(\omega ))1_{\{\Delta X_t(\omega )\ne 0,\Delta Y_t(\omega )\ne 0\}}\\&\qquad -\widehat{W}^{X,{\mathbb {G}}}_t(\omega )V(\omega ,t,\Delta Y_t(\omega ))1_{\{\Delta Y_t(\omega )\ne 0\}}\\&\qquad -\widehat{V}^{Y,{\mathbb {G}}}_t(\omega )W(\omega ,t,\Delta X_t(\omega ))1_{\{\Delta X_t(\omega )\ne 0\}}. \end{aligned}\qquad \end{aligned}$$

From this, by [12, Lemma II.1.25] and (3.9), we see that \(-\widehat{W}^{X,{\mathbb {G}}}\widehat{V}^{Y,{\mathbb {G}}}\) is a version of the \({\mathbb {G}}\)-predictable projection of the process \( U(\cdot ,\cdot ,\Delta X,\Delta Y)1_{\{\Delta Z\ne 0\}}\). Hence, we obtain

$$\begin{aligned} \begin{aligned} \widetilde{U}^{Z,{\mathbb {G}}}_t(\omega )&:=U(\omega ,t,\Delta X_t(\omega ),\Delta Y_t(\omega ))1_{\{\Delta Z_t(\omega )\ne 0\}}-\widehat{U}^{Z,{\mathbb {G}}}_t(\omega ) \\&=U(\omega ,t,\Delta X_t(\omega ),\Delta Y_t(\omega ))1_{\{\Delta Z_t(\omega )\ne 0\}}+\widehat{W}^{X,{\mathbb {G}}}_t(\omega )\widehat{V}^{Y,{\mathbb {G}}}_t(\omega ) \\&=\widetilde{W}^{X,{\mathbb {G}}}_t(\omega )\widetilde{V}^{Y,{\mathbb {G}}}_t(\omega ),\quad \text {for every }t\in {\mathbb {R}}_+,\mathrm{\ a.s.\ } \end{aligned}\qquad \end{aligned}$$
(3.11)

where, in the last identity, we used (3.8). We therefore have

$$\begin{aligned} {\mathbb {E}}\left[ \sum _{s\ge 0}\big (\widetilde{U}^{Z,{\mathbb {G}}}_s\big )^2\right] ={\mathbb {E}}\left[ \sum _{s\ge 0}\big (\widetilde{W}^{X,{\mathbb {G}}}_s\widetilde{V}^{Y,{\mathbb {G}}}_s\big )^2\right] ={\mathbb {E}}\big [[S,S]_\infty \big ]<+\infty , \end{aligned}$$

where, for the last estimate, we recall that \(S\in {\mathscr {H}}^2({\mathbb {G}})\) (see [12, Proposition I.4.50 c)]). Therefore, the inclusion \(U\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\) holds. Hence, we can introduce the purely discontinuous square-integrable \({\mathbb {G}}\)-martingale \(U*(\mu ^{Z}-\nu ^{Z})\) and we get

$$\begin{aligned} \Delta U*(\mu ^{Z}-\nu ^{Z})=\widetilde{U}^{Z,{\mathbb {G}}}=\widetilde{W}^{X,{\mathbb {G}}}\widetilde{V}^{Y,{\mathbb {G}}}=\Delta S, \end{aligned}$$

up to an evanescent set. Hence, the purely discontinuous martingales \(U*(\mu ^{Z}-\nu ^{Z})\) and \(S=[M,N]\) have the same jumps, up to an evanescent set. By [12, Corollary I.4.19], we conclude that S and \(U*(\mu ^{Z}-\nu ^{Z})\) are indistinguishable. Summarizing, we have shown that \(U\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\) and

$$\begin{aligned} {[}M,N]=U*(\mu ^{Z}-\nu ^{Z}). \end{aligned}$$
(3.12)

The proof of Step 1 is complete.

Step 2: Representation of \(N_-W*(\mu ^ X-\nu ^ X)\) and of \(M_-V*(\mu ^ Y-\nu ^ Y)\). For notational convenience, we denote in this part \(g_1(x,y):=1_{\{x\ne 0\}}\) and \(g_2(x,y):=1_{\{y\ne 0\}}\), \((x,y)\in {\mathbb {R}}^ d\times {\mathbb {R}}^\ell \). We are only going to show how to represent \(N_-W*(\mu ^ X-\nu ^ X)\), the result for \(M_-V*(\mu ^ Y-\nu ^ Y)\) being completely analogous. We observe that, by Lemma A.3, we obtain the identity \(W*(\mu ^ X-\nu ^ X)=Wg_1*(\mu ^{Z}-\nu ^{Z})\). By Proposition 3.4 and Lemma A.3, using that \(N_-\) is \({\mathbb {G}}\)-predictable, Proposition 3.5 (ii) and (iii) and the independence of \({\mathbb {F}}\) and \({\mathbb {H}}\), we have

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\bigg [\sum _{s\ge 0}\big (\widetilde{N_{-}Wg_1}_s^{Z,{\mathbb {G}}}\big )^2\bigg ]&={\mathbb {E}}\bigg [\sum _{s\ge 0}\big (N_{s-}\widetilde{Wg_1}_s^{Z,{\mathbb {G}}}\big )^2\bigg ] \\ {}&={\mathbb {E}}\bigg [\sum _{s\ge 0}\big (N_{s-}\widetilde{W}_s^{X,{\mathbb {G}}}\big )^2\bigg ] \\ {}&\le {\mathbb {E}}\bigg [\sup _{t\ge 0}(N^2_t)\sum _{s\ge 0}\big (\widetilde{W}_s^{X,{\mathbb {F}}}\big )^2\bigg ] \\ {}&=\Vert N\Vert _{{\mathscr {H}}^2({\mathbb {G}})}^2\Vert W\Vert _{{\mathscr {G}}^2(\mu ^ X,{\mathbb {G}})}^2<+\infty , \end{aligned} \end{aligned}$$

showing the inclusion \(N_-Wg_1\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\). Lemma A.3 (i) and [12, Proposition II.1.30b)] furthermore yield

$$\begin{aligned} \begin{aligned} N_-W*(\mu ^ X-\nu ^ X)&=N_-\cdot (W*(\mu ^ X-\nu ^ X))\\ {}&=N_-\cdot (Wg_1*(\mu ^{Z}-\nu ^{Z}))\\ {}&=N_-Wg_1*(\mu ^{Z}-\nu ^{Z}). \end{aligned} \end{aligned}$$
(3.13)

Analogously, we get \(M_-Vg_2\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\) and

$$\begin{aligned} M_-V*(\mu ^ Y-\nu ^ Y)=M_-Vg_2*(\mu ^{Z}-\nu ^{Z}). \end{aligned}$$
(3.14)

The proof of Step 2 is complete.

Step 3: Representation of the continuous part. We stress that for this step we need the properties of the stochastic integral of multidimensional predictable processes with respect to a multidimensional continuous local martingale. For this topic, we refer to [12, III.§4a] and [11, Section 4.2 § a and § b].

We consider the \({\mathbb {R}}^{d}\times {\mathbb {R}}^{\ell }\)-valued continuous \({\mathbb {G}}\)-local martingale \(Z^ c=(X^ c, Y^ c)^{tr}\) and introduce the \({\mathbb {G}}\)-predictable \({\mathbb {R}}^{d}\times {\mathbb {R}}^{\ell }\)-valued process \(H=(N_-K,M_-J)^{tr}\). We notice that the identities \((Z^ i)^ c=(X^ i)^ c\), \(H^ i=N_-K^ i\) for \(i=1,\ldots ,d\) and \((Z^{d+i})^ c=(Y^ i)^ c\), \(H^{d+i}=M_-J^ i\) for \(i=1,\ldots ,\ell \) hold. Let us define \(C:=A+B\), where \(A:=\sum _{i=1}^ d\langle (X^ i)^ c,(X^ i)^ c \rangle \) and \(B:=\sum _{i=1}^ \ell \langle (Y^ i)^ c,(Y^ i)^ c \rangle \). So, A and B are absolutely continuous with respect to C. Additionally, because of Lemma A.4 (ii) and the uniqueness of the point brackets, \(\langle (X^ i)^ c,(X^ i)^ c \rangle \) does not change in \({\mathbb {G}}\). The same holds for \(\langle (Y^ i)^ c,(Y^ i)^ c \rangle \). Therefore, A is \({\mathbb {F}}\)-predictable and B is \({\mathbb {H}}\)-predictable. According to [11, Section 4.2 § a and § b], there exists a \({\mathbb {G}}\)-predictable process c taking values in the set of nonnegative symmetric \({(d+\ell )\times (d+\ell )}\)-matrices such that \(\langle (Z^ i)^ c,(Z^ j)^ c \rangle =c^{i,j}\cdot C\). Analogously, we find an \({\mathbb {F}}\)-predictable process a taking values in the set of nonnegative symmetric \({d\times d}\)-matrices such that \(\langle (X^ i)^ c,(X^ j)^ c \rangle =a^{i,j}\cdot A\), \(i,j=1,\ldots , d\), and an \({\mathbb {H}}\)-predictable process b taking values in the set of nonnegative symmetric \({\ell \times \ell }\)-matrices such that \(\langle (Y^ i)^ c,(Y^ j)^ c \rangle =b^{i,j}\cdot B\), \(i,j=1,\ldots , \ell \). Furthermore, since \(\langle (X^ i)^ c,(Y^ j)^ c \rangle =0\), \(i=1,\ldots ,d\), \(j=1,\ldots ,\ell \), we also have \(c^{i,d+j}=0\), \(i=1,\ldots ,d\), \(j=1,\ldots ,\ell \). We are going to show that the \({\mathbb {G}}\)-predictable process H introduced above belongs to \(\mathrm {L}^2(Z^ c,{\mathbb {G}})\). For this, according to [12, III.4.3], we have to verify that the increasing process \((H^{tr} c H)\cdot C\) is integrable. Because of the structure of c, we see that the identity

$$\begin{aligned} \begin{aligned} H^{tr} c H&=\sum _{i,j=1}^{d+\ell }H^ ic^{i,j}H^ j=\sum _{i,j=1}^{d}H^ ic^{i,j}H^ j+\sum _{i,j=1}^{\ell }H^ {d+i}c^{d+i,d+j}H^{d+j}\\ {}&=\sum _{i,j=1}^{d}N_-^2K^ iK^ jc^{i,j}+\sum _{i,j=1}^{\ell }M_-^2J^ {i}J^ jc^{d+i,d+j} \end{aligned} \end{aligned}$$

holds. By linearity of the integral with respect to C, we now get

$$\begin{aligned} \begin{aligned} (H^{tr} c H)\cdot C&=\sum _{i,j=1}^{d}\{N_-^2K^ iK^ jc^{i,j}\cdot C\}+\sum _{i,j=1}^{\ell }\{M_-^2J^ {i}J^ jc^{d+i,d+j}\cdot C\} \\ {}&=\sum _{i,j=1}^{d}\{N_-^2K^ iK^ j\cdot \langle (X^ i)^ c,(X^ j)^ c \rangle \}+\sum _{i,j=1}^{\ell }\{M_-^2J^ {i}J^ j\cdot \langle (Y^ i)^ c,(Y^ j)^ c \rangle \} \\ {}&=\sum _{i,j=1}^{d}\{N_-^2K^ iK^ ja^{i,j}\cdot A\}+\sum _{i,j=1}^{\ell }\{M_-^2J^ {i}J^ jb^{i,j}\cdot B\} \\ {}&=(N_-^2K^{tr} aK)\cdot A+(M_-^2J^{tr} bJ)\cdot B. \end{aligned} \end{aligned}$$

Hence, the independence of \({\mathbb {F}}\) and \({\mathbb {H}}\) yields

$$\begin{aligned}\begin{aligned}{\mathbb {E}}\big [(H^{tr} c H)\cdot C_\infty \big ]&= {\mathbb {E}}\big [(N_-^2K^{tr} aK)\cdot A_\infty +(M_-^2J^{tr} bJ)\cdot B_\infty \big ]\\ {}&\le {\mathbb {E}}\Big [\sup _{t\ge 0}N^2_t\Big ]\Vert K\Vert _{\mathrm {L}^2(X^ c,{\mathbb {F}})}^2+{\mathbb {E}}\Big [\sup _{t\ge 0}M^2_t\Big ]\Vert J\Vert _{\mathrm {L}^2(Y^ c,{\mathbb {H}})}^2<+\infty , \end{aligned}\end{aligned}$$

where, in the last estimate, we used that M and N are square-integrable martingales, that \(K\in \mathrm {L}^2(X^ c,{\mathbb {F}})\) and that \(J\in \mathrm {L}^2(Y^ c,{\mathbb {H}})\). This shows the inclusion \(H\in \mathrm {L}^2(Z^ c,{\mathbb {G}})\). We are now going to verify that the identity \(H\cdot Z^ c=N_-K\cdot X^ c+M_-J\cdot Y^ c\) holds. Let \(R\in {\mathscr {H}}_{\mathrm {loc}}^2({\mathbb {G}})\) be continuous. Then, there exist \({\mathbb {G}}\)-predictable processes \(a^{Ri}\) and \(b^{Rj}\) such that \(\langle R,(X^ i)^ c \rangle =a^{Ri}\cdot A\) and \(\langle R,(Y^ j)^ c \rangle =b^{Rj}\cdot B\), \(i=1,\ldots ,d\), \(j=1,\ldots ,\ell \) (see [12, Eq. (III.4.4) and the explanation before]). On the other hand, since A and B are absolutely continuous with respect to C, we find two \({\mathbb {G}}\)-predictable processes \(\alpha \) and \(\beta \) such that \(A=\alpha \cdot C\) and \(B=\beta \cdot C\). In conclusion, we get \(\langle R,(X^ i)^ c \rangle =a^{Ri}\alpha \cdot C\) and \(\langle R,(Y^ j)^ c \rangle =b^{Rj}\beta \cdot C\), \(i=1,\ldots ,d\), \(j=1,\ldots ,\ell \). So, if we define the \({\mathbb {G}}\)-predictable process \(c^{R i}\), \(i=1,\ldots ,d+\ell \), by

$$\begin{aligned} c^{Ri}:={\left\{ \begin{array}{ll}a^{Ri}\alpha ,\quad i=1,\ldots ,d\\ \\ b^{Ri-d}\beta , \quad i=d+1,\ldots ,d+\ell ,\end{array}\right. } \end{aligned}$$

we get \(\langle R,(Z^ i)^ c \rangle =c^{Ri}\cdot C\), \(i=1,\ldots ,d+\ell \). By the linearity of the predictable quadratic covariation and [12, Theorem III.4.5b)] applied to the two stochastic integrals \(N_-K\cdot X^ c\) and \(M_-J\cdot Y^ c\), we compute

$$\begin{aligned} \begin{aligned} \langle R,N_-K\cdot X^ c+M_-J\cdot Y^ c \rangle&=\langle R,N_-K\cdot X^ c \rangle +\langle R,M_-J\cdot Y^ c \rangle \\ {}&=\sum _{i=1}^ dN_-K^ ia^{Ri}\cdot A+\sum _{i=1}^ \ell M_-J^ ib^{Ri}\cdot B \\ {}&=\bigg (\sum _{i=1}^ dN_-K^ ia^{Ri}\alpha +\sum _{i=1}^ \ell M_-J^ ib^{Ri}\beta \bigg )\cdot C \\ {}&=\bigg (\sum _{i=1}^ dN_-K^ ic^{Ri}+\sum _{i=1}^ \ell M_-J^ ic^{Rd+i}\bigg )\cdot C \\ {}&=\bigg (\sum _{i=1}^{d+\ell }H^ ic^{Ri}\bigg )\cdot C=\langle R,H\cdot Z^ c \rangle \end{aligned} \end{aligned}$$

where in the last identity we again applied [12, Theorem III.4.5b)]. This latter computation together with [12, Theorem III.4.5b)] yields that \(H\cdot Z^ c\) and \(N_-K\cdot X^ c+M_-J\cdot Y^ c\) are indistinguishable, because R was chosen arbitrarily. The proof of Step 3 is complete.

Step 4: Representation of \(\xi \eta \). We now introduce the \({\mathscr {P}}({\mathbb {G}})\otimes {\mathscr {B}}({\mathbb {R}}^ d\times {\mathbb {R}}^\ell )\)-measurable function \((\omega ,t,x,y)\mapsto G(\omega ,t,x,y)\) by setting, for \((\omega ,t,x,y)\in \Omega \times {\mathbb {R}}_+\times {\mathbb {R}}^ d\times {\mathbb {R}}^\ell \),

$$\begin{aligned} G(\omega ,t,x,y):= & {} N_{t-}(\omega )W(\omega ,t,x)g_1(x,y)+M_{t-}(\omega )V(\omega ,t,y)g_2(x,y)\\&+U(\omega ,t,x,y), \end{aligned}$$

where the \({\mathbb {G}}\)-predictable function U has been defined in (3.10) and the functions \(g_1\) and \(g_2\) have been defined at the beginning of Step 2. Step 1 and Step 2 yield the inclusion \(G\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\), and by the linearity of the stochastic integral with respect to \(\mu ^ Z-\nu ^ Z\), we get the identity

$$\begin{aligned} N_-W*(\mu ^ X-\nu ^ X)+M_-V*(\mu ^ Y-\nu ^ Y)+[M,N]=G*(\mu ^{Z}-\nu ^{Z}). \end{aligned}$$
(3.15)

Let us now consider the \({\mathbb {R}}^{d}\times {\mathbb {R}}^\ell \)-valued continuous local martingale \(Z^ c=(X^ c,Y^ c)^{tr}\). By Step 3, we have the identity \(H\cdot Z^ c=N_-K\cdot X^ c+M_-J\cdot Y^ c\), where \(H\in \mathrm {L}^2(Z^ c,{\mathbb {G}})\) has been defined in Step 3. Therefore, from (3.15) and (3.5), we get \(M_tN_t=M_0N_0+H\cdot Z^ c_t+G*(\mu ^{Z}-\nu ^{Z})_t\), \(t\ge 0\). Now, using that each term on the right- and on the left-hand side in this latter expression belongs to \({\mathscr {H}}^2({\mathbb {G}})\), taking the limit \(t\rightarrow +\infty \), by the martingale convergence theorem (see [12, Theorem I.1.42a)]), we get

$$\begin{aligned} \xi \eta \!=\!{\mathbb {E}}[\xi \eta |{\mathscr {G}}_0]\!+\!H\cdot Z^ c_\infty +G*(\mu ^ Z\!-\!\nu ^ Z)_\infty ,\quad H\in \mathrm {L}^2(Z^ c,{\mathbb {G}}),\quad G\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}}), \end{aligned}$$

where we used the independence to write \(M_0N_0={\mathbb {E}}[\xi \eta |{\mathscr {G}}_0]\). The proof of Step 4 is complete.

Step 5: Representation of bounded \({\mathscr {G}}_\infty \)-measurable random variables. As an application of the monotone class theorem, we now show that every \(\xi \in {\mathbb {B}}({\mathscr {G}}_\infty )\) can be represented as

$$\begin{aligned} \xi ={\mathbb {E}}[\xi |{\mathscr {G}}_0]+K\cdot Z^ c_\infty +W*(\mu ^{Z}-\nu ^{Z})_\infty , \end{aligned}$$
(3.16)

where \(K\in \mathrm {L}^2(Z^ c,{\mathbb {G}})\) and \(W\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\). To this aim, we denote by \({\mathscr {K}}\) the linear space of random variables \(\xi \in {\mathbb {B}}({\mathscr {G}}_\infty )\) that can be represented as in (3.16). We denote by \({\mathscr {C}}\subseteq {\mathbb {B}}({\mathscr {G}}_\infty )\) the family \({\mathscr {C}}:=\{\xi \eta ,\ \xi \in {\mathbb {B}}({\mathscr {F}}_\infty ),\ \eta \in {\mathbb {B}}({\mathscr {H}}_\infty )\}\). Then, \({\mathscr {C}}\) is clearly stable under multiplication and \(\sigma ({\mathscr {C}})={\mathscr {G}}_\infty \). Furthermore, by Step 4, we have \({\mathscr {C}}\subseteq {\mathscr {K}}\). The linear space \({\mathscr {K}}\) is a monotone class of \({\mathbb {B}}({\mathscr {G}}_\infty )\). Indeed, let \((\xi _n)_n\subseteq {\mathscr {K}}\) be a uniformly bounded sequence such that \(\xi _n\ge 0\) and \(\xi _n\uparrow \xi \) pointwise. Then, \(\xi \) is bounded and, by dominated convergence, we get \(\xi _n\longrightarrow \xi \) in \(L^2(\Omega ,{\mathscr {G}}_\infty ,{\mathbb {P}})\) as \(n\rightarrow +\infty \). From Lemma A.2, we immediately get that \(\xi \in {\mathscr {K}}\). Since the inclusion \(1\in {\mathscr {K}}\) obviously holds, we see that \({\mathscr {K}}\) is a monotone class of \({\mathbb {B}}({\mathscr {G}}_\infty )\). The monotone class theorem for functions (see [10, Theorem 1.4]) now yields the inclusion \({\mathbb {B}}({\mathscr {G}}_\infty )\subseteq {\mathscr {K}}\). The proof of Step 5 is complete.

Step 6: Approximation and conclusion. Let now \(\xi \in L^2(\Omega ,{\mathscr {G}}_\infty ,{\mathbb {P}})\) be a nonnegative random variable. Then, the random variable \(\xi _n:=\xi \wedge n\), \(n\ge 1\), is bounded and furthermore \(\xi _n\longrightarrow \xi \) in \(L^2(\Omega ,{\mathscr {G}}_\infty ,{\mathbb {P}})\) as \(n\rightarrow +\infty \). By Step 5, \(\xi _n\) can be represented as in (3.16), for every \(n\ge 1\). By Lemma A.2, we get that the same holds for \(\xi \). If now \(\xi \in L^2(\Omega ,{\mathscr {G}}_\infty ,{\mathbb {P}})\) is an arbitrary random variable, we write \(\xi \) as the difference of its positive and negative parts: \(\xi =\xi ^+-\xi ^-\). Clearly, we have \(\xi ^\pm \in L^2(\Omega ,{\mathscr {G}}_\infty ,{\mathbb {P}})\) and \(\xi ^\pm \ge 0\), so \(\xi ^\pm \) can be represented as in (3.16). Using now the linearity of the stochastic integrals, we obtain that \(\xi \) can be represented as in (3.16) as well. To conclude the proof, we now use that the Hilbert spaces \(({\mathscr {H}}^2({\mathbb {G}}),\Vert \cdot \Vert _2)\) and \((L^2(\Omega ,{\mathscr {G}}_\infty ,{\mathbb {P}}),\Vert \cdot \Vert _2)\) are isomorphic. Therefore, we have that every \(S\in {\mathscr {H}}^2({\mathbb {G}})\) can be represented as

$$\begin{aligned} S_t={\mathbb {E}}[S_\infty |{\mathscr {G}}_t]=S_0+K\cdot Z^ c_t+W*(\mu ^{Z}-\nu ^{Z})_t,\quad t\ge 0, \end{aligned}$$

where \(K\in \mathrm {L}^2(Z^ c,{\mathbb {G}})\) and \(W\in {\mathscr {G}}^2(\mu ^{Z},{\mathbb {G}})\). Because of Proposition 3.2, this means that the \({\mathbb {R}}^{d}\times {\mathbb {R}}^{\ell }\)-valued semimartingale \(Z=(X,Y)^{tr}\) has the WRP with respect to \({\mathbb {G}}\).

The proof of the theorem is now complete. \(\square \)

Examples 3.7

We now discuss some cases in which the propagation of the WRP to the independently enlarged filtration \({\mathbb {G}}\) immediately follows from Theorem 3.6. Notice that in all the following examples we do not exclude that the semimartingales X and Y may have common jumps. Furthermore, \(\Delta X\) and \(\Delta Y\) may charge the same predictable times.

(1) Assume that the semimartingales \((X,{\mathbb {F}}^ X)\) and \((Y,{\mathbb {F}}^ Y)\) are independent step processes (see [10, Definition 11.55]) and \({\mathbb {F}}:={\mathbb {F}}^ X\vee {\mathscr {R}}^ X\), \({\mathbb {H}}:={\mathbb {F}}^ Y\vee {\mathscr {R}}^ Y\), where \({\mathscr {R}}^ X\) and \({\mathscr {R}}^ Y\) are some initial \(\sigma \)-fields such that \({\mathbb {F}}\) and \({\mathbb {H}}\) are independent (i.e., such that \(\{{\mathscr {F}}_\infty ^ X,{\mathscr {R}}^ X\}\) and \(\{{\mathscr {F}}_\infty ^ Y,{\mathscr {R}}^ Y\}\) are independent). Then, [10, Theorem 13.19] yields that X has the WRP with respect to \({\mathbb {F}}\) and Y has the WRP with respect to \({\mathbb {H}}\). Because of Theorem 3.6, we deduce that the \({\mathbb {R}}^ 2\)-valued \({\mathbb {G}}\)-semimartingale \(Z=(X,Y)^{tr}\) has the WRP with respect to \({\mathbb {G}}={\mathbb {F}}\vee {\mathbb {H}}\).

(2) We can generalize the case in (1) as follows: Let X and Y be as in (1). Let B and W be two independent Brownian motions such that B is independent of X and W is independent of Y. We define \(R:=B+X\) and \(S:=W+Y\). Assume furthermore for simplicity that \({\mathscr {R}}^ X\) and \({\mathscr {R}}^ Y\) in (1) are both trivial. By [13, Corollary 2], R has the WRP with respect to \({\mathbb {F}}^ R\) and S has the WRP with respect to \({\mathbb {F}}^ S\). Assuming that \({\mathbb {F}}^ R\) and \({\mathbb {F}}^ S\) are independent, Theorem 3.6 implies that the \({\mathbb {R}}^ 2\)-valued \({\mathbb {G}}\)-semimartingale \(Z=(R,S)^{tr}\) has the WRP with respect to \({\mathbb {G}}={\mathbb {F}}^ R\vee {\mathbb {F}}^ S\).

(3) We take Y and \({\mathbb {H}}\) as in (1) but \((X,{\mathbb {F}}^ X)\) is assumed to be an \({\mathbb {R}}^ d\)-valued semimartingale with conditionally independent increments with respect to \({\mathbb {F}}={\mathbb {F}}^ X\vee {\mathscr {R}}^ X\). Again we assume that \({\mathbb {F}}\) and \({\mathbb {H}}\) are independent. From [12, Theorem III.4.34] (i), X has the WRP with respect to \({\mathbb {F}}\). Hence, Theorem 3.6 yields that the \({\mathbb {R}}^ d\times {\mathbb {R}}\)-valued \({\mathbb {G}}\)-semimartingale \(Z=(X,Y)^{tr}\) has the WRP with respect to \({\mathbb {G}}={\mathbb {F}}\vee {\mathbb {H}}\).

(4) Let \((X,{\mathbb {F}}^ X)\) be an \({\mathbb {R}}^ d\)-valued semimartingale, and let \((Y,{\mathbb {F}}^ Y)\) be an \({\mathbb {R}}^\ell \)-valued semimartingale. Let \({\mathscr {R}}^ X\) and \({\mathscr {R}}^ Y\) denote two initial \(\sigma \)-fields. We assume that X has conditionally independent increments with respect to \({\mathbb {F}}={\mathbb {F}}^ X\vee {\mathscr {R}}^ X\) and Y has conditionally independent increments with respect to \({\mathbb {H}}:={\mathbb {F}}^ Y\vee {\mathscr {R}}^ Y\). As an immediate consequence of [12, Theorem III.4.34] and of Theorem 3.6, if \({\mathbb {F}}\) and \({\mathbb {H}}\) are independent, we get that \(Z=(X,Y)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}={\mathbb {F}}\vee {\mathbb {H}}\).

(5) Combining Theorem 3.6 and [13, Theorem 2], we can construct several new semimartingales possessing the WRP. Indeed, let \(X^{1}\) and \(X^{2}\) be real-valued semimartingales possessing the WRP with respect to \({\mathbb {F}}^ 1\) and \({\mathbb {F}}^ 2\), respectively. Assume that \({\mathbb {F}}^1\) and \({\mathbb {F}}^ 2\) are independent and set \({\mathbb {F}}={\mathbb {F}}^1\vee {\mathbb {F}}^ 2\). Assume furthermore that the sets of the \({\mathbb {F}}\)-predictable jump-times of \(X^{1}\) and \(X^{2}\) are disjoint and that the second and the third \({\mathbb {F}}\)-semimartingale characteristics of \(X^{1}\) and \(X^{2}\) are mutually singular on \({\mathscr {P}}({\mathbb {F}})\otimes {\mathscr {B}}({\mathbb {R}})\) (see [13, Theorem 2]). Then, by [13, Theorem 2], the semimartingale \(X=X^{1}+X^{2}\) possesses the WRP with respect to \({\mathbb {F}}\) (see also [13, Corollary 1 and Corollary 2]). We then consider semimartingales \(Y^{1}\) and \(Y^{2}\) with respect to \({\mathbb {H}}^ 1\) and \({\mathbb {H}}^ 2\), respectively, set \({\mathbb {H}}:={\mathbb {H}}^1\vee {\mathbb {H}}^2\), and make on \(Y^{1}\) and \(Y^{2}\) assumptions similar to those made on \(X^{1}\) and \(X^{2}\). Then, the semimartingale \(Y=Y^{1}+Y^{2}\) has the WRP with respect to \({\mathbb {H}}\). If we now assume that \({\mathbb {F}}\) and \({\mathbb {H}}\) are independent, we get by Theorem 3.6 that the \({\mathbb {R}}^2\)-valued semimartingale \(Z=(X,Y)^{tr}\) has the WRP with respect to \({\mathbb {G}}:={\mathbb {F}}\vee {\mathbb {H}}\).

(6) The counterexample constructed in [13] after Corollary 2 therein can also be handled with the help of Theorem 3.6. We now use the notation of [13]: Let \(X^{1}\) and \(X^{2}\) denote the processes introduced in [13] after Corollary 2. There it is shown that the semimartingale \(X^{1}+X^{2}\) does not possess the WRP with respect to the filtration \({\mathbb {F}}:={\mathbb {F}}^1\vee {\mathbb {F}}^2\). However, by Theorem 3.6, we see that the \({\mathbb {R}}^2\)-valued semimartingale \(Z=(X^{1},X^{2})^{tr}\) possesses the WRP with respect to \({\mathbb {F}}\).
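To make the common predictable jumps allowed in the examples above concrete, here is a minimal simulation sketch (our illustration; the construction, names and jump times are assumptions, not taken from the paper): X and Y are independent step processes which both jump at the deterministic, hence predictable, time t = 1.

```python
import numpy as np

# Illustration only: two independent step processes with a common predictable jump time.
rng = np.random.default_rng(2)

def step_path(jump_times, rng):
    """An independent step process: i.i.d. +/-1 jumps at the given (deterministic,
    hence predictable) jump times."""
    sizes = rng.choice([-1.0, 1.0], size=len(jump_times))
    return dict(zip(jump_times, sizes))

# X jumps at t = 1 and t = 2; Y jumps at t = 1 and t = 3: the predictable time
# t = 1 is charged by both, so Delta X and Delta Y can be simultaneously nonzero.
jumps_X = step_path([1.0, 2.0], rng)
jumps_Y = step_path([1.0, 3.0], rng)

common = sorted(set(jumps_X) & set(jumps_Y))
print("common predictable jump times of Z = (X, Y):", common)
print("simultaneous jumps:", [(t, jumps_X[t], jumps_Y[t]) for t in common])
```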

3.2 Propagation of the Predictable Representation Property

In this subsection, we investigate the propagation of the predictable representation property to the independently enlarged filtration. To begin with, we state the following definition of the predictable representation property.

Definition 3.8

Let \(X=(X^1,\ldots ,X^ d)^{tr}\) be such that \(X^ i\in {\mathscr {H}}_{\mathrm {loc}}^ 1({\mathbb {F}})\), \(i=1,\ldots ,d\). We say that the multidimensional local martingale X has the predictable representation property (from now on PRP) with respect to \({\mathbb {F}}\) if, for every \(Y\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}})\), there exists \(K\in \mathrm {L}^1_\mathrm {loc}(X,{\mathbb {F}})\), such that \(Y=Y_0+K\cdot X\) holds.

In the next proposition, we state the relation between the PRP and the WRP for multidimensional local martingales. We stress that at this point we cannot directly use [10, Theorem 13.14], because that result is only formulated for \({\mathbb {R}}\)-valued (and not for multidimensional) local martingales. Although the two formulations look notationally similar, the difference is essential: In the proof of Proposition 3.9, the stochastic integral with respect to multidimensional local martingales is needed instead of the usual (componentwise) stochastic integral.

Proposition 3.9

Let \(X=(X^1,\ldots ,X^ d)^{tr}\) be such that \(X^ i\in {\mathscr {H}}^ 1_\mathrm {loc}({\mathbb {F}})\), \(i=1,\ldots ,d\). Assume that X possesses the PRP with respect to \({\mathbb {F}}\). Then, X possesses the WRP with respect to \({\mathbb {F}}\).

We postpone the proof of Proposition 3.9 to the Appendix. We stress that its converse is, in general, not true: If, for example, X is an \({\mathbb {R}}\)-valued homogeneous Lévy process and a martingale, then X possesses the WRP with respect to \({\mathbb {F}}^ X\), but it possesses the PRP with respect to \({\mathbb {F}}^ X\) if and only if it is a Brownian motion or a compensated Poisson process (see, e.g., [10, Corollary 13.54]).
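To make this concrete, we sketch the argument (a standard one, included only for illustration) for the Lévy martingale \(X_t:=W_t+\widetilde N_t\), where W is a Brownian motion, N an independent Poisson process with intensity \(\lambda >0\) and \(\widetilde N_t:=N_t-\lambda t\). Here \(X^ c=W\) and \(\nu ^ X(\mathrm {d}t,\mathrm {d}x)=\lambda \,\mathrm {d}t\,\delta _1(\mathrm {d}x)\), so the WRP of X with respect to \({\mathbb {F}}^ X\) states that every \(M\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}}^ X)\) can be written as

$$\begin{aligned} M_t=M_0+\int _0^ t K_s\,\mathrm {d}W_s+\int _0^ t U_s\,\mathrm {d}\widetilde N_s,\quad t\ge 0, \end{aligned}$$

with two, in general different, predictable integrands K and U. The PRP would instead require a single predictable integrand against X itself: For instance, a representation \(W=J\cdot X\) would give, by the uniqueness of the decomposition of a local martingale into its continuous and purely discontinuous parts, \(J\cdot W=W\) and \(J\cdot \widetilde N=0\), that is, \(J=1\) and \(J=0\) \(\mathrm {d}t\otimes \mathrm {d}{\mathbb {P}}\)-a.e., which is impossible.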

Corollary 3.10

(to Theorem 3.6: Propagation of the PRP) Let \({\mathbb {F}}\) and \({\mathbb {H}}\) satisfy Assumption 3.3. Let \((X,{\mathbb {F}})\) be an \({\mathbb {R}}^ d\)-valued local martingale possessing the PRP with respect to \({\mathbb {F}}\), and let \((Y,{\mathbb {H}})\) be an \({\mathbb {R}}^ \ell \)-valued semimartingale. Then, the \({\mathbb {R}}^{d}\times {\mathbb {R}}^{\ell }\)-valued \({\mathbb {G}}\)-semimartingale \(Z=(X,Y)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}\) if one of the following two conditions is satisfied:

(i) Y has the WRP with respect to \({\mathbb {H}}\).

(ii) Y is an \({\mathbb {H}}\)-local martingale possessing the PRP with respect to \({\mathbb {H}}\).

Proof

The statement is an immediate consequence of Proposition 3.9 and of Theorem 3.6. \(\square \)

Remark 3.11

We recall the following result of Calzolari and Torti [6]: If X has the PRP with respect to \({\mathbb {F}}={\mathbb {F}}^ X\), Y has the PRP with respect to \({\mathbb {H}}={\mathbb {F}}^ Y\) and \({\mathbb {F}}^ X\) and \({\mathbb {F}}^ Y\) are independent (at least under an equivalent martingale measure), then, under some additional conditions (in particular, the triviality of \({\mathscr {F}}^ X_0\) and \({\mathscr {F}}^ Y_0\) and the local square integrability of X and Y), every \(S\in {\mathscr {H}}^2({\mathbb {G}})\) can also be represented as

$$\begin{aligned} S=S_0+K\cdot X+H\cdot Y+V\cdot [X,Y], \end{aligned}$$
(3.17)

for \(K\in \mathrm {L}^2(X,{\mathbb {G}})\), \(H\in \mathrm {L}^2(Y,{\mathbb {G}})\) and \(V\in \mathrm {L}^2([X,Y],{\mathbb {G}})\). Corollary 3.10 (ii) shows that the representation in (3.17) can also be formulated in terms of the WRP with respect to \(Z=(X,Y)^{tr}\).

4 Applications

In this part, we discuss two consequences of Theorem 3.6. First, in Sect. 4.1, we show the propagation of the WRP to the iterated enlargement by jointly independent filtrations. In Sect. 4.2, we show the propagation of the WRP to the progressive enlargement of \({\mathbb {F}}\) by a random time \(\tau \) satisfying Jacod’s equivalence hypothesis.

4.1 The Iterated Enlargement

We recall that the filtrations \({\mathbb {F}}^ 1,\ldots ,{\mathbb {F}}^ n\) are called jointly independent if \(\{{\mathscr {F}}^ 1_\infty ,\ldots ,{\mathscr {F}}^ n_\infty \}\) is an independent family of \(\sigma \)-fields.

Theorem 4.1

Let \((X^ i,{\mathbb {F}}^ i)\) be an \({\mathbb {R}}^{d_i}\)-valued semimartingale, \(d_i\in {\mathbb {N}}\), and let \((B^ i,C^ i,\nu ^ i)\) denote the \({\mathbb {F}}^ i\)-predictable characteristics of \(X^ i\), \(i=1,\ldots ,n\). We denote \({\mathbb {G}}:=\bigvee _{i=1}^ n{\mathbb {F}}^ i\), and we assume that \({\mathbb {F}}^ 1,\ldots ,{\mathbb {F}}^ n\) are jointly independent. We then have:

(i) The filtration \({\mathbb {G}}\) is right continuous.

(ii) \(X^ i\) is a \({\mathbb {G}}\)-semimartingale and its \({\mathbb {G}}\)-predictable characteristics are \((B^ i,C^ i,\nu ^ i)\), \(i=1,\ldots ,n\).

(iii) If \(X^ i\) possesses the WRP with respect to \({\mathbb {F}}^ i\), \(i=1,\ldots ,n\), then the \({\mathbb {R}}^{d_1}\times \cdots \times {\mathbb {R}}^{d_n}\)-valued semimartingale \(Z=(X^ 1,\ldots ,X^ n)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}\).

Proof

We show the result by induction, as a direct consequence of Theorem 3.6. To this aim, we observe that the joint independence of \({\mathbb {F}}^ 1,\ldots ,{\mathbb {F}}^ n\) is equivalent to the joint independence of \({\mathbb {F}}^ 1,\ldots ,{\mathbb {F}}^{n-1}\) together with the independence of \({\mathbb {F}}^ n\) from \(\bigvee _{i=1}^{n-1}{\mathbb {F}}^ i\). We now come to the inductive argument. If \(n=1\), there is nothing to show. We assume that (i), (ii) and (iii) hold for \(n=m-1\) and verify them for \(n=m\). By the induction hypothesis and Lemma A.3, we immediately obtain (i). Analogously, from Proposition 3.4, we deduce (ii). Let us now define \(X:=(X^ 1,\ldots ,X^{m-1})^{tr}\) and \(Y:=X^ m\). Since \({\mathbb {G}}=\big (\bigvee _{i=1}^{m-1}{\mathbb {F}}^ i\big )\vee {\mathbb {F}}^ m\), by the induction hypothesis and Theorem 3.6, we obtain that Z has the WRP with respect to \({\mathbb {G}}\), which is (iii). The proof of the theorem is complete. \(\square \)

Corollary 4.2

Let \((X^ i,{\mathbb {F}}^ i)\) be an \({\mathbb {R}}^{d_i}\)-valued local martingale (i.e., \(X^{ij}\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {F}}^ i)\), \(j=1,\ldots ,d_i\), \(i=1,\ldots ,n\)) such that \(X^ i\) possesses the PRP with respect to \({\mathbb {F}}^ i\), \(i=1,\ldots ,n\). We denote \({\mathbb {G}}:=\bigvee _{i=1}^ n{\mathbb {F}}^ i\), and we assume that \({\mathbb {F}}^ 1,\ldots ,{\mathbb {F}}^ n\) are jointly independent. Then, \((X^ i,{\mathbb {G}})\) are local martingales, \(i=1,\ldots ,n\), and the \({\mathbb {R}}^{d_1}\times \cdots \times {\mathbb {R}}^{d_n}\)-valued \({\mathbb {G}}\)-local martingale \(Z=(X^ 1,\ldots ,X^ n)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}\).

Proof

Because of the independence, we obtain by induction and Lemma A.4 that \((X^ i,{\mathbb {G}})\) are local martingales, \(i=1,\ldots ,n\). From Proposition 3.9 and Theorem 4.1 (iii), we immediately get the WRP of Z with respect to \({\mathbb {G}}\). The proof of the corollary is complete. \(\square \)
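As a simple illustration of Corollary 4.2 (an example of our choosing, not taken from the references), let \(n=2\), let W be a Brownian motion possessing the PRP with respect to \({\mathbb {F}}^1:={\mathbb {F}}^ W\), let \(\widetilde N_t:=N_t-\lambda t\) be a compensated Poisson process possessing the PRP with respect to \({\mathbb {F}}^2:={\mathbb {F}}^ N\), and let \({\mathbb {F}}^ 1\) and \({\mathbb {F}}^ 2\) be independent. Then, \(Z=(W,\widetilde N)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}={\mathbb {F}}^1\vee {\mathbb {F}}^2\): Every \(M\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {G}})\) can be written as

$$\begin{aligned} M_t=M_0+\int _0^ t K_s\,\mathrm {d}W_s+\int _0^ t U_s\,\mathrm {d}\widetilde N_s,\quad t\ge 0, \end{aligned}$$

for \({\mathbb {G}}\)-predictable integrands K and U, since \(Z^ c=(W,0)^{tr}\) and the jump measure of Z only charges jumps of size \((0,1)^{tr}\).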

4.2 Jacod’s Equivalence Hypothesis

Let \((X,{\mathbb {F}})\) be an \({\mathbb {R}}^ d\)-valued semimartingale, and let \((B^ X,C^ X,\nu ^ X)\) be the \({\mathbb {F}}\)-predictable characteristics of X. We assume that \({\mathbb {F}}\) satisfies the usual conditions. Let \(\tau :\Omega \longrightarrow [0,+\infty ]\) be a random time. We stress that \(\tau \) is a random variable, but it is not necessarily an \({\mathbb {F}}\)-stopping time. We denote by \(H=1_{[\tau ,+\infty )}\) the default process associated with \(\tau \) and by \({\mathbb {H}}\) the smallest filtration satisfying the usual conditions such that H is \({\mathbb {H}}\)-adapted. We stress that, being a point process, H possesses the WRP with respect to \({\mathbb {H}}\). In this part, we do not assume that \(\tau \) and \({\mathbb {F}}\) (i.e., that \({\mathbb {H}}\) and \({\mathbb {F}}\)) are independent. We rather work under the following assumption (see, e.g., [1, Definition 4.13]):

Assumption 4.3

(Jacod’s equivalence hypothesis) Let \(F_\tau \) denote the law of \(\tau \). The regular conditional distribution of \(\tau \) given \({\mathscr {F}}_t\) is equivalent to the distribution of \(\tau \); that is, if \(P_t(\cdot ,A)\) denotes a version of \({\mathbb {P}}[\tau \in A|{\mathscr {F}}_t]\), \(A\in {\mathscr {B}}([0,+\infty ])\), we have:

$$\begin{aligned} P_t(\mathrm {d}u)\sim F_\tau (\mathrm {d}u),\quad \text {a.s.}, \end{aligned}$$

where the symbol \(\sim \) denotes the equivalence of the two measures.
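The simplest situation in which Assumption 4.3 holds (recalled here only for orientation) is the independent case: If \(\tau \) is independent of \({\mathscr {F}}_\infty \), then

$$\begin{aligned} {\mathbb {P}}[\tau \in A|{\mathscr {F}}_t]={\mathbb {P}}[\tau \in A]=F_\tau (A),\quad A\in {\mathscr {B}}([0,+\infty ]), \end{aligned}$$

so that Assumption 4.3 is satisfied with density \(p_t(u)\equiv 1\) (in the notation introduced below). The interest of Assumption 4.3 lies, of course, in the dependent case.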

We denote by \({\mathbb {G}}\) the smallest filtration containing \({\mathbb {F}}\) and \({\mathbb {H}}\); that is, \({\mathbb {G}}={\mathbb {F}}\vee {\mathbb {H}}\) is the progressive enlargement of \({\mathbb {F}}\) by \(\tau \). We notice that \({\mathbb {G}}\) obviously coincides with the smallest filtration containing \({\mathbb {F}}\) and such that \(\tau \) is a \({\mathbb {G}}\)-stopping time.

In Assumption 4.3, we do not require the continuity of \(F_\tau \). Therefore, if \(\tau \) satisfies Assumption 4.3, then \(\tau \) need not avoid \({\mathbb {F}}\)-stopping times, i.e., it can happen that \({\mathbb {P}}[\tau =\sigma <+\infty ]>0\) for some \({\mathbb {F}}\)-stopping time \(\sigma \). If we require the continuity of \(F_\tau \), we then have \({\mathbb {P}}[\tau =\sigma <+\infty ]=0\), for every \({\mathbb {F}}\)-stopping time \(\sigma \) (see [9, Corollary 2.2]).
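Before proceeding, we recall explicitly (as a standard fact, not needed in this form later) what the WRP of H with respect to \({\mathbb {H}}\), mentioned before Assumption 4.3, amounts to: Every \(M\in {\mathscr {H}}^1_{\mathrm {loc}}({\mathbb {H}})\) can be written as

$$\begin{aligned} M_t=M_0+\int _{(0,t]}K_s\,\mathrm {d}(H_s-\Lambda _s),\qquad \Lambda _t=\int _{(0,t\wedge \tau ]}\frac{F_\tau (\mathrm {d}u)}{F_\tau ([u,+\infty ])}, \end{aligned}$$

for a suitable \({\mathbb {H}}\)-predictable process K, where \(\Lambda \) denotes the \({\mathbb {H}}\)-compensator of H, computed here by the classical hazard-measure formula. Notice that, if \(F_\tau \) charges a point \(u_0\in (0,+\infty )\), then \(\Lambda \) jumps at the (deterministic, hence predictable) time \(u_0\) and H is not quasi-left continuous.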

The propagation of the PRP for an \({\mathbb {R}}\)-valued local martingale \((X,{\mathbb {F}})\) to the progressive enlargement \({\mathbb {G}}\) of \({\mathbb {F}}\) by a random time \(\tau \) satisfying Jacod’s equivalence hypothesis has been investigated in [4] under the additional condition that \(F_\tau \) is continuous. As a consequence of Theorem 3.6, we now show that, under Assumption 4.3, the WRP propagates to \({\mathbb {G}}\) even without this continuity assumption on \(F_\tau \).

It is well known that under Assumption 4.3 the following properties hold:

  • \({\mathbb {F}}\)-semimartingales remain \({\mathbb {G}}\)-semimartingales (see [1, Theorem 5.30]).

  • There exists a version of the density of \({\mathbb {P}}[\tau \in \mathrm {d}u|{\mathscr {F}}_t]\) with respect to \(F_\tau (\mathrm {d}u)\), denoted by \(p_t(\omega ,u)\), such that the mapping \((\omega ,t,u)\mapsto p_t(\omega ,u)\) is a strictly positive \(\widetilde{\mathscr {O}}({\mathbb {F}})\)-measurable function and, for every fixed \(u\in {\mathbb {R}}_+\), the process \((t,\omega )\mapsto p_t(\omega ,u)\) is an \({\mathbb {F}}\)-martingale (see [1, Theorem 4.17] or [2, Lemma 2.2]).

  • The process \(L:=p_0(\tau )/p_\cdot (\tau )\) is a strictly positive \({\mathbb {G}}^\tau \)-martingale with \(L_0=1\), where \({\mathbb {G}}^\tau \) denotes the initial enlargement of \({\mathbb {F}}\) by \(\tau \) (see [1, Theorem 4.37]).

Using the process L defined above as a density, according to [1, Theorem 4.37], for every arbitrary but fixed deterministic time \(T>0\), we can define the probability measure \({\mathbb {Q}}\) on \({\mathscr {G}}_T^\tau \) by setting \(\mathrm {d}{\mathbb {Q}}\big |_{{\mathscr {G}}_T^\tau }:=L_T\,\mathrm {d}{\mathbb {P}}\big |_{{\mathscr {G}}_T^\tau }\). The measure \({\mathbb {Q}}\) has the following properties:

  (P1) \({\mathbb {Q}}\big |_{{\mathscr {G}}_T^\tau }\sim {\mathbb {P}}\big |_{{\mathscr {G}}_T^\tau }\).

  (P2) \({\mathbb {Q}}\big |_{{\mathscr {F}}_T}={\mathbb {P}}\big |_{{\mathscr {F}}_T}\) and \({\mathbb {Q}}\big |_{\sigma (\tau )}={\mathbb {P}}\big |_{\sigma (\tau )}\).

  (P3) Under \({\mathbb {Q}}\), \({\mathscr {F}}_T\) and \(\sigma (\tau )\) are conditionally independent given \({\mathscr {F}}_0\).

We stress that (P1) only holds on \({\mathscr {G}}^\tau _T\) and not on \({\mathscr {G}}^\tau _\infty \) because, in general, L is not a uniformly integrable martingale (see [1, Remark 4.38]).

Since the inclusions \({\mathscr {F}}_t\subseteq {\mathscr {G}}_t\subseteq {\mathscr {G}}_t^\tau \), \(t\in [0,T]\), hold and \({\mathscr {F}}_T\) contains all the \({\mathbb {P}}\)-null sets of \({\mathscr {F}}\), we also get \({\mathbb {Q}}\big |_{{\mathscr {G}}_T}\sim {\mathbb {P}}\big |_{{\mathscr {G}}_T}\) and \({\mathbb {Q}}\big |_{{\mathscr {H}}_T}={\mathbb {P}}\big |_{{\mathscr {H}}_T}\). Furthermore, \({\mathscr {F}}_T\) and \({\mathscr {H}}_T\subseteq \sigma (\tau )\) are conditionally independent given \({\mathscr {F}}_0\) under \({\mathbb {Q}}\). Notice that, from [4, Lemma 2.10] and the comment after the proof therein, we obtain \(\mathrm {d}{\mathbb {Q}}\big |_{{\mathscr {G}}_T}=\ell _T\,\mathrm {d}{\mathbb {P}}\big |_{{\mathscr {G}}_T}\), where \(\ell \) is a \({\mathbb {G}}\)-martingale.

For the remaining part of this section, \({\mathbb {Q}}\) will denote the equivalent probability measure described above.

Let \(T>0\) be an arbitrary but fixed deterministic time. We denote \({\mathbb {F}}^ T:=({\mathscr {F}}_{t\wedge T})_{t\ge 0}=({\mathscr {F}}_t)_{t\in [0,T]}\) and \({\mathbb {H}}^ T:=({\mathscr {H}}_{t\wedge T})_{t\ge 0}=({\mathscr {H}}_t)_{t\in [0,T]}\).
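The following elementary computation, used in the proofs of Lemma 4.4 and Theorem 4.5 below, makes the independence obtained above explicit. Assuming, as in those statements, that \({\mathscr {F}}_0\) is trivial, we have, for \(A\in {\mathscr {F}}_T\) and \(B\in {\mathscr {H}}_T\subseteq \sigma (\tau )\),

$$\begin{aligned} {\mathbb {Q}}[A\cap B]={\mathbb {Q}}[A]\,{\mathbb {Q}}[B]={\mathbb {P}}[A]\,{\mathbb {P}}[B], \end{aligned}$$

where the first identity follows from (P3) and the triviality of \({\mathscr {F}}_0\), and the second one from (P2). In particular, \({\mathbb {F}}^ T\) and \({\mathbb {H}}^ T\) are independent under \({\mathbb {Q}}\).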

Lemma 4.4

Let \({\mathscr {F}}_0\) be trivial. Then, \({\mathbb {G}}^ T:={\mathbb {F}}^ T\vee {\mathbb {H}}^ T\) is right continuous.

Proof

Because of (P3) and the triviality of \({\mathscr {F}}_0\), we have that \({\mathbb {F}}^ T\) and \({\mathbb {H}}^ T\) are independent under \({\mathbb {Q}}\). Hence, we get the right continuity of \({\mathbb {G}}^ T\) by Lemma A.4. The proof of the lemma is complete. \(\square \)

Theorem 4.5

Let \(\tau \) satisfy Assumption 4.3, and let \((X,{\mathbb {F}})\) be an \({\mathbb {R}}^ d\)-valued semimartingale. Let us furthermore assume that \({\mathscr {F}}_0\) is trivial and that X possesses the WRP with respect to \({\mathbb {F}}\). Then, the \({\mathbb {R}}^ d\times {\mathbb {R}}_+\)-valued \({\mathbb {G}}\)-semimartingale \(Z=(X,H)^{tr}\) possesses the WRP with respect to \({\mathbb {G}}^ T\).

Proof

Assumption 4.3 implies that X is a \({\mathbb {G}}\)-semimartingale with respect to the probability measure \({\mathbb {P}}\). Since \({\mathbb {Q}}\) coincides with \({\mathbb {P}}\) on \({\mathscr {F}}_T\), the semimartingale X possesses the WRP with respect to \({\mathbb {F}}^ T\) also under \({\mathbb {Q}}\). Moreover, under \({\mathbb {Q}}\), H possesses the WRP with respect to \({\mathbb {H}}^ T\) (see [12, Theorem III.4.37]). Because of (P3) and the triviality of \({\mathscr {F}}_0\), the filtrations \({\mathbb {F}}^ T\) and \({\mathbb {H}}^ T\) are independent under \({\mathbb {Q}}\). We can therefore apply Theorem 3.6 under \({\mathbb {Q}}\) to obtain that the \({\mathbb {G}}^ T\)-semimartingale Z possesses the WRP with respect to \({\mathbb {G}}^ T\) under the measure \({\mathbb {Q}}\). By the equivalence of \({\mathbb {Q}}\) and \({\mathbb {P}}\) on \({\mathscr {G}}_T\) and [12, Theorem III.5.24], Z then has the WRP with respect to \({\mathbb {G}}^ T\) also under the original measure \({\mathbb {P}}\). The proof of the theorem is complete. \(\square \)