In this section \(\mathbf {X}=(X_t)_{t\ge 0}\) stands for a càdlàg infinitely divisible process which is representable by some infinitely divisible random measure \(\Lambda \), i.e., a process of the form
$$\begin{aligned} X_t=\int _{(-\infty ,t]\times V} \phi (t, u)\,\Lambda (du), \end{aligned}$$
(3.1)
where \( \phi :\mathbb {R}_+\times (\mathbb {R}\times V) \rightarrow \mathbb {R}\) is a measurable deterministic function and \(\Lambda \) is specified by (2.1)–(2.2). We assume that for every \(u=(s,v) \in \mathbb {R}\times V\), \(\phi (\cdot , u)\) is càdlàg, cf. Remark 3.2. Let \(B\) be given by (2.3). We further assume that
$$\begin{aligned} \int _{(0,t] \times V} \big | B\big (\phi (s, (s,v)), (s,v)\big ) \big | \, \kappa (ds, dv) < \infty \quad \text {for every } t>0. \end{aligned}$$
(3.2)
The following is the main result of this section.
Theorem 3.1
Under the above assumptions \(\mathbf {X}\) is a semimartingale relative to the filtration \(\mathbb {F}^{\Lambda }=(\mathcal {F}^{\Lambda }_t)_{t \ge 0}\) if and only if
$$\begin{aligned} X_t = X_0 + M_t + A_t, \quad t \ge 0, \end{aligned}$$
(3.3)
where \(\mathbf {M}= ( M_t )_{t\ge 0}\) is a semimartingale with independent increments given by the stochastic integral
$$\begin{aligned} M_t = \int _{(0,t] \times V} \phi (s,(s,v))\,\Lambda (ds,dv), \quad t\ge 0, \end{aligned}$$
(3.4)
and \(\mathbf {A}= ( A_t )_{t\ge 0}\) is a predictable càdlàg process of finite variation of the form
$$\begin{aligned} A_t = \int _{(-\infty ,t] \times V} \big [\phi (t,(s,v)) - \phi (s_{+}, (s,v))\big ] \,\Lambda (ds,dv). \end{aligned}$$
(3.5)
Decomposition (3.3) is unique in the following sense: if \(\mathbf {X}=X_0+\mathbf {M}'+\mathbf A '\), where \(\mathbf {M}'\) and \(\mathbf A '\) are processes representable by \(\Lambda \) such that \(\mathbf {M}'\) is a semimartingale with independent increments relative to \(\mathbb {F}^{\Lambda }\) and \(\mathbf A '\) is a predictable càdlàg process of finite variation, then \(\mathbf {M}'=\mathbf {M}+g\) and \(\mathbf A '=\mathbf A -g\) for some càdlàg deterministic function \(g\) of finite variation, where \(\mathbf {M}\) and \(\mathbf A \) are given by (3.4) and (3.5).
\(\mathbf X\) is a special semimartingale if and only if (3.3)–(3.5) hold and \({\mathbb {E}|M_t|<\infty }\) for all \(t>0\). In this case, \((M_t-\mathbb {E}M_t)_{t\ge 0}\) is a martingale and
$$\begin{aligned} X_t= X_0+ (M_t -\mathbb {E}M_t)+ (A_t+\mathbb {E}M_t),\quad t\ge 0 \end{aligned}$$
is the canonical decomposition of \(\mathbf {X}\).
In the next section we use Theorem 3.1 to characterize the semimartingale property of various infinitely divisible processes with stationary increments. In the remainder of this section we prove Theorems 3.1 and 1.6; first, however, we give two remarks and an example.
Remark 3.2
If \(\mathbf {X}\) given by (3.1) is a semimartingale relative to \(\mathbb {F}^\Lambda \), and \(\Lambda \) satisfies the non-deterministic condition
$$\begin{aligned} \kappa \big (u\in \mathbb {R}\times V\! : \sigma ^2(u)=0,\, \rho _u(\mathbb {R})=0 \big )=0, \end{aligned}$$
(3.6)
then \(\phi \) can be chosen such that \(\phi (\cdot , u)\) is càdlàg for every \(u=(s,v) \in \mathbb {R}\times V\). The proof of this statement is given in Appendix A.
Remark 3.3
Condition (3.2) is always satisfied when \(\Lambda \) is symmetric. Indeed, in this case \(B\equiv 0\).
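To make this concrete: in the Rajput–Rosiński-type parametrization behind (2.1)–(2.3), the centering function takes the form below, where \([[x]]\) denotes the truncation function used throughout (a sketch, assuming (2.3) has this standard form):
$$\begin{aligned} B\big (y,(s,v)\big )= y\, b(s,v)+\int _{\mathbb {R}} \big ([[xy]]-y[[x]]\big )\, \rho _{(s,v)}(dx). \end{aligned}$$
If \(\Lambda \) is symmetric then \(b\equiv 0\) and each \(\rho _{(s,v)}\) is symmetric, while \(x\mapsto [[xy]]-y[[x]]\) is odd (as \([[-x]]=-[[x]]\)); hence the integral vanishes and \(B\equiv 0\).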
Example 3.4
Consider the setting in Theorem 3.1 and suppose that \(\Lambda \) is an \(\alpha \)-stable random measure with \(\alpha \in (0,1)\). Then \(\mathbf X\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \) if and only if it is of finite variation. This follows from Theorem 3.1 because the process \(\mathbf M\) given by (3.4) is of finite variation. Indeed, the Lévy–Itô decomposition of \(\mathbf {M}\) [22, II, 2.34] combined with [22, II, 1.28] shows that \(\mathbf {M}\) is of finite variation.
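To see the finite variation of \(\mathbf M\) concretely, suppose for illustration that \(\rho _{(s,v)}(dx)=c\,|x|^{-1-\alpha }\,dx\) (the rotation-invariant \(\alpha \)-stable form; the general stable case is analogous). Then for every \(y\in \mathbb {R}\),
$$\begin{aligned} \int _{\mathbb {R}} \big (|xy|\wedge 1\big )\, c\,|x|^{-1-\alpha }\,dx = c\,|y|^{\alpha }\int _{\mathbb {R}} \big (|u|\wedge 1\big )\,|u|^{-1-\alpha }\,du = 2c\,|y|^{\alpha }\Big (\frac{1}{1-\alpha }+\frac{1}{\alpha }\Big )<\infty , \end{aligned}$$
so the Lévy measure of \(\mathbf {M}\) integrates \(|x|\wedge 1\) over \((0,t]\times V\) whenever \(M_t\) is well defined, and [22, II, 1.28] applies.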
Proof of Theorem 3.1
The sufficiency is obvious. For the necessity we must show that a semimartingale \(\mathbf {X}\) admits a decomposition (3.3) in which the processes \(\mathbf M\) and \(\mathbf A\) have the stated properties. We start with the case where \(\Lambda \) has no Gaussian component, i.e., \(\sigma ^2=0\). We may and will assume that \(\phi (0,u)=0\) for all \(u\), which corresponds to \(X_0=0\) a.s., and that \(\phi (t,(s,v))=0\) for \(s>t\) and \(v\in V\).
Case 1.
\(\Lambda \)
has no Gaussian component: We divide the proof into the following six steps.
Step 1. Let \(X^0_t = X_t - \beta (t)\), with
$$\begin{aligned} \beta (t) = \int _{U} B\big (\phi (t, u), u\big ) \, \kappa (d u),\quad U=\mathbb {R}\times V. \end{aligned}$$
We will give the series representation for \(\mathbf {X}^0\) that will be crucial for our considerations. To this end, define for \(s\ne 0\) and \(u \in U=\mathbb {R}\times V\)
$$\begin{aligned} R(s, u) = {\left\{ \begin{array}{ll} \inf \{ x>0:\rho _u(x,\infty ) \le s\} &{} \text {if } s>0, \\ \sup \{ x<0:\rho _u(-\infty , x) \le -s\} &{} \text {if } s<0. \end{array}\right. } \end{aligned}$$
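For orientation, \(R(\cdot , u)\) is the generalized inverse of the tails of \(\rho _u\). In the symmetric \(\alpha \)-stable case \(\rho _u(dx)=c(u)|x|^{-1-\alpha }\,dx\) (an illustrative choice, not assumed in the proof) one gets, for \(s>0\),
$$\begin{aligned} \rho _u(x,\infty )=\frac{c(u)}{\alpha }\,x^{-\alpha }\quad \text {and hence}\quad R(s,u)=\Big (\frac{c(u)}{\alpha s}\Big )^{1/\alpha }, \end{aligned}$$
with \(R(s,u)=-R(-s,u)\) for \(s<0\) by symmetry; in particular \(|R(s,u)|\rightarrow 0\) as \(|s|\rightarrow \infty \), which is what makes the series below converge.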
Choose a probability measure \(\tilde{\kappa }\) on \(U\) equivalent to \(\kappa \), and let \(h(u)= \frac{1}{2}(d \tilde{\kappa }/d\kappa )(u)\). Extending our probability space if necessary, [36, Proposition 2 and Theorem 4.1] shows that there exist three independent sequences \((\Gamma _i)_{i\in \mathbb {N}}\), \((\epsilon _i)_{i\in \mathbb {N}}\), and \((T_i)_{i\in \mathbb {N}}\), where the \(\Gamma _i\) are partial sums of i.i.d. standard exponential random variables, the \(\epsilon _i\) are i.i.d. symmetric Bernoulli random variables, and the \(T_i=(T_i^1, T_i^2)\) are i.i.d. random variables in \(U\) with common distribution \(\tilde{\kappa }\), such that for every \(A\in \fancyscript{S}\),
$$\begin{aligned} \Lambda (A)= \nu _0(A)+ \sum _{j=1}^\infty \big [R_j\mathbf {1}_A(T_j)-\nu _j(A)\big ] \quad \text {a.s.} \end{aligned}$$
(3.7)
where \(R_j=R(\epsilon _j\Gamma _j h(T_j),T_j)\), \(\nu _0(A)= \int _A b(u) \, \kappa (d u)\), and for \(j\ge 1\)
$$\begin{aligned} \nu _j(A) = \int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1),T_1)\mathbf {1}_A(T_1)\right] \right] \, dr. \end{aligned}$$
It follows by the same argument that
$$\begin{aligned} X^0_t = \sum _{j=1}^{\infty } \big [ R_j \phi (t, T_j) - \alpha _j(t) \big ] \quad \text {a.s.}, \end{aligned}$$
where
$$\begin{aligned} \alpha _j(t) = \int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}[[ R(\epsilon _1 r h(T_1),T_1) \phi (t, T_1)]]\, dr. \end{aligned}$$
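Continuing the stable illustration above, suppose in addition that \(\kappa \) is a probability measure, so that one may take \(\tilde{\kappa }=\kappa \) and \(h\equiv 1/2\). Then the integrand defining \(\alpha _j\) is odd in \(\epsilon _1\) (as \(R(-z,u)=-R(z,u)\) for symmetric \(\rho _u\) and \([[\,\cdot \,]]\) is odd), so \(\alpha _j\equiv 0\) and the representation reduces to a LePage-type series:
$$\begin{aligned} X^0_t=\sum _{j=1}^{\infty } \epsilon _j \Big (\frac{2c(T_j)}{\alpha \Gamma _j}\Big )^{1/\alpha }\phi (t,T_j) \quad \text {a.s.} \end{aligned}$$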
Step 2. Set \(J=\{t\ge 0:\kappa (\{t\}\times V)>0\}\),
$$\begin{aligned} T^{1,c}_i=T_i^1\mathbf {1}_{\{T_i^1\in \mathbb {R}_+{\setminus } J\}}\quad \text {and}\quad T^{1,d}_i=T_i^1\mathbf {1}_{\{T_i^1\in J\}}. \end{aligned}$$
Since \(\kappa \) is a \(\sigma \)-finite measure, the set \(J\) is countable. Furthermore, \(\mathbb {P}(T^{1,c}_i=x)=0\) for all \(x>0\), and \(T^{1,d}_i\) is discrete. We will show that for every \(i \in \mathbb {N}\)
$$\begin{aligned} \Delta X_{T_i^{1,c}} = R_i \phi \left( T_i^{1,c}, T_i\right) \quad \text {a.s. } \end{aligned}$$
(3.8)
Since \(\mathbf {X}\) is càdlàg, the series
$$\begin{aligned} X_t^0=\sum _{j=1}^{\infty } \big [ R_j \phi (t, T_j) - \alpha _j(t) \big ] \end{aligned}$$
converges uniformly for \(t\) in compact intervals a.s., cf. [8, Corollary 3.2]. Moreover, \(\beta \) is càdlàg, see [8, Lemma 3.5], and by Lebesgue’s dominated convergence theorem the functions \(\alpha _j\), \(j\in \mathbb {N}\), are càdlàg as well. Therefore, with probability one,
$$\begin{aligned} \Delta X_t = \Delta \beta (t) + \sum _{j=1}^{\infty } \big [ R_j \Delta \phi (t, T_j) - \Delta \alpha _j(t) \big ] \quad \text {for all} \ t>0. \end{aligned}$$
Hence, for every \(i \in \mathbb {N}\) almost surely
$$\begin{aligned} \Delta X_{T_i^{1,c}} = \Delta \beta \left( T_i^{1,c}\right) + \sum _{j=1}^{\infty } \Bigg [ R_j \Delta \phi \Big (T_i^{1,c}, T_j\Big ) - \Delta \alpha _j\Big (T_{i}^{1,c}\Big ) \Bigg ]. \end{aligned}$$
(3.9)
Since \(\beta \), being càdlàg, has at most countably many discontinuities, and since \(\mathbb {P}(T^{1,c}_i=x)=0\) for all \(x>0\), with probability one \(T_i^{1,c}\) is a continuity point of \(\beta \). Hence \(\Delta \beta (T_i^{1,c})=0\) a.s. Since \((\Gamma _j)_{j\in \mathbb {N}}\) are independent of \(T_i^{1,c}\), the argument used for \(\beta \) also yields \(\Delta \alpha _j(T_i^{1,c})=0\) a.s. By (3.9) this proves
$$\begin{aligned} \Delta X_{T_i^{1,c}} = \sum _{j=1}^{\infty } R_j \Delta \phi \Big (T_{i}^{1,c}, T_j\Big ). \end{aligned}$$
(3.10)
Furthermore, for \(i\ne j\) we get
$$\begin{aligned} \mathbb {P}\Bigg ( \Delta \phi \Big (T_i^{1,c}, T_j\Big ) \ne 0\Bigg )&= \int _{U} \mathbb {P}\Bigg ( \Delta \phi \Big (T_i^{1,c}, T_j\Big ) \ne 0\, |\, T_j= u\Bigg ) \, \tilde{\kappa }(d u) \\&= \int _{U} \mathbb {P}\Bigg ( \Delta \phi \Big (T_i^{1,c}, u\Big ) \ne 0 \Bigg ) \, \tilde{\kappa }(d u) = 0 \end{aligned}$$
again because \(\phi (\cdot , u)\) has only countably many jumps and the distribution of \(T_i^{1,c}\) is continuous on \((0,\infty )\). If \(j=i\) then
$$\begin{aligned} \Delta \phi \Big (T_i^{1,c}, T_i\Big )&= \lim _{h \downarrow 0,\, h>0} \Bigg [\phi \Big (T_i^{1,c}, \Big (T_i^1, T_i^2\Big )\Big ) - \phi \Big (T_i^{1,c}-h, \Big (T_i^1, T_i^2\Big )\Big ) \Bigg ] \\&= \phi \Big (T_i^{1,c}, T_i\Big ) \end{aligned}$$
as \(\phi (t,(s,v))=0\) whenever \(t<s\) and \(v\in V\). This reduces (3.10) to (3.8).
Step 3. Next we will show that \(\mathbf {M}\), defined in (3.4), is a well-defined càdlàg process satisfying
$$\begin{aligned} \Delta M_{T^{1,c}_i} =\Delta X_{T^{1,c}_i}\quad \text {a.s.} \quad \text {for all }i\in \mathbb {N}. \end{aligned}$$
(3.11)
Since any semimartingale has finite quadratic variation, we have in particular
$$\begin{aligned} \sum _{0<s\le t} \big (\Delta X_s\big )^2<\infty \quad \text {a.s.} \end{aligned}$$
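Indeed, writing \([X,X]\) as the sum of its continuous part and its jumps,
$$\begin{aligned} \sum _{0<s\le t} \big (\Delta X_s\big )^2\le \langle X^c,X^c\rangle _t+\sum _{0<s\le t} \big (\Delta X_s\big )^2=[X,X]_t<\infty \quad \text {a.s.} \end{aligned}$$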
Let \(\mathbf {X}'\) be an independent copy of \(\mathbf {X}\) and set \(\tilde{\mathbf {X}}=\mathbf {X}-\mathbf {X}'\). Let \(\bar{R}_j= R(\epsilon _j \Gamma _j h(T_j)/2,T_j)\) and \((\xi _j)_{j\in \mathbb {N}}\) be i.i.d. symmetric Bernoulli random variables defined on a probability space \((\Omega ',\mathcal {F}',\mathbb {P}')\). By [35, Theorem 2.4] it follows that for all \(t\ge 0\) the series
$$\begin{aligned} \bar{X}_t=\sum _{j=1}^\infty \xi _j \bar{R}_j\phi (t,T_j) \end{aligned}$$
defined on \(\Omega \times \Omega '\) converges a.s. under \(\mathbb {P}\otimes \mathbb {P}'\) and \(\bar{\mathbf {X}}\) equals \(\tilde{\mathbf {X}}\) in finite-dimensional distributions. Thus \(\bar{\mathbf {X}}\) has a càdlàg modification satisfying
$$\begin{aligned} \sum _{s\in (0,t]} \big (\Delta \bar{X}_s)^2<\infty \quad \mathbb {P}\otimes \mathbb {P}'\text {-a.s.} \end{aligned}$$
(3.12)
By [8, Corollary 3.2], we have \(\mathbb {P}\otimes \mathbb {P}'\)-a.s. for all \(t\ge 0\) that
$$\begin{aligned} \Delta \bar{X}_t=\sum _{j=1}^\infty \xi _j \bar{R}_j\Delta \phi (t,T_j). \end{aligned}$$
(3.13)
By (3.12) and (3.13) we have for \(\mathbb {P}\)-a.a. \(\omega \in \Omega \) that
$$\begin{aligned} \sum _{s\in A} Y_s^2< \infty \quad \mathbb {P}'\text {-a.s.,}\quad&\text {where}\quad Y_s=\sum _{j=1}^\infty a(s,j) \xi _j,\\ a(s,j)= \bar{R}_j(\omega )\Delta \phi (s,T_j(\omega ))\quad&\text {and}\quad A=\mathop {\cup }\limits _{j\in \mathbb {N}}\{s\in (0,t]:\Delta \phi (s,T_j(\omega ))\ne 0\}. \end{aligned}$$
For a fixed \(\omega \in \Omega \) as above, \(A\) is a countable deterministic set and \(\mathbf{Y}=(Y_s)_{s\in A}\) is a Bernoulli/Rademacher random element in \(\ell ^2(A)\) defined on \((\Omega ',\mathcal {F}',\mathbb {P}')\). By [28, Theorem 4.8], \(\mathbb {E}'[\Vert Y\Vert _{\ell ^2(A)}^2]<\infty \) which implies that
$$\begin{aligned} \infty >\mathbb {E}'\left[ \sum _{s\in A} Y_s^2\right] =\sum _{s\in A} \mathbb {E}'[ Y_s^2]=\sum _{s\in A} \sum _{j=1}^\infty a(s,j)^2= \sum _{j=1}^\infty \left( \sum _{s\in A} a(s,j)^2\right) . \end{aligned}$$
(3.14)
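Here the middle equalities are the standard orthogonality computation for Rademacher sums combined with Tonelli's theorem: since \(\mathbb {E}'[\xi _i\xi _j]=\delta _{ij}\),
$$\begin{aligned} \mathbb {E}'[Y_s^2]=\sum _{i,j=1}^\infty a(s,i)a(s,j)\,\mathbb {E}'[\xi _i\xi _j]=\sum _{j=1}^\infty a(s,j)^2. \end{aligned}$$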
Equation (3.14) implies that \(\mathbb {P}\)-a.s.
$$\begin{aligned} \infty >\sum _{i:\,T_i^1\in (0,t]} \left| \bar{R}_i\Delta \phi \left( T^1_i,T_i\right) \right| ^2=\sum _{i:\,T_i^1\in (0,t]} \left| \bar{R}_i\phi \left( T^1_i,T_i\right) \right| ^2. \end{aligned}$$
Put for \(t,r \ge 0\) and \((\epsilon ,s,v) \in \{-1,1\} \times \mathbb {R}\times V\)
$$\begin{aligned} H(t; r, (\epsilon ,s,v)) = R\big (\epsilon r h(s,v)/2, (s,v)\big ) \phi (s,(s,v)) \mathbf {1}_{\{0< s \le t\}}. \end{aligned}$$
The above bound shows that for each \(t\ge 0\)
$$\begin{aligned} \sum _{i=1}^{\infty } \left| H\left( t; \Gamma _i,\left( \epsilon _i,T_i^1,T_i^2\right) \right) \right| ^2 < \infty \quad \text {a.s.} \end{aligned}$$
This implies, by [36, Theorem 4.1], that the following limit is finite
$$\begin{aligned}&\lim _{n\rightarrow \infty } \int _0^n \mathbb {E}\left[ \left[ H\left( t; r, \left( \epsilon _1,T_1^1,T_1^2\right) \right) ^2 \right] \right] \, dr \\&\quad = \int _0^{\infty } \mathbb {E}\left[ \left[ H\left( t; r, \left( \epsilon _1,T_1^1,T_1^2\right) \right) ^2 \right] \right] \, dr. \end{aligned}$$
Evaluating this limit we get
$$\begin{aligned} \infty&> \int _0^{\infty } \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1)/2, T_1) \phi \left( T_1^1, T_1\right) \mathbf {1}_{\{0<T_1^1 \le t\}}\right] \right] ^2 \, dr \\&= \int _0^{\infty } \int _{\mathbb {R}\times V} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(s,v)/2, (s,v)) \phi (s, (s,v)) \mathbf {1}_{\{0< s \le t\}}\right] \right] ^2 \, \tilde{\kappa }(ds,dv) \,dr \\&= 4 \int _0^{\infty } \int _{\mathbb {R}\times V} \mathbb {E}\left[ \left[ R(\epsilon _1 z, (s,v)) \phi (s, (s,v)) \mathbf {1}_{\{0< s \le t\}}\right] \right] ^2 \, \kappa (ds,dv) \,dz \\&= 2\int _{\mathbb {R}\times V} \int _{\mathbb {R}} \left[ \left[ x \phi (s, (s,v)) \mathbf {1}_{\{0< s \le t\}}\right] \right] ^2 \, \rho _{(s,v)}(dx) \,\kappa (ds,dv) \\&= 2\int _{(0,t] \times V} \int _{\mathbb {R}} \min \{ |x \phi (s, (s,v))|^2, 1\} \, \rho _{(s,v)}(dx)\, \kappa (ds,dv). \end{aligned}$$
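The last two equalities rest on the substitution \(z=r h(s,v)/2\) (recall \(2h=d\tilde{\kappa }/d\kappa \)) and on the inversion formula for \(R\), recalled here in the form used above for measurable \(g\ge 0\) with \(g(0)=0\) (the standard fact underlying series representations of this kind):
$$\begin{aligned} \int _0^\infty g\big (R(z,u)\big )\,dz=\int _{(0,\infty )} g(x)\,\rho _u(dx),\qquad \int _0^\infty g\big (R(-z,u)\big )\,dz=\int _{(-\infty ,0)} g(x)\,\rho _u(dx). \end{aligned}$$
Averaging over \(\epsilon _1=\pm 1\) turns the factor \(4\) into the factor \(2\), and the final line follows by applying this with \(g(x)=[[x \phi (s,(s,v))\mathbf {1}_{\{0<s\le t\}}]]^2\) together with the identity \([[y]]^2=\min \{y^2,1\}\).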
Finiteness of this integral in conjunction with (3.2) yields the existence of the stochastic integral
$$\begin{aligned} M_t = \int _{(0,t] \times V} \phi (s, (s,v)) \, \Lambda (ds,dv) \end{aligned}$$
by (a) and (b) on page 7. The fact that \(\mathbf {M}\) has independent increments is obvious since \(\Lambda \) is independently scattered. Furthermore, \(\mathbf {M}\) is càdlàg in probability by the continuity properties of stochastic integrals, and by Lemma 6.2 it has a càdlàg modification which will also be denoted by \(\mathbf {M}\).
Let \((\zeta _t)_{t\ge 0}\) be the shift component of \(\mathbf {M}\). By (3.2) and the fact that
$$\begin{aligned} \zeta _t=\int _{(0,t]\times V} B\big (\phi (s,(s,v)),(s,v)\big )\,\kappa (ds,dv),\quad t\ge 0, \end{aligned}$$
see [33, Theorem 2.7], we deduce that \((\zeta _t)_{t\ge 0}\) is of finite variation. Therefore the independent increments of \(\mathbf {M}\) and [22, II, 5.11] show that \(\mathbf {M}\) is a semimartingale. For \(t\ge 0\) we can write \(M_t\) as a series using the series representation (3.7) of \(\Lambda \). It follows that
$$\begin{aligned} M_t =\zeta _t+ \sum _{j=1}^{\infty } \Bigg [ R_j \phi \Big (T_j^1, T_j\Big ) \mathbf {1}_{\{0<T_j^1 \le t\}} - \gamma _j(t)\Bigg ] \end{aligned}$$
where
$$\begin{aligned} \gamma _j(t)=\int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1),T_1) \phi \left( T^1_1,T_1\right) \mathbf {1}_{\{ 0<T^1_1\le t\}}\right] \right] \, dr. \end{aligned}$$
By arguments as above we have \(\Delta M_{T^{1,c}_i}= R_i\phi (T^{1,c}_i,T_i)\) a.s. and hence by (3.8) we obtain (3.11).
Step 4. In the following we will show the existence of a sequence \((\tau _k)_{k\in \mathbb {N}}\) of totally inaccessible stopping times such that every local martingale \(\mathbf {Z}=(Z_t)_{t\ge 0}\) with respect to \(\mathbb {F}^\Lambda \) is purely discontinuous and, up to evanescence,
$$\begin{aligned} \{\Delta \mathbf{Z}\ne 0\}\subseteq (\Omega \times J)\cup \left( \mathop {\cup }\limits _{k\in \mathbb {N}}[\tau _k]\right) ,\quad \mathop {\cup }\limits _{k\in \mathbb {N}}[\tau _k] \subseteq \mathop {\cup }\limits _{k\in \mathbb {N}}\left[ T^{1,c}_k\right] . \end{aligned}$$
(3.15)
Recall that \(\{\Delta \mathbf{Z}\ne 0\}\) denotes the random set \(\{(\omega , t)\in \Omega \times \mathbb {R}_+:\Delta Z_t(\omega )\ne 0\}\) and \(J\) is the countable subset of \(\mathbb {R}_+\) defined in Step 2. Set \(\mathcal {V}_0=\{A\in \mathcal {V}:A\subseteq V_k\text { for some }k\in \mathbb {N}\}\), where \((V_k)_{k\in \mathbb {N}}\) is given in Sect. 2. To show (3.15) choose a sequence \((B_k)_{k\ge 1}\subseteq \mathcal {V}_0\) of disjoint sets which generates \(\mathcal V\) and for all \(k\in \mathbb {N}\) let \(\mathbf{U}^k=(U^k_t)_{t\ge 0}\) be given by
$$\begin{aligned} U^k_t=\Lambda ((0,t]\times B_k). \end{aligned}$$
For \(k\in \mathbb {N}\), \(\mathbf{U}^k\) is an infinitely divisible process with independent increments which is càdlàg in probability, and it therefore has a càdlàg modification by Lemma 6.2 (again denoted \(\mathbf{U}^k\)). Hence \(\mathbf{U}=\{(U_t^k)_{k\in \mathbb {N}}:t\in \mathbb {R}_+\}\) is a càdlàg \(\mathbb {R}^\mathbb {N}\)-valued process with no Gaussian component. Let \(E=\mathbb {R}^\mathbb {N}{\setminus }\{0\}\). Then \(E\) is a Blackwell space and \(\mu \) defined by
$$\begin{aligned} \mu (A)=\sharp \big \{t\in \mathbb {R}_+:(t,\Delta U_t)\in A\big \},\quad A\in \fancyscript{B}(\mathbb {R}_+\times E) \end{aligned}$$
is an extended Poisson random measure on \(\mathbb {R}_+\times E\), in the sense of [22, II, 1.20]. Let \(\nu \) be the intensity measure of \(\mu \). We have that \(\mathbb {F}^\Lambda \) is the least filtration for which \(\mu \) is an optional random measure. Thus, according to [22, III, 1.14(b) and the remark after III, 4.35], \(\mu \) has the martingale representation property; that is, for every real-valued local martingale \(\mathbf{Z}=(Z_t)_{t\ge 0}\) with respect to \(\mathbb {F}^\Lambda \) there exists a predictable function \(\psi \) (not to be confused with the kernel \(\phi \) in (3.1)) from \(\Omega \times \mathbb {R}_+\times E\) into \(\mathbb {R}\) such that
$$\begin{aligned} Z_t = \psi *(\mu -\nu )_t,\quad t\ge 0 \end{aligned}$$
(3.16)
(in (3.16) the symbol \(*\) denotes integration with respect to \(\mu -\nu \) as in [22, II, 1]). Note that \(\{t\ge 0:\nu (\{t\}\times E)>0\}\subseteq J\). By definition, see [22, II, 1.27(b)], \(\mathbf Z\) is a purely discontinuous local martingale and \(\Delta Z_t(\omega )=\psi (\omega ,t,\Delta U_t(\omega ))\mathbf {1}_{\{\Delta U_t(\omega )\ne 0\}}\) for \((\omega ,t)\in \Omega \times J^c\) up to evanescence, which shows that
$$\begin{aligned} \{ \Delta \mathbf{Z}\ne 0\}\subseteq (\Omega \times J)\cup \{ \Delta \mathbf{U}\ne 0\}\quad \text {up to evanescence.} \end{aligned}$$
Lemma 6.1 and a diagonal argument show the existence of a sequence of totally inaccessible stopping times \((\tau _k)_{k\in \mathbb {N}}\) such that, up to evanescence,
$$\begin{aligned} \{\Delta \mathbf{U}\ne 0\}= (\Omega \times J)\cup (\cup _{k\in \mathbb {N}} [\tau _k]). \end{aligned}$$
Arguing as in Step 2 with \(\phi (t,(s,v))=\mathbf {1}_{(0, t]}(s)\mathbf {1}_{ B_k}(v)\) shows that with probability one
$$\begin{aligned} \Delta U^k_t = \Delta \xi (t) + \sum _{j=1}^{\infty } \Bigg [ R_j \mathbf {1}_{\{t=T^1_j\}} \mathbf {1}_{\{T^2_j\in B_k\}}- \Delta \gamma _j(t) \Bigg ] \quad \text {for all} \ t>0 \end{aligned}$$
where
$$\begin{aligned} \xi (t)&= \int _{\mathbb {R}\times V} \mathbf {1}_{\{0< s\le t\}}\mathbf {1}_{\{v\in B_k\}}b(s,v)\, \kappa (ds,dv),\\ \gamma _j(t)&= \int _{\Gamma _{j-1}}^{\Gamma _j} \mathbb {E}\left[ \left[ R(\epsilon _1 r h(T_1),T_1) \mathbf {1}_{\{ 0<T^1_1\le t\}}\mathbf {1}_{\{T^2_1\in B_k\}}\right] \right] \, dr. \end{aligned}$$
The functions \(\xi \) and \(\gamma _j\), for \(j\in \mathbb {N}\), are continuous on \(\mathbb {R}_+{\setminus } J\) and hence with probability one
$$\begin{aligned} \Delta U^k_t= \sum _{j=1}^{\infty } R_j \mathbf {1}_{\{t=T^1_j\}} \mathbf {1}_{\{T^2_j\in B_k\}} \quad \text {for all }t\in \mathbb {R}_+{\setminus } J. \end{aligned}$$
(3.17)
Since each \(\tau _k\) is totally inaccessible and \(J\) is countable, we have \(\mathbb {P}(\tau _k\in J)=0\). Hence by (3.17) we conclude that
$$\begin{aligned} \cup _{k\in \mathbb {N}} [\tau _k]\subseteq \cup _{k\in \mathbb {N}} \left[ T^{1,c}_k\right] \quad \text {up to evanescence.} \end{aligned}$$
This completes the proof of Step 4.
Step 5. Fix \(r\in \mathbb {N}\) and let \(\mathbf {X}'=(X'_t)_{t\ge 0}\) be given by
$$\begin{aligned} X' _t=X_t-\sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}. \end{aligned}$$
We will show that \(\mathbf {X}'\) is a special semimartingale with martingale component \(\mathbf {M}'=(M'_t)_{t\ge 0}\) given by
$$\begin{aligned} M_t'=\tilde{M}_t-\mathbb {E}\tilde{M}_t \quad \text {where}\; \tilde{M}_t=M_t-\sum _{s\in (0,t]}\Delta M_s \mathbf {1}_{\{|\Delta M_s|>r\}}. \end{aligned}$$
Recall that \(\mathbf M\) is given by (3.4). By [22, II, 5.10(c)] it follows that \(\mathbf {M}'\) is a well-defined martingale. The process \(\mathbf {X}'\) is a special semimartingale since its jumps are bounded in absolute value by \(r\); denote by \(\mathbf W\) and \(\mathbf N\) the finite variation and martingale components, respectively, in the canonical decomposition \(\mathbf {X}'=X_0+\mathbf {W}+\mathbf {N}\) of \(\mathbf {X}'\). We will show that \(\mathbf {N}= \mathbf {M}'\). By (3.11) we have for all \(i\in \mathbb {N}\)
$$\begin{aligned} \Delta M'_{T_i^{1,c}}=\Delta M_{T_i^{1,c}}\mathbf {1}_{\{|\Delta M_{T_i^{1,c}}|\le r\}}=\Delta X_{T_i^{1,c}}\mathbf {1}_{\{|\Delta X_{T_i^{1,c}}|\le r\}}=\Delta X'_{T_i^{1,c}} \quad \text {a.s.} \end{aligned}$$
(3.18)
Let \((\tau _k)_{k\in \mathbb {N}}\) be a sequence of totally inaccessible stopping times satisfying (3.15) for both \(\mathbf{Z}=\mathbf{N}\) and \(\mathbf{Z}=\mathbf {M}'\). Since \(\mathbf W \) is predictable and \(\tau _k\) is a totally inaccessible stopping time, we have \(\Delta W_{\tau _k}=0\) a.s., cf. [22, I, 2.24], and hence
$$\begin{aligned} \Delta N_{\tau _k}=\Delta X_{\tau _k}'-\Delta W_{\tau _k}=\Delta X_{\tau _k}'=\Delta M_{\tau _k}'\quad \text {a.s.} \end{aligned}$$
(3.19)
where the last equality follows from (3.18) and the second inclusion in (3.15).
Since \(J\) is countable we may find a set \(K\subseteq \mathbb {N}\) such that \(J=\{t_k\}_{k\in K}\). Next we will show that for all \(k\in K\)
$$\begin{aligned} \Delta N_{t_k}=\Delta M_{t_k}\quad \text {a.s.} \end{aligned}$$
(3.20)
By linearity, \(\mathbf A \), defined in (3.5), is a well-defined càdlàg process. For all \(k\in K\) we have almost surely
$$\begin{aligned} A_{t_k}&= \int _{(-\infty ,t_k]\times V} \big [\phi (t_k,(s,v))-\phi (s,(s,v))\big ]\,\Lambda (ds,dv)\\&= \int _{(-\infty ,t_k)\times V} \big [\phi (t_k,(s,v))-\phi (s,(s,v))\big ]\,\Lambda (ds,dv) \end{aligned}$$
where the second equality holds because the integrand vanishes on \(\{t_k\}\times V\). This shows that \(A_{t_k}\) is \(\mathcal {F}_{t_k-}^\Lambda \)-measurable. Define a process \(\mathbf{Z}=(Z_t)_{t\ge 0}\) by
$$\begin{aligned} Z_t=\sum _{k\in K} \big (\Delta A_{t_k}-\Delta W_{t_k}\big )\mathbf {1}_{\{t=t_k\}}. \end{aligned}$$
(3.21)
Since \(\Delta A_{t_k}-\Delta W_{t_k}\) is \(\mathcal {F}_{t_k-}^\Lambda \)-measurable for all \(k\in K\), (3.21) shows that \(\mathbf Z\) is a predictable process. Let \(^{p} \mathbf {Y}\) denote the predictable projection of any measurable process \(\mathbf {Y}\), see [22, I, 2.28]. Since \(\mathbf Z\) is predictable
$$\begin{aligned} \mathbf{Z}=\,\! ^p \mathbf{Z}=\,\! ^p\big (\mathbf {1}_{\Omega \times J}(\Delta \mathbf A -\Delta \mathbf{W})\big ) = \mathbf {1}_{\Omega \times J} \, ^p (\Delta \mathbf A -\Delta \mathbf{W})=\mathbf {1}_{\Omega \times J} \, ^p (\Delta \mathbf {M}'-\Delta \mathbf{N})=0 \end{aligned}$$
(3.22)
where the third equality follows by [22, I, 2.28(c)] and the fact that \(\Omega \times J\) is a predictable set, and the last equality follows by [22, I, 2.31] and the fact that \(\mathbf {M}'\) and \(\mathbf N\) are local martingales. Equation (3.22) shows that \(\Delta A_t=\Delta W_t\) for all \(t\in J\), which implies (3.20).
By (3.19), (3.20) and the fact that
$$\begin{aligned} \{\Delta \mathbf{N}\ne 0\}\subseteq (\Omega \times J)\cup (\cup _{k\in \mathbb {N}} [\tau _k]),\quad \{\Delta \mathbf {M}'\ne 0\} \subseteq (\Omega \times J)\cup (\cup _{k\in \mathbb {N}} [\tau _k]) \end{aligned}$$
we have shown that \(\Delta \mathbf{N}=\Delta \mathbf {M}'\). By Step 4, \(\mathbf N\) and \(\mathbf {M}'\) are purely discontinuous local martingales, which implies that \(\mathbf{N}=\mathbf {M}'\), cf. [22, I, 4.19]. This completes Step 5.
Step 6. We will show that \(\mathbf A \) is a predictable càdlàg process of finite variation. According to Step 5 the process \(\mathbf W :=\mathbf {X}'-X_0-\mathbf {M}'\) is predictable and has càdlàg paths of finite variation. Thus with \(\mathbf {V}=(V_t)_{t\ge 0}\) given by
$$\begin{aligned} V_t= \sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}-\sum _{s\in (0,t]} \Delta M_s\mathbf {1}_{\{|\Delta M_s|>r\}} \end{aligned}$$
we have by the definitions of \(\mathbf W\) and \(\mathbf V\) that
$$\begin{aligned} A_t= X_t-X_0-M_t=W_t+V_t-\mathbb {E}\tilde{M}_t. \end{aligned}$$
(3.23)
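For completeness, (3.23) can be verified directly from the definitions of \(\mathbf {X}'\), \(\mathbf {W}\), \(\mathbf {M}'\), \(\tilde{\mathbf {M}}\) and \(\mathbf {V}\):
$$\begin{aligned} A_t&=X_t-X_0-M_t =\Big (X'_t+\sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}\Big )-X_0-M_t\\&=W_t+M'_t-M_t+\sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}\\&=W_t+\big (\tilde{M}_t-M_t\big )+\sum _{s\in (0,t]} \Delta X_s\mathbf {1}_{\{|\Delta X_s|>r\}}-\mathbb {E}\tilde{M}_t = W_t+V_t-\mathbb {E}\tilde{M}_t, \end{aligned}$$
since \(\tilde{M}_t-M_t=-\sum _{s\in (0,t]}\Delta M_s\mathbf {1}_{\{|\Delta M_s|>r\}}\) and \(\mathbf {N}=\mathbf {M}'\) by Step 5.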
This shows that \(\mathbf A \) has càdlàg sample paths of finite variation. Next we show that \(\mathbf A \) is predictable. Since the processes \(\mathbf W\), \(\mathbf V\) and \(\tilde{\mathbf {M}}\) depend on the truncation level \(r\), they will be denoted \(\mathbf {W}^r\), \(\mathbf {V}^{r}\) and \(\tilde{\mathbf {M}}^r\) in the following. As \(r\rightarrow \infty \), \(V_t^r(\omega )\rightarrow 0\) pointwise in \((\omega ,t)\), which by (3.23) shows that \(W^r_t(\omega )-\mathbb {E}\tilde{M}^r_t \rightarrow A_t(\omega )\) pointwise in \((\omega ,t)\). For each \(r\in \mathbb {N}\), \((W^r_t-\mathbb {E}\tilde{M}^r_t)_{t\ge 0}\) is a predictable process, so \(\mathbf A\) is a pointwise limit of predictable processes and hence predictable. This completes the proof of Step 6 and of the decomposition (3.3) in Case 1.
Case 2.
\(\Lambda \)
is symmetric Gaussian: By [3, Theorem 4.6], applied to the sets \(C_t=(-\infty ,t]\times V\), \(\mathbf {X}\) is a special semimartingale in \(\mathbb {F}^{\Lambda }\) with martingale component \(\mathbf {M}=(M_t)_{t\ge 0}\) given by
$$\begin{aligned} M_t=\int _{(0,t]\times V} \phi (s,(s,v))\,\Lambda (ds,dv),\quad t\ge 0, \end{aligned}$$
see [3, Equation (4.11)], which completes the proof in the Gaussian case.
Case 3.
\(\Lambda \)
is general: Let us observe that it is enough to prove the theorem in the above two cases. We may decompose \(\Lambda \) as \(\Lambda = \Lambda _G + \Lambda _P\), where \(\Lambda _G\) and \(\Lambda _P\) are independent, independently scattered random measures; \(\Lambda _G\) is a symmetric Gaussian random measure characterized by (2.1) with \(b \equiv 0\) and \(\rho \equiv 0\), while \(\Lambda _P\) is given by (2.1) with \(\sigma ^2 \equiv 0\). Observe that
$$\begin{aligned} \mathbb {F}^{\Lambda } = \mathbb {F}^{\Lambda _G} \vee \mathbb {F}^{\Lambda _P}, \end{aligned}$$
which can be deduced from the Lévy–Itô decomposition, see [22, II, 2.35], applied to the processes \(\mathbf {Y}=(Y_t)_{t\ge 0}\) of the form \(Y_t=\Lambda ((0,t]\times B)\) where \(B\in \mathcal {V}_0\) (\(\mathcal {V}_0\) is defined on page 13). We have \(\mathbf {X}= \mathbf {X}^G + \mathbf {X}^P\), where \(\mathbf {X}^G\) and \(\mathbf {X}^P\) are defined by (3.1) with \(\Lambda _G\) and \(\Lambda _P\) in place of \(\Lambda \), respectively. Since \((\Lambda , \mathbf {X})\) and \((\Lambda _P -\Lambda _G, \mathbf {X}^P- \mathbf {X}^G)\) have the same distribution, the process \(\mathbf {X}^P -\mathbf {X}^G\) has a modification which is a semimartingale with respect to \(\mathbb {F}^{\Lambda _P-\Lambda _G}= \mathbb {F}^{\Lambda _P} \vee \mathbb {F}^{-\Lambda _G}= \mathbb {F}^{\Lambda }\).
Consequently, the processes \(\mathbf {X}^G\) and \(\mathbf {X}^P\) have modifications which are semimartingales with respect to \(\mathbb {F}^{\Lambda }\); hence they are semimartingales relative to \(\mathbb {F}^{\Lambda _G}\) and \(\mathbb {F}^{\Lambda _P}\), respectively, and the general result follows from the above two cases.
The uniqueness: Let \(\mathbf {M}, \mathbf {M}', \mathbf A \) and \(\mathbf A '\) be as in the theorem. We will first show that \((\mathbf {M}, \mathbf {M}')\) is a bivariate process with independent increments relative to \(\mathbb {F}^{\Lambda }\). To this end, choose \(0 \le s < t\) and \(A_1, \dots , A_n \in \fancyscript{S}\) such that \(A_i \subset (- \infty , s] \times V\), \(i\le n\), \(n \ge 1\). Consider the random vectors \(\xi =(\xi _1,\xi _2):= (M_t -M_s, M_t' - M_s')\) and \(\eta = (\eta _1,\dots ,\eta _n):=(\Lambda (A_1),\dots , \Lambda (A_n))\). Since \(\mathbf {M}\) and \(\mathbf {M}'\) are processes representable by \(\Lambda \), \((\xi , \eta )\) has an infinitely divisible distribution in \(\mathbb {R}^{n+2}\). Since \(\mathbf {M}\) and \(\mathbf {M}'\) have independent increments relative to \(\mathbb {F}^{\Lambda }\), \(\xi _i\) is independent of \(\eta _j\) for every \(i\le 2\), \(j\le n\). It follows from the form of the characteristic function and the uniqueness of Lévy–Khintchine triplets that pairwise independence between blocks of jointly infinitely divisible random variables is equivalent to independence of the blocks (this is a straightforward extension of [21, Theorem 4]). Therefore, \(\xi \) is independent of \(\eta \). We infer that \(\xi \) is independent of \(\mathcal {F}^{\Lambda }_s\), so that \((\mathbf {M}, \mathbf {M}')\) is a process with independent increments relative to \(\mathbb {F}^{\Lambda }\); hence so is \(\overline{\mathbf {M}}:= \mathbf {M}'- \mathbf {M}\).
Since \(\mathbf {X}=X_0+\mathbf {M}+\mathbf A =X_0+\mathbf {M}'+\mathbf A '\) by assumption, we have
$$\begin{aligned} \overline{\mathbf {M}}= \mathbf {M}' - \mathbf {M}= \mathbf A '- \mathbf A , \end{aligned}$$
so that the independent increment semimartingale \(\overline{\mathbf {M}}\) is predictable. For each \(n\in \mathbb {N}\) define the truncated process \({\overline{\mathbf {M}}}^{(n)} = (\overline{M}_t^{(n)})_{t\ge 0}\) by
$$\begin{aligned} \overline{M}^{(n)}_t={\overline{M}}_t-\sum _{s\le t} \Delta {\overline{M}}_s\mathbf {1}_{\{|\Delta {\overline{M}}_s|> n\}}. \end{aligned}$$
According to [22, II, 5.10], there exists a càdlàg deterministic function \(\mathbf{g}_{n}\) of finite variation with \(g_{n}(0)=0\) such that \({\overline{\mathbf {M}}}^{(n)}-\mathbf{g}_{n}\) is a martingale. Since \({\overline{\mathbf {M}}}^{(n)}-\mathbf{g}_{n}\) is also predictable and of finite variation, \({\overline{\mathbf {M}}}^{(n)}=\mathbf{g}_{n}\), cf. [22, I, 3.16]. Letting \(n\rightarrow \infty \) we conclude that \(\overline{\mathbf {M}}\) is deterministic; it is clearly càdlàg and of finite variation.
The special semimartingale part: To prove the part concerning the special semimartingale property of \(\mathbf {X}\), we note that the process \(\mathbf A\) in (3.5) is a special semimartingale since it is a predictable càdlàg process of finite variation. Thus \(\mathbf X\) is a special semimartingale if and only if \(\mathbf M\) is a special semimartingale. Due to the independent increments, \(\mathbf M\) is a special semimartingale if and only if \(\mathbb {E}|M_t|<\infty \) for all \(t>0\), cf. [22, II, 2.29(a)], and in that case \(M_t=(M_t-\mathbb {E}M_t)+\mathbb {E}M_t\) is the canonical decomposition of \(\mathbf M\). This completes the proof. \(\square \)
Proof of Theorem 1.6
We only need to prove the 'only if' implication. Suppose that \(\mathbf {X}\) is a semimartingale with respect to \(\mathbb {F}^\Lambda \). According to Remark 3.2 we may and do choose \(\phi \) such that \(t\mapsto \phi (t,u)\) is càdlàg for all \(u\). By Remark 3.3, assumption (3.2) is satisfied, and hence by letting \(\mathbf {M}\) and \(\mathbf A \) be defined by (3.4) and (3.5), respectively, we obtain the representation of \(\mathbf {X}\) claimed in Theorem 1.6. To show the uniqueness part we note that, by symmetry, the deterministic function \(g\) in Theorem 3.1 satisfies that \(g(t)\) and \(-g(t)\) have the same law; being deterministic, \(g(t)=0\) for all \(t\ge 0\). Since the expectation of a symmetric random variable is zero whenever it exists, the last part regarding the special semimartingale property follows as well. \(\square \)
Remark 3.5
We conclude this section by recalling that the proofs of the results on Gaussian semimartingales \(\mathbf {X}\) mentioned in the Introduction all rely on approximating the finite variation component \(\mathbf A \) by the discrete-time Doob–Meyer decompositions \(\mathbf A ^n=(A^n_t)_{t\ge 0}\) given by
$$\begin{aligned} A^n_t=\sum _{i=1}^{[2^n t]} \mathbb {E}[X_{i2^{-n}}-X_{(i-1)2^{-n}}|\mathcal {F}_{(i-1)2^{-n}}], \quad t\ge 0 \end{aligned}$$
and showing that the convergence \(\lim _n A^n_t=A_t\) holds in an appropriate sense, see [30, 39]. This technique does not seem effective in the non-Gaussian situation since it relies on strong integrability properties of functionals of \(\mathbf {X}\), which are in general not present and cannot be obtained by stopping arguments.
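As a toy illustration of this approximation (not taken from the cited references): if \(X_t=B_t+f(t)\) with \(\mathbf {B}\) a Brownian motion and \(f\) a continuous deterministic function of finite variation, the martingale increments drop out of the conditional expectations and
$$\begin{aligned} A^n_t=\sum _{i=1}^{[2^n t]} \big (f(i2^{-n})-f((i-1)2^{-n})\big )=f\big (2^{-n}[2^n t]\big )-f(0)\;\longrightarrow \; f(t)-f(0)=A_t, \end{aligned}$$
by continuity of \(f\).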