Abstract
The information dynamics in finance and insurance applications is usually modelled by a filtration. This paper looks at situations where information restrictions apply so that the information dynamics may become non-monotone. Martingale representations are a fundamental tool for calculating and managing risks in finance and insurance. We present a general theory that extends classical martingale representations to non-monotone information generated by marked point processes. The central idea is to focus only on those properties that martingales and compensators show on infinitesimally short intervals. While classical martingale representations describe innovations only, our representations have an additional symmetric counterpart that quantifies the effect of information loss. We illustrate the results with examples from life insurance and credit risk.
1 Introduction
The value at time \(t \in [0, T]\) of a financial claim \(\xi \in L^{1}(\Omega , \mathcal{A},P)\) at time \(T \in (0, \infty )\) is commonly calculated by
\[ B(t)\, E_{Q}\big[ \xi / B(T) \,\big|\, \mathcal{F}_{t} \big], \tag{1.1} \]
where \(B\) is the value process of a risk-free asset, \((\mathcal{F}_{t})_{t \geq 0}\) is a filtration that describes the available information at each time \(t\geq 0\), and \(Q\) is some equivalent measure. For studying the time dynamics of the value process, we can exploit the fact that \(t \mapsto E_{Q}[ \xi / B(T) | \mathcal{F}_{t} ]\) is always a martingale.
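As a concrete toy illustration of (1.1) and of the martingale property of \(t \mapsto E_{Q}[ \xi / B(T) | \mathcal{F}_{t} ]\), the following sketch evaluates a path-dependent claim in a two-period binomial model. The numbers, the measure \(Q\) and the claim are our own illustrative assumptions, not taken from the paper:

```python
import itertools

# Two-period binomial toy model for (1.1); the numbers, the measure Q and the
# claim are illustrative assumptions only.
r, q, T = 0.01, 0.5, 2
B = lambda t: (1 + r) ** t                   # risk-free asset value

paths = list(itertools.product("ud", repeat=T))
Q = {w: q ** w.count("u") * (1 - q) ** w.count("d") for w in paths}

def payoff(w):
    """An arbitrary path-dependent claim xi paid at time T."""
    return 100.0 + 10.0 * w.count("u") - 5.0 * (w[0] == "d")

def value(t, w):
    """E_Q[ xi / B(T) | F_t ] on the atom of paths agreeing with w up to t."""
    atom = [v for v in paths if v[:t] == w[:t]]
    mass = sum(Q[v] for v in atom)
    return sum(Q[v] * payoff(v) / B(T) for v in atom) / mass

# Tower property: t -> E_Q[ xi / B(T) | F_t ] is a Q-martingale.
for w in paths:
    for t in range(T):
        atom = [v for v in paths if v[:t] == w[:t]]
        mass = sum(Q[v] for v in atom)
        lhs = sum(Q[v] * value(t + 1, v) for v in atom) / mass
        assert abs(lhs - value(t, w)) < 1e-12
```

The assertions check the tower property atom by atom; the discounted value process is a \(Q\)-martingale by construction, exactly as used throughout the introduction.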
In this paper, we suppose that information restrictions apply and replace the filtration \((\mathcal{F}_{t})_{t \geq 0}\) by a family of sub-sigma-algebras \((\mathcal{G}_{t})_{t \geq 0}\) that may be non-monotone, i.e., we do not assume that \((\mathcal{G}_{t})_{t \geq 0}\) is a filtration. We focus on modelling frameworks where \((\mathcal{G}_{t})_{t \geq 0}\) is generated by a marked point process, because this allows us to calculate martingale representations explicitly. Our approach seems to work also in more general settings, but a general theory is left to future research.
Information restrictions can be motivated by legal restrictions, data privacy efforts, information summarisation or model simplifications. An example of a legal information restriction is the General Data Protection Regulation 2016/679 of the European Union, whose Article 17 contains a so-called ‘right to erasure’, a possible cause of information loss.
Example 1.1
Consider life insurance contracts that are evaluated by using big data. Data from activity trackers, social media, etc., can improve individual forecasts of the mortality and morbidity of insured persons. By exercising the ‘right to erasure’ according to the General Data Protection Regulation of the European Union, the policyholder may ask the insurer to delete parts of the health-related data at their discretion. Moreover, data providers might implement self-imposed information restrictions for data privacy reasons. For example, users of Google products can opt for an auto-delete of location history and activity data after a fixed time limit. As a result, the evaluation of an insurance liability \(\xi \) according to (1.1) will be restricted to sub-sigma-algebras \((\mathcal{G}_{t})_{t \geq 0}\) that are non-monotone in \(t\) due to data deletions.
Examples of information summarisation can be found in Norberg [22], which defines summarised life insurance values (retrospective and prospective reserves) that encompass non-monotone information. A popular model simplification is Markovian modelling even when the empirical data does not fully support the Markov assumption.
Example 1.2
We consider a credit rating process. In the Jarrow–Lando–Turnbull model, the filtration \((\mathcal{F}_{t})_{t \geq 0}\) is generated by a finite-state-space Markov chain \((R_{t})_{t \geq 0}\) that represents credit ratings; cf. Jarrow et al. [15]. The Markov property makes it possible to equivalently replace \(\mathcal{F}_{t}\) in (1.1) by the sub-sigma-algebra \(\mathcal{G}_{t}:= \sigma (R_{t})\). The Markov assumption can be motivated by the theoretical idea that a credit rating should fully describe the current risk profile of a prospective debtor so that historical ratings can be ignored. However, empirical data does not always support the Markov property, so that \(E_{Q}[ \xi / B(T) | \mathcal{G}_{t} ]\) may in fact differ from \(E_{Q}[ \xi / B(T) | \mathcal{F}_{t} ]\); cf. Lando and Skodeberg [17]. The information dynamics of \(\mathcal{G}_{t}=\sigma (R_{t})\) is non-monotone in \(t\).
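The difference between \(\mathcal{G}_{t}=\sigma (R_{t})\) and \(\mathcal{F}_{t}\) can be made concrete in a three-date toy rating model with ‘downgrade momentum’. The path probabilities below are hypothetical numbers of our own, chosen so that the Markov property fails:

```python
# Toy three-date rating process with "downgrade momentum" (hypothetical
# numbers), so that (R_t) is NOT Markov: the default probability from rating B
# at time 1 depends on whether the firm was downgraded from A or started in B.
P = {("A", "A", "A"): 0.40,
     ("A", "A", "D"): 0.05,
     ("A", "B", "B"): 0.10,
     ("A", "B", "D"): 0.10,   # downgraded firms default more often
     ("B", "B", "B"): 0.30,
     ("B", "B", "D"): 0.05}
assert abs(sum(P.values()) - 1.0) < 1e-12

xi = lambda w: 1.0 if w[2] == "D" else 0.0   # claim: default indicator at T=2

def cond_exp(cond):
    """E[xi | event described by 'cond'] by elementary conditioning."""
    mass = sum(p for w, p in P.items() if cond(w))
    return sum(p * xi(w) for w, p in P.items() if cond(w)) / mass

# G_1 = sigma(R_1):     condition only on the current rating B.
markov_view = cond_exp(lambda w: w[1] == "B")
# F_1 = sigma(R_0, R_1): condition on the full history (A,B) resp. (B,B).
full_after_downgrade = cond_exp(lambda w: w[0] == "A" and w[1] == "B")
full_stable          = cond_exp(lambda w: w[0] == "B" and w[1] == "B")

print(markov_view, full_after_downgrade, full_stable)
```

In this toy model, conditioning only on the current rating B gives a default probability of about 0.27, whereas the history-dependent values are 0.50 after a downgrade and about 0.14 otherwise, so \(E[ \xi | \mathcal{G}_{1} ] \neq E[ \xi | \mathcal{F}_{1} ]\).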
Non-monotone information structures can also be found in Pardoux and Peng [24] and Tang and Wu [27], but in these papers, specific independence assumptions make it possible to go back to filtrations and work with classical martingale representations.
From now on, we skip the subscript \(Q\) in (1.1) and all related expectations. Depending on the application, we interpret \(P\) either as the real-world measure or as a risk-neutral measure.
When we replace the filtration \((\mathcal{F}_{t})_{t \geq 0}\) in (1.1) by some non-monotone information \((\mathcal{G}_{t})_{t \geq 0}\), the powerful tools from martingale theory for studying the time dynamics of (1.1) are no longer available. In order to fill that gap, this paper derives general representations of the form
where \(\xi \) is any integrable random variable, \((\mathcal{G}_{t})_{t \geq 0}\) is a non-monotone family of sigma-algebras generated by an extended marked point process that involves information deletions, \((\mu _{I})_{I \in \mathcal{N}}\) is a set of counting measures that uniquely corresponds to the extended marked point process, \((\nu _{I})_{I \in \mathcal{N}}\) and \((\rho _{I})_{I \in \mathcal{N}}\) are infinitesimal forward and backward compensators of \((\mu _{I})_{I \in \mathcal{N}}\), and the integrands \(G_{I}(u-,u,e)\) and \(G_{I}(u,u,e)\) are adapted to the information at time \(u-\) and time \(u\), respectively. If \((\mathcal{G}_{t})_{t \geq 0}\) is increasing, i.e., a filtration, then the second line in (1.2) is zero and the first line conforms with classical martingale representations. The central idea of this paper is to focus only on those properties that martingales and compensators show on infinitesimally short intervals. We call this the ‘infinitesimal approach’. In principle, the infinitesimal approach is not restricted to point process frameworks, but a fully general theory is beyond the scope of this paper. We further extend our representation results to processes of the form
where \((X_{t})_{t \geq 0}\) is a suitably integrable càdlàg process. In this case, an additional drift term appears on the right-hand side of (1.2).
Martingale representations have various applications in finance and insurance, and this is in particular true for marked point process frameworks:
– If a financial or insurance claim is hedgeable, then explicit hedges can be derived from martingale representations; see e.g. Norberg [23] and Last and Penrose [18].
– Martingale representations are a central tool for constructing and solving backward stochastic differential equations (BSDEs); see e.g. Cohen and Elliott [6], Bandini [1] and Confortola [8]. Many optimal control problems in finance and insurance correspond to a BSDE problem, see e.g. Cohen and Elliott [7] and Delong [11, Chap. 1].
– Martingale representations can serve as additive risk factor decompositions; see Schilling et al. [26]. An insurer needs to additively decompose the surplus from a policy or an insurance portfolio for regulatory reasons; see e.g. Møller and Steffensen [20, Sect. 6]. Additive risk factor decompositions are also used in finance; see e.g. Rosen and Saunders [25].
In all three applications, infinitesimal martingale representations according to (1.2) allow us to include information restrictions in the modelling. Later, we study a hedging application for the model in Example 1.2. We shall see that estimating and calculating hedging strategies under inappropriate Markov assumptions may unintentionally replace classical martingales by infinitesimal forward martingales (the first line on the right-hand side of (1.2)), and the implied hedging error is then just the corresponding infinitesimal backward martingale part (the second line in (1.2)). The application of infinitesimal martingale representations in BSDE theory is illustrated with Example 1.1. We shall see that the integrands in (1.2) correspond to the so-called sum at risk, a central quantity in life insurance risk management. In Example 1.1, we also briefly discuss risk factor decompositions. Information deletions upon request for data privacy reasons can provoke arbitrage opportunities, and these can be split off as infinitesimal backward martingales, which is important for dealing with them.
The representation (1.2) implies that \(t \mapsto E[ \xi | \mathcal{G}_{t} ]\) has a (unique) semimartingale modification. More generally, we show that \(t \mapsto E[ X_{t} | \mathcal{G}_{t} ]\) has a (unique) semimartingale modification whenever \(X\) is a semimartingale with integrable variation on compacts. The uniqueness and the semimartingale property are crucial in applications where the time dynamics need to be studied. For example, in life insurance, the differential \(\mathrm{d}E[ X_{t} | \mathcal{G}_{t}]\) might describe the insurer’s current surplus or loss at time \(t\); cf. Norberg [21, 22].
The study of jump process martingales and their representations largely dates back to the 1970s; see e.g. Jacod [14], Boel et al. [2], Chou and Meyer [3], Davis [10] and Elliott [13]. Since then, extensions have been developed in different directions; see e.g. Last and Penrose [18] and Cohen [5]. All these papers stay within the framework of filtrations, i.e., the information dynamics is monotone. The infinitesimal approach we introduce here allows us to go beyond the framework of filtrations. An elegant way to derive the classical martingale representation is a bare-hands approach that starts with the Chou–Meyer construction of the martingale representation for a single jump process, followed by Elliott’s extension to the case of ordered jumps. In this paper, we also use a bare-hands approach, but the classical stopping time concept is not applicable in our non-monotone information setting, so we must depart from the well-trodden path.
The paper is organised as follows. In Sect. 2, we explain the basic concepts of the infinitesimal approach but avoid technicalities. In Sect. 3, we add technical assumptions and narrow the modelling framework down to pure jump process drivers. Section 4 verifies that (1.2) is indeed a well-defined process. In Sect. 5, we identify infinitesimal compensators for a large class of jump processes. The central result (1.2) is proved in Sect. 6 and extended to processes of the form (1.3) in Sect. 7. In Sect. 8, we take a closer look at Examples 1.1 and 1.2.
2 The infinitesimal approach
The central idea of the infinitesimal approach is to focus only on those properties that martingales and compensators show on infinitesimally short intervals. This section explains the basic ideas under the general assumption that all limits appearing in this section actually exist. Only from the next section on do we narrow the framework down to pure jump process drivers, which is sufficient but not necessary to guarantee the existence of the limits. So in general, the infinitesimal approach is not restricted to jump process frameworks, but it is beyond the scope of this paper to find general conditions for the existence of the limits.
Let \((\Omega , \mathcal{A}, P)\) be a complete probability space and let \(\mathcal{Z}\subseteq \mathcal{A}\) be the family of its nullsets. Let \(\mathbb{F}=(\mathcal{F}_{t})_{t \geq 0 }\) be a complete and right-continuous filtration on this probability space. We interpret \(\mathcal{F}_{t}\) as the observable information on the time interval \([0,t]\). Suppose that certain pieces of information expire after a finite holding time. By subtracting from \(\mathcal{F}_{t}\) all pieces of information that have expired until time \(t\), we obtain the admissible information at time \(t\). We assume that this admissible information is represented by a family \(\mathbb{G}=(\mathcal{G}_{t})_{t \geq 0 }\) of complete sigma-algebras
which may be non-monotone in \(t\).
A process \(X\) is adapted to the filtration \(\mathbb{F}\) if \(X_{t}\) is \(\mathcal{F}_{t}\)-measurable for each \(t \geq 0\). Likewise we say that a process \(X\) is adapted to the possibly non-monotone information \(\mathbb{G}\) if \(X_{t}\) is \(\mathcal{G}_{t}\)-measurable for each \(t \geq 0\). In addition to this classical concept, we also take an incremental perspective.
Definition 2.1
We call a process \(X\) incrementally adapted to \(\mathbb{G}\) if the increment \(X_{t}-X_{s}\) is \(\sigma ( \mathcal{G}_{u}, u \in (s,t])\)-measurable for any interval \((s,t] \subseteq [0,\infty )\).
In finance and insurance applications, we think of \(X\) as an aggregated cash flow where the aggregated payments \(X_{t} - X_{s}\) on the interval \((s, t]\) should depend only on the admissible information on \((s, t]\). If \(\mathbb{G}\) is a filtration, incremental adaptedness is equivalent to classical adaptedness, but the two concepts differ for non-monotone information.
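On a finite toy space, the difference between the two concepts can be checked directly. In the following sketch (our own example and numbers), sigma-algebras are encoded as partitions of \(\Omega \); a mark observed at time 1 is deleted at time 2 and replaced by a different observation, so \(\mathbb{G}\) is non-monotone:

```python
# Finite toy model of non-monotone information: sigma-algebras are encoded as
# partitions of Omega (illustrative example and numbers).
Omega = [1, 2, 3, 4]
G = {0: [set(Omega)],             # trivial information at time 0
     1: [{1, 2}, {3, 4}],         # at t=1 we observe whether omega <= 2
     2: [{1, 3}, {2, 4}]}         # at t=2 that is deleted; we observe parity

def join(partitions):
    """Partition generated by several partitions (the sigma-algebra join)."""
    blocks = {}
    for w in Omega:
        key = tuple(next(i for i, b in enumerate(p) if w in b)
                    for p in partitions)
        blocks.setdefault(key, set()).add(w)
    return list(blocks.values())

def measurable(f, partition):
    """f is measurable iff it is constant on every block of the partition."""
    return all(len({f(w) for w in block}) == 1 for block in partition)

X = {0: lambda w: 0.0,
     1: lambda w: 1.0 if w <= 2 else 0.0,              # G_1-measurable
     2: lambda w: (1.0 if w <= 2 else 0.0) + w % 2}    # X_1 plus a parity jump

incr = lambda w: X[2](w) - X[1](w)
print(measurable(incr, G[2]), measurable(X[2], G[2]))  # prints: True False
```

The increment on \((1,2]\) is \(\sigma (\mathcal{G}_{2})\)-measurable, so \(X\) is incrementally adapted there, while \(X_{2}\) itself fails to be \(\mathcal{G}_{2}\)-measurable, i.e., \(X\) is not \(\mathbb{G}\)-adapted.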
An integrable process \(X\) is a martingale with respect to \(\mathbb{F}\) if it is \(\mathbb{F}\)-adapted and
\[ E[ X_{t} \,|\, \mathcal{F}_{s} ] = X_{s} \]
almost surely for each \(0 \leq s \leq t\). Focusing on infinitesimally short intervals, in particular we have
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ X_{t_{k}} - X_{t_{k-1}} \,|\, \mathcal{F}_{t_{k-1}} ] = 0 \tag{2.1} \]
a.s. for each \(t \geq 0\), where \((\mathcal{T}_{n}^{t})_{n\in \mathbb{N}}\) is any increasing sequence (i.e., \(\mathcal{T}_{n}^{t} \subseteq \mathcal{T}_{n+1}^{t}\) for all \(n\)) of partitions \(0=t_{0} < t_{1} < \cdots < t_{n} =t \) of the interval \([0, t] \) such that the mesh size \(|\mathcal{T}_{n}^{t}| := \max \{ t_{k}-t_{k-1}: k=1, \ldots , n\}\) tends to 0 as \(n \rightarrow \infty \). In the literature, we can find for (2.1) the intuitive notation \(E[ \mathrm{d}X_{t} | \mathcal{F}_{t-} ] = 0\).
Definition 2.2
Let \(X\) be incrementally adapted to \(\mathbb{G}\). We say that \(X\) is an infinitesimal forward/backward martingale (IF/IB-martingale) with respect to \(\mathbb{G}\) if for each \(t \geq 0\) and any increasing sequence \((\mathcal{T}_{n}^{t})_{n\in \mathbb{N}}\) of partitions of \([0, t] \) with \(\lim _{n \rightarrow \infty } |\mathcal{T}_{n}^{t}|=0\), we have
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ X_{t_{k}} - X_{t_{k-1}} \,|\, \mathcal{G}_{t_{k-1}} ] = 0 \]
or
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ X_{t_{k}} - X_{t_{k-1}} \,|\, \mathcal{G}_{t_{k}} ] = 0, \]
respectively, assuming that the expectations and limits exist.
Suppose now that \(X\) is an \(\mathbb{F}\)-adapted and integrable counting process. The so-called compensator \(C\) of \(X\) is the unique \(\mathbb{F}\)-predictable finite-variation process starting from \(C_{0}=0\) such that \(X-C\) is an \(\mathbb{F}\)-martingale. In particular, \(C\) satisfies the equation
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ X_{t_{k}} - X_{t_{k-1}} \,|\, \mathcal{F}_{t_{k-1}} ] = C_{t} \tag{2.2} \]
almost surely for each \(t \geq 0\); see Karr [16, Theorem 2.17]. The intuitive notation for (2.2) is \(E[ \mathrm{d}X_{t} | \mathcal{F}_{t-} ] = \mathrm{d}C_{t}\). Furthermore, one can show that the \(\mathbb{F}\)-predictability of \(C\) implies that
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ C_{t_{k}} - C_{t_{k-1}} \,|\, \mathcal{F}_{t_{k-1}} ] = C_{t} \tag{2.3} \]
almost surely for each \(t \geq 0\), intuitively written as \(E[ \mathrm{d}C_{t} | \mathcal{F}_{t-} ] = \mathrm{d}C_{t}\). The latter fact motivates the following definition.
Definition 2.3
We call \(X\) infinitesimally forward/backward predictable (IF/IB-predictable) with respect to \(\mathbb{G}\) if for each \(t \geq 0\) and any increasing sequence \((\mathcal{T}_{n}^{t})_{n\in \mathbb{N}}\) of partitions of \([0, t]\) with \(\lim _{n \rightarrow \infty } |\mathcal{T}_{n}^{t}|=0\), we almost surely have
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ X_{t_{k}} - X_{t_{k-1}} \,|\, \mathcal{G}_{t_{k-1}} ] = X_{t} - X_{0} \]
or
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ X_{t_{k}} - X_{t_{k-1}} \,|\, \mathcal{G}_{t_{k}} ] = X_{t} - X_{0}, \]
respectively, assuming that the expectations and limits exist.
By combining (2.2) and (2.3), we obtain
\[ \lim _{n \rightarrow \infty } \sum _{k=1}^{n} E[ (X_{t_{k}} - C_{t_{k}}) - (X_{t_{k-1}} - C_{t_{k-1}}) \,|\, \mathcal{F}_{t_{k-1}} ] = 0 \]
almost surely for each \(t \geq 0\), which means that the process \(X-C\) is an IF-martingale with respect to \(\mathbb{F}\) according to Definition 2.2.
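For a standard Poisson process, the partition sums in (2.2) can be evaluated by hand: increments are independent of the past, so each conditional expectation equals \(\lambda \Delta t\) and the sums recover the compensator \(C_{t}=\lambda t\). The following simulation sketch (our own, with made-up parameters) confirms this numerically:

```python
import math
import random

# Monte Carlo check that the partition sums of conditional increment means
# recover the Poisson compensator C_t = lam * t (illustrative parameters).
random.seed(0)

def poisson(mu):
    """Knuth's method for sampling a Poisson(mu) random variable."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

lam, t, n_grid, n_paths = 2.0, 3.0, 64, 20000
dt = t / n_grid

# Poisson increments are independent of the past, so the conditional
# expectation E[N_{t_k} - N_{t_{k-1}} | F_{t_{k-1}}] is just E[Poisson(lam*dt)].
partition_sum = 0.0
for _ in range(n_grid):
    mean_incr = sum(poisson(lam * dt) for _ in range(n_paths)) / n_paths
    partition_sum += mean_incr

print(partition_sum, lam * t)   # partition_sum is close to C_t = 6.0
```

Refining the grid changes nothing here because the conditional increment means are exact on every subinterval; only the Monte Carlo noise remains.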
Definition 2.4
Let \(X\) be incrementally adapted to \(\mathbb{G}\). We say that a process \(C\) is an infinitesimal forward/backward compensator of \(X\) (IF/IB-compensator) with respect to \(\mathbb{G}\) if \(C\) is incrementally adapted to \(\mathbb{G}\) and IF/IB-predictable and \(X-C\) is an IF/IB-martingale with respect to \(\mathbb{G}\), respectively.
Let \(\mathcal{G}_{[t_{k},t_{k+1}]}:=\sigma ( \mathcal{G}_{u}, u \in [t_{k},t_{k+1}])\) for any \(t_{k+1} \geq t_{k} \geq 0\) and \(\xi \in L^{1}(\Omega , \mathcal{A},P)\). Then the construction
\[ F_{t} := \lim _{n \rightarrow \infty } \sum _{k=0}^{n-1} \big( E[ \xi \,|\, \mathcal{G}_{[t_{k},t_{k+1}]}] - E[ \xi \,|\, \mathcal{G}_{t_{k}}] \big), \qquad B_{t} := \lim _{n \rightarrow \infty } \sum _{k=0}^{n-1} \big( E[ \xi \,|\, \mathcal{G}_{[t_{k},t_{k+1}]}] - E[ \xi \,|\, \mathcal{G}_{t_{k+1}}] \big) \]
may yield a decomposition of the process \(t\mapsto E[ \xi | \mathcal{G}_{t}]\) into the difference of an IF-martingale and an IB-martingale, since
\[ \sum _{k=0}^{n-1} \big( E[ \xi \,|\, \mathcal{G}_{[t_{k},t_{k+1}]}] - E[ \xi \,|\, \mathcal{G}_{t_{k}}] \big) - \sum _{k=0}^{n-1} \big( E[ \xi \,|\, \mathcal{G}_{[t_{k},t_{k+1}]}] - E[ \xi \,|\, \mathcal{G}_{t_{k+1}}] \big) = E[ \xi | \mathcal{G}_{t}] - E[ \xi | \mathcal{G}_{0}]. \]
Definition 2.5
We say that \(E[ \xi | \mathcal{G}_{t}]-E[ \xi | \mathcal{G}_{0}] = F_{t} - B_{t}\), \(t \geq 0\), is an infinitesimal martingale representation if \(F\) is an IF-martingale and \(B\) is an IB-martingale with respect to \(\mathbb{G}\).
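In discrete time, the telescoping behind Definition 2.5 can be verified exactly on a finite probability space. The following sketch (our own toy example, with \(\mathcal{G}_{[t_{k},t_{k+1}]}\) realised as the join of the two grid-time partitions) checks \(E[ \xi | \mathcal{G}_{t}]-E[ \xi | \mathcal{G}_{0}] = F_{t} - B_{t}\) pathwise:

```python
# Exact check of E[xi | G_t] - E[xi | G_0] = F_t - B_t on a finite toy space
# (illustrative example); sigma-algebras are partitions, and G_{[t_k,t_{k+1}]}
# is the join of the partitions at the two grid times.
Omega = [1, 2, 3, 4]                                  # uniform probability
G = {0: [set(Omega)],
     1: [{1, 2}, {3, 4}],
     2: [{1, 3}, {2, 4}]}                             # non-monotone family
xi = {1: 4.0, 2: 0.0, 3: 1.0, 4: 3.0}

def join(p, q):
    """Partition generated by two partitions (sigma-algebra join)."""
    return [b & c for b in p for c in q if b & c]

def cond_exp(partition, w):
    """E[xi | partition] evaluated at omega = w (uniform weights)."""
    block = next(b for b in partition if w in b)
    return sum(xi[v] for v in block) / len(block)

for w in Omega:
    F = B = 0.0
    for k in (0, 1):
        joint = join(G[k], G[k + 1])    # information on [t_k, t_{k+1}]
        F += cond_exp(joint, w) - cond_exp(G[k], w)      # forward part
        B += cond_exp(joint, w) - cond_exp(G[k + 1], w)  # backward part
    assert abs((cond_exp(G[2], w) - cond_exp(G[0], w)) - (F - B)) < 1e-12
```

If \(\mathbb{G}\) were a filtration, every backward increment \(E[ \xi | \mathcal{G}_{[t_{k},t_{k+1}]}] - E[ \xi | \mathcal{G}_{t_{k+1}}]\) would vanish and only the forward part would remain.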
Suppose now that \(X\) describes a discounted claim process in a finance or insurance application. Then we are typically interested in the process \(t \mapsto E[ X_{t} | \mathcal{F}_{t}]\), which is not necessarily well defined. If \(X\) is a càdlàg process whose suprema on compacts have finite expectations, then there exists a unique càdlàg process \(X^{\mathbb{F}}\), the so-called optional projection of \(X\) with respect to \(\mathbb{F}\), such that
\[ X^{\mathbb{F}}_{t} = E[ X_{t} \,|\, \mathcal{F}_{t} ] \]
almost surely for each \(t \geq 0\). We say here that a process is unique if it is unique up to evanescence. We now expand the concept of optional projections to non-monotone information.
Definition 2.6
Let \(X\) be an integrable càdlàg process. If there exists a unique càdlàg process \(X^{\mathbb{G}}\) such that
\[ X^{\mathbb{G}}_{t} = E[ X_{t} \,|\, \mathcal{G}_{t} ] \]
almost surely for each \(t \geq 0\), we call \(X^{\mathbb{G}}\) the optional projection of \(X\) with respect to \(\mathbb{G}\).
The optional projection \(X^{\mathbb{G}}\) can be decomposed as
which may represent a sum of an IF-martingale, an IB-martingale and an IB-compensator with respect to \(\mathbb{G}\). By switching the roles of \(t_{k}\) and \(t_{k+1}\), we can obtain a similar decomposition where the IB-compensator is replaced by an IF-compensator.
Definition 2.7
We call \(E[ X_{t} | \mathcal{G}_{t}]-E[ X_{0} | \mathcal{G}_{0}] = F_{t} - B_{t} + C_{t}\), \(t \geq 0\), an infinitesimal representation if \(F\) is an IF-martingale, \(B\) is an IB-martingale and \(C\) is either an IB-compensator or an IF-compensator with respect to \(\mathbb{G}\).
As mentioned at the beginning of this section, we have so far simply assumed that all the limits discussed here indeed exist. In the next section, we focus on a marked point process framework since this not only guarantees the existence of the limits but also allows us to calculate them explicitly.
3 Jump process framework
In the literature, we can find different approaches for defining a jump process framework. One way is to start with a marked point process \((\tau _{i},\zeta _{i})_{i \in \mathbb{N}}\) on \((\Omega , \mathcal{A}, P)\) with some measurable mark space \((E, \mathcal{E})\), i.e.,
– the mappings \(\tau _{i}:(\Omega , \mathcal{A}) \rightarrow ([0,\infty ], \mathcal{B}([0,\infty ]))\), \(i \in \mathbb{N}\), are random times,
– the mappings \(\zeta _{i}: (\Omega , \mathcal{A}) \rightarrow (E, \mathcal{E})\), \(i \in \mathbb{N}\), are random marks.
In contrast to the point process literature, we do not assume here that the random times \((\tau _{i})_{i\in \mathbb{N}}\) are increasing or ordered in any specific way. This gives us useful modelling flexibility; see also the comments at the end of this section. Let \(E\) be a Polish space and \(\mathcal{E}:=\mathcal{B}(E)\) its Borel sigma-algebra. For the sake of simple notation, we moreover assume that \(\Omega \) is a Polish space and \(\mathcal{A}\) its Borel sigma-algebra. The latter assumption can actually be dropped by observing that all random activity in our model comes from a marked point process that can be embedded into a Polish space. We interpret each \(\zeta _{i}\) as a piece of information that can be observed from time \(\tau _{i}\) on. As motivated in the introduction, we additionally assume that the information pieces \(\zeta _{i}\) are possibly deleted after a finite holding time. Therefore, we expand the marked point process \((\tau _{i},\zeta _{i})_{i \in \mathbb{N}}\) to \((\tau _{i}, \zeta _{i}, \sigma _{i} )_{i \in \mathbb{N}}\), where
– the mappings \(\sigma _{i}:(\Omega , \mathcal{A}) \rightarrow ([0,\infty ], \mathcal{B}([0,\infty ]))\), \(i \in \mathbb{N}\), are random times such that \(\tau _{i} \leq \sigma _{i} \).
We interpret \(\sigma _{i}\) as the deletion time of information piece \(\zeta _{i}\). Note that the random times \((\sigma _{i})_{i \in \mathbb{N}}\) are in general not ordered. For the sake of a more compact notation, we work in the following with the equivalent sequence \((T_{i},Z_{i})_{i \in \mathbb{N}}\) defined as
i.e., the random times \(T_{2i-1}\) with odd indices refer to innovations and the consecutive random times \(T_{2i}\) with even indices are the corresponding deletion times. We generally assume that
which will ensure the existence of (infinitesimal) compensators. Condition (3.1) implies that almost surely, there are at most finitely many random times on bounded intervals. Moreover, we assume that
i.e., a new piece of information is not instantaneously deleted but is available for at least a short amount of time. Based on the sequence \((T_{i},Z_{i})_{i \in \mathbb{N}}\), we generate random counting measures \(\mu _{I}\) via
for \(t \geq 0\), \(B \in \mathcal{E}_{I}\) and finite subsets \(I \subseteq \mathbb{N}\), where
If the different random times \(T_{i}\) never coincide, then we only need to consider the counting measures \(\mu _{\{i\}}\), \(i \in \mathbb{N}\), which describe the separate arrivals of the random times \(T_{i}\) and their marks \(Z_{i}\). But if random times can occur simultaneously, then we need the full range of counting measures \(\mu _{I}\), \(I \subseteq \mathbb{N}\), \(\vert I \vert <\infty \), which cover all kinds of separate and joint events. For each \(I\), the set functions \(\mu _{I}(\cdot )(\omega )\), \(\omega \in \Omega \), determined by their values on the sets \([0,t] \times B \), form a random counting measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I} ) )\), i.e.,
– for any fixed \(A \in \mathcal{B}([0,\infty ) \times E_{I})\), the mapping \(\omega \mapsto \mu _{I} ( A)(\omega )\) is measurable from \((\Omega , \mathcal{A})\) to \((\overline{\mathbb{N}}_{0}, \mathcal{B}(\overline{\mathbb{N}}_{0}))\) with \(\overline{\mathbb{N}}_{0}:= \mathbb{N}_{0} \cup \{\infty \}\),
– for almost each \(\omega \in \Omega \), the mapping \(A \mapsto \mu _{I} (A)(\omega )\) is a locally finite measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I}) )\).
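The interplay of arrivals and deletions can be prototyped in a few lines. The event list and helper names below are our own illustrative choices, and `mu_single` is only a simplified single-index stand-in for the measures \(\mu _{I}\):

```python
# Sketch of admissible information in the extended marked point process model.
# Each event encodes (tau_i, zeta_i, sigma_i): arrival time, mark, deletion
# time; all names and numbers here are illustrative assumptions.
events = [(0.5, "z1", 2.0), (1.0, "z2", 1.5), (2.5, "z3", float("inf"))]

def visible(t):
    """Marks that are admissible at time t (arrived and not yet deleted)."""
    return {z for (tau, z, sig) in events if tau <= t < sig}

# The admissible information is non-monotone: marks appear and disappear again.
print(visible(1.2), visible(1.8), visible(3.0))

def mu_single(i, t, B):
    """A simplified single-index counting measure: has arrival i happened by
    time t with a mark in B? (The joint-occurrence measures mu_I would in
    addition keep track of coinciding jump times.)"""
    tau, z, sig = events[i]
    return int(tau <= t and z in B)
```

Between times 1.2 and 1.8 the mark `z2` disappears again, which is exactly the non-monotonicity that the family \(\mathbb{G}\) records.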
The observable information at time \(t \geq 0\) is given by the complete filtration
which lets the random times \(T_{i}\), \(i \in \mathbb{N}\), be stopping times. Here the symbol ∨ denotes the sigma-algebra that is generated by the union of the involved sets. The admissible information at time \(t\geq 0 \) is given by the family of sub-sigma-algebras
The admissible information immediately before time \(t> 0 \) is given by the family of sub-sigma-algebras
Analogously to filtrations, we write \(\mathbb{G}= (\mathcal{G}_{t})_{t \geq 0}\) and \(\mathbb{G}^{-}= (\mathcal{G}^{-}_{t})_{t \geq 0}\).
Remark 3.1
Recall that \(T_{2i-1} \leq T_{2i}\), \(i \in \mathbb{N}\), is the only kind of order that we assume to hold between the random times \(T_{i}\), resulting from the natural assumption \(\tau _{i} \leq \sigma _{i} \), \(i \in \mathbb{N}\). This fact is relevant when an ordering unintentionally reveals additional information. For example, if we have a model where the innovation times \(\tau _{i}\) are ordered, i.e., \(T_{1} < T_{3} < T_{5} < \cdots \), then \(\mathcal{G}_{t}\) reveals among other things the exact number of deletions that have happened until \(t\). This can be an unwanted feature if the number of past deletions is itself a non-admissible piece of information. In many situations, we can avoid such an implied information effect by ordering the pairs \((T_{2i-1},T_{2i})\) in a non-informative way.
Remark 3.2
Without loss of generality, suppose here that \(0 \not \in E \). We define an infinite-dimensional process \((\Gamma _{t})_{t \geq 0} \) by
Then, using the fact that the paths of \((\Gamma _{t})_{t \geq 0} \) are componentwise càdlàg, the information \(\mathcal{G}_{t} \) and \(\mathcal{G}^{-}_{t}\) can be alternatively represented as
where the left limit \(\Gamma _{t-}\) is defined componentwise. However, \(\mathcal{G}^{-}_{t}\) is usually different from the left set-limit \(\mathcal{G}_{t-}\), and the latter set-limit might not even exist. For example, consider a model with only two jumps \(T_{1}\), \(T_{2}\) in finite time and a trivial mark \(Z_{2}=\mathrm{{const}}\). It is not difficult to choose \(T_{1}, T_{2}\) in such a way that the events \(\{T_{1} \leq s < T_{2}\}\) and \(\{T_{1} \leq u < T_{2}\}\) are different for each \(s < u \leq t\). In this case, \(\liminf _{s \uparrow t} \mathcal{G}_{s}= \bigvee _{ s < t} \bigcap _{ s < u < t} \mathcal{G}_{u} \) equals the completed trivial sigma-algebra, whereas \(\limsup _{s \uparrow t} \mathcal{G}_{s}= \bigcap _{ s < t} \bigvee _{ s < u < t} \mathcal{G}_{u} \) equals \(\mathcal{G}^{-}_{t}\).
4 Optional projections
In this section, we study existence and path properties of optional projections. Note that this and all following sections generally assume that we are in the marked point process framework of Sect. 3. Recall also our specific definition of \(\mathcal{G}^{-}_{t}\).
Theorem 4.1
Suppose that \(X=(X_{t})_{t\geq 0}\) is a càdlàg process that satisfies
Then the optional projection \(X^{\mathbb{G}}\) according to Definition 2.6 exists, and we have \(X^{\mathbb{G}}_{t-}=E[ X_{t-} | \mathcal{G}^{-}_{t}]\) almost surely for each \(t > 0\). If \(X\) has integrable variation on compacts, then \(X^{\mathbb{G}}\) has paths of finite variation on compacts.
It might be surprising here that \(X^{\mathbb{G}}\) is always a càdlàg process, but note that condition (3.1) rules out clusters of jump times in our marked point process framework. Before we turn to the proof of Theorem 4.1, we develop several auxiliary results. Let
be the families of all finite subsets of the natural numbers and of the odd natural numbers, respectively, and define
where \(Q_{I}:=\sup \{t \geq 0 : \mu _{I}([0,t] \times E_{I})=0\} \).
Since \(\Omega \) is a Polish space and \(\mathcal{A}\) its Borel sigma-algebra, there exist regular conditional probabilities \(P[ \, \cdot \, | Z_{M}]\) and \(P[ \, \cdot \, | Z_{M},R_{I}]\) on \((\Omega , \mathcal{A})\) for each \(M \in \mathcal{M}\) and \(I \in \mathcal{N}\). As the sets \(\mathcal{M}\) and \(\mathcal{N}\) are countable, all these conditional probabilities are simultaneously unique up to a joint exception nullset. In this paper, the notation
refers to an arbitrary but fixed regular version of the conditional probability on the right-hand side, and for any integrable random variable \(Z\), we set
i.e., \(E_{M,R_{I}}[Z ]\) is the specific version of the conditional expectation \(E[ Z | Z_{M},R_{I}]\) that we obtain by integrating \(Z\) with respect to the specific regular versions that we picked for \(P[\,\cdot \,| Z_{M},R_{I}] \). In case of \(I= \emptyset \), we also use the short forms \(P_{M}=P_{M,R_{\emptyset }}\) and \(E_{M}=E_{M,R_{\emptyset }}\) since \(P_{M,R_{\emptyset }}\) is a version of \(P[\,\cdot \, | Z_{M}]\).
Moreover, with defining \(I-1:= \{i-1: i \in I\}\), the mappings
refer to arbitrary but fixed regular versions of the factorised conditional expectations on the right-hand side, and for any integrable random variable \(Z\), we define
By reducing \(M\) down to \(M_{I}\), we leave out exactly those random variables in \(Z_{M}\) that are already covered by \(R_{I}\). Note that the mapping \(P_{M,R_{I}=r}[\,\cdot \, ]|_{r=R_{I}}\) equals \(P_{M,R_{I}} [\,\cdot \,]\). For \(M \in \mathcal{M}\) and \(t \geq 0\), we define the \(\mathcal{G}_{t}\)-measurable sets
and corresponding \(\mathbb{G}\)-adapted stochastic processes \(\mathbb{I}^{M}=(\mathbb{I}_{t}^{M})_{t\geq 0}\) via
Because of the assumption (3.1), the paths of \(\mathbb{I}^{M}\) have finitely many jumps on compacts only, so that they have left and right limits. Moreover, they are right-continuous by construction, so that the processes \(\mathbb{I}^{M}\) are càdlàg. The left limits can be represented as \(\mathbb{I}_{t-}^{M} = \mathbf{1}_{A^{M}_{t-}}\), where
Proposition 4.2
For any integrable random variable \(\xi \) and any sets \(M \in \mathcal{M}\) and \(I \in \mathcal{N}\), we almost surely have
with the convention that \(0/0:=0\).
Note here that \(\sigma (R_{I})\) equals the trivial sigma-algebra if \(I = \emptyset \). Whenever we have \(E_{M,R_{I}} [ \mathbb{I}^{M}_{t}]=0\) and \(E_{M,R_{I}} [ \mathbb{I}^{M}_{t-} ]=0\), we necessarily have \(E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ] =0\) and \(E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t-}]=0\), respectively, so that the right-hand sides of (4.3) are well defined.
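The structure of (4.3) is the elementary Bayes-type identity \(E[ \xi \mid A ] = E[ \xi \mathbf{1}_{A} ]/P[A]\), applied with regular conditional probabilities in place of \(P\). A quick Monte Carlo sketch (our own toy distribution) illustrates this ratio form:

```python
import random

# On the event A = {indicator = 1}, a conditional expectation reduces to the
# ratio E[xi * 1_A] / E[1_A]; the distribution below is an arbitrary toy choice
# with U, V independent and uniform on [0, 1].
random.seed(1)
samples = [(random.random(), random.random()) for _ in range(200000)]

xi = lambda s: s[0] + s[1]
indicator = lambda s: 1.0 if s[1] < 0.25 else 0.0

num = sum(xi(s) * indicator(s) for s in samples) / len(samples)  # E[xi 1_A]
den = sum(indicator(s) for s in samples) / len(samples)          # E[1_A]

ratio = num / den
print(ratio)   # close to E[U] + E[V | V < 0.25] = 0.5 + 0.125 = 0.625
```

The convention \(0/0:=0\) in Proposition 4.2 handles exactly the degenerate case where the denominator, i.e., the conditional probability of the event, vanishes.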
Proof of Proposition 4.2
The left-hand sides of (4.3) almost surely equal the conditional expectations that one obtains when the families \(\mathbb{G}\) and \(\mathbb{G}^{-}\) of sigma-algebras are replaced by their non-completed versions. Therefore, in the remainder of the proof, we ignore the extension by \(\mathcal{Z}\) in the definitions of \(\mathbb{G}\) and \(\mathbb{G}^{-}\).
For each \(H \in \sigma (Z_{M}) \), there exists a \(G \in \mathcal{G}_{t} \) such that \(H \cap A_{t}^{M} = G \cap A_{t}^{M} \), and for each \(G \in \mathcal{G}_{t} \), there exists an \(H \in \sigma (Z_{M}) \) such that \(G \cap A_{t}^{M} = H \cap A_{t}^{M}\). Thus
This implies that the random variable \(\mathbb{I}^{M}_{t} \frac{E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ]}{E_{M,R_{I}}[ \mathbb{I}^{M}_{t}]}\) is \((\mathcal{G}_{t}\vee \sigma (R_{I}))\)-measurable, and for each \(G \in \mathcal{G}_{t} \vee \sigma (R_{I})\), we obtain
i.e., the first equation in (4.3) holds. By replacing (4.4) by
we can analogously show that the second equation in (4.3) holds. □
Lemma 4.3
For each \(M \in \mathcal{M}, I \in \mathcal{N}, r \in [0,\infty) \times E_{I} \) and each càdlàg process \(X\) that satisfies condition (4.1), the stochastic processes
have càdlàg paths. Moreover, their left limits can be obtained by replacing \(X_{t} \mathbb{I}_{t}^{M}\) by \(X_{t-} \mathbb{I}_{t-}^{M}\).
Proof
Apply the dominated convergence theorem. □
Proposition 4.4
With the conventions \(0/0:=0\) and \(1/0:=\infty \), we have for each \(M\in \mathcal{M} \) almost surely that
Proof
Let \(\tau \) and \(\sigma \) be any two nonnegative random times such that \(\tau \leq \sigma \). At first we are going to show that
almost surely. For each \((t,s) \in [0,\infty )^{2}\), we define the unbounded rectangles
and the countably generated set
Let \(\partial B\) and \(B^{\circ }\) be the boundary and the interior of \(B\). Any line of the form
intersects \(\partial B\) at most at one point, since for any two points \(y,y' \in L_{x}\) with \(y\neq y'\), we either have \(y \in A_{y'}^{\circ }\) or \(y' \in A_{y}^{\circ }\). Therefore the set
is countable, and
is countably generated. The sets \(N_{B}=\{(\tau ,\sigma ) \in B\}\) and \(N_{C}=\{(\tau ,\sigma ) \in C\}\) are both nullsets since they equal countable unions of nullsets.
Suppose now that \(Y(\omega ) = \infty \) for an arbitrary but fixed \(\omega \in \Omega \). We necessarily have \(\tau (\omega ) < \sigma (\omega )\). Since \(t \mapsto E[ \mathbf{1}_{\{\tau \leq t < \sigma \}} ]\) is a càdlàg function, at least one of the following statements is true:
1. \(E[ \mathbf{1}_{\{\tau \leq u < \sigma \}} ]=0\) for some \(u \in (\tau (\omega ),\sigma (\omega ))\).
2. \(E[ \mathbf{1}_{\{\tau < u \leq \sigma \}} ]=0\) for some \(u \in (\tau (\omega ),\sigma (\omega ))\).
3. \(E[ \mathbf{1}_{\{\tau \leq u < \sigma \}} ]=0\) for \(u = \tau (\omega )\).
4. \(E[ \mathbf{1}_{\{\tau < u \leq \sigma \}} ]=0\) for \(u = \sigma (\omega )\).
In case (1), we have \(P[(\tau ,\sigma )\in A_{(u,u)}] =E[ \mathbf{1}_{\{\tau \leq u < \sigma \}} ] =0\) and \((\tau (\omega ),\sigma (\omega ))\) is in \(A_{(u,u)}^{\circ }\), so that we can conclude that \(\omega \in N_{B}\).
In case (2), we can argue analogously to case (1), but need to replace the definition of \(A_{(t,s)}\) by \(\{ (t',s') : t'< t, s\leq s'\}\) and define a corresponding nullset \(N_{B}'\). We obtain that \(\omega \in N_{B}'\).
In case (3), we have \(P[(\tau ,\sigma )\in A_{(\tau (\omega ),\tau (\omega ))}]=E[ \mathbf{1}_{\{\tau \leq \tau (\omega ) < \sigma \}} ] =0\) as well as \((\tau (\omega ),\sigma (\omega )) \in A_{(\tau (\omega ),\tau ( \omega ))} \subseteq B \cup \partial B\). If \((\tau (\omega ),\sigma (\omega )) \in B\), then \(\omega \in N_{B}\). If \((\tau (\omega ),\sigma (\omega )) \in \partial B \), then the whole line segment
is in \(\partial B\) because of \((\tau (\omega ), \tau (\omega )) \in \partial B\) and the rectangular shape of the sets \(A_{(t,s)}\). On this line, there is at least one intersection with \(C\), so that we can conclude that \(\omega \in N_{C}\).
In case (4), we can argue similarly to case (3), but need to replace the definition of \(A_{(t,s)}\) by \(\{ (t',s') : t'< t, s\leq s'\}\) and define corresponding nullsets \(N_{B}'\) and \(N_{C}'\).
All in all, we have \(P[Z=\infty ] \leq P[N_{B} \cup N_{C}\cup N_{B}' \cup N_{C}']=0\), i.e., (4.6) holds.
Now let \(M \in \mathcal{M}\) be arbitrary but fixed and choose \(\tau \) and \(\sigma \) as the random times where \(\mathbb{I}_{t}^{M}\) jumps from zero to one and jumps back to zero, respectively. Suppose that \(P_{Z_{M}=z}\) is a regular version of \(P[\, \cdot \,| Z_{M} = z]\) with corresponding expectation \(E_{Z_{M}=z}[\,\cdot \,]\). Then from (4.6), we can conclude that
for each choice of \(z\). Replacing \(z\) by \(Z_{M}\) in both places, where we use the insertion rule for conditional expectations for the inner occurrence, and taking the unconditional expectation on both sides of the equation, we end up with
□
Proof of Theorem 4.1
Motivated by Proposition 4.2, we set
since this process almost surely equals \(X^{\mathbb{G}}_{t}\) for each \(t \geq 0\). Note that there are at most a countable number of conditional expectations involved; so the corresponding regular versions are simultaneously unique up to evanescence. For each compact interval \([0,t]\) and almost each \(\omega \in \Omega \), the set
is finite due to the assumption (3.1). If \(E_{M} [ \mathbb{I}^{M}_{t} ](\omega )\neq 0\), Lemma 4.3 yields that
If \(E_{M} [ \mathbb{I}^{M}_{t} ](\omega )=0\), Proposition 4.4 implies that \(\mathbb{I}^{M}_{t}=0\) for almost all \(\omega \in \Omega \), where the exception nullset does not depend on the choice of \(t\). So (4.8) is almost surely true on \([0,\infty )\), since \(\mathbb{I}^{M}_{t}(\omega )=0\) implies that there is a whole interval \([t,t+\epsilon _{\omega })\) where the right-continuous jump path \(s \mapsto \mathbb{I}^{M}_{s}(\omega )\) is constantly zero. Similarly, we can show that the process \(Y\) almost surely has left limits, which are of the form
According to Proposition 4.2, \(Y_{t-}\) almost surely equals \(E[ X_{t-} | \mathcal{G}^{-}_{t}]\). As càdlàg processes are uniquely defined by their values on countable dense subsets of the time line, our choice for \(X^{\mathbb{G}}\) is almost surely the only possible modification of \((E[X_{t} | \mathcal{G}_{t}])_{t\geq 0}\).
The variation of \(Y\) on \([0,t]\) is bounded by
where \(\mathcal{T}^{t}\) is any partition of \([0,t]\). As \(C_{M}(\omega ):= \sup _{t} \mathbb{I}^{M}_{t}(\omega )/E_{M} [ \mathbb{I}^{M}_{t} ](\omega )\) is finite for almost each \(\omega \in \Omega \) (see Proposition 4.4) and the variation of \(L_{M}(s):= E_{M}[\mathbb{I}^{M}_{s}] \) is bounded by 2, the latter bound is dominated by
which is finite for almost each \(\omega \in \Omega \) since \(X\) has integrable variation on compacts and \(\mathcal{M}_{t}(\omega )\) is finite. □
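Ratios of expectations of the form \(E[\xi \,\mathbb{I}^{M}_{t}]/E[\mathbb{I}^{M}_{t}]\), as in Proposition 4.2 and the proof of Theorem 4.1, can be illustrated numerically in a toy setting. The following Python sketch (all parameters hypothetical, not part of the paper's model) estimates a conditional expectation of a two-state chain given its current state as such a ratio and compares it with the exact value:

```python
import random

random.seed(5)

P_SW, STEPS, T_OBS = 0.2, 10, 4  # hypothetical two-state chain with states +-1
paths = 50000
num = den = 0
for _ in range(paths):
    x, traj = 1, []
    for _ in range(STEPS):
        if random.random() < P_SW:
            x = -x
        traj.append(x)
    # I = indicator of the conditioning event {state at time T_OBS is +1}
    ind = traj[T_OBS - 1] == 1
    num += traj[-1] * ind
    den += ind

ratio = num / den                             # ratio estimator E[xi * I] / E[I]
analytic = (1 - 2 * P_SW) ** (STEPS - T_OBS)  # exact E[X_STEPS | X_T_OBS = +1]
print(ratio, analytic)
```

Up to Monte Carlo error, the ratio estimator agrees with the closed-form conditional expectation.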
5 Infinitesimal compensators
In this section, we derive infinitesimal compensators for a large class of incrementally adapted jump processes, in particular for the counting processes \(t \mapsto \mu _{I}([0,t]\times B)\) for any \(I \in \mathcal{N}\) and \(B\in \mathcal{E}_{I}\). Under the conventions \(0/0:=0\) and (4.2), let
for \(t \geq 0\), \(B \in \mathcal{E}_{I}\), \(I \in \mathcal{N}\).
Proposition 5.1
For each \(I \in \mathcal{N}\), the mappings \(\nu _{I}\) and \(\rho _{I}\) can be uniquely extended to random measures on \(([0, \infty ) \times E_{I}, \mathcal{B}([0, \infty ) \times E_{I}))\).
The proof of the proposition is given below. In the following, we use the notation
for random measures \(\kappa \) and integrable random functions \(F\).
Theorem 5.2
Suppose that the mappings \((t,e,\omega ) \mapsto F_{I}(t,e)(\omega )\), \(I \in \mathcal{N}\), are jointly measurable and satisfy
If \(F_{I}(t,e)\) is \(\mathcal{G}_{t}^{-}\)-measurable for each \((t,e)\), then for each \(B \in \mathcal{E}_{I}\), the jump process
has the IF-compensator
If \(F_{I}(t,e)\) is \(\mathcal{G}_{t}\)-measurable for each \((t,e)\), then for each \(B \in \mathcal{E}_{I}\), the jump process
has the IB-compensator
By choosing \(F_{I}\equiv 1\), Theorem 5.2 yields in particular that \(\nu _{I}\) is the IF-compensator and \(\rho _{I}\) is the IB-compensator of the counting process \(\mu _{I}\). In intuitive notation, we write this fact as
The proofs of Proposition 5.1 and Theorem 5.2 follow now in several steps.
Lemma 5.3
For each \(M \in \mathcal{M}\) and \(t \geq 0\), we almost surely have
Proof
For each \(t \geq 0\) and \(M \in \mathcal{M}\), (3.1) implies that
almost surely. Therefore, applying the monotone convergence theorem yields
almost surely for each \(M \in \mathcal{M}\) and \(t \geq 0\). □
Proof of Proposition 5.1
The processes \((\mathbb{I}^{M}_{u-})\) and \((P_{M}[A^{M}_{u-}])\) are jointly measurable with respect to \((u,\omega )\) since they are left-continuous in \(u\); see Lemma 4.3. The mapping \((u,e,\omega ) \mapsto P_{M,R_{I}=(u,e)}[A_{u-}^{M}]\) is jointly measurable with respect to \((u,e,\omega )\) since \(P_{M,R_{I}=(u,e)}[A_{s-}^{M}]\) is left-continuous in \(s\) and jointly measurable with respect to \((u,e,\omega )\in [0,\infty )\times E_{I}\times \Omega \); see Lemma 4.3. Thus for any fixed \(A \in \mathcal{B}([0, \infty ) \times E_{I})\), the mapping \(\omega \mapsto \nu _{I} ( A)(\omega )\) is measurable. Moreover, for almost each \(\omega \in \Omega \), the mapping \(A \mapsto \nu _{I} (A)(\omega )\) is a locally finite measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I} ))\). This can be seen by combining Proposition 4.4 and (5.2) and using the fact that \(P_{M, R_{I}=(u,e)}[A_{u-}^{M}]\) is bounded by 1. Hence \(\nu _{I}\) has a unique extension to a random measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I}))\). Similar conclusions hold for the mappings \(\rho _{I}\). □
Proposition 5.4
Suppose that the mappings \((t,e,\omega ) \mapsto F_{I}(t,e)(\omega )\), \(I \in \mathcal{N}\), are jointly measurable and satisfy (5.1). For each \(t >0\) and \(B \in \mathcal{E}_{I} \), we almost surely have
for any increasing sequence \((\mathcal{T}_{n}^{t})_{n\in \mathbb{N}}\) of partitions of \([0,t]\) with \(\lim _{n \rightarrow \infty } |\mathcal{T}_{n}^{t}|=0\) and for \(G_{I}\) and \(H_{I}\) defined by
Proof
By decomposing \(F\) into a positive part \(F^{+}\) and a negative part \(F^{-}\), it suffices to prove the first equation for the nonnegative mappings \(F^{+}\) and \(F^{-}\) only. Therefore, without loss of generality, we suppose from now on that \(F\) is nonnegative.
Let \(\mathcal{M}_{t}=\mathcal{M}_{t}(\omega )\) be defined as in (4.7). In the following, we use the notation \(J_{k}:=(t_{k},t_{k+1}]\). Since \(\sum _{M \in \mathcal{M}_{t}} \mathbb{I}^{M}_{t_{k}}=1\) for any \(t_{k}\), applying (4.3), the monotone convergence theorem and the law of total probability gives
for almost each \(\omega \in \Omega \). For \(u \in (0,t]\), let \(J^{u}\) be the unique interval \((t_{k},t_{k+1}]\) from \(\mathcal{T}_{n}^{t}\) such that \(t_{k}< u \leq t_{k+1}\), and let \(t(u)\) be the left end point of \(J^{u}\). Then we can write
Taking the limit for \(n\rightarrow \infty \), we obtain for almost each \(\omega \in \Omega \) that
using that \(\mathcal{M}_{t}\) is finite for almost each \(\omega \) and applying the monotone convergence and the dominated convergence theorem. Note that Proposition 4.4, the assumption (5.1) and \(0 \leq \mathbb{I}_{t(u)}^{M} F_{I}\bullet \mu _{I}(J^{u} \times B ) \leq F_{I}\bullet \mu _{I}((0,t] \times B )\) ensure the existence of an integrable majorant. For \(n \rightarrow \infty \), we have \(t(u) \uparrow u\) and \(J^{u} \downarrow \{u\}\); so the dominated convergence theorem implies that
In summary, the right-hand side of (5.3) equals the integral \(G_{I}\bullet \nu _{I}((0,t ]\times B)\), and we can conclude that the first equation in Proposition 5.4 holds. The proof of the second is similar. □
Proposition 5.5
Under the assumptions of Proposition 5.4, for each \(t \geq 0\) and \(B \in \mathcal{E}_{I} \), we almost surely have
for any increasing sequence \((\mathcal{T}_{n}^{t})_{n\in \mathbb{N}}\) of partitions of \([0,t]\) with \(\lim _{n \rightarrow \infty } | \mathcal{T}_{n}^{t}|=0\).
Proof
By decomposing \(G\) into a positive part \(G^{+}\) and a negative part \(G^{-}\), it suffices to prove the first equation for the nonnegative mappings \(G^{+}\) and \(G^{-}\) only. Therefore, without loss of generality, we suppose from now on that \(G\) is nonnegative.
From the definition of \(\nu _{I}\) and the monotone convergence theorem, we get
From Proposition 4.2, we know that \(G_{I}(u,e)\) is \(\mathcal{G}^{-}_{u}\)-measurable for each \((u,e)\). This fact and (4.5) imply that
Applying the Fubini–Tonelli theorem and the monotone convergence theorem gives
The latter expectation is finite according to (5.1). Hence for each \(M \in \mathcal{M}\), we almost surely have
Let \(J_{k}:=(t_{k},t_{k+1}]\). From the dominated convergence theorem, we obtain
since \(\mathbb{I}^{M}\) is bounded by 1 and because of the second line in (5.4). By using the first line in (5.4), the dominated convergence theorem moreover yields
By applying the Fubini–Tonelli theorem, we can show that the last term equals
Using Proposition 4.4 and the dominated convergence theorem, we therefore obtain
where the second equality uses that (4.5) and the \(\mathcal{G}^{-}_{u}\)-measurability of \(G_{I}(u,e)\) allow us to pull \(G_{I}(u,e)\) out of the conditional expectation \(E_{M}[\mathbb{I}^{M}_{u-} G_{I}(u,e)] \). Summing the latter equation over \(M \in \mathcal{M}_{t}\) for \(\mathcal{M}_{t}\) as in (4.7) and applying Proposition 4.2, we obtain
almost surely. Thus we can conclude that the first equation in Proposition 5.5 holds. The proof of the second is similar. □
Proof of Theorem 5.2
Let \(G_{I}\) and \(H_{I}\) be defined as in Proposition 5.4. If \(F_{I}(t,e)\) is \(\mathcal{G}_{t}^{-}\)-measurable for each \((t,e)\), then Proposition 4.2 implies that \(G_{I}(t,e)=F_{I}(t,e)\) almost surely. Similarly, if \(F_{I}(t,e)\) is \(\mathcal{G}_{t}\)-measurable for each \((t,e)\), we have almost surely that \(H_{I}(t,e)=F_{I}(t,e)\). With this fact and by subtracting the limit equations in Propositions 5.4 and 5.5, we obtain that
satisfy the defining limit equations for IF/IB-martingales. IF/IB-predictability of the compensators follows from Proposition 5.5. Note that all involved processes are incrementally adapted to \(\mathbb{G}\) because of (4.4) and (4.5). □
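In the classical monotone case, the partition limits in Propositions 5.4 and 5.5 reduce to the familiar compensator. As a purely illustrative sketch (a Poisson process with hypothetical parameters, not the general marked-point-process setting of this section), the following Python code approximates the sum of conditional expected increments over a partition by Monte Carlo and recovers the compensator value \(\lambda t\):

```python
import math
import random

random.seed(1)
LAM, T, N_PATHS, N_CELLS = 2.0, 1.0, 20000, 50
dt = T / N_CELLS

def poisson_increment(mean):
    # Knuth's method for sampling a Poisson random variate
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# For a Poisson process the increments are independent of the past, so
# E[ N_{t_{k+1}} - N_{t_k} | F_{t_k} ] = LAM * dt, and the partition sum of
# conditional expected increments recovers the compensator value LAM * T.
partition_sum = 0.0
for _ in range(N_CELLS):
    cell_mean = sum(poisson_increment(LAM * dt) for _ in range(N_PATHS)) / N_PATHS
    partition_sum += cell_mean  # Monte Carlo proxy for the conditional expectation

print(partition_sum)  # close to LAM * T = 2.0
```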
6 Infinitesimal martingale representations
Suppose that \(\lambda _{I}\) is the compensator of \(\mu _{I}\) with respect to \(\mathbb{F}\). For each integrable random variable \(\xi \), the classical martingale representation theorem yields that the martingale \(X_{t}=E[ \xi | \mathcal{F}_{t} ]\), \(t \geq 0\), can be represented as
where the mapping \((u,e,\omega ) \mapsto F(u,e)(\omega )\) is jointly measurable and the mapping \(\omega \mapsto F(u,e)(\omega )\) is \(\mathcal{F}_{u-}\)-measurable for each \((u,e)\); see e.g. Karr [16, Theorem 2.34]. We now extend this result to the non-monotone information \(\mathbb{G}\).
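Before turning to the extension, the classical representation (6.1) can be verified pathwise in the simplest toy case of a single exponential jump time \(\tau\) and claim \(\xi = \mathbf{1}_{\{\tau \leq T\}}\), where the integrand is \(F(u) = e^{-\lambda (T-u)}\). The following Python sketch (hypothetical parameters, for illustration only) checks the identity in closed form:

```python
import math

LAM, T = 0.7, 5.0  # hypothetical jump intensity and horizon

def X(t, tau):
    # optional projection X_t = E[ 1_{tau <= T} | F_t ] for tau ~ Exp(LAM)
    return 1.0 if tau <= t else 1.0 - math.exp(-LAM * (T - t))

def repr_rhs(t, tau):
    # X_0 + int_0^t F(u) ( mu(du) - 1_{u <= tau} * LAM du )
    # with integrand F(u) = exp(-LAM * (T - u))
    x0 = 1.0 - math.exp(-LAM * T)
    jump = math.exp(-LAM * (T - tau)) if tau <= t else 0.0
    s = min(t, tau)
    compensator = math.exp(-LAM * (T - s)) - math.exp(-LAM * T)
    return x0 + jump - compensator

for tau in (1.3, 4.2, 7.0):       # jump before, near, and after the horizon
    for t in (0.5, 2.0, 5.0):
        assert abs(X(t, tau) - repr_rhs(t, tau)) < 1e-12
print("pathwise representation verified")
```

The compensated jump integral on the right-hand side reproduces the martingale \(X\) pathwise, which is exactly the content of (6.1) in this elementary case.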
Theorem 6.1
Let \(\xi \) be an integrable random variable. Then for each \(t \geq 0\), (1.2) holds almost surely for
For each \(I \in \mathcal{N}\) and \(e \in E_{I}\), the process \(u \mapsto G_{I}(u-,u,e)\) is \(\mathbb{G}^{-}\)-adapted and the process \(u \mapsto G_{I}(u,u,e)\) is \(\mathbb{G}\)-adapted.
If the mappings \(F_{I}(u,e)=G_{I}(u-,u,e)\) and \(F_{I}(u,e)=G_{I}(u,u,e)\) both satisfy the integrability condition in Theorem 5.2, then the representation (1.2) is a sum of IF-martingales and IB-martingales with respect to \(\mathbb{G}\). In the case of \(\mathbb{F}= \mathbb{G}\), we have \(\nu _{I} = \lambda _{I}\), \(\rho _{I} =\mu _{I}\), and (1.2) equals (6.1); so (1.2) is a generalisation of (6.1).
The proof of Theorem 6.1 is given below. Recall that our notation uses the convention (4.2).
Lemma 6.2
Let \(\xi \) be an integrable random variable. Then for each \(t \geq 0\), we have
Proof
As (6.3) is additive in \(\xi \), it suffices to show the equation for nonnegative and bounded random variables \(\xi \) only. The general case then follows from monotone convergence applied to both parts of the sequence \(\xi _{n} := (\xi \wedge n)^{+} - (-\xi \wedge n)^{+} \), \(n \in \mathbb{N}\). Therefore, in the remaining proof, we suppose that \(0 \leq \xi \leq C \) for a finite real number \(C\).
Let \(U_{t_{k}}(\omega ):= \sup \{ s \in (t_{k} ,\infty ) : T_{j}(\omega ) \not \in (t_{k},s), j \in \mathbb{N}\}\), i.e., \(U_{t_{k}}\) is the time of the first occurrence of a random time strictly after \(t_{k}\). Since \(1=\sum _{I \in \mathcal{N}} \mathbf{1}_{\{U_{t_{k}}=Q_{I}\}} \), we can conclude that
for \(B_{I,k}=(t_{k},t_{k+1}] \times E_{I}\), where we use the fact that
unless \(t_{k}< Q_{I} \leq t_{k+1}\). Because of (5.2) and
we can apply the dominated convergence theorem on the last line in (6.4), which leads to (6.3). Note here that
for \(t_{k+1} \downarrow u\) and \(t_{k} \uparrow u\) implies that
□
Proof of Theorem 6.1
Fix \(M \in \mathcal{M}\) and define \(M+1:= \{i+1 : i \in M\}\). If \(\mathbb{I}^{M}_{u-}=1\), then only random times from the index set
can occur at time \(u\). If \(\mathbb{I}^{M}_{u}=1\), then only random times from the index set
can occur at time \(u\). Therefore (6.3) can be represented as
where
Furthermore, using \(M'\cap M'' = \emptyset \), we can show that
for \(t>0\), which implies that \(E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ]= K_{t}+L_{t-}\). Analogously we obtain
In the specific case \(\xi =1\), we write \(k_{t}\) and \(\ell _{t}\) instead of \(K_{t}\) and \(L_{t}\). By applying integration by parts pathwise for each \(\omega \in \Omega \), we get that
The equation formed by the first and last line can be rewritten as
because of
With the convention \(0/0:=0\) and by using the Radon–Nikodým theorem, we may multiply by \((k_{t}+\ell _{t-})^{-1}\) on both sides, which leads to
Because of (6.3) and \(\mathrm{d}\mathbb{I}_{t}^{M} = \sum _{I \in \mathcal{N}} ( \mathbb{I}_{t}^{M} -\mathbb{I}_{t-}^{M} ) \mu _{I} (\mathrm{d}t \times E_{I} )\), the last equation can be rewritten to
Let \(I \in \mathcal{N}\) be arbitrary but fixed. Then for each \(M \in \mathcal{M}\), there exists an \(\tilde{M} \in \mathcal{M}\), and for each \(\tilde{M} \in \mathcal{M}\), there exists an \(M \in \mathcal{M}\) such that
As a consequence, for almost each \(\omega \in \Omega \), we have
Because of
summing (6.5) over \(M \in \mathcal{M}\) and adding (6.6) yields for almost each \(\omega \in \Omega \) that we have (1.2) and (6.2) after rearranging the addends. By applying Proposition 4.2, we can see that \(G_{I}(u-,u,e)\) is \(\mathcal{G}_{u-}\)-measurable and \(G_{I}(u,u,e)\) is \(\mathcal{G}_{u}\)-measurable for each \((I,e)\). □
7 Infinitesimal representations for optional projections
Suppose that \(X\) is a càdlàg process that satisfies (4.1) and such that \(X_{t}-X_{0}\) is \(\mathcal{F}_{t}\)-measurable for each \(t \geq 0\). Then the optional projection of \(X\) with respect to \(\mathbb{F}\) can be represented as
for random mappings \(F_{I}(t,e)\) that are \(\mathcal{F}_{t-}\)-measurable for each \((t,I,e)\). In order to see this, apply the classical martingale representation theorem to the \(\mathbb{F}\)-martingale
and rearrange the addends. The following theorem extends (7.1) to non-monotone information settings.
Theorem 7.1
Let \(X\) be a càdlàg process that satisfies (4.1) and has an IB-compensator with respect to \(\mathbb{G}\), denoted as \(X^{IB}\). Then
almost surely with
If \(X\) has an IF-compensator with respect to \(\mathbb{G}\), denoted as \(X^{IF}\), then (7.2) still holds but with \(X_{t}^{IB}\) replaced by \(X_{t}^{IF}\) and \(X_{u-}\) replaced by \(X_{u}\) in (7.3).
By applying Proposition 4.2, we can see that \(G_{I}(u-,u,e)\) is \(\mathcal{G}^{-}_{u}\)-measurable and \(G_{I}(u,u,e)\) is \(\mathcal{G}_{u}\)-measurable. Hence the integrals in the first and second line of (7.2) describe IF-martingales and IB-martingales with respect to \(\mathbb{G}\) if the mappings \(F_{I}(u,e)=G_{I}(u-,u,e)\) and \(F_{I}(u,e)=G_{I}(u,u,e)\) both satisfy the integrability condition (5.1); see the comments below Theorem 5.2.
In the special case \(\mathbb{G}= \mathbb{F}\), we have \(\nu _{I} = \lambda _{I}\), \(\rho _{I} =\mu _{I}\), \(X=X^{IB}\) and the representations (7.2) and (7.1) are equivalent, i.e., (7.2) is a generalisation of (7.1).
Even if \(\mathbb{G} \neq \mathbb{F}\), we can still have \(X=X^{IB}\) or \(X=X^{IF}\). The following example presents non-trivial processes \(X\) that equal their IB-compensators or their IF-compensators.
Example 7.2
Let \(h: \mathcal{M} \times [0,\infty ) \times \Omega \rightarrow \mathbb{R}\), \((M,t,\omega ) \mapsto h(M,t)(\omega )\), be measurable and suppose that \(|h(M,t)| \leq Z\) for an integrable majorant \(Z\). Let \(\gamma \) be the sum of the Lebesgue measure and a countable number of Dirac measures,
for deterministic time points \(0\leq t_{1} < t_{2} < \cdots \) that increase to infinity. Then the càdlàg process \(X\) defined by
has the IB-compensator
In order to see this, apply Proposition 4.2, the dominated convergence theorem, Proposition 4.4 and Lemma 4.3 to obtain that
almost surely, where \(\mathcal{M}_{t}\) is defined as in (4.7). If \(s\mapsto h(M,s)\) is \(\mathbb{G}\)-adapted for each \(M\), we have \(X=X^{IB}\). Likewise we can show that the càdlàg process
has the IF-compensator
If \(s\mapsto h(M,s)\) is \(\mathbb{G}^{-}\)-adapted for each \(M\), we have \(Y=Y^{IF}\).
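A minimal numerical sketch of the mixed measure \(\gamma\) from this example (with a hypothetical integrand and hypothetical atom locations) evaluates an integral with respect to \(\gamma\) as the Lebesgue part plus the Dirac part:

```python
def gamma_integral(h, t, atoms, n=100000):
    # Lebesgue part via a left-endpoint Riemann sum on (0, t]
    dt = t / n
    lebesgue = sum(h(k * dt) for k in range(n)) * dt
    # Dirac part: every atom t_i <= t contributes h(t_i)
    dirac = sum(h(ti) for ti in atoms if ti <= t)
    return lebesgue + dirac

atoms = [1.0, 2.0, 3.0]  # hypothetical deterministic time points t_1 < t_2 < ...
val = gamma_integral(lambda s: s, 2.5, atoms)
print(val)  # approx 2.5**2 / 2 + 1.0 + 2.0 = 6.125
```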
Proof of Theorem 7.1
The theorem follows from the additive decomposition
and from applying Theorem 6.1 for each summand \(E[ X_{t_{k}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k}} | \mathcal{G}_{t_{k}}]\). The sum \(\sum _{\mathcal{T}_{n}^{t}} (E[ X_{t_{k}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k}} | \mathcal{G}_{t_{k}}])\) has a representation of the form (1.2) in case of \(t_{k} < s \leq t_{k+1}\) for \(G_{I}(s, u,e)\) defined by (7.3). Because of the càdlàg property of \(X\), by applying the dominated convergence theorem pathwise for almost each \(\omega \in \Omega \), we end up with (7.2) and (7.3). The alternative decomposition
leads to the second variant with \(X^{IB}\) replaced by \(X^{IF}\) and \(X_{u-}\) by \(X_{u}\) in (7.3). □
Remark 7.3
Without loss of generality, suppose here that \(0 \not \in E\). Motivated by Remark 3.2, for any \(t > 0\) and any integrable random variable \(\xi \), define for \(e \in E_{I}\),
where \(J_{t}:= \sum _{I \in \mathcal{N}} \mu _{I}(\{t\}\times E_{I})\) indicates whether there is a stopping event at time \(t\). One can then show that the integrands in (7.2) are almost surely equal to
for each \(t>0\), \(I \in \mathcal{N}\) and \(e \in E_{I}\). The differences on the right-hand side have intuitive interpretations. The first line describes the difference in expectation between a change scenario and a remain scenario if we are currently at time \(t-\) and are looking forward in time. Similarly, the second line describes the difference in expectation between a change scenario and a remain scenario if we are currently at time \(t\) and are looking backward in time. In (7.2), these differences in expectation are integrated with respect to the compensated forward and backward scenario dynamics.
8 Examples
Here we come back to Examples 1.1 and 1.2 and show how our infinitesimal martingale representations can be applied in life insurance and credit risk modelling.
Example 8.1
Consider a life insurance contract where the insurer collects health-related information about the insured with the aim of improving forecasts of the individual future insurance liabilities. For example, this can involve data from activity trackers or social media. Here, the marked point process includes the time of death \(\tau _{1}\), which is recorded as \(\zeta _{1}:=\tau _{1}\), and further health-related information \((\tau _{i}, \zeta _{i})_{i \geq 2}\). Upon request of the policyholder with reference to the ‘right to erasure’ according to the General Data Protection Regulation of the European Union, or as a self-imposed data privacy effort of the data provider, the insurer deletes parts of the health-related data at certain time points, i.e., we expand \((\tau _{i}, \zeta _{i})_{i \geq 2}\) by deletion times \((\sigma _{i})_{i \geq 2}\). For completeness, we define \(\sigma _{1}:= \infty \).
In the classical insurance modelling without data deletion, the time dynamics of the expected future insurance payments is commonly described by Thiele’s equation; see e.g. Møller [19] and Djehiche and Löfdahl [12]. Suppose that \(A_{t}\) gives the aggregated benefit cash flow of the life insurance contract on \([0,t]\), including survival benefits with rate \(a(t)\) and a death benefit of \(\alpha (t)\) upon death at time \(t\), i.e.,
We assume here that \(a:[0,\infty ) \rightarrow \mathbb{R}\) and \(\alpha :[0,\infty ) \rightarrow \mathbb{R}\) are bounded. For a given interest intensity \(\phi :[0,\infty ) \rightarrow [0,\infty )\) and a finite contract horizon \(T\), the process
describes the discounted future liabilities of the insurer seen from time \(t\). As the càdlàg process \(X=(X_{t})_{t \geq 0}\) is neither adapted to \(\mathbb{F}\) nor to \(\mathbb{G}\), an insurer has to work with the optional projection instead (the so-called prospective reserve), i.e., the insurer aims to calculate
in case that there is no data deletion and
in case that information deletions may occur. The process \(X^{\mathbb{G}}\) is a well-defined càdlàg process according to Theorem 4.1.
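For orientation, the prospective reserve can be computed explicitly in a toy special case: constant mortality intensity, constant interest intensity and constant benefits, with full information and no data deletion. The following Python sketch (all parameters hypothetical) compares a Monte Carlo evaluation of the discounted future liabilities at time 0 with the closed-form value \((a+\alpha \mu )\,(1-e^{-(\phi +\mu )T})/(\phi +\mu )\):

```python
import math
import random

random.seed(7)
MU, PHI, T = 0.05, 0.03, 10.0  # hypothetical mortality intensity, interest intensity, horizon
A_RATE, ALPHA = 1.0, 20.0      # constant survival benefit rate and death benefit

def discounted_liabilities(tau):
    # pathwise discounted cash flow: annuity payments until min(tau, T)
    # plus a discounted death benefit if death occurs before the horizon
    s = min(tau, T)
    annuity = (1.0 - math.exp(-PHI * s)) / PHI
    death = ALPHA * math.exp(-PHI * tau) if tau <= T else 0.0
    return A_RATE * annuity + death

paths = 100000
mc = sum(discounted_liabilities(random.expovariate(MU)) for _ in range(paths)) / paths

# closed-form prospective reserve at time 0 under constant intensities
closed = (A_RATE + ALPHA * MU) * (1.0 - math.exp(-(PHI + MU) * T)) / (PHI + MU)
print(mc, closed)  # both near 13.77
```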
By applying (7.1) and Itô’s lemma, we can derive the so-called stochastic Thiele equation
with terminal condition \(X^{\mathbb{F}}_{T}=0\); cf. Møller [19, Eq. (2.17)]. The integrand \(F_{I}(t,e)\), which almost surely equals
according to Remark 7.3, is a key quantity in life insurance risk management and is known as sum at risk. Equation (8.1) can be interpreted as a backward stochastic differential equation (BSDE) with solution \((X^{\mathbb{F}},(F_{I})_{I})\); see Djehiche and Löfdahl [12] for Markovian and Christiansen and Djehiche [4] for non-Markovian models. The BSDE (8.1) is in particular relevant if the life insurance payments \(a\) and \(\alpha \) depend on the current policy value so that the insurance cash flow \(A\) is only implicitly defined.
By applying Theorem 7.1 and Itô’s lemma and using the fact that the process \(A\) equals its own IB-compensator (since \(\sigma _{1} = \infty \)), we are able to derive an analogous equation for \(X^{\mathbb{G}}\), namely
with terminal condition \(X^{\mathbb{G}}_{T}=0\). This equation can be interpreted as a new type of BSDE with solution \((X^{\mathbb{G}},(G_{I})_{I})\), featuring an IF-martingale and an IB-martingale instead of a classical martingale. The IF-martingale in the first line describes the impact of new information on the optional projection \(X^{\mathbb{G}}\). The IB-martingale in the second line quantifies the effect on \(X^{\mathbb{G}}\) of information deletions. The integrands \(G_{I}(t-,t,e)\) and \(G_{I}(t,t,e)\), which are almost surely equal to
according to Remark 7.3, generalise the classical definition of the sum at risk. They are needed in life insurance risk management for sensitivity analyses, safe-side calculations, contract modifications and surplus decompositions.
If the policyholder may decide about data deletions at their discretion, then the resulting value changes of the insurance contract can be systematically exploited by the policyholder, leading to a kind of data privacy arbitrage. Since it is the IB-martingale in (8.2) that measures the value changes due to data deletions at times \((\sigma _{i})_{i \geq 2}\), it represents the potential data privacy arbitrage. A simple solution for avoiding data privacy arbitrage could be to charge the IB-martingale as a fee upon a data deletion request. The fee can also be negative, in which case it represents a bonus payment. However, more complex risk sharing schemes will be needed in insurance practice that moreover distinguish between different causes for data deletions. By following the concept of Schilling et al. [26] to interpret martingale representations as risk factor decompositions, we may interpret the infinitesimal martingale parts in (8.2) as an additive surplus decomposition that can distinguish between numerous kinds of jump events \(\mu _{I}\), \(I\subseteq \mathbb{N}\), \(\vert I \vert <\infty \). Such an additive decomposition of the insurer’s surplus is an important step for aligning insurance risk management to the digital age.
Example 8.2
A popular approximation concept in credit rating modelling is to pretend that the credit rating process is Markovian even if the empirical data does not fully support this assumption. Suppose that credit ratings are updated at integer times only. By setting \(\tau _{i}:=i-1\) and \(\sigma _{i}:=i\) for \(i \in \mathbb{N}\) and defining \(\zeta _{i}\) as the credit rating at time \(\tau _{i}\), the rating process \(R=(R_{t})_{t \geq 0}\) has the representation
and satisfies
The jumps of the process \(R\) correspond to the random counting measures \(\mu _{I}\). In the Jarrow–Lando–Turnbull model, the rating space \(E\) is finite, \((R_{i})_{i \in \mathbb{N}_{0}}\) is assumed to be a Markov chain, and
for \(r_{i},r_{i+1} \in E\) and \(i \in \mathbb{N}_{0}\), where \(Q\) is the risk-neutral measure and \(\pi \) is a deterministic function on \(\mathbb{N}_{0} \times E\). The latter formula allows us to estimate \(Q\) from market data by a two-step method. First, the transition probabilities \(P[R_{i+1}= r_{i+1}| R_{i}=r_{i} ]\) are estimated from observed credit rating time series. Then the function \(\pi \) is calibrated such that the risk-neutral values of credit rating derivatives conform with observed market prices. Once we have \(Q\), we can use the (classical) martingale representation (6.1) in order to explicitly construct hedges for financial claims \(\xi \); see e.g. Last and Penrose [18]. For example, by arguing analogously to (8.1), the claim \(\xi =h(R_{T})\) has the martingale representation
The integral in the first line describes the investments in the risk-free asset \(B\). The second line corresponds to risky investments. It can be rewritten in terms of the tradable assets in a complete financial market; cf. Last and Penrose [18, Sect. 5], which yields a trading strategy that can be used to replicate the claim \(\xi \).
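The risk-neutral adjustment in the two-step calibration can be sketched as follows. The transition matrix, the risk premiums \(\pi\) and the helper `jlt_adjust` are hypothetical illustrations of a JLT-type scaling of the off-diagonal transition probabilities, not code from the references:

```python
# P: estimated real-world one-period transition matrix of the rating chain
# (states: two ratings plus an absorbing default state); pi: risk premiums
# per current rating. All numbers are hypothetical illustrations.
P = [[0.90, 0.08, 0.02],
     [0.05, 0.85, 0.10],
     [0.00, 0.00, 1.00]]
pi = [1.5, 1.2, 1.0]

def jlt_adjust(P, pi):
    # scale the off-diagonal transition probabilities by pi[i] and restore
    # the row sum on the diagonal, as in a JLT-type adjustment
    n = len(P)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        off = sum(P[i][j] for j in range(n) if j != i)
        for j in range(n):
            Q[i][j] = pi[i] * P[i][j] if j != i else 1.0 - pi[i] * off
    return Q

Q = jlt_adjust(P, pi)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in Q)
print(Q[0])  # risk-neutral row for the top rating, approximately [0.85, 0.12, 0.03]
```

In practice, the premiums \(\pi\) would be chosen so that model prices of rating-sensitive derivatives match observed market prices; here they are fixed numbers for illustration only.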
A standard estimator for the state occupation probabilities of the Markov chain \(R\) with respect to \(P\) is the Aalen–Johansen estimator, which directly corresponds to the Nelson–Aalen estimator for the compensators \(\lambda _{I}\) of the random counting measures \(\mu _{I}\). Under the assumption that \(R\) is Markovian, the Nelson–Aalen estimator consistently estimates \(\lambda _{I}=\nu _{I}\). If \(R\) is not Markovian, then the Nelson–Aalen estimator still consistently estimates \(\nu _{I}\), see Datta and Satten [9], but now \(\nu _{I} \neq \lambda _{I}\). In other words, if we ignore the information beyond \(\mathbb{G}\) in the estimation of \(\lambda _{I}\) due to an incorrect Markov assumption, then we actually estimate the infinitesimal forward compensator \(\nu _{I}\) instead of the classical compensator \(\lambda _{I}\). Similarly, ignoring the information beyond \(\mathbb{G}\) upon estimating \(F_{I}\) and (1.1) from market data means that we unintentionally end up with the integrands
and \(B(t-) E_{Q}[ h(R_{T})/B(T) |\mathcal{G}_{t}] \) rather than the integrands
and \(B(t-) E_{Q}[ h(R_{T})/B(T) |\mathcal{F}_{t}] \). For the correct interpretation of the latter conditional expectations with mixed conditions, see Remark 7.3. All in all, by ignoring the information beyond \(\mathbb{G}\) in the estimation and calculation of (8.3) due to an incorrect Markov assumption, we unintentionally end up with
instead of the right-hand side of (8.3). This unintentional modification distorts the replicating trading strategy for the claim \(h(R_{T})\) which was included in (8.3). Do we still correctly replicate \(h(R_{T})\)? By applying Theorem 6.1 instead of (6.1) and using that \(\mathcal{F}_{0}=\mathcal{G}_{0}\) and \(\mathcal{G}_{T}= \sigma (R_{T}) \vee \mathcal{Z}\), we get analogously to (8.3) that
Equation (8.5) implies that the distorted trading strategy we might derive from (8.4) is actually not a hedge for \(\xi =h(R_{T})\). The hedging error is given by the third line in (8.5). To sum up, by estimating and calculating (8.3) under an incorrect Markov assumption for \(R\), we unintentionally replace the (classical) \(\mathbb{F}\)-martingale in (8.3) by the \(\mathbb{G}\)-IF-martingale in (8.4) (the risk-free investment is also affected), and the corresponding \(\mathbb{G}\)-IB-martingale is just the hedging error.
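The effect described above, namely that an estimator ignoring the information beyond \(\mathbb{G}\) targets the averaged intensity \(\nu _{I}\) rather than \(\lambda _{I}\), can be reproduced in a toy simulation. The following Python sketch (a hypothetical second-order chain, not the rating data of this example) shows that the naive Markov estimate of a transition hazard equals the mixture of the true history-dependent hazards:

```python
import random

random.seed(3)

# A second-order (hence non-Markov) 0/1 chain: the probability of switching
# state depends on whether the previous two states agree -- exactly the kind
# of history a Markov fit ignores. Numbers are hypothetical.
P_SWITCH = {True: 0.1, False: 0.3}

path = [0, 0]
for _ in range(200000):
    prev, curr = path[-2], path[-1]
    p = P_SWITCH[prev == curr]
    path.append(1 - curr if random.random() < p else curr)

# naive "Markov" estimate of the hazard of leaving state 0
num = den = same = 0
for k in range(2, len(path) - 1):
    if path[k] == 0:
        den += 1
        num += path[k + 1] != 0
        same += path[k - 1] == 0
naive = num / den

# The naive estimator recovers the mixture of the true history-dependent
# hazards, i.e., the averaged forward intensity, not the full-information one:
mixture = (same / den) * 0.1 + (1 - same / den) * 0.3
print(naive, mixture)  # close to each other, strictly between 0.1 and 0.3
```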
Schilling et al. [26] interpret martingale representations as additive risk factor decompositions. Likewise we can read the (infinitesimal) martingale parts in (8.3) and (8.5) as linear risk factor decompositions. The relevance of such decompositions in credit risk modelling is explained in Rosen and Saunders [25].
References
Bandini, E.: Existence and uniqueness for backward stochastic differential equations driven by a random measure, possibly non quasi-left continuous. Electron. Commun. Probab. 20, 1–13 (2015)
Boel, R., Varaiya, P., Wong, E.: Martingales on jump processes I: representation results. SIAM J. Control 13, 999–1021 (1975)
Chou, C.-S., Meyer, P.A.: Sur la répresentation des martingales comme intégrales stochastiques dans les processus ponctuels. In: Meyer, P.A. (ed.) Séminaire de Probabilités IX. Lecture Notes in Mathematics, vol. 465, pp. 226–236. Springer, Berlin (1975)
Christiansen, M.C., Djehiche, B.: Nonlinear reserving and multiple contract modifications in life insurance. Insur. Math. Econ. 93, 187–195 (2020)
Cohen, S.N.: A martingale representation theorem for a class of jump processes. Preprint (2013). arXiv:1310.6286
Cohen, S.N., Elliott, R.J.: Solutions of backward stochastic differential equations on Markov chains. Commun. Stoch. Anal. 2, 251–262 (2008)
Cohen, S.N., Elliott, R.J.: Comparisons for backward stochastic differential equations on Markov chains and related no-arbitrage conditions. Ann. Appl. Probab. 20, 267–311 (2010)
Confortola, F.: \(L^{p} \) solution of backward stochastic differential equations driven by a marked point process. Math. Control Signals Syst. 31, 1–32 (2019)
Datta, S., Satten, G.A.: Validity of the Aalen–Johansen estimators of stage occupation probabilities and Nelson–Aalen estimators of integrated transition hazards for non-Markov models. Stat. Probab. Lett. 55, 403–411 (2001)
Davis, M.H.A.: The representation of martingales of jump processes. SIAM J. Control Optim. 14, 623–638 (1976)
Delong, Ł.: Backward Stochastic Differential Equations with Jumps and Their Actuarial and Financial Applications. Springer, London (2013)
Djehiche, B., Löfdahl, B.: Nonlinear reserving in life insurance: aggregation and mean-field approximation. Insur. Math. Econ. 69, 1–13 (2016)
Elliott, R.J.: Stochastic integrals for martingales of a jump process with partially accessible jump times. Z. Wahrscheinlichkeitstheor. Verw. Geb. 36, 213–266 (1976)
Jacod, J.: Multivariate point processes: predictable projections, Radon–Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitstheor. Verw. Geb. 31, 235–253 (1975)
Jarrow, R.A., Lando, D., Turnbull, S.M.: A Markov model for the term structure of credit risk spreads. Rev. Financ. Stud. 10, 481–523 (1997)
Karr, A.: Point Processes and Their Statistical Inference. Dekker, New York (1986)
Lando, D., Skodeberg, T.: Analyzing rating transitions and rating drift with continuous observations. J. Bank. Finance 26, 423–444 (2002)
Last, G., Penrose, M.D.: Martingale representation for Poisson processes with applications to minimal variance hedging. Stoch. Process. Appl. 121, 1588–1606 (2011)
Møller, C.M.: A stochastic version of Thiele’s differential equation. Scand. Actuar. J. 1, 1–16 (1993)
Møller, T., Steffensen, M.: Market-Valuation Methods in Life and Pension Insurance. Cambridge University Press, Cambridge (2007)
Norberg, R.: Hattendorff’s theorem and Thiele’s differential equation generalized. Scand. Actuar. J. 1992, 2–14 (1992)
Norberg, R.: A theory of bonus in life insurance. Finance Stoch. 3, 373–390 (1999)
Norberg, R.: The Markov chain market. ASTIN Bull. 33, 265–287 (2003)
Pardoux, É., Peng, S.: Backward doubly stochastic differential equations and systems of quasilinear SPDEs. Probab. Theory Relat. Fields 98, 209–227 (1994)
Rosen, D., Saunders, D.: Risk factor contributions in portfolio credit risk models. J. Bank. Finance 34, 336–349 (2010)
Schilling, K., Bauer, D., Christiansen, M.C., Kling, A.: Decomposing dynamic risks into risk components. Manag. Sci. 66, 5485–6064 (2020)
Tang, H., Wu, Z.: Backward stochastic differential equations with Markov chains and related asymptotic properties. Adv. Differ. Equ. 285, 1–17 (2013)
Funding
Open Access funding enabled and organized by Projekt DEAL.
Cite this article
Christiansen, M.C. Time-dynamic evaluations under non-monotone information generated by marked point processes. Finance Stoch 25, 563–596 (2021). https://doi.org/10.1007/s00780-021-00456-5
Keywords
- Credit risk modelling
- Life insurance modelling
- Information restrictions
- Optional projections
- Infinitesimal martingale representations