A martingale concept for non-monotone information in a jump process framework

The information dynamics in finance and insurance applications is usually modeled by a filtration. This paper looks at situations where information restrictions apply, so that the information dynamics may become non-monotone. Martingale representations are a fundamental tool for calculating and managing risks in finance and insurance. We present a general theory that extends classical martingale representations to non-monotone information generated by marked point processes. The central idea is to focus only on those properties that martingales and compensators exhibit on infinitesimally short intervals. While classical martingale representations describe innovations only, our representations have an additional symmetric counterpart that quantifies the effect of information loss. We illustrate the results with examples from life insurance and credit risk.


Introduction
The value at time t ∈ [0, T] of a financial claim ξ ∈ L¹(Ω, A, P) at time T ∈ (0, ∞) is commonly calculated by

V(t) = B(t) E_Q[ξ/B(T) | F_t],  t ∈ [0, T],   (1.1)

where B is the value process of a risk-free asset, (F_t)_{t≥0} is a filtration that describes the available information at each time t ≥ 0, and Q is some equivalent measure. For studying the time dynamics of the value process, we can exploit the fact that t ↦ E_Q[ξ/B(T)|F_t] is always a martingale.
In this paper we suppose that information restrictions apply and replace the filtration (F t ) t≥0 by a family of sub-sigma-algebras (G t ) t≥0 that may be non-monotone, i.e. we do not assume that (G t ) t≥0 is a filtration. We focus on modelling frameworks where (G t ) t≥0 is generated by a marked point process, because this allows us to calculate martingale representations explicitly. Our approach seems to work also in more general settings, but a general theory is left to future research.
Information restrictions can be motivated by legal restrictions, data privacy efforts, information summarization or model simplifications. An example of a legal information restriction is the General Data Protection Regulation 2016/679 of the European Union, which includes in Article 17 a so-called 'right to erasure', causing possible information loss.
Example 1.1 (Evaluation of life insurance based on big data). Data from activity trackers, social media, etc. can improve individual forecasts of the mortality and morbidity of insured persons. By exercising the 'right to erasure' according to the General Data Protection Regulation of the European Union, the policyholder may ask the insurer to delete parts of the health-related data at their discretion. Moreover, data providers might implement self-imposed information restrictions for data privacy reasons. For example, users of Google products can opt for an auto-delete of location history and activity data after a fixed time limit. As a result, the evaluation of an insurance liability ζ according to formula (1.1) will be restricted to sub-sigma-algebras (G_t)_{t≥0} that are non-monotone in t due to data deletions.
Examples of information summarization can be found in Norberg (1991), where summarized life insurance values (retrospective and prospective reserves) are defined that encompass non-monotone information. A popular model simplification is Markovian modeling even when the empirical data does not fully support the Markov assumption.
Example 1.2 (Markovian approximations in credit rating models). In the Jarrow-Lando-Turnbull model, the filtration (F_t)_{t≥0} is generated by a finite state space Markov chain (R_t)_{t≥0} that represents credit ratings, cf. Jarrow et al. (1997). The Markov property makes it possible to equivalently replace F_t in (1.1) by the sub-sigma-algebra G_t := σ(R_t). The Markov assumption can be motivated by the theoretical idea that a credit rating should fully describe the current risk profile of a prospective debtor such that historical ratings can be ignored. However, empirical data does not always support the Markov property, so that E_Q[ξ/B(T)|G_t] may in fact differ from E_Q[ξ/B(T)|F_t], cf. Lando and Skodeberg (2002). The information dynamics of G_t = σ(R_t) is non-monotone in t.
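The effect described in Example 1.2 can be made concrete with a toy computation. The following sketch (all transition probabilities are hypothetical choices for illustration, not a Jarrow-Lando-Turnbull calibration) builds a two-period rating chain with downgrade momentum and compares conditioning on the full history F_1 = σ(R_0, R_1) with conditioning on the current rating G_1 = σ(R_1) only:

```python
# Two-period toy rating model (all transition probabilities are hypothetical
# choices for illustration). Ratings A, B and an absorbing default state D;
# the chain has "downgrade momentum": a freshly downgraded B defaults more
# often, so (R_t) is not Markov and G_1 = sigma(R_1) is strictly coarser
# than F_1 = sigma(R_0, R_1).

P0 = {"A": 0.5, "B": 0.5}                  # initial rating distribution
K1 = {"A": {"A": 0.8, "B": 0.2},           # transition kernel for step 0 -> 1
      "B": {"B": 0.7, "D": 0.3}}

def K2(r0, r1):
    """Kernel for step 1 -> 2; it depends on the history (r0, r1)."""
    if r1 == "D":
        return {"D": 1.0}
    if r1 == "A":
        return {"A": 0.8, "B": 0.2}
    p_def = 0.4 if r0 == "A" else 0.1      # momentum after a recent downgrade
    return {"B": 1.0 - p_def, "D": p_def}

# enumerate all paths (r0, r1, r2) together with their probabilities
paths = {}
for r0, p0 in P0.items():
    for r1, p1 in K1[r0].items():
        for r2, p2 in K2(r0, r1).items():
            paths[(r0, r1, r2)] = p0 * p1 * p2

def xi(path):
    """Claim of interest: indicator of default by time 2."""
    return 1.0 if path[2] == "D" else 0.0

def cond_exp(cond):
    """E[xi | event] by elementary conditioning on the finite path space."""
    mass = sum(p for w, p in paths.items() if cond(w))
    return sum(xi(w) * p for w, p in paths.items() if cond(w)) / mass

e_F_AB = cond_exp(lambda w: w[0] == "A" and w[1] == "B")  # F_1-value: 0.4
e_F_BB = cond_exp(lambda w: w[0] == "B" and w[1] == "B")  # F_1-value: 0.1
e_G_B = cond_exp(lambda w: w[1] == "B")                   # G_1-value: 1/6
print(e_F_AB, e_F_BB, e_G_B)
```

Conditioning on G_1 alone mixes the two histories, so replacing F_1 by σ(R_1) changes the value whenever the chain is not Markov; this is exactly the discrepancy described above.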
Non-monotone information structures can also be found in Pardoux & Peng (1994) and Tang & Wu (2013), but in these papers specific independence assumptions make it possible to go back to filtrations and to work with classical martingale representations.
From now on we skip the subscript Q in (1.1) and all related expectations. Depending on the application, we interpret P either as the real world measure or as a risk-neutral measure.
When we replace the filtration (F_t)_{t≥0} in (1.1) by non-monotone information (G_t)_{t≥0}, then all the powerful tools from martingale theory for studying the time dynamics of (1.1) are not available anymore. In order to fill that gap, this paper derives general representations of the form

E[ξ|G_t] = E[ξ|G_0] + Σ_{I∈N} ∫_{(0,t]×E_I} G_I(u−, u, e) (µ_I − ν_I)(du, de)
          − Σ_{I∈N} ∫_{(0,t]×E_I} G_I(u, u, e) (µ_I − ρ_I)(du, de),   (1.2)

where ξ is any integrable random variable, (G_t)_{t≥0} is a non-monotone family of sigma-algebras generated by an extended marked point process that involves information deletions, (µ_I)_{I∈N} is a set of counting measures that uniquely corresponds to the extended marked point process, (ν_I)_{I∈N} and (ρ_I)_{I∈N} are infinitesimal forward and backward compensators of (µ_I)_{I∈N}, and the integrands G_I(u−, u, e) and G_I(u, u, e) are adapted to the information at time u− and time u, respectively. In case that (G_t)_{t≥0} is increasing, i.e. it is a filtration, the second line in (1.2) is zero and the first line conforms with classical martingale representations.

The central idea in this paper is to focus only on those properties that martingales and compensators show on infinitesimally small intervals. We call this the 'infinitesimal approach'. In principle, the infinitesimal approach is not restricted to point process frameworks, but a fully general theory is beyond the scope of this paper. We will further extend our representation results to processes of the form

t ↦ E[X_t|G_t],   (1.3)

where (X_t)_{t≥0} is a suitably integrable càdlàg process. In this case an additional drift term appears on the right hand side of (1.2).

Martingale representations have various applications in finance and insurance, and this is in particular true for marked point process frameworks:

• If a financial or insurance claim is hedgeable, then explicit hedges can be derived from martingale representations, see e.g. Norberg (2003) and Last & Penrose (2011).
• Martingale representations can serve as additive risk factor decompositions, see Schilling et al. (2020). An insurer needs to additively decompose the surplus from a policy or an insurance portfolio for regulatory reasons, see e.g. Møller & Steffensen (2007). Additive risk factor decompositions are also used in finance, see e.g. Rosen & Saunders (2010).
In all of these applications, infinitesimal martingale representations according to (1.2) allow us to include information restrictions into the modelling. We will study a hedging application for the model in Example 1.2. We will see that estimation and calculation of hedging strategies under inappropriate Markov assumptions may unintentionally replace classical martingales by infinitesimal forward martingales (the first line on the right hand side of (1.2)), and then the implied hedging error is just the corresponding infinitesimal backward martingale part (the second line in (1.2)). The application of infinitesimal martingale representations in BSDE theory is exemplarily discussed for Example 1.1.
We will see that the integrands in (1.2) correspond to the so-called sum at risk, which is a central figure in life insurance risk management. In Example 1.1 we also briefly discuss risk factor decompositions. Information deletions upon request for data privacy reasons can provoke arbitrage opportunities, and these can be split off as infinitesimal backward martingales, which is important for dealing with them. Representation (1.2) implies that t → E[ξ|G t ] has a (unique) semimartingale modification. More generally, we will show that t → E[X t |G t ] has a (unique) semimartingale modification whenever X is a semimartingale with integrable variation on compacts. The uniqueness and the semimartingale property are crucial in applications where the time dynamics shall be studied. For example, in life insurance the differential dE[X t |G t ] might describe the insurer's current surplus or loss at time t, cf. Norberg (1992) and Norberg (1999).
The study of jump process martingales and their representations largely dates back to the 1970s, see e.g. Jacod (1975), Boel et al. (1975), Chou & Meyer (1975), Davis (1976) and Elliott (1976). Since then extensions have been developed in different directions, see e.g. Last & Penrose (2011) and Cohen (2013). All of these papers stay within the framework of filtrations, i.e. the information dynamics is monotone. The infinitesimal approach that we introduce here allows us to go beyond the framework of filtrations. An elegant way to derive the classical martingale representation is a bare hands approach that starts with the Chou and Meyer construction of the martingale representation for a single jump process, followed by Elliott's extension to the case of ordered jumps. In this paper we also use a bare hands approach, but the classical stopping time concept is not applicable in our non-monotone information setting, so we need to depart from the usual arguments.
The paper is organized as follows. In Section 2 we explain the basic concepts of the infinitesimal approach but avoid technicalities. In Section 3 we add technical assumptions and narrow the modelling framework down to pure jump process drivers. Section 4 verifies that (1.2) is indeed a well-defined process. In Section 5 we identify infinitesimal compensators for a large class of jump processes. The central result (1.2) is proven in Section 6 and extended to processes of the form (1.3) in Section 7. In Section 8 we take a closer look at the Examples 1.1 and 1.2.

The infinitesimal approach
The central idea of the infinitesimal approach is to focus only on those properties that martingales and compensators show on infinitesimally short intervals. This section explains the basic ideas under the general assumption that all limits in this section actually exist. Only from the next section on do we narrow the framework down to pure jump process drivers, which is sufficient but not necessary to guarantee the existence of the limits. So, in general the infinitesimal approach is not restricted to jump process frameworks, but it is beyond the scope of this paper to find necessary conditions for the existence of the limits.
Let (Ω, A, P ) be a complete probability space and let Z ⊂ A be its null sets. Let F = (F t ) t≥0 be a complete and right-continuous filtration on this probability space. We interpret F t as the observable information on the time interval [0, t]. Suppose that certain pieces of information expire after a finite holding time. By subtracting from F t all pieces of information that have expired until time t, we obtain the admissible information at time t. We assume that this admissible information is represented by a family of complete sigma-algebras G = (G t ) t≥0 , which may be non-monotone in t.
A process X is said to be adapted to the filtration F if X t is F t -measurable for each t ≥ 0. Likewise we say that a process X is adapted to the possibly non-monotone information G if X t is G t -measurable for each t ≥ 0. In addition to this classical concept, we also take an incremental perspective.
Definition 2.1 (incrementally adapted). We say that X is incrementally adapted to G if X t − X s is σ(G u , u ∈ [s, t])-measurable for any interval [s, t] ⊂ [0, ∞).
In finance and insurance applications we think of X as an aggregated cash flow where the aggregated payments X t − X s on the interval [s, t] should depend only on the admissible information on [s, t]. If G is a filtration, then incremental adaptedness is equivalent to classical adaptedness, but the two concepts differ for non-monotone information.
An integrable process X is said to be a martingale with respect to F if it is F-adapted and E[X_t − X_s|F_s] = 0 almost surely for each 0 ≤ s ≤ t. Focussing on infinitesimally short intervals, in particular we have

lim_{n→∞} Σ_{k=1}^{n} E[X_{t_k} − X_{t_{k−1}} | F_{t_{k−1}}] = 0   (2.1)

almost surely for each t ≥ 0, where (T_n)_{n∈N} is any increasing sequence (i.e. T_n ⊂ T_{n+1} for all n) of partitions 0 = t_0 < · · · < t_n = t of the interval [0, t] such that |T_n| := max{t_k − t_{k−1} : k = 1, . . . , n} → 0 for n → ∞. In the literature we can find for (2.1) the intuitive notation E[dX_t|F_{t−}] = 0.
Definition 2.2 (infinitesimal martingales). Let X be incrementally adapted to G. We say that X is an infinitesimal forward/backward martingale (IF/IB-martingale) with respect to G if for each t ≥ 0 and any increasing sequence of partitions (T_n)_{n∈N} of [0, t] with lim_{n→∞} |T_n| = 0 we almost surely have

lim_{n→∞} Σ_{k=1}^{n} E[X_{t_k} − X_{t_{k−1}} | G_{t_{k−1}}] = 0,
lim_{n→∞} Σ_{k=1}^{n} E[X_{t_k} − X_{t_{k−1}} | G_{t_k}] = 0,

respectively, given that the expectations and limits exist.
Suppose now that X is an F-adapted and integrable counting process. The so-called compensator C of X is the unique F-predictable finite variation process starting from C_0 = 0 such that X − C is an F-martingale. In particular, C satisfies the equation

lim_{n→∞} Σ_{k=1}^{n} E[X_{t_k} − X_{t_{k−1}} | F_{t_{k−1}}] = C_t   (2.2)

almost surely for each t ≥ 0, see Karr (1986, Theorem 2.17). The intuitive notation for (2.2) is E[dX_t|F_{t−}] = dC_t. The latter fact motivates the following definition.
Definition 2.3 (infinitesimally predictable processes). We say that X is infinitesimally forward/backward predictable (IF/IB-predictable) with respect to G if for each t ≥ 0 and any increasing sequence of partitions (T_n)_{n∈N} of [0, t] with lim_{n→∞} |T_n| = 0 we almost surely have

lim_{n→∞} Σ_{k=1}^{n} E[X_{t_k} − X_{t_{k−1}} | G_{t_{k−1}}] = X_t − X_0,   (2.3)
lim_{n→∞} Σ_{k=1}^{n} E[X_{t_k} − X_{t_{k−1}} | G_{t_k}] = X_t − X_0,

respectively, given that the expectations and limits exist.
Note that any IF/IB-predictable process is also incrementally adapted. By combining (2.2) and (2.3), we obtain

lim_{n→∞} Σ_{k=1}^{n} E[(X_{t_k} − C_{t_k}) − (X_{t_{k−1}} − C_{t_{k−1}}) | F_{t_{k−1}}] = 0

almost surely for each t ≥ 0, which means that the process X − C is an IF-martingale with respect to F according to Definition 2.2.
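For a concrete illustration of (2.2), consider a homogeneous Poisson process, for which the conditional expected increments are explicitly known. The following sketch (rate and horizon are arbitrary illustrative choices of ours) checks that every partition sum of conditional expected increments equals λt, so that the compensator is C_t = λt, and adds a seeded Monte Carlo sanity check that the compensated process is centred:

```python
import random

# Concrete check of (2.2) for a homogeneous Poisson process X with rate lam:
# E[X_{t_{k+1}} - X_{t_k} | F_{t_k}] = lam * (t_{k+1} - t_k) regardless of the
# past, so every partition sum equals lam * t and the compensator is C_t = lam*t.
# (lam and the horizon t are arbitrary illustrative choices.)

lam, t = 0.7, 2.0

def partition_sum(n):
    """Sum of conditional expected increments over the n-step uniform partition of [0, t]."""
    grid = [t * k / n for k in range(n + 1)]
    return sum(lam * (grid[k + 1] - grid[k]) for k in range(n))

sums = [partition_sum(n) for n in (2, 10, 100)]
assert all(abs(s - lam * t) < 1e-9 for s in sums)   # each sum equals lam * t

def poisson_count(rate, horizon, rng):
    """Number of jumps of a Poisson process on [0, horizon], via exponential waiting times."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(rate)
        if s > horizon:
            return n
        n += 1

# Monte Carlo sanity check that X - C is centred: E[X_t - lam * t] = 0.
rng = random.Random(0)
mean = sum(poisson_count(lam, t, rng) for _ in range(20000)) / 20000
print(mean - lam * t)  # close to zero
```

The first check is exact by construction; the simulation only verifies on average that the compensated process X − C carries no drift.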
Definition 2.4 (infinitesimal compensators). We say that a process C is an infinitesimal forward/backward compensator of X (IF/IB-compensator) with respect to G if C is IF/IB-predictable and X − C is an IF/IB-martingale with respect to G, respectively.

Consider the increments E[ξ|G_{t_{k+1}}] − E[ξ|G_{t_k}] for any t_{k+1} ≥ t_k ≥ 0 and ξ ∈ L¹(Ω, A, P). By conditioning these increments forwards on G_{t_k} and backwards on G_{t_{k+1}}, the construction decomposes the process t ↦ E[ξ|G_t] into the difference of an IF-martingale and an IB-martingale.

Definition 2.5 (infinitesimal martingale representation). A decomposition

E[ξ|G_t] = E[ξ|G_0] + F_t − B_t,  t ≥ 0,

is an infinitesimal martingale representation if F is an IF-martingale and B is an IB-martingale with respect to G.
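The decomposition into an IF-martingale and an IB-martingale can be made explicit on a toy probability space of our own construction: a fair coin Z = ±1 is observed at time 1 and deleted again at time 2, so G_0 and G_2 are trivial while G_1 = σ(Z). The sketch below computes the forward innovation part F and the backward information-loss part B; in this symmetric example the compensator-type residual terms vanish, so the decomposition holds pathwise:

```python
# Toy two-period illustration of decomposing t -> E[xi | G_t] into an
# IF-martingale minus an IB-martingale: a fair coin Z = ±1 is observed at
# time 1 and deleted at time 2, so G_0 and G_2 are trivial, G_1 = sigma(Z).
# (Finite toy space; all modelling choices here are ours.)

omega = [+1, -1]                       # outcomes of Z, each with prob 1/2
prob = {w: 0.5 for w in omega}

def cond_exp(Y, blocks):
    """Elementary conditional expectation w.r.t. a partition of the outcome space."""
    out = {}
    for block in blocks:
        mass = sum(prob[w] for w in block)
        val = sum(Y[w] * prob[w] for w in block) / mass
        for w in block:
            out[w] = val
    return out

trivial = [omega]                      # blocks of G_0 and of G_2 (Z deleted)
full = [[+1], [-1]]                    # blocks of G_1 (Z visible)
G = [trivial, full, trivial]

xi = {w: w for w in omega}             # the claim xi = Z
X = [cond_exp(xi, g) for g in G]       # X_t = E[xi | G_t]: (0, Z, 0)

# forward innovation increments and backward information-loss increments;
# in this symmetric toy the residual terms E[X_{k+1}|G_k] - E[X_k|G_{k+1}]
# vanish, so X = X_0 + F - B holds pathwise.
F = [{w: 0.0 for w in omega}]
B = [{w: 0.0 for w in omega}]
for k in range(2):
    EfX = cond_exp(X[k + 1], G[k])     # E[X_{k+1} | G_k]
    EbX = cond_exp(X[k], G[k + 1])     # E[X_k | G_{k+1}]
    F.append({w: F[k][w] + X[k + 1][w] - EfX[w] for w in omega})
    B.append({w: B[k][w] + X[k][w] - EbX[w] for w in omega})

for w in omega:
    assert all(abs(X[k][w] - (X[0][w] + F[k][w] - B[k][w])) < 1e-12 for k in range(3))
print([X[k][+1] for k in range(3)])    # the path of X on the outcome Z = +1
```

Here F picks up the innovation Z at time 1 and stays constant afterwards, while B picks up the loss of Z at time 2; in general, additional compensator terms can appear.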
Suppose now that X describes a discounted claim process in a finance or insurance application. Then we are typically interested in the process t ↦ E[X_t|F_t], which is not necessarily well-defined. If X is a càdlàg process whose suprema on compacts have finite expectations, then there exists a unique càdlàg process X^F, the so-called optional projection of X with respect to F, such that

X^F_t = E[X_t|F_t]

almost surely for each t ≥ 0. We say here that a process is unique if it is unique up to evanescence. We now expand the concept of optional projections to non-monotone information.
Definition 2.6 (optional projection). Let X be an integrable càdlàg process. If there exists a unique càdlàg process X^G such that

X^G_t = E[X_t|G_t]

almost surely for each t ≥ 0, then we call X^G the optional projection of X with respect to G.
The optional projection X^G can be decomposed as a sum of an IF-martingale, an IB-martingale and an IB-compensator with respect to G. By switching the roles of t_k and t_{k+1} we can obtain a similar decomposition where the IB-compensator is replaced by an IF-compensator.

Definition 2.7 (infinitesimal representation of optional projections). A decomposition

X^G_t = X^G_0 + F_t − B_t + C_t,  t ≥ 0,

is an infinitesimal representation of X^G if F is an IF-martingale, B is an IB-martingale, and C is either an IB-compensator or an IF-compensator with respect to G.
As we mentioned at the beginning of this section, so far we simply assumed that all the limits that we discussed here indeed exist. In the next section we focus on a marked point process framework, since this guarantees not only the existence of the limits but also allows us to calculate the limits explicitly.

Jump process framework
In the literature, we can find different approaches for defining a jump process framework. One way is to start with a marked point process (τ_i, ζ_i)_{i∈N}, where

• the τ_i : (Ω, A) → ([0, ∞], B([0, ∞])) are random variables giving the event times, and
• the ζ_i : (Ω, A) → (E, E) are random variables giving the marks.
Different from the point process literature, we do not assume here that the random times (τ_i)_{i∈N} are increasing or ordered in any specific way. This gives us useful modelling flexibility, see also the comments at the end of this section. Let E be a separable complete metric space and E := B(E) its Borel sigma-algebra. Moreover, let Ω be a Polish space and A its Borel sigma-algebra. We interpret each ζ_i as a piece of information that can be observed from time τ_i on.

As motivated in the introduction, we additionally assume that the information pieces ζ_i are possibly deleted after a finite holding time. Therefore, we expand the marked point process (τ_i, ζ_i)_{i∈N} to (τ_i, σ_i, ζ_i)_{i∈N}. We interpret σ_i as the deletion time of information piece ζ_i. Note that the random times (σ_i)_{i∈N} are in general not ordered. For the sake of a more compact notation, in the following we will work with the equivalent sequence (T_i, Z_i)_{i∈N} defined as

(T_{2i−1}, Z_{2i−1}) := (τ_i, ζ_i),  (T_{2i}, Z_{2i}) := (σ_i, ζ_i),  i ∈ N,

i.e. the random times T_{2i−1} with odd indices refer to innovations and the consecutive random times T_{2i} with even indices are the corresponding deletion times. We generally assume that

E[#{i ∈ N : T_i ≤ t}] < ∞,  t ≥ 0,   (3.1)

which will ensure the existence of (infinitesimal) compensators. Condition (3.1) implies that almost surely there are at most finitely many random times on bounded intervals. Moreover, we assume that

τ_i < σ_i on {τ_i < ∞},  i ∈ N,

i.e. a new piece of information is not instantaneously deleted but is available for at least a short amount of time.

Based on the sequence (T_i, Z_i)_{i∈N} we generate random counting measures µ_I via

µ_I((0, t] × B) := #{u ∈ (0, t] : {i ∈ N : T_i = u} = I, (Z_i)_{i∈I} ∈ B},  B ∈ E_I,

where E_I denotes the product sigma-algebra on E^I. If the different random times (T_i)_i never coincide, then we just need to consider the counting measures µ_{{i}}, i ∈ N, which describe separate arrivals of the random times T_i and their marks Z_i. But if random times can occur simultaneously, then we need the full scale of counting measures µ_I, I ⊆ N, which cover all kinds of separate and joint events.
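A minimal illustration of the bookkeeping behind the extended marked point process (the concrete times and mark labels below are invented for illustration):

```python
# Toy extended marked point process: innovation times tau_i with marks
# zeta_i and deletion times sigma_i (all numbers are made up for
# illustration). Following the paper's convention,
# (T_{2i-1}, Z_{2i-1}) = (tau_i, zeta_i) and (T_{2i}, Z_{2i}) = (sigma_i, zeta_i).

data = [  # (tau_i, sigma_i, zeta_i)
    (1.0, 4.0, "blood pressure"),
    (2.0, 3.0, "activity score"),
    (2.5, float("inf"), "time of death flag"),  # never deleted
]

# interleaved sequence (T_1, Z_1), (T_2, Z_2), ... with T_{2i-1} <= T_{2i}
TZ = []
for tau, sigma, zeta in data:
    TZ += [(tau, zeta), (sigma, zeta)]

def visible(t):
    """Marks generating the admissible information G_t: observed but not yet deleted."""
    return {z for tau, sigma, z in data if tau <= t < sigma}

print(visible(1.5))   # {'blood pressure'}
print(visible(2.6))   # all three marks visible
print(visible(3.5))   # 'activity score' already deleted
print(visible(5.0))   # only the never-deleted mark remains
```

The non-monotonicity of t ↦ G_t is directly visible here: the set of visible marks first grows and then shrinks again.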
For each I ⊆ N, the measures µ_I(·)(ω), ω ∈ Ω, are uniquely determined by their values on sets of the form (s, t] × B with 0 ≤ s ≤ t and B ∈ E_I. The observable information at time t ≥ 0 is given by the complete filtration F = (F_t)_{t≥0} generated by the extended marked point process, which lets the random times T_i, i ∈ N, be stopping times. Here the symbol '∨' denotes the sigma-algebra that is generated by the union of the involved sets. The admissible information at time t ≥ 0 is given by the family of sub-sigma-algebras G = (G_t)_{t≥0}, and the admissible information immediately before time t > 0 is given by the family of sub-sigma-algebras G^− = (G^−_t)_{t>0}.

The inequality T_{2i−1} ≤ T_{2i}, i ∈ N, is the only kind of order that we assume to hold between the random times (T_i)_i, resulting from the natural assumption τ_i ≤ σ_i, i ∈ N. This fact is relevant when an ordering unintentionally reveals additional information. For example, if we have a model where the innovation times (τ_i)_i are ordered, i.e. T_1 < T_3 < T_5 < · · ·, then G_t reveals among other things the exact number of deletions that have happened until t. This can be an unwanted feature if the number of past deletions is itself a non-admissible piece of information. In many situations we can avoid such an implied information effect by ordering the pairs (T_{2i−1}, T_{2i}) in a non-informative way.
Remark 3.2. Without loss of generality suppose here that 0 ∈ E. Then, by defining a suitable càdlàg process, the information G_t and G^−_t can be alternatively represented in terms of its current value and its left limit, respectively.

Optional projections
In this section we study existence and path properties of optional projections. Note that this and all following sections generally assume the marked point process framework of Section 3. Recall also our specific definition of G^−_t.
Theorem 4.1. Suppose that X = (X_t)_{t≥0} is a càdlàg process that satisfies

E[ sup_{s∈[0,t]} |X_s| ] < ∞,  t ≥ 0.   (4.1)

Then the optional projection X^G according to Definition 2.6 exists, and we have

X^G_{t−} = E[X_{t−}|G^−_t]

almost surely for each t > 0. If X has integrable variation on compacts, then X^G has paths of finite variation on compacts.
It might be surprising here that X^G is indeed always a càdlàg process, but note that condition (3.1) rules out clusters of jump times in our marked point process framework. Before we turn to the proof of Theorem 4.1, we develop several auxiliary results.

Since Ω is a Polish space and A its Borel sigma-algebra, there exist regular conditional distributions P(·|Z_M) and P(·|Z_M, R_I) on (Ω, A) for each M ∈ M and I ∈ N. As the sets M and N are countable, all these conditional distributions are simultaneously unique up to a joint exception zero set. In this paper the notation P_{M,R_I}(·) = P(·|Z_M, R_I) refers to an arbitrary but fixed regular version of the conditional distribution on the right hand side, and for any integrable random variable Z, E_{M,R_I}[Z] denotes the specific version of the conditional expectation E[Z|Z_M, R_I] that we obtain by integrating Z with respect to the specific regular versions that we picked for P(·|Z_M, R_I). In case of I = ∅ we also use the short forms P_M = P_{M,R_∅} and E_M = E_{M,R_∅} since P_{M,R_∅} is a version of P(·|Z_M).
Moreover, defining I − 1 := {i − 1 : i ∈ I}, the mappings P_{M,R_I=r}(·) refer to arbitrary but fixed regular versions of the factorized conditional distributions on the right hand side, and for any integrable random variable Z we define E_{M,R_I=r}[Z] accordingly. By reducing M down to M_I we leave out exactly those random variables in Z_M that are already covered by R_I. Note that the mapping P_{M,R_I=r}(·)|_{r=R_I} equals P_{M,R_I}(·).

For M ∈ M and t ≥ 0 we define the G_t-measurable sets A^M_t and the corresponding G-adapted indicator processes I^M_t := 1_{A^M_t}. Because of assumption (3.1) the paths of I^M_t have only finitely many jumps on compacts, so they have left and right limits. Moreover, by construction they are right-continuous, so the processes I^M are càdlàg. The left limits are denoted by I^M_{t−}.

Proposition 4.2. For any integrable random variable ξ and any sets M ∈ M and I ∈ N we almost surely have the identities (4.4).

Proof. The left hand side of (4.4) almost surely equals the conditional expectations that one obtains when the sigma-algebras G and G^− are replaced by their non-completed versions. Therefore, in the remaining proof we will ignore the extension by Z in the definitions of G and G^−.
For each H ∈ σ(Z_M) there exists a G ∈ G_t such that H ∩ A^M_t = G ∩ A^M_t and vice versa. Thus the first equation in (4.4) holds. By replacing (4.5) by its analogue for G^−_t, we can analogously show that the second equation in (4.4) holds.
In case (4) we can argue similarly to case (3), but we need to replace the definition of the involved random times so that (4.7) holds. Now, let M ∈ M be arbitrary but fixed and choose τ and σ as the random times where I^M_t jumps from zero to one and back to zero, respectively. Suppose that P_{Z_M=z} is a regular version of P(·|Z_M = z) and E_{Z_M=z}[·] its corresponding expectation. Then from (4.7) we can conclude that the process Y defined in this way, t ≥ 0, is a suitable candidate, since it almost surely equals X^G_t for each t ≥ 0. Note that there are at most a countable number of conditional expectations involved, so the corresponding regular versions are simultaneously unique up to a joint exception zero set, cf. (4.9).

In case of E_M[I^M_t](ω) = 0, Proposition 4.4 implies that I^M_t = 0 for almost all ω ∈ Ω, where the exception zero set does not depend on the choice of t. So (4.9) is almost surely true on [0, ∞) since I^M_t(ω) = 0 implies that there is a whole interval [t, t + ε_ω) where the right-continuous jump path s ↦ I^M_s(ω) is constantly zero. Similarly, we can show that the process Y almost surely has left limits Y_{t−}, t > 0, of the corresponding form.
According to Proposition 4.2, Y_{t−} almost surely equals E[X_{t−}|G^−_t]. As càdlàg processes are uniquely determined by their values on dense countable subsets of the time line, our choice for X^G is almost surely the only possible modification of (E[X_t|G_t])_{t≥0}.
The variation of Y on [0, t] is bounded by the supremum of the corresponding sums over all partitions T : 0 = t_0 < · · · < t_n = t of the interval [0, t], and this bound is finite for almost every ω ∈ Ω, since X has integrable variation on compacts and since M_t(ω) is finite.

Infinitesimal compensators
In this section we derive infinitesimal compensators for a large class of incrementally adapted jump processes, including the counting processes t ↦ µ_I((0, t] × B). The proof of the proposition is given below. In the following we use the short notation

F.κ((0, t] × B) := ∫_{(0,t]×B} F(u, e) κ(du, de)

for random measures κ and integrable random functions F.

Theorem 5.2. If F_I(t, e) is G^−_t-measurable for each (t, e), then for each B ∈ E_I the jump process t ↦ F_I.µ_I((0, t] × B) is an IF-martingale up to the IF-compensator t ↦ F_I.ν_I((0, t] × B). If F_I(t, e) is G_t-measurable for each (t, e), then for each B ∈ E_I the jump process t ↦ F_I.µ_I((0, t] × B) is an IB-martingale up to the IB-compensator t ↦ F_I.ρ_I((0, t] × B).
By choosing G_I ≡ 1 and G_{I′} ≡ 0 for I′ ≠ I, Theorem 5.2 yields in particular that ν_I is the IF-compensator and ρ_I is the IB-compensator of the counting process µ_I. In intuitive notation we write this fact as

E[µ_I(dt × B)|G^−_t] = ν_I(dt × B),  E[µ_I(dt × B)|G_t] = ρ_I(dt × B).

The proofs of Proposition 5.1 and Theorem 5.2 follow now in several steps.

Proof. For each s ≥ 0 and M ∈ M, assumption (3.1) implies the required almost sure finiteness. Therefore, by applying the Monotone Convergence Theorem we obtain the assertion.

Proposition 5.4. Suppose that the mappings (t, e, ω) ↦ F_I(t, e)(ω), I ∈ N, are jointly measurable and satisfy (5.1). For each t > 0 and B ∈ E_I we almost surely have the corresponding limit equations for any increasing sequence of partitions (T_n)_{n∈N} of [0, t] with lim_{n→∞} |T_n| = 0 and for accordingly defined G_I and H_I.
Proof. By decomposing F into a positive part F+ and a negative part F−, it suffices to prove the first equation for the non-negative mappings F+ and F− only. Therefore, without loss of generality we suppose from now on that F is non-negative. Let M_t = M_t(ω) be defined as in (4.8). In the following we use the short notation J_k := (t_k, t_{k+1}]. Since Σ_{M∈M_t} I^M_{t_k} = 1 for any t_k, by applying (4.4), the Monotone Convergence Theorem and the Law of Total Probability we obtain the corresponding representation of the partition sums for almost every ω ∈ Ω. For u ∈ (0, t] let J_u be the unique interval (t_k, t_{k+1}] from T_n such that t_k < u ≤ t_{k+1}, and let t(u) be the left end point of J_u. Then we can rewrite the sums in terms of t(u). Taking the limit for n → ∞, for almost every ω ∈ Ω we obtain that the right hand side converges to the asserted expression for any increasing sequence of partitions (T_n)_{n∈N} of [0, t] with lim_{n→∞} |T_n| = 0.
Proof. By decomposing G into a positive part G + and a negative part G − , it suffices to prove the first equation for the non-negative mappings G + and G − only. Therefore, without loss of generality we suppose from now on that G is non-negative.
The latter expectation is finite according to assumption (5.1). Hence, for each M ∈ M the corresponding bound holds almost surely. Let J_k := (t_k, t_{k+1}]. From the Dominated Convergence Theorem we obtain the first limit, since I^M is bounded by 1 and because of the second line in (5.7). By using the first line in (5.7), the Dominated Convergence Theorem moreover yields the second limit. By applying the Fubini-Tonelli Theorem we can rewrite the latter expression accordingly. Using Proposition 4.4 and the Dominated Convergence Theorem, we therefore obtain the stated limit, where the second equality is based on the fact that (4.6) and the G^−_u-measurability of G_I(u, e) allow us to pull G_I(u, e) out of the conditional expectation E_M[I^M_{u−} G_I(u, e)].
Summing the latter equation over M ∈ M_t for M_t defined as in (4.8) and applying Proposition 4.2, we almost surely obtain the first equation in Proposition 5.5.
The proof of the second equation in Proposition 5.5 is similar.
Proof of Theorem 5.2. Let G_I and H_I be defined as in Proposition 5.4. If F_I(t, e) is G^−_t-measurable for each (t, e), then Proposition 4.2 implies that G_I(t, e) = F_I(t, e) almost surely. Similarly, if F_I(t, e) is G_t-measurable for each (t, e), we have H_I(t, e) = F_I(t, e) almost surely. With this fact and by subtracting the limit equations in Proposition 5.4 and Proposition 5.5, we obtain that G_I.µ_I([0, t] × B) − G_I.ν_I([0, t] × B) and H_I.µ_I([0, t] × B) − H_I.ρ_I([0, t] × B) satisfy the defining limit equations for IF/IB-martingales. The IF/IB-predictability of the compensators follows from Proposition 5.5. Note that all involved processes are incrementally adapted to G because of (4.5) and (4.6).

Infinitesimal martingale representations
Suppose that λ_I is the compensator of µ_I with respect to F. For each integrable random variable ξ, the classical martingale representation theorem yields that the martingale X_t := E[ξ|F_t] can be represented as

X_t = E[ξ|F_0] + Σ_{I∈N} ∫_{(0,t]×E_I} F_I(u, e) (µ_I − λ_I)(du, de),   (6.1)

where F_I(u, e)(ω) is jointly measurable in (u, e, ω) and ω ↦ F_I(u, e)(ω) is F_{u−}-measurable for each (u, e), see e.g. Karr (1986). We now extend this result to the non-monotone information G.

(6.2)
For each I ∈ I and e ∈ E I the process u → G I (u−, u, e) is G − -adapted and the process u → G I (u, u, e) is G-adapted.
If the mappings F_I(u, e) = G_I(u−, u, e) and F_I(u, e) = G_I(u, u, e) satisfy the integrability condition in Theorem 5.2, then representation (1.2) is a sum of IF-martingales and IB-martingales with respect to G. In case of F = G we have ν_I = λ_I, ρ_I = µ_I and (1.2) equals (6.1), so (1.2) is a generalization of (6.1).
The proof of Theorem 6.1 is given below. Recall that our notation uses the convention (4.2). Furthermore, using M′ ∩ M″ = ∅, we can show the required domination, since ∆K_t and ∆L_t are dominated by ∆k_t and ∆l_t. By summing equation (6.6) over M ∈ M and adding (6.7), for almost every ω ∈ Ω we end up with (1.2) and (6.2) after rearranging the addends. By applying Proposition 4.2 we can see that G_I(u−, u, e) is G^−_u-measurable and that G_I(u, u, e) is G_u-measurable for each (I, e).

Infinitesimal representations for optional projections
Suppose that X is a càdlàg process that satisfies (4.1) and such that X_t − X_0 is F_t-measurable for each t ≥ 0. Then the optional projection of X with respect to F can be represented in the form (7.1) for mappings F_I(t, e) that are F-predictable processes in the argument t for each (I, e).
In order to see that, apply the classical martingale representation theorem on the Fmartingale and rearrange the addends. The following theorem extends (7.1) to non-monotone information settings.
Theorem 7.1. Let X be a càdlàg process that satisfies (4.1) and that has an IB-compensator with respect to G, denoted as X^IB. Then

X^G_t = X^G_0 + Σ_{I∈N} ∫_{(0,t]×E_I} G_I(u−, u, e) (µ_I − ν_I)(du, de)
       − Σ_{I∈N} ∫_{(0,t]×E_I} G_I(u, u, e) (µ_I − ρ_I)(du, de)
       + X^IB_t − X^IB_0,   (7.2)

with integrands G_I defined by (7.3). If X has an IF-compensator with respect to G, denoted as X^IF, then (7.2) still holds but with X^IB_t replaced by X^IF_t and X_{u−} replaced by X_u in (7.3).
By applying Proposition 4.2 we can see that G_I(u−, u, e) is G^−_u-measurable and that G_I(u, u, e) is G_u-measurable. Hence, the integrals in the first and second line of (7.2) describe IF-martingales and IB-martingales with respect to G if F_I(u, e) = G_I(u−, u, e) and F_I(u, e) = G_I(u, u, e) satisfy the integrability condition (5.1), see the comments below Theorem 5.2.
In the special case G = F we have ν I = λ I , ρ I = µ I , X = X IB and the representations (7.2) and (7.1) are equivalent, i.e. (7.2) is a generalization of (7.1).
Even if G ≠ F we can still have X = X^IB or X = X^IF. The following example presents non-trivial processes X that equal their IB-compensators or their IF-compensators.
Proof of Theorem 7.1. The theorem follows from an additive decomposition of X and from applying Theorem 6.1 to each addend: E[X_s|G_u] has a representation of the form (1.2) in case of t_k < s ≤ t_{k+1} for G_I(s, u, e) defined by (7.3). Because of the càdlàg property of X, by applying the Dominated Convergence Theorem pathwise for almost every ω ∈ Ω, we end up with (7.2) and (7.3). The alternative decomposition leads to the second variant where X^IB is replaced by X^IF and X_{u−} is replaced by X_u in (7.3).
Here the random variable ∆_t := Σ_{I∈N} µ_I({t} × E_I) indicates whether there is a stopping event at time t. One can then show that the integrands in (7.2) almost surely equal the corresponding scenario differences for each t > 0, I ∈ N, and e ∈ E_I. The differences on the right hand side have intuitive interpretations: the first line describes the difference in expectation between a change scenario and a remain scenario if we are currently at time t− and are looking forwards in time. Similarly, the second line describes the difference in expectation between a change scenario and a remain scenario if we are currently at time t and are looking backwards in time. In formula (7.2) these differences in expectation are integrated with respect to the compensated forward and backward scenario dynamics.

Examples
Here we come back to Examples 1.1 and 1.2 and show how our infinitesimal martingale representations can be applied in life insurance and credit risk modeling.
Example 8.1 (Evaluation of life insurance based on big data). Consider a life insurance contract where the insurer collects health-related information about the insured with the aim of improving forecasts of the individual future insurance liabilities. For example, this can involve data from activity trackers or social media. Here, the marked point process includes the time of death τ_1, which is recorded as ζ_1 := τ_1, and further health-related information (τ_i, ζ_i)_{i≥2}. Upon request of the policyholder with reference to the 'right to erasure' according to the General Data Protection Regulation of the European Union, or as a self-imposed data privacy effort of the data provider, the insurer deletes parts of the health-related data at certain time points, i.e. we expand (τ_i, ζ_i)_{i≥2} by deletion times (σ_i)_{i≥2}. For completeness we define σ_1 := ∞.
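To make this information structure concrete, here is a minimal simulation sketch (all distributions, rates and the horizon are illustrative assumptions, not taken from the paper): records become visible at their reporting times τ_i and disappear again at their deletion times σ_i, so the observed information is non-monotone in t.

```python
import random

def simulate_policy(seed, horizon=40.0):
    """Toy marked point process for one insurance policy: a death time
    (tau_1, zeta_1 := tau_1) plus health events (tau_i, zeta_i) that may
    later be erased at deletion times sigma_i (all rates illustrative)."""
    rng = random.Random(seed)
    tau1 = rng.expovariate(1 / 30)            # death time, mean 30 years
    events = [(tau1, tau1, float("inf"))]     # (tau_i, zeta_i, sigma_i); sigma_1 = inf
    t = 0.0
    while True:
        t += rng.expovariate(1.0)             # health events at unit-rate Poisson times
        if t >= min(tau1, horizon):
            break
        mark = rng.gauss(0.0, 1.0)            # health score as the mark zeta_i
        sigma = t + rng.expovariate(1 / 10)   # deletion time of this record
        events.append((t, mark, sigma))
    return events

def observed_at(events, t):
    """Records visible at time t: already reported and not yet deleted."""
    return [(tau, z) for tau, z, sigma in events if tau <= t < sigma]
```

Note that `observed_at(events, s)` need not be contained in `observed_at(events, t)` for s < t: a record can be visible at s and deleted before t, which is precisely the non-monotonicity of (G_t)_{t≥0}.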
In the classical insurance modelling without data deletion, the time dynamics of the expected future insurance payments is commonly described by Thiele's equation, see e.g. Møller (1993) and Djehiche & Löfdahl (2016). Suppose that A_t gives the aggregated benefit cash flow of the life insurance contract on [0, t], including survival benefits with rate a(t) and a death benefit of α(t) upon death at time t, so that X_t := ∫_(t,T] e^{−∫_t^s φ(u) du} dA_s describes the discounted future liabilities of the insurer seen from time t. As the càdlàg process X = (X_t)_{t≥0} is neither adapted to F nor adapted to G, the insurer has to work with the optional projection instead (the so-called prospective reserve), i.e. the insurer aims to calculate X^F_t := E[X_t | F_t], t ≥ 0, in case that there is no data deletion, and X^G_t := E[X_t | G_t], t ≥ 0, in case that information deletions may occur. The process X^G is a well-defined càdlàg process according to Theorem 4.1.
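The prospective reserve X^F in the no-deletion case can be sketched numerically in a plain survival model. This is a minimal illustration under simplifying assumptions not made in the paper (a single life, constant interest rate phi and constant mortality hazard mu):

```python
import math

def prospective_reserve(t, T, phi, mu, a, alpha, n=20000):
    """Prospective reserve of a single-life contract, alive at time t:
    V(t) = int_t^T exp(-(phi + mu)(s - t)) * (a(s) + mu * alpha(s)) ds,
    i.e. discounted expected survival benefits (rate a) plus death
    benefits (lump sum alpha, paid at the constant hazard rate mu)."""
    h = (T - t) / n
    total = 0.0
    for k in range(n):
        s = t + (k + 0.5) * h  # midpoint rule
        total += math.exp(-(phi + mu) * (s - t)) * (a(s) + mu * alpha(s)) * h
    return total
```

For a pure annuity (a ≡ 1, alpha ≡ 0) this reproduces the closed form (1 − e^{−(phi+mu)(T−t)})/(phi + mu), which gives a quick sanity check of the numerics.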
By applying (7.1) and Itô's Lemma, we can derive the so-called stochastic Thiele equation (8.1) with terminal condition X^F_T = 0, cf. Møller (1993, equation (2.17)). The integrand F_I(t, e), which almost surely equals the corresponding difference of conditional expectations according to Remark 7.3, is a key figure in life insurance risk management, known as the sum at risk. Equation (8.1) can be interpreted as a backward stochastic differential equation (BSDE) with solution (X^F, (F_I)_I); see Djehiche & Löfdahl (2016) for Markovian models and Christiansen & Djehiche (2020) for non-Markovian models. The BSDE (8.1) is particularly relevant if the life insurance payments a and α depend on the current policy value, so that the insurance cash flow A is only implicitly defined.
By applying Theorem 7.1 and Itô's Lemma and using the fact that the process A equals its own IB-compensator (since σ_1 = ∞), we can derive an analogous equation (8.2) for X^G with terminal condition X^G_T = 0. This equation can be interpreted as a new type of BSDE with solution (X^G, (G_I)_I), featuring an IF-martingale and an IB-martingale instead of a classical martingale. The IF-martingale in the first line describes the impact of new information on the optional projection X^G. The IB-martingale in the second line quantifies the effect of information deletions on X^G. The integrands G_I(t−, t, e) and G_I(t, t, e), which almost surely equal the corresponding differences of conditional expectations according to Remark 7.3, generalize the classical definition of the sum at risk. They are needed in life insurance risk management for sensitivity analyses, safe-side calculations, contract modifications and surplus decompositions.
If the policyholder may decide about data deletions at their discretion, then the resulting value changes of the insurance contract can be systematically exploited by the policyholder, leading to a kind of data privacy arbitrage. Since it is the IB-martingale in (8.2) that measures the value changes due to data deletions at times (σ_i)_{i≥2}, it represents the potential data privacy arbitrage. A simple solution for avoiding data privacy arbitrage could be to charge the IB-martingale as a fee upon a data deletion request. The fee can also be negative, in which case it represents a bonus payment. However, more complex risk sharing schemes will be needed in insurance practice that moreover distinguish between different causes for data deletions. By following the concept of Schilling et al. (2020) to interpret martingale representations as risk factor decompositions, we may interpret the infinitesimal martingale parts in (8.2) as an additive surplus decomposition that can distinguish between numerous kinds of jump events µ_I, I ⊂ N. Such an additive decomposition of the insurer's surplus is an important step for aligning insurance risk management to the digital age.
Example 8.2 (Markovian approximations in credit rating models). A popular approximation concept in credit rating modelling is to pretend that the credit rating process is Markovian even if the empirical data does not fully support this assumption. Suppose that credit ratings are updated at integer times only. By setting τ_i := i − 1 and σ_i := i for i ∈ N and defining ζ_i as the credit rating at time τ_i, the rating process R = (R_t)_{t≥0} has the representation R_t = Σ_{i∈N} 1_{[τ_i, σ_i)}(t) ζ_i. The jumps of the process R correspond to the random counting measures µ_I. In the Jarrow-Lando-Turnbull model the rating space E is finite, (R_i)_{i∈N_0} is assumed to be a Markov chain, and the transition probabilities under P and Q are linked by a deterministic risk adjustment, where Q is the equivalent martingale measure and π is a deterministic function on N_0 × E. The latter formula allows us to estimate Q from market data by a two-step method: first, the transition probabilities P(R_{i+1} = r_{i+1} | R_i = r_i) are estimated from observed credit rating time series; then the function π is calibrated such that the risk-neutral values of credit rating derivatives conform with observed market prices. Once we have Q, we can use the (classical) martingale representation (6.1) in order to explicitly construct hedges for financial claims ξ, see e.g. Last & Penrose (2011). For example, by arguing analogously to formula (8.1), the claim ξ = h(R_T) has the hedging strategy (8.4). The integral in the first line of (8.4) describes the investments in the risk-free asset B. The second line corresponds to risky investments and can be rewritten in terms of the tradable assets in a complete financial market, cf. Section 5 in Last & Penrose (2011). A standard estimator for the state occupation probabilities of the Markov chain R w.r.t. P is the Aalen-Johansen estimator, which directly corresponds to the Nelson-Aalen estimator for the compensators λ_I of the random counting measures µ_I. Under the assumption that R is Markovian, the Nelson-Aalen estimator consistently estimates λ_I = ν_I.
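The two-step method can be sketched as follows. This is a hedged illustration, not the paper's construction: the empirical transition matrix plays the role of step one, and a multiplicative JLT-style risk premium `pi` (one scalar per current state, an assumption of this sketch) plays the role of step two; calibrating `pi` to market prices is omitted.

```python
import numpy as np

def estimate_P(paths, n_states):
    """Step 1: empirical one-step transition matrix of the rating chain
    (the discrete-time analogue of the Aalen-Johansen estimator)."""
    counts = np.zeros((n_states, n_states))
    for path in paths:
        for a, b in zip(path[:-1], path[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def jlt_adjust(P, pi):
    """Step 2 (JLT-style): scale the off-diagonal transition probabilities
    of state i by the risk premium pi[i] and put the remaining mass on the
    diagonal, so each row of the risk-neutral matrix Q still sums to one."""
    Q = P * pi[:, None]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, 1.0 - Q.sum(axis=1))
    return Q
```

In practice `pi` would then be calibrated so that the risk-neutral values of credit rating derivatives computed from `Q` match observed market prices.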
If R is not Markovian, then the Nelson-Aalen estimator still consistently estimates ν_I, see Datta & Satten (2001), but now ν_I ≠ λ_I. In other words, if we ignore the information beyond G in the estimation of λ_I due to an incorrect Markov assumption, then we actually estimate the infinitesimal forward compensator ν_I instead of the classical compensator λ_I. Similarly, ignoring the information beyond G upon estimating F_I and (1.1) from market data means that we unintentionally end up with the trading strategy (8.5) instead of the hedging strategy on the right-hand side of (8.4). Is (8.5) still a hedge for h(R_T)? By applying Theorem 6.1 instead of (6.1) and using that F_0 = G_0 and G_T = σ(R_T) ∨ Z, we analogously obtain (8.6). Equation (8.6) implies that the trading strategy (8.5) is actually not a hedge for ξ = h(R_T). The hedging error is given by the third line of (8.6). To sum up, by estimating and calculating the hedging strategy (8.4) under an incorrect Markov assumption for R, we unintentionally replace the (classical) F-martingale in (8.4) by the G-IF-martingale in (8.5) (the risk-free investment is also affected), and the corresponding G-IB-martingale is just the hedging error. Schilling et al. (2020) interpret martingale representations as additive risk factor decompositions. Likewise we can read the (infinitesimal) martingale parts in (8.4) and (8.6) as linear risk factor decompositions. The relevance of such decompositions in credit risk modelling is explained in Rosen & Saunders (2010).
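The discrepancy ν_I ≠ λ_I can be illustrated numerically with a toy construction (ours, not the paper's model): a second-order chain on {0, 1} whose upgrade probability depends on the previous state. Estimating the jump rate from the current state alone recovers a marginal (ν-type) rate that differs from both true (λ-type) conditional rates:

```python
import numpy as np

# Toy non-Markov dynamics on {0, 1}: the chance of jumping 0 -> 1 depends
# on the PREVIOUS state, so (R_i) alone is not a Markov chain.
P_UP = {0: 0.1, 1: 0.5}   # P(next=1 | now=0, prev=.)
P_DOWN = 0.3              # P(next=0 | now=1), independent of prev

def stationary_pair_dist():
    """Stationary law of the (prev, now) pair chain, which IS Markov.
    Pair states ordered (0,0), (0,1), (1,0), (1,1)."""
    T = np.array([
        [1 - P_UP[0], P_UP[0], 0.0,    0.0         ],  # from (0,0)
        [0.0,         0.0,     P_DOWN, 1.0 - P_DOWN],  # from (0,1)
        [1 - P_UP[1], P_UP[1], 0.0,    0.0         ],  # from (1,0)
        [0.0,         0.0,     P_DOWN, 1.0 - P_DOWN],  # from (1,1)
    ])
    v = np.full(4, 0.25)
    for _ in range(10000):   # power iteration
        v = v @ T
    return v

def marginal_up_rate():
    """nu-type rate P(next=1 | now=0): what one estimates when the previous
    state is ignored, e.g. under a false Markov assumption -- a stationary
    mixture of the two true lambda-type rates P_UP[0] and P_UP[1]."""
    pi = stationary_pair_dist()
    w00, w10 = pi[0], pi[2]
    return (w00 * P_UP[0] + w10 * P_UP[1]) / (w00 + w10)
```

The marginal rate lies strictly between the two conditional rates, so an estimator that conditions only on the current state converges to a ν-type quantity, not to either λ-type rate.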