Introduction

In this paper, we present in a uniform fashion some new results from two closely related topics. The first concerns the representation of the additive and multiplicative decomposition of the Azéma supermartingale associated with finite honest times, or last passage times, see Definition 2.1. The second concerns semimartingales of class-\((\Sigma )\) and the Madan–Roynette–Yor option pricing formula. For more applications of honest times and semimartingales of class-\((\Sigma )\) in mathematical finance, we refer interested readers to Nikeghbali and Platen [28].

To motivate the study, we recall that Nikeghbali and Yor [26] have shown, under the assumptions that all martingales are continuous and that the given finite honest time \(\tau \) avoids all stopping times (see Definition 2.4), that the Azéma supermartingale associated with \(\tau \), given by \(Z_t:={{\mathbb {P}}}(\tau > t\,|\,{{{\mathcal {F}}}}_t)\), admits the following additive and multiplicative representations:

$$\begin{aligned} Z_t&= 1+ m_t - \sup _{s\le t}m_s \end{aligned}$$
(1)
$$\begin{aligned} Z_t&= \frac{M_t}{\sup _{s\le t} M_s} \end{aligned}$$
(2)

where m is a continuous local martingale and M is a non-negative continuous local martingale with the property that \(\lim _{t\rightarrow \infty } M_t = 0\). In other words, the process \(1- Z\) can be expressed as the drawdown of a local martingale m or as the relative drawdown of a non-negative local martingale M. Conversely, given a non-negative continuous local martingale M such that \(\lim _{t\rightarrow \infty } M_t = 0\), the Azéma supermartingale of the finite honest time \(\tau : = \sup \{s: M_s = \sup _{u\le s} M_u\}\) is of the form given in (2). In applications, the multiplicative decomposition and representation of the Azéma supermartingale have recently received interest in credit risk modelling and the study of asymmetric information, e.g. Aksamit et al. [3], Fontana et al. [9], Zwierz [35] and Kardaras [19].
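As a brief reminder, and only as a sketch, of the mechanism behind (2): if M is a non-negative continuous local martingale with \(\lim _{t\rightarrow \infty } M_t = 0\), then the Doob maximal identity (see, e.g. Lemma 2.1 in [26]) states that, for every \(t\ge 0\) and \(x\ge M_t\),

$$\begin{aligned} {{\mathbb {P}}}\Big (\sup _{s\ge t} M_s > x \,\Big |\, {{{\mathcal {F}}}}_t\Big ) = \frac{M_t}{x}, \end{aligned}$$

and, since up to a null set \(\{\tau > t\} = \{\sup _{s\ge t} M_s > \sup _{s\le t} M_s\}\) for \(\tau = \sup \{s: M_s = \sup _{u\le s} M_u\}\), taking \(x = \sup _{s\le t} M_s\) recovers \({{\mathbb {P}}}(\tau > t\,|\,{{{\mathcal {F}}}}_t) = M_t/\sup _{s\le t}M_s\).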

The above representations of Z and the corresponding characterisation of honest times through non-negative local martingales were recovered in Nikeghbali and Platen [28] under only the assumption that the finite honest time \(\tau \) avoids all stopping times. Under similar assumptions, the multiplicative representation was also studied in Kardaras [18] and Acciaio and Penner [1]. To illustrate the limits of these results, a counterexample from [3] was given in [1] to show that there exist finite honest times for which \(1-Z\) cannot be expressed as the relative drawdown of a non-negative local martingale with continuous supremum, i.e. the representation (2) does not hold. This observation then led to Song [30], where, for an arbitrary random time, necessary and sufficient conditions for a representation of the form (2) to hold were obtained. Here, to better illustrate that there exist finite honest times for which (2) does not hold, we also provide two simple counterexamples in Example 2.2 and Example 2.3.

In view of the counterexamples, the most important contribution of this paper is that we remove the last standing assumption that \(\tau \) avoids all stopping times, thereby filling the final gap in the literature on the existence and uniqueness of the additive and the multiplicative representations of the Azéma supermartingale associated with a finite honest time, which is then used to provide a complete characterisation of finite honest times. More precisely, by combining Theorem 2.9, Proposition 2.17 and Proposition 2.20, we show that, given an arbitrary finite honest time, instead of local martingales as in (1) and (2), the process \(1-Z\) can be uniquely expressed as the drawdown of some local supermartingale and as the relative drawdown of some non-negative local supermartingale with continuous running supremum. In hindsight, the main obstacle in removing the assumption that \(\tau \) avoids all stopping times was that one was too focused on Z and insisted that m and M should be local martingales. In fact, instead of the supermartingale Z, it is more natural to consider the Azéma optional supermartingale \(\widetilde{Z}_t :={{\mathbb {P}}}(\tau \ge t\, |\,{{{\mathcal {F}}}}_t) \), as any finite honest time \(\tau \) can be expressed as the end of the optional set \({\lbrace \widetilde{Z} = 1 \rbrace }\), and the representations for Z can be obtained by noticing that \(Z = \widetilde{Z}_+\). The switch from Z to \(\widetilde{Z}\) is crucial: by making it, we can remove the assumption that \(\tau \) avoids all stopping times and obtain representations of \(\widetilde{Z}\) in the form given in (1) and (2), with the key difference being that the local martingales m and M are replaced by local optional supermartingales which exhibit làglàd trajectories. The main technical difficulty faced in this study is that the process \(\widetilde{Z}\) is in general not a càdlàg process. Therefore, the standard càdlàg semimartingale calculus cannot be applied, the techniques for càdlàg functions employed in [1, 18, 26, 28], such as the Doob maximal identity and the Skorokhod reflection lemma, e.g. Lemma 2.1 and Lemma 2.4 in [26], are not directly applicable, and one needs to seek alternative methods.

The second topic considered here is semimartingales of class-(\(\Sigma \)). The notion of class-(\(\Sigma \)) was first introduced for positive continuous submartingales in Yor [33] and later extended in [6, 26, 27, 33, 34] to semimartingales and more recently examined in Eyi-Obiang et al. [7, 8] in the context of signed measures. Notably, the authors of [6] have shown that the introduction of class-\((\Sigma )\) allows for a martingale proof of the Madan–Roynette–Yor formula, see, e.g. [21], which established a link between the last passage time of zero of a semimartingale of class-\((\Sigma )\) and the price of a European option.

In the current definition of semimartingales of class-\((\Sigma )\), the predictable process of finite variation in the semimartingale decomposition is continuous. In the context of the Azéma supermartingale associated with a finite honest time, this continuity assumption is equivalent to the assumption that the honest time avoids all stopping times, which we have previously removed. Hence, the main contributions in the second part of the paper are (i) we extend the notion of semimartingales of class-\((\Sigma )\) by allowing for jumps in the predictable finite variation part of the decomposition, and (ii) we recover, under the extended definition, some existing results for semimartingales of class-\((\Sigma )\), in particular the Madan–Roynette–Yor formula, and apply them to the construction of honest times.

The structure of the paper is as follows. In Sect. 1 we introduce the necessary notations and tools for our study. In Sect. 2 we consider the existence and uniqueness of an additive and a multiplicative representation for \(\widetilde{Z}\). To prove our main result in Theorem 2.9, we first derive a multiplicative decomposition of \(\widetilde{Z}\) in Lemma 2.6 and then identify the required representation in Theorem 2.9. Our approach is inspired by the works of Azéma, Meyer and Yœurp in [5, 24, 25, 32] on the multiplicative decomposition of positive submartingales. We also rely heavily on the finer properties of honest times exposed in Jeulin [17] and on stochastic calculus for làglàd semimartingales under the usual conditions, which can be obtained from the \(\underline{\underline{\mathtt {A}}}\)-semimartingale calculus developed in Lenglart [20] or from the stochastic calculus for optional semimartingales developed in Gal’čuk [10,11,12]. For notational convenience, we adopt here the framework of Gal’čuk.

Having obtained, in Theorem 2.9, the existence of an additive and a multiplicative representation of \(\widetilde{Z}\) and identified the key properties of the local optional supermartingales involved, in Sect. 2.2 we extend, by using an extension of the Doob maximal identity obtained in Lemma 2.15, the existing characterisation of finite honest times which avoid all stopping times to all finite honest times in Corollary 2.16, and we study the uniqueness of the multiplicative representation in Proposition 2.17. Finally, we provide a làglàd extension of the Skorokhod reflection lemma in Lemma 2.19, which we use in Proposition 2.20 to obtain the uniqueness of the additive representation.

In Sect. 3, under the extended definition of class-(\(\Sigma \)) given in Definition 3.2, we generalise existing results on semimartingales of class-\((\Sigma )\) from [6]. First, we show in Lemmas 3.4 and 3.5 that if X and Y are processes of class-(\(\Sigma \)) then \(X^+\), \(X^-\), |X| and XY are again of class-\((\Sigma )\), and that any positive optional submartingale of class-\((\Sigma )\) can be represented as the drawdown of some optional supermartingale. Second, we recover the Madan–Roynette–Yor option pricing formula in Theorem 3.9 and Theorem 3.11. Lastly, as an application of the results obtained in Sect. 3, we illustrate in Proposition 3.14 and Example 3.15 a method to construct examples of finite honest times for which the additive and multiplicative representation of the Azéma supermartingale can be retrieved from Theorem 2.9, but not from the results of [1, 18, 26, 28].

For the reader’s convenience, we collect in the appendix some useful definitions and results from the theory of enlargement of filtrations and stochastic calculus for optional semimartingales.

1 Notations and Terminologies

We work on a filtered probability space \((\Omega ,{\mathcal {A}},{\mathbb {F}},{\mathbb {P}})\), where \({\mathbb {F}}:=({{{\mathcal {F}}}}_t)_{t\ge 0}\) denotes a filtration satisfying the usual conditions; we set \({{{\mathcal {F}}}}_\infty := \bigvee _{t\ge 0} {{{\mathcal {F}}}}_t \subset {\mathcal {A}}\) and all martingales are taken to be càdlàg. The main tool used in this work is the stochastic calculus for optional semimartingales developed under the unusual conditions in Gal’čuk [10,11,12]. We stress that we do not make use of the full power of this calculus, as \({{\mathbb {F}}}\) is assumed to satisfy the usual conditions and all martingales are càdlàg. The case where \({{\mathbb {F}}}\) does not satisfy the usual conditions can potentially be of interest; however, it would first require a complete study of finite honest times under the unusual conditions, as our results in Sect. 2 rely on existing results for honest times which are all obtained under the usual conditions. On the other hand, the notion of optional semimartingale of class-\((\Sigma )\) and the corresponding results in Sect. 3 can most likely be extended easily to the unusual conditions, as the results of Gal’čuk already treat non-càdlàg martingales. However, we refrain from doing so here as there is a lack of concrete examples and applications.

Given a real-valued process X, as a convention, we set \(X_{0-} = 0\) and \(X_\infty = \lim _{t\rightarrow \infty } X_t\) a.s., if it exists. The running supremum and infimum processes of X are denoted by \({\overline{X}}_t := \sup _{s\le t} X_s\) and \({\underline{X}}_t := \inf _{s\le t} X_s\). Given a càdlàg non-decreasing function a on \({{\mathbb {R}}}_+\), we say that the measure da is carried on a set G if \(\int _{[0,\infty )}\mathbbm {1}_{G^c}(s) da(s) = 0\). The support of a, that is the smallest closed set in \({{\mathbb {R}}}_+\) which carries da, is given by \(S(a) := {\lbrace t \ge 0: \forall \epsilon>0,\, a(t - \epsilon ) < a(t + \epsilon ) \rbrace }\) and the left support of a is given by \(S^g(a):= {\lbrace t \ge 0: \forall \epsilon>0, \,a(t - \epsilon ) < a(t) \rbrace }\), see page 61, Chapter IV of Jeulin [17]. We denote by \({\mathcal {T}}\) the set of all stopping times and, for \(0\le s< t<\infty \), by \({\mathcal {T}}_{[s,t]}\) the set of all stopping times T such that \(s\le T\le t\). A stochastic process X is said to be of class-(D) if the family \({\lbrace X_T\mathbbm {1}_{\lbrace T<\infty \rbrace }, T \in {\mathcal {T}} \rbrace }\) is uniformly integrable, and it is said to be of class-(DL) if for every \(0<t <\infty \), the family \({\lbrace X_{T}, T \in {\mathcal {T}}_{[0,t]} \rbrace }\) is uniformly integrable. For any integrable variation process V, we denote the \({{\mathbb {F}}}\)-optional (predictable) projection of V by \(^{o}V\) (\(^{p}V\)) and the \({{\mathbb {F}}}\)-dual optional (predictable) projection of V by \(V^{o}\) (\(V^{p}\)). From Corollary 5.31 in He et al. [14], the process \(\,^oV - V^o\) is a uniformly integrable \({{\mathbb {F}}}\)-martingale starting at zero and \(\,^o(\Delta V) = \Delta V^o\) holds.
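To illustrate the support and the left support on a simple deterministic example (with the convention \(a(0-) = 0\)), take \(a(t) = t\wedge 1\); then

$$\begin{aligned} S(a) = [0,1] \qquad \mathrm {and} \qquad S^g(a) = (0,1], \end{aligned}$$

since \(a(t+\epsilon ) > a(t-\epsilon )\) for every \(t\in [0,1]\) and every small \(\epsilon > 0\), whereas \(a(0) = a(0-) = 0\) and a is constant on \([1,\infty )\).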

Under the usual conditions, an optional martingale is a càdlàg uniformly integrable martingale, an optional local martingale is a càdlàg local martingale, and any optional semimartingale X takes the form \(X = X_0 + M + A\), where M is a càdlàg local martingale and A is a làglàd process of finite variation. As a convention, we suppose that both M and A take the value zero at time zero. We shall write \(M^X\) and \(A^X\) whenever there is a need to stress the dependence on X. In this setting, stochastic integrals for optional semimartingales reduce to the usual stochastic integrals, and one needs only to take care in counting the jumps of the integral against the process of finite variation A. As an alternative, one can apply the \(\underline{\underline{\mathtt {A}}}\)-semimartingale calculus in Lenglart [20] by taking \(\underline{\underline{\mathtt {A}}} = {\mathcal {O}}({{\mathbb {F}}})\), i.e. the optional \(\sigma \)-algebra generated by \({{\mathbb {F}}}\); the Itô formula together with the solution to the stochastic exponential equation are readily available in Section VI therein. However, although it is more natural to apply the \(\underline{\underline{\mathtt {A}}}\)-semimartingale calculus, as the formulae developed in [20] are directly applicable under the usual conditions, we find the notations and presentation of Gal’čuk better suited for this work.

In the rest of this paper, unless otherwise stated, all stochastic processes under consideration are optional semimartingales, which are known to have finite left and right limits. Given any làglàd process X, we denote by \(X_-\) and \(X_+\) the left and right limits of X. The left and right jumps of X are denoted by \(\Delta X= X - X_-\) and \(\Delta ^+ X = X_+-X\), respectively. Any làglàd process of finite variation V can be decomposed into its right continuous part and its left continuous part by setting \(V^g := \sum _{s< \cdot } \Delta ^+ V_s\) and \(V^r := V - V^g\). The right continuous part \(V^r\) can be further decomposed into \(V^r = V^c + V^d\), where \(V^d := \sum _{s\le \cdot } \Delta V_s\) and \(V^c := V^r - V^d\). This gives us the decomposition

$$\begin{aligned} V = V^c + V^d+ V^g. \end{aligned}$$
(3)
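As a simple illustration of (3), consider the (hypothetical) làglàd function of finite variation \(V_t = t + \mathbbm {1}_{\{t\ge 1\}} + \mathbbm {1}_{\{t > 2\}}\), which has a left jump at time 1 and a right jump at time 2. Then

$$\begin{aligned} V^c_t = t, \qquad V^d_t = \mathbbm {1}_{\{t\ge 1\}}, \qquad V^g_t = \mathbbm {1}_{\{t > 2\}}. \end{aligned}$$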

Finally, we mention that, prior to Gal’čuk [11] and Lenglart [20], Mertens [23] introduced under the usual conditions the notion of strong optional supermartingale and generalised the Doob–Meyer decomposition to this setting. However, here we work with optional supermartingales as defined by Gal’čuk and point out that the process \(\widetilde{Z}\) is both an optional supermartingale and a strong optional supermartingale. Therefore, we will slightly abuse the terminology and call the Doob decomposition for optional supermartingales the Doob–Meyer–Mertens–Gal’čuk decomposition. For more details on the general theory of stochastic processes, the reader is referred to He et al. [14], and for results from the theory of enlargement of filtrations to Jeulin [17]. The reader can also refer to the recent book of Aksamit and Jeanblanc [2] for a modern English exposition of results from the theory of enlargement of filtrations.

2 The Additive and Multiplicative Representations

In this part of the paper, given a finite honest time \(\tau \), we study in Sect. 2.1 the existence of an additive and a multiplicative representation of the Azéma supermartingale associated with \(\tau \), and we study in Sect. 2.2 the uniqueness of such representations and provide a complete characterisation of finite honest times through a family of optional supermartingales.

Definition 2.1

A random time \(\tau \) is an honest time if, for all \(t\ge 0\), there exists an \({{{\mathcal {F}}}}_t\)-measurable random variable \(\tau _t\) such that \(\tau _t = \tau \) on the set \({\lbrace \tau < t \rbrace }\).

We first introduce some quantities that are specific to the study of random times. For an arbitrary random time \(\tau \), we set \(H:=\mathbbm {1}_{[\![\tau ,\infty )}\) and define

\(\bullet \) The supermartingale Z associated with \(\tau \), \(Z :=\,^{o}(\mathbbm {1}_{[\![0,\tau [\![})= 1- \,^{o}H\),

\(\bullet \) The supermartingale \({{\widetilde{Z}}}\) associated with \(\tau \), \(\widetilde{Z} :=\,^{o}(\mathbbm {1}_{[\![0,\tau ]\!]})= 1- \,^{o}(H_-)\),

\(\bullet \) The martingale \(m :=1-\left( \,^{o}H-H^o\right) \).

In the literature, the process Z is often termed the Azéma supermartingale. Here we shall call the process \(\widetilde{Z}\) the Azéma optional supermartingale, and the process \(1-Z\) the Azéma submartingale. From the above, one can deduce that the following relationships hold:

$$\begin{aligned} Z=m-H^o \quad \text {and} \quad {{\widetilde{Z}}}=m- (H^o)_- \end{aligned}$$
(4)

and we have \(\widetilde{Z} - Z = \Delta H^o\), \(\widetilde{Z}_+ = Z\) and \(\widetilde{Z}_- = Z_-\). From Theorem 5.22 [14], the dual optional projection \(H^o\) is of integrable variation since H is of integrable variation. At time zero, we have \(1-\widetilde{Z}_0 =0\) and \(1-\widetilde{Z}_{0+} = 1-Z_0 = (\Delta H^o)_0 = H^o_0\). We set \(R := \inf {\lbrace s: Z_s = 0 \rbrace }\) and, for a random time \(\tau \), from Lemma 1.51 in [2], we have \(\tau \le R\).
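For a quick illustration of these quantities in the simplest case, suppose that \(\tau = \sigma \) is a finite stopping time. Then \(H = \mathbbm {1}_{[\![\sigma ,\infty )}\) is itself optional and increasing, so \(\,^{o}H = H^o = H\), and therefore

$$\begin{aligned} m \equiv 1, \qquad Z = \mathbbm {1}_{[\![0,\sigma [\![}, \qquad \widetilde{Z} = \mathbbm {1}_{[\![0,\sigma ]\!]}, \qquad \widetilde{Z} - Z = \Delta H^o = \mathbbm {1}_{[\![\sigma ]\!]}, \end{aligned}$$

which is consistent with Example 2.2 and Example 2.10 below.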

The process \(\widetilde{Z}\) is a bounded positive optional supermartingale with its Doob–Meyer–Mertens–Gal’čuk decomposition given by \(\widetilde{Z} = m- (H^{o})_-\). For notational simplicity, and to be consistent with the notation later used in the additive decomposition of optional semimartingales, we set \(A:= (H^o)_-\). Note that A is a left continuous process to which one can apply the decomposition in (3) to obtain the additive decomposition \(\widetilde{Z} = m - A^c - A^g\).

2.1 Existence of the Representations

The first main result of this paper is presented in Theorem 2.9, where we obtain the existence of an additive and a multiplicative representation of \(\widetilde{Z}\) in terms of the drawdown and the relative drawdown of some optional supermartingale. To better understand the limitations of the existing results, we give below two counterexamples of finite honest times for which the Azéma supermartingale is not of the form given in (1) and (2), and thus not covered by the existing results in [1, 18, 26, 28, 30].

Example 2.2

Let \({{\mathbb {F}}}\) be the Brownian filtration and \(\sigma \) a finite \({{\mathbb {F}}}\)-stopping time. Then \(\sigma \) is an example of a finite honest time, or last passage time, for which the representations (1) and (2) do not hold, since otherwise this would contradict the fact that all martingales are continuous in \({{\mathbb {F}}}\).

Example 2.3

Let \({{\mathbb {F}}}\) be the Brownian filtration and \(\tau := \sup \{s: X_s = {\overline{X}}_s\}\) where \(X_t = e^{-\sigma ^2t/2 + \sigma W_t}\). The process X is a positive local martingale for which \(X_\infty = 0\), and by using the fact that \(\sup _{t\ge 0 } \left( \sigma W_t-\sigma ^2t/2\right) \sim \exp (1)\), one can show that \({{\mathbb {P}}}(\tau > t\,|\,{{{\mathcal {F}}}}_t) = X_t/{\overline{X}}_t\), which is of the form given in (2). On the other hand, it can be shown, by checking Definition 2.1 directly, that for any \(T\in {{\mathbb {R}}}_+\) the random time \(\tau ' = \tau \vee T\) is a finite honest time and

$$\begin{aligned} {{\mathbb {P}}}(\tau ' > t \,|\, {{{\mathcal {F}}}}_t) = 1- {{\mathbb {P}}}(\tau \vee T \le t \,|\, {{{\mathcal {F}}}}_t) = 1-\left( 1-X_t/{\overline{X}}_t\right) \mathbbm {1}_{\{T \le t\}}\\ {{\mathbb {P}}}(\tau ' \ge t \,|\, {{{\mathcal {F}}}}_t) = 1- {{\mathbb {P}}}(\tau \vee T< t \,|\, {{{\mathcal {F}}}}_t) = 1-\left( 1-X_t/{\overline{X}}_t\right) \mathbbm {1}_{\{T < t\}} \end{aligned}$$

which are both discontinuous at T. Clearly \({{\mathbb {P}}}(\tau ' > t \,|\, {{{\mathcal {F}}}}_t)\) cannot be of the form given in (2), since otherwise this would contradict the fact that all martingales are continuous in the Brownian filtration.
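For completeness, let us sketch the identity \({{\mathbb {P}}}(\tau > t\,|\,{{{\mathcal {F}}}}_t) = X_t/{\overline{X}}_t\) used above. Conditionally on \({{{\mathcal {F}}}}_t\), the supremum \(\sup _{s\ge t} X_s\) has the law of \(X_t/U\), where U is uniformly distributed on (0, 1) and independent of \({{{\mathcal {F}}}}_t\) (this is again the Doob maximal identity), so that, up to a null set,

$$\begin{aligned} {{\mathbb {P}}}(\tau> t\,|\,{{{\mathcal {F}}}}_t) = {{\mathbb {P}}}\Big (\sup _{s\ge t} X_s \ge {\overline{X}}_t \,\Big |\,{{{\mathcal {F}}}}_t\Big ) = {{\mathbb {P}}}\big (X_t/U \ge {\overline{X}}_t\,\big |\,{{{\mathcal {F}}}}_t\big ) = X_t/{\overline{X}}_t. \end{aligned}$$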

Definition 2.4

A random time \(\tau \) is said to avoid all \({{\mathbb {F}}}\)-stopping times if for each \({{\mathbb {F}}}\)-stopping time \(\sigma \), \({{\mathbb {P}}}(\tau = \sigma <\infty ) = 0\).

The finite honest times considered in [1, 18, 26, 28, 30] are assumed to avoid all \({{\mathbb {F}}}\)-stopping times, while in Example 2.2 and Example 2.3, the honest time \(\sigma \) is a stopping time and the honest time \(\tau '\) does not avoid the deterministic time \(T \in {{\mathbb {R}}}_+\). We mention that, as the process \(\widetilde{Z}\) is not càdlàg, the standard càdlàg Skorokhod reflection lemma cannot be applied to obtain the existence and uniqueness of the additive representation as done in [26], and the standard Doob maximal identity cannot be applied to obtain the multiplicative representation as done in [1, 18, 26, 28].

To find a multiplicative representation of \(\widetilde{Z}\), one needs to first find the multiplicative decomposition of \(\widetilde{Z}\). However, in general, the process \(\widetilde{Z}\) is not càdlàg and the multiplicative decomposition of \(\widetilde{Z}\) is not available in the literature. Hence, we study below the multiplicative decomposition of \(\widetilde{Z}\) under the assumption that \(Z>0\).

Lemma 2.5

Let \(\tau \) be a finite honest time such that \(Z > 0\). Then for all \(t\ge 0\) the process

$$\begin{aligned} Y_t&:= \int _{(0,t]} \widetilde{Z}^{-1}_sdA^c_s + \int _{[0,t)} \widetilde{Z}^{-1}_{s+} dA^{g}_{s+} \end{aligned}$$
(5)

is a finite làglàd increasing process, and the optional stochastic exponential of Y, denoted by \(\widetilde{D} = {\mathcal {E}}(Y)\), is an increasing process such that both \(d\widetilde{D}^c\) and \(d\widetilde{D}^g_+\) are carried on the set \(\{\widetilde{Z} = 1\}\).

Proof

From (5), we have \(Y^c_t = \int _{(0,t]} \widetilde{Z}^{-1}_sdA^c_s\) and \(Y^g_t =\int _{[0,t)} \widetilde{Z}^{-1}_{s+} dA^{g}_{s+}\), and the aim is to show that these integrals are finite for all \(t\ge 0\). We stress that the domain of integration in the second integral is [0, t) and thus the process Y, once shown to be finite for all \(t\ge 0\), is an increasing làglàd process. By (iii) of Proposition A.17, both the processes \(Y^c\) and \(Y^g\) are increasing and stopped after \(\tau \). For both \(Y^c\) and \(Y^g\) to be well-defined finite increasing processes, we need to check that they are finite before \(\tau \) and have a finite right limit at \(\tau \).

We first note that, by Lemmas A.16 (i) and A.18, the measure \(dA_+\) is carried on \({\lbrace \widetilde{Z}= 1 \rbrace } \subset \llbracket \,0, \tau \,\rrbracket \); therefore, we have \(Y^c = A^c < \infty \). Second, for \(Y^g\), we see that for \(t>\tau \)

$$\begin{aligned} \int _{[0,t)} \widetilde{Z}^{-1}_{s+} dA^{g}_{s+} = \sum _{0\le s\le \tau }Z^{-1}_s\Delta ^+ A^{g}_s < \infty , \end{aligned}$$

which is due to the fact that, for almost all \(\omega \), the integrand \(Z^{-1}\) is bounded on \(\llbracket \,0, \tau \,\rrbracket \), since Z is bounded away from zero there as \(R = \inf {\lbrace s: Z_s = 0 \rbrace } = \infty \) (see Theorem 2.62 [14]), and \(A^g < \infty \).

Having shown that Y is a finite increasing process, by using Theorem B.24, we define \(\widetilde{D}\) as the optional stochastic exponential of Y. That is, \(\widetilde{D}\) is the unique solution to the following equation

$$\begin{aligned} \widetilde{D}_t&= 1 + \int _{(0,t]}\widetilde{D}_{s-} dY^{c}_s + \int _{[0,t)}\widetilde{D}_s dY^{g}_{s+}\nonumber \\&= 1+ \int _{(0,t]}\widetilde{D}_{s-}\widetilde{Z}^{-1}_s dA^c_s + \int _{[0,t)}\widetilde{D}_s\widetilde{Z}^{-1}_{s+} dA^{g}_{s+} =: 1+ \widetilde{D}^c_t + \widetilde{D}^g_t \end{aligned}$$
(6)

and it is clear from (6) and Lemma A.18 that \(d\widetilde{D}^c\) and \(d\widetilde{D}^g_+\) are carried on the set \(\{\widetilde{Z} = 1\}\).

\(\square \)

Lemma 2.6

Suppose that \(Z > 0\). Then the process \(\widetilde{M} = \widetilde{D}\widetilde{Z}\) is a càdlàg local martingale.

Proof

By an application of the Itô formula given in Theorem B.22,

$$\begin{aligned} \widetilde{D}_t\widetilde{Z}_t - \widetilde{D}_0\widetilde{Z}_0&= \int _{(0,t]}\widetilde{D}_{s-}d\widetilde{Z}^r_s + \int _{[0,t)}\widetilde{D}_s d\widetilde{Z}^g_{s+}\\&\quad + \int _{(0,t]}\widetilde{Z}_{s-}d\widetilde{D}^r_s + \int _{[0,t)}\widetilde{Z}_{s} d\widetilde{D}^g_{s+} + \sum _{0\le s<t} \Delta \widetilde{D}^g_s \Delta ^+\widetilde{Z}_s + \sum _{0< s\le t} \Delta \widetilde{D}^r_s \Delta \widetilde{Z}_s\\&= \int _{(0,t]}\widetilde{D}_{s-}d\widetilde{Z}^r_s + \int _{[0,t)}\widetilde{D}_s d\widetilde{Z}^g_{s+} + \int _{(0,t]}\widetilde{Z}_sd\widetilde{D}^c_s + \int _{[0,t)}\widetilde{Z}_{s+} d\widetilde{D}^g_{s+}\\&= \int _{(0,t]}\widetilde{D}_{s-}dm_s \end{aligned}$$

where in the last equality, we have used the fact that \(\widetilde{Z}^r = m - A^c\), \(\widetilde{Z}^g = -A^{g}\) and (6). \(\square \)

From Lemmas 2.5 and 2.6 we see that, under the assumption that \(Z >0\), the multiplicative decomposition of \(\widetilde{Z}\) is given by \(\widetilde{Z} = \widetilde{M}/\widetilde{D}\). Note that, by using Theorem B.24, we see that the optional stochastic exponential \(\widetilde{D}\) is given explicitly by

$$\begin{aligned} \widetilde{D} = e^{Y^c}e^{Y^g}\prod _{0\le s< \cdot } (1+\Delta ^+ Y_s)e^{\Delta ^+ Y_s}. \end{aligned}$$

The process \(\widetilde{D}\) can then be further decomposed multiplicatively into its continuous and left continuous parts. That is \(\widetilde{D} = D^cD^g\), where

$$\begin{aligned} D^c&:= e^{Y^c} \quad \mathrm {and}\quad D^g := e^{Y^g}\prod _{0\le s< \cdot } (1+\Delta ^+ Y_s)e^{\Delta ^+ Y_s}. \end{aligned}$$
(7)

Here we recall that

$$\begin{aligned} Y^c_t = \int _{(0,t]} \widetilde{Z}^{-1}_sdA^c_s = A^c \quad \mathrm {and} \quad Y^g_t =\int _{[0,t)} \widetilde{Z}^{-1}_{s+} dA^{g}_{s+}. \end{aligned}$$

From the form of \(Y^c\) and \(Y^g\), we see that \(D^c\) and \(D^g\) are strictly positive increasing processes such that \(dD^c\) and \(dD^g_+\) are carried on the set \({\lbrace \widetilde{Z} = 1 \rbrace }\).

We are now in a position to study the additive and multiplicative representations. The key idea is that, instead of the local martingales m and \(\widetilde{M}\), we consider the local optional supermartingales

$$\begin{aligned} n : = m - A^{g} = \widetilde{Z} + A^c \qquad \mathrm {and} \qquad N: = \widetilde{Z}D^c = \widetilde{Z} e^{A^c}. \end{aligned}$$
(8)

We stress here that the process \(N: = \widetilde{Z}D^c\) is well defined, even if Z is not strictly positive, since \(D^c = e^{A^c}\) is always well defined. In general, the processes n and N are not necessarily càdlàg on \(\llbracket \,0 ,\tau \,\rrbracket \) and due to Proposition A.17 they are only càdlàg on \(\,\rrbracket \tau ,\infty \llbracket \,\). We remark that since \(A^{g}\) has only positive jumps, the processes \({\overline{n}}\) and \({\overline{N}}\) must be càdlàg and hence optional processes. Therefore, the sets \({\lbrace n= {\overline{n}} \rbrace }\) and \({\lbrace N = {\overline{N}} \rbrace }\) are optional sets.

Lemma 2.7

For any finite honest time \(\tau \), we have \({\lbrace n = {\overline{n}} \rbrace } = {\lbrace \widetilde{Z} = 1 \rbrace } = {\lbrace N = {\overline{N}} \rbrace }\).

Proof

From Proposition A.17, we observe that

$$\begin{aligned} \tau = \sup {\lbrace s: \widetilde{Z}_s = 1 \rbrace } = \sup {\lbrace s: n_s=1+ A^c_{s} \rbrace }. \end{aligned}$$
(9)

From the inequality \(\widetilde{Z} \le 1\), we deduce that \(n \le 1+ A^c\) and, since \(A^c\) is non-decreasing, \({\lbrace n=1+ A^c \rbrace } \subseteq {\lbrace n={{\overline{n}}} \rbrace }\). Using (9) and the fact that the process \(1+A\) is constant after \(\tau \) (from Lemma A.16 (i) and Lemma A.18), the process \({\overline{n}}\) must be equal to the constant process \(1+A^c\) after \(\tau \). This, together with the fact that \({\lbrace \widetilde{Z} = 1 \rbrace }\) is contained in \(\llbracket \,0, \tau \,\rrbracket \), gives

$$\begin{aligned} {\lbrace n={\overline{n}} \rbrace }\,\cap \,\,\rrbracket \tau ,\infty \llbracket \,= {\lbrace \widetilde{Z}=1 \rbrace }\,\cap \,\,\rrbracket \tau ,\infty \llbracket \,= \emptyset . \end{aligned}$$

This implies \({\lbrace n=1+ A^c \rbrace } \subseteq {\lbrace n={{\overline{n}}} \rbrace } \subseteq \llbracket \,0,\tau \,\rrbracket \). By Lemma A.16, the set \({\lbrace \widetilde{Z} = 1 \rbrace }\) is the largest optional set contained in \(\llbracket \,0, \tau \,\rrbracket \), from which we conclude that \({\lbrace n= {\overline{n}} \rbrace } = {\lbrace \widetilde{Z}= 1 \rbrace }\). Similar arguments show that \({\lbrace N= {\overline{N}} \rbrace } = {\lbrace \widetilde{Z}= 1 \rbrace }\). \(\square \)

Remark 2.8

The set equality \({\lbrace n = {\overline{n}} \rbrace } = {\lbrace n = 1+A^c \rbrace }\) implies that \({\overline{n}} = 1+A^c\) on the set \({\lbrace \widetilde{Z} = 1 \rbrace }\). Intuitively, the equality \({\overline{n}} = 1+A^c\) should also hold everywhere, since they have the same initial condition and both \(d{\overline{n}}\) and \(dA^c\) are carried on the set \({\lbrace \widetilde{Z} = 1 \rbrace }\).

To prove the above observation and therefore the additive and multiplicative representation of \(\widetilde{Z}\), we make use of the following time change process,

$$\begin{aligned} g_t&= \sup {\lbrace s \le t: \widetilde{Z}_s = 1 \rbrace }. \end{aligned}$$

Note that \(\widetilde{Z}_\tau = 1\), but in general, it is not true that \(\widetilde{Z}_{g_t} = 1\).

Theorem 2.9

Let \(\tau \) be a finite honest time.

(i) An additive representation of \(\widetilde{Z}\) is given by

$$\begin{aligned} \widetilde{Z}&= 1+ n - {\overline{n}}, \end{aligned}$$

where \(n = \widetilde{Z} + A^{c}\) and \(1+A^c = {\overline{n}}\).

(ii) A multiplicative representation of \(\widetilde{Z}\) is given by

$$\begin{aligned} \widetilde{Z}&= N/{\overline{N}} \end{aligned}$$

where \(N = \widetilde{Z}D^c\) and \(D^c =e^{A^c}= {\overline{N}}\).

Proof

The goal of the proof is to show that \({\overline{n}} = 1+A^c\) and \({\overline{N}} = e^{A^c}\). First, the processes \({\overline{n}}\) and \(1+A^c\) have the same initial condition and it is clear that \({\overline{n}} \le 1+A^c\). To show the reverse inequality, we must consider two cases. Given a finite stopping time T, we first suppose that \((\omega ,g_T(\omega )) \in {\lbrace \widetilde{Z}=1 \rbrace }\); then, by using the facts that \(A^c\) is continuous, \(dA^c\) is carried on the set \(\{\widetilde{Z} = 1\}\) and, by Lemma 2.7, \({\lbrace n = {\overline{n}} \rbrace } = {\lbrace n = 1+A^c \rbrace } = {\lbrace \widetilde{Z} = 1 \rbrace }\), we have \(1+ A^c_T = 1+ A^c_{g_T} = {\overline{n}}_{g_T} \le {\overline{n}}_T\). On the other hand, suppose that \((\omega ,g_T(\omega ))\not \in {\lbrace \widetilde{Z} = 1 \rbrace }\) but belongs to the right closure of \({\lbrace \widetilde{Z} = 1 \rbrace }\); then there exists an increasing sequence of random times \((g^n_T)_{n\in {{\mathbb {N}}}}\) in \({\lbrace \widetilde{Z} = 1 \rbrace }\) such that \(g^n_T\uparrow g_T\) a.s. Then, similarly to the previous case, we have \(1+ A^c_T = 1+ A^c_{g_T} = {\overline{n}}_{g_T-} \le {\overline{n}}_T\), which implies \(1+ A^c = {\overline{n}}\).

For the multiplicative representation, we first observe that \(dD^{c}\) is carried on the set \({\lbrace \widetilde{Z}= 1 \rbrace } = {\lbrace N= D^c \rbrace }\). Then one can repeat the arguments used in the proof of the additive representation with N, \({\overline{N}}\) and \(D^c\) in place of n, \({\overline{n}}\) and \(1+ A^c\) to conclude that \({\overline{N}}\) is equal to \(D^c\). \(\square \)
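To see how Theorem 2.9 relates to the representations (1) and (2), suppose, as in [26, 28], that \(\tau \) avoids all stopping times, so that \(H^o\), and hence A, is continuous. Then \(\widetilde{Z} = Z\), \(A^g = 0\) and the processes in (8) reduce to

$$\begin{aligned} n = m \qquad \mathrm {and} \qquad N = Z e^{A^c}, \end{aligned}$$

so that Theorem 2.9 gives \(Z = 1+ m - {\overline{m}}\) and \(Z = N/{\overline{N}}\); moreover, by the computation in Remark 2.14 below, N is then a local martingale, and one recovers (1) and (2).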

Example 2.10

To illustrate our main result obtained in Theorem 2.9, let us revisit below the honest times given in Examples 2.2 and 2.3. We recall from Example 2.3 that the Azéma optional supermartingale \(\widetilde{Z}\) associated with the finite honest time \(\tau '\) is

$$\begin{aligned} \widetilde{Z}_t = {{\mathbb {P}}}(\tau ' \ge t \,|\, {{{\mathcal {F}}}}_t) = 1-\left( 1-X_t/{\overline{X}}_t \right) \mathbbm {1}_{\{T < t\}}. \end{aligned}$$

By applying Itô’s formula to \(\widetilde{Z}\), using the uniqueness of the Doob–Meyer–Mertens–Gal’čuk decomposition and the fact that \(d{\overline{X}}\) is carried on the set \(\{X = {\overline{X}}\}\), one can deduce that

$$\begin{aligned} A^c_{t} = \ln ({{\overline{X}}}_{t\vee T}) - \ln ({{\overline{X}}}_{T}). \end{aligned}$$

From (7) and (8) we have \(D^c_t = e^{A^c_t} = {\overline{X}}_{t\vee T}/{\overline{X}}_{T}\) and \(N_t = \widetilde{Z}_t D^c_t = \left( \mathbbm {1}_{\{T \ge t\}} + (X_t/{\overline{X}}_T)\mathbbm {1}_{\{T < t\}}\right) \). From the form of N we clearly have \({\overline{N}}_t = {\overline{X}}_{t\vee T}/{\overline{X}}_{T} = D^c_t\) and therefore \(\widetilde{Z}_t = N_t/{\overline{N}}_t\) for all \(t\ge 0\).

On the other hand, the multiplicative decomposition for the Azéma optional supermartingale associated with a stopping time \(\sigma \), given in Example 2.2, is trivial in the sense that \(\widetilde{Z} = \mathbbm {1}_{\llbracket \,0, \sigma \,\rrbracket } = 1 - A^g\) and clearly \(\widetilde{Z} = N/{\overline{N}}\) where \(N = \widetilde{Z} =\mathbbm {1}_{\llbracket \,0, \sigma \,\rrbracket }\) is an optional supermartingale.

Remark 2.11

To obtain non-trivial examples, such as \(\tau '\) studied in Example 2.3, we need to find finite honest times for which both \(A^c\) and \(A^g\) are nonzero. Honest times with this property can be constructed by taking known examples of finite honest times \(\tau \) which avoid all stopping times and considering the honest time \(\tau \vee \sigma \), where \(\sigma \) is any finite stopping time. We will discuss this type of construction in more detail in Example 3.15, once we have developed some generic tools in Sect. 3.

2.2 Characterisation of Honest Times & Uniqueness of the Representations

The aim of this section is twofold. We first extend the Doob maximal identity in Lemma 2.15 in order to provide, in Corollary 2.16, a characterisation of finite honest times using optional supermartingales of class \({\mathcal {N}}_0\), defined in Definition 2.12. Then we prove the uniqueness of the multiplicative representation in Proposition 2.17 and, by using a làglàd variant of the Skorokhod reflection lemma obtained in Lemma 2.19, we prove the uniqueness of the additive representation in Proposition 2.20.

Definition 2.12

A local optional supermartingale N is said to belong to the class \({\mathcal {N}}_0\) if

  1. (i)

    The process N is non-negative and \(\lim _{t\rightarrow \infty } N_t = 0\),

  2. (ii)

    The running supremum \({\overline{N}}\) is continuous,

  3. (iii)

    The graph of \(\tau := \sup {\lbrace s: N_s = {\overline{N}}_s \rbrace }\) belongs to \(\{N = {\overline{N}}\}\) or equivalently \(N_{\tau } = {\overline{N}}_{\tau }\),

  4. (iv)

    The process N exhibits the decomposition \(N = N_0 + M^N - A^N\) where \(M^N\) is a local martingale and \(A^N\) is a left continuous increasing process such that \(dA^N_+\) is carried on \({\lbrace N = {\overline{N}} \rbrace }\).

The class \({\mathcal {N}}_0\) extends the notion of local martingales of class \({\mathcal {C}}_0\) and class \({\mathcal {M}}_0\), where the class \({\mathcal {M}}_0\) consists of non-negative local martingales with continuous running supremum which converge to zero at infinity, see page 616 of [28], and \({\mathcal {C}}_0\) consists of continuous local martingales of class \({\mathcal {M}}_0\).

Definition 2.13

A local optional supermartingale N is said to belong to the class \({\mathcal {N}}^*_0\) if \(N \in {\mathcal {N}}_0\), \(N_0 = 1\) and \(A^N\) is a pure jump process.

Remark 2.14

The process \(N :=\widetilde{Z}D^c\) given in Theorem 2.9 belongs to the class \({\mathcal {N}}_0^*\). The process \(N = \widetilde{Z}D^c\) is clearly non-negative and \(\lim _{t\rightarrow \infty } N_t = 0\), since \(\tau \) is finite and hence \(\lim _{t\rightarrow \infty } \widetilde{Z}_t = 0\). We know from Proposition A.17 that \(\widetilde{Z}_\tau = N_\tau /{\overline{N}}_\tau = 1\) and, by applying the Itô formula in Theorem B.22 to \(N = \widetilde{Z}D^c\), we have

$$\begin{aligned} D_t^c\widetilde{Z}_t&= 1+ \int _{(0,t]}D^c_{s-}d\widetilde{Z}^r_s + \int _{[0,t)} D_s^c d\widetilde{Z}^g_{s+} + \int _{(0,t]}\widetilde{Z}_sd\widetilde{D}^c_s \\&= 1+ \int _{(0,t]} e^{A^c_s} dm_s - \int _{[0,t)}e^{A^c_s} dA^g_{s+}. \end{aligned}$$

From the above, we see that \(A^N\) is a left continuous increasing pure jump process and \(dA^N_+\) is carried on \(\{N = {\overline{N}}\}\) since \(dA^g_+\) is carried on \(\{\widetilde{Z}= 1\}\) which is equal to \(\{N = {\overline{N}}\}\) by Lemma 2.7.

Lemma 2.15

(Variant of the Doob maximal identity) Suppose that N is a local optional supermartingale which belongs to \({\mathcal {N}}_0\) and let \(\tau := \sup {\lbrace s: N_s = {\overline{N}}_s \rbrace }\). Then \(\tau \) is a finite honest time such that its Azéma optional supermartingale is given by \(\widetilde{Z} = N/{\overline{N}}\).

Proof

It is clear that \(\tau \) is a finite last passage time and hence a finite honest time. Therefore, we need only to compute \(\widetilde{Z}\) associated with \(\tau \). Let us consider the process \(Y = 1- N/{\overline{N}}\), which is positive and bounded by one. Hence, Y is a positive optional submartingale of class-(D) and \(Y_\infty = 1\). By applying the Itô formula to Y and using the uniqueness of the Doob–Meyer–Mertens–Gal’čuk decomposition of Y given in Theorem B.25, we can conclude that

$$\begin{aligned} M^Y_t = -\int _{(0,t]}{\overline{N}}^{-1}_s dM^N_s \quad \mathrm {and} \quad A^Y_t = \int _{[0,t)}{\overline{N}}^{-1}_s dA^N_s + \ln ({\overline{N}}_t). \end{aligned}$$

where \(M^Y\) is an optional martingale and hence a uniformly integrable martingale, and \(dA^Y_+\) is carried on \(\{Y = 0 \}= \{N = {\overline{N}} \}\). Let \(\gamma _t := \inf {\lbrace s\ge t: N_s = {\overline{N}}_s \rbrace } = \inf {\lbrace s\ge t: Y_s = 0 \rbrace }\), which by convention takes the value infinity if the set is empty. We observe that for every stopping time T,

$$\begin{aligned} Y_{\gamma _T} = Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau< T \rbrace } + Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau \ge T \rbrace } = \mathbbm {1}_{\lbrace \tau < T \rbrace }. \end{aligned}$$
(10)

To obtain the second equality above, we notice that \(Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau< T \rbrace } = Y_\infty \mathbbm {1}_{\lbrace \tau< T \rbrace } = \mathbbm {1}_{\lbrace \tau < T \rbrace } \) and

$$\begin{aligned} Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau \ge T \rbrace } = Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau > T \rbrace } + Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau = T \rbrace }. \end{aligned}$$

On the set \(\{\tau > T\}\), the equality \(N_{\gamma _T} = {{\overline{N}}}_{\gamma _T}\) clearly holds for points \((\omega ,\gamma _T(\omega )) \in {\lbrace N = {\overline{N}} \rbrace }\). Suppose now that \((\omega ,\gamma _T(\omega ))\not \in {\lbrace N = {\overline{N}} \rbrace }\) but is in the left closure of the set \(\{N = {\overline{N}}\}\). That is, there exists a decreasing sequence of stopping times \((\gamma ^n_T)_{n\in {{\mathbb {N}}}}\) such that \(\gamma ^n_T \downarrow \gamma _T\) almost surely and, for every n, \((\omega , \gamma ^n_T(\omega )) \in {\lbrace N = {\overline{N}} \rbrace }\). This together with the continuity of \({\overline{N}}\) implies that \(N_{\gamma _T+} = {\overline{N}}_{\gamma _T} > N_{\gamma _T}\). However, since \(\Delta ^+ N = -\Delta ^+ A^N\le 0\), we must have \(N_{\gamma _T} \ge N_{\gamma _T+}\), which is a contradiction. On the set \(\{\tau = T\}\), we have

$$\begin{aligned} N_{\gamma _T} = N_{\tau }\mathbbm {1}_{\{N_\tau = {{\overline{N}}}_\tau \}} + N_\infty \mathbbm {1}_{\{N_\tau < {{\overline{N}}}_\tau \}} = {\overline{N}}_{T} = {\overline{N}}_{\gamma _T}, \end{aligned}$$

where we have used the fact that \(N_\infty = 0\), \(\gamma _T = \gamma _\tau = \tau = T\) on the set \(\{N_\tau = {{\overline{N}}}_\tau \}\) which is assumed to be of probability one, and the fact that \({\overline{N}}\) is continuous. This shows that \(Y_{\gamma _T}\mathbbm {1}_{\lbrace \tau \ge T \rbrace } = 0\).

By taking the \({{{\mathcal {F}}}}_T\)-conditional expectation of both sides of (10), we obtain

$$\begin{aligned} {\mathbb {E}}(Y_{\gamma _T}\,|\,{{{\mathcal {F}}}}_T) = 1-\widetilde{Z}_T. \end{aligned}$$

Using the fact that \(M^Y\) is uniformly integrable, we have from the Doob optional sampling theorem, \({\mathbb {E}}(Y_{\gamma _T}\,|\,{{{\mathcal {F}}}}_T) = M^Y_T + {\mathbb {E}}(A^Y_{\gamma _T}\,|\,{{{\mathcal {F}}}}_T)\). Finally, as \(A^Y\) is left-continuous and \(dA^Y_+\) is carried on \(\{N = {\overline{N}}\}\), we have \(A^Y_{\gamma _T} = A^Y_{T}\) and hence \(\widetilde{Z}_T = N_T/{\overline{N}}_T\) for all finite stopping times T. \(\square \)

Corollary 2.16

Suppose that \(\tau \) is a finite honest time. Then there exists an optional supermartingale N of class \({\mathcal {N}}_0\) such that \(\tau \) is the end of the optional set \({\lbrace N = {\overline{N}} \rbrace }\) and \(\widetilde{Z} = N/{\overline{N}}\). Conversely, given a local optional supermartingale N of class \({\mathcal {N}}_0\), the end of the optional set \({\lbrace N = {\overline{N}} \rbrace }\) is a finite honest time such that \(\widetilde{Z} = N/{\overline{N}}\).

Proof

It is sufficient to combine Lemma 2.7, Theorem 2.9 and Lemma 2.15. \(\square \)

The above corollary gives a characterisation of finite honest times through local optional supermartingales. However, given a finite honest time \(\tau \), the class \({\mathcal {N}}_0\) is too big for the multiplicative representation to be unique. It is not hard to see that \(\widetilde{Z} = N/{\overline{N}}\) where N can be either \(\widetilde{Z}\), \(\widetilde{Z}D^c\), \(k\widetilde{Z}\) or \(k\widetilde{Z}D^c\) for any \(k> 0\), all of which belong to the class \({\mathcal {N}}_0\). The class \({\mathcal {N}}^*_0\) is introduced to restrict our attention to the case \(k = 1\) and to remove, whenever possible, the trivial candidate \(\widetilde{Z}\).
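To verify the non-uniqueness claim above, note that since \(\widetilde{Z}_0 = 1\) and \(\widetilde{Z} \le 1\), we have \(\overline{\widetilde{Z}} \equiv 1\), and the running supremum is positively homogeneous, so that for every \(k>0\),

$$\begin{aligned} \frac{\widetilde{Z}}{\overline{\widetilde{Z}}} = \widetilde{Z} \qquad \mathrm {and} \qquad \frac{k\widetilde{Z}D^c}{\overline{k\widetilde{Z}D^c}} = \frac{k N}{k{\overline{N}}} = \widetilde{Z}, \end{aligned}$$

where \(N = \widetilde{Z}D^c\) is the process from Theorem 2.9.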

Proposition 2.17

Suppose that \(\tau \) is a finite honest time. Then there exists a unique optional supermartingale N of class \({\mathcal {N}}^*_0\) such that \(\tau \) is the end of the optional set \({\lbrace N = {\overline{N}} \rbrace }\) and \(\widetilde{Z} = N/{\overline{N}}\).

Proof

We need only to show the uniqueness of the process N inside the class \({\mathcal {N}}^*_0\). Suppose that there exists another process \(X \in {\mathcal {N}}^*_0\) such that \(\widetilde{Z} = N/{\overline{N}} = X/{\overline{X}}\), and that the decompositions of N and X are given by \(N = N_0 + M^N - A^N\) and \(X = X_0 + M^X - A^X\), where \(A^N\) and \(A^X\) are left continuous increasing pure jump processes. Then by the làglàd Itô formula and the fact that \(d{\overline{N}}\) and \(d{\overline{X}}\) are carried on the sets \(\{N = {\overline{N}}\}\) and \(\{X = {\overline{X}}\}\), respectively, we have

$$\begin{aligned} \widetilde{Z}_t&= 1+ \int _{(0,t]}{\overline{X}}^{-1}_s dM^X_s - \int _{[0,t)}{\overline{X}}^{-1}_s dA^X_s - \ln ({\overline{X}}_t) \\&= 1+ \int _{(0,t]}{\overline{N}}^{-1}_s dM^N_s - \int _{[0,t)}{\overline{N}}^{-1}_s dA^N_s - \ln ({\overline{N}}_t). \end{aligned}$$

From the uniqueness of the Doob–Meyer–Mertens–Gal’čuk decomposition of \(\widetilde{Z} = n - A\), we deduce that

$$\begin{aligned} n_t&= 1+ \int _{(0,t]}{\overline{X}}^{-1}_s dM^X_s = 1+ \int _{(0,t]}{\overline{N}}^{-1}_s dM^N_s\\ A_t&= \int _{[0,t)}{\overline{X}}^{-1}_s dA^X_{s+} + \ln ({\overline{X}}_t) = \int _{[0,t)}{\overline{N}}^{-1}_s dA^N_{s+} + \ln ({\overline{N}}_t). \end{aligned}$$

From the continuity of \({\overline{X}}\) and \({\overline{N}}\) and the fact that \(A^X\) and \(A^N\) are left continuous pure jump processes, we deduce that \(A^c_t = \ln ({\overline{N}}_t) = \ln ({\overline{X}}_t)\) which implies that \({\overline{N}}_t = {\overline{X}}_t\) and \(A^g_t = \int _{[0,t)}{\overline{N}}^{-1}_s dA^N_{s+} = \int _{[0,t)}{\overline{X}}^{-1}_s dA^X_{s+}\) and consequently \(A^N = A^X\). Finally, since \({\overline{N}}_t = {\overline{X}}_t\), we can conclude that \(M^X= M^N\) and hence \(X - X_0 = N - N_0\), where \(X_0 = N_0 = 1\). \(\square \)

Remark 2.18

The class of finite honest times for which the process \(N \in {\mathcal {N}}_0^*\) is the trivial candidate \(\widetilde{Z}\) is exactly the class of finite thin honest times, that is, honest times whose graph is contained in the disjoint union of the graphs of a family of \({{\mathbb {F}}}\)-stopping times, or equivalently, for which the process \(H^o_- = A\) is an increasing pure jump process (see Definition 1.1 and Theorem 1.4 in [4]). In this case, we have \(A = A^g\), \(A^c = 0\), \(D^c = 1\) and \(N= \widetilde{Z}D^c = \widetilde{Z}\).

The counterexample mentioned in the introduction, given by Acciaio and Penner [1], is an example of a thin honest time, and the multiplicative representation holds trivially with \(N = \widetilde{Z}\). As this example is quite involved, we refer the reader to Proposition 4.8 of [3]. For a simpler illustration, let us return to Example 2.2 and consider a finite \({{\mathbb {F}}}\)-stopping time \(\sigma \), which is a finite thin honest time with \(\widetilde{Z} = \mathbbm {1}_{\llbracket \,0, \sigma \,\rrbracket }\). Suppose that there exists \(N \in {\mathcal {N}}_0\) such that \(\widetilde{Z} = N/{\overline{N}}\); then we can deduce from the equality \({\overline{N}} \mathbbm {1}_{\llbracket \,0, \sigma \,\rrbracket } = N\) that N must be non-decreasing on \(\llbracket \,0, \sigma \,\rrbracket \) and zero on \(\,\rrbracket \sigma , \infty \,\rrbracket \). In fact, the process N must be \(k\mathbbm {1}_{\llbracket \,0, \sigma \,\rrbracket }\) for some \(k\ge 0\), since N is a local optional supermartingale. If we restrict ourselves to the class \({\mathcal {N}}_0^*\), then it is necessary that \(k = 1\) and \(N = \widetilde{Z}\).

We now investigate the uniqueness of the additive representation of \(\widetilde{Z}\). To do this, we provide a làglàd variant of the Skorokhod reflection lemma which is not available in the literature.

Lemma 2.19

(Variant of the Skorokhod reflection lemma) Let y be a real-valued làglàd function on \([0,\infty )\) such that \(y(0) = 0\) and its running infimum \({\underline{y}}\) is continuous. Then, there exists a unique pair (z, a) on \([0,\infty )\), where \(a(t)=\sup _{s \le t}-y(s)\), satisfying the following conditions:

  1. (i)

    \(z(t) = y(t) + a(t) \ge 0\) for all \(t \ge 0\),

  2. (ii)

    a is an increasing, continuous function with initial value zero,

  3. (iii)

    The measure da is carried on the set \(\{t : z(t) = 0\}\).

Proof

See Appendix 1. \(\square \)
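To illustrate Lemma 2.19 on a simple (hypothetical) deterministic example, take \(y(t) = -(t\wedge 1) + \mathbbm {1}_{\{t>2\}}\), whose running infimum \({\underline{y}}(t) = -(t\wedge 1)\) is continuous. Then

$$\begin{aligned} a(t) = \sup _{s\le t}-y(s) = t\wedge 1 \qquad \mathrm {and} \qquad z(t) = y(t) + a(t) = \mathbbm {1}_{\{t>2\}}, \end{aligned}$$

so that \(z\ge 0\), a is continuous and increasing with \(a(0)=0\), and da is carried on \([0,1] \subset \{t: z(t) = 0\}\); note that the right jump of y at time 2 is absorbed entirely by z and not by a.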

Proposition 2.20

Given a finite honest time \(\tau \), there exists a unique local optional supermartingale n with \(n_0 = 1\) and continuous running supremum, such that \(\widetilde{Z} = 1 + n- {{\overline{n}}}\).

Proof

The existence of n with \(n_0 = 1\) and continuous running supremum follows from Theorem 2.9. The uniqueness of the additive representation of \(\widetilde{Z}\) follows from Lemma 2.19 if we set \(y= 1- n\), whose running infimum \(1-{\overline{n}}\) is continuous, which gives \(a = \overline{-1+n} = -1+ {\overline{n}}\) and \(z = 1-n + a = -n+ {\overline{n}}\). \(\square \)

Remark 2.21

We stress that, in Theorem 2.9, the core of the proof is showing that \({\overline{N}} = e^{A^c}\) and \({\overline{n}} = 1+A^c\), which gives the important property that both \({\overline{N}}\) and \({\overline{n}}\) are continuous. In fact, one can argue that the continuity of both \({\overline{N}}\) and \({\overline{n}}\), where n and N are defined in (8), is the most important property: if one can show directly that \({\overline{N}}\) and \({\overline{n}}\) are continuous, then the existence and uniqueness of the additive representation of \(\widetilde{Z}\) can be obtained through Lemma 2.19, while the existence and uniqueness of the multiplicative representation of \(\widetilde{Z}\) can be obtained by combining Lemma 2.7, Lemma 2.15 and Proposition 2.17.

We conclude the first part of the paper by showing, for the sake of completeness, that the processes \({\overline{n}}\) and \({\overline{N}}\) are continuous without showing that they are equal to \(1+A^c\) and \(e^{A^c}\). This gives us an alternative method to prove the existence of the additive and the multiplicative representation of \(\widetilde{Z}\) and highlights the importance of Lemma 2.7.

Proposition 2.22

Given a finite honest time \(\tau \), the running suprema of the processes \(n = m-A^g\) and \(N = \widetilde{Z}D^c\) are continuous.

Proof

We will only present the proof of continuity for \({\overline{n}}\), since the proof of continuity for \({\overline{N}}\) follows from similar arguments. To this end, suppose that the supremum of n is not continuous, that is, the left jumps of n, which are the jumps of the martingale m, can take n to its supremum. More specifically, we set \(T:= \inf {\lbrace s: \Delta {\overline{n}}_s > 0 \rbrace }\) and suppose that \(T<\infty \). Using the fact that \({{\overline{n}}}\) is càdlàg, we deduce that \(T> 0\) and \(\llbracket \,T \,\rrbracket \subset {\lbrace n = {\overline{n}} \rbrace }\), which by Lemma 2.7 is equal to \({\lbrace \widetilde{Z} = 1 \rbrace }\). Then for fixed \(\omega \in \Omega \), there are two cases to consider: (i) the point \(T(\omega )\) is a left isolated point of the set \({\lbrace s:\widetilde{Z}_s(\omega ) =1 \rbrace }\), and (ii) the point \(T(\omega )\) is not a left isolated point of the set \({\lbrace s:\widetilde{Z}_s(\omega ) =1 \rbrace }\).

In case (i), we consider the random time \(\tau _T = \sup {\lbrace s < T: \widetilde{Z}_s = 1 \rbrace }\). Note that since \(T(\omega )\) is a left isolated point and \(dA^c\) is carried on the set \({\lbrace n = 1+A^c \rbrace } = {\lbrace \widetilde{Z} = 1 \rbrace } = {\lbrace n = {\overline{n}} \rbrace }\), we must have, for the fixed \(\omega \), \(\tau _T < T\) and \(A^c_{\tau _T} = A^c_{T-} =A^c_{T}\). However, this is a contradiction since it would imply

$$\begin{aligned} 1+A^c_{T-} = 1+A^c_{\tau _T} = {\overline{n}}_{\tau _T} = {\overline{n}}_{T-} < {\overline{n}}_{T} = 1+A^c_T. \end{aligned}$$

We point out that, in this case, one does not have to distinguish whether \(\tau _T(\omega )\) belongs to the set \({\lbrace s:\widetilde{Z}_s(\omega ) =1 \rbrace }\) or is in its right closure, since \({\overline{n}}\) is continuous before T.

In case (ii), since \(T(\omega )\) is not a left isolated point of \({\lbrace s:\widetilde{Z}_s(\omega ) =1 \rbrace }\), there exists an increasing sequence \((T_n(\omega ))_{n \in {{\mathbb {N}}}}\) such that \(\forall n\in {{\mathbb {N}}}\), \(T_n(\omega ) < T(\omega )\), \(T_n(\omega ) \in {\lbrace s:\widetilde{Z}_s(\omega ) =1 \rbrace }\) and \(T_n(\omega )\uparrow T(\omega )\). This implies that for the fixed \(\omega \), \(1+A^c_{T-}= {\overline{n}}_{T-}< {\overline{n}}_T = 1+A^c_{T}\) and this gives a contradiction. \(\square \)

3 Optional Semimartingales of Class-\((\Sigma )\)

In this part of the paper, we study the Azéma supermartingale of finite honest times in a general context and extend, in Definition 3.2, the notion of semimartingales of class-\((\Sigma )\) to optional semimartingales of class-\((\Sigma )\) by allowing for jumps in the finite variation part of the semimartingale decomposition. The goal below is to recover some existing results in the literature for semimartingales of class-\((\Sigma )\) in the context of optional semimartingales of class-\((\Sigma )\) and to apply them to the construction of finite honest times. Although some results presented below might not be surprising, we believe that the techniques used are of interest as we no longer deal with càdlàg processes.

To be specific, we extend Lemma 2.2 (1)–(3), Lemmas 2.3 and 2.4 from [6] in Lemmas 3.4, 3.5 and 3.6, respectively. Second, we extend Theorem 3.1 (1) in [6] by showing in Theorems 3.9 and 3.11 that Madan–Roynette–Yor type formulae, which relate the price of a put/call option to the last passage time of zero of the pay-off, can be recovered for optional semimartingales of class-\((\Sigma )\). Lastly, by using Lemma 3.5 and Theorem 3.11, we obtain in Proposition 3.14 a method to construct finite honest times for which the multiplicative decomposition of \(\widetilde{Z}\) obtained in Theorem 2.9 is non-trivial, in that \(N \ne \widetilde{Z}\).

Definition 3.1

An optional semimartingale X with decomposition \(X = X_0+ M + A\), where M is a local martingale with \(M_0= 0\) and \(A = A^c + A^d + A^g\) is a làglàd process of finite variation with \(A_0 =0\), is said to satisfy the Skorokhod minimal reflection condition at zero if for every \(t\ge 0\),

$$\begin{aligned} \int _{[0,t)} \mathbbm {1}_{\lbrace X_s \ne 0 \rbrace } (dA^c_s+dA^g_{s+}) = 0 \quad \mathrm {and} \quad \int _{(0,t]}\mathbbm {1}_{\lbrace X_{s-} \ne 0 \rbrace } dA^d_s= 0. \end{aligned}$$

Definition 3.2

An optional semimartingale X is said to be of class-\((\Sigma )\) if it satisfies the Skorokhod minimal reflection condition, \(X_0 = 0\) and \(A^d = 0\).

In the existing definition of semimartingales of class-(\(\Sigma \)) given in [1, 7, 8, 18, 26, 27, 33, 34], the process of finite variation A in the decomposition of X is continuous by definition, that is \(A = A^c\). The current extension is non-trivial in that recent studies of honest times in the Poisson filtration have provided explicit examples of positive optional submartingales of class-\((\Sigma )\) whose finite variation part A is a pure jump process, i.e. \(A^c= 0\) and \(A^g \ne 0\). In fact, it is proven in Theorem 3.6 of Aksamit et al. [4] that in any jumping filtration, for example the Poisson filtration, the finite variation part A in the Doob–Meyer–Mertens–Gal’čuk decomposition of the Azéma optional supermartingale \(\widetilde{Z}\) associated with a finite honest time must be a pure jump process.

Example 3.3

Given a finite honest time \(\tau \), we see from Lemma A.18 that the process \(1-\widetilde{Z}\) is a positive optional submartingale of class-(\(\Sigma \)) such that \(\llbracket \,\tau \,\rrbracket \subset \{1- \widetilde{Z} = 0\}\).
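As a concrete instance, in the setting of Example 2.2 where \(\tau = \sigma \) is a finite stopping time, one has \(1-\widetilde{Z} = \mathbbm {1}_{\rrbracket \sigma ,\infty \llbracket }\), and a decomposition satisfying Definition 3.2 is given by

$$\begin{aligned} 1-\widetilde{Z} = M + A \qquad \mathrm {with} \qquad M = 0 \quad \mathrm {and} \quad A = A^g = \mathbbm {1}_{\rrbracket \sigma ,\infty \llbracket }, \end{aligned}$$

so that the finite variation part is a pure jump process whose single right jump occurs at time \(\sigma \), where \(1-\widetilde{Z}\) vanishes.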

From this point onwards, given an optional semimartingale \(X = M + A\), the processes M and A will denote the local martingale and the làglàd process of finite variation in the optional semimartingale decomposition of X. The left jumps are given by \(\Delta X = \Delta M\), the right jumps are given by \(\Delta ^+ X = \Delta A_+ = \Delta A^g_+\), and \({\lbrace \Delta ^+ X \ne 0 \rbrace } \subset {\lbrace X= 0 \rbrace }\).

Lemma 3.4

Let X be an optional semimartingale of class-\((\Sigma )\). Then: (i) The processes \(X^+\), \(X^-\) and |X| are local optional submartingales. (ii) If \(\Delta X \ge 0\) then \(X^+\) is of class-\((\Sigma )\). (iii) If \(\Delta X \le 0\) then \(X^-\) is of class-\((\Sigma )\). (iv) If \(\Delta X = 0\) then |X| is of class-\((\Sigma )\). (v) If X is a positive optional submartingale then \(A^c =(\overline{-n})\vee 0\) where \(n := M + A^g\).

Proof

(i) The fact that \(X^+\), \(X^-\) and |X| are local submartingales follows directly from the Tanaka formula in Lemma B.26 and the fact that \(dA_+\) is carried on \({\lbrace X= 0 \rbrace }\).

(ii) We prove only \((\mathrm ii)\), as the proofs of \((\mathrm iii)\) and \((\mathrm{iv})\) are similar. By Lemma B.26,

$$\begin{aligned} X_t^+&= \int _{(0,t]} \mathbbm {1}_{\lbrace X_{s-}> 0 \rbrace }d(A^c_s + M_s) + \int _{[0,t)} \mathbbm {1}_{\lbrace X_{s}> 0 \rbrace }\, dA^g_{s+} + \sum _{0< s \le t} \mathbbm {1}_{\lbrace X_{s-}> 0 \rbrace }(X_{s})^- \\&\quad + \sum _{0< s \le t} \mathbbm {1}_{\lbrace X_{s-} \le 0 \rbrace }(X_{s})^+ + \sum _{0\le s< t} \mathbbm {1}_{\lbrace X_s > 0 \rbrace }(X_{s+})^- + \sum _{0\le s < t} \mathbbm {1}_{\lbrace X_s \le 0 \rbrace }(X_{s+})^+ + \frac{1}{2}L^0_t(X). \end{aligned}$$

The process \(L^0_t(X)\) is called the local time of X at zero and by Theorem B.27, the measure \(dL^0(X)\) is carried on the set \({\lbrace X= 0 \rbrace } \subseteq {\lbrace X^+ = 0 \rbrace }\). Then by using the fact that \(\Delta X\ge 0\), we have

$$\begin{aligned} X_t^+&= \int _{(0,t]} \mathbbm {1}_{\lbrace X_{s-}> 0 \rbrace }dM_s + \sum _{0< s \le t} \mathbbm {1}_{\lbrace X_{s-} \le 0 \rbrace }(X_{s})^+ + \sum _{0\le s< t} \mathbbm {1}_{\lbrace X_s > 0 \rbrace }(X_{s+})^- \\&\quad + \sum _{0\le s < t} \mathbbm {1}_{\lbrace X_s \le 0 \rbrace }(X_{s+})^+ + \frac{1}{2}L^{0}_t(X). \end{aligned}$$

Note that the right-hand jumps \(\sum _{0\le s < t} \big (\mathbbm {1}_{\lbrace X_s > 0 \rbrace }(X_{s+})^- + \mathbbm {1}_{\lbrace X_s \le 0 \rbrace }(X_{s+})^+\big )\) are supported on the set \({\lbrace X^+ = 0 \rbrace }\) because \({\lbrace \mathrm {sign}(X) \ne \mathrm {sign}(X_+) \rbrace } \subseteq {\lbrace \Delta ^+X \ne 0 \rbrace } \subseteq {\lbrace X = 0 \rbrace } \subseteq {\lbrace X^+ = 0 \rbrace }\).

The làglàd process of finite variation A in the optional semimartingale decomposition of X is left continuous and therefore predictable. This implies that there exists a localising sequence of stopping times \((T_n)_n\) such that \(X^+_{T_n} = (M + A)^+_{T_n}\) is integrable and \(M^{T_n}\) is a uniformly integrable martingale. This implies that the increasing process

$$\begin{aligned} V_t = \sum _{0< s \le t} \mathbbm {1}_{\lbrace X_{s-} \le 0 \rbrace }(X_{s})^+ \end{aligned}$$

stopped at \(T_n\) is of integrable variation and the dual predictable projection \(V^p\) of V exists and is locally of integrable variation.

To show that \(V^p\) is continuous, following a similar argument to Lemma 2.2 in [6], we note that on the set \({\lbrace \Delta V>0 \rbrace }\), the jump \(\Delta V\) is bounded by \(\Delta X = \Delta M\ge 0 \). Therefore,

$$\begin{aligned} \Delta V^p = \,^p(\Delta V)&\le \,^p(\Delta M) \end{aligned}$$

and, from the predictable sampling theorem, \(\,^p(\Delta M)_T = 0\) for all predictable stopping times T, which shows that \(V^p\) is continuous. Using the continuity of \(V^p\), we obtain

$$\begin{aligned} {\mathbb {E}}\big (\int _{[0,T_n)} \mathbbm {1}_{\lbrace X^+_s> 0 \rbrace } dV^p_s\,\big )&= {\mathbb {E}}\big (\int _{[0,{T_n})} \mathbbm {1}_{\lbrace X^+_{s-}> 0 \rbrace } dV^p_s\,\big )\\&= {\mathbb {E}}\big (\int _{[0,{T_n})} \mathbbm {1}_{\lbrace X^+_{s-} > 0 \rbrace }\mathbbm {1}_{\lbrace X_{s-}\le 0 \rbrace } dV_s \,\big ) = 0. \end{aligned}$$

Finally, by the monotone convergence theorem, we let \(n\rightarrow \infty \) to show that \(V^p\) is supported on \({\lbrace X^+=0 \rbrace }\). By similar arguments, we can conclude that \(X^-\) and |X| are of class-\((\Sigma )\).

(v) From the fact that \(X\ge 0\), we have \(-n\le A^c\) and hence \({\lbrace X = 0 \rbrace } = {\lbrace -n = A^c \rbrace } \subseteq {\lbrace -n = \overline{-n} \rbrace }\). This shows that \(A^c = \overline{-n}\) on the set \({\lbrace X = 0 \rbrace }\). It is also evident that the processes \((\overline{-n})\vee 0\) and \(A^c\) have the same initial condition. Using the inequality \(-n \le A^c\), we can conclude that \(\overline{-n} \le A^c\) and \((\overline{-n})\vee 0 \le A^c\). To show the reverse inequality, let \(g_t = \sup \{s\le t: X_s = 0\}\); given any stopping time T, either \(g_T = 0\) or \(g_T>0\). In the case where \(g_T(\omega ) > 0\) and \((\omega ,g_T(\omega )) \in {\lbrace X=0 \rbrace }\), we have from the continuity of \(A^c\) that

$$\begin{aligned} A^c_T = A^c_{g_T} = (\overline{-n})_{g_T} = (\overline{-n})_{g_T} \vee 0 \le (\overline{-n})_T \vee 0. \end{aligned}$$

In the case where \(g_T(\omega ) > 0\) and \((\omega ,g_T(\omega ))\) does not belong to \({\lbrace X=0 \rbrace }\) but is in the right closure of \({\lbrace X=0 \rbrace }\), we have by continuity of \(A^c\)

$$\begin{aligned} A^c_T = A^c_{g_T} = (\overline{-n})_{g_T-} \le (\overline{-n})_{g_T} \vee 0 \le (\overline{-n})_T \vee 0. \end{aligned}$$

In the case where \(g_T(\omega ) = 0\), we have \(A^c_T = A^c_0 = 0 \le (\overline{-n})_T \vee 0\). Hence, we have \(A^c = (\overline{-n})\vee 0\). \(\square \)
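As a quick consistency check of (v) (an illustration in the Brownian setting only, not needed in the sequel), let B be a standard Brownian motion and take \(X = |B|\). The Tanaka formula gives

$$\begin{aligned} |B_t| = \int _0^t \mathrm {sign}(B_s)\,dB_s + L^0_t(B), \end{aligned}$$

so X is a positive continuous submartingale of class-\((\Sigma )\) with \(M_t = \int _0^t \mathrm {sign}(B_s)\,dB_s\), \(A^c = L^0(B)\) and \(A^g = 0\), hence \(n = M\). The Skorokhod reflection lemma identifies the local time as \(L^0_t(B) = \sup _{s\le t}(-M_s)\vee 0 = (\overline{-n})_t\vee 0\), which is precisely the identity \(A^c = (\overline{-n})\vee 0\) asserted in (v).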

Lemma 3.5

Given two optional semimartingales X and Y of class-\((\Sigma )\) such that \([M^X, M^Y]= 0\), the product XY is an optional semimartingale of class-\((\Sigma )\).

Proof

By application of the Itô formula in Theorem B.22

$$\begin{aligned}&X_tY_t - X_0Y_0 \\&\quad = \int _{(0,t]}Y_{s-}dX^r_s + \int _{[0,t)}Y_s d X^g_{s+} + \int _{(0,t]} X_{s-}dY^r_s + \int _{[0,t)} X_{s} dY^g_{s+} + [M^X, M^Y]_t \\&\qquad + \sum _{0\le s<t} \Delta Y^g_s \Delta ^+ X_s \\&\quad = \int _{(0,t]}Y_{s-}dX^r_s + \int _{[0,t)}Y_{s} d X^g_{s+} + \int _{(0,t]} X_{s-}dY^r_s + \int _{[0,t)} X_{s+} dY^g_{s+}. \end{aligned}$$

To see that the finite variation part only moves on the set \({\lbrace XY = 0 \rbrace }\), it is sufficient to note that \({\lbrace XY \ne 0 \rbrace } = {\lbrace X\ne 0 \rbrace }\cap {\lbrace Y \ne 0 \rbrace }\). \(\square \)

For simplicity, we present the following lemma for \(C^1\)-functions rather than bounded measurable functions, since, unlike Nikeghbali [27] or Cheridito et al. [6], we do not attempt to solve the Skorokhod embedding problem for optional semimartingales of class-\((\Sigma )\); we only include the following result to illustrate that class-\((\Sigma )\) is closed under this transform.

Lemma 3.6

Let \(B= B^c + B^g\) be a left continuous increasing process such that \(dB_+\) is carried on the set \(\{X = 0\}\). Then for any \(C^1\)-function f,

$$\begin{aligned} f(B_{t})X_t&= f(0)X_0 + \int _{(0,t]} f(B_s)d(X^r_s - B^c_s) + \int _{[0,t)} f(B_{s+})d(X^g_s - B^g_s)\\&\quad + \int _{[0,t)} f(B_{s+})dB_{s+}. \end{aligned}$$

and \(f(B_{+})dB_{+}\) is carried on the set \(\{f(B)X = 0\}\).

In particular, if \(X = M + A\) is an optional semimartingale of class-\((\Sigma )\) and \(B = A\), then

$$\begin{aligned} f(A_{t})X_t = f(0)X_0 + \int _{(0,t]} f(A_{s})dM_s + \int _{[0,t)} f(A_{s+})dA_{s+} \end{aligned}$$

is an optional semimartingale of class-\((\Sigma )\).

Proof

We first note that the process \(B = B^c + B^g\) is left continuous and by an application of the Itô formula in Theorem B.22 we obtain

$$\begin{aligned} f(B_{t})X_t&= f(0)X_0 + \int _{(0,t]} f(B_{s})dX^r_s + \int _{[0,t)} f(B_{s})dX^g_s \\&\quad + \int _{(0,t]} X_{s-}df(B)^r_s + \int _{[0,t)} X_{s}df(B)^g_{s+} + \sum _{s<t} (f(B_{s+})- f(B_s))\Delta ^+ A_s. \end{aligned}$$

By applying the Itô formula to f(B), we obtain

$$\begin{aligned} f(B_t) - f(B_0)&= \int _{(0,t]}f'(B_s)dB^c_s + \int _{[0,t)}f'(B_s)dB^g_{s+} + \sum _{s<t} \big ( f(B_{s+}) - f(B_s) - f'(B_s)\Delta ^+ B_s \big ). \end{aligned}$$

The above implies that \(df(B)_+\) is carried on the set \({\lbrace X=0 \rbrace }\) since \(dB_+\) is carried on \({\lbrace X= 0 \rbrace }\). Therefore, we have

$$\begin{aligned} f(B_t)X_t&= f(0)X_0 + \int _{(0,t]} f(B_s)dX^r_s + \int _{[0,t)} f(B_s)dX^g_s + \sum _{s<t} (f(B_{s+})- f(B_s))\Delta ^+ A_s\\&= f(0)X_0 + \int _{(0,t]} f(B_s)dX^r_s + \int _{[0,t)} f(B_{s+})dX^g_s\\&= f(0)X_0 + \int _{(0,t]} f(B_s)d(X^r_s - B^c_s) + \int _{[0,t)} f(B_{s+})d(X^g_s - B^g_s)\\&\quad + \int _{[0,t)} f(B_{s+})dB_{s+}. \end{aligned}$$

It is sufficient to note that since \(dB_+\) is carried on \({\lbrace X= 0 \rbrace }\)

$$\begin{aligned} \int _{[0,t)} \mathbbm {1}_{\lbrace X_sf(B_{s}) \ne 0 \rbrace }f(B_{s+})dB_{s+} = \int _{[0,t)} \mathbbm {1}_{\lbrace X_s\ne 0 \rbrace }\mathbbm {1}_{\lbrace f(B_{s})\ne 0 \rbrace }f(B_{s+})dB_{s+} = 0, \end{aligned}$$

which concludes the proof. \(\square \)
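To illustrate the transform in Lemma 3.6 (the particular choice of f below is made only for illustration), take \(f(x) = e^{x}\) and \(B = A\). Since \(X_0 = 0\), the second formula of the lemma reduces to

$$\begin{aligned} e^{A_t}X_t = \int _{(0,t]} e^{A_{s}}\,dM_s + \int _{[0,t)} e^{A_{s+}}\,dA_{s+}, \end{aligned}$$

and the finite variation part \(\int e^{A_{s+}}dA_{s+}\) only charges the set \({\lbrace X = 0 \rbrace } = {\lbrace e^{A}X = 0 \rbrace }\), so \(e^{A}X\) is again an optional semimartingale of class-\((\Sigma )\).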

3.1 The Madan–Roynette–Yor formula

From the works of Madan et al. [21], Profeta et al. [29] and the generalisations of Cheridito et al. [6], we know that there is a deep connection between semimartingales of class-\((\Sigma )\) and their last passage time at zero. From this point onwards, given an optional semimartingale X of class-\((\Sigma )\), the honest time \(\tau := \sup {\lbrace s:X_s = 0 \rbrace }\) is assumed to be finite. Let us first record Corollary 3.5 from [6].

Proposition 3.7

(Corollary 3.5 in [6] or see Madan, Roynette and Yor [21]) Let K be a constant and M a local martingale with no positive jumps such that \(M^-\) is of class-(D). Denote \(g_K := \sup \{t \ge 0 : M_t= K\}\). Then for every stopping time T,

$$\begin{aligned} (K-M_T)^+ = {\mathbb {E}}((K-M_\infty )^+ \mathbbm {1}_{\{g_K\le T\}} | {{{\mathcal {F}}}}_T). \end{aligned}$$
(11)

In Proposition 3.7, the process M is assumed to have no positive jumps, so that the left jumps of \(K-M\) are non-negative; hence, by Lemma 3.4 (ii), the process \(X := (K-M)^+\) is a positive submartingale of class-(\(\Sigma \)) and \(g_K = \tau = \sup {\lbrace s:X_s = 0 \rbrace }\). Formula (11) can then be rewritten in the form \({\mathbb {E}}(X_\infty \mathbbm {1}_{\{\tau \le T\}} | {{{\mathcal {F}}}}_T) = X_T\), which, with a slight abuse of terminology, we shall refer to as the Madan–Roynette–Yor type formula. The goal now is to recover formulae of this type for optional semimartingales of class-\((\Sigma )\) and their last passage times at zero.
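For orientation, a familiar special case (the additional assumptions below are ours and are made only for illustration): suppose in Proposition 3.7 that M is a non-negative continuous local martingale with \(\lim _{t\rightarrow \infty } M_t = 0\) and \(K \ge 0\). Then \(M^- \equiv 0\) is trivially of class-(D), \((K-M_\infty )^+ = K\), and (11) reduces to

$$\begin{aligned} (K-M_T)^+ = K\, {{\mathbb {P}}}(g_K\le T \,|\, {{{\mathcal {F}}}}_T), \end{aligned}$$

which is of the form highlighted in Madan, Roynette and Yor [21], expressing the put-type payoff at T through the conditional distribution of the last passage time \(g_K\).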

In the following, deviating from [6,7,8, 27], we will work with both processes of class-(DL) and class-(D). To proceed, we introduce the time change processes

$$\begin{aligned} \tau _t = \sup {\lbrace s < t: X_s = 0 \rbrace } \quad \text {and} \quad \gamma _t = \inf {\lbrace s \ge t: X_s = 0 \rbrace }. \end{aligned}$$

Here, by convention, \(\gamma _t\) takes the value infinity if the set is empty. We observe that \(\tau _\infty := \lim _{t\rightarrow \infty } \tau _t = \tau \) and \({\lbrace \tau _t < u \rbrace } \subset {\lbrace t\le \gamma _u \rbrace } \subset \{\tau _t \le u\}\) for \(0\le u <t \le \infty \). Here the set inclusions can be strict, since there could be trajectories such that \(\gamma _u \ge t\) but \(\tau _t = u\) and \(X_{\tau _t} \ne 0\). However, we have

$$\begin{aligned} {\lbrace t\le \gamma _u \rbrace } \cap \{X_{\tau _t} = 0\} = \{\tau _t < u\} \cap \{X_{\tau _t} = 0\} \end{aligned}$$
(12)

which follows from the fact that \({\lbrace t\le \gamma _u \rbrace } \cap \{X_{\tau _t} = 0\} \cap \{\tau _t = u\} = \emptyset \), since here \(\gamma _u = u < t\). Similarly, we also consider the time change processes

$$\begin{aligned} g_t = \sup {\lbrace s \le t: X_s = 0 \rbrace } \quad \text {and} \quad k_t = \inf {\lbrace s > t: X_s = 0 \rbrace } \end{aligned}$$

where we have \({\lbrace g_t \le u \rbrace } = {\lbrace t< k_u \rbrace }\) for \(0\le u < t\). It is important to point out that for \(t= u\), the set \({\lbrace g_u \le u \rbrace }\) is of probability one, but the set \(\Omega \setminus {\lbrace u< k_u \rbrace }\) could be of positive probability.
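For intuition, here is a sketch of why \({\lbrace g_t \le u \rbrace } = {\lbrace t< k_u \rbrace }\) for \(0\le u < t\), using the standing fact that the right jumps of X occur only on \({\lbrace X= 0 \rbrace }\). Both events say that X has no zero on the interval \((u,t]\):

$$\begin{aligned} {\lbrace g_t \le u \rbrace } = {\lbrace X_s \ne 0 \ \forall \, s\in (u,t] \rbrace } = {\lbrace t < k_u \rbrace }. \end{aligned}$$

The inclusion \({\lbrace t< k_u \rbrace } \subseteq {\lbrace g_t \le u \rbrace }\) is immediate. Conversely, on \({\lbrace g_t \le u \rbrace }\), if \(k_u \le t\) then necessarily \(k_u = t\) and the zeros of X accumulate at t from the right; this forces \(X_{t+} = 0\) while \(X_t \ne 0\), so that \(\Delta ^+ X_t \ne 0\) with \(X_t \ne 0\), contradicting \({\lbrace \Delta ^+ X \ne 0 \rbrace } \subset {\lbrace X= 0 \rbrace }\).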

We derive below a balayage-type formula for X and \(X_+\), which provides us with the martingales that will underpin later computations. Note that the right jumps \(\Delta ^+ X = \Delta ^+ A^g\) are only nonzero on the set \({\lbrace X= 0 \rbrace }\); this implies \(X_{\gamma _t} = X_{k_t} = 0\), but the quantities \(X_{\tau _t}\) and \(X_{g_t}\) may or may not be zero. Also, by definition \(X_0 = 0\) and by convention \(X_{0-} = 0\), hence \(g_0 = 0\) and we can set \(\tau _0 = 0\).

Lemma 3.8

Let X be an optional semimartingale of class-\((\Sigma )\). Then, for \(0\le u \le t < \infty \),

$$\begin{aligned} X_{t+}\mathbbm {1}_{\lbrace t < k_u \rbrace }&= M_{t\wedge k_{u}} + A^c_{u} + A^g_{u+} - \Delta ^+ A^g_u \mathbbm {1}_{\{k_u = u\}}\\ X_t\mathbbm {1}_{\lbrace t \le \gamma _u \rbrace }&= M_{t\wedge \gamma _{u}} + A^c_{u} + A^g_{u}. \end{aligned}$$

Proof

For \(0\le u \le t < \infty \), we have for \(X_+\),

$$\begin{aligned} X_{t+}\mathbbm {1}_{\lbrace t < k_u \rbrace }&= M_{t\wedge k_u} + A^c_{t\wedge u} + (A^g_{+})_{t\wedge k_u}- X_{k_u+}\mathbbm {1}_{\lbrace t\ge k_u \rbrace }\\&= M_{t\wedge k_u} + A^c_{t\wedge u} + (A^g_{+})_{t\wedge k_u}- \Delta A^g_{k_u+}\mathbbm {1}_{\lbrace t\ge k_u \rbrace } \end{aligned}$$

where the last equality follows from the fact that \(X_{k_u} = 0\) and \(\Delta ^+ X = \Delta A^g_+ = \Delta ^+ A^g\). To further simplify the above expression, we see that on the set \({\lbrace t\ge k_u \rbrace }\)

$$\begin{aligned} (A^g_{+})_{t\wedge k_u} - \Delta A^g_{k_u}\mathbbm {1}_{\lbrace t\ge k_u \rbrace }&= A^g_{k_u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace }\\&= A^g_{u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u = k_u \rbrace } + A^g_{k_u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u< k_u \rbrace }\\&= A^g_{u+}\mathbbm {1}_{\lbrace u = k_u \rbrace }- \Delta ^+A^g_{u} \mathbbm {1}_{\lbrace u = k_u \rbrace }+ A^g_{k_u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u < k_u \rbrace } \end{aligned}$$

and, by using the fact that \(A^g\) is left continuous, the third term above is given by

$$\begin{aligned} A^g_{k_u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u< k_u \rbrace }&= A^g_{u+} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u< k_u \rbrace } \mathbbm {1}_{\lbrace \Delta ^+ A^g_u \ne 0 \rbrace } + A^g_{u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u< k_u \rbrace } \mathbbm {1}_{\lbrace \Delta ^+ A^g_u = 0 \rbrace }\\&= A^g_{u+}\mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u < k_u \rbrace }. \end{aligned}$$

On the complement \({\lbrace t < k_u \rbrace }\), we have \((A^g_{+})_{t\wedge k_u} = A^g_{t+} = A^g_{u+}\), where the last equality comes from the fact that \(u\le t <k_u\) and \(A^g_+\) does not increase on \(\llbracket \,u, k_u\llbracket \,\). By combining the above computations, we obtain

$$\begin{aligned}&X_{t+}\mathbbm {1}_{\lbrace t < k_u \rbrace } \\&\quad = M_{t\wedge k_u} + A^c_{u} + A^g_{u+}\mathbbm {1}_{\lbrace t<k_u \rbrace } + A^g_{u+} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u < k_u \rbrace } + A^g_{u+} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u = k_u \rbrace } \\&\qquad - \Delta ^+ A^g_{u} \mathbbm {1}_{\lbrace t\ge k_u \rbrace } \mathbbm {1}_{\lbrace u = k_u \rbrace } \\&\quad = M_{t\wedge k_u} + A^c_{u} + A^g_{u+} - \Delta ^+ A^g_{u}\mathbbm {1}_{\lbrace u = k_u \rbrace }. \end{aligned}$$

Similarly, by using the fact that \(X_{\gamma _u} = 0\) on the set \(\{\gamma _u <\infty \}\) we have

$$\begin{aligned} X_{t\wedge \gamma _u } = X_{t}\mathbbm {1}_{\{t \le \gamma _u\}} + X_{\gamma _u }\mathbbm {1}_{\{t > \gamma _u\}} = X_{t}\mathbbm {1}_{\{t \le \gamma _u\}}. \end{aligned}$$

On the other hand, we deduce from the fact that \(A^g\) is left continuous and \(dA^g_+\) is carried on \(\{X = 0\}\)

$$\begin{aligned} X_{t\wedge \gamma _u } = M_{t\wedge \gamma _u } + A^c_{t\wedge \gamma _u } + A^g_{t\wedge \gamma _u } = M_{t\wedge \gamma _u } + A^c_{u} + A^g_{u} \end{aligned}$$

which is a local martingale for \(t \in [u, \infty )\). \(\square \)

The first observation we make is that \(\mathbbm {1}_{\lbrace u \le \gamma _u \rbrace } = 1\) and \(X_{u+}\mathbbm {1}_{\{u=k_u\}} = 0\). The equality \(X_{u+}\mathbbm {1}_{\{u=k_u\}} = 0\) follows from the fact that, on the set \(\{u = k_u\}\) there exists a sequence of random times \((k_u^n)_{n\in {\mathbb {N}}}\) strictly greater than u such that \(X_{k_u^n} = 0\) and \(\lim _{n\rightarrow \infty } k_u^n = u\), which implies that \(\lim _{n\rightarrow \infty } X_{k_u^n} = X_{u+} =0\). Therefore, from Lemma 3.8, we have for fixed \(u\ge 0\),

$$\begin{aligned} X_{t+}\mathbbm {1}_{\lbrace t < k_u \rbrace } - X_{u+}&= M_{t}^{k_{u}} - M_{u} \qquad u\le t, \end{aligned}$$
(13)
$$\begin{aligned} X_{t}\mathbbm {1}_{\lbrace t\le \gamma _u \rbrace } - X_{u}&= M_{t}^{\gamma _{u}} - M_{u} \qquad u \le t. \end{aligned}$$
(14)

By examining (13), we note that if the local martingale \(M^{k_u}-M^u\) is a true martingale on \([u,\infty )\) then one can take the conditional expectation with respect to \({{{\mathcal {F}}}}_u\) and eliminate the right-hand side using the optional sampling theorem (see for example Theorem 2.58 of [14]). The second observation is that the integrability properties of \(M^{k_{u}}_s - M_{s}^{u}\) for \(s \in [u,\infty )\) can be derived from the integrability properties of X or \(X_+\). In view of this, one could, in the definition of class-\((\Sigma )\), restrict oneself to optional semimartingales for which M is a martingale; however, the goal is to look for sufficient conditions on the process X or \(X_+\) instead. The assumption that X and \(X_+\) are of class-(D) is likely too strong for problems on a finite horizon; for example, Brownian motion is of class-\((\Sigma )\) but does not belong to class-(D).

Theorem 3.9

Let X be an optional semimartingale of class-\((\Sigma )\) and \(0\le u \le t < \infty \),

  1. (i)

    If \(X_+\) is of class-(DL) then

    $$\begin{aligned} {\mathbb {E}}(X_{t+}\mathbbm {1}_{\lbrace t < k_u \rbrace }|{{{\mathcal {F}}}}_u)&= X_{u+}. \end{aligned}$$
  2. (ii)

    If X is of class-(DL) then

    $$\begin{aligned} {\mathbb {E}}(X_t\mathbbm {1}_{\lbrace t \le \gamma _u \rbrace }|{{{\mathcal {F}}}}_u)&= X_u. \end{aligned}$$

Proof

We prove only (ii) since the proof for (i) is similar. Given a fixed \(u\ge 0\), we have from (14) that for \(u\le t\)

$$\begin{aligned} X_{t}\mathbbm {1}_{\lbrace t \le \gamma _u \rbrace } - X_{u}\mathbbm {1}_{\lbrace u \le \gamma _u \rbrace } = \int _{(0,t]} \mathbbm {1}_{\{u < s\le \gamma _u\}} dM_s =: M_t(u). \end{aligned}$$

The process M(u) is a local martingale and \(M_t(u) = (M^{\gamma _u}_t - M_u)\mathbbm {1}_{\{t\ge u\}}\). To obtain the claim, it is enough to show that M(u) is a martingale and then take the conditional expectation with respect to \({{{\mathcal {F}}}}_u\). To do that, we make use of the fact that a local martingale is a martingale if and only if it is of class-(DL). To see that the local martingale M(u) is of class-(DL), we observe that for any \(t \ge 0\) and any stopping time T

$$\begin{aligned} |M_{t\wedge T}(u)|&\le |X_{t\wedge T}\mathbbm {1}_{\lbrace t\wedge T \le \gamma _u \rbrace } - X_{u}|\mathbbm {1}_{\{t\wedge T \ge u\}}\\&\le |X_{t\wedge T}| + |X_{t\wedge u\wedge T}|. \end{aligned}$$

Since X is of class-(DL), we deduce that M(u) is of class-(DL) and hence a martingale. \(\square \)
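As a simple illustration (ours, not part of the development above), take \(X = B\) a standard Brownian motion in its natural filtration: it is of class-\((\Sigma )\) with \(M = B\) and \(A = 0\), and it is of class-(DL) though not of class-(D). Theorem 3.9 (ii) then reads

$$\begin{aligned} {\mathbb {E}}(B_t\mathbbm {1}_{\lbrace t \le \gamma _u \rbrace }\,|\,{{{\mathcal {F}}}}_u)&= B_u, \qquad 0\le u\le t<\infty , \end{aligned}$$

where \(\gamma _u\) is the first zero of B after u. This is consistent with the optional sampling theorem applied to the stopped martingale \(B^{\gamma _u}\), since \(B_t\mathbbm {1}_{\lbrace t \le \gamma _u \rbrace } = B_{t\wedge \gamma _u}\) on account of \(B_{\gamma _u} = 0\).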

Remark 3.10

For positive optional submartingales, the Madan–Roynette–Yor type formulae established in Theorem 3.9 can be viewed as special cases of a general result on the multiplicative system associated with a positive optional submartingale. For interested readers, we refer to Lemma 3.10 in the recent work of Jeanblanc and Li [16] and the references therein.

To establish the analogous formulae at \(t =\infty \), we point out that, since \(\tau = \sup {\lbrace s:X_s = 0 \rbrace }\) is assumed to be finite and the measure \(dA_+\) is carried on the set \({\lbrace X=0 \rbrace }\), the process A is flat on \(\,\rrbracket \tau , \infty \,\rrbracket \). Hence, if \(X_+\) converges almost surely to an integrable random variable \(X_\infty \), then X must also converge almost surely to \(X_\infty \). In view of this, we simplify the problem and suppose below that \(X_+\) is of class-(D) and make use of limit results for càdlàg submartingales.

Theorem 3.11

Let X be an optional semimartingale of class-\((\Sigma )\) such that \(X_+\) is of class-(D). Then

  1. (i)

    For any finite stopping time \(\sigma \), we have

    $$\begin{aligned} {\mathbb {E}}(X_\infty \mathbbm {1}_{\lbrace \tau \le \sigma \rbrace }|{{{\mathcal {F}}}}_\sigma )&= X_{\sigma +} \end{aligned}$$
  2. (ii)

    and if \(X_\tau =0\) then we have

    $$\begin{aligned} {\mathbb {E}}(X_\infty \mathbbm {1}_{\lbrace \tau < \sigma \rbrace }|{{{\mathcal {F}}}}_\sigma )&= X_{\sigma }. \end{aligned}$$

Proof

From Lemma 3.4, we know that X and hence \(X_+\) can be written as the difference of two submartingales, that is \(X_+ = (X^+)_+ - (X^-)_+\) and \(|X_+| = (X^+)_+ + (X^-)_+\). This implies that both \((X^+)_+\) and \((X^-)_+\) are right continuous positive submartingales of class-(D) and there exists \(X_\infty := X^+_\infty -X^-_\infty \in L^1.\) Since \(X^+\) is an optional submartingale and \((X^+)_+\) is of class-(D), we have, for all stopping times T, the inequality \({\mathbb {E}}(X_\infty ^+ \,|\,{{{\mathcal {F}}}}_T) \ge (X^+)_{T+} \ge X^+_{T}\).

From this we deduce that \(X^+\) is also of class-(D) and \(\lim _{t\rightarrow \infty } X^+_t = X^+_\infty \). Similar arguments show that \(X^-\) is an optional submartingale of class-(D). From the Doob–Meyer–Mertens–Gal’čuk decomposition, we can write \(X^+ = m + a\) and \(X^- = u+v\), where m and u are optional martingales and a and v are strongly predictable increasing processes of integrable variation. From the decomposition \(X = M + A\), we deduce that \(M + A = m-u + a-v\). Then, using the fact that X is of class-\((\Sigma )\), so that the process A is left continuous and therefore strongly predictable, we see that \(M-(m-u)= (a-v)-A\) is a càdlàg predictable local martingale of finite variation and is therefore constant. This implies that \(M = m-u\) is an optional martingale and is therefore uniformly integrable.

(i) From Lemma 3.8 or (13) and the fact that \({\lbrace g_t \le u \rbrace } = {\lbrace t< k_u \rbrace }\) for \(0\le u < t\), we deduce that for any finite stopping time \(\sigma \),

$$\begin{aligned} X_{\infty }\mathbbm {1}_{\lbrace \tau \le \sigma \rbrace } - X_{\sigma +}= M_{k_{\sigma }} - M_{\sigma }. \end{aligned}$$

The result then follows from the optional sampling theorem and the uniform integrability of M.

(ii) We recall \(\gamma _s = \inf {\lbrace u\ge s: X_u= 0 \rbrace }\) and observe that for any finite stopping time \(\sigma \)

$$\begin{aligned} X_{\gamma _{\sigma }}&= X_\infty \mathbbm {1}_{\lbrace \tau< \sigma \rbrace } + X_{\gamma _\sigma }\mathbbm {1}_{\lbrace \tau \ge \sigma \rbrace } = X_\infty \mathbbm {1}_{\lbrace \tau < \sigma \rbrace } \end{aligned}$$

where the second equality follows from the fact that \(X_\tau = 0\) and

$$\begin{aligned} X_{\gamma _{\sigma }} \mathbbm {1}_{\{\tau = \sigma \}} = X_{\infty } \mathbbm {1}_{\{\tau = \sigma \}\cap \{X_\tau \ne 0\}} + X_{\tau } \mathbbm {1}_{\{\tau = \sigma \}\cap \{X_\tau = 0\}} = X_{\infty } \mathbbm {1}_{\{\tau = \sigma \}\cap \{X_\tau \ne 0\}} = 0. \end{aligned}$$

On the other hand, \(X_{\gamma _\sigma } = M_{\gamma _\sigma } + A_{\sigma }\), since A is left continuous and \(dA_+\) is carried on \({\lbrace X= 0 \rbrace }\). The result again follows from applying the optional sampling theorem to M.

\(\square \)

Remark 3.12

As a check, one can apply Theorem 3.11 (i) and (ii) to \(X := 1-\widetilde{Z}\), where \(\widetilde{Z}\) is the Azéma supermartingale associated with a finite honest time \(\tau \). From the fact that \(\widetilde{Z}_\infty = Z_\infty = 0\) and \(\widetilde{Z}_\tau = 1\), we recover \({{\mathbb {P}}}(\tau < \sigma \,|\, {{{\mathcal {F}}}}_\sigma ) = 1-\widetilde{Z}_\sigma \) and \({\mathbb {P}}(\tau \le \sigma |{{{\mathcal {F}}}}_\sigma ) = 1-Z_{\sigma }\). Finally, one can also recover Proposition 3.7 from Theorem 3.11 (i), by observing that for \(X := (K-M)^+\) we have \(X = X_+\) and \(X_{k_\sigma } = 0\).
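Spelling out the substitution in (i) (a routine verification): with \(X = 1-\widetilde{Z}\) we have \(X_\infty = 1-\widetilde{Z}_\infty = 1\) and \(X_{\sigma +} = 1-\widetilde{Z}_{\sigma +} = 1- Z_\sigma \), so that Theorem 3.11 (i) reads

$$\begin{aligned} {{\mathbb {P}}}(\tau \le \sigma \,|\,{{{\mathcal {F}}}}_\sigma ) = {\mathbb {E}}(X_\infty \mathbbm {1}_{\lbrace \tau \le \sigma \rbrace }\,|\,{{{\mathcal {F}}}}_\sigma ) = X_{\sigma +} = 1- Z_\sigma , \end{aligned}$$

while (ii) applies because \(X_\tau = 1-\widetilde{Z}_\tau = 0\) and gives \({{\mathbb {P}}}(\tau < \sigma \,|\,{{{\mathcal {F}}}}_\sigma ) = X_\sigma = 1-\widetilde{Z}_\sigma \).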

3.2 Construction of Finite Honest Times

As an application of the results we have obtained on optional semimartingales of class-\((\Sigma )\), we show in Proposition 3.14 a method to construct examples of optional submartingales of class-\((\Sigma )\) where both \(A^c\) and the left continuous pure jump part \(A^g\) are nonzero. This gives examples of finite honest times for which the representations obtained in Theorem 2.9 are non-trivial.

We recall that in continuous filtrations, all martingales are continuous, while for jumping filtrations it was shown in Theorem 1 of Jacod and Skorokhod [15] that all martingales are almost surely of locally finite variation. Therefore, by combining Theorem 1 of Jacod and Skorokhod [15] and Lemma 3.5, one can produce non-trivial examples of positive optional submartingales of class-\((\Sigma )\), as defined in Definition 3.2, by taking products of known examples in the Brownian filtration (see Appendix A of Mansuy and Yor [22]) and in the Poisson filtration (see Aksamit et al. [4]).

Definition 3.13

An honest time is said to be of type-c (resp. type-d) if the martingale part of the Doob–Meyer–Mertens–Gal’čuk decomposition of \(1-\widetilde{Z}\) is continuous (resp. locally of finite variation).

Proposition 3.14

Let \(\tau ^c\) be a finite honest time of type-c and \(\tau ^d\) a finite honest time of type-d, then \({{\mathbb {P}}}(\tau ^c\vee \tau ^d< t\,|\,{{{\mathcal {F}}}}_t) = {{\mathbb {P}}}(\tau ^c< t\,|\,{{{\mathcal {F}}}}_t) {{\mathbb {P}}}(\tau ^d < t\,|\,{{{\mathcal {F}}}}_t)\).

Proof

We see that both \(X_t : = {{\mathbb {P}}}(\tau ^c < t\,|\,{{{\mathcal {F}}}}_t)\) and \(Y_t:={{\mathbb {P}}}(\tau ^d < t\,|\,{{{\mathcal {F}}}}_t)\) are positive optional submartingales of class-\((\Sigma )\). From Definition 2.1 the random time \(\tau ^c\vee \tau ^d\) is a finite honest time. Moreover, since the martingale part of X is continuous while that of Y is locally of finite variation, we have \([M^X, M^Y] = 0\), so that from Lemma 3.5 the process XY is a positive optional submartingale of class-\((\Sigma )\). We observe that

$$\begin{aligned} \tau ^c\vee \tau ^d = \sup {\lbrace s: X_s = 0 \rbrace } \vee \sup {\lbrace s: Y_s = 0 \rbrace } = \sup {\lbrace s: X_sY_s = 0 \rbrace } \end{aligned}$$

and \((XY)_{\tau ^c\vee \tau ^d} = 0\) since \(X_{\tau ^c} = Y_{\tau ^d} = 0\). The result then follows by an application of Theorem 3.11 (ii) to the process XY. \(\square \)

The above result says that the Azéma optional submartingale associated with the maximum of two finite honest times can be expressed as the product of the Azéma optional submartingale associated with each individual honest time. To the best of our knowledge, this type of representation has not previously appeared in the literature.
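In terms of the Azéma optional supermartingales themselves, Proposition 3.14 can be rephrased (a direct rewriting, with the superscript notation used only here) as

$$\begin{aligned} 1 - \widetilde{Z}^{\,\tau ^c\vee \tau ^d}_t = \big (1-\widetilde{Z}^{\,\tau ^c}_t\big )\big (1-\widetilde{Z}^{\,\tau ^d}_t\big ), \end{aligned}$$

where, for a random time \(\rho \), we write \(\widetilde{Z}^{\,\rho }_t := {{\mathbb {P}}}(\rho \ge t\,|\,{{{\mathcal {F}}}}_t)\).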

Example 3.15

For an example of an honest time of type-c, we consider the following, taken from Mansuy and Yor [22]. Let B be a Brownian motion and

$$\begin{aligned} T_1 := \inf {\lbrace t: B_t = 1 \rbrace } \quad \mathrm {and}\quad \tau ^c := \sup {\lbrace t\le T_1 : B_t = 0 \rbrace }. \end{aligned}$$

The random time \(\tau ^c\) is a finite honest time and the Azéma submartingale associated with \(\tau ^c\) is given by \(X_t := {{\mathbb {P}}}(\tau ^c < t\,|\, {{{\mathcal {F}}}}^B_t) = {{\mathbb {P}}}(\tau ^c \le t\,|\, {{{\mathcal {F}}}}^B_t) = M^c_t + A^c_t\) where

$$\begin{aligned} M^c_t = B^+_{t\wedge T_1}-\frac{1}{2}L^0_{t\wedge T_1}(B) \quad \mathrm {and} \quad A^c_t = \frac{1}{2}L^0_{t\wedge T_1}(B). \end{aligned}$$

Here the process \(L^0(B)\) is the local time of the Brownian motion B at zero.
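One can also verify directly (a routine check) that this example fits Definition 3.2: summing the two terms gives \(X_t = B^+_{t\wedge T_1}\), and the Tanaka formula yields

$$\begin{aligned} X_t = B^+_{t\wedge T_1} = \int _0^{t\wedge T_1} \mathbbm {1}_{\lbrace B_s>0 \rbrace }\,dB_s + \frac{1}{2}L^0_{t\wedge T_1}(B), \end{aligned}$$

so \(X_0 = 0\), the martingale part \(M^c\) is continuous, and the continuous increasing part \(A^c = \frac{1}{2}L^0_{\cdot \wedge T_1}(B)\) only increases on \({\lbrace B = 0 \rbrace }\subseteq {\lbrace X = 0 \rbrace }\), confirming that \(\tau ^c\) is of type-c.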

For an honest time of type-d, we consider the example in Proposition 4.12 of [4]. Let J be a compound Poisson process with intensity \(\mu \). Given \(a \ge 0\), we set

$$\begin{aligned} \tau ^d := \sup {\lbrace t : \mu t - J_t \le a \rbrace }. \end{aligned}$$

Then under certain conditions on the intensity and the distribution of the jump size, it is known that \(\tau ^d\) is a finite honest time.

We denote by \(\Psi (x)\) the ruin probability associated with the process \(\mu t - J_t\), i.e. for every \(x \ge 0\), \(\Psi (x) := {{\mathbb {P}}}(t^x < \infty )\) with \(t^x := \inf {\lbrace t : x + \mu t - J_t < 0 \rbrace }\). Then the Azéma submartingale and optional submartingale of \(\tau ^d\) admit the decompositions \(Y_{t+} := {{\mathbb {P}}}(\tau ^d \le t\,|\,{{{\mathcal {F}}}}^J_t) = M^d_t + A^d_t\) and \(Y_t := {{\mathbb {P}}}(\tau ^d < t\,|\,{{{\mathcal {F}}}}^J_t) = M^d_t + A^d_{t-}\) where

$$\begin{aligned} M^d_t&= 1- (1 - \Psi (0))\sum _n \mathbbm {1}_{\lbrace t\ge T_n \rbrace } - \Psi (\mu t - J_t - a)\mathbbm {1}_{\lbrace \mu t - J_t\ge a \rbrace } - \mathbbm {1}_{\lbrace \mu t-J_t< a \rbrace }\\ A^d_t&= (1 - \Psi (0)) \sum _n \mathbbm {1}_{\lbrace t\ge T_n \rbrace }. \end{aligned}$$

Here the martingale \(M^d\) is of finite variation and \(A^d\) is a predictable pure jump process with jump times given by \((T_n)_{n\in {{\mathbb {N}}}}\), where, for \(n> 1\),

$$\begin{aligned} T_1 := \inf \{t>0: \mu t - J_t = a\} \quad \mathrm {and} \quad T_n := \inf \{t > T_{n-1} : \mu t- J_t = a\}. \end{aligned}$$

Finally, we suppose that the Brownian motion B and the compound Poisson process J given above are independent of each other and we consider the joint filtration \({{\mathbb {F}}}= ({{{\mathcal {F}}}}_t)_{t\ge 0}\) where \({{{\mathcal {F}}}}_t = {{{\mathcal {F}}}}^B_t\vee {{{\mathcal {F}}}}^J_t\). From Proposition 3.14 we have \(\widetilde{Z}_t = {{\mathbb {P}}}(\tau ^c \vee \tau ^d \ge t\, |\, {{{\mathcal {F}}}}_t) = (1-X_tY_t)\). To compute the multiplicative representation of \(\widetilde{Z}\) obtained in Theorem 2.9, it is sufficient to apply the Itô formula to \(1-XY\) to obtain N and \({\overline{N}}\), which are given by

$$\begin{aligned} N_t = (1- X_tY_t) e^{\int ^t_0 \frac{1}{2} Y_s dL^0_{s\wedge T_1}(B)} \quad \text {and} \quad {\overline{N}}_t = e^{\int ^t_0 \frac{1}{2}Y_s dL^0_{s\wedge T_1}(B)}. \end{aligned}$$