1 Introduction

Optional projections of martingales onto smaller filtrations retain the martingale property; for the class of local martingales, however, this preservation may fail. For instance, the projection of a nonnegative local martingale can only be guaranteed to be a supermartingale in the smaller filtration, but might fail to be a local martingale; see Stricker [41] and Föllmer and Protter [14].

Positive local martingales appear naturally as deflators in arbitrage theory. (See Sect. 2 for definitions and a review of classical concepts in the theory of no-arbitrage.) Consider two nested, right-continuous filtrations \(\mathbb {F} \subseteq \mathbb {G}\) and a continuous and \(\mathbb {F}\)-adapted process \(S\), having the interpretation of the discounted price of a financial asset. Then the existence of a strictly positive \(\mathbb {G}\)-local martingale \(Y\) such that \(Y S\) is also a \(\mathbb {G}\)-local martingale is equivalent to the so-called absence of arbitrage of the first kind. If no such arbitrage opportunities are possible under \(\mathbb {G}\), then the same is true under the smaller filtration \(\mathbb {F}\); we refer to Sect. 2 for a rigorous argument for this assertion. Hence, there must exist an \(\mathbb {F}\)-local martingale \(L\) such that \(L S\) is an \(\mathbb {F}\)-local martingale. Let now \(\mathcal{Y}^{\mathbb {G}}\) and \(\mathcal{Y}^{\mathbb {F}}\) denote the set of all \(\mathbb {G}\)-adapted and \(\mathbb {F}\)-adapted such local martingale deflators, respectively. The above no-arbitrage considerations yield the implication

$$ \mathcal{Y}^{\mathbb {G}} \neq \emptyset \qquad \Longrightarrow \qquad \mathcal{Y}^{\mathbb {F}} \neq \emptyset . $$

It is natural to ask at this point if there is a direct way to construct an element of \(\mathcal{Y}^{\mathbb {F}}\) from a given \(Y \in \mathcal{Y}^{\mathbb {G}}\). The optional projection \({}^{o} Y\) of \(Y\) on \(\mathbb {F}\) is not necessarily an \(\mathbb {F}\)-local martingale, as discussed above; hence it cannot be expected to be in \(\mathcal{Y}^{\mathbb {F}}\). However, as our first main result in Theorem 3.1 implies, the local martingale part \(L\) of the multiplicative Doob–Meyer \(\mathbb {F}\)-decomposition \({}^{o} Y = L(1-K)\) is an element of \(\mathcal{Y}^{\mathbb {F}}\). Sect. 3 contains the proof of Theorem 3.1 and related results.

This motivates another natural question: when does the projection of \(Y\) lose the local martingale property; that is, under which circumstances is it the case that \(K_{\infty }> 0\)? In Sect. 4, we investigate this question from a Bayesian viewpoint. As it turns out, whenever certain models (which were possible under the Bayesian prior) become impossible under the observed data (the stock price path, in this case), the projection of the deflator \(Y\) loses the local martingale property, and \(K\) increases. In Sect. 5, we generalise the Bayesian viewpoint, under the assumption that a certain dominating probability measure exists.

Markets admitting local martingale deflators are complete if and only if such a deflator is unique. Since different local martingale deflators in \(\mathcal{Y}^{\mathbb {G}}\) might have the same projection, it is easy to find an example in which the market is incomplete under \(\mathbb {G}\) but complete under \(\mathbb {F}\). Indeed, consider a market that is complete under \(\mathbb {F}\) and enlarge the filtration by an independent Brownian motion to obtain \(\mathbb {G}\); the market is then automatically incomplete under \(\mathbb {G}\).

The reverse question is of more interest: given that the market is complete under \(\mathbb {G}\), is it also complete under \(\mathbb {F}\)? As it turns out, this is not always true; it is possible that certain \(\mathbb {F}\)-local martingale deflators do not result from the local martingale component of projections of \(\mathbb {G}\)-local martingale deflators, and completeness in financial markets may be lost when we pass to smaller filtrations. We provide an explicit counterexample in Sect. 6. This uses the Lévy transformation \(B\) of a standard Brownian motion \(W\), namely

$$ B :=\int _{0}^{\cdot }\mathrm {sign}(W_{u}) \mathrm {d}W_{u} = |W| - \Lambda , $$

where \(\Lambda \) is the local time of \(W\) at zero; see Revuz and Yor [36, Theorem VI.1.2]. In fact, we provide a rather general class of counterexamples whose construction is of independent interest. More precisely, let \(\mathbb {F}^{W}\) and \(\mathbb {F}^{B}\) denote the smallest right-continuous filtrations making \(W\) and \(B\) adapted, respectively. Both \(W\) and \(B\) are standard Brownian motions with the predictable representation property in \(\mathbb {F}^{W}\). Furthermore, \(B\) is a standard Brownian motion with the predictable representation property in \(\mathbb {F}^{B}\), and it holds that \(\mathbb {F}^{B} = \mathbb {F}^{|W|}\); see Jeanblanc et al. [20, Sect. 5.8.2]. The information lost when passing from \(W\) to \(B\) consists of the signs of the excursions of \(W\); in view of Blumenthal [6, page 114], conditionally on \(\mathcal {F}^{B}_{\infty }= \mathcal {F}^{|W|}_{\infty }\), these signs are independent and identically distributed. Given that there are countably many excursions of \(W\), there clearly exist non-deterministic \(\mathcal {F}^{W}_{\infty }\)-measurable random variables which are independent of \(\mathcal {F}^{B}_{\infty }\). As we argue in Theorem 6.1 in Sect. 6.2, one may construct such random variables in an \(\mathbb {F}^{W}\)-adapted way: there exist \(\mathbb {F}^{W}\)-stopping times with any prescribed probability law on the positive half-line which are independent of \(\mathcal {F}^{B}_{\infty }\). This last result yields an interesting corollary: there exist two nested filtrations \(\mathbb {F} \subseteq \mathbb {G}\) and a one-dimensional continuous stock price process \(S\), adapted to \(\mathbb {F}\), such that the market is complete under \(\mathbb {G}\) and under \(\mathbb {F}\), but not under some “intermediate information” model. The material in Sect. 6.4 also yields a counterexample to a conjecture put forth in Jacod and Protter [18].
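To build intuition for the Lévy transformation (outside the paper's formal argument), one can discretise the defining integral. Tanaka's formula then reads \(\Lambda = |W| - B\), and the discrete increments \(|b| - |a| - \mathrm{sign}(a)(b-a)\) are nonnegative for all real \(a, b\), so the simulated local time is pathwise nondecreasing. A minimal simulation sketch, with illustrative step counts:

```python
import numpy as np

# Simulate a Brownian path W and its Levy transform B = int sign(W) dW.
# Discretely, Tanaka's formula |W| = B + Lambda gives the local time
# Lambda = |W| - B, whose discrete increments are provably nonnegative:
# |b| - |a| - sign(a)(b - a) >= 0 for all real a, b.
rng = np.random.default_rng(0)
n, T = 100_000, 1.0
dW = rng.standard_normal(n) * np.sqrt(T / n)
W = np.concatenate([[0.0], np.cumsum(dW)])

B = np.concatenate([[0.0], np.cumsum(np.sign(W[:-1]) * dW)])
Lam = np.abs(W) - B  # discrete local time of W at zero

assert np.all(np.diff(Lam) >= -1e-12)   # Lambda is pathwise nondecreasing
assert Lam[-1] > 0                      # W has visited zero, so local time grew
```

Refining the time grid, \(\Lambda\) accumulates only while \(W\) is near zero, which is precisely the sign-loss mechanism described above.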

The complementary problem of whether certain no-arbitrage and completeness conditions are preserved after filtration enlargement has been studied extensively, but is not considered in the present paper; we instead refer to Coculescu et al. [11], Fontana et al. [16], Jeanblanc and Song [19], Acciaio et al. [1], Song [39], Aksamit et al. [2], Song [40], Aksamit et al. [3], Chau et al. [9], Fontana [15], Chau et al. [8] and the references therein. Filtration shrinkage and its effect on semimartingale characteristics has been studied by several researchers. Föllmer and Protter [14], Larsson [27] and Kardaras and Ruf [26] consider the reciprocal of a Bessel process, projected onto the filtration generated by one of its components, providing explicit examples where projections of nonnegative local martingales fail to be local martingales themselves. Bielecki et al. [5] discuss how the characteristics of semimartingales are related in different filtrations. Biagini et al. [4] consider questions of “bubbles” and arbitrage opportunities in the absence of full information.

2 Notation, definitions and review of classical results

In this section, we introduce the framework and recall certain classical results which find use later on.

Fix a probability space \((\Omega , \mathcal {G}_{\infty }, {\mathbb {P}})\), equipped with two right-continuous filtrations \(\mathbb {F} := (\mathcal {F}_{t})_{t \geq 0}\) and \(\mathbb {G} := (\mathcal {G}_{t})_{t \geq 0}\) which are nested in the sense that \(\mathbb {F} \subseteq \mathbb {G}\), i.e., \(\mathcal {F}_{t} \subseteq \mathcal {G}_{t}\) holds for all \(t \geq 0\). For a given process \(X = (X_{t})_{t \geq 0}\), we use \(\mathbb {F}^{X}\) to denote the smallest right-continuous filtration that makes \(X\) adapted. If \(X\) is additionally nonnegative, let \({}^{o} X\) denote its \(\mathbb {F}\)-optional projection, which always exists but could take the value \(\infty \); see for example Nikeghbali [31, Theorem 4.1]. If \(X\) is a semimartingale, we use \(\mathcal {E}(X)\) to denote its stochastic exponential.

We consider an \(\mathbb {F}\)-adapted continuous-path \(\mathbb {G}\)-semimartingale \(S\), representing the price of a financial asset expressed in terms of a certain tradable denomination. Everything that follows carries over to the multi-asset case \(S = (S^{1}, \dots , S^{d})\) for \(d \in \mathbb{N}\), at the expense of more complicated notation; we refrain from considering multi-asset models as notation is already a bit heavy. However, we stress that continuity of the paths of \(S\) will be important, as we shall explain in places. All wealth is considered in terms of the same tradable denomination, which is operationally the same as having an additional asset available with unit price.

The financial notions below can be considered under different filtrations; so we use ℍ to generically denote either the “small” \(\mathbb {F}\) or the “large” \(\mathbb {G}\) filtration.

For given \(x \geq 0\), let \(\mathcal{X}^{\mathbb {H}} (x)\) denote the set of all nonnegative wealth processes, i.e., all nonnegative processes \(V^{x,\theta }\) of the form

$$\begin{aligned} V^{x,\theta } = x + \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}, \end{aligned}$$

where \(\theta \) is ℍ-predictable and \(S\)-integrable. We set \(\mathcal{X}^{\mathbb {H}} :=\bigcup _{x \geq 0} \mathcal{X}^{ \mathbb {H}} (x)\).

For any \(T > 0\) and \(\xi \in L^{0}_{+} (\mathcal {H}_{T})\), we define

$$ x^{\mathbb {H}} (T, \xi ) :=\inf \{ x \geq 0 : \exists V \in \mathcal{X}^{\mathbb {H}} (x) \text{ such that } V_{T} \geq \xi \ {\mathbb {P}}\text{-a.e.} \} $$

to be the hedging capital associated with \(\xi \).

Definition 2.1

We say that the market is ℍ-viable if \(x^{\mathbb {H}} (T, \xi ) = 0\) implies \(\xi = 0\) ℙ-a.e., for any \(T > 0\) and any \(\xi \in L^{0}_{+} (\mathcal {H}_{T})\).

The concept of viability (for a specific filtration) is also known as absence of arbitrage of the first kind, or as the condition of no unbounded profit with bounded risk, locally in time; see Karatzas and Kardaras [23].

Definition 2.2

An ℍ-local martingale deflator is a strictly positive ℍ-local martingale \(Y\) such that \(Y S\) is also an ℍ-local martingale. Correspondingly, an ℍ-supermartingale deflator is a strictly positive ℍ-supermartingale \(Y\) such that \(Y X\) is an ℍ-supermartingale for all \(X \in \mathcal{X}^{\mathbb {H}}\).

The class of all ℍ-local martingale deflators is denoted by \(\mathcal{Y}^{\mathbb {H}}\).

Theorem 2.3

(Choulli and Stricker [10], Kardaras [25])

The following statements are equivalent:

1) The market is ℍ-viable.

2) There exists an ℍ-local martingale deflator, i.e., \(\mathcal{Y}^{\mathbb {H}} \neq \emptyset \).

3) There exists an ℍ-supermartingale deflator.

4) Writing \(S = A + M\), where \(A\) is a continuous finite-variation ℍ-adapted process and \(M\) an ℍ-local martingale, it holds that \(A = \int _{0}^{\cdot }H_{u} \mathrm {d}[S, S]_{u}\), where \(H\) is an ℍ-predictable process such that the nondecreasing process \(\int _{0}^{\cdot }H^{2}_{u}\mathrm {d}[S, S]_{u}\) is ℙ-a.e. finite-valued.

Note that the structure condition 4) in Theorem 2.3 above implies that \(S\)-integrability of an ℍ-predictable process \(\theta \) amounts to

$$ \int _{0}^{\cdot }\theta ^{2}_{u} \mathrm {d}[S, S]_{u} < \infty \qquad {\mathbb {P}}\text{-a.e.,} $$
(2.1)

as the validity of (2.1) and the Cauchy–Schwarz inequality already imply that ℙ-a.e.,

$$\begin{aligned} \int _{0}^{\cdot }| \theta _{u} | | \mathrm {d}A_{u} | &\leq \int _{0}^{\cdot }| \theta _{u} | |H_{u}| \mathrm {d}[S, S]_{u} \\ &\leq \left ( \int _{0}^{\cdot }\theta ^{2}_{u} \mathrm {d}[S, S]_{u} \right )^{1/2} \left ( \int _{0}^{\cdot }H^{2}_{u} \mathrm {d}[S, S]_{u} \right )^{1/2} < \infty . \end{aligned}$$

In particular, under the structure condition 4), \(H\) is \(S\)-integrable and we may define the specific ℍ-local martingale deflator \(Y = 1 / \widehat{V}\), where

$$ \widehat{V} :=\mathcal {E}\bigg( \int _{0}^{\cdot }H_{u} \mathrm {d}S_{u} \bigg). $$
(2.2)

The above \(\widehat{V}\) is a special wealth process in \(\mathcal{X}^{\mathbb {H}} (1)\) called the ℍ-numéraire.
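As a concrete special case (not treated explicitly in the paper), take the Black–Scholes model \(\mathrm{d}S_{t} = S_{t}(\mu \,\mathrm{d}t + \sigma \,\mathrm{d}W_{t})\): the structure condition gives \(H = \mu/(\sigma^{2} S)\), the numéraire is \(\widehat{V}_{t} = \exp((\mu/\sigma) W_{t} + (\mu^{2}/2\sigma^{2}) t)\), and \(Y = 1/\widehat{V} = \mathcal{E}(-(\mu/\sigma) W)\) deflates both the unit asset and \(S\). The sketch below checks \({\mathbb{E}}[Y_{t}] = 1\) and \({\mathbb{E}}[Y_{t} S_{t}] = S_{0}\) via the lognormal moment formula \({\mathbb{E}}[\exp(a W_{t})] = \exp(a^{2} t/2)\); all parameter values are illustrative.

```python
import math

# Hypothetical Black-Scholes parameters (illustrative only).
mu, sigma, t, S0 = 0.08, 0.2, 2.0, 100.0
theta = mu / sigma  # market price of risk

def E_exp(a, t):
    """E[exp(a * W_t)] for a Brownian motion W, by the lognormal formula."""
    return math.exp(a * a * t / 2)

# Deflator Y_t = exp(-theta*W_t - theta^2 t/2) is the reciprocal numeraire;
# its expectation should be 1.
EY = math.exp(-theta**2 * t / 2) * E_exp(-theta, t)
assert abs(EY - 1.0) < 1e-12

# S_t = S0 exp(sigma W_t + (mu - sigma^2/2) t); E[Y_t S_t] should be S0.
EYS = S0 * math.exp((mu - sigma**2 / 2 - theta**2 / 2) * t) * E_exp(sigma - theta, t)
assert abs(EYS - S0) < 1e-9
```

Both identities reduce to cancelling exponents, which is exactly the martingale property of \(Y\) and \(YS\) in this model.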

Remark 2.4

The nesting property \(\mathbb {F} \subseteq {\mathbb {G}}\) seems to yield directly that

$$\begin{aligned} \mathbb {G}\text{-viability implies } \mathbb {F}\text{-viability}. \end{aligned}$$
(2.3)

However, the implication in (2.3) is a bit more subtle, the reason being that the inclusion \(\mathcal {X}^{\mathbb {F}} \subseteq \mathcal {X}^{\mathbb {G}}\) is not in general true when \(\mathbb {F} \subseteq {\mathbb {G}}\). Indeed, an \(\mathbb {F}\)-predictable process \(\theta \) might be \(S\)-integrable under \(\mathbb {F}\), but not under \(\mathbb {G}\). For example, assume that \(\mathbb {F}\) is the natural filtration of a Brownian motion \(W\) and \(\mathbb {G}\) is the smallest right-continuous filtration that makes \(W\) adapted and \(W_{1}\) a \(\mathcal {G}_{0}\)-measurable random variable. The process \(\theta \) given by \(\theta _{t} :=- 1 / ( \sqrt{1-t} \log (1-t) ) \mathbf {1}_{\{t < 1\}}\), \(t \in [0,1]\), is shown in Jeulin and Yor [21] to be \(S\)-integrable under \(\mathbb {F}\), but not under \(\mathbb {G}\); hence, in this example, \(\mathcal {X}^{\mathbb {F}} \nsubseteq \mathcal {X}^{\mathbb {G}}\).

The above notwithstanding, \(\mathbb {G}\)-viability does imply that \(\mathcal {X}^{\mathbb {F}} \subseteq \mathcal {X}^{\mathbb {G}}\). Indeed, if an \(\mathbb {F}\)-predictable process \(\theta \) is \(S\)-integrable under \(\mathbb {F}\), it satisfies (2.1) a fortiori, which in view of \(\mathbb {G}\)-viability and the discussion right before this remark implies that \(\theta \) is also \(S\)-integrable under \(\mathbb {G}\). Hence \(\mathcal {X}^{\mathbb {F}} \subseteq \mathcal {X}^{\mathbb {G}}\) holds, and (2.3) follows.

A wealth process \(X \in \mathcal{X}^{\mathbb {H}}\) is called ℍ-maximal if whenever \(X' \in \mathcal{X}^{\mathbb {H}}\) is such that \(X'_{0} = X_{0}\) and \({\mathbb {P}}[X'_{T} \geq X_{T}] = 1\) for some \(T \geq 0\), then in fact \({\mathbb {P}}[X'_{T} = X_{T}] = 1\).

Definition 2.5

The market \(S\) is called ℍ-complete if for any \(T > 0\) and \(\xi \in L^{0}_{+} (\mathcal {H}_{T})\) with \(x = x^{\mathbb {H}} (T, \xi ) < \infty \), there exists a maximal \(X \in \mathcal{X}^{\mathbb {H}} (x)\) with \({\mathbb {P}}[X_{T} = \xi ] = 1\).

Theorem 2.6

(Stricker and Yan [42])

Assume ℍ-viability or, equivalently, that \(\mathcal{Y}^{\mathbb {H}} \neq \emptyset \). Then the market is ℍ-complete if and only if there exists exactly one ℍ-local martingale deflator.

3 Projections of local martingale deflators

3.1 A first result

The main result of this section is the following.

Theorem 3.1

Let \(Y\) be a \(\mathbb {G}\)-local martingale deflator for \(S\) with \(\mathbb {F}\)-optional projection \({}^{o} Y\). Consider the multiplicative decomposition \({}^{o} Y = L (1 - K)\), where \(L\) is an \(\mathbb {F}\)-local martingale and \(K\) a nondecreasing \(\mathbb {F}\)-predictable \([0,1)\)-valued process with \(K_{0} = 0\). Then \(L\) is an \(\mathbb {F}\)-local martingale deflator for \(S\).

Theorem 3.1 will be immediate after the following two results have been established. Related to the first result is Gombani et al. [17, Proposition 3.1], where instead of local martingale deflators so-called linear price systems are considered.

Proposition 3.2

Let \(Y\) be a \(\mathbb {G}\)-supermartingale deflator for \(S\). Then its \(\mathbb {F}\)-optional projection \({}^{o} Y\) is an \(\mathbb {F}\)-supermartingale deflator for \(S\).

Proof

For any \(0 \leq s \leq t < \infty \) and \(X \in \mathcal {X}^{\mathbb {F}} \subseteq \mathcal {X}^{\mathbb {G}}\), thanks to Remark 2.4, it holds that

$$\begin{aligned} {\mathbb {E}}[{}^{o} Y_{t} X_{t} | \mathcal {F}_{s}] &= {\mathbb {E}}\left [ {\mathbb {E}}[{}^{o} Y_{t} X_{t} | \mathcal {F}_{t}] \big| \mathcal {F}_{s}\right ] \\ &= {\mathbb {E}}\big[ {\mathbb {E}}\left [Y_{t} X_{t} | \mathcal {F}_{t}\right ] \big| \mathcal {F}_{s}\big] \\ &={\mathbb {E}}\left [Y_{t} X_{t} | \mathcal {F}_{s}\right ] \\ &= {\mathbb {E}}\big[ {\mathbb {E}}\left [Y_{t} X_{t} | \mathcal {G}_{s}\right ] \big| \mathcal {F}_{s}\big] \\ &\leq {\mathbb {E}}\left [ Y_{s} X_{s} | \mathcal {F}_{s}\right ] \\ &= {}^{o} Y_{s} X_{s}. \end{aligned}$$

It remains to show that \({}^{o} Y\) is strictly positive. For this, fix \(t \geq 0\) and note that

$$ 0 = {\mathbb {E}}[{}^{o} Y_{t} \mathbf {1}_{\{{}^{o} Y_{t} = 0\}}] = {\mathbb {E}}[Y_{t} \mathbf {1}_{\{{}^{o} Y_{t} = 0\}}]. $$

Since \({\mathbb {P}}\left [Y_{t} = 0\right ] = 0\), it follows that \({\mathbb {P}}[{}^{o} Y_{t} = 0] = 0\). □

The filtration in the statement of Proposition 3.3 below is implicit.

Proposition 3.3

Consider the multiplicative decomposition \(Y = L (1 - K)\) of a supermartingale deflator \(Y\) for the continuous semimartingale \(S\), where \(L\) is a local martingale and \(K\) a nondecreasing predictable \([0,1)\)-valued process. Then \(L\) is a local martingale deflator for \(S\).

Proof

Thanks to Theorem 2.3, the process \(\widehat{V}\) in (2.2) exists. Note that \(1/ \widehat{V}\) is a local martingale. By stopping, we may assume without loss of generality that \(\widehat{V}\) is a uniformly integrable martingale and hence defines a probability measure ℚ equivalent to ℙ. Then \(S\) is a local ℚ-martingale and it suffices to prove that the local ℚ-martingale \(\widehat{V} L\) is a local martingale deflator for \(S\) under ℚ. Upon changing to ℚ and replacing \(Y\) and \(L\) by \(\widehat{V} Y\) and \(\widehat{V} L\), respectively, we may and shall assume that \(S\) is a local ℙ-martingale in everything below.

Because \(L\) is strictly positive, the Kunita–Watanabe decomposition yields the representation \(L = \mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}) N\), where \(\theta \) is predictable and \(S\)-integrable, and \(N\) is a strictly positive local martingale such that \([N, S] = 0\) holds. In order to prove the statement, it now suffices to show that \(L = N\), i.e., \(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} = 0\).

For each \(n \in \mathbb {N}\), consider the positive wealth process \(\mathcal {E}(n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}) \in \mathcal {X}\). Since

$$ Y \mathcal {E}\left (n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} \right ) = (1 - K) \mathcal {E}\left (\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}\right ) \mathcal {E}\left (n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} \right ) N $$

is a supermartingale, \(N\) is a strictly positive local martingale strongly orthogonal to the continuous semimartingale \(S\) and \((1 - K) \mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}) \mathcal {E}(n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u})\) is predictable, integration by parts implies that \((1 - K) \mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}) \mathcal {E}(n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u})\) is also a (local) supermartingale. Write \(1 - K = \mathcal {E}(- C)\), where \(C\) is nondecreasing and predictable with \(\Delta C < 1\). Then

$$\begin{aligned} &(1 - K) \mathcal {E}\left (\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}\right ) \mathcal {E}\left (n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}\right ) \\ &= \mathcal {E}(- C) \mathcal {E}\left (\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} \right ) \mathcal {E}\left (n \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} \right ) \\ &= \mathcal {E}\left (- C + (n+1) \int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} + n \int _{0}^{\cdot }\theta ^{2}_{u} \mathrm {d}[S, S ]_{u} \right ) \end{aligned}$$

holds in view of Yor’s formula, where we have used the fact that \([C, S] = 0\). It follows that \(- C + n \int _{0}^{\cdot }\theta ^{2}_{u} \mathrm {d}[S, S ]_{u}\) must be a nonincreasing process. Since this must hold for all \(n \in \mathbb {N}\), we obtain \(\int _{0}^{\cdot }\theta ^{2}_{u} \mathrm {d}[S, S ]_{u} = 0\), which is the same as \(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u} = 0\). □
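The use of Yor's formula \(\mathcal{E}(X)\mathcal{E}(Y) = \mathcal{E}(X + Y + [X,Y])\) above can be checked in closed form when \(X = aW\) and \(Y = bW\) for a single Brownian motion \(W\), so that \([X,Y]_{t} = ab\,t\); both sides then equal \(\exp((a+b)W_{t} - (a^{2}+b^{2})t/2)\). A quick numeric sanity check, with illustrative values:

```python
import math

# Yor's formula E(X)E(Y) = E(X + Y + [X,Y]) checked for X = a*W, Y = b*W
# driven by one Brownian motion, where [X,Y]_t = a*b*t.
def stoch_exp(value, qv):
    """E(Z)_t = exp(Z_t - [Z,Z]_t / 2) for a continuous semimartingale Z
    with value Z_t = value and quadratic variation [Z,Z]_t = qv."""
    return math.exp(value - qv / 2)

a, b, t, w = 0.7, -1.3, 2.5, 0.4   # illustrative numbers only

lhs = stoch_exp(a * w, a * a * t) * stoch_exp(b * w, b * b * t)
# X + Y + [X,Y] has value (a+b)*w + a*b*t and quadratic variation (a+b)^2 * t.
rhs = stoch_exp((a + b) * w + a * b * t, (a + b) ** 2 * t)
assert abs(lhs - rhs) < 1e-12
```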

3.2 Ramifications

As mentioned after Theorem 2.3, a particular \(\mathbb {G}\)-local martingale deflator is the one corresponding to the reciprocal of the \(\mathbb {G}\)-numéraire in \(\mathcal {X}^{\mathbb {G}}\). It is natural to ask whether the \(\mathbb {F}\)-optional projection of the reciprocal of the \(\mathbb {G}\)-numéraire is the reciprocal of the \(\mathbb {F}\)-numéraire. The following example shows that this is not necessarily the case, even if the reciprocal of the \(\mathbb {G}\)-numéraire is a \(\mathbb {G}\)-martingale. For a positive result in this direction, under additional assumptions, we refer to Proposition 5.10 later on.

Example 3.4

Suppose the underlying probability space supports a standard Brownian motion \(W\) and for some \(q \in (0,1)\) an independent Bernoulli random variable \(\Theta \) with \({\mathbb {P}}[\Theta = 1] = q = 1 - {\mathbb {P}}[\Theta = 0]\). The filtration \(\mathbb {G}\) is given by \(\mathcal {G}_{t} = \mathcal {F}_{t}^{W} \vee \sigma (\Theta )\) for all \(t \geq 0\), while \(\mathbb {F} = \mathbb {F}^{X} = \mathbb {F}^{S}\), where

$$\begin{aligned} X :=\Theta \int _{0}^{\cdot }\mathrm {d}u + \int _{0}^{\cdot }(\mathbf {1}_{ \{u < 1\}} + \Theta \mathbf {1}_{\{u \geq 1\}}) \mathrm {d}W_{u}, \qquad S :=\mathcal {E}(X). \end{aligned}$$

Define the wealth process \(\widehat{V} \in \mathcal {X}^{\mathbb {G}}\) by

$$\begin{aligned} \widehat{V} &:=\mathcal {E}\left (\int _{0}^{\cdot }\frac{\Theta }{S_{u}} \mathrm {d}S_{u}\right ) = \mathcal {E}\left (\Theta ^{2} \int _{0}^{\cdot }\mathrm {d}u + \Theta \int _{0}^{\cdot }(\mathbf {1}_{\{u < 1\}} + \Theta \mathbf {1}_{\{u \geq 1\}}) \mathrm {d}W_{u}\right ) \\ &\phantom{:}= \mathcal {E}\left (\Theta \int _{0}^{\cdot }\left ( \mathrm {d}u + \mathrm {d}W_{u} \right ) \right ), \end{aligned}$$

where we have used the fact that \(\Theta = \Theta ^{2}\) since \(\Theta \) is \(\{0,1\}\)-valued. It is straightforward to check that \(Y :=1 / \widehat{V} = \mathcal {E}(- \Theta W)\) is a \(\mathbb {G}\)-local martingale deflator, and obviously a \(\mathbb {G}\)-martingale. Moreover, \(\Theta \) is \(\mathcal {F}_{1}\)-measurable, yielding \(\mathcal {F}_{t} = \mathcal {G}_{t}\) for all \(t \geq 1\) and

$$\begin{aligned} {}^{o} Y_{t} = {\mathbb {E}}\left [\left .Y_{t}\right | \mathcal {F}_{t} \right ] = \mathbf {1}_{\{\Theta = 0\}} + \mathrm {e}^{-X_{t} + t/2} \mathbf {1}_{ \{\Theta = 1\}}, \qquad t \geq 1. \end{aligned}$$

Straightforward computations give

$$ {\mathbb {P}}[\Theta = 0 | \mathcal {F}_{t}] = \frac{1-q}{1 - q + q \exp (X_{t} - t/2)}, \qquad 0 \leq t < 1, $$

implying that

$$ {}^{o} Y_{t} = {\mathbb {E}}[\mathbf {1}_{\{\Theta = 0\}} | \mathcal {F}_{t}] + \mathrm {e}^{-X_{t} + t/2} {\mathbb {E}}[\mathbf {1}_{\{\Theta = 1\}} | \mathcal {F}_{t} ] = \frac{1}{1 - q + q \exp (X_{t} - t/2)} $$

for \(0 \leq t < 1\). Hence, \({}^{o} Y\) has a jump at time \(t=1\). Since the \(\mathbb {F}\)-numéraire has continuous paths, its reciprocal clearly cannot equal \({}^{o} Y\).

We now provide a result concerning the dynamics of \(S\) in the smaller \(\mathbb {F}\)-filtration. To make headway, note that under \(\mathbb {G}\)-viability, Theorem 2.3 yields some \(\mathbb {G}\)-predictable process \(G\) and some \(\mathbb {G}\)-local martingale \(M\) such that

$$\begin{aligned} S = S_{0} + \int _{0}^{\cdot }G_{u} \mathrm {d}[M, M]_{u} + M. \end{aligned}$$
(3.1)

Proposition 3.5

Assume that \(S\) is \(\mathbb {G}\)-viable and that the \(\mathbb {F}\)-optional projection of \(|G|\) is \(({\mathbb {P}}\times [S, S])\)-a.e. finite, i.e.,

$$\begin{aligned} {\mathbb {E}}\left [ \int _{0}^{\infty }\mathbf {1}_{ \{{}^{o} |G|_{u} = \infty \}} \mathrm {d}[S, S]_{u} \right ] = 0. \end{aligned}$$
(3.2)

Then the \(\mathbb {F}\)-predictable projection \(F\) of \(G\) exists and satisfies \(\int _{0}^{\cdot }F_{u}^{2} \mathrm {d}[S, S]_{u} < \infty \). Moreover,

$$\begin{aligned} S = S_{0} + \int _{0}^{\cdot }F_{u} \mathrm {d}[S, S]_{u} + N, \end{aligned}$$
(3.3)

where \(N\) is an \(\mathbb {F}\)-local martingale.

Proof

Without loss of generality, upon using \(\mathbb {F}\)-localisation, we may assume that the \(\mathbb {F}\)-adapted processes \(S\) and \([M, M] = [S,S]\) are uniformly bounded; hence \(M\) is a \(\mathbb {G}\)-martingale. An appropriate modification of Meyer [30, Théorème \(1'\)] yields that the dual optional \(\mathbb {F}\)-projection of \(\int _{0}^{\cdot }G_{u} \mathrm {d}[S, S]_{u}\) equals \(\int _{0}^{\cdot }F_{u} \mathrm {d}[S,S]_{u}\). The fact that \(\int _{0}^{\cdot }F_{u}^{2} \mathrm {d}[S, S]_{u} < \infty \) follows from Theorem 2.3, given that \(\mathbb {G}\)-viability implies \(\mathbb {F}\)-viability from (2.3). □

As the next example illustrates, although (3.3) always holds for some \(\mathbb {F}\)-predictable process \(F\), the predictable \(\mathbb {F}\)-projection of \(G\) need not exist in general when the assumption (3.2) fails. (We are grateful to Walter Schachermayer for proposing the idea for this example.)

Example 3.6

Let \(\Omega = \mathbb{N} \times C([0, \infty ); \mathbb {R})\). Define \(\Theta (\theta , w) = \theta \) and \(W_{t}(\theta , w) = w_{t}\) for all \((\theta ,w) \in \Omega \) and \(t \in [0, \infty )\). Let \(\mathbb {G}\) denote the smallest right-continuous filtration making \(\Theta \) a \(\mathcal {G}_{0}\)-measurable random variable and \(W\) adapted. Consider any probability measure \(\mu \) on \(2^{\mathbb {N}}\) with \(\sum _{\theta \in \mathbb {N}}\theta \mu [ \{ \theta \}] = \infty \), and let ℙ denote the product probability on \(\mathcal {G}_{\infty }\) of \(\mu \) and Wiener measure. Note that \({\mathbb {E}}[\Theta ] = \infty \) and that \(\Theta \) and \(W\) are independent under ℙ. Prokaj and Schachermayer [35, Theorem 1] and a simple conditioning argument yield the existence of a \(\mathbb {G}\)-predictable process \(H\) taking values in \(\{-1, 1\}\) such that the process

$$ X :=\int _{0}^{\cdot }H_{u} \Theta \mathrm {d}u + \int _{0}^{\cdot }H_{u} \mathrm {d}W_{u} $$

is an \(\mathbb {F}\)-Brownian motion, where \(\mathbb {F} :=\mathbb {F}^{X}\). Note also that \(\Theta \) and \(X\) are independent under ℙ (indeed, the law of \(X\) conditionally on \(\Theta \) coincides with its unconditional one, namely the standard Wiener measure). Thus for \(u \geq 0\),

$$\begin{aligned} {\mathbb {E}}[ |H_{u} \Theta | \,|\, \mathcal {F}_{u}]= \lim _{n \uparrow \infty } {\mathbb {E}}\left [ |H_{u} \Theta | \wedge n | \mathcal {F}_{u}\right ] &= \lim _{n \uparrow \infty } {\mathbb {E}}\left [ \Theta \wedge n | \mathcal {F}_{u}\right ] \\ &= \lim _{n \uparrow \infty } {\mathbb {E}}\left [ \Theta \wedge n\right ] = {\mathbb {E}}[\Theta ] = \infty . \end{aligned}$$
(3.4)

Defining now \(S :=\mathcal {E}(X)\), (3.1) holds with \(M :=\int _{0}^{\cdot }S_{u} H_{u} \mathrm {d}W_{u}\) and \(G :=H \Theta / S\). In particular, \(\int _{0}^{\cdot }G_{u}^{2} \mathrm {d}u = \Theta ^{2} \int _{0}^{\cdot }S_{u}^{-2} \mathrm {d}u < \infty \) holds. Moreover, it follows from (3.4) that (3.2) fails. Nevertheless, (3.3) holds with \(F = 0\) and \(N = \mathcal {E}(X) - 1\); here, \(F\) is not the predictable \(\mathbb {F}\)-projection of \(G\), as the latter does not exist.
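The only requirement on the prior \(\mu \) above is \(\sum _{\theta \in \mathbb {N}}\theta \mu [\{\theta \}] = \infty \); for instance, \(\mu [\{\theta \}] = (6/\pi^{2})\,\theta^{-2}\) works, in which case the truncated means in (3.4) grow like \((6/\pi^{2})\log n\). A quick numerical illustration (the computational truncation point is an implementation choice, not part of the example):

```python
import numpy as np

# A prior on the positive integers with infinite mean: mu({theta}) = c/theta^2
# with c = 6/pi^2, so that the weights sum to 1. The truncated means
# E[min(Theta, n)] then increase without bound, roughly like c*log(n).
c = 6 / np.pi**2
theta = np.arange(1.0, 1_000_000.0)      # support truncated for computation
weights = c / theta**2

def truncated_mean(n):
    """E[min(Theta, n)] under mu, up to the computational truncation."""
    return float(np.sum(np.minimum(theta, n) * weights))

means = [truncated_mean(n) for n in (10, 100, 1000)]
assert means[0] < means[1] < means[2]    # no finite mean: the means keep growing
```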

Remark 3.7

If there exist finitely many \(\mathbb {F}\)-optional processes \((\Phi ^{i})_{i = 1, \dots , I}\) for some \(I \in \mathbb{N}\) such that ℙ-a.e., \(\mathrm {sign}(G) = \mathrm {sign}(\Phi ^{i})\) holds \([S, S]\)-a.e. for some \(i \in \{1, \dots , I\}\), then (3.2) holds. To see this, note that by localisation, we may assume uniform boundedness, say by a constant \(\kappa > 0\), of the \(\mathbb {F}\)-predictable process \([S, S]\) as well as of all processes

$$ \widetilde{S}^{i} :=S_{0} + \int _{0}^{\cdot }\mathrm {sign}(\Phi ^{i}_{u}) \mathrm {d}S_{u} = S_{0} + \int _{0}^{\cdot }\mathrm {sign}(\Phi ^{i}_{u}) G_{u} \mathrm {d}[M, M]_{u} + \int _{0}^{\cdot }\mathrm {sign}(\Phi ^{i}_{u}) \mathrm {d}M_{u} $$

for \(i = 1, \dots , I\). It now suffices to observe that

$$\begin{aligned} {\mathbb {E}}\left [\int _{0}^{\cdot }|G_{u}| \mathrm {d}[M, M]_{u} \right ] & \leq \sum _{i = 1}^{I} {\mathbb {E}}\bigg[\mathbf {1}_{\{\mathrm {sign}(G) = \mathrm {sign}( \Phi ^{i})\}} \bigg(\widetilde{S}^{i} - S_{0} - \int _{0}^{\cdot }\mathrm {sign}(\Phi ^{i}_{u}) \mathrm {d}M_{u} \bigg)\bigg] \\ &\leq 2 I \kappa + \sum _{i = 1}^{I} {\mathbb {E}}\bigg[\bigg|\int _{0}^{\cdot }\mathrm {sign}(\Phi ^{i}_{u}) \mathrm {d}M_{u} \bigg|\bigg] < \infty ; \end{aligned}$$

here the last inequality uses the fact that the stochastic integrals have bounded quadratic variation. This then yields (3.2).

For example, assume that \(\mathbb {G}\) supports a Brownian motion \(W\) and a \(\mathcal {G}_{0}\)-measurable ℝ-valued random variable \(\Theta \). Let \(H\) denote any \(\mathbb {G}\)-predictable process such that \(\int _{0}^{\cdot }H_{u}^{2} \mathrm {d}u < \infty \) and set

$$ S :=\mathcal {E}\bigg(\Theta \int _{0}^{\cdot }H_{u} \mathrm {d}u + W\bigg). $$

Consider now a right-continuous filtration \(\mathbb {F}\) with \(\mathbb {F} \subseteq \mathbb {G}\) such that \(S\) and \(\mathrm {sign}(H)\) are \(\mathbb {F}\)-adapted. With \(I = 2\), \(\Phi ^{1} = \mathrm {sign}(H)\) and \(\Phi ^{2} = -\mathrm {sign}(H)\), the \(\mathbb {F}\)-optional projection of \(|G|\), where \(G :=\Theta H/S\), is \(({\mathbb {P}}\times [S,S])\)-a.e. finite, i.e., (3.2) holds.

4 A Bayesian framework

4.1 Setup

Consider some parameter space ℜ equipped with a \(\sigma \)-algebra ℛ and a probability measure \(\mu \) which will be the “prior” law of a parameter. Let \(\Omega = \mathfrak{R} \times C([0, \infty ); \mathbb {R})\). Define \(\Theta (\theta , x) = \theta \) and \(X_{t}(\theta , x) = x_{t}\) for all \((\theta ,x) \in \Omega \) and \(t \in [0, \infty )\). Define \(\mathbb {F}\) as the smallest right-continuous filtration making \(X\) adapted, i.e., \(\mathbb {F} = \mathbb {F}^{X}\). Moreover, let \(\mathbb {G}\) be the smallest right-continuous filtration containing \(\mathbb {F}\) and further making \(\Theta \) a \(\mathcal {G}_{0}\)-measurable random variable. Next, define ℚ as the product probability measure on \(\mathcal {G}_{\infty }\) of \(\mu \) and Wiener measure; under ℚ, \(X\) is a \(\mathbb {G}\)-Brownian motion independent of \(\Theta \), the latter random variable having law \(\mu \). Also, under ℚ, \(X\) is an \(\mathbb {F}\)-Brownian motion. Let \({\mathbb {W}}= {\mathbb {Q}}|_{\mathcal {F}_{\infty }}\) denote Wiener measure.

Consider a functional \(G : \Omega \times [0, \infty ) \to [- \infty , \infty ]\), assumed to be \(\mathbb {G}\)-optional, which will serve as the drift functional for the stock returns in the filtration \(\mathbb {G}\). We allow \(G\) to take the values \(\pm \infty \), although such values will not be “seen” by the solutions of the martingale problems we consider later. Define also \(A : \Omega \times [0, \infty ) \to [0, \infty ]\) via \(A(\theta , x, \cdot ) :=\int _{0}^{\cdot }G(\theta , x, u)^{2} \mathrm {d}u\); note that \(A\) is nondecreasing in the time component.

For \(\mu \)-a.e. \(\theta \in \mathfrak{R}\), we assume the existence of a probability \({\mathbb {P}}^{\theta }\) on \(\mathcal {F}_{\infty }\) such that \({\mathbb {P}}^{\theta }\ll _{\mathcal {F}_{t}} {\mathbb {W}}\) for all \(t \geq 0\), \(\int _{0}^{\cdot }|G(\theta , X, u)| \mathrm {d}u\) is \({\mathbb {P}}^{\theta }\)-a.e. finitely valued and \(X - \int _{0}^{\cdot }G(\theta , X, u) \mathrm {d}u\) is an \(\mathbb {F}\)-local \({\mathbb {P}}^{\theta }\)-martingale.

Some remarks are in order. First of all, under the previous assumptions, the process \(W^{\theta }:=X - \int _{0}^{\cdot }G(\theta , X, u) \mathrm {d}u\) is actually an \((\mathbb {F}, {\mathbb {P}}^{\theta })\)-Brownian motion, as follows from Lévy’s characterisation theorem. Secondly, if we define the set-valued process \(\Sigma : \Omega \times [0, \infty ) \to \mathcal{R}\) via

$$ \Sigma _{t} :=\left \{\theta \in \mathfrak{R} : A(\theta , X, t) = \infty\right \} \in \mathcal {F}_{t-}, \qquad t \geq 0, $$

Girsanov’s theorem implies that for \((\theta ,t) \in \mathfrak{R} \times [0, \infty )\),

$$\begin{aligned} \zeta ^{\theta }_{t} :=\frac{\mathrm {d}{\mathbb {P}}^{\theta }}{\mathrm {d}{\mathbb {W}}}\bigg|_{\mathcal {F}_{t}} = \exp \left (\int _{0}^{t} G(\theta , X, u) \mathrm {d}X_{u} - \frac{1}{2} A(\theta , X, t)\right ) \mathbf {1}_{\{\theta \notin \Sigma _{t}\}}, \end{aligned}$$

which in particular implies that \({\mathbb {P}}^{\theta }\) is necessarily unique. Thanks to Stricker and Yor [43, Proposition 5] applied under ℚ and a \(\mathbb {G}\)-localisation argument, the mapping \(\zeta : \mathfrak{R} \times C([0, \infty ); \mathbb {R}) \times [0, \infty ) \to [- \infty , \infty ]\) can be chosen jointly measurable by taking an appropriate version. Finally, since \({\mathbb {W}}[\theta \in \Sigma _{t}, \, \zeta ^{\theta}_{t} > 0] = 0\), it follows that \({\mathbb {P}}^{\theta }\left [\theta \in \Sigma _{t}\right ] = 0\) holds for all \((\theta ,t) \in \mathfrak{R} \times [0, \infty )\), even though \({\mathbb {Q}}\left [\theta \in \Sigma _{t}\right ] > 0\) is possible.

Define now ℙ on \(\mathcal {G}_{\infty }\) via \({\mathbb {P}}\left [\mathrm {d}\theta , \mathrm {d}x\right ] :=\mu \left [\mathrm {d}\theta\right ] {\mathbb {P}}^{\theta }\left [\mathrm {d}x\right ]\), and note that the process \(W :=X - \int _{0}^{\cdot }G(\Theta , X, u) \mathrm {d}u\) is a standard \((\mathbb {G}, {\mathbb {P}})\)-Brownian motion; in particular, \(W\) and \(\Theta \) are independent under ℙ. Indeed, as in Example 3.6, this follows from the fact that the conditional law of \(W\) given \(\Theta \) coincides with its unconditional law.

In order to connect with the financial setting of the previous sections, one may define the asset price \(S\) to equal \(X\), or if one insists on positive asset prices, one may set \(S = \mathcal {E}(X)\). Choosing one or the other is plainly a matter of interpretation and does not affect the mathematical content of the discussion here. The fact that \({\mathbb {P}}^{\theta }\left [\theta \in \Sigma _{t}\right ] = 0\) for all \(t \geq 0\), which is equivalent to \({\mathbb {P}}^{\theta }\)-a.e. finiteness of the process \(A(\theta , x, \cdot ) = \int _{0}^{\cdot }G( \theta , x, u)^{2} \mathrm {d}u\), implies by Theorem 2.3 the \(\mathbb {F}\)-viability of the \({\mathbb {P}}^{\theta }\)-model for \(\mu \)-a.e. \(\theta \in \mathfrak{R}\).

We are interested in the dynamics of \(X\) in \(\mathbb {F}\) under ℙ. For this, we make one final assumption (recall also the discussion in Remark 3.7), namely

$$ \int _{\mathfrak{R}} |G(\theta , X, \cdot )| \zeta ^{\theta }_{\cdot }\mu \left [\mathrm {d}\theta\right ] < \infty \qquad ({\mathbb {P}}\times [X,X]) \text{-a.e.} $$
(4.1)

Under all the previous assumptions, Bayes’ formula yields for \(t \geq 0\) that

$$ {\mathbb {E}}_{{\mathbb {P}}}\left [G(\Theta , X, t) | \mathcal {F}_{t}\right ] = \frac{\int _{\mathfrak{R}} G(\theta , X, t) \zeta ^{\theta }_{t} \mu \left [\mathrm {d}\theta\right ]}{\zeta _{t}}, \qquad \text{where } \zeta _{t} :=\int _{\mathfrak{R}} \zeta ^{\theta }_{t} \mu \left [\mathrm {d}\theta\right ] $$

is an \((\mathbb {F}, {\mathbb {Q}})\)-Brownian martingale. In fact, upon defining the random measure-valued process \((\mu _{t})_{t \geq 0}\) via

$$ \frac{\mu _{t} [\mathrm {d}\theta ]}{\mu [\mathrm {d}\theta ]} = \frac{\zeta ^{\theta }_{t}}{\zeta _{t}} = \frac{\zeta (\theta , X, t)}{\int _{\mathfrak{R}} \zeta (\eta , X, t) \mu \left [\mathrm {d}\eta\right ]}, $$

it follows that \({\mathbb {E}}_{{\mathbb {P}}}\left [G(\Theta , X, t) | \mathcal {F}_{t}\right ] = \int _{ \mathfrak{R}} G(\theta , X, t) \mu _{t} \left [\mathrm {d}\theta\right ]\). Therefore, defining the functional \(F: C([0, \infty ); \mathbb {R}) \times [0, \infty ) \rightarrow (- \infty , \infty ]\) via

$$ F(x, t) :=\frac{\int _{\mathfrak{R}} G(\theta , x, t) \zeta (\theta , x, t) \mu \left [\mathrm {d}\theta\right ]}{\int _{\mathfrak{R}} \zeta (\theta , x, t) \mu \left [\mathrm {d}\theta\right ]} \qquad \text{if } \int _{\mathfrak{R}} \left \vert G(\theta , x, t)\right \vert \zeta ( \theta , x, t) \mu \left [\mathrm {d}\theta\right ] < \infty $$

and \(F(x, t) :=\infty \) otherwise, it follows that the process \(W^{\mathbb {F}} :=X - \int _{0}^{\cdot }F(X, u) \mathrm {d}u\) is an \(( \mathbb {F}, {\mathbb {P}})\)-Brownian motion.

The \(\mathbb {G}\)-local martingale deflators for \(S = X\) (or \(S = \mathcal {E}(X)\)) are of the form

$$\begin{aligned} Y &= h(\Theta ) \exp \left ( - \int _{0}^{\cdot}G(\Theta , X, u) \mathrm {d}W_{u} - \frac{1}{2} \int _{0}^{\cdot}G(\Theta , X, u)^{2} \mathrm {d}u\right ) \\ &= h(\Theta ) \exp \left ( - \int _{0}^{\cdot}G(\Theta , X, u) \mathrm {d}X_{u} + \frac{1}{2} \int _{0}^{\cdot}G(\Theta , X, u)^{2} \mathrm {d}u\right ) = h(\Theta ) \frac{1}{\zeta ^{\Theta }}, \end{aligned}$$

where \(h: {\mathfrak{R}} \rightarrow (0,\infty )\) is any strictly positive Borel function with the property \(\int _{\mathfrak{R}} h(\theta ) \mu [\mathrm {d}\theta ] = 1\). Note that since \({\mathbb {P}}[\Theta \in \Sigma _{t}] = 0\), we have \({\mathbb {P}}[\zeta ^{\Theta}_{t} > 0] = 1\) for all \(t \geq 0\). The optional projection of any such \(Y\) on \(\mathbb {F}\) satisfies, by Bayes’ rule,

$$\begin{aligned} {}^{o} Y_{t} = {\mathbb {E}}_{{\mathbb {P}}}\left [Y_{t} | \mathcal {F}_{t}\right ] &= \frac{\int _{\mathfrak{R}} (h(\theta ) / \zeta ^{\theta}_{t} ) \zeta ^{\theta }_{t} \mathbf {1}_{\{\zeta ^{\theta }_{t} > 0\}} \mu [\mathrm {d}\theta ] }{\zeta _{t}} \\ &= \frac{\int _{\mathfrak{R}} h(\theta ) \mathbf {1}_{\{\zeta ^{\theta }_{t} > 0\}} \mu [\mathrm {d}\theta ] }{\zeta _{t}} = (1 - K_{t}^{h}) \frac{1}{\zeta _{t}}, \qquad t \geq 0, \end{aligned}$$

where

$$ K^{h}_{t} :=\int _{\Sigma _{t}} h(\theta ) \mu [\mathrm {d}\theta ], \qquad t \geq 0. $$

Note that \(K^{h}\) is a nondecreasing \(\mathbb {F}\)-predictable process. In particular, there is a “loss of mass” exactly when certain models become impossible. If, for each \(t \geq 0\), the conditional law of \(\Theta \) under ℙ given \(\mathcal {F}_{t}\) has the same support as the unconditional law of \(\Theta \), then \(K^{h} = 0\). By Theorem 3.1, \(1/\zeta \) is an \(\mathbb {F}\)-local martingale deflator. This uses the fact that \(1/\zeta \) is indeed an \((\mathbb {F}, {\mathbb {P}})\)-local martingale: \(\zeta \) is an \((\mathbb {F}, {\mathbb {Q}})\)-Brownian martingale, hence continuous, and in particular cannot jump to zero.

Lemma 4.1

It holds that \(\left \{\zeta = 0\right \} = \left \{\mu [\Sigma ] = 1\right \}\). In particular, for any \(t \geq 0\) and any nonnegative \(\mathcal {F}_{t}\)-measurable \(\xi \), it holds that

$$ {\mathbb {E}}_{{\mathbb {P}}}\left [\frac{1}{\zeta _{t}} \xi\right ] = {\mathbb {E}}_{{\mathbb {W}}}[\xi \mathbf {1}_{\{\mu [\Sigma _{t}] < 1\}}]. $$

Proof

Simply note that

$$ \left \{\zeta = 0\right \} = \left \{ \int _{\mathfrak{R}} \zeta ^{\theta}\mu [\mathrm {d}\theta ] = 0\right \} = \left \{ \mu [\Sigma ] = 1\right \}. $$

Given that \(\zeta \) is the density process of ℙ with respect to \({\mathbb {W}}\) on \(\mathbb {F}\), we have

$$ {\mathbb {E}}_{{\mathbb {P}}}\left [\frac{1}{\zeta _{t}} \xi\right ] = {\mathbb {E}}_{{\mathbb {W}}}[\xi \mathbf {1}_{\{\zeta _{t}> 0\}}], \qquad t \geq 0, $$

which immediately gives the result. □

As a corollary of the above, we obtain that the “default” (in the terminology of Elworthy et al. [12]) of the \((\mathbb {F}, {\mathbb {P}})\)-local martingale \(1/\zeta \) equals

$$ 1 - {\mathbb {E}}_{{\mathbb {P}}}\left [\frac{1}{\zeta _{t}}\right ] = 1 - {\mathbb {W}}\big[ \mu [\Sigma _{t}] < 1 \big] = {\mathbb {W}}\big[ \mu [\Sigma _{t}] = 1 \big], \qquad t \geq 0. $$

This is clear: \(1/\zeta \) will be an (\(\mathbb {F}, {\mathbb {P}}\))-martingale if and only if ℙ and \({\mathbb {W}}\) are locally equivalent, which will happen exactly when under \({\mathbb {W}}\), the family of parameters that yield a strictly positive Radon–Nikodým derivative at time \(t\) has strictly positive \(\mu \)-measure, for each \(t \geq 0\).

Remark 4.2

The \((\mathbb {F}, {\mathbb {P}})\)-market is complete. Indeed, fix some \(T \geq 0\) and some nonnegative \(\mathcal {F}_{T}\)-measurable random variable \(D_{T}\) with \(D_{0} = {\mathbb {E}}_{{\mathbb {P}}}\left [D_{T}/\zeta _{T}\right ] < \infty \). We then also have \(D_{0} = {\mathbb {E}}_{{\mathbb {Q}}} [D_{T} \mathbf {1}_{\{\zeta _{T} > 0\}}] < \infty \), and the martingale representation theorem gives the existence of a \(({\mathbb {Q}}, S)\)-integrable \(H\) such that \(D_{0} + \int _{0}^{T} H_{u} \mathrm {d}S_{u} = D_{T} \mathbf {1}_{\{\zeta _{T} > 0\}}\) holds ℚ-a.e. This also implies that \(D_{0} + \int _{0}^{T} H_{u} \mathrm {d}S_{u} = D_{T}\) holds ℙ-a.e.

Example 4.3

Let \(\mu \) be an arbitrary law on \(\mathfrak{R} :=\mathbb{R}\) and set \(G(\theta , x, t) :=\theta H(t, x)\) for all \(\theta \in \mathbb {R}\), \(t \geq 0\) and \(x \in C([0, \infty ); \mathbb {R})\), where \(H\) is \(\mathbb {F}\)-optional with \(0 < \int _{0}^{t} H^{2}(u, x) \mathrm {d}u < \infty \) for all \(t > 0\) and \(x \in C([0, \infty ); \mathbb {R})\). Then

$$ \begin{aligned} \zeta ^{\Theta }&= \mathcal {E}\left (\Theta \int _{0}^{\cdot }H(u, X) \mathrm {d}X_{u} \right ) \\ &= \exp \left (\Theta \int _{0}^{\cdot }H(u, X) \mathrm {d}X_{u} - \frac{\Theta ^{2}}{2} \int _{0}^{\cdot }H^{2}(u, X) \mathrm {d}u\right ) \end{aligned} $$

holds; hence \(\Sigma = \emptyset \) in this case. Moreover, for \(t > 0\), we have

$$ {\mathbb {E}}_{{\mathbb {P}}} [|\Theta | |\mathcal {F}_{t}] = \frac{\int _{\mathbb{R}} |\theta | \exp (\theta \int _{0}^{t} H(u, X) \mathrm {d}X_{u} - \frac{\theta ^{2}}{2} \int _{0}^{t} H^{2}(u, X) \mathrm {d}u) \mu [\mathrm {d}\theta ]}{\zeta _{t}} < \infty , $$

where

$$ \zeta _{t} = \int _{\mathbb{R}} \exp \left (\theta \int _{0}^{t} H(u, X) \mathrm {d}X_{u} - \frac{\theta ^{2}}{2} \int _{0}^{t} H^{2}(u, X) \mathrm {d}u\right ) \mu [\mathrm {d}\theta ]. $$

Hence all the assumptions of the present section, including (4.1), are satisfied and

$$\begin{aligned} X = \Theta \int _{0}^{\cdot }H(u, X) \mathrm {d}u + W = \int _{0}^{\cdot }{\mathbb {E}}_{{\mathbb {P}}} [\Theta |\mathcal {F}_{u}] H(u, X) \ \mathrm {d}u + W^{ \mathbb {F}} \end{aligned}$$

for some \((\mathbb {G}, {\mathbb {P}})\)-Brownian motion \(W\) and some \((\mathbb {F}, {\mathbb {P}})\)-Brownian motion \(W^{\mathbb {F}}\). However heavy-tailed the law of \(\Theta \) may be (and even if it does not have any moments), its generalised conditional expectations given \(\mathcal {F}_{\cdot }\) exist. This example with \(H = 1\) is discussed in Kailath [22]; see also Remark 3.7.
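For a Gaussian prior, the Bayes formula above reduces to the classical Kalman–Bucy filter mean. The following sketch is a numerical sanity check, not part of the argument: the Gaussian prior and the choice \(H = 1\) are illustrative assumptions. It compares a quadrature evaluation of \({\mathbb {E}}_{{\mathbb {P}}} [\Theta |\mathcal {F}_{t}]\) with the closed form \(\sigma ^{2} x_{t} / (1 + \sigma ^{2} t)\) obtained by completing the square in the exponent.

```python
import numpy as np

def posterior_mean(x_t, t, sigma):
    # Bayes formula E_P[Theta | F_t] = int theta zeta^theta_t mu(dtheta)
    #                                  / int zeta^theta_t mu(dtheta),
    # with H = 1, so zeta^theta_t = exp(theta * x_t - theta**2 * t / 2),
    # and a (hypothetical) Gaussian prior mu = N(0, sigma**2).
    theta = np.linspace(-40.0, 40.0, 400001)
    log_w = theta * x_t - 0.5 * theta**2 * (t + 1.0 / sigma**2)
    w = np.exp(log_w - log_w.max())   # stabilised integrand on the grid
    return float((theta * w).sum() / w.sum())

# Completing the square shows the posterior is
# N(sigma^2 x_t / (1 + sigma^2 t), sigma^2 / (1 + sigma^2 t)).
x_t, t, sigma = 1.3, 2.0, 1.5
assert abs(posterior_mean(x_t, t, sigma) - sigma**2 * x_t / (1 + sigma**2 * t)) < 1e-6
```

With \(t = 0\), the quadrature returns the prior mean, and the posterior variance \(\sigma ^{2}/(1+\sigma ^{2} t)\) shrinks as \(t\) grows, in line with the filtering interpretation of the example.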

Example 4.4

Let \(\mu \) be an arbitrary law on ℝ and \(G(\theta , x, t) :=- \mathbf {1}_{\{{\theta x_{t} < 1}\}} \theta / (1- \theta x_{t})\) whenever \(\theta \in \mathbb {R}\), \(t \geq 0\) and \(x \in C([0,\infty ); \mathbb {R})\). The corresponding dynamics

$$ X = - \int _{0}^{\cdot }\frac{\Theta }{1-\Theta X_{u} } \mathrm {d}u + W $$
(4.2)

are those of Brownian motion conditioned to never cross the level \(1 / \Theta \), with the case \(\Theta = 0\) simply corresponding to Brownian motion. For future reference, note that

$$\begin{aligned} \lim _{t \uparrow \infty } X_{t} = -\infty \qquad \text{on }\{\Theta > 0\},\ {\mathbb {P}}\text{-a.e.}, \end{aligned}$$
(4.3)

and correspondingly, \(\lim _{t \uparrow \infty } X_{t} = \infty \) on \(\{\Theta < 0\}\), ℙ-a.e.

In this example, one easily computes \(\zeta ^{\theta }= (1 - \theta X) \mathbf {1}_{(\underline{\theta}, \, \overline{\theta}) } (\theta )\) for each \(\theta \in \mathbb {R}\), where we define \(\underline{\theta} := 1 / \inf _{u \in [0, \cdot ]} X_{u} < 0\) and \(\overline{\theta} := 1 / \sup _{u \in [0, \cdot ]} X_{u} > 0\). Clearly, \(\Sigma = \mathbb {R}\setminus (\underline{\theta}, \, \overline{\theta})\). We compute

$$ \zeta _{t} = \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \left (1 - \theta X_{t} \right ) \mu [\mathrm {d}\theta] = \mu [(\underline{\theta}_{t}, \overline{\theta}_{t})] - X_{t} \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \theta \mu \left [\mathrm {d}\theta\right ], \qquad t \geq 0. $$

Note also that

$$\begin{aligned} \int _{\mathbb{R}} G(\theta , X, t) \zeta ^{\theta }_{t} \mu \left [\mathrm {d}\theta\right ] &= - \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \theta (1-\theta X_{t} )^{-1} \left (1 - \theta X_{t} \right ) \mu \left [\mathrm {d}\theta\right ] \\ &= - \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \theta \mu \left [\mathrm {d}\theta\right ], \qquad t \geq 0. \end{aligned}$$

Defining

$$ \widehat{\Theta }_{t} = \frac{1}{\mu [(\underline{\theta}_{t}, \overline{\theta}_{t})]} \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \theta \mu \left [\mathrm {d}\theta\right ], \qquad t > 0, $$

observe that

$$ F(X, t) = \frac{- \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \theta \mu \left [\mathrm {d}\theta\right ]}{\mu [(\underline{\theta}_{t}, \overline{\theta}_{t})] - X_{t} \int _{(\underline{\theta}_{t}, \overline{\theta}_{t})} \theta \mu \left [\mathrm {d}\theta\right ]} = - \frac{\widehat{\Theta }_{t}}{1-\widehat{\Theta }_{t} X_{t} }. $$

It follows that the dynamics of \(X\) in \(\mathbb {F}\) are

$$ X = - \int _{0}^{\cdot }\frac{\widehat{\Theta }_{u}}{1 - \widehat{\Theta }_{u} X_{u} } \mathrm {d}u + W^{\mathbb {F}}, $$

which are the same dynamics as (4.2) with \(\Theta \) there replaced by the process \(\widehat{\Theta }\).

Note that \(K^{1} = 1 - \mu [(\underline{\theta}, \overline{\theta})]\). In this example, \(1/\zeta \) will be an actual \((\mathbb {F}, {\mathbb {P}})\)-martingale if and only if \(\mu [(\underline{\theta}_{t}, \overline{\theta}_{t})] > 0\) holds \({\mathbb {W}}\)-a.e. for all \(t \geq 0\), which is equivalent to saying that \(\mu [(-\varepsilon , \varepsilon ) ] > 0\) holds for all \(\varepsilon > 0\).

Let us also note that the distribution of the overall maximum \(X^{*}_{\infty }:=\max _{t \geq 0} X_{t}\) can be computed in this setup. To this end, fix \(y > 0\) and recall from (4.3) that \({\mathbb {P}}^{\theta }[X^{*}_{\infty }> y] = 1\) if \(\theta \leq 0\). If \(\theta > 0\), then

$$\begin{aligned} {\mathbb {P}}^{\theta }[X^{*}_{\infty }> y] &= {\mathbb {P}}^{\theta }\Big[\min _{t \geq 0} \zeta ^{\theta }_{t} < 1 - \theta {y}\Big] = {\mathbb {P}}^{\theta }\bigg[\max _{t \geq 0} \frac{1}{\zeta ^{\theta }_{t}} \geq \frac{1}{ 1 - \theta y} \bigg] = (1 - \theta y)^{+}. \end{aligned}$$

Here we used the fact that for each \(\theta > 0\), the \((\mathbb {F}, {\mathbb {P}}^{\theta })\)-local martingale \({1}/{\zeta ^{\theta }}\) starts at one and satisfies \({1}/{\zeta ^{\theta }_{\infty }} = 0\); by Doob's maximal identity, its overall maximum then has the law of the reciprocal of a uniform random variable on \((0,1)\). Hence we get

$$\begin{aligned} {\mathbb {P}}[X^{*}_{\infty }> y] &= \mu \left [\left (-\infty , \frac{1}{y} \right )\right ] - y \int _{0}^{1/y} \theta \mu \left [\mathrm {d}\theta\right ]. \end{aligned}$$
(4.4)

Similar computations also hold for the overall minimum of \(X\).
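As a quick sanity check of (4.4), one can fix a concrete prior and compare the right-hand side of (4.4) against the mixture \(\int _{\mathfrak{R}} {\mathbb {P}}^{\theta }[X^{*}_{\infty }> y] \mu [\mathrm {d}\theta ]\) evaluated in closed form. The sketch below uses a uniform prior on \((0,1)\), which is an illustrative assumption and not from the text; for it, the mixture gives \(1 - y/2\) for \(y \in (0,1]\) and \(1/(2y)\) for \(y \geq 1\).

```python
import numpy as np

n = 1_000_000
theta = (np.arange(n) + 0.5) / n          # midpoint grid for mu = Uniform(0, 1)

def max_tail(y):
    # right-hand side of (4.4): mu[(-inf, 1/y)] - y * int_0^{1/y} theta mu(dtheta)
    inside = theta < 1.0 / y
    return float(inside.mean() - y * (theta * inside).mean())

# closed-form mixture of (1 - theta*y)^+ over Uniform(0, 1)
for y, exact in [(0.5, 0.75), (1.0, 0.5), (2.0, 0.25), (4.0, 0.125)]:
    assert abs(max_tail(y) - exact) < 1e-4
```

Any other prior on ℝ could be substituted here; parameters \(\theta \leq 0\) would then contribute mass one to the tail, as recorded before (4.4).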

Remark 4.5

Explicit formulas for the quantities in Example 4.4 may be obtained for nice laws \(\mu \). For example, if \(\mu [\mathrm {d}\theta ] = \mathbf {1}_{\{\theta > 0\}} \theta ^{-3} \mathrm {e}^{-1/\theta } \mathrm {d}\theta \) for all \(\theta \in \mathbb {R}\) (inverse Gamma distribution), one obtains

$$ \frac{1}{\mu \left [\left (0, 1 / x^{*}\right )\right ]} \int _{0}^{1/x^{*}} \theta \mu [\mathrm {d}\theta ] = \frac{1}{\int _{x^{*}}^{\infty }u \mathrm {e}^{-u} \mathrm {d}u} \mathrm {e}^{-x^{*}} = \frac{1}{1+x^{*}}, \qquad x^{*} > 0. $$

This then yields \(\widehat{\Theta }= 1/(1+X^{*})\), where \(X^{*} :=\max _{u \in [0,\cdot ]} X_{u}\) is the running maximum of \(X\); hence \(F(X, t) = - {1}/{(1 + X^{*}_{t} - X_{t})} \) for all \(t \geq 0\). We thus obtain

$$ X = - \int _{0}^{\cdot }\frac{1}{1 + X^{*}_{u} - X_{u}} \mathrm {d}u+ W^{ \mathbb {F}} $$

for some \((\mathbb {F}, {\mathbb {P}})\)-Brownian motion \(W^{\mathbb {F}}\). Furthermore,

$$ \zeta = \mu \left [\left (0, \frac{1}{X^{*}}\right )\right ] - X \int _{0}^{1/X^{*}} \theta \mu \left [\mathrm {d}\theta\right ] = (1 + X^{*} - X) \mathrm {e}^{- X^{*}}, $$

giving in conjunction with (4.3) that the limiting conditional law for \(\Theta \) is

$$\begin{aligned} \mu _{\infty }[\mathrm {d}\theta ] &= \left (\lim _{t \uparrow \infty} \frac{\zeta ^{\theta}_{t}}{\zeta _{t}} \right ) \mu [\mathrm {d}\theta ] = \theta \mathrm {e}^{X^{*}_{\infty }} \mathbf {1}_{\{1/ \theta > X^{*}_{\infty }\}} \mu [\mathrm {d}\theta ] \\ &= \frac{1}{\theta ^{2}} \mathrm {e}^{ - (1/\theta - X^{*}_{\infty })} \mathbf {1}_{\{1/\theta > X^{*}_{\infty }\}} \mathrm {d}\theta ; \end{aligned}$$

in other words, \(1/\Theta - X^{*}_{\infty }\) given \(\mathcal {F}_{\infty }\) (hence in particular also given \(X^{*}_{\infty }\)) has a standard exponential law under ℙ. Moreover, (4.4) yields that \(X^{*}_{\infty }\) also has a standard exponential law under ℙ. Hence \(1/\Theta \) is the sum of the two independent standard exponentially distributed random variables \(1/\Theta - X^{*}_{\infty }\) and \(X^{*}_{\infty }\). Note also that the overall maximum \(X^{*}_{\infty }\) of \(X\) has the same distribution as the overall maximum of Brownian motion with drift rate \(-{1}/{2}\); see for example Karatzas and Shreve [24, Exercise 3.5.9].
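The inverse-Gamma computations above can also be checked numerically. The sketch below is an independent quadrature check with ad hoc grid bounds: using the substitution \(u = 1/\theta \), under which \(\mu [\mathrm {d}\theta ]\) becomes \(u \mathrm {e}^{-u} \mathrm {d}u\), it confirms both the conditional-mean identity behind \(\widehat{\Theta } = 1/(1+X^{*})\) and the standard exponential law \({\mathbb {P}}[X^{*}_{\infty }> y] = \mathrm {e}^{-y}\) obtained from (4.4).

```python
import numpy as np

u = np.linspace(0.0, 60.0, 4_000_001)   # grid for u = 1/theta; mass beyond 60 is negligible
du = u[1] - u[0]

def trap(f):
    # plain trapezoidal rule on the fixed u-grid
    return float(((f[:-1] + f[1:]) / 2.0).sum() * du)

def mass(x):
    # mu[(0, 1/x)] = int_x^inf u e^{-u} du, expected value (1 + x) e^{-x}
    return trap(u * np.exp(-u) * (u > x))

def moment(x):
    # int_0^{1/x} theta mu(dtheta) = int_x^inf e^{-u} du, expected value e^{-x}
    return trap(np.exp(-u) * (u > x))

for x in (0.5, 1.0, 2.5):
    # truncated conditional mean of Theta equals 1 / (1 + x)
    assert abs(moment(x) / mass(x) - 1.0 / (1.0 + x)) < 1e-4
    # tail of the overall maximum from (4.4) equals e^{-x}
    assert abs(mass(x) - x * moment(x) - np.exp(-x)) < 1e-4
```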

Remark 4.6

Fix \(t > 0\) and an \(\mathcal {F}_{t}\)-measurable nonnegative random variable \(\xi \) representing the payoff of a contingent claim. As already observed in Remark 4.2, the \((\mathbb {F}, {\mathbb {P}})\)-market is complete. Indeed, the price \(p\) of \(\xi \) in the \((\mathbb {F}, {\mathbb {P}})\)-market equals

$$ p = {\mathbb {E}}_{{\mathbb {P}}} \left [\frac{1}{\zeta _{t}} \xi \right ] = {\mathbb {E}}_{{\mathbb {W}}}[\xi \mathbf {1}_{\{\mu [\Sigma _{t}] < 1\}}] $$

by Lemma 4.1. Similarly, in the \((\mathbb {G}, {\mathbb {P}})\)-market, one has the \(\mathcal {G}_{0}\)-measurable price \(p^{\Theta }\), where

$$ p^{\theta }:={\mathbb {E}}_{{\mathbb {W}}}\big[ \xi \mathbf {1}_{\{ \zeta ^{\theta }_{t} > 0 \}}\big] = {\mathbb {E}}_{{\mathbb {W}}}[\xi \mathbf {1}_{\{\theta \notin \Sigma _{t}\}}], \qquad \theta \in \mathfrak{R}. $$

It is clear, both by economic and by mathematical reasoning, that \(p^{\Theta }\leq p\) ℙ-a.e.

Let us now consider the question of how \(p\) and ℙ-\(\mathrm {ess}\sup p^{\Theta }\) relate (that is, how the hedging cost of an “uninformed” agent relates to the worst-case hedging cost of an “informed” agent) in the context of Example 4.4. Using the fact that \(\Sigma = \mathbb {R}\setminus (\underline{\theta}, \, \overline{\theta})\), we have

$$ p = {\mathbb {E}}_{{\mathbb {W}}}\big[\xi \mathbf {1}_{\{\mu [(\underline{\theta}_{t}, \, \overline{\theta}_{t})] > 0\}}\big], \qquad p^{\theta }= {\mathbb {E}}_{{\mathbb {W}}}\big[\xi \mathbf {1}_{\{\underline{\theta}_{t} < \theta < \overline{\theta}_{t}\}}\big], \quad \theta \in \mathbb {R}. $$

First, note that in the three cases \(\mu [[0,\infty )] = 1\), \(\mu [(-\infty , 0]] = 1\) or \(\mu [(-\varepsilon , \varepsilon ) ] > 0\) for all \(\varepsilon > 0\), we have \(p ={\mathbb {P}}\)-\(\mathrm {ess}\sup p^{\Theta }\). In words, in these three cases, the worst-case hedging cost of the informed agent equals the hedging cost of the uninformed agent. Indeed, if \(\mu [[0,\infty )] = 1\), then

$$ {\mathbb {P}}\text{-}\mathrm {ess}\sup p^{\Theta }= {\mathbb {E}}_{{\mathbb {W}}}[\xi \mathbf {1}_{\{ ({\mathbb {P}}\text{-}\mathrm {ess}\inf \Theta ) < \overline{\theta}_{t}\}}] = {\mathbb {E}}_{{\mathbb {W}}}[\xi \mathbf {1}_{\{\mu [[0, \overline{\theta}_{t})] > 0\}}] = p, $$

the case \(\mu [(-\infty , 0]] = 1\) is symmetric, and if \(\mu [(-\varepsilon , \varepsilon ) ] > 0\) for all \(\varepsilon > 0\) holds, then ℙ-\(\mathrm {ess}\sup p^{\Theta }= {\mathbb {E}}_{{\mathbb {W}}}\left [\xi\right ] = p\).

Consider now the complementary case where there exist \(\varepsilon _{1} > 0\), \(\varepsilon _{2} > 0\) with \(\mu [(-\varepsilon _{1}, \varepsilon _{2})] = 0\) and \(\mu [(-\varepsilon _{1}-\varepsilon , -\varepsilon _{1}]] > 0\) and \(\mu [[\varepsilon _{2}, \varepsilon _{2} + \varepsilon )] > 0\) for all \(\varepsilon > 0\). For the unit claim \(\xi \equiv 1\), we then have

$$ p = {\mathbb {W}}\big[\underline{\theta}_{t} \leq - \varepsilon _{1} \text{ or } \overline{\theta}_{t} \geq \varepsilon _{2}\big] > {\mathbb {W}}\big[\, \underline{\theta}_{t} \leq - \varepsilon _{1} \big] \vee {\mathbb {W}}\big[\, \overline{\theta}_{t} \geq \varepsilon _{2}\big] = \text{${\mathbb {P}}$-$\mathrm {ess}\sup p^{\Theta }$.} $$

Therefore, even the worst-case hedging cost for the informed agent is strictly smaller than the uninformed agent’s hedging cost. Note that in all cases, the replication strategy for the informed agent starting from \(p^{\Theta }\) depends on \(\Theta \). If ℙ-\(\mathrm {ess}\sup p^{\Theta }< p\), the superreplication strategy of the informed agent starting from the deterministic amount ℙ-\(\mathrm {ess}\sup p^{\Theta }\) also depends on \(\Theta \); however, when ℙ-\(\mathrm {ess}\sup p^{\Theta }= p\), no knowledge of \(\Theta \) is required in order to (super)replicate starting from \(p\).

5 In the presence of a dominating measure

We now consider a more general setup than in Sect. 4. We assume throughout this section the existence of a \(\mathbb {G}\)-local martingale deflator \(Y\). Moreover, we make the following technical assumption.

Assumption 5.1

There exist a probability measure ℚ and a \((\mathbb {G}, {\mathbb {Q}})\)-martingale \(Z\) such that \((\mathrm{d} {\mathbb {P}}/ \mathrm{d} {\mathbb {Q}})|_{\mathcal {G}_{t}}= Z_{t}\) for all \(t \geq 0\) and \(Z = 1 / Y\), ℙ-a.e.

We refer to Föllmer [13] and Perkowski and Ruf [33] for sufficient conditions for the existence of such ℚ and \(Z\). Note in particular that \({\mathbb {P}}\ll _{\mathcal{F}_{t}} {\mathbb {Q}}\) holds for all \(t \geq 0\).

In the sequel, we need to consider optional projections under both probabilities ℙ and ℚ; therefore, for the purposes of this section, we denote explicitly, via a superscript, the probability under which the projection is considered.

Under Assumption 5.1, Bayes’ rule yields

$$\begin{aligned} {}^{o} Y^{{\mathbb {P}}}_{t} &= {\mathbb {E}}_{{\mathbb {P}}}\left [\left .Y_{t} \right | \mathcal {F}_{t}\right ] = \frac{{\mathbb {E}}_{{\mathbb {Q}}} [Y_{t} Z_{t} \mathbf {1}_{\{Z_{t}>0\}} | \mathcal {F}_{t}] }{{\mathbb {E}}_{{\mathbb {Q}}} [ Z_{t} | \mathcal {F}_{t} ]} \\ &= \frac{{\mathbb {Q}}[Z_{t}>0 | \mathcal {F}_{t}]}{{}^{o} Z_{t}^{{\mathbb {Q}}} } = (1 - K_{t}) M_{t} \frac{1}{{}^{o} Z_{t}^{{\mathbb {Q}}}}, \qquad t \geq 0, \end{aligned}$$
(5.1)

where

$$\begin{aligned} {}^{o} Z_{t}^{{\mathbb {Q}}}= {\mathbb {E}}_{{\mathbb {Q}}}\left [\left . Z_{t} \right | \mathcal {F}_{t}\right ], \qquad t \geq 0, \end{aligned}$$
(5.2)

and \((1-K) M\) is the multiplicative Doob–Meyer decomposition (see for example [33, Proposition B.1]) of the nonnegative \((\mathbb {F}, {\mathbb {Q}})\)-supermartingale \({\mathbb {Q}}[Z_{\cdot }>0 \, | \, \mathcal {F}_{\cdot }]\); so \(K\) is a nondecreasing \(\mathbb{F}\)-predictable \([0,1]\)-valued process with \(K_{0} = 0\) and \(M\) is an \((\mathbb {F}, {\mathbb {Q}})\)-local martingale with

$$\begin{aligned} (1-K_{t}) M_{t} = {\mathbb {Q}}\left [\left .Z_{t} > 0 \right | \mathcal {F}_{t} \right ] = {\mathbb {Q}}\left [\left .\tau _{0} > t \right | \mathcal {F}_{t} \right ], \qquad t \geq 0, \end{aligned}$$
(5.3)

where we have introduced the \(\mathbb {G}\)-stopping time

$$ \tau _{0} :=\inf \{t \geq 0 : Z_{t} = 0\}. $$

To ensure uniqueness of the multiplicative decomposition, we assume that \(M = M^{\rho }\) and \(K = K^{\rho }\), where \(\rho \) is the first time that \({\mathbb {Q}}\left [\left .Z_{\cdot }> 0 \right | \mathcal {F}_{\cdot }\right ]\) hits zero, and additionally that \(\Delta M_{\rho }= 0\) on the event \(\{K_{\rho }= 1\}\).

Note that the Bayesian setup of Sect. 4 leads to \(Z = \zeta ^{\Theta }\) and \(M = 1\).

Let us collect some properties of the processes introduced so far.

Proposition 5.2

In the notation of this section and under Assumption 5.1, the following statements hold:

1) The process \(1 / {}^{o} Z^{{\mathbb {Q}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-supermartingale and satisfies

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}}\bigg[\frac{1}{{}^{o} Z_{t}^{{\mathbb {Q}}}} \mathbf {1}_{A} \bigg] = {\mathbb {Q}}\big[\{{}^{o} Z_{t}^{{\mathbb {Q}}} > 0\} \cap A \big], \qquad t \geq 0, A \in \mathcal {F}_{t}. \end{aligned}$$
(5.4)

2) The process \(M / {}^{o} Z^{{\mathbb {Q}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-local martingale. Hence the right-hand side of (5.1) also gives the multiplicative Doob–Meyer decomposition of the \((\mathbb {F}, {\mathbb {P}})\)-supermartingale \({}^{o} Y^{{\mathbb {P}}}\).

Proof

Thanks to \((\mathrm {d}{\mathbb {P}}/ \mathrm {d}{\mathbb {Q}})|_{\mathcal{F}_{t}} = {}^{o} Z_{t}^{ {\mathbb {Q}}}\), we have (5.4), which then yields the statement in 1).

Fix now \(s, t \geq 0\) with \(s< t\) and \(A \in \mathcal {F}_{s}\) and take any bounded \(\mathbb {F}\)-stopping time \(\tau \) such that \(M^{\tau }\) is an \((\mathbb {F}, {\mathbb {Q}})\)-martingale and \({}^{o} Z^{{\mathbb {Q}}}\) is uniformly bounded away from zero on \([\!\![0, \tau [\!\![\). Since \({}^{o} Z^{{\mathbb {Q}}}\) is an \((\mathbb {F}, {\mathbb {Q}})\)-martingale, we then have \(\{{}^{o} Z^{{\mathbb {Q}}}_{\tau }= 0\} \subseteq \{M_{\tau }= 0\}\). Hence (5.4) yields

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}}\bigg[ \frac{M_{t}^{\tau }}{({}^{o} Z_{t}^{{\mathbb {Q}}})^{\tau }} \mathbf {1}_{A}\bigg] &= {\mathbb {E}}_{{\mathbb {Q}}}\big[M_{t}^{\tau }\mathbf {1}_{\{{}^{o} Z_{t \wedge \tau }^{{\mathbb {Q}}} > 0\}} \mathbf {1}_{A}\big] ={\mathbb {E}}_{{\mathbb {Q}}} [ M_{t}^{\tau }\mathbf {1}_{A} ] = {\mathbb {E}}_{{\mathbb {Q}}} [ M_{s}^{\tau }\mathbf {1}_{A} ] \\ &= {\mathbb {E}}_{{\mathbb {Q}}}\big[ M_{s}^{\tau }\mathbf {1}_{\{{}^{o} Z^{{\mathbb {Q}}}_{s \wedge \tau } > 0\}} \mathbf {1}_{A}\big] = {\mathbb {E}}_{{\mathbb {P}}}\bigg[ \frac{M_{s}^{\tau }}{({}^{o} Z^{{\mathbb {Q}}}_{s})^{\tau }} \mathbf {1}_{A}\bigg]. \end{aligned}$$

Hence \(M^{\tau } / ({}^{o} Z^{{\mathbb {Q}}})^{\tau }\) is an \((\mathbb {F}, {\mathbb {P}})\)-martingale.

Let now \((\tau _{n}')_{n \in \mathbb {N}}\) denote an \((\mathbb {F}, {\mathbb {Q}})\)-localisation sequence for \(M\) and \(\tau _{n}''\) the first time that \({}^{o} Z^{{\mathbb {Q}}}\) crosses the level \(1/n\), for each \(n \in \mathbb {N}\). Defining \(\tau _{n} = \tau _{n}' \wedge \tau _{n}''\) for each \(n \in \mathbb {N}\), we get \(\lim _{n \uparrow \infty } \tau _{n} = \infty \) ℙ-a.s., \(M^{\tau _{n}}\) is an \((\mathbb {F}, {\mathbb {Q}})\)-martingale, and \({}^{o} Z^{{\mathbb {Q}}}\) is uniformly bounded away from zero on \([\!\![0, \tau _{n} [\!\![\). This then yields statement 2). □

Proposition 5.3

In the notation of this section and under Assumption 5.1, the following statements concerning the optional projection \({}^{\circ }Y^{{\mathbb {P}}}\) are equivalent:

1) \({}^{\circ }Y^{{\mathbb {P}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-local martingale.

2) \(K\) is \(\{0, 1\}\)-valued ℚ-a.e.

Under any of the above equivalent conditions, it holds that \(M = 1\) ℚ-a.e., hence also \({}^{\circ }Y^{{\mathbb {P}}}= 1 / {}^{o} Z^{{\mathbb {Q}}}\) ℙ-a.e.

Proof

Let us first assume that statement 1) holds, i.e., \({}^{\circ }Y^{{\mathbb {P}}}= (1-K)M/{}^{\circ }Z^{{\mathbb {Q}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-local martingale. Then Proposition 5.2, 2) yields that \(K = 0\) ℙ-a.e.; hence

$$\begin{aligned} \{K > 0\} \subseteq \{^{\circ }Z^{{\mathbb {Q}}}= 0\} = \{K = 1\} \cup \{M = 0 \} \qquad \text{${\mathbb {Q}}$-a.e.} \end{aligned}$$
(5.5)

Furthermore, since the \((\mathbb {F}, {\mathbb {Q}})\)-supermartingale \({\mathbb {Q}}[Z_{\cdot }>0 \, | \, \mathcal {F}_{\cdot }] = (1 - K) M\) is \([0,1]\)-valued, we have

$$\begin{aligned} \{K = 0\} \subseteq \{M \leq 1\} \qquad \text{${\mathbb {Q}}$-a.e.} \end{aligned}$$
(5.6)

Combining now (5.5) and (5.6) yields that \(M \leq 1\) on \([\!\![0 , \rho [\!\![\), where \(\rho \) is the predictable time when \(K\) hits one. Since \(M = M^{\rho }\) and, by assumption, \(\Delta M_{\rho }= 0\) on the set \(\{K_{\rho }= 1\}\), we have \(M \leq 1\). Next, since \(M\) is also an \((\mathbb {F}, {\mathbb {Q}})\)-local martingale with \(M_{0} = 1\), we obtain that \(M \equiv 1\) ℚ-a.e. Then again recalling (5.5) yields that \(K\) is \(\{0, 1\}\)-valued ℚ-a.e.

Assume now that statement 2) holds. Then since \(K < 1\) holds ℙ-a.e., we have \(K = 0\) ℙ-a.e., and an application of Proposition 5.2, 2) yields 1). □

Example 5.4

Assume that the underlying probability space, equipped with the probability measure ℚ, supports a ℚ-Brownian motion \(W\) and an independent ℝ-valued random variable \(\Theta \). Choose the filtration \(\mathbb {G}\) to be the smallest right-continuous one that makes \(W\) adapted, and such that \(\Theta \) is \(\mathcal{G}_{0}\)-measurable. Moreover, consider the \(\mathbb {G}\)-stopping time

$$ \tau _{0} :=\inf \{t \geq 0 : \mathbf {1}_{\{\Theta \neq 0\}} W_{t} = 1 \}. $$

Consider also the nonnegative \((\mathbb {G}, {\mathbb {Q}})\)-supermartingale

$$\begin{aligned} Z &:=\mathcal {E}\left (\int _{0}^{\cdot }\frac{-\Theta }{1- W_{u}} \mathrm {d}W_{u} \right ) \mathbf {1}_{[\!\![0 , \tau _{0} [\!\![} \\ &\phantom{:}= (1 - W)^{\Theta }\exp \left (\frac{\Theta - \Theta ^{2}}{2} \int _{0}^{\cdot }\frac{1}{(1-W_{u} )^{2} } \mathrm {d}u\right ) \mathbf {1}_{ [\!\![0 , \tau _{0} [\!\![}. \end{aligned}$$

Since \(\int _{0}^{\tau _{0}} \Theta ^{2} (1-W_{u})^{-2} \mathrm {d}u = \infty \) holds on \(\{\Theta \neq 0\}\), the process \(Z\) is continuous by Larsson and Ruf [29, Theorem 4.2]. This then yields that \(Z\) is a \((\mathbb {G}, {\mathbb {Q}})\)-local martingale. We assume from now on that \({\mathbb {Q}}[\Theta \in \{0\} \cup [1/2, \infty )] = 1\), as this is a necessary and sufficient condition for \(Z\) to be a \((\mathbb {G}, {\mathbb {Q}})\)-martingale, by Ruf [38, Theorem 3.3].

Set now \(S :=\mathcal {E}(W)\) and \(\mathbb {F} :=\mathbb {F}^{W}\) and define the \(\mathbb {F}\)-predictable time

$$ \tau _{0}^{W} :=\inf \{t \geq 0 : W_{t} = 1\}. $$

Then we obtain \(K = {\mathbb {Q}}[\Theta \geq 1/2] \mathbf {1}_{ [\!\![\tau _{0}^{W}, \infty ]\!\!]}\) and \(M = 1\). Hence by Proposition 5.3, \({}^{\circ }Y^{{\mathbb {P}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-local martingale if and only if either \({\mathbb {Q}}[\Theta \geq 1/2] = 1\) or \({\mathbb {Q}}[\Theta = 0] = 1\). In the latter case, \({}^{\circ }Y^{{\mathbb {P}}}=1\) is an \((\mathbb {F}, {\mathbb {P}})\)-martingale. If \(\mu [\mathrm {d}\theta ] :={\mathbb {Q}}[\Theta \in \mathrm {d}\theta ]\) describes the marginal law of \(\Theta \), then

$$ {}^{\circ }Z^{{\mathbb {Q}}}= {\mathbb {Q}}[\Theta = 0] + \bigg(\int _{1/2}^{\infty } (1 - W)^{\theta }\exp \Big(\frac{\theta - \theta ^{2}}{2} \int _{0}^{\cdot }\frac{1}{(1-W_{u} )^{2} } \mathrm {d}u\Big) \mu [\mathrm {d}\theta ]\bigg) \mathbf {1}_{ [\!\![0 , \tau _{0}^{W} [\!\![} $$

and hence

$$\begin{aligned} {}^{\circ }Y^{{\mathbb {P}}}&= \bigg({\mathbb {Q}}[\Theta = 0] + \int _{1/2}^{ \infty } (1 - W)^{\theta }\exp \Big(\frac{\theta - \theta ^{2}}{2} \int _{0}^{\cdot }\frac{1}{(1-W_{u} )^{2} } \mathrm {d}u\Big) \mu [\mathrm {d}\theta ] \bigg)^{-1} \\ &\phantom{:=}\times \mathbf {1}_{[\!\![0 , \tau _{0}^{W} [\!\![} + \mathbf {1}_{ [\!\![\tau _{0}^{W}, \infty [\!\![}. \end{aligned}$$

On the event \(\{\tau _{0}^{W} < \infty \}\), we have \({}^{\circ }Y^{{\mathbb {P}}}_{\tau _{0}^{W}-} = 1/{\mathbb {Q}}[\Theta = 0]\) ℙ-a.e., illustrating that if \({\mathbb {Q}}[\Theta = 0] \in (0,1)\), then indeed \({}^{\circ }Y^{{\mathbb {P}}}\) is not an \((\mathbb {F}, {\mathbb {P}})\)-local martingale.

Remark 5.5

Under any of the conditions in Proposition 5.3, it holds that \(M = 1\). A general characterisation of when exactly \(M = 1\) holds eludes us at the time of writing. However, when \(M = 1\), then \(\tau _{0}\) is an \((\mathbb {F}, {\mathbb {Q}})\)-pseudo-stopping time, meaning that \({\mathbb {E}}_{{\mathbb {Q}}} [N_{\tau _{0}}] = {\mathbb {E}}_{{\mathbb {Q}}} [N_{0}]\) for each \((\mathbb {F}, {\mathbb {Q}})\)-uniformly integrable martingale \(N\); conversely, if each \((\mathbb {F}, {\mathbb {Q}})\)-martingale is continuous and \(\tau _{0}\) is an \((\mathbb {F}, {\mathbb {Q}})\)-pseudo-stopping time, then \(M = 1\). These facts follow from Nikeghbali and Yor [32, Theorem 1]. (The proof of [32, Theorem 1] requires the continuity of \((\mathbb {F}, {\mathbb {Q}})\)-martingales only in one direction; moreover, the assumption \(\tau _{0} < \infty \) in that paper can be omitted by a change-of-time argument.) In light of this fact, the previous section adds new examples of pseudo-stopping times to the literature.

Thanks to Theorem 3.1, the process \(M / {}^{o} Z^{{\mathbb {Q}}}\) is of special interest, as it serves as an \(\mathbb {F}\)-local martingale deflator.

Proposition 5.6

In the notation of this section and under Assumption 5.1, the following statements concerning the \((\mathbb {F}, {\mathbb {P}})\)-local martingale \(M / {}^{o} Z^{{\mathbb {Q}}}\) are equivalent:

1) \(M / {}^{o} Z^{{\mathbb {Q}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-martingale.

2) \(M\) is an \((\mathbb {F}, {\mathbb {Q}})\)-martingale and \(\{{}^{o} Z^{{\mathbb {Q}}} = 0 \} \subseteq \{M = 0\}\) ℚ-a.s.

3) \(M\) is an \((\mathbb {F}, {\mathbb {Q}})\)-martingale and \(\{K = 1 \} \subseteq \{M = 0\}\) ℚ-a.s.

Proof

Note that

$$ {\mathbb {E}}_{{\mathbb {P}}}\bigg[\frac{M_{t} }{ {}^{o} Z^{{\mathbb {Q}}}_{t}}\bigg] = {\mathbb {E}}_{{\mathbb {Q}}}\big[M_{t} \mathbf {1}_{\{{}^{o} Z^{{\mathbb {Q}}}_{t} > 0\}} \big] = {\mathbb {E}}_{{\mathbb {Q}}} [M_{t}] - {\mathbb {E}}_{{\mathbb {Q}}}\big[M_{t} \mathbf {1}_{ \{ {}^{o} Z^{{\mathbb {Q}}}_{t} = 0\}}\big], \qquad t \geq 0. $$

This yields the equivalence of 1) and 2). For the equivalence of 2) and 3), one only needs to observe that \(\{{}^{o} Z^{{\mathbb {Q}}} = 0\} = \{K = 1\} \cup \{M = 0\}\) thanks to (5.2) and (5.3). □

We continue with a couple of examples. The first one involves a non-constant \(M\), which appears as information about \(\tau _{0}\) is revealed in \(\mathbb {F}\), and illustrates the necessary and sufficient conditions of Proposition 5.6.

Example 5.7

Assume that the underlying probability space, equipped with the probability measure ℚ, supports a ℚ-Brownian motion \(W\) and an independent random variable \(\Theta \) with \({\mathbb {Q}}[\Theta = -1] = q/2 = {\mathbb {Q}}[\Theta = 1]\) and \({\mathbb {Q}}[\Theta = 0] = 1 - q\), where \(q \in (0,1]\). Consider the filtration \(\mathbb {G}\) to be the smallest right-continuous one that makes \(W\) adapted and such that \(\Theta \) is \(\mathcal{G}_{0}\)-measurable. Moreover, consider the \(\mathbb {G}\)-stopping times

$$\begin{aligned} \nu &:=\inf \{t \geq 0 : |W_{t}| = 1\}, \\ \rho &:=\inf \{t \geq \nu : W_{t} = - \mathrm {sign}(W_{\nu }) \}, \\ \tau _{0} &:=\inf \{t \geq 0 : \Theta W_{t} = 1\}. \end{aligned}$$

Note that \({\mathbb {Q}}[0 < \nu < \rho < \infty ] = 1\) and that \(\tau _{0}\) equals ℚ-a.e. either \(\nu \), \(\rho \) or \(\infty \) (the latter if and only if \(\Theta = 0\)). Consider also the \((\mathbb {G}, {\mathbb {Q}})\)-martingale \(Z :=1 - \Theta W^{\tau _{0}} \geq 0\).

Let \(\mathbb {F}\) be the smallest right-continuous filtration that makes \(W\) adapted and such that \(\Theta \) is \(\mathcal{F}_{\rho }\)-measurable. Under \(\mathbb {F}\), \(\Theta \) is only revealed at \(\rho \), as opposed to \(\mathbb {G}\) where \(\Theta \) is known from the beginning. We set \(S :=\mathcal {E}(W)\), which is both a \((\mathbb {G}, {\mathbb {Q}})\)- and an \((\mathbb {F}, {\mathbb {Q}})\)-martingale. We also note that \(\nu \) and \(\rho \) are \(\mathbb {F}\)-predictable times. Observe that despite the different filtration structure, this example resembles Example 4.4. In both cases, \(W\) is a Brownian motion conditioned to never hit the level \(1 / \Theta \).

With the above setup, we compute

$$ {\mathbb {Q}}\left [\left . \tau _{0} \leq t \right | \mathcal {F}_{t} \right ] = \frac{q}{2} \mathbf {1}_{[\!\![\nu , \rho [\!\![}(t) + \mathbf {1}_{\{ \Theta \neq 0\}} \mathbf {1}_{[\!\![\rho , \infty [\!\![}(t), \qquad t \geq 0. $$

Hence, we have

$$ K =\frac{q}{2} \mathbf {1}_{[\!\![\nu , \rho [\!\![} + q \mathbf {1}_{ [\!\![\rho , \infty [\!\![}, \qquad M = \mathbf {1}_{[\!\![0, \rho [\!\![} + \frac{1}{1-q} \mathbf {1}_{\{\Theta = 0\}} \mathbf {1}_{ [\!\![\rho , \infty [\!\![}, $$

with the understanding that \(M = 1\) if \(q = 1\). Note that \(M\) is a bounded \((\mathbb {F}, {\mathbb {Q}})\)-martingale. Moreover, straightforward computations give

$$\begin{aligned} {}^{o} Z^{{\mathbb {Q}}} = \mathbf {1}_{[\!\![0, \nu [\!\![} + \bigg(1 - q + \frac{q}{2} \big(1 + \mathrm {sign}(W_{\nu }) W\big) \bigg) \mathbf {1}_{ [\!\![\nu , \rho [\!\![} + \mathbf {1}_{\{\Theta = 0 \}} \mathbf {1}_{ [\!\![\rho , \infty [\!\![}. \end{aligned}$$

Hence, when \(q \in (0,1)\),

$$\begin{aligned} \frac{M}{{}^{o} Z^{{\mathbb {Q}}}} = \mathbf {1}_{[\!\![0, \nu [\!\![} + \frac{1}{1 - q + (q / 2) (1 + \mathrm {sign}(W_{\nu }) W)} \mathbf {1}_{ [\!\![\nu , \rho [\!\![} + \frac{1}{1-q} \mathbf {1}_{\{\Theta = 0 \}} \mathbf {1}_{[\!\![\rho , \infty [\!\![} \end{aligned}$$

ℙ-a.e., which is a bounded \((\mathbb {F}, {\mathbb {P}})\)-martingale; however, when \(q = 1\), then

$$ \frac{M}{{}^{o} Z^{{\mathbb {Q}}}} = \mathbf {1}_{[\!\![0, \nu [\!\![} + \frac{2}{1 + \mathrm {sign}(W_{\nu }) W} \mathbf {1}_{[\!\![\nu , \infty [\!\![} $$

ℙ-a.e., which can be seen to be a strict local \((\mathbb {F}, {\mathbb {P}})\)-martingale. These observations are consistent with the result of Proposition 5.6.
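The piecewise formulas of this example can be checked mechanically. The following Python sketch (ours, not part of the example; the indices 0, 1, 2 are shorthand for the intervals \([\!\![0, \nu [\!\![\), \([\!\![\nu , \rho [\!\![\), \([\!\![\rho , \infty [\!\![\)) verifies in exact rational arithmetic that the multiplicative decomposition \(M (1-K)\) reproduces \({\mathbb {Q}}[\tau _{0} > t | \mathcal {F}_{t}]\) for several values of \(q \in (0,1)\):

```python
from fractions import Fraction

def survival(q, interval, theta_zero):
    # A = Q[tau_0 > t | F_t]: 1 on [0,nu), 1 - q/2 on [nu,rho),
    # and on [rho,infty) it is 1 if Theta = 0 and 0 otherwise
    if interval == 0:
        return Fraction(1)
    if interval == 1:
        return 1 - q / 2
    return Fraction(1) if theta_zero else Fraction(0)

def K(q, interval):
    # finite-variation part: K = (q/2) on [nu,rho) and q on [rho,infty)
    return [Fraction(0), q / 2, q][interval]

def M(q, interval, theta_zero):
    # local martingale part: M = 1 on [0,rho), and (1-q)^{-1} 1_{Theta=0} on [rho,infty)
    if interval < 2:
        return Fraction(1)
    return 1 / (1 - q) if theta_zero else Fraction(0)

# check M (1 - K) == Q[tau_0 > t | F_t] on every interval and both events
checks = all(
    M(q, i, tz) * (1 - K(q, i)) == survival(q, i, tz)
    for q in (Fraction(1, 4), Fraction(1, 2), Fraction(9, 10))
    for i in (0, 1, 2)
    for tz in (True, False)
)
```

Exact rationals are used so that the identity is verified without floating-point tolerance; the case \(q = 1\) is excluded, matching the convention \(M = 1\) there.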

We next modify Example 5.7 to illustrate that it is also possible that the local martingale part \(M\) in the multiplicative decomposition of \(({\mathbb {Q}}[\tau > t|\mathcal {F}_{t}])_{t \geq 0}\) is continuous.

Example 5.8

Assume that the underlying probability space, equipped with the probability measure ℚ, supports a pair of independent ℚ-Brownian motions \((W, B)\). Let \(\mathbb {F} :=\mathbb {F}^{(W,B)}\) and let \(\mathbb {G}\) denote the smallest right-continuous filtration that makes \(W\) adapted and contains all the information of \(B\) already at time 0. Consider the process \(\psi :=\sqrt{2} \int _{0}^{\cdot }\exp (-u) \mathrm {d}B_{u}\), and note that \(\Theta :=\psi _{\infty }\) is \(\mathcal{G}_{0}\)-measurable with a standard normal distribution and that the conditional law of \(\Theta \) given \(\mathcal{F}_{t}\) is Gaussian with mean \(\psi _{t}\) and standard deviation \(\exp (-t)\) for each \(t \geq 0\). Set \(\tau _{0} :=\inf \{t \geq 0 : \Theta W_{t} = 1 \}\) as before and consider the \((\mathbb {G}, {\mathbb {Q}})\)-martingale \(Z :=1 - \Theta W^{\tau _{0}} \geq 0\).

With \(\underline{\theta} :=1 / \inf _{u \in [0, \cdot ]} W_{u} < 0\) and \(\overline{\theta} :=1 / \sup _{u \in [0, \cdot ]} W_{u} > 0\), note that we have \(\{\tau _{0} > t\} = \{ \underline{\theta}_{t} < \Theta < \overline{\theta}_{t}\}\). It follows that

$$ A_{t} :={\mathbb {Q}}\left [\left . \tau _{0} > t \right | \mathcal {F}_{t} \right ] = \Phi \left ( \exp (t)(\overline{\theta}_{t} - \psi _{t}) \right ) - \Phi \left ( \exp (t)(\underline{\theta}_{t} - \psi _{t}) \right ), \qquad t \geq 0, $$

where \(\Phi \) denotes the standard normal distribution function. Writing down the dynamics of the above expression, we see that the local martingale part in the additive decomposition of \(A\) has everywhere non-zero quadratic variation. The same properties carry over to the multiplicative decomposition, yielding that \(M\) is a Brownian local martingale with strictly increasing quadratic variation.
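The closed form for \(A_{t}\) above uses only the fact that, given \(\mathcal {F}_{t}\), \(\Theta \) is Gaussian with mean \(\psi _{t}\) and standard deviation \(\exp (-t)\). As an illustrative numerical check (the values of \(t\), \(\psi _{t}\), \(\underline{\theta}_{t}\), \(\overline{\theta}_{t}\) below are arbitrary placeholders, not taken from the example), one can compare the formula with a seeded Monte Carlo estimate:

```python
import math
import random

def Phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# arbitrary placeholder values for t, psi_t and the bounds theta_low < theta_high
t, psi, theta_low, theta_high = 0.7, 0.3, -1.1, 0.9

# closed form: A_t = Phi(e^t (theta_high - psi)) - Phi(e^t (theta_low - psi))
closed_form = (Phi(math.exp(t) * (theta_high - psi))
               - Phi(math.exp(t) * (theta_low - psi)))

# Monte Carlo: conditionally on F_t, Theta ~ N(psi, e^{-2t})
random.seed(1)
n = 200_000
hits = sum(theta_low < random.gauss(psi, math.exp(-t)) < theta_high
           for _ in range(n))
mc = hits / n
```

With this sample size, the Monte Carlo estimate agrees with the closed form to well within one percentage point.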

The next example has similar features as the setup of Sect. 4, in the sense that the projection of the local martingale deflator loses mass whenever one learns in the small filtration about the sign of an excursion of a Brownian motion. This example also relates to the framework of the following section.

Example 5.9

Assume that the underlying probability space, equipped with the probability measure ℚ, supports a ℚ-Brownian motion \(W\), and define its Lévy transformation by \(B :=\int _{0}^{\cdot }\mathrm {sign}(W_{u}) \mathrm {d}W_{u} = |W| - \Lambda \) as mentioned in the introduction. Consider the filtrations \(\mathbb {G} :=\mathbb {F}^{W}\) and \(\mathbb {F} :=\mathbb {F}^{B} = \mathbb {F}^{|W|}\). Let us write

$$ \tau _{0} :=\inf \{t \geq 0 : 1 + W_{t} - B_{t} = 0\} = \inf \left \{ t \geq 0 : W_{t} = - \frac{1+\Lambda _{t}}{2}\right \} , $$

where \(\Lambda \) denotes the local time of \(W\) at zero. We now set \(S :=\mathcal {E}(B)\) and consider the process

$$ Z :=\mathcal {E}\left (\int _{0}^{\cdot }\frac{1}{1 + W_{u} - B_{u}} \mathrm {d}B_{u}\right ) \mathbf {1}_{[\!\![0, \tau _{0} [\!\![}. $$

We claim that \(Z\) has continuous paths and is a \((\mathbb {G}, {\mathbb {Q}})\)-martingale. To see path-continuity, note that just before \(\tau _{0}\), the process \(1 + W - B = 1 + W - |W|+\Lambda \) behaves like twice a Brownian motion hitting the level zero, given that \(\Lambda \) will be flat (since \(W\) is away from zero); then, it suffices to note that \(\int _{0}^{\cdot }(1 + \beta _{u})^{-2} \mathrm {d}u\) explodes at the first time that a Brownian motion \(\beta \) hits −1. Path-continuity of \(Z\) coupled with its definition implies that it is a \((\mathbb {G}, {\mathbb {Q}})\)-local martingale. To see the actual martingale property of \(Z\), we apply Ruf [38, Theorem 3.3] as follows. Note that a continuous nonnegative local martingale \(Z\) is also a local martingale in its own filtration (since one may choose the localising sequence to consist of level-crossing times); therefore, for proving that it is an actual martingale, which is equivalent to showing that it has constant expectation in time, one may assume that \(Z\) lives on an appropriate canonical path-space, where results from Föllmer [13] on change of measure can be utilised. Consider then the Föllmer measure \(\overline{{\mathbb {P}}}\), given by the extension of the measures defined via the Radon–Nikodým derivatives \((Z_{\tau _{n} \wedge n})_{n \in \mathbb {N}}\) on the increasing sequence \((\mathcal {F}_{\tau _{n} \wedge n})_{n \in \mathbb {N}}\), where \((\tau _{n})_{n \in \mathbb {N}}\) is a \(\mathbb{G}\)-localisation sequence for \(Z\). For some \(\overline{{\mathbb {P}}}\)-Brownian motion \(U\), we then have

$$ W = \int _{0}^{\cdot }\frac{\mathrm {sign}(W_{u})}{1 + W_{u} - B_{u}} \mathrm {d}u + U. $$

Hence, whenever \(1+W-B\) becomes small, the process \(W\) moves like a two-dimensional \((\mathbb {G}, \overline{{\mathbb {P}}})\)-Bessel process. In particular, the process \(1+W-B\) never hits zero and \(\int _{0}^{\cdot }(1 + W_{u} - B_{u})^{-2} \mathrm {d}u < \infty \) \(\overline{{\mathbb {P}}}\)-a.e., yielding that \(Z\) is indeed a martingale.

Let us now consider the \(\mathbb {F}\)-predictable times \((\rho _{i})_{i \in \mathbb{N}_{0}}\) and \((\tau _{i})_{i \in \mathbb{N}}\) defined inductively by \(\rho _{0} :=0\) and

$$ \tau _{i} :=\inf \left \{ t >\rho _{i-1}: |W_{t}| = \frac{1 + \Lambda _{t}}{2}\right \} , \quad \rho _{i} :=\inf \{t > \tau _{i}: |W_{t}| = 0\}, \quad i \in \mathbb{N}. $$

Then we have

$$\begin{aligned} {\mathbb {Q}}\left [\left .\tau _{0}> t \right | \mathcal {F}_{t}\right ] = \left (\frac{1}{2}\right )^{\# \{i \in \mathbb{N}: \tau _{i} \leq t\}}, \qquad t \geq 0. \end{aligned}$$

Hence by (5.3), we get

$$\begin{aligned} K_{t} = 1 - \left (\frac{1}{2}\right )^{\# \{i \in \mathbb{N}: \tau _{i} \leq t\}}, \qquad t \geq 0. \end{aligned}$$
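The halving of the conditional survival probability at each \(\tau _{i}\) reflects the fact that, conditionally on \(|W|\), the excursion signs are i.i.d. fair coin flips; surviving the first \(k\) crossings requires all \(k\) signs to miss the killing configuration. A minimal seeded simulation of the signs alone (a toy model; the Brownian path itself is not simulated) illustrates this:

```python
import random

random.seed(7)
k, n = 3, 100_000
# a trial survives the first k crossings iff each of k fair signs is "safe",
# which happens with probability (1/2)**k
survived = sum(all(random.random() < 0.5 for _ in range(k)) for _ in range(n))
estimate = survived / n  # should be close to (1/2)**3 = 0.125
```

The empirical frequency matches \((1/2)^{k}\) up to Monte Carlo error.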

For the \(\mathbb {F}\)-optional ℙ-projection \({}^{o} Y^{{\mathbb {P}}}\) of the \(\mathbb {G}\)-local martingale deflator \(Y = 1/Z\), we then have \({}^{o} Y^{{\mathbb {P}}} = (1-K)/\,{{}^{o} Z^{{\mathbb {Q}}}} \), where \(1/\,{}^{o} Z^{{\mathbb {Q}}}\) is an \((\mathbb {F}, {\mathbb {P}})\)-martingale by Proposition 5.6.

In Example 3.4, it was shown that \(\mathbb {F}\)-optional projections of reciprocals of \(\mathbb {G}\)-numéraires are not necessarily reciprocals of \(\mathbb {F}\)-numéraires. However, we have the following result.

Proposition 5.10

In the notation of this section, suppose that Assumption 5.1 holds. Moreover, assume that Jacod's hypothesis (H) holds under ℚ, i.e., each \((\mathbb {F}, {\mathbb {Q}})\)-martingale is also a \((\mathbb {G}, {\mathbb {Q}})\)-martingale. Then the following statements hold:

1) \(K_{\rho }\mathbf {1}_{\{\rho < \infty \}} = {{\mathbb {Q}}}[\tau _{0} \leq \rho | \mathcal {F}_{\infty }] \mathbf {1}_{\{\rho < \infty \}} \) for all \(\mathbb {F}\)-predictable times \(\rho \), and \(M = 1\).

2) If the \(\mathbb {G}\)-local martingale deflator \(Y = 1/Z\) is a \(\mathbb {G}\)-numéraire, then \(1/\,{}^{o} Z^{{\mathbb {Q}}}\) is an \(\mathbb {F}\)-local martingale deflator and an \(\mathbb {F}\)-numéraire.

Proof

For 1), it suffices to argue that \({{\mathbb {Q}}}[\tau _{0} \leq \cdot | \mathcal {F}_{\infty }]\) is the \(\mathbb {F}\)-predictable projection of the process \(\mathbf {1}_{\{\tau _{0} \leq \cdot \}}\). That is, for any \(\mathbb {F}\)-predictable time \(\rho \), we need to show that

$$\begin{aligned} {{\mathbb {Q}}}[\tau _{0} \leq \rho | \mathcal {F}_{\infty }] \mathbf {1}_{\{\rho < \infty \}} = {{\mathbb {Q}}}[\tau _{0} \leq \rho | \mathcal {F}_{\rho -}] \mathbf {1}_{\{\rho < \infty \}}. \end{aligned}$$
(5.7)

We argue this by showing that the right-hand side is indeed the \(\mathcal {F}_{\infty }\)-conditional expectation of \(\mathbf {1}_{\{\tau _{0} \leq \rho < \infty \}}\). To this end, fix \(A \in \mathcal {F}_{\infty }\) and note that \({\mathbb {Q}}[A | \mathcal {F}_{\rho -}] = {\mathbb {Q}}[A | \mathcal {G}_{\rho -}] \) since each \((\mathbb {F}, {\mathbb {Q}})\)-martingale is also a \((\mathbb {G}, {\mathbb {Q}})\)-martingale by assumption. This yields

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}}\left [{{\mathbb {Q}}}[\tau _{0} \leq \rho | \mathcal {F}_{ \rho -}] \mathbf {1}_{\{\rho < \infty \}} \mathbf {1}_{A}\right ] &= {\mathbb {E}}_{{\mathbb {Q}}}\left [ {{\mathbb {Q}}}[A | \mathcal {F}_{\rho -}] \mathbf {1}_{\{\tau _{0} \leq \rho < \infty \}} \right ] \\ &= {\mathbb {E}}_{{\mathbb {Q}}}\left [ {{\mathbb {Q}}}[A | \mathcal {G}_{\rho -}] \mathbf {1}_{ \{\tau _{0} \leq \rho < \infty \}} \right ] \\ &= {\mathbb {Q}}\left [A \cap \{ \tau _{0} \leq \rho < \infty \}\right ], \end{aligned}$$

where the last equality uses the fact that \(\tau _{0}\) is ℚ-a.e. equal to a \(\mathbb {G}\)-predictable time since \(Z\) does not jump to zero; see Larsson and Ruf [28, Lemma 3.5]. This now yields (5.7).

For 2), observe that we have \({}^{o} Z^{{\mathbb {Q}}} = \mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u}) N\) for some nonnegative \(\mathbb {F}\)-predictable process \(\theta \) and some \((\mathbb {F}, {\mathbb {Q}})\)-local martingale \(N\) with \([N, S] = 0\). By assumption, \(N\) is also a \((\mathbb {G}, {\mathbb {Q}})\)-local martingale. By the product rule, so is \(N Z\). Hence, \(N\) is a nonnegative \((\mathbb {G}, {\mathbb {P}})\)-local martingale, thus an \((\mathbb {F}, {\mathbb {P}})\)-local martingale, as it is \(\mathbb {F}\)-adapted. Moreover, \({}^{o} Z^{{\mathbb {Q}}} / N = \mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u})\) is an \((\mathbb {F}, {\mathbb {Q}})\)-local martingale; hence \(1/N\) is also an \((\mathbb {F}, {\mathbb {P}})\)-local martingale. This implies that \(N = 1\); hence \({}^{o} Z^{{\mathbb {Q}}} = \mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}S_{u})\) is a wealth process, so that \(1/\,{}^{o} Z^{{\mathbb {Q}}}\) is indeed the reciprocal of an \(\mathbb {F}\)-numéraire. □

Remark 5.11

From a modelling point of view, it is convenient to observe that Jacod’s hypothesis (H) holds for example if \(\mathbb {G}\) is of the form

$$\begin{aligned} \mathcal {G}_{t} :=\bigcap _{s > t} \left ( \mathcal {F}_{s} \vee \mathcal {H}_{s}\right ), \qquad t \geq 0, \end{aligned}$$

where ℍ is a filtration such that \(\mathcal {F}_{\infty }\) and \(\mathcal {H}_{\infty }\) are independent under ℚ. Indeed, fix any \((\mathbb {F}, {\mathbb {Q}})\)-martingale \(N\), some \(s,t \geq 0\) with \(s < t\) and some \(A \in \mathcal {H}_{s}\). Then

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}} [N_{t} \mathbf {1}_{A}] = {\mathbb {E}}_{{\mathbb {Q}}} [N_{t} ] {\mathbb {Q}}[A] = {\mathbb {E}}_{{\mathbb {Q}}} [N_{s} ] {\mathbb {Q}}[A] = {\mathbb {E}}_{{\mathbb {Q}}} [N_{s} \mathbf {1}_{A}], \end{aligned}$$

where we have used repeatedly the independence of \(\mathcal {F}_{\infty }\) and \(\mathcal {H}_{\infty }\) under ℚ. In particular, the Bayesian setup of Sect. 4 satisfies the assumptions of Proposition 5.10. As a corollary, Assumption 5.1 and Jacod’s hypothesis (H) holding under ℚ do not imply that each \((\mathbb {F}, {\mathbb {P}})\)-martingale is also a \((\mathbb {G}, {\mathbb {P}})\)-martingale. For example, the process \(1/\zeta \) in Sect. 4 is an \((\mathbb {F}, {\mathbb {P}})\)-local martingale, but not a \((\mathbb {G}, {\mathbb {P}})\)-local martingale if \(K^{h}_{\infty }> 0\).

6 Completeness and filtration shrinkage

6.1 A motivating example

We return to a question posed in the introduction: Could a complete market become incomplete after shrinking the filtration? We first provide a motivating example demonstrating that this is indeed possible. Theorem 6.1 and Corollary 6.2 below then yield a whole class of such examples in a systematic manner.

Let \(W\) denote a standard Brownian motion and \(B\) its Lévy transformation, defined via

$$ B :=\int _{0}^{\cdot }\mathrm {sign}(W_{u}) \mathrm {d}W_{u}. $$

Consider the \(\mathbb {F}^{W}\)-stopping time

$$ \tau :=\inf \{t \geq 0: W_{t} = 1\}, $$

noting that \(\tau \) is not a stopping time for the filtration \(\mathbb {F}^{B} = \mathbb {F}^{|W|}\). Set \({\mathbb {G}} :=\mathbb {F}^{W}\) and \(\mathbb {F} :=\mathbb {F}^{B, \mathbf {1}_{ [\!\![\tau , \infty [\!\![}}\), the smallest right-continuous filtration that makes \(B\) adapted and \(\tau \) a stopping time. It follows that \(\mathbb {F} \subseteq {\mathbb {G}}\) and that the (one-dimensional) stock price \(S :=\mathcal {E}(B)\) is \(\mathbb {F}\)-adapted. Both \(B\) and \(S\) have the predictable representation property in \(\mathbb {G}\), which yields market completeness under \(\mathbb {G}\) thanks to Theorem 2.6.

Consider the \(\mathbb {F}^{B}\)-stopping times \((\rho _{i})_{i \in \mathbb{N}_{0}}\) and \((\tau _{i})_{i \in \mathbb{N}}\) defined inductively by \(\rho _{0} = 0\) and

$$ \tau _{i} :=\inf \{t >\rho _{i-1}: |W_{t}| = 1\}, \quad \rho _{i} :=\inf \{t >\tau _{i}: |W_{t}| = 0\}, \qquad i \in \mathbb{N}. $$

These stopping times allow us to define the \(\mathbb {F}\)-adapted process

$$\begin{aligned} N :=\mathbf {1}_{ [\!\![\tau , \infty [\!\![} - \frac{1}{2} \sum _{i \in \mathbb{N}} \mathbf {1}_{ [\!\![\tau _{i}, \infty [\!\![} \mathbf {1}_{\{\tau _{i} \leq \tau \}} \end{aligned}$$

which is piecewise constant and jumps only at those times before \(\tau \) when \(|W|\) hits one. More precisely, \(N\) jumps up or down by \(1/2\) with probability \(1/2\), depending on whether \(W\) hits 1 or −1; hence, it is an \(\mathbb {F}\)-martingale, but not a \(\mathbb {G}\)-local martingale. The discontinuous process \(N\) a fortiori cannot be expressed as a stochastic integral with respect to the geometric Brownian motion \(S\); hence, the market is indeed incomplete under \(\mathbb {F}\). Note that this observation is consistent with the martingale representation results in Brémaud and Yor [7, Proposition 9].

6.2 A more general construction

The following result is of independent interest.

Theorem 6.1

Let \(W\) be a standard Brownian motion and \(B\) its Lévy transformation. Then for any given probability law \(\mu \) on \(((0, \infty ], \mathcal {B}((0, \infty ]))\), there exists an \(\mathbb {F}^{W}\)-stopping time \(\tau \) with law \(\mu \) and independent of \(\mathcal {F}^{B}_{\infty }\).

Theorem 6.1, proved in Sect. 6.3 below, yields an interesting class of examples where the completeness property fails through filtration shrinkage.

Corollary 6.2

There exist two nested filtrations \(\mathbb {F} \subseteq \mathbb {G}\) and a one-dimensional continuous stock price process \(S\) adapted to \(\mathbb {F}\) such that the market is complete for \(\mathbb {G}\) and for \(\mathbb {F}^{S}\), but not for the “intermediate information” model \(\mathbb {F}\).

Proof

Using the above notation, set \({\mathbb {G}} :=\mathbb {F}^{W}\) and \(S :=\mathcal {E}(B)\) so that \(\mathbb {F}^{S} = \mathbb {F}^{B}\). Next, take \(\mu \) to be any probability law on \((0, \infty ]\) that is not concentrated at a single point, and consider an \(\mathbb {F}^{W}\)-stopping time \(\tau \) as in Theorem 6.1 with distribution \(\mu \). Define now \(\mathbb {F}\) to be the right-continuous modification of the progressive enlargement of the filtration \(\mathbb {F}^{B}\) with the random time \(\tau \). Clearly, \(\mathbb {F}^{S} = \mathbb {F}^{B} \subseteq \mathbb {F} \subseteq \mathbb {F}^{W} = {\mathbb {G}}\), and \(B\) is a Brownian motion in all three filtrations. However, although \(B\) (hence \(S\)) has the predictable representation property in both \(\mathbb {F}^{B}\) and \(\mathbb {F}^{W}\), it loses the predictable representation property in \(\mathbb {F}\). This can be readily seen by considering the (non-continuous) \(\mathbb {F}\)-local martingale \(N\) defined by \(N = \mathbf {1}_{[\!\![\tau , \infty [\!\![} - C^{\tau }\), where \(C^{\tau }\) denotes the compensator of \(\mathbf {1}_{[\!\![\tau , \infty [\!\![}\) under \(\mathbb {F}\). □

In the context of the proof of Corollary 6.2, the pair \((B, N)\) jointly has the predictable representation property in \(\mathbb {F}\). It follows that every local martingale deflator in \(\mathcal{Y}^{\mathbb {F}}\) is of the form \(\mathbf {1}_{[\!\![0, \tau [\!\![} + g(\tau ) \mathbf {1}_{ [\!\![\tau , \infty [\!\![}\) for some strictly positive Borel-measurable function \(g: (0,\infty ) \rightarrow (0,\infty )\) with \(\int _{0}^{\infty }g(s) \mu [\mathrm {d}s] = 1\).

6.3 Proof of Theorem 6.1

Before we state the main auxiliary result in order to prove Theorem 6.1, we discuss some prerequisites on excursions of Brownian motion. We stick as much as possible to notation from Revuz and Yor [36, Chap. XII]. For a continuous function \(w: [0,\infty ) \rightarrow \mathbb {R}\) with \(w(0) = 0\), set \(R(w) :=\inf \left \{t > 0 : w(t) = 0\right \}\). Then, let \(\mathcal{U}\) be the subset of all continuous functions \(w\) such that \(w(0) = 0\), \(R(w) > 0\) and \(w(t) = 0\) for all \(t \geq R(w)\). We denote by \(\mathcal{U}_{+}\) (respectively, \(\mathcal{U}_{-}\)) the subset of \(\mathcal{U}\) with the extra property that \(w(t) > 0\) (respectively, \(w(t) < 0\)) holds for all \(t \in (0, R(w))\), in which case we speak of positive (respectively, negative) excursions. With \(\delta : [0,\infty ) \rightarrow \mathbb {R}\) denoting the function that is identically equal to zero (which in particular implies that \(\delta \notin \mathcal{U}\)), consider the state space \(\mathcal{U}_{\delta }:=\mathcal{U} \cup \left \{\delta\right \}\).

Recalling that \(\Lambda \) is the local time of the Brownian motion \(W\) at zero, define

$$ \sigma _{s} :=\inf \left \{t > 0 : \Lambda _{t} > s\right \}, \qquad s \in [0, \infty ), $$

and note that this is a stopping time in \(\mathbb {F}^{B} = \mathbb {F}^{|W|}\) for all \(s \in [0, \infty )\). We denote by \((e_{s})_{s \in [0, \infty )}\) the excursion Poisson point process of \(W\). More precisely, for \(s \in [0, \infty )\) with \(\sigma _{s-} = \sigma _{s}\), we set \(e_{s} \equiv \delta \), while if \(\sigma _{s-} < \sigma _{s}\), then \(e_{s} \in \mathcal{U}\) will be the excursion of \(W\) over the interval \([\sigma _{s-}, \sigma _{s}]\) defined via \(e_{s} (t) = W(\sigma _{s-} + t)\) for \(t \in [0, \sigma _{s} - \sigma _{s-}]\) and \(e_{s} (t) = 0\) for \(t > \sigma _{s} - \sigma _{s-}\). We use \(|e| :=(|e|_{s})_{s \in [0, \infty )}\) to denote the process such that \(|e|_{s} (t) = |e_{s} (t)|\) holds for all \(s, t \geq 0\), and note that \(|e|\) is also a Poisson point process, with state space \(\mathcal{U}_{+} \cup \left \{\delta\right \}\); in effect, \(|e|\) forgets the excursion signs.

With the above notation in place, we need the following facts:

1) The \(\sigma \)-algebra generated by the Poisson point process \(|e| = (|e|_{s})_{s \in [0, \infty )}\) coincides with \(\mathcal {F}^{|W|}_{\infty }=\mathcal {F}^{B}_{\infty }\).

2) Conditionally on the process \(|e|\) (i.e., conditionally on \(\mathcal {F}^{|W|}_{\infty }\)), the signs of the excursions are (a countable number of) independent and identically distributed random variables taking the values −1 and \(+1\) with probability \(1/2\).

Statement 1) is a consequence of [36, Proposition XII.2.5]. Indeed, \(\Lambda \) is \(\mathcal {F}^{|W|}_{\infty }\)-measurable, so that \(\Lambda _{s}\) and \(\sigma _{s}\) are \(\mathcal{F}_{\infty }^{|W|}\)-measurable for each \(s \geq 0\); it is then straightforward that \((|e|_{s})_{s \geq 0}\) is \(\mathcal {F}^{|W|}_{\infty }\)-measurable. On the other hand, one may reconstruct \(|W|\) from \(|e|\) as follows: first, for \(s \geq 0\), one defines \(\sigma _{s} :=\sum _{v \in (0, s]} R(|e|_{v})\), then one obtains \(\Lambda \) as the right-continuous inverse of \(\sigma \), and finally one defines \(|W_{t}| :=|e|_{\Lambda _{t}} (t - \sigma _{\Lambda _{t} -})\) for all \(t \geq 0\).
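The reconstruction of \(|W|\) from \(|e|\) described above can be imitated in a discrete toy model, where the path is a finite nearest-neighbour walk: cut the path at its zeros into excursions, take absolute values, and concatenate. A Python sketch (the walk below is a hand-picked illustrative path, not part of the proof):

```python
# toy path of a simple walk starting and ending at zero
path = [0, 1, 2, 1, 0, -1, -2, -1, 0, 1, 0]

def excursions(p):
    # cut the path at its zeros; each excursion runs from one zero to the next
    zeros = [i for i, x in enumerate(p) if x == 0]
    return [p[zeros[j]:zeros[j + 1] + 1] for j in range(len(zeros) - 1)]

def reconstruct_abs(excs):
    # glue the absolute excursions back together, dropping the duplicated
    # terminal zero of every excursion except the last
    out = []
    for e in excs:
        out.extend(abs(x) for x in e[:-1])
    out.append(0)
    return out

unsigned = [[abs(x) for x in e] for e in excursions(path)]  # discrete analogue of |e|
rebuilt = reconstruct_abs(unsigned)
```

As in statement 1), the unsigned excursions determine the reflected path exactly, while the signs of the excursions are lost.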

Statement 2) comes for example as a consequence of the discussion in Blumenthal [6, Chap. IV, mostly page 114]; see also Prokaj [34].

The following result is the main tool in establishing the validity of Theorem 6.1.

Lemma 6.3

Fix a strictly decreasing sequence \((s_{n})_{n \in \mathbb{N}}\)in \((0, \infty )\)with \(\lim _{n \uparrow \infty } s_{n} = 0\). Then there exists a countable collection \((U_{n})_{n \in \mathbb{N}}\)of random variables such that

  • for each \(n \in \mathbb {N}\), \(U_{n}\)is \(\mathcal {F}^{W}_{\sigma _{s_{n}}}\)-measurable, and

  • \((U_{n})_{n \in \mathbb {N}}\)consists of independent and identically distributed random variables with the standard uniform law and is furthermore independent from \(\mathcal {F}^{|W|}_{\infty }=\mathcal {F}^{B}_{\infty }\).

Proof

For each \(n \in \mathbb {N}\), write \(\sigma _{n} :=\sigma _{s_{n}}\) for typographical simplicity. Consider the intervals \(I_{n} :=(s_{n+1}, s_{n}]\); since the sequence \((s_{n})_{n \in \mathbb {N}}\) is strictly decreasing, \((I_{n})_{n \in \mathbb {N}}\) consists of disjoint intervals.

Define the random, ℙ-a.e. countable set

$$ D :=\left \{s \in [0, \infty ) : e_{s} \neq \delta\right \} = \left \{s \in [0, \infty ) : \sigma _{s-} < \sigma _{s}\right \} $$

where excursions actually happen in the local time clock. The set \(D \cap I_{n}\) corresponds to excursion times that happen in \(I_{n}\) in the local time clock, and is clearly countably infinite. Furthermore, \((e_{s})_{s \in I_{n}}\) is \(\mathcal {F}^{W}_{\sigma _{n}}\)-measurable for all \(n \in \mathbb {N}\). It is straightforward (for example, by ordering the excursion sizes) to see that one may find an \(\mathcal {F}^{W}_{ \sigma _{n}}\)-measurable enumeration \((v_{n, k})_{k \in \mathbb {N}}\) of \(D \cap I_{n}\) for each \(n \in \mathbb {N}\). Then for all \(n \in \mathbb {N}\), the \(\mathcal {F}^{W}_{\sigma _{n}}\)-measurable random variables \(X_{n, k} :=\mathbf {1}_{\{e_{v_{n, k}} \in \mathcal{U}_{+}\}}\) are \(\left \{0, 1\right \}\)-valued. Moreover, using the fact that the intervals \((I_{n})_{n \in \mathbb {N}}\) are disjoint, we obtain that conditionally on \(\mathcal {F}^{B}_{\infty }= \mathcal {F}^{|W|}_{\infty }\), the doubly-indexed collection \((X_{n, k})_{(n, k) \in \mathbb {N}\times \mathbb {N}}\) consists of independent and identically distributed random variables with

$$ {\mathbb {P}}[X_{n,k} = 0] = \frac{1}{2} = {\mathbb {P}}[X_{n,k} = 1]. $$

Therefore, upon defining \(U_{n} :=\sum _{k=1}^{\infty }2^{-k} X_{n, k}\) for all \(n \in \mathbb {N}\), we obtain a sequence \((U_{n})_{n \in \mathbb {N}}\) of independent and identically distributed random variables with the standard uniform law that are further independent of \(\mathcal {F}^{|W|}_{\infty }\). Finally, since \((X_{n, k})_{k \in \mathbb {N}}\) are \(\mathcal {F}^{W}_{\sigma _{n}}\)-measurable for each \(n \in \mathbb {N}\), we obtain that \(U_{n}\) is \(\mathcal {F}^{W}_{\sigma _{n}}\)-measurable for each \(n \in \mathbb {N}\), which completes the argument. □
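The last step uses the classical fact that a binary expansion built from i.i.d. fair bits is uniformly distributed on \([0,1]\). A seeded empirical sketch in Python (truncating the expansion at 30 bits):

```python
import random

random.seed(0)

def uniform_from_bits(depth=30):
    # U = sum_k 2^{-k} X_k with i.i.d. fair bits X_k, truncated at `depth`
    return sum(random.getrandbits(1) * 2.0 ** -(k + 1) for k in range(depth))

n = 50_000
samples = [uniform_from_bits() for _ in range(n)]
mean = sum(samples) / n                              # should be near 1/2
below_quarter = sum(s <= 0.25 for s in samples) / n  # should be near 1/4
```

Both empirical statistics agree with the uniform law up to Monte Carlo error.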

Given Lemma 6.3 above, we may now proceed to prove Theorem 6.1.

Proof of Theorem 6.1

Let \(\mu \) be any probability law on \(((0, \infty ], \mathcal {B}((0, \infty ]) )\). For any \(s \in [0, \infty )\), let \(\mu _{s}\) denote the probability law on \(((0, \infty ], \mathcal {B}((0, \infty ]) )\) that is “\(\mu \) conditioned to be greater than \(s\)”; more formally, if \(\mu [(s, \infty ]] = 0\), then set \(\mu _{s}[ A] :=\mathbf {1}_{\{\infty \in A\}}\) for all \(A \in \mathcal {B}((0, \infty ])\), and otherwise, if \(\mu [(s, \infty ]] > 0\), set

$$ \mu _{s} [A] :=\frac{\mu [A \cap (s, \infty ]]}{\mu [(s, \infty ]]}, \qquad A \in \mathcal {B}\big((0, \infty ]\big). $$

Note that \((\mu _{s})_{s \geq 0}\) is increasing in first-order stochastic dominance and that as \(s \downarrow 0\), \(\mu _{s}\) converges (actually, in total variation) to \(\mu _{0} = \mu \).

Pick \((s_{n})_{n \in \mathbb {N}}\) to be any strictly decreasing sequence of positive numbers with \(\lim _{n \uparrow \infty } s_{n} = 0\). In the notation of Lemma 6.3, consider a corresponding sequence \((U_{n})_{n \in \mathbb {N}}\). We construct inductively a nonincreasing sequence \((\tau _{n})_{n \in \mathbb {N}}\) of \(\mathbb {F}^{W}\)-stopping times each having conditional law with respect to \(\mathcal {F}^{B}_{\infty }\) equal to \(\mu _{\sigma _{s_{n}}}\) and being measurable with respect to \(\mathcal {F}^{B}_{\infty }\vee \sigma (U_{m} ; m \leq n)\).

As in the proof of Lemma 6.3, write \(\sigma _{n} :=\sigma _{s_{n}}\) for each \(n \in \mathbb {N}\) and note that \(\sigma _{n}\) is a stopping time in \(\mathbb {F}^{B} = \mathbb {F}^{|W|}\) for all \(n \in \mathbb {N}\). Let \(F_{s} : (0, \infty ] \rightarrow [0,1]\) be the cumulative distribution function of \(\mu _{s}\), defined via \(F_{s}(t) :=\mu _{s} [(0, t]]\) for all \(t > 0\) and \(s \geq 0\), and set \(\tau _{1} :=F_{\sigma _{1}}^{-1} (U_{1})\), where we use the generalised right-continuous inverse. Since \(\tau _{1} > \sigma _{1}\) and \(U_{1}\) is \(\mathcal {F}^{W}_{\sigma _{1}}\)-measurable, it is clear that \(\tau _{1}\) is an \(\mathbb {F}^{W}\)-stopping time. By construction and since \(U_{1}\) is independent of \(\mathcal {F}^{B}_{\infty }\), the conditional law of \(\tau _{1}\) with respect to \(\mathcal {F}^{B}_{\infty }\) equals \(\mu _{\sigma _{1}}\). Fix now \(n \in \mathbb {N}\) and suppose that we have constructed \(\tau _{n}\) which is an \(\mathbb {F}^{W}\)-stopping time, measurable with respect to \(\mathcal {F}^{B}_{\infty }\vee \sigma (U_{m} ; m \leq n)\) and having conditional law with respect to \(\mathcal {F}^{B}_{\infty }\) equal to \(\mu _{\sigma _{n}}\). Set \(\tau '_{n+1} :=F_{\sigma _{n+1}}^{-1} (U_{n+1})\) which, since \(\tau '_{n+1} > \sigma _{n+1}\) and \(U_{n+1}\) is \(\mathcal {F}^{W}_{\sigma _{n+1}}\)-measurable, is an \(\mathbb {F}^{W}\)-stopping time. Given \(\mathcal {F}^{B}_{\infty }\), the conditional law of \(\tau '_{n+1}\) is \(\mu _{\sigma _{n+1}}\). Set then

$$ \tau _{n+1} :=\tau '_{n+1} \mathbf {1}_{\{\tau '_{n+1} \leq \sigma _{n} \}} + \tau _{n} \mathbf {1}_{\{\tau '_{n+1} > \sigma _{n}\}}. $$

Since \(U_{n+1}\) is \(\mathcal {F}^{W}_{\sigma _{n+1}}\)-measurable and \(\tau _{n} > \sigma _{n}\), it follows that \(\tau _{n+1}\) is an \(\mathbb {F}^{W}\)-stopping time and measurable with respect to \(\mathcal {F}^{B}_{\infty }\vee \sigma (U_{m} ; m \leq n + 1)\). Clearly, \(\tau _{n+1} \leq \tau _{n}\). Furthermore, using the conditional independence of \(\tau '_{n+1}\) and \(\tau _{n}\) given \(\mathcal {F}^{B}_{\infty }\), which follows from the conditional independence of \(U_{n+1}\) from \((U_{m})_{m \leq n}\) given \(\mathcal {F}^{B}_{\infty }\), and using the shorthand notation \({\mathbb {P}}^{B} \left [\cdot\right ] :={\mathbb {P}}[\cdot | \mathcal {F}^{B}_{\infty}]\) for conditional probabilities given \(\mathcal {F}^{B}_{\infty }= \mathcal {F}^{|W|}_{\infty }\), one obtains

$$\begin{aligned} {\mathbb {P}}^{B} [\tau _{n+1} \in A] &= {\mathbb {P}}^{B} \left [\tau '_{n+1} \in A \cap (0, \sigma _{n}]\right ] + {\mathbb {P}}^{B} \left [ \tau '_{n+1} > \sigma _{n}, \tau _{n} \in A \cap (\sigma _{n}, \infty ] \right ] \\ &= {\mathbb {P}}^{B} \left [\tau '_{n+1} \in A \cap (0, \sigma _{n}]\right ] + {\mathbb {P}}^{B} [\tau '_{n+1} > \sigma _{n}] {\mathbb {P}}^{B} \big[\tau _{n} \in A \cap (\sigma _{n}, \infty ]\big] \\ &= \mu _{\sigma _{n+1}} \big[A \cap (0, \sigma _{n}]\big] + \mu _{ \sigma _{n+1}} \big[(\sigma _{n}, \infty ]\big] \mu _{\sigma _{n}} \big[A \cap (\sigma _{n}, \infty ]\big] \\ &= \frac{\mu [A \cap (\sigma _{n+1}, \sigma _{n}]]}{\mu [(\sigma _{n+1}, \infty ]]} + \frac{\mu \left [(\sigma _{n}, \infty ]\right ]}{\mu [(\sigma _{n+1}, \infty ]]} \frac{\mu \left [A \cap (\sigma _{n}, \infty ]\right ]}{\mu [(\sigma _{n}, \infty ]]} \\ &= \frac{\mu [A \cap (\sigma _{n+1}, \infty ]]}{\mu [(\sigma _{n+1}, \infty ]]} = \mu _{\sigma _{n+1}} [A], \qquad A \in \mathcal {B}\big((0, \infty ] \big), \end{aligned}$$

which implies that the conditional law of \(\tau _{n+1}\) given \(\mathcal {F}^{B}_{\infty }\) is \(\mu _{\sigma _{n+1}}\). The inductive step is complete.

Define now \(\tau :=\lim _{n \uparrow \infty } \tau _{n} = \bigwedge _{n \in \mathbb {N}} \tau _{n}\); since \(\tau _{n}\) is an \(\mathbb {F}^{W}\)-stopping time for all \(n \in \mathbb {N}\), \(\tau \) is also an \(\mathbb {F}^{W}\)-stopping time. Given that the conditional law of \(\tau _{n}\) given \(\mathcal {F}^{B}_{\infty }\) is \(\mu _{\sigma _{n}}\) for each \(n \in \mathbb {N}\) and that \((\sigma _{n})_{n \in \mathbb {N}}\) decreases ℙ-a.e. to zero, it follows that the conditional law of \(\tau \) given \(\mathcal {F}^{B}_{\infty }\) is \(\mu _{0} = \mu \). This implies both that \(\tau \) is independent of \(\mathcal {F}^{B}_{\infty }\) and that its probability law equals \(\mu \), which concludes the proof of Theorem 6.1. □
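The inductive patching step above lends itself to a quick numerical illustration. The following sketch (our own names and parameters; not part of the proof) takes \(\mu \) to be the standard exponential law, for which \(\mu _{s}\) is the law of \(s + \mathrm{Exp}(1)\) by memorylessness, fixes two deterministic levels playing the roles of \(\sigma _{n+1} < \sigma _{n}\), draws \(\tau _{n}\) and \(\tau '_{n+1}\) by inverse-transform sampling, and checks empirically that the patched variable has law \(\mu _{\sigma _{n+1}}\):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 400_000
s_n, s_next = 1.0, 0.5          # deterministic stand-ins for sigma_n > sigma_{n+1}

def sample_mu_s(s, size):
    """Inverse-transform sampling from mu_s, the law of tau given tau > s.

    For mu = Exp(1), F_s(t) = 1 - exp(-(t - s)) for t > s, hence
    F_s^{-1}(u) = s - log(1 - u).
    """
    return s - np.log(1.0 - rng.random(size))

tau_n = sample_mu_s(s_n, N)      # plays the role of tau_n ~ mu_{sigma_n}
tau_p = sample_mu_s(s_next, N)   # plays the role of tau'_{n+1} ~ mu_{sigma_{n+1}}

# the patching formula from the proof
tau_next = np.where(tau_p <= s_n, tau_p, tau_n)

print(tau_next.mean())           # close to s_next + 1 = 1.5, the mean of mu_{s_next}
```

The checks mirror the two properties established in the induction: monotonicity \(\tau _{n+1} \leq \tau _{n}\), and the conditional law \(\mu _{\sigma _{n+1}}\) (verified here through its mean \(\sigma _{n+1} + 1\)).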

6.4 A further example where incompleteness arises through filtration shrinkage

The markets described in Corollary 6.2 have an interesting “quasi-completeness” property: For each \(T \geq 0\), any nonnegative bounded \(\mathcal {F}^{S}_{T}\)-measurable contingent claim \(\xi \) can be replicated, i.e., in the notation of Sect. 2 and with \(x = x^{\mathbb{F}}(T, \xi )\), there exists a maximal \(X \in \mathcal{X}^{\mathbb{F}}(x)\) such that \(\mathbb{P}[X_{T} = \xi ] = 1\). See Ruf [37, Sect. 3.4] for a discussion of this weaker notion of completeness. Below, we provide an example where the market is complete for the large filtration \(\mathbb {G}\), but not even quasi-complete for the smaller filtration \(\mathbb {F}\); this example gives a negative answer to a conjecture put forth in Jacod and Protter [18].

Let \(B\) be a one-dimensional Brownian motion and \(\mathbb {G}\) the right-continuous filtration generated by \(B\), i.e., \(\mathbb {G} :=\mathbb {F}^{B}\). Set \(S :=\mathcal {E}(\int _{0}^{\cdot }\theta _{u} \mathrm {d}B_{u})\), where

$$ \theta _{t} :=\textstyle\begin{cases} B_{t}, \quad &t < 1, \\ 1, \quad &t \geq 1, B_{1} > 0, \\ 2, \quad &t \geq 1, B_{1} \leq 0. \end{cases} $$

Since \(\theta \) is non-zero \(({\mathbb {P}}\times [B,B])\)-a.e., the market is complete in \(\mathbb {G}\). Now let \(\mathbb {F} :=\mathbb {F}^{S}\) be the right-continuous filtration generated by \(S\). Since

$$ S_{t} = \exp \bigg( \frac{B_{t}^{2}}{ 2} - \frac{1}{2} \int _{0}^{t} (1 + B_{s}^{2}) \,\mathrm {d}s\bigg)\qquad \hbox{for all $t \leq 1$,} $$

it follows that \(\mathcal {F}_{t} = \mathcal {F}^{|B|}_{t}\) for all \(t < 1\). On the other hand, since

$$ \left \{B_{1} > 0\right \} = \left \{[\log S,\log S]_{t} - [\log S,\log S]_{1} = t - 1\right \} \in \mathcal {F}_{t} \qquad \hbox{for all $t > 1$,} $$

it follows that \(B_{1}\) is \(\mathcal {F}_{t}\)-measurable for all \(t > 1\). Furthermore, \(B_{s} - B_{1}\) is also \(\mathcal {F}_{t}\)-measurable, and hence so is \(B_{s}\), for all \(t > 1\) and \(s \in [1,t]\). Using right-continuity of the filtration, it then easily follows that for all \(t \geq 1\), \(\mathcal {F}_{t}\) is generated by \(\mathcal {F}^{|B|}_{1}\) and \((B_{s})_{s \in [1, t]}\).
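For the reader's convenience, the closed-form expression for \(S\) on \([0,1]\) used above can be verified directly: since \(\log S_{t} = \int _{0}^{t} B_{u} \,\mathrm {d}B_{u} - \frac{1}{2} \int _{0}^{t} B_{u}^{2} \,\mathrm {d}u\) for \(t \leq 1\), and Itô's formula applied to \(B^{2}\) gives \(\int _{0}^{t} B_{u} \,\mathrm {d}B_{u} = B_{t}^{2}/2 - t/2\), one obtains

$$ \log S_{t} = \frac{B_{t}^{2}}{2} - \frac{t}{2} - \frac{1}{2} \int _{0}^{t} B_{u}^{2} \,\mathrm {d}u = \frac{B_{t}^{2}}{2} - \frac{1}{2} \int _{0}^{t} (1 + B_{u}^{2}) \,\mathrm {d}u. $$

The right-hand side is a functional of \(|B|\) alone, while conversely \([\log S, \log S]_{t} = \int _{0}^{t} B_{u}^{2} \,\mathrm {d}u\) allows one to recover \(|B|\) from \(S\) on \([0,1)\); this is consistent with the identification \(\mathcal {F}_{t} = \mathcal {F}^{|B|}_{t}\) for \(t < 1\).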

Here, there is a jump of information in the filtration \(\mathbb {F}\) occurring exactly at the (deterministic) time 1: The sign of \(B\) at \(t=1\) is suddenly revealed, while previously only information about the absolute value of \(B\) was available. In particular, any process of the form

$$ Z^{\alpha }:=1 + (\mathbf {1}_{\{B_{1} > 0\}} -\mathbf {1}_{\{B_{1} \leq 0\}} ) \alpha \mathbf {1}_{[\!\![1, \infty [\!\![} $$

where \(\alpha \in (-1,1)\), is a strictly positive \(\mathbb {F}\)-martingale which is clearly purely discontinuous. Therefore, \(S\) cannot have the predictable representation property in \(\mathbb {F}\).
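Although the argument is elementary, the defining properties of \(Z^{\alpha }\) can be illustrated with a short simulation (a sketch with our own names; not part of the argument). Since \(Z^{\alpha }\) is constant before and after its single jump at time 1, strict positivity and unit expectation of \(Z^{\alpha }_{1}\) are the key points; the full martingale property additionally uses that \({\mathbb {P}}[B_{1} > 0 \,|\, \mathcal {F}^{|B|}_{s}] = 1/2\) for \(s < 1\), by symmetry.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 0.6                                   # any alpha in (-1, 1)
B1 = rng.standard_normal(500_000)             # samples of B_1 ~ N(0, 1)

# Z^alpha at any time t >= 1: 1 + alpha on {B_1 > 0}, 1 - alpha on {B_1 <= 0}
Z1 = 1.0 + alpha * np.where(B1 > 0, 1.0, -1.0)

print(Z1.min() > 0, Z1.mean())                # strictly positive; mean close to 1
```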

To expand further, consider any claim of the form \(f(\mathbf {1}_{\{B_{1} > 0\}})\) with delivery at time 1, where \(f: \left \{0,1\right \} \rightarrow [0, \infty )\). Its hedging cost is \((f(0) + f(1)) / 2\) in the larger filtration \(\mathbb {G}\). In the filtration \(\mathbb {F}\), its hedging cost is at least

$$ \sup _{\alpha \in (-1,1)} {\mathbb {E}}[Z^{\alpha}_{1} f(\mathbf {1}_{\{B_{1} > 0\}})] = \max \left \{f(0), f(1)\right \}; $$

in fact, this is the actual hedging cost, since one may trivially hedge starting from this amount. This can also be argued in a “dual” way, by observing that the probability measures \({\mathbb {Q}}^{\alpha }\) constructed from \(Z^{\alpha }\) for \(\alpha \in (-1,1)\) form exactly the class of equivalent local martingale measures in the filtration \(\mathbb {F}\).
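The value of this supremum follows from a one-line computation: since \({\mathbb {P}}[B_{1} > 0] = 1/2\), with \(Z^{\alpha }_{1} = 1 + \alpha \) on \(\{B_{1} > 0\}\) and \(Z^{\alpha }_{1} = 1 - \alpha \) on \(\{B_{1} \leq 0\}\),

$$ {\mathbb {E}}[Z^{\alpha }_{1} f(\mathbf {1}_{\{B_{1} > 0\}})] = \frac{1 + \alpha }{2} f(1) + \frac{1 - \alpha }{2} f(0) = \frac{f(0) + f(1)}{2} + \frac{\alpha }{2} \big(f(1) - f(0)\big), $$

which tends to \(\max \left \{f(0), f(1)\right \}\) as \(\alpha \uparrow 1\) or \(\alpha \downarrow -1\), according to whether \(f(1) \geq f(0)\) or not.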