1 Introduction

Over the last 20 years, a mathematical theory of bubbles for continuous-time models has been developed based on the concept of strict local martingales; see the seminal papers by Loewenstein and Willard [12], Cox and Hobson [4], Jarrow et al. [9, 10] as well as the survey article by Protter [17] and the references therein. In economic terms, an asset price bubble exists if the fundamental value of the asset deviates from its current price. If the fundamental value is understood to be the expectation of the (discounted) price process \(S = (S_{t})_{t \geq 0}\) under the equivalent local martingale measure ℙ, then the asset \(S\) has a ℙ-bubble if \(E_{P}[S_{T}] < E_{P}[S_{0}]\) for some fixed time \(T>0\), i.e., if \(S\) is a strict local ℙ-martingale, that is, a local ℙ-martingale that fails to be a ℙ-martingale.

While many implications and extensions of this definition have been discussed in the literature (see e.g. Ekström and Tysk [6], Bayraktar et al. [1], Biagini et al. [2], Herdegen and Schweizer [7]), the strict local martingale definition of bubbles has no direct analogue in discrete-time models. The reason for this is that a nonnegative local martingale \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) in discrete time with \(S_{0}\in L^{1}\) is automatically a (true) martingale. Hence, a definition of bubbles based on strict local martingales is void. Also, trying to define a bubble in discrete time as a martingale \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) that is not uniformly integrable does not lead to a meaningful concept as this would imply that virtually all relevant models such as the standard binomial model (considered on an unbounded time horizon) are bubbles, which seems absurd.

Despite the above negative results, the goal of this paper is to introduce a new definition of bubbles for discrete-time models on an unbounded time horizon – keeping the standard assumption that the discounted stock price is a martingale. This definition has to satisfy at least two conditions.

(I) It should split martingales that are not uniformly integrable into two sufficiently rich classes: those that are bubbles and those that are not. In particular, standard discrete-time models with i.i.d. returns like the binomial model should not be bubbles.

(II) It should be consistent with the strict local martingale definition in continuous time in the sense that a continuous local martingale in continuous time is a strict local martingale if and only if all appropriate discretisations thereof are bubbles in discrete time.

To the best of our knowledge, there has been no attempt in the extant literature to extend the martingale theory of bubbles to discrete time. The only slight exception is Roch [18] who introduced the notion of asymptotic asset price bubbles using the concept of weakly convergent discrete-time models (“large financial market”). More precisely, he showed that even if the price process is a martingale in a sequence of weakly convergent discrete-time models, it can have properties similar to a bubble in that the fundamental value in the asymptotic market can be lower than the current price in the asymptotic market. In contrast to [18], our approach is non-asymptotic.

To motivate our definition of a bubble in a discrete-time model, consider a non-sophisticated investor who follows a simple buy-and-hold strategy to invest into an asset with (discounted) price process \(S = (S_{k})_{k \in \mathbb{N}_{0}}\). The investor buys the asset at time \(k=0\) and hopes that it will rise and rise. When the (discounted) asset price drops for the first time, the investor fears to lose money and sells the asset. Denoting by \(\tau _{1}:=\inf \{j>0 : S_{j}< S_{j-1}\}\) the time of the first drawdown of the asset, the fundamental value under ℙ of \(S\) at time 0 (viewed with regard to the first drawdown) is \(E_{P}[S_{\tau _{1}}]\), where ℙ denotes an equivalent martingale measure. As the process \(S\) is a nonnegative supermartingale, we always have \(E_{P}[S_{\tau _{1}}] \leq S_{0}\). If \(E_{P}[S_{\tau _{1}}] < S_{0}\), the fundamental value of the asset (viewed with regard to the first drawdown) is lower than its initial price and hence \(S\) might be considered a ℙ-bubble. Indeed, if the market is complete, the predictable representation theorem implies that a sophisticated investor might choose a dynamic trading strategy \(\vartheta \) whose (discounted) value process \(V(\vartheta ) = (V_{k}(\vartheta ))_{k \in \mathbb{N}_{0}}\) satisfies \(V_{0}(\vartheta ) = E_{P}[S_{\tau _{1}}] < S_{0}\) and \(V_{\tau _{1}}(\vartheta ) = S_{\tau _{1}}\).
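For concreteness, the first-drawdown time can be read off a path with a few lines of code; the helper function and the toy price path below are our own illustration, not taken from the paper:

```python
# Illustrative helper (not from the paper): compute the drawdown times
# tau_1 < tau_2 < ... of a discrete path, where tau_k is the k-th index j
# with S_j < S_{j-1}, matching the definition of tau_1 above.

def drawdown_times(path):
    """Return the indices j >= 1 at which path[j] < path[j-1]."""
    return [j for j in range(1, len(path)) if path[j] < path[j - 1]]

path = [100, 110, 120, 115, 130, 90]   # hypothetical discounted prices
taus = drawdown_times(path)
print(taus)           # [3, 5]: first drawdown at j=3, second at j=5
print(path[taus[0]])  # 115, the value the buy-and-hold investor sells at
```

The buy-and-hold investor of the text sells at `taus[0]`, the first index at which the path decreases.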

Of course, the requirement that \(S\) loses mass at the first drawdown is somewhat arbitrary and unrealistic. For this reason, our precise definition of a bubble in discrete time is more general and only requires \(S\) to lose mass at the \(k\)th drawdown for some \(k \in \mathbb{N}\). While this definition is very simple, it leads to a rich theory.

In Sect. 2, we provide several equivalent probabilistic characterisations for a nonnegative discrete-time martingale to be a bubble. We also provide necessary and sufficient characterisations for a discrete-time martingale with independent increments to have a bubble. In particular, we show that i.i.d. returns models such as the standard binomial model do not have a bubble, which implies that Condition (I) above is satisfied.

In Sect. 3, we look at the special case that \(S\) is a Markov martingale. We provide characterisations for the presence or absence of bubbles, depending on the probability \(a(x) = P_{x}[S_{1} < x]\) of going down, and the relative recovery \(b(x) = E_{x}[\frac{S_{1}}{x} \mathbf{1}_{\{S_{1} < x\}}]\) when going down. Loosely speaking, it turns out that \(S\) is a bubble if and only if \(b(x)\) converges to 0 fast enough as \(x \to \infty \). To make this precise, however, is quite involved. While we are able to give sufficient conditions in the general case, we provide necessary and sufficient conditions in the case of complete markets.

In Sect. 4, we continue our study of Markov martingales by looking more closely at the underlying Markov kernel. We show that the existence of bubbles for \(S\) is directly linked to the existence of a non-trivial nonnegative solution to a linear Volterra integral equation of the second kind involving the Markov kernel. Among other things, this allows us to give some additional sufficient conditions for the existence of bubbles that cannot be covered with the results from Sect. 3.

Finally, in Sect. 5, we discuss how our definition of a bubble in discrete time relates to the strict local martingale definition in continuous time. We show that when discretising a positive continuous strict local martingale along sequences of stopping times in a certain somewhat canonical class, one obtains a bubble in discrete time. Conversely, we show that a positive continuous local martingale is a strict local martingale if for all localising sequences in the same class, the corresponding discretised martingales are bubbles. This shows that Condition (II) above is also satisfied. To prove these discretisation results, we rely on the deep change of measure techniques first employed by Delbaen and Schachermayer [5] and further developed by Pal and Protter [15], Kardaras et al. [11] and Perkowski and Ruf [16] that allow turning the inverse of a nonnegative strict local martingale into a true martingale under a locally dominating probability measure. Some technical proofs of this section are shifted to the Appendix.

2 Definition and characterisation of bubbles

In this section, we introduce our definition of a bubble in discrete time and provide equivalent probabilistic characterisations of this concept.

Definition 2.1

Let \((\Omega , \mathcal{F}, \mathbb{F} = (\mathcal{F}_{k})_{k \in \mathbb{N}_{0}}, P)\) be a filtered probability space. A nonnegative \((P, \mathbb{F})\)-martingale \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) is called a bubble if

$$ E[S_{\tau _{k}}] < E[S_{0}] $$
(2.1)

for some \(k \in \mathbb{N}\), where \(\tau _{0}:=0\) and \(\tau _{k}:=\inf \{j>\tau _{k-1} : S_{j}< S_{j-1}\}\), \(k \in \mathbb{N}\), denotes the \(k\)th drawdown of \(S\). We also call ℙ a bubble measure for \(S\).

Some comments on the above definition are in order.

Remark 2.2

(a) We include the possibility that \(P[\tau _{k} = \infty ] > 0\). Since \(S\) is a nonnegative martingale, it converges \(P\)-a.s. to some integrable random variable \(S_{\infty}\) by Doob’s supermartingale convergence theorem. Hence \(S_{\tau _{k}}\) is well defined in any case.

(b) If \(S\) is a Markov process, \(S\) will be a bubble if and only if \(E[S_{\tau _{1}}] < E[S_{0}]\); cf. Sect. 3 below. In general, however, the above definition does not contain any redundancy; one may think for example of dynamics with a change point.

(c) By the stopping theorem for uniformly integrable martingales, \(S\) can only be a bubble if it is not uniformly integrable. Example 2.9, however, shows that not every martingale that fails to be uniformly integrable is a bubble. Also, note that the precise definition of the stopping time \(\tau _{k}\) in (2.1) is important and cannot naively be replaced by an arbitrary stopping time.

(d) Our bubbles are strictly speaking ℙ-bubbles since the definition depends on the choice of the (equivalent) martingale measure ℙ. In incomplete markets, it is possible to have a ℙ-bubble under some equivalent martingale measure (EMM) ℙ but not a \(\tilde{P}\)-bubble under a different EMM \(\tilde{P}\). A simple Markovian trinomial-type example for this can be constructed by using the results in Sect. 3. In this sense, our definition is not robust with respect to the choice of EMM. Note that exactly the same issue arises in the strict local martingale definition of bubbles in continuous time. Using similar ideas as in Herdegen and Schweizer [7], one could introduce the notion of a strong bubble in discrete time. We leave the details of this to future work.

(e) Definition 2.1 can be reformulated in the following way: for some \(k \in \mathbb{N}\), the stopped process \(S^{\tau _{k}}\) fails to be uniformly integrable. This reformulation allows applying results from the literature that are proved for general càdlàg (local) martingales. For example, Hulley and Ruf [8] provide necessary and sufficient characterisations for a càdlàg (local) martingale to be uniformly integrable. However, the conditions given in [8] require calculating the distribution of the supremum of \(S^{\tau _{k}}\) and the distribution of the jumps of \(S^{\tau _{k}}\) at certain hitting times, both of which are difficult to get hold of. For this reason, they are not very useful in our discrete-time setup.

We proceed to give a first simple example of a bubble in a complete market model.

Example 2.3

Define the process \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) and the probability measure ℙ recursively by \(S_{0} = s_{0} > \frac{1}{2}\) and for \(k \in \mathbb{N}\),

Then \(S\) is a ℙ-martingale for its natural filtration, we have \(\tau _{1} < \infty \) \(P\)-a.s., and \(E[S_{\tau _{1}}] = \frac{1}{2} < s_{0} = E[S_{0}]\), i.e., \(S\) is a bubble.
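The displayed recursion is not reproduced above. The simulation below uses one concrete construction that is consistent with the stated properties (this is our assumption, not necessarily the paper's construction): from \(s_{0} > \frac{1}{2}\), the price falls to \(\frac{1}{2}\) with probability \(\frac{1}{2}\) and otherwise rises to \(2S_{k} - \frac{1}{2}\), which is a martingale step with \(\tau _{1}\) geometric and \(S_{\tau _{1}} = \frac{1}{2}\):

```python
import random

# Hypothetical construction (our assumption, consistent with Example 2.3's
# stated properties): from s > 1/2, move to 1/2 with probability 1/2 and to
# 2s - 1/2 with probability 1/2. One-step check:
# 0.5*0.5 + 0.5*(2s - 0.5) = s, so S is a martingale; the first drawdown
# tau_1 is geometric(1/2), hence finite a.s., with S_{tau_1} = 1/2.

random.seed(0)

def one_step(s):
    return 0.5 if random.random() < 0.5 else 2 * s - 0.5

def sample_S_tau1(s0, horizon=1000):
    s = s0
    for _ in range(horizon):        # P[tau_1 > 1000] = 2**-1000, negligible
        s_next = one_step(s)
        if s_next < s:              # first drawdown: the investor sells here
            return s_next
        s = s_next
    return s

s0 = 0.8
assert abs(0.5 * 0.5 + 0.5 * (2 * s0 - 0.5) - s0) < 1e-12   # martingale step
est = sum(sample_S_tau1(s0) for _ in range(10000)) / 10000
print(est)   # 0.5: every path gives S_{tau_1} = 1/2, so E[S_{tau_1}] < s0
```

Any other martingale dynamics with an almost surely finite first drawdown landing at \(\frac{1}{2}\) would serve the same illustrative purpose.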

The following result provides two equivalent characterisations of bubbles. The first shows that a nonnegative martingale \(S \) is a bubble if and only if there exists a deterministic time \(k \in \mathbb{N}_{0}\) such that \(S\) loses mass at the first drawdown after \(k\). The second provides a limit characterisation. The latter characterisation is particularly useful for checking whether or not a martingale \(S\) is a bubble.

Theorem 2.4

Let \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) be a nonnegative martingale. For \(k \in \mathbb{N}_{0}\), define the stopping time \(\tilde{\tau}_{k}\) by

$$ \tilde{\tau}_{k} := \inf \{j > k: S_{j} < S_{j-1}\}. $$

Then the following are equivalent:

(a) \(S\) is a bubble.

(b) There exists \(k \geq 0 \) such that \(E[S_{\tilde{\tau}_{k}}] < E[S_{k}]\).

(c) There exists \(k \geq 0 \) such that \(\lim _{n \to \infty} E[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] > 0\).

Proof

(a) ⇒ (b) Suppose that (b) is not true, i.e., \(E[S_{\tilde{\tau}_{k}}] = E[S_{k}]\) for all \(k\geq 0\). Then for all \(k\geq 0\), the stopped process \(S^{\tilde{\tau}_{k}}\) is a right-closed supermartingale which does not lose mass at \(\infty \) and hence is a uniformly integrable martingale. We proceed by induction to show that \(E[S_{\tau _{\ell}}] = E[S_{0}]\) for all \(\ell \geq 0\), whence (a) is not true. The induction basis \(\ell = 0\) is trivial. For the induction step, suppose that \(E[S_{\tau _{\ell -1}}] = E[S_{0}]\) for some \(\ell \geq 1\). Then using the stopping theorem for the uniformly integrable martingale \(S^{\tilde{\tau}_{k}}\) in the third equality, we obtain

$$\begin{aligned} E[S_{\tau _{\ell}}] &= \sum _{k =1}^{\infty }E[S_{\tau _{\ell}} \mathbf{1}_{\{\tau _{\ell -1} = k\}}] + E[S_{\infty }\mathbf{1}_{\{\tau _{\ell -1} = \infty \}}] \\ &= \sum _{k =1}^{\infty }E[S_{\tilde{\tau}_{k}} \mathbf{1}_{\{\tau _{\ell -1} = k\}}] + E[S_{\tau _{\ell -1}} \mathbf{1}_{\{\tau _{\ell -1} = \infty \}}] \\ &= \sum _{k =1}^{\infty }E[S_{\tau _{\ell -1}} \mathbf{1}_{\{\tau _{\ell -1} = k\}}] + E[S_{\tau _{\ell -1}} \mathbf{1}_{\{\tau _{\ell -1} = \infty \}}] = E[S_{\tau _{\ell -1}}] = E[S_{0}]. \end{aligned}$$

(b) ⇒ (a) This follows from the fact that \(E[S_{k}] = E[S_{0}]\) by the martingale property of \(S\) together with the fact that \(\tilde{\tau}_{k} \leq \tau _{k+1}\).

(b) ⇔ (c) The equivalence follows from the following calculation, which uses the martingale property of \(S\) in the third equality:

$$\begin{aligned} E[S_{\tilde{\tau}_{k}}] &= \lim _{n \to \infty} \sum _{\ell = k+1}^{n} E[S_{\ell }\mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \leq S_{\ell -1}, \, S_{\ell -1} > S_{\ell}\}}] + E[S_{\infty }\mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \}}] \\ &= \lim _{n \to \infty} \sum _{\ell = k+1}^{n} \big(E[S_{\ell }\mathbf{1}_{\{S_{k} \leq \cdots \leq S_{\ell -1}\}}] - E[S_{\ell }\mathbf{1}_{\{S_{k} \leq \cdots \leq S_{\ell}\}}]\big) + E[S_{\infty }\mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \}}] \\ &= \lim _{n \to \infty} \sum _{\ell = k+1}^{n} \big(E[S_{\ell -1} \mathbf{1}_{\{S_{k} \leq \cdots \leq S_{\ell -1}\}}] - E[S_{\ell }\mathbf{1}_{\{S_{k} \leq \cdots \leq S_{\ell}\}}]\big) + E[S_{\infty }\mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \}}] \\ &= E[S_{k}] + \lim _{n \to \infty} E[(S_{\infty }- S_{n}) \mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}]. \end{aligned}$$
(2.2)

This finishes the proof. □

The characterisation (c) in Theorem 2.4 is generally the most useful to decide whether or not \(S\) is a bubble. The following corollary strengthens this characterisation. It shows directly that bounded martingales fail to be bubbles. (Of course, this follows directly from the fact that a bounded martingale is uniformly integrable.)

Corollary 2.5

Let \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) be a nonnegative martingale. Then the following are equivalent:

(a) \(S\) is a bubble.

(b) There exist \(x \geq 0\) and \(k \geq 0 \) such that

$$ \lim _{n \to \infty} E[(S_{n} - S_{\infty}) \mathbf{1}_{\{x \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] > 0. $$

(c) For all \(x \geq 0\), there exists \(k \geq 0 \) such that

$$ \lim _{n \to \infty} E[(S_{n} - S_{\infty}) \mathbf{1}_{\{x \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] > 0. $$

Proof

It is clear that (c) implies (b), and (b) a fortiori implies condition (c) of Theorem 2.4, which by Theorem 2.4 gives (a). It remains to prove (a) ⇒ (c). So fix \(x \geq 0\). Since \(S\) is a bubble, by Theorem 2.4 (b) and the martingale property of \(S\), there exists \(\ell \geq 0\) such that \(E[S_{\tilde{\tau}_{\ell}}] < E[S_{\ell}] = E[S_{0}]\). Define the stopping times \(\sigma _{\ell ,x}\) and \(\tau ^{\sigma _{\ell ,x}}_{1}\) by

$$\begin{aligned} \sigma _{\ell ,x} := \inf \{k \geq \ell : S_{k} \geq x\}, \qquad \tau ^{\sigma _{\ell ,x}}_{1} := \inf \{j > \sigma _{\ell ,x}: S_{j} < S_{j-1}\}. \end{aligned}$$

Then \(\sigma _{\ell ,x}\) is the first hitting time of \([x, \infty )\) after \(\ell \), and \(\tau ^{\sigma _{\ell ,x}}_{1}\) is the first drawdown of \(S\) after \(\sigma _{\ell ,x}\). Since \(\sigma _{\ell ,x} \geq \ell \), it follows that \(\tau ^{\sigma _{\ell ,x}}_{1} \geq \tilde{\tau}_{\ell}\).

By the definition of \(\sigma _{\ell ,x}\), the stopped process \(S^{\sigma _{\ell ,x}}\) is uniformly integrable because

$$ S^{\sigma _{\ell ,x}}_{k} \leq \max (S_{0}, \ldots , S_{\ell}, S_{ \sigma _{\ell ,x}}, x)\in L^{1}.$$

This implies that \(E[S_{\sigma _{\ell ,x}}] = E[S_{0}]\). Since \(\tau ^{\sigma _{\ell ,x}}_{1} \geq \tilde{\tau}_{\ell}\), this in turn implies that

$$ E[S_{\tau ^{\sigma _{\ell ,x}}_{1}}] \leq E[S_{\tilde{\tau}_{\ell}}] < E[S_{0}] = E[S_{\sigma _{\ell ,x}}]. $$

A similar calculation as in (2.2) shows that

$$ E[S_{\tau ^{\sigma _{\ell ,x}}_{1}}] = E[S_{\sigma _{\ell ,x}}] + \lim _{n \to \infty} E[(S_{\infty }- S_{\sigma _{\ell ,x}+n}) \mathbf{1}_{\{S_{\sigma _{\ell ,x}} \leq \cdots \leq S_{\sigma _{\ell ,x}+n}\}} \mathbf{1}_{\{\sigma _{\ell ,x} < \infty \}}]. $$

This together with the tower property of conditional expectations and dominated convergence yields

$$ E\Big[\lim _{n \to \infty} E\big[(S_{\sigma _{\ell ,x}+n} - S_{\infty}) \mathbf{1}_{\{S_{\sigma _{\ell ,x}} \leq S_{\sigma _{\ell ,x}+1} \leq \cdots \leq S_{\sigma _{\ell ,x}+n}\}} \,\big|\, \mathcal{F}_{\sigma _{\ell ,x}}\big] \mathbf{1}_{\{\sigma _{\ell ,x} < \infty \}}\Big] > 0. $$

We may deduce that there is \(k \geq \ell \) such that

$$ E\Big[\lim _{n \to \infty} E\big[(S_{k+n} - S_{\infty}) \mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \leq S_{k+n}\}} \mathbf{1}_{\{\sigma _{\ell ,x} = k\}} \,\big|\, \mathcal{F}_{k}\big]\Big] > 0. $$

Using that \(S_{k} \geq x\) on \(\{\sigma _{\ell ,x} = k\}\), this implies that

$$ E\Big[\lim _{n \to \infty} E\big[(S_{k+n} - S_{\infty}) \mathbf{1}_{\{x \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{k+n}\}} \,\big|\, \mathcal{F}_{k}\big]\Big] > 0. $$

Dominated convergence and the tower property of conditional expectations give (c). □

While characterisations (b) and (c) in Corollary 2.5 are an improvement of Theorem 2.4 (c), they still depend on \(S_{\infty}\), which is generally difficult to get hold of. The following proposition provides a mild condition on \(S\) under which the bubble behaviour of \(S\) can be characterised without involving \(S_{\infty}\).

Proposition 2.6

Let \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) be a nonnegative martingale and \(x > 0\). Suppose that

$$ \sum _{k = 0}^{\infty }P[S_{k} < x \text{ or } S_{k} > S_{k+1} \,|\, \mathcal{F}_{k}] = \infty \qquad P\text{-a.s.} $$

Then for each \(k \in \mathbb{N}_{0}\),

$$ \lim _{n \to \infty} E[S_{\infty }\mathbf{1}_{\{x \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] = 0. $$
(2.3)

In particular, \(S\) is a bubble if and only if there exists \(k \in \mathbb{N}_{0}\) such that

$$ \lim _{n \to \infty} E[S_{n} \mathbf{1}_{\{x \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] > 0. $$

Proof

The conditional Borel–Cantelli lemma implies that

$$ P\Big[\liminf _{k \to \infty} \{x \leq S_{k} \leq S_{k+1}\}\Big] = 1 - P\Big[\limsup _{k \to \infty} \big(\{S_{k} < x\} \cup \{S_{k} > S_{k+1}\}\big)\Big] = 0. $$

This implies a fortiori that \(\{x \leq S_{k} \leq S_{k +1} \leq \cdots \}\) is a ℙ-null set for each \(k \in \mathbb{N}_{0}\). This gives (2.3). The final claim now follows from Corollary 2.5. □

The following result gives simple necessary and sufficient conditions for a nonnegative martingale with independent increments to be a bubble.

Theorem 2.7

Let \((X_{k})_{k \in \mathbb{N}}\) be an independent sequence of nonnegative random variables with \(E[X_{k}] = 1\) for \(k \in \mathbb{N}\). Define the process \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) by \(S_{k}=\prod _{\ell =1}^{k} X_{\ell}\) and the filtration \(\mathbb{F} = (\mathcal{F}_{k})_{k \in \mathbb{N}_{0}}\) by \(\mathcal{F}_{k} := \sigma (S_{0}, \ldots , S_{k})\). Moreover, for \(k \in \mathbb{N}\), set

$$ a_{k} := P[X_{k} < 1] \in [0,1), \qquad b_{k} := E[X_{k} \mathbf{1}_{\{X_{k} < 1\}}] \in [0, a_{k}]. $$

Then \(S\) is a nonnegative \((P, \mathbb{F})\)-martingale. It is a bubble if and only if

$$ \sum _{k =1}^{\infty }a_{k} =\infty \qquad \textit{and} \qquad \sum _{k =1}^{ \infty }b_{k} < \infty . $$

Proof

It is clear by construction that \(S\) is a nonnegative (P,F)-martingale.

First, we argue that if \(\sum _{k =1}^{\infty }a_{k} < \infty \), then \(S\) is uniformly integrable and hence cannot be a bubble. To this end, note that

$$ \infty > \sum _{k =1}^{\infty }a_{k} = \sum _{k =1}^{\infty }P[X_{k} < 1] \geq \sum _{k =1}^{\infty }E[(1 - \sqrt{X_{k}}) \mathbf{1}_{\{X_{k} < 1\}}] \geq \sum _{k =1}^{\infty }E[1 - \sqrt{X_{k}}]. $$

By Kakutani’s theorem (see e.g. Williams [19, Theorem 14.12 (v)]), this implies that \(S\) is uniformly integrable.

Next, if \(\sum _{k =1}^{\infty }a_{k} = \infty \), by independence of the \(X_{k}\),

$$ \sum _{k = 0}^{\infty }P[S_{k} < 1 \text{ or } S_{k} > S_{k+1} \,|\, \mathcal{F}_{k}] \geq \sum _{k = 0}^{\infty }P[X_{k+1} < 1 \,|\, \mathcal{F}_{k}] = \sum _{k =1}^{\infty }a_{k} = \infty . $$

This together with Proposition 2.6 implies that \(S\) is a bubble if and only if there exists \(k \geq 0\) such that

$$ \lim _{n \to \infty} E[S_{n} \mathbf{1}_{\{1 \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] = E[S_{k} \mathbf{1}_{\{S_{k} \geq 1\}}] \prod _{\ell = k+1}^{\infty }E[X_{\ell }\mathbf{1}_{\{X_{\ell }\geq 1\}}] = E[S_{k} \mathbf{1}_{\{S_{k} \geq 1\}}] \prod _{\ell = k+1}^{\infty }(1 - b_{\ell}) > 0. $$

Since \(P[S_{k} \geq 1] > 0\) and \(1- b_{\ell} > 0\) for all \(\ell \in \mathbb{N}\), this is equivalent to

$$ \prod _{\ell =1}^{\infty }(1 - b_{\ell}) > 0,$$

which in turn is equivalent to \(\sum _{k =1}^{\infty }b_{k} < \infty \). □

We illustrate the above theorem by two examples. The first gives a bubble in a time-dependent binomial-type model, where the downward jumps get more and more severe.

Example 2.8

Let \((X_{k})_{k \in \mathbb{N}}\) be a sequence of independent random variables satisfying \(P[X_{k} = \frac{1}{k}] = \frac{1}{k}\) and \(P[X_{k} = 1 + \frac{1}{k}] = 1- \frac{1}{k}\). Define the process \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) by \(S_{k}:=\prod _{\ell =1}^{k} X_{\ell}\) and the filtration \(\mathbb{F} = (\mathcal{F}_{k})_{k \in \mathbb{N}_{0}}\) by \(\mathcal{F}_{k} := \sigma (S_{0}, \ldots , S_{k})\). Then \(a_{k} = \frac{1}{k}\) and \(b_{k} = \frac{1}{k^{2}}\). Hence \(\sum _{k =1}^{\infty }a_{k} =\infty \) and \(\sum _{k =1}^{\infty }b_{k} < \infty \), whence \(S\) is a bubble.

The second example shows that a martingale with i.i.d. returns is never a bubble. In particular, a standard binomial model is never a bubble, which agrees with our intuition.

Example 2.9

Let \((X_{k})_{k \in \mathbb{N}}\) be a sequence of i.i.d. random variables that are positive and satisfy \(E[X_{k}] = 1\) and \(P[X_{k} \neq 1] > 0\). Define the process \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) by \(S_{k}:=\prod _{\ell =1}^{k} X_{\ell}\) and the filtration \(\mathbb{F} = (\mathcal{F}_{k})_{k \in \mathbb{N}_{0}}\) by \(\mathcal{F}_{k} := \sigma (S_{0}, \ldots , S_{k})\). Then \(S\) is a martingale but fails to be uniformly integrable because \(E[\log X_{k}] < 0\) (by Jensen's inequality, strictly, as \(X_{k}\) is non-degenerate) implies that \(S_{n}=\exp (\sum _{k=1}^{n}\log X_{k} ) \stackrel{{\mathrm{a.s.}}}{\longrightarrow}0\) by the strong law of large numbers. However, setting \(b_{k} := E[X_{k} \mathbf{1}_{\{X_{k} < 1\}}] = b > 0\), we obtain

$$ \sum _{k = 1}^{\infty }b_{k} = \infty . $$

Thus \(S\) fails to be a bubble.
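Theorem 2.7's criterion is easy to probe numerically. The snippet below checks the divergence/convergence of \(\sum a_{k}\) and \(\sum b_{k}\) for Example 2.8, and for an i.i.d. binomial model in the spirit of Example 2.9; the binomial parameters (up-factor 2, down-factor 1/2, up-probability 1/3) are our own choice, picked so that \(E[X_{k}] = 1\):

```python
# Numerical look at the criterion of Theorem 2.7: a bubble needs
# sum a_k = infinity together with sum b_k < infinity. The cutoff N is
# arbitrary.

N = 10**6

# Example 2.8: a_k = 1/k (harmonic, diverges), b_k = 1/k**2 (converges).
sum_a = sum(1.0 / k for k in range(1, N + 1))
sum_b = sum(1.0 / k**2 for k in range(1, N + 1))
print(sum_a, sum_b)   # sum_a grows like log N; sum_b approaches pi^2/6

# Example 2.9, specialised to a binomial model (our parameters):
# X in {2, 1/2} with P[X = 2] = 1/3, so that E[X] = 1.
p_up, up, down = 1.0 / 3.0, 2.0, 0.5
assert abs(p_up * up + (1 - p_up) * down - 1.0) < 1e-12  # martingale returns
b = (1 - p_up) * down   # b_k = E[X 1_{X<1}] = 1/3 every period
print(b)                # constant, so sum b_k diverges: not a bubble
```

In the first model the mass lost on down moves is summable even though down moves never stop, so \(S\) is a bubble; in the binomial model each down move recovers a fixed fraction, \(\sum b_{k}\) diverges, and no bubble is possible.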

3 Characterisation of bubble measures for Markov chains

Throughout this section, we suppose that \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) is a positive martingale which is a Markov process with transition kernel \(K:(0,\infty )\times \mathcal{B}{(0,\infty )} \to [0,\infty )\) and starting from \(S_{0}=x>0\). Our goal is to determine under which conditions on the kernel \(K(x, \,\mathrm{d}y)\) the measure \(P_{x}\) is a bubble measure. To this end, we define the functions \(a, b: (0, \infty ) \to [0, 1)\) by

$$ a(x) := P_{x}[S_{1} < x] = \int _{[0,x)} K(x,\mathrm{d}y), $$
(3.1)
$$ b(x) := E_{x}\bigg[\frac{S_{1}}{x} \mathbf{1}_{\{S_{1} < x\}}\bigg] = \int _{[0,x)} \frac{y}{x}\, K(x,\mathrm{d}y). $$
(3.2)

Hence \(a(x)\) denotes the probability of a downward jump and \(b(x)\) the relative recovery in case of a downward jump.

First, we show that \(S\) cannot be a bubble unless the relative recovery function \(b\) converges to zero at infinity.

Proposition 3.1

Assume that \(\liminf _{x \to \infty} b(x) > 0\). Then \(S\) fails to be a bubble under \(P_{x}\) for any \(x \in (0, \infty )\).

Proof

There exist \(x_{0} > 0\) and \(\varepsilon \in (0,1)\) such that \(b(x) \geq \varepsilon \) for all \(x \geq x_{0}\). Pick \(x \in (0, \infty )\). By Corollary 2.5 and using that \(S_{\infty }\geq 0\), it suffices to show that for each \(k \in \mathbb{N}_{0}\),

$$ \lim _{n \to \infty} E_{x}[S_{n} \mathbf{1}_{\{x_{0} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] = 0. $$

So let \(k < n\). Then by the Markov property and the definition of \(b\),

$$\begin{aligned} E_{x}[S_{n} \mathbf{1}_{\{x_{0} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] &= E_{x}\bigg[E_{x}\Big[\frac{S_{n}}{S_{n-1}} \mathbf{1}_{\{S_{n} \geq S_{n-1}\}} \,\Big|\, \mathcal{F}_{n-1}\Big] S_{n-1} \mathbf{1}_{\{x_{0} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n-1}\}}\bigg] \\ &= E_{x}\Big[\big(1 - b(S_{n-1})\big) S_{n-1} \mathbf{1}_{\{x_{0} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n-1}\}}\Big] \\ &\leq (1- \varepsilon ) E_{x}[S_{n-1} \mathbf{1}_{\{x_{0} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n-1}\}}] \\ &\leq \cdots \leq (1- \varepsilon )^{n-k} E_{x}[S_{k} \mathbf{1}_{\{x_{0} \leq S_{k}\}}] \leq (1- \varepsilon )^{n-k} x. \end{aligned}$$

Now the claim follows by letting \(n \to \infty \). □

We proceed to formulate a mild condition on the function \(a\) which allows us to characterise the bubble behaviour of \(S\) without involving \(S_{\infty}\).

Assumption 3.2

There exists \(x_{a} > 0\) such that \(\inf _{x \in [x_{a}, y]} a(x) > 0\) for all \(y > x_{a}\).

Remark 3.3

Assumption 3.2 is in particular fulfilled if \(a\) is positive and lower semi-continuous.

Proposition 3.4

Suppose Assumption 3.2 is satisfied for some \(x_{a}>0\). Let \(x' \geq x_{a}\). Then for each \(k \in \mathbb{N}_{0}\),

$$ \lim _{n \to \infty} E_{x}[S_{\infty }\mathbf{1}_{\{x' \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] = 0. $$
(3.3)

Moreover, \(S\) is a bubble under \(P_{x}\) if and only if there exists \(k \in \mathbb{N}_{0}\) such that

$$ \lim _{n \to \infty} E_{x}[S_{n} \mathbf{1}_{\{x' \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}] > 0. $$

Proof

By Proposition 2.6, it suffices to check that \(P_{x}\)-a.s.,

$$ \sum _{k = 0}^{\infty }P_{x}[S_{k} < x' \text{ or } S_{k} > S_{k+1} \,|\, \mathcal{F}_{k}] = \sum _{k = 0}^{\infty }\big(\mathbf{1}_{\{S_{k} < x'\}} + a(S_{k}) \mathbf{1}_{\{S_{k} \geq x'\}}\big) = \infty . $$

We now distinguish two cases. If \(\omega \in \limsup _{k \to \infty} \{S_{k} < x'\}\), it follows that \(\sum _{k = 0}^{\infty }\mathbf{1}_{\{S_{k}(\omega ) < x'\}}= \infty \). If \(\omega \in \liminf _{k \to \infty} \{S_{k} \geq x'\} \cap \{\sup _{k \geq 0} S_{k} < \infty \}\), then \(\sum _{k = 0}^{\infty }a(S_{k}( \omega ))\mathbf{1}_{\{S_{k}(\omega ) \geq x'\}} = \infty \) by the assumption on \(a\). Since \(\sup _{k \geq 0} S_{k} < \infty \) \(P_{x}\)-a.s. by Doob’s martingale convergence theorem, the claim follows. □

We next aim to give a sufficient condition for \(S\) to be a bubble. To this end, we need to slightly relax the definition of the function \(b\) from (3.2). For \(\varepsilon > 0\), define the function \(b_{\varepsilon }: (0, \infty ) \to (0, 1]\) by

$$ b_{\varepsilon }(x) := E_{x}\bigg[\frac{S_{1}}{x} \mathbf{1}_{\{S_{1} < x(1+\varepsilon )\}}\bigg] = \int _{[0, x(1+\varepsilon ))} \frac{y}{x}\, K(x,\mathrm{d}y). $$

Theorem 3.5

Suppose that Assumption 3.2 is satisfied and there exist \(\varepsilon > 0\) and \(x_{b} > 0\) with the property that the function \(b_{\varepsilon }\) is nonincreasing for \(x \geq x_{b}\) and satisfies \(\int _{\log x_{b}}^{\infty }b_{\varepsilon }(\exp (x)) \, \mathrm{d}x < \infty \). Then \(S\) is a bubble under each \(P_{x}\) for which \(S\) is not \(P_{x}\)-a.s. bounded.

Proof

Suppose that \(S\) is not \(P_{x}\)-a.s. bounded. Let \(x_{a}\) be the constant in Assumption 3.2. We may assume without loss of generality that \(x_{b} \geq x_{a}\). By Proposition 3.4, it suffices to check that there is \(k \in \mathbb{N}_{0}\) such that

$$ \lim _{n \to \infty} E_{x}[S_{k+n} \mathbf{1}_{\{x_{b} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{k+n}\}}] \geq \lim _{n \to \infty} E_{x}\bigg[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}} \prod _{j=1}^{n} \frac{S_{k+j}}{S_{k+j-1}} \mathbf{1}_{\{S_{k+j} \geq S_{k+j-1}(1+\varepsilon )\}}\bigg] > 0. $$

Since \(S\) is not \(P_{x}\)-a.s. bounded, there exists \(k \in \mathbb{N}_{0}\) with \(E_{x}[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}}] > 0\). Fix \(n \in \mathbb{N}\). By the definition of \(b_{\varepsilon }\), we obtain for \(j \in \{1, \ldots , n\}\) that

$$ E_{x}\Big[\frac{S_{k+j}}{S_{k+j-1}} \mathbf{1}_{\{S_{k+j} \geq S_{k+j-1}(1+\varepsilon )\}} \,\Big|\, \mathcal{F}_{k+j-1}\Big] = 1 - b_{\varepsilon }(S_{k+j-1}) \qquad P_{x}\text{-a.s.} $$

This together with the tower property of conditional expectations and the fact that \(b_{\varepsilon }\) is nonincreasing for \(x \geq x_{b}\) gives

$$\begin{aligned} & E_{x}\bigg[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}} \prod _{j=1}^{n} \frac{S_{k+j}}{S_{k+j-1}} \mathbf{1}_{\{S_{k+j} \geq S_{k+j-1}(1+\varepsilon )\}}\bigg] \\ &= E_{x}\bigg[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}} \bigg(\prod _{j=1}^{n-1} \frac{S_{k+j}}{S_{k+j-1}} \mathbf{1}_{\{S_{k+j} \geq S_{k+j-1}(1+\varepsilon )\}}\bigg) \big(1 - b_{\varepsilon }(S_{k+n-1})\big)\bigg] \\ &\geq E_{x}\bigg[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}} \bigg(\prod _{j=1}^{n-1} \frac{S_{k+j}}{S_{k+j-1}} \mathbf{1}_{\{S_{k+j} \geq S_{k+j-1}(1+\varepsilon )\}}\bigg) \Big(1 - b_{\varepsilon }\big(x_{b} (1 + \varepsilon )^{n-1}\big)\Big)\bigg] \\ &\geq \cdots \geq E_{x}\bigg[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}} \prod _{j=0}^{n-1} \Big(1 - b_{\varepsilon }\big(x_{b} (1 + \varepsilon )^{j}\big)\Big)\bigg] = E_{x}[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}}] \prod _{j=0}^{n-1} \Big(1 - b_{\varepsilon }\big(x_{b} (1 + \varepsilon )^{j}\big)\Big). \end{aligned}$$

Thus it remains to show that \(\prod _{j =0}^{\infty }(1 - b_{\varepsilon }(x_{b} (1 + \varepsilon )^{j})) > 0\). The latter condition is equivalent to \(\sum _{j =0}^{\infty }b_{\varepsilon }(\exp (\log x_{b} + \log (1 + \varepsilon ) j )) < \infty \), which is equivalent to \(\int _{\log x_{b}}^{\infty }b_{\varepsilon }(\exp (x)) \,\mathrm{d}x < \infty \). □

We illustrate the above result by two examples. The first is a “smooth” version of Example 2.3; the second is an example of a “discrete diffusion” for the log price.

Example 3.6

Assume that the Markov kernel is given by

$$ K(x,\mathrm{d}y)= \textstyle\begin{cases} \frac{1}{2} \mathbf{1}_{(0, 1)}(y) \,\mathrm{d}y + \frac{1}{2} \mathbf{1}_{(2x-1, 2x)}(y) \,\mathrm{d}y&\quad \text{if } x> 1, \\ \frac{1}{2x} \mathbf{1}_{(0, 2x)}(y) \,\mathrm{d}y &\quad \text{if } x \leq 1. \end{cases} $$

Then \(a(x) = \frac{1}{2} \) and for \(\varepsilon \in (0, 1)\),

$$ b_{\varepsilon }(x) = \frac{(1+\varepsilon )^{2}}{4}\mathbf{1}_{\{x \leq 1\}} + \bigg(1 - x\Big(1 - \frac{(1+\varepsilon )^{2}}{4}\Big) \bigg)\mathbf{1}_{\{1 \leq x \leq \frac{1}{1 -\varepsilon }\}} + \frac{1}{4 x} \mathbf{1}_{\{x > \frac{1}{1 -\varepsilon }\}} .$$

By Theorem 3.5, \(S\) is a bubble under \(P_{x}\) for all \(x>0\).
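The closed-form expressions for \(a\) and \(b_{\varepsilon }\) in this example can be checked by integrating the kernel density numerically; the midpoint-rule helper and the chosen values \(x = 3\), \(\varepsilon = \frac{1}{2}\) are our own:

```python
# Sanity check of Example 3.6 by numerical integration (midpoint Riemann
# sums; grid size is arbitrary). For x > 1 the kernel has density 1/2 on
# (0, 1) and 1/2 on (2x - 1, 2x).

def riemann(f, lo, hi, n=10000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

x, eps = 3.0, 0.5                 # here x > 1/(1 - eps) = 2

# a(x) = mass of the kernel below x; only the (0, 1) lobe lies below x = 3.
a_x = riemann(lambda y: 0.5, 0.0, 1.0)
assert abs(a_x - 0.5) < 1e-6

# b_eps(x) = integral of (y/x) * density over [0, x(1+eps)); again only the
# (0, 1) lobe contributes, since 2x - 1 = 5 > x(1 + eps) = 4.5.
b_eps_x = riemann(lambda y: (y / x) * 0.5, 0.0, 1.0)
assert abs(b_eps_x - 1.0 / (4 * x)) < 1e-6
print(a_x, b_eps_x)   # approx 0.5 and 1/12, matching the displayed formulas
```

The same quadrature applied to other values of \(x\) reproduces the piecewise formula for \(b_{\varepsilon }\) displayed above.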

Example 3.7

Let \((Z_{k})_{k \in \mathbb{N}}\) be a sequence of i.i.d. standard normal random variables and \(\sigma : \mathbb{R} \to (0,\infty )\) a measurable function such that \(\sigma (x)\) is nondecreasing for large values of \(x\). Define the process \((X_{k})_{k \in \mathbb{N}_{0}}\) recursively by \(X_{0} := 0\) and

$$ X_{k+1} := X_{k} + \sigma (X_{k}) Z_{k+1} - \frac{\sigma ^{2}(X_{k})}{2}, \qquad k \in \mathbb{N}_{0}. $$

Then the process \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) defined by \(S_{k} :=\exp (X_{k})\) for \(k \in \mathbb{N}_{0}\) is a Markov martingale.

Denoting by \(\Phi \) the distribution function of a standard normal random variable, it is not difficult to check that for \(x > 0\) and \(\varepsilon \geq 0\),

$$ a(x)=\Phi \bigg(\frac{\sigma (\log x)}{2}\bigg),\qquad b_{ \varepsilon }(x)=\Phi \bigg( \frac{\log (1+\varepsilon )}{\sigma (\log x)}- \frac{\sigma (\log x)}{2}\bigg), $$

where \(b_{0} = b\). Thus by Proposition 3.1, for \(S\) to be a bubble, it is necessary that \(\sigma (x)\to \infty \) as \(x\to \infty \). Moreover, by Theorem 3.5, a sufficient condition for \(S\) to be a bubble is given by

$$ \int _{x_{0}}^{\infty }\Phi \bigg( \frac{\log 2}{\sigma (x)}- \frac{\sigma (x)}{2}\bigg) \,\mathrm{d}x< \infty \qquad \text{for some } x_{0} \in \mathbb{R}. $$
(3.4)

Denoting the density function of a standard normal random variable by \(\varphi \), using Mills’ ratio and the fact that \(|(\frac{\log 2}{\sigma (x)}-\frac{\sigma (x)}{2})^{2} - ( \frac{\sigma (x)}{2})^{2} | \leq (\frac{\log 2}{\sigma (x)})^{2}+ \log 2\) is uniformly bounded for all sufficiently large \(x\) as \(\sigma \) is nondecreasing, it is not difficult to check that (3.4) is equivalent to

$$ \int _{x_{0}}^{\infty }\frac{1}{\sigma (x)}\, \varphi \bigg(\frac{\sigma (x)}{2}\bigg) \,\mathrm{d}x< \infty \qquad \text{for some } x_{0} \in \mathbb{R}. $$
(3.5)

Remark 3.8

It is insightful to compare Example 3.7 to the continuous-time theory of bubbles. To this end, recall that in continuous time, the process \(S = (S_{t})_{t \geq 0}\) given by \(S_{t}:=\exp (X_{t})\), where \(X=(X_{t})_{t\geq 0}\) solves the SDE

$$ dX_{t}=\sigma (X_{t})dW_{t}-\frac{1}{2}\sigma ^{2}(X_{t})dt,$$

is a strict local martingale and hence a bubble if and only if for some \(x_{0} \in \mathbb{R}\),

$$ \int ^{\infty}_{x_{0}}\frac{dx}{\sigma ^{2}(x)}< \infty ; $$
(3.6)

cf. Mijatović and Urusov [14, Corollary 4.3]. While (3.5) and (3.6) both say, loosely, that \(S\) is a bubble when \(\sigma (x)\to \infty \) fast enough as \(x\to \infty \), the exact rate of increase of \(\sigma \) required for a bubble is quite different as (3.6) is a much stronger requirement on the growth of \(\sigma \) than (3.5). The reason for this is that the discretisation of the diffusion model in continuous time should not be done along a deterministic time grid, but along certain sequences of stopping times; see Sect. 5 below.

While Proposition 3.1 and Theorem 3.5 give useful general sufficient conditions for the absence or presence of a bubble, respectively, these conditions are not necessary. In the complete Markov case, we can give a necessary and sufficient characterisation of bubbles under mild assumptions on the functions \(a\) and \(b\).

Theorem 3.9

Suppose the Markov kernel is given by

$$ K(x, \mathrm{d}y) = a(x)\delta _{\frac{b(x)x}{a(x)}} (\mathrm{d}y) + \big(1- a(x)\big)\delta _{\frac{(1-b(x))x}{1-a(x)}} (\mathrm{d}y), $$

where \(0 \leq b(x) < a(x) < 1\) and \(0 < \liminf _{x \to \infty} a(x) \leq \limsup _{x \to \infty} a(x) < 1\). Moreover, suppose that there exists \(x_{b} > 0\) such that the function \(b\) is nonincreasing for \(x \geq x_{b}\). Then \(S\) is a bubble under \(P_{x}\) if and only if \(S\) is not \(P_{x}\)-a.s. bounded and \(\int _{\log x_{b}}^{\infty }b(\exp (x)) \,\mathrm{d}x < \infty \).

The above process corresponds to a binomial-type model where the probability of downward jumps is bounded away from 0 and 1 and the relative recovery in case of a downward jump decreases for large values.

Proof of Theorem 3.9

Suppose that \(S\) is not \(P_{x}\)-a.s. bounded. We may focus on the case where \(\lim _{x \to \infty} b(x) = 0\). Indeed, otherwise it follows that \(\lim _{x \to \infty} b(x) > 0\) and hence \(\int _{\log x_{b}}^{ \infty }b(\exp (x)) \,\mathrm{d}x = \infty \), and \(S\) fails to be a bubble by Proposition 3.1.

Since \(\lim _{x \to \infty} b(x) = 0\) and \(0 < \liminf _{x \to \infty} a(x) \leq \limsup _{x \to \infty} a(x) < 1\), after potentially enlarging \(x_{b}\), we may assume that there exist constants \(0 < c < C\) such that \(1 +c \leq \frac{1- b(x)}{1 - a(x)} \leq 1 + C\) for all \(x \geq x_{b}\). By Proposition 3.4, \(S\) is a bubble under \(P_{x}\) if and only if there exists \(k \geq 0\) such that

$$ \lim _{n \to \infty} E_{x}[S_{k+n} \mathbf{1}_{\{x_{b} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{k+n}\}}] > 0. $$

Using that \(\{S_{j} \leq S_{j+1}\} = \{S_{j+1} = \frac{1- b(S_{j})}{1 - a(S_{j})} S_{j}\}\) for \(j \in \mathbb{N}_{0}\) and arguing as in the proof of Theorem 3.5 gives for each \(0 \leq k \leq n\) that

$$\begin{aligned} E_{x}[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}}] \prod _{j=0}^{n-1} \Big(1 - b\big(x_{b} (1 + C)^{j}\big)\Big) &\geq E_{x}[S_{k+n} \mathbf{1}_{\{x_{b} \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{k+n}\}}] \\ &\geq E_{x}[S_{k} \mathbf{1}_{\{S_{k} \geq x_{b}\}}] \prod _{j=0}^{n-1} \Big(1 - b\big(x_{b} (1 + c)^{j}\big)\Big). \end{aligned}$$

Now using that for \(\gamma \in \{c, C\}\), \(\prod _{j =0}^{\infty }(1 - b(x_{b} (1 + \gamma )^{j})) > 0\) if and only if

$$ \sum _{j =0}^{\infty }b\Big(\exp \big(\log x_{b} + \log (1 + \gamma ) j \big)\Big) < \infty , $$

which in turn is equivalent to \(\int _{\log x_{b}}^{\infty }b(\exp (x)) \,\mathrm{d}x < \infty \), the claim follows. □
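The dichotomy behind this proof — the partial products \(\prod _{j<n}(1 - b(x_{b}(1+\gamma )^{j}))\) stay bounded away from zero exactly when \(\int b(\exp (x))\,\mathrm{d}x < \infty \) — can be illustrated numerically. The two recovery functions below are our own test cases, one on each side of the integral test:

```python
import math

# Numerical illustration of the integral test in Theorems 3.5 and 3.9:
# along the geometric grid x_b (1+gamma)^j, write t_j = log x_b +
# j*log(1+gamma) and compare prod_j (1 - b(exp(t_j))) with the
# convergence of the integral of b(exp(t)) dt. The recovery functions
# b_fast and b_slow are hypothetical test cases, not from the paper.

def partial_product(b_of_log, n, log_xb=2.0, gamma=1.0):
    p = 1.0
    step = math.log(1.0 + gamma)
    for j in range(n):
        p *= 1.0 - b_of_log(log_xb + j * step)
        if p == 0.0:
            break
    return p

b_fast = lambda t: 1.0 / t**2   # integral of b(exp(t)) dt converges
b_slow = lambda t: 0.5 / t      # integral of b(exp(t)) dt diverges

p_fast = partial_product(b_fast, 10**5)
p_slow = partial_product(b_slow, 10**5)
print(p_fast, p_slow)
assert p_fast > 0.1    # positive limit: bubble criterion satisfied
assert p_slow < 0.01   # product collapses to 0: no bubble
```

In the language of Theorem 3.9: a relative recovery decaying like \(1/\log ^{2} x\) is small enough for a bubble, while a decay like \(1/\log x\) is not.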

4 A fixed point equation associated to a Markovian bubble

In this section, we continue our study of Markov martingales, taking a more analytic perspective. We assume throughout that \(S = (S_{k})_{k \in \mathbb{N}_{0}}\) is a positive Markov martingale with kernel \(K:(0,\infty )\times \mathcal{B}{(0, \infty )}\to [0,\infty )\), starting from \(S_{0}=x>0\). The key object of this section is the default function of \(S\).

Definition 4.1

The Borel-measurable function \(M_{S}: (0, \infty ) \to [0, \infty )\) defined by

$$ M_{S}(x) := \lim _{n \to \infty} \mathbb{E}_{x}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq x\}}\big] $$

is called the default function of \(S\).

It follows from (2.2) (with \(k = 0\)) that \(M_{S}(x) = x - \mathbb{E}_{x}[S_{\tau _{1}}]\), so that \(M_{S}\) measures the loss of mass at the first drawdown of \(S\). It is clear that \(\mathbb{P}_{x}\) is a bubble measure for \(S\) if \(M_{S}(x) > 0\). The following two results show that \(M_{S}\) essentially fully characterises the bubble behaviour of \(S\) under \(\mathbb{P}_{x}\) for all \(x > 0\).

Proposition 4.2

Suppose there exists \(x' > 0\) such that \(M_{S}(x) = 0\) for all \(x \geq x'\). Then \(M_{S}(x) = 0\) for all \(x > 0\), and \(\mathbb{P}_{x}\) fails to be a bubble measure for \(S\) for any \(x > 0\).

Proof

Fix \(x > 0\). Then for all \(k \in \mathbb{N}_{0}\), the Markov property of \(S\), the choice of \(x'\) and dominated convergence give

$$\begin{aligned} \lim _{n \to \infty} \mathbb{E}_{x}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{x' \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}\big] &= \mathbb{E}_{x}\Big[\lim _{n \to \infty} \mathbb{E}_{S_{k}}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}\big] \mathbf{1}_{\{S_{k} \geq x'\}}\Big] \\ &= \mathbb{E}_{x}\big[M_{S}(S_{k}) \mathbf{1}_{\{S_{k} \geq x'\}}\big] = 0. \end{aligned}$$

Thus Corollary 2.5 implies that \(S\) is not a bubble under \(\mathbb{P}_{x}\), and so \(M_{S}(x) = 0\). □

Proposition 4.3

Suppose that \(\liminf _{x \to \infty} M_{S}(x) > 0\). Then \(S\) is a bubble under \(\mathbb{P}_{x}\) for all \(x > 0\) for which \(S\) is not \(\mathbb{P}_{x}\)-a.s. bounded.

Proof

Fix \(x > 0\) and suppose that \(S\) is not \(\mathbb{P}_{x}\)-a.s. bounded. By hypothesis, there exists \(x' \geq x\) such that \(M_{S}(y) > 0\) for all \(y \geq x'\). Since \(S\) is not \(\mathbb{P}_{x}\)-a.s. bounded, there is \(k \geq 0\) such that \(\mathbb{P}_{x}[S_{k} \geq x'] > 0\). Then by the Markov property of \(S\), the choice of \(x'\) and dominated convergence,

$$\begin{aligned} \lim _{n \to \infty} \mathbb{E}_{x}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{x' \leq S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}\big] &= \mathbb{E}_{x}\Big[\lim _{n \to \infty} \mathbb{E}_{S_{k}}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{k} \leq S_{k+1} \leq \cdots \leq S_{n}\}}\big] \mathbf{1}_{\{S_{k} \geq x'\}}\Big] \\ &= \mathbb{E}_{x}\big[M_{S}(S_{k}) \mathbf{1}_{\{S_{k} \geq x'\}}\big] > 0. \end{aligned}$$

Thus \(S\) is a bubble under \(\mathbb{P}_{x}\) by Corollary 2.5. □

In the remainder of this section, we seek to characterise the function \(M_{S}\) in an analytic way and provide conditions for it to be non-zero.

First, we show that \(M_{S}\) solves a fixed point equation, more precisely a homogeneous Volterra integral equation of the second kind; cf. Brunner [3, Chap. 1.2] for a textbook treatment.

Lemma 4.4

The default function \(M_{S}\) is a solution to the Volterra integral equation

$$ M_{S}(x)=\int _{[x,\infty )} M_{S}(y)K(x,\mathrm{d}y),\qquad x>0. $$
(4.1)

Proof

Fix \(x>0\). Using the definition of \(M_{S}\), dominated convergence and the Markov property of \(S\), we obtain

$$\begin{aligned} \int _{[x,\infty )} M_{S}(y) K(x,\mathrm{d}y) &= \mathbb{E}_{x}\big[M_{S}(S_{1}) \mathbf{1}_{\{S_{1} \geq x\}}\big] \\ &= \mathbb{E}_{x}\Big[\lim _{n \to \infty} \mathbb{E}_{S_{1}}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq S_{0}\}}\big] \mathbf{1}_{\{S_{1} \geq x\}}\Big] \\ &= \lim _{n \to \infty} \mathbb{E}_{x}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq x\}}\big] = M_{S}(x). \end{aligned}$$

 □

Note that (4.1) is non-standard in that the domain is non-compact. Therefore, we cannot apply standard existence and uniqueness results for Volterra integral equations, cf. [3, Chap. 8]. In fact, existence is anyway not an issue since the zero function always solves (4.1). Since the bubble case corresponds to (4.1) having a non-zero (nonnegative) solution, we are actually interested in non-uniqueness, i.e., the case that (4.1) has multiple nonnegative solutions. By homogeneity of (4.1), we then always have infinitely many solutions, and so it is clear that we need an additional condition to pin down the default function \(M_{S}\).

It follows from the definition of \(M_{S}\) that \(M_{S}(x) \leq x\) for all \(x > 0\). So we consider nonnegative solutions to (4.1) that are dominated by the identity. To this end, denote by ℐ all Borel-measurable functions \(M: (0, \infty ) \to [0, \infty )\) satisfying \(M(x) \leq x\) for all \(x > 0\). Using that

$$\begin{aligned} 0 \leq \int _{[x,\infty )} M(y)K(x,\mathrm{d}y) \leq \int _{(0, \infty )} M(y)K(x,\mathrm{d}y) \leq \int _{(0,\infty )} y K(x, \mathrm{d}y) = x \end{aligned}$$

for all \(M \in \mathcal{I}\) and \(x > 0\), we can define the map \(\mathcal{K}: \mathcal{I}\to \mathcal{I}\) by

$$ \mathcal{K}(M)(x) = \int _{[x,\infty )} M(y)K(x,\mathrm{d}y), \qquad x > 0. $$

Then the nonnegative solutions to (4.1) dominated by the identity are precisely given by fixed points of \(\mathcal{K}\).

While the map \(\mathcal{K}\) is in general not a contraction (and therefore (4.1) may have multiple solutions on ℐ), it is monotone, and this property will prove crucial for our subsequent analysis.

Proposition 4.5

The map \(\mathcal{K}\) is monotone on ℐ.

Proof

Let \(M_{1}, M_{2} \in \mathcal{I}\) with \(M_{1} \leq M_{2}\). Then monotonicity of the integral gives for \(x > 0\) that

$$ \mathcal{K}(M_{1})(x) = \int _{[x,\infty )} M_{1}(y)K(x,\mathrm{d}y) \leq \int _{[x,\infty )} M_{2}(y)K(x,\mathrm{d}y) = \mathcal{K}(M_{2})(x). $$

 □

Due to monotonicity of \(\mathcal{K}\), it is very useful to consider subsolutions and supersolutions to (4.1) on ℐ.

Definition 4.6

A function \(M \in \mathcal{I}\) is called a subsolution to (4.1) if

$$ M(x) \leq \int _{[x,\infty )} M(y)K(x,\mathrm{d}y),\qquad x>0. $$

It is called a supersolution to (4.1) if

$$ M(x) \geq \int _{[x,\infty )} M(y)K(x,\mathrm{d}y),\qquad x>0. $$

The following result shows that we can construct from each sub- or supersolution a solution to (4.1) by Picard iteration. To this end, for \(n \in \mathbb{N}_{0}\), define \(\mathcal{K}^{n}(M)\) recursively by \(\mathcal{K}^{0}(M) := M\) and \(\mathcal{K}^{n}(M) := \mathcal{K}(\mathcal{K}^{n-1}(M))\) for \(n \geq 1\).

Proposition 4.7

Let \(M \in \mathcal{I}\) be a sub- or supersolution to (4.1). Then the limit \(\mathcal{K}^{\infty}(M) = \lim _{n \to \infty} \mathcal{K}^{n}(M)\) exists and is a solution to (4.1). Moreover,

– if \(M\) is a subsolution, then the sequence \((\mathcal{K}^{n}(M))_{n \in \mathbb{N}_{0}}\) is nondecreasing and \(\mathcal{K}^{\infty}(M)\) is the smallest solution dominating \(M\);

– if \(M\) is a supersolution, then the sequence \((\mathcal{K}^{n}(M))_{n \in \mathbb{N}_{0}}\) is nonincreasing and \(\mathcal{K}^{\infty}(M)\) is the largest solution dominated by \(M\).

Proof

We only consider the case that \(M\) is a subsolution; the proof for the case that \(M\) is a supersolution is analogous. If \(M\) is a subsolution, then \(M \leq \mathcal{K}(M)\), and so monotonicity of \(\mathcal{K}\) yields that the sequence \((\mathcal{K}^{n}(M))_{n \in \mathbb{N}_{0}}\) is nondecreasing. Hence the limit \(\lim _{n \to \infty} \mathcal{K}^{n}(M)\) exists and is in ℐ since each \(\mathcal{K}^{n}(M)\) is in ℐ. Moreover, it follows from monotone convergence that

$$\begin{aligned} \mathcal{K}^{\infty}(M)(x) = \lim _{n \to \infty} \mathcal{K}^{n}(M)(x) &= \lim _{n \to \infty} \int _{[x,\infty )} \mathcal{K}^{n-1}(M)(y) K(x, \mathrm{d}y) \\ &= \int _{[x,\infty )} \mathcal{K}^{\infty}(M)(y) K(x,\mathrm{d}y), \qquad x > 0, \end{aligned}$$

whence \(\mathcal{K}^{\infty}(M)\) is a solution to (4.1).

Now let \(\tilde{M} \in \mathcal{I}\) be any solution to (4.1) dominating \(M\). It suffices to show that \(\tilde{M} \geq \mathcal{K}^{n}(M)\) for all \(n \in \mathbb{N}_{0}\). We argue by induction. The induction basis is trivial. For the induction step, suppose that \(n \geq 1\) and \(\tilde{M} \geq \mathcal{K}^{n-1}(M)\). Then by the induction hypothesis and the definition of \(\mathcal{K}^{n}(M)\), for \(x > 0\),

$$ \tilde{M}(x)=\int _{[x,\infty )}\tilde{M}(y)K(x,\mathrm{d}y)\geq \int _{[x,\infty )}\mathcal{K}^{n-1}(M)(y)K(x,\mathrm{d}y)= \mathcal{K}^{n}(M)(x). $$

 □

We note the following important corollary.

Corollary 4.8

The largest solution to (4.1) on ℐ is given by \(\mathcal{K}^{\infty}(\operatorname{id})\), where \(\operatorname{id}\) denotes the identity function.

It follows from Lemma 4.4 and Corollary 4.8 that the default function \(M_{S}\) is dominated by \(\mathcal{K}^{\infty}(\operatorname{id})\). Under a mild assumption on the kernel \(K\), we can assert that \(M_{S}\) coincides with \(\mathcal{K}^{\infty}(\operatorname{id})\). Thus in this case, we can characterise the default function \(M_{S}\) as the maximal solution to (4.1) dominated by the identity.
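For intuition, the iteration \(\mathcal{K}^{n}(\operatorname{id})\) from Corollary 4.8 can be carried out explicitly for a toy kernel. A minimal Python sketch (the binomial-type kernel below is our illustrative choice, not an example from the text): from state \(x\), the chain jumps to \(2x\) with probability \(1/3\) and to \(x/2\) with probability \(2/3\), which is a martingale kernel, and only the up-move contributes to the integral over \([x, \infty )\):

```python
# Picard iteration K^n(id) for a toy binomial-type martingale kernel
# (illustrative choice): from x, jump to 2x w.p. 1/3 and to x/2 w.p. 2/3,
# so E[S_1 | S_0 = x] = 2x/3 + x/3 = x.  Only 2x lies in [x, inf), hence
# K(M)(x) = M(2x)/3.

def K(M):
    # one application of the map K(M)(x) = int_{[x, inf)} M(y) K(x, dy)
    return lambda x: M(2.0 * x) / 3.0

M = lambda x: x          # start the Picard iteration from the identity
values = []
for n in range(1, 31):
    M = K(M)
    values.append(M(1.0))  # K^n(id)(1) = (2/3)^n, decreasing to 0
print(values[:5])
```

Here \(\mathcal{K}^{n}(\operatorname{id})(x) = (2/3)^{n} x \to 0\), so the maximal solution is \(M \equiv 0\): such binomial-type models with i.i.d. returns carry no bubble, in line with condition (I) from the introduction.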

Theorem 4.9

Suppose that Assumption 3.2 is satisfied for any \(x_{a} > 0\). Then \(M_{S}\) is the maximal solution to (4.1) dominated by the identity. It is given by \(M_{S} = \mathcal{K}^{\infty}(\operatorname{id})\). Moreover, \(M_{S}(x) < x\) for all \(x > 0\) and if \(M_{S} \not \equiv 0\), then

$$ \limsup _{x \to \infty} \frac{M_{S}(x)}{x} = 1. $$
(4.2)

Proof

We first show by induction that for each \(n \in \mathbb{N}_{0}\), we have

$$ \mathcal{K}^{n}(\operatorname{id})(x) = \mathbb{E}_{x}\big[S_{n} \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq S_{0}\}}\big], \qquad x > 0. $$

The induction basis \(n = 0\) follows from the martingale property of \(S\). For the induction step, let \(n \geq 1\). By the definition of \(\mathcal{K}^{n}(\operatorname{id})\) and the Markov property of \(S\), we obtain

$$\begin{aligned} \mathcal{K}^{n}(\operatorname{id})(x) &= \int _{[x,\infty )} \mathcal{K}^{n-1}(\operatorname{id})(y) K(x,\mathrm{d}y) = \mathbb{E}_{x}\big[\mathcal{K}^{n-1}(\operatorname{id})(S_{1}) \mathbf{1}_{\{S_{1} \geq x\}}\big] \\ &= \mathbb{E}_{x}\Big[\mathbb{E}_{S_{1}}\big[S_{n-1} \mathbf{1}_{\{S_{n-1} \geq S_{n-2} \geq \cdots \geq S_{1} \geq S_{0}\}}\big] \mathbf{1}_{\{S_{1} \geq x\}}\Big] \\ &= \mathbb{E}_{x}\big[S_{n} \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq S_{0}\}}\big]. \end{aligned}$$

Hence the definitions of \(\mathcal{K}^{\infty}(\operatorname{id})\) and \(M_{S}\) together with (3.3) give

$$\begin{aligned} \mathcal{K}^{\infty}(\operatorname{id})(x) &= \lim _{n \to \infty} \mathbb{E}_{x}\big[S_{n} \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq S_{0}\}}\big] \\ &= \lim _{n \to \infty} \mathbb{E}_{x}\big[(S_{n} - S_{\infty}) \mathbf{1}_{\{S_{n} \geq S_{n-1} \geq \cdots \geq S_{1} \geq S_{0}\}}\big] = M_{S}(x), \qquad x > 0. \end{aligned}$$

It follows from Corollary 4.8 that \(M_{S}\) is the maximal solution to (4.1) dominated by the identity. Moreover, \(M_{S} \leq \operatorname{id}\), the fact that \(\mathbb{P}_{x}[S_{1} < x] = a(x) > 0\) for all \(x > 0\) and the martingale property of \(S\) give

$$ M_{S}(x) = \mathbb{E}_{x}\big[M_{S}(S_{1}) \mathbf{1}_{\{S_{1} \geq x\}}\big] \leq \mathbb{E}_{x}\big[S_{1} \mathbf{1}_{\{S_{1} \geq x\}}\big] < \mathbb{E}_{x}[S_{1}] = x. $$

Finally, to establish (4.2), let us define the function \(H_{S}: (0, \infty ) \to [0, 1]\) by \(H_{S}(x) = \sup _{y \geq x} \frac{M_{S}(y)}{y}\). If \(M_{S} \not \equiv 0\), it follows from Proposition 4.2 that \(H_{S}(x) >0\) for all \(x > 0\). It suffices to show that \(H_{S}(x) = 1\) for all \(x > 0\). Seeking a contradiction, suppose there exists \(x' > 0\) such that \(H_{S}(x') < 1\). Define the function \(M: (0, \infty ) \to [0, \infty )\) by

$$ M(x) = \textstyle\begin{cases} 0 &\quad \text{if } x < x', \\ \frac{M_{S}(x)}{H_{S}(x')} & \quad \text{if } x \geq x'. \end{cases} $$

Then \(M \in \mathcal{I}\) by the fact that \(H_{S}(x') \geq \frac{M_{S}(x)}{x}\) for \(x \geq x'\). Moreover, \(M\) is a subsolution to (4.1), and since \(H_{S}(x') \in (0, 1)\), there is some \(x \geq x'\) with \(M(x) > M_{S}(x)\). This together with Proposition 4.7 implies that \(\mathcal{K}^{\infty}(M)\) is a solution to (4.1) dominated by the identity satisfying \(\mathcal{K}^{\infty}(M)(x) \geq M(x) > M_{S}(x)\) for some \(x \geq x'\), in contradiction to \(M_{S}\) being the maximal solution to (4.1) dominated by the identity. □

The following corollary shows that if we can find a non-trivial subsolution to (4.1), then \(S\) is a bubble.

Corollary 4.10

Suppose Assumption 3.2 is satisfied for any \(x_{a} > 0\). If \(M \in \mathcal{I}\) is a subsolution to (4.1), then \(M \leq M_{S}\). If in addition \(\liminf _{x \to \infty} M(x) > 0\), then \(S\) is a bubble under \(\mathbb{P}_{x}\) for all \(x > 0\) for which \(S\) is not \(\mathbb{P}_{x}\)-a.s. bounded.

Proof

Proposition 4.7 and Theorem 4.9 give \(M \leq \mathcal{K}^{\infty}(M) \leq M_{S}\). The additional claim then follows from Proposition 4.3. □

A typical candidate for a subsolution in Corollary 4.10 is given by the call function \(M(x) = (x -L)^{+}\) for some \(L > 0\). This is illustrated by the following example. Note that this example cannot be addressed with the results from Sect. 3.

Example 4.11

Suppose that \(K(x,\mathrm{d}y)=k(x,y) \,\mathrm{d}y\) for all \(x>0\), where the density \(k\) satisfies

$$ k(x,y)=\frac{2}{3(x+1)}\mathbf{1}_{[x, 2x]}(y)\qquad \text{for }y\geq x>0$$

and \(k(x, y)\) on \(\{(x,y):0< y< x\}\) is chosen such that \(K\) is a martingale kernel. Then the function \(a\) from (3.1) satisfies \(a(x) = 1 - \int _{[x, \infty )} k(x, y) \,\mathrm{d}y = \frac{3+x}{3 + 3x} \geq \frac{1}{3}\) so that Assumption 3.2 is satisfied for any \(x_{a} > 0\). Consider \(M(x):=(x-3)^{+}\). Then trivially \(\int _{x}^{\infty }M(y)k(x,y)\,\mathrm{d}y \geq 0 = M(x)\) for \(x \leq 3\) and

$$ \int _{x}^{\infty }M(y) k(x,y) \,\mathrm{d}y= x - 3\frac{x}{x+1} \geq x-3 = M(x) \qquad \text{for } x > 3. $$

It follows that \(M\) is a subsolution to (4.1), and since \(S\) is not \(\mathbb{P}_{x}\)-a.s. bounded for any \(x > 0\), we deduce from Corollary 4.10 that \(S\) is a bubble under \(\mathbb{P}_{x}\) for all \(x > 0\).
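The subsolution inequality in this example can also be confirmed by direct quadrature of the explicit upper part of the density. A small Python check (the grid of test points is our choice):

```python
# Numerical check of the subsolution property in Example 4.11: the part of the
# kernel on [x, 2x] has density 2/(3(x+1)), and we verify
#     int_x^{2x} (y-3)^+ k(x,y) dy  >=  (x-3)^+
# on a grid of x values, matching the closed form x - 3x/(x+1) for x > 3.
def upper_integral(x, n=10000):
    # midpoint rule for int_x^{2x} (y-3)^+ * 2/(3(x+1)) dy
    h = x / n
    total = 0.0
    for i in range(n):
        y = x + (i + 0.5) * h
        total += max(y - 3.0, 0.0) * 2.0 / (3.0 * (x + 1.0)) * h
    return total

for x in [0.5, 1.0, 2.9, 3.0, 3.1, 5.0, 10.0, 100.0]:
    lhs = upper_integral(x)
    rhs = max(x - 3.0, 0.0)
    assert lhs >= rhs - 1e-9, (x, lhs, rhs)
    if x > 3.0:
        # closed form from the text: x - 3x/(x+1)
        assert abs(lhs - (x - 3.0 * x / (x + 1.0))) < 1e-6, x
print("subsolution inequality verified on the grid")
```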

While Theorem 4.9 provides a characterisation of the default function \(M\), it does not provide a criterion to decide whether (4.1) has a non-trivial, i.e., a non-zero nonnegative solution dominated by the identity. Moreover, it does not provide a criterion to decide whether a given candidate solution \(M\) to (4.1) is indeed maximal. Under a stronger assumption on the kernel \(K\), we can provide a sufficient criterion for the existence of non-trivial solutions to (4.1) dominated by the identity. Moreover, we obtain a local uniqueness result in this case. To this end, recall the definitions of the functions \(a\) and \(b\) from (3.1) and (3.2), respectively. Moreover, denote by \(\Vert \cdot \Vert _{\sup}\) the supremum norm.

Theorem 4.12

Suppose \(\inf _{x > 0} a(x) > 0\). Then the following are equivalent:

(a) \(\sup _{x > 0} x b(x) < \infty \).

(b) For all \(L > 0\) sufficiently large, the call function \(M(x) = (x - L)^{+}\) is a subsolution to (4.1).

(c) The default function \(M_{S}\) is non-trivial and satisfies \(\Vert \operatorname{id}- M_{S}\Vert _{\sup} < \infty \).

Moreover, if one of the above conditions is satisfied, \(M_{S}\) is the unique solution to (4.1) among all solutions \(M \in \mathcal{I}\) satisfying \(\Vert \operatorname{id}- M \Vert _{\sup} < \infty \).

Proof

(a) ⇒ (b) Set \(\alpha := \inf _{x > 0} a(x) > 0\), \(\beta := \sup _{x > 0} x b(x)\) and \(L \geq \frac{\beta}{\alpha}\). Then the call function \(M(x) := (x - L)^{+}\) satisfies \(\int _{[x, \infty )} M(y)K(x, \mathrm{d}y)\geq 0 = M(x)\) for \(x \leq L\) and

$$\begin{aligned} \int _{[x, \infty )} M(y) K(x, \mathrm{d}y) &= x \big(1 - b(x)\big) - L\big(1- a(x)\big) \\ & = M(x) - b(x) x + a(x) L \\ &\geq M(x) -\beta + \alpha L \geq M(x), \qquad x > L. \end{aligned}$$

Hence \(M\) is a subsolution to (4.1).

(b) ⇒ (c) Let \(L > 0\) be such that \(M(x) = (x - L)^{+}\) is a subsolution to (4.1). Then we obtain \(M_{S} \geq M\) by Corollary 4.10, whence \(M_{S}\) is non-trivial and satisfies \(\Vert \operatorname{id}- M_{S}\Vert _{\sup} \leq \Vert \operatorname{id}- M\Vert _{ \sup} = L\).

(c) ⇒ (a) Since \(\mathcal{K}(\operatorname{id}) \geq \mathcal{K}^{\infty}(\operatorname{id}) = M_{S}\) by Proposition 4.7 and Theorem 4.9, it follows that \(\Vert \operatorname{id}- \mathcal{K}(\operatorname{id}) \Vert _{\sup} \leq \Vert \operatorname{id}- M_{S} \Vert _{\sup} < \infty \). Now the claim follows from the fact that \(x b(x) = \int _{(0, x)} y K(x, \mathrm{d}y) = x - \mathcal{K}( \operatorname{id})(x)= \operatorname{id}(x) - \mathcal{K}(\operatorname{id})(x)\) for \(x > 0\).

For the additional claim, set \(\mathcal{I}_{\sup} := \{M \in \mathcal{I}: \Vert \operatorname{id}- M \Vert _{ \sup}< \infty \}\). Then \(\mathcal{I}_{\sup}\) is a complete metric space for the metric generated by the supremum norm. Moreover, \(\mathcal{K}\) maps \(\mathcal{I}_{\sup}\) to itself. Indeed, for each \(M \in \mathcal{I}_{\sup}\), there exists by (b) some \(L \geq \Vert \operatorname{id}- M \Vert _{\sup}\) such that \(\tilde{M} (x) := (x-L)^{+}\) is a subsolution to (4.1). Hence by monotonicity of \(\mathcal{K}\) and the fact that \(\tilde{M} \leq M \) is a subsolution, we get

$$ \Vert \operatorname{id}- \mathcal{K}(M) \Vert _{\sup} \leq \Vert \operatorname{id}- \mathcal{K}(\tilde{M}) \Vert _{\sup} \leq \Vert \operatorname{id}- \tilde{M} \Vert _{\sup} = L < \infty . $$

Finally, we show that \(\mathcal{K}\) is a contraction on \(\mathcal{I}_{\sup}\). Let \(M_{1}, M_{2} \in \mathcal{I}_{\sup}\). Then

$$\begin{aligned} |\mathcal{K}(M_{1})(x) - \mathcal{K}(M_{2})(x)| &= \bigg\vert \int _{[x, \infty )} \big(M_{1}(y)- M_{2}(y)\big) K(x, \mathrm{d}y)\bigg\vert \\ &\leq \Vert M_{1} - M_{2} \Vert _{\sup} \int _{[x, \infty )} K(x, \mathrm{d}y) \\ &\leq (1 - \alpha )\Vert M_{1} - M_{2} \Vert _{\sup}. \end{aligned}$$

Taking the supremum over \(x\) shows that \(\mathcal{K}\) is indeed a contraction since \(\alpha > 0\). Now Banach’s fixed point theorem implies that \(\mathcal{K}\) has a unique fixed point, and by (c), this fixed point is \(M_{S}\). □

We proceed to illustrate Theorem 4.12 by an example.

Example 4.13

Suppose that \(K(x,\mathrm{d}y)=k(x,y) \,\mathrm{d}y\) for all \(x>0\), where the density satisfies

$$ k(x,y)=\frac{e}{2} \frac{1-e^{-x}}{1-e^{-y}} \frac{1}{x} e^{-y/x} \qquad \text{for }y\geq x>0$$

and \(k(x, y)\) on \(\{(x,y):0< y< x\}\) is chosen such that \(K\) is a martingale kernel. Note that

$$ \int _{x}^{\infty }k(x,y) \,\mathrm{d}y < \frac{e}{2} \int _{x}^{ \infty }\frac{1}{x} e^{-y/x} \,\mathrm{d}y = \frac{1}{2}. $$

This implies in particular that \(a(x) \geq 1/2\) for all \(x > 0\). In this case, the fixed point equation (4.1) is given by

$$ M(x) =\int _{x}^{\infty }M(y) \frac{e}{2} \frac{1-e^{-x}}{1-e^{-y}} \frac{1}{x} e^{-y/x} \,\mathrm{d}y, $$

which is equivalent to

$$ \frac{M(x)x}{1-e^{-x}} =\frac{e}{2}\int _{x}^{\infty } \frac{M(y)}{1-e^{-y}} e^{-y/x} \,\mathrm{d}y. $$

As one easily checks, the function \(M_{\lambda}(x) := \lambda x(1-e^{-x})\), \(x > 0\), is a solution in ℐ to (4.1) for any \(\lambda \in [0, 1]\). As \(M_{S}\) is the largest solution to (4.1) dominated by the identity, this yields the candidate \(M_{1}(x) =x(1-e^{-x})\). But \(\Vert \operatorname{id}- M_{1} \Vert _{\sup} = e^{-1} < \infty \), and so it follows from Theorem 4.12 that \(M_{S} = M_{1}\).
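The fixed point property of \(M_{\lambda}\) can also be confirmed numerically. A short Python check (truncation point and grid size are our choices): for \(M_{1}(y) = y(1-e^{-y})\), the factor \(1-e^{-y}\) cancels against the density, so the integrand reduces to a constant times \(y e^{-y/x}\):

```python
import math

# Numerical check for Example 4.13: M_1(x) = x(1 - e^{-x}) satisfies
#   M(x) = int_x^inf M(y) k(x,y) dy,  k(x,y) = (e/2)(1-e^{-x})/(1-e^{-y}) e^{-y/x}/x.
def rhs(x, lam=1.0, n=100000, cutoff=50.0):
    # midpoint rule on [x, cutoff*x]; the integrand decays like e^{-y/x},
    # so the truncation error is negligible for cutoff = 50
    a, b = x, cutoff * x
    h = (b - a) / n
    c = lam * math.e / 2.0 * (1.0 - math.exp(-x)) / x
    total = 0.0
    for i in range(n):
        y = a + (i + 0.5) * h
        total += c * y * math.exp(-y / x) * h  # M_lam(y)*k(x,y), (1-e^{-y}) cancelled
    return total

for x in [0.5, 1.0, 2.0, 5.0]:
    m = x * (1.0 - math.exp(-x))   # candidate M_1(x)
    assert abs(rhs(x) - m) < 1e-3 * max(m, 1.0), (x, rhs(x), m)
print("fixed point equation verified numerically")
```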

Combining Theorem 4.12 with Proposition 4.3, we get the following existence result for bubbles. Note that this result covers cases that cannot be treated with the theory of Sect. 3.

Corollary 4.14

Suppose that \(\inf _{x > 0} a(x) > 0\) and \(\sup _{x > 0} x b(x) < \infty \). Then \(S\) is a bubble under \(\mathbb{P}_{x}\) for all \(x > 0\) for which \(S\) is not \(\mathbb{P}_{x}\)-a.s. bounded.

5 Relation to the strict local martingale definition of asset price bubbles in continuous-time models

In this final section, we discuss how our definition of bubbles in discrete time relates to the strict local martingale definition of bubbles in continuous time. To approach this question, one first has to discretise a positive continuous local martingale \(X = (X_{t})_{t \geq 0}\) in continuous time in such a way that it becomes a discrete-time martingale. Of course, there are many ways to do this, and we choose a somewhat canonical construction. More precisely, we consider localising sequences \((\tau _{n})_{n \in \mathbb{N}}\) of stopping times with \(\tau _{n}\to \infty \) ℙ-a.s. such that for each \(n\), both \(\tau _{n}\) and the stopped process \(X^{\tau _{n}}\) are uniformly bounded. We then define the discrete-time process \(S = (S_{n})_{n \in \mathbb{N}_{0}}\) by \(S_{n} := X_{\tau _{n}}\). Then \(S\) is a martingale by the optional stopping theorem and satisfies \(S_{\infty }= X_{\infty}\) ℙ-a.s., which implies that \(S\) is uniformly integrable if and only if \(X\) is uniformly integrable.

The simplest way to get localising sequences as above is to choose two increasing sequences of positive real numbers \(a = (a_{n})_{n \in \mathbb{N}}\) and \(b = (b_{n})_{n \in \mathbb{N}}\) converging to infinity and to define the sequence \((\tau ^{a,b}_{n})_{n \in \mathbb{N}_{0}}\) of stopping times by \(\tau ^{a,b}_{0}:=0\) and

$$ \tau ^{a,b}_{n} := \inf \{t \geq 0 : X_{t} \geq b_{n}\} \wedge a_{n}, \qquad n \in \mathbb{N}. $$
(5.1)

Then \((\tau ^{a,b}_{n})_{n \in \mathbb{N}}\) is a localising sequence of stopping times for \(X\) with \(\tau ^{a, b}_{n} \leq a_{n}\) and \(\sup _{t \geq 0} X^{\tau ^{a, b}_{n}}_{t} \leq b_{n}\), by continuity of \(X\).

In the special case that \(X\) is a Markov process, we would like to stop in such a way that the discrete-time process \(S\) is again a Markov process. In this case, the simplest way to get localising sequences as above is to choose two constants \(\alpha , \beta > 0\) and to define the sequence of stopping times \((\tau ^{\alpha ,\beta}_{n})_{n \in \mathbb{N}_{0}}\) by \(\tau ^{\alpha ,\beta}_{0}:=0\) and

$$ \tau ^{\alpha ,\beta}_{n} := \inf \big\{t \geq \tau ^{\alpha ,\beta}_{n-1} : X_{t} \geq (1+\beta ) X_{\tau ^{\alpha ,\beta}_{n-1}}\big\} \wedge (\tau ^{\alpha ,\beta}_{n-1} + \alpha ), \qquad n \in \mathbb{N}. $$
(5.2)

In this case, it is still true that \((\tau ^{\alpha ,\beta}_{n})_{n \in \mathbb{N}}\) is a localising sequence of stopping times for \(X\) and that \(\tau ^{\alpha , \beta}_{n}\) and \(X^{\tau ^{\alpha , \beta}_{n}}\) are uniformly bounded.

Our first goal in this section is to show that if \(X\) is a continuous positive strict local martingale, then the discrete-time process \(S\) is a bubble for either choice of stopping times above. The proof of this result relies on the following deep characterisation of strict local martingales in continuous time; cf. Meyer [13], Delbaen and Schachermayer [5] and Kardaras et al. [11]. Let \(X = (X_{t})_{t \geq 0}\) be a positive càdlàg local ℙ-martingale with \(X_{0} = x\). Then under some technical assumptions on the probability space and the underlying filtration, there exists a probability measure ℚ with \(\mathbb{P}|_{\mathcal{F}_{t}} \ll \mathbb{Q}|_{\mathcal{F}_{t}}\) for all \(t\geq 0\) such that \(Y:=1/X\) is a nonnegative true ℚ-martingale, and for all bounded stopping times \(\tau \) and all \(A\in \mathcal{F}_{\tau}\),

$$ \mathbb{P}[A] = x\,\mathbb{E}_{\mathbb{Q}}[Y_{\tau} \mathbf{1}_{A}]. $$

In particular, we have the identity

$$ \mathbb{E}_{\mathbb{P}}[X_{0} - X_{t}] = x\,\mathbb{Q}[Y_{t} = 0], \qquad t \geq 0, $$

i.e., \(X\) is a strict local martingale on \([0,t]\) if and only if \(\mathbb{Q}[Y_{t} = 0] > 0\).

With this, we have the following two results.

Proposition 5.1

Let \(X=(X_{t})_{t\geq 0}\) be a continuous positive strict local ℙ-martingale. Let \(a = (a_{n})_{n \in \mathbb{N}}\) and \(b = (b_{n})_{n \in \mathbb{N}}\) be increasing sequences of positive real numbers converging to \(\infty \). Define the sequence of stopping times \((\tau ^{a,b}_{n})_{n \in \mathbb{N}}\) by (5.1) and set \(S^{a,b}_{n} = X_{\tau ^{a,b}_{n}}\) for \(n \in \mathbb{N}_{0}\). Then the measure ℙ is a bubble measure for the discrete-time martingale \(S^{a,b} = (S^{a,b}_{n})_{n \in \mathbb{N}_{0}}\).

Proposition 5.2

Let \(X=(X_{t})_{t\geq 0}\) be a continuous positive strict local Markov martingale under the measure \(\mathbb{P}_{x}\). Let \(\alpha , \beta > 0\), define the sequence of stopping times \((\tau ^{\alpha ,\beta}_{n})_{n \in \mathbb{N}}\) by (5.2) and set \(S^{\alpha ,\beta}_{n} = X_{\tau ^{\alpha ,\beta}_{n}}\) for \(n \in \mathbb{N}_{0}\). Then the measure \(\mathbb{P}_{x}\) is a bubble measure for the discrete-time Markov martingale \(S^{\alpha ,\beta} = (S^{\alpha ,\beta}_{n})_{n \in \mathbb{N}_{0}}\).

We only establish the proof of Proposition 5.1. The proof of Proposition 5.2 is similar and left to the reader.

Proof of Proposition 5.1

Since \(X\) is a local martingale with respect to its natural filtration and \((\tau ^{a,b}_{n})_{n \in \mathbb{N}}\) is adapted to this filtration, we may assume without loss of generality that \(X\) is the canonical process on \(C([0, \infty ); (0, \infty ])\) with \(X_{0} = x > 0\). Set \(\tau _{\infty }:= \inf \{t \geq 0 : X_{t} = \infty \}\). Then \(\tau ^{a, b}_{n} < \tau _{\infty}\) for all \(n \in \mathbb{N}\).

By the above results, there exists a measure ℚ on \(C([0, \infty );(0, \infty ])\) with \(\mathbb{P}|_{\mathcal{F}_{t}} \ll \mathbb{Q}|_{\mathcal{F}_{t}}\) for all \(t\geq 0\) and such that \(Y := 1/X\) is a nonnegative ℚ-martingale and \(\mathbb{E}_{\mathbb{P}}[X_{\tau} \mathbf{1}_{A}] = x\,\mathbb{Q}[A]\) for each bounded stopping time \(\tau < \tau _{\infty}\) and each \(A \in \mathcal{F}_{\tau}\). Let \(k := \min \{n \in \mathbb{N} : \mathbb{E}_{\mathbb{P}}[X_{0} - X_{a_{n}}] > 0\}\). Then for \(n\geq k\), using that \(\tau ^{a,b}_{n}\) is bounded by \(a_{n}\) and \(\tau ^{a,b}_{n} < \tau _{\infty}\), we obtain

$$\begin{aligned} \mathbb{E}_{\mathbb{P}}\big[S^{a,b}_{n} \mathbf{1}_{\{S^{a,b}_{k} \leq S^{a,b}_{k+1} \leq \cdots \leq S^{a,b}_{n}\}}\big] &= \mathbb{E}_{\mathbb{P}}\big[X_{\tau ^{a,b}_{n}} \mathbf{1}_{\{X_{\tau ^{a,b}_{k}} \leq X_{\tau ^{a,b}_{k+1}} \leq \cdots \leq X_{\tau ^{a,b}_{n}}\}}\big] = x\,\mathbb{Q}\big[Y_{\tau ^{a,b}_{n}} \leq Y_{\tau ^{a,b}_{n-1}} \leq \cdots \leq Y_{\tau ^{a,b}_{k}}\big] \\ &\geq x\,\mathbb{Q}\Big[\underline{Y}_{a_{n}} < \tfrac{1}{b_{n}}, \ldots , \underline{Y}_{a_{k+1}} < \tfrac{1}{b_{k+1}}\Big] \geq x\,\mathbb{Q}\big[\underline{Y}_{a_{k+1}} = 0\big] = \mathbb{E}_{\mathbb{P}}[X_{0} - X_{a_{k+1}}] > 0. \end{aligned}$$

Taking the limit as \(n\to \infty \) on the left-hand side, it follows that ℙ is a bubble measure for \(S^{a,b}\) by Theorem 2.4. □

We proceed to illustrate Proposition 5.2 by an example.

Example 5.3

Let \((X_{t})_{t\geq 0}\) be the three-dimensional inverse Bessel process, i.e., \(X\) is the unique strong solution to the SDE \(\mathrm{d}X_{t}=-X_{t}^{2} \,\mathrm{d}B_{t}\), where \(B = (B_{t})_{t \geq 0}\) is a \(\mathbb{P}_{x}\)-Brownian motion. The process \(Y:=1/X\) is then a \(\mathbb{Q}_{1/x}\)-Brownian motion stopped when it reaches zero.

Fix \(\alpha , \beta > 0\). We proceed to calculate the Markov kernel \(K(x, \,\mathrm{d}y)\) for \(S^{\alpha , \beta}\) under P x . Denote by \(W\) a standard Brownian motion starting at zero and by \(\Phi \) the distribution function of a standard normal random variable. Using the reflection principle for Brownian motion and denoting the running supremum and the running infimum of a process \(Z\) by \(\overline{Z}\) and \(\underline{Z}\), respectively, we obtain

$$\begin{aligned} \mathbb{P}_{x}\big[S^{\alpha ,\beta}_{1} = (1+\beta )x\big] &= \mathbb{P}_{x}\big[X_{\tau ^{\alpha ,\beta}_{1}} = (1+\beta )x\big] = x\,\mathbb{E}_{\mathbb{Q}_{1/x}}\Big[Y_{\tau ^{\alpha ,\beta}_{1}} \mathbf{1}_{\{Y_{\tau ^{\alpha ,\beta}_{1}} = \frac{1}{(1+\beta )x}\}}\Big] \\ &= \frac{1}{1+\beta}\,\mathbb{Q}_{1/x}\Big[\underline{Y}_{\alpha} \leq \frac{1}{(1+\beta )x}\Big] = \frac{1}{1+\beta}\,\mathbb{Q}\Big[\underline{W}_{\alpha} \leq \frac{1}{(1+\beta )x} - \frac{1}{x}\Big] \\ &= \frac{2}{1+\beta}\,\Phi \Big(-\frac{\beta}{(1+\beta )x\sqrt{\alpha}}\Big). \end{aligned}$$

Moreover, for \(z \in (0, (1+\beta )x)\), using the joint density of Brownian motion and its running supremum, we obtain

$$\begin{aligned} \mathbb{P}_{x}\big[S^{\alpha ,\beta}_{1} \leq z\big] &= \mathbb{P}_{x}\big[X_{\tau ^{\alpha ,\beta}_{1}} \leq z\big] = \mathbb{P}_{x}\big[X_{\alpha} \leq z,\ \overline{X}_{\alpha} < (1+\beta )x\big] \\ &= x\,\mathbb{E}_{\mathbb{Q}_{1/x}}\big[Y_{\alpha} \mathbf{1}_{\{Y_{\alpha} \geq \frac{1}{z}\}} \mathbf{1}_{\{\underline{Y}_{\alpha} > \frac{1}{(1+\beta )x}\}}\big] = x\,\mathbb{E}_{\mathbb{Q}}\big[(\tfrac{1}{x} - W_{\alpha}) \mathbf{1}_{\{W_{\alpha} \leq \frac{1}{x} - \frac{1}{z}\}} \mathbf{1}_{\{\overline{W}_{\alpha} < \frac{1}{x} - \frac{1}{(1+\beta )x}\}}\big] \\ &= x \int _{-\infty}^{\frac{1}{x} - \frac{1}{z}} \Big(\frac{1}{x} - y\Big) \int _{0}^{\frac{\beta}{(1+\beta )x}} \frac{2(2u - y)}{\sqrt{2\pi \alpha ^{3}}} \exp \Big(-\frac{(2u - y)^{2}}{2\alpha}\Big) \,\mathrm{d}u \,\mathrm{d}y \\ &= x \int _{-\infty}^{\frac{1}{x} - \frac{1}{z}} \Big(\frac{1}{x} - y\Big) \int _{-\frac{y}{\sqrt{\alpha}}}^{\frac{1}{\sqrt{\alpha}}(\frac{2\beta}{(1+\beta )x} - y)} \frac{v}{\sqrt{2\pi \alpha}} \exp \Big(-\frac{v^{2}}{2}\Big) \,\mathrm{d}v \,\mathrm{d}y \\ &= x \int _{-\infty}^{\frac{1}{x} - \frac{1}{z}} \frac{1}{\sqrt{2\pi \alpha}} \Big(\frac{1}{x} - y\Big) \bigg(\exp \Big(-\frac{y^{2}}{2\alpha}\Big) - \exp \Big(-\frac{(\frac{2\beta}{(1+\beta )x} - y)^{2}}{2\alpha}\Big)\bigg) \,\mathrm{d}y \\ &= x \int _{0}^{z} \frac{1}{\sqrt{2\pi \alpha}\, w^{3}} \bigg(\exp \Big(-\frac{(\frac{1}{x} - \frac{1}{w})^{2}}{2\alpha}\Big) - \exp \Big(-\frac{(\frac{2\beta}{(1+\beta )x} - \frac{1}{x} + \frac{1}{w})^{2}}{2\alpha}\Big)\bigg) \,\mathrm{d}w. \end{aligned}$$

Thus the Markov kernel \(K(x, \mathrm{d}y)\) for \(S^{\alpha , \beta}\) under P x is given by

$$\begin{aligned} K(x, \mathrm{d}y) &= \frac{2}{1+\beta} \Phi \left (- \frac{\beta}{(1+\beta ) x\sqrt{\alpha}} \right ) \delta _{(1+\beta ) x}( \mathrm{d}y) \\ & \phantom{=:} + \frac{x}{\sqrt{2\pi \alpha} y^{3}} \bigg(e^{- \frac{(\frac{1}{y} - \frac{1}{x})^{2}}{2 \alpha}}-e^{- \frac{(\frac{1}{y} +\frac{\beta -1}{(1+\beta )x} )^{2}}{2 \alpha} } \bigg) \mathbf{1}_{(0, (1+\beta ) x)}(y) \,\mathrm{d}y. \end{aligned}$$

In this way, we have constructed a somewhat natural Markov bubble in discrete time.
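The atom of this kernel can be sanity-checked by simulation. A rough Monte Carlo sketch in Python (the parameters \(\alpha = \beta = x = 1\), path count and step count are our choices; discrete monitoring of the barrier biases the estimate slightly downwards): under \(\mathbb{P}_{x}\), \(X\) has the law of \(1/\Vert B\Vert \) for a three-dimensional Brownian motion \(B\) started at distance \(1/x\) from the origin, and \(\{S^{\alpha ,\beta}_{1} = (1+\beta )x\}\) is the event that \(\Vert B\Vert \) reaches \(1/((1+\beta )x)\) before time \(\alpha \):

```python
import math, random

# Monte Carlo sanity check of the atom of K(x, dy) in Example 5.3
# (illustrative parameters alpha = beta = x = 1): estimate
#   P_x[S_1 = (1+beta) x] = P[||B|| hits 1/((1+beta)x) before time alpha]
# and compare with the closed form 2/(1+beta) * Phi(-beta/((1+beta) x sqrt(alpha))).
def Phi(z):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hit_probability(x=1.0, alpha=1.0, beta=1.0, n_paths=2000, n_steps=500, seed=7):
    rng = random.Random(seed)
    barrier = 1.0 / ((1.0 + beta) * x)
    sq = math.sqrt(alpha / n_steps)          # std dev of one Brownian increment
    hits = 0
    for _ in range(n_paths):
        p = [1.0 / x, 0.0, 0.0]              # start at radius ||B_0|| = 1/x
        for _ in range(n_steps):
            p = [c + sq * rng.gauss(0.0, 1.0) for c in p]
            if p[0] * p[0] + p[1] * p[1] + p[2] * p[2] <= barrier * barrier:
                hits += 1
                break
    return hits / n_paths

est = hit_probability()
exact = (2.0 / 2.0) * Phi(-1.0 / 2.0)        # closed form, here Phi(-1/2)
print(est, exact)
```

For \(\alpha = \beta = x = 1\), the closed-form value is \(\Phi (-1/2) \approx 0.3085\); the crude estimate should land within a few percentage points of this, slightly below it due to the discretely monitored barrier.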

We finish this section by providing a converse to Proposition 5.1.

Theorem 5.4

Let \(X=(X_{t})_{t\geq 0}\) be a continuous positive local ℙ-martingale that ℙ-a.s. never becomes constant, i.e., \(\mathbb{P}[X_{t} = X_{s} \textit{ for all } t \geq s] = 0\) for all \(s \geq 0\). Then \(X\) is a strict local ℙ-martingale if and only if for all increasing sequences \(a = (a_{n})_{n \in \mathbb{N}}\) and \(b = (b_{n})_{n \in \mathbb{N}}\) of positive real numbers converging to infinity, the measure ℙ is a bubble measure for the discrete-time martingale \(S^{a,b} = (S^{a,b}_{n})_{n \in \mathbb{N}_{0}}\).

Proof

If \(X\) is a strict local martingale, the result follows from Proposition 5.1. Conversely, suppose \(X\) is a true ℙ-martingale. As in the proof of Proposition 5.1, we assume without loss of generality that \(X\) is the canonical process on \(C([0, \infty );(0, \infty ])\) with \(X_{0} = 1\). Then there exists a measure ℚ on \(C([0, \infty );(0, \infty ])\) with \(\mathbb{P}|_{\mathcal{F}_{t}} \ll \mathbb{Q}|_{\mathcal{F}_{t}}\) for all \(t\geq 0\) and such that \(Y := 1/X\) is a positive ℚ-martingale that converges to 0 ℚ-almost surely and \(\mathbb{E}_{\mathbb{P}}[X_{\tau} \mathbf{1}_{A}] = \mathbb{Q}[A]\) for each bounded stopping time \(\tau < \tau _{\infty}\) and each \(A \in \mathcal{F}_{\tau}\). By Proposition A.1, there exists an increasing sequence \(a = (a_{n})_{n \in \mathbb{N}}\) converging to infinity such that for each \(k \in \mathbb{N}\),

$$ \mathbb{Q}[Y_{a_{k}} \geq Y_{a_{k+1}} \geq \cdots ] = 0. $$
(5.3)

Since \(Y\) is positive ℚ-a.s., we can find an increasing sequence \(b = (b_{n})_{n \in \mathbb{N}}\) converging to infinity such that \(\mathbb{Q}[\underline{Y}_{a_{n}} \leq 1/b_{n}] < 2^{-n}\) for all \(n \in \mathbb{N}\). By the Borel–Cantelli lemma, this implies that

$$ \mathbb{Q}\Big[\underline{Y}_{a_{k}} \leq \frac{1}{b_{k}} \text{ for infinitely many } k\Big] = 0. $$
(5.4)

Then for any \(k \in \mathbb{N}\), using (5.3) and (5.4) and recalling that each \(\tau ^{a,b}_{n}\) is bounded by \(a_{n}\), we obtain

$$\begin{aligned} \lim _{n \to \infty} \mathbb{E}_{\mathbb{P}}\big[S^{a,b}_{n} \mathbf{1}_{\{S^{a,b}_{k} \leq S^{a,b}_{k+1} \leq \cdots \leq S^{a,b}_{n}\}}\big] &= \lim _{n \to \infty} \mathbb{Q}\big[Y_{\tau ^{a,b}_{n}} \leq Y_{\tau ^{a,b}_{n-1}} \leq \cdots \leq Y_{\tau ^{a,b}_{k}}\big] = \mathbb{Q}\big[Y_{\tau ^{a,b}_{k}} \geq Y_{\tau ^{a,b}_{k+1}} \geq \cdots \big] \\ &\leq \mathbb{Q}\Big[\underline{Y}_{a_{\ell}} \leq \tfrac{1}{b_{\ell}} \text{ for infinitely many } \ell \Big] + \mathbb{Q}\big[Y_{a_{\ell}} \leq Y_{a_{\ell -1}} \text{ eventually}\big] = 0. \end{aligned}$$

By Theorem 2.4, this shows that ℙ is not a bubble measure for \(S^{a,b}\). □