
Finance and Stochastics, Volume 19, Issue 3, pp 617–651

Forward equations for option prices in semimartingale models

  • Amel Bentata
  • Rama Cont

Abstract

We derive a forward partial integro-differential equation for prices of call options in a model where the dynamics of the underlying asset under the pricing measure is described by a—possibly discontinuous—semimartingale. This result generalizes Dupire’s forward equation to a large class of non-Markovian models with jumps.

Keywords

Forward equation · Dupire equation · Jump process · Semimartingale · Tanaka–Meyer formula · Markovian projection · Call option · Option pricing

Mathematics Subject Classification

60H30 · 91G20 · 35S10 · 91G80

JEL Classification

C60 · G13

1 Introduction

Since the seminal work of Black, Scholes, and Merton [7, 31], partial differential equations (PDEs) have been used as a way of characterizing and efficiently computing option prices. In the Black–Scholes–Merton model and various extensions of this model that retain the Markov property of the risk factors, option prices can be characterized in terms of solutions to a backward PDE whose variables are time (to maturity) and the value of the underlying asset. The use of backward PDEs for option pricing has been extended to cover options with path-dependent and early exercise features, as well as to multifactor models (see e.g. [1]). When the underlying assets exhibit jumps, option prices can be computed by solving an analogous partial integro-differential equation (PIDE) [2, 14].

A second important step was taken by Dupire [15, 16, 18], who showed that when the underlying asset is assumed to follow a diffusion process
$$dS_{t}= S_{t} \sigma(t,S_{t}) dW_{t}, $$
prices of call options (at a given date \(t_{0}\)) solve the forward PDE
$$\frac{\partial C_{t_{0}}}{\partial T}(T,K) = -r(T)K\frac{\partial C_{t_{0}}}{\partial K}(T,K)+\frac{K^{2}\sigma (T,K)^{2}}{2}\, \frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,K) $$
on \([t_{0},\infty)\times \mathbb{R}_{++}\) in the strike and maturity variables, with the initial condition
$$\forall K>0,\quad C_{t_{0}}(t_{0},K)= (S_{t_{0}}-K)^{+}. $$
This forward equation allows one to price call options with various strikes and maturities on the same underlying asset by solving a single partial differential equation. Dupire’s forward equation also provides useful insights into the inverse problem of calibrating diffusion models to observed call and put option prices [6].
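As a purely illustrative numerical sketch (ours, not part of the original article), the following explicit finite-difference scheme solves Dupire's forward PDE with zero interest rate and checks the result against the Black–Scholes formula in the constant-volatility case; the grid sizes, the local-volatility function, and all parameter values are arbitrary choices.

```python
import numpy as np
from math import erf, log, sqrt

def bs_call(S0, K, sigma, T):
    """Black–Scholes call price with zero rates (reference solution)."""
    d1 = (log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * N(d1) - K * N(d2)

def dupire_forward(S0, sigma_loc, T, K_max=3.0, nK=300, nT=1000):
    """Explicit finite differences for dC/dT = (1/2) K^2 sigma(T,K)^2 d2C/dK2,
    with initial condition C(0, K) = (S0 - K)^+ (zero discount rate)."""
    K = np.linspace(0.0, K_max, nK + 1)
    dK = K[1] - K[0]
    dt = T / nT                      # chosen small enough for stability
    C = np.maximum(S0 - K, 0.0)      # payoff at T = t0
    for n in range(nT):
        C_KK = np.zeros_like(C)
        C_KK[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK**2
        C = C + dt * 0.5 * K**2 * sigma_loc(n * dt, K)**2 * C_KK
        C[0], C[-1] = S0, 0.0        # boundary values for r = 0
    return K, C
```

A single sweep of this scheme produces call prices for every strike on the grid at once; for constant volatility the value at the money agrees with the Black–Scholes price up to the discretization error of the grid.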
Given the theoretical and computational usefulness of the forward equation, there have been various attempts to extend Dupire’s forward equation to other types of options and processes, most notably to Markov processes with jumps [2, 9, 10, 12, 27]. Most of these constructions use the Markov property of the underlying process in a crucial way (see however [28]). As noted by Dupire [17], the forward PDE holds in a more general context than the backward PDE: even if the (risk-neutral) dynamics of the underlying asset is not Markovian but is described by a continuous Brownian martingale
$$dS_{t}= S_{t}\delta_{t} dW_{t}, $$
then call options still satisfy a forward PDE where the diffusion coefficient is given by the local (or effective) volatility function
$$\sigma(t,S)=\sqrt{\mathbb{E}[\delta_{t}^{2} |S_{t}=S]}. $$
This method is linked to the “Markovian projection” problem, that is, the construction of a Markov process that mimics the marginal distributions of a martingale [5, 24, 30]. Such “mimicking processes” provide a method to extend the Dupire equation to non-Markovian settings.
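The conditional expectation defining the effective volatility can be estimated directly by simulation. The following toy sketch (ours; the two-point volatility mixture and all numerical choices are assumptions made purely for illustration) draws a path-wise constant volatility δ independent of W, so that S is a non-Markovian lognormal mixture, and recovers σ(t,S) near S = S₀ by binning the simulated paths:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, S0 = 200_000, 1.0, 1.0

# Toy non-Markovian model: delta is drawn once per path (independent of W),
# so S_t = S0 * exp(-delta^2 t / 2 + delta W_t) is a lognormal mixture.
delta = rng.choice([0.1, 0.3], size=n)
W = np.sqrt(t) * rng.standard_normal(n)
S = S0 * np.exp(-0.5 * delta**2 * t + delta * W)

# Markovian projection: sigma(t, S)^2 = E[delta_t^2 | S_t = S],
# estimated here by averaging delta^2 over paths with S_t near S0.
near = np.abs(np.log(S / S0)) < 0.025
sigma_loc = np.sqrt(np.mean(delta[near] ** 2))
```

Plugging such a binned estimate into a local volatility model is precisely the idea behind the mimicking results cited above: by construction, the resulting Markov process matches the marginal law of S.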

We show in this work that the forward equation for call prices holds in a more general setting where the dynamics of the underlying asset is described by a—possibly discontinuous—semimartingale. Our parameterization of the price dynamics is general, allows stochastic volatility, and does not assume jumps to be independent or driven by a Lévy process, although it includes these cases. Also, our derivation does not require ellipticity or nondegeneracy of the diffusion coefficient. The result is thus applicable to various stochastic volatility models with jumps, pure jump models, and point process models used in equity and credit risk modeling.

Our result extends the forward equation from the original diffusion setting of Dupire [16] to various examples of non-Markovian and/or discontinuous processes and implies previous derivations of forward equations [2, 9, 10, 12, 16, 17, 27, 29] as particular cases. Section 3 gives examples of forward PIDEs obtained in various settings: time-changed Lévy processes, local Lévy models, and point processes used in portfolio default risk modeling. In the case where the underlying risk factor follows an Itô process or a Markovian jump-diffusion driven by a Lévy process, we retrieve previously known forms of the forward equation; our approach then provides a rigorous derivation of these results, under precise assumptions, in a unified framework. In some cases, such as index options (Sect. 3.5) or CDO expected tranche notionals (Sect. 3.6), our method leads to a new, more general form of the forward equation valid for a larger class of models than previously studied [3, 12, 36].

The forward equation for call options is a PIDE in one (spatial) dimension, regardless of the number of factors driving the underlying asset. It may thus be used as a method for reducing the dimension of the problem. The case of index options (Sect. 3.5) in a multivariate jump-diffusion model illustrates how the forward equation projects a high-dimensional pricing problem into a one-dimensional state equation.

2 Forward PIDEs for call options

2.1 General formulation of the forward equation

Consider a (strictly positive) semimartingale \(S\) whose dynamics under the pricing measure ℙ is given by
$$ S_{T} = S_{0} +\int_{0}^{T} r(t) S_{t-}\, dt + \int_{0}^{T} S_{t-}\delta_{t}\, dW_{t} + \int_{0}^{T}\int_{-\infty}^{+\infty} S_{t-}(e^{y} - 1) \tilde{M}(dt\,dy), $$
(2.1)
where \(r(t)>0\) represents a (deterministic) bounded discount rate, \(\delta_{t}\) the (random) volatility, \(M\) is an integer-valued random measure with compensator
$$\mu(dt\,dy; \omega)=m(t,dy,\omega)\,dt, $$
representing jumps in the log-price, and \(\tilde{M}=M-\mu\) is the compensated random measure associated to \(M\) (see [13] for further background). Both the volatility \(\delta_{t}\) and \(m(t,dy)\), which represents the intensity of jumps of size \(y\) at time \(t\), are allowed to be stochastic. In particular, we do not assume the jumps to be driven by a Lévy process or a process with independent increments. The specification (2.1) thus includes most stochastic volatility models with jumps.
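For concreteness, here is a small Monte Carlo sketch of (2.1) (our own illustration; the constant volatility and Merton-style compound Poisson jumps with Gaussian sizes are assumptions, not a model singled out by the paper). The Euler scheme compensates the jumps, so the martingale property of the discounted price can be checked by verifying that the discounted mean stays close to the initial value:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 100_000, 200, 1.0
dt = T / n_steps
r, delta = 0.03, 0.2                    # discount rate, (constant) volatility
lam, m, s = 0.5, -0.1, 0.15             # jump intensity; jump sizes y ~ N(m, s^2)
kappa = np.exp(m + 0.5 * s**2) - 1.0    # E[e^y - 1], mean relative jump size

S = np.full(n_paths, 1.0)               # S_0 = 1
for _ in range(n_steps):
    Z = rng.standard_normal(n_paths)
    N = rng.poisson(lam * dt, n_paths)          # jumps in this time step
    Y = rng.normal(m * N, s * np.sqrt(N))       # summed log-jump sizes (0 if N == 0)
    # Euler step for (2.1): drift r dt, compensator drift -lam*kappa*dt
    S *= (1.0 + (r - lam * kappa) * dt + delta * np.sqrt(dt) * Z) * np.exp(Y)

disc_mean = np.exp(-r * T) * S.mean()   # should be close to S_0 = 1
```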

We assume the following conditions:

Assumption 2.1

(Full support)

\(\forall t\geq0\), \(\mathrm{supp}(S_{t})={\mathbb{R}_{+}}\).

Assumption 2.2

(Integrability condition)

$$\forall T>0, \quad \mathbb{E}\left[\exp{\left(\frac{1}{2}\int_{0}^{T}\delta_{t}^{2}\,dt + \int_{0}^{T} dt \int_{\mathbb{R}} (e^{y}-1)^{2} m(t,dy)\right)}\right]< \infty . $$
The value \(C_{t_{0}}(T,K)\) at \(t_{0}\) of a call option with expiry \(T>t_{0}\) and strike \(K>0\) is given by
$$ C_{t_{0}}(T,K)=e^{-\int_{t_{0}}^{T} r(t)\,dt}\mathbb{E}[\max(S_{T}-K,0)|\mathcal {F}_{t_{0}}]. $$
(2.2)
As argued in Sect. 2.2, under Assumption 2.2, the expectation in (2.2) is finite. Our main result is the following:

Theorem 2.3

(Forward PIDE for call options)

Let \(\psi_{t}\) be the exponential double tail of the compensator \(m(t,dy)\),
$$ \psi_{t}(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{-\infty}^{x} m(t,du), & z< 0,\\ \int_{z}^{+\infty} dx\ e^{x} \int_{x}^{\infty} m(t,du), & z>0, \end{cases} $$
and let \(\sigma:[t_{0},T]\times \mathbb{R}_{++}\to\mathbb{R}_{+}\) and \(\chi :[t_{0},T]\times \mathbb{R}_{++}\to\mathbb{R}_{+}\) be measurable functions such that for all \(t\in[t_{0},T]\),
$$ \textstyle\begin{cases} \sigma(t,S_{t-}) =\sqrt{\mathbb{E}[\delta_{t}^{2}|S_{t-}]}, \\ \chi_{t,S_{t-}}(z) =\mathbb{E} [\psi_{t} (z ) |S_{t-} ]\qquad a.s. \end{cases} $$
(2.3)
Under Assumption  2.2, the call option price \((T,K)\mapsto C_{t_{0}}(T,K)\), as a function of maturity and strike, is a solution (in the sense of distributions) of the partial integro-differential equation
$$\begin{aligned} \frac{\partial C_{t_{0}}}{\partial T}(T,K) = &-r(T)K\frac{\partial C_{t_{0}}}{\partial K}(T,K)+\frac {K^{2}\sigma(T,K)^{2}}{2}\, \frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,K) \\ &{} +\int_{0}^{+\infty} y\frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,dy)\, \chi_{T,y} \bigg(\ln{ \frac{K}{y }}\bigg) \end{aligned}$$
(2.4)
on \([t_{0},\infty)\times \mathbb{R}_{++}\) with the initial condition \(C_{t_{0}}(t_{0},K)= (S_{t_{0}}-K)^{+}\) for all \(K>0\).
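To illustrate the quantities entering the theorem (an example of ours, chosen only so that the integrals are elementary; the examples treated in the paper are given in Sect. 3), suppose the compensator has a double-exponential (Kou-type) density
$$m(t,dy)=\lambda\big(p\,\alpha^{+}e^{-\alpha^{+}y}1_{\{y>0\}}+(1-p)\,\alpha^{-}e^{\alpha^{-}y}1_{\{y<0\}}\big)\,dy, \qquad \alpha^{+}>2,\ \alpha^{-}>0, $$
where \(\alpha^{+}>2\) ensures that the jump integral in Assumption 2.2 can be finite. The inner tails are \(\int_{x}^{\infty}m(t,du)=\lambda p\,e^{-\alpha^{+}x}\) for \(x>0\) and \(\int_{-\infty}^{x}m(t,du)=\lambda(1-p)\,e^{\alpha^{-}x}\) for \(x<0\), so the exponential double tail is
$$\psi_{t}(z)= \textstyle\begin{cases} \dfrac{\lambda(1-p)}{1+\alpha^{-}}\,e^{(1+\alpha^{-})z}, & z<0,\\ \dfrac{\lambda p}{\alpha^{+}-1}\,e^{-(\alpha^{+}-1)z}, & z>0. \end{cases} $$
Since \(m\) is deterministic here, \(\chi_{t,y}=\psi_{t}\) in (2.3).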

Remark 2.4

Recall that \(f: [t_{0},\infty)\times \mathbb{R}_{++}\to\mathbb{R}\) is a solution of (2.4) in the sense of distributions on \([t_{0},\infty)\times \mathbb{R}_{++}\) if for any test function \(\varphi\in {\mathcal{C}}_{0}^{\infty}([t_{0},\infty)\times \mathbb{R}_{++},\mathbb{R})\) and for any \(T\geq t_{0}\),
$$\begin{aligned} &\int_{t_{0}}^{T}dt\,\int_{0}^{\infty}dK\, \varphi(t,K) \bigg( -\frac{\partial f}{\partial t}(t,K) -r(t)K\frac{\partial f}{\partial K}(t,K)+ \frac{K^{2}\sigma(t,K)^{2}}{2}\, \frac{\partial^{2} f}{\partial K^{2}}(t,K) \\ &\phantom{\int_{t_{0}}^{T}dt\,\int_{0}^{\infty}dK\, \varphi(t,K) \bigg(}{}+\int_{0}^{+\infty} y\,\frac{\partial^{2} f}{\partial K^{2}}(t,dy)\,\chi_{t,y}\Big(\ln{ \frac{K}{y} }\Big)\bigg) =0, \end{aligned}$$
where \({\mathcal{C}}_{0}^{\infty}([t_{0},\infty)\times \mathbb{R}_{++},\mathbb{R})\) is the set of infinitely differentiable functions with compact support in \([t_{0},\infty)\times \mathbb{R}_{++}\). This notion of generalized solution allows one to separate the discussion of existence of solutions from the discussion of their regularity (which may be delicate; see [14]).

Remark 2.5

The discounted asset price
$$\hat{S}_{T}= e^{-\int_{0}^{T} r(t)dt}\,S_{T} $$
is the stochastic exponential of the martingale \(U\) defined by
$$U_{T}=\int_{0}^{T} \delta_{t}\,dW_{t}+\int_{0}^{T}\int(e^{y}-1)\tilde{M}(dt\,dy). $$
Under Assumption 2.2, we have
$$\forall T>0,\quad\mathbb{E}\bigg[\exp{\bigg(\frac{1}{2}\langle U, U\rangle_{T}^{c} + \langle U,U\rangle_{T}^{d}\bigg)}\bigg]< \infty, $$
where \(\langle U,U\rangle^{c}\) and \(\langle U,U\rangle^{d}\) denote the predictable quadratic variations of the continuous and purely discontinuous martingale parts of \(U\). So [33, Theorem 9] implies that \((\hat{S}_{T})\) is a ℙ-martingale.

The form of the integral term in (2.4) may seem different from the integral term appearing in backward PIDEs [14, 26]. The following lemma expresses \(\chi_{T,y}(z)\) in a more familiar form in terms of call payoffs.

Lemma 2.6

Let \(n(t,dz,y)\,dt\,dy\) be a measure on \([0,T]\times\mathbb{R}\times\mathbb{R}_{+}\) satisfying
$$\forall t\in[0,T], \forall y \in\mathbb{R}_{+}, \quad\int_{-\infty }^{\infty}(e^{z}\wedge|z|^{2}) n(t,dz,y) < \infty. $$
Then the exponential double tail \(\chi_{t,y}(z)\) of \(n\), defined as
$$ \chi_{t,y}(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{-\infty}^{x} n(t,du,y), & z< 0,\\ \int_{z}^{+\infty} dx\ e^{x} \int_{x}^{\infty} n(t,du,y), & z>0,\\ \end{cases} $$
satisfies
$$\int_{\mathbb{R}}\big((ye^{z}-K)^{+}-e^{z}(y-K)^{+}-K(e^{z}-1)1_{ \{y>K\}}\big)n(t,dz,y)=y\,\chi_{t,y}\bigg(\ln{ \frac{K}{y} }\bigg). $$

Proof

Let \(K,T >0\). Then
$$\begin{aligned} &\int_{\mathbb{R}}\big((ye^{z}-K)^{+}-e^{z}(y-K)^{+}-K(e^{z}-1)1_{ \{y>K\}}\big)n(t,dz,y) \\ &= \int_{\mathbb{R}} \big((ye^{z}-K)1_{ \{z>\ln{ \frac{K}{y} }\} }-e^{z}(y-K)1_{\{ y>K\}}-K(e^{z}-1)1_{ \{y>K\}}\big)n(t,dz,y) \\ &= \int_{\mathbb{R}} \big((ye^{z}-K)1_{ \{z>\ln{ \frac{K}{y} }\} }+(K-ye^{z})1_{\{ y>K\}}\big)n(t,dz,y). \end{aligned}$$
If \(K\geq y\), then
$$\begin{aligned} &\int_{\mathbb{R}} 1_{ \{K\geq y\}}\big((ye^{z}-K)1_{\{ z>\ln{ \frac {K}{y} }\}}+(K-ye^{z})1_{ \{y>K\}}\big)n(t,dz,y) \\ &=\int_{\ln{ \frac{K}{y} }}^{+\infty} y\big(e^{z}-e^{\ln{ \frac{K}{y} }}\big)\,n(t,dz,y). \end{aligned}$$
If \(K< y\), then
$$\begin{aligned} &\int_{\mathbb{R}} 1_{ \{K< y\}}\big((ye^{z}-K)1_{ \{z>\ln{ \frac{K}{y} }\}}+(K-ye^{z})1_{ \{y>K \}}\big)n(t,dz,y) \\ &=\int_{\ln{ \frac{K}{y} }}^{+\infty}\big((ye^{z}-K)+(K-ye^{z})\big)n(t,dz,y)+\int_{-\infty}^{\ln{ \frac{K}{y} }} (K-ye^{z} )n(t,dz,y) \\ &=\int_{-\infty}^{\ln{ \frac{K}{y} }}y\big(e^{\ln{ \frac{K}{y} }}-e^{z}\big)n(t,dz,y). \end{aligned}$$
Using integration by parts, we can equivalently express \(\chi_{t,y}\) as
$$ \chi_{t,y}(z)= \textstyle\begin{cases} \int_{-\infty}^{z} (e^{z}-e^{u})\,n(t,du,y), & z< 0,\\ \int_{z}^{\infty} (e^{u}-e^{z})\,n(t,du,y), & z>0. \end{cases} $$
Hence,
$$\int_{\mathbb{R}}\big((ye^{z}-K)^{+}-e^{z}(y-K)^{+}-K(e^{z}-1)1_{\{ y>K\}}\big)n(t,dz,y) =y\,\chi_{t,y}\bigg(\ln{ \frac{K}{y} }\bigg). $$
 □
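The identity of Lemma 2.6 is also easy to check numerically. In the sketch below (ours; the Gaussian density standing in for \(n(t,dz,y)\) is an arbitrary choice that satisfies the integrability condition of the lemma), both sides are evaluated by quadrature for one case with \(y>K\) and one with \(y<K\):

```python
import numpy as np

def trap(f, x):
    """Trapezoidal quadrature on a grid (kept explicit for portability)."""
    return float((0.5 * (f[1:] + f[:-1]) * np.diff(x)).sum())

z = np.linspace(-4.0, 4.0, 200_001)      # quadrature grid for the jump variable
s = 0.25
dens = np.exp(-z**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))  # stand-in for n(t,dz,y)

def lhs(y, K):
    # left-hand side of the lemma: integral of the payoff combination against n
    f = (np.maximum(y * np.exp(z) - K, 0.0)
         - np.exp(z) * max(y - K, 0.0)
         - K * (np.exp(z) - 1.0) * float(y > K))
    return trap(f * dens, z)

def chi(u):
    # exponential double tail of dens, in the integrated form of the lemma
    if u < 0:
        mask = z <= u
        return trap((np.exp(u) - np.exp(z[mask])) * dens[mask], z[mask])
    mask = z >= u
    return trap((np.exp(z[mask]) - np.exp(u)) * dens[mask], z[mask])
```

Both sides then agree up to quadrature error, i.e. `lhs(y, K)` matches `y * chi(log(K / y))`.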

2.2 Derivation of the forward equation

In this section, we present a proof of Theorem 2.3 under Assumption 2.2, using the Tanaka–Meyer formula for semimartingales [25, Theorem 9.43].

Proof of Theorem 2.3

We first note that by replacing ℙ by the conditional measure \({\mathbb{P}}[\cdot|\mathcal{F}_{t_{0}}]\) given \(\mathcal{F}_{t_{0}}\), we may replace the conditional expectation in (2.2) by the expectation with respect to the marginal distribution \(p^{S}_{T}(dy)\) of \(S_{T}\) under \({\mathbb{P}}[\cdot|\mathcal{F}_{t_{0}}]\). Thus, without loss of generality, we set \(t_{0}=0\) in the sequel and consider the case where \({\mathcal{F}_{0}}\) is the \(\sigma\)-algebra generated by all ℙ-nullsets, and we write \(C_{0}(T,K)=: C(T,K)\) for simplicity. We can express (2.2) as
$$ C(T,K)=e^{-\int_{0}^{T} r(t)\,dt}\int_{\mathbb{R}_{+}} \left(y-K\right)^{+}\, p^{S}_{T}(dy). $$
By differentiating with respect to \(K\) we obtain
$$ \begin{aligned} &\frac{\partial C}{\partial K}(T,K)=-e^{-\int_{0}^{T} r(t)\,dt}\int _{K}^{\infty}p^{S}_{T}(dy)=- e^{-\int_{0}^{T} r(t)\,dt}\mathbb{E}\left[1_{ \{ S_{T}>K\}}\right],\\ &\frac{\partial^{2} C}{\partial K^{2}}(T,dy)=e^{-\int_{0}^{T} r(t)\,dt}p^{S}_{T}(dy). \end{aligned} $$
(2.5)
Let \(L^{K} =L^{K} (S)\) be the semimartingale local time of \(S\) at \(K\) under ℙ (see [25, Chap. 9] or [34, Chap. IV] for definitions). Applying the Tanaka–Meyer formula, we have
$$\begin{aligned} (S_{T}-K)^{+} =&(S_{0}-K)^{+} +\int_{0}^{T} 1_{ \{S_{t-}> K\}} dS_{t} + \frac {1}{2} L^{K}_{T} \\ &{}+ \sum_{0< t\leq T} \left( (S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{ \{S_{t-}> K\} }\Delta S_{t}\right). \end{aligned}$$
(2.6)
As noted in Remark 2.5, Assumption 2.2 implies that the discounted price \(\hat{S}_{t}=e^{-\int_{0}^{t} r(s)\,ds}S_{t}=\mathcal{E}(U)_{t}\) is a martingale under ℙ. So (2.1) can be rewritten as
$$dS_{t}=e^{\int_{0}^{t} r(s)\,ds}\big(r(t)S_{t-} dt+ d\hat{S}_{t}\big) $$
and
$$\int_{0}^{T} 1_{ \{S_{t-}> K\}} dS_{t} = \int_{0}^{T}e^{\int_{0}^{t} r(s)\,ds}\, 1_{ \{S_{t-}> K\}} d\hat{S}_{t}+ \int_{0}^{T} e^{\int_{0}^{t} r(s)\,ds}\,r(t) S_{t-} 1_{ \{S_{t-}> K\}} dt, $$
where the first term is a martingale. Taking expectations, we obtain
$$\begin{aligned} & e^{\int_{0}^{T}r(t)\,dt}C(T,K)-(S_{0}-K)^{+} \\ &=\mathbb{E}\left[\int_{0}^{T} e^{\int_{0}^{t} r(s)\,ds}\,r(t) S_{t-} \, 1_{\{ S_{t-}> K\}} dt + \frac{1}{2}L^{K}_{T}\right] \\ &\phantom{=}{}+ \mathbb{E}\bigg[ \sum_{0< t\leq T} \left ((S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{\{ S_{t-}> K\}}\Delta S_{t}\right)\bigg]. \end{aligned}$$
Noting that \(S_{t-} 1_{ \{S_{t-}> K\}}= (S_{t-}-K)^{+} + K1_{ \{S_{t-}> K\}} \) and using Fubini’s theorem and (2.5), we obtain
$$\begin{aligned} &\mathbb{E}\left[ \int_{0}^{T} e^{\int_{0}^{t} r(s)\,ds}\,r(t) S_{t-} 1_{ \{ S_{t-}> K\}} dt\right]\\ & =\int_{0}^{T} r(t)e^{\int_{0}^{t}r(s)\,ds}\left(C(t,K)- K \frac{\partial C}{\partial K}(t,K)\right)dt. \end{aligned}$$
As for the jump term, we have
$$\begin{aligned} &\mathbb{E}\bigg[\sum_{0< t\leq T}\big( (S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{ \{S_{t-}> K\}} \Delta S_{t}\big)\bigg] \\ &= \mathbb{E}\left[\int_{0}^{T}dt\int m(t,dx)\,\big((S_{t-}e^{x}-K)^{+}-(S_{t-}-K)^{+}-1_{\{ S_{t-}> K\}}S_{t-}(e^{x}-1)\big)\right ] \\ &= \mathbb{E}\bigg[\int_{0}^{T} dt \int m(t,dx)\big( (S_{t-}e^{x}-K)^{+}-(S_{t-}-K)^{+} \\ &\quad{}-(S_{t-}-K)^{+}(e^{x}-1) - K1_{\{ S_{t-}> K\}}(e^{x}-1)\big)\bigg] . \end{aligned}$$
Applying Lemma 2.6 to the random measure \(m\), we obtain that
$$\begin{aligned} &\int m(t,dx)\big((S_{t-}e^{x}-K)^{+}-e^{x}(S_{t-}-K)^{+}-K1_{\{ S_{t-}> K\} }(e^{x}-1)\big)\\ & =S_{t-}\,\psi_{t}\bigg(\ln \frac{K}{S_{t-}} \bigg). \end{aligned}$$
By the integrated form of the exponential double tail (as in Lemma 2.6), we have, for all \(z\) in ℝ,
$$\psi_{t}(z) \leq 1_{\{z< 0\}}\,e^{z}\int_{-\infty}^{z} m(t,du) +1_{\{z>0\}} \int_{z}^{+\infty} e^{u}\,m(t,du), $$
and both terms on the right-hand side are finite under Assumption 2.2.
Using Assumption 2.2, we have
$$\begin{aligned} & \mathbb{E}\Bigg[ \sum_{0< t\leq T} \left ((S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{ \{S_{t-}> K\}}\Delta S_{t}\right)\Bigg]\\ & =\mathbb{E}\left[\int_{0}^{T} dt\,S_{t-}\,\psi_{t}\bigg(\ln \frac {K}{S_{t-}} \bigg)\right]< \infty. \end{aligned}$$
Hence, applying Fubini’s theorem leads to
$$\begin{aligned} &\mathbb{E}\Bigg[\sum_{0< t\leq T} \big((S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{ \{S_{t-}> K\}} \Delta S_{t}\big)\Bigg]\\ &=\int_{0}^{T}dt\mathbb{E}\bigg[\int\,m(t,dx)\big((S_{t-}e^{x}-K)^{+} -e^{x}(S_{t-}-K)^{+}-K1_{ \{S_{t-}> K\}}(e^{x}-1)\big)\bigg]\\ &= \int_{0}^{T}dt\,\mathbb{E}\bigg[S_{t-}\,\psi_{t}\bigg(\ln \frac {K}{S_{t-}} \bigg)\bigg] \\ &= \int_{0}^{T}dt\,\mathbb{E}\left[S_{t-}\mathbb{E}\Big[\psi_{t}\Big(\ln \frac{K}{S_{t-}} \Big)\Big|S_{t-}\Big]\right] \\ &=\int_{0}^{T}dt\,\mathbb{E}\bigg[S_{t-}\,\chi_{t,S_{t-}}\bigg(\ln \frac {K}{S_{t-}} \bigg)\bigg]. \end{aligned}$$
Let \(\varphi\in{\mathcal{C}}_{0}^{\infty}([0,T]\times \mathbb{R}_{++})\). The extended occupation time formula [35, Chap. VI, Exercise 1.15] yields
$$ \int_{0}^{+\infty} dK \int_{0}^{T}\,\varphi(t,K)\, dL^{K}_{t} =\int _{0}^{T}\varphi(t,S_{t-}) d[S]_{t}^{c}= \int_{0}^{T} dt\,\varphi (t,S_{t-})S_{t-}^{2}\delta_{t}^{2} . $$
(2.7)
Since \(\varphi\) is bounded and has compact support, in order to apply Fubini’s theorem to
$$ \mathbb{E}\left[\int_{0}^{+\infty} dK \int_{0}^{T}\,\varphi(t,K)\, dL^{K}_{t}\right], $$
it is sufficient to show that \(\mathbb{E} [L^{K}_{t} ]<\infty\) for \(t\in[0,T]\). Rewriting (2.6) yields
$$\begin{aligned} \frac{1}{2} L^{K}_{T} =& (S_{T}-K)^{+}-(S_{0}-K)^{+}-\int_{0}^{T} 1_{ \{S_{t-}> K\}} dS_{t}\\ &{}- \sum_{0< t\leq T} \left( (S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{ \{S_{t-}> K\} }\Delta S_{t}\right). \end{aligned}$$
Since \(\hat{S}\) is a martingale, we have \(\mathbb{E}[S_{T}]<\infty\); moreover, \(\mathbb{E} [(S_{T}-K)^{+} ] \leq\mathbb{E} [S_{T} ]\) and \(\mathbb{E} [\int_{0}^{T} 1_{ \{S_{t-}> K\}} dS_{t} ]<\infty\). As discussed above,
$$\mathbb{E}\Bigg[ \sum_{0< t\leq T} \left((S_{t}-K)^{+}-(S_{t-}-K)^{+}-1_{ \{ S_{t-}> K\}}\Delta S_{t}\right)\Bigg]< \infty, $$
yielding that \(\mathbb{E} [L^{K}_{T} ]<\infty\). Hence, we may take expectations in (2.7) to obtain
$$\begin{aligned} &\mathbb{E}\left[\int_{0}^{+\infty} dK \int_{0}^{T}\,\varphi(t,K)\, dL^{K}_{t}\right]= \mathbb{E}\left[ \int_{0}^{T}\varphi (t,S_{t-})S_{t-}^{2}\delta_{t}^{2} dt\right]\\ &= \int_{0}^{T}dt\,\mathbb{E}\big[\varphi(t,S_{t-})S_{t-}^{2}\delta _{t}^{2}\big] = \int_{0}^{T}dt\,\mathbb{E}\left[\mathbb{E}\big[\varphi (t,S_{t-})S_{t-}^{2}\delta_{t}^{2} \big|S_{t-}\big] \right] \\ &= \mathbb{E}\left[\int_{0}^{T}dt\,\varphi(t,S_{t-})S_{t-}^{2}\sigma (t,S_{t-})^{2} \right] \\ &= \int_{0}^{\infty}\int_{0}^{T} \varphi(t,K)K^{2}\sigma(t,K)^{2} p_{t}^{S}(dK)\, dt\\ &= \int_{0}^{T} dt\,e^{\int_{0}^{t} r(s)\,ds}\int_{0}^{\infty}\varphi(t,K) K^{2}\sigma(t,K)^{2} \frac{\partial^{2} C}{\partial K^{2}}(t,dK), \end{aligned}$$
where the last line is obtained by using (2.5). Using integration by parts, we have
$$\begin{aligned} & \int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\varphi(t,K)\,\frac{\partial }{\partial t}\left(e^{\int_{0}^{t} r(s)\,ds}\,C(t,K)-(S_{0}-K)^{+}\right)\\ &= \int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\varphi(t,K)\,\frac{\partial }{\partial t}\left(e^{\int_{0}^{t} r(s)\,ds}\,C(t,K)\right)\\ &= \int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\varphi(t,K)\, e^{\int_{0}^{t} r(s)\,ds}\left(\frac{\partial C}{\partial t}(t,K)+r(t)C(t,K)\right)\\ &= -\int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\frac{\partial\varphi }{\partial t} (t,K)\,\left(e^{\int_{0}^{t} r(s)\,ds}\,C(t,K)\right), \end{aligned}$$
where derivatives are used in the sense of distributions. Gathering together all terms, we have
$$\begin{aligned} &\int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\frac{\partial\varphi}{\partial t}(t,K)\,\left(e^{\int_{0}^{t} r(s)\,ds}\,C(t,K)\right)\\ &=\int_{0}^{T}dt\int_{0}^{\infty}dK\,\frac{\partial\varphi}{\partial t}(t,K)\,\int_{0}^{t}\,ds\,r(s)\,e^{\int_{0}^{s}r(u)\,du}\bigg(C(s,K)- K \frac {\partial C}{\partial K}(s,K)\bigg) \\ &\quad{} +\int_{0}^{T} dt\int_{0}^{\infty}dK\,\frac{1}{2}\,\frac{\partial\varphi }{\partial t}(t,K)\,\mathbb{E}\big[L_{t}^{K}\big]\\ &\quad{} +\int_{0}^{T}dt \int_{0}^{\infty}dK\,\frac{\partial\varphi }{\partial t}(t,K)\int_{0}^{t} ds\,e^{\int_{0}^{s} r(u)\,du}\,\int_{0}^{+\infty } y\frac{\partial^{2} C}{\partial K^{2}}(s,dy)\,\chi_{s,y}\bigg(\ln{ \frac {K}{y} }\bigg)\\ &=-\int_{0}^{T}dt\int_{0}^{\infty}dK\,\varphi(t,K)\,r(t)\,e^{\int _{0}^{t}r(s)\,ds}\bigg(C(t,K)- K \frac{\partial C}{\partial K}(t,K)\bigg) \\ &\quad{} -\frac{1}{2}\int_{0}^{\infty}dK\,\mathbb{E}\bigg[\int_{0}^{T} \varphi(t,K)\, dL_{t}^{K}\bigg]\\ &\quad{} -\int_{0}^{T}dt \int_{0}^{\infty}dK\,\varphi(t,K)\,e^{\int_{0}^{t} r(s)\,ds}\,\int_{0}^{+\infty} y\frac{\partial^{2} C}{\partial K^{2}}(t,dy)\,\chi _{t,y}\bigg(\ln{ \frac{K}{y} }\bigg). \end{aligned}$$
So finally we have shown that for any test function \(\varphi\in {\mathcal{C}}_{0}^{\infty}([0,T]\times \mathbb{R}_{++},\mathbb{R} )\),
$$\begin{aligned} &\int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\frac{\partial\varphi}{\partial t}(t,K)\,\left(e^{\int_{0}^{t} r(s)\,ds}\,C(t,K)\right)\\ &= -\int_{0}^{\infty}dK\, \int_{0}^{T} dt\,\varphi(t,K)\, e^{\int_{0}^{t} r(s)\,ds}\left(\frac{\partial C}{\partial t}(t,K)+r(t)C(t,K)\right)\\ &= -\int_{0}^{T}dt\int_{0}^{\infty}dK\,\varphi(t,K)\,r(t)\,e^{\int _{0}^{t}r(s)\,ds}\bigg(C(t,K)- K \frac{\partial C}{\partial K}(t,K)\bigg) \\ &\quad{} -\int_{0}^{T} dt\int_{0}^{\infty}\frac{1}{2}\,e^{\int_{0}^{t}r(s)\, ds}\,\varphi(t,K)\,K^{2}\sigma(t,K)^{2} \frac{\partial^{2} C}{\partial K^{2}}(t,dK) \\ &\quad{} -\int_{0}^{T}dt \int_{0}^{\infty}dK\,\varphi(t,K)\,e^{\int_{0}^{t} r(s)\,ds}\,\int_{0}^{+\infty} y\frac{\partial^{2} C}{\partial K^{2}}(t,dy)\chi _{t,y}\bigg(\ln{ \frac{K}{y} }\bigg). \end{aligned}$$
Therefore, \(C(\cdot,\cdot)\) is a solution of (2.4) in the sense of distributions. □

2.3 Uniqueness of solutions to the forward PIDE

Theorem 2.3 shows that the call price \((T,K)\mapsto C_{t_{0}}(T,K)\) solves the forward PIDE (2.4). The uniqueness of the solution of such a PIDE has been shown using analytical methods [4, 22] under various types of conditions on the coefficients. We give below a direct proof of uniqueness for (2.4) using a probabilistic method, under explicit conditions that cover most examples of models used in finance.

Define, for \(u\in\mathbb{R}\), \(t\geq0\), \(z> 0\), the measure \(n(t,du,z)\) by
$$\begin{aligned} n\big(t,[u,\infty),z\big) =&-e^{-u}\,\frac{\partial}{\partial u} \chi _{t,z}(u) ,\quad u>0,\\ n\big(t,(-\infty,u],z\big) =&e^{-u}\frac{\partial}{\partial u} \chi _{t,z}(u) ,\quad u< 0. \end{aligned}$$
Let \(\mathcal{C}_{0}([0,\infty)\times\mathbb{R}_{+})\) be the set of continuous functions on \([0,\infty)\times\mathbb{R}_{+}\) vanishing at infinity, equipped with the supremum norm, and let \(\mathcal{C}^{1}_{0}(\mathbb{R}_{+})\) be the set of continuously differentiable functions with compact support on \(\mathbb{R}_{+}\). Similarly, \(\mathcal{C}_{0}(\mathbb{R}\setminus\{0\},\mathbb{R}_{+})\) denotes the space of positive continuous functions on \(\mathbb{R}\setminus\{0\}\) vanishing at infinity, and \(\mathcal{C}_{b}(X)\) denotes, as usual, the space of bounded continuous functions on \(X\).

Throughout this section, we make the following assumption.

Assumption 2.7

$$ \forall B\in\mathcal{B}(\mathbb{R}\setminus\{0\}),\quad (t,z)\mapsto\sigma(t,z),\quad(t,z)\mapsto n(t,B,z) $$
are continuous in \(z\in\mathbb{R}_{+}\), uniformly in \(t\in[0,T]\), continuous in \(t\) on \([0,T)\), uniformly in \(z\in\mathbb{R}_{+}\), and
$$ \exists K_{T}>0, \forall(t,z)\in[0,T]\times\mathbb{R}_{+},\ |\sigma(t,z)|+\int_{\mathbb{R}} (1\wedge|u|^{2})\,n(t,du,z)\leq K_{T}. $$

Theorem 2.8

Under Assumption  2.7, if
$$\begin{aligned} {\textit{either}} & \ \mathrm{(i)} \quad\forall R>0, \forall t\in[0,T), \quad\inf_{0\leq z \leq R}\sigma(t,z)>0, \\ {\textit{or}} & \mathrm{(ii)}\quad\sigma(t,z)\equiv0 \ {\textit{and}}\ \exists \beta\in(0,2),\, \exists C>0, \forall R>0, \forall(t,z)\in[0,T)\times [0,R], \\ & \ \,\forall f\in{\mathcal{C}}_{0}(\mathbb{R}\setminus\{0\},\mathbb {R}_{+}),\qquad\int\left(n(t,du,z)- 1_{\{|u|\leq1\}} \frac{C\,du}{|u|^{1+\beta}}\right)\,f(u)\geq0, \\ & \ \,\exists K'_{T,R}>0, \int_{\{|u|\leq1\}}|u|^{\beta}\,\left (n(t,du,z)-\frac{C\,du}{|u|^{1+\beta}}\right) \leq K'_{T,R}, \\ {\textit{and}} & \ \mathrm{(iii)}\quad\lim_{R\to\infty} \int_{0}^{T} \sup_{z\in \mathbb{R}_{+}} n\left(t,\{|u|\geq R\},z\right)\,dt=0, \end{aligned}$$
then the call option price \((T,K)\mapsto C_{t_{0}}(T,K)\), as a function of maturity and strike, is the unique solution (in the sense of distributions) of the partial integro-differential equation (2.4) on \([t_{0},\infty)\times \mathbb{R}_{++}\) such that \(C_{t_{0}}(\cdot,\cdot)\in\mathcal{C}_{0}([t_{0},\infty)\times\mathbb{R}_{+})\) and \(C_{t_{0}}(t_{0},K)= (S_{t_{0}}-K)^{+}\) for all \(K>0\).

The proof uses the uniqueness of the solution to the forward Kolmogorov equation associated with a certain integro-differential operator. Statements similar to the next result can likely be found in the literature on PIDEs, but we could not locate one in the form needed here; for completeness, we therefore provide a proof.

Proposition 2.9

Define, for \(t\in[0,\infty)\) and \(f\in{\mathcal{C}}^{\infty}_{0}(\mathbb{R})\), the integro-differential operator
$$\begin{aligned} L_{t}f(x) = & r(t)xf'(x)+\frac{x^{2}\sigma(t,x)^{2}}{2}f''(x)\\ &{}+\int_{\mathbb{R}}\left(f(xe^{y})-f(x)-x(e^{y}-1) f'(x)\right)n(t,dy,x). \end{aligned}$$
Under Assumption  2.7, if the conditions (either (i) or (ii)) and (iii) of Theorem  2.8 hold, then for each \(x_{0}\) in \(\mathbb{R}_{+}\), there exists a unique family \((p_{t}(x_{0},dy),t\geq0)\) of bounded measures such that \(t\mapsto p_{t}(x_{0},\cdot)\) is weakly continuous and
$$\begin{aligned} \forall t\geq0,\forall g\in \mathcal{C}_{0}^{\infty}(\mathbb{R}),\ &\int_{\mathbb{R}} p_{t}(x_{0},dy)g(y)=g(x_{0})+\int_{0}^{t}\int _{\mathbb{R}} p_{s}(x_{0},dy)\,L_{s}g(y)\,ds, \\ & p_{0}(x_{0},\cdot)=\epsilon_{x_{0}}, \end{aligned}$$
(2.8)
where \(\epsilon_{x_{0}}\) is the point mass at \(x_{0}\). Furthermore, \(p_{t}(x_{0},\cdot)\) is a probability measure on  \({\mathbb{R}_{+}}\).

Proof

For any \(T>0\), denote by \((X_{t})_{t\in[0,T]}\) the canonical process on \(D([0,T],\mathbb{R}_{+})\). Under ((i) or (ii)) and (iii), the martingale problem for \(((L_{t})_{t\in[0,T]},{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+}))\) on \([0,T]\) is well posed in the following sense [32, Theorem 1]: For any \(x_{0}\in\mathbb{R}_{+}\), \(t_{0}\in[0,T)\), there exists a unique probability measure \(\mathbb{Q}_{t_{0},x_{0}}\) on \((D([0,T],\mathbb{R}_{+}),\mathcal{B}_{T})\) with \(\mathbb{Q}_{t_{0}, x_{0}}[X_{u}=x_{0},\,0\leq u\leq t_{0}]=1\), and such that for any \(f\in{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\),
$$f(X_{t})-f(x_{0})-\int_{t_{0}}^{t} L_{s}f(X_{s})\,ds $$
is a \((\mathbb{Q}_{t_{0},x_{0}},(\mathcal{B}_{t})_{t\geq t_{0}} )\)-martingale on \([t_{0},T]\). Under \(\mathbb{Q}_{t_{0},x_{0}}\), \(X\) is a Markov process. Define the evolution operator \((Q_{t_{0},t})_{t\in[t_{0},T]}\) on \(\mathcal{C}_{0}(\mathbb{R}_{+})\) by
$$ \forall f\in{\mathcal{C}}_{0}(\mathbb{R}_{+}),\quad Q_{t_{0},t} f(x_{0})=\mathbb{E}^{\mathbb{Q}_{t_{0},x_{0}}}\left[f(X_{t})\right]. $$
Then, for \(f\in{\mathcal{C}}^{\infty}_{0}(\mathbb{R}_{+})\),
$$\begin{aligned} Q_{t_{0},t} f(x_{0}) =&f(x_{0})+\mathbb{E}^{\mathbb{Q}_{t_{0},x_{0}}}\left[\int_{t_{0}}^{t} L_{s}f(X_{s})\,ds\right]. \end{aligned}$$
Given Assumption 2.7, \([0,T]\ni t \mapsto\int _{t_{0}}^{t} L_{s}f(X_{s})\,ds\) is uniformly bounded on \([0,T]\). Moreover, again in view of Assumption 2.7, since \(X\) is right-continuous, the mapping \([0,T)\ni s \mapsto L_{s}f(X_{s})\) is right-continuous up to a \(\mathbb{Q}_{t_{0},x_{0}}\)-nullset, and
$$ \lim_{t\downarrow t_{0}} \int_{t_{0}}^{t} L_{s}f(X_{s})\,ds= 0 \quad\mbox{a.s.} $$
Applying the dominated convergence theorem yields
$$ \lim_{t\downarrow t_{0}} \mathbb{E}^{\mathbb{Q}_{t_{0},x_{0}}}\left[\int _{t_{0}}^{t} L_{s}f(X_{s})\,ds\right]=0, \qquad\mbox{so } \lim_{t\downarrow t_{0}} Q_{t_{0},t} f(x_{0}) = f(x_{0}), $$
implying that \([0,T)\ni t \mapsto Q_{t_{0},t} f(x_{0})\) is right-continuous at \(t_{0}\) for each \(x_{0}\in\mathbb{R}_{+}\). Hence, the evolution operator \((Q_{t_{0},t})_{t\in[t_{0},T]}\) satisfies the continuity property
$$ \forall f\in{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+}), \forall x\in \mathbb{R}_{+},\quad\lim_{t\downarrow t_{0}} Q_{t_{0},t}f (x) = f(x). $$
In particular, denoting by \(q_{t}(x_{0},dy)\) the marginal distribution of \(X_{t}\), we have that the map
$$ [0,T)\ni t \mapsto\int_{\mathbb{R}_{+}} q_{t}(x_{0},dy) f(y) $$
is right-continuous for any \(f\in {\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) and \(x_{0}\in\mathbb{R}_{+}\). We extend \((L_{t})_{t\in[0,T]}\) and \(Q_{t_{0},t}\) as follows:
$$\begin{aligned} \forall f\in \mathcal{C}_{0}^{\infty}(\mathbb{R}_{+}),\quad \forall t > T,\quad L_{t}f\equiv 0,\quad Q_{t_{0},t}f = Q_{t_{0},T} f. \end{aligned}$$
If the martingale problem for \(((L_{t})_{t\in[0,T]}, \mathcal{C}_{0}^{\infty}(\mathbb{R}_{+}))\) is well posed, then clearly the martingale problem for \(((L_{t})_{t\geq 0}, \mathcal{C}_{0}^{\infty}(\mathbb{R}_{+}))\) is well posed as well, and the right-continuity property shown above also holds for \(t > T\). The martingale property implies that \(q_{t}(x_{0},dy)\) satisfies, for any \(t\geq0\),
$$ \forall g\in{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+}),\quad\int_{\mathbb {R}_{+}} q_{t}(x_{0},dy)g(y)=g(x_{0})+\int_{0}^{t}\int_{\mathbb{R}_{+}} q_{s}(x_{0},dy)L_{s}g(y)\,ds. $$
(2.9)
Given Assumption 2.7, the family \(q_{t}(x_{0},dy)\) is a solution of (2.8) with initial condition \(q_{0}(x_{0},dy)=\epsilon_{x_{0}}(dy)\). In particular, the measure \(q_{t}(x_{0},\cdot)\) has mass 1. To show the uniqueness of solutions to (2.8), we rewrite (2.8) as the forward Kolmogorov equation associated with a homogeneous operator on the space-time domain and use uniqueness results for the corresponding homogeneous equation. Let \({\mathcal{C}}^{1}_{0}([0,\infty))\otimes{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) be the algebraic tensor product of \({\mathcal{C}}^{1}_{0}([0,\infty))\) and \({\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\), viewed as a subset of the space of functions on \([0,\infty)\times \mathbb{R}_{+}\). The operator \(L_{t}\) can be extended to a (homogeneous) linear operator \(A\) on \({\mathcal{C}}^{1}_{0}([0,\infty))\otimes{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) defined via
$$ \forall f\in{\mathcal{C}}^{\infty}_{0}(\mathbb{R}_{+}), \forall\gamma\in {\mathcal{C}}^{1}_{0}([0,\infty)),\quad A(f\gamma)(t,x)=\gamma (t)L_{t}f(x)+f(x)\gamma'(t). $$
Theorem 7.1 of [20, Chap. 4] implies that for any \(x_{0}\in \mathbb{R}_{+}\), if \((X,\mathbb{Q}_{t_{0},x_{0}})\) is a solution of the martingale problem for \(L\), then the law of \(\eta_{t}=(t,X_{t})\) under \(\mathbb{Q}_{t_{0},x_{0}}\) is a solution of the martingale problem for \(A\). In particular, for any \(f\in{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) and \(\gamma\in {\mathcal{C}}^{1}_{0}([0,\infty))\),
$$ \int q_{t}(x_{0},dy)f(y)\gamma(t)= f(x_{0})\gamma(0)+\int_{0}^{t} \int q_{s}(x_{0},dy)A(f\gamma)(s,y)\,ds. $$
Theorem 7.1 of [20, Chap. 4] also implies that if the law of \(\eta_{t}=(t,X_{t})\) is a solution of the martingale problem for \(A\), then the law of \(X\) is also a solution of the martingale problem for \(L\). In other words, uniqueness holds for the martingale problem associated with the operator \(L\) on \({\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) if and only if uniqueness holds for the martingale problem associated with the operator \(A\) on \({\mathcal{C}}^{1}_{0}([0,\infty))\otimes{\mathcal{C}}^{\infty}_{0}(\mathbb{R}_{+})\). Define, for \(t\in[0,\infty)\) and \(h \in{\mathcal{C}}_{0}([0,\infty) \times\mathbb{R}_{+})\),
$$ \forall(s,x) \in[0,\infty) \times\mathbb{R}_{+},\quad\mathcal{Q}_{t}h(s,x)= \mathcal {Q}_{s,s+t}\big( h(t+s, \cdot) \big)(x), $$
which extends \(Q_{t_{0},t}\) to a “homogeneous” operator on \({\mathcal{C}}_{0}([0,\infty) \times\mathbb{R}_{+})\). Since \((\mathcal{Q}_{t})_{t\geq0}\) is a solution of the martingale problem for \(A\), for any \(h\in {\mathcal{C}}^{1}_{0}([0,\infty))\otimes{\mathcal{C}}_{0}^{\infty}(\mathbb {R}_{+})\) and \(\epsilon>0\), we have
$$ \mathcal{Q}_{t}h(s,x_{0})-\mathcal{Q}_{\epsilon}h(s,x_{0}) = \int_{\epsilon}^{t} \mathcal{Q}_{u}Ah(s,x_{0})\,du. $$
(2.10)
Consider now a family \(p_{t}(x_{0},dy)\) of positive measures that are solutions of (2.8) and such that \(p_{0}(x_{0},dy)=\epsilon _{x_{0}}(dy)\). Then \(p_{t}\) is also a solution of (2.9). Integration by parts implies that, for \((f,\gamma)\in {\mathcal{C}}^{1}_{0}([0,\infty)) \times{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\),
$$ \int_{\mathbb{R}_{+}} p_{t}(x_{0},dy)f(y)\gamma(t)= f(x_{0})\gamma(0)+\int_{0}^{t} \int_{\mathbb{R}_{+}} p_{s}(x_{0},dy)A(f\gamma)(s,y)\,ds. $$
(2.11)
Define, for \(t\in[0,\infty)\) and \(h\in{\mathcal{C}}_{0}([0,\infty) \times\mathbb{R}_{+})\), a family of linear operators \((\mathcal{P}_{t})_{t\geq0}\) by
$$\begin{aligned} \forall(t,x_{0})\in[0,\infty) \times\mathbb{R}_{+},\qquad\mathcal{P}_{t} h(0,x_{0}) =& \int_{\mathbb {R}_{+}} p_{t}(x_{0},dy)h(t,y). \end{aligned}$$
Using (2.11), we obtain for \((f,\gamma)\in {\mathcal{C}}^{1}_{0}([0,\infty)) \times{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) that
$$\begin{aligned} \forall\epsilon>0,\quad\mathcal{P}_{t}(f\gamma)(0,x_{0})-\mathcal{P}_{\epsilon }(f\gamma)(0,x_{0}) =&\int_{\epsilon}^{t} \int_{\mathbb{R}_{+}} p_{u}(x_{0}, dy)A(f\gamma)(u,y)\,du \\ =&\int_{\epsilon}^{t} \mathcal{P}_{u}\big(A(f\gamma)\big)(0,x_{0})\,du. \end{aligned}$$
(2.12)
Multiplying by \(e^{-\lambda t}\) and integrating with respect to \(t\), we obtain
$$\begin{aligned} &\lambda\int_{0}^{\infty}e^{-\lambda t} \,\mathcal{P}_{t} (f\gamma )(0,x_{0})\,dt\\ &=f(x_{0})\gamma(0)+\lambda\int_{0}^{\infty}e^{-\lambda t}\int_{0}^{t} \mathcal {P}_{u}\big(A (f\gamma)\big)(0,x_{0})\,du\,dt\\ &= f(x_{0})\gamma(0)+\lambda\int_{0}^{\infty}\left(\int_{u}^{\infty}e^{-\lambda t} dt\right)\, \mathcal{P}_{u}\big(A(f\gamma)\big)(0,x_{0})\, du\\ &= f(x_{0})\gamma(0)+\int_{0}^{\infty}e^{-\lambda u}\, \mathcal{P}_{u}\big(A(f\gamma)\big)(0,x_{0})\,du \end{aligned}$$
for any \(\lambda>0\). Similarly, from (2.10) we obtain for any \(\lambda>0\) that
$$\begin{aligned} \lambda\int_{0}^{\infty}e^{-\lambda t} \,\mathcal{Q}_{t}(f\gamma)(0,x_{0})\,dt =&f(x_{0})\gamma(0)+\int_{0}^{\infty}e^{-\lambda u}\, \mathcal{Q}_{u}\big(A(f\gamma)\big)(0,x_{0})\,du. \end{aligned}$$
Hence, for \((f,\gamma)\in {\mathcal{C}}^{1}_{0}([0,\infty)) \times{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\), we have
$$\begin{aligned} \int_{0}^{\infty}e^{-\lambda t}\,\mathcal{Q}_{t}(\lambda-A)(f\gamma )(0,x_{0})\,dt =&f(x_{0})\gamma(0)\\ =&\int_{0}^{\infty}e^{-\lambda t}\,\mathcal{P}_{t}(\lambda- A)(f\gamma )(0,x_{0})\,dt. \end{aligned}$$
By linearity, for any \(h\in {\mathcal{C}}^{1}_{0}([0,\infty)) \otimes{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\), we have
$$ \int_{0}^{\infty}e^{-\lambda t}\,\mathcal{Q}_{t}(\lambda-A)h(0,x_{0})\, dt=h(0,x_{0})=\int_{0}^{\infty}e^{-\lambda t}\,\mathcal{P}_{t}(\lambda- A)h(0,x_{0})\,dt. $$
Proposition 2.6 of [19, Chap. 1] then implies that for all \(\lambda>0\),
$$\mathrm{Im} (\lambda-\overline{A})={\mathcal{C}}^{1}_{0}([0,\infty) \times \mathbb{R}_{+}), $$
where \(\overline{A}\) is the closure of \(A\), and \(\mathrm{Im} (\lambda-A)\) is the image of \({\mathcal{C}}^{1}_{0}([0,\infty)) \otimes{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{+})\) by the mapping \((\lambda{-}A)\). Using the density of \(\mathrm{Im} (\lambda\,{-}\,A)\) in \(\mathrm{Im} (\lambda \,{-}\,\overline{A})\,{=}\, {\mathcal{C}}^{1}_{0}([0,\infty) {\times}\mathbb{R}_{+})\), we get for all \(h \in{\mathcal{C}}_{0}([0,\infty)\times\mathbb{R}_{+})\) that
$$ \int_{0}^{\infty}e^{-\lambda t}\,\mathcal{Q}_{t}h\,(0,x_{0})\,dt=\int _{0}^{\infty}e^{-\lambda t}\,\mathcal{P}_{t}h(0,x_{0}) \,dt, $$
(2.13)
so the Laplace transform of \(t\mapsto\mathcal{Q}_{t}h\,(0,x_{0})\) is uniquely determined. By (2.13), the two right-continuous functions \(t \mapsto\mathcal{Q}_{t} h(0,x_{0})\) and \(t \mapsto\mathcal{P}_{t} h(0,x_{0})\) have the same Laplace transform, so they are equal. Thus, we have shown that
$$ \forall h\in{\mathcal{C}}_{0}([0,\infty) \times\mathbb{R}_{+}), \quad\int h(t,y) q_{t}(x_{0},dy)= \int h(t,y) p_{t}(x_{0},dy). $$
(2.14)
Proposition 4.4 of [20, Chap. 3] implies that \({\mathcal{C}}_{0}([0,\infty) \times\mathbb{R}_{+}) \) is separating, so (2.14) allows us to conclude that \(p_{t}(x_{0},dy)=q_{t}(x_{0},dy)\). □

Remark 2.10

In abstract terms, the preceding result can be formulated as follows. Given a Banach space \(X\) and a strongly continuous semigroup \(Q\) on \(X\) with generator \(A\), consider the integral equation (the “forward equation”)
$$P_{t} u(0) = u(0) + \int_{0}^{t} P_{s} A u(0) ds $$
for \(t \geq0\), where \(u(0) \in D(A)\) and \((P_{s})\) is a strongly locally integrable family of bounded linear operators. Then \(P_{t} = Q_{t}\) for \(t \geq0\). Alternatively to the preceding proof, this can be seen by differentiating \(s \mapsto P_{s} Q_{t-s} u(0)\), which yields 0. See also Sect. 6 of [19].

We now study the uniqueness of the forward PIDE (2.4) and prove Theorem 2.8.

Proof of Theorem 2.8

We start by decomposing \(L_{t}\) as \(L_{t}=A_{t}+B_{t}\), where
$$\begin{aligned} A_{t}f(y) =&r(t)yf'(y)+\frac{y^{2}\sigma(t,y)^{2}}{2}f''(y), \\ B_{t}f(y) =&\int_{\mathbb{R}}\big(f(ye^{z})-f(y)-y(e^{z}-1)f'(y)\big) n(t,dz,y). \end{aligned}$$
Then using the fact that \(y\frac{\partial}{\partial y}(y-x)^{+}= x 1_{\{ y>x\}}+(y-x)^{+}=y\,1_{\{y>x\}}\) and \(\frac{\partial^{2}}{\partial y^{2}}(y-x)^{+}=\epsilon_{x}(dy)\), where \(\epsilon_{x}\) is a unit mass at \(x\), we obtain
$$ A_{t}(y-x)^{+}=r(t)y\,1_{\{y>x\}}+\frac{y^{2}\sigma(t,y)^{2}}{2}\epsilon_{x}(dy) $$
and
$$\begin{aligned} B_{t}(y-x)^{+} =&\int_{\mathbb{R}}\Big((ye^{z}-x)^{+}-(y-x)^{+} \\ &{} -(e^{z}-1)\left(x1_{\{y>x\}}+(y-x)^{+}\right)\Big)n(t,dz,y)\\ =&\int_{\mathbb{R}} \big((ye^{z}-x)^{+}-e^{z}(y-x)^{+}-x(e^{z}-1)1_{\{y>x\}}\big)n(t,dz,y). \end{aligned}$$
Using Lemma 2.6 for the random measure \(n(t,dz,y)\) and denoting by \(\psi_{t,y}\) its exponential double tail, we get
$$ B_{t}(y-x)^{+}=y\psi_{t,y}\bigg(\ln{ \frac{x}{y} }\bigg). $$
Hence, we have the identity
$$ L_{t}(y-x)^{+}=r(t)\left(x 1_{\{y>x\}}+(y-x)^{+}\right)+\frac{y^{2}\sigma (t,y)^{2}}{2}\epsilon_{x}(dy)+ y\psi_{t,y}\bigg(\ln{ \frac{x}{y} }\bigg). $$
Let \(f: [0,\infty)\times \mathbb{R}_{++}\to\mathbb{R}\) be a solution to (2.4) in the sense of distributions with the initial condition \(f(0,x)=(S_{0}-x)^{+}\) (taking \(t_{0}=0\) without loss of generality). Integration by parts yields
$$\begin{aligned} &\int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)L_{t}(y-x)^{+}\\ & = \int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)\\ &\phantom{=}{}\times\left(r(t)\big(x 1_{\{y>x\}}+(y-x)^{+}\big)+\frac {y^{2}\sigma(t,y)^{2}}{2}\epsilon_{x}(dy)+ y\psi_{t,y}\Big(\ln{ \frac{x}{y} }\Big)\right)\\ &= r(t)x\int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy) 1_{\{ y>x\}} + r(t)\int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)(y-x)^{+}\\ &\phantom{=}{}+\frac{x^{2}\sigma(t,x)^{2}}{2}\frac{\partial^{2} f}{\partial x^{2}}+\int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy) y\psi _{t,y}\bigg(\ln{ \frac{x}{y} }\bigg)\\ &= -r(t)x\frac{\partial f}{\partial x}+r(t)f(t,x) +\frac{x^{2}\sigma(t,x)^{2}}{2}\frac{\partial^{2} f}{\partial x^{2}}+\int _{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)\, y\,\psi_{t,y}\bigg(\ln{ \frac{x}{y} }\bigg), \end{aligned}$$
where we have used \(\int_{(x,\infty)}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)=-\frac{\partial f}{\partial x}(t,x)\).
Hence, given (2.4), we get the equality
$$ \frac{\partial f}{\partial t}(t,x)=-r(t)f(t,x)+\int_{0}^{\infty}\frac {\partial^{2} f}{\partial x^{2}}(t,dy)L_{t}(y-x)^{+}\ $$
(2.15)
or, equivalently, after integration with respect to time \(t\),
$$ e^{\int_{0}^{t} r(s)\,ds}f(t,x)-f(0,x)= \int_{0}^{t}\int_{0}^{\infty}e^{\int_{0}^{s} r(u)\,du}\frac{\partial^{2} f}{\partial x^{2}}(s,dy)L_{s}(y-x)^{+}\,ds. $$
(2.16)
Integration by parts shows that
$$ f(t,x)=\int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)(y-x)^{+}. $$
Hence, using this identity, (2.16) may be rewritten as
$$\begin{aligned} &\int_{0}^{\infty}e^{\int_{0}^{t} r(s)\,ds}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)(y-x)^{+}-(S_{0}-x)^{+} \\ &= \int_{0}^{t}\int_{0}^{\infty}e^{\int_{0}^{s} r(u)\,du}\frac{\partial^{2} f}{\partial x^{2}}(s,dy)L_{s}(y-x)^{+}\,ds. \end{aligned}$$
(2.17)
Defining \(q_{t}(dy):= e^{\int_{0}^{t} r(s)\,ds}\,\frac{\partial^{2} f}{\partial x^{2}}(t,dy)\), we have \(q_{0}(dy)=\epsilon_{S_{0}}(dy)=p_{0}(S_{0},dy)\). For \(g\in{\mathcal{C}}_{0}^{\infty}(\mathbb{R}_{++},\mathbb{R})\), integration by parts yields
$$ g(y)=\int_{0}^{\infty}g''(z)(y-z)^{+}\,dz. $$
Putting this expression into \(\int_{\mathbb{R}} g(y)q_{t}(dy)\) and using (2.17), we obtain
$$\begin{aligned} \int_{0}^{\infty}g(y)q_{t}(dy) =&\int_{0}^{\infty}g(y)\, e^{\int_{0}^{t} r(s)\, ds}\,\frac{\partial^{2} f}{\partial x^{2}}(t,dy)\\ =&\int_{0}^{\infty}g''(z)\int_{0}^{\infty}e^{\int_{0}^{t} r(s)\,ds}\,\frac {\partial^{2} f}{\partial x^{2}}(t,dy)(y-z)^{+}\,dz\\ =&\int_{0}^{\infty}g''(z)(S_{0}-z)^{+}\,dz \\ &{}+ \int_{0}^{\infty}g''(z)\int_{0}^{t}\int_{0}^{\infty}e^{\int_{0}^{s} r(u)\,du}\frac {\partial^{2} f}{\partial x^{2}}(s,dy)L_{s}(y-z)^{+}\,ds\,dz\\ =&g(S_{0})\\ &{}+\int_{0}^{t}\int_{0}^{\infty}e^{\int_{0}^{s} r(u)\,du}\frac{\partial^{2} f}{\partial x^{2}}(s,dy)L_{s} \bigg(\int_{0}^{\infty}g''(z)(y-z)^{+}\,dz\bigg)\,ds\\ =&g(S_{0})+\int_{0}^{t}\int_{0}^{\infty}q_{s}(dy)L_{s}g(y)\,ds. \end{aligned}$$
This is none other than (2.8). By the uniqueness of the solution \(p_{t}(S_{0},dy)\) to (2.8) in Proposition 2.9,
$$e^{\int_{0}^{t} r(s)\,ds}\,\frac{\partial^{2} f}{\partial x^{2}}(t,dy)= p_{t}(S_{0},dy). $$
We may rewrite (2.16) as
$$ f(t,x)= e^{-\int_{0}^{t} r(s)\,ds} \left(f(0,x)+ \int_{0}^{t}\int_{0}^{\infty}p_{s}(S_{0},dy) L_{s}(y-x)^{+}\,ds\right), $$
showing that the solution of (2.4) with initial condition \(f(0,x)=(S_{0}-x)^{+}\) is unique. □
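As a sanity check, the representation \(f(t,x)=\int_{0}^{\infty}\frac{\partial^{2} f}{\partial x^{2}}(t,dy)(y-x)^{+}\) used repeatedly above can be verified numerically on any smooth call surface. The following sketch is illustrative only: it uses a zero-rate Black–Scholes call surface with made-up parameters and approximates the integral by a midpoint quadrature.

```python
import math

def bs_call(S0, K, T, sigma):
    # Black-Scholes call price with zero rates, used as a generic smooth,
    # convex call surface (synthetic data, not the paper's model)
    d1 = (math.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return S0 * N(d1) - K * N(d2)

S0, T, sigma, x = 1.0, 1.0, 0.25, 1.1

def C_KK(y, h=1e-3):
    # second derivative in strike by central finite differences
    return (bs_call(S0, y + h, T, sigma) - 2 * bs_call(S0, y, T, sigma)
            + bs_call(S0, y - h, T, sigma)) / h**2

# f(t, x) = int_0^inf f_xx(t, dy) (y - x)^+, midpoint rule over the region
# where the density C_KK is non-negligible
n, ymax = 4000, 4.0
hq = (ymax - 1e-3) / n
lhs = sum(C_KK(1e-3 + (i + 0.5) * hq) * max(1e-3 + (i + 0.5) * hq - x, 0.0)
          for i in range(n)) * hq
assert abs(lhs - bs_call(S0, x, T, sigma)) < 1e-3
```

The quadrature reproduces the call price because \(\partial^{2}C/\partial K^{2}\) is the risk-neutral density of \(S_{T}\).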

3 Examples

We now give various examples of pricing models for which Theorem 2.3 allows us to retrieve or generalize previously known forms of forward pricing equations.

3.1 Itô processes

When \((S_{t})\) is an Itô process, that is, when the jump part is absent, the forward equation (2.4) reduces to the Dupire equation [16]. In this case, our result reduces to the following.

Proposition 3.1

(Dupire equation)

Consider the price process \((S_{t})\) whose dynamics under the pricing measure ℙ is given by
$$S_{T} = S_{0} +\int_{0}^{T} r(t) S_{t} dt + \int_{0}^{T} S_{t}\delta_{t} dW_{t}. $$
Assume that there exists a measurable function \(\sigma:[t_{0},T]\times \mathbb{R}_{++}\to\mathbb{R}_{+}\) such that
$$ \forall t\in[t_{0},T],\qquad\sigma(t,S_{t-})=\sqrt{\mathbb{E}[\delta _{t}^{2}|S_{t-}]}. $$
If
$$ \mathbb{E}\left[\exp{\left(\frac{1}{2}\int_{0}^{T} \delta_{t}^{2}\,dt\right )}\right]< \infty, $$
then the call option price (2.2) is a solution (in the sense of distributions) to the partial differential equation
$$ \frac{\partial C_{t_{0}}}{\partial T}(T,K) = -r(T)K\frac{\partial C_{t_{0}}}{\partial K}(T,K)+\frac{K^{2}\sigma (T,K)^{2}}{2}\, \frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,K) $$
on \([t_{0},\infty)\times \mathbb{R}_{++}\) with the initial condition \(C_{t_{0}}(t_{0},K)= (S_{t_{0}}-K)^{+}\) for all \(K>0\).

Proof

It is sufficient to take \(\mu\equiv0\) in (2.1) and then equivalently in (2.4). We leave the end of the proof to the reader. □

Notice in particular that this result does not require a nondegeneracy condition on the diffusion term.
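Proposition 3.1 can be illustrated numerically: generating call prices from a flat-volatility Black–Scholes model and inverting the forward PDE, \(\sigma(T,K)^{2}=2\,\partial_{T}C/(K^{2}\,\partial^{2}_{K}C)\) when \(r\equiv0\), should return the constant input volatility. A minimal sketch, with made-up parameters and finite-difference derivatives:

```python
import math

def bs_call(S0, K, T, sigma, r=0.0):
    # Black-Scholes call price, used here as synthetic market data
    if T <= 0:
        return max(S0 - K, 0.0)
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def dupire_local_vol(S0, K, T, sigma, dK=1e-2, dT=1e-4):
    # Dupire formula with r = 0:  sigma_loc^2 = 2 dC/dT / (K^2 d2C/dK2),
    # derivatives approximated by central finite differences
    C_T = (bs_call(S0, K, T + dT, sigma) - bs_call(S0, K, T - dT, sigma)) / (2 * dT)
    C_KK = (bs_call(S0, K + dK, T, sigma) - 2 * bs_call(S0, K, T, sigma)
            + bs_call(S0, K - dK, T, sigma)) / dK**2
    return math.sqrt(2 * C_T / (K**2 * C_KK))

# in the flat-volatility model, the recovered local volatility should be
# (approximately) the constant input volatility
vol = dupire_local_vol(S0=100.0, K=95.0, T=1.0, sigma=0.2)
assert abs(vol - 0.2) < 1e-3
```

In a model with genuine smile, the same inversion yields a strike- and maturity-dependent local volatility surface.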

3.2 Markovian jump-diffusion models

Another important particular case in the literature is the case of a Markov jump-diffusion driven by a Poisson random measure. Andersen and Andreasen [2] derived a forward PIDE in the situation where the jumps are driven by a compound Poisson process with time-homogeneous Gaussian jumps. We now show here that Theorem 2.3 implies the PIDE derived in [2], given here in a more general context allowing a time- and state-dependent Lévy measure, as well as an infinite number of jumps per unit time (“infinite jump activity”).

Proposition 3.2

(Forward PIDE for jump-diffusion model)

Consider the price process \(S\) whose dynamics under the pricing measure ℙ is given by
$$ S_{T} = S_{0} +\int_{0}^{T} r(t)S_{t-} dt + \int_{0}^{T} S_{t-}\sigma(t,S_{t-}) dB_{t} + \int_{0}^{T}\int_{-\infty}^{+\infty} S_{t-}(e^{y} - 1) \tilde{N}(dt dy), $$
where \(B \) is a Brownian motion, and \(\tilde{N}\) is the compensated random measure associated with a Poisson random measure \(N\) on \([0,T]\times\mathbb{R}\) with compensator \(\nu(dy)\,dt\). Assume that
$$ \sigma(\cdot,\cdot)\quad{\textit{is bounded}}\qquad{\textit{and}}\quad\int_{ \{ |y|>1\}} e^{2y} \nu(dy) < \infty. $$
Then the call option price
$$C_{t_{0}}(T,K)=e^{-\int_{t_{0}}^{T} r(t)\,dt}{\mathbb{E}}\, [\max(S_{T}-K,0)|\mathcal{F}_{t_{0}}] $$
is a solution (in the sense of distributions) of the PIDE
$$\begin{aligned} \frac{\partial C_{t_{0}}}{\partial T}(T,K) =& -r(T)K\frac{\partial C_{t_{0}}}{\partial K}(T,K)+\frac {K^{2}\sigma(T,K)^{2}}{2}\, \frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,K) \\ &{} +\int_{\mathbb{R}} \nu(dz)\,e^{z}\\ &{}\times\left(C_{t_{0}}(T,Ke^{-z})-C_{t_{0}}(T,K)-K(e^{-z}-1)\frac{\partial C_{t_{0}}}{\partial K}(T,K) \right) \end{aligned}$$
on \([t_{0},\infty)\times \mathbb{R}_{++}\) with the initial condition \(C_{t_{0}}(t_{0},K)= (S_{t_{0}}-K)^{+} \) for all \(K>0\).

Proof

As in the proof of Theorem 2.3, we put \(t_{0}=0\) and write \(C_{0}(T,K)=: C(T,K)\). Differentiating (2.2) in the sense of distributions with respect to \(K\), we obtain (2.5). In this particular case, \(m(t,dz)\,dt=\nu(dz)\,dt\), and \(\psi_{t}\) is simply given by
$$ \psi_{t}(z)=\psi(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{-\infty}^{x} \nu(du), & z< 0,\\ \int_{z}^{+\infty} dx\ e^{x} \int_{x}^{\infty} \nu(du), & z>0. \end{cases} $$
Then (2.3) yields
$$\chi_{t,S_{t-}}(z)=\mathbb{E} [\psi_{t} (z) |S_{t-} ]=\psi(z). $$
Let us now focus on the term
$$\int_{0}^{+\infty} y\frac{\partial^{2} C}{\partial K^{2}}(T,dy)\,\chi\bigg(\ln{ \frac{K}{y} }\bigg) $$
in (2.4). Applying Lemma 2.6 yields
$$\begin{aligned} &\int_{0}^{+\infty} y\frac{\partial^{2} C}{\partial K^{2}}(T,dy)\,\chi\bigg(\ln{ \frac{K}{y} }\bigg) \\ &= \int_{0}^{\infty}e^{-\int_{0}^{T} r(t)\,dt}p^{S}_{T}(dy)\\ &\phantom{\int}{}\times\int_{\mathbb{R}}\big((ye^{z}-K)^{+}-e^{z}(y-K)^{+}-K(e^{z}-1)1_{ \{y>K\}}\big)\nu(dz) \\ &= \int_{\mathbb{R}} e^{z}\left(C(T,Ke^{-z})-C(T,K)-K(e^{-z}-1)\frac {\partial C}{\partial K}(T,K)\right)\,\nu(dz). \end{aligned}$$
This ends the proof. □
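For concrete Lévy measures, the exponential double tail \(\psi\) is explicitly computable. The sketch below uses a Kou-type double-exponential density with hypothetical parameters: it evaluates the outer integral of the definition by quadrature (the inner tail integrals are in closed form) and compares the result with the fully closed form.

```python
import math

# Kou double-exponential Levy density (hypothetical parameters):
#   nu(du) = lam*( p*etp*exp(-etp*u) du for u>0, (1-p)*etm*exp(etm*u) du for u<0 )
lam, p, etp, etm = 1.0, 0.4, 5.0, 8.0

def tail(x):
    # nu((x, inf)) for x > 0, nu((-inf, x)) for x < 0 (closed form)
    return lam * p * math.exp(-etp * x) if x > 0 else lam * (1 - p) * math.exp(etm * x)

def psi_numeric(z, cut=20.0, n=4000):
    # exponential double tail: psi(z) = int_z^inf e^x nu((x,inf)) dx for z > 0,
    # and int_{-inf}^z e^x nu((-inf,x)) dx for z < 0, by midpoint quadrature
    if z > 0:
        h = (cut - z) / n
        return sum(math.exp(z + (i + 0.5) * h) * tail(z + (i + 0.5) * h)
                   for i in range(n)) * h
    h = (z + cut) / n
    return sum(math.exp(-cut + (i + 0.5) * h) * tail(-cut + (i + 0.5) * h)
               for i in range(n)) * h

def psi_closed(z):
    # closed form for the Kou model (requires etp > 1 for integrability)
    if z > 0:
        return lam * p * math.exp(-(etp - 1) * z) / (etp - 1)
    return lam * (1 - p) * math.exp((etm + 1) * z) / (etm + 1)

for z in (-0.5, 0.3, 1.0):
    assert abs(psi_numeric(z) - psi_closed(z)) < 1e-4
```

The requirement \(\eta_{+}>1\) here mirrors the integrability condition \(\int_{\{|y|>1\}}e^{2y}\nu(dy)<\infty\) in the proposition.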

3.3 Pure jump processes

For price processes with no Brownian component, Assumption 2.2 reduces to
$$\forall T>0, \quad\mathbb{E}\left[\exp{\left(\int_{0}^{T} dt \int (e^{y}-1)^{2} m(t,\,dy)\right)}\right]< \infty. $$
Assume that there exists a measurable function \(\chi:[t_{0},T]\times \mathbb{R}_{++}\to\mathbb{R}_{+}\) such that for all \(t\in[t_{0},T]\) and for all \(z\in \mathbb{R}\),
$$ \chi_{t,S_{t-}}(z)=\mathbb{E} [\psi_{t} (z )|S_{t-} ] $$
with
$$\psi_{t}(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{-\infty}^{x} m(t,du),& z< 0, \\ \int_{z}^{+\infty} dx\ e^{x} \int_{x}^{\infty} m(t,du),& z>0. \end{cases} $$
Then the forward equation for call options becomes
$$ \frac{\partial C}{\partial T} (T,K) + r(T)K\frac{\partial C}{\partial K} (T, K) = \int_{0}^{+\infty} y\frac{\partial^{2} C}{\partial K^{2}}(T,dy)\,\chi_{T,y}\bigg(\ln{ \frac{K}{y} }\bigg). $$
It is convenient to use the change of variable \(v=\ln y, k=\ln K\). If we define \(c(T,k) = C(T, e^{k})\), then we can write this PIDE as
$$ \frac{\partial c}{\partial T}(T,k) + r(T)\frac{\partial c}{\partial k}(T,k) = \int_{-\infty }^{+\infty} e^{2(v-k)}\left(\frac{\partial^{2} c}{\partial k^{2}}-\frac {\partial c}{\partial k}\right)(T,dv)\,\chi_{T,v}(k-v). $$
(3.1)
In the case considered in [9], where the Lévy density \(m_{Y}\) has a deterministic separable form
$$ m_{Y}(t,dz,y)\,dt=\alpha(y,t)\,k(z)\,dz\,dt, $$
Eq. (3.1) allows us to recover equation (14) in [9],
$$\frac{\partial c}{\partial T}(T,k) + r(T)\frac{\partial c}{\partial k}(T,k) = \int_{-\infty }^{+\infty} \kappa(k-v)e^{2(v-k)}\alpha(e^{v},T)\left(\frac{\partial^{2} c}{\partial k^{2}}-\frac{\partial c}{\partial k}\right)(T,dv), $$
where \(\kappa\) is defined as the exponential double tail of \(k(u)\,du\), that is,
$$\kappa(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{-\infty}^{x} k(u)\,du,& z< 0, \\ \int_{z}^{+\infty} dx\ e^{x} \int_{x}^{\infty} k(u)\,du,& z>0. \end{cases} $$
The right-hand side can be rewritten as a convolution of distributions, namely
$$\begin{aligned} \frac{\partial c}{\partial T} + r(T)\frac{\partial c}{\partial k} =\bigg(a_{T}(\cdot) \Big(\frac{\partial^{2} c}{\partial k^{2}}-\frac {\partial c}{\partial k}\Big)\bigg)*g, \end{aligned}$$
where
$$\begin{aligned} g(u)= e^{-2u}\kappa(u),\qquad a_{T}(u)=\alpha(e^{u},T). \end{aligned}$$
Therefore, knowing \(c(\cdot,\cdot)\) and given \(\kappa(\cdot)\), we can recover \(a_{T}\) and hence \(\alpha(\cdot,\cdot)\). As noted by Carr et al. [9], this equation is analogous to the Dupire formula for diffusions: It enables us to "invert" the structure of the jumps, represented by \(\alpha\), from the cross-section of option prices. As with the Dupire formula, this inversion involves a double deconvolution/differentiation of \(c\), which illustrates the ill-posedness of the inverse problem.
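The deconvolution step can be sketched numerically: on a periodic grid, convolution by a kernel \(g\) is undone by division in Fourier space, and it is the division by the small high-frequency Fourier coefficients of \(g\) that amplifies any noise in the data. The functions below are stand-ins chosen only for illustration, not the model quantities:

```python
import numpy as np

n = 256
x = np.linspace(-5.0, 5.0, n, endpoint=False)
g = np.exp(-2.0 * np.abs(x))   # stand-in for the kernel u -> e^{-2u} kappa(u)
f = np.exp(-x**2)              # stand-in for a_T(.)(d2c/dk2 - dc/dk)

# periodic (circular) convolution via the discrete Fourier transform
h = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

# deconvolution: division in Fourier space recovers f exactly here, but the
# small high-frequency coefficients of g amplify any noise present in h,
# which is the source of the ill-posedness
f_rec = np.real(np.fft.ifft(np.fft.fft(h) / np.fft.fft(g)))
assert np.max(np.abs(f_rec - f)) < 1e-8
```

Perturbing `h` by even a tiny amount before dividing degrades the recovery sharply, so in practice this inversion requires regularization.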

3.4 Time-changed Lévy processes

Time-changed Lévy processes were proposed in [8] in the context of option pricing. Consider the price process \(S\) with dynamics under the pricing measure ℙ given by
$$ S_{t}= e^{\int_{0}^{t}r(u)\,du}\,X_{t}, \qquad X_{t}= \exp{\left(L_{\varTheta _{t}}\right)}, \qquad\varTheta_{t}=\int_{0}^{t} \theta_{s} ds, $$
where \(L \) is a Lévy process with characteristic triplet \((b,\sigma^{2},\nu)\), \(N\) is its jump measure, and \((\theta_{t})\) is a locally bounded positive semimartingale. Then \(X\) is a ℙ-martingale if
$$ b+\frac{1}{2}\sigma^{2}+\int_{\mathbb{R}}\big(e^{z}-1-z\,1_{ \{|z|\leq1\} }\big)\nu(dz)=0. $$
Define the value \(C_{t_{0}}(T,K)\) of the call option at \(t_{0}\) with expiry \(T>t_{0}\) and strike \(K>0\) as
$$ C_{t_{0}}(T,K)=e^{-\int_{t_{0}}^{T}r(t)\,dt}{\mathbb{E}}\, [\max(S_{T}-K,0)|\mathcal{F}_{t_{0}}].$$

Proposition 3.3

Assume that there exists a measurable function \(\alpha:[0,T]\times \mathbb{R}\to\mathbb{R}\) such that
$$ \alpha(t,X_{t-})={\mathbb{E}}[\theta_{t}| X_{t-}], $$
and let \(\chi\) be the exponential double tail of \(\nu\) defined as
$$ \chi(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{-\infty}^{x} \nu(du),& z< 0, \\ \int_{z}^{+\infty} dx\ e^{x} \int_{x}^{\infty} \nu(du),& z>0. \end{cases} $$
If \(\beta=\frac{1}{2}\sigma^{2} + \int_{\mathbb{R}} (e^{y}-1)^{2} \nu (dy)<\infty\) and
$$ \mathbb{E}[\exp{(\beta\varTheta_{T} )}]< \infty, $$
(3.2)
then the call option price \((T,K)\mapsto C_{t_{0}}(T,K)\) at date \(t_{0}\), as a function of maturity and strike, is a solution (in the sense of distributions) of the partial integro-differential equation
$$\begin{aligned} \frac{\partial C}{\partial T}(T,K) =& -r(T)K\frac{\partial C}{\partial K}(T,K)+\frac {K^{2}\alpha(T,K)\sigma^{2}}{2}\, \frac{\partial^{2} C}{\partial K^{2}}(T,K) \\ &{}+\int_{0}^{+\infty} y\frac{\partial^{2} C}{\partial K^{2}}(T,dy)\,\alpha (T,y)\,\chi\bigg(\ln{ \frac{K}{y} }\bigg) \end{aligned}$$
on \([t_{0},\infty)\times \mathbb{R}_{++}\) with the initial condition \(C_{t_{0}}(t_{0},K)= (S_{t_{0}}-K)^{+}\) for all \(K>0\).

Proof

Using [5, Theorem 6], we can rewrite \((L_{\varTheta _{t}})\) as
$$\begin{aligned} L_{\varTheta_{t}} =& L_{0}+\int_{0}^{t} \sigma \sqrt{\theta_{s}} dB_{s} + \int _{0}^{t} b \theta_{s} ds \\ &{}+ \int_{0}^{t}\int_{\{|z|\leq1\}} z \tilde{N}(ds\,dz) + \int_{0}^{t}\int _{\{|z|>1\}} z N(ds\,dz), \end{aligned}$$
where \(N\) is an integer-valued random measure with compensator \(\theta_{t}\nu(dz)\,dt\), and \(\tilde{N}\) is its compensated random measure. Applying the Itô formula yields
$$\begin{aligned} X_{t} =&X_{0}+\int_{0}^{t} X_{s-}dL_{\varTheta_{s}}+\frac{1}{2}\int_{0}^{t} X_{s-}\sigma ^{2}\theta_{s}\,ds +\sum_{s\leq t} \left(X_{s}-X_{s-}-X_{s-}\Delta L_{\varTheta_{s}}\right) \\ =&X_{0}+\int_{0}^{t} X_{s-}\left(b\theta_{s}+\frac{1}{2}\sigma^{2}\theta_{s}\right )\,ds+\int_{0}^{t}X_{s-}\sigma\sqrt{\theta_{s}}\,dB_{s} \\ &{}+ \int_{0}^{t}X_{s-}\int_{\{|z|\leq1\}} z \tilde{N}(ds\,dz) + \int_{0}^{t}X_{s-}\int_{\{|z|>1\}} z N(ds\,dz) \\ &{}+ \int_{0}^{t}\int_{\mathbb{R}} X_{s-}(e^{z}-1-z)N(ds\,dz). \end{aligned}$$
Under our assumptions, \(\int(e^{z}-1-z\,1_{ \{|z|\leq1\}})\nu(dz)<\infty \); hence,
$$\begin{aligned} X_{t} =&X_{0}+\int_{0}^{t} X_{s-}\left(b\theta_{s}+\frac{1}{2}\sigma^{2}\theta _{s}+\int_{\mathbb{R}}\big(e^{z}-1-z\,1_{\{ |z|\leq1\}}\big)\theta_{s}\nu (dz)\right)\,ds \\ &{}+\int_{0}^{t}X_{s-}\sigma\sqrt{\theta_{s}}\,dB_{s} + \int_{0}^{t}\int_{\mathbb{R}} X_{s-}(e^{z}-1)\tilde{N}(ds\, dz) \\ =&X_{0} + \int_{0}^{t}X_{s-}\sigma\sqrt{\theta_{s}}\,dB_{s} + \int_{0}^{t}\int _{\mathbb{R}} X_{s-}(e^{z}-1)\tilde{N}(ds\,dz), \end{aligned}$$
and \((S_{t})\) may be expressed as
$$S_{t}=S_{0} + \int_{0}^{t} S_{s-} r(s)\,ds+ \int_{0}^{t}S_{s-}\sigma\sqrt{\theta _{s}}\,dB_{s} + \int_{0}^{t}\int_{\mathbb{R}} S_{s-}(e^{z}-1)\tilde{N}(ds\,dz). $$
Assumption (3.2) implies that \(S\) fulfills Assumption 2.2, and \((S_{t})\) is now in the suitable form (2.1) to apply Theorem 2.3, which yields the result. □
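The time-change representation used at the start of the proof can be illustrated by a small Monte Carlo sketch with no jumps and a deterministic, made-up activity rate \(\theta\): the Euler approximation of \(\int_{0}^{t}\sigma\sqrt{\theta_{s}}\,dB_{s}\) has variance \(\sigma^{2}\varTheta_{t}\), and the associated stochastic exponential has unit expectation, consistent with \(X\) being a ℙ-martingale.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, n_steps, n_paths = 0.3, 1.0, 250, 20_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps, endpoint=False)
theta = 1.0 + 0.5 * np.sin(t)   # made-up deterministic activity rate theta_s
Theta_T = theta.sum() * dt      # Theta_T = int_0^T theta_s ds (Riemann sum)

# Euler scheme for the martingale part M_t = int_0^t sigma*sqrt(theta_s) dB_s
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
M_T = (sigma * np.sqrt(theta) * dB).sum(axis=1)

# Var(M_T) = sigma^2 * Theta_T, and exp(M_T - Var/2) has expectation 1
assert abs(M_T.var() - sigma**2 * Theta_T) < 5e-3
assert abs(np.exp(M_T - 0.5 * sigma**2 * Theta_T).mean() - 1.0) < 0.02
```

With jumps, the same scheme would additionally simulate a jump measure with state-dependent intensity \(\theta_{t}\nu(dz)\), as in the decomposition above.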

3.5 Index options in a multivariate jump-diffusion model

Consider a multivariate model with \(d\) assets,
$$S_{T}^{i} = S_{0}^{i} +\int_{0}^{T} r(t) S_{t-}^{i} dt + \int_{0}^{T} S_{t-}^{i}\delta_{t}^{i} dW_{t}^{i} + \int_{0}^{T}\int_{\mathbb{R}^{d}} S_{t-}^{i}(e^{y_{i}} - 1) \tilde {N}(dt\,dy), $$
where \(\delta^{i}\) is an adapted process taking values in ℝ representing the volatility of asset \(i\), \(W\) is a \(d\)-dimensional Wiener process, \(N\) is a Poisson random measure on \([0,T]\times\mathbb {R}^{d}\) with compensator \(\nu(dy)\,dt\), and \(\tilde{N}\) denotes its compensated random measure. The Wiener processes \(W^{i}\) are correlated, that is,
$$\langle W^{i}, W^{j} \rangle_{t}=\rho_{i j}t \quad \mbox{for } i,j=1,\ldots,d, $$
with \(\rho_{ij}>0\) and \(\rho_{ii}=1\). An index is defined as a weighted sum of asset prices,
$$I_{t}=\sum_{i=1}^{d} w_{i} S^{i}_{t}\quad \mbox{with } w_{i}>0 \quad\mbox{and } \sum_{i=1}^{d} w_{i}=1. $$
The value \(C_{t_{0}}(T,K)\) at time \(t_{0}\) of an index call option with expiry \(T>t_{0}\) and strike \(K>0\) is given by
$$ C_{t_{0}}(T,K)=e^{-\int_{t_{0}}^{T} r(t)\,dt}{\mathbb{E}} \ [\max(I_{T}-K,0)|\mathcal{F}_{t_{0}}]. $$
The following result is a generalization of the forward PIDE studied by Avellaneda et al. [3] for the diffusion case.

Theorem 3.4

(Forward PIDE for index options)

Assume that
$$ \textstyle\begin{cases} \forall T>0 \quad\mathbb{E}[\exp{(\frac{1}{2}\int_{0}^{T}\|\delta_{t}\|^{2}\, dt)}]< \infty, \\ \int_{\mathbb{R}^{d}}(\|y\|\wedge\|y\|^{2})\,\nu(dy)< \infty. \end{cases} $$
(3.3)
Define
$$ \eta_{t}(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} \int_{ \mathbb{R}^{d}} 1_{\{\ln \frac{\sum ^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} \leq x\}}\nu(dy),& z< 0,\\ \int_{z}^{\infty}dx\ e^{x} \int_{ \mathbb{R}^{d}} 1_{\{ \ln\frac{ \sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} \geq x\}}\nu(dy),& z>0, \end{cases} $$
(3.4)
and assume that there exist measurable functions \(\sigma:[t_{0},T]\times \mathbb{R}_{++}\to\mathbb{R}_{+}\) and \(\chi:[t_{0},T]\times \mathbb{R}_{++}\to\mathbb{R}_{+}\) such that for all \(t\in[t_{0},T]\) and for all \(z\in\mathbb{R}\),
$$\begin{aligned} \sigma(t,I_{t-}) =&\frac{1}{I_{t-}}\,\sqrt{\mathbb{E}\Bigg[ \sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}\Bigg| I_{t-}\Bigg]}\quad {\textit{a.s.}},\\ \chi_{t,I_{t-}}(z) =&\mathbb{E} [\eta_{t} (z )| I_{t-} ]\quad {\textit{a.s.}} \end{aligned}$$
Then the index call price \((T,K)\mapsto C_{t_{0}}(T,K)\), as a function of maturity and strike, is a solution (in the sense of distributions) of the partial integro-differential equation
$$\begin{aligned} \frac{\partial C_{t_{0}}}{\partial T}(T,K) =& -r(T)K\frac{\partial C_{t_{0}}}{\partial K}(T,K)+\frac{K^{2}\sigma (T,K)^{2}}{2}\, \frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,K)\\ &{}+\int_{0}^{+\infty} y\frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,dy)\, \chi_{T,y}\bigg(\ln{ \frac{K}{y} }\bigg) \end{aligned}$$
on \([t_{0},\infty)\times \mathbb{R}_{++}\) with the initial condition \(C_{t_{0}}(t_{0},K)= (I_{t_{0}}-K)^{+}\) for all \(K>0\).

Proof

The process \((B_{t})_{t\geq0}\) defined by
$$dB_{t}=\frac{\sum_{i=1}^{d} w_{i}S^{i}_{t-}\delta_{t}^{i} dW_{t}^{i}}{(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-})^{1/2}} $$
is a continuous local martingale with quadratic variation \(t\), so by Lévy’s theorem, \(B\) is a Brownian motion. Hence, \(I\) may be decomposed as
$$\begin{aligned} I_{T} =& \sum_{i=1}^{d} w_{i} S^{i}_{0}+ \int_{0}^{T} r(t)I_{t-}\,dt + \int_{0}^{T}\bigg(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}\bigg)^{\frac{1}{2}} dB_{t}\\ &{} + \int_{0}^{T}\int_{\mathbb{R}^{d}} \sum_{i=1}^{d} w_{i} S^{i}_{t-}(e^{y_{i}}-1)\tilde{N}(dt\,dy). \end{aligned}$$
The essential part of the proof consists in rewriting \((I_{t})\) in the suitable form (2.1) to apply Theorem 2.3. Applying the Itô formula to \(\ln{ I_{T} }\) yields
$$\begin{aligned} \ln{ I_{T} }-\ln{ I_{0} } =&\int_{0}^{T} \Bigg(r(t)-\frac{1}{2I_{t-}^{2}}\sum_{i,j=1}^{d} w_{i}w_{j}\rho _{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-} \\ &{}-\int\bigg(\frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}-1-\ln {\frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}}\bigg)\nu(dy)\Bigg)\,dt \\ &{}+ \int_{0}^{T}\frac{1}{I_{t-}}\bigg(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta _{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}\bigg)^{\frac{1}{2}}\,dB_{t}\\ &{}+\int_{0}^{T}\int\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\,\tilde{N}(dt\:dy). \end{aligned}$$
Using the concavity of the logarithm, we have
$$\begin{aligned} &\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\geq\frac {\sum^{d}_{i=1} w_{i}S_{t-}^{i} y_{i}}{I_{t-}}\geq-\|y\| ,\\ &\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\leq\ln{\Big( \max_{1\leq i \leq d}\, e^{y_{i}}\Big)}, \end{aligned}$$
implying that
$$ \bigg|\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\bigg|\leq \max_{1\leq i\leq d} |y_{i}| \leq\|y\|, $$
so the functions \(y\mapsto\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\) and \(y\mapsto\frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}\) are integrable with respect to \(\nu(dy)\) under assumptions (3.3). We furthermore observe that
$$\begin{aligned} &\int 1\wedge\bigg|\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\bigg|\,\nu(dy)< \infty\quad\mbox{a.s.}\\ & \int_{0}^{T} \int_{ \{\|y\|>1\}} \exp{{\bigg(2\bigg|\ln{ \frac{\sum ^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\bigg|\bigg)}} \nu(dy)\, dt< \infty\quad\mbox{a.s.} \end{aligned}$$
Similarly, (3.3) implies that \(\int(e^{y_{i}}-1-1_{\{ |y_{i}|\leq1\}}y_{i} )\nu(dy)<\infty\), so \(\ln{ S_{T}^{i} }\) may be expressed as
$$\begin{aligned} \ln{ S_{T}^{i} } =&\ln{ S_{0}^{i} }+\int_{0}^{T}\Big( r(t)-\frac{1}{2}(\delta _{t}^{i})^{2}-\int\left(e^{y_{i}}-1-1_{ \{|y_{i}|\leq1\}}y_{i}\,\right)\nu(dy)\Big)\,dt \\ &{}+ \int_{0}^{T} \delta_{t}^{i}\,dW_{t}^{i} +\int_{0}^{T}\int y_{i}\,\tilde{N}(dt\: dy). \end{aligned}$$
Define the \(d\)-dimensional martingale \(W =(W ^{1},\dots,W ^{d-1},B )\). Then we have for \(i,j = 1, \dots, d-1\) that
$$\langle W^{i},W^{j} \rangle_{t}=\rho_{ij }t \quad\mbox{and}\quad\langle W^{i}, B\rangle_{t}=\frac{\sum_{j=1}^{d} w_{j}\rho_{ij}S_{t-}^{j}\delta_{t}^{j}}{(\sum _{i,j=1}^{d} w_{i}w_{j} \rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-})^{1/2}}\,t. $$
Define
$$ \varTheta_{t}= \begin{pmatrix} 1 & \cdots & \rho_{1,d-1} & \varrho_{1}(t) \\ \vdots & \ddots & \vdots & \vdots \\ \rho_{d-1,1} & \cdots & 1 & \varrho_{d-1}(t) \\ \varrho_{1}(t) & \cdots & \varrho_{d-1}(t) & 1 \end{pmatrix} \quad\mbox{with } \varrho_{i}(t)=\frac{\sum_{j=1}^{d} w_{j}\rho_{ij}S_{t-}^{j}\delta_{t}^{j}}{(\sum_{k,l=1}^{d} w_{k}w_{l} \rho_{kl}\,\delta_{t}^{k}\delta_{t}^{l}\,S^{k}_{t-} S^{l}_{t-})^{1/2}}. $$
There exists a standard Brownian motion \((Z_{t})\) such that \(W_{t}=A Z_{t}\), where \(A\) is a \(d\times d\) matrix verifying \(\varTheta =A^{\top}A\). Define \(X_{T}:= (\ln{ S_{T}^{1} },\dots, \ln{ S_{T}^{d-1} },\ln{ I_{T} } )\),
$$\begin{aligned} &\delta_{t}= \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \delta_{t}^{1} & \cdots& 0 & 0\\ \vdots& \ddots& \vdots& \vdots\\ 0 & \cdots& \delta_{t}^{d-1}& 0\\ 0 & \cdots& 0 & \frac{1}{I_{t-}}(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta _{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-})^{\frac{1}{2}} \end{array}\displaystyle \right), \\ &\beta_{t}= \left( \textstyle\begin{array}{l} \quad\quad r(t)-\frac{1}{2}(\delta_{t}^{1})^{2}-\int (e^{y_{1}}-1-y_{1})\,\nu(dy)\\ \vdots\\ \quad r(t)-\frac{1}{2}(\delta_{t}^{d-1})^{2}-\int (e^{y_{d-1}}-1-y_{d-1})\,\nu (dy)\\ r(t)-\frac{1}{2I_{t-}^{2}}\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta_{t}^{i}\delta _{t}^{j}\,S^{i}_{t-} S^{j}_{t-} \\ \quad{}-\int (\frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}-1-\ln{ \frac{\sum ^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} })\,\nu(dy) \end{array}\displaystyle \right), \\ &\psi_{t}(y)= \left( \textstyle\begin{array}{c} y_{1}\\ \vdots\\ y_{d-1}\\ \ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} } \end{array}\displaystyle \right). \end{aligned}$$
Then \(X_{T}\) may be expressed as
$$ X_{T}=X_{0}+\int_{0}^{T} \beta_{t}\,dt+\int_{0}^{T} \delta_{t}A\,dZ_{t} +\int_{0}^{T}\int _{\mathbb{R}^{d}} \psi_{t}(y)\,\tilde{N}(dt\:dy). $$
The predictable function \(\phi_{t}\) defined, for \(t \in[0,T]\) and \(y\in\psi _{t}(\mathbb{R}^{d})\), by
$$\phi_{t}(y)=\bigg(y_{1},\dots,y_{d-1},\ln{ \frac{e^{y_{d}}I_{t-}-\sum ^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}}}{w_{d} S_{t-}^{d}} }\bigg) $$
is the left inverse of \(\psi_{t}\), that is, \(\phi_{t}(\omega,\psi_{t}(\omega, y) )= y\). Observe that \(\psi_{t}(\cdot,0)=0\), \(\phi\) is predictable, and \(\phi _{t}(\omega, \cdot)\) is differentiable on \(\mathrm{Im} (\psi_{t})\) with Jacobian matrix given by
$$ \nabla_{y}\phi_{t}(y)=\left( \textstyle\begin{array}{c@{\ }c@{\ }c@{\ }c} 1 & 0 &0 & 0\\ \vdots& \ddots& \vdots& \vdots\\ 0& \cdots& 1 & 0\\ \frac{-e^{y_{1}}w_{1}S_{t-}^{1}}{e^{y_{d}}I_{t-}-\sum^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}}} & \cdots& \frac {-e^{y_{d-1}}w_{d-1}S_{t-}^{d-1}}{e^{y_{d}}I_{t-}-\sum^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}}}& \frac{e^{y_{d}}I_{t-}}{e^{y_{d}}I_{t-}-\sum ^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}}} \end{array}\displaystyle \right) . $$
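The left-inverse property can be checked directly on the \(d\)th component: substituting the last component of \(\psi_{t}(y)\) into \(\phi_{t}\) gives
$$e^{\psi_{t}^{d}(y)}I_{t-}-\sum^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}} =\sum^{d}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}}-\sum^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}} =w_{d} S_{t-}^{d} e^{y_{d}}, $$
so that the last component of \(\phi_{t}(\psi_{t}(y))\) equals \(\ln (w_{d} S_{t-}^{d} e^{y_{d}}/(w_{d} S_{t-}^{d}) )=y_{d}\), the first \(d-1\) components being unchanged.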
So \((\psi,\nu)\) satisfies the assumptions of [5, Lemma 2]; indeed, using assumption (3.3), for all \(T\geq t \geq0\), we have
$$\begin{aligned} &\mathbb{E}\left[\int_{0}^{T}\int_{\mathbb{R}^{d}} \big(1\wedge\|\psi _{t}(\cdot,y)\|^{2}\big)\,\nu(dy)\,dt\right] \\ &= \mathbb{E}\left[\int_{0}^{T}\int_{\mathbb{R}^{d}} 1\wedge\Bigg(y_{1}^{2}+\cdots +y_{d-1}^{2}+{\bigg(\ln\frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}\bigg)}^{2}\, \Bigg) \;\nu(dy)\,dt\right] \\ &\leq \int_{0}^{T}\int_{\mathbb{R}^{d}} \big(1\wedge 2\|y\|^{2}\big)\,\nu (dy)\,dt < \infty. \end{aligned}$$
Define \(\nu_{\phi}\), the image of \(\nu\) by \(\phi\), as
$$ \forall B\in\mathcal{B}(\mathbb{R}^{d}\setminus\{0\}) \mbox{ with } B\subseteq\psi_{t}(\mathbb{R}^{d}),\quad\nu_{\phi}(\omega,t,B)=\nu\big(\phi_{t}(\omega ,B)\big). $$
Applying [5, Lemma 2], we may express \(X_{T}\) as
$$X_{T}= X_{0}+\int_{0}^{T} \beta_{t}\,dt+ \int_{0}^{T}\delta_{t}A\,dZ_{t}+\int_{0}^{T}\int y\, \tilde{M}(dt\:dy), $$
where \(M\) is an integer-valued random measure (and \(\tilde{M}\) its compensated random measure) with compensator \(\mu(\omega; dt\ dy)=m(t,dy;\omega)\,dt\), defined via its density
$$\begin{aligned} \frac{d\mu}{d\nu_{\phi}}(\omega,t,y) =&1_{\{\psi_{t}(\mathbb{R}^{d})\}}(y)\, |\mathrm{det}\nabla_{y} \phi_{t}|(y)\\ =&1_{\{\psi_{t}(\mathbb{R}^{d})\}}(y)\,\bigg|\frac {e^{y_{d}}I_{t-}}{e^{y_{d}}I_{t-}-\sum^{d-1}_{i=1}w_{i} S_{t-}^{i} e^{y_{i}}}\bigg| \end{aligned}$$
with respect to \(\nu_{\phi}\). Considering now the \(d\)th component of \(X_{T}\), we obtain the semimartingale decomposition of \(\ln{ I_{t} }\) as
$$\begin{aligned} \ln{ I_{T} }- \ln{ I_{0} } =&\int_{0}^{T} \Bigg(r(t)-\frac{1}{2I_{t-}^{2}}\Bigg(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}\Bigg) \\ &{} -\int\Bigg(\frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}-1-\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\Bigg)\,\nu(dy)\Bigg)\,dt \\ &{}+ \int_{0}^{T}\frac{1}{I_{t-}}\Bigg(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta _{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}\Bigg)^{\frac{1}{2}}\,dB_{t} +\int _{0}^{T}\int y\,\tilde{K}(dt\:dy), \end{aligned}$$
where \(K\) is an integer-valued random measure on \([0,T]\times\mathbb{R}\) with compensator \(k(t,dy)\,dt\), where
$$\begin{aligned} k(t,B) =& \int_{ \mathbb{R}^{d-1}\times B}\mu(t,dy)=\int_{\mathbb {R}^{d-1}\times B} 1_{ \psi_{t}(\mathbb{R}^{d}) }(y)\,|\mathrm{det}\nabla_{y} \phi_{t}|(y)\,\nu_{\phi}(t,dy)\\ =&\int_{(\mathbb{R}^{d-1}\times B) \cap\psi_{t}(\mathbb{R}^{d})} |\mathrm{det}\nabla_{y} \phi_{t}|\big(\psi_{t}(y)\big)\,\nu(dy)\\ =&\int_{\{y\in\mathbb{R}^{d} \setminus\{0\}, \ln{ \frac{\sum ^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}} }\in B\}}\,\nu(dy)\qquad\mbox{for } B\in\mathcal{B}(\mathbb{R}\setminus\{0\}). \end{aligned}$$
In particular, the exponential double tail of \(k(t,dy)\), which we denote by
$$ \eta_{t}(z)= \textstyle\begin{cases} \int_{-\infty}^{z} dx\ e^{x} k(t,(-\infty,x]),& z< 0, \\ \int_{z}^{+\infty} dx\ e^{x} k(t,[x,\infty)),& z>0, \end{cases} $$
is given by (3.4). So, finally, \(I_{T}\) may be expressed as
$$\begin{aligned} I_{T} =&I_{0}+\int_{0}^{T}r(t)I_{t-} \,dt + \int_{0}^{T}\Bigg(\sum_{i,j=1}^{d} w_{i}w_{j}\rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}\Bigg)^{\frac {1}{2}}\,dB_{t}\\ &{}+ \int_{0}^{T}\int_{\mathbb{R}} (e^{y}-1 )I_{t-}\tilde{K}(dt\,dy). \end{aligned}$$
The normalized volatility of \(I\) and the jump sizes of \(\ln{ I }\) satisfy, for \(t\in[0,T]\),
$$\begin{aligned} \frac{\sum_{i,j=1}^{d} w_{i}w_{j} \rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}}{I_{t-}^{2}} \leq&\sum_{i,j=1}^{d} \rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}, \qquad \bigg|\ln{ \frac{\sum^{d}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}}{I_{t-}}}\bigg| \leq \|y\|. \end{aligned}$$
Hence,
$$\begin{aligned} &\frac{1}{2}\int_{0}^{T} \frac{\sum_{i,j=1}^{d} w_{i}w_{j} \rho_{ij}\,\delta _{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}}{I_{t-}^{2}}\,dt+ \int_{0}^{T} \int (e^{y}-1)^{2} k(t,dy)\,dt \\ &= \frac{1}{2}\int_{0}^{T} \frac{\sum_{i,j=1}^{d} w_{i}w_{j} \rho_{ij}\,\delta _{t}^{i}\delta_{t}^{j}\,S^{i}_{t-} S^{j}_{t-}}{I_{t-}^{2}}\,dt \\ &\phantom{\leq}{}+ \int_{0}^{T} \int_{\mathbb{R}^{d}} \Bigg(\frac{\sum ^{d-1}_{i=1} w_{i}S_{t-}^{i} e^{y_{i}}+w_{d}S_{t-}^{d} e^{y}}{I_{t-}}-1\Bigg)^{2} \nu(dy_{1},\dots,dy_{d-1},dy)\,dt \\ &\leq \frac{1}{2}\int_{0}^{T}\sum_{i,j=1}^{d} \rho_{ij}\,\delta_{t}^{i}\delta_{t}^{j}\,dt+ \int _{0}^{T} \int_{\mathbb{R}^{d}} (e^{\|y\|}-1)^{2} \nu(dy_{1},\dots,dy_{d-1},dy)\, dt. \end{aligned}$$
By (3.3) the last inequality implies that \(I \) satisfies Assumption 2.2. Hence, Theorem 2.3 can now be applied to \(I\), which yields the result. □
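As an aside, the exponential double tail \(\eta_{t}\) appearing in the proof above admits a closed form for simple jump measures. The following sketch evaluates both cases of the definition when \(k(t,dy)=\lambda\,\epsilon_{y_{0}}(dy)\) is a single point mass; the intensity \(\lambda\) and jump size \(y_{0}>0\) are purely illustrative and not tied to the index model.

```python
import math

def eta_point_mass(z, lam, y0):
    """Exponential double tail of k(dy) = lam * epsilon_{y0}(dy) with y0 > 0.

    For z < 0 we integrate e^x * k((-inf, x]) over (-inf, z]; this tail
    vanishes for x < y0, so the integral is 0.
    For z > 0 we integrate e^x * k([x, +inf)) over [z, +inf); this tail
    equals lam for x <= y0 and 0 beyond, giving lam * (e^{y0} - e^{z}).
    """
    if z < 0 or z >= y0:
        return 0.0
    return lam * (math.exp(y0) - math.exp(z))
```

For a general discrete measure one would sum such contributions over the atoms.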

3.6 Forward equations for CDO pricing

Portfolio credit derivatives such as CDOs or index default swaps are derivatives whose payoff depends on the total loss \(L_{t}\) due to defaults in a reference portfolio of obligors. Reduced-form top-down models of portfolio default risk [21, 23, 36, 11, 37] describe the default losses of a portfolio as a marked point process \((L_{t})_{t\geq0}\), where the jump times represent credit events in the portfolio and the jump sizes \(\Delta L_{t}\) the portfolio loss upon a default event. Marked point processes with random intensities are increasingly used as ingredients in such models [21, 23, 29, 36, 37]. In all such models, the loss process (represented as a fraction of the portfolio notional) may be written as
$$L_{t} = \int_{0}^{t}\int_{0}^{1} x \,M(ds\, dx), $$
where \(M(dt\,dx)\) is an integer-valued random measure with compensator
$$\mu(dt\,dx;\omega)=m(t,dx; \omega)\,dt. $$
If, furthermore,
$$ \int_{0}^{1} x\,m(t,dx)< \infty, $$
then \(L_{t}\) may be expressed in the form
$$L_{t} = \int_{0}^{t}\int_{0}^{1} x \big(m(s,dx)\,ds+\tilde{M}(ds\, dx)\big) , $$
where \(\int_{0}^{t}\int_{0}^{1} x \,\tilde{M}(ds\, dx)\) is a ℙ-martingale. The point process \(N_{t}= M([0,t]\times[0,1])\) represents the number of defaults, and
$$\lambda_{t}(\omega)=\int_{0}^{1} m(t,dx;\omega) $$
represents the default intensity. Denote by \(T_{1}\leq T_{2}\leq\cdots\) the jump times of \(N\). The cumulative loss process \(L\) may also be represented as
$$L_{t} = \sum_{k=1}^{N_{t}} Z_{k}, $$
where the “mark” \(Z_{k}\), with values in \([0,1]\), is distributed according to
$$F_{t}(dx ;\omega)=\frac{m(t,dx;\omega)}{\lambda_{t}(\omega)}. $$
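As a concrete illustration of this marked point process representation, here is a minimal simulation sketch. The constant intensity \(\lambda\) and the uniform mark distribution on \([0,1-L_{t-}]\) are illustrative assumptions, not part of the general model.

```python
import random

def simulate_loss(T, lam, rng):
    """Simulate a path of L_t = sum_{k <= N_t} Z_k on [0, T].

    Default times T_1 <= T_2 <= ... come from a Poisson process with
    constant intensity lam, and each mark Z_k is drawn uniformly from
    [0, 1 - L_{t-}], so the percentage loss never leaves [0, 1].
    Returns the list of (default time, cumulative loss) pairs.
    """
    t, L, path = 0.0, 0.0, []
    while True:
        t += rng.expovariate(lam)        # waiting time to the next default
        if t > T:
            break
        Z = rng.uniform(0.0, 1.0 - L)    # mark Z_k on [0, 1 - L_{t-}]
        L += Z
        path.append((t, L))
    return path

path = simulate_loss(T=5.0, lam=1.0, rng=random.Random(0))
```

Note that the constraint \(\Delta L_{t}\in[0,1-L_{t-}]\) is enforced through the support of the mark distribution.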
Note that the percentage loss \(L_{t}\) belongs to \([0,1]\), so \(\Delta L_{t}\in[0,1-L_{t-}]\). For the equity tranche \([0,K]\), we define the expected tranche notional at maturity \(T\) as
$$\begin{aligned} C_{t_{0}}(T,K)=\mathbb{E}[ (K-L_{T})^{+} |\mathcal{F}_{t_{0}}]. \end{aligned}$$
(3.5)
As noted in [11], the prices of portfolio credit derivatives such as CDO tranches depend on the loss process only through the expected tranche notionals. Therefore, if one can compute \(C_{t_{0}}(T,K)\), then one can compute the values of all CDO tranches at date \(t_{0}\). In the case of a loss process with constant loss increment, Cont and Savescu [12] derived a forward equation for the expected tranche notional. The following result generalizes the forward equation derived in [12] to a more general setting which allows random, dependent loss sizes and possible dependence between the loss given default and the default intensity.

Proposition 3.5

(Forward equation for expected tranche notionals)

Assume that there exists a family \(m_{Y} (t, dy, z)\) of measures on \([0,1]\) such that for all \(t\in[t_{0},T]\) and for all \(A\in\mathcal{B}([0,1])\),
$$ m_{Y}(t,A,L_{t-})=\mathbb{E}[m(t,A,\cdot)|L_{t-}], $$
and denote by \(M_{Y}(dt\,dy)\) the integer-valued random measure with compensator \(m_{Y}(t,dy,z)\,dt\). Define the effective default intensity
$$ \lambda^{Y}(t,z)= \int_{0}^{1-z} m_{Y}(t,dy,z). $$
Then the expected tranche notional \((T,K)\mapsto C_{t_{0}}(T,K)\), as a function of maturity and strike, is a solution of the partial integro-differential equation
$$\begin{aligned} \frac{\partial C_{t_{0}}}{\partial T}(T,K) =&\int_{0}^{K} \frac{\partial^{2} C_{t_{0}}}{\partial K^{2}}(T,dy) \\ &{}\times\left(\int_{0}^{K-y}(K-y-z)\,m_{Y}(T,dz,y)-(K-y)\lambda ^{Y}(T,y)\right) \end{aligned}$$
(3.6)
on \([t_{0},\infty)\times(0,1)\) with the initial condition \(C_{t_{0}}(t_{0},K)= (K-L_{t_{0}})^{+}\) for all \(K\in[0,1]\).

Proof

As in the proof of Theorem 2.3, we put \(t_{0}=0\) and write \(C_{0}(T,K)=: C(T,K)\). Then (3.5) can be expressed as
$$ C(T,K)=\int_{\mathbb{R}_{+}} \left(K-y\right)^{+}\,p_{T}(dy), $$
where \(p_{T}\) denotes the marginal distribution of \(L_{T}\) under \(\mathbb {P} [\cdot|\mathcal{F}_{0}]\). Differentiating with respect to \(K\), we get
$$ \frac{\partial C}{\partial K} (T,K)=\int_{0}^{K} p_{T}(dy)=\mathbb{E}\left [1_{ \{L_{T}\leq K\}}\right],\quad\quad\frac{\partial^{2} C}{\partial K^{2}}(T,dy)=p_{T}(dy). $$
For \(h>0\), applying the Tanaka–Meyer formula to \((K-{L}_{t})^{+}\) between \(T\) and \(T+h\), we have
$$\begin{aligned} (K-L_{T+h})^{+} = & (K-L_{T})^{+} -\int_{T}^{T+h} 1_{ \{L_{t-}\leq K\}} dL_{t} \\ &{}+ \sum_{T< t\leq T+h} \left((K-L_{t})^{+}-(K-L_{t-})^{+}+1_{ \{L_{t-}\leq K\}}\Delta L_{t}\right). \end{aligned}$$
Taking expectations, we get
$$\begin{aligned} & C(T+h,K)-C(T,K)\\ &= -\mathbb{E}\left[\int_{T}^{T+h} dt\,1_{\{ L_{t-}\leq K\}}\,\int _{0}^{1-L_{t-}} x\,m(t,dx)\right] \\ &\phantom{=}{}+ \mathbb{E}\bigg[ \sum_{T< t\leq T+h} \big((K-L_{t})^{+}-(K-L_{t-})^{+} +1_{\{ L_{t-}\leq K\}}\Delta L_{t}\big)\bigg]. \end{aligned}$$
The first term may be computed as
$$\begin{aligned} &\mathbb{E}\left[\int_{T}^{T+h} dt\,1_{\{ L_{t-}\leq K\}}\,\int _{0}^{1-L_{t-}} x\,m(t,dx)\right]\\ &= \int_{T}^{T+h} dt\,\mathbb{E}\left[\mathbb{E}\Big[1_{\{ L_{t-}\leq K\} }\,\int_{0}^{1-L_{t-}} x\,m(t,dx)\Big|L_{t-}\Big]\right]\\ &= \int_{T}^{T+h} dt\,\mathbb{E}\left[1_{\{ L_{t-}\leq K\}}\int _{0}^{1-L_{t-}} x\,m_{Y}(t,dx,L_{t-})\right]\\ &= \int_{T}^{T+h} dt\,\int_{0}^{K} p_{t}(dy)\left(\int_{0}^{1-y} x\, m_{Y}(t,dx,y)\right). \end{aligned}$$
As for the jump term,
$$\begin{aligned} &\mathbb{E}\Bigg[\sum_{T< t\leq T+h} \big((K-L_{t})^{+}-(K-L_{t-})^{+}+1_{ \{L_{t-}\leq K\}} \Delta L_{t}\big)\Bigg] \\ &= \mathbb{E}\bigg[\int_{T}^{T+h}dt\int_{0}^{1-L_{t-}} m(t,dx)\,\Big((K-L_{t-}-x)^{+}\\ &\quad{}-(K-L_{t-})^{+}+1_{\{ L_{t-}\leq K\}}x\Big)\bigg] \\ &=\int_{T}^{T+h}dt\,\mathbb{E}\bigg[\mathbb{E}\Big[\int_{0}^{1-L_{t-}} \, m(t,dx)\big((K-L_{t-}-x)^{+} \\ &\quad{}-(K-L_{t-})^{+}+1_{\{ L_{t-}\leq K\}}x\big)\Big|L_{t-}\Big]\bigg]\\ &=\int_{T}^{T+h}dt\,\mathbb{E}\bigg[\int_{0}^{1-L_{t-}} \, m_{Y}(t,dx,L_{t-})\big((K-L_{t-}-x)^{+}\\ &\quad{}-(K-L_{t-})^{+}+1_{\{ L_{t-}\leq K\}}x\big)\bigg]\\ &=\int_{T}^{T+h} dt\,\int_{0}^{1} p_{t}(dy)\int_{0}^{1-y} \,m_{Y}(t,dx,y)\big((K-y-x)^{+} \\ &\quad{} -(K-y)^{+}+1_{\{y\leq K\}}x\big), \end{aligned}$$
where the inner integrals may be computed as
$$\begin{aligned} &\int_{0}^{1} p_{t}(dy)\int_{0}^{1-y} \,m_{Y}(t,dx,y)\left ((K-y-x)^{+}-(K-y)^{+}+1_{\{y\leq K\}}x\right)\\ &= \int_{0}^{K} p_{t}(dy)\int_{0}^{1-y} \,m_{Y}(t,dx,y)\left((K-y-x)1_{\{ K-y>x\} }-(K-y-x)\right)\\ &= \int_{0}^{K} p_{t}(dy) \int_{K-y}^{1-y} \,m_{Y}(t,dx,y)\big(x-(K-y)\big). \end{aligned}$$
Gathering together all the terms, we obtain
$$\begin{aligned} &C(T+h,K)-C(T,K)\\ &= -\int_{T}^{T+h} dt\,\int_{0}^{K} p_{t}(dy)\left(\int_{0}^{1-y} x\, m_{Y}(t,dx,y)\right)\\ &\quad{}+\int_{T}^{T+h} dt\,\int_{0}^{K} p_{t}(dy) \left(\int_{K-y}^{1-y} \, m_{Y}(t,dx,y)\big(x-(K-y)\big)\right)\\ &= \int_{T}^{T+h} dt\,\int_{0}^{K} p_{t}(dy)\left(\int_{0}^{K-y} \, m_{Y}(t,dx,y)(K-y-x)-(K-y)\lambda^{Y}(t,y)\right). \end{aligned}$$
Dividing by \(h\) and taking the limit as \(h\to0\) yield
$$\begin{aligned} &\frac{\partial C}{\partial T}(T,K)\\ &= \int_{0}^{K} p_{T}(dy)\,\left(\int_{0}^{K-y}(K-y-x)\, m_{Y}(T,dx,y)-(K-y)\lambda^{Y}(T,y)\right)\\ &= \int_{0}^{K}\frac{\partial^{2} C}{\partial K^{2}}(T,dy)\,\left(\int _{0}^{K-y}(K-y-x)\,m_{Y}(T,dx,y)-(K-y)\lambda^{Y}(T,y)\right). \end{aligned}$$
 □
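The identity \(\partial^{2} C/\partial K^{2}(T,dy)=p_{T}(dy)\) used at the start of the proof can be checked numerically on a discrete loss distribution: the second finite difference of \(K\mapsto\mathbb{E}[(K-L_{T})^{+}]\) on a strike grid of mesh \(\delta\) recovers the point masses. The distribution below is a hypothetical example, not taken from the text.

```python
def tranche_notional(K, dist):
    """C(K) = E[(K - L)^+] for a discrete loss distribution {loss: prob}."""
    return sum(p * max(K - y, 0.0) for y, p in dist.items())

delta = 0.1                              # mesh of the strike grid
dist = {0.0: 0.7, 0.1: 0.2, 0.3: 0.1}    # hypothetical law of L_T on the grid
# second finite difference in K recovers the point mass p_T({k * delta})
recovered = {k: (tranche_notional((k + 1) * delta, dist)
                 - 2.0 * tranche_notional(k * delta, dist)
                 + tranche_notional((k - 1) * delta, dist)) / delta
             for k in range(1, 5)}
```

Here `recovered[k]` approximates the probability that the loss equals \(k\delta\), which is exact for a distribution supported on the grid.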
In [12], the loss given default (i.e., the jump size of \(L\)) is assumed to be constant equal to \(\delta=(1-R)/n\); then \(Z_{k}=\delta\), so \(L_{t}=\delta N_{t}\), and we can compute \(C(T,K)\) using the law of \(N_{T}\). Setting, as before, \(t_{0}=0\) and writing \(C_{0}(T,K) =: C(T,K)\), we have, for \(K=k\delta\) with \(k\in\{0,\dots,n\}\),
$$\begin{aligned} C(T,K)=\mathbb{E}[(K-L_{T})^{+}]=\mathbb{E}[(k\,\delta-L_{T})^{+}] =\delta\,\mathbb{E}[(k-N_{T})^{+}]=: \delta\,C_{k}(T). \end{aligned}$$
The compensator of \(L_{t}\) is \(\lambda_{t}\,\epsilon_{\delta}(dz)\,dt\). The effective compensator becomes
$$m_{Y}(t,dz,y)=\mathbb{E}[\lambda_{t}|L_{t-}=y]\,\epsilon_{\delta}(dz) =\lambda^{Y}(t,y)\,\epsilon_{\delta}(dz), $$
and the effective default intensity is \(\lambda^{Y}(t,y)= \mathbb{E}[\lambda_{t}|L_{t-}=y]\). Using the notation in [12], if we set \(y=j\delta\), then
$$\lambda^{Y}(t,j\delta)= \mathbb{E}[\lambda_{t}|L_{t-}=j\delta]=\mathbb {E}[\lambda_{t}|N_{t-}=j]=a_{j}(t) $$
and \(p_{t}(dy)=\sum_{j=0}^{n} q_{j}(t)\epsilon_{j\delta}(dy)\). Let us focus on (3.6) in this case. We recall from the proof of Proposition 3.5 that
$$\begin{aligned} \frac{\partial C}{\partial T}(T,k\delta) =&\int_{0}^{1} p_{T}(dy)\,\int_{0}^{1-y}\big((k\delta-y-z)^{+}-(k\delta -y)^{+}\big)\,\lambda^{Y}(T,y)\,\epsilon_{\delta}(dz) \\ =&\int_{0}^{1} p_{T}(dy)\,\lambda^{Y}(T,y)\,\big((k\delta-y-\delta )^{+}-(k\delta-y)^{+}\big)\,1_{ \{\delta< 1-y\}} \\ =&-\delta\sum_{j=0}^{n} q_{j}(T)\,a_{j}(T)\,1_{\{ j\leq k-1\}}. \end{aligned}$$
This expression can be simplified as in [12, Proposition 2], leading to the forward equation
$$\begin{aligned} \frac{\partial C_{k}(T)}{\partial T} =& a_{k}(T)C_{k-1}(T)-a_{k-1}(T)C_{k}(T)\\ &{}-\sum_{j=1}^{k-2} C_{j}(T) \big(a_{j+1}(T)-2a_{j}(T)+a_{j-1}(T)\big) \\ =&\big(a_{k}(T)-a_{k-1}(T)\big)C_{k-1}(T) \\ &{}-\sum_{j=1}^{k-2}(\nabla^{2} a)_{j}C_{j}(T)-a_{k-1}(T)\big(C_{k}(T)-C_{k-1}(T)\big). \end{aligned}$$
Hence, we recover [12, Proposition 2] as a particular case of Proposition 3.5.
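As a sanity check of this constant-loss-given-default case, one can integrate the forward Kolmogorov equations \(q_{j}'=a_{j-1}q_{j-1}-a_{j}q_{j}\) of the counting process \(N\) and compare a finite-difference derivative of \(C_{k}(T)=\mathbb{E}[(k-N_{T})^{+}]\) with \(-\sum_{j\leq k-1}a_{j}(T)q_{j}(T)\). The constant intensities \(a_{j}\) below are arbitrary illustrative values (the results above allow time-dependent \(a_{j}(t)\)).

```python
n = 5                                    # number of names (illustrative)
a = [1.0 + 0.2 * j for j in range(n)]    # a_j = E[lambda_t | N_t = j], constant in t
q = [0.0] * (n + 1)
q[0] = 1.0                               # q_j(0) = P(N_0 = j): no defaults at time 0
dt, k = 1e-3, 3

def C_k(q, k):
    """C_k(T) = E[(k - N_T)^+] for the default-counting process N."""
    return sum(q_j * max(k - j, 0) for j, q_j in enumerate(q))

errs = []
for _ in range(1000):                    # Euler steps of q_j' = a_{j-1}q_{j-1} - a_j q_j
    qn = q[:]
    for j in range(n + 1):
        qn[j] += dt * ((a[j - 1] * q[j - 1] if j > 0 else 0.0)
                       - (a[j] * q[j] if j < n else 0.0))
    lhs = (C_k(qn, k) - C_k(q, k)) / dt  # finite-difference dC_k/dT
    rhs = -sum(a[j] * q[j] for j in range(k))
    errs.append(abs(lhs - rhs))
    q = qn
```

Since the Euler step is linear in \(q\), the two sides agree at every step up to floating-point rounding, confirming \(\partial C_{k}/\partial T=-\sum_{j\leq k-1}a_{j}q_{j}\).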

Footnotes

  1. Note, however, that the equation given in [9] does not seem to be correct: it involves the double tail of \(k(z)\,dz\) instead of the exponential double tail.


Acknowledgements

We thank Damien Lamberton for numerous remarks, which helped improve the manuscript. Substantial assistance from an Associate Editor and the Editor is also gratefully acknowledged. Any remaining errors are our own.

References

  1. Achdou, Y., Pironneau, O.: Computational Methods for Option Pricing. Frontiers in Applied Mathematics. SIAM, Philadelphia (2005)
  2. Andersen, L., Andreasen, J.: Jump diffusion models: volatility smile fitting and numerical methods for pricing. Rev. Deriv. Res. 4, 231–262 (2000)
  3. Avellaneda, M., Boyer-Olson, D., Busca, J., Friz, P.: Application of large deviation methods to the pricing of index options in finance. C. R. Math. Acad. Sci. Paris 336, 263–266 (2003)
  4. Barles, G., Imbert, C.: Second-order elliptic integro-differential equations: viscosity solutions theory revisited. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 25, 567–585 (2008)
  5. Bentata, A., Cont, R.: Mimicking the marginal distributions of a semimartingale. Working paper (2012). arXiv:0910.3992v5 [math.PR]
  6. Berestycki, H., Busca, J., Florent, I.: Asymptotics and calibration of local volatility models. Quant. Finance 2, 61–69 (2002)
  7. Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Polit. Econ. 3, 637–654 (1973)
  8. Carr, P., Geman, H., Madan, D.B., Yor, M.: Stochastic volatility for Lévy processes. Math. Finance 13, 345–382 (2003)
  9. Carr, P., Geman, H., Madan, D.B., Yor, M.: From local volatility to local Lévy models. Quant. Finance 4, 581–588 (2004)
  10. Carr, P., Hirsa, A.: Why be backward? Risk 16(1), 103–107 (2003)
  11. Cont, R., Minca, A.: Recovering portfolio default intensities implied by CDO tranches. Math. Finance 23, 94–121 (2013)
  12. Cont, R., Savescu, I.: Forward equations for portfolio credit derivatives. In: Cont, R. (ed.) Frontiers in Quantitative Finance: Credit Risk and Volatility Modeling, pp. 269–288. Wiley, New York (2008), Chap. 11
  13. Cont, R., Tankov, P.: Financial Modelling with Jump Processes. CRC Press, Boca Raton (2004)
  14. Cont, R., Voltchkova, E.: Integro-differential equations for option prices in exponential Lévy models. Finance Stoch. 9, 299–325 (2005)
  15. Dupire, B.: Model art. Risk 6, 118–120 (1993)
  16. Dupire, B.: Pricing with a smile. Risk 7, 18–20 (1994)
  17. Dupire, B.: A unified theory of volatility. Working paper, Paribas (1996). Unpublished
  18. Dupire, B.: Pricing and hedging with smiles. In: Dempster, M., Pliska, S. (eds.) Mathematics of Derivative Securities, pp. 103–111. Cambridge University Press, Cambridge (1997)
  19. Engel, K.-J., Nagel, R.: One-Parameter Semigroups for Linear Evolution Equations. Springer, Berlin (2000)
  20. Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley, New York (1986)
  21. Filipovic, D., Overbeck, L., Schmidt, T.: Dynamic CDO term structure modeling. Math. Finance 21, 53–71 (2011)
  22. Garroni, M.G., Menaldi, J.L.: Second Order Elliptic Integro-Differential Problems. CRC Press, Boca Raton (2002)
  23. Giesecke, K.: Portfolio credit risk: top down vs. bottom up approaches. In: Cont, R. (ed.) Frontiers in Quantitative Finance: Credit Risk and Volatility Modeling, pp. 251–265. Wiley, New York (2008), Chap. 10
  24. Gyöngy, I.: Mimicking the one-dimensional marginal distributions of processes having an Itô differential. Probab. Theory Relat. Fields 71, 501–516 (1986)
  25. He, S.W., Wang, J.G., Yan, J.A.: Semimartingale Theory and Stochastic Calculus. Kexue Chubanshe (Science Press), Beijing (1992)
  26. Hilber, N., Reich, N., Schwab, C., Winter, C.: Numerical methods for Lévy processes. Finance Stoch. 13, 471–500 (2009)
  27. Jourdain, B.: Stochastic flows approach to Dupire’s formula. Finance Stoch. 11, 521–535 (2007)
  28. Klebaner, F.: Option price when the stock is a semimartingale. Electron. Commun. Probab. 7, 79–83 (2002)
  29. Lopatin, A.V., Misirpashaev, T.: Two-dimensional Markovian model for dynamics of aggregate credit loss. In: Fouque, J.P., Fomby, T.B., Solna, K. (eds.) Advances in Econometrics, vol. 22, pp. 243–274. Emerald Group Publishing, Bingley (2008)
  30. Madan, D., Yor, M.: Making Markov martingales meet marginals. Bernoulli 8, 509–536 (2002)
  31. Merton, R.: Theory of rational option pricing. Bell J. Econ. 4, 141–183 (1973)
  32. Mikulevičius, R., Pragarauskas, H.: On the martingale problem associated with integro-differential operators. In: Grigelionis, B. (ed.) Probability Theory and Mathematical Statistics, vol. II, Vilnius, 1989, pp. 168–175. Mokslas, Vilnius (1990)
  33. Protter, P., Shimbo, K.: No arbitrage and general semimartingales. In: Ethier, S.N., et al. (eds.) Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz. Inst. Math. Stat. Collect., vol. 4, pp. 267–283. Institute of Mathematical Statistics, Beachwood (2008)
  34. Protter, P.E.: Stochastic Integration and Differential Equations, 2nd edn. Springer, Berlin (2005)
  35. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, 3rd edn. Springer, Berlin (1999)
  36. Schönbucher, P.: Portfolio losses and the term structure of loss transition rates: a new methodology for the pricing of portfolio credit derivatives. Working paper (2005). http://www.nccr-finrisk.uzh.ch/wps.php?action=query&id=264
  37. Sidenius, J., Piterbarg, V., Andersen, L.: A new framework for dynamic credit portfolio loss modeling. Int. J. Theor. Appl. Finance 11, 163–197 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

Laboratoire de Probabilités et Modèles Aléatoires, CNRS—Université de Paris VI, Paris, France
