1 Introduction

In the quest to enhance option pricing models so that they reproduce the volatility smile or smirk observed in derivative markets, researchers such as Heston (1993) proposed stochastic volatility models to capture this stylized fact. Recall that in a stochastic volatility model, the price process under a risk-neutral measure is assumed to depend not on a constant volatility, as in the Black–Scholes model, but on a stochastic volatility described by a second stochastic differential equation driven by a Brownian motion correlated with the Brownian motion that drives the price process. Later, in order to improve these models, jumps following a compound Poisson process were added to the price process, as in Bates (1996a, b). Currently, the Heston and Bates models (see Heston (1993) and Bates (1996a) respectively) are standard models regularly used in the financial industry. The Bates model is the Heston model with the addition of jumps in the price process, described by a compound Poisson process with normal amplitudes. In Bates (2000), in order to overcome some inconsistencies of the Heston and Bates models in generating volatility surfaces similar to those observed in derivative markets, a second factor was added to the volatility equation, modeling separately the long-term and the short-term volatility evolution. This idea was later developed by several authors; see for example Christoffersen et al. (2009) and Andersen and Benzoni (2010).

Certainly, most of the previous models have the advantage of admitting exact semi-closed pricing formulas; however, these formulas involve numerical integration, which is computationally expensive, especially when calibrating the models. See the recent papers Orzechowski (2020), Deng (2020), and Orzechowski (2021) for discussions of the efficiency of different methods to compute these formulas approximately. The last two papers cover the 2FSVJ model, and in fact Deng (2020) extends the 2FSVJ model by including jumps in the volatility equations.

In general, the need for fast option pricing has driven, in recent years, research into closed-form approximate formulas. A different line in this direction is the one started by Alòs (2012), who derived an exact decomposition of an option price in terms of volatility and correlation in the case of the Heston model, which can be well approximated by an easy-to-manage closed approximate formula. In this approach, the problem is not how to perform fast numerical integration in the closed pricing formula, but how to obtain another type of approximate formula based on a Taylor-type decomposition. This point of view is interesting not only from the computational finance point of view, but also from an intrinsic point of view, since it shows the impact of correlation and volatility of volatility on option pricing.

The ideas in Alòs (2012) were exploited in Alòs et al. (2015) to develop an alternative method for fast calibration of the Heston model on the basis of a market price surface. This approximate formula for the Heston model was improved in terms of accuracy in Gulisashvili et al. (2020). Moreover, the same ideas were extended beyond the Heston model in several papers. In Merino and Vives (2015) the decomposition formula was extended to general stochastic volatility models without jumps; in Merino and Vives (2017) stochastic local volatility and spot-dependent models were considered; and in Merino et al. (2018) the case of the Bates model was treated. Recently, in Merino et al. (2021), similar results were obtained for rough Volterra stochastic volatility models.

It is also important to comment on the advantages of this line of research over alternative methodologies in relation to accuracy and computational efficiency in pricing derivatives. In Alòs (2012), results are compared with another approximate formula developed by E. Benhamou, E. Gobet and M. Miri based on Malliavin calculus techniques; see Benhamou et al. (2010) and the references therein. In Alòs et al. (2015), accuracy and computational efficiency are compared with the results in Forde et al. (2011), which are based on an alternative closed-form approximate formula. In Merino et al. (2018), one of the main references for the present paper, the accuracy and computational efficiency of the approximate formula obtained for the Bates model are compared with transform pricing methods based on a semi-closed pricing formula. Concretely, the new formula is compared with the Fourier-transform-based pricing formula used in Baustian et al. (2017), resulting in a three times faster method with similar accuracy. In summary, approximate formulas based on the mentioned decomposition formula, beyond their advantages in terms of computational efficiency, allow one to understand the key terms contributing to the option fair value and to infer parametric approximations to the implied volatility surface.

In the present paper, in line with the previous papers mentioned, the goal is to obtain a decomposition formula and a closed approximate option pricing formula for a two-factor Heston–Kou (2FSVJ) model, as described in Bates (2000) and Christoffersen et al. (2009). Our study brings some innovations to the existing literature on three fronts. Firstly, we consider a two-factor model which, to the best of our knowledge, has not been studied in the context of the mentioned decomposition formula. Secondly, we obtain a second-order formula as in Gulisashvili et al. (2020), while most research in this line obtains first-order formulae only. Lastly, in addition to log-normal jumps, double exponential jumps as in Kou (2002) and Gulisashvili and Vives (2012) are considered, and in this sense this is a generalization of Merino et al. (2018). Our results are compared with the Fourier integral method, obtaining faster results.

The rest of the paper is organized as follows: in Sect. 2 we introduce the model and outline some key concepts and assumptions. In Sect. 3 the generic decomposition formula is obtained. In Sect. 4 we derive the first- and second-order approximate formulae. Section 5 describes the numerical experiments and results, while Sect. 6 outlines the conclusions of our research.

2 The model

Assume we have an asset \(S:=\{S_{t}, t\in [0,T]\}\) described by the SDE

$$\begin{aligned} \frac{dS_{t}}{S_{t^{-}}}&=(r-k\lambda )dt+\sqrt{Y_{1,t}}\left( \rho _{1}dW_{1,t}+\sqrt{1-\rho _{1}^{2}}dB_{1,t}\right) \\&\quad + \sqrt{Y_{2,t}}\left( \rho _{2}dW_{2,t}+\sqrt{1-\rho _{2}^{2}}dB_{2,t}\right) +d\sum _{i=1}^{N_{t}}(e^{Z_{i}}-1)\end{aligned}$$
(1)
$$\begin{aligned} dY_{1,t}=\kappa _{1}(\theta _{1}-Y_{1,t})dt+\nu _{1}\sqrt{Y_{1,t}}dW_{1,t} \end{aligned}$$
(2)
$$\begin{aligned} dY_{2,t}=\kappa _{2}(\theta _{2}-Y_{2,t})dt+\nu _{2}\sqrt{Y_{2,t}}dW_{2,t} \end{aligned}$$
(3)

under a risk-neutral probability measure, where \((B_{i,t})_{t\in [0,T]}\) and \((W_{i,t})_{t\in [0,T]}\) are mutually independent Wiener processes for \(i=1,2\), r is the risk-free interest rate, \(N_{t}\) is a Poisson process with intensity \(\lambda\), and \(k=\mathbb {E}[e^{Z_{1}}]-1\) is the expected relative jump size, so that the drift compensates the jumps. The i.i.d. jump amplitudes \((Z_{i})_{i\in \mathbb {N}}\) have a known distribution and are independent of the Poisson process \(N_{t}\) and the Wiener processes.

In order to compute the decomposition formula we need a version of the variance processes suitable for our computations. We use an alternative adapted specification amenable to Itô calculus, namely the expected future average variance, defined as

$$\begin{aligned} V_{i,t}=\frac{1}{T-t}\int _{t}^{T}\mathbb {E}_{t}[Y_{i,s}]ds\text { for }i=1,2, \end{aligned}$$

where \(\mathbb {E}_{t}\) denotes the conditional expectation with respect to the complete natural filtration generated by the five processes involved in the model.

The following lemma will be useful in the remainder of the paper.

Lemma 1

The process \(V_{i,t}\) satisfies the differential form

$$\begin{aligned} dV_{i,t}=\frac{1}{T-t}\left( dM_{i,t}+(V_{i,t}-Y_{i,t})dt\right) \text { for }i=1,2, \end{aligned}$$

where

$$\begin{aligned} M_{i,t}=\int _{0}^{T}\mathbb {E}_{t}[Y_{i,s}]ds \text { for }i=1,2 \end{aligned}$$

is a martingale. In particular,

$$\begin{aligned} dM_{i,t}=\nu _{i}\psi _{i}(t)\sqrt{Y_{i,t}}dW_{i,t}\text { for }i=1,2 \end{aligned}$$
(4)

where

$$\begin{aligned} \psi _{i}(t)=\int _t^T e^{-\kappa _i (s-t)}ds=\frac{1}{\kappa _{i}}\left( 1-e^{-\kappa _{i}(T-t)}\right) . \end{aligned}$$

Proof

Integrating (2) and (3) on \([t,s]\) and taking conditional expectations yields:

$$\begin{aligned} Y_{i,s}=Y_{i,t}+\kappa _{i}\int _{t}^{s}(\theta _{i}-Y_{i,u})du+\nu _{i} \int _{t}^{s}\sqrt{Y_{i,u}}dW_{i,u} \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}_{t}\left[ Y_{i,s}\right] =Y_{i,t} + \kappa _{i}\int _{t}^{s}(\theta _{i}-\mathbb {E}_{t}\left[ Y_{i,u}\right] )du. \end{aligned}$$

Transforming the second expression via an integrating factor we get the following differential equation:

$$\begin{aligned} d\left( e^{\kappa _{i}s}\mathbb {E}_{t}\left[ Y_{i,s}\right] \right) =\kappa _{i}\theta _{i}e^{\kappa _{i}s}ds. \end{aligned}$$

Integrating and multiplying by \(e^{-\kappa _{i}s}\) reveals that

$$\begin{aligned} \mathbb {E}_{t}\left[ Y_{i,s}\right] =\theta _{i}+\left( Y_{i,t}-\theta _{i} \right) e^{-\kappa _{i}(s-t)}. \end{aligned}$$

Integrating the above on \([t,T]\) yields

$$\begin{aligned} \int _{t}^{T}\mathbb {E}_{t}\left[ Y_{i,s}\right] ds= \theta _{i}(T-t)+\frac{1}{\kappa _{i}}\left( Y_{i,t}-\theta _{i}\right) \left( 1-e^{-\kappa _{i}(T-t)}\right) . \end{aligned}$$
(5)

Now, from the definition of \(V_{i,t}\)

$$\begin{aligned} dV_{i,t}=\frac{1}{T-t}[V_{i,t}dt+d\int _{t}^{T}\mathbb {E}_{t}\left[ Y_{i,s} \right] ds] \end{aligned}$$

where

$$\begin{aligned} d\int _{t}^{T}\mathbb {E}_{t}\left[ Y_{i,s}\right] ds&=\left[ -\theta _{i}-\left( Y_{i,t}-\theta _{i}\right) e^{-\kappa _{i}(T-t)} \right] dt+ \frac{1}{\kappa _{i}}\left( 1-e^{-\kappa _{i}(T-t)}\right) dY_{i,t}\\&=\left[ -\theta _{i}-\left( Y_{i,t}-\theta _{i}\right) e^{-\kappa _{i}(T-t)} \right] dt\\&\quad + \frac{1}{\kappa _{i}}\left( 1-e^{-\kappa _{i}(T-t)}\right) \left( \kappa _{i}(\theta _{i}-Y_{i,t})dt+\nu _{i}\sqrt{Y_{i,t}}dW_{i,t}\right) \\&=-Y_{i,t}dt+ \frac{\nu _{i}}{\kappa _{i}}\left( 1-e^{-\kappa _{i}(T-t)} \right) \sqrt{Y_{i,t}}dW_{i,t}. \end{aligned}$$

Then, the differential form of \(V_{i,t}\) follows.

In relation with the expression of \(dM_{i,t}\), note that using (5) we have

$$\begin{aligned} M_{i,t}=\int _0^t Y_{i,s}ds+ \theta _{i}(T-t)+\left( Y_{i,t}-\theta _{i}\right) \psi _{i}(t) \end{aligned}$$

and

$$\begin{aligned} dM_{i,t}&=Y_{i,t}dt-\theta _i dt+\psi _{i}(t)dY_{i,t}+\left( Y_{i,t}-\theta _{i}\right) \psi ^{\prime }_{i}(t)dt\\&=\kappa _i\psi _i(t)(Y_{i,t}-\theta _i)dt+\psi _{i}(t)dY_{i,t}. \end{aligned}$$

Substituting the expression of \(dY_{i,t}\), the differential form of \(M_{i,t},\) (4), follows. \(\square\)
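The closed form of \(\mathbb {E}_{t}[Y_{i,s}]\) and the integrated expression (5) are easy to check numerically. The following sketch (assuming NumPy is available; the parameter values are purely illustrative) integrates the conditional-mean ODE \(m'(s)=\kappa _i(\theta _i-m(s))\) by Euler's method and compares it with the closed form:

```python
import numpy as np

kappa, theta = 1.5, 0.04      # illustrative CIR parameters
Y_t, t, T = 0.09, 0.0, 1.0

# m(s) = E_t[Y_{i,s}] solves the linear ODE m'(s) = kappa * (theta - m(s)), m(t) = Y_t
n = 200000
s = np.linspace(t, T, n + 1)
ds = (T - t) / n
m = np.empty(n + 1)
m[0] = Y_t
for j in range(n):            # explicit Euler on the conditional-mean ODE
    m[j + 1] = m[j] + kappa * (theta - m[j]) * ds

closed = theta + (Y_t - theta) * np.exp(-kappa * (s - t))
assert np.max(np.abs(m - closed)) < 1e-5

# the integrated form (5)
lhs = np.trapz(closed, s)
rhs = theta * (T - t) + (Y_t - theta) * (1.0 - np.exp(-kappa * (T - t))) / kappa
assert abs(lhs - rhs) < 1e-9
```

Both checks pass to within discretization error, confirming the exponential reversion of the conditional mean towards \(\theta _i\).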

Remark 1

Recall that in the two-factor Black–Scholes model, we transform the diffusion term as follows:

$$\begin{aligned} \sigma _{1}dW_{1,t}+\sigma _{2}dW_{2,t}=\Vert \sigma \Vert d\widetilde{W_{t}} \end{aligned}$$

where

$$\begin{aligned} \Vert \sigma \Vert =\sqrt{\sigma _{1}^{2}+\sigma _{2}^{2}} \end{aligned}$$

and

$$\begin{aligned} d\widetilde{W_{t}}=\frac{1}{\Vert \sigma \Vert }\left( \sigma _{1}dW_{1,t}+\sigma _{2}dW_{2,t}\right) . \end{aligned}$$

Thus, taking the above remark into account and letting \(X_{t}=\ln (S_{t})\) we have

$$\begin{aligned} dX_{t}=(r-k\lambda -\frac{1}{2}{\overline{Y}}_{t})dt+\sqrt{\overline{Y}_{t}}d\widetilde{W_{t}}+d\sum _{i=1}^{N_{t}}{Z_{i}} \end{aligned}$$
(6)

where

$$\begin{aligned} d\widetilde{W_{t}}=\frac{1}{\sqrt{\overline{Y}_{t}}}\left[ \sqrt{Y_{1,t}}\left( \rho _{1}dW_{1,t}+\sqrt{1-\rho _{1}^{2}}dB_{1,t}\right) +\sqrt{Y_{2,t}}\left( \rho _{2}dW_{2,t}+\sqrt{1-\rho _{2}^{2}}dB_{2,t}\right) \right] \end{aligned}$$

and

$$\begin{aligned} \overline{Y}_{t}=Y_{1,t}+Y_{2,t}. \end{aligned}$$

The process \({{\overline{Y}}}_{t}\) has an expected future average variance whose differential form

$$\begin{aligned} d\overline{V}_{t}=\frac{1}{T-t}\left( d{{\overline{M}}}_{t}+({\overline{V}}_{t}-{{\overline{Y}}}_{t})dt\right) \end{aligned}$$

can easily be derived since it is a linear combination of independent processes. Here,

$$\begin{aligned} {{\overline{V}}}_t=\frac{1}{T-t}\int _{t}^{T}\mathbb {E}_{t}[{\overline{Y}}_s]ds \end{aligned}$$

and

$$\begin{aligned} {\overline{M}}_t=\int _{0}^{T}\mathbb {E}_{t}[{\overline{Y}}_s]ds. \end{aligned}$$
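Since \({\overline{Y}}_t=Y_{1,t}+Y_{2,t}\) with independent factors, \({\overline{V}}_t\) is simply the sum of the two per-factor expected future average variances. A minimal sketch (the function names are ours, not from the paper):

```python
import math

def expected_avg_variance(t, T, Y_t, kappa, theta):
    """V_{i,t}: expected future average variance of one CIR factor, from eq. (5)."""
    psi = (1.0 - math.exp(-kappa * (T - t))) / kappa
    return (theta * (T - t) + (Y_t - theta) * psi) / (T - t)

def v_bar(t, T, Y1, Y2, k1, th1, k2, th2):
    """V-bar_t: sum of the per-factor expected future average variances."""
    return (expected_avg_variance(t, T, Y1, k1, th1)
            + expected_avg_variance(t, T, Y2, k2, th2))
```

When each factor starts at its long-run level \(Y_{i,t}=\theta _i\), the expression collapses to \({\overline{V}}_t=\theta _1+\theta _2\).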

3 Decomposition formula

Having defined the terms and processes related to the volatility, we recall some notation related to the Black–Scholes formula. Let B(txy) be the Black–Scholes function giving the classical plain vanilla call option price with log price x, variance y, and maturity T:

$$\begin{aligned} B(t,x,y)=e^{x}N(d_{+})-e^{-r(T-t)}KN(d_{-}) \end{aligned}$$

where N is the standard normal cumulative distribution function and

$$\begin{aligned} d_{+}&=\frac{x-\ln (K)+(r+y/2)(T-t)}{\sqrt{y(T-t)}},\\ d_{-}&=d_{+}-\sqrt{y(T-t)}. \end{aligned}$$
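The function B and the terms d± translate directly into code; a small self-contained sketch (our own helper, using only the standard library):

```python
import math

def bs_call(t, x, y, K, r, T):
    """B(t, x, y): Black-Scholes call price with log-price x and variance y."""
    tau = T - t
    root = math.sqrt(y * tau)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal cdf
    d_plus = (x - math.log(K) + (r + y / 2.0) * tau) / root
    d_minus = d_plus - root
    return math.exp(x) * N(d_plus) - math.exp(-r * tau) * K * N(d_minus)
```

For an at-the-money call with \(S=K=100\), \(r=5\%\), \(y=0.04\) and one year to maturity this returns roughly 10.45, and the price is increasing in the variance argument y, as expected.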

Recall that \(\mathcal {L}_{y}B(t,x,y)=0\) where \({{\mathcal {L}}}_y\) is the Black–Scholes operator

$$\begin{aligned} \mathcal {L}_{y}=-r+\partial _{t}+\left(r-k\lambda -\frac{y}{2}\right)\partial _{x}+\frac{y}{2}\partial _{x}^{2}. \end{aligned}$$

We begin by obtaining a generic decomposition formula which is instrumental throughout our discussion. It will be particularly useful in deriving the approximate versions of the decomposition formula as discussed in the “Appendix”.

Lemma 2

Let

$$\begin{aligned} \widehat{X}_{t}=X_0+\int _0^t \left(r-k\lambda -\frac{1}{2}{\overline{Y}}_{s}\right)ds+\int _0^t \sqrt{{\overline{Y}}_{s}}d\widetilde{W_{s}} \end{aligned}$$

be the continuous part of \(X_{t}\), and let the function

$$\begin{aligned} A\in C^{1,2,2}([0,T]\times \mathbb {R}\times [0,\infty )) \end{aligned}$$

satisfy

$$\begin{aligned} \partial _{y}A(t,x,y)=\frac{1}{2}(T-t)(\partial _{x}^{2}-\partial _{x}) A(t,x,y). \end{aligned}$$
(7)

Suppose that \(G_{t}\) is a continuous semi-martingale adapted to the complete natural filtration generated by \(W_{1,t}\) and \(W_{2,t}.\) Then, the following generic decomposition formula holds:

$$\begin{aligned} \mathbb {E}_{t}\left[ e^{-r(T-t)}A(T,\widehat{X}_{T},{\overline{V}}_{T})G_{T}\right]&=A(t,\widehat{X}_{t},{\overline{V}}_{t})G_{t}\\&\quad + \mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}A(s,\widehat{X}_{s},{\overline{V}}_{s})dG_{s}\right] \\&\quad + \frac{1}{8}\sum _{i=1}^2\mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}G_{s}\Gamma ^{2}A(s,\widehat{X}_{s},{\overline{V}}_{s})d[M_i,M_i]_{s}\right] \\&\quad + \frac{1}{2}\sum _{i=1}^2\rho _i \mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}G_{s}\sqrt{Y_{i,s}}\Lambda \Gamma A(s,\widehat{X}_{s},{\overline{V}}_{s})d[W_i,M_i]_{s}\right] \\&\quad + \sum _{i=1}^2\rho _i \mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}\sqrt{Y_{i,s}}\Lambda A(s,\widehat{X}_{s},{\overline{V}}_{s})d[W_i,G]_{s}\right] \\&\quad + \frac{1}{2}\sum _{i=1}^2\mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}\Gamma A(s,\widehat{X}_{s},{\overline{V}}_{s})d[M_i,G]_{s}\right] , \end{aligned}$$

where \(\Lambda =\partial _x\), \(\Gamma :=\partial ^2_{xx}-\partial _x.\)

Proof

Refer to Theorem 3.1 in Merino et al. (2018). \(\square\)

Remark 2

Note that in Lemma 2, A is a generic function. Moreover, condition (7), which is satisfied by the Black–Scholes function, is used only to simplify terms in the decomposition. The proof is based on the Itô formula; therefore, the methodology used in this paper is completely general. Properties of the Black–Scholes function and of any concrete stochastic volatility model can be useful to obtain some simplifications, but the ideas behind the decomposition formula are general and can be developed for any stochastic volatility model and any function.

Corollary 1

Assuming that \(A(t,x,y)=B(t,x,y)\) and \(G\equiv 1\) in Lemma 2, we have

$$\begin{aligned} P(t)&=B(t,\widehat{X}_{t},{\overline{V}}_{t})\\&\quad + \sum _{i=1}^2\frac{1}{8}\mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}\Gamma ^{2}B(s,\widehat{X}_{s},\overline{V}_{s})d[M_{i},M_{i}]_{s}\right] \quad \text {(I.i)}\\&\quad + \sum _{i=1}^2\frac{\rho _{i}}{2}\mathbb {E}_t\left[ \int _{t}^{T}e^{-r(s-t)}\sqrt{Y_{i,s}}\Lambda \Gamma B(s,\widehat{X}_{s},\overline{V}_{s})d[W_{i},M_{i}]_{s}\right] \quad \text {(II.i)} \end{aligned}$$

Remark 3

Though this formula can be written similarly to the one derived by Merino et al. (2018), it differs due to the two driving stochastic volatility terms

$$\begin{aligned} d[\widetilde{W},\overline{M}]_{t}&=\frac{1}{\sqrt{\overline{Y}_{t}}}\left( \rho _{1}\sqrt{Y_{1,t}}d[W_{1},M_{1}]_{t}+\rho _{2}\sqrt{Y_{2,t}}d[W_{2},M_{2}]_{t}\right) \\ d[\overline{M},\overline{M}]_{t}&=d[M_{1},M_{1}]_{t}+d[M_{2},M_{2}]_{t} \end{aligned}$$

Hence, our decomposition formula resolves into five terms instead of three.

In Merton (1976) and Merino et al. (2018), the treatment of a jump model is reduced to that of a continuous model by conditioning on the number of jumps. Assuming that we observe k jumps in the time period \([t,T]\), we have

$$\begin{aligned} X_{T}=\widehat{X}_{T}+\sum _{i=1}^{N_T}Z_{i} =X_t+\widehat{X}_{T}-\widehat{X}_{t}+L_{k} \end{aligned}$$

where \(L_k=\sum _{i=1}^{k} Z_{i}.\)

From now on we will write for simplicity \(D_s:=X_t+\widehat{X}_{s}-\widehat{X}_{t}\) for any \(s\ge t.\) Note that \(D_t=X_t.\) Define moreover

$$\begin{aligned} H_{k}(s,D_s, \overline{V}_{s})=\mathbb {E}_{L_k}\left[ B(s,D_s+L_k,\overline{V}_{s})\right] . \end{aligned}$$

Thus, it follows that

$$\begin{aligned} P(t)&=\mathbb {E}_{t}\left[ e^{-r(T-t)}B(T,X_{T},\overline{V}_{T})\right] \\&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\mathbb {E}_{t}\left[ e^{-r(T-t)}B\left( T,\widehat{X}_{T}+\sum _{i=1}^{N_{T}}Z_{i},\overline{V}_{T}\right) \Big | N_T-N_t=k\right] \\&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\mathbb {E}_{t}\left[ e^{-r(T-t)}\mathbb {E}_{L_k}[B(T,D_T+L_{k},\overline{V}_{T})]\right] \\&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\mathbb {E}_{t}\left[ e^{-r(T-t)}H_{k}(T,D_T,\overline{V}_{T})\right] \end{aligned}$$

where in general, for any positive \(\eta ,\)

$$\begin{aligned} p_{k}(\eta ):=e^{-\eta }\frac{\eta ^k}{k!}, \end{aligned}$$

and then,

$$\begin{aligned} p_{k}(\lambda (T-t))=e^{-\lambda (T-t)}\frac{\lambda ^k(T-t)^k}{k!} \end{aligned}$$

is the probability of observing k jumps in \([t,T]\).
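The weights \(p_k\) decay super-exponentially in k, which is what later justifies truncating the infinite sums over the number of jumps. A small illustrative check (the values of \(\lambda (T-t)\) and the truncation level are arbitrary):

```python
import math

def poisson_weights(eta, kmax):
    """p_k(eta) = e^{-eta} * eta^k / k! for k = 0, ..., kmax."""
    return [math.exp(-eta) * eta**k / math.factorial(k) for k in range(kmax + 1)]

w = poisson_weights(0.5, 20)        # lambda * (T - t) = 0.5, say
assert abs(sum(w) - 1.0) < 1e-12    # the truncated weights already sum to ~1
assert w[10] < 1e-9                 # the tail is negligible
```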

This enables us to deal with our problem in a continuous setting. Following that, we obtain the decomposition of the 2FSVJ model.

Applying Lemma 2 to \(A=H_k\) and \(G\equiv 1\) for each \(k\), we obtain the following corollary:

Corollary 2

The price of the plain vanilla European call option is given as

$$\begin{aligned} P(t)&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))H_{k}(t,X_t,\overline{V}_{t})\\&\quad + \frac{1}{8} \sum _{i=1}^2\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}\Gamma ^{2}H_{k}(s,D_s,\overline{V}_{s})d[M_i,M_i]_{s}\right] \\&\quad + \sum _{i=1}^2 \frac{\rho _i}{2}\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\mathbb {E}_{t}\left[ \int _{t}^{T}e^{-r(s-t)}\sqrt{Y_{i,s}}\Lambda \Gamma H_{k}(s,D_s,\overline{V}_{s})d[W_i,M_i]_{s}\right] \end{aligned}$$
(8)

4 Approximate formulae

In the study of decomposition formulas, it has been found that formulas like (8) are not easy to compute in their present form, but they allow us to build closed-form approximation formulas that are computationally tractable.

The idea is to freeze the integrands in formula (8), to compute the difference between the original and the frozen approximate formulas, and to decompose this error into a series of decreasing terms. Adding the terms of the error decomposition up to a certain order to the approximate formula allows us to obtain good approximations; see Gulisashvili et al. (2020).

Freezing the integrands of the formula in Corollary 2 gives

$$\begin{aligned} P(t)&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))H_{k}(t,X_t,\overline{V}_{t})\\&\quad + \sum _{i=1}^2\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\Gamma ^{2}H_{k}(t,X_t,\overline{V}_{t})\, \mathbb {E}_{t}\left[ \frac{1}{8}\int _{t}^{T}d[M_i,M_i]_{s}\right] \\&\quad + \sum _{i=1}^2 \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))\Lambda \Gamma H_{k}(t,X_t,\overline{V}_{t})\, \mathbb {E}_{t}\left[ \frac{\rho _i}{2}\int _{t}^{T}\sqrt{Y_{i,s}}d[W_i,M_i]_{s}\right] +\epsilon (T-t) \end{aligned}$$

where \(\epsilon (T-t)\) denotes an error term that has to be estimated.

From now on we will denote

$$\begin{aligned} R_{i,t}&=\frac{1}{8}\mathbb {E}_{t}\left[ \int _{t}^{T}d[M_i,M_i]_{s}\right] ,\\ U_{i,t}&=\frac{\rho _{i}}{2}\mathbb {E}_{t}\left[ \int _{t}^{T}\sqrt{Y_{i,s}}d[W_i,M_i]_{s}\right] \end{aligned}$$

and

$$\begin{aligned} Q_{i,t}=\rho _{i}\mathbb {E}_{t}\left[ \int _{t}^{T}\sqrt{Y_{i,s}}\, d[W_i,U_i]_{s} \right] . \end{aligned}$$

Using this notation, the first naive version of the approximate formula is given by

$$\begin{aligned} P(t)&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))H_{k}(t,X_t,\overline{V}_{t})\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(R_{1,t}+R_{2,t})\Gamma ^{2}H_{k}(t,X_t,\overline{V}_{t})\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(U_{1,t}+U_{2,t})\Lambda \Gamma H_{k}(t,X_t,\overline{V}_{t})+\epsilon (T-t) \end{aligned}$$

Before giving precise approximate formulas, we recall two lemmas:

Lemma 3

(Alòs 2012) For any \(n\ge 0\) and \(0\le t\le T,\) there exists a constant C(n) such that

$$\begin{aligned} \left| \Lambda ^{n}\Gamma B(t,x,y)\right| \le \frac{C(n)}{(\sqrt{y(T-t)})^{n+1}}. \end{aligned}$$

Lemma 4

(Alòs et al. 2015) The following relations hold:

  1. $$\begin{aligned} \psi _{i}(t)\le \frac{1}{\kappa _i}. \end{aligned}$$
  2. $$\begin{aligned} \int _{t}^{T}\mathbb {E}_{t}\left[ Y_{i,s}\right] ds\ge Y_{i,t}\psi _{i}(t). \end{aligned}$$
  3. $$\begin{aligned} \int _{t}^{T}\mathbb {E}_{t}\left[ Y_{i,s}\right] ds\ge \frac{\theta _{i}\kappa _{i}}{2}\psi ^{2}_i(t). \end{aligned}$$
  4. $$\begin{aligned} R_{i,t}=\frac{\nu _{i}^{2}}{8}\int _{t}^{T}\mathbb {E}_t \left[ Y_{i,u}\right] \psi _{i}^{2}(u)du. \end{aligned}$$
  5. $$\begin{aligned} U_{i,t}=\frac{\rho _{i}\nu _{i}}{2}\int _{t}^{T}\psi _{i}(u)\mathbb {E}_t \left[ Y_{i,u}\right] du. \end{aligned}$$
  6. $$\begin{aligned} Q_{i,t}=\frac{\rho _{i}^2\nu _{i}^2}{2}\int _{t}^{T}\mathbb {E}_t \left[ Y_{i,u}\right] \left( \int _{u}^{T}e^{-\kappa _{i}(z-u)}\psi _{i}(z)dz\right) du. \end{aligned}$$
  7. $$\begin{aligned} dR_{i,t}=\frac{\nu _{i}^{3}}{8}\left( \int _{t}^{T}e^{-\kappa _{i}(z-t)}\psi _{i}^{2}(z)dz\right) \sqrt{Y_{i,t}}dW_{i,t}-\frac{\nu _{i}^2}{8}\psi _{i}^{2}(t)Y_{i,t}dt \end{aligned}$$
  8. $$\begin{aligned} dU_{i,t}=\frac{\rho _i\nu _{i}^{2}}{2}\left( \int _{t}^{T}e^{-\kappa _{i}(z-t)}\psi _{i}(z)dz\right) \sqrt{Y_{i,t}}dW_{i,t}-\frac{\rho _i\nu _{i}}{2}\psi _{i}(t)Y_{i,t}dt \end{aligned}$$
  9. $$\begin{aligned} dQ_{i,t}&=\frac{\rho _i^2\nu _{i}^{3}}{2}\left( \int _{t}^{T}e^{-\kappa _{i}(u-t)}\left( \int _{u}^{T}e^{-\kappa _{i}(z-u)}\psi _{i}(z)dz\right) du\right) \sqrt{Y_{i,t}}dW_{i,t}\\&\quad -\frac{\rho _{i}^2\nu _{i}^2}{2}\left( \int _{t}^{T}e^{-\kappa _{i}(z-t)}\psi _{i}(z)dz\right) Y_{i,t}dt \end{aligned}$$
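Items 4–6 reduce \(R_{i,t}\), \(U_{i,t}\) and \(Q_{i,t}\) to deterministic integrals of \(\mathbb {E}_t[Y_{i,u}]\), which can be evaluated by simple quadrature. A sketch (our helper, with purely illustrative parameters; assuming NumPy):

```python
import numpy as np

def R_U_Q(t, T, Y_t, kappa, theta, nu, rho, n=2001):
    """R_{i,t}, U_{i,t}, Q_{i,t} from items 4-6 of Lemma 4, by trapezoidal quadrature."""
    u = np.linspace(t, T, n)
    EY = theta + (Y_t - theta) * np.exp(-kappa * (u - t))   # E_t[Y_{i,u}]
    psi = (1.0 - np.exp(-kappa * (T - u))) / kappa
    R = nu**2 / 8.0 * np.trapz(EY * psi**2, u)
    U = rho * nu / 2.0 * np.trapz(EY * psi, u)
    # inner integral of item 6: int_u^T e^{-kappa (z - u)} psi(z) dz
    inner = np.array([np.trapz(np.exp(-kappa * (u[j:] - u[j])) * psi[j:], u[j:])
                      for j in range(n)])
    Q = rho**2 * nu**2 / 2.0 * np.trapz(EY * inner, u)
    return R, U, Q

R, U, Q = R_U_Q(0.0, 1.0, 0.09, 1.5, 0.04, 0.3, -0.7)
assert R > 0 and Q > 0 and U < 0   # U carries the sign of rho
```

Note that R and Q are always nonnegative, while U inherits the sign of the correlation, which is what produces the skew correction in the approximate formulae below.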

Following Gulisashvili et al. (2020), we derive higher-order approximations by applying the generic decomposition formula in Lemma 2 for appropriate choices of \(A(t,X_t,V_t)\) and \(G_t\). Under this approach, it is necessary to evaluate the respective error bounds.

Proposition 1

We have the following approximate formula:

$$\begin{aligned} P(t)&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))H_{k}(t,X_{t},\overline{V}_t)\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(R_{1,t}+R_{2,t})\Gamma ^{2}H_{k}(t,X_{t},\overline{V}_t)\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(U_{1,t}+U_{2,t})\Lambda \Gamma H_{k}(t,X_{t},\overline{V}_{t})\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(U_{1,t}+U_{2,t})^2\Lambda ^2\Gamma ^2 H_{k}(t,X_{t},\overline{V}_{t})\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(Q_{1,t}+Q_{2,t})\Lambda ^2\Gamma H_{k}(t,X_{t},\overline{V}_{t})\\&\quad + \epsilon (T-t) \end{aligned}$$

where

$$\begin{aligned} |\epsilon (T-t)|\le \left( \frac{1}{r}\wedge (T-t)\right) C(\theta _1,\theta _2,\kappa _1,\kappa _2)\nu ^3 \end{aligned}$$

with \(C(\theta _1,\theta _2,\kappa _1,\kappa _2)\) a constant that depends only on the parameters \(\theta _i\) and \(\kappa _i\), and \(\nu =\max \{\nu _1,\nu _2\}.\)

Proof

See the “Appendix”. \(\square\)

Remark 4

Note that this approximate option price is the Black–Scholes price plus appropriate correction terms. It is worth mentioning that this formula provides significant generality within the framework of the 2FSVJ model. Furthermore, it encompasses and extends the formulas presented in the cited references, namely Heston (1993), Bates (1996a), Christoffersen et al. (2009), and Merino et al. (2018), as well as some of the results obtained in Gulisashvili et al. (2020), which can be considered specific instances of our more comprehensive formula.

While the above approximate formula is a second-order one, a first-order version can be obtained, as given in the following corollary.

Corollary 3

We have the following approximate formula:

$$\begin{aligned} P(t)&=\sum _{k=0}^{\infty }p_{k}(\lambda (T-t))H_{k}(t,X_{t},\overline{V}_t)\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(R_{1,t}+R_{2,t})\Gamma ^{2}H_{k}(t,X_{t},\overline{V}_t)\\&\quad + \sum _{k=0}^{\infty }p_{k}(\lambda (T-t))(U_{1,t}+U_{2,t})\Lambda \Gamma H_{k}(t,X_{t},\overline{V}_{t})\\&\quad + \epsilon (T-t) \end{aligned}$$

where

$$\begin{aligned} |\epsilon (T-t)|&\le \left( \frac{1}{r}\wedge (T-t)\right) C(\theta _1, \theta _2, \kappa _1, \kappa _2)\\&\quad \times \sum _{i=1}^2\left\{ \sum _{j=1}^2\left[ \nu _{i}^2\nu _{j}^2+ \nu _{i}^2\nu _{j}|\rho _{j}|\right] +|\rho _{i}|\nu _{i}^3+ \nu _{i}^4+\sum _{j=1}^2\left[ |\rho _{i}|\nu _{i}\nu _{j}^2+ |\rho _i||\rho _{j}|\nu _{i}\nu _{j}\right] +|\rho _{i}|^2\nu _{i}^2+ |\rho _{i}|\nu _{i}^3\right\} \end{aligned}$$

with \(C(\theta _1, \theta _2, \kappa _1, \kappa _2)\) a constant that depends only on \(\theta _1, \theta _2, \kappa _1, \kappa _2.\)

Proof

See the “Appendix”. \(\square\)
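To make the structure of the first-order formula concrete, the following sketch evaluates it in the special case of a single volatility factor and no jumps (\(\lambda =0\), so only the \(k=0\) term survives and \(H_0=B\)). The Greeks \(\Gamma ^2 B\) and \(\Lambda \Gamma B\) are obtained by central finite differences in x; all numerical choices (step size, quadrature grid, parameter values) are ours and purely illustrative:

```python
import math
import numpy as np

def bs_call(x, y, K, r, tau):
    """Black-Scholes call with log-price x and variance y."""
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    d1 = (x - math.log(K) + (r + y / 2.0) * tau) / math.sqrt(y * tau)
    return math.exp(x) * N(d1) - math.exp(-r * tau) * K * N(d1 - math.sqrt(y * tau))

def first_order_price(S0, K, r, tau, Y0, kappa, theta, nu, rho, h=1e-2):
    """First-order approximation, one factor, no jumps:
       P ~ B + R * Gamma^2 B + U * Lambda Gamma B."""
    u = np.linspace(0.0, tau, 20001)
    EY = theta + (Y0 - theta) * np.exp(-kappa * u)        # E[Y_u]
    psi = (1.0 - np.exp(-kappa * (tau - u))) / kappa
    R = nu**2 / 8.0 * np.trapz(EY * psi**2, u)            # Lemma 4, item 4
    U = rho * nu / 2.0 * np.trapz(EY * psi, u)            # Lemma 4, item 5
    V0 = np.trapz(EY, u) / tau                            # expected average variance
    x0 = math.log(S0)
    f = [bs_call(x0 + j * h, V0, K, r, tau) for j in (-2, -1, 0, 1, 2)]
    d1 = (f[3] - f[1]) / (2 * h)                          # f'
    d2 = (f[3] - 2 * f[2] + f[1]) / h**2                  # f''
    d3 = (f[4] - 2 * f[3] + 2 * f[1] - f[0]) / (2 * h**3) # f'''
    d4 = (f[4] - 4 * f[3] + 6 * f[2] - 4 * f[1] + f[0]) / h**4  # f''''
    # Gamma^2 = d4 - 2 d3 + d2, Lambda Gamma = d3 - d2
    return f[2] + R * (d4 - 2 * d3 + d2) + U * (d3 - d2)
```

With \(S_0=K=100\), \(r=5\%\), \(\tau =1\), \(Y_0=\theta =0.04\), \(\kappa =1.5\), \(\nu =0.3\), \(\rho =-0.7\), this yields a price near the flat-volatility Black-Scholes value of about 10.45, shifted slightly by the convexity and skew corrections; setting \(\nu =0\) recovers the plain Black-Scholes price exactly.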

Remark 5

  1. Expanding the scope of the approximate option pricing formula to other types of options, such as barrier or American options, presents great potential. However, it is important to note that the decomposition results are derived from the Black–Scholes formula, which is specifically applicable to European options. Therefore, extending these decomposition formulae to additional option types requires extensive investigation and comprehensive studies to establish a robust framework. Such explorations have the potential to open up new avenues for research and provide valuable insights into the pricing and analysis of a broader range of option types.

  2. Incorporating real-data examples would not only enhance the credibility of the research but also offer valuable contributions to the field. Nevertheless, numerous challenges make it difficult to obtain real market data examples for the application of option pricing formulas such as our decomposition formula. These challenges include limited availability, market complexity, and potential deviations from model assumptions, such as risk-neutrality. The lack of real-data examples presents an opportunity for future research to explore the practical application and performance of the formula using real market data.

5 Numerical computations

Though our focus is on a class of Heston–Kou type models with two factors, the model is general enough to cover other jump structures studied in the literature. Henceforth we assume that the jumps are given by the compound Poisson process

$$\begin{aligned} J_t = \sum _{i=1}^{N_{t}}\left( e^{Z_{i}}-1\right) \end{aligned}$$

where the \(Z_i\) are double exponential random variables with density

$$\begin{aligned} f(u)=p\eta _1e^{-\eta _1 u}{1\!\!1}_{\{u\ge 0\}}+q\eta _2e^{-\eta _2 |u|}{1\!\!1}_{\{u< 0\}} \end{aligned}$$

where \(\eta _1>1\), \(\eta _2>0\), and \(p,q\in (0,1)\) with \(p+q=1\). If k jumps are recorded, the density of their sum is the k-fold convolution

$$\begin{aligned} f^{*(k)}(u)&=e^{-\eta _{1} u} \sum _{j=1}^{k} P_{k, j} \eta _{1}^{j} \frac{u^{j-1}}{(j-1)!} {1\!\!1}_{\{u\ge 0\}} \\&\quad +e^{\eta _{2} u} \sum _{j=1}^{k} Q_{k, j} \eta _{2}^{j} \frac{(-u)^{j-1}}{(j-1)!} {1\!\!1}_{\{u<0\}} \end{aligned}$$

where

$$\begin{aligned} P_{k, j} =\sum _{i=j}^{k-1}\left( \begin{array}{c} k-j-1 \\ i-j \end{array}\right) \left( \begin{array}{c} k \\ i \end{array}\right) \left( \frac{\eta _{1}}{\eta _{1}+\eta _{2}}\right) ^{i-j}\left( \frac{\eta _{2}}{\eta _{1}+\eta _{2}}\right) ^{k-i} p^{i} q^{k-i} \end{aligned}$$

for all \(1 \le j \le k-1\), and

$$\begin{aligned} Q_{k, j} =\sum _{i=j}^{k-1}\left( \begin{array}{c} k-j-1 \\ i-j \end{array}\right) \left( \begin{array}{c} k \\ i \end{array}\right) \left( \frac{\eta _{1}}{\eta _{1}+\eta _{2}}\right) ^{k-i}\left( \frac{\eta _{2}}{\eta _{1}+\eta _{2}}\right) ^{i-j} p^{k-i} q^{i} \end{aligned}$$

for all \(1 \le j \le k-1\) with \(P_{k, k} = p^k\) and \(Q_{k, k} = q^k\). See Kou (2002) and Gulisashvili and Vives (2012).
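The coefficients \(P_{k,j}\) and \(Q_{k,j}\) are straightforward to tabulate. A sketch (our own helper; the normalization check relies on the fact that \(f^{*(k)}\) integrates to one, which forces \(\sum _j (P_{k,j}+Q_{k,j})=1\)):

```python
from math import comb

def pq_coeffs(k, p, q, eta1, eta2):
    """P_{k,j}, Q_{k,j} of the k-fold double exponential convolution (Kou 2002)."""
    r1, r2 = eta1 / (eta1 + eta2), eta2 / (eta1 + eta2)
    P, Q = [0.0] * (k + 1), [0.0] * (k + 1)
    for j in range(1, k):
        P[j] = sum(comb(k - j - 1, i - j) * comb(k, i) * r1**(i - j) * r2**(k - i)
                   * p**i * q**(k - i) for i in range(j, k))
        Q[j] = sum(comb(k - j - 1, i - j) * comb(k, i) * r1**(k - i) * r2**(i - j)
                   * p**(k - i) * q**i for i in range(j, k))
    P[k], Q[k] = p**k, q**k
    return P, Q

P, Q = pq_coeffs(3, 0.4, 0.6, 10.0, 5.0)
assert abs(sum(P) + sum(Q) - 1.0) < 1e-12   # f^{*(3)} integrates to one
```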

Consequently,

$$\begin{aligned} H_{k}(t,D_t,\overline{V}_{t})&=\mathbb {E}_{L_k}\left[ B(t,D_t+L_{k},\overline{V}_{t})\right] \\&=\int _{-\infty }^{\infty }B(t,D_t+u,\overline{V}_{t})f^{*(k)}(u)du\\&=\int _{-\infty }^{\infty } B(t,D_t+u,\overline{V}_{t})\left( \sum _{j=1}^{k}P_{k,j}\frac{\eta _{1}^j u^{j-1}}{(j-1)!} e^{-\eta _{1}u} {1\!\!1}_{\{u\ge 0\}} + \sum _{j=1}^{k}Q_{k,j}\frac{\eta _{2}^j (-u)^{j-1}}{(j-1)!} e^{\eta _{2}u}{1\!\!1}_{\{u<0\}}\right) du. \end{aligned}$$

We then want to compute

$$\begin{aligned} \sum _{k=1}^{\infty }p_{k}(\lambda (T-t)) H_k(t, D_t,\overline{V}_{t}). \end{aligned}$$

which is equal to

$$\begin{aligned} \int _{-\infty }^{\infty } B(t,D_t+u,\overline{V}_t)K(u)du \end{aligned}$$
(9)

where

$$\begin{aligned} K(u)=\sum _{j=1}^{\infty }\frac{1}{(j-1)!}\left( \eta _1^j\alpha _j u^{j-1}e^{-\eta _1 u}{1\!\!1}_{\{u\ge 0\}}+\eta _2^j \beta _j (-u)^{j-1}e^{\eta _2 u}{1\!\!1}_{\{u<0\}}\right) \end{aligned}$$

with

$$\begin{aligned} \alpha _j=\sum _{k=j}^{\infty } P_{k,j} p_k(\lambda (T-t)) \end{aligned}$$

and

$$\begin{aligned} \beta _j=\sum _{k=j}^{\infty } Q_{k,j} p_k(\lambda (T-t)). \end{aligned}$$

To compute the integral (9) we truncate it at \(\pm 30.5\), and we truncate the jump sums at a total of 150 jumps. The approximation converges well since the neglected terms decay to zero very fast.
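A useful consistency check on the kernel K is that its total mass must equal \(\sum _{k\ge 1}p_k=1-p_0\), since each \(f^{*(k)}\) integrates to one. The sketch below (our code; the truncation level and parameter values are illustrative) verifies this using the \(\alpha _j,\beta _j\) defined above:

```python
import math
from math import comb

def pq_coeffs(k, p, q, eta1, eta2):
    """P_{k,j}, Q_{k,j} of the k-fold double exponential convolution (Kou 2002)."""
    r1, r2 = eta1 / (eta1 + eta2), eta2 / (eta1 + eta2)
    P, Q = [0.0] * (k + 1), [0.0] * (k + 1)
    for j in range(1, k):
        P[j] = sum(comb(k - j - 1, i - j) * comb(k, i) * r1**(i - j) * r2**(k - i)
                   * p**i * q**(k - i) for i in range(j, k))
        Q[j] = sum(comb(k - j - 1, i - j) * comb(k, i) * r1**(k - i) * r2**(i - j)
                   * p**(k - i) * q**i for i in range(j, k))
    P[k], Q[k] = p**k, q**k
    return P, Q

def kernel_mass(lam_tau, p, q, eta1, eta2, kmax=60):
    """sum_j (alpha_j + beta_j) = integral of K(u) over the real line."""
    alpha = [0.0] * (kmax + 1)
    beta = [0.0] * (kmax + 1)
    for k in range(1, kmax + 1):
        pk = math.exp(-lam_tau) * lam_tau**k / math.factorial(k)
        P, Q = pq_coeffs(k, p, q, eta1, eta2)
        for j in range(1, k + 1):
            alpha[j] += P[j] * pk
            beta[j] += Q[j] * pk
    return sum(alpha) + sum(beta)

m = kernel_mass(0.5, 0.4, 0.6, 10.0, 5.0)
assert abs(m - (1.0 - math.exp(-0.5))) < 1e-9   # total mass equals 1 - p_0
```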

Besides double exponential jumps, we also consider the case where the \((Z_i)\) are i.i.d. normal random variables with mean \(\mu _J\) and standard deviation \(\sigma _J\). In this case, see Merino et al. (2018),

$$\begin{aligned} H_{k}(t,D_t,\overline{V}_{t})=B\left (t, D_t,\overline{V}_{t}+k\frac{\sigma _J^2}{(T-t)}\right ) \end{aligned}$$

where the modified risk-free rate \(r^* = r-\lambda (e^{\mu _J+\frac{\sigma _J^2}{2}}-1)+k\frac{\mu _J+\frac{\sigma _J^2}{2}}{(T-t)}\) is used.
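For log-normal jumps the mixture can thus be evaluated directly, since each term is a plain Black-Scholes price with shifted variance and rate. A sketch of this Merton-style computation (our code; the truncation level and parameters are illustrative, and `avg_var` stands in for \({\overline{V}}_t\)):

```python
import math

def lognormal_jump_price(S0, K, r, tau, avg_var, lam, mu_J, sig_J, kmax=60):
    """Poisson mixture of Black-Scholes prices: H_k = B(t, x, V + k sig_J^2 / tau)
       evaluated with the modified rate r*."""
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    def bs(y, rr):
        d1 = (math.log(S0 / K) + (rr + y / 2.0) * tau) / math.sqrt(y * tau)
        return S0 * N(d1) - math.exp(-rr * tau) * K * N(d1 - math.sqrt(y * tau))
    k_bar = math.exp(mu_J + sig_J**2 / 2.0) - 1.0   # expected relative jump size
    price = 0.0
    for k in range(kmax + 1):
        p_k = math.exp(-lam * tau) * (lam * tau)**k / math.factorial(k)
        r_star = r - lam * k_bar + k * (mu_J + sig_J**2 / 2.0) / tau
        price += p_k * bs(avg_var + k * sig_J**2 / tau, r_star)
    return price

p_nojump = lognormal_jump_price(100.0, 100.0, 0.05, 1.0, 0.04, 0.0, -0.1, 0.15)
p_jump = lognormal_jump_price(100.0, 100.0, 0.05, 1.0, 0.04, 0.5, -0.1, 0.15)
assert 10.40 < p_nojump < 10.50     # reduces to plain Black-Scholes when lambda = 0
assert p_jump > p_nojump            # at the money, jump risk raises the price
```

Setting \(\lambda =0\) recovers the plain Black-Scholes price, which provides a simple sanity check of the implementation.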

The parameters used in our computations are obtained from Pacati et al. (2018) who consider a similar model with log-normal jumps. Unless otherwise stated, the parameters used are given in Table 1.

Table 1 Model parameters

Comparing the first-order and second-order decomposition methods to the Fourier integral method based on Gil-Pelaez (1951), we find that the decomposition methods perform very well relative to the Fourier integral method under both log-normal and double exponential jumps. See Figs. 1, 2, 3 and 4. Note that the error is so small that the three option price plots for the Fourier integral (green), the first-order decomposition (blue), and the second-order decomposition (orange) cannot be distinguished by the naked eye. The first-order approximation performs well under out-of-the-money conditions. Moreover, we analyze the impact of time to maturity on the performance of the method in Figs. 5 and 6. Finally, in Figs. 7 and 8 we show the impact of the vol-of-vol on the pricing error for different strike prices and different jump regimes. Generally, our method behaves well for short-dated options. In addition, we find that the method is faster and more accurate for log-normal jumps than for double exponential jumps.

Fig. 1

Pricing error against strike price under double exponential jumps

Fig. 2

Option pricing error against strike price under log-normal jumps

Fig. 3

Pricing error against underlying price under double exponential jumps

Fig. 4

Option pricing error against underlying price under log-normal jumps

Fig. 5

Second order pricing error against strike price for various maturities under log-normal jumps

Fig. 6

Second order pricing error against strike price for various maturities under double exponential jumps

Fig. 7

Pricing error against vol-of-vol \(\nu _1\) for \(S_0=100\) under double exponential jumps

Fig. 8

Pricing error against vol-of-vol \(\nu _1\) for \(S_0=100\) under log-normal jumps

Additionally, to investigate the computational performance of our method, we computed option prices for five different strikes and measured the average time taken. This experiment was repeated 1000 times, and the results in Table 2 show that the decomposition is at least 20% faster than the Fourier integral method under log-normal jumps.

Table 2 Computational speed comparison in seconds

6 Conclusion

This paper investigates the valuation of European options under an enhanced model for the underlying asset prices. We consider a two-factor stochastic volatility jump (2FSVJ) model, which combines two stochastic volatility factors with jumps in the price process. We obtain a decomposition formula for the option price, as well as first- and second-order approximate formulae, via Itô calculus techniques. Moreover, several numerical computations and illustrations are carried out, and they suggest that our method offers computational gains under both double exponential and log-normal jumps. The results of this paper generalize the existing work in the literature on the decomposition formula and its applications. As in the other cases cited in the introduction, the approximate pricing formula given here is fast to compute and accurate enough.