
Valuation of asset and volatility derivatives using decoupled time-changed Lévy processes


Abstract

In this paper we propose a general derivative pricing framework that employs decoupled time-changed (DTC) Lévy processes to model the underlying assets of contingent claims. A DTC Lévy process is a generalized time-changed Lévy process whose continuous and pure jump parts are allowed to follow separate random time scalings; we devise the martingale structure for a DTC Lévy-driven asset and revisit many popular models which fall under this framework. Postulating different time changes for the underlying Lévy decomposition allows the introduction of asset price models consistent with the assumption of a correlated pair of continuous and jump market activity rates; we study one illustrative DTC model of this kind based on the so-called Wishart process. The theory we develop is applied to the problem of pricing not only claims that depend on the price or the volatility of an underlying asset, but also more sophisticated derivatives whose payoffs rely on the joint performance of these two financial variables, such as the target volatility option. We solve the pricing problem through a Fourier-inversion method. Numerical analyses validating our techniques are provided. In particular, we present some evidence that correlating the activity rates could be beneficial for modeling the volatility skew dynamics.


Notes

  1. Two processes \(X_t\) and \(Y_t\) are said to be orthogonal if \(\langle X, Y \rangle _t=0\) for all \(t \ge 0\).

  2. Jacod (1979) uses \(T_t\)-adapted, and \(T_t\)-synchronized is sometimes found; however, \({T_t}\)-continuous is also common in the literature, and in our view less ambiguous.

  3. In general, time changes of Markov processes are not Markovian; by using Dambis, Dubins and Schwarz’s theorem (Karatzas and Shreve 2000, theorem 4.6) one can manufacture a large class of counterexamples by starting from any continuous martingale that is not a Markov process.

  4. Torricelli (2013) independently found \(\varPhi _{t_0}\) for the Heston model by augmenting the SDE system (56)–(57) with the equation \( d I_t = v_t dt\), and solved the associated Fourier-transformed parabolic equation via the usual Feynman-Kac argument. As has to be the case, the two approaches coincide.

  5. See e.g. Lewis (2000), chapter 2, for the Laplace transform of the cited models.

  6. \((\beta (\tau )^T \varSigma _1 \beta (\tau ))^k:=\beta (\tau )^T \varSigma _1^k \beta (\tau ), \; k=1, \ldots ,n.\)

  7. The process \(X_{T_t}\) is a particular instance of an Itô semimartingale: see Jacod and Protter (2011).

References

  • Ané, T., & Geman, H. (2000). Order flow, transaction clock, and normality of asset returns. The Journal of Finance, 55, 2259–2284.

  • Bates, D. S. (1996). Jumps and stochastic volatility: Exchange rate processes implicit in Deutsche mark options. Review of Financial Studies, 9, 69–107.

  • Bergomi, L. (2005). Smile dynamics I & II. Risk, 94.

  • Barndorff-Nielsen, O. E. (1997). Processes of normal inverse Gaussian type. Finance and Stochastics, 2, 41–68.

  • Bru, M. F. (1991). Wishart processes. Journal of Theoretical Probability, 4, 725–743.

  • Carr, P., & Madan, D. B. (1999). Option valuation using the fast Fourier transform. Journal of Computational Finance, 2, 61–73.

  • Carr, P., & Sun, J. (2007). A new approach for option pricing under stochastic volatility. Review of Derivatives Research, 10, 87–150.

  • Carr, P., Geman, H., Madan, D. B., & Yor, M. (2002). The fine structure of asset returns: An empirical investigation. Journal of Business, 75, 305–332.

  • Carr, P., & Wu, L. (2004). Time-changed Lévy processes and option pricing. Journal of Financial Economics, 71, 113–141.

  • Clark, P. K. (1973). A subordinated stochastic process model with finite variance for speculative prices. Econometrica, 41, 135–155.

  • Cont, R., & Tankov, P. (2003). Financial modelling with jump processes. London: Chapman and Hall/CRC Press.

  • da Fonseca, J., & Grasselli, M. (2011). Riding on the smiles. Quantitative Finance, 11(11), 1609–1632.

  • da Fonseca, J., Grasselli, M., & Tebaldi, C. (2008). A multifactor Heston volatility model. Quantitative Finance, 8, 591–604.

  • da Fonseca, J., Grasselli, M., & Tebaldi, C. (2007). Option pricing when correlations are stochastic: An analytical model. Review of Derivatives Research, 2, 151–180.

  • Di Graziano, G., & Torricelli, L. (2012). Target volatility option pricing. International Journal of Theoretical and Applied Finance, 15(1).

  • Duffie, D., Pan, J. E., & Singleton, K. (2000). Transform analysis and asset pricing for affine jump diffusions. Econometrica, 68, 1343–1376.

  • Dufresne, D. (2001). The integrated square-root process. University of Montreal Research Paper, p. 90.

  • Eberlein, E., Papapantoleon, A., & Shiryaev, A. N. (2009). Esscher transforms and the duality principle for multidimensional semimartingales. The Annals of Applied Probability, 19, 1944–1971.

  • Fang, H. (2000). Option pricing implications of a stochastic jump rate. University of Virginia working paper.

  • Filipović, D. (2001). A general characterization of one factor affine term structure models. Finance and Stochastics, 5, 389–412.

  • Gallant, A. R., Hsu, C. T., & Tauchen, G. (2013). Using daily range data to calibrate volatility diffusions and extract the forward integrated variance. The Review of Economics and Statistics, 81, 617–631.

  • Grasselli, M., & Tebaldi, C. (2007). Solvable affine term structure models. Mathematical Finance, 18, 135–153.

  • Gouriéroux, C. (2003). Continuous time Wishart process for stochastic risk. Econometric Reviews, 25, 177–217.

  • Gouriéroux, C., & Sufana, R. (2003). Wishart quadratic term structure models. CREF 03-10, HEC Montreal.

  • Gouriéroux, C., & Sufana, R. (2010). Derivative pricing with Wishart multivariate stochastic volatility. Journal of Business and Economic Statistics, 3, 438–451.

  • Huang, J., & Wu, L. (2004). Specification analysis of option pricing models based on time-changed Lévy processes. The Journal of Finance, 59, 1405–1440.

  • Heston, S. L. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 6, 327–343.

  • Jacod, J. (1979). Calcul stochastique et problèmes de martingales. Lecture Notes in Mathematics. Berlin: Springer.

  • Jacod, J., & Protter, P. (2011). Discretization of processes. Stochastic Modelling and Applied Probability. Berlin: Springer.

  • Jacod, J., & Shiryaev, A. N. (1987). Limit theorems for stochastic processes. Grundlehren der Mathematischen Wissenschaften (Vol. 288). Berlin: Springer.

  • Kallsen, J., & Shiryaev, A. N. (2002). Time change representation of stochastic integrals. Theory of Probability and Its Applications, 46, 522–528.

  • Karatzas, I., & Shreve, S. E. (2000). Brownian motion and stochastic calculus. Berlin: Springer.

  • Kou, S. G. (2002). A jump-diffusion model for option pricing. Management Science, 48, 1086–1101.

  • Lewis, A. (2000). Option valuation under stochastic volatility. New York: Finance Press.

  • Lewis, A. (2001). A simple option formula for general jump-diffusion and other exponential Lévy processes. OptionCity.net Publications.

  • Merton, R. C. (1976). Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 3, 125–144.

  • Monroe, I. (1978). Processes that can be embedded in Brownian motion. The Annals of Probability, 6, 42–56.

  • Sin, C. A. (1998). Complications with stochastic volatility models. Advances in Applied Probability, 30, 256–268.

  • Torricelli, L. (2013). Pricing joint claims on an asset and its realized variance in stochastic volatility models. International Journal of Theoretical and Applied Finance, 16(1).

  • Zheng, W., & Kwok, Y. K. (2014). Closed form pricing formulas for discretely sampled generalized variance swaps. Mathematical Finance, 24(4), 855–881.


Acknowledgments

The author would like to thank Martino Grasselli, Giuseppe Di Graziano, William T. Shaw, Yue Kuen Kwok, Andrea Macrina and two anonymous referees for their helpful comments.

Author information

Correspondence to Lorenzo Torricelli.

Appendix: Proofs

We begin by recalling some basic definitions from semimartingale representation theory; in particular, we refer to Jacod and Shiryaev (1987), chapters 2 and 3, and Jacod (1979), chapitre X.

We define the Doléans-Dade exponential of an n-dimensional semimartingale \(X_t\) starting at 0 as:

$$\begin{aligned} {\mathcal {E}}(X_t)=e^{X_t- \langle X^{c} \rangle _t/2} \prod _{s \le t}(1+ \varDelta X_s)e^{- \varDelta X_s} \end{aligned}$$
(91)

where \(X^{c}_t\) denotes the continuous part of \(X_t\) and the infinite product converges uniformly. This is known to be the solution of the SDE \(d Y_t=Y_{t^-}dX_t\), \(Y_0=1\).
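
For instance, if \(X_t\) is continuous the product in (91) is empty and \({\mathcal {E}}(X_t)=e^{X_t- \langle X \rangle _t/2}\); taking \(X_t=\sigma B_t\) for a standard Brownian motion \(B_t\) recovers the geometric Brownian motion martingale \(e^{\sigma B_t - \sigma ^2 t /2}\), which indeed solves \(dY_t=Y_t \sigma \, dB_t\), \(Y_0=1\).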

Let \(\epsilon (x)\) be a truncation function and \((\alpha _t, \beta _t, \rho (dt \times dx))\) be a triplet of predictable processes that are well-behaved in the sense of Jacod and Shiryaev (1987), chapter 2, equations (2.12)–(2.14). For \(\theta \in {\mathbb {C}}^n\), associate with \((\alpha _t, \beta _t, \rho (dt \times dx))\) the following complex-valued functional:

$$\begin{aligned} \varPsi _t(\theta )= i \theta ^T \alpha _t - \theta ^T \beta _t \theta /2+ \int _0^t\int _{{\mathbb {R}}^n} (e^{i \theta ^T x}-1-i \theta ^T x \epsilon (x) )\rho (ds \times dx). \end{aligned}$$
(92)

This functional is well-defined on:

$$\begin{aligned} {\mathcal {D}}= \left\{ \theta \in {\mathbb {C}}^n \text { such that } \int _0^t \int _{{\mathbb {R}}^n}e^{i \theta ^T x} \epsilon (x) \rho (ds \times dx) < + \infty \text { almost surely} \right\} \quad \end{aligned}$$
(93)

and because of the assumptions made it is also predictable and of finite variation.

Let \(X_t\) be an n-dimensional semimartingale. The local characteristics of \(X_t\) are the unique predictable processes \((\alpha _t, \beta _t, \nu (dt \times dx))\) as above, such that \({\mathcal {E}}(\varPsi _t(\theta )) \ne 0\) and \(\exp (i \theta ^T X_t)/{\mathcal {E}}(\varPsi _t(\theta ))\) is a local martingale for all \(\theta \in {\mathcal {D}}\). The process \(\varPsi ^X_t(\theta )\) in (92) arising from the local characteristics of \(X_t\) is called the cumulant process of \(X_t\), and it is independent of the choice of \(\epsilon (x)\). It is clear that the local characteristics of a Lévy process \(X_t\) of Lévy triplet \((\mu , \varSigma , \nu )\) are \((\mu t, \varSigma t, \nu dt )\).
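
For instance, for a Lévy process \(X_t\) with triplet \((\mu , \varSigma , \nu )\), the functional (92) reduces to \(\varPsi _t(\theta )=t \, \psi _X(\theta )\), where \(\psi _X\) is the Lévy exponent

$$\begin{aligned} \psi _X(\theta )= i \theta ^T \mu - \theta ^T \varSigma \theta /2+ \int _{{\mathbb {R}}^n} (e^{i \theta ^T x}-1-i \theta ^T x \epsilon (x) )\nu (dx), \end{aligned}$$

and the statement above specializes to the classical fact that \(\exp (i \theta ^T X_t - t \, \psi _X(\theta ))\) is a martingale for every real \(\theta \).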

If \({\mathcal {B}}\) is a Borel space, the time change of a random measure \(\rho ( d t \times dx)\) on the product measure space \(\varOmega \times {\mathcal {B}}({\mathbb {R}}_+ \times {\mathbb {R}}^n)\) according to some time change \(T_t\), is the random measure:

$$\begin{aligned} \rho ( d T_t \times dx)(\omega , [0,t) \times B)=\rho ( dt \times dx) (\omega , [0, T_t(\omega )) \times B) \end{aligned}$$
(94)

for \(\omega \in \varOmega \), \(t \ge 0\) and all sets \(B \in {\mathcal {B}}({\mathbb {R}}^n)\). A random measure \(\rho (dt \times dx)\) is \(T_t\)-adapted if \(\rho (dt \times dx)((T_{t^-}, T_t], \omega , B)=0\) for all \(t\), \(\omega \) and B. This is equivalent to saying that for each measurable random function W, the integral of W with respect to \(\rho \) is \(T_t\)-continuous (see Jacod 1979, chapitre X); conversely, if \(X_t\) is a pure jump process that is \(T_t\)-continuous, then its associated jump measure \(\rho (dt \times dx)\) is \(T_t\)-adapted (Kallsen and Shiryaev 2002, proof of lemma 2.7).

A semimartingale \(X_t\) is said to be quasi-left-continuous if its local characteristic \(\nu \) is such that \(\nu ( dt \times dx )(\omega , \{t \} \times B)=0\) for all \(t \ge 0\), Borel sets B in \({\mathbb {R}}^n\), and \(\omega \in \varOmega \). Loosely speaking, quasi-left-continuity means that the jumps of the process cannot occur at fixed times.

The following theorem clarifies the importance of continuity/adaptedness under time changing, i.e. that stochastic integration and integration with respect to a random measure “commute” with the time changing operation.

Theorem A

Let \(T_t\) be a time change with respect to some filtration \({\mathcal {F}}_t\).

  1. (i)

    Let \(X_t\) be a \(T_t\)-continuous semimartingale. For all \({\mathcal {F}}_t\)-predictable integrands \(H_t\), we have that \(H_{T_t}\) is \({\mathcal {F}}_{T_t}\)-predictable, and:

    $$\begin{aligned} \int _{0}^{T_t}H_{s}dX_s=\int _{0}^{t} H_{T_{s^-}}dX_{T_s}; \end{aligned}$$
    (95)
  2. (ii)

    Let \(\rho (dt \times dx)\) be a \(T_t\)-adapted random measure on \(\varOmega \times {\mathcal {B}}({\mathbb {R}}_+ \times {\mathbb {R}}^n)\). For all measurable random functions \(W(t, \omega , x)\) and all \(\omega \in \varOmega \) we have:

    $$\begin{aligned} \int _{0}^{T_t} \int _{{\mathbb {R}}^n}W(s, \omega ,x) \rho (ds \times dx)(\omega )=\int _{0}^{t} \int _{{\mathbb {R}}^n} W(T_{s^-}(\omega ), \omega , x) \rho (dT_{s} \times dx)(\omega ). \end{aligned}$$
    (96)

Proof

See Jacod (1979), théorème 10.19, (a), for part (i), and théorème 10.27, (a), for part (ii). \(\square \)

In particular, it follows from part (ii) of theorem A that if \(X_t\) is a pure jump process with associated jump measure \(\rho (dt \times dx)\) adapted to some time change \(T_t\), then the time-changed process \(X_{T_t}\) has associated jump measure \(\rho (dT_t \times dx)\).

It is essentially a consequence of theorem A that under the assumption of continuity with respect to \(T_t\), the local characteristics of a time-changed semimartingale are well-behaved, in the sense of the next theorem.

Theorem B

Let \(X_t\) be a semimartingale having local characteristics \((\alpha _t, \beta _t, \rho (dx \times dt))\) and cumulant process \(\varPsi ^X_t(\theta )\) with domain \({\mathcal {D}}\), and let \(T_t\) be a time change such that \(X_t\) is \(T_t\)-continuous. Then the time-changed semimartingale \(Y_t=X_{T_t}\) has local characteristics \((\alpha _{T_t}, \beta _{T_t}, \rho ( d T_t \times dx))\) and the cumulant process \(\varPsi _t^Y(\theta )\) equals \(\varPsi ^X_{T_t}(\theta )\), for all \(\theta \in {\mathcal {D}}\).

Proof

See Kallsen and Shiryaev (2002), lemma 2.7. \(\square \)
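
As an illustration, let \(X_t\) be a standard Brownian motion, with local characteristics \((0, t, 0)\), and let \(T_t=\int _0^t v_s \, ds\) for a positive, adapted activity rate \(v_t\). Since \(T_t\) is continuous, \(X_t\) is trivially \(T_t\)-continuous, and theorem B gives that \(X_{T_t}\) has local characteristics \((0, \int _0^t v_s \, ds, 0)\), i.e. \(X_{T_t}\) is a stochastic volatility diffusion; this is the standard route through which models of the Heston (1993) type fit into the time-changed Lévy framework (Carr and Wu 2004).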

Proof of proposition 1

Let \((\mu , \varSigma ,0)\) and \((0,0, \nu )\) be the Lévy triplets of \(X^1_t\) and \(X^2_t\). Because of the \(T^1_t\)- and \(T_t^2\)-continuity assumptions, we can apply theorem B and immediately see that the local characteristics of \(X^1_{T^1_t}\) and \(X^2_{T^2_t}\) are respectively \((T^1_t \mu , T^1_t \varSigma , 0 )\) and \((0,0, dT^2_t \nu )\). By a result on linear transformations of semimartingales, these two sets of local characteristics are additive [in Eberlein et al. 2009, proposition 2.4, take U to be the juxtaposition of two \(n \times n\) identity blocks and \(H=(X^1_{T^1_t}\) \(X^2_{T^2_t})^T\) ], so that \(X_{T_t}\) has local characteristics (see footnote 7) \((T^1_t \mu , T^1_t \varSigma , dT^2_t \nu ).\)

Let \(\varPsi _t(\theta )\) be the cumulant process of \(X_{T_t}\); by definition the exponential \({\mathcal {E}}(\varPsi _t(\theta ))\) is well-defined if and only if \(\theta \in \varTheta \). But now the fact that \(T^1_t\) and \(T_t^2\) are continuous implies that \(X_{T_t}\) is quasi-left-continuous (Jacod and Shiryaev 1987, chapter 2, proposition 2.9), which in turn is sufficient for \(\varPsi _t(\theta )\) to be continuous (Jacod and Shiryaev 1987, chapter 3, theorem 7.4). Therefore, since \(\varPsi _t\) is of finite variation, we have that \({\mathcal {E}}(\varPsi _t(\theta ))=\exp (\varPsi _t(\theta ))\); in particular, this means that \({\mathcal {E}}(\varPsi _t(\theta ))\) never vanishes. By definition of the local characteristics, we then have that \(M_t(\theta , X_t, T_t)\) is a local martingale for all \(\theta \in \varTheta \), and thus it is a martingale if and only if \(\theta \in \varTheta _0\). \(\square \)

Proof of proposition 2

An immediate consequence of theorem B is that, under the present assumptions, the classes of continuous and of pure jump martingales are closed under time changes, so that orthogonality follows. Therefore:

$$\begin{aligned} \langle X_{T,U} \rangle _t= \langle X^c_{T} \rangle _t + \langle X^d_{U} \rangle _t. \end{aligned}$$
(97)

The equation \(\langle X^c_T \rangle _t=\varSigma T_t=\langle X^c \rangle _{T_t}\) can be established by applying the Dambis, Dubins and Schwarz theorem. Regarding the discontinuous part, we notice that if \(\rho ( d t \times dx)\) is the jump measure associated with \(X^d_t\), then \(\rho \) is \(U_t\)-adapted because \(X_t^d\) is \(U_t\)-continuous. Hence, the application of theorem A, part (ii), yields:

$$\begin{aligned} \langle X^d \rangle _{U_t}=\sum _{s < U_t}(\varDelta X_s)^2 = \int _0^{U_t} x^2 \rho (ds \times dx)= \int _0^{t} x^2 \rho (dU_s \times dx)=\langle X^d_U \rangle _t.\quad \quad \end{aligned}$$
(98)

\(\square \)

Counterexample to proposition 2

Let \(X_t^c\) be a standard Brownian motion, and let \(T_t\) be an inverse Gaussian subordinator with parameters \(\alpha >0\) and 1, independent of \(X_t^c\). The process \(X_{T_t}^c\) is a normal inverse Gaussian process of parameters \((\alpha , 0, 0, 1)\) and is a pure jump process (Barndorff-Nielsen 1997). Therefore by letting \(X^d_t=X_{T_t}^c\) and \(U_t=t\) we have \(X^c_{T_t}=X_{U_t}^d\) so that orthogonality does not hold; moreover \(\langle X_{T,U}\rangle _t=2 \langle X^d \rangle _t\) while the left hand side of (16) equals \(\langle T \rangle _t + \langle X^d \rangle _t\). \(\square \)

Proof of proposition 3

Since \(T_t\) and \(U_t\) are of finite variation, the total realized variance of an asset as in (15) satisfies \(TV_t= -\theta ^2_0 \langle X_{T,U}\rangle _t\), so that by proposition 2 we have:

$$\begin{aligned} TV_t= -\theta ^2_0( \sigma ^2{T_t} + \langle X^d \rangle _{U_t}). \end{aligned}$$
(99)

The application of proposition 1 to \(C_t + D_t\) guarantees that the process in (18) is a martingale for all \(z,w \in {\mathbb {C}}\) such that \((iz\theta _0,i w \theta _0) \in \varTheta _0\). By using relation (99) and performing the change of measure entailed by (18) we have:

$$\begin{aligned} \varPhi _{t_0} (z,w)= & {} {\mathbb {E}}_{t_0}[ \exp (iz\log ( \tilde{S}_t/S_{t_0} ) + i w \, (TV_t- TV_{t_0} ))] \\= & {} {\mathbb {E}}_{t_0} [ \exp (iz (i \theta _0 (\varDelta X^c_{T_t} + \varDelta X^d_{U_t}) - \varDelta T_t \psi ^c_X(\theta _0) - \varDelta U_t \psi ^d_X(\theta _0))\\&- i w \theta _0^2 ( \sigma ^2 { \varDelta T_t} + \varDelta \langle X^d \rangle _{U_t}) )] \\= & {} {\mathbb {E}}_{t_0} [ \exp ( i(iz \theta _0, iw \theta _0) \cdot ( \varDelta C_{T_t} + \varDelta D_{U_t}) - \varDelta T_t (iz\psi ^c_X(\theta _0)+i w \theta _0^2 \sigma ^2)\\&- \varDelta U_t i z \psi ^d_X(\theta _0) )] \\= & {} {\mathbb {E}}^ {{\mathbb {Q}}}_{t_0}[ \exp ( -\varDelta T_t ( \theta _0 \mu (z - i z) - \theta ^2_0 \sigma ^2 ( z^2 + i z - 2 i w )/2) \\&- \varDelta U_t(i z \psi _X^d(\theta _0)- \psi _D(iz\theta _0, iw \theta _0)) )]. \end{aligned}$$

To fully characterize \(\varPhi _{t_0}\), all that is left is to express \(\psi _D\) in terms of \(\nu \). Since

$$\begin{aligned} \psi _D(z,w)=\log {\mathbb {E}}\left[ \exp \left( \sum _{s<t}i z \varDelta X^d_s + i w (\varDelta X_s^d)^2 \right) \right] , \end{aligned}$$
(100)

we have that:

$$\begin{aligned} \psi _D(z,w)= \int _{{\mathbb {R}}}(e^{iz x + iw x^2}-1-i(z x+ w x^2) \mathbbm {1}_{|x| \le 1})\nu (dx) \end{aligned}$$
(101)

which completes the proof. \(\square \)
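
The transform (101) can be evaluated numerically for any given Lévy measure. The following is a minimal sketch by direct quadrature, assuming real arguments and a finite-activity Gaussian jump density in the spirit of Merton (1976); the names psi_D and nu_merton, the parameter values and the integration limits are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

def psi_D(z, w, nu, lim=50.0):
    # Joint jump cumulant of eq. (101) by quadrature, with truncation
    # function epsilon(x) = x * 1_{|x| <= 1}; real and imaginary parts
    # are integrated separately.
    def integrand(x):
        trunc = 1.0 if abs(x) <= 1.0 else 0.0
        return (np.exp(1j * z * x + 1j * w * x ** 2) - 1.0
                - 1j * (z * x + w * x ** 2) * trunc) * nu(x)
    re = quad(lambda x: integrand(x).real, -lim, lim, points=[-1.0, 0.0, 1.0], limit=500)[0]
    im = quad(lambda x: integrand(x).imag, -lim, lim, points=[-1.0, 0.0, 1.0], limit=500)[0]
    return re + 1j * im

# Illustrative finite-activity Gaussian jump density (Merton-type), intensity lam:
lam, mu_J, sig_J = 0.3, -0.1, 0.15
nu_merton = lambda x: lam * np.exp(-(x - mu_J) ** 2 / (2 * sig_J ** 2)) / (sig_J * np.sqrt(2 * np.pi))

print(psi_D(0.5, 0.2, nu_merton))
```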

Proof of proposition 4

We follow the proof by Lewis (2001), theorem 3.2, lemma 3.3 and theorem 3.4. By writing the expectation as an inverse-Fourier integral (which can be done by the assumptions on F and because \(\varPhi _{t_0}\) is a characteristic function) and passing the expectation under the integration sign we have:

$$\begin{aligned}&{\mathbb {E}}_{t_0} [e^{-r(t-t_0)}F(Y_t, \langle Y \rangle _t)]\nonumber \\&\quad \! ={\mathbb {E}}_{t_0} \left[ \frac{e^{-r(t-t_0)} }{4 \pi ^2} \int _{i k_1-\infty }^{i k_1 +\infty } \int _{i k_2-\infty }^{i k_2+\infty } S_t^{-i z} e^{- i w \langle Y \rangle _t} \hat{F}(z,w)dz dw \right] \nonumber \\&\quad \! =\frac{ e^{-r(t-t_0)} }{4 \pi ^2} \int _{i k_1-\infty }^{i k_1 +\infty } \int _{i k_2-\infty }^{i k_2+\infty } e^{-i w \langle Y \rangle _{t_0}} S_{t_0}^{-i z} e^{-r(t-t_0)iz} \varPhi _{t_0}(-z, -w) \hat{F}(z,w)dz dw. \end{aligned}$$
(102)

All that remains to be proven is that Fubini’s theorem application is justified. Let \(N_t=\log M_t(\theta _0, X_t, (T_t, U_t))\) be the discounted, normalized log-price; define the probability transition densities \(p_t(x,y)={\mathbb {P}}(N_t< x, \langle N \rangle _t <y| \; t_0, N_{t_0}, \langle N \rangle _{t_0}) \mathbbm {1}_{\{x \in {\mathbb {R}}, y \ge \langle N \rangle _{t_0}\}}\), and let \(\hat{p}_t(z,w)\) be their characteristic functions. For all \((z,w) \in L_{k_1,k_2}\) we have:

$$\begin{aligned} \int _{i k_1-\infty }^{i k_1 +\infty } \int _{i k_2-\infty }^{i k_2+\infty }&\Big |e^{-i w \langle Y \rangle _{t_0}} S_{t_0}^{-i z} e^{-r(t-t_0)i z} \varPhi _{t_0}(-z, -w)\Big | \hat{F}(z,w)dz dw \nonumber \\ =&\int _{i k_1-\infty }^{i k_1 +\infty } \int _{i k_2-\infty }^{i k_2+\infty } \hat{p}_t(-z,-w) \hat{F}(z,w)dz dw \nonumber \\ =&\int _{{\mathbb {R}}^2} \hat{p}_t(-z+i k_1,-w+i k_2) \hat{F}(z+i k_1,w+ i k_2)dz dw. \end{aligned}$$
(103)

For \(x \in {\mathbb {R}}\), \(y \ge 0\), set \(f(x,y)=e^{ -k_1 x - k_2 y }F(x,y)\) and \(g(x,y)=e^{k_1 x + k_2 y}p_t(x,y)\). We see that the integrand on the right-hand side of (103) equals \( \hat{g}^*(z,w)\hat{f}(z,w)\). But now f is \( L^1(dx \times dy)\) because F is Fourier-integrable in \(\varSigma _F\) (for \((z,w) \in \varSigma _F\) take \(\text {Re}(z)=\text {Re}(w)=0\)); similarly, \(\hat{g}^*\) is \(L^1(dz \times dw)\) because of the \(L^1\) assumption on \(\varPhi _{t_0}\). Therefore, the application of Parseval’s formula yields:

$$\begin{aligned}&\int _{-\infty }^{ +\infty } \int _{-\infty }^{+\infty } \hat{p}_t(-z +i k_1,-w+i k_2) \hat{F}(z+ik_1,w+ik_2)dz dw \nonumber \\&\quad \!= 4 \pi ^2 \int _{-\infty }^{ +\infty } \int _{-\infty }^{+\infty } p_t(x,y)F(x,y)dx dy \!=\! 4 \pi ^2 {\mathbb {E}}_{t_0}[ F(N_t, \langle N \rangle _t)] < +\infty , \end{aligned}$$
(104)

since \(F \in L^1_{t_0}(N_t, \langle N \rangle _t)\).

This proof straightforwardly adapts to forward starting payoffs, because

$$\begin{aligned} V_{t_0, t^*}&= \frac{e^{-r(T-t_0)}}{4 \pi ^2} {\mathbb {E}}_{t_0}\left[ \int _{-\infty +i k_1}^{\infty +i k_1} \int _{-\infty +i k_2}^{\infty +i k_2} e^{-i z r(T-t_0)} e^{-iz(\tilde{Y}_T- \tilde{Y}_{t^*}) - i w(\langle Y \rangle _T-\langle Y \rangle _{t^*})} \hat{F}(z,w)\, dz\, dw \right] \nonumber \\&= \frac{e^{-r(T-t_0)}}{4 \pi ^2} \int _{-\infty +i k_1}^{\infty +i k_1} \int _{-\infty +i k_2}^{\infty +i k_2} e^{-i z r(T-t_0)} \varPhi _{t_0,t^*} (-z, -w) \hat{F}(z,w)\, dz\, dw . \end{aligned}$$
(105)

Since the integrability conditions of the transition probability functions are not altered when changing to their forward-starting counterparts, the application of Fubini’s theorem can be justified as above. \(\square \)
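
As a rough numerical illustration of the inversion formula (102), the double contour integral can be discretized directly once the joint characteristic function and the payoff transform are available; in the sketch below the callables phi and F_hat are hypothetical placeholders for \(\varPhi _{t_0}\) and \(\hat{F}\), S0 and QV0 stand for \(S_{t_0}\) and \(\langle Y \rangle _{t_0}\), and the truncation L and grid size N would have to be tuned in practice.

```python
import numpy as np

def price_joint_claim(phi, F_hat, S0, QV0, r, tau, k1, k2, L=40.0, N=512):
    # Direct discretization of the double inversion integral (102) along the
    # contours Im z = k1 and Im w = k2; phi and F_hat must accept complex arrays.
    u = np.linspace(-L, L, N)
    du = u[1] - u[0]
    Z = u[:, None] + 1j * k1
    W = u[None, :] + 1j * k2
    integrand = (np.exp(-1j * W * QV0) * S0 ** (-1j * Z)
                 * np.exp(-1j * Z * r * tau) * phi(-Z, -W) * F_hat(Z, W))
    # The exact integral is real; any residual imaginary part is discretization error.
    return np.exp(-r * tau) / (4 * np.pi ** 2) * (integrand.sum() * du * du).real
```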

Proofs of the equations in Sect. 7

We can endow \(Y_t\) with a correlation structure as follows. Let \(Z_t\) be a two-dimensional matrix Brownian motion independent of \(W_t\). The matrix process:

$$\begin{aligned} B_t= \left( \begin{matrix} \rho W^1_t + \sqrt{ 1 - \rho ^2} Z^{1,1}_t &{} Z_t^{1,2}\\ \rho W^2_t + \sqrt{ 1 - \rho ^2} Z^{2,1}_t &{} Z_t^{2,2} \end{matrix} \right) \end{aligned}$$
(106)

is also a matrix Brownian motion enjoying the property that \(\langle W^j, B^{j,1} \rangle _t = \rho t\) and \( W_t\) is independent of \(B_t^{j,2}\) for \(j=1,2\). Since \(\varSigma _t^{i,i}=(\sigma _t^{i,i})^2+(\sigma _t^{1,2})^2\), we have that \(X_t^c\) is indeed a Brownian motion and the activity rates are connected through the element \(\sigma _t^{1,2}\).
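
The stated covariation follows directly from the independence of \(W_t\) and \(Z_t\): for \(j=1,2\),

$$\begin{aligned} \langle W^j, B^{j,1} \rangle _t = \rho \, \langle W^j, W^j \rangle _t + \sqrt{1-\rho ^2} \, \langle W^j, Z^{j,1} \rangle _t = \rho t. \end{aligned}$$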

To verify Eqs. (80) and (81), observe that for \(j=1,2\) there exist some bounded variation processes \(F^j_t\) such that

$$\begin{aligned} d \varSigma _t^{j,j}= F_t^j dt + 2 \sigma _t^{1,j}(Q^{1,j}d B_t^{1,1} + Q^{2,j} d B_t^{1,2})+ 2 \sigma _t^{j,2}(Q^{1,j}d B_t^{2,1} + Q^{2,j} d B_t^{2,2}), \end{aligned}$$
(107)

from which:

$$\begin{aligned} d w^j_t:= & {} \frac{ d \varSigma _t^{j,j} -F^j_t dt}{ 2 \sqrt{ \varSigma _t^{j,j}((Q^{1,j})^2+ (Q^{2,j})^2) }}\nonumber \\= & {} \frac{ \sigma _t^{1,j} (Q^{1,j}d B_t^{1,1} +Q^{2,j}d B_t^{1,2})+ \sigma _t^{j,2} (Q^{1,j}d B_t^{2,1} +Q^{2,j}d B_t^{2,2}) }{ \sqrt{\varSigma _t^{j,j}((Q^{1,j})^2+ (Q^{2,j})^2) }}.\quad \quad \quad \quad \end{aligned}$$
(108)

By taking the quadratic variation of the right-hand side we see that \(w^j_t\) are two Brownian motions such that \(d\varSigma _t^{j,j}= F^j_t dt+ 2 \sqrt{\varSigma _t^{j,j}((Q^{1,j})^2+ (Q^{2,j})^2) } dw^j_t\); Eqs. (80) and (81) then follow from a direct computation.

Since \(X_t^d\) is orthogonal to every entry of the matrix Brownian motion \(B_t\), the change in the dynamics of \(\varSigma _t\) under \({\mathbb {Q}}(z,w)\) is only due to the correlation between \(X^c_t\) and \(B_t\). Hence, for \((z,w) \in \varTheta \), the Radon-Nikodym derivative \(M_t\) to be considered in (18) reduces to

$$\begin{aligned} M_t={\mathcal {E}} \left( iz \int _0^t \sqrt{ \varSigma _s^{1,1} } d X^c_s \right) . \end{aligned}$$
(109)

Furthermore, for \(j=1,2\) we have:

$$\begin{aligned} d \left\langle \int _0^{\cdot } \sqrt{ \varSigma _s^{1,1}} d X_s^c, B^{j,1} \right\rangle _t&= \rho \sigma ^{1,j}_t dt \end{aligned}$$
(110)
$$\begin{aligned} d \left\langle \int _0^{\cdot } \sqrt{ \varSigma _s^{1,1}}d X_s^c, B^{j,2} \right\rangle _t&= 0 \end{aligned}$$
(111)

so that application of Girsanov’s theorem tells us that

$$\begin{aligned} d \tilde{B}_t = d B_t - iz \rho \left( \begin{matrix} \displaystyle { \sigma ^{1,1}_t dt }&{} 0 \\ \displaystyle { \sigma ^{1,2}_t dt } &{} 0 \end{matrix} \right) \end{aligned}$$
(112)

is a \({\mathbb {Q}}(z,w)\)-matrix Brownian motion. Solving the above for \(B_t\) and substituting in (77) yields (82). Equation (84) then follows from (19).

Finally, we give the formula for \({\mathcal {L}}_{T_t, U_t}(\cdot )\). For \(\tau >0\) and \(n>1\) consider the transform:

$$\begin{aligned} \phi _\varSigma (z)={\mathbb {E}}\left[ \exp \left( -\int _0^\tau \sum _{j=1}^n z_j \varSigma _s^{j,j}ds \right) \right] \end{aligned}$$
(113)

for every vector of complex numbers \(z=(z_1, \ldots , z_n)\) such that the above expectation is finite. The function \(\phi _\varSigma (z)\) is exponentially-affine of the form

$$\begin{aligned} \phi _\varSigma (z)=\exp (-a(\tau )-Tr(A(\tau ) \varSigma _0)), \end{aligned}$$
(114)

since it is a particular case of the transforms studied in e.g. Grasselli and Tebaldi (2007), and Gouriéroux (2003). The ODEs for \(A(\tau ), a(\tau )\) are given by:

$$\begin{aligned}&\displaystyle {A(\tau )'= A(\tau )M+M^T A(\tau ) - 2 A(\tau ) Q^T Q A(\tau ) +D, } A(0)=0 \end{aligned}$$
(115)
$$\begin{aligned}&\displaystyle {a(\tau )'= Tr(c Q^T Q A(\tau ) ),} a(0)=0. \end{aligned}$$
(116)

Here D is the diagonal matrix having the values \(z_1, \ldots , z_n\) on the diagonal. The solution of (115)–(116) is obtainable through a linearization procedure that entails doubling the dimension of the problem, which yields:

$$\begin{aligned}&A(\tau )=(A^{2,2}(\tau ))^{-1}A^{2,1}(\tau ) \end{aligned}$$
(117)
$$\begin{aligned}&a(\tau )= \frac{c}{2} Tr(\log \left( A^{2,2}(\tau )\right) +M^T \tau ) \end{aligned}$$
(118)
$$\begin{aligned}&\left( \begin{matrix} A^{1,1}(\tau ) &{} A^{1,2}(\tau ) \\ A^{2,1}(\tau ) &{} A^{2,2}(\tau ) \ \end{matrix}\right) = \exp \left( \tau \left( \begin{matrix} M &{} 2 Q^T Q \\ D &{} -M^T \end{matrix}\right) \right) \end{aligned}$$
(119)

(see for example Gouriéroux 2003, proposition 7, or Grasselli and Tebaldi 2007, section 3.4.2). The formula for \({\mathcal {L}}_{\varDelta T, \varDelta U }\) follows from (117)–(119) when we choose \(n=2\), \((z_1, z_2)=(z,w)\) in (113), and set \(\tau =t-t_0\), \( \varSigma _0=\varSigma _{t_0}\) in (114). \(\square \)
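
In practice the linearization (117)–(119) amounts to a single matrix exponential. The following is a minimal numerical sketch; the function name wishart_laplace is illustrative, and the arguments M, Q, c, Sigma0 stand for the Wishart parameters appearing in (114)–(116).

```python
import numpy as np
from scipy.linalg import expm, logm

def wishart_laplace(z, tau, Sigma0, M, Q, c):
    # Evaluate phi_Sigma(z) in (113)-(114) through the linearization (117)-(119).
    n = M.shape[0]
    D = np.diag(np.asarray(z, dtype=complex))
    block = np.block([[M, 2.0 * Q.T @ Q],
                      [D, -M.T]])                   # matrix appearing in (119)
    E = expm(tau * block)
    A21, A22 = E[n:, :n], E[n:, n:]
    A = np.linalg.solve(A22, A21)                   # (117): A = (A^{2,2})^{-1} A^{2,1}
    a = 0.5 * c * np.trace(logm(A22) + M.T * tau)   # (118)
    return np.exp(-a - np.trace(A @ Sigma0))        # (114)
```

The Laplace transform \({\mathcal {L}}_{\varDelta T, \varDelta U}\) of the proof is then obtained by calling this routine with \(n=2\), z = (z, w), \(\tau =t-t_0\) and \(\varSigma _0=\varSigma _{t_0}\).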


Cite this article

Torricelli, L. Valuation of asset and volatility derivatives using decoupled time-changed Lévy processes. Rev Deriv Res 19, 1–39 (2016). https://doi.org/10.1007/s11147-015-9113-8
