Pricing American Options in an Infinite Activity Lévy Market: Monte Carlo and Deterministic Approaches Using a Diffusion Approximation

  • Lisa J. Powers
  • Johanna Nešlehová
  • David A. Stephens
Conference paper. DOI: 10.1007/978-3-642-25746-9_9

In: Carmona R., Del Moral P., Hu P., Oudjane N. (eds.) Numerical Methods in Finance, Springer Proceedings in Mathematics, vol. 12. Springer, Berlin, Heidelberg (2012).

Abstract

Computational methods for pricing exotic options when the underlying is driven by a Lévy process are prone to numerical inaccuracy when the driving price process has infinite activity. Such inaccuracies are particularly severe for the pricing of American options. In this chapter, we examine the impact of utilizing a diffusion approximation to the contribution of the small jumps in the infinite activity process. We compare the use of deterministic and stochastic (Monte Carlo) methods, and focus on designing strategies tailored to the specific difficulties of pricing American options. We demonstrate that although the implementation of Monte Carlo pricing methods for common Lévy models is reasonably straightforward and yields estimators with acceptably small bias, deterministic methods for exact pricing are equally successful and can be implemented with rather lower computational overhead. Although the generality of Monte Carlo pricing methods may still be an attraction, it seems that for models commonly used in the literature, deterministic numerical approaches are competitive alternatives.

Keywords

CGMY process · Finite element method · Galerkin method · Lévy process · Monte Carlo · Least squares · Option pricing

1 Introduction

Monte Carlo methods have long been used for the pricing of options and other derivatives, but with the increasing complexity of models of market microstructure, and the multidimensional nature of many pricing settings, the need for analytical and numerical study of well-established procedures is widely felt. One of the chief modelling developments introduced since the late 1990s is the use of Lévy process models for underlying asset dynamics [267]. Pure-jump Lévy processes are frequently used to model the underlying price dynamics of assets in the context of option pricing. Such processes are able to capture perceived market microstructure in a way that processes with continuous (Brownian) paths are not. General Lévy processes that contain continuous and pure-jump components can also be useful, but in most cases it is the pure-jump component that is most worthy of mathematical, numerical and statistical study. In this chapter, we consider the pricing of American options when the underlying process is a pure-jump Lévy process, and address two specific issues: the first relating to the practical implications of the approximation of infinite activity Lévy processes, and the second concerning the advantages and disadvantages of different numerical approaches to option pricing for these processes.

1.1 The Approximation of Infinite Activity Lévy Processes

When a pure-jump Lévy process has infinite activity, that is, almost surely has an infinite number of jumps in any finite time interval, numerical computation of options prices may require the truncation of the small jumps of the process. This is the case for simulation schemes as well as some finite difference schemes. To approximate the small jumps that have been removed, a small Brownian motion component is typically added. Provided that the standard deviation of the small jumps of the Lévy process converges more slowly to zero than the level of truncation, the approximated Lévy process will converge weakly to the true Lévy process (see, for example, [25], and the discussion in Sect. 2.1); this guarantees that approximated European options prices will converge to true prices. However, for American options near the free exercise boundary, since there may be no smooth pasting, we are using a smooth approximation to a non-smooth quantity, and it is questionable whether appropriate convergence will be obtained.

In this chapter, we investigate numerically the approximated options prices and examine how the convergence behaves near and away from the free exercise boundary. We also examine how to choose the truncation level close to, and far from, the exercise boundary to achieve similar accuracy in the options prices. To do this, we compute options prices in two ways. First, we will simulate the underlying price process and price the American options using Monte Carlo; this necessarily requires the approximation of small jumps in the forward simulation of the price process. Secondly, we will use deterministic finite element methods to compute exact American options prices; this method requires no truncation of small jumps. We will compute the approximated prices and true (ε = 0) prices with the finite element method to determine the relative error induced by truncation. Finally, we will calculate the options prices at a chosen truncation level both via simulation and deterministically and compare the results. These tests will be performed for the Variance Gamma process and the CGMY process.

The chapter will be organized as follows: Sect. 2 gives an introduction to Lévy processes, results relevant to options pricing, and an overview of the small jump regularization method. Section 3 describes the Monte Carlo and finite element methods used to compute the options prices. Numerical results are presented in Sect. 4, and the truncation error discussed in Sect. 5.

1.2 American Options

We briefly recap the key definitions. American options are contracts which can be exercised at any time up to maturity, for a given payoff. Options of this type can be formulated as an optimal stopping problem. Let St denote the stock price dynamics under the (or, more generally, an) equivalent martingale measure \(\mathbb{Q}\) and g denote the payoff function. Then the value of the American option is f(t, s), where
$$f\left (t,s\right ) =\sup_{t\leq \tau \leq T}{\mathbb{E}}_{\mathbb{Q}}\left [{e}^{-r\left (\tau -t\right )}g\left ({S}_{ \tau }\right )\big{\vert}{S}_{t} = s\right ].$$
(1)
Here \(\mathbb{Q}\) is fixed such that \({e}^{-rt}{S}_{t}\) is a \(\mathbb{Q}\)-martingale. The supremum above is taken over all stopping times τ with respect to the filtration generated by \({\left \{{S}_{t}\right \}}_{0\leq t\leq T}\).

American options are characterized by a continuation region and a stopping region: in the continuation region, the value of an American option is greater than the payoff, so it is more valuable to hold the option; in the stopping region, the value of the American option equals the payoff. As soon as the price process S enters the stopping region, one should exercise the option, receiving the payoff together with the time value over the remaining time until maturity. A priori, the boundary between the exercise region and the continuation region is not known. In the continuation region, the price of the American option solves the pricing partial integro-differential equation (PIDE). In the stopping region, the price is equal to the payoff.

2 Lévy Process Models for Price Processes

2.1 Lévy Processes

Lévy processes comprise a generalized class of stochastic processes that includes both Poisson processes and Brownian motion as special cases. In this section, we briefly review the key definitions and theorems relating to this class.

Definition 2.1.

\(\left (\mathrm{L\acute{e}vy\ process}\right )\) An adapted process \(X ={ \left ({X}_{t}\right )}_{t\geq 0}\) on the filtered probability space \(\left (\Omega,\mathcal{F}, \mathbb{F},P\right )\) with X0 = 0 a.s. is a Lévy process if
  1. X has independent increments: for \(0 \leq s < t < \infty \), \({X}_{t} - {X}_{s}\) is independent of \({\mathcal{F}}_{s}\).
  2. X has stationary increments: for \(0 \leq s < t < \infty \), \({X}_{t} - {X}_{s} \sim {X}_{t-s}\).
  3. X is continuous in probability: \(\forall \epsilon > 0\), \(\lim_{s\rightarrow t}P\left (\left \vert {X}_{t} - {X}_{s}\right \vert > \epsilon \right ) = 0\).
Remark: For any Lévy process X, there exists a unique càdlàg modification, which is also a Lévy process. We will always consider this càdlàg modification.

Let \({X}_{1} =\sum_{j=1}^{n}\left ({X}_{j/n} - {X}_{(j-1)/n}\right )\). By the definition of a Lévy process, this representation shows that X1 can be expressed as the sum of n independent and identically distributed random variables, and hence follows an infinitely divisible distribution. The characteristic function of Xt can be expressed (for \(u \in \mathbb{R}\), t ≥ 0) as follows:
$${\varphi }_{{X}_{t}}\left (u\right ) = \mathbb{E}\left [{e}^{iu{X}_{t} }\right ] ={ \left (\mathbb{E}\left [{e}^{iu{X}_{1} }\right ]\right )}^{t}.$$

Theorem 2.1 (Lévy Khinchin). 

For any infinitely divisible distribution on \(\mathbb{R}\), there exists a unique Lévy process X such that X1 follows that distribution. For any \(u \in \mathbb{R}\) and t ≥ 0 we have that
$${\varphi }_{{X}_{t}}\left (u\right ) = \mathbb{E}\left [{e}^{iu{X}_{t} }\right ] = {e}^{t\psi \left (u\right )},$$
where \(\psi \left (u\right )\) is called the characteristic exponent and is given by
$$\psi \left (u\right ) = -\frac{{\sigma }^{2}{u}^{2}} {2} + ibu +{\int}_{\mathbb{R}}\left ({e}^{iux} - 1 - iux{\mathbf{1}}_{\vert x\vert \leq \epsilon }\right )\nu \left (dx\right )$$
for \(\sigma, b \in \mathbb{R}\), a fixed truncation level \(\epsilon > 0\), and ν a σ-finite measure on \(\mathbb{R}\) satisfying
$$\nu \left (\left \{0\right \}\right ) = 0,\qquad \qquad {\int}_{\mathbb{R}}\left (1 \wedge {\left \vert x\right \vert }^{2}\right )\nu \left (dx\right ) < \infty.$$

Proof.

See ( [25], Theorem 25.3).

Remark: The triple \(\left (b,\sigma,\nu \right )\) is known as the Lévy Characteristic Triple and leads to a unique Lévy process, as the following theorem will show.

Theorem 2.2 (Lévy-Itô Decomposition). 

Let ε > 0.
  1. Every Lévy process X can be decomposed in a unique fashion as a sum of three independent Lévy processes \(X = {X}^{\left (1\right )} + {X}^{\left (2\right )} + {X}^{\left (3\right )}\), where \({X}^{\left (1\right )}\) is a linear transform of Brownian motion, \({X}^{\left (2\right )}\) is a compound Poisson process containing all jumps of X of magnitude greater than ε, and \({X}^{\left (3\right )}\) is a purely discontinuous square-integrable martingale containing all jumps of X of magnitude less than ε.
  2. Given a triple \(\left (b,\sigma,\nu \right )\) which satisfies the properties of Theorem 2.1, there exists a unique probability measure P on \(\left (\Omega,\mathcal{F}\right )\) under which the process X with characteristic exponent ψ as above is a Lévy process. The decomposition of the process corresponds to a decomposition of the characteristic exponent, \(\psi = {\psi }^{\left (1\right )} + {\psi }^{\left (2\right )} + {\psi }^{\left (3\right )}\), where
     $$\begin{array}{rcl}{ \psi }^{\left (1\right )}\left (u\right )& =& -\frac{1} {2}{\sigma }^{2}{u}^{2} + ibu, \end{array}$$
     (2)
     $$\begin{array}{rcl}{ \psi }^{\left (2\right )}\left (u\right )& =& {\int}_{\mathbb{R}}\left ({e}^{iux} - 1\right ){\mathbf{1}}_{\left \vert x\right \vert >\epsilon }\nu \left (dx\right ), \end{array}$$
     (3)
     $$\begin{array}{rcl}{ \psi }^{\left (3\right )}\left (u\right )& =& {\int}_{\mathbb{R}}\left ({e}^{iux} - 1 - iux\right ){\mathbf{1}}_{\left \vert x\right \vert \leq \epsilon }\nu \left (dx\right ). \end{array}$$
     (4)

Proof.

See ( [2], Sect. 2.4).

Note that the Lévy measure need not be a finite (let alone a probability) measure in general, as the Lévy process can have infinite activity, i.e., \({\int}_{\mathbb{R}}\nu \left (dx\right ) = \infty \). However, if the activity is finite, i.e., \(\lambda :={\int}_{\mathbb{R}}\nu \left (dx\right ) < \infty \), then ν can be normalized to define a probability measure μ on \(\mathbb{R} \setminus \left \{0\right \}\). If \(\lambda < \infty \), then X is a compound Poisson process with jump intensity (the expected number of jumps per unit of time) λ. In the continuous-time finance literature, infinite variation processes have received specific attention due to the fact that under common market assumptions, such processes do not yield arbitrage opportunities [13].

2.2 Lévy Models for Pricing

For pricing, we assume the equivalent martingale measure \(\mathbb{Q}\) has been chosen. In Lévy markets, the price process under \(\mathbb{Q}\) is given by
$${S}_{t} = {S}_{0}\exp \left \{\left (r -\frac{{\sigma }^{2}} {2} - c\right )t + {X}_{t}\right \},$$
(5)
where X is a Lévy process with no drift (b = 0). The Lévy characteristic triple is \((0,\sigma,{\nu }_{\mathbb{Q}})\), where σ ≥ 0 and \({\nu }_{\mathbb{Q}}\) is a Lévy measure. Given \({\int}_{\mathbb{R}}\min \left (1,{x}^{2}\right ){\nu}_{\mathbb{Q}}\left (dx\right )< \infty \), by the Lévy-Khinchin formula X can be decomposed into a Brownian motion (diffusion) component B and a quadratic pure jump process Y, independent of B. That is, \({X}_{t} = \sigma {B}_{t} + {Y }_{t}\); in particular, when σ = 0, \({X}_{t} = {Y }_{t}\). The parameter c is chosen such that the discounted exponential of the quadratic pure jump process is a martingale, so that the mean rate of return on S under \(\mathbb{Q}\) is the risk-free interest rate r. This is achieved through the following equality:
$${e}^{\mathit{ct}} = {\mathbb{E}}_{ \mathbb{Q}}\left [{e}^{{Y }_{t} }\right ].$$
(6)
For an explicit formula for c, refer to (14). To find the dynamics of the price process, define \(\mu \left (dx,dt\right )\) to be the integer-valued jump measure that counts the number of jumps of Y in space-time. By Itô’s formula, St solves the following SDE:
$$d{S}_{t} = {S}_{{t}_{-}}d{X}_{t} + {S}_{{t}_{-}}{\int}_{\mathbb{R}}\left ({e}^{y} - 1 - y\right )\mu \left (dy,dt\right ) + {S}_{{ t}_{-}}\left (r - c\right )dt.$$
Because Lévy processes are time homogeneous (stationary increments), the compensator of the jump measure factorizes as \({\nu}_{\mathbb{Q}}\left (dx\right ) \times dt\), where dt is the Lebesgue measure. We assume that the Lévy measure has a density under \(\mathbb{Q}\): \({\nu}_{\mathbb{Q}}\left (dx\right ) = k\left (x\right )dx\), where \(k\left (x\right )\) describes the intensity of jumps of size x in Y. We now introduce the parametric Lévy processes that we will use in our numerical investigations.

The CGMY Process

One popular parametric class of Lévy processes that has been empirically vetted is the class of CGMY processes. The CGMY process is a four-parameter Lévy process. As introduced by Carr, Geman, Madan and Yor [5], it can be taken either as a pure-jump process or with an added diffusion component. Let G, M, C > 0, 0 < Y < 2 and σ = 0. Then the Lévy measure, ν(dx), of the pure jump process has density
$$k\left (x\right ) = C\left \{\begin{array}{@{}l@{\quad }l@{}} {\left \vert x\right \vert }^{-1-Y }{e}^{-G\left \vert x\right \vert }\quad &\mathrm{if\ }x < 0, \\ {\left \vert x\right \vert }^{-1-Y }{e}^{-Mx} \quad &\mathrm{if\ }x >0. \end{array} \right.$$
(7)
Note that since k(x) is decaying exponentially, we no longer require the truncation function in the characteristic exponent, which can be expressed as
$$\psi \left (u\right ) ={\int}_{\mathbb{R}}\left ({e}^{iux} - 1 - iux\right )k\left (x\right )\,dx.$$
(8)
As before, we add a drift c such that \({e}^{{X}_{t}-c\,t}\) is a martingale.
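For concreteness, a minimal sketch of the density (7) in Python (our own illustration; the function name cgmy_density and the use of NumPy are assumptions, not part of the chapter):

```python
import numpy as np

def cgmy_density(x, C, G, M, Y):
    """CGMY Levy density k(x) of (7); x is a nonzero scalar or NumPy array."""
    x = np.asarray(x, dtype=float)
    decay = np.where(x < 0.0, np.exp(-G * np.abs(x)), np.exp(-M * x))
    return C * np.abs(x) ** (-1.0 - Y) * decay

# Parameter set III of Table 1: C = 1, G = M = 5, Y = 0.6
print(cgmy_density(np.array([-0.1, 0.05, 0.1]), C=1.0, G=5.0, M=5.0, Y=0.6))
```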

The Variance Gamma Process

An important special case of the CGMY process is the Variance Gamma process. The Variance Gamma process can be seen as a CGMY process with this choice of parameters:
$$\begin{array}{rcl} C& =& 1/\nu, \\ G& =& \Big{(}\sqrt{ \frac{1} {4}{\theta }^{2}{\nu }^{2} + \frac{1} {2}{\sigma }^{2}\nu } -\frac{1} {2}\theta \nu {\Big{)}}^{-1}, \\ M& =& \Big{(}\sqrt{ \frac{1} {4}{\theta }^{2}{\nu }^{2} + \frac{1} {2}{\sigma }^{2}\nu } + \frac{1} {2}\theta \nu {\Big{)}}^{-1}, \\ Y & =& 0. \end{array}$$
In particular, \(\theta \in \mathbb{R}\), ν > 0, σ > 0. For more information on the Variance Gamma and CGMY processes, as well as other parametric Lévy processes, see [27].
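The mapping above is easy to code; a minimal sketch (ours; the function name vg_to_cgmy and the example parameter values are hypothetical):

```python
import math

def vg_to_cgmy(theta, nu, sigma):
    """Map Variance Gamma parameters (theta, nu, sigma) to the CGMY parametrization, with Y = 0."""
    root = math.sqrt(0.25 * theta**2 * nu**2 + 0.5 * sigma**2 * nu)
    C = 1.0 / nu
    G = 1.0 / (root - 0.5 * theta * nu)
    M = 1.0 / (root + 0.5 * theta * nu)
    return C, G, M, 0.0

# Hypothetical VG parameters: theta = -0.1, nu = 0.2, sigma = 0.3
print(vg_to_cgmy(-0.1, 0.2, 0.3))
```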

2.3 Infinite Activity Processes

Pure-jump Lévy processes can be of infinite or finite activity; finite activity processes are characterized as compound Poisson processes, whereas for infinite activity processes the Lévy measure has infinite mass, and given our assumption of absolutely continuous Lévy measures, this implies that there are infinitely many jumps in any time interval of positive length and that there is a continuum of jump sizes [5]. In particular, there are infinitely many small jumps. These processes are particularly useful in financial modeling, because the richness of the jump density is able to produce realistic price processes that require no Brownian motion component [5]. In options pricing, these infinite activity Lévy processes produce realistic volatility smiles, which is a key advantage over the original Black-Scholes model [7]. The infinite quantity of small jumps is essential for Lévy models to capture real-world phenomena, hence it is important to capture them numerically as accurately as possible.

Since simulation schemes, Monte Carlo pricing, and some deterministic option pricing methods require the truncation of the small jumps, we aim to quantify the simulation and pricing error for American options using a range of truncation levels.

2.4 The Diffusion Approximation

To examine the error induced into the pricing problem by removing small jumps, it is useful to recall the Lévy-Itô decomposition. Assume X is a non-singular Lévy process. Then X can be decomposed into the sum of a Brownian motion with drift, a compound Poisson process, and a square-integrable martingale containing the small jumps of X. Asmussen and Rosiński [3] studied the removal of the small jump component from X and found that, under certain conditions, the suitably rescaled small-jump component converges weakly to a Brownian motion (see Theorem 2.3). Based on this convergence result, they suggested that the small jumps could be compensated for by adding a diffusion component. For European options, a finite difference scheme using the small jump regularization of [3] is proposed in [8], and error rates are obtained there. Signahl [27] established rates of weak convergence for the truncated Lévy process under various methods of regularization.

Truncation of Small Jumps

Under the assumptions of Theorem 2.2, a Lévy process \({\left ({X}_{t}\right )}_{t\geq 0}\) has, for \(0 < \epsilon < 1\), the following unique decomposition:
$$X = {X}^{\left (1\right )} + {X}_{ \epsilon }^{\left (2\right )} + {X}_{ \epsilon }^{\left (3\right )}.$$
We take σ = 0. The small jumps (represented by \({X}_{\epsilon }^{\left (3\right )}\)) can simply be ignored, or they can be accounted for by adding an additional diffusion component to X.
Step 1. Remove small jumps:
$${Z}_{t}^{0,\epsilon } := {X}_{ t} -\sum_{s\leq t}\Delta {X}_{s}{\mathbf{1}}_{\left \{\vert \Delta {X}_{s}\vert <\epsilon \right \}} = {X}_{t} - {X}_{\epsilon,t}^{\left (3\right )}.$$
Step 2. Replace the small jumps with their expected value:
$${Z}_{t}^{1,\epsilon } := {Z}_{ t}^{0,\epsilon } + \mathbb{E}\left [{X}_{ \epsilon,t}^{(3)}\right ],$$
where \(\mathbb{E}\left [{X}_{\epsilon,t}^{(3)}\right ] = t\,{\int}_{0<\vert x\vert <\epsilon }xk\left (x\right )dx\).
Step 3. Add a diffusion component based on the level of truncation:
$${Z}_{t}^{2,\epsilon } := {Z}_{ t}^{1,\epsilon } + \sigma \left (\epsilon \right ){W}_{ t},$$
where \(\sigma {\left (\epsilon \right )}^{2} :={\int}_{\vert x\vert <\epsilon }{x}^{2}k\left (x\right )dx\) and Wt is an independent Brownian motion.

Remark: To approximate a truncated Lévy process, the drift is as defined above. However, when using the approximations in Steps 2–3 for options pricing, we must adjust the drift so that the resulting exponential Lévy price process is, after discounting, a martingale (cf. (6)).
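As a small illustration of Step 3, the sketch below (ours; the function names are hypothetical and SciPy quadrature is assumed) computes \(\sigma(\epsilon)^{2} = \int_{|x|<\epsilon}x^{2}k(x)\,dx\) numerically for a CGMY density and draws the Gaussian increments that stand in for the small jumps over a time step Δt. The drift corrections of Step 2 and of the remark above are not included.

```python
import numpy as np
from scipy.integrate import quad

def sigma_eps_squared(k, eps):
    """sigma(eps)^2 = integral of x^2 k(x) dx over 0 < |x| < eps, by numerical quadrature."""
    left = quad(lambda x: x * x * k(x), -eps, 0.0)[0]
    right = quad(lambda x: x * x * k(x), 0.0, eps)[0]
    return left + right

# CGMY density of parameter set III in Table 1 (C = 1, G = M = 5, Y = 0.6)
k = lambda x: 1.0 * abs(x) ** (-1.6) * np.exp(-5.0 * abs(x))
eps, dt, n_steps = 1e-3, 1.0 / 500, 500
sig = np.sqrt(sigma_eps_squared(k, eps))

rng = np.random.default_rng(0)
gaussian_increments = rng.normal(0.0, sig * np.sqrt(dt), size=n_steps)  # sigma(eps) * Brownian increments
```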

Theorem 2.3 (Normal Approximation for the Small Jumps of a Lévy Process). 

Define \({X}_{\epsilon,t} := {X}_{\epsilon,t}^{(3)} - \mathbb{E}[{X}_{\epsilon,t}^{(3)}]\). Then \(\sigma {\left (\epsilon \right )}^{-1}{X}_{\epsilon }\) converges weakly to a standard Brownian motion as ε → 0 if and only if, for each κ > 0,
$$\sigma \left (\kappa \sigma \left (\epsilon \right ) \wedge \epsilon \right ) \sim \sigma \left (\epsilon \right ),\ \mathrm{as\ }\epsilon \rightarrow 0.$$

Proof.

See ( [3], Theorem 2.1).

This condition can be conveniently validated as follows.

Proposition 2.1.

If \(\lim_{\epsilon \rightarrow 0}\frac{\sigma \left (\epsilon \right )} {\epsilon } = \infty \), then for each κ > 0,
$$\sigma \left (\kappa \sigma \left (\epsilon \right ) \wedge \epsilon \right ) \sim \sigma \left (\epsilon \right ),\ \mathrm{as\ }\epsilon \rightarrow 0.$$
Moreover, if the Lévy density k(x) has no atoms in a neighborhood of the origin, these conditions are equivalent (that is, \(\lim_{\epsilon \rightarrow 0}\frac{\sigma \left (\epsilon \right )} {\epsilon } = \infty \) is necessary and sufficient).

Proof.

See ( [3], Proposition 2.1).

Example: Assume X is a Lévy process with Lévy density k(x). Assume further that there exist constants 0 < α < 2 and \({C}_{-} > 0\) such that for any x ∈ ( − 1, 1),
$$\frac{1} {2}(k(-x) + k(x)) \geq \frac{{C}_{-}} {\vert x{\vert }^{1+\alpha }}.$$
(9)
To verify that the small jumps can be approximated with a diffusion component we estimate \(\sigma \left (\epsilon \right )\) from below and apply Proposition 2.1.
$$\begin{array}{rcl}{ \sigma }^{2}\left (\epsilon \right )& = & {\int}_{-\epsilon }^{\epsilon }{x}^{2}k\left (x\right )dx ={\int}_{0}^{\epsilon }{x}^{2}k\left (x\right )dx +{\int}_{0}^{\epsilon }{x}^{2}k\left (-x\right )dx \\ & = & {\int}_{0}^{\epsilon }{x}^{2}\left (k\left (x\right ) + k\left (-x\right )\right )dx \\ &{ \mathrm{Eq.\ (9)} \atop \geq } & 2{\int}_{0}^{\epsilon }{x}^{2}{C}_{ -}\vert x{\vert }^{-1-\alpha }dx = 2{C}_{ -}\frac{{x}^{2-\alpha }} {2 - \alpha }{\Big{\vert }}_{0}^{\epsilon } = 2{C}_{ -}\frac{{\epsilon }^{2-\alpha }} {2 - \alpha }.\end{array}$$
Therefore,
$$\lim_{\epsilon \rightarrow 0}\frac{\sigma \left (\epsilon \right )} {\epsilon } \geq \lim_{\epsilon \rightarrow 0}\sqrt{ \frac{2{C}_{- } } {2 - \alpha }} \frac{{\epsilon }^{1-\alpha /2}} {\epsilon } =\lim_{\epsilon \rightarrow 0}\sqrt{ \frac{2{C}_{- } } {2 - \alpha }} \frac{1} {{\epsilon }^{\alpha /2}} = \infty.$$
By ( [3], Proposition 5.2), the small jumps of any Lévy process satisfying condition (9) can be approximated with a diffusion. This condition (9) is sufficient for the existence and uniqueness of a solution to the options pricing PIDE, which will be discussed in more depth in Sect. 3.2.1. This approximation is not valid for all Lévy processes.
Example: Let X be a Gamma process. Then X has the following Lévy density: for a, b > 0, x > 0, \(k(x) = a \cdot {x}^{-1}\exp (-x/b)\). To apply Proposition 2.1, we must calculate σ(ε):
$$\begin{array}{rcl}{ \sigma }^{2}(\epsilon )& =& {\int}_{0}^{\epsilon }{x}^{2}k(x)\,dx = a{\int}_{0}^{\epsilon }x{e}^{-x/b}\,dx \\ & =& -a\,b\,\epsilon \,{e}^{-\epsilon /b} - {b}^{2}a\,{e}^{-\epsilon /b} + {b}^{2}a \end{array}$$
(10)
$$\begin{array}{rcl} & =& {b}^{2}a(1 - {e}^{-\epsilon /b}(1 + \frac{\epsilon } {b})) \\ & \leq & {b}^{2}a(1 - (1 -\frac{\epsilon } {b})(1 + \frac{\epsilon } {b})) = a\,{\epsilon }^{2}.\end{array}$$
(11)
Since \(\lim_{\epsilon \rightarrow 0}\sigma (\epsilon )/\epsilon \leq \lim_{\epsilon \rightarrow 0}{a}^{1/2}\epsilon /\epsilon = {a}^{1/2}\), and k(x) has no atoms in the neighborhood of the origin, the diffusion approximation cannot be applied to Gamma processes.
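The contrast between the two examples can be checked numerically; the sketch below (ours) evaluates σ(ε)/ε by quadrature for the CGMY density of parameter set III in Table 1, for which the ratio grows without bound as ε → 0, and for a Gamma density with hypothetical parameters a = 1, b = 0.5, for which it stays bounded by a^{1/2}.

```python
import numpy as np
from scipy.integrate import quad

def sigma_over_eps(k, eps):
    """Return sigma(eps)/eps with sigma(eps)^2 = int_{|x|<eps} x^2 k(x) dx."""
    s2 = quad(lambda x: x * x * k(x), -eps, 0.0)[0] + quad(lambda x: x * x * k(x), 0.0, eps)[0]
    return np.sqrt(s2) / eps

k_cgmy = lambda x: abs(x) ** (-1.6) * np.exp(-5.0 * abs(x))          # C = 1, G = M = 5, Y = 0.6
k_gamma = lambda x: (1.0 / x) * np.exp(-x / 0.5) if x > 0 else 0.0   # a = 1, b = 0.5

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    # the first ratio grows as eps shrinks, the second stays below sqrt(a) = 1
    print(eps, sigma_over_eps(k_cgmy, eps), sigma_over_eps(k_gamma, eps))
```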

Remark: By an analogous calculation, one can see that the small jumps of the Variance Gamma process cannot formally be approximated by a Brownian Motion. However, in practice, the approximation appears to converge with the rate predicted in [27]. For more information and convergence results in the Variance Gamma case, see [21]. Examples of processes where the small jumps can be approximated by a diffusion include normal inverse Gaussian Lévy processes as well as CGMY Lévy processes (with 0 < Y < 2).

Provided the condition of Proposition 2.1 is satisfied, the approximating Lévy process will converge to the true process. However, will options prices under an approximating Lévy process also converge to the true options prices? For European options, the aforementioned weak convergence result for the diffusion approximation guarantees convergence of options prices; for exact rates see [27]. However, for infinite activity Lévy processes with no diffusion component, the smooth pasting condition (i.e., that the option price meets the exercise boundary smoothly) can be violated, whereas smooth pasting does hold in the case of Brownian motion. Therefore, in the case of American options, we are approximating a non-smooth quantity with a smooth approximant. Hence we expect pricing errors near the free exercise boundary, where it is crucial to obtain accurate prices. In our numerical investigations, we examine whether it is still possible to find a truncation level such that the error in the options prices is acceptably small.

3 Numerical Methods

We now address how the approximations above will influence the numerical methods to be utilized for options pricing. We first study stochastic Monte Carlo approaches to pricing, and focus on a widely used procedure, the Longstaff-Schwartz algorithm [15] which, for many classes of underlying processes, achieves price estimation with relatively small bias. An essential part of the Monte Carlo pricing approach is the simulation of the paths of the underlying process; focussing on the CGMY class, we use the simulation approach outlined in [20]. We then examine deterministic approaches, specifically numerical methods for the solution of partial differential equations, and try to assess the relative accuracy and computational burden compared to Monte Carlo procedures.

3.1 Stochastic Numerical Methods

Monte Carlo Methods for Infinite Activity Lévy Processes

Monte Carlo methods are the industry-standard method for pricing of options when analytical pricing results are not available. The approach – for a large number of paths, simulate the underlying forward until termination of the contract, and then compute the price as the expected discounted value of the value function – requires efficient methods for simulating the price process. For infinite activity Lévy process-driven models such as the CGMY, the diffusion approximation from earlier sections must be utilized. In this section, we adopt the simulation approach from [20] that can efficiently simulate the process for any given selection of the C, G, M, Y parameters, any given level of truncation, and at any level of time discretization. Note that [20] addresses the pricing of European options, and with the associated fixed option expiry times, the Monte Carlo pricing error behaviour is more straightforward to understand. For American options, however, the error behaviour is more complex, but is amenable to study in simulation. For a summary of approaches to pricing of American options using Monte Carlo, see [12].

Monte Carlo Pricing Using Least-Squares

In this chapter, we adopt the Monte Carlo least-squares pricing method introduced by Longstaff and Schwartz [15] and studied formally in [6]. This approach uses backward induction, and estimates value functions by ordinary least-squares on a fixed collection of basis functions, such as polynomial or Laguerre orthogonal bases. It is simple to implement, but yields a sub-optimal strategy, resulting in a lower bound for the option price. However, the sub-optimality becomes negligible in the Monte Carlo limit, so here we persist with the approach. We now give brief details of the method.

The Longstaff-Schwartz algorithm proceeds by utilizing a discrete time approximation to the choice of exercise time and subsequent valuation. We assume that the option may be exercised at times \(0 = {t}_{0} < {t}_{1} < \cdots < {t}_{n} = T\), say, where throughout we assume that \({t}_{j} = j\Delta t\), so that the exercise times are equally spaced. Suppose that we have path realizations of the underlying process St (generated according to (5), say), that is, values of \({S}_{0},{S}_{{t}_{1}},\ldots,{S}_{T}\). Note that for pure-jump processes with independent increments, simulation of St reduces to simulation of the jumps of the process in fixed finite time intervals; for example, for the Variance Gamma process, the representation as the difference of two Gamma processes means that the simulation essentially reduces to the simulation of Gamma random variates, which can be trivially implemented.

At each time point tj, the decision to exercise or not rests on the comparison of the payoff and continuation values (that is, the value of the option if it is not exercised at tj). Denote by pj(. ) and qj(. ) the payoff and continuation functions of underlying price \({S}_{{t}_{j}} = {s}_{j}\) at time point tj. As before, we have for strike value K and interest rate r,
$${p}_{j}(s) = {e}^{-r(T-{t}_{j})}{(K - s)}_{ +}.$$
Using a backwards dynamic programming argument, the continuation function at time j, for \(j = n - 1,n - 2,\ldots,1,0\) is given by
$${q}_{j}(s) = \mathbb{E}\left [\max \left \{{p}_{j+1}({S}_{{t}_{j+1}}),{q}_{j+1}({S}_{{t}_{j+1}})\right \}\big\vert{S}_{{t}_{j}} = s\right ].$$
(12)
For a single path \(\{{s}_{0},{s}_{1},\ldots,{s}_{n}\}\), the series of payoff values \({p}_{0}({s}_{0}),{p}_{1}({s}_{1}),\ldots,{p}_{n}({s}_{n})\) can be computed exactly, and the continuation values \({q}_{0}({s}_{0}),{q}_{1}({s}_{1}),\ldots,{q}_{n}({s}_{n})\) (note that \({q}_{n}(\cdot) \equiv 0\)) can be computed by using an approximating finite basis function expansion
$${q}_{j}(x) =\sum_{k=0}^{K}{\beta }_{ jk}{h}_{jk}(x).$$
(13)
In this chapter, we use Laguerre polynomials up to order four to perform the approximation:
$$\begin{array}{lll} {h}_{j0}(x) = 1,\qquad {h}_{j1}(x) = -x + 1,\qquad {h}_{j2}(x) = \frac{1} {2}({x}^{2} - 4x + 2),\\ \\ {h}_{j3}(x) = \frac{1} {6}(-{x}^{3} + 9{x}^{2} - 18x + 6),\\ \\ {h}_{j4}(x) = \frac{1} {24}({x}^{4} - 16{x}^{3} + 72{x}^{2} - 96x + 24).\end{array}$$
In practice, the coefficients \({\beta }_{jk},j = 0,\ldots,n,k = 0,\ldots,K\) in (13) must be estimated from N path realizations, as must the expectation in (12). Denote the mth path realization \(\{{s}_{m0},{s}_{m1},\ldots,{s}_{mn}\}\), 1 ≤ m ≤ N. In the least-squares approach, the βjk are estimated using the normal equations as
$$\widehat{{\beta }}_{j} = {({\mathbf{\mathit{X}}}_{j}^{\mathrm{T}}{\mathbf{\mathit{X}}}_{ j})}^{-1}{\mathbf{\mathit{X}}}_{ j}^{\mathrm{T}}{\mathbf{\mathit{c}}}_{ j},$$
where \({\mathbf{\mathit{X}}}_{j}\) is the \({N}_{j} \times (K + 1)\) design matrix at step j, Nj is the number of in-the-money paths at time \({t}_{j}\), and \({\mathbf{\mathit{c}}}_{j}\) contains the continuation values for those paths. The columns of \({\mathbf{\mathit{X}}}_{j}\) are formed using the basis functions \({h}_{jk}(.)\): the column corresponding to \({h}_{jk}\) contains the values \({h}_{jk}({s}_{mj})\) over the \({N}_{j}\) in-the-money paths m. If necessary, a ridge-regression stabilized estimator could be used, that is,
$$\widehat{{\beta }}_{j} = {({\mathbf{\mathit{X}}}_{j}^{\mathrm{T}}{\mathbf{\mathit{X}}}_{ j} + \lambda {\mathbf{\mathit{I}}}_{K+1})}^{-1}{\mathbf{\mathit{X}}}_{ j}^{\mathrm{T}}{\mathbf{\mathit{c}}}_{ j}$$
for some λ > 0. Finally, the value of the option at time t0 = 0 is given by
$${v}_{0} =\max \{ {p}_{0}({S}_{0}),{q}_{0}({S}_{0})\}.$$
If a single collection of paths is used, the dual use of the paths at the two stages induces dependence and results in a low-biased estimator. This issue has been addressed extensively in the literature (see, for example, [11], Chap. 16, [14]). Also, typically a two-pass simulation version of the algorithm is used, where the first pass is used to compute the exercise strategy, and the second is used to obtain the valuation. Finally, standard Monte Carlo methods for variance reduction (for example, the use of antithetic variables) can also be applied.
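For reference, a minimal single-pass sketch of the algorithm (ours; the function name lsm_american_put is hypothetical). It regresses realized discounted cash flows on the Laguerre basis above using in-the-money paths only, and applies standard per-step discounting rather than the globally discounted payoff functions \({p}_{j}\), so it yields the usual low-biased single-pass estimate; the two-pass scheme, bias correction and variance reduction discussed above are omitted.

```python
import numpy as np
from numpy.polynomial import laguerre

def lsm_american_put(paths, K, r, dt, degree=4):
    """Least-squares Monte Carlo (Longstaff-Schwartz) estimate of an American put price.
    `paths` is an (N, n+1) array of simulated prices at t_0, ..., t_n = T (same S_0 in every row)."""
    n = paths.shape[1] - 1
    disc = np.exp(-r * dt)
    payoff = np.maximum(K - paths, 0.0)
    value = payoff[:, -1].copy()               # cash flow of the policy, valued at the current step
    for j in range(n - 1, 0, -1):
        value *= disc                          # discount one step back, to t_j
        itm = payoff[:, j] > 0.0               # regress on in-the-money paths only
        if itm.sum() > degree + 1:
            X = laguerre.lagvander(paths[itm, j], degree)      # columns L_0(s), ..., L_degree(s)
            beta, *_ = np.linalg.lstsq(X, value[itm], rcond=None)
            continuation = X @ beta
            exercise = payoff[itm, j] > continuation
            idx = np.flatnonzero(itm)[exercise]
            value[idx] = payoff[idx, j]        # exercise now: replace future cash flow by the payoff
    return max(payoff[0, 0], disc * value.mean())

# Hypothetical usage: price = lsm_american_put(paths, K=110.0, r=0.1, dt=1.0 / 200)
```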

Simulation of the Underlying Process Under the Martingale Measure

The algorithm from [20] is used to simulate the CGMY process for 0 < Y < 2. For the Variance Gamma (Y = 0) case, direct simulation of the process is achieved by using the representation as the difference of two Gamma processes. In both cases, the process is simulated via its increments, which are then cumulated, and the appropriate constant c is subtracted (as outlined in Sect. 2.2, c is selected to make the discounted price process \({e}^{-rt}{S}_{t}\) a martingale). For the CGMY process with ε = 0,

$$\begin{array}{rcl} c& =& {\int}_{-\infty }^{\infty }\left ({e}^{x} - x - 1\right )k\left (x\right )dx \\ & & \\ & =& \left \{\begin{array}{@{}l@{\quad }l@{}} -C\left \{\log \left (1 - \frac{1} {M} + \frac{1} {G} - \frac{1} {MG}\right ) + \frac{1} {M} - \frac{1} {G}\right \} \quad &\mathrm{if\ }Y = 0\\ \quad & \\ C\left \{\left (M - 1\right )\log \left (1 - \frac{1} {M}\right ) + \left (G + 1\right )\log \left (1 + \frac{1} {G}\right )\right \} \quad &\mathrm{if\ }Y = 1\\ \quad & \\ C\Gamma \left (-Y \right )\left ({\left (M - 1\right )}^{Y } - {M}^{Y } + Y {M}^{Y -1} +{ \left (G + 1\right )}^{Y } - {G}^{Y } - Y {G}^{Y -1}\right )\quad &\mathrm{else}. \end{array} \right. \end{array}$$
whereas if ε > 0 and the diffusion approximation is used, different drift terms are required (see [21], Appendix, for full details). For brief details of the algorithm from [20], see Sect. 5.
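The piecewise formula for c is straightforward to implement. The sketch below (ours) codes the three branches and, as a sanity check valid when M > 1, compares them with a numerical quadrature of \(\int ({e}^{x} - x - 1)k(x)\,dx\); the finite quadrature bounds are chosen so that the omitted tail and origin contributions are negligible for the parameter sets of Table 1.

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

def cgmy_drift_c(C, G, M, Y):
    """Martingale drift c = int (e^x - x - 1) k(x) dx for the CGMY density, eps = 0."""
    if Y == 0:
        return -C * (math.log(1 - 1/M + 1/G - 1/(M*G)) + 1/M - 1/G)
    if Y == 1:
        return C * ((M - 1) * math.log(1 - 1/M) + (G + 1) * math.log(1 + 1/G))
    return C * gamma(-Y) * ((M - 1)**Y - M**Y + Y * M**(Y - 1)
                            + (G + 1)**Y - G**Y - Y * G**(Y - 1))

def cgmy_drift_c_check(C, G, M, Y):
    """Quadrature check of the same integral (requires M > 1); expm1 avoids cancellation near 0."""
    pos = quad(lambda x: (math.expm1(x) - x) * C * x**(-1 - Y) * math.exp(-M * x), 1e-12, 50.0)[0]
    neg = quad(lambda x: (math.expm1(-x) + x) * C * x**(-1 - Y) * math.exp(-G * x), 1e-12, 50.0)[0]
    return pos + neg

# Parameter set III of Table 1: C = 1, G = M = 5, Y = 0.6
print(cgmy_drift_c(1.0, 5.0, 5.0, 0.6), cgmy_drift_c_check(1.0, 5.0, 5.0, 0.6))
```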

The core simulation generates an N ×n matrix \(\mathbf{\mathit{Z}}\) of independent increments, with realizations of the increments of a single process per row of this matrix; the columns contain independent but also identically distributed random quantities. To form the Monte Carlo replicates \({X}_{t}^{(m)},m = 1,\ldots,N\), the cumulative row sum vectors are obtained and translated by the appropriate drift value, and then the underlying replicates \({S}_{t}^{(m)},m = 1,\ldots,N\) are formed by exponentiation. Note that permuting or resampling the elements in a column of \(\mathbf{\mathit{Z}}\) does not influence the distribution of \({S}_{t}^{(m)}\) for each m, so from a single realization of N paths, many more probabilistically identical (but dependent) paths can be generated by column-wise permutation or resampling. This fact can be an advantage, as typically the simulation of the independent increments can be time-consuming for some parameter settings.
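A sketch of this construction (ours; the increments matrix Z is assumed to have been simulated already, e.g. by the method of [20], or as differences of Gamma increments in the Variance Gamma case, and c is the martingale drift of Sect. 2.2):

```python
import numpy as np

def paths_from_increments(Z, S0, r, c, dt):
    """Build price paths S^{(m)} from an N x n matrix Z of Levy increments:
    row-wise cumulative sums, plus the drift (r - c)t of (5) with sigma = 0, then exponentiation."""
    N, n = Z.shape
    t = dt * np.arange(1, n + 1)
    X = np.cumsum(Z, axis=1)                           # X_t per path
    S = S0 * np.exp((r - c) * t + X)                   # drift broadcast over all paths
    return np.hstack([np.full((N, 1), float(S0)), S])  # prepend S_0

def permute_columns(Z, rng):
    """Column-wise permutation: same law for each path, extra (dependent) replicates for free."""
    return np.column_stack([rng.permutation(Z[:, j]) for j in range(Z.shape[1])])
```

The resulting path matrix can be fed directly to a least-squares pricer such as the lsm_american_put sketch above.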

In the Monte Carlo study, we must focus on the effect of varying (a) the Monte Carlo sample size N, (b) the time-discretization order n and time step Δt, and (c) the truncation value ε. Recall that in the Monte Carlo setting, the effect of changing N is well understood, and also that ε only affects the core simulation, not the pricing.

After outlining a standard Monte Carlo approach above, we now turn to deterministic numerical methods. We discuss first pricing of European options in a Lévy market, and then outline the adjustments necessary for pricing American options.

3.2 Deterministic Numerical Methods

To price American options in Lévy markets by solving a deterministic partial integro-differential equation (PIDE), we must first find a formulation for pricing European options. This is because, in the continuation region, the price of the American option satisfies the PIDE for a European option. Therefore, we begin by showing an extension of the Feynman-Kac formula for exponential Lévy processes which applies to the European case.

European Options

In Lévy markets, we assume the price process S is driven by a Lévy process as in (5). Under the chosen risk neutral equivalent measure \(\mathbb{Q}\), the price \(f\left (t,s\right )\), of a European option with \(\mathbb{Q}\)-integrable payoff \(g\left (s\right )\) can be written
$$f\left (t,s\right ) = {\mathbb{E}}_{\mathbb{Q}}\left [{e}^{-r\left (T-t\right )}g\left ({S}_{ T}\right )\bigg\vert{S}_{t} = s\right ].$$
To compute the option value deterministically, we need a generalization of the Feynman-Kac Formula to relate the expectation to a PIDE. First we convert to log price and time to maturity. Let \(x =\ln \left (s\right )\); τ = T − t. Then \(f\left (t,s\right ) = u\left (\tau,x\right )\).

Theorem 3.1 (Extended Feynman Kac). 

Given a Lévy process X on \(\mathbb{R}\) with characteristic triple \(\left (0,{\sigma }^{2},\nu \right )\), where \(\sigma \geq 0\) and the Lévy measure ν satisfies \(\nu (dx) = k\left (x\right )dx\), \({\int}_{\mathbb{R}}\left (1 \wedge {\left \vert x\right \vert }^{2}\right )\nu \left (dx\right ) < \infty \), and the following three conditions:
[FK1] (Activity of Small Jumps): There exist constants C+ > 0 and α < 2 such that, for all \(0 \leq \vert z\vert \leq 1\),
$$\vert k\left (z\right )\vert \leq {C}_{+} \frac{1} {\vert z{\vert }^{\alpha +1}}.$$
[FK2] (Semi-heavy Tails): There are constants \(C > 0\), \({\beta }_{-} > 0\) and \({\beta }_{+} > 1\) such that, for all \(\left \vert z\right \vert > 1\),
$$k\left (z\right ) \leq C\left \{\begin{array}{@{}l@{\quad }l@{}} {e}^{-{\beta }_{-}\left \vert z\right \vert }\quad &\mathrm{if\ }z < 0, \\ {e}^{-{\beta }_{+}\left \vert z\right \vert }\quad &\mathrm{if\ }z >0.\end{array} \right.$$
If σ = 0, we assume in addition that 0 < α < 2 and
[FK3] (Boundedness from below of \(k\left (z\right )\)): There is \({C}_{-} > 0\) such that, for all \(0 < \left \vert z\right \vert < 1\),
$$\frac{1} {2}\left (k\left (-z\right ) + k\left (z\right )\right ) \geq \frac{{C}_{-}} {{\left \vert z\right \vert }^{1+\alpha }}.$$
Assume that \(u\left (\tau,x\right ) \in {C}^{1,2}\left (\left (0,T\right ) \times \mathbb{R}\right ) \cap {C}^{0}\left (\left [0,T\right ] \times \mathbb{R}\right )\) solves the PIDE
$$\frac{\partial u} {\partial \tau } -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}} + \left (\frac{{\sigma }^{2}} {2} - r + c\right )\frac{\partial u} {\partial x} + {A}^{J}\left [u\right ] + ru = 0$$
in \(\left (0,T\right ) \times \mathbb{R}\), where \({A}^{J}\) denotes the integro-differential operator defined for \(\varphi \in {C}^{2}\left (\mathbb{R}\right )\) by
$${A}^{J}\left [\varphi \right ]\left (x\right ) := -{\int}_{\mathbb{R}}\left \{\varphi \left (x + y\right ) - \varphi \left (x\right ) - {\varphi }^{{\prime}}\left (x\right )y\right \}k\left (y\right )dy$$
and \(c \in \mathbb{R}\) is given by
$$c ={\int}_{\mathbb{R}}\left ({e}^{y} - 1 - y\right )k\left (y\right )dy$$
(14)
with the initial condition,
$${u}_{0}(x) = u\left (0,x\right ) := g\left ({e}^{x}\right ).$$
(15)
Then \(f\left (t,s\right ) = u\left (T - t,\log \left (s\right )\right )\) satisfies
$$f\left (t,s\right ) = {\mathbb{E}}_{\mathbb{Q}}\left [{e}^{-r\left (T-t\right )}g\left ({S}_{ T}\right )\bigg\vert{S}_{t} = s\right ].$$
Conversely, if \(f\left (t,s\right )\) above is sufficiently regular, the function \(u\left (\tau,x\right ) = f\left (T - \tau,{e}^{x}\right )\) solves the given PIDE.

Proof.

See ( [19, 22], Sect. 1.5)

Let X be a Lévy process with state space \(\mathbb{R}\) and characteristic triplet \(\left (0,\sigma,\nu \right )\) such that ν satisfies [FK2] (“Semi-heavy Tails”). The interest rate and drift in the PIDE can be set to zero by the transformation \(u\left (\tau,x\right ) = {e}^{-r\tau }\check{u}\left (\tau,x + (r - c -\frac{{\sigma }^{2}} {2} )\tau \right )\). Henceforth, for simplicity of notation we will denote our solution u (not \(\check{u}\)).

Therefore the strong form of the PIDE can be expressed as follows:

  • \(\mathrm{Find\ }u\left (\tau,x\right ) \in {C}^{1,2}\left ((0,T) \times \mathbb{R}\right ) \cap {C}^{0}\left (\left [0,T\right ] \times \mathbb{R}\right )\ \mathrm{such\ that}\)
    $$\begin{array}{rcl} \frac{\partial u} {\partial \tau } + {A}^{BS}\left [u\right ] + {A}^{J}\left [u\right ]& =& 0\qquad \quad \mathrm{in\ }\left (0,T\right ) \times \mathbb{R} \\ u\left (0,x\right )& =& {u}_{0}(x)\quad \mathrm{in\ }\mathbb{R} \end{array}$$
    (16)
    where \({A}^{BS}[u] = -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}}\).
We will transform the integro-differential operator via integration by parts. For \(u \in {C}^{2}(\mathbb{R})\) satisfying FK1–FK3 we have
$$\begin{array}{rcl}{ A}^{J}[u]& =& -{\int}_{\mathbb{R}}\left (u\left (x + y\right ) - u\left (x\right ) - {u}^{{\prime}}\left (x\right )y\right )k\left (y\right )dy \\ & =& -\Big[\left (u\left (x + y\right ) - u\left (x\right ) - {u}^{{\prime}}\left (x\right )y\right ){k}^{-1}\left (y\right )\Big]_{y=-\infty }^{y=\infty } +{\int}_{\mathbb{R}}\left ({u}^{{\prime}}\left (x + y\right ) - {u}^{{\prime}}\left (x\right )\right ){k}^{-1}\left (y\right )dy \\ & =& {\int}_{\mathbb{R}}\left ({u}^{{\prime}}\left (x + y\right ) - {u}^{{\prime}}\left (x\right )\right ){k}^{-1}\left (y\right )dy \\ & =& \Big[\left ({u}^{{\prime}}\left (x + y\right ) - {u}^{{\prime}}\left (x\right )\right ){k}^{-2}\left (y\right )\Big]_{y=-\infty }^{y=\infty } -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy \\ & =& -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy, \end{array}$$
where the boundary terms vanish because \(u \in {C}^{2}\left (\mathbb{R}\right )\) and the antiderivatives \({k}^{-1}\), \({k}^{-2}\) of the Lévy density decay at infinity by [FK2].
Here, the integro-differential operator is expressed in terms of the antiderivatives of the Lévy density, which are computed as follows:
$${k}^{-i}\left (z\right ) = \left \{\begin{array}{@{}l@{\quad }l@{}} {\int}_{-\infty }^{z}{k}^{-i+1}\left (x\right )dx\quad &\mathrm{if\ }z < 0, \\ -{\int}_{z}^{\infty }{k}^{-i+1}\left (x\right )dx\quad &\mathrm{if\ }z >0. \end{array} \right.$$

For the case of CGMY and Variance Gamma processes, these antiderivatives can be expressed in analytic formulae (for ε = 0 and ε > 0). For these expressions, refer to [21].
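When closed-form antiderivatives are unavailable, or as a cross-check on the analytic formulae, \(k^{-1}\) and \(k^{-2}\) can be approximated by quadrature. A rough sketch (ours) following the sign convention above; the infinite limits are replaced by ±50, which is harmless for exponentially decaying Lévy densities such as CGMY, and the nested quadrature is slow but simple:

```python
import math
from scipy.integrate import quad

def k_anti(k, z, order):
    """Iterated antiderivative k^{-order}(z) of a Levy density k: integrate from -infinity
    for z < 0 and from +infinity (with a minus sign) for z > 0, as in the display above."""
    if order == 0:
        return k(z)
    if z < 0:
        return quad(lambda x: k_anti(k, x, order - 1), -50.0, z)[0]
    return -quad(lambda x: k_anti(k, x, order - 1), z, 50.0)[0]

# Second antiderivative of the CGMY density of parameter set III at z = 0.2
k = lambda x: abs(x) ** (-1.6) * math.exp(-5.0 * abs(x))
print(k_anti(k, 0.2, 2))
```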

American Options

The solution of the optimal stopping problem can be formulated as the solution of a parabolic integro-differential inequality.

Theorem 3.2.

Let u0(x) be a sufficiently regular payoff function on \(\mathbb{R}\) and let σ > 0. Then the solution \(u(\tau,x) = f(T - \tau,{e}^{x})\) of the optimal stopping problem (1) is given by the following integro-differential inequality:
$$\begin{array}{rcl} \frac{\partial u} {\partial \tau } + {A}^{BS}\left [u\right ] + {A}^{J}\left [u\right ]& \geq & 0\quad in\ \left (0,T\right ) \times \mathbb{R}\end{array}$$
(17)
$$\begin{array}{rcl} u\left (\tau,x\right )& \geq & {u}_{0}(x)\quad in\ \left [0,T\right ] \times \mathbb{R}\end{array}$$
(18)
$$\begin{array}{rcl} \left (u\left (\tau,x\right ) - {u}_{0}(x)\right )\left (\frac{\partial u} {\partial \tau } + {A}^{BS}\left [u\right ] + {A}^{J}\left [u\right ]\right )& =& 0\quad \;\;in\ \left (0,T\right ) \times \mathbb{R}\end{array}$$
(19)
$$\begin{array}{rcl} u\left (0,x\right )& = & {u}_{0}(x)\quad \;\;in\ \mathbb{R}\end{array}$$
(20)

Proof.

See [4].

Denote \(\mathcal{C}\) the continuation region and \(\mathcal{E}\) the stopping (exercise) region. In the continuation region, u satisfies the PIDE for a European option, therefore
$$\frac{\partial u} {\partial \tau } -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}} -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy = 0\mathrm{\ in\ }\mathcal{C}.$$
(21)
In the stopping region, the value of the American option is equal to the payoff. Inserting the payoff into the above PIDE will result in a positive value, therefore
$$\frac{\partial u} {\partial \tau } -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}} -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy > 0\mathrm{\ in\ }\mathcal{E}.$$
(22)
Together, (21) and (22) justify (17). The inequality (18) holds by no arbitrage. For (19), note the following "complementarity":
$$\begin{array}{rcl} u\left (\tau,x\right ) > {u}_{0}(x)\ \mathrm{and}\ \frac{\partial u} {\partial \tau } -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}} -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy = 0\ \mathrm{in\ }\mathcal{C},& & \\ \end{array}$$
$$\begin{array}{rcl} u\left (\tau,x\right ) = {u}_{0}(x)\ \mathrm{and}\ \frac{\partial u} {\partial \tau } -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}} -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy > 0\ \mathrm{in\ }\mathcal{E}.& & \\ \end{array}$$
Therefore it must hold in \(\mathcal{E}\cup \mathcal{C} = \left (0,T\right ) \times \mathbb{R}\) that:
$$\left (u\left (\tau,x\right ) - {u}_{0}(x)\right )\left (\frac{\partial u} {\partial \tau } -\frac{{\sigma }^{2}} {2} \frac{{\partial }^{2}u} {\partial {x}^{2}} -{\int}_{\mathbb{R}}{u}^{{\prime\prime}}\left (x + y\right ){k}^{-2}\left (y\right )dy\right ) = 0.$$
Finally, (20) holds because at maturity the American option is equivalent to a European option (there is no longer any early exercise premium).

Variational Formulation

Define \(K := \left \{v \in V \vert v \geq {u}_{0}(x)\ \mathrm{a.e.\ in\ }\mathbb{R}\right \}\), the set of admissible solutions. The variational formulation then reads as follows:

  • Given \({u}_{0}(x) \in {L}^{2}\left (\mathbb{R}\right )\),
    $$\begin{array}{rcl}{ \left ( \frac{\partial } {\partial \tau }u,v - u\right )}_{{L}^{2}\left (\mathbb{R}\right )} + {a}^{\mathit{BS}}\left (u,v - u\right ) + {a}^{J}\left (u,v - u\right ) \geq 0\ \mathrm{in\ }\left (0,T\right ) \times \mathbb{R}.& & \\ \end{array}$$
    The bilinear forms \({a}^{\mathit{BS}}\), aJ are given by
    $$\begin{array}{rcl}{ a}^{\mathit{BS}}\left (u,v\right )& =& \frac{{\sigma }^{2}} {2} {\int}_{\mathbb{R}}u\prime\left (x\right )v\prime\left (x\right )dx, \\ {a}^{J}\left (u,v\right )& =& {\int}_{\mathbb{R}}{\int}_{\mathbb{R}}u\prime\left (z\right )v\prime\left (x\right ){k}^{-2}\left (z - x\right )dzdx.\end{array}$$
Integration by parts is again required to arrive at these bilinear forms. These calculations are presented in detail in [21]. Here, \({a}^{\mathit{BS}}\) and aJ are well defined for piecewise linear hat functions. We sometimes denote \(a\left (u,v\right ) := {a}^{\mathit{BS}}\left (u,v\right ) + {a}^{J}\left (u,v\right )\) for simplicity of notation. Note that this functional setup only allows for square-integrable payoff functions, u0(x). The localized problem allows for payoff functions with exponential growth. For our problem, V has the following form:
$$V = \left \{\begin{array}{@{}l@{\quad }l@{}} {H}_{0}^{\alpha /2}\left (R\right )\quad &\mathrm{if\ }\sigma = 0, \\ {H}_{0}^{1}\left (R\right ) \quad &\mathrm{if\ }\sigma > 0. \end{array} \right.$$
(23)

For the derivation of the variational formulation for American options see [18].

Localization

For numerical computations, we truncate to a finite domain \({\Omega }_{R} = \left (-R,R\right )\) and define \(K := \left \{v \in V \vert v \geq {u}_{0}(x)\ \mathrm{a.e.\ in\ }{\Omega }_{R}\right \}\). The variational formulation reads as follows:

  • Find \(u \in {L}^{2}\left (0,T;V \right )\), \(\frac{\partial u} {\partial \tau } \in {L}^{2}\left (0,T;{L}^{2}\left ({\Omega }_{ R}\right )\right )\), such that \(u\left (\tau,x\right ) \geq {u}_{0}(x)\) in \(\left (0,T\right ) \times {\Omega }_{R}\) and such that for all v ∈ K:
    $$\begin{array}{rcl}{ \left ( \frac{\partial } {\partial \tau }u,v - u\right )}_{{L}^{2}\left ({\Omega }_{R}\right )} + {a}_{R}\left (u,v - u\right )& \geq & 0\ \mathrm{in}\ \left (0,T\right ) \times {\Omega }_{R} \\ u\left (0,x\right )& =& {u}_{0}(x)\ \mathrm{\ in}\ {\Omega }_{R} \\ u\left (\tau,x\right )& =& 0\ \mathrm{\ in\ }{\Omega }_{R}^{c} \end{array}$$
    (24)

Here u, v have support in \({\Omega }_{R}\), \({a}_{R}\left (u,v\right ) := a\left (\tilde{u},\tilde{v}\right )\), where \(\tilde{u},\tilde{v}\) denote the extension of u, v by zero to all of \(\mathbb{R}\). The Hilbert space V is given by (23).

Proposition 3.1.

Given a Lévy process \({X}_{t}\) which satisfies [FK1], [FK2], and [FK3], the localized problem admits a unique solution \(u \in {L}^{2}\left (0,T;V \cap K\right )\), where V is given by (23).

Proof.

See ( [18], Theorem 3.2).

Discretization in Space

For the space discretization of the weak formulation of the pricing problem for American options, we use the Galerkin method with a finite element subspace \({V }^{N} \subset V\), where \({V }^{N} = {S}_{\Delta }^{1} \cap V\) and \({S}_{\Delta }^{1}\) denotes the space of continuous, piecewise linear functions on a mesh Δ. As a basis for VN, we use linear hat functions, defined as
$$\mathit{{b}_{i}}\left (x\right ) =\mathrm{ max}\ \left (1 -\frac{\vert x - {x}_{i}\vert } {h},0\right ).$$
Then \({V }^{N}\,=\,\mathrm{span}\ {\left \{{b}_{i}\left (x\right )\right \}}_{i=1}^{N}\). We discretize using a uniform mesh with N subintervals of size \(h\,=\,\frac{2R} {N}\) on the interval \({\Omega }_{R}\,=\,\left (-R,R\right )\). We approximate \(u\left (\tau,x\right )\) by an element \({u}^{N}\left (\tau,x\right ) \in {V }^{N}\). Then \({u}^{N}\left (\tau,x\right )\) can be written as a linear combination of the basis elements \({b}_{i}\left (x\right )\):
$${u}^{N}\left (\tau,x\right ) =\sum_{j=1}^{N}{u}_{ j}^{N}\left (\tau \right ){b}_{ j}\left (x\right ) ={ \left ({\mathbf{u}}^{N}\left (\tau \right )\right )}^{\mathrm{T}}\mathbf{b}$$
where \({\left ({\mathbf{u}}^{N}\left (\tau \right )\right )}^{\mathrm{T}} = \left ({u}_{1}^{N}\left (\tau \right ),{u}_{2}^{N}\left (\tau \right ),\ldots,{u}_{N}^{N}\left (\tau \right )\right )\) and \(\mathbf{b}\)\(= ({b}_{1}\left (x\right ),{b}_{2}\left (x\right ),\ldots,{b}_{N}\)\((x){)}^{\mathrm{T}}\). Here \({\mathbf{u}}^{N}\left (\tau \right )\) is an unknown vector of coefficient functions.

Approximating \(u\left (\tau,x\right )\) by an element \({u}^{N}\left (\tau,x\right ) \in {V }^{N}\) and the test functions v ∈ V by vN ∈ VN we can approximate the localized pricing problem for American options as follows:

  • Find \({u}^{N}\left (\tau,x\right ) \in K = \left \{{v}^{N} \in {V }^{N}\vert {v}^{N} \geq {u}_{0}(x)\right \}\) such that, for all \({v}^{N} \in K\),
    $$\left ( \frac{\partial } {\partial \tau }{u}^{N}\left (\tau \right ),{v}^{N} - {u}^{N}\left (\tau \right )\right ) + a\left ({u}^{N}\left (\tau \right ),{v}^{N} - {u}^{N}\left (\tau \right )\right ) \geq 0.$$

Substituting the representation of uN and vN in the hat function basis, we find the equivalent matrix inequality:

  • Find \({\mathbf{u}}^{N}\left (\tau \right ) \in \mathbf{K} = \left \{{\mathbf{v}}^{N} \in {\mathbb{R}}^{N}\vert {\mathbf{v}}^{N} \geq {\mathbf{u}}_{0}^{N}\right \}\) such that, for all \({\mathbf{v}}^{N} \in \mathbf{K}\),
    $${ \left ({\mathbf{v}}^{N} -{\mathbf{u}}^{N}\left (\tau \right )\right )}^{\mathrm{T}}\left (\mathbf{M} \frac{\partial } {\partial \tau }{\mathbf{u}}^{N}\left (\tau \right ) + \mathbf{A}{\mathbf{u}}^{N}\left (\tau \right )\right ) \geq 0.$$

where the mass matrix M is given by \(\mathbf{M} :={ \left ({b}_{j},{b}_{i}\right )}_{1\leq i,j\leq N}\), and the stiffness matrix \(\mathbf{A}\) is given by \(\mathbf{A} := a{\left ({b}_{j},{b}_{i}\right )}_{1\leq i,j\leq N}\).
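For the hat-function basis on a uniform mesh, the mass matrix and the Black-Scholes part of the stiffness matrix are the familiar tridiagonal P1 finite element matrices. A minimal sketch (ours; dense matrices for clarity, interior nodes only with zero boundary values); the jump part \({a}^{J}({b}_{j},{b}_{i})\), which requires \({k}^{-2}\) and yields a dense matrix, is omitted:

```python
import numpy as np

def assemble_mass_and_bs_stiffness(N, R, sigma):
    """Mass matrix M_ij = (b_j, b_i) and Black-Scholes stiffness A^BS_ij = (sigma^2/2)(b_j', b_i')
    for linear hat functions on a uniform mesh of (-R, R) with N subintervals of size h = 2R/N."""
    h = 2.0 * R / N
    n = N - 1                                   # interior hat functions only
    M = np.zeros((n, n))
    A = np.zeros((n, n))
    for i in range(n):
        M[i, i] = 2.0 * h / 3.0
        A[i, i] = sigma**2 / h
        if i + 1 < n:
            M[i, i + 1] = M[i + 1, i] = h / 6.0
            A[i, i + 1] = A[i + 1, i] = -sigma**2 / (2.0 * h)
    return M, A
```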

Discretization in Time

In time we discretize using the backward Euler scheme with time step k:

  • Find \({\mathbf{u}}_{m+1}^{N} \in \mathbf{K} := \left \{{\mathbf{v}}^{N} \in {\mathbb{R}}^{N}\vert {\mathbf{v}}^{N} \geq {\mathbf{u}}_{0}^{N}\right \}\) such that
    $${ \left ({\mathbf{v}}^{N} -{\mathbf{u}}_{ m+1}^{N}\right )}^{\mathrm{T}}\left (\frac{1} {k}\mathbf{M}\left ({\mathbf{u}}_{m+1}^{N} -{\mathbf{u}}_{ m}^{N}\right ) + \mathbf{A}{\mathbf{u}}_{ m+1}^{N}\right ) \geq 0.$$

Proposition 3.2.

The following formulations for the discretized American option pricing problem are equivalent:
  1. The discretized variational formulation:
     (a) \({\mathbf{u}}_{m+1}^{N} \in \mathbf{K} := \left \{{\mathbf{v}}^{N} \in {\mathbb{R}}^{N}\vert {\mathbf{v}}^{N} \geq {\mathbf{u}}_{0}^{N}\right \}\),
     (b) \(\forall {\mathbf{v}}^{N} \in \mathbf{K}\): \({\left ({\mathbf{v}}^{N} -{\mathbf{u}}_{m+1}^{N}\right )}^{\mathrm{T}}\left (\frac{1} {k}\mathbf{M}\left ({\mathbf{u}}_{m+1}^{N} -{\mathbf{u}}_{ m}^{N}\right ) + \mathbf{A}{\mathbf{u}}_{ m+1}^{N}\right ) \geq 0\).
  2. The matrix linear complementarity problem (LCP):
     (c) \({\mathbf{u}}_{m+1}^{N} \geq {\mathbf{u}}_{0}^{N}\),
     (d) \(\left (\mathbf{M} + k\mathbf{A}\right ){\mathbf{u}}_{m+1}^{N} \geq \mathbf{M}{\mathbf{u}}_{m}^{N}\),
     (e) \({\left ({\mathbf{u}}_{m+1}^{N} -{\mathbf{u}}_{0}^{N}\right )}^{\mathrm{T}}\left (\left (\mathbf{M} + k\mathbf{A}\right ){\mathbf{u}}_{m+1}^{N} -\mathbf{M}{\mathbf{u}}_{m}^{N}\right ) = 0\).

Proof.

See [21].

To solve the LCP, we use the PSOR algorithm [9]. Using the substitution \({\mathbf{v}}_{m+1}^{N} ={ \mathbf{u}}_{m+1}^{N} -{\mathbf{u}}_{0}^{N}\), the LCP for the American pricing problem can be posed as follows:
$$\begin{array}{rcl}{ \mathbf{v}}_{m+1}^{N}& \geq & \mathbf{0} \\ \left (\mathbf{M} + k\mathbf{A}\right ){\mathbf{v}}_{m+1}^{N}& \geq & \mathbf{M}{\mathbf{v}}_{ m}^{N} - k\mathbf{A}{\mathbf{u}}_{ 0}^{N} \\ {\left ({\mathbf{v}}_{m+1}^{N}\right )}^{\mathrm{T}}\left (\left (\mathbf{M} + k\mathbf{A}\right ){\mathbf{v}}_{ m+1}^{N} -\mathbf{M}{\mathbf{v}}_{ m}^{N} + k\mathbf{A}{\mathbf{u}}_{ 0}^{N}\right )& =& 0 \end{array}$$
(25)
Once the matrix LCP is solved, one can simply add back the payoff \({\mathbf{u}}_{0}^{N}\) to obtain the solution vector \({\mathbf{u}}_{M}^{N}\).
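A minimal sketch of one PSOR solve for the LCP (25) at a single time step (ours; ω is the relaxation parameter, and convergence requires suitable properties of B = M + kA, e.g. that it is symmetric positive definite or an M-matrix):

```python
import numpy as np

def psor_lcp(B, rhs, v0, omega=1.2, tol=1e-8, max_iter=10000):
    """Projected SOR for the LCP:  v >= 0,  B v >= rhs,  v^T (B v - rhs) = 0,
    with B = M + k A and rhs = M v_m - k A u_0 as in (25); v0 is the starting guess."""
    v = np.maximum(np.asarray(v0, dtype=float).copy(), 0.0)
    n = len(rhs)
    for _ in range(max_iter):
        v_old = v.copy()
        for i in range(n):
            gs = rhs[i] - B[i, :] @ v + B[i, i] * v[i]      # Gauss-Seidel residual for row i
            v[i] = max(0.0, (1.0 - omega) * v[i] + omega * gs / B[i, i])
        if np.max(np.abs(v - v_old)) < tol:
            break
    return v
```

The backward Euler time stepping then amounts to calling this routine once per step, with the right-hand side built from the previous solution, and adding back the payoff \({\mathbf{u}}_{0}^{N}\) at the end, as described above.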

4 Numerical Results

We study the CGMY process for different parameter settings using deterministic and Monte Carlo methods. Our numerical results are calculated based on the parameters found in Table 1. The parameters and reference values in column I were obtained from [16]; II is from [23]; III can be found in [1]; and IV is from [28]. We acknowledge that in the parameter set denoted II, we have G > M, and contend that G < M is more realistic in the context of options pricing. This is an empirical result derived from [5].

4.1 Monte Carlo Results

We first try to assess the bias of the Monte Carlo method for parameter set I, which yields an option price of 10.0000. We use a Monte Carlo sample size of N = 2,000 and time-step grids of length n = 10, 20, 50, 100, 200, 500, 1,000, 2,000, each replicated 1,000 times. In this example, where Y = 0 (Variance Gamma), the sample paths can be simulated exactly (for any discrete grid), so the truncation level ε plays no part. The Longstaff-Schwartz method, without bias-correction, produces an estimated price for the American put which is biased low, but it can be seen that the percent relative error (for this S0 ∕ K combination) drops below 1% for n = 200 (Table 2).
Table 1: CGMY and VG parameters

                   I           II          III         IV
  C                5           1           1           0.42
  G                18.37       8           5           4.37
  M                37.82       6           5           191.2
  Y                0           0           0.6         1.0102
  T                1           1           1           0.25
  r                0.1         0.06        0.1         0.06
  K                110         10          1           98
  S0               100         10          1           90
  f(S0, 0)_ref     10.00000    0.49587     0.11215     9.22548
Figure 1 depicts the estimated free exercise boundary derived from the n = 2,000 grid, averaged over all Monte Carlo replications. The Monte Carlo error bounds are pointwise 95% intervals, which are wider near maturity because the number of in-the-money paths is smaller in that region.
Fig. 1: Free exercise boundary for parameter set I of Table 1, estimated using least-squares Monte Carlo and the derived exercise times.

Table 2: Monte Carlo prices for parameter set I (Variance Gamma) for increasing grid order n. S0 = 100, K = 110. True price is 10.0000.

  n              10       20       50       100      200      500      1,000    2,000
  Est.           9.0452   9.5027   9.7975   9.8990   9.9498   9.9818   9.9920   9.9978
  s.e.           0.0935   0.0666   0.0435   0.0305   0.0224   0.0139   0.0105   0.0085
  % Rel. Error   9.55     4.97     2.02     1.01     0.50     0.18     0.08     0.02

For parameter set II (also Variance Gamma, but with spot price and strike equal, \({S}_{0}\,=\,K\,=\,10\)), the Monte Carlo least-squares method provides an increasingly high-biased estimate as grid size n increases, although for these parameter settings the percent relative error is within acceptable bounds (Table 3).

This pattern of results is repeated for parameter set III, where the spot and strike are again equal, but the process is now CGMY with Y = 0.6; for n = 200, the percent relative (high) bias is about 1.5%. Full results for this case are omitted here.
Table 3: Monte Carlo prices for parameter set II (Variance Gamma) for increasing grid order n. S0 = K = 10. True price is 0.49587.

  n              10       20       50       100      200
  Est.           0.4932   0.4962   0.4970   0.4978   0.4978
  s.e.           0.0163   0.0170   0.0171   0.0169   0.0177
  % Rel. Error   0.55     -0.07    -0.23    -0.38    -0.40

In the next study, we examine the effect of the simulation truncation on the Monte Carlo pricing. For n = 500 and N = 5,000, we examine the prices obtained for parameter set III (Y = 0.6) as ε is varied over the range \({2}^{-4},\ldots,{2}^{-30}\), together with the same settings with Y = 1.4. In the limit as \(\epsilon \rightarrow 0\), we expect to converge to a high-biased estimate using Monte Carlo least-squares; it is the variation in bias that we wish to study. Note that when truncation is used, the drift constant c must be adjusted (downwards in this case) to account for the replacement of the small jumps by a diffusion component. For full details, see ( [21], Appendices).
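
For concreteness, the variance of the discarded small jumps, which sets the standard deviation of the substitute Brownian component, can be computed in closed form for the CGMY Lévy density (26) given in the Appendix. The sketch below is ours (the function name is an assumption) and does not reproduce the drift adjustment detailed in [21].

```python
import numpy as np
from scipy.special import gamma, gammainc

def cgmy_small_jump_sigma(C, G, M, Y, eps):
    """sigma(eps)^2 = int_{|x| < eps} x^2 k(x) dx for the CGMY Levy density,
    expressed via the lower incomplete gamma function; valid for 0 <= Y < 2."""
    a = 2.0 - Y
    lower_gamma = lambda z: gamma(a) * gammainc(a, z)  # unnormalized lower incomplete gamma
    var = C * (G ** (Y - 2.0) * lower_gamma(G * eps)
               + M ** (Y - 2.0) * lower_gamma(M * eps))
    return np.sqrt(var)
```

As noted in the Introduction, the diffusion approximation is justified when sigma(eps)/eps diverges as eps tends to zero; this holds for Y > 0 but fails in the Variance Gamma case Y = 0.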

Figure 2 shows the change in price for the two scenarios. The effect of truncation is evident: the computed price decreases monotonically with ε. The observed effect is not a feature of the least-squares Monte Carlo method, which proceeds in an identical fashion irrespective of the value of ε; rather, it demonstrates that the truncation and the diffusion approximation have a substantial impact on the resulting price.

Summary

It is clear that although Monte Carlo least-squares pricing methods are readily implementable for exponential Lévy processes, their accuracy depends on user-specified quantities, even when bias-adjusted procedures are used. Two key parameters, the truncation level ε and the grid order n, have considerable influence on the resulting estimates. Whereas making ε as small as practicable does not adversely affect implementation (it slows the path simulation only slightly), the simulation cost increases linearly with n, making highly accurate computation prohibitively expensive.
Fig. 2

Least-squares Monte Carlo price as the truncation ε increases. The parameter set is III from Table 1 (Y = 0.6, left panel), plus the same settings with Y = 1.4 (right panel)

4.2 Deterministic Numerical Results

In Fig. 3, we demonstrate the pricing error near the free exercise boundary for an American put generated with parameter set I. Option prices were computed over the entire pricing domain (a range of S0 values) for a sequence of values of ε terminating at ε = 0, at the specified strike value. In this case, the reference value in Table 1 was recovered to eight decimal places of accuracy when S0 = 100, yet there are large pricing errors near the exercise boundary. The Variance Gamma setting (Y = 0) generated the starkest errors in the pricing problem; recall that the Variance Gamma process does not admit the diffusion approximation for the small jumps.

In Fig. 4, we show the convergence rates for the American puts near to and far away from the exercise boundary. Here, I, II, III, and IV correspond to the parameter sets in Table 1, (a) denotes errors near the exercise boundary, and (b) denotes errors away from the exercise boundary. The rates are exactly the rates 3 − Y shown for European options in [27]. However, the change in accuracy between prices at the exercise boundary and prices away from it is considerable. For the Variance Gamma parameter sets, one must take ε an order of magnitude smaller to achieve the same accuracy as for prices away from the exercise boundary. For the CGMY parameter sets, moving closer to the exercise boundary still has an effect, though not as great as in the Variance Gamma case. However, as Y grows, the convergence rate 3 − Y shrinks, so one must still reduce ε by about an order of magnitude to achieve the same accuracy at the exercise boundary. Choosing the grid size h to be smaller than ε (or choosing ε to be smaller than the grid size h) has no effect on these convergence rates.
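
As a rough illustrative calculation (ours, not drawn from the numerical results): if the pricing error behaves like \(K{\epsilon }^{3-Y}\) with a larger constant K near the exercise boundary, then matching an accuracy attained away from the boundary requires
$${\epsilon }_{\mathrm{boundary}} = {\epsilon }_{\mathrm{away}}{\left (\frac{{K}_{\mathrm{away}}}{{K}_{\mathrm{boundary}}}\right )}^{1/(3-Y )},$$
so a constant two orders of magnitude larger at the boundary with Y ≈ 1 forces ε down by a factor of roughly \({100}^{1/2} = 10\), consistent with the order-of-magnitude reductions observed above.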
Fig. 3

The relative error for the American put for a range of truncation levels ε. The parameter set is I from Table 1. Here, h = 0.0133, k = 0.0014, and R = 6

Note. Although we use the parameter ε in our deterministic study, this is simply to investigate numerically the nature of the error near the free exercise boundary. When computing options prices with the deterministic finite element method, one should never choose ε > 0, as this introduces unnecessary error into the pricing problem without yielding any extra efficiency; in the CGMY case with Y > 0, pricing via the finite element method is essentially exact under the conditions of Theorem 3.1, and for Y = 0 (the Variance Gamma case, where the conditions are not met), empirical evidence indicates that pricing with \(\epsilon = 0\) is most accurate, even though this remains mathematically unverified.
Fig. 4

Convergence in ε across all parameter sets. III(a): S0 = 0.72, Y = 0.5, h = 0.008, k = 0.002, R = 2; III(b): S0 = 0.86, Y = 0.5, h = 0.008, k = 0.002, R = 2; IV(a): S0 = 85.15, h = 0.0077, k = 0.0013, R = 5; IV(b): S0 = 132.65, h = 0.0077, k = 0.0013, R = 5; I(a): S0 = 104.99, h = 0.0122, k = 0.005, R = 5.5; I(b): S0 = 149.47, h = 0.0122, k = 0.005, R = 5.5; II(a): S0 = 8.75, h = 0.0089, k = 0.0014, R = 4; II(b): S0 = 11.16, h = 0.0089, k = 0.0014, R = 4

Finally, in Fig. 5, we vary Y in parameter set III and plot the percent relative error against Y. The trend is clear: as Y increases, the error due to truncation and regularization diminishes. As Y grows, the concentration of small jumps increases, and replacing the jumps of size less than ε with a continuous diffusion component makes intuitive sense. However, the convergence in ε also slows as Y increases, so it is not immediate that taking larger Y-values will result in higher accuracy.

From these numerical investigations, we see that there is decreased accuracy near the free exercise boundary. Hence when pricing American options via the small jump approximation, one must decrease ε in this region of the pricing domain.

Computation Time

We give a brief comparison of the computational burden associated with the Monte Carlo and finite element methods. In the Monte Carlo approach, the main determinants of computational load are the Monte Carlo sample size N, the path discretization n, and the simulation truncation ε. Simulation time increases approximately linearly in N and n for fixed ε, as the pathwise simulation essentially amounts to sampling an N × n matrix of increments of the underlying process (recall that the Monte Carlo precision increases at rate \({N}^{1/2}\)). Pricing time increases at order n, as the per-path computation of exercise/continuation values involves n computations and comparisons. The impact of the truncation parameter ε is harder to study, as it influences the rejection-sampling efficiency of the path simulation method of [20] in a non-trivial fashion, depending on the parameter settings; in particular, the efficiency decreases as Y increases. Figure 6 illustrates the CPU time required for the path simulation of N = 5,000 paths with n = 500 for parameter set III with Y = 0.6 and Y = 1.4.
Fig. 5

Relative error near the free exercise boundary, normalized by the option price, for ε = 0.05, h = 0.008, k = 0.002, R = 2

For the finite element method, the main determinant of computational cost is the grid size. Figure 7 depicts the increase (linear on the log-log scale) in computation (CPU) time for the finite element method as a function of grid size for parameter set III.
Fig. 6

CPU time in seconds for the Monte Carlo method (N = 5,000, n = 500) as a function of ε for parameter set III with Y = 0.6 (red), Y = 1.0 (green) and Y = 1.4 (blue). Timings on an HP Xeon quad-core 2.67 GHz workstation

5 Discussion

The investigations above lead us to conclude that although Monte Carlo pricing may have advantages in general (including the flexibility to price exotic options or products in higher-dimensional settings), deterministic numerical approaches are competitive, and even preferable, for pricing single-asset American options, as the number of Monte Carlo replicates required to match the accuracy and precision of the deterministic approaches is very large. We have demonstrated that the diffusion approximation for infinite activity processes performs as expected, yielding the correct convergence rates for the numerical procedures and rendering Monte Carlo feasible without excessive loss of precision. Even in the Variance Gamma case, where the diffusion approximation breaks down mathematically, the numerical procedures appear to perform adequately in option pricing settings away from the free exercise boundary. However, implementation of Monte Carlo least-squares pricing, even with bias-correction adjustments, requires considerable computation and a case-by-case specification of implementation constants. These issues are present, but less evident, in the Monte Carlo pricing of European options in a Lévy market.
Fig. 7

CPU time as a function of grid size for k = 0.01, R = 2, parameter set III

Another issue not addressed above is inference: we have assumed particular values for the CGMY parameters, but in practice these parameters must be estimated from data. Typically, historical asset and/or option price series of reasonable length are needed to obtain decent parameter estimates; maximum likelihood estimation for the asset price (risk-neutral) process is reasonably straightforward, but for option price series, moment-based or transform methods are the only techniques available. A necessary and important extension of our work is the development of mechanisms to propagate inferential uncertainty through the pricing machinery: when historical stock or option price data are used to infer parameters that appear in pricing formulae, the estimates are subject to random variation, and this should be recognized when prices are computed. For example, standard errors can be computed and used to assess the sensitivity of quoted prices, or Bayesian posterior distributions can be computed and used to produce price forecasts. Finally, the study of data-driven truncation choices remains an open area for future work.

6 Appendix: The Simulation of the CGMY Process

The algorithm from [20] is used to simulate the CGMY process needed for Monte Carlo pricing; see Sect. 4 of that paper for full details. The algorithm adopts the approach introduced by [17], and is based on the following representation of the CGMY process as a time-changed Brownian motion with drift. Recall that the Lévy density, k(x), of the CGMY process takes the form
$$k\left (x\right ) = C\left \{\begin{array}{@{}l@{\quad }l@{}} {\left \vert x\right \vert }^{-1-Y }{e}^{-G\left \vert x\right \vert }\quad &\mathrm{if\ }x < 0, \\ {\left \vert x\right \vert }^{-1-Y }{e}^{-Mx} \quad &\mathrm{if\ }x >0, \end{array} \right.$$
(26)
where G, M, C > 0 and 0 < Y < 2. Now, let \({A}_{1} = (G - M)/2\), \({A}_{2} = (G + M)/2\), and consider the Lévy subordinator Vt with Lévy density
$${k}_{V }(t) = \frac{C\exp \{t{A}_{1}^{2}/2 - t{A}_{2}^{2}/4\}{\mathcal{D}}_{-Y }({A}_{2}\sqrt{t})} {{t}^{Y/2+1}} = \frac{C{A}_{3}(t)} {{t}^{Y/2+1}} \qquad t > 0$$
say, where \({\mathcal{D}}_{\nu }\) is the parabolic cylinder function with parameter ν (this special function is available in Matlab, and can also be computed and stored as a look-up table). Then, if \({B}_{t}\) is a standard Brownian motion, using results from [25], Theorem 30.1, the process
$${X}_{t} = {A}_{1}{V}_{t} + {B}_{{V}_{t}}$$
(27)
is a Lévy process with Lévy density identical to (26), and with drift/centering parameter b (in the Lévy triplet) given by
$$b =\int x\left (1 - {e}^{-{A}_{1}x}\right )k(x)\,dx.$$
If \(Y \neq 1\), then \(b = C\Gamma (1 - Y )({M}^{Y -1} - {G}^{Y -1})\). Furthermore, it is evident that for t > 0, \({k}_{V }(t) = f(t){k}_{0}(t)\), where
$$f(t) = \frac{{2}^{Y/2}\Gamma (Y/2 + 1/2){A}_{3}(t)} {\sqrt{\pi }},\qquad \qquad {k}_{0}(t) = \frac{{2}^{-Y/2}\sqrt{\pi }\:C} {\Gamma (Y/2 + 1/2){t}^{Y/2+1}}.$$
Note that f(t) ≤ 1, and also that k0(t) is the Lévy density of the Y/2-stable subordinator.
The simulation of \({V}_{t}\) proceeds using a rejection sampling approach, after approximate simulation of a Y/2-stable subordinator, \({U}_{t}\), using a compound Poisson approximation (that is, using a truncation of the Lévy measure in a manner similar to that described in Sect. 2.4 of this paper). Specifically, for ε > 0, let
$${k}_{0}^{\epsilon }(t) = \frac{{2}^{-Y/2}\sqrt{\pi }\:C} {\Gamma (Y/2 + 1/2){t}^{Y/2+1}} = \frac{{K}_{0}} {{t}^{Y/2+1}}\qquad t > \epsilon,$$
(28)
and zero otherwise. The drift in the approximating process induced by the truncation is easily computed to be \(d = {K}_{0}{\epsilon }^{1-Y/2}/(1 - Y/2)\). Simulation of the finite-activity, pure-jump process with Lévy density (28) is straightforward; it is a compound Poisson process which may be simulated using either discretization ( [26], pp. 103–4) or directly using the series representation ( [10], see also [24]).
The resulting representation of Ut on finite interval [0, T] say takes the form
$${U}_{t}\stackrel{\mathcal{L}}{=}\sum_{i=1}^{{n}_{J}}{J}_{i}\mathbf{1}\{{\tau }_{i} \leq t\}\qquad 0 \leq t \leq T,$$
where \({\tau }_{1},\ldots,{\tau }_{{n}_{J}}\) are uniform order statistics representing the event times of a unit rate Poisson process, and \({J}_{1},\ldots,{J}_{{n}_{J}}\) are a collection of jump sizes. The corresponding representation of Vt takes the form
$${V}_{t}\stackrel{\mathcal{L}}{=} t\,d +\sum_{i=1}^{{n}_{J}}{J}_{i}\mathbf{1}\{{\tau }_{i} \leq t\}\mathbf{1}\{f({J}_{i}) > {W}_{i}\}\qquad 0 \leq t \leq T,$$
where d is the previously computed drift, and where \({W}_{1},\ldots,{W}_{{n}_{J}}\) are independent and identically distributed Uniform(0,1) random variables that implement the rejection step. This allows approximate simulation of Xt via (27).
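
A compact sketch of this scheme is given below. It is our own illustrative code: the function name is an assumption, SciPy's parabolic cylinder function pbdv is used in place of a look-up table, and it requires 0 < Y < 2 (the Variance Gamma case Y = 0, which is simulated exactly in Sect. 4.1, is not covered by this representation).

```python
import numpy as np
from scipy.special import gamma, pbdv

def simulate_cgmy_x(C, G, M, Y, T, n_steps, eps, rng=None):
    """Approximate simulation of X_t in (27) on a regular grid over [0, T],
    via the truncated Y/2-stable subordinator and rejection (thinning)."""
    rng = np.random.default_rng() if rng is None else rng
    A1, A2 = (G - M) / 2.0, (G + M) / 2.0

    # Constant K0 of (28), total jump intensity on (eps, inf), and truncation drift d
    K0 = 2.0 ** (-Y / 2) * np.sqrt(np.pi) * C / gamma(Y / 2 + 0.5)
    lam = 2.0 * K0 * eps ** (-Y / 2) / Y
    d = K0 * eps ** (1 - Y / 2) / (1 - Y / 2)

    # Compound Poisson skeleton of U_t: Poisson jump count, uniform jump times,
    # and jump sizes drawn by inverse CDF from the truncated density (28)
    n_J = rng.poisson(lam * T)
    tau = np.sort(rng.uniform(0.0, T, n_J))
    J = eps * (1.0 - rng.uniform(size=n_J)) ** (-2.0 / Y)

    def f(t):
        # Acceptance probability f(t) <= 1; for strongly asymmetric G, M the
        # exponential and D_{-Y} factors should be combined in log space.
        A3 = np.exp(t * A1 ** 2 / 2 - t * A2 ** 2 / 4) * pbdv(-Y, A2 * np.sqrt(t))[0]
        return 2.0 ** (Y / 2) * gamma(Y / 2 + 0.5) * A3 / np.sqrt(np.pi)

    W = rng.uniform(size=n_J)
    keep = np.array([f(j) > w for j, w in zip(J, W)], dtype=bool)

    # V_t = t d + sum of accepted jumps up to t, then X_t = A1 V_t + B_{V_t}
    grid = np.linspace(0.0, T, n_steps + 1)
    V = grid * d + np.array([J[keep & (tau <= t)].sum() for t in grid])
    B_V = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(np.diff(V))))))
    return grid, A1 * V + B_V
```

The returned skeleton of X can then be fed into the exponential Lévy model and the least-squares pricing step; the drift/centering adjustment b and any martingale correction needed for pricing are omitted from this sketch.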

Acknowledgements

Nešlehová and Stephens acknowledge the support of Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants. Nešlehová also acknowledges the support of an FQRNT Nouveau Chercheur grant.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Lisa J. Powers (1)
  • Johanna Nešlehová (1)
  • David A. Stephens (1)

  1. Department of Mathematics and Statistics, McGill University, Montréal, Canada
