Random walk algorithm for the Dirichlet problem for parabolic integro-differential equation

We consider stochastic differential equations (SDEs) driven by general Lévy processes with infinite activity and the related, via the Feynman–Kac formula, Dirichlet problem for a parabolic integro-differential equation (PIDE). We approximate the solution of the PIDE using a numerical method for the SDEs. The method is based on three ingredients: (1) we approximate small jumps by a diffusion; (2) we use restricted jump-adaptive time-stepping; and (3) between the jumps we exploit a weak Euler approximation. We prove weak convergence of the considered algorithm and present an in-depth analysis of how its error and computational cost depend on the jump activity level. Results of some numerical experiments, including pricing of barrier basket currency options, are presented.


Introduction
Stochastic differential equations driven by Lévy processes (SDEs) have become a very important modelling tool in finance, physics, and biology (see e.g. [1,4,6,24]). Successful use of SDEs relies on effective numerical methods. In this paper, we are interested in weak-sense approximation of SDEs driven by general Lévy processes in which the noise has both Wiener and Poisson components, including the case of infinite jump activity.
Let G be a bounded domain in R d , let Q = [t 0 , T ) × G be a cylinder in R d+1 , and let Γ = Q̄ \ Q be the part of the cylinder's boundary consisting of the upper base and the lateral surface. Let G c = R d \ Ḡ be the complement of G and Q c := ((t 0 , T ] × G c ) ∪ ({T } × Ḡ). Consider the Dirichlet problem for the parabolic integro-differential equation (PIDE) (1.1), where the integro-differential operator L is of the form (1.2); g(t, x) and ϕ(t, x) are scalar functions; F(t, x) = (F i j (t, x)) is a d × m matrix; and ν(dz), z ∈ R m , is a Lévy measure such that ∫ R m (|z| 2 ∧ 1) ν(dz) < ∞. We allow ν to be of infinite intensity, i.e. we may have ν(B(0, r )) = ∞ for some r > 0, where, as usual, for x ∈ R d and s > 0 we write B(x, s) for the open ball of radius s centred at x.
The Feynman–Kac formula provides a probabilistic representation of the solution u(t, x) of (1.1) in terms of a system of Lévy-driven SDEs (see Sect. 2), which can be viewed as a system of characteristics for this PIDE. A weak-sense approximation of the SDEs together with the Monte Carlo technique gives us a numerical approach to evaluating u(t, x), which is especially effective in higher dimensions.
There has been a considerable amount of research on weak-sense numerical methods for Lévy-type SDEs of finite and infinite activity (see e.g. [10][11][12]14,15,17,[20][21][22][23] and references therein). Our approach is most closely related to [12]. As in [3,11,12], we replace small jumps with an appropriate Brownian motion, which makes the numerical solution of SDEs with infinite activity of the Lévy measure feasible in practice. There are three main differences between our approach and that of [12]. First, we use restricted jump-adapted time-stepping, while in [12] jump-adapted time-stepping was used. Here by jump-adapted we mean that time discretization points are located at the jump times τ k and between the jumps the remaining diffusion process is effectively approximated [11,12]. By restricted jump-adapted time-stepping, we understand the following. We fix a time-discretization step h > 0. If the jump time increment δ for the next time step is less than h, we set the time increment θ = δ, otherwise θ = h; i.e., our time steps are defined as θ = δ ∧ h. We note that this time-stepping strategy differs from those commonly used in the literature, including in the finite-activity (i.e., jump-diffusion) case. For example, in the finite-activity case it is common [14,20,21] to simulate the jump times τ k before the start of the simulation and then superimpose those random times on a grid with some constant or variable finite, small time step h. Our time-stepping approach is more natural for the problem under consideration than both commonly used strategies; its benefits are discussed in Sect. 3, with the infinite activity case considered in more detail in Sects. 3.5 and 4.2. Restricting δ by h is beneficial for accuracy when jumps are rare (e.g. in the jump-diffusion case) and it is also beneficial for convergence rates (measured in the average number of steps) in the case of an α-stable Lévy measure with α ∈ (1, 2) (see Sects. 3 and 4).
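The restricted jump-adapted time-stepping described above can be sketched in a few lines. The following is a minimal illustration, assuming the waiting time δ to the next jump is exponential with a known intensity lam; the function and parameter names are ours, not from the paper.

```python
import random

def restricted_step(t, T, h, lam, rng):
    # Draw the waiting time delta ~ Exp(lam) to the next jump and advance by
    # theta = delta ∧ h, never stepping past the terminal time T.
    delta = rng.expovariate(lam)
    theta = min(delta, h, T - t)
    jump_executed = delta < h and delta <= T - t  # jump only if it arrives within h
    return t + theta, jump_executed
```

Iterating restricted_step from t 0 until T produces the random time grid: when jumps are rare (λh small), almost every step has length h and a regular grid is recovered; when jumps are frequent, almost every step ends at a jump time.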
Second, in comparison with [12] we explicitly show the (singular) dependence of the numerical integration error of our algorithm on the parameter ε, which is the cut-off below which small jumps are replaced by the Brownian motion. Third, in comparison with the literature we consider the Dirichlet problem for PIDEs, though we also comment on the Cauchy case in Sect. 3.4, which is novel with respect to the use of restricted time-stepping and the dependence of the algorithm's error on ε.
The paper is organised as follows. In Sect. 2, we write down a probabilistic representation for the solution u(t, x) of (1.1), we state assumptions used throughout the paper, and we consider the approximation u_ε(t, x) that solves an auxiliary Dirichlet problem corresponding to the system of characteristics with jumps below ε cut off. In Sect. 3, we introduce the numerical algorithm which approximates u_ε(t, x). The algorithm uses the restricted jump-adapted time-stepping and approximates the diffusion by a weak Euler scheme. In this section we also obtain and discuss the weak-sense error estimate for the algorithm. In Sect. 4, we illustrate our theoretical findings by numerical examples, including an application of our algorithm to pricing an FX barrier basket option whose underlyings follow an exponential Lévy model.

Preliminaries
Let (Ω, F, {F t } t 0 ≤t≤T , P) be a filtered probability space satisfying the usual hypotheses. The operator L defined in (1.2), on an appropriate domain, is the generator of the d-dimensional process X t 0 ,x (t) given by (2.1), where Z (t), t ≥ t 0 , is an m-dimensional Lévy process with a prescribed characteristic exponent. In Example 2.1, the simpler equation (2.2) is considered instead of the general SDEs (2.1); Eq. (2.2) is obtained as a special case of (2.1) by an appropriate choice of the coefficients. When the solution u of (1.1) is regular enough, for example u ∈ C 1,2 ([t 0 , T ] × R d ), it can be shown that u has the probabilistic representation (2.3), in which (X t,x (s), Y t,x,y (s), Z t,x,y,z (s)), s ≥ t, solves the system of SDEs consisting of (2.1) together with the equations for Y and Z , and τ t,x = inf{s ≥ t : (s, X t,x (s)) ∉ Q} is the first exit time of the space-time Lévy process (s, X t,x (s)) from the space-time cylinder Q. To see why this holds, one may apply Ito's lemma, see e.g. [2, Theorem 4.4.7], and the fact that u solves (1.1) to prove that the corresponding process is a martingale. The claimed formula then follows by the optional stopping theorem applied at τ t,x .
If one can simulate trajectories of {(s, X t,x (s), Y t,x,1 (s), Z t,x,1,0 (s)); s ≥ 0}, then the solution of the Dirichlet problem for the PIDE (1.1) can be estimated by applying the Monte Carlo technique to (2.3). This approach, however, is not generally implementable for Lévy measures of infinite intensity, that is, when ν(B(0, r )) = ∞ for some r > 0. The difficulty arises from the presence of an infinite number of small jumps in any finite time interval, and can be overcome by replacing these small jumps by an appropriate diffusion, exploiting the idea of the method developed in [3,11], which we apply here. Alternatively, the issue can be overcome if one can simulate the increments of the Lévy process directly. We do not discuss this case in this paper, as we only assume that one has access to the Lévy measure.

Approximation of small jumps by diffusion
We will now consider the approximation of (2.1) discussed above, where small jumps are replaced by an appropriate diffusion. In the case of the whole space (the Cauchy problem for a PIDE) such an approximation was considered in [3,11], see also Sect. 3.4 here.
Let γ_ε be an m-dimensional vector with the components (2.6) and B_ε an m × m matrix with the components (2.7), and let β_ε be obtained from the factorization β_ε β_ε⊤ = B_ε . Note that the |B_ε i j | (and hence also the elements of β_ε ) are bounded by a constant independent of ε thanks to the Lévy measure definition.
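For a concrete one-dimensional Lévy density, these quantities can be evaluated numerically. The sketch below uses the standard small-jump truncation formulas B_ε = ∫_{|z|≤ε} z² ν(dz) and γ_ε = ∫_{ε<|z|≤1} z ν(dz) as stand-ins for (2.6)–(2.7), whose exact expressions are not reproduced above, together with a hypothetical tempered-stable-type density with constants c_plus, c_minus of our own choosing:

```python
import math

def nu(z, c_plus=1.0, c_minus=1.0, alpha=0.5, mu=0.0):
    # hypothetical tempered-stable-type Levy density: c(z) |z|^{-1-alpha} e^{-mu|z|}
    c = c_plus if z > 0 else c_minus
    return c * abs(z) ** (-1.0 - alpha) * math.exp(-mu * abs(z))

def B_eps(eps, n=100000, **kw):
    # B_eps = int_{|z|<=eps} z^2 nu(z) dz via the midpoint rule on (0, eps]
    h = eps / n
    return sum((i + 0.5) * h * (i + 0.5) * h *
               (nu((i + 0.5) * h, **kw) + nu(-(i + 0.5) * h, **kw)) * h
               for i in range(n))

def gamma_eps(eps, n=100000, **kw):
    # gamma_eps = int_{eps<|z|<=1} z nu(z) dz; vanishes for a symmetric density
    h = (1.0 - eps) / n
    return sum((eps + (i + 0.5) * h) *
               (nu(eps + (i + 0.5) * h, **kw) - nu(-(eps + (i + 0.5) * h), **kw)) * h
               for i in range(n))
```

For μ = 0 one can check B_ε = (c_+ + c_−) ε^{2−α}/(2 − α), so the variance of the compensating diffusion vanishes as ε → 0, while the jump intensity of the remaining large jumps blows up.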

Remark 2.2
In many practical situations (see e.g. [6]), where the dependence among the components of X (t) introduced through the structure of the SDEs is sufficient, we can take the components of the driving Poisson measure to be independent. This amounts to saying that ν is concentrated on the axes, and as a result B_ε will be a diagonal matrix.
We shall consider the modified jump-diffusion X_ε t 0 ,x (t) defined by (2.8), where W (t) is a standard m-dimensional Wiener process, independent of N and w. We observe that, in comparison with (2.1), in (2.8) jumps less than ε in magnitude are replaced by the additional diffusion part. In this way, the new Lévy measure has finite activity, allowing us to simulate its events exactly, i.e. in a practical way. Consequently, we can approximate the solution u(t, x) of the PIDE (1.1) by u_ε(t, x) defined in (2.9), where τ_ε t,x = inf{s ≥ t : (s, X_ε t,x (s)) ∉ Q} is the first exit time of the space-time Lévy process (s, X_ε t,x (s)) from the space-time cylinder Q and (X_ε t,x (s), Y_ε t,x,y (s), Z_ε t,x,y,z (s)), s ≥ 0, solves the system of SDEs consisting of (2.8) along with (2.10)–(2.11). Since the new Lévy measure has finite activity, we can derive a constructive weak scheme for (2.8), (2.10)–(2.11) (see Sect. 3). By using this method together with the Monte Carlo technique, we arrive at an implementable approximation of u_ε(t, x) and hence of u(t, x). We will next show that u_ε defined in (2.9) is indeed a good approximation to the solution of (1.1). Before proceeding, we need to formulate appropriate assumptions.

Remark 2.3
Since G is bounded, in practice the above assumptions in the space variable are only required in Ḡ. We chose to impose them in R d to simplify the presentation, as this allows us to construct a global solution to the SDEs (2.8), rather than having to deal with local solutions built up to the exit time from the domain. In practice the assumption can be bypassed by multiplying the coefficients by a bump function that vanishes outside G, without affecting the value of (2.3).
In order to streamline the presentation and avoid lengthy technical discussions (see Remarks 2.4 and 2.5), we will make the following assumption regarding the regularity of solutions to (1.1).
In addition to the PIDE problem (1.1), we also consider the PIDE problem (2.15) for u_ε from (2.9). Again, for simplicity (but see Remark 2.4), we impose the following conditions on the solution u_ε of this Dirichlet problem.
Finally, we also require that u_ε and its derivatives grow at most polynomially at infinity.

Assumption 2.5 (Smoothness and growth)
There exist constants K > 0 and q ≥ 1 such that for all x ∈ R d , all t ∈ [t 0 , T ] and ε > 0, the solution u_ε of the PIDE problem (2.15) and its derivatives satisfy the bound (2.16), where 0 ≤ 2l + j ≤ 4, the sum of the i k over k = 1, . . . , j equals j, and the i k are integers from 0 to d.

Remark 2.4

Sufficient conditions guaranteeing Assumptions 2.3, 2.4 and 2.5 consist in sufficient smoothness of the coefficients, the boundary ∂G, and the function ϕ, and in appropriate compatibility of ϕ and g and also of the integral operator (see e.g. [8,9,16]).

Remark 2.5
The main goal of the paper is to present the numerical method and study its convergence under 'good' conditions when its convergence rates are optimal (i.e., the highest possible). As usual, in these circumstances, the conditions (here Assumptions 2.3, 2.4, and 2.5) are somewhat restrictive. See Theorem 3.3 in [8, p. 93], which gives sufficient conditions for Assumption 2.3 to hold. If one drops the compatibility condition (3.11) in Theorem 3.3 of [8, p. 93], then, as in the diffusion case, the smoothness of the solution is lost through the boundary of Q at the terminal time T . This affects only the last step of the method, and the proof can be modified accordingly (see such a recipe in the case of the Neumann problem and diffusion in e.g. [13]), but we do not include such complications here for transparency of the proofs. Further, in the case of an α-stable Lévy process with α ∈ (1, 2), spatial derivatives of u(t, x) may blow up near the boundary ∂G; the blow-up is polynomial, with the power dependent on α, if the integral operator does not satisfy certain compatibility conditions (see the discussion in [8, p. 96]). This situation requires further analysis of the proposed method, which is beyond the scope of the present paper. At the same time, the method can be successfully used when the assumptions stated in this section are not satisfied, as demonstrated in our numerical experiments (see Sect. 4.3).

Closeness of u_ε(t, x) and u(t, x)
We now state and prove the theorem on closeness of u_ε(t, x) and u(t, x). In what follows we use the same letters K and C for various positive constants independent of x, t, and ε. Theorem 2.1 states the bound (2.18), where K > 0 does not depend on t, x, ε.
By Ito's formula, we get (2.20). Since u(t, x) solves (1.1), and recalling (2.6), we obtain (2.21) from (2.20). Replacing s with the stopping time τ_ε t,x in (2.21) (cf. (2.19)), taking expectations of the resulting left- and right-hand sides of (2.21), and using the martingale property and (2.7), we arrive at (2.22). By Taylor's expansion, we get (2.23) for some θ ∈ [0, 1] which may depend on the randomness, where to obtain inequality (2.24) we used the fact that, by the definition of τ_ε t,x , X_ε t,x (s−) ∈ G for s ≤ τ_ε t,x , and therefore the remainder is bounded by some K > 0 that does not depend on ε, t, x, s, after noting that |z| < ε. Using Assumption 2.3 and combining (2.22)–(2.24), we arrive at the required estimate (2.18).

Weak approximation of jump-diffusions in bounded domains
In this section we propose and study a numerical algorithm which weakly approximates the solution of the jump-diffusion (2.8), (2.10)–(2.11) with finite intensity of jumps in a bounded domain, i.e., which approximates u_ε(t, x) from (2.9). In Sect. 3.1 we formulate the algorithm, based on the simplest random walk. We analyse the one-step error of the algorithm in Sect. 3.2 and its global error in Sect. 3.3.

Algorithm
In what follows we also require the following to hold.
This is a natural assumption, since Lévy measures of practical interest (see e.g. [6] and also Example 2.1 and the examples of Sect. 4) have this property.
Let us describe an algorithm for simulating a Markov chain that approximates a trajectory of (2.8), (2.10)–(2.11). In what follows we assume that we can exactly sample the intervals δ between consecutive jump times, which are exponentially distributed with intensity λ_ε , and the jump sizes J , which are distributed according to the density obtained by restricting ν to {|z| > ε} and normalizing by λ_ε .

Remark 3.1
There are known methods for simulating jump times and sizes for many standard distributions. In general, if there exists an explicit expression for the jump size density, one can construct a rejection method to sample jump sizes. An overview with regard to simulation of jump times and sizes can be found in [6,7].
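As a concrete illustration of such a rejection method, the sketch below samples jump sizes for a hypothetical tempered-stable-type density proportional to |z|^{−1−α} e^{−μ|z|} on |z| > ε (an assumption of ours, not the paper's specific measure). The proposal is a symmetric Pareto tail, sampled by inverting its CDF, and the tempering factor e^{−μ|z|} ≤ 1 serves as the acceptance probability:

```python
import math, random

def sample_jump(eps, alpha, mu, rng):
    # Target density on |z| > eps: proportional to |z|^{-1-alpha} * exp(-mu*|z|).
    # Proposal: |z| = eps * U^{-1/alpha} (Pareto tail via inverse CDF), random sign;
    # accept with probability exp(-mu*|z|) (thinning by the tempering factor).
    while True:
        u = 1.0 - rng.random()                  # u in (0, 1]
        mag = eps * u ** (-1.0 / alpha)         # proposal magnitude, always >= eps
        if rng.random() < math.exp(-mu * mag):
            return mag if rng.random() < 0.5 else -mag
```

Since the acceptance ratio is bounded by one by construction, the sampler needs no explicit envelope constant; for μ = 0 every proposal is accepted and the method reduces to plain inversion.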
Thanks to Assumption 3.1, we have the moment bound (3.3), with K > 0 independent of ε and p ≥ 2. We also note the bound (3.4), where K > 0 is a constant independent of ε; it follows by the Cauchy–Schwarz inequality thanks to the Lévy measure definition. We now describe the algorithm. Fix a time-discretization step h > 0 and suppose the current position of the chain is (t, x, y, z). If the jump time increment δ < h, we set θ = δ; otherwise we set θ = h. In the case θ = h, we apply the weak explicit Euler approximation with the simplest simulation of noise to the system (2.8), (2.10)–(2.11) with no jumps, i.e. (3.5), with ξ 1 , . . . , ξ d and η 1 , . . . , η m mutually independent random variables taking the values ±1 with equal probability. In the case θ < h, we replace (3.5) by the corresponding explicit Euler approximation with the jump term added. Recall that u_ε(t, x) solves the problem (2.15). Introduce a discretization of the interval [t 0 , T ], for example the equidistant one. To approximate the solution of the system (2.8), (2.10)–(2.11), we construct a Markov chain (ϑ k , X k , Y k , Z k ) which stops at a random step κ when (ϑ k , X k ) exits the domain Q. The algorithm is formulated as Algorithm 1 below.

Remark 3.2
If λ_ε is large, so that 1 − e^{−λ_ε h} is close to 1, then I k = 1 (i.e., a jump happens) on almost every time step. In this situation it is computationally beneficial to modify Algorithm 1 in the following way: instead of sampling both I k and θ k , sample δ k according to the exponential distribution with parameter λ_ε and set θ k = δ k ∧ h and I k = I(δ k < h). We note [18,19] that in the diffusion case (i.e., when there is no jump component in the noise which drives the SDEs), solving Dirichlet problems for parabolic or elliptic PDEs requires complementing a random walk inside the domain G with a special approximation near the boundary ∂G. In contrast, in the case of Dirichlet problems for PIDEs we do not need a special construction near the boundary, since the boundary condition is defined on the whole complement G c . Here, when the chain X k exits G, we know the exact value of the solution u_ε(ϑ κ , X κ ) = ϕ(ϑ κ , X κ ) at the exit point (ϑ κ , X κ ), while in the diffusion case, when a chain exits G, we do not know the exact value of the solution at the exit point and need an approximation. Due to this fact, Algorithm 1 is somewhat simpler than algorithms for Dirichlet problems for parabolic or elliptic PDEs (cf. [18,19] and references therein).

Algorithm 1 Algorithm for (2.8), (2.10)-(2.11).
Simulate ξ k and η k with i.i.d. components taking the values ±1 with probability 1/2, independently of each other; if I k = 0, then apply the pure-diffusion Euler step, otherwise apply the Euler step with the jump added.
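A one-dimensional toy version of Algorithm 1 can be sketched as follows. The coefficients b, sigma, the jump factor f_jump, the jump sampler sample_J, the domain indicator in_domain and the boundary function phi are placeholders to be supplied by the user; the restricted step θ = δ ∧ h and the ±1 weak-Euler noise follow the construction above.

```python
import math, random

def run_chain(x0, t0, T, h, lam, b, sigma, f_jump, sample_J, in_domain, phi, rng):
    # One trajectory of the restricted jump-adapted weak Euler chain (1-D toy):
    # theta = min(delta, h) with delta ~ Exp(lam); between jumps a weak Euler step
    # with +-1 noise; a jump F*J is added when delta < h; stop on exit or at T.
    t, x = t0, x0
    while t < T and in_domain(x):
        delta = rng.expovariate(lam)
        theta = min(delta, h, T - t)
        xi = 1.0 if rng.random() < 0.5 else -1.0   # weak-sense Bernoulli noise
        x = x + b(t, x) * theta + sigma(t, x) * math.sqrt(theta) * xi
        if delta < h and delta <= T - t:           # restricted jump-adapted step
            x = x + f_jump(t, x) * sample_J(rng)
        t = t + theta
    return t, x, phi(t, x)                         # exact boundary/terminal value
```

Averaging phi over many trajectories gives the Monte Carlo estimate of u_ε(t 0 , x 0 ); as noted in Remark 3.2, no special boundary treatment is needed because phi is known on the whole complement of G.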

One-step error
In this section we consider the one-step error of Algorithm 1. One step of this algorithm takes the form (3.9) for (t, x) ∈ Q. Before we state and prove an error estimate for one step of Algorithm 1, we need to introduce some additional notation. For brevity, let us introduce the intermediate points Q i defined in (3.12), where x ∈ G. Note that Q i , i = 1, 2, 3, can be outside G.
where Q i are defined in (3.12).
Proof It is not difficult to see that the points Q i , i = 1, 2, have the form given above, where c 1 is either 0 or 1. The random variables ξ and η are bounded, and hence so are all their moments. The functions b(t, x), σ (t, x) and F(t, x) are bounded for (t, x) ∈ Q, and for x ∈ G, |x| 2 p is also bounded. Recall that sufficiently high moments of J are bounded as in (3.3). Then, using the Cauchy–Schwarz inequality, we can show the required bound; hence we obtain (3.13). The bound (3.14) is shown analogously.
We will need the following technical lemma.
where K > 0 depends on p but is independent of λ_ε and h.

Proof
The proof is by induction. By straightforward calculations, we obtain the base case. Then, assuming that (3.15) is true for a given integer p ≥ 2, we obtain the inductive step.

Now we prove an estimate for the one-step error.
We write u (l,k) [ f 1 , . . . , f k ] for the l-th time derivative of the k-th spatial derivative evaluated in the directions f 1 , . . . , f k ; for example, for k = 2 and l = 1 this denotes the corresponding mixed derivative. We will also use the short notation introduced below. The aim of this theorem is to obtain an error estimate explicitly capturing the (singular) dependence of the one-step error on ε. To this end, we split the error into several parts according to the intermediate points Q i defined in (3.12).
Using (3.9) and (3.12), we obtain the decomposition of the one-step error. To precisely account for the factor γ_ε and the powers of θ in the analysis of the one-step error, we use multiple Taylor expansions of u_ε(t + θ, X ), with the remainders as specified. Using (3.17), (3.10)–(3.11), and the facts that ξ and η have mean zero and that the components of ξ, η, θ, J are mutually independent, we obtain the intermediate identities; the elementary formulas (3.19) are needed for these calculations. Noting that u 4 = u_ε(t, x) = u_ε and using (3.18), (3.12), (3.19) and (2.15), we obtain the expression for the one-step error R 0 . It is clear that many of the terms in R 0 are non-zero only in the case θ < h, i.e. when a jump occurs. We rearrange the terms in R 0 according to their degree in θ :

I(θ<h)θ-terms
Now, to estimate the terms in the error R 0 , we observe that: (i) ∫_{|s|>ε} s ν(ds) = γ_ε + ∫_{|s|>1} s ν(ds), with the latter integral bounded, and, in particular, |E[J ]| ≤ K (1 + |γ_ε |)/λ_ε ; (ii) E|J | 2 p , p ≥ 1, are bounded by K /λ_ε (see (3.3)); (iii) the terms R 17 , R 18 , R 19 , R 21 and R 22 contain derivatives of u_ε evaluated at or between the points Q 3 and Q 4 , and in their estimation Assumption 2.5 and (3.14) from Lemma 3.1 are used; (iv) the terms R 11 , R 12 , R 13 , R 14 , R 15 and R 16 contain derivatives of u_ε evaluated at or between the points Q 1 and Q 2 , and in their estimation Assumption 2.5, (3.13) from Lemma 3.1, and Lemma 3.2 are used; (v) γ_ε 2 /λ_ε is bounded by a constant independent of ε. As a result, we obtain the term-by-term bounds, where all constants K i > 0 are independent of h and ε, and q ≥ 1.
Overall we obtain (3.16).

Remark 3.4

We note the following two asymptotic regimes for the one-step error (3.16). For λ_ε h < 1 (in practice, this occurs only when λ_ε is small or moderate, as it is for jump-diffusions), we can expand the exponential in (3.16) and obtain that the one-step error is of order O(h 2 ). When λ_ε is very large (e.g., for small ε in the infinite activity case), the term with e^{−λ_ε h} can be neglected. The usefulness of the more precise estimate (3.16) is that it covers the situations in between these two asymptotic regimes and also allows one to consider the interplay between h and ε (see Sect. 3.5).

Global error
In this section we obtain an estimate for the global weak-sense error of Algorithm 1. We first estimate the average number of steps E[κ] of Algorithm 1.

Lemma 3.3 (Number of steps)
The average number of steps κ for the chain X k from Algorithm 1 satisfies the following bound. Proof It is obvious that if we replace the bounded domain G in Algorithm 1 with the whole space R d (i.e., replace the Dirichlet problem by the Cauchy one), then the corresponding number of steps κ' of Algorithm 1 is not less than κ. Hence it is sufficient to get an estimate for E[κ']. Let δ 1 , δ 2 , . . . be the interarrival times of the jumps, θ i = δ i ∧ h for i ≥ 0, and S k = θ 0 + · · · + θ k−1 for k ≥ 0. Introduce the martingale S̃ 0 = 0 and S̃ k := S k − kE[θ] for k ≥ 1. Since θ i ≤ h, we have that S̃ κ'−1 ≤ S κ'−1 < T − t 0 almost surely, and thus by the optional stopping theorem we obtain the required bound on E[κ'] and we conclude. We also need the following auxiliary lemma.
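The key quantity in this proof is E[θ] = E[δ ∧ h] for δ ~ Exp(λ_ε), which equals ∫_0^h e^{−λ_ε t} dt = (1 − e^{−λ_ε h})/λ_ε by integrating the survival function. A quick numerical check, with our own function names:

```python
import math, random

def mean_step(lam, h):
    # E[min(delta, h)] for delta ~ Exp(lam): integral over [0, h] of the
    # survival function P(delta > t) = e^{-lam*t}
    return (1.0 - math.exp(-lam * h)) / lam

def mean_step_mc(lam, h, n, rng):
    # crude Monte Carlo check of the closed form
    return sum(min(rng.expovariate(lam), h) for _ in range(n)) / n
```

The bound on E[κ] then follows since the chain needs roughly (T − t 0 )/E[θ] steps on average; note E[θ] ≤ min(h, 1/λ_ε), reflecting the two stepping regimes.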

Lemma 3.4 (Boundedness of Y k in Algorithm 1) The chain Y k defined in (3.6) is uniformly bounded by a deterministic constant:
Proof From (3.6), we can express Y k via Y k−1 and obtain the required estimate by iterating this relation.

Now we prove the convergence theorem for Algorithm 1.

Theorem 3.2 (Convergence of Algorithm 1) The error estimate (3.20) holds, where K > 0 is a constant independent of h and ε.
Proof Recall (2.9). The global error can be written as (3.21). Using Lemma 3.4, Assumption 2.5, Lemmas 3.1 and 3.2, and the fact that θ̄ κ − ϑ κ ≤ θ κ , we obtain the bound on the first term in (3.21), where K > 0 does not depend on h or ε. For the second term in (3.21), we exploit ideas from [19] to re-express the global error. Using Theorem 3.1 and Lemmas 3.4 and 3.3, we obtain (3.23), where, as usual, the constants K > 0 change from line to line. Combining (3.21)–(3.23), we arrive at (3.20).

Remark 3.5 (Error estimate and convergence)
Note that the error estimate in Theorem 3.2 gives the expected results in the limiting cases (see also Remark 3.4). If λ_ε h < 1, we obtain the bound expected for weak convergence in the jump-diffusion case. If λ_ε is large (meaning that almost always θ < h), the error tends to the corresponding limiting bound, as expected (cf. [11]). We also remark that, for any fixed λ_ε , we have first-order convergence as h → 0.

Remark 3.6
In the case of a symmetric measure ν we have γ_ε = 0, and hence the global error estimate (3.20) becomes (3.24).

Remark on the Cauchy problem
Let us set G = R d in (2.15) and hence consider the Cauchy problem for the PIDE. In this case Algorithm 1 stops only when ϑ κ ≥ T , as there is no spatial boundary (and hence we write u_ε(T , x) = ϕ(x) instead of u_ε(T , x) = ϕ(T , x)). Theorem 3.1 remains valid for the Cauchy problem, although in this case one should replace the constant K in the right-hand side of the bound (3.16) with a function K (x) > 0 of polynomial growth, i.e. bounded by K (1 + |x| q ) with some constants K > 0 and q ≥ 1. Consequently, to prove an analogue of the global convergence Theorem 3.2, we need to prove boundedness of the moments E|X k | 2 p , i.e. the bound (3.27), with some constants K > 0 and p ≥ 1.
Proof As usual, in this proof K > 0 is a constant independent of ε and h, which can change from line to line in the derivations. We first prove the lemma for integer p ≥ 1. Noting (3.26), we obtain the basic recursion; for κ > k we use the corresponding representation. Then, by the linear growth Assumption 2.2, we bound the drift and diffusion contributions. For the last term in (3.28), using the linear growth Assumptions 2.2 and 3.1, we get the corresponding bounds for l = 2, . . . , 2 p, where to obtain the last line we used that θ k+1 l/2 for odd l is estimated via K E[θ], and exploited Lemma 3.2, the boundedness of |γ_ε | l /λ_ε l/2 , and (3.3). We thus arrive at (3.31). Introduce a continuous-time piecewise-constant process interpolating the chain. Then we can write (3.31) in integral form and, by Gronwall's inequality, obtain a bound which implies (3.27) for integer p ≥ 1. Then, by Jensen's inequality, (3.27) holds for non-integer p ≥ 1 as well.
Based on the discussion before Lemma 3.5 and on the moment estimate (3.27) of Lemma 3.5, it is not difficult to show that the global error estimate (3.20) for Algorithm 1 also holds in the Cauchy problem case.

The case of infinite intensity of jumps
In this section we combine the previous results, Theorems 2.1 and 3.2, to obtain an overall error estimate for solving the problem (1.1) in the case of infinite intensity of jumps by Algorithm 1. We obtain the estimate (3.32), where K > 0 is independent of h and ε. Let us consider an α-stable-type process in which the Lévy measure has the following singular behaviour near zero: ν(dz) ∼ |z| −m−α dz, α ∈ (0, 2), (3.33) i.e., we focus our attention here on the singularity near zero only, and the sign ∼ means that the limit of the ratio of the two sides equals some positive constant.
Then, in particular, γ_ε 2 ∼ |ln ε| 2 for α = 1. Let us measure the computational cost of Algorithm 1 in terms of the average number of steps (see Lemma 3.3); accordingly, we define the cost C as the cost associated with the average number of steps. We fix a tolerance level ρ tol and require ε and h to be such that the error ρ(ε, h) does not exceed ρ tol . Note that, since we are using the Euler scheme for the SDEs' approximation, the decrease of ρ tol in terms of cost cannot be faster than linear. We now consider three cases of α.

The case α ∈ (0, 1). By choosing sufficiently small ε, we can reach the required ρ tol . It is optimal to take h = ∞ (in practice, taking h = T − t 0 ), and the cost is then C ∼ 1/ε α . Hence ρ tol is inversely proportional to C, and convergence is linear in cost (to reduce ρ tol twice, we need to double C).

The case α = 1. Here convergence is almost linear in cost, up to a logarithmic factor.

The case α ∈ (1, 2). If we take h = ∞, then ρ(ε, h) = O(ε 2−α ) and the convergence order in terms of cost is 2/α − 1, which is very slow (e.g., for α = 3/2 the order is 1/3 and for α = 1.9 the order is ≈ 0.05). Let us now take h = ε^ℓ with ℓ ≥ α. Then the convergence order in terms of cost is (3 − α)/(1 + α), which is much better (e.g., for α = 3/2 the order is 3/5, and the order cannot be smaller than 1/3 for any α ∈ (1, 2)). Note that in the case of a symmetric measure ν (see Remark 3.6), convergence is linear in cost for α ∈ (1, 2) as well. To conclude, for α ∈ (0, 1) we have first-order convergence and there is no benefit in restricting jump-adapted steps by h (see a similar result in the case of the Cauchy problem and unrestricted jump-adapted steps in [12]). However, in the case of α ∈ (1, 2), it is beneficial to use restricted jump-adapted steps to get the order (3 − α)/(1 + α).
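The comparison of the two stepping strategies for α ∈ (1, 2) can be encoded directly; order_unrestricted and order_restricted below are our own names for the cost-convergence orders 2/α − 1 and (3 − α)/(1 + α) derived above.

```python
def order_unrestricted(alpha):
    # convergence order in cost with h = infinity (pure jump-adapted stepping)
    return 2.0 / alpha - 1.0

def order_restricted(alpha):
    # convergence order in cost with restricted jump-adapted steps
    return (3.0 - alpha) / (1.0 + alpha)
```

For α = 3/2 this gives 1/3 versus 3/5, and the restricted strategy dominates throughout (1, 2), with its order approaching 1/3 only as α → 2.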
We also recall that restricted jump-adapted steps should typically be used for jump-diffusions (the finite activity case, when there is no singularity of λ_ε and γ_ε ), because jump time increments δ typically take large values, and to control the error at every step we should truncate those times at a sufficiently small h > 0 to achieve satisfactory accuracy.

Numerical experiments
In this section we illustrate the theoretical results of Sect. 3. In particular, we display the behaviour in the case of infinite intensity of jumps for different regimes of α. We showcase numerical tests of Algorithm 1 in four examples: (i) a non-singular Lévy measure (Example 4.1); (ii) a singular Lévy measure similar to that of Example 2.1 (Example 4.2); (iii) pricing a foreign-exchange (FX) barrier basket option where the underlying model is of exponential Lévy type (Example 4.3); and (iv) pricing an FX barrier option showing that the convergence orders hold (Example 4.4).
As is usual for weak approximation (see e.g. [19]), in the simulations we complement Algorithm 1 by the Monte Carlo technique and evaluate u(t 0 , x) or u_ε(t 0 , x) by the sample average (4.1) over M independent trajectories. The Monte Carlo error of (4.1) is estimated via the sample variance D̄. Then ū(t 0 , x) falls in the corresponding confidence interval û ± 2 √(D̄/M) with probability 0.95.
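The estimator and its confidence interval can be sketched as a generic Monte Carlo helper (our own naming; sample_value stands for one simulated value of ϕ at the chain's exit point):

```python
import math, random

def mc_estimate(sample_value, M, rng):
    # sample mean and 95% confidence half-width (two standard errors) over M runs
    vals = [sample_value(rng) for _ in range(M)]
    mean = sum(vals) / M
    var = sum((v - mean) ** 2 for v in vals) / (M - 1)
    return mean, 2.0 * math.sqrt(var / M)
```

The half-width shrinks like M^{-1/2}, so the Monte Carlo error is balanced against the bias of Algorithm 1 by choosing M accordingly.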

Example with a non-singular Lévy measure
In this subsection, we illustrate Algorithm 1 in the case of a simple non-singular Lévy measure (i.e., the jump-diffusion case), where there is no need to replace small jumps and hence we directly approximate u(t 0 , x) rather than u_ε(t 0 , x). Consequently, the numerical integration error does not depend on ε. We recall (see Theorem 3.2) that Algorithm 1 has first-order convergence in h.

Example 4.1 (Non-singular Lévy measure)
To construct this and the next example, we use the same recipe as in [18,19]: we choose the coefficients of the problem (1.1) so that we can write down its solution explicitly. Having the exact solution is very useful for numerical tests. Consider the problem (1.1) with d = 3, G = U 1 , the open unit ball centred at the origin in R 3 , with the coefficients (4.2)–(4.4), with the boundary condition (4.5), and with a Lévy measure density in which C − and C + are some positive constants. Note that, keeping in mind Remark 2.3, the coefficients from (4.2)–(4.4) satisfy Assumptions 2.1–2.2.
It is not difficult to verify that this problem has an explicit solution, and the other quantities needed for the algorithm can also be found in closed form. We simulated the jump sizes by analytically inverting the cumulative distribution function corresponding to the density ρ(z) and making use of uniform random numbers in the standard manner.
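The exact density ρ(z) of this example is not reproduced above, so as an illustration of the inversion technique we sample from a two-sided exponential (Laplace-type) density e^{−|z|/b}/(2b), a hypothetical stand-in: the magnitude is obtained by inverting the exponential CDF, and the sign is symmetric.

```python
import math, random

def laplace_jump(rng, b=1.0):
    # inverse-CDF sampling for the two-sided density exp(-|z|/b) / (2b):
    # |Z| = -b * log(1 - U) inverts the exponential CDF; the sign is +-1 equally
    mag = -b * math.log(1.0 - rng.random())   # 1 - U lies in (0, 1], so log is safe
    return mag if rng.random() < 0.5 else -mag
```

The same pattern (invert the CDF of the magnitude, attach a random sign) applies to any density whose one-sided CDF can be inverted analytically.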
Here the absolute error e is measured at the point (t 0 , x) = (0, 0), where the true solution is u = u(0, 0) ≈ 0.987433. The expected convergence order O(h) can be clearly seen in Fig. 1 and Table 1 (the parameters in Table 1 are the same as in Fig. 1; the column κ̄ gives the sample average of the number of steps together with its Monte Carlo error).

Example with a singular Lévy measure
In this subsection, we confirm the dependence of the error of Algorithm 1 on the cut-off parameter ε for jump sizes and on the parameter α of the Lévy measure, as well as the associated computational costs, which were derived in Sect. 3.5.
Example 4.2 (Singular Lévy measure) Consider the problem (1.1) with the coefficients (4.2), (4.3), (4.7), with the boundary condition (4.5), and with the Lévy measure density (4.8), where C − , C + , and μ are some positive constants and α ∈ (0, 2). We observe that C − ≠ C + gives an asymmetric jump measure; the Lévy process has infinite activity and, if α ∈ [1, 2), infinite variation. Note that, keeping in mind Remark 2.3, the coefficients from (4.2), (4.3), (4.7) satisfy Assumptions 2.1–2.2. It is not difficult to verify that this problem has an explicit solution, and the other quantities needed for the algorithm also take an explicit form. In this example, the absolute error e is defined analogously to Example 4.1. For the case α = 0.5, we can clearly see in Fig. 2 and Table 2 (the parameters in Table 2 are the same as in Fig. 2; the column κ̄ gives the sample average of the number of steps together with its Monte Carlo error) that the error is of order O(ε α ) = O(ε 0.5 ) as expected. We also observe linear convergence in computational cost (measured in the average number of steps). In addition, we note that choosing a smaller time step, e.g. h = 0.1, does not change the behaviour in this case, which is in accordance with our prediction of Sect. 3.5 (Fig. 3).
Numerical results for the case α = 1.5 are given in Figs. 4 and 5 and Tables 3 and 4. As shown in Sect. 3.5, convergence (in terms of computational cost) can be improved in the case of α ∈ (1, 2) by choosing h = ε 1+α . In Fig. 5, it can be seen for all ε that choosing a smaller (but optimally chosen) step parameter h results in quicker convergence (i.e., for the same cost, we can achieve a better result if h is chosen in an optimal way) and, naturally, in a smaller error.
We recall that if the jump measure is symmetric, i.e. C− = C+ in the considered example, then γ = 0 and the numerical integration error of Algorithm 1 is no longer singular (see Theorem 3.2 and Remark 3.6). Consequently (see Sect. 3.5), in this case the computational cost depends linearly on ε even for α = 1.5, which is confirmed in Fig. 6.
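The three ingredients of the method — diffusion approximation of small jumps, restricted jump-adaptive time-stepping, and weak Euler steps between jumps — fit in a few lines. The following one-dimensional sketch is a simplified illustration (whole space, no boundary exit check), not the paper's full Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_path(x0, T, h, lam_eps, sig_eps, b, sigma, sample_jump):
    """One weak-approximation path. Steps are capped at h and at the next
    large-jump time (restricted jump-adaptive time-stepping); two-point
    random increments give the weak Euler step; sig_eps is the extra
    diffusion replacing jumps smaller than the cut-off."""
    t, x = 0.0, x0
    while t < T - 1e-12:
        tau = rng.exponential(1.0 / lam_eps)   # time to the next large jump
        dt = min(h, tau, T - t)
        xi = rng.choice((-1.0, 1.0))           # weak-sense Wiener increment
        eta = rng.choice((-1.0, 1.0))          # increment for small-jump diffusion
        x += b(t, x) * dt + sigma(t, x) * np.sqrt(dt) * xi \
             + sig_eps * np.sqrt(dt) * eta
        t += dt
        if dt == tau:                          # a large jump fires at step end
            x += sample_jump()
    return x
```

For a drift-free model with symmetric jumps, the sample mean of X(T) over many paths should be near zero, which provides a quick sanity check of an implementation.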

FX option pricing under a Lévy-type currency exchange model
In this subsection, we demonstrate the use of Algorithm 1 for pricing financial derivatives whose underlyings follow a Lévy process. We apply the algorithm to estimate the price of a foreign exchange (FX) barrier basket option. A barrier basket option gives the holder the right to buy or sell a certain basket of assets (here foreign currencies) at a specific price K at maturity T, provided a certain barrier event has (or has not) occurred. The most commonly used barrier-type options are knock-in and knock-out options: an option of this type becomes active (or inactive) when the underlying price S(t) reaches a prescribed barrier level.

Example 4.3 (Barrier basket option pricing)
Let us consider the case with five currencies: GBP, USD, EUR, JPY and CHF, and let us assume that the domestic currency is GBP. We denote the corresponding spot exchange rates by S_FORDOM(t), which describes the amount of domestic currency DOM one pays/receives for one unit of foreign currency FOR (for more details see [5,25]). We assume that under a risk-neutral measure Q the dynamics of the spot exchange rates are driven by a 4-dimensional Lévy process X(t) similar to (2.1) with a single jump noise (4.10), where r_i are the corresponding short rates of USD, EUR, JPY, CHF and r_GBP is the short rate for GBP, all assumed constant for simplicity. Here W(t) is a 4-dimensional standard Wiener process. As ν(z), we choose the Lévy measure with density (4.8) as in Example 4.2, and we take F(t, x) = (f_1, f_2, f_3, f_4). We also assume that σ(s, x) is a constant 4 × 4 matrix. The risky assets for a domestic GBP business are the foreign currencies Y_i(t) = B_i(t) · S_i(t), where B_i(t) denotes the foreign currency account. Under the measure Q all the discounted assets Y_i(t) = e^{(r_i − r_GBP)(t − t_0)} S_i(t) = S_i(t_0) exp(X_i(t)) have to be martingales on the domestic market (hence the discounting by the domestic interest rate) to avoid arbitrage. Using the Itô formula for Lévy processes, we can derive the SDEs (4.11) for Y_i (see e.g. [2, p. 288]). Hence, for all Y_i to be martingales, the drift components b_i have to be chosen accordingly; here σ_i are the volatilities of the corresponding pairs and ρ_ij are the correlation coefficients between two pairs. Let us consider a down-and-out (DAO) put option (4.13), whose payoff involves the indicator I that the running minimum stays above the barrier. We use Algorithm 1 (the algorithm is applied to X from (4.10) and then S is computed as exp(X) to achieve higher accuracy) together with the Monte Carlo technique to evaluate this barrier basket option price (4.13).
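The discounted DAO payoff (4.13) is straightforward to evaluate on simulated paths. The sketch below assumes an equally weighted basket put that is knocked out if any exchange rate touches the barrier at a monitoring time; the weighting and monitoring conventions are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def dao_basket_put(paths, K, B, r_dom, T, t0=0.0):
    """Discounted down-and-out basket put from simulated paths.
    paths: array (M, n_times, d) of exchange rates S_i(t_k).
    Knock-out: the option dies if any rate falls to or below the
    barrier B; otherwise it pays a put on the equally weighted basket."""
    basket_T = paths[:, -1, :].mean(axis=1)    # terminal basket value
    alive = paths.min(axis=(1, 2)) > B         # barrier never breached
    payoff = np.maximum(K - basket_T, 0.0) * alive
    return float(np.exp(-r_dom * (T - t0)) * payoff.mean())
```

Averaging this payoff over the Monte Carlo paths produced by the scheme gives the price estimate; the knocked-out paths simply contribute zero.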
In Table 5, market data for the four currency pairs are given, and in Table 6 the option and model parameters used in the simulations are provided.
To find the matrix σ = {σ_ij} used in the model (4.10), we form the matrix a from the volatilities σ_i and the correlation coefficients in Table 5 in the usual way, i.e. a_ij = σ_i σ_j ρ_ij, and take σ to be a square root of a. The results of the simulations are presented in Fig. 7 for different choices of ε and of h. In Fig. 8 it can be seen that (similarly to Example 4.2) choosing the step size h optimally results in a better approximation for the same cost.
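Building σ from the quoted volatilities and correlations can be sketched as follows (the function name is illustrative; the Cholesky factor is one common choice of square root):

```python
import numpy as np

def vol_factor(vols, corr):
    """Form a_ij = sigma_i * sigma_j * rho_ij and return a lower-triangular
    sigma with sigma @ sigma.T = a (Cholesky square root)."""
    vols = np.asarray(vols, dtype=float)
    a = np.outer(vols, vols) * np.asarray(corr, dtype=float)
    return np.linalg.cholesky(a)
```

Any matrix σ with σσ^T = a yields the same law for the driving noise, so the choice of square root does not affect the simulated prices.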
In this example we demonstrated that Algorithm 1 can be successfully used to price an FX barrier basket option involving four currency pairs following an exponential Lévy model, despite the considered problem not satisfying Assumptions 2.3-2.5 of Sect. 2.2. In particular, we note that the algorithm is easy to implement and gives sufficient accuracy at relatively small computational cost. Moreover, the application of Algorithm 1 can easily be extended to other multi-dimensional barrier options (and to other types of options, not only on FX markets), while other approximation techniques may be harder to apply in such multi-dimensional settings.

Example 4.4 (Single currency pair)
Let us consider the case with two currencies: GBP and USD. As before, we assume that the domestic currency is GBP. The corresponding spot exchange rate is

S(t) = S_USDGBP(t).
We assume the same dynamics for the spot exchange rate under a risk-neutral measure Q as in Example 4.3, with X(t) a 1-dimensional Lévy process defined as in (4.10) but in one dimension. In the same fashion as in Example 4.3, the risky asset for a domestic GBP business is the foreign currency Y(t) = B(t) · S(t), where B(t) denotes the foreign currency account, and under the measure Q the discounted asset Y(t) has to be a martingale on the domestic market to avoid arbitrage. Using the Itô formula for Lévy processes, we can derive the SDE for Y as we did in (4.11)-(4.12). We compute the value of a DAO put option (cf. (4.13)), P_{t_0}(T, K) = exp(−r_GBP(T − t_0)) E[...], where σ is the volatility of the pair. The approximate solution P̂ = P̂_{t_0}(T, K) is obtained by applying Algorithm 1 directly to the SDE for S(t). To study the dependence of the error of Algorithm 1 on the cut-off parameter ε for jump sizes and on the parameter α of the Lévy measure, as well as the associated computational costs, we need to compare the approximation P̂ with the true price P_{t_0}(T, K). However, in this example we do not have the exact price and therefore need to accurately simulate a reference solution. To this end, as in Example 4.3, we apply Algorithm 1 to X(t) with sufficiently small ε and h and a large number of Monte Carlo simulations M (see Tables 9 and 13). We denote this reference solution by P̂_ref = P̂_ref_{t_0}(T, K). In this example the absolute error of Algorithm 1 is evaluated as e_ref = |P̂ − P̂_ref|.
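Since both P̂ and P̂_ref are Monte Carlo averages, e_ref is only meaningful when it exceeds the combined statistical errors of the two estimates. A minimal sketch of the standard confidence-interval bookkeeping (the helper name is illustrative):

```python
import numpy as np

def mc_estimate(samples, z=1.96):
    """Sample mean and half-width of an approximate 95% confidence interval
    (Gaussian approximation for the Monte Carlo error)."""
    samples = np.asarray(samples, dtype=float)
    half = z * samples.std(ddof=1) / np.sqrt(samples.size)
    return float(samples.mean()), float(half)
```

In practice one simulates the reference solution with enough paths that its half-width is negligible compared with the e_ref values being measured.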
In Table 7, market data for the currency pair are given, and in Table 8 the option and model parameters used in the simulations are provided (see also Table 9).
The results of the simulations for α = 0.5 are presented in Figs. 9 and 10 and in Tables 10 and 11 for different choices of ε and fixed h = 1.0 and h = 0.1. We can clearly see that the error is of order O(ε^α) = O(ε^0.5), as expected. We also observe linear convergence in computational cost (measured in the average number of steps). Numerical results for the case α = 1.5 are given in Figs. 11 and 12 and in Tables 12, 13 and 14. We observe the expected orders of convergence as given in Sect. 3.5.
In this example, we experimentally demonstrated that the convergence orders and computational cost of Algorithm 1 are consistent with the predictions of Sect. 3.5, despite the considered problem not satisfying the assumptions of Sect. 2.2.