The Truncated Euler-Maruyama Method for Stochastic Differential Delay Equations

The numerical solutions of stochastic differential delay equations (SDDEs) under the generalized Khasminskii-type condition were discussed by Mao [15], and the theory there showed that the Euler-Maruyama (EM) numerical solutions converge to the true solutions in probability. However, there is so far no result on the strong convergence (namely in L^p) of the numerical solutions for the SDDEs under this generalized condition. In this paper, we will use the truncated EM method developed by Mao [16] to study the strong convergence of the numerical solutions for the SDDEs under the generalized Khasminskii-type condition.


Introduction
In the study of stochastic differential delay equations (SDDEs), the classical existence-and-uniqueness theorem requires that the coefficients of the SDDE satisfy the local Lipschitz condition and the linear growth condition (see, e.g., [4,8,11,12,21]). However, there are many SDDEs which do not satisfy the linear growth condition. In 2002, Mao [14] generalized the well-known Khasminskii test [6] from stochastic differential equations (SDEs) to SDDEs. The Khasminskii-type theorem established in [14] for SDDEs gives conditions, in terms of Lyapunov functions, under which the solutions to SDDEs will not explode to infinity in finite time. The Khasminskii-type theorem enables us to verify whether a given nonlinear SDDE has a unique global solution under the local Lipschitz condition but without the linear growth condition. In 2005, Mao and Rassias [17] demonstrated that there are many important SDDEs which are not covered by the Khasminskii-type theorem given in [14], and established a generalized Khasminskii-type theorem which covers a very wide class of nonlinear SDDEs.
On the other hand, there are in general no explicit solutions to nonlinear SDDEs, whence numerical solutions are required in practice. The numerical solutions under the linear growth condition plus the local Lipschitz condition have been discussed intensively by many authors (see, e.g., [3,7,10,13,18,19]). The numerical solutions of SDDEs under the generalized Khasminskii-type condition were discussed by Mao [15], and the theory there showed that the Euler-Maruyama (EM) numerical solutions converge to the true solutions in probability. However, there is so far no result on the strong convergence (namely in L^p) of the numerical solutions for SDDEs under the generalized Khasminskii-type condition.
Recently, Mao [16] developed a new explicit numerical method, called the truncated EM method, for SDEs under the Khasminskii-type condition plus the local Lipschitz condition, and established the corresponding strong convergence theory. In this paper, we will use this new truncated EM method to study the strong convergence of the numerical solutions for SDDEs under the generalized Khasminskii-type condition.
This paper is organized as follows: In Section 2 we introduce the necessary notation, state the generalized Khasminskii-type condition and define the truncated EM numerical solutions for SDDEs. We establish the strong convergence theory for the truncated EM numerical solutions in Sections 3 and 4 and discuss the convergence rates in Section 5. In each of these three sections we illustrate our theory by examples. We will see from these examples that the truncated EM numerical method can be applied to approximate the solutions of many highly nonlinear SDDEs. We finally conclude the paper in Section 6.

The Truncated Euler-Maruyama Method
Throughout this paper, unless otherwise specified, we use the following notation. Let | · | be the Euclidean norm in R^n. If A is a vector or matrix, its transpose is denoted by A^T. If A is a matrix, its trace norm is denoted by |A| = √(trace(A^T A)). Let R_+ = [0, ∞) and τ > 0. Denote by C([−τ, 0]; R^n) the family of continuous functions from [−τ, 0] to R^n with the norm ‖ϕ‖ = sup_{−τ≤θ≤0} |ϕ(θ)|. Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., it is increasing and right continuous while F_0 contains all P-null sets). Let B(t) = (B_1(t), · · · , B_m(t))^T be an m-dimensional Brownian motion defined on the probability space. Moreover, for two real numbers a and b, we use a ∨ b = max(a, b) and a ∧ b = min(a, b). If G is a set, its indicator function is denoted by I_G, namely I_G(x) = 1 if x ∈ G and 0 otherwise. If a is a real number, we denote by ⌊a⌋ the largest integer less than or equal to a, e.g., ⌊−1.2⌋ = −2 and ⌊2.3⌋ = 2.
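The norm and rounding conventions above can be checked mechanically; the following Python snippet is a small illustration (the helper name frobenius is ours, not the paper's):

```python
import math

def frobenius(A):
    """Trace norm |A| = sqrt(trace(A^T A)), i.e. the square root of the
    sum of the squared entries of the matrix A (given as a list of rows)."""
    return math.sqrt(sum(x * x for row in A for x in row))

# |A| for A = [[3, 4], [0, 0]]: trace(A^T A) = 9 + 16 = 25
print(frobenius([[3.0, 4.0], [0.0, 0.0]]))  # → 5.0

# a ∨ b = max(a, b), a ∧ b = min(a, b), and ⌊a⌋ rounds toward −∞
print(max(-1.2, 2.3), min(-1.2, 2.3))       # → 2.3 -1.2
print(math.floor(-1.2), math.floor(2.3))    # → -2 2
```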
Consider a nonlinear SDDE

dx(t) = f(x(t), x(t − τ)) dt + g(x(t), x(t − τ)) dB(t), t ≥ 0,

with the initial data {x(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C([−τ, 0]; R^n). Here f : R^n × R^n → R^n and g : R^n × R^n → R^{n×m}.
We assume that the coefficients f and g obey the local Lipschitz condition:

Assumption 2.1 For every positive number R there is a positive constant K_R such that

|f(x, y) − f(x̄, ȳ)| ∨ |g(x, y) − g(x̄, ȳ)| ≤ K_R (|x − x̄| + |y − ȳ|)

for those x, y, x̄, ȳ ∈ R^n with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ R.
The classical existence-and-uniqueness theorem requires not only this local Lipschitz condition but also the linear growth condition (see, e.g., [11,12,13,21]). In this paper we shall retain the local Lipschitz condition but replace the linear growth condition by a generalized Khasminskii-type condition.
Assumption 2.2 There are constants K_1 > 0, K_2 ≥ 0 and β > 2 such that

To get a feeling for the type of nonlinear SDDE to which our theory may apply, consider, for example, the scalar SDDE with a_3 > 0 and a_1, a_2, a_4, a_5 ∈ R (see Example 3.7 for the details). The following result, established in [17], is a generalized Khasminskii-type theorem. It has been shown (see, e.g., [15]) that under Assumptions 2.1 and 2.2, the EM numerical solutions converge to the true solution in probability. However, to the best of our knowledge, there is so far no result on the strong convergence under these assumptions. In this paper, we will use the truncated EM method developed in [16] and show that the truncated EM solutions converge to the true solution in L^q for some q ≥ 1.
Lemma 2.4 Let Assumption 2.2 hold. Then, for every ∆ ∈ (0, ∆*], we have for all x, y ∈ R^n.

Proof. Fix any ∆ ∈ (0, ∆*]. Recalling that h(∆*) ≥ µ(1), we see that for x ∈ R^n with |x| ≤ µ^{−1}(h(∆)) and any y ∈ R^n, we have, by (2.3), which implies the desired assertion (2.9). On the other hand, for x ∈ R^n with |x| > µ^{−1}(h(∆)) and any y ∈ R^n, we have Namely, we have shown that the required assertion (2.9) also holds for x ∈ R^n with |x| > µ^{−1}(h(∆)) and any y ∈ R^n. The proof is hence complete. ✷

From now on, we will let the step size ∆ be a fraction of τ. That is, we will use ∆ = τ/M for some positive integer M. When we speak of a sufficiently small ∆, we mean that M is chosen sufficiently large.
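Following the construction in [16], the truncation replaces each state argument by its radial projection onto the ball of radius µ^{−1}(h(∆)). A minimal Python sketch under that reading (the names truncate and bound, and the composition f_∆(x, y) = f(π_∆(x), π_∆(y)), are our assumptions, not verbatim from the text):

```python
import math

def truncate(x, bound):
    """Radial projection π_Δ(x) = (|x| ∧ bound) · x/|x| (with π_Δ(0) = 0):
    points inside the ball of radius bound = µ⁻¹(h(Δ)) are unchanged,
    points outside are scaled back onto the sphere of radius bound."""
    norm = math.sqrt(sum(c * c for c in x))
    if norm <= bound:
        return list(x)
    return [c * bound / norm for c in x]

def truncated_coefficient(f, bound):
    """f_Δ(x, y) = f(π_Δ(x), π_Δ(y)); the same recipe applies to g."""
    return lambda x, y: f(truncate(x, bound), truncate(y, bound))
```

For example, truncate([3.0, 4.0], 1.0) returns [0.6, 0.8]; since |π_∆(x)| never exceeds the bound, the truncated coefficients are bounded on the whole space while agreeing with f, g on the ball.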
Let us now form the discrete-time truncated EM solutions. Define t_k = k∆ for k = −M, −(M−1), · · · , 0, 1, 2, · · · . Set X_∆(t_k) = ξ(t_k) for k = −M, −(M−1), · · · , 0 and then form

X_∆(t_{k+1}) = X_∆(t_k) + f_∆(X_∆(t_k), X_∆(t_{k−M}))∆ + g_∆(X_∆(t_k), X_∆(t_{k−M}))∆B_k

for k = 0, 1, 2, · · · , where ∆B_k = B(t_{k+1}) − B(t_k). In our analysis, it is more convenient to work with continuous-time approximations. There are two continuous-time versions. One is the piecewise-constant step process x̄_∆(t) defined by (2.14); the other is the continuous-time continuous process x_∆(t) defined by (2.15). We see that x_∆(t) is an Itô process on t ≥ 0 with its Itô differential (2.16). The following lemma shows that x_∆(t) and x̄_∆(t) are close to each other in the sense of L^p. This indicates that it is sufficient to use x̄_∆(t) in practice, while in our analysis it is convenient to work with both of them.
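The discrete-time scheme above can be sketched in code. This is an illustrative scalar implementation under our own stand-in choices: the clamp trunc plays the role of π_∆, bound stands for µ^{−1}(h(∆)), and the coefficients in the usage example are hypothetical, not the paper's Example 3.7:

```python
import math
import random

def truncated_em(f, g, xi, tau, M, N, bound, seed=0):
    """Discrete-time truncated EM for a scalar SDDE
        dx(t) = f(x(t), x(t-τ)) dt + g(x(t), x(t-τ)) dB(t),
    with step Δ = τ/M. X[k] approximates x(t_k) with t_k = (k - M)Δ, so
    X[0..M] carries the initial data ξ on [-τ, 0] and N further steps follow."""
    rng = random.Random(seed)
    dt = tau / M
    trunc = lambda v: max(-bound, min(bound, v))   # scalar stand-in for π_Δ
    X = [xi((k - M) * dt) for k in range(M + 1)]
    for _ in range(N):
        x, y = X[-1], X[-1 - M]                    # current and delayed state
        dB = rng.gauss(0.0, math.sqrt(dt))         # Brownian increment ~ N(0, Δ)
        X.append(x + f(trunc(x), trunc(y)) * dt + g(trunc(x), trunc(y)) * dB)
    return X

# A hypothetical cubic-drift example (our choice, for illustration only):
path = truncated_em(f=lambda x, y: y - x**3, g=lambda x, y: 0.5 * x,
                    xi=lambda t: math.cos(t), tau=1.0, M=4, N=10, bound=10.0)
print(len(path))  # → 15  (M + 1 initial points plus N steps)
```

Note how the delayed argument X[-1 - M] is always available because the initial segment on [−τ, 0] supplies the first M + 1 values; this is exactly why the step size is taken as a fraction τ/M of the delay.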
Lemma 2.5 For any ∆ ∈ (0, ∆*] and any p ≥ 2, we have where c_p is a positive constant dependent only on p. Consequently

Proof. In what follows, we will use c_p to stand for generic positive real constants dependent only on p, whose values may change between occurrences. Fix ∆ ∈ (0, ∆*] arbitrarily. For any t ≥ 0, there is a unique integer k ≥ 0 such that t_k ≤ t < t_{k+1}. By (2.8) and the properties of the Itô integral (see, e.g., [13]), we then derive the assertion from (2.16). ✷

Convergence in L^q for q ∈ [1, 2)

From now on we will fix T > 0 arbitrarily. In this section we will show that the truncated EM solutions converge to the true solution in L^q for q ∈ [1, 2). The following lemma gives an upper bound, independent of ∆, for the second moment.
where, and from now on, C stands for generic positive real constants dependent on T, K_1, K_2, ξ (and p̄, K_3 etc. in the next sections) but independent of ∆, whose values may change between occurrences.
Proof. Fix ∆ ∈ (0, ∆ * ] and the initial data ξ arbitrarily. By the Itô formula, we derive from (2.16) that for 0 ≤ t ≤ T , By Lemma 2.4, we get However, it is easy to show that Furthermore, by Lemma 2.5 with p = 2 and inequalities (2.8) and (2.6), we derive that As this holds for any t ∈ [0, T ] while the sum of the right-hand-side (RHS) terms is non-decreasing in t, we then see The well-known Gronwall inequality yields that As this holds for any ∆ ∈ (0, ∆ * ] while C is independent of ∆, we obtain the required assertion (3.1). ✷ Let us present two more lemmas before we state one of our main results in this paper.
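The proof above, like several that follow, rests on the Gronwall inequality. For the reader's convenience, its integral form (a standard statement, our formulation) is:

```latex
\textbf{Gronwall inequality (integral form).}
If $u : [0,T] \to [0,\infty)$ is continuous and, for some constants $a, b \ge 0$,
\[
  u(t) \le a + b \int_0^t u(s)\,ds \quad \text{for all } t \in [0,T],
\]
then
\[
  u(t) \le a\, e^{bt} \quad \text{for all } t \in [0,T].
\]
```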
where throughout this paper we set inf ∅ = ∞ (and as usual ∅ denotes the empty set). Then (recall that C stands for generic positive real constants dependent on T, K_1, K_2, ξ, so C here is independent of R).

Proof. By the Itô formula and Assumption 2.2, we derive that for 0 ≤ t ≤ T , Since the sum of the RHS terms is non-decreasing in t, we hence have The Gronwall inequality shows sup In particular, we have This implies, by the Chebyshev inequality, Then (recall that C is independent of ∆ and R).

Proof. We simply write ρ_∆,R = ρ. In the same way as (3.2) was obtained, we can show that for Proceeding in the same way as in the proofs of Lemmas 3.1 and 3.2, we can then show that This, together with (3.5), implies Noting that the sum of the RHS terms is increasing in t while The Gronwall inequality shows sup This implies the required assertion (3.7) easily. ✷

For the numerical solutions to converge to the true solution in L^q, we need to assume that the initial data are Hölder continuous with exponent γ (or γ-Hölder continuous). This is a standard condition which is also needed for the classical EM method under the global Lipschitz condition (see, e.g., [18,19,22]).

Assumption 3.4
There is a pair of constants K_3 > 0 and γ ∈ (0, 1] such that the initial data ξ satisfies

|ξ(u) − ξ(v)| ≤ K_3 |u − v|^γ for all u, v ∈ [−τ, 0].

We can now show one of our main results in this paper.
Proof. Let τ_R and ρ_∆,R be the same as before. Set θ_∆,R = τ_R ∧ ρ_∆,R and e_∆(t) = x(t) − x_∆(t) for t ≥ 0. Let δ > 0 be arbitrary. Using the Young inequality, we then have
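The Young inequality referred to here, in the weighted form with the free parameter δ that is later sent to zero (standard statement, our formulation):

```latex
\textbf{Young inequality (weighted form).}
For $x, y \ge 0$, $\delta > 0$ and conjugate exponents $p, q > 1$ with
$\tfrac{1}{p} + \tfrac{1}{q} = 1$,
\[
  x y \;\le\; \frac{\delta\, x^{p}}{p} + \frac{y^{q}}{q\, \delta^{\,q/p}} .
\]
```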

Substituting this into (3.11) yields
Now, let ε > 0 be arbitrary. Choose δ sufficiently small for Cqδ/2 ≤ ε/3 and then choose R sufficiently large. We then see from (3.12) that, for this particularly chosen R, it suffices to show that the remaining term vanishes for all sufficiently small ∆. In other words, to complete our proof, all we need is to show (3.14). For this purpose, we define the truncated functions F_R and G_R for x, y ∈ R^n. Without loss of generality, we may assume that ∆* is already sufficiently small for µ^{−1}(h(∆*)) ≥ R. Hence, for all ∆ ∈ (0, ∆*], we have f_∆(x, y) = F_R(x, y) and g_∆(x, y) = G_R(x, y) for those x, y ∈ R^n with |x| ∨ |y| ≤ R. Consider the SDDE (3.15) on t ≥ 0 with the initial data z(u) = ξ(u) on u ∈ [−τ, 0]. By Assumption 2.1, we see that both F_R(x, y) and G_R(x, y) are globally Lipschitz continuous with the Lipschitz constant K_R. So the SDDE (3.15) has a unique global solution z(t) on t ≥ −τ. It is straightforward to see that (3.16) holds. On the other hand, for each step size ∆ ∈ (0, ∆*], we can apply the (classical) EM method to the SDDE (3.15), and we denote by z_∆(t) the continuous-time continuous EM solution. It is again straightforward to see that (3.17) holds. However, it is well known (see, e.g., [18,19]) that the classical EM solution converges strongly with a constant H dependent on K_R, T, ξ, q but independent of ∆. Consequently, using (3.16) and (3.17), we then have This implies (3.14) as desired. The proof is therefore complete. ✷

Let us make a useful remark, which will be used in the next sections, before we discuss an example to illustrate our theory.
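As an aside on the proof above: the truncated functions F_R, G_R replace each argument by its projection onto the ball of radius R, and the metric projection onto a convex set is 1-Lipschitz, which is why F_R, G_R inherit a global Lipschitz constant from the local constant K_R. A numerical sanity check of that projection property (the helper project_ball is ours, for illustration):

```python
import math
import random

def project_ball(x, R):
    """x ↦ (|x| ∧ R) x/|x|: metric projection of x onto the closed ball
    of radius R (the building block of F_R(x, y) = f(π_R(x), π_R(y)))."""
    n = math.hypot(*x)
    return tuple(x) if n <= R else tuple(c * R / n for c in x)

# Projection onto a convex set never increases distances (it is 1-Lipschitz).
rng = random.Random(42)
ok = all(
    math.dist(project_ball(u, 1.0), project_ball(v, 1.0)) <= math.dist(u, v) + 1e-12
    for u, v in (
        ((rng.uniform(-5, 5), rng.uniform(-5, 5)),
         (rng.uniform(-5, 5), rng.uniform(-5, 5)))
        for _ in range(1000)
    )
)
print(ok)  # → True
```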
Its inverse function µ^{−1} : R_+ → R_+ has the form µ^{−1}(r) = r^{1/a}.

Convergence in L^q for q ≥ 2

In the previous section, we showed that the truncated EM solutions x_∆(T) and x̄_∆(T) converge to the true solution x(T) in L^q for any q ∈ [1, 2). This is sufficient for some applications, for example, when we need to approximate the mean value of the solution or the European call option value (see, e.g., [5]). However, we sometimes need to approximate the variance or higher moments of the solution. In these situations, we need convergence in L^q for q ≥ 2. For this purpose, we impose a stronger Khasminskii-type condition.
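The variance computation just mentioned shows concretely why L^2 control is needed: Var[x(T)] = E[x(T)^2] − (E[x(T)])^2 involves the second moment of the approximation, whereas the mean needs only first-moment control. A tiny illustration with hypothetical simulated terminal values (the sample data are ours):

```python
# Mean needs only L¹-convergence of the scheme, but the variance estimator
# Var[x(T)] = E[x(T)²] − (E[x(T)])² involves the second moment, hence L².
samples = [1.0, 2.0, 4.0, 5.0]          # hypothetical simulated values of x_Δ(T)
mean = sum(samples) / len(samples)
second_moment = sum(s * s for s in samples) / len(samples)
variance = second_moment - mean ** 2
print(mean, variance)  # → 3.0 2.5
```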
Assumption 4.1 There is a pair of constants p̄ > 2 and K_1 > 0 such that for all (x, y) ∈ R^n × R^n.
Once again, the truncated functions f ∆ and g ∆ preserve this condition nicely.
Lemma 4.2 Let Assumption 4.1 hold. Then, for every ∆ ∈ (0, ∆*], we have for all x, y ∈ R^n. This lemma can be proved in the same way as Lemma 2.4. We also cite a stronger result than Lemma 2.3 from [17].

Proof. Fix any ∆ ∈ (0, ∆*]. By the Itô formula, we derive from (2.16) that, for 0 ≤ t ≤ T , By Lemma 4.2 and the Young inequality we then have But, by Lemma 2.5 with p = p̄ and inequalities (2.8) and (2.6), we have We therefore have

E|x_∆(t)|^p̄ ≤ C + C ∫_0^t sup_{0≤u≤s} E|x_∆(u)|^p̄ ds.
As this holds for any t ∈ [0, T] while the sum of the RHS terms is non-decreasing in t, we then see that the same bound holds for sup_{0≤u≤t} E|x_∆(u)|^p̄. The well-known Gronwall inequality yields that sup_{0≤u≤T} E|x_∆(u)|^p̄ ≤ C.
As this holds for any ∆ ∈ (0, ∆ * ] while C is independent of ∆, we see the required assertion (4.4). ✷ The following two lemmas are the analogues of Lemmas 3.2 and 3.3.
Lemma 4.6 Let Assumptions 2.1 and 4.1 hold. For any real number R > ‖ξ‖ and ∆ ∈ (0, ∆*], define the stopping time ρ_∆,R = inf{t ≥ 0 : |x_∆(t)| ≥ R}. Then Their proofs are similar to those of Lemmas 3.2 and 3.3, respectively, and so are omitted. We can now state our main result in this section.
Proof. We use the same notation as in the proof of Theorem 3.5. Fix any q ∈ [2, p̄). Using the Young inequality, we can show that for any δ > 0, (4.9)

By Lemmas 4.3 and 4.4, we have
E|e_∆(T)|^p̄ ≤ C, (4.10) while by Lemmas 4.5 and 4.6, (4.11) Using these and (3.20) (recall Remark 3.6), we obtain Now, for any ε > 0, we first choose δ sufficiently small for Cqδ/p̄ ≤ ε/3, then choose R sufficiently large, and then choose ∆ sufficiently small for H∆^{q(0.5∧γ)} ≤ ε/3 to get that In other words, we have shown that This, along with Lemma 2.5, implies the other assertion. The proof is therefore complete. ✷

Let us now discuss an example to illustrate this theorem before we study the convergence rates.

Convergence Rates
In the previous sections, we showed the convergence in L^q of the truncated EM solutions to the true solution. However, the convergence was in the asymptotic form, without a convergence rate. In this section we will discuss the rate. To avoid overly complicated notation, we will only discuss the convergence rate in L^2, but the technique developed here can certainly be applied to study the rate in L^q. Recall that we use two functions µ(·) and h(·) to define the truncated EM method. The choices of these functions are independent as long as they satisfy (2.5) and (2.6), respectively. Interestingly, they will need to satisfy a related condition in order for us to obtain the convergence rate. We need an additional condition. To state it, we need some new notation. Let U denote the family of continuous functions U : R^n × R^n → R_+ such that for each b > 0, there is a positive constant κ_b for which

Assumption 5.1 Assume that there is a positive constant H_1 and a function U ∈ U such that for all x, y, x̄, ȳ ∈ R^n.
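In practice, a convergence rate of the kind sought in this section is read off numerically by halving the step size and comparing errors. A small illustration of that standard diagnostic (the error values below are hypothetical, chosen to mimic an order-1/2 method; they are not computed from the paper's examples):

```python
import math

# If the L² error behaves like e(Δ) = C Δ^r, then halving the step gives an
# observed order log2(e(Δ)/e(Δ/2)) = r.  Hypothetical error data for r = 1/2:
errors = {0.1: 0.0316, 0.05: 0.02234, 0.025: 0.0158}
steps = sorted(errors, reverse=True)
orders = [math.log2(errors[a] / errors[b]) for a, b in zip(steps, steps[1:])]
print([round(r, 2) for r in orders])  # → [0.5, 0.5]
```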
Let us first present a key lemma.
Lemma 5.2 Let Assumptions 2.1, 3.4 and 5.1 hold. Let R > ‖ξ‖ be a real number and let ∆ ∈ (0, ∆*) be sufficiently small such that µ^{−1}(h(∆)) ≥ R. Let θ_∆,R and e_∆(t) be the same as defined in Section 3. Then where, as before, C is the generic constant independent of R and ∆.
Proof. We write θ_∆,R = θ for simplicity. The Itô formula shows that for 0 ≤ t ≤ T. We observe that for 0 ≤ s ≤ t ∧ θ, Since µ^{−1}(h(∆)) ≥ R by assumption, Recalling the definition of the truncated functions f_∆ and g_∆ as well as (2.5), we hence have that and for 0 ≤ s ≤ t ∧ θ. It therefore follows from (5.3) that By Assumption 5.1 and (5.4), we then derive that But, by Assumption 3.4 and Lemma 2.5, we derive that Moreover, by the property of the U-class function U and Assumption 3.4, we have where b = ‖ξ‖. Furthermore, by Lemma 2.5, Substituting (5.7)-(5.9) into (5.6), we get By the Gronwall inequality, we obtain the required assertion (5.2). ✷

Let us now state our first result on the convergence rate, where we reveal a strong relation between the functions µ(·) and h(·) used to define the truncated EM method.
for all sufficiently small ∆ ∈ (0, ∆*). Then, for every such small ∆, and

Proof. We use the same notation as in the proof of Theorem 4.7. It follows from (4.9)-(4.11) with q = 2 that the inequality holds for any ∆ ∈ (0, ∆*), R > ‖ξ‖ and δ > 0. In particular, choosing But, by condition (5.10), we have We can hence apply Lemma 5.2 to obtain (5.15). Substituting this into (5.14) yields the first assertion (5.11). The second assertion (5.12) follows from (5.11) and Lemma 2.5. ✷

Let us discuss an example to illustrate Theorem 5.3 and to motivate our further results on the convergence rates.
Assumption 5.6 Assume that there is a pair of positive constants r and H_3 such that for all x, y, x̄, ȳ ∈ R^n.
Lemma 5.7 Let Assumptions 2.1, 3.4, 4.1, 5.5 and 5.6 hold and p̄ > r. Let R > ‖ξ‖ be a real number and let ∆ ∈ (0, ∆*) be sufficiently small such that µ^{−1}(h(∆)) ≥ R. Let θ_∆,R and e_∆(t) be the same as defined in Section 3. Then

Proof. We use the same notation as in the proof of Lemma 5.2. It follows from (5.5) that (5.25) By Assumptions 3.4, 5.5 and 5.6, we can then show where (5.8) has been used, and But, by the Hölder inequality, Lemmas 2.5 and 4.3 and Assumption 3.4, we can derive that

Conclusion
In this paper we have used a new explicit method, the truncated EM method, to study the strong convergence of the numerical solutions for nonlinear SDDEs. Under the generalized Khasminskii-type condition, we have shown that the truncated EM solutions converge to the true solution in L^q for any T > 0 and q ∈ [1, 2). Under a slightly stronger Khasminskii-type condition, we have shown the above convergence for some q ≥ 2. We have also discussed the convergence rates in L^2 under some additional conditions. We have used several examples to illustrate our theory throughout the paper.