1 Introduction

Stochastic differential equations [13] have attracted much attention because the underlying problems are not only academically challenging but also of practical importance, playing an important role in many fields such as option pricing and population growth forecasting (see, e.g., [1]). Recently, much work has been done on stochastic differential equations; here we highlight the great contribution of Mao et al. (see [3-9] and the references therein). Svishchuk and Kazmerchuk [10] studied the exponential stability of solutions of linear stochastic differential equations with Poisson jumps [11-13] and Markovian switching [4, 12, 14].

In many applications, one assumes that the system under consideration is governed by a principle of causality, that is, the future state of the system is independent of the past states and is determined solely by the present. Under closer scrutiny, however, it becomes apparent that the principle of causality is often only a first approximation to the true situation, and that a more realistic model would include some of the past states of the system. Stochastic functional differential equations [9] provide a mathematical description of such systems.

Unfortunately, in general it is impossible to find an explicit solution of a stochastic functional differential equation with Poisson jumps. Even when such a solution can be found, it may be available only in implicit form, or be too complicated to visualize and evaluate numerically. Therefore, many approximate schemes have been proposed, for example, the Euler-Maruyama (EM) scheme, time-discrete approximations, stochastic Taylor expansions [15], and so on.

Meanwhile, the rate at which a numerical solution approximates the true solution differs from scheme to scheme. Jankovic et al. investigated the following stochastic functional differential equation (see [15]):

$$dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dW(t), \quad t_0 \le t \le T.$$

In this paper, we develop approximate methods for stochastic functional differential equations additionally driven by a Poisson process, that is,

$$dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dW(t) + h(x_t, t)\,dN(t), \quad t_0 \le t \le T.$$

The rate of the $L^p$-closeness between the approximate solution and the solution of the initial equation increases as the order of the Taylor approximations of the coefficients increases. Although Poisson jumps are involved, the rate at which the numerical solution approximates the true solution is the same as for the equation in [15]. Even when the Poisson process is replaced by a Poisson random measure, the rate remains the same.

2 Approximate scheme and hypotheses

Throughout this paper, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ be a probability space with a filtration satisfying the usual conditions, i.e., the filtration is right-continuous and $\mathcal{F}_0$ contains all $P$-null sets. Let $W(t) = (w_1(t), w_2(t), \ldots, w_m(t))^T$ be an $m$-dimensional Brownian motion defined on this probability space. For $a, b \in \mathbb{R}$ with $a < b$, denote by $D([a,b]; \mathbb{R}^n)$ the family of functions $\varphi$ from $[a,b]$ to $\mathbb{R}^n$ that are right-continuous with left limits. $D([a,b]; \mathbb{R}^n)$ is equipped with the norm $\|\varphi\| = \sup_{a \le s \le b} |\varphi(s)|$, where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$, i.e., $|x| = \sqrt{x^T x}$ for $x \in \mathbb{R}^n$. If $A$ is a vector or matrix, its trace norm is denoted by $|A| = \sqrt{\operatorname{trace}(A^T A)}$, while its operator norm is denoted by $\|A\| = \sup\{|Ax| : |x| = 1\}$. Denote by $D^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ the family of all bounded, $\mathcal{F}_0$-measurable, $D([-\tau, 0]; \mathbb{R}^n)$-valued random variables.
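As a concrete illustration of the two matrix norms just introduced (an aside, not part of the original argument), the following NumPy sketch evaluates both norms for a sample matrix; the trace norm always dominates the operator norm.

```python
import numpy as np

# Illustrative sketch: trace norm |A| = sqrt(trace(A^T A)) versus
# operator norm ||A|| = sup{|Ax| : |x| = 1} for a sample 2x2 matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

trace_norm = np.sqrt(np.trace(A.T @ A))   # same as np.linalg.norm(A, 'fro')
operator_norm = np.linalg.norm(A, 2)      # largest singular value of A

print(trace_norm, operator_norm)
assert trace_norm >= operator_norm        # always holds
```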

We consider the following Itô stochastic functional differential equation with Poisson jumps:

$$dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dW(t) + h(x_t, t)\,dN(t), \quad t_0 \le t \le T,$$
(1)

with the initial condition $x_{t_0} = \{\xi(t), t \in [-\tau, 0]\}$, where $x_t = \{x(t+\theta), \theta \in [-\tau, 0]\} \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$ and $x_{t_0}$ is independent of $W(\cdot)$ and $N(\cdot)$.

Assume that

$$f: D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T] \to \mathbb{R}^n, \qquad g: D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T] \to \mathbb{R}^{n \times m}, \qquad h: D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T] \to \mathbb{R}^n,$$

where

$$\int_{t_0}^{T} |f(x_t, t)|\,dt < \infty, \qquad \int_{t_0}^{T} |g(x_t, t)|^2\,dt < \infty, \qquad \int_{t_0}^{T} |h(x_t, t)|^2\,dt < \infty.$$

To guarantee the existence and uniqueness of the solution of Eq. (1) (see [3], Theorem 5.2.5), we impose the following rather general assumptions.

(H1) $f$, $g$ and $h$ satisfy the Lipschitz and linear growth conditions: there exists a constant $L_1 > 0$ such that for any $t \in [t_0, T]$,

$$|f(\varphi, t) - f(\psi, t)| \vee |g(\varphi, t) - g(\psi, t)| \vee |h(\varphi, t) - h(\psi, t)| \le L_1 \|\varphi - \psi\|,$$
$$|f(\varphi, t)| \vee |g(\varphi, t)| \vee |h(\varphi, t)| \le L_1 (1 + \|\varphi\|),$$

where $\varphi, \psi \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$.

(H2) (The Hölder continuity of the initial data.) There exist constants $K \ge 0$ and $\gamma \in (0, 1]$ such that for all $-\tau \le s < t \le 0$,

$$E|\xi(t) - \xi(s)| \le K (t - s)^{\gamma}.$$

(H3) The functions $f$, $g$ and $h$ have Taylor expansions in the argument $x$ up to the $m_1$th, $m_2$th and $m_3$th Fréchet derivatives, respectively [16].

(H4) The derivatives $f^{(m_1+1)}(x,t)$, $g^{(m_2+1)}(x,t)$ and $h^{(m_3+1)}(x,t)$ are uniformly bounded, i.e., there exists a positive constant $L_2$ such that

$$\sup_{(x,t) \in D^b_{\mathcal{F}_t}([-\tau,0];\mathbb{R}^n) \times [t_0,T]} \big\|f^{(m_1+1)}(x,t)(h, h, \ldots, h)\big\| \le L_2 \|h\|^{m_1+1},$$
$$\sup_{(x,t) \in D^b_{\mathcal{F}_t}([-\tau,0];\mathbb{R}^n) \times [t_0,T]} \big\|g^{(m_2+1)}(x,t)(h, h, \ldots, h)\big\| \le L_2 \|h\|^{m_2+1},$$
$$\sup_{(x,t) \in D^b_{\mathcal{F}_t}([-\tau,0];\mathbb{R}^n) \times [t_0,T]} \big\|h^{(m_3+1)}(x,t)(h, h, \ldots, h)\big\| \le L_2 \|h\|^{m_3+1}.$$

For sufficiently large $n \in \mathbb{N}$, we take the step size $\Delta = \frac{T - t_0}{n}$, where $0 < \Delta \le 1$. Let $t_0 < t_1 < \cdots < t_n = T$ be the equidistant partition of the interval $[t_0, T]$, that is, the partition points are $t_k = t_0 + \frac{k}{n}(T - t_0)$, $k = 0, 1, \ldots, n$. The explicit discrete approximation scheme is defined as follows:

$$x^n_{t_0} = \xi(t), \quad -\tau \le t \le 0;$$
$$
\begin{aligned}
x^n(t_{k+1}) = x^n(t_k) &+ \int_{t_k}^{t_{k+1}} \sum_{i=0}^{m_1} \frac{f^{(i)}(x^n_{t_k}, s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,ds \\
&+ \int_{t_k}^{t_{k+1}} \sum_{i=0}^{m_2} \frac{g^{(i)}(x^n_{t_k}, s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dW(s) \\
&+ \int_{t_k}^{t_{k+1}} \sum_{i=0}^{m_3} \frac{h^{(i)}(x^n_{t_k}, s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dN(s), \qquad k = 0, 1, \ldots, n-1.
\end{aligned}
$$
(2)

Then the continuous approximate solution is defined by

$$
\begin{aligned}
x^n(t) = x^n(t_k) &+ \int_{t_k}^{t} \sum_{i=0}^{m_1} \frac{f^{(i)}(x^n_{t_k}, s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,ds \\
&+ \int_{t_k}^{t} \sum_{i=0}^{m_2} \frac{g^{(i)}(x^n_{t_k}, s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dW(s) \\
&+ \int_{t_k}^{t} \sum_{i=0}^{m_3} \frac{h^{(i)}(x^n_{t_k}, s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dN(s), \qquad t \in [t_k, t_{k+1}],
\end{aligned}
$$
(3)

satisfying the initial condition $x^n_{t_0} = \xi$, with $x^n_{t_k} = \{x^n(t_k + \theta), \theta \in [-\tau, 0]\}$, $k = 1, 2, \ldots, n-1$.
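To make the scheme concrete, the following Python sketch implements the simplest instance $m_1 = m_2 = m_3 = 0$ of (2), in which the three Taylor sums reduce to the frozen coefficients $f(x^n_{t_k}, s)$, $g(x^n_{t_k}, s)$ and $h(x^n_{t_k}, s)$, i.e., an Euler-Maruyama-type scheme with jumps. For readability it treats a scalar equation whose functionals depend on a single discrete delay, $\varphi \mapsto \varphi(-\tau)$; the coefficients, intensity and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (assumptions): dx = f(x(t - tau), t) dt
#   + g(x(t - tau), t) dW(t) + h(x(t - tau), t) dN(t).
f = lambda y, t: -0.5 * y            # drift coefficient
g = lambda y, t: 0.2 * y             # diffusion coefficient
h = lambda y, t: 0.1 * y             # jump coefficient
xi = lambda t: 1.0 + t               # initial path xi on [-tau, 0]
lam, tau, t0, T, n = 2.0, 0.1, 0.0, 1.0, 1000

dt = (T - t0) / n                    # step size Delta
d = int(round(tau / dt))             # delay measured in steps (tau = d * Delta)

# z[i] approximates x(t0 + (i - d) * dt), i.e. the path on [t0 - tau, T].
z = np.empty(d + n + 1)
z[:d + 1] = [xi((i - d) * dt) for i in range(d + 1)]   # initial condition

for k in range(n):
    t = t0 + k * dt
    y = z[k]                                  # delayed value x^n(t_k - tau)
    dW = rng.normal(0.0, np.sqrt(dt))         # Brownian increment over [t_k, t_{k+1}]
    dN = rng.poisson(lam * dt)                # Poisson increment over [t_k, t_{k+1}]
    z[d + k + 1] = z[d + k] + f(y, t) * dt + g(y, t) * dW + h(y, t) * dN

print("approximate x(T):", z[-1])
```

Higher-order instances of (2) would replace the frozen coefficients by the corresponding Taylor sums, at the cost of evaluating Fréchet derivatives of $f$, $g$ and $h$ along the segment.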

Besides the hypotheses mentioned above, we will need another one:

(H5) There exists a positive constant $Q$, independent of $n$, such that for $r \ge 2$,

$$E\Big[\sup_{t \in [t_0 - \tau, T]} |x(t)|^r\Big] < \infty, \qquad E\Big[\sup_{t \in [t_0 - \tau, T]} |x^n(t)|^r\Big] \le Q.$$

Moreover, in what follows, $C$ denotes a generic positive constant independent of $\Delta$, whose value may vary from line to line.

3 Preparatory lemmas and the main result

Since the proof of the main result is rather technical, we first present several lemmas that play an important role in the sequel.

Lemma 1 Let conditions (H1), (H3), (H4) and (H5) be satisfied. Then, for any $r \ge 2$,

$$E\Big[\sup_{s \in [t_k, t]} |x^n(s) - x^n(t_k)|^r\Big] \le C \Delta^{r/2}, \quad t \in [t_k, t_{k+1}],\ k = 0, 1, \ldots, n-1.$$
(4)

Proof For convenience, we denote

$$
\begin{aligned}
F(x^n_t, t; x^n_{t_k}) &= \sum_{i=0}^{m_1} \frac{f^{(i)}(x^n_{t_k}, t)\,(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{i!}, \\
G(x^n_t, t; x^n_{t_k}) &= \sum_{i=0}^{m_2} \frac{g^{(i)}(x^n_{t_k}, t)\,(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{i!}, \\
H(x^n_t, t; x^n_{t_k}) &= \sum_{i=0}^{m_3} \frac{h^{(i)}(x^n_{t_k}, t)\,(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{i!}.
\end{aligned}
$$

Then, in view of (H3), for $t \in [t_k, t_{k+1}]$, $k = 0, 1, \ldots, n-1$, and some $\beta \in (0, 1)$,

$$
\begin{aligned}
f(x^n_t, t) &= F(x^n_t, t; x^n_{t_k}) + \frac{f^{(m_1+1)}(x^n_{t_k} + \beta(x^n_t - x^n_{t_k}), t)\,(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{(m_1+1)!}, \\
g(x^n_t, t) &= G(x^n_t, t; x^n_{t_k}) + \frac{g^{(m_2+1)}(x^n_{t_k} + \beta(x^n_t - x^n_{t_k}), t)\,(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{(m_2+1)!}, \\
h(x^n_t, t) &= H(x^n_t, t; x^n_{t_k}) + \frac{h^{(m_3+1)}(x^n_{t_k} + \beta(x^n_t - x^n_{t_k}), t)\,(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{(m_3+1)!}.
\end{aligned}
$$

Obviously, by (3), for any $t \in [t_k, t_{k+1}]$, $k = 0, 1, \ldots, n-1$,

$$x^n(t) - x^n(t_k) = \int_{t_k}^{t} F(x^n_s, s; x^n_{t_k})\,ds + \int_{t_k}^{t} G(x^n_s, s; x^n_{t_k})\,dW(s) + \int_{t_k}^{t} H(x^n_s, s; x^n_{t_k})\,dN(s).$$

Making use of the elementary inequality $|a + b + c|^r \le 3^{r-1}(|a|^r + |b|^r + |c|^r)$ (a consequence of the convexity of $x \mapsto x^r$), the Hölder inequality for the Lebesgue integral, and the Burkholder-Davis-Gundy inequality for the Itô integral, we obtain for $r \ge 2$

$$
\begin{aligned}
E\Big[\sup_{s \in [t_k, t]} |x^n(s) - x^n(t_k)|^r\Big] &\le 3^{r-1}\Big[(t - t_k)^{r-1} \int_{t_k}^{t} E\big|F(x^n_s, s; x^n_{t_k})\big|^r\,ds \\
&\quad + \Big(\frac{r^3}{2(r-1)}\Big)^{r/2} (t - t_k)^{r/2 - 1} \int_{t_k}^{t} E\big|G(x^n_s, s; x^n_{t_k})\big|^r\,ds \\
&\quad + E\Big[\sup_{t_k \le s \le t} \Big|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,dN(u)\Big|^r\Big]\Big] \\
&\equiv 3^{r-1}\Big[(t - t_k)^{r-1} J_1(t) + \Big(\frac{r^3}{2(r-1)}\Big)^{r/2} (t - t_k)^{r/2 - 1} J_2(t) + J_3(t)\Big].
\end{aligned}
$$

We now estimate $J_1(t)$, $J_2(t)$ and $J_3(t)$ in turn. First,

$$
\begin{aligned}
J_1(t) &= \int_{t_k}^{t} E\big|f(x^n_s, s) - \big[f(x^n_s, s) - F(x^n_s, s; x^n_{t_k})\big]\big|^r\,ds \\
&= \int_{t_k}^{t} E\Big|f(x^n_s, s) - \frac{f^{(m_1+1)}(x^n_{t_k} + \beta(x^n_s - x^n_{t_k}), s)\,(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{(m_1+1)!}\Big|^r\,ds \\
&\le 2^{r-1}\Big[L_1^r \int_{t_k}^{t} E\big(1 + \|x^n_s\|\big)^r\,ds + \frac{L_2^r}{[(m_1+1)!]^r} \int_{t_k}^{t} E\big[\|x^n_s - x^n_{t_k}\|^{(m_1+1)r}\big]\,ds\Big] \\
&\le C(t - t_k) + C(t - t_k) \le C(t - t_k),
\end{aligned}
$$

where the bound $E\|x^n_s - x^n_{t_k}\|^{(m_1+1)r} \le C$ follows from (H5).

Similarly, by repeating the procedure above, we see that $J_2(t) \le C(t - t_k)$.

Noting that $\{N(s), s \in [t_k, t]\}$ is a Poisson process with intensity $\lambda$, we use the compensated Poisson process $\{\tilde{N}(s) = N(s) - \lambda s, s \in [t_k, t]\}$, which is a martingale. Then we obtain

$$
\begin{aligned}
J_3(t) &= E\Big[\sup_{t_k \le s \le t} \Big|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,d\tilde{N}(u) + \lambda \int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,du\Big|^r\Big] \\
&\le 2^{r-1} E\Big[\sup_{t_k \le s \le t} \Big|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,d\tilde{N}(u)\Big|^r\Big] + 2^{r-1} \lambda^r E\Big[\sup_{t_k \le s \le t} \Big|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,du\Big|^r\Big] \\
&\le 2^{r-1} \lambda^{r/2} \Big(\frac{r^3}{2(r-1)}\Big)^{r/2} E\Big[\int_{t_k}^{t} \big|H(x^n_s, s; x^n_{t_k})\big|^2\,ds\Big]^{r/2} + 2^{r-1} \lambda^r (t - t_k)^{r-1} \int_{t_k}^{t} E\big|H(x^n_s, s; x^n_{t_k})\big|^r\,ds \\
&\le 2^{r-1} \lambda^{r/2} \Big(\frac{r^3}{2(r-1)}\Big)^{r/2} (t - t_k)^{r/2 - 1} \int_{t_k}^{t} E\big|H(x^n_s, s; x^n_{t_k})\big|^r\,ds + 2^{r-1} \lambda^r \int_{t_k}^{t} E\big|H(x^n_s, s; x^n_{t_k})\big|^r\,ds \\
&\le C(t - t_k)^{r/2} + C(t - t_k),
\end{aligned}
$$

since $t - t_k \le \Delta \le 1$.

Combining the estimates of $J_1(t)$, $J_2(t)$ and $J_3(t)$, we obtain

$$E\Big[\sup_{s \in [t_k, t]} |x^n(s) - x^n(t_k)|^r\Big] \le C(t - t_k)^{r/2} + C(t - t_k) \le C(t - t_k)^{r/2} \le C \Delta^{r/2}.$$

 □
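As a numerical aside (not part of the proof), the compensation step used in bounding $J_3$ is easy to check by simulation: the sample mean of $\tilde{N}(t) = N(t) - \lambda t$ is close to zero and its sample variance is close to $\lambda t$, consistent with the martingale property. The intensity and horizon below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_paths = 2.0, 1.5, 100_000     # illustrative intensity and horizon

# N(t) ~ Poisson(lam * t); the compensated process is N~(t) = N(t) - lam * t.
N_t = rng.poisson(lam * t, size=n_paths)
compensated = N_t - lam * t

print("mean of N~(t):", compensated.mean())   # approx 0, since E[N~(t)] = 0
print("var  of N~(t):", compensated.var())    # approx lam * t
```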

Lemma 2 Under conditions (H1), (H3), (H4) and (H5), for any $r \ge 2$,

$$E\big[\|x^n_t - x^n_{t_k}\|^r\big] \le C \Delta^{r/2}, \quad t \in [t_k, t_{k+1}],\ k = 0, 1, \ldots, n-1.$$
(5)

The proof of this lemma is similar to that of Proposition 2 in [4].

Then, by Lemmas 1 and 2, we can prove the following main result.

Theorem 1 Let conditions (H1)-(H5) be satisfied. Then, for any $r \ge 2$,

$$E\Big[\sup_{t \in [t_0 - \tau, T]} |x(t) - x^n(t)|^r\Big] \le C \Delta^{(m+1)r/2},$$
(6)

where $m = \min\{m_1, m_2, m_3\}$.

Proof For an arbitrary $t \in [t_0, T]$, it follows from (1) and (3) that

$$
\begin{aligned}
x(t) - x^n(t) &= \int_{t_0}^{t} \sum_{k: t_k \le t} \big[f(x_s, s) - F(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\,ds \\
&\quad + \int_{t_0}^{t} \sum_{k: t_k \le t} \big[g(x_s, s) - G(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\,dW(s) \\
&\quad + \int_{t_0}^{t} \sum_{k: t_k \le t} \big[h(x_s, s) - H(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\,dN(s).
\end{aligned}
$$

Since $x(t)$ and $x^n(t)$ satisfy the same initial condition, we obtain

$$
\begin{aligned}
E\Big[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Big] &\le E\Big[\sup_{s \in [t_0 - \tau, t_0]} |x(s) - x^n(s)|^r\Big] + E\Big[\sup_{s \in [t_0, t]} |x(s) - x^n(s)|^r\Big] \\
&= E\Big[\sup_{s \in [t_0, t]} |x(s) - x^n(s)|^r\Big] \\
&\le 3^{r-1} E\Big[\sup_{s \in [t_0, t]} \Big|\int_{t_0}^{s} \sum_{k: t_k \le t} \big[f(x_u, u) - F(x^n_u, u; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(u)\,du\Big|^r\Big] \\
&\quad + 3^{r-1} E\Big[\sup_{s \in [t_0, t]} \Big|\int_{t_0}^{s} \sum_{k: t_k \le t} \big[g(x_u, u) - G(x^n_u, u; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(u)\,dW(u)\Big|^r\Big] \\
&\quad + 3^{r-1} E\Big[\sup_{s \in [t_0, t]} \Big|\int_{t_0}^{s} \sum_{k: t_k \le t} \big[h(x_u, u) - H(x^n_u, u; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(u)\,dN(u)\Big|^r\Big] \\
&\le 3^{r-1} (t - t_0)^{r-1} E\Big[\int_{t_0}^{t} \Big|\sum_{k: t_k \le t} \big[f(x_s, s) - F(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\Big|^r\,ds\Big] \\
&\quad + 3^{r-1} C (t - t_0)^{r/2 - 1} E\Big[\int_{t_0}^{t} \Big|\sum_{k: t_k \le t} \big[g(x_s, s) - G(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\Big|^r\,ds\Big] \\
&\quad + 3^{r-1} C (t - t_0)^{r/2 - 1} E\Big[\int_{t_0}^{t} \Big|\sum_{k: t_k \le t} \big[h(x_s, s) - H(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\Big|^r\,ds\Big] \\
&\quad + 3^{r-1} \lambda^r E\Big[\int_{t_0}^{t} \Big|\sum_{k: t_k \le t} \big[h(x_s, s) - H(x^n_s, s; x^n_{t_k})\big] I_{[t_k, t_{k+1})}(s)\Big|^r\,ds\Big].
\end{aligned}
$$
(7)

Let $j = \max\{i \in \{0, 1, 2, \ldots, n-1\} : t_i \le t\}$ and denote

$$
\begin{aligned}
J_1(t_k, t, u) &= \big[f(x_u, u) - F(x^n_u, u; x^n_{t_k})\big] I_{[t_k, t)}(u), \\
J_2(t_k, t, u) &= \big[g(x_u, u) - G(x^n_u, u; x^n_{t_k})\big] I_{[t_k, t)}(u), \\
J_3(t_k, t, u) &= \big[h(x_u, u) - H(x^n_u, u; x^n_{t_k})\big] I_{[t_k, t)}(u).
\end{aligned}
$$

Then we can write (7) as

$$
\begin{aligned}
E\Big[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Big] &\le 3^{r-1} (t - t_0)^{r-1} \int_{t_0}^{t} E\Big|\sum_{k=0}^{j-1} J_1(t_k, t_{k+1}, u) + J_1(t_j, t, u)\Big|^r\,du \\
&\quad + 3^{r-1} C (t - t_0)^{r/2 - 1} \int_{t_0}^{t} E\Big|\sum_{k=0}^{j-1} J_2(t_k, t_{k+1}, u) + J_2(t_j, t, u)\Big|^r\,du \\
&\quad + 3^{r-1} C (t - t_0)^{r/2 - 1} \int_{t_0}^{t} E\Big|\sum_{k=0}^{j-1} J_3(t_k, t_{k+1}, u) + J_3(t_j, t, u)\Big|^r\,du \\
&\quad + 3^{r-1} \lambda^r \int_{t_0}^{t} E\Big|\sum_{k=0}^{j-1} J_3(t_k, t_{k+1}, u) + J_3(t_j, t, u)\Big|^r\,du.
\end{aligned}
$$
(8)

On the other hand, for $k = 0, 1, \ldots, j-1$,

$$
\sum_{k=0}^{j-1} J_1(t_k, t_{k+1}, u) + J_1(t_j, t, u) =
\begin{cases}
f(x_u, u) - F(x^n_u, u; x^n_{t_k}), & u \in [t_k, t_{k+1}), \\
f(x_u, u) - F(x^n_u, u; x^n_{t_j}), & u \in [t_j, t),
\end{cases}
$$
$$
\sum_{k=0}^{j-1} J_2(t_k, t_{k+1}, u) + J_2(t_j, t, u) =
\begin{cases}
g(x_u, u) - G(x^n_u, u; x^n_{t_k}), & u \in [t_k, t_{k+1}), \\
g(x_u, u) - G(x^n_u, u; x^n_{t_j}), & u \in [t_j, t),
\end{cases}
$$
$$
\sum_{k=0}^{j-1} J_3(t_k, t_{k+1}, u) + J_3(t_j, t, u) =
\begin{cases}
h(x_u, u) - H(x^n_u, u; x^n_{t_k}), & u \in [t_k, t_{k+1}), \\
h(x_u, u) - H(x^n_u, u; x^n_{t_j}), & u \in [t_j, t).
\end{cases}
$$

Hence, relation (8) becomes

$$
\begin{aligned}
E\Big[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Big] &\le 3^{r-1} (T - t_0)^{r-1} \sum_{k=0}^{j-1} \int_{t_k}^{t_{k+1}} E\big|f(x_u, u) - F(x^n_u, u; x^n_{t_k})\big|^r\,du \\
&\quad + 3^{r-1} (T - t_0)^{r-1} \int_{t_j}^{t} E\big|f(x_u, u) - F(x^n_u, u; x^n_{t_j})\big|^r\,du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2 - 1} \sum_{k=0}^{j-1} \int_{t_k}^{t_{k+1}} E\big|g(x_u, u) - G(x^n_u, u; x^n_{t_k})\big|^r\,du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2 - 1} \int_{t_j}^{t} E\big|g(x_u, u) - G(x^n_u, u; x^n_{t_j})\big|^r\,du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2 - 1} \sum_{k=0}^{j-1} \int_{t_k}^{t_{k+1}} E\big|h(x_u, u) - H(x^n_u, u; x^n_{t_k})\big|^r\,du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2 - 1} \int_{t_j}^{t} E\big|h(x_u, u) - H(x^n_u, u; x^n_{t_j})\big|^r\,du \\
&\quad + 3^{r-1} \lambda^r \sum_{k=0}^{j-1} \int_{t_k}^{t_{k+1}} E\big|h(x_u, u) - H(x^n_u, u; x^n_{t_k})\big|^r\,du \\
&\quad + 3^{r-1} \lambda^r \int_{t_j}^{t} E\big|h(x_u, u) - H(x^n_u, u; x^n_{t_j})\big|^r\,du.
\end{aligned}
$$

Using (H1), (H4) and (5) yields

$$
\begin{aligned}
\int_{t_k}^{t} E\big|f(x_u, u) - F(x^n_u, u; x^n_{t_k})\big|^r\,du &\le 2^{r-1}\Big[\int_{t_k}^{t} E\big|f(x_u, u) - f(x^n_u, u)\big|^r\,du + \int_{t_k}^{t} E\big|f(x^n_u, u) - F(x^n_u, u; x^n_{t_k})\big|^r\,du\Big] \\
&\le 2^{r-1} L_1^r \int_{t_k}^{t} E\|x_u - x^n_u\|^r\,du \\
&\quad + 2^{r-1} \int_{t_k}^{t} E\Big|\frac{f^{(m_1+1)}(x^n_{t_k} + \beta(x^n_u - x^n_{t_k}), u)\,(x^n_u - x^n_{t_k}, \ldots, x^n_u - x^n_{t_k})}{(m_1+1)!}\Big|^r\,du \\
&\le 2^{r-1} L_1^r \int_{t_k}^{t} E\|x_u - x^n_u\|^r\,du + \frac{2^{r-1} L_2^r}{[(m_1+1)!]^r} \int_{t_k}^{t} E\|x^n_u - x^n_{t_k}\|^{(m_1+1)r}\,du \\
&\le 2^{r-1} L_1^r \int_{t_k}^{t} E\|x_u - x^n_u\|^r\,du + \frac{2^{r-1} L_2^r C}{[(m_1+1)!]^r} \Delta^{(m_1+1)r/2} (t - t_k),
\end{aligned}
$$

where $k = 0, 1, \ldots, j$ and $t \in [t_k, t_{k+1}]$. Similarly,

$$
\begin{aligned}
\int_{t_k}^{t} E\big|g(x_u, u) - G(x^n_u, u; x^n_{t_k})\big|^r\,du &\le C \int_{t_k}^{t} E\|x_u - x^n_u\|^r\,du + C \Delta^{(m_2+1)r/2} (t - t_k), \\
\int_{t_k}^{t} E\big|h(x_u, u) - H(x^n_u, u; x^n_{t_k})\big|^r\,du &\le C \int_{t_k}^{t} E\|x_u - x^n_u\|^r\,du + C \Delta^{(m_3+1)r/2} (t - t_k).
\end{aligned}
$$

Summing these estimates over $k$, we arrive at

$$E\Big[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Big] \le C \int_{t_0}^{t} E\|x_u - x^n_u\|^r\,du + C \Delta^{(m+1)r/2} (t - t_0),$$

where $m = \min\{m_1, m_2, m_3\}$. In order to estimate $E\|x_u - x^n_u\|^r$, we distinguish two cases:

(1) When $u - \tau < t_0$,

$$
\begin{aligned}
E\|x_u - x^n_u\|^r &\le E\Big[\sup_{\theta \in [-\tau, 0]} |x(u + \theta) - x^n(u + \theta)|^r\Big] = E\Big[\sup_{\gamma \in [u - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Big] \\
&\le E\Big[\sup_{\gamma \in [u - \tau, t_0]} |x(\gamma) - x^n(\gamma)|^r\Big] + E\Big[\sup_{\gamma \in [t_0, u]} |x(\gamma) - x^n(\gamma)|^r\Big] \\
&= E\Big[\sup_{\gamma \in [t_0, u]} |x(\gamma) - x^n(\gamma)|^r\Big] \le E\Big[\sup_{\gamma \in [t_0 - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Big];
\end{aligned}
$$
(2) When $u - \tau \ge t_0$,

$$E\|x_u - x^n_u\|^r \le E\Big[\sup_{\gamma \in [u - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Big] \le E\Big[\sup_{\gamma \in [t_0 - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Big].$$

Hence, in either case,

$$E\Big[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Big] \le C \int_{t_0}^{t} E\Big[\sup_{\gamma \in [t_0 - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Big]\,du + C \Delta^{(m+1)r/2} (t - t_0).$$

By the Gronwall inequality, applied to the function $t \mapsto E[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r]$, we obtain the desired result

$$E\Big[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Big] \le C \Delta^{(m+1)r/2} (T - t_0)\, e^{C(T - t_0)} = C \Delta^{(m+1)r/2},$$

which completes the proof. □

Remark From the proof, it is easy to see that for $m \ge 1$ the approximate solution converges to the true solution of Eq. (1) faster than the Euler-Maruyama approximation, which corresponds to the case $m = 0$.
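The remark can be illustrated empirically. The sketch below uses purely illustrative assumptions (a scalar jump equation without delay and arbitrarily chosen coefficients): it estimates the strong $L^2$ error at $T$ of the $m = 0$ (Euler-Maruyama-type) instance of the scheme against a fine-grid reference driven by the same Brownian and Poisson increments, and prints the observed convergence order. Higher-order Taylor instances of (2) would be compared in the same way.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar jump SDE (an assumption, not from the paper):
#   dx = -x dt + 0.3 x dW + 0.2 x dN,  N a Poisson process with intensity lam.
f = lambda x: -1.0 * x
g = lambda x: 0.3 * x
h = lambda x: 0.2 * x
lam, T, x0 = 2.0, 1.0, 1.0
n_fine, n_paths = 2**12, 1000

def em_jump(dW, dN, n):
    """Euler-Maruyama with jumps over n equal steps, given noise increments."""
    dt = T / n
    x = x0
    for k in range(n):
        x = x + f(x) * dt + g(x) * dW[k] + h(x) * dN[k]
    return x

steps = [2**4, 2**5, 2**6, 2**7]
sq_err = np.zeros(len(steps))
for _ in range(n_paths):
    dW_f = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)   # fine Brownian increments
    dN_f = rng.poisson(lam * T / n_fine, n_fine)          # fine Poisson increments
    x_ref = em_jump(dW_f, dN_f, n_fine)                   # fine-grid reference
    for i, n in enumerate(steps):
        m = n_fine // n
        dW = dW_f.reshape(n, m).sum(axis=1)               # shared, aggregated noise
        dN = dN_f.reshape(n, m).sum(axis=1)
        sq_err[i] += (em_jump(dW, dN, n) - x_ref) ** 2

rms = np.sqrt(sq_err / n_paths)                           # strong L^2 errors
deltas = [T / n for n in steps]
order = np.polyfit(np.log(deltas), np.log(rms), 1)[0]     # log-log slope
print("RMS errors:", rms, "observed strong order ~", order)
```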