1 Introduction

The numerical reproduction of asymptotic properties of stochastic differential equations (SDEs) studies how, given that the underlying SDE has a certain asymptotic property, one chooses a proper numerical method so that the corresponding discrete numerical solution reproduces the same property. Among the different types of asymptotic properties, stability has attracted a lot of attention in recent years. Many papers are devoted to the numerical reproduction of stability of SDEs in different senses, such as mean square stability [1-3], almost sure stability [4-6] and stability in small moment [7], to mention just a few. Another asymptotic property, asymptotic boundedness, has been studied rarely, but has its own interest. Unlike stability, which requires the solution to tend to the trivial solution as time becomes large, boundedness only needs the solution to be bounded above by some positive constant. On the one hand, stability can be regarded as a special case of boundedness. More importantly, asymptotic boundedness plays an important role in the study of stationary distributions of SDEs. In the series of papers by Mao, Yuan, Yin, etc. [8-11], the stationary distributions of numerical solutions were used to approximate the stationary distributions of the underlying equations. One of the key components in proving the existence and uniqueness of the numerical stationary distribution is the moment boundedness of the numerical solutions. We give more details about this in Section 4.

In this paper, we investigate the asymptotic moment boundedness of the stochastic theta method (STM). As the parameter theta is employed to control the implicitness of the method, the STM is regarded as a generalisation of the Euler-Maruyama (EM) method and the backward Euler-Maruyama (BEM) method. The stability of the STM in different senses has been studied by many authors [12-14]. But, to the best of our knowledge, few papers have discussed the asymptotic moment boundedness of the STM for SDEs. Recently, in [11], the authors studied the moment boundedness of the EM method and the BEM method. The results presented in this paper can be seen as a generalisation of those in [11]. In addition, different choices of θ lead to different conditions on the drift and diffusion coefficients. We study the asymptotic boundedness in the second moment and in the pth moment for p much less than one. The study of the second moment is typical, as it can be related to many concepts in engineering such as energy functions. Although the small moments may have no obvious physical meaning, they can be connected to boundedness in probability, which is crucial to the proof of the existence of the numerical stationary distribution. Besides, for the small moments, the conditions required on the drift and diffusion coefficients are weaker than those for the second moment.

This paper is organised as follows. In Section 2, some mathematical preliminaries are stated. The main results and their proofs are presented in Section 3. The application of the moment boundedness to the study of the numerical stationary distribution is discussed in Section 4.

2 Mathematical preliminaries

Throughout this paper, we let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge0}$ which is increasing and right continuous, with $\mathcal{F}_0$ containing all $\mathbb{P}$-null sets. Let $B(t)$ be a scalar Brownian motion defined on this probability space. The results in this paper could be extended to the case of multi-dimensional Brownian motion, but to keep the notation simple we only consider the case of scalar Brownian motion. Let $|\cdot|$ denote the Euclidean norm in $\mathbb{R}^n$. The inner product of $x,y\in\mathbb{R}^n$ is denoted by $\langle x,y\rangle$. In this paper, we consider the $n$-dimensional Itô SDE

\[
dx(t) = f\bigl(x(t)\bigr)\,dt + g\bigl(x(t)\bigr)\,dB(t), \qquad t\ge0,\ x(0)\in\mathbb{R}^n.
\]
(2.1)

We assume that $f,g:\mathbb{R}^n\to\mathbb{R}^n$ are smooth enough for SDE (2.1) to have a unique global solution on $[0,\infty)$.

Let us recall the stochastic theta numerical method that we will use below. The reader is referred to [15-17] for more details on numerical methods. The stochastic theta method (STM) applied to (2.1) is defined by

\[
x_{k+1} = x_k + (1-\theta) f(x_k)\Delta t + \theta f(x_{k+1})\Delta t + g(x_k)\Delta B_k, \qquad x_0 = x(0),
\]
(2.2)

for $k=0,1,\ldots$, where $\Delta t$ is the time step and $\Delta B_k = B\bigl((k+1)\Delta t\bigr) - B(k\Delta t)$ is the Brownian motion increment.

Since the STM is semi-implicit when $\theta\neq0$, to ensure that the method is well defined we impose the one-sided Lipschitz condition on the drift coefficient $f$: there exists a constant $b$ such that for any $x,y\in\mathbb{R}^n$,

\[
\langle x-y,\, f(x)-f(y)\rangle \le b\,|x-y|^2.
\]

This, together with $\theta b\Delta t<1$, ensures that (2.2) is well defined, that is, STM (2.2) can be solved uniquely for the next step $x_{k+1}$ (see, for example, [4]).
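
To make the scheme concrete, the following minimal sketch (our own illustration, not part of the analysis below) advances STM (2.2) by one step, resolving the implicit term by fixed-point iteration; the function name stm_step and the test problem are hypothetical, and the iteration contracts only when $\theta L\Delta t<1$ for a Lipschitz constant $L$ of $f$, in the same spirit as the restriction $\theta b\Delta t<1$ above.

```python
import numpy as np

def stm_step(x, f, g, dt, theta, dW, tol=1e-12, max_iter=100):
    """One step of the stochastic theta method (2.2).

    Solves x_new = x + (1 - theta)*f(x)*dt + theta*f(x_new)*dt + g(x)*dW
    for x_new by fixed-point iteration; Newton's method could be used
    instead for stiff drifts.
    """
    explicit_part = x + (1.0 - theta) * f(x) * dt + g(x) * dW
    x_new = explicit_part                     # initial guess: the explicit part
    for _ in range(max_iter):
        x_next = explicit_part + theta * f(x_new) * dt
        if np.max(np.abs(x_next - x_new)) < tol:
            return x_next
        x_new = x_next
    return x_new

# Hypothetical usage on the scalar test SDE dx = -2x dt + x dB
rng = np.random.default_rng(0)
f = lambda x: -2.0 * x
g = lambda x: x
x, dt, theta = 1.0, 0.01, 0.5
for _ in range(1000):
    x = stm_step(x, f, g, dt, theta, rng.normal(0.0, np.sqrt(dt)))
print(x)
```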

3 Main results

Our main results and their proofs are presented in this section. We start with the second moment in Section 3.1, where two cases of $\theta$ are discussed. The same structure is then used in Section 3.2 to investigate the small moments.

3.1 The second moment

First we discuss the situation $\theta\in[0,1/2)$, which requires a linear growth condition on both the drift and diffusion coefficients. Second, we relax the constraint on the drift coefficient when $\theta\in[1/2,1]$.

The boundedness of the underlying SDE is well known. We state the following theorem and refer the readers to Chapter 5 of [18] for the proof.

Theorem 3.1 Assume that f and g satisfy the local Lipschitz condition. Assume that there exist a negative constant $\mu$ and positive constants $\sigma$, $a_1$, $a_2$ such that for any $x\in\mathbb{R}^n$,

\[
\langle x, f(x)\rangle \le \mu|x|^2 + a_1
\]
(3.1)

and

\[
|g(x)|^2 \le \sigma|x|^2 + a_2.
\]
(3.2)

If

\[
2\mu + \sigma < 0,
\]
(3.3)

then the underlying solution of SDE (2.1) is asymptotically bounded in the second moment:

\[
\limsup_{t\to\infty} E\bigl(|x(t)|^2\bigr) \le \frac{2a_1+a_2}{-(2\mu+\sigma)}, \qquad \forall x(0)\in\mathbb{R}^n.
\]
(3.4)
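
As a simple illustration (our own example, not taken from the original), consider the scalar SDE $dx(t)=(1-2x(t))\,dt+x(t)\,dB(t)$. Since $\langle x,f(x)\rangle=x-2x^2\le-\frac32x^2+\frac12$ (using $x\le\frac12x^2+\frac12$) and $|g(x)|^2=x^2\le x^2+0.1$, conditions (3.1)-(3.3) hold with $\mu=-\frac32$, $a_1=\frac12$, $\sigma=1$, $a_2=0.1$, and Theorem 3.1 yields $\limsup_{t\to\infty}E(|x(t)|^2)\le(2a_1+a_2)/(-(2\mu+\sigma))=0.55$.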

Now we consider reproducing this boundedness property by the STM.

3.1.1 $\theta\in[0,1/2)$

Theorem 3.2 Let (3.1), (3.2) and (3.3) hold, and assume further that f satisfies the linear growth condition

\[
|f(x)|^2 \le \kappa|x|^2 + a_3,
\]
(3.5)

where $\kappa$ and $a_3$ are positive constants. Then for $\Delta t < \frac{-(2\mu+\sigma)}{(1-\theta)^2\kappa}$, the STM solution (2.2) satisfies

\[
\limsup_{k\to\infty} E\bigl(|x_k|^2\bigr) \le \frac{2a_1+a_2+(1-\theta)^2 a_3\Delta t}{-(2\mu+\sigma)-(1-\theta)^2\kappa\Delta t}, \qquad \forall x_0\in\mathbb{R}^n.
\]

Moreover, letting the stepsize $\Delta t\to0$,

\[
\lim_{\Delta t\to0}\limsup_{k\to\infty} E|x_k|^2 \le \frac{2a_1+a_2}{-(2\mu+\sigma)}, \qquad \forall x_0\in\mathbb{R}^n.
\]
(3.6)

Proof Due to (3.1), (3.2), (3.3) and (3.5), we obtain

\begin{align*}
|x_{k+1}|^2 &= \bigl\langle x_{k+1},\, x_k+(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr\rangle + \bigl\langle x_{k+1},\,\theta f(x_{k+1})\Delta t\bigr\rangle\\
&\le \tfrac12|x_{k+1}|^2 + \tfrac12\bigl|x_k+(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr|^2 + \bigl(\mu|x_{k+1}|^2+a_1\bigr)\theta\Delta t\\
&\le \tfrac12|x_{k+1}|^2 + \tfrac12\Bigl[|x_k|^2 + (1-\theta)^2\Delta t^2\bigl(\kappa|x_k|^2+a_3\bigr) + \bigl(\sigma|x_k|^2+a_2\bigr)\Delta t\\
&\qquad + 2(1-\theta)\Delta t\bigl(\mu|x_k|^2+a_1\bigr) + m_k\Bigr] + \bigl(\mu|x_{k+1}|^2+a_1\bigr)\theta\Delta t,
\end{align*}

which, after rearranging, gives

\[
|x_{k+1}|^2 \le \frac{1+(1-\theta)^2\kappa\Delta t^2+\sigma\Delta t+2(1-\theta)\mu\Delta t}{1-2\mu\theta\Delta t}\,|x_k|^2 + \frac{(1-\theta)^2 a_3\Delta t^2+a_2\Delta t+2a_1\Delta t+m_k}{1-2\mu\theta\Delta t},
\]

where $m_k=|g(x_k)|^2\bigl(\Delta B_k^2-\Delta t\bigr)+2\bigl\langle x_k+(1-\theta)f(x_k)\Delta t,\,g(x_k)\Delta B_k\bigr\rangle$. Taking expectations on both sides and noting that $E(m_k)=0$ yields

\[
E|x_{k+1}|^2 \le c_1 E|x_k|^2 + c_2 \le \cdots \le c_1^{k+1} E|x_0|^2 + \frac{c_2\bigl(1-c_1^{k+1}\bigr)}{1-c_1},
\]
(3.7)

where

\[
c_1 = \frac{1+(1-\theta)^2\kappa\Delta t^2+\sigma\Delta t+2(1-\theta)\mu\Delta t}{1-2\mu\theta\Delta t}
\]

and

\[
c_2 = \frac{(1-\theta)^2 a_3\Delta t^2+a_2\Delta t+2a_1\Delta t}{1-2\mu\theta\Delta t}.
\]

Then, for $\Delta t < \frac{-(2\mu+\sigma)}{(1-\theta)^2\kappa}$, we have $c_1<1$. From (3.7), we deduce

\[
\limsup_{k\to\infty} E|x_k|^2 \le \frac{c_2}{1-c_1} = \frac{2a_1+a_2+(1-\theta)^2 a_3\Delta t}{-(2\mu+\sigma)-(1-\theta)^2\kappa\Delta t}.
\]

Letting $\Delta t\to0$, assertion (3.6) follows. □

This theorem shows that the STM can reproduce the upper bound (3.4) of the true solution for the case $\theta\in[0,1/2)$. The EM boundedness result, Theorem 5.2 in [19], is recovered as a special case when $\theta=0$.
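
As a quick numerical sanity check (an illustrative sketch of ours, not taken from the paper), the script below applies the STM with $\theta=0.25$ to the hypothetical test SDE $dx=(1-2x)\,dt+x\,dB$ introduced after Theorem 3.1 and estimates $E|x_k|^2$ by Monte Carlo; since the drift is affine, the implicit step can be solved exactly, and the long-run estimate is expected to stay below the limiting bound $(2a_1+a_2)/(-(2\mu+\sigma))=0.55$ of Theorem 3.2.

```python
import numpy as np

# Illustrative Monte Carlo sketch (ours) for Theorem 3.2 with theta in [0, 1/2):
# test SDE dx = (1 - 2x) dt + x dB, which satisfies (3.1)-(3.3) and (3.5)
# with mu = -1.5, a1 = 0.5, sigma = 1, a2 = 0.1 (and, e.g., kappa = 6, a3 = 3).
rng = np.random.default_rng(1)
theta, dt, n_steps, n_paths = 0.25, 0.01, 5000, 20000

x = np.full(n_paths, 5.0)                     # deliberately large initial value
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    # the drift is affine, so the implicit theta-step is solved exactly
    x = (x + (1 - theta) * (1 - 2 * x) * dt + theta * dt + x * dB) / (1 + 2 * theta * dt)

print("estimated E|x_k|^2:", np.mean(x**2))
print("limiting bound (2a1+a2)/-(2mu+sigma):", (2 * 0.5 + 0.1) / 2.0)
```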

3.1.2 $\theta\in[1/2,1]$

We now relax the constraint on the drift coefficient when $\theta\in[1/2,1]$ and show that the STM reproduces the boundedness property in this case as well. Before the theorem for this case, we first present the following lemma.

Lemma 3.3 Let conditions (3.1) and (3.3) hold. Then for any $A,B\in\mathbb{R}$ with $A\ge B\ge0$, we have the inequality

\[
|x-Bf(x)\Delta t|^2 + 2Ba_1\Delta t \le C\bigl(|x-Af(x)\Delta t|^2 + 2Aa_1\Delta t\bigr),
\]

where $C=\frac{1-2B\mu\Delta t}{1-2A\mu\Delta t}$.

Proof

\begin{align*}
&|x|^2 - 2B\langle x,f(x)\rangle\Delta t + B^2|f(x)\Delta t|^2 + 2Ba_1\Delta t - C\bigl(|x|^2 - 2A\langle x,f(x)\rangle\Delta t + A^2|f(x)\Delta t|^2 + 2Aa_1\Delta t\bigr)\\
&\quad\le (1-C)|x|^2 + 2(CA-B)\mu\Delta t\,|x|^2 + \bigl(B^2-CA^2\bigr)|f(x)\Delta t|^2 \le 0,
\end{align*}

where the first inequality uses (3.1) together with $CA-B=\frac{A-B}{1-2A\mu\Delta t}\ge0$, and the last one holds because $(1-C)+2(CA-B)\mu\Delta t=0$ and $B^2-CA^2\le0$.

 □

For the theorem below, we denote

\[
A = B+\lambda, \qquad B = 1-\theta, \qquad \lambda = (2\theta-1)\wedge\Bigl(1+\frac{\sigma}{2\mu}\Bigr)\frac{1-2\mu\Delta t(1-\theta)}{1+2\mu\Delta t(1-\theta)+\sigma\Delta t}.
\]

Theorem 3.4 Let (3.1), (3.2) and (3.3) hold. If $\theta\in[1/2,1]$, then for any $\Delta t>0$ STM (2.2) satisfies

\[
\limsup_{k\to\infty} E\bigl(|x_k|^2\bigr) \le \frac{(2a_1+a_2)(1-2\mu\Delta t A)}{-2\mu\lambda}, \qquad \forall x_0\in\mathbb{R}^n.
\]

Moreover, letting the stepsize $\Delta t\to0$,

\[
\lim_{\Delta t\to0}\limsup_{k\to\infty} E\bigl(|x_k|^2\bigr) \le \frac{2a_1+a_2}{2\mu(1-2\theta)\wedge\bigl(-(2\mu+\sigma)\bigr)}, \qquad \forall x_0\in\mathbb{R}^n.
\]
(3.8)

In particular, when $\theta=1$,

\[
\lim_{\Delta t\to0}\limsup_{k\to\infty} E\bigl(|x_k|^2\bigr) \le \frac{2a_1+a_2}{-(2\mu+\sigma)}, \qquad \forall x_0\in\mathbb{R}^n.
\]
(3.9)

Proof Using Lemma 3.3 with B=0, we have

\[
|x_k|^2 \le |x_k - Af(x_k)\Delta t|^2 + 2Aa_1\Delta t.
\]

By conditions (3.1)-(3.3) and Lemma 3.3, we have

\begin{align*}
E|x_{k+1}|^2 &\le E\bigl|x_{k+1}-Af(x_{k+1})\Delta t\bigr|^2 + 2Aa_1\Delta t\\
&= E\bigl|x_{k+1}-\theta f(x_{k+1})\Delta t+(\theta-A)f(x_{k+1})\Delta t\bigr|^2 + 2Aa_1\Delta t\\
&= E\Bigl[\bigl|x_{k+1}-\theta f(x_{k+1})\Delta t\bigr|^2 + 2(\theta-A)\bigl\langle x_{k+1},f(x_{k+1})\bigr\rangle\Delta t + \bigl(A^2-\theta^2\bigr)\bigl|f(x_{k+1})\Delta t\bigr|^2\Bigr] + 2Aa_1\Delta t\\
&\le E\Bigl[\bigl|x_k-(1-\theta)f(x_k)\Delta t\bigr|^2 + 4(1-\theta)\bigl\langle x_k,f(x_k)\bigr\rangle\Delta t\Bigr] + E\bigl|g(x_k)\Delta B_k\bigr|^2\\
&\qquad + 2(\theta-A)\bigl(\mu E|x_{k+1}|^2+a_1\bigr)\Delta t + 2Aa_1\Delta t\\
&\le \Bigl[E\bigl|x_k-(1-\theta)f(x_k)\Delta t\bigr|^2 + 2Ba_1\Delta t\Bigr] + \bigl[4(1-\theta)\mu\Delta t+\sigma\Delta t\bigr]E|x_k|^2 + 2(\theta-A)\mu E|x_{k+1}|^2\Delta t\\
&\qquad + \bigl[2(B-A+1)a_1+a_2\bigr]\Delta t + 2Aa_1\Delta t - 2Ba_1\Delta t\\
&\le C\Bigl[E\bigl|x_k-Af(x_k)\Delta t\bigr|^2 + 2Aa_1\Delta t\Bigr] + \bigl[4(1-\theta)\mu\Delta t+\sigma\Delta t\bigr]E|x_k|^2 + 2(\theta-A)\mu E|x_{k+1}|^2\Delta t + (2a_1+a_2)\Delta t\\
&\le \cdots\\
&\le C^{k+1}\Bigl[E\bigl|x_0-Af(x_0)\Delta t\bigr|^2 + 2Aa_1\Delta t + 2\mu(\theta-A)E|x_0|^2\Delta t\Bigr] + \phi_\lambda\Delta t\sum_{i=1}^{k}\bigl(C^{k-i}E|x_i|^2\bigr)\\
&\qquad + 2(\theta-A)\mu E|x_{k+1}|^2\Delta t + (2a_1+a_2)\Delta t\,\frac{1-C^{k}}{1-C},
\end{align*}

where $C<1$, $E|x_0-Af(x_0)\Delta t|^2+2Aa_1\Delta t+2\mu(\theta-A)E|x_0|^2\Delta t\ge0$ and $2(\theta-A)\mu E|x_{k+1}|^2\Delta t\le0$. Noting that

\[
\phi_\lambda = 4(1-\theta)\mu + \sigma + 2C\mu(2\theta-1-\lambda),
\]

when $\theta\le1+\frac{\sigma}{4\mu}$, so that $\lambda=2\theta-1$, we easily have $\phi_\lambda\le0$; when $\theta>1+\frac{\sigma}{4\mu}$, so that $\lambda=(1+\frac{\sigma}{2\mu})\frac{1-2\mu\Delta t(1-\theta)}{1+2\mu\Delta t(1-\theta)+\sigma\Delta t}$, we still have $\phi_\lambda\le0$.

Letting $k\to\infty$, we have

\[
\limsup_{k\to\infty} E\bigl(|x_k|^2\bigr) \le \frac{(2a_1+a_2)\Delta t}{1-C} = \frac{(2a_1+a_2)(1-2\mu\Delta t A)}{-2\mu\lambda}, \qquad \forall x_0\in\mathbb{R}^n.
\]

Letting $\Delta t\to0$, assertion (3.8) and the special case (3.9) for $\theta=1$ follow. □

This theorem shows that, without the linear growth condition on the drift coefficient, the STM can still reproduce the boundedness property (3.4) of the true solution. The BEM boundedness result, Theorem 5.4 in [19], is recovered as a special case when $\theta=1$.
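
Theorem 3.4 requires no linear growth condition on the drift, so a super-linear drift serves as a natural illustration. The sketch below (our own hypothetical example, not from the original) applies the BEM, i.e. $\theta=1$, to $dx=(1-2x-x^3)\,dt+x\,dB$, which satisfies (3.1)-(3.3) with $\mu=-\frac32$, $a_1=\frac12$, $\sigma=1$, $a_2=0.1$ although the cubic drift violates (3.5); the implicit step is resolved by Newton's method and the long-run second-moment estimate stays bounded.

```python
import numpy as np

# Our own illustrative sketch for the theta = 1 (BEM) case of Theorem 3.4:
# hypothetical test SDE dx = (1 - 2x - x^3) dt + x dB with mu = -1.5, a1 = 0.5,
# sigma = 1, a2 = 0.1; the cubic drift has no linear growth bound (3.5).
rng = np.random.default_rng(2)
dt, n_steps, n_paths = 0.05, 4000, 20000

def bem_step(x, dB, dt, newton_iters=10):
    """One backward Euler-Maruyama step; the implicit equation is solved by Newton."""
    rhs = x + x * dB                          # explicit part: x_k + g(x_k) dB_k
    y = x.copy()                              # initial Newton guess
    for _ in range(newton_iters):
        F = y * (1 + 2 * dt) + dt * y**3 - dt - rhs
        dF = 1 + 2 * dt + 3 * dt * y**2
        y = y - F / dF
    return y

x = np.full(n_paths, 3.0)
for _ in range(n_steps):
    x = bem_step(x, rng.normal(0.0, np.sqrt(dt), size=n_paths), dt)

print("estimated E|x_k|^2:", np.mean(x**2))
print("limiting bound (2a1+a2)/-(2mu+sigma):", (2 * 0.5 + 0.1) / 2.0)
```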

3.2 The small moment

In this section, we discuss the asymptotic boundedness of the STM in the pth moment for small p. First we discuss the situation $\theta\in[0,1/2)$, which requires a linear growth condition on both the drift and diffusion coefficients. Second, we relax the constraint on the drift coefficient when $\theta\in[1/2,1]$.

3.2.1 $\theta\in[0,1/2)$

We begin by imposing the linear growth condition on both drift and diffusion coefficients of SDE (2.1):

\[
|f(x)|^2 \vee |g(x)|^2 \le \kappa|x|^2 + a, \qquad \forall x\in\mathbb{R}^n,
\]
(3.10)

where κ and a are positive constants. We first present the theorem on the asymptotic boundedness in small moment of the solution of (2.1).

Theorem 3.5 Let (3.10) hold. If there exists a positive constant $D$ such that for any $x\in\mathbb{R}^n$,

\[
\frac{\langle x,f(x)\rangle+\frac12|g(x)|^2}{D+|x|^2} - \frac{\langle x,g(x)\rangle^2}{(D+|x|^2)^2} \le -\lambda + \frac{P_3(|x|)}{(D+|x|^2)^2},
\]
(3.11)

where $\lambda$ is a positive constant and $P_i(|x|)$ is a polynomial of $|x|$ with degree $i$, then there exists $p^*\in(0,1)$ such that for all $0<p<p^*$ the solution of (2.1) obeys

\[
\limsup_{t\to\infty} E\bigl(|x(t)|^p\bigr) \le C, \qquad \forall x(0)\in\mathbb{R}^n,
\]
(3.12)

where C is a positive constant dependent on κ, a, p, D, but independent of x(0).

Following the same technique as in Theorem 5.2 of [18], choosing the Lyapunov function $V(x)=(D+|x|^2)^{p/2}$, it is straightforward to prove this theorem, so we omit the proof here.
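
For the reader's convenience, here is a brief sketch (ours, under conditions (3.10)-(3.11)) of why (3.11) is the natural condition for this Lyapunov function. For $V(x)=(D+|x|^2)^{p/2}$, the diffusion operator $L$ associated with (2.1) satisfies

\begin{align*}
LV(x) &= p\bigl(D+|x|^2\bigr)^{p/2-1}\Bigl(\langle x,f(x)\rangle+\tfrac12|g(x)|^2\Bigr) + \frac{p(p-2)}{2}\bigl(D+|x|^2\bigr)^{p/2-2}\langle x,g(x)\rangle^2\\
&= p\bigl(D+|x|^2\bigr)^{p/2}\Bigl(\frac{\langle x,f(x)\rangle+\frac12|g(x)|^2}{D+|x|^2} - \frac{\langle x,g(x)\rangle^2}{(D+|x|^2)^2} + \frac{p}{2}\,\frac{\langle x,g(x)\rangle^2}{(D+|x|^2)^2}\Bigr),
\end{align*}

so (3.11), together with $\langle x,g(x)\rangle^2\le|x|^2|g(x)|^2\le(D+|x|^2)(\kappa|x|^2+a)$ from (3.10), gives $LV(x)\le p(D+|x|^2)^{p/2}\bigl(-\lambda+\frac p2(\kappa+a/D)+P_3(|x|)/(D+|x|^2)^2\bigr)$, which is negative outside a bounded set once $p$ is small. Now we give the result for the STM solution.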

Theorem 3.6 Let (3.10) and (3.11) hold, and assume $\lambda>\theta(1+\kappa)$. Then, for any $\varepsilon\in(0,\lambda-\theta(1+\kappa))$, there exists a pair of constants $p^*\in(0,1)$ and $\Delta t^*\in(0,1)$ such that for $p\in(0,p^*)$ and $\Delta t\in(0,\Delta t^*)$, the STM solution (2.2) satisfies

\[
\limsup_{k\to\infty} E|x_k|^p \le \frac{C_2}{p\bigl(\lambda-\theta(1+\kappa)-\varepsilon\bigr)}, \qquad \forall x_0\in\mathbb{R}^n,
\]
(3.13)

where $C_2$ is a constant dependent on $\kappa$, $a$, $p$ and $D$, but independent of $x_0$ and $\Delta t$.

In particular, when $\theta=0$,

\[
\limsup_{k\to\infty} E|x_k|^p \le \frac{C_2}{p(\lambda-\varepsilon)}, \qquad \forall x_0\in\mathbb{R}^n.
\]

Proof From (2.2), for $\Delta t<\frac{1}{\theta(1+\kappa)}$ we have

\begin{align*}
|x_{k+1}|^2 &= \bigl\langle x_{k+1},\, x_k+(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr\rangle + \bigl\langle x_{k+1},\,\theta f(x_{k+1})\Delta t\bigr\rangle\\
&\le \Bigl(\frac12+\frac12\theta\Delta t+\frac12\theta\kappa\Delta t\Bigr)|x_{k+1}|^2 + \frac12\theta a\Delta t + \frac12\bigl|x_k+(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr|^2,
\end{align*}

so that

\begin{align*}
|x_{k+1}|^2 &\le \frac{1}{1-\theta(1+\kappa)\Delta t}\Bigl(|x_k|^2 + 2\bigl\langle x_k,\,(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr\rangle\\
&\qquad + \bigl|(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr|^2 + \theta a\Delta t\Bigr).
\end{align*}

For the constant $D$ in (3.11), we have

\[
D+|x_{k+1}|^2 \le \frac{D}{1-\theta(1+\kappa)\Delta t} + |x_{k+1}|^2 \le \frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}(1+\xi_k),
\]

where

\[
\xi_k = \frac{1}{D+|x_k|^2}\Bigl(2\bigl\langle x_k,\,(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr\rangle + \bigl|(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr|^2 + \theta a\Delta t\Bigr).
\]

For any $p\in(0,1)$ we have

\[
\bigl(D+|x_{k+1}|^2\bigr)^{p/2} \le \Bigl(\frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}\Bigr)^{p/2}(1+\xi_k)^{p/2}.
\]

Clearly $\xi_k>-1$; recalling the fundamental inequality

\[
(1+u)^{p/2} \le 1 + \frac{p}{2}u + \frac{p(p-2)}{8}u^2 + \frac{p(p-2)(p-4)}{2^3\times3!}u^3, \qquad u>-1,
\]
(3.14)

we have

\[
\bigl(D+|x_{k+1}|^2\bigr)^{p/2} \le \Bigl(\frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}\Bigr)^{p/2}\Bigl(1+\frac{p}{2}\xi_k+\frac{p(p-2)}{8}\xi_k^2+\frac{p(p-2)(p-4)}{2^3\times3!}\xi_k^3\Bigr).
\]

Hence the conditional expectation satisfies

\[
E\Bigl(\bigl(D+|x_{k+1}|^2\bigr)^{p/2}\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr) \le \Bigl(\frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}\Bigr)^{p/2} E\Bigl(1+\frac{p}{2}\xi_k+\frac{p(p-2)}{8}\xi_k^2+\frac{p(p-2)(p-4)}{2^3\times3!}\xi_k^3\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr).
\]
(3.15)

Since $\Delta B_k$ is independent of $\mathcal{F}_{k\Delta t}$, we have $E(\Delta B_k\,|\,\mathcal{F}_{k\Delta t})=E(\Delta B_k)=0$ and $E\bigl((\Delta B_k)^2\,|\,\mathcal{F}_{k\Delta t}\bigr)=E\bigl((\Delta B_k)^2\bigr)=\Delta t$. By (3.10) we can get

\begin{align*}
E(\xi_k\,|\,\mathcal{F}_{k\Delta t}) &= E\Bigl(\frac{1}{D+|x_k|^2}\Bigl[2\bigl\langle x_k,\,(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr\rangle + \bigl|(1-\theta)f(x_k)\Delta t+g(x_k)\Delta B_k\bigr|^2 + \theta a\Delta t\Bigr]\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr)\\
&\le \frac{1}{D+|x_k|^2}\bigl[2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr]\Delta t + \theta(1+\kappa)\Delta t + C_1\Delta t^2 + \frac{C_2}{D+|x_k|^2}\Delta t.
\end{align*}
(3.16)

Similarly, we can show that

\[
E\bigl(\xi_k^2\,\big|\,\mathcal{F}_{k\Delta t}\bigr) \ge \frac{4}{(D+|x_k|^2)^2}\langle x_k,g(x_k)\rangle^2\Delta t - C_1\Delta t^2 - \frac{C_2}{(D+|x_k|^2)^2}\Delta t
\]
(3.17)

and

\[
E\bigl(\xi_k^3\,\big|\,\mathcal{F}_{k\Delta t}\bigr) \le C_1\Delta t^2 + \frac{C_2}{(D+|x_k|^2)^3}\Delta t,
\]
(3.18)

where $C_1$ is a positive constant dependent on $\kappa$, and $C_2$ is a positive constant dependent on $a$. $C_1$ and $C_2$ may change from line to line. Now consider the following fraction:

\[
\frac{(D+|x_k|^2)^{p/2}\,P_3(|x_k|)}{(D+|x_k|^2)^2}.
\]
(3.19)

For $0<p<1$, it is obvious that this fraction has an upper bound, since the numerator grows at most like $|x_k|^{3+p}$ while the denominator grows like $|x_k|^4$. Substituting (3.16), (3.17) and (3.18) into (3.15), then using (3.10), (3.11) and the argument for (3.19), we have that

\begin{align*}
E\Bigl(\bigl(D+|x_{k+1}|^2\bigr)^{p/2}\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr) &\le \Bigl(\frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}\Bigr)^{p/2}\Bigl[1 + \frac{p}{2(D+|x_k|^2)}\bigl(2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr)\Delta t\\
&\qquad + \frac{p(p-2)}{2(D+|x_k|^2)^2}\langle x_k,g(x_k)\rangle^2\Delta t + \frac{p}{2}\theta(1+\kappa)\Delta t + C_1\Delta t^2\Bigr] + C_2\Delta t\\
&\le \Bigl(\frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}\Bigr)^{p/2}\Bigl[1 + p\Delta t\Bigl(\frac{\langle x_k,f(x_k)\rangle+\frac12|g(x_k)|^2}{D+|x_k|^2} - \frac{\langle x_k,g(x_k)\rangle^2}{(D+|x_k|^2)^2}\Bigr)\\
&\qquad + \frac{p^2\Delta t\,\langle x_k,g(x_k)\rangle^2}{2(D+|x_k|^2)^2} + \frac{p}{2}\theta(1+\kappa)\Delta t + C_1\Delta t^2\Bigr] + C_2\Delta t\\
&\le \Bigl(\frac{D+|x_k|^2}{1-\theta(1+\kappa)\Delta t}\Bigr)^{p/2}\Bigl(1 + p\Bigl(\frac{\theta(1+\kappa)}{2}-\lambda\Bigr)\Delta t + \frac{p^2\kappa\Delta t}{2} + C_1\Delta t^2\Bigr) + C_2\Delta t,
\end{align*}

where $C_1$ is a positive constant dependent on $\kappa$ and $p$, $C_2$ is a positive constant dependent on $\kappa$, $a$, $p$ and $D$, and both of them may change from line to line. Taking expectations on both sides, we obtain

\[
E\Bigl(\bigl(D+|x_{k+1}|^2\bigr)^{p/2}\Bigr) \le \frac{1 + p\bigl[\frac12\theta(1+\kappa)-\lambda\bigr]\Delta t + \frac12 p^2\kappa\Delta t + C_1\Delta t^2}{\bigl(1-\theta(1+\kappa)\Delta t\bigr)^{p/2}}\, E\Bigl(\bigl(D+|x_k|^2\bigr)^{p/2}\Bigr) + C_2\Delta t.
\]
(3.20)

For any $\varepsilon\in(0,\lambda-\theta(1+\kappa))$, choose $p^*$ sufficiently small such that $p^*\kappa\le\frac14\varepsilon$, and then $\Delta t^*$ sufficiently small; for $p<p^*$ and $\Delta t<\Delta t^*$ we have

\[
\bigl(1-\theta(1+\kappa)\Delta t\bigr)^{p/2} \ge 1 - \frac12 p\theta(1+\kappa)\Delta t - C_3\Delta t^2 > 0,
\]
(3.21)

where $C_3>0$ is a constant dependent on $\theta$, $\kappa$ and $p$. Then further reducing $\Delta t^*$ gives that for $\Delta t<\Delta t^*$,

\[
C_1\Delta t < \frac18 p\varepsilon, \qquad C_3\Delta t < \frac14 p\varepsilon, \qquad \Bigl|p\Bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\Bigr)\Delta t\Bigr| < \frac12.
\]

Using these three inequalities together with (3.21), we have from (3.20) that

\[
E\Bigl(\bigl(D+|x_{k+1}|^2\bigr)^{p/2}\Bigr) \le \frac{1 + p\bigl(\frac12\theta(1+\kappa)-\lambda+\frac14\varepsilon\bigr)\Delta t}{1 - p\bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\bigr)\Delta t}\, E\Bigl(\bigl(D+|x_k|^2\bigr)^{p/2}\Bigr) + C_2\Delta t.
\]
(3.22)

Since for any $h\in[-0.5,0.5]$,

\[
(1-h)^{-1} = 1 + h + h^2\sum_{i=0}^{\infty}h^i \le 1 + h + h^2\sum_{i=0}^{\infty}0.5^i = 1 + h + 2h^2,
\]

by further reducing $\Delta t^*$ we obtain that for any $\Delta t<\Delta t^*$,

\[
2p\Bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\Bigr)^2\Delta t + \Bigl(\frac12\theta(1+\kappa)-\lambda+\frac14\varepsilon\Bigr)\Bigl[p\Bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\Bigr)\Delta t + 2\Bigl(p\Bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\Bigr)\Delta t\Bigr)^2\Bigr] < \frac{\varepsilon}{2}.
\]

Together with (3.22), we arrive at

\begin{align*}
E\Bigl(\bigl(D+|x_{k+1}|^2\bigr)^{p/2}\Bigr) &\le \Bigl[1+p\Bigl(\frac12\theta(1+\kappa)-\lambda+\frac14\varepsilon\Bigr)\Delta t\Bigr]\Bigl[1+p\Bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\Bigr)\Delta t\\
&\qquad + 2\Bigl(p\Bigl(\frac12\theta(1+\kappa)+\frac14\varepsilon\Bigr)\Delta t\Bigr)^2\Bigr] E\Bigl(\bigl(D+|x_k|^2\bigr)^{p/2}\Bigr) + C_2\Delta t\\
&\le \bigl[1+p\bigl(\theta(1+\kappa)-\lambda+\varepsilon\bigr)\Delta t\bigr] E\Bigl(\bigl(D+|x_k|^2\bigr)^{p/2}\Bigr) + C_2\Delta t.
\end{align*}

Since $\theta(1+\kappa)-\lambda+\varepsilon<0$, we have $1+p(\theta(1+\kappa)-\lambda+\varepsilon)\Delta t<1$. Then, by iteration and letting $k\to\infty$, we have

\[
\limsup_{k\to\infty} E\bigl(|x_{k+1}|^p\bigr) \le \limsup_{k\to\infty} E\Bigl(\bigl(D+|x_{k+1}|^2\bigr)^{p/2}\Bigr) \le \frac{C_2}{p\bigl(\lambda-\theta(1+\kappa)-\varepsilon\bigr)}.
\]

 □

The theorem shows that the STM can reproduce the boundedness property (3.12) of the true solution. The EM boundedness result, Theorem 3.2 in [19], is recovered as a special case when $\theta=0$.
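
As with the second moment, the small-moment bound can be checked numerically, and it connects directly to boundedness in probability via the Markov inequality, which is the form needed in Section 4. The sketch below (ours, using the same hypothetical test SDE as before) estimates $E|x_k|^p$ for $p=0.1$ with $\theta=0.25$.

```python
import numpy as np

# Our own sketch: estimate a small moment E|x_k|^p for the STM (theta = 0.25)
# applied to dx = (1 - 2x) dt + x dB, and turn it into a bound in probability
# via Markov's inequality  P(|x_k| >= R) <= E|x_k|^p / R^p.
rng = np.random.default_rng(3)
theta, dt, p, R = 0.25, 0.01, 0.1, 50.0
n_steps, n_paths = 5000, 20000

x = np.full(n_paths, 5.0)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x = (x + (1 - theta) * (1 - 2 * x) * dt + theta * dt + x * dB) / (1 + 2 * theta * dt)

small_moment = np.mean(np.abs(x) ** p)
print("estimated E|x_k|^p:", small_moment)
print("Markov bound on P(|x_k| >= R):", small_moment / R**p)
```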

3.2.2 $\theta\in[1/2,1]$

In this part, we consider the case $\theta\in[1/2,1]$. One may notice from the next theorem that in this case the parameter $\theta$ appears in the conditions; therefore the boundedness of the underlying equation may not be fully reproduced under the same conditions. However, as stated in Section 1, the asymptotic moment boundedness of the numerical solution is, as a stand-alone result, a key component in the study of the numerical stationary distribution. Thus we still keep the next theorem, and the problem of whether one can construct $\theta$-independent sufficient conditions for this case remains open.

Theorem 3.7 Assume that the drift coefficient satisfies (3.1), the diffusion coefficient satisfies (3.10), and the following holds for some positive constant $\lambda$:

\[
\frac{\langle x,f(x)\rangle+\frac12|g(x)|^2}{D+|F(x)|^2} - \frac{\langle x,g(x)\rangle^2}{(D+|F(x)|^2)^2} \le -\lambda + \frac{P_3(|x|)}{(D+|x|^2)^2},
\]

where $F(x)=x-\theta\Delta t\,f(x)$ and $D$ is some positive constant larger than $a_1\theta\Delta t$. Then

\[
\limsup_{k\to\infty} E\bigl(|x_k|^p\bigr) \le \frac{c_2}{p\bigl[\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)-\varepsilon\bigr]},
\]

where $0<\varepsilon<\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)$ and $c_2$ is a positive constant dependent on $\kappa$, $a$, $p$ and $D$.

Proof We start off with

\[
D+|F(x_k)|^2 \ge (D-a_1\theta\Delta t) + (1-2\mu\theta\Delta t)|x_k|^2 + \theta^2|f(x_k)|^2\Delta t^2 > (1-2\mu\theta\Delta t)|x_k|^2 > 0,
\]

where $D>a_1\theta\Delta t$ and $0<\Delta t<\frac{1}{2\theta(\varepsilon-\mu)}$. Then we have

\begin{align*}
|F(x_{k+1})|^2 &= |F(x_k)|^2 + \bigl(2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr)\Delta t + (1-2\theta)|f(x_k)|^2\Delta t^2\\
&\qquad + 2\bigl\langle x_k+(1-\theta)f(x_k)\Delta t,\,g(x_k)\Delta B_k\bigr\rangle + |g(x_k)|^2\bigl(\Delta B_k^2-\Delta t\bigr)\\
&\le |F(x_k)|^2 + \bigl(2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr)\Delta t\\
&\qquad + 2\bigl\langle x_k+(1-\theta)f(x_k)\Delta t,\,g(x_k)\Delta B_k\bigr\rangle + |g(x_k)|^2\bigl(\Delta B_k^2-\Delta t\bigr).
\end{align*}

Using (3.14), we have

\[
E\Bigl(\bigl[D+|F(x_{k+1})|^2\bigr]^{p/2}\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr) \le \bigl[D+|F(x_k)|^2\bigr]^{p/2}\, E\Bigl(1+\frac{p}{2}\xi_k+\frac{p(p-2)}{8}\xi_k^2+\frac{p(p-2)(p-4)}{2^3\times3!}\xi_k^3\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr),
\]
(3.23)

where

\[
\xi_k = \frac{1}{D+|F(x_k)|^2}\Bigl[\bigl(2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr)\Delta t + 2\bigl\langle x_k+(1-\theta)f(x_k)\Delta t,\,g(x_k)\Delta B_k\bigr\rangle + |g(x_k)|^2\bigl(\Delta B_k^2-\Delta t\bigr)\Bigr].
\]

Similar to the proof of Theorem 3.6, we compute that

\begin{align*}
E(\xi_k\,|\,\mathcal{F}_{k\Delta t}) &= \frac{1}{D+|F(x_k)|^2}\bigl(2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr)\Delta t,\\
E\bigl(\xi_k^2\,\big|\,\mathcal{F}_{k\Delta t}\bigr) &\ge \frac{4\langle x_k,g(x_k)\rangle^2\Delta t}{(D+|F(x_k)|^2)^2} - \bigl[(1+\kappa)(1/\theta^2+\kappa)\Delta t + c_1\Delta t^2\bigr]
\end{align*}

and

\[
E\bigl(\xi_k^3\,\big|\,\mathcal{F}_{k\Delta t}\bigr) \le c_2\Delta t^2 + \frac{c_2\Delta t}{(D+|F(x_k)|^2)^3}.
\]

Substituting these three estimates into (3.23), we get

\begin{align*}
E\Bigl(\bigl(D+|F(x_{k+1})|^2\bigr)^{p/2}\,\Big|\,\mathcal{F}_{k\Delta t}\Bigr) &\le \bigl[D+|F(x_k)|^2\bigr]^{p/2}\Bigl[1 + \frac{p}{2(D+|F(x_k)|^2)}\bigl(2\langle x_k,f(x_k)\rangle+|g(x_k)|^2\bigr)\Delta t\\
&\qquad + \frac{p(p-2)}{2(D+|F(x_k)|^2)^2}\langle x_k,g(x_k)\rangle^2\Delta t - \frac{p(p-2)}{8}(1+\kappa)(1/\theta^2+\kappa)\Delta t + c_2\Delta t^2\Bigr] + c_2\Delta t\\
&\le \bigl[D+|F(x_k)|^2\bigr]^{p/2}\Bigl[1 + p\Delta t\Bigl(\frac{\langle x_k,f(x_k)\rangle+\frac12|g(x_k)|^2}{D+|F(x_k)|^2} - \frac{\langle x_k,g(x_k)\rangle^2}{(D+|F(x_k)|^2)^2}\Bigr)\\
&\qquad + \frac{p^2\Delta t\,\langle x_k,g(x_k)\rangle^2}{2(D+|F(x_k)|^2)^2} - \frac{p(p-2)}{8}(1+\kappa)(1/\theta^2+\kappa)\Delta t + c_2\Delta t^2\Bigr] + c_2\Delta t\\
&\le \bigl[D+|F(x_k)|^2\bigr]^{p/2}\Bigl[1 - p\Delta t\Bigl(\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)\Bigr) + \frac{p^2\kappa\Delta t}{2(1-2\mu\theta\Delta t)} + c_2\Delta t^2\Bigr] + c_2\Delta t,
\end{align*}

where $c_1$ is a positive constant dependent on $\kappa$ and $p$, $c_2$ is a positive constant dependent on $\kappa$, $a$, $p$ and $D$, and both of them may change from line to line. Taking expectations on both sides, we obtain

\[
E\Bigl(\bigl(D+|F(x_{k+1})|^2\bigr)^{p/2}\Bigr) \le \Bigl[1 - p\Delta t\Bigl(\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)\Bigr) + \frac{p^2\kappa\Delta t}{2(1-2\mu\theta\Delta t)} + c_2\Delta t^2\Bigr] E\Bigl(\bigl[D+|F(x_k)|^2\bigr]^{p/2}\Bigr) + c_2\Delta t.
\]

For any $\varepsilon\in\bigl(0,\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)\bigr)$, choose $p^*$ sufficiently small such that $\frac{p^*\kappa}{1-2\mu\theta\Delta t}\le\varepsilon$, and then choose $\Delta t^*\in(0,1)$ sufficiently small such that $p^*\Delta t^*\bigl[\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)\bigr]\le1$ and $c_2\Delta t^*\le\frac12 p^*\varepsilon$. For any $p\in(0,p^*)$ and any $\Delta t\in(0,\Delta t^*)$, we have

\[
E\Bigl(\bigl(D+|F(x_{k+1})|^2\bigr)^{p/2}\Bigr) \le \Bigl[1 - p\Delta t\Bigl(\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)-\varepsilon\Bigr)\Bigr] E\Bigl(\bigl[D+|F(x_k)|^2\bigr]^{p/2}\Bigr) + c_2\Delta t.
\]

Then, by iteration and letting $k\to\infty$, we have

\[
\limsup_{k\to\infty} E\Bigl(\bigl(D+|F(x_k)|^2\bigr)^{p/2}\Bigr) \le \frac{c_2}{p\bigl[\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)-\varepsilon\bigr]}.
\]

Then

\[
\limsup_{k\to\infty} E\bigl(|x_k|^p\bigr) \le \limsup_{k\to\infty} \frac{E\bigl(\bigl(D+|F(x_k)|^2\bigr)^{p/2}\bigr)}{(1-2\mu\theta\Delta t)^{p/2}} \le \frac{c_2}{p\bigl[\lambda+\frac{p-2}{8}(1+\kappa)(1/\theta^2+\kappa)-\varepsilon\bigr]}.
\]

The proof is complete. □

4 Application and further research

In this section, we illustrate the application of the results in the last section to the study of numerical stationary distribution.

Recalling Theorem 3.1 in [10], the authors proved that, for any given one-step numerical method, if the following three assumptions hold and the numerical solution is a homogeneous Markov process with a proper transition probability kernel, then the numerical solution has a unique stationary distribution as time tends to infinity.

Assumption 4.1 For any $\varepsilon>0$ and $x_0\in\mathbb{R}^d$, there exists a constant $R=R(\varepsilon,x_0)>0$ such that

\[
P\bigl(|x_k^{x_0}|\ge R\bigr) < \varepsilon \quad\text{for any } k\ge0.
\]

Assumption 4.2 For any $\varepsilon>0$ and any compact subset $K$ of $\mathbb{R}^d$, there exists a positive integer $k^*=k^*(\varepsilon,K)$ such that

\[
P\bigl(|x_k^{x_0}-x_k^{y_0}|<\varepsilon\bigr) \ge 1-\varepsilon \quad\text{for any } k\ge k^* \text{ and any } (x_0,y_0)\in K\times K.
\]

Assumption 4.3 For any $\varepsilon>0$, $n\ge1$ and any compact subset $K$ of $\mathbb{R}^d$, there exists $R=R(\varepsilon,n,K)>0$ such that

\[
P\Bigl(\sup_{0\le k\le n}|x_k^{x_0}|\le R\Bigr) > 1-\varepsilon \quad\text{for any } x_0\in K,
\]

where $x_k^{x_0}$ denotes the numerical solution at step $k$ with initial value $x_0$.

It is clear that Assumption 4.1 is satisfied by the results in Section 3 together with the Chebyshev inequality. Furthermore, it is not hard to see that one can adapt the proofs in the previous section to show that, for $p=2$ and for small enough $p$, $E|x_k^{x_0}-x_k^{y_0}|^p$ tends to 0 as time becomes large; then Assumption 4.2 follows. Due to the page limit, we omit the proof here. Assumption 4.3 can be obtained from the finite-time moment boundedness of the STM; see, for example, [15]. In addition, it is easy to adapt the proof of Theorem 2.7 in [11] to show that the numerical solution derived from the STM is a homogeneous Markov process with a proper transition probability kernel.

Therefore, one can see that there exists a unique stationary distribution for the numerical solution generated by the STM. As stated in [8-11], the reason to study the numerical stationary distribution is to approximate the stationary distribution of the underlying equation while avoiding solving the nontrivial Kolmogorov-Fokker-Planck partial differential equation. A more interesting open problem to us is whether the numerical stationary distribution could be used as a numerical solution to certain types of partial differential equations.
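
To illustrate the idea numerically (our own sketch, not part of the cited results), one can run the STM from two different initial values for a long time and compare the two empirical distributions; if a unique numerical stationary distribution exists, the two empirical laws should be close, for example in the Kolmogorov metric.

```python
import numpy as np

# Our own illustrative sketch: approximate the numerical stationary distribution
# of the STM (theta = 0.5) for dx = (1 - 2x) dt + x dB by a long run, and compare
# the empirical laws obtained from two different initial values.
rng = np.random.default_rng(4)
theta, dt, n_steps, n_paths = 0.5, 0.01, 10000, 20000

def long_run(x0):
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = (x + (1 - theta) * (1 - 2 * x) * dt + theta * dt + x * dB) / (1 + 2 * theta * dt)
    return x

xa, xb = long_run(-5.0), long_run(10.0)
grid = np.linspace(-1.0, 3.0, 200)
cdf_a = np.array([np.mean(xa <= t) for t in grid])
cdf_b = np.array([np.mean(xb <= t) for t in grid])
print("Kolmogorov distance between the two empirical laws:",
      np.max(np.abs(cdf_a - cdf_b)))
```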

One may see that those three assumptions are given in the sense of probability, but the existing sufficient conditions for Assumptions 4.1 and 4.2 are all stated in terms of moments. The small moments illustrated in this paper need weaker conditions than the second moment, but those conditions are still much stronger than those for the underlying SDEs [20, 21], in which the sufficient conditions are given in terms of Lyapunov functions. Thus, another interesting open problem is whether one can construct sufficient conditions in terms of Lyapunov functions so that the assumptions can be satisfied directly, without going via the moment results.