1. Introduction

Stochastic differential equations (SDEs) are widely used to model problems in many areas of science and engineering. For instance, in 2006 Henderson and Plaschko [1] published their monograph SDEs in Science and Engineering; in 2007 Mao [2] published his monograph on SDEs; and in 2010 Li and Fu [3] considered the stability analysis of stochastic functional differential equations with infinite delay and its application to recurrent neural networks.

In recent years, there has been increasing interest in stochastic functional differential equations (SFDEs) with finite and infinite delay under conditions less restrictive than the Lipschitz condition. For instance, in 2007 Wei and Wang [4] discussed the existence and uniqueness of the solution of stochastic functional differential equations with infinite delay; in 2008 Mao et al. [5] discussed the almost surely asymptotic stability of neutral stochastic differential delay equations with Markovian switching; in 2008 Ren et al. [6] considered the existence and uniqueness of the solutions of SFDEs with infinite delay; and in 2009 Ren and Xia [7] discussed the existence and uniqueness of the solution of neutral SFDEs with infinite delay. Furthermore, on this topic one can see Halidias [8], Henderson and Plaschko [1], Kim [9, 10], Ren [11], Ren and Xia [7], Taniguchi [12] and the references therein for details.

On the other hand, Mao [2] discussed the d-dimensional stochastic functional differential equation with finite delay

dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dB(t), \quad t_0 \le t \le T,
(1.1)

where x_t = {x(t + θ): -τ ≤ θ ≤ 0} is regarded as a C([-τ, 0]; R^d)-valued stochastic process. The initial value of (1.1) was given as follows:

x_{t_0} = ξ = {ξ(θ): -τ ≤ θ ≤ 0} is an F_{t_0}-measurable C([-τ, 0]; R^d)-valued random variable such that E‖ξ‖^2 < ∞.

Furthermore, Ren et al. [6] considered stochastic functional differential equations with infinite delay on the phase space BC((-∞, 0]; R^d), as described below:

dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dB(t), \quad t_0 \le t \le T,
(1.2)

where x_t = {x(t + θ): -∞ < θ ≤ 0} is regarded as a BC((-∞, 0]; R^d)-valued stochastic process. The initial value of (1.2) was given as follows:

x_{t_0} = ξ = {ξ(θ): -∞ < θ ≤ 0} is an F_{t_0}-measurable BC((-∞, 0]; R^d)-valued random variable such that ξ ∈ M^2((-∞, 0]; R^d).
(1.3)

Following this line of work, we recall the existence and uniqueness of the solutions of Equation (1.2) with initial data (1.3) under the non-Lipschitz condition and the weakened linear growth condition. In this paper, we give a new proof of the existence and uniqueness of the solutions of SFDEs with infinite delay via an alternative approach.

2. Preliminaries

Let | · | denote the Euclidean norm in R^n. If A is a vector or a matrix, its transpose is denoted by A^T; if A is a matrix, its trace norm is |A| = \sqrt{\mathrm{trace}(A^T A)}. Let t_0 be a positive constant and let (Ω, F, P), throughout this paper unless otherwise specified, be a complete probability space with a filtration {F_t}_{t ≥ t_0} satisfying the usual conditions (i.e. it is right continuous and F_{t_0} contains all P-null sets). Assume that B(t) = (B_1(t), B_2(t), ..., B_m(t))^T is an m-dimensional Brownian motion defined on this probability space. Let BC((-∞, 0]; R^d) denote the family of bounded continuous R^d-valued functions φ defined on (-∞, 0] with the norm ‖φ‖ = sup_{-∞ < θ ≤ 0} |φ(θ)|. We denote by M^2((-∞, 0]; R^d) the family of all F_{t_0}-measurable, R^d-valued processes ψ(t) = ψ(t, ω), t ∈ (-∞, 0], such that E ∫_{-∞}^{0} |ψ(t)|^2 dt < ∞. Finally, L^p([a, b]; R^d) denotes the family of R^d-valued F_t-adapted processes {f(t)}_{a ≤ t ≤ b} such that ∫_a^b |f(t)|^p dt < ∞.

With all the above preparation, consider the d-dimensional stochastic functional differential equation

dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dB(t), \quad t_0 \le t \le T,
(2.1)

where x_t = {x(t + θ): -∞ < θ ≤ 0} is regarded as a BC((-∞, 0]; R^d)-valued stochastic process, and where the mappings

f: BC((-\infty, 0]; R^d) \times [t_0, T] \to R^d, \qquad g: BC((-\infty, 0]; R^d) \times [t_0, T] \to R^{d \times m}

are Borel measurable. The initial value of (2.1) is given as follows:

x_{t_0} = ξ = {ξ(θ): -∞ < θ ≤ 0} is an F_{t_0}-measurable BC((-∞, 0]; R^d)-valued random variable such that ξ ∈ M^2((-∞, 0]; R^d).
(2.2)

We first give the definition of a solution of Equation (2.1) with initial data (2.2).

Definition 2.1. [6] An R^d-valued stochastic process x(t) defined on -∞ < t ≤ T is called a solution of (2.1) with initial data (2.2) if x(t) has the following properties:

(i) x(t) is continuous and {x(t)}_{t_0 ≤ t ≤ T} is F_t-adapted;

(ii) {f(x_t, t)} ∈ L^1([t_0, T]; R^d) and {g(x_t, t)} ∈ L^2([t_0, T]; R^{d×m});

(iii) x_{t_0} = ξ and, for each t_0 ≤ t ≤ T,

x(t) = \xi(0) + \int_{t_0}^{t} f(x_s, s)\,ds + \int_{t_0}^{t} g(x_s, s)\,dB(s) \quad \text{a.s.}
(2.3)

A solution x(t) is said to be unique if any other solution x̄(t) is indistinguishable from x(t), that is,

P\{x(t) = \bar{x}(t) \ \text{for all} \ -\infty < t \le T\} = 1.

Integral inequalities of the Gronwall type have been applied in the theory of SDEs to prove results on existence, uniqueness, stability, etc. [10, 13–16]. Naturally, Gronwall's inequality will play an important role in the next section.

Lemma 2.1. (Gronwall's inequality). Let u(t) and b(t) be non-negative continuous functions for tα, and let

u(t) \le a + \int_{\alpha}^{t} b(s) u(s)\,ds, \quad t \ge \alpha,

where a ≥ 0 is a constant. Then

u(t) \le a \exp\left( \int_{\alpha}^{t} b(s)\,ds \right), \quad t \ge \alpha.
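For the reader's convenience, we recall the standard one-line argument behind Lemma 2.1 (this sketch is included here only for orientation and is not part of the cited statement). Set v(t) = a + \int_{\alpha}^{t} b(s) u(s)\,ds, so that u \le v and v is differentiable. Then

v'(t) = b(t) u(t) \le b(t) v(t), \qquad \frac{d}{dt}\left( v(t)\, e^{-\int_{\alpha}^{t} b(s)\,ds} \right) \le 0,

and hence u(t) \le v(t) \le v(\alpha)\, e^{\int_{\alpha}^{t} b(s)\,ds} = a\, e^{\int_{\alpha}^{t} b(s)\,ds}.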

Lemma 2.2. (Bihari's inequality). Let u and b be non-negative continuous functions defined on R+. Let g(u) be a non-decreasing continuous function on R+ and g(u) > 0 on (0, ∞). If

u(t) \le k + \int_{0}^{t} b(s) g(u(s))\,ds,

for tR+, where k ≥ 0 is a constant. Then for 0 ≤ tt1,

u(t) \le G^{-1}\left( G(k) + \int_{0}^{t} b(s)\,ds \right),

where

G(r) = \int_{r_0}^{r} \frac{ds}{g(s)}, \quad r > 0, \ r_0 > 0,

and G^{-1} is the inverse function of G, and t_1 ∈ R^+ is chosen so that

G(k) + \int_{0}^{t} b(s)\,ds \in \mathrm{Dom}(G^{-1}),

for all tR+ lying in the interval 0 ≤ tt1.

The following two lemmas are the moment inequalities for stochastic integrals, which will also play an important role in the next section.

Lemma 2.3. [2] Let p ≥ 2 and g ∈ M^2([0, T]; R^{d×m}) be such that

E \int_{0}^{T} |g(s)|^p\,ds < \infty,

then

E\left| \int_{0}^{T} g(s)\,dB(s) \right|^p \le \left( \frac{p(p-1)}{2} \right)^{p/2} T^{\frac{p-2}{2}} E \int_{0}^{T} |g(s)|^p\,ds.

In particular, for p = 2, there is equality.

Lemma 2.4. [2] Let p ≥ 2 and g ∈ M^2([0, T]; R^{d×m}) be such that

E \int_{0}^{T} |g(s)|^p\,ds < \infty,

then

E\left( \sup_{0 \le t \le T} \left| \int_{0}^{t} g(s)\,dB(s) \right|^p \right) \le \left( \frac{p^3}{2(p-1)} \right)^{p/2} T^{\frac{p-2}{2}} E \int_{0}^{T} |g(s)|^p\,ds.
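The equality case p = 2 of Lemma 2.3 (the Itô isometry) is easy to check numerically. The following Monte Carlo sketch is purely illustrative; the integrand g, the horizon T, and all numerical parameters are arbitrary choices and are not taken from the references.

import numpy as np

rng = np.random.default_rng(1)
T, N, M = 1.0, 1000, 20000                  # horizon, time steps, sample paths
dt = T / N
t = dt * np.arange(N)

g = np.sin(2 * np.pi * t)                   # hypothetical deterministic integrand g(s)
dB = rng.normal(0.0, np.sqrt(dt), (M, N))   # Brownian increments for M independent paths

I = (g * dB).sum(axis=1)                    # samples of the Ito integral int_0^T g(s) dB(s)
print(np.mean(I**2))                        # Monte Carlo estimate of E|int_0^T g dB|^2
print((g**2).sum() * dt)                    # int_0^T |g(s)|^2 ds; the two numbers should agree

On this example both printed values are close to 0.5, in line with the equality for p = 2.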

3. Existence and Uniqueness of the Solutions

In order to obtain the existence and uniqueness of the solution of (2.1) with initial data (2.2), we define x^0_{t_0} = ξ and x^0(t) = ξ(0) for t_0 ≤ t ≤ T. For n = 1, 2, ..., let x^n_{t_0} = ξ and define the Picard sequence

x^n(t) = \xi(0) + \int_{t_0}^{t} f(x^{n-1}_s, s)\,ds + \int_{t_0}^{t} g(x^{n-1}_s, s)\,dB(s), \quad t_0 \le t \le T.
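To give a feeling for how this Picard sequence can be realised numerically, the following sketch combines the iteration above with an Euler–Maruyama discretisation on a fixed Brownian path. The functionals f and g, the initial value, and the truncation of the infinite past to the computational grid are hypothetical choices made only for illustration; they are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
t0, T, N = 0.0, 1.0, 200
dt = (T - t0) / N
t = t0 + dt * np.arange(N + 1)

def f(seg, s):
    # hypothetical drift functional of the (discretised) segment
    return -0.5 * seg[-1]

def g(seg, s):
    # hypothetical bounded diffusion functional of the segment
    return 0.2 * np.tanh(seg).mean()

xi0 = 1.0                               # xi(0); the infinite past is truncated to the grid
dB = rng.normal(0.0, np.sqrt(dt), N)    # one fixed Brownian path shared by all iterates

x = np.full(N + 1, xi0)                 # x^0(t) = xi(0) on [t0, T]
for n in range(1, 6):                   # a few Picard iterates x^1, ..., x^5
    x_new = np.empty(N + 1)
    x_new[0] = xi0
    for k in range(N):
        seg = x[:k + 1]                 # discrete stand-in for the segment x^{n-1}_{t_k}
        x_new[k + 1] = x_new[k] + f(seg, t[k]) * dt + g(seg, t[k]) * dB[k]
    print(n, float(np.max(np.abs(x_new - x))))   # sup distance between successive iterates
    x = x_new

On a typical run the successive sup distances decrease, reflecting the Cauchy property of {x^n(t)} established in the proof below.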

Now we begin to establish the existence and uniqueness theory. We first show that the non-Lipschitz condition and the weakened linear growth condition guarantee the existence and uniqueness.

Theorem 3.1. Assume that there exists a positive number K such that:

(i) For any φ, ψ ∈ BC((-∞, 0]; R^d) and t ∈ [t_0, T], it holds that

|f(\varphi, t) - f(\psi, t)|^2 \vee |g(\varphi, t) - g(\psi, t)|^2 \le \kappa(\|\varphi - \psi\|^2),
(3.1)

where κ(·) is a concave non-decreasing function from R^+ to R^+ such that κ(0) = 0, κ(u) > 0 for u > 0, and \int_{0^+} du/\kappa(u) = \infty (typical examples of such κ are given after the theorem);

(ii) For any t ∈ [t_0, T], f(0, t), g(0, t) ∈ L^2 and

|f(0, t)|^2 \vee |g(0, t)|^2 \le K.
(3.2)

Then the initial value problem (2.1) with initial data (2.2) has a unique solution x(t). Moreover, x_t ∈ M^2((-∞, T]; R^d). We first prepare a lemma to prove this theorem.
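Before turning to the proof, we note for orientation two standard examples of functions satisfying condition (i) (these examples are added here for illustration and are not part of the statement above):

\kappa_1(u) = K u; \qquad \kappa_2(u) = \begin{cases} u \log(1/u), & 0 \le u \le \delta, \\ \delta \log(1/\delta) + \big(\log(1/\delta) - 1\big)(u - \delta), & u > \delta, \end{cases}

with 0 < δ < e^{-1}. Both are concave, non-decreasing, vanish at 0, and satisfy \int_{0^+} du/\kappa(u) = \infty; κ_1 corresponds to the classical Lipschitz condition, while κ_2 is strictly weaker.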

Lemma 3.2. Let assumptions (3.1) and (3.2) of Theorem 3.1 hold. If x(t) is a solution of Equation (2.1) with initial data (2.2), then

E\left( \sup_{-\infty < t \le T} |x(t)|^2 \right) \le E\|\xi\|^2 + c_2 e^{6b(T - t_0 + 1)(T - t_0)},

where c_2 = c_1 + E‖ξ‖^2, c_1 = 3E‖ξ‖^2 + 6(T - t_0 + 1)(T - t_0)[K + a], and a, b are positive constants such that κ(u) ≤ a + bu for all u ≥ 0 (such constants exist because κ is concave and κ(0) = 0; see the proof). In particular, x(t) belongs to M^2((-∞, T]; R^d).

Proof. For each number n ≥ 1, define the stopping time

\tau_n = T \wedge \inf\{t \in [t_0, T]: |x(t)| \ge n\}.

Obviously, τ_n ↑ T a.s. as n → ∞. Let x^n(t) = x(t ∧ τ_n), t ∈ (-∞, T]. Then, for t_0 ≤ t ≤ T, x^n(t) satisfies the equation

x^n(t) = \xi(0) + \int_{t_0}^{t} f(x^n_s, s) I_{[t_0, \tau_n]}(s)\,ds + \int_{t_0}^{t} g(x^n_s, s) I_{[t_0, \tau_n]}(s)\,dB(s).

Using the elementary inequality \left| \sum_{i=1}^{n} x_i \right|^p \le n^{p-1} \sum_{i=1}^{n} |x_i|^p for p ≥ 1, we have

|x^n(t)|^2 \le 3|\xi(0)|^2 + 3\left| \int_{t_0}^{t} f(x^n_s, s) I_{[t_0, \tau_n]}(s)\,ds \right|^2 + 3\left| \int_{t_0}^{t} g(x^n_s, s) I_{[t_0, \tau_n]}(s)\,dB(s) \right|^2.

By Hölder's inequality and the moment inequality, we have

E|x^n(t)|^2 \le 3\left[ E\|\xi\|^2 + (t - t_0) E\int_{t_0}^{t} |f(x^n_s, s)|^2\,ds + E\int_{t_0}^{t} |g(x^n_s, s)|^2\,ds \right].

Hence, by conditions (3.1) and (3.2), one can further show that

E|x^n(t)|^2 \le 3E\|\xi\|^2 + 6(t - t_0 + 1)\left[ E\int_{t_0}^{t} \kappa(\|x^n_s\|^2)\,ds + E\int_{t_0}^{t} K\,ds \right].

Since κ(·) is concave and κ(0) = 0, we can find positive constants a and b such that κ(u) ≤ a + bu for all u ≥ 0 (for instance, a = b = κ(1) works, because concavity together with κ(0) = 0 implies that κ(u)/u is non-increasing). We therefore obtain

E\left( \sup_{t_0 \le s \le t} |x^n(s)|^2 \right) \le c_1 + 6b(t - t_0 + 1) \int_{t_0}^{t} E\|x^n_s\|^2\,ds,

where c_1 = 3E‖ξ‖^2 + 6(T - t_0 + 1)(T - t_0)[K + a]. Noting that \sup_{-\infty < s \le t} |x^n(s)|^2 \le \|\xi\|^2 + \sup_{t_0 \le s \le t} |x^n(s)|^2, we obtain

E\left( \sup_{-\infty < s \le t} |x^n(s)|^2 \right) \le c_2 + 6b(t - t_0 + 1) \int_{t_0}^{t} E\left( \sup_{-\infty < r \le s} |x^n(r)|^2 \right) ds,

where c_2 = c_1 + E‖ξ‖^2. The Gronwall inequality then yields

E\left( \sup_{-\infty < s \le t} |x^n(s)|^2 \right) \le c_2 \exp\big(6b(T - t_0 + 1)(T - t_0)\big).

Letting t = T, it then follows that

E\left( \sup_{-\infty < s \le T} |x(s \wedge \tau_n)|^2 \right) \le E\|\xi\|^2 + c_2 e^{6b(T - t_0 + 1)(T - t_0)}.

Thus

E\left( \sup_{-\infty < s \le \tau_n} |x(s)|^2 \right) \le E\|\xi\|^2 + c_2 e^{6b(T - t_0 + 1)(T - t_0)}.

Consequently the required result follows by letting n → ∞. □

Proof of Theorem 3.1. Let x(t) and x̄(t) be any two solutions of (2.1). By Lemma 3.2, x(t), x̄(t) ∈ M^2((-∞, T]; R^d). Note that

x(t) - \bar{x}(t) = \int_{t_0}^{t} [f(x_s, s) - f(\bar{x}_s, s)]\,ds + \int_{t_0}^{t} [g(x_s, s) - g(\bar{x}_s, s)]\,dB(s).

By the elementary inequality (u + v)^2 ≤ 2(u^2 + v^2), one then gets

|x(t) - \bar{x}(t)|^2 \le 2\left| \int_{t_0}^{t} [f(x_s, s) - f(\bar{x}_s, s)]\,ds \right|^2 + 2\left| \int_{t_0}^{t} [g(x_s, s) - g(\bar{x}_s, s)]\,dB(s) \right|^2.

By Hölder's inequality, the moment inequality, and (3.1), we have

E\left( \sup_{t_0 \le s \le t} |x(s) - \bar{x}(s)|^2 \right) \le 2(T - t_0 + 1) E\int_{t_0}^{t} \kappa(\|x_s - \bar{x}_s\|^2)\,ds.

Since κ(·) is concave, by the Jensen inequality, we have

E\kappa(\|x_s - \bar{x}_s\|^2) \le \kappa\big(E\|x_s - \bar{x}_s\|^2\big).

Consequently, for any ϵ > 0,

E\left( \sup_{t_0 \le s \le t} |x(s) - \bar{x}(s)|^2 \right) \le \epsilon + 2(T - t_0 + 1) \int_{t_0}^{t} \kappa\left( E \sup_{t_0 \le r \le s} |x(r) - \bar{x}(r)|^2 \right) ds.

By the Bihari inequality, one deduces that, for all sufficiently small ϵ > 0,

E\left( \sup_{t_0 \le s \le t} |x(s) - \bar{x}(s)|^2 \right) \le G^{-1}\left[ G(\epsilon) + 2(T - t_0 + 1)(T - t_0) \right],
(3.3)

where

G(r) = \int_{1}^{r} \frac{du}{\kappa(u)}

for r > 0, and G^{-1}(·) is the inverse function of G(·). By the assumption \int_{0^+} du/\kappa(u) = \infty and the definition of κ(·), one sees that \lim_{\epsilon \downarrow 0} G(\epsilon) = -\infty, and then

\lim_{\epsilon \downarrow 0} G^{-1}\left[ G(\epsilon) + 2(T - t_0 + 1)(T - t_0) \right] = 0.

Therefore, letting ϵ → 0 in (3.3) gives

E\left( \sup_{t_0 \le t \le T} |x(t) - \bar{x}(t)|^2 \right) = 0.

This implies that x(t) = x̄(t) a.s. for t_0 ≤ t ≤ T. Since x_{t_0} = x̄_{t_0} = ξ, it follows that x(t) = x̄(t) a.s. for all -∞ < t ≤ T. The uniqueness is proved.
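To illustrate the last step in a familiar special case (a worked example included only for orientation, not part of the original argument), take κ(u) = Ku. Then

G(r) = \int_{1}^{r} \frac{du}{Ku} = \frac{1}{K}\log r, \qquad G^{-1}(y) = e^{Ky},

so (3.3) reads E\big( \sup_{t_0 \le s \le t} |x(s) - \bar{x}(s)|^2 \big) \le \epsilon\, e^{2K(T - t_0 + 1)(T - t_0)}, which tends to 0 as ϵ ↓ 0; this is the familiar Gronwall-type uniqueness bound for the Lipschitz case.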

Next, we check the existence. Define x^0_{t_0} = ξ and x^0(t) = ξ(0) for t_0 ≤ t ≤ T. For each n = 1, 2, ..., set x^n_{t_0} = ξ and define, by the Picard iteration,

x^n(t) = \xi(0) + \int_{t_0}^{t} f(x^{n-1}_s, s)\,ds + \int_{t_0}^{t} g(x^{n-1}_s, s)\,dB(s)
(3.4)

for t0tT. Obviously, x 0 ( t ) M 2 ( [ t 0 , T ] : R d ) . Moreover, it is easy to see that x n ( t ) M 2 ( ( - , T ] : R d ) , in fact

|x^n(t)|^2 \le 3|\xi(0)|^2 + 3\left| \int_{t_0}^{t} f(x^{n-1}_s, s)\,ds \right|^2 + 3\left| \int_{t_0}^{t} g(x^{n-1}_s, s)\,dB(s) \right|^2.

Taking expectations on both sides and using the Hölder inequality and the moment inequality, we have

E|x^n(t)|^2 \le 3E\|\xi\|^2 + 3(t - t_0) E\int_{t_0}^{t} |f(x^{n-1}_s, s)|^2\,ds + 3E\left| \int_{t_0}^{t} g(x^{n-1}_s, s)\,dB(s) \right|^2
\le 3E\|\xi\|^2 + 3(t - t_0) E\int_{t_0}^{t} |f(x^{n-1}_s, s) - f(0, s) + f(0, s)|^2\,ds + 3E\int_{t_0}^{t} |g(x^{n-1}_s, s) - g(0, s) + g(0, s)|^2\,ds.

Using the elementary inequality (u + v)^2 ≤ 2u^2 + 2v^2, together with (3.1) and (3.2), we have

E|x^n(t)|^2 \le 3E\|\xi\|^2 + 3(t - t_0 + 1) E\int_{t_0}^{t} \left[ 2\kappa(\|x^{n-1}_s\|^2) + 2K \right] ds
\le 3E\|\xi\|^2 + 6K(T - t_0)(T - t_0 + 1) + 6(T - t_0 + 1) E\int_{t_0}^{t} \kappa(\|x^{n-1}_s\|^2)\,ds.

As before, since κ(·) is concave and κ(0) = 0, we can find positive constants a and b such that κ(u) ≤ a + bu for all u ≥ 0. So we have

E|x^n(t)|^2 \le c_1 + 6b(T - t_0 + 1) \int_{t_0}^{t} E\|x^{n-1}_s\|^2\,ds,

where c_1 = 3E‖ξ‖^2 + 6(T - t_0)(T - t_0 + 1)[K + a]. It also follows from this inequality that, for any k ≥ 1,

\max_{1 \le n \le k} E|x^n(t)|^2 \le c_1 + 6b(T - t_0 + 1) \int_{t_0}^{t} \max_{1 \le n \le k} E|x^{n-1}(s)|^2\,ds
\le c_1 + 6b(T - t_0 + 1) \int_{t_0}^{t} \left( E\|\xi\|^2 + \max_{1 \le n \le k} E|x^n(s)|^2 \right) ds
\le c_2 + 6b(T - t_0 + 1) \int_{t_0}^{t} \max_{1 \le n \le k} E|x^n(s)|^2\,ds,

where c_2 = c_1 + 6b(T - t_0)(T - t_0 + 1) E‖ξ‖^2. The Gronwall inequality implies

\max_{1 \le n \le k} E|x^n(t)|^2 \le c_2 \exp\big(6b(T - t_0)(T - t_0 + 1)\big).

Since k is arbitrary, we must have

E|x^n(t)|^2 \le c_2 \exp\big(6b(T - t_0 + 1)(T - t_0)\big)
(3.5)

for all t0tT, n ≥ 1.

Next, we show that the sequence {x^n(t)} is a Cauchy sequence. For all n, m ≥ 1 and t_0 ≤ t ≤ T, we have

x^n(t) - x^m(t) = \int_{t_0}^{t} [f(x^{n-1}_s, s) - f(x^{m-1}_s, s)]\,ds + \int_{t_0}^{t} [g(x^{n-1}_s, s) - g(x^{m-1}_s, s)]\,dB(s).

Next, using the elementary inequality (u + v)^2 \le \frac{1}{\alpha}u^2 + \frac{1}{1-\alpha}v^2, valid for any 0 < α < 1, we derive that

|x^n(t) - x^m(t)|^2 \le \frac{1}{\alpha}\left| \int_{t_0}^{t} [f(x^{n-1}_s, s) - f(x^{m-1}_s, s)]\,ds \right|^2 + \frac{1}{1-\alpha}\left| \int_{t_0}^{t} [g(x^{n-1}_s, s) - g(x^{m-1}_s, s)]\,dB(s) \right|^2.

On the other hand, by Hölder's inequality, Lemma 2.4, condition (3.1), and the Jensen inequality, one can show that

E\left( \sup_{t_0 \le s \le t} |x^n(s) - x^m(s)|^2 \right) \le \beta \int_{t_0}^{t} \kappa\left( E \sup_{t_0 \le u \le s} |x^{n-1}(u) - x^{m-1}(u)|^2 \right) ds,
(3.6)

where β = (T - t0)/α + 4/(1 - α). Let

Z(t) = \limsup_{n, m \to \infty} E\left( \sup_{t_0 \le s \le t} |x^n(s) - x^m(s)|^2 \right).

From (3.6), for any ϵ > 0, we get

Z(t) \le \epsilon + \beta \int_{t_0}^{t} \kappa(Z(s))\,ds.

By the Bihari inequality, one deduces that, for all sufficiently small ϵ > 0,

Z(t) \le G^{-1}\left[ G(\epsilon) + \beta(T - t_0) \right],

where

G(r) = \int_{1}^{r} \frac{du}{\kappa(u)}

for r > 0, and G^{-1}(·) is the inverse function of G(·). By the assumption on κ, letting ϵ ↓ 0 as in the uniqueness part gives Z(t) = 0. This shows that the sequence {x^n(t), n ≥ 0} is a Cauchy sequence in L^2. Hence, as n → ∞, x^n(t) → x(t), that is, E|x^n(t) - x(t)|^2 → 0. Letting n → ∞ in (3.5) then yields

E\left( \sup_{t_0 \le s \le t} |x(s)|^2 \right) \le c_2 \exp\big(6b(T - t_0 + 1)(T - t_0)\big)

for all t0tT. Therefore, x t M 2 ( ( - , T ] ; R d ) . It remains to show that x(t) satisfies Equation 2.3. Note that

E\left| \int_{t_0}^{t} \big(f(x^n_s, s) - f(x_s, s)\big)\,ds \right|^2 + E\left| \int_{t_0}^{t} \big(g(x^n_s, s) - g(x_s, s)\big)\,dB(s) \right|^2
\le (t - t_0) E\int_{t_0}^{t} |f(x^n_s, s) - f(x_s, s)|^2\,ds + E\int_{t_0}^{t} |g(x^n_s, s) - g(x_s, s)|^2\,ds
\le (t - t_0 + 1) \int_{t_0}^{T} \kappa\left( E \sup_{t_0 \le u \le s} |x^n(u) - x(u)|^2 \right) ds.

Noting that the sequence x^n(t) converges to x(t) uniformly on (-∞, T] in the above sense, we see that

E\left( \sup_{t_0 \le u \le s} |x^n(u) - x(u)|^2 \right) \to 0

as n → ∞, and further

\kappa\left( E \sup_{t_0 \le u \le s} |x^n(u) - x(u)|^2 \right) \to 0

as n → ∞. Hence, taking limits on both sides of (3.4), we obtain

x(t) = \xi(0) + \int_{t_0}^{t} f(x_s, s)\,ds + \int_{t_0}^{t} g(x_s, s)\,dB(s)

on t0tT. The above expression demonstrates that x(t) is the solution of (2.3). So, the existence of theorem is complete.

Remark 3.1. In the proof of Theorem 3.1, the solution is constructed by successive approximation, which shows how to obtain approximate solutions of (2.1) and how to construct the Picard sequence x^n(t). For SFDEs, the weakened linear growth condition imposed in Theorem 3.1 is reasonable for our discussion. In the papers [6, 7], the proofs of the corresponding assertions are based on several functional inequalities, and the procedures there become rather complicated. For this reason, although an analogous problem is studied here, the proof of the theorem given above is completely different from the one in [7]. Moreover, our new proof relies on Bihari's inequality and is simpler than that reported in [7].