1 Introduction

As the limit process of a weak polymer model, the following doubly perturbed Brownian motion

$$ x_{t}=B_{t}+\alpha\sup_{0\le s\le t}x_{s}+ \beta\inf_{0\le s\le t}x_{s}, $$
(1.1)

was discussed by Norris et al. [1], and it also arises as the scaling limit of some self-interacting random walks (see [2]). During the past few decades, equation (1.1) has attracted much attention; see, for example, [3–9]. Following these works, Doney et al. [10] studied the singly perturbed Skorohod equation

$$ x_{t}=x_{0}+ \int_{0}^{t}\sigma(x_{s}) \,dB_{s}+ \int_{0}^{t}b(x_{s})\,ds+\alpha\sup _{0\le s\le t}x_{s}. $$
(1.2)

Using the Picard iterative procedure, they showed the existence and uniqueness of the solution to equation (1.2). Hu et al. [11] established the corresponding result for doubly perturbed neutral stochastic functional equations, while Luo [12] did so for doubly perturbed jump-diffusion processes.

In fact, the Picard iterative method is a well-known procedure for approximating the solution of stochastic differential equations (SDEs). However, to obtain the Picard iterate \(x^{n}(t)\), one needs to compute \(x^{i}(t)\) for every \(0\le i\le n-1\), which entails a large number of stepwise iterated Itô integrals. In the early twentieth century, Carathéodory [13] put forward an approximation scheme for ordinary differential equations in which the approximate solution is defined through a delay equation; this delay equation can be solved explicitly by successive integration over intervals of length \(\frac{1}{n}\). In other words, the Carathéodory approximation scheme avoids computing \(x^{i}(t)\), \(0\le i\le n-1\).
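To fix ideas, the following is a minimal Python sketch of the classical Carathéodory scheme for a deterministic ODE \(x'(t)=f(t,x(t))\), \(x(0)=x_{0}\); the function name `caratheodory_ode`, the left-point quadrature and the test equation are illustrative choices of ours, not part of the original construction.

```python
import numpy as np

def caratheodory_ode(f, x0, T, n, dt=1e-3):
    """Caratheodory approximation x_n for x'(t) = f(t, x(t)), x(0) = x0:
    x_n(t) = x0 for t <= 0 and x_n(t) = x0 + int_0^t f(s, x_n(s - 1/n)) ds on (0, T].
    The delayed argument only involves values computed on earlier intervals,
    so the integral is evaluated one grid step at a time (left-point rule)."""
    steps = int(round(T / dt))
    t = np.linspace(0.0, steps * dt, steps + 1)
    x = np.full(steps + 1, float(x0))            # x[i] ~ x_n(t_i)
    lag = max(1, int(round(1.0 / (n * dt))))     # grid offset for the delay 1/n (assumes dt <= 1/n)
    for i in range(steps):
        delayed = x[i - lag] if i >= lag else x0  # x_n(t_i - 1/n)
        x[i + 1] = x[i] + f(t[i], delayed) * dt
    return t, x

# Example: x'(t) = -x(t), x(0) = 1; x_n approaches exp(-t) as n grows.
t, x = caratheodory_ode(lambda s, y: -y, x0=1.0, T=2.0, n=50)
```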

Because of this advantage, the approximation procedure has received great attention, and much effort has been devoted to the study of the Carathéodory scheme for SDEs. For example, Bell and Mohammed [14] extended the Carathéodory approximation scheme to SDEs and showed the convergence of the Carathéodory approximate solution. Mao [15, 16] considered a class of SDEs with variable delays and studied the Carathéodory approximate solution of delay SDEs. Turo [17] discussed the Carathéodory approximate solution of stochastic functional differential equations (SFDEs) and established an existence theorem for SFDEs. Liu [18] investigated a class of semilinear stochastic evolution equations with time delays and proved that the Carathéodory approximate solution converges to the solution of stochastic delay evolution equations.

Motivated by the above-mentioned papers, we study the Carathéodory approximation scheme for doubly perturbed stochastic differential equations (DPSDEs)

$$ x(t) = x(0)+ \int_{0}^{t}f\bigl(s,x(s)\bigr)\,ds+ \int_{0}^{t}g\bigl(s,x(s)\bigr)\,dw(s)+\alpha\sup _{0\le s\le t}x(s) +\beta\inf_{0\le s\le t}x(s). $$
(1.3)

To the best of our knowledge, little is known so far about Carathéodory approximations for equation (1.3), and the aim of this paper is to close this gap. We first prove that the Carathéodory approximate solution converges to the solution under the global Lipschitz condition. We then replace the global Lipschitz condition by a more general condition proposed in [19, 20] and show that equation (1.3) has a unique solution under this non-Lipschitz condition.

This paper is organized as follows. In Section 2, we establish the existence theorem for equation (1.3) and show that the Carathéodory approximate solution converges to the solution of equation (1.3) under the global Lipschitz condition. In Section 3, we extend the existence and convergence results of Section 2 to equation (1.3) with non-Lipschitz coefficients.

2 Carathéodory approximation and global Lipschitz DPSDEs

Let \((\Omega , {\mathcal {F}}, \{{\mathcal {F}}_{t}\}_{t\ge0}, P)\) be a complete probability space with a filtration \(\{{\mathcal {F}}_{t}\}_{t\ge 0}\) satisfying the usual conditions (i.e., it is increasing and right continuous, while \({\mathcal {F}}_{0}\) contains all P-null sets). Let \(\{w(t)\}_{t\ge0}\) be a one-dimensional Brownian motion defined on the probability space \((\Omega , {\mathcal {F}}, P)\). Let \({\mathcal {L}}^{2}([a,b];R)\) denote the family of \(\mathcal {F}_{t}\)-measurable, R-valued processes \(f(t)=\{f(t,\omega)\}\), \(t\in [a,b]\) such that \(\int_{a}^{b}|f(t)|^{2}\,dt<\infty\) a.s.

Consider the following doubly perturbed stochastic differential equations:

$$ x(t) = x(0)+ \int_{0}^{t}f\bigl(s,x(s)\bigr)\,ds+ \int_{0}^{t}g\bigl(s,x(s)\bigr)\,dw(s)+\alpha\sup _{0\le s\le t}x(s) +\beta\inf_{0\le s\le t}x(s), $$
(2.1)

where \(\alpha,\beta\in(0,1)\), the initial value is \(x(0)=x_{0}\in R\), and \(f:[0,T]\times R\to R\), \(g:[0,T]\times R \to R\) are Borel-measurable functions. In this paper, we assume that the initial value \(x_{0}\) is independent of w and satisfies \(E|x_{0}|^{2}<\infty\).

Now, we define the sequence of the Carathéodory approximate solutions \(x^{n}: [-1,T]\to R\). For all \(n\ge1\), we define

$$\begin{aligned}& x^{n}(t)=x_{0}, \quad -1\le t\le0, \\& x^{n}(t)=x_{0}+ \int_{0}^{t}f\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,ds+ \int _{0}^{t}g\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,dw(s) \\& \hphantom{x^{n}(t)={}}{}+\alpha\sup_{0\le s\le t}x^{n}\biggl(s- \frac{1}{n}\biggr) +\beta\inf_{0\le s\le t}x^{n} \biggl(s-\frac{1}{n}\biggr), \quad t\in(0,T]. \end{aligned}$$
(2.2)

Note that \(x^{n}(t)\) can be calculated step by step on the intervals \([0,\frac{1}{n})\), \([\frac{1}{n},\frac{2}{n})\), and so on.
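As an illustration of this interval-by-interval computation, here is a Python/NumPy sketch that simulates one sample path of \(x^{n}\) in (2.2) on a uniform grid; the function name `caratheodory_dpsde`, the left-point discretization of the integrals and the running maximum/minimum of the delayed path are our own implementation choices, not prescribed by the scheme itself.

```python
import numpy as np

def caratheodory_dpsde(f, g, x0, alpha, beta, T, n, dt=1e-3, rng=None):
    """One sample path of the Caratheodory approximation x^n in (2.2):
    x^n(t) = x0 + int_0^t f(s, x^n(s-1/n)) ds + int_0^t g(s, x^n(s-1/n)) dw(s)
             + alpha * sup_{0<=s<=t} x^n(s-1/n) + beta * inf_{0<=s<=t} x^n(s-1/n),
    with x^n(t) = x0 for t <= 0.  Every term on the right only uses values of
    x^n delayed by 1/n, so the path is computed one grid step at a time."""
    rng = np.random.default_rng() if rng is None else rng
    steps = int(round(T / dt))
    lag = max(1, int(round(1.0 / (n * dt))))     # grid offset for the delay (assumes dt <= 1/n)
    t = np.linspace(0.0, steps * dt, steps + 1)
    dw = rng.normal(0.0, np.sqrt(dt), steps)     # Brownian increments
    x = np.full(steps + 1, float(x0))
    drift, diff = 0.0, 0.0
    run_max, run_min = float(x0), float(x0)      # sup / inf of the delayed path
    for i in range(steps):
        xd_i = x[i - lag] if i >= lag else x0            # x^n(t_i - 1/n)
        xd_next = x[i + 1 - lag] if i + 1 >= lag else x0  # x^n(t_{i+1} - 1/n)
        drift += f(t[i], xd_i) * dt                      # left-point quadrature
        diff += g(t[i], xd_i) * dw[i]                    # Ito-type sum
        run_max, run_min = max(run_max, xd_next), min(run_min, xd_next)
        x[i + 1] = x0 + drift + diff + alpha * run_max + beta * run_min
    return t, x
```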

To obtain the main results, we impose the following assumptions.

Assumption 2.1

For any \(x,y\in R\) and \(t\in[0,T]\), there exists a positive constant k such that

$$ \bigl\vert f(t,x)-f(t,y) \bigr\vert ^{2}\vee \bigl\vert g(t,x)-g(t,y) \bigr\vert ^{2} \le k \vert x-y \vert ^{2}. $$
(2.3)

Assumption 2.2

For any \(t\in[0,T]\), there exists a positive constant \(\bar{k}\) such that

$$ \bigl\vert f(t,0) \bigr\vert ^{2}\vee \bigl\vert g(t,0) \bigr\vert ^{2}\le\bar{k}. $$
(2.4)

Assumption 2.3

The coefficients satisfy \(|\alpha|+|\beta|<1\).

Remark 2.1

Clearly, Assumptions 2.1 and 2.2 imply the linear growth condition. That is, for any \(x\in R\) and \(t\in[0,T]\),

$$ \bigl\vert f(t,x) \bigr\vert ^{2}\le2 \bigl\vert f(t,x)-f(t,0) \bigr\vert ^{2}+2 \bigl\vert f(t,0) \bigr\vert ^{2}\le2k \vert x \vert ^{2}+2\bar{k}\le L\bigl(1+ \vert x \vert ^{2}\bigr), $$
(2.5)

where \(L=2\max(k,\bar{k})\). Similarly, we have \(|g(t,x)|^{2}\le L(1+|x|^{2})\).

Now, we state our main results.

Theorem 2.1

Let Assumptions 2.1-2.3 hold. Then there exists a unique \(\mathcal{F}_{t}\)-adapted solution \(\{x(t)\}_{t\ge0}\) to equation (2.1). Moreover, for any \(T>0\),

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x(t) \bigr\vert ^{2} \le C\frac{1}{n}, $$
(2.6)

where C is a constant independent of n.

In the sequel, to prove our main results, we need some useful lemmas.

Lemma 2.1

(Gronwall’s inequality [21])

Let \(u_{0}\ge0\), let \(v(\cdot)\ge0\) be integrable on \([0,T]\), and let \(u(\cdot)\) be a real continuous function on \([0,T]\). If

$$u(t)\le u_{0}+ \int_{0}^{t} v(s)u(s)\,ds, \quad \textit{for all } t \in[0,T], $$

then we have

$$u(t)\le u_{0}e^{\int_{0}^{t} v(s)\,ds} $$

for all \(t\in[0,T]\).

Lemma 2.2

Under Assumptions 2.1-2.3, for all \(n\ge1\),

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t) \bigr\vert ^{2}\le C_{1}, $$
(2.7)

where \(C_{1}\) is a positive constant.

Proof

For any \(t\in[0,T]\), it follows from (2.2) that

$$\begin{aligned} \bigl\vert x^{n}(t) \bigr\vert \le& \vert x_{0} \vert + \biggl\vert \int_{0}^{t}f\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,ds \biggr\vert + \biggl\vert \int_{0}^{t}g\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,dw(s) \biggr\vert \\ &{} + \vert \alpha \vert \biggl\vert \sup_{0\le s\le t}x^{n} \biggl(s-\frac{1}{n}\biggr) \biggr\vert + \vert \beta \vert \biggl\vert \inf_{0\le s\le t}x^{n}\biggl(s-\frac{1}{n} \biggr) \biggr\vert \\ \le& \vert x_{0} \vert + \biggl\vert \int_{0}^{t}f\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,ds \biggr\vert + \biggl\vert \int_{0}^{t}g\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,dw(s) \biggr\vert \\ &{}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)\sup _{0\le s\le t} \biggl\vert x^{n}\biggl(s-\frac{1}{n} \biggr) \biggr\vert \\ \le& \vert x_{0} \vert + \biggl\vert \int_{0}^{t}f\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,ds \biggr\vert + \biggl\vert \int_{0}^{t}g\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,dw(s) \biggr\vert \\ &{} +\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)\sup _{-\frac{1}{n}\le s\le0} \bigl\vert x^{n}(s) \bigr\vert +\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)\sup_{0\le s\le t} \bigl\vert x^{n}(s) \bigr\vert . \end{aligned}$$

By the basic inequality \(|a+b+c|^{2}\le3(|a|^{2}+|b|^{2}+|c|^{2})\), one has

$$\begin{aligned}& \bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \\& \quad \le3 \biggl(\bigl(1+ \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E \vert x_{0} \vert ^{2}+E\sup _{0 \le t\le t_{1}} \biggl\vert \int_{0}^{t}f\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,ds \biggr\vert ^{2} \\& \qquad {} +E\sup_{0 \le t\le t_{1}} \biggl\vert \int_{0}^{t}g\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,dw(s) \biggr\vert ^{2} \biggr) \end{aligned}$$
(2.8)

for any \(t_{1}\in[0,T]\). Using the Hölder inequality and the Burkholder-Davis-Gundy inequality, we can easily see that

$$ E\sup_{0\le t\le t_{1}} \biggl\vert \int_{0}^{t}f\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,ds \biggr\vert ^{2} \le TE \int_{0}^{t_{1}} \biggl\vert f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr) \biggr\vert ^{2}\,ds $$
(2.9)

and

$$ E\sup_{0\le t\le t_{1}} \biggl\vert \int_{0}^{t}g\biggl(s,x^{n}\biggl(s- \frac{1}{n}\biggr)\biggr)\,dw(s) \biggr\vert ^{2} \le 4E \int_{0}^{t_{1}} \biggl\vert g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr) \biggr\vert ^{2}\,ds. $$
(2.10)

Then, by Assumptions 2.1, 2.2 and 2.3, we have

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \\& \quad \le \frac{3(1+ \vert \alpha \vert + \vert \beta \vert )^{2}}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}E \vert x_{0} \vert ^{2}+ \frac {3(T+4)L}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}E \int_{0}^{t_{1}}\biggl(1+ \biggl\vert x^{n} \biggl(s-\frac {1}{n}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \quad \le \frac{3(1+ \vert \alpha \vert + \vert \beta \vert )^{2}+3(T+4)L}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}E \vert x_{0} \vert ^{2} \\& \qquad {}+ \frac {3(T+4)L}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \int_{0}^{t_{1}}\Bigl(1+E\sup_{0\le s\le t} \bigl\vert x^{n}(s) \bigr\vert ^{2}\Bigr)\,dt. \end{aligned}$$

Finally, applying the Gronwall inequality to \(1+E\sup_{0\le t\le t_{1}}|x^{n}(t)|^{2}\) implies that

$$ 1+E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \le \biggl(1+\frac{3(1+ \vert \alpha \vert + \vert \beta \vert )^{2}+3(T+4)L}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}E \vert x_{0} \vert ^{2}\biggr)e^{\frac {3(T+4)L}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}T}. $$

The proof is therefore complete. □

Lemma 2.3

For all \(n\ge1\) and \(0\le s< t\le T\),

$$ E \bigl\vert x^{n}(t)-x^{n}(s) \bigr\vert ^{2} \le C_{2}(t-s), $$
(2.11)

where \(C_{2}\) is a positive constant.

Proof

For all \(n\ge1\) and \(0\le s< t\le T\), it follows from (2.2) that

$$\begin{aligned} x^{n}(t)-x^{n}(s) =& \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,d\sigma \\ &{}+ \int_{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,dw(\sigma) \\ &{}+\alpha\sup_{0\le\sigma\le t}x^{n}\biggl(\sigma- \frac{1}{n}\biggr)+\beta \inf_{0\le\sigma\le t}x^{n}\biggl( \sigma-\frac{1}{n}\biggr) \\ &{}-\alpha\sup_{0\le\sigma\le s}x^{n}\biggl(\sigma- \frac{1}{n}\biggr) -\beta\inf_{0\le\sigma\le s}x^{n}\biggl( \sigma-\frac{1}{n}\biggr). \end{aligned}$$
(2.12)

Noting that \(\inf_{0\le\sigma\le t}x^{n}(\sigma-\frac{1}{n})\le\inf_{0\le\sigma\le s}x^{n}(\sigma-\frac{1}{n})\), we have

$$\begin{aligned} \bigl\vert x^{n}(t)-x^{n}(s) \bigr\vert \le& \biggl\vert \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,d\sigma \biggr\vert \\ &{}+ \biggl\vert \int_{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert \\ &{}+\alpha \biggl\vert \sup_{0\le\sigma\le t}x^{n}\biggl( \sigma-\frac{1}{n}\biggr)-\sup_{0\le\sigma\le s}x^{n}\biggl( \sigma-\frac{1}{n}\biggr) \biggr\vert . \end{aligned}$$
(2.13)

Next, let us consider the following two cases.

Case I. If \(\sup_{0\le\sigma\le t}x^{n}(\sigma-\frac{1}{n})=\sup_{0\le \sigma\le s}x^{n}(\sigma-\frac{1}{n})\), then we get from (2.13) that

$$\begin{aligned} \bigl\vert x^{n}(t)-x^{n}(s) \bigr\vert \le& \biggl\vert \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,d\sigma \biggr\vert \\ &{}+ \biggl\vert \int_{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert . \end{aligned}$$
(2.14)

Case II. If \(\sup_{0\le\sigma\le t}x^{n}(\sigma-\frac{1}{n})>\sup_{0\le \sigma\le s}x^{n}(\sigma-\frac{1}{n})\), then there exists \(r\in(s,t]\) such that \(x^{n}(r)=\sup_{0\le\sigma\le t}x^{n}(\sigma-\frac{1}{n})\). So we get from (2.13) that

$$\begin{aligned} \bigl\vert x^{n}(t)-x^{n}(s) \bigr\vert \le& \biggl\vert \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,d\sigma \biggr\vert + \biggl\vert \int_{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert \\ &{}+\alpha \biggl\vert x^{n}(r)-\sup_{0\le\sigma\le s}x^{n} \biggl(\sigma-\frac {1}{n}\biggr) \biggr\vert \\ \le& \biggl\vert \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,d\sigma \biggr\vert + \biggl\vert \int _{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert \\ &{}+\alpha \bigl\vert x^{n}(r)-x^{n}(s) \bigr\vert \\ \le& \biggl\vert \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,d\sigma \biggr\vert + \biggl\vert \int _{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert \\ &{}+\alpha\sup_{s\le s'< t'\le t} \bigl\vert x^{n} \bigl(t'\bigr)-x^{n}\bigl(s'\bigr) \bigr\vert . \end{aligned}$$

We therefore have

$$\begin{aligned} \sup_{s\le s'< t'\le t} \bigl\vert x^{n}\bigl(t' \bigr)-x^{n}\bigl(s'\bigr) \bigr\vert \le& \frac{1}{1-\alpha} \biggl(\sup_{s\le s'< t'\le t} \biggl\vert \int _{s'}^{t'}f\biggl(\sigma,x^{n}\biggl( \sigma-\frac{1}{n}\biggr)\biggr)\,d\sigma \biggr\vert \\ &{} +\sup_{s\le s'< t'\le t} \biggl\vert \int_{s'}^{t'}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert \biggr). \end{aligned}$$
(2.15)

Hence,

$$\begin{aligned}& E\sup_{s\le s'< t'\le t} \bigl\vert x^{n}\bigl(t' \bigr)-x^{n}\bigl(s'\bigr) \bigr\vert ^{2} \\& \quad \le \frac{2}{(1-\alpha)^{2}} \biggl(E \biggl\vert \int_{s}^{t}f\biggl(\sigma,x^{n}\biggl( \sigma -\frac{1}{n}\biggr)\biggr)\,d\sigma \biggr\vert ^{2} \\& \qquad {}+E \biggl\vert \int_{s}^{t}g\biggl(\sigma,x^{n}\biggl( \sigma-\frac {1}{n}\biggr)\biggr)\,dw(\sigma) \biggr\vert ^{2} \biggr). \end{aligned}$$

Then Lemma 2.2 yields

$$\begin{aligned}& E\sup_{s\le s'< t'\le t} \bigl\vert x^{n}\bigl(t' \bigr)-x^{n}\bigl(s'\bigr) \bigr\vert ^{2} \\& \quad \le \frac{2}{(1-\alpha)^{2}} \biggl((t-s)E \int_{s}^{t} \biggl\vert f\biggl(\sigma ,x^{n}\biggl(\sigma-\frac{1}{n}\biggr)\biggr) \biggr\vert ^{2}\,d\sigma \\& \qquad {}+4E \int_{s}^{t} \biggl\vert g\biggl( \sigma,x^{n}\biggl(\sigma -\frac{1}{n}\biggr)\biggr) \biggr\vert ^{2}\,d\sigma \biggr) \\& \quad \le \frac{2(T+4)L}{(1-\alpha)^{2}} \int_{s}^{t}\biggl(1+E \biggl\vert x^{n} \biggl(\sigma-\frac {1}{n}\biggr) \biggr\vert ^{2}\biggr)\,d \sigma \\& \quad \le C_{2}(t-s), \end{aligned}$$

where \(C_{2}=\frac{2(T+4)L}{(1-\alpha)^{2}}(1+C_{1})\). The proof is therefore complete. □

Proof of Theorem 2.1

Firstly, we will show that the sequence \(\{x^{n}(t)\}\) is a Cauchy sequence in \({\mathcal {L}}^{2}([0,T];R)\). For any \(n>m\ge1\), it follows from (2.2) that

$$\begin{aligned} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert \le& \biggl\vert \int_{0}^{t}\biggl[f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,ds \biggr\vert \\ &{} + \biggl\vert \int_{0}^{t}\biggl[g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,dw(s) \biggr\vert \\ &{}+ \vert \alpha \vert \biggl\vert \sup_{0\le s\le t}x^{n} \biggl(s-\frac{1}{n}\biggr)-\sup_{0\le s\le t}x^{m} \biggl(s-\frac{1}{m}\biggr) \biggr\vert \\ &{} + \vert \beta \vert \biggl\vert \inf_{0\le s\le t}x^{n} \biggl(s-\frac{1}{n}\biggr)-\inf_{0\le s\le t}x^{m} \biggl(s-\frac{1}{m}\biggr) \biggr\vert . \end{aligned}$$

Noting that

$$ \biggl\vert \sup_{0\le s\le t}x^{n}\biggl(s- \frac{1}{n}\biggr)-\sup_{0\le s\le t}x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert \le\sup_{0\le s\le t} \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert $$

and

$$ \biggl\vert \inf_{0\le s\le t}x^{n}\biggl(s- \frac{1}{n}\biggr)-\inf_{0\le s\le t}x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert \le\sup_{0\le s\le t} \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert , $$

we obtain

$$\begin{aligned} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert \le& \biggl\vert \int_{0}^{t}\biggl[f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,ds \biggr\vert \\ &{} + \biggl\vert \int_{0}^{t}\biggl[g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,dw(s) \biggr\vert \\ &{}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)\sup _{0\le s\le t} \biggl\vert x^{n}\biggl(s-\frac{1}{n} \biggr)-x^{m}\biggl(s-\frac {1}{m}\biggr) \biggr\vert \\ \le& \biggl\vert \int_{0}^{t}\biggl[f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,ds \biggr\vert \\ & {}+ \biggl\vert \int_{0}^{t}\biggl[g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,dw(s) \biggr\vert \\ &{}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)\sup _{0\le s\le t} \biggl\vert x^{n}\biggl(s-\frac{1}{n} \biggr)-x^{m}\biggl(s-\frac {1}{n}\biggr) \biggr\vert \\ &{}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)\sup _{0\le s\le t} \biggl\vert x^{m}\biggl(s-\frac{1}{n} \biggr)-x^{m}\biggl(s-\frac {1}{m}\biggr) \biggr\vert . \end{aligned}$$

By the basic inequality and Assumption 2.3, we obtain that

$$ \begin{aligned}[b] &\bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\ &\quad \le 3 \biggl(E\sup_{0 \le t\le t_{1}} \biggl\vert \int_{0}^{t}\biggl[f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,ds \biggr\vert ^{2} \\ &\qquad {}+E\sup_{0 \le t\le t_{1}} \biggl\vert \int_{0}^{t}\biggl[g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,dw(s) \biggr\vert ^{2} \\ &\qquad {}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr). \end{aligned} $$
(2.16)

Then, using the Hölder inequality and the Burkholder-Davis-Gundy inequality again, we have

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \biggl\vert \int_{0}^{t}\biggl[f\biggl(s,x^{n} \biggl(s-\frac {1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac{1}{m}\biggr)\biggr)\biggr]\,ds \biggr\vert ^{2} \\& \quad \le TE \int_{0}^{t_{1}} \biggl\vert f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr) \biggr\vert ^{2}\,ds \\& \quad \le TkE \int_{0}^{t_{1}} \biggl\vert x^{n}\biggl(s- \frac{1}{n}\biggr)-x^{m}\biggl(s-\frac{1}{m}\biggr) \biggr\vert ^{2}\,ds \end{aligned}$$
(2.17)

and

$$\begin{aligned}& E\sup_{0 \le t\le t_{1}} \biggl\vert \int_{0}^{t}\biggl[g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr)\biggr]\,dw(s) \biggr\vert ^{2} \\& \quad \le 4E \int_{0}^{t_{1}} \biggl\vert g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr) \biggr\vert ^{2}\,ds \\& \quad \le 4kE \int_{0}^{t_{1}} \biggl\vert x^{n}\biggl(s- \frac{1}{n}\biggr)-x^{m}\biggl(s-\frac{1}{m}\biggr) \biggr\vert ^{2}\,ds. \end{aligned}$$
(2.18)

Substituting (2.17) and (2.18) into (2.16), one has

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \quad \le \frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl((4+T)kE \int _{0}^{t_{1}} \biggl\vert x^{n} \biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s-\frac{1}{m} \biggr) \biggr\vert ^{2}\,ds \\& \qquad {} +\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr) \\& \quad \le \frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl(2(4+T)kE \int _{0}^{t_{1}} \biggl\vert x^{n} \biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s-\frac{1}{n}\biggr) \biggr\vert ^{2}\,ds \\& \qquad {} +2(4+T)kE \int_{0}^{t_{1}} \biggl\vert x^{m}\biggl(s- \frac{1}{n}\biggr)-x^{m}\biggl(s-\frac {1}{m}\biggr) \biggr\vert ^{2}\,ds \\& \qquad {} +\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr). \end{aligned}$$

Then Lemma 2.3 yields

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \quad \le \frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl(2(4+T)kE \int _{0}^{t_{1}-\frac{1}{n}} \bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2}\,ds \\& \qquad {}+\bigl[2(4+T)kT+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}\bigr]C_{2}\biggl(\frac{1}{m}- \frac{1}{n}\biggr) \biggr) \\& \quad \le \frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl(2(4+T)k \int _{0}^{t_{1}}E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2}\,dt \\& \qquad{} +\bigl[2(4+T)kT+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}\bigr]C_{2}\biggl(\frac{1}{m}- \frac{1}{n}\biggr) \biggr). \end{aligned}$$

Hence,

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \le C_{3} \int_{0}^{T}\Bigl[E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2} \Bigr]\,dt+C_{4}\biggl(\frac{1}{m}-\frac{1}{n} \biggr), $$
(2.19)

where \(C_{3}=\frac{6(4+T)k}{(1-|\alpha|-|\beta|)^{2}}, C_{4}=\frac {3[2(4+T)kT+(|\alpha| +|\beta|)^{2}]C_{2}}{(1-|\alpha|-|\beta|)^{2}}\). By the Gronwall inequality, we have

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \le C_{4} e^{C_{3}T}\biggl( \frac{1}{m}-\frac{1}{n}\biggr), $$
(2.20)

which implies that

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2}\to0\quad \mbox{as } n,m \to \infty. $$

This shows that the sequence \(\{x^{n}(t)\}\) is a Cauchy sequence in \({\mathcal {L}}^{2}([0,T];R)\). Denote the limit by \(x(t)\). Letting \(m\to\infty\) in (2.20) yields

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x(t) \bigr\vert ^{2} \le C_{4} e^{C_{3}T}\frac{1}{n}. $$
(2.21)

Then the Borel-Cantelli lemma can be used to show that \(x^{n}(t)\) converges to \(x(t)\) uniformly on \([0,T]\) almost surely as \(n\to\infty\). Letting \(n\to\infty\) on both sides of (2.2), we obtain that \(x(t)\) is a solution of equation (2.1).

Now we show the uniqueness of the solution. Let \(x(t)\) and \(y(t)\) be any two solutions of equation (2.1). We can prove using the same procedure as (2.19) that

$$ E\sup_{0\le t\le t_{1}} \bigl\vert x(t)-y(t) \bigr\vert ^{2} \le C \int_{0}^{t_{1}}E\Bigl(\sup_{0\le s\le t} \bigl\vert x(s)-y(s) \bigr\vert ^{2}\Bigr)\,dt $$

for all \(t_{1}\in[0,T]\). The Gronwall inequality gives that

$$E\sup_{0 \le t\le T} \bigl\vert x(t)-y(t) \bigr\vert ^{2}=0, $$

i.e., for any \(t\in[0,T]\), \(x(t)\equiv y(t)\) a.s. This completes the proof. □

Remark 2.2

By (2.6), we conclude that the Carathéodory approximate solution converges to the true solution of equation (2.1) in the mean square sense, i.e., for any \(T>0\),

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x(t) \bigr\vert ^{2} \to0 \quad \mbox{as } n\to\infty. $$

In fact, the proof of the convergence of the Carathéodory approximation also provides an alternative procedure for establishing the existence and uniqueness of the solution to DPSDEs. In other words, the Carathéodory approximation scheme is applicable to a class of DPSDEs.
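A simple way to see this convergence in practice is a Monte Carlo check that reuses the `caratheodory_dpsde` sketch above: we compare \(x^{n}\) against a much finer approximation \(x^{N}\) driven by the same Brownian increments and watch the estimated mean-square error shrink as n grows. The coefficients below are illustrative examples satisfying Assumptions 2.1-2.3, and the observed decay is only a sanity check, not a verification of the exact \(C/n\) rate.

```python
import numpy as np

def ms_error(f, g, x0, alpha, beta, T, n, n_ref, dt, paths=200, seed=0):
    """Monte Carlo estimate of E sup_{0<=t<=T} |x^n(t) - x^{n_ref}(t)|^2,
    using x^{n_ref} (n_ref >> n) as a proxy for the true solution and the
    same Brownian increments for both approximations."""
    errs = []
    for p in range(paths):
        rng_a = np.random.default_rng(seed + p)
        rng_b = np.random.default_rng(seed + p)      # identical increments
        _, xa = caratheodory_dpsde(f, g, x0, alpha, beta, T, n, dt, rng_a)
        _, xb = caratheodory_dpsde(f, g, x0, alpha, beta, T, n_ref, dt, rng_b)
        errs.append(np.max(np.abs(xa - xb)) ** 2)
    return float(np.mean(errs))

# Illustrative Lipschitz coefficients with |alpha| + |beta| < 1.
f = lambda s, x: -0.5 * x
g = lambda s, x: 0.2 * np.sin(x)
for n in (4, 8, 16, 32):
    print(n, ms_error(f, g, 1.0, 0.3, 0.2, T=1.0, n=n, n_ref=512, dt=1.0 / 1024))
```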

3 Non-Lipschitz DPSDEs

In this section, we will replace the global Lipschitz condition (2.3) with a more general condition and show that the Carathéodory approximate solution still converges to the true solution of equation (2.1).

Assumption 3.1

For any \(x,y\in R\) and \(t\in[0,T]\), there exists a function \(k(\cdot)\) such that

$$ \bigl\vert f(t,x)-f(t,y) \bigr\vert \vee \bigl\vert g(t,x)-g(t,y) \bigr\vert \le k\bigl( \vert x-y \vert \bigr), $$
(3.1)

where \(k(u)\) is a concave non-decreasing continuous function such that \(k(0)=0\) and \(\int_{0^{+}}\frac{u}{k^{2}(u)}\,du=\infty\).

Remark 3.1

Since \(k(\cdot)\) is concave and \(k(0)=0\), one can find a pair of positive constants a and b such that

$$k(u)\le a+bu \quad \mbox{for } u\ge0. $$
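Indeed, the supporting line of the concave function \(k(\cdot)\) at any fixed point, say \(u_{0}=1\), gives an explicit choice of such constants:

$$k(u)\le k(1)+k'_{+}(1) (u-1)\le k(1)+k'_{+}(1)u, \quad u\ge0, $$

so one may take, for instance, \(a=k(1)\) and \(b=k'_{+}(1)\) (replacing either constant by any positive number if it happens to vanish).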

Theorem 3.1

Let Assumptions 3.1, 2.2 and 2.3 hold. Then there exists a unique \(\mathcal{F}_{t}\)-adapted solution \(\{x(t)\}_{t\ge0}\) to equation (2.1). Moreover, for any \(T>0\),

$$ \lim_{n\to\infty}E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x(t) \bigr\vert ^{2} =0. $$
(3.2)

To prove Theorem 3.1, we will need the following Bihari inequality.

Lemma 3.1

(Bihari’s inequality [22])

Let \(k: R_{+} \to R_{+}\) be a continuous, non-decreasing function satisfying \(k(0)=0\) and \(\int_{0^{+}} \frac{ds}{k(s)}=+\infty\). Let \(u(\cdot)\) be a Borel measurable bounded non-negative function defined on \([0,T ]\) satisfying

$$u(t)\le u_{0}+ \int_{0}^{t} v(s)k\bigl(u(s)\bigr)\,ds,\quad t \in[0,T], $$

where \(u_{0}>0\) and \(v(\cdot)\) is a non-negative integrable function on \([0,T]\). Then we have

$$u(t)\le G^{-1}\biggl(G(u_{0})+ \int_{0}^{t} v(s)\,ds\biggr), $$

where \(G(t)=\int_{t_{0}}^{t}\frac{du}{k(u)}\) is well defined for some \(t_{0}>0\), and \(G^{-1}\) is the inverse function of G.

In particular, if \(u_{0}=0\), then \(u(t)=0\) for all \(t\in[0,T]\).
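For orientation, Gronwall's inequality (Lemma 2.1) is recovered as the special case \(k(u)=u\): then

$$G(t)= \int_{t_{0}}^{t}\frac{du}{u}=\log\frac{t}{t_{0}}, \qquad G^{-1}(y)=t_{0}e^{y}, $$

so the Bihari bound becomes \(u(t)\le G^{-1}(G(u_{0})+\int_{0}^{t}v(s)\,ds)=u_{0}e^{\int_{0}^{t}v(s)\,ds}\).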

Lemma 3.2

Under Assumptions 3.1, 2.2 and 2.3, for all \(n\ge1\),

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t) \bigr\vert ^{2}\le\bar{C}_{1}, $$
(3.3)

where \(\bar{C}_{1}\) is a positive constant.

Proof

By the Hölder inequality and the Burkholder-Davis-Gundy inequality, it follows from (2.8) that

$$\begin{aligned}& \bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \\& \quad \le 3 \biggl(\bigl(1+ \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E \vert x_{0} \vert ^{2}+2TE \int_{0}^{t_{1}} \biggl\vert f\biggl(s,x^{n} \biggl(s-\frac {1}{n}\biggr)\biggr)-f(s,0) \biggr\vert ^{2}\,ds \\& \qquad {}+8E \int_{0}^{t_{1}} \biggl\vert g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g(s,0) \biggr\vert ^{2}\,ds \\& \qquad {}+2TE \int_{0}^{t_{1}} \bigl\vert f(s,0) \bigr\vert ^{2}\,ds+8E \int_{0}^{t_{1}} \bigl\vert g(s,0) \bigr\vert ^{2}\,ds \biggr). \end{aligned}$$
(3.4)

By Assumptions 2.2 and 3.1, we have

$$\begin{aligned}& \bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \\& \quad \le 3 \biggl(\bigl(1+ \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E \vert x_{0} \vert ^{2} \\& \qquad {}+2(T+4)E \int_{0}^{t_{1}}k^{2}\biggl( \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr) \biggr\vert \biggr)\,ds +2(T+4)T\bar{k} \biggr). \end{aligned}$$
(3.5)

Then the Jensen inequality implies that

$$\begin{aligned}& \bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \\& \quad \le 3 \biggl(\bigl(1+ \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E \vert x_{0} \vert ^{2} \\& \qquad {}+2(T+4) \int_{0}^{t_{1}}k^{2}\biggl(\biggl(E \biggl\vert x^{n}\biggl(s-\frac {1}{n}\biggr) \biggr\vert ^{2}\biggr)^{\frac{1}{2}}\biggr)\,ds+2(T+4)T\bar{k} \biggr). \end{aligned}$$

Letting \(\rho(x)=k^{2}(x^{\frac{1}{2}})\), it follows that

$$\begin{aligned} E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \le& \frac{3(1+ \vert \alpha \vert + \vert \beta \vert )^{2}E \vert x_{0} \vert ^{2}+6(T+4)T\bar{k}}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \\ &{}+\frac{6(T+4)}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \int_{0}^{t_{1}}\rho \biggl(E \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr) \biggr\vert ^{2} \biggr)\,ds. \end{aligned}$$
(3.6)

Since \(\frac{k(x)}{x}\) and \(k'_{+}(x)\) are non-negative, non-increasing functions, we have that

$$\rho'_{+}(x)=x^{-\frac{1}{2}}k\bigl(x^{\frac{1}{2}}\bigr)k'_{+}\bigl(x^{\frac{1}{2}}\bigr) $$

is a non-negative, non-increasing function, which implies that ρ is a non-negative, non-decreasing concave function. Since \(k(0)=0\), we have \(\rho(0)=0\), and there exists a pair of positive constants a and b such that

$$\rho(u)\le a+bu \quad \mbox{for } u\ge0. $$

We therefore have

$$\begin{aligned} E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \le& \frac{3(1+ \vert \alpha \vert + \vert \beta \vert )^{2}E \vert x_{0} \vert ^{2}+6(T+4)T(a+\bar{k})}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \\ &{}+\frac{6(T+4)b}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \int _{0}^{t_{1}}E \biggl\vert x^{n} \biggl(s-\frac{1}{n}\biggr) \biggr\vert ^{2}\,ds \\ \le& \frac{[3(1+ \vert \alpha \vert + \vert \beta \vert )^{2}+\frac{6(T+4)b}{n}]E \vert x_{0} \vert ^{2}+6(T+4)T(a+\bar {k})}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \\ &{}+\frac{6(T+4)b}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \int_{0}^{t_{1}}E\sup_{0\le s\le t} \bigl\vert x^{n}(s) \bigr\vert ^{2}\,dt. \end{aligned}$$
(3.7)

Set

$$r(t)=\frac{[3(1+|\alpha| +|\beta|)^{2}+\frac{6(T+4)b}{n}]E|x_{0}|^{2}+6(T+4)T(a+\bar {k})}{(1-|\alpha|-|\beta|)^{2}}e^{\frac{6(T+4)b}{(1-|\alpha|-|\beta|)^{2}}t}, $$

then \(r(\cdot)\) is the solution to the following integral equation:

$$\begin{aligned} r(t_{1}) =& \frac{[3(1+|\alpha| +|\beta|)^{2}+\frac{6(T+4)b}{n}]E|x_{0}|^{2}+6(T+4)T(a+\bar {k})}{(1-|\alpha|-|\beta|)^{2}} \\ &{}+\frac{6(T+4)b}{(1-|\alpha|-|\beta |)^{2}} \int_{0}^{t_{1}}r(t)\,dt. \end{aligned}$$

Comparing (3.7) with this equation, it is easy to verify that, for each \(n\ge1\),

$$ E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t) \bigr\vert ^{2} \le r(t_{1}). $$

Noting that \(r(t_{1})\) is continuous on \([0,T]\) and bounded uniformly in n, one obtains

$$ E\sup_{0\le t\le T} \bigl\vert x^{n}(t) \bigr\vert ^{2} \le\bar{C}_{1}< +\infty $$

for any \(n\ge1\). This completes the proof of Lemma 3.2. □

Lemma 3.3

For all \(n\ge1\) and \(0\le s< t\le T\),

$$ E \bigl\vert x^{n}(t)-x^{n}(s) \bigr\vert ^{2}\le\bar{C}_{2}(t-s), $$
(3.8)

where \(\bar{C}_{2}\) is a positive constant.

The proof is similar to that of Lemma 2.3 and is therefore omitted.

Now, let us apply the above lemmas to prove Theorem 3.1.

Proof of Theorem 3.1

By the Hölder inequality and the Burkholder-Davis-Gundy inequality, it follows from (2.16) that

$$\begin{aligned}& \bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \quad \le 3 \biggl(TE \int_{0}^{t_{1}} \biggl\vert f\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-f\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr) \biggr\vert ^{2}\,ds \\& \qquad {}+4E \int_{0}^{t_{1}} \biggl\vert g\biggl(s,x^{n} \biggl(s-\frac{1}{n}\biggr)\biggr)-g\biggl(s,x^{m}\biggl(s- \frac {1}{m}\biggr)\biggr) \biggr\vert ^{2}\,ds \\& \qquad {}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr). \end{aligned}$$

By Assumption 3.1 and the Jensen inequality, we have

$$\begin{aligned}& \bigl(1- \vert \alpha \vert - \vert \beta \vert \bigr)^{2}E \sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \le 3 \biggl((T+4)E \int_{0}^{t_{1}}k^{2}\biggl( \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert \biggr)\,ds \\& \qquad {}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr) \\& \quad \le 3 \biggl((T+4) \int_{0}^{t_{1}}k^{2}\biggl(\biggl(E \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert ^{2}\biggr)^{\frac{1}{2}} \biggr)\,ds \\& \qquad {}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr). \end{aligned}$$
(3.9)

Similar to (3.6), one obtains

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \quad \le \frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl((T+4) \int_{0}^{t_{1}}\rho \biggl(E \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac{1}{m}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \qquad {} +\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr). \end{aligned}$$
(3.10)

Since \(\rho(\cdot)\) is concave with \(\rho(0)=0\), it is subadditive, that is, \(\rho(a+b)\le\rho(a)+\rho(b)\). Then Lemma 3.3 yields

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \quad \le \frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl(2(T+4) \int_{0}^{t_{1}}\rho \biggl(E \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac{1}{n}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \qquad {}+2(T+4) \int_{0}^{t_{1}}\rho\biggl(E \biggl\vert x^{m}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac {1}{m}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \qquad {}+\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2}E\sup_{0\le t\le t_{1}} \biggl\vert x^{m} \biggl(t-\frac{1}{n}\biggr)-x^{m}\biggl(t-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr) \\& \quad \le \frac{6(T+4)}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggl( \int_{0}^{t_{1}}\rho \biggl(E \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s- \frac{1}{n}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \qquad {}+ \int_{0}^{t_{1}}\rho\biggl(\bar{C}_{2}\biggl( \frac{1}{m}-\frac{1}{n}\biggr)\biggr)\,ds \biggr)+\frac{3( \vert \alpha \vert + \vert \beta \vert )^{2}}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \bar{C}_{2}\biggl(\frac{1}{m}-\frac{1}{n} \biggr), \end{aligned}$$
(3.11)

where

$$\begin{aligned}& \int_{0}^{t_{1}}\rho\biggl(E \biggl\vert x^{n}\biggl(s-\frac{1}{n}\biggr)-x^{m}\biggl(s-\frac {1}{n}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \quad \le \int_{0}^{t_{1}}\rho\biggl(E\sup_{0\le\sigma\le s} \biggl\vert x^{n}\biggl(\sigma-\frac {1}{n}\biggr)-x^{m} \biggl(\sigma-\frac{1}{n}\biggr) \biggr\vert ^{2}\biggr)\,ds \\& \quad \le \int_{0}^{t_{1}}\rho\Bigl(E\sup_{-\frac{1}{n}\le v\le s-\frac {1}{n}} \bigl\vert x^{n}(v)-x^{m}(v) \bigr\vert ^{2} \Bigr)\,ds \\& \quad \le \int_{0}^{t_{1}}\rho \Bigl(E\sup_{-\frac{1}{n}\le v\le 0} \bigl\vert x^{n}(v)-x^{m}(v) \bigr\vert ^{2}+E \sup_{0\le v\le s} \bigl\vert x^{n}(v)-x^{m}(v) \bigr\vert ^{2} \Bigr)\,ds \\& \quad \le \int_{0}^{t_{1}}\rho \Bigl(E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2} \Bigr)\,dt, \end{aligned}$$
(3.12)

where the last step uses the fact that \(x^{n}(v)=x^{m}(v)=x_{0}\) for \(v\in[-\frac{1}{n},0]\). Inserting (3.12) into (3.11), we obtain that

$$\begin{aligned}& E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \\& \quad \le\frac{6(T+4)}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \int_{0}^{t_{1}}\rho\Bigl(E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2} \Bigr)\,dt \\& \qquad {}+\frac{3}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}C(m,n), \end{aligned}$$
(3.13)

where

$$ C(m,n) =2(T+4)T\rho\biggl(\bar{C}_{2}\biggl(\frac{1}{m}-\frac{1}{n}\biggr)\biggr) +\bigl( \vert \alpha \vert + \vert \beta \vert \bigr)^{2} \bar{C}_{2}\biggl(\frac{1}{m}-\frac{1}{n}\biggr). $$

Then the Bihari inequality gives that

$$ E\sup_{0\le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \le G^{-1} \biggl(G\biggl( \frac {3C(m,n)}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}\biggr)+\frac{6(T+4)T}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}} \biggr), $$

where \(G(t)=\int_{1}^{t}\frac{ds}{\rho(s)}\). Obviously, G is a strictly increasing function, so it has a strictly increasing inverse \(G^{-1}\) with \(G^{-1}(-\infty)=0\). Note that \(\frac{3C(m,n)}{(1-|\alpha|-|\beta|)^{2}}\to0\) as \(m,n\to\infty\). Recalling that \(\int_{0^{+}}\frac{ds}{\rho(s)}=2\int_{0^{+}}\frac{u}{k^{2}(u)}\,du=\infty\), we have

$$ G\biggl(\frac{3C(m,n)}{(1-|\alpha|-|\beta|)^{2}}\biggr)\to-\infty $$

and

$$ G^{-1}\biggl[G\biggl(\frac{3C(m,n)}{(1-|\alpha|-|\beta|)^{2}}\biggr)+\frac {6(T+4)T}{(1-|\alpha|-|\beta|)^{2}} \biggr]\to 0. $$

We therefore have

$$\begin{aligned}& \limsup_{n,m \to\infty}E\Bigl(\sup_{0 \le t\le t_{1}} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2}\Bigr) \\& \quad \le \limsup_{n,m \to\infty}G^{-1}\biggl[G\biggl( \frac{3C(m,n)}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}\biggr)+\frac{6(T+4)T}{(1- \vert \alpha \vert - \vert \beta \vert )^{2}}\biggr]=0, \end{aligned}$$
(3.14)

which implies that \(\{x^{n}(t)\}_{n\ge1}\) is a Cauchy sequence in \({\mathcal {L}}^{2}([0,T];R)\). Denote its limit by \(x(t)\). Letting \(m\to\infty\) in (3.14) yields

$$ \lim_{n\to\infty}E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x(t) \bigr\vert ^{2} =0. $$

Similarly to (3.13) and (3.14), we can show that \(x(t)\) is the unique solution of equation (2.1) under the non-Lipschitz condition. The proof is complete. □

Remark 3.2

To see the generality of our results, let us give a few examples of the function \(k(\cdot)\). Let \(\delta\in(0,1)\) be sufficiently small and define

$$ k_{1}(u)= \textstyle\begin{cases} 0, & u=0, \\ u\sqrt{\log(u^{-1})}, & u\in(0,\delta], \\ \delta\sqrt{\log{(\delta^{-1})}}, & u\in[\delta ,+\infty), \end{cases} $$

and

$$ k_{2}(u)= \textstyle\begin{cases} 0, &u=0, \\ u\sqrt{\log(u^{-1})\log\log(u^{-1})}, & u\in(0,\delta], \\ \delta\sqrt{\log{(\delta^{-1})}\log\log{(\delta^{-1})}},& u\in[\delta,+\infty). \end{cases} $$

Both are concave non-decreasing functions satisfying \(\int_{0^{+}}\frac{u}{k_{i}^{2}(u)}\,du=\infty\), \(i=1,2\).
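For \(k_{1}\), for instance, the divergence of this integral can be checked directly with the substitution \(v=\log(u^{-1})\):

$$ \int_{0}^{\delta}\frac{u}{k_{1}^{2}(u)}\,du= \int_{0}^{\delta}\frac{du}{u\log(u^{-1})}= \int_{\log(\delta^{-1})}^{\infty}\frac{dv}{v}=\infty; $$

with the further substitution \(w=\log\log(u^{-1})\), the same computation gives \(\int_{0}^{\delta}\frac{u}{k_{2}^{2}(u)}\,du=\int_{\log\log(\delta^{-1})}^{\infty}\frac{dw}{w}=\infty\).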

Remark 3.3

In particular, if we let \(k(u)=ku\), \(u\ge0\), we see that the Lipschitz condition (2.3) is a special case of our proposed condition (3.1). In other words, we obtain a more general result than Theorem 2.1.
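Indeed, for a linear modulus \(k(u)=cu\) with \(c>0\) (taking \(c=\sqrt{k}\) recovers (2.3) exactly), the Osgood-type condition in Assumption 3.1 holds trivially:

$$ \int_{0^{+}}\frac{u}{k^{2}(u)}\,du= \frac{1}{c^{2}} \int_{0^{+}}\frac{du}{u}=\infty. $$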

Remark 3.4

In fact, the theory developed here can also be applied to study doubly perturbed stochastic differential equations with jumps (DPSDEwJs) and doubly perturbed stochastic differential equations with Markovian switching (DPSDEwMS). If \(\alpha=\beta=0\), then DPSDEwJs and DPSDEwMS reduce to SDEs with jumps and SDEs with Markovian switching, which were investigated in [23–34]. Similarly, one can construct the Carathéodory approximate solution and show that it converges to the solution of SDEs with jumps and of SDEs with Markovian switching under our assumptions.