1 Introduction

In many physical phenomena of a random nature, the future state of a system depends not only on the current state but also on the whole past history of the system over a finite time interval; a mathematical model that actually describes such a system therefore leads to a stochastic delay differential equation (SDDE) rather than a stochastic ordinary differential equation (SODE). In this paper, an autonomous d-dimensional Itô stochastic delay differential equation is considered:

$$ \textstyle\begin{cases} dX(t) = a (X(t), X(t - \tau ) ) \,dt + b (X(t), X(t - \tau ) ) \,dW(t) ,\quad t \in [t_{0}, T], \\ X(t) = \eta (t),\quad t \in [t_{0} - r, t_{0}], \end{cases} $$
(1.1)

where r is a positive constant and τ is called the lag process. The drift and diffusion coefficients \(a, b^{j}: \mathbb{R}^{d} \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\) for \(j = 1,\ldots, m\) are Borel-measurable functions, and η, with values in \(\mathbb{R}^{d}\), is called the initial process. Since the closed form of the solutions, or of their distributions, is inaccessible for most such models, which arise in diverse areas of application, numerical methods are essential: they play an important role in obtaining a realistic view of the solution behaviour of these equations. In recent years, some authors have dealt with the numerical analysis of SDDEs whose time lag is discrete, see, e.g. [3, 12, 16]. However, the delay may change dynamically and may even be disturbed by ambient noise. If the delay function depends only on time, then it is called time-dependent, see, e.g. [1, 6, 8, 22]. If, in addition to time, it depends on the solution process, then it is called state-dependent. As far as the author knows, only a few numerical schemes have been proposed for SDDEs containing the third type of lag function, see, e.g. [13, 14]. The authors of [13, 14] considered the continuous-time \(\operatorname{GARCH}(1,1)\) model for stochastic volatility involving a state-dependent delayed response and applied the Euler–Maruyama discrete-time approximation, in the strong convergence sense, to simulate it. On the other hand, some papers extend various types of stochastic functional (evolution, fractional, neutral) differential equations with state-dependent delay and study theoretical aspects, particularly the existence and uniqueness of (mild) solutions and controllability results, see, e.g. [2, 19, 31, 32]. In the following, a new interpolation, whose computational cost is not too high, is presented. The main contribution of this paper is to investigate the numerical solution of Eq. (1.1), under sufficient conditions which will be stated later, with three cases of lag process as follows:

  (L1) τ is a constant,

  (L2) τ is time-dependent as \(\tau (t)\),

  (L3) τ is state-dependent as \(\tau (t, X(t))\).

Note that in case (L2), τ can be a continuous-time random process or a deterministic function. Here, we only consider the deterministic case. Since the main task in every integration formula for SDDEs is to provide an interpolation at non-mesh points, a new split-step scheme will be suitably extended over the whole interval \([t_{0}, T]\). The authors in [28] studied the strong convergence and the mean-square stability of the split-step backward Euler method for linear SDDEs with constant lag and chose the stepsize so that the lag is an integer multiple of it. This type of stepsize selection was also adopted for a semi-implicit split-step θ-Milstein method in [7]. Wang et al. in [22], however, proposed a new improved split-step backward Euler method for SDDEs with time-dependent delay, where a piecewise linear interpolation is used to approximate the solution at the delayed points. Moreover, in contrast to [7, 28], the restriction on the stepsize is removed in [22] and an unconditional stability property is obtained as well. In our proposed method, this restriction on the stepsize is dropped, too. Among further papers, we mention [11, 17], which investigate the behaviour of split-type methods for stochastic differential equations, and [26], which studies the strong convergence of the split-step θ-method for a class of neutral stochastic delay differential equations.

As we know, the (numerical) stability concept is a powerful tool for measuring the sensitivity of the (difference) equation to perturbations. For instance, disturbances which occur during the mathematical modelling, or round-off errors made in the implementation of the numerical method, may lead to fundamental changes. Undoubtedly, the numerical stability theory of stochastic differential equations is an inspiration for that of SDDEs. Among the most prominent papers scrutinising numerical stability for stochastic differential equations, the reader can refer to [9, 20] for a review. Mao in [18] developed pth moment and almost sure exponential stability of stochastic functional differential equations by means of Razumikhin-type theorems. Also, [25] examined almost sure exponential stability of the Euler–Maruyama scheme for such equations. Furthermore, the authors in [4] employed Halanay-type theory as the main tool to analyse pth moment exponential stability of the solution and of an Euler-type method. In this work, we study the mean-square stability of SDDE (1.1) and also that of the proposed scheme. Note that the case of state-dependent delay is essentially new. Some papers help us to accomplish our aim; see [18, 27, 30]. Also, the stability of such a class of SDDEs under weaker conditions, such as one-sided Lipschitz and locally Lipschitz conditions, which has been studied for SDEs and SDDEs with discrete or time-dependent delay [5, 10, 24, 29, 30], could be extended in the future. This paper consists of two parts: the first deals with convergence and the second with stability, both examined in the mean-square sense.

This paper is organised as follows. Section 2 is concerned with the notation, the assumptions and a numerical scheme for the underlying problem. In Sect. 3, the convergence of the scheme in the mean-square sense is derived. In Sect. 4, we define the stability concept for the problem and for the numerical solution, and the mean-square stability of the scheme is established. Finally, some test problems are presented in Sect. 5.

2 Results formulation

Let \((\varOmega , {\mathcal{F}}, {\{\mathcal{F}_{t}\}}_{t \geq t_{0}}, {P})\) be a complete probability space with the filtration \({\{ \mathcal{F}_{t}\}}_{t \geq t_{0}}\) satisfying the usual conditions. Moreover, \(W =(W_{t})_{t \geq t_{0}}\) is an m-dimensional Brownian motion on this probability space. Let \(D = C([t_{0} - r, t_{0}], \mathbb {R}^{d})\) be the Banach space of all continuous functions from \([t_{0} - r, t_{0}]\) to \(\mathbb {R}^{d}\). Also, we write \(\mathcal{L}^{p}( \varOmega , D)\) for the space of all \(\mathcal{F}_{t_{0}}\)-measurable and integrable initial processes \(\eta : \varOmega \rightarrow D\), equipped with the seminorm

$$ \Vert \eta \Vert _{{\mathcal{L}^{p}}(D)} = \biggl( \int _{\varOmega } \Vert \eta \Vert ^{p} _{D} \,dP \biggr)^{1/p}, $$

where the supremum norm \(\|\cdot\|^{p}_{D}\) for \(p \geq 1\) is defined as

$$ \Vert \eta \Vert ^{p}_{D} = \sup_{s \in [t_{0} - r, t_{0}]} \bigl\vert \eta (s) \bigr\vert ^{p}, $$

where \(|\cdot|\) is the Euclidean norm in \(\mathbb{R}^{k}\) for \(k \geq 1\). In the sequel, we make the following assumptions on the problem.

Assumption 1

The functions \(a: (\mathbb{R}^{d} )^{2} \rightarrow \mathbb{R}^{d}\) and \(b: (\mathbb{R}^{d} )^{2} \rightarrow \mathbb{R}^{d\times m}\) are globally Lipschitz continuous, i.e. there is a positive constant \(K_{1}\) such that

$$ \bigl\vert a(X_{1}, Y_{1}) - a(X_{2}, Y_{2}) \bigr\vert ^{2} \vee \bigl\vert b(X_{1}, Y_{1}) - b(X _{2}, Y_{2}) \bigr\vert ^{2} \leq K_{1} \bigl( \vert X_{1} - X_{2} \vert ^{2} + \vert Y_{1} - Y_{2} \vert ^{2} \bigr) $$

for all \(X_{1}, X_{2}, Y_{1}, Y_{2} \in \mathbb{R}^{d}\).

Assumption 2

In case (L2), let \(\tau : [t_{0}, T] \rightarrow (0, r]\) be Lipschitz continuous, i.e.

$$ \bigl\vert \tau (t)- \tau (s) \bigr\vert \leq K_{2} \vert t - s \vert ,\quad t, s \in [t_{0}, T], $$

where \(K_{2}\) is a positive constant. In case (L3), let \(\tau : [t_{0}, T] \times \mathbb{R}^{d} \rightarrow (0, r]\) be Lipschitz continuous in both arguments, i.e. there exist two positive constants \(K_{2}\) and \(K_{3}\) such that

$$\begin{aligned}& \bigl\vert \tau (t, X)- \tau (s, X) \bigr\vert \leq K_{2} \vert t - s \vert , \\& \bigl\vert \tau (t, X_{1}) - \tau (t, X_{2}) \bigr\vert \leq K_{3} \vert X_{1} - X_{2} \vert \end{aligned}$$

for all \(t, s \in [t_{0}, T]\) and \(X, X_{1}, X_{2} \in \mathbb{R}^{d}\). Note that in each of the two cases above, the positivity of τ guarantees measurability and hence the existence of the Itô integral.

Assumption 3

Given two adapted integrable stochastic processes \(\rho _{1}, \rho _{2} : \varOmega \times [t_{0} - r, T] \rightarrow [t_{0} - r, t_{0}]\), we have

$$ \mathbf{E} \bigl\vert \eta \bigl(\rho _{1}(t)\bigr) - \eta \bigl( \rho _{2}(s)\bigr) \bigr\vert ^{2} \leq K_{4} \mathbf{E} \bigl\vert \rho _{1}(t) - \rho _{2}(s) \bigr\vert ,\quad t, s \in [t_{0} - r, T], $$

where \(K_{4}\) is a positive constant.

Theorem 2.1

Suppose that \(\|\eta \|_{{\mathcal{L}^{2}}(D)} < \infty \). Then, under Assumptions 1–3, SDDE (1.1) has a unique strong solution X such that

$$ \Vert X \Vert _{\mathcal{L}^{2}(\bar{D})} \leq H, $$

where \(\bar{D} = C([t_{0} - r, T], \mathbb{R}^{d})\) is the Banach space of all continuous sample paths with values in \(\mathbb{R}^{d}\) and H is a positive constant. Moreover, there exists a positive constant \(K_{5}\) such that for every \(t, s \in [t_{0}, T]\) we have

$$ \mathbf{E} \bigl\vert X\bigl(\rho _{1}(t)\bigr) - X \bigl(\rho _{2}(s)\bigr) \bigr\vert ^{2} \leq K _{5} \mathbf{E} \bigl\vert \rho _{1}(t) - \rho _{2}(s) \bigr\vert , $$
(2.1)

where \(\rho _{1}, \rho _{2}: \varOmega \times [t_{0}, T] \rightarrow [t_{0}, T]\) are two adapted integrable stochastic processes.

The proof of the theorem above is deferred to the Appendix.

2.1 Underlying scheme

We now focus on the main intent, namely developing a new continuous split-step scheme based on the Euler–Maruyama method for SDDE (1.1). To this end, consider a non-equidistant discretisation of the interval \(I = [t_{0}, T]\) as follows:

$$ t_{0} < t_{1} < \cdots < t_{N} = T. $$

The approximation \(\widetilde{X}(t)\) for SDDE (1.1) is defined recursively through the underlying scheme

$$ \textstyle\begin{cases} \widetilde{X}(t_{0}) = X(t_{0}), \\ X^{*}(t_{k}) = \widetilde{X}(t_{k}) + a (X^{*}(t_{k}), \widetilde{Z}(t_{k}) )\Delta t_{k}, \quad k = 0,1,\ldots,N - 1, \\ \widetilde{X}(t_{k+1})= X^{*}(t_{k}) + b (X^{*}(t_{k}), \widetilde{Z}(t_{k}) )\Delta W_{k}, \quad k = 0,1,\ldots,N - 1, \end{cases} $$
(2.2)

where if \(t_{k} - \tau _{k} \leq t_{0}\), then

$$ \widetilde{Z}(t_{k})= \eta (t_{k} - \tau _{k} ), $$
(2.3)

otherwise if \(t_{k} - \tau _{k} \in [t_{i}, t_{i+1})\), then

$$ \textstyle\begin{cases} X^{*}(t_{i}) = \widetilde{X}(t_{i}) + a (X^{*}(t_{i}), \widetilde{Z}(t_{i}) ) ((t_{k} - \tau _{k}) - t_{i} ), \\ \widetilde{Z}(t_{k})= X^{*}(t_{i}) + b (X^{*}(t_{i}), \widetilde{Z}(t_{i}) ) (W(t_{k} - \tau _{k}) - W(t_{i}) ), \end{cases} $$
(2.4)

where \(\Delta t_{k} = t_{k+1} - t_{k}\) and the \(\Delta W_{k} = W(t_{k+1}) - W(t_{k})\) are independent Gaussian random variables with mean zero and variance \(t_{k+1} - t_{k}\), which can be produced by a pseudo-random number generator. Note that \(\tau _{k}\) in (2.3)–(2.4) is equal to τ and \(\tau (t_{k})\) in cases (L1) and (L2), respectively, while in case (L3), \(\tau _{k} = \tau (t_{k}, \widetilde{X}(t_{k}))\). One can simulate the value \(W(t_{k} - \tau _{k}) - W(t_{i})\) by means of a Brownian bridge so as to remain on the correct Brownian path [15]. We can extend the scheme to the following continuous approximation on the whole of \([t_{0}, T]\):

$$ \textstyle\begin{cases} X^{*}(t_{i}) = \widetilde{X}(t_{i}) + a (X^{*}(t_{i}), \widetilde{Z}(t_{i}) ) (t - t_{i} ), \\ \widetilde{X}_{i}(t)= X^{*}(t_{i}) + b (X^{*}(t_{i}), \widetilde{Z}(t_{i}) ) (W(t) - W(t_{i}) ), \end{cases} $$
(2.5)

where \(t \in [t_{i}, t_{i+1})\) for \(i=0,\ldots,N-1\). Furthermore, \(\widetilde{Z}(t_{i})\) is obtained as in (2.3)–(2.4). We can present a continuous version of the approximate solution as follows:

$$ \widetilde{X}(t) = \sum_{i=0}^{N - 1} \widetilde{X} _{i}(t)\mathbf{{1}}_{[t_{i}, t_{i+1})}(t) + \widetilde{X}(t_{N}) \mathbf{{1}}_{\{t = t_{N}\}}, $$
(2.6)

where 1 denotes the indicator function.
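To make the recursion concrete, the following is a minimal Python sketch of scheme (2.2)–(2.3) for the scalar constant-delay case (L1), under the simplifying assumptions that the grid is uniform and τ is an integer multiple of h, so that every delayed point is itself a mesh point and the interpolation (2.4) is not needed; the drift-implicit half-step is applied on each subinterval (in agreement with the linear specialisation (4.5) below) and solved by plain fixed-point iteration. All names and tolerances here are ours, not the paper's.

```python
import numpy as np

def split_step_em_constant_delay(a, b, eta, tau, t0, T, h, rng):
    """Sketch of scheme (2.2)-(2.3), case (L1): scalar SDDE on a
    uniform grid, assuming tau is an integer multiple of h."""
    N = int(round((T - t0) / h))
    m = int(round(tau / h))                   # delay measured in steps
    t = t0 + h * np.arange(N + 1)
    X = np.empty(N + 1)
    X[0] = eta(t0)
    dW = np.sqrt(h) * rng.standard_normal(N)  # Brownian increments
    for k in range(N):
        # delayed value (2.3): initial process before t0, past iterate after
        Z = eta(t[k] - tau) if k < m else X[k - m]
        # drift-implicit half-step: solve Xs = X[k] + a(Xs, Z) * h
        Xs = X[k]
        for _ in range(50):                   # plain fixed-point iteration
            Xs_new = X[k] + a(Xs, Z) * h
            if abs(Xs_new - Xs) < 1e-12:
                break
            Xs = Xs_new
        # explicit diffusion half-step
        X[k + 1] = Xs + b(Xs, Z) * dW[k]
    return t, X
```

For the time- and state-dependent cases, one would instead locate the subinterval \([t_{i}, t_{i+1})\) containing \(t_{k} - \tau _{k}\), apply (2.4), and sample \(W(t_{k} - \tau _{k}) - W(t_{i})\) from a Brownian bridge, as discussed in Sect. 5.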

Proposition 2.2

Consider the approximation processes \({X}^{*}\), \(\widetilde{Z}\) and \(\widetilde{X}\), which are computed by (2.2), (2.3) and (2.4). Assume that \(a(0,0) =0\) and \(b(0,0) = 0\); then there exists a positive constant \(\bar{H}\) such that

$$ \bigl\Vert {X}^{*} \bigr\Vert _{\mathcal{L}^{2}(\bar{D})} \leq \bar{H},\qquad \Vert \widetilde{Z} \Vert _{\mathcal{L}^{2}(\bar{D})} \leq \bar{H},\qquad \Vert \widetilde{X} \Vert _{\mathcal{L}^{2}(\bar{D})} \leq \bar{H}, $$
(2.7)

where \(\bar{D}\) was specified in Theorem 2.1.

In the sequel, for simplicity, we take \(\bar{H}\) equal to H from Theorem 2.1. Under these conditions, we establish the strong convergence of scheme (2.5)–(2.6) over \([t_{0}, T]\) in the next section.

3 Convergence

Motivated to analyse the behaviour of scheme (2.5)–(2.6), we naturally concentrate on the convergence concept. To this end, mean-square convergence is established by the following theorem.

Theorem 3.1

Suppose that Assumptions 1–3 hold and \(\|\eta \|_{ {\mathcal{L}^{2}}(D)} < \infty \). Moreover, we assume that \(a(0,0) = 0\) and \(b(0,0) = 0\). If we apply scheme (2.5)–(2.6) to SDDE (1.1), then

$$ \Bigl(\mathbf{E} \Bigl(\sup_{t_{0} \leq t \leq T} \bigl\vert e(t) \bigr\vert ^{2}\Bigr) \Bigr)^{1/2} = \mathcal{O}\bigl(h ^{\gamma }\bigr), $$

where \(e(t) = X(t) - \widetilde{X}(t)\) and \(h = \max_{i =0,\ldots, N - 1}\Delta t_{i}\). Moreover, γ is equal to \(1/2\) in cases (L1) and (L2) and to \({1/4}\) in case (L3).

Proof

Note that the proof in case (L1) is similar to that in case (L2), so we leave it to the reader and begin with (L2). Let \(t \in [t_{n}, t_{n+1})\). We can write

$$\begin{aligned} e(t) &= \int _{t_{0}}^{t} a\bigl(X(s), X(s - \tau )\bigr) \,ds + \int _{t_{0}}^{t} b\bigl(X(s), X(s - \tau ) \bigr) \,dW(s) \\ &\quad {}- \int _{t_{0}}^{t} a\bigl(X^{*}(s), \widetilde{Z}(s)\bigr) \,ds - \int _{t_{0}} ^{t} b\bigl(X^{*}(s), \widetilde{Z}(s)\bigr) \,dW(s), \end{aligned}$$

where based on (2.5) and (2.6), \(X^{*}(s) = X^{*}(t _{i})\) and \(\widetilde{Z}(s) = \widetilde{Z}(t_{i})\) when \(s \in [t _{i}, t_{i+1})\), \(i = 0,\ldots, n\). So

$$\begin{aligned}& \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \\& \quad \leq 2 \mathbf{E} \biggl(\sup_{t_{0} \leq v \leq t} \biggl\vert \int _{t_{0}}^{v} \bigl(a\bigl(X(s), X(s - \tau )\bigr) - a\bigl(X^{*}(s), \widetilde{Z}(s)\bigr) \bigr) \,ds \biggr\vert ^{2} \biggr) \\& \qquad {} + 2 \mathbf{E} \biggl(\sup_{t_{0} \leq v \leq t} \biggl\vert \int _{t_{0}}^{v} \bigl(b\bigl(X(s), X(s - \tau ) \bigr) - b\bigl(X^{*}(s), \widetilde{Z}(s)\bigr) \bigr) \,dW(s) \biggr\vert ^{2} \biggr). \end{aligned}$$

By Hölder’s and Doob’s martingale inequalities, we derive

$$\begin{aligned} \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) &\leq 2(t - t_{0}) \int _{t_{0}}^{t_{n+1}} \mathbf{E} \bigl\vert a\bigl(X(s), X(s - \tau )\bigr) - a\bigl(X ^{*}(s), \widetilde{Z}(s)\bigr) \bigr\vert ^{2} \,ds \\ &\quad {}+ 8 \int _{t_{0}}^{t_{n+1}} \mathbf{E} \bigl\vert b\bigl(X(s), X(s - \tau )\bigr) - b \bigl(X ^{*}(s), \widetilde{Z}(s)\bigr) \bigr\vert ^{2} \,ds. \end{aligned}$$

By Assumption 1, we have

$$\begin{aligned} &\mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \\ &\quad \leq 2K _{1}(T-t_{0} + 4) \int _{t_{0}}^{t_{n+1}} \bigl(\mathbf{E} \bigl\vert X(s) - X^{*}(s) \bigr\vert ^{2} + \mathbf{E} \bigl\vert X(s - \tau ) - \widetilde{Z}(s) \bigr\vert ^{2} \bigr) \,ds \\ &\quad = 2K_{1}(T-t_{0} + 4)\sum_{i = 0}^{n} \int _{t_{i}}^{t_{i+1}} \bigl( \mathbf{E} \bigl\vert X(s) -X^{*}(t_{i}) \bigr\vert ^{2} + \mathbf{E} \bigl\vert X(s - \tau ) - \widetilde{Z}(t_{i}) \bigr\vert ^{2} \bigr) \,ds. \end{aligned}$$

We can write

$$ \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \leq M \sum_{i = 0}^{n} \biggl( \int _{t_{i}}^{t_{i+1}} \bigl(A_{1}(s) + A_{2}(s) + A_{3}(s) + A_{4}(s) \bigr) \,ds \biggr), $$
(3.1)

where \(M = 4K_{1}(T-t_{0} + 4)\) and for \(s \in [t_{i}, t_{i+1})\)

$$\begin{aligned}& A_{1}(s) = \mathbf{E} \bigl\vert X(s) - \widetilde{X}(s) \bigr\vert ^{2}, \\& A_{2}(s) = \mathbf{E} \bigl\vert \widetilde{X}(s) - X^{*}(t_{i}) \bigr\vert ^{2}, \end{aligned}$$

and in case (L2)

$$\begin{aligned}& A_{3}(s) = \mathbf{E} \bigl\vert X\bigl(s - \tau (s)\bigr) - X \bigl(t_{i} - \tau (t_{i})\bigr) \bigr\vert ^{2}, \\& A_{4}(s) = \mathbf{E} \bigl\vert X\bigl(t_{i} - \tau (t_{i})\bigr) - \widetilde{Z}(t_{i}) \bigr\vert ^{2}, \end{aligned}$$

and in case (L3)

$$\begin{aligned}& A_{3}(s) = \mathbf{E} \bigl\vert X\bigl(s - \tau \bigl(s, X(s) \bigr)\bigr) - X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr\vert ^{2}, \\& A_{4}(s) = \mathbf{E} \bigl\vert X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) - \widetilde{Z}(t_{i}) \bigr\vert ^{2}, \end{aligned}$$

where \(\widetilde{Z}(t_{i}) = \widetilde{X}(t_{i} - \tau (t_{i}, \widetilde{X}(t_{i})))\). Strictly speaking, we should write \(A_{1i}\), \(A_{2i}\), \(A_{3i}\) and \(A_{4i}\) instead of \(A_{1}\), \(A_{2}\), \(A_{3}\) and \(A_{4}\), but the second subscript is dropped for the sake of simplicity. We now present the necessary upper bounds for these functions. Clearly,

$$ A_{1}(s) = \mathbf{E} \bigl\vert e(s) \bigr\vert ^{2}. $$
(3.2)

Due to the Hölder continuity property of the Brownian motion as well as Assumption 1 and relation (2.7), we have

$$\begin{aligned} A_{2}(s) &= \mathbf{E} \bigl\vert b \bigl({X}^{*}(t_{i}), \widetilde{Z}(t _{i})\bigr) \bigl(W(s) - W(t_{i})\bigr) \bigr\vert ^{2} \\ &\leq K_{6} \vert s - t_{i} \vert , \end{aligned}$$
(3.3)

where

$$ K_{6} = 2K_{1}H. $$

Note that \(H =\bar{H}\). We now obtain the necessary error bounds for \(A_{3}(s)\) and \(A_{4}(s)\), considering first case (L2). If the values \(s - \tau (s)\) and \(t_{i} - \tau (t_{i})\) are both less than or both larger than \(t_{0}\), then by Assumption 3, Theorem 2.1 and the Lipschitz continuity of τ from Assumption 2, we see that

$$ \mathbf{E} \bigl\vert X\bigl(s - \tau (s)\bigr) - X \bigl(t_{i} - \tau (t_{i})\bigr) \bigr\vert ^{2} \leq (K_{4} + K_{5}) (1 + K_{2}) \vert s - t_{i} \vert . $$
(3.4)

If one of \(s - \tau (s)\) and \(t_{i} - \tau (t_{i})\) is less than \(t_{0}\) and the other is larger than \(t_{0}\), then by the intermediate value theorem there exists a point \(t^{*} \in [t_{i}, s] \subset [t_{i}, t_{i+1}]\) such that \(t^{*} - \tau (t^{*}) = t_{0}\). So we get

$$\begin{aligned} \mathbf{E} \bigl\vert X\bigl(s - \tau (s)\bigr) - X\bigl(t_{i} - \tau (t_{i})\bigr) \bigr\vert ^{2} &\leq 2 \mathbf{E} \bigl\vert X\bigl(s - \tau (s)\bigr) - X\bigl(t^{*} - \tau \bigl(t^{*}\bigr)\bigr) \bigr\vert ^{2} \\ &\quad {}+ 2 \mathbf{E} \bigl\vert X\bigl(t^{*} - \tau \bigl(t^{*}\bigr)\bigr) - X\bigl(t_{i} - \tau (t_{i})\bigr) \bigr\vert ^{2}. \end{aligned}$$

Similar to the previous argument discussed in obtaining (3.4), we get

$$ A_{3}(s) \leq 2(K_{4}+K_{5}) (1 + K_{2}) \vert s - t_{i} \vert . $$

Besides, \(\widetilde{Z}(t_{i})\) approximates the solution at \(t_{i} - \tau (t_{i})\) by (2.3)–(2.4). So we can write \(A_{4}(s) = \mathbf{E}|e(t_{i} - \tau (t_{i}))|^{2}\), and then we have

$$ \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \leq M \bigl(2(K_{4}+ K_{5}) (1 + K_{2} )+ K_{6} \bigr) \sum_{i=0}^{n} \frac{ {\Delta t_{i}}^{2}}{2}+2 M \int _{t_{0}}^{t}\mathbf{E}\Bigl( \sup _{t_{0} \leq v \leq s} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \,ds. $$

Applying Gronwall’s lemma yields

$$ \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \leq C \sum_{i=0}^{n}{ \Delta t_{i}}^{2}, $$

where \(C = \frac{M}{2} (2(K_{4}+ K_{5})(1 + K_{2})+ K_{6} ) e ^{2M(T - t_{0})}\). Due to the arbitrariness of n, we can write

$$ \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq T} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \leq C\sum_{i=0}^{N-1}{ \Delta t_{i}}^{2} \leq C h \sum_{i=0}^{N - 1}{ \Delta t _{i}}, $$

Since \(\sum_{i=0}^{N-1}{\Delta t_{i}}= T - t_{0}\), the desired result is obtained. We now turn to case (L3), where τ is a function of \(X(t)\). To this end, we break \(A_{3}(s)\) down into four terms as follows:

$$ A_{3}(s) = A_{31} + A_{32} + A_{33} + A_{34}, $$

where

$$\begin{aligned}& A_{31}(s) = \int _{\varOmega _{1}} \bigl\vert X\bigl(s - \tau \bigl(s, X(s)\bigr) \bigr) - X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr\vert ^{2} \,dP, \\& A_{32}(s) = \int _{\varOmega _{2}} \bigl\vert X\bigl(s - \tau \bigl(s, X(s)\bigr) \bigr) - X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr\vert ^{2} \,dP, \\& A_{33}(s) = \int _{\varOmega _{3}} \bigl\vert X\bigl(s - \tau \bigl(s, X(s)\bigr) \bigr) - X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr\vert ^{2} \,dP, \\& A_{34}(s) = \int _{\varOmega _{4}} \bigl\vert X\bigl(s - \tau \bigl(s, X(s)\bigr) \bigr) - X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr\vert ^{2} \,dP, \end{aligned}$$

with

$$\begin{aligned}& \varOmega _{1} = \bigl\{ \omega : s - \tau \bigl(s, {X}(s)\bigr) \leq t_{0}\mbox{ and }t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr) \leq t_{0}\bigr\} , \\& \varOmega _{2} = \bigl\{ \omega : s - \tau \bigl(s, {X}(s)\bigr) \leq t_{0}\mbox{ and }t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr) > t_{0}\bigr\} , \\& \varOmega _{3} = \bigl\{ \omega : s - \tau \bigl(s, {X}(s)\bigr) > t_{0}\mbox{ and }t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr) \leq t_{0}\bigr\} , \\& \varOmega _{4} = \bigl\{ \omega : s - \tau \bigl(s, {X}(s)\bigr) > t_{0}\mbox{ and }t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr) > t_{0}\bigr\} . \end{aligned}$$

Assumption 3, Assumption 2 and Theorem 2.1 yield that

$$\begin{aligned} A_{31}(s) &=\mathbf{E} \bigl\vert 1_{\varOmega _{1}} \bigl( X\bigl(s - \tau \bigl(s, X(s)\bigr)\bigr) - X\bigl(t _{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr) \bigr\vert ^{2} \\ &\leq K_{4} \bigl( \vert s - t_{i} \vert + \mathbf{E} \bigl\vert \tau \bigl(s, {X}(s)\bigr) - \tau \bigl(t _{i}, \widetilde{X}(t_{i})\bigr) \bigr\vert \bigr) \\ &\leq K_{4} \bigl( \vert s - t_{i} \vert + \mathbf{E} \bigl\vert \tau \bigl(s, {X}(s)\bigr) - \tau \bigl(t _{i}, {X}(s) \bigr) \bigr\vert \\ &\quad {}+ \mathbf{E} \bigl\vert \tau \bigl(t_{i}, {X}(s)\bigr) - \tau \bigl(t_{i}, {X}(t_{i})\bigr) \bigr\vert + \mathbf{E} \bigl\vert \tau \bigl(t_{i}, {X}(t_{i})\bigr) - \tau \bigl(t_{i}, \widetilde{X}(t _{i})\bigr) \bigr\vert \bigr) \\ &\leq K_{4} \bigl((1 + K_{2}) \vert s - t_{i} \vert + K_{3} \bigl(\mathbf{E} \bigl\vert {X}(s) - {X}(t_{i}) \bigr\vert + \mathbf{E} \bigl\vert e(t_{i}) \bigr\vert \bigr) \bigr) \\ &\leq K_{4} \bigl((1 + K_{2}) \vert s - t_{i} \vert + K_{3} \bigl(\sqrt{K_{5}} \vert s - t_{i} \vert ^{1/2} + \mathbf{E} \bigl\vert e(t_{i}) \bigr\vert \bigr) \bigr), \end{aligned}$$

by keeping the dominant terms (for small \(|s - t_{i}|\), the \(|s - t_{i}|^{1/2}\) term dominates \(|s - t_{i}|\)), we achieve

$$ A_{31}(s) \leq K_{4} K_{3} \bigl( \sqrt{K_{5}} \vert s - t_{i} \vert ^{1/2} + \mathbf{E} \bigl\vert e(t_{i}) \bigr\vert \bigr). $$

Similarly, an upper bound for \(A_{34}(s)\) using Theorem 2.1, Assumption 2 and Assumption 3 is obtained as

$$ A_{34}(s) \leq K_{5} K_{3} \bigl( \sqrt{K_{5}} \vert s - t_{i} \vert ^{1/2} + \mathbf{E} \bigl\vert e(t_{i}) \bigr\vert \bigr). $$

To obtain an upper bound for \(A_{32}\) and \(A_{33}\), we suppose that there exists \(t^{*}\), \(t_{i} < t^{*} < s\), such that

$$ P \bigl(\bigl\{ t^{*} - \tau \bigl(t^{*}, {X} \bigl(t^{*}\bigr)\bigr) = t_{0} \bigr\} \bigr) = 1. $$

We can write

$$\begin{aligned} A_{32}(s) &=\mathbf{E} \bigl\vert 1_{\varOmega _{2}} \bigl( X\bigl(s - \tau \bigl(s, X(s)\bigr)\bigr) - X\bigl(t _{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr) \bigr\vert ^{2} \\ &\leq 2 \int _{\varOmega _{2}} \bigl( \bigl\vert X\bigl(s - \tau \bigl(s, X(s) \bigr)\bigr) - X\bigl(t^{*} - \tau \bigl(t^{*}, {X} \bigl(t^{*}\bigr)\bigr)\bigr) \bigr\vert ^{2} \\ &\quad {}+ \bigl\vert X\bigl(t^{*} - \tau \bigl(t^{*}, {X} \bigl(t^{*}\bigr)\bigr)\bigr) - X\bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t_{i})\bigr)\bigr) \bigr\vert ^{2} \bigr) \,dP. \end{aligned}$$

In a manner similar to that employed in finding the upper bound of \(A_{31}(s)\), we see that

$$ A_{32}(s) \leq 2K_{3}\sqrt{K_{5}}(K_{4} + K_{5}) \vert s - t_{i} \vert ^{1/2} +2K _{5}K_{3}\mathbf{E} \bigl\vert e(t_{i}) \bigr\vert $$

and

$$ A_{33}(s) \leq 2K_{3}\sqrt{K_{5}}(K_{4} + K_{5}) \vert s - t_{i} \vert ^{1/2} +2K _{4}K_{3}\mathbf{E} \bigl\vert e(t_{i}) \bigr\vert . $$

Then we have

$$ A_{3}(s) \leq 5(K_{4} + K_{5})K_{3}\sqrt{K_{5}} \vert s - t _{i} \vert ^{1/2} + 3(K_{4} + K_{5})K_{3}\mathbf{E} \bigl\vert e(t_{i}) \bigr\vert . $$
(3.5)

Since \(\widetilde{Z}(t_{i}) = \widetilde{X}(t_{i} - \tau (t_{i}, \widetilde{X}(t_{i})))\), the function \(A_{4}\) becomes

$$ A_{4}(s) = \mathbf{E} \bigl\vert e \bigl(t_{i} - \tau \bigl(t_{i}, \widetilde{X}(t _{i})\bigr)\bigr) \bigr\vert ^{2}. $$
(3.6)

Therefore by (3.1), (3.2), (3.3), (3.5) and (3.6)

$$\begin{aligned} \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) &\leq M \sum_{i = 0}^{n} \int _{t_{i}}^{t_{i+1}} 5(K_{4} + K_{5})K_{3} \sqrt{K _{5}} \vert s - t_{i} \vert ^{1/2} \,ds \\ &\quad {} + 3M(K_{4} + K_{5})K_{3} \sum _{i=0}^{n}(t_{i+1} - t_{i}) \mathbf{E} \bigl\vert e(t_{i}) \bigr\vert \\ &\quad {} + 2M \int _{t_{0}}^{t} \mathbf{E} \Bigl(\sup _{t_{0} < v \leq s} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \,ds. \end{aligned}$$

The application of Gronwall’s lemma results in

$$\begin{aligned} \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) &\leq M(K _{4} + K_{5})K_{3} e^{2M(t - t_{0})} \Biggl( \frac{10}{3}\sqrt{K_{5}} \sum _{i=0}^{n} \vert t_{i+1} - t_{i} \vert ^{3/2} \\ &\quad {}+3 \sum_{i=0}^{n}(t_{i+1} - t_{i}) \mathbf{E} \bigl\vert e(t_{i}) \bigr\vert \Biggr), \end{aligned}$$

we can write

$$ \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) \leq C_{1} h^{1/2} + \frac{9C_{1}}{10 \sqrt{K_{5}}} \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert \Bigr), $$
(3.7)

where \(C_{1} = \frac{10(T-t_{0})}{3} M(K_{4} + K_{5})K_{3}\sqrt{K _{5}}e^{2M(T-t_{0})}\). In view of \((\mathbf{E}(A))^{2} \leq \mathbf{E}(A^{2})\), by Jensen’s inequality, we can write

$$ \Bigl(\mathbf{E}\Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert \Bigr) \Bigr)^{2} \leq C_{1} h^{1/2} + \frac{9C_{1}}{10 \sqrt{K_{5}}} \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert \Bigr). $$
(3.8)

We now set

$$ A = \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert \Bigr), $$
(3.9)

and hence by (3.8) and (3.9)

$$ A^{2} \leq C_{1} h^{1/2} + C_{2} {A}, $$

where \(C_{2} = \frac{9C_{1}}{10 \sqrt{K_{5}}}\). By recurrence, one sees that

$$\begin{aligned} A^{2} &\leq C_{1}h^{1/2} + C_{2}{A} \\ &\leq C_{1}h^{1/2} + C_{2}\sqrt{C_{1}h^{1/2}+ C_{2} {A}}. \end{aligned}$$
(3.10)

We define

$$ B = C_{1}h^{1/2} + C_{2}{A}. $$
(3.11)

By (3.10), firstly,

$$ A^{2} \leq B, $$
(3.12)

and secondly,

$$ B \leq C_{1}h^{1/2} + C_{2} \sqrt{B}. $$
(3.13)

Since \(C_{2} > 0\) and \(A \geq 0\), from (3.11) we get

$$ B - C_{1}h^{1/2} \geq 0. $$
(3.14)

By (3.13) and (3.14), we observe that \((B - C_{1}h^{1/2})^{2} \leq {C_{2}}^{2}{B}\) and

$$ B^{2} - \bigl({C_{2}}^{2} + 2C_{1}h^{1/2} \bigr)B + {C_{1}}^{2} h \leq 0. $$
(3.15)

For this quadratic inequality, we obtain two critical values for B as follows:

$$\begin{aligned}& B_{1} = \frac{({C_{2}}^{2} + 2C_{1}h^{1/2}) - \sqrt{{C_{2}}^{4} + 4C _{1}{C_{2}}^{2}h^{1/2}}}{2}, \\& B_{2} = \frac{({C_{2}}^{2} + 2C_{1}h^{1/2}) + \sqrt{{C_{2}}^{4} + 4C _{1}{C_{2}}^{2}h^{1/2}}}{2}. \end{aligned}$$

Obviously, relation (3.15) is satisfied for \(B \in [B_{1}, B_{2}]\). By the Taylor expansion of function \(\sqrt{{C_{2}}^{4} + 4C_{1} {C_{2}}^{2}h^{1/2}}\) about point \({C_{2}}^{4}\), we achieve

$$\begin{aligned}& B_{1} = {C_{1}}^{2}{{C_{2}}^{4}} {{\xi _{2}}^{-3/2}} h, \\& B_{2} = {C_{2}}^{2} + C_{1}\bigl(1 + {{C_{2}}^{2}} {{\xi _{1}}^{-1/2}}\bigr) h ^{1/2}, \end{aligned}$$

where \(\xi _{1}, \xi _{2} \in ({C_{2}}^{4}, {C_{2}}^{4} + 4C_{1}{C_{2}} ^{2}h^{1/2} )\). By (3.12), \(A^{2} \leq B\) for all \(B \in [B_{1}, B_{2}]\), and \(B_{1}\) is the sharpest bound. Note that if \(h \rightarrow 0\), then \(B_{1} \rightarrow 0\), and so \(A^{2} \rightarrow 0\). Hence, by the definition of A in (3.9), \(\mathbf{E} ( \sup_{t_{0} \leq v \leq t} |e(v)| ) \rightarrow 0\), and we get \(\mathbf{E} (\sup_{t_{0} \leq v \leq t} |e(v)|^{2} ) \rightarrow 0\) by (3.7). Hence the convergence of the scheme follows. In order to determine a sharp bound for \(A^{2}\), we are interested in a quantity which is as small as possible, so \(B_{1}\) reveals the convergence rate: \(A = \mathcal{O}(h^{1/2})\), and consequently, by (3.9) and (3.7), we obtain

$$ \mathbf{E} \Bigl(\sup_{t_{0} \leq v \leq t} \bigl\vert e(v) \bigr\vert ^{2} \Bigr) = \mathcal{O}\bigl(h^{1/2}\bigr). $$

Since there is no restriction on t, the desired result follows immediately. □

Let us now mention two important remarks in line with Theorem 3.1.

Remark 3.2

([21])

As is well known, two different seminorms cannot, in general, be majorised by constant multiples of each other. So there is no ambiguity if \(\|\cdot\|_{{\mathcal{L}^{1}}}\) and \(\|\cdot\|_{{\mathcal{L}^{2}}}\) are proportional to different powers of h in Theorem 3.1.

Remark 3.3

Note that as the scheme proceeds on the partition \({\varLambda }_{1} = \{t_{0}, t_{1},\ldots, t_{N} =T\}\), which covers all mesh points, to integrate the solution at those points, some additional points, which we may call hidden, are brought up. For instance, \(t_{k} \in \varLambda _{1}\) corresponds to \(t_{k} - \tau _{k}\) and the approximation \(\widetilde{Z}_{k}\). Renaming \(t_{k} - \tau _{k}\) as \(t_{m}\), we can collect these points into a second partition \(\varLambda _{2}\). Besides, here the underlying scheme and the interpolation, which approximate the stochastic process on \(\varLambda _{1}\) and \(\varLambda _{2}\), respectively, are the same. So, in practice, the proposed approximation computes the solution at the points of \({\varLambda }_{1}\) and \({\varLambda }_{2}\).

4 Stability

The main objective in the numerical stability literature is to examine whether the numerical solution mimics the behaviour of the exact process. In particular, it is important to know how the scheme behaves as n tends to infinity when the exact process becomes trivial as t becomes very large. In effect, the impact of rounding errors, which are inevitable, on the numerical results in the long term is analysed. In this section, the asymptotic mean-square stability corresponding to SDDE (1.1), and also to a linear test equation, will be examined.

Definition 4.1

The exact (strong) solution of SDDE (1.1), denoted by X, is called asymptotically mean-square stable if

$$ {\lim_{t \rightarrow \infty }} \mathbf{E} \bigl\vert {X}(t) \bigr\vert ^{2} = 0. $$

In the sequel, we present a theorem which deals with the mean-square stability of SDDE (1.1); a proof sketch is given in the Appendix.

Theorem 4.2

Given SDDE (1.1) satisfying Assumptions 1–3, assume that there exist a positive constant λ and non-negative constants \(\alpha _{0}\), \(\alpha _{1}\), \(\beta _{0}\) and \(\beta _{1}\) such that

$$\begin{aligned}& x^{T}a(x,0) \leq -\lambda \vert x \vert ^{2}, \\& \bigl\vert a(x,0) - a(\bar{x}, y) \bigr\vert \leq \alpha _{0} \vert x-\bar{x} \vert + \alpha _{1} \vert y \vert , \\& \operatorname{trace}\bigl[b^{T}(x, y)b(x, y)\bigr] \leq \beta _{0} \vert x \vert ^{2} + \beta _{1} \vert y \vert ^{2} \end{aligned}$$

for all \(x, \bar{x}, y \in \mathbb{R}^{d}\). If

$$ \lambda > \alpha _{1} + \frac{1}{2}(\beta _{0} + \beta _{1}), $$

then the zero solution is mean-square stable.

Definition 4.3

Let \(\widetilde{X}\) be the numerical solution of SDDE (1.1), and suppose there exists \({\bar{h}}(a, b, c, d) > 0\) such that the maximum stepsize lies in \((0, {\bar{h}}(a, b, c, d))\). Then the scheme is called asymptotically mean-square stable if

$$ {\lim_{k \rightarrow \infty }} \mathbf{E} \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2} = 0. $$

Theorem 4.4

Given SDDE (1.1), let Assumptions 1–3 hold and \(\|\eta \|_{{\mathcal{L}^{2}}(D)} < \infty \). Assume that there exist two positive constants \(\lambda _{1}\) and \(\lambda _{2}\) and non-negative constants \(\beta _{0}\) and \(\beta _{1}\) such that

$$ \textstyle\begin{cases} x^{T}a(x,y) \leq -\lambda _{1} \vert x \vert ^{2} + \lambda _{2} \vert y \vert ^{2} , \\ \operatorname{trace}[b^{T}(x, y)b(x, y)] \leq \beta _{0} \vert x \vert ^{2} + \beta _{1} \vert y \vert ^{2} \end{cases} $$
(4.1)

for all \(x, y \in \mathbb{R}^{d}\). Assume that \(a(0,0) =0\) and \(b(0,0) = 0\). If \(2\lambda _{1}>2\lambda _{2}+\beta _{0} +\beta _{1}\), \(\beta _{0}\lambda _{2} +\beta _{1}\lambda _{1} \neq 0\) and \({\Delta t _{k}} \in (0, \frac{2\lambda _{1}-2\lambda _{2}-\beta _{0} -\beta _{1}}{2( \beta _{0}\lambda _{2}+ \beta _{1}\lambda _{1})})\), then scheme (2.2)–(2.4) is asymptotically mean-square stable.

Proof

From (2.2), we can write

$$ \textstyle\begin{cases} \vert \widetilde{X}(t_{k}) \vert ^{2} = \langle X^{*}(t_{k}) - a (X^{*}(t _{k}), \widetilde{Z}(t_{k}) )\Delta t_{k}, X^{*}(t_{k}) - a (X ^{*}(t_{k}), \widetilde{Z}(t_{k}) )\Delta t_{k}\rangle \\ \hphantom{ \vert \widetilde{X}(t_{k}) \vert ^{2}} = \vert X^{*}(t_{k}) \vert ^{2} -2{X^{*}}^{T}(t_{k})a (X^{*}(t_{k}), \widetilde{Z}(t_{k})){\Delta t_{k}} + \vert a (X^{*}(t_{k}), \widetilde{Z}(t_{k}) ) \vert ^{2}{\Delta t_{k}}^{2}, \end{cases} $$

by (4.1), we get

$$\begin{aligned} \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2} &\geq \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} +2\bigl( \lambda _{1} \bigl\vert X ^{*}(t_{k}) \bigr\vert ^{2} - \lambda _{2} \bigl\vert \widetilde{Z}(t_{k}) \bigr\vert ^{2}\bigr){\Delta t _{k}} + \bigl\vert a \bigl(X^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr) \bigr\vert ^{2}{\Delta t _{k}}^{2} \\ &\geq \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} +2\bigl(\lambda _{1} \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} - \lambda _{2} \bigl\vert \widetilde{Z}(t_{k}) \bigr\vert ^{2}\bigr){\Delta t_{k}}. \end{aligned}$$

Henceforth, we get

$$ \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} \leq \frac{1}{1+2\lambda _{1}{\Delta t_{k}}} \bigl( \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2} + 2\lambda _{2} \bigl\vert \widetilde{Z}(t _{k}) \bigr\vert ^{2}{\Delta t_{k}}\bigr). $$
(4.2)

Again, from (2.2), we have

$$\begin{aligned} \bigl\vert \widetilde{X}(t_{k+1}) \bigr\vert ^{2} &= \bigl\langle X^{*}(t_{k}) + b \bigl(X^{*}(t _{k}), \widetilde{Z}(t_{k}) \bigr)\Delta W_{k}, X^{*}(t_{k}) + b \bigl(X ^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr)\Delta W_{k} \bigr\rangle \\ & = \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} + 2{X^{*}}^{T}(t_{k})b \bigl(X^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr)\Delta W_{k} \\ &\quad {} + \bigl(b \bigl(X^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr)\Delta W_{k}\bigr)^{T}\bigl(b \bigl(X^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr)\Delta W_{k}\bigr). \end{aligned}$$

By applying the expectation and using (4.1) and (4.2), we get

$$\begin{aligned} \mathbf{E} \bigl\vert \widetilde{X}(t_{k+1}) \bigr\vert ^{2} &= \mathbf{E} \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} + \mathbf{E} \bigl(\operatorname{trace}\bigl[b \bigl(X^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr)^{T} b \bigl(X^{*}(t_{k}), \widetilde{Z}(t_{k}) \bigr)\bigr] \bigr)\Delta t_{k} \\ &\leq \mathbf{E} \bigl\vert X^{*}(t_{k}) \bigr\vert ^{2} + \bigl(\beta _{0} \mathbf{E} \bigl\vert X^{*}(t _{k}) \bigr\vert ^{2} + \beta _{1}\mathbf{E} \bigl\vert \widetilde{Z}(t_{k}) \bigr\vert ^{2} \bigr) \Delta t_{k} \\ &\leq \frac{1 + \beta _{0} \Delta t_{k}}{1+2\lambda _{1}{\Delta t_{k}}} \bigl( \mathbf{E} \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2} + 2\lambda _{2}\mathbf{E} \bigl\vert \widetilde{Z}(t_{k}) \bigr\vert ^{2}{\Delta t_{k}} \bigr) + \beta _{1}\mathbf{E} \bigl\vert \widetilde{Z}(t_{k}) \bigr\vert ^{2}{\Delta t_{k}} \\ &= \frac{1 + \beta _{0} \Delta t_{k}}{1+2\lambda _{1}{\Delta t_{k}}} \mathbf{E} \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2} + \biggl(\frac{2\lambda _{2}{\Delta t_{k}}(1 + \beta _{0} \Delta t_{k}) }{1+2\lambda _{1}{\Delta t_{k}}} + \beta _{1} { \Delta t_{k}} \biggr) \mathbf{E} \bigl\vert \widetilde{Z}(t_{k}) \bigr\vert ^{2} \\ &= \frac{1 + \beta _{0} \Delta t_{k}}{1+2\lambda _{1}{\Delta t_{k}}} \mathbf{E} \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2} + \biggl(\frac{2\lambda _{2}{\Delta t_{k}}(1 + \beta _{0} \Delta t_{k}) }{1+2\lambda _{1}{\Delta t_{k}}} + \beta _{1} { \Delta t_{k}} \biggr)\mathbf{E} \bigl\vert \widetilde{X}(t_{m_{k}}) \bigr\vert ^{2}, \end{aligned}$$

where \(\widetilde{X}(t_{m_{k}}) = \widetilde{Z}(t_{k})\) with \(t_{m_{k}} = t_{k} - \tau _{k}\). Hence, we can write

$$ \mathbf{E} \bigl\vert \widetilde{X}(t_{k+1}) \bigr\vert ^{2} \leq \biggl(\frac{(1+2\lambda _{2}{\Delta t_{k}})(1 + \beta _{0} \Delta t_{k}) }{1+2\lambda _{1}{\Delta t _{k}}} + \beta _{1} {\Delta t_{k}} \biggr)\cdot \max \bigl\{ \mathbf{E} \bigl\vert \widetilde{X}(t_{k}) \bigr\vert ^{2}, \mathbf{E} \bigl\vert \widetilde{X}(t_{m_{k}}) \bigr\vert ^{2}\bigr\} . $$

To have mean-square stability, we require \(\frac{(1+2\lambda _{2}{\Delta t _{k}})(1 + \beta _{0} \Delta t_{k}) }{1+2\lambda _{1}{\Delta t_{k}}} + \beta _{1} {\Delta t_{k}} <1\), which yields \({\Delta t_{k}} \in (0, \frac{2\lambda _{1}-2\lambda _{2}-\beta _{0} -\beta _{1}}{2(\beta _{0} \lambda _{2}+ \beta _{1}\lambda _{1})})\) under the assumptions \(2\lambda _{1}>2\lambda _{2}+\beta _{0} +\beta _{1}\) and \(\beta _{0}\lambda _{2} + \beta _{1}\lambda _{1} \neq 0\). □
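As a quick numerical aid, the stepsize bound of Theorem 4.4 is straightforward to evaluate; the following is a minimal sketch (the function name is ours):

```python
def max_stable_stepsize(lam1, lam2, beta0, beta1):
    """Stepsize bound from Theorem 4.4; valid when
    2*lam1 > 2*lam2 + beta0 + beta1 and beta0*lam2 + beta1*lam1 > 0."""
    num = 2.0 * lam1 - 2.0 * lam2 - beta0 - beta1
    den = 2.0 * (beta0 * lam2 + beta1 * lam1)
    if num <= 0.0 or den <= 0.0:
        raise ValueError("hypotheses of Theorem 4.4 are violated")
    return num / den
```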

As is customary, we consider a linear scalar test equation with state-dependent delay:

$$ \textstyle\begin{cases} dX(t) = (a X(t) +b X(t - \tau ) ) \,dt + (c X(t) + d X(t - \tau ) ) \,dW(t) ,\quad t \in [t_{0}, T], \\ X(t) = \eta (t),\quad t \in [t_{0} - r, t_{0}], \end{cases} $$
(4.3)

where \(a, b, c, d \in \mathbb{R}\) and \(\tau = \tau (t, X(t))\). Let \(\|\eta \|_{{\mathcal{L}^{2}}(D)} < \infty \). Such test problems in the cases of discrete and time-dependent delay can be found in [23, 28]. The following corollary states the condition under which (4.3) is asymptotically mean-square stable.

Corollary 4.5

If the following condition is satisfied

$$ a < - \vert b \vert - \frac{( \vert c \vert + \vert d \vert )^{2}}{2}, $$
(4.4)

then SDDE (4.3) is mean-square stable.

Proof

By Theorem 4.2 and setting \(\lambda = -a\), \(\alpha _{0} = |a|\), \(\alpha _{1} = |b|\), \(\beta _{0} = |c|^{2} + |cd|\), \(\beta _{1} = |d|^{2} + |cd|\) (these choices are valid since \((cx+dy)^{2} \leq (c^{2}+|cd|)x^{2} + (d^{2}+|cd|)y^{2}\), and they give \((\beta _{0} + \beta _{1})/2 = (|c|+|d|)^{2}/2\)), the desired result is achieved. □

Now we turn our attention to the proposed method (2.2)–(2.4). Applying this scheme to the given equation (4.3) yields

$$ \textstyle\begin{cases} \widetilde{X}_{k+1} = \frac{1 + c \Delta W_{k}}{1 - a \Delta _{t_{k}}}(\widetilde{X}_{k} + b \widetilde{Z}_{k}\Delta _{t_{k}}) + d \widetilde{Z}_{k} \Delta W_{k}, \\ \quad \text{where if } t_{k} -\tau _{k} \in [t_{m}, t_{m+1}), \\ \widetilde{Z}_{k} = \frac{1 + c (W(t_{k} -\tau _{k}) - W(t_{m}))}{1 - a(t_{k} - \tau _{k} -t_{m})} (\widetilde{X}_{m} + b \widetilde{Z} _{m} (t_{k} - \tau _{k} -t_{m}) ) + d \widetilde{Z}_{m} ( W(t _{k} - \tau _{k}) - W(t_{m}) ), \\ \quad \text{else if } t_{k} -\tau _{k} \leq t_{0}, \widetilde{Z}_{k} = \eta (t_{k} - \tau _{k}). \end{cases} $$
(4.5)

Remember that the scheme is applied on \({\varLambda }_{1}\), a non-uniform partition \({\varLambda }_{1} =\{t_{0}, t_{1},\ldots,t_{N}=T \}\), with \(\Delta _{t_{k}} = t_{k+1} - t_{k}\) and \(\Delta W_{k} = W _{k+1} - W_{k}\). Based on what was discussed in Remark 3.3, we have an approximation on \({\varLambda }_{1} \cup {\varLambda }_{2}\). This observation paves the way for analysing the stability of the scheme. In the following, sufficient conditions for the mean-square stability of scheme (4.5) are determined.

Theorem 4.6

Under condition (4.4), the numerical scheme (4.5) applied to the linear SDDE (4.3) is asymptotically mean-square stable under certain restrictions on the stepsize.

Proof

As noted above, the scheme practically proceeds along the partition \({\varLambda }_{1} \cup {\varLambda }_{2}\). Thus, we can write

$$\begin{aligned} \vert \widetilde{X}_{k+1} \vert ^{2} &\leq \biggl( \frac{1 + c\Delta W_{k}}{1 - a \Delta _{t_{k}}}\biggr)^{2} \vert \widetilde{X}_{k} \vert ^{2} + 2 \biggl(\frac{1 + c\Delta W _{k}}{1 - a \Delta _{t_{k}}}\biggr) \biggl( \frac{1 + c \Delta W_{k}}{1 - a{\Delta _{t_{k}}}}b \Delta _{t_{k}} + d \Delta W_{k}\biggr) \vert \widetilde{X}_{k} \vert \vert \widetilde{Z}_{k} \vert \\ &\quad {} + \biggl(\frac{1 + c \Delta W_{k}}{1 - a \Delta _{t_{k}}}b{\Delta _{t_{k}}} + d{\Delta W_{k}}\biggr)^{2} \vert \widetilde{Z}_{k} \vert ^{2}. \end{aligned}$$

Now, by applying the expectation, we have

$$\begin{aligned} \mathbf{E} \vert \widetilde{X}_{k+1} \vert ^{2} &\leq \frac{1 + c^{2}\Delta _{t _{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}}\mathbf{E} \vert \widetilde{X}_{k} \vert ^{2} +2 \biggl(\frac{1 + c^{2}\Delta _{t_{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}} \vert b \vert \Delta _{t_{k}} + \frac{ \vert dc \vert }{ \vert 1 - a {\Delta _{t_{k}}} \vert } {\Delta _{t_{k}}}\biggr) \mathbf{E} \bigl( \vert \widetilde{X}_{k} \vert \vert \widetilde{Z}_{k} \vert \bigr) \\ &\quad {}+ \biggl(\frac{1 + c^{2}\Delta _{t_{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}}b^{2} {\Delta _{t_{k}}}^{2} +\frac{2 \vert dbc \vert }{ \vert 1 - a \Delta _{t_{k}} \vert } {\Delta _{t_{k}}} ^{2}+ d^{2} \Delta _{t_{k}}\biggr)\mathbf{E} \vert \widetilde{Z}_{k} \vert ^{2}. \end{aligned}$$

Bounding \(2|\widetilde{X}_{k}||\widetilde{Z}_{k}|\) by \(| \widetilde{X}_{k}|^{2} + |\widetilde{Z}_{k}|^{2}\), we obtain

$$\begin{aligned} \mathbf{E} \vert \widetilde{X}_{k+1} \vert ^{2} &\leq \biggl(\frac{1 + c^{2}\Delta _{t _{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}} \bigl(1 + \vert b \vert \Delta _{t_{k}} \bigr) + \frac{ \vert dc \vert }{ \vert 1 - a \Delta _{t_{k}} \vert } {\Delta _{t_{k}}}\biggr)\mathbf{E} \vert \widetilde{X}_{k} \vert ^{2} \\ &\quad {}+ \biggl(\frac{1 + c^{2}\Delta _{t_{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}} \vert b \vert {\Delta _{t_{k}}}\bigl(1 + \vert b \vert \Delta _{t_{k}}\bigr) +\frac{ \vert dc \vert {\Delta _{t_{k}}}(1 + 2 \vert b \vert \Delta _{t_{k}})}{ \vert 1 - a \Delta _{t_{k}} \vert } + d^{2}\Delta _{t_{k}}\biggr) \mathbf{E} \vert \widetilde{Z}_{k} \vert ^{2}, \end{aligned}$$

we can rearrange the relation above as follows:

$$\begin{aligned} \mathbf{E} \vert \widetilde{X}_{k+1} \vert ^{2} &\leq \biggl(\frac{1 + c^{2} \Delta _{t_{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}} \bigl(1 + 2 \vert b \vert \Delta _{t_{k}}+b ^{2}{\Delta _{t_{k}}}^{2} \bigr)+ 2\frac{ \vert dc \vert {\Delta _{t_{k}}}(1 + \vert b \vert {\Delta _{t _{k}}})}{ \vert 1 - a \Delta _{t_{k}} \vert } + d^{2}\Delta _{t_{k}} \biggr) \\ &\quad {}\times \max \bigl\{ \mathbf{E} \vert \widetilde{X}_{k} \vert ^{2}, \mathbf{E} \vert \widetilde{Z}_{k} \vert ^{2}\bigr\} . \end{aligned}$$

Note that, based on what was discussed on the mesh points, \(t_{k} - \tau _{k} \in \varLambda _{2}\), and so we can replace \(\widetilde{Z}_{k}\) with \(\widetilde{X}_{m}\), where \(t_{m} = t_{k} - \tau _{k}\). Hence

$$ \mathbf{E} \vert \widetilde{X}_{k+1} \vert ^{2} \leq \mathbf{M}(a, b, c, d, {\Delta _{t _{k}}})\cdot \max \bigl\{ \mathbf{E} \vert \widetilde{X}_{k} \vert ^{2}, \mathbf{E} \vert \widetilde{X}_{m} \vert ^{2}\bigr\} , $$

where

$$ \mathbf{M}(a, b, c, d, \Delta _{t_{k}}) = \frac{1 + c^{2}\Delta _{t_{k}}}{ \vert 1 - a \Delta _{t_{k}} \vert ^{2}} \bigl(1 + 2 \vert b \vert \Delta _{t _{k}}+b^{2}{\Delta _{t_{k}}}^{2} \bigr)+ 2\frac{ \vert dc \vert {\Delta _{t_{k}}}(1 + \vert b \vert {\Delta _{t_{k}}})}{ \vert 1 - a \Delta _{t_{k}} \vert } + d^{2} \Delta _{t_{k}}. $$

Obviously, in order to have the desired result, we have to impose

$$ \mathbf{M}(a, b, c, d, \Delta _{t_{k}}) < 1, $$
(4.6)

that is, \({\Delta _{t_{k}}}\) must be selected such that M is strictly smaller than 1. Relation (4.6) yields

$$\begin{aligned} &\bigl(1 + c^{2}\Delta _{t_{k}}\bigr) \bigl(1 + 2 \vert b \vert \Delta _{t_{k}}+b^{2}{\Delta _{t _{k}}}^{2} \bigr)+ 2 \vert dc \vert {\Delta _{t_{k}}}\bigl(1 + \vert b \vert {\Delta _{t_{k}}}\bigr) \vert 1 - a \Delta _{t_{k}} \vert \\ &\quad {} + d^{2}\Delta _{t_{k}} \vert 1 - a\Delta _{t_{k}} \vert ^{2} < \vert 1 - a\Delta _{t _{k}} \vert ^{2}, \end{aligned}$$

and then

$$\begin{aligned} &\bigl( (ad)^{2} - 2a \vert bcd \vert + (bc)^{2} \bigr) {\Delta _{t_{k}}}^{2} + \bigl(2\bigl( \vert c \vert + \vert d \vert \bigr) \bigl( \vert cb \vert -a \vert d \vert \bigr)+b ^{2}-a^{2} \bigr){\Delta _{t_{k}}} \\ &\quad {}+ \bigl(2a+2 \vert b \vert +\bigl( \vert c \vert + \vert d \vert \bigr)^{2} \bigr) < 0, \end{aligned}$$

which is a quadratic inequality in the stepsize and can be written as

$$ A(a,b,c,d){\Delta }^{2} + B(a,b,c,d){\Delta } + C(a,b,c,d) < 0, $$
(4.7)

with

$$\begin{aligned}& A(a,b,c,d ) = (ad)^{2} -2a \vert bcd \vert + (bc)^{2}, \\& B(a,b,c,d) = 2\bigl( \vert c \vert + \vert d \vert \bigr) \bigl( \vert cb \vert -a \vert d \vert \bigr)+b^{2}-a^{2}, \\& C(a,b,c,d) =2a+2 \vert b \vert +\bigl( \vert c \vert + \vert d \vert \bigr)^{2}. \end{aligned}$$

We define

$$ J(a,b,c,d,\Delta ) = A(a,b,c,d){\Delta }^{2} + B(a,b,c,d){\Delta } + C(a,b,c,d). $$

Here, we first consider

$$ J(a,b,c,d,\Delta )=0, $$
(4.8)

and compute its discriminant \(\mathcal{D}\). Obviously, \(A(a, b, c, d) \geq 0\), and by condition (4.4) we have

$$ C(a,b,c,d)< 0. $$

It follows that \(\mathcal{D} \geq 0\) with

$$ \mathcal{D} = B(a,b,c,d)^{2} - 4 A(a,b,c,d ) C(a,b,c,d). $$

If \(A(a, b, c, d) > 0\), then the discriminant \(\mathcal{D}\) is positive, and we have two distinct roots \(h_{1}(a, b, c, d)\) and \(h_{2}(a,b,c,d)\). Furthermore, by the relation between the roots of a polynomial and its coefficients, we get

$$ h_{1}(a, b, c, d) \cdot h_{2}(a,b,c,d) = \frac{C(a,b,c,d)}{A(a,b,c,d )}< 0. $$

Hence \(h_{1}(a, b, c, d)\) and \(h_{2}(a,b,c,d)\) have opposite signs. Without loss of generality, assume that \(h_{1}(a, b, c, d) < h_{2}(a,b,c,d)\); then (4.7) is satisfied for all \(\Delta _{t_{k}} \in (h_{1}(a, b, c, d), h_{2}(a,b,c,d))\). Hence, if the stepsize is taken in the interval \((0, h_{2}(a,b,c,d))\), then the stability of the scheme is guaranteed. If \(A(a, b, c, d) = 0\), then \(B(a,b,c,d) = b^{2} - a^{2}\), and by condition (4.4) we have \(b^{2} < a^{2}\), so \(B(a,b,c,d)<0\). In this case, relation (4.7) is satisfied for all stepsizes, and so unconditional stability arises. Thus, if \(\Delta _{t_{k}} \in (0, \bar{h})\) for all k, where \(\bar{h} = \infty \) or \(\bar{h} = h_{2}(a,b,c,d)\), scheme (4.5) is asymptotically mean-square stable. □

Remark 4.7

We can prove Theorem 4.6 using Theorem 4.4 by setting \(\lambda _{1} = -a -\frac{|b|}{2}\), \(\lambda _{2} = \frac{|b|}{2}\), \(\beta _{0} = |c|^{2}+|c||d|\) and \(\beta _{1} = |d|^{2}+|c||d|\). By (4.4), we have \(2\lambda _{1}>2\lambda _{2}+ \beta _{0} +\beta _{1}\) and \(\beta _{0} \lambda _{2} + \beta _{1}\lambda _{1} \neq 0\). By Theorem 4.4, if we take the stepsize in \((0, h ^{*} )\), \(h^{*}= \frac{-2a-2|b|-(|c|+|d|)^{2}}{|b|(|c|^{2}+|cd|)+(|d|^{2}+|cd|)(-2a-|b|)}\), then scheme (4.5) is mean-square stable. Thus, if \(\Delta _{t_{k}} \in (0, {\tilde{h}})\) for all k, then scheme (4.5) is mean-square stable, where \({\tilde{h}} = \max (h ^{*}, \bar{h})\) and \(\bar{h}\) was obtained in the proof of Theorem 4.6.
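For instance, for the coefficients of Example 5.1 below (a = −3, b = 0.5, c = d = 1) both bounds can be evaluated explicitly. The following sketch computes \(h_{2}\) as the positive root of the quadratic from the proof of Theorem 4.6 and \(h^{*}\) from Remark 4.7, reusing max_stable_stepsize from the sketch after Theorem 4.4; all names are ours, and these are only the sufficient bounds of the theorems.

```python
import numpy as np

def stable_stepsize_bounds(a, b, c, d):
    """h2 from Theorem 4.6 (positive root of A*h^2 + B*h + C = 0)
    and h* from Remark 4.7, assuming condition (4.4) holds."""
    A = (a * d) ** 2 - 2.0 * a * abs(b * c * d) + (b * c) ** 2
    B = 2.0 * (abs(c) + abs(d)) * (abs(c * b) - a * abs(d)) + b**2 - a**2
    C = 2.0 * a + 2.0 * abs(b) + (abs(c) + abs(d)) ** 2  # < 0 under (4.4)
    h2 = np.inf if A == 0.0 else (-B + np.sqrt(B**2 - 4.0 * A * C)) / (2.0 * A)
    lam1, lam2 = -a - abs(b) / 2.0, abs(b) / 2.0
    beta0, beta1 = c**2 + abs(c * d), d**2 + abs(c * d)
    return h2, max_stable_stepsize(lam1, lam2, beta0, beta1)

# Example 5.1 coefficients: returns (1/7, 1/12)
print(stable_stepsize_bounds(-3.0, 0.5, 1.0, 1.0))
```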

5 Simulation experiments

In this section, we examine the accuracy and efficiency of the numerical scheme on some test problems. As we know, stochastic models possess a probabilistic aspect in addition to the deterministic one. Accordingly, the noise process must be properly simulated in order to obtain an efficient numerical scheme. In this work, the Wiener process models the noise, and one must take care when generating the Brownian motion so that the correct paths are followed. In particular, owing to the nature of the delay, some points which do not belong to the partition become evident during the implementation of the scheme and have to be simulated. For this purpose, given the Markov property of the Wiener process, one can sample \(W_{t}\) from the Brownian bridge between the known values \(W_{t_{1}}\) and \(W_{t_{2}}\), \(t_{1} < t < t_{2}\). We call \(\Delta W_{t_{i}} = W_{t_{i+1}} - W_{t_{i}}\) the Brownian increment and \(\Delta t_{i} = t_{i+1} - t_{i}\) the time increment. Besides, one very challenging point in the study of stochastic delay differential equations is the non-availability of the exact solution of most test problems. So one has to obtain the 'exact' solution by discretising the equation on a fine mesh. Note that, because of computational and round-off errors, an excessively fine partition is not necessarily the right choice; thus, the mesh must be chosen with care. We begin with a discrete delay.
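Before turning to the examples, here is a minimal sketch of the Brownian-bridge sampling just described (assuming numpy; the function name is ours):

```python
import numpy as np

def brownian_bridge(t, t1, W1, t2, W2, rng):
    """Sample W(t), t1 < t < t2, conditionally on W(t1) = W1 and
    W(t2) = W2: linear interpolation of the mean plus bridge noise."""
    mean = W1 + (t - t1) / (t2 - t1) * (W2 - W1)
    var = (t - t1) * (t2 - t) / (t2 - t1)   # conditional variance
    return mean + np.sqrt(var) * rng.standard_normal()
```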

Example 5.1

$$ \textstyle\begin{cases} dX(t) = (-3 X(t) + 0.5 X(t - 1) ) \,dt + ( X(t) + X(t - 1) ) \,dW(t), \quad t \in [0, T], \\ \eta (t) = 1 + t,\quad t \in [-1, 0]. \end{cases} $$

Here \(\tau = 1\), and the parameters have been chosen with the numerical stability condition (4.4) in mind (indeed \(a = -3 < -|b| - (|c|+|d|)^{2}/2 = -2.5\)); this condition will also be respected in the next two examples. Since the delay is constant, for every \(t_{i}\) the point \(t_{i}- 1\) lies in a subinterval away from the current one, and accordingly no overlapping occurs; we postpone this discussion to the next example. Consequently, the relevant programmes are easily implemented, which leads us to Table 1, Fig. 1 and Fig. 5.
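For illustration, a hypothetical driver for this example using the sketch given after (2.6) (the seed and stepsize here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t, X = split_step_em_constant_delay(
    a=lambda x, y: -3.0 * x + 0.5 * y,   # drift of Example 5.1
    b=lambda x, y: x + y,                # diffusion of Example 5.1
    eta=lambda s: 1.0 + s,               # initial process on [-1, 0]
    tau=1.0, t0=0.0, T=1.0, h=2.0**-6, rng=rng)
```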

As a second problem, we consider the time-dependent case with the delay function \((1 + t^{2})^{-1}\), which decreases with time.

Example 5.2

$$ \textstyle\begin{cases} dX(t) = (-4X(t) + X(t - \frac{1}{1 + t^{2}}) ) \,dt + (X(t) + X(t - \frac{1}{1+t^{2}}) ) \,dW(t),\quad t \in [0, T], \\ \eta (t) = 1,\quad t \in [-1, 0]. \end{cases} $$

Since \((1 + t^{2})^{-1}\) decreases in t, the implementation of the scheme is expected to be similar to case (L1), with a slight difference. Here, unlike the constant-delay case, we face overlapping. We use the word overlapping if, during the implementation of the scheme, there exists a point \(t_{i+1}\) such that \(t_{i+1} - \tau _{i+1} \in [t_{i}, t_{i+1})\) separates the latter interval into \([t_{i}, t_{i+1} - \tau _{i+1} )\) and \([t_{i+1} - \tau _{i+1}, t_{i+1})\). In this case, after approximating the process at \(t_{i+1} - \tau _{i+1}\), recomputation of \(\widetilde{X}(t_{i+1})\) is required because the scheme is a one-step method. So the generation of each path is more time-consuming than in the previous example. Furthermore, the stepsize \(2^{-12}\) is selected to create the fine partition used to simulate the exact solution in these two examples.

Example 5.3

$$ \textstyle\begin{cases} dX(t) = ( -5 X(t) + X(t - \frac{ \vert X(t) \vert }{c+ \vert X(t) \vert }) ) \,dt \\ \hphantom{dX(t) ={}}{}+ (0.5 X(t) + X(t - \frac{ \vert X(t) \vert }{c+ \vert X(t) \vert }) ) \,dW(t), \quad t \in [-1, T], \\ X(t) = 0.5,\quad t \leq -1, \end{cases} $$

where \(\tau (t, X(t))= \frac{|X(t)|}{c+|X(t)|}\), and let c be a positive constant.

Here, since the delay term depends on the noise process, the simulation is not analogous to the two previous ones. Notice that for every point \(t_{i}\), in addition to \(\widetilde{X}_{i}\), the quantity \(\widetilde{Z}_{i}\) has to be computed, namely the approximation of the process at \(t_{i} - \tau _{i}\); if we denote this point by \({t}_{mi}\), then the value \(\widetilde{Z}_{mi}\) has to be computed as well. If we set \(t_{ki} = {t}_{mi} - \tau _{mi}\), then an important question is whether the quantity \(\widetilde{Z}_{ki}\) has to be computed or not. Since in the first two cases of the delay term we have \({t}_{mj} > {t}_{mi} > t_{ki}\) for every \(j > i\), we do not need the approximation \(\widetilde{Z}_{ki}\); note that τ is a decreasing function in Example 5.2. Hence, in order to avoid nested calculations, we can assign a constant value α to \(\widetilde{Z}_{ki}\). But in the state-dependent case, as we observed in our runs, going back into the past is not necessarily consecutive, unlike the first two cases. Hence, we might use \(\widetilde{Z}_{ki}\) in subsequent computations, and so it has to be corrected. Assume that nested calculations are required at the points \(t_{ki_{1}} < t_{ki_{2}}<\cdots<t _{ki_{l}} <t_{ki}\); naturally, they have to be stopped after finitely many levels. For this, we correct \(\widetilde{Z}_{ki_{1}}\) by averaging every \(\widetilde{Z}_{j}\) with \(\widetilde{Z}_{j} \neq \alpha \) for \(j < ki_{1}\). Moreover, the overlapping problem exists here too; thus, as mentioned in the previous example, we must execute the scheme carefully. Since the state-dependent case involves more computational error, too small a stepsize may lead us in a wrong direction, so our reference partition is made with the stepsize \(2^{-11}\). Needless to say, the time and computing costs are considerably higher than in the two previous cases.

5.1 Comment on results

Table 1 Computational error at endpoint \(T = 1\) for Example 5.1
Figure 1

The rate of strong convergence for Example 5.1 with \(T = 1\). The logarithm of \(\epsilon _{T}\) is denoted by the green line with asterisk which is parallel with the dashed red line by slope \(\frac{1}{2}\)

Having implemented the numerical scheme for the three examples above, we now comment on the numerical results. To simplify the notation, let \(X^{e}\) and \(X^{a}\) stand for the exact and numerical solutions, respectively. The numerical implementations have been carried out with various input stepsizes up to the end of the interval, and the exact solution has been simulated with the small stepsize mentioned previously, over N discretised Brownian paths. Then, in order to estimate the error \(\epsilon _{T} = \mathbf{E}{|X^{e}(T) - X^{a}(T)|}\), we use in practice the sample mean over the individual paths, \(\frac{1}{N} \sum_{i=1}^{N} |{X_{i}} ^{e}(T) - {X_{i}}^{a}(T)|\), provided N is sufficiently large; here, \(N = 10\text{,}000\). All four Tables 1, 2, 3 and 4 reveal the reasonable behaviour of the computational error as the stepsize shrinks. The smaller stepsize improves the approximation, which indicates the significant response of the scheme. In order to indicate the speed of convergence of the scheme, we plot the logarithm of the global computational error versus that of the stepsize; to do so, the command loglog in Matlab, which plots on logarithmic axes, has been applied, providing Figs. 1, 2, 3 and 4. We observe that the obtained curves are parallel to those of the functions \(x^{\frac{1}{2}}\) and \(x^{\frac{1}{4}}\), which is in agreement with the theoretical results. Remember that we must pay particular attention to (4.4) in order to preserve numerical stability; the parameters of the stability experiments are \(T= 20\) and \(\Delta t_{i} = 0.2\). Note that Figs. 5, 6, 7 and 8 present the mean of the squared approximation over N realisations, \(\frac{1}{N} \sum_{i=1}^{N} |X^{a} _{i}(t_{n})|^{2}\), which illustrates the stability: one can see that this average converges to zero as \(t_{n}\) becomes larger. By means of these results, we conclude that scheme (2.2)–(2.4) is a well-developed scheme for problem (1.1).
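A sketch of how the stability averages in Figs. 5–8 can be reproduced for Example 5.1, reusing split_step_em_constant_delay from Sect. 2 (the number of paths is reduced here for speed; the figures use 10,000):

```python
import numpy as np

n_paths, h, T = 2000, 0.2, 20.0
rng = np.random.default_rng(1)
acc = None
for _ in range(n_paths):
    _, X = split_step_em_constant_delay(
        a=lambda x, y: -3.0 * x + 0.5 * y,
        b=lambda x, y: x + y,
        eta=lambda s: 1.0 + s,
        tau=1.0, t0=0.0, T=T, h=h, rng=rng)
    acc = X**2 if acc is None else acc + X**2
mean_square = acc / n_paths  # empirical E|X(t_n)|^2; should decay to zero
```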

Figure 2

The rate of strong convergence for Example 5.2 with \(T = 1\). The logarithm of \(\epsilon _{T}\) is denoted by the green line with asterisk which is parallel with the dashed red line by slope \(\frac{1}{2}\)

Figure 3

The rate of strong convergence for Example 5.3 with \(T=1\), \(c = 0.01\). The logarithm of \(\epsilon _{T}\) is denoted by the green line with asterisk which is parallel with the dashed red line by slope \(\frac{1}{4}\)

Figure 4

The rate of strong convergence for Example 5.3 with \(T = 1\), \(c = 1\). The logarithm of \(\epsilon _{T}\) is denoted by the green line with asterisk which is parallel with the dashed red line by slope \(\frac{1}{4}\)

Figure 5

The average of numerical solution over 10,000 discretized Brownian paths with \(h = 0.2\) and \(T = 20\) for Example 5.1

Figure 6

The average of numerical solution over 10,000 discretized Brownian paths with \(h = 0.2\) and \(T = 20\) for Example 5.2

Figure 7

The average of numerical solution over 10,000 discretized Brownian paths with \(h = 0.2\), \(T = 20\) and \(c = 0.01\) for Example 5.3

Figure 8

The average of numerical solution over 10,000 discretized Brownian paths with \(h = 0.2\), \(T = 20\) and \(c = 1\) for Example 5.3

Table 2 Computational error at endpoint \(T = 1\) for Example 5.2
Table 3 Computational error at endpoint \(T = 1\) for Example 5.3 with \(c = 0.01\)
Table 4 Computational error at endpoint \(T = 1\) for Example 5.3 with \(c = 1\)

6 Conclusion

In this paper, with an emphasis on numerics, stochastic differential equations with a variety of delay terms were investigated. A new continuous split-step scheme based on the Euler–Maruyama method, with a non-uniform partition and no restriction on the stepsize, was introduced, and its convergence in the \(\mathcal{L}^{2}\) sense was established. As expected, because the interpolation used is the same as the underlying scheme, the rate of mean-square convergence is \(1/2\) for the first two delay types and \(1/4\) for the last one. The asymptotic mean-square stability of the scheme was also probed. Stability and convergence, the two basic desirable properties, confirm the efficiency of the scheme. More general test equations under weaker conditions, as well as other notions of stability, such as almost sure asymptotic (exponential) stability in the state-dependent case, can be investigated in the future.