1 Introduction

Deterministic differential models require that all parameters involved be known exactly, whereas in practice one often has only incomplete information on parameter values: they may fluctuate due to external or internal 'noise', which is random. It is therefore necessary to move from deterministic problems to stochastic ones. Stochastic differential equations (SDEs) play a prominent role in a range of application areas, such as biology, chemistry, epidemiology, mechanics, microelectronics and economics [1, 2]. For SDEs, roughly speaking, there are two major classes of numerical methods, explicit methods [3, 4] and implicit methods [5, 6]. One can refer to [7] for an overview of the numerical solution of SDEs.

In nature, there are many processes which involve time delays; that is, the future state of the system depends on part of its past history. Stochastic delay differential equations (SDDEs), which generalize both deterministic delay differential equations (DDEs) and stochastic ordinary differential equations (SODEs), are better suited to modelling such systems. To give the reader a general insight into the applications of SDDEs, we briefly introduce the following cell population growth model:

$$ \left \{ \begin{array}{@{}l} dx(t)=(\rho_{0}x(t)+\rho_{1}x(t-\tau)) dt+\beta dw(t),\quad t\geq0,\\ x(t)=\Psi(t),\quad {-}\tau\leq t< 0. \end{array} \right . $$
(1.1)

Assume that these biological systems operate in a noisy environment whose overall noise rate is distributed like white noise \(\beta dw(t)\). The constant τ denotes the average cell-division time. Then the population \(x(t)\) is a random process whose growth can be described by (1.1). SDDEs arise frequently as stochastic models in applied research, which has led to an increasing interest in their investigation. For additional examples one can refer to applications in neural control mechanisms: neurological diseases [8], human postural sway [9] and the pupil light reflex [10]. Since explicit solutions are rarely available for SDDEs, numerical approximations [11, 12] become increasingly important in many applications. Effective numerical methods are clearly the key ingredient for a viable implementation and deserve thorough investigation. In the present work we make an effort in this direction and propose a new efficient scheme.

In this paper, we construct a Chebyshev spectral collocation method to solve SDDEs of the form

$$ \left \{ \begin{array}{@{}l} dx(t)=f(x(t),x(t-\tau)) dt+g(x(t)) dw(t),\quad t\in[0,T],\\ x(t)=\Psi(t), \quad t\in[-\tau,0], \end{array} \right . $$
(1.2)

where \(0< T<\infty\), τ is a positive fixed delay, and \(f:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) and \(g:\mathbb{R}\rightarrow \mathbb{R}\) are assumed to be continuous. \(w(t)\) is a one-dimensional standard Wiener process defined on the complete probability space \((\Omega,\mathcal{F}, P)\) with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (that is, it is increasing and right continuous, and \(\mathcal{F}_{0}\) contains all P-null sets). \(\Psi(t)\) is an \(\mathcal{F}_{0}\)-measurable \(C([-\tau,0], \mathbb{R})\)-valued random process such that \(\mathbb{E}\|\Psi\|^{2}<\infty\), where \(C([-\tau,0], \mathbb{R})\) is the Banach space of all continuous paths from \([-\tau,0]\) to \(\mathbb{R}\) equipped with the supremum norm.

Our approach is derived by constructing an interpolating polynomial of degree N based on a spectral collocation method and applying the differentiation matrix to approximate the differential operator arising in SDDEs. The interpolating polynomial of degree N is built by using the Chebyshev-Gauss-Lobatto (C-G-L) points as interpolation points and the Lagrange polynomials as trial functions. To the best of our knowledge, such techniques have not been utilized in solving SDDEs. Finally, we mention that the idea of spectral collocation was previously employed in [13, 14] to construct methods for SODEs and DDEs; in particular, the authors of [13] propose a spectral collocation method for SODEs. Inspired by that idea, we construct the Chebyshev spectral collocation method for SDDEs.

This paper is organized as follows. In the next section, some fundamental background is reviewed and the derivation of the Chebyshev spectral collocation method for solving SDDEs is presented. Section 3 is devoted to numerical experiments that confirm the accuracy and effectiveness of the method. Brief conclusions are given at the end of the article.

2 Construction of the Chebyshev spectral collocation method

2.1 The Lamperti-type transformation

In this subsection, we introduce the Lamperti-type transformation [15], which transforms (1.2) into an equation with constant diffusion coefficient. For equation (1.2), set

$$ y(t)=F\bigl(x(t)\bigr)=\int^{x(t)}_{u} \frac{1}{g(s)}\,ds, $$
(2.1)

where u is an arbitrary value in the state space of \(x(t)\). For \(0\leq t\leq T\), applying the Itô formula yields

$$ \begin{aligned}[b] dy(t)&= \biggl( \frac{f (x(t),x(t-\tau) )}{g(x(t))}- \frac {1}{2}g'\bigl(x(t)\bigr) \biggr) dt+dw(t)\\ &= \biggl( \frac{f (F^{-1}(y(t)),F^{-1}(y(t-\tau)) )}{g (F^{-1}(y(t)) )}-\frac{1}{2}g' \bigl(F^{-1}\bigl(y(t)\bigr) \bigr) \biggr) dt+dw(t). \end{aligned} $$
(2.2)

Let

$$ b\bigl(y(t),y(t-\tau)\bigr)= \frac{f(F^{-1}(y(t)),F^{-1}(y(t-\tau)))}{g (F^{-1}(y(t)) )}-\frac{1}{2}g' \bigl(F^{-1}\bigl(y(t)\bigr) \bigr). $$
(2.3)

Hence, equation (2.2) can be rewritten simply as

$$ dy(t)=b\bigl(y(t),y(t-\tau)\bigr) dt+dw(t), \quad 0\leq t\leq T. $$
(2.4)

By means of the Lamperti transform, equation (1.2) is thus converted into (2.4). Therefore, here and hereafter, we only consider SDDEs of the form

$$ \left \{ \begin{array}{@{}l} dx(t)=b(x(t),x(t-\tau)) dt+dw(t), \quad t\in[0,T],\\ x(t)=\Psi(t),\quad t\in[-\tau,0]. \end{array} \right . $$
(2.5)
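As a simple illustration of the transformation (with a hypothetical diffusion coefficient not taken from the examples below), consider multiplicative noise \(g(x)=\sigma x\) with \(\sigma>0\) on the state space \(x>0\). Choosing \(u=1\) in (2.1) gives

$$ y=F(x)=\int^{x}_{1}\frac{ds}{\sigma s}=\frac{\ln x}{\sigma},\qquad x=F^{-1}(y)=e^{\sigma y}, $$

and since \(g'(x)=\sigma\), the transformed drift (2.3) reads

$$ b\bigl(y(t),y(t-\tau)\bigr)=\frac{f \bigl(e^{\sigma y(t)},e^{\sigma y(t-\tau)} \bigr)}{\sigma e^{\sigma y(t)}}-\frac{\sigma}{2}, $$

so that (2.4) indeed has a unit diffusion coefficient.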

2.2 Review on Chebyshev interpolation polynomials

Chebyshev polynomials are a well-known family of orthogonal polynomials on the interval \([-1,1]\). Among other properties, they behave very well in the approximation of functions, and they therefore appear frequently in several fields of mathematics, physics and engineering. In this subsection, we recall the Chebyshev interpolation polynomial for a given function \(x(t)\in C^{k}(-1,1)\), where \(C^{k}\) denotes the space of all functions whose first k derivatives are continuous on the interval \((-1,1)\). More details can be found in [16].

Let \(T_{k}(t)=\cos(k\arccos(t))\) be the Chebyshev polynomial of the first kind of degree k and choose the \(N+1\) Chebyshev-Gauss-Lobatto (C-G-L) nodes

$$ t_{j}=\cos\frac{(N-j)\pi}{N},\quad j=0,1,\ldots, N. $$
(2.6)

Define the Lagrange basis functions as follows:

$$ l_{k}(t)=\frac{\omega(t)}{(t-t_{k})\omega'(t_{k})},\quad k=0,1,\ldots,N, $$
(2.7)

where \(\omega(t)\) is given by

$$\omega(t)=(t-t_{0}) (t-t_{1})\cdots(t-t_{N}). $$

It is noted that for \(k,j=0,1,\ldots, N\) the Lagrange interpolating basis functions have the Kronecker property

$$l_{k}(t_{j})=\left \{ \begin{array}{@{}l@{\quad}l} 0, & j\neq k,\\ 1, & j=k. \end{array} \right . $$

To sum up, the Chebyshev interpolation polynomial for a function \(x(t)\) can be given by

$$ P_{N}(t)=\sum_{k=0}^{N}x_{k}l_{k}(t), $$
(2.8)

where \(x_{k}:=x(t_{k})\), \(k=0,1,\ldots,N\). Differentiating (2.8) and evaluating the result at the C-G-L nodes \(t_{j}\), \(j=0,1,\ldots, N\), one gets

$$ \left ( \begin{array}{@{}c@{}} P_{N}'(t_{0})\\ P_{N}'(t_{1})\\ \vdots\\ P_{N}'(t_{N}) \end{array} \right ) =\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} l_{0}'(t_{0})&l_{1}'(t_{0})&\cdots&l_{N}'(t_{0})\\ l_{0}'(t_{1})&l_{1}'(t_{1})&\cdots&l_{N}'(t_{1})\\ \cdots&\cdots&\cdots&\cdots\\ l_{0}'(t_{N})&l_{1}'(t_{N})&\cdots&l_{N}'(t_{N}) \end{array} \right ) \left ( \begin{array}{@{}c@{}} x_{0}\\ x_{1}\\ \vdots\\ x_{N} \end{array} \right ). $$
(2.9)

Define the differentiation matrix by

$$ D_{N}=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} l_{0}'(t_{0})&l_{1}'(t_{0})&\cdots&l_{N}'(t_{0})\\ l_{0}'(t_{1})&l_{1}'(t_{1})&\cdots&l_{N}'(t_{1})\\ \cdots&\cdots&\cdots&\cdots\\ l_{0}'(t_{N})&l_{1}'(t_{N})&\cdots&l_{N}'(t_{N}) \end{array} \right ). $$
(2.10)

Remark 2.1

The differentiation matrix \(D_{N}\) does not depend on the problem itself but only on the C-G-L nodes. Therefore, the differentiation matrix can be computed once and for all before the problem is set up.

Remark 2.2

Differentiation matrices are derived from the spectral collocation method for solving differential equations of boundary value type; more details can be found in [16, 17].

Remark 2.3

([13])

The differentiation matrix \(D_{N}\) itself is singular.
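To make the construction concrete, the following Python sketch assembles the C-G-L nodes (2.6) and the matrix \(D_{N}\) of (2.10) from the standard entry formulas, using the negative-sum trick for the diagonal (see, e.g., [16]); the function name `cheb_diff_matrix` is ours and not taken from the references.

```python
import numpy as np

def cheb_diff_matrix(N):
    """Return (D, t): the (N+1)x(N+1) differentiation matrix D_N of (2.10) and the
    C-G-L nodes t_j = cos((N-j)*pi/N), j = 0,...,N, ordered from -1 to 1 (N >= 1)."""
    j = np.arange(N + 1)
    t = np.cos((N - j) * np.pi / N)                 # C-G-L nodes, eq. (2.6)
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    dt = t[:, None] - t[None, :] + np.eye(N + 1)    # t_i - t_j, with 1 on the diagonal
    D = np.outer(c, 1.0 / c) / dt                   # off-diagonal entries l_j'(t_i)
    D -= np.diag(D.sum(axis=1))                     # diagonal so that each row of D_N sums to zero
    return D, t

D, t = cheb_diff_matrix(8)
print(np.max(np.abs(D @ t**2 - 2 * t)))   # differentiates t^2 exactly (up to rounding)
print(np.linalg.matrix_rank(D))           # N = 8: D_N itself is singular (Remark 2.3)
print(np.linalg.cond(D[1:, 1:]))          # finite: the reduced matrix used later in (2.18) is invertible
```

Note that the matrix depends only on N, in line with Remark 2.1, so it can be precomputed once and reused for any problem on the same nodes.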

2.3 Chebyshev spectral collocation method for DDEs

The spectral method is one of the three main technologies for the numerical solution of partial differential equations; the other two are finite difference methods (FDMs) and finite element methods (FEMs). Spectral methods based on Chebyshev polynomials as basis functions for the numerical solution of differential equations [16–18] with smooth coefficients and simple domains have been applied successfully by many authors. Moreover, they can often achieve ten digits of accuracy where FDMs and FEMs would deliver two or three; an interested reader can refer to [19, 20]. Spectral methods were later developed to solve neutral differential equations [14] and special DDEs [18]. In this subsection, we introduce the spectral collocation method for DDEs.

Consider the DDEs

$$ \left \{ \begin{array}{@{}l} dx(t)=f(x(t),x(t-\tau)) dt, \quad t\in[0,T],\\ x(t)=\phi(t), \quad t\in[-\tau,0]. \end{array} \right . $$
(2.11)

Letting \(t=\frac{T}{2}(1+s)\) and \(x(t)=x (\frac{T}{2}(1+s) ):=y(s)\), one can transform equation (2.11) into the following form:

$$ \left \{ \begin{array}{@{}l} dy(s)=\frac{T}{2} (f(y(s),y(s-\frac{2\tau}{T}) )) ds,\quad s\in [-1,1],\\ y(s)=\varphi(s),\quad s\in[-\frac{2\tau}{T}-1,-1], \end{array} \right . $$
(2.12)

where \(y(s)=x(\frac{T}{2}(1+s))=\phi(\frac{T}{2}(1+s)):=\varphi(s)\), \(s\in[-\frac{2\tau}{T}-1,-1]\). This transformation shifts the interval on which DDE (2.11) is solved from \([0,T]\) onto \([-1,1]\). Therefore, in the following we focus mainly on DDEs of the form (with the rescaled delay again denoted by τ):

$$ \left \{ \begin{array}{@{}l} dx(t)=f(x(t),x(t-\tau)) dt,\quad t\in[-1,1],\\ x(t)=\varphi(t), \quad t\in[-\tau-1,-1]. \end{array} \right . $$
(2.13)

Approximating the function \(x(t)\) on the left-hand side of (2.13) by the interpolation polynomial \(P_{N}(t)\) defined in (2.8) yields

$$d\sum_{k=0}^{N}x_{k}l_{k}(t) \approx f\bigl(x(t),x(t-\tau)\bigr) dt. $$

Substituting the C-G-L points \(t_{j}\), \(j=0,1,\ldots, N\), defined in (2.6) into the above equality and replacing the differential operator \(\frac{d}{dt}\) with the differentiation matrix \(D_{N}\), one can obtain

$$\begin{aligned} D_{N}X&=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} l_{0}'(t_{0})&l_{1}'(t_{0})&\cdots&l_{N}'(t_{0})\\ l_{0}'(t_{1})&l_{1}'(t_{1})&\cdots&l_{N}'(t_{1})\\ \cdots&\cdots&\cdots&\cdots\\ l_{0}'(t_{N})&l_{1}'(t_{N})&\cdots&l_{N}'(t_{N}) \end{array} \right ) \left ( \begin{array}{@{}c@{}} x_{0}\\ x_{1}\\ \vdots\\ x_{N} \end{array} \right ) \\ &=\left ( \begin{array}{@{}c@{}} f(x_{0}, x(t_{0}-\tau))\\ f(x_{1}, x(t_{1}-\tau))\\ \cdots\\ f(x_{N}, x(t_{N}-\tau)) \end{array} \right ). \end{aligned}$$
(2.14)

Setting

$$ X=\left ( \begin{array}{@{}c@{}} x_{0}\\ x_{1}\\ \vdots\\ x_{N} \end{array} \right ),\qquad F= \left ( \begin{array}{@{}c@{}} f(x_{0}, x(t_{0}-\tau))\\ f(x_{1}, x(t_{1}-\tau))\\ \vdots\\ f(x_{N},x(t_{N}-\tau)) \end{array} \right ), $$
(2.15)

one can obtain the discrete approximate equations for DDE (2.13):

$$ D_{N}X=F. $$
(2.16)

It is appropriate to emphasize that all elements of the vector \(X=(x_{0},x_{1},\ldots, x_{N})^{T}\) are unknown except the first element \(x_{0}\), which is determined by the condition \(x(t)=\varphi(t)\), \(t\in[-\tau-1,-1]\). Here and hereafter, we assume that \(x_{0}=0\); if it is not, we can apply the shift \(y(t)=x(t)-x_{0}\) to make the initial value vanish. Next, we explain the elements of the vector F. On the one hand, if \(t_{k}-\tau\leq-1\), the delayed value \(x(t_{k}-\tau)=\varphi(t_{k}-\tau)\) is known exactly from the initial condition \(x(t)=\varphi(t)\), \(t\in[-\tau-1,-1]\). On the other hand, if \(t_{k}-\tau> -1\), we apply the Chebyshev interpolation polynomial \(P_{N}(t_{k}-\tau)\) to approximate \(x(t_{k}-\tau)\). Therefore, the vector of delayed values is given by

$$\begin{aligned} \left ( \begin{array}{@{}c@{}} x(t_{0}-\tau)\\ \vdots\\ x(t_{m}-\tau)\\ x(t_{m+1}-\tau)\\ \vdots\\ x(t_{N}-\tau) \end{array} \right ) ={}&\left ( \begin{array}{@{}c@{}} \varphi(t_{0}-\tau)\\ \vdots\\ \varphi(t_{m}-\tau)\\ \sum_{k=0}^{N}x_{k}l_{k}(t_{m+1}-\tau)\\ \vdots\\ \sum_{k=0}^{N}x_{k}l_{k}(t_{N}-\tau) \end{array} \right ) \\ ={}&\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 0&0&\cdots&0\\ \cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&0\\ l_{0}(t_{m+1}-\tau)&l_{1}(t_{m+1}-\tau)&\cdots&l_{N}(t_{m+1}-\tau)\\ \cdots&\cdots&\cdots&\cdots\\ l_{0}(t_{N}-\tau)&l_{1}(t_{N}-\tau)&\cdots&l_{N}(t_{N}-\tau) \end{array} \right ) \left ( \begin{array}{@{}c@{}} x_{0}\\ \vdots\\ x_{m}\\ x_{m+1}\\ \vdots\\ x_{N} \end{array} \right ) \\ &{} + \left ( \begin{array}{@{}c@{}} \varphi(t_{0}-\tau)\\ \vdots\\ \varphi(t_{m}-\tau)\\ 0\\ \vdots\\ 0 \end{array} \right )=AX+d. \end{aligned}$$
(2.17)

For convenience, we denote by \(G(X,AX+d)\) the \((N+1)\)-dimensional vector with components \(G_{i}=f(x_{i},\varphi(t_{i}-\tau))\) for \(i=0,1,\ldots, m\) and \(G_{i}=f(x_{i},\sum_{k=0}^{N}x_{k}l_{k}(t_{i}-\tau))\) for \(i=m+1,\ldots, N\). Therefore, the vector F can be represented as \(F=G(X,AX+d)\).

Remark 2.4

Note that \(x_{0}\) is fixed at zero in the approximate equation (2.14). This implies that the first column of \(D_{N}\) has no effect (since it is multiplied by zero) and the first row has no effect either (since the collocation equation at \(t_{0}\) is not imposed). Therefore, by removing the first row and first column of \(D_{N}\), we get a new matrix, denoted by

$$ \tilde{D}_{N}=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{}} l_{1}'(t_{1})&\cdots&l_{N}'(t_{1})\\ \cdots&\cdots&\cdots\\ l_{1}'(t_{N})&\cdots&l_{N}'(t_{N}) \end{array} \right ). $$
(2.18)

Remark 2.3 shows that the differentiation matrix \(D_{N}\) itself is singular. However, \(\tilde{D}_{N}\) is invertible.

Remark 2.5

By removing the first row and the first column of A and the first element of the vectors X, d respectively, one can get

$$ \begin{aligned} &\tilde{A}=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 0&\cdots&0\\ \cdots&\cdots&\cdots\\ 0&\cdots&0\\ l_{1}(t_{m+1}-\tau)&\cdots&l_{N}(t_{m+1}-\tau)\\ \cdots&\cdots&\cdots\\ l_{1}(t_{N}-\tau)&\cdots&l_{N}(t_{N}-\tau) \end{array} \right ), \\ &\tilde{X}=\left ( \begin{array}{@{}c@{}} x_{1}\\ \vdots\\ x_{m}\\ x_{m+1}\\ \vdots\\ x_{N} \end{array} \right ),\qquad \tilde{d}=\left ( \begin{array}{@{}c@{}} \varphi(t_{1}-\tau)\\ \vdots\\ \varphi(t_{m}-\tau)\\ 0\\ \vdots\\ 0 \end{array} \right ). \end{aligned} $$
(2.19)

Therefore,

$$ \tilde{D}_{N}\tilde{X}=g(\tilde{X},\tilde{A}\tilde{X}+ \tilde{d}), $$
(2.20)

where \(\tilde{D}_{N}\), \(\tilde{A}\), \(\tilde{X}\), \(\tilde{d}\) are defined by (2.18) and (2.19), respectively, and \(g(\tilde{X},\tilde{A} \tilde{X}+\tilde{d})\) is an N-dimensional vector with components \(g_{i}=f(x_{i},\varphi(t_{i}-\tau))\) for \(i=1,\ldots, m\) and \(g_{i}=f(x_{i},\sum_{k=0}^{N}x_{k}l_{k}(t_{i}-\tau))\) for \(i=m+1,\ldots, N\). The solution vector \(\tilde{X}\) can be obtained by solving the discrete approximate equation (2.20).
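To illustrate how (2.20) can be solved in practice, here is a minimal Python sketch that assembles \(\tilde{D}_{N}\), \(\tilde{A}\) and \(\tilde{d}\) and solves the resulting nonlinear system with `scipy.optimize.fsolve`. It assumes the problem is already posed on \([-1,1]\) as in (2.13) with \(\varphi(-1)=0\) (so that \(x_{0}=0\)), and it reuses the `cheb_diff_matrix` helper sketched in Section 2.2; all other names are ours.

```python
import numpy as np
from scipy.optimize import fsolve

def lagrange_basis_at(t_nodes, s):
    """Values l_k(s), k = 0,...,N, of the Lagrange basis (2.7) at a single point s."""
    N = len(t_nodes) - 1
    L = np.ones(N + 1)
    for k in range(N + 1):
        for j in range(N + 1):
            if j != k:
                L[k] *= (s - t_nodes[j]) / (t_nodes[k] - t_nodes[j])
    return L

def solve_dde(f, phi, tau, N):
    """Collocation approximation of (2.13): dx = f(x(t), x(t - tau)) dt on [-1, 1],
    x(t) = phi(t) on [-tau - 1, -1], assuming phi(-1) = 0."""
    D, t = cheb_diff_matrix(N)                   # helper sketched in Section 2.2
    D_tilde = D[1:, 1:]                          # \tilde{D}_N, eq. (2.18)
    A = np.zeros((N, N))                         # \tilde{A}, eq. (2.19)
    d = np.zeros(N)                              # \tilde{d}, eq. (2.19)
    for i in range(1, N + 1):
        s = t[i] - tau
        if s <= -1.0:
            d[i - 1] = phi(s)                    # delayed value known from the history
        else:
            A[i - 1, :] = lagrange_basis_at(t, s)[1:]   # delayed value interpolated
    def residual(X):
        delayed = A @ X + d
        g = np.array([f(X[i], delayed[i]) for i in range(N)])
        return D_tilde @ X - g                   # eq. (2.20)
    X = fsolve(residual, np.zeros(N))
    return t, np.concatenate(([0.0], X))         # prepend the known value x_0 = 0

# usage: a simple linear DDE on [-1, 1] with delay 1 and history phi(t) = 1 + t
t, x = solve_dde(lambda u, u_tau: -0.9 * u + 1.0 * u_tau, lambda s: 1.0 + s, 1.0, 16)
print(x[-1])                                     # approximation of x(1)
```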

2.4 The Chebyshev spectral collocation method for SDDEs

In this subsection, we first state a theorem guaranteeing the existence and uniqueness of the exact solution of SDDE (1.2).

Theorem 2.6

([21, 22])

Assume that there exist positive constants \(L_{f,i}\), \(i=1,2\), and \(K_{f}\) such that both the functions f and g satisfy a uniform Lipschitz condition and a linear growth bound of the following form, for all \(\zeta_{1}, \zeta_{2}, \eta_{1},\eta_{2},\zeta,\eta\in \mathbb{R}\) and \(t\in[0,T]\):

$$\begin{aligned}& \bigl|f(\zeta_{1},\eta_{1})-f(\zeta_{2}, \eta_{2})\bigr|\leq L_{f,1}|\zeta_{1}-\zeta _{2}|+L_{f,2}|\eta_{1}-\eta_{2}|, \\& \bigl|f(\zeta,\eta)\bigr|^{2}\leq K_{f}\bigl(1+|\zeta|^{2}+| \eta|^{2}\bigr), \end{aligned}$$

and likewise for g with constants \(L_{g}\) and \(K_{g}\). Then there exists a path-wise unique strong solution to (1.2).

Consider the SDDEs

$$ \left \{ \begin{array}{@{}l} dx(t)=(f(x(t),x(t-\tau))) dt+dw(t), \quad t\geq0,\\ x(t)=\varphi(t), \quad t\in[-\tau,0]. \end{array} \right . $$
(2.21)

Assume that the coefficients in (1.2) satisfy the uniform Lipschitz condition and linear growth bound of Theorem 2.6, so that (1.2) has a path-wise unique strong solution. Since equation (2.21) is obtained from (1.2) by the Lamperti-type transformation, there also exists a path-wise unique strong solution to (2.21). Following the same lines as in Section 2.3, it is easy to obtain the approximate equations for SDDE (2.21):

$$ \tilde{D}_{N}\tilde{X}=g(\tilde{X},\tilde{A}\tilde{X}+\tilde{d})+ \tilde{D}_{N}w, $$
(2.22)

where the N-dimensional vector \(g(\tilde{X},\tilde{A}\tilde{X}+\tilde {d})\) has the components \(g_{i}=f(x_{i},\varphi(t_{i}-\tau))\) for \(i=1,\ldots , m\) and \(g_{i}=f(x_{i},\sum_{k=0}^{N}x_{k}l_{k}(t_{i}-\tau))\) for \(i=m+1,\ldots , N\), and w denotes the vector of values of the Wiener process at the collocation nodes \(t_{1},\ldots,t_{N}\) (with \(w(t_{0})=0\)). Applying the invertibility of \(\tilde{D}_{N}\) yields

$$ \tilde{X}=\tilde{D}_{N}^{-1}g(\tilde{X}, \tilde{A}\tilde{X}+\tilde{d})+w. $$
(2.23)

Remark 2.7

When approximating the derivative of a function \(x(t)\) at the C-G-L nodes, \(D_{N}x\) is highly accurate only if \(x(t)\) is smooth enough. Since the standard Wiener process \(w(t)\) is nowhere differentiable, \(D_{N}w\) behaves very badly. However, if the diffusion coefficient of the SDDE is a constant, we can avoid computing \(D_{N}w\), as above (see [13]).
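A minimal sketch of how (2.23) could be implemented: a Brownian path is sampled directly at the (non-equidistant) C-G-L nodes with \(w(t_{0})=0\), and the resulting nonlinear system is solved via `fsolve`. The sketch reuses the `cheb_diff_matrix` and `lagrange_basis_at` helpers from the earlier listings; the way the Wiener increments are drawn here is our assumption, not a prescription from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_sdde_path(f, phi, tau, N, rng):
    """One sample path of the collocation approximation (2.23) for
    dx = f(x(t), x(t - tau)) dt + dw(t) on [-1, 1], x = phi on [-tau - 1, -1], phi(-1) = 0."""
    D, t = cheb_diff_matrix(N)
    D_tilde_inv = np.linalg.inv(D[1:, 1:])            # \tilde{D}_N^{-1}
    A = np.zeros((N, N)); d = np.zeros(N)
    for i in range(1, N + 1):
        s = t[i] - tau
        if s <= -1.0:
            d[i - 1] = phi(s)
        else:
            A[i - 1, :] = lagrange_basis_at(t, s)[1:]
    # Brownian path evaluated at the C-G-L nodes, started at w(t_0) = 0
    w = np.cumsum(rng.standard_normal(N) * np.sqrt(np.diff(t)))
    def residual(X):
        delayed = A @ X + d
        g = np.array([f(X[i], delayed[i]) for i in range(N)])
        return X - (D_tilde_inv @ g + w)              # eq. (2.23)
    X = fsolve(residual, w.copy())
    return t, np.concatenate(([0.0], X))

# usage: linear drift as in (3.1) with a = -0.9, b = 0.1, unit diffusion as in (2.21),
# history phi(t) = 1 + t, and N = 32 collocation nodes
rng = np.random.default_rng(0)
t, x = solve_sdde_path(lambda u, u_tau: -0.9 * u + 0.1 * u_tau, lambda s: 1.0 + s, 1.0, 32, rng)
print(x[-1])
```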

3 Numerical experiments

The theoretical discussion of numerical processes is intended to provide an insight into the performance of numerical methods in practice. Therefore, in this section, some numerical experiments are reported to test the accuracy and the effectiveness of the spectral collocation method.

Example 1

Consider the SDDE

$$ dx(t)=\bigl(ax(t)+bx(t-1)\bigr) dt+\bigl(\beta_{1}+ \beta_{2}x(t)+\beta_{3}x(t-1)\bigr) dw(t) $$
(3.1)

as a test equation for our method. In the case of additive noise (\(\beta_{2}=\beta_{3}=0\)), an explicit solution on the first interval \([0,\tau]\) can be calculated by the method of steps (see, for example, [23]). Using \(\varphi(t)=1+t\) for \(t\in[-1,0]\) as the initial function, the solution for \(t\in[0,1]\) is given by

$$ x(t)=e^{at}\biggl(1+\frac{b}{a^{2}}\biggr)- \frac{b}{a}t-\frac{b}{a^{2}}+\beta _{1}e^{at}\int _{0}^{t}e^{-as} \,dw(s). $$
(3.2)
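For reference, a short sketch of how \(x(T)\) in (3.2) can be sampled at \(T=1\) by approximating the stochastic integral with a left-point Riemann sum on a fine auxiliary grid. (In an actual error computation the Brownian increments must, of course, be the same ones that drive the numerical scheme; the grid size M below is our choice.)

```python
import numpy as np

def exact_solution_at_T(a, b, beta1, T=1.0, M=10_000, rng=None):
    """Sample x(T) from (3.2) with initial function phi(t) = 1 + t on [-1, 0]."""
    rng = rng or np.random.default_rng()
    dt = T / M
    s = np.linspace(0.0, T, M + 1)[:-1]            # left endpoints of the subintervals
    dW = rng.standard_normal(M) * np.sqrt(dt)      # Wiener increments
    stoch_int = np.sum(np.exp(-a * s) * dW)        # approximates int_0^T exp(-a s) dw(s)
    return (np.exp(a * T) * (1.0 + b / a**2) - (b / a) * T - b / a**2
            + beta1 * np.exp(a * T) * stoch_int)

print(exact_solution_at_T(-0.9, 0.1, 0.1, rng=np.random.default_rng(2)))
```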

In our experiments, the mean-square error \(\mathbb{E}(|x(T)-\bar{X}_{N}|^{2})\) (note that \(\bar{X}_{N}\) is the last element of the vector \(\tilde {X}_{N}\) calculated by (2.23)) at the final time T is estimated in the following way. A set of 20 blocks, each containing 100 outcomes (\(\omega_{i,j}\), \(1\leq i\leq20\), \(1\leq j\leq100\)), is simulated, and for each block the estimator

$$\epsilon_{i}=\frac{1}{100}\sum_{j=1}^{100}\bigl|x(T, \omega_{i,j})-\bar{X}_{N}(\omega_{i,j})\bigr| $$

is formed. In Table 1, ϵ denotes the mean of these block estimators, computed in the usual way: \(\epsilon=\frac{1}{20}\sum_{i=1}^{20}\epsilon_{i}\).
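Assuming arrays `exact` and `approx` of shape (20, 100) holding \(x(T,\omega_{i,j})\) and \(\bar{X}_{N}(\omega_{i,j})\) have already been simulated, the estimator above can be evaluated as follows (a sketch; the variable names are ours):

```python
import numpy as np

def block_error_estimate(exact, approx):
    """epsilon_i: mean of |x(T) - X_N| over the 100 outcomes in block i;
    epsilon:   mean of the 20 block estimators."""
    eps_i = np.mean(np.abs(exact - approx), axis=1)
    return eps_i.mean(), eps_i

# usage with placeholder data of the right shape (20 blocks x 100 outcomes)
rng = np.random.default_rng(1)
exact = rng.standard_normal((20, 100))
approx = exact + 0.01 * rng.standard_normal((20, 100))
eps, eps_i = block_error_estimate(exact, approx)
print(eps)
```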

Table 1 Mean-square errors of the numerical solutions for (3.1)

It is noted that, for the Chebyshev spectral method, we collocate at \(N+1\) points on the interval; hence, there are N subintervals. Unlike for the Euler-Maruyama method [11] for solving SDDEs, the subintervals are not equidistant: the spacing between successive Chebyshev points near \(x\in[-1,1]\) is proportional to \(\frac{\sqrt{1-x^{2}}}{N}\). Figure 1 [13] illustrates the difference between the two methods.

Figure 1 An explanation of collocation points for the Euler-Maruyama method and the spectral collocation method on a subinterval.

We apply the spectral collocation method to solve (3.1) with two sets of coefficients, I: \(a=-0.9\), \(b=0.1\), \(\beta_{1}=0.1\) and II: \(a=-2\), \(b=0.1\), \(\beta_{1}=1\). The numerical results are shown in Table 1, where N denotes the number of C-G-L nodes. The approximation errors reported in Table 1 show that the Chebyshev spectral collocation method works very well for SDDEs and achieves high accuracy.

Example 2

Consider (3.1) as a DDE, i.e., \(\beta _{1}=\beta_{2}=\beta_{3}=0\), which reads

$$ \left \{ \begin{array}{@{}l} dx(t)=(ax(t)+bx(t-1)) dt, \quad t> 0,\\ x(t)=1+t,\quad -1\leq t\leq0. \end{array} \right . $$
(3.3)

We apply the spectral collocation method to (3.3) with two groups of parameters, I: \(a=-0.9\), \(b=1\) and II: \(a=-2\), \(b=0.1\). In Tables 2 and 3, we list the approximation errors of the spectral collocation method for (3.3) with parameters I and II, respectively, for different values of N, where N again denotes the number of C-G-L nodes. It is clear that spectral accuracy and convergence are obtained when the spectral collocation method is applied to (3.3).

Table 2 Errors for equation (3.3) in the case of \(\pmb{a=-0.9}\), \(\pmb{b=1}\)
Table 3 Errors for equation (3.3) in the case of \(\pmb{a=-2}\), \(\pmb{b=0.1}\)

In order to visualize the error behavior of the proposed method for different N, we plot the decay of the errors for the two sets of coefficients in Figures 2 and 3. As one may expect, the errors decay very quickly as the number of interpolation points increases.

Figure 2 The error decay for equation (3.3) in the case of \(\pmb{a=-0.9}\), \(\pmb{b=1}\).

Figure 3 The error decay for equation (3.3) in the case of \(\pmb{a=-2}\), \(\pmb{b=0.1}\).

4 Conclusions

In this paper, the Chebyshev spectral collocation method is proposed to solve a certain type of SDDEs. The most important step in constructing the scheme is the Lamperti-type transformation, which allows one to shift the non-linearity from the diffusion coefficient into the drift coefficient. Then, based on the spectral collocation method, we construct Nth-degree interpolating polynomials to approximate the solution of the SDDEs. The numerical results confirm that the scheme is effective and easy to implement. However, a convergence analysis of the method has not yet been carried out owing to its complexity; this will be the subject of future work.