Introduction

Fractional calculus was introduced because it fills an existing gap in describing a large body of work in engineering [1], [2], as well as various phenomena in nature such as biology and physics [3, 4]. Mathematicians and physicists have produced numerous articles on analytical and numerical methods for fractional differential equations (FDEs), including the Adomian decomposition method [5], the variational iteration method [6, 7], the homotopy perturbation method [8] and the homotopy analysis method [9]. H. Rezazadeh et al. generalized the Floquet system to the fractional Floquet system in 2016 [10]. Stochastic differential equation (SDE) models play a great role in various sciences such as physics, economics, biology, chemistry and finance [11,12,13,14,15]. To read articles in this field, the reader needs at least a working knowledge of independence, expected values and variances, together with the basic definitions of stochastic processes [16]. M. Khodabin et al. approximated the solution of stochastic Volterra integral equations in 2014 [17], and R. Ezzati et al. worked on a stochastic operational matrix based on block pulse functions in 2014 [18]. We introduce the following stochastic fractional differential equation (SFDE) [19]:

$$\begin{aligned} D^\alpha u(t)=f(t,u(t))+\sigma \int ^{t}_{t_{0}} g(t,s)dw(s),\qquad u(t_{0})=u_{t_{0}}.\qquad \end{aligned}$$
(1)

for \(0\le \alpha \le 1\) and \(t\in \left[ 0,T\right] \), where \(D^\alpha \) is the Caputo fractional derivative of order \(\alpha \), which will be defined later. Here \(\sigma \) is the maximum amplitude of the noise, and \(\int ^{t}_{t_{0}} g(t,s)dw(s)\) is the stochastic term, which produces noise in the result; throughout the paper we set \(\sigma =1\). SFDEs play a remarkable role in physical applications in nature [20,21,22].

The use of RBFs for solving partial differential equations (PDEs) has been very popular among researchers during the last two decades [23, 24]. RBFs have also been applied in mechanics [25], to the KdV equation [26] and to the Klein-Gordon equation [27]. In 2012, Vanani et al. used RBFs for solving fractional partial differential equations [28], and in 2014 Gonzalez-Gaxiola and Gonzalez-Perez used the multiquadric RBF to approximate the solution of the Black-Scholes equation [29].

The motivation of this paper is to extend the application of the RBF method to the solution of SFDEs.

The layout of the paper is as follows. In Sect. 2 some essential definitions of fractional calculus are proposed. In Sect. 3 we explain the use of the RBF method for SFDEs and prove the existence and uniqueness of the solution. In Sect. 4 various examples are solved to illustrate the effectiveness of the proposed method. A conclusion is given in the last section.

Preliminaries and notations

In this section, we give some basic definitions and properties of fractional calculus, which are defined as follows [4].

Definition 2.1

The Caputo fractional derivative of order \(\vartheta \) is defined as

$$\begin{aligned} D^\vartheta f(x)=J^{p-\vartheta } D^p f(x)=&\frac{1}{\Gamma (p-\vartheta )}\int ^{x}_{0} (x-t)^{p-\vartheta -1} \frac{d^p}{dt^p } f(t)dt,\\&p-1< \vartheta \le p,\qquad x> 0 \end{aligned}$$

where \(D^p\) is the classical differential operator of order p.

Remark 2.2

For the Caputo derivative we have

$$\begin{aligned} D^{\vartheta }x^{\beta }=\left\{ \begin{array}{c} {\displaystyle 0, \ \ \ \ \ \ \ \ \beta <\vartheta , \ \ } \\ {\displaystyle \frac{\Gamma (\beta +1)}{\Gamma (\beta +1-\vartheta )}x^{\beta -\vartheta }, \quad \beta \ge \vartheta .} \end{array} \right. \end{aligned}$$
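As an illustrative sanity check (not part of the original text), the power rule above can be verified against direct quadrature of the integral in Definition 2.1 with \(p=1\); the function names and the midpoint rule are our own choices.

```python
import math

def caputo_power_rule(beta, nu):
    """Closed-form coefficient of the power rule above:
    D^nu t^beta = Gamma(beta+1)/Gamma(beta+1-nu) * t^(beta-nu)."""
    return math.gamma(beta + 1) / math.gamma(beta + 1 - nu)

def caputo_quadrature(beta, nu, x, n=200000):
    """Caputo derivative of f(t) = t^beta at x for 0 < nu <= 1 (p = 1),
    computed by the midpoint rule applied to Definition 2.1:
    (1/Gamma(1-nu)) * int_0^x (x-t)^(-nu) * beta * t^(beta-1) dt."""
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h  # midpoint of the i-th subinterval
        total += (x - t) ** (-nu) * beta * t ** (beta - 1)
    return total * h / math.gamma(1 - nu)

# D^{1/2} t^2 at x = 1: exact value Gamma(3)/Gamma(5/2) ~ 1.5045
beta, nu, x = 2.0, 0.5, 1.0
exact = caputo_power_rule(beta, nu) * x ** (beta - nu)
approx = caputo_quadrature(beta, nu, x)
```

The midpoint rule avoids evaluating the weakly singular kernel at \(t=x\), so no special treatment of the endpoint is needed for this rough check.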

Remark 2.3

\(D^{-\vartheta }\) is defined as \(D^{-\vartheta } f(t)=\frac{1}{\Gamma (\vartheta )}\int _0^t f(\zeta )(t-\zeta )^{\vartheta -1} d\zeta ,\qquad t>0,\quad 0< \vartheta \le 1.\)

Definition 2.4

Let \((\Omega , F, \rho )\) be a probability space with a normal filtration \((F_t)_{t\ge 0}\) and let \(w=\{w(t):t\ge 0\}\) be a Brownian motion defined over this filtered probability space. Consider the following SFDE

$$\begin{aligned} D^{\alpha } u(t,\nu )=f(t,u(t,\nu ))+\int ^{t}_{0} g(t,s,\nu )dw(s) ,\qquad u(0,\nu )=u_{0} \end{aligned}$$

for \(t\in [0,T]\) and \(\nu \in \Omega \). For simplicity of notation, we drop the variable \(\nu \), so we have the following equation

$$\begin{aligned} D^{\alpha } u(t)=f(t,u(t))+\int ^{t}_{0} g(t,s)dw(s) ,\qquad u(0)=u_{0}\qquad \end{aligned}$$

From Remark 2.3 we see that

$$\begin{aligned} u(t)=u_{0}+D^{-\alpha } f(t,u(t))+D^{-\alpha }\int ^{t}_{0} g(t,s)dw(s) , \end{aligned}$$

therefore, we have

$$\begin{aligned} u(t)=u_{0}+\frac{1}{\Gamma (\alpha )}\int ^{t}_{0}f(v,u(v))(t-v)^{\alpha -1}dv+\frac{1}{\Gamma (\alpha )}\int ^{t}_{0} \int ^{v}_{0} g(v,s)(t-v)^{\alpha -1}dw(s) dv. \end{aligned}$$
(2)
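To illustrate how the double stochastic integral in Eq. (2) can be sampled numerically, the following sketch draws Brownian increments \(dW_j\sim N(0,h)\) on an equispaced grid, evaluates the inner Itô integral as a sum against those increments, and applies the midpoint rule to the outer integral. The function name, grid and quadrature choices are illustrative assumptions, not the authors' implementation.

```python
import math
import numpy as np

def stochastic_forcing(g, t, alpha, N=400, seed=1):
    """One sample path of the last term of Eq. (2),
    (1/Gamma(alpha)) * int_0^t (t-v)^(alpha-1) [ int_0^v g(v,s) dw(s) ] dv,
    with the inner Ito integral evaluated as a sum against Brownian
    increments dW_j ~ N(0, h) and the outer integral by the midpoint rule."""
    rng = np.random.default_rng(seed)
    h = t / N
    dW = rng.standard_normal(N) * math.sqrt(h)   # Brownian increments
    s_mid = (np.arange(N) + 0.5) * h             # midpoints of the s-grid
    total = 0.0
    for i in range(N):
        v = (i + 0.5) * h                        # midpoint of the v-grid
        inner = sum(g(v, s_mid[j]) * dW[j] for j in range(i + 1))
        total += (t - v) ** (alpha - 1) * inner * h
    return total / math.gamma(alpha)

# With g identically zero the stochastic term vanishes exactly
val_zero = stochastic_forcing(lambda v, s: 0.0, t=1.0, alpha=0.5)
```

Using midpoints keeps \(t-v>0\) throughout, so the weakly singular factor \((t-v)^{\alpha -1}\) is never evaluated at its singularity.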

We also adopt the following assumptions.

Assumption 2.5

Suppose f and g are \(L^2\)-measurable functions satisfying

$$\begin{aligned} \left| f(m,x)-f(m,y)\right| \le K_1\left| x-y\right| ,\qquad \left| g(t,m)-g(t,n)\right| \le K_2\left| m-n\right| , \end{aligned}$$
(3)
$$\begin{aligned} \left| f(m,x)\right| \le K_3 (1+\left| x\right| ),\qquad \left| g(t,m)\right| \le K_4, \end{aligned}$$
(4)

for some positive constants \( K_1, K_2, K_3, K_4 \), for every \(x,y\in \mathbb {R}\) and \(0\le m, n \le t \le T=1.\)

Stochastic integral 2.6

Now we explain the approximation of the stochastic term. White noise is known as the derivative of the Brownian motion W(s) [30], so we approximate the term \(\frac{dw_{s}}{dt}\). Let \(t_{0}=0< t_{1}=\Delta t< \cdots< t_{N}=T=1\), with \(t_{i}=i \Delta t\) for \(i=0,\ldots ,N\), be a partition of \(\left[ 0,1\right] \). Following the method introduced in [31], we approximate \(\frac{dw_{s}}{dt}\) by \(\frac{d\hat{w}_{s}}{dt}\):

$$\begin{aligned} \frac{d\hat{w}_{s}}{dt}=\frac{1}{\sqrt{\Delta t}}\sum ^{N}_{i=1}\gamma _{i}\zeta _{i}(s) , \end{aligned}$$
(5)

where the \(\gamma _{i} \sim N(0,1) \) are given by

$$\begin{aligned} \gamma _{i}=\frac{1}{\sqrt{\Delta t}} \int ^{t_{i}}_{t_{i-1}} dW(t),\quad i=1,\ldots ,N, \end{aligned}$$

and

$$\begin{aligned} \zeta _{i}(s)=\left\{ \begin{array}{c} {\displaystyle 1, \quad t_{i-1}\le s<t_{i}, \ \ } \\ \\ {\displaystyle 0, \quad \text{otherwise}. } \end{array} \right. \end{aligned}$$
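The piecewise-constant approximation (5) can be sketched as follows; the subinterval convention and function names are our own illustrative choices.

```python
import numpy as np

def white_noise_approx(N, T=1.0, seed=0):
    """Piecewise-constant approximation d w_hat/dt of white noise, as in (5):
    the value gamma_i / sqrt(dt) on each subinterval, with gamma_i ~ N(0, 1)
    the scaled Brownian increments."""
    rng = np.random.default_rng(seed)
    dt = T / N
    gamma = rng.standard_normal(N)        # gamma_i ~ N(0, 1)
    values = gamma / np.sqrt(dt)          # value of d w_hat/dt on each cell
    def dw_hat_dt(s):
        i = min(int(s / dt), N - 1)       # index of the subinterval containing s
        return values[i]
    return dw_hat_dt, gamma, dt

# Integrating the approximation over [0, T] recovers W(T) = sqrt(dt) * sum_i gamma_i
dw, gamma, dt = white_noise_approx(N=100)
W_T = sum(dw((i + 0.5) * dt) * dt for i in range(100))
```

Evaluating at cell midpoints avoids ambiguity at the subinterval boundaries.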

Collocation method based on RBFs for solving SFDEs

The radial basis function (RBF) method has been known as a powerful tool for solving ordinary, partial and fractional differential equations, as well as integral equations. So in this section we use this method for solving (2). Before that, we consider some preliminaries.

Interpolation by RBFs

Let \(\{t_1,\ldots ,t_N\}\) be a given set of distinct points in \(\left[ 0,T\right] \subseteq \mathbb {R}\). Then the approximation of a function u(t) using RBFs \(\varphi (t)=\varphi (\left\| t\right\| )\), can be written in the following form [32, 33]

$$\begin{aligned} u(t)\approx \pi _{N,m}u(t)=\sum ^{N}_{k=1}c_k \varphi (\left\| t-t_k\right\| )+\sum ^{m-1}_{l=0}d_l p_l(t),\quad t\in D \end{aligned}$$

where \(p_0,\ldots ,p_{m-1}\) form a basis for the m-dimensional linear space \(P_m([0,T])\) of polynomials of degree less than m on [0, T]. Suppose \(C^{m}_{N}([0,T])=span\{\varphi _1,\ldots ,\varphi _N,p_0,\ldots ,p_{m-1}\}\); then \(\pi _{N,m}:C([0,T])\rightarrow C^{m}_{N}([0,T])\) is the collocation projector on the collocation points \(X=\{t_1,\ldots ,t_N\}\subset [0,T]\). Since enforcing the interpolation conditions \(\pi _{N,m}u(t_i)=u(t_i),\quad i=1,\ldots ,N\), leads to a system of N linear equations with \(N+m\) unknowns, we usually add m additional conditions:

$$\begin{aligned} \sum ^{N}_{k=1}c_k p_l (t_k)=0,\quad l=0,\ldots ,m-1. \end{aligned}$$
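A minimal sketch of Gaussian-RBF interpolation with an appended polynomial tail and the extra moment conditions, assuming \(m=2\) (a linear tail) and a fixed shape parameter \(c\); this is our own illustration, not the authors' code.

```python
import numpy as np

def rbf_interpolate(t_nodes, u_vals, c=0.1):
    """Gaussian-RBF interpolation with an appended linear polynomial (m = 2).
    Solves the augmented symmetric system
        [A   P] [c_rbf]   [u]
        [P^T 0] [d    ] = [0],
    where A_ij = exp(-|t_i - t_j|^2 / c^2) and P_il = p_l(t_i), p_l in {1, t};
    the last two rows are the moment conditions sum_k c_k p_l(t_k) = 0."""
    N = len(t_nodes)
    r = np.abs(t_nodes[:, None] - t_nodes[None, :])
    A = np.exp(-(r / c) ** 2)
    P = np.column_stack([np.ones(N), t_nodes])
    M = np.vstack([np.hstack([A, P]),
                   np.hstack([P.T, np.zeros((2, 2))])])
    rhs = np.concatenate([u_vals, np.zeros(2)])
    coef = np.linalg.solve(M, rhs)
    lam, d = coef[:N], coef[N:]
    def interpolant(t):
        phi = np.exp(-((t - t_nodes) / c) ** 2)
        return float(phi @ lam + d[0] + d[1] * t)
    return interpolant

# Interpolate u(t) = sin(2*pi*t) on 11 equispaced nodes of [0, 1]
t_nodes = np.linspace(0.0, 1.0, 11)
u_vals = np.sin(2 * np.pi * t_nodes)
u_N = rbf_interpolate(t_nodes, u_vals)
```

Because of the polynomial tail and the moment conditions, the interpolant reproduces polynomials of degree less than m exactly, which is the standard rationale for the augmentation.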

Using RBFs for solving SFDEs (2)

Let \(D=[0,T] \subseteq \mathbb {R} \) and \(u\in C([0,T])\), and suppose that \(u_N\) is the approximation of u based on these basis functions, so we can write

$$\begin{aligned} u_{N}(t)=\sum ^{N}_{i=1} \lambda _i \varphi _i(t)+\sum ^{m-1}_{l=0}d_l p_l(t),\qquad t\in D \subset \mathbb {R}. \end{aligned}$$

Here N is the number of nodal points within the domain D, and \(c_i\) denotes the shape parameter. There are many different kinds of RBFs, but in this research we need only one of them, the Gaussian, which we represent as follows:

$$\begin{aligned} \varphi _i(t)= e^{\frac{-\left\| t-t_i\right\| ^2}{c_i^2}}. \end{aligned}$$

The collocation method based on RBF basis for solving (2) can be written in the following form:

$$\begin{aligned} \left\{ \begin{array}{c} {\displaystyle u_N(t_1)=u(t_0)+\frac{1}{\Gamma (\alpha )}\int ^{t_1}_{0}f(\nu ,u_N(\nu ))(t_1-\nu )^{\alpha -1}d\nu +\frac{1}{\Gamma (\alpha )}\int ^{t_1}_{0}\int ^{\nu }_{0}g(\nu ,s)d\hat{w} (s) d\nu } \\ \\ {\displaystyle u_N(t_2)=u(t_0)+\frac{1}{\Gamma (\alpha )}\int ^{t_2}_{0}f(\nu ,u_N(\nu ))(t_2-\nu )^{\alpha -1}d\nu +\frac{1}{\Gamma (\alpha )}\int ^{t_2}_{0}\int ^{\nu }_{0}g(\nu ,s)d\hat{w}(s) d\nu } \\ \\ \\ \vdots \\ {\displaystyle u_N(t_N)=u(t_0)+\frac{1}{\Gamma (\alpha )}\int ^{t_N}_{0}f(\nu ,u_N(\nu ))(t_N-\nu )^{\alpha -1}d\nu +\frac{1}{\Gamma (\alpha )}\int ^{t_N}_{0}\int ^{\nu }_{0}g(\nu ,s)d\hat{w}(s) d\nu } \\ \\ {\displaystyle \sum ^{N}_{k=1}\lambda _k p_l (t_k)=0,\quad l=0,\ldots ,m-1} \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} u_N(t)=\sum ^{N}_{i=1}\lambda _i \varphi _i(t)+\sum ^{m-1}_{l=0}d_l p_l(t),\quad t\in D. \end{aligned}$$

Lemma 3.1

(Existence and Uniqueness) Assume that there exists a constant \(K_1>0\) such that

$$\begin{aligned} \vert f(t,u_1)-f(t,u_2)\vert \le K_1\vert u_1-u_2 \vert \end{aligned}$$

and

$$\begin{aligned} \dfrac{K_1}{\alpha \Gamma (\alpha )} <1 \end{aligned}$$

for each \( t\in [0,T] \) and all \( u_1,u_2\in \mathbb {R}^{n}\); then Eq. (2) has a unique solution on [0, T].

Proof

First, we transform Eq. (2) into a fixed point problem. For this purpose, consider the operator

$$\begin{aligned} P:C\left( [0,T],\mathbb {R}^{n}\right) \rightarrow C\left( [0,T],\mathbb {R}^{n}\right) \end{aligned}$$

defined by

$$\begin{aligned} P(u)(t)=u_0+\frac{1}{\Gamma (\alpha )}\int ^{t}_{0}f(\nu ,u(\nu ))(t-\nu )^{\alpha -1}d\nu +\frac{1}{\Gamma (\alpha )}\int ^{t}_{0}\int ^{\nu }_{0} g(\nu ,s)d\hat{w}(s) d\nu \end{aligned}$$

According to Eq. (2), we have

$$\begin{aligned} P(u)(t)=u(t) \end{aligned}$$

If we show that P is a contraction operator, then by the Banach contraction principle P has a unique fixed point, and we conclude that Eq. (2) has a unique solution. Applying Eq. (3), it is easy to see that

$$\begin{aligned} \vert P(u_1)(t)-P(u_2)(t)\vert \le \frac{1}{\Gamma (\alpha )}\int ^{t}_{0}\vert f(\nu ,u_1(\nu ))-f(\nu ,u_2(\nu ))\vert (t-\nu )^{\alpha -1} d\nu \le \end{aligned}$$
$$\begin{aligned} \frac{K_1}{\Gamma (\alpha )}\Vert u_1-u_2\Vert _{\infty }\int ^{t}_{0} (t-\nu )^{\alpha -1} d\nu = \frac{K_1 t^{\alpha }}{\alpha \Gamma (\alpha )}\Vert u_1-u_2\Vert _{\infty }\le \frac{K_1}{\alpha \Gamma (\alpha )}\Vert u_1-u_2\Vert _{\infty }, \end{aligned}$$

where the last step uses \(t\le T=1\). On the other hand, by the assumption of the lemma \(\frac{K_1}{\alpha \Gamma (\alpha )}<1\), so P is a contraction and the proof is complete. \(\square \)
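The fixed-point argument can be illustrated numerically by Picard iteration \(u\leftarrow P(u)\) applied to the deterministic part of Eq. (2) (the stochastic term dropped); the midpoint discretization and the test problem below are our own assumptions, not part of the paper. For \(\alpha =1\) and \(f(t,u)=-u\) the equation reduces to the classical ODE \(u'=-u\), \(u(0)=1\), whose solution \(e^{-t}\) the iteration should reproduce.

```python
import math

def picard_solve(f, u0, alpha, N=120, iters=40):
    """Picard iteration u <- P(u) for the deterministic part of Eq. (2),
    u(t) = u0 + (1/Gamma(alpha)) * int_0^t f(v, u(v)) (t-v)^(alpha-1) dv,
    discretized at the midpoints of an equispaced grid on [0, 1]."""
    h = 1.0 / N
    mids = [(j + 0.5) * h for j in range(N)]
    u = [u0] * N                      # initial guess: the constant u0
    ga = math.gamma(alpha)
    for _ in range(iters):
        u_new = []
        for i, t in enumerate(mids):
            s = 0.0
            for j in range(i):        # midpoint rule over [0, t)
                v = mids[j]
                s += f(v, u[j]) * (t - v) ** (alpha - 1) * h
            u_new.append(u0 + s / ga)
        u = u_new
    return mids, u

# alpha = 1 and f(t, u) = -u recover the classical ODE u' = -u, u(0) = 1
mids, u = picard_solve(lambda v, x: -x, u0=1.0, alpha=1.0)
```

The iteration converges because on a finite interval Picard iteration for a Lipschitz f contracts after finitely many steps, mirroring the Banach fixed-point argument above.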

Illustrative example

In this section, we solve SFDEs using RBFs and compare the results with the Galerkin method [19]. These equations do not have exact solutions, so we use a numerical approximation on a sufficiently fine partition of t. Since we have the relation

$$\begin{aligned} |u_{exact}-u_{RBFs}|=|u_{exact}-u_{Galerkin}+u_{Galerkin}-u_{RBFs}| \end{aligned}$$
$$\begin{aligned} \le |u_{exact}-u_{Galerkin}|+|u_{Galerkin}-u_{RBFs}|, \end{aligned}$$

we measure the distance between the two approximate solutions by the root-mean-square (RMS) error [34], as follows:

$$\begin{aligned} RMSError=\sqrt{\frac{\sum ^{n}_{i=0}\left( U_{RBFs}(t_{i})-U_{Galerkin}(t_{i})\right) ^2}{n}} \end{aligned}$$
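For concreteness, the RMS error above can be computed as in the following sketch, which divides by the number of sample points; the function name and the sampled values are dummy illustrations.

```python
import math

def rms_error(u_rbf, u_galerkin):
    """Root-mean-square distance between two approximate solutions
    sampled at the same points, as reported in the tables."""
    n = len(u_rbf)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u_rbf, u_galerkin)) / n)

# Dummy data: two approximations sampled at three points
err = rms_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
```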

In this paper, we report the RMS error after running the program 50 or 60 times for different points and values of \(\sigma \).

Example 4.1

Consider the following SFDE:

$$\begin{aligned} D^{\frac{3}{2}} u(t)+u(t)=t+1+\sigma \int ^{t}_{0} dW(s) ,\qquad u(0)=1. \end{aligned}$$

Tables 1, 2 and 3 show the results after 60 runs of the program with \(n=17\).

Table 1 \(U_{RBFs}\) for example (4.1) with \(n=17\) and \(\sigma =1\).
Table 2 \(U_{Galerkin}\) for example (4.1) with \(n=17\) and \(\sigma =1\).
Table 3 RMS Error for example (4.1), \(n=17\).

Example 4.2

Consider the following SFDE:

$$\begin{aligned} D^{\alpha }u(t)+u(t)=\frac{2t^{2-\alpha }}{\Gamma (3-\alpha )}-\frac{t^{1-\alpha }}{\Gamma (2-\alpha )}+t^2-t+\sigma \int ^{t}_{0} sdw(s) ,\qquad u(0)=0,\qquad 0 <\alpha \le 1. \end{aligned}$$

For \(\alpha =\frac{1}{2}\) and various values of \(\sigma \), Table 4 reports the results after 50 runs of the program with \(n=12\) nodal points; for \(\alpha =\frac{3}{2}\) and different values of \(\sigma \), Table 5 reports the results after 50 runs with \(n=11\).

Table 4 RMS Error for example (4.2) with \(\alpha =\frac{1}{2}\), \(n=12\).
Table 5 RMS Error for example (4.2) with \(\alpha =\frac{3}{2}\), \(n=11\).

Example 4.3

Consider the following SFDE:

$$\begin{aligned} D^{0.5} u(t)+u(t)=(t+1)^5+\int ^{t}_{0}\cos (s) dw(s) ,\qquad u(0)=1. \end{aligned}$$
Table 6 RMS Error for Example 4.3 for different values of n at points t.

Example 4.4

Consider the following SFDE:

$$\begin{aligned} D^{0.75} u(t)+u^3(t)=t^3+1+\int ^{t}_{0}(s^2+1)^3 dw(s) ,\qquad u(0)=0. \end{aligned}$$
Table 7 RMS Error for Example 4.4 for different values of n at points t.

As expected, the approximate solution becomes more accurate when n is taken larger and \(\sigma \) smaller.

Conclusion

The main goal of this work was to propose an efficient algorithm for stochastic fractional differential equations. Since we do not have exact solutions for SFDEs, we used RBFs to approximate the solutions of these kinds of equations. In addition, we discussed the existence and uniqueness of the solution of the presented method. The RMS errors presented in the tables show that the results are highly accurate in comparison with the Galerkin method.