1 Introduction

Until recently, many models ignored stochastic effects because of the difficulty of solving the resulting equations. Stochastic differential equations (SDEs) now play a significant role in many areas of science and industry because they model stochastic phenomena in, e.g., finance, population dynamics, biology, medicine and mechanics. Adding one or more random terms to a deterministic differential equation turns an ordinary differential equation into an SDE. Unfortunately, in many cases analytic solutions are not available for these equations, so numerical methods [1, 2] are required to approximate the solution. The numerical solution of SDEs was discussed in [3–6], and many numerical experiments were presented in [7]. Some analytical and numerical solutions were proposed in [8], and numerical approximations of random periodic solutions for SDEs were considered in [9]. On the other hand, [10] constructed a Milstein scheme with an added error correction term for solving stiff SDEs.

In this paper we consider a one-dimensional SDE of the general form

$$\begin{aligned} \begin{gathered} dX(t,\omega)=f\bigl(t,X(t,\omega) \bigr)\,dt+g\bigl(t,X(t,\omega)\bigr)\,dW(t,\omega),\quad t_{0}\leq t\leq T, \\ X(t_{0},\omega)=X_{0}(\omega), \end{gathered} \end{aligned}$$
(1)

where f is the drift coefficient, g is the diffusion coefficient and \(W(t,\omega)\) is the Wiener process. From now on, we write \(X(t,\omega)=X(t)\) and \(W(t,\omega)=W(t)\) for simplicity. The Wiener process \(W(t)\) satisfies the following three conditions:

  1. \(W(0)=0\) (w.p.1);

  2. \(W(t)-W(s) \sim\sqrt{t-s}\, N(0,1)\) for \(0\leq s < t\), where \(N(0,1)\) denotes a standard normal random variable;

  3. the increments \(W(t)-W(s)\) and \(W(\upsilon)- W(\tau)\) on distinct time intervals are independent for \(0\leq s< t<\tau< \upsilon\).
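Property 2 means that each increment over a step of length \(\Delta t\) is an independent \(N(0,\Delta t)\) random variable, so a discretized Brownian path can be generated by cumulatively summing scaled standard normal draws. A minimal MATLAB sketch (the step count, seed and plotting command are illustrative choices, not taken from the paper):

```matlab
% Discretized Brownian path on [0,T]:
% W(t_{i+1}) = W(t_i) + dW_i, where dW_i ~ N(0,dt) are independent.
rng(100);                      % fix the seed for reproducibility
T = 1; N = 2^10; dt = T/N;
dW = sqrt(dt)*randn(1,N);      % independent N(0,dt) increments
W  = [0, cumsum(dW)];          % W(0) = 0 w.p.1
t  = 0:dt:T;
plot(t, W), xlabel('t'), ylabel('W(t)')
```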

The integral form of (1) is as follows:

$$\begin{aligned} X(t)=X(t_{0})+ \int_{t_{0}}^{t} f\bigl(s,X(s)\bigr)\,d s+ \int_{t_{0}}^{t} g\bigl(s,X(s)\bigr)\,dW(s). \end{aligned}$$
(2)

If \(f(t,X(t))=a_{1}(t)X(t)+a_{2}(t)\) and \(g(t,X(t))=b_{1}(t)X(t)+b_{2}(t)\), where \(a_{1}\), \(a_{2}\), \(b_{1}\), \(b_{2}\) are specified functions of time t or constants, then the SDE is linear; otherwise it is nonlinear. In the next section we describe Monte Carlo simulation, the framework we use for our experiments. In Section 3 we present numerical methods for SDEs: we first derive a stochastic Taylor expansion and then obtain the Euler-Maruyama [11] and Milstein [12] methods from the truncated Ito-Taylor expansion. In Section 4 we consider a nonlinear SDE and solve it with two different methods, namely the EM and Milstein methods. We use MATLAB for our simulations and support our results with graphs and tables. The last section contains our conclusions and suggestions.

2 Monte Carlo simulations

Monte Carlo methods are numerical methods in which random numbers are used to conduct a computational experiment. The numerical solution of stochastic differential equations can be viewed as a type of Monte Carlo calculation. Monte Carlo simulation is perhaps the most common technique for propagating the uncertainty in the various aspects of a system to the predicted performance.

In Monte Carlo simulation, the entire system is simulated a large number of times, so that a set of suitable sample paths is produced on \([t_{0},T]\). Each simulation, referred to as a realization of the system, is equally likely. For each realization, all of the uncertain parameters are sampled, and for each sample we produce a sample path solution to the SDE on \([t_{0},T]\). This is generally obtained from the stochastic Taylor formula, derived in [13], for the solution X of the SDE on a small subinterval of \([t_{0},T]\) [5, 14]. From the Ito-Taylor expansion, we can construct numerical schemes for (1) over the interval \([t_{i},t_{i+1}]\).
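The overall workflow is therefore an outer loop over realizations and an inner loop over time steps. A minimal sketch of this structure (the drift, diffusion, step count and sample size below are hypothetical placeholders, and the one-step update shown is the Euler-Maruyama update derived in Section 3):

```matlab
% Monte Carlo estimate of E[X(T)]: average the end values of M
% independently simulated sample paths.
f = @(x) -x;  g = @(x) 0.5;           % illustrative coefficients only
T = 1; N = 2^8; dt = T/N; X0 = 1; M = 10000;
XT = zeros(M,1);
for m = 1:M                           % outer loop: realizations
    X = X0;
    for i = 1:N                       % inner loop: time steps on [0,T]
        dW = sqrt(dt)*randn;          % dW_i ~ N(0,dt)
        X  = X + f(X)*dt + g(X)*dW;   % one-step update (see Section 3.2)
    end
    XT(m) = X;
end
EX_T = mean(XT);                      % Monte Carlo estimate of E[X(T)]
```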

3 Stochastic Taylor series expansion

The Taylor formula plays a very significant role in numerical analysis. We can obtain the approximation of a sufficiently smooth function in a neighborhood of a given point to any desired order of accuracy with the Taylor formula.

Enlarging the increments of smooth functions of Ito processes, it is beneficial to have a stochastic expansion formula with correspondent specialities to the deterministic Taylor formula. Such a stochastic Taylor formula has some possibilities. One of these possibilities is an Ito-Taylor expansion obtained via Ito’s formula [7].

3.1 Ito-Taylor expansion

First we can obtain an Ito-Taylor expansion for the stochastic case. Consider

$$\begin{aligned} dX(t)=f\bigl(X(t)\bigr)\,dt+g\bigl(X(t)\bigr)\,dW(t), \end{aligned}$$
(3)

where f and g satisfy a linear growth bound and are sufficiently smooth.

Let F be a twice continuously differentiable function of \(X(t)\); then from Ito's lemma we have

$$\begin{aligned} dF\bigl[X(t)\bigr]&= \biggl\{ f\bigl[X(t)\bigr]\frac{\partial{{F[X(t)]}}}{\partial{{X}}}+ \frac {1}{2}g^{2}\bigl[X(t)\bigr]\frac{\partial^{2}{{F[X(t)]}}}{\partial{{X^{2}}}} \biggr\} \,dt \\ &\quad{}+g\bigl[X(t)\bigr]\frac{\partial{{F[X(t)]}}}{\partial{{X}}}\,dW(t). \end{aligned}$$
(4)

Defining the following operators:

$$\begin{aligned}& {L^{0}} \equiv f\bigl[X(t)\bigr]\frac{\partial}{\partial{{X}}}+ \frac {1}{2}g^{2}\bigl[X(t)\bigr]\frac{\partial^{2}}{\partial{{X^{2}}}}, \end{aligned}$$
(5)
$$\begin{aligned}& {L^{1}} \equiv g\bigl[X(t)\bigr]\frac{\partial}{\partial{{X}}}, \end{aligned}$$
(6)

(4) becomes

$$\begin{aligned} dF\bigl[X(t)\bigr]={L^{0}} {F\bigl[X(t) \bigr]}\,dt+{L^{1}} {F\bigl[X(t)\bigr]}\,dW(t), \end{aligned}$$
(7)

and integral form of (7) is

$$\begin{aligned} F\bigl[X(t)\bigr]=F\bigl[X(t_{0})\bigr]+ \int_{t_{0}}^{t} { L^{0} F\bigl[X(\tau)\bigr] }\,d \tau+ \int _{t_{0}}^{t}{ L^{1} F\bigl[X(\tau)} \bigr]\,dW(\tau). \end{aligned}$$
(8)

Choosing \(F(x)=x\), \(F(x)=f(x)\) and \(F(x)=g(x)\) in (8), we obtain, respectively,

$$\begin{aligned}& X(t) = X(t_{0})+ \int_{t_{0}}^{t} {f\bigl[X(\tau)\bigr]}\,d\tau+ \int_{t_{0}}^{t}{ g\bigl[X(\tau )\bigr]}\,dW(\tau), \end{aligned}$$
(9)
$$\begin{aligned}& f\bigl[X(t)\bigr] = f\bigl[X(t_{0})\bigr]+ \int_{t_{0}}^{t} { L^{0} f\bigl[X(\tau)\bigr] }\,d\tau+ \int _{t_{0}}^{t}{ L^{1} f\bigl[X(\tau)} \bigr]\,dW(\tau), \end{aligned}$$
(10)
$$\begin{aligned}& g\bigl[X(t)\bigr] = g\bigl[X(t_{0})\bigr]+ \int_{t_{0}}^{t} { L^{0} g\bigl[X(\tau)\bigr] }\,d\tau+ \int_{t_{0}}^{t}{ L^{1}g\bigl[X(\tau)} \bigr]\,dW(\tau). \end{aligned}$$
(11)

Substituting Eqs. (10) and (11) into (9), we obtain the following equation:

$$\begin{aligned} \begin{aligned}[b] X(t)&=X(t_{0}) + \int_{t_{0}}^{t} \biggl( f\bigl[X(t_{0}) \bigr]+ \int_{t_{0}}^{\tau_{1}}{ L^{0} f\bigl[X( \tau_{2})\bigr] }\,d\tau_{2} \\ &\quad{}+ \int_{t_{0}}^{\tau_{1}}{ L^{1} f\bigl[X( \tau_{2})\bigr]}\,dW(\tau_{2}) \biggr) \,d \tau_{1} \\ &\quad{}+ \int_{t_{0}}^{t} \biggl(g\bigl[X(t_{0})\bigr]+ \int_{t_{0}}^{\tau_{1}}{ L^{0} g\bigl[X( \tau_{2})\bigr] }\,d\tau_{2} \\ &\quad{}+ \int_{t_{0}}^{\tau_{1}}{ L^{1} g\bigl[X( \tau_{2})\bigr]}\,dW(\tau_{2}) \biggr) \,dW( \tau_{1} ); \end{aligned} \end{aligned}$$
(12)

and therefore,

$$\begin{aligned} X(t) =&X(t_{0})+f\bigl[X(t_{0})\bigr] \int_{t_{0}}^{t}{d\tau_{1}}+g \bigl[X(t_{0})\bigr] \int _{t_{0}}^{t}{dW( \tau_{1} )}+\mathcal{R}, \end{aligned}$$
(13)

where \(\mathcal{R}\) denotes the remainder, which consists of the double integral terms:

$$ \begin{aligned}[b]\mathcal{R} &\equiv \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{0} f\bigl[X(\tau _{2})\bigr] }\,d\tau_{2} \,d \tau_{1} + \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{1} f\bigl[X( \tau_{2})\bigr] }\,dW(\tau_{2}) \,d \tau_{1} \\ &\quad {}+ \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{0} g\bigl[X( \tau_{2})\bigr] }\,d\tau_{2} \,d W(\tau_{1}) + \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{1} g\bigl[X( \tau_{2})\bigr] }\,dW(\tau_{2}) \,dW( \tau_{1}). \end{aligned} $$
(14)

Selecting \(F=L^{1} g\) in (8) and substituting into the last term of (14), we obtain

$$ \begin{aligned}[b] &\int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{1} g\bigl[X( \tau_{2})\bigr] }\,dW(\tau_{2}) \,dW( \tau_{1})\\ &\quad = \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} \biggl( {L^{1} g \bigl[X(t_{0})\bigr] } \\ &\qquad{}+ \int_{t_{0}}^{\tau_{2}} {L^{0} L^{1} g \bigl[X(\tau_{3})\bigr] }\,d\tau_{3}+ \int _{t_{0}}^{\tau_{2}} {L^{1} L^{1} g \bigl[X(\tau_{3})\bigr] }\,dW(\tau_{3}) \biggr)\,dW(\tau _{2})\,dW(\tau_{1}), \end{aligned} $$
(15)

and using \(L^{1} g =g g'\), we have

$$\begin{aligned} \begin{aligned}[b] X(t)&=X(t_{0})+f \bigl[X(t_{0})\bigr] \int_{t_{0}}^{t}{d\tau_{1}} \\ &\quad{}+g\bigl[X(t_{0})\bigr] \int_{t_{0}}^{t}{dW( \tau_{1} )}+g \bigl[X(t_{0})\bigr]g'\bigl[X(t_{0})\bigr] \int _{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}}\,dW(\tau_{2})\,dW( \tau_{1})+\mathcal{\widetilde{R}}, \end{aligned} \end{aligned}$$
(16)

where our new remainder is

$$\begin{aligned} \begin{aligned}[b] \mathcal{\widetilde{R}} &\equiv \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{0} f\bigl[X( \tau_{2})\bigr] }\,d\tau_{2} \,d \tau_{1} + \int_{t_{0}}^{t} \int_{t_{0}}^{\tau _{1}} {L^{1} f\bigl[X( \tau_{2})\bigr] }\,dW(\tau_{2}) \,d \tau_{1} \\ &\quad{}+ \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} {L^{0} g\bigl[X( \tau_{2})\bigr] }\,d\tau _{2} \,d W(\tau_{1}) \\ &\quad{}+ \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} \int_{t_{0}}^{\tau_{2}}{L^{0} L^{1} g \bigl[X(\tau_{3})\bigr] }\,d\tau_{3}\,dW( \tau_{2}) \,dW( \tau_{1}) \\ &\quad{}+ \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} \int_{t_{0}}^{\tau_{2}}{L^{1} L^{1} g \bigl[X(\tau_{3})\bigr] }\,dW(\tau_{3})\,dW( \tau_{2}) \,dW( \tau_{1}). \end{aligned} \end{aligned}$$
(17)

Therefore, we have obtained the Ito-Taylor expansion (16) for process (3). Using Ito's lemma again, we have

$$\begin{aligned} \int_{t_{0}}^{t} \int_{t_{0}}^{\tau_{1}} dW( \tau_{2}) \,dW( \tau_{1})=\frac {1}{2}\bigl[W(t)-W(t_{0}) \bigr]^{2} -\frac{1}{2}(t-t_{0} ), \end{aligned}$$
(18)
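Spelling out this application of Ito's lemma for the reader's convenience: applying it to \([W(t)-W(t_{0})]^{2}\) gives

$$\begin{aligned} d\bigl[W(t)-W(t_{0})\bigr]^{2}=2\bigl[W(t)-W(t_{0})\bigr]\,dW(t)+dt, \end{aligned}$$

and integrating from \(t_{0}\) to t yields

$$\begin{aligned} \int_{t_{0}}^{t}\bigl[W(\tau_{1})-W(t_{0})\bigr]\,dW(\tau_{1})=\frac{1}{2}\bigl[W(t)-W(t_{0})\bigr]^{2}-\frac{1}{2}(t-t_{0}), \end{aligned}$$

whose left-hand side equals \(\int_{t_{0}}^{t}\int_{t_{0}}^{\tau_{1}}dW(\tau_{2})\,dW(\tau_{1})\) because \(\int_{t_{0}}^{\tau_{1}}dW(\tau_{2})=W(\tau_{1})-W(t_{0})\).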

Substituting (18) into (16), we obtain the stochastic Taylor expansion

$$\begin{aligned} \begin{aligned}[b] X(t)&=X(t_{0})+f \bigl[X(t_{0})\bigr] \int_{t_{0}}^{t}{d\tau_{1}}+g \bigl[X(t_{0})\bigr] \int _{t_{0}}^{t}{dW( \tau_{1} )} \\ &\quad{}+g\bigl[X(t_{0})\bigr]g'\bigl[X(t_{0}) \bigr]\biggl\{ \frac{1}{2}\bigl[W(t)-W(t_{0})\bigr]^{2} - \frac {1}{2}(t-t_{0} ) \biggr\} +\mathcal{\widetilde{R}}. \end{aligned} \end{aligned}$$
(19)

Therefore, from Ito-Taylor expansion (19) we can construct a numerical integration scheme for the SDE on a time discretization \(0=t_{0}< t_{1}<\cdots<t_{n}<\cdots<t_{N}=T\) of the time interval \([0,T]\) as follows:

$$\begin{aligned} \begin{aligned}[b] X(t_{i+1})&=X(t_{i})+f \bigl(X(t_{i})\bigr)\Delta t +g\bigl(X(t_{i})\bigr)\Delta W_{i} \\ &\quad{} +\frac{1}{2}g\bigl(X(t_{i})\bigr)g' \bigl(X(t_{i})\bigr)\bigl[(\Delta{W_{i}})^{2}- \Delta t\bigr]+\mathcal{\widetilde{R}}, \end{aligned} \end{aligned}$$
(20)

where \(\Delta t=t_{i+1}-t_{i} \) and \(\Delta W_{i}=W(t_{i+1})-W(t_{i})\) for \(i=0,1,2, \ldots,N-1 \) with the initial condition \(X(t_{0})=X_{0}\). The increments \(\Delta W_{i}\) are independent normally distributed random variables with distribution \(N(0,\Delta t)\).

3.2 Euler-Maruyama method

One of the simplest numerical approximations for the SDE is the Euler-Maruyama method. If we truncate the stochastic Taylor expansion after the first-order terms, we obtain the Euler method, also called the Euler-Maruyama method:

$$\begin{aligned} X(t_{i+1})=X(t_{i})+f\bigl(X(t_{i}) \bigr)\Delta t +g\bigl(X(t_{i})\bigr)\Delta W_{i} \end{aligned}$$
(21)

for \(i=0,1,2,\ldots,N-1\) with the initial value \(X(t_{0})=X_{0}\). The Euler-Maruyama approximation converges with strong order 0.5 under Lipschitz and bounded-growth conditions on the coefficients f and g, as shown in [15]. It was shown in [16] and [17] that an Euler-Maruyama approximation of an Ito process converges with weak order 1.0 under sufficient smoothness conditions. Thus, for the Euler-Maruyama method, the weak order of convergence is greater than the strong order of convergence.
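A minimal MATLAB sketch of scheme (21) as a function returning one sample path on a uniform grid (the function name, signature and use of function handles are our own choices, not from the paper):

```matlab
function X = euler_maruyama(f, g, X0, T, N)
% One Euler-Maruyama sample path of dX = f(X)dt + g(X)dW on [0,T].
dt = T/N;
X  = zeros(1, N+1);
X(1) = X0;
for i = 1:N
    dW     = sqrt(dt)*randn;                   % dW_i ~ N(0,dt)
    X(i+1) = X(i) + f(X(i))*dt + g(X(i))*dW;   % scheme (21)
end
end
```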

3.3 Milstein method

Another numerical approximation method for the SDE is the Milstein method. If we truncate the stochastic Taylor expansion after the second-order terms, we obtain the Milstein method:

$$\begin{aligned} \begin{aligned}[b] X(t_{i+1})&=X(t_{i})+f \bigl(X(t_{i})\bigr)\Delta t +g\bigl(X(t_{i})\bigr)\Delta W_{i} \\ &\quad{}+\frac{1}{2}g\bigl(X(t_{i})\bigr)g' \bigl(X(t_{i})\bigr)\bigl[(\Delta{W_{i}})^{2}- \Delta t\bigr] \end{aligned} \end{aligned}$$
(22)

for \(i=0,1,2,\ldots,N-1\) with the initial value \(X(t_{0})=X_{0}\). The Milstein approximation converges with strong order 1.0 under the assumption \(E[X(0)]^{2}<\infty\), provided that f and g are twice continuously differentiable and f, \(f'\), g, \(g'\) satisfy a uniform Lipschitz condition.

Note that \(g'(X(t_{i}))\) denotes the derivative of g evaluated at \(X(t_{i})\), and if the SDE has additive noise, then the Milstein method reduces to the Euler-Maruyama method.
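A matching MATLAB sketch of scheme (22), again as a hypothetical function (dg is a handle for the derivative \(g'\)); for additive noise \(g'\equiv 0\), so the correction term vanishes and the Euler-Maruyama update is recovered:

```matlab
function X = milstein(f, g, dg, X0, T, N)
% One Milstein sample path of dX = f(X)dt + g(X)dW on [0,T];
% dg is a function handle for the derivative g'(x).
dt = T/N;
X  = zeros(1, N+1);
X(1) = X0;
for i = 1:N
    dW     = sqrt(dt)*randn;                   % dW_i ~ N(0,dt)
    X(i+1) = X(i) + f(X(i))*dt + g(X(i))*dW ...
           + 0.5*g(X(i))*dg(X(i))*(dW^2 - dt); % scheme (22)
end
end
```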

4 Application

Taking \(f(X(t))= \frac{2}{5}X^{3/5}(t)+5X^{4/5}(t)\), \(g(X(t))=X^{4/5}(t)\) and the initial condition \(X(0)=1\) in (1), we obtain the following nonlinear stochastic differential equation:

$$\begin{aligned} \begin{aligned} &dX(t)= \biggl( \frac{2}{5}X^{3/5}(t)+5X^{4/5}(t) \biggr)\,dt+X^{4/5}(t)\,dW(t), \quad0\leq t\leq1, \\ &X(0)=1. \end{aligned} \end{aligned}$$
(23)

This nonlinear SDE is said to have multiplicative noise, since the diffusion coefficient depends multiplicatively on the solution [18]. We now find the exact solution of this nonlinear SDE using Ito's formula. Let \(F(t,X(t))=[X(t)]^{1/5}\). Then

$$\begin{aligned}& dF\bigl(t,X(t)\bigr) = \biggl( \frac{2}{25}X^{-1/5}(t)+1- \frac{4}{50}X^{-1/5}(t) \biggr)\,dt+\frac{1}{5}\,dW(t), \end{aligned}$$
(24)
$$\begin{aligned}& d\bigl(\bigl[X(t)\bigr]^{1/5}\bigr) = dt+\frac{1}{5}\,dW(t); \end{aligned}$$
(25)

and therefore the exact solution is found to be

$$\begin{aligned} X(t) =& \biggl[1+t+\frac{1}{5}W(t) \biggr]^{5}. \end{aligned}$$
(26)
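For the reader's convenience, the intermediate derivatives used in passing from Ito's lemma to (24) are (a short check that is not part of the original derivation)

$$\begin{aligned} F'(X)=\frac{1}{5}X^{-4/5},\qquad F''(X)=-\frac{4}{25}X^{-9/5}, \end{aligned}$$

so that

$$\begin{aligned} f(X)F'(X)+\frac{1}{2}g^{2}(X)F''(X)= \biggl(\frac{2}{25}X^{-1/5}+1 \biggr)-\frac{2}{25}X^{-1/5}=1, \qquad g(X)F'(X)=\frac{1}{5}, \end{aligned}$$

which gives (25) and hence (26).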

Now we will apply the Euler-Maruyama (EM) method and the Milstein method to the nonlinear SDE (23), where \(f(X(t))= \frac {2}{5}X^{3/5}(t)+5X^{4/5}(t)\) and \(g(X(t))=X^{4/5}(t)\).

Using the EM method, we get the following scheme:

$$\begin{aligned} X(t_{i+1}) =&X(t_{i})+ \biggl( \frac{2}{5}X^{3/5}(t_{i})+5X^{4/5}(t_{i}) \biggr)\Delta t+X^{4/5}(t_{i})\Delta W_{i}, \end{aligned}$$
(27)

and using the Milstein method, we get the scheme as follows:

$$\begin{aligned} \begin{aligned}[b] X(t_{i+1})&=X(t_{i})+ \biggl( \frac{2}{5}X^{3/5}(t_{i})+5X^{4/5}(t_{i}) \biggr)\Delta t+X^{4/5}(t_{i})\Delta W_{i} \\ &\quad{}+\frac{2}{5}X^{3/5}(t_{i})\bigl[( \Delta{W_{i}})^{2}-\Delta t\bigr], \end{aligned} \end{aligned}$$
(28)

where \(i=0,1,2,\ldots,N-1\), \(X(0)=1\) and the stepsize is \(\Delta t=1/N\).
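A sketch of how a single run comparing the two schemes with the exact solution (26) might be organized (the variable names are our own; the exact solution is evaluated with the same Brownian increments used by the schemes):

```matlab
% Compare EM and Milstein schemes (27)-(28) with the exact solution (26)
% along one Brownian path.
f  = @(x) (2/5)*x.^(3/5) + 5*x.^(4/5);
g  = @(x) x.^(4/5);
dg = @(x) (4/5)*x.^(-1/5);            % g'(x), so 0.5*g*dg = (2/5)*x^(3/5)
T = 1; N = 2^10; dt = T/N;
dW = sqrt(dt)*randn(1,N);             % shared Brownian increments
Xem = 1; Xmil = 1;                    % X(0) = 1
for i = 1:N
    Xem  = Xem  + f(Xem)*dt  + g(Xem)*dW(i);
    Xmil = Xmil + f(Xmil)*dt + g(Xmil)*dW(i) ...
         + 0.5*g(Xmil)*dg(Xmil)*(dW(i)^2 - dt);
end
Xexact = (1 + T + sum(dW)/5)^5;       % exact solution (26) at t = T
fprintf('EM: %.4f  Milstein: %.4f  exact: %.4f\n', Xem, Xmil, Xexact);
```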

In Table 1, our EM and Milstein approximations of this example were evaluated for 10,000 sample paths for \(N=2^{9},2^{10},2^{11},2^{12}\) and \(2^{13}\) over \([0,1]\) to estimate \(E[X(1)]\approx\frac{1}{10\text{,}000}\sum_{i=1}^{10\text{,}000}{X_{N}^{i}}\), where \({X_{N}^{i}}\) is the estimate of X at the end time \(T=1\) for the ith sample path using N subintervals.
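Such an estimate can be computed efficiently by advancing all sample paths at once; a vectorized MATLAB sketch for the EM scheme (the Milstein case only adds the correction term):

```matlab
% Vectorized Monte Carlo estimate of E[X(1)] with the EM scheme (27):
% all M sample paths are advanced simultaneously.
M = 10000; N = 2^10; dt = 1/N;
X = ones(M,1);                                  % X(0) = 1 on every path
for i = 1:N
    dW = sqrt(dt)*randn(M,1);
    X  = X + ((2/5)*X.^(3/5) + 5*X.^(4/5))*dt + X.^(4/5).*dW;
end
EX1 = mean(X);                                  % estimate of E[X(1)]
```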

Table 1 Estimation values for Euler-Maruyama and Milstein methods

In Table 2, the mean square error values

$$E \bigl\vert X(1)-X_{N} \bigr\vert ^{2} \approx \frac{1}{10\text{,}000}\sum_{i=1}^{10\text{,}000} \bigl\vert X^{i}(1)-X_{N}^{i} \bigr\vert ^{2} $$

are calculated with the EM and Milstein methods for each value of N, where \({X_{N}^{i}}\) is the estimate of X at the end time \(T=1\) for the ith sample path using N subintervals.
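Note that \(X^{i}(1)\) and \(X_{N}^{i}\) must be computed from the same Brownian increments on each path. A sketch for the EM scheme (the Milstein case is analogous); keeping the running sum of the increments gives the exact solution at \(T=1\):

```matlab
% Mean square error at T = 1 for the EM scheme: exact and numerical end
% values share the same Brownian increments on every path.
M = 10000; N = 2^10; dt = 1/N;
X = ones(M,1); W = zeros(M,1);
for i = 1:N
    dW = sqrt(dt)*randn(M,1);
    X  = X + ((2/5)*X.^(3/5) + 5*X.^(4/5))*dt + X.^(4/5).*dW;
    W  = W + dW;                                % running Brownian path
end
Xexact = (1 + 1 + W/5).^5;                      % exact solution (26) at t = 1
mse = mean(abs(Xexact - X).^2);                 % mean square error
```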

Table 2 Calculated mean square errors for Euler-Maruyama and Milstein methods

In the upper-left and lower-left graphs, which show the exact solution of (23), \(X{\mathit{exact}}_{\mathit{mean}}\) holds the average of the exact solution, plotted as blue asterisks connected with dashed lines, and \(X_{\mathit{exact}}\) holds the exact solutions of (23) along individual paths on the interval \([0,1]\).

In Figure 1, the exact solution and the EM approximation, averaged over 10,000 sample paths, are plotted together with 50 individual paths on the interval \([0,1]\). \(X_{\mathit{mean}}\) holds the average of the EM solution, plotted as blue asterisks connected with dashed lines; \(X_{EM}\) holds the EM approximations, plotted as red solid lines.

Figure 1

Exact solution and EM simulation averaged over 10,000 discretized sample paths along 50 individual paths for \(\pmb{N=2^{10}}\).

In Figure 2, the exact solution and the Milstein approximation, averaged over 10,000 sample paths, are plotted together with 50 individual paths on the interval \([0,1]\). \(X\mathit{milstein}_{\mathit{mean}}\) holds the average of the Milstein solution, plotted as blue asterisks connected with dashed lines; \(X\mathit{milstein}\) holds the Milstein approximations, plotted as red solid lines.

Figure 2

Exact solution and Milstein simulation averaged over 10,000 discretized sample paths along 50 individual paths for \(\pmb{N=2^{10}}\).

In Figure 3, the exact solution and the EM and Milstein approximations are plotted on the same graph. The first graph is plotted for \(N=2^{9}\) subintervals, and the second one for \(N=2^{13}\) subintervals. \(X_{EM}\) denotes the EM approximation, \(X_{\mathit{Milstein}}\) the Milstein approximation and \(X_{\mathit{exact}}\) the exact solution, plotted as blue, yellow and red solid lines, respectively.

Figure 3

EM and Milstein approximations and the exact solution for one sample path with \(\pmb{2^{9}}\) and \(\pmb{2^{13}}\) discretization steps, respectively.

In Figure 4, the error functions of the EM and Milstein approximations and the difference between the two approximations are plotted on the same graph for each stepsize. \(X\mathit{exact}-X\mathit{EM}\) and \(X\mathit{exact}-X\mathit{milstein}\) denote the errors of the EM and Milstein approximations at each step, plotted as aqua and blue solid lines, respectively; \(X\mathit{milstein}-X\mathit{EM}\) denotes the difference between the EM and Milstein approximations, plotted as red solid lines. The graphs show that, as the stepsize \(dt=1/N\) decreases (i.e., as N increases), the Milstein method gives a closer approximation to the exact solution than the Euler-Maruyama method.

Figure 4

Error functions of the EM and Milstein approximations with respect to the exact solution for one sample path with \(\pmb{2^{9}}\) and \(\pmb{2^{13}}\) discretization steps, respectively.

5 Conclusion

In this paper we have studied the Euler-Maruyama and Milstein schemes, which are obtained from the truncated Ito-Taylor expansion presented in [7]. We then applied these schemes to a nonlinear stochastic differential equation in order to compare the EM and Milstein methods and to illustrate their efficiency. Moreover, we calculated estimation values for the Euler-Maruyama and Milstein methods so as to analyze how closely the numerical approximations match the exact solution. We investigated approximations for \(2^{9}\), \(2^{10}\), \(2^{11}\), \(2^{12}\) and \(2^{13}\) discretization steps on the interval \([0,1]\) with 10,000 different sample paths. According to our results, as the discretization value N increases, the numerical solutions obtained from the Euler-Maruyama and Milstein schemes approach the exact solution, and the results in the tables show that the Milstein method is more effective than the Euler-Maruyama method.