1 Introduction

Stochastic integral equations are widely applied in engineering, biology, oceanography, the physical sciences, and other fields. These systems depend on a noise source, such as Gaussian white noise. Since many stochastic Volterra integral equations do not admit exact solutions, it makes sense to seek more accurate approximate solutions. There are various numerical methods for stochastic Volterra integral equations, for example, orthogonal basis methods [1,2,3,4,5,6,7,8,9,10], Walsh series methods [11, 12], and polynomial methods [13,14,15,16].

In [1], Fakhrodin studied linear stochastic Itô–Volterra integral equations (SIVIEs) through Haar wavelets (HWs). In [3], Maleknejad et al. considered the same integral equations by applying block pulse functions (BPFs). In [9], Heydari et al. solved linear SIVIEs by generalized hat basis functions. In line with the same hat functions, Hashemi et al. presented a numerical method for nonlinear SIVIEs driven by fractional Brownian motion [8]. Moreover, Jiang et al. applied BPFs to solve two-dimensional nonlinear SIVIEs [7]. More generally, Zhang studied the existence and uniqueness of solutions to stochastic Volterra integral equations with singular kernels and constructed an Euler-type approximate solution [17, 18].

Inspired by the discussion above, we use HWs to solve the following nonlinear SIVIE:

$$ x(v)=x_{0}(v)+ \int_{0}^{v}{k(u,v)}\sigma \bigl(x(u) \bigr) \,du+ \int _{0}^{v}r(u,v)\rho \bigl(x(u) \bigr)\,dB(u), \quad v\in[0,1), $$
(1)

where \(x(v)\) is an unknown stochastic process defined on some probability space \((\varOmega,\mathcal{F},P)\), \(k(u,v)\) and \(r(u,v)\) are kernel functions for \(u, v\in[0,1)\), and \(x_{0}(v)\) is an initial value function. \(B(u)\) is a Brownian motion and \(\int _{0}^{v}r(u,v)\rho(x(u))\,dB(u)\) is an Itô integral. σ and ρ are analytic functions that satisfy certain boundedness and Lipschitz conditions.

In contrast to the papers above [1, 3, 7,8,9], the contributions of this paper are as follows. Firstly, we establish a preparatory theorem to handle the nonlinear analytic functions. Secondly, the error analysis is rigorously proved. Finally, compared with reference [8], the numerical solution is more accurate and the computation is simpler because of the use of HWs. The soundness and effectiveness of this method are further supported by two examples.

The structure of the article is as follows.

In Sect. 2, some preliminaries of BPFs and HWs are given. In Sect. 3, the relationship between HWs and BPFs is shown. In Sect. 4, the approximate solutions of (1) are derived. In Sect. 5, the error analysis of the numerical method is demonstrated. In Sect. 6, the validity and efficiency of the numerical method are verified by two examples.

2 Preliminaries

BPFs and HWs have been analysed extensively by many scholars; for details, see references [1, 3].

2.1 Block pulse functions

BPFs are defined as

$$\psi_{i}(v)= \textstyle\begin{cases} 1 & ih\leq v< (i+1)h,\\ 0 & \text{otherwise}, \end{cases} $$

for \(i=0,\ldots,m-1\), \(m=2^{L}\) for a positive integer L and \(h=\frac {1}{m}\), \(v\in[0,1)\).
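As a quick numerical illustration (a Python sketch of our own, not part of the method in [1, 3]; the helper name `bpf` is ours), the BPFs can be evaluated directly from this definition:

```python
import numpy as np

def bpf(i, v, m):
    """Evaluate the i-th block pulse function psi_i(v) on [0, 1) with m = 2**L pieces."""
    h = 1.0 / m
    v = np.asarray(v, dtype=float)
    # psi_i is 1 on [i*h, (i+1)*h) and 0 elsewhere
    return np.where((i * h <= v) & (v < (i + 1) * h), 1.0, 0.0)
```

With this helper the disjointness (2) and orthogonality properties below can be checked pointwise and by a Riemann sum.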

The basic properties of BPFs are shown as follows:

  (i)

    disjointness:

    $$ \psi_{i}(v)\psi_{j}(v)=\delta_{ij} \psi_{i}(v), $$
    (2)

    where \(v\in[0,1)\), \(i, j=0,1,\ldots,m-1\), and \(\delta_{ij}\) is Kronecker delta;

  (ii)

    orthogonality:

    $$\int_{0}^{1}\psi_{i}(v) \psi_{j}(v)\,dv=h\delta_{ij}; $$
  (iii)

    completeness property: for every \(g\in L^{2}[0,1)\), Parseval’s identity satisfies

    $$ \int_{0}^{1}g^{2}(v)\,dv=\lim _{m\to\infty}\sum_{i=0}^{m-1}(g_{i})^{2} \bigl\Vert \psi _{i}(v) \bigr\Vert ^{2}, $$
    (3)

    where

    $$g_{i}=\frac{1}{h} \int_{0}^{1}g(v)\psi_{i}(v)\,dv. $$

The set of BPFs can be represented by the following m-dimensional vector:

$$ \varPsi_{m}(v)= \bigl( \psi_{0}(v),\ldots, \psi_{m-1}(v) \bigr)^{T},\quad v\in[0,1). $$
(4)

From the above definitions, it follows that

$$\begin{gathered} \varPsi_{m}(v)\varPsi_{m}^{T}(v)= \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \psi_{0}(v)&0&\cdots&0\\ 0&\psi_{1}(v)&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\psi_{m-1}(v) \end{array}\displaystyle \right )_{m\times m}, \\ \varPsi_{m}^{T}(v)\varPsi_{m}(v)=1, \\ \varPsi_{m}(v)\varPsi_{m}^{T}(v)F_{m}= \mathbf{D}_{F_{m}}\varPsi_{m}(v),\end{gathered} $$

where \(F_{m}= (f_{0},f_{1},\ldots,f_{m-1} )^{T}\) and \(\mathbf {D}_{F_{m}}=\operatorname{diag}(F_{m})\).

Furthermore, for an \(m\times m\) matrix M, we have

$$\varPsi_{m}^{T}(v)\mathbf{M}\varPsi_{m}(v)= \hat{{M}}^{T}\varPsi_{m}(v), $$

where \(\hat{M}\) is an m-dimensional vector whose entries equal the main diagonal entries of M.
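These vector identities are easy to verify numerically; the Python sketch below (the helper `bpf_vector` is our own naming) exploits the fact that exactly one BPF is nonzero at any given \(v\):

```python
import numpy as np

def bpf_vector(v, m):
    """Psi_m(v): the m-dimensional BPF vector; exactly one entry equals 1."""
    psi = np.zeros(m)
    psi[int(v * m)] = 1.0   # index of the cell containing v
    return psi
```

With `Psi = bpf_vector(v, m)` one checks \(\varPsi^{T}\varPsi=1\), \(\varPsi\varPsi^{T}F=\mathbf{D}_{F_{m}}\varPsi\), and \(\varPsi^{T}\mathbf{M}\varPsi=\hat{M}^{T}\varPsi\) directly.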

In accordance with BPFs, every function \(x(v)\) which satisfies square integrable conditions on the interval \([0,1)\) can be approximated as follows:

$$x(v)\simeq x_{m}(v)=\sum_{i=0}^{m-1}x_{i} \psi_{i}(v)=X_{m}^{T}\varPsi _{m}(v)= \varPsi_{m}^{T}(v)X_{m}, $$

where the function \(x_{m}(v)\) is an approximation of the function \(x(v)\) and

$$ X_{m}= (x_{0},x_{1},\ldots,x_{m-1} )^{T}. $$
(5)

Similarly, every function \(k(u,v)\) defined on \([0,1)\times[0,1)\) can be approximated as

$$k(u,v)\simeq\varPsi_{m_{1}}^{T}(u)\mathbf{K}\varPhi_{m_{2}}(v), $$

where \(\mathbf{K}= (k_{ij} )_{m_{1}\times m_{2}}\) with

$$ k_{ij}\simeq\frac{1}{h_{1}h_{2}} \int_{0}^{1} \int_{0}^{1}k(u,v)\psi _{i}(u) \phi_{j}(v)\,du\,dv, $$
(6)

and \(h_{1}=\frac{1}{m_{1}}\), \(h_{2}=\frac{1}{m_{2}}\).

2.2 Haar wavelets

The notation and definition of HWs are introduced in this section (also see [1]). The set of orthogonal HWs is defined as follows:

$$ h_{i}(v)=2^{\frac{l}{2}}h \bigl(2^{l}v-z \bigr), \quad i=2^{l}+z, 0 \leq z < 2^{l}, l \geq0, i,l,z \in\mathbb{N}, $$

where \(h_{0}(v)=1\), \(v \in[0,1)\), and

$$h(v)= \textstyle\begin{cases} 1 & 0\leq v< \frac{1}{2},\\ -1 & \frac{1}{2} \leq v< 1. \end{cases} $$

For the HWs \(h_{i}(v)\) defined on \([0,1)\), we have

$$ \int_{0}^{1}h_{i}(v)h_{j}(v) \,dv= \delta_{ij}, $$
(7)

where \(\delta_{ij}\) is the Kronecker delta.
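A direct transcription of this definition (a Python sketch; the name `haar` is ours) makes the orthonormality (7) easy to verify by a midpoint Riemann sum:

```python
import numpy as np

def haar(i, v):
    """Haar function h_i on [0, 1), with i = 2**l + z (and h_0 = 1)."""
    v = np.asarray(v, dtype=float)
    if i == 0:
        return np.ones_like(v)
    l = int(np.log2(i))          # level l from i = 2**l + z
    z = i - 2 ** l               # shift z
    t = 2 ** l * v - z
    # mother wavelet h: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    return 2 ** (l / 2) * (((0 <= t) & (t < 0.5)).astype(float)
                           - ((0.5 <= t) & (t < 1.0)).astype(float))
```

Since each \(h_{i}\) is piecewise constant on dyadic subintervals, the midpoint rule reproduces (7) exactly up to rounding.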

In accordance with HWs, every function \(x(v)\) that satisfies square integrable conditions can be approximated as follows:

$$ x(v)=c_{0}h_{0}(v)+\sum _{i=1}^{\infty} c_{i}h_{i}(v), \quad v\in [0,1), i=2^{l}+z, 0 \leq z < 2^{l}, l \geq0, l,z\in \mathbb{N}, $$
(8)

where

$$ c_{i}= \int_{0}^{1}x(v)h_{i}(v)\,dv, \quad i=0 \quad\text{or}\quad i=2^{l}+z, 0 \leq z < 2^{l}, l \geq0, l,z \in\mathbb{N}. $$
(9)

We can see that when \(m=2^{L}\), the series (8) can be truncated as

$$ x(v)\simeq c_{0}h_{0}(v)+\sum_{i=1}^{m-1} c_{i}h_{i}(v), \quad i=2^{l}+z, 0 \leq z < 2^{l}, l=0,1,\ldots,L-1. $$

Obviously, the vector form is as follows:

$$ x(v)\simeq C_{m}^{T}H_{m}(v)=H_{m}^{T}(v)C_{m}, $$
(10)

where \(H_{m}(v)= (h_{0}(v),h_{1}(v),\ldots,h_{m-1}(v) )^{T}\) and \(C_{m}= (c_{0},c_{1},\ldots,c_{m-1} )^{T}\) are the HW vector and Haar coefficient vector, respectively.
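The truncated expansion (8)-(10) can be sketched as follows (Python, our own helpers; the coefficients (9) are computed by midpoint quadrature rather than exact integration, so this is an approximation):

```python
import numpy as np

def haar(i, v):
    """Haar function h_i on [0, 1), with i = 2**l + z (and h_0 = 1)."""
    v = np.asarray(v, dtype=float)
    if i == 0:
        return np.ones_like(v)
    l = int(np.log2(i))
    z = i - 2 ** l
    t = 2 ** l * v - z
    return 2 ** (l / 2) * (((0 <= t) & (t < 0.5)).astype(float)
                           - ((0.5 <= t) & (t < 1.0)).astype(float))

def haar_coeffs(x, m, n=4096):
    """Approximate c_i = int_0^1 x(v) h_i(v) dv, i < m, by the midpoint rule."""
    v = (np.arange(n) + 0.5) / n
    return np.array([np.mean(x(v) * haar(i, v)) for i in range(m)])
```

For \(x(v)=v\) and \(m=16\), the truncated sum recovers the cell averages of \(x\), with an \(L^{2}\) error of order \(h=1/m\), consistent with Lemma 5.1 below.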

Similarly, every function \(k(u,v)\) defined on \([0,1)\times[0,1)\) can be approximated as follows:

$$ k(u,v)\simeq H_{m}^{T}(u)\mathbf{K}H_{m}(v), $$

where \(\mathbf{K}= (k_{ij} )_{m\times m}\) with

$$ k_{ij}= \int_{0}^{1} \int_{0}^{1} k(u,v)h_{i}(u)h_{j}(v) \,du\,dv, \quad i,j=0,1,\ldots,m-1. $$

3 Haar wavelets and BPFs

Some lemmas about HWs and BPFs are introduced in this section. For a detailed description, see reference [1].

Lemma 3.1

Suppose that \(H_{m}(v)\) and \(\varPsi_{m}(v)\) are given in (10) and (4), respectively. Then \(H_{m}(v)\) can be written in terms of BPFs as follows:

$$ H_{m}(v)=\mathbf{Q}\varPsi_{m}(v),\quad m=2^{L}, $$
(11)

where \(\mathbf{Q}= (Q_{ij} )_{m \times m}\)and

$$ Q_{ij}=2^{\frac{j}{2}}h_{i-1} \biggl( \frac{2j-1}{2m} \biggr), \quad i,j=1,2,\ldots,m, i-1=2^{l}+z, 0 \leq z< 2^{l}. $$

Proof

See [1]. □

Lemma 3.2

Suppose that Q is given in (11). Then we have

$$ \mathbf{Q}^{T}\mathbf{Q}=m\mathbf{I}, $$

where I is an \(m \times m\) identity matrix.

Proof

See [1]. □
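Since each \(h_{i}\) is constant on the BPF subintervals, Q can be assembled by sampling the Haar functions at the subinterval midpoints \(\frac{2j-1}{2m}\); the Python sketch below (our construction, consistent with Lemma 3.1 up to indexing) also confirms Lemma 3.2 numerically:

```python
import numpy as np

def haar(i, v):
    """Haar function h_i on [0, 1), with i = 2**l + z (and h_0 = 1)."""
    v = np.asarray(v, dtype=float)
    if i == 0:
        return np.ones_like(v)
    l = int(np.log2(i))
    z = i - 2 ** l
    t = 2 ** l * v - z
    return 2 ** (l / 2) * (((0 <= t) & (t < 0.5)).astype(float)
                           - ((0.5 <= t) & (t < 1.0)).astype(float))

def haar_matrix(m):
    """Q with row i equal to h_i sampled at the BPF cell midpoints (2j-1)/(2m)."""
    mid = (2 * np.arange(m) + 1) / (2 * m)
    return np.array([haar(i, mid) for i in range(m)])
```

In particular \(\mathbf{Q}^{-1}=\frac{1}{m}\mathbf{Q}^{T}\), which is used repeatedly in the lemmas below.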

Lemma 3.3

Suppose that F is an m-dimensional vector. Then we have

$$ H_{m}(v)H_{m}^{T}(v)F=\tilde{ \mathbf{F}}H_{m}(v), $$

where \(\tilde{\mathbf{F}}\) is an \(m \times m\) matrix with \(\tilde {\mathbf{F}}=\mathbf{Q}\bar{\mathbf{F}}\mathbf{Q}^{-1}\) and \(\bar {\mathbf{F}}=\operatorname{diag}(\mathbf{Q}^{T}F)\).

Proof

See [1]. □

Lemma 3.4

Suppose that M is an \(m\times m\) matrix. Then we have

$$ H_{m}^{T}(v)\mathbf{M} H_{m}(v)= \hat{M}H_{m}(v), $$

where \(\hat{M}=N^{T}\mathbf{Q}^{-1}\) is an m-dimensional vector and the entries of the vector N are the diagonal entries of the matrix \(\mathbf{Q}^{T}\mathbf{MQ}\).

Proof

See [1]. □

Lemma 3.5

Suppose that \(\varPsi_{m}(v)\) is given in (4). Then we have

$$ \int_{0}^{v} \varPsi_{m}(u) \,du \simeq \mathbf{P} \varPsi_{m}(v), $$

where

$$\mathbf{P}=\frac{h}{2} \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 1&2&2&\cdots&2\\ 0&1&2&\cdots&2\\ 0&0&1&\cdots&2\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1 \end{array}\displaystyle \right )_{m\times m}. $$

Proof

See [1, 3]. □
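The operational matrix P encodes the fact that \(\int_{0}^{v}\psi_{i}(u)\,du\) is 0 before cell i, grows linearly across it (reaching \(h/2\) at the cell midpoint), and equals h afterwards. A minimal check (Python sketch, our helper name):

```python
import numpy as np

def integration_matrix(m):
    """P of Lemma 3.5: h/2 on the diagonal, h above it, 0 below."""
    h = 1.0 / m
    return np.triu(np.full((m, m), h), 1) + (h / 2.0) * np.eye(m)
```

Evaluating \(\mathbf{P}\varPsi_{m}(v)\) at the midpoint of cell j reproduces the exact integrals of each \(\psi_{i}\).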

Lemma 3.6

Suppose that \(\varPsi_{m}(v)\) is given in (4). Then we have

$$ \int_{0}^{v} \varPsi_{m}(u) \,dB(u) \simeq \mathbf{P}_{B} \varPsi_{m}(v), $$

where

$$\mathbf{P}_{B} = \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} B(\frac{h}{2}) & B(h) & B(h) &\cdots& B(h)\\ 0 & B(\frac{3h}{2})-B(h) & B(2h)-B(h) & \cdots& B(2h)-B(h) \\ 0 & 0 & B(\frac{5h}{2})-B(2h) & \cdots& B(3h)-B(2h)\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ 0 & 0 & 0 & \cdots& B (\frac{(2m-1)h}{2} )-B ((m-1)h ) \end{array}\displaystyle \right )_{m\times m}. $$

Proof

See [1, 3]. □
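\(\mathbf{P}_{B}\) is the stochastic counterpart of P: entry \((i,j)\) holds the Brownian increment of \(\int_{0}^{v}\psi_{i}(u)\,dB(u)\) evaluated at the midpoint of cell j. A sketch with a simulated path (Python; the helper names are ours, and the path B is linearly interpolated from a discrete random walk):

```python
import numpy as np

def stochastic_integration_matrix(m, B):
    """P_B of Lemma 3.6; B is a function returning the Brownian path at time s."""
    h = 1.0 / m
    PB = np.zeros((m, m))
    for i in range(m):
        PB[i, i] = B((i + 0.5) * h) - B(i * h)       # increment up to the midpoint
        PB[i, i + 1:] = B((i + 1.0) * h) - B(i * h)  # full increment over cell i
    return PB
```

At the cell midpoints, \(\mathbf{P}_{B}\varPsi_{m}(v)\) reproduces the Itô integrals \(\int_{0}^{v}\psi_{i}(u)\,dB(u)=B(\min(v,(i+1)h))-B(\min(v,ih))\) exactly.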

Lemma 3.7

Suppose that \(H_{m}(v)\) is given in (10). Then we have

$$ \int_{0}^{v} H_{m}(u) \,du \simeq \frac{1}{m}\mathbf{Q}\mathbf {P}\mathbf{Q}^{T}H_{m}(v)= \boldsymbol{\varLambda} H_{m}(v), $$

where Q and P are given in (11) and Lemma 3.5, respectively, and \(\boldsymbol{\varLambda}=\frac{1}{m}\mathbf {Q}\mathbf{P}\mathbf{Q}^{T}\).

Proof

See [1, 3]. □

Lemma 3.8

Suppose that \(H_{m}(v)\) is given in (10). Then we have

$$ \int_{0}^{v} H_{m}(u) \,dB(u) \simeq \frac{1}{m}\mathbf{Q}\mathbf {P}_{B}\mathbf{Q}^{T}H_{m}(v)= \boldsymbol{\varLambda} _{B}H_{m}(v), $$

where Q and \(\mathbf{P}_{B}\) are given in (11) and Lemma 3.6, respectively, and \(\boldsymbol{\varLambda}_{B}=\frac {1}{m}\mathbf{Q}\mathbf{P}_{B}\mathbf{Q}^{T}\).

Proof

See [1, 3]. □

4 Numerical method

For convenience, we set \(m_{1}=m_{2}=m\); then nonlinear SIVIE (1) can be solved by HWs. Firstly, a useful result for HWs is proved.

Theorem 4.1

For the analytic functions \(\sigma (v)=\sum a_{j}v^{j}\) and \(\rho(v)=\sum b_{j}v^{j}\), where j is a positive integer, we have

$$\begin{gathered} \sigma \bigl(x_{m}(v) \bigr)=\sigma^{T}(C_{m})H_{m}(v), \\ \rho \bigl(x_{m}(v) \bigr)=\rho^{T}(C_{m})H_{m}(v), \end{gathered} $$

where \(H_{m}(v)\) and \(C_{m}\) are given in (10),

$$\begin{gathered} \sigma^{T}(C_{m})= \bigl( \sigma(c_{0}), \sigma(c_{1}),\ldots,\sigma (c_{m-1}) \bigr), \\ \rho^{T}(C_{m})= \bigl(\rho(c_{0}), \rho(c_{1}),\ldots,\rho(c_{m-1}) \bigr).\end{gathered} $$

Proof

According to Lemma 3.1 and the disjointness property of BPFs, we can deduce

$$\begin{aligned} \sigma \bigl(x_{m}(v) \bigr) =& \sum a_{j} \bigl(x_{m}(v) \bigr)^{j} \\ =& \sum a_{j} \bigl(c_{0}h_{0}(v)+c_{1}h_{1}(v)+ \cdots+c_{m-1}h_{m-1}(v) \bigr)^{j} \\ =& \sum a_{j} \bigl(c_{0}^{j},c_{1}^{j}, \ldots,c_{m-1}^{j} \bigr)H_{m}(v) \\ =& \sigma^{T}(C_{m})H_{m}(v), \end{aligned}$$

thus,

$$ \sigma \bigl(x_{m}(v) \bigr)=\sigma^{T}(C_{m})H_{m}(v)=H_{m}^{T}(v) \sigma(C_{m}). $$
(12)

Similarly,

$$ \rho \bigl(x_{m}(v) \bigr)=\rho^{T}(C_{m})H_{m}(v)=H_{m}^{T}(v) \rho(C_{m}). $$
(13)

The proof is completed. □
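The componentwise rule is most transparent in the BPF representation, where disjointness (2) gives \(\sigma(x_{m}(v))=\sum_{i}\sigma(x_{i})\psi_{i}(v)\) directly. A minimal numerical check of this mechanism (Python sketch, using \(\sigma=\sin\) as an example; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
X = rng.normal(size=m)                  # BPF coefficients of x_m
v = np.linspace(0, 1, 800, endpoint=False)
cell = (v * m).astype(int)              # index of the single active BPF at each v
x_m = X[cell]                           # x_m(v) = X^T Psi_m(v)
sigma = np.sin                          # any analytic sigma
lhs = sigma(x_m)                        # sigma applied to the expansion
rhs = sigma(X)[cell]                    # sigma(X)^T Psi_m(v), componentwise rule
```

Because exactly one \(\psi_{i}\) is active at each v, applying σ to the expansion and applying σ to the coefficients agree pointwise.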

Now, in order to solve (1), we approximate \(x(v)\), \(x_{0}(v)\), \(k(u,v)\), and \(r(u,v)\) by HWs in the following forms:

$$\begin{aligned}& x(v)\simeq x_{m}(v)=C_{m}^{T}H_{m}(v)=H_{m}^{T}(v)C_{m}, \end{aligned}$$
(14)
$$\begin{aligned}& x_{0}(v)\simeq x_{0_{m}}(v)={C_{0}}_{m}^{T}H_{m}(v)=H_{m}^{T}(v){C_{0}}_{m}, \end{aligned}$$
(15)
$$\begin{aligned}& k(u,v)\simeq{k_{m}(u,v)}=H_{m}^{T}(u) \mathbf {K}H_{m}(v)=H_{m}^{T}(v) \mathbf{K}^{T}H_{m}(u), \end{aligned}$$
(16)
$$\begin{aligned}& r(u,v)\simeq{r_{m}(u,v)}=H_{m}^{T}(u) \mathbf {R}H_{m}(v)=H_{m}^{T}(v) \mathbf{R}^{T}H_{m}(u), \end{aligned}$$
(17)

where \(C_{m}\) and \({C_{0}}_{m}\) are HW coefficient vectors, and K and R are HW coefficient matrices. Substituting approximations (12)–(17) into (1), we have

$$\begin{aligned} C_{m}^{T}H_{m}(v) = & {C_{0}}_{m}^{T}H_{m}(v) +H_{m}^{T}(v)\mathbf{K}^{T} \int_{0}^{v}H_{m}(u) H_{m}^{T}(u) \sigma(C_{m}) \,du \\ &{}+ H_{m}^{T}(v)\mathbf{R}^{T} \int_{0}^{v}H_{m}(u) H_{m}^{T}(u)\rho (C_{m})\,dB(u). \end{aligned}$$

By Lemma 3.3, we get

$$\begin{aligned} C_{m}^{T}H_{m}(v) = & {C_{0}}_{m}^{T}H_{m}(v) +H_{m}^{T}(v)\mathbf{K}^{T} \int_{0}^{v}\tilde{\boldsymbol{\sigma }(C_{m})} H_{m}(u) \,du \\ &{}+H_{m}^{T}(v)\mathbf{R}^{T} \int_{0}^{v}\tilde{\boldsymbol{ \rho}(C_{m})} H_{m}(u)\,dB(u). \end{aligned}$$

Applying Lemmas 3.7 and 3.8, we get

$$C_{m}^{T}H_{m}(v)={C_{0}}_{m}^{T}H_{m}(v)+H_{m}^{T}(v) \mathbf{K}^{T}\tilde{\boldsymbol{\sigma}(C_{m})} \boldsymbol{ \varLambda} H_{m}(v) +H_{m}^{T}(v) \mathbf{R}^{T} \tilde{\boldsymbol{\rho}(C_{m})}\boldsymbol{ \varLambda}_{B} H_{m}(v), $$

then by Lemma 3.4, we derive

$$ C_{m}^{T}H_{m}(v)={C_{0}}_{m}^{T}H_{m}(v)+ \hat{\mathbf{A}}_{0}^{T}H_{m}(v)+\hat {\mathbf{B}}_{0}^{T}H_{m}(v), $$
(18)

where \(\mathbf{A}_{0}=\mathbf{K}^{T}\tilde{\boldsymbol{\sigma}(C_{m})} \boldsymbol{\varLambda}\) and \(\mathbf{B}_{0}=\mathbf{R}^{T} \tilde{\boldsymbol{\rho}(C_{m})} \boldsymbol{\varLambda}_{B}\).

For the nonlinear algebraic system (18), standard root-finding methods from numerical analysis (e.g., Newton-type iterations) can be applied. In this paper, the MATLAB function fsolve is used to solve equation (18).
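A compact end-to-end sketch of the method in Python (our own code, not the authors' MATLAB implementation): the operational matrices follow Lemmas 3.1-3.8, the data coefficients use midpoint quadrature, the nonlinearities are expanded through their values on the BPF subintervals (the disjointness mechanism behind Theorem 4.1), and a plain fixed-point iteration stands in for fsolve. All helper names are ours.

```python
import numpy as np

def haar(i, v):
    """Haar function h_i on [0, 1), with i = 2**l + z (and h_0 = 1)."""
    v = np.asarray(v, dtype=float)
    if i == 0:
        return np.ones_like(v)
    l = int(np.log2(i))
    z = i - 2 ** l
    t = 2 ** l * v - z
    return 2 ** (l / 2) * (((0 <= t) & (t < 0.5)).astype(float)
                           - ((0.5 <= t) & (t < 1.0)).astype(float))

def solve_sivie(m, x0, k, r, sigma, rho, B, iters=400):
    """Fixed-point iteration for the Haar coefficient system (18)."""
    h = 1.0 / m
    mid = (2.0 * np.arange(m) + 1.0) / (2.0 * m)          # cell midpoints
    Q = np.array([haar(i, mid) for i in range(m)])        # Lemma 3.1
    Qinv = Q.T / m                                        # Lemma 3.2: Q^T Q = m I
    P = np.triu(np.full((m, m), h), 1) + (h / 2.0) * np.eye(m)   # Lemma 3.5
    PB = np.zeros((m, m))                                 # Lemma 3.6
    for i in range(m):
        PB[i, i] = B((i + 0.5) * h) - B(i * h)
        PB[i, i + 1:] = B((i + 1.0) * h) - B(i * h)
    Lam = Q @ P @ Q.T / m                                 # Lemma 3.7
    LamB = Q @ PB @ Q.T / m                               # Lemma 3.8
    C0 = Q @ x0(mid) / m                                  # Haar coefficients of x0
    K = Q @ k(mid[:, None], mid[None, :]) @ Q.T / m ** 2  # kernel coefficients
    R = Q @ r(mid[:, None], mid[None, :]) @ Q.T / m ** 2
    tilde = lambda F: Q @ np.diag(Q.T @ F) @ Qinv         # Lemma 3.3
    hat = lambda M: Qinv.T @ np.diag(Q.T @ M @ Q)         # Lemma 3.4
    coeff = lambda f, C: Q @ f(Q.T @ C) / m               # Haar coeffs of f(x_m)
    C = C0.copy()
    for _ in range(iters):                                # iterate C = C0 + A0_hat + B0_hat
        C_new = (C0 + hat(K.T @ tilde(coeff(sigma, C)) @ Lam)
                    + hat(R.T @ tilde(coeff(rho, C)) @ LamB))
        res = np.linalg.norm(C_new - C)
        C = C_new
    xvals = Q.T @ C                                       # values of x_m on each cell
    x_m = lambda v: xvals[np.minimum((np.asarray(v) * m).astype(int), m - 1)]
    return C, x_m, res
```

For Example 6.2 one would call `solve_sivie` with \(x_{0}\equiv1\), \(k=r=e^{-(v-u)}\), \(\sigma=\sin\), \(\rho=\cos\), and a sampled Brownian path; the Volterra (triangular) structure of P and \(\mathbf{P}_{B}\) makes the iteration converge.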

5 Error analysis

In contrast to the articles [1, 3], we give a rigorous and precise error analysis in this section. Firstly, we recall two useful lemmas.

Lemma 5.1

Suppose that the function \(x(u)\), \(u\in[0,1)\), is bounded, and let \(e(u)=x(u)-x_{m}(u)\), where \(x_{m}(u)\) is the m-term HW approximation of \(x(u)\). Then

$$ \Vert e \Vert _{L^{2}([0,1))}^{2}= \int_{0}^{1}e^{2}(u)\,du \leq O \bigl(h^{2} \bigr). $$
(19)

Proof

See [1]. □

Lemma 5.2

Suppose that the function \(x(u,v)\) defined on \(\mathbf{D}=[0,1)\times[0,1)\) is bounded, and let \(e(u,v)=x(u,v)-x_{m}(u,v)\), where \(x_{m}(u,v)\) is the m-term HW approximation of \(x(u,v)\). Then

$$ \Vert e \Vert _{L^{2}(\mathbf{D})}^{2}= \int_{0}^{1} \int_{0}^{1}e^{2}(u,v)\,du\,dv \leq O \bigl(h^{2} \bigr). $$
(20)

Proof

See [1]. □

Secondly, let \(e_{m}(v)=x(v)-x_{m}(v)\), where \(x_{m}(v)\), \(x_{0_{m}}(v)\), \(k_{m}(u,v)\), and \(r_{m}(u,v)\) are the m-term HW approximations of \(x(v)\), \(x_{0}(v)\), \(k(u,v)\), and \(r(u,v)\), respectively. Then

$$ \begin{aligned}[b]e_{m}(v) = {}& x(v)-x_{m}(v) \\ ={} & x_{0}(v)-x_{0_{m}}(v) \\ & + \int_{0}^{v} \bigl[{k(u,v)} {\sigma \bigl(x(u) \bigr)}-k_{m}(u,v){\sigma \bigl(x_{m}(u) \bigr)} \bigr]\,du \\ & + \int_{0}^{v} \bigl[{r(u,v)} {\rho \bigl(x(u) \bigr)}-{r_{m}(u,v)} {\rho \bigl(x_{m}(u) \bigr)} \bigr] \,dB(u).\end{aligned} $$
(21)

Lastly, the main convergence theorem is proved.

Theorem 5.1

Suppose that the analytic functions σ and ρ satisfy the following conditions:

  (i)

    \(|\sigma(x)-\sigma(y)| \leq l_{1}|x-y|\), \(| \rho(x)-\rho(y)|\leq l_{3} |x-y|\);

  (ii)

    \(|\sigma(x)|\leq l_{2}\), \(|\rho(y)|\leq l_{4}\);

  (iii)

    \(|k(u,v)|\leq l_{5}\), \(|r(u,v)|\leq l_{6}\),

where \(x,y\in\mathbb{R}\) and the constants \(l_{i}>0\), \(i=1, 2, \ldots, 6\). Then

$$\begin{aligned} \int_{0}^{T} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr)\,dv= \int _{0}^{T}\mathbb{E} \bigl( \bigl\vert x(v)-x_{m}(v) \bigr\vert ^{2} \bigr)\,dv\leq O \bigl(h^{2} \bigr), \quad T\in[0,1). \end{aligned}$$

Proof

For (21), we have

$$\begin{aligned} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr) \leq& 3 \biggl[ \mathbb{E} \bigl( \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \bigr) \\ &{}+ \mathbb{E} \biggl( \biggl\vert \int_{0}^{v} \bigl({k(u,v)} {\sigma \bigl(x(u) \bigr)}-{k_{m}(u,v)} {\sigma \bigl(x_{m}(u) \bigr)} \bigr)\,du \biggr\vert ^{2} \biggr) \\ &{}+ \mathbb{E} \biggl( \biggl\vert \int_{0}^{v} \bigl({r(u,v)} {\rho \bigl(x(u) \bigr)}-{r_{m}(u,v)} {\rho \bigl(x_{m}(u) \bigr)} \bigr) \,dB(u) \biggr\vert ^{2} \biggr) \biggr]. \end{aligned}$$

On the basis of Lipschitz continuity, the Itô isometry, and the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr) \leq& 3 \biggl[ \mathbb{E} \bigl( \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \bigr) \\ &{}+ \mathbb{E} \biggl( \int_{0}^{v} \bigl\vert {k(u,v)} {\sigma \bigl(x(u) \bigr)}-{k_{m}(u,v)} {\sigma \bigl(x_{m}(u) \bigr)} \bigr\vert ^{2}\,du \biggr) \\ &{}+ \mathbb{E} \biggl( \int_{0}^{v} \bigl\vert {r(u,v)} {\rho \bigl(x(u) \bigr)}-{r_{m}(u,v)} {\rho \bigl(x_{m}(u) \bigr)} \bigr\vert ^{2}\,du \biggr) \biggr] \\ = & 3 \biggl[\mathbb{E} \bigl( \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \bigr) \\ &{}+ \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert {k(u,v)} \bigl( \sigma \bigl(x(u) \bigr)-\sigma \bigl(x_{m}(u) \bigr) \bigr)\\ &{} + \sigma \bigl(x_{m}(u) \bigr) \bigl({k(u,v)}-{k_{m}(u,v)} \bigr) \bigr\vert ^{2} \bigr)\,du \\ &{}+ \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert {r(u,v)} \bigl(\rho \bigl(x(u) \bigr)-\rho \bigl(x_{m}(u) \bigr) \bigr) \\ &{}+\rho \bigl(x_{m}(u) \bigr) \bigl( {r(u,v)}-{r_{m}(u,v)} \bigr) \bigr\vert ^{2} \bigr)\,du \biggr] \\ \leq& 3 \biggl[ \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2} \\ &{}+ 2{l_{1}}^{2}{l_{5}}^{2} \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du +2{l_{2}}^{2} \int_{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du \\ &{}+ 2{l_{3}}^{2}{l_{6}}^{2} \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du +2{l_{4}}^{2} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du \biggr]. \end{aligned}$$

Then we can get

$$\begin{aligned} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr) \leq{}& 3 \biggl[ \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2}+2{l_{2}}^{2} \int _{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du \\ & + 2{l_{4}}^{2} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du \biggr] \\ & +6 \bigl({l_{1}}^{2}{l_{5}}^{2}+{l_{3}}^{2}{l_{6}}^{2} \bigr) \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du,\end{aligned} $$

or

$$\begin{gathered} \mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr)\leq\beta(v)+\alpha \int_{0}^{v}\mathbb{E} \bigl( \bigl\vert e_{m}(u) \bigr\vert ^{2} \bigr)\,du, \\ \beta(v)=3 \biggl[ \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2}\\ \phantom{\beta(v)=}{}+2{l_{2}}^{2} \int _{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du +2{l_{4}}^{2} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du \biggr], \\ \alpha=6 \bigl({l_{1}}^{2}{l_{5}}^{2}+{l_{3}}^{2}{l_{6}}^{2} \bigr).\end{gathered} $$

Letting \(f(v)=\mathbb{E} ( |e_{m}(v) |^{2} )\), we get

$$f(v)\leq\beta(v)+\alpha \int_{0}^{v}f(\tau)\,d\tau, \quad\tau\in[0,v). $$

By Gronwall’s inequality, it follows that

$$f(v)\leq\beta(v)+\alpha \int_{0}^{v}e^{\alpha(v-\tau)}\beta(\tau )\,d\tau, \quad v\in[0,1). $$

Then

$$\begin{aligned}& \int_{0}^{T}f(v)\,dv \\& \quad = \int_{0}^{T}\mathbb{E} \bigl( \bigl\vert e_{m}(v) \bigr\vert ^{2} \bigr)\,dv \\& \quad \leq \int_{0}^{T} \biggl(\beta(v)+\alpha \int_{0}^{v}e^{\alpha (v-\tau)}\beta(\tau)\,d\tau \biggr)\,dv \\& \quad = \int_{0}^{T}\beta(v)\,dv+\alpha \int_{0}^{T} \int_{0}^{v}e^{\alpha (v-\tau)}\beta(\tau)\,d\tau \,dv \\& \quad \leq \int_{0}^{T}\beta(v)\,dv+\alpha e^{\alpha T} \int_{0}^{T} \int _{0}^{v}\beta(\tau)\,d\tau \,dv \\& \quad =3 \int_{0}^{T} \bigl\vert x_{0}(v)-x_{0_{m}}(v) \bigr\vert ^{2}\,dv +6{l_{2}}^{2} \int_{0}^{T} \int_{0}^{v} \bigl\vert {k(u,v)}-{k_{m}(u,v)} \bigr\vert ^{2}\,du\,dv \\& \qquad{} + 6{l_{4}}^{2} \int_{0}^{T} \int_{0}^{v} \bigl\vert {r(u,v)}-{r_{m}(u,v)} \bigr\vert ^{2}\,du\,dv \\& \qquad{} + \alpha e^{\alpha T} \biggl[3 \int_{0}^{T} \int_{0}^{v} \bigl\vert x_{0}( \tau)-x_{0_{m}}(\tau) \bigr\vert ^{2}\,d\tau \,dv\\& \qquad{} +6{l_{2}}^{2} \int_{0}^{T} \int_{0}^{v} \int_{0}^{\tau} \bigl\vert {k(u,\tau )}-{k_{m}(u, \tau)} \bigr\vert ^{2}\,du\,d\tau \,dv \\& \qquad{} + 6{l_{4}}^{2} \int_{0}^{T} \int_{0}^{v} \int_{0}^{\tau} \bigl\vert {r(u, \tau)}-{r_{m}(u,\tau)} \bigr\vert ^{2}\,du\,d\tau \,dv \biggr] \\& \quad =3I_{1}+6l^{2}_{2}I_{2}+6l^{2}_{4}I_{3}+ \alpha e^{\alpha T} \bigl[3I_{4}+ 6l^{2}_{2}I_{5} + 6l^{2}_{4}I_{6} \bigr]. \end{aligned}$$

By using (19) and (20), we have

$$I_{i} \leq w_{i}h^{2}, \quad i=1, 2, \ldots, 6. $$

So we can get

$$\begin{aligned} \int_{0}^{T}\mathbb{E} \bigl\vert e_{m}(v) \bigr\vert ^{2}\,dv &\leq \bigl[ \bigl(3w_{1}+6{l_{2}}^{2}w_{2}+6{l_{4}}^{2}w_{3} \bigr)+\alpha e^{\alpha T} \bigl(3w_{4}+6{l_{2}}^{2}w_{5}+6{l_{4}}^{2}w_{6} \bigr) \bigr]h^{2}\\&\leq O \bigl(h^{2} \bigr),\end{aligned} $$

where constant \(w_{i}>0\), \(i=1, 2, \ldots, 6\).

The proof is completed. □

6 Numerical examples

In this section, some examples are given to verify the validity and rationality of the above method.

Example 6.1

Consider the nonlinear SIVIE [6, 8]

$$ x(v)=x_{0}(v)-a^{2} \int_{0}^{v}x(u) \bigl(1-x^{2}(u) \bigr) \,du +a \int_{0}^{v} \bigl(1-x^{2}(u) \bigr) \,dB(u), \quad v\in[0,1), $$

where

$$ x(v)=\tanh \bigl(aB(v)+ \mathrm{arctanh}(x_{0}) \bigr). $$

In this example, \(a=\frac{1}{30}\) and \(x_{0}(v)=\frac{1}{10}\). The error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals of Example 6.1 for \(m=2^{4}\) and \(m=2^{5}\) are shown in Table 1 and Table 2, respectively. The error means and standard deviations are obtained from \(10^{4}\) trajectories. Compared with Table 2 in [8], \(E_{m}\) is smaller, and the confidence interval is narrower at the same confidence level. Moreover, the comparison of exact and approximate solutions of Example 6.1 for \(m=2^{4}\) and \(m=2^{5}\) is displayed in Fig. 1 and Fig. 2, respectively.
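Independently of the wavelet method, the stated exact solution of Example 6.1 can be cross-checked against a straightforward Euler–Maruyama simulation of the underlying SDE \(dx=-a^{2}x(1-x^{2})\,dv+a(1-x^{2})\,dB(v)\) (a Python sketch of our own, with an arbitrary seed and step count):

```python
import numpy as np

# Euler-Maruyama reference simulation for Example 6.1; this checks the
# exact solution formula, not the Haar-wavelet method itself.
rng = np.random.default_rng(42)
a, x0 = 1.0 / 30.0, 0.1
n = 2 ** 12
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments
B = np.concatenate([[0.0], np.cumsum(dB)])    # Brownian path on the grid

x = np.empty(n + 1)
x[0] = x0
for i in range(n):                            # Euler-Maruyama step
    x[i + 1] = (x[i] - a ** 2 * x[i] * (1 - x[i] ** 2) * dt
                + a * (1 - x[i] ** 2) * dB[i])

exact = np.tanh(a * B + np.arctanh(x0))       # stated exact solution
```

For this small diffusion coefficient, the simulated and exact paths agree closely along the whole trajectory.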

Figure 1: \(m=2^{4}\), simulation result of the approximate solution and exact solution for Example 6.1

Figure 2: \(m=2^{5}\), simulation result of the approximate solution and exact solution for Example 6.1

Table 1 When \(m=2^{4}\), error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals are given in this table
Table 2 When \(m =2^{5}\), error means \(E_{m}\), error standard deviations \(E_{s}\), and confidence intervals are given in this table

Example 6.2

Consider the nonlinear SIVIE [17, 18]

$$ x(v)=1+ \int_{0}^{v}e^{-(v-u)}\sin \bigl(x(u) \bigr) \,du + \int_{0}^{v}e^{-(v-u)}\cos \bigl(x(u) \bigr) \,dB(u), \quad v\in[0,1). $$

The mean and approximate solutions of Example 6.2 for \(m=2^{4}\) and \(m=2^{5}\) are respectively given in Fig. 3 and Fig. 4, where the mean solution is obtained from \(10^{4}\) trajectories.

Figure 3: \(m=2^{4}\), simulation result of the approximate solution and mean solution for Example 6.2

Figure 4: \(m=2^{5}\), simulation result of the approximate solution and mean solution for Example 6.2