1 Introduction

Traditionally, a dominant theme in practical applications has been the existence of solutions to deterministic fractional differential equations and fractional stochastic differential equations (FSDEs) driven by Brownian motion, owing to their role in uncovering hidden properties of the dynamics of complex systems in viscoelasticity, diffusion, mechanics, electromagnetism, control, signal processing, and physics. For example, in [28], the authors applied the concept of Caputo's H-differentiability to solve fuzzy fractional differential equations with uncertainty. Benchaabane and Sakthivel [8] used fractional calculus, semigroup theory, and stochastic analysis techniques to obtain, under a new set of sufficient conditions, the unique mild solution for a class of nonlinear fractional Sobolev-type SDEs with non-Lipschitz coefficients in Hilbert spaces. For further work on FSDEs and fractional differential equations (FDEs), we refer to [5, 6, 12, 15, 19, 26, 29, 31, 32] and the references therein.

However, random perturbations with long-range dependence are abundant in a wide range of physical phenomena, such as hydrology, mathematical finance, medicine, and communication networks [21, 33]. Correspondingly, fractional Brownian motion (fBm) with Hurst index \(H\in (1/2,1)\) has been suggested as a replacement for standard Brownian motion in the study of fractional stochastic systems. Under a new set of sufficient conditions, Mourad et al. [20] investigated the approximate controllability of Sobolev-type stochastic fractional control systems with fBm by using semigroup theory, fractional calculus, stochastic analysis, and Banach's fixed point theorem. Pei and Xu [28] derived the unique solution for non-Lipschitz SDEs with fBm by using successive approximations. For a substantial body of published studies covering the existence and uniqueness of solutions to FSDEs driven by fBm, see [23, 24, 35] and the references therein.

On the other hand, recent years have seen a rapid development of the theory of impulsive effects in many evolutionary processes, such as telecommunications, finance, electronics, economics, and mechanics, in which states are subject to abrupt changes at discrete moments of time whose duration is negligible compared with that of the whole process [22]. In light of these developments in the theory of SDEs, the existence of impulsive effects can hardly be ignored. Therefore, several studies have documented the effect of impulses on SDEs driven by Brownian motion [18] and by fBm; see [10, 11, 13, 14] and the references therein.

To the best of our knowledge, no work has yet been reported in the literature on impulsive fractional stochastic differential equations driven by fBm. Motivated by this fact and in order to close this gap, in this paper we initiate research on such equations. The specific objective of this study is to prove the existence and uniqueness of solutions to the following impulsive stochastic fractional differential equations (ISFDEs) driven by a standard Brownian motion and an independent fBm:

$$ \textstyle\begin{cases} dX(t)=b(t, X(t))\,dt+\sigma _{1}(t, X(t))\,dW(t)+g(t, X(t))\,dW^{H}(t) \\ \hphantom{dX(t)={}}{}+\sigma _{2}(t, X(t))(dt)^{\alpha },\quad t\in [0, T], t\neq t_{j}, 0< \alpha < 1, \\ \triangle X(t_{j})=X(t^{+}_{j})-X(t^{-}_{j})=I_{j}(X(t_{j})),\quad j=1,2,\ldots,m, \\ X(0)=X_{0}\in \mathbb{R}^{d}, \end{cases} $$
(1)

where \(T>0\) is a fixed horizon, and \(W^{H}\) is an m-dimensional fBm with \(1/2< H<1\), independent of an m-dimensional standard Brownian motion \(W(t)\), \(t\in [0,T]\). In what follows, \((\varOmega , \mathcal{F}, P)\) is a complete probability space with probability measure P on Ω, and \(\{\mathcal{F}_{t}\}_{t\geqslant 0}\) is the filtration generated by \(\{W^{H}(s),W(s), s\in [0,t] \}\), satisfying the usual conditions, that is, it is right continuous, and \(\mathcal{F}_{0}\) contains all P-null sets. We assume that \(b, \sigma _{2}:[0, T]\times \mathbb{R}^{d}\longrightarrow \mathbb{R}^{d}\) and \(\sigma _{1}, g:[0, T]\times \mathbb{R}^{d}\longrightarrow \mathbb{R}^{d\times m}\) are appropriate measurable functions. Here \(I_{j}\in C(\mathbb{R}^{d},\mathbb{R}^{d})\) (\(j=1,2,\ldots,m\)) are bounded functions, the fixed times \(t_{j}\) satisfy \(0=t_{0}< t_{1}< t_{2}<\cdots<t _{m}<T\), and \(X(t^{+}_{j})\) and \(X(t^{-}_{j})\) denote the right and left limits of \(X(t)\) at time \(t_{j}\). Further, \(\triangle X(t_{j})=X(t ^{+}_{j})-X(t^{-}_{j})\) is the jump in the state X at time \(t_{j}\), whose size is given by \(I_{j}\). Finally, \(X_{0}\) is an \(\mathcal{F} _{0}\)-measurable random variable satisfying \(\mathbb{E}|X_{0}|^{2}< \infty \).

The class of equations (1) has attracted our attention because of its applications to complex dynamic processes in science and engineering and to the modeling of many phenomena in ecological and epidemiological population dynamics perturbed by unavoidable noises under multiple time scales [27]. Moreover, Eqs. (1) can serve as a model for many evolutionary processes in which the noises are correlated and can be modeled by fBm.

To summarize, this work is the first attempt to establish the existence and uniqueness of solutions to ISFDEs driven by fBm. We obtain our results on Eqs. (1) by the Carathéodory approximation [2, 3] under a non-Lipschitz condition of Taniguchi type [34], which contains the Lipschitz condition as a particular case. Moreover, the results are new even when the coefficients of (1) satisfy the Lipschitz condition or the non-Lipschitz condition used in [4], both of which are particular cases of our conditions. Finally, the obtained results extend and improve some published results of [1, 4, 28, 36].

This paper is organized as follows. In Sect. 2, we provide the necessary notions and preliminaries on pathwise integrals with respect to fBm and the hypotheses needed throughout the paper. In Sect. 3, we give our main results on the existence and uniqueness of solutions to the ISFDEs (1) driven by a standard Brownian motion and an independent fBm, followed by some remarks and corollaries.

2 Preliminaries

In this section, we review some basic notions and notation on the backward stochastic integral with respect to fBm; for more details, we refer to [9, 16, 25]. An fBm with Hurst index \(H\in ( \frac{1}{2},1)\) is a centered Gaussian process \(W^{H}=\{W^{H}(t)\}_{0 \leqslant t \leqslant T}\) with covariance function

$$ R(r,s)=\frac{1}{2}\bigl(s^{2H}+r^{2H}- \vert r-s \vert ^{2H}\bigr). $$
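For intuition, the covariance R can be used directly to simulate fBm paths on a uniform grid via a Cholesky factorization of the covariance matrix. The sketch below is our illustration only and plays no role in the analysis; the helper name `fbm_cholesky` and the choice \(H=0.75\) are ours.

```python
import numpy as np

def fbm_cholesky(n_steps, T, H, rng):
    """Sample one fBm path on a uniform grid via Cholesky factorization
    of the covariance R(r, s) = (s^{2H} + r^{2H} - |r - s|^{2H}) / 2."""
    t = np.linspace(T / n_steps, T, n_steps)          # grid, excluding t = 0
    r, s = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (s**(2 * H) + r**(2 * H) - np.abs(r - s)**(2 * H))
    L = np.linalg.cholesky(R)                         # R is positive definite
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], path))              # W^H(0) = 0

rng = np.random.default_rng(0)
W_H = fbm_cholesky(200, 1.0, 0.75, rng)
```

Note that \(R(t,t)=t^{2H}\) and \(R(t,t)+R(s,s)-2R(t,s)=|t-s|^{2H}\), so the factorization reproduces the variance of the process and of its increments exactly on the grid.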

Let \(\psi : [0,\infty )\times [0,\infty )\longrightarrow [0,\infty )\) be given as

$$ \psi (r,s)=H(2H-1) \vert r-s \vert ^{2H-2},\quad r,s\in \mathbb{R}^{+}, $$

where \(H\in (\frac{1}{2},1)\), and define the space of Borel-measurable functions \(h:[0,\infty )\longrightarrow [0,\infty )\) by

$$ L^{2}_{\psi }\bigl(\mathbb{R}^{+}\bigr)= \biggl\{ h: \Vert h \Vert ^{2}_{\psi }= \int _{0} ^{\infty } \int _{0}^{\infty }h(r)h(s)\psi (r,s)\,ds\,dr< \infty \biggr\} , $$

which is a separable Hilbert space under the inner product

$$ \langle h_{1},h_{2}\rangle _{\psi }= \int _{0}^{\infty } \int _{0}^{\infty }h_{1}(r)h_{2}(s) \psi (r,s)\,ds\,dr,\quad h_{1},h_{2}\in L^{2}_{\psi } \bigl( \mathbb{R}^{+}\bigr). $$
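As a sanity check on ψ, the squared ψ-norm of the indicator \(h=\mathbf{1}_{[0,t]}\) should equal \(t^{2H}=\mathbb{E}|W^{H}(t)|^{2}\). Since \(\int _{0}^{t}\psi (r,s)\,ds=H (r^{2H-1}+(t-r)^{2H-1} )\), the double integral reduces to a one-dimensional one. The sketch below (our illustration; the helper name `psi_norm_sq_indicator` is ours) evaluates it numerically.

```python
import numpy as np

def psi_norm_sq_indicator(t, H, n=200_000):
    """Numerically evaluate ||1_[0,t]||_psi^2 using the closed-form
    inner integral  int_0^t psi(r, s) ds = H (r^{2H-1} + (t-r)^{2H-1}),
    followed by trapezoidal quadrature in r."""
    r = np.linspace(0.0, t, n + 1)
    inner = H * (r**(2 * H - 1) + (t - r)**(2 * H - 1))
    return float(np.sum(0.5 * (inner[1:] + inner[:-1]) * np.diff(r)))
```

For \(H=3/4\) and \(t=2\) this returns a value close to \(t^{2H}=2^{3/2}\), matching the variance obtained from the covariance R above.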

For any integer \(n\geqslant 1\), denote by \(\mathcal{S}\) the set of smooth cylindrical random variables of the form

$$ F=h\bigl(W^{H}(\phi _{1}), W^{H}(\phi _{2}),\ldots, W^{H}(\phi _{n})\bigr), $$

where \(h\in \mathcal{C}^{\infty }_{b}(\mathbb{R}^{n})\) (i.e., h and its partial derivatives of all orders are bounded), \(\phi _{i}\in \mathcal{H}\) (\(i=1,2,\ldots,n\)), and \(\mathcal{H}\) is the Hilbert space [7] defined as the completion of the measurable functions ϕ such that \(\|\phi \|^{2}_{\psi }< \infty \). Denote by \(\mathbb{D}^{1,p}( \mathcal{H})\) (\(p>0\)) the Sobolev space of \(\mathcal{H}\)-valued random variables with subspace \(\mathbb{D}^{1,p}(|\mathcal{H}|)\).

The Malliavin ψ-derivative of a smooth and cylindrical random variable \(F\in \mathcal{S}\) is defined as the \(\mathcal{H}\)-valued random variable

$$ D^{\psi }_{t}F= \int _{\mathbb{R}^{+}}\psi (t,v)D_{v}^{H}F\,dv, $$

where

$$ D^{H}F=\sum_{i=1}^{n} \frac{\partial h}{\partial x_{i}}\bigl(W^{H}(\phi _{1}), W^{H}(\phi _{2}),\ldots, W^{H}(\phi _{n})\bigr)\phi _{i}. $$

Definition 2.1

([30])

Let \(\eta (t)\), \(t\in [0,T]\), be a stochastic process with integrable trajectories. The backward stochastic integral \(\int _{0}^{T}\eta (u)\, d^{+} W^{H}(u)\) of \(\eta (t)\) with respect to \(W^{H}(t)\) is given as

$$ \lim_{\epsilon \to 0} \int _{0}^{T}\eta (u) \biggl[ \frac{W^{H}(u-\epsilon )-W^{H}(u)}{\epsilon } \biggr]\,du, $$

provided that the limit exists in probability.

From Remark 1 and Lemma 2 in [36] we obtain the following lemma.

Lemma 2.1

Let \(W^{H}(t)\) be an fBm with Hurst index \(H>\frac{1}{2}\), and let \(\eta (t)\in \mathcal{L}_{\psi }[0, T]\cap \mathbb{D}^{1,2}(|\mathcal{H}|)\) be a stochastic process. Then for every \(T<\infty \),

$$ \mathbb{E} \biggl[ \int _{0}^{T}\eta (u)\, d^{+}W^{H}(u) \biggr]^{2}\leqslant 2HT^{2H-1}\mathbb{E} \biggl[ \int _{0}^{T} \bigl\vert \eta (u) \bigr\vert ^{2}\,du \biggr]+4T \mathbb{E} \int _{0}^{T}\bigl[D^{\psi }_{u} \eta (u)\bigr]^{2}\,du. $$

The following definition introduces integration with respect to \((dt)^{\beta }\); we refer the reader to [17] for details.

Definition 2.2

Let \(f(t)\) be a continuous function. Then its integral with respect to \((dt)^{\beta }\), \(0< \beta \leqslant 1\), is defined by

$$ \int _{0}^{t} f(s) (ds)^{\beta } = \beta \int _{0}^{t} (t-s)^{ \beta -1} f(s) \,ds. $$
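Definition 2.2 can be sanity-checked numerically. The substitution \(v=(t-s)^{\beta }\) removes the integrable singularity at \(s=t\) and turns the right-hand side into the ordinary integral \(\int _{0}^{t^{\beta }} f(t-v^{1/\beta })\,dv\). The sketch below (our illustration; the helper name `dt_beta_integral` and the test integrands are ours) evaluates this by the midpoint rule and compares with the closed form \(\int _{0}^{t} s\,(ds)^{\beta }=t^{\beta +1}/(\beta +1)\).

```python
def dt_beta_integral(f, t, beta, n=100_000):
    """Approximate  int_0^t f(s) (ds)^beta = beta int_0^t (t-s)^{beta-1} f(s) ds
    via the substitution v = (t - s)^beta, which removes the singularity:
    the integral becomes  int_0^{t^beta} f(t - v^{1/beta}) dv  (midpoint rule)."""
    top = t**beta
    h = top / n
    total = 0.0
    for k in range(n):
        v = (k + 0.5) * h            # midpoint of each subinterval
        total += f(t - v**(1.0 / beta))
    return total * h

# Closed form for f(s) = s:  int_0^t s (ds)^beta = t^{beta+1} / (beta + 1)
approx = dt_beta_integral(lambda s: s, t=1.0, beta=0.5)
```

The same routine reproduces \(\int _{0}^{t}(ds)^{\beta }=t^{\beta }\) for \(f\equiv 1\), which is the special case used in Eq. (2) below with \(\beta =\alpha \).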

Similar to Definition 2.2 in [1], the definition of the unique solution to Eq. (1) can be given as follows.

Definition 2.3

An \(\mathbb{R}^{d}\)-valued stochastic process \(X(t)\), \(t\in [0,T]\), is called a unique solution to Eq. (1) if:

  1. (i)

    \(X(t)\) is \(\mathcal{F}_{t}\)-adapted;

  2. (ii)

    For every \(t\in [0, T]\), \(X(t)\) satisfies the following integral equation:

    $$\begin{aligned} X(t) =&X_{0}+ \int _{0}^{t} b\bigl(s, X(s)\bigr)\,ds+ \int _{0}^{t}\sigma _{1}\bigl(s, X(s)\bigr)\,dW(s)+ \int _{0}^{t}g\bigl(s, X(s) \bigr)\, d^{+}W^{H}(s) \\ &{}+\alpha \int _{0}^{t}\frac{\sigma _{2}(s,X(s))}{(t-s)^{1-\alpha }}\,ds+ \sum _{0< t_{j}< t}I_{j}\bigl(X(t_{j}) \bigr) \quad \mathbb{P}\mbox{-a.s.}; \end{aligned}$$
    (2)
  3. (iii)

    For any other solution \(Y(t)\), we have \(P\{X(t)=Y(t), \forall 0\leqslant t \leqslant T\}=1\).

To obtain the main results, we impose the following assumptions on the coefficients b, \(\sigma _{1}\), g, and \(\sigma _{2}\).

  1. (H1)

For all \(t\in [0,T]\), assuming \(b(t,\cdot )\), \(\sigma _{1}(t, \cdot )\), \(g(t,\cdot )\), \(\sigma _{2}(t,\cdot )\in \mathcal{L}_{\psi }[0, T]\cap \mathbb{D}^{1,2}(|\mathcal{H}|)\), we have

    $$ \bigl\vert b(t, X) \bigr\vert ^{2}+ \bigl\vert \sigma _{1}(t, X) \bigr\vert ^{2}+ \bigl\vert g(t, X) \bigr\vert ^{2}+ \bigl\vert D^{\psi }_{t}g(t, X) \bigr\vert ^{2}+ \bigl\vert \sigma _{2}(t, X) \bigr\vert ^{2}\leqslant \mathcal{R} \bigl(t, \vert X \vert ^{2}\bigr), $$

where \(\mathcal{R}:[0,+\infty )\times \mathbb{R}^{+}\longrightarrow \mathbb{R}^{+}\) is locally integrable in t for each fixed \(v\geqslant 0\) and continuous, nondecreasing, and concave in v for each fixed \(t\in [0,T]\). Further, the integral equation

    $$ v(t)=v_{0}+K \int _{0}^{t}\mathcal{R} \bigl(s,v(s)\bigr)\,ds $$

    has a global solution on \([0,T]\) for all \(K>0\) and \(v_{0}\geqslant 0\).

  2. (H2)

For all \(t\in [0,T]\), assuming \(b(t,\cdot )\), \(\sigma _{1}(t, \cdot )\), \(g(t,\cdot )\), \(\sigma _{2}(t,\cdot )\in \mathcal{L}_{\psi }[0, T]\cap \mathbb{D}^{1,2}(|\mathcal{H}|)\), we have

    $$\begin{aligned}& \bigl\vert b(t, X)-b(t, Y) \bigr\vert ^{2}+ \bigl\vert \sigma _{1}(t, X)-\sigma _{1}(t, Y) \bigr\vert ^{2}+ \bigl\vert g(t, X)-g(t, Y) \bigr\vert ^{2} \\& \quad {} + \bigl\vert D^{\psi }_{t}\bigl(g(t, X)-g(t, Y)\bigr) \bigr\vert ^{2}+ \bigl\vert \sigma _{2}(t, X)-\sigma _{2}(t, Y) \bigr\vert ^{2}\leqslant \mathcal{G} \bigl(t, \vert X-Y \vert ^{2}\bigr), \end{aligned}$$

where \(\mathcal{G}:[0,+\infty )\times \mathbb{R}^{+}\longrightarrow \mathbb{R}^{+}\) is locally integrable in t for each fixed \(v\geqslant 0\) and continuous, nondecreasing, and concave in v for each fixed \(t\in [0,T]\) such that \(\mathcal{G}(t,0)=0\) and \(\int _{0+} \frac{1}{ \mathcal{G}(t,v)}\,dv=+\infty \). Moreover, for every \(\lambda >0\) and every nonnegative continuous function \(\mathcal{M}(t)\) such that

    $$ \textstyle\begin{cases} \mathcal{M}(t)\leqslant \lambda \int _{0}^{t} \mathcal{G}(s, \mathcal{M}(s))\,ds, \quad t\in [0,T], \\ \mathcal{M}(0)=0, \end{cases} $$

    we have \(\mathcal{M}(t)\equiv 0\).
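For orientation, two standard concave moduli satisfying the requirements in (H2) (these examples are ours, given in the spirit of Taniguchi-type conditions, and are not part of the hypotheses) are

$$ \mathcal{G}(t,v)=Kv \quad \text{and}\quad \mathcal{G}(t,v)= \textstyle\begin{cases} Kv\log (1/v), & 0\leqslant v\leqslant \delta , \\ K\delta \log (1/\delta )+K \bigl(\log (1/\delta )-1 \bigr) (v-\delta ), & v>\delta , \end{cases} $$

where \(K>0\) and \(\delta \in (0,e^{-1})\). Both are continuous, nondecreasing, and concave in v, vanish at \(v=0\), and satisfy \(\int _{0+}\frac{1}{\mathcal{G}(t,v)}\,dv=+\infty \); the first recovers the classical Lipschitz condition.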

  3. (H3)

There exist positive constants \(d_{j}\) (\(j=1,2,\ldots,m\)) such that

    $$ \bigl\vert I_{j}(X)-I_{j}(Y) \bigr\vert \leqslant d_{j} \vert X-Y \vert $$

for all \(X,Y\in \mathcal{L}_{\psi }[0, T]\cap \mathbb{D}^{1,2}(|\mathcal{H}|)\) and \(|I_{j}(0)|=0\).

3 Main results

In this section, we present the existence and uniqueness of solutions to Eq. (1).

Theorem 3.1

Let hypotheses (H1)–(H3) be satisfied, and let \(X_{0}\) be independent of the Brownian motion \(W(s)\) and the fBm \(W^{H}(s)\) (\(s>0\), \(H>1/2\)) and have finite second moment. Then there exists a unique solution \(X(t)\) to Eq. (1) on \([0,T]\), provided that \(10 m\sum_{j=1}^{m}(d_{j})^{2}<1\).

Proof

To begin with, we introduce the Carathéodory approximation as follows. For any integer \(n\geqslant 1\), define \(X_{n}(t)=X(0)=X_{0}\) for all \(-1\leqslant t \leqslant 0\) and

$$\begin{aligned} X_{n}(t) =&X_{0}+ \int _{0}^{t} b \biggl(s,X_{n} \biggl(s-\frac{1}{n} \biggr) \biggr)\,ds+ \int _{0}^{t} \sigma _{1} \biggl(s,X_{n} \biggl(s-\frac{1}{n} \biggr) \biggr)\,dW(s) \\ &{}+ \int _{0}^{t}g \biggl(s,X_{n} \biggl(s-\frac{1}{n} \biggr) \biggr)\, d ^{+}W^{H}(s)+ \alpha \int _{0}^{t} \frac{\sigma _{2} (s,X_{n} (s- \frac{1}{n} ) )}{(t-s)^{1-\alpha }}\,ds \\ &{}+\sum_{0< t_{j}< t}I_{j} \biggl(X_{n} \biggl(t_{j}-\frac{1}{n} \biggr) \biggr),\quad 0\leqslant t \leqslant T. \end{aligned}$$
(3)
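Before proceeding, here is a minimal numerical sketch of the scheme (3) in one dimension with the noise terms switched off (\(\sigma _{1}=g=\sigma _{2}\equiv 0\)), so that the iterates can be compared with the exact impulsive ODE solution. The drift \(b(t,x)=x\), the single impulse \(I_{1}(x)=0.1x\) at \(t_{1}=0.5\), and all numerical parameters are our illustrative choices, not data from the paper.

```python
def caratheodory(n, b, impulses, x0, T, n_grid=4000):
    """One Caratheodory iterate X_n for dX = b(t, X) dt with jumps
    Delta X(t_j) = I_j(X(t_j)).  The delayed argument X_n(s - 1/n)
    makes each iterate computable by left-point quadrature."""
    dt = T / n_grid
    ts = [k * dt for k in range(n_grid + 1)]
    xs = [x0]

    def delayed(k):
        # X_n(t_k - 1/n); by convention X_n(s) = x0 for s <= 0
        s = ts[k] - 1.0 / n
        if s <= 0.0:
            return x0
        return xs[min(int(s / dt), len(xs) - 1)]

    fired = set()
    for k in range(n_grid):
        x = xs[-1] + b(ts[k], delayed(k)) * dt
        for j, (tj, Ij) in enumerate(impulses):
            if tj <= ts[k + 1] and j not in fired:
                x += Ij(delayed(k))      # the jump also uses the delayed state
                fired.add(j)
        xs.append(x)
    return ts, xs

# Illustrative run: b(t, x) = x, one jump of size 0.1 * x at t_1 = 0.5;
# the limiting impulsive ODE satisfies X(t) = e^t before the jump
# and X(t) = 1.1 e^t after it.
ts, xs = caratheodory(n=200, b=lambda t, x: x,
                      impulses=[(0.5, lambda x: 0.1 * x)],
                      x0=1.0, T=1.0)
```

As n and the grid are refined, the terminal value `xs[-1]` approaches \(1.1e\approx 2.99\), the exact value of the limiting impulsive ODE at \(t=1\), illustrating the convergence established in Parts 1–3 below.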

We split the proof into the following three parts.

Part 1. For all \(t\in [0,T]\), the sequence \(\{X_{n}(t)\}_{n \geqslant 1}\) is bounded.

By the Hölder and Burkholder–Davis–Gundy (B–D–G) inequalities and Lemma 2.1, from Eq. (3) we have

$$\begin{aligned} \mathbb{E} \Bigl(\sup_{0\leqslant s\leqslant t} \bigl\vert X_{n}(s) \bigr\vert ^{2} \Bigr) \leqslant &6 \mathbb{E} \vert X_{0} \vert ^{2}+6T\mathbb{E} \int _{0}^{T} \biggl\vert b \biggl(s,X_{n} \biggl(s-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,ds \\ &{}+24\mathbb{E} \int _{0}^{t} \biggl\vert \sigma _{1} \biggl(s,X_{n} \biggl(s- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,ds \\ &{}+12HT^{2H-1}\mathbb{E} \int _{0}^{t} \biggl\vert g \biggl(s,X_{n} \biggl(s- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,ds \\ &{}+24T\mathbb{E} \int _{0}^{t} \biggl\vert D^{\psi }_{s}g \biggl(s,X_{n} \biggl(s- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,ds \\ &{}+6\alpha ^{2}\frac{T^{2\alpha -1}}{2\alpha -1}\mathbb{E} \int _{0}^{t} \biggl\vert \sigma _{2} \biggl(s,X_{n} \biggl(s-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,ds \\ &{}+6m\mathbb{E}\sum_{j=1}^{m} \biggl\vert I_{j} \biggl(X_{n} \biggl(t_{j}- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\quad \bigl(\alpha \in (1/2, 1)\bigr). \end{aligned}$$

Thus by conditions (H1) and (H3) and the Jensen inequality we have

$$\begin{aligned} \mathbb{E} \Bigl(\sup_{0\leqslant s\leqslant t} \bigl\vert X_{n}(s) \bigr\vert ^{2} \Bigr) \leqslant &6 \mathbb{E} \vert X_{0} \vert ^{2}+6C_{1} \mathbb{E} \int _{0}^{t} \mathcal{R} \biggl(s, \biggl\vert X_{n} \biggl(s-\frac{1}{n} \biggr) \biggr\vert ^{2} \biggr)\,ds \\ &{}+6m\sum_{j=1}^{m}(d_{j})^{2} \mathbb{E} \biggl\vert X_{n} \biggl(t_{j}- \frac{1}{n} \biggr) \biggr\vert ^{2} \\ \leqslant &6\mathbb{E} \vert X_{0} \vert ^{2}+6C_{1} \int _{0}^{t}\mathcal{R} \Bigl(s, \mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{n}(u) \bigr\vert ^{2} \Bigr) \Bigr)\,ds \\ &{}+6m\sum_{j=1}^{m}(d_{j})^{2} \mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant t} \bigl\vert X_{n}(u) \bigr\vert ^{2} \Bigr), \end{aligned}$$

which implies that

$$ \mathbb{E} \Bigl(\sup_{0\leqslant s\leqslant t} \bigl\vert X_{n}(s) \bigr\vert ^{2} \Bigr) \leqslant C_{2}\mathbb{E} \vert X_{0} \vert ^{2}+C_{3} \int _{0}^{t}\mathcal{R} \Bigl(s,\mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{n}(u) \bigr\vert ^{2} \Bigr) \Bigr)\,ds, $$
(4)

where \(C_{1}= [4+5T+2HT^{2H-1}+\alpha ^{2}\frac{T^{2\alpha -1}}{2 \alpha -1} ]\), \(C_{2}=\frac{6}{1- 6m\sum_{j=1}^{m}(d_{j})^{2}}\), and \(C_{3}=\frac{6C_{1}}{1- 6m\sum_{j=1}^{m}(d_{j})^{2}}\).

Now, by condition (H1) there exists a solution \(u(t)\), \(t\in [0,T]\), satisfying

$$ u(t)= C_{2}\mathbb{E} \vert X_{0} \vert ^{2}+C_{3} \int _{0}^{t}\mathcal{R}\bigl(s,u(s)\bigr)\,ds. $$

Comparing this equation with Eq. (4), we have

$$ \mathbb{E} \Bigl(\sup_{0\leqslant s\leqslant t} \bigl\vert X_{n}(s) \bigr\vert ^{2} \Bigr)\leqslant u(t)\leqslant u(T)< \infty ,\quad n\geqslant 1, $$

which shows the uniform boundedness of \(\{X_{n}(t)\}_{n\geqslant 1}\); in what follows, we write \(C:=u(T)\) for this uniform bound.

Part 2. For \(0\leqslant s< t \leqslant T\) and integer \(n\geqslant 1\), we claim that

$$ \mathbb{E} \bigl\vert X_{n}(t)-X_{n}(s) \bigr\vert ^{2}\leqslant C_{4}(t-s)+C_{6}(t-s)^{2 \alpha }+C_{5} \sum_{s< t_{j}< t}C, $$

where the positive constants \(C_{4}\), \(C_{5}\), \(C_{6}\) are defined further in the proof, and \(C=u(T)\) is the uniform bound from Part 1.

Note that

$$\begin{aligned}& \bigl\vert X_{n}(t)-X_{n}(s) \bigr\vert ^{2} \\& \quad \leqslant 5 \biggl\vert \int _{s}^{t}b \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr)\,du \biggr\vert ^{2}+5 \biggl\vert \int _{s}^{t}\sigma _{1} \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr)\,dW(u) \biggr\vert ^{2} \\& \qquad {} +5 \biggl\vert \int _{s}^{t}g \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr)\, d ^{+}W^{H}(u) \biggr\vert ^{2}+5 \biggl\vert \sum_{s< t_{j}< t}I_{j} \biggl(X_{n} \biggl(t_{j}-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2} \\& \qquad {} +5\alpha ^{2} \biggl\vert \int _{0}^{s} \biggl(\frac{\sigma _{2} (u,X_{n} (u-\frac{1}{n} ) )}{(t-u)^{1-\alpha }}- \frac{\sigma _{2} (u,X_{n} (u-\frac{1}{n} ) )}{(s-u)^{1- \alpha }} \biggr)\,du \\& \qquad {} + \int _{s}^{t}\frac{\sigma _{2} (u,X_{n} (u-\frac{1}{n} ) )}{(t-u)^{1- \alpha }}\,du \biggr\vert ^{2} \\& \quad :=\sum_{i=1}^{5}I_{i}. \end{aligned}$$
(5)

Taking the expectation and using Itô isometry, Lemma 2.1, and (H1), we get

$$\begin{aligned}& \mathbb{E} \vert I_{1} \vert +\mathbb{E} \vert I_{2} \vert +\mathbb{E} \vert I_{3} \vert \\& \quad \leqslant 5(T-s) \int _{s}^{t}\mathbb{E} \biggl\vert b \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du+5 \int _{s}^{t}\mathbb{E} \biggl\vert \sigma _{1} \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\& \qquad {}+10H(T-s)^{2H-1} \int _{s}^{t}\mathbb{E} \biggl\vert g \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\& \qquad {}+20(T-s) \int _{s}^{t}\mathbb{E} \biggl\vert D^{\psi }_{u} g \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\& \quad \leqslant 5 \bigl[1+5(T-s)+2H(T-s)^{2H-1} \bigr] \int _{s}^{t} \mathcal{R} \biggl(u, \mathbb{E} \biggl\vert X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr\vert ^{2} \biggr)\,du \\& \quad \leqslant 5 \bigl[1+5(T-s)+2H(T-s)^{2H-1} \bigr] \int _{s}^{t} \mathcal{R} \Bigl(u, \mathbb{E} \Bigl(\sup_{0\leqslant v \leqslant u} \bigl\vert X _{n}(v) \bigr\vert ^{2} \Bigr) \Bigr)\,du, \end{aligned}$$

which, via Part 1, gives

$$ \mathbb{E} \vert I_{1} \vert +\mathbb{E} \vert I_{2} \vert +\mathbb{E} \vert I_{3} \vert \leqslant C_{4}(t-s), $$
(6)

where \(C_{4}=5 [1+5T+2HT^{2H-1} ] (\sup_{0\leqslant t \leqslant T}\mathcal{R}(t,C) )>0\).

Now, using Hölder’s and Young’s inequalities and conditions (H1) and (H3) gives

$$\begin{aligned} \mathbb{E} \vert I_{4} \vert \leqslant & 5 \mathbb{E} \biggl(\sum_{s< t_{j}< t} \biggl\vert I _{j} \biggl(X_{n} \biggl(t_{j}- \frac{1}{n} \biggr) \biggr) \biggr\vert \biggr) ^{2} \leqslant 5\mathbb{E} \biggl(\sum_{s< t_{j}< t}d_{j} \biggl\vert X_{n} \biggl(t_{j}-\frac{1}{n} \biggr) \biggr\vert \biggr)^{2} \\ \leqslant &5\sum_{s< t_{j}< t}(d_{j})^{2} \sum_{s< t_{j}< t}\mathbb{E} \biggl\vert X_{n} \biggl(t_{j}-\frac{1}{n} \biggr) \biggr\vert ^{2} \\ \leqslant &5\sum_{s< t_{j}< t}(d_{j})^{2} \sum_{s< t_{j}< t}\mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant t} \bigl\vert X_{n}(u) \bigr\vert ^{2} \Bigr)\leqslant C _{5}\sum_{s< t_{j}< t}C \end{aligned}$$
(7)

and

$$\begin{aligned} \mathbb{E} \vert I_{5} \vert =&5\alpha ^{2} \mathbb{E} \biggl\vert \int _{0}^{s} \bigl((t-u)^{ \alpha -1}-(s-u)^{\alpha -1} \bigr)\sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr)\,du \\ &{}+ \int _{s}^{t}(t-u)^{\alpha -1}\sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr)\,du \biggr\vert ^{2} \\ \leqslant &10\alpha ^{2} \int _{0}^{s} \bigl((s-u)^{\alpha -1}-(t-u)^{ \alpha -1} \bigr)\mathbb{E} \biggl\vert \sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2} \,du \\ &{}\times \int _{0}^{s} \bigl((s-u)^{\alpha -1}-(t-u)^{\alpha -1} \bigr)\,du+10 \alpha ^{2} \int _{s}^{t}(t-u)^{\alpha -1}\,du \\ &{}\times \int _{s}^{t}(t-u)^{\alpha -1}\mathbb{E} \biggl\vert \sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ \leqslant &10 \Bigl(\sup_{0\leqslant t \leqslant T}\mathcal{R}(t,C) \Bigr) \bigl((t-s)^{ \alpha }+s^{\alpha }-t^{\alpha } \bigr)^{2}+10 \Bigl(\sup_{0\leqslant t \leqslant T}\mathcal{R}(t,C) \Bigr) (t-s)^{2\alpha } \\ \leqslant & C_{6}(t-s)^{2\alpha }, \end{aligned}$$
(8)

where \(C_{5}=5\sum_{j=1}^{m}(d_{j})^{2}\) and \(C_{6}=20 (\sup_{0 \leqslant t \leqslant T}\mathcal{R}(t,C) )\) are positive constants; in the last step, we used that \(0\leqslant (t-s)^{\alpha }+s^{\alpha }-t^{\alpha }\leqslant (t-s)^{\alpha }\).

Taking the expectation of Eq. (5) and using Eqs. (6)–(8), we obtain the required result, and the proof of Part 2 is complete.

Part 3. The sequence \(\{X_{n}(t)\}_{n\geqslant 1}\) is a Cauchy sequence. From Eq. (3), for \(m>n\geqslant 1\) and \(t\in [0,T]\), we easily get

$$\begin{aligned}& \mathbb{E} \Bigl(\sup_{0\leqslant s \leqslant t} \bigl\vert X_{m}(s)-X_{n}(s) \bigr\vert ^{2} \Bigr) \\& \quad \leqslant 5\mathbb{E} \biggl(\sup_{0\leqslant s \leqslant t} \biggl\vert \int _{0}^{s} \biggl[b \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-b \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr]\,du \biggr\vert ^{2} \biggr) \\& \qquad {} +5\mathbb{E} \biggl(\sup_{0\leqslant s \leqslant t} \biggl\vert \int _{0}^{s} \biggl[\sigma _{1} \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)- \sigma _{1} \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr]\,dW(u) \biggr\vert ^{2} \biggr) \\& \qquad {} +5\mathbb{E} \biggl(\sup_{0\leqslant s \leqslant t} \biggl\vert \int _{0}^{s} \biggl[g \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X _{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr]\, d^{+}W^{H}(u) \biggr\vert ^{2} \biggr) \\& \qquad {} +5\alpha ^{2}\mathbb{E} \biggl(\sup_{0\leqslant s \leqslant t} \biggl\vert \int _{0}^{s}\frac{ [\sigma _{2} (u,X_{m} (u-\frac{1}{m} ) )- \sigma _{2} (u,X_{n} (u-\frac{1}{n} ) ) ]}{(s-u)^{1- \alpha }}\,du \biggr\vert ^{2} \biggr) \\& \qquad {} +5\mathbb{E} \biggl(\sup_{0\leqslant s \leqslant t} \biggl\vert \sum _{0< t_{j}< s} \biggl[I_{j} \biggl(X_{m} \biggl(t_{j}-\frac{1}{m} \biggr) \biggr)-I _{j} \biggl(X_{n} \biggl(t_{j}- \frac{1}{n} \biggr) \biggr) \biggr] \biggr\vert ^{2} \biggr) \\& \quad :=\sum_{i=1}^{5}J_{i}. \end{aligned}$$
(9)

Adding and subtracting intermediate terms and using assumption (H2), we obtain

$$\begin{aligned} J_{1}+J_{4} \leqslant & 5T \mathbb{E} \int _{0}^{t} \biggl\vert b \biggl(u,X _{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-b \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+5\alpha ^{2} \frac{T^{2\alpha -1}}{2\alpha -1}\mathbb{E} \int _{0} ^{t} \biggl\vert \sigma _{2} \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)- \sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ \leqslant & 10T \mathbb{E} \int _{0}^{t} \biggl\vert b \biggl(u,X_{m} \biggl(u- \frac{1}{m} \biggr) \biggr)-b \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+ 10T \mathbb{E} \int _{0}^{t} \biggl\vert b \biggl(u,X_{n} \biggl(u- \frac{1}{m} \biggr) \biggr)-b \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+10\alpha ^{2} \frac{T^{2\alpha -1}}{2\alpha -1}\mathbb{E} \int _{0} ^{t} \biggl\vert \sigma _{2} \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)- \sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{m} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+10\alpha ^{2} \frac{T^{2\alpha -1}}{2\alpha -1}\mathbb{E} \int _{0} ^{t} \biggl\vert \sigma _{2} \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr)- \sigma _{2} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ \leqslant & 10 \biggl[T+\alpha ^{2} \frac{T^{2\alpha -1}}{2\alpha -1} \biggr] \int _{0}^{t}\mathcal{G} \biggl(u, \mathbb{E} \biggl\vert X_{m} \biggl(u- \frac{1}{m} \biggr)-X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr)\,du \\ &{}+10 \biggl[T+\alpha ^{2} \frac{T^{2\alpha -1}}{2\alpha -1} \biggr] \int _{0}^{t}\mathcal{G} \biggl(u, \mathbb{E} \biggl\vert X_{n} \biggl(u- \frac{1}{m} \biggr)-X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr\vert ^{2} \biggr)\,du. \end{aligned}$$

Then, by the estimate obtained in Part 2, we have

$$\begin{aligned} J_{1}+J_{4} \leqslant &10 \biggl[T+ \alpha ^{2} \frac{T^{2\alpha -1}}{2 \alpha -1} \biggr] \int _{0}^{t}\mathcal{G} \Bigl(s, \mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{m}(u)-X_{n}(u) \bigr\vert ^{2} \Bigr) \Bigr)\,ds \\ &{}+10 \biggl[T+\alpha ^{2} \frac{T^{2\alpha -1}}{2\alpha -1} \biggr] \\ &{}\times \int _{0}^{t}\mathcal{G} \biggl(s,C_{4} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)+ C_{6} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)^{2\alpha }+C_{5} \sum_{s-1/n< t_{j}< s-1/m}C \biggr)\,ds. \end{aligned}$$
(10)

Similarly to (10), by the B–D–G inequality, Lemma 2.1, and condition (H2) we have

$$\begin{aligned} J_{2}+J_{3} \leqslant &20\mathbb{E} \int _{0}^{t} \biggl\vert \sigma _{1} \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-\sigma _{1} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+10HT^{2H-1}\mathbb{E} \int _{0}^{t} \biggl\vert g \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+20T\mathbb{E} \int _{0}^{t} \biggl\vert D^{\psi }_{u} \biggl(g \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr) \biggr\vert ^{2}\,du \\ \leqslant &40\mathbb{E} \int _{0}^{t} \biggl\vert \sigma _{1} \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-\sigma _{1} \biggl(u,X_{n} \biggl(u- \frac{1}{m} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+40\mathbb{E} \int _{0}^{t} \biggl\vert \sigma _{1} \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr)-\sigma _{1} \biggl(u,X_{n} \biggl(u- \frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+20HT^{2H-1}\mathbb{E} \int _{0}^{t} \biggl\vert g \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+20HT^{2H-1}\mathbb{E} \int _{0}^{t} \biggl\vert g \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+40T\mathbb{E} \int _{0}^{t} \biggl\vert D^{\psi }_{u} \biggl(g \biggl(u,X_{m} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr) \biggr) \biggr\vert ^{2}\,du \\ &{}+40T\mathbb{E} \int _{0}^{t} \biggl\vert D^{\psi }_{u} \biggl(g \biggl(u,X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr)-g \biggl(u,X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr) \biggr) \biggr\vert ^{2}\,du \\ \leqslant &20\bigl[2+2T+HT^{2H-1}\bigr] \int _{0}^{t}\mathcal{G} \biggl(u, \mathbb{E} \biggl\vert X_{m} \biggl(u-\frac{1}{m} \biggr)-X_{n} \biggl(u-\frac{1}{m} \biggr) \biggr\vert ^{2} \biggr)\,du \\ 
&{}+20\bigl[2+2T+HT^{2H-1}\bigr] \int _{0}^{t}\mathcal{G} \biggl(u, \mathbb{E} \biggl\vert X_{n} \biggl(u-\frac{1}{m} \biggr)-X_{n} \biggl(u-\frac{1}{n} \biggr) \biggr\vert ^{2} \biggr)\,du \\ \leqslant &20\bigl[2+2T+HT^{2H-1}\bigr] \int _{0}^{t}\mathcal{G} \Bigl(s, \mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{m}(u)-X_{n}(u) \bigr\vert ^{2} \Bigr) \Bigr)\,ds \\ &{}+20\bigl[2+2T+HT^{2H-1}\bigr] \\ &{}\times \int _{0}^{t}\mathcal{G} \biggl(s,C_{4} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)+ C_{6} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)^{2\alpha }+C_{5} \sum_{s-1/n< t_{j}< s-1/m}C \biggr)\,ds. \end{aligned}$$
(11)

Finally, for \(J_{5}\), by condition (H3) we obtain

$$\begin{aligned} J_{5} \leqslant & 5m\sum _{j=1}^{m}\mathbb{E} \biggl\vert I_{j} \biggl(X_{m} \biggl(t_{j}- \frac{1}{m} \biggr) \biggr)-I_{j} \biggl(X_{n} \biggl(t _{j}-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2} \\ \leqslant & 10m\sum_{j=1}^{m} \mathbb{E} \biggl\vert I_{j} \biggl(X_{m} \biggl(t _{j}-\frac{1}{m} \biggr) \biggr)-I_{j} \biggl(X_{n} \biggl(t_{j}- \frac{1}{m} \biggr) \biggr) \biggr\vert ^{2} \\ &{}+10m\sum_{j=1}^{m}\mathbb{E} \biggl\vert I_{j} \biggl(X_{n} \biggl(t_{j}- \frac{1}{m} \biggr) \biggr)-I_{j} \biggl(X_{n} \biggl(t_{j}-\frac{1}{n} \biggr) \biggr) \biggr\vert ^{2} \\ \leqslant & 10m\sum_{j=1}^{m}(d_{j})^{2} \mathbb{E} \biggl\vert X_{m} \biggl(t _{j}- \frac{1}{m} \biggr)-X_{n} \biggl(t_{j}- \frac{1}{m} \biggr) \biggr\vert ^{2} \\ &{}+10m\sum_{j=1}^{m}(d_{j})^{2} \mathbb{E} \biggl\vert X_{n} \biggl(t_{j}- \frac{1}{m} \biggr)-X_{n} \biggl(t_{j}- \frac{1}{n} \biggr) \biggr\vert ^{2} \\ \leqslant &10m\sum_{j=1}^{m}(d_{j})^{2} \biggl(C_{4}\biggl(\frac{1}{n}- \frac{1}{m} \biggr)+ C_{6}\biggl(\frac{1}{n}-\frac{1}{m} \biggr)^{2\alpha }+C_{5} \sum_{s-1/n< t_{j}< s-1/m}C \biggr) \\ &{}+10m\sum_{j=1}^{m}(d_{j})^{2} \mathbb{E} \Bigl(\sup_{0 \leqslant u \leqslant t} \bigl\vert X_{m}(u)-X_{n}(u) \bigr\vert ^{2} \Bigr). \end{aligned}$$
(12)

Combining Eqs. (9)–(12), we conclude

$$\begin{aligned} \mathbb{E} \Bigl(\sup_{0\leqslant s \leqslant t} \bigl\vert X_{m}(s)-X_{n}(s) \bigr\vert ^{2} \Bigr) \leqslant & C_{7} \int _{0}^{t}\mathcal{G} \Bigl(s, \mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{m}(u)-X_{n}(u) \bigr\vert ^{2} \Bigr) \Bigr)\,ds \\ &{}+C_{7} \int _{0}^{t}\mathcal{G} \biggl(s,C_{4} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)+ C_{6} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)^{2\alpha } \\ &{}+C_{5}\sum_{s-1/n< t_{j}< s-1/m}C \biggr)\,ds \\ &{}+C_{8} \biggl(C_{4}\biggl(\frac{1}{n}- \frac{1}{m}\biggr)+ C_{6}\biggl(\frac{1}{n}- \frac{1}{m}\biggr)^{2\alpha } \\ &{}+C_{5}\sum_{s-1/n< t_{j}< s-1/m}C \biggr), \end{aligned}$$
(13)

where \(C_{7}=\frac{10C_{1}}{1-10m\sum_{j=1}^{m}(d_{j})^{2}}\) and \(C_{8}= \frac{10 m\sum_{j=1}^{m}(d_{j})^{2}}{1-10m\sum_{j=1}^{m}(d_{j})^{2}}\) are positive constants. Let

$$ \mathcal{M}(t)=\lim_{m,n\longrightarrow \infty }\mathbb{E}\Bigl( \sup_{0\leqslant s \leqslant t} \bigl\vert X_{m}(s)-X_{n}(s) \bigr\vert ^{2}\Bigr). $$
(14)

Then Eqs. (13) and (14), together with Fatou’s lemma, yield

$$\begin{aligned} \mathcal{M}(t) \leqslant &C_{7}\lim _{m,n\longrightarrow \infty } \int _{0}^{t}\mathcal{G} \biggl(s,C_{4} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)+ C_{6} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)^{2\alpha }+C_{5} \sum_{s-1/n< t_{j}< s-1/m}C \biggr)\,ds \\ &{}+C_{8}\lim_{m,n\longrightarrow \infty } \biggl(C_{4} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)+ C_{6} \biggl(\frac{1}{n}-\frac{1}{m}\biggr)^{2\alpha }+C_{5} \sum_{s-1/n< t_{j}< s-1/m}C \biggr) \\ &{}+C_{7}\lim_{m,n\longrightarrow \infty } \int _{0}^{t}\mathcal{G} \Bigl(s, \mathbb{E} \Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{m}(u)-X_{n}(u) \bigr\vert ^{2}\Bigr) \Bigr)\,ds \\ \leqslant &C_{7} \int _{0}^{t}\mathcal{G} \Bigl(s, \lim _{m,n\longrightarrow \infty }\mathbb{E}\Bigl(\sup_{0\leqslant u \leqslant s} \bigl\vert X_{m}(u)-X_{n}(u) \bigr\vert ^{2} \Bigr) \Bigr)\,ds \\ \leqslant &C_{7} \int _{0}^{t}\mathcal{G} \bigl(s, \mathcal{M}(s) \bigr)\,ds, \end{aligned}$$
(15)

where we have used the facts that \(\mathcal{G}(s,0)=0\) and \(\sum_{s-1/n< t_{j}< s-1/m}C\to 0\) as \(n,m\to \infty \). Finally, Eq. (15) and condition (H1) immediately yield

$$ \mathcal{M}(t)=\lim_{m,n\longrightarrow \infty }\mathbb{E}\Bigl(\sup _{0\leqslant s \leqslant t} \bigl\vert X_{m}(s)-X_{n}(s) \bigr\vert ^{2}\Bigr)=0, $$

indicating that \(\{X_{n}(t)\}_{n\geqslant 1}\) is a Cauchy sequence. By the Borel–Cantelli lemma, \(X_{n}(t)\to X(t)\) uniformly for \(t\in [0,T]\) as \(n\to \infty \). Hence, taking limits on both sides of Eq. (3), we obtain that \(X(t)\), \(t\in [0,T]\), is a solution to Eq. (1) with the property \(\mathbb{E}(\sup_{0\leqslant s\leqslant t}|X(s)|^{2})<\infty \) for all \(t\in [0,T]\), which completes the proof of the existence. The uniqueness of the solution can be obtained by the same procedure as in Part 3. Therefore the proof of Theorem 3.1 is complete. □
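The Carathéodory approximants used in the proof are constructive: the \(n\)-th approximant evaluates the coefficients at the delayed state \(X_{n}(s-1/n)\), so each \(X_{n}\) can be computed forward in time without any fixed-point iteration. The following Python sketch (not from the source) illustrates this idea in the scalar case, without impulses or the \((dt)^{\alpha}\) term; the drift, diffusion, and step size are illustrative choices.

```python
import numpy as np

# Numerical sketch of the Caratheodory approximation: coefficients are
# evaluated at the delayed state X_n(s - 1/n), with X_n(s) = X0 for s <= 0,
# so the path is built forward in time step by step.

def caratheodory_path(n, b, sigma, X0, T=1.0, dt=1e-3, rng=None):
    rng = rng or np.random.default_rng(0)
    N = int(round(T / dt))
    t = np.linspace(0.0, T, N + 1)
    X = np.empty(N + 1)
    X[0] = X0
    for k in range(N):
        s = t[k] - 1.0 / n                        # delayed time s = t - 1/n
        Xd = X0 if s <= 0 else X[int(s / dt)]     # delayed state X_n(s)
        dW = rng.normal(0.0, np.sqrt(dt))
        X[k + 1] = X[k] + b(t[k], Xd) * dt + sigma(t[k], Xd) * dW
    return t, X
```

With zero diffusion and drift \(b(t,x)=-x\), the approximant tracks the deterministic solution \(e^{-t}\) up to the discretization and delay errors, which vanish as \(n\to\infty\) and \(dt\to 0\).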

If \(g\equiv 0\), then system (1) becomes

$$ \textstyle\begin{cases} dX(t)=b(t, X(t))\,dt+\sigma _{1}(t, X(t))\,dW(t) \\ \hphantom{dX(t)={}}{}+\sigma _{2}(t, X(t))(dt)^{ \alpha },\quad t\in [0, T], t\neq t_{j},\alpha \in (0,1), \\ \triangle X(t_{j})=X(t^{+}_{j})-X(t^{-}_{j})=I_{j}(X(t_{j})),\quad j=1,2,\ldots,m, \\ X(0)=X_{0}\in \mathbb{R}^{d}. \end{cases} $$
(16)

Corollary 3.1

Let hypotheses (H1)–(H3) be satisfied, and let \(X_{0}\) be independent of the Wiener process \(W(s)\), \(s>0\), with finite second moment. Then there exists a unique solution \(X(t)\) to Eq. (16), provided that \(8m\sum_{j=1}^{m}(d_{j})^{2}<1\).
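The solution of Eq. (16) can also be explored numerically. The Python sketch below uses a naive Euler-type discretization in which the fractional differential \((dt)^{\alpha}\) is approximated by \((\Delta t)^{\alpha}\) per step; this is a simplifying assumption rather than a rigorous treatment, and the coefficients and jump data are hypothetical.

```python
import numpy as np

# Naive Euler-type sketch of the impulsive system (16). The (dt)^alpha
# differential is discretized as dt**alpha per step (a simplifying
# assumption); impulses Delta X(t_j) = I_j(X(t_j)) are applied as the grid
# crosses each jump time.

def simulate_impulsive(b, s1, s2, I, jump_times, X0,
                       alpha=0.5, T=1.0, dt=1e-3, rng=None):
    rng = rng or np.random.default_rng(1)
    N = int(round(T / dt))
    X, t, path = X0, 0.0, [X0]
    jumps = sorted(jump_times)
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))
        X = X + b(t, X) * dt + s1(t, X) * dW + s2(t, X) * dt ** alpha
        t += dt
        while jumps and jumps[0] <= t:   # apply Delta X(t_j) = I_j(X(t_j))
            X = X + I(X)
            jumps.pop(0)
        path.append(X)
    return np.array(path)
```

With all coefficients set to zero and a single unit impulse at \(t_{1}=0.5\), the path stays at \(X_{0}\) until the jump and at \(X_{0}+1\) afterwards, matching the impulse condition in (16).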

Remark 3.1

If \(I_{j}(\cdot )\equiv 0\) (\(j=1,2,\ldots,m\)) in Eq. (16), then Corollary 3.1 is consistent with Theorem 3.1 in Abouagwa and Li [1]. Therefore Corollary 3.1 extends and improves some results in [1].
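The smallness conditions on the impulse bounds, namely \(10m\sum_{j=1}^{m}(d_{j})^{2}<1\) in Theorem 3.1, \(8m\sum_{j=1}^{m}(d_{j})^{2}<1\) in Corollary 3.1, and \(6m\sum_{j=1}^{m}(d_{j})^{2}<1\) in Corollary 3.2, are straightforward to verify numerically. A Python sketch (the bounds \(d_{j}\) and the constant \(C_{1}\) in any usage are hypothetical):

```python
# Numeric check of the smallness conditions c*m*sum(d_j^2) < 1 (c = 10, 8, 6)
# and of the constants C_7, C_8 appearing in Eq. (13), which are positive and
# finite exactly when the c = 10 condition holds.

def smallness(d, c):
    """Value of c*m*sum(d_j^2) for impulse bounds d = [d_1, ..., d_m]."""
    return c * len(d) * sum(dj ** 2 for dj in d)

def contraction_constants(d, C1=1.0):
    """The constants C_7 and C_8 of Eq. (13); requires smallness(d, 10) < 1."""
    s = smallness(d, 10)
    if s >= 1:
        raise ValueError("condition 10*m*sum(d_j^2) < 1 violated")
    return 10 * C1 / (1 - s), s / (1 - s)
```

For example, `contraction_constants([0.05, 0.05])` gives \(C_{7}=10/0.9\) and \(C_{8}=0.1/0.9\).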

If \(\sigma _{1}=\sigma _{2}\equiv 0\), then Eq. (1) reduces to

$$ \textstyle\begin{cases} dX(t)=b(t, X(t))\,dt+g(t, X(t))\,dW^{H}(t),\quad t\in [0, T], t\neq t_{j}, \\ \triangle X(t_{j})=X(t^{+}_{j})-X(t^{-}_{j})=I_{j}(X(t_{j})),\quad j=1,2,\ldots,m, \\ X(0)=X_{0}\in \mathbb{R}^{d}. \end{cases} $$
(17)

Corollary 3.2

Let hypotheses (H1)–(H3) be satisfied, and let \(X_{0}\) be independent of the fBm \(W^{H}(s)\) (\(s>0\), \(H>1/2\)) with finite second moment. Then there exists a unique solution \(X(t)\) to Eq. (17), provided that \(6m\sum_{j=1}^{m}(d_{j})^{2}<1\).
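Sample paths of the driving fBm in Eq. (17) can be generated exactly on a grid by a Cholesky factorization of the covariance \(R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-|t-s|^{2H})\), after which a naive Euler scheme with jumps applies. The Python sketch below is illustrative (scalar state, hypothetical coefficients), not the construction used in the proofs.

```python
import numpy as np

def fbm_path(H, N, T=1.0, rng=None):
    """Exact fBm sample on an N-point grid via Cholesky of the covariance."""
    rng = rng or np.random.default_rng(2)
    t = np.linspace(T / N, T, N)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)
    return np.concatenate([[0.0], L @ rng.normal(size=N)])  # W^H(0) = 0

def euler_fbm(b, g, I, jump_times, X0, H=0.75, T=1.0, N=256, rng=None):
    """Naive Euler scheme for Eq. (17) with impulses (illustrative)."""
    W = fbm_path(H, N, T, rng)
    dt, X, t = T / N, X0, 0.0
    jumps = sorted(jump_times)
    for k in range(N):
        X = X + b(t, X) * dt + g(t, X) * (W[k + 1] - W[k])
        t += dt
        while jumps and jumps[0] <= t:   # apply Delta X(t_j) = I_j(X(t_j))
            X = X + I(X)
            jumps.pop(0)
    return X
```

Cholesky sampling is exact but costs \(O(N^{2})\) memory and \(O(N^{3})\) time, so it is only practical for moderate grid sizes.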

Remark 3.2

It should be mentioned that Xue et al. [36] established the existence and uniqueness results for Eq. (17) without impulses (\(I_{j}(\cdot )\equiv 0\), \(j=1,2,\ldots,m\)) under conditions (H1) and (H2) by means of successive approximations. Our results are obtained for Eq. (17) with impulses by means of the Carathéodory approximation. Hence Corollary 3.2 is an extension and improvement of Theorem A in [36].

Remark 3.3

Replace \(\mathcal{G}(t,v)\) in hypothesis (H1) by \(\mathcal{G}(t,v)=\lambda (t)\bar{\mathcal{G}}(v)\), \(t\in [0,T]\), where \(\lambda (t)\geqslant 0\) is locally integrable, and \(\bar{\mathcal{G}}(v):[0,\infty )\longrightarrow [0,\infty )\) is a concave nondecreasing function with \(\bar{\mathcal{G}}(0)=0\), \(\bar{\mathcal{G}}(v)>0\) for \(v>0\), and \(\int _{0+} \frac{1}{\bar{ \mathcal{G}}(v)}\,dv=+\infty \). Then Corollaries 3.1 and 3.2 extend and improve some results in Abouagwa et al. [4] and Pei and Xu [28] (the case \(\lambda (t)=1\)), respectively.
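As a concrete illustrative instance of Remark 3.3 (not taken from the source), take \(\lambda(t)=1\) and \(\bar{\mathcal{G}}(v)=v\log(1/v)\) for small \(v\): this function is concave and nondecreasing near \(0\), vanishes at \(0\), and satisfies the divergence condition, since \(\int_{\varepsilon}^{a}\frac{dv}{v\log(1/v)}=\log\log(1/\varepsilon)-\log\log(1/a)\to +\infty\) as \(\varepsilon\to 0^{+}\). A small Python check of this closed form:

```python
import math

# Osgood-type divergence for Gbar(v) = v*log(1/v), 0 < v < 1: the integral
# of 1/Gbar over [eps, a] equals log(log(1/eps)) - log(log(1/a)), which
# grows without bound as eps -> 0+. Endpoint values below are illustrative.

def osgood_integral(eps, a=0.1):
    """Exact integral of 1/(v*log(1/v)) over [eps, a], 0 < eps < a < 1."""
    return math.log(math.log(1.0 / eps)) - math.log(math.log(1.0 / a))

def riemann_check(eps, a=0.1, n=200_000):
    """Midpoint Riemann sum of the same integral, for cross-checking."""
    h = (a - eps) / n
    total = 0.0
    for i in range(n):
        v = eps + (i + 0.5) * h
        total += h / (v * math.log(1.0 / v))
    return total
```

For instance, \(\int_{10^{-2}}^{10^{-1}}\frac{dv}{v\log(1/v)}=\log\frac{2\log 10}{\log 10}=\log 2\), and the value keeps growing as the lower endpoint shrinks, illustrating the divergence.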

4 An application

In this section, as an application of the obtained results, we consider the following impulsive stochastic fractional Burgers equation with Dirichlet boundary conditions, driven by a standard Brownian motion and an independent fBm:

$$ \textstyle\begin{cases} d\xi (t,z)=\mu (t,\xi (t,z))\,dt+\gamma (t,\xi (t,z))\,dW(t)+\eta (t,\xi (t,z))\,dW^{H}(t) \\ \hphantom{d\xi (t,z)={}}{} +\theta (t,\xi (t,z))(dt)^{\alpha },\quad t\neq t_{j}, \\ \xi (t,0)=\xi (t,1)=0,\quad t\in [0,T], \\ \triangle \xi (t_{j},z)=\frac{\sigma _{3}}{j^{2}}\xi (t_{j},z), \quad t=t_{j}, j=1,2,\ldots,m, \\ \xi (0,z)=\xi _{0}(z), \end{cases} $$
(18)

where \(0\leqslant z \leqslant 1\), \(0<\alpha < 1\), \(\xi _{0}(z)\in \mathbb{R}^{d}\), \(W(t)\) is an m-dimensional standard Brownian motion, \(W^{H}(t)\) is an independent m-dimensional fBm with Hurst index \(H>1/2\), \(\mu ,\theta : [0,T]\times \mathbb{R}^{d}\longrightarrow \mathbb{R}^{d}\) and \(\gamma ,\eta : [0,T]\times \mathbb{R}^{d}\longrightarrow \mathbb{R}^{d\times m}\) are continuous functions, and \(\sigma _{3}>0\).

Let \(X(t)(z)=\xi (t,z)\) and

$$\begin{aligned}& b\bigl(t,X(t)\bigr) (z)=\mu \bigl(t,\xi (t,z)\bigr), \\& \sigma _{1}\bigl(t,X(t)\bigr) (z)=\gamma \bigl(t,\xi (t,z)\bigr), \\& g\bigl(t,X(t)\bigr) (z)=\eta \bigl(t,\xi (t,z)\bigr), \\& \sigma _{2}\bigl(t,X(t)\bigr) (z)=\theta \bigl(t,\xi (t,z)\bigr), \\& I_{j}\bigl(X(t_{j})\bigr) (z)=\frac{\sigma _{3}}{j^{2}}\xi (t_{j},z). \end{aligned}$$

Then problem (18) can be written in the abstract form of problem (1). Choosing suitable functions μ, γ, η, θ such that conditions (H1)–(H3) are satisfied, Theorem 3.1 shows that problem (18) has a unique solution.
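For instance, a hypothetical choice (not taken from the source) that indicates how such conditions can be met is the family of linear coefficients

```latex
% Illustrative linear coefficients; the constants a_i are hypothetical.
\mu(t,\xi)=a_{1}\xi,\qquad
\gamma(t,\xi)=a_{2}\xi,\qquad
\eta(t,\xi)=a_{3}\xi,\qquad
\theta(t,\xi)=a_{4}\xi,\qquad a_{i}\in\mathbb{R}.
```

Such coefficients are globally Lipschitz with linear growth, so in particular the non-Lipschitz condition (H1) holds with the concave function \(\mathcal{G}(t,v)=Lv\) for a suitable constant \(L>0\), which satisfies \(\mathcal{G}(t,0)=0\) and \(\int_{0+}\frac{dv}{Lv}=+\infty\).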