1 Introduction

The theory and applications of impulsive differential equations have undergone rapid development in recent years. Impulsive phenomena have attracted considerable attention due to their wide applications in engineering and scientific fields, including information science, electronics, automatic control systems, population dynamics, robotics, and telecommunications [1, 2]. From the viewpoint of mathematical modeling, such applications require criteria for the qualitative properties of their solutions. Many sudden and rapid modifications occur instantaneously, which leads to impulsive effects. It is important and necessary to consider both time delays and impulsive effects when investigating the stability of dynamical systems, since impulsive perturbations can affect the dynamical behavior of the system [3]. In comparison with other concepts, the “aftereffect” encountered in physics is very significant. Ordinary or partial differential equations are not sufficient to model processes with aftereffect; integrodifferential equations resolve this problem. In recent years, the theory of impulsive integrodifferential equations has made considerable progress in modern applied mathematics because of its deep physical background and sound mathematical foundations.

Deterministic models frequently fluctuate due to noise, which is random (or at least appears to be so), and hence deterministic problems must often be transferred to stochastic ones. Stochastic effects arise from disturbances or uncertainties in the considered system. Many real-time problems in economics and finance, physics, mechanics, and electrical and control engineering can be modeled by stochastic differential equations (SDEs). Besides, SDEs with delays are often used to model systems whose future state depends not only on the present state but also on past states. Neutral differential equations, initially introduced by Hale and Meyer [4] as retarded systems containing discrete and neutral delays, frequently appear in scientific and engineering fields [5]. In various cases, the time delay is a source of instability: slight changes in delay may destabilize a differential equation. Therefore, many researchers have become interested in their stability problem [6, 7]. It is worth pointing out that neutral-type dynamical models, as one of the most essential classes of dynamical systems, are omnipresent in both nature and man-made systems [8, 9]. Some essential facts on neutral stochastic delay differential equations are introduced in [10, 11]. Meanwhile, they are widely applied in many branches of control theory, including the existence and stability of neutral stochastic delay systems [12]. Consequently, many researchers have focused on the stability analysis of neutral stochastic delay systems during the last few decades [10, 13–18].

On the other hand, a fractional Brownian motion (fBm) is a Gaussian stochastic process that differs markedly from semimartingales, standard Brownian motion, and the other processes usually employed in the theory of stochastic processes. An fBm depends on the Hurst index \(H \in (0, 1)\), a parameter introduced by Kolmogorov [19]; we refer to [20] for more details on fBm. It is a centered Gaussian process characterized by its stationary increments and its memory property. The fBm reduces to standard Brownian motion when \(H=1/2\). However, when \(H\neq 1/2\), the fBm behaves in an entirely different way: it is neither a Markov process nor a semimartingale. Recently, the theory of differential equations driven by an fBm has been investigated intensively by many researchers (see [21–29] and the references therein).

Now, we consider the neutral stochastic integrodifferential equation with impulsive moments of the form

$$\begin{aligned} &d \bigl[x(t)-p(t, x_{t}) \bigr]= \biggl[Ax(t)+f(t, x_{t})+ \int_{0}^{t}h(t,s, x_{s})\,ds \biggr]\,dt \\ &\hphantom{d [x(t)-p(t, x_{t}) ]=}{}+\tilde{\sigma }(t)\,dB_{Q}^{H}(t),\quad t\in [0, a], t \neq t_{k}, \end{aligned}$$
$$\begin{aligned} &\triangle x(t_{k})=I_{k}\bigl(x\bigl(t_{k}^{-}\bigr)\bigr),\quad t=t_{k}, k=1, 2, \ldots, \end{aligned}$$
$$\begin{aligned} &x_{0}(t)=\varphi (t) \in \mathcal{PC}\bigl([-r, 0], X\bigr), \quad -r \leq t\leq 0, \end{aligned}$$

where A is the infinitesimal generator of an analytic semigroup \((T(t))_{t \geq 0}\) of bounded linear operators on a Hilbert space X. The impulsive moments \(t_{k}\) satisfy \(0 < t_{1} < t_{2} < \cdots < t_{k} < \cdots \) and \(\lim_{k\rightarrow +\infty}t_{k}=\infty \), \(I_{k}: X\rightarrow X\), and \(\triangle x(t_{k})=x(t_{k}^{+})-x(t_{k}^{-})\), where \(x(t_{k}^{+})\) and \(x(t_{k}^{-})\) are the right and left limits of \(x(t)\) at \(t_{k}\), is the jump size of the state x at \(t_{k}\). For \(\varphi \in \mathcal{PC}\), \(\Vert \varphi \Vert _{\mathcal{PC}}=\sup_{s \in [-r, 0]}\Vert \varphi (s) \Vert < +\infty \), where \(\mathcal{PC}=\{\varphi : [-r, 0] \rightarrow X : \varphi (t)\mbox{ is continuous everywhere except at a finite number of points } \tilde{t} \textrm{ at which } \varphi (\tilde{t}^{-})\textrm{ and }\varphi(\tilde{t}^{+})\textrm{ exist and } \varphi (\tilde{t}^{-})=\varphi (\tilde{t})\}\). For any \(t \in [0, a]\) and any continuous function x, the segment \(x_{t} \in \mathcal{PC}\) is defined by \(x_{t}(\theta )=x(t+\theta )\), \(-r \leq \theta \leq 0\). The functions \(p, f:[0, +\infty ) \times \mathcal{PC} \rightarrow X\), \(h: [0, +\infty ) \times [0, +\infty ) \times \mathcal{PC} \rightarrow X\), and \(\tilde{\sigma }: [0, + \infty ) \rightarrow L_{Q}^{0}(Y, X)\) are appropriate functions, and \(B_{Q}^{H}\) is a fractional Brownian motion.

Yang and Jiang [30] investigated the exponential stability in the pth moment of impulsive stochastic neutral partial differential equations with memory. The asymptotic behavior of second-order neutral stochastic partial differential equations with infinite delay has been studied in [31]. Taniguchi et al. [32] discussed the existence, uniqueness, and asymptotic behavior of mild solutions to stochastic differential equations. Boufoussi and Hajji [23] established the existence, uniqueness, and exponential decay to zero in the mean square moment for mild solutions to neutral stochastic differential equations driven by a fractional Brownian motion. Caraballo et al. [33] discussed the existence, uniqueness, and exponential asymptotic behavior of stochastic delay evolution equations perturbed by a fractional Brownian motion. Stochastic Volterra integrodifferential equations with fractional Brownian motion have been studied in [34]. In [35], the author examined the exponential stability for stochastic partial differential equations with delays and impulses. Very recently, Chen et al. [36] examined the exponential stability of neutral stochastic partial functional differential equations with impulsive effects.

So far, several interesting results focusing on neutral stochastic integrodifferential equations driven by an fBm have been presented. In recent years, the theory and applications of impulsive stochastic integrodifferential equations have received much attention. However, very few papers deal with the stability of mild solutions to stochastic integrodifferential equations with impulsive effects. To the best of our knowledge, the corresponding theory for neutral stochastic integrodifferential equations driven by an fBm with impulsive effects has not yet been explored, and the aim of this paper is to close this gap. Inspired by the works mentioned above, this paper addresses the existence and stability problems for neutral stochastic integrodifferential systems driven by an fBm with delays and impulsive effects.

The structure of this paper is as follows. In Sect. 2, we recall some basic facts. In Sect. 3, we establish the existence and uniqueness of mild solutions and investigate their exponential stability in the mean square moment. In Sect. 4, we give an example illustrating the results. Finally, in Sect. 5, we provide a conclusion.

2 Preliminaries

For convenience, we review some fundamental concepts about the analytic semigroups and fractional powers of their infinitesimal generators. Further, we introduce some necessary facts about fractional Brownian motion (fBm) and the Wiener integral with respect to an fBm.

We first state some important facts from semigroup theory (see [37]) and about fractional powers of operators. Let \(A:D(A) \rightarrow X\) be the infinitesimal generator of an analytic semigroup of bounded linear operators \((T(t))_{t \geq 0}\) on a Hilbert space X. Then there exist constants \(N\geq 1\) and \(\gamma \in \mathbb{R}\) such that \(\Vert T(t) \Vert \leq Ne^{\gamma t}\) for every \(t \geq 0\). We further assume that \((T(t))_{t \geq 0}\) is a uniformly bounded analytic semigroup and that \(0 \in \rho (A)\), where \(\rho (A)\) is the resolvent set of A. For \(0 < \alpha \leq 1\), the fractional power \((-A)^{\alpha}\) is defined as a closed linear operator on its domain \(D((-A)^{\alpha })\), and the equality \(\Vert x \Vert _{\alpha }=\Vert (-A)^{\alpha }x \Vert \) defines a norm on the subspace \(D((-A)^{\alpha })\), which is dense in X. We denote by \(X_{\alpha }\) the space \(D((-A)^{\alpha })\) endowed with the norm \(\Vert \cdot \Vert _{\alpha }\).

Lemma 2.1


Suppose that the preceding conditions are satisfied. Then the following properties hold.

(1) If \(0 < \alpha \leq 1\), then \(X_{\alpha }\) is a Banach space.

(2) If \(0 < \beta \leq \alpha \), then the injection \(X_{\alpha }\hookrightarrow X_{\beta }\) is continuous.

(3) There exists \(N_{\alpha }> 0\) such that

    $$\begin{aligned} \bigl\Vert (-A)^{\alpha } T(t) \bigr\Vert \leq \frac{N_{\alpha }}{t^{\alpha }}e^{-\gamma t}, \quad t>0, \gamma >0, \end{aligned}$$

    for every \(0 < \alpha \leq 1\).
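To make the bound in (3) concrete, here is a small numerical illustration of our own (not part of the lemma): for a diagonal generator \(A=-\operatorname{diag}(\mu_{1}, \mu_{2}, \ldots )\) with \(\mu_{i}>0\), elementary calculus gives \(\sup_{\mu >0}\mu^{\alpha }e^{-\mu t}=(\alpha /t)^{\alpha }e^{-\alpha }\), which is exactly a bound of the form \(N_{\alpha }/t^{\alpha }\) (the additional factor \(e^{-\gamma t}\) appears when the spectrum is bounded away from zero).

```python
import math

# Illustration (ours) of Lemma 2.1(3) for a diagonal generator:
# A = -diag(mu_1, mu_2, ...) with mu_i > 0 gives T(t) = diag(e^{-mu_i t}) and
# ||(-A)^alpha T(t)|| = max_i mu_i^alpha e^{-mu_i t}.  Maximizing mu^alpha e^{-mu t}
# over mu > 0 yields the bound (alpha/t)^alpha e^{-alpha} = N_alpha / t^alpha.

def op_norm(mus, alpha, t):
    """||(-A)^alpha T(t)|| for the diagonal generator -diag(mus)."""
    return max(mu ** alpha * math.exp(-mu * t) for mu in mus)

alpha, t = 0.75, 0.3
mus = [0.01 * k for k in range(1, 5000)]         # sampled spectrum
bound = (alpha / t) ** alpha * math.exp(-alpha)  # N_alpha / t^alpha with N_alpha = (alpha/e)^alpha
assert op_norm(mus, alpha, t) <= bound + 1e-12
assert abs(op_norm([alpha / t], alpha, t) - bound) < 1e-12  # sup attained at mu = alpha/t
```

The second assertion shows that the supremum is attained at \(\mu =\alpha /t\), so the constant \(N_{\alpha }=(\alpha /e)^{\alpha }\) is sharp for diagonal generators.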

Let X and Y be real separable Hilbert spaces, and let \(L(Y, X)\) denote the space of all bounded linear operators from Y into X. For convenience, by \(\vert \cdot \vert \) we denote the norms in X, Y, and \(L(Y, X)\). Let \((\Omega , \mathcal{F}, \mathbb{P})\) be a complete probability space, and let \(Q \in L(Y, Y)\) be a nonnegative self-adjoint operator. By \(L_{Q}^{0}\) we denote the space of all \(\nu \in L(Y, X)\) such that \(\nu Q^{\frac{1}{2}}\) is a Hilbert–Schmidt operator with norm \(\vert \nu \vert ^{2}_{L_{Q}^{0}(Y, X)}= \vert \nu Q^{\frac{1}{2}} \vert ^{2}_{HS}=\operatorname{tr}( \nu Q \nu^{*})\). Such ν are called Q-Hilbert–Schmidt operators. The mathematical expectation operator with respect to \(\mathbb{P}\) is denoted by \(\mathbb{E}(\cdot )\).

Definition 2.2

A two-sided one-dimensional fractional Brownian motion (fBm) with Hurst parameter \(H \in (0, 1)\) is a continuous centered Gaussian process \(\beta^{H}=\{\beta^{H}(t), t \in \mathbb{R}\}\) with covariance function

$$\begin{aligned} R_{H}(t, s)=\mathbb{E}\bigl[\beta^{H}(t) \beta^{H}(s) \bigr]=\frac{1}{2}\bigl(\vert t \vert ^{2H}+\vert s \vert ^{2H}-\vert t-s \vert ^{2H}\bigr), \quad t , s \in \mathbb{R}. \end{aligned}$$
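As a quick numerical sanity check (our own sketch, not part of the paper), the covariance \(R_{H}\) can be implemented directly: for \(H=1/2\) it collapses to \(\min (t, s)\), the Brownian covariance, and the covariance matrix on a time grid can be used to sample exact fBm paths by Cholesky factorization. All parameter choices below are illustrative.

```python
import numpy as np

def R_H(t, s, H):
    """Covariance of fBm with Hurst parameter H."""
    return 0.5 * (abs(t) ** (2 * H) + abs(s) ** (2 * H) - abs(t - s) ** (2 * H))

# For H = 1/2 the covariance collapses to min(t, s): standard Brownian motion.
assert abs(R_H(2.0, 3.0, 0.5) - 2.0) < 1e-12

# Exact sampling of an fBm path on a grid via Cholesky factorization of the
# covariance matrix (the grid, H, and seed are our own illustrative choices).
H, n = 0.7, 50
grid = np.linspace(0.01, 1.0, n)
cov = np.array([[R_H(t, s, H) for s in grid] for t in grid])
L = np.linalg.cholesky(cov)          # succeeds: cov is positive definite
rng = np.random.default_rng(0)
path = L @ rng.standard_normal(n)    # one Gaussian path with covariance cov
assert path.shape == (n,)
```

Cholesky sampling is exact but costs \(O(n^{3})\); it is only meant to make the covariance structure tangible.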

Now we introduce the Wiener integral with respect to a one-dimensional fBm \(\beta^{H}\). Fix \(a>0\). The linear space of \(\mathbb{R}\)-valued step functions on \([0, a]\) is denoted by Φ, that is, \(\varphi \in \Phi \) if

$$\begin{aligned} \varphi (t)=\sum_{i=1}^{n-1}x_{i} \vartheta_{[t_{i}, t_{i+1})}(t), \quad t \in [0, a], \end{aligned}$$

where \(0=t_{1}< t_{2}< \cdots < t_{n}=a\) and \(x_{i} \in \mathbb{R}\). Next, we define the Wiener integral of \(\varphi \in \Phi \) with respect to \(\beta^{H}\) by

$$\begin{aligned} \int_{0}^{a}\varphi (s)\,d\beta^{H}(s)= \sum_{i=1}^{n-1}x_{i}\bigl( \beta^{H}(t _{i+1})-\beta^{H}(t_{i}) \bigr). \end{aligned}$$
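The integral of a step function is just a weighted sum of increments of the path; the following minimal sketch (ours) implements the definition and checks the immediate consequence \(\int_{0}^{a}\vartheta_{[0, t]}\,d\beta^{H}=\beta^{H}(t)\). A deterministic stand-in path with \(\beta (0)=0\) suffices to verify the algebra.

```python
def wiener_integral_step(coeffs, knots, beta):
    """Integral of sum_i coeffs[i] * indicator([knots[i], knots[i+1])) against
    the path beta, following the definition above."""
    return sum(c * (beta(knots[i + 1]) - beta(knots[i]))
               for i, c in enumerate(coeffs))

# Deterministic stand-in path (a sampled fBm path could be plugged in instead).
beta = lambda u: u ** 2

# phi = indicator of [0, 0.6], written over three subintervals.
knots = [0.0, 0.2, 0.4, 0.6]
coeffs = [1.0, 1.0, 1.0]
assert abs(wiener_integral_step(coeffs, knots, beta) - beta(0.6)) < 1e-12
```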

We denote by \(\mathcal{H}\) the Hilbert space that is the closure of Φ with respect to the scalar product \(\langle \vartheta_{[0, t]}, \vartheta_{[0, s]}\rangle_{\mathcal{H}}=R_{H}(t, s)\). Then, we have the following mapping:

$$\begin{aligned} \varphi =\sum_{i=1}^{n-1}x_{i} \vartheta_{[t_{i}, t_{i+1})}\mapsto \int _{0}^{a}\varphi (s)\,d\beta^{H}(s). \end{aligned}$$

This mapping is an isometry between Φ and the linear space \(\operatorname{span}\{\beta^{H}(t), t \in [0, a]\}\), and thus it can be extended to an isometry between \(\mathcal{H}\) and the first Wiener chaos of the fBm \(\overline{\operatorname{span}}^{L^{2}(\Omega )}\{\beta^{H}(t), t\in [0, a]\}\) (see [29]). Denote by \(\beta^{H}(\varphi )\) the image of φ under this isometry. Next, we present an explicit expression for this integral. Let \(K_{H}(t, s)\) be the kernel given by

$$\begin{aligned} K_{H}(t, s)=c_{H} s^{\frac{1}{2}-H} \int_{s}^{t}(\tau -s)^{H- \frac{3}{2}} \tau^{H-\frac{1}{2}}\,d\tau , \quad t >s, \end{aligned}$$

where \(c_{H}= \sqrt{\frac{H(2H-1)}{\mathcal{B}(2-2H, H-\frac{1}{2})}}\), and \(\mathcal{B}\) is the beta function. Since only the upper limit of the integral defining \(K_{H}(t, s)\) depends on t, differentiating under the integral sign gives

$$\begin{aligned} \frac{\partial K_{H}(t, s)}{\partial t}=c_{H} \biggl(\frac{t}{s} \biggr)^{H- \frac{1}{2}}(t-s)^{H-\frac{3}{2}}. \end{aligned}$$
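The derivative formula can also be cross-checked numerically (our own sketch; all values are illustrative). The substitution \(w=(\tau -s)^{H-\frac{1}{2}}\) turns the singular integral defining \(K_{H}(t, s)\) into an integral with a smooth integrand, so the midpoint rule applies, and a central finite difference in t can be compared with the closed form.

```python
import math

# Numerical cross-check (ours) of the derivative formula for K_H.  With
# q = H - 1/2, the substitution w = (tau - s)^q gives
#   K_H(t, s) = c_H * s^{-q} / q * int_0^{(t-s)^q} (s + w^{1/q})^q dw,
# whose integrand is smooth on the domain of integration.

H, s, t = 0.7, 0.5, 1.2
q = H - 0.5
beta_fn = math.gamma(2 - 2 * H) * math.gamma(H - 0.5) / math.gamma(1.5 - H)
c_H = math.sqrt(H * (2 * H - 1) / beta_fn)

def K(t_):
    """K_H(t_, s) by the midpoint rule after the smoothing substitution."""
    b, n = (t_ - s) ** q, 20000
    h = b / n
    integral = sum((s + ((i + 0.5) * h) ** (1 / q)) ** q for i in range(n)) * h
    return c_H * s ** (-q) / q * integral

dt = 1e-5
fd = (K(t + dt) - K(t - dt)) / (2 * dt)              # central finite difference
closed = c_H * (t / s) ** q * (t - s) ** (H - 1.5)   # the formula above
assert abs(fd - closed) / closed < 1e-3
```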

Let us consider the linear operator \(K_{H}^{*}:\Phi \rightarrow L^{2}([0, a])\) defined by

$$\begin{aligned} \bigl(K_{H}^{*} \varphi \bigr) (s)= \int_{s}^{a}\varphi (t)\frac{\partial K_{H}(t,s)}{\partial t}\,dt, \quad\mbox{so that} \quad \bigl(K_{H}^{*} \vartheta_{[0, t]} \bigr) (s)=K _{H}(t, s)\vartheta_{[0, t]}(s). \end{aligned}$$

The isometry \(K_{H}^{*}\) between Φ and \(L^{2}([0, a])\) can be extended to \(\mathcal{H}\). Now we consider \(\mathcal{W}=\{\mathcal{W}(t), t \in [0, a]\}\) defined by \(\mathcal{W}(t)=\beta^{H}((K_{H}^{*})^{-1}\vartheta_{[0, t]})\). Then \(\mathcal{W}\) is a Wiener process and

$$\begin{aligned} \beta^{H}(t)= \int_{0}^{t}K_{H}(t, s)\,d\mathcal{W}(s). \end{aligned}$$

For any \(\varphi \in \mathcal{H}\),

$$\begin{aligned} \int_{0}^{a}\varphi (s)\,d\beta^{H}(s)= \int_{0}^{a}\bigl(K_{H}^{*} \varphi \bigr) (t)\,d\mathcal{W}(t) \end{aligned}$$

if and only if \(K_{H}^{*}\varphi \in L^{2}([0, a])\). Moreover, letting \(L^{2}_{\mathcal{H}}([0, a])=\{\varphi \in \mathcal{H}, K_{H}^{*} \varphi \in L^{2}([0, a])\}\), when \(H>\frac{1}{2}\), we have \(L^{1/H}([0, a])\subset L^{2}_{\mathcal{H}}([0, a])\); see [26].

Lemma 2.3

([38]) For \(\varphi \in L^{1/H}([0, a])\),

$$\begin{aligned} H(2H-1) \int_{0}^{a} \int_{0}^{a}\bigl\vert \varphi (\upsilon ) \bigr\vert \bigl\vert \varphi (\tau ) \bigr\vert \vert \upsilon -\tau \vert ^{2H-2}\,d\upsilon \,d\tau \leq c_{H}\Vert \varphi \Vert ^{2}_{L ^{1/H}([0, a])}. \end{aligned}$$
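For the simplest choice \(\varphi \equiv 1\) on \([0, a]\), the left-hand side of the inequality equals \(\mathbb{E}\vert \beta^{H}(a) \vert ^{2}=a^{2H}\); the following numerical sketch (ours, with illustrative parameters) verifies this identity. The inner integral over υ is evaluated in closed form, which removes the singularity on the diagonal before quadrature.

```python
# Numerical illustration (ours) of the double integral in Lemma 2.3 for
# phi = 1 on [0, a]: the left-hand side then equals E|beta^H(a)|^2 = a^{2H}.
# The inner integral has the closed form
#   int_0^a |u - v|^{2H-2} du = (v^{2H-1} + (a - v)^{2H-1}) / (2H - 1),
# so only a one-dimensional quadrature in v remains.

H, a, n = 0.7, 2.0, 20000

def inner(v):
    return (v ** (2 * H - 1) + (a - v) ** (2 * H - 1)) / (2 * H - 1)

h = a / n
double_integral = sum(inner((i + 0.5) * h) for i in range(n)) * h  # midpoint rule
lhs = H * (2 * H - 1) * double_integral
assert abs(lhs - a ** (2 * H)) < 1e-3
```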

Consider a sequence \(\{\beta_{n}^{H}(t)\}_{n \in \mathbb{N}}\) of independent two-sided one-dimensional standard fBms on \((\Omega , \mathcal{F}, \mathbb{P})\) and the following series, which does not necessarily converge in the space Y:

$$\begin{aligned} \sum_{n=1}^{\infty }\beta_{n}^{H}(t)a_{n}, \quad t \geq 0, \end{aligned}$$

where \(\{a_{n}\}_{n \in \mathbb{N}}\) is a complete orthonormal basis in Y. We therefore consider a Y-valued stochastic process \(B^{H}_{Q}(t)\) defined by the following series, which converges in the space Y when Q is a nonnegative self-adjoint trace-class operator:

$$\begin{aligned} B^{H}_{Q}(t)=\sum_{n=1}^{\infty } \beta_{n}^{H}(t)Q^{\frac{1}{2}}a_{n}, \quad t \geq 0. \end{aligned}$$

Obviously, \(B^{H}_{Q}(t) \in L^{2}(\Omega , Y)\), and \(B^{H}_{Q}(t)\) is a Y-valued Q-cylindrical fBm with covariance operator Q. For example, let \(\{\lambda_{n}\}_{n \in \mathbb{N}}\) be a bounded sequence of nonnegative real numbers such that \(Qa_{n}=\lambda_{n} a_{n}\). If Q is a nuclear operator in Y (that is, \(\sum_{n=1}^{ \infty } \lambda_{n} < \infty \)), then the stochastic process

$$\begin{aligned} B^{H}_{Q}(t)=\sum_{n=1}^{\infty } \beta_{n}^{H}(t)Q^{\frac{1}{2}}a_{n}= \sum _{n=1}^{\infty }(\lambda_{n})^{\frac{1}{2}} \beta_{n}^{H}(t)a_{n}, \quad t \geq 0, \end{aligned}$$

is well-defined as a Y-valued Q-cylindrical fBm.

Let \(\varphi :[0, a]\rightarrow L_{Q}^{0}(Y, X)\) be such that

$$\begin{aligned} \sum_{n=1}^{\infty }\bigl\Vert K_{H}^{*}\bigl(\varphi Q^{\frac{1}{2}}a_{n} \bigr) \bigr\Vert _{L ^{2}([0, a]; X)} < \infty . \end{aligned}$$

Definition 2.4

Let \(\varphi (s), s \in [0, a]\), be a function with values in \(L_{Q}^{0}(Y, X)\). Then the Wiener integral of φ with respect to \(B_{Q}^{H}\) is defined by

$$\begin{aligned} \int_{0}^{t}\varphi (s)\,dB_{Q}^{H}(s)= \sum_{n=1}^{\infty } \int_{0}^{t} \varphi (s)Q^{\frac{1}{2}}a_{n} \,d\beta_{n}^{H}=\sum_{n=1}^{\infty } \int_{0}^{t}\bigl(K_{H}^{*} \bigl(\varphi Q^{\frac{1}{2}}a_{n}\bigr)\bigr) (s)\,d\mathcal{W}(s), \quad t \geq 0. \end{aligned}$$


If

$$\begin{aligned} \sum_{n=1}^{\infty }\bigl\Vert \varphi Q^{\frac{1}{2}}a_{n} \bigr\Vert _{L^{1/H}([0, a]; X)} < \infty , \end{aligned}$$

then certainly (4) holds, which follows directly from \(L^{1/H}([0, a])\subset L^{2}_{\mathcal{H}}([0, a])\).

Lemma 2.5

For any \(\varphi : [0, a] \rightarrow L_{Q}^{0}(Y, X)\) such that (5) holds and for any \(\alpha , \beta \in [0, a] \) with \(\alpha > \beta \),

$$\begin{aligned} \mathbb{E} \biggl\vert \int_{\beta }^{\alpha }\varphi (s)\,dB_{Q}^{H}(s) \biggr\vert ^{2} _{X} \leq cH(2H-1) (\alpha -\beta )^{2H-1}\sum_{n=1}^{\infty } \int_{ \beta }^{\alpha } \bigl\vert \varphi (s)Q^{\frac{1}{2}}a_{n} \bigr\vert ^{2}_{X} \,ds, \end{aligned}$$

where \(c=c(H)\). Also, if \(\sum_{n=1}^{\infty }\vert \varphi (t)Q^{\frac{1}{2}}a_{n} \vert _{X} \) is uniformly convergent for \(t\in [0, a]\), then

$$\begin{aligned} \mathbb{E} \biggl\vert \int_{\beta }^{\alpha }\varphi (s)\,dB_{Q}^{H}(s) \biggr\vert ^{2} _{X} \leq cH(2H-1) (\alpha -\beta )^{2H-1} \int_{\beta }^{\alpha } \bigl\vert \varphi (s) \bigr\vert ^{2}_{L_{Q}^{0}(Y, X)}\,ds. \end{aligned}$$

The proof of this lemma, which is a simple application of Lemma 2.3, is given in [33]; the lemma is essential in the proofs of our results. The following integral inequality (Lemma 3.1 in [30]) is crucial for proving the exponential stability in the mean square moment of mild solutions of the considered impulsive system.

Lemma 2.6

Let a function \(\Gamma :[-r, +\infty )\rightarrow [0, +\infty )\) be such that there exist positive constants \(\omega > 0, \alpha_{j}\) \((j=1, 2, 3)\), and \(\beta_{i}\) \((i=1, 2, \ldots )\) such that

$$\begin{aligned} \Gamma (t) \leq \alpha_{1} e^{-\omega t} \quad \textit{for } t \in[-r, 0] \end{aligned}$$


and

$$\begin{aligned} \Gamma (t) &\leq \alpha_{1} e^{-\omega t}+\alpha_{2} \sup_{\theta \in [-r, 0]}\Gamma (t+\theta )+\alpha_{3} \int_{0}^{t}e ^{-\omega (t-s)}\sup _{\theta \in [-r, 0]}\Gamma (s+\theta )\,ds \\ &\quad {}+\sum_{t_{i} < t}\beta_{i}e^{-\omega (t-t_{i})}\Gamma \bigl(t_{i}^{-}\bigr)\quad \textit{for } t \geq 0. \end{aligned}$$

If \(\alpha_{2}+\frac{\alpha_{3}}{\omega }+\sum_{i=1}^{+\infty }\beta_{i} < 1\), then \(\Gamma (t) \leq Ne^{-\gamma t}\) for \(t \geq -r\), where \(\gamma > 0\) is the unique solution to the equation \(\alpha_{2}+\frac{ \alpha_{3}}{(\omega -\gamma )} e^{\gamma r}+\sum_{i=1}^{+ \infty }\beta_{i}=1\), and \(N=\max \{\alpha_{1}, \frac{\alpha_{1}(\omega -\gamma )}{\alpha_{3} e^{\gamma r}} \} > 0\).
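A sketch (with parameter values of our own choosing) of how the decay rate γ can be computed in practice: on \((0, \omega )\), the left-hand side of the defining equation is increasing, lies below 1 at \(\gamma =0\) by the smallness condition, and blows up as \(\gamma \rightarrow \omega \), so bisection finds the unique root.

```python
import math

# Computing the decay rate gamma of Lemma 2.6 by bisection (illustrative
# constants of our own; the beta series is truncated to three terms).

omega, r = 2.0, 0.5
alpha2, alpha3 = 0.1, 0.4
betas = [0.05, 0.025, 0.0125]

def g(gamma):
    """Left-hand side of the defining equation minus 1."""
    return alpha2 + alpha3 * math.exp(gamma * r) / (omega - gamma) \
        + sum(betas) - 1.0

assert alpha2 + alpha3 / omega + sum(betas) < 1   # smallness condition
lo, hi = 0.0, omega - 1e-9                        # g(lo) < 0, g(hi) > 0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
gamma = 0.5 * (lo + hi)
assert 0 < gamma < omega and abs(g(gamma)) < 1e-8
```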

3 Main results

3.1 Existence of mild solution

In this section, we first establish sufficient conditions for the existence and uniqueness of a mild solution of system (1)–(3) by using fixed-point theory. Then, under certain assumptions, we prove the exponential stability of the mild solution in the mean square moment. Before proceeding with our discussion, we first introduce the concept of a mild solution.

Definition 3.1

An X-valued stochastic process \(\{x(t), t\in [-r, a]\}\) is called a mild solution of system (1)–(3) if

(1) \(x(\cdot ) \in \mathcal{PC}([-r, a], L^{2}(\Omega , X))\);

(2) \(x(t)=\varphi (t)\) for \(t\in [-r, 0]\);

(3) for \(t\in [0, a]\), \(x(t)\) satisfies the integral equation

    $$\begin{aligned} x(t) =&T(t)\bigl[\varphi (0)-p(0, \varphi )\bigr]+p(t, x_{t})+ \int_{0}^{t}AT(t-s)p(s, x_{s})\,ds \\ &+ \int_{0}^{t}T(t-s)f(s, x_{s})\,ds+ \int_{0}^{t}T(t-s) \biggl( \int_{0} ^{s}h(s, \eta , x_{\eta })\,d\eta \biggr)\,ds \\ &+\sum_{0< t_{k}< t}T(t-t_{k})I_{k} \bigl(x\bigl(t_{k}^{-}\bigr)\bigr)+ \int_{0}^{t}T(t-s) \tilde{\sigma }(s) \,dB_{Q}^{H}(s) \quad \mathbb{P}\mbox{-a.s.} \end{aligned}$$
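To illustrate how the formula above determines the solution, here is a scalar sketch with entirely toy choices of ours: \(X=\mathbb{R}\), \(A=-\lambda \) (so \(T(t)=e^{-\lambda t}\)), \(p=f=h=0\), \(\tilde{\sigma }=0\), and linear jumps \(I_{k}(x)=c_{k}x\). The mild-solution formula then reduces to \(x(t)=e^{-\lambda t}x_{0}+\sum_{t_{k}<t}e^{-\lambda (t-t_{k})}c_{k}x(t_{k}^{-})\), whose closed form is \(x(t)=e^{-\lambda t}x_{0}\prod_{t_{k}<t}(1+c_{k})\).

```python
import math

# Scalar toy instance (ours) of the mild-solution formula: p = f = h = 0,
# sigma = 0, T(t) = e^{-lam t}, jumps I_k(x) = c_k * x at times t_k.

lam, x0 = 1.5, 2.0
jumps = [(0.3, 0.5), (0.7, -0.25)]      # (t_k, c_k), toy values

def x_minus(t):
    """Left limit x(t^-) from the mild-solution formula (recursive in the jumps)."""
    val = math.exp(-lam * t) * x0
    for t_k, c_k in jumps:
        if t_k < t:
            val += math.exp(-lam * (t - t_k)) * c_k * x_minus(t_k)
    return val

t = 1.0
exact = math.exp(-lam * t) * x0 * math.prod(
    1 + c_k for t_k, c_k in jumps if t_k < t)
assert abs(x_minus(t) - exact) < 1e-12   # formula matches the closed form
```

Between jumps the state decays exponentially, and each impulse multiplies it by \(1+c_{k}\), which is exactly what the sum over \(t_{k}<t\) encodes.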

To guarantee the existence and uniqueness of a solution, we impose the following hypotheses:

(H1) A is the infinitesimal generator of an analytic semigroup \((T(t))_{t\geq 0}\) of bounded linear operators on the Hilbert space X with \(0 \in \rho (A)\). By Lemma 2.1, for every \(t \in [0, a]\), there exist constants N and \(N_{1-\alpha }\) such that

    $$\begin{aligned} \bigl\Vert T(t) \bigr\Vert \leq N\quad \mbox{and} \quad\bigl\Vert (-A)^{1-\alpha }T(t) \bigr\Vert \leq \frac{N_{1- \alpha }}{t^{1-\alpha }}. \end{aligned}$$
(H2) There exist constants \(\frac{1}{2} < \alpha < 1\) and \(L_{1} > 0\) such that, for all \(t \in [0, a]\) and \(\psi_{1}, \psi_{2} \in \mathcal{PC}\), the \(X_{\alpha }\)-valued function \(p:[0, +\infty ) \times \mathcal{PC} \rightarrow X\) satisfies

    $$\begin{aligned} \bigl\Vert (-A)^{\alpha }p(t, \psi_{1})-(-A)^{\alpha }p(t, \psi_{2}) \bigr\Vert \leq L _{1}\Vert \psi_{1}-\psi_{2} \Vert . \end{aligned}$$

    Also, \(\tilde{L}_{1}=\sup_{ t \in [0, a]}\Vert (-A)^{\alpha }p(t, 0) \Vert \).

(H3) \((-A)^{\alpha }p\) is continuous in the quadratic mean sense: for all \(\psi \in \mathcal{PC}\),

    $$\begin{aligned} \lim_{t \rightarrow s}\mathbb{E}\bigl\Vert (-A)^{\alpha }p(t, \psi )-(-A)^{\alpha }p(s, \psi ) \bigr\Vert ^{2}=0. \end{aligned}$$
(H4) There exists a constant \(L_{2} > 0\) such that the mapping \(f:[0, +\infty ) \times \mathcal{PC} \rightarrow X\) satisfies the following Lipschitz condition for all \(t \in [0, a]\) and \(\psi_{1}, \psi_{2} \in \mathcal{PC}\):

    $$\begin{aligned} \bigl\Vert f(t, \psi_{1})-f(t, \psi_{2}) \bigr\Vert \leq L_{2} \Vert \psi_{1}-\psi_{2} \Vert . \end{aligned}$$

    Here \(\tilde{L}_{2}=\sup_{ t \in [0, a]}\Vert f(t, 0) \Vert \).

(H5) The mapping \(h:[0, +\infty ) \times [0, +\infty ) \times \mathcal{PC} \rightarrow X\) satisfies the following Lipschitz condition: there exists a constant \(L_{3} > 0\) such that, for all \(t \in [0, a]\) and \(\psi_{1}, \psi_{2} \in \mathcal{PC}\),

    $$\begin{aligned} \biggl\Vert \int_{0}^{t} \bigl[h(t, s, \psi_{1})-h(t, s, \psi_{2}) \bigr]\,ds \biggr\Vert \leq L_{3} \Vert \psi_{1}-\psi_{2} \Vert . \end{aligned}$$

    Here \(\tilde{L}_{3}=a\sup_{0 \leq s \leq t \leq a}\Vert h(t, s, 0) \Vert \).

(H6) The impulsive functions \(I_{k}:X \rightarrow X\) are continuous, and there exist positive numbers \(q_{k}\) \((k=1, 2, \ldots )\) such that \(\sum_{k=1}^{\infty }q_{k} < \infty \) and, for all \(\psi_{1}, \psi_{2} \in X\),

    $$\begin{aligned} \bigl\Vert I_{k}(\psi_{1})-I_{k}( \psi_{2}) \bigr\Vert \leq q_{k}\Vert \psi_{1}-\psi_{2} \Vert \quad \mbox{and}\quad \bigl\Vert I_{k}(0) \bigr\Vert =0. \end{aligned}$$
(H7) The function \(\tilde{\sigma }: [0, +\infty ) \rightarrow L_{Q}^{0}(Y, X)\) satisfies

    $$\begin{aligned} \int_{0}^{t} \bigl\Vert \tilde{\sigma }(s) \bigr\Vert _{L_{Q}^{0}}^{2}\,ds < \infty , \quad \forall t \in [0, a]. \end{aligned}$$

We also consider the following two conditions on the complete orthonormal basis \(\{a_{n}\}_{n \in \mathbb{N}}\) in Y.

(C.1) \(\sum_{n=1}^{\infty }\Vert \tilde{\sigma }Q^{1/2}a_{n} \Vert _{L^{2}([0, a] ; X)}< \infty \).

(C.2) \(\sum_{n=1}^{\infty }\vert \tilde{\sigma }(t)Q^{1/2}a_{n} \vert _{X}\) is uniformly convergent for \(t \in [0, a]\).

We now establish the existence and uniqueness results for system (1)–(3).

Theorem 3.2

Assume that hypotheses (H1)–(H7) are satisfied, \(\phi \in \mathcal{PC}\), \(a > 0\), and

$$\begin{aligned} \frac{4N^{2} (\sum_{k=1}^{+\infty }q_{k} )^{2}}{(1-k)^{2}} < 1, \end{aligned}$$

where \(k=\Vert (-A)^{-\alpha } \Vert L_{1}\). Then system (1)–(3) has a unique mild solution on \([-r, a]\).


In what follows, \(\Lambda_{a}:= \mathcal{PC}([-r, a], L^{2}(\Omega , X))\) denotes the Banach space of all piecewise continuous functions from \([-r, a]\) into \(L^{2}(\Omega , X)\) equipped with the supremum norm \(\Vert \zeta \Vert ^{2}_{\Lambda_{a}}= \sup_{s\in [-r, a]}( \mathbb{E}\Vert \zeta (s) \Vert ^{2})\). Fix \(a > 0\), and denote \(\hat{\Lambda } _{a}=\{x \in \Lambda_{a}:x(\tau )=\varphi (\tau ) \mbox{ for } \tau \in [-r, 0]\}\), which is a closed subset of \(\Lambda_{a}\) provided with the norm \(\Vert \cdot \Vert _{\Lambda_{a}}\). We now transform problem (1)–(3) into a fixed-point problem by defining the operator \(\Psi : \hat{\Lambda }_{a} \rightarrow \hat{\Lambda }_{a}\) as follows: \((\Psi x)(t)=\varphi (t)\) for \(t \in [-r, 0]\), and

$$\begin{aligned} (\Psi x) (t) &=T(t)\bigl[\varphi (0)-p(0, \varphi )\bigr]+p(t, x_{t})+ \int_{0} ^{t}AT(t-s)p(s, x_{s})\,ds \\ &\quad {}+ \int_{0}^{t}T(t-s)f(s, x_{s})\,ds+ \int_{0}^{t}T(t-s) \biggl( \int_{0} ^{s}h(s, \eta , x_{\eta })\,d\eta \biggr)\,ds \\ &\quad {}+ \int_{0}^{t}T(t-s)\tilde{\sigma }(s) \,dB_{Q}^{H}(s)+\sum_{0< t_{k}< t}T(t-t _{k})I_{k}\bigl(x\bigl(t_{k}^{-}\bigr) \bigr), \quad t \in [0, a]. \end{aligned}$$

We will show that the operator Ψ has a fixed point. The proof is based on the Banach contraction mapping principle and is divided into two steps.
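It may help to see the contraction argument in miniature first (a toy sketch of ours, not the operator Ψ above): Picard iteration for \((\Psi x)(t)=1+\int_{0}^{t}x(s)\,ds\) on \([0, \frac{1}{2}]\), which is a contraction with constant \(\frac{1}{2}\) in the supremum norm, just as Ψ becomes a contraction once \(a_{1}\leq a\) is chosen small enough.

```python
import math

# Toy contraction on a function space (ours): (Psi x)(t) = 1 + int_0^t x(s) ds
# on [0, 1/2] has sup-norm Lipschitz constant 1/2 < 1, with unique fixed point
# x(t) = e^t.  We discretize with the trapezoid rule and iterate.

T, n = 0.5, 1000
h = T / n
grid = [i * h for i in range(n + 1)]

def psi(x):
    """Discrete (Psi x)(t) = 1 + int_0^t x(s) ds via the trapezoid rule."""
    out, acc = [1.0], 0.0
    for i in range(n):
        acc += 0.5 * (x[i] + x[i + 1]) * h
        out.append(1.0 + acc)
    return out

x = [0.0] * (n + 1)          # arbitrary starting point of the iteration
for _ in range(60):
    x = psi(x)

# The fixed point matches e^t up to discretization error.
err = max(abs(x[i] - math.exp(t)) for i, t in enumerate(grid))
assert err < 1e-4
```

The iterates converge geometrically at rate \(\frac{1}{2}\), which is exactly the quantitative content of the contraction principle used below.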

Step 1: We first show that the mapping \(t\rightarrow (\Psi x)(t)\) is continuous on the interval \([0, a]\). Let \(x \in \hat{\Lambda }_{a}\), \(0 < t < a\), let \(\vert \tau \vert \) be sufficiently small, and denote by \(G_{1}(t), \ldots , G_{6}(t)\) the successive terms, after the first, on the right-hand side of the mild-solution formula in Definition 3.1. Then we have

$$\begin{aligned} \mathbb{E}\bigl\Vert (\Psi x) (t+\tau )-(\Psi x) (t) \bigr\Vert ^{2} &\leq 7 \bigl\{ \mathbb{E}\bigl\Vert \bigl(T(t+\tau )-T(t) \bigr)\bigl[\varphi (0)-p(0, \varphi )\bigr] \bigr\Vert ^{2} \bigr\} \\ &\quad {}+7 \sum_{j=1}^{6}\mathbb{E}\bigl\Vert G_{j}(t+\tau )-G_{j}(t) \bigr\Vert ^{2}. \end{aligned}$$

Now by the semigroup property we can write

$$\begin{aligned} \mathbb{E}\bigl\Vert \bigl(T(t+\tau )-T(t)\bigr)\bigl[\varphi (0)-p(0, \varphi )\bigr] \bigr\Vert ^{2}= \mathbb{E}\bigl\Vert \bigl(T(\tau )T(t)-T(t)\bigr)\bigl[\varphi (0)-p(0, \varphi )\bigr] \bigr\Vert ^{2}. \end{aligned}$$

By strong continuity of \(T(t)\) and hypothesis (H1) we conclude that

$$\begin{aligned} \mathbb{E}\bigl\Vert \bigl(T(t+\tau )-T(t)\bigr)\bigl[\varphi (0)-p(0, \varphi )\bigr] \bigr\Vert ^{2}\leq 2N ^{2} \mathbb{E}\bigl\Vert \varphi (0)-p(0, \varphi ) \bigr\Vert ^{2}. \end{aligned}$$

The strong continuity of \(T(t)\), together with the Lebesgue dominated convergence theorem, gives that

$$\begin{aligned} \lim_{\tau \rightarrow 0}\mathbb{E}\bigl\Vert \bigl(T(t+\tau )-T(t)\bigr) \bigl[\varphi (0)-p(0,\varphi )\bigr] \bigr\Vert ^{2}=0. \end{aligned}$$

Since the operator \((-A)^{-\alpha }\) is bounded, by (H3) we obtain that

$$\begin{aligned} \lim_{\tau \rightarrow 0}\mathbb{E}\bigl\Vert G_{1}(t+\tau )-G_{1}(t) \bigr\Vert ^{2}=0. \end{aligned}$$

By Hölder’s inequality we have

$$\begin{aligned} &\mathbb{E}\bigl\Vert G_{2}(t+\tau )-G_{2}(t) \bigr\Vert ^{2} \\ &\quad \leq 2\mathbb{E} \biggl\Vert \int_{0}^{t} \bigl[\bigl(T(\tau )-I\bigr) (-A)^{1-\alpha}T(t-s) (-A)^{\alpha } \bigr]p(s, x_{s})\,ds \biggr\Vert ^{2} \\ &\quad \quad {} +2\mathbb{E} \biggl\Vert \int_{t}^{t+\tau }(-A)^{1-\alpha }T(t+\tau -s) (-A)^{\alpha }p(s, x_{s})\,ds \biggr\Vert ^{2}. \end{aligned}$$

By assumptions (H1) and (H2), the integrands on the right-hand side of this inequality satisfy

$$\begin{aligned} \bigl\Vert \bigl[\bigl(T(\tau )-I\bigr) (-A)^{1-\alpha }T(t-s) (-A)^{\alpha } \bigr]p(s,x_{s}) \bigr\Vert ^{2} \leq \bigl\Vert T(\tau )-I \bigr\Vert ^{2} \frac{N^{2}_{1-\alpha }}{(t-s)^{2(1- \alpha )}} \bigl[L_{1}\Vert x_{s} \Vert ^{2}+ \tilde{L}_{1} \bigr] \end{aligned}$$


and

$$\begin{aligned} \bigl\Vert (-A)^{1-\alpha }T(t+\tau -s) (-A)^{\alpha }p(s, x_{s}) \bigr\Vert ^{2} \leq \frac{N^{2}_{1-\alpha }}{(t+\tau -s)^{2(1-\alpha )}} \bigl[L _{1}\Vert x_{s} \Vert ^{2}+ \tilde{L}_{1} \bigr]. \end{aligned}$$

From these inequalities, by the strong continuity of \(T(t)\) and the Lebesgue dominated convergence theorem we get

$$\begin{aligned} \mathbb{E}\bigl\Vert G_{2}(t+\tau )-G_{2}(t) \bigr\Vert ^{2}\rightarrow 0 \quad \mbox{as } \vert \tau \vert \rightarrow 0. \end{aligned}$$

Similarly, we have

$$\begin{aligned} &\mathbb{E}\bigl\Vert G_{3}(t+\tau )-G_{3}(t) \bigr\Vert ^{2} \\ &\quad \leq 2\mathbb{E} \biggl\Vert \int_{0}^{t} \bigl[\bigl(T(\tau )-I\bigr)T(t-s)\bigr]f(s,x_{s})\,ds \biggr\Vert ^{2}+2\mathbb{E} \biggl\Vert \int_{t}^{t+\tau }T(t+\tau -s)f(s,x_{s})\,ds \biggr\Vert ^{2}. \end{aligned}$$

By assumption (H4), the integrands on the right-hand side of this inequality satisfy

$$\begin{aligned} \bigl\Vert \bigl[\bigl(T(\tau )-I\bigr)T(t-s) \bigr]f(s, x_{s})\bigr\Vert ^{2} \leq \bigl\Vert T(\tau )-I\bigr\Vert ^{2} N^{2} \bigl[L_{2} \Vert x_{s} \Vert ^{2}+ \tilde{L}_{2} \bigr] \end{aligned}$$


and

$$\begin{aligned} \bigl\Vert T(t+\tau -s)f(s, x_{s}) \bigr\Vert ^{2} \leq N^{2} \bigl[L_{2}\Vert x_{s} \Vert ^{2}+ \tilde{L}_{2} \bigr]. \end{aligned}$$

By the Lebesgue dominated convergence theorem from these inequalities, along with the strong continuity of \(T(t)\), we conclude that

$$\begin{aligned} \mathbb{E}\bigl\Vert G_{3}(t+\tau )-G_{3}(t) \bigr\Vert ^{2}\rightarrow 0 \quad \mbox{as } \vert \tau \vert \rightarrow 0. \end{aligned}$$

Now we have

$$\begin{aligned} &\mathbb{E}\bigl\Vert G_{4}(t+\tau )-G_{4}(t) \bigr\Vert ^{2} \\ &\quad \leq 2\mathbb{E} \biggl\Vert \int_{0}^{t} \bigl[\bigl(T(\tau )-I\bigr)T(t-s) \bigr] \biggl( \int_{0}^{s}h(s, \eta , x_{\eta })\,d\eta \biggr)\,ds \biggr\Vert ^{2} \\ &\quad\quad {} +2\mathbb{E} \biggl\Vert \int_{t}^{t+\tau }T(t+\tau -s) \biggl( \int_{0}^{s}h(s,\eta , x_{\eta })\,d\eta \biggr)\,ds \biggr\Vert ^{2}. \end{aligned}$$

By assumption (H5), the integrands on the right-hand side of the last inequality satisfy

$$\begin{aligned} \biggl\Vert \bigl[\bigl(T(\tau )-I\bigr)T(t-s) \bigr] \biggl( \int_{0}^{s}h(s, \eta , x_{\eta })\,d\eta \biggr) \biggr\Vert ^{2} \leq \bigl\Vert T(\tau )-I \bigr\Vert ^{2} N^{2} \bigl[L _{3}\Vert x_{\eta } \Vert ^{2}+a \tilde{L}_{3} \bigr] \end{aligned}$$


and

$$\begin{aligned} \biggl\Vert T(t+\tau -s) \biggl( \int_{0}^{s}h(s, \eta , x_{\eta })\,d\eta \biggr) \biggr\Vert ^{2} \leq N^{2} \bigl[L_{3} \Vert x_{\eta } \Vert ^{2}+a \tilde{L} _{3} \bigr]. \end{aligned}$$

From these inequalities and the strong continuity of \(T(t)\), by the Lebesgue dominated convergence theorem we get that

$$\begin{aligned} \mathbb{E}\bigl\Vert G_{4}(t+\tau )-G_{4}(t) \bigr\Vert ^{2}\rightarrow 0 \quad \mbox{as } \vert \tau \vert \rightarrow 0. \end{aligned}$$

Further, we have

$$\begin{aligned} &\mathbb{E}\bigl\Vert G_{5}(t+\tau )-G_{5}(t) \bigr\Vert ^{2} \\ &\quad \leq 2\mathbb{E} \biggl\Vert \sum_{0< t_{k}< t} \bigl(T(\tau )-I\bigr)T(t-t_{k})I_{k}\bigl(x \bigl(t_{k}^{-}\bigr)\bigr) \biggr\Vert ^{2} +2 \mathbb{E} \biggl\Vert \sum_{t< t_{k}< t+\tau }T(t+\tau -t_{k})I_{k}\bigl(x\bigl(t_{k}^{-} \bigr)\bigr) \biggr\Vert ^{2}. \end{aligned}$$

By (H1) and (H6), the summands on the right-hand side of this inequality satisfy

$$\begin{aligned} \bigl\Vert \bigl(T(\tau )-I\bigr)T(t-t_{k})I_{k}\bigl(x \bigl(t_{k}^{-}\bigr)\bigr) \bigr\Vert ^{2} \leq \bigl\Vert T(\tau )-I \bigr\Vert ^{2} N^{2} \bigl[ q_{k} \bigl\Vert x\bigl(t_{k}^{-}\bigr) \bigr\Vert ^{2} \bigr] \end{aligned}$$


and

$$\begin{aligned} \bigl\Vert T(t+\tau -t_{k})I_{k}\bigl(x \bigl(t_{k}^{-}\bigr)\bigr) \bigr\Vert ^{2} \leq N^{2} \bigl[ q_{k}\bigl\Vert x\bigl(t_{k}^{-} \bigr) \bigr\Vert ^{2} \bigr]. \end{aligned}$$

Hence, we get that

$$\begin{aligned} \mathbb{E}\bigl\Vert G_{5}(t+\tau )-G_{5}(t) \bigr\Vert ^{2}\rightarrow 0\quad\mbox{as } \vert \tau \vert \rightarrow 0. \end{aligned}$$

Now, we have

$$\begin{aligned}& \mathbb{E}\bigl\Vert G_{6}(t+\tau )-G_{6}(t) \bigr\Vert ^{2} \\& \quad \leq 2\mathbb{E} \biggl\Vert \int_{0}^{t} \bigl[\bigl(T(\tau )-I\bigr)T(t-s) \bigr]\tilde{\sigma }(s)\,dB_{Q}^{H}(s) \biggr\Vert ^{2} +2\mathbb{E} \biggl\Vert \int_{t}^{t+\tau }T(t+\tau -s)\tilde{\sigma }(s) \,dB_{Q}^{H}(s) \biggr\Vert ^{2} \\& \quad := J_{1}+J_{2}. \end{aligned}$$

By Lemma 2.5 and (H1) we have

$$\begin{aligned} J_{1} \leq & 2cH(2H-1)t^{2H-1} \int_{0}^{t} \bigl\Vert \bigl[\bigl(T(\tau )-I \bigr)T(t-s) \bigr]\tilde{\sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}} \,ds \\ \leq & 2cH(2H-1)t^{2H-1}N^{2} \int_{0}^{t} \bigl\Vert \bigl(T(\tau )-I\bigr) \tilde{\sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}}\,ds \rightarrow 0 \quad\mbox{as } \vert \tau \vert \rightarrow 0 \end{aligned}$$
by the dominated convergence theorem, since


$$\begin{aligned} T(\tau )\tilde{\sigma }(s)\rightarrow \tilde{\sigma }(s), \quad \bigl\Vert T(\tau ) \tilde{\sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}} \leq N^{2}\bigl\Vert \tilde{\sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}} \quad \mbox{for all } s. \end{aligned}$$

By Lemma 2.5 we get

$$\begin{aligned} J_{2} \leq & 2cH(2H-1)\tau^{2H-1}N^{2} \int_{t}^{t+\tau } \bigl\Vert \tilde{\sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}}\,ds \rightarrow 0\quad\mbox{as } \vert \tau \vert \rightarrow 0. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \lim_{\tau \rightarrow 0}\mathbb{E}\bigl\Vert G_{6}(t+\tau )-G_{6}(t) \bigr\Vert ^{2}=0. \end{aligned}$$

Therefore, the combination of all previous estimations yields that

$$\begin{aligned} \lim_{\tau \rightarrow 0}\mathbb{E}\bigl\Vert (\Psi x) (t+\tau )-(\Psi x) (t) \bigr\Vert ^{2}=0. \end{aligned}$$

As a result, we conclude that the function \(t\rightarrow (\Psi x)(t)\) is continuous on the interval \([0, a]\).

Step 2: We show that the mapping Ψ is a contraction on \(\hat{\Lambda }_{a_{1}}\) for some \(a_{1} \leq a\) to be specified later. Let \(x, y \in \hat{\Lambda }_{a}\) and \(t \in [0, a]\). By an elementary inequality we get that

$$\begin{aligned} &\bigl\Vert (\Psi x) (t)-(\Psi y) (t) \bigr\Vert ^{2} \\ &\quad \leq \frac{1}{k} \bigl\Vert p(t, x_{t})-p(t, y_{t}) \bigr\Vert ^{2}+ \frac{4}{1-k} \biggl\{ \biggl\Vert \int_{0}^{t}AT(t-s) \bigl[p(s, x_{s})-p(s, y_{s}) \bigr]\,ds \biggr\Vert ^{2} \\ &\quad \quad {}+ \biggl\Vert \int_{0}^{t}T(t-s) \bigl[f(s, x_{s})-f(s, y_{s}) \bigr]\,ds \biggr\Vert ^{2}+ \biggl\Vert \int_{0}^{t}T(t-s) \biggl[ \int_{0}^{s}h(s, \eta , x_{\eta })\,d\eta \\ &\quad \quad {}-\int_{0}^{s}h(s, \eta , y_{\eta })\,d\eta\biggr]\,ds \biggr\Vert ^{2}+ \biggl\Vert \sum_{0< t_{k}< t}T(t-t_{k}) \bigl[I_{k}\bigl(x\bigl(t_{k}^{-}\bigr)\bigr)-I_{k}\bigl(y \bigl(t_{k}^{-}\bigr)\bigr) \bigr] \biggr\Vert ^{2} \biggr\} \\ &\quad \leq \frac{1}{k} \bigl\Vert (-A)^{-\alpha } \bigr\Vert ^{2} \bigl\Vert (-A)^{\alpha } \bigl[p(t, x_{t})-p(t, y_{t}) \bigr] \bigr\Vert ^{2} \\ &\quad \quad {}+\frac{4}{1-k} \biggl\Vert \int_{0}^{t}(-A)^{1-\alpha }T(t-s) (-A)^{\alpha} \bigl[p(s, x_{s})-p(s, y_{s}) \bigr] \,ds \biggr\Vert ^{2} \\ &\quad \quad {}+\frac{4}{1-k} \biggl\Vert \int_{0}^{t}T(t-s) \bigl[f(s, x_{s})-f(s, y_{s}) \bigr]\,ds \biggr\Vert ^{2} \\ &\quad \quad {}+\frac{4}{1-k} \biggl\Vert \int_{0}^{t}T(t-s) \biggl[ \int_{0}^{s}h(s, \eta ,x_{\eta })\,d\eta - \int_{0}^{s}h(s, \eta , y_{\eta })\,d\eta \biggr]\,ds \biggr\Vert ^{2} \\ &\quad \quad {}+\frac{4}{1-k} \biggl\Vert \sum_{0< t_{k}< t}T(t-t_{k})\bigl[I_{k}\bigl(x\bigl(t_{k}^{-}\bigr)\bigr)-I_{k}\bigl(y\bigl(t_{k}^{-}\bigr)\bigr) \bigr] \biggr\Vert ^{2}. \end{aligned}$$

By the Hölder inequality, together with the Lipschitz property of \((-A)^{\alpha }p\), f, h, and \(I_{k}\), \(k=1, 2, \ldots \) , we have

$$\begin{aligned} &\mathbb{E}\bigl\Vert (\Psi x) (t)-(\Psi y) (t) \bigr\Vert ^{2} \\ &\quad \leq k \mathbb{E}\Vert x_{t}-y_{t} \Vert ^{2}+ \frac{4}{1-k}N^{2}_{1-\alpha }L_{1}^{2} \biggl(\frac{t^{2 \alpha -1}}{2\alpha -1} \biggr) \int_{0}^{t}\mathbb{E}\Vert x_{s}-y_{s} \Vert ^{2}\,ds \\ &\quad \quad {}+\frac{4}{1-k}tN^{2} \bigl[L_{2}^{2}+L_{3}^{2} \bigr] \int_{0}^{t} \mathbb{E}\Vert x_{s}-y_{s} \Vert ^{2}\,ds+\frac{4}{1-k}N^{2} \Biggl(\sum _{k=1}^{+\infty }q_{k} \Biggr)^{2}\mathbb{E}\bigl\Vert x\bigl(t_{k}^{-} \bigr)-y\bigl(t_{k}^{-}\bigr) \bigr\Vert ^{2}. \end{aligned}$$

So, we obtain

$$\begin{aligned} \sup_{s \in [-r, t]}\mathbb{E}\bigl\Vert (\Psi x) (s)-(\Psi y) (s) \bigr\Vert ^{2} \leq \rho (t)\sup_{s \in [-r, t]}\mathbb{E} \bigl\Vert x(s)-y(s) \bigr\Vert ^{2}, \end{aligned}$$
where


$$\begin{aligned} \rho (t)=k+\frac{4N^{2}_{1-\alpha }L_{1}^{2}}{(1-k)(2\alpha -1)}t^{2 \alpha }+\frac{4N^{2}L_{2}^{2}}{1-k}t^{2}+ \frac{4N^{2}L_{3}^{2}}{1-k}t ^{2} +\frac{4N^{2}}{1-k} \Biggl(\sum _{k=1}^{+\infty }q_{k} \Biggr)^{2}. \end{aligned}$$

Hence, by inequality (7) we have

$$\begin{aligned} \rho (0)=k+\frac{4N^{2}}{1-k} \Biggl(\sum_{k=1}^{+\infty }q_{k} \Biggr)^{2} =k+(1-k)\frac{4N^{2} (\sum_{k=1}^{+\infty }q_{k} )^{2}}{(1-k)^{2}} < k+(1-k)=1. \end{aligned}$$

Then there exists \(0 < a_{1} \leq a\) such that \(0 < \rho (a_{1}) < 1\), and Ψ is a contraction on \(\hat{\Lambda }_{a_{1}}\). Hence the operator Ψ has a unique fixed point, which is a mild solution of system (1)–(3) on the interval \([-r, a_{1}]\). Evidently, this solution can be continued to the succeeding intervals, and repeating the procedure finitely many times extends it to the entire interval \([-r, a]\). Thus, the proof is completed. □
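The fixed-point mechanism behind this proof can be illustrated numerically. The following sketch (a toy scalar mapping, not the operator Ψ itself) shows that Picard iteration of a mapping with contraction constant below 1 converges to its unique fixed point.

```python
import math

# A toy illustration of the Banach contraction principle used above:
# phi has |phi'(x)| = 0.5*|sin x| <= 0.5 < 1, so it is a contraction on R,
# and the Picard iterates x_{n+1} = phi(x_n) converge geometrically to the
# unique fixed point x* = phi(x*).

def phi(x: float) -> float:
    return 0.5 * math.cos(x)

x = 0.0
for _ in range(100):            # Picard iteration
    x = phi(x)

residual = abs(x - phi(x))      # distance from being a fixed point
print(x, residual)
```

After 100 iterations the residual is at machine-precision level, reflecting the geometric convergence rate \(\rho^{n}\) with \(\rho = 1/2\).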

Remark 3.3

By the preceding theorem, the mild solution of system (1)–(3) is well defined on \([-r, a]\).

3.2 Exponential stability

In this subsection, we establish sufficient conditions ensuring the exponential stability in mean square moment of the mild solution of system (1)–(3). For this purpose, we need the following additional assumptions.

  1. (H8)

    \((T(t))_{t\geq 0}\) satisfies the following condition in addition to (H1):

    There exist \(\gamma >0\) and \(N>0\) such that \(\Vert T(t) \Vert \leq N e^{-\gamma t}\) for all \(t\geq 0\); that is, the strongly continuous semigroup \((T(t))_{t\geq 0}\) is exponentially stable.

  2. (H9)

    For all \(t\geq 0\) and \(\psi \in \mathcal{PC}\), there exist constants \(R_{1}, R_{2}, R_{3} \geq 0\) and continuous functions \(\xi_{i}:[0, +\infty )\rightarrow \mathbb{R}_{+}\), \(i=1,2,3\), such that

    $$\begin{aligned}& \bigl\Vert (-A)^{\alpha }p(t, \psi ) \bigr\Vert ^{2} \leq R_{1} \Vert \psi \Vert ^{2}+\xi_{1}(t), \\& \bigl\Vert f(t, \psi ) \bigr\Vert ^{2} \leq R_{2} \Vert \psi \Vert ^{2}+\xi_{2}(t), \\& \biggl\Vert \int_{0}^{t}h(t, s, \psi )\,ds \biggr\Vert ^{2} \leq R_{3} \Vert \psi \Vert ^{2}+\xi_{3}(t). \end{aligned}$$
  3. (H10)

    There exist constants \(P_{1}, P_{2}, P_{3} \geq 0\) such that

    $$\begin{aligned} \xi_{j}(t)\leq P_{j} e^{-\gamma t}, \quad \forall t\geq 0, j=1, 2, 3. \end{aligned}$$
  4. (H11)

    The function \(\tilde{\sigma }: [0, +\infty ) \rightarrow L_{Q}^{0}(Y, X)\) satisfies the following condition in addition to assumptions (C.1) and (C.2):

    $$\begin{aligned} \int_{0}^{+\infty }e^{\gamma s} \bigl\Vert \tilde{ \sigma }(s) \bigr\Vert _{L _{Q}^{0}}^{2}\,ds < \infty . \end{aligned}$$
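Assumption (H11) is easy to check numerically for a concrete noise coefficient. The following sketch uses a hypothetical choice \(\Vert \tilde{\sigma }(s) \Vert ^{2}_{L_{Q}^{0}} = e^{-2\gamma s}\) with \(\gamma = \pi^{2}\) (the value arising in the example of Section 4); the weighted integral then equals \(1/\gamma < \infty \), so (H11) holds.

```python
import math

# Hypothetical illustration of (H11): with ||sigma~(s)||^2_{L_Q^0} = exp(-2*gamma*s)
# the integrand e^{gamma s} * ||sigma~(s)||^2 equals e^{-gamma s}, whose integral
# over [0, infinity) is 1/gamma, hence finite.

gamma = math.pi ** 2

def integrand(s: float) -> float:
    return math.exp(gamma * s) * math.exp(-2.0 * gamma * s)

def simpson(f, a, b, n=10_000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# truncate at s = 5: the tail contribution is below exp(-5*gamma)/gamma ~ 1e-23
approx = simpson(integrand, 0.0, 5.0)
print(approx, 1.0 / gamma)
```

The quadrature value agrees with the closed form \(1/\gamma \) to high accuracy, confirming the finiteness required by (H11) for this choice of σ̃.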

Theorem 3.4

Suppose that conditions (H6)–(H11) are satisfied and

$$\begin{aligned} \frac{6 \{ [\gamma^{1-2\alpha }2^{2(1-\alpha )}N^{2}_{1- \alpha }N^{2}\Gamma (2\alpha -1)R_{1}/\gamma ] + [N^{2} (R_{2}+R _{3})/\gamma^{2} ]+ [N^{2} (\sum_{k=1}^{+\infty }q _{k} )^{2} ] \}}{(1-k)^{2}}< 1, \end{aligned}$$

where \(k:=\sqrt{R_{1}}\Vert (-A)^{-\alpha } \Vert \). Then the mild solution of system (1)–(3) is exponentially stable in mean square moment.
Proof


By inequality (8) there is \(\epsilon > 0\) small enough such that

$$\begin{aligned} k+\frac{6\gamma^{1-2\alpha }2^{2(1-\alpha )}N^{2}_{1-\alpha }N^{2} \Gamma (2\alpha -1)R_{1}}{(\gamma -\epsilon )(1-k)}+\frac{6N^{2} (R _{2}+R_{3})}{\gamma (\gamma -\epsilon )(1-k)} +\frac{6N^{2} ( \sum_{k=1}^{+\infty }q_{k} )^{2}}{1-k}< 1. \end{aligned}$$

Let \(x(t)\) be a mild solution of system (1)–(3) and denote \(\mu =\gamma -\epsilon \). Then from Eq. (6) we have

$$\begin{aligned} \mathbb{E}\bigl\Vert x(t) \bigr\Vert ^{2} &\leq \frac{1}{k}\mathbb{E} \bigl\Vert p(t, x_{t}) \bigr\Vert ^{2} +\frac{6}{1-k}\mathbb{E} \biggl\{ \bigl\Vert T(t) \bigl[ \varphi (0)-p(0,\varphi ) \bigr] \bigr\Vert ^{2} \\ &\quad {}+ \biggl\Vert \int_{0}^{t}AT(t-s)p(s, x_{s})\,ds \biggr\Vert ^{2} + \biggl\Vert \int_{0}^{t}T(t-s)\tilde{\sigma }(s) \,dB_{Q}^{H}(s) \biggr\Vert ^{2} \\ &\quad {}+ \biggl\Vert \int_{0}^{t}T(t-s)f(s, x_{s})\,ds \biggr\Vert ^{2} + \biggl\Vert \int_{0}^{t}T(t-s) \biggl( \int_{0}^{s}h(s, \eta , x_{\eta })\,d\eta \biggr)\,ds \biggr\Vert ^{2} \\ &\quad {}+ \biggl\Vert \sum_{0< t_{k}< t}T(t-t_{k})I_{k} \bigl(x\bigl(t_{k}^{-}\bigr)\bigr) \biggr\Vert ^{2} \biggr\} \\ &\leq \sum_{j=1}^{7}F_{j}(t). \end{aligned}$$

From assumptions (H9) and (H10) we obtain that

$$\begin{aligned} F_{1}(t) &=\frac{1}{k}\mathbb{E} \bigl\Vert (-A)^{-\alpha }(-A)^{\alpha }p(t,x_{t}) \bigr\Vert ^{2} \\ &\leq \frac{\Vert (-A)^{-\alpha } \Vert ^{2}}{k} \bigl\{ R_{1}\mathbb{E}\Vert x_{t} \Vert ^{2}+\xi_{1}(t) \bigr\} \\ &\leq k\mathbb{E}\Vert x_{t} \Vert ^{2}+G_{1}e^{-\gamma t}, \end{aligned}$$

where \(G_{1}=\frac{\Vert (-A)^{-\alpha } \Vert ^{2}}{k}P_{1}\).

It follows from (H8), (H9), and (H10) that

$$\begin{aligned} F_{2}(t) &\leq \frac{12}{1-k}\mathbb{E}\bigl\Vert T(t)\varphi (0) \bigr\Vert ^{2}+ \frac{12}{1-k}\mathbb{E}\bigl\Vert T(t)p(0, \varphi ) \bigr\Vert ^{2} \\ &\leq \frac{12N^{2}}{1-k}e^{-2\gamma t}\mathbb{E}\bigl\Vert \varphi (0) \bigr\Vert ^{2}+\frac{12N ^{2}}{1-k}e^{-2\gamma t} \bigl\Vert (-A)^{-\alpha } \bigr\Vert ^{2} \bigl\{ R_{1} \mathbb{E}\Vert \varphi \Vert ^{2}+\xi_{1}(t) \bigr\} \\ &\leq G_{2} e^{-\mu t}, \end{aligned}$$

where \(G_{2}=\frac{12N^{2}}{1-k} [\mathbb{E}\Vert \varphi (0) \Vert ^{2} + \Vert (-A)^{-\alpha } \Vert ^{2} \{R_{1}\mathbb{E}\Vert \varphi \Vert ^{2}+P_{1} \} ]\).

Combining conditions (H8), (H9), and (H10) with the Hölder inequality and Lemma 2.1, we get that

$$\begin{aligned} F_{3}(t) &=\frac{6}{1-k}\mathbb{E} \biggl\Vert \int_{0}^{t}(-A)^{1-\alpha }T \biggl( \frac{t-s}{2} \biggr)T \biggl(\frac{t-s}{2} \biggr) (-A)^{\alpha }p(s, x_{s})\,ds \biggr\Vert ^{2} \\ &\leq \frac{6}{1-k} \int_{0}^{t}\frac{N^{2}_{1-\alpha }}{(t-s)^{2(1- \alpha )}}2^{2(1-\alpha )}e^{-\gamma (t-s)} \,ds \int_{0}^{t}e^{-\gamma (t-s)}N^{2} \mathbb{E} \bigl\Vert (-A)^{\alpha }p(s, x_{s}) \bigr\Vert ^{2}\,ds \\ &\leq \frac{6\gamma^{1-2\alpha }2^{2(1-\alpha )}N^{2}_{1-\alpha }N ^{2} \Gamma (2\alpha -1)R_{1}}{1-k} \int_{0}^{t}e^{-\gamma (t-s)} \mathbb{E}\Vert x_{s} \Vert ^{2}\,ds+G_{3} e^{-\mu t}, \end{aligned}$$

where \(G_{3}=\frac{6\gamma^{1-2\alpha }2^{2(1-\alpha )}N^{2}_{1- \alpha }N^{2}\Gamma (2\alpha -1)}{1-k}\frac{P_{1}}{\gamma -\mu }\).

By assumptions (H8), (H9), and (H10), applying the Hölder inequality, we get

$$\begin{aligned} F_{4}(t) &\leq \frac{6}{1-k}\mathbb{E} \biggl( \int_{0}^{t} N e^{-\gamma (t-s)} \bigl\Vert f(s, x_{s}) \bigr\Vert \,ds \biggr)^{2} \\ &\leq \frac{6N^{2} R_{2}}{\gamma (1-k)} \int_{0}^{t}e^{-\gamma (t-s)} \mathbb{E}\Vert x_{s} \Vert ^{2}\,ds+G_{4} e^{-\mu t}, \end{aligned}$$

where \(G_{4}=\frac{6N^{2}}{\gamma (1-k)}\frac{P_{2}}{\gamma -\mu }\).
Similarly, by (H8)–(H10) and the Hölder inequality we obtain


$$\begin{aligned} F_{5}(t) &\leq \frac{6}{1-k}\mathbb{E} \biggl( \int_{0}^{t} N e^{-\gamma (t-s)} \biggl\Vert \int_{0}^{s}h(s, \eta , x_{\eta })\,d\eta \biggr\Vert \,ds \biggr)^{2} \\ &\leq \frac{6N^{2} R_{3}}{\gamma (1-k)} \int_{0}^{t}e^{-\gamma (t-s)} \mathbb{E}\Vert x_{s} \Vert ^{2}\,ds+G_{5} e^{-\mu t}, \end{aligned}$$

where \(G_{5}=\frac{6N^{2}}{\gamma (1-k)}\frac{P_{3}}{\gamma -\mu }\).

By Lemma 2.5 and (H8) we have

$$\begin{aligned} F_{6}(t) &\leq \frac{6}{1-k}N^{2} cH(2H-1)t^{2H-1} \int_{0}^{t}e^{-2 \gamma (t-s)} \bigl\Vert \tilde{ \sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}}\,ds \\ &\leq e^{-\mu t}\frac{6N^{2}}{(1-k)}cH(2H-1)t^{2H-1}e^{-\epsilon t} \int_{0}^{t}e^{\gamma s} \bigl\Vert \tilde{ \sigma }(s) \bigr\Vert ^{2}_{L_{Q} ^{0}}\,ds. \end{aligned}$$

Notice that assumption (H11) guarantees the existence of a constant \(G_{6} >0\) such that, for all \(t \geq 0\),

$$\begin{aligned} \frac{6N^{2}}{(1-k)}cH(2H-1)t^{2H-1}e^{-\epsilon t} \int_{0}^{t}e^{ \gamma s} \bigl\Vert \tilde{ \sigma }(s) \bigr\Vert ^{2}_{L_{Q}^{0}}\,ds \leq G _{6}. \end{aligned}$$

So, we obtain

$$\begin{aligned} F_{6}(t) \leq G_{6} e^{-\mu t}. \end{aligned}$$

From (H6) we get

$$\begin{aligned} F_{7}(t) &\leq \frac{6N^{2}}{1-k} \Biggl(\sum _{k=1}^{+\infty }q_{k} \Biggr)^{2}e ^{-2\gamma (t-t_{k})}\mathbb{E}\bigl\Vert x\bigl(t_{k}^{-}\bigr) \bigr\Vert ^{2} \\ &\leq \frac{6N^{2}}{1-k} \Biggl(\sum_{k=1}^{+\infty }q_{k} \Biggr) \Biggl( \sum_{k=1}^{+\infty }q_{k} \Biggr) e^{-\gamma (t-t_{k})}\mathbb{E}\bigl\Vert x\bigl(t_{k}^{-} \bigr) \bigr\Vert ^{2}. \end{aligned}$$

From inequalities (9)–(15) it follows that

$$\begin{aligned} \mathbb{E}\bigl\Vert x(t) \bigr\Vert ^{2} &\leq \rho e^{-\mu t} \quad \mbox{for } t \in [-r, 0] \end{aligned}$$

and, for each \(t \geq 0\),

$$\begin{aligned} \mathbb{E}\bigl\Vert x(t) \bigr\Vert ^{2} &\leq \rho e^{-\mu t}+k \sup_{-r \leq \theta \leq 0}\mathbb{E}\bigl\Vert x(t+\theta ) \bigr\Vert ^{2} +\hat{k} \int_{0}^{t}e^{-\mu (t-s)}\sup_{-r \leq \theta \leq 0} \mathbb{E}\bigl\Vert x(t+\theta ) \bigr\Vert ^{2}\,ds \\ &+\sum_{k=1}^{+\infty }\omega_{k} e^{-\mu (t-t_{k})}\mathbb{E}\bigl\Vert x\bigl(t_{k}^{-} \bigr) \bigr\Vert ^{2}, \end{aligned}$$
where \(\omega_{k}=\frac{6N^{2}}{1-k} (\sum_{j=1}^{+\infty }q_{j} )q_{k}\),


$$\begin{aligned} \hat{k}=\frac{6\gamma^{1-2\alpha }2^{2(1-\alpha )}N^{2}_{1-\alpha }N ^{2}\Gamma (2\alpha -1)R_{1}}{1-k}+\frac{6N^{2}(R_{2}+R_{3})}{\gamma (1-k)} \end{aligned}$$
and


$$\begin{aligned} \rho = \max \Biggl(\sum_{j=1}^{6}G_{j}, \sup_{-r \leq \theta \leq 0} \mathbb{E}\bigl\Vert \varphi (\theta ) \bigr\Vert ^{2} \Biggr). \end{aligned}$$

Since \(k+\frac{\hat{k}}{\mu }+\sum_{k=1}^{+\infty }\omega_{k} < 1\), Lemma 2.6 provides two positive constants G and θ such that \(\mathbb{E}\Vert x(t) \Vert ^{2} \leq G e^{-\theta t}\) for all \(t \geq -r\), where \(\theta > 0\) is the unique solution of the equation \(k+\frac{\hat{k}}{\mu -\theta }e ^{\theta r}+\sum_{k=1}^{+\infty }\omega_{k}=1\) and \(G=\max \{\rho , \frac{\rho (\mu -\theta )}{\hat{k} e^{\theta r}} \} > 0\). Hence the mild solution of system (1)–(3) is exponentially stable in mean square moment. This completes the proof of the theorem. □
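The decay rate θ from the last step can be located numerically. The sketch below solves \(k+\frac{\hat{k}}{\mu -\theta }e^{\theta r}+\sum_{k}\omega_{k}=1\) by bisection for hypothetical constants (the values of k, k̂, μ, r, and \(\sum_{k}\omega_{k}\) are illustrative, not derived from a concrete system); the left-hand side is increasing in θ and blows up as \(\theta \rightarrow \mu^{-}\), so a unique root exists whenever the hypothesis \(k+\hat{k}/\mu +\sum_{k}\omega_{k}<1\) holds.

```python
import math

# Bisection for the unique root theta in (0, mu) of
#   k + (k_hat/(mu - theta)) * exp(theta*r) + sum_omega = 1,
# with illustrative (hypothetical) constants satisfying the Lemma 2.6 hypothesis.

k, k_hat, mu, r, sum_omega = 0.1, 0.2, 1.0, 1.0, 0.1
assert k + k_hat / mu + sum_omega < 1.0   # hypothesis: value at theta = 0 is below 1

def g(theta: float) -> float:
    return k + k_hat * math.exp(theta * r) / (mu - theta) + sum_omega - 1.0

lo, hi = 0.0, mu - 1e-9        # g(lo) < 0, g(hi) > 0 (blow-up near theta = mu)
for _ in range(200):           # g is increasing on (0, mu), so bisection applies
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)

theta = 0.5 * (lo + hi)
print(theta)                   # the mean-square decay exponent for these constants
```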

Remark 3.5

If the impulsive effects vanish, that is, \(\triangle x(t_{k})=I_{k}(\cdot )= 0\) for \(k=1, 2,\ldots \) , then system (1)–(3) reduces to the following form:

$$\begin{aligned}& d \bigl[x(t)-p(t, x_{t}) \bigr] = \biggl[Ax(t)+f(t, x_{t})+ \int_{0}^{t}h(t, s, x_{s})\,ds \biggr]\,dt \\& \hphantom{d \bigl[x(t)-p(t, x_{t}) \bigr] =}{}+\tilde{\sigma }(t)\,dB_{Q}^{H}(t), \quad t\in [0, a], \end{aligned}$$
$$\begin{aligned}& x_{0}(t) =\varphi (t) \in \mathcal{C}, \quad -r \leq t \leq 0, \end{aligned}$$

where the operators A, p, f, h, and σ̃ are defined as before. Here \(\mathcal{C}=\mathcal{C}([-r, 0], X)\) is endowed with the norm \(\Vert \varphi \Vert _{\mathcal{C}}=\sup_{\theta \in [-r, 0]}\Vert \varphi (\theta ) \Vert \).

We can easily deduce the following corollary by utilizing the same technique as in Theorem 3.4.

Corollary 3.6

Assume that conditions (H7)–(H11) hold and

$$\begin{aligned} \frac{5 \{ [\gamma^{1-2\alpha }2^{2(1-\alpha )}N^{2}_{1- \alpha }N^{2}\Gamma (2\alpha -1)R_{1}/\gamma ] + [N^{2} (R_{2}+R _{3})/\gamma^{2} ] \}}{(1-k)^{2}}< 1. \end{aligned}$$

Then the mild solution of system (16)–(17) is exponentially stable in mean square moment.

4 Example

In this section, we consider an application of the theory developed in the previous section. Let \(X=Y=L^{2}[0, \pi ]\) and define the operator \(A:X\rightarrow X\) by \(A=\frac{\partial^{2}}{\partial \tau^{2}}\) with domain \(D(A)=H_{0}^{1}(0, \pi )\cap H^{2}(0, \pi )\). It is well known that there exists a complete orthonormal set \(\{e_{n}\}_{n \in \mathbb{N}}\) of eigenvectors of A with \(e_{n}(x)=\sqrt{\frac{2}{ \pi }}\sin nx, n=1, 2, \ldots \) . Then

$$\begin{aligned} Ax=-\sum_{n=1}^{+\infty }n^{2}\langle x, e_{n} \rangle e_{n},\quad x \in D(A). \end{aligned}$$

It is easy to see that A is the infinitesimal generator of an analytic semigroup \(T(t)\), \(t\geq 0\), in X and

$$\begin{aligned} T(t)x=\sum_{n=1}^{+\infty }\exp \bigl(-n^{2} t\bigr)\langle x, e_{n} \rangle e _{n}, \quad x \in X. \end{aligned}$$

Furthermore, we know that \(\Vert T(t) \Vert \leq \exp (-\pi^{2} t)\), \(t\geq 0\). The fractional power \((-A)^{\frac{3}{4}}\) is defined by

$$\begin{aligned} (-A)^{\frac{3}{4}}x=\sum_{n=1}^{+\infty }n^{\frac{3}{2}} \langle x, e _{n} \rangle_{X} e_{n} \end{aligned}$$

with domain

$$\begin{aligned} D\bigl((-A)^{\frac{3}{4}}\bigr)=X_{\frac{3}{4}}= \Biggl\{ x \in X, \sum _{n=1}^{+ \infty }n^{\frac{3}{2}}\langle x, e_{n} \rangle_{X} e_{n} \in X \Biggr\} . \end{aligned}$$
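As a quick sanity check on the fractional powers just introduced, the following sketch verifies numerically that \(\frac{1}{\Gamma (3/4)}\int_{0}^{+\infty }u^{-1/4}e^{-\pi^{2}u}\,du = \pi^{-3/2}\), the identity used below to bound \(\Vert (-A)^{-\frac{3}{4}} \Vert \). The substitution \(u=v^{4}\) removes the integrable singularity at the origin.

```python
import math

# Numerical check (a sketch, not part of the proof) of the identity
#   (1/Gamma(3/4)) * int_0^inf u^{-1/4} e^{-pi^2 u} du = pi^{-3/2},
# which follows from int_0^inf u^{a-1} e^{-lam*u} du = Gamma(a) * lam^{-a}.

def simpson(f, a, b, n=20_000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# substitution u = v^4: u^{-1/4} e^{-pi^2 u} du  ->  4 v^2 e^{-pi^2 v^4} dv,
# a smooth integrand; the tail beyond v = 2.5 is negligibly small
integral = simpson(lambda v: 4.0 * v * v * math.exp(-math.pi ** 2 * v ** 4), 0.0, 2.5)

bound = integral / math.gamma(0.75)
print(bound, math.pi ** -1.5)   # both ~ 0.1796
```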

Consider the impulsive neutral stochastic partial integrodifferential equations of the form

$$\begin{aligned}& \,d \bigl[w(t, \tau )-\beta_{1} \bigl(t, w(t-\zeta , \tau ) \bigr) \bigr] \\& \quad = \biggl[\frac{\partial^{2}}{\partial \tau^{2}} w(t, \tau )+\beta_{2} \bigl(t, w(t- \zeta , \tau ) \bigr)+ \int_{0}^{t}\beta_{3}\bigl(t, s, w(s- \zeta , \tau )\bigr)\,ds \biggr]\,dt+\tilde{\sigma }(t)\,dB_{Q}^{H}(t), \end{aligned}$$

\(0\leq \tau \leq \pi , t\neq t_{k}, t \in [0, a]\), subject to the initial conditions

$$\begin{aligned}& w(t, 0)=w(t, \pi )=0, \quad 0 \leq t \leq a, \\& \triangle w(t_{k}, \cdot ) (\tau )=\frac{\beta_{4}}{k^{2}}w \bigl(t_{k}^{-}, \tau \bigr), \quad t=t_{k}, k=1, 2, \ldots , \\& w(t, \tau )=\varphi (t, \tau ) \in \mathcal{PC}\bigl([-r, 0], L^{2}[0,\pi ]\bigr), \quad -r \leq t \leq 0, \end{aligned}$$

where \(\beta_{j}>0\), \(j=1, 2, 3, 4\), and \(\tilde{\sigma }: \mathbb{R} ^{+} \rightarrow \mathbb{R}\) is a continuous function satisfying assumption (H11).
To write this system in the abstract form (1)–(3), we define


$$\begin{aligned}& p(t, w_{t}) (\tau ) = \beta_{1}\bigl(t, w(t-\zeta , \tau ) \bigr), \\& f(t, w_{t}) (\tau ) = \beta_{2}\bigl(t, w(t-\zeta , \tau ) \bigr), \\& \int_{0}^{t}h(t, s, w_{s}) (\tau )\,ds = \int_{0}^{t}\beta_{3}\bigl(t, s, w(s- \zeta , \tau )\bigr)\,ds, \\& I_{k}\bigl(w\bigl(t_{k}^{-}\bigr)\bigr) = \frac{\beta_{4}}{k^{2}}w\bigl(t_{k}^{-}\bigr) \quad (k=1, 2, \ldots ). \end{aligned}$$

To define the operator \(Q:X\rightarrow X\), we can choose a sequence \(\{\lambda_{n}\}_{n\geq 1} \subset \mathbb{R}^{+}\) and set \(Q e_{n}= \lambda_{n} e_{n}\). Also, assume that \(\operatorname{tr}(Q)=\sum_{n=1}^{ \infty }(\lambda_{n})^{\frac{1}{2}}< \infty \). Now we define the process \(B^{H}_{Q}(t)\) by

$$\begin{aligned} B^{H}_{Q}(t)=\sum_{n=1}^{\infty }( \lambda_{n})^{\frac{1}{2}}\alpha ^{H}_{n}(t)e_{n}, \end{aligned}$$

where \(H \in (\frac{1}{2}, 1)\), and \(\{\alpha^{H}_{n}\}_{n \in \mathbb{N}}\) is a sequence of independent two-sided one-dimensional fBms.
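This construction of \(B_{Q}^{H}\) can be simulated directly. The sketch below (an illustration, not the paper's construction) truncates the \(e_{n}\)-expansion to finitely many modes, takes the hypothetical choice \(\lambda_{n}=1/n^{4}\) (so that \(\sum_{n}\sqrt{\lambda_{n}}=\pi^{2}/6<\infty \)), and samples each scalar fBm \(\alpha_{n}^{H}\) exactly on a time grid via a Cholesky factorisation of its covariance \(R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-\vert t-s \vert ^{2H})\).

```python
import math
import random

# Minimal sketch of simulating the Q-fBm B_Q^H(t) = sum_n sqrt(lambda_n) * alpha_n^H(t) * e_n
# on a time grid, truncated to n_modes modes, with the hypothetical lambda_n = 1/n**4.

H = 0.75                                    # Hurst parameter in (1/2, 1)
grid = [0.1 * (i + 1) for i in range(20)]   # time grid t_1 < ... < t_m

def fbm_cov(s: float, t: float) -> float:
    # covariance of a standard fBm with Hurst parameter H
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def cholesky(a):
    # plain Cholesky factorisation of a symmetric positive-definite matrix
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def sample_fbm(rng):
    # exact Gaussian sampling of one fBm path on the grid
    cov = [[fbm_cov(s, t) for t in grid] for s in grid]
    l = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in grid]
    return [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(len(grid))]

rng = random.Random(0)
n_modes = 5
# coefficient paths sqrt(lambda_n) * alpha_n^H(t); the e_n-expansion is truncated
paths = [[math.sqrt(1.0 / n ** 4) * v for v in sample_fbm(rng)]
         for n in range(1, n_modes + 1)]
```

Each list in `paths` is one coefficient path of the truncated expansion; summing them against the eigenfunctions \(e_{n}\) yields an approximate sample of \(B_{Q}^{H}\).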

It is obvious that all the assumptions are satisfied with

$$\begin{aligned}& \gamma =\pi^{2},\qquad N=1, \qquad r=1, \\& R_{1}=\beta_{1} \bigl\Vert (-A)^{\frac{3}{4}} \bigr\Vert , \qquad R_{2}= \beta_{2}, \qquad R_{3}=\beta_{3},\qquad q_{k}= \frac{\beta_{4}}{k^{2}} \quad (k=1, 2, \ldots ). \end{aligned}$$

By the definition of \((-A)^{-\frac{3}{4}}\) (see [37]) it is easy to conclude that

$$\begin{aligned} \bigl\Vert (-A)^{-\frac{3}{4}} \bigr\Vert \leq \frac{1}{\Gamma (\frac{3}{4})} \int_{0}^{+\infty }u^{-\frac{1}{4}}\bigl\Vert T(u) \bigr\Vert \,du \leq \frac{1}{ \pi^{\frac{3}{2}}} \end{aligned}$$

and \(\Vert (-A)^{\frac{3}{4}} \Vert =1\). Consequently, all the assumptions of Theorem 3.2 are satisfied, and if

$$\begin{aligned} \frac{\beta_{4}^{2} \pi^{4}}{9}< \biggl(1-\frac{\beta_{1}}{\pi^{ \frac{3}{2}}} \biggr)^{2}, \end{aligned}$$

then system (19) has a unique mild solution, where \(L_{1}=\beta_{1} \Vert (-A)^{\frac{3}{4}} \Vert \). Furthermore, by Theorem 3.4 we may infer that if

$$\begin{aligned} 6 \biggl\{ \biggl[\sqrt{2}\beta_{1}\Gamma \biggl(\frac{1}{2} \biggr)/\pi^{3} \biggr]+ \bigl[(\beta_{2}+ \beta_{3})/\pi^{4} \bigr]+ \bigl[\beta_{4}^{2} \pi ^{4}/36 \bigr] \biggr\} < \biggl(1-\frac{\beta_{1}^{1/2}}{\pi^{\frac{3}{2}}} \biggr)^{2}, \end{aligned}$$

then the mild solution of system (19) is exponentially stable in mean square moment.
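For concreteness, both displayed conditions can be checked numerically for sample parameter values. The sketch below uses the hypothetical choice \(\beta_{j}=0.1\), \(j=1,\ldots ,4\) (any values satisfying the inequalities would do), under which the example system has a unique mild solution that is exponentially stable in mean square moment.

```python
import math

# Numerical check of the two displayed conditions of the example for the
# hypothetical parameter values beta_j = 0.1, j = 1, ..., 4.

b1 = b2 = b3 = b4 = 0.1

# existence/uniqueness condition: beta_4^2 * pi^4 / 9 < (1 - beta_1/pi^{3/2})^2
lhs_exist = b4 ** 2 * math.pi ** 4 / 9.0
rhs_exist = (1.0 - b1 / math.pi ** 1.5) ** 2

# exponential stability condition of Theorem 3.4, specialised as in the text:
# 6*{ sqrt(2)*b1*Gamma(1/2)/pi^3 + (b2+b3)/pi^4 + b4^2*pi^4/36 } < (1 - sqrt(b1)/pi^{3/2})^2
lhs_stab = 6.0 * (math.sqrt(2.0) * b1 * math.gamma(0.5) / math.pi ** 3
                  + (b2 + b3) / math.pi ** 4
                  + b4 ** 2 * math.pi ** 4 / 36.0)
rhs_stab = (1.0 - math.sqrt(b1) / math.pi ** 1.5) ** 2

print(lhs_exist < rhs_exist, lhs_stab < rhs_stab)
```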

5 Conclusion

In this paper, we studied a general class of impulsive neutral stochastic integrodifferential equations driven by a fractional Brownian motion. Using the fixed point approach, we gave sufficient conditions ensuring the existence and uniqueness of a mild solution of the considered system. Further, the exponential stability in mean square moment of mild solutions was examined based on an impulsive integral inequality. Finally, we illustrated the effectiveness of the derived criteria by an example.