Complex & Intelligent Systems

Volume 4, Issue 1, pp 19–29

p-th moment exponential convergence analysis for stochastic networked systems driven by fractional Brownian motion

Open Access
Original Article


In this paper, the existence, uniqueness and asymptotic behavior of mild solutions of stochastic neural network systems driven by fractional Brownian motion are investigated. By applying the Banach fixed point theorem, the existence and uniqueness of the mild solution are proved analytically in a Hilbert space. Based on a moment inequality for Wick-type integrals, a p-th moment exponential convergence condition for the mild solution is presented. Finally, two numerical examples are given to demonstrate the validity of the theoretical results.


Stochastic Hopfield neural networks · Mild solution · Fractional Brownian motion · p-th moment exponential convergence


Research on Hopfield neural networks (NNs) is widespread across many fields because of their extensive applications, for instance in image processing, pattern recognition, associative memory and combinatorial optimization; see [1, 2, 3, 4, 5]. When NNs are applied to practical optimization problems, they are usually designed to be globally asymptotically or exponentially stable in order to avoid spurious responses or the problem of local minima. Hence, exploring the convergence of NNs is of primary importance; see [6, 7, 8, 9, 10, 11] and the references therein.

It is generally known that external perturbations are unavoidable in practical applications. A large class of natural phenomena with time-evolution behavior cannot be described by the classical Brownian motion. Fractional Brownian motion (fBm), by contrast, has a strong memory property, which allows it to better model noise from the surrounding environment. Studying fractional Brownian motion is therefore of great practical significance in physics, finance, hydrology, telecommunications and networked control; see [12, 13, 14]. Recently, an increasing number of researchers have become interested in the dynamic behavior of differential equation systems driven by fBm. In [15, 16, 17, 18, 19], the asymptotic behavior of solutions of stochastic differential equations driven by fBm was investigated, and conditions ensuring the existence and uniqueness of solutions were proposed. Boufoussi and Hajji [20, 21] considered the existence of a mild solution for neutral stochastic functional differential equations driven by fBm in a Hilbert space. Furthermore, Boufoussi et al. [22, 23] investigated the existence and uniqueness of a mild solution for time-dependent neutral stochastic functional differential equations driven by fBm. Ferrante and Rovira [24, 25] explored the convergence of stochastic delay differential equations driven by fBm with Hurst parameter \(H > 1/2\). Taniguchi et al. [26] discussed the existence, uniqueness and asymptotic behavior of mild solutions to stochastic functional differential equations in Hilbert spaces. In [27], the existence and uniqueness of positive solutions were considered for higher-order nonlocal fractional differential equations driven by fBm. Wei and Wang [28] investigated the existence and uniqueness of the solution for stochastic functional differential equations with infinite delay.

Very recently, a small amount of work on the dynamic behavior of neural networked systems driven by fBm has been reported in [29, 30, 31]. In [29], the authors discussed exponential synchronization for delayed stochastic NNs driven by fBm and, by using an ingenious mathematical transformation technique and some well-known inequalities, established an exponential synchronization condition. In [30], by applying a numerical method, the authors considered the mean-square dissipativity of a class of stochastic NNs with fBm and jumps.

It should be pointed out that, in [29], the exponential synchronization results for delayed stochastic NNs driven by fBm were obtained under the assumption that a solution exists. In [30], the existence of a solution for NNs with fBm was likewise not considered. It is well known that only a system possessing a state solution is useful in real-world applications. To the best of our knowledge, very little attention has been paid in the literature to the existence and uniqueness of solutions for stochastic NN systems driven by fBm.

Motivated by the previous discussion, in this paper, based on analytic semigroup theory and a moment inequality for Wick-type integrals, a p-th moment exponential convergence condition for the mild solution is developed. This research method differs from the methods used for general dynamical systems, such as the linear matrix inequality approach and the M-matrix technique. In addition, the p-norm we use is different from the usual 2-norm, and thus the conditions we obtain are different; to the best of our knowledge, the p-norm requires more stringent conditions. So far, few works have studied this aspect comprehensively. The convergence problem is meaningful and challenging for stochastic neural networks with memory-type perturbation effects.

Notation. \(R^{n}\) and \(R^{n\times n}\) denote the n-dimensional Euclidean space and the set of all \(n\times n\) real matrices, respectively. A matrix B is symmetric if \(B=B^{T}\), where the superscript T denotes the transpose. Denote by \(\lambda _\mathrm{{max}}(B)\) and \(\lambda _\mathrm{{min}}(B)\) the largest and smallest eigenvalues of B, respectively. For \(r=(r_{1},r_{2},\ldots ,r_{n})^{T}\in R^{n}\), denote by \(\Vert r\Vert =(\sum _{i=1}^{n}|r_{i}|^{p})^{\frac{1}{p}}\) the vector norm. For any matrix \( A=(a_{ij})_{n\times m}\), the Hilbert–Schmidt norm is defined as \(\Vert A\Vert =(\sum _{i=1}^{m}\sum _{j=1}^{n}|a_{ij}|^{2})^{\frac{1}{2}}=(\mathrm{tr}(A^{T}A))^{\frac{1}{2}}\). \(\mathrm{Diag} \{\ldots \}\) stands for a diagonal matrix. \((\Omega ,\mathcal {F},\mathcal {P})\) is a complete probability space with a filtration \(\{\mathcal {F}_{t}\}_{t\ge 0}\) satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). \((\cdot ,\cdot )\) represents the inner product. \((U,\Vert \cdot \Vert , (\cdot ,\cdot )_{U})\) and \((K,\Vert \cdot \Vert , (\cdot ,\cdot )_{K})\) denote separable Hilbert spaces, and L(K,U) is the space of all bounded linear operators from K to U. Let \(Q\in L(K,K)\) be a non-negative self-adjoint operator and denote by \(L_{Q}^{p}(K,U)\) the space of all \(\xi \in L(K,U)\) such that \(\xi Q^{\frac{1}{2}}\) is a Hilbert–Schmidt operator. If \(x(\cdot )\) is a measurable function on F, then \(|x(\cdot )|^{p}\in \mathcal {L}(F)\). \(E(\cdot )\) stands for the mathematical expectation operator with respect to the given probability measure.

Preliminaries and model description


If \(S_{T}\) is a closed subset of a Banach space X and \(\psi :S_{T}\rightarrow S_{T}\) is a contraction on \(S_{T}\), then \(\psi \) has a unique fixed point \(\bar{x}\) in \(S_{T}\). Also, if \(x_{0}\) in \(S_{T}\) is arbitrary, then the sequence \(\{x_{n+1}=\psi x_{n}, n=0,1,\ldots \}\) converges to \(\bar{x}\) as \(n\rightarrow \infty \) and \(|\bar{x}-x_{n}|\le \lambda ^{n}|x_{1}-x_{0}|/(1-\lambda ),\) where \(\lambda <1\) is the contraction constant for \(\psi \) on \(S_{T}.\)
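The contraction principle above can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it iterates the contraction \(\psi(x)=\cos(x)\) on \([0,1]\), whose contraction constant is \(\lambda=\sin(1)<1\), and checks the stated a priori error bound \(|\bar{x}-x_{n}|\le \lambda ^{n}|x_{1}-x_{0}|/(1-\lambda)\).

```python
import math

# Picard iteration for the contraction psi(x) = cos(x) on [0, 1];
# |psi'(x)| = |sin(x)| <= sin(1) =: lam < 1 there.
psi = math.cos
lam = math.sin(1.0)

xs = [0.5]                      # arbitrary starting point x_0 in [0, 1]
for _ in range(200):
    xs.append(psi(xs[-1]))      # x_{n+1} = psi(x_n)
x_bar = xs[-1]                  # numerically converged fixed point

# A priori bound from the theorem: |x_bar - x_n| <= lam^n |x_1 - x_0| / (1 - lam).
for n in range(20):
    bound = lam**n * abs(xs[1] - xs[0]) / (1 - lam)
    assert abs(x_bar - xs[n]) <= bound + 1e-12
```

The geometric factor \(\lambda^{n}\) is exactly what drives the step-by-step contraction estimates used later in the existence proof.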

\(B^{H}=\{B^{H}(t), t\in R\}\) on \( (\Omega ,\mathcal {F},\mathcal {P})\) is called the normalized (two-sided) fBm with Hurst parameter \(H\in (\frac{1}{2},1)\). \(B^{H}\) is a centered Gaussian process with covariance function:
$$\begin{aligned} R_{H}(s,t)=\frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H}). \end{aligned}$$
Moreover, \(B^{H}\) has the following Wiener integral representation:
$$\begin{aligned} B^{H}(t)=\int _{0}^{t}K_{H}(t,s)\mathrm{d}\beta (s), \end{aligned}$$
where \(\beta = \{\beta (t): t\in [0,T] \}\) is a Wiener process, and \(K_{H}(t,s)\) is the kernel given by
$$\begin{aligned} K_{H}(t,s)=c_{H}s^{\frac{1}{2}-H}\int _{0}^{t}(u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}}\mathrm{d}u, \end{aligned}$$
for \(t>s\), where
$$\begin{aligned} c_{H}=\sqrt{\frac{H(2H-1)}{\beta (2-2H,H-\frac{1}{2})}}, \end{aligned}$$
and \(\beta (\cdot ,\cdot )\) denotes the Beta function. We take \(K_{H}(t,s)=0\) if \(t\le s\).
We consider a U-valued stochastic process \(B_{Q}^{H}(t)\) given by the following series:
$$\begin{aligned} B_{Q}^{H}(t)=\sum _{n=1}^{\infty }\beta _{n}^{H}(t)Q^{\frac{1}{2}}e_{n}=\sum _{n=1}^{\infty }\sqrt{\lambda _{n}}\beta _{n}^{H}(t)e_{n}, \quad t\ge 0, \end{aligned}$$
where \(\beta _{n}^{H}(t) \ (n=1,2,\ldots )\) is a sequence of two-sided, one-dimensional standard fBms that are mutually independent on \( (\Omega ,\mathcal {F},\mathcal {P})\), \(\{e_{n}\} \ (n=1,2,\ldots )\) is a complete orthonormal basis in K, and \(Q\in L(K,K)\) is a non-negative self-adjoint operator defined by \(Qe_{n}=\lambda _{n}e_{n}\) with trace \(\mathrm{tr}Q=\sum _{n=1}^{\infty }\lambda _{n}<\infty \), where \(\lambda _{n}\ge 0 \ (n=1,2,\ldots )\) are non-negative real numbers. \(B_{Q}^{H}(t)\) is a U-valued Q-fractional Brownian motion.
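A scalar fBm with the covariance \(R_{H}\) above can be simulated exactly on a finite grid. The sketch below is illustrative and not part of the paper: it builds the covariance matrix \(R_{H}(t_{i},t_{j})\), Cholesky-factorizes it, and draws a Gaussian sample path; the diagonal check confirms \(\mathrm{Var}(B^{H}(t))=R_{H}(t,t)=t^{2H}\).

```python
import numpy as np

def fbm_covariance(t, H):
    """Covariance matrix R_H(s, t) = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2 on a grid."""
    s, u = np.meshgrid(t, t)
    return 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))

rng = np.random.default_rng(0)
H = 0.75                                   # Hurst parameter in (1/2, 1)
t = np.linspace(1 / 256, 1.0, 256)         # grid, excluding t = 0
cov = fbm_covariance(t, H)
L = np.linalg.cholesky(cov)                # exists: R_H is positive definite here
path = L @ rng.standard_normal(t.size)     # one exact fBm sample path

# Sanity check on the construction: the diagonal must equal t^{2H}.
assert np.allclose(np.diag(cov), t**(2 * H))
```

Cholesky sampling is exact but \(O(n^{3})\); it is convenient for the modest grids used in numerical illustrations.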

We now introduce some concepts concerning stochastic integration with respect to fBm.

Lemma 1

[31] Let \((\sigma (t), t\in [0,T])\) be a stochastic process such that \(\sigma \in L^{p}_{Q}(K,U)\). Then the limit \(\lim _{|\pi |\rightarrow 0}\sum _{i=0}^{n-1}\sigma ^{\pi }(t_{i})\lozenge (B_{Q}^{H}(t_{i+1})-B_{Q}^{H}(t_{i}))\) exists and is defined as \(\int _{0}^{T} \sigma (s)dB_{Q}^{H}(s).\) Moreover, this integral satisfies \(E\int _{0}^{T}\sigma (s)dB_{Q}^{H}(s)=0\) and
$$\begin{aligned} E\left| \int _{0}^{T}\sigma (s)dB_{Q}^{H}(s)\right| ^{2}&=E\left\{ \left( \int _{0}^{T}D_{s}^{\varnothing }\sigma (s)ds\right) ^{2}\right. \\ {}&\quad \left. +\left| 1_{\left[ 0,T\right] }\sigma \right| _{\varnothing }^{2}\right\} . \end{aligned}$$

Remark 1

[31] To extend the theory of stochastic calculus for the fractional Brownian motion, the Wick calculus for Gaussian process (Gaussian measures) is used. The Wick product of exponentials \(\varepsilon (f)\) and \(\varepsilon (g)\) is defined as \( \varepsilon (f) \lozenge \varepsilon (g)=\varepsilon (f+g),\) where \(\mathcal {E}=\{\sum ^{n}_{k=1}a_{k}\varepsilon (f_{k}),\) \(n\in \mathbb {N}, a_{k}\in R, f_{k}\in R^{n}\) for \(k \in \{1,\ldots ,n\}\}\) is the linear span of the exponentials. This definition can be extended to define the Wick product \(F \lozenge G\) of two functionals F and G of \(\mathcal {E}.\)

Lemma 2

[31] Let \(p>2\). If there exists \(\sigma (s): [0,T]\longmapsto L^{p}_{Q}(K,U)\) satisfying \(D_{s}^{\varnothing }\sigma =0\) and \(\sigma (s) \ge 0\) such that \(E\int _{0}^{T} |\sigma (s) |^{p}ds<\infty ,\) then
$$\begin{aligned}&E\left| \int _{0}^{T}\sigma (s)dB_{Q}^{H}(s)\right| ^{p}\le \left( p(p-1)\right) ^{\frac{p}{2}}T^{\frac{p-2}{2}}\nonumber \\&\quad \times E\int _{0}^{T}\bigg (\sigma (s)\int _{0}^{s} \phi (u,s)\sigma (u)du\bigg )^{\frac{p}{2}}ds \end{aligned}$$


We take \(0\le t\le T\), \(x(t)=\int _{0}^{t}\sigma (s)\mathrm{d}B_{Q}^{H}(s)\). According to the Itô formula, we have
$$\begin{aligned} |x(t)|^{p}&=x(0)+\int _{0}^{t}p|x(s)|^{p-1}\sigma (s)\mathrm{d}B_{Q}^{H}(s)\\&\quad \ +\int _{0}^{t}p(p-1)|x(s)|^{p-2}\sigma (s)D_{s}^{\varnothing }x(s)\mathrm{d}s. \end{aligned}$$
Without loss of generality, let \(x(0)=0\); by Lemma 1, we obtain
$$\begin{aligned} E |x(t)|^{p}&=E\int _{0}^{t}p(p-1) |x(s)|^{p-2}\sigma (s)D_{s}^{\varnothing }x(s)\mathrm{d}s\nonumber \\&=E\int _{0}^{t}p(p-1) |x(s)|^{p-2} \sigma (s)\\&\ \quad \times \left( D_{s}^{\varnothing }\int _{0}^{s}\sigma (u)\mathrm{d}B_{Q}^{H}(u)\right) \mathrm{d}s\nonumber \\&=E\int _{0}^{t}p(p-1)|x(s)|^{p-2}\sigma (s)\\&\ \quad \times \left( D_{s}^{\varnothing }\int _{0}^{\infty }\sigma (u)1_{[0,s]}\mathrm{d}B_{Q}^{H}(u)\right) \mathrm{d}s\nonumber \\&= E\int _{0}^{t}p(p-1) |x(s)|^{p-2}\sigma (s)\\&\ \quad \times \left( \int _{0}^{\infty }\phi (u,s)\sigma (u)1_{[0,s]}\mathrm{d}u\right) \mathrm{d}s\nonumber \\&= \int _{0}^{t}p(p-1)|x(s)|^{p-2}\sigma (s)\left( \int _{0}^{s}\phi (u,s)\sigma (u)\mathrm{d}u\right) \mathrm{d}s\nonumber \\&= p(p-1)E\int _{0}^{t}|x(s)|^{p-2}\sigma (s)\left( \int _{0}^{s}\phi (u,s)\sigma (u)\mathrm{d}u\right) \mathrm{d}s. \end{aligned}$$
Noting that \(\sigma (s)\ge 0\), \(E|x(t)|^{p}\) is non-decreasing in t, which implies that
$$\begin{aligned} E |x(t)|^{p}&\le p(p-1)E\left( \int _{0}^{t}|x(s)|^{p}\mathrm{d}s\right) ^{\frac{p-2}{p}}\nonumber \\&\quad \times \bigg [E\int _{0}^{t}\bigg (\sigma (s)\int _{0}^{s}\phi (u,s)\sigma (u)\mathrm{d}u\bigg )^{\frac{p}{2}}\mathrm{d}s\bigg ]^{\frac{2}{p}}\\&\le p(p-1)(tE|x(t)|^{p})^{\frac{p-2}{p}}\nonumber \\&\quad \times \bigg [E\int _{0}^{t}(\sigma (s)\int _{0}^{s}\phi (u,s)\sigma (u)\mathrm{d}u)^{\frac{p}{2}}\mathrm{d}s\bigg ]^{\frac{2}{p}}. \end{aligned}$$
Furthermore, we can obtain
$$\begin{aligned} (E |x(t)|^{p})^{\frac{2}{p}}&\le p(p-1)t^{\frac{p-2}{p}}\\ {}&\quad \times \bigg [E\int _{0}^{t}\bigg (\sigma (s)\int _{0}^{s}\phi (u,s)\sigma (u)\mathrm{d}u\bigg )^{\frac{p}{2}}\mathrm{d}s\bigg ]^{\frac{2}{p}}. \end{aligned}$$
It is obvious that
$$\begin{aligned} E |x(t)|^{p}\le (p(p-1))^{\frac{p}{2}}t^{\frac{p-2}{2}}E\int _{0}^{t} \left( \sigma (s)\int _{0}^{s}\phi (u,s) \sigma (u)\mathrm{d}u\right) ^{\frac{p}{2}}\mathrm{d}s. \end{aligned}$$
Finally, substituting T for t yields inequality (1). This completes the proof.

Remark 2

[31] Suppose that \(\sigma (s): [0,T]\longmapsto L^{p}_{Q}(K,U)\) is a stochastic process. The process \(\sigma \) is said to be \(\varnothing \)-differentiable if, for each \(t\in [0,T]\), \(\sigma \left( t,\cdot \right) \) is \(\varnothing \)-differentiable and \(D_{s}^{\varnothing }\sigma (t)\) is jointly measurable. It is worth noting that one property of \(\varnothing \)-differentiability is
$$\begin{aligned} D_{s}^{\varnothing }\int _{0}^{\infty }\sigma (u)\mathrm{d}B_{Q}^{H}(u)=\int _{0}^{\infty }\varnothing (s,u)\sigma (u)\mathrm{d}u, \end{aligned}$$
where \(\varnothing :R_{+}\times R_{+}\longrightarrow R_{+}\) is given by \(\varnothing (s,t)=H(2H-1)|s-t|^{2H-2}.\)

Lemma 3

[31] Let \(p>2\). If there exists \(\sigma (s): [0,T]\longmapsto L^{p}_{Q}(K,U)\) satisfying \(\sigma (s)\ge 0\) and \(\sigma (s)\) non-decreasing, such that \(E\int _{0}^{T}|\sigma (s)|^{p}ds<\infty ,\) then
$$\begin{aligned}&E\left| \int _{0}^{T}\sigma (s)dB_{Q}^{H}(s)\right| ^{p}\nonumber \\&\quad \le \left( p(p-1)H\right) ^{\frac{p}{2}}T^{pH-1}E\int _{0}^{T}|\sigma (s)|^{p}ds \end{aligned}$$


According to Lemma 2, we can derive that
$$\begin{aligned}&E\left| \int _{0}^{T}\sigma (s)\mathrm{d}B_{s}^{H}\right| ^{p}\nonumber \\&\quad \le (p(p-1))^{\frac{p}{2}}T^{\frac{p-2}{2}} \nonumber \\&\qquad \times E\int _{0}^{T}\left( \sigma (s)\int _{0}^{s}\phi (u,s)\sigma (u)\mathrm{d}u\right) ^{\frac{p}{2}}\mathrm{d}s\\&\quad \le (p(p-1))^{\frac{p}{2}}T^{\frac{p-2}{2}}\nonumber \\&\qquad \times E\int _{0}^{T} |\sigma (s)|^{p}\left( \int _{0}^{s}\phi (u,s)\mathrm{d}u\right) ^{\frac{p}{2}}\mathrm{d}s\\&\quad \le (p(p-1))^{\frac{p}{2}}T^{\frac{p-2}{2}}E\int _{0}^{T} |\sigma (s)|^{p}\nonumber \\&\qquad \times H^{\frac{p}{2}}s^{\frac{2Hp-p}{2}}\mathrm{d}s\\&\quad \le (p(p-1)H)^{\frac{p}{2}}T^{pH-1}E\int _{0}^{T}|\sigma (s)|^{p}\mathrm{d}s. \end{aligned}$$
Finally, we obtain inequality (2). This completes the proof.

Lemma 4

(Hölder’s inequality) Let \(p>1\), \(\frac{1}{p}+\frac{1}{q}=1\), \(x(\cdot )\in \mathcal {L}^{p}(F)\), \(y(\cdot )\in \mathcal {L}^{q}(F)\). Then \(x(\cdot )y(\cdot )\in \mathcal {L}(F)\), and
$$\begin{aligned} \int _{F}|x(t)y(t)|dt\le \left( \int _{F}|x(t)|^{p}dt\right) ^{\frac{1}{p}}\left( \int _{F}|y(t)|^{q}dt\right) ^{\frac{1}{q}} \end{aligned}$$
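As a quick sanity check of Lemma 4, the following sketch (illustrative, not from the paper) verifies the inequality numerically on \(F=[0,1]\) with \(p=3\), \(q=3/2\), \(x(t)=t\) and \(y(t)=e^{t}\), approximating the integrals by a midpoint Riemann sum.

```python
import numpy as np

p, q = 3.0, 1.5                 # conjugate exponents: 1/p + 1/q = 1
n = 200_000
t = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n   # midpoint grid
dt = 1.0 / n
integral = lambda g: float(np.sum(g) * dt)               # Riemann sum on [0, 1]

x, y = t, np.exp(t)
lhs = integral(np.abs(x * y))                            # int |x y| dt  (= 1 exactly)
rhs = integral(np.abs(x)**p)**(1 / p) * integral(np.abs(y)**q)**(1 / q)
assert lhs <= rhs               # ~1.000 <= ~1.104
```

The gap between the two sides is strict here because \(|x|^{p}\) and \(|y|^{q}\) are not proportional.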

Model description

Consider the stochastic HNNs driven by fractional Brownian motion of the form
$$\begin{aligned} \mathrm{d}x(t)= [-Cx(t)+Bf (x(t))+D ]\mathrm{d}t+ \sigma (t,x(t))\mathrm{d}B^{H}_{Q}(t), \end{aligned}$$
where \(x(t)=(x_{1}(t),x_{2}(t),\ldots ,x_{n}(t))^{T}\) is the vector of neuron states at time t; \(C=\mathrm{diag} (c_{1},c_{2},\ldots ,c_{n})\) is a diagonal matrix with entries \(c_{i}>0 \ (i=1,\ldots ,n)\); \(B= (b_{ij})_{n\times n}\) is an \(n\times n\) interconnection matrix; \(D= (D_{1},\ldots ,D_{n})^{T}\) is a constant input vector; \(f(x)=(f_{1}(x),f_{2}(x),\ldots ,f_{n}(x))^{T}\) collects the neuron activation functions; and \(\sigma (\cdot )\): \(R^{+}\times R^{n}\rightarrow R^{n\times n}\) is the noise intensity matrix, which models the occurrence of stochastic perturbations. The noise \(B^{H}_{Q}(t)\in R^{n}\) is an n-dimensional fBm with Hurst index \(\frac{1}{2}<H<1.\)
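The model above can be discretized for illustration. The following is a hypothetical sketch, with all parameter values chosen by us rather than taken from the paper: a two-neuron network stepped with an Euler scheme, where the fBm increments are sampled exactly on the grid via a Cholesky factorization of \(R_{H}\).

```python
import numpy as np

def fbm_increments(n_steps, dt, H, rng, dim):
    """Exact fBm increments on a uniform grid, one independent path per coordinate."""
    t = dt * np.arange(1, n_steps + 1)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    paths = np.linalg.cholesky(cov) @ rng.standard_normal((n_steps, dim))
    return np.diff(np.vstack([np.zeros(dim), paths]), axis=0)

rng = np.random.default_rng(1)
n, dt, H = 400, 0.01, 0.75
C = np.diag([2.0, 3.0])                   # c_i > 0
B = np.array([[0.3, -0.2], [0.1, 0.4]])   # interconnection matrix
D = np.array([0.05, -0.05])               # constant input vector
f = np.tanh                               # activation function
sigma = lambda x: 0.1 * np.diag(x)        # noise intensity, ||sigma(x)|| <= L0 ||x||
dB = fbm_increments(n, dt, H, rng, dim=2)

# Euler scheme: x_{k+1} = x_k + (-C x_k + B f(x_k) + D) dt + sigma(x_k) dB_k.
x = np.array([1.0, -1.0])
traj = [x.copy()]
for k in range(n):
    x = x + (-C @ x + B @ f(x) + D) * dt + sigma(x) @ dB[k]
    traj.append(x.copy())
traj = np.array(traj)                     # shape (n + 1, 2)
```

With the strong diagonal decay in C, the simulated state contracts toward a neighborhood of the equilibrium, consistent with the convergence behavior analyzed later.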

Assumption 1

There exist \(L_{0},L_{1}, L_{2}, L_{3}>0\), such that, for all \(t\in [0,\infty )\) and \(x(t),y(t)\in R^{n}\)
$$\begin{aligned}&\Vert \sigma (t,x(t))\Vert \le L_{0} \Vert x(t) \Vert ,\ \ \Vert f(x(t))\Vert \le L_{1} \Vert x(t)\Vert ,\nonumber \\&\Vert f (x(t))-f (y(t))\Vert \le L_{2}\Vert x(t)-y(t)\Vert ,\nonumber \\&\Vert \sigma (t,x(t))-\sigma (t,y(t)) \Vert \le L_{3} \Vert x(t)-y(t) \Vert . \end{aligned}$$

Assumption 2

Let \(\mathfrak {N}\): Dom\((\mathfrak {N})\subset U\rightarrow U\) be the infinitesimal generator of an analytic semigroup \(S(\cdot )\) on U; that is, there exists a constant \(M>0\) such that
$$\begin{aligned} \Vert S(t)\Vert \le Me^{-a t} \ \ t\ge 0, a>0. \end{aligned}$$

Remark 3

[29] In this remark, we illustrate the existence of the infinitesimal generator of an analytic semigroup with the following example:
$$\begin{aligned} S(t)x=\sum _{n=1}^{\infty }e^{-n^{2}t}\langle x, e_{n}\rangle e_{n}, \quad \forall x\in H,\quad t>0, \end{aligned}$$
where \(e_{n}(t)=\sqrt{\frac{2}{\pi }}\sin (nt) \ (n=1,2,\ldots )\) are orthogonal eigenvectors, and \(\Vert S(t)\Vert \le e^{-\pi ^{2}t}, t>0.\)
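The semigroup in this remark can be evaluated numerically. The sketch below is illustrative, under two assumptions we introduce ourselves: the series is truncated to N terms and the inner product is approximated by a Riemann sum on \((0,\pi)\). It checks that \(\Vert S(t)x\Vert \) decays as t grows.

```python
import numpy as np

N, M = 50, 2000
s = np.linspace(0.0, np.pi, M)
ds = s[1] - s[0]
# e_n(s) = sqrt(2/pi) sin(n s), rows n = 1..N evaluated on the grid.
e = np.sqrt(2 / np.pi) * np.sin(np.outer(np.arange(1, N + 1), s))
x = s * (np.pi - s)            # smooth test function with x(0) = x(pi) = 0
coeffs = e @ x * ds            # Fourier coefficients <x, e_n> (Riemann sum)

def S(t):
    """Truncated series S(t)x = sum_n exp(-n^2 t) <x, e_n> e_n on the grid."""
    weights = np.exp(-np.arange(1, N + 1)**2 * t) * coeffs
    return e.T @ weights

norm = lambda v: np.sqrt(np.sum(v**2) * ds)
norms = [norm(S(t)) for t in (0.1, 0.5, 1.0, 2.0)]
assert all(a > b for a, b in zip(norms, norms[1:]))   # monotone decay in t
```

Each spectral coefficient is damped by \(e^{-n^{2}t}\), so the norm shrinks monotonically in t, which is the exponential decay property (H.1) and Assumption 2 rely on.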

Existence and uniqueness of a mild solution

In this section, we study the existence and uniqueness of mild solutions for Eq. (3). To do so, we assume that the following conditions hold.
  1. (H.1)
    The operator \(\mathfrak {N}\) generates a strongly continuous semigroup \((S(t))_{t\ge 0}\) of bounded linear operators on U, satisfying
    $$\begin{aligned}&\Vert S(t)\Vert \le M,\quad \mathrm{for}\ \ M>0. \end{aligned}$$
  2. (H.2)
    Let \(p>2\), \(\sigma \in L_{Q}^{p}(K,U)\), \(D_{s}^{\varnothing }\sigma =0,\) and let \( \sigma (s)\) be non-decreasing, such that
    $$\begin{aligned} E\int _{0}^{T} |\sigma (s)|^{p}\mathrm{d}s<\infty , \quad \forall T >0. \end{aligned}$$
We give the following definition of mild solutions for Eq. (3).

Definition 1

[29] A U-valued process x(t) is said to be a mild solution of Eq. (3), in the space of Hilbert–Schmidt operators and under the above assumptions and lemmas, if, corresponding to the initial function \(\phi \in L^{p}(\Omega ,U)\), for all \(t\in [0,T]\),
$$\begin{aligned} x(t)&=S(t)\phi (0)+\int _{0}^{t}S(t-s)[-Cx(s)+Bf(x(s))+D]\mathrm{d}s\nonumber \\&\quad +\int _{0}^{t}S(t-s)\sigma (s)\mathrm{d}B^{H}_{Q}(s). \end{aligned}$$

Theorem 1

Suppose that Assumption 1, \((\mathrm{H.1})\) and \((\mathrm{H.2})\) hold. Let \(\lambda _{C}=\lambda _{max}(C)\), \(\lambda _{B}=\lambda _{max}(\frac{B^{T}+B}{2})\), where C is a diagonal matrix and B is an \(n\times n\) matrix. Then for all \(T>0\), Eq. (3) has a unique mild solution on [0, T].


Let \(\mathfrak {R}_{T}:=L_{\mathcal {F}_{0}}^{p}([0,T],\) \(L^{p}(\Omega ,U))\) be the Banach space of all continuous functions from \(\left[ 0, T\right] \) into \(L^{p}(\Omega ,U)\), equipped with the supremum norm \(\Vert \xi \Vert _{\mathfrak {R}_{T}}=\mathrm{sup}_{v\in [0,T]}(E\Vert \xi (v)\Vert ^{p})^{\frac{1}{p}}\), and let \(S_{T}\) be a closed subset of \(\mathfrak {R}_{T}\) provided with the norm \(\Vert \cdot \Vert _{\mathfrak {R}_{T}}\). Define the operator \(\psi \) on \(S_{T}\), for \( t\in [0, T]\), by
$$\begin{aligned} \psi (x)(t)&=S(t)\phi (0)-C\int _{0}^{t}S(t-s)x(s)\mathrm{d}s\nonumber \\&\quad \ +B\int _{0}^{t}S(t-s) f (x(s))\mathrm{d}s\\&\quad \ +\int _{0}^{t}S(t-s)D\mathrm{d}s+\int _{0}^{t}S(t-s) \sigma (s,x(s))\mathrm{d}B^{H}_{Q}(s), \end{aligned}$$
then it is clear that proving the existence of mild solutions to Eq. (3) is equivalent to finding a fixed point of the operator \(\psi \).

Next, by using the Banach fixed point theorem, we show that \(\psi \) has a unique fixed point. We divide the subsequent proof into two steps.

Step 1 For arbitrary \(x\in S_{T}\), let us prove that \(t\rightarrow \psi (x)(t)\) is continuous on the interval \(\left[ 0, T\right] .\) Let \(0\le t\le T\) and |h| be sufficiently small. Then for any fixed \(x\in S_{T}\), we have
$$\begin{aligned}&\Vert \psi (x)(t+h)-\psi (x)(t)\Vert \le \Vert (S(t+h)-S(t))(\phi (0))\Vert \nonumber \\&\quad +C\times \bigg \Vert \int _{0}^{t+h}S(t+h-s)x(s)\mathrm{d}s-\int _{0}^{t}S(t-s) x(s)\mathrm{d}s\bigg \Vert \\&\quad +B\times \bigg \Vert \int _{0}^{t+h}S(t+h-s) f(x(s))\mathrm{d}s\nonumber \\&\quad -\int _{0}^{t}S(t-s)f(x(s))\mathrm{d}s\bigg \Vert +\bigg \Vert \int _{0}^{t+h}S(t+h-s) D\mathrm{d}s \\&\quad -\int _{0}^{t}S(t-s)D\mathrm{d}s\bigg \Vert +\bigg \Vert \int _{0}^{t+h}S(t+h-s)\sigma (s,x(s))\mathrm{d}B^{H}_{Q}(s)\\&\quad -\int _{0}^{t}S(t-s)\sigma (s,x(s))\mathrm{d}B^{H}_{Q}(s)\bigg \Vert =\sum _{1\le i\le 5}I_{i}(h), \end{aligned}$$
where \(I_{i}(h)\), \(i=1,\ldots ,5\), denotes the i-th term on the right-hand side.
By the strong continuity of S(t), we have \( \lim _{h\rightarrow 0}\left( S(t+h)-S(t)\right) \phi (0)=0.\) Condition (H.1) ensures that \( \left\| (S(t+h)-S(t))\phi (0)\right\| \le 2M\Vert \phi (0)\Vert ;\) then, by the Lebesgue dominated convergence theorem, we conclude that
$$\begin{aligned} \lim _{h\rightarrow 0}E |I_{1}(h)|^{p}=0. \end{aligned}$$
For the second term \(I_{2}(h)\), suppose \(h>0\) (similar estimates hold for \(h<0\)); we obtain
$$\begin{aligned} I_{2}(h)&\le \lambda _{C}\left\| \int _{0}^{t}(S(h)-I)S(t-s)x(s)\mathrm{d}s\right\| \nonumber \\&\quad +\lambda _{C}\left\| \int _{t}^{t+h}S(t+h-s)x(s)\mathrm{d}s\right\| \nonumber \\&= I_{21}(h)+I_{22}(h), \end{aligned}$$
where \(I_{21}(h)\) and \(I_{22}(h)\) denote the first and second terms on the right-hand side, respectively. Using Hölder’s inequality, one has
$$\begin{aligned} E\left| I_{21}(h)\right| ^{p}\le \lambda _{c}^{p}t^{\frac{p}{2}}E\bigg (\int _{0}^{t}\Vert (S(h)-I)S(t-s) x(s)\Vert ^{2}\mathrm{d}s\bigg )^{\frac{p}{2}}. \end{aligned}$$
By using the strong continuity of S(t), for each \(s\in [0,t]\), we have
$$\begin{aligned} \lim _{h\rightarrow 0}((S(h)-I)S(t-s)x(s))=0. \end{aligned}$$
By condition (H.1), one obtains \(\left\| (S(h)-I)S(t-s)x(s)\right\| \le (M+1)M\left\| x(s)\right\| ,\) and by the Lebesgue dominated convergence theorem we conclude that \( \lim _{h\rightarrow 0} E|I_{21}(h)|^{p}=0\).
By condition (H.1) and Hölder’s inequality, we get
$$\begin{aligned} E|I_{22}(h)|^{p}&\le \lambda _{C}^{p}M^{p}h^{\frac{p}{2}}\left( \int _{0}^{t}\Vert x(s)\Vert ^{2}\mathrm{d}s\right) ^{\frac{p}{2}}\nonumber \\&\le \lambda _{C}^{p}M^{p}h^{\frac{p}{2}}\int _{0}^{t}\Vert x(s)\Vert ^{p}\mathrm{d}s, \end{aligned}$$
then \(\lim _{h\rightarrow 0}E\left| I_{2}(h)\right| ^{p}=0.\)
Third item:
$$\begin{aligned} I_{3}(h)\le & {} \lambda _{B}\left\| \int _{0}^{t}(S(h)-I)S(t-s)f(x(s))\mathrm{d}s\right\| \nonumber \\&+\lambda _{B}\left\| \int _{t}^{t+h}S(t+h-s)f(x(s))\mathrm{d}s\right\| \\= & {} I_{31}(h)+I_{32}(h). \end{aligned}$$
Owing to Hölder’s inequality, one has that
$$\begin{aligned} E|I_{31}(h)|^{p}&\le \lambda _{B}^{p}t^{\frac{p}{2}}E\bigg (\int _{0}^{t}\Vert (S(h)-I)S(t-s) f(x(s))\Vert ^{2}\mathrm{d}s\bigg )^{\frac{p}{2}}. \end{aligned}$$
For each \(s\in \left[ 0,t\right] \), it is clear that \(\lim _{h\rightarrow 0}((S(h)-I)S(t-s)f(x(s)))=0.\) Moreover, since \(\left\| (S(h)-I)S(t-s)f(x(s))\right\| \le (M+1)M\left\| f(x(s))\right\| ,\) we conclude that
$$\begin{aligned} \lim _{h\rightarrow 0} E\left| I_{31}(h)\right| ^{p}=0. \end{aligned}$$
We use conditions (H.1) and Hölder’s inequality to derive
$$\begin{aligned} E |I_{32}(h)|^{p}&\le \lambda _{B}^{p}M^{p}h^{\frac{p}{2}}\left( \int _{0}^{t}\Vert f(x(s))\Vert ^{2}\mathrm{d}s\right) ^{\frac{p}{2}}\nonumber \\&\le \lambda _{B}^{p}M^{p}h^{\frac{p}{2}}\int _{0}^{t}\Vert f(x(s))\Vert ^{p}\mathrm{d}s, \end{aligned}$$
and hence \(\lim _{h\rightarrow 0}E|I_{3}(h)|^{p}=0.\)
Fourth item:
$$\begin{aligned} I_{4}(h)&\le \left\| \int _{0}^{t}(S(h)-I)S(t-s)D\mathrm{d}s\right\| \nonumber \\&\quad +\left\| \int _{t}^{t+h}S(t+h-s)D\mathrm{d}s\right\| =I_{41}(h)+I_{42}(h). \end{aligned}$$
By Hölder’s inequality, one has \( E|I_{41}(h)|^{p}\le t^{\frac{p}{2}}E\big (\int _{0}^{t}\Vert (S(h)-I)S(t-s) D\Vert ^{2}\mathrm{d}s\big )^{\frac{p}{2}}.\) Since \(\lim _{h\rightarrow 0}(S(h)-I)S(t-s)D=0\) and \( \left\| (S(h)-I)S(t-s)D\right\| \le (M+1)M\Vert D\Vert ,\) it follows that \(\lim _{h\rightarrow 0} E|I_{41}(h)|^{p}=0.\) Moreover, \(E|I_{42}(h)|^{p}\le M^{p}h^{\frac{p}{2}}(\int _{0}^{t}\Vert D\Vert ^{2}\mathrm{d}s)^{\frac{p}{2}}, \) so \(\lim _{h\rightarrow 0}E|I_{4}(h)|^{p}=0.\)
In addition, the fifth item is
$$\begin{aligned} I_{5}(h)&\le \left\| \int _{0}^{t}(S(h)-I)S(t-s)\sigma (s,x(s))\mathrm{d}B^{H}_{Q}(s)\right\| \nonumber \\&\quad +\bigg \Vert \int _{t}^{t+h}S(t+h-s) \sigma (s,x(s))\mathrm{d}B^{H}_{Q}(s)\bigg \Vert \\&=I_{51}(h)+I_{52}(h). \end{aligned}$$
Utilizing Lemma 3, we obtain
$$\begin{aligned}&E|I_{51}(h)|^{p}\le E\left\| \int _{0}^{t}(S(h)-I)S(t-s)L_{0}x(s)\mathrm{d}B^{H}_{Q}(s)\right\| ^{p}\\&\quad \le L_{0}^{p}(p(p-1)H)^{\frac{p}{2}}t^{pH-1}E\int _{0}^{t}\Vert (S(h)-I) S(t-s)x(s)\Vert ^{p}\mathrm{d}s\\&\quad \le L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}t^{pH-1}E\int _{0}^{t}\Vert (S(h)-I) x(s)\Vert ^{p}\mathrm{d}s. \end{aligned}$$
By Lemma 3, we get
$$\begin{aligned} E |I_{52}(h)|^{p}&\le E\left\| \int _{t}^{t+h}S(t+h-s)\sigma (s,x(s))\mathrm{d}B^{H}_{Q}(s)\right\| ^{p}\\&\le L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}h^{pH-1}E\int _{0}^{t}\Vert x(s)\Vert ^{p}\mathrm{d}s. \end{aligned}$$
Hence \(\lim _{h\rightarrow 0}E\left| I_{52}(h)\right| ^{p}=0\); by the dominated convergence theorem, \(\lim _{h\rightarrow 0}E\left| I_{51}(h)\right| ^{p}=0\) as well, so \(\lim _{h\rightarrow 0}E\left| I_{5}(h)\right| ^{p}=0.\) The above arguments show that
$$\begin{aligned} \lim _{h\rightarrow 0}E\left\| \psi (x)(t+h)-\psi (x)(t)\right\| ^{p}=0. \end{aligned}$$
Hence, we conclude that the function \(t\rightarrow \psi (x)(t)\) is continuous on \(\left[ 0,T\right] \) in \(L^{p}\)-sense.

Step 2 We will show that \(\psi \) is a contraction mapping in \(S_{T_{1}}\) with some \(T_{1}\le T\) to be specified later.

Let \(x,y\in S_{T}\); by using the inequality \((a+b+c)^{p}\le \) \(3^{p-1}a^{p}+3^{p-1}b^{p}+3^{p-1}c^{p}\), we can derive the following inequality for any fixed \(t\in \left[ 0,T\right] \),
$$\begin{aligned}&\Vert \psi (x)(t)-\psi (y)(t)\Vert ^{p}\\&\quad \le 3^{p-1}\lambda _{C}^{p}\bigg \Vert \int _{0}^{t}S(t-s)(x(s)-y(s))\mathrm{d}s\bigg \Vert ^{p}\nonumber \\&\qquad +3^{p-1}\lambda _{B}^{p}\bigg \Vert \int _{0}^{t}S(t-s)(f(x(s))-f(y(s)))\mathrm{d}s\bigg \Vert ^{p}\\&\qquad +3^{p-1}\bigg \Vert \int _{0}^{t}S(t-s)(\sigma \left( s,x(s)\right) -\sigma \left( s,y(s)\right) )\mathrm{d}B^{H}_{Q}(s)\bigg \Vert ^{p}\\&\quad \le 3^{p-1}\lambda _{C}^{p}M^{p}\bigg \Vert \int _{0}^{t}(x(s)-y(s))\mathrm{d}s\bigg \Vert ^{p}\nonumber \\&\qquad +3^{p-1}\lambda _{B}^{p}M^{p}L_{2}^{p}\bigg \Vert \int _{0}^{t}(x(s)-y(s))\mathrm{d}s\bigg \Vert ^{p}\\&\qquad +3^{p-1}M^{p}L_{3}^{p}\bigg \Vert \int _{0}^{t}\left( x(s)-y(s)\right) \mathrm{d}B^{H}_{Q}(s)\bigg \Vert ^{p}, \end{aligned}$$
$$\begin{aligned}&E \Vert \psi (x)(t)-\psi (y)(t)\Vert ^{p}\\&\quad \le 3^{p-1}\lambda _{C}^{p}M^{p}E\left\| \int _{0}^{t}(x(s)-y(s))\mathrm{d}s\right\| ^{p}\nonumber \\&\qquad +3^{p-1}\lambda _{B}^{p}M^{p}L_{2}^{p} E\left\| \int _{0}^{t}\left( x(s)-y(s)\right) \mathrm{d}s\right\| ^{p}\\&\qquad +3^{p-1}M^{p}L_{3}^{p}E\bigg \Vert \int _{0}^{t}(x(s)-y(s))\mathrm{d}B^{H}_{Q}(s)\bigg \Vert ^{p}\nonumber \\&\quad \le 3^{p-1}\lambda _{C}^{p}M^{p}\int _{0}^{t}E\left\| x(s)-y(s)\right\| ^{p}\mathrm{d}s\\&\qquad +3^{p-1}\lambda _{B}^{p}M^{p}L_{2}^{p}\int _{0}^{t}E\left\| x(s)-y(s)\right\| ^{p}\mathrm{d}s\nonumber \\&\qquad +3^{p-1}M^{p}L_{3}^{p}(p(p-1)H)^{\frac{p}{2}}t^{pH-1}\\&\qquad \times E \int _{0}^{t}\left\| x(s)-y(s)\right\| ^{p}\mathrm{d}s. \end{aligned}$$
$$\begin{aligned}&\sup _{s\in [0,t]}E\Vert \psi (x)(s)-\psi (y)(s)\Vert ^{p}\\&\quad \le \delta (t)\sup _{s\in [0,t]}E\Vert x(s)-y(s)\Vert ^{p}, \end{aligned}$$
where \(\delta (t)=3^{p-1}\lambda _{C}^{p}M^{p}t+3^{p-1}\lambda _{B}^{p}M^{p}L_{2}^{p}t+3^{p-1} M^{p}L_{3}^{p}\left( p(p-1)H\right) ^{\frac{p}{2}}t^{pH}. \)

We have \(\delta (0)=0<1\). Then there exists \(0< T_{1}<T\) such that \(0< \delta (T_{1})<1\) and \(\psi \) is a contraction mapping on \(S_{T_{1}}\) and has a unique fixed point, which is a mild solution of Eq. (3) on \([0,T_{1}]\).

This procedure can be repeated to extend the solution to the entire interval [0, T] in finitely many steps. This completes the proof.

Exponential convergence of solution

Now we are in a position to present the convergence results for the solution of Eq. (3). To establish sufficient conditions ensuring that the mild solution of Eq. (3) is p-th moment exponentially stable, we further assume:
  1. (H.3)
    There exist a non-negative real number \(Q_{1}\ge 0\) and a continuous function \(\xi (t):[0,\infty )\rightarrow R_{+}\) such that
    $$\begin{aligned}&E\left\| f(x(t))\right\| ^{p}\le Q_{1}E\left\| x(t)\right\| ^{p}+\xi (t), \quad t\ge 0, \end{aligned}$$
    and there exists a non-negative real number \(\xi _{1}\ge 0\) such that \(|\xi (t)|\le \xi _{1}e^{-apt}.\)

Theorem 2

Suppose that conditions (H.1)–(H.3) and Assumptions 1 and 2 hold. Let \(p>2\), \(\lambda _{C}=\lambda _{max}(C)\), \(\lambda _{B}=\lambda _{max}(\frac{B^{T}+B}{2})\), where C is a diagonal matrix and B is an \(n\times n\) matrix. Then the solution of Eq. (3) is p-th moment exponentially stable.


Let \(x_{0}=\phi (0)\). Then
$$\begin{aligned} E\left\| x(t)\right\| ^{p}&\le 5^{p-1}E \Vert S(t)\phi (0)\Vert ^{p}\nonumber \\&\qquad +5^{p-1}\lambda _{C}^{p}E\bigg \Vert \int _{0}^{t}S(t-v)x(v)\mathrm{d}v\bigg \Vert ^{p}\\&\qquad +5^{p-1}\lambda _{B}^{p}E\left\| \int _{0}^{t}S(t-v)f(x(v))\mathrm{d}v\right\| ^{p}\nonumber \\&\qquad +5^{p-1}E\bigg \Vert \int _{0}^{t}S(t-v)D\mathrm{d}v\bigg \Vert ^{p}\\&\qquad +5^{p-1} E\bigg \Vert \int _{0}^{t}S(t-v)\sigma (v)\mathrm{d}B^{H}_{Q}(v)\bigg \Vert ^{p}\nonumber \\&\quad =\sum _{i=1}^{5}I_{i}. \end{aligned}$$
It is obvious that \(I_{1}\le 5^{p-1}M^{p}e^{-pat}E\Vert \phi (0)\Vert ^{p}\le 5^{p-1}M^{p}e^{-at}E\Vert x_{0}\Vert ^{p}.\) On the other hand, by the Hölder inequality we derive that
$$\begin{aligned} I_{2}&\le 5^{p-1}\lambda _{C}^{p}M^{p}E\left( \int _{0}^{t}e^{-\frac{a}{q}(t-v)}e^{-\frac{a}{p}(t-v)}\Vert x(v)\Vert \mathrm{d}v\right) ^{p}\\&\le 5^{p-1}\lambda _{C}^{p}M^{p}\left( \int _{0}^{t}e^{-a(t-v)}\mathrm{d}v\right) ^{\frac{p}{q}}\int _{0}^{t}e^{-a(t-v)}\\ {}&\quad \times E\Vert x(v)\Vert ^{p}\mathrm{d}v\\&\le 5^{p-1}\lambda _{C}^{p}M^{p}\left( \frac{1-e^{-at}}{a}\right) ^{\frac{p}{q}}e^{-at} \int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v. \end{aligned}$$
$$\begin{aligned} I_{3}&\le 5^{p-1}\lambda _{B}^{p}M^{p}E\left( \int _{0}^{t}e^{-\frac{a}{q}(t-v)}e^{-\frac{a}{p}(t-v)}\Vert f(x(v))\Vert \mathrm{d}v\right) ^{p}\\&\le 5^{p-1}\lambda _{B}^{p}M^{p}\left( \int _{0}^{t}e^{-a(t-v)}dv\right) ^{\frac{p}{q}}\int _{0}^{t}e^{-a(t-v)}\\&\quad \times E\left\| f(x(v))\right\| ^{p}\mathrm{d}v\\&\le 5^{p-1}\lambda _{B}^{p}M^{p}\left( \frac{1-e^{-at}}{a}\right) ^{\frac{p}{q}}e^{-at} \int _{0}^{t}e^{av}\\&\quad \times E\left\| f(x(v))\right\| ^{p}\mathrm{d}v\\&\le 5^{p-1}\lambda _{B}^{p}M^{p}\left( \frac{1-e^{-at}}{a}\right) ^{\frac{p}{q}}e^{-at} \int _{0}^{t}e^{av}\\&\quad \times \left( Q_{1}E\Vert x(v)\Vert ^{p}+\xi (v)\right) \mathrm{d}v\\&\le 5^{p-1}\lambda _{B}^{p}M^{p}\left( \frac{1-e^{-at}}{a}\right) ^{\frac{p}{q}}Q_{1}e^{-at}\int _{0}^{t}e^{av}\\&\quad \times E\Vert x(v)\Vert ^{p}\mathrm{d}v +5^{p-1}\lambda _{B}^{p}M^{p}\left( \frac{1-e^{-at}}{a}\right) ^{\frac{p}{q}}\xi _{1}e^{-at}. \end{aligned}$$
Similarly,
$$\begin{aligned} I_{4}&\le 5^{p-1}M^{2p}E\left( \int _{0}^{t}e^{-a(t-2v)}e^{-av}\Vert D\Vert \mathrm{d}v\right) ^{p}\nonumber \\&\le 5^{p-1}M^{2p}\left( \int _{0}^{t}e^{-\frac{a}{2}q(t-2v)}\mathrm{d}v\right) ^{\frac{p}{q}}\\&\quad \times \int _{0}^{t}e^{-apv}E\Vert D\Vert ^{p}\mathrm{d}v\le 5^{p-1}M^{2p}\left[ e^{\frac{aqt}{2}}\frac{(1-e^{-aqt})}{qa}\right] ^{\frac{p}{q}}\\&\quad \times e^{-apt} E\Vert D\Vert ^{p}\\&=5^{p-1}M^{2p}\left( \frac{1-e^{-aqt}}{a}\right) ^{\frac{p}{q}}e^{-\frac{ap}{2}t} E\Vert D\Vert ^{p}. \end{aligned}$$
By virtue of Lemma 3, we can deduce
$$\begin{aligned} I_{5}&\le 5^{p-1}E\left\| \int _{0}^{t}S(t-v)\sigma (v)\mathrm{d}B^{H}(v)\right\| ^{p}\nonumber \\&\le 5^{p-1}L_{0}^{p}M^{p}E\left\| \int _{0}^{t}e^{-a(t-v)}x(v)\mathrm{d}B^{H}(v)\right\| ^{p}\\&\le 5^{p-1}L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}t^{pH-1}\\&\quad \times E\int _{0}^{t}\left\| e^{av}x(v)\right\| ^{p}\mathrm{d}v\cdot e^{-apt}\\&\le 5^{p-1}L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}e^{\frac{-apt}{2}}t^{pH-1}\\&\quad \times \int _{0}^{t} e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v\cdot e^{\frac{-apt}{2}}\\&\le 5^{p-1}L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}e^{\frac{-apt}{2}}t^{pH-1}\\&\quad \times \int _{0}^{t} e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v\cdot e^{-at}. \end{aligned}$$
Note that \(\left( \frac{1-e^{-aqt}}{a}\right) \le \frac{1}{a}\), and set
$$\begin{aligned}&M_{0}=5^{p-1}M^{p}E\Vert x_{0}\Vert ^{p},\ \ M_{1}=5^{p-1}\lambda _{C}^{p}M^{p}\left( \frac{1}{a}\right) ^{\frac{p}{q}}\\&M_{2}=5^{p-1}\lambda _{B}^{p}M^{p}\left( \frac{1}{a}\right) ^{\frac{p}{q}}Q_{1}, \ M_{2}^{'}=5^{p-1}\lambda _{B}^{p}M^{p}\left( \frac{1}{a}\right) ^{\frac{p}{q}}\xi _{1}\\&M_{3}=5^{p-1}M^{p}\left( \frac{1}{a}\right) ^{\frac{p}{q}}E\Vert D\Vert ^{p}, \end{aligned}$$
where \(Q_{1}\) and \(\xi _{1}\) are given in (H.3), and condition (H.2) ensures the existence of a positive constant \(M_{4}\) such that
$$\begin{aligned} 5^{p-1}L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}e^{\frac{-apt}{2}} t^{pH-1}\le M_{4}. \end{aligned}$$
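Condition (H.2) only requires this prefactor to be bounded in \(t\); as a sketch (assuming \(pH>1\), which holds whenever \(p\ge 2\) and \(H>1/2\)), an explicit admissible value follows by maximizing \(t^{pH-1}e^{-\frac{ap}{2}t}\):
$$\begin{aligned} \sup _{t\ge 0}\, t^{pH-1}e^{-\frac{ap}{2}t}=\left( \frac{2(pH-1)}{ape}\right) ^{pH-1}, \end{aligned}$$
attained at \(t^{*}=\frac{2(pH-1)}{ap}\), so one may take \(M_{4}=5^{p-1}L_{0}^{p}M^{p}(p(p-1)H)^{\frac{p}{2}}\left( \frac{2(pH-1)}{ape}\right) ^{pH-1}.\)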
Then we have
$$\begin{aligned} E\Vert x(t)\Vert ^{p}&\le M_{0}e^{-at}+M_{1}e^{-at}\int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v\nonumber \\&\quad +M_{2}e^{-at}\int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v+M_{2}^{'}e^{-at}\\&\quad +M_{3}e^{-at}+M_{4}\int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v\cdot e^{-at}. \end{aligned}$$
That is to say,
$$\begin{aligned} E\Vert x(t)\Vert ^{p}&\le (M_{0}+M_{2}^{'}+M_{3})e^{-at}+(M_{1}+M_{2}+M_{4})e^{-at}\nonumber \\&\quad \times \int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v; \end{aligned}$$
therefore, for arbitrary \(\varepsilon \in \mathbb {R_{+}}\) with \(0<\varepsilon <a-(M_{1}+M_{2}+M_{4})\) and \(T>0\) large enough, we have
$$\begin{aligned}&\int _{0}^{T}e^{\varepsilon t}E\Vert x(t)\Vert ^{p}\mathrm{d}t \le (M_{0}+M_{2}^{'}+M_{3})\int _{0}^{T}e^{-at+\varepsilon t}\mathrm{d}t\nonumber \\&\quad +(M_{1}+M_{2}+M_{4}) \times \int _{0}^{T}e^{\varepsilon t-at}\int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v\mathrm{d}t. \end{aligned}$$
On the other hand,
$$\begin{aligned}&\int _{0}^{T}e^{\varepsilon t-at}\int _{0}^{t}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v\mathrm{d}t =\int _{0}^{T}\int _{v}^{T}e^{av}\nonumber \\&\qquad \times E\Vert x(v)\Vert ^{p}e^{(\varepsilon -a)t}\mathrm{d}t\mathrm{d}v\nonumber \\&\quad =\int _{0}^{T}e^{av}E\Vert x(v)\Vert ^{p} \left( \frac{e^{-(a-\varepsilon )v}}{a-\varepsilon }-\frac{e^{-(a-\varepsilon )T}}{a-\varepsilon }\right) \mathrm{d}v\nonumber \\&\quad \le \frac{1}{a-\varepsilon }\int _{0}^{T}e^{\varepsilon v-av}e^{av}E\Vert x(v)\Vert ^{p}\mathrm{d}v; \end{aligned}$$
therefore, combining (6) with (7) yields
$$\begin{aligned}&\int _{0}^{T}e^{\varepsilon t}E\Vert x(t)\Vert ^{p}\mathrm{d}t\nonumber \\&\quad \le (M_{0}+M_{2}^{'}+M_{3})\int _{0}^{T}e^{\varepsilon t-at}\mathrm{d}t\nonumber \\&\qquad +\frac{(M_{1}+M_{2}+M_{4})}{a-\varepsilon }\int _{0}^{T}e^{\varepsilon t}E\Vert x(t)\Vert ^{p}\mathrm{d}t. \end{aligned}$$
Equivalently,
$$\begin{aligned}&\left( 1-\frac{(M_{1}+M_{2}+M_{4})}{a-\varepsilon }\right) \int _{0}^{T}e^{\varepsilon t}E\Vert x(t)\Vert ^{p}\mathrm{d}t\nonumber \\&\quad \le (M_{0}+M_{2}^{'}+M_{3}) \int _{0}^{T}e^{\varepsilon t-at}\mathrm{d}t. \end{aligned}$$
Since \(a>M_{1}+M_{2}+M_{4}\), it is possible to choose a suitable \(\varepsilon \in \mathbb {R_{+}}\) with \(0<\varepsilon <a-(M_{1}+M_{2}+M_{4})\) so that \(\frac{(M_{1}+M_{2}+M_{4})}{a-\varepsilon }<1\). Letting \(T>0\) tend to infinity in (9) immediately yields
$$\begin{aligned}&\int _{0}^{\infty }e^{\varepsilon t}E\Vert x(t)\Vert ^{p}\mathrm{d}t\nonumber \\&\quad \le \frac{1}{1-\frac{M_{1}+M_{2}+M_{4}}{a-\varepsilon }}\bigg [(M_{0}+M_{2}^{'}+M_{3})\int _{0}^{\infty }e^{\varepsilon t-at}\mathrm{d}t\bigg ]\nonumber \\&\quad =K^{'}(p,\varepsilon , \phi )<\infty . \end{aligned}$$
By virtue of (5), (9) and (10), noting that \(0<\varepsilon <a-(M_{1}+M_{2}+M_{4})\), we can deduce
$$\begin{aligned} E\Vert x(t)\Vert ^{p}&\le (M_{0}+M_{2}^{'}+M_{3})e^{-\varepsilon t}\nonumber \\&\quad +(M_{1}+M_{2}+M_{4})e^{-\varepsilon t}\int _{0}^{\infty }e^{\varepsilon v}E\Vert x(v)\Vert ^{p}\mathrm{d}v\nonumber \\&=(M_{0}+M_{2}^{'}+M_{3})e^{-\varepsilon t}\nonumber \\&\quad +(M_{1}+M_{2}+M_{4})e^{-\varepsilon t}K^{'} (p,\varepsilon , \phi )\nonumber \\&\le \left[ (M_{0}+M_{2}^{'}+M_{3})+(M_{1}+M_{2}+M_{4})K^{'}\right. \nonumber \\&\quad \left. \times (p,\varepsilon ,\phi )\right] e^{-\varepsilon t}\nonumber \\&:= K(p,\varepsilon , \phi )e^{-\varepsilon t}; \end{aligned}$$
$$\begin{aligned} E\Vert x(t)\Vert ^{p}\le K(p,\varepsilon , \phi )\cdot e^{-\varepsilon t}. \end{aligned}$$

Numerical examples

In this section, we provide two examples to illustrate the effectiveness and feasibility of the obtained results.

Example 1

Consider a two-dimensional neural system driven by fBm with Hurst index \(H=0.65\). The corresponding parameters are as follows:
$$\begin{aligned} C=\begin{pmatrix}-1&{}0\\ 0&{}-1.5\end{pmatrix}, B=\begin{pmatrix}4&{}-0.5\\ -4.5&{}6.5\end{pmatrix}, D=(4.1,-11.1)^{T}. \end{aligned}$$
The neuron activation function is \(f(x(t))=\frac{|x(t)+1|-|x(t)-1|}{2}. \) Take \(t\in \left[ 0,10\right] \) and set the initial states as \(x_1(t)=(-1,1)^{T}, x_2(t)=(-6,8)^{T}.\) Select the noise intensity matrix \(\sigma (t)\) as follows:
$$\begin{aligned} \sigma (t)=\begin{pmatrix}0.6 &{}-0.8\\ 0.8&{}0.6\end{pmatrix} \times \begin{pmatrix}\exp (-t)\\ \exp (-t)\end{pmatrix}. \end{aligned}$$
By calculating, we derive that
$$\begin{aligned}&\lambda _{C}=-1.0000, \quad \lambda _{B}=7.2026, \quad M_{0}=70.7107, \nonumber \\&\quad M_{1}=-0.25,\quad M_{2}=0.0934, \\&\quad M_{2}^{'}=0.0934, \quad M_{3}=414.2250, \quad M_{4}=5.2237\times 10^{-5}. \end{aligned}$$
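The constants \(\lambda _{C}\) and \(\lambda _{B}\) can be checked numerically; a minimal sketch, assuming (as the reported values suggest) that \(\lambda _{C}\) and \(\lambda _{B}\) denote the largest eigenvalues of \(C\) and \(B\):

```python
import numpy as np

# Parameters of Example 1
C = np.array([[-1.0, 0.0], [0.0, -1.5]])
B = np.array([[4.0, -0.5], [-4.5, 6.5]])

# Largest real eigenvalue of each connection matrix
lambda_C = max(np.linalg.eigvals(C).real)   # paper reports -1.0000
lambda_B = max(np.linalg.eigvals(B).real)   # paper reports 7.2026
```

The remaining constants \(M_{0},\ldots ,M_{4}\) then follow from the definitions preceding (5) for the chosen \(p\), \(q\), \(M\) and \(a\).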
Let \(a=10\); then \(a>M_{1}+M_{2}+M_{4}\) is satisfied, and we derive that \(0<\varepsilon <10.1565.\) Take \(\varepsilon =5.\) It can be computed that
$$\begin{aligned}&K^{'}(p,\varepsilon ,\phi )=96.8797<\infty ,\\&K(p,\varepsilon ,\phi )=469.8674. \end{aligned}$$
Fig. 1

Fractional Brownian motion when \(H=0.6\)

Fig. 2

The curves of the stochastic HNNs system with fractional Brownian motion

It is easy to verify that the conditions of Theorem 2 are satisfied. Therefore, system (3) driven by fBm is pth moment exponentially stable (see Fig. 2). For completeness, a sample path of the fractional Brownian motion is shown in Fig. 1.
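A path like the one in Fig. 1 can be generated by exact sampling: the fBm covariance \(R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-|t-s|^{2H})\) is factored by Cholesky. A minimal sketch (the grid size and seed are illustrative choices, not from the paper):

```python
import numpy as np

def fbm_path(n=200, T=10.0, H=0.6, seed=0):
    """Sample B^H on an equispaced grid of (0, T] by the Cholesky method."""
    t = np.linspace(T / n, T, n)                 # grid points t_1, ..., t_n
    s, u = np.meshgrid(t, t)
    # Covariance of fBm: R(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
    R = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(R)                    # succeeds iff R is positive definite
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z

t, bh = fbm_path()
```

Cholesky sampling is exact but costs \(O(n^{3})\); for long paths, circulant-embedding methods are the usual alternative.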

Example 2

Consider a three-dimensional system driven by fractional Brownian motion with Hurst index \(H=0.6\). The network parameters are as follows:
$$\begin{aligned} C=\begin{pmatrix}-2&{}0&{}0\\ 0&{}-1.5&{}0\\ 0&{}0&{}-1.5\end{pmatrix},\quad B=\begin{pmatrix}-2&{}0.5&{}2\\ -0.5&{}1.5&{}0.5\\ 2&{}1.5&{}2\end{pmatrix},\quad D=(0,0,0)^{T}. \end{aligned}$$
The neuron activation function is
$$\begin{aligned} f(x(t))=\tanh (x(t))=\frac{\exp (x(t))-\exp (-x(t))}{\exp (x(t))+\exp (-x(t))}. \end{aligned}$$
Take \(t\in [0,10]\) and set the initial states as
$$\begin{aligned}&x_1(t)=(4,-1,9)^{T}, x_2(t)=(3,7,10)^{T}, \nonumber \\&\quad x_3(t)=(-5,2,9)^{T},x_4(t)=(-3,1,5)^{T}. \end{aligned}$$
Select the noise intensity matrix \(\sigma (t)\) as follows:
$$\begin{aligned} \sigma (t)=\begin{pmatrix}\frac{1}{8} &{}-\frac{8}{9} &{}-\frac{5}{9}\\ -\frac{8}{9} &{}\frac{1}{8} &{}-\frac{5}{9}\\ \frac{5}{9} &{}-\frac{5}{9} &{}\frac{7}{8}\end{pmatrix} \times \begin{pmatrix}\exp (-t)\\ \exp (-t)\\ \exp (-t)\end{pmatrix}. \end{aligned}$$
By calculating, we derive that
$$\begin{aligned}&\lambda _{C}=-1.5000, \ \ \lambda _{B}=3.0976,\ \ M_{0}=129.9050, \\&\quad M_{1}=-3.3750,\ \ M_{2}=0.0297, \ \ M_{2}^{'}=29.7219, \ \ M_{3}=0, \\&\quad M_{4}=0.0944. \end{aligned}$$
Let \(a=5\); then \(a>M_{1}+M_{2}+M_{4}\) is satisfied, and we derive that \(0<\varepsilon <8.2509.\) Take \(\varepsilon =2.\) It can be computed that
$$\begin{aligned}&K^{'}(p,\varepsilon ,\phi )=98.8306<\infty ,\\&K(p,\varepsilon ,\phi )=60.7963. \end{aligned}$$
Fig. 3

The curves of the stochastic HNNs system with fractional Brownian motion

System (3) with respect to fBm meets the conditions of Theorem 2, and Fig. 3 illustrates the pth moment exponential stability of the stochastic HNNs driven by fractional Brownian motion.
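Trajectories like those in Fig. 3 can be reproduced by a simple Euler scheme for system (3) with the Example 2 data, driving the state with Cholesky-sampled fractional Gaussian noise increments. The scheme, step size and seed below are our illustrative choices, not the paper's method:

```python
import numpy as np

# Example 2 data
C = np.diag([-2.0, -1.5, -1.5])
B = np.array([[-2.0, 0.5, 2.0],
              [-0.5, 1.5, 0.5],
              [ 2.0, 1.5, 2.0]])
D = np.zeros(3)
S = np.array([[ 1/8, -8/9, -5/9],
              [-8/9,  1/8, -5/9],
              [ 5/9, -5/9,  7/8]])
H, T, n = 0.6, 10.0, 250
dt = T / n

def fgn_increments(n, dt, H, seed=1):
    # Increments of fBm (fractional Gaussian noise) have covariance
    # gamma(k) = 0.5 * dt^{2H} * (|k+1|^{2H} + |k-1|^{2H} - 2|k|^{2H}).
    k = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
    gamma = 0.5 * dt**(2 * H) * ((k + 1)**(2 * H) + np.abs(k - 1)**(2 * H) - 2 * k**(2 * H))
    return np.linalg.cholesky(gamma) @ np.random.default_rng(seed).standard_normal(n)

dB = fgn_increments(n, dt, H)
x = np.array([4.0, -1.0, 9.0])                  # initial state x_1
traj = [x.copy()]
for k in range(n):
    sigma = S @ np.full(3, np.exp(-k * dt))     # noise intensity sigma(t_k)
    x = x + (C @ x + B @ np.tanh(x) + D) * dt + sigma * dB[k]
    traj.append(x.copy())
traj = np.array(traj)
```

With \(\tanh\) bounded and \(C\) negative definite, the simulated trajectories stay bounded and settle near the origin, consistent with the exponential stability claim.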


Conclusion

In this paper, the existence, uniqueness and asymptotic behavior of mild solutions have been discussed for stochastic neural networked systems driven by fractional Brownian motion. The existence and uniqueness of the mild solution have been proved analytically in a Hilbert space, and the p-th moment exponential convergence condition of the mild solution has been presented.

It would be interesting to extend the results obtained here to more general neural network models that incorporate delays in the neuron interconnections. This topic will be a challenging issue for future research.


References

1. Banerjee S, Verghese G (2001) Nonlinear phenomena in power electronics: attractors, bifurcations, chaos and nonlinear control. Wiley-IEEE Press, New York
2. Liberzon D (2003) Switching in systems and control. Systems & Control: Foundations & Applications. Birkhäuser, Boston
3. Leine R, Nijmeijer H (2004) Dynamics and bifurcations of non-smooth mechanical systems. Lecture Notes in Applied and Computational Mechanics. Springer, Berlin
4. Forti M, Tesi A (1995) New conditions for global stability of neural networks with application to linear and quadratic programming problems. IEEE Trans Circuits Syst I 42:354–366
5. Gupta M, Jin L, Homma N (2003) Static and dynamic neural networks. Wiley-Interscience, New York
6. Forti M, Nistri P (2003) Global convergence of neural networks with discontinuous neuron activations. IEEE Trans Circuits Syst I 50:1421–1435
7. Forti M, Nistri P, Papini D (2005) Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain. IEEE Trans Neural Netw 16:1449–1463
8. Wu H (2009) Global stability analysis of a general class of discontinuous neural networks with linear growth activation functions. Inf Sci 179:3432–3441
9. Qin S, Xue X, Wang P (2013) Global exponential stability of almost periodic solution of delayed neural networks with discontinuous activations. Inf Sci 220:367–378
10. Wu H, Tao F, Qin L, Shi R, He L (2011) Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions. Nonlinear Dyn 66:479–487
11. Wu H, Wang L, Wang Y, Niu P, Fang B (2016) Exponential state estimation for Markovian jumping neural networks with mixed time-varying delays and discontinuous activation functions. Int J Mach Learn Cybern 7:641–652
12. Comte F, Renault E (1996) Long memory continuous time models. J Econ 73:101–149
13. Rogers LCG (1997) Arbitrage with fractional Brownian motion. Math Finance 7:95–105
14. Vučetić M, Avramovic Z (1994) On the self-similar nature of Ethernet traffic. Life Cycle Eng Manag 2:1–15
15. Nualart D, Saussereau B (2009) Malliavin calculus for stochastic differential equations driven by a fractional Brownian motion. Stoch Process Their Appl 119:391–409
16. Tindel S, Tudor CA, Viens F (2003) Stochastic evolution equations with fractional Brownian motion. Probab Theory Relat Fields 127:186–204
17. Nualart D, Ouknine Y (2002) Regularization of differential equations by fractional noise. Stoch Process Their Appl 102:103–116
18. Nualart D, Răşcanu A (2002) Differential equations driven by fractional Brownian motion. Collectanea Mathematica 53:55–81
19. Coutin L, Qian Z (2000) Stochastic differential equations for fractional Brownian motions. C R Acad Sci Ser I Math 331:75–80
20. Boufoussi B, Hajji S (2011) Functional differential equations driven by a fractional Brownian motion. Comput Math Appl 62:746–754
21. Boufoussi B, Hajji S (2012) Neutral stochastic functional differential equations driven by a fractional Brownian motion in a Hilbert space. Stat Probab Lett 82:1549–1558
22. Boufoussi B, Hajji S, Lakhel E (2012) Functional differential equations in Hilbert spaces driven by a fractional Brownian motion. Afrika Matematika 23:173–194
23. Boufoussi B, Hajji S, Lakhel E (2016) Time-dependent neutral stochastic functional differential equations driven by a fractional Brownian motion. Commun Stoch Anal 10:1–12
24. Ferrante M, Rovira C (2010) Convergence of delay differential equations driven by fractional Brownian motion. J Evol Equ 10:761–783
25. Ferrante M, Rovira C (2006) Stochastic delay differential equations driven by fractional Brownian motion with Hurst parameter \(H > 1/2\). Bernoulli 12:85–100
26. Taniguchi T, Liu K, Truman A (2002) Existence, uniqueness, and asymptotic behavior of mild solutions to stochastic functional differential equations in Hilbert spaces. J Differ Equ 181:72–91
27. Zhang X, Han Y (2012) Existence and uniqueness of positive solutions for higher order nonlocal fractional differential equations. Appl Math Lett 25:555–560
28. Wei F, Wang K (2007) The existence and uniqueness of the solution for stochastic functional differential equations with infinite delay. J Math Anal Appl 331:516–531
29. Zhou W, Zhou X (2016) Exponential synchronization for stochastic neural networks driven by fractional Brownian motion. J Franklin Inst 353:1689–1712
30. Ma W, Ding B, Yang H (2015) Mean-square dissipativity of numerical methods for a class of stochastic neural networks with fractional Brownian motion and jumps. Neurocomputing 166:256–264
31. Duncan TE, Hu Y, Pasik-Duncan B (2000) Stochastic calculus for fractional Brownian motion. I. Theory. SIAM J Control Optim 38:582–612

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

School of Science, Yanshan University, Qinhuangdao, China
