p-th moment exponential convergence analysis for stochastic networked systems driven by fractional Brownian motion

In this paper, the existence, uniqueness and asymptotic behavior of mild solutions of stochastic neural network systems driven by fractional Brownian motion are investigated. By applying the Banach fixed point theorem, the existence and uniqueness of a mild solution are proved analytically in a Hilbert space. Based on a moment inequality for the Wick-type integral, a p-th moment exponential convergence condition for the mild solution is presented. Finally, two numerical examples are presented to demonstrate the validity of the theoretical results.


Introduction
Research on Hopfield neural networks (NNs) has spread across many fields because of their extensive applications, for instance image processing, pattern recognition, associative memory and combinatorial optimization; see [1][2][3][4][5]. When NNs are applied to practical optimization problems, they are usually designed to be globally asymptotically or exponentially stable in order to avoid spurious responses or the problem of local minima. Hence, exploring the convergence of NNs is of primary importance; see [6][7][8][9][10][11] and references therein.
It is generally known that external perturbations are unavoidable in practical applications. A large class of natural phenomena with time-evolution behavior cannot be described by classical Brownian motion. The fractional Brownian motion (fBm), by contrast, has a strong memory property, which better models noise from the surrounding environment. The study of fractional Brownian motion is of great practical significance in physics, finance, hydrology, telecommunications and stochastic network control; see [12][13][14]. Recently, an increasing number of researchers have become interested in the dynamic behavior of various differential equation systems driven by fBm. In [15][16][17][18][19], the asymptotic behavior of solutions of stochastic differential equations driven by fBm was investigated, and conditions ensuring the existence and uniqueness of solutions were proposed. Boufoussi and Hajji [20,21] considered the existence of a mild solution for neutral stochastic functional differential equations driven by fBm in a Hilbert space. Furthermore, Boufoussi et al. [22,23] investigated the existence and uniqueness of a mild solution for time-dependent neutral stochastic functional differential equations driven by fBm. Ferrante and Rovira [24,25] explored the convergence of stochastic delay differential equations driven by fBm with Hurst parameter H > 1/2. Taniguchi et al. [26] discussed the existence, uniqueness and asymptotic behavior of mild solutions of stochastic functional differential equations in Hilbert spaces. In [27], the existence and uniqueness of positive solutions were considered for higher-order nonlocal fractional differential equations driven by fBm. Wei and Wang [28] investigated the existence and uniqueness of the solution for stochastic functional differential equations with infinite delay.
(Corresponding author: Jiaxin Shi, jiaxin@stumail.ysu.edu.cn, School of Science, Yanshan University, Qinhuangdao 066000, China.)
Very recently, a few works on the dynamic behavior of neural networked systems driven by fBm have been reported in [29][30][31]. In [29], the authors discussed exponential synchronization for stochastic NNs with delay driven by fBm; by using an ingenious mathematical transformation technique and some well-known inequalities, an exponential synchronization condition was established. In [30], by applying a numerical method, the authors considered the mean-square dissipativity of a class of stochastic NNs with fBm and jumps.
It should be pointed out that, in [29], the exponential synchronization results for stochastic NNs with delay driven by fBm were addressed under the assumption that a solution exists. In [30], the existence of a solution for NNs with fBm was likewise not considered. It is well known that only a system admitting a state solution is useful in real-world applications. As far as we know, very little attention has been paid in the literature to the existence and uniqueness of solutions for stochastic NN systems driven by fBm.
Motivated by the above discussion, in this paper, based on analytic semigroup theory and a moment inequality for the Wick-type integral, a p-th moment exponential convergence condition for the mild solution is developed. This research method differs from the methods commonly used for general dynamical systems, such as the linear matrix inequality approach and the M-matrix technique. In addition, the p-norm employed here differs from the usual 2-norm, so the resulting conditions are different and, to the best of our knowledge, more stringent. Few comprehensive studies of this aspect exist to date. The convergence problem is meaningful and challenging for stochastic neural networks subject to perturbations with memory.
Notation. R^n and R^{n×n} denote n-dimensional Euclidean space and the set of all n × n real matrices, respectively. If B is a symmetric matrix, then B = B^T, where the superscript T denotes the transpose. Denote by λ_max(B) and λ_min(B) the largest and smallest eigenvalues of B, respectively. The vector norm is |x| = (Σ_{i=1}^n x_i^2)^{1/2}. For any matrix A = (a_{ij})_{n×m}, the Hilbert–Schmidt norm is defined as ‖A‖ = (Σ_{i,j} a_{ij}^2)^{1/2}. Diag{...} stands for a diagonal matrix. (Ω, F, P) is a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). (·, ·) denotes the inner product. (U, ‖·‖, (·, ·)_U) and (K, ‖·‖, (·, ·)_K) denote separable Hilbert spaces, and L(K, U) is the space of all bounded linear operators from K to U. Let Q ∈ L(K, K) be a non-negative self-adjoint operator and denote by L^p_Q(K, U) the space of all ξ ∈ L(K, U) such that ξQ^{1/2} is a Hilbert–Schmidt operator. If x(·) is a measurable function on F, then |x(·)|^p ∈ L(F). E(·) stands for the mathematical expectation operator with respect to the given probability measure.
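As a quick numerical illustration of this notation (the matrices below are arbitrary examples of ours, not taken from the paper), the eigenvalue extremes λ_min(B), λ_max(B) and the Hilbert–Schmidt norm can be computed as follows:

```python
import numpy as np

# Illustrative symmetric matrix B and rectangular matrix A.
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 2.0]])

# Smallest and largest eigenvalues of the symmetric matrix B.
eigvals = np.linalg.eigvalsh(B)          # sorted ascending
lam_min, lam_max = eigvals[0], eigvals[-1]

# Hilbert-Schmidt norm: square root of the sum of squared entries,
# which coincides with the Frobenius norm.
hs_norm = np.sqrt(np.sum(A ** 2))
assert np.isclose(hs_norm, np.linalg.norm(A, 'fro'))
```

For B above, λ_min + λ_max equals the trace 5, and the Hilbert–Schmidt norm of A is √10.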

Preliminaries and model description

Preliminaries
If S_T is a closed subset of a Banach space X and ψ: S_T → S_T is a contraction on S_T with contraction constant λ < 1, then ψ has a unique fixed point x in S_T. Moreover, if x_0 in S_T is arbitrary, then the sequence {x_{n+1} = ψx_n, n = 0, 1, ...} converges to x as n → ∞, and |x − x_n| ≤ (λ^n / (1 − λ)) |x_1 − x_0|.

A process B^H = {B^H(t), t ≥ 0} is called the normalized (two-sided) fBm with Hurst parameter H ∈ (1/2, 1) if B^H is a centered Gaussian process with covariance function

R_H(t, s) = E[B^H(t)B^H(s)] = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}).

Moreover, B^H has the following Wiener integral representation:

B^H(t) = ∫_0^t K_H(t, s) dW(s),

where {W(t), t ≥ 0} is a Wiener process and K_H(t, s) is the kernel

K_H(t, s) = c_H s^{1/2−H} ∫_s^t (u − s)^{H−3/2} u^{H−1/2} du, t > s,

with c_H = [H(2H − 1)/β(2 − 2H, H − 1/2)]^{1/2}, where β(·, ·) denotes the Beta function. We take K_H(t, s) = 0 if t ≤ s. We consider a U-valued stochastic process B^H_Q(t) given by the series

B^H_Q(t) = Σ_{n=1}^∞ √λ_n β^H_n(t) e_n,

where β^H_n(t) (n = 1, 2, ...) are a sequence of two-sided one-dimensional standard fBms, mutually independent on (Ω, F, P). Here e_n (n = 1, 2, ...) is a complete orthonormal basis in K, and Q ∈ L(K, K) is a non-negative self-adjoint operator defined by Qe_n = λ_n e_n with finite trace Σ_{n=1}^∞ λ_n < ∞. The following introduces some concepts concerning stochastic integration with respect to fBm.

Lemma 1 [31] Let (σ(t), t ∈ [0, T]) be a stochastic process such that σ ∈ L^p_Q(K, U). Then the Wick-type stochastic integral ∫_0^T σ(s) dB^H_Q(s) exists.

Remark 1 [31] To extend the theory of stochastic calculus to fractional Brownian motion, the Wick calculus for Gaussian processes (Gaussian measures) is used. The Wick product of the exponentials ε(f) and ε(g) is defined as ε(f) ♦ ε(g) = ε(f + g), where E = span{ε(f_k), k = 1, ..., n} is the linear span of the exponentials. This definition can be extended to define the Wick product F ♦ G of two functionals F and G in E.
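Since the covariance R_H determines the law of fBm completely, sample paths on a finite grid can be drawn by a Cholesky factorization of the covariance matrix. The following sketch does exactly that (the function name and grid choices are our own, not from the paper):

```python
import numpy as np

def fbm_cholesky(n, T=1.0, H=0.65, seed=None):
    """Sample a normalized fBm path at n equally spaced points of (0, T]
    via Cholesky factorization of the covariance matrix
    R_H(t, s) = 0.5 * (t**(2H) + s**(2H) - |t - s|**(2H))."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)          # grid excludes t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)           # fails if cov were not pos. def.
    path = L @ rng.standard_normal(n)
    # prepend B^H(0) = 0
    return np.concatenate(([0.0], path)), np.concatenate(([0.0], t))

path, grid = fbm_cholesky(200, T=1.0, H=0.65, seed=0)
```

Since Var B^H(t) = t^{2H}, averaging `path[-1]**2` over many independent draws approaches T^{2H}; for H > 1/2 the increments are positively correlated, which is precisely the memory property emphasized in the introduction.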
Without loss of generality, let x(0) = 0; then by Lemma 1 the stated bound is obtained. Noting that σ(s) > 0, E|x(t)|^p is nondecreasing in t, which yields the next estimate; a further bound then follows directly. Finally, substituting T for t gives inequality (1). This completes the proof.

Lemma 3 Under the same conditions, inequality (2) holds.
Proof According to Lemma 2, the estimate above can be derived, from which inequality (2) follows. This completes the proof.

Model description
Consider the stochastic HNNs driven by fractional Brownian motion of the form

dx(t) = [−Cx(t) + Bf(x(t)) + D] dt + σ(t) dB^H(t), (3)

where x(t) ∈ R^n is the state vector, C, B and D are the system matrices, and f(x(t)) = (f_1(x_1(t)), ..., f_n(x_n(t)))^T (i = 1, ..., n) are the neuron activation functions; σ(·): R_+ × R^n → R^{n×n} is the noise intensity matrix, which can be regarded as the result of stochastic perturbations. The noise B^H(t) ∈ R^n is an n-dimensional fBm with Hurst index 1/2 < H < 1.
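System (3) can be simulated by an Euler scheme driven by exact-in-law fBm increments on the time grid. The following sketch assumes a two-dimensional system with illustrative matrices of our own choosing (not the paper's example data) and a constant noise intensity:

```python
import numpy as np

# Illustrative parameters for dx = [-C x + B f(x) + D] dt + sigma dB^H(t).
C = np.diag([3.0, 4.0])
B = np.array([[0.5, -0.2],
              [0.1,  0.4]])
D = np.array([0.1, -0.1])
H, T, N = 0.65, 1.0, 500
dt = T / N
f = np.tanh                              # a common activation choice

rng = np.random.default_rng(1)
# Exact fBm on the grid per coordinate via Cholesky of the covariance.
t = np.linspace(dt, T, N)
s, u = np.meshgrid(t, t)
cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
L = np.linalg.cholesky(cov)
BH = np.vstack([L @ rng.standard_normal(N) for _ in range(2)])   # 2 x N
dBH = np.diff(np.hstack([np.zeros((2, 1)), BH]), axis=1)         # increments

sigma = 0.1 * np.eye(2)                  # constant noise intensity (assumed)
x = np.array([1.0, -1.0])                # initial state
for k in range(N):
    x = x + (-C @ x + B @ f(x) + D) * dt + sigma @ dBH[:, k]
```

With the damping matrix C dominating the connection weights, the sample path stays bounded and settles near a small neighborhood of the drift equilibrium.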

Existence and uniqueness of a mild solution
In this section, we study the existence and uniqueness of mild solutions for Eq. (3). To do so, we assume that the following conditions hold.
(H.1) There exists a semigroup (S(t))_{t≥0} of bounded linear operators on H satisfying ‖S(t)‖ ≤ M e^{−at} for some constants M ≥ 1 and a > 0.

We give the following definition of mild solutions for Eq. (3).
Definition 1 [29] A U-valued process x(t) is said to be a mild solution of Eq. (3) if, under the above assumptions and the lemmas established, for an initial function φ ∈ L^p(Ω, U) and all t ∈ [0, T],

x(t) = S(t)φ + ∫_0^t S(t − s)[−Cx(s) + Bf(x(s)) + D] ds + ∫_0^t S(t − s)σ(s) dB^H_Q(s).

Define the operator ψ on S_T by the right-hand side above; then it is clear that proving the existence of a mild solution of Eq. (3) is equivalent to finding a fixed point of the operator ψ.
Next, by using the Banach fixed point theorem, we show that ψ has a unique fixed point. We divide the proof into two steps.
Step 1 For arbitrary x ∈ S_T, we prove that t → ψ(x)(t) is continuous on the interval [0, T]. Let 0 ≤ t ≤ T and let |h| be sufficiently small. Then for any fixed x ∈ S_T we have a decomposition into the terms I_i(h), where I_i stands for the i-th term of the resulting polynomial, i = 1, ..., 5.
For the second term I_2(h), suppose h > 0 (similar estimates hold for h < 0); we split it into I_21(h) and I_22(h), the first and second terms of the polynomial, respectively, each of which is bounded by Hölder's inequality. Using condition (H.1) and Hölder's inequality, we then derive lim_{h→0} E|I_3(h)|^p = 0.

For the fourth term, Hölder's inequality yields a bound on E|I_4(h)|^p, which tends to 0 as h → 0.

For the fifth term, Lemma 3 applies to both I_51(h) and I_52(h); in particular lim_{h→0} E|I_52(h)|^p = 0, and hence lim_{h→0} E|I_5(h)|^p = 0. The above arguments show that lim_{h→0} E|ψ(x)(t + h) − ψ(x)(t)|^p = 0. Hence we conclude that the function t → ψ(x)(t) is continuous on [0, T] in the L^p-sense.
Step 2 We show that ψ is a contraction mapping on S_{T_1} for some T_1 ≤ T to be specified later. Let x, y ∈ S_T; by using the inequality (a + b + c)^p ≤ 3^{p−1}(a^p + b^p + c^p), we can derive the corresponding inequality for any fixed t ∈ [0, T]. Hence, since δ(0) = 0 < 1, there exists 0 < T_1 < T such that 0 < δ(T_1) < 1, so ψ is a contraction mapping on S_{T_1} and has a unique fixed point, which is a mild solution of Eq. (3) on [0, T_1]. This procedure can be repeated to extend the solution to the entire interval [0, T] in finitely many steps. This completes the proof.
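The two-step argument above can be mimicked numerically: iterating the noise-free part of ψ on a short interval [0, T_1] exhibits the geometric shrinkage that the contraction property guarantees. The matrices and the choice of T_1 below are illustrative assumptions of ours, not the paper's data:

```python
import numpy as np

# Successive approximations x_{n+1} = psi(x_n) for the noise-free part of
# the mild-solution operator,
#   psi(x)(t) = phi + int_0^t [-C x(s) + B f(x(s)) + D] ds,
# discretized on a grid over [0, T1], T1 small enough for contraction.
C = np.diag([3.0, 4.0])
B = np.array([[0.5, -0.2],
              [0.1,  0.4]])
D = np.array([0.1, -0.1])
phi = np.array([1.0, -1.0])
f = np.tanh
T1, N = 0.1, 400                     # short interval -> contraction
dt = T1 / N

def psi(x):
    """Left-endpoint discretization of the integral operator."""
    drift = (-x @ C.T + f(x) @ B.T + D) * dt      # shape (N + 1, 2)
    out = np.empty_like(x)
    out[0] = phi
    out[1:] = phi + np.cumsum(drift[:-1], axis=0)
    return out

x = np.tile(phi, (N + 1, 1))         # start from the constant function phi
gaps = []
for _ in range(8):
    x_new = psi(x)
    gaps.append(np.max(np.abs(x_new - x)))        # sup-norm distance
    x = x_new
# gaps shrink geometrically, mirroring |x - x_n| <= lambda^n/(1-lambda) |x_1 - x_0|
```

Here the Lipschitz constant of the drift times T_1 is below 1, so each iteration contracts the sup-norm gap, exactly as in the fixed-point proof.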

Exponential convergence of solution
Suppose the above assumptions hold and there exist nonnegative real numbers M_1, M_2, M_4 with a > M_1 + M_2 + M_4.

Proof The first estimate is obvious. On the other hand, by the Hölder inequality we derive the second bound; similar bounds hold for the remaining terms. By virtue of Lemma 3, we can deduce the corresponding estimate for the stochastic integral term. Therefore, for arbitrary ε ∈ R_+ with 0 < ε < a − (M_1 + M_2 + M_4) and T > 0 large enough, the combined bound holds. Combining (6) with (7) yields an estimate involving ∫_0^T e^{εt−at} dt. Accordingly, since a > M_1 + M_2 + M_4, it is possible to choose a suitable ε ∈ R_+ with 0 < ε < a − (M_1 + M_2 + M_4) so that (9) holds. Letting T → ∞ and using (9) immediately yields (10). By virtue of (5), (9) and (10), the conclusion follows. This completes the proof.
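The decay rate a from (H.1) drives the estimates above. For a symmetric positive definite generator, the semigroup S(t) = exp(−Ct) attains the bound ‖S(t)‖ = e^{−λ_min(C) t}, i.e. (H.1) holds with M = 1 and a = λ_min(C); a quick numerical check (example matrix is ours):

```python
import numpy as np

# For symmetric positive definite C, S(t) = exp(-C t) satisfies
# ||S(t)||_2 = exp(-lambda_min(C) t), so (H.1) holds with M = 1,
# a = lambda_min(C). Example matrix for illustration only.
C = np.array([[3.0, 1.0],
              [1.0, 4.0]])
w, V = np.linalg.eigh(C)                 # spectral decomposition of C
a = w[0]                                 # smallest eigenvalue = decay rate

def S(t):
    """Matrix exponential exp(-C t) computed via the eigendecomposition."""
    return V @ np.diag(np.exp(-w * t)) @ V.T

norms = {t: np.linalg.norm(S(t), 2) for t in (0.1, 0.5, 1.0, 2.0)}
# each norm equals exp(-a t) up to round-off, confirming the decay bound
```

This is exactly the exponential factor e^{−at} that is traded off against M_1 + M_2 + M_4 in the choice of ε.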

Numerical examples
In this section, we provide two examples to illustrate the effectiveness and feasibility of the obtained results.
Example 1 Consider a two-dimensional neural system driven by fBm with Hurst index H = 0.65, with the corresponding parameters and neuron activation function f(x(t)) as given. Take t ∈ [0, 10], set the initial state, and select the noise intensity matrix σ(t) as specified. By calculation, taking a = 10, which satisfies a > M_1 + M_2 + M_4, we derive 0 < ε < 10.1565 and let ε = 5. It is easy to verify that the conditions of Theorem 2 are satisfied. Therefore, system (3) driven by fBm is p-th moment exponentially stable (see Fig. 2). To illustrate the setting, a sample path of the fractional Brownian motion is shown in Fig. 1.

Example 2 The neuron activation function is

f(x(t)) = tanh(x(t)) = (exp(x(t)) − exp(−x(t))) / (exp(x(t)) + exp(−x(t))).
Take t ∈ [0, 10] and set the initial state as given. Figure 3 shows the p-th moment exponential stability of the stochastic HNNs driven by fractional Brownian motion.
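The moment decay reported in Figs. 2 and 3 can be reproduced qualitatively by Monte Carlo simulation. The parameters below are illustrative stand-ins of ours (the paper's exact matrices are not reproduced), with the strong damping a = 10 emulated by C = 10I:

```python
import numpy as np

# Monte Carlo estimate of E|x(t)|^p for a strongly damped system
# dx = [-C x + B tanh(x) + D] dt + sigma dB^H(t); illustrative data.
H, T, N, p, M = 0.65, 1.0, 200, 3, 200
dt = T / N
C = np.diag([10.0, 10.0])
Bm = 0.2 * np.eye(2)
D = np.zeros(2)
sigma = 0.05
rng = np.random.default_rng(7)

# Exact fBm on the grid: Cholesky factor of the covariance matrix.
t = np.linspace(dt, T, N)
s, u = np.meshgrid(t, t)
cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
L = np.linalg.cholesky(cov)

x = np.tile([2.0, -2.0], (M, 1))          # M sample paths, 2 coordinates
moments = [np.mean(np.sum(x**2, axis=1) ** (p / 2))]
Z = rng.standard_normal((M, 2, N))
BH = np.einsum('ij,mcj->mci', L, Z)       # fBm values per path/coordinate
dBH = np.diff(np.concatenate([np.zeros((M, 2, 1)), BH], axis=2), axis=2)
for k in range(N):
    drift = -x @ C.T + np.tanh(x) @ Bm.T + D
    x = x + drift * dt + sigma * dBH[:, :, k]
    moments.append(np.mean(np.sum(x**2, axis=1) ** (p / 2)))
# moments decay roughly like exp(-eps t) toward a small noise floor
```

The estimated p-th moment drops from its initial value by several orders of magnitude over [0, T], matching the exponential convergence predicted by Theorem 2 for a > M_1 + M_2 + M_4.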

Conclusion
In this paper, the existence, uniqueness and asymptotic behavior of mild solutions have been discussed for stochastic neural networked systems driven by fractional Brownian motion. The existence and uniqueness of a mild solution have been proved analytically in a Hilbert space. In addition, a p-th moment exponential convergence condition for the mild solution has been presented.
It would be interesting to extend the results obtained here to more general neural networked models incorporating delays in the neuron interconnections. This will be a challenging topic for future research.