Exponential state estimation for impulsive neural networks with time delay in the leakage term

Abstract. In this paper, the exponential state estimation problem for impulsive neural networks with both leakage delay and time-varying delays is investigated. By constructing suitable Lyapunov–Krasovskii functionals and employing the available output measurements together with the linear matrix inequality (LMI) technique, several sufficient conditions, given in terms of LMIs, are derived to estimate the neuron states such that the dynamics of the estimation error is globally exponentially stable. The obtained results depend on the leakage delay and on the upper bound of the time-varying delays, but are independent of the derivative of the time-varying delays. Moreover, comparisons are made with some existing results on state estimation of neural networks. Finally, two numerical examples and their computer simulations are given to show the effectiveness of the proposed estimator.


Recently, Gopalsamy [12] pointed out that, in real nervous systems, time delay in the stabilizing negative feedback terms has a tendency to destabilize a system (such delays are known as leakage delays or "forgetting" delays). Moreover, this kind of delay sometimes has a more significant effect on the dynamics of neural networks than other kinds of delays. Hence, it is of significant importance to consider leakage delay effects on the dynamics of neural networks. However, due to some theoretical and technical difficulties, there has been very little work on neural networks with leakage delays [13,19,27,35]. In [13], Gopalsamy first investigated bidirectional associative memory (BAM) neural networks with constant leakage delays and obtained some sufficient conditions guaranteeing the existence and global stability of a unique equilibrium by means of Lyapunov–Krasovskii functionals and the M-matrix method. Building on this work, Peng [35] further studied the existence and global stability of periodic solutions for BAM neural networks with continuously distributed leakage delays using the continuation theorem of coincidence degree theory and Lyapunov functionals.
On the other hand, the problem of state estimation for various neural networks has received much attention recently; see [9,15,25,29,32–34,38,40,41]. The motivation for investigating neuron state estimation is that, in many applications, the neuron states are often not fully available in the network outputs; one may then estimate the neuron states through available measurements so that the estimates can be applied effectively to real problems [40]. In particular, Park et al. [32–34] investigated the state estimation of neutral-type neural networks with time-varying delays and/or interval time-varying delays using Lyapunov–Krasovskii functionals and the linear matrix inequality (LMI) approach. Via the same methods, Li and Fei [25] studied the state estimation of neural networks with distributed delays. In [29], Mahmoud studied the state estimation of neural networks with interval time-varying delays through a Luenberger-type linear estimator, which improves and extends the previous results in [33,34,40]. However, most of the results in [9,15,25,29,32–34,40,41] are based on the assumption that the time-varying delays are differentiable, which greatly reduces the applicability of those results in practice. More recently, Wang and Song [38] addressed this issue and established some LMI conditions to estimate the neuron states of mixed delayed neural networks in which the time-varying delays are non-differentiable. Unfortunately, none of the results in [9,15,25,29,32–34,38,40,41] can be applied to neural networks with leakage delays. In addition, it is well known that impulsive effects are likely to exist in neural network systems [6,23,24,30,43–45].
For instance, in the implementation of electronic networks, the state of a neuron is subject to instantaneous perturbations and experiences abrupt changes at certain moments, which may be caused by switching phenomena, frequency changes or other sudden noise; that is, it exhibits impulsive effects [3,21]. Hence, it is necessary to incorporate impulsive effects into the state estimation problem of delayed neural networks in order to reflect more realistic dynamics. However, to the best of our knowledge, there are no results on the state estimation problem of impulsive neural networks with leakage delays.
In this paper, we consider the state estimation problem for a class of impulsive neural networks with leakage delay by constructing appropriate Lyapunov–Krasovskii functionals and employing available output measurements. Several LMI-based conditions are derived to estimate the neuron states such that the estimation error system tends to zero exponentially. Compared with previous results, the main advantages of the obtained results are the following:
• They can be applied to the state estimation problem of neural networks with leakage delays and/or impulsive effects.
• In [9,15,25,29,32–34,38,40,41], the gain matrix K of the state estimator is mostly given in the form K = Q^{-1}Y, where Q denotes a positive definite matrix and Y an available matrix. This restriction is relaxed in our results: we only require the invertibility of the matrix Q to obtain the gain matrix K.
• We drop the requirement of differentiability on the time-varying delays, and do not require the activation functions to be differentiable, bounded or monotone nondecreasing, which is less restrictive than the results in [9,15,25,29,32–34,40,41].
• Most of the numerical simulations in [9,15,25,29,32–34,38,40,41] show that the estimation error system tends to zero with the help of the gain matrix. However, from the computer simulations one may observe that the estimation error systems in those papers still tend to zero even without the gain matrix (i.e., when K = 0), which is an undesirable phenomenon. This problem is addressed in this paper.
• The LMI conditions in this paper contain all the information of the neural networks, including the physical parameters, the leakage delay, the time-varying delays, and the impulsive matrices, and they can be checked easily and quickly. Moreover, this paper is, to the best of our knowledge, the first to consider impulsive effects in the state estimation problem of neural networks.
The rest of this paper is organized as follows. In Sect. 2, the state estimation problem is formulated. In Sect. 3, by constructing suitable Lyapunov functionals, we derive some LMI-based conditions to estimate the neuron states. Two numerical examples and their computer simulations are given in Sect. 4 to show the effectiveness of the proposed estimator. Finally, concluding remarks are made in Sect. 5.

Preliminaries
Notations. Let R (R+) denote the set of (positive) real numbers, Z+ the set of positive integers, and R^n and R^{n×m} the n-dimensional and n × m-dimensional real spaces equipped with the Euclidean norm ||·||. A > 0 (A < 0) means that the matrix A is symmetric and positive (negative) definite. The notations A^T and A^{-1} denote the transpose and the inverse of a square matrix A, respectively.
In addition, the notation ⋆ always denotes the symmetric block in a symmetric matrix.
Consider the following impulsive neural network model with leakage delay:

x'(t) = −A x(t − σ) + B f(x(t)) + W f(x(t − τ(t))) + J(t),  t ≥ 0, t ≠ t_k,
x(t_k) = x(t_k^-) − D_k x(t_k^-),  k ∈ Z+,    (1)

where x(t) = (x_1(t), …, x_n(t))^T ∈ R^n is the neuron state vector of the neural network; σ ≥ 0 is the leakage delay and τ(t) denotes the time-varying delay; A = diag(a_1, …, a_n) ∈ R^{n×n} is a diagonal matrix with a_i > 0, i = 1, …, n; B ∈ R^{n×n} and W ∈ R^{n×n} are the connection weight matrix and the delayed weight matrix, respectively; J(t) is an external input; f(x(·)) = (f_1(x_1(·)), …, f_n(x_n(·)))^T denotes the neuron activation function; and D_k ∈ R^{n×n}, k ∈ Z+, denotes the impulsive matrix.
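Since the printed form of model (1) is partly lost, the following sketch shows how such an impulsive leakage-delay network can be simulated numerically. It is a forward-Euler discretization written under stated assumptions (constant initial history, history buffers for both delays); all concrete parameters a caller would pass are hypothetical, not the paper's examples.

```python
# Sketch: forward-Euler simulation of an impulsive neural network with a
# leakage delay sigma and a time-varying delay tau(t), as in model (1).
# Hypothetical parameters; the prehistory is approximated by a constant x0.
import numpy as np

def simulate(A, B, W, J, f, sigma, tau, Dk, t_imp, x0, T=10.0, dt=0.001):
    """Simulate x'(t) = -A x(t - sigma) + B f(x(t)) + W f(x(t - tau(t))) + J(t),
    with impulsive jumps x(t_k) = x(t_k^-) - Dk x(t_k^-) at the times t_imp."""
    steps = int(T / dt)
    d_sigma = int(sigma / dt)                       # leakage delay in steps
    hist = np.tile(np.asarray(x0, float), (steps + 1, 1))
    imp = {int(round(t / dt)) for t in t_imp}       # impulse step indices
    for k in range(steps):
        t = k * dt
        xs = hist[max(k - d_sigma, 0)]              # x(t - sigma)
        xt = hist[max(k - int(tau(t) / dt), 0)]     # x(t - tau(t))
        dx = -A @ xs + B @ f(hist[k]) + W @ f(xt) + J(t)
        hist[k + 1] = hist[k] + dt * dx
        if (k + 1) in imp:                          # impulsive jump
            hist[k + 1] = hist[k + 1] - Dk @ hist[k + 1]
    return hist
```

The returned array holds the trajectory on the grid, including the post-jump values at impulse times.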
In this paper, we make the following assumptions:
• (H1) The neuron activation functions f_j, j = 1, …, n, are continuous on R and satisfy

l_j^- ≤ (f_j(u) − f_j(v)) / (u − v) ≤ l_j^+,  u, v ∈ R, u ≠ v,

where l_j^- and l_j^+ are some real constants, which may be positive, zero or negative.
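Assumption (H1) is the usual sector condition; for instance, f = tanh satisfies it with l^- = 0 and l^+ = 1. A quick numerical spot-check of this claim (the check itself is illustrative, not part of the paper's analysis):

```python
# Numeric spot-check that f = tanh satisfies the sector condition (H1) with
# l_minus = 0, l_plus = 1: for all u != v,
#   l_minus <= (f(u) - f(v)) / (u - v) <= l_plus.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-5.0, 5.0, 10_000)
v = rng.uniform(-5.0, 5.0, 10_000)
mask = np.abs(u - v) > 1e-9                       # avoid division by ~0
q = (np.tanh(u[mask]) - np.tanh(v[mask])) / (u[mask] - v[mask])
assert np.all(q >= 0.0) and np.all(q <= 1.0)      # difference quotients in [0, 1]
```

By the mean value theorem the quotient equals sech^2(ξ) for some intermediate ξ, which indeed lies in (0, 1].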
As usual, suppose that the output of System (1) is of the form

y(t) = C x(t) + h(t, x(t)),    (2)

where y(t) = (y_1(t), …, y_m(t))^T ∈ R^m denotes the measurement output of System (1), C ∈ R^{m×n} is a constant matrix, and h(t, x(t)) ∈ R^m is the nonlinear disturbance satisfying the following Lipschitz condition:

||h(t, u) − h(t, v)|| ≤ ||H(u − v)||,  u, v ∈ R^n,    (3)

where H is a known constant matrix. Now we introduce the following full-order state estimator to estimate the neuron state of (1):

x̂'(t) = −A x̂(t − σ) + B f(x̂(t)) + W f(x̂(t − τ(t))) + J(t) + K[y(t) − C x̂(t) − h(t, x̂(t))],    (4)

subject to the same impulsive jumps as (1), where x̂(t) ∈ R^n is the state estimate and K ∈ R^{n×m} is the gain matrix to be designed. Let e(t) = x(t) − x̂(t) be the state estimation error; then, by (1), (2) and (4), we obtain the following error dynamical system:

e'(t) = −A e(t − σ) + B g(e(t)) + W g(e(t − τ(t))) − K C e(t) − K H(t, e(t)),    (5)

where g(e(·)) = f(x(·)) − f(x̂(·)) and H(t, e(t)) = h(t, x(t)) − h(t, x̂(t)). Before giving the main results, we need the following lemmas; in particular, Lemma 2.2 (Schur complement) states that the symmetric matrix [S11, S12; ⋆, S22] < 0 is equivalent to any one of the following conditions: (i) S22 < 0 and S11 − S12 S22^{-1} S12^T < 0; (ii) S11 < 0 and S22 − S12^T S11^{-1} S12 < 0.
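The estimator (4) copies the network dynamics and adds the innovation term K[y(t) − C x̂(t) − h(t, x̂(t))]. The sketch below implements one Euler step of the coupled plant/observer pair; to keep the bookkeeping minimal it is written for the delay-free case (σ = τ = 0), and all concrete values a caller would pass are hypothetical.

```python
# Sketch: one Euler step of the plant (1)-(2) and the observer (4),
# specialized to sigma = tau = 0. Parameter names mirror the paper's;
# any concrete matrices supplied by the caller are hypothetical.
import numpy as np

def euler_step(x, xh, A, B, W, C, K, f, h, J, t, dt):
    y = C @ x + h(t, x)                                   # measurement (2)
    dx = -A @ x + B @ f(x) + W @ f(x) + J(t)              # plant, no delays
    dxh = (-A @ xh + B @ f(xh) + W @ f(xh) + J(t)
           + K @ (y - C @ xh - h(t, xh)))                 # observer innovation
    return x + dt * dx, xh + dt * dxh
```

Iterating this step, the estimation error x − x̂ contracts whenever K renders the error dynamics (5) stable.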

Main results
In this section, we will establish some LMI conditions for the existence of exponential state estimator by using Lyapunov-Krasovskii functional and linear matrix inequality (LMI) approach.

Theorem 3.1 Assume that assumptions (H1)–(H4) hold and that the LMI conditions (6) and (7) are satisfied. Then the error system (5) is globally exponentially stable with a convergence rate 0.5ε. Moreover, the gain matrix K of the state estimator (4) is given by K = Q_3^{-1} Y.
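Once an LMI solver returns feasible variables, extracting the gain from the invertible matrix variable and the matrix variable Y is a single linear solve. A numerical sketch with hypothetical stand-in matrices (not solver output from this paper):

```python
# Recovering the estimator gain from LMI variables: K solves Q K = Y,
# i.e. K = Q^{-1} Y. The Q, Y below are hypothetical stand-ins.
import numpy as np

Q = np.array([[4.0, 1.0],
              [0.0, 3.0]])     # only invertibility is required, not Q > 0
Y = np.array([[2.0],
              [6.0]])          # n x m matrix variable from the LMI solver
K = np.linalg.solve(Q, Y)      # preferred over forming Q^{-1} explicitly
print(K)                       # -> [[0.], [2.]]
```

Solving the linear system directly is numerically preferable to computing the explicit inverse, especially for larger networks.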
Proof Construct a Lyapunov–Krasovskii functional V = V_1 + V_2 + V_3 + V_4 + V_5. Calculating the time derivative of V along the solutions of (5) on the continuity intervals [t_{k−1}, t_k), k ∈ Z+, one can deduce a bound in which the integral terms are estimated via Lemma 2.1, and directly computing the time derivatives of V_3, V_4 and V_5 yields further bounds. Moreover, it is well known that, for any n × n diagonal matrices U_1 > 0, U_2 > 0, the sector condition (H1) implies the inequalities

[e(t); g(e(t))]^T [−L_1 U_1, L_2 U_1; ⋆, −U_1] [e(t); g(e(t))] ≥ 0,
[e(t − τ(t)); g(e(t − τ(t)))]^T [−L_1 U_2, L_2 U_2; ⋆, −U_2] [e(t − τ(t)); g(e(t − τ(t)))] ≥ 0,

where L_1 = diag(l_1^- l_1^+, …, l_n^- l_n^+) and L_2 = diag((l_1^- + l_1^+)/2, …, (l_n^- + l_n^+)/2).
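The diagonal-matrix inequalities involving U_1 and U_2 follow from the sector condition (H1) componentwise; a worked expansion of the standard argument:

```latex
% Componentwise sector argument behind the U_1, U_2 inequalities:
% (H1) gives, for each j (writing g_j := g_j(e_j)),
%   (g_j - l_j^- e_j)(g_j - l_j^+ e_j) \le 0 .
% Multiplying by u_j > 0 and summing over j yields, with U = diag(u_1,\dots,u_n),
\begin{aligned}
0 &\ge \sum_{j=1}^{n} u_j \bigl(g_j - l_j^- e_j\bigr)\bigl(g_j - l_j^+ e_j\bigr) \\
  &= g^{T} U g \;-\; 2\, e^{T} L_2 U g \;+\; e^{T} L_1 U e,
\end{aligned}
% where L_1 = \operatorname{diag}(l_1^- l_1^+, \dots, l_n^- l_n^+) and
%       L_2 = \operatorname{diag}\bigl((l_1^- + l_1^+)/2, \dots, (l_n^- + l_n^+)/2\bigr).
% Taking U = U_1 at e(t) and U = U_2 at e(t - \tau(t)) gives the two
% quadratic-form inequalities used in the proof.
```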
Moreover, two additional zero-equality terms follow from (5), and from (3) it can be deduced that ||H(t, e(t))|| ≤ ||H e(t)||. Now, combining (9)–(17) and employing Lemma 2.2, one obtains that

where the augmented state vector is of the form (…, (∫_{t−σ}^{t} e(s) ds)^T, g^T(e(t)), g^T(e(t − τ(t))), H^T(t, e(t)))^T.
Under condition (7) of the theorem, this quadratic form is nonpositive. On the other hand, from System (5) we know the behaviour of V at the impulse times, and it then follows from (6) that V does not increase at the impulses. Together, (19) and (20) yield (21), and by (18) and (21) we get (22). Hence, utilizing Lemma 2.1 and, similarly, the definition of V_1, we obtain an estimate which, together with (22), yields (23). Substituting the noted inequality into (23) gives the exponential bound on ||e(t)||. Hence, System (5) is globally exponentially stable with a convergence rate 0.5ε, and the proof is complete.
Remark 3.2 One may observe that, when calculating the time derivative of V_1, we do not substitute the right-hand side of System (5) into D^+V_1 directly, but instead complete the computation via (15) and (16). This idea plays an important role in designing the gain matrix K. In addition, the constructions of V_4 and V_5 effectively avoid restrictions on the derivative of the time-varying delays; hence, the results in this paper can be applied to neural networks with non-differentiable time-varying delays. It should be noted, however, that the criterion given in Theorem 3.1 depends on the leakage delay and on the upper bound of the time-varying delays.
In particular, if we take σ = 0, then the network model (1) reduces to a model without leakage delay, denoted by (24). Suppose that the output of System (24) is of the same form as (2). Similarly, we introduce the corresponding full-order state estimator (25). Then the error system between Systems (24) and (25) may be expressed as (26). For System (26), we have the following result.

Corollary 3.3 Assume that assumptions (H1)–(H4) hold. If there exist three constants ε > 0, α > 0, β > 0, two n × n matrices P > 0, Q_2 > 0, an n × n invertible matrix Q_3, an n × m real matrix Y, two n × n diagonal matrices U_1 > 0, U_2 > 0, and a 2n × 2n matrix [T_11, T_12; ⋆, T_22] > 0 such that the LMI conditions (27) hold, where L_2 = diag((l_1^- + l_1^+)/2, …, (l_n^- + l_n^+)/2), then the error system (26) is globally exponentially stable with a convergence rate 0.5ε. Moreover, the gain matrix K of the state estimator (25) is given by K = Q_3^{-1} Y.

Remark 3.4 If the impulsive matrices satisfy D_k = d I, k ∈ Z+, where d is a constant, then the impulse-related LMIs in (6) and (27) can be removed provided that d ∈ [0, 2].

Remark 3.5
The obtained criteria in this paper can be applied to the state estimation problem of neural networks with leakage delay and/or impulsive effects. However, the leakage delay considered here is constant; establishing state estimation criteria for neural networks with time-varying leakage delays appears to be a challenging issue, which we leave for future research.

Numerical examples
Example 4.1 Consider a simple two-dimensional impulsive neural network model with leakage delay, denoted by (28), and the full-order observer (29), where the system parameters are as specified. Then the gain matrix K is designed as in (30). Hence, by Theorem 3.1, System (29) with the above gain matrix K is an estimator of System (28). Figure 1 depicts the simulation results, where the initial value of System (29) is taken as (−1, 2)^T. One can easily see that System (29) without the gain (i.e., K ≡ 0) cannot be regarded as an estimator of System (28) (see Fig. 1a, b), but it becomes the desired estimator when the gain matrix K in (30) is applied (see Fig. 1c, d). These simulations match the obtained results perfectly.
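The role of the gain in Example 4.1 can already be seen at the level of a crude linearization: ignoring delays, impulses and the disturbance, the error dynamics behave like e'(t) ≈ (−A + B L − K C) e(t), and K must push the spectrum into the open left half-plane. A toy check with hypothetical matrices (deliberately not the example's actual parameters):

```python
# Toy illustration with hypothetical matrices: ignoring delays, impulses and
# the disturbance, the linearized error matrix is M = -A + B L - K C.
# Without the gain (K = 0) the matrix below is unstable; the gain fixes it.
import numpy as np

A = np.eye(2)                 # leakage/self-feedback part
B = 2.0 * np.eye(2)           # connection weights
L = np.eye(2)                 # activation slope bound (l^+ = 1)
C = np.eye(2)                 # output matrix
K = 3.0 * np.eye(2)           # designed gain

M0 = -A + B @ L               # K = 0: eigenvalues at +1 (unstable)
MK = M0 - K @ C               # with gain: eigenvalues at -2 (stable)
print(np.linalg.eigvals(M0).real.max(), np.linalg.eigvals(MK).real.max())
```

This mirrors the simulation finding above: with K ≡ 0 the error need not decay, while a suitable gain makes the estimate track the state.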

Example 4.3
Consider the three-dimensional system (28) and the observed system (29) with the following parameters, where the time-varying external input is J(t) = (3 sin 0.5t · cos 0.5t, 9.5 sin 0.2t · cos 2.4t, 2 cos 0.8t · sin 0.2t)^T. In this case, we have l_j^- = 0, l_j^+ = 1, τ = 0.08 and H = 0.5I. Then the gain matrix K is designed as specified. Hence, by Theorem 3.1, System (29) with the above gain matrix K is an estimator of System (28), as shown in Fig. 2a–f.

Conclusion
In this paper, exponential state estimation criteria for a class of impulsive neural networks with both leakage delay and time-varying delays have been presented by constructing suitable Lyapunov–Krasovskii functionals and employing available output measurements together with the LMI technique. The LMI criteria contain all the information of the neural networks, including the physical parameters, the leakage delay, the time-varying delays, and the impulsive matrices, and they can be checked easily and quickly. Furthermore, the obtained results improve upon, and are less conservative than, some existing results. Finally, two numerical examples and their computer simulations have been presented to show the effectiveness of the proposed estimator.