Abstract
In this paper, the stability problem is studied for inertial neural networks with time-varying delays. The sampled-data control method is employed for the system design. First, by choosing a proper variable substitution, the original system is transformed into first-order differential equations. Then, an input delay approach is applied to deal with the stability of the sampled system. Based on the Lyapunov function method, several sufficient conditions are derived to guarantee the global stability of the equilibrium. Furthermore, when an error-feedback control term is applied to the slave neural network, parallel criteria for the synchronization of the master–slave neural networks are also derived. Finally, some examples are given to illustrate the theoretical results.
1 Introduction
Recurrent neural networks (RNNs), especially Hopfield neural networks, cellular neural networks (CNNs), Cohen–Grossberg neural networks (CGNNs), and bidirectional associative memory (BAM) neural networks, have been extensively investigated during the past years because of their potential applications in pattern recognition, associative memories, signal processing, fixed-point computations, and so on. In the design of neural networks, dynamical properties such as stability play important roles, and there is a rich literature on the stability of neural networks [1,2,3,4,5,6,7,8,9,10,11,12,13,14].
It has been confirmed that time delay, an inherent feature of signal transmission between neurons, is one of the main sources of oscillation, divergence or instability in neural networks. Therefore, stability analysis for neural networks with time delays has been an attractive subject of research in the past few years [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. Most previous studies focused on neural networks involving only the first derivative of the states, whereas it is also of significant importance to introduce an inertial term, or equivalently the influence of inductance, into artificial neural networks, because the inertial term is considered a critical tool for generating complicated bifurcation behavior and chaos [22, 23]. Babcock and Westervelt pointed out that the dynamics may become complex when the neuron couplings contain an inertial nature [24]. Therefore, the dynamical characteristics of inertial neural networks have been investigated extensively [15,16,17,18,19,20, 25,26,27]. For instance, resorting to homeomorphism and inequality techniques, the stability of inertial Cohen–Grossberg neural networks with time delays was studied in [16, 17]. In [18], matrix measure strategies were used to analyze the stability and synchronization of inertial BAM neural networks with time delays. Stability of inertial BAM neural networks with time-varying delay was studied via impulsive control in [19].
As is well known, external forcing or stimuli play a very important role in nonlinear systems. By adding certain types of forcing, rich phenomena can be observed in physical and ecological systems, and dynamic features, including stability types, can be greatly affected. For master–slave neural networks, a control term can be added to the slave system in order to guarantee synchronization between the master system and the slave system. Many control strategies, such as impulsive control [7,8,9, 19, 25], feedback control [10, 28], intermittent control [11, 12, 15] and sampled-data control [13, 14], have been used to stabilize RNNs. It should be noted that sampled-data control drastically reduces the amount of transmitted information and increases the efficiency of bandwidth usage, because its control signals are kept constant during the sampling period and are allowed to change only at the sampling instants. Moreover, owing to the rapid growth of digital hardware technologies, the sampled-data control method is efficient, secure and practical. Due to these advantages, many results on the sampled-data control of finite-dimensional systems have appeared over the past decades [13, 14, 29,30,31,32,33,34,35,36,37]. For example, the stabilization of BAM neural networks with time-varying delay in the leakage terms was studied via sampled-data control in [14]. Using sampled-data control, the asymptotic synchronization problem of chaotic Lur'e systems with time delays was investigated in [31]. Based on the fuzzy-model-based control approach and the LMI technique, several conditions were derived to guarantee the stability of T–S fuzzy delayed neural networks with Markovian jumping parameters via sampled-data control [33].
Motivated by the above discussions, this paper focuses on the fundamental problem of the stability of a class of inertial neural networks with time-varying delays via sampled-data control. To the best of our knowledge, this extension has not yet been reported in the literature. The main contributions of this paper are as follows. (1) We make the first attempt to address the sampled-data stability control problem for inertial neural networks with time-varying delays. (2) Because sampled-data control signals are kept constant during the sampling period and are allowed to change only at the sampling instants, the control signals are discontinuous. For the sake of analysis, the sampled system is converted into a continuous time-delay system by the input delay approach. (3) A sampled-data controller is designed to achieve synchronization of the master–slave neural networks.
The remainder of the paper is organized as follows. Section 2 describes some preliminaries including a definition, an assumption and lemmas. The main result is stated in Sect. 3. The application concerning the synchronization of master–slave systems is investigated in Sect. 4. Section 5 gives some numerical examples to verify the theoretical analysis. Finally, conclusions are drawn in Sect. 6.
2 Preliminaries
For the sake of convenience, we introduce some notations. \(R^n\) denotes the n-dimensional Euclidean space. The notation \(X > Y(X \ge Y)\), where X and Y are symmetric matrices, means that \(X-Y\) is positive definite (positive semidefinite). The superscript "T" represents the transpose, and diag{...} stands for a block-diagonal matrix. For a symmetric matrix, "\(*\)" denotes the symmetric elements in it. Let \(\lambda _{\max } (P)\) and \(\lambda _{\min } (P)\) be the maximum and minimum eigenvalues of matrix P, respectively.
In this paper, we consider the following inertial neural network:
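Based on the description that follows, system (2.1) presumably takes the standard second-order (inertial) form:

\[
\frac{d^2 u_i (t)}{dt^2} = - a_i \frac{du_i (t)}{dt} - b_i u_i (t) + \sum _{j = 1}^n {c_{ij} g_j (u_j (t))} + \sum _{j = 1}^n {d_{ij} g_j (u_j (t - \tau (t)))} + I_i ,\quad i = 1,2,\ldots ,n,
\]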
where \(u_i (t)\) denotes the state variable of the ith neuron at time t, the second derivative of \(u_i (t)\) is called the inertial term of system (2.1), \(a_i> 0\,,\,b_i > 0\) are constants, \(c_{ij} \,,\,d_{ij}\) are constants denoting the connection weights of the neural network, \(g_j ( \cdot )\) denotes the activation function of the jth neuron, \(\tau (t)\) is the time-varying delay satisfying \(0 \le \tau (t) \le \tau \,,\,{\dot{\tau }}(t) \le {\bar{\tau }} < 1\), and \(I_i \) is the external input on the ith neuron from the neural field.
Assumption 1
For any \( x_1 \ne x_2 \in R \), the neuron activation functions \(g_i ( \cdot )\) satisfy
with \(l_i^ - \,,\,l_i^ + \) being constant scalars.
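That is, condition (2.2) is presumably the usual sector bound

\[
l_i^ - \le \frac{g_i (x_1 ) - g_i (x_2 )}{x_1 - x_2} \le l_i^ + ,\quad i = 1,2,\ldots ,n.
\]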
Denote \( L^ - = diag\{l_1^- ,l_2^ - , \ldots ,l_n^ - \} \) and \( L^ + = diag\{l_1^ + ,l_2^ + , \ldots ,l_n^ + \} \) for convenience.
Next, by introducing the following variable transformation:
the original neural network (2.1) can be written as
for \(i = 1,2, \ldots ,n.\)
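Consistent with the matrix form given next, the transformation is presumably \( v_i (t) = \frac{du_i (t)}{dt} + \xi _i u_i (t) \) with \(\xi _i > 0\), under which (2.1) becomes the pair of first-order equations

\[
\begin{aligned}
\frac{du_i (t)}{dt} &= - \xi _i u_i (t) + v_i (t),\\
\frac{dv_i (t)}{dt} &= - \alpha _i u_i (t) - \beta _i v_i (t) + \sum _{j = 1}^n {c_{ij} g_j (u_j (t))} + \sum _{j = 1}^n {d_{ij} g_j (u_j (t - \tau (t)))} + I_i ,
\end{aligned}
\]

where \(\alpha _i = b_i - \xi _i (a_i - \xi _i )\) and \(\beta _i = a_i - \xi _i \), matching the definitions below.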
Denote \(u(t) = (u_1 (t),u_2 (t), \ldots ,u_n (t))^T\),\(v(t) = (v_1 (t),v_2 (t), \ldots ,v_n (t))^T\), the network (2.3) can be written as
where \(\varLambda = diag\{\xi _1 ,\xi _2 , \ldots ,\xi _n \}\), \(A = diag\{\alpha _1 ,\alpha _2 , \ldots ,\alpha _n \}\), \(B = diag\{\beta _1 ,\beta _2 , \ldots ,\beta _n \}\), \( C = (c_{ij} )_{n\times n} ,\,D = (d_{ij} )_{n\times n},\, I = diag\{I_1 ,I_2 , \ldots ,I_n \}, \quad \alpha _i = b_i - \xi _i (a_i - \xi _i ),\,\beta _i = a_i - \xi _i.\)
Remark 1
In order to obtain less conservative results, one can use the more general transformation \( v_i (t) = \zeta _i \frac{du_i (t)}{dt} + \xi _i u_i (t)\,,\quad i = 1,2, \ldots ,n.\) Here, for simplicity, we take \(\zeta _i = 1\).
Definition 1
The point \((u^ * ,v^ * )\) with \(u^ * = (u_1^ * ,u_2^ * ,\ldots ,u_n^ * )^T,v^ * = (v_1^ * ,v_2^ * ,\ldots ,v_n^ * )^T\) is called an equilibrium of the neural network (2.4) if
Let \((u^ * ,v^ * )\) be the equilibrium point of (2.4). For the purpose of simplicity, we can shift the equilibrium point \((u^ * ,v^ * )\) to the origin by letting \(x = u - u^ * ,\,y = v - v^ * \), then the network (2.4) can be transformed into
where \(x(t) = (x_1 (t),x_2 (t), \ldots ,x_n (t))^T \), \( y(t) = (y_1 (t),y_2 (t), \ldots ,y_n (t))^T \) are the state vectors of the transformed networks. It follows from (2.2) that the function \(f(x) = g(x + u^ * ) - g(u^ * )\) satisfies
for any \(x \in R\), \(x \ne 0\).
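The condition (2.7) referenced here presumably follows from (2.2) as the sector bound

\[
l_i^ - \le \frac{f_i (x)}{x} \le l_i^ + ,\qquad f_i (0) = 0.
\]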
Remark 2
From (2.7), we can obtain \( - [f(x(t)) - L^ - x(t)]^TU_1[f(x(t)) - L^ + x(t)] \ge 0 \) and \( - [f(x(t - \tau (t))) - L^ - x(t - \tau (t))]^TU_2[f(x(t - \tau (t))) - L^ + x(t - \tau (t))] \ge 0 , \) where \( U_1, U_2 \) are positive diagonal matrices.
In the following, we consider the sampled-data stability problem of the neural network (2.4). Let \(\mu (t) = Kx(t_k )\,,\,\nu (t) = My(t_k )\), where \(K\,,\,M \in R^{n\times n}\) are the sampled-data feedback controller gain matrices to be designed, and \(t_k \) denotes the sampling instants, satisfying \(0 = t_0< t_1< \cdots< t_k < \cdots \) and \(\mathop {\lim }\nolimits _{k \rightarrow +\, \infty } t_k = +\, \infty \). Moreover, we assume that there exists a positive constant h such that \(t_{k + 1} - t_k \le h,\forall k \in N\).
Under the sampled-data control, (2.6) can be modeled as
Let \(h(t) = t - t_k \) for \(t \in [t_k,\,t_{k + 1} )\); then we have
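With this notation, the closed-loop system (2.9) presumably reads

\[
\begin{aligned}
{\dot{x}}(t) &= - \varLambda x(t) + y(t) + Kx(t - h(t)),\\
{\dot{y}}(t) &= - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t))) + My(t - h(t)),
\end{aligned}
\]

with \(0 \le h(t) < h\), so the sampled-data terms appear as state feedback subject to the sawtooth input delay \(h(t)\).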
The initial conditions of (2.9) are given as:
where \({\bar{h}} = \max \{\,h,\tau \}.\)
Remark 3
Based on the input delay approach [14, 33, 38], system (2.8) is transformed into the continuous-time system (2.9), since it is difficult to solve the stabilization problem of system (2.8) directly with the discrete terms \(\mu (t) = Kx(t_k ), \nu (t) = My(t_k )\).
In order to investigate the stabilization problem of the network (2.9), the following lemmas are needed.
Lemma 1
[39] For any constant matrix \(M > 0\), any scalars a and b with \(a < b\), and a vector function \(x(t):[a,b] \rightarrow R^n\) such that the integrals concerned are well defined, the following inequality holds:
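This is the standard Jensen-type integral inequality, namely

\[
(b - a)\int _a^b {x^T(s)Mx(s)\,ds} \ge \left( \int _a^b {x(s)\,ds} \right) ^T M\left( \int _a^b {x(s)\,ds} \right) .
\]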
Lemma 2
[19] If \(P \in R^{n\times n}\) is a symmetric positive definite matrix and \(Q \in R^{n\times n}\) is a symmetric matrix, then
Lemma 3
[40] If \(\omega (t):[t_0 , + \infty ) \rightarrow R \) is uniformly continuous, and \(\int _{t_0 }^t {\omega (s)ds} \) has a finite limit as \(t \rightarrow + \infty \), then \(\mathop {\lim }\nolimits _{t \rightarrow + \infty } \omega (t) = 0\).
3 Main Results
In this section, we shall establish some sufficient conditions to ensure the stability of network (2.9).
Theorem 1
Under Assumption 1, if there exist positive-definite matrices \( P_i (i = 1,\ldots ,9) \), \( Q_i (i = 1,2) \), positive-definite diagonal matrices \( U_i (i = 1,2) \) and any matrices X, Y such that the following LMI holds:
where
then the equilibrium of network (2.9) is globally asymptotically stable. Moreover, the desired controller gain matrices are given as \( K = Q_1^{ - 1} X,\,M = Q_2^{ - 1} Y \) .
Proof
Consider the following Lyapunov–Krasovskii functional:
where
Then take the derivative of V(t) along the trajectories of system (2.9):
In addition,
we have
From (3.23), it can be seen that
Moreover,
where
On the other hand, by the definition of V(t), we get
Then, combining (3.24), (3.25) and (3.26), one has
which demonstrates that the solution of (2.9) is uniformly bounded on \([0, + \infty )\). Next, we shall prove that \((\left\| {x(t)} \right\| ,\left\| {y(t)} \right\| ) \rightarrow (0,0)\) as \(t \rightarrow + \infty \). On the one hand, the boundedness of \(\left\| {{\dot{x}}(t)} \right\| \) and \(\left\| {{\dot{y}}(t)} \right\| \) can be deduced from (2.9) and (3.27). On the other hand, from (3.23), we have
So, \(\int _0^t {x^T(s)x(s)} ds < + \infty \) and \(\int _0^t {y^T(s)y(s)} ds < + \infty \). In view of Lemma 3, we have \((\left\| {x(t)} \right\| ,\left\| {y(t)} \right\| ) \rightarrow (0,0)\). That is, the equilibrium point of the system (2.9) is globally asymptotically stable.
\(\square \)
Remark 4
Letting \( - L^- = L^+ > 0 \) in Assumption 1, we obtain the corresponding result for inertial neural networks with Lipschitz-type activation functions, which were discussed in [18, 19]. Thus, our results are more widely applicable.
Remark 5
In Theorem 1, the matrix \( \varLambda \) determines whether condition (3.11) can hold, where different \( \xi _i (i=1,2,\ldots ,n) \) represent different variable transformations. Compared with fixing a particular variable substitution, Theorem 1 is less conservative because \( \varLambda \) plays the role of a free-weighting matrix.
Remark 6
The construction of an appropriate Lyapunov function is one of the difficulties of the Lyapunov theory. The Lyapunov functional constructed here depends on the time delays, which makes our conclusions less conservative.
Remark 7
Synchronization means that the dynamical behaviors of coupled systems reach the same temporal-spatial state. In the real world, the stability and synchronization of chaotic systems are very important in view of their potential applications in many areas, including secure communication, biological systems, optics, information processing and so on [37]. Therefore, based on the proposed results, the synchronization problem of the master–slave neural network is discussed below.
4 Application
In this section, the previous results will be applied to the master–slave synchronization problem.
The master neural network is given as
and the slave neural network
where \(J_i (t)\) is the coupling control term to be designed, and the two neural networks may have different initial values. The notations \( a_i, b_i ,c_{ij}, d_{ij},g_j ( \cdot )\) and \( I_i \) are the same as those in Sect. 2. Let
and denote \( u(t) = (u_1 (t),u_2 (t), \ldots ,u_n (t))^T \), \( \tilde{u}(t) = ({\tilde{u}}_1 (t),{\tilde{u}}_2 (t), \ldots ,{\tilde{u}}_n (t))^T \), \( \omega (t) = (\omega _1 (t),\omega _2 (t), \ldots ,\omega _n (t))^T \), \( {\tilde{\omega }}(t) = ({\tilde{\omega }}_1 (t),\tilde{\omega }_2 (t), \ldots ,{\tilde{\omega }}_n (t))^T \), the above two systems can be written as
and
where \( \varLambda = diag\{\xi _1 ,\xi _2 , \ldots ,\xi _n \},\)\( A = diag\{\alpha _1 ,\alpha _2 , \ldots ,\alpha _n \},\)\( B = diag\{\beta _1 ,\beta _2 , \ldots ,\beta _n \},\)\( \alpha _i = b_i \, - \xi _i (a_i - \xi _i ),\)\( \beta _i = a_i - \xi _i ,\)\( C = (c_{ij} )_{n\times n}, \)\( D = (d_{ij} )_{n\times n},\)\( I = diag\{I_1 ,I_2 , \ldots ,I_n \}, J(t) = diag\{J_1 (t),J_2 (t), \ldots ,J_n (t)\}.\)
Here J(t) is taken to be an error-feedback control term of the form \( J(t) = Kx(t_k)+ My(t_k )\), where \( t_k \) satisfies \( 0 = t_0< t_1< \cdots< t_k < \cdots \), \(\mathop {\lim }\nolimits _{k \rightarrow + \infty } t_k = + \infty \) and \( t_{k + 1} - t_k \le h \), \( \forall k \in N \), and K, M are the sampled-data feedback controller gain matrices to be designed. Let \( h(t) = t - t_k \) for \( t \in [t_k ,\,t_{k + 1} )\) and define the synchronization error signals as \( x(t) = \omega (t) - u(t) \), \( y(t) = {\tilde{\omega }}(t) - {\tilde{u}}(t).\) Subtracting (4.31) from (4.32), the error system is obtained as follows
Theorem 2
Under Assumption 1, if there exist positive-definite symmetric matrices \( P_i (i = 1,\ldots ,9) \), \( Q_i (i = 1,2) \), positive-definite diagonal matrices \( U_i (i = 1,2) \) and any matrices X, Y such that the following LMI holds:
where
then the master system (4.29) and the slave system (4.30) are globally synchronized, with the desired controller gain matrices given as \( K = Q_1^{ - 1} X,M = Q_2^{ - 1} Y \).
Proof
Noting that
then we obtain
where \( V(t), \xi (t) \) are the same as in Sect. 3.
Similar to the proof of Theorem 1, we conclude that the slave system (4.30) can be globally synchronized to the master system (4.29) under the control gains \( K = Q_1^{ - 1} X,M = Q_2^{ - 1} Y \). \(\square \)
5 Illustrative Examples
In this section, two examples are given to illustrate the effectiveness of our results.
Example 1
Consider the following delayed inertial neural network
where \( g_1 (x_1 (t)) = \sin (x_1 (t))\),\(g_2 (x_2 (t)) = \cos (x_2 (t))\),\(\tau (t) = 0.5\sin ^2(t) + 0.5\).
Obviously, \( g_i ( \cdot ) \quad (i = 1,2)\) satisfy (2.2) with \(l_1^ - = l_2^ - = - 1\),\(l_1^ + = l_2^ + = 1\), and \(\tau = 1,{\bar{\tau }} = 0.5.\)
Choosing \( \xi _1 = \xi _2 = 2 \), the inertial neural network (5.38) can be rewritten in the form (2.4), and the corresponding matrices can be easily obtained
Take sampling instants \( t_k = 0.02k,k = 1,2, \ldots ,\) so that the sampling period is \( h = 0.02 \). Solving LMI (3.11), we have
The controller gain matrices can be obtained as follows:
From Figs. 1 and 2, we can see that the equilibrium is globally asymptotically stable. Thus Theorem 1 is well-verified.
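Since the parameter matrices of (5.38) are given graphically, they are not reproduced here; the following sketch instead simulates a hypothetical two-neuron instance of the transformed closed-loop system under zero-order-hold sampled-data control. All parameter values (Lam, A, B, C, D, K, M and the initial history) are illustrative assumptions, not the data of Example 1.

```python
import numpy as np

# Hypothetical 2-neuron instance of the transformed sampled-data system:
#   x'(t) = -Lam x(t) + y(t) + K x(t_k)
#   y'(t) = -A x(t) - B y(t) + C f(x(t)) + D f(x(t - tau(t))) + M y(t_k)
# with tau(t) = 0.5 sin^2(t) + 0.5 as in Example 1.
n = 2
Lam = 2.0 * np.eye(n)          # xi_i = 2 (assumed a_i = 3, b_i = 2)
A = np.zeros((n, n))           # alpha_i = b_i - xi_i*(a_i - xi_i) = 0
B = np.eye(n)                  # beta_i = a_i - xi_i = 1
C = 0.2 * np.ones((n, n))      # connection weights (assumed)
D = 0.2 * np.ones((n, n))      # delayed connection weights (assumed)
K = -np.eye(n)                 # sampled-data gains (assumed)
M = -np.eye(n)
f = np.tanh                    # activation in the sector [-1, 1]

dt, T, h = 1e-3, 20.0, 0.02    # Euler step, horizon, sampling period
steps = int(round(T / dt))
sample_every = int(round(h / dt))
buf = int(round(1.0 / dt))     # history length for the maximum delay tau = 1
x_hist = np.ones((buf + steps + 1, n))   # constant initial history on [-1, 0]
x, y = np.ones(n), np.ones(n)
xk, yk = x.copy(), y.copy()    # most recent sampled values (zero-order hold)

for i in range(steps):
    t = i * dt
    if i % sample_every == 0:            # sampling instant t_k
        xk, yk = x.copy(), y.copy()
    tau = 0.5 * np.sin(t) ** 2 + 0.5
    xd = x_hist[buf + i - int(round(tau / dt))]   # x(t - tau(t))
    dx = -Lam @ x + y + K @ xk
    dy = -A @ x - B @ y + C @ f(x) + D @ f(xd) + M @ yk
    x, y = x + dt * dx, y + dt * dy
    x_hist[buf + i + 1] = x

residual = float(np.linalg.norm(x) + np.linalg.norm(y))
print(residual)   # small residual: the origin is asymptotically stable here
```

For these (assumed) gains the state decays to the origin, mirroring the qualitative behavior reported in Figs. 1 and 2.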
Example 2
We consider the following master–slave neural networks
where \( i = 1,2,\;\tau (t) = \frac{e^t}{1 + e^t}\;,\;(x_1 ,y_1 )^T \) is the state of the master neural network and \( (x_2 ,y_2 )^T \) is the state of the slave neural network. When no control term is added to the slave neural network, Fig. 3 indicates that the error between the two neural networks persists.
Taking sampling instants \( t_k=0.1k,k=1,2,\ldots ,\) with sampling period \( h=0.1 \), and choosing
and by using Theorem 2, the corresponding matrices are
From Fig. 4, we can see the slave neural network is globally synchronized to the master neural network. Thus our criteria are well-verified.
6 Conclusion
This paper is concerned with the stability of inertial neural networks with time-varying delays under sampled-data control. By employing an input delay approach and the Lyapunov theory, some sufficient conditions are obtained to ensure the stability of the equilibrium. Furthermore, several criteria are given to guarantee the synchronization of master–slave neural networks under error-feedback sampled-data control. Meanwhile, the variable transformation we introduced makes our results less conservative. Two examples are used to demonstrate that the theoretical results are correct and effective.
References
Ozcan N (2011) A new sufficient condition for global robust stability of delayed neural networks. Neural Process Lett 34(3):305–316
Zhang Z, Liu W, Zhou D (2012) Global asymptotic stability to a generalized Cohen–Grossberg BAM neural networks of neutral type delays. Neural Netw 25:94–105
Huang H, Huang T, Chen X, Qian C (2013) Exponential stabilization of delayed recurrent neural networks: a state estimation based approach. Neural Netw 48:153–157
Zhou LQ (2013) Delay-dependent exponential stability of cellular neural networks with multi-proportional delays. Neural Process Lett 38:347–359
Bao HM (2016) Existence and exponential stability of periodic solution for BAM fuzzy Cohen–Grossberg neural networks with mixed delays. Neural Process Lett 43:871–885
Xin YM, Li YX, Cheng ZS, Huang X (2016) Global exponential stability for switched memristive neural networks with time-varying delays. Neural Netw 80:34–42
Wei LN, Chen WH, Huang GJ (2015) Globally exponential stabilization of neural networks with mixed time delays via impulsive control. Appl Math Comput 260:10–26
Song QK, Yan H, Zhao ZJ, Liu YR (2016) Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects. Neural Netw 79:108–116
Wang JL, Wu HN, Guo L (2013) Stability analysis of reaction-diffusion Cohen–Grossberg neural networks under impulsive control. Neurocomputing 106:21–30
Wang LM, Shen Y, Sheng Y (2016) Finite-time robust stabilization of uncertain delayed neural networks with discontinuous activations via delayed feedback control. Neural Netw 76:46–54
Zhang ZM, He Y, Zhang CK, Wu M (2016) Exponential stabilization of neural networks with time-varying delay by periodically intermittent control. Neurocomputing 207:469–475
Yang SJ, Li CD, Huang TW (2016) Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control. Neural Netw 75:162–172
Wu ZG, Shi P (2014) Exponential stabilization for sampled-data neural-network-based control systems. IEEE Trans Neural Netw Learn Syst 25(12):2180–2190
Li L, Yang YQ, Lin G (2016) The stabilization of BAM neural network with time-varying delays in the leakage terms via sampled-data control. Neural Comput Appl 27:447–457
Zhang W, Li CD, Huang TW, Tan J (2015) Exponential stability of inertial BAM neural networks with time varying delay via periodically intermittent control. Neural Comput Appl 26:1781–1787
Ke Y, Miao C (2013) Stability analysis of inertial Cohen–Grossberg-type neural networks with time delays. Neurocomputing 117:196–205
Yu S, Zhang Z, Quan Z (2015) New global exponential stability conditions for inertial Cohen–Grossberg neural networks with time delays. Neurocomputing 151:1446–1454
Cao J, Wan Y (2014) Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw 53:165–172
Qi J, Li C, Huang T (2015) Stability of inertial BAM neural network with time-varying delay via impulsive control. Neurocomputing 161:162–167
Rakkiyappan R, Premalatha S, Chandrasekar A, Cao J (2016) Stability and synchronization analysis of inertial memristive neural networks with time delays. Cogn Neurodyn 10(5):437–451
Zhang HX, Hu J, Zou L, Yu XY, Wu ZH (2018) Event-based state estimation for time-varying stochastic coupling networks with missing measurements under uncertain occurrence probabilities. Int J Gen Syst 47(5):506–521
He X, Li CD, Shu YL (2012) Bogdanov–Takens bifurcation in a single inertial neuron model with delay. Neurocomputing 89:193–201
He X, Yu JZ, Huang TW, Li CD, Li CJ (2014) Neural network for solving Nash equilibrium problem in application of multiuser power control. Neural Netw 57:73–78
Babcock KL, Westervelt RM (1986) Stability and dynamics of simple electronic neural networks with added inertia. Phys D 23:464–469
Tu ZW, Cao JD, Hayat T (2016) Matrix measure based dissipativity analysis for inertial delayed uncertain neural networks. Neural Netw 75:47–55
Wan P, Jian JG (2017) Global convergence analysis of impulsive inertial neural networks with time-varying delays. Neurocomputing 245:68–76
Tu ZW, Cao JD, Hayat T (2016) Global exponential stability in Lagrange sense for inertial neural networks with time-varying delays. Neurocomputing 171:524–531
Ebrahimi B, Tafreshi R, Franchek M (2014) A dynamic feedback control strategy for control loops with time-varying delay. Int J Control 87(5):887–897
Lee TH, Park JH, Kwon OM (2013) Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Netw Off J Int Neural Netw Soc 46(5):99–108
Rakkiyappan R, Chandrasekar A, Park JH, Kwon OM (2014) Exponential synchronization criteria for Markovian jumping neural networks with time-varying delays and sampled-data control. Nonlinear Anal Hybrid Syst 14:16–37
Hua CC, Ge C, Guan XP (2015) Synchronization of Chaotic Lur’e systems with time delays using sampled-data control. IEEE Trans Neural Netw Learn Syst 26(6):1214–1221
Zhang WY, Li JM, Xing KY, Ding CY (2016) Synchronization for distributed parameter NNs with mixed time delays via sampled-data control. Neurocomputing 175:265–277
Syed Ali M, Gunasekaran N, Zhu QX (2017) State estimation of T-S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control. Fuzzy Sets Syst 306:87–104
Wu HQ, Li RX, Wei HZ (2015) Synchronization of a class of memristive neural networks with time delays via sampled-data control. Int J Mach Learn Cybern 6(3):365–373
Rakkiyappan R, Dharani S, Cao JD (2015) Synchronization of neural networks with control packet loss and time-varying delay via stochastic sampled-data controller. IEEE Trans Neural Netw Learn Syst 26(12):3215–3226
Zhang WB, Tang Y, Huang TW, Kurths J (2017) Sampled-data consensus of linear multi-agent systems with packet losses. IEEE Trans Neural Netw Learn Syst 28(11):2516–2527
Huang DS, Jiang MH, Jian JG (2017) Finite-time synchronization of inertial memristive neural networks with time-varying delays via sampled-data control. Neurocomputing 293:100–107
Fridman EM (1992) Use models with aftereffect in the problem of design of optimal digital control. Autom Remote Control 53:1523–1528
Liu YR, Wang ZD, Liu XH (2006) Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw 19:667–675
Slotine JJE, Li W (2004) Applied nonlinear control. China Machine Press, Beijing
Acknowledgements
This research is supported by the National Natural Science Foundation of China (Nos. 91546118, 71690242).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Wang, J., Tian, L. Stability of Inertial Neural Network with Time-Varying Delays Via Sampled-Data Control. Neural Process Lett 50, 1123–1138 (2019). https://doi.org/10.1007/s11063-018-9905-6