1 Introduction

Neural networks (NNs) have been widely investigated in recent years because of their potential applications in many areas, such as associative memory, pattern recognition, parallel computing, and image processing; see [1,2,3,4,5,6,7] and the references therein. In these applications, the stability of the equilibrium points of the network is an important factor in the design of NNs. However, due to the finite switching speed of amplifiers and the time needed for communication between neurons, time delays are unavoidable in neural networks, and their presence may cause poor performance and even instability. Therefore, the stability analysis of time-delay neural networks has received considerable attention, and many interesting results have been obtained in the literature [8,9,10,11,12,13,14,15,16,17].

In recent years, discontinuous control approaches, such as intermittent and impulsive control, have aroused a great deal of interest in many applications because they reduce the amount of transmitted information and are therefore more economical [18,19,20,21,22]. An intermittent control scheme alternates between working and rest intervals, and the controller is activated only during the working intervals. Compared with impulsive control, which acts only at isolated instants, intermittent control has the advantage of easy implementation in process control and engineering applications because of its nonzero control width. Owing to these merits, the intermittent control method has been widely applied to chaotic systems and networks; see, e.g., [18, 23,24,25,26,27,28,29,30,31,32,33] and the references therein.

Nevertheless, most studies focus on the periodic case, i.e., the control width and period are fixed. For example, the authors of [25, 26] considered the exponential stabilization problem for chaotic systems without or with constant delays via periodically intermittent control. In [27, 28], the authors treated neural networks with time-varying delays, but these studies relied on the assumption that the time-varying delays are differentiable. Moreover, the stability criteria were presented in terms of transcendental equations or nonlinear matrix inequalities, which are computationally difficult. In [18], a class of time-delay neural networks was also studied via periodically intermittent control under the same differentiability restriction on the time-varying delays. Some delay-dependent sufficient conditions for exponential stability were obtained in the form of linear matrix inequalities (LMIs), which are easily checked by the MATLAB LMI toolbox. However, owing to changes in the real environment, some systems, such as wind power generation and information exchange between routers on the internet, are intermittent in a typically aperiodic fashion [29, 30]. Periodically intermittent control may then be unreasonable and inadequate in practice. Therefore, it is of significance to investigate the case where the control width and period are not fixed.

Very recently, in [31], an aperiodically intermittent pinning control was introduced to guarantee the synchronization of hybrid-coupled delayed dynamical networks. Hu et al. [32] addressed the stabilization and synchronization of delay-free chaotic systems under an adaptive intermittent control strategy with generalized control width and period. In [33], by designing an intermittent control scheme with non-fixed control width and period, Song et al. further considered the stabilization and synchronization of chaotic systems with mixed time-varying delays. Note that all those results are based on the assumption that the system parameters are exactly known. In practice, the exact values of the parameters of a neural network are difficult to obtain because of external disturbances or modeling inaccuracies, and ignoring parametric uncertainties may lead to wrong conclusions about the dynamical behavior of the system. On the other hand, receiving a signal and transmitting it from the controller to the controlled system takes some time, so it is more reasonable for the control input to depend on past state variables. Moreover, introducing time delays into a control scheme can sometimes improve the control performance, as in delayed state-feedback control [34], where the value of the time delay in the controller can be adjusted without increasing the control gain; such schemes have attracted considerable attention in recent years [35, 36]. To the best of our knowledge, although there are many results on intermittent control for the stability of neural networks, few concern the stability analysis of uncertain neural networks under delayed intermittent control. This motivates the present study.

Based on the above discussion, in this paper we investigate the global asymptotic stability of neural networks with both time-varying delays and uncertainties. A delayed intermittent controller with non-fixed control width and period is designed; it is new in the sense that, by selecting an appropriate matrix B, it can act on all states or only on some of them, and the non-fixed control width and period make the control scheme more flexible. By constructing a proper Lyapunov–Krasovskii functional, some new delay-dependent sufficient criteria for the global asymptotic stability of the addressed system are derived in terms of LMIs. The proposed stability criteria establish the relationship between the transmission delay in the system and the time delay in the controller and can be easily verified by the MATLAB LMI toolbox. Moreover, the value of the time delay in the controller can be adjusted without increasing the control gain, and free-weighting matrices with full cross-terms are employed, which yields more feasible results than those in the previous works [32, 33]. Finally, a numerical example is studied to show the effectiveness of the proposed approach.

The rest of this paper is organized as follows. Section 2 introduces some preliminary assumptions, basic definitions, and necessary lemmas. In Sect. 3, the global asymptotic stability results with the corresponding proofs are presented. The effectiveness of the developed methods is shown by a numerical example in Sect. 4, and Sect. 5 concludes the paper.

Notations

By \(\mathbb{R}^{n}\) we denote the n-dimensional real space equipped with the Euclidean norm \(\Vert \cdot \Vert \), and \(\mathbb{R}^{n\times m}\) denotes the set of \(n\times m\) real matrices. Let \(\mathbb{Z}^{+}\) represent the set of positive integers and \(\mathbb{N}\) the set of nonnegative integers; \(A<0\) (\(A>0\)) means that the matrix A is symmetric and negative (positive) definite. If A, B are symmetric matrices, \(A>B\) means that \(A-B\) is a positive definite matrix. By \(A^{-1}\) and \(A^{T}\) we denote the inverse and the transpose of A, respectively. Let \(\alpha \vee \beta \) denote the maximum of α and β. For any \(\mathrm{J}\subseteq \mathbb{R}\) and \(\mathrm{S} \subseteq \mathbb{R}^{k}\) (\(1\leq k\leq n\)), set \(C(\mathrm{J}, \mathrm{S})=\{\varphi :\mathrm{J}\rightarrow \mathrm{S}\text{ is continuous}\}\) and \(C^{1}(\mathrm{J},\mathrm{S})=\{\varphi :\mathrm{J}\rightarrow \mathrm{S}\text{ is continuously differentiable}\}\). For each \(\varphi \in C^{1}([-\rho ,0],\mathbb{R}^{n})\), the norm is defined by \(\Vert \varphi \Vert _{\rho }=\sup_{s\in [-\rho ,0]}\{ \vert \varphi (s) \vert , \vert \dot{\varphi }(s) \vert \}\). With ∗ we always denote the symmetric block in a symmetric matrix, and \(\varLambda =\{1,2,\ldots,n\}\).

2 Preliminaries

Consider the time-delay neural network with uncertainties

$$\begin{aligned} \textstyle\begin{cases} \dot{x}(t)=-C(t)x(t)+W_{1}(t)f(x(t))+W_{2}(t)f(x(t-\tau (t)))+J , \quad t>0, \\ x(t)=\phi (t), \quad \forall t\in [-\rho ,0], \end{cases}\displaystyle \end{aligned}$$
(1)

where \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\in \mathbb{R}^{n}\) is the neural state vector, \(f(x(t))=(f_{1}(x_{1}(t)), f_{2}(x_{2}(t)),\ldots,f_{n}(x_{n}(t)))^{T}\in \mathbb{R}^{n}\) is the neuron activation function, \(C(t)=C+\Delta C(t)\), \(W_{1}(t)=W_{1}+\Delta W_{1}(t)\), \(W_{2}(t)=W_{2}+\Delta W_{2}(t)\), in which C is a positive diagonal matrix, \(W_{1},W_{2}\in \mathbb{R}^{n\times n}\) are the connection weight matrices of the neurons, and \(\Delta C(t)\), \(\Delta W_{1}(t)\), \(\Delta W_{2}(t)\) are time-varying parametric uncertainties, \(\phi \in C^{1}([-\rho ,0],\mathbb{R}^{n})\) is the initial state, J is the constant external input, and \(\tau (t)\) is a time-varying delay satisfying \(0\leq \tau (t)\leq \tau \), where τ is a constant.

The following assumptions are made throughout this paper:

\((H_{1})\):

The activation function \(f\in C(\mathbb{R}^{n},\mathbb{R}^{n})\) satisfies \(f_{j}(0)=0\), \(j\in \varLambda \), and there exist constants \(l_{j}^{-}\) and \(l_{j}^{+}\) such that

$$\begin{aligned}& l_{j}^{-}\leq \frac{f_{j}(\alpha _{1})-f_{j}(\alpha _{2})}{\alpha _{1}- \alpha _{2}}\leq l_{j}^{+}, \quad \forall \alpha _{1},\alpha _{2}\in \mathbb{R}, \alpha _{1}\neq \alpha _{2}. \end{aligned}$$

For convenience, we denote

$$ L_{1}=\operatorname{diag}\bigl(l_{1}^{-}l_{1}^{+},l_{2}^{-}l_{2}^{+}, \ldots,l _{n}^{-}l_{n}^{+}\bigr), \quad\quad L_{2}=\operatorname{diag} \biggl(\frac{l_{1}^{-}+l_{1}^{+}}{2}, \frac{l_{2} ^{-}+l_{2}^{+}}{2},\ldots,\frac{l_{n}^{-}+l_{n}^{+}}{2} \biggr). $$
\((H_{2})\):

The time-varying parametric uncertainties \(\Delta C(t)\), \(\Delta W_{1}(t)\), \(\Delta W_{2}(t)\), \(\Delta B(t)\) are of the form

$$ \bigl(\Delta C(t),\Delta W_{1}(t),\Delta W_{2}(t), \Delta B(t) \bigr)=HA(t) (E _{1},E_{2},E_{3},E_{4}), $$

where \(A(t)\) is an unknown matrix satisfying \(A^{T}(t)A(t)\leq I\), and H, \(E_{1}\), \(E_{2}\), \(E_{3}\), \(E_{4}\) are known constant matrices of appropriate dimensions.
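To make the admissible class in \((H_{2})\) concrete, the following minimal Python sketch generates an uncertainty \(\Delta C(t)=HA(t)E_{1}\) whose factor satisfies \(A^{T}(t)A(t)\leq I\). The dimensions and the matrices H and \(E_{1}\) below are illustrative assumptions of our own, not the paper's data.

```python
import numpy as np

# Illustrative dimensions and matrices (assumptions, not the paper's data).
n = 2
H = 0.1 * np.eye(n)                      # known matrix H
E1 = np.array([[1.0, 0.0], [0.0, 0.5]])  # known matrix E_1

def delta_C(t):
    """An admissible uncertainty Delta C(t) = H A(t) E_1 under (H2):
    here A(t) is a rotation matrix, so A(t)^T A(t) = I exactly."""
    A = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return H @ A @ E1

# The bound A(t)^T A(t) <= I caps the induced norm of the perturbation.
assert np.linalg.norm(delta_C(0.7), 2) <= \
    np.linalg.norm(H, 2) * np.linalg.norm(E1, 2) + 1e-12
```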

Suppose that \(x^{\star }=(x_{1}^{\star },x_{2}^{\star },\ldots,x_{n} ^{\star })^{T}\) is an equilibrium point of system (1). By the transformation \(y=x-x^{\star }\), the equilibrium point \(x^{\star }\) can be shifted to the origin. Then system (1) turns into

$$\begin{aligned} \textstyle\begin{cases} \dot{y}(t)=-C(t)y(t)+W_{1}(t)F(y(t))+W_{2}(t)F(y(t-\tau (t))), \quad t>0, \\ y(t)=\varphi (t), \quad \forall t\in [-\rho ,0], \end{cases}\displaystyle \end{aligned}$$
(2)

where \(\varphi (t)=\phi (t)-x^{\star }\), \(\varphi \in C^{1}([-\rho ,0], \mathbb{R}^{n})\) and \(F(y)=f(y+x^{\star })-f(x^{\star })\).

To stabilize system (2), we first design a delayed intermittent controller with non-fixed control width and period. The controlled version of system (2) can be described as follows:

$$\begin{aligned} \textstyle\begin{cases} \dot{y}(t)=-C(t)y(t)+W_{1}(t)F(y(t))+W_{2}(t)F(y(t-\tau (t)))+B(t)u(t), \quad t>0, \\ y(t)=\varphi (t), \quad \forall t\in [-\rho ,0], \end{cases}\displaystyle \end{aligned}$$
(3)

where \(B(t)=B+\Delta B(t)\), \(B\in \mathbb{R}^{n\times m}\) represents a known real matrix, \(\Delta B(t)\) is a time-varying parametric uncertainty, \(u(t)\in \mathbb{R}^{m}\) is a control input with the form:

$$\begin{aligned} u(t)= \textstyle\begin{cases} Ky(t-\gamma ), & t_{k}\leq t< t_{k}+d_{k}, \\ 0, & t_{k}+d_{k}\leq t< t_{k+1}, \end{cases}\displaystyle \end{aligned}$$
(4)

where γ is a positive constant delay, \(K\in \mathbb{R}^{m \times n}\) is the controller gain to be designed, \(d_{k}\) is the so-called control width, and \(t_{k+1}-t_{k}\) is the control period. The control instants satisfy \(0=t_{1}< t_{2}<\cdots <t_{k}<\cdots \) with \(\lim_{k\rightarrow \infty }t_{k}=+\infty \), \(k\in \mathbb{Z}^{+}\).
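For intuition, here is a minimal Python sketch of the controller logic in (4). The function name, gain, schedule, and state values are illustrative assumptions, and the delayed state \(y(t-\gamma )\) is assumed to be supplied by the caller from a history buffer.

```python
import numpy as np

def intermittent_input(t, y_delayed, K, t_ks, d_ks):
    """Controller (4): K @ y(t - gamma) on a working interval
    [t_k, t_k + d_k), and the zero input on the rest interval
    [t_k + d_k, t_{k+1}).  `y_delayed` is the already-looked-up
    delayed state y(t - gamma)."""
    for t_k, d_k in zip(t_ks, d_ks):
        if t_k <= t < t_k + d_k:
            return K @ y_delayed
        if t < t_k:              # instants increase, so we can stop early
            break
    return np.zeros(K.shape[0])

# Example call with an illustrative gain and aperiodic schedule.
K = np.array([[0.0, -1.0]])
print(intermittent_input(0.1, np.array([0.2, -0.3]), K,
                         t_ks=[0.0, 0.7], d_ks=[0.3, 0.5]))
```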

Then system (3) with controller (4) can be written as the following closed-loop system:

$$\begin{aligned} \textstyle\begin{cases} \dot{y}(t)=-C(t)y(t)+W_{1}(t)F(y(t))+W_{2}(t)F(y(t-\tau (t))) \\ \hphantom{\dot{y}(t)}\quad {} +D(t)y(t-\gamma ), \quad t_{k}\leq t< t_{k}+d_{k}, \\ \dot{y}(t)=-C(t)y(t)+W_{1}(t)F(y(t))+W_{2}(t)F(y(t-\tau (t))), \\ \quad t_{k}+d_{k}\leq t< t_{k+1}, \\ y(t)=\varphi (t), \quad \forall t\in [-\rho ,0], \end{cases}\displaystyle \end{aligned}$$
(5)

where \(D(t)=B(t)K\) and \(\rho =\gamma \vee \tau \).

Definition 1

([37])

System (5) is said to be globally asymptotically stable if it is stable in the sense of Lyapunov and \(\lim_{t\rightarrow \infty }y(t) = 0\) for any initial condition \(\varphi \in C^{1}([- \rho ,0],\mathbb{R}^{n})\).

Definition 2

([18])

System (5) is said to be globally exponentially stable if there exist two positive constants \(\varepsilon >0\) and \(\tilde{A}>0\) such that the solution \(y(t)\) of (5) satisfies \(\Vert y(t) \Vert \leq \tilde{A} \Vert \varphi \Vert _{\rho }e^{-\varepsilon t}\), for all \(t\geq 0\), \(\varphi \in C^{1}([-\rho ,0],\mathbb{R}^{n})\).

Lemma 1

([36])

Let \(M=M^{T}>0\) be a real matrix of appropriate dimension, and let \(\omega (\cdot ):[a,b]\rightarrow \mathbb{R}^{n}\) be a vector function such that the integrals concerned are well defined. Then

$$ \biggl[ \int _{a}^{b}\omega (s)\,ds \biggr]^{T}M \biggl[ \int _{a}^{b}\omega (s)\,ds \biggr]\leq (b-a) \int _{a}^{b}\omega ^{T}(s)M\omega (s)\,ds. $$
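Lemma 1 is a Jensen-type integral inequality. As a quick numerical sanity check (not a proof), one can discretize the integrals; the data ω and M below are arbitrary choices of ours.

```python
import numpy as np

# Spot check of Lemma 1 on a discretized integral (assumed data).
n, a, b, N = 3, 0.0, 1.0, 2000
s = np.linspace(a, b, N)
omega = np.stack([np.sin((i + 1) * np.pi * s) for i in range(n)])  # omega(s)
M = np.eye(n) + 0.3 * np.ones((n, n))          # M = M^T > 0

ds = (b - a) / (N - 1)
I_omega = omega.sum(axis=1) * ds               # approximates int omega(s) ds
lhs = I_omega @ M @ I_omega
rhs = (b - a) * sum(w @ M @ w for w in omega.T) * ds
assert lhs <= rhs + 1e-9                       # the bound of Lemma 1
```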

Lemma 2

([38])

For real matrices \(\tilde{J}\), E, and V of appropriate dimensions with \(V^{T}=V\), the inequality

$$ V+\tilde{J}LE+E^{T}L^{T}\tilde{J}^{T}< 0 $$

holds for all matrices L satisfying \(L^{T}L\leq I\) if and only if there exists a constant \(\varepsilon >0\) such that

$$ V+\varepsilon \tilde{J}\tilde{J}^{T}+\varepsilon ^{-1} E^{T}E< 0. $$
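The "if" direction of Lemma 2 can likewise be spot-checked numerically: choose V so that the ε-condition holds, sample matrices L with \(L^{T}L\leq I\), and verify the original inequality. The matrices below are arbitrary assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 3, 1.0
Jt = 0.2 * rng.standard_normal((n, n))        # plays the role of J~
E  = 0.2 * rng.standard_normal((n, n))
# Choose V = V^T so that V + eps*Jt*Jt^T + (1/eps)*E^T*E = -I < 0.
V = -np.eye(n) - eps * Jt @ Jt.T - E.T @ E / eps

for _ in range(1000):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    L = rng.uniform(0.0, 1.0) * Q             # L^T L <= I by construction
    W = V + Jt @ L @ E + E.T @ L.T @ Jt.T
    assert np.linalg.eigvalsh(W).max() < 0    # V + J~LE + E^T L^T J~^T < 0
```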

3 Main results

This section is devoted to studying the global asymptotic stability of the neural network (5) by constructing a suitable Lyapunov–Krasovskii functional. First, the following stability result is derived in terms of LMIs.

Theorem 1

Under conditions \((H_{1})\) and \((H_{2})\), the neural network (5) is globally asymptotically stable if for given constants \(\alpha >0\), \(\beta \geq -\alpha \), \(c\geq 0\), there exist \(n\times n\) matrices \(P>0\), \(Q>0\), \(n\times n\) diagonal matrices \(X>0\), \(Y>0\), an \(n\times n\) matrix R, and a \(2n\times 2n\) matrix

$$\begin{aligned} T_{1}={ \begin{bmatrix} T_{11} & T_{12} \\ \ast & T_{22} \end{bmatrix}}>0 \end{aligned}$$
(6)

such that the following inequalities hold:

$$\begin{aligned}& M_{1}={ \begin{bmatrix} M_{11} & M_{12} & M_{13} & M_{14} & M_{15} & 0 \\ \ast & M_{22} & 0 & M_{24} & M_{25} & M_{26} \\ \ast & \ast & M_{33} & 0 & 0 & M_{36} \\ \ast & \ast & \ast & M_{44} & 0 & 0 \\ \ast & \ast & \ast & \ast & -X & 0 \\ \ast & \ast & \ast & \ast & \ast & -Y \end{bmatrix}}< 0, \end{aligned}$$
(7)
$$\begin{aligned}& N_{1}={ \begin{bmatrix} N_{11} & M_{12} & M_{13} & M_{14} & M_{15} & 0 \\ \ast & M_{22} & 0 & 0 & M_{25} & M_{26} \\ \ast & \ast & M_{33} & 0 & 0 & M_{36} \\ \ast & \ast & \ast & M_{44} & 0 & 0 \\ \ast & \ast & \ast & \ast & -X & 0 \\ \ast & \ast & \ast & \ast & \ast & -Y \end{bmatrix}}< 0, \end{aligned}$$
(8)

and the control width and period satisfy

$$\begin{aligned} \sup_{k\in \mathbb{Z}^{+}}\{{t_{k+1}-t_{k}-d_{k}} \}\leq c, \quad\quad \lim_{k\rightarrow \infty } \Biggl(\beta t_{k}-(\alpha +\beta ) \sum_{i=1}^{k-1}d_{i} \Biggr)=-\infty , \quad k\in \mathbb{Z}^{+}, \end{aligned}$$
(9)

where \(M_{11}=\alpha P-\gamma ^{-1}Q-L_{1}X\), \(M_{12}=P-RC(t)\), \(M_{13}=T _{12}\), \(M_{14}=\gamma ^{-1}Q\), \(M_{15}=L_{2}X\), \(M_{22}=\tau e^{\alpha \tau }T_{22}+\gamma e^{\alpha \gamma }Q-R-R^{T}\), \(M_{24}=RD(t)\), \(M_{25}=RW _{1}(t)\), \(M_{26}=RW_{2}(t)\), \(M_{33}=\tau T_{11}-T_{12}-T_{12}^{T}-L _{1}Y\), \(M_{36}=L_{2}Y\), \(M_{44}=-\gamma ^{-1}Q\), \(N_{11}=-\beta P-\gamma ^{-1}Q-L_{1}X\).

Proof

We construct a Lyapunov–Krasovskii functional of the form

$$ V(t)=V_{1}(t)+V_{2}(t)+V_{3}(t)+V_{4}(t), $$

where

$$\begin{aligned}& V_{1}(t)=y^{T}(t)Py(t), \quad\quad V_{2}(t)= \int _{-\tau }^{0} \int _{t+\xi }^{t}e^{\alpha (s-t+\tau )} \dot{y}^{T}(s)T_{22}\dot{y}(s)\,ds\,d\xi, \\& V_{3}(t)= \int _{0}^{t} \int _{\xi -\tau (\xi )}^{\xi }e^{\alpha (s-t)} { \begin{bmatrix} y(\xi -\tau (\xi )) \\ \dot{y}(s) \end{bmatrix}}^{T}{ \begin{bmatrix} T_{11} & T_{12} \\ \ast & T_{22} \end{bmatrix}} { \begin{bmatrix} y(\xi -\tau (\xi )) \\ \dot{y}(s) \end{bmatrix}}\,ds\,d\xi , \\& V_{4}(t)= \int _{-\gamma }^{0} \int _{t+\xi }^{t}e^{\alpha (s-t+\gamma )} \dot{y}^{T}(s)Q\dot{y}(s)\,ds\,d\xi. \end{aligned}$$

Calculating the derivative of \(V(t)\) along the trajectory of neural network (5) at the interval \(t\in [t_{k},t_{k+1})\), it follows that

$$\begin{aligned}& \dot{V}_{1}(t) =2y^{T}(t)P\dot{y}(t)=-\alpha V_{1}(t)+\alpha y^{T}(t)Py(t)+2 y^{T}(t)P\dot{y}(t) , \end{aligned}$$
(10)
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{2}(t)&=\tau e^{\alpha \tau } \dot{y}^{T}(t)T_{22}\dot{y}(t)- \int _{t-\tau }^{t}e^{\alpha (s-t+\tau )} \dot{y}^{T}(s)T_{22}\dot{y}(s)\,ds -\alpha V_{2}(t) \\ &\leq -\alpha V_{2}(t)+\tau e^{\alpha \tau }\dot{y}^{T}(t)T_{22} \dot{y}(t)- \int _{t-\tau }^{t}\dot{y}^{T}(s)T_{22} \dot{y}(s)\,ds, \end{aligned} \end{aligned}$$
(11)
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{3}(t)&\leq \int _{t-\tau (t)}^{t}{ \begin{bmatrix} y(t-\tau (t)) \\ \dot{y}(s) \end{bmatrix}}^{T}{ \begin{bmatrix} T_{11} & T_{12} \\ \ast & T_{22} \end{bmatrix}} { \begin{bmatrix} y(t-\tau (t)) \\ \dot{y}(s) \end{bmatrix}}\,ds-\alpha V_{3}(t) \\ &\leq -\alpha V_{3}(t)+ \int _{t-\tau }^{t}\dot{y}^{T}(s)T_{22} \dot{y}(s)\,ds+2y ^{T}\bigl(t-\tau (t)\bigr)T_{12}y(t) \\ &\quad{} +y^{T}\bigl(t-\tau (t)\bigr) \bigl(\tau T_{11}-T_{12}-T^{T}_{12}\bigr)y \bigl(t-\tau (t)\bigr) , \end{aligned} \\& \begin{aligned} \dot{V}_{4}(t)&= \gamma e^{\alpha \gamma }\dot{y}^{T}(t)Q\dot{y}(t)- \int _{t-\gamma }^{t}e^{\alpha (s-t+\gamma )} \dot{y}^{T}(s)Q\dot{y}(s)\,ds -\alpha V_{4}(t) \\ &\leq -\alpha V_{4}(t)+\gamma e^{\alpha \gamma } \dot{y}^{T}(t)Q \dot{y}(t)- \int _{t-\gamma }^{t}\dot{y}^{T}(s)Q \dot{y}(s)\,ds. \end{aligned} \end{aligned}$$
(12)

By Lemma 1, we have

$$\begin{aligned} - \int _{t-\gamma }^{t}\dot{y}^{T}(s)Q \dot{y}(s)\,ds&\leq -\frac{1}{\gamma } \biggl( \int _{t-\gamma }^{t}\dot{y}(s)\,ds \biggr)^{T} Q \int _{t- \gamma }^{t}\dot{y}(s)\,ds \\ &= -\frac{1}{\gamma }\bigl(y(t)-y(t-\gamma )\bigr)^{T}Q \bigl(y(t)-y(t-\gamma )\bigr) \\ &= -\frac{1}{\gamma } y^{T}(t)Qy(t)+\frac{2}{\gamma }y^{T}(t)Qy(t- \gamma ) \\ &\quad{} -\frac{1}{\gamma }y^{T}(t-\gamma )Qy(t-\gamma ). \end{aligned}$$

Then

$$\begin{aligned} \dot{V}_{4}(t)&\leq -\alpha V_{4}(t)+\gamma e^{\alpha \gamma }\dot{y} ^{T}(t)Q\dot{y}(t)-\frac{1}{\gamma } y^{T}(t)Qy(t) \\ &\quad{} +\frac{2}{\gamma }y^{T}(t)Qy(t-\gamma )- \frac{1}{\gamma }y^{T}(t- \gamma )Qy(t-\gamma ) . \end{aligned}$$
(13)

Consider \(n\times n\) diagonal matrices \(X>0\), \(Y>0\). From assumption \((H_{1})\), we get

$$\begin{aligned} { \begin{bmatrix} y(t) \\ F(y(t)) \end{bmatrix}}^{T}{ \begin{bmatrix} L_{1}X & -L_{2}X \\ \ast & X \end{bmatrix}} { \begin{bmatrix} y(t) \\ F(y(t)) \end{bmatrix}}\leq 0, \end{aligned}$$
(14)

and

$$\begin{aligned} { \begin{bmatrix} y(t-\tau (t)) \\ F(y(t-\tau (t))) \end{bmatrix}}^{T}{ \begin{bmatrix} L_{1}Y & -L_{2}Y \\ \ast & Y \end{bmatrix}} { \begin{bmatrix} y(t-\tau (t)) \\ F(y(t-\tau (t))) \end{bmatrix}}\leq 0. \end{aligned}$$
(15)

It then follows from inequalities (10)–(15) that

$$\begin{aligned} \dot{V}(t)&\leq -\alpha V(t)+y^{T}(t) \biggl(\alpha P- \frac{1}{\gamma }Q-L_{1}X \biggr)y(t)+y^{T}(t) \bigl(P+P^{T}\bigr)\dot{y}(t) \\ & \quad{} +\dot{y}^{T}(t) \bigl(\tau e^{\alpha \tau }T_{22}+ \gamma e^{\alpha \gamma }Q\bigr)\dot{y}(t)+y^{T}\bigl(t-\tau (t)\bigr) \bigl(\tau T_{11}-T_{12} \\ &\quad{} -T^{T}_{12}-L_{1}Y\bigr)y\bigl(t- \tau (t)\bigr)+ y^{T}(t) \bigl(T_{12}+T^{T}_{12} \bigr)y\bigl(t- \tau (t)\bigr) \\ & \quad{} +\frac{2}{\gamma }y^{T}(t)Qy(t-\gamma )- \frac{1}{\gamma }y^{T}(t- \gamma ) Qy(t-\gamma )+2y^{T}(t)L_{2}XF\bigl(y(t)\bigr) \\ &\quad{} -F^{T}\bigl(y(t)\bigr)XF\bigl(y(t)\bigr)+2y^{T} \bigl(t-\tau (t)\bigr)L_{2}Y F\bigl(y\bigl(t-\tau (t)\bigr)\bigr) \\ & \quad{} -F^{T}\bigl(y\bigl(t-\tau (t)\bigr)\bigr)YF\bigl(y\bigl(t- \tau (t)\bigr)\bigr) . \end{aligned}$$
(16)

In addition, when \(t_{k}\leq t< t_{k}+d_{k}\), substituting the first equation of system (5) for \(\dot{y}(t)\) yields the following auxiliary equality:

$$\begin{aligned} 0&= 2\dot{y}^{T}(t)R\bigl(-\dot{y}(t)-C(t)y(t)+W_{1}(t)F\bigl(y(t)\bigr)+W_{2}(t)F\bigl(y\bigl(t-\tau (t)\bigr)\bigr)+D(t)y(t-\gamma )\bigr) \\ &= -2\dot{y}^{T}(t)R\dot{y}(t)-2\dot{y}^{T}(t)RC(t)y(t)+2 \dot{y}^{T}(t)RW _{1}(t)F\bigl(y(t)\bigr) \\ &\quad {} +2\dot{y}^{T}(t)RW_{2}(t)F\bigl(y\bigl(t-\tau (t)\bigr)\bigr)+2\dot{y}^{T}(t)RD(t)y(t- \gamma ) . \end{aligned}$$
(17)

Then, from (16) and (17), we obtain

$$\begin{aligned} \dot{V}(t)&\leq -\alpha V(t)+y^{T}(t) \biggl(\alpha P- \frac{1}{\gamma }Q-L_{1}X \biggr)y(t)+2y^{T}(t) \bigl(P-RC(t)\bigr)\dot{y}(t) \\ & \quad {} +\dot{y}^{T}(t) \bigl(\tau e^{\alpha \tau }T_{22}+ \gamma e^{\alpha \gamma }Q-R-R^{T}\bigr)\dot{y}(t)+y^{T} \bigl(t-\tau (t)\bigr) \bigl(\tau T_{11}-T_{12} \\ & \quad {} -T^{T}_{12}-L_{1}Y\bigr)y\bigl(t- \tau (t)\bigr)+2 y^{T}(t)T_{12}y\bigl(t-\tau (t)\bigr)+ \frac{2}{ \gamma }y^{T}(t)Qy(t-\gamma ) \\ & \quad {} -\frac{1}{\gamma }y^{T}(t-\gamma ) Qy(t-\gamma )+2y^{T}(t)L_{2}XF\bigl(y(t)\bigr)+2 \dot{y}^{T}(t)RW_{1}(t)F\bigl(y(t)\bigr) \\ & \quad {} -F^{T}\bigl(y(t)\bigr)XF\bigl(y(t)\bigr)+2y^{T} \bigl(t-\tau (t)\bigr)L_{2}Y F\bigl(y\bigl(t-\tau (t)\bigr)\bigr) \\ & \quad {} +2\dot{y}^{T}(t)RW_{2}(t)F\bigl(y\bigl(t-\tau (t)\bigr) \bigr)+2\dot{y}^{T}(t)RD(t)y(t-\gamma ) \\ & \quad {} -F^{T}\bigl(y\bigl(t-\tau (t)\bigr)\bigr)YF\bigl(y \bigl(t-\tau (t)\bigr)\bigr) \\ &= \eta ^{T}(t)M_{1}\eta (t)-\alpha V(t), \end{aligned}$$

where \(\eta (t)= ( y^{T}(t), \dot{y}^{T}(t), y^{T}(t-\tau (t)), y ^{T}(t-\gamma ), F^{T}(y(t)), F^{T}(y(t-\tau (t))) )^{T}\), which together with (7) yields

$$\begin{aligned} \dot{V}(t)\leq -\alpha V(t), \quad t\in [t_{k},t_{k}+d_{k}) . \end{aligned}$$
(18)

Let \(G_{1}(t)=e^{\alpha t}V(t)\). By (18), one can see that \(G_{1}(t)\) is a monotone decreasing function on \(t\in [t_{k},t_{k}+d_{k})\). Then

$$\begin{aligned}& G_{1}(t)\leq G_{1}(t_{k}), \quad\quad G_{1}(t_{k}+d_{k})\leq G_{1}(t_{k}), \end{aligned}$$

which implies that

$$\begin{aligned}& V(t)\leq e^{-\alpha (t-t_{k})}V(t_{k}), \quad t\in [t_{k},t_{k}+d_{k}) , \end{aligned}$$
(19)
$$\begin{aligned}& V(t_{k}+d_{k})\leq e^{-\alpha d_{k}}V(t_{k}) . \end{aligned}$$
(20)

When \(t_{k}+d_{k}\leq t< t_{k+1}\), from the second equation of system (5), it follows that

$$\begin{aligned} 0&= 2\dot{y}^{T}(t)R\bigl(-\dot{y}(t)-C(t)y(t)+W_{1}(t)F\bigl(y(t)\bigr)+W_{2}(t)F\bigl(y\bigl(t-\tau (t)\bigr)\bigr)\bigr) \\ &= -2\dot{y}^{T}(t)R\dot{y}(t)-2\dot{y}^{T}(t)RC(t)y(t)+2\dot{y}^{T}(t)RW _{1}(t)F\bigl(y(t)\bigr) \\ &\quad{} +2\dot{y}^{T}(t)RW_{2}(t)F\bigl(y\bigl(t-\tau (t)\bigr)\bigr) . \end{aligned}$$
(21)

Since \(\alpha +\beta \geq 0\), from (8), (16), and (21), we obtain

$$\begin{aligned} \dot{V}(t)&\leq -\alpha V(t)+\eta ^{\mathrm{T}}(t)N_{1}\eta (t)+( \alpha +\beta )y^{\mathrm{T}}(t)Py(t) \\ &\leq -\alpha V(t)+\eta ^{\mathrm{T}}(t)N_{1}\eta (t)+(\alpha + \beta )V(t) \leq \beta V(t) . \end{aligned}$$
(22)

Let \(G_{2}(t)=e^{-\beta t}V(t) \). Then \(G_{2}(t)\) is a monotone decreasing function on \(t\in [t_{k}+d _{k},t_{k+1})\) and we have

$$\begin{aligned}& G_{2}(t)\leq G_{2}(t_{k}+d_{k}), \quad\quad G_{2}(t_{k+1})\leq G_{2}(t_{k}+d_{k}), \end{aligned}$$

which implies that

$$\begin{aligned}& V(t)\leq V(t_{k}+d_{k})e^{\beta (t-t_{k}-d_{k})}, \quad t\in [t_{k}+d_{k},t_{k+1}) , \end{aligned}$$
(23)
$$\begin{aligned}& V(t_{k+1})\leq V(t_{k}+d_{k})e^{\beta (t_{k+1}-t_{k}-d_{k})} . \end{aligned}$$
(24)

Thus, combining (20) and (24), one may deduce that

$$\begin{aligned}& V(t_{k+1})\leq V(0)e^{-\alpha \sum _{i=1}^{k}d_{i}+\beta (t _{k+1}-\sum _{i=1}^{k}d_{i} )}=V(0)e^{\beta t_{k+1}-( \alpha +\beta )\sum _{i=1}^{k}d_{i}} , \end{aligned}$$
(25)
$$\begin{aligned}& V(t_{k}+d_{k})\leq V(0)e^{-\alpha \sum _{i=1}^{k}d_{i}+\beta (t_{k}-\sum _{i=1}^{k-1}d_{i} )}\leq V(0)e^{\beta t _{k}-(\alpha +\beta )\sum _{i=1}^{k-1}d_{i}} . \end{aligned}$$
(26)

When \(t_{k}\leq t< t_{k}+d_{k}\), it follows from (19) and (25) that

$$\begin{aligned} V(t)\leq e^{-\alpha (t-t_{k})}V(t_{k})\leq V(0)e^{\beta t_{k}-(\alpha +\beta )\sum _{i=1}^{k-1}d_{i}} . \end{aligned}$$
(27)

When \(t_{k}+d_{k}\leq t< t_{k+1}\), it follows from (9), (23) and (26) that

$$\begin{aligned} V(t)\leq e^{\beta (t-t_{k}-d_{k})}V(t_{k}+d_{k})\leq e^{ \vert \beta \vert c}V(0)e ^{\beta t_{k}-(\alpha +\beta )\sum _{i=1}^{k-1}d_{i}} . \end{aligned}$$
(28)

Therefore, we obtain from (27) and (28) that

$$\begin{aligned}& V(t)\leq e^{ \vert \beta \vert c}V(0)e^{\beta t_{k}-(\alpha +\beta )\sum _{i=1}^{k-1}d_{i}}, \quad k\in \mathbb{Z}^{+}, t\geq 0. \end{aligned}$$

Combining this with condition (9), we have \(\lim_{t\rightarrow \infty }V(t)=0\), which implies that \(\lim_{t\rightarrow \infty }y(t)=0\). Hence, system (5) is globally asymptotically stable. The proof is completed. □

In particular, when system (5) has no parametric uncertainties, i.e., when \(\Delta C(t)=\Delta W_{1}(t)=\Delta W_{2}(t)= \Delta B(t)=0\), system (5) reduces to

$$\begin{aligned} \textstyle\begin{cases} \dot{y}(t)=-Cy(t)+W_{1}F(y(t))+W_{2}F(y(t-\tau (t)))+Dy(t-\gamma ), \\ \quad t_{k}\leq t< t_{k}+d_{k}, \\ \dot{y}(t)=-Cy(t)+W_{1}F(y(t))+W_{2}F(y(t-\tau (t))), \\ \quad t_{k}+d_{k}\leq t< t_{k+1}, \\ y(t)=\varphi (t), \quad \forall t\in [-\rho ,0], \end{cases}\displaystyle \end{aligned}$$
(29)

where the matrix \(D=BK\). Then the following corollary is easily derived.

Corollary 1

Suppose that \((H_{1})\) holds. Then the neural network (29) is globally asymptotically stable if for given constants \(\alpha >0\), \(\beta \geq -\alpha \), \(c\geq 0\), there exist \(n\times n\) matrices \(P>0\), \(Q>0\), \(n\times n\) diagonal matrices \(X>0\), \(Y>0\), an \(n\times n\) matrix R, and a \(2n\times 2n\) matrix

$$\begin{aligned}& T_{1}= \begin{bmatrix} T_{11} & T_{12} \\ \ast & T_{22} \end{bmatrix}>0 \end{aligned}$$

such that the following inequalities hold:

$$\begin{aligned}& M_{2}={ \begin{bmatrix} M_{11} & \tilde{M}_{12} & M_{13} & M_{14} & M_{15} & 0 \\ \ast & M_{22} & 0 & \tilde{M}_{24} & \tilde{M}_{25} & \tilde{M}_{26} \\ \ast & \ast & M_{33} & 0 & 0 & M_{36} \\ \ast & \ast & \ast & M_{44} & 0 & 0 \\ \ast & \ast & \ast & \ast & -X & 0 \\ \ast & \ast & \ast & \ast & \ast & -Y \end{bmatrix}}< 0, \\& N_{2}={ \begin{bmatrix} N_{11} & \tilde{M}_{12} & M_{13} & M_{14} & M_{15} & 0 \\ \ast & M_{22} & 0 & 0 & \tilde{M}_{25} & \tilde{M}_{26} \\ \ast & \ast & M_{33} & 0 & 0 & M_{36} \\ \ast & \ast & \ast & M_{44} & 0 & 0 \\ \ast & \ast & \ast & \ast & -X & 0 \\ \ast & \ast & \ast & \ast & \ast & -Y \end{bmatrix}}< 0, \end{aligned}$$

and the control width and period satisfy (9), where \(\tilde{M} _{12}=P-RC\), \(\tilde{M}_{24}=RD\), \(\tilde{M}_{25}=RW_{1}\), \(\tilde{M} _{26}=RW_{2}\), and other parameters are the same as in Theorem 1.

It is worth noticing that Theorem 1 can be applied only if the uncertainty \(A(t)\) and the matrix \(D(t)\) are exactly known. When only an estimate of \(A(t)\) is available or K is a control gain yet to be designed, the results derived above are difficult to use. Next, we give a result for this more general case, in which the control gain K is determined by making some transformations.

Theorem 2

Under conditions \((H_{1})\) and \((H_{2})\), the neural network (5) is globally asymptotically stable if for given constants \(\mu _{i}\), \(i=1,2,\ldots,7\), \(\alpha >0\), \(\beta \geq -\alpha \), \(c\geq 0\), there exist an \(n\times n\) diagonal matrix \(S>0\) and an \(m\times n\) matrix Z such that the following inequalities hold:

$$\begin{aligned}& T_{2}={ \begin{bmatrix} \mu _{5}S & \mu _{6}S \\ \ast & \mu _{7}S \end{bmatrix}}>0, \end{aligned}$$
(30)
$$\begin{aligned}& M_{3}={ \begin{bmatrix} \phi _{11} & \phi _{12} & \phi _{13} & \phi _{14} & \phi _{15} & 0 & 0 & 0 \\ \ast & \phi _{22} & 0 & \phi _{24} & \phi _{25} & \phi _{26} & -SE_{1} ^{T} & 0 \\ \ast & \ast & \phi _{33} & 0 & 0 & \phi _{36} & 0 & 0 \\ \ast & \ast & \ast & \phi _{44} & 0 & 0 & 0 & Z^{T}E_{4}^{T} \\ \ast & \ast & \ast & \ast & -\mu _{1}S & 0 & 0 & SE_{2}^{T} \\ \ast & \ast & \ast & \ast & \ast & -\mu _{2}S & 0 & SE_{3}^{T} \\ \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon _{1}I & 0 \\ \ast & \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon _{2}I \end{bmatrix}}< 0, \end{aligned}$$
(31)
$$\begin{aligned}& N_{3}={ \begin{bmatrix} \tilde{\phi }_{11} & \phi _{12} & \phi _{13} & \phi _{14} & \phi _{15} & 0 & 0 & 0 \\ \ast & \tilde{\phi }_{22} & 0 & 0 & \phi _{25} & \phi _{26} & -SE_{1} ^{T} & 0 \\ \ast & \ast & \phi _{33} & 0 & 0 & \phi _{36} & 0 & 0 \\ \ast & \ast & \ast & \phi _{44} & 0 & 0 & 0 & 0 \\ \ast & \ast & \ast & \ast & -\mu _{1}S & 0 & 0 & SE_{2}^{T} \\ \ast & \ast & \ast & \ast & \ast & -\mu _{2}S & 0 & SE_{3}^{T} \\ \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon _{1}I & 0 \\ \ast & \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon _{3}I \end{bmatrix}}< 0, \end{aligned}$$
(32)

and the control width and period satisfy (9), where \(\phi _{11}=\alpha S-\gamma ^{-1}\mu _{4}S-\mu _{1}SL_{1}+(\varepsilon _{1}\mu _{3}^{2})HH^{T}\), \(\phi _{12}=S-\mu _{3}CS\), \(\phi _{13}=\mu _{6}S\), \(\phi _{14}=\gamma ^{-1}\mu _{4}S\), \(\phi _{15}=\mu _{1}SL_{2}\), \(\phi _{22}= \mu _{7}\tau e^{\alpha \tau }S+\mu _{4}\gamma e^{\alpha \gamma }S-2\mu _{3}S+(\varepsilon _{2}\mu _{3}^{2})HH^{T}\), \(\phi _{24}=\mu _{3}BZ\), \(\phi _{25}=\mu _{3}W_{1}S\), \(\phi _{26}=\mu _{3}W_{2}S\), \(\phi _{33}=\mu _{5} \tau S-2\mu _{6}S-\mu _{2}SL_{1}\), \(\phi _{36}=\mu _{2}SL_{2}\), \(\phi _{44}=- \gamma ^{-1}\mu _{4}S\), \(\tilde{\phi }_{11}=-\beta S-\gamma ^{-1}\mu _{4}S- \mu _{1}SL_{1}+(\varepsilon _{1}\mu _{3}^{2})HH^{T}\), \(\tilde{\phi }_{22}= \mu _{7}\tau e^{\alpha \tau }S+\mu _{4}\gamma e^{\alpha \gamma }S-2\mu _{3}S+(\varepsilon _{3}\mu _{3}^{2})HH^{T}\).

Moreover, the gain matrix K is given by \(K=ZS^{-1}\).

Proof

When \(t_{k}\leq t< t_{k}+d_{k}\), the condition \(M_{1}<0\) in Theorem 1 can be rewritten as

$$\begin{aligned} M_{1}={} &M_{2}+{ \begin{bmatrix} 0 & -R\Delta C(t) & 0 & 0 & 0 & 0 \\ \ast & 0 & 0 & R\Delta D(t) & R\Delta W_{1}(t) & R\Delta W _{2}(t) \\ \ast & \ast & 0 & 0 & 0 & 0 \\ \ast & \ast & \ast & 0 & 0 & 0 \\ \ast & \ast & \ast & \ast & 0 & 0 \\ \ast & \ast & \ast & \ast & \ast & 0 \end{bmatrix}} \\ ={} &M_{2}+\varGamma _{1}^{T}A(t) \varUpsilon _{1}+\varUpsilon _{1}^{T}A^{T}(t) \varGamma _{1}+\varGamma _{2}^{T}A(t) \varUpsilon _{2}+\varUpsilon _{2}^{T}A^{T}(t) \varGamma _{2} < 0, \end{aligned}$$
(33)

where \(\varGamma _{1}^{T}=(\begin{array}{cccccc}H^{T}R^{T}&0&0&0&0&0\end{array})^{T}\), \(\varGamma _{2}^{T}=(\begin{array}{cccccc}0&H ^{T}R^{T}&0&0&0&0\end{array})^{T}\), \(\varUpsilon _{1}= (\begin{array}{cccccc}0&-E_{1}&0&0&0&0\end{array})\), \(\varUpsilon _{2}= (\begin{array}{cccccc}0&0&0&E_{4}K&E_{2}&E_{3}\end{array})\), \(\Delta D(t)=\Delta B(t)K\), and \(M_{2}\) is defined as in Corollary 1.

When \(t_{k}+d_{k}\leq t< t_{k+1}\), the condition \(N_{1}<0\) in Theorem 1 can be written as

$$\begin{aligned} N_{1} &=N_{2}+{ \begin{bmatrix} 0 & -R\Delta C(t) & 0 & 0 & 0 & 0 \\ \ast & 0 & 0 & 0 & R\Delta W_{1}(t) & R\Delta W_{2}(t) \\ \ast & \ast & 0 & 0 & 0 & 0 \\ \ast & \ast & \ast & 0 & 0 & 0 \\ \ast & \ast & \ast & \ast & 0 & 0 \\ \ast & \ast & \ast & \ast & \ast & 0 \end{bmatrix}} \\ &=N_{2}+\varGamma _{1}^{T}A(t)\varUpsilon _{1}+\varUpsilon _{1}^{T}A^{T}(t) \varGamma _{1}+\varGamma _{2}^{T}A(t) \varUpsilon _{3}+\varUpsilon _{3}^{T}A^{T}(t) \varGamma _{2} < 0, \end{aligned}$$
(34)

where \(\varUpsilon _{3}=(\begin{array}{cccccc}0&0&0&0&E_{2}&E_{3}\end{array})\) and \(N_{2}\) is defined as in Corollary 1. By Lemma 2, (33) and (34) are equivalent to the following inequalities:

$$\begin{aligned}& M_{2}+\varepsilon _{1}\varGamma _{1}^{T} \varGamma _{1}+\varepsilon _{1}^{-1} \varUpsilon _{1}^{T}\varUpsilon _{1}+ \varepsilon _{2}\varGamma _{2}^{T}\varGamma _{2}+ \varepsilon _{2}^{-1}\varUpsilon _{2}^{T}\varUpsilon _{2} < 0, \end{aligned}$$
(35)
$$\begin{aligned}& N_{2}+\varepsilon _{1}\varGamma _{1}^{T} \varGamma _{1}+\varepsilon _{1}^{-1} \varUpsilon _{1}^{T}\varUpsilon _{1}+ \varepsilon _{3}\varGamma _{2}^{T}\varGamma _{2}+ \varepsilon _{3}^{-1}\varUpsilon _{3}^{T}\varUpsilon _{3} < 0. \end{aligned}$$
(36)

Then, applying the Schur complement lemma, we obtain from the above inequalities that

$$\begin{aligned}& { \begin{bmatrix} \hat{M} & \varUpsilon _{1}^{T} & \varUpsilon _{2}^{T} \\ \ast & -\varepsilon _{1}I & 0 \\ \ast & \ast & -\varepsilon _{2}I \end{bmatrix}}< 0, \end{aligned}$$
(37)
$$\begin{aligned}& { \begin{bmatrix} \hat{N} & \varUpsilon _{1}^{T} & \varUpsilon _{3}^{T} \\ \ast & -\varepsilon _{1}I & 0 \\ \ast & \ast & -\varepsilon _{3}I \end{bmatrix}}< 0, \end{aligned}$$
(38)

where \(\hat{M}=M_{2}+\varepsilon _{1}\varGamma _{1}^{T}\varGamma _{1}+ \varepsilon _{2}\varGamma _{2}^{T}\varGamma _{2}\) and \(\hat{N}=N_{2}+ \varepsilon _{1}\varGamma _{1}^{T}\varGamma _{1}+\varepsilon _{3}\varGamma _{2}^{T} \varGamma _{2}\). Let \(X=\mu _{1}P\), \(Y=\mu _{2}P\), \(R=\mu _{3}P\), \(Q=\mu _{4}P\), \(T_{11}=\mu _{5}P\), \(T_{12}=\mu _{6}P\), \(T_{22}=\mu _{7}P\) in \(M_{2}\) and \(N_{2}\), and let \(S=P^{-1}\), \(Z=KS\), where P is a diagonal matrix. By pre-multiplying and post-multiplying inequality (6) with diag \(\{P^{-1},P^{-1}\}\) and inequalities (37) and (38) with diag \(\{P^{-1},P^{-1},P^{-1},P^{-1},P^{-1},P^{-1},I,I\}\), we obtain that inequalities (6), (37), and (38) are equivalent to inequalities (30), (31), and (32), which completes the proof. □
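The computational core of Theorem 2 is the linearizing change of variables \(S=P^{-1}\), \(Z=KS\), which turns (30)–(32) into LMIs in S, Z, and \(\varepsilon _{i}\), after which the gain is recovered as \(K=ZS^{-1}\). The paper solves such LMIs with the MATLAB LMI toolbox; the following hedged sketch illustrates the same mechanics on the much simpler classical state-feedback LMI \(AS+SA^{T}+BZ+Z^{T}B^{T}<0\), using Python and cvxpy with an illustrative plant of our own choosing rather than the theorem's full \(8\times 8\) block inequalities.

```python
import cvxpy as cp
import numpy as np

# Illustrative plant (not the paper's example): stabilize xdot = A x + B u.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
n, m = A.shape[0], B.shape[1]

# Linearizing change of variables as in Theorem 2: S = P^{-1} > 0, Z = K S.
S = cp.Variable((n, n), symmetric=True)
Z = cp.Variable((m, n))
lmi = A @ S + S @ A.T + B @ Z + Z.T @ B.T
lmi = 0.5 * (lmi + lmi.T)                 # symmetrize for the solver
prob = cp.Problem(cp.Minimize(0),
                  [S >> 1e-3 * np.eye(n), lmi << -1e-3 * np.eye(n)])
prob.solve()

K = Z.value @ np.linalg.inv(S.value)      # recover the gain, K = Z S^{-1}
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```

The same feasibility-plus-substitution pattern applies to (30)–(32), with the block matrices assembled, e.g., via `cp.bmat` and one PSD/NSD constraint per inequality.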

Remark 1

Up to now, many interesting results on intermittent control have been reported in the literature [18, 25,26,27,28]. Note that most of them assume that the control scheme is periodic or depends only on the state at the current time. However, in many applications the fixed control width and period may be inadequate [29, 30], and delays always exist in the process of inputting the control. Therefore, the existing control schemes in [18, 25,26,27,28] may be unsuitable. In this paper, we focus on delayed intermittent control with non-fixed control width and period for the stability of uncertain neural networks with delays. Because the value of the time delay in the controller can be changed without increasing the control gain, and the control width and period are non-fixed, the delayed intermittent control achieves better performance and is more flexible and practical in engineering control and industrial processes.

Remark 2

The control term in system (3) is given in the form Bu. By selecting a proper matrix B, the controller can thus be applied to some of the states rather than all of them, which is of significant importance in engineering control problems. A numerical example is shown in Sect. 4.

In particular, if there is no time delay in the controller, i.e., \(\gamma =0\), the following corollary is easy to prove.

Corollary 2

Under conditions \((H_{1})\) and \((H_{2})\), neural network (5) with \(\gamma =0\) is globally asymptotically stable if for given constants \(\mu _{i}\), \(i=1,2,3,5,6,7\), \(\alpha >0\), \(\beta \geq -\alpha \), \(c\geq 0\), there exist an \(n\times n\) diagonal matrix \(S>0\) and an \(m\times n\) matrix Z such that (30) and the following inequalities hold:

$$\begin{aligned}& M_{4}={ \begin{bmatrix} \pi _{11} & \pi _{12} & \pi _{13} & \pi _{14} & 0 & 0 & 0 \\ \ast & \pi _{22} & 0 & \pi _{24} & \pi _{25} & -SE_{1}^{T}+Z^{T}E_{4} ^{T} & 0 \\ \ast & \ast & \pi _{33} & 0 & \pi _{35} & 0 & 0 \\ \ast & \ast & \ast & -\mu _{1}S & 0 & 0 & SE_{2}^{T} \\ \ast & \ast & \ast & \ast & -\mu _{2}S &0 & SE_{3}^{T} \\ \ast & \ast & \ast & \ast & \ast & -\varepsilon _{4}I & 0 \\ \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon _{5}I \end{bmatrix}}< 0, \end{aligned}$$
(39)
$$\begin{aligned}& N_{4}={ \begin{bmatrix} \tilde{\pi }_{11} & \tilde{\pi }_{12} & \pi _{13} & \pi _{14} & 0 & 0 & 0 \\ \ast & \pi _{22} & 0 & \pi _{24} & \pi _{25} & -SE_{1}^{T} & 0 \\ \ast & \ast & \pi _{33} & 0 & \pi _{35} & 0 & 0 \\ \ast & \ast & \ast & -\mu _{1}S & 0 & 0 & SE_{2}^{T} \\ \ast & \ast & \ast & \ast & -\mu _{2}S &0 & SE_{3}^{T} \\ \ast & \ast & \ast & \ast & \ast & -\varepsilon _{6}I & 0 \\ \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon _{5}I \end{bmatrix}}< 0, \end{aligned}$$
(40)

and the control width and period satisfy (9), where \(\pi _{11}=\alpha S-\mu _{1}SL_{1}+\varepsilon _{4}\mu _{3}^{2}HH^{T}\), \(\pi _{12}=S-\mu _{3}CS+\mu _{3}BZ\), \(\pi _{13}=\mu _{6}S\), \(\pi _{14}=\mu _{1}SL _{2}\), \(\pi _{22}=\mu _{7}\tau e^{\alpha \tau }S-2\mu _{3}S+\varepsilon _{5}\mu _{3}^{2}HH^{T}\), \(\pi _{24}=\mu _{3}W_{1}S\), \(\pi _{25}=\mu _{3}W_{2}S\), \(\pi _{33}=\mu _{5}\tau S-2\mu _{6}S-\mu _{2}SL_{1}\), \(\pi _{35}=\mu _{2}SL _{2}\), \(\tilde{\pi }_{11}=-\beta S-\mu _{1}SL_{1}+\varepsilon _{6}\mu _{3} ^{2}HH^{T}\), \(\tilde{\pi }_{12}=S-\mu _{3}CS\).

Moreover, the gain matrix K is taken as \(K=ZS^{-1}\).

Proof

Consider a Lyapunov–Krasovskii functional \(V=V_{1}+V _{2}+V_{3}\), where \(V_{1}\), \(V_{2}\) and \(V_{3}\) are the same as in Theorem 1. When \(t_{k}\leq t< t_{k}+d_{k}\), if

$$\begin{aligned} M_{5}=M_{6}+\varGamma _{3}^{T}A(t) \varUpsilon _{4}+\varUpsilon _{4}^{T}A^{T}(t) \varGamma _{3}+\varGamma _{4}^{T}A(t) \varUpsilon _{5}+\varUpsilon _{5}^{T}A^{T}(t) \varGamma _{4} < 0, \end{aligned}$$
(41)

it can be derived that \(\dot{V}(t)\leq \eta ^{T}(t)M_{5}\eta (t)- \alpha V(t) \leq -\alpha V(t)\), where

$$\begin{aligned}& M_{6} ={ \begin{bmatrix} \varphi _{11} & \varphi _{12} & \varphi _{13} & \varphi _{14} & 0 \\ \ast & \varphi _{22} & 0 & \varphi _{24} & \varphi _{25} \\ \ast & \ast & \varphi _{33} & 0 & \varphi _{35} \\ \ast & \ast & \ast & -X & 0 \\ \ast & \ast & \ast & \ast & -Y \end{bmatrix}}, \end{aligned}$$

\(\varphi _{11}=\alpha P-L_{1}X\), \(\varphi _{12}=P-RC+RD\), \(\varphi _{13}=T _{12}\), \(\varphi _{14}=L_{2}X\), \(\varphi _{22}=\tau e^{\alpha \tau }T_{22}-R-R ^{T}\), \(\varphi _{24}=RW_{1}\), \(\varphi _{25}=RW_{2}\), \(\varphi _{33}=\tau T _{11}-T_{12}-T_{12}^{T}-L_{1}Y\), \(\varphi _{35}=L_{2}Y\), \(\varGamma _{3}^{T}=(\begin{array}{ccccc}H^{T}R^{T}&0&0&0&0\end{array})^{T}\), \(\varGamma _{4}^{T}=(\begin{array}{ccccc}0&H^{T}R^{T}&0&0&0\end{array})^{T}\), \(\varUpsilon _{4}=(\begin{array}{ccccc}0&-E_{1}+E_{4}K&0&0&0\end{array})\), \(\varUpsilon _{5}=(\begin{array}{ccccc}0&0&0&E _{2}&E_{3}\end{array})\).

When \(t_{k}+d_{k}\leq t< t_{k+1}\), if

$$\begin{aligned} N_{5}=N_{6}+\varGamma _{3}^{T}A(t) \varUpsilon _{6}+\varUpsilon _{6}^{T}A^{T}(t) \varGamma _{3}+\varGamma _{4}^{T}A(t) \varUpsilon _{5}+\varUpsilon _{5}^{T}A^{T}(t) \varGamma _{4} < 0, \end{aligned}$$
(42)

we have \(\dot{V}(t)\leq \eta ^{T}(t)N_{5}\eta (t)+\beta V(t)\leq \beta V(t)\), where

$$\begin{aligned}& N_{6} ={ \begin{bmatrix} \hat{\varphi }_{11} & \hat{\varphi }_{12} & \varphi _{13} & \varphi _{14} & 0 \\ \ast & \varphi _{22} & 0 & \varphi _{24} & \varphi _{25} \\ \ast & \ast & \varphi _{33} & 0 & \varphi _{35} \\ \ast & \ast & \ast & -X & 0 \\ \ast & \ast & \ast & \ast & -Y \end{bmatrix}}, \end{aligned}$$

\(\hat{\varphi }_{11}=-\beta P-L_{1}X\), \(\hat{\varphi }_{12}=P-RC\), \(\varUpsilon _{6}=(\begin{array}{ccccc}0&-E_{1}&0&0&0\end{array})\).

Then, following the steps of the proof of Theorem 1, under conditions (6), (41), and (42), system (5) with \(\gamma =0\) is globally asymptotically stable. Next, we make some transformations in order to obtain equivalent stability conditions that can be solved by the LMI toolbox. By Lemma 2 and the Schur complement lemma, (41) and (42) are equivalent to the following inequalities:

$$\begin{aligned}& { \begin{bmatrix} \bar{M} & \varUpsilon _{4}^{T} & \varUpsilon _{5}^{T} \\ \ast & -\varepsilon _{4}I & 0 \\ \ast & \ast & -\varepsilon _{5}I \end{bmatrix}}< 0, \end{aligned}$$
(43)
$$\begin{aligned}& { \begin{bmatrix} \bar{N} & \varUpsilon _{6}^{T} & \varUpsilon _{5}^{T} \\ \ast & -\varepsilon _{6}I & 0 \\ \ast & \ast & -\varepsilon _{5}I \end{bmatrix}}< 0, \end{aligned}$$
(44)

where \(\bar{M}=M_{6}+\varepsilon _{4}\varGamma _{3}^{T}\varGamma _{3}+ \varepsilon _{5}\varGamma _{4}^{T}\varGamma _{4}\), \(\bar{N}=N_{6}+\varepsilon _{6}\varGamma _{3}^{T}\varGamma _{3}+\varepsilon _{5}\varGamma _{4}^{T}\varGamma _{4}\). Similarly, repeating the arguments as in Theorem 2, one can obtain that (6), (43), and (44) are equivalent to (30), (39), and (40). The proof is completed. □

It should be noted that the proposed approach is also applicable to the stability of system (5) when the control scheme is periodic.

In this case, the intermittent controller \(u(t)\) can be written as

$$\begin{aligned} u(t)= \textstyle\begin{cases} Ky(t-\gamma ), & kT\leq t< kT+\delta , \\ 0, & kT+\delta \leq t< (k+1)T, \end{cases}\displaystyle \end{aligned}$$
(45)

where δ denotes the control width and \(T>0\) is the control period. System (5) turns into the following form:

$$\begin{aligned} \textstyle\begin{cases} \dot{y}(t)=-C(t)y(t)+W_{1}(t)F(y(t))+W_{2}(t)F(y(t-\tau (t))) \\ \hphantom{\dot{y}(t)}\quad {} +D(t)y(t-\gamma ), \quad kT\leq t< kT+\delta , \\ \dot{y}(t)=-C(t)y(t)+W_{1}(t)F(y(t))+W_{2}(t)F(y(t-\tau (t))), \\ \quad kT+\delta \leq t< (k+1)T, \\ y(t)=\varphi (t), \quad \forall t\in [-\rho ,0], \end{cases}\displaystyle \end{aligned}$$
(46)

where \(k\in \mathbb{N}\). Then we can obtain the following result.

Corollary 3

Under conditions \((H_{1})\) and \((H_{2})\), the neural network (46) is globally exponentially stable if for given constants \(\mu _{i}\), \(i=1,2,\ldots,7\), \(\alpha >0\), \(\beta \geq -\alpha \), \(c\geq 0\), there exist an \(n\times n\) diagonal matrix \(S>0\) and an \(m\times n\) matrix Z such that inequalities (30)–(32) hold and the control period and control width satisfy

$$ (\alpha +\beta )\delta -\beta T>0 . $$
(47)

Proof

When \(kT\leq t< kT+\delta \), based on (27) and (47), we get

$$\begin{aligned} V(t)\leq{} & V(0)e^{\beta kT-(\alpha +\beta )(k-1)\delta } =V(0)e^{( \alpha +\beta )\delta +(\beta T-(\alpha +\beta )\delta )k} \\ \leq{} &C_{1}V(0)e^{(\beta T-(\alpha +\beta )\delta ) \frac{t-\delta }{T}} , \end{aligned}$$
(48)

where \(C_{1}=e^{(\alpha +\beta )\delta }\). When \(kT+\delta \leq t<(k+1)T\), using (28) and (47), we can obtain

$$\begin{aligned} V(t)\leq{} & e^{ \vert \beta \vert c}V(0)e^{\beta kT-(\alpha +\beta )(k-1)\delta } =V(0)e^{ \vert \beta \vert c+(\alpha +\beta )\delta +(\beta T-(\alpha +\beta ) \delta )k} \\ \leq{} &C_{2}V(0)e^{(\beta T-(\alpha +\beta )\delta )\frac{t}{T}} , \end{aligned}$$
(49)

where \(C_{2}=e^{ \vert \beta \vert c-\beta T+2(\alpha +\beta )\delta }\). Therefore, it follows from (48) and (49) that

$$\begin{aligned}& V(t)\leq C_{2}V(0)e^{-2\zeta (t-\delta )}, \end{aligned}$$

where \(\zeta =\frac{(\alpha +\beta )\delta -\beta T}{2T}>0\). Letting

$$\begin{aligned}& C_{3}=\lambda _{\max }(P)+\frac{1}{2}\tau ^{2}e^{\alpha \tau } \lambda _{\max }(T_{22})+ \frac{1}{2}\gamma ^{2}e^{\alpha \gamma } \lambda _{\max }(Q), \end{aligned}$$

one gets

$$\begin{aligned}& V(t)\geq V_{1}(t)\geq \lambda _{\min }(P) \bigl\Vert y(t) \bigr\Vert ^{2}, \quad\quad V(0)\leq C_{3} \Vert \varphi \Vert _{\rho }^{2}. \end{aligned}$$

Thus, \(\Vert y(t) \Vert \leq C_{4} \Vert \varphi \Vert _{\rho }e^{-\zeta t}\), where \(C_{4}=e^{\zeta \delta } \sqrt{C_{2}C_{3}/\lambda _{\min }(P)}\). The proof is completed. □

Remark 3

In [18, 27, 28], the exponential stability of time-delay neural networks was studied extensively by the periodically intermittent control method. However, these studies relied on the assumption that the time-varying delays are differentiable. Corollary 3 removes this restriction on the transmission delays and establishes the global exponential stability of the uncertain neural network (46), which improves the results in [18, 27, 28]. Moreover, the fact that the value of the time delay in the controller is adjustable makes our results more practical in real applications.

4 Numerical illustrations

In this section, a numerical simulation is provided to show the effectiveness of the proposed approach.

Consider the two-dimensional neural network (5) with the following parameters:

$$\begin{aligned}& C={ \begin{bmatrix} 1.2 & 0 \\ 0 & 0.1 \end{bmatrix}}, \quad\quad B={ \begin{bmatrix} 0 \\ 1 \end{bmatrix}}, \quad\quad W_{1}={ \begin{bmatrix} 0 & -0.4 \\ 1.3 & -0.3 \end{bmatrix}}, \quad\quad W_{2}={ \begin{bmatrix} -0.1 & 0.2 \\ -0.01 & 0.9 \end{bmatrix}}, \\& F(s)={ \begin{bmatrix} \tanh (0.3s)-0.2\sin s \\ \tanh (0.2s)+0.3\sin s \end{bmatrix}}, \quad\quad \tau (t)=0.1+0.01 \sin (5t). \end{aligned}$$

One obtains \(l_{1}^{-}=-0.2\), \(l_{1}^{+}=0.5\), \(l_{2}^{-}=-0.3\), \(l _{2}^{+}=0.5\), i.e.,

$$ L_{1}={ \begin{bmatrix} -0.1 & 0 \\ 0 & -0.15 \end{bmatrix}}, \quad\quad L_{2}={ \begin{bmatrix} 0.15 & 0 \\ 0 & 0.1 \end{bmatrix}}. $$
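These bounds can be confirmed numerically by sampling the derivatives of the activation components, as in the short Python spot check below (a finite-grid check, not a proof).

```python
import numpy as np

# Sample f1'(s) = 0.3 sech^2(0.3 s) - 0.2 cos s and
#        f2'(s) = 0.2 sech^2(0.2 s) + 0.3 cos s on a dense grid.
s = np.linspace(-50.0, 50.0, 200_001)
slope1 = 0.3 / np.cosh(0.3 * s) ** 2 - 0.2 * np.cos(s)
slope2 = 0.2 / np.cosh(0.2 * s) ** 2 + 0.3 * np.cos(s)
assert -0.2 - 1e-9 <= slope1.min() and slope1.max() <= 0.5 + 1e-9
assert -0.3 - 1e-9 <= slope2.min() and slope2.max() <= 0.5 + 1e-9
```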

The delayed intermittent controller u is designed as

$$\begin{aligned} u(t)= \textstyle\begin{cases} Ky(t-\gamma ), & t_{k}\leq t< t_{k}+d_{k}, \\ 0, & t_{k}+d_{k}\leq t< t_{k+1}, \end{cases}\displaystyle \end{aligned}$$
(50)

where

$$ \textstyle\begin{cases} t_{2k-1} =1.5(k-1),\quad k\in \mathbb{Z}^{+}, \\ t_{2k} =1.5k-0.8, \quad k\in \mathbb{Z}^{+}, \end{cases}\displaystyle \quad\quad \textstyle\begin{cases} d_{2k-1} =0.3,\quad k\in \mathbb{Z}^{+}, \\ d_{2k} =0.5,\quad k\in \mathbb{Z}^{+}, \end{cases} $$

and \(\gamma =0.01\).
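For the values \(\alpha =\beta =0.1\) and \(c=0.4\) used below, this schedule satisfies condition (9): the rest widths are 0.4 and 0.3, and the exponent \(\beta t_{k}-(\alpha +\beta )\sum_{i=1}^{k-1}d_{i}\) loses about 0.01 per period. This can be spot-checked over a finite horizon with the small Python helper below (the function name is ours).

```python
import numpy as np

def check_condition_9(t, d, alpha, beta, c):
    """Finite-horizon spot check of (9): rest widths t_{k+1} - t_k - d_k
    must not exceed c, and the exponent beta*t_k - (alpha+beta)*sum_{i<k} d_i
    should decrease without bound (inspected via its tail)."""
    t, d = np.asarray(t, float), np.asarray(d, float)
    rest_ok = bool(np.all(t[1:] - t[:-1] - d[:-1] <= c + 1e-12))
    expo = beta * t - (alpha + beta) * np.concatenate(([0.0], np.cumsum(d)[:-1]))
    return rest_ok, expo

# Schedule above: t_{2k-1} = 1.5(k-1), t_{2k} = 1.5k - 0.8, d = 0.3, 0.5.
m = 200
t = np.ravel([(1.5 * (k - 1), 1.5 * k - 0.8) for k in range(1, m)])
d = np.ravel([(0.3, 0.5)] * (m - 1))
rest_ok, expo = check_condition_9(t, d, alpha=0.1, beta=0.1, c=0.4)
print(rest_ok, expo[-3:])   # True, and the exponent tail drifts to -infinity
```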

In particular, we consider the case where \(\varphi _{1}(t)=-0.5\sin t\), \(\varphi _{2}(t)=0.5\cos t\), \(\alpha =0.1\), \(\beta =0.1\), \(c=0.4\), \(\tau =0.11\), \(\mu _{1}=1.01\), \(\mu _{2}=0.38\), \(\mu _{3}=0.08\), \(\mu _{4}=0.5\), \(\mu _{5}=1.4\), \(\mu _{6}=0.26\), \(\mu _{7}=0.93\). By the MATLAB LMI toolbox, it follows from Theorem 2 that

$$ S={ \begin{bmatrix} 0.0230 & 0 \\ 0 & 0.0229 \end{bmatrix}}, \quad\quad Z={ \begin{bmatrix} -0.0017 & -0.2645 \end{bmatrix}}. $$

Then the gain matrix of the control law is taken as \(K=[\begin{array}{cc} -0.0667 & -11.5328 \end{array}] \).

The numerical simulation of neural network (5) with control input \(u=0\) on \([0,+\infty )\) is presented in Fig. 1, which shows that the system is unstable. Under the designed intermittent controller (50) (see Fig. 3), neural network (5) becomes a closed-loop system. In this case, Fig. 2 illustrates that neural network (5) is globally asymptotically stable. Noting that \(B=[0,1]^{T}\), the controller is imposed only on the second variable; the stability of system (5) is actually achieved through the interaction of \(x_{1}\) and \(x_{2}\).

Figure 1 State trajectories of neural network (5) without control input

Figure 2 State trajectories of neural network (5) with the delayed intermittent controller (50)

Figure 3 The delayed intermittent controller (50)
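For reproducibility, the closed-loop experiment can be re-run with the minimal explicit-Euler sketch below (Python, nominal dynamics with the uncertainties set to zero; the step size, horizon, and history-buffer handling are our own choices, not from the paper).

```python
import numpy as np

# Nominal parameters of the example (uncertainties set to zero here).
C  = np.array([[1.2, 0.0], [0.0, 0.1]])
W1 = np.array([[0.0, -0.4], [1.3, -0.3]])
W2 = np.array([[-0.1, 0.2], [-0.01, 0.9]])
B  = np.array([[0.0], [1.0]])
K  = np.array([[-0.0667, -11.5328]])
gamma = 0.01

def F(y):
    return np.array([np.tanh(0.3 * y[0]) - 0.2 * np.sin(y[0]),
                     np.tanh(0.2 * y[1]) + 0.3 * np.sin(y[1])])

def working(t):
    # Schedule (50) is 1.5-periodic: on during [0, 0.3) and [0.7, 1.2).
    r = t % 1.5
    return (0.0 <= r < 0.3) or (0.7 <= r < 1.2)

h, T_end = 1e-3, 15.0
N = int(T_end / h)
n_hist = int((0.11 + gamma) / h) + 2          # history long enough for rho
ts = np.arange(-n_hist, N + 1) * h
Y = np.zeros((len(ts), 2))
Y[ts <= 0] = np.column_stack((-0.5 * np.sin(ts[ts <= 0]),
                               0.5 * np.cos(ts[ts <= 0])))   # phi(t)

for i in range(n_hist, n_hist + N):
    t = ts[i]
    tau = 0.1 + 0.01 * np.sin(5 * t)
    y, y_tau = Y[i], Y[i - int(round(tau / h))]
    y_gam = Y[i - int(round(gamma / h))]
    dy = -C @ y + W1 @ F(y) + W2 @ F(y_tau)
    if working(t):
        dy = dy + B @ (K @ y_gam)             # delayed intermittent input
    Y[i + 1] = y + h * dy

print("||y(T_end)|| =", np.linalg.norm(Y[-1]))  # decays toward 0 (cf. Fig. 2)
```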

Remark 4

The stability conditions established in this paper depend on both the upper bound of the transmission delay of the system and the time delay in the controller. The relationship between the two delays is shown in Tables 1 and 2; it can be conveniently checked by the MATLAB LMI toolbox.

Table 1 The allowed upper bound of τ for different γ
Table 2 The allowed upper bound of γ for different τ

5 Conclusion

This paper was dedicated to the stability problem of neural networks with both time-varying delays and uncertainties. A novel delayed intermittent controller with non-fixed control width and period has been designed; it can be activated in all states or only in some states, and the value of the time delay in the controller can be adjusted without increasing the control gain. Moreover, the control width and period are non-fixed, which makes the control scheme more flexible in real applications. Some delay-dependent stability criteria have been presented by using free-weighting matrix techniques and the Lyapunov–Krasovskii functional method. It is shown that such criteria can provide better feasibility results than some existing ones. From the viewpoint of switched systems, the intermittently controlled system can be seen as a switched system composed of an unstable uncontrolled subsystem and a stable controlled subsystem. Switching phenomena are likely to cause impulsive effects in systems; hence, how to extend our results to impulsive systems is an interesting problem for future work.