1 Introduction

Neural networks have become a hot research topic in the past few years, and many problems, such as feedback control [1], stability [2–13], and dissipativity and passivity [14–17], have been addressed for various dynamic neural network systems. Since only partial information about the neuron states is available in the network output of large-scale complex networks, it is important and necessary to estimate the neuron states through available measurements. The state estimation problem was studied for neural networks with time-varying delays in [18]. A novel delay partition approach was proposed in [19] to study the state estimation problem of recurrent neural networks. The \(H_{\infty}\) state estimation for static neural networks was studied in [20, 21].

Most works on state estimation for neural networks focus on the continuous-time case [18–27]. However, discrete-time neural networks play an important role when a dynamic system is implemented in a digital way. In recent years, some significant results on the state estimation problem for discrete-time neural networks have been obtained in [28–34]. For example, the robust state estimation problem for discrete-time bidirectional associative memory (BAM) neural networks was studied in [28]. A sufficient condition was obtained in [29] such that the error estimate system for discrete-time BAM neural networks is globally exponentially stable. Wu et al. [33] studied state estimation for discrete-time neural networks with time-varying delay. A typical time delay called leakage delay has a tendency to destabilize a system. Since the leakage delay has a great impact on the dynamical behavior of neural networks, it is necessary to take its effect on the state estimation of neural networks into account. Recently, neural networks with leakage delay have received much attention [8–13, 16, 35, 36]. However, there are few research results on the state estimation of discrete-time neural networks with leakage delay in the existing literature.

Motivated by the above discussion, we consider the problem of state estimation for discrete-time recurrent neural networks with leakage delay. This paper aims to design a state estimator via the available output measurement such that the estimation error system is asymptotically stable. The major contributions of this paper can be summarized as follows: (1) A state estimator and a delay-dependent stability criterion for the error system of discrete-time neural networks with leakage delay in terms of linear matrix inequalities (LMIs) are developed. (2) Based on a novel double summation inequality, reciprocally convex method, and three zero-value equalities, a less conservative stability criterion with less computational complexity is derived in terms of LMIs.

Notation

Throughout this paper, \(\mathbb{Z}\) denotes the set of integers, \(\mathbb{R}^{n}\) is the n-dimensional Euclidean vector space, and \(\mathbb{R}^{m\times n}\) denotes the set of all \(m\times n\) real matrices. The superscript T stands for the transpose of a matrix; \(I_{n}\) and \(0_{m\times n}\) represent the \(n\times n\) identity matrix and \({m\times n}\) zero matrix, respectively; \(\|\cdot\|\) refers to the Euclidean vector norm or the induced matrix norm. The symbol ∗ denotes the symmetric term in a symmetric matrix, and \(\operatorname{Sym}\{X\}=X+X^{T}\).

2 Problem formulation and preliminaries

Consider the following discrete-time recurrent neural networks with leakage delay:

$$ \begin{aligned} &x(k+1)=Ax(k-\sigma)+W_{0}g \bigl(x(k) \bigr)+W_{1}g \bigl(x \bigl(k-\tau(k) \bigr) \bigr)+J, \\ &y(k)=Cx(k)+\phi \bigl(k,x(k) \bigr), \end{aligned} $$
(1)

where \(x(k)=[x_{1}(k),x_{2}(k),\ldots,x_{n}(k)]^{T}\in\mathbb{R}^{n}\) is the state vector, \(g(x(k))=[g_{1}(x_{1}(k)), g_{2}(x_{2}(k)), \ldots ,g_{n}(x_{n}(k))]^{T}\in\mathbb{R}^{n}\) denotes the activation function, \(A=\operatorname{diag}\{a_{1},a_{2},\ldots,a_{n}\}\) is the state feedback matrix with entries \(|a_{i}|<1\), \(W_{0}\in\mathbb{R}^{n\times n}\) and \(W_{1}\in\mathbb{R}^{n\times n }\) are the interconnection weight matrices, J denotes an external input vector, \(y(k)\in\mathbb{R}^{m}\) is the measurement output, \(\phi(k,\cdot)\) is the neuron-dependent nonlinear disturbance on the network outputs, C is a known constant matrix of appropriate dimension, \(\tau(k)\) denotes the time-varying delay satisfying \(0<\tau_{1}\leq\tau (k)\leq\tau_{2}\), where \(\tau_{1}\), \(\tau_{2}\) are known positive integers, and σ is a known positive integer representing the leakage delay.

Assumption 1

[37]

For any \(u, v\in\mathbb{R}\), \(u\neq v\), each activation function \(g_{i}(\cdot)\) in (1) satisfies

$$l_{i}^{-}\leq\frac{g_{i}(u)-g_{i}(v)}{u-v}\leq l_{i}^{+} \quad (i=1,2, \ldots,n), $$

where \(l_{i}^{-}\) and \(l_{i}^{+}\) are known constants.
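For instance, the common choice \(g_{i}=\tanh\) satisfies Assumption 1 with \(l_{i}^{-}=0\) and \(l_{i}^{+}=1\). A quick numerical sweep over difference quotients illustrates this (the grid of pairs is an arbitrary choice of ours):

```python
import numpy as np

# Difference quotients of tanh over an arbitrary grid of pairs (u, u + d).
u = np.linspace(-5.0, 5.0, 101)
d = np.linspace(0.1, 3.0, 30)
q = (np.tanh(u[:, None] + d[None, :]) - np.tanh(u[:, None])) / d[None, :]

# every quotient lies in the sector [0, 1]
assert q.min() >= 0.0 and q.max() <= 1.0
```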

Assumption 2

[34]

The function \(\phi(k,\cdot)\) is assumed to be globally Lipschitz continuous, that is,

$$\bigl\Vert \phi(k,u)-\phi(k,v) \bigr\Vert \leq \bigl\Vert L(u-v) \bigr\Vert \quad \mbox{for all } u, v \in\mathbb{R}^{n}, u \neq v, $$

where L is a known constant matrix of appropriate dimension.

Now, the full-order state estimator for system (1) is of the form

$$ \hat{x}(k+1)=A\hat{x}(k-\sigma)+W_{0}g \bigl(\hat{x}(k) \bigr)+W_{1}g \bigl(\hat{x} \bigl(k-\tau (k) \bigr) \bigr)+J+K \bigl[y(k)- \hat{y}(k) \bigr], $$
(2)

where \(\hat{x}(k)\) is the estimation of the state vector \(x(k)\), \(\hat{y}(k)\) is the estimation of the measurement output vector \(y(k)\), and \(K\in\mathbb{R}^{ n \times m}\) is the estimator gain matrix to be designed later.

Let the error state vector be \(e(k)=x(k)-\hat{x}(k)\). Then we can obtain the following error-state system from (1) and (2):

$$ e(k+1)=Ae(k-\sigma)+W_{0}f(k)+W_{1}f \bigl(k- \tau(k) \bigr)-KCe(k)-K\psi(k), $$
(3)

where \(f(k)=g(x(k))-g(\hat{x}(k))\) and \(\psi(k)=\phi(k,x(k))-\phi(k,\hat {x}(k))\).

From Assumption 1 it can be easily seen that \(l_{i}^{-}\leq\frac{f_{i}(k)}{e_{i}(k)}\leq l_{i}^{+}\) for all \({e_{i}(k)} \neq0\), \(i=1,2,\ldots,n\).

Before proceeding further, we introduce the following three lemmas.

Lemma 1

[38]

For any vector ξ in \(\mathbb{R}^{m}\), given a positive definite matrix Q in \(\mathbb{R}^{n\times n}\), any matrices \(W_{1}\), \(W_{2}\) in \(\mathbb {R}^{n\times m}\), and a scalar α in the interval \((0,1)\), if there exists a matrix X in \(\mathbb{R}^{n\times n}\) such that \(\bigl [{\scriptsize\begin{matrix}{} Q & X \cr \ast& Q \end{matrix}} \bigr ]>0 \), then the following inequality holds:

$$ \frac{1}{\alpha}\xi^{T}W_{1}^{T}QW_{1} \xi+\frac{1}{1-\alpha}\xi^{T}W_{2}^{T}QW_{2} \xi \geq \left [ \textstyle\begin{array}{@{}c@{}} W_{1}\xi \\ W_{2}\xi \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} Q & X \\ \ast& Q \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} W_{1}\xi \\ W_{2}\xi \end{array}\displaystyle \right ]. $$
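Lemma 1 (the reciprocally convex bound) can be spot-checked numerically. In the sketch below, Q, X, \(W_{1}\), \(W_{2}\), and ξ are randomly generated test data, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 4

# random Q > 0 and a small coupling X so that [[Q, X], [X^T, Q]] > 0
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)
X = 0.1 * rng.standard_normal((n, n))
block = np.block([[Q, X], [X.T, Q]])
assert np.linalg.eigvalsh(block).min() > 0   # premise of the lemma

W1 = rng.standard_normal((n, m))
W2 = rng.standard_normal((n, m))
xi = rng.standard_normal(m)
for alpha in (0.1, 0.5, 0.9):
    lhs = (W1 @ xi) @ Q @ (W1 @ xi) / alpha \
        + (W2 @ xi) @ Q @ (W2 @ xi) / (1.0 - alpha)
    v = np.concatenate([W1 @ xi, W2 @ xi])
    rhs = v @ block @ v
    assert lhs >= rhs - 1e-9                 # the asserted inequality
```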

Lemma 2

[39]

For a given matrix \(Z>0\) and any sequence of discrete-time variables \(x: [-h,0]\cap\mathbb {Z}\rightarrow\mathbb{R}^{n}\), the following inequality holds:

$$\sum_{i=-h+1}^{0}\sum _{k=i}^{0}y^{T}(k)Zy(k)\geq \frac {2(h+1)}{h}\Theta_{0}^{T}Z\Theta_{0}+ \frac{4(h+1)(h+2)}{h(h-1)}\Theta _{1}^{T}Z\Theta_{1}, $$

where \(y(k)=x(k)-x(k-1)\), \(\Theta_{0}=x(0)-\frac{1}{h+1}\sum_{i=-h}^{0}x(i)\), \(\Theta_{1}=x(0)+\frac{2}{h+1}\sum_{i=-h}^{0}x(i)-\frac{6}{(h+1)(h+2)}\sum_{i=-h}^{0}\sum_{k=i}^{0}x(k)\).
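Lemma 2 (the double summation inequality) can likewise be checked on random data. The dimensions, the sequence, and \(Z\) below are arbitrary test data of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n, h = 2, 4
M = rng.standard_normal((n, n))
Z = M @ M.T + np.eye(n)          # random Z > 0

x = {i: rng.standard_normal(n) for i in range(-h, 1)}
y = {k: x[k] - x[k - 1] for k in range(-h + 1, 1)}

# left-hand side: double sum of y^T Z y
lhs = sum(y[k] @ Z @ y[k] for i in range(-h + 1, 1) for k in range(i, 1))

s1 = sum(x[i] for i in range(-h, 1))                       # single sum of x
s2 = sum(x[k] for i in range(-h, 1) for k in range(i, 1))  # double sum of x
th0 = x[0] - s1 / (h + 1)
th1 = x[0] + 2.0 * s1 / (h + 1) - 6.0 * s2 / ((h + 1) * (h + 2))
rhs = (2.0 * (h + 1) / h) * (th0 @ Z @ th0) \
    + (4.0 * (h + 1) * (h + 2) / (h * (h - 1))) * (th1 @ Z @ th1)
assert lhs >= rhs - 1e-9
```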

Lemma 3

[40]

For a given matrix \(Z>0\) and three nonnegative integers a, b, k satisfying \(a\leq b\leq k\), define the function \(\omega(k,a,b)\) as

$$ \omega(k,a,b)= \textstyle\begin{cases} \frac{1}{b-a}[2\sum_{s=k-b}^{k-a-1}x(s)+x(k-a)-x(k-b)],& a< b, \\ 2x(k-a), & a=b. \end{cases} $$

Then, the following summation inequality holds:

$$ -(b-a)\sum_{s=k-b}^{k-a-1}\Delta x^{T}(s)Z\Delta x(s)\leq-\left [ \textstyle\begin{array}{@{}c@{}}\nu_{0}\\ \nu_{1} \end{array}\displaystyle \right ]^{T}\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} Z & 0\\ 0 & 3Z \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} \nu_{0} \\ \nu_{1} \end{array}\displaystyle \right ], $$

where \(\Delta x(s)=x(s+1)-x(s)\), \(\nu_{0}=x(k-a)-x(k-b)\), and \(\nu_{1}=x(k-a)+x(k-b)-\omega(k,a,b)\).
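This Wirtinger-based summation inequality can also be verified numerically; the sequence, indices, and \(Z\) below are arbitrary illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2
M = rng.standard_normal((n, n))
Z = M @ M.T + np.eye(n)          # random Z > 0
k, a, b = 10, 1, 5               # arbitrary indices with a < b <= k

x = {s: rng.standard_normal(n) for s in range(k - b, k - a + 1)}
dx = {s: x[s + 1] - x[s] for s in range(k - b, k - a)}

omega = (2.0 * sum(x[s] for s in range(k - b, k - a))
         + x[k - a] - x[k - b]) / (b - a)
nu0 = x[k - a] - x[k - b]
nu1 = x[k - a] + x[k - b] - omega
lhs = -(b - a) * sum(dx[s] @ Z @ dx[s] for s in range(k - b, k - a))
rhs = -(nu0 @ Z @ nu0 + 3.0 * (nu1 @ Z @ nu1))
assert lhs <= rhs + 1e-9         # the asserted summation inequality
```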

3 Main results

In this section, we consider the asymptotic stability of the error-state system (3). For simplicity, \(e_{i}\in\mathbb{R}^{17n\times n}\) (\(i=1,2,\ldots,17\)) are defined as block entry matrices (e.g., \(e_{3}=[0_{n\times 2n},I_{n},0_{n\times14n}]^{T}\)). The other notations are defined as

$$\begin{aligned}& \Delta e(k) = e(k+1)-e(k), \qquad \tau_{d}= \tau_{2}-\tau_{1}, \\& L_{m} = \operatorname{diag} \bigl\{ l_{1}^{-},l_{2}^{-}, \ldots,l_{n}^{-} \bigr\} , \qquad L_{p}=\operatorname{diag} \bigl\{ l_{1}^{+},l_{2}^{+},\ldots,l_{n}^{+} \bigr\} , \\& \mathcal{Z}_{i} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}}0 & Z_{i} \\ \ast& Z_{i} \end{array}\displaystyle \right ] \quad (i=1,2,3), \\& \xi(k) = \Biggl[e^{T}(k), e^{T}(k-\tau_{1}), e^{T}(k-\tau_{2}), e^{T} \bigl(k-\tau(k) \bigr), e^{T}(k-\sigma), \sum_{s=k-\tau_{1}}^{k-1}e^{T}(s), \\& \hphantom{\xi(k) ={}}{}\sum_{s=k-\tau(k)}^{k-1-\tau_{1}}e^{T}(s), \sum_{s=k-\tau _{2}}^{k-1-\tau(k)}e^{T}(s), \sum _{s=k-\sigma}^{k-1}e^{T}(s), \Delta e^{T}(k), \Delta e^{T}(k-\tau_{1}), \\& \hphantom{\xi(k) ={}}{}\Delta e^{T}(k-\tau_{2}), f^{T}(k), f^{T}(k-\tau_{1}), f^{T}(k- \tau_{2}), f^{T} \bigl(k-\tau(k) \bigr), \psi^{T}(k) \Biggr]^{T}, \\& \eta_{1}(k) = \Biggl[e^{T}(k), e^{T}(k- \tau_{1}), e^{T}(k-\tau_{2}),\sum _{s=k-\tau_{1}}^{k-1}e^{T}(s), \sum _{s=k-\tau_{2}}^{k-1-\tau _{1}}e^{T}(s) \Biggr]^{T}, \\& \eta_{2}(k) = \bigl[e^{T}(k), \Delta e^{T}(k) \bigr]^{T}, \\& \Omega_{1} = \operatorname{diag} \bigl\{ \tau_{1}Z_{1}, -\tau_{1}Z_{1}+\tau_{d}Z_{2}, - \tau_{d}(Z_{2}-Z_{3}), -\tau_{d}Z_{3} \bigr\} , \\& \Omega_{2} = \mathcal{N}_{1}+\mathcal{Z}_{1}, \Omega_{3}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \mathcal{N}_{2}+\mathcal{Z}_{2}& \mathcal{M} \\ \ast& \mathcal{N}_{2}+\mathcal{Z}_{3} \end{array}\displaystyle \right ], \\& \pi_{1} = [e_{1}+e_{10}, e_{2}+e_{11},e_{3}+e_{12}, e_{1}-e_{2}+e_{6}, e_{2}-e_{3}+e_{7}+e_{8}], \\& \pi_{2} = [e_{1}, e_{2}, e_{3}, e_{6}, e_{7}+e_{8}], \\& \pi_{3} = [e_{1}, e_{10}], \qquad \pi_{4}=[e_{2},e_{11}], \qquad \pi_{5}=[e_{3}, e_{12}], \qquad \pi_{6}=[e_{1},e_{2},e_{4},e_{3}], \\& \pi_{7} = [e_{6},e_{1}-e_{2}], \qquad \pi_{8}=[e_{7}, e_{2}-e_{4},e_{8},e_{4}-e_{3}], \\& \Xi_{1} = \pi_{1}P\pi_{1}^{T}- \pi_{2}P\pi_{2}^{T}, \\& \Xi_{2} = \bigl(e_{1}+e_{10}-(e_{1}+e_{9}-e_{5})A 
\bigr)S_{1} \bigl(e_{1}+e_{10}-(e_{1}+e_{9}-e_{5})A \bigr)^{T} \\& \hphantom{\Xi_{2} = {}}{}-(e_{1}-e_{9}A)S_{1}(e_{1}-e_{9}A)^{T}+ \sigma^{2}[e_{1},e_{10}]S_{2}[e_{1},e_{10}]^{T} \\& \hphantom{\Xi_{2} = {}}{}-[e_{9},e_{1}-e_{5}]S_{2}[e_{9},e_{1}-e_{5}]^{T}, \\& \Xi_{3} = \pi_{3}Q_{1}\pi_{3}^{T}- \pi_{4}(Q_{1}-Q_{2})\pi_{4}^{T}- \pi_{5}Q_{2}\pi_{5}^{T}, \\& \Xi_{4} = \tau_{1}^{2}\pi_{3} \mathcal{N}_{1}\pi_{3}^{T}+\tau_{d}^{2} \pi_{4}\mathcal {N}_{2}\pi_{4}^{T}+ \pi_{6}\Omega_{1}\pi_{6}^{T}- \pi_{7}\Omega_{2}\pi_{7}^{T}- \pi_{8}\Omega_{3}\pi _{8}^{T}, \\& \Theta = -\operatorname{Sym} \bigl\{ (e_{13}-e_{1}L_{m})H_{1}(e_{13}-e_{1}L_{p})^{T}+(e_{14}-e_{2}L_{m})H_{2}(e_{14}-e_{2}L_{p})^{T} \\& \hphantom{\Theta ={}}{}+(e_{15}-e_{3}L_{m})H_{3}(e_{15}-e_{3}L_{p})^{T}+(e_{16}-e_{4}L_{m})H_{4}(e_{16}-e_{4}L_{p})^{T} \bigr\} , \\& \Upsilon_{1} = \epsilon \bigl(e_{1}L^{T}Le_{1}^{T}-e_{17}e_{17}^{T} \bigr),\qquad \Upsilon _{2}=e_{1}+e_{10}, \\& \Upsilon _{3} = X \bigl(Ae_{5}^{T}-e_{1}^{T}-e_{10}^{T}+W_{0}e_{13}^{T}+W_{1}e_{16}^{T} \bigr)-Y \bigl(Ce_{1}^{T}+e_{17}^{T} \bigr), \\& \Xi = \sum_{i=1}^{4}\Xi_{i}+ \Theta+\Upsilon_{1}+\operatorname{Sym}\{\Upsilon _{2} \Upsilon_{3}\}. \end{aligned}$$

Theorem 1

For given integers \(0<\tau_{1} <\tau_{2}\), \(0<\sigma\), the error-state system (3) is asymptotically stable if there exist symmetric positive definite matrices \(P\in\mathbb {R}^{5n\times5n}\), \(S_{1}\in\mathbb{R}^{n\times n}\), \(S_{2}\in\mathbb{R}^{2n\times2n}\), \(Q_{1}\in\mathbb{R}^{2n\times2n}\), \(Q_{2}\in\mathbb{R}^{2n\times2n}\), \(\mathcal{N}_{1}\in\mathbb{R}^{2n\times2n}\), \(\mathcal{N}_{2}\in \mathbb{R}^{2n\times2n}\), positive diagonal matrices \(H_{i}\in\mathbb{R}^{n\times n}\) (\(i=1,2,3,4\)), a scalar \(\epsilon>0\), symmetric matrices \(Z_{i}\in\mathbb{R}^{n\times n}\) (\(i=1,2,3\)), and matrices \(\mathcal{M}\in\mathbb{R}^{2n\times2n}\), \(X\in\mathbb{R}^{n\times n}\), \(Y\in\mathbb{R}^{n\times n}\) satisfying the following LMIs:

$$\begin{aligned}& \Xi< 0, \end{aligned}$$
(4)
$$\begin{aligned}& \Omega_{i}\geq0 \quad (i=2, 3). \end{aligned}$$
(5)

Furthermore, the estimator gain matrix is given by \(K=X^{-1}Y\).

Proof

Consider the Lyapunov-Krasovskii functional for system (3) as follows:

$$ V(k)=V_{1}(k)+V_{2}(k)+V_{3}(k)+V_{4}(k), $$
(6)

where

$$\begin{aligned}& V_{1}(k) =\eta_{1}^{T}(k)P\eta_{1}(k), \\& V_{2}(k) = \Biggl[e(k)-A\sum_{s=k-\sigma}^{k-1}e(s) \Biggr]^{T}S_{1} \Biggl[e(k)-A\sum _{s=k-\sigma}^{k-1}e(s) \Biggr] \\& \hphantom{V_{2}(k) ={}}{}+\sigma\sum_{s=-\sigma}^{-1}\sum _{u=k+s}^{k-1} \eta _{2}^{T}(u)S_{2} \eta_{2}(u), \\& V_{3}(k) =\sum_{s=k-\tau_{1}}^{k-1} \eta_{2}^{T}(s)Q_{1}\eta_{2}(s)+\sum _{s=k-\tau_{2}}^{k-1-\tau_{1}}\eta_{2}^{T}(s)Q_{2} \eta_{2}(s), \\& V_{4}(k) =\tau_{1}\sum_{s=-\tau_{1}}^{-1} \sum_{u=k+s}^{k-1}\eta _{2}^{T}(u) \mathcal {N}_{1}\eta_{2}(u)+ \tau_{d}\sum _{s=-\tau_{2}}^{-1-\tau_{1}}\sum_{u=k+s}^{k-1-\tau _{1}} \eta_{2}^{T}(u)\mathcal {N}_{2} \eta_{2}(u). \end{aligned}$$

Define the forward difference of \(V(k)\) as \(\Delta{V(k)}=V(k+1)-V(k)\). Calculating \(\Delta V_{i}(k)\) (\(i=1, 2, 3, 4\)), we have

$$\begin{aligned}& \Delta V_{1}(k)=\eta_{1}^{T}(k+1)P \eta_{1}(k+1)-\eta_{1}^{T}(k)P \eta_{1}(k)=\xi ^{T}(k)\Xi_{1}\xi(k), \\& \Delta V_{2}(k) = \Biggl[e(k+1)-A\sum_{s=k-\sigma +1}^{k}e(s) \Biggr]^{T}S_{1} \Biggl[e(k+1) -A\sum _{s=k-\sigma+1}^{k}e(s) \Biggr] \\& \hphantom{\Delta V_{2}(k) ={}}{}- \Biggl[e(k)-A\sum_{s=k-\sigma}^{k-1}e(s) \Biggr]^{T}S_{1} \Biggl[e(k)-A\sum _{s=k-\sigma}^{k-1}e(s) \Biggr] \\& \hphantom{\Delta V_{2}(k) ={}}{}+\sigma^{2}\eta_{2}^{T}(k)S_{2} \eta_{2}(k)-\sigma \sum_{s=k-\sigma}^{k-1} \eta_{2}^{T}(s)S_{2}\eta_{2}(s). \end{aligned}$$
(7)

Using Jensen’s inequality in [37], we get

$$ -\sigma \sum_{s=k-\sigma}^{k-1} \eta_{2}^{T}(s)S_{2}\eta_{2}(s)\leq- \Biggl(\sum_{s=k-\sigma}^{k-1}\eta_{2}^{T}(s) \Biggr)S_{2} \Biggl(\sum_{s=k-\sigma}^{k-1} \eta_{2}(s) \Biggr). $$

So

$$ \Delta V_{2}(k) \leq \xi^{T}(k) \Xi_{2}\xi(k). $$
(8)
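The Jensen summation bound applied above is easily verified on random data; the dimensions, \(S_{2}\), and the sequence in this sketch are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma = 2, 4
M = rng.standard_normal((2 * n, 2 * n))
S2 = M @ M.T + np.eye(2 * n)     # random S2 > 0

eta = [rng.standard_normal(2 * n) for _ in range(sigma)]  # eta_2(k-sigma..k-1)
lhs = -sigma * sum(v @ S2 @ v for v in eta)
s = sum(eta)
rhs = -(s @ S2 @ s)
assert lhs <= rhs + 1e-9         # Jensen's summation inequality
```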

Obviously,

$$\begin{aligned} \Delta V_{3}(k) =&\eta_{2}^{T}(k)Q_{1} \eta_{2}(k)-\eta_{2}^{T}(k-\tau_{1})Q_{1} \eta _{2}(k-\tau_{1}) \\ &{}+\eta_{2}^{T}(k-\tau_{1})Q_{2} \eta_{2}(k-\tau_{1})-\eta_{2}^{T}(k- \tau_{2})Q_{2}\eta _{2}(k-\tau_{2}) \\ =&\xi^{T}(k)\Xi_{3}\xi(k). \end{aligned}$$
(9)

Inspired by the work of [41], for any symmetric matrices \(Z_{i}\) of appropriate dimension (\(i=1,2,3\)), we introduce the following zero equalities:

$$\begin{aligned}& 0 = \tau_{1}e^{T}(k)Z_{1}e(k)- \tau_{1}e^{T}(k-\tau_{1})Z_{1}e(k- \tau_{1}) \\& \hphantom{0 ={}}{}-\tau_{1}\sum_{s=k-\tau_{1}}^{k-1} \bigl(\Delta e^{T}(s)Z_{1}\Delta e(s)+2e^{T}(s)Z_{1} \Delta e(s) \bigr), \end{aligned}$$
(10)
$$\begin{aligned}& 0 = \tau_{d}e^{T}(k-\tau_{1})Z_{2}e(k- \tau_{1})-\tau_{d}e^{T} \bigl(k-\tau(k) \bigr)Z_{2}e \bigl(k-\tau (k) \bigr) \\& \hphantom{0 ={}}{}-\tau_{d}\sum_{s=k-\tau(k)}^{k-1-\tau_{1}} \bigl(\Delta e^{T}(s)Z_{2}\Delta e(s)+2e^{T}(s)Z_{2} \Delta e(s) \bigr), \end{aligned}$$
(11)
$$\begin{aligned}& 0 = \tau_{d}e^{T} \bigl(k-\tau(k) \bigr)Z_{3}e \bigl(k-\tau(k) \bigr)-\tau_{d}e^{T}(k-\tau_{2})Z_{3}e(k- \tau _{2}) \\& \hphantom{0 ={}}{}-\tau_{d}\sum_{s=k-\tau_{2}}^{k-1-\tau(k)} \bigl(\Delta e^{T}(s)Z_{3}\Delta e(s)+2e^{T}(s)Z_{3} \Delta e(s) \bigr). \end{aligned}$$
(12)

Using these zero equalities and the Jensen inequality, we have

$$\begin{aligned} \Delta V_{4}(k) =&\tau_{1}^{2} \eta_{2}^{T}(k)\mathcal{N}_{1}\eta_{2}(k)- \tau_{1}\sum_{s=k-\tau_{1}}^{k-1} \eta_{2}^{T}(s) (\mathcal{N}_{1}+ \mathcal{Z}_{1})\eta_{2}(s) \\ &{}+\tau_{d}^{2}\eta_{2}^{T}(k- \tau_{1})\mathcal {N}_{2}\eta_{2}(k- \tau_{1})-\tau_{d}\sum_{s=k-\tau(k)}^{k-1-\tau_{1}} \eta _{2}^{T}(s) (\mathcal {N}_{2}+ \mathcal{Z}_{2})\eta_{2}(s) \\ &{}-\tau_{d}\sum_{s=k-\tau_{2}}^{k-1-\tau(k)} \eta_{2}^{T}(s) (\mathcal {N}_{2}+ \mathcal{Z}_{3})\eta_{2}(s)+\xi^{T}(k) \pi_{6}\Omega_{1}\pi_{6}^{T}\xi(k) \\ \leq&\tau_{1}^{2}\eta_{2}^{T}(k) \mathcal{N}_{1}\eta_{2}(k)+\tau_{d}^{2} \eta_{2}^{T}(k-\tau _{1})\mathcal{N}_{2} \eta_{2}(k-\tau_{1})+\xi^{T}(k)\pi_{6} \Omega_{1}\pi_{6}^{T}\xi(k) \\ &{}-\sum_{s=k-\tau_{1}}^{k-1}\eta_{2}^{T}(s) \Omega_{2}\sum_{s=k-\tau_{1}}^{k-1} \eta_{2}(s)-\frac{\tau_{d}}{\tau(k)-\tau_{1}} \sum_{s=k-\tau(k)}^{k-1-\tau_{1}} \eta_{2}^{T}(s) (\mathcal {N}_{2}+ \mathcal{Z}_{2}) \\ &{}\times\sum_{s=k-\tau(k)}^{k-1-\tau_{1}} \eta_{2}(s)-\frac{\tau _{d}}{\tau_{2}-\tau(k)} \sum_{s=k-\tau_{2}}^{k-1-\tau(k)} \eta_{2}^{T}(s) (\mathcal {N}_{2}+\mathcal {Z}_{3})\sum_{s=k-\tau_{2}}^{k-1-\tau(k)} \eta_{2}(s). \end{aligned}$$

By Lemma 1, since \(\Omega_{3}\geq0\), we get

$$\begin{aligned}& -\frac{\tau_{d}}{\tau(k)-\tau_{1}} \sum_{s=k-\tau(k)}^{k-1-\tau_{1}} \eta_{2}^{T}(s) (\mathcal {N}_{2}+ \mathcal{Z}_{2}) \sum_{s=k-\tau(k)}^{k-1-\tau_{1}} \eta_{2}(s) \\& \qquad {}-\frac{\tau_{d}}{\tau_{2}-\tau(k)} \sum_{s=k-\tau_{2}}^{k-1-\tau(k)} \eta_{2}^{T}(s) (\mathcal {N}_{2}+ \mathcal{Z}_{3})\sum_{s=k-\tau_{2}}^{k-1-\tau(k)} \eta_{2}(s) \\& \quad \leq-\left [ \textstyle\begin{array}{@{}c@{}}\sum_{s=k-\tau(k)}^{k-1-\tau_{1}}\eta_{2}(s) \\ \sum_{s=k-\tau_{2}}^{k-1-\tau(k)}\eta_{2}(s) \end{array}\displaystyle \right ]^{T} \Omega_{3}\left [ \textstyle\begin{array}{@{}c@{}} \sum_{s=k-\tau(k)}^{k-1-\tau_{1}}\eta_{2}(s) \\ \sum_{s=k-\tau_{2}}^{k-1-\tau(k)}\eta_{2}(s) \end{array}\displaystyle \right ]. \end{aligned}$$

Thus the difference \(\Delta V_{4}(k)\) can be bounded as

$$ \Delta V_{4}(k)\leq\xi^{T}(k) \Xi_{4}\xi(k). $$
(13)

By Assumption 1, for any positive diagonal matrices \(H_{i}=\operatorname{diag}\{h_{i1},\ldots,h_{in}\}\) (\(i=1,2,3,4\)), the following inequality holds:

$$\begin{aligned} 0 \leq&-2\sum_{i=1}^{n}h_{1i} \bigl(f_{i}(k)-l_{i}^{-}e_{i}(k) \bigr) \bigl(f_{i}(k)-l_{i}^{+}e_{i}(k) \bigr) \\ &{}-2\sum_{i=1}^{n}h_{2i} \bigl(f_{i}(k-\tau_{1})-l_{i}^{-}e_{i}(k- \tau _{1}) \bigr) \bigl(f_{i}(k-\tau_{1})-l_{i}^{+}e_{i}(k- \tau_{1}) \bigr) \\ &{}-2\sum_{i=1}^{n}h_{3i} \bigl(f_{i}(k-\tau_{2})-l_{i}^{-}e_{i}(k- \tau _{2}) \bigr) \bigl(f_{i}(k-\tau_{2})-l_{i}^{+}e_{i}(k- \tau_{2}) \bigr) \\ &{}-2\sum_{i=1}^{n}h_{4i} \bigl(f_{i} \bigl(k-\tau(k) \bigr)-l_{i}^{-}e_{i} \bigl(k-\tau (k) \bigr) \bigr) \bigl(f_{i} \bigl(k-\tau(k) \bigr)-l_{i}^{+}e_{i} \bigl(k-\tau(k) \bigr) \bigr) \\ =&\xi^{T}(k)\Theta\xi(k). \end{aligned}$$
(14)

From Assumption 2, for any positive scalar ϵ, we can deduce that

$$ 0 \leq \epsilon \bigl(e^{T}(k)L^{T}Le(k)- \psi(k)^{T}\psi(k) \bigr) = \xi^{T}(k)\Upsilon_{1} \xi(k). $$
(15)

On the other hand, to design the gain matrix K, for any matrix X of appropriate dimension, we use the following zero equality to avoid a nonlinear matrix inequality:

$$\begin{aligned} \begin{aligned}[b] 0={}&2 \bigl(e^{T}(k)+\Delta e^{T}(k) \bigr)X \bigl[Ae(k-\sigma)-KCe(k)+W_{0}f(k)+W_{1}f \bigl(k-\tau (k) \bigr) \\ &{}-K\psi(k)-e(k+1) \bigr] \\ ={}&\xi^{T}(k)\operatorname{Sym}\{\Upsilon_{2} \Upsilon_{3}\}\xi(k). \end{aligned} \end{aligned}$$
(16)

Therefore, from (7)-(16), the following inequality holds:

$$ \Delta V(k)\leq\xi^{T}(k)\Xi\xi(k). $$
(17)

Obviously, if \(\Xi<0\) and \(\xi(k)\neq0\), then \(\Delta V(k)<0\), which indicates that the error-state system (3) is asymptotically stable. This completes the proof of Theorem 1. □

Remark 1

Differently from the methods in [28–33], we introduce three zero equalities (10)-(12) to reduce the conservatism of the stability criterion. In [19], the authors used the inequality \(-PX^{-1}P\leq-2P+X\) (\(X\geq0\)) to deal with the problem of a nonlinear matrix inequality. In this paper, by employing the zero equality (16), the nonlinear matrix inequality can be avoided. At the same time, this method provides much flexibility in solving the linear matrix inequalities.

Remark 2

In order to estimate \(-\sum_{j=t-h_{M}}^{t-h_{m}-1}\eta_{1}(j)^{T}R_{1}\eta_{1}(j)\), the authors in [28] divided the sum into two parts, \(-\sum_{j=t-h_{M}}^{t-h(t)-1}\eta_{1}^{T}(j)R_{1}\eta_{1}(j)\) and \(-\sum_{j=t-h(t)}^{t-h_{m}-1}\eta_{1}^{T}(j)R_{1}\eta_{1}(j)\), and then estimated each of them separately. In [30], \(\sum_{i=k+1-\tau(k)}^{k-1}e^{T}(i)Qe(i)\) was approximated by \(\sum_{i=k+1-\tau_{m}}^{k-1}e^{T}(i)Qe(i)\). So the methods in [28, 30] may introduce some conservatism. In this paper, the reciprocally convex approach and some inequality techniques are employed to deal with such terms, and tighter upper bounds for them are obtained.

Recently, Nam et al. [40] obtained a discrete Wirtinger-based inequality. Based on this inequality, we now reconsider the asymptotic stability of the error-state system (3). For simplicity, \(\tilde{e}_{i}\in\mathbb{R}^{18n\times n}\) (\(i=1,2,\ldots,18\)) are defined as block entry matrices (e.g., \(\tilde{e}_{3}=[0_{n\times2n},I_{n},0_{n\times 15n}]^{T}\)). The other notations are defined as

$$\begin{aligned}& \tilde{\xi}(k) = \Biggl[e^{T}(k), e^{T}(k- \tau_{1}), e^{T}(k-\tau_{2}), e^{T} \bigl(k-\tau(k) \bigr), e^{T}(k-\sigma), \sum _{s=k-\tau_{1}}^{k-1}e^{T}(s), \\& \hphantom{\tilde{\xi}(k) ={}}{}\sum_{s=k-\tau(k)}^{k-1-\tau_{1}}e^{T}(s), \sum_{s=k-\tau _{2}}^{k-1-\tau(k)}e^{T}(s), \sum _{s=k-\sigma}^{k-1}e^{T}(s), \Delta e^{T}(k), \Delta e^{T}(k-\tau_{1}), \\& \hphantom{\tilde{\xi}(k) ={}}{}\Delta e^{T}(k-\tau_{2}), f^{T}(k), f^{T}(k-\tau_{1}), f^{T}(k- \tau_{2}), f^{T} \bigl(k-\tau(k) \bigr), \\& \hphantom{\tilde{\xi}(k) ={}}{}\sum_{s=-\tau_{1}+1}^{0}\sum _{j=k+s}^{k}e^{T}(j), \psi ^{T}(k) \Biggr]^{T}, \\& \tilde{\pi}_{1} = [\tilde{e}_{1}+\tilde{e}_{10}, \tilde{e}_{2}+\tilde {e}_{11},\tilde{e}_{3}+ \tilde{e}_{12}, \tilde{e}_{1}-\tilde{e}_{2}+\tilde {e}_{6}, \tilde{e}_{2}-\tilde{e}_{3}+ \tilde{e}_{7}+\tilde{e}_{8}], \\& \tilde{\pi}_{2} = [\tilde{e}_{1},\tilde{ e}_{2}, \tilde{e}_{3}, \tilde {e}_{6},\tilde{ e}_{7}+ \tilde{e}_{8}], \qquad \tilde{\pi}_{3}=\tilde{e}_{1}- \tilde {e}_{5}, \\& \tilde{\pi}_{4} = \tilde{e}_{1}+\tilde{e}_{5}- \frac{1}{\sigma}(\tilde {e}_{1}-\tilde{e}_{5}+2 \tilde{e}_{9}), \qquad \tilde{\pi}_{5}=[ \tilde{e}_{1}, \tilde {e}_{10}], \qquad \tilde{ \pi}_{6}=[\tilde{e}_{2},\tilde{e}_{11}], \\& \tilde{\pi}_{7} = [\tilde{e}_{3}, \tilde{e}_{12}], \qquad \tilde{\pi}_{8}=[\tilde {e}_{1}, \tilde{e}_{2}, \tilde{e}_{4},\tilde{e}_{3}], \qquad \tilde{ \pi}_{9}=[\tilde {e}_{6}, \tilde{e}_{1}- \tilde{e}_{2}], \\& \tilde{\pi}_{10} = [\tilde{e}_{7}, \tilde{e}_{2}- \tilde{e}_{4},\tilde {e}_{8},\tilde{e}_{4}- \tilde{e}_{3}], \qquad \tilde{\pi}_{11}=\tilde{e}_{1}- \frac {1}{\tau_{1}+1}(\tilde{e}_{6}+\tilde{e}_{1}), \\& \tilde{\pi}_{12} = \tilde{e}_{1}+ \biggl( \frac{2}{\tau_{1}+1}- \frac{6}{(\tau _{1}+1)(\tau_{1}+2)} \biggr) (\tilde{e}_{1}+ \tilde{e}_{6})- \frac{6}{(\tau_{1}+1)(\tau _{1}+2)}\tilde{e}_{17}, \\& \tilde{\Xi}_{1} = \tilde{\pi}_{1}P\tilde{ \pi}_{1}^{T}-\tilde{\pi}_{2}P\tilde{\pi }_{2}^{T}, \\& \tilde{\Xi}_{2} = 
\bigl(\tilde{e}_{1}+\tilde{e}_{10}-( \tilde{e}_{1}+\tilde {e}_{9}-\tilde{e}_{5})A \bigr)S_{1} \bigl(\tilde{e}_{1} +\tilde{e}_{10}-( \tilde{e}_{1}+\tilde{e}_{9}-\tilde{e}_{5})A \bigr)^{T} \\& \hphantom{\tilde{\Xi}_{2} ={}}{}-(\tilde{e}_{1}-\tilde{e}_{9}A)S_{1}( \tilde{e}_{1}-\tilde{e}_{9}A)^{T}+\sigma ^{2}\tilde{e}_{10}S_{2}\tilde{e}_{10}^{T}- \tilde{\pi}_{3}S_{2} \tilde{\pi}_{3}^{T}- \tilde{\pi}_{4}3S_{2}\tilde{\pi}_{4}^{T}, \\& \tilde{\Xi}_{3} = \tilde{\pi}_{5}Q_{1}\tilde{ \pi}_{5}^{T}-\tilde{\pi }_{6}(Q_{1}-Q_{2}) \tilde{\pi}_{6}^{T}-\tilde{\pi}_{7}Q_{2} \tilde{\pi}_{7}^{T}, \\& \tilde{\Xi}_{4} = \tau_{1}^{2}\tilde{ \pi}_{5}\mathcal{N}_{1}\tilde{\pi}_{5}^{T}+ \tau _{d}^{2}\tilde{\pi}_{6}\mathcal{N}_{2} \tilde{\pi}_{6}^{T}+\tilde{\pi}_{8}\Omega _{1}\tilde{\pi}_{8}^{T}-\tilde{ \pi}_{9}\Omega_{2}\tilde{\pi}_{9}^{T}- \tilde{\pi }_{10}\Omega_{3}\tilde{\pi}_{10}^{T}, \\& \tilde{\Xi}_{5} = \frac{\tau_{1}(\tau_{1}+1)}{2}\tilde{e}_{10}Q_{3} \tilde {e}_{10}^{T}-\frac{2(\tau_{1}+1)}{\tau_{1}}\tilde{ \pi}_{11}Q_{3}\tilde{\pi }_{11}^{T}- \frac{4(\tau_{1}+1)(\tau_{1}+2)}{\tau_{1}(\tau_{1}-1)} \tilde{\pi}_{12}Q_{3}\tilde{ \pi}_{12}^{T}, \\& \tilde{\Theta} = -\operatorname{Sym} \bigl\{ (\tilde{e}_{13}- \tilde{e}_{1}L_{m})H_{1}(\tilde {e}_{13}-\tilde{e}_{1}L_{p})^{T}+( \tilde{e}_{14}-\tilde{e}_{2}L_{m})H_{2}( \tilde {e}_{14}-\tilde{e}_{2}L_{p})^{T} \\& \hphantom{\tilde{\Theta} = {}}{}+(\tilde{e}_{15}-\tilde{e}_{3}L_{m})H_{3}( \tilde{e}_{15}-\tilde {e}_{3}L_{p})^{T}+( \tilde{e}_{16}-\tilde{e}_{4}L_{m})H_{4}( \tilde{e}_{16}-\tilde {e}_{4}L_{p})^{T} \bigr\} , \\& \tilde{\Upsilon}_{1} = \epsilon \bigl(\tilde{e}_{1}L^{T}L \tilde{e}_{1}^{T}-\tilde {e}_{18} \tilde{e}_{18}^{T} \bigr),\qquad \tilde{\Upsilon}_{2}= \tilde{e}_{1}+\tilde {e}_{10}, \\& \tilde{\Upsilon}_{3} = X \bigl(A\tilde{e}_{5}^{T}- \tilde{e}_{1}^{T}-\tilde {e}_{10}^{T}+W_{0} \tilde{e}_{13}^{T}+W_{1}\tilde{e}_{16}^{T} \bigr)-Y \bigl(C\tilde {e}_{1}^{T}+\tilde{e}_{18}^{T} \bigr), \\& \tilde{\Xi} = \sum_{i=1}^{5}\tilde{ 
\Xi}_{i}+\tilde{\Theta}+\tilde {\Upsilon}_{1}+ \operatorname{Sym}\{\tilde{\Upsilon}_{2}\tilde{\Upsilon}_{3} \}. \end{aligned}$$

Theorem 2

For given integers \(0<\tau_{1} <\tau_{2}\), \(0<\sigma\), the error-state system (3) is asymptotically stable if there exist symmetric positive definite matrices \(P\in\mathbb {R}^{5n\times5n}\), \(S_{1}\in\mathbb{R}^{n\times n}\), \(S_{2}\in\mathbb{R}^{n\times n}\), \(Q_{1}\in\mathbb{R}^{2n\times2n}\), \(Q_{2}\in\mathbb{R}^{2n\times2n}\), \(\mathcal{N}_{1}\in\mathbb{R}^{2n\times2n}\), \(\mathcal{N}_{2}\in \mathbb{R}^{2n\times2n}\), \(Q_{3}\in\mathbb{R}^{n\times n}\), positive diagonal matrices \(H_{i}\in\mathbb{R}^{n\times n}\) (\(i=1,2,3,4\)), a scalar \(\epsilon>0\), symmetric matrices \(Z_{i}\in\mathbb{R}^{n\times n}\) (\(i=1,2,3\)), and matrices \(\mathcal{M}\in\mathbb{R}^{2n\times2n}\), \(X\in\mathbb{R}^{n\times n}\), \(Y\in\mathbb{R}^{n\times n}\) satisfying the following LMIs:

$$\begin{aligned}& \tilde{\Xi}< 0, \end{aligned}$$
(18)
$$\begin{aligned}& \Omega_{i}\geq0 \quad (i=2, 3). \end{aligned}$$
(19)

Then, the estimator gain matrix is given by \(K=X^{-1}Y\), and the other parameters are defined as in Theorem  1.

Proof

Define a Lyapunov-Krasovskii functional as

$$ V(k)=V_{1}(k)+V_{2}(k)+V_{3}(k)+V_{4}(k)+V_{5}(k), $$
(20)

where

$$\begin{aligned}& V_{1}(k) =\eta_{1}^{T}(k)P\eta_{1}(k), \\& V_{2}(k) = \Biggl[e(k)-A\sum_{s=k-\sigma}^{k-1}e(s) \Biggr]^{T}S_{1} \Biggl[e(k)-A\sum _{s=k-\sigma}^{k-1}e(s) \Biggr] \\& \hphantom{V_{2}(k) ={}}{}+\sigma\sum_{s=-\sigma}^{-1}\sum _{u=k+s}^{k-1} \Delta e^{T}(u)S_{2} \Delta e(u), \\& V_{3}(k) =\sum_{s=k-\tau_{1}}^{k-1} \eta_{2}^{T}(s)Q_{1}\eta_{2}(s)+\sum _{s=k-\tau_{2}}^{k-1-\tau_{1}}\eta_{2}^{T}(s)Q_{2} \eta_{2}(s), \\& V_{4}(k) =\tau_{1}\sum_{s=-\tau_{1}}^{-1} \sum_{u=k+s}^{k-1}\eta _{2}^{T}(u) \mathcal {N}_{1}\eta_{2}(u)+ \tau_{d}\sum _{s=-\tau_{2}}^{-1-\tau_{1}}\sum_{u=k+s}^{k-1-\tau _{1}} \eta_{2}^{T}(u)\mathcal {N}_{2} \eta_{2}(u), \\& V_{5}(k) =\sum_{s=-\tau_{1}+1}^{0}\sum _{j=s}^{0}\sum_{u=k+j}^{k}y^{T}(u)Q_{3}y(u), \end{aligned}$$

where \(y(u)=e(u)-e(u-1)\).

By arguments similar to those in Theorem 1, we have

$$ \Delta V_{i}(k) \leq \tilde{\xi}^{T}(k) \tilde{\Xi}_{i}\tilde{\xi}(k)\quad ( i=1, 3, 4 ). $$
(21)

Calculating the forward difference of \(V_{2}(k)\) yields

$$\begin{aligned} \Delta V_{2}(k) =& \Biggl[e(k+1)-A\sum_{s=k-\sigma +1}^{k}e(s) \Biggr]^{T}S_{1} \Biggl[e(k+1) -A\sum _{s=k-\sigma+1}^{k}e(s) \Biggr] \\ &{}- \Biggl[e(k)-A\sum_{s=k-\sigma}^{k-1}e(s) \Biggr]^{T}S_{1} \Biggl[e(k)-A\sum _{s=k-\sigma}^{k-1}e(s) \Biggr] \\ &{}+\sigma^{2}\Delta e^{T}(k)S_{2}\Delta e(k)- \sigma \sum_{s=k-\sigma}^{k-1}\Delta e^{T}(s)S_{2}\Delta e(s). \end{aligned}$$

Lemma 3 gives

$$\begin{aligned}& -\sigma\sum_{s=k-\sigma}^{k-1}\Delta e^{T}(s)S_{2}\Delta e(s) \\& \quad \leq-\left [ \textstyle\begin{array}{@{}c@{}}\zeta_{1}\\ \zeta_{2} \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{2} & 0\\ 0 & 3S_{2} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} \zeta_{1} \\ \zeta_{2} \end{array}\displaystyle \right ] \\& \quad =-\tilde{\xi}^{T}(k) \bigl(\tilde{\pi}_{3}^{T}S_{2} \tilde{\pi}_{3}-\tilde{\pi }_{4}^{T}3S_{2} \tilde{\pi}_{4} \bigr)\tilde{\xi}(k), \end{aligned}$$

where \(\zeta_{1}=e(k)-e(k-\sigma)\), \(\zeta_{2}=e(k)+e(k-\sigma)-\frac {1}{\sigma}(2\sum_{s=k-\sigma}^{k-1}e(s)+e(k)-e(k-\sigma))\).

So

$$ \Delta V_{2}(k) \leq \tilde{\xi}^{T}(k) \tilde{ \Xi}_{2}\tilde{\xi}(k). $$
(22)

Calculating \(\Delta V_{5}(k)\), we get

$$ \Delta V_{5}(k) = \frac{\tau_{1}(\tau_{1}+1)}{2}\Delta e^{T}(k)Q_{3} \Delta e(k)-\sum_{s=-\tau_{1}+1}^{0}\sum _{j=k+s}^{k}y^{T}(j)Q_{3}y(j). $$

By Lemma 2 we obtain the following inequality:

$$\begin{aligned}& -\sum_{s=-\tau_{1}+1}^{0}\sum _{j=k+s}^{k}y^{T}(j)Q_{3}y(j) \\& \quad \leq -\frac{2(\tau_{1}+1)}{\tau_{1}}\zeta_{3}^{T}Q_{3} \zeta_{3} \\& \qquad {}-\frac{4(\tau_{1}+1)(\tau_{1}+2)}{\tau_{1}(\tau_{1}-1)} \Biggl[e(k)+\frac{2}{\tau _{1}+1}\sum _{s=k-\tau_{1}}^{k}e(s)-\frac{6}{(\tau_{1}+1)(\tau_{1}+2)} \sum _{s=-\tau_{1}}^{0}\sum_{j=k+s}^{k}e(j) \Biggr]^{T} \\& \qquad {}\times Q_{3} \Biggl[e(k)+\frac{2}{\tau_{1}+1}\sum _{s=k-\tau _{1}}^{k}e(s)-\frac{6}{(\tau_{1}+1)(\tau_{1}+2)} \sum _{s=-\tau_{1}}^{0}\sum_{j=k+s}^{k}e(j) \Biggr] \\& \quad = -\frac{2(\tau_{1}+1)}{\tau_{1}}\zeta_{3}^{T}Q_{3} \zeta_{3}-\frac{4(\tau_{1}+1)(\tau _{1}+2)}{\tau_{1}(\tau_{1}-1)}\zeta_{4}^{T}Q_{3} \zeta_{4} \\& \quad = \tilde{\xi}^{T}(k) \biggl(-\frac{2(\tau_{1}+1)}{\tau_{1}}\tilde{ \pi}_{11}Q_{3}\tilde {\pi}_{11}^{T}- \frac{4(\tau_{1}+1)(\tau_{1}+2)}{\tau_{1}(\tau_{1}-1)} \tilde{\pi}_{12}Q_{3}\tilde{ \pi}_{12}^{T} \biggr)\tilde{\xi}(k), \end{aligned}$$

where \(\zeta_{3}=e(k)-\frac{1}{\tau_{1}+1}\sum_{s=k-\tau _{1}}^{k}e(s)\), and

$$\zeta_{4}=e(k)+ \biggl(\frac{2}{\tau_{1}+1}-\frac{6}{(\tau_{1}+1)(\tau_{1}+2)} \biggr) \sum_{s=k-\tau_{1}}^{k}e(s)-\frac{6}{(\tau_{1}+1)(\tau_{1}+2)} \sum _{s=-\tau_{1}+1}^{0}\sum_{j=k+s}^{k}e(j). $$

Hence,

$$ \Delta V_{5}(k)\leq\tilde{\xi}^{T}(k)\tilde{ \Xi}_{5}\tilde{\xi}(k). $$
(23)

Following a procedure similar to (14)-(16) and combining (21)-(23), we obtain

$$\Delta V(k)\leq\tilde{\xi}^{T}(k)\tilde{\Xi}\tilde{\xi}(k). $$

Inequality (18) implies

$$ \Delta V(k)\leq\tilde{\xi}^{T}(k)\tilde{\Xi}\tilde{ \xi}(k)< 0,\quad \forall \tilde{\xi}(k)\neq0 . $$
(24)

Therefore, the error-state system (3) is asymptotically stable. This completes the proof of Theorem 2. □

Remark 3

In 2015, Nam et al. [40] derived a discrete version of the Wirtinger-based integral inequality. Combining this new inequality with the reciprocally convex technique, a less conservative stability condition for linear discrete systems with an interval time-varying delay was obtained in [40]. Using the Wirtinger-based summation inequality obtained in [40], we derive Theorem 2, which is less conservative than Theorem 1. Recently, Zhang et al. [7] investigated the delay-variation-dependent stability of discrete-time systems with a time-varying delay. A novel augmented Lyapunov functional was constructed, and a generalized free-weighting matrix approach was proposed to estimate the summation terms appearing in the forward difference of the Lyapunov functional. The generalized free-weighting matrix approach encompasses the Jensen-based inequality approach and the Wirtinger-based inequality approach as particular cases. Our results may be further improved by using the generalized free-weighting matrix approach.

Remark 4

In order to reduce the conservatism of the stability criterion, we modify the Lyapunov-Krasovskii functional used in the proof of Theorem 1. The Lyapunov-Krasovskii functional term \(\sum_{s=-\tau_{1}+1}^{0}\sum_{j=s}^{0}\sum_{u=k+j}^{k}y^{T}(u)Q_{3}y(u)\) is taken into account,

$$V_{2}(k)=\Biggl[e(k)-A\sum_{s=k-\sigma}^{k-1}e(s) \Biggr]^{T}S_{1}\Biggl[e(k)-A\sum _{s=k-\sigma}^{k-1}e(s)\Biggr]+\sigma\sum _{s=-\sigma}^{-1}\sum_{u=k+s}^{k-1} \eta_{2}^{T}(u)S_{2}\eta_{2}(u) $$

in the proof of Theorem 1 is replaced by

$$V_{2}(k)=\Biggl[e(k)-A\sum_{s=k-\sigma }^{k-1}e(s) \Biggr]^{T}S_{1}\Biggl[e(k)-A\sum _{s=k-\sigma}^{k-1}e(s)\Biggr]+\sigma \sum _{s=-\sigma}^{-1}\sum_{u=k+s}^{k-1} \Delta e^{T}(u)S_{2}\Delta e(u). $$

A new asymptotic stability criterion, Theorem 2, is derived, and it is less conservative than Theorem 1.

If the leakage delay vanishes, that is, \(\sigma=0\), then the error-state system (3) reduces to

$$ e(k+1)=(A-KC)e(k)+W_{0}f(k)+W_{1}f \bigl(k- \tau(k) \bigr)-K \psi(k). $$
(25)

From Theorem 2, the following stability criterion for the error-state system (25) can be obtained. For simplicity, \(\check{e}_{i}\in R^{16n\times n}\) (\(i=1,2,\ldots,16\)) are defined as block entry matrices (e.g., \(\check {e}_{3}=[0_{n\times2n},I_{n},0_{n\times13n}]^{T}\)). The other notations are defined as

$$\begin{aligned}& \check{\xi}(k) = \Biggl[e^{T}(k), e^{T}(k-\tau_{1}),e^{T}(k-\tau_{2}), e^{T} \bigl(k-\tau(k) \bigr), \sum_{s=k-\tau_{1}}^{k-1}e^{T}(s), \sum_{s=k-\tau(k)}^{k-1-\tau_{1}}e^{T}(s), \\& \hphantom{\check{\xi}(k) ={}}{}\sum_{s=k-\tau_{2}}^{k-1-\tau(k)}e^{T}(s), \Delta e^{T}(k), \Delta e^{T}(k-\tau_{1}),\Delta e^{T}(k-\tau_{2}), f^{T}(k), f^{T}(k-\tau_{1}), \\& \hphantom{\check{\xi}(k) ={}}{}f^{T}(k-\tau_{2}),f^{T} \bigl(k-\tau(k) \bigr), \sum_{s=-\tau_{1}+1}^{0}\sum_{j=k+s}^{k}e^{T}(j),\psi^{T}(k) \Biggr]^{T}, \\& \check{\pi}_{1} = [\check{e}_{1}+\check{e}_{8}, \check{e}_{2}+\check{e}_{9},\check{e}_{3}+\check{e}_{10}, \check{e}_{1}-\check{e}_{2}+\check{e}_{5},\check{e}_{2}-\check{e}_{3}+\check{e}_{6}+\check{e}_{7}], \\& \check{\pi}_{2} = [\check{e}_{1}, \check{e}_{2}, \check{e}_{3},\check{e}_{5},\check{e}_{6}+\check{e}_{7}],\qquad \check{\pi}_{3}=[\check{e}_{1}, \check{e}_{8}], \qquad \check{\pi}_{4}=[\check{e}_{2},\check{e}_{9}], \\& \check{\pi}_{5} = [\check{e}_{3}, \check{e}_{10}], \qquad \check{\pi}_{6}=[\check{e}_{1}, \check{e}_{2}, \check{e}_{4},\check{e}_{3}], \qquad \check{\pi}_{7}=[\check{e}_{5}, \check{e}_{1}-\check{e}_{2}], \\& \check{\pi}_{8} = [\check{e}_{6}, \check{e}_{2}-\check{e}_{4},\check{e}_{7},\check{e}_{4}-\check{e}_{3}], \qquad \check{\pi}_{9}=\check{e}_{1}-\frac{1}{\tau_{1}+1}(\check{e}_{5}+\check{e}_{1}), \\& \check{\pi}_{10} = \check{e}_{1}+ \biggl(\frac{2}{\tau_{1}+1}-\frac{6}{(\tau_{1}+1)(\tau_{1}+2)} \biggr) (\check{e}_{1}+\check{e}_{5})-\frac{6}{(\tau_{1}+1)(\tau_{1}+2)}\check{e}_{15}, \\& \check{\Xi}_{1} = \check{\pi}_{1}P\check{\pi}_{1}^{T}-\check{\pi}_{2}P\check{\pi}_{2}^{T}, \\& \check{\Xi}_{2} = \check{\pi}_{3}Q_{1}\check{\pi}_{3}^{T}-\check{\pi}_{4}(Q_{1}-Q_{2})\check{\pi}_{4}^{T}-\check{\pi}_{5}Q_{2}\check{\pi}_{5}^{T}, \\& \check{\Xi}_{3} = \tau_{1}^{2}\check{\pi}_{3}\mathcal{N}_{1}\check{\pi}_{3}^{T}+\tau_{d}^{2}\check{\pi}_{4}\mathcal{N}_{2}\check{\pi}_{4}^{T}+\check{\pi}_{6}\Omega_{1}\check{\pi}_{6}^{T}-\check{\pi}_{7}\Omega_{2}\check{\pi}_{7}^{T}-\check{\pi}_{8}\Omega_{3}\check{\pi}_{8}^{T}, \\& \check{\Xi}_{4} = \frac{\tau_{1}(\tau_{1}+1)}{2}\check{e}_{8}Q_{3}\check{e}_{8}^{T}-\frac{2(\tau_{1}+1)}{\tau_{1}}\check{\pi}_{9}Q_{3}\check{\pi}_{9}^{T}-\frac{4(\tau_{1}+1)(\tau_{1}+2)}{\tau_{1}(\tau_{1}-1)} \check{\pi}_{10}Q_{3}\check{\pi}_{10}^{T}, \\& \check{\Theta} = -\operatorname{Sym} \bigl\{ (\check{e}_{11}-\check{e}_{1}L_{m})H_{1}(\check{e}_{11}-\check{e}_{1}L_{p})^{T} +(\check{e}_{12}-\check{e}_{2}L_{m})H_{2}(\check{e}_{12}-\check{e}_{2}L_{p})^{T} \\& \hphantom{\check{\Theta} ={}}{}+(\check{e}_{13}-\check{e}_{3}L_{m})H_{3}(\check{e}_{13}-\check{e}_{3}L_{p})^{T} +(\check{e}_{14}-\check{e}_{4}L_{m})H_{4}(\check{e}_{14}-\check{e}_{4}L_{p})^{T} \bigr\} , \\& \check{\Upsilon}_{1} = \epsilon \bigl(\check{e}_{1}L^{T}L\check{e}_{1}^{T}-\check{e}_{16}\check{e}_{16}^{T} \bigr),\qquad \check{\Upsilon}_{2}=\check{e}_{1}+\check{e}_{8}, \\& \check{\Upsilon}_{3} = X \bigl[(A-I_{n})\check{e}_{1}^{T}-\check{e}_{8}^{T}+W_{0}\check{e}_{11}^{T}+W_{1}\check{e}_{14}^{T} \bigr]-Y \bigl(C\check{e}_{1}^{T}+\check{e}_{16}^{T} \bigr), \\& \check{\Xi} = \sum_{i=1}^{4}\check{\Xi}_{i}+\check{\Upsilon}_{1}+\operatorname{Sym}\{\check{\Upsilon}_{2}\check{\Upsilon}_{3}\}. \end{aligned}$$
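For numerical implementation, the block entry matrices \(\check{e}_{i}\) are straightforward to realize; the following is a minimal sketch (the helper name `block_entry` is ours, not from the paper):

```python
import numpy as np

def block_entry(i, n, blocks=16):
    """Block entry matrix in R^{(blocks*n) x n}: a stack of `blocks`
    n-by-n zero blocks with the i-th (1-based) block replaced by I_n."""
    E = np.zeros((blocks * n, n))
    E[(i - 1) * n : i * n, :] = np.eye(n)
    return E
```

For instance, `block_entry(3, n)` reproduces \(\check{e}_{3}=[0_{n\times2n},I_{n},0_{n\times13n}]^{T}\), and \(\check{e}_{i}^{T}\check{\xi}(k)\) extracts the i-th block of the augmented vector \(\check{\xi}(k)\).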

Corollary 1

For given integers \(0<\tau_{1} <\tau_{2}\) and diagonal matrices \(L_{m}=\operatorname{diag}\{l_{1}^{-},\ldots,l_{n}^{-}\}\) and \(L_{p}=\operatorname{diag}\{ l_{1}^{+},\ldots,l_{n}^{+}\}\), the error-state system (25) is asymptotically stable for \(\tau_{1}\leq \tau(k)\leq\tau_{2}\) if there exist symmetric positive definite matrices \(P\in\mathbb {R}^{5n\times5n}\), \(Q_{1}\in\mathbb{R}^{2n\times2n}\), \(Q_{2}\in\mathbb{R}^{2n\times2n}\), \(\mathcal{N}_{1}\in\mathbb{R}^{2n\times2n}\), \(\mathcal{N}_{2}\in \mathbb{R}^{2n\times2n}\), \(Q_{3}\in\mathbb{R}^{n\times n}\), positive diagonal matrices \(H_{i}\in\mathbb{R}^{n\times n}\) (\(i=1,2,3,4\)), a scalar \(\epsilon>0\), symmetric matrices \(Z_{i}\in\mathbb{R}^{n\times n}\) (\(i=1,2,3\)), and matrices \(\mathcal{M}\), X, Y of appropriate dimensions satisfying the following LMIs:

$$\begin{aligned}& \check{\Xi}< 0, \end{aligned}$$
(26)
$$\begin{aligned}& \Omega_{i}\geq0 \quad ( i=2,3). \end{aligned}$$
(27)

In this case, the estimator gain matrix is given by \(K=X^{-1}Y\), and the other notations are defined as in Theorem 1.

Proof

Choose the following Lyapunov-Krasovskii functional for system (25):

$$ V(k)=V_{1}(k)+V_{2}(k)+V_{3}(k)+V_{4}(k), $$
(28)

where

$$\begin{aligned}& V_{1}(k) = \eta_{1}^{T}(k)P\eta_{1}(k), \\& V_{2}(k) = \sum_{s=k-\tau_{1}}^{k-1} \eta_{2}^{T}(s)Q_{1}\eta_{2}(s)+\sum _{s=k-\tau_{2}}^{k-1-\tau_{1}}\eta_{2}^{T}(s)Q_{2} \eta_{2}(s), \\& V_{3}(k) = \tau_{1}\sum_{s=-\tau_{1}}^{-1} \sum_{u=k+s}^{k-1}\eta _{2}^{T}(u) \mathcal{N}_{1}\eta_{2}(u)+ \tau_{d}\sum _{s=-\tau_{2}}^{-1-\tau_{1}}\sum_{u=k+s}^{k-1-\tau _{1}} \eta_{2}^{T}(u)\mathcal{N}_{2}\eta_{2}(u), \\& V_{4}(k)=\sum_{s=-\tau_{1}+1}^{0}\sum _{j=s}^{0}\sum_{u=k+j}^{k}y^{T}(u)Q_{3}y(u), \end{aligned}$$

where \(y(u)=e(u)-e(u-1)\).

From (7), (9), (13), and (23), the forward difference of \(V_{i}(k)\) (\(i=1, 2, 3, 4\)) satisfies

$$ \Delta V_{i}(k)\leq\check{\xi}^{T}(k)\check{ \Xi}_{i}\check{\xi}(k) \quad (i=1, 2, 3, 4). $$
(29)

Combining (29) with (26) gives

$$ \Delta V(k) \leq \check{\xi}^{T}(k)\check{\Xi}\check{ \xi}(k)< 0, \quad \forall \check{\xi}(k)\neq0. $$
(30)

Therefore, the error-state system (25) is asymptotically stable. This completes the proof. □

Remark 5

In Corollary 1, a novel Lyapunov-Krasovskii functional is constructed, which includes the term \(V_{4}(k)=\sum_{s=-\tau_{1}+1}^{0}\sum_{j=s}^{0}\sum_{u=k+j}^{k}y^{T}(u)Q_{3}y(u)\). This term was not taken into account in [30, 33]. In [34], the Jensen inequality was employed to estimate an upper bound of \(-\sum_{s=-\tau_{1}+1}^{0}\sum_{j=k+s}^{k}y^{T}(j)Q_{3}y(j)\) in the forward difference of the Lyapunov-Krasovskii functional. Since the Jensen inequality ignores some terms, the estimation method in [34] may introduce some conservatism. In this paper, by employing the novel double summation inequality in Lemma 2, a tighter upper bound of \(-\sum_{s=-\tau_{1}+1}^{0}\sum_{j=k+s}^{k}y^{T}(j)Q_{3}y(j)\) is obtained. Moreover, the stability criterion in [34] needs \(56n^{2}+16n\) decision variables, whereas Corollary 1 needs only \(28.5n^{2}+12.5n\). Therefore, Corollary 1 has lower computational complexity.
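The two decision-variable counts are simple polynomials in the neuron number n and can be compared directly; a small sketch (the function names are ours):

```python
def nodv_corollary1(n):
    # Decision variables required by Corollary 1: 28.5 n^2 + 12.5 n
    return 28.5 * n**2 + 12.5 * n

def nodv_ref34(n):
    # Decision variables required by the criterion of [34]: 56 n^2 + 16 n
    return 56 * n**2 + 16 * n
```

For n = 3, as in Example 2, these formulas give 294 and 552, respectively, and the advantage of Corollary 1 grows quadratically with n.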

4 Numerical examples

In this section, we give two numerical examples to demonstrate the effectiveness of our stability criteria.

Example 1

Consider the discrete-time error-state system (3) with leakage delay and the following parameters:

$$\begin{aligned}& A=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.2 & 0\\ 0& 0.2 \end{array}\displaystyle \right ] ,\qquad W_{0}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.2 & -0.2\\ 0& -0.3 \end{array}\displaystyle \right ] ,\qquad W_{1}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -0.2 & 0.1\\ -0.2 & 0.3 \end{array}\displaystyle \right ] , \\& C=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0\\ 0 & 1 \end{array}\displaystyle \right ] ,\qquad L= \operatorname{diag}\{0.4,0.4\} . \end{aligned}$$

Let the activation function \(g(x)=\bigl [{\scriptsize\begin{matrix}{} g_{1}(x_{1})\cr g_{2}(x_{2})\end{matrix}} \bigr ]=\bigl [{\scriptsize\begin{matrix}{} \tanh (0.3x_{1})\cr \tanh (0.4x_{2}) \end{matrix}} \bigr ]\). Then \(L_{m}=\operatorname{diag}\{0,0\}\) and \(L_{p}=\operatorname{diag}\{0.3,0.4\}\).
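These sector bounds reflect that \(\tanh(ax)\) is nondecreasing with maximal slope a at the origin; this can be checked numerically by sampling difference quotients (a rough empirical sketch of ours, not part of the original analysis):

```python
import numpy as np

def sector_range(a, lo=-5.0, hi=5.0, m=400):
    """Empirical range of the difference quotients
    (tanh(a*u) - tanh(a*v)) / (u - v) over a grid with u != v.
    For a sector-[0, a] nonlinearity the quotients stay in [0, a]."""
    u = np.linspace(lo, hi, m)
    v = u[:, None] + 1e-3  # shift so u - v is never zero
    q = (np.tanh(a * u) - np.tanh(a * v)) / (u - v)
    return float(q.min()), float(q.max())
```

For a = 0.3 and a = 0.4 the quotients stay inside [0, 0.3] and [0, 0.4], consistent with \(L_{m}=\operatorname{diag}\{0,0\}\) and \(L_{p}=\operatorname{diag}\{0.3,0.4\}\).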

By solving the LMIs in Theorem 1 and Theorem 2, the allowable upper bounds of \(\tau_{2}\) for different \(\tau_{1}\) and σ are listed in Table 1 and Table 2, respectively. For the case \(\tau_{1}=2\), \(\tau _{2}=11\), and \(\sigma=2\), by Theorem 2, the corresponding gain matrix is

$$K=X^{-1}Y=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -0.0003& 0.0005 \\ -0.0006 & -0.0015 \end{array}\displaystyle \right ]. $$

Furthermore, the trajectories of the states \((x(k),\hat{x}(k))\) and of the estimation error \(e(k)\) are shown in Figure 1 for \(\sigma=2\) and in Figure 2 for \(\sigma=5\).
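As a quick numerical sanity check on this gain (only a necessary condition here, since Theorem 2 also accounts for the leakage and time-varying delays), the matrix \(A-KC\) governing the delay-free part of the error dynamics should be Schur stable:

```python
import numpy as np

# Parameters of Example 1 and the gain obtained from Theorem 2
A = np.array([[0.2, 0.0], [0.0, 0.2]])
C = np.eye(2)
K = np.array([[-0.0003, 0.0005], [-0.0006, -0.0015]])

# Spectral radius of A - KC; Schur stability requires it to be < 1
rho = max(abs(np.linalg.eigvals(A - K @ C)))
```

Here \(\rho\approx0.2\), comfortably inside the unit circle.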

Figure 1

Error trajectories with \(\pmb{\sigma=2}\): (a) the state \(\pmb{x_{1}}\) and its estimation; (b) the state \(\pmb{x_{2}}\) and its estimation; (c) the error state.

Figure 2

Error trajectories with \(\pmb{\sigma=5}\): (a) the state \(\pmb{x_{1}}\) and its estimation; (b) the state \(\pmb{x_{2}}\) and its estimation; (c) the error state.

Table 1 Upper delay bound of \(\pmb{\tau_{2}}\) for different σ and \(\pmb{\tau_{1}}\) by Theorem 1 (Example 1)
Table 2 Upper delay bound of \(\pmb{\tau_{2}}\) for different σ and \(\pmb{\tau_{1}}\) by Theorem 2 (Example 1)

Remark 6

From Tables 1 and 2 it can easily be seen that the error-state system (3) is globally asymptotically stable when the leakage delay \(\sigma=1\mbox{ or }2\). However, when \(\sigma\geq3\), the LMIs in Theorems 1 and 2 are infeasible, as shown in Tables 1 and 2. Tables 1 and 2 also show that Theorem 2 is less conservative than Theorem 1 when \(\sigma=2\).

Remark 7

Figures 1 and 2 show the simulation results. For the case \(\sigma=2\), Figure 1 shows that the state trajectories of \((x(k),\hat{x}(k))\) and the error state \(e(k)\) converge to zero smoothly. In contrast, Figure 2 shows that for \(\sigma=5\) the state trajectories of \((x(k),\hat{x}(k))\) and the error state \(e(k)\) do not converge to an equilibrium point. Hence, the effect of the leakage delay on the dynamical system cannot be neglected.

Example 2

Consider the discrete-time error-state system (25) with the following parameters:

$$\begin{aligned}& A=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0.4 & 0&0\\ 0& 0.3 &0 \\ 0&0&0.3 \end{array}\displaystyle \right ] ,\qquad W_{0}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0.2 & -0.2&0.1\\ 0& -0.3&0.2 \\ -0.2&-0.1&-0.2 \end{array}\displaystyle \right ] , \\& W_{1}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -0.2 & 0.1&0\\ -0.2 & 0.3&0.1\\ 0.1&-0.2&0.3 \end{array}\displaystyle \right ] , \qquad C=I_{3} , \qquad L=\operatorname{diag}\{0.4,0.4,0.4\} . \end{aligned}$$

Let the activation function be \(g(x)=\tanh (0.5x)\); then \(L_{m}=\operatorname{diag}\{0,0,0\}\) and \(L_{p}=\operatorname{diag}\{0.5,0.5,0.5\}\). Since all conditions of Corollary 1 are satisfied, the error-state system (25) with the given parameters is globally asymptotically stable. For different values of \(\tau_{1}\), the maximum allowable delay bounds obtained by Corollary 1 are listed in Table 3. From Table 3 it can be confirmed that Corollary 1 gives larger delay upper bounds than the stability criteria in [30, 33]. Although the delay upper bounds obtained by Corollary 1 are the same as those obtained in [34], the stability criterion in [34] requires 552 decision variables, whereas the number of decision variables (NoDV) in Corollary 1 is 294. Hence, Corollary 1 has lower computational complexity.

Table 3 Upper delay bound of \(\pmb{\tau_{2}}\) with various \(\pmb{\tau_{1}}\) values (Example 2)

For the case \(\tau_{1}=8\) and \(\tau_{2}=34\), by solving the LMIs in Corollary 1, the corresponding estimator gain matrix is

$$K=X^{-1}Y=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0.0942& -0.0245 &-0.0300 \\ -0.0060 & 0.0911 & 0.0449\\ -0.0122& 0.0049 & 0.0806 \end{array}\displaystyle \right ]. $$
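To illustrate the behavior of the error-state system (25) under this gain, the following is a minimal simulation sketch. We make the simplifying assumptions, which are ours and for illustration only, that \(\psi(k)\equiv0\), that the activation difference is realized as \(f(k)=\tanh(0.5e(k))\) (one admissible nonlinearity in the sector \([0,0.5]\)), and that \(\tau(k)\) is drawn uniformly at random from \([8,34]\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters of Example 2 and the gain obtained from Corollary 1
A = np.diag([0.4, 0.3, 0.3])
W0 = np.array([[0.2, -0.2, 0.1], [0.0, -0.3, 0.2], [-0.2, -0.1, -0.2]])
W1 = np.array([[-0.2, 0.1, 0.0], [-0.2, 0.3, 0.1], [0.1, -0.2, 0.3]])
C = np.eye(3)
K = np.array([[0.0942, -0.0245, -0.0300],
              [-0.0060, 0.0911, 0.0449],
              [-0.0122, 0.0049, 0.0806]])

tau1, tau2, N = 8, 34, 1000
f = lambda e: np.tanh(0.5 * e)  # sector-[0, 0.5] nonlinearity (assumed)
# Constant initial history over the maximal delay window
e_hist = [np.array([1.0, -1.0, 0.5])] * (tau2 + 1)

for k in range(N):
    tau_k = rng.integers(tau1, tau2 + 1)  # time-varying delay in [8, 34]
    e_now = e_hist[-1]
    e_del = e_hist[-1 - tau_k]
    # e(k+1) = (A - KC) e(k) + W0 f(k) + W1 f(k - tau(k)),  psi(k) = 0
    e_next = (A - K @ C) @ e_now + W0 @ f(e_now) + W1 @ f(e_del)
    e_hist.append(e_next)

final_err = np.linalg.norm(e_hist[-1])
```

With these modeling choices the estimation error decays to zero, consistent with the asymptotic stability asserted by Corollary 1 for this delay range.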

5 Conclusions

In this paper, we have investigated the delay-dependent state estimation problem for discrete-time recurrent neural networks with both leakage delay and time-varying delay. By constructing two new Lyapunov-Krasovskii functionals, we have established new delay-dependent stability criteria for designing a state estimator for such discrete-time networks. Moreover, the simulation results show that the effect of leakage delay on dynamical neural networks cannot be ignored. The effectiveness of the developed results has been verified via two numerical examples.