1 Introduction

Filter design is a fundamental problem in many engineering applications, especially in the signal processing and control communities. To recover a desired signal and eliminate interference, \({H_{\infty}}\) filtering is well suited to many practical applications, and \({H_{\infty}}\) filtering problems for dynamic systems have received considerable attention. A non-fragile \({H_{\infty}}\) filter was designed for discrete-time fuzzy systems with multiplicative gain variations in [1]. Robust \({H_{\infty}}\) filtering for a class of uncertain linear systems with time-varying delay was studied in [2]. A delay-dependent robust \({H_{\infty}}\) filter with exponential stability for T-S fuzzy time-delay systems was proposed in [3]. Exponential \({H_{\infty}}\) filtering for time-varying delay systems was studied via a Markovian approach in [4].

Singular systems have extensive applications in electrical circuits, power systems, economics, and other areas, so filtering for singular systems is especially important and has been extensively studied [5, 6]. Exponential \({H_{\infty}}\) filtering for singular systems with Markovian jump parameters was put forward in [7]. An \({H_{\infty}}\) filter for a class of discrete-time singular Markovian jump systems with time-varying delays was designed in [8]. In [9], the problem of exponential \({H_{\infty}}\) filtering was studied for discrete-time switched singular time-delay systems.

On the other hand, time delay is frequently one of the main causes of instability and poor performance of systems, and it is encountered in a variety of engineering systems [1–4, 8–10]. However, many existing results concern constant time delays, which is conservative to some extent in practice. Recently, interval time-varying delays have been identified in practical engineering systems. An interval time-varying delay is a time delay that varies within an interval whose lower bound is not restricted to be zero. For more details, we refer readers to [11–13].

In the past two decades, the T-S fuzzy model has been used to represent many nonlinear systems, and practice has proved that it is very effective in simplifying complex systems. In [14], a filter was designed via a discrete-time singularly perturbed T-S fuzzy model. For T-S fuzzy discrete-time systems, an \({H_{\infty}}\) descriptor fault detection filter was designed in [15]. New results on \({H_{\infty}}\) filtering for fuzzy systems with interval time-varying delays were obtained in [16]. However, to the authors’ knowledge, the problem of delay-dependent exponential \({H_{\infty}}\) filtering for nonlinear singular systems with interval time-varying delay via the T-S fuzzy model has rarely been reported.

Motivated by the above discussions, the problem of delay-dependent exponential \({H_{\infty}}\) filtering for nonlinear singular systems with interval time-varying delay is considered in this paper via the T-S fuzzy model. The main contributions of this paper are summarized as follows. (1) A triple-integral term is included when constructing the Lyapunov-Krasovskii functional (LKF), which effectively reduces conservatism. (2) Lemma 1 refines the delay interval, which further reduces conservatism. (3) The filter parameters can be obtained by solving a set of LMIs. (4) Comparisons with other references are listed in tables, which clearly show the advantage of the proposed method.

2 Preliminaries

Consider the following nonlinear system with interval time-varying delay, which can be approximated by a T-S fuzzy singular model with r plant rules.

Plant rule i: If \(\xi_{1} (t)\) is \(M_{1i}\), \(\xi_{2} (t)\) is \(M_{2i}\), …, and \(\xi_{p} (t)\) is \(M_{pi}\), then

$$ \begin{aligned} &E\dot{x}(t) = ({A_{1i}} + \Delta{A_{1i}})x(t) + ({A_{2i}} + \Delta {A_{2i}})x \bigl(t - d ( t )\bigr) + ({B_{i}} + \Delta{B_{i}}) \omega(t), \\ &y(t) = ({C_{1i}} + \Delta{C_{1i}})x(t) + ({C_{2i}} + \Delta {C_{2i}})x\bigl(t - d ( t )\bigr) + ({D_{i}} + \Delta{D_{i}})\omega(t), \\ &z(t) = ({L_{1i}} + \Delta{L_{1i}})x(t)+({L_{2i}} + \Delta{L_{2i}})x\bigl(t-d(t)\bigr), \\ &x(t) = \phi(t),\quad t \in[ - {d_{2}},0], \end{aligned} $$
(1)

where \(M_{ij}\) denotes the fuzzy sets; r denotes the number of model rules; \(x(t) \in\mathrm{R}^{n}\) is the state vector; \(y(t) \in \mathrm{R}^{q}\) is the measurement output; \(z(t) \in\mathrm{R}^{p}\) is the signal to be estimated; \(\omega(t) \in\mathrm{R}^{m}\) is the noise signal vector; \(\phi(t)\) is a compatible vector-valued initial function; E is a singular matrix and we assume that \(\operatorname{rank} E = r \leq n\); \(A_{1i}\), \(A_{2i}\), \(B_{i}\), \(C_{1i}\), \(C_{2i}\), \(D_{i}\), \(L_{1i}\), \({L_{2i}}\) are constant matrices with appropriate dimensions; \(\xi_{1} (t), \xi_{2} (t), \ldots,\xi_{p} (t)\) are the premise variables, which are functions of the state variables; \(d ( t )\) is the time-varying delay and satisfies

$$0 \le{d_{1}} \le d(t) \le{d_{2}}, \qquad\dot{d}(t) \le\mu; $$

\(\Delta A_{1i}\), \(\Delta A_{2i}\), \(\Delta B_{i}\), \(\Delta C_{1i}\), \(\Delta C_{2i}\), \(\Delta D_{i}\), \(\Delta L_{1i}\), \(\Delta{L_{2i}}\) are unknown matrices describing the model uncertainties, and they are assumed to be of the form

$$ \begin{aligned} & \begin{bmatrix} {\Delta{A_{1i}}} & {\Delta{A_{2i}}} & {\Delta{B_{i}}} \\ {\Delta{C_{1i}}} & {\Delta{C_{2i}}} & {\Delta{D_{i}}} \end{bmatrix} = \begin{bmatrix} {{H_{1i}}} \\ {{H_{2i}}} \end{bmatrix} {F_{i}}[{{E_{1i}}} \quad {{E_{2i}}} \quad {{E_{3i}}}], \\ &\Delta{L_{1i}} = {H_{3i}} {F_{i}} {E_{4i}}, \qquad \Delta {L_{2i}} = {H_{4i}} {F_{i}} {E_{5i}}, \end{aligned} $$
(2)

where \(H_{1i}\), \(H_{2i}\), \(H_{3i}\), \({H_{4i}}\), \(E_{1i}\), \(E_{2i}\), \(E_{3i}\), \(E_{4i}\), \({E_{5i}}\) are known real constant matrices, and \(F_{i}\) is unknown real time-varying matrix satisfying

$$ F_{i}^{T} F_{i} \le I. $$
(3)

The parametric uncertainties \(\Delta A_{1i}\), \(\Delta A_{2i}\), \(\Delta B_{i}\), \(\Delta C_{1i}\), \(\Delta C_{2i}\), \(\Delta D_{i}\), \(\Delta L_{1i}\), \(\Delta{L_{2i}}\) are said to be admissible if both (2) and (3) hold.

Let \(\varepsilon(t) = [{{\xi_{1}}(t)}\ {{\xi_{2}}(t)}\ {\cdots}\ {{\xi _{p}}(t)}]^{T}\). By using a center-average defuzzifier, product fuzzy inference, and a singleton fuzzifier, the global model of the system is obtained as follows:

$$ \begin{aligned} &E\dot{x}(t) = \sum_{i = 1}^{r} {{h_{i}}\bigl(\varepsilon(t)\bigr)} \bigl[ ({A_{1i}} + \Delta{A_{1i}})x(t) + ({A_{2i}} + \Delta{A_{2i}})x \bigl(t - d ( t )\bigr) + ({B_{i}} + \Delta{B_{i}})\omega(t) \bigr], \\ &y(t) = \sum_{i = 1}^{r} {{h_{i}} \bigl(\varepsilon(t)\bigr)} \bigl[ ({C_{1i}} + \Delta{C_{1i}})x(t) + ({C_{2i}} + \Delta{C_{2i}})x\bigl(t - d ( t )\bigr) + ({D_{i}} + \Delta{D_{i}})\omega(t) \bigr], \\ &z(t) = \sum_{i = 1}^{r} {{h_{i}} \bigl(\varepsilon(t)\bigr)} \bigl[ ({L_{1i}} + \Delta{L_{1i}})x(t)+({L_{2i}} + \Delta {L_{2i}})x\bigl(t-d(t)\bigr) \bigr], \end{aligned} $$
(4)

where

$${h_{i}}\bigl(\varepsilon(t)\bigr) = {{{\beta_{i}}\bigl( \varepsilon(t)\bigr)} \bigg/{\sum_{i = 1}^{r} {{ \beta _{i}}\bigl(\varepsilon(t)\bigr)} }},\quad{\beta_{i}}\bigl( \varepsilon(t)\bigr) = \prod_{j = 1}^{p} {{M_{ij}} \bigl( {{\varepsilon_{j}} ( t )} \bigr)} , $$

\({M_{ij}} ( {{\varepsilon_{j}} ( t )} )\) is the grade of membership of \({\varepsilon_{j}}(t)\) in \({M_{ij}}\). It is easy to see that \({\beta_{i}}(\varepsilon(t)) \ge0\) and \(\sum_{i = 1}^{r} {{\beta_{i}}(\varepsilon(t))} > 0\). Hence, we have \({h_{i}}(\varepsilon(t)) \ge0\) and \(\sum_{i = 1}^{r} {{h_{i}}(\varepsilon(t))} = 1\).
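As an illustration of how the normalized weights \(h_i(\varepsilon(t))\) are formed from the membership grades \(M_{ij}(\varepsilon_j(t))\), the following Python sketch computes \(\beta_i\) as a product of grades and normalizes. The Gaussian membership functions and rule centers are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

def membership_weights(eps, centers, sigma=1.0):
    """Normalized fuzzy-blending weights h_i for a premise vector eps (shape (p,)).

    centers has shape (r, p); row i holds the (hypothetical Gaussian)
    membership-function centers of rule i.
    """
    grades = np.exp(-((eps - centers) ** 2) / (2.0 * sigma ** 2))  # M_ij(eps_j)
    beta = grades.prod(axis=1)        # beta_i = prod_j M_ij(eps_j) >= 0
    return beta / beta.sum()          # h_i, so that sum_i h_i = 1

# Two rules (r = 2), two premise variables (p = 2)
h = membership_weights(np.array([0.3, -0.5]),
                       np.array([[0.0, 0.0], [1.0, -1.0]]))
```

By construction \(h_i \ge 0\) and \(\sum_i h_i = 1\), matching the properties stated above.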

In this paper, we consider the following delay-dependent fuzzy filter:

$$ \begin{aligned} &E \dot{\hat{x}}(t) = \sum _{i = 1}^{r} {h_{i} \bigl(\varepsilon (t) \bigr)\bigl[A_{fi} \hat{x}(t) + B_{fi} y(t) + C_{fi} \hat{x}\bigl(t - d(t)\bigr)\bigr]}, \\ &\hat{z}(t) = \sum_{i = 1}^{r} {h_{i} \bigl(\varepsilon(t)\bigr) \bigl[ {L_{fi} \hat{x}(t)} \bigr],} \quad i = 1,2, \ldots, r, \end{aligned} $$
(5)

where \(\hat{x}(t) \in R^{k}\) is the filter state vector, \(\hat{z}(t) \in R^{p}\) is the estimated vector, \(A_{fi}\), \(B_{fi}\), \(C_{fi}\), \(L_{fi}\) are the filter matrices with appropriate dimensions, which are to be designed.

Remark 1

When \(E=I\) and \(C_{fi} = 0\), the fuzzy filter (5) reduces to the filter studied in [17], which is a delay-independent filter. The simulations will demonstrate the advantage of the delay-dependent filter proposed in this paper.

The filtering error system from (4) and (5) can be described by

$$ \begin{aligned} &\bar{E}\dot{\tilde{x}}(t) = \sum _{i = 1}^{r} \sum_{j = 1}^{r} {h_{i} \bigl(\varepsilon(t)\bigr)h_{j} \bigl(\varepsilon(t)\bigr)} \bigl[ { {A_{ij} \tilde{x}(t)} + { A_{dij} \tilde{x}\bigl(t - d(t)\bigr)} { + B_{ij} w(t)} }\bigr], \\ &e(t) = \sum_{i = 1}^{r} {\sum _{j = 1}^{r} {{h_{i}}\bigl(\varepsilon (t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } \bigl[{L_{ij}}\tilde{x}(t) + {L_{dij}}\tilde{x}\bigl(t - d(t)\bigr)\bigr], \end{aligned} $$
(6)

where

$$\begin{aligned}& e(t) = z(t) - \hat{z}(t),\qquad \tilde{x}(t) = { \bigl[ { {{x^{T}}(t)} \quad {{{\hat{x}}^{T}}(t)} } \bigr]^{T}}, \\& {A_{ij}} = \begin{bmatrix} {{A_{1i}} + \Delta{A_{1i}}} & 0 \\ {{B_{fj}}({C_{1i}} + \Delta{C_{1i}})} & {{A_{fj}}} \end{bmatrix},\qquad {B_{ij}} = \begin{bmatrix} {{B_{i}} + \Delta{B_{i}}} \\ {{B_{fj}}({D_{i}} + \Delta{D_{i}})} \end{bmatrix}, \\& {A_{dij}} = \begin{bmatrix} {{A_{2i}} + \Delta{A_{2i}}} & 0 \\ {{B_{fj}}({C_{2i}} + \Delta{C_{2i}})} & {{C_{fj}}} \end{bmatrix},\qquad \bar{E} = \begin{bmatrix} E & 0 \\ 0 & E \end{bmatrix}, \\& {L_{ij}} = [ {{L_{1i}} + \Delta{L_{1i}}} \quad {-{L_{fj}}} ],\qquad {L_{dij}} = [ {{L_{2i}} + \Delta{L_{2i}}} \quad 0 ]. \end{aligned}$$
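The block structure above can be assembled numerically. The sketch below treats the nominal case (all Δ-terms zero) with a filter of the same order as the plant; the sample matrices are placeholders chosen only to exercise the dimensions, not data from the paper.

```python
import numpy as np

def augmented_blocks(E, A1, A2, B, C1, C2, D, L1, L2, Af, Bf, Cf, Lf):
    """Assemble Ebar, A_ij, A_dij, B_ij, L_ij, L_dij of system (6), nominal case."""
    n = E.shape[0]
    Z = np.zeros((n, n))
    Ebar = np.block([[E, Z], [Z, E]])
    Aij  = np.block([[A1, Z], [Bf @ C1, Af]])
    Adij = np.block([[A2, Z], [Bf @ C2, Cf]])
    Bij  = np.vstack([B, Bf @ D])
    Lij  = np.hstack([L1, -Lf])
    Ldij = np.hstack([L2, np.zeros_like(L2)])
    return Ebar, Aij, Adij, Bij, Lij, Ldij

# Placeholder data: n = 2 states, m = 1 noise, q = 1 output, p = 1 estimate
E  = np.diag([1.0, 0.0])                     # singular descriptor matrix
A1 = np.array([[-1.0, 0.2], [0.1, -2.0]]); A2 = 0.1 * np.eye(2)
B  = np.array([[1.0], [0.5]])
C1 = np.array([[1.0, 0.0]]); C2 = np.array([[0.0, 1.0]])
D  = np.array([[0.1]])
L1 = np.array([[1.0, 1.0]]); L2 = np.zeros((1, 2))
Af = -np.eye(2); Bf = np.array([[0.5], [0.5]]); Cf = np.zeros((2, 2))
Lf = np.array([[1.0, 0.0]])
Ebar, Aij, Adij, Bij, Lij, Ldij = augmented_blocks(
    E, A1, A2, B, C1, C2, D, L1, L2, Af, Bf, Cf, Lf)
```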

System (6) is simply denoted by the following form:

$$ \begin{aligned} &\bar{E}\dot{ \tilde{x}}(t) = \tilde{A}\tilde{x}(t) + \tilde{A}_{d} \tilde{x}\bigl(t - d (t)\bigr) + \tilde{B} w(t), \\ &e(t) = \tilde{L}\tilde{x}(t)+{{\tilde{L}}_{d}} \tilde{x} \bigl( {t - d ( t )} \bigr), \end{aligned} $$
(7)

where

$$\begin{aligned}& \tilde{A} = \sum_{i = 1}^{r} {\sum _{j = 1}^{r} {{h_{i}}\bigl(\varepsilon(t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } {A_{ij}},\qquad {{ \tilde{A}}_{d}} = \sum_{i = 1}^{r} { \sum_{j = 1}^{r} {{h_{i}}\bigl( \varepsilon (t)\bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } {A_{dij}}, \\& \tilde{B} = \sum_{i = 1}^{r} {\sum _{j = 1}^{r} {{h_{i}}\bigl(\varepsilon(t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } {B_{ij}},\qquad \tilde{L} = \sum_{i = 1}^{r} {\sum _{j = 1}^{r} {{h_{i}}\bigl(\varepsilon (t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } {L_{ij}}, \\& {{\tilde{L}}_{d}} = \sum_{i = 1}^{r} {\sum_{j = 1}^{r} {{h_{i}}\bigl( \varepsilon(t)\bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } {L_{dij}}. \end{aligned}$$

Definition 1

The filtering error system (7) is said to be exponentially admissible if it satisfies the following:

  1. (a)

    The augmented system (7) with \(w(t)=0\) is said to be regular and impulse-free if the pairs \(( {\bar{E},\tilde{A}} )\) and \(( {\bar{E},\tilde{A} + {{\tilde{A}}_{d}}} )\) are regular and impulse-free;

  2. (b)

    The filtering error system (7) is said to be exponentially stable if there exist scalars \(\delta>0\) and \(\lambda>0\) such that \(\Vert {\tilde{x} ( t )} \Vert \le \delta{e^{ - \lambda t}}{\Vert {\phi ( \theta )} \Vert _{{d_{2}}}}\), where \({\Vert {\phi ( \theta )} \Vert _{{d_{2}}}} = \sup_{ - {d_{2}} \le\theta \le0} \vert {\phi ( \theta )} \vert \) and \(\vert \cdot \vert \) is the Euclidean norm in \({R^{n}}\). When the above inequality is satisfied, λ, δ and \({e^{ - \lambda t}}{\Vert {\phi ( \theta )} \Vert _{{d_{2}}}}\) are called the decay rate, the decay coefficient and an upper bound of the state trajectories, respectively.

Lemma 1

([18])

For any constant matrix \(R \in{R^{n \times n}}\), \(R = {R^{T}} > 0\), scalars \({d_{1}} \le d(t) \le{d_{2}}\), and vector function \(\dot{x}: [ { - {d_{2}}, - {d_{1}}} ] \to {R^{n}}\) such that the following integration is well defined, it holds that

$$- ({d_{2}} - {d_{1}})\int_{ t - {d_{2}}}^{ t - {d_{1}}} {{{\dot{x}}^{T}} ( s )R} \dot{x} ( s )\,ds \le{ \zeta^{T}} ( t )\Lambda\zeta ( t ), $$

where

$$\zeta ( t ) = \begin{bmatrix} {x ( {t - {d_{1}}} )} \\ {x ( {t - d ( t )} )} \\ {x ( {t - {d_{2}}} )} \end{bmatrix},\qquad \Lambda = \begin{bmatrix} { - R} & R & 0 \\ {*} & { - 2R} & R \\ {*} & * & { - R} \end{bmatrix}. $$
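Lemma 1 can be sanity-checked on a sample trajectory. The scalar sketch below (with \(R = 1\), \(x(s) = \sin 2s\), and hypothetical delays \(d_1 = 0.5\), \(d(t) = 1\), \(d_2 = 1.5\) evaluated at \(t = 0\)) discretizes the integral and confirms the bound; it is an illustration, not a proof.

```python
import numpy as np

d1, d2, dtau, t = 0.5, 1.5, 1.0, 0.0   # hypothetical delays, d(t) = dtau
x  = lambda s: np.sin(2.0 * s)
xd = lambda s: 2.0 * np.cos(2.0 * s)   # x'(s)

# Left-hand side: -(d2 - d1) * int_{t-d2}^{t-d1} xdot(s)^T R xdot(s) ds, R = 1
s = np.linspace(t - d2, t - d1, 200001)
lhs = -(d2 - d1) * np.sum(xd(s) ** 2) * (s[1] - s[0])

# Right-hand side: zeta^T Lambda zeta with R = 1
zeta = np.array([x(t - d1), x(t - dtau), x(t - d2)])
Lam = np.array([[-1.0,  1.0,  0.0],
                [ 1.0, -2.0,  1.0],
                [ 0.0,  1.0, -1.0]])
rhs = zeta @ Lam @ zeta
```

Note that here \(\zeta^T\Lambda\zeta = -(\zeta_1-\zeta_2)^2-(\zeta_2-\zeta_3)^2 \le 0\), so the bound is nontrivial whenever the trajectory moves on the delay interval.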

Lemma 2

For any symmetric positive-definite constant matrix \(M > 0\), any constant matrix \(E \in{R^{n \times n}}\), and scalars \({d_{2}} > {d_{1}} > 0\) with \(d_{12} = d_{2} - d_{1}\), if there exists a vector function \(\dot{x} ( \cdot ): [ { - {d_{2}},0} ] \to{R^{n}}\) such that the following integration is well defined, then it holds that

$$\begin{aligned} &{-}\frac{{d_{2}^{2} - d_{1}^{2}}}{2}\int_{ - {d_{2}}}^{ - {d_{1}}} {\int _{t + \theta}^{t} {{{ \bigl( {E\dot{x} ( s )} \bigr)}^{T}}ME\dot{x} ( s )\,ds} }\,d\theta \\ &\quad\le \begin{bmatrix} {x ( t )} \\ {\int_{t - {d_{2}}}^{t - {d_{1}}} {x ( s )}\,ds} \end{bmatrix}^{T} \begin{bmatrix} { - d_{12}^{2}{E^{T}}ME} & {{d_{12}}{E^{T}}ME} \\ {{d_{12}}{E^{T}}ME} & { - {E^{T}}ME} \end{bmatrix} \begin{bmatrix} {x ( t )} \\ {\int_{t - {d_{2}}}^{t - {d_{1}}} {x ( s )}\,ds} \end{bmatrix}. \end{aligned}$$

Proof

Notice that

$$\begin{aligned} & \begin{bmatrix} {{{ ( {E\dot{x} ( s )} )}^{T}}ME\dot{x} ( s )} & {{{ ( {E\dot{x} ( s )} )}^{T}}} \\ {E\dot{x} ( s )} & {{M^{ - 1}}} \end{bmatrix} = \begin{bmatrix} {{{ ( {E\dot{x} ( s )} )}^{T}}{M^{1/2}}} & 0 \\ {{M^{ - 1/2}}} & 0 \end{bmatrix} \begin{bmatrix} {{M^{1/2}}E\dot{x} ( s )} & {{M^{ - 1/2}}} \\ 0 & 0 \end{bmatrix} \ge0. \end{aligned}$$

Hence,

$$ \begin{bmatrix} {\int_{ - {d_{2}}}^{ - {d_{1}}} {\int_{t + \theta}^{t} {{{ ( {E\dot{x} ( s )} )}^{T}}ME\dot{x} ( s )\,ds}\,d\theta} } & {\int_{ - {d_{2}}}^{ - {d_{1}}} {\int_{t + \theta}^{t} {{{ ( {E\dot{x} ( s )} )}^{T}}\,ds}\,d\theta} } \\ {\int_{ - {d_{2}}}^{ - {d_{1}}} {\int_{t + \theta}^{t} { ( {E\dot{x} ( s )} )\,ds}\,d\theta} } & {\int_{ - {d_{2}}}^{ - {d_{1}}} {\int_{t + \theta}^{t} {{M^{ - 1}}\,ds}\,d\theta} } \end{bmatrix} \ge0. $$

Using Schur complements and some simple manipulation, the lemma can be obtained. □
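As with Lemma 1, the bound of Lemma 2 can be checked numerically. The scalar sketch below (with \(E = M = 1\) and hypothetical delays \(d_1 = 0.5\), \(d_2 = 1.5\) at \(t = 0\)) approximates the double integral on a grid; again an illustration only, not part of the proof.

```python
import numpy as np

d1, d2, t = 0.5, 1.5, 0.0
d12, d_s = d2 - d1, (d2 ** 2 - d1 ** 2) / 2.0
x  = lambda s: np.sin(2.0 * s)
xd = lambda s: 2.0 * np.cos(2.0 * s)

# LHS: -d_s * int_{-d2}^{-d1} int_{t+theta}^{t} xdot(s)^2 ds dtheta
thetas = np.linspace(-d2, -d1, 1001)
inner = np.empty_like(thetas)
for k, th in enumerate(thetas):
    s = np.linspace(t + th, t, 1001)
    inner[k] = np.sum(xd(s) ** 2) * (s[1] - s[0])
lhs = -d_s * np.sum(inner) * (thetas[1] - thetas[0])

# RHS: quadratic form in [x(t); int_{t-d2}^{t-d1} x(s) ds], with E^T M E = 1
s2 = np.linspace(t - d2, t - d1, 100001)
v = np.array([x(t), np.sum(x(s2)) * (s2[1] - s2[0])])
Q = np.array([[-d12 ** 2, d12],
              [ d12,     -1.0]])
rhs = v @ Q @ v
```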

Lemma 3

([19])

Suppose that a positive continuous function \(f ( t )\) satisfies

$$f ( t ) \le{\varsigma_{1}}\mathop{\sup } _{t - \tau \le s \le t} f ( s ) + { \varsigma _{2}} {e^{ - \varepsilon t}}, $$

where \(\varepsilon > 0\), \({\varsigma_{1}} < 1\), \({\varsigma_{2}} > 0\) and \(\tau > 0\). Then \(f ( t )\) satisfies

$$f ( t ) \le\mathop{\sup} _{ - \tau \le s \le0} f ( s ){e^{ - {\xi_{0}}t}} + \frac{{{\varsigma_{2}}{e^{ - {\xi_{0}}t}}}}{{1 - {\varsigma _{1}}{e^{{\xi_{0}}\tau}}}}, $$

where \({\xi_{0}} = \min\{ \varepsilon ,\xi\} \) and \(0 < \xi < - (1/\tau ) \ln{\varsigma_{1}}\).
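Lemma 3 is a Halanay-type inequality, and it can be visualized with a discretized example. Below, \(f\) is built to satisfy the hypothesis with equality at each grid point (the constants \(\varsigma_1 = 0.5\), \(\varsigma_2 = 1\), \(\varepsilon = 1\), \(\tau = 1\) are hypothetical), and the simulated values are checked against the exponential envelope given by the conclusion. A sketch for intuition, not a proof.

```python
import numpy as np

s1, s2, eps, tau = 0.5, 1.0, 1.0, 1.0
xi = 0.5 * (-np.log(s1) / tau)       # any 0 < xi < -(1/tau) ln(s1) works
xi0 = min(eps, xi)

dt = 1e-3
n_hist, n_sim = int(tau / dt), int(6.0 / dt)
f = np.ones(n_hist + n_sim)          # initial history f(s) = 1 on [-tau, 0]
for k in range(n_hist, len(f)):
    t = (k - n_hist) * dt
    # f(t) = s1 * sup_{t - tau <= s < t} f(s) + s2 * e^{-eps t}, so the
    # hypothesis of the lemma holds at every grid point
    f[k] = s1 * f[k - n_hist:k].max() + s2 * np.exp(-eps * t)

t_grid = np.arange(n_sim) * dt
sup0 = f[:n_hist].max()
envelope = (sup0 + s2 / (1.0 - s1 * np.exp(xi0 * tau))) * np.exp(-xi0 * t_grid)
ok = bool((f[n_hist:] <= envelope + 1e-9).all())
```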

Lemma 4

([8])

Given real matrices Q, H, E of appropriate dimensions, where Q is symmetric, the inequality \(Q + HFE + E^{T} F^{T} H^{T} < 0 \) holds for all F satisfying \(F^{T} F \le I\) if and only if there exists \(\varepsilon > 0\) such that

$$Q + \varepsilon HH^{T} + \varepsilon^{ - 1} E^{T} E < 0. $$
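Lemma 4 is the well-known bounding lemma for norm-bounded uncertainty. A quick numerical spot-check of the "if" direction (with hypothetical random data, not data from the paper) follows: once the ε-condition holds, \(Q + HFE + E^TF^TH^T\) stays negative definite for sampled F with \(F^TF \le I\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 3
H = rng.standard_normal((n, m))
E = rng.standard_normal((p, n))
eps = 1.0

# Pick a symmetric Q for which the eps-condition of Lemma 4 holds:
# Q + eps*H H^T + (1/eps)*E^T E = -I < 0.
Q = -(eps * H @ H.T + (1.0 / eps) * E.T @ E) - np.eye(n)

for _ in range(50):
    F = rng.standard_normal((m, p))
    F /= max(1.0, np.linalg.svd(F, compute_uv=False).max())  # enforce F^T F <= I
    M = Q + H @ F @ E + E.T @ F.T @ H.T
    assert np.linalg.eigvalsh(M).max() < 0.0  # negative definite, as predicted
```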

Aims of this paper

The exponential \(H_{\infty}\) filter problem to be addressed in this paper is formulated as follows: given the uncertain time-delay T-S fuzzy system (4) and a prescribed level of noise attenuation \(\gamma>0\), determine an exponentially stable filter in the form of (5) such that the following requirements are satisfied:

  1. (a)

    The filtering error system (6) is exponentially admissible;

  2. (b)

    Under zero initial conditions, the filtering error system (6) satisfies \(\|e(t)\|_{2} < \gamma\|\omega(t)\|_{2} \) for any nonzero \(\omega(t) \in L_{2}[0, \infty)\) and all admissible uncertainties.

3 Main results

3.1 Exponential stability and \({H_{\infty}}\) performance analysis

Theorem 1

For a prescribed scalar \(0 \le\mu < 1\), system (7) is exponentially admissible with \({H_{\infty}}\) performance γ for any time delay \(d ( t )\) satisfying \(0 \le{d_{1}} \le d(t) \le{d_{2}}\) if there exist a matrix P and positive-definite matrices \({Q_{1}}\), \({Q_{2}}\), \({Q_{3}}\), \({R_{1}}\), \({R_{2}}\), \({Z_{1}}\), \({Z_{2}}\), \({Z_{3}}\), \({Z_{4}}\) such that

$$\begin{aligned}& \bar{E}^{\mathrm{T}} P = P^{T} \bar{E} \ge0, \end{aligned}$$
(8)
$$\begin{aligned}& \Sigma = \begin{bmatrix} {{\Lambda_{11}}} & {{\Lambda_{12}}} & {{\Lambda_{13}}} \\ {*} & {{\Lambda_{22}}} & {{\Lambda_{23}}} \\ {*} & * & {{\Lambda_{33}}} \end{bmatrix} < 0, \end{aligned}$$
(9)

where

$$\begin{aligned}& {\Lambda_{11}} = \begin{bmatrix} {{\Sigma_{11}}} & {{\Sigma_{12}}} & {{\Sigma_{13}}} & 0 \\ {*} & {{\Sigma_{22}}} & {{\Sigma_{23}}} & {{\Sigma_{24}}} \\ {*} & * & {{\Sigma_{33}}} & 0 \\ {*} & * & * & {{\Sigma_{44}}} \end{bmatrix},\qquad {\Lambda_{12}} = \begin{bmatrix} {{d_{1}}{{\bar{E}}^{T}}{R_{1}}\bar{E}} & {{d_{12}}{{\bar{E}}^{T}}{R_{2}}\bar{E}} & {{P^{T}}\tilde{B}} \\ {{0_{3 \times1}}} & {{0_{3 \times1}}} & {{0_{3 \times1}}} \end{bmatrix}, \\& {\Lambda_{13}} = \begin{bmatrix} {{{\tilde{A}}^{T}}} & {{{\tilde{L}}^{T}}} \\ {\tilde{A}_{d}^{T}} & {\tilde{L}_{d}^{T}} \\ 0 & 0 \\ 0 & 0 \end{bmatrix},\qquad {\Lambda_{22}} = \operatorname{diag}\bigl\{ {\Sigma_{55}},{\Sigma_{66}}, - { \gamma^{2}}I\bigr\} ,\qquad {\Lambda_{23}} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ {{{\tilde{B}}^{T}}} & 0 \end{bmatrix}, \\& {\Lambda_{33}} = \operatorname{diag}\bigl\{ - {W^{ - 1}}, - I \bigr\} ,\qquad {\Sigma_{11}} = {P^{T}}\tilde{A} + {{\tilde{A}}^{T}}P + {Q_{1}} - {{\bar{E}}^{T}} {Z_{1}}\bar{E} - {{\bar{E}}^{T}}Y\bar{E}+Z, \\& {\Sigma_{12}} = {P^{T}} {{\tilde{A}}_{d}},\qquad { \Sigma_{13}} = {{\bar{E}}^{T}} {Z_{1}}\bar{E},\qquad {\Sigma_{22}} = - ( {1 - \mu} ){Q_{3}} - 2{{\bar{E}}^{T}} {Z_{2}}\bar{E},\qquad {\Sigma_{23}} = {{\bar{E}}^{T}} {Z_{2}}\bar{E}, \\& {\Sigma_{24}} = {{\bar{E}}^{T}} {Z_{2}}\bar{E}, \qquad {\Sigma_{33}} = - {Q_{1}} + {Q_{2}} + {Q_{3}} - {{\bar{E}}^{T}} {Z_{1}}\bar{E} - {{\bar{E}}^{T}} {Z_{2}}\bar{E},\qquad {\Sigma_{44}} = - {Q_{2}} - {{\bar{E}}^{T}} {Z_{2}}\bar{E}, \\& {\Sigma_{55}} = - {{\bar{E}}^{T}} {R_{1}}\bar{E} - {Z_{3}},\qquad {\Sigma_{66}} = - {{\bar{E}}^{T}} {R_{2}}\bar{E} - {Z_{4}},\qquad {d_{12}} = {d_{2}} - {d_{1}}, \qquad{d_{s}} = \frac {{d_{2}^{2} - d_{1}^{2}}}{2}, \\& W = d_{1}^{2}{Z_{1}} + d_{12}^{2}{Z_{2}} + \frac{{d_{1}^{4}}}{4}{R_{1}} + d_{s}^{2}{R_{2}}, \qquad Y = d_{1}^{2}{R_{1}} + d_{12}^{2}{R_{2}}, \qquad Z = d_{1}^{2}{Z_{3}} + d_{12}^{2}{Z_{4}}. \end{aligned}$$

Proof

Firstly, we prove that system (7) is regular and impulse-free. We choose two non-singular matrices U and V such that

$$\begin{aligned}& U\bar{E}V = \begin{bmatrix} {{I_{r}}} & 0 \\ 0 & 0 \end{bmatrix}, \qquad U\tilde{A}V = \begin{bmatrix} {{{\tilde{A}}_{11}}} & {{{\tilde{A}}_{12}}} \\ {{{\tilde{A}}_{21}}} & {{{\tilde{A}}_{22}}} \end{bmatrix}, \qquad U{{\tilde{A}}_{d}}V = \begin{bmatrix} {{{\tilde{A}}_{d1}}} & {{{\tilde{A}}_{d2}}} \\ {{{\tilde{A}}_{d3}}} & {{{\tilde{A}}_{d4}}} \end{bmatrix}, \\& {U^{ - 1}}PV = \begin{bmatrix} {{P_{11}}} & {{P_{12}}} \\ {{P_{21}}} & {{P_{22}}} \end{bmatrix},\qquad {V^{T}}ZV = \begin{bmatrix} {{Z_{11}}} & {{Z_{12}}} \\ {{Z_{21}}} & {{Z_{22}}} \end{bmatrix}. \end{aligned}$$

From (8) and the expression above, it is easy to obtain that \({P_{12}} = 0\). Pre- and post-multiplying \({\Sigma_{11}} < 0\) by \({V^{T}}\) and V, we have

$$P_{22}^{T}{\tilde{A}_{22}} + \tilde{A}_{22}^{T}{P_{22}} + {Z_{22}} < 0, $$

which means \({\tilde{A}_{22}}\) is non-singular. So the pair \(( {\bar{E},\tilde{A}} )\) is regular and impulse-free.

On the other hand, via the Schur complement, it can be seen from (9) that \({\Lambda_{11}}<0\). Pre- and post-multiplying it by \([ {I \ I \ I \ I } ]\) and its transpose, we can get

$$P_{22}^{T}({\tilde{A}}_{22} + {\tilde{A}_{d4}}) + \bigl(\tilde{A}_{22}^{T} + \tilde{A}_{d4}^{T}\bigr){P_{22}} + {Z_{22}} < 0, $$

together with \(Z > 0\), which implies that \({\tilde{A}_{22}} + {\tilde{A}_{d4}}\) is non-singular. Hence, the pairs \(( {\bar{E},\tilde{A}} )\) and \(( {\bar{E},\tilde{A} + {{\tilde{A}}_{d}}} )\) are regular and impulse-free. According to Definition 1, system (7) is regular and impulse-free.

Next, we show the exponential stability of system (7). Choose the following Lyapunov functional

$$ V({\tilde{x}_{t}}) = \sum_{i = 1}^{5} {V_{i}} ( {{{\tilde{x}}_{t}}} ), $$
(10)

where

$$\begin{aligned}& {V_{1}}({{\tilde{x}}_{t}}) = {{\tilde{x}}^{T}} ( t ){{\bar{E}}^{T}}P\tilde{x} ( t ), \\& {V_{2}} ( {{{\tilde{x}}_{t}}} ) = \int _{t - {d_{1}}}^{t} {{{\tilde{x}}^{T}} ( s ){Q_{1}}\tilde{x} ( s )\,ds} + \int_{t - {d_{2}}}^{t - {d_{1}}} {{{\tilde{x}}^{T}} ( s ){Q_{2}}\tilde{x} ( s )\,ds} + \int_{t - d ( t )}^{t - {d_{1}}} {{{{\tilde{x}}}^{T}} ( s ){Q_{3}}\tilde{x} ( s )\,ds}, \\& {V_{3}} ( {{{\tilde{x}}_{t}}} ) = {d_{1}}\int_{ - {d_{1}} }^{0} {\int _{t + \theta}^{t} {{{\dot{\tilde{x}}}^{T}} ( s )} } {{\bar{E}}^{T}} {Z_{1}}\bar{E}\dot{\tilde{x}} ( s )\,ds\,d\theta + {d_{12}}\int_{ - {d_{2}} }^{ - {d_{1}}} {\int _{ t + \theta}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s )} } {{\bar{E}}^{T}} {Z_{2}}\bar{E}\dot{\tilde{x}} ( s )\,ds\,d\theta, \\& {V_{4}} ( {{{{\tilde{x}}}_{t}}} ) = {d_{1}}\int_{ - {d_{1}} }^{0} {\int _{t + \theta}^{t} {{{{\tilde{x}}}^{T}} ( s )} } {Z_{3}} {\tilde{x}} ( s )\,ds\,d\theta + {d_{12}}\int_{ - {d_{2}} }^{ - {d_{1}}} {\int _{t + \theta}^{t} {{{{\tilde{x}}}^{T}} ( s )} } {Z_{4}} {\tilde{x}} ( s )\,ds\,d\theta, \\& \begin{aligned}[b] {V_{5}} ( {{{{\tilde{x}}}_{t}}} ) ={}& \frac{{{d_{1}}^{2}}}{2}\int_{ - {d_{1}}}^{0} {\int _{\theta}^{0} {\int_{t + \lambda}^{t} {{{\dot{\tilde{x}}}^{T}} ( s )} } } {{\bar{E}}^{T}} {R_{1}}\bar{E}\dot{ \tilde{x}} ( s )\,ds\,d\lambda \,d\theta \\ &{} + {d_{s}}\int_{ - {d_{2}}}^{ - {d_{1}}} {\int _{\theta}^{0} {\int_{t + \lambda}^{t} {{{\dot{\tilde{x}}}^{T}} ( s )} } } {{\bar{E}}^{T}} {R_{2}}\bar{E}\dot{\tilde{x}} ( s )\,ds\,d\lambda\,d\theta, \end{aligned} \end{aligned}$$

where \({\tilde{x}_{t}} = \tilde{x} ( {t + \alpha} )\), \(- {d_{2}} \le\alpha \le0\).

Then the time-derivative of \(V ( {{{\tilde{x}}_{t}}} )\) along the solution of system (7) gives

$$\begin{aligned}& {\dot{V}_{1}} ( {{{\tilde{x}}_{t}}} ) = 2{\tilde{x}^{T}} ( t ){\bar{E}^{T}}P\dot{\tilde{x}} ( t ), \\& \begin{aligned}[b] {{\dot{V}}_{2}} ( {{{\tilde{x}}_{t}}} ) \le{}&{{\tilde{x}}^{T}} ( t ){Q_{1}}\tilde{x} ( t ) + {{\tilde{x}}^{T}} ( {t - {d_{1}}} ) ( - {Q_{1}} + {Q_{2}} + {Q_{3}}){\tilde{x}} ( {t - {d_{1}}} ) \\ &{}- {{\tilde{x}}^{T}} ( {t - {d_{2}}} ){Q_{2}} { \tilde{x}} ( {t - {d_{2}}} ) - ( {1 - \mu} ){{\tilde{x}}^{T}} \bigl( {t - d ( t )} \bigr){Q_{3}}\tilde{x} \bigl( {t - d ( t )} \bigr), \end{aligned} \\& \begin{aligned}[b] {{\dot{V}}_{3}} ( {{{\tilde{x}}_{t}}} ) ={}& {{\dot{\tilde{x}}}^{T}} ( t ){{\bar{E}}^{T}} \bigl( {{d_{1}^{2}} {Z_{1}} + {d_{12}^{2}} {Z_{2}}} \bigr)\bar{E}\dot{\tilde{x}} ( t ) - {d_{1}}\int _{ t - {d_{1}}}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s ){{\bar{E}}^{T}}} {Z_{1}}\bar{E}\dot{\tilde{x}} ( s )\,ds \\ &{}- {d_{12}}\int_{ t - {d_{2}}}^{ t - {d_{1}}} {{{\dot{ \tilde{x}}}^{T}} ( s ){{\bar{E}}^{T}}} {Z_{2}}\bar{E} \dot {\tilde{x}} ( s )\,ds , \end{aligned} \\& \begin{aligned}[b] {{\dot{V}}_{4}} ( {{{\tilde{x}}_{t}}} ) ={}& {{\tilde{x}}^{T}} ( t ) \bigl( {{d_{1}^{2}} {Z_{3}} + {d_{12}^{2}} {Z_{4}}} \bigr) \tilde{x} ( t ) - {d_{1}}\int_{ t - {d_{1}}}^{ t} {{{\tilde{x}}^{T}} ( s )} {Z_{3}}\tilde{x} ( s )\,ds\\ &{}- {d_{12}}\int_{ t - {d_{2}}}^{ t - {d_{1}}} {{{\tilde{x}}^{T}} ( s )} {Z_{4}} {\tilde{x}} ( s )\,ds, \end{aligned} \\& \begin{aligned}[b] {{\dot{V}}_{5}} ( {{{\tilde{x}}_{t}}} ) ={}& {{\dot{\tilde{x}}}^{T}} ( t ){{\bar{E}}^{T}} \biggl( { \frac{{{d_{1}^{4}}}}{4}{R_{1}} + {d_{s}^{2}} {R_{2}}} \biggr)\bar{E}\dot{\tilde{x}} ( t ) - \frac{{{d_{1}}^{2}}}{2}\int _{ - {d_{1}}}^{ 0} {\int_{ t + \theta}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s )} {{\bar{E}}^{T}}} {R_{1}}\bar{E}\dot{\tilde{x}} ( s )\,ds\,d\theta \\ &{}- {d_{s}}\int_{ - {d_{2}}}^{ - {d_{1}}} {\int _{ t + \theta}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s )} {{\bar{E}}^{T}}} {R_{2}}\bar{E}\dot{\tilde{x}} ( s )\,ds\,d\theta. \end{aligned} \end{aligned}$$

Applying Jensen’s integral inequality together with Lemma 1 and Lemma 2, we get

$$\begin{aligned}& - {d_{1}}\int_{ t - {d_{1}}}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s ){{\bar{E}}^{T}}} {Z_{1}}\bar{E}\dot{ \tilde{x}} ( s )\,ds\\& \quad\le \begin{bmatrix} {\tilde{x} ( t )} \\ {\tilde{x} ( {t - {d_{1}}} )} \end{bmatrix}^{T} \begin{bmatrix} { - {{\bar{E}}^{T}}{Z_{1}}\bar{E}} & {{{\bar{E}}^{T}}{Z_{1}}\bar{E}} \\ {*} & { - {{\bar{E}}^{T}}{Z_{1}}\bar{E}} \end{bmatrix} \begin{bmatrix} {\tilde{x} ( t )} \\ {\tilde{x} ( {t - {d_{1}}} )} \end{bmatrix}, \\& - {d_{12}}\int_{ t - {d_{2}}}^{ t - {d_{1}}} {{{\dot{\tilde{x}}}^{T}} ( s ){{\bar{E}}^{T}}} {Z_{2}}\bar{E}\dot{ \tilde{x}} ( s )\,ds \\& \quad\le \begin{bmatrix} {\tilde{x} ( {t - {d_{1}}} )} \\ {\tilde{x} ( {t - d ( t )} )} \\ {\tilde{x} ( {t - {d_{2}}} )} \end{bmatrix}^{T} \begin{bmatrix} { - {{\bar{E}}^{T}}{Z_{2}}\bar{E}} & {{{\bar{E}}^{T}}{Z_{2}}\bar{E}} & 0 \\ {*} & { - 2{{\bar{E}}^{T}}{Z_{2}}\bar{E}} & {{{\bar{E}}^{T}}{Z_{2}}\bar{E}} \\ {*} & * & { - {{\bar{E}}^{T}}{Z_{2}}\bar{E}} \end{bmatrix} \begin{bmatrix} {\tilde{x} ( {t - {d_{1}}} )} \\ {\tilde{x} ( {t - d ( t )} )} \\ {\tilde{x} ( {t - {d_{2}}} )} \end{bmatrix}, \\& - \frac{{{d_{1}}^{2}}}{2}\int_{ - {d_{1}}}^{ 0} {\int _{ t + \theta}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s )} {{\bar{E}}^{T}}} {R_{1}}\bar{E}\dot{\tilde{x}} ( s )\,ds\,d\theta \\& \quad\le\int_{ - {d_{1}}}^{ 0} {\int_{ t + \theta}^{ t} {{{\dot {\tilde{x}}}^{T}} ( s )} } \,ds\,d\theta \bigl( { - {{\bar{E}}^{T}} {R_{1}}\bar{E}} \bigr)\int_{ - {d_{1}}}^{ 0} {\int_{ t + \theta }^{ t} {\dot{\tilde{x}} ( s )} } \,ds\,d\theta \\& \quad\le \begin{bmatrix} {\tilde{x} ( t )} \\ {\int_{ t - {d_{1}}}^{ t} {\tilde{x} ( s )\,ds} } \end{bmatrix}^{T} \begin{bmatrix} { - {d_{1}}^{2}{{\bar{E}}^{T}}{R_{1}}\bar{E}} & {{d_{1}}{{\bar{E}}^{T}}{R_{1}}\bar{E}} \\ {*} & { - {{\bar{E}}^{T}}{R_{1}}\bar{E}} \end{bmatrix} \begin{bmatrix} {\tilde{x} ( t )} \\ {\int_{ t - {d_{1}}}^{ t} {\tilde{x} ( s )\,ds} } \end{bmatrix}, \\& - {d_{s}}\int_{ - {d_{2}}}^{ - {d_{1}}} {\int _{ t + \theta}^{ t} {{{\dot{\tilde{x}}}^{T}} ( s )} {{\bar{E}}^{T}}} {R_{2}}\bar{E}\dot {\tilde{x}} ( s )\,ds\,d\theta \\& \quad\le\int_{ - {d_{2}}}^{ - {d_{1}}} {\int_{ t + \theta}^{ t} {{{\dot {\tilde{x}}}^{T}} ( s )} } \,ds\,d\theta \bigl( { - {{\bar{E}}^{T}} {R_{2}}\bar{E}} \bigr)\int_{ - {d_{2}}}^{ - {d_{1}}} {\int_{ t + \theta }^{ t} {\dot{\tilde{x}} ( s )} } \,ds\,d\theta \\& \quad\le \begin{bmatrix} {\tilde{x} ( t )} \\ {\int_{ t - {d_{2}}}^{ t - {d_{1}}} {\tilde{x} ( s )\,ds} } \end{bmatrix}^{T} \begin{bmatrix} { - {d_{12}}^{2}{{\bar{E}}^{T}}{R_{2}}\bar{E}} & {{d_{12}}{{\bar{E}}^{T}}{R_{2}}\bar{E}} \\ {*} & { - {{\bar{E}}^{T}}{R_{2}}\bar{E}} \end{bmatrix} \begin{bmatrix} {\tilde{x} ( t )} \\ {\int_{ t - {d_{2}}}^{ t - {d_{1}}} {\tilde{x} ( s )\,ds} } \end{bmatrix}. \end{aligned}$$

Then we have

$$\dot{V} ( {{{\tilde{x}}_{t}}} ) + {e^{T}} ( t )e ( t ) - { \gamma^{2}} {\omega^{T}} ( t )\omega ( t ) \le{\xi ^{T}} ( t ) \bigl({\Sigma_{0}} + \Pi_{1}^{T}W{ \Pi_{1}} + \Pi_{2}^{T}{\Pi _{2}}\bigr)\xi ( t ), $$

where

$$\begin{aligned}& {\xi^{T}} ( t ) = \biggl[ {{{\tilde{x}}^{T}} ( t )\quad {{\tilde{x}}^{T}} \bigl( {t - d ( t )} \bigr)\quad {{\tilde{x}}^{T}} ( {t - {d_{1}}} ) } \quad {{\tilde{x}}^{T}} ( {t - {d_{2}}} ) \quad \int_{t - {d_{1}}}^{t} {{{\tilde{x}}^{T}} ( s )\,ds} \\& \hphantom{{\xi^{T}} ( t ) = \biggl[} { \int_{t - {d_{2}}}^{t - {d_{1}}} {{{\tilde{x}}^{T}} ( s )\,ds}\quad { \omega^{T}} ( t )} \biggr], \\& {\Pi_{1}} = [ {\tilde{A}} \quad {{{\tilde{A}}_{d}}} \quad 0 \quad 0 \quad 0 \quad 0 \quad {\tilde{B}} ],\qquad {\Pi_{2}} = [ {\tilde{L}} \quad {{{\tilde{L}}_{d}}} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 ], \\& {\Sigma_{0}}= \begin{bmatrix} {{\Sigma_{11}}} & {{\Sigma_{12}}} & {{\Sigma_{13}}} & 0 & {{d_{1}}{{\bar{E}}^{T}}{R_{1}}\bar{E}} & {{d_{12}}{{\bar{E}}^{T}}{R_{2}}\bar{E}} & {{P^{T}}\tilde{B}} \\ {*} & {{\Sigma_{22}}} & {{\Sigma_{23}}} & {{\Sigma_{24}}} & 0 & 0 & 0 \\ {*} & * & {{\Sigma_{33}}} & 0 & 0 & 0 & 0 \\ {*} & * & * & {{\Sigma_{44}}} & 0 & 0 & 0 \\ {*} & * & * & * & {{\Sigma_{55}}} & 0 & 0 \\ {*} & * & * & * & * & {{\Sigma_{66}}} & 0 \\ {*} & * & * & * & * & * & { - I} \end{bmatrix}. \end{aligned}$$

Now, applying the Schur complements, when \(\omega ( t ) = 0\), it is easy to see from (9) that there exists a scalar \(\delta_{0}>0\) such that for any \(t \ge{d_{2}}\),

$$ \dot{V}(\tilde{x}_{t}) \leq -\delta_{0}\bigl\| \tilde{x}(t)\bigr\| ^{2}. $$
(11)

Moreover, by the definition of \(V ( {{{\tilde{x}}_{t}}} )\), there exist positive scalars \({\delta_{1}} > 0\), \({\delta_{2}} > 0\), \(\varepsilon > 0\) such that

$$V ( {{{\tilde{x}}_{t}}} ) \le\varepsilon{\delta_{1}} {\bigl\Vert {\tilde{x} ( t )} \bigr\Vert ^{2}} + \varepsilon{ \delta_{2}}\int_{t - {d_{2}}}^{t} {{{\bigl\Vert { \tilde{x} ( s )} \bigr\Vert }^{2}}\,ds}. $$

Now, we have

$$\begin{aligned} \frac{d}{{dt}} \bigl[ {{e^{\varepsilon t}}V ( {{{\tilde{x}}_{t}}} )} \bigr] &= {e^{\varepsilon t}} \bigl[ {\varepsilon V ( {{{\tilde{x}}_{t}}} ) + \dot{V} ( {{{\tilde{x}}_{t}}} )} \bigr] \\ &\le{e^{\varepsilon t}} \biggl[ { ( {\varepsilon{\delta_{1}} - { \delta_{0}}} ){{\bigl\Vert {\tilde{x} ( t )} \bigr\Vert }^{2}} + \varepsilon{\delta_{2}}\int_{t - {d_{2}}}^{t} {{{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}\,ds} } \biggr]. \end{aligned}$$

Integrating both sides from 0 to \(T > 0\) gives

$${e^{\varepsilon T}}V ( {{{\tilde{x}}_{T}}} ) - V ( {{{\tilde{x}}_{0}}} ) \le ( {\varepsilon{\delta_{1}} - { \delta_{0}}} )\int_{0}^{T} {{e^{\varepsilon t}} {{\bigl\Vert {\tilde{x} ( t )} \bigr\Vert }^{2}}}\,dt + \varepsilon{\delta_{2}}\int_{0}^{T} {{e^{\varepsilon t}}}\,dt\int_{t - {d_{2}}}^{t} {{{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds. $$

By interchanging the integration sequence, we can get that

$$\begin{aligned}& \int_{0}^{T} {{e^{\varepsilon t}}}\,dt\int _{t - {d_{2}}}^{t} {{{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds \\& \quad= \int_{ - {d_{2}}}^{0} {{{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds\int_{0}^{s + {d_{2}}} {{e^{\varepsilon t}}}\,dt \\& \qquad{}+ \int_{0}^{T - {d_{2}}} {{{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds\int_{s}^{s + {d_{2}}} {{e^{\varepsilon t}}}\,dt + \int_{T - {d_{2}}}^{T} {{{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds\int _{s}^{T} {{e^{\varepsilon t}}}\,dt \\& \quad\le\int_{ - {d_{2}}}^{0} {{d_{2}} {e^{\varepsilon ( {s + {d_{2}}} )}} {{ \bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds + \int _{0}^{T - {d_{2}}} {{d_{2}} {e^{\varepsilon ( {s + {d_{2}}} )}} {{ \bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds + \int_{T - {d_{2}}}^{T} {{d_{2}} {e^{\varepsilon ( {s + {d_{2}}} )}} {{\bigl\Vert {\tilde{x} ( s )} \bigr\Vert }^{2}}}\,ds \\& \quad= {d_{2}} {e^{\varepsilon{d_{2}}}} {\int_{ - {d_{2}}}^{T} {{e^{\varepsilon s}}\bigl\Vert {\tilde{x} ( s )} \bigr\Vert } ^{2}}\,ds. \end{aligned}$$

Let the scalar \(\varepsilon> 0\) be small enough such that \(\varepsilon{\delta_{1}} - {\delta_{0}} + {d_{2}}\varepsilon{\delta _{2}}{e^{\varepsilon{d_{2}}}} < 0\). Then we get that there exists a scalar \(\kappa> 0\) such that

$${e^{\varepsilon T}}V ( {{{\tilde{x}}_{T}}} ) \le V ( {{{\tilde{x}}_{0}}} ) + \bigl[ {\varepsilon{\delta_{1}} - { \delta_{0}} + {d_{2}}\varepsilon {\delta_{2}} {e^{\varepsilon{d_{2}}}}} \bigr]\int_{0}^{T} {{e^{\varepsilon t}} {{\bigl\Vert {\tilde{x} ( t )} \bigr\Vert }^{2}}}\,dt \le\kappa\bigl\Vert {\phi ( \theta )} \bigr\Vert _{{d_{2}}}^{2}. $$

It is not difficult to see that, for any \(T > 0\),

$$ V ( {{{\tilde{x}}_{T}}} ) \le\kappa{e^{ - \varepsilon T}} \bigl\Vert {\phi ( \theta )} \bigr\Vert _{{d_{2}}}^{2}. $$
(12)

Note that the regularity and the absence of impulses of the pair \(( {\bar{E},\tilde{A}} )\) imply that there always exist two non-singular matrices M and N such that

$$\begin{aligned}& M\bar{E}N = \begin{bmatrix} {I_{r}} & 0 \\ 0 & 0 \end{bmatrix},\qquad M\tilde{A}N = \begin{bmatrix} {A_{1}} & 0 \\ 0 & {I_{n-r}} \end{bmatrix}, \end{aligned}$$
(13)
$$\begin{aligned}& \begin{aligned} &M{{\tilde{A}}_{d}}N = \begin{bmatrix} {{A_{d1}}} & {{A_{d2}}} \\ {{A_{d3}}} & {{A_{d4}}} \end{bmatrix},\qquad {M^{ - T}}PN = \begin{bmatrix} {{P_{1}}} & {{P_{2}}} \\ {{P_{3}}} & {{P_{4}}} \end{bmatrix}, \\ &{N^{T}} {Q_{1}}N = \begin{bmatrix} {{Q_{11}}} & {{Q_{12}}} \\ {Q_{12}^{T}} & {{Q_{22}}} \end{bmatrix}. \end{aligned} \end{aligned}$$
(14)
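As a concrete illustration of the restricted system equivalence in (13), the sketch below verifies, for hypothetical 2×2 matrices (not taken from the paper), that nonsingular M and N bring a regular, impulse-free pair into the form \(\operatorname{diag}(I_r, 0)\), \(\operatorname{diag}(A_1, I_{n-r})\):

```python
# Toy check of the decomposition (13). E, A, M, N below are hypothetical
# 2x2 examples chosen so that M E N = diag(1, 0) and M A N = diag(2, 1).

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

E = [[1.0, 0.0],
     [0.0, 0.0]]
A = [[2.0, 0.0],
     [0.0, 3.0]]

# For this pair, M = diag(1, 1/3) and N = I already achieve the form (13).
M = [[1.0, 0.0],
     [0.0, 1.0 / 3.0]]
N = [[1.0, 0.0],
     [0.0, 1.0]]

MEN = matmul(matmul(M, E), N)   # [I_r, 0; 0, 0] with r = 1
MAN = matmul(matmul(M, A), N)   # [A_1, 0; 0, I_{n-r}] with A_1 = 2
print(MEN, MAN)
```

In general M and N come from an eigenstructure computation; the point here is only to make the block form of (13) tangible.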

Substituting the partition into (8) yields that \(P_{1}>0\), \(P_{2} = 0\).

Define

$$ \varepsilon(t) = \begin{bmatrix} {\varepsilon_{1}(t)} \\ {\varepsilon_{2}(t)} \end{bmatrix} = N^{-1} \tilde{x}(t). $$
(15)

Using the expressions in (13), (14) and (15), system (7) can be written as

$$ \begin{aligned} &{{\dot{\varepsilon}}_{1}} ( t ) = {A_{1}} {\varepsilon_{1}} ( t ) + {A_{d1}} { \varepsilon_{1}} \bigl( {t - d ( t )} \bigr) + {A_{d2}} { \varepsilon_{2}} \bigl( {t - d ( t )} \bigr), \\ & 0 = {\varepsilon_{2}} ( t ) + {A_{d3}} {\varepsilon _{1}} \bigl( {t - d ( t )} \bigr) + {A_{d4}} {\varepsilon _{2}} \bigl( {t - d ( t )} \bigr). \end{aligned} $$
(16)
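The differential-algebraic structure of (16) can be seen in a minimal scalar simulation: the slow state \(\varepsilon_1\) obeys a delay differential equation, while the fast state \(\varepsilon_2\) is determined algebraically from delayed values. The coefficients below are hypothetical, not taken from the paper; \(|a_{d4}| < 1\) keeps the algebraic recursion stable.

```python
# Forward-Euler simulation of a scalar analogue of (16) with constant delay d.
a1, ad1, ad2 = -2.0, 0.5, 0.1   # slow subsystem coefficients (hypothetical)
ad3, ad4 = 0.3, 0.2             # fast subsystem coefficients, |ad4| < 1
dt, d, T = 0.01, 0.5, 10.0
nd = int(d / dt)                # delay expressed in time steps

e1 = [1.0] * (nd + 1)           # constant history phi(theta) = 1 on [-d, 0]
e2 = [0.0] * (nd + 1)

for _ in range(int(T / dt)):
    e1_d, e2_d = e1[-nd - 1], e2[-nd - 1]          # delayed values at t - d
    e1.append(e1[-1] + dt * (a1 * e1[-1] + ad1 * e1_d + ad2 * e2_d))
    e2.append(-(ad3 * e1_d + ad4 * e2_d))          # algebraic equation of (16)

print(abs(e1[-1]), abs(e2[-1]))  # both decay toward zero
```

Both components decay exponentially, which is exactly the behavior the remainder of the proof establishes for the true system.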

It is easy to show that

$$\begin{aligned} V ( {{{\tilde{x}}_{t}},t} ) &\ge{{\tilde{x}}^{T}} ( t ){{\bar{E}}^{T}}P\tilde{x} ( t ) = {{\tilde{x}}^{T}} ( t ){N^{ - T}} \bigl( {{N^{T}} {{\bar{E}}^{T}} {M^{T}}} \bigr) \bigl( {{M^{ - T}}PN} \bigr){N^{ - 1}} \tilde{x} ( t ) \\ & = {\varepsilon^{T}} ( t ) \begin{bmatrix} {{I_{r}}} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} {{P_{1}}} & {{P_{2}}} \\ {{P_{3}}} & {{P_{4}}} \end{bmatrix} \varepsilon ( t ) = \varepsilon_{1}^{T} ( t ){P_{1}} {\varepsilon_{1}} ( t ) \ge \frac{1}{{\Vert {{P_{1}}^{ - 1}} \Vert }}{\bigl\Vert {{\varepsilon_{1}} ( t )} \bigr\Vert ^{2}}. \end{aligned}$$

Hence, for any \(t \ge{d_{2}}\),

$$ {\bigl\Vert {{\varepsilon_{1}} ( t )} \bigr\Vert ^{2}} \le\bigl\Vert {{P_{1}}^{ - 1}} \bigr\Vert \kappa{e^{ - \varepsilon t}}\bigl\Vert {\phi ( \theta )} \bigr\Vert _{{d_{2}}}^{2}. $$
(17)

Define \(\varsigma ( t ) = {A_{d3}}{\varepsilon _{1}} ( {t - d ( t )} )\); then from (17) a scalar \(m>0\) can be found such that \({\Vert {\varsigma ( t )} \Vert ^{2}} \le m{e^{ - \varepsilon t}}\Vert {\phi ( \theta )} \Vert _{{d_{2}}}^{2}\) for any \(t > 0\).

To study the exponential stability of \(\varepsilon_{2}(t)\), we construct a function as

$$ J ( t ) = \varepsilon_{2}^{T} ( t ){Q_{22}} {\varepsilon_{2}} ( t ) - \varepsilon _{2}^{T} \bigl( {t - d ( t )} \bigr){Q_{22}} { \varepsilon _{2}} \bigl( {t - d ( t )} \bigr). $$
(18)

By pre-multiplying the second equation of (16) with \(\varepsilon_{2}^{T}(t)P_{4}^{T}\), we obtain that

$$ 0 = \varepsilon_{2}^{T} ( t )P_{4}^{T}{\varepsilon_{2}} ( t ) + \varepsilon_{2}^{T} ( t )P_{4}^{T}{A_{d4}} {\varepsilon_{2}} \bigl( {t - d ( t )} \bigr) + \varepsilon_{2}^{T} ( t )P_{4}^{T}\varsigma ( t ). $$
(19)

Adding (19) to (18) and using Lemma 4 yields that

$$\begin{aligned} J ( t ) ={}& \varepsilon_{2}^{T} ( t ) \bigl( {P_{4}^{T} + {P_{4}} + {Q_{22}}} \bigr){\varepsilon_{2}} ( t ) + 2\varepsilon_{2}^{T} ( t )P_{4}^{T}{A_{d4}} {\varepsilon_{2}} \bigl( {t - d ( t )} \bigr) \\ &{} - \varepsilon_{2}^{T} \bigl( {t - d ( t )} \bigr){Q_{22}} {\varepsilon_{2}} \bigl( {t - d ( t )} \bigr) + 2\varepsilon_{2}^{T} ( t )P_{4}^{T} \varsigma ( t ) \\ \le{}& \begin{bmatrix} {{\varepsilon_{2}} ( t )} \\ {{\varepsilon_{2}} ( {t - d ( t )} )} \end{bmatrix}^{T} \begin{bmatrix} {P_{4}^{T} + {P_{4}} + {Q_{22}}} & {P_{4}^{T}{A_{d4}}} \\ {*} & { - {Q_{22}}} \end{bmatrix} \begin{bmatrix} {{\varepsilon_{2}} ( t )} \\ {{\varepsilon_{2}} ( {t - d ( t )} )} \end{bmatrix} \\ &{} + {\gamma_{1}}\varepsilon_{2}^{T} ( t ){ \varepsilon _{2}} ( t ) + \gamma_{1}^{ - 1}{ \varsigma^{T}} ( t ){P_{4}}P_{4}^{T} \varsigma ( t ), \end{aligned}$$
(20)

where \(\gamma_{1}\) is any positive scalar.

Pre-multiplying and post-multiplying

$$ \begin{bmatrix} {\Sigma_{11} } & {\Sigma_{12} } \\ {*} & {\Sigma_{22} } \end{bmatrix}< 0, $$

by

$$ \begin{bmatrix} {N} & {0} \\ {0} & {N} \end{bmatrix}^{T} \quad\mbox{and} \quad \begin{bmatrix} {N} & {0} \\ {0} & {N} \end{bmatrix}, $$

respectively, a scalar \(\gamma_{2}>0\) can be found such that

$$ \begin{bmatrix} {P_{4}^{T} + P_{4} + Q_{22}} & {P_{4}^{T} A_{d4} } \\ {*} & {- Q_{22}} \end{bmatrix} \leq- \begin{bmatrix} {\gamma_{2} I} & {0} \\ 0 & 0 \end{bmatrix}. $$

Then

$$J ( t ) \le ( {{\gamma_{1}} - {\gamma_{2}}} ) \varepsilon_{2}^{T} ( t ){\varepsilon_{2}} ( t ) + \gamma_{1}^{ - 1}{\varsigma^{T}} ( t ){P_{4}}P_{4}^{T}\varsigma ( t ). $$

On the other hand, since \(\gamma_{1}\) can be chosen arbitrarily, it can be chosen small enough that \(\gamma_{2} - \gamma_{1} > 0\). Then a scalar \(\gamma_{3}>1\) can always be found such that

$$ Q_{22} - (\gamma_{1} - \gamma_{2})I \geq \gamma_{3} Q_{22}. $$
(21)

It follows from (18), (20) and (21) that

$$\varepsilon_{2}^{T} ( t ){Q_{22}} {\varepsilon_{2}} ( t ) \le\gamma_{3}^{ - 1}\varepsilon_{2}^{T} \bigl( {t - d ( t )} \bigr){Q_{22}} {\varepsilon_{2}} \bigl( {t - d ( t )} \bigr) + { ( {{\gamma_{1}} {\gamma_{3}}} )^{ - 1}} { \varsigma ^{T}} ( t ){P_{4}}P_{4}^{T} \varsigma ( t ), $$

which implies \(f ( t ) \le\gamma_{3}^{ - 1}\mathop{\sup } _{t - {d_{2}} \le s \le t} f ( s ) + \tau{e^{ - \sigma t}}\), where \(0 < \sigma < \min\{ \varepsilon,d_{2}^{ - 1}\ln{\gamma_{3}}\} \), \(f ( t ) = \varepsilon _{2}^{T} ( t ){Q_{22}}{\varepsilon_{2}} ( t )\) and \(\tau = { ( {{\gamma_{1}}{\gamma_{3}}} )^{ - 1}}m{\Vert {{P_{4}}} \Vert ^{2}}\Vert {\phi ( \theta )} \Vert _{{d_{2}}}^{2}\).
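The decay mechanism behind this Halanay-type inequality can be checked numerically. The sketch below iterates a discrete analogue of \(f(t) \le \gamma_3^{-1}\sup_{t-d_2\le s\le t} f(s) + \tau e^{-\sigma t}\) with assumed constants (the values of \(\gamma_3\), τ, σ, \(d_2\) are illustrative only) and confirms that f decays at roughly the rate \(e^{-\sigma t}\):

```python
import math

# Discrete analogue of the Halanay-type inequality with assumed constants.
g3_inv = 0.5                # gamma_3^{-1}, i.e. gamma_3 = 2
tau, sigma = 1.0, 0.3       # sigma < d2^{-1} * ln(gamma_3) = 2 ln 2, as required
dt, d2 = 0.01, 0.5
nd = int(d2 / dt)

f = [1.0] * (nd + 1)        # initial data on [-d2, 0]
for k in range(int(20.0 / dt)):
    t = (k + 1) * dt
    # worst case: satisfy the inequality with equality at every step
    f.append(g3_inv * max(f[-nd - 1:]) + tau * math.exp(-sigma * t))

print(f[-1])                # small: f has decayed roughly like e^{-sigma t}
```

The condition \(\sigma < d_2^{-1}\ln\gamma_3\) is what keeps the delayed supremum term from overtaking the forced decay.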

Therefore, applying Lemma 3, we can obtain that

$${\bigl\Vert {{\varepsilon_{2}} ( t )} \bigr\Vert ^{2}} \le\lambda_{\min }^{ - 1} ( {{Q_{22}}} ){ \lambda_{\max}} ( {{Q_{22}}} ){e^{ - \sigma t}}\mathop{\sup} _{ - {d_{2}} \le s \le0} {\bigl\Vert {{\varepsilon _{2}} ( s )} \bigr\Vert ^{2}} + \frac{{\lambda_{\min}^{ - 1} ( {{Q_{22}}} )\tau{e^{ - \sigma t}}}}{{1 - \gamma _{3}^{ - 1}{e^{\sigma{d_{2}}}}}}, $$

which, combined with (17), shows that system (16), and hence system (7), is exponentially stable.

Finally, we show that for any nonzero \(\omega(t) \in {L_{2}}[0,\infty)\), system (6) satisfies \(\|e(t)\|_{2} < \gamma\|\omega(t)\|_{2}\) under the zero initial condition.

Observing (9), we have

$$\dot{V} ( {{{\tilde{x}}_{t}}} ) + {e^{T}} ( t )e ( t ) - { \gamma^{2}} {\omega^{T}} ( t )\omega ( t ) < 0. $$

Integrating both sides from 0 to \(T > 0\) gives

$$V ( T ) - V ( 0 ) + {\int_{ 0}^{ T} {\bigl\Vert {e ( t )} \bigr\Vert } ^{2}}\,dt - {\int_{ 0}^{ T} {{\gamma^{2}}\bigl\Vert {\omega ( t )} \bigr\Vert } ^{2}}\,dt < 0. $$

Since \(V ( 0 ) = 0\) and \(V ( T ) \ge0\), letting \(T \to\infty\) yields

$${\int_{ 0}^{ \infty} {\bigl\Vert {e ( t )} \bigr\Vert } ^{2}}\,dt < {\int_{ 0}^{ \infty} {{ \gamma^{2}}\bigl\Vert {\omega ( t )} \bigr\Vert } ^{2}}\,dt, $$

which implies \({\Vert {e ( t )} \Vert _{2}} < \gamma{ \Vert {\omega ( t )} \Vert _{2}}\) for any nonzero \(\omega ( t ) \in {L_{2}}[0,\infty)\). This completes the proof. □

Remark 2

It should be pointed out that LKF (10) is constructed using triple-integral terms together with both the lower and upper bounds of the time-varying delay interval, which decreases the conservatism of the result.

Remark 3

In [17], \(\int_{ - h}^{ 0} {\int_{ t + \theta}^{ t} {{{\dot{\xi}}^{T}} ( s )} } Z\dot{\xi}( s )\,ds\,d\theta\) was estimated as

$$h{\dot{\xi}^{T}} (t )Z\dot{\xi}( t ) - \int_{ t - d ( t )}^{ t} {{{\dot{\xi}}^{T}} ( s )Z\dot{\xi}( s )}\,ds, $$

and the term \(- \int_{ t - h}^{ t-d ( t )} {{{\dot{\xi}}^{T}} ( s )Z\dot{\xi}( s )}\,ds\) was ignored, which may lead to considerable conservativeness. In this paper, by using Lemma 1, the shortcoming can be avoided and a less conservative result can be obtained.

3.2 Delay-dependent filter design

Based on the sufficient conditions above, the design problem of robust \(H_{\infty}\) filter can be transformed into a problem of LMIs.
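To make the LMI viewpoint concrete before stating the theorem, the following sketch solves the simplest member of this family, the Lyapunov inequality \(A^{T}P + PA < 0\), for a hypothetical stable 2×2 matrix by turning the equality version \(A^{T}P + PA = -I\) into linear equations for the entries of the symmetric P. The full conditions (22) and (23) are handled the same way in principle, only by a semidefinite-programming solver instead of a hand solve.

```python
# Solve A^T P + P A = -I for symmetric P = [[p11, p12], [p12, p22]].
# A is a hypothetical stable matrix (upper triangular to keep the solve short).
a11, a12, a21, a22 = -1.0, 1.0, 0.0, -2.0

# Expanding A^T P + P A entrywise gives three linear equations:
#   (1,1): 2*a11*p11 + 2*a21*p12                = -1
#   (1,2): a12*p11 + (a11 + a22)*p12 + a21*p22  =  0
#   (2,2): 2*a12*p12 + 2*a22*p22                = -1
p11 = -1.0 / (2.0 * a11)                 # a21 = 0 makes the system triangular
p12 = -a12 * p11 / (a11 + a22)
p22 = (-1.0 - 2.0 * a12 * p12) / (2.0 * a22)

# P > 0 certifies exponential stability of x' = A x
assert p11 > 0 and p11 * p22 - p12 * p12 > 0
print(p11, p12, p22)
```

Feasibility of such a P is exactly what the LMIs of Theorem 2 certify for the filtering error system.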

Theorem 2

Given a scalar \(\gamma> 0\), the exponential \(H_{\infty}\) filter problem is solved for system (6) if there exist constant scalars \(\varepsilon_{ij}> 0\), symmetric matrices \({\mathop {\mathit {Q}}\limits ^{\frown }} _{1}\), \({\mathop {\mathit {Q}}\limits ^{\frown }} _{2}\), \({\mathop {\mathit {Q}}\limits ^{\frown }} _{3}\), \({\mathop {\mathit {R}}\limits ^{\frown }} _{1}\), \({\mathop {\mathit {R}}\limits ^{\frown }} _{2}\), \({\mathop {\mathit {Z}}\limits ^{\frown }} _{1}\), \({\mathop {\mathit {Z}}\limits ^{\frown }} _{2}\), and a matrix \(P = \operatorname{diag} ( {{P_{1}},{P_{2}}} )\), \(P> 0\), such that

$$\begin{aligned}& E^{T} P_{1} = P_{1}^{T} E \ge0,\qquad E^{T} P_{2} = P_{2}^{T} E \ge0, \end{aligned}$$
(22)
$$\begin{aligned}& \begin{aligned} &{\Psi_{ii}} < 0, \quad i = 1,2,\ldots,r, \\ &{\Psi_{ij}} + {\Psi_{ji}} < 0, \quad i < j, i,j = 1,2, \ldots,r, \end{aligned} \end{aligned}$$
(23)

where

$$\begin{aligned}& {\Psi_{ij}} = \begin{bmatrix} {{\Omega_{ij}}} & {{\Gamma_{1}}} & {{\varepsilon_{ij}}\Gamma_{2}^{T}} \\ {*} & { - {\varepsilon_{ij}}I} & 0 \\ {*} & * & { - {\varepsilon_{ij}}I} \end{bmatrix},\qquad {\Omega_{ij}} = \begin{bmatrix} {{\Omega_{11ij}}} & {{\Omega_{12ij}}} & {{\Omega_{13ij}}} \\ {*} & {{\Omega_{22ij}}} & 0 \\ {*} & * & {{\Omega_{33ij}}} \end{bmatrix}, \\& {\Omega_{11ij}} = \begin{bmatrix} {{\omega_{11}}} & {{\omega_{12}}} & {{\omega_{13}}} & 0 & {{\omega _{15}}} & {{\omega_{16}}} \\ {*} & {{\omega_{22}}} & {{\omega_{23}}} & {{\omega_{24}}} & {{\omega_{25}}} & {{\omega_{26}}} \\ {*} & * & {{\omega_{33}}} & {{\omega_{34}}} & {{\omega_{35}}} & {{\omega_{36}}} \\ {*} & * & * & {{\omega_{44}}} & {{\omega_{45}}} & {{\omega_{46}}} \\ {*} & * & * & * & {{\omega_{55}}} & {{\omega_{56}}} \\ {*} & * & * & * & * & {{\omega_{66}}} \end{bmatrix},\qquad {\Omega_{13ij}} = \begin{bmatrix} {{\omega_{113}}} & {{\omega_{114}}} & {L_{1i}^{T}} \\ {{\omega_{213}}} & {{\omega_{214}}} & { - L_{fj}^{T}} \\ 0 & {{\omega_{314}}} & {L_{2i}^{T}} \\ 0 & {C_{fj}^{T}} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \\& {\Omega_{12ij}} = \begin{bmatrix} 0 & 0 & {{\omega_{19}}} & {{\omega_{110}}} & {{\omega_{111}}} & {{\omega_{112}}} \\ 0 & 0 & {{\omega_{29}}} & {{\omega_{210}}} & {{\omega_{211}}} & {{\omega_{212}}} \\ {{\omega_{37}}} & {{\omega_{38}}} & 0 & 0 & 0 & 0 \\ {{\omega_{47}}} & {{\omega_{48}}} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \\& {\Omega_{22ij}} = \begin{bmatrix} {{\omega_{77}}} & {{\omega_{78}}} & 0 & 0 & 0 & 0 \\ {*} & {{\omega_{88}}} & 0 & 0 & 0 & 0 \\ {*} & * & {{\omega_{99}}} & {{\omega_{910}}} & 0 & 0 \\ {*} & * & * & {{\omega_{1{,}010}}} & 0 & 0 \\ {*} & * & * & * & {{\omega_{1{,}111}}} & {{\omega_{1{,}112}}} \\ {*} & * & * & * & * & {{\omega_{1{,}212}}} \end{bmatrix},\qquad {\Omega_{33ij}} = \begin{bmatrix} { - {\gamma^{2}}I} & {{\omega_{1{,}314}}} & 0 \\ {*} & { - Q} & 0 \\ {*}& * & { - I} \end{bmatrix}, 
\\& {\Gamma_{1}} = \begin{bmatrix} {P_{1}^{T}{H_{1i}}} & 0 & 0 \\ {{M_{2j}}{H_{2i}}} & 0 & 0 \\ {{0_{11 \times1}}} & {{0_{11 \times1}}} & {{0_{11 \times1}}} \\ {{H_{1i}} + {B_{fj}}{H_{2i}}} & 0 & 0 \\ 0 & {{H_{3i}}} & {{H_{4i}}} \end{bmatrix}, \qquad{\Gamma_{2}} = \begin{bmatrix} {{E_{1i}}} & 0 & {{E_{2i}}} & {{0_{1 \times9}}} & {{E_{3i}}} & 0 & 0\\ {{E_{4i}}} & 0 & 0 & {{0_{1 \times9}}} & 0 & 0 & 0 \\ 0 & 0 & {{E_{5i}}} & {{0_{1 \times9}}} & 0 & 0 & 0 \end{bmatrix}, \\& {\omega_{11}} = P_{1}^{T}{A_{1i}} + A_{1i}^{T}{P_{1}} - {E^{T}} { {\mathop {\mathit {Z}}\limits ^{\frown }}}_{1}E + {{\mathop {\mathit {Q}}\limits ^{\frown }}_{1}} - {E^{T}}{\mathop {\mathit {Y}}\limits ^{\frown }} E + {\mathop {\mathit {Z}}\limits ^{\frown }} , \\& {\omega_{12}} = C_{1i}^{T}M_{2j}^{T} - {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{1}}E + {\mathop {\mathit {Q}}\limits ^{\frown }}_{1} - {E^{T}}{\mathop {\mathit {Y}}\limits ^{\frown }} E + {\mathop {\mathit {Z}}\limits ^{\frown }} , \qquad {\omega_{13}} = P_{1}^{T}{A_{2i}},\\& {\omega_{15}}={\omega _{25}}={\omega_{16}}={ \omega_{26}}= {E^{T}}{\mathop {\mathit {Z}}\limits ^{\frown }} E, \qquad{ \omega_{111}} = {\omega_{211}} = {\omega_{112}} = { \omega_{212}} = {d_{12}} {E^{T}} { {\mathop {\mathit {R}}\limits ^{\frown }}_{2}}E, \\& {\omega_{113}} = P_{1}^{T}{B_{i}}, \qquad {\omega_{114}} = A_{1i}^{T} + C_{1i}^{T}B_{fj}^{T},\qquad{ \omega_{19}} = {\omega _{29}} = {\omega_{110}} = { \omega_{210}} = {d_{1}} {E^{T}} { {\mathop {\mathit {R}}\limits ^{\frown }}}_{1}E, \\& {\omega_{22}} = M_{1j}^{T} + {M_{1j}} - {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{1}}E + {\mathop {\mathit {Q}}\limits ^{\frown }}_{1} - {E^{T}}{\mathop {\mathit {Y}}\limits ^{\frown }} E + {\mathop {\mathit {Z}}\limits ^{\frown }} ,\qquad {\omega_{23}} = {M_{2j}} {C_{2i}},\qquad {\omega_{24}} = {M_{3j}}, \\& {\omega_{213}} = {M_{2j}} {D_{i}},\qquad{ \omega_{214}} = A_{fj}^{T},\qquad {\omega_{33}} = {\omega_{34}} = {\omega_{44}} = - ( {1 - \mu} ){ {\mathop {\mathit {Q}}\limits ^{\frown }}}_{3} - 2{E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{2}}E, \\& {\omega_{35}} = {\omega_{36}} = {\omega_{45}} = {\omega_{46}} = {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{2}}E,\qquad { \omega_{77}} = {\omega_{78}} = {\omega_{88}} = - {{\mathop {\mathit {Q}}\limits ^{\frown }}_{2}} - {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{2}}E, \\& {\omega_{37}} = {\omega_{38}} = {\omega_{47}} = {\omega_{48}} = {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{2}}E,\qquad{ \omega_{314}} = A_{2i}^{T} + C_{2i}^{T}B_{fj}^{T}, \\& {\omega_{55}} = {\omega_{56}} = {\omega_{66}} = - {{\mathop {\mathit {Q}}\limits ^{\frown }} _{1}} + {{\mathop {\mathit {Q}}\limits ^{\frown }}_{2}} + {{\mathop {\mathit {Q}}\limits ^{\frown }} _{3}} - {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{1}}E - {E^{T}} {{\mathop {\mathit {Z}}\limits ^{\frown }}_{2}}E, \\& {\omega_{{99}}} = {\omega_{{910}}} = {\omega_{{1{,}010}}} = - {E^{T}} {{\mathop {\mathit {R}}\limits ^{\frown }}_{1}}E - {{\mathop {\mathit {Z}}\limits ^{\frown }}_{3}}, \qquad{\omega_{{1{,}111}}} = {\omega_{{1{,}112}}} = {\omega _{{1{,}212}}} = - {E^{T}} {{\mathop {\mathit {R}}\limits ^{\frown }}_{2}}E - { {\mathop {\mathit {Z}}\limits ^{\frown }}_{4}}, \\& {\omega_{1{,}314}} = B_{i}^{T} + D_{i}^{T}B_{fj}^{T}, \qquad Q = {{\hat{W}}^{ - 1}},\qquad {\mathop {\mathit {Y}}\limits ^{\frown }} = d_{1}^{2} {\mathop {\mathit {R}}\limits ^{\frown }}_{1} + d_{12}^{2}{\mathop {\mathit {R}}\limits ^{\frown }} _{2}, \qquad {\mathop {\mathit {Z}}\limits ^{\frown }} = d_{1}^{2}{\mathop {\mathit {Z}}\limits ^{\frown }}_{3} + d_{12}^{2}{\mathop {\mathit {Z}}\limits ^{\frown }}_{4}. \end{aligned}$$

Proof

From (8) we have (22). From Theorem 1, let \(H = [ { I \ I } ]\), \({Q_{i}} = {H^{T}}{\mathop {\mathit {Q}}\limits ^{\frown }} _{i}H\) (\({i = 1,2,3}\)), \({Z_{i}} = {H^{T}}{\mathop {\mathit {Z}}\limits ^{\frown }} _{i}H\) (\(i = 1,2,3,4\)), \({R_{i}} = {H^{T}}{\mathop {\mathit {R}}\limits ^{\frown }}_{i}H\) (\(i = 1,2\)).

Then we have

$$\sum_{i = 1}^{r} {\sum _{j = 1}^{r} {{h_{i}}\bigl(\varepsilon (t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } ( {{\Sigma_{ij}} + \Delta{\Sigma _{ij}}} ) < 0, $$

where

$$\begin{aligned}& {\Sigma_{ij}} = \begin{bmatrix} {{\varphi_{11ij}}} & {{\Omega_{12ij}}} & {{\varphi_{13ij}}} \\ {*} & {{\Omega_{22ij}}} & 0 \\ {*} & {*} & {{\varphi_{33ij}}} \end{bmatrix}, \qquad{\varphi_{11ij}} = \begin{bmatrix} {{\omega_{11}}} & {{\varphi_{12}}} & {{\omega_{13}}} & 0 & {{\omega _{15}}} & {{\omega_{16}}} \\ {*} & {{\varphi_{22}}} & {{\varphi_{23}}} & {{\varphi_{24}}} & {{\omega_{25}}} & {{\omega_{26}}} \\ {*} & * & {{\omega_{33}}} & {{\omega_{34}}} & {{\omega_{35}}} & {{\omega_{36}}} \\ {*} & * & * & {{\omega_{44}}} & {{\omega_{45}}} & {{\omega_{46}}} \\ {*} & * & * & * & {{\omega_{55}}} & {{\omega_{56}}} \\ {*} & * & * & * & * & {{\omega_{66}}} \end{bmatrix}, \\& {\varphi_{13ij}} = \begin{bmatrix} {{\omega_{113}}} & {{\omega_{114}}} & {L_{1i}^{T}} \\ {{\varphi_{213}}} & {{\omega_{214}}} & { - L_{fj}^{T}} \\ 0 & {{\omega_{314}}} & {L_{2i}^{T}} \\ 0 & {C_{fj}^{T}} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},\qquad { \varphi_{33ij}} = \begin{bmatrix} { - {\gamma^{2}}I} & {{\omega_{1{,}314}}} & 0 \\ {*} & { - {{\hat{W}}^{ - 1}}} & 0 \\ {*} & * & { - I} \end{bmatrix}, \\& {\varphi_{12}} = C_{1i}^{T}B_{fj}^{T}{P_{2}} - {E^{T}}{\mathop {\mathit {Z}}\limits ^{\frown }}_{1} E + {\mathop {\mathit {Q}}\limits ^{\frown }}_{1} - {E^{T}} {\mathop {\mathit {Y}}\limits ^{\frown }} E + {\mathop {\mathit {Z}}\limits ^{\frown }} ,\qquad {\varphi_{213}} = P_{2}^{T}{B_{fj}} {D_{i}},\qquad {\varphi_{23}} = P_{2}^{T}{B_{fj}} {C_{2i}}, \\& {\varphi_{22}} = P_{2}^{T}{A_{fj}} + A_{fj}^{T}{P_{2}} - {E^{T}} {\mathop {\mathit {Z}}\limits ^{\frown }}_{1}E + {\mathop {\mathit {Q}}\limits ^{\frown }}_{1} - {E^{T}}{\mathop {\mathit {Y}}\limits ^{\frown }} E + {\mathop {\mathit {Z}}\limits ^{\frown }} , \qquad {\varphi_{24}} = P_{2}^{T}{C_{fj}}, \end{aligned}$$

and other items are defined in Theorem 2.

Based on (2) and from Lemma 4, we obtain that

$$\Delta{\Sigma_{ij}} = {\eta_{1}} {F_{i}} { \eta_{2}} + \eta_{2}^{T}F_{i}^{T} \eta _{1}^{T} \le\varepsilon_{ij}^{ - 1}{ \eta_{1}}\eta_{1}^{T} + {\varepsilon _{ij}}\eta_{2}^{T}{\eta_{2}}, $$

where

$${\eta_{1}} = \begin{bmatrix} {P_{1}^{T}{H_{1i}}} & 0 & 0 \\ {P_{2}^{T}{B_{fj}}{H_{2i}}} & 0 & 0 \\ {{0_{11 \times1}}} & {{0_{11 \times1}}} & {{0_{11 \times1}}} \\ {{H_{1i}} + {B_{fj}}{H_{2i}}} & 0 & 0 \\ 0 & {{H_{3i}}} & {{H_{4i}}} \end{bmatrix},\qquad {\eta_{2}} = \begin{bmatrix} {{E_{1i}}} & 0 & {{E_{2i}}} & {{0_{1 \times9}}} & {{E_{3i}}} & 0 & 0\\ {{E_{4i}}} & 0 & 0 & {{0_{1 \times9}}} & 0 & 0 & 0 \\ 0 & 0 & {{E_{5i}}} & {{0_{1 \times9}}} & 0 & 0 & 0 \end{bmatrix}. $$

Via the Schur complement, we obtain that

$$ \sum_{i = 1}^{r} {\sum _{j = 1}^{r} {{h_{i}}\bigl(\varepsilon (t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} } {\Theta_{ij}} < 0, $$
(24)

where

$${\Theta_{ij}} = \begin{bmatrix} {{\Sigma_{ij}}} & {{\eta_{1}}} & {{\varepsilon_{ij}}\eta_{2}^{T}} \\ {*} & { - {\varepsilon_{ij}}I} & 0 \\ {*} & * & { - {\varepsilon_{ij}}I} \end{bmatrix}. $$
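The Schur complement step that produces the bordered matrix \(\Theta_{ij}\) can be illustrated in the scalar case: \(\bigl[\begin{smallmatrix} a & b \\ b & -c \end{smallmatrix}\bigr] < 0\) with \(c > 0\) holds if and only if the complement \(a + b^2/c < 0\). A numeric sketch with hypothetical entries:

```python
import math

a, b, c = -3.0, 1.0, 2.0            # hypothetical entries, c > 0
schur = a + b * b / c               # Schur complement of -c in [[a, b], [b, -c]]

# Eigenvalues of the symmetric 2x2 block matrix [[a, b], [b, -c]]
tr, det = a - c, a * (-c) - b * b
disc = math.sqrt(tr * tr - 4.0 * det)
lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0

# Negative definiteness of the block matrix <=> negative Schur complement
assert schur < 0 and lam_max < 0 and lam_min < 0
print(schur, lam_max, lam_min)
```

In the proof, the same equivalence converts the nonlinear terms \(\varepsilon_{ij}^{-1}\eta_1\eta_1^T + \varepsilon_{ij}\eta_2^T\eta_2\) into the extra rows and columns of \(\Theta_{ij}\).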

It can be easily shown that inequality (24) is equivalent to the following condition:

$$\sum_{i = 1}^{r} {h_{i}^{2} \bigl(\varepsilon(t)\bigr)} {\Theta_{ii}} + \sum _{i < j}^{r} {{h_{i}}\bigl(\varepsilon(t) \bigr){h_{j}}\bigl(\varepsilon(t)\bigr)} ({\Theta_{ji}} + { \Theta_{ij}}) < 0. $$

Let \(Q = {\hat{W}^{ - 1}}\), \({M_{1i}} = P_{2}^{T}{A_{fi}}\), \({M_{2i}} = P_{2}^{T}{B_{fi}}\), \({M_{3i}} = P_{2}^{T}{C_{fi}}\); this leads to LMIs (23). This completes the proof. □

The parameters of the \({H_{\infty}}\) filter are given by

$${A_{fi}} = { \bigl( {P_{2}^{ - 1}} \bigr)^{T}} {M_{1i}},\qquad{B_{fi}} = { \bigl( {P_{2}^{ - 1}} \bigr)^{T}} {M_{2i}},\qquad {C_{fi}} = { \bigl( {P_{2}^{ - 1}} \bigr)^{T}} {M_{3i}}, $$

and \({L_{fi}}\) is obtained directly from the solution of LMIs (23).
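Numerically, the recovery step \(A_{fi} = (P_2^{-1})^{T}M_{1i}\) is a single inverse-transpose multiplication. The sketch below uses hypothetical values for \(P_2\) and \(M_{1}\) (not LMI solutions from the paper) and verifies the round trip \(P_2^{T}A_{f} = M_{1}\):

```python
# Recover A_f = (P2^{-1})^T M1 and check P2^T A_f = M1 (2x2, pure Python).
# P2 and M1 below are hypothetical numbers chosen only for illustration.

def mm(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):     # transpose
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inv(A):    # 2x2 inverse
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

P2 = [[2.0, 0.0], [1.0, 3.0]]
M1 = [[1.0, 2.0], [0.0, 4.0]]

Af = mm(tr(inv(P2)), M1)       # A_f = (P2^{-1})^T M1
back = mm(tr(P2), Af)          # should reproduce M1
print(back)
```

Nonsingularity of \(P_2\), guaranteed by \(P > 0\), is what makes this recovery well defined.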

4 Numerical examples

Example 1

In order to show the advantage of the method proposed in this paper, we consider the time-delay T-S fuzzy system from Example 2 in [18] with two rules:

IF \({x_{1}}\) is \({W^{i}}\) (\({i = 1,2}\)), THEN

$$\begin{aligned}& E\dot{x} ( t ) = ({A_{i}} + \Delta{A_{i}})x ( t ) + ({A_{di}} + \Delta{A_{di}})x \bigl( {t - d ( t )} \bigr) + ({B_{i}} + \Delta{B_{i}})\omega ( t ), \\& y ( t ) = ({C_{i}} + \Delta{C_{i}})x ( t ) + ({C_{di}} + \Delta{C_{di}})x \bigl( {t - d ( t )} \bigr) + ({D_{i}} + \Delta{D_{i}})\omega ( t ), \\& z ( t ) = ({L_{i}} + \Delta{L_{i}})x ( t ) + ({L_{di}} + \Delta{L_{di}})x \bigl( {t - d ( t )} \bigr), \end{aligned}$$

where

$$\begin{aligned}& {A_{1}} = \begin{bmatrix} 1 & {0.1} \\ { - 0.5} & 1 \end{bmatrix}, \qquad {A_{2}} = \begin{bmatrix} 1 & {0.5} \\ { - 0.1} & 1 \end{bmatrix},\qquad {A_{d1}} = \begin{bmatrix} 0 & {0.2} \\ { - 0.5} & {0.1} \end{bmatrix}, \\& {A_{d2}} = \begin{bmatrix} 0 & {0.3} \\ 0 & {0.6} \end{bmatrix},\qquad {B_{1}} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\qquad {B_{2}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\qquad {C_{1}} = [1 \quad 0 ], \\& {C_{d1}} = [ { - 0.2} \quad {0.3} ],\qquad {C_{2}} = [ { {0.5} \quad{ - 0.6} } ],\qquad {C_{d2}} = [ { { - 0.2} \quad {0.6} } ], \\& {L_{1}} = {D_{1}} = [ { { - 0.8} \quad {0.6} } ],\qquad {L_{2}} = {D_{2}} = [ { { - 0.2} \quad 1 } ], \\& {H_{11}} = \begin{bmatrix} {0.1} \\ { - 0.2} \end{bmatrix}, \qquad{H_{22}} = {H_{42}} = \begin{bmatrix} {0.2} \\ { - 0.1} \end{bmatrix},\qquad {H_{32}} = \begin{bmatrix} { - 0.2} \\ {0.3} \end{bmatrix}, \\& {H_{21}} = {H_{41}} = \begin{bmatrix} { - 0.2} \\ {0.2} \end{bmatrix},\qquad {H_{31}} = {H_{12}} = \begin{bmatrix} { - 0.2} \\ {0.1} \end{bmatrix}, \\& {E_{11}} = {E_{31}} = {E_{12}} = [ { - 0.2} \quad {0.1} ],\qquad {E_{41}} = [ { { - 0.2} \quad {0.2} } ], \\& {E_{51}} = {E_{21}} = [ { {0.2} \quad { - 0.2} } ],\qquad {E_{22}} = [ { { - 0.1} \quad { - 0.2} } ], \\& {E_{32}} = [ { - 0.3} \quad {0.2} ],\qquad {E_{42}} = [ { - 0.2} \quad {0.3} ],\qquad {E_{52}} = [ {0.2} \quad { - 0.1} ]. \end{aligned}$$

First, consider the case where E is nonsingular and let \(E = I\). Using Theorem 2 of this paper, the minimum \(\gamma = 0.0172\), which is less than the minimum level 0.2453 obtained in [18].

Next, consider the case where E is singular; suppose

$$E = \begin{bmatrix} 1 & {0.2} \\ {0.5} & {0.1} \end{bmatrix}, $$

and let \(\mu = 0\), \(d_{1} = 0\), \(d_{2} = 2\), \(\gamma = 0.7\). The filter parameters can be obtained as follows:

$$\begin{aligned}& {A_{f1}} = \begin{bmatrix} { - 0.7432} & {0.3496} \\ { - 1.7348} & { - 3.4052} \end{bmatrix},\qquad {A_{f2}} = \begin{bmatrix} { - 0.6193} & {0.5764} \\ { - 1.7867} & { - 4.6321} \end{bmatrix}, \\& {B_{f1}} = \begin{bmatrix} {0.3099} \\ {0.3427} \end{bmatrix},\qquad {B_{f2}} = \begin{bmatrix} { - 0.0458} \\ {0.6180} \end{bmatrix}, \\& {C_{f1}} = \begin{bmatrix} { - 1.6425} & { - 2.5701} \\ { - 0.0749} & { - 0.0789} \end{bmatrix},\qquad {C_{f2}} = \begin{bmatrix} {4.1954} & {2.4001} \\ { - 0.0110} & { - 0.0798} \end{bmatrix}, \\& {L_{f1}} = [ {2.4751} \quad {4.2527} ],\qquad {L_{f2}} = [ { { - 0.1434} \quad { - 1.0178} } ]. \end{aligned}$$

However, this case cannot be solved by the criterion in [18] because of the emergence of the impulse phenomenon. Based on the discussion above, the results obtained in this paper are more widely applicable and less conservative.

Example 2

Consider the example in [17] with two fuzzy rules and without uncertainties:

IF \({x_{1}}\) is \({W^{i}}\) (\({i = 1,2}\)), THEN

$$\begin{aligned}& \dot{x} ( t ) = {A_{i}}x ( t ) + {A_{di}}x \bigl( {t - d ( t )} \bigr) + {B_{i}}\omega ( t ), \\& y ( t ) = {C_{i}}x ( t ) + {C_{di}}x \bigl( {t - d ( t )} \bigr) + {D_{i}}\omega ( t ), \\& z ( t ) = {E_{i}}x ( t ) + {E_{di}}x \bigl( {t - d ( t )} \bigr), \end{aligned}$$

where

$$\begin{aligned}& {A_{1}} = \begin{bmatrix} { - 2.1} & {0.1} \\ 1 & { - 2} \end{bmatrix},\qquad {A_{2}} = \begin{bmatrix} { - 1.9} & 0 \\ { - 0.2} & { - 1.1} \end{bmatrix},\qquad {A_{d1}} = \begin{bmatrix} { - 1.1} & {0.1} \\ { - 0.8} & { - 0.9} \end{bmatrix}, \\& {A_{d2}} = \begin{bmatrix} { - 0.9} & 0 \\ { - 1.1} & { - 1.2} \end{bmatrix},\qquad {B_{1}} = \begin{bmatrix} 1 \\ { - 0.2} \end{bmatrix},\qquad {B_{2}} = \begin{bmatrix} {0.3} \\ {0.1} \end{bmatrix}, \qquad{C_{1}} = [ { 1 \quad 0 } ], \\& {C_{2}} = [ {0.5} \quad { - 0.6} ],\qquad {C_{d1}} = [ { - 0.8} \quad {0.6} ],{\qquad C_{d2}} = [ { - 0.2} \quad 1 ], \\& {D_{1}} = 0.3,\qquad {D_{2}} = - 0.6,\qquad {E_{1}} = [ 1 \quad { - 0.5} ],\qquad {E_{2}} = [ { - 0.2} \quad {0.3} ], \\& {E_{d1}} = [ { {0.1} \quad 0 } ],\qquad {E_{d2}} = [ { 0 \quad {0.2} } ]. \end{aligned}$$

Let the fuzzy weighting functions be \({h_{1}} ( {\theta ( t )} ) = {\sin^{2}} ( t )\) and \({h_{2}} ( {\theta ( t )} ) = {\cos^{2}} ( t )\). With \(\sigma=1\) in [19], Table 1 lists the minimum γ for different values of \({d_{2}}\) with \({d_{1}} = 0\), \(\mu = 0.2\). From the table, it is clear that the results in this paper are much less conservative than the ones obtained in [17] and [19].

Table 1 The minimum γ with different \(\pmb{{d_{2}}}\) (Example 2 )

Example 3

Consider T-S fuzzy system (1) without uncertainties and with two fuzzy rules (\(r = 2\)) as in [20]; the parameters are as follows:

$$\begin{aligned}& {A_{11}} = \begin{bmatrix} { - 2.1} & {0.1} \\ 1 & { - 2} \end{bmatrix},\qquad {A_{21}} = \begin{bmatrix} { - 1.1} & {0.1} \\ { - 0.8} & { - 0.9} \end{bmatrix},\qquad {A_{12}} = \begin{bmatrix} { - 1.9} & 0 \\ { - 0.2} & { - 1.1} \end{bmatrix}, \\& {A_{22}} = \begin{bmatrix} { - 0.9} & 0 \\ { - 1.1} & { - 1.2} \end{bmatrix},\qquad {B_{1}} = \begin{bmatrix} 1 \\ { - 0.2} \end{bmatrix},\qquad {B_{2}} = \begin{bmatrix} {0.3} \\ {0.1} \end{bmatrix}, \qquad {C_{11}} = [ 1 \quad 0 ], \\& {C_{21}} = [ { - 0.8} \quad {0.6} ],\qquad {C_{12}} = [ { {0.5} \quad { - 0.6} } ],\qquad {C_{22}} = [ { { - 0.2} \quad 1 } ], \\& {D_{1}} = 0.3,\qquad {D_{2}} = - 0.6,\qquad {L_{11}} = [ { 1 \quad { - 0.5} } ],\qquad {L_{12}} = [ { { - 0.2} \quad {0.3} } ], \\& {L_{21}} = [ {0.1} \quad 0 ],\qquad {L_{22}} = [ 0 \quad {0.2} ]. \end{aligned}$$

Set \({d_{1}} = 0\), \({d_{2}} = 0.8\), \(\mu = 0.2\). Table 2 lists the comparison results of the minimum γ with [19–21] (\(\delta= 20\)).

Table 2 Comparison of minimum γ (Example 3 )

From the comparison, it is clear that the minimum γ obtained in this paper is smaller. It should be mentioned that the value of γ depends on the parameter δ in [19–21]; however, δ is not easy to determine. The method in this paper avoids this drawback.

To sum up, the method in this paper is less conservative and easy to compute.

Example 4

Consider the tunnel diode circuit shown in Figure 1, whose fuzzy model was established in [22]. Here \({x_{1}} ( t ) = {v_{c}} ( t )\) and \({x_{2}} ( t ) = {i_{L}} ( t )\); \(\omega ( t )\) is the disturbance noise input, \(y ( t )\) is the measurement output, and \(z ( t )\) is the controlled output. Following the discussion in [22], the nonlinear network can be approximated by the following two fuzzy rules.

Figure 1

Tunnel diode circuit (Example 4 ).

Plant rule i (\(i = 1,2\)): IF \({x_{1}} ( t )\) is \({M_{i}} ( {{x_{1}} ( t )} )\), THEN

$$\begin{aligned}& \dot{x} ( t ) = {A_{1i}}x ( t ) + {B_{i}}\omega ( t ), \\& y ( t ) = {C_{1i}}x ( t ) + {D_{i}}\omega ( t ), \\& z ( t ) = {L_{i}}x ( t ), \end{aligned}$$

where

$$\begin{aligned}& {A_{11}} = \begin{bmatrix} { - 0.1} & {50} \\ { - 1} & { - 10} \end{bmatrix},\qquad {A_{12}} = \begin{bmatrix} { - 4.6} & {50} \\ { - 1} & { - 10} \end{bmatrix},\qquad {B_{1}} = {B_{2}} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \\& {C_{11}} = {C_{12}} = [ { 1 \quad 0 } ],\qquad {D_{1}} = {D_{2}} = 1,\qquad {L_{1}} = {L_{2}} = [ 1 \quad 0 ]. \end{aligned}$$

The fuzzy membership functions are assumed as follows:

$$\begin{aligned}& {h_{1}} ( t ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{{{x_{1}} ( t ) + 3}}{3}, &{-} 3 \le{x_{1}} ( t ) \le0, \\ 0, & {x_{1}} ( t ) < - 3, \\ \frac{{3 - {x_{1}} ( t )}}{3},& 0 \le{x_{1}} ( t ) \le 3, \\ 0, & {x_{1}} ( t ) > 3, \end{array}\displaystyle \displaystyle \right . \\& {h_{2}} ( t ) = 1 - {h_{1}} ( t ). \end{aligned}$$
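The membership functions above are straightforward to code; the following sketch (written in terms of \(x_1\), with the piecewise definition given here) checks the partition-of-unity property \(h_1 + h_2 = 1\):

```python
def h1(x1):
    # Triangular membership centered at x1 = 0 with support [-3, 3]
    if -3.0 <= x1 <= 0.0:
        return (x1 + 3.0) / 3.0
    if 0.0 < x1 <= 3.0:
        return (3.0 - x1) / 3.0
    return 0.0

def h2(x1):
    return 1.0 - h1(x1)

# Grid check: memberships lie in [0, 1] and always sum to one
for k in range(-50, 51):
    x = k / 10.0
    assert 0.0 <= h1(x) <= 1.0 and abs(h1(x) + h2(x) - 1.0) < 1e-12
print(h1(-3.0), h1(0.0), h1(3.0))
```

These weights blend the two local linear models into the nonlinear circuit dynamics.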

Given the \({H_{\infty}}\) performance level \(\gamma = 1\), the following filter parameter matrices can be obtained:

$$\begin{aligned}& {A_{f1}} = \begin{bmatrix} - 0.2587 & -0.0116 \\ -0.4567 & -0.5528 \end{bmatrix},\qquad {B_{f1}} = \begin{bmatrix} { - 0.0102} \\ { - 0.0008} \end{bmatrix},\qquad {L_{f1}} = [ { {6.1137} \quad {0.8524} } ], \\& {A_{f2}} = \begin{bmatrix} - 0.2830 & - 0.2548 \\ - 0.0019 & -0.2479 \end{bmatrix},\qquad {B_{f2}} = \begin{bmatrix} {0.0045} \\ {0.0007} \end{bmatrix},\qquad {L_{f2}} = [ { {1.4112} \quad {0.1452} } ]. \end{aligned}$$

We assume the disturbance \(\omega ( t ) = 0.5\mathrm{e}^{-2.3t}\); using this filter, Figure 2 plots the response of the filter error \(e ( t )\). It is clear that the proposed method can effectively deal with the tunnel diode circuit problem.

Figure 2

Error response of \(\pmb{e ( t )}\) (Example 4 ).

5 Conclusion

In this paper, based on the T-S fuzzy model, we have studied the delay-dependent exponential \({H_{\infty}}\) filtering problem for a class of nonlinear singular systems with interval time-varying delay. In order to obtain a less conservative result, a new filter and a new LKF have been constructed. The new \({H_{\infty}}\) filter guarantees that the error system is regular, impulse-free, and exponentially stable and satisfies a prescribed \({H_{\infty}}\) performance. Four numerical examples have demonstrated the effectiveness and superiority of the proposed approach.