1 Introduction

Throughout, \(R_{+}\) denotes the set of positive real numbers. \(R^{n}\) is the n-dimensional real Euclidean space with the norm \(\Vert \cdot \Vert \). \(R^{m \times n}\) refers to the set of all \(m \times n\) real matrices. \(\lambda _{M} ( A ) \), \(\lambda _{m} ( A ) \), \(A^{T} \), and \(A^{-1}\) denote the maximum eigenvalue, the minimum eigenvalue, the transpose, and the inverse of a matrix A, respectively. I represents the identity matrix of appropriate dimension. \(A > 0\) means that the matrix A is positive definite. Define \(f ( {x ( {b^{-} } )} ) = \lim_{t \to b^{-} } f ( {x ( t )} )\).

Over the past two decades, nonlinear systems have received considerable attention because many systems arising in practical applications, for instance, in robotics, information science, artificial intelligence, and automatic control, can be modeled as nonlinear systems [8, 14, 15, 17, 24]. Impulsive effects may cause oscillation or even destroy the stability of a system. Therefore, it is significant to discuss the stability of nonlinear systems with impulsive effects [9, 10, 19, 21, 22]. In recent years, many sufficient criteria for the asymptotic stability of impulsively controlled nonlinear systems have been published under various conditions [1, 16]. In the design of nonlinear impulsive control systems, we consider not only asymptotic stability but also other aspects. In particular, it is often desirable that nonlinear impulsive control systems converge fast enough to achieve a fast response. Exponential stability guarantees such a fast convergence rate to the equilibrium point [7, 11, 13].

Many works simply assume that impulses occur at fixed time points [12, 18, 20]. However, in many practical applications, impulses occur stochastically. Therefore, it is necessary to study a more practical impulsive scheme that covers this case. In what follows, we discuss the following nonlinear impulsive control system with impulse time window, disturbance input, and bounded gain error:

$$ \textstyle\begin{cases} \dot{x}(t) = Ax ( t ) + Bw ( t ) +Cu ( t )+ f ( {x ( t )} ), &kT\leq t< kT+\tau_{k},\\ x(t)=x ( {t^{-} } ) + Qx ( {t^{-} } ) + \phi ( {x ( {t^{-} } )} ), &t=kT+\tau_{k},\\ \dot{x}(t) = Ax ( t ) + Bw ( t ) +Cu ( t )+ f ( {x ( t )} ), &kT+\tau_{k}< t< (k+1)T , \end{cases} $$
(1.1)

where \(x ( t )\in R^{n}\) is the state variable, \(w ( t ) \in R^{r} \) denotes the disturbance input, \(u ( t ) \in R^{p} \) is the control input, \(\phi ( {x ( {t} )} )\) is the gain error, \(f: R^{n}\rightarrow R^{n}\) and \(\phi : R^{n}\rightarrow R^{n}\) are continuous nonlinear functions satisfying \(f(0)=0\) and \(\phi(0)=0\), respectively, \(T>0\) represents the control period, and \(\tau_{k} \in ( {0, T} ) \) is unknown, so that the impulse instant \(kT+\tau_{k}\) may lie anywhere inside the kth period. \(A\in R^{n \times n}\), \(B \in R^{n \times r}\), \(C \in R^{n \times p}\), and \(Q\in R^{n \times n} \) are constant matrices. In general, let

$$\begin{aligned} &\bigl\Vert {f \bigl( {x ( t )} \bigr)} \bigr\Vert \le l \bigl\Vert {x ( t )} \bigr\Vert ,\\ & \bigl\Vert {w ( t )} \bigr\Vert \le l_{1} \bigl\Vert {x ( t )} \bigr\Vert ,\\ & \bigl\Vert {\phi \bigl( {x ( t )} \bigr)} \bigr\Vert \le l_{2} \bigl\Vert {x ( t )} \bigr\Vert , \end{aligned}$$

where \(l, l_{1}\), and \(l_{2}\) are nonnegative constants. In system (1.1), the impulse occurs at a random instant within an impulse time window, which is more general than impulses occurring at fixed time points. For more information on impulse time windows, the reader is referred to [3–5, 23].
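
To make the impulse time window concrete, the following sketch simulates one trajectory of (1.1) with \(u ( t )=0\), drawing the impulse instant \(kT+\tau_{k}\) uniformly inside each period. The system matrices, the nonlinearities, and the step size below are illustrative choices of ours, not data from this paper.

```python
import numpy as np

# Illustrative data (not from this paper): a 2-D system with a scalar disturbance channel.
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.1], [0.2]])                      # n x r with r = 1
Q = -0.6 * np.eye(2)
f   = lambda x: 0.1 * np.sin(x)                   # ||f(x)||   <= l  ||x||, l  = 0.1
w   = lambda t, x: np.array([0.05 * np.sin(20 * np.pi * t) * np.linalg.norm(x)])
phi = lambda x: 0.05 * np.sin(x)                  # ||phi(x)|| <= l2 ||x||, l2 = 0.05

T, dt, periods = 1.0, 1e-3, 10
rng = np.random.default_rng(0)
x = np.array([1.0, -0.5])

for k in range(periods):
    tau_k = rng.uniform(0.0, T)                   # random impulse instant in the k-th window
    impulsed = False
    for i in range(int(T / dt)):
        t = k * T + i * dt
        if not impulsed and i * dt >= tau_k:
            x = x + Q @ x + phi(x)                # x(t) = x(t^-) + Q x(t^-) + phi(x(t^-))
            impulsed = True
        x = x + dt * (A @ x + B @ w(t, x) + f(x))  # forward-Euler step of the flow, u(t) = 0
    print(f"period {k}: ||x|| = {np.linalg.norm(x):.4f}")
```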

In order to obtain exponential stability, a linear feedback controller \(u ( t ) = Gx ( t )\) is considered, where \(G\in R^{p \times n}\) is a constant matrix. System (1.1) is then rewritten as follows:

$$ \textstyle\begin{cases} \dot{x}(t) = (A+CG)x ( t ) + Bw ( t ) + f ( {x ( t )} ), &kT\leq t< kT+\tau_{k},\\ x(t)=x ( {t^{-} } ) + Qx ( {t^{-} } ) + \phi ( {x ( {t^{-} } )} ), &t=kT+\tau_{k},\\ \dot{x}(t) =(A+CG)x ( t )+ Bw ( t ) + f ( {x ( t )} ), &kT+\tau_{k}< t< (k+1)T . \end{cases} $$
(1.2)

The main purpose of this paper is to investigate the exponential stability of system (1.1). By employing the obtained result, a linear feedback gain matrix G is constructed such that system (1.2) is exponentially stable. A numerical example is given to demonstrate the effectiveness of the theoretical results.

2 Main results

We need the following definitions and lemmas, which play a major role in the proofs of the theorems.

Definition 2.1

([11])

The function \(V:[t_{0} - \alpha ,\infty ) \times R^{n} \to R_{+} \) belongs to class \(v_{0}\) if

(1) V is continuous on each of the sets \([\tau _{k - 1} ,\tau _{k} ) \times R^{n} \) and \(\lim_{(t,y) \to (\tau _{k}^{-} ,x)} V(t,y) = V(\tau _{k}^{-} ,x)\) exists;

(2) \(V(t,x)\) is locally Lipschitzian in \(x \in R^{n} \) and \(V(t,0) \equiv 0\).

Definition 2.2

([11])

For \(V \in v_{0} \), the upper right Dini derivative of V is defined as

$$D^{+} V\bigl(t,x(t)\bigr) = \lim_{h \to 0^{+} } \sup { {1 \over h}}\bigl[V\bigl(t + h,x(t) + hf\bigl(t,x(t)\bigr)\bigr) - V \bigl(t,x(t)\bigr)\bigr]. $$

Lemma 2.1

([6])

Let \(x,y \in R^{n}\) and \(\eta >0 \). Then

$$2x^{T} y \le \eta x^{T} x+\eta ^{ - 1}y^{T} y. $$
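
Lemma 2.1 is the standard Young-type inequality; it follows from expanding a single square:

$$0 \le \bigl( {\sqrt {\eta }\, x - \eta ^{ - 1/2} y} \bigr)^{T} \bigl( {\sqrt {\eta }\, x - \eta ^{ - 1/2} y} \bigr) = \eta x^{T} x - 2x^{T} y + \eta ^{ - 1} y^{T} y. $$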

Lemma 2.2

([2])

The following linear matrix inequality (LMI)

$$ \begin{bmatrix} Q & S \\ {S^{T} } & G \end{bmatrix} < 0, $$

where \(Q^{T}=Q\), \(G^{T}=G\), is equivalent to

$$G < 0,\quad Q - SG^{ - 1} S^{T} < 0. $$
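
As a quick numerical sanity check (our own illustration, not part of the cited result), the equivalence in Lemma 2.2 can be observed on a randomly generated negative definite block matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((2 * n, 2 * n))
M = -(M @ M.T) - 0.1 * np.eye(2 * n)           # symmetric negative definite [[Q, S], [S^T, G]]
Q, S, G = M[:n, :n], M[:n, n:], M[n:, n:]

lmi_negative   = np.all(np.linalg.eigvalsh(M) < 0)
schur_negative = (np.all(np.linalg.eigvalsh(G) < 0) and
                  np.all(np.linalg.eigvalsh(Q - S @ np.linalg.inv(G) @ S.T) < 0))
print(lmi_negative, schur_negative)             # both True: the two conditions agree
```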

Lemma 2.3

([6])

Let \(x \in R^{n}\) and let \(A \in R^{n \times n} \) be a symmetric matrix. Then

$$\lambda _{m} ( A )x^{T} x \le x^{T} Ax \le \lambda _{M} ( A )x^{T} x. $$

Theorem 2.1

Let the assumptions on \(w ( t )\), \(f ( {x ( t )} )\), and \(\phi ( {x ( t )} ) \) be satisfied and let \(u ( t )=0 \). If there exist positive numbers \(\varepsilon, \eta\) and a matrix \(0 < P \in R^{n \times n}\) satisfying the following conditions:

$$\begin{aligned} & ( 1 )\quad \begin{bmatrix} {A^{T} P + PA + ( {l^{2}+ \eta l_{1}^{2} } )I} & P \\ P & { ( { - I - \eta ^{ - 1} BB^{T} } )^{ - 1} } \end{bmatrix} < 0, \\ & ( 2 )\quad \ln \gamma + T ( {h + \varepsilon } )\le 0, \end{aligned}$$

where \(\beta = \lambda _{M} ( {P^{ -1} ( {I + Q} )^{T} P ( {I + Q} ) } )\), \(\beta _{1} = \lambda _{M} ( P )\), \(\beta _{2} = \lambda _{m} ( P )\), \(h = \lambda _{M} ( P^{ - 1} ( PA + A^{T} P + \eta ^{ - 1} PBB^{T} P + P^{2} + ( l^{2}+ \eta l_{1}^{2} )I ) )\), and \(\gamma = ( {\sqrt {\beta } +\sqrt { \frac{{\beta _{1} }}{{\beta _{2} }}}}l_{2} )^{2} \). Then system (1.1) is exponentially stable at the origin.

Proof

Define

$$V \bigl( {x ( t )} \bigr) = x^{T} ( t )Px ( t ). $$

For \(t \in [ {kT,kT + \tau _{k} } )\), we have

$$\begin{aligned} D^{+} \bigl( {V \bigl( {x ( t )} \bigr)} \bigr) =& 2x^{T} ( t )P \bigl( {Ax ( t ) + Bw ( t ) + f \bigl( {x ( t )} \bigr)} \bigr) \\ =& x^{T} ( t ) \bigl(PA+A^{T}P\bigr)x ( t ) + 2x^{T} ( t )P \bigl( {Bw ( t ) + f \bigl( {x ( t )} \bigr)} \bigr) . \end{aligned}$$
(2.1)

By Lemma 2.1, it is clear that

$$ 2x^{T} ( t )PBw ( t ) \le \eta ^{ - 1} x^{T} ( t )PBB^{T} Px ( t ) + \eta w^{T} ( t )w ( t ) $$
(2.2)

and

$$ 2x^{T} ( t )Pf \bigl( {x ( t )} \bigr) \le x^{T} ( t )P^{2} x ( t ) + f^{T} \bigl( {x ( t )} \bigr)f \bigl( {x ( t )} \bigr). $$
(2.3)

Using the assumptions on \(f ( {x ( t )} )\) and \(w ( t )\), substituting (2.2) and (2.3) into (2.1) yields

$$\begin{aligned} D^{+} \bigl( {V \bigl( {x ( t )} \bigr)} \bigr) \le& x^{T} ( t ) \bigl(PA+A^{T}P\bigr)x ( t ) + \eta ^{ - 1} x^{T} ( t )PBB^{T} Px ( t ) \\ &{}+ \eta w^{T} ( t )w ( t ) + x^{T} ( t )P^{2} x ( t ) + f^{T} \bigl( {x ( t )} \bigr)f \bigl( {x ( t )} \bigr) \\ \le & x^{T} ( t ) \bigl( {PA + A^{T} P + \eta ^{ - 1} PBB^{T} P + P^{2} + \bigl( {l^{2} + \eta l_{1}^{2} } \bigr)I} \bigr)x ( t ). \end{aligned}$$
(2.4)

By Lemma 2.2, condition (1) and inequality (2.4), we have

$$D^{+} \bigl( {V \bigl( {x ( t )} \bigr)} \bigr) \le hV \bigl( {x ( t )} \bigr), $$

which yields that

$$ V \bigl( {x ( t )} \bigr) \le V \bigl( {x ( {kT} )} \bigr)e^{h ( {t - kT} )}. $$
(2.5)

In the same way, for \(t \in ( {kT + \tau _{k} , ( {k + 1} )T} ) \), we also have

$$D^{+} \bigl( {V \bigl( {x ( t )} \bigr)} \bigr) \le hV \bigl( {x ( t )} \bigr), $$

which leads to

$$ V \bigl( {x ( t )} \bigr) \le V \bigl( {x ( {kT + \tau _{k} } )} \bigr)e^{h ( {t - kT - \tau _{k} } )}. $$
(2.6)

For \(t = kT + \tau _{k} \), we obtain

$$\begin{aligned} V \bigl( {x ( t )} \bigr) =& \bigl( { ( {I + Q} )x \bigl( {t^{-} } \bigr) + \phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)} \bigr)^{T} P \bigl( { ( {I + Q} )x \bigl( {t^{-} } \bigr) + \phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)} \bigr) \\ =& x^{T} \bigl( {t^{-} } \bigr) ( {I + Q} )^{T}P ( {I + Q} )x \bigl( {t^{-} } \bigr)+ \phi ^{T} \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)P\phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr) \\ &{}+ 2x^{T} \bigl( {t^{-} } \bigr) ( {I + Q} )^{T} P\phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr) \\ \le& x^{T} \bigl( {t^{-} } \bigr) ( {I + Q} )^{T} P ( {I + Q} )x \bigl( {t^{-} } \bigr) + \phi ^{T} \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)P\phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr) \\ &{}+ 2\sqrt {x^{T} \bigl( {t^{-} } \bigr) ( {I + Q} )^{T} P ( {I + Q} )x \bigl( {t^{-} } \bigr)\phi ^{T} \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)P\phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)} \\ =& \bigl( {\sqrt {x^{T} \bigl( {t^{-} } \bigr) ( {I + Q} )^{T} P ( {I + Q} )x \bigl( {t^{-} } \bigr)} +\sqrt { \phi ^{T} \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)P\phi \bigl( {x \bigl( {t^{-} } \bigr)} \bigr)}} \bigr)^{2} \\ \le& \biggl( {\sqrt {\beta } +\sqrt { \frac{{\beta _{1} }}{{\beta _{2} }}}}l_{2} \biggr)^{2} V \bigl( {x \bigl( {t^{-} } \bigr)} \bigr) \\ =& \gamma V \bigl( {x \bigl( {t^{-} } \bigr)} \bigr). \end{aligned}$$
(2.7)

Combining (2.6) and (2.7) leads to

$$ V \bigl( {x ( t )} \bigr) \le \gamma V \bigl( {x \bigl( { ( {kT + \tau _{k} } )^{-} } \bigr)} \bigr)e^{h ( {t - kT - \tau _{k} } )}, $$
(2.8)

where \(t \in [ {kT + \tau _{k} , ( {k + 1} )T} ) \).

When \(k=0\) and \(t \in [ {0 ,\tau _{0} } )\), from (2.5) we obtain

$$V \bigl( {x ( t )} \bigr) \le V \bigl( {x ( 0 )} \bigr)e^{ht} . $$

Thus

$$ V \bigl( {x \bigl( {\tau _{0}^{-} } \bigr)} \bigr) \le V \bigl( {x ( 0 )} \bigr)e^{h\tau _{0} } . $$
(2.9)

For \(t \in [ {\tau _{0} , T} )\), from (2.8) and (2.9) we have

$$\begin{aligned} V \bigl( {x ( t )} \bigr) \le&\gamma V \bigl( {x \bigl( {\tau _{0}^{-} } \bigr)} \bigr)e^{h ( {t - \tau _{0} } )} \le \gamma V \bigl( {x ( 0 )} \bigr)e^{ht}. \end{aligned}$$
(2.10)

When \(k=1\) and \(t \in [ {T , T+\tau _{1} } )\), from (2.5) and (2.10) we have

$$\begin{aligned} V \bigl( {x ( t )} \bigr) \le& V \bigl( {x ( T )} \bigr)e^{h ( {t - T} )} \\ \le& \gamma V \bigl( {x \bigl( {\tau _{0}^{-} } \bigr)} \bigr)e^{h ( {T - \tau _{0} } )} e^{h ( {t - T} )} \\ =& \gamma V \bigl( {x \bigl( {\tau _{0}^{-} } \bigr)} \bigr)e^{h ( {t - \tau _{0} } )} \\ \le& \gamma V \bigl( {x ( 0 )} \bigr)e^{ht}. \end{aligned}$$
(2.11)

For \(t \in [ {T+\tau _{1},2T } )\), from (2.8) and (2.11) we get

$$\begin{aligned} V \bigl( {x ( t )} \bigr) \le& \gamma V \bigl( {x \bigl( { ( {T + \tau _{1} } )^{-} } \bigr)} \bigr)e^{h ( {t - T - \tau _{1} } )} \\ \le& \gamma ^{2} V \bigl({x \bigl( {\tau _{0}^{-} } \bigr)} \bigr)e^{h ( {T + \tau _{1} - \tau _{0} } )} e^{h ( {t - T - \tau _{1} } )} \\ \le& \gamma^{2} V \bigl( {x ( 0 )} \bigr)e^{ht}. \end{aligned}$$
(2.12)

When \(k=2\) and \(t \in [ {2T , 2T+\tau _{2} } )\), from (2.5) and (2.12) we get

$$\begin{aligned} V \bigl( {x ( t )} \bigr) \le& V \bigl( {x ( 2T )} \bigr)e^{h ( {t - 2T} )} \\ \le& \gamma ^{2} V \bigl( {x \bigl( {\tau _{0}^{-} } \bigr)} \bigr)e^{h ( {2T - \tau _{0} } )} e^{h ( {t - 2T} )} \\ \le& \gamma^{2} V \bigl( {x ( 0 )} \bigr)e^{ht}. \end{aligned}$$
(2.13)

For \(t \in [\tau _{0} ,T+\tau _{1} )\), from (2.10) and (2.11) we get

$$V \bigl( {x ( t )} \bigr) \le \gamma V \bigl( {x ( 0 )} \bigr)e^{ht}. $$

For \(t \in [T+\tau _{1} ,2T+\tau _{2} )\), from (2.12) and (2.13) we get

$$V \bigl( {x ( t )} \bigr) \le \gamma ^{2} V \bigl( {x ( 0 )} \bigr)e^{ht} . $$

By induction, for \(t \in [kT+\tau _{k} ,(k+1)T+\tau _{k + 1} )\), we get

$$V \bigl( {x ( t )} \bigr) \le \gamma ^{k + 1} V \bigl( {x ( 0 )} \bigr)e^{ht} . $$

Let \(kT + \tau _{k} = \tau '_{k} \). Since \(\ln \gamma +T ( {h + \varepsilon } ) \le 0\), we get

$$\begin{aligned} V \bigl( {x ( t )} \bigr) \le& \gamma ^{k + 1} V \bigl( {x ( 0 )} \bigr)e^{ht} \\ =& \gamma ^{k + 1} V \bigl( {x ( 0 )} \bigr)e^{ ( {h + \varepsilon } )t} e^{ - \varepsilon t} \\ \le& \gamma ^{k + 1} V \bigl( {x ( 0 )} \bigr)e^{ ( {h + \varepsilon } ){\tau '_{k} } } e^{ - \varepsilon t} \\ \le& \gamma ^{k + 1} V \bigl( {x ( 0 )} \bigr)e^{ ( {h + \varepsilon } )kT } e^{ - \varepsilon t} \\ =& \gamma V \bigl( {x ( 0 )} \bigr)e^{k ( {\ln \gamma + ( {h + \varepsilon } )T} )} e^{ - \varepsilon t} \\ \le& \gamma V \bigl( {x ( 0 )} \bigr)e^{ - \varepsilon t} . \end{aligned}$$
(2.14)

By Lemma 2.3 and (2.14), we obtain

$$\lambda _{m} ( {P } ) \bigl\Vert {x \bigl( {t,\tau _{0} ,x(0) } \bigr)} \bigr\Vert ^{2} \le V \bigl( {x ( t )} \bigr) \le \gamma V \bigl( {x ( 0 )} \bigr)e^{ - \varepsilon t} \le \gamma \lambda _{M} (P ) \bigl\Vert {x(0)} \bigr\Vert ^{2} e^{ - \varepsilon t}. $$

That is,

$$\bigl\Vert {x \bigl( {t,\tau _{0} ,x(0) } \bigr)} \bigr\Vert \le \sqrt {\frac{{\gamma \lambda _{M} ( P )}}{{\lambda _{m} ( P )}}} \bigl\Vert {x(0) } \bigr\Vert e^{\frac{- \varepsilon t}{2}}. $$

This completes the proof. □
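
Both conditions of Theorem 2.1 are easy to check numerically for a candidate \(P > 0\) and given scalars. The helper below is a sketch of ours (its name and structure are not from the paper); it evaluates the LMI of condition (1) and the scalar inequality of condition (2), with \(\gamma\) and h as defined in the theorem.

```python
import numpy as np

def check_theorem_2_1(A, B, Q, P, l, l1, l2, eta, T, eps):
    """Return (condition 1 holds, condition 2 holds, gamma, h) for Theorem 2.1."""
    n = A.shape[0]
    I = np.eye(n)
    # Condition (1): the block LMI must be negative definite.
    M11 = A.T @ P + P @ A + (l**2 + eta * l1**2) * I
    M22 = np.linalg.inv(-I - (1.0 / eta) * B @ B.T)
    lmi = np.block([[M11, P], [P, M22]])
    cond1 = np.all(np.linalg.eigvalsh((lmi + lmi.T) / 2) < 0)

    # Condition (2): ln(gamma) + T (h + eps) <= 0.
    beta = np.max(np.linalg.eigvals(
        np.linalg.inv(P) @ (I + Q).T @ P @ (I + Q))).real
    b1, b2 = np.max(np.linalg.eigvalsh(P)), np.min(np.linalg.eigvalsh(P))
    gamma = (np.sqrt(beta) + np.sqrt(b1 / b2) * l2) ** 2
    h = np.max(np.linalg.eigvals(np.linalg.inv(P) @ (
        P @ A + A.T @ P + (1.0 / eta) * P @ B @ B.T @ P
        + P @ P + (l**2 + eta * l1**2) * I))).real
    cond2 = np.log(gamma) + T * (h + eps) <= 0
    return cond1, cond2, gamma, h
```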

Theorem 2.2

Let the assumptions on \(w ( t )\), \(f ( {x ( t )} )\), and \(\phi ( {x ( t )} ) \) be satisfied. If there exist positive numbers \(\varepsilon, \eta\) and matrices \(W \in R^{p \times n}\) and \(0 < H \in R^{n \times n}\) satisfying the following conditions:

$$\begin{aligned} & ( 1 )\quad \begin{bmatrix} {I + ( {AH + CW} )^{T} + ( {AH + CW} ) + \eta ^{ - 1} BB^{T} } & {\sqrt {l^{2} +\eta l_{1}^{2} } H} \\ {\sqrt {l^{2} +\eta l_{1}^{2} } H} & { - I} \end{bmatrix} < 0, \\ & ( 2 )\quad {\sqrt {\beta } +\sqrt { \frac{{\beta _{1} }}{{\beta _{2} }}}}l_{2} < 1, \end{aligned}$$

where \(\beta = \lambda _{M} ( {H ( {I + Q} )^{T} H^{ -1} ( {I + Q} ) } )\), \(\beta _{1} = \lambda _{M} ( H^{ -1} )\), \(\beta _{2} = \lambda _{m} ( H^{ -1} )\), \(h = \lambda _{M} ( H ( H^{ - 1}(A+CG) + (A+CG)^{T} H^{ - 1} + \eta ^{ - 1} H^{ - 1}BB^{T} H^{ - 1} + ( {H^{ - 1} } )^{2} + ( {l^{2} + \eta l_{1}^{2} } )I ) )\), and \(\gamma = ( {\sqrt {\beta } +\sqrt { \frac{{\beta _{1} }}{{\beta _{2} }}}}l_{2} )^{2} \). Then system (1.2) is exponentially stable at the origin under the linear feedback controller

$$u ( t ) = Gx ( t ),\qquad G = WH^{ - 1} . $$

Proof

By Lemma 2.2, condition (1) of Theorem 2.2 is equivalent to

$$ I + ( {AH + CW} )^{T} + ( {AH + CW} ) + \eta ^{ - 1} BB^{T} + \bigl( {l^{2} +\eta l_{1}^{2} } \bigr)H^{2} < 0. $$
(2.15)

Let

$$P = H^{ - 1} ,\qquad G = WH^{ - 1}. $$

Pre- and post-multiplying both sides of (2.15) by P, we have

$$P^{2} + P ( {AH + CW} )^{T} P + P ( {AH + CW} )P + \eta ^{ - 1} PBB^{T} P + \bigl( {l^{2}+ \eta l_{1}^{2} } \bigr)I < 0. $$

That is,

$$ P^{2} + ( {A + CG} )^{T} P + P ( {A + CG} ) + \eta ^{ - 1} PBB^{T} P + \bigl( {l^{2} +\eta l_{1}^{2} } \bigr)I < 0. $$
(2.16)

By Lemma 2.2 and (2.16), we have

$$ \begin{bmatrix} { ( {A + CG} )^{T} P + P ( {A + CG} ) + ( {l^{2}+ \eta l_{1}^{2} } )I} & P \\ P & { ( { - I - \eta ^{ - 1} BB^{T} } )^{ - 1} } \end{bmatrix} < 0 . $$

Thus, condition (1) of Theorem 2.1 holds with A replaced by \(A + CG\). Since

$${\sqrt {\beta } +\sqrt { \frac{{\beta _{1} }}{{\beta _{2} }}}}l_{2} < 1, $$

we have \(\gamma < 1\), that is, \(\ln \gamma < 0\). Moreover, (2.16) implies \(h < 0\), so a sufficiently small \(\varepsilon > 0\) can be chosen such that

$$\ln \gamma + T ( {h + \varepsilon } )\le 0, $$

namely, condition (2) of Theorem 2.1 is satisfied, too. Then system (1.2) is exponentially stable at the origin.

This completes the proof. □
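
In practice, matrices H and W satisfying condition (1) of Theorem 2.2 can be sought by posing it as a semidefinite feasibility problem. The following sketch is one possible formulation using the cvxpy package with the SCS solver; the function name, the feasibility margin, and the use of cvxpy are our own assumptions, not part of the paper.

```python
import cvxpy as cp
import numpy as np

def design_gain(A, B, C, l, l1, eta, margin=1e-3):
    """Search for H > 0 and W satisfying condition (1) of Theorem 2.2; return G = W H^{-1}."""
    n, p = A.shape[0], C.shape[1]
    H = cp.Variable((n, n), symmetric=True)
    W = cp.Variable((p, n))
    M11 = (np.eye(n) + (A @ H + C @ W) + (A @ H + C @ W).T
           + (1.0 / eta) * B @ B.T)
    M12 = np.sqrt(l**2 + eta * l1**2) * H
    # Auxiliary symmetric variable makes the block LMI explicitly symmetric for the solver.
    S = cp.Variable((2 * n, 2 * n), symmetric=True)
    constraints = [S == cp.bmat([[M11, M12], [M12, -np.eye(n)]]),
                   H >> margin * np.eye(n),          # H positive definite
                   S << -margin * np.eye(2 * n)]     # strict LMI enforced via a small margin
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    if problem.status not in ("optimal", "optimal_inaccurate"):
        return None
    return W.value @ np.linalg.inv(H.value)
```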

3 A numerical example

In this section, we demonstrate the effectiveness of the theoretical results with the following nonlinear impulsive system:

$$A = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix},\qquad B = \begin{bmatrix} {1.5} & {1.3} \\ {1.2} & 0 \end{bmatrix},\qquad C = \begin{bmatrix} {1.8} & {0.8} \\ 1 & {1.7} \end{bmatrix},\qquad f \bigl( {x ( t )} \bigr) = \begin{bmatrix} {\sin x_{1} } \\ {\sin x_{2} } \end{bmatrix} $$

and

$$Q = - \begin{bmatrix} {0.58} & 0 \\ 0 & {0.58} \end{bmatrix},\qquad \phi \bigl( {x ( t )} \bigr) = 0.3 \begin{bmatrix} {\sin x_{1} } \\ {\sin x_{2} } \end{bmatrix},\qquad \omega ( t ) = \begin{bmatrix} {x_{1} \sin 20\pi t} \\ {x_{2} \sin 20\pi t} \end{bmatrix}. $$

Since \(\Vert f ( {x ( t )} ) \Vert \le \Vert x ( t ) \Vert \), \(\Vert \omega ( t ) \Vert \le \Vert x ( t ) \Vert \), and \(\Vert \phi ( {x ( t )} ) \Vert \le 0.3 \Vert x ( t ) \Vert \), we can choose

$$\eta = l= l_{1} = 1,\qquad l_{2} = 0.3. $$

Solving the LMI in condition (1) of Theorem 2.2, we obtain

$$H = \begin{bmatrix} {0.2565} & 0 \\ 0 & {0.2565} \end{bmatrix}, \qquad W = \begin{bmatrix} { - 25.7414} & {-49.9086} \\ {53.3876} & { 27.8104} \end{bmatrix}. $$

Simple calculations show that \(\sqrt {\beta } +\sqrt {\beta _{1} /\beta _{2} }\, l_{2} = 0.72 < 1\), and hence \(\gamma \approx 0.52\). Thus, the nonlinear impulsive system is exponentially stable because the conditions of Theorem 2.2 are satisfied.
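
The scalar quantities in this example are easy to reproduce. The snippet below (our own verification sketch, using only the data listed above) evaluates \(\beta\), \(\beta _{1}/\beta _{2}\), and \(\gamma\) from the given H and Q.

```python
import numpy as np

H = 0.2565 * np.eye(2)
P = np.linalg.inv(H)
Q = -0.58 * np.eye(2)
l2 = 0.3

IQ = np.eye(2) + Q
beta = np.max(np.linalg.eigvals(H @ IQ.T @ P @ IQ)).real                 # = 0.42^2 = 0.1764
ratio = np.max(np.linalg.eigvalsh(P)) / np.min(np.linalg.eigvalsh(P))    # = 1: P is a scaled identity
root_gamma = np.sqrt(beta) + np.sqrt(ratio) * l2                         # 0.42 + 0.3 = 0.72 < 1
print(root_gamma, root_gamma ** 2)                                       # 0.72 and gamma ≈ 0.5184
```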

4 Conclusions

In this paper, we have discussed the exponential stability of nonlinear systems with impulse time window, disturbance input, and bounded gain error. In [3], the authors did not consider the disturbance input or the bounded gain error of nonlinear impulsive control systems. In [25], the authors did not consider the disturbance input. Thus, system (1.1) is more general and more widely applicable than those in [3, 25]. Using Theorem 2.1 and the construction of a linear stabilizing feedback controller, a new criterion for exponential stability is obtained. Finally, a numerical example demonstrates the effectiveness of the theoretical results.