1 Introduction

In [1], Kosko originally proposed the following time-delay differential equations:

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx_{i}(t)}{dt} = - a_{i}x_{i}(t)+\sum_{j=1}^{n}c_{ij}f_{j}(y_{j}(t-\tau_{j}))+I_{i},\\ \frac{dy_{i}(t)}{dt} = - b_{i}y_{i}(t)+\sum_{j=1}^{n}d_{ij}g_{j}(x_{j}(t-\rho_{j}))+J_{i}, \end{array}\displaystyle \quad i=1,2,\ldots,n, t>0, \right . $$
(1.1)

which belong to the class of so-called bidirectional associative memory (BAM) neural networks. Here, the parameters \(a_{i}>0, b_{i}>0\) denote the time scales of the respective layers of the network. The first term on each right side of system (1.1) corresponds to a stabilizing negative feedback of the system which acts instantaneously without time delay; these terms are known as forgetting or leakage terms [2, 3]. Since then, various generalized BAM neural networks have become a hot research topic owing to their potential applications in associative memory, parallel computation, artificial intelligence, and signal and image processing [4, 5]. However, such applications usually require the system to possess a certain kind of stability, among which robust stability plays an important role. In recent decades, stability analyses of neural networks have attracted the attention of many researchers (see e.g. [6–30]). In the existing literature, the Lyapunov function method has typically been employed to obtain stability criteria, but every method has its limitations. As an alternative, fixed point theorems have played a positive role in the stability analysis of BAM neural networks (see e.g. [31, 32]), and some stability criteria for BAM neural networks without impulses have been derived by fixed point theorems; the impulsive BAM neural network model, however, has rarely been investigated in this way. Moreover, practical engineering involves further complicating factors: parameter errors are unavoidable in real systems owing to the aging of electronic components, external disturbances, and parameter perturbations, so it is important to ensure that the system remains stable under such uncertainties. In this paper, therefore, the contraction mapping principle and the linear matrix inequality (LMI) technique are applied to derive an LMI-based exponential robust stability criterion for an impulsive BAM neural network model with distributed delays and uncertain parameters. Finally, a numerical example and two comparison tables are presented to show the novelty and effectiveness of the derived result.

For the sake of convenience, we introduce the following standard notations.

  • \(L=(l_{ij})_{n\times n}>0\) (<0): a positive (negative) definite matrix, i.e., \(y^{T}Ly>0\) (<0) for any \(0\neq y\in R^{n}\).

  • \(L=(l_{ij})_{n\times n}\geqslant0\) (⩽0): a positive (negative) semidefinite matrix, i.e., \(y^{T}Ly\geqslant 0\) (\(\leqslant0\)) for any \(y\in R^{n}\).

  • \(L\in[-L^{*},L^{*}]\) implies that \(|l_{ij}|\leqslant l_{ij}^{*}\) for all \(i,j\) with \(L=(l_{ij})_{n\times n}\) and \(L^{*}=(l_{ij}^{*})_{n\times n}\) (a numerical sketch of this notation follows the list).

  • \(L_{1}\geqslant L_{2}\) (\(L_{1}\leqslant L_{2}\)): this means that the matrix \((L_{1}-L_{2})\) is positive (negative) semidefinite.

  • \(L_{1}> L_{2}\) (\(L_{1}< L_{2}\)): this means that the matrix \((L_{1}-L_{2})\) is positive (negative) definite.

  • \(\lambda_{\max}(\Phi)\) and \(\lambda_{\min}(\Phi)\) denote the largest and smallest eigenvalues of the matrix Φ, respectively.

  • Denote \(|L|=(|l_{ij}|)_{n\times n}\) for any matrix \(L=(l_{ij})_{n\times n}\).

  • \(|u |=(|u_{1} |,|u_{2} |,\ldots,|u_{n} |)^{T}\) for any vector \(u =(u_{1} ,u_{2} ,\ldots,u_{n} )^{T}\in R^{n}\).

  • \(u\leqslant(\geqslant) v\) implies that \(u_{i}\leqslant(\geqslant) v_{i}\), ∀i, and \(u< (>) v\) implies that \(u_{i}< (>) v_{i}\), ∀i, for any vectors \(u =(u_{1} ,u_{2} ,\ldots,u_{n} )^{T}\in R^{n}\) and \(v =(v_{1} ,v_{2} ,\ldots,v_{n} )^{T}\in R^{n}\).

  • I: the identity matrix with compatible dimension.

  • Denote vector \(\mu=(1,1,\ldots,1)^{T}\in R^{n}\).
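The entrywise conventions above (in particular \(|L|\) and the interval membership \(L\in[-L^{*},L^{*}]\)) are straightforward to encode; the following minimal Python sketch is our own illustration, with the helper names `abs_mat` and `in_interval` introduced purely for this purpose:

```python
import numpy as np

def abs_mat(L):
    # |L| = (|l_ij|)_{n x n}: the entrywise absolute value of a matrix
    return np.abs(L)

def in_interval(L, L_star):
    # L in [-L*, L*]  iff  |l_ij| <= l*_ij for all i, j
    return bool(np.all(abs_mat(L) <= L_star))

# a perturbation matrix bounded entrywise by a nonnegative matrix
L = np.array([[0.01, -0.02], [0.0, 0.015]])
L_star = np.array([[0.02, 0.02], [0.01, 0.02]])
print(in_interval(L, L_star))  # True
```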

Remark 1

The main purpose of this paper is to obtain LMI-based robust stability criteria for BAM neural networks with uncertain parameters by using the Banach fixed point theorem. To overcome the attendant mathematical difficulties, it is necessary to formulate a novel contraction mapping. The main task in what follows is therefore to construct a new contraction mapping on a suitable space and to prove that the fixed point of this mapping is the robustly stable solution of the BAM neural network.

2 Preliminaries

The following integro-differential equations have their physical background in BAM neural networks (see e.g. [33, 34]):

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt} = -(A+\Delta A(t))x(t)+(C+\Delta C(t))f(y(t))\\ \hphantom{\frac{dx(t)}{dt} =}{}+(M+\Delta M(t))\int_{t-\tau(t)}^{t}f(y(s))\,ds, \quad t\geqslant0, t\neq t_{k},\\ \frac{dy(t)}{dt} = -(B+\Delta B(t))y(t)+(D+\Delta D(t))g(x(t))\\ \hphantom{\frac{dy(t)}{dt} =}{}+(W+\Delta W(t))\int_{t-\rho(t)}^{t}g(x(s))\,ds,\quad t\geqslant0, t\neq t_{k}, \\ x(t_{k}^{+})-x(t_{k}^{-}) =\phi(x(t_{k})), \qquad y(t_{k}^{+})-y(t_{k}^{-}) =\varphi (y(t_{k})), \quad k=1,2,\ldots, \end{array}\displaystyle \right . $$
(2.1)

with the initial condition

$$ x(s)=\xi(s),\qquad y(s)=\eta(s),\quad s\in[-\tau,0], $$
(2.2)

where \(x=(x_{1},x_{2},\ldots,x_{n}), y=(y_{1},y_{2},\ldots,y_{n})\in R^{n}\) with \(x_{i}(t),y_{j}(t)\) being the state variables of the ith neuron and the jth neuron at time t, respectively. Also \(f(x) = (f_{1}(x_{1}(t)), f_{2}(x_{2}(t)), \ldots, f_{n}(x_{n}(t)))^{T}\), \(g(x) = (g_{1}(x_{1}(t)), g_{2}(x_{2}(t)), \ldots, g_{n}(x_{n}(t)))^{T} \in R^{n}\) are the neuron activation functions. Both \(A=\operatorname{diag}(a_{1},a_{2},\ldots,a_{n}) \) and \(B=\operatorname{diag}(b_{1},b_{2},\ldots,b_{n})\) are \((n\times n)\)-dimensional positive definite matrices, with \(a_{i}\) and \(b_{j}\) denoting the rates at which the ith neuron and the jth neuron reset their potentials to the resting state in isolation when disconnected from the network and the external inputs, respectively. C and D denote the \((n\times n)\) connection weight matrices, and M and W are the \((n\times n)\) distributively delayed connection weight matrices. The parameter uncertainties considered here are norm bounded and of the following form:

$$ \begin{aligned} &\Delta A(t) \in\bigl[-A^{*}, A^{*}\bigr], \qquad \Delta B(t) \in\bigl[-B^{*}, B^{*}\bigr],\qquad \Delta C(t) \in\bigl[-C^{*}, C^{*}\bigr], \\ &\Delta D(t) \in\bigl[-D^{*}, D^{*}\bigr], \qquad \Delta M(t) \in\bigl[-M^{*}, M^{*}\bigr], \qquad \Delta W(t) \in\bigl[-W^{*}, W^{*}\bigr], \end{aligned} $$
(2.3)

where \(A^{*},B^{*}, C^{*},D^{*}, M^{*},W^{*}\) are all nonnegative matrices.

Assume, in addition, that the distributed delays satisfy \(\tau(t),\rho(t)\in[0,\tau]\). The fixed impulsive moments \(t_{k}\) (\(k=1,2,\ldots\)) satisfy \(0< t_{1}< t_{2}<\cdots\) with \(\lim_{k\to+\infty}t_{k}=+\infty\), and \(x(t_{k}^{+})\) and \(x(t_{k}^{-})\) stand for the right-hand and left-hand limits of \(x(t)\) at time \(t_{k}\), respectively. Further, suppose that

$$ x\bigl(t_{k}^{-}\bigr)=\lim_{t\to t_{k}^{-}}x(t)=x(t_{k}),\qquad y\bigl(t_{k}^{-}\bigr)= \lim_{t\to t_{k}^{-}}y(t)=y(t_{k}), \quad k=1,2,\ldots. $$
(2.4)

Throughout this paper, we assume that \(f(0)=g(0)=\phi(0)=\varphi (0)=0\in R^{n}\), and \(F=\operatorname{diag}(F_{1},F_{2},\ldots,F_{n})\), \(G=\operatorname{diag}(G_{1},G_{2},\ldots,G_{n})\), \(H=\operatorname{diag}(H_{1},H_{2},\ldots,H_{n})\), and \(\mathcal{H}=\operatorname{diag}(\mathcal{H}_{1}, \mathcal{H}_{2},\ldots,\mathcal{H}_{n})\) are positive definite diagonal matrices, satisfying

  • (H1) \(|f(x)-f(y)|\leqslant F|x-y|\), \(x,y\in R^{n}\);

  • (H2) \(|g(x)-g(y)|\leqslant G|x-y|\), \(x,y\in R^{n}\);

  • (H3) \(|\phi(x)-\phi(y)|\leqslant H|x-y|\), \(x,y\in R^{n}\);

  • (H4) \(|\varphi(x)-\varphi(y)|\leqslant\mathcal{H}|x-y|\), \(x,y\in R^{n}\);

  • (H5) there exist nonnegative matrices \(A^{*},B^{*}, C^{*},D^{*}, M^{*},W^{*}\) satisfying (2.3).

Definition 2.1

System (2.1) with initial condition (2.2) is said to be globally exponentially robustly stable in mean square for all admissible uncertainties if, for any initial condition \(\bigl( {\scriptsize\begin{matrix}{} \xi(s)\cr \eta(s) \end{matrix}} \bigr)\in C([-\tau,0],R^{2n})\), there exist positive constants a and b such that

$$\left\| \begin{pmatrix} x(t;s,\xi,\eta)\\ y(t;s,\xi,\eta) \end{pmatrix} \right\|\leqslant be^{-at},\quad \mbox{for all } t>0, $$

for all admissible uncertainties in (2.3), where the norm \(\bigl\| \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr) \bigr\|= (\sum_{i=1}^{n}|x_{i}(t)|^{2}+ \sum_{i=1}^{n}|y_{i}(t)|^{2} )^{\frac{1}{2}}\), and \(x=(x_{1},\ldots,x_{n}), y=(y_{1},\ldots,y_{n})\in R^{n}\).

Lemma 2.2 (Contraction mapping theorem [35])

Let P be a contraction operator on a complete metric space Θ. Then there exists a unique point \(\theta\in\Theta\) for which \(P(\theta)=\theta\).
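As a purely illustrative numerical companion to Lemma 2.2 (not part of the original argument), the following Python sketch runs the Picard iteration \(\theta_{k+1}=P(\theta_{k})\), which converges to the unique fixed point whenever P is a contraction; the test map \(P(x)=0.5\cos x\) is our own example:

```python
import numpy as np

def fixed_point(P, theta0, tol=1e-12, max_iter=10_000):
    # Picard iteration theta_{k+1} = P(theta_k); for a contraction with
    # constant alpha < 1 the iterates converge to the unique fixed point.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        theta_next = P(theta)
        if np.max(np.abs(theta_next - theta)) < tol:
            return theta_next
        theta = theta_next
    raise RuntimeError("no convergence; is P really a contraction?")

# P(x) = 0.5*cos(x) is a contraction on R since |P'(x)| <= 0.5 < 1
print(fixed_point(lambda x: 0.5 * np.cos(x), [1.0]))  # approx. 0.4502
```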

3 Global exponential robust stability via contraction mapping

For convenience, we denote

$$C_{t}=C+\Delta C(t), \qquad D_{t}=D+\Delta D(t),\qquad M_{t}=M+\Delta M(t),\qquad W_{t}=W+\Delta W(t). $$

Before giving the LMI-based robust stability criterion, we first present the following fact.

Lemma 3.1

The impulsive system (2.1) with initial condition (2.2) is equivalent to the following integral equations with initial condition (2.2):

$$\begin{aligned} {{\begin{pmatrix} x(t)\\ y(t) \end{pmatrix} = \begin{pmatrix} e^{-At} \{\xi(0)+\int_{0}^{t} e^{As} [-\Delta A(s)x(s)+C_{s}f(y(s))+M_{s}\int_{s-\tau (s)}^{s}f(y(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}) \}\\ e^{-Bt} \{\eta(0)+\int_{0}^{t} e^{Bs} [-\Delta B(s)y(s)+D_{s}g(x(s))+W_{s}\int_{s-\rho (s)}^{s}g(x(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \} \end{pmatrix},}} \end{aligned}$$
(3.1)

for all \(t\geqslant0\), and \(x(s)=\xi(s), y(s)=\eta(s), s\in[-\tau,0]\).

Proof

Indeed, we only need to prove that each solution of system (3.1) with initial condition (2.2) is a solution of the impulsive system (2.1) with initial condition (2.2), and the converse is also true.

On the one hand, suppose that \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) is a solution of (3.1) with initial condition (2.2). Then we have

$$\begin{aligned} e^{At}x(t)={}&\xi(0)+ \int_{0}^{t} e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau (s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds\\ &{}+\sum _{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}). \end{aligned}$$

For \(t\geqslant0\), \(t\neq t_{k}\), differentiating both sides of the above equation results in

$$e^{At}\frac{dx(t)}{dt}+Ae^{At}x(t)=\frac{d}{dt} \bigl(e^{At}x(t) \bigr) =e^{At} \biggl[-\Delta A(t)x(t)+C_{t}f\bigl(y(t)\bigr)+M_{t} \int_{t-\tau (t)}^{t}f\bigl(y(r)\bigr)\,dr \biggr], $$

or

$$\frac{dx(t)}{dt}+Ax(t) =-\Delta A(t)x(t)+C_{t}f\bigl(y(t) \bigr)+M_{t} \int_{t-\tau(t)}^{t}f\bigl(y(r)\bigr)\,dr, $$

which is the first equation of system (2.1). Similarly, we can also derive the second equation of (2.1).

Moreover, as \(t\to t_{j}^{-}\), we can get by (3.1)

$$x\bigl(t_{j}^{-}\bigr)=\lim_{\varepsilon\to0^{+}}x(t_{j}- \varepsilon)=x(t_{j}),\qquad y\bigl(t_{j}^{-}\bigr)=\lim _{\varepsilon\to0^{+}}y(t_{j}-\varepsilon)=y(t_{j}),\quad j=1,2,\ldots, $$

and

$$\begin{aligned}& x\bigl(t_{j}^{+}\bigr)=\lim_{\varepsilon\to0^{+}}x(t_{j}+ \varepsilon)=x(t_{j})+\phi \bigl(x(t_{j})\bigr),\\& y \bigl(t_{j}^{+}\bigr)=\lim_{\varepsilon\to0^{+}}y(t_{j}+ \varepsilon)=y(t_{j})+\varphi \bigl(y(t_{j})\bigr),\quad j=1,2, \ldots. \end{aligned}$$

Hence, we have proved that each solution of (3.1) with initial condition (2.2) is a solution of (2.1) with initial condition (2.2).

On the other hand, suppose that \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) is a solution of (2.1) with initial condition (2.2). Then multiplying both sides of the first equation of system (2.1) with \(e^{At}\) results in

$$e^{At}\frac{dx(t)}{dt}+Ae^{At}x(t) =e^{At} \biggl[-\Delta A(t)x(t)+C_{t}f\bigl(y(t)\bigr)+M_{t} \int_{t-\tau (t)}^{t}f\bigl(y(s)\bigr)\,ds \biggr],\quad t \geqslant0, t\neq t_{k}. $$

Moreover, integrating from \(t_{k-1}+\varepsilon\) to \(t\in(t_{k-1},t_{k})\) gives

$$\begin{aligned} e^{At}x(t)={}&e^{A(t_{k-1}+\varepsilon)}x(t_{k-1}+\varepsilon)\\ &{}+ \int _{t_{k-1}+\varepsilon}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int _{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds, \end{aligned}$$

which yields, after letting \(\varepsilon\to0^{+}\),

$$\begin{aligned} &e^{At}x(t)=e^{A(t_{k-1})}x\bigl(t_{k-1}^{+}\bigr)+ \int_{t_{k-1}}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds, \\ &\quad t \in(t_{k-1},t_{k}). \end{aligned}$$
(3.2)

Throughout this paper, we assume that ε is a sufficiently small positive number. Now, taking \(t=t_{k}-\varepsilon\) in (3.2), one obtains

$$\begin{aligned} e^{At_{k}-\varepsilon}x(t_{k}-\varepsilon)={}&e^{At_{k-1}}x \bigl(t_{k-1}^{+}\bigr)\\ &{}+ \int _{t_{k-1}}^{t_{k}-\varepsilon}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds, \end{aligned}$$

which yields by (2.4) and letting \(\varepsilon\to0^{+}\)

$$e^{At_{k}}x(t_{k})=e^{At_{k-1}}x\bigl(t_{k-1}^{+} \bigr)+ \int_{t_{k-1}}^{t_{k}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds. $$

Combining (3.2) and the above equation yields

$$\begin{aligned} e^{At}x(t)={}&e^{At_{k-1}}x\bigl(t_{k-1}^{+}\bigr)+ \int_{t_{k-1}}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds \\ ={}&e^{At_{k-1}}x(t_{k-1})+ \int_{t_{k-1}}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds\\ &{}+e^{At_{k-1}}\phi\bigl(x(t_{k-1})\bigr), \end{aligned}$$

for all \(t\in(t_{k-1},t_{k}]\), \(k=1,2,\ldots\). Thereby, we have

$$\begin{aligned}& \begin{aligned}[b] e^{At_{k-1}}x(t_{k-1}) ={}&e^{At_{k-2}}x(t_{k-2})+ \int_{t_{k-2}}^{t_{k-1}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)\\ &{}+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds +e^{At_{k-2}}\phi\bigl(x(t_{k-2})\bigr), \end{aligned} \\& \vdots \\& \begin{aligned}[b] e^{At_{2}}x(t_{2}) ={}&e^{At_{1}}x(t_{1})+ \int_{t_{1}}^{t_{2}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds\\ &{}+e^{At_{1}}\phi \bigl(x(t_{1})\bigr), \end{aligned}\\& e^{At_{1}}x(t_{1}) =\xi(0)+ \int_{0}^{t_{1}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int _{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds . \end{aligned}$$

Synthesizing the above analysis results in the first equation of system (3.1). Similarly, the second equation of system (3.1) can also be derived from system (2.1) with initial condition (2.2). Hence, we have proved that each solution of (2.1) with initial condition (2.2) is a solution of (3.1) with initial condition (2.2). This completes the proof. □

Theorem 3.2

The impulsive system (2.1) with initial condition (2.2) is globally exponentially robustly stable in mean square for all admissible uncertainties if there exists a positive number \(\alpha<1\) satisfying the following two LMI conditions:

$$\begin{aligned}& A^{*}+\bigl(|C|+C^{*}\bigr)F+\tau\bigl(|M|+M^{*}\bigr)F +\frac{1}{\delta}H+AH -\alpha A < 0, \\& B^{*}+\bigl(|D|+D^{*}\bigr)G +\tau\bigl(|W|+W^{*}\bigr)G +\frac{1}{\delta}\mathcal{H}+B\mathcal {H}-\alpha B< 0, \end{aligned}$$

where \(\delta=\inf_{k=1,2,\ldots}(t_{k+1}-t_{k})>0\), and \(A^{*}, B^{*}, C^{*}, D^{*}, M^{*}, W^{*}\) are the nonnegative matrices defined in (2.3).

Proof

To apply the contraction mapping theorem, we first define the complete metric space \(\Omega=\Omega_{1}\times\Omega_{2}\) as follows.

Let \(\Omega_{i}\) (\(i=1,2\)) be the space consisting of functions \(q_{i}(t): [-\tau,\infty)\to R^{n}\), satisfying

  • (a) \(q_{i}(t)\) is continuous on \(t\in[0,+\infty)\backslash\{t_{k}\}_{k=1}^{\infty}\);

  • (b) \(q_{1}(t)=\xi(t)\), \(q_{2}(t)=\eta(t)\), for \(t\in[-\tau,0]\);

  • (c) \(\lim_{t\to t_{k}^{-}}q_{i}(t)=q_{i}(t_{k})\), and \(\lim_{t\to t_{k}^{+}}q_{i}(t)\) exists, for all \(k=1,2,\ldots\);

  • (d) \(e^{\gamma t}q_{i}(t)\to0\in R^{n}\) as \(t\to+\infty\), where \(\gamma>0\) is a positive constant satisfying \(\gamma<\min\{\lambda_{\min}(A),\lambda_{\min}(B)\}\).

It is not difficult to verify that the product space Ω is a complete metric space if it is equipped with the following metric:

$$ \operatorname{dist}(\overline{q},\widetilde{q})=\max_{1\leqslant i\leqslant2n} \Bigl(\sup _{t\geqslant-\tau}\bigl|\overline{q}^{(i)}(t)-\widetilde{q}^{(i)}(t)\bigr| \Bigr), $$
(3.3)

where

$$\begin{aligned}& \overline{q}=\overline{q}(t)= \begin{pmatrix} \overline{q}_{1}(t)\\ \overline{q}_{2}(t) \end{pmatrix} =\bigl(\overline{q}^{(1)}(t), \overline{q}^{(2)}(t),\ldots,\overline {q}^{(2n)}(t) \bigr)^{T}\in\Omega,\\& \widetilde{q}=\widetilde{q}(t)= \begin{pmatrix} \widetilde{q}_{1}(t)\\ \widetilde{q}_{2}(t) \end{pmatrix}= \bigl(\widetilde{q}^{(1)}(t),\ldots,\widetilde{q}^{(2n)}(t) \bigr)^{T}\in\Omega, \end{aligned}$$

and \(\overline{q}_{i}\in\Omega_{i}, \widetilde{q}_{i}\in\Omega_{i}, i=1,2\).
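On time-sampled trajectories, the metric (3.3) is simply a componentwise supremum. The following Python sketch is our own illustration, assuming trajectories are stored as arrays of shape (2n, number of samples):

```python
import numpy as np

def dist(q_bar, q_tilde):
    # metric (3.3): the max over the 2n components of the sup over the
    # time grid of |q_bar^(i)(t) - q_tilde^(i)(t)|
    return float(np.max(np.abs(q_bar - q_tilde)))

# two trajectories with 2n = 4 components sampled at 500 time points
t = np.linspace(-2.1, 50.0, 500)
q_bar = np.vstack([np.exp(-t) * np.sin(k * t) for k in range(1, 5)])
q_tilde = np.zeros_like(q_bar)
print(dist(q_bar, q_tilde))  # sup-distance of q_bar from the zero function
```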

Next, we formulate the contraction mapping \(P: \Omega\to \Omega\) as follows:

$$\begin{aligned} {{P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} = \begin{pmatrix} e^{-At} \{\xi(0)+\int_{0}^{t} e^{As} [-\Delta A(s)x(s)+C_{s}f(y(s))+M_{s}\int_{s-\tau (s)}^{s}f(y(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}) \}\\ e^{-Bt} \{\eta(0)+\int_{0}^{t} e^{Bs} [-\Delta B(s)y(s)+D_{s}g(x(s))+W_{s}\int_{s-\rho (s)}^{s}g(x(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \} \end{pmatrix},}} \end{aligned}$$
(3.4)

for all \(t\geqslant0\), and

$$ P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} = \begin{pmatrix} \xi(t)\\ \eta(t) \end{pmatrix}, \quad \mbox{for all } t\in[-\tau,0]. $$
(3.5)

From Lemma 3.1, it is obvious that each fixed point of P is a solution of system (2.1) with initial condition (2.2), and each solution of system (2.1) with initial condition (2.2) is a fixed point of P.

Below, we only need to prove that the mapping P defined as (3.4)-(3.5) is truly a contraction mapping from Ω into Ω, which may be divided into two steps.

Step 1. We claim that \(P(\Omega)\subset\Omega\). That is, for any \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\in \Omega\), we shall prove that \(P \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t)\end{matrix}} \bigr)\) satisfies the conditions (a)-(d) of Ω.

Indeed, it follows by the definition of P that \(P \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t)\end{matrix}} \bigr)\) satisfies the conditions (a)-(b). Besides, because of

$$\lim_{\varepsilon\to0^{+}} \begin{pmatrix} \sum_{0< t_{k}< t_{j}-\varepsilon}e^{At_{k}}\phi (x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}-\varepsilon}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix}= \begin{pmatrix} \sum_{0< t_{k}< t_{j}}e^{At_{k}}\phi(x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} $$

and

$$\lim_{\varepsilon\to0^{+}} \begin{pmatrix} \sum_{0< t_{k}< t_{j}+\varepsilon}e^{At_{k}}\phi (x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}+\varepsilon}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} = \begin{pmatrix} \sum_{0< t_{k}< t_{j}}e^{At_{k}}\phi(x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} + \begin{pmatrix} e^{At_{j}}\phi(x_{t_{j}})\\ e^{Bt_{j}}\varphi(y_{t_{j}}) \end{pmatrix}, $$

we can conclude directly from (3.4) that

$$\lim_{\varepsilon\to0^{+}} P \begin{pmatrix} x(t_{j}-\varepsilon)\\ y(t_{j}-\varepsilon) \end{pmatrix} =P \begin{pmatrix} x(t_{j})\\ y(t_{j}) \end{pmatrix} $$

and

$$\lim_{\varepsilon\to0^{+}} P \begin{pmatrix} x(t_{j}+\varepsilon)\\ y(t_{j}+\varepsilon) \end{pmatrix} =P \begin{pmatrix} x(t_{j})\\ y(t_{j}) \end{pmatrix} + \begin{pmatrix} \phi(x(t_{j}))\\ \varphi(y(t_{j})) \end{pmatrix}, $$

which implies that \(P(\cdot)\) satisfies the condition (c).

Finally, we verify that \(P(\cdot)\) satisfies the condition (d). In fact, we can conclude from \(e^{\gamma t}x(t)\to0\) and \(e^{\gamma t}y(t)\to0\) that, for any given \(\varepsilon>0\), there exists a corresponding constant \(t^{*}>\tau\) such that

$$\bigl\vert e^{\gamma t}x(t)\bigr\vert +\bigl\vert e^{\gamma t}y(t) \bigr\vert < \varepsilon\mu, \quad\forall t\geqslant t^{*}, \mbox{where } \mu=(1,1, \ldots,1)^{T}\in R^{n}. $$

Next, we get by (H1)

$$ \begin{aligned}[b] & \biggl|e^{\gamma t}e^{-At} \int_{0}^{t}e^{As}C_{s}f \bigl(y(s)\bigr)\,ds \biggr|\\ &\quad\leqslant e^{-(A-\gamma I)t} \int_{0}^{t^{*}}e^{As}|C_{s}|F\bigl\vert y(s)\bigr\vert \,ds+e^{-(A-\gamma I)t} \int_{t^{*}}^{t}e^{As}|C_{s}|F\bigl\vert y(s)\bigr\vert \,ds. \end{aligned} $$
(3.6)

On the one hand, the boundedness assumption (2.3) produces \(|C_{s}|\mu \leqslant(|C|+C^{*})\mu\), and then

$$\begin{aligned} &e^{-(A-\gamma I)t} \int_{0}^{t^{*}}e^{As}|C_{s}|F\bigl\vert y(s)\bigr\vert \,ds \\ &\quad\leqslant t^{*}e^{-(A-\gamma I)t}e^{At^{*}}\bigl( \vert C\vert +C^{*}\bigr)F \Bigl[\max_{i} \Bigl(\sup _{s\in [0,t^{*}]}\bigl\vert y_{i}(s)\bigr\vert \Bigr) \Bigr]\mu \to0\in R^{n},\quad t\to\infty. \end{aligned}$$
(3.7)

Note that the convergence in (3.7) is in the sense of the metric defined in (3.3); all subsequent convergence statements are understood in the same sense and will not be restated.

Due to (2.3), obviously there exists a positive number \(a_{0}\) such that

$$\bigl(\vert C_{s}\vert +\vert M_{s}\vert \bigr)F \mu \leqslant a_{0}\mu. $$

So we have

$$\begin{aligned} &e^{-(A-\gamma I)t} \int_{t^{*}}^{t}e^{As}|C_{s}|F\bigl|y(s)\bigr|\,ds \\ &\quad\leqslant\varepsilon e^{-(A-\gamma I)t} \int_{t^{*}}^{t}e^{(A-\gamma I)s}|C_{s}|F \mu \,ds \\ &\quad\leqslant \varepsilon a_{0}e^{-(A-\gamma I)t} \begin{pmatrix} \frac{e^{(a_{1}-\gamma)t}}{a_{1}-\gamma}& 0& \cdots& 0& 0\\ 0&\frac{e^{(a_{2}-\gamma)t}}{a_{2}-\gamma}& 0& \cdots& 0\\ & &\ddots\\ 0&0&\cdots&0&\frac{e^{(a_{n}-\gamma)t}}{a_{n}-\gamma} \end{pmatrix} \mu \\ &\quad=\varepsilon a_{0}\biggl(\frac{1}{a_{1}-\gamma},\frac{1}{a_{2}-\gamma}, \ldots,\frac {1}{a_{n}-\gamma}\biggr)^{T}. \end{aligned}$$
(3.8)

Now, the arbitrariness of ε together with (3.6)-(3.8) implies that

$$ e^{\gamma t}e^{-At} \int_{0}^{t}e^{As}C_{s}f \bigl(y(s)\bigr)\,ds\to0\in R^{n},\quad t\to +\infty. $$
(3.9)

Similarly to the proof of (3.9), we can prove from (2.3) that

$$ e^{\gamma t}e^{-At} \int_{0}^{t} e^{As} \bigl(-\Delta A(s)x(s) \bigr)\,ds\to0\in R^{n},\quad t\to+\infty. $$
(3.10)

Further, the definition of γ gives

$$ e^{\gamma t}e^{-At} \xi(0)\to0\in R^{n}, \quad t\to+ \infty. $$
(3.11)

Below, we estimate

$$\begin{aligned} & \biggl|e^{\gamma t}e^{-At} \int_{0}^{t} e^{As}M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr\,ds \biggr| \\ &\quad\leqslant e^{-(A-\gamma I)t} \int_{0}^{t} e^{As}|M_{s}| \int_{s-\tau(s)}^{s}\bigl|f\bigl(y(r)\bigr)\bigr|\,dr\,ds \\ &\quad\leqslant e^{-(A-\gamma I)t} \int_{0}^{t} e^{As}|M_{s}| \int_{s-\tau}^{s}F\bigl|y(r)\bigr|\,dr\,ds \\ &\quad\leqslant e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds \\ &\qquad{}+e^{\gamma \tau}e^{-(A-\gamma I)t} \int_{t^{*}+\tau}^{t} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds. \end{aligned}$$
(3.12)

It follows by the definitions of \(a_{0}\), \(t^{*}\), and τ that

$$\begin{aligned} 0&\leqslant e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds \\ &\leqslant e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \int_{-\tau}^{t^{*}+\tau}F\max_{i} \Bigl( \sup_{r\in[-\tau,t^{*}+\tau]}e^{\gamma r}\bigl|y_{i}(r)\bigr| \Bigr)\mu \,dr\,ds \\ &= e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \bigl(t^{*}+\tau+\tau\bigr)F\max_{i} \Bigl(\sup _{r\in[-\tau,t^{*}+\tau]}e^{\gamma r}\bigl|y_{i}(r)\bigr| \Bigr)\mu \,ds \\ &\leqslant e^{-(A-\gamma I)t} \biggl[\bigl(t^{*}+2\tau\bigr)e^{\gamma\tau}\max _{i} \Bigl(\sup_{r\in[-\tau,t^{*}+\tau]}e^{\gamma r}\bigl|y_{i}(r)\bigr| \Bigr)e^{(A-\gamma I)(t^{*}+\tau)} \\ &\qquad{}\times \int_{0}^{t^{*}+\tau} |M_{s}|F\mu \,ds \biggr]\to0 \in R^{n},\quad t\to+\infty, \end{aligned}$$
(3.13)

and

$$\begin{aligned} &e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{t^{*}+\tau}^{t} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds \\ &\quad\leqslant\varepsilon\tau a_{0}e^{\gamma\tau}e^{-(A-\gamma I)t} \biggl( \int _{t^{*}+\tau}^{t} e^{(A-\gamma I) s}\,ds \biggr)\mu \\ &\quad\leqslant\varepsilon\tau a_{0}e^{\gamma\tau}e^{-(A-\gamma I)t} \begin{pmatrix} \frac{e^{(a_{1}-\gamma)t}}{a_{1}-\gamma}& 0& \cdots& 0& 0\\ 0&\frac{e^{(a_{2}-\gamma)t}}{a_{2}-\gamma}& 0& \cdots& 0\\ & &\ddots\\ 0&0&\cdots&0&\frac{e^{(a_{n}-\gamma)t}}{a_{n}-\gamma} \end{pmatrix} \mu \\ &\quad=\varepsilon \biggl[\tau a_{0}e^{\gamma\tau}\biggl( \frac{1}{a_{1}-\gamma},\frac {1}{a_{2}-\gamma},\ldots,\frac{1}{a_{n}-\gamma} \biggr)^{T} \biggr]. \end{aligned}$$
(3.14)

Synthesizing (3.12)-(3.14) yields

$$ e^{\gamma t}e^{-At} \int_{0}^{t} e^{As}M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr\,ds\to0\in R^{n}, \quad t\to+\infty . $$
(3.15)

Combining (3.9)-(3.11) and (3.15) results in

$$\begin{aligned} &e^{\gamma t}e^{-At} \biggl\{ \xi(0)+ \int_{0}^{t} e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau (s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds \biggr\} \to0\in R^{n}, \\ &\quad t\to+\infty. \end{aligned}$$
(3.16)

Similarly, we can obtain

$$\begin{aligned} &e^{\gamma t}e^{-Bt} \biggl\{ \eta(0)+ \int_{0}^{t} e^{Bs} \biggl[-\Delta B(s)y(s)+D_{s}g\bigl(x(s)\bigr)+W_{s} \int_{s-\rho (s)}^{s}g\bigl(x(r)\bigr)\,dr \biggr]\,ds \biggr\} \to0\in R^{n}, \\ &\quad t\to+\infty. \end{aligned}$$
(3.17)

In addition, we claim that, as \(t\to+\infty\),

$$\begin{aligned} e^{\gamma t} \begin{pmatrix} e^{-At} \sum_{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{-Bt} \sum_{0< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} ={}&e^{\gamma t} \left[ \begin{pmatrix} e^{-At} \sum_{0< t_{k}\leqslant t^{*}}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{-Bt} \sum_{0< t_{k}\leqslant t^{*}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} \right.\\ &{}\left.+\begin{pmatrix} e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}}\phi (x_{t_{k}}) \\ e^{-Bt} \sum_{t^{*}< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} \right] \to0\in R^{2n} . \end{aligned}$$
(3.18)

In fact, on the one hand,

$$\begin{aligned} &e^{\gamma t} \begin{pmatrix} e^{-At} \sum_{0< t_{k}\leqslant t^{*}}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{-Bt} \sum_{0< t_{k}\leqslant t^{*}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} = \begin{pmatrix} e^{(\gamma I -A)t} \sum_{0< t_{k}\leqslant t^{*}}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{(\gamma I -B)t} \sum_{0< t_{k}\leqslant t^{*}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} \to0\in R^{2n}, \\ &\quad t\to+ \infty. \end{aligned}$$
(3.19)

Below we shall prove

$$ e^{\gamma t} \begin{pmatrix} e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}}\phi (x_{t_{k}}) \\ e^{-Bt} \sum_{t^{*}< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix}\to0\in R^{2n}, \quad t\to+ \infty. $$
(3.20)

First, for any given \(t>t^{*}\), we may assume that \(t_{m-1}< t^{*}\leqslant t_{m}\) and \(t_{j}< t\leqslant t_{j+1}\). Then

$$\begin{aligned} & \biggl|e^{\gamma t} e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}} \phi (x_{t_{k}}) \biggr| \\ &\quad\leqslant\frac{\varepsilon}{\delta} \begin{pmatrix} e^{-(a_{1}-\gamma)t} (\delta e^{(a_{1}-\gamma )t_{j+1}}+ \sum_{t_{m}\leqslant t_{k}\leqslant t_{j}}(t_{k+1}-t_{k})e^{(a_{1}-\gamma)t_{k}} )H_{1}\\ \vdots\\ e^{-(a_{n}-\gamma)t} (\delta e^{(a_{n}-\gamma)t_{j+1}}+ \sum_{t_{m}\leqslant t_{k}\leqslant t_{j}}(t_{k+1}-t_{k})e^{(a_{n}-\gamma)t_{k}} )H_{n} \end{pmatrix} \\ &\quad\leqslant\frac{\varepsilon}{\delta} \begin{pmatrix} e^{-(a_{1}-\gamma)t} (\delta e^{(a_{1}-\gamma )t_{j+1}}+ \frac{1}{a_{1}-\gamma}e^{(a_{1}-\gamma)t} )H_{1}\\ \vdots\\ e^{-(a_{n}-\gamma)t} (\delta e^{(a_{n}-\gamma)t_{j+1}}+ \frac{1}{a_{n}-\gamma }e^{(a_{n}-\gamma)t} )H_{n} \end{pmatrix}, \end{aligned}$$

which together with the arbitrariness of the positive number ε implies that

$$e^{\gamma t}e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}} \phi(x_{t_{k}})\to0\in R^{n},\quad t\to+\infty. $$

Similarly, we can also obtain

$$e^{\gamma t}e^{-Bt} \sum_{t^{*}< t_{k}< t}e^{Bt_{k}} \varphi(y_{t_{k}})\to0\in R^{n},\quad t\to+\infty. $$

So we have proved (3.20), which together with (3.19) establishes (3.18). Moreover, combining (3.16)-(3.18) implies that

$$e^{\gamma t}P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} \to0\in R^{2n}, \quad t\to+\infty. $$

Hence \(P(\cdot)\) satisfies the condition (d). So we have proved that \(P(\Omega)\subset\Omega\).

Step 2. Below, we only need to prove that the operator \(P: \Omega\to\Omega\) is a contraction mapping.

Indeed, for any \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr), \bigl( {\scriptsize\begin{matrix}{}\overline{x}(t)\cr \overline{y}(t) \end{matrix}} \bigr)\in \Omega\), we can get by the LMI conditions of Theorem 3.2

$$\begin{aligned} & \left|P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} -P \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix}\right| \\ &\quad \leqslant \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}|\Delta A(s)||x(s)-\overline{x}(s)|\,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}|\Delta B(s)||y(s)-\overline{y}(s)|\,ds \end{pmatrix} + \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}|C_{s}||f(y(s))-f(\overline{y}(s))|\,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}|D_{s}||g(x(s))-g(\overline{x}(s))|\,ds \end{pmatrix} \\ &\qquad{}+ \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}|M_{s}|\int_{s-\tau(s)}^{s}|f(y(r))-f(\overline{y}(r))|\,dr\,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}|W_{s}|\int_{s-\rho(s)}^{s}|g(x(r))-g(\overline{x}(r))|\,dr\,ds \end{pmatrix} \\ &\qquad{}+ \begin{pmatrix} e^{-At} \sum_{0< t_{k}< t}e^{At_{k}}|\phi (x_{t_{k}})-\phi(\overline{x}_{t_{k}})| \\ e^{-Bt} \sum_{0< t_{k}< t}e^{Bt_{k}}|\varphi(y_{t_{k}})-\varphi(\overline{y}_{t_{k}})| \end{pmatrix} \\ &\quad\leqslant \left[ \begin{pmatrix} A^{-1}A^{*}\mu\\ B^{-1}B^{*}\mu \end{pmatrix} + \begin{pmatrix} A^{-1}(|C|+C^{*})F\mu \\ B^{-1}(|D|+D^{*})G\mu \end{pmatrix} +\tau \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}(|M|+M^{*})F\mu \,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}(|W|+W^{*})G\mu \,ds \end{pmatrix}\right. \\ &\qquad{}\left.+\frac{1}{\delta}\begin{pmatrix} e^{-At} ( \sum_{t_{1}\leqslant t_{k}\leqslant t_{j-1}}(t_{k+1}-t_{k})e^{At_{k}}+\delta e^{At_{j}} )H\mu\\ e^{-Bt} ( \sum_{t_{1}\leqslant t_{k}\leqslant t_{j-1}}(t_{k+1}-t_{k})e^{Bt_{k}}+\delta e^{Bt_{j}} )\mathcal{H}\mu \end{pmatrix} \right] \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right) \\ &\quad\leqslant \biggl[ \begin{pmatrix} A^{-1}A^{*}\mu\\ B^{-1}B^{*}\mu \end{pmatrix}+ \begin{pmatrix} A^{-1}(|C|+C^{*})F\mu \\ B^{-1}(|D|+D^{*})G\mu \end{pmatrix} +\tau \begin{pmatrix} A^{-1}(|M|+M^{*})F\mu \\ B^{-1}(|W|+W^{*})G\mu \end{pmatrix} \\ &\qquad{}+\frac{1}{\delta}\begin{pmatrix} e^{-At} (\int_{0}^{t}e^{As}\,ds+\delta e^{At} )H\mu\\ e^{-Bt} (\int_{0}^{t}e^{Bs}\,ds+\delta e^{Bt} )\mathcal{H}\mu \end{pmatrix} \biggr] \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right) \\ &\quad\leqslant \left[ \begin{pmatrix} (A^{-1}A^{*}+A^{-1}(|C|+C^{*})F+\tau A^{-1}(|M|+M^{*})F +\frac{1}{\delta}A^{-1}H+H )\mu \\ ( B^{-1}B^{*}+B^{-1}(|D|+D^{*})G +\tau B^{-1}(|W|+W^{*})G +\frac{1}{\delta}B^{-1}\mathcal{H}+\mathcal{H} )\mu \end{pmatrix} \right]\\ &\qquad{}\times \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right) \\ &\quad< \alpha \begin{pmatrix} \mu\\ \mu \end{pmatrix} \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right), \end{aligned}$$

where we assume \(t_{j}< t\leqslant t_{j+1}\) with \(j=0,1,2,\ldots\), and \(t_{0}=0\). Hence

$$\operatorname{dist} \left(P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, P \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right)\leqslant \alpha \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right), $$

where \(A^{-1}\) and \(B^{-1}\) are the inverse matrices of A and B, respectively.

Therefore, \(P: \Omega\to\Omega\) is a contraction mapping, and hence there exists a unique fixed point \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) of P in Ω, which implies that \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) is a solution of the impulsive dynamic equations (2.1) with the initial condition (2.2), satisfying \(e^{\gamma t} \bigl\| \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr) \bigr\|\to0\) as \(t\to+\infty\). Therefore, the impulsive equations (2.1)-(2.2) are globally exponentially robustly stable in mean square for all admissible uncertainties. □

Remark 2

This is the first time that contraction mapping theory has been employed to derive an LMI-based exponential robust stability criterion for BAM neural networks with distributed delays and parameter uncertainties, so the obtained criterion is novel compared with existing results (see Remarks 3-5 below and Tables 1 and 2).

Table 1 Numerical comparison of Theorem 3.2 with [36], Theorem 2, in Example 1
Table 2 Comparison of our Theorem 3.2 with other existing results on BAM neural network models

If impulse behaviors are ignored, we can derive the following corollary from Theorem 3.2.

Corollary 3.3

The following system with initial condition (2.2) is globally exponentially robustly stable in mean square for all admissible uncertainties:

$$\left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt} = -(A+\Delta A(t))x(t)+(C+\Delta C(t))f(y(t)) +(M+\Delta M(t))\int_{t-\tau(t)}^{t}f(y(s))\,ds,\\ \quad t\geqslant0,\\ \frac{dy(t)}{dt} = -(B+\Delta B(t))y(t)+(D+\Delta D(t))g(x(t))+ (W+\Delta W(t))\int_{t-\rho(t)}^{t}g(x(s))\,ds, \\ \quad t\geqslant0, \end{array}\displaystyle \right . $$

if there exists a positive number \(\alpha<1\) satisfying the following two LMI conditions:

$$\begin{aligned}& A^{*}+\bigl(|C|+C^{*}\bigr)F+\tau\bigl(|M|+M^{*}\bigr)F -\alpha A < 0, \\& B^{*}+\bigl(|D|+D^{*}\bigr)G +\tau\bigl(|W|+W^{*}\bigr)G -\alpha B< 0, \end{aligned}$$

where \(A^{*}, B^{*}, C^{*}, D^{*}, M^{*}, W^{*}\) are the nonnegative matrices defined in (2.3).
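In the notation of this paper, both sets of conditions require certain real (generally nonsymmetric) matrices to be negative definite, i.e. \(y^{T}Ly<0\) for all \(y\neq0\), which holds exactly when the largest eigenvalue of the symmetric part \((L+L^{T})/2\) is negative. The following Python sketch is our own illustration of such a feasibility check by grid search over α; a production check would instead use a dedicated LMI solver, such as the Matlab LMI toolbox employed in Section 4:

```python
import numpy as np

def neg_def(L):
    # y^T L y < 0 for all y != 0  iff  lambda_max((L + L^T)/2) < 0
    return np.max(np.linalg.eigvalsh((L + L.T) / 2.0)) < 0

def feasible_alpha(A, B, L1_fixed, L2_fixed, grid=999):
    # L1_fixed and L2_fixed collect the alpha-independent terms of the two
    # conditions, e.g. A* + (|C|+C*)F + tau(|M|+M*)F + H/delta + AH for the
    # first condition of Theorem 3.2; we search for 0 < alpha < 1 with
    # L1_fixed - alpha*A < 0 and L2_fixed - alpha*B < 0.
    for alpha in np.linspace(0.001, 0.999, grid):
        if neg_def(L1_fixed - alpha * A) and neg_def(L2_fixed - alpha * B):
            return float(alpha)
    return None  # no feasible alpha found on the grid
```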

4 Numerical example

Example 1

Equip the impulsive system (2.1)-(2.2) with the following parameters:

$$\begin{aligned}& f\bigl(y(t)\bigr)= \begin{pmatrix} \sin(0.1y_{1}(t))\\ 0.2\sin(y_{2}(t)) \end{pmatrix},\qquad g\bigl(x(t)\bigr)= \begin{pmatrix} \sin(0.2x_{1}(t))\\ 0.1\sin(x_{2}(t)) \end{pmatrix}, \end{aligned}$$
(4.1)
$$\begin{aligned}& \phi\bigl(x(t_{k})\bigr)= \begin{pmatrix} 0.3x_{1}(t_{k})\cos(100.01t_{k})\\ \sin(0.2x_{2}(t_{k})) \end{pmatrix},\qquad \varphi \bigl(y(t_{k})\bigr)= \begin{pmatrix} \cos(0.3y_{1}(t_{k}))\\ 0.2y_{2}(t_{k})\sin(100.01t_{k}) \end{pmatrix}, \\& A= \begin{pmatrix} 1.9& 0\\ 0& 2 \end{pmatrix}, \qquad B= \begin{pmatrix} 2& 0\\ 0& 1.8 \end{pmatrix},\qquad C= \begin{pmatrix} -0.1& 0.01\\ 0& 0.2 \end{pmatrix}, \\& D= \begin{pmatrix} 0.2& 0.02\\ 0& -0.1 \end{pmatrix}, \qquad M= \begin{pmatrix} 0.002&-0.001\\ 0.001& 0.001 \end{pmatrix}, \\& W= \begin{pmatrix} 0.008&-0.001\\ 0.001& 0.001 \end{pmatrix},\qquad F= \begin{pmatrix} 0.1& 0\\ 0& 0.2 \end{pmatrix}, \\& G= \begin{pmatrix} 0.2& 0\\ 0& 0.1 \end{pmatrix},\qquad H= \begin{pmatrix} 0.3& 0\\ 0& 0.2 \end{pmatrix}= \mathcal{H}, \\& A^{*}= \begin{pmatrix} 0.01& 0.01\\ 0& 0.03 \end{pmatrix},\qquad B^{*}= \begin{pmatrix} 0.03& 0\\ 0.01& 0.02 \end{pmatrix},\qquad C^{*}= \begin{pmatrix} 0.03&0\\ 0.01& 0.02 \end{pmatrix}, \\& D^{*}= \begin{pmatrix} 0.01&0.01\\ 0& 0.02 \end{pmatrix},\qquad M^{*}= \begin{pmatrix} 0.015&0\\ 0.01&\ 0.012 \end{pmatrix},\qquad W^{*}= \begin{pmatrix} 0.011&0.01\\ 0& 0.018 \end{pmatrix}. \end{aligned}$$
(4.2)

Take \(t_{1}=0.3\), \(t_{k}=t_{k-1}+0.3k\), and \(\delta=0.5\), \(\tau(t)=\rho(t)=\tau=2.1\). Let \(x_{1}(s)=\tanh s\), \(x_{2}(s)=e^{s+0.5}\), \(y_{1}(s)=2\sin s\), \(y_{2}(s)=2\cos(s+0.5)\). Then we can use the Matlab LMI toolbox to solve the two LMI conditions in Theorem 3.2, obtaining the feasible value

$$\alpha=0.9885, $$

satisfying \(0<\alpha<1\). Thereby, we can conclude from Theorem 3.2 that the impulsive equations (2.1)-(2.2) are globally exponentially robustly stable in mean square for all admissible uncertainties (see Figure 1).

Figure 1 State trajectory of the system for Example 1.
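The reported value α = 0.9885 can be cross-checked directly from the data (4.1)-(4.2). The following Python sketch, our own sanity check rather than the Matlab LMI toolbox computation itself, substitutes the matrices of Example 1 into the two conditions of Theorem 3.2 and tests negative definiteness through the symmetric part:

```python
import numpy as np

def neg_def(L):
    # y^T L y < 0 for all y != 0  iff  lambda_max((L + L^T)/2) < 0
    return np.max(np.linalg.eigvalsh((L + L.T) / 2.0)) < 0

A = np.diag([1.9, 2.0]);  B = np.diag([2.0, 1.8])
C = np.array([[-0.1, 0.01], [0.0, 0.2]])
D = np.array([[0.2, 0.02], [0.0, -0.1]])
M = np.array([[0.002, -0.001], [0.001, 0.001]])
W = np.array([[0.008, -0.001], [0.001, 0.001]])
F = np.diag([0.1, 0.2]);  G = np.diag([0.2, 0.1])
H = np.diag([0.3, 0.2]);  Hc = np.diag([0.3, 0.2])  # Hc is the calligraphic H
As = np.array([[0.01, 0.01], [0.0, 0.03]])   # A^*
Bs = np.array([[0.03, 0.0], [0.01, 0.02]])   # B^*
Cs = np.array([[0.03, 0.0], [0.01, 0.02]])   # C^*
Ds = np.array([[0.01, 0.01], [0.0, 0.02]])   # D^*
Ms = np.array([[0.015, 0.0], [0.01, 0.012]]) # M^*
Ws = np.array([[0.011, 0.01], [0.0, 0.018]]) # W^*
tau, delta, alpha = 2.1, 0.5, 0.9885

L1 = As + (np.abs(C) + Cs) @ F + tau * (np.abs(M) + Ms) @ F \
     + H / delta + A @ H - alpha * A
L2 = Bs + (np.abs(D) + Ds) @ G + tau * (np.abs(W) + Ws) @ G \
     + Hc / delta + B @ Hc - alpha * B
print(neg_def(L1), neg_def(L2))  # both True, so alpha = 0.9885 is feasible
```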

Remark 3

Example 1 can also be studied by means of [36], Theorem 2 (cf. [36], Example 1), and Table 1 compares our Example 1 with that approach. Our Example 1 illustrates the effectiveness of the LMI-based criterion (Theorem 3.2). Although both [36], Theorem 2, and our Theorem 3.2 concern BAM neural networks with distributed delays, our Theorem 3.2 additionally accommodates impulses.

Remark 4

In Example 1, our upper bound of the time delays is \(\tau=2.1\), while the upper bound of the time delays in [36], Example 1, is 1.9, which indicates that our result compares favorably with some of the current good results. Table 2 below provides further verification.

Remark 5

Table 2 permits a synthetic comparison of the criteria involved with respect to their mathematical models, main methods, and the effectiveness of their conclusions. In summary, the differences in methods and models show that our Theorem 3.2 is genuinely novel compared with existing results.

5 Conclusions

The modeling of impulsive uncertain BAM neural networks brings mathematical difficulties to the application of the contraction mapping theorem; indeed, before our Theorem 3.2, the contraction mapping theorem had never been employed to derive the robust stability of impulsive uncertain BAM neural networks. Moreover, our new criterion can easily be checked with the Matlab LMI toolbox. Example 1 illustrates the effectiveness and feasibility of the criterion by using the Matlab LMI toolbox, and Tables 1 and 2 are presented to show the novelty of our Theorem 3.2 (see Remarks 2-5).