1 Introduction

Complex networks [1, 2] are ubiquitous in daily life; examples include the Internet, the World Wide Web, communication networks, social networks, genetic regulatory networks, and power grids. In 1990, the idea of controlling chaos was proposed, and it was shown that a chaotic attractor can be converted to any one of a large number of possible attracting time-periodic motions by making only small time-dependent perturbations of an available system parameter [1]. Synchronization of complex networks has become an active field of research [3–10], because the synchronization mechanism helps explain many natural phenomena, including the synchronous exchange of information on the Internet and the World Wide Web and the synchronous transfer of digital or analog signals in communication networks. In 1998, focusing on the transition from a regular lattice to a random graph, Watts and Strogatz introduced an influential model, namely the small-world network [4].

From the control-strategy point of view, a variety of control approaches have been proposed for the synchronization of complex networks. These approaches can, in general, be divided into two classes. One is based on continuously updated feedback signals, for instance, time-delay feedback control [11], adaptive control [12, 13], and nonlinear feedback control [14]. The other is based on discrete signals updated at discrete instants, such as sampled-data control [15] and impulsive control [16]. The traditional way to synchronize a complex network is to add a controller to each node so as to steer the dynamics toward a desired synchronization trajectory. However, a complex network is normally composed of a large number of high-dimensional nodes, and it is expensive and often infeasible to control all of them. Motivated by this practical consideration, the idea of controlling only a small portion of the nodes, named pinning control, was introduced in [17]. Pinning control, as a feasible strategy, can drive networks of coupled oscillators onto a desired trajectory. Generally speaking, pinning control is applied by adding a feedback control input to a fraction of the network nodes. Although direct control is exerted only on these pinned nodes, its effect propagates to the remaining oscillators through the coupling among the nodes. Many pinning algorithms have been reported for the synchronization of dynamical networks (see [18–23]). Reference [18] investigated the effects of the control strength on nonlinearly coupled systems in the process of synchronization. Wu and Fu [21] studied the cluster mixed synchronization of complex networks with community structure and nonidentical nodes by using linear pinning control schemes. It turns out that the pinning control method reduces the control cost to a certain extent by reducing the number of controllers added to the nodes.

Up to now, various synchronization phenomena, such as complete synchronization [24, 25], generalized synchronization [26], phase synchronization [27], cluster synchronization [28], and exponential synchronization [29], have been reported in the literature. Complete synchronization means that the coupled chaotic systems remain in step with each other in the course of time. In coupled systems with identical elements (i.e., each component having the same dynamics and parameter set), complete synchronization can be observed; it can be regarded as inner synchronization. Apart from this inner synchronization behavior, there exists another synchronization phenomenon named outer synchronization, which occurs between two or more coupled networks [30]. In reality, many practical systems, such as the balanced development of different species [31], the groups of Drosophila clock neurons [32], and the spread of diseases such as SARS and bird flu between two communities [33], can be used to illustrate the outer synchronization phenomenon between two networks. Due to its theoretical and practical importance, outer synchronization between two dynamical networks has drawn much attention in recent years [34–38]. In [34], outer synchronization was studied between two delay-coupled complex dynamical networks with nonidentical topological structures and a noise perturbation. Zheng et al. [35] investigated adaptive synchronization between two nonlinearly delay-coupled complex networks with bidirectional actions and nonidentical topological structures. Outer synchronization between drive and response networks via adaptive impulsive pinning control was investigated in [36]. The problem of pinning outer synchronization was considered between two delayed complex networks with nonlinear coupling [37]. In [38], an aperiodically adaptive intermittent control scheme combined with the pinning strategy was proposed for outer synchronization between two general complex delayed dynamical networks.

Motivated by the above discussions, in this paper we study the outer synchronization of complex dynamical networks with mixed time-delayed coupling. By combining the pinning control method with the linear matrix inequality (LMI) technique, some sufficient conditions for the synchronization of complex networks with time-varying delays are derived. By proposing a novel condition on the activation functions, further improved outer synchronization criteria are obtained. Finally, a numerical example is given to illustrate the effectiveness of the proposed methods.

This paper is organized as follows. The network model is introduced and some necessary lemmas are given in Sect. 2. Section 3 discusses the outer synchronization of complex dynamical networks with mixed delayed coupling by the pinning control method, and corresponding criteria guaranteeing synchronization are obtained. The theoretical results are verified numerically by a representative example in Sect. 4. Finally, this paper is concluded in Sect. 5.

Notation

Throughout this paper, \(\mathbb{R} ^{n}\) denotes the n-dimensional Euclidean space and \(\mathbb{R} ^{n \times n}\) is the set of all \(n \times n\) real matrices. For symmetric matrices X and Y, the notation \(X> Y\) (\(X\ge Y\)) means that the matrix \(X-Y\) is positive definite (positive semidefinite). We write \(\left[\begin{smallmatrix} X & Y \\ Y^{T} & Z \end{smallmatrix}\right]=\left[\begin{smallmatrix} X & Y \\ * & Z \end{smallmatrix}\right]\), with ∗ denoting the symmetric term in a symmetric matrix.

2 Preliminaries

Consider a complex dynamical network consisting of N identical, linearly coupled nodes with mixed delays:

$$ \dot{x}_{i}(t)=Ax_{i}(t)+Bf \bigl(x_{i}(t)\bigr)+Cf\bigl(x_{i}\bigl(t-\tau(t)\bigr) \bigr)+D \int_{t-\sigma }^{t}f\bigl(x_{i}(s)\bigr)\,ds + \sum_{j=1}^{N}{h_{ij}\Gamma x_{j}(t)}, $$
(1)

where \(i=1,2,\ldots,N\), \(x_{i}(t)=(x_{i1}(t),\ldots, x_{in}(t))^{T} \in\mathbb {R} ^{n}\) is the state vector of node i, \(f:\mathbb{R} ^{n} \to\mathbb {R} ^{n}\) is continuously differentiable, and \(f(x_{i}(t))=[f_{1}(x_{i1}(t)),\ldots,f_{n}(x_{in}(t))]^{T}\). Here \(A=\operatorname{diag}\{ a_{1}, a_{2},\ldots,a_{n}\}<0\). The matrices \(B=(b_{ij})_{n\times n} \) and \(C=(c_{ij})_{n\times n} \) are the weight and delayed weight matrices, respectively, and \(D=(d_{ij})_{n\times n} \) is the distributed-delay weight matrix. The constant \(\sigma>0\) is the distributed delay. The matrix \(\Gamma =\operatorname{diag}\{\gamma_{1},\gamma_{2},\ldots,\gamma_{n}\} \in\mathbb{R} ^{n \times n} \) is the inner coupling matrix of the network and is positive definite. \(H=(h_{ij})_{N\times N}\) is the outer coupling matrix representing the network topology, defined as follows: if there is a connection between nodes i and j (\(j\ne i\)), then \(h_{ij}\ne0\); otherwise, \(h_{ij}=0\) (\(j\ne i\)), and the diagonal elements of H are defined by \(h_{ii}=-\sum_{j=1,j\ne i}^{N}{h_{ij}}\).
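To make the diffusive structure of the coupling term \(\sum_{j=1}^{N}h_{ij}\Gamma x_{j}(t)\) concrete, the following minimal Python sketch builds an outer coupling matrix with the zero-row-sum property \(h_{ii}=-\sum_{j\ne i}h_{ij}\) from an arbitrary adjacency pattern and evaluates the coupling input at every node. The ring adjacency, the identity inner coupling, and all numerical values are illustrative assumptions, not data from this paper.

```python
import numpy as np

def outer_coupling_matrix(adj):
    """Build H from a weighted adjacency matrix: h_ij = adj_ij for j != i,
    h_ii = -sum_{j != i} h_ij, so every row of H sums to zero."""
    H = np.array(adj, dtype=float)
    np.fill_diagonal(H, 0.0)
    np.fill_diagonal(H, -H.sum(axis=1))
    return H

def coupling_term(H, Gamma, X):
    """Coupling input sum_j h_ij * Gamma * x_j for all nodes.
    X has shape (N, n): one state vector per row."""
    return H @ X @ Gamma.T

# Illustrative example: a ring of N = 5 nodes with n = 2 state components.
N, n = 5, 2
adj = np.zeros((N, N))
for i in range(N):
    adj[i, (i + 1) % N] = adj[i, (i - 1) % N] = 1.0   # unit-weight ring (assumed)
H = outer_coupling_matrix(adj)
Gamma = np.eye(n)                                     # inner coupling (assumed)
X = np.random.default_rng(0).standard_normal((N, n))  # arbitrary node states
print(H.sum(axis=1))                                  # all rows sum to zero
print(coupling_term(H, Gamma, X).shape)               # (N, n)
```

Row i of the returned array is \(\sum_{j}h_{ij}\Gamma x_{j}\), the signal that node i receives from its neighbours.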

Assumption 1

Here \(\tau(t)\) denotes the interval time-varying delay satisfying

$$ 0\le\tau_{0} \le\tau(t) \le\tau_{m}, \qquad \mu_{0} \le\dot{\tau}(t)\le \mu_{m} < +\infty, \quad \mbox{and}\quad \bar{\tau}=\tau_{m}-\tau_{0}. $$
(2)

Assumption 2

$$ l _{i}^{-} \le\frac{{{f_{i}}(x) - {f_{i}}(y)}}{{x - y}} \le l _{i}^{+}, \quad \forall x,y \in\mathbb{R},x \ne y, i = 1,2, \ldots,n, $$
(3)

where \(l _{i}^{-} \), \(l_{i}^{+} \), \(i = 1,2, \ldots,n\), are constants and \(L_{1}=\operatorname{diag}(l_{1}^{-}, l_{2}^{-},\ldots, l_{n}^{-})\), \(L_{2}=\operatorname{diag}(l_{1}^{+}, l_{2}^{+},\ldots, l_{n}^{+})\).
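As a concrete instance, the activation function \(\tanh\) used in the example of Sect. 4 satisfies condition (3) with \(l_{i}^{-}=0\) and \(l_{i}^{+}=1\), since by the mean value theorem its difference quotients equal \(\operatorname{sech}^{2}(\xi)\in(0,1]\). The short Python sketch below is only a numerical spot check of this sector bound; the sample range and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, 10000)
y = rng.uniform(-5.0, 5.0, 10000)
mask = x != y                          # condition (3) only constrains x != y
q = (np.tanh(x[mask]) - np.tanh(y[mask])) / (x[mask] - y[mask])
print(q.min(), q.max())                # all quotients stay inside the sector [0, 1]
assert np.all(q >= -1e-12) and np.all(q <= 1.0 + 1e-12)
```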

Remark 1

The constants \(l _{i}^{-}\) and \(l _{i}^{+}\) in Assumption 2 are allowed to be positive, negative, or zero. This assumption is weaker than Assumption 2 in [20], which corresponds to a special case of condition (3).

We refer to model (1) as the drive complex dynamical network. Correspondingly, the response complex network with controllers \(u_{i}(t)\), \(i=1,2,\ldots,N\), can be written as

$$\begin{aligned}& \begin{aligned} \dot{y}_{i}(t)&=Ay_{i}(t)+Bf \bigl(y_{i}(t)\bigr)+Cf\bigl(y_{i}\bigl(t-\tau(t)\bigr) \bigr) \\ &\quad {}+D \int_{t-\sigma }^{t}f\bigl(y_{i}(s)\bigr)\,ds + \sum_{j=1}^{N}{h_{ij}\Gamma y_{j}(t)}+u_{i}(t), \end{aligned} \end{aligned}$$
(4)
$$\begin{aligned}& u_{i}(t)= \textstyle\begin{cases} -d(y_{i}(t)-x_{i}(t)), & i=1,2,\ldots,l; \\ 0, & i=l+1,\ldots,N, \end{cases}\displaystyle \end{aligned}$$
(5)

where \(d=\operatorname{diag}(d_{1},d_{2},\ldots,d_{n})>0\) is a positive definite feedback gain matrix, and \(1\leq l\leq N\).
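A minimal sketch of how the pinning law (5) could be evaluated in a simulation is given below; it returns a zero input for the unpinned nodes \(i=l+1,\ldots,N\). The array shapes and the sample values of N, l, and d are illustrative assumptions.

```python
import numpy as np

def pinning_control(Y, X, d, l):
    """Controller (5): u_i = -d (y_i - x_i) for the first l nodes, 0 otherwise.
    Y, X have shape (N, n); d is the n x n feedback gain matrix."""
    U = np.zeros_like(Y)
    U[:l] = -(Y[:l] - X[:l]) @ d.T
    return U

# Example: N = 20 two-dimensional nodes, l = 10 pinned nodes, d = 2*I (assumed values).
rng = np.random.default_rng(2)
X, Y = rng.standard_normal((20, 2)), rng.standard_normal((20, 2))
U = pinning_control(Y, X, d=2.0 * np.eye(2), l=10)
print(np.count_nonzero(U.any(axis=1)))   # only the 10 pinned nodes receive a nonzero input
```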

Definition 2.1

The drive-response networks (1) and (4) are said to achieve outer synchronization if

$$ \lim_{t\rightarrow\infty}\bigl\| y_{i}(t)-x_{i}(t)\bigr\| =0, \quad i=1,2,\ldots,N. $$

We define outer synchronization errors between the drive network (1) and the response network (4) as \(e_{i}(t)=y_{i}(t)-x_{i}(t)\).

Then, the following error dynamical network can be obtained:

$$\begin{aligned} \dot{e}_{i}(t) =&Ae_{i}(t)+Bg \bigl(e_{i}(t)\bigr)+Cg\bigl(e_{i}\bigl(t-\tau(t)\bigr) \bigr) \\ &{}+D \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr)\,ds + \sum_{j=1}^{N}h_{ij}\Gamma e_{j}(t)-de_{i}(t), \end{aligned}$$
(6)

where \(i=1,2,\ldots,l\), \(g(e_{i}(t))=f(y_{i}(t))-f(x_{i}(t))\), and \(g(e_{i}(t))=[g_{1}(e_{i1}(t)),\ldots,g_{n}(e_{in}(t))]^{T}\). According to inequality (3), one can obtain

$$ l _{i}^{-} \le\frac{{{g_{i}}(x)}}{{x}} \le l _{i}^{+} , \qquad g_{i}(0) = 0, \quad \forall x \in\mathbb{R}, x \ne0, i=1,2,\ldots, n. $$
(7)

Lemma 2.1

(Jensen’s inequality)

For any constant matrix \(X\in\mathbb{R}^{n\times n}\) with \(X=X^{T}>0\) and two scalars \(h_{2}\ge h_{1}>0\) such that the following integrals are well defined, one has

$$ -(h_{2}-h_{1}) \int_{t-h_{2}}^{t-h_{1}}{x^{T}(s)Xx(s)\,ds}\le- \biggl( \int _{t-h_{2}}^{t-h_{1}}{x(s)\,ds}\biggr)^{T}X \biggl( \int_{t-h_{2}}^{t-h_{1}}{x(s)\,ds}\biggr). $$
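Lemma 2.1 is the standard integral form of Jensen's inequality. Discretizing the integrals gives a quick numerical sanity check of the scalar case; the trajectory, the scalars \(h_{1}\), \(h_{2}\), and the weight X below are arbitrary illustrative choices, and the check is not part of the proof.

```python
import numpy as np

def trapz(vals, grid):
    """Composite trapezoidal rule for samples vals on the points grid."""
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(grid)) / 2.0)

h1, h2, t = 0.2, 1.0, 0.0
s = np.linspace(t - h2, t - h1, 2001)        # grid on [t - h2, t - h1]
x = np.sin(3.0 * s) + 0.5 * s                # an arbitrary scalar trajectory x(s)
X = 2.0                                      # any scalar X = X^T > 0

lhs = -(h2 - h1) * trapz(X * x * x, s)       # left-hand side of Lemma 2.1
rhs = -trapz(x, s) * X * trapz(x, s)         # right-hand side of Lemma 2.1
print(lhs <= rhs)                            # expected: True
```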

Lemma 2.2

For any constant matrix \(Z\in\mathbb{R}^{n\times n}\) with \(Z=Z^{T}>0\) and two scalars \(\tau_{m}\ge\tau_{0}>0\) such that the following integrals are well defined, one has

$$\begin{aligned}& - \int_{-\tau_{m}}^{-\tau_{0}}{ \int_{t+\theta}^{t}{\dot{x}^{T}(s)Z\dot {x}(s) \,ds}} \\& \quad \le \frac{2(1-\bar{\tau})}{\tau_{m}+\tau_{0}}x^{T}(t)Zx(t) -\frac{2(1-\bar{\tau})}{\tau_{m}^{2}-\tau_{0}^{2}} \int_{t-\tau_{m}}^{t-\tau_{0}}x^{T}(s)\, ds\, Z \int_{t-\tau_{m}}^{t-\tau_{0}}x(s)\,ds. \end{aligned}$$

Proof

Since \(Z=Z^{T}>0\), the Schur complement gives, for every s,

$$ \begin{bmatrix} \dot{x}^{T}(s)Z\dot{x}(s) & \dot{x}^{T}(s) \\ \dot{x}(s) & Z^{-1} \end{bmatrix} \ge0. $$

Integration of the above inequality from \(t+\theta\) to t, where \(-\tau _{m} \le\theta\le-\tau_{0}\), yields

$$ \begin{bmatrix} \int_{t+\theta}^{t}\dot{x}^{T}(s)Z\dot{x}(s)\,ds & \int_{t+\theta}^{t}\dot{x}^{T}(s)\,ds \\ \int_{t+\theta}^{t}\dot{x}(s)\,ds & -\theta Z^{-1} \end{bmatrix} \ge0. $$

Integration from \(-\tau_{m}\) to \(-\tau_{0}\) yields

$$ \begin{bmatrix} \int_{-\tau_{m}}^{-\tau_{0}}\int_{t+\theta}^{t}\dot{x}^{T}(s)Z\dot{x}(s)\,ds\,d\theta & \int_{-\tau_{m}}^{-\tau_{0}}\int_{t+\theta}^{t}\dot{x}^{T}(s)\,ds\,d\theta \\ \int_{-\tau_{m}}^{-\tau_{0}}\int_{t+\theta}^{t}\dot{x}(s)\,ds\,d\theta & \int_{-\tau_{m}}^{-\tau_{0}}(-\theta)Z^{-1}\,d\theta \end{bmatrix} \ge0. $$

By the Schur complement, and using \(\int_{-\tau_{m}}^{-\tau_{0}}(-\theta)\,d\theta=\frac{\tau_{m}^{2}-\tau_{0}^{2}}{2}\),

$$\begin{aligned} &- \int_{-\tau_{m}}^{-\tau_{0}}{ \int_{t+\theta}^{t}{\dot{x}^{T}(s)Z\dot {x}(s) \,ds}\,d\theta} \\ &\quad \le-\frac{2}{\tau_{m}^{2}-\tau_{0}^{2}} \biggl( \int_{-\tau_{m}}^{-\tau_{0}}{ \int_{t+\theta}^{t}{\dot{x}^{T}(s)\,ds}\,d \theta}\biggr)Z \biggl( \int_{-\tau_{m}}^{-\tau_{0}}{ \int_{t+\theta}^{t}{\dot{x}(s)\,ds}\,d\theta}\biggr) \\ &\quad =-\frac{2}{\tau_{m}^{2}-\tau_{0}^{2}}\biggl[(\tau_{m}-\tau_{0})x^{T}(t)- \int_{t-\tau _{m}}^{t-\tau_{0}}{x^{T}(s)\,ds}\biggr]Z \biggl[(\tau_{m}-\tau_{0})x(t)- \int_{t-\tau_{m}}^{t-\tau_{0}}{x(s)\,ds}\biggr] \\ &\quad =-\frac{2(\tau_{m}-\tau_{0})^{2}}{\tau_{m}^{2}-\tau_{0}^{2}}x^{T}(t)Zx(t) +\frac{4}{\tau_{m}+\tau_{0}}x^{T}(t)Z \int_{t-\tau_{m}}^{t-\tau_{0}}x(s)\,ds \\ &\qquad {}-\frac{2}{\tau_{m}^{2}-\tau_{0}^{2}} \int_{t-\tau_{m}}^{t-\tau_{0}}x^{T}(s)\, ds\, Z \int _{t-\tau_{m}}^{t-\tau_{0}}x(s)\,ds. \end{aligned}$$

Applying the inequality \(2X^{T}PY\le X^{T}PX+Y^{T}PY\), one can obtain

$$\begin{aligned} &- \int_{-\tau_{m}}^{-\tau_{0}}{ \int_{t+\theta}^{t}{\dot{x}^{T}(s)Z\dot {x}(s) \,ds}\,d\theta} \\ &\quad \le-\frac{2(\tau_{m}-\tau_{0})}{\tau_{m}+\tau_{0}}x^{T}(t)Zx(t) -\frac{2}{\tau_{m}^{2}-\tau_{0}^{2}} \int_{t-\tau_{m}}^{t-\tau_{0}}x^{T}(s)\,ds\,Z \int _{t-\tau_{m}}^{t-\tau_{0}}x(s)\,ds \\ &\qquad {}+\frac{2}{\tau_{m}+\tau_{0}}x^{T}(t)Zx(t) +\frac{2}{\tau_{m}+\tau_{0}} \int_{t-\tau_{m}}^{t-\tau_{0}}x^{T}(s)\,ds\,Z \int_{t-\tau _{m}}^{t-\tau_{0}}x(s)\,ds \\ &\quad =\frac{2(1-\bar{\tau})}{\tau_{m}+\tau_{0}}x^{T}(t)Zx(t) -\frac{2(1-\bar{\tau})}{\tau_{m}^{2}-\tau_{0}^{2}} \int_{t-\tau_{m}}^{t-\tau_{0}}x^{T}(s)\,ds\,Z \int_{t-\tau_{m}}^{t-\tau_{0}}x(s)\,ds. \end{aligned}$$

 □

3 Main results

In this section, by using a Lyapunov–Krasovskii functional, new outer synchronization criteria for complex networks with mixed delays will be proposed.

Theorem 3.1

Under Assumptions 1 and 2, for given matrices \(L_{1} = \operatorname{diag}\{l _{1}^{-} ,l_{2}^{-} , \ldots,l _{n}^{-} \}\), \(L_{2} = \operatorname{diag}\{l _{1}^{+} ,l _{2}^{+} , \ldots , l _{n}^{+} \}\), and scalars \(\tau_{m}\), \(\tau_{0}\), \(\mu_{0}\), \(\mu_{m}\), the complex networks (1) and (4) can achieve outer synchronization if there exist positive definite matrices P, Q, \(R_{1}\), \(R_{2}\), \(X_{1}\), \(X_{2}\), Y, Z, positive diagonal matrices \(W_{i}=\operatorname{diag}\{ w_{1i},\ldots,w_{ni}\}\), \(i=1,2,\ldots,8\), \(\Lambda = \operatorname{diag}\{{\lambda _{1}}, {\lambda_{2}}, \ldots,{\lambda_{n}}\}\), and \(\Delta = \operatorname{diag}\{ {\delta_{1}},{\delta_{2}}, \ldots,{\delta_{n}}\}\) such that the following LMIs hold:

$$ F=\begin{pmatrix} 2(p_{k}+l_{k}^{+}\delta_{k}-l_{k}^{-}\lambda_{k})H-w_{k8}I_{N} & (\lambda_{k}-\delta_{k})\gamma_{k}H & w_{k5}\gamma_{k}H \\ * & -w_{k7}I_{N} & 0 \\ * & * & -w_{k6}I_{N} \end{pmatrix}< 0 $$
(8)

and

$$\begin{aligned} E =&\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {E_{11}} & 0 & 0 & 0 & {E_{15}} & 0 & {E_{17}} & 0 & 0 & 0 & {E_{1,11}} & {E_{1,12}} \\ * & {E_{22}} & 0 & 0 & {E_{25}} & {E_{26}} & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & {E_{33}} & 0 & 0 & 0 & 0 & {E_{38}} & 0 & 0 & 0 & 0 \\ * & * & * & {E_{44}} & 0 & 0 & {E_{47}} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & {E_{55}} & 0 & {E_{57}} & 0 & 0 & 0 & {E_{5,11}} & {E_{5,12}} \\ * & * & * & * & * & {E_{66}} & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & {E_{77}} & 0 & 0 & 0 & 0 & {E_{7,12}} \\ * & * & * & * & * & * & * & {E_{88}} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & {E_{99}} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & * & {E_{10,10}} & 0 & {E_{10,12}} \\ * & * & * & * & * & * & * & * & * & * & {E_{11,11}} & 0 \\ * & * & * & * & * & * & * & * & * & * & * & {E_{12,12}} \end{array}\displaystyle \right ) \\ < &0, \end{aligned}$$
(9)

where

$$\begin{aligned}& \begin{aligned} E_{11}&=2P(A-d)+Q+2(L_{2} \Delta-L_{1}\Lambda) (A-d)+\tau_{0}^{2}X_{1}+ \bar{\tau }^{2}X_{2} \\ &\quad {}+\frac{2(1-\bar{\tau})}{\tau_{m}+\tau_{0}}Z-2L_{1}W_{1}L_{2}+W_{8}, \end{aligned} \\& E_{15}=PB+(\Lambda-\Delta) (A-d)+(L_{2}\Delta-L_{1} \Lambda)B +L_{1}W_{1}+L_{2}W_{1}, \\& E_{17}=PC+(L_{2}\Delta-L_{1}\Lambda)C, \qquad E_{1,11}=PD+(L_{2}\Delta -L_{1}\Lambda)D, \qquad E_{1,12}=(A-d)W_{5}, \\& E_{22}=R_{2}-2L_{1}W_{2}L_{2}, \qquad E_{26}=L_{2}W_{2}+L_{1}W_{1}, \\& E_{33}=-Q-R_{1}-2L_{1}W_{4}L_{2}, \qquad E_{38}=L_{1}W_{4}+L_{2}W_{4}, \\& E_{44}=(1-\mu_{0})R_{1}-(1-\mu_{m})R_{2}-2L_{1}W_{3}L_{2}, \qquad E_{47}=L_{1}W_{3}+L_{2}W_{3}, \\& E_{55}=2(\Lambda-\Delta)B+\sigma^{2}Y-2W_{1}+W_{7}, \qquad E_{57}=(\Lambda -\Delta)C, \qquad E_{5,11}=(\Lambda- \Delta)D, \\& E_{5,12}=BW_{5},\qquad E_{66}=-2W_{2}, \qquad E_{77}=-2W_{3}, \qquad E_{7,12}=-CW_{5}, \\& E_{88}=-2W_{4}, \qquad E_{99}=-X_{1}, \\& E_{10,10}=-X_{2}-\frac{2(1-\bar{\tau})}{\tau_{m}^{2}-\tau_{0}^{2}}Z, \qquad E_{10,12}=DW_{5}, \\& E_{11,11}=-\sigma Y, \quad \textit{and} \quad E_{12,12}= \frac{\tau_{m}^{2}-\tau _{0}^{2}}{2}Z-2W_{5}+W_{6}. \end{aligned}$$

Proof

Construct a Lyapunov functional as follows:

$$ V(e_{t}) = \sum_{i = 1}^{7} {{V_{i}}(e_{t})}, $$

where

$$\begin{aligned}& {V_{1}}(e_{t}) = \sum_{i = 1}^{N}{e_{i}^{T}}(t)Pe_{i}(t), \\& {V_{2}}(e_{t}) =\sum_{i=1}^{N} \int_{t-\tau_{m}}^{t}e_{i}^{T}(s)Qe_{i}(s) \,ds, \\& V_{3}(e_{t})=2\sum_{i=1}^{N} \sum_{j=1}^{n}{\lambda_{i} \int_{0}^{e_{ij}(t)} {\bigl(g_{j}(s)-l_{j}^{-}s \bigr)\,ds}} +2\sum_{i=1}^{N}\sum _{j=1}^{n}{\delta_{i} \int_{0}^{e_{ij}(t)} {\bigl(l_{j}^{+}s-g_{j}(s) \bigr)\,ds}}, \\& V_{4}(e_{t})=\sum_{i=1}^{N}{ \int_{t-\tau_{m}}^{t-\tau(t)}{e_{i}^{T}(s)R_{1}e_{i}(s) \,ds}} +\sum_{i=1}^{N}{ \int_{t-\tau(t)}^{t-\tau_{0}}{e_{i}^{T}(s)R_{2}e_{i}(s) \,ds}}, \\& V_{5}(e_{t})=\tau_{0}\sum _{i=1}^{N}{ \int_{-\tau_{0}}^{0}{ \int_{t+\theta}^{t} {e_{i}^{T}(s)X_{1}e_{i}(s) \,ds}\,d\theta}} +\bar{\tau}\sum_{i=1}^{N} \int_{-\tau_{m}}^{-\tau_{0}} \int_{t+\theta}^{t} {e_{i}^{T}(s)X_{2}e_{i}(s) \,ds}\,d\theta, \\& V_{6}(e_{t})=\sigma\sum_{i=1}^{N}{ \int_{-\sigma}^{0}{ \int_{t+\theta}^{t} {g^{T}\bigl(e_{i}(s) \bigr)Yg\bigl(e_{i}(s)\bigr)\,ds}\,d\theta}}, \end{aligned}$$

and

$$ V_{7}(e_{t})=\sum_{i=1}^{N}{ \int_{-\tau_{m}}^{-\tau_{0}}{ \int_{\theta}^{0} { \int_{t+\lambda}^{t}{\dot{e}_{i}^{T}(s)Z \dot{e}_{i}(s)\,ds}\,d\lambda}\,d\theta}}. $$

The time derivative of \(V(e_{t})\) along the trajectory of system (6) is given by

$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{1}(e_{t})&=2\sum _{i=1}^{N}{e_{i}^{T}(t)P \dot{e}_{i}(t)} \\ &=2\sum_{i=1}^{N}e_{i}^{T}(t)P \biggl[Ae_{i}(t)+Bg\bigl(e_{i}(t)\bigr) +Cg \bigl(e_{i}\bigl(t-\tau(t)\bigr)\bigr)+D \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr) \,ds-de_{i}(t)\biggr] \\ &\quad {}+2\sum_{i=1}^{N}e_{i}^{T}(t)P \sum_{j=1}^{N}h_{ij}\Gamma e_{j}(t), \end{aligned} \end{aligned}$$
(10)
$$\begin{aligned}& \dot{V}_{2}(e_{t})=\sum_{i=1}^{N}e_{i}^{T}(t)Qe_{i}(t) -\sum_{i=1}^{N}e_{i}^{T}(t- \tau_{m})Qe_{i}(t-\tau_{m}), \end{aligned}$$
(11)
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{3}(e_{t})&=2\sum _{i=1}^{N}{\bigl[g^{T}\bigl(e_{i}(t) \bigr) (\Lambda-\Delta) +e_{i}^{T}(t) (L_{2} \Delta-L_{1}\Lambda)\bigr]\dot{e_{i}}(t)} \\ &=2\sum_{i=1}^{N}g^{T} \bigl(e_{i}(t)\bigr) (\Lambda-\Delta)\biggl[(A-d)e_{i}(t)+Bg \bigl(e_{i}(t)\bigr) +Cg\bigl(e_{i}\bigl(t-\tau(t)\bigr) \bigr) \\ &\quad {}+D \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr)\,ds \biggr] \\ &\quad {}+2\sum_{i=1}^{N}e_{i}^{T}(t) (L_{2}\Delta-L_{1}\Lambda)\biggl[(A-d)e_{i}(t)+Bg \bigl(e_{i}(t)\bigr) +Cg\bigl(e_{i}\bigl(t-\tau(t)\bigr) \bigr) \\ &\quad {}+D \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr)\,ds \biggr]+2\sum_{i=1}^{N}g^{T} \bigl(e_{i}(t)\bigr) (\Lambda-\Delta)\sum_{j=1}^{N}h_{ij} \Gamma e_{j}(t) \\ &\quad {} +2\sum_{i=1}^{N}e_{i}^{T}(t) (L_{2}\Delta-L_{1}\Lambda)\sum_{j=1}^{N}h_{ij} \Gamma e_{j}(t), \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{4}(e_{t})&=\sum _{i=1}^{N}\bigl[ \bigl(1-\dot{\tau}(t) \bigr)e_{i}^{T}\bigl(t-\tau (t)\bigr)R_{1}e_{i} \bigl(t-\tau(t)\bigr) -e_{i}^{T}(t-\tau_{m})R_{1}e_{i}(t- \tau_{m}) \\ &\quad {} +e_{i}^{T}(t-\tau_{0})R_{2}e_{i}(t- \tau_{0}) -\bigl(1-\dot{\tau}(t)\bigr)e_{i}^{T} \bigl(t-\tau(t)\bigr)R_{2}e_{i}\bigl(t-\tau(t)\bigr)\bigr] \\ &\le\sum_{i=1}^{N}\bigl[ (1- \mu_{0})e_{i}^{T}\bigl(t-\tau(t) \bigr)R_{1}e_{i}\bigl(t-\tau(t)\bigr) -e_{i}^{T}(t- \tau_{m})R_{1}e_{i}(t-\tau_{m}) \\ &\quad {} +e_{i}^{T}(t-\tau_{0})R_{2}e_{i}(t- \tau_{0}) -(1-\mu_{m})e_{i}^{T}\bigl(t- \tau(t)\bigr)R_{2}e_{i}\bigl(t-\tau(t)\bigr)\bigr]. \end{aligned} \end{aligned}$$
(13)

By employing Lemma 2.1, one can obtain

$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{5}(e_{t})&= \tau_{0}^{2}\sum_{i=1}^{N}{e_{i}^{T}(t)X_{1}e_{i}(t)} -\tau_{0}\sum_{i=1}^{N}{ \int_{t-\tau_{0}}^{t}{e_{i}^{T}(s)X_{1}e_{i}(s) \,ds}} \\ &\quad {}+\sum_{i=1}^{N}\bar{ \tau}^{2}e_{i}^{T}(t)X_{2}e_{i}(t) -\sum_{i=1}^{N}\bar{\tau} \int_{t-\tau_{m}}^{t-\tau _{0}}e_{i}^{T}(s)X_{2}e_{i}(s) \,ds \\ &\le \tau_{0}^{2}\sum_{i=1}^{N}{e_{i}^{T}(t)X_{1}e_{i}(t)} -\sum_{i=1}^{N}{ \biggl( \int_{t-\tau_{0}}^{t}{e_{i}^{T}(s)\,ds} \biggr)X_{1} \biggl( \int_{t-\tau_{0}}^{t}{e_{i}(s)\,ds} \biggr)} \\ &\quad {}+\bar{\tau}^{2}\sum_{i=1}^{N}e_{i}^{T}(t)X_{2}e_{i}(t) -\sum_{i=1}^{N} \int_{t-\tau_{m}}^{t-\tau_{0}}e_{i}^{T}(s)dsX_{2} \int_{t-\tau _{m}}^{t-\tau_{0}}e_{i}(s)\,ds, \end{aligned} \end{aligned}$$
(14)
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{6}(e_{t})&=\sigma^{2} \sum_{i=1}^{N}g^{T} \bigl(e_{i}(t)\bigr)Yg\bigl(e_{i}(t)\bigr) -\sigma\sum _{i=1}^{N} \int_{t-\sigma}^{t}g^{T}\bigl(e_{i}(s) \bigr)Yg\bigl(e_{i}(s)\bigr)\,ds \\ &\le\sigma^{2}\sum_{i=1}^{N}g^{T} \bigl(e_{i}(t)\bigr)Yg\bigl(e_{i}(t)\bigr) -\sum _{i=1}^{N} \int_{t-\sigma}^{t}g^{T}\bigl(e_{i}(s) \bigr)dsY \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr)\,ds. \end{aligned} \end{aligned}$$
(15)

According to Lemma 2.2, one can obtain

$$\begin{aligned} \dot{V}_{7}(e_{t})&=\frac{\tau_{m}^{2}-\tau_{0}^{2}}{2}\sum _{i=1}^{N}{\dot {e}_{i}^{T}(t)Z \dot{e}_{i}(t)} -\sum_{i=1}^{N}{ \int_{-\tau_{m}}^{-\tau_{0}} \int_{t+\theta}^{t} {\dot{e}_{i}^{T}(s)Z \dot{e}_{i}(s)\,ds\,d\theta}} \\ &\le \frac{\tau_{m}^{2}-\tau_{0}^{2}}{2}\sum_{i=1}^{N}{ \dot{e}_{i}^{T}(t)Z\dot{e}_{i}(t)} + \frac{2(1-\bar{\tau})}{\tau_{m}+\tau_{0}}\sum_{i=1}^{N}{e_{i}^{T}(t)Ze_{i}(t)} \\ &\quad {}-\frac{2(1-\bar{\tau})}{\tau_{m}^{2}-\tau_{0}^{2}} \sum_{i=1}^{N} \int_{t-\tau_{m}}^{t-\tau_{0}}e_{i}^{T}(s)\,ds \,Z \int_{t-\tau _{m}}^{t-\tau_{0}}e_{i}(s)\,ds. \end{aligned}$$
(16)

According to (7), for any positive diagonal matrices \(W_{i}=\operatorname{diag}(w_{1i},\ldots,w_{ni})\), \(i=1,2,3,4\), the following inequality holds:

$$\begin{aligned} 0 \le&-2\sum_{j=1}^{n}\sum_{i=1}^{n}w_{i1}\bigl[g_{i}\bigl(e_{ij}(t)\bigr)-l_{i}^{-}e_{ij}(t)\bigr] \bigl[g_{i}\bigl(e_{ij}(t)\bigr)-l_{i}^{+}e_{ij}(t)\bigr] \\ &{}-2\sum_{j=1}^{n}\sum_{i=1}^{n}w_{i2} \bigl[g_{i}\bigl(e_{ij}(t-\tau_{0})\bigr)-l_{i}^{-}e_{ij}(t-\tau_{0})\bigr] \bigl[g_{i}\bigl(e_{ij}(t-\tau_{0})\bigr)-l_{i}^{+}e_{ij}(t-\tau_{0})\bigr] \\ &{}-2\sum_{j=1}^{n}\sum_{i=1}^{n}w_{i3} \bigl[g_{i}\bigl(e_{ij}\bigl(t-\tau(t)\bigr)\bigr)-l_{i}^{-}e_{ij}\bigl(t-\tau(t)\bigr)\bigr] \bigl[g_{i}\bigl(e_{ij}\bigl(t-\tau(t)\bigr)\bigr)-l_{i}^{+}e_{ij}\bigl(t-\tau(t)\bigr)\bigr] \\ &{}-2\sum_{j=1}^{n}\sum_{i=1}^{n}w_{i4} \bigl[g_{i}\bigl(e_{ij}(t-\tau_{m})\bigr)-l_{i}^{-}e_{ij}(t-\tau_{m})\bigr] \bigl[g_{i}\bigl(e_{ij}(t-\tau_{m})\bigr)-l_{i}^{+}e_{ij}(t-\tau_{m})\bigr], \end{aligned}$$
(17)

and

$$\begin{aligned} 0 =&2\sum_{i=1}^{N} \dot{e}_{i}^{T}(t)W_{5}\Biggl[- \dot{e}_{i}(t)+(A-d)e_{i}(t)+Bg\bigl(e_{i}(t) \bigr) +Cg\bigl(e_{i}\bigl(t-\tau(t)\bigr)\bigr) \\ &{}+D \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr)\,ds+ \sum_{j=1}^{N}h_{ij}\Gamma e_{j}(t)\Biggr]. \end{aligned}$$
(18)

Let \(\tilde{e}_{k}(t)=[\tilde{e}_{1k}(t),\tilde{e}_{2k}(t),\ldots,\tilde{e}_{Nk}(t)]^{T}\), \(k=1,2,\ldots,n\), where \(\tilde{e}_{ik}(t)\) denotes the kth component of \(e_{i}(t)\). Then

$$\begin{aligned} 0&=\sum_{i=1}^{N}\bigl[ \dot{e}_{i}^{T}(t)W_{6}\dot {e}_{i}(t)+g^{T} \bigl(e_{i}(t)\bigr)W_{7}g\bigl(e_{i}(t) \bigr)+e_{i}^{T}(t)W_{8}e_{i}(t)\bigr] \\ &\quad {}-\sum_{k=1}^{n}\bigl[w_{k}^{6} \dot{\tilde{e}}_{k}^{T}(t)\dot{\tilde{e}}_{k}(t) +w_{k}^{7}g^{T}\bigl(\tilde{e}_{k}(t) \bigr)g\bigl(\tilde{e}_{k}(t)\bigr)+w_{k}^{8} \tilde{e}_{k}^{T}(t)\tilde{e}_{k}(t)\bigr]. \end{aligned}$$
(19)

Combining the terms in (10)–(19), we get

$$ \dot{V}(e_{t})\le\sum_{i=1}^{N} \xi_{i}^{T}(t)E\xi_{i}(t)+\sum _{k=1}^{n}\zeta_{k}^{T}(t)F \zeta_{k}(t), $$

where

$$\begin{aligned}& \xi_{i}^{T}(t)= \biggl[e_{i}^{T}(t), e_{i}^{T}(t-\tau_{0}), e_{i}^{T}(t-\tau_{m}), e_{i}^{T}\bigl(t-\tau(t)\bigr), g^{T}\bigl(e_{i}(t)\bigr), g^{T}\bigl(e_{i}(t-\tau_{0})\bigr), g^{T}\bigl(e_{i}\bigl(t-\tau(t)\bigr)\bigr), \\& \hphantom{\xi_{i}^{T}(t)={}} g^{T}\bigl(e_{i}(t-\tau_{m})\bigr), \int_{t-\tau_{0}}^{t}e_{i}^{T}(s)\,ds, \int_{t-\tau_{m}}^{t-\tau_{0}}e_{i}^{T}(s)\,ds, \int_{t-\sigma}^{t}g^{T}\bigl(e_{i}(s)\bigr)\,ds, \dot{e}_{i}^{T}(t) \biggr], \\& \zeta_{k}^{T}(t)= \bigl[\tilde{e}_{k}^{T}(t), g^{T}\bigl(\tilde{e}_{k}(t)\bigr), \dot{\tilde{e}}_{k}^{T}(t) \bigr]. \end{aligned}$$

Hence, if LMIs (8) and (9) hold, then \(\dot{V}(e_{t})<0\). By the Lyapunov–Krasovskii stability theorem, the controlled networks (1) and (4) achieve the desired outer synchronization. The proof is completed. □
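The conditions (8) and (9) are LMI feasibility problems and can be checked with any semidefinite programming toolbox. As an illustration of the workflow only, the sketch below assembles a small symmetric block matrix in CVXPY and tests negative definiteness; it uses a deliberately simplified two-block toy structure with made-up data, not the full \(12\times12\) matrix E of Theorem 3.1, whose blocks would be assembled from the entries listed above in exactly the same way.

```python
import cvxpy as cp
import numpy as np

n = 2
# Illustrative (assumed) data playing the role of A - d and B in the blocks.
Ad = np.array([[-3.5, 0.0], [0.0, -4.0]])
B = np.array([[0.5, 0.1], [0.2, 0.4]])

P = cp.Variable((n, n), symmetric=True)      # decision matrices, as in Theorem 3.1
Q = cp.Variable((n, n), symmetric=True)
W = cp.diag(cp.Variable(n, nonneg=True))     # a positive diagonal multiplier

# Toy 2x2-block LMI in the spirit of (9): diagonal blocks collect quadratic
# terms, the off-diagonal block collects cross terms, and the lower-left
# block is the transpose (the "*" entry of the paper's notation).
E11 = Ad.T @ P + P @ Ad + Q
E12 = P @ B + W
E22 = -Q - 2.0 * W
E = cp.bmat([[E11, E12], [E12.T, E22]])

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   Q >> eps * np.eye(n),
                   E << -eps * np.eye(2 * n)])
prob.solve()
print(prob.status)   # 'optimal' means the toy LMI is feasible
```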

Next, we derive an improved result for the dynamical networks (1) and (4).

Theorem 3.2

Under Assumptions 1 and 2, for given matrices \(L_{1} = \operatorname{diag}(l _{1}^{-} ,l_{2}^{-} , \ldots,l _{n}^{-} )\), \(L_{2} = \operatorname{diag}(l _{1}^{+} ,l _{2}^{+} , \ldots, l _{n}^{+} )\), and scalars \(\tau_{m}\), \(\tau_{0}\), \(\mu_{0}\), \(\mu_{m}\), the complex networks (1) and (4) can achieve outer synchronization if there exist symmetric positive definite matrices P, Q, \(R_{1}\), \(R_{2}\), \(X_{1}\), \(X_{2}\), Y, Z, positive diagonal matrices \(W_{i}=\operatorname{diag}(w_{1i},\ldots,w_{ni})\), \(i=5,6,7,8\), \(K_{i}=\operatorname{diag}(k_{1i},\ldots,k_{ni})\), \(i=1,2,\ldots,8\), \(\Lambda = \operatorname{diag}({\lambda_{1}},{\lambda_{2}}, \ldots,{\lambda_{n}})\), and \(\Delta = \operatorname{diag}({\delta_{1}},{\delta_{2}}, \ldots,{\delta_{n}})\) such that the following LMIs hold:

$$ F=\begin{pmatrix} 2(p_{k}+l_{k}^{+}\delta_{k}-l_{k}^{-}\lambda_{k})H-w_{k8}I_{N} & (\lambda_{k}-\delta_{k})\gamma_{k}H & w_{k5}\gamma_{k}H \\ * & -w_{k7}I_{N} & 0 \\ * & * & -w_{k6}I_{N} \end{pmatrix}< 0, $$
(20)
$$\begin{aligned}& \Pi=(\Pi_{ij})_{12\times 12}< 0, \quad i,j=1,\ldots,12, \end{aligned}$$
(21)

and

$$ \Omega=(\Omega_{ij})_{12\times 12}< 0, \quad i,j=1, \ldots,12, $$
(22)

where

$$\begin{aligned}& \begin{aligned} \Pi_{11}&=2P(A-d)+Q+2(L_{2} \Delta-L_{1}\Lambda) (A-d)+\tau_{0}^{2}X_{1}+ \bar{\tau }^{2}X_{2} \\ &\quad {}+\frac{2(1-\bar{\tau})}{\tau_{m}+\tau_{0}}Z-L_{1}K_{1}(L_{1}+L_{2})+W_{8}, \end{aligned} \\& \Pi_{15}=PB+(\Lambda-\Delta) (A-d)+(L_{2} \Delta-L_{1}\Lambda)B+L_{1}K_{1}+ \frac {L_{1}+L_{2}}{2}K_{1}, \\& \Pi_{17}=E_{17}, \qquad \Pi_{1,11}=E_{1,11}, \qquad \Pi_{1,12}=E_{1,12} \\& \Pi_{22}=R_{2}-L_{1}K_{2}(L_{1}+L_{2}), \qquad \Pi_{26}=\frac{L_{1}+L_{2}}{2}K_{2}+L_{1}K_{1}, \\& \Pi_{33}=-Q-R_{1}-L_{1}K_{4}(L_{1}+L_{2}), \qquad \Pi_{38}=L_{1}K_{4}+\frac{L_{1}+L_{2}}{2}K_{4}, \\& \Pi_{44}=(1-\mu_{0})R_{1}-(1- \mu_{m})R_{2}-L_{1}K_{3}(L_{1}+L_{2}), \qquad \Pi _{47}=L_{1}K_{3}+\frac{L_{1}+L_{2}}{2}K_{3}, \\& \Pi_{55}=2(\Lambda-\Delta)B+\sigma^{2}Y-2K_{1}+W_{7}, \qquad \Pi_{57}=E_{57}, \qquad \Pi_{5,11}=E_{57}, \\& \Pi_{5,12}=E_{5,12},\qquad \Pi_{66}=-2K_{2}, \qquad \Pi_{77}=-2K_{3}, \qquad \Pi_{7,12}=-CW_{5}, \\& \Pi_{88}=-2K_{4}, \qquad \Pi_{99}=-X_{1}, \\& \Pi_{10,10}=-X_{2}-\frac{2(1-\bar{\tau})}{\tau_{m}^{2}-\tau_{0}^{2}}Z, \qquad \Pi _{10,12}=DW_{5}, \\& \Pi_{11,11}=-\sigma Y, \qquad \Pi_{12,12}=\frac{\tau_{m}^{2}-\tau_{0}^{2}}{2}Z-2W_{5}+W_{6}, \\& \begin{aligned} \Omega_{11}&=2P(A-d)+Q+2(L_{2} \Delta-L_{1}\Lambda) (A-d)+\tau_{0}^{2}X_{1}+ \bar {\tau}^{2}X_{2} \\ &\quad {}+\frac{2(1-\bar{\tau})}{\tau_{m}+\tau_{0}}Z-(L_{1}+L_{2})K_{5}L_{2}+W_{8}, \end{aligned} \\& \Omega_{15}=PB+(\Lambda-\Delta) (A-d)+(L_{2} \Delta-L_{1}\Lambda)B+\frac {L_{1}+L_{2}}{2}K_{5}+L_{2}K_{5}, \\& \Omega_{17}=E_{17}, \qquad \Omega_{1,11}=E_{1,11}, \qquad \Omega_{1,12}=E_{1,12} \\& \Omega_{22}=R_{2}-(L_{1}+L_{2})K_{6} L_{2}, \qquad \Omega_{26}=L_{2}K_{6}+ \frac {L_{1}+L_{2}}{2}K_{5}, \\& \Omega_{33}=-Q-R_{1}-(L_{1}+L_{2})K_{7} L_{2}, \qquad \Omega_{38}=\frac {L_{1}+L_{2}}{2}K_{7}+L_{2}K_{7}, \\& \Omega_{44}=(1-\mu_{0})R_{1}-(1- \mu_{m})R_{2}-(L_{1}+L_{2})K_{6} L_{2}, \qquad \Omega _{47}=\frac{L_{1}+L_{2}}{2}K_{6}+L_{2}K_{6}, \\& \Omega_{55}=2(\Lambda-\Delta)B+\sigma^{2}Y-2K_{4}+W_{7}, \qquad \Omega _{57}=E_{57}, \qquad \Omega_{5,11}=E_{57}, \\& \Omega_{5,12}=E_{5,12},\qquad \Omega_{66}=-2K_{5}, \qquad \Omega_{77}=-2K_{6}, \qquad \Omega_{7,12}=-CW_{5}, \\& \Omega_{88}=-2W_{7}, \qquad \Omega_{99}=-X_{1}, \\& \Omega_{10,10}=-X_{2}-\frac{2(1-\bar{\tau})}{\tau_{m}^{2}-\tau_{0}^{2}}Z, \qquad \Omega_{10,12}=DW_{5}, \\& \Omega_{11,11}=-\sigma Y, \quad \textit{and} \quad \Omega_{12,12}= \frac{\tau _{m}^{2}-\tau_{0}^{2}}{2}Z-2W_{5}+W_{6}. \end{aligned}$$

Proof

For positive definite matrices P, Q, \(R_{1}\), \(R_{2}\), \(X_{1}\), \(X_{2}\), Y, Z, let us consider the same Lyapunov functional \(V(e_{t}) = \sum_{i = 1}^{7} {{V_{i}}(e_{t})}\) proposed in Theorem 3.1.

Here, we treat inequality (17) in two cases.

Case 1:

$$ l_{i}^{-}\le\frac{g_{i}(x)}{x}\le\frac{l_{i}^{-}+l_{i}^{+}}{2}. $$

This condition is equivalent to

$$ \bigl[g_{i}(x)-l_{i}^{-}x\bigr] \biggl[g_{i}(x)-\frac{l_{i}^{-}+l_{i}^{+}}{2}x\biggr]< 0,\quad i=1,\ldots,n. $$
(23)

For any positive diagonal matrices \(K_{i}=\operatorname{diag}(k_{i1},\ldots,k_{in})\), \(i=1,2,3,4\), the following inequality holds:

$$\begin{aligned} 0 \le&-2\sum_{j=1}^{n}\sum _{i=1}^{n}k_{1i}\bigl[g_{i} \bigl(e_{ij}(t)\bigr)-l_{i}^{-}e_{ij}(t)\bigr] \biggl[g_{i}\bigl(e_{ij}(t)\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}(t) \biggr] \\ &{}-2\sum_{j=1}^{n}\sum _{i=1}^{n}k_{2i}\bigl[g_{i} \bigl(e_{ij}(t-\tau _{0})\bigr)-l_{i}^{-}e_{ij}(t- \tau_{0})\bigr] \biggl[g_{i}\bigl(e_{ij}(t- \tau_{0})\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}(t- \tau_{0})\biggr] \\ &{}-2\sum_{j=1}^{n}\sum _{i=1}^{n}k_{3i}\bigl[g_{i} \bigl(e_{ij}\bigl(t-\tau (t)\bigr)\bigr)-l_{i}^{-}e_{ij} \bigl(t-\tau(t)\bigr)\bigr] \\ &{}\times\biggl[g_{i}\bigl(e_{ij}\bigl(t- \tau(t)\bigr)\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}\bigl(t-\tau(t)\bigr)\biggr] \\ &{}-2\sum_{j=1}^{n}\sum _{i=1}^{n}k_{4i}\bigl[g_{i} \bigl(e_{ij}(t-\tau _{m})\bigr)-l_{i}^{-}e_{ij}(t- \tau_{m})\bigr] \\ &{}\times\biggl[g_{i}\bigl(e_{ij}(t- \tau_{m})\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}(t- \tau_{m})\biggr]. \end{aligned}$$
(24)

Then, from the proof of Theorem 3.1, when \(l_{i}^{-}\le\frac{g_{i}(x)}{x}\le \frac{l_{i}^{-}+l_{i}^{+}}{2}\), an upper bound of \(\dot{V}(e_{t})\) can be obtained

$$ \dot{V}(e_{t})\le\sum_{i=1}^{N} \xi_{i}^{T}(t)\Pi\xi_{i}(t)+\sum _{k=1}^{n}\zeta_{k}^{T}(t)F \zeta_{k}(t)< 0. $$
(25)

Based on the Lyapunov–Krasovskii stability theory and on (20) and (21), the controlled networks (1) and (4) can achieve synchronization when \(l_{i}^{-}\le \frac{g_{i}(x)}{x}\le\frac{l_{i}^{-}+l_{i}^{+}}{2}\).

Case 2:

$$ \frac{l_{i}^{-}+l_{i}^{+}}{2}\le\frac{g_{i}(x)}{x}\le l_{i}^{+}. $$

This condition is equivalent to

$$ \biggl[g_{i}(x)-\frac{l_{i}^{-}+l_{i}^{+}}{2}x\biggr] \bigl[g_{i}(x)-l_{i}^{+}x\bigr]< 0, \quad i=1,\ldots,n. $$
(26)

For any positive diagonal matrices \(K_{i}=\operatorname{diag}(k_{i1},\ldots,k_{in})\), \(i=5,6,7,8\), the following inequality holds:

$$\begin{aligned} 0 \le&-2\sum_{i=1}^{N}\sum _{j=1}^{n}k_{5i}\biggl[g_{i} \bigl(e_{ij}(t)\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}(t)\biggr] \bigl[g_{i}\bigl(e_{ij}(t)\bigr)-l_{i}^{+} e_{ij}(t)\bigr] \\ &{}-2\sum_{i=1}^{N}\sum _{j=1}^{n}k_{6i}\biggl[g_{i} \bigl(e_{ij}(t-\tau_{0})\bigr) -\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}(t- \tau_{0})\biggr] \bigl[g_{i}\bigl(e_{ij}(t- \tau_{0})\bigr)-l_{i}^{+}e_{ij}(t- \tau_{0})\bigr] \\ &{}-2\sum_{i=1}^{N}\sum _{j=1}^{n}k_{7i} \biggl[g_{i} \bigl(e_{ij}\bigl(t-\tau(t)\bigr)\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij} \bigl(t-\tau(t)\bigr)\biggr] \\ &{}\times\bigl[g_{i}\bigl(e_{ij}\bigl(t- \tau(t)\bigr)\bigr)-l_{i}^{+}e_{ij}\bigl(t-\tau(t)\bigr)\bigr] \\ &{}-2\sum_{i=1}^{N}\sum _{j=1}^{n}k_{8i} \biggl[g_{i} \bigl(e_{ij}(t-\tau_{m})\bigr)-\frac{l_{i}^{-}+l_{i}^{+}}{2}e_{ij}(t- \tau_{m})\biggr] \\ &{}\times\bigl[g_{i}\bigl(e_{ij}(t- \tau_{m})\bigr)-l_{i}^{+}e_{ij}(t- \tau_{m})\bigr]. \end{aligned}$$
(27)

Then, from the proof of Theorem 3.1, when \(\frac{l_{i}^{-}+l_{i}^{+}}{2}\le\frac{g_{i}(x)}{x}\le l_{i}^{+}\), an upper bound of \(\dot{V}(e_{t})\) can be obtained:

$$ \dot{V}(e_{t})\le\sum_{i=1}^{N} \xi_{i}^{T}(t)\Omega\xi_{i}(t)+\sum _{k=1}^{n}\zeta_{k}^{T}(t)F \zeta_{k}(t)< 0. $$
(28)

Based on the Lyapunov–Krasovskii stability theory and on (20) and (22), the controlled networks (1) and (4) can achieve synchronization when \(\frac {l_{i}^{-}+l_{i}^{+}}{2}\le\frac{g_{i}(x)}{x}\le l_{i}^{+}\). □

Remark 2

In Theorem 3.2, by choosing \((x,y)\) in (3) as \((e_{i}(t-\tau_{0}),e_{i}(t-\tau(t)))\) and \((e_{i}(t-\tau(t)),e_{i}(t-\tau _{m}))\) on the subintervals \(l_{i}^{-}\le\frac{g_{i}(x)}{x}\le\frac {l_{i}^{-}+l_{i}^{+}}{2}\) and \(\frac{l_{i}^{-}+l_{i}^{+}}{2}\le\frac{g_{i}(x)}{x}\le l_{i}^{+}\), respectively, more information on the cross terms among the states \(g_{i}(e_{i}(t-\tau_{0}))\), \(g_{i}(e_{i}(t-\tau(t)))\), \(g_{i}(e_{i}(t-\tau_{m}))\), \(e_{i}(t-\tau_{0})\), \(e_{i}(t-\tau(t))\), and \(e_{i}(t-\tau_{m})\) is utilized, which may lead to less conservative stability criteria.

Remark 3

Reference [39] investigated only pinning synchronization of neural networks with a discrete time-varying delay, whereas in this paper we consider a complex network with both discrete and distributed delays. Moreover, for the coupling matrix H we only require \(h_{ij} \neq0\) for connected nodes, whereas [14] requires \(h_{ij}>0\) or \(h_{ij}\geq0\). Hence our results improve and generalize the works in [14, 39].

4 Illustrative example

In this section, a numerical example is given to demonstrate the effectiveness of the proposed methods.

Example 1

Consider a network whose nodes are two-dimensional systems with both discrete and distributed delays:

$$ \dot{x}_{i}(t)=Ax_{i}(t)+Bf \bigl(x_{i}(t)\bigr)+Cf\bigl(x_{i}\bigl(t-\tau(t)\bigr) \bigr)+D \int_{t-\sigma }^{t}f\bigl(x_{i}(s)\bigr)\,ds, $$
(29)

where \(x_{i}(t)=(x_{i1}(t), x_{i2}(t))^{T}\) is the state vector of node i, \(f(x_{i}(t))=(\tanh(x_{i1}(t)), \tanh(x_{i2}(t)))^{T}\) is the activation function vector, and \(\tau(t)=|\sin(t)|\). Taking

$$ A=\begin{pmatrix} -1.5 & 0 \\ 0 & -2 \end{pmatrix}, \qquad B=\begin{pmatrix} 2.5 & 0.3 \\ 6 & 5 \end{pmatrix}, \qquad C=\begin{pmatrix} 3 & 2 \\ 0.3 & 8 \end{pmatrix}, \qquad D=\begin{pmatrix} 0.5 & 1 \\ 3.6 & 0.5 \end{pmatrix}, $$

the dynamical behavior of (29) is simulated with the initial conditions \(x_{1}(0)=[0.2,0.5]^{T}\) and \(x_{2}(0)=[-0.6,0.1]^{T}\); Fig. 1 displays the resulting chaotic attractor.

Figure 1. Attractor trajectories of (29) and the state trajectories of \(x_{1}\), \(x_{2}\).
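To give a reproducible sense of how such a figure can be generated, the following Python sketch integrates the single-node dynamics (29) with the matrices listed above by a forward Euler scheme, holding the pre-history constant at the initial value. The step size, the simulation horizon, and the crude rectangle rule for the distributed-delay integral are assumptions of this sketch, so the resulting trajectory is only meant to be compared qualitatively with Fig. 1.

```python
import numpy as np

# Parameters of (29) as listed above.
A = np.diag([-1.5, -2.0])
B = np.array([[2.5, 0.3], [6.0, 5.0]])
C = np.array([[3.0, 2.0], [0.3, 8.0]])
D = np.array([[0.5, 1.0], [3.6, 0.5]])
f = np.tanh
sigma, dt, T = 0.5, 1e-3, 30.0

steps = int(T / dt)
pre = int(1.0 / dt)                  # pre-history length: tau(t) <= 1 and sigma = 0.5
ns = int(sigma / dt)
x = np.empty((pre + steps + 1, 2))
x[:pre + 1] = [0.2, 0.5]             # constant history equal to x_1(0)

for k in range(pre, pre + steps):
    t = (k - pre) * dt
    delay = int(round(abs(np.sin(t)) / dt))             # tau(t) = |sin t|
    dist = f(x[k - ns:k + 1]).sum(axis=0) * dt           # rectangle rule for the integral
    dx = A @ x[k] + B @ f(x[k]) + C @ f(x[k - delay]) + D @ dist
    x[k + 1] = x[k] + dt * dx

traj = x[pre:]     # x_1(t) on [0, T]; plot traj[:, 0] against traj[:, 1] for the attractor
print(traj[-1])    # final state, just to confirm the run finished
```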

In this simulation, we consider the two-dimensional error system

$$ \dot{e}_{i}(t)=Ae_{i}(t)+Bg \bigl(e_{i}(t)\bigr)+Cg\bigl(e_{i}\bigl(t-\tau(t)\bigr) \bigr)+D \int_{t-\sigma}^{t}g\bigl(e_{i}(s)\bigr)\,ds + \sum_{j=1}^{20}h_{ij}\Gamma e_{j}(t)-de_{i}(t), $$
(30)

where \(e_{i}(t)=(e_{i1}(t), e_{i2}(t))^{T} \) is the synchronization error of the ith node. The feedback term \(-de_{i}(t)\) is present only for \(i=1,2,\ldots,10\), which means that only 10 of the 20 nodes are pinned. We set the parameters to be

$$\begin{aligned}& H=\begin{pmatrix} H_{11} & H_{11} \\ H_{11} & H_{11} \end{pmatrix}, \qquad H_{11}=\begin{pmatrix} -9 & 1 & \cdots & 1 \\ 1 & -9 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & -9 \end{pmatrix}_{10\times10}, \qquad \Gamma=I, \qquad \sigma=0.5, \\& g\bigl(x_{i}(t)\bigr)=\bigl(\tanh\bigl(x_{i1}(t)\bigr), \tanh\bigl(x_{i2}(t)\bigr)\bigr)^{T}, \qquad d=\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}. \end{aligned}$$

Here the control gains \(d_{i}\) are simply chosen as 2; if they are selected too large, synchronizing the systems becomes trivially easy. Figure 2 shows the state orbits of the drive system \(x_{i}(t)\), which are unstable.

Figure 2. The state trajectories of \(x_{i1}\), \(x_{i2}\) in system (29).

The evolution of the synchronization errors \(e_{i}(t)\) (\(i=1,2,\ldots,20\)) is depicted in Fig. 3. It shows that the error system achieves outer synchronization under the pinning controller.

Figure 3. The evolution of \(e_{i1}(t)\), \(e_{i2}(t)\) in system (30) under pinning control.
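For completeness, a sketch of how the pinned error network (30) can be simulated with the data of this example is given below (forward Euler, constant random error history, rectangle rule for the distributed delay; these discretization choices and the random initial errors are assumptions of the sketch). The printed values can be compared with the behaviour reported in Fig. 3.

```python
import numpy as np

# Node data of (29)/(30) and network data of this example.
A = np.diag([-1.5, -2.0])
B = np.array([[2.5, 0.3], [6.0, 5.0]])
C = np.array([[3.0, 2.0], [0.3, 8.0]])
D = np.array([[0.5, 1.0], [3.6, 0.5]])
d = 2.0 * np.eye(2)
g = np.tanh                                   # g as given in the parameter list above
N, l, sigma, dt, T = 20, 10, 0.5, 1e-3, 20.0

H11 = np.ones((10, 10)) - 10.0 * np.eye(10)   # off-diagonal entries 1, diagonal -9
H = np.block([[H11, H11], [H11, H11]])        # 20 x 20 outer coupling matrix
Gamma = np.eye(2)

steps, pre, ns = int(T / dt), int(1.0 / dt), int(sigma / dt)
e_hist = np.empty((pre + steps + 1, N, 2))
e_hist[:pre + 1] = np.random.default_rng(3).uniform(-1.0, 1.0, (N, 2))  # assumed history

err = np.empty(steps)
for k in range(pre, pre + steps):
    t = (k - pre) * dt
    delay = int(round(abs(np.sin(t)) / dt))              # tau(t) = |sin t|
    e, e_tau = e_hist[k], e_hist[k - delay]
    dist = g(e_hist[k - ns:k + 1]).sum(axis=0) * dt       # crude distributed-delay integral
    de = e @ A.T + g(e) @ B.T + g(e_tau) @ C.T + dist @ D.T + H @ e @ Gamma.T
    de[:l] -= e[:l] @ d.T                                 # pinning feedback on the first l nodes
    e_hist[k + 1] = e + dt * de
    err[k - pre] = np.abs(e).max()

print(err[0], err[-1])    # maximal synchronization error at the start and at the end
```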

Then, by resorting to the Matlab LMI Toolbox, we verify that the conditions of Theorem 3.1 hold, which guarantees the outer synchronization of the dynamical networks; a feasible solution to LMIs (8) and (9) is obtained as follows:

$$\begin{aligned}& P=\begin{pmatrix} 0.3345 & 0.0331 \\ 0.0331 & 0.1210 \end{pmatrix}, \qquad Q=\begin{pmatrix} 3.7550 & 0.0007 \\ 0.0007 & 3.7551 \end{pmatrix}, \qquad R_{1}=\begin{pmatrix} 0.7754 & 0.0676 \\ 0.0676 & 0.8167 \end{pmatrix}, \\& R_{2}=\begin{pmatrix} 0.1286 & 0.0064 \\ 0.0064 & 0.1433 \end{pmatrix}, \qquad X_{1}=\begin{pmatrix} 0.1541 & 0.0063 \\ 0.0063 & 0.1414 \end{pmatrix}, \qquad X_{2}=\begin{pmatrix} 0.1566 & 0.0078 \\ 0.0078 & 0.1407 \end{pmatrix}, \\& Y=\begin{pmatrix} 0.7120 & 0.0843 \\ 0.0843 & 0.7254 \end{pmatrix}, \qquad Z=\begin{pmatrix} 0.2000 & 0.0011 \\ 0.0011 & 0.1971 \end{pmatrix}, \qquad W_{1}=\begin{pmatrix} 0.3272 & 0.0058 \\ 0.0058 & 0.3363 \end{pmatrix}, \\& W_{2}=\begin{pmatrix} 0.7126 & 0.2715 \\ 0.2715 & 0.6910 \end{pmatrix}, \qquad W_{3}=\begin{pmatrix} 0.8899 & 0.0479 \\ 0.0479 & 0.9368 \end{pmatrix}, \qquad W_{4}=\begin{pmatrix} 2.3118 & 0.0008 \\ 0.0008 & 2.3119 \end{pmatrix}, \\& W_{5}=\begin{pmatrix} 0.4599 & 0.0685 \\ 0.0685 & 0.1909 \end{pmatrix}, \qquad W_{6}=\begin{pmatrix} 0.4803 & 0.0001 \\ 0.0001 & 0.4813 \end{pmatrix}, \qquad W_{7}=\begin{pmatrix} 0.2768 & 0.3642 \\ 0.3642 & 0.3453 \end{pmatrix}, \\& W_{8}=\begin{pmatrix} 0.4511 & 0.0316 \\ 0.0316 & 0.3875 \end{pmatrix}, \qquad \Lambda=\begin{pmatrix} 0.6757 & 0.2486 \\ 0.2486 & 0.6525 \end{pmatrix}, \qquad \Delta=\begin{pmatrix} 0.5372 & 0.1231 \\ 0.1231 & 0.3499 \end{pmatrix}. \end{aligned}$$

From these simulations, one can conclude that using the proposed method in this paper, outer synchronization of systems (1) and (4) can be achieved.

5 Conclusion

In Theorem 3.1, by choosing a new Lyapunov–Krasovskii functional, new synchronization conditions for complex networks with mixed time-varying delays have been proposed. Based on the results of Theorem 3.1 and on new inequalities for the activation functions, further improved synchronization criteria have been derived in Theorem 3.2. Finally, a numerical example has been given to illustrate the effectiveness of the proposed methods. It should be mentioned that the given method can be further studied and extended to fractional-order systems and discrete-time complex systems.