1 Introduction

Complex dynamic networks (CDNs) have attracted considerable attention because they can accurately model a variety of real-world systems, including global networks, social networks, power grids, etc. [1,2,3]. In addition, as a typical collective behavior, synchronization of complex networks has been widely discussed because of its potential applications in secure communication, biological processes, the formation of harmonic oscillations, and the design of chaos generators [4,5,6,7]. On the other hand, several control schemes have been put forward to address the synchronization problem of CDNs, including pinning control [10], adaptive control [9], and impulsive control [8], among others. The above control strategies are continuous in nature, which means that state variables must be acquired, transmitted, and processed at every instant, something that cannot always be guaranteed in practice [11]. Driven by the rapid development of digital computing and smart devices, continuous-time controllers are increasingly being replaced by digital ones, as the latter can offer higher reliability, accuracy, and stability. As a result, controllers that utilize discontinuous data have garnered a lot of interest [12,13,14,15]. The sampled-data control method has various benefits over continuous control approaches, including ease of implementation, compact size, low maintenance cost, and effectiveness in achieving synchronization of CDNs [16]. However, only a few papers have studied the synchronization problem of complex networks in the sampled-data setting, such as [3, 4, 16].

Three main approaches have emerged from the analysis of sampled-data systems. The first, the discrete-time technique, converts the sampled-data system into a discrete-time system before analyzing it [17,18,19]. In the second, the sampled-data system is viewed from the viewpoint of a hybrid system, an impulsive model is constructed to describe it, and stability theorems are obtained by employing discontinuous Lyapunov functional analysis [20, 21]. The last, the input delay approach, was developed from a continuous-time perspective, particularly for systems with time-varying delays [22, 23]. Within this framework, researchers have developed and refined a number of methods, including the looped-functional approach [24], the discontinuous Lyapunov functional method [25, 26], and the time-dependent Lyapunov functional technique [27,28,29]. By constructing a continuous Lyapunov functional for neural networks, improved synchronization conditions under sampled-data control were obtained in [30]. The conditions derived in [31] are less conservative than those in [32] because the continuous Lyapunov functional captures the sawtooth characteristic of the time-varying delay. However, the results remain overly conservative when each integral term in the derivative of the Lyapunov functional is bounded via the Jensen inequality [31]. More recently, in an effort to better exploit the sampled information for synchronizing chaotic Lur'e systems, methods based on free matrix inequalities (FMIs) from [33] and [34] were implemented in [35]. Nevertheless, the existing literature [30,31,32,33,34,35] shows that there is still room for improvement in both the Jensen inequality and the free-matrix-inequality methods.

Motivated by the above discussion, we investigate the sampled-data synchronization control problem for delayed complex networks in this paper. The three main contributions are highlighted as follows: (1) The use of a sampled-data controller for synchronization control of delayed complex networks is discussed. (2) To handle the derivative of the Lyapunov functional, modified free-matrix-based integral inequalities (MFMBIIs) with less conservatism and lower computational complexity are proposed, and novel conditions are derived to guarantee that the resulting error system is asymptotically and exponentially stable, respectively. (3) Time-dependent continuous Lyapunov functionals are constructed that carry sufficient information about the current sampling interval, which appropriately reduces the conservatism of the proposed results. Unlike the previous findings in [2, 4, 7], which require each term of the constructed Lyapunov-Krasovskii functionals (LKFs) to be positive definite, the present work only requires the LKFs to be positive definite as a whole.

Notations: \(\Re ^n\) denotes the n-dimensional Euclidean space. \(||\cdot ||\) stands for the Euclidean vector norm. For any matrix Y, \(Y>0\) means that Y is positive definite. \(Sym\left\{ Y\right\}\) denotes \(Y+Y^T\). The superscripts \(-1\) and T indicate the inverse and the transpose of a matrix, respectively. The symbol \(*\) denotes the symmetric entries of a symmetric matrix. The Kronecker product is represented by the notation \(\otimes\). For a real symmetric matrix \({\mathcal {P}}\), the maximum eigenvalue is denoted by \(\lambda _{max}({\mathcal {P}})\). \(diag\left\{ \cdot \cdot \cdot \right\}\) refers to a block-diagonal matrix. \(\amalg\) denotes the identity matrix.

2 Preliminaries

Consider the following complex network, in which each node is an n-dimensional nonlinear dynamic follower system and there are \({\mathscr {N}}\) identical coupled nodes in total:

$$\begin{aligned}&\left\{ \begin{aligned} {\dot{e}}_p({\hat{t}})=&\pounds (e_p({\hat{t}}))+{c}\sum _{q=1,q\ne p}^{{\mathscr {N}}}{\mathscr {G}}_{pq}{\mathscr {A}}(e_q({\hat{t}}-\ell )-e_p({\hat{t}}-\ell ))+ {u}_p({\hat{t}}), \\ {y}_p({\hat{t}})=&\Game e_p({\hat{t}}), p=1,2,\ldots ,{\mathscr {N}}, \end{aligned}\right.&\end{aligned}$$
(1)

where \({u}_p({\hat{t}})\) and \(e_p({\hat{t}})\in \Re ^n\) are the control input and the state variable of the pth node, respectively. The node dynamics are described by the nonlinear vector-valued function \(\pounds (e_p({\hat{t}})) = (\pounds _1(e_p({\hat{t}})), \pounds _2(e_p({\hat{t}})),\ldots , \pounds _n(e_p({\hat{t}})))^T \in \Re ^n\). The coupling strength is denoted by c, and the time delay \(\ell\) is constant. The outer coupling configuration matrix \({\mathscr {G}}=({\mathscr {G}}_{pq})_{{\mathscr {N}}\times {\mathscr {N}}}\in \Re ^{{\mathscr {N}}\times {\mathscr {N}}}\) is defined as follows: \({\mathscr {G}}_{pq}=0\) if there is no connection from node q to node \(p\ (p\ne q)\); otherwise, \({\mathscr {G}}_{pq}>0\). The diagonal entries of \({\mathscr {G}}\) are given by \({\mathscr {G}}_{pp}=-{c}\sum \limits _{q=1,q\ne p}^{{\mathscr {N}}}{\mathscr {G}}_{pq}\). The constant inner coupling matrix is denoted by \({\mathscr {A}}=({\mathscr {A}}_{pq})_{n\times n}\in \Re ^{n\times n}\). \({y}_p({\hat{t}})\) denotes the measured output of the pth node, and \(\Game\) is a known matrix of appropriate dimension.

Assumption 1

The continuous nonlinear function \(\pounds (\cdot ):\Re ^n\rightarrow \Re ^n\) is sector-bounded and satisfies the following condition:

$$\begin{aligned}&(\pounds ({v})-\pounds ({w})-{S_1}({v}-{w}))^T(\pounds ({v})-\pounds ({w})-{S_2}({v}-{w}))\le 0,\ \ \forall {v}, {w}\in \Re ^n,&\end{aligned}$$
(2)

where \({S_1}\) and \({S_2}\) are known constant matrices of appropriate dimensions.
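For intuition, a scalar nonlinearity whose slope stays between two constants satisfies (2); below is a minimal numerical check with the illustrative choice \(\pounds (x)=\tanh (x)\), \(S_1=0\), \(S_2=1\) (our own example, not a function from the paper).

```python
import numpy as np

# Sector condition (2) for the scalar nonlinearity f(x) = tanh(x), whose
# slope lies in [S1, S2] = [0, 1]. These f, S1, S2 are illustrative
# choices, not quantities taken from the paper.
S1, S2 = 0.0, 1.0
f = np.tanh

rng = np.random.default_rng(0)
pairs = rng.uniform(-5.0, 5.0, size=(1000, 2))
vals = [(f(v) - f(w) - S1 * (v - w)) * (f(v) - f(w) - S2 * (v - w))
        for v, w in pairs]
max_val = max(vals)  # condition (2) requires this product to be <= 0
```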

Accordingly, the synchronization error is defined as

$$\begin{aligned} {\hat{e}}_p({\hat{t}})=e_p({\hat{t}})-\rightthreetimes ({\hat{t}}), p=1,2,\ldots ,{\mathscr {N}}, \end{aligned}$$

where \(\rightthreetimes ({\hat{t}})\in \Re ^n\), known as the leader node, denotes the state of the uncontrolled isolated node and satisfies:

$$\begin{aligned}&{\dot{\rightthreetimes }}({\hat{t}})=\pounds (\rightthreetimes ({\hat{t}})).&\end{aligned}$$
(3)

Assumption 2

The sampled measurements of \(e({\hat{t}})\) and \(\rightthreetimes ({\hat{t}})\) at the sampling instant \({\hat{t}}_f\), denoted by \(e({\hat{t}}_f)\) and \(\rightthreetimes ({\hat{t}}_f)\), respectively, are available, and a zero-order holder generates and outputs the control signal over a sequence of hold intervals,

$$\begin{aligned}&0={\hat{t}}_0<{\hat{t}}_1<\cdot \cdot \cdot<{\hat{t}}_f<\cdot \cdot \cdot <\lim _{f\rightarrow \infty }{\hat{t}}_f= +\infty , \end{aligned}$$

where

$$\begin{aligned}&{\hat{t}}_{f+1}-{\hat{t}}_f=d_f\in (0,d]. \end{aligned}$$

\(d_f\) denotes the sampling period for all \(f\ge 0\), and d denotes the upper bound of the sampling intervals.
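The zero-order hold in Assumption 2 keeps the control signal constant over each interval \([{\hat{t}}_f,{\hat{t}}_{f+1})\). A minimal sketch of the corresponding bookkeeping, assuming a uniform sampling period equal to the bound d (the assumption itself allows any \(d_f\in (0,d]\)):

```python
import numpy as np

# Zero-order-hold bookkeeping for Assumption 2, sketched with a uniform
# sampling period d; the value of d below is illustrative.
d = 0.1

def last_sample_instant(t, d):
    """Return t_f, the latest sampling instant with t_f <= t < t_f + d."""
    return np.floor(t / d) * d

# On [t_f, t_f + d) the controller uses the held value of the error at t_f.
t = 0.37
t_f = last_sample_instant(t, d)
```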

Consider the following sampled-data controller:

$$\begin{aligned}&{u}_p({\hat{t}})=\daleth _{p}{\hat{e}}_p({\hat{t}}_f),\ {\hat{t}}_f\le {\hat{t}} < {\hat{t}}_{f+1}, p=1,2,\ldots ,{\mathscr {N}},&\end{aligned}$$
(4)

where \(\daleth _{p}\in \Re ^{n\times n}\) is the controller gain matrix to be designed.

The synchronization error system (ES) then takes the following form:

$$\begin{aligned}&\left\{ \begin{aligned} \dot{{\hat{e}}}_p({\hat{t}})=&\pounds (e_p({\hat{t}}))-\pounds (\rightthreetimes ({\hat{t}}))+{c}\sum _{q=1}^{{\mathscr {N}}}{\mathscr {G}}_{pq}{\mathscr {A}}{\hat{e}}_q({\hat{t}}-\ell )+\daleth _{p}{\hat{e}}_p{({\hat{t}}_f)},\\ \hat{{y}}_p({\hat{t}})=&\Game e_p({\hat{t}})-\Game \rightthreetimes ({\hat{t}})=\Game {\hat{e}}_p({\hat{t}}),\ {\hat{t}}_f\le {\hat{t}}<{\hat{t}}_{f+1}, \end{aligned}\right.&\end{aligned}$$
(5)

and Eq. (5) can be written compactly as

$$\begin{aligned}&\left\{ \begin{aligned} \dot{{\hat{e}}}({\hat{t}})=&\beth ({\hat{e}}({\hat{t}}))+{c}({\mathscr {G}}\otimes {\mathscr {A}}){\hat{e}}({\hat{t}}-\ell )+\daleth {\hat{e}}{({\hat{t}}_f)},\\ \hat{{y}}({\hat{t}})=&(I_{\mathscr {N}}\otimes \Game ){\hat{e}}({\hat{t}}),\ {\hat{t}}_f\le {\hat{t}}<{\hat{t}}_{f+1}, \end{aligned}\right.&\end{aligned}$$
(6)

in which

$$\begin{aligned}&\hat{{y}}({\hat{t}})=(\hat{{y}}_1^T({\hat{t}}),\hat{{y}}_2^T({\hat{t}}),\ldots ,\hat{{y}}_{\mathscr {N}}^T({\hat{t}}))^T, {\hat{e}}({\hat{t}})=({\hat{e}}_1^T({\hat{t}}),{\hat{e}}_2^T({\hat{t}}),\ldots ,{\hat{e}}_{\mathscr {N}}^T({\hat{t}}))^T, \daleth =diag\left\{ \daleth _{1}, \daleth _{2},\cdot \cdot \cdot , \daleth _{{\mathscr {N}}}\right\} ,\\&\beth ({\hat{e}}({\hat{t}}))=(\pounds ^T(e_1({\hat{t}}))-\pounds ^T(\rightthreetimes ({\hat{t}})),\pounds ^T(e_2({\hat{t}}))-\pounds ^T(\rightthreetimes ({\hat{t}})),\ldots ,\pounds ^T(e_{\mathscr {N}}({\hat{t}}))-\pounds ^T(\rightthreetimes ({\hat{t}})))^T. \end{aligned}$$
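The equivalence between the node-wise coupling in (5) and the Kronecker form in (6) is a standard identity; it can be checked numerically for random matrices of illustrative size (the sizes and coupling strength below are our own choices, not from the paper).

```python
import numpy as np

# Check that the node-wise coupling term c * sum_q G_pq * A @ e_q in (5)
# matches the Kronecker form c * (G (x) A) @ e_hat used in (6).
rng = np.random.default_rng(1)
N, n, c = 4, 3, 0.8  # illustrative sizes and coupling strength

A = rng.standard_normal((n, n))
G = rng.uniform(0.0, 1.0, size=(N, N))
np.fill_diagonal(G, 0.0)
np.fill_diagonal(G, -G.sum(axis=1))   # zero row sums, a common convention

e_hat = rng.standard_normal(N * n)    # stacked error vector
nodes = e_hat.reshape(N, n)

nodewise = np.concatenate([c * sum(G[p, q] * (A @ nodes[q]) for q in range(N))
                           for p in range(N)])
kron_form = c * (np.kron(G, A) @ e_hat)
err = float(np.max(np.abs(nodewise - kron_form)))
```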

Throughout this research, the aim is to synchronize the \({\mathscr {N}}\) follower systems (FSs) (1) with the leader system (LS) (3) by means of the sampled-data controller (4). In essence, the state trajectory of ES (6) approaches the origin asymptotically or exponentially for any initial condition.

Proposition 1

ES (6) is said to be asymptotically stable if, for any initial condition \({\hat{e}}({\hat{t}}_0)\),

$$\begin{aligned}&\lim _{{\hat{t}}\rightarrow +\infty }||{\hat{e}}({\hat{t}})||=0. \end{aligned}$$

Proposition 2

ES (6) is said to be exponentially stable if there exist constants \(\backepsilon >0\) and \(\eth >0\) such that

$$\begin{aligned}&||{\hat{e}}({\hat{t}})||\le \eth e^{-\backepsilon {\hat{t}}}\sup _{-\ell \le \jmath \le 0} \left\{ ||{\hat{e}}(\jmath )||,||\dot{{\hat{e}}}(\jmath )||\right\} \end{aligned}$$

holds for any initial condition \({\hat{e}}({\hat{t}}_0)\), where \(\eth\) and \(\backepsilon\) denote the decay coefficient and the decay rate, respectively.

Lemma 1

(Zeng, He, Wu, and She [33]). Assume that the function \(e:[\ltimes , \veebar ]\rightarrow \Re ^n\) is differentiable. The following inequality holds for symmetric matrices \({\mathscr {R}}\in \Re ^{n\times n}\), \(U_1, U_3\in \Re ^{3n\times 3n}\), and any matrices \(U_2\in \Re ^{3n\times 3n}\), \(M_1, M_2\in \Re ^{3n\times n}\) satisfying

$$\begin{aligned}&{\bar{\Phi }}= \begin{bmatrix} U_1 &{} U_2 &{} M_1 \\ *&{} U_3 &{} M_2 \\ *&{} *&{} {\mathscr {R}} \end{bmatrix} \ge 0: \\&\qquad -\int \limits _\ltimes ^\veebar {\dot{e}}^T(\wp ){\mathscr {R}}{\dot{e}}(\wp )d\wp \le \phi ^T(\ltimes ,\veebar ) \varPsi (\ltimes ,\veebar )\phi (\ltimes ,\veebar ), \end{aligned}$$

where

$$\begin{aligned}&\varPsi (\ltimes ,\veebar )=(\veebar -\ltimes )\Bigl (U_1+\dfrac{1}{3}U_3\Bigl )+Sym\big \{ M_1{\bar{\Phi }}_1+M_2{\bar{\Phi }}_2\big \},\ {\bar{\Phi }}_1=\flat _1-\flat _2,\ {\bar{\Phi }}_2=2\flat _3-\flat _1-\flat _2,\\&\flat _1= [\amalg \ \ 0\ \ 0], \flat _2=[0\ \ \amalg \ \ 0] , \flat _3=[0\ \ 0\ \ \amalg ] , \phi (\ltimes ,\veebar )=\left[ e^T(\veebar )\ \ e^T(\ltimes )\ \ \frac{1}{\veebar -\ltimes }\int \limits _\ltimes ^\veebar e^T(\wp )d\wp \right] ^T. \end{aligned}$$
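A numerical sanity check of Lemma 1 in the scalar case (n = 1) is sketched below; the test function \(e(s)=\sin (s)\) on \([0,1]\) and the matrices \(M_1, M_2\) are our own illustrative choices. Choosing \(U_1=M_1{\mathscr {R}}^{-1}M_1^T\), \(U_2=M_1{\mathscr {R}}^{-1}M_2^T\), \(U_3=M_2{\mathscr {R}}^{-1}M_2^T\) makes \({\bar{\Phi }}\ge 0\), so the hypothesis of the lemma is satisfied.

```python
import numpy as np

# Scalar (n = 1) check of Lemma 1 with e(s) = sin(s) on [a, b] = [0, 1].
a, b, R = 0.0, 1.0, 2.0
M1 = np.array([[0.3], [-0.2], [0.1]])
M2 = np.array([[-0.4], [0.5], [0.2]])

def trap(y, x):  # trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

s = np.linspace(a, b, 100001)
e, de = np.sin(s), np.cos(s)

lhs = -trap(R * de**2, s)                    # -int_a^b e'(s) R e'(s) ds
phi = np.array([np.sin(b), np.sin(a), trap(e, s) / (b - a)])

f1, f2, f3 = np.eye(3)                       # row selectors flat_1..flat_3
Sym = lambda Y: Y + Y.T
U1, U3 = M1 @ M1.T / R, M2 @ M2.T / R        # makes Phi_bar >= 0
Psi = (b - a) * (U1 + U3 / 3.0) + Sym(M1 @ (f1 - f2)[None, :]
                                      + M2 @ (2 * f3 - f1 - f2)[None, :])
gap = float(phi @ Psi @ phi - lhs)           # nonnegative if Lemma 1 holds
```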

Lemma 2

(Lee and Park [36]). Assume that the function \(e:[\ltimes , \veebar ]\rightarrow \Re ^n\) is differentiable. For a symmetric matrix \({\mathscr {R}}\in \Re ^{n\times n}>0\) and any matrices \(M_1, M_2\in \Re ^{3n\times n}\), the following inequality holds:

$$\begin{aligned}&-\int \limits _\ltimes ^\veebar {\dot{e}}^T(\wp ){\mathscr {R}}{\dot{e}}(\wp )d\wp \le \varpi ^T(\ltimes ,\veebar )\varPsi (\ltimes ,\veebar )\varpi (\ltimes ,\veebar ), \end{aligned}$$

where

$$\begin{aligned}&\varpi (\ltimes ,\veebar )=\left[ e^T(\veebar )\ \ e^T(\ltimes ) \ \ \int \limits _\ltimes ^\veebar e^T(\wp )d\wp \right] ^T, \\&\varPsi (\ltimes ,\veebar )=(\veebar -\ltimes )\biggl (P_1+\frac{(\veebar -\ltimes )^2}{3}P_2-Sym\left\{ \left[ M_2\ \ M_2 \ \ 0 \right] \right\} \biggl )+Sym\left\{ \left[ M_1\ \ -M_1\ \ 2M_2 \right] \right\} , \\&P_1=M_1{\mathscr {R}}^{-1}{M_1}^T, P_2=M_2{\mathscr {R}}^{-1}{M_2}^T. \end{aligned}$$

Remark 1

The primary distinction of Lemma 2 is that \(\frac{1}{\veebar -\ltimes }\int \limits _\ltimes ^\veebar e(\wp )d\wp\) is replaced by \(\int \limits _\ltimes ^\veebar e(\wp )d\wp\) in the augmented vector \(\varpi (\ltimes ,\veebar )\). This feature is advantageous for sampled-data systems. For the discussion below, let \(\varpi (\ltimes , \veebar )=\frac{1}{\veebar -\ltimes }\int \limits _\ltimes ^\veebar e(\wp )d\wp\). When Lemma 1 is applied to a system with a time-varying delay, the term \(\int \limits _{{\hat{t}}-\varsigma ({\hat{t}})}^{{\hat{t}}}e(\wp )d\wp =\varsigma ({\hat{t}})\varpi ({\hat{t}}-\varsigma ({\hat{t}}),{\hat{t}})\) must be included in the augmented vector of the stability conditions, and it can be handled by the convex combination method. For sampled-data systems, however, the term \(({\hat{t}}-{\hat{t}}_f)\varpi ({\hat{t}}_f,{\hat{t}})\) creates a difficulty: owing to the discrete nature of sampled-data systems, i.e., \({\hat{t}}-{\hat{t}}_f\in [0,d)\) for \({\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1})\), the convex combination approach cannot handle the term \({\hat{t}}-{\hat{t}}_f\). For this reason, the most widely used inequalities cannot serve as tools for sampled-data systems with time delays, whereas the MFMBII presented in Lemma 2 can. A further benefit is that Lemma 1 involves \(24.5n^2+3.5n\) decision variables, whereas Lemma 2 involves only \(6.5n^2+0.5n\). In other words, Lemma 2 significantly reduces the numerical complexity of the stability criteria.
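The decision-variable counts quoted above follow from counting free entries: a symmetric \(m\times m\) matrix contributes \(m(m+1)/2\) variables and a full \(p\times q\) matrix contributes \(pq\). A small script reproducing both totals:

```python
# Reproducing the decision-variable counts of Lemma 1 and Lemma 2.
def sym_count(m):
    """Free entries of an m-by-m symmetric matrix."""
    return m * (m + 1) // 2

def lemma1_vars(n):
    # U1, U3 symmetric 3n x 3n; U2 full 3n x 3n; M1, M2 in R^{3n x n};
    # R symmetric n x n  ->  24.5 n^2 + 3.5 n
    return 2 * sym_count(3 * n) + (3 * n) ** 2 + 2 * 3 * n * n + sym_count(n)

def lemma2_vars(n):
    # M1, M2 in R^{3n x n}; R symmetric n x n  ->  6.5 n^2 + 0.5 n
    return 2 * 3 * n * n + sym_count(n)

counts = [(lemma1_vars(n), lemma2_vars(n)) for n in range(1, 6)]
```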

Remark 2

It should be noted that to apply the convex combination approach to the sampled-data system, the coefficient \(\dfrac{(\veebar -\ltimes )^2}{3}\) of the matrix \(P_2\) in Lemma 2 must be bounded by \(\dfrac{d^2}{3}\), where d is the upper bound of the system's sampling period. This fact introduces conservatism into the results, so we propose the next lemma to address this problem. Compared with the existing results [33, 36], the inequality proposed in Lemma 3 deserves to be widely applied to sampled-data systems, since it is less conservative and less computationally complex in deriving tractable stability conditions expressed in terms of linear matrix inequalities.

Lemma 3

Assume that the function \(e:[\ltimes , \veebar ]\rightarrow \Re ^n\) is differentiable. For a symmetric matrix \({\mathscr {R}}\in \Re ^{n\times n}>0\) and any matrices \(M_1, M_2\in \Re ^{3n\times n}\), the following inequality holds:

$$\begin{aligned}&-\int \limits _\ltimes ^\veebar {\dot{e}}^T(\wp ){\mathscr {R}}{\dot{e}} (\wp )d\wp \le \varpi ^T(\ltimes ,\veebar )\varPsi (\ltimes ,\veebar )\varpi (\ltimes ,\veebar ), \end{aligned}$$

where

$$\begin{aligned}&\varpi (\ltimes ,\veebar )=\left[ e^T(\veebar )\ \ e^T(\ltimes ) \ \ \int \limits _\ltimes ^\veebar e^T(\wp )d\wp \right] ^T,\\&\varPsi (\ltimes ,\veebar )=(\veebar -\ltimes )\Bigl (P_1+\frac{1}{3}P_2 \Bigl )+Sym\big \{ M_1(\flat _1-\flat _2)+M_2(2\flat _3-\flat _1-\flat _2) \big \},\\&\flat _1= [\amalg \ \ 0\ \ 0], \flat _2=[0\ \ \amalg \ \ 0] , \flat _3=[0\ \ 0\ \ \amalg ],\\&P_1=M_1{\mathscr {R}}^{-1}{M_1}^T, P_2=M_2{\mathscr {R}}^{-1}{M_2}^T. \end{aligned}$$

Proof

Since \({\mathscr {R}}>0\), the following inequality holds by the Schur complement:

$$\begin{aligned} \begin{bmatrix} M_1{\mathscr {R}}^{-1}M_1^T &{} M_1{\mathscr {R}}^{-1}M_2^T &{} M_1 \\ *&{} M_2{\mathscr {R}}^{-1}M_2^T &{} M_2 \\ *&{} *&{} {\mathscr {R}} \end{bmatrix} \ge 0. \end{aligned}$$

By replacing \(\phi (\ltimes , \veebar )\) with \(\varpi (\ltimes , \veebar )\) and setting \(U_1=M_1{\mathscr {R}}^{-1}M_1^T\), \(U_2=M_1{\mathscr {R}}^{-1}M_2^T\), and \(U_3=M_2{\mathscr {R}}^{-1}M_2^T\), Lemma 3 follows naturally from the same proof strategy as that of Lemma 1. \(\square\)
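The positive semidefiniteness used at the start of this proof can also be seen directly: the block matrix equals \(XX^T\) with \(X=\left[ M_1{\mathscr {R}}^{-1/2};\ M_2{\mathscr {R}}^{-1/2};\ {\mathscr {R}}^{1/2}\right]\). A numerical illustration with random matrices (the size n below is our own choice):

```python
import numpy as np

# The block matrix in the proof of Lemma 3 is X X^T, hence positive
# semidefinite for any M1, M2 once R > 0. Illustrative size n = 2.
rng = np.random.default_rng(2)
n = 2
M1 = rng.standard_normal((3 * n, n))
M2 = rng.standard_normal((3 * n, n))
B = rng.standard_normal((n, n))
R = B @ B.T + n * np.eye(n)          # a positive definite R
Rinv = np.linalg.inv(R)

top = np.hstack([M1 @ Rinv @ M1.T, M1 @ Rinv @ M2.T, M1])
mid = np.hstack([M2 @ Rinv @ M1.T, M2 @ Rinv @ M2.T, M2])
bot = np.hstack([M1.T, M2.T, R])
Phi = np.vstack([top, mid, bot])     # the 7n x 7n block matrix

min_eig = float(np.min(np.linalg.eigvalsh(Phi)))  # >= 0 up to rounding
```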

Lemma 4

(Gunasekaran, Zhai, and Yu [37]). For a matrix \(\mho \in \Re ^{3n\times 3n}\) and a positive definite matrix \({\mathcal {Q}}\in \Re ^{n \times n}\) satisfying

$$\begin{aligned} \mho = \begin{bmatrix} \mho _{11} &{} \mho _{12} &{} \mho _{13} \\ *&{} \mho _{22} &{} \mho _{23} \\ *&{} *&{} \mho _{33} \end{bmatrix} \ge 0, \mho _{pq}\in \Re ^{n\times n}, p,q=1,2,3, \end{aligned}$$

and \({\mathcal {Q}}-\mho _{33}>0\), the following integral inequality holds:

$$\begin{aligned}&-\ell \int \limits _{{\hat{t}}-\ell }^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp ){\mathcal {Q}}{\dot{\rightthreetimes }}(\wp )d\wp \le {\begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}-\ell ) \end{bmatrix}^T \begin{bmatrix} H_{11} &{} H_{12}\\ *&{} H_{22} \end{bmatrix} \begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}-\ell ) \end{bmatrix} }, \end{aligned}$$

where

$$\begin{aligned}&H_{11}=-{\mathcal {Q}}+\mho _{33}+\ell (\ell \mho _{11}+\mho _{13}^T+\mho _{13}),\\&H_{12}={\mathcal {Q}}-\mho _{33}+\ell (\ell \mho _{12}-\mho _{13}+\mho _{23}^T),\\&H_{22}=-{\mathcal {Q}}+\mho _{33}+\ell (\ell \mho _{22}-\mho _{23}-\mho _{23}^T). \end{aligned}$$

Remark 3

Observe that in Lemma 4 the matrix \({\mathcal {Q}}\) of the integral inequality is constrained relative to the matrix \(\mho _{33}\) by \({\mathcal {Q}}-\mho _{33}>0\). In contrast to Lemma 4, in the following lemma the Schur complement is used to construct the matrix \({\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T\) such that \({\mathscr {U}}_{11}-{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T\ge 0\), which frees the matrix in the integral inequality from such a constraint. As a result, the resulting conditions are substantially more compact. Moreover, Lemma 4 involves \(10n^2\) decision variables, while the following lemma uses only \(4n^2\), which greatly reduces the numerical complexity of the stability criteria. Note that the integral inequality method in Lemma 5 was first proposed in [41] and then developed and applied to multi-agent networks [37]. Moreover, Lemma 5 provides more useful information about the state vectors than the Jensen inequality.

Lemma 5

For a symmetric matrix \({\mathscr {U}}\in \Re ^{2n\times 2n}\) satisfying

$$\begin{aligned}&{\mathscr {U}}= \begin{bmatrix} {\mathscr {U}}_{11} &{} {\mathscr {U}}_{12} \\ *&{} {\mathscr {U}}_{22} \end{bmatrix} \ge 0, {\mathscr {U}}_{pq}\in \Re ^{n\times n}, p,q=1,2, \end{aligned}$$

the following integral inequality holds:

$$\begin{aligned}&-({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp ){\mathscr {U}}_{11}{\dot{\rightthreetimes }}(\wp )d\wp \le {\begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}_f) \end{bmatrix}^T \begin{bmatrix} H_{11} &{} H_{12}\\ *&{} H_{22} \end{bmatrix} \begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}_f) \end{bmatrix} },&\end{aligned}$$
(7)

where

$$\begin{aligned}&H_{11}={\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T-{\mathscr {U}}_{11}+({\hat{t}}-{\hat{t}}_f)^2{\mathscr {U}}_{22}+2({\hat{t}}-{\hat{t}}_f){\mathscr {U}}_{12},\\&H_{12}=-{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T+{\mathscr {U}}_{11}-({\hat{t}}-{\hat{t}}_f){\mathscr {U}}_{12}^T,\\&H_{22}={\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T-{\mathscr {U}}_{11}. \end{aligned}$$

Proof

Consider the following decomposition:

$$\begin{aligned}&-({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp ){\mathscr {U}}_{11}{\dot{\rightthreetimes }}(\wp )d\wp \nonumber \\&\quad = -({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp )({\mathscr {U}}_{11} -{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T){\dot{\rightthreetimes }}(\wp )d\wp \nonumber \\&\qquad -({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp ){\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T{\dot{\rightthreetimes }}(\wp )d\wp . \end{aligned}$$
(8)

By the Schur complement, the following holds:

$$\begin{aligned} {\mathscr {U}}_{11}-{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T\ge 0. \end{aligned}$$

Therefore,

$$\begin{aligned}&-({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp )({\mathscr {U}}_{11}-{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T){\dot{\rightthreetimes }}(\wp )d\wp \nonumber \\&\quad \le - {\begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}_f) \end{bmatrix}^T \begin{bmatrix} {\mathscr {U}}_{11}-{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T &{} -{\mathscr {U}}_{11}+{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T\\ *&{} {\mathscr {U}}_{11}-{\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T \end{bmatrix} \begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}_f) \end{bmatrix} }, \end{aligned}$$
(9)

where the Jensen inequality has been applied.

Furthermore,

$$\begin{aligned}&- ({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}} {\begin{bmatrix} {\dot{\rightthreetimes }}(\wp ) \\ \rightthreetimes ({\hat{t}}) \end{bmatrix}^T \begin{bmatrix} {\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T &{} {\mathscr {U}}_{12}\\ *&{} {\mathscr {U}}_{22} \end{bmatrix} \begin{bmatrix} {\dot{\rightthreetimes }}(\wp ) \\ \rightthreetimes ({\hat{t}}) \end{bmatrix} }d\wp \\&\quad =-({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp ){\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T{\dot{\rightthreetimes }}(\wp )d\wp -2({\hat{t}}-{\hat{t}}_f) (\rightthreetimes ^T({\hat{t}})-\rightthreetimes ^T({\hat{t}}_f)){\mathscr {U}}_{12}\rightthreetimes ({\hat{t}})\\&\qquad - ({\hat{t}}-{\hat{t}}_f)^2\rightthreetimes ^T({\hat{t}}){\mathscr {U}}_{22}\rightthreetimes ({\hat{t}}). \end{aligned}$$

Note that the following holds by the Schur complement:

$$\begin{aligned}&{\begin{bmatrix} {\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T &{} {\mathscr {U}}_{12}\\ *&{} {\mathscr {U}}_{22} \end{bmatrix} }\ge 0. \end{aligned}$$

Hence,

$$\begin{aligned}&-({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\dot{\rightthreetimes }}^T(\wp ){\mathscr {U}}_{12}{\mathscr {U}}_{22}^{-1}{\mathscr {U}}_{12}^T{\dot{\rightthreetimes }}(\wp )d\wp \nonumber \\&\quad \le 2({\hat{t}}-{\hat{t}}_f)(\rightthreetimes ^T({\hat{t}})-\rightthreetimes ^T({\hat{t}}_f)){\mathscr {U}}_{12}\rightthreetimes ({\hat{t}}) +({\hat{t}}-{\hat{t}}_f)^2\rightthreetimes ^T({\hat{t}}){\mathscr {U}}_{22}\rightthreetimes ({\hat{t}}) \nonumber \\&\qquad = {\begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}_f) \end{bmatrix}^T \begin{bmatrix} ({\hat{t}}-{\hat{t}}_f)^2{\mathscr {U}}_{22}+2({\hat{t}}-{\hat{t}}_f){\mathscr {U}}_{12}&{} -({\hat{t}}-{\hat{t}}_f){\mathscr {U}}^T_{12}\\ *&{} 0 \end{bmatrix} \begin{bmatrix} \rightthreetimes ({\hat{t}}) \\ \rightthreetimes ({\hat{t}}_f) \end{bmatrix} }.&\end{aligned}$$
(10)

Then, Eq. (7) can be derived from (8), (9), and (10). \(\square\)
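The Jensen step used in (9), namely \(\bigl (\int _\ltimes ^\veebar {\dot{x}}(\wp )d\wp \bigr )^T S \bigl (\int _\ltimes ^\veebar {\dot{x}}(\wp )d\wp \bigr )\le (\veebar -\ltimes )\int _\ltimes ^\veebar {\dot{x}}^T(\wp )S{\dot{x}}(\wp )d\wp\), can be checked numerically; the scalar test function below is an arbitrary illustrative choice.

```python
import numpy as np

# Jensen's inequality as used in step (9), checked for x(s) = sin(s)
# on [0, 1] with an illustrative positive weight S.
a, t, S = 0.0, 1.0, 1.5

def trap(y, x):  # trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

s = np.linspace(a, t, 100001)
dx = np.cos(s)

lhs = S * trap(dx, s) ** 2            # (x(t) - x(a))^T S (x(t) - x(a))
rhs = (t - a) * trap(S * dx ** 2, s)
slack = rhs - lhs                     # nonnegative by Jensen's inequality
```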

3 Sampled-data asymptotical synchronization analysis

The asymptotic sampled-data synchronization between FSs (1) and LS (3) is examined in this section, and a sufficient condition is derived.

For brevity, a few necessary vector and matrix notations are introduced:

$$\begin{aligned}&u_1({\hat{t}})={\hat{e}}({\hat{t}})-{\hat{e}}({\hat{t}}_f),u_2({\hat{t}})={\hat{e}}({\hat{t}}) -{\hat{e}}({\hat{t}}_{f+1}), \\&\xi _1({\hat{t}})=col\left\{ {\hat{e}}({\hat{t}}_f),{\hat{e}}({\hat{t}}_{f+1}) \right\} , \xi _2({\hat{t}})=col\left\{ u_1({\hat{t}}),u_2({\hat{t}})\right\} ,\\&\xi _3({\hat{t}})=col\left\{ ({\hat{t}}_{f+1}-{\hat{t}})u_1({\hat{t}}),({\hat{t}}-{\hat{t}}_f)u_2({\hat{t}}) \right\} , \\&w_1({\hat{t}})=\int \limits _{{\hat{t}}_{f}}^{{\hat{t}}}{\hat{e}}(\wp )d\wp , w_2({\hat{t}})=\int \limits _{{\hat{t}}}^{{\hat{t}}_{f+1}}{\hat{e}}(\wp )d\wp , \\&\xi ({\hat{t}})=col\left\{ {\hat{e}}({\hat{t}}),{\hat{e}}({\hat{t}}_f),{\hat{e}} ({\hat{t}}_{f+1}),w_1({\hat{t}}),w_2({\hat{t}}), \beth ({\hat{e}}({\hat{t}})),{\hat{e}}({\hat{t}}-\ell ),\dot{{\hat{e}}}({\hat{t}}) \right\} , \\&e_q=\left[ 0_{n\times (q-1)n}\ \amalg _n\ 0_{n\times (8-q)n}\right] , q=1,2,\ldots ,8. \end{aligned}$$
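The selector matrices \(e_q\) simply extract the qth n-dimensional block of the stacked vector \(\xi ({\hat{t}})\); a minimal construction follows (the block dimension n is an illustrative choice).

```python
import numpy as np

# Block selectors e_q = [0 ... I_n ... 0]: e_q @ xi extracts the q-th
# n-dimensional block of the stacked vector xi (8 blocks, as above).
n, blocks = 2, 8
rng = np.random.default_rng(3)
xi = rng.standard_normal(blocks * n)

def selector(q, n, blocks):
    E = np.zeros((n, blocks * n))
    E[:, (q - 1) * n:q * n] = np.eye(n)
    return E

e4 = selector(4, n, blocks)
block4 = e4 @ xi                      # equals xi[3n:4n]
err = float(np.max(np.abs(block4 - xi[3 * n:4 * n])))
```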

Theorem 1

For given positive constants \(d, \ell , \rho , \gamma\), if there exist positive definite symmetric matrices \({\mathscr {W}}_1>0,{\mathscr {W}}_2>0,{\mathscr {P}}>0,{\mathscr {R}}_1>0,{\mathscr {R}}_2>0\), matrices \({\mathscr {Q}}_1,{\mathscr {Q}}_2,M_1,M_2, N_1,N_2\), and diagonal matrices \(Z_0=diag\left\{ Z_{01},Z_{02},\cdot \cdot \cdot ,Z_{0{\mathscr {N}}}\right\}\), \(Z_1=diag\left\{ Z_{11},Z_{12},\cdot \cdot \cdot ,Z_{1{\mathscr {N}}}\right\}\) such that the following LMIs hold for any \(d_f\in (0, d],\)

$$\begin{aligned}&\Delta _1(d_f)= {\begin{bmatrix} \Phi _1+d_f\Phi _2 &{}\sqrt{d_f}\Pi _{10}^TN_1 &{} \sqrt{d_f}\Pi _{10}^TN_2 \\ *&{} -{\mathscr {R}}_2 &{} 0 \\ *&{} *&{} -3{\mathscr {R}}_2 \end{bmatrix} } <0, \end{aligned}$$
(11)
$$\begin{aligned}&\Delta _2(d_f)= {\begin{bmatrix} \Phi _1+d_f\Phi _3 &{}\sqrt{d_f}\Pi _{9}^TM_1 &{} \sqrt{d_f}\Pi _{9}^TM_2 \\ *&{} -{\mathscr {R}}_1 &{} 0 \\ *&{} *&{} -3{\mathscr {R}}_1 \end{bmatrix} } <0,&\end{aligned}$$
(12)

where

$$\begin{aligned} \qquad \Phi _1=&{}Sym \biggl \{ -\rho \Pi _{11}^T\varLambda _{1}\Pi _{11}+\Pi _1^T({\mathscr {Q}}_1\Pi _2 +{\mathscr {Q}}_2\Pi _3)+\varLambda _{2}+\Pi _{9}^TM_1\Pi _{14}+\Pi _{9}^TM_2\Pi _{15} \\&{}+\Pi _{10}^TN_1\Pi _{16}+\Pi _{10}^TN_2\Pi _{17}-e_1^TZ_0e_8+e_1^TZ_0e_6 +e_1^TZ_0\Pi _{12}+e_1^TZ_1 e_2 \\&{}-\gamma e_8^TZ_0e_8+\gamma e_{8}^TZ_0e_6+\gamma e_{8}^TZ_0\Pi _{12} +\gamma e_{8}^TZ_1 e_2\biggl \}, \\ \Phi _2=&{}Sym\left\{ \Pi _5^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3)+\Pi _{7}^T{\mathscr {Q}}_1\Pi _4\right\} +e_{8}^T{\mathscr {R}}_1e_{8}, \\ \Phi _3=&{}Sym\left\{ \Pi _6^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3)+\Pi _{8}^T{\mathscr {Q}}_1\Pi _4\right\} +e_{8}^T{\mathscr {R}}_2e_{8}, \\ \Pi _1=&{}\left[ e_2^T-e_1^T\quad e_1^T-e_3^T\right] ^T, \\ \Pi _2=&{}\left[ e_1^T-e_2^T\quad e_1^T-e_3^T\right] ^T, \\ \Pi _3=&{}\left[ e_2^T\quad e_3^T\right] ^T, \Pi _4=\left[ e_{8}^T\quad e_{8}^T\right] ^T, \\ \Pi _5=&{}\left[ e_{8}^T\quad 0\right] ^T, \Pi _6=\left[ 0 \quad e_{8}^T\right] ^T, \\ \Pi _{7}=&{}\left[ e_1^T-e_2^T\quad 0\right] ^T, \Pi _{8}=\left[ 0 \quad e_1^T-e_3^T\right] ^T, \\ \Pi _{9}=&{}\left[ e_1^T\quad e_3^T\quad e_4^T\right] ^T, \Pi _{10}=\left[ e_1^T\quad e_3^T\quad e_5^T\right] ^T, \\ \Pi _{11}=&{}\left[ e_1^T\quad e_6^T\right] ^T, \Pi _{12}={c}({\mathscr {G}}\otimes {\mathscr {A}})e_7, \\ \Pi _{13}=&e_1-e_7, \Pi _{14}=(\flat _1-\flat _2)\Pi _9,\\ \Pi _{15}=&(2\flat _3-\flat _1-\flat _2)\Pi _9,\Pi _{16}=(\flat _1-\flat _2)\Pi _{10}, \\ \Pi _{17}=&(2\flat _3-\flat _1-\flat _2)\Pi _{10}, \\ \varLambda _1=&{\begin{bmatrix} \dfrac{\amalg \otimes ({S_1^T}{S_2}+{S_2^T}{S_1})}{2} &{} \dfrac{-\amalg \otimes ({S_1^T}+{S_2^T})}{2}\\ *&{}\amalg \end{bmatrix} },\\ \varLambda _2=&{} \ell ^2e_{8}^T{\mathscr {W}}_1e_{8} -\Pi _{13}^T{\mathscr {W}}_1\Pi _{13} +e_1^T{\mathscr {W}}_2e_1-e_7^T{\mathscr {W}}_2e_7+e_1^T{\mathscr {P}}e_{8}, \end{aligned}$$

then, for any initial condition \({\hat{e}}({\hat{t}}_0)\), FSs (1) and LS (3) achieve asymptotic synchronization, i.e., ES (6) is stable, and the controller gain matrix is given by \(\daleth =Z_0^{-1}Z_1\).

Proof

Choose the Lyapunov-Krasovskii functional for ES (6) as follows:

$$\begin{aligned} {\mathscr {K}}({\hat{t}})=\sum _{p=1}^{4}{\mathscr {K}}_p({\hat{t}}),\ {\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1}),&\end{aligned}$$
(13)

in which

$$\begin{aligned}&{\mathscr {K}}_1({\hat{t}})=2\ell \int \limits _{-\ell }^0\int \limits _{{\hat{t}}+\varkappa }^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp ){\mathscr {W}}_1\dot{{\hat{e}}}(\wp )d\wp d\varkappa +2\int \limits _{{\hat{t}}-\ell }^{{\hat{t}}}{\hat{e}}^T(\wp ){\mathscr {W}}_2{\hat{e}}(\wp )d\wp +{\hat{e}}^T({\hat{t}}){\mathscr {P}}{\hat{e}}({\hat{t}}),&{\mathscr {K}}_2({\hat{t}})=2\xi _3^T({\hat{t}})\left[ {\mathscr {Q}}_1\xi _2({\hat{t}})+{\mathscr {Q}}_2\xi _1({\hat{t}})\right] ,\\&{\mathscr {K}}_3({\hat{t}})=({\hat{t}}_{f+1}-{\hat{t}})\int \limits _{{\hat{t}}_f}^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp ){\mathscr {R}}_1\dot{{\hat{e}}}(\wp )d\wp ,\\&{\mathscr {K}}_4({\hat{t}})=-({\hat{t}}-{\hat{t}}_f)\int \limits _{{\hat{t}}}^{{\hat{t}}_{f+1}}\dot{{\hat{e}}}^T(\wp ){\mathscr {R}}_2\dot{{\hat{e}}}(\wp )d\wp . \end{aligned}$$

Differentiating \({\mathscr {K}}({\hat{t}})\) along the trajectory of ES (6) yields

$$\begin{aligned}&\dot{{\mathscr {K}}}_1({\hat{t}})\le 2\ell ^2\dot{{\hat{e}}}^T({\hat{t}}){\mathscr {W}}_1\dot{{\hat{e}}} ({\hat{t}})-2[{\hat{e}}({\hat{t}})-{\hat{e}}({\hat{t}}-\ell )]^T{\mathscr {W}}_1[{\hat{e}}({\hat{t}}) -{\hat{e}}({\hat{t}}-\ell )]+2{\hat{e}}^T({\hat{t}}){\mathscr {W}}_2{\hat{e}}({\hat{t}})\\&\qquad \quad -2{\hat{e}}^T({\hat{t}}-\ell ){\mathscr {W}}_2{\hat{e}}({\hat{t}}-\ell ) +2{\hat{e}}^T({\hat{t}}){\mathscr {P}}\dot{{\hat{e}}}({\hat{t}}) \\&\qquad =2\xi ^T({\hat{t}})\left\{ \ell ^2e_{8}^T{\mathscr {W}}_1e_{8} -\Pi _{13}^T{\mathscr {W}}_1\Pi _{13} +e_1^T{\mathscr {W}}_2e_1-e_7^T{\mathscr {W}}_2e_7+e_1^T{\mathscr {P}}e_{8}\right\} \xi ({\hat{t}}), \\&\dot{{\mathscr {K}}}_2({\hat{t}})=2\xi ^T({\hat{t}})\big \{ \Pi _1^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3) +({\hat{t}}_{f+1}-{\hat{t}})\times \big [\Pi _5^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3)+\Pi _{7}^T{\mathscr {Q}}_1\Pi _4\big ]\\&\qquad \quad +({\hat{t}}-{\hat{t}}_f)\times \big [\Pi _6^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3)+\Pi _{8}^T{\mathscr {Q}}_1 \Pi _4\big ]\big \} \xi ({\hat{t}}), \\&\dot{{\mathscr {K}}}_3({\hat{t}})=({\hat{t}}_{f+1}-{\hat{t}})\dot{{\hat{e}}}^T({\hat{t}}) {\mathscr {R}}_1\dot{{\hat{e}}}({\hat{t}})+{\mathscr {J}}_1,\\&\dot{{\mathscr {K}}}_4({\hat{t}})=({\hat{t}}-{\hat{t}}_f)\dot{{\hat{e}}}^T({\hat{t}}) {\mathscr {R}}_2\dot{{\hat{e}}}({\hat{t}})+{\mathscr {J}}_2, \end{aligned}$$

with \({\mathscr {J}}_1=-\int \limits _{{\hat{t}}_f}^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp ){\mathscr {R}}_1\dot{{\hat{e}}}(\wp )d\wp\) and \({\mathscr {J}}_2=-\int \limits _{{\hat{t}}}^{{\hat{t}}_{f+1}}\dot{{\hat{e}}}^T(\wp ){\mathscr {R}}_2\dot{{\hat{e}}}(\wp )d\wp .\)

Applying Lemma 3 to \({\mathscr {J}}_1\) and \({\mathscr {J}}_2\) gives

$$\begin{aligned}&\dot{{\mathscr {K}}}_3({\hat{t}})\le ({\hat{t}}_{f+1}-{\hat{t}})\xi ^T({\hat{t}})e_8^T{\mathscr {R}}_1e_8\xi ({\hat{t}}) +\xi ^T({\hat{t}})\Pi _{9}^T\biggl [({\hat{t}} -{\hat{t}}_f)\biggl (M_1{\mathscr {R}}_1^{-1}M_1^T+\frac{1}{3}M_2{\mathscr {R}}_1^{-1}M_2^T\biggl )\\&\qquad \quad +Sym\big \{ M_1(\flat _1-\flat _2)+M_2(2\flat _3-\flat _1-\flat _2)\big \}\biggl ]\Pi _{9}\xi ({\hat{t}}), \\&\dot{{\mathscr {K}}}_4({\hat{t}})\le ({\hat{t}}-{\hat{t}}_f)\xi ^T({\hat{t}})e_8^T{\mathscr {R}}_2e_8\xi ({\hat{t}})+\xi ^T({\hat{t}})\Pi _{10}^T\biggl [({\hat{t}}_{f+1}-{\hat{t}})\biggl (N_1{\mathscr {R}}_2^{-1}N_1^T +\frac{1}{3}N_2{\mathscr {R}}_2^{-1}N_2^T\biggl )\\&\qquad \quad +Sym \big \{ N_1(\flat _1-\flat _2)+N_2(2\flat _3-\flat _1-\flat _2) \big \}\biggl ]\Pi _{10}\xi ({\hat{t}}), \end{aligned}$$

which hold for any matrices \(M_i,N_i, i=1, 2,\) of appropriate dimensions.
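Lemma 3 belongs to the family of free-matrix-based integral inequalities that refine the classical Jensen bound mentioned in the introduction. As a numerical sanity check of the baseline Jensen inequality \(\int _a^b\dot{x}^T{\mathscr {R}}\dot{x}\,d\wp \ge \frac{1}{b-a}\bigl (\int _a^b\dot{x}\,d\wp \bigr )^T{\mathscr {R}}\bigl (\int _a^b\dot{x}\,d\wp \bigr )\), the sketch below evaluates both sides for an illustrative trajectory and weight matrix (neither is taken from the paper):

```python
import numpy as np

# Jensen's integral inequality: for R > 0,
#   int_a^b xdot(s)^T R xdot(s) ds  >=  (1/(b-a)) (int xdot)^T R (int xdot).
# Illustrative trajectory and weight matrix (not from the paper).
R = np.array([[2.0, 0.0],
              [0.0, 1.0]])
s = np.linspace(0.0, 1.0, 4001)
xdot = np.stack([np.sin(s), np.cos(s)], axis=1)

def trapz(vals, grid):
    """Composite trapezoidal rule."""
    return float(np.sum((vals[:-1] + vals[1:]) / 2.0 * np.diff(grid)))

lhs = trapz(np.einsum('ni,ij,nj->n', xdot, R, xdot), s)
dx = np.array([trapz(xdot[:, 0], s), trapz(xdot[:, 1], s)])
rhs = dx @ R @ dx / (s[-1] - s[0])
assert lhs >= rhs  # Jensen lower bound holds on this sample
```

Free-matrix-based bounds such as Lemma 3 tighten this estimate by introducing the slack matrices \(M_i, N_i\), which is why they appear in \(\dot{{\mathscr {K}}}_3\) and \(\dot{{\mathscr {K}}}_4\) above.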

In addition, by Assumption 1, the following inequality holds for any scalar \(\rho >0\):

$$\begin{aligned} \qquad&0\le -2\rho {\begin{bmatrix} {\hat{e}}({\hat{t}}) \\ \beth ({\hat{e}}({\hat{t}})) \end{bmatrix}^T \begin{bmatrix} \dfrac{\amalg \otimes ({S_1^T}{S_2}+{S_2^T}{S_1})}{2} &{} \dfrac{-\amalg \otimes ({S_1^T}+{S_2^T})}{2}\\ *&{}\amalg \end{bmatrix}^T \begin{bmatrix} {\hat{e}}({\hat{t}}) \\ \beth ({\hat{e}}({\hat{t}})) \end{bmatrix} }, \end{aligned}$$

i.e.,

$$\begin{aligned}&0\le -2\rho \xi ^T({\hat{t}})\Pi _{11}^T {\begin{bmatrix} \dfrac{\amalg \otimes ({S_1^T}{S_2}+{S_2^T}{S_1})}{2} &{} \dfrac{-\amalg \otimes ({S_1^T}+{S_2^T})}{2}\\ *&{}\amalg \end{bmatrix}^T } \Pi _{11}\xi ({\hat{t}}).&\end{aligned}$$
(14)

According to ES (6), for any diagonal matrix \(Z_0\) and positive scalar \(\gamma\),

$$\begin{aligned}&0= 2\left[ {\hat{e}}^T({\hat{t}})Z_0+\gamma \dot{{\hat{e}}}^T({\hat{t}})Z_0\right] \left[ -\dot{{\hat{e}}}({\hat{t}})+ \beth ({\hat{e}}({\hat{t}}))+{c}({\mathscr {G}}\otimes {\mathscr {A}}){\hat{e}}({\hat{t}}-\ell )+\daleth {\hat{e}}{({\hat{t}}_f)} \right] . \end{aligned}$$
(15)

Defining \(Z_0\daleth =Z_1\) and adding the right-hand sides of (14) and (15) to \(\dot{{\mathscr {K}}}({\hat{t}})\), we obtain

$$\begin{aligned}&\dot{{\mathscr {K}}}({\hat{t}})\le \xi ^T({\hat{t}})\biggl [\dfrac{{\hat{t}}-{\hat{t}}_f}{d_f}{\hat{\Delta }}_2(d_f)+\dfrac{{\hat{t}}_{f+1}-{\hat{t}}}{d_f}{\hat{\Delta }}_1(d_f)\biggl ]\xi ({\hat{t}}), \end{aligned}$$

where

$$\begin{aligned}&{\hat{\Delta }}_1(d_f)=\Phi _1+d_f\Phi _2 +d_f\Pi _{10}^T\biggl (N_1{\mathscr {R}}_2^{-1}N_1^T+\frac{1}{3}N_2{\mathscr {R}}_2^{-1}N_2^T\biggl )\Pi _{10},\\&{\hat{\Delta }}_2(d_f)=\Phi _1+d_f\Phi _3+d_f\Pi _{9}^T\biggl (M_1{\mathscr {R}}_1^{-1}M_1^T +\frac{1}{3}M_2{\mathscr {R}}_1^{-1}M_2^T\biggl )\Pi _{9}. \end{aligned}$$

By the Schur complement, \({\hat{\Delta }}_1(d_f)<0\) and \({\hat{\Delta }}_2(d_f)<0\) are equivalent to (11) and (12), respectively. Hence \(\dot{{\mathscr {K}}}({\hat{t}})\le 0\), and ES (6) is therefore stable by Theorem 1 of [24]. \(\square\)
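The Schur complement step used here can be illustrated numerically: for a symmetric block matrix \(\bigl [{\begin{matrix} A &{} B \\ B^T &{} C \end{matrix}}\bigr ]\) with \(C<0\), negative definiteness of the block matrix is equivalent to \(C<0\) together with \(A-BC^{-1}B^T<0\). A minimal sketch with illustrative matrices (not the LMI data of this paper):

```python
import numpy as np

# Schur complement test: M = [[A, B], [B^T, C]] < 0  iff  C < 0 and A - B C^{-1} B^T < 0.
# A, B, C below are small illustrative matrices, not from the paper.
A = -2.0 * np.eye(2)
C = -1.0 * np.eye(2)
B = 0.5 * np.eye(2)
M = np.block([[A, B], [B.T, C]])

block_nd = bool(np.all(np.linalg.eigvalsh(M) < 0))
schur = A - B @ np.linalg.inv(C) @ B.T
schur_nd = bool(np.all(np.linalg.eigvalsh(C) < 0) and
                np.all(np.linalg.eigvalsh(schur) < 0))
assert block_nd and schur_nd  # both characterizations agree on this example
```

This is exactly the manipulation that converts the nonlinear terms \(M_i{\mathscr {R}}_1^{-1}M_i^T\) and \(N_i{\mathscr {R}}_2^{-1}N_i^T\) in \({\hat{\Delta }}_1(d_f)\), \({\hat{\Delta }}_2(d_f)\) into the LMIs (11) and (12).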

Remark 4

Here, the continuous functional (13), which contains information from both sides of the sampling interval, is constructed by combining the two-sided looped-functional theory in [24] with the time-dependent continuous Lyapunov functional theory in [36]. The state information \({\hat{e}}({\hat{t}}_f)\), \({\hat{e}}({\hat{t}})\), and \({\hat{e}}({\hat{t}}_{f+1})\) is fully utilized, in addition to the integral states \(\int \limits _{{\hat{t}}}^{{\hat{t}}_f}{\hat{e}}(\wp )d\wp\) and \(\int \limits _{{\hat{t}}}^{{\hat{t}}_{f+1}}{\hat{e}}(\wp )d\wp\). Exploiting the information in these states is highly beneficial for producing less conservative results. In further contrast to earlier studies on the synchronization of CDNs with sampled data, for instance [2,3,4, 7, 16], the presence of \({\mathscr {K}}_2({\hat{t}})\), in which the matrices \({\mathscr {Q}}_1\) and \({\mathscr {Q}}_2\) need not be positive definite, weakens the constraints on the functional (13), which vanishes at the sampling instants. Therefore, the Lyapunov functionals \({\mathscr {K}}_1({\hat{t}})\), \({\mathscr {K}}_3({\hat{t}})\), and \({\mathscr {K}}_4({\hat{t}})\) proposed in (13) are novel.

If the delay parameter \(\ell\) equals 0, FSs (1) reduces to the following model:

$$\begin{aligned}&\left\{ \begin{aligned} {\dot{e}}_p({\hat{t}})=&\pounds (e_p({\hat{t}}))+{c}\sum _{q=1,q\ne p}^{{\mathscr {N}}}{\mathscr {G}}_{pq}{\mathscr {A}}(e_q({\hat{t}})-e_p({\hat{t}}))+ {u}_p({\hat{t}}), \\ {y}_p({\hat{t}})=&\Game e_p({\hat{t}}), p=1,2,\ldots ,{\mathscr {N}}. \end{aligned}\right.&\end{aligned}$$
(16)

Setting the parameter \(\ell\) of Theorem 1 to 0 yields the following corollary:

Corollary 1

For given positive constants \(d, \rho , \gamma\), if there exist symmetric positive definite matrices \({\mathscr {W}}_1>0,{\mathscr {W}}_2>0,{\mathscr {P}}>0,{\mathscr {R}}_1>0,{\mathscr {R}}_2>0\), matrices \({\mathscr {Q}}_1,{\mathscr {Q}}_2,M_1,M_2, N_1,N_2\), and diagonal matrices \(Z_0=diag\left\{ Z_{01},Z_{02},\ldots ,Z_{0{\mathscr {N}}}\right\}\), \(Z_1=diag\left\{ Z_{11},Z_{12},\ldots ,Z_{1{\mathscr {N}}}\right\}\) such that the following LMIs hold for any \(d_f\in (0, d],\)

$$\begin{aligned}&\Delta _1(d_f)= {\begin{bmatrix} \Phi _1+d_f\Phi _2 &{}\sqrt{d_f}\Pi _{10}^TN_1 &{} \sqrt{d_f}\Pi _{10}^TN_2 \\ *&{} -{\mathscr {R}}_2 &{} 0 \\ *&{} *&{} -3{\mathscr {R}}_2 \end{bmatrix} } <0, \end{aligned}$$
(17)
$$\begin{aligned}&\Delta _2(d_f)= {\begin{bmatrix} \Phi _1+d_f\Phi _3 &{}\sqrt{d_f}\Pi _{9}^TM_1 &{} \sqrt{d_f}\Pi _{9}^TM_2 \\ *&{} -{\mathscr {R}}_1 &{} 0 \\ *&{} *&{} -3{\mathscr {R}}_1 \end{bmatrix} } <0,&\end{aligned}$$
(18)

where

$$\begin{aligned} \Phi _1=&{}Sym \biggl \{ -\rho \Pi _{11}^T\Lambda _{1}\Pi _{11}+\Pi _1^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3) +\Lambda _{2}+\Pi _{9}^TM_1\Pi _{14}+\Pi _{9}^TM_2\Pi _{15}\\&{}+\Pi _{10}^TN_1\Pi _{16}+\Pi _{10}^TN_2\Pi _{17}-e_1^TZ_0e_8+e_1^TZ_0e_6 +e_1^TZ_0\Pi _{12}+e_1^TZ_1 e_2 \\&{}-\gamma e_8^TZ_0e_8+\gamma e_{8}^TZ_0e_6+\gamma e_{8}^TZ_0\Pi _{12} +\gamma e_{8}^TZ_1 e_2\biggl \},\\ \Phi _2=&{}Sym\left\{ \Pi _5^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3)+\Pi _{7}^T{\mathscr {Q}}_1\Pi _4\right\} +e_{8}^T{\mathscr {R}}_1e_{8}, \\ \Phi _3=&{}Sym\left\{ \Pi _6^T({\mathscr {Q}}_1\Pi _2+{\mathscr {Q}}_2\Pi _3)+\Pi _{8}^T{\mathscr {Q}}_1\Pi _4\right\} +e_{8}^T{\mathscr {R}}_2e_{8}, \\ \Lambda _2=&{} -\Pi _{13}^T{\mathscr {W}}_1\Pi _{13} +e_1^T{\mathscr {W}}_2e_1-e_7^T{\mathscr {W}}_2e_7+e_1^T{\mathscr {P}}e_{8}, \end{aligned}$$

where \(\Lambda _{1},\Pi _i, i=1,2,\ldots ,17,\) are defined in Theorem 1, then, for any initial condition \({\hat{e}}({\hat{t}}_0)\), FSs (16) and LS (3) achieve asymptotic synchronization, and the controller gain matrix is given by \(\daleth =Z_0^{-1}Z_1\).

4 Sampled-data exponential synchronization analysis

In this section, the exponential stability of ES (6) under the sampled-data controller (4) is studied by constructing a Lyapunov functional that depends on the sampling times. For brevity, the following notations for vectors and matrices are introduced:

$$\begin{aligned}&\eta ({\hat{t}})=col\left\{ {\hat{e}}({\hat{t}}),{\hat{e}}({\hat{t}}_f),\beth ({\hat{e}}({\hat{t}})),{\hat{e}}({\hat{t}}-\ell ),\dot{{\hat{e}}}({\hat{t}}), \int \limits _{{\hat{t}}_f}^{{\hat{t}}}{\hat{e}}(\wp )d\wp \right\} ,\\&e_q=\left[ 0_{n\times (q-1)n}\ \amalg _n\ 0_{n\times (6-q)n}\right] , q=1,2,\ldots ,6. \end{aligned}$$

Theorem 2

For given positive scalars \(a, d, \ell , \rho , \gamma\), if there exist symmetric positive definite matrices \({\mathscr {P}}>0, {\mathscr {Q}}_1>0, {\mathscr {Z}}_1>0, {\begin{bmatrix} {\mathscr {Z}}_1 &{} {\mathscr {Z}}_2 \\ *&{} {\mathscr {Z}}_3 \end{bmatrix} }>0, \mho _1>0\), matrices \({\mathscr {Y}}, {\mathscr {Y}}_1, M_1, M_2\), and diagonal matrices \(R_0=diag\big \{R_{01}, R_{02},\ldots , R_{0{\mathscr {N}}}\big \},\) \(R_1=diag\left\{ R_{11},R_{12},\ldots ,R_{1{\mathscr {N}}}\right\}\) such that the following LMIs hold for any \(d_f\in (0, d],\)

$$\begin{aligned} \qquad&-{\begin{bmatrix} {\mathscr {P}}+d\dfrac{{\mathscr {Y}}+{\mathscr {Y}}^T}{2}+e^{-2ad}\mho _1 &{} \Phi _1 -e^{-2ad}\mho _1\\ *&{} \Phi _2+e^{-2ad}\mho _1 \end{bmatrix} }<0, \end{aligned}$$
(19)
$$\begin{aligned}&\Delta _1(d_f)= {\begin{bmatrix} \Xi _1+d_f\Xi _2 &{} \sqrt{2}e^{-a\ell }e_1^T{\mathscr {Z}}_2-\sqrt{2}e^{-a\ell }e_4^T{\mathscr {Z}}_2 \\ *&{} -{\mathscr {Z}}_3 \end{bmatrix} } <0, \end{aligned}$$
(20)
$$\begin{aligned}&\Delta _2(d_f)= {\begin{bmatrix} \Xi _1 &{} \sqrt{d_f}e^{-ad}\Pi _2^TM_1 &{} \sqrt{d_f}e^{-ad}\Pi _2^TM_2&{} \sqrt{2}e^{-a\ell }e_1^T{\mathscr {Z}}_2-\sqrt{2}e^{-a\ell }e_4^T{\mathscr {Z}}_2 \\ *&{} -\mho _1 &{} 0 &{} 0\\ *&{} *&{} -3\mho _1 &{} 0\\ *&{} *&{} *&{} -{\mathscr {Z}}_3 \end{bmatrix} } <0,&\end{aligned}$$
(21)

where

$$\begin{aligned}&\Xi _1=Sym\biggl \{ e_1^T{\mathscr {P}}e_5+ae_1^T{\mathscr {P}}e_1+e_1^T{\mathscr {Q}}_1e_1-e^{-2a\ell }e_4^T{\mathscr {Q}}_1e_4+\ell ^2e_5^T{\mathscr {Z}}_1e_5-\dfrac{1}{2} \Phi _5\\&\qquad \ +e^{-2ad}\Pi _2^TM_1\Pi _3+e^{-2ad}\Pi _2^TM_2\Pi _4+ \Phi _4 -\rho {\Pi _{5}^T\Phi _3 \Pi _{5} }-e_1^TR_0e_5+e_1^TR_0e_3 \\&\qquad \ +e_1^TR_0\Pi _{1}+e_1^TR_1 e_2-\gamma e_{5}^TR_0e_5+\gamma e_{5}^TR_0e_3+\gamma e_{5}^TR_0\Pi _{1} +\gamma e_{5}^TR_1e_2\biggl \} ,\\&\Xi _2=Sym\biggl \{e_5^T\mho _1e_5+ a\Phi _5 +\dfrac{1}{2}e_1^T{\mathscr {Y}}e_5+\dfrac{1}{2}e_5^T{\mathscr {Y}}e_1-e_5^T{\mathscr {Y}}e_2+e_5^T{\mathscr {Y}}_1e_2 \biggl \}, \\&\Pi _1={c}({\mathscr {G}}\otimes {\mathscr {A}})e_4, \Pi _{2}=[e_1^T\ \ e_2^T\ \ e_6^T]^T, \\&\Pi _{3}=(\flat _1-\flat _2)\Pi _2, \Pi _{4}=(2\flat _3-\flat _1-\flat _2)\Pi _2, \Pi _5=[e_1^T\ \ e_3^T]^T, \\&\Phi _1=-d{\mathscr {Y}}+d{\mathscr {Y}}_1, \\&\Phi _2=-d{\mathscr {Y}}_1-d{\mathscr {Y}}_1^T+d\frac{{\mathscr {Y}}+{\mathscr {Y}}^T}{2},\\&\Phi _3= {\begin{bmatrix} \dfrac{\amalg \otimes ({S_1^T}{S_2}+{S_2^T}{S_1})}{2} &{} \dfrac{-\amalg \otimes ({S_1^T}+{S_2^T})}{2}\\ *&{}\amalg \end{bmatrix} },\\&\Phi _4 =-{}e^{-2a\ell }e_1^T{\mathscr {Z}}_1e_1+\ell ^2e^{-2a\ell }e_1^T{\mathscr {Z}}_3e_1+2\ell e^{-2a\ell }e_1^T {\mathscr {Z}}_2e_1 +2e^{-2a\ell }e_1^T{\mathscr {Z}}_1e_4\\&\qquad \ -2\ell e^{-2a\ell }e_4^T{\mathscr {Z}}_2e_1-e^{-2a\ell }e_4^T{\mathscr {Z}}_1e_4,\\&\Phi _5=e_1^T{\mathscr {Y}}e_1-2e_1^T{\mathscr {Y}}e_2+2e_1^T{\mathscr {Y}}_1e_2 -2e_2^T{\mathscr {Y}}_1e_2+e_2^T{\mathscr {Y}}e_2, \end{aligned}$$

then FSs (1) and LS (3) achieve exponential synchronization, i.e., ES (6) is exponentially stable, and the controller gain matrix is given by \(\daleth =R_0^{-1}R_1\).

Proof

Consider the following LKF for ES (6):

$$\begin{aligned}&{\mathscr {K}}({\hat{t}})=\sum _{p=1}^{5}{\mathscr {K}}_p({\hat{t}}), {\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1}),&\end{aligned}$$
(22)

in which

$$\begin{aligned}&{\mathscr {K}}_1({\hat{t}})=e^{2a{\hat{t}}}{\hat{e}}^T({\hat{t}}){\mathscr {P}}{\hat{e}}({\hat{t}}),\\&{\mathscr {K}}_2({\hat{t}})=2\int \limits _{{\hat{t}}-\ell }^{{\hat{t}}}e^{2a\wp }{\hat{e}}^T(\wp ){\mathscr {Q}}_1{\hat{e}}(\wp )d\wp ,\\&{\mathscr {K}}_3({\hat{t}})=2\ell \int \limits _{-\ell }^{0}\int \limits _{{\hat{t}}+\varkappa }^{{\hat{t}}}e^{2a\wp }\dot{{\hat{e}}}^T(\wp ){\mathscr {Z}}_1\dot{{\hat{e}}}(\wp )d\wp d\varkappa ,\\&{\mathscr {K}}_4({\hat{t}})=({\hat{t}}_{f+1}-{\hat{t}})\int \limits _{{\hat{t}}_f}^{{\hat{t}}}e^{2a\wp } \dot{{\hat{e}}}^T(\wp )\mho _1\dot{{\hat{e}}}(\wp )d\wp ,\\&{\mathscr {K}}_5({\hat{t}})=({\hat{t}}_{f+1}-{\hat{t}})e^{2a{\hat{t}}}\eta _2^T {\mathcal {W}} \eta _2,\\&\quad \ {\mathcal {W}}= {\begin{bmatrix} \frac{{\mathscr {Y}}+{\mathscr {Y}}^T}{2} &{} -{\mathscr {Y}}+{\mathscr {Y}}_1 \\ *&{} -{\mathscr {Y}}_1-{\mathscr {Y}}_1^T+\frac{{\mathscr {Y}}+{\mathscr {Y}}^T}{2} \end{bmatrix}, \eta _2= \begin{bmatrix} {\hat{e}}({\hat{t}}) \\ {\hat{e}}({\hat{t}}_f) \end{bmatrix} }. \end{aligned}$$
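The looped term \({\mathscr {K}}_4({\hat{t}})\) is constructed so that it vanishes at both sampling instants: at \({\hat{t}}={\hat{t}}_f\) the integral is empty, and at \({\hat{t}}={\hat{t}}_{f+1}\) the prefactor is zero. A minimal numerical sketch with hypothetical scalar data (the decay rate, weight, and error derivative below are illustrative, not from the paper) confirms this:

```python
import numpy as np

# K4(t) = (t_{f+1} - t) * int_{t_f}^{t} e^{2 a s} edot(s)^T U1 edot(s) ds
# Hypothetical scalar data: a, U1, edot, and the interval are illustrative.
a, U1 = 0.1, 1.0
tf, tf1 = 0.0, 0.5

def edot(s):
    return np.cos(3.0 * s)

def K4(t, n=2001):
    s = np.linspace(tf, t, n)
    f = np.exp(2 * a * s) * edot(s) ** 2 * U1
    integral = np.sum((f[:-1] + f[1:]) / 2.0 * np.diff(s))  # trapezoidal rule
    return (tf1 - t) * integral

assert abs(K4(tf)) < 1e-12   # empty integral at the left endpoint
assert abs(K4(tf1)) < 1e-12  # vanishing prefactor at the right endpoint
assert K4(0.25) > 0.0        # strictly positive inside the interval
```

This vanishing-at-endpoints property is what keeps \({\mathscr {K}}({\hat{t}})\) continuous across sampling instants, as discussed after the proof of (25).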

Recall that the matrices \({\mathscr {Q}}_1\) and \({\mathscr {Z}}_1\) are required to be positive definite. Then,

$$\begin{aligned}&{\mathscr {K}}({\hat{t}})\ge {\mathscr {K}}_1({\hat{t}})+{\mathscr {K}}_4({\hat{t}})+{\mathscr {K}}_5({\hat{t}}) \nonumber \\&\quad \ge e^{2a{\hat{t}}}{\hat{e}}({\hat{t}})^T{\mathscr {P}}{\hat{e}}({\hat{t}}) +e^{2a{\hat{t}}}({\hat{t}}_{f+1}-{\hat{t}}) {\begin{bmatrix} {\hat{e}}({\hat{t}}) \\ {\hat{e}}({\hat{t}}_f) \end{bmatrix}^T \left( \frac{1}{d}{\mathcal {T}}+{\mathcal {W}}\right) \begin{bmatrix} {\hat{e}}({\hat{t}}) \\ {\hat{e}}({\hat{t}}_f) \end{bmatrix} } \nonumber \\&\qquad = e^{2a{\hat{t}}} {\begin{bmatrix} {\hat{e}}({\hat{t}}) \\ {\hat{e}}({\hat{t}}_f) \end{bmatrix}^T {\hat{\varOmega }} \begin{bmatrix} {\hat{e}}({\hat{t}}) \\ {\hat{e}}({\hat{t}}_f) \end{bmatrix} }, \end{aligned}$$
(23)

in which the Jensen inequality is utilized, with the notations

$$\begin{aligned}&{\mathcal {T}}=e^{-2ad} {\begin{bmatrix} \mho _1 &{} -\mho _1 \\ *&{} \mho _1 \end{bmatrix},} \nonumber \\&\qquad {\hat{\varOmega }}=\dfrac{{\hat{t}}_{f+1}-{\hat{t}}}{d_f}({\hat{\Upsilon }} +\frac{d_f}{d}{\mathcal {T}}+d_f{\mathcal {W}})+\dfrac{{\hat{t}}-{\hat{t}}_f}{d_f}{\hat{\Gamma }},\ \ \ {\hat{\Upsilon }}= {\begin{bmatrix} {\mathscr {P}} &{} 0 \\ *&{} 0 \end{bmatrix}}. \end{aligned}$$
(24)

Perform the following decomposition:

$$\begin{aligned}&{\hat{\Upsilon }}+\frac{d_f}{d}{\mathcal {T}}+d_f{\mathcal {W}}=\frac{d_f}{d}({\hat{\Upsilon }} +{\mathcal {T}}+d{\mathcal {W}})+\frac{d-d_f}{d} {\hat{\Upsilon }}. \end{aligned}$$

By the Schur complement, (19) implies

$$\begin{aligned}&{\hat{\Upsilon }}+{\mathcal {T}}+d{\mathcal {W}}> 0. \end{aligned}$$

Since \({\mathscr {P}}>0\) and (19) holds, there exists a sufficiently small scalar \(\beta >0\) such that \({\hat{\Upsilon }}>\beta diag\left\{ \amalg ,0\right\}\) and \({\hat{\Upsilon }}+{\mathcal {T}}+d{\mathcal {W}}>\beta diag\left\{ \amalg ,\amalg \right\}\). In accordance with Eqs. (23) and (24),

$$\begin{aligned}&\qquad {\mathscr {K}}({\hat{t}})\ge \beta e^{2a{\hat{t}}}||{\hat{e}}({\hat{t}})||^2. \end{aligned}$$
(25)

\(\square\)

Note that the Lyapunov functional (22), and in particular the key terms \({\mathscr {K}}_4({\hat{t}})\) and \({\mathscr {K}}_5({\hat{t}})\), was inspired by the methods described in [10, 16], and [17]. Both \({\mathscr {K}}_4({\hat{t}})\) and \({\mathscr {K}}_5({\hat{t}})\) vanish at \({\hat{t}}={\hat{t}}_f \ \textrm{or}\ {\hat{t}}_{f+1}\), so \({\mathscr {K}}({\hat{t}})\) is continuous because \(\lim _{{\hat{t}}\rightarrow {\hat{t}}_f}{\mathscr {K}}({\hat{t}})={\mathscr {K}}({\hat{t}}_f)\). It is therefore appropriate to choose the Lyapunov functional \({\mathscr {K}}({\hat{t}})\) described in (22) for ES (6). For \({\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1})\), differentiating \({\mathscr {K}}({\hat{t}})\) along the trajectory of ES (6) yields:

$$\begin{aligned} \dot{{\mathscr {K}}}_1({\hat{t}})&=\ 2e^{2a{\hat{t}}} \eta ^T({\hat{t}})(e_1^T{\mathscr {P}}e_5+ae_1^T{\mathscr {P}}e_1) \eta ({\hat{t}}),\\ \dot{{\mathscr {K}}}_2({\hat{t}})&=\ 2e^{2a{\hat{t}}} \eta ^T({\hat{t}})(e_1^T{\mathscr {Q}}_1e_1-e^{-2a\ell }e_4^T{\mathscr {Q}}_1e_4) \eta ({\hat{t}}),\\ \dot{{\mathscr {K}}}_3({\hat{t}})&=\ -2\ell \int \limits _{{\hat{t}}-\ell }^{{\hat{t}}}e^{2a\wp }\dot{{\hat{e}}}^T(\wp ){\mathscr {Z}}_1\dot{{\hat{e}}}(\wp )d\wp + 2\ell ^2e^{2a{\hat{t}}}\dot{{\hat{e}}}^T({\hat{t}}){\mathscr {Z}}_1\dot{{\hat{e}}}({\hat{t}})\\&\le \ -2e^{2a{\hat{t}}}e^{-2a\ell }\ell \int \limits _{{\hat{t}}-\ell }^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp ){\mathscr {Z}}_1\dot{{\hat{e}}}(\wp )d\wp +2e^{2a{\hat{t}}} \eta ^T({\hat{t}})\ell ^2e_5^T{\mathscr {Z}}_1e_5 \eta ({\hat{t}}),\\ \dot{{\mathscr {K}}}_4({\hat{t}})&={} -\int \limits _{{\hat{t}}_f}^{{\hat{t}}}e^{2a\wp }\dot{{\hat{e}}}^T(\wp )\mho _1\dot{{\hat{e}}}(\wp )d\wp +({\hat{t}}_{f+1} -{\hat{t}})e^{2a{\hat{t}}}\dot{{\hat{e}}}^T({\hat{t}})\mho _1\dot{{\hat{e}}}({\hat{t}}) \\&\le -e^{2a{\hat{t}}}e^{-2ad}\int \limits _{{\hat{t}}_f}^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp )\mho _1\dot{{\hat{e}}}(\wp )d\wp +({\hat{t}}_{f+1}-{\hat{t}})e^{2a{\hat{t}}}\eta ^T({\hat{t}})e_5^T\mho _1e_5\eta ({\hat{t}}),\\ \dot{{\mathscr {K}}}_5({\hat{t}})&={} -e^{2a{\hat{t}}}\eta ^T({\hat{t}}) {\begin{bmatrix} e_1\\ e_2 \end{bmatrix}^T {\mathcal {W}} \begin{bmatrix} e_1\\ e_2 \end{bmatrix} } \eta ({\hat{t}})\\&\quad \ +e^{2a{\hat{t}}}({\hat{t}}_{f+1}-{\hat{t}}) \eta ^T({\hat{t}})2a {\begin{bmatrix} e_1\\ e_2 \end{bmatrix}^T {\mathcal {W}} \begin{bmatrix} e_1\\ e_2 \end{bmatrix} }\eta ({\hat{t}})\\&\quad +e^{2a{\hat{t}}}({\hat{t}}_{f+1}-{\hat{t}})\eta ^T({\hat{t}})e_1^T({\mathscr {Y}}+{\mathscr {Y}}^T)e_5\eta ({\hat{t}})\\&\quad \ +e^{2a{\hat{t}}}({\hat{t}}_{f+1}-{\hat{t}})\eta ^T({\hat{t}})2e_5^T(-{\mathscr {Y}}+{\mathscr {Y}}_1)e_2\eta ({\hat{t}}). \end{aligned}$$

According to Lemmas 3 and 5,

$$\begin{aligned}&-\int \limits _{{\hat{t}}_f}^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp )\mho _1\dot{{\hat{e}}}(\wp )d\wp \le \eta ^T({\hat{t}})\Pi _{2}^T\biggl [({\hat{t}}-{\hat{t}}_f)\biggl (M_1\mho _1^{-1}M_1^T+\frac{1}{3}M_2\mho _1^{-1}M_2^T\biggl )\\&\qquad +Sym\big \{ M_1(\flat _1-\flat _2)+M_2(2\flat _3-\flat _1-\flat _2)\big \}\biggl ]\Pi _{2}\eta ({\hat{t}}), \\&\qquad -2e^{2a{\hat{t}}}e^{-2a\ell }\ell \int \limits _{{\hat{t}}-\ell }^{{\hat{t}}}\dot{{\hat{e}}}^T(\wp ){\mathscr {Z}}_1 \dot{{\hat{e}}}(\wp )d\wp \le 2e^{2a{\hat{t}}}e^{-2a\ell }\eta ^T({\hat{t}}) {\begin{bmatrix} e_1\\ e_4 \end{bmatrix}^T \varUpsilon \begin{bmatrix} e_1\\ e_4 \end{bmatrix} }\eta ({\hat{t}}), \end{aligned}$$

where

$$\begin{aligned}&\varUpsilon _{11}={\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^T-{\mathscr {Z}}_1+\ell ^2{\mathscr {Z}}_3+2\ell {\mathscr {Z}}_{2},\\&\varUpsilon _{12}=-{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^T+{\mathscr {Z}}_1-\ell {\mathscr {Z}}_{2}^T,\\&\varUpsilon _{22}={\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^T-{\mathscr {Z}}_1. \end{aligned}$$

Additionally, by Assumption 1, the following inequality holds for any scalar \(\rho >0\):

$$\begin{aligned}&0\le -2\rho e^{2a{\hat{t}}} \eta ^T({\hat{t}}) {\begin{bmatrix} e_1 \\ e_3 \end{bmatrix}^T \begin{bmatrix} \dfrac{\amalg \otimes ({S_1^T}{S_2}+{S_2^T}{S_1})}{2} &{} \dfrac{-\amalg \otimes ({S_1^T}+{S_2^T})}{2}\\ *&{}\amalg \end{bmatrix}^T \begin{bmatrix} e_1 \\ e_3 \end{bmatrix} }\eta ({\hat{t}}). \end{aligned}$$
(26)

According to ES (6),

$$\begin{aligned}&0= 2e^{2a{\hat{t}}}\left[ {\hat{e}}^T({\hat{t}})R_0+\gamma \dot{{\hat{e}}}^T({\hat{t}})R_0\right] \left[ -\dot{{\hat{e}}}({\hat{t}})+\beth ({\hat{e}}({\hat{t}}))+{c}({\mathscr {G}}\otimes {\mathscr {A}}){\hat{e}}({\hat{t}}-\ell )+\daleth {\hat{e}}{({\hat{t}}_f)} \right] . \end{aligned}$$
(27)

Defining \(R_0\daleth =R_1\) and adding the right-hand sides of (26) and (27) to \(\dot{{\mathscr {K}}}({\hat{t}})\), we obtain, for \({\hat{t}}\in [{\hat{t}}_f, {\hat{t}}_{f+1})\),

$$\begin{aligned}&\dot{{\mathscr {K}}}({\hat{t}}) \le e^{2a{\hat{t}}}\eta ^T({\hat{t}})\biggl [\frac{{\hat{t}}-{\hat{t}}_f}{d_f}{\hat{\Delta }}_2(d_f) +\frac{{\hat{t}}_{f+1}-{\hat{t}}}{d_f}\hat{\Delta }_1(d_f)\biggl ]\eta ({\hat{t}}), \end{aligned}$$

where \({\hat{\Delta }}_1(d_f)\) and \({\hat{\Delta }}_2(d_f)\) are given below:

$$\begin{aligned}&{\hat{\Delta }}_1(d_f)=\Xi _1+d_f\Xi _2+2e^{-2a\ell }e_1^T{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^Te_1 -4e^{-2a\ell }e_1^T{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^Te_4 +2e^{-2a\ell }e_4^T{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^Te_4,\\&{\hat{\Delta }}_2(d_f)=\Xi _1+d_fe^{-2ad}\Pi _2^TM_1\mho _1^{-1}M_1^T\Pi _2 +\dfrac{1}{3}d_fe^{-2ad}\Pi _2^TM_2\mho _1^{-1}M_2^T\Pi _2\\&\qquad +2e^{-2a\ell }e_1^T{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^Te_1 -4e^{-2a\ell }e_1^T{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^Te_4 +2e^{-2a\ell }e_4^T{\mathscr {Z}}_2{\mathscr {Z}}_3^{-1}{\mathscr {Z}}_2^Te_4. \end{aligned}$$

Meanwhile, by the Schur complement, (20) and (21) imply \({\hat{\Delta }}_1(d_f)<0\) and \({\hat{\Delta }}_2(d_f)<0.\) Then,

$$\begin{aligned} \dot{{\mathscr {K}}}({\hat{t}})\le 0, {\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1}). \end{aligned}$$

Therefore,

$$\begin{aligned}&{\mathscr {K}}(0)\ge {\mathscr {K}}({\hat{t}}_1)\ge \cdot \cdot \cdot \ge {\mathscr {K}}({\hat{t}}_{f-1}) \ge {\mathscr {K}}({\hat{t}}_f)\ge {\mathscr {K}}({\hat{t}}). \end{aligned}$$
(28)

Notice \({\mathscr {K}}_4(0)=0\) and \({\mathscr {K}}_5(0)=0\), indicating that

$$\begin{aligned}&{\mathscr {K}}(0)= \sum _{p=1}^{5}{\mathscr {K}}_p(0)\nonumber \\&\quad \le \lambda _{max}({\mathscr {P}})||{\hat{e}}(0)||^2+ 2\ell \lambda _{max}({\mathscr {Q}}_1)\sup _{-\ell \le \curlyvee \le 0} \left\{ ||{\hat{e}}(\curlyvee )||^2\right\} \nonumber \\&\qquad + 2\ell ^3\lambda _{max}({\mathscr {Z}}_1)\sup _{-\ell \le \curlyvee \le 0} \left\{ ||\dot{{\hat{e}}}(\curlyvee )||^2\right\} \nonumber \\&\quad \le a_0\biggl (\sup _{-\ell \le \curlyvee \le 0}\left\{ ||{\hat{e}}(\curlyvee )||, ||\dot{{\hat{e}}}(\curlyvee )||\right\} \biggl )^2, \end{aligned}$$
(29)

where

$$\begin{aligned} a_0=\lambda _{max}({\mathscr {P}})+ 2\ell \lambda _{max}({\mathscr {Q}}_1)+ 2\ell ^3\lambda _{max}({\mathscr {Z}}_1). \end{aligned}$$

Combining (25), (28), and (29), we have

$$\begin{aligned} \beta e^{2a{\hat{t}}}||{\hat{e}}({\hat{t}})||^2\le a_0\biggl (\sup _{-\ell \le \curlyvee \le 0}\left\{ ||{\hat{e}}(\curlyvee )||,||\dot{{\hat{e}}}(\curlyvee )||\right\} \biggl )^2, \end{aligned}$$

which implies

$$\begin{aligned} ||{\hat{e}}({\hat{t}})||\le \sqrt{\frac{a_0}{\beta }}e^{-a{\hat{t}}} \sup _{-\ell \le \curlyvee \le 0}\left\{ ||{\hat{e}}(\curlyvee )||,||\dot{{\hat{e}}}(\curlyvee )||\right\} . \end{aligned}$$
(30)

From (30), it follows that ES (6) is exponentially stable with decay rate \(a\).

Remark 5

We emphasize that positive definiteness is verified by treating \({\mathscr {K}}_1\), \({\mathscr {K}}_2\), \({\mathscr {K}}_3\), \({\mathscr {K}}_4\), and \({\mathscr {K}}_5\) as a whole, in contrast to the existing results in [3, 8, 16], which require all matrices with a symmetric structure to be positive definite. As a consequence, the constraints on the matrices in the LKF are relaxed, which is a crucial factor in lowering conservatism. Additionally, building on the sampling-time-dependent Lyapunov functional method of [10], the Lyapunov functional (22) introduces the terms \({\mathscr {K}}_4({\hat{t}})\) and \({\mathscr {K}}_5({\hat{t}})\), which depend on \({\hat{t}}_f \ \textrm{and}\ {\hat{t}}_{f+1}\) and effectively lower the conservatism of the obtained results by making full use of the available sampling-interval information.

Setting the parameter \(a\) of Theorem 2 to 0 yields the following corollary:

Corollary 2

For given positive scalars \(d, \ell , \rho , \gamma\), if there exist symmetric positive definite matrices \({\mathscr {P}}>0, {\mathscr {Q}}_1>0, {\mathscr {Z}}_1>0, {\begin{bmatrix} {\mathscr {Z}}_1 &{} {\mathscr {Z}}_2 \\ *&{} {\mathscr {Z}}_3 \end{bmatrix} }>0, \mho _1>0\), matrices \({\mathscr {Y}}, {\mathscr {Y}}_1, M_1, M_2\), and diagonal matrices \(R_0=diag\big \{R_{01}, R_{02},\ldots , R_{0{\mathscr {N}}}\big \},\) \(R_1=diag\left\{ R_{11},R_{12},\ldots ,R_{1{\mathscr {N}}}\right\}\) such that the following LMIs hold for any \(d_f\in (0, d],\)

$$\begin{aligned}&-{\begin{bmatrix} {\mathscr {P}}+d\dfrac{{\mathscr {Y}}+{\mathscr {Y}}^T}{2}+\mho _1 &{} \Phi _1 -\mho _1\\ *&{} \Phi _2+\mho _1 \end{bmatrix} }<0, \end{aligned}$$
(31)
$$\begin{aligned}&\Delta _1(d_f)= {\begin{bmatrix} \Xi _1+d_f\Xi _2 &{} \sqrt{2}e_1^T{\mathscr {Z}}_2-\sqrt{2}e_4^T{\mathscr {Z}}_2 \\ *&{} -{\mathscr {Z}}_3 \end{bmatrix} } <0, \end{aligned}$$
(32)
$$\begin{aligned}&\Delta _2(d_f)= {\begin{bmatrix} \Xi _1 &{} \sqrt{d_f}\Pi _2^TM_1 &{} \sqrt{d_f}\Pi _2^TM_2&{} \sqrt{2}e_1^T{\mathscr {Z}}_2-\sqrt{2}e_4^T{\mathscr {Z}}_2 \\ *&{} -\mho _1 &{} 0 &{} 0\\ *&{} *&{} -3\mho _1 &{} 0\\ *&{} *&{} *&{} -{\mathscr {Z}}_3 \end{bmatrix}} <0, \end{aligned}$$
(33)

where

$$\begin{aligned}&\Xi _1=Sym\biggl \{e_1^T{\mathscr {P}}e_5+e_1^T{\mathscr {Q}}_1e_1-e_4^T{\mathscr {Q}}_1e_4 +\ell ^2e_5^T{\mathscr {Z}}_1e_5-\dfrac{1}{2} \Phi _5\\&\qquad \ +\Pi _2^TM_1\Pi _3+\Pi _2^TM_2\Pi _4+ \Phi _4 -\rho {\Pi _{5}^T\Phi _3 \Pi _{5} }-e_1^TR_0e_5+e_1^TR_0e_3 \\&\qquad \ +e_1^TR_0\Pi _{1}+e_1^TR_1 e_2-\gamma e_{5}^TR_0e_5+\gamma e_{5}^TR_0e_3+\gamma e_{5}^TR_0\Pi _{1} +\gamma e_{5}^TR_1e_2\biggl \} ,\\&\Xi _2=Sym\biggl \{e_5^T\mho _1e_5+ \dfrac{1}{2}e_1^T{\mathscr {Y}}e_5+\dfrac{1}{2}e_5^T{\mathscr {Y}}e_1-e_5^T{\mathscr {Y}}e_2 +e_5^T{\mathscr {Y}}_1e_2 \biggl \}, \\&\Pi _1={c}({\mathscr {G}}\otimes {\mathscr {A}})e_4, \Pi _{2}=[e_1^T\ \ e_2^T\ \ e_6^T]^T, \\&\Pi _{3}=(\flat _1-\flat _2)\Pi _2, \Pi _{4}=(2\flat _3-\flat _1-\flat _2)\Pi _2, \Pi _5=[e_1^T\ \ e_3^T]^T, \\&\Phi _1=-d{\mathscr {Y}}+d{\mathscr {Y}}_1, \\&\Phi _2=-d{\mathscr {Y}}_1-d{\mathscr {Y}}_1^T+d\frac{{\mathscr {Y}}+{\mathscr {Y}}^T}{2}, \\&\Phi _3= {\begin{bmatrix} \dfrac{\amalg \otimes ({S_1^T}{S_2}+{S_2^T}{S_1})}{2} &{} \dfrac{-\amalg \otimes ({S_1^T}+{S_2^T})}{2}\\ *&{}\amalg \end{bmatrix} }, \\&\Phi _4 =-{}e_1^T{\mathscr {Z}}_1e_1+\ell ^2e_1^T{\mathscr {Z}}_3e_1+2\ell e_1^T {\mathscr {Z}}_2e_1 +2e_1^T{\mathscr {Z}}_1e_4 \\&\qquad \ -2\ell e_4^T{\mathscr {Z}}_2e_1-e_4^T{\mathscr {Z}}_1e_4, \\&\Phi _5=e_1^T{\mathscr {Y}}e_1-2e_1^T{\mathscr {Y}}e_2+2e_1^T{\mathscr {Y}}_1e_2 -2e_2^T{\mathscr {Y}}_1e_2+e_2^T{\mathscr {Y}}e_2, \end{aligned}$$

then FSs (1) and LS (3) achieve exponential synchronization with a sufficiently small decay rate, i.e., ES (6) is exponentially stable with some sufficiently small decay rate, and the controller gain matrix is given by \(\daleth =R_0^{-1}R_1\).

Remark 6

Owing to the prevalence of digital feedback, sampled-data control is more suitable for practical applications than continuous control because it uses only the state vector at discrete times. Furthermore, in contrast to memory sampled-data controllers, the memoryless sampled-data controllers adopted in this article do not need the state value at the previous sampling instant during control, thus reducing the computational burden and load. The problem of sampled-data synchronization control for delayed complex networks is solved, and sufficient conditions for the existence of the sampled-data controller are given; these conditions are expressed as LMIs and can be solved easily with standard numerical software. It should also be noted that the LMIs given in Theorem 2 depend not only on the maximum sampling interval \(d\) but also on the decay rate \(a\).
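To make the memoryless zero-order-hold mechanism concrete, the sketch below simulates a hypothetical scalar error system \(\dot{e}=0.5e+u\) under the sampled-data law \(u({\hat{t}})=Ke({\hat{t}}_f)\) for \({\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1})\); the plant, gain \(K\), and sampling period are illustrative choices, not values from the paper:

```python
import numpy as np

# Memoryless sampled-data (zero-order-hold) control of a scalar unstable plant:
#   edot(t) = 0.5 * e(t) + u(t),  u(t) = K * e(t_f) for t in [t_f, t_{f+1}).
# All numbers below are illustrative, not from the paper.
K = -2.0            # hypothetical controller gain
d = 0.1             # sampling period
dt = 1e-3           # Euler integration step
steps_per_sample = int(round(d / dt))

e = 1.0             # initial synchronization error
e_held = e          # value latched by the zero-order hold
for k in range(int(5.0 / dt)):
    if k % steps_per_sample == 0:  # sampling instant t_f: refresh the held state
        e_held = e
    e += dt * (0.5 * e + K * e_held)

assert abs(e) < 0.05  # the sampled-data loop drives the error toward zero
```

Note that the controller only reads the state at the instants \({\hat{t}}_f\) and holds it constant between samples, which is exactly the memoryless structure discussed above.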

5 Numerical examples

Example 1

Consider FSs (1) with three follower systems. The inner-coupling matrix and the outer-coupling matrix are given, respectively, as

$$\begin{aligned} {\mathscr {A}}=&\begin{bmatrix} 1 &{} 0\\ 0 &{} 1 \end{bmatrix},\ \ {\mathscr {G}}= \begin{bmatrix} -1 &{} 0 &{} 1\\ 0 &{} -1 &{} 1\\ 1 &{} 1 &{} -2 \end{bmatrix}. \end{aligned}$$

Take the non-linear function \(\pounds\) to be

$$\begin{aligned} \pounds (e_p({\hat{t}}))=&\begin{bmatrix} -0.5e_{p1}({\hat{t}})+\tanh (0.2e_{p1}({\hat{t}}))+0.2e_{p2}({\hat{t}}) \\ 0.95e_{p2}({\hat{t}})-\tanh (0.75e_{p2}({\hat{t}})) \end{bmatrix}. \end{aligned}$$

One can verify that \(\pounds\) satisfies condition (2) with

$$\begin{aligned} \qquad {S_1}=&\begin{bmatrix} -0.5 &{} 0.2\\ 0 &{} 0.95 \end{bmatrix},\ \ {S_2}= \begin{bmatrix} -0.3 &{} 0.2 \\ 0 &{} 0.2 \\ \end{bmatrix}. \end{aligned}$$
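This claim can be spot-checked numerically: the slope of each nonlinear component of \(\pounds\) should stay between the corresponding diagonal entries of \(S_1\) and \(S_2\), namely \(-0.5+0.2\,\textrm{sech}^2(0.2x)\in [-0.5,-0.3]\) and \(0.95-0.75\,\textrm{sech}^2(0.75x)\in [0.2,0.95]\). A sketch, assuming condition (2) is the usual sector-bound condition on \(\pounds\):

```python
import numpy as np

# Slopes of the two nonlinear components of pounds(e_p):
#   d/de1 [-0.5*e1 + tanh(0.2*e1)]  = -0.5 + 0.2*sech(0.2*e1)^2   in [-0.5, -0.3]
#   d/de2 [ 0.95*e2 - tanh(0.75*e2)] = 0.95 - 0.75*sech(0.75*e2)^2 in [ 0.2,  0.95]
x = np.linspace(-20.0, 20.0, 4001)
slope1 = -0.5 + 0.2 / np.cosh(0.2 * x) ** 2
slope2 = 0.95 - 0.75 / np.cosh(0.75 * x) ** 2

eps = 1e-12
assert np.all(slope1 >= -0.5 - eps) and np.all(slope1 <= -0.3 + eps)
assert np.all(slope2 >= 0.2 - eps) and np.all(slope2 <= 0.95 + eps)
```

The extreme slopes are attained as \(|x|\rightarrow \infty\) (where \(\textrm{sech}^2\rightarrow 0\)) and at \(x=0\) (where \(\textrm{sech}^2=1\)), matching the diagonal entries of \(S_1\) and \(S_2\).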

Pick \(\ell =0.25\), \(\rho =1\), and \(\gamma =1\). The maximum allowable sampling period \(d\) is listed in Table 1 for various coupling strengths \(c\). Simulation results are presented to demonstrate the effectiveness of the proposed method. Taking \({c}=0.5\) and \(d=1.8850\), with the other parameters unchanged, the Matlab LMI toolbox is applied to obtain feasible solutions to the LMIs in Theorem 1; some of the resulting matrices (not all of the derived matrices can be shown here for space reasons) are as follows:

$$\begin{aligned} \qquad Z_{01}=&\begin{bmatrix} 1.1145 &{} -0.1685\\ *&{} 0.5437 \end{bmatrix},\ \ Z_{02}= \begin{bmatrix} 1.1145 &{} -0.1685\\ *&{} 0.5437 \end{bmatrix},\ \ Z_{03}= \begin{bmatrix} 1.0884 &{} -0.1648 \\ *&{} 0.5432 \end{bmatrix},\\ \qquad Z_{11}=&\begin{bmatrix} -2.9011 &{} 0.3718\\ *&{} -1.0621 \end{bmatrix},\ \ Z_{12}= \begin{bmatrix} -2.9011 &{} 0.3718\\ *&{} -1.0621 \end{bmatrix},\ \ Z_{13}= \begin{bmatrix} -2.6249 &{} 0.3356 \\ *&{} -1.0571 \end{bmatrix}. \end{aligned}$$
Fig. 1 The state trajectories of the error system with the sampled-data control input \(u({\hat{t}})\)

Fig. 2 The sampled-data control input \(u({\hat{t}})\)

Using the matrices above, the following controller gains are derived from \(\daleth =Z_0^{-1}Z_1\):

$$\begin{aligned} \qquad \daleth _{1}=&\begin{bmatrix} -2.6225 &{} 0.0401\\ -0.1289 &{} -1.9410 \end{bmatrix},\ \ \daleth _{2}= \begin{bmatrix} -2.6225 &{} 0.0401\\ -0.1289 &{} -1.9410 \end{bmatrix},\ \ \daleth _{3}= \begin{bmatrix} -2.4298 &{} 0.0143 \\ -0.1193 &{} -1.9417 \end{bmatrix}. \end{aligned}$$
(34)
Table 1 The maximum allowable sampling period \(d\)

Given the initial conditions \(e_1(0)=[-4, 3]^T, e_2(0)=[2, -5]^T, e_3(0)=[3, 1]^T\), and \(\rightthreetimes (0)=[-1, 0]^T\), Figs. 1 and 2 show, respectively, the state trajectories of ES (6) and the control input \({u}_p({\hat{t}})\) under these controller gains. It can be observed from Figs. 1 and 2 that the control input satisfies \(u_p({\hat{t}})=\daleth _p {\hat{e}}_p({\hat{t}}_f), \ {\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1})\). Moreover, FSs (1) and LS (3) are clearly asymptotically synchronized, which illustrates the effectiveness of our approach.
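The controller gains in (34) can be reproduced directly from the printed matrices via \(\daleth _p=Z_{0p}^{-1}Z_{1p}\) (the \(*\) entries are filled in by symmetry); a short numerical check:

```python
import numpy as np

# Controller gains from Theorem 1: gain_p = Z0p^{-1} @ Z1p
# (matrix entries copied from the example; * entries filled in by symmetry).
Z01 = np.array([[1.1145, -0.1685], [-0.1685, 0.5437]])
Z11 = np.array([[-2.9011, 0.3718], [0.3718, -1.0621]])
Z03 = np.array([[1.0884, -0.1648], [-0.1648, 0.5432]])
Z13 = np.array([[-2.6249, 0.3356], [0.3356, -1.0571]])

gain1 = np.linalg.solve(Z01, Z11)   # solves Z01 @ gain1 = Z11
gain3 = np.linalg.solve(Z03, Z13)

assert np.allclose(gain1, [[-2.6225, 0.0401], [-0.1289, -1.9410]], atol=1e-3)
assert np.allclose(gain3, [[-2.4298, 0.0143], [-0.1193, -1.9417]], atol=1e-3)
```

(`Z_02 = Z_01` and `Z_12 = Z_11` in this example, so \(\daleth _2=\daleth _1\).)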

Example 2

Consider FSs (1) with three follower systems having the same inner-coupling matrix, outer-coupling matrix, and nonlinear function \(\pounds\) as in Example 1.

Take \(a=0.1\), \(\ell =0.25\), \(\rho =1\), and \(\gamma =0.5\). The maximum allowable sampling period \(d\) is listed in Table 2 for various coupling strengths \(c\). Simulation results are displayed to illustrate the validity of the proposed strategy. Taking \({c}=0.8\) and \(d=0.4990\), with all other values unchanged, the Matlab LMI toolbox is used to obtain feasible solutions to the LMIs in Theorem 2; some of the resulting matrices are as follows:

$$\begin{aligned} \qquad R_{01}=&\begin{bmatrix} 0.6075 &{} -0.0818\\ *&{} 0.5922 \end{bmatrix},\ \ R_{02}= \begin{bmatrix} 0.6075 &{} -0.0818\\ *&{} 0.5922 \end{bmatrix},\ \ R_{03}= \begin{bmatrix} 0.3696 &{} -0.0499 \\ *&{} 0.5393 \end{bmatrix}, \\ \qquad R_{11}=&\begin{bmatrix} -0.4036 &{} 0.0426\\ *&{} -0.7247 \end{bmatrix},\ \ R_{12}= \begin{bmatrix} -0.4036 &{} 0.0426\\ *&{} -0.7247 \end{bmatrix},\ \ R_{13}= \begin{bmatrix} -0.1959 &{} 0.0176 \\ *&{} -0.5100 \end{bmatrix}. \end{aligned}$$
Table 2 The maximum allowable sampling period \(d\) with \(a=0.1\)
Fig. 3 The state trajectories of the error system with \(c=0.8\) and \(a=0.1\)

Fig. 4 The sampled-data control input with \(c=0.8\) and \(a=0.1\)

Using the matrices above, the following controller gains are obtained from \(\daleth =R_0^{-1}R_1\):

$$\begin{aligned} \qquad \daleth _{1}=&\begin{bmatrix} -0.6671 &{} -0.0964\\ -0.0202 &{} -1.2371 \end{bmatrix},\ \ \daleth _{2}= \begin{bmatrix} -0.6671 &{} -0.0964\\ -0.0202 &{} -1.2371 \end{bmatrix},\ \ \daleth _{3}= \begin{bmatrix} -0.5323 &{} -0.0811 \\ -0.0166 &{} -0.9532 \end{bmatrix}. \end{aligned}$$

Under these controller gains, the state trajectories of ES (6) and the control input \({u}_p({\hat{t}})\) are presented in Figs. 3 and 4, respectively, where \(e_1(0)=[8, -9]^T, e_2(0)=[-6, 2]^T, e_3(0)=[6, 4]^T\), and \(\rightthreetimes (0)=[5, 6]^T\). Figs. 3 and 4 show that the control input satisfies \(u_p({\hat{t}})=\daleth _p {\hat{e}}_p({\hat{t}}_f), \ {\hat{t}}\in [{\hat{t}}_f,{\hat{t}}_{f+1})\). Additionally, FSs (1) and LS (3) are clearly exponentially synchronized, demonstrating the effectiveness of our strategy.
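As in Example 1, the gains can be checked against the printed matrices via \(\daleth _p=R_{0p}^{-1}R_{1p}\); for instance, for \(\daleth _1\):

```python
import numpy as np

# Controller gain from Theorem 2: gain_1 = R01^{-1} @ R11
# (entries copied from the example; * entries filled in by symmetry).
R01 = np.array([[0.6075, -0.0818], [-0.0818, 0.5922]])
R11 = np.array([[-0.4036, 0.0426], [0.0426, -0.7247]])

gain1 = np.linalg.solve(R01, R11)
assert np.allclose(gain1, [[-0.6671, -0.0964], [-0.0202, -1.2371]], atol=1e-3)
```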

6 Conclusions

The main contributions of this article are Lemmas 3 and 5, which better handle the derivative of the Lyapunov functional for sampled-data systems. Taking advantage of the MFMBIIs, the sampling-time-dependent Lyapunov functional method, and the convex combination approach, novel criteria are developed to ensure that the synchronization error system is asymptotically and exponentially stable, respectively. The effectiveness of the new techniques for sampled-data synchronization control is illustrated by two examples. Note that many practical control schemes cannot avoid actuator saturation, which can degrade dynamic behavior or even destabilize the system under investigation. In the future, we will concentrate on the sampled-data synchronization control of delayed complex networks subject to actuator saturation, and will attempt to extend the new sampling-control techniques designed in this paper to this topic.