1 Introduction

In the past decades, neural networks (NNs) have found extensive applications in many areas, such as pattern recognition, computer vision, speech synthesis, and artificial intelligence; see [1–3]. Such a wide range of applications has attracted considerable attention from many scholars to the dynamical behavior of these networks. Up to now, many significant works on NNs have been reported; see [4–9] and the references therein.

Synchronization, which means that the dynamical behaviors of coupled systems reach the same state, is a fundamental phenomenon in networks. At present, considerable attention has been devoted to the analysis of the synchronization of NNs, and some effective synchronization criteria for NNs have been established in the literature [10–15]. Via sliding mode control, the synchronization problem for complex-valued neural networks was addressed in [12]. Reference [14] elaborates the impulsive stabilization and impulsive synchronization of discrete-time delayed neural networks. By adopting a periodically intermittent control scheme, the exponential lag synchronization issue for neural networks with mixed delays was described in [15]. It should be pointed out that most of these synchronization criteria are based on the Lyapunov stability theory, which is defined over an infinite-time interval. From the practical perspective, however, we prefer to realize the synchronization goal within a finite-time interval, since the maximal synchronization time can then be calculated through appropriate methods. Hence, it is meaningful to study the finite-time synchronization of NNs. In Ref. [16], the finite-time robust synchronization issue for memristive neural networks was discussed. By utilizing discontinuous controllers, the finite-time synchronization issue for coupled neural networks was addressed in [17]. And under the sampled-data control scheme, some finite-time synchronization criteria for inertial memristive neural networks were established in [18].

For finite-time synchronization, the settling time heavily depends on the initial conditions, which may lead to different convergence times under different initial conditions. However, the initial conditions may be unavailable in practice. In order to overcome these shortcomings, a new concept named fixed-time synchronization was first introduced in [19]. Hints for future research on the fixed-time synchronization problem can be found in [20–25]. By designing a sliding mode controller, the fixed-time synchronization issue for complex dynamical networks was addressed in [21]. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions was introduced in [23]. Furthermore, the fixed-time synchronization issue for delayed memristor-based recurrent neural networks was investigated in [25].

As is well known, time delay is inevitable in the process of transmitting information because of the finite velocity of signal transmission. Time delays often cause systems to become unstable and oscillatory. Thus, considering the synchronization of NNs with delays is meaningful. Since the value of the delay is not always fixed, exploring the synchronization of NNs with time-varying delays has become a subject of great interest for many scholars. Finite-time and fixed-time synchronization analysis for inertial memristive neural networks with time-varying delays was addressed in [26]. Reference [27] also presents an intensive study of the fixed-time synchronization issue for memristor-based BAM neural networks with time-varying discrete delays. In [28], the authors elaborated the synchronization control problem for chaotic neural networks with time-varying and distributed delays. Moreover, robust extended dissipativity criteria for discrete-time uncertain neural networks with time-varying delays were investigated in [29].

By adding a Markovian process to the network models of NNs, a new class of network models is developed. Up to now, the study of synchronization of Markovian jumping NNs, especially the global finite-time synchronization of Markovian jumping NNs, has received wide attention from scholars, and a number of results have been developed, such as finite-time synchronization [30], robust control [31], exponential synchronization [32], and state estimation [33]. However, the sojourn-time of a Markovian process obeys an exponential distribution, which forces the transition rates to be constant and limits the application of the Markovian process. Compared with the Markovian process, the semi-Markovian process can obey other probability distributions, such as the Weibull distribution and the Gaussian distribution, which gives the semi-Markovian process a more extensive application prospect. Hence, the investigation of semi-Markovian jumping NNs is of great theoretical value and practical significance, and it has been conducted in [34–38]. In [34], the finite-time \(H_{\infty }\) synchronization for complex networks with semi-Markov jump topology was investigated by adopting a suitable Lyapunov function and the LMI approach. In [36], the exponential stability issue for semi-Markovian jump generalized neural networks with interval time-varying delays was addressed. And in [38], improved stability and stabilization results for stochastic synchronization of continuous-time semi-Markovian jump NNs with time-varying delays were studied. However, to the best of our knowledge, little attention has been paid to the fixed-time synchronization issue for semi-Markovian jumping NNs. This motivates us to study the fixed-time synchronization of semi-Markovian jumping NNs with time-varying delays.

Motivated by the aforementioned discussions, we intend to realize the fixed-time synchronization goal for semi-Markovian jumping NNs with time-varying delays. By applying the Lyapunov functional approach, the fixed-time synchronization conditions are presented in terms of LMIs. The main contributions of this paper are as follows:

  (1)

    A novel state-feedback controller, which includes double-integral terms, is designed to ensure fixed-time synchronization and to further improve the convergence performance.

  (2)

    A new formula for calculating the settling time of semi-Markovian jumping nonlinear systems is developed; see Theorem 3.2.

  (3)

    The time-varying delays and semi-Markovian processes are introduced in the construction of the NN models.

  (4)

    The fixed-time synchronization conditions are addressed in terms of LMIs.

The rest of this article is arranged as follows. Some preliminaries and the model description are given in Sect. 2. In Sect. 3, we introduce the main results: the fixed-time synchronization conditions are derived under different nonlinear controllers. In Sect. 4, two examples are presented to show the correctness of our main results. Finally, Sect. 5 concludes this paper.

Notation

R represents the set of real numbers. \(R^{n} \) denotes the n-dimensional Euclidean space, and \(R^{n\times n}\) denotes the set of all \(n\times n \) real matrices. For a column vector \(x=(x_{1},x_{2},\ldots,x_{n})^{T} \in R^{n}\), the superscript T represents the transpose operator. \(X< Y\) (\(X>Y\)) means that \(X-Y\) is negative (positive) definite. \(\mathscr{E}\) stands for the mathematical expectation. \(\Gamma V(x(t),r(t),t)\) denotes the infinitesimal generator of \(V(x(t),r(t),t)\). For a real matrix \(P=(p_{ij})_{n\times n}\), \(|P|=(|p_{ij}|)_{n\times n}\), and \(\lambda_{\min }(P)\) and \(\lambda_{\max }(P)\) denote the minimum and maximum eigenvalues of P, respectively. ∗ stands for the symmetric terms in a symmetric block matrix. \(\|x\|\) stands for the Euclidean norm of the vector x, i.e., \(\|x\|=(x^{T}x)^{\frac{1}{2}}\). Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for the algebraic operations.

2 Preliminaries and model description

Let \((\Omega,\mathscr{F},\{\mathscr{F}_{t}\}_{t\geq 0},\mathcal{P})\) be a complete probability space with a filtration \(\{\mathscr{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions, i.e., it is right continuous and increasing while \(\mathscr{F}_{0}\) contains all \(\mathcal{P}\)-null sets, where Ω is the sample space, \(\mathscr{F}\) is the σ-algebra of events, and \(\mathcal{P}\) is the probability measure defined on \(\mathscr{F}\). Let \(\{r(t),t\geq 0\}\) be a continuous-time semi-Markovian process taking values in a finite state space \(S=\{1,2,3,\ldots,N\}\). The evolution of the semi-Markovian process \(r(t)\) obeys the following probability transitions:

$$P\bigl(r(t+h)={k}|r(t)={r}\bigr)= \textstyle\begin{cases} \pi_{rk}(h)h+o(h), &\text{if }r\neq k, \\ 1+\pi_{rr}(h)h+o(h), &\text{if }r=k, \end{cases} $$

where \(h>0\), \(\lim_{h\rightarrow 0}\frac{o(h)}{h}=0\), \(\pi_{rk}(h)\geq 0 \) (\(r,k\in S\), \(r\neq k\)) is the transition rate from mode r to k and for any state or mode, it satisfies

$$\begin{aligned} \begin{aligned} \pi_{rr}(h)=-\sum _{k=1,k\neq r}^{N}\pi_{rk}(h). \end{aligned} \end{aligned}$$
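The row-sum constraint above can be sketched numerically. The following minimal Python snippet builds the generator matrix \(\Pi (h)\) from sojourn-time-dependent off-diagonal rates and fixes the diagonal by \(\pi_{rr}(h)=-\sum_{k\neq r}\pi_{rk}(h)\); the specific rate functions are illustrative assumptions, not rates from this paper.

```python
import numpy as np

def generator_at(h, off_rates):
    """Generator Pi(h) for a semi-Markov chain: off_rates[r][k] gives
    pi_rk(h) for r != k; the diagonal follows from the row-sum-zero rule
    pi_rr(h) = -sum_{k != r} pi_rk(h)."""
    N = len(off_rates)
    Pi = np.zeros((N, N))
    for r in range(N):
        for k in range(N):
            if r != k:
                Pi[r, k] = off_rates[r][k](h)
        Pi[r, r] = -Pi[r].sum()
    return Pi

# Illustrative sojourn-time-dependent rates (assumed forms, two modes).
off = [[None, lambda h: 0.5 + 0.1 * h],
       [lambda h: 0.3 + 0.2 * h, None]]
Pi = generator_at(0.4, off)
# Each row of Pi sums to zero and off-diagonal entries are nonnegative.
```

Evaluating `generator_at` at several sojourn times h shows how the generator varies along a trajectory, which is exactly what distinguishes the semi-Markovian case from the constant-rate Markovian case.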

Remark 2.1

It is worth noting that in the continuous-time semi-Markovian process, the transition rate \(\pi _{rk}(h)\) is time-varying and depends on the sojourn-time h. Meanwhile, the probability distribution of the sojourn-time h may obey the Weibull distribution, etc. [39]. If the sojourn-time h obeys the exponential distribution, the transition rate \(\pi _{rk}(h)=\pi _{rk}\) is a constant, and the continuous-time semi-Markovian process reduces to the continuous-time Markovian process. On the other hand, the transition rate \(\pi_{rk}(h)\) is bounded, i.e., \(\underline{\pi }_{rk}\leq \pi_{rk}(h)\leq \overline{\pi }_{rk}\), where \(\underline{\pi }_{rk}\) and \(\overline{\pi }_{rk}\) are known constant scalars, and \(\pi_{rk}(h)\) can be written as \(\pi_{rk}(h)=\pi_{rk}+\Delta \pi_{rk}\), where \(\pi_{rk}=\frac{1}{2}(\underline{\pi }_{rk}+\overline{\pi }_{rk})\) and \(| \Delta \pi_{rk}| \leq \kappa_{rk}\) with \(\kappa_{rk}=\frac{1}{2}(\overline{\pi }_{rk}-\underline{\pi }_{rk})\); see [37].
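The midpoint-plus-deviation decomposition in Remark 2.1 can be checked with a few lines of Python; the interval endpoints below are illustrative assumptions.

```python
import numpy as np

# Assumed bounds on an uncertain transition rate pi_rk(h).
pi_lo, pi_hi = 0.2, 0.8                 # underline pi_rk, overline pi_rk
pi_nom = 0.5 * (pi_lo + pi_hi)          # nominal rate pi_rk (midpoint)
kappa = 0.5 * (pi_hi - pi_lo)           # bound on |Delta pi_rk| (half-width)

# Any admissible rate pi_rk(h) in [pi_lo, pi_hi] then satisfies
# |pi_rk(h) - pi_nom| <= kappa.
samples = np.linspace(pi_lo, pi_hi, 101)
ok = bool(np.all(np.abs(samples - pi_nom) <= kappa + 1e-12))
```

This midpoint choice makes the deviation bound κ as small as possible for a given interval, which is why it is the standard parametrization in [37].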

The model we consider in this paper is the neural networks model with semi-Markovian jumping parameters. The dynamical behavior of the drive system is described as the following stochastic differential equation:

$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} \dot{x}_{i}(t)=-D(r(t))x_{i}(t)+A(r(t))f_{i}(x_{i}(t))+B(r(t))f_{i}(x_{i}(t-\tau (t))) +I_{i}, \\ x_{i}(t)={\psi _{i}(t)},\quad -\tau \leq t\leq 0, \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(1)

and the corresponding response system is

$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} \dot{y}_{i}(t)=-D(r(t))y_{i}(t)+A(r(t))f_{i}(y_{i}(t))+B(r(t))f_{i}(y_{i}(t-\tau (t))) +I_{i}+u_{i}(t), \\ y_{i}(t)={\phi _{i}(t)},\quad -\tau \leq t\leq 0, \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(2)

where \(\{r(t),t\geq 0\}\) is the continuous-time semi-Markovian process and \(r(t)\) stands for the mode at time t. \({x(t)}=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T} \in R^{n}\) and \({y(t)}=(y_{1}(t),y_{2}(t),\ldots, y_{n}(t))^{T} \in R^{n}\) denote the state vectors of the drive and response systems, respectively, whose ith components correspond to the ith neuron at time t; \(D(r(t))\in R^{n\times n}\) is a positive-definite diagonal matrix; \(A(r(t)) \in R^{n\times n}\) and \(B(r(t))\in R^{n\times n}\) are real matrices in mode \(r(t)\); \({f(x(t))}=(f_{1}(x_{1}(t)),f_{2}(x_{2}(t)),\ldots,f_{n}(x_{n}(t)))^{T}\in R^{n}\) is the neuronal activation function; \(I=(I_{1},I_{2},\ldots,I_{n})^{T}\) denotes the external input, whose ith component acts on the ith neuron. \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T}\) stands for the control input, which will be designed later. \(\psi (t)=(\psi _{1}(t),\psi _{2}(t),\ldots,\psi _{n}(t))\in \mathcal{C}([-\tau,0];R^{n})\) and \(\phi (t)=(\phi _{1}(t),\phi _{2}(t),\ldots,\phi _{n}(t))\in \mathcal{C}([-\tau,0];R^{n})\) are the initial conditions of systems (1) and (2), respectively.

Variable \(\tau (t)\) denotes the time-varying delay function, and it is assumed to satisfy

$$0\leq \tau (t)\leq \tau,\qquad \dot{\tau }(t)\leq \sigma < 1, $$

where \(\tau >0\) and σ are known constants.

For notational simplicity, we replace \(D(r(t))\), \(A(r(t))\), \(B(r(t))\) with \(D_{r}\), \(A_{r}\) and \(B_{r}\), respectively, for \(r(t)=r\in S\). Then the neural network models can be rewritten as follows:

$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} \dot{x}_{i}(t)=-D_{r}x_{i}(t)+A_{r}f_{i}(x_{i}(t))+B_{r}f_{i}(x_{i}(t-\tau (t)))+I_{i}, \\ x_{i}(t)={\psi _{i}(t)},\quad -\tau \leq t\leq 0, \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(3)
$$\begin{aligned} \begin{aligned} \textstyle\begin{cases} \dot{y}_{i}(t)=-D_{r}y_{i}(t)+A_{r}f_{i}(y_{i}(t))+B_{r}f_{i}(y_{i}(t-\tau (t)))+I_{i}+u_{i}(t), \\ y_{i}(t)={\phi _{i}(t)},\quad -\tau \leq t\leq 0. \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(4)

For the purpose of this paper, we suppose that the activation function \(f_{i}(\cdot)\) satisfies the following assumption:

(\(H_{1}\)):

For any \(x_{i}{(t)}\), \(y_{i} {(t)}\in R^{n} \), \({f_{i}}(\cdot)\) satisfies

$$\bigl\vert f_{i}\bigl(y_{i}(t)\bigr)-f_{i} \bigl(x_{i}(t)\bigr) \bigr\vert \leq \mu _{i} \bigl\vert y_{i}(t)-x_{i}(t) \bigr\vert \quad \mbox{and}\quad \bigl\vert f_{i}(\cdot) \bigr\vert \leq Q_{i}, $$

where \(\mu_{i}>0\) and \(Q_{i}>0\) are both known constants.

Let \(e_{i}(t)=y_{i}(t)-x_{i}(t)\) be the synchronization error, then the error dynamics system can be expressed as

$$ \textstyle\begin{cases} \dot{e}_{i}(t)=-D_{r}e_{i}(t)+A_{r}g_{i}(e_{i}(t))+B_{r}g_{i}(e_{i}(t-\tau (t)))+u_{i}(t), \\ e_{i}(t)= {\varphi _{i}(t)},\quad - \tau \leq t\leq 0,\end{cases} $$
(5)

where \({e(t)=(e_{1}(t),e_{2}(t),\ldots,e_{n}(t))^{T}}\), \(g_{i}(e_{i}(t))=f_{i}(y_{i}(t))-f_{i}(x_{i}(t))\), \(g_{i}(e_{i}(t-\tau (t)))=f_{i}(y_{i}(t-\tau (t)))-f_{i}(x_{i}(t-\tau (t)))\), and \({\varphi _{i}(t)}= \phi_{i}(t)-\psi_{i}(t)\).

Remark 2.2

From assumption (\(H_{1}\)), we can conclude that \(g_{i}(\cdot)\) is also continuous and bounded, and

$$\bigl\vert g_{i}\bigl(e_{i}(t)\bigr) \bigr\vert \leq \mu_{i} \bigl\vert e_{i}(t) \bigr\vert \quad \mbox{and} \quad \bigl\vert g_{i}(\cdot) \bigr\vert \leq H_{i}, $$

where \(H_{i}\) is a known positive constant.
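The bounds in Remark 2.2 are easy to check numerically for a concrete activation. Taking \(f_{i}=\tanh \) as a representative choice satisfying (\(H_{1}\)) (an assumption for illustration; the paper does not fix a particular \(f_{i}\)), the Lipschitz constant is \(\mu _{i}=1\), the bound is \(Q_{i}=1\), and the difference \(g_{i}\) is bounded by \(H_{i}=2Q_{i}\):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=1000), rng.normal(size=1000)

# g_i(e_i) = f_i(y_i) - f_i(x_i) with f_i = tanh (illustrative choice).
g = np.tanh(y) - np.tanh(x)

# |g_i(e_i)| <= mu_i |e_i| with mu_i = 1 (tanh has derivative <= 1).
lip_ok = bool(np.all(np.abs(g) <= np.abs(y - x) + 1e-12))
# |g_i| <= |f_i(y_i)| + |f_i(x_i)| <= 2 Q_i = 2, so H_i = 2 works.
bound_ok = bool(np.all(np.abs(g) <= 2.0))
```

The same check works for any bounded Lipschitz activation, with μ taken as the maximal slope and H as twice the amplitude bound.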

Before proceeding to our main results, some basic definitions and lemmas are introduced.

Definition 2.1

The neural network system (3) is said to be synchronized with the system (4) in finite time, if for any initial condition \({\varphi (t)}\), \(-\tau \leq t\leq 0\), there exists a settling time function \(T_{\varphi }=T(\varphi)\), such that

$$\begin{aligned} \begin{aligned} \lim_{ t\to T_{\varphi }} {\mathscr{E}} \bigl\Vert e(t) \bigr\Vert =0,\qquad e(t)=0,\quad \forall t\geq T_{\varphi }. \end{aligned} \end{aligned}$$

Moreover, if there exists a constant \(T_{\max }>0\), such that \(T_{\varphi }\leq T_{\max }\), then the neural network system (3) is said to be synchronized onto system (4) in fixed time. \(T_{\max }\) is called the synchronization settling time.

Lemma 2.1

([40])

Given any scalar ε and matrix \(S\in R^{n\times n}\), the following inequality:

$$\begin{aligned} \begin{aligned} \varepsilon \bigl(S+S^{T}\bigr)\leq \varepsilon^{2}W+SW^{-1}S^{T}, \end{aligned} \end{aligned}$$

holds for any symmetric positive-definite matrix \(W\in R^{n\times n}\).

Lemma 2.2

([41])

For any constant vector \(x\in R^{n}\) and \(0< c< l\), the following norm equivalence holds:

$$\begin{aligned} \begin{aligned} \Biggl(\sum_{i=1}^{n} \vert x_{i} \vert ^{l} \Biggr)^{\frac{1}{l}}\leq \Biggl(\sum_{i=1}^{n} \vert x_{i} \vert ^{c} \Biggr)^{\frac{1}{c}} \quad \textit{and} \quad \Biggl(\frac{1}{n}\sum_{i=1}^{n} \vert x_{i} \vert ^{l} \Biggr)^{\frac{1}{l}}\geq \Biggl(\frac{1}{n}\sum_{i=1}^{n} \vert x_{i} \vert ^{c} \Biggr)^{\frac{1}{c}}. \end{aligned} \end{aligned}$$
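Lemma 2.2 states that the ℓ-norm decreases in the exponent while the power mean increases in it. A quick numerical sanity check (with arbitrary assumed values of c and l) is:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=8)
c, l = 1.2, 3.0                                   # any 0 < c < l

lhs = (np.abs(x) ** l).sum() ** (1 / l)           # (sum |x_i|^l)^(1/l)
rhs = (np.abs(x) ** c).sum() ** (1 / c)           # (sum |x_i|^c)^(1/c)

mean_l = (np.abs(x) ** l).mean() ** (1 / l)       # ((1/n) sum |x_i|^l)^(1/l)
mean_c = (np.abs(x) ** c).mean() ** (1 / c)       # ((1/n) sum |x_i|^c)^(1/c)
# Lemma 2.2: lhs <= rhs and mean_l >= mean_c.
```

In the proofs below the lemma is used with \(c=1+\rho <2=l\) (and its mean form for the \(\upsilon \) term), which is exactly the pattern verified here.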

Lemma 2.3

Let \(U\in R^{n\times n}\) be a symmetric matrix, and let \(x\in R^{n}\), then the following inequality holds:

$$\begin{aligned} \begin{aligned} \lambda_{\min }(U)x^{T}x\leq x^{T}Ux\leq \lambda_{\max }(U)x^{T}x. \end{aligned} \end{aligned}$$
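Lemma 2.3 is the Rayleigh-quotient bound for symmetric matrices; the following short check with random data (assumed values, for illustration only) confirms the two-sided inequality:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
U = 0.5 * (A + A.T)                   # symmetrize to get U = U^T
x = rng.normal(size=5)

evals = np.linalg.eigvalsh(U)         # sorted eigenvalues of U
quad = x @ U @ x                      # x^T U x
lo = evals[0] * (x @ x)               # lambda_min(U) x^T x
hi = evals[-1] * (x @ x)              # lambda_max(U) x^T x
```

In Theorem 3.1 this lemma converts the weighted quadratic form \(e^{T}P_{r}e\) into scalar multiples of \(e^{T}e\), which is what lets the Lyapunov function enter the power inequalities of Lemma 2.4.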

Lemma 2.4

([42])

Suppose there exists a continuous nonnegative function \(V: R^{n}\rightarrow R_{+}\cup \{0\}\), such that

  1. (1)

    \(V(e(t))>0\) for \(e(t)\neq 0\), \(V(e(t))=0\Leftrightarrow e(t)=0\);

  2. (2)

    for given constants \(\alpha >0\), \(\beta >0\), \(0<\rho <1\), and \(\upsilon >1\), any solution \(e(t)\) satisfies the following inequality:

    $$\begin{aligned} \begin{aligned} D^{+}V\bigl(e(t)\bigr)\leq -\alpha V^{\rho }\bigl(e(t)\bigr)-\beta V^{\upsilon }\bigl(e(t)\bigr), \end{aligned} \end{aligned}$$

then the error system (5) is globally fixed-time stable for any initial conditions \(\varphi (t)\), and it satisfies

$$\begin{aligned} \begin{aligned} V(t)\equiv 0, \quad t\geq {T_{\varphi }}, \end{aligned} \end{aligned}$$

with the settling time estimated as

$$\begin{aligned} \begin{aligned} {T_{\varphi }}\leq T_{\max }:= \frac{1}{\alpha (1- \rho)}+\frac{1}{\beta (\upsilon -1)}. \end{aligned} \end{aligned}$$
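The bound of Lemma 2.4 is independent of \(V(0)\), which is the defining feature of fixed-time stability. A sketch under assumed parameter values integrates the comparison ODE \(\dot{V}=-\alpha V^{\rho }-\beta V^{\upsilon }\) from a large initial value and checks that it reaches zero before \(T_{\max }\):

```python
# Assumed parameter values, for illustration only.
alpha, beta, rho, ups = 2.0, 1.5, 0.5, 2.0
T_max = 1 / (alpha * (1 - rho)) + 1 / (beta * (ups - 1))  # Lemma 2.4 bound

# Forward-Euler integration of dV/dt = -alpha V^rho - beta V^ups from a
# large V(0); the solution should vanish no later than T_max.
V, t, dt = 100.0, 0.0, 1e-5
while V > 0 and t < 2 * T_max:
    V = max(V - dt * (alpha * V**rho + beta * V**ups), 0.0)
    t += dt
settle = t
```

Repeating the run with an even larger \(V(0)\) barely changes `settle`: the \(V^{\upsilon }\) term crushes large values quickly, while the \(V^{\rho }\) term finishes the job near zero.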

Lemma 2.5

([43])

Suppose there exists a continuous nonnegative function \(V: R^{n}\rightarrow R_{+}\cup \{0\}\), such that

  1. (1)

    \(V(e(t))>0\) for \(e(t)\neq 0\), \(V(e(t))=0\Leftrightarrow e(t)=0\);

  2. (2)

    for some constants \(\alpha >0\), \(\beta >0\), \(\rho =1-\frac{1}{2p}\), \(\upsilon =1+\frac{1}{2p}\), \(p>1\), any solution \(e(t)\) satisfies

    $$\begin{aligned} \begin{aligned} D^{+}V\bigl(e(t)\bigr)\leq -\alpha V^{\rho }\bigl(e(t)\bigr)-\beta V^{\upsilon }\bigl(e(t)\bigr), \end{aligned} \end{aligned}$$

then the error system (5) is globally fixed-time stable, and the settling time is bounded by

$$\begin{aligned} \begin{aligned} {T_{\varphi } \leq } T_{\max }= \frac{\pi p}{\sqrt{ \alpha \beta }}. \end{aligned} \end{aligned}$$
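When the exponents have the symmetric form of Lemma 2.5, both Lemma 2.4 and Lemma 2.5 apply, and the two estimates can be compared directly. For the assumed values below (illustrative only), the Lemma 2.5 estimate happens to be the smaller one; neither bound dominates in general:

```python
import math

# Assumed parameter values, for illustration only.
alpha, beta, p = 2.0, 1.5, 2.0
rho, ups = 1 - 1 / (2 * p), 1 + 1 / (2 * p)      # 0.75 and 1.25

T_lemma24 = 1 / (alpha * (1 - rho)) + 1 / (beta * (ups - 1))
T_lemma25 = math.pi * p / math.sqrt(alpha * beta)
```

Here `T_lemma24` evaluates to 14/3 ≈ 4.67 while `T_lemma25` ≈ 3.63, so Lemma 2.5 gives a sharper settling-time estimate in this instance.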

Lemma 2.6

([44])

Suppose that there exists a positive-definite, continuous differential function \(V(t)\) which satisfies

$$\begin{aligned} \begin{aligned} \dot{V}(t)\leq -\alpha V^{\rho }(t)\quad (t \geq 0), \end{aligned} \end{aligned}$$

where \(\alpha >0\), \(0<\rho <1\) are two constants. Then we have \(\lim_{t\rightarrow {T^{*}}} V(t)=0\), and \(V(t)\equiv 0\), \(\forall t\geq {T^{*}}\), with the settling time \(T^{*}\) estimated as

$$\begin{aligned} \begin{aligned} T^{*}=\frac{V^{1-\rho }(0)}{\alpha (1-\rho)}. \end{aligned} \end{aligned}$$

3 Main results

In this section, the fixed-time synchronization conditions are developed for systems (3) and (4). For this purpose, we adopt the following discontinuous feedback controller:

$$\begin{aligned} u_{i}(t)= {}&{-}\lambda_{i1}e_{i}(t)- \lambda_{i2}\operatorname{sign}\bigl(e_{i}(t)\bigr)- \lambda_{i3} \operatorname{sign}\bigl(e_{i}(t)\bigr) \bigl\vert e_{i}(t) \bigr\vert ^{\rho } -\lambda_{i4}\operatorname{sign}\bigl(e_{i}(t)\bigr) \bigl\vert e_{i}(t) \bigr\vert ^{\upsilon }, \end{aligned}$$
(6)

where \(0<\rho <1\), \(\upsilon >1\), and \(\lambda_{i1}\), \(\lambda_{i2}\), \(\lambda_{i3}\), \(\lambda_{i4}\), \(i=1,2,\ldots,n\), are the parameters to be designed later.
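The qualitative effect of controller (6) can be illustrated on a scalar toy instance of the error system (5) (one neuron, one mode, constant initial history); all numerical values below are illustrative assumptions, not parameters from this paper, and \(g=\tanh \) is an assumed activation difference satisfying (\(H_{1}\)):

```python
import numpy as np

# Assumed scalar system and controller parameters (illustrative only).
D, A, B = 1.0, 0.8, 0.5
lam1, lam2, lam3, lam4 = 2.0, 1.0, 1.5, 1.5
rho, ups, tau = 0.5, 2.0, 0.1
g = np.tanh

dt = 1e-3
hist = int(tau / dt)              # delay buffer length
e = np.full(hist + 1, 2.0)        # constant initial history varphi = 2
for _ in range(int(5 / dt)):
    et, etau = e[-1], e[0]        # e(t) and e(t - tau)
    u = (-lam1 * et - lam2 * np.sign(et)
         - lam3 * np.sign(et) * abs(et) ** rho
         - lam4 * np.sign(et) * abs(et) ** ups)
    de = -D * et + A * g(et) + B * g(etau) + u
    e = np.append(e[1:], et + dt * de)   # shift delay buffer, append new state
final_error = abs(e[-1])
```

The \(|e|^{\upsilon }\) term dominates for large errors and the \(|e|^{\rho }\) and sign terms dominate near zero; in this sketch the error settles to a small chattering band (of order dt) around zero well within the simulated horizon.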

Theorem 3.1

Under assumption (\(H_{1}\)), for given scalars \(0<\rho <1\) and \(\upsilon >1\), if there exist symmetric positive-definite matrices \(P_{r}\), \(W_{rk}\), such that

$$ \textstyle\begin{cases} -\lambda_{i1}P_{r}-P_{r}D_{r}+\mu_{i} \vert P_{r}A_{r} \vert +\frac{ \widetilde{\Omega }}{2}< 0,\\ \lambda_{i2}P_{r}>H_{i} \vert P_{r}B_{r} \vert ,\\ \lambda_{i3}>0,\qquad \lambda_{i4}>0, \end{cases} $$
(7)

where \(\widetilde{\Omega }=\sum_{k=1}^{N}\pi_{rk}P_{k}+\sum_{k=1,k \neq r}^{N} [\frac{\kappa_{rk}^{2}}{4}W_{rk}+(P_{k}-P_{r})W_{rk} ^{-1}(P_{k}-P_{r}) ]\), then the drive system (3) is synchronized onto the response system (4) in fixed time.
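In practice the conditions (7) are matrix inequalities checked by an LMI solver; as a minimal sketch, the following scalar two-mode instance (n = 1, N = 2, so every matrix reduces to a positive number and (7) becomes scalar inequalities) verifies feasibility for assumed numerical values:

```python
# Scalar two-mode toy instance of condition (7); all values are assumptions.
P = [1.0, 1.2]                        # P_1, P_2 (scalar "matrices")
W = [[0.0, 0.5], [0.5, 0.0]]          # W_rk for r != k
pi = [[-0.6, 0.6], [0.4, -0.4]]       # nominal rates pi_rk (rows sum to 0)
kappa = [[0.0, 0.1], [0.1, 0.0]]      # bounds on |Delta pi_rk|
D_, A_, mu, lam1 = 1.0, 0.8, 1.0, 3.0

def omega_tilde(r):
    """Scalar version of Omega-tilde for mode r."""
    s = sum(pi[r][k] * P[k] for k in range(2))
    s += sum(kappa[r][k] ** 2 / 4 * W[r][k]
             + (P[k] - P[r]) ** 2 / W[r][k]
             for k in range(2) if k != r)
    return s

# First inequality of (7) in each mode r.
cond = [(-lam1 * P[r] - P[r] * D_ + mu * abs(P[r] * A_)
         + omega_tilde(r) / 2) < 0 for r in range(2)]
```

With these values both mode-wise inequalities hold, so the assumed gain \(\lambda_{i1}=3\) would be admissible for this toy instance; the genuine matrix case requires an eigenvalue (or LMI solver) test instead of a scalar comparison.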

Proof

Consider the following Lyapunov functional:

$$\begin{aligned} \begin{aligned} V(t)=\sum_{i=1}^{n}e_{i}^{T}(t)P_{r}e_{i}(t). \end{aligned} \end{aligned}$$
(8)

For simplicity, we replace \(V(e(t),t,r)\) and \(\mathscr{L}V(e(t),t,r)\) with \(V(t)\) and \(\mathscr{L}V(t)\), respectively. According to the Itô formula, we have

$$\mathscr{L}V(t)=\lim_{\Delta t\to 0}\frac{\mathscr{E}\{V(e(t+\Delta t),r(t+ \Delta t),t+\Delta t)|r(t)=r\}-V(e(t),r,t)}{\Delta t}, $$

where Δt is a small positive number. Hence, for every \(r(t)={r}\in S\), it can be deduced that

$$\begin{aligned} \begin{aligned} \mathscr{L}V(t)=2\sum _{i=1}^{n}e_{i}^{T}(t)P_{r} \dot{e_{i}}(t)+\sum_{i=1}^{n}e_{i}^{T}(t) \Biggl[ \sum_{k=1}^{N}\pi_{rk}(h)P_{k} \Biggr] e _{i}(t). \end{aligned} \end{aligned}$$
(9)

Considering \(\pi_{rk}(h)=\pi_{rk}+\Delta \pi_{rk}\), \(\Delta \pi_{{rr}}=-\sum_{k=1,k\neq r}^{N}\Delta \pi_{rk}\) and applying Lemma 2.1, we obtain

$$\begin{aligned} \sum_{k=1}^{N} \pi_{rk}(h)P_{k} =&\sum_{k=1}^{N} \pi_{rk}P_{k}+ \sum_{k=1,k\neq r}^{N} \Delta \pi_{rk}P_{k}+\Delta \pi_{rr}P_{r} \\ =&\sum_{k=1}^{N}\pi_{rk}P_{k}+ \sum_{k=1,k\neq r}^{N}\Delta \pi_{rk}(P _{k}-P_{r}) \\ =&\sum_{k=1}^{N}\pi_{rk}P_{k}+ \sum_{k=1,k\neq r}^{N} \biggl[ \frac{1}{2} \Delta \pi_{rk}(P_{k}-P_{r})+\frac{1}{2} \Delta \pi_{rk}(P_{k}-P_{r}) \biggr] \\ \leq& \sum_{k=1}^{N}\pi_{rk}P_{k}+ \sum_{k=1,k\neq r}^{N} \biggl[ \frac{ \kappa_{rk}^{2}}{4}W_{rk} +(P_{k}-P_{r})W_{rk}^{-1}(P_{k}-P_{r}) \biggr] . \end{aligned}$$

Then calculating the derivative of \(V(t)\) along the trajectory of (5), we have

$$\begin{aligned} \mathscr{L}V(t) \leq{}& 2\sum_{i=1}^{n}e_{i}^{T}(t)P_{r}\bigl(-D_{r}e_{i}(t)+A_{r}g_{i}\bigl(e_{i}(t)\bigr)+B_{r}g_{i}\bigl(e_{i}\bigl(t-\tau (t)\bigr)\bigr)+u_{i}(t)\bigr) \\ &{}+\sum_{i=1}^{n}e_{i}^{T}(t) \Biggl( \sum_{k=1}^{N}\pi_{rk}P_{k} + \sum_{k=1,k\neq r}^{N} \biggl[ \frac{\kappa_{rk}^{2}}{4}W_{rk} +(P_{k}-P_{r})W_{rk}^{-1}(P_{k}-P_{r}) \biggr] \Biggr) e_{i}(t). \end{aligned}$$

Based on assumption (\(H_{1}\)), we get

$$\begin{aligned} \mathscr{L}V(t)\leq {}& {-}2\sum _{i=1}^{n}e_{i}^{T}(t)P_{r}D_{r}e_{i}(t)+2 \mu_{i}\sum_{i=1}^{n}e_{i}^{T}(t) \vert P_{r}A_{r} \vert e_{i}(t)+2H_{i} \sum_{i=1} ^{n} \bigl\vert e_{i}^{T}(t) \bigr\vert \vert P_{r}B_{r} \vert \\ &{}+2\sum_{i=1}^{n}e_{i}^{T}(t)P_{r}u_{i}(t)+ \sum_{i=1}^{n}e_{i}^{T}(t) \widetilde{\Omega }e_{i}(t). \end{aligned}$$
(10)

Substituting the controller (6) into (10), it yields

$$\begin{aligned} \mathscr{L}V(t) \leq{}& 2\sum _{i=1}^{n}e_{i}^{T}(t) \biggl(- \lambda_{i1}P_{r}-P _{r}D_{r}+ \mu_{i} \vert P_{r}A_{r} \vert + \frac{\widetilde{\Omega }}{2}\biggr)e_{i}(t) \\ &{}+2\sum_{i=1}^{n} \bigl\vert e_{i}^{T}(t) \bigr\vert \bigl(H_{i} \vert P_{r}B_{r} \vert -\lambda_{i2}P_{r} \bigr)-2 \lambda_{i3}\sum_{i=1}^{n}e_{i}^{T}(t)P_{r} \bigl\vert e_{i}(t) \bigr\vert ^{\rho } \\ &{}-2\lambda_{i4}\sum_{i=1}^{n}e_{i}^{T}(t)P_{r} \bigl\vert e_{i}(t) \bigr\vert ^{\upsilon }. \end{aligned}$$
(11)

By the condition (7), (11) can be rewritten as the following inequality:

$$\begin{aligned} \begin{aligned} \mathscr{L}V(t) &\leq -2\lambda_{i3}\sum _{i=1}^{n}e_{i}^{T}(t)P_{r} \bigl\vert e _{i}(t) \bigr\vert ^{\rho }-2 \lambda_{i4}\sum_{i=1}^{n}e_{i}^{T}(t)P_{r} \bigl\vert e_{i}(t) \bigr\vert ^{ \upsilon }. \end{aligned} \end{aligned}$$

In view of Lemmas 2.2 and 2.3, we derive

$$\begin{aligned} \mathscr{L}V(t)\leq{} &{-}2\lambda_{i3}\frac{\lambda_{\min }(P_{r})}{ \lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}} \Biggl( \sum_{i=1}^{n}e_{i} ^{T}(t)P_{r}e_{i}(t) \Biggr) ^{\frac{1+\rho }{2}} \\ &{}-2\lambda_{i4}\frac{\lambda_{\min }(P_{r})n^{\frac{1-\upsilon }{2}}}{ \lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}} \Biggl( \sum _{i=1}^{n}e _{i}^{T}(t)P_{r}e_{i}(t) \Biggr) ^{\frac{1+\upsilon }{2}}. \end{aligned}$$

According to (8), one obtains

$$\begin{aligned} \begin{aligned} \mathscr{L}V(t) &\leq -2 \lambda_{i3}\frac{\lambda_{\min }(P_{r})}{ \lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}\bigl[V(t)\bigr]^{\frac{1+\rho }{2}}-2 \lambda_{i4}\frac{\lambda_{\min }(P_{r})n^{\frac{1-\upsilon }{2}}}{ \lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}\bigl[V(t)\bigr]^{\frac{1+\upsilon }{2}}. \end{aligned} \end{aligned}$$
(12)

Then, taking the expectation on both sides of (12), we can get

$$\begin{aligned} \mathscr{E}\mathscr{L}V(t) &\leq -2 \lambda_{i3}\frac{\lambda_{\min }(P _{r})}{\lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}\mathscr{E}\bigl[\bigl(V(t)\bigr) \bigr]^{\frac{1+ \rho }{2}} -2\lambda_{i4}\frac{\lambda_{\min }(P_{r})n^{\frac{1- \upsilon }{2}}}{\lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}} \mathscr{E}\bigl[ \bigl(V(t)\bigr)\bigr]^{\frac{1+\upsilon }{2}}. \end{aligned}$$

As is well known, for any \(t>0\), \(\mathscr{E}[(V(t))^{ \frac{1+\rho }{2}}]=(\mathscr{E}[V(t)])^{\frac{1+\rho }{2}}\) and \(\mathscr{E}[(V(t))^{\frac{1+\upsilon }{2}}]= (\mathscr{E}[V(t)])^{\frac{1+ \upsilon }{2}}\), then we have the following inequality:

$$\begin{aligned} \mathscr{E}\mathscr{L}V(t)\leq {}&{-}2\min (\lambda_{i3}) \frac{\lambda _{\min }(P_{r})}{\lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}\bigl[ \mathscr{E}\bigl(V(t)\bigr)\bigr]^{\frac{1+\rho }{2}} \\ &{}-2\min (\lambda_{i4})\frac{\lambda_{\min }(P_{r})n^{\frac{1-\upsilon }{2}}}{\lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}\bigl[\mathscr{E}\bigl(V(t) \bigr)\bigr]^{\frac{1+ \upsilon }{2}}. \end{aligned}$$

By Lemma 2.4, we know that the error system (5) is globally fixed-time stable, and the settling time is estimated as

$$\begin{aligned} \begin{aligned} {T_{\varphi }}\leq T_{\max } &\leq \frac{\lambda_{ \max }(P_{r})^{\frac{1+\rho }{2}}}{2\min (\lambda_{i3})\lambda_{ \min }(P_{r})(1-\rho)} +\frac{\lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}{2\min (\lambda_{i4})n^{\frac{1-\upsilon }{2}}\lambda_{\min }(P _{r})(\upsilon -1)}. \end{aligned} \end{aligned}$$

Hence, under the controller (6), the fixed-time synchronization condition is derived. The proof is completed. □
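The settling-time estimate at the end of the proof is straightforward to evaluate once the design is fixed. The values below are illustrative assumptions (eigenvalue extremes of the \(P_{r}\) matrices, minimal gains, exponents, and dimension), not quantities computed in this paper:

```python
# Assumed quantities, for illustration only.
lam_min_P, lam_max_P = 1.0, 1.2       # lambda_min(P_r), lambda_max(P_r)
lam3_min, lam4_min = 1.5, 1.5         # min_i lambda_{i3}, min_i lambda_{i4}
rho, ups, n = 0.5, 2.0, 2

# The two terms of the Theorem 3.1 settling-time bound.
T1 = lam_max_P ** ((1 + rho) / 2) / (2 * lam3_min * lam_min_P * (1 - rho))
T2 = lam_max_P ** ((1 + ups) / 2) / (2 * lam4_min * n ** ((1 - ups) / 2)
                                     * lam_min_P * (ups - 1))
T_max = T1 + T2
```

For these values `T_max` comes out to roughly 1.38, and the formula makes the design trade-off visible: raising the gains \(\lambda_{i3}\), \(\lambda_{i4}\) shortens the guaranteed settling time at the cost of larger control effort.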

Remark 3.1

The function \(f_{i}(\cdot)\) chosen in this paper is continuous and bounded by a constant \(Q_{i}\). This is a special condition on \(f_{i}(\cdot)\); boundedness is not necessary in general. In this paper, for estimating the parameters accurately, we choose the function bounded by \(Q_{i}\). In other continuous cases, only the condition \(|f_{i}(y_{i}(t))-f_{i}(x_{i}(t))|\leq \mu _{i}|y_{i}(t)-x_{i}(t)|\) is needed. Since \(\frac{|f_{i}(y_{i}(t))-f_{i}(x_{i}(t))|}{|y_{i}(t)-x_{i}(t)|}\leq \mu _{i}\), the value of the constant \(\mu _{i}\) is determined by the selection of the activation function \(f_{i}(\cdot)\).

Remark 3.2

To the best of our knowledge, in the current literature on the synchronization issue for NNs, only a part of the matrices in the network systems and the Lyapunov functional are allowed to be distinct for different system modes. Hence, the network systems and the Lyapunov functional in this paper are more general than the existing results (such as [24, 26]). Meanwhile, inspired by [33], double-integral terms are introduced into the Lyapunov functional to deal with the adverse effect caused by the integral terms that include the semi-Markovian jumping parameters. The following theorem is established to show the advantage of this approach.

In the following, the fixed-time synchronization conditions for systems (3) and (4) are addressed in terms of LMIs. For this purpose, we adopt the following discontinuous feedback controller, which includes integral terms:

$$\begin{aligned} u_{i}(t)= {}&{-}\lambda_{i1}e_{i}(t)- \lambda_{i2} \operatorname{sign}\bigl(e_{i}(t)\bigr)- \lambda_{i3} \operatorname{sign}\bigl(e_{i}(t)\bigr) \bigl\vert e_{i}(t) \bigr\vert ^{\rho }-\lambda_{i4} \operatorname{sign}\bigl(e_{i}(t)\bigr) \bigl\vert e_{i}(t) \bigr\vert ^{\upsilon } \\ &{}-\lambda_{i5}\frac{e_{i}(t)}{ \Vert e_{i}(t) \Vert ^{2}} \biggl[ \int_{t-\tau (t)} ^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds+ \int_{-\tau }^{0} \int_{t+s}^{t}e_{i} ^{T}(s)Ke_{i}(s) \,ds \biggr] ^{\frac{1+\rho }{2}} \\ &{}-\lambda_{i6}\frac{e_{i}(t)}{ \Vert e_{i}(t) \Vert ^{2}} \biggl[ \int_{t-\tau (t)} ^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds+ \int_{-\tau }^{0} \int_{t+s}^{t}e_{i} ^{T}(s)Ke_{i}(s) \,ds \biggr] ^{\frac{1+\upsilon }{2}}. \end{aligned}$$
(13)

Theorem 3.2

Under assumption (\(H_{1}\)), for given scalars \(0<\rho <1\) and \(\upsilon >1\), if there exist symmetric positive-definite matrices \(P_{r}\), \(W_{rk}\), and \(K_{r}\), and a symmetric matrix \(K\geq 0\), such that

(14)

where \(\overline{\Omega }=K_{r}+\sum_{k=1}^{N}\pi_{rk}P_{k}+ \sum_{k=1,k\neq r}^{N} [\frac{\kappa_{rk}^{2}}{4}W_{rk}+(P_{k}-P _{r})W_{rk}^{-1}(P_{k}-P_{r}) ]+\tau K\),

$$ \textstyle\begin{cases} -P_{r}D_{r}-\lambda_{i1}P_{r}+\mu_{i} \vert P_{r}A_{r} \vert \leq 0,\\ \lambda _{i2}P_{r}>H_{i} \vert P_{r}B_{r} \vert ,\\ \lambda_{i5}>\frac{\lambda_{i3}}{ \lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}>0,\\ \lambda_{i6}>\frac{ \lambda_{i4}}{\lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}>0,\\ \sum_{k\in S}\pi_{rk}(h)K_{k}-K\leq 0,\end{cases} $$
(15)

then the drive system (3) is synchronized onto the response system (4) in fixed time.

Proof

Consider the following Lyapunov functional:

$$\begin{aligned} V(t) &=V_{1}{(t)}+V_{2}{(t)}+V_{3} {(t)} \\ &=\sum_{i=1}^{n}e_{i}^{T}(t)P_{r}e_{i}(t)+ \sum_{i=1}^{n} \int_{t- \tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds +\sum_{i=1}^{n} \int_{-\tau } ^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds. \end{aligned}$$
(16)

For \(V_{1}(t)\), based on (8) and (9), we have

$$\begin{aligned} \begin{aligned} \mathscr{L}V_{1}(t) &=2\sum _{i=1}^{n}e_{i}^{T}(t)P_{r} \dot{e_{i}}(t) + \sum_{i=1}^{n}e_{i}^{T}(t) \Biggl[ \sum_{k=1}^{N}\pi_{rk}(h)P_{k} \Biggr] e _{i}(t). \end{aligned} \end{aligned}$$
(17)

Calculating the derivatives of \(V_{2}(t)\) and \(V_{3}(t)\) along the trajectory of (5) yields

$$\begin{aligned} \begin{aligned} \mathscr{L}V_{2}(t) ={}&\sum _{i=1}^{n}e_{i}^{T}(t)K_{r}e_{i}(t)- \bigl(1- \dot{\tau }(t)\bigr)\sum_{i=1}^{n}e_{i}^{T} \bigl(t-\tau (t)\bigr)K_{r}e_{i}\bigl(t-\tau (t)\bigr) \\ &{}+\sum_{i=1}^{n}\sum _{k=1}^{N}\pi_{rk}(h) \int_{t-\tau (t)}^{t}e_{i} ^{T}(s)K_{k}e_{i}(s) \,ds, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \mathscr{L}V_{3}(t) &=\sum _{i=1}^{n}e_{i}^{T}(t) \tau Ke_{i}(t)-\sum_{i=1}^{n} \int_{t-\tau }^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds. \end{aligned} \end{aligned}$$
(18)

Combining (16)–(18), we obtain

$$\begin{aligned} \mathscr{L}V(t) ={}&2\sum_{i=1}^{n}e_{i}^{T}(t)P_{r} \dot{e_{i}}(t)+\sum_{i=1}^{n}e_{i}^{T}(t) \Biggl[ \sum_{k=1}^{N}\pi_{rk}(h)P_{k} \Biggr] e _{i}(t)+\sum_{i=1}^{n}e_{i}^{T}(t)K_{r}e_{i}(t) \\ &{}-\bigl(1-\dot{\tau }(t)\bigr)\sum_{i=1}^{n}e_{i}^{T} \bigl(t-\tau (t)\bigr)K_{r}e_{i}\bigl(t- \tau (t)\bigr) \\ &{} + \sum_{i=1}^{n}\sum _{k=1}^{N}\pi_{rk}(h) \int_{t-\tau (t)} ^{t}e_{i}^{T}(s)K_{k}e_{i}(s) \,ds \\ &{}+\sum_{i=1}^{n}e_{i}^{T}(t) \tau Ke_{i}(t)-\sum_{i=1}^{n} \int_{t- \tau }^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds. \end{aligned}$$

Based on assumption (\(H_{1}\)) and the error system (5), we have

Under the conditions of the theorem, we have the following inequality:

(19)

Substituting (13) into (19), we can obtain

By conditions (14) and (15) and employing Lemma 2.3, we have

$$\begin{aligned} \mathscr{L}V(t) \leq {}&{-}2\lambda_{i3}\lambda_{\min }(P_{r})\sum _{i=1} ^{n} \bigl\vert e_{i}(t) \bigr\vert ^{\rho +1}-2\lambda_{i4}\lambda_{\min }(P_{r}) \sum_{i=1} ^{n} \bigl\vert e_{i}(t) \bigr\vert ^{\upsilon +1} \\ & {}-2\lambda_{i5}\lambda_{\min }(P_{r}) \Biggl[ \sum_{i=1}^{n} \int_{- \tau }^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds+\sum_{i=1}^{n} \int_{t-\tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds \Biggr] ^{\frac{1+ \rho }{2}} \\ & {}-2\lambda_{i6}\lambda_{\min }(P_{r}) \Biggl[ \sum_{i=1}^{n} \int_{- \tau }^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds \\ &{}+\sum_{i=1}^{n} \int_{t-\tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds \Biggr] ^{\frac{1+ \upsilon }{2}}. \end{aligned}$$
(20)

According to Lemma 2.2, we get

$$\begin{aligned} \Biggl({\sum_{i=1}^{n}} \bigl\vert e_{i}(t) \bigr\vert ^{1+\rho } \Biggr)^{\frac{1}{1+ \rho }} \geq \Biggl({\sum_{i=1}^{n}} \bigl\vert e_{i}(t) \bigr\vert ^{2} \Biggr)^{\frac{1}{2}}, \end{aligned}$$

thus

$$\begin{aligned} {\sum_{i=1}^{n}} \bigl\vert e_{i}(t) \bigr\vert ^{1+\rho }&\geq \Biggl( {\sum _{i=1}^{n}} \bigl\vert e_{i}(t) \bigr\vert ^{2} \Biggr)^{ \frac{1+\rho }{2}} \\ &= \Biggl(\sum _{i=1}^{n}e_{i}^{T}(t)e_{i}(t) \Biggr)^{\frac{1+ \rho }{2}}. \end{aligned}$$

Hence, (20) can be rewritten as

$$\begin{aligned} \mathscr{L}V(t)\leq {}&{-}2\lambda_{i3}\frac{\lambda_{\min }(P_{r})}{ \lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}} \Biggl[ \sum_{i=1}^{n}e_{i} ^{T}(t)P_{r}e_{i}(t) \Biggr] ^{\frac{1+\rho }{2}} \\ &{}-2\lambda_{i4}\frac{\lambda_{\min }(P_{r})}{\lambda_{\max }(P_{r})^{\frac{1+ {\upsilon }}{2}}} \Biggl[ \sum _{i=1}^{n}e_{i}^{T}(t)P_{r}e _{i}(t) \Biggr] ^{\frac{1+\upsilon }{2}} \\ &{}-2\lambda_{i5}\lambda_{\min }(P_{r}) \Biggl[ \sum _{i=1}^{n} \int_{- \tau }^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds +\sum_{i=1}^{n} \int_{t-\tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds \Biggr] ^{\frac{1+ \rho }{2}} \\ &{}-2\lambda_{i6}\lambda_{\min }(P_{r}) \Biggl[ \sum _{i=1}^{n} \int_{- \tau }^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds +\sum_{i=1}^{n} \int_{t-\tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds \Biggr] ^{\frac{1+ \upsilon }{2}}. \end{aligned}$$

According to the conditions given in (15) and based on Lemma 2.2, we get

$$\begin{aligned} \mathscr{L}V(t)\leq{} & {-}2\lambda_{i3}\frac{\lambda_{\min }(P_{r})}{ \lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}} \Biggl[\sum_{i=1}^{n}e_{i} ^{T}(t)P_{r}e_{i}(t) \\ &{}+\sum_{i=1}^{n} \int_{t-\tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds+ \sum_{i=1}^{n} \int_{-\tau }^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds \Biggr]^{\frac{1+\rho }{2}} \\ &{}-2\lambda_{i4}\frac{\lambda_{\min }(P_{r})}{\lambda_{\max }(P_{r})^{\frac{1+ {\upsilon }}{2}}} \Biggl[\sum _{i=1}^{n}e_{i}^{T}(t)P_{r}e _{i}(t) \\ &{}+\sum_{i=1}^{n} \int_{t-\tau (t)}^{t}e_{i}^{T}(s)K_{r}e_{i}(s) \,ds+ \sum_{i=1}^{n} \int_{-\tau }^{0} \int_{t+s}^{t}e_{i}^{T}(s)Ke_{i}(s) \,ds \Biggr]^{\frac{1+\upsilon }{2}}, \end{aligned}$$

where \(0<\rho <1\), \(\upsilon >1\), and \(\lambda_{i3}, \lambda_{i4}>0\). It follows that

$$\begin{aligned} \mathscr{L}V(t) &\leq -2\lambda_{i3} \frac{\lambda_{\min }(P_{r})}{ \lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}\bigl[V(t)\bigr]^{\frac{1+\rho }{2}} -2\lambda_{i4} \frac{\lambda_{\min }(P_{r})n^{\frac{1-\upsilon }{2}}}{ \lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}\bigl[V(t)\bigr]^{\frac{1+\upsilon }{2}}. \end{aligned}$$
(21)

Taking the expectation on both sides of (21) yields

$$\begin{aligned} \mathscr{E}\mathscr{L}V(t)\leq -2\lambda_{i3} \frac{\lambda_{\min }(P _{r})}{\lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}\mathscr{E}\bigl[\bigl(V(t)\bigr)^{\frac{1+ \rho }{2}}\bigr] -2 \lambda_{i4}\frac{\lambda_{\min }(P_{r})n^{\frac{1- \upsilon }{2}}}{\lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}} \mathscr{E}\bigl[\bigl(V(t) \bigr)^{\frac{1+\upsilon }{2}}\bigr]. \end{aligned}$$
(22)

Since \(\mathscr{E}[(V(t))^{\frac{1+\rho }{2}}]=(\mathscr{E}[V(t)])^{\frac{1+\rho }{2}}\) and \(\mathscr{E}[(V(t))^{\frac{1+\upsilon }{2}}]=(\mathscr{E}[V(t)])^{\frac{1+\upsilon }{2}}\), (22) can be rewritten as

$$\begin{aligned} {\mathscr{E}}\mathscr{L}V(t)\leq{}& {-}2\min ( \lambda_{i3})\frac{ \lambda_{\min }(P_{r})}{\lambda_{\max }(P_{r})^{\frac{1+\rho }{2}}}\bigl[ {\mathscr{E}}V(t) \bigr]^{\frac{1+\rho }{2}} \\ &{} -2\min (\lambda _{i4})\frac{\lambda_{\min }(P_{r})n^{\frac{1-\upsilon }{2}}}{ \lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}\bigl[ {\mathscr{E}}V(t) \bigr]^{\frac{1+\upsilon }{2}}. \end{aligned}$$
(23)

Combining Lemma 2.4 with (23), we conclude that the error system (5) is globally fixed-time stable, and the settling time is estimated as

$$\begin{aligned} T_{\varphi }\leq T_{\max } &\leq \frac{\lambda_{\max }(P_{r})^{\frac{1+ \rho }{2}}}{2\min (\lambda_{i3})\lambda_{\min }(P_{r})(1-\rho)} +\frac{ \lambda_{\max }(P_{r})^{\frac{1+\upsilon }{2}}}{2\min (\lambda_{i4})n ^{\frac{1-\upsilon }{2}}\lambda_{\min }(P_{r})(\upsilon -1)}. \end{aligned}$$
(24)

Hence, the fixed-time synchronization conditions are formulated in terms of LMIs. The proof is completed. □
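Once the controller gains and the extreme eigenvalues of \(P_{r}\) are available, the bound (24) can be evaluated directly. The following Python sketch does so; the parameter names mirror the theorem's notation, and the numerical values are illustrative placeholders rather than data from the proof:

```python
# Settling-time bound (24): lam3, lam4 play the role of min(lambda_{i3}),
# min(lambda_{i4}); pmin, pmax are lambda_min(P_r), lambda_max(P_r);
# 0 < rho < 1 < ups; n is the number of nodes.
def settling_time_bound(lam3, lam4, pmin, pmax, rho, ups, n):
    t1 = pmax ** ((1 + rho) / 2) / (2 * lam3 * pmin * (1 - rho))
    t2 = pmax ** ((1 + ups) / 2) / (2 * lam4 * n ** ((1 - ups) / 2) * pmin * (ups - 1))
    return t1 + t2

# Illustrative numbers only (not the paper's data):
print(settling_time_bound(lam3=0.25, lam4=1.5, pmin=10.0, pmax=11.5,
                          rho=0.5, ups=2.0, n=2))
```

As expected, increasing the gains \(\lambda_{i3}\), \(\lambda_{i4}\) shrinks the bound.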

Remark 3.3

To the best of our knowledge, many existing works on fixed-time synchronization conditions for NNs (see [25, 27]) express these conditions as algebraic inequalities. Compared with the approach used in [25], the fixed-time synchronization conditions obtained in Theorem 3.2 are expressed in terms of LMIs, which can be solved with the LMI toolbox in Matlab. It should be mentioned that condition (14) cannot be solved directly as an LMI, because the nonlinear term \(\sum_{r=1,k\neq r}^{N}(P_{k}-P_{r})W_{rk}^{-1}(P_{k}-P_{r})\) appears in Ω̅. To overcome this difficulty, we construct the diagonal matrix \(\operatorname{diag}\{\sum_{r=1,k\neq r}^{N}(P_{k}-P_{r})W_{rk}^{-1}(P_{k}-P_{r}), 0\}\). Then, using the condition on the transition rate \(\pi _{rk}(h)\) and the Schur complement lemma mentioned in [37], the matrix inequality is converted into a linear matrix inequality, which can be solved numerically.
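As a small side illustration of the Schur complement step invoked above, the following Python snippet checks the block-matrix equivalence on a generic numerical instance (this is not the paper's actual LMI, only the lemma it relies on):

```python
import numpy as np

# Schur complement lemma: [[A, B], [B^T, C]] > 0  iff  C > 0 and
# A - B C^{-1} B^T > 0, checked here on a small symmetric example.
def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[2.0, 0.5], [0.5, 2.0]])

block = np.block([[A, B], [B.T, C]])
schur = A - B @ np.linalg.inv(C) @ B.T
assert is_pd(block) == (is_pd(C) and is_pd(schur))
print("Schur complement equivalence holds for this instance")
```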

Corollary 3.1

Suppose the conditions in Theorem 3.2 hold. Under the controller (13), the drive system (3) is synchronized with the response system (4) in fixed time, and the settling time is estimated as

$$\begin{aligned} T_{\varphi }\leq T_{\max }\leq \frac{\pi p}{2\lambda_{\min }(P_{r})\sqrt{ \min (\lambda_{i3})\min (\lambda_{i4})n^{\frac{1-\upsilon }{2}} \lambda_{\max }(P_{r})^{\frac{2+\rho +\upsilon }{2}}}}, \end{aligned}$$
(25)

with \(p>1\).

Corollary 3.2

Under assumption (\(H_{1}\)), for a given scalar \(0<\rho <1\), the drive system (3) is synchronized with the response system (4) in a finite-time interval under the following controller:

$$\begin{aligned} u_{i}(t)=-\lambda_{i1}e_{i}(t)- \lambda_{i2}\operatorname{sign}\bigl(e_{i}(t)\bigr)-\lambda_{i3} \operatorname{sign}\bigl(e _{i}(t)\bigr) \bigl\vert e_{i}(t) \bigr\vert ^{\rho }, \end{aligned}$$
(26)

if there exist symmetric positive-definite matrices \(P_{r}\), \(W_{rk}\), such that

$$\textstyle\begin{cases} -\lambda_{i1}P_{r}-P_{r}D_{r}+\mu_{i} \vert P_{r}A_{r} \vert +\frac{ \widetilde{\Omega }}{2}< 0,\\ \lambda_{i2}>H_{i} \vert P_{r}B_{r} \vert ,\\ \lambda_{i3}>0, \end{cases} $$

where \(\widetilde{\Omega }=\sum_{k=1}^{N}\pi_{rk}P_{k}+\sum_{k=1,k \neq r}^{N} [\frac{\kappa_{rk}^{2}}{4}W_{rk}+(P_{k}-P_{r})W_{rk} ^{-1}(P_{k}-P_{r}) ]\).

Meanwhile, the settling time is estimated as

$$\begin{aligned} {T^{*}}\leq \frac{V(0)^{1-\frac{1+\rho }{2}}\lambda _{\max }(P_{r})^{\frac{1+\rho }{2}}}{2\lambda_{{i3}}(1- \rho)\lambda_{\min }(P_{r})}. \end{aligned}$$
(27)
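Estimate (27) grows with the initial value \(V(0)\) like \(V(0)^{(1-\rho )/2}\), which is the key contrast with the fixed-time bound (24). A short Python sketch (parameter names follow the corollary's notation; the numerical values are illustrative only):

```python
# Finite-time settling estimate (27): T* = V(0)^{(1-rho)/2} *
# pmax^{(1+rho)/2} / (2 * lam3 * (1 - rho) * pmin), with lam3 playing
# the role of lambda_{i3} and pmin, pmax the extreme eigenvalues of P_r.
def finite_time_bound(v0, lam3, pmin, pmax, rho):
    return (v0 ** (1 - (1 + rho) / 2) * pmax ** ((1 + rho) / 2)
            / (2 * lam3 * (1 - rho) * pmin))

for v0 in (1.0, 100.0, 10000.0):
    print(v0, finite_time_bound(v0, lam3=0.25, pmin=10.0, pmax=11.5, rho=0.5))
```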

Remark 3.4

Compared with the finite-time synchronization conditions obtained in [30], fixed-time synchronization requires more conditions. For finite-time synchronization, only the term \(-V^{\rho }(t)\), \(0<\rho <1\), is needed, whereas fixed-time synchronization requires both \(-V^{\rho }(t)\) (\(0<\rho <1\)) and \(-V^{\nu }(t)\) (\(\nu >1\)). Similar to the results in [30], the settling time \(T^{*}\) of the finite-time synchronization obtained in Corollary 3.2 depends on the initial condition \(V(0)\); when \(V(0)\) is large, \(T^{*}\) may be too long to be reasonable in practical applications. In contrast, the settling time \(T_{\varphi }\) of the fixed-time synchronization obtained in Theorem 3.2 is independent of the initial conditions. Thus, the settling time can be accurately evaluated by selecting appropriate control input parameters and semi-Markovian jumping parameters.
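The contrast drawn in this remark can be sketched numerically. For the scalar comparison system \(\dot{V}=-aV^{p}-bV^{q}\) with \(p<1<q\), the settling time is \(T(V_{0})=\int_{0}^{V_{0}}\frac{dv}{av^{p}+bv^{q}}\), which stays bounded as \(V_{0}\to \infty \). The following Python sketch (illustrative constants \(a,b,p,q\), not the paper's parameters) evaluates this integral in log-space:

```python
import numpy as np

# Settling time T(V0) of dV/dt = -a V^p - b V^q as a quadrature in
# u = log v; with p < 1 < q the integrand decays at both ends, so T(V0)
# approaches a finite limit as V0 grows (fixed-time behavior).
def settle_time(v0, a, b, p, q, eps=1e-13):
    u = np.linspace(np.log(eps), np.log(v0), 200001)
    v = np.exp(u)
    f = v / (a * v ** p + b * v ** q)          # dv/(a v^p + b v^q), dv = v du
    return float(np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2)

for v0 in (1e0, 1e3, 1e6):
    print(v0, settle_time(v0, a=1.0, b=1.0, p=0.5, q=2.0))
```

The printed times increase with \(V_{0}\) but remain below the lemma-style bound \(\frac{1}{a(1-p)}+\frac{1}{b(q-1)}=3\); dropping the \(V^{q}\) term instead makes the settling time grow without bound as \(V_{0}\to \infty \).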

4 Numerical examples

Example 1

In this section, we present two examples to demonstrate the effectiveness of Theorem 3.2.

Consider the 2-dimensional semi-Markovian jumping neural network. Its parameters are chosen as follows:

$$\begin{aligned}& D_{1}= \left ( \begin{matrix} 4.5&0 \\ 0&2.5 \end{matrix} \right ) ,\qquad D_{2}= \left ( \begin{matrix} 2.5&0 \\ 0&1.0 \end{matrix} \right ) ,\qquad A_{1}= \left ( \begin{matrix} 1.3&-1.4 \\ -2.6&1.3 \end{matrix} \right ) , \\& A_{2}= \left ( \begin{matrix} 0.8&-1.4 \\ 0.6&1.8 \end{matrix} \right ) ,\qquad B_{1}= \left ( \begin{matrix} -0.6&0.8 \\ 0.4&-0.7 \end{matrix} \right ) ,\qquad B_{2}= \left ( \begin{matrix} -0.7&0.4 \\ 0.3&-0.5 \end{matrix} \right ). \end{aligned}$$

The scalars are chosen as follows. The activation function is taken as \(f(t)=\tanh (t)\), so \(\mu_{1}=\mu_{2}=1\) and \(G_{1}=G_{2}=1\); moreover, \(\rho =0.5\) and \(\upsilon =2\). The time-varying delay function is assumed to be \(\tau (t)=0.5+0.5\cos (t)\), the initial values are \({x(t)}=(e^{3t},e^{3t})^{T}\), \({y(t)}=(\sin (3t),\tanh (3t))^{T}\), and \(I=(0,0)^{T}\). It is easy to see that the delay has upper bound \(\tau =1\) and that \(\dot{\tau }(t)\leq 0.5\).

The transition rates for each mode are given as follows:

For mode 1

$$\pi_{11}(h)\in (-2.53,-2.35),\qquad \pi_{12}(h)\in (2.43,2.63). $$

For mode 2

$$\pi_{21}(h)\in (0.45,0.59),\qquad \pi_{22}(h)\in (-0.62,-0.46). $$

Then we obtain the parameters \(\pi_{rk}\) and \(\kappa_{rk}\), where \(r,k \in S=\{1,2\}\):

$$\begin{aligned}& \pi_{11}=-2.5,\qquad \pi_{12}=2.5,\qquad \pi_{21}=0.5,\qquad \pi_{22}=-0.5, \\& \kappa_{11}=0.09,\qquad \kappa_{12}=0.10,\qquad \kappa_{21}=0.07,\qquad \kappa_{22}=0.08. \end{aligned}$$
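As a reader-side sanity check (not part of the paper's computations), one can verify that each nominal rate \(\pi_{rk}\) lies in its stated interval, that each row of the generator sums to zero, and that each \(\kappa_{rk}\) equals the half-width of the corresponding interval:

```python
# Sanity check of the mode-1 and mode-2 transition-rate data above.
pi = {"11": -2.5, "12": 2.5, "21": 0.5, "22": -0.5}
kappa = {"11": 0.09, "12": 0.10, "21": 0.07, "22": 0.08}
intervals = {"11": (-2.53, -2.35), "12": (2.43, 2.63),
             "21": (0.45, 0.59), "22": (-0.62, -0.46)}
for key, (lo, hi) in intervals.items():
    assert lo <= pi[key] <= hi                     # nominal rate inside interval
    assert abs(kappa[key] - (hi - lo) / 2) < 1e-9  # kappa_rk = interval half-width
assert abs(pi["11"] + pi["12"]) < 1e-12            # generator rows sum to zero
assert abs(pi["21"] + pi["22"]) < 1e-12
print("transition-rate data is consistent")
```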

Through simple computations, we have

$$\begin{aligned}& P_{1}= \left ( \begin{matrix} 11.4132&0.0545 \\ 0.0545&10.3522 \end{matrix} \right ) ,\qquad P_{2}= \left ( \begin{matrix} 10.1249&0.1175 \\ 0.1175&9.7243 \end{matrix} \right ) , \\& K_{1}= \left ( \begin{matrix} 0.9500&0 \\ 0&0.3500 \end{matrix} \right ) ,\qquad K_{2}= \left ( \begin{matrix} 0.2500&0 \\ 0&0.1300 \end{matrix} \right ) , \\& K= \left ( \begin{matrix} 1.1345&0 \\ 0&1.1326 \end{matrix} \right ) . \end{aligned}$$

Meanwhile, the controller parameters are chosen as follows.

For mode 1

$$\begin{aligned}& \lambda_{11}= \left ( \begin{matrix} 0.8725&0 \\ 0&0.5248 \end{matrix} \right ) ,\qquad \lambda_{12}= \left ( \begin{matrix} 2.4156&0 \\ 0&2.5426 \end{matrix} \right ) , \\& \lambda_{13}= \left ( \begin{matrix} 0.2500&0 \\ 0&0.2500 \end{matrix} \right ) , \\& \lambda_{14}= \left ( \begin{matrix} 1.5236&0 \\ 0&1.2698 \end{matrix} \right ) ,\qquad \lambda_{15}= \left ( \begin{matrix} 0.0625&0 \\ 0&0.0625 \end{matrix} \right ) , \\& \lambda_{16}= \left ( \begin{matrix} 0.3048&0 \\ 0&0.2540 \end{matrix} \right ) . \end{aligned}$$

For mode 2

$$\begin{aligned}& \lambda_{21}= \left ( \begin{matrix} 1.8961&0 \\ 0&1.6261 \end{matrix} \right ) ,\qquad \lambda_{22}= \left ( \begin{matrix} 2.0501&0 \\ 0&1.7626 \end{matrix} \right ) , \\& \lambda_{23}= \left ( \begin{matrix} 0.2100&0 \\ 0&0.2100 \end{matrix} \right ) , \\& \lambda_{24}= \left ( \begin{matrix} 1.4046&0 \\ 0&1.3898 \end{matrix} \right ) ,\qquad \lambda_{25}= \left ( \begin{matrix} 0.0530&0 \\ 0&0.0530 \end{matrix} \right ) , \\& \lambda_{26}= \left ( \begin{matrix} 0.2810&0 \\ 0&0.2780 \end{matrix} \right ) , \end{aligned}$$

and the settling time \(T_{\max }\) can be calculated as 4.45.

Based on the values given above, the first and second state trajectories of systems (3) and (4) are displayed in Fig. 1 and Fig. 2, respectively, and the trajectories of the corresponding synchronization error system are depicted in Fig. 3. These simulations confirm the effectiveness of Theorem 3.2.

Figure 1

State trajectory of variables \(x_{1}(t)\) and \(y_{1}(t)\) under the controller (13)

Figure 2

State trajectory of variables \(x_{2}(t)\) and \(y_{2}(t)\) under the controller (13)

Figure 3

The trajectories of the synchronization error \(e_{1}(t)\) and \(e_{2}(t)\) under the controller (13)

Example 2

Consider the 3-dimensional semi-Markovian jumping neural network. Its parameters are chosen as follows:

$$\begin{aligned}& D_{1}= \left ( \begin{matrix} 3.5&0&0 \\ 0&2.6&0 \\ 0&0&2.6 \end{matrix} \right ) ,\qquad D_{2}= \left ( \begin{matrix} 2.5&0&0 \\ 0&1.5&0 \\ 0&0&1.5 \end{matrix} \right ) , \\& A_{1}= \left ( \begin{matrix} -2.0&-1.2&-0.5 \\ 1.8&1.6&1.5 \\ 1.8&1.4&1.6 \end{matrix} \right ) , \\& A_{2}= \left ( \begin{matrix} -1.5&1.0&0.5 \\ 1.0&-0.5&0.3 \\ 0.5&-1.2&-1.5 \end{matrix} \right ) ,\qquad B_{1}= \left ( \begin{matrix} 0.6&0.4&0.3 \\ 0.4&-0.5&-0.2 \\ 0.3&0.1&-0.4 \end{matrix} \right ) , \\& B_{2}= \left ( \begin{matrix} -0.6&-0.3&0.5 \\ 0.2&-0.5&0.4 \\ -0.1&0.2&0.6 \end{matrix} \right ) . \end{aligned}$$

The activation function and the time-varying delay function are taken to be the same as in Example 1. The initial conditions are chosen as \({x(t)}=(3e^{2t},3e^{2t},3e^{2t})^{T}\), \({y(t)}=(3\sin (2t),3\sin (2t),3\tanh (2t))^{T}\), \(I=(0,0,0)^{T}\). The relevant parameters are \(\mu_{1}=\mu_{2}=\mu _{3}=1\), \(G_{1}=G_{2}=G_{3}=1\), \(\rho =0.5\), \(\upsilon =2.0\).

The transition rates for each mode are given as follows.

For mode 1

$$\pi_{11}(h)\in (-1.22,-1.06),\qquad \pi_{12}(h)\in (1.04,1.24). $$

For mode 2

$$\pi_{21}(h)\in (0.27,0.45),\qquad \pi_{22}(h)\in (-0.40,-0.26). $$

Then we obtain the parameters \(\pi_{rk}\) and \(\kappa_{rk}\), where \(r, k \in S=\{1,2\}\):

$$\begin{aligned}& \pi_{11}=-1.2,\qquad \pi_{12}=1.2,\qquad \pi_{21}=0.30,\qquad \pi_{22}=-0.30, \\& \kappa_{11}=0.08,\qquad \kappa_{12}=0.10,\qquad \kappa_{21}=0.09,\qquad \kappa_{22}=0.07. \end{aligned}$$

Through simple computations, we have

$$\begin{aligned}& P_{1}= \left ( \begin{matrix} 7.3749&-0.2883&-0.2382 \\ -0.2883&6.5211&0.3268 \\ -0.2382&0.3268&6.0236 \end{matrix} \right ) , \\& P_{2}= \left ( \begin{matrix} 6.0354&-0.1984&-0.1371 \\ -0.1984&5.4230&0.1258 \\ -0.1371&0.1258&4.5223 \end{matrix} \right ) , \\& K_{1}= \left ( \begin{matrix} 0.55&0&0 \\ 0&0.37&0 \\ 0&0&0.48 \end{matrix} \right ) ,\qquad K_{2}= \left ( \begin{matrix} 0.25&0&0 \\ 0&0.35&0 \\ 0&0&0.35 \end{matrix} \right ) , \\& K= \left ( \begin{matrix} 0.3567&0&0 \\ 0&0.2462&0 \\ 0&0&0.6426 \end{matrix} \right ) . \end{aligned}$$

Meanwhile, the controller parameters are chosen as follows.

For mode 1

$$\begin{aligned}& \lambda_{11}= \left ( \begin{matrix} 1.6231&0&0 \\ 0&2.5401&0 \\ 0&0&1.5480 \end{matrix} \right ) ,\qquad \lambda_{12}= \left ( \begin{matrix} 2.3122&0&0 \\ 0&2.0512&0 \\ 0&0&2.5501 \end{matrix} \right ) , \\& \lambda_{13}= \left ( \begin{matrix} 0.5200&0&0 \\ 0&0.5200&0 \\ 0&0&0.5200 \end{matrix} \right ) ,\qquad \lambda_{14}= \left ( \begin{matrix} 1.2415&0&0 \\ 0&0.8575&0 \\ 0&0&1.5786 \end{matrix} \right ) , \\& \lambda_{15}= \left ( \begin{matrix} 0.1456&0&0 \\ 0&0.1456&0 \\ 0&0&0.1456 \end{matrix} \right ) ,\qquad \lambda_{16}= \left ( \begin{matrix} 0.3104&0&0 \\ 0&0.2145&0 \\ 0&0&0.3947 \end{matrix} \right ) . \end{aligned}$$

For mode 2

$$\begin{aligned}& \lambda_{21}= \left ( \begin{matrix} 2.5311&0&0 \\ 0&1.0120&0 \\ 0&0&1.4284 \end{matrix} \right ) ,\qquad \lambda_{22}= \left ( \begin{matrix} 2.2315&0&0 \\ 0&2.2413&0 \\ 0&0&2.5194 \end{matrix} \right ) , \\& \lambda_{23}= \left ( \begin{matrix} 0.4000&0&0 \\ 0&0.4000&0 \\ 0&0&0.4000 \end{matrix} \right ) ,\qquad \lambda_{24}= \left ( \begin{matrix} 1.2135&0&0 \\ 0&1.0574&0 \\ 0&0&1.3589 \end{matrix} \right ) , \\& \lambda_{25}= \left ( \begin{matrix} 0.1120&0&0 \\ 0&0.1120&0 \\ 0&0&0.1120 \end{matrix} \right ) ,\qquad \lambda_{26}= \left ( \begin{matrix} 0.3034&0&0 \\ 0&0.2644&0 \\ 0&0&0.3397 \end{matrix} \right ) , \end{aligned}$$

and the settling time \(T_{\max }\) is evaluated as 4.45.

Under controller (13), the first, second, and third state trajectories of systems (3) and (4) are plotted in Figs. 4, 5, and 6, respectively. Moreover, Fig. 7 shows the trajectories of the corresponding synchronization error system. The numerical simulation supports Theorem 3.2.

Figure 4

State trajectory of variables \(x_{1}(t)\) and \(y_{1}(t)\) under the controller (13)

Figure 5

State trajectory of variables \(x_{2}(t)\) and \(y_{2}(t)\) under the controller (13)

Figure 6

State trajectory of variables \(x_{3}(t)\) and \(y_{3}(t)\) under the controller (13)

Figure 7

The trajectories of the synchronization errors \(e_{1}(t)\), \(e_{2}(t)\), and \(e_{3}(t)\) under the controller (13)

5 Conclusion

In this paper, the fixed-time synchronization problem for semi-Markovian jumping neural networks with time-varying delays has been discussed. A novel state-feedback controller was designed that includes double-integral terms and time-varying delay terms. Based on the linear matrix inequality (LMI) technique and the Lyapunov functional method, effective conditions were established to guarantee the fixed-time synchronization of the neural networks. Moreover, the upper bound of the settling time can be explicitly evaluated. To a certain extent, the results obtained in this paper improve upon previous works. More complex settings, such as discontinuous activation functions, stochastic disturbances, and fixed-time synchronization of complex dynamical networks, will be considered in future research.